# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Example 6-1
# Principal component analysis of the scikit-learn digits dataset (a subset of
# the MNIST dataset)
from sklearn import datasets
from sklearn.decomposition import PCA
# + jupyter={"outputs_hidden": false}
# Load the data
digits_data = datasets.load_digits()
n = len(digits_data.images)
# + jupyter={"outputs_hidden": false}
# Each image is represented as an 8-by-8 array.
# Flatten this array as input to PCA.
image_data = digits_data.images.reshape((n, -1))
image_data.shape
# + jupyter={"outputs_hidden": false}
# Ground-truth label of the digit appearing in each image
labels = digits_data.target
labels
# + jupyter={"outputs_hidden": false}
# Fit a PCA transformer to the dataset.
# The number of components is automatically chosen to account for
# at least 80% of the total variance.
pca_transformer = PCA(n_components=0.8)
pca_images = pca_transformer.fit_transform(image_data)
pca_transformer.explained_variance_ratio_
# + jupyter={"outputs_hidden": false}
pca_transformer.explained_variance_ratio_[:3].sum()
# + jupyter={"outputs_hidden": false}
# Visualize the results
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
# + jupyter={"outputs_hidden": false}
fig = plt.figure(figsize=(10, 10))
ax = fig.add_subplot(111, projection='3d')
for i in range(100):
ax.scatter(pca_images[i, 0],
pca_images[i, 1],
pca_images[i, 2],
marker=r'${}$'.format(labels[i]), s=64)
ax.set_xlabel('Principal component 1')
ax.set_ylabel('Principal component 2')
ax.set_zlabel('Principal component 3')
plt.show()
# -
# source/06.01_PCA_on_MNIST.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# %matplotlib inline
#
# # Normalize Array
#
# This example will demonstrate how to perform a normalization or any custom
# mathematical operation on a single data array for an input data set.
#
# This filter allows the user to select an array from the input data set to be
# normalized. The filter will append another array to that data set for the
# output. The user can specify how to rename the array, can choose a
# multiplier, and can choose from two types of common normalizations:
# Feature Scaling and Standard Score.
#
# This example demos :class:`PVGeo.filters.NormalizeArray`
#
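# As a rough sketch of what these two normalization modes compute (illustrative
# only; the actual PVGeo implementation may differ in details such as the
# handling of constant arrays):

```python
import numpy as np

def feature_scale(arr, multiplier=1.0):
    # rescale values linearly into [0, 1], then apply an optional multiplier
    arr = np.asarray(arr, dtype=float)
    return multiplier * (arr - arr.min()) / (arr.max() - arr.min())

def standard_score(arr, multiplier=1.0):
    # z-score: subtract the mean, divide by the standard deviation
    arr = np.asarray(arr, dtype=float)
    return multiplier * (arr - arr.mean()) / arr.std()

values = np.array([0.0, 5.0, 10.0])
print(feature_scale(values))   # [0.  0.5 1. ]
print(standard_score(values))
```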
import numpy as np
import pyvista
from pyvista import examples
import PVGeo
from PVGeo.filters import NormalizeArray
# Create some input data. This can be any `vtkDataObject`.
#
#
mesh = examples.load_uniform()
title = 'Spatial Point Data'
mesh.plot(scalars=title)
# Apply the filter
f = NormalizeArray(normalization='feature_scale', new_name='foo')
output = f.apply(mesh, title)
print(output)
output.plot(scalars='foo')
# locale/examples/filters-general/normalize-array.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: 'Python 3.7.4 64-bit (''base'': conda)'
# name: python3
# ---
def my_sequence(arg1, arg2, n):
result = 0
for i in range(n):
result = result + arg1 + arg2
print(result)
my_sequence(2, 9, 4)
def my_sequence(arg1, arg2, n):
result = 0
i = 0
while i < n:
result = result + arg1 + arg2
i += 1
print(result)
my_sequence(2, 9, 4)
# +
def my_sequence(arg1, arg2, n):
result = 0
i = 0
while i < n:
result = result + arg1 + arg2
i += 1
yield result
my_gen = my_sequence(2, 9, 4)
# -
my_gen
next(my_gen)
next(my_gen)
next(my_gen)
next(my_gen)
"""
next(my_gen)
"""
print("Calling next(my_gen) a fifth time would raise StopIteration: the generator is exhausted")
# ### Determining the n-th Term of an Arithmetic Sequence and Arithmetic Series
# In this exercise, we will create finite and infinite arithmetic sequences using simple Python functions. We need to provide the first term of the sequence, a1, the common difference, d, and the length of the sequence, n, as inputs. We want to produce:
# <li>Only one term (the n-th term) of the sequence.
# <li>The complete sequence of numbers.
# <li>The sum of n terms of the arithmetic series, so we can compare it with the result given by the arithmetic series formula.
#
# +
# First, write a function that returns only the n-th term.
def a_n(a1, d, n):
an = a1 + (n-1) * d
return an
# -
# Test the code
a_n(4,3,10)
# Now, write a function that increments the first term, a1, by d, n times, and stores all the increments in a list
def a_seq(a1, d, n):
sequence = []
for _ in range(n):
sequence.append(a1)
a1 = a1+d
return sequence
# Test the code
# +
a_seq(4,3,10)
# From here, we can check that the sequence starts at 4, increases by 3, and has a length of 10
# -
# Now, let's build an infinite sequence using a generator
# +
"""def infinite_a_sequence(a1,d):
while True:
yield a1
a1 = a1 + d
for i in infinite_a_sequence(4,3):
print(i, end=" ")"""
print("This cell's loop is commented out: a for loop over an infinite generator never terminates")
# -
# Now let's calculate the sum of the terms of our sequence by calling the built-in sum() function
sum(a_seq(4,3,10))
# Implement the arithmetic series formula, Sn = n*(a1 + an)/2, so we can compare it with our result
# +
def a_series(a1, d, n):
result = n*(a1 + a_n(a1, d, n))/2
return result
a_series(4,3,10)
# -
# ## Geometric Sequences
# +
# As a first example, let's write a Python function that calculates the n-th
# term of a geometric sequence, based on the formula an = a * r**(n-1)
def n_geom_seq(r, a, n):
an = r**(n-1)*a
return an
# -
n_geom_seq(2,3,10)
# +
def sum_n(r, a, n):
sum_n = a*(1-r**n)/(1-r)
return sum_n
sum_n(2,3,10)
# -
# ### Writing a Function to Find the Next Term of the Sequence
# The number of bacteria grows geometrically from day to day. Given the bacteria population per day, up to day n, we will create a function that computes the population on day n+1.
# <li>1. Write a function that accepts a variable number of arguments (*args) and computes the ratio between each element and the preceding one (starting from the second element). Then, check whether all the ratios found are identical and return that unique value. If not, the function returns -1 (the sequence has no unique common ratio)
def find_ratio(*args):
arg0 = args[0]
ratios = []
for arg in args[1:]:
ratio = round(arg/arg0,8)
arg0 = arg
ratios.append(ratio)
if len(set(ratios)) == 1:
return ratio
else:
return -1
# <li> Now, check the function for two different cases. First, use the following sequence
find_ratio(1,2,4,8,16,32,64,128,256,512)
# <li>Then use a sequence that is not geometric
find_ratio(1,2,3)
# As the output above shows, the function returns the ratio if one exists, and returns -1 if the sequence is not geometric
# <li>Next, create a second function that reads a sequence and prints the next term that would occur. To do so, read in a list of numbers (comma-separated), find the ratio, and from that, predict the next term
def find_next(*args):
if find_ratio(*args) == -1:
raise ValueError("The sequence you entered isn't a geometric sequence. Please check the input!")
else:
return args[-1]*find_ratio(*args)
# Remember that we want to check whether this list of numbers has a common ratio by calling the <b>find_ratio()</b> function. If it doesn't, the function raises a ValueError; otherwise, it finds the next term and returns it.
# <li>Check if it works by using the following sequence
find_next(1,2,4)
find_next(1.36,0.85680,0.539784,0.34006392)
# In the first case, the obvious result, 8.0, is printed. In the second case, the less obvious next term of a decreasing geometric sequence is found and printed. To summarise, we can write a function that detects a geometric sequence, finds its ratio, and uses it to predict the next term of the sequence. This is very useful in real-life scenarios, such as when compound interest needs to be verified
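# For instance, yearly balances under compound interest form a geometric
# sequence with common ratio (1 + rate); a quick sketch with hypothetical
# numbers:

```python
# balances at the end of each year for 1000 invested at 5% compound interest
rate = 0.05
balances = [1000 * (1 + rate) ** year for year in range(6)]
# consecutive balances should share the single common ratio 1.05
ratios = {round(b2 / b1, 8) for b1, b2 in zip(balances, balances[1:])}
print(ratios)  # {1.05}
```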
# +
"""
find_next(1,2,4,3)
"""
print("find_next(1,2,4,3) raises a ValueError because the sequence has no common ratio")
# -
def factorial(n):
if n==0 or n==1 :
return 1
elif n==2:
return 2
else:
return n*factorial(n-1)
factorial(1)
factorial(2)
factorial(3)
factorial(4)
factorial(5)
factorial(100)
# ### Creating a Custom Recursive Sequence
# In this exercise, we will create a custom recursive sequence using the concepts we explained in the previous section. Given the first three elements of the sequence, Pn, that is, P1=1, P2=7, and P3=2, find the next seven terms of the sequence that is recursively defined via the relation: Pn+3 = (3*Pn+1 - Pn+2)/(Pn + 1).
# <li>First, we define a recursive Python function and implement the relation given above for the n-th element.
def p_n(n):
if n < 1:
return -1
elif n == 1:
return 1
elif n == 2:
return 7
elif n == 3:
return 2
else:
pn = (3*p_n(n-2) - p_n(n-1) )/ (p_n(n-3) + 1)
return pn
# Here, we start by defining the base cases, that is, the known results as given in the summary: if n=1, then P=1; if n=2, then P=7; and if n=3, then P=2. We also include the case where n<1. This is invalid input and, as usual, our function returns -1. This keeps the function bounded and protected from entering an infinite loop on invalid input. Once these cases have been handled, we define the recursive relation.
# <li>Now, let's test our function and print the first 10 values
for i in range(1,11):
print(p_n(i))
# As you can see, our function works: it returns the known values (P1 = 1, P2 = 7, and P3 = 2) of the sequence, along with the next terms (up to P_10) that we were looking for
# <li>As a bonus, let's now try to plot the results
# +
"""from matplotlib import pyplot as plt
plist = []
for i in range(1,40):
plist.append(p_n(i))
plt.plot(plist,linestyle = "--", marker='o',color='b')
plt.show()"""
print("This cell is skipped: the naive recursion takes a very long time for n near 40")
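# The slowdown comes from the plain recursion recomputing the same subproblems
# exponentially many times. A memoised variant (a sketch using
# functools.lru_cache; p_n_fast is our own name, not part of the original
# exercise) computes each value once, so plotting 40 terms is instant:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def p_n_fast(n):
    # same base cases and recurrence as p_n above, but each result is cached
    if n < 1:
        return -1
    if n == 1:
        return 1
    if n == 2:
        return 7
    if n == 3:
        return 2
    return (3 * p_n_fast(n - 2) - p_n_fast(n - 1)) / (p_n_fast(n - 3) + 1)

print([p_n_fast(i) for i in range(1, 6)])  # [1, 7, 2, 9.5, -0.4375]
```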
# +
from math import sqrt
def hypotenuse(a,b):
h = sqrt(a**2 + b**2)
return h
hypotenuse(3,4)
# -
# ### Plotting a Right-Angled Triangle
# In this exercise, we will write Python functions that plot a right-angled triangle from the given leg lengths, p1 and p2, which define the endpoints of the triangle's legs. We will also calculate the three trigonometric functions for either of the non-right angles. Let's plot the basic trigonometry functions
# +
import numpy as np
from matplotlib import pyplot as plt
# Now, write a function that returns the hypotenuse using the Pythagorean theorem
def find_hypotenuse(p1,p2):
p3 = round((p1**2 + p2**2)**0.5,8)
return p3
# -
# <li>Write another function that implements the relations for sine, cosine, and tangent. The inputs are the lengths of each side of the triangle
def find_trig(adjacent, opposite, hypotenuse):
return opposite/hypotenuse, adjacent/hypotenuse, opposite/adjacent
# <li>Write a function that visualises the triangle. For simplicity, place the right angle at the coordinates (0,0).
def plot_triangle(p1, p2, lw=5):
x = [0, p1, 0]
y = [0, 0, p2]
n = ['0', 'p1', 'p2']
fig, ax = plt.subplots(figsize=(p1,p2))
# plot points
ax.scatter(x, y, s=400, c="#8C4799", alpha=0.4)
ax.annotate(find_hypotenuse(p1,p2),(p1/2,p2/2))
# plot edges
ax.plot([0, p1], [0, 0], lw=lw, color='r')
ax.plot([0, 0], [0, p2], lw=lw, color='b')
ax.plot([0, p1], [p2, 0], lw=lw, color='y')
for i, txt in enumerate(n):
ax.annotate(txt, (x[i], y[i]), va='center')
# Here, we create two lists, x and y, which store the points, and one more list, n, for the labels. Then, we create a pyplot object that plots the points first, and then the edges. The final loop annotates our plot; that is, it adds the labels (from the list, n) next to our points
# <li>Next, we need to pick two points to define a triangle, then call our functions to display the visualisation
p01 = 4
p02 = 4
print(find_trig(p01,p02,find_hypotenuse(p01,p02)))
plot_triangle(p01,p02)
# <b>Correct!</b>
# <li>Finally, to get a general overview of sine and cosine, let's visualise them!
# +
x = np.linspace(0,10,200)
sin = np.sin(x) # sine values
cos = np.cos(x) # cosine values
plt.xticks([0, np.pi/2, np.pi, 3*np.pi/2, 2*np.pi, 5*np.pi/2, 3*np.pi],['0','','\u03C0','','2\u03C0','','3\u03C0'])
plt.plot(x, sin, marker='o', label='sin')
plt.plot(x, cos, marker='x', label='cos')
plt.legend(loc="upper left")
plt.ylim(-1.1, 1.6)
plt.show()
# -
# ### Finding the Shortest Way to the Treasure Using Inverse Trigonometric Functions
# In this activity, you are given a secret map that leads to B, the spot where the target treasure has long been buried. Assume you are at point A, with the following instructions: you must navigate 20 km south and then 33 km west to arrive at the treasure. However, the straight-line segment, AB, is the shortest path. You need to find the angle θ on the map so that your navigation is oriented correctly
# - First import <i>atan</i> and <i>pi</i>
from math import atan, pi
# - Find the tangent of theta using BC and AC
# +
AC = 33
BC = 20
tan_th = BC/AC
print(tan_th)
# -
# - Next, find the angle by using the inverse tangent
theta = atan(tan_th)
print(theta)
theta_degrees = theta * 180/pi
print(theta_degrees)
# The answer is about 31.22 degrees, which will orient our navigation correctly
# - Calculate the distance that we will travel along the path AB using Pythagorean Theorem
AB = (AC**2 + BC**2)**0.5
print(AB)
# The shortest distance is about 38.59 km.
# ### Finding the Optimal Distance from an Object
# You are visiting your local arena to watch your favourite show, and you are standing in the middle of the arena. Besides the main stage, there is also a display screen so people can watch without missing any detail of the show. The bottom of the screen stands 3 m above your eye level, and the screen itself is 7 m tall. The viewing angle is formed by looking at the bottom and the top of the screen. Find the optimal distance, x, between you and the screen so that the viewing angle is maximised
# This is a slightly involved problem that requires a bit of algebra, but we will break it into simple steps and explain the logic. First, notice how much sketching the problem guides us and helps us arrive at the solution. This seemingly complex real-world problem translates into a much more abstract and simpler geometric picture.
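# Before doing the algebra, we can brute-force the answer numerically: the
# viewing angle at distance x is atan(top/x) - atan(bottom/x), with bottom =
# 3 m and top = 3 + 7 = 10 m, and calculus puts the maximum at x =
# sqrt(3 * 10). A sketch (variable names are our own):

```python
import numpy as np

bottom, top = 3.0, 3.0 + 7.0            # screen edges above eye level, metres
x = np.linspace(0.1, 30.0, 10000)       # candidate distances to the screen
theta = np.arctan(top / x) - np.arctan(bottom / x)  # viewing angle at each x
best = x[np.argmax(theta)]
print(best)  # close to sqrt(30), about 5.48
```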
# ## Vectors
# +
import numpy as np
A = np.array([1,2,3]) # Vector A
B = np.array([4,5,6]) # Vector B
# Sum of A and B
print(A + B)
# -
# The difference
A - B
# Element-wise product
A * B
# Dot product
A.dot(B)
# Cross product
np.cross(A,B)
# Note that vector addition and the dot product are commutative operations, whereas the cross product is neither commutative nor associative. In fact, a x b equals -(b x a), which is why the cross product is called anticommutative.
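# A quick numerical check of these properties:

```python
import numpy as np

A = np.array([1, 2, 3])
B = np.array([4, 5, 6])
C = np.array([1, 0, 0])

# anticommutative: swapping the operands flips the sign
print(np.cross(A, B))   # [-3  6 -3]
print(np.cross(B, A))   # [ 3 -6  3]

# not associative: the grouping of the operands matters
print(np.cross(np.cross(A, B), C))  # (A x B) x C
print(np.cross(A, np.cross(B, C)))  # A x (B x C), a different vector
```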
# +
# Next we write a Python program that calculates the angle between two vectors with numpy!
import numpy as np
from math import acos
A = np.array([2,10,0])
B = np.array([9,1,-1])
print(A)
print(B)
print("-----------")
# find the norm (magnitude) of each vector
Amagn = np.sqrt(A.dot(A))
Bmagn = np.sqrt(B.dot(B))
print(Amagn)
print(Bmagn)
print("-----------")
# Finally, find the angle
theta = acos(A.dot(B) / (Amagn * Bmagn))
print(theta)
# -
# ### Visualizing Vectors
# +
# Here we will write a function that plots two vectors in 2D space.
import numpy as np
import matplotlib.pyplot as plt
# Create a function that admits two vectors as inputs, as lists, and plots them
def plot_vectors(vec1, vec2, isSum=False):
label1 = "A";label2 = "B";label3 = "A+B"
orig = [0.0, 0.0] # Position of origin axes
# -
# vector 1 and vector 2 each contain two real numbers, where each pair gives the coordinates of the endpoint (head) of the corresponding vector, while the origin is fixed at the coordinates (0,0). The labels themselves are stored in the label variables inside the function.
# - Next, we place the coordinates on a matplotlib.pyplot object
def plot_vectors(vec1, vec2, isSum = False):
label1 = "A"; label2 = "B"; label3 = "A+B"
orig = [0.0, 0.0] # position of origin of axes
ax = plt.axes()
ax.annotate(label1, [vec1[0]+0.5,vec1[1]+0.5] )
# shift position of label for better visibility
ax.annotate(label2, [vec2[0]+0.5,vec2[1]+0.5] )
if isSum:
vec3 = [vec1[0]+vec2[0], vec1[1]+vec2[1]]
# if isSum=True calculate the sum of the two vectors
ax.annotate(label3, [vec3[0]+0.5,vec3[1]+0.5] )
ax.arrow(*orig, *vec1, head_width=0.4, head_length=0.65)
ax.arrow(*orig, *vec2, head_width=0.4, head_length=0.65, \
ec='blue')
if isSum:
ax.arrow(*orig, *vec3, head_width=0.2, \
head_length=0.25, ec='yellow')
# plot the vector sum as well
plt.grid()
e=3
# shift limits by e for better visibility
plt.xlim(min(vec1[0],vec2[0],0)-e, max(vec1[0],\
vec2[0],0)+e)
# set plot limits to the min/max of coordinates
plt.ylim(min(vec1[1],vec2[1],0)-e, max(vec1[1],\
vec2[1],0)+e)
# so that all vectors are inside the plot area
plt.title('Vector sum',fontsize=14)
plt.show()
plt.close()
# - Now, we will write a function that calculates the angle between the two input vectors, as explained previously, with the help of the dot (inner) product
def find_angle(vec1, vec2, isRadians = True, isSum = False):
vec1 = np.array(vec1)
vec2 = np.array(vec2)
product12 = np.dot(vec1,vec2)
cos_theta = product12/(np.dot(vec1,vec1)**0.5 * \
np.dot(vec2,vec2)**0.5 )
cos_theta = round(cos_theta, 12)
theta = np.arccos(cos_theta)
plot_vectors(vec1, vec2, isSum=isSum)
if isRadians:
return theta
else:
return 180*theta/np.pi
# First, we map our input lists to numpy arrays so that we can use the methods of this module. We calculate the dot product (named product12) and then divide that by the product of the magnitude of vec1 with the magnitude of vec2. Recall that the magnitude of a vector is given by the square root (or **0.5) of the dot product with itself. As given by the definition of the dot product, we know that this quantity is the cos of the angle theta between the two vectors. Lastly, after rounding cos to avoid input errors in the next line, calculate theta by making use of the arccos method of numpy
# - Next, we combine the two functions to put them to work
# +
ve1 = [1,5]
ve2 = [5,-1]
find_angle(ve1,ve2,isRadians=False,isSum=True)
# -
ve1 = [1,5]
ve2 = [0.5,2.5]
find_angle(ve1, ve2, isRadians = False, isSum = True)
ve1 = [1,5]
ve2 = [-3,-5]
find_angle(ve1, ve2, isRadians = False, isSum = True)
# ## Complex Numbers
a = 1
b = -3
z = complex(a,b)
print(z)
print(z.real)
print(z.imag)
# +
def find_polar(z):
from math import asin
x = z.real
y = z.imag
r = (x**2 + y**2)**0.5
phi = asin(y/r)
return r, phi
find_polar(1-3j)
# -
def complex_operations2(c1, c2):
print('Addition =', c1 + c2)
print('Subtraction =', c1 - c2)
print('Multiplication =', c1 * c2)
print('Division =', c1 / c2)
# Now, let's try these functions for a generic pair of complex numbers, c1=10+2j/3 and c2=2.9+1j/3
complex_operations2(10+2j/3, 2.9+1j/3)
# Using purely imaginary number
complex_operations2(1, 1j)
import cmath
def complex_operations1(c):
modulus = abs(c)
phase = cmath.phase(c)
polar = cmath.polar(c)
print('Modulus =', modulus)
print('Phase =', phase)
print('Polar Coordinates =', polar)
print('Conjugate =',c.conjugate())
print('Rectangular Coordinates =', \
cmath.rect(modulus, phase))
complex_operations1(3+4j)
# ### Conditional Multiplication of Complex Numbers
# In this exercise, you will write a function that reads a complex number, c, and multiplies it by itself if the argument of the complex number is greater than zero, takes the square root of c if the argument is less than zero, and does nothing if the argument is equal to zero. Plot and discuss your findings
# - Import the necessary packages!
import cmath
from matplotlib import pyplot as plt
import warnings
warnings.filterwarnings("ignore")
# - Define a function to plot the vector of the input complex number
def plot_complex(c, color='b', label=None):
ax = plt.axes()
ax.arrow(0,0, c.real, c.imag, head_width=0.2, head_length=0.3, color=color)
ax.annotate(label, xy=(0.6*c.real, 1.15*c.imag))
plt.xlim(-3,3)
plt.ylim(-3,3)
plt.grid(True, which='major')
# - Next, create a function that reads the input, plot it by calling the function defined previously, and then investigates the different cases, depending on the phase of the input.
def mult_complex(c, label1='old', label2='new'):
phase = cmath.phase(c)
plot_complex(c, label=label1)
if phase == 0:
result = -1
elif phase < 0:
print('old phase:', phase)
result = cmath.sqrt(c)
print('new phase:', cmath.phase(result))
plot_complex(result, 'red', label=label2)
elif phase > 0:
print('old phase:', phase)
result = c*c
print('new phase:', cmath.phase(result))
plot_complex(result, 'red', label=label2)
return result
# Note that for negative phases, we take the square root of c, whereas for positive phases, we take the square of c
# - Now, transform a number that lies on the upper half of the complex plane
mult_complex(1 + 1.2j)
mult_complex(1-1.2j)
c0 = 1+1.2j
n = 0
while n < 6:
c0 = mult_complex(c0, None, str(n))
n+=1
# ### Calculating Your Retirement Plan Using Series
# In some countries, retirement plans are offered by certain companies. These plans allow you to contribute directly from your salary, which makes saving and investing for retirement more effective. Next, you are tasked with calculating and plotting your monthly returns based on the amount and duration of your contributions.<br><br>
#
# The plan accumulates over time, exactly like a geometric series. The model is like an investment, where you deposit money every month to collect later, with added value or interest. The main variables for calculating this return are your current balance, monthly contribution, employer contribution, retirement age, rate of return, life expectancy, and other fees.
# 1. Identify the variables of our problem. These will be the variables of our functions. Make sure you read through the activity description carefully and internalize what is known and what is to be calculated.
# 2. Identify the sequence and write one function that calculates the value of the retirement plan at some year, n. The function should admit the current balance, annual salary, year, n, and more as inputs and return a tuple of contribution, employer's match, and total retirement value at year n.
# 3. Identify the series and write one function that calculates the accumulated value of the retirement plan after n years. The present function should read the input, call the previous function that calculates the value of the plan at each year, and sum all the (per year) savings. For visualization purposes, the contributions (per year), employer match (per year), and total value (per year) should be returned as lists in a tuple.
# 4. Run the function for a variety of chosen values and ensure it runs properly.
# 5. Plot the results with Matplotlib
#
# - First, we need to identify the input variables and note that the problem boils down to calculating the n-term of a geometric sequence with a common ratio (1 + interest) and scale factor for the annual salary.
# annual_salary and the percentage, contrib, of it is what we contribute toward our plan. current_balance is the money that we have at year 0 and should be added to the total amount. annual_cap is the maximum percentage that we can contribute; any input value beyond that should be equal to contrib_cap. annual_salary_increase tells us how much we expect our salary to increase by per year. employer_match gives us the percentage amount the employer contributes to the plan (typically, this is between 0.5 and 1). Lastly, the current age, the duration of the plan in years, the life expectancy in years, and any other fees that the plan might incur are input variables. The per_month Boolean variable determines whether the output will be printed as a per-year or per-month amount of the return.
#
# - Create the first function, to calculate the n-th element of our series, which returns your contribution and the employer's match as a comma-separated tuple
def retirement_n(current_balance, annual_salary, \
annual_cap, n, contrib, \
annual_salary_increase, employer_match, \
match_cap, rate):
'''
return :: retirement amount at year n
'''
annual_salary_n = annual_salary*\
(1+annual_salary_increase)**n
your_contrib = contrib*annual_salary_n
your_contrib = min(your_contrib, annual_cap)
employer_contrib = contrib*annual_salary_n*employer_match
employer_contrib = min(employer_contrib,match_cap\
*annual_salary_n*employer_match)
contrib_total = your_contrib + employer_contrib
return your_contrib, employer_contrib, current_balance + contrib_total*(1+rate)**n
# As shown here, the current balance and annual salary are given in absolute values. We also define the contribution, the contribution cap (that is, the maximum allowed value), the annual salary increase, the employer match, and the rate of return as relative values (floats between 0 and 1). The annual cap is also meant to be read as an absolute value
# - Create a function to sum the individual amounts for each year and calculate the total value of the plan. It divides this number by the number of years over which the plan will be drawn, so the per-year return of the plan is what this function returns. As inputs, it should also read the current age, the plan duration, and the life expectancy.
def retirement_total(current_balance, annual_salary, \
annual_cap=18000, contrib=0.05, \
annual_salary_increase=0.02, employer_match=0.5, \
match_cap=0.06, rate=0.03, current_age=35, \
plan_years=35, life_expectancy=80, fees=0, \
per_month=False):
i = 0
result = 0
contrib_list = []; ematch_list = []; total_list = []
while i <= plan_years:
cn = retirement_n(current_balance=current_balance, \
annual_salary=annual_salary, \
annual_cap=annual_cap, n=i, \
contrib=contrib, match_cap=match_cap, \
annual_salary_increase=annual_salary_increase,\
employer_match=employer_match, rate=rate)
contrib_list.append(cn[0])
ematch_list.append(cn[1])
total_list.append(cn[2])
result = result + cn[2]
i+=1
result = result - fees
years_payback = life_expectancy - (current_age + plan_years)
if per_month:
months = 12
else:
months = 1
result = result / (years_payback*months)
print('You get back:',result)
return result, contrib_list, ematch_list, total_list
result, contrib, ematch, total = retirement_total(current_balance=1000, plan_years=35,\
current_age=36, annual_salary=40000, \
per_month=True)
from matplotlib import pyplot as plt
years = [i for i in range(len(total))]
plt.plot(years, total,'-o',color='b')
width=0.85
p1 = plt.bar(years, total, width=width)
p2 = plt.bar(years, contrib, width=width)
p3 = plt.bar(years, ematch, width=width)
plt.xlabel('Years')
plt.ylabel('Return')
plt.title('Retirement plan evolution')
plt.legend((p1[0], p2[0], p3[0]), ('Investment returns','Contributions','Employer match'))
plt.show()
# Work_5/work_5-1.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (fractals)
# language: python
# name: fractals
# ---
# ### Generating the distance from the complex origin.
import math
def generate_r(p,c,i):
'''
outputs a radius, r, from the origin of the complex plane.
p: starting point as a complex number
c: constant complex number
i: number of iterations
'''
z = p
iterations = 0
r = 0
while iterations < i:
z = z**2 + c
r = math.sqrt(z.real**2 + z.imag**2)
iterations += 1
return round(r,2)
# +
c = -0.7 + 0.27j
p = 0.5 + 1j
i = 2
generate_r(p,c,i)
# -
# ### Function from blog post part 1
# +
import matplotlib.pyplot as plt
import numpy as np
def julia_set(w, h, c = -0.7+ 0.27j, zoom=1, niter=256):
""" A julia set of geometry (width x height) and iterations 'niter' """
# Why (hxw) ? Because numpy creates a matrix as row x column
# and height represents the y co-ordinate or rows and
# width represents the x co-ordinate or columns.
pixels = np.arange(w*h,dtype=np.uint16).reshape(h, w)
for x in range(w):
for y in range(h):
# calculate the initial real and imaginary part of z,
# based on the pixel location and zoom and position values
zx = 1.5*(x - w/2)/(0.5*zoom*w)
zy = 1.0*(y - h/2)/(0.5*zoom*h)
for i in range(niter):
radius_sqr = zx*zx + zy*zy
# Iterate till the point is outside
# the circle with radius 2.
if radius_sqr > 4: break
# Calculate new positions
zy,zx = 2.0*zx*zy + c.imag, zx*zx - zy*zy + c.real
color = (i >> 21) + (i >> 10) + i*8
pixels[y,x] = color
# display the created fractal
plt.imshow(pixels)
plt.show()
# -
julia_set(1024,768,zoom=4,niter = 256)
arr = np.arange(1024*768,dtype=np.uint16)
print(arr)
print(len(arr))
arr2 = np.arange(1024*768,dtype=np.uint16).reshape(768,1024)
print(arr2)
print(len(arr2))
arr = np.arange(1024*768)
print(arr)
print(len(arr))
arr2 = np.arange(1024*768).reshape(768,1024)
print(arr2)
print(len(arr2))
# fractal-exploration.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Shear Wave Splitting for the Novice
#
# When a shear wave encounters an anisotropic medium, it splits its energy into orthogonally polarised wave sheets. The effect is easily measured on waveforms with -- at least -- 2-component data (provided those 2 components are orthogonal to the wavefront vector, which can be different from the ray vector). The key parameter is the polarisation of the wave fronts (captured by the parameter, $\phi$, which can be defined as a vector in 3 dimensions, but in practice is treated as an angle). This angle is measured relative to some well-defined direction, e.g. North, or upwards, in the plane normal to the wave propagation direction.
#
#
# ## Splitting the signal
# Let's start with two components. Put a pulse of energy and some noise on these components, with a polarisation of 40 degrees. Note the pulse of energy is centred in the middle of the trace -- this is deliberate -- it is a feature of this software that analysis is always done at the centre of traces.
# +
import sys
sys.path.append("..")
import splitwavepy as sw
import matplotlib.pyplot as plt
import numpy as np
data = sw.Pair(noise=0.05,pol=40,delta=0.1)
data.plot()
# -
# Now let's add a bit of splitting. Note, this shortens trace length slightly. And the pulse is still at the centre.
data.split(40,1.6)
data.plot()
# Measuring shear wave splitting involves searching for the splitting parameters that, when removed from the data, best linearise the particle motion. We know the splitting parameters so no need to search. Let's just confirm that when we undo the splitting we get linearised particle motion. Again, this shortens the trace, and the pulse is still at the centre.
data.unsplit(80,1.6)
data.plot()
# ## The window
#
# The window should capture the power in the pulse of arriving energy in such a way as to maximise the signal to noise ratio. It should also be wide enough to account for pulse broadening when splitting operators are applied to the data.
#
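# The tukey argument used below presumably controls a cosine taper at the
# window edges. A minimal sketch of such a tapered (Tukey) window, assuming
# the standard definition (this helper is illustrative, not part of
# splitwavepy):

```python
import numpy as np

def tukey_window(n, alpha=0.1):
    # tapered cosine window: flat centre, cosine-shaped ramps at both edges;
    # alpha is the fraction of the window spent inside the tapered regions
    w = np.ones(n)
    if alpha <= 0:
        return w  # no taper requested: a plain boxcar
    taper = int(np.floor(alpha * (n - 1) / 2))
    for i in range(taper + 1):
        # cosine ramp from 0 up to 1 over the leading alpha*(n-1)/2 samples
        w[i] = 0.5 * (1 + np.cos(np.pi * (2 * i / (alpha * (n - 1)) - 1)))
        w[n - 1 - i] = w[i]  # mirror onto the trailing edge
    return w

w = tukey_window(101, alpha=0.2)
```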
# +
# Let's start afresh, and this time put the splitting on straight away.
data = sw.Pair(delta=0.1,noise=0.01,pol=40,fast=80,lag=1.2)
# plot power in signal
fig, ax1 = plt.subplots()
ax1.plot(data.t(),data.power())
# generate a window
window = data.window(25,12,tukey=0.1)
# window = sw.Window(data.centre(),150)
ax2 = ax1.twinx()
ax2.plot(data.t(),window.asarray(data.t().size),'r')
plt.show()
data.plot(window=window)
# +
# Now repeat but this time apply loads of splitting and see the energy broaden
data = sw.Pair(delta=0.1,noise=0.01,pol=40,fast=80,lag=5.2)
# plot power in signal
fig, ax1 = plt.subplots()
ax1.plot(data.t(),data.power())
# generate a window
window = data.window(25,12,tukey=0.1)
# window = sw.Window(data.centre(),150)
ax2 = ax1.twinx()
ax2.plot(data.t(),window.asarray(data.t().size),'r')
plt.show()
data.plot(window=window)
# large window
largewindow = data.window(23,24,tukey=0.1)
data.plot(window=largewindow)
# -
# ## The measurement
#
#
# +
# sparse search
tlags = np.linspace(0,7.0,60)
degs = np.linspace(-90,90,60)
M = sw.EigenM(tlags=tlags,degs=degs,noise=0.03,fast=112,lag=5.3,delta=0.2)
M.plot()
# dense search
# tlags = np.linspace(0.,7.0,200)
# degs = np.linspace(0,180,200)
# M = sw.EigenM(M.data,tlags=tlags,degs=degs)
# M.plot()
# -
M.tlags
M = sw.EigenM(delta=0.1,noise=0.02,fast=60,lag=1.3)
M.plot()
np.linspace(0,0.5,15)
p = sw.Pair(delta=0.1,pol=30,fast=30,lag=1.2,noise=0.01)
p.plot()
p.angle
| devel/Splitting_for_novices.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
# Importing experiment configs
import yaml
from box import Box
from comet_ml import Experiment
with open("configs/configs_abideI_n4.yml", "r") as ymlfile:
cfg = Box(yaml.safe_load(ymlfile))
with open("configs/general_args.yml", "r") as ymlfile:
general_args = yaml.safe_load(ymlfile)
# + colab={"base_uri": "https://localhost:8080/"} id="Ffk_08tbgIoA" outputId="bc619b1c-3ad4-4eb0-9eaf-b68d9a9ca778"
# Create an experiment with your api key
experiment = Experiment(
api_key = cfg.logging.comet_experiment_api,
project_name = cfg.logging.comet_project_name,
workspace = cfg.logging.comet_workspace,
)
experiment_name = f"aenc_{cfg.params.args.conv_model[-1]}_{cfg.params.args.learning_rate}_{cfg.params.n_sites}_sites/seed_{cfg.params.r_seed}"
experiment.set_name(experiment_name)
experiment.log_dataset_info(cfg.params.dataset_info)
# +
# importing utils for training, logging and viz
from models import *
from viz import *
from utils import *
from train import *
# +
# importing data
import torch
save_folder = cfg.logging.weights_save_folder
selected_all_sum_tensor = torch.load(cfg.inputs.data)
if len(selected_all_sum_tensor.size()) < 5:
selected_all_sum_tensor = selected_all_sum_tensor.unsqueeze(1).float()
selected_targets_tensor = torch.load(cfg.inputs.targets)
selected_sites_tensor = torch.load(cfg.inputs.sites)
selected_sexes_tensor = torch.load(cfg.inputs.sex)
selected_ages_tensor = torch.load(cfg.inputs.age)
# +
# defining a training dataset with the n_sites parameter from configs
n_sites = cfg.params.n_sites
site_codes = cfg.params.site_codes
code2site = {i : s for s, i in site_codes.items()}
site_labels = [code2site[i] for i in range(1, n_sites + 1)]
# manually cropping data to cubic format
### this should be manually corrected for each dataset used
img_crop = range(cfg.params.img_crop[0], cfg.params.img_crop[1])
selected_all_sum_tensor = selected_all_sum_tensor[..., :, img_crop, :]
selected_all_sum_tensor = torch.nn.functional.pad(selected_all_sum_tensor, pad=tuple(cfg.params.pad))
# assert that the input shape matches the configured image size
assert tuple(selected_all_sum_tensor.size()[2:]) == tuple(cfg.params.img_size), f"Input shape is {selected_all_sum_tensor.size()[2:]}, doesn't match {cfg.params.img_size}"
# +
# Encoding site labels as one-hot vectors for use while training
selected_sites_tensor_ohe = []
for v in selected_sites_tensor.unique():
selected_sites_tensor_ohe.append((selected_sites_tensor == v).float())
selected_sites_tensor_ohe = torch.stack(selected_sites_tensor_ohe, dim=-1)
selected_sexes_tensor_ohe = []
for v in selected_sexes_tensor.unique():
selected_sexes_tensor_ohe.append((selected_sexes_tensor == v).float())
selected_sexes_tensor_ohe = torch.stack(selected_sexes_tensor_ohe, dim=-1)
# Choosing top `n_sites` by size in dataset
selected_attrs_tensor = selected_sites_tensor_ohe[:, :n_sites]
selected_idx = selected_attrs_tensor.sum(axis=1).byte()
# Normalizing input vector prior to training
selected_all_sum_tensor = selected_all_sum_tensor - selected_all_sum_tensor.mean()
selected_all_sum_tensor = selected_all_sum_tensor/selected_all_sum_tensor.std()
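# The one-hot loops earlier in this cell can also be written with `torch.nn.functional.one_hot`, assuming the codes are consecutive integers starting at 0 (the `.unique()`-based loops make no such assumption, so a remapping step may be needed first):

```python
import torch
import torch.nn.functional as F

codes = torch.tensor([0, 2, 1, 0, 2])          # toy site codes
ohe = F.one_hot(codes, num_classes=3).float()  # shape (5, 3)
print(ohe)
```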
# +
# defining a random state and args
import random, numpy
r_seed = cfg.params.r_seed
random.seed(r_seed)
numpy.random.seed(r_seed)
torch.manual_seed(r_seed)
torch.cuda.manual_seed(r_seed)
args = {k : v for k, v in general_args.items()}
args.update(cfg.params.args)
# Updating variables to tuples and lists, after *yml read
args.update({'img_shape': tuple(args['img_shape'])})
args.update({'noises': list(np.zeros_like(
list(args['conv_model'])))})
args.update({'n_attrs_outputs': [selected_attrs_tensor.size(1)]})
# args.update({'n_epochs': 10})
experiment.log_parameters(args)
# + colab={"base_uri": "https://localhost:8080/"} id="s0I6YkxuKgyf" outputId="c9f949fc-e063-4f69-92e0-2b04618c9f4d"
# writing dataloaders for training on selected data
tensor_dataset = data.TensorDataset(selected_all_sum_tensor[selected_idx],
selected_targets_tensor[selected_idx],
selected_attrs_tensor[selected_idx])
idx = np.arange(len(tensor_dataset))
train_idx = idx
val_idx = idx
train_loader = torch.utils.data.DataLoader(
data.Subset(tensor_dataset, train_idx), batch_size=args["batch_size"], shuffle=True)
val_loader = torch.utils.data.DataLoader(
data.Subset(tensor_dataset, val_idx), batch_size=args["batch_size"], shuffle=False)
# set target for domain classification
AE, D, AE_opt, D_opt = create_model(args)
# -
# training
train_stats = train_fadernet_sched(
train_loader, val_loader, args,
AE, D, AE_opt, D_opt, device,
AE_crit=nn.MSELoss(),
D_crit=nn.BCEWithLogitsLoss(),
score_freq=cfg.params.save_vis_freq,
vis_freq=cfg.params.save_vis_freq,
site_labels=site_labels,
save_freq=cfg.params.save_vis_freq,
experiment_name = experiment_name,
experiment = experiment,
save_folder= cfg.logging.weights_save_folder)
| .ipynb_checkpoints/[smri_n4_64][aenc_all]_abide_sex-checkpoint.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# <img src="images/logodwengo.png" alt="Banner" width="150"/>
# <div>
# <font color=#690027 markdown="1">
# <h1>APPLICATION: SCATTER PLOT - REGRESSION</h1>
# <h2>OLD FAITHFUL GEYSER - EXERCISE</h2>
# </font>
# </div>
# <div class="alert alert-box alert-success">
# In this notebook you will determine a regression line for the data on the activity of the Old Faithful geyser. A regression line is a straight line that best fits the data and reflects any trend contained in the data.
# </div>
# ### Importing the required modules
# +
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
from scipy.optimize import curve_fit # for regression
# -
# <div>
# <font color=#690027 markdown="1">
# <h2>1. Reading and visualising the data</h2>
# </font>
# </div>
# The Old Faithful geyser in Yellowstone National Park in the US erupts at fairly regular intervals. You will read in a csv file containing the waiting time between eruptions and the duration of the eruptions in minutes for 272 observations [1]. You will visualise these data. <br>
# The file can be found in the `data` folder: `oldfaithfulgeiser.csv`.
# ### Exercise
# Read in the file and display the scatter plot.
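# One possible solution sketch. The column names and the small inline table are assumptions standing in for the 272 rows of `oldfaithfulgeiser.csv`; in the notebook you would use `pd.read_csv("data/oldfaithfulgeiser.csv")` instead:

```python
import pandas as pd
import matplotlib.pyplot as plt

# Stand-in for: geiser = pd.read_csv("data/oldfaithfulgeiser.csv")
geiser = pd.DataFrame({"duration": [3.6, 1.8, 3.3, 2.3, 4.5],
                       "waiting": [79, 54, 74, 62, 85]})

plt.scatter(geiser["duration"], geiser["waiting"], marker=".")
plt.xlabel("eruption duration (min)")
plt.ylabel("waiting time (min)")
plt.show()
```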
# <div>
# <font color=#690027 markdown="1">
# <h2>2. Linear regression and graphical representation</h2>
# </font>
# </div>
# ### Task
# Write a Python script that, when executed, shows the regression line that fits the scatter plot.
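# A sketch of such a script using `curve_fit`; the points below are synthetic stand-ins for the geyser data:

```python
import numpy as np
from scipy.optimize import curve_fit
import matplotlib.pyplot as plt

def line(x, a, b):
    # linear model y = a*x + b
    return a * x + b

x = np.array([1.8, 2.3, 3.3, 3.6, 4.5])   # eruption duration (min)
y = np.array([54, 62, 74, 79, 85])        # waiting time (min)

popt, pcov = curve_fit(line, x, y)
a, b = popt
print(f"regression line: y = {a:.2f} x + {b:.2f}")

plt.scatter(x, y, marker=".")
plt.plot(x, line(x, a, b), color="green")
plt.show()
```

# The printed coefficients also answer the next section's question about the equation of the regression line.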
# <div>
# <font color=#690027 markdown="1">
# <h2>3. Equation of the regression line</h2>
# </font>
# </div>
# ### Task
# - Write a Python script to determine the equation of the regression line.
# - Run the script.
# ### Reference list
# [1] <NAME>. (2013). All of Statistics. https://www.stat.cmu.edu/~larry/all-of-statistics/<br>
# <img src="images/cclic.png" alt="Banner" align="left" width="100"/><br><br>
# Notebook Python in mathematics, see Computational Thinking - Programming in Python from <a href="http://www.aiopschool.be">AI Op School</a>, by <NAME> & <NAME>, licensed under a <a href="http://creativecommons.org/licenses/by-nc-sa/4.0/">Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International licence</a>.
| Wiskunde/Spreidingsdiagram/1200_RegressieToepassingGeiser.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import pickle
import subprocess
import numpy as np
import matplotlib
import matplotlib.pyplot as plt
from mpl_toolkits.axes_grid1 import make_axes_locatable
matplotlib.rc('pdf', fonttype=42) # Enable pdf compatible fonts
import scipy.stats
from sklearn.neighbors import KernelDensity
from sklearn.mixture import GaussianMixture
import Bio.SeqIO
# +
# Sample information
ortholog = 'Cas13bt-1'
samples = ['Bt1_BR1_S4_R1_001spacers.p', 'Bt1_BR2_S5_R1_001spacers.p', 'Bt1_BR3_S6_R1_001spacers.p', 'Bt1ctrl_BR1_S1_R1_001spacers.p', 'Bt1ctrl_BR2_S2_R1_001spacers.p', 'Bt1ctrl_BR3_S3_R1_001spacers.p', 'Bt1_input_S7_R1_001spacers.p']
samples_to_names = {samples[0] : 'Cas13bt1 Experiment Rep 1',
samples[1] : 'Cas13bt1 Experiment Rep 2',
samples[2] : 'Cas13bt1 Experiment Rep 3',
samples[3] : 'Cas13bt1 Control Rep 1',
samples[4] : 'Cas13bt1 Control Rep 2',
samples[5] : 'Cas13bt1 Control Rep 3',
samples[6] : 'Cas13bt1 Input Library',
}
# Map sample ids to sample names and filepaths
sample_pair = (samples[0], samples[2])
pair_names = [samples_to_names[sample_pair[0]], samples_to_names[sample_pair[1]]]
# Load non-targeting spacers list
nt_spacers = []
with open('nt_spacers.csv', 'r') as f:
for line in f:
nt_spacers.append(line.strip())
# -
# Obtain the experiment condition sample information
e_N_avg = {}
e_Ns = []
es = []
for e_name in samples[0:3]:
e = pickle.load(open(e_name, 'rb'), encoding='latin1')
# Get sum of all read counts
e_sum = sum([v for v in e.values()])
# Normalize individual spacer count by sum of all read counts in sample
e_N = {u : float(v)/e_sum for u,v in e.items()}
e_Ns.append(e_N)
es.append(e)
for u in e_Ns[0]:
e_N_avg[u] = ((e_Ns[0][u], e_Ns[1][u], e_Ns[2][u]), (es[0][u], es[1][u], es[2][u]))
# Obtain the control condition sample information
c_N_avg = {}
c_Ns = []
cs = []
for c_name in samples[3:6]:
c = pickle.load(open(c_name, 'rb'), encoding='latin1')
# Get sum of all read counts
c_sum = sum([v for v in c.values()])
# Normalize individual spacer count by sum of all read counts in sample
c_N = {u : float(v)/c_sum for u,v in c.items()}
c_Ns.append(c_N)
cs.append(c)
for u in c_Ns[0]:
c_N_avg[u] = ((c_Ns[0][u], c_Ns[1][u], c_Ns[2][u]), (cs[0][u], cs[1][u], cs[2][u]))
# Compute the ratios between the average experimental condition abundance and average control condition abundance
ratios = {}
for u in c_N_avg:
# Keep track of total read counts across replicates
c_total_count = np.sum(c_N_avg[u][1])
e_total_count = np.sum(e_N_avg[u][1])
c_abundance = np.average(c_N_avg[u][0])
e_abundance = np.average(e_N_avg[u][0])
# Use 1e-9 to avoid division by near zero
ratios[u] = (c_total_count, e_total_count, c_abundance, e_abundance, e_abundance / (c_abundance+1e-9))
# +
eps = 1e-12 # Additive constant to avoid division by small numbers
min_read_count = 100 # Minimum read count for analysis
sigma = 5 # Number of standard deviations away from mean to establish significance
# Obtain targeting and non-targeting experiment (Y) vs control (X) average abundances.
X,Y = zip(*[(v[2]+eps, v[3]+eps) for u,v in ratios.items() if v[0] >= min_read_count and not u in nt_spacers])
X_nt, Y_nt = zip(*[(v[2]+eps, v[3]+eps) for u,v in ratios.items() if v[0] >= min_read_count and u in nt_spacers])
# Obtain mean, median and all log depletion ratios of non-targeting spacers
mean = np.mean(np.array(np.log10(Y_nt)) - np.log10(np.array(X_nt)))
median = np.median(np.array(np.log10(Y_nt)) - np.log10(np.array(X_nt)))
# Get the spacers depletion ratios of the non-targets
dep = np.log10(np.array(Y_nt)) - np.log10(np.array(X_nt))
# Perform fit on two component Gaussian mixture model
x_d = np.linspace(-4,2, 200)
m = GaussianMixture(n_components=2)
m.fit(dep[:, None])
m_m = m.means_[0]
m_std = np.sqrt(m.covariances_[0])
logprob1 = scipy.stats.norm(m_m,m_std).logpdf(x_d)[0,:]
m_m = m.means_[1]
m_std = np.sqrt(m.covariances_[1])
logprob2 = scipy.stats.norm(m_m,m_std).logpdf(x_d)[0,:]
hi_idx = np.argsort(m.means_.flatten())[-1]
print(m.means_)
high_mean = m.means_[hi_idx]
high_std = np.sqrt(m.covariances_[hi_idx])
# Renormalize targeting and non-targeting conditions by the control median (which is in log10 space)
# Normalization parameter for all experimental conditions (to keep depletions of non-target with no offtarget
# centered at 1)
median = high_mean
Y = np.array(Y) / np.power(10, median)
Y_nt = np.array(Y_nt) / np.power(10, median)
# Redo the GMM fit using the renormalized data
dep = np.log10(np.array(Y_nt)) - np.log10(np.array(X_nt))
x_d = np.linspace(-4,2, 200)
m = GaussianMixture(n_components=2)
m.fit(dep[:, None])
m_m = m.means_[0]
m_std = np.sqrt(m.covariances_[0])
logprob1 = scipy.stats.norm(m_m,m_std).logpdf(x_d)[0,:]
m_m = m.means_[1]
m_std = np.sqrt(m.covariances_[1])
logprob2 = scipy.stats.norm(m_m,m_std).logpdf(x_d)[0,:]
hi_idx = np.argsort(m.means_.flatten())[-1]
print(m.means_)
high_mean = m.means_[hi_idx]
high_std = np.sqrt(m.covariances_[hi_idx])
depletion_thresh = float(np.power(10, high_mean - sigma*high_std))
print(depletion_thresh)
# -
import json
"""
with open('./randoms.json', 'w') as f:
data = {'median' : float(median), 'high_mean' : float(high_mean),
'high_std' : float(high_std), 'depletion_thresh' : float(depletion_thresh)}
json.dump(data, f, sort_keys=True, indent=4)
"""
with open('./randoms.json', 'r') as f:
d = json.load(f)
print(d)
plt.figure(figsize=(3,2))
plt.axvspan(np.log10(depletion_thresh),high_mean+10,color='k',alpha=0.03)
plt.hist(dep,density=True, bins=100, color=[193/255,195/255,200/255],label='_nolegend_')
plt.plot(x_d, m.weights_[0]*np.exp(logprob1), color=[241/255,97/255,121/255], lw=2)
plt.plot(x_d, m.weights_[1]*np.exp(logprob2), color=[74/255,121/255,188/255], lw=2)
plt.axvline(np.log10(depletion_thresh), c='k',label='_nolegend_', lw=0.5)
plt.axvline(high_mean, c='k', ls='--', lw=1)
plt.xlim([-2,1])
plt.ylabel('Normalized counts')
plt.xlabel('NT spacer abundance')
plt.legend(['NT without off-target','NT with Off-target','Baseline mean',r'5$\sigma$ of baseline'], prop={'size': 6.5})
ax = plt.gca()
for item in ([] +
ax.get_xticklabels() + ax.get_yticklabels()):
item.set_fontsize(6.5)
for item in [ax.title, ax.xaxis.label, ax.yaxis.label]:
item.set_fontsize(7)
plt.savefig('./generated_data_and_data/'+ortholog+' nt GMM.pdf')
# +
bins = np.linspace(-2,1,100)
plt.figure(figsize=(3,2))
u = np.histogram(np.log10(np.array(Y_nt) / np.array(X_nt)), bins=bins, density=True)
plt.fill_between(u[1][1:],u[0], step="pre", color=[[255/255, 81/255, 101/255]], lw=0, alpha=0.5)
u = np.histogram(np.log10(np.array(Y) / np.array(X)), bins=bins, density=True)
plt.fill_between(u[1][1:],u[0], step="pre", color=[[0.2, 0.25, 0.3]], lw=0, alpha=0.5)
plt.xlim([-2, 1])
plt.axvline(np.log10(depletion_thresh), linestyle='-', color=[0.05, 0.05, 0.1],lw=1)
plt.axvline(high_mean, c='k', ls='--', lw=1)
plt.xlabel('Log Depletion Ratio')
plt.ylabel('Normalized Counts')
plt.legend(['5$\sigma$','GMM Mean','NT', 'EG'], loc='upper left', frameon=False, prop={'size' : 6.5})
plt.ylim([0,2.7])
ax = plt.gca()
for item in ([] +
ax.get_xticklabels() + ax.get_yticklabels()):
item.set_fontsize(6.5)
for item in [ax.title, ax.xaxis.label, ax.yaxis.label]:
item.set_fontsize(7)
plt.savefig('./generated_data_and_data/'+ortholog + ' Depletion Ratios.pdf')
plt.show()
# +
# Plot experimental vs control abundance
fig = plt.figure(figsize=(5,3.2))
ax = plt.gca()
x_l, x_r = min([min(X), min(X_nt)])/10*9, 3e-4
X_line = np.linspace(x_l, x_r, 1000)
Y_line = depletion_thresh*X_line
Y_middle_line = X_line
plt.plot(X_line, Y_middle_line, '--', c=[0.2, 0.2, 0.2], linewidth=1, zorder=3)
plt.plot(X_line, Y_line, '-', c=[0.2, 0.2, 0.2], linewidth=1, zorder=3)
plt.scatter(X, Y, color=[[255/255, 81/255, 101/255]], marker='o', s=5, alpha=0.1, rasterized=True,lw=None)
plt.scatter(X_nt, Y_nt, color='k', marker='o', s=5, alpha=0.5, rasterized=True, lw=None)
plt.xlim([x_l, x_r])
plt.ylim([1e-8, 1e-3])
ax.set_yscale('log')
ax.set_xscale('log')
plt.title(ortholog + ' Depletion')
plt.xlabel('Average control spacer abundance')
plt.ylabel('Average adjusted \nexperimental spacer abundance')
plt.legend(['x=y','5$\sigma$', 'EG', 'NT'], loc='lower right', frameon=False,prop={'size' : 6.5})
[i.set_linewidth(1) for i in ax.spines.values()]
ax.spines['right'].set_visible(False)
ax.spines['top'].set_visible(False)
plt.gcf().subplots_adjust(bottom=0.15, left=0.2)
ax = plt.gca()
for item in ([] +
ax.get_xticklabels() + ax.get_yticklabels()):
item.set_fontsize(6.5)
for item in [ax.title, ax.xaxis.label, ax.yaxis.label]:
item.set_fontsize(7)
plt.tight_layout()
plt.savefig('./generated_data_and_data/'+ortholog + ' Average Abundance Depletion.pdf', dpi=900)
plt.show()
# -
combined_rep = [(u, (v[3]+eps) / (v[2]+eps) / np.power(10, median), v[0], v[1]) for u,v in ratios.items() if v[0] >= min_read_count and (v[3]+eps) / (v[2]+eps) / np.power(10, median) < depletion_thresh]
all_rep = [(u, (v[3]+eps) / (v[2]+eps) / np.power(10, median), v[0], v[1]) for u,v in ratios.items() if v[0] >= min_read_count]
non_depleted_rep = [(u, (v[3]+eps) / (v[2]+eps) / np.power(10, median), v[0], v[1]) for u,v in ratios.items() if v[0] >= min_read_count and (v[3]+eps) / (v[2]+eps) / np.power(10, median) >= depletion_thresh]
len(combined_rep), len(all_rep), len(non_depleted_rep)
# Index of the CDS sequences used in the experiment for targeting
cds_ids_in_exp = [
169,
46,
336,
1222,
793,
157,
156,
3994,
136,
1471,
1797,
2695,
2906,
2882,
3984,
3236,
2608,
2376,
3780,
179,
159,
28,
1018,
502,
3495,
2824,
448,
4592,
2903,
4399,
1056,
2685,
3751,
155,
1464,
1560,
2164,
1223,
1981,
2119,
447,
1484,
442,
3319,
2130,
]
# +
# Get the e coli transcripts (CDS)
spacer_len = 30
records = list(Bio.SeqIO.parse(open('e_coli.gbk'), 'genbank'))
genome_seq = records[0].seq
cds_orig = []
flank = 500
for i,feature in enumerate(records[0].features):
if feature.type != 'CDS':
continue
loc = feature.location
feature_seq = genome_seq[loc.start-flank:loc.end+flank]
# Get the sense strand
if feature.strand == -1:
feature_seq = Bio.Seq.reverse_complement(feature_seq)
cds_orig.append((feature.qualifiers['product'][0], feature_seq))
# Filter cds to only those used in the experiment
cds = [cds_orig[i] for i in cds_ids_in_exp]
# -
spacer_to_target_map = {}
for i,(u,v) in enumerate(ratios.items()):
if v[0] < min_read_count:
continue
search = Bio.Seq.reverse_complement(u)
s = ''
coords = (None,None)
for j,(name, seq) in enumerate(cds):
idx = seq.find(search)
if idx >= 6:
s = seq[idx-6:idx+spacer_len+6]
coords = (idx, (idx - flank) / (len(seq)-2*flank), 1)
# Add to CDS
break
if s == '':
continue
if len(s) < spacer_len+12:
continue
spacer_to_target_map[u] = s
# +
# Generate weblogos (requires weblogo to be installed)
with open('./generated_data_and_data/for_logo_control.fa', 'w') as f:
for i,(v) in enumerate(all_rep):
if not v[0] in spacer_to_target_map:
continue
f.write('>'+str(i)+'\n')
f.write(str(spacer_to_target_map[v[0]]).replace('T','U')+'\n')
subprocess.call(['weblogo',
'-s', 'small',
'-n', '42',
'-S', '0.05',
'--ticmarks','0.05',
'-W', '4.8',
'-F','pdf',
'-D','fasta',
'--color', '#FAA51A', 'G', 'Guanidine',
'--color', '#0F8140', 'A', 'Adenosine',
'--color', '#ED2224', 'U', 'Uracil',
'--color','#3A53A4', 'C', 'Cytidine',
'-f', './generated_data_and_data/for_logo_control.fa',
'-o', './generated_data_and_data/'+ortholog+'_weblogo_control.pdf'])
with open('./generated_data_and_data/for_logo_non_depleted.fa', 'w') as f:
for i,(v) in enumerate(non_depleted_rep):
if not v[0] in spacer_to_target_map:
continue
f.write('>'+str(i)+'\n')
f.write(str(spacer_to_target_map[v[0]]).replace('T','U')+'\n')
subprocess.call(['weblogo',
'-s', 'small',
'-n', '42',
'-S', '0.05',
'--ticmarks','0.05',
'-W', '4.8',
'-F','pdf',
'-D','fasta',
'--color', '#FAA51A', 'G', 'Guanidine',
'--color', '#0F8140', 'A', 'Adenosine',
'--color', '#ED2224', 'U', 'Uracil',
'--color','#3A53A4', 'C', 'Cytidine',
'-f', './generated_data_and_data/for_logo_non_depleted.fa',
'-o', './generated_data_and_data/'+ortholog+'_weblogo_non_depleted.pdf'])
with open('./generated_data_and_data/for_logo_depleted.fa', 'w') as f:
for i,(v) in enumerate(combined_rep):
if not v[0] in spacer_to_target_map:
continue
f.write('>'+str(i)+'\n')
f.write(str(spacer_to_target_map[v[0]]).replace('T','U')+'\n')
subprocess.call(['weblogo',
'-s', 'small',
'-n', '42',
'-S', '0.15',
'--ticmarks','0.15',
'-W', '4.8',
'-F','pdf',
'-D','fasta',
'--color', '#FAA51A', 'G', 'Guanidine',
'--color', '#0F8140', 'A', 'Adenosine',
'--color', '#ED2224', 'U', 'Uracil',
'--color','#3A53A4', 'C', 'Cytidine',
'-f', './generated_data_and_data/for_logo_depleted.fa',
'-o', './generated_data_and_data/'+ortholog+'_weblogo_depleted.pdf'])
all_rep_sorted = sorted(all_rep,key=lambda x: x[1])
one_perc = int(0.01 * len(all_rep_sorted))
with open('./generated_data_and_data/for_logo_top_one_perc.fa', 'w') as f:
for i,(v) in enumerate(all_rep_sorted):
if i > one_perc:
break
if not v[0] in spacer_to_target_map:
continue
f.write('>'+str(i)+'\n')
f.write(str(spacer_to_target_map[v[0]]).replace('T','U')+'\n')
subprocess.call(['weblogo',
'-s', 'small',
'-n', '42',
'-S', '0.5',
'-W', '4.8',
'-F','pdf',
'-D','fasta',
'--color', '#FAA51A', 'G', 'Guanidine',
'--color', '#0F8140', 'A', 'Adenosine',
'--color', '#ED2224', 'U', 'Uracil',
'--color','#3A53A4', 'C', 'Cytidine',
'-f', './generated_data_and_data/for_logo_top_one_perc.fa',
'-o', './generated_data_and_data/'+ortholog+'_weblogo_top_one_perc.pdf'])
offtargets = [(u, (v[3]+eps) / (v[2]+eps) / np.power(10, median), v[0], v[1]) for u,v in ratios.items()
if v[0] >= min_read_count
and (v[3]+eps) / (v[2]+eps) / np.power(10, median) < depletion_thresh
and u in nt_spacers]
print(len(offtargets))
with open('./generated_data_and_data/for_logo_offtargets.fa', 'w') as f:
for i,(v) in enumerate(offtargets):
# Offtargets do not have any genome match
f.write('>'+str(i)+'\n')
f.write(v[0].replace('T','U')+'\n')
subprocess.call(['weblogo',
'-s', 'small',
'-n', '42',
'-S', '0.5',
'-W', '4.8',
'-F','pdf',
'-D','fasta',
'--color', '#FAA51A', 'G', 'Guanidine',
'--color', '#0F8140', 'A', 'Adenosine',
'--color', '#ED2224', 'U', 'Uracil',
'--color','#3A53A4', 'C', 'Cytidine',
'-f', './generated_data_and_data/for_logo_offtargets.fa',
'-o', './generated_data_and_data/'+ortholog+'_weblogo_offtargets.pdf'])
# +
# Group spacers into +PFS or -PFS
X_has_pfs = []
Y_has_pfs = []
X_no_pfs = []
Y_no_pfs = []
for i,(u,v) in enumerate(ratios.items()):
if not u in spacer_to_target_map:
continue
s = spacer_to_target_map[u]
# NOTICE - Cas13b-t1 specific PFS
if s[5] != 'C':# and (s[15] in ['G','C'] or s[20] in ['G','C']):
X_has_pfs.append(v[2]+eps)
Y_has_pfs.append(v[3]+eps)
else:
X_no_pfs.append(v[2]+eps)
Y_no_pfs.append(v[3]+eps)
# Normalize by nontarget median
Y_has_pfs = np.array(Y_has_pfs) / np.power(10, median)
Y_no_pfs = np.array(Y_no_pfs) / np.power(10, median)
# +
# Plot abundance histogram
plt.rcParams.update({'font.size': 6})
bins = np.linspace(-1.5,0.5,100)
plt.figure(figsize=(1.1,0.7))
plt.subplot(1,2,1)
u = np.histogram(np.log10(np.array(Y_no_pfs) / np.array(X_no_pfs)), bins=bins, density=True)
m = np.mean(np.log10(np.array(Y_no_pfs) / np.array(X_no_pfs)))
plt.axvline(m, color=[0.2, 0.25, 0.3], lw=0.5)
plt.fill_between(u[1][1:],u[0], step="pre", color=[[0.2, 0.25, 0.3]], lw=0, alpha=0.25)
plt.xlim([-1.5, 0.5])
ax = plt.gca()
ax.spines['right'].set_visible(False)
ax.spines['top'].set_visible(False)
u = np.histogram(np.log10(np.array(Y_has_pfs) / np.array(X_has_pfs)), bins=bins, density=True)
m = np.mean(np.log10(np.array(Y_has_pfs) / np.array(X_has_pfs)))
plt.axvline(m, color=[241/255, 95/255, 121/255], lw=0.5)
plt.fill_between(u[1][1:],u[0], step="pre", color=[[241/255, 95/255, 121/255]], lw=0, alpha=0.5)
plt.xlim([-1.5, 0.5])
plt.ylim([0,3])
plt.yticks([0,3])
ax = plt.gca()
ax.spines['right'].set_visible(False)
ax.spines['top'].set_visible(False)
for axis in ['top','bottom','left','right']:
ax.spines[axis].set_linewidth(0.25)
ax.tick_params(width=0.25)
plt.gcf().subplots_adjust(bottom=0.25, right=1)
plt.subplot(1,2,2)
u = np.histogram(np.log10(np.array(Y_nt) / np.array(X_nt)), bins=bins, density=True)
plt.fill_between(u[1][1:],u[0], step="pre", color=[[188/255, 230/255, 250/255]], lw=0)
plt.xlim([-1.5, 0.5])
ax = plt.gca()
ax.spines['right'].set_visible(False)
ax.spines['top'].set_visible(False)
plt.ylim([0,3])
plt.yticks([0,3])
for axis in ['top','bottom','left','right']:
ax.spines[axis].set_linewidth(0.25)
ax.tick_params(width=0.25)
plt.savefig('./generated_data_and_data/'+ortholog + ' Depletion Ratios With PFS.pdf')
plt.show()
# +
# PFS Efficacy in prediction
pfs_eff = np.sum(np.array(Y_has_pfs) / np.array(X_has_pfs) < depletion_thresh) / len(Y_has_pfs)
off_target = np.sum(np.array(Y_nt) / np.array(X_nt) < depletion_thresh) / len(Y_nt)
eff = np.sum(np.array(Y) / np.array(X) < depletion_thresh) / len(Y)
print(pfs_eff, eff, off_target)
# -
# Get coordinates and guide information on a per CDS basis
spacer_len = 30
counts = {}
guides = {}
for i in range(len(all_rep)):
search = Bio.Seq.reverse_complement(all_rep[i][0])
s = ''
coords = (None,None)
for j,(name, seq) in enumerate(cds):
# Find match
idx = seq.find(search)
if idx > 0:
# If match, extract match sequence and coordinates
s = seq[idx-6:idx+spacer_len+6]
coords = (idx, (idx - flank) / (len(seq)-2*flank), 1)
# Count number of guides mapping to the CDS j
counts[j] = counts.get(j,0) + 1
if not j in guides:
guides[j] = []
# Append all the guides matching to this CDS
guides[j].append(search)
if i % 1000 == 0:
print(i)
# Get depletion information on a per CDS basis
depletion_info = []
depletion_no_pam_info = []
depletion_nt_info = []
all_js = set()
spacer_len = 30
for i in range(len(all_rep)):
search = Bio.Seq.reverse_complement(all_rep[i][0])
# Get the normalized depletion (NT median divided off)
d = all_rep[i][1]
s = ''
coords = (None,None)
for j,(name, seq) in enumerate(cds):
if not j in guides:
continue
rc = Bio.Seq.reverse_complement(seq)
idx = seq.find(search)
if idx >= 6:
s = seq[idx-6:idx+spacer_len+6]
coords = (idx, (idx - flank) / (len(seq)-2*flank), 1)
break
if all_rep[i][0] in nt_spacers:
depletion_nt_info.append((j, coords, s, d))
continue
if s == '':
continue
if len(s) < spacer_len+12:
print(s)
continue
# Cas13b-t1 specific conditions
if s[5] != 'C':# and (s[15] in ['G','C'] or s[20] in ['G','C']):
depletion_info.append((j, coords, s, d))
else:
depletion_no_pam_info.append((j, coords, s, d))
# +
import itertools
delta = 0.025
# Create a coordinate line linspace
L = np.arange(-0.05,1.05+delta/2,delta)
V = []
V_no_pam = []
for i in range(len(L)-1):
l = L[i]
u = L[i+1]
kv = [(v[0],v[3]) for v in depletion_info if (not v[1][1] is None) and l <= v[1][1] and v[1][1] < u]
# Groupby
gb = {}
for j,d in kv:
if not j in gb:
gb[j] = []
gb[j].append(d)
# Calculate mean across each cds
mean_by_cds = list(map(lambda x: (x[0], np.mean(x[1])),gb.items()))
# Take mean of means
v = np.mean([m for j,m in mean_by_cds])
V.append(v)
kv = [(v[0],v[3]) for v in depletion_no_pam_info if (not v[1][1] is None) and l <= v[1][1] and v[1][1] < u]
# Groupby
gb = {}
for j,d in kv:
if not j in gb:
gb[j] = []
gb[j].append(d)
# Calculate mean across each cds
mean_by_cds = list(map(lambda x: (x[0], np.mean(x[1])),gb.items()))
# Take mean of means
v = np.mean([m for j,m in mean_by_cds])
V_no_pam.append(v)
W = [v[3] for v in depletion_no_pam_info + depletion_info if not v[1][1] is None and l <= v[1][1] and v[1][1] < u]
div_factor = 1
plt.figure(figsize=(0.5,0.6))
plt.plot(L[:-1]+delta/2,np.array(V_no_pam),color=[0.2, 0.25, 0.3], lw=0.5)
plt.plot(L[:-1]+delta/2,np.array(V),color=[241/255, 95/255, 121/255], lw=0.5)
plt.ylim([0.5,1.5])
plt.yticks([0.5,1,1.5])
ax = plt.gca()
ax.spines['right'].set_visible(False)
ax.spines['top'].set_visible(False)
for axis in ['top','bottom','left','right']:
ax.spines[axis].set_linewidth(0.25)
ax.tick_params(width=0.25)
ax.tick_params(axis='x',direction='out', length=1.44, width=0.25)
ax.tick_params(axis='y',direction='out', length=1.80, width=0.25)
plt.xlim([-0.05,1.05])
plt.savefig('./generated_data_and_data/'+ortholog+' positional_preference.pdf')
# +
pos = [u[1][1] for u in depletion_info if not u[1][1] is None]
plt.figure(figsize=(4,2))
plt.hist(pos, 100, color=[0.2, 0.25, 0.3], density=True)
plt.xlabel('Normalized position along gene')
plt.ylabel('Normalized \nguide count')
plt.title(ortholog)
plt.xlim([-0.2, 1.2])
plt.gcf().subplots_adjust(bottom=0.25, left=0.2)
plt.savefig('./generated_data_and_data/'+ortholog + ' Gene Position Distribution.pdf')
plt.show()
# -
depletion_info = []
all_depletion_info = []
all_js = set()
spacer_len = 30
for i in range(len(all_rep)):
search = Bio.Seq.reverse_complement(all_rep[i][0])
d = all_rep[i][1]
s = ''
coords = (None,None)
for j,(name, seq) in enumerate(cds):
if not j in guides:
continue
rc = Bio.Seq.reverse_complement(seq)
idx = seq.find(search)
if idx >= 6:
s = seq[idx-6:idx+spacer_len+6]
coords = (idx, (idx - flank) / (len(seq)-2*flank), 1)
break
if all_rep[i][0] in nt_spacers:
depletion_nt_info.append((j, coords, s, d))
continue
if s == '':
continue
if len(s) < spacer_len+12:
print(s)
continue
if d < depletion_thresh:
depletion_info.append((j, coords, s, d))
all_depletion_info.append((j, coords, s, d))
print(len(depletion_info))
print(len(all_depletion_info))
# +
# Multi positional preferences
bases = ['A', 'T', 'G', 'C']
p1 = 5
p2 = 37
p3 = 38
tokens = {(a,b,c) : 0 for a in bases for b in bases for c in bases}
tokens_all = {(a,b,c) : 0 for a in bases for b in bases for c in bases}
for i in range(len(depletion_info)):
try:
token = (str(depletion_info[i][2][p1]), str(depletion_info[i][2][p2]), str(depletion_info[i][2][p3]))
tokens[token] += 1
except:
pass
for i in range(len(all_depletion_info)):
try:
token = (str(all_depletion_info[i][2][p1]), str(all_depletion_info[i][2][p2]), str(all_depletion_info[i][2][p3]))
tokens_all[token] += 1
except:
pass
token_depletion = {u : 1-tokens[u] / (tokens_all[u]+0.001) for u in tokens.keys()}
dual_bases = [(a,b) for a in bases for b in bases]
dual_bases_labels = [a+b for a,b in dual_bases]
Z = np.zeros((4,16))
for i,a in enumerate(bases):
for j,(b,c) in enumerate(dual_bases):
token = (a,b,c)
depletion = token_depletion[token]
Z[i,j] = depletion
plt.figure(figsize=(4,1.75))
cm = plt.cm.get_cmap('magma_r')
ax = plt.gca()
im = plt.imshow(Z,cmap=cm, vmax=1)
ax.set_xticks(np.arange(len(dual_bases_labels)))
ax.set_yticks(np.arange(len(bases)))
# ... and label them with the respective list entries
ax.set_xticklabels(dual_bases_labels, rotation=-60, fontdict={'fontfamily' : 'Andale Mono'})
ax.set_yticklabels(bases,rotation=0, fontdict={'fontfamily' : 'Andale Mono'})
plt.xlabel('3\' PFS (+2, +3)')
plt.ylabel('5\' PFS (-1)')
plt.title(ortholog)
divider = make_axes_locatable(ax)
cax = divider.append_axes("right", size="5%", pad=0.05)
plt.colorbar(im,cax=cax)
plt.savefig('./generated_data_and_data/'+ortholog + ' pfs map.pdf')
plt.show()
# -
| 190507_Bt1_PFS/.ipynb_checkpoints/Bt1_analysis-checkpoint.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python
# language: python
# name: conda-env-python-py
# ---
# # Insurance: Predicting claims
# *Created by:
# <NAME>, data scientist
# Antwerpen, Belgium
# Date: October 13, 2019
# Contacts: <EMAIL>*
# ***
#
# ## Table of Contents
# Stage 1 : Business Understanding
# Stage 2 : Analytic Approach
# Stage 3 : Data Requirements
# Stage 4 : Data Collection
# Stage 5 : Data Understanding
# Stage 6 : Data Preparation
# Stage 7 : Data visualization
# Stage 7 : Modeling
# Stage 8 : Evaluation
# Sample of use
# ***
#
# ### Stage 1 : Business Understanding
# **Problem:**
# Insurance companies are extremely interested in predicting the future. Accurate prediction reduces the company's financial losses, helps it charge competitive premiums that are neither too high nor too low, and improves its pricing models, keeping the company one step ahead of its competitors.
#
# **Question:**
# Can we predict if a potential customer will be a claimer?
#
# ### Stage 2 : Analytic Approach
# As the question requires a yes/no answer, a classification model will be built.
#
# ### Stage 3 : Data Requirements
# **Data content:** To answer the question we need data about current customers, with the following features for each customer: age, sex, BMI, number of children, smoker status, region, sum of charges last year, and whether the customer filed a claim last year.
# **Data formats:** CSV format
# **Data sources:** corporate data from the insurance company.
#
# ### Stage 4 : Data Collection
# Data was collected from [Kaggle](https://www.kaggle.com/easonlai/sample-insurance-claim-prediction-dataset)
#
# +
import numpy as np
import pandas as pd
import os
for dirname, _, filenames in os.walk('/kaggle/input'):
for filename in filenames:
print(os.path.join(dirname, filename))
#visualization
import matplotlib.pyplot as plt
import seaborn as sns
from pandas.plotting import scatter_matrix
#machine learning
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.metrics import roc_curve
import pylab as pl
# -
# !pip install ml_metrics
import ml_metrics
# +
#reading the file
df = pd.read_csv('insurance3r2.csv')
#from Kaggle we read like following: df = pd.read_csv('//kaggle/input/sample-insurance-claim-prediction-dataset/insurance3r2.csv')
# -
# ### Stage 5 : Data Understanding
# ##### Descriptive statistics:
print(df.shape)
print(df.columns)
print(df.describe())
df.hist()
plt.title('Data distribution')
plt.xlabel('Relative values')
plt.ylabel('Frequency')
plt.tight_layout()
plt.show()
df.hist(column='charges', bins=100)
plt.title('Data distribution: charges')
plt.xlabel('Individual medical costs, $ USD')
plt.ylabel('Frequency')
plt.show()
charges_outliers = df.loc[df.charges >= 50000]
print(charges_outliers.shape)
df.hist(column='steps', bins=100)
plt.title('Data distribution: steps')
plt.xlabel('Steps per day')
plt.ylabel('Frequency')
plt.show()
# Let's round groups to 3000, 4000, 5000, 8000, 10000 for a nicer overview:
decimals = -3
df['steps']=df['steps'].apply(lambda x: round(x, decimals))
print(df.steps.value_counts())
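# A negative `ndigits` argument to `round` rounds to tens, hundreds, thousands, and so on; this is what collapses the raw step counts into the coarse groups above. A quick illustration:

```python
# round(x, -3) snaps a value to the nearest thousand
print(round(3481, -3))   # -> 3000
print(round(4607, -3))   # -> 5000
# Python uses banker's rounding, so exact ties go to the even thousand
print(round(7500, -3))   # -> 8000
```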
# ##### Correlation discovery:
pp = sns.pairplot(df, palette = 'deep', height=1.2, diag_kind = 'kde', diag_kws=dict(shade=True), plot_kws=dict(s=10) )
pp.set(xticklabels=[])
plt.suptitle('Correlation between variables')
plt.show()
fig, ax = plt.subplots(figsize=(12, 8))
sns.heatmap(df.corr(), square=True, cmap='RdYlGn', annot=True, mask=None)
plt.setp(ax.get_xticklabels(), rotation=45, ha="right", rotation_mode="anchor")
plt.title('Correlation between features')
plt.show()
# ### Stage 6 : Data Preparation
# ##### Dealing with missing data and outliers:
sns.heatmap(df.isnull(), cmap='viridis', cbar=False, yticklabels=False)
plt.title('missing data')
plt.show()
df.boxplot()
plt.title('Data distribution: outliers')
plt.ylabel('Relative values')
plt.xticks(rotation='vertical')
plt.show()
# ##### Feature engineering:
a = df
a['age'] = a['age'].mask(a['age'] <= 20, 1)
a['age'] = a['age'].mask(
(a['age'] >= 21) & (a['age'] <= 30), 2)
a['age'] = a['age'].mask(
(a['age'] >= 31) & (a['age'] <= 40), 3)
a['age'] = a['age'].mask(
(a['age'] >= 41) & (a['age'] <= 50), 4)
a['age'] = a['age'].mask(
(a['age'] >= 51) & (a['age'] <= 60), 5)
a['age'] = a['age'].mask(a['age'] >= 61, 6)
unique, counts = np.unique(a.age, return_counts=True)
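# The chained `mask` calls above work, but `pd.cut` expresses the same age binning in one call. A sketch on toy ages (the bin edges mirror the groups above; bins are right-inclusive by default):

```python
import pandas as pd

ages = pd.Series([19, 25, 33, 45, 55, 70])
# edges give the bins (0,20], (20,30], ..., (60,200], labelled 1..6 as above
groups = pd.cut(ages, bins=[0, 20, 30, 40, 50, 60, 200], labels=[1, 2, 3, 4, 5, 6])
print(groups.tolist())  # -> [1, 2, 3, 4, 5, 6]
```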
# ### Stage 7 : Data visualization
# **How old are our customers?**
labels = 'Younger than 20 y.o.', '21 - 30 y.o.', '31 - 40 y.o.', '41 - 50 y.o.', '51 - 60 y.o.', 'Older than 60 y.o.'
sizes = [166, 278, 257, 281, 265, 91]
fig1, ax1 = plt.subplots()
ax1.pie(sizes, labels=labels, autopct='%1.1f%%', shadow=True, startangle=90)
ax1.axis('equal')
plt.title('Customers overview: age')
plt.show()
# **Do we have more male clients?**
labels = 'Male', 'Female'
sizes = [676, 662]
colors = ['tab:blue', 'tab:pink']
fig1, ax1 = plt.subplots()
ax1.pie(sizes, labels=labels, colors=colors, autopct='%1.1f%%', shadow=True, startangle=90)
ax1.axis('equal')
plt.title('Customers overview: sex')
plt.show()
# **How many smokers do we have?**
labels = 'Non-smoker', 'Smoker'
sizes = [1064, 274]
colors = ['tab:green', 'tab:gray']
fig1, ax1 = plt.subplots()
ax1.pie(sizes, labels=labels, colors=colors, autopct='%1.1f%%', shadow=True, startangle=90)
ax1.axis('equal')
plt.title('Customers overview: smoker/non-smoker')
plt.show()
# **How often do our customers claim?**
labels = 'Claim', 'Non claim'
sizes = [783, 555]
colors = ['tab:red', 'royalblue']
fig1, ax1 = plt.subplots()
ax1.pie(sizes, labels=labels, colors=colors, autopct='%1.1f%%', shadow=True, startangle=90)
ax1.axis('equal')
plt.title('Customers overview: claim/non-claim')
plt.show()
# ### Stage 7 : Modeling
# We build a predictive classification model with the following classifiers:
# - RandomForestClassifier
# - GradientBoostingClassifier
# - KNeighborsClassifier
# - GaussianNB
#
# The best model will be chosen with ROC curve method.
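# The AUC printed in the plot legend below comes from the `ml_metrics` package; `sklearn.metrics.roc_auc_score` computes the same quantity and makes a handy sanity check. A minimal sketch on toy scores:

```python
from sklearn.metrics import roc_auc_score

# toy labels and predicted probabilities of the positive class
y_true = [0, 0, 1, 1]
y_score = [0.1, 0.4, 0.35, 0.8]
# AUC = share of positive/negative pairs ranked correctly; 3 of 4 here
print(roc_auc_score(y_true, y_score))  # -> 0.75
```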
X = df.drop(['insuranceclaim'], axis=1).values
y = df['insuranceclaim'].values
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.3, random_state = 42)
print("X_train : " + str(X_train.shape))
print("X_test : " + str(X_test.shape))
print("y_train : " + str(y_train.shape))
print("y_test : " + str(y_test.shape))
models = []
models.append(RandomForestClassifier(n_estimators=165, max_depth=4, criterion='entropy'))
models.append(GradientBoostingClassifier(max_depth =4))
models.append(KNeighborsClassifier(n_neighbors=20))
models.append(GaussianNB())
plt.figure(figsize=(10, 10))
for model in models:
model.fit(X_train, y_train)
pred_scr = model.predict_proba(X_test)[:, 1]
fpr, tpr, thresholds = roc_curve(y_test, pred_scr)
roc_auc = ml_metrics.auc(y_test, pred_scr)
md = str(model)
md = md[:md.find('(')]
pl.plot(fpr, tpr, label='ROC fold %s (auc = %0.2f)' % (md, roc_auc))
pl.plot([0, 1], [0, 1], '--', color=(0.6, 0.6, 0.6))
pl.xlim([0, 1])
pl.ylim([0, 1])
pl.xlabel('False Positive Rate')
pl.ylabel('True Positive Rate')
pl.title('Receiver operating characteristic example')
pl.legend(loc="lower right")
pl.show()
# The best score belongs to GradientBoostingClassifier, with an AUC of 0.99 (note that the metric plotted above is AUC, not R^2).
# **Result: The model has been built with an AUC of 0.99.**
#
# ### Stage 8 : Evaluation
# As the model reaches an AUC of 0.99 on the held-out test set, it is accepted as adequate.
#
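# The 99% quoted above is presumably the AUC read off the ROC legend; held-out accuracy is a different metric and is worth computing explicitly with `accuracy_score`. A sketch on synthetic data (the notebook itself would reuse its own `X_train`/`X_test` split):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# synthetic stand-in for the insurance features
X, y = make_classification(n_samples=500, n_features=8, random_state=42)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=42)
clf = GradientBoostingClassifier(max_depth=4).fit(X_tr, y_tr)
acc = accuracy_score(y_te, clf.predict(X_te))
print('held-out accuracy:', round(acc, 2))
```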
# ### Sample of use:
#
# 2 potential customers:
# 1. Ben: age 19, male, BMI 27.9, 3000 steps per day, no children, smoker, North-West, charges last year 16,885 USD.
# 2. Jerry: age 23, male, BMI 24.3, 6000 steps per day, no children, non-smoker, North-West, charges last year 2,137 USD.
# To predict whether they will be claimers:
Ben_Jerry_test = [[1, 0, 27.900, 3000, 0, 1, 3, 16885], [1, 0, 24.3, 6000, 0, 0, 3, 2137]]
df_Ben_Jerry = pd.DataFrame(Ben_Jerry_test)
clf = GradientBoostingClassifier(max_depth =4)
clf.fit(X_train, y_train)
Ben_Jerry_prediction = clf.predict(df_Ben_Jerry)
print('Prediction (1 - claim, 0 - no-claim) {}'.format(Ben_Jerry_prediction))
# **Result:**
# - Ben is predicted to be a claimer,
# - Jerry is predicted not to be a claimer,
# with a model AUC of 0.99.
| Labs/Insurance: Predicting claims.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
# + [markdown] deletable=true editable=true
# #### New to Plotly?
# Plotly's Python library is free and open source! [Get started](https://plotly.com/python/getting-started/) by downloading the client and [reading the primer](https://plotly.com/python/getting-started/).
# <br>You can set up Plotly to work in [online](https://plotly.com/python/getting-started/#initialization-for-online-plotting) or [offline](https://plotly.com/python/getting-started/#initialization-for-offline-plotting) mode, or in [jupyter notebooks](https://plotly.com/python/getting-started/#start-plotting-online).
# <br>We also have a quick-reference [cheatsheet](https://images.plot.ly/plotly-documentation/images/python_cheat_sheet.pdf) (new!) to help you get started!
# + [markdown] deletable=true editable=true
# #### Line Chart and a Bar Chart
# + deletable=true editable=true
import plotly.plotly as py
import plotly.graph_objs as go
trace1 = go.Scatter(
x=[0, 1, 2, 3, 4, 5],
y=[1.5, 1, 1.3, 0.7, 0.8, 0.9]
)
trace2 = go.Bar(
x=[0, 1, 2, 3, 4, 5],
y=[1, 0.5, 0.7, -1.2, 0.3, 0.4]
)
data = [trace1, trace2]
py.iplot(data, filename='bar-line')
# + [markdown] deletable=true editable=true
# #### A Contour and Scatter Plot of the Method of Steepest Descent
# + deletable=true editable=true
import plotly.plotly as py
import plotly.graph_objs as go
import json
import six.moves.urllib
response = six.moves.urllib.request.urlopen('https://raw.githubusercontent.com/plotly/datasets/master/steepest.json')
data = json.load(response)
trace1 = go.Contour(
z=data['contour_z'][0],
y=data['contour_y'][0],
x=data['contour_x'][0],
ncontours=30,
showscale=False
)
trace2 = go.Scatter(
x=data['trace_x'],
y=data['trace_y'],
mode='markers+lines',
name='steepest',
line=dict(
color='black'
)
)
data = [trace1, trace2]
py.iplot(data, filename='contour-scatter')
# + [markdown] deletable=true editable=true
# #### Reference
# See https://plotly.com/python/reference/ for more information and attribute options!
# + deletable=true editable=true
from IPython.display import display, HTML
display(HTML('<link href="//fonts.googleapis.com/css?family=Open+Sans:600,400,300,200|Inconsolata|Ubuntu+Mono:400,700" rel="stylesheet" type="text/css" />'))
display(HTML('<link rel="stylesheet" type="text/css" href="http://help.plot.ly/documentation/all_static/css/ipython-notebook-custom.css">'))
# ! pip install git+https://github.com/plotly/publisher.git --upgrade
import publisher
publisher.publish(
'mixed.ipynb', 'python/graphing-multiple-chart-types/', 'Python Multiple Chart Types | plotly',
'How to design figures with multiple chart types in python.',
title = 'Python Multiple Chart Types | plotly',
name = 'Multiple Chart Types',
thumbnail='thumbnail/multiple-chart-type.jpg', language='python',
has_thumbnail='true', display_as='file_settings', order=16)
# + deletable=true editable=true
| _posts/python-v3/fundamentals/mixed/mixed.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import numpy as np
from matplotlib import pyplot as plt
import seaborn as sns
data = [[ 0, 0, 1.64, -1.64, 0., ],
[ 0.03125, -0.86875, 0.4785714286, 0, 0, ],
[ 0., -1.0571428571, -0.1635714286, 0.8545, 0, ],
[ 0., -0.86875, 0., 0.86875, 0, ],
[ 0., 0., -0.73375, 0.73375, 0, ],
[ 0.66125, 0., 0., -0.66125, 0, ],
[-1.1021428571, 0., 0., 1.1021428571, 0., ],
[ 0., -0.5428571429, -0.7485714286, 0.904 , 0., ],
[ 0., 0., 0., 0., 0., ],
[ 0., 0., 0., 0., 0., ],
[ 0., 0., 0., 0., 0., ],
[ 0.1185714286, 0., 0., -0.1185714286, 0., ],
[ 0.66125, 0., 0.0166666667, -0.2795, 0., ],
[ 0., 0., 0., 0., 0., ],
[-0.66125, 0., 0., 0.66125, 0., ],
[ 0.66125, 0., 0., 0., -0.66125, ],
[ 0., 0., 0., 0., 0., ],
[ 0., 0., 0., 0., 0., ],
[ 0., 0., 0., 0., 0., ]]
# -
data_flat = np.array(data).flatten()  # flatten the nested list into a 1-D numeric array
data_null = np.where(data_flat==0, float('nan'), data_flat)
data_nuller = np.where(data_flat>250, float('nan'), np.where(data_flat<-500, float('nan'), data_flat))
data_nullest = np.where(data_nuller==0, float('nan'), data_nuller)
# +
dm = 5
sites = 19
num_dm = np.arange(dm)
ind = np.arange(sites)
width = 1/(sites+1)
s = sites*dm
# some_dim = [data_array_flat[i], i for i in range(0, 160, 4)]
dim = dict()
for i in num_dm:
dim[i] = [data_nullest[i] for i in range(i, s, dm)] # y-axis for 0 dim
dim_num = dict()
for i in ind:
dim_num[i] = [data_nullest[i] for i in range(i, s, sites)]
sums = list()
#Sum of na values per row
for i in ind:
sums.append(sum(np.isnan(list(dim_num[i]))))
dim_na = dict()
dim_loc = dict()
for i in ind:
dim_na[i] = np.array(dim_num[i])[np.array(np.isnan(dim_num[i]))==False]
dim_loc[i] = np.arange(len(dim_num[i]))[np.array(np.isnan(dim_num[i]))==False]
colors = [
"tab:blue",
"blueviolet",
"saddlebrown",
"tab:orange",
"tab:green",
"mediumvioletred",
"coral",
"tab:red",
"tab:purple",
"dodgerblue",
"tab:brown",
"gold",
"tab:pink",
"limegreen",
"tab:gray",
"chocolate",
"tab:olive",
"mediumvioletred",
"goldenrod",
"tab:cyan",
"violet"
]
col_len = len(colors)
alpb = ['A','R','N','D','C','E','Q','G','H','I','L','K','M','F','P','S','T','W','Y','V','n']
alpb_d = dict()
for i in num_dm:
alpb_d[i] = colors[i%col_len]
alpb_d[alpb[i]] = alpb_d.pop(i)
# +
fig, ax = plt.subplots()
pi = dict()
for i in ind:
ln = 1/len(dim_na[i])
rn = np.arange(1/ln)
pi[i] = ax.bar(x=i+np.array([j for j in rn])*ln, height=[j for j in dim_na[i]], width=ln, align="edge", color=[colors[i%col_len] for i in list(dim_loc[0])])
ax.set_xticks(ind + width+0.5)
ax.set_xticklabels(np.arange(1, sites+1))
for i in range(sites+1):
ax.axvline(i, color="#D4D4D4", linewidth=0.8)
color_map = [color for color in list(alpb_d.values())]
markers = [plt.Line2D([0,0],[0,0],color=color, marker='o', linestyle='') for color in alpb_d.values()]
ax.legend(markers, alpb_d.keys(), loc=0, ncol=7, prop={'size':4})
ax.tick_params(width = 2, labelsize = 4) #width of the tick and the size of the tick labels
plt.xlabel('CDRH3 Site')
plt.ylabel('rFon1D')
#Regressions of off values onto each site of target RNA (orthogonalized within)
#plt.savefig('rFon1D_off_star.png', bbox_inches='tight')
figure = ax.get_figure()
figure.savefig('moresanergraph.png', dpi=400)
# -
| ortho_seq_code/Sidhu/initial_results/better_heatmaps.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Dependencies
# +
import mlflow
import os
import shutil
import boto3
from datetime import datetime
S3_CLIENT = boto3.resource('s3')
mlflow.set_tracking_uri(os.getenv('MLFLOW_TRACKING_URI'))
MLFLOW_CLIENT = mlflow.tracking.MlflowClient()
REGISTERED_MODELS = ["Hands"]
MODELS = {}
def downlod_model(bucket_name, remoteDirectory_name):
bucket = S3_CLIENT.Bucket(bucket_name)
for obj in bucket.objects.filter(Prefix=remoteDirectory_name):
if not os.path.exists(os.path.dirname(obj.key)):
os.makedirs(os.path.dirname(obj.key))
bucket.download_file(obj.key, obj.key)
def update_models(version=-1, remove_old_versions=False):
update = {}
for model_name in REGISTERED_MODELS:
model = None
update[model_name] = 0
for mv in MLFLOW_CLIENT.search_model_versions(f"name='{model_name}'"):
mv_bckp = mv
mv = dict(mv)
if version == mv['version'] or (version == -1 and mv['current_stage'] == 'Production'):
mv['last_updated_timestamp'] = str(datetime.fromtimestamp(int(mv['last_updated_timestamp'] / 1000)))
bucket = mv['source'].split('//')[1].split('/')[0]
folder = mv['source'].split('//')[1].split('/')[1]
if os.path.exists(os.path.join('./models', folder)):
print("Load existing model...")
model = os.path.join(os.path.join('./models', folder), "artifacts/model/data/model.h5")
else:
print("Downloading model...")
downlod_model(bucket, folder)
model = os.path.join(os.path.join('./models', folder), "artifacts/model/data/model.h5")
if remove_old_versions and os.path.exists('./models'):
shutil.rmtree('./models')
if not os.path.exists('./models'):
os.mkdir('./models')
shutil.move(os.path.join(os.getcwd(), folder), './models')
update[model_name] = 1
print("Using model {name} v{version} ({current_stage}) updated at {last_updated_timestamp}".format(**mv))
#response = {k: v for k, v in mv.items() if v}
break
if model:
MODELS[model_name] = (model, mv_bckp)
return update
def get_model(model_name):
return MODELS.get(model_name, None)
# -
# # Download the latest model version
os.chdir('/home/ubuntu/tfm/standalone')
versions = [-1]
for version in versions:
update_flag = update_models(version)
model_path, model_meta = get_model('Hands')
print(MLFLOW_CLIENT.get_run(model_meta.run_id))
os.listdir('/home/ubuntu/tfm/standalone/models')
| MLflow.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # VOQC: A Verifed Optimizer for Quantum Circuits <img src="tutorial-files/umd.jpeg" width="100" align="right">
#
# Welcome to the tutorial for VOQC!
#
# VOQC is a **verified optimizer for quantum circuits**, implemented and *formally verified* in the [Coq proof assistant](https://coq.inria.fr/). All VOQC optimizations are *guaranteed* to preserve the semantics of the original circuit, meaning that any optimized circuit produced by VOQC has the same behavior as the input circuit. VOQC was presented as a [distinguished paper at POPL 2021](https://arxiv.org/abs/1912.02250); its code is available at [github.com/inQWIRE/SQIR](https://github.com/inQWIRE/SQIR).
#
# To run VOQC, we (1) extract the verified Coq code to OCaml, (2) compile the extracted OCaml code to a library, (3) wrap the OCaml library in C, and (4) provide a Python interface (pyvoqc) for the C library. This tutorial introduces the optimizations available in VOQC through its Python interface. For convenience, the pyvoqc wrapper includes code that makes VOQC look like an optimization pass in [IBM's Qiskit framework](https://qiskit.org/documentation/getting_started.html), allowing us to take advantage of Qiskit's utilities for quantum programming. In this tutorial, we'll use Qiskit for building and printing circuits.
#
# ## Outline
#
# * End-to-end Example
# * Unitary Optimizations
# * Not Propagation
# * Single-qubit Gate Cancellation
# * Two-qubit Gate Cancellation
# * Rotation Merging
# * Hadamard Reduction
#
# We begin with an example of reading in a circuit, optimizing it using VOQC, and writing it back to a file.
# We then describe each unitary optimization available in VOQC, along with an example of how to run it in Qiskit.
#
# ## Preliminaries
#
# To compile the extracted OCaml code into a library, run `dune build lib/libvoqc.so` in the pyvoqc directory.
#
# To begin the tutorial, import modules below from Qiskit and VOQC.
# Click on this cell and press Ctrl+Enter or run it with the "Run" button
from qiskit import QuantumCircuit
from pyvoqc.voqc import VOQC
from pyvoqc.qiskit.voqc_optimization import QisVOQC
from qiskit.transpiler import PassManager
from qiskit.qasm import pi
# ## End-to-end Example
#
# Our Python interface allows us to pass a Qiskit circuit object through VOQC and receive an optimized Qiskit circuit.
#
# VOQC can be called just like Qiskit's built-in transpiler passes (e.g. "Commutative Cancellation" or "CX Cancellation"). Simply append `QisVOQC([opt list])` to a pre-defined `Pass Manager` where `opt list` is an optional argument specifying one or more of the unitary optimizations in VOQC (see *Unitary Optimizations* below). `QisVOQC()` with no arguments will run all optimizations available in VOQC.
#
# In the example here, we first read in a small quantum circuit from the qasm file "tof_3_example.qasm". This circuit consists of three 3-qubit CCX (Toffoli) gates that together perform a 4-qubit Toffoli gate. Qiskit will *decompose* these 3-qubit gates into the 1- and 2-qubit gates supported by VOQC. We call the initial circuit `circ`. We then define a pass manager `pm` and schedule our VOQC transpiler pass. Finally, we run `circ` through the pass manager, producing optimized circuit `new_circ`. Rather than printing the resulting circuit (which is fairly large), we print its gate counts. We also write `new_circ` to the file "tof_3_example_optimized.qasm".
# +
# Read circuit from file
circ = QuantumCircuit.from_qasm_file("tutorial-files/tof_3_example.qasm")
print("Before Optimization:")
print(circ)
# Append VOQC pass without argument to the Pass Manager
pm = PassManager()
pm.append(QisVOQC())
new_circ = pm.run(circ)
# Print info about optimized circuit
print("\nAfter Optimization:\n")
print('gate counts = ', new_circ.count_ops())
print('circuit depth = ', new_circ.depth())
# Save optimized circuit
qasm_str = new_circ.qasm(filename="tof_3_example_optimized.qasm")
# -
# ## Unitary Optimizations
#
# Appending `QisVOQC()` to the pass manager runs the VOQC's main optimization function, which runs all its unitary optimizations in a predefined order. You can also run optimizations individually, or in a custom order, by passing arguments to the `QisVOQC` class (e.g. `QisVOQC(["not_propagation"])` to run "not propagation"). VOQC provides five unitary optimizations: *not propagation*, *single-qubit gate cancellation*, *two-qubit gate cancellation*, *rotation merging*, and *hadamard reduction*.
#
# We provide a brief example of each optimization below. For more details see Section 4 of [our paper](https://arxiv.org/abs/1912.02250) or an earlier paper by [Nam et al. (2018)](https://www.nature.com/articles/s41534-018-0072-4), which inspired many of VOQC's optimizations.
# ### Not Propagation
#
# *Not propagation* commutes X (logical NOT) gates rightward through the circuit, cancelling them when possible. In the example below, the leftmost X gate propagates through the CNOT gate to become two X gates. The upper X gate then propagates through the H gate to become a Z, and the two lower X gates cancel.
# +
# Build circuit with 2 qubits and 4 gates
circ = QuantumCircuit(2)
circ.x(0)
circ.cx(0, 1)
circ.h(0)
circ.x(1)
print("Before Optimization:")
print(circ)
# Append "not_propagation" optimization to the Pass Manager
pm = PassManager()
pm.append(QisVOQC(["not_propagation"]))
new_circ = pm.run(circ)
# Print optimized circuit
print("\nAfter Optimization:")
print(new_circ)
# -
# ### Single-qubit Gate Cancellation
#
# *Single-qubit gate cancellation* has the same "propagate-cancel" structure as not propagation, except that gates revert back to their original positions if they fail to cancel. In the example below, the upper leftmost T gate commutes through the control of the CNOT, combining with the upper rightmost T gate to become an S gate. The lower T gate commutes through the H; CNOT; H subcircuit to cancel with the Tdg gate.
# +
# Build circuit with 2 qubits and 7 gates
circ = QuantumCircuit(2)
circ.t(0)
circ.t(1)
circ.h(1)
circ.cx(0, 1)
circ.h(1)
circ.t(0)
circ.tdg(1)
print("Before Optimization:")
print(circ)
# Append "cancel_single_qubit_gates" optimization to the Pass Manager
pm = PassManager()
pm.append(QisVOQC(["cancel_single_qubit_gates"]))
new_circ = pm.run(circ)
# Print optimized circuit
print("\nAfter Optimization:")
print(new_circ)
# -
# ### Two-qubit Gate Cancellation
#
# *Two-qubit gate cancellation* is similar to single-qubit gate cancellation, except that it aims to commute and cancel CNOT gates. In the circuit below, the first CNOT gate commutes through the second, to cancel with the third.
# +
# Build circuit with 3 qubits and 3 gates
circ = QuantumCircuit(3)
circ.cx(0, 1)
circ.cx(0, 2)
circ.cx(0, 1)
#circ.h(0)
print("Before Optimization:")
print(circ)
# Append "cancel_two_qubit_gates" optimization to the Pass Manager
pm = PassManager()
pm.append(QisVOQC(["cancel_two_qubit_gates"]))
new_circ = pm.run(circ)
# Print optimized circuit
print("\nAfter Optimization:")
print(new_circ)
# -
# ### Rotation Merging
#
# *Rotation merging* combines Rz gates that act on the same logical state (see discussion in Sec 4.4 of [our paper](https://arxiv.org/abs/1912.02250)). In the example below, the two Rz(pi/6) gates can be combined into a single Rz(pi/6 + pi/6) = Rz(pi/3) gate.
# +
# Build circuit with 2 qubits and 4 gates
circ = QuantumCircuit(2)
circ.rz(pi/6, 1)
circ.cx(0, 1)
circ.cx(1, 0)
circ.rz(pi/6, 0)
print("Before Optimization:")
print(circ)
# Append "merge_rotations" optimization to the Pass Manager
pm = PassManager()
pm.append(QisVOQC(["merge_rotations"]))
new_circ = pm.run(circ)
# Print optimized circuit
print("\nAfter Optimization:")
print(new_circ)
# -
# ### Hadamard Reduction
#
# *Hadamard reduction* applies a series of identities to reduce the number of H gates in a circuit, for the purpose of making other optimizations (e.g. rotation merging) more effective. In the example below, we replace the first circuit with the second to remove two H gates.
# +
# Build circuit with 2 qubits and 5 gates
circ = QuantumCircuit(2)
circ.h(1)
circ.sdg(1)
circ.cx(0, 1)
circ.s(1)
circ.h(1)
print("Before Optimization:")
print(circ)
# Append "hadamard_reduction" optimization to the Pass Manager
pm = PassManager()
pm.append(QisVOQC(["hadamard_reduction"]))
new_circ = pm.run(circ)
# Print optimized circuit
print("\nAfter Optimization:")
print(new_circ)
# -
| tutorial.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
# +
# %matplotlib inline
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
import itertools
# -
data = pd.read_excel('data/deception_IQ_data_anonymous.xlsx')
# # Figure 2
# +
sns.set(style="white", palette="muted", color_codes=True)
fig = plt.figure(figsize = (15,4))
ax = fig.add_subplot(131)
sns.distplot(data.strategy[np.logical_and(data.study == 1, data.include == 1)], bins = 10, kde = False, ax = ax)
ax.set_title('Experiment 1', fontsize = 16)
ax.set_ylim((0,16))
ax.set_xlabel('Honest Strategy Deceptive')
ax = fig.add_subplot(132)
sns.distplot(data.strategy[np.logical_and(data.study == 2, data.include == 1)], bins = 10, kde = False, ax = ax)
ax.set_title('Experiment 2', fontsize = 16)
ax.set_ylim((0,16))
ax.set_xlabel('Honest Strategy Deceptive')
ax = fig.add_subplot(133)
sns.distplot(data.strategy[np.logical_and(data.study == 3, data.include == 1)], bins = 10, kde = False, ax = ax)
ax.set_title('Experiment 3', fontsize = 16)
ax.set_ylim((0,16))
ax.set_xlabel('Honest Strategy Deceptive')
fig.savefig('figures/Fig2.eps', dpi = 300)
# -
# # Figure 3
# Here the parameter estimates are entered manually, because that's the easiest way.
# +
iq = np.arange(-2,3,1)
ex = np.arange(-2,3,1)
# enter parameter estimates (means of posterior distributions)
iq_p = 0.63
e_p = 0.17
iq_e_p = 0.36
predicted_odds = np.zeros(len(iq)*len(ex))
pairs = []
for i, element in enumerate(itertools.product(reversed(iq), ex)):
pairs.append(element)
predicted_odds[i] = element[0] * iq_p + element[1] * e_p + element[0] * element[1] * iq_e_p
# map log-odds to probabilities with the logistic function
predicted_odds = np.exp(predicted_odds)/(1+np.exp(predicted_odds))
predicted_prob = predicted_odds.reshape((len(iq),len(iq)))
fig = plt.figure(figsize = (10,10))
ax = fig.add_subplot(111)
sns.heatmap(predicted_prob, annot = True, cmap = 'coolwarm', square = True,
xticklabels= np.arange(-2,3,1), yticklabels=np.arange(-2,3,1)*-1, ax = ax,
annot_kws = {'fontsize' : 18})
ax.set_xlabel('Extraversion', fontsize = 18)
ax.set_ylabel('Fluid intelligence', fontsize = 18)
ax.tick_params(axis='both', which='major', labelsize=12)
#fig.savefig('Fig_IQ_Extroversion.png', dpi = 300)
fig.savefig('figures/Fig3.eps', dpi = 300)
# -
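# Each heatmap cell above is the logistic transform of a linear combination of the standardized scores. For example, at `iq = 2` and `ex = 2` the log-odds are 2*0.63 + 2*0.17 + 4*0.36 = 3.04, which maps to a probability of about 0.95:

```python
import numpy as np

iq_p, e_p, iq_e_p = 0.63, 0.17, 0.36  # posterior means entered above
iq_z, ex_z = 2, 2
log_odds = iq_z * iq_p + ex_z * e_p + iq_z * ex_z * iq_e_p  # = 3.04
prob = np.exp(log_odds) / (1 + np.exp(log_odds))
print(round(prob, 3))  # -> 0.954
```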
# # Figure S1
post_s1 = np.load('posteriors/main_s1.npy')
post_s2 = np.load('posteriors/main_s2.npy')
post_s3 = np.load('posteriors/main_s3.npy')
# +
nrows = 7
ncols = 3
fig = plt.figure(figsize = (12,15))
ax = fig.add_subplot(nrows,ncols,1)
sns.kdeplot(post_s1[:,1], label = 'Experiment 1', ax = ax)
sns.kdeplot(post_s2[:,1], label = 'Experiment 2', ax = ax)
sns.kdeplot(post_s3[:,1], label = 'Experiment 3', ax = ax)
ax.set_title('Sex', fontsize = 18)
ax.set_xlim((-1, 1))
ax = fig.add_subplot(nrows,ncols,2)
sns.kdeplot(post_s1[:,2], label = 'Experiment 1', ax = ax)
sns.kdeplot(post_s2[:,2], label = 'Experiment 2', ax = ax)
sns.kdeplot(post_s3[:,2], label = 'Experiment 3', ax = ax, color = 'red')
ax.set_title('Age', fontsize = 18)
ax.set_xlim((-1, 1))
ax = fig.add_subplot(nrows,ncols,4)
sns.kdeplot(post_s1[:,3], label = 'Experiment 1', ax = ax)
sns.kdeplot(post_s2[:,3], label = 'Experiment 2', ax = ax)
sns.kdeplot(post_s3[:,3], label = 'Experiment 3', ax = ax)
ax.set_title('RPM', fontsize = 18)
ax.set_xlim((-1, 1))
ax = fig.add_subplot(nrows,ncols,5)
sns.kdeplot(post_s1[:,4], label = 'Experiment 1', ax = ax)
sns.kdeplot(post_s2[:,4], label = 'Experiment 2', ax = ax)
sns.kdeplot(post_s3[:,4], label = 'Experiment 3', ax = ax)
ax.set_title('3-back: discriminability', fontsize = 18)
ax.set_xlim((-1, 1))
ax = fig.add_subplot(nrows,ncols,6)
sns.kdeplot(post_s1[:,5], label = 'Experiment 1', ax = ax)
sns.kdeplot(post_s2[:,5], label = 'Experiment 2', ax = ax)
sns.kdeplot(post_s3[:,5], label = 'Experiment 3', ax = ax)
ax.set_title('3-back: bias', fontsize = 18)
ax.set_xlim((-1, 1))
ax = fig.add_subplot(nrows,ncols,7)
sns.kdeplot(post_s1[:,6], label = 'Experiment 1', ax = ax)
sns.kdeplot(post_s2[:,6], label = 'Experiment 2', ax = ax)
sns.kdeplot(post_s3[:,6], label = 'Experiment 3', ax = ax, color = 'red')
ax.set_title('SSRT', fontsize = 18)
ax.set_xlim((-1, 1))
ax = fig.add_subplot(nrows,ncols,8)
sns.kdeplot(post_s1[:,7], label = 'Experiment 1', ax = ax)
sns.kdeplot(post_s2[:,7], label = 'Experiment 2', ax = ax)
ax.set_title('Stroop: switching costs', fontsize = 18)
ax.set_xlim((-1, 1))
ax = fig.add_subplot(nrows,ncols,9)
sns.kdeplot(post_s3[:,7], label = 'Experiment 3', ax = ax, color = 'red')
ax.set_title('CCT: accuracy', fontsize = 18)
ax.set_xlim((-1, 1))
ax = fig.add_subplot(nrows,ncols,10)
sns.kdeplot(post_s1[:,8], label = 'Experiment 1', ax = ax)
sns.kdeplot(post_s2[:,8], label = 'Experiment 2', ax = ax)
sns.kdeplot(post_s3[:,8], label = 'Experiment 3', ax = ax)
ax.set_title('NEO: Neuroticism', fontsize = 18)
ax.set_xlim((-1, 1))
ax = fig.add_subplot(nrows,ncols,11)
sns.kdeplot(post_s1[:,9], label = 'Experiment 1', ax = ax)
sns.kdeplot(post_s2[:,9], label = 'Experiment 2', ax = ax)
sns.kdeplot(post_s3[:,9], label = 'Experiment 3', ax = ax)
ax.set_title('NEO: Extraversion', fontsize = 18)
ax.set_xlim((-1, 1))
ax = fig.add_subplot(nrows,ncols,12)
sns.kdeplot(post_s1[:,10], label = 'Experiment 1', ax = ax)
sns.kdeplot(post_s2[:,10], label = 'Experiment 2', ax = ax)
sns.kdeplot(post_s3[:,10], label = 'Experiment 3', ax = ax)
ax.set_title('NEO: Openness to Experience', fontsize = 18)
ax.set_xlim((-1, 1))
ax = fig.add_subplot(nrows,ncols,13)
sns.kdeplot(post_s1[:,11], label = 'Experiment 1', ax = ax)
sns.kdeplot(post_s2[:,11], label = 'Experiment 2', ax = ax)
sns.kdeplot(post_s3[:,11], label = 'Experiment 3', ax = ax)
ax.set_title('NEO: Agreeableness', fontsize = 18)
ax.set_xlim((-1, 1))
ax = fig.add_subplot(nrows,ncols,14)
sns.kdeplot(post_s1[:,12], label = 'Experiment 1', ax = ax)
sns.kdeplot(post_s2[:,12], label = 'Experiment 2', ax = ax)
sns.kdeplot(post_s3[:,12], label = 'Experiment 3', ax = ax)
ax.set_title('NEO: Conscientiousness', fontsize = 18)
ax.set_xlim((-1, 1))
ax = fig.add_subplot(nrows,ncols,16)
sns.kdeplot(post_s1[:,13], label = 'Experiment 1', ax = ax)
sns.kdeplot(post_s2[:,13], label = 'Experiment 2', ax = ax)
sns.kdeplot(post_s3[:,13], label = 'Experiment 3', ax = ax)
ax.set_title('RPM x Neuroticism', fontsize = 18)
ax.set_xlim((-1, 1))
ax = fig.add_subplot(nrows,ncols,17)
sns.kdeplot(post_s1[:,14], label = 'Experiment 1', ax = ax)
sns.kdeplot(post_s2[:,14], label = 'Experiment 2', ax = ax)
sns.kdeplot(post_s3[:,14], label = 'Experiment 3', ax = ax)
ax.set_title('RPM x Extraversion', fontsize = 18)
ax.set_xlim((-1, 1))
ax = fig.add_subplot(nrows,ncols,18)
sns.kdeplot(post_s1[:,15], label = 'Experiment 1', ax = ax)
sns.kdeplot(post_s2[:,15], label = 'Experiment 2', ax = ax)
sns.kdeplot(post_s3[:,15], label = 'Experiment 3', ax = ax)
ax.set_title('RPM x Openness to Experience', fontsize = 18)
ax.set_xlim((-1, 1))
ax = fig.add_subplot(nrows,ncols,19)
sns.kdeplot(post_s1[:,16], label = 'Experiment 1', ax = ax)
sns.kdeplot(post_s2[:,16], label = 'Experiment 2', ax = ax)
sns.kdeplot(post_s3[:,16], label = 'Experiment 3', ax = ax)
ax.set_title('RPM x Agreeableness', fontsize = 18)
ax.set_xlim((-1, 1))
ax = fig.add_subplot(nrows,ncols,20)
sns.kdeplot(post_s1[:,17], label = 'Experiment 1', ax = ax)
sns.kdeplot(post_s2[:,17], label = 'Experiment 2', ax = ax)
sns.kdeplot(post_s3[:,17], label = 'Experiment 3', ax = ax)
ax.set_title('RPM x Conscientiousness', fontsize = 18)
ax.set_xlim((-1, 1))
fig.tight_layout()
fig.savefig('figures/FigS1.eps', dpi = 300)
| Data_visualization.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
# # AB browser test
# In this assignment we will:
# * analyze an A/B test run on real Yandex users;
# * confirm or refute the presence of changes in user behavior between the control and experimental (exp) groups;
# * determine the nature of these changes and the practical significance of the introduced change;
# * find out which user group loses / gains the most from the tested change (localize the change).
#
# Data description:
# * userID: unique user identifier
# * browser: the browser that userID used
# * slot: the user's status in the study (exp = saw the modified page, control = saw the unchanged page)
# * n_clicks: the number of clicks the user made over n_queries
# * n_queries: the number of queries userID issued while using the browser browser
# * n_nonclk_queries: the number of the user's queries in which no click was made at all
#
# Note that not everyone uses only one browser, so the userID column contains repeated identifiers. In this dataset the combination of userID and browser is unique.
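# The uniqueness claim is easy to check mechanically. A minimal sketch on a synthetic frame with the same column names (made-up values, not the real file):

```python
import pandas as pd

# Tiny synthetic frame mimicking the schema described above (values are made up)
ab_demo = pd.DataFrame({
    'userID':           [1, 1, 2],
    'browser':          ['Browser #2', 'Browser #4', 'Browser #2'],
    'slot':             ['exp', 'exp', 'control'],
    'n_clicks':         [5, 0, 3],
    'n_queries':        [4, 2, 3],
    'n_nonclk_queries': [1, 2, 1],
})

# userID alone repeats, but each (userID, browser) pair occurs exactly once
assert ab_demo['userID'].duplicated().any()
assert not ab_demo.duplicated(subset=['userID', 'browser']).any()
```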
# +
from __future__ import division
import numpy as np
import pandas as pd
from scipy import stats
from statsmodels.sandbox.stats.multicomp import multipletests
# %matplotlib inline
import matplotlib.pyplot as plt
import seaborn as sns
from IPython.core.interactiveshell import InteractiveShell
InteractiveShell.ast_node_interactivity = "all"
# -
ab_data = pd.read_csv('ab_browser_test.csv')
ab_data.info()
ab_data.head()
#transform 'browser' column to int
ab_data.browser = [int(ab_data.browser[i][9:]) for i in range(ab_data.shape[0])]
ab_data.head(10)
# The main metric we focus on in this work is the number of user clicks on the web page, depending on the tested change to that page.
#
# Let's compute how many more user clicks there are in the exp group compared to the control group, as a percentage of the number of clicks in the control group.
#number of records (user-browser rows) in the exp and control groups
ab_data.slot.value_counts()
# As a first approximation, assume that the number of people in each group is the same.
#indices split by groups
exp = ab_data.slot.loc[ab_data.slot == 'exp'].index
ctrl = ab_data.slot.loc[ab_data.slot == 'control'].index
#assumption error
err = 1 - ab_data.slot.loc[exp].shape[0] / ab_data.slot.loc[ctrl].shape[0]
print('Assumption error: %.4f' % err)
# +
#total number of clicks in each group
exp_cl_num = ab_data.n_clicks.loc[exp].sum()
ctrl_cl_num = ab_data.n_clicks.loc[ctrl].sum()
print('Total number of clicks in each group')
print('Exp: %d' % exp_cl_num)
print('Control: %d' % ctrl_cl_num)
# -
#proportion increase of clicks for exp over control
prop_inc_clicks = (exp_cl_num / ctrl_cl_num - 1) * 100
print('Proportion increase of clicks for exp over control: %.3f%%' % prop_inc_clicks)
# Let's take a closer look at the difference between the two groups (control and exp) in the number of user clicks.
#
# To do this, we use the bootstrap to build 95% confidence intervals for the means and medians of the number of clicks in each of the two groups.
# +
#Clicks mean values
exp_cl_mean = ab_data.n_clicks.loc[exp].mean()
ctrl_cl_mean = ab_data.n_clicks.loc[ctrl].mean()
print('Mean number of clicks in each group')
print('Exp: %.4f' % exp_cl_mean)
print('Control: %.4f' % ctrl_cl_mean)
print('')
#Clicks median values
exp_cl_mean = ab_data.n_clicks.loc[exp].median()
ctrl_cl_mean = ab_data.n_clicks.loc[ctrl].median()
print('Median number of clicks in each group')
print('Exp: %d' % exp_cl_mean)
print('Control: %d' % ctrl_cl_mean)
# -
def get_bootstrap_samples(data, n_samples):
indices = np.random.randint(0, len(data), (n_samples, len(data)))
samples = data[indices]
return samples
def stat_intervals(stat, alpha):
boundaries = np.percentile(stat, [100 * alpha / 2., 100 * (1 - alpha / 2.)])
return boundaries
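# As a quick sanity check, an interval helper like the one above applied to a hand-made array should return the empirical 2.5% and 97.5% percentiles (self-contained sketch, redefining the helper):

```python
import numpy as np

def stat_intervals(stat, alpha):
    # central (1 - alpha) interval from the empirical percentiles
    return np.percentile(stat, [100 * alpha / 2., 100 * (1 - alpha / 2.)])

# On the integers 1..100 the 95% interval bounds fall near the ends of the range
low, high = stat_intervals(np.arange(1, 101), 0.05)
print(low, high)
```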
# +
# %%time
#confidence intervals estimation
np.random.seed(0)
num_of_samples = 500
exp_cl_mean, ctrl_cl_mean = np.empty(num_of_samples), np.empty(num_of_samples)
exp_cl_median, ctrl_cl_median = np.empty(num_of_samples), np.empty(num_of_samples)
ctrl_cl_var = np.empty(num_of_samples)
exp_data = get_bootstrap_samples(ab_data.n_clicks.loc[exp].values, num_of_samples)
ctrl_data = get_bootstrap_samples(ab_data.n_clicks.loc[ctrl].values, num_of_samples)
for i in range(num_of_samples):
exp_cl_mean[i], ctrl_cl_mean[i] = exp_data[i].mean(), ctrl_data[i].mean()
exp_cl_median[i], ctrl_cl_median[i] = np.median(exp_data[i]), np.median(ctrl_data[i])
ctrl_cl_var[i] = ctrl_data[i].var()
# +
delta_mean = exp_cl_mean - ctrl_cl_mean
delta_median = exp_cl_median - ctrl_cl_median
delta_mean_bnd = stat_intervals(delta_mean, 0.05)
delta_median_bnd = stat_intervals(delta_median, 0.05)
print('Conf. int. delta mean: [%.4f, %.4f]' % (delta_mean_bnd[0], delta_mean_bnd[1]))
print('Conf. int. delta median: [%d, %d]' % (delta_median_bnd[0], delta_median_bnd[1]))
print('legend: diff = exp - control')
# -
# Since there is quite a lot of data (about half a million unique users), a difference of a few percent can be not only practically but also statistically significant. The latter claim requires additional verification.
_ = plt.figure(figsize=(15,5))
_ = plt.subplot(121)
_ = plt.hist(ab_data.n_clicks.loc[exp], bins=100)
_ = plt.title('Experiment group')
_ = plt.subplot(122)
_ = plt.hist(ab_data.n_clicks.loc[ctrl], bins=100)
_ = plt.title('Control group')
# Student's t-test has many advantages, which is why it is applied quite often in A/B experiments. Sometimes its use is unjustified because of strong skewness in the data distribution.
# For simplicity, consider the one-sample t-test. For the t-test assumptions to actually hold, we need:
#
# * the sample mean to be normally distributed, N(μ, σ²/n)
# * the unbiased variance estimate, with a scaling coefficient, to follow a chi-square distribution with n−1 degrees of freedom, χ²(n−1)
#
# Both of these assumptions can be checked with the bootstrap. For now we restrict ourselves to the control group, whose click distribution we will call "the data" within this question.
#
# Since we do not know the true distribution of the population, we can apply the bootstrap to understand how the mean and the sample variance are distributed.
#
# To do this:
#
# * draw n_boot_samples pseudo-samples from the data
# * for each of these samples compute the mean and the sum of squared deviations from the sample mean
# * for the resulting vector of n_boot_samples means, build a q-q plot against the normal distribution using scipy.stats.probplot
# * for the resulting vector of sums of squared deviations from the sample mean, build a q-q plot against the chi-square distribution using scipy.stats.probplot
#probability plot for means
_ = stats.probplot(ctrl_cl_mean, plot=plt, rvalue=True)
_ = plt.title('Probability plot for means')
#probability plot for variances
_ = stats.probplot(ctrl_cl_var, plot=plt, dist='chi2', sparams=(ctrl_cl_mean.shape[0]-1), rvalue=True)
_ = plt.title('Probability plot for variances')
# One possible analogue of the t-test that we can use is the Mann-Whitney test. On a fairly broad class of distributions it is asymptotically more efficient than the t-test, and at the same time it does not require parametric assumptions about the shape of the distribution.
#
# Split the sample into two parts corresponding to the control and exp groups. Transform the data so that each user corresponds to the total number of their clicks. Use the Mann-Whitney test to check the hypothesis that the means are equal.
# +
users_nclicks_exp = ab_data.loc[exp].groupby(['userID', 'browser']).sum().loc[:,'n_clicks']
users_nclicks_ctrl = ab_data.loc[ctrl].groupby(['userID', 'browser']).sum().loc[:,'n_clicks']
users_nclicks_exp.head()
users_nclicks_ctrl.head()
# -
stats.mannwhitneyu(users_nclicks_exp, users_nclicks_ctrl, alternative='two-sided')
# Let's check for which browser the difference in the number of clicks between the control and experimental groups is most pronounced.
#
# To do this, apply the Mann-Whitney test between the control and exp groups for each slice (each unique value of the browser column), and apply the Holm-Bonferroni correction for multiple testing with α = 0.05.
# +
browsers_nclicks_exp = ab_data.loc[exp].groupby(['browser', 'userID']).sum().loc[:,'n_clicks']
browsers_nclicks_ctrl = ab_data.loc[ctrl].groupby(['browser', 'userID']).sum().loc[:,'n_clicks']
browsers_nclicks_exp.head()
browsers_nclicks_ctrl.head()
# +
#Unique browsers
browsers = np.unique(ab_data.browser)
print('Unique browsers numbers: ' + str(browsers))
print('')
print('Mann-Whitney rank test without multipletest')
mw_p = np.empty(browsers.shape[0])
for i, br in enumerate(browsers):
print('Browser #%d: ' % br),
_, mw_p[i] = stats.mannwhitneyu(browsers_nclicks_exp.loc[br, :], browsers_nclicks_ctrl.loc[br, :], alternative='two-sided')
print('p-value = %.4f' % mw_p[i])
print('')
print('Mann-Whitney rank test with multipletest')
_, mw_p_corr, _, _ = multipletests(mw_p, alpha = 0.05, method = 'holm')
for i, br in enumerate(browsers):
print('Browser #%d: ' % br),
print('p-value = %.4f' % mw_p_corr[i])
# -
# For each browser in each of the two groups (control and exp), compute the share of queries in which the user did not click at all. This can be done by dividing the sum of n_nonclk_queries by the sum of n_queries. Multiplying this value by 100 gives the percentage of no-click queries, which is easier to interpret.
# +
browsers_nonclk_q_exp = ab_data.loc[exp].groupby(['browser']).sum().loc[:,'n_nonclk_queries']
browsers_q_exp = ab_data.loc[exp].groupby(['browser']).sum().loc[:,'n_queries']
browsers_nonclk_q_pct_exp = browsers_nonclk_q_exp / browsers_q_exp * 100
browsers_nonclk_q_ctrl = ab_data.loc[ctrl].groupby(['browser']).sum().loc[:,'n_nonclk_queries']
browsers_q_ctrl = ab_data.loc[ctrl].groupby(['browser']).sum().loc[:,'n_queries']
browsers_nonclk_q_pct_ctrl = browsers_nonclk_q_ctrl / browsers_q_ctrl * 100
print('Percent of no-click queries: control / experimental groups')
for br in browsers:
    print('Browser #%d' % br),
    print(browsers_nonclk_q_pct_ctrl.loc[browsers_nonclk_q_pct_ctrl.index == br].values),
    print('/'),
    print(browsers_nonclk_q_pct_exp.loc[browsers_nonclk_q_pct_exp.index == br].values)
| 4 Stats for data analysis/Homework/14 test AB browser test/AB browser test.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Tables and units
# + jupyter={"outputs_hidden": false}
import numpy as np
import pandas as pd
from astropy import units as u
from astropy import constants as const
# + jupyter={"outputs_hidden": false}
comet_table = pd.read_csv('./Data/Comets.csv')
# + jupyter={"outputs_hidden": false}
comet_table
# -
# ## `DataFrames` and units - A (not so good) example
#
# * Use `.to_numpy()` to pull data out of `DataFrame`
# * Then add the units
# + jupyter={"outputs_hidden": false}
semi_major = comet_table['Semi_Major_AU'].to_numpy() * u.AU
# + jupyter={"outputs_hidden": false}
semi_major
# + jupyter={"outputs_hidden": false}
semi_major.to(u.km)
# + jupyter={"outputs_hidden": false}
comet_table['Semi_Major_km'] = semi_major.to(u.km)
# + jupyter={"outputs_hidden": false}
comet_table
# -
# #### Pull out column and assign units every time you want to use them.
#
# * Even with dimensionless units (like `Eccentricity`).
# + jupyter={"outputs_hidden": false}
def find_perihelion(semi_major, eccentricity):
result = semi_major * (1.0 - eccentricity)
return result
# + jupyter={"outputs_hidden": false}
my_semi_major = comet_table['Semi_Major_AU'].to_numpy() * u.AU
my_semi_major
# + jupyter={"outputs_hidden": false}
my_ecc = comet_table['Eccentricity'].to_numpy() * u.dimensionless_unscaled
my_ecc
# + jupyter={"outputs_hidden": false}
perihelion_AU = find_perihelion(my_semi_major, my_ecc)
perihelion_AU
# + jupyter={"outputs_hidden": false}
comet_table['Perihelion_AU'] = perihelion_AU
# + jupyter={"outputs_hidden": false}
comet_table
# -
# ##### Save `comet_table` to a file (`.csv`)
# + jupyter={"outputs_hidden": false}
comet_table.to_csv('./Data/Comet_DataFrame.csv', index=False)
# -
# ---
#
# # `DataFrames` and units
#
# * `DataFrames` and units do not play together well
# * Using a `DataFrame` and units requires you to:
# * Pull out column and assign units every time you want to use them.
# * `comet_table['Semi_Major_AU'].to_numpy() * u.AU`
# * Then save your results, without units, back to the table
# * `comet_table['Perihelion_AU'] = perihelion_AU`
# ---
# ---
#
# # Astropy `QTable`
#
# * A `QTable` = a table with units!
# * Does not have the huge number of `.methods` of a `DataFrame`
# * **Only** used by Astronomers
# * Can be easily converted to a `DataFrame`
# + jupyter={"outputs_hidden": false}
from astropy.table import QTable
# + jupyter={"outputs_hidden": false}
comet_table = QTable.read('./Data/Comets.csv', format='ascii.csv')
# + jupyter={"outputs_hidden": false}
comet_table
# -
print(comet_table)
# ### Adding a unit to a column
# + jupyter={"outputs_hidden": false}
comet_table['Semi_Major_AU'].unit = u.AU
# + jupyter={"outputs_hidden": false}
comet_table
# + jupyter={"outputs_hidden": false}
comet_table['Semi_Major_AU']
# + jupyter={"outputs_hidden": false}
comet_table['Semi_Major_AU'].to(u.km)
# + jupyter={"outputs_hidden": false}
comet_table['Semi_Major_AU'].unit
# -
# ### Functions and Tables
# + jupyter={"outputs_hidden": false}
def find_perihelion(semi_major, eccentricity):
result = semi_major * (1.0 - eccentricity)
return result
# + jupyter={"outputs_hidden": false}
find_perihelion(comet_table['Semi_Major_AU'], comet_table['Eccentricity'])
# + jupyter={"outputs_hidden": false}
comet_table['Perihelion'] = find_perihelion(comet_table['Semi_Major_AU'], comet_table['Eccentricity'])
# + jupyter={"outputs_hidden": false}
comet_table
# + jupyter={"outputs_hidden": false}
comet_table['Perihelion'].to(u.km)
# + jupyter={"outputs_hidden": false}
comet_table['Perihelion'].info.format = '.2f'
# + jupyter={"outputs_hidden": false}
comet_table
# + jupyter={"outputs_hidden": false}
comet_table.info()
# + jupyter={"outputs_hidden": false}
for row in comet_table:
    output = f"The comet {row['Name']:9} has a perihelion distance of {row['Perihelion']:.2f}"
    print(output)
# -
# ## `QTable` manipulation and modification
#
# * Does not have the huge number of `.methods` of a `DataFrame`
# * Can do most 'obvious' stuff: slices, sorts, filtering, etc...
# * Documentation: [Astropy Table Modifications](https://het.as.utexas.edu/HET/Software/Astropy-1.0/table/modify_table.html)
# + jupyter={"outputs_hidden": false}
comet_table
# -
comet_table['Name'][0]
# + jupyter={"outputs_hidden": false}
comet_table[0:2]
# + jupyter={"outputs_hidden": false}
comet_table[comet_table['Eccentricity'] < 0.8]
# -
comet_table[comet_table['Eccentricity'] < 0.8]['Name'][-1]
comet_table.rename_column('Name', 'Comet Name')
comet_table
# + jupyter={"outputs_hidden": false}
comet_table.sort('Perihelion')
# + jupyter={"outputs_hidden": false}
comet_table
# + jupyter={"outputs_hidden": false}
comet_table['Comet Name'][0]
# + jupyter={"outputs_hidden": false}
comet_table.sort('Perihelion', reverse=True)
comet_table
# -
# ### Can save `QTables` with all the units info intact (`.ecsv`).
# + jupyter={"outputs_hidden": false}
comet_table.write('./Data/Comet_QTable.ecsv', format='ascii.ecsv', overwrite=True)
# + jupyter={"outputs_hidden": false}
my_new_table = QTable.read('./Data/Comet_QTable.ecsv', format='ascii.ecsv')
# -
my_new_table
# ---
# ## Strange Bug ...
#
# * If you read-in a .ecsv file and sort a column with a unit it sometimes throws an error
# * Workaround - `.info.indices = []`
# * I have NO idea what this is about ...
my_new_table['Perihelion']
# + jupyter={"outputs_hidden": false}
my_new_table.sort('Perihelion')
# -
my_new_table['Perihelion'].info.indices = []
# +
my_new_table.sort('Perihelion')
my_new_table
# -
# ---
#
# ### Can convert `Table` to pandas `DataFrame` - Lose all units info :(
# + jupyter={"outputs_hidden": false}
comet_table_pandas = QTable(comet_table).to_pandas()
# + jupyter={"outputs_hidden": false}
comet_table_pandas
# + [markdown] jupyter={"outputs_hidden": false}
# ### Long Tables
# -
long_table= QTable.read('./Data/Comets_100.csv', format='ascii.csv')
long_table
long_table.show_in_notebook()
| Python_Units_Tables.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: 'Python 3.8.10 64-bit (''.venv'': venv)'
# name: python3
# ---
# In this notebook we use the machine learning classifiers Logistic Regression, Support Vector Machine, and Random Forest
# +
import pandas as pd
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
from matplotlib.gridspec import GridSpec
# %matplotlib inline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split
from sklearn.feature_selection import mutual_info_classif
from imblearn.over_sampling import SMOTE
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold
from sklearn.linear_model import LogisticRegressionCV
from sklearn.metrics import confusion_matrix, roc_curve
from sklearn.metrics import classification_report, roc_auc_score
from sklearn import svm
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.calibration import CalibratedClassifierCV, CalibrationDisplay, calibration_curve
# -
dataset = pd.read_csv('../data/train.csv')
dataset.head()
# +
Xvar_discrete = dataset[['var24', 'var25', 'var27','var40',
'var44','var45','var46','var47',
'var48','var49','var50','var51',
'var52','var53','var54']]
yvar_discrete = dataset['y']
# -
# Estimate mutual information for a discrete target variable
mutual_info_score = mutual_info_classif(Xvar_discrete,yvar_discrete, discrete_features=True)
print('Mutual Info Classif:\n', mutual_info_score)
sorted(zip(Xvar_discrete.columns, mutual_info_score))
dataset['y'].replace(to_replace=[1, 0], value=['contracted', 'not_contracted'], inplace=True)
sns.countplot(x='y',data=dataset, color="#33adff")
plt.grid(True, axis='y')
plt.title("The customer contract a new product")
plt.tight_layout()
plt.show()
dataset['y'].replace(to_replace=['contracted', 'not_contracted'], value=[1, 0], inplace=True)
#Define X and y
features = ['var22','var24','var25','var27','var40',
'var44','var45','var47','var49','var50',
'var51','var52','var53','var54','var63',
'var64','var65','var66','var67','var68']
X = dataset[features]
y = dataset.y
# ### Class imbalance is a common problem in the field of classification
#
# Logistic regression on imbalanced data
#
# Penalizing and re-weighting the likelihood can mitigate class imbalance, but these two
# adjustments alone are often not sufficient.
#
# Here we over-sample the minority class with SMOTE, which generates synthetic minority
# samples by interpolating between neighboring existing ones.
sm = SMOTE(random_state=1234)
X, y = sm.fit_resample(X, y)
y.value_counts()
#Stratified K-Folds Cross-Validation
skf = StratifiedKFold(n_splits=5, shuffle=True, random_state=12345)
skf.get_n_splits(X, y)
for train_index, test_index in skf.split(X, y):
print("TRAIN:", train_index, "TEST:", test_index)
X_train, X_test = X.loc[train_index], X.loc[test_index]
y_train, y_test = y.loc[train_index], y.loc[test_index]
reg = 0.01
logistic_regression_model01 = LogisticRegression(C=1/reg, n_jobs=-1, solver='newton-cg').fit(X_train, y_train)
print (logistic_regression_model01)
y_pred = logistic_regression_model01.predict(X_test)
print('Predicted labels: ', y_pred[0:10])
print('Actual labels:\n',y_test[0:10])
print(classification_report(y_test, y_pred))
scaler = StandardScaler()
X_train_scaler = scaler.fit_transform(X_train)
X_test_scaler = scaler.transform(X_test)
X_train_scaler = pd.DataFrame(X_train_scaler, columns=X_train.columns)
X_train_scaler.head()
X_test_scaler = pd.DataFrame(X_test_scaler, columns=X_test.columns)
X_test_scaler.tail()
logistic_regression_model02 = LogisticRegression(C=1/reg, n_jobs=-1).fit(X_train_scaler, y_train)
print (logistic_regression_model02)
predictions = logistic_regression_model02.predict(X_test_scaler)
print('Predicted labels: ', predictions[0:10])
print('Actual labels:\n',y_test[0:10])
print(classification_report(y_test, predictions))
conf_matrix = confusion_matrix(y_test, predictions)
conf_matrix
class_names=[0,1]
fig, ax = plt.subplots()
tick_marks = np.arange(len(class_names))
plt.xticks(tick_marks, class_names)
plt.yticks(tick_marks, class_names)
sns.heatmap(pd.DataFrame(conf_matrix), annot=True, cmap="Blues" ,fmt='g')
ax.xaxis.set_label_position("top")
plt.title('Confusion matrix', y=1.1)
plt.ylabel('True label')
plt.xlabel('Predicted label')
plt.tight_layout()
plt.show()
y_scores = logistic_regression_model02.predict_proba(X_test_scaler)
print(y_scores)
fpr, tpr, thresholds = roc_curve(y_test, y_scores[:,1])
fig = plt.figure(figsize=(6, 6))
plt.plot([0.0, 1.0], [0.0, 1.0], 'k--')
plt.plot(fpr, tpr)
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.title('ROC Curve')
plt.show()
auc = roc_auc_score(y_test,y_scores[:,1])
print('AUC: ' + str(auc))
# Well-Calibrated Probabilities
#
# The classification model is used to predict the probability that a customer contracts a new product.
# It is desirable that the estimated class probabilities are reflective of the true underlying probability of
# the sample.
#
# That is, the predicted class probability needs to be well-calibrated. To be well-calibrated, the probabilities must effectively reflect the true likelihood of the event of interest.
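# The notion of calibration can be illustrated directly with scikit-learn's `calibration_curve` on synthetic predictions that are calibrated by construction (each label is drawn with exactly its predicted probability):

```python
import numpy as np
from sklearn.calibration import calibration_curve

rng = np.random.default_rng(0)
y_prob = rng.uniform(size=20000)                         # predicted probabilities
y_true = (rng.uniform(size=20000) < y_prob).astype(int)  # labels drawn with those probabilities

frac_pos, mean_pred = calibration_curve(y_true, y_prob, n_bins=5)
# For a well-calibrated model the curve stays close to the diagonal
print(np.round(frac_pos - mean_pred, 2))
```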
# +
# Creating classifiers
logistic_regression_model04 = LogisticRegression(C=1/reg, n_jobs=-1,solver='newton-cg')
svm_model = svm.SVC(C=1.0,probability=True)
rfc_model = RandomForestClassifier()
models_list= [
(logistic_regression_model04, "Logistic"),
(svm_model, "SVC"),
(rfc_model, "Random forest")
]
# +
fig = plt.figure(figsize=(10, 10))
gs = GridSpec(4, 2)
colors = plt.cm.get_cmap("Dark2")
ax_calibration_curve = fig.add_subplot(gs[:2, :2])
calibration_displays = {}
for i, (clf, name) in enumerate(models_list):
clf.fit(X_train, y_train)
display = CalibrationDisplay.from_estimator(
clf,
X_test,
y_test,
n_bins=10,
name=name,
ax=ax_calibration_curve,
color=colors(i),
)
calibration_displays[name] = display
ax_calibration_curve.grid()
ax_calibration_curve.set_title("Calibration plots")
# Add histogram
grid_positions = [(2, 0), (2, 1), (3, 0)]
for i, (_, name) in enumerate(models_list):
row, col = grid_positions[i]
ax = fig.add_subplot(gs[row, col])
ax.hist(
calibration_displays[name].y_prob,
range=(0, 1),
bins=10,
label=name,
color=colors(i),
)
ax.set(title=name, xlabel="Mean predicted probability", ylabel="Count")
plt.tight_layout()
plt.show()
# -
# References
#
# https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegressionCV.html
#
# https://imbalanced-learn.org/dev/references/generated/imblearn.over_sampling.RandomOverSampler.html
#
# https://scikit-learn.org/stable/auto_examples/calibration/plot_compare_calibration.html#sphx-glr-auto-examples-calibration-plot-compare-calibration-py
#
| notebook/notebook02.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## Regular Expression Practice Exercise
# #### Import the random email data file
# +
import pandas as pd
email_data = pd.read_csv('Random Email Dataset.csv')
# -
# #### Display the Email ids
email_data['Email Address']
# #### Find the number of gamail email Ids (ending with @gamail.com)
# +
import re
x = email_data['Email Address']
# to be able to use finditer, we need to pass a string. We use the join function to achieve that.
#Example of the join function
print('||'.join(x))
#here the column Email Address has been converted to a string where each email id is separated by a pipe
# -
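# As an aside: since each address is its own string, the same count can be obtained without the join trick via pandas string methods (a sketch on made-up addresses):

```python
import pandas as pd

emails = pd.Series(['abc@gamail.com', 'k.lee@yahooo.com',
                    'not-an-email', 'xyz@gamail.com'])

# Count the rows that end with @gamail.com
n_gamail = int(emails.str.contains(r'@gamail\.com$', regex=True).sum())
print('Number of gamail email ids:', n_gamail)
```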
#now let us use the join to find out the number of email ids with gamail
pattern1 = re.compile(r'[a-zA-Z0-9_]+@gamail\.com\b') # \b anchors the match at the end of the address
matches = pattern1.finditer(' '.join(x)) # join with spaces so finditer receives a single string
counter = 0
for mat in matches:
counter = counter+1
print('Number of gamail email ids:', counter)
# #### Find the number of yahooo email Ids (ending with @yahooo.com)
# We will follow the same approach.
pattern2 = re.compile(r'[a-zA-Z0-9_]+@yahooo\.com\b') # \b anchors the match at the end of the address
matches = pattern2.finditer(' '.join(x)) # join with spaces so finditer receives a single string
counter = 0
for mat in matches:
counter = counter+1
print('Number of yahooo email ids:', counter)
# #### Find the number of entries that are not email ids (consider the entries that do not have a @ and a .com/.in/.org in them)
# +
pattern3 = re.compile(r'[a-zA-Z0-9_]+@[a-zA-Z0-9_]+\.(com|in|org)\b') # matches the .com/.in/.org domains named in the task
matches = pattern3.finditer(' '.join(x)) # join with spaces so finditer receives a single string
counter = 0
for mat in matches:
counter = counter + 1
#print(mat)
email_ids = counter
print('Number of email ids:', email_ids)
# let us find the total number of non-email data entries
total_entries = len(email_data['Email Address'])
print('Total Number of non email entries:',total_entries-email_ids)
# -
# #### find the total entries that have the pattern 'asd' in them
# +
pattern4 = re.compile(r'asd')
matches = pattern4.finditer(' '.join(x))
counter = 0
for mat in matches:
counter = counter + 1
#print(mat)
print('Number of such patterns:', counter)
# -
# #### find the number of email Ids that start with k
# +
pattern5 = re.compile(r'\b[k][a-zA-Z0-9_]*@[a-zA-Z0-9_]*\.[a-z]{2,4}\b')
matches = pattern5.finditer(' '.join(x))
counter = 0
for mat in matches:
counter = counter + 1
#print(mat)
print('Number of such email Ids:', counter)
# -
| Regex Practice Exercise - Solution.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
# %run startup.py
# + language="javascript"
# $.getScript('./assets/js/ipython_notebook_toc.js')
# -
# # A Decision Tree of Observable Operators
#
# ## Part 4: Grouping, Buffering, Delaying, misc
#
# > source: http://reactivex.io/documentation/operators.html#tree.
# > (transcribed to RxPY 1.5.7, Py2.7 / 2016-12, <NAME>, [axiros](http://www.axiros.com))
#
# **This tree can help you find the ReactiveX Observable operator you’re looking for.**
# See [Part 1](./A Decision Tree of Observable Operators. Part I - Creation.ipynb) for Usage and Output Instructions.
#
# We also require acquaintance with the [marble diagrams](./Marble Diagrams.ipynb) feature of RxPy.
#
# <h2 id="tocheading">Table of Contents</h2>
# <div id="toc"></div>
#
#
# # I want to shift the items emitted by an Observable forward in time before reemitting them
# ## ... **[delay](http://reactivex.io/documentation/operators/delay.html) **
reset_start_time(O.delay)
d = subs(marble_stream('a-b-c|').delay(150).merge(marble_stream('1-2-3|')))
# # I want to transform items and notifications from an Observable into items and reemit them
# ## ... by wrapping them in Notification objects **[materialize](http://reactivex.io/documentation/operators/materialize-dematerialize.html)**
rst(O.materialize)
def pretty(notif):
# this are the interesting attributes:
return 'kind: %(kind)s, value: %(value)s' % ItemGetter(notif)
d = subs(O.from_((1, 2, 3)).materialize().map(lambda x: pretty(x)))
# ### ... which I can then unwrap again with **[dematerialize](http://reactivex.io/documentation/operators/materialize-dematerialize.html)**
# +
rst(O.dematerialize)
d = subs(O.from_((1, 2, 3)).materialize().dematerialize())
header('Dematerializing manually created notifs')
d = subs(O.from_((rx.core.notification.OnNext('foo'), rx.core.notification.OnCompleted())).dematerialize())
# -
# Materializing a sequence can be very handy for performing analysis or logging of a sequence.
# You can unwrap a materialized sequence by applying the Dematerialize extension method.
from rx.testing import dump
d = subs(O.range(1, 3).materialize().dump(name='mydump'))
# # I want to ignore all items emitted by an Observable and only pass along its completed/error notification
# ## ... **[ignore_elements](http://reactivex.io/documentation/operators/ignoreelements.html)**
rst(O.ignore_elements)
d = subs(O.range(0, 10).ignore_elements())
# # I want to mirror an Observable but prefix items to its sequence **[start_with](http://reactivex.io/documentation/operators/startwith.html)**
rst(O.start_with)
d = subs(O.from_(('a', 'b')).start_with(1, 2, 3))
# ## ... only if its sequence is empty **[default_if_empty](http://reactivex.io/documentation/operators/defaultifempty.html)**
rst(O.default_if_empty)
# the default here is to emit a None:
d = subs(O.empty().default_if_empty())
d = subs(O.empty().default_if_empty('hello world'))
# # I want to collect items from an Observable and reemit them as buffers of items **[buffer](http://reactivex.io/documentation/operators/buffer.html)**
#
# Very good intro is [here](http://xgrommx.github.io/rx-book/content/observable/observable_instance_methods/buffer.html)
# Buffer 'closing' means: The buffer is flushed to the subscriber(s), then next buffer is getting filled.
#
# Note: The scheduler used does not seem 100% exact time-wise on the marble streams, but you get the idea.
# +
rst(O.buffer)
header('with closing mapper')
# the simplest one:
print('''Returns an Observable that emits buffers of items it collects from the source Observable. The resulting Observable emits connected, non-overlapping buffers. It emits the current buffer and replaces it with a new buffer whenever the Observable produced by the specified bufferClosingSelector emits an item.''')
xs = marble_stream('1-2-3-4-5-6-7-8-9|')
# defining when to flush the buffer to the subscribers:
cs = marble_stream('---e--e----------|')
print('\nCalling the closer as is:')
d = subs(xs.buffer(closing_mapper=cs))
sleep(2)
print('\nCalling again and again -> equal buffer sizes flushed')
cs = marble_stream('---e|')
d = subs(xs.buffer(closing_mapper=lambda: cs))
# +
rst(title='with buffer closing mapper')
xs = marble_stream('1-2-3-4-5-6-7-8-9|')
# the more '-' the bigger the emitted buffers.
# Called again and again:
cs = marble_stream('------e|')
cs2 = marble_stream('--e|')
print ('Subscribing two times with different buffer sizes')
d = subs(xs.buffer(buffer_closing_mapper=lambda: cs), name='BIIIIIG bufs')
d = subs(xs.buffer(buffer_closing_mapper=lambda: cs2),name='small bufs')
# +
rst(title='with buffer opening mapper')
xs = marble_stream('1-2-3-4-5-6-7-8-9|')
opens = marble_stream('---o|')
d = subs(xs.buffer(buffer_openings=lambda: opens))
# -
rst(title='with buffer opening and closing mapper')
#TODO: behaviour not really understood. Bug?
xs = marble_stream('1-2-3-4-5-6-7-8-9-1-2-3-4-5-6-7-8-9-1-2-3-4-5-6-7-8-9|')
opens = marble_stream('oo---------------------------------------------------|')
closes = marble_stream('-------------------------c|')
d = subs(xs.buffer(buffer_openings=opens, buffer_closing_mapper=lambda: closes))
# ### ... buffering by counts **[buffer_with_count](http://reactivex.io/documentation/operators/buffer.html)**
rst(O.buffer_with_count)
xs = marble_stream('1-2-3-4-5-6-7-8-9-1-2-3-4-5-6-7-8-9|')
d = subs(xs.buffer_with_count(2, skip=5))
# #### ... and take only the last (by count) **[take_last_buffer](http://reactivex.io/documentation/operators/takelast.html)**
rst(O.take_last_buffer)
xs = marble_stream('1-2-3-4-5|')
d = subs(xs.take_last_buffer(2))
# #### ... and take only the first (by time) **[take_with_time](http://reactivex.io/documentation/operators/takelast.html)**
rst(O.take_with_time)
xs = marble_stream('1-2-3-4-5|')
d = subs(xs.take_with_time(310))
# #### ... or only the last (by time) **[take_last_with_time](http://reactivex.io/documentation/operators/takelast.html)**
rst(O.take_last_with_time)
xs = marble_stream('1-2-3-4-5|')
d = subs(xs.take_last_with_time(310))
# # I want to split one Observable into multiple Observables **[window](http://reactivex.io/documentation/operators/window.html)**
#
# Window is similar to Buffer, but rather than emitting packets of items from the source Observable, it emits Observables, each one of which emits a subset of items from the source Observable and then terminates with an onCompleted notification.
#
# Like Buffer, Window has many varieties, each with its own way of subdividing the original Observable into the resulting Observable emissions, each one of which contains a “window” onto the original emitted items. In the terminology of the Window operator, when a window “opens,” this means that a new Observable is emitted and that Observable will begin emitting items emitted by the source Observable. When a window “closes,” this means that the emitted Observable stops emitting items from the source Observable and terminates with an onCompleted notification to its observers.
#
# from: http://www.introtorx.com/Content/v1.0.10621.0/17_SequencesOfCoincidence.html#Window
# > A major difference we see here is that the Window operators can notify you of values from the source as soon as they are produced. The Buffer operators, on the other hand, must wait until the window closes before the values can be notified as an entire list.
# +
rst(O.window_with_count, title="window with count")
wid = 0 # window id
def show_stream(window):
global wid
wid += 1
log('starting new window', wid)
# yes, we can subscribe normally; these are not buffers but observables:
subs(window, name='window id %s' % wid)
src = O.interval(100).take(10).window_with_count(3).map(lambda window: show_stream(window))
d = subs(src, name='outer subscription')
# -
# > It is left to the reader to explore the other window functions offered by RxPY, working similar to buffer:
rst(O.window, title="window")
rst(O.window_with_time, title="window_with_time(self, timespan, timeshift=None, scheduler=None)")
rst(O.window_with_time_or_count, title="window_with_time_or_count(self, timespan, count, scheduler=None)")
# ## ...so that similar items end up on the same Observable **[group_by](http://reactivex.io/documentation/operators/groupby.html)**
#
# The GroupBy operator divides an Observable that emits items into an Observable that emits Observables, each one of which emits some subset of the items from the original source Observable. Which items end up on which Observable is typically decided by a discriminating function that evaluates each item and assigns it a key. All items with the same key are emitted by the same Observable.
#
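# A pull-based sketch of the same grouping idea using only the standard library (a hypothetical helper, not part of RxPY): each key gets its own bucket, just as each key gets its own inner Observable in Rx.

```python
from collections import defaultdict

def group_items(items, key_mapper):
    """Group items into buckets keyed by key_mapper(item)."""
    groups = defaultdict(list)
    for item in items:
        groups[key_mapper(item)].append(item)
    return dict(groups)

# the key codes from the Rx example yield 6 distinct groups
groups = group_items([38, 38, 40, 40, 37, 39, 37, 39, 66, 65], lambda x: x)
print(len(groups))  # -> 6
```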
# +
rst(O.group_by)
keyCode = 'keyCode'
codes = [
{ keyCode: 38}, #// up
{ keyCode: 38}, #// up
{ keyCode: 40}, #// down
{ keyCode: 40}, #// down
{ keyCode: 37}, #// left
{ keyCode: 39}, #// right
{ keyCode: 37}, #// left
{ keyCode: 39}, #// right
{ keyCode: 66}, #// b
{ keyCode: 65} #// a
]
src = O.from_(codes).group_by(
key_mapper = lambda x: x[keyCode], # id of (potentially new) streams
element_mapper = lambda x: x[keyCode] # membership to which stream
)
# we have now 6 streams
src.count().subscribe(lambda total: print ('Total streams:', total))
d = src.subscribe(lambda obs: obs.count().subscribe(lambda x: print ('Count', x)))
# +
rst(O.group_by_until, title='group by (with time intervals)')
src = marble_stream('-(38)-(38)-(40)-(40)-(37)-(39)-(37)-(39)-(66)-(65)-|')
def count(interval):
grouped = src.group_by_until(
key_mapper = lambda x: x, # id of (potentially new) streams
element_mapper = lambda x: x, # membership to which stream
duration_mapper= lambda x: O.timer(interval))
d = grouped.count().subscribe(lambda total: print (
'Distinct elements within %sms: %s' % (interval, total)))
header('grouping interval short')
# now every event is unique, any older stream is forgotten when it occurs:
count(20)
sleep(2)
header('grouping interval medium')
# just enough to detect the directly following duplicates:
count(200)
sleep(2)
header('grouping interval long')
count(1000)
# -
# # I want to retrieve a particular item emitted by an Observable:
# ## ... the first item emitted **[first](http://reactivex.io/documentation/operators/first.html)**
rst(O.first)
d = subs(O.from_((1, 2, 3, 4)).first(lambda x, i: x < 3))
# ## ... the sole item it emitted **[single](http://reactivex.io/documentation/operators/single.html)**
rst(O.single)
# you can also match on the index i:
d = subs(O.from_((1, 2, 3, 4)).single(lambda x, i: (x, i) == (3, 2)))
# ## ... the last item emitted before it completed **[last](http://reactivex.io/documentation/operators/last.html)**
rst(O.last)
d = subs(O.from_((1, 2, 3, 4)).last(lambda x: x < 3))
| notebooks/reactivex.io/Part IV - Grouping, Buffering, Delaying, misc.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import pandas as pd
import seaborn as sns
# command to print plots directly in jupyter
# %matplotlib inline
sns.set() # sets the default seaborn theme
df = pd.read_csv("../data/Library_Usage.csv")
df.columns
# +
import pandas as pd
import seaborn as sns
# command to print plots directly in jupyter
# %matplotlib inline
sns.set() # sets the default seaborn theme
# use sample to generate a random subsample
df = pd.read_csv("../../../data/Library_Usage.csv").sample(n=1000)
sns.relplot(x='Total Checkouts', y='Total Renewals',
hue='Provided Email Address', style='Provided Email Address', kind='scatter',
size='Year Patron Registered', data=df)
# -
import numpy as np  # needed for the median estimator below
sns.relplot(x='Year Patron Registered', y='Total Renewals', kind='line', estimator=np.median, data=df)
pd.Series.median
palette = sns.choose_colorbrewer_palette('qualitative')
# ## Relational plots
# ### Line plots
sns.lineplot(
x='Year Patron Registered', y='Total Checkouts', data=df,
estimator=len
)
# ### Scatter plots
sns.scatterplot(
x='Year Patron Registered', y='Total Checkouts', data=df,
)
# ## Categorical plots
# ### Stripplots
#
# Suitable for one non-metric and one metric variable
sns.stripplot(x='Year Patron Registered', y='Total Checkouts', data=df,)
# ### Boxplots
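# A boxplot works with the same column pairing; a minimal sketch, shown here with a small stand-in frame so it runs on its own (with the Library_Usage sample, pass `df` directly):

```python
import pandas as pd
import seaborn as sns

# stand-in for the Library_Usage sample loaded above
df = pd.DataFrame({'Year Patron Registered': [2010, 2010, 2010, 2015, 2015, 2015],
                   'Total Checkouts': [5, 50, 20, 12, 80, 30]})
ax = sns.boxplot(x='Year Patron Registered', y='Total Checkouts', data=df)
```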
# ### Bar plots
sns.barplot(
x='Year Patron Registered', y='Total Checkouts', data=df, color="steelblue"
)
# ## Distribution plots
# ### Pairplot
sns.pairplot(df)
# ### Histograms
# ## Facet grids
| content/descriptive_statistics/visualizations/examples.files/examples.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ### Statistical tests in Python
#
# Today I'm giving you some data as files in URLs. You will need to load those data into Python. You can do this many different ways, though some require less code than others.
#
# In this exercise, you will need to:
# * Plot some data with error bars
# * * s.d.
# * * s.e.m.
# * * 95 % CI
# * Perform one sample _t_-tests of means against a reference mean
# * Perform two sample _t_-tests of means between two treatments
# * Perform paired _t_-tests of means from before and after treatment
#
# Some modules you will probably need:
# ```
# import matplotlib.pyplot as plt
# import numpy as np
# import pandas as pd
# import scipy
# ```
#
# As you have noticed I'm giving you less and less starter code. This is intentional and is meant for you to practice your Google skills for finding the right modules and example code. Have fun!
#
# Start with your imports...
# +
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import scipy
import scipy.stats as stats
import seaborn as sns
# %matplotlib inline
# -
# #### Data sets
#
# * Data set 1: [https://raw.githubusercontent.com/UWDIRECT/UWDIRECT.github.io/master/Wi18_content/DSMCER/L8.dataset1.txt](https://raw.githubusercontent.com/UWDIRECT/UWDIRECT.github.io/master/Wi18_content/DSMCER/L8.dataset1.txt)
# * Data set 2: [https://raw.githubusercontent.com/UWDIRECT/UWDIRECT.github.io/master/Wi18_content/DSMCER/L8.dataset2.txt](https://raw.githubusercontent.com/UWDIRECT/UWDIRECT.github.io/master/Wi18_content/DSMCER/L8.dataset2.txt)
# * Data set 3: [https://raw.githubusercontent.com/UWDIRECT/UWDIRECT.github.io/master/Wi18_content/DSMCER/L8.dataset3.txt](https://raw.githubusercontent.com/UWDIRECT/UWDIRECT.github.io/master/Wi18_content/DSMCER/L8.dataset3.txt)
#
# Begin by downloading the data sets and loading them into pandas, numpy, or whatevs floats your Python boat.
# +
d1 = pd.read_csv('https://raw.githubusercontent.com/UWDIRECT/UWDIRECT.github.io/master/Wi18_content/DSMCER/L8.dataset1.txt'
, names=['data'])
d1.head()
# -
#
# Let's continue by making three figures (one for each data set) with three panels each. The first panel should plot the data with error bars as the standard deviation. The second panel should show the error bars as the s.e.m. The final panel should show the error bar with the 95% CI. This last panel will be tough and may actually be easier to do later. What kind of plot will you use? Columns? Bars? Boxplot?
#
# Hint: USE A FUNCTION! Not a subtle hint. But you will run essentially the same code for all three data sets so a function makes sense, riiiight?!
#
# Create the function:
def pointplot(data, label='d1'):
mean = data.mean().values[0]
std = data.std().values[0]
sem = data.sem().values[0]
n = data.count().values[0]
    h = sem * stats.t.ppf(0.975, n - 1)  # two-sided 95% CI uses the upper 2.5% tail
err = [std, sem, h]
fig, ax = plt.subplots(nrows=1, ncols=3, figsize=(9, 5))
x = 1
y = data.values
w = 0.1
xvals = x + np.random.choice(np.linspace(-w, w, n+1), replace=False, size=n)
for i in range(3):
ax[i].bar(x,
height=mean,
yerr=err[i], # error bars
capsize=12, # error bar cap width in points
width=w, # bar width
tick_label=[label],
alpha=0)
ax[i].scatter(xvals, y, color='b', s=100)
ax[i].set_xlim([0.75, 1.25])
ax[i].set_ylim([0, 80])
# Call it on data set 1
pointplot(d1, label='d1')
# Call it on data set 2
d2 = pd.read_csv('https://raw.githubusercontent.com/UWDIRECT/UWDIRECT.github.io/master/Wi18_content/DSMCER/L8.dataset2.txt'
, names=['data'])
pointplot(d2, label='d2')
# Call it on data set 3
d3 = pd.read_csv('https://raw.githubusercontent.com/UWDIRECT/UWDIRECT.github.io/master/Wi18_content/DSMCER/L8.dataset3.txt'
, names=['data'])
pointplot(d3, label='d3')
# #### Great. Now let's start doing some hypothesis testing in Python.
#
# ##### The one sided _t_-test of means.
# You have reason to believe that all the data sets, which were obtained using the same experimental method, but under different conditions, could be compared to the published literature value of the mean. The value you find in the literature is **42.0**. Perform a statistical test to determine the test statistic and _p_-value that compares each of the three datasets to this reference value.
#
for datasetA in [d1, d2, d3]:
[t, p] = stats.ttest_1samp(datasetA, 42)
print(p)
#
# Were any significant? How did you know?
# Type $\alpha^{2}$ with LaTeX
# $\alpha^{2}$
# ##### The two sided _t_-test of means.
# Now you want to investigate if the three different treatments' means are similar to each other. Perform pairwise statistical tests of the means. Do this using whatever language constructs work best for you (for loops, list comprehensions, ...).
#
# Find a way to present these data to a journal article reader. A table? A figure?
for datasetA in [d1, d2, d3]:
for datasetB in [d1, d2, d3]:
[t, p] = stats.ttest_ind(datasetA, datasetB)
print(p)
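# One way to present the pairwise results to a reader is a labeled p-value matrix (a sketch with synthetic stand-in samples; in the notebook, build it from `d1['data']`, `d2['data']`, `d3['data']` instead):

```python
import numpy as np
import pandas as pd
from scipy import stats

rng = np.random.default_rng(0)
# stand-in samples; replace with the three course data sets
samples = {'d1': rng.normal(42, 5, 50),
           'd2': rng.normal(45, 5, 50),
           'd3': rng.normal(45, 5, 50)}
names = list(samples)
# pairwise two-sample t-test p-values, rows vs columns
pmat = pd.DataFrame(
    [[stats.ttest_ind(samples[a], samples[b]).pvalue for b in names]
     for a in names],
    index=names, columns=names)
print(pmat.round(3))
```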
# ##### Paired test of means.
#
# Now you learn that data set 2 and data set 3 are paired. That is, the same lab samples were used with data set 2 being before treatment with some compound and data set 3 after treatment. Perform a statistical test to see if the treatment had a statistically significant impact on the lab samples.
[t, p] = stats.ttest_rel(d2, d3)
print(p)
# What is your conclusion about the treatment?
| L5_Hypothesis_testing_filled.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: lsst
# language: python
# name: lsst
# ---
# # 3 Tile tract and patch lists
#
# In this notebook we will generate the list of patches that are present on a given VISTA tile. We will do so such that the tile is completely covered; that is, we aim to provide every patch that contains any region of the tile.
# +
from astropy.table import Table, Column, vstack # Perhaps we should use LSST tables
import numpy as np
import json
import matplotlib as mpl
import matplotlib.pyplot as plt
# %matplotlib inline
# -
VIDEO_IN = './data/video_images_overview_20200820.csv'
VHS_IN = './data/vhs_images_overview_20201102.csv'
VIKING_IN = './data/viking_images_overview_20201218.csv'
HSC_IN = './data/hsc_images_overview_20210315.csv'
video_ims = Table.read(VIDEO_IN)
vhs_ims = Table.read(VHS_IN)
viking_ims = Table.read(VIKING_IN)
hsc_ims = Table.read(HSC_IN)
len(vhs_ims),len(video_ims),len(viking_ims),len(hsc_ims)
def fileToType(filename):
filetype = ''
types = {
'tile':'_tl.fit',
'stack':'_st.fit',
}
for k,v in types.items():
#print(k,v)
if filename.endswith(v):
filetype = k
return filetype
video_ims['type'] = [fileToType(f) for f in video_ims['file']]
vhs_ims['type'] = [fileToType(f) for f in vhs_ims['file']]
viking_ims['type'] = [fileToType(f) for f in viking_ims['file']]
video_ims = video_ims[video_ims['type']=='tile']
vhs_ims = vhs_ims[vhs_ims['type']=='tile']
viking_ims = viking_ims[viking_ims['type']=='tile']
len(vhs_ims),len(video_ims),len(viking_ims)
# +
#We are using a rings skymap
from lsst.geom import SpherePoint
from lsst.geom import degrees
from lsst.skymap.ringsSkyMap import RingsSkyMap, RingsSkyMapConfig
#Following taken from dmu1/2_Survey_comparisons.ipynb test choice
coord = SpherePoint(35.429025*degrees,-4.90853*degrees)
config = RingsSkyMapConfig()
#These config options are chose to be the same as HSC:
#https://github.com/lsst/obs_subaru/blob/master/config/hsc/makeSkyMap.py
#and copied for obs_vista
#https://github.com/lsst-uk/obs_vista/blob/master/config/makeSkyMap.py
config.numRings = 120
config.projection = "TAN"
config.tractOverlap = 1.0/60 # Overlap between tracts (degrees)
config.pixelScale = 0.168
sm = RingsSkyMap(config)
sm.findTract(coord)
# +
## Test on known tile
# -
#Tile from dmu4/dmu4_Example
EX_TILE = "20121122/v20121122_00088_st_tl.fit"
ex_row = video_ims[
video_ims['file'] == "/home/ir-shir1/rds/rds-iris-ip005/data/private/VISTA/VIDEO/"+EX_TILE
][0]
ex_row
ex_row['ra_0_0']
patches = sm.findTractPatchList(
[
SpherePoint(ex_row['ra_0_0']*degrees,ex_row['dec_0_0']*degrees),
SpherePoint(ex_row['ra_0_y']*degrees,ex_row['dec_0_y']*degrees),
SpherePoint(ex_row['ra_x_0']*degrees,ex_row['dec_x_0']*degrees),
SpherePoint(ex_row['ra_x_y']*degrees,ex_row['dec_x_y']*degrees)
]
)
t = patches[0][0]
# +
tp_dict ={}
for tract in patches:
#print(tract[0].getId())
#tp_dict[tract[0].getId()] = [[t.getIndex()[0],t.getIndex()[1]] for t in tract[1]]
tp_dict[int(tract[0].getId())] = [t.getIndex() for t in tract[1]]
j = json.dumps(tp_dict, separators=(',', ':'))
j
# -
def corners_to_patch_list(ex_row):
"""Take a ra dec limited region and return a list of all patches contained witing it
Inputs
=======
ex_row astropy.table.row
Row of image overview table containing corner columns
Returns
=======
j str
String which can be loaded by json to create a dictionary
the tracts indices are strings as required by json
"""
try:
patches = sm.findTractPatchList(
[
SpherePoint(ex_row['ra_0_0']*degrees,ex_row['dec_0_0']*degrees),
SpherePoint(ex_row['ra_0_y']*degrees,ex_row['dec_0_y']*degrees),
SpherePoint(ex_row['ra_x_0']*degrees,ex_row['dec_x_0']*degrees),
SpherePoint(ex_row['ra_x_y']*degrees,ex_row['dec_x_y']*degrees)
]
)
tp_dict ={}
for tract in patches:
tp_dict[int(tract[0].getId())] = [t.getIndex() for t in tract[1]]
j = json.dumps(tp_dict, separators=(',', ':'))
    except Exception:
#print(ex_row['file']+" failed")
j=''
return j
corners_to_patch_list(video_ims[0])
video_ims[0]
# +
# !mkdir figs
fig, ax = plt.subplots(figsize=(4, 4), dpi=140)
test = json.loads(corners_to_patch_list(video_ims[0]))
for tract in test:
t = sm.generateTract(int(tract))
for patch in test[tract]:
p = patch
vertices = t.getPatchInfo([p[0], p[1]]).getInnerSkyPolygon(t.getWcs()).getVertices()
ra = [np.arctan(vertices[n][1]/vertices[n][0])* 180/np.pi for n in np.mod(np.arange(5),4)]
#print(tract, ra)
dec = [(vertices[n][2])* 180/np.pi for n in np.mod(np.arange(5),4)]
ax.fill(ra, dec, c = 'r', alpha=0.3, linewidth=0.5)
t =video_ims[0]
ra = [t['ra_0_0'], t['ra_x_0'] , t['ra_x_y'] , t['ra_0_y'] , t['ra_0_0'] ]
dec = [t['dec_0_0'], t['dec_x_0'] , t['dec_x_y'] , t['dec_0_y'] , t['dec_0_0'] ]
ax.plot(ra, dec, c = 'b', alpha=0.3, linewidth=1.0)
ax.axis('scaled')
#ax.set_xlim([39, 33])
ax.set_xlabel('Right Ascension [deg]')
ax.set_ylabel('Declination [deg]')
fig.savefig('./figs/test_im.pdf')  # savefig overwrites by default; it has no overwrite kwarg
fig.savefig('./figs/test_im.png')
# -
video_ims['tract_patch_json'] = [corners_to_patch_list(row) for row in video_ims]
vhs_ims['tract_patch_json'] = [corners_to_patch_list(row) for row in vhs_ims]
viking_ims['tract_patch_json'] = [corners_to_patch_list(row) for row in viking_ims]
#corners_to_patch_list(video_ims[0])
video_ims[0]
video_ims[6]['tract_patch_json']
#video_ims.write(VIDEO_IN.replace('images','tiles_tracts_patches'),overwrite=True)
vhs_ims.write(VHS_IN.replace('images','tiles_tracts_patches'),overwrite=True)
viking_ims.write(VIKING_IN.replace('images','tiles_tracts_patches'),overwrite=True)
video_ims = Table.read(VIDEO_IN.replace('images','tiles_tracts_patches'))
video_ims.add_column(Column(
data= [t.split('/')[-2] for t in video_ims['file']],
name='date'))
# +
tract = 8524
patch = [3,5]
#has_tract_patch = (
# [str(tract) in j for j in video_ims['tract_patch_json']]
# and [patch in j for j in video_ims['tract_patch_json']]
#)
# -
def has_patch(tract, patch, j):
"""Take a json string and return true if a given patch is in it
"""
j = json.loads(j)
try:
has_patch = patch in j[str(tract)]
except KeyError:
has_patch = False
return has_patch
import json
j = json.loads(video_ims[6]['tract_patch_json'])
mask = [has_patch(tract, patch, j) for j in video_ims['tract_patch_json']]
mask &= video_ims['filter'] =='Y'
np.sum(mask)
len(np.unique(video_ims[mask]['date']))
s = "makeCoaddTempExp.py data --rerun coadd"
for date in np.unique(video_ims[mask]['date'])[:5]:
d = date[0:4]+'-'+date[4:6]+'-'+date[6:9]#+'^'
s+=" --selectId filter=VISTA-Y dateObs={} ".format(d)
s + "--id filter=VISTA-Y tract=8524 patch=3,5 --clobber-config"
has_patch(8766, [2,0], video_ims['tract_patch_json'][0])
video_ims.colnames
patch
has_8524_33 = np.array([has_patch(8524, [3,3], j) for j in video_ims['tract_patch_json']])
has_8524_55 = np.array([has_patch(8524, [5,5], j) for j in video_ims['tract_patch_json']])
np.sum(has_8524_33 & has_8524_55)
np.unique(video_ims[has_8524_55 ]['filter'])
# ## 2. Make total tract_patch_json dicts
# +
video_total_patch_dict = {}
vhs_total_patch_dict = {}
viking_total_patch_dict = {}
for tile in video_ims[video_ims['type']=='tile']:
tile_dict = json.loads(tile['tract_patch_json'])
for tract in tile_dict:
patches = set([str(p) for p in tile_dict[tract]])
try:
video_total_patch_dict[tract] = set(video_total_patch_dict[tract]).union(patches)
except KeyError:
video_total_patch_dict[tract] = patches
for tile in vhs_ims[vhs_ims['type']=='tile']:
tile_dict = json.loads(tile['tract_patch_json'])
for tract in tile_dict:
patches = set([str(p) for p in tile_dict[tract]])
try:
vhs_total_patch_dict[tract] = set(vhs_total_patch_dict[tract]).union(patches)
except KeyError:
vhs_total_patch_dict[tract] = patches
for tile in viking_ims[viking_ims['type']=='tile']:
tile_dict = json.loads(tile['tract_patch_json'])
for tract in tile_dict:
patches = set([str(p) for p in tile_dict[tract]])
try:
viking_total_patch_dict[tract] = set(viking_total_patch_dict[tract]).union(patches)
except KeyError:
viking_total_patch_dict[tract] = patches
for tract in video_total_patch_dict:
video_total_patch_dict[tract] = [[int(p[1]),int(p[4])] for p in video_total_patch_dict[tract]]
for tract in vhs_total_patch_dict:
vhs_total_patch_dict[tract] = [[int(p[1]),int(p[4])] for p in vhs_total_patch_dict[tract]]
for tract in viking_total_patch_dict:
viking_total_patch_dict[tract] = [[int(p[1]),int(p[4])] for p in viking_total_patch_dict[tract]]
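# Note the string round-trip above (`str(p)` then `int(p[1]), int(p[4])`) only works for single-digit patch indices. A sketch of the same merge using tuples as set elements, which also handles indices of 10 and above:

```python
import json

# two hypothetical per-tile tract->patch JSON strings, as produced by corners_to_patch_list
tile_jsons = ['{"8524":[[3,5],[3,6]]}', '{"8524":[[3,6],[10,2]]}']
total = {}
for j in tile_jsons:
    for tract, patches in json.loads(j).items():
        # tuples are hashable, so the set deduplicates patches directly
        total.setdefault(tract, set()).update(map(tuple, patches))
# back to JSON-friendly lists of [x, y] pairs
total = {t: sorted(map(list, ps)) for t, ps in total.items()}
print(total)  # -> {'8524': [[3, 5], [3, 6], [10, 2]]}
```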
# +
n = 0
for tract in vhs_total_patch_dict:
n += len(vhs_total_patch_dict[tract])
print("There are {} patches in VHS.".format(n))
n = 0
for tract in video_total_patch_dict:
n += len(video_total_patch_dict[tract])
print("There are {} patches in VIDEO.".format(n))
n = 0
for tract in viking_total_patch_dict:
n += len(viking_total_patch_dict[tract])
print("There are {} patches in VIKING.".format(n))
# -
with open('./json/video_total_patch_dict.json', 'w') as outfile:
json.dump(video_total_patch_dict, outfile, separators=(',', ':'))
with open('./json/vhs_total_patch_dict.json', 'w') as outfile:
json.dump(vhs_total_patch_dict, outfile, separators=(',', ':'))
with open('./json/viking_total_patch_dict.json', 'w') as outfile:
json.dump(viking_total_patch_dict, outfile, separators=(',', ':'))
# ### HSC
hsc_ims['tract'] = [f.split('/')[16] for f in hsc_ims['file']]
hsc_ims['patch'] = [f.split('/')[17] for f in hsc_ims['file']]
hsc_ims['depth'] = [f.split('/')[13] for f in hsc_ims['file']]
hsc_ims[0]
# +
hsc_total_patch_dict = {}
for file in hsc_ims:
tract = file['tract']
patch = set(["[{}]".format(file['patch'])])
try:
hsc_total_patch_dict[tract] = set(hsc_total_patch_dict[tract]).union(patch)
except KeyError:
hsc_total_patch_dict[tract] = patch
for tract in hsc_total_patch_dict:
hsc_total_patch_dict[tract] = [[int(p[1]),int(p[3])] for p in hsc_total_patch_dict[tract]]
# +
#json.dumps(hsc_total_patch_dict, separators=(',', ':'))
# -
with open('./json/hsc_total_patch_dict.json', 'w') as outfile:
json.dump(hsc_total_patch_dict, outfile, separators=(',', ':'))
| dmu1/3_Tile_tracts_patches.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Lab 3 - Asking a Statistical Question
# ##### PHYS434 - Advanced Laboratory: Computational Data Analysis
# ##### Professor: <NAME>
# <br>
# ##### Due date: 10/23/2021
# ##### By <NAME>
# <br>
# This week we are going to concentrate on asking a statistical question. This process almost always consists of 3+ steps:
# 1. Writing down in words very precisely what question you are trying to ask.
# 2. Translating the precise English question into a mathematical expression. This often includes determining the pdf of the background (possibly including trials) and the integral to compute to obtain a probability.
# 3. Converting the probability into an equivalent sigma
#
#
# +
# Importing needed libraries
import numpy as np
import matplotlib
import matplotlib.pyplot as plt
import scipy
from scipy import stats, signal
from astropy import units as u
# This sets the size of the plot to something useful
plt.rcParams["figure.figsize"] = (15,10)
# This sets the fontsize of the x- and y-labels
fsize = 30
lsize = 24
# -
# ## Problem 1
# In our first example we are looking at the temperature reading (meta-data) associated with an experiment. For the experiment to work reliably, the temperature should be at around 12 Kelvin, and if we look at the data it is mostly consistent with 12 Kelvin to within the 0.4 degree precision of the thermometry and the thermal control system (standard deviation). However, there are times when the thermal control system misbehaved and the temperature was not near 12 K, and in addition there are various glitches in the thermometry that give anomalously high and low readings (the reading does not match the real temperature). We definitely want to identify and throw out all the data when the thermal control system was not working (and the temperature was truly off from nominal). While it is possible to have an error in the thermometry such that the true temperature was fine, and we just had a wonky reading, in an abundance of caution we want to throw those values out too.
d = np.append(stats.norm.rvs(loc = 12., scale = 0.4, size = 100000), [10., 10.3, 2.1, 0., 0., 15.6, 22.3, 12.7])
fig, ax = plt.subplots(1, 1)
ax.hist(d,100, density=True)
plt.tick_params(labelsize = 24)
plt.yscale('log')
ax.set_xlabel('Temperature (K)', fontsize = fsize)
ax.set_ylabel('Probability Density', fontsize = fsize)
ax.set_title('Temperature Distribution', fontsize = fsize, fontweight = 'bold')
plt.show()
# ## A)
# ### 1.
# Let's play around with the data and come up with criteria for throwing out certain data points.
x = np.linspace(10, 14, 1000)
d2 = stats.norm.pdf(x, loc = 12., scale = 0.4)
fig, ax = plt.subplots(1, 1)
ax.hist(d,100, density=True)
ax.plot(x, d2, linewidth = 3)
plt.tick_params(labelsize = 24)
plt.yscale('log')
ax.set_xlabel('Temperature (K)', fontsize = fsize)
ax.set_ylabel('Probability Density', fontsize = fsize)
ax.set_title('Temperature Distribution', fontsize = fsize, fontweight = 'bold')
plt.show()
fig, ax = plt.subplots(1, 1)
ax.hist(d,100, density=True)
ax.plot(x, d2, linewidth = 3)
ax.vlines(10, 5e-6, 1e0, color='r', linestyle = '--')
ax.vlines(14, 5e-6, 1e0, color='r', linestyle = '--')
plt.tick_params(labelsize = 24)
plt.yscale('log')
ax.set_xlabel('Temperature (K)', fontsize = fsize)
ax.set_ylabel('Probability Density', fontsize = fsize)
ax.set_title('Temperature Distribution', fontsize = fsize, fontweight = 'bold')
plt.show()
# Let's suggest boundaries of 10 K and 14 K ($\pm 2$ K, i.e. $\pm 5\sigma$, on each side of the mean) to flag 'bad' data points - essentially setting these as thresholds the data must fall between.
# ### 2.
# If we take the survival function of 14 under our pdf we get the following probability and sigma:
norm_dist = stats.norm(loc = 12., scale = 0.4)
prob = norm_dist.sf(14)
sigma = round(-stats.norm.ppf(prob, loc=0, scale=1), 4)
sigma
# This seems to be a good threshold for our data - if a data value lies more than five sigma from the mean of the distribution, we will throw the data point away. 5 sigma sits just outside the bulk of our distribution and excludes the data points that are outliers.
#
# Then, our statistical question becomes:
#
# _Is the probability of getting the data point in our distribution smaller than $5\sigma$?_
#
# If this is the case, we will throw out the data point.
# ### 3.
# We now restate our question in mathematical terms for a data point with value $V$:
def exclude_data(dist, V, sigma):
'''
Returns True if data point should be thrown out,
False if it should be kept.
'''
Vprob = dist.sf(V)
Vsigma = -stats.norm.ppf(Vprob, loc=0, scale=1)
if abs(Vsigma) > sigma:
exclude = True
else:
exclude = False
return exclude
# We run this in a loop and get:
# + tags=[]
included_array = []
excluded_array = []
for item in d:
if exclude_data(norm_dist, item, sigma):
excluded_array.append(item)
else:
included_array.append(item)
print(f'Excluded: {excluded_array}')
# -
# ### 4.
# Reminder: Our 'bad' data points are {10., 10.3, 2.1, 0., 0., 15.6, 22.3, 12.7}
bad_data = [10., 10.3, 2.1, 0., 0., 15.6, 22.3, 12.7]
kept_bad_data = []
for i in included_array:
for k in bad_data:
if i == k:
kept_bad_data.append(i)
bad_data, kept_bad_data
len(d) - len(bad_data), len(bad_data)
len(included_array), len(excluded_array), len(excluded_array) + len(included_array), len(kept_bad_data)
# We construct a truth table showing our results from above:
#
# | | **True T** | **Bad T** |
# | --- | --- | --- |
# | Test Include | 100000 | 3 |
# | Test Exclude | 0 | 5 |
# | Total | 100000 | 8 |
# ## B)
# Now, we evaluate how the omissions (throwing out 'good' data) depend on the threshold (sigma) I chose above.
#
# Since the test does not omit any good data at my threshold of $ 5\sigma $, raising the threshold (to a larger sigma) changes nothing. However, if we decreased the threshold so that the 'inclusion zone' became narrower than the spread of the background distribution itself, the test would start excluding good data points.
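# A quick numerical sketch of this trade-off: the fraction of genuinely good readings that a two-sided $ k\sigma $ cut would omit, for a few thresholds (synthetic draws from the same N(12, 0.4) background):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
good = rng.normal(12.0, 0.4, 100_000)
for k in (2, 3, 5):
    lo, hi = 12.0 - k * 0.4, 12.0 + k * 0.4
    frac = np.mean((good < lo) | (good > hi))
    # expected two-sided tail mass is 2 * sf(k) for a standard normal
    print(f'{k} sigma cut omits {frac:.4%} of good data '
          f'(expected {2 * stats.norm.sf(k):.4%})')
```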
# ## C)
# Some 'bad' data points still make it into my final distribution even after the statistical test. They lie within the background distribution itself, so my test keeps them: they fall inside the 'inclusion zone' defined by my threshold of $ \pm \: 5\sigma $.
#
# There is no way to change my threshold - effectively the width of the inclusion zone - that would exclude these remaining bad points without also excluding good data.
# ## Problem 2
# In this example we will be looking for asteroids. If we look at the alignment of stars on subsequent images, they don't perfectly align due to atmospheric and instrumental effects (even ignoring proper motion). The resulting distribution is two-dimensional, and for this lab let's assume it is a 2D Gaussian with 1 arcsecond RMS. Or said another way, if I histogram how far all the (stationary) stars appear to have moved I get something like:
a = np.vstack((stats.norm.rvs( scale = 1, size = 100000), stats.norm.rvs( scale = 1, size = 100000)))
a.shape
fig, ax = plt.subplots(1, 1)
h = ax.hist2d(a[0,:],a[1,:],bins=100, density=True);
ax.set_aspect('equal', 'box')
plt.xlim([-3 , 3])
plt.ylim([-3 , 3])
plt.title("2D Histogram of positional uncertainty", fontsize = 24)
plt.ylabel(r"$\Delta$y arcseconds", fontsize = 18)
plt.xlabel(r"$\Delta$x arcseconds", fontsize = 18)
plt.colorbar(h[3], ax=ax);
# If I have a potential asteroid, it will have some true movement between the images. We would like a '5 sigma' detection of movement. What is that distance in arcseconds?
# ## 1.
# We know that the radial distance in our 2D Gaussian follows a Rayleigh distribution: if each Gaussian component has standard deviation $ \sigma $, the resulting Rayleigh distribution has scale parameter $ \sigma $.
#
# Let's state our statistical question in words:
#
# _What is the distance in arcseconds such that the integral of the Rayleigh distribution from that distance to infinity corresponds to a 5 'sigma' probability?_
# ## 2.
# For a value V, Rayleigh distribution of $ R(x) $ and standard normal distribution of $ N(x) $:
#
# $$ \int_{V}^{\infty}{ R(x) dx} = \int_{5\sigma}^{\infty}{ N(x) dx} $$
#
# Then, we evaluate the Rayleigh inverse survival function, $isf()$, at that probability to find the value of V.
#
# (Thus, essentially, our mathematical question asks what is the value of V that makes this equation true.)
# ## 3.
prob_5sigma = 1/(3.5e6)
sigma_gaussian = 1
sigma_rayleigh = sigma_gaussian  # the Rayleigh scale parameter equals the Gaussian sigma
rayleigh = stats.rayleigh(scale = sigma_rayleigh)
det = rayleigh.isf(prob_5sigma)
det
# stats.norm.isf(rayleigh.sf(prob_5sigma))
print(f'This means that the detection of movement of 5 sigma corresponds to {det} arcseconds')
# ## Problem 3
# As we discussed in class, one of the key backgrounds for gamma-ray telescopes is cosmic rays. Cosmic rays are charged particles, usually protons or electrons, but can include atomic nuclei such as alpha particles (helium) or iron. Because of their charge, cosmic rays spiral in the magnetic field of the galaxy. From the perspective of the Earth they appear to be coming uniformly from all directions like a high energy gas, and the direction the cosmic ray is travelling when it reaches the Earth tells us nothing about where it came from because we don't know what tortured path it has taken through the galaxy to reach us. However, at trillion electron volt energies and above, the spiral loops are fairly big and the sun and the moon will block cosmic rays. This means the sun and the moon appear as holes in the cosmic ray sky (cosmic rays from that direction are absorbed).
#
# Assume in a moon-sized patch on the sky we normally have a cosmic ray rate of 1 cosmic ray per minute (arrivals are random in time). We observe where the moon is for 8 hours per night (not too close to the horizon) and we observe for 15 days and see 6800 cosmic rays. Let's find the significance of our moon shadow detection.
# ## 1.
# We assume the cosmic rays to follow a Poisson distribution, since we are dealing with rates of events (from cosmic rays).
# In this problem, we are not dealing with trials since there is no look-elsewhere effect - we are not looking for the 'brightest' candidate among our signals. Rather, we are adding our exposures together to extend the observing time. Thus, we are convolving our one-minute distribution 7200 times (see cell below). However, we know that the convolution of Poisson distributions is a Poisson distribution whose mean is the sum of the individual means.
(8 * u.hour * 15).to(u.min)/u.min  # 8 hours per night for 15 nights, in minutes
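A quick numerical sketch of the additivity claim above (small made-up means for speed, not the lab's rates): the sample-wise sum of two Poisson variables matches a single Poisson with the summed mean.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# 200,000 draws of X ~ Poisson(3) plus Y ~ Poisson(4); the sum should be Poisson(7)
summed = rng.poisson(3.0, 200_000) + rng.poisson(4.0, 200_000)

empirical = (summed == 7).mean()
theoretical = stats.poisson.pmf(7, 3.0 + 4.0)
print(empirical, theoretical)  # the two should agree closely
```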
# We state our statistical question:
#
# _What is the probability that the "normally" occurring cosmic ray background - a Poisson distribution with mean 7200 - produces a signal of 6800 cosmic rays?_
# ## 2.
# We will let $S = 6800$.
# We start by showing the background for an 8 hour exposure (1 night).
# + tags=[]
N = 7200
trials = 1
mu = 1
resolution = 1
background = stats.poisson(mu*N)
xmin, xmax = (6000, 8000)
x = np.arange(xmin, xmax+1, resolution)
cx = np.arange(xmin, xmax+1, resolution/N)
# cxstairs = (np.arange(xmin, xmax+1+0.5*resolution/N, resolution) - 0.5*resolution/N)/N
cxstairs = (np.arange(xmin, xmax+1+0.5*resolution, resolution) - 0.5*resolution)
# -
fig, ax = plt.subplots(1, 1)
plt.tick_params(labelsize = lsize/2)
ax.stairs(background.pmf(x), cxstairs, fill=True)
ax.set_xlim([6500, 7900])
ax.set_xlabel('N cosmic rays', fontsize = fsize)
ax.set_ylabel('Probability Mass', fontsize = fsize)
ax.set_title('15 days of 8-hour exposures', fontsize = fsize, fontweight = 'bold')
plt.show()
fig, ax = plt.subplots(1, 1)
plt.tick_params(labelsize = lsize/2)
ax.stairs(background.pmf(x), cxstairs, fill=True)
ax.set_xlim([6500, 7900])
ax.set_ylim([1e-21, 1e-2])
ax.set_xlabel('N cosmic rays', fontsize = fsize)
ax.set_ylabel('Probability Mass', fontsize = fsize)
ax.set_title('15 days of 8-hour exposures', fontsize = fsize, fontweight = 'bold')
ax.set_yscale('log')
plt.show()
# This is the $pmf()$ of the background (the Poisson distribution is discrete).
Y = 6800
Y
# Let's describe the integral that we need to do for a 6800 cosmic ray detection.
#
# Since this value is smaller than the mean of the distribution $\mu$, we need to integrate from the left ($-\infty$) up to our value $Y = 6800$. Our integral equation then becomes:
#
# $$ \int_{-\infty}^{Y}{ P(x) dx} = \int_{\sigma}^{\infty}{ N(x) dx} $$
prob_moon = (background.cdf(Y)) # We have to integrate from the left, since we are observing a deviation from the normal **less than** the mean
prob_moon
print(f'The probability of detecting 6800 cosmic rays in our observation is {prob_moon:.2e}.')
# ## 3.
sigma_moon = abs(stats.norm.ppf(prob_moon))
print(f'The sigma of our detection is {sigma_moon:.3}.')
# This detection is significantly different from previous detections we have worked with in the past. In this scenario, we are looking for a 'lack' of cosmic rays coming from the patch of sky covered by the moon. Therefore, we have been dealing with taking the integral from the **left** up to our value $Y$.
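As a sanity check, the analytic left-tail probability can be compared against a direct Monte Carlo simulation of the background (a sketch; the seed and sample size are arbitrary):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
# Simulate many 15-night observations of the background alone (mean 7200 counts)
background = rng.poisson(7200, size=1_000_000)

mc_prob = (background <= 6800).mean()       # fraction at or below the observed 6800
analytic = stats.poisson.cdf(6800, 7200)
print(mc_prob, analytic)
```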
| Lab3/Lab3.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="GOk6KmYU4QQV"
# # My first Notebook
# **Let's try some codes**
# + colab={"base_uri": "https://localhost:8080/"} id="HypxPk9Q4HIo" outputId="d3771d81-8ec1-448f-e68b-52faae155755"
#calculate
2*(3+4)
# + colab={"base_uri": "https://localhost:8080/"} id="odbpjShr4-eS" outputId="31b0798c-87fe-4434-c6ce-4716746522be"
print('first value:', 1)
print('second value:', 2)
# + colab={"base_uri": "https://localhost:8080/"} id="AR0LFwBc5QQ_" outputId="5ee0c7c7-f115-413e-cd62-320f53d19ae0"
Numbers= [4,3,2,1]
Numbers.sort()
print(Numbers)
| Mynewnotebook.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
import json
import unicodedata
from sklearn.feature_extraction import DictVectorizer as DV
# %matplotlib inline
sns.set(color_codes=True)
# +
def json_load_byteified(file_handle):
return _byteify(
json.load(file_handle, object_hook=_byteify),
ignore_dicts=True
)
def json_loads_byteified(json_text):
return _byteify(
json.loads(json_text, object_hook=_byteify),
ignore_dicts=True
)
def _byteify(data, ignore_dicts = False):
# if this is a unicode string, return its string representation
if isinstance(data, unicode):
return data.encode('utf-8')
# if this is a list of values, return list of byteified values
if isinstance(data, list):
return [ _byteify(item, ignore_dicts=True) for item in data ]
# if this is a dictionary, return dictionary of byteified keys and values
# but only if we haven't already byteified it
if isinstance(data, dict) and not ignore_dicts:
return {
_byteify(key, ignore_dicts=True): _byteify(value, ignore_dicts=True)
for key, value in data.iteritems()
}
# if it's anything else, return it in its original form
return data
# -
with open('fav_stories_metadata.json') as data_file:
data = json_load_byteified(data_file)
data[0]
for i, e in enumerate(data):
for key in e:
if type(data[i][key]) is list:
data[i][key] = '--'.join(data[i][key])
data[0]
vectorizer = DV(sparse=False)
data_v = vectorizer.fit_transform(data)
data_v.shape
vectorizer.get_feature_names()
df = pd.DataFrame(data)
dfv = pd.DataFrame(data)
del dfv['author_url_relative']
del dfv['reviews_url_relative']
del dfv['story_end_url_relative']
del dfv['story_start_url_relative']
del dfv['story_summary']
# dfv.head()
pd.DataFrame(data_v).head()
df.story_parent.value_counts().plot(kind='bar')
print(df.story_parent)
| .ipynb_checkpoints/Plot_fav_story_data-checkpoint.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/Abhinav9512/Leaf-Recognition-Model/blob/main/Untitled3.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + colab={"base_uri": "https://localhost:8080/"} id="TPAaNLoboQtg" outputId="57fdcc2c-43b4-49e6-8691-68ff279452c4"
import tensorflow as tf
from tensorflow.keras.layers import Input, Lambda, Dense, Flatten
from tensorflow.keras.models import Model
from tensorflow.keras.applications.inception_v3 import InceptionV3
#from keras.applications.vgg16 import VGG16
from tensorflow.keras.applications.inception_v3 import decode_predictions
from tensorflow.keras.applications.inception_v3 import preprocess_input
from tensorflow.keras.preprocessing import image
from tensorflow.keras.preprocessing.image import ImageDataGenerator,load_img
from tensorflow.keras.models import Sequential
import matplotlib.pyplot as plt
import os
from os import listdir
from PIL import Image as PImage
import numpy as np
from glob import glob
import pandas as pd
from google.colab import drive
drive.mount('/content/drive')
# + id="4Yhhmd2Co-L9"
img_width, img_height = 256, 256
batch_size = 2
train_path = '/content/drive/My Drive/dataset/images/field'
# + id="OxvVho5MpDE2" colab={"base_uri": "https://localhost:8080/"} outputId="4eed0d1c-1b0d-4c43-d813-339d1e3b2089"
inception = InceptionV3(input_shape=(img_height, img_width, 3), weights='imagenet', include_top=False)
# + id="fpBe8vZNpGzb"
for layer in inception.layers:
layer.trainable = False
# + id="JAlz1KzDpKHD"
folders = glob('/content/drive/My Drive/dataset/images/field/*')
# + id="fvxQ1ryKpNyz"
x = Flatten()(inception.output)
# + id="hTp1PPhepQfo"
prediction = Dense(len(folders), activation='softmax')(x)
# create a model object
model = Model(inputs=inception.input, outputs=prediction)
# + id="txAA0WtZpTbR"
model.summary()
# + id="ZgP9aUjWpYR2"
model.compile(
loss='categorical_crossentropy',
optimizer='adam',
metrics=['accuracy']
)
# + colab={"base_uri": "https://localhost:8080/"} id="yYfwENiapfLZ" outputId="cb96cf60-7035-4977-df61-451820dff600"
ds_train = tf.keras.preprocessing.image_dataset_from_directory(
"/content/drive/My Drive/dataset/images/field/",
labels="inferred",
label_mode="categorical",
batch_size=batch_size,
shuffle=True,
seed=123,
validation_split=0.2,
subset="training",
)
# + colab={"base_uri": "https://localhost:8080/"} id="wbZ8N64WqI7c" outputId="523c0418-954c-4cb0-dcb3-41ff7e61d6be"
ds_validation = tf.keras.preprocessing.image_dataset_from_directory(
"/content/drive/My Drive/dataset/images/field/",
labels="inferred",
label_mode="categorical",
batch_size=batch_size,
shuffle=True,
seed=123,
validation_split=0.2,
subset="validation",
)
# + colab={"base_uri": "https://localhost:8080/"} id="zZNyRlZjqMsf" outputId="f3ace363-ec0f-4bad-97d2-f7282de48cd2"
# Model.fit_generator is deprecated; Model.fit accepts tf.data datasets directly
r = model.fit(
  ds_train,
  validation_data=ds_validation,
  epochs=5,
  steps_per_epoch=len(ds_train),
  validation_steps=len(ds_validation)
)
# + colab={"base_uri": "https://localhost:8080/", "height": 531} id="7HveGTdp2W4R" outputId="de182e0b-c1a4-4b71-c9c6-3e893d0362de"
plt.plot(r.history['loss'], label='train loss')
plt.plot(r.history['val_loss'], label='val loss')
plt.legend()
plt.savefig('LossVal_loss')  # save before show(), which clears the current figure
plt.show()

# plot the accuracy
plt.plot(r.history['accuracy'], label='train acc')
plt.plot(r.history['val_accuracy'], label='val acc')
plt.legend()
plt.savefig('AccVal_acc')
plt.show()
# + colab={"base_uri": "https://localhost:8080/"} id="zyAaYnGCcDmj" outputId="6bc6f4f6-0195-41d1-f937-7e06811cd28f"
model.save('InceptionV3-CNN.model')
# + id="44TB7zY_dwk9"
model = tf.keras.models.load_model("InceptionV3-CNN.model")
# + id="Jam05wDWd4-5"
import cv2
CATEGORIES = ["acer_palmatum", "betula_lenta" ,"cedrus_libani","diospyros_virginiana","evodia_danielli","ficus_carica","ilex_opaca","juglans_nigra","koelreuteria_paniculata","larix_decidua","malus_pumila","nyssa_sylvatica","ostrya_virginiana","pinus_taeda","quercus_palustris","robinia_pseudo-acacia", "styrax_japonica","tonna_sinensis","ulmus_pumila", "zelkova_serrata"] # will use this to convert prediction num to string value
# + id="TgSkRhUf1j3j"
filepath='/content/drive/My Drive/dataset/images/field/ulmus_pumila/13291724500056.jpg'
IMG_SIZE = 256 # 50 in txt-based
img_array = cv2.imread(filepath) # read in the image (cv2 loads it as BGR color)
new_array = cv2.resize(img_array, (IMG_SIZE, IMG_SIZE)) # resize image to match model's expected sizing
# new_array.reshape(-1, IMG_SIZE, IMG_SIZE, 1)
new_array= np.reshape(new_array, (-1, IMG_SIZE, IMG_SIZE, 3))
# print(new_array)
# + id="5SbsuNL20brI"
prediction = model.predict(new_array)
print(prediction)
print(prediction[0][0])
print(int(prediction[0][0]))
# print(CATEGORIES[int(prediction[0][0])])
print(CATEGORIES[int(np.argmax(prediction[0]))])
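The commented-out `CATEGORIES[int(prediction[0][0])]` would truncate the first class probability to an index, which is why `np.argmax` is used instead. A toy sketch with made-up probabilities (the class subset here is illustrative):

```python
import numpy as np

categories = ["acer_palmatum", "betula_lenta", "ulmus_pumila"]  # illustrative subset
probs = np.array([[0.1, 0.2, 0.7]])  # fake softmax output, shape (1, n_classes)

# int(probs[0][0]) truncates the first class probability to 0, always picking index 0
wrong = categories[int(probs[0][0])]
# np.argmax instead returns the index of the most probable class
right = categories[int(np.argmax(probs[0]))]
print(wrong, right)  # -> acer_palmatum ulmus_pumila
```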
# + colab={"base_uri": "https://localhost:8080/", "height": 68} id="3eKQabIRIxRr" outputId="16421ddb-e114-4df5-f823-0e21c52cc13d"
model = tf.keras.models.load_model("InceptionV3-CNN.model")
converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_model = converter.convert()
print("model converted")
# Save the model.
with open('model.tflite', 'wb') as f:
f.write(tflite_model)
from google.colab import files
files.download('model.tflite')
| Untitled3.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# **<NAME>**
# *July 20, 2021*
#
# # RAP Data
#
# There are different products available on the cloud.
# +
from herbie.archive import Herbie
from toolbox.cartopy_tools_OLD import common_features, pc
from paint.standard2 import cm_tmp
import matplotlib.pyplot as plt
import cartopy.crs as ccrs
# -
H = Herbie('2021-07-19',
model='rap',
product='awp130pgrb')
x = H.xarray('TMP:2 m above')
# +
ax = common_features(crs=x.herbie.crs, figsize=[8,8])
p = ax.pcolormesh(x.longitude, x.latitude, x.t2m,
transform=pc,
**cm_tmp(units='K').cmap_kwargs)
plt.colorbar(p, ax=ax,
orientation='horizontal', pad=.05,
**cm_tmp(units='K').cbar_kwargs)
ax.set_title(x.t2m.GRIB_name, loc='right')
ax.set_title(f"{x.model.upper()}: {H.product_description}", loc='left')
# -
| docs/_build/html/user_guide/notebooks/data_rap.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Exam 2 - <NAME>
# %load_ext sql
# %config SqlMagic.autocommit=True
# %sql mysql+pymysql://root:root@127.0.0.1:3306/mysql
#
# ## Problem 1: Controls
#
# Write a Python script that proves that the lines of data in Germplasm.tsv, and LocusGene are in the same sequence, based on the AGI Locus Code (ATxGxxxxxx). (hint: This will help you decide how to load the data into the database)
# +
import pandas as pd
import csv
gp = pd.read_csv('Germplasm.tsv', sep='\t')
matrix2 = gp[gp.columns[0]].to_numpy()
germplasm = matrix2.tolist()
#print(germplasm) ##to see the first column (AGI Locus Codes) of Germplasm.tsv
lg = pd.read_csv('LocusGene.tsv', sep='\t')
matrix2 = lg[lg.columns[0]].to_numpy()
locus = matrix2.tolist()
#print(locus) ##to see the first column (AGI Locus Codes) of LocusGene.tsv
if (germplasm == locus):
print("lines of data are in the same sequence")
else:
print("lines of data are not in the same sequence")
# -
# **I have only compared the first columns because that is where the AGI Locus Codes are (and they are identical in the two tables).**
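The same check can be written more compactly with `Series.equals`, which compares values and order in one call (a sketch on synthetic stand-ins for the two files):

```python
import pandas as pd

# Synthetic stand-ins for the two TSV files (same AGI codes, same order)
gp = pd.DataFrame({"Locus": ["AT1G01040", "AT1G01060"], "germplasm": ["x", "y"]})
lg = pd.DataFrame({"Locus": ["AT1G01040", "AT1G01060"], "Gene": ["DCL1", "LHY"]})

# Series.equals checks both the values and their order in one call
same_sequence = gp.iloc[:, 0].equals(lg.iloc[:, 0])
print(same_sequence)  # -> True
```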
# ## Problem 2: Design and create the database.
# * It should have two tables - one for each of the two data files
# * The two tables should be linked in a 1:1 relationship
# * you may use either sqlMagic or pymysql to build the database
#
#
#
# +
##creating a database called germplasm
# %sql create database germplasm;
##showing the existing databases
# %sql show databases;
# +
##selecting the new database to interact with it
# %sql use germplasm;
# %sql show tables;
##the database is empty (it has no tables yet, as expected)
# +
##showing the structure of the tables I want to add to the germplasm database
germplasm_file = open("Germplasm.tsv", "r")
print(germplasm_file.read())
print()
print()
locus_file = open("LocusGene.tsv", "r")
print(locus_file.read())
germplasm_file.close() ##closing the Germplasm.tsv file
locus_file.close() ##closing the LocusGene.tsv file
# -
##creating a table for Germplasm data
# %sql CREATE TABLE Germplasm_table(locus VARCHAR(10) NOT NULL PRIMARY KEY, germplasm VARCHAR(30) NOT NULL, phenotype VARCHAR(1000) NOT NULL, pubmed INTEGER NOT NULL);
# %sql DESCRIBE Germplasm_table;
##creating a table for Locus data
# %sql CREATE TABLE Locus_table(locus VARCHAR(10) NOT NULL PRIMARY KEY, gene VARCHAR(10) NOT NULL, protein_lenght INTEGER NOT NULL);
# %sql DESCRIBE Locus_table;
##showing the created tables
# %sql show tables;
##showing all of the data linking the two tables in a 1:1 relationship (it is empty because I have not introduced the data yet)
# %sql SELECT Germplasm_table.locus, Germplasm_table.germplasm, Germplasm_table.phenotype, Germplasm_table.pubmed, Locus_table.gene, Locus_table.protein_lenght\
# FROM Germplasm_table, Locus_table\
# WHERE Germplasm_table.locus = Locus_table.locus;
# **- I have designed a database with two tables: Germplasm_table for Germplasm.tsv and Locus_table for LocusGene.tsv**
#
# **- The primary keys to link the two tables in a 1:1 relationship are in the 'locus' column of each table**
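The same 1:1 design can be sketched with the standard library's sqlite3 (illustrative data; the exam itself uses MySQL via sqlMagic): the shared `locus` primary key in both tables is what gives the one-to-one join.

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.execute("CREATE TABLE Germplasm_table (locus TEXT PRIMARY KEY, germplasm TEXT)")
cur.execute("CREATE TABLE Locus_table (locus TEXT PRIMARY KEY, gene TEXT)")
cur.execute("INSERT INTO Germplasm_table VALUES ('AT1G01040', 'CS3828')")
cur.execute("INSERT INTO Locus_table VALUES ('AT1G01040', 'DCL1')")

# The shared 'locus' primary key gives the 1:1 join
row = cur.execute(
    "SELECT g.locus, g.germplasm, l.gene "
    "FROM Germplasm_table g JOIN Locus_table l ON g.locus = l.locus"
).fetchone()
print(row)  # -> ('AT1G01040', 'CS3828', 'DCL1')
```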
# ## Problem 3: Fill the database
# Using pymysql, create a Python script that reads the data from these files, and fills the database. There are a variety of strategies to accomplish this. I will give all strategies equal credit - do whichever one you are most confident with.
# +
import csv
import re
with open("Germplasm.tsv", "r") as Germplasm_file:
next(Germplasm_file) ##skipping the first row
for line in Germplasm_file:
line = line.rstrip() ##removing blank spaces created by the \n (newline) character at the end of every line
print(line, file=open('Germplasm_wo_header.tsv', 'a'))
Germplasm_woh = open("Germplasm_wo_header.tsv", "r")
import pymysql.cursors
##connecting to the database (db) germplasm
connection = pymysql.connect(host='localhost',
user='root',
password='<PASSWORD>',
db='germplasm',
charset='utf8mb4',
cursorclass=pymysql.cursors.DictCursor)
connection.autocommit(True)
try:
with connection.cursor() as cursor:
sql = "INSERT INTO Germplasm_table (locus, germplasm, phenotype, pubmed) VALUES (%s, %s, %s, %s)"
for line in Germplasm_woh.readlines():
field = line.split("\t") ##this splits the lines and inserts each field into a column
fields = (field[0], field[1], field[2], field[3])
cursor.execute(sql, fields)
connection.commit()
finally:
print("inserted")
#connection.close()
# -
# %sql SELECT * FROM Germplasm_table;
# +
import csv
import re
with open("LocusGene.tsv", "r") as LocusGene_file:
next(LocusGene_file) ##skipping the first row
for line in LocusGene_file:
line = line.rstrip() ##removing blank spaces created by the \n (newline) character at the end of every line
print(line, file=open('LocusGene_wo_header.tsv', 'a'))
LocusGene_woh = open("LocusGene_wo_header.tsv", "r")
import pymysql.cursors
##connecting to the database (db) germplasm
connection = pymysql.connect(host='localhost',
user='root',
password='<PASSWORD>',
db='germplasm',
charset='utf8mb4',
cursorclass=pymysql.cursors.DictCursor)
connection.autocommit(True)
try:
with connection.cursor() as cursor:
sql = "INSERT INTO Locus_table (locus, gene, protein_lenght) VALUES (%s, %s, %s)"
for line in LocusGene_woh.readlines():
field = line.split("\t") ##this splits the lines and inserts each field into a column
fields = (field[0], field[1], field[2])
cursor.execute(sql, fields)
connection.commit()
finally:
print("inserted")
#connection.close()
# -
# %sql SELECT * FROM Locus_table;
# To do this exercise, I have asked <NAME> for some help because I did not fully understand what you did in the suggested practice for filling databases.
#
# **As 'pubmed' and 'protein_length' columns are for INTEGERS, I have created new TSV files without the header (the first row gave me an error in those columns because of the header).**
# ## Problem 4: Create reports, written to a file
#
# 1. Create a report that shows the full, joined, content of the two database tables (including a header line)
#
# 2. Create a joined report that only includes the Genes SKOR and MAA3
#
# 3. Create a report that counts the number of entries for each Chromosome (AT1Gxxxxxx to AT5Gxxxxxxx)
#
# 4. Create a report that shows the average protein length for the genes on each Chromosome (AT1Gxxxxxx to AT5Gxxxxxxx)
#
# When creating reports 2 and 3, remember the "Don't Repeat Yourself" rule!
#
# All reports should be written to **the same file**. You may name the file anything you wish.
##creating an empty text file in current directory
report = open('exam2_report.txt', 'x')
# +
import pymysql.cursors
##connecting to the database (db) germplasm
connection = pymysql.connect(host='localhost',
user='root',
password='<PASSWORD>',
db='germplasm',
charset='utf8mb4',
cursorclass=pymysql.cursors.DictCursor)
connection.autocommit(True)
print('Problem 4.1. Create a report that shows the full, joined, content of the two database tables (including a header line):', file=open('exam2_report.txt', 'a'))
try:
with connection.cursor() as cursor:
sql = "SELECT 'locus' AS locus, 'germplasm' AS germplasm, 'phenotype' AS phenotype, 'pubmed' AS pubmed, 'gene' AS gene, 'protein_lenght' AS protein_lenght\
UNION ALL SELECT Germplasm_table.locus, Germplasm_table.germplasm, Germplasm_table.phenotype, Germplasm_table.pubmed, Locus_table.gene, Locus_table.protein_lenght\
FROM Germplasm_table, Locus_table\
WHERE Germplasm_table.locus = Locus_table.locus"
cursor.execute(sql)
results = cursor.fetchall()
for result in results:
print(result['locus'],result['germplasm'], result['phenotype'], result['pubmed'], result['gene'], result['protein_lenght'], file=open('exam2_report.txt', 'a'))
finally:
print("Problem 4.1 report written in exam2_report.txt file")
# -
# **I have omitted the locus column from the Locus_table in 4.1 and 4.2 for not repeating information.**
# +
print('\n\nProblem 4.2. Create a joined report that only includes the Genes SKOR and MAA3:', file=open('exam2_report.txt', 'a'))
try:
with connection.cursor() as cursor:
sql = "SELECT Germplasm_table.locus, Germplasm_table.germplasm, Germplasm_table.phenotype, Germplasm_table.pubmed, Locus_table.gene, Locus_table.protein_lenght\
FROM Germplasm_table, Locus_table\
WHERE Germplasm_table.locus = Locus_table.locus AND (Locus_table.gene = 'SKOR' OR Locus_table.gene = 'MAA3')"
cursor.execute(sql)
results = cursor.fetchall()
for result in results:
print(result['locus'],result['germplasm'], result['phenotype'], result['pubmed'], result['gene'], result['protein_lenght'], file=open('exam2_report.txt', 'a'))
finally:
print("Problem 4.2 report written in exam2_report.txt file")
# -
print('\n\nProblem 4.3. Create a report that counts the number of entries for each Chromosome:', file=open('exam2_report.txt', 'a'))
try:
with connection.cursor() as cursor:
i = 1 ##marks the beginning of the loop (i.e., chromosome 1)
while i < 6:
sql = "SELECT COUNT(*) AS 'Entries for each Chromosome' FROM Germplasm_table WHERE locus REGEXP 'AT"+str(i)+"G'"
cursor.execute(sql)
results = cursor.fetchall()
for result in results:
print("- Chromosome", i, "has", result['Entries for each Chromosome'], "entries.", file=open('exam2_report.txt', 'a'))
i = i +1
finally:
print("Problem 4.3 report written in exam2_report.txt file")
# +
print('\n\nProblem 4.4. Create a report that shows the average protein length for the genes on each Chromosome:', file=open('exam2_report.txt', 'a'))
try:
with connection.cursor() as cursor:
i = 1 ##marks the beginning of the loop (i.e., chromosome 1)
while i < 6:
sql = "SELECT AVG(protein_lenght) AS 'Average protein length for each Chromosome' FROM Locus_table WHERE locus REGEXP 'AT"+str(i)+"G'"
cursor.execute(sql)
results = cursor.fetchall()
for result in results:
print("- Average protein length for chromosome", i, "genes is", result['Average protein length for each Chromosome'], file=open('exam2_report.txt', 'a'))
i = i +1
finally:
print("Problem 4.4 report written in exam2_report.txt file")
##closing the report file with 'Problem 4' answers
report.close()
| Exam_2/Exam_2_Answers.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# %load_ext autoreload
# %autoreload 2
import warnings
warnings.filterwarnings('ignore')
# +
import math
from time import time
import pickle
import pandas as pd
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.ensemble import BaggingClassifier
from sklearn.metrics import accuracy_score, f1_score
# -
import sys
sys.path.append('../src')
from preprocessing import *
from utils import *
from plotting import *
# # Splitting the dataset
# +
features = ['R1', 'R2', 'R3', 'R4', 'R5', 'R6', 'R7', 'R8', 'Temp.', 'Humidity',
'R1_mean', 'R2_mean', 'R3_mean', 'R4_mean', 'R5_mean', 'R6_mean', 'R7_mean',
'R8_mean', 'Temp._mean', 'Humidity_mean', 'R1_std', 'R2_std', 'R3_std', 'R4_std',
'R5_std', 'R6_std', 'R7_std', 'R8_std', 'Temp._std', 'Humidity_std']
df_db = group_datafiles_byID('../datasets/preprocessed/HT_Sensor_prep_metadata.dat', '../datasets/preprocessed/HT_Sensor_prep_dataset.dat')
df_db = reclassify_series_samples(df_db)
df_db.head()
# -
df_train, df_test = split_series_byID(0.75, df_db)
df_train, df_test = norm_train_test(df_train, df_test)
features = ['R1', 'R2', 'R3', 'R4', 'R5', 'R6', 'R7', 'R8', 'Temp.', 'Humidity']
xtrain, ytrain = df_train[features].values, df_train['class'].values
xtest, ytest = df_test[features].values, df_test['class'].values
# # Basic Neural Network
# +
def printResults(n_hid_layers,n_neur,accuracy,elapsed):
print('========================================')
print('Number of hidden layers:', n_hid_layers)
print('Number of neurons per layer:', n_neur)
print('Accuracy:', accuracy)
print('Time (minutes):', (elapsed)/60)
def printScores(xtest,ytest,clf):
    xback, yback = xtest[ytest=='background'], ytest[ytest=='background']
    print('Background score:', clf.score(xback,yback))
    xrest, yrest = xtest[ytest!='background'], ytest[ytest!='background']
    print('Score on the rest:', clf.score(xrest,yrest))
    num_back = len(yback)
    num_wine = len(yrest[yrest=='wine'])
    num_banana = len(yrest[yrest=='banana'])
    func = lambda x: 1/num_back if x=='background' else (1/num_wine if x=='wine' else 1/num_banana)
    weights = np.array([func(x) for x in ytest])
    # Score where the three classes weigh equally
    print('Weighted score:', clf.score(xtest,ytest,weights))
    print('========================================')
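The per-sample weights above implement a balanced score in which each class contributes equally. A minimal numpy sketch of the same idea on toy labels (the counts are made up):

```python
import numpy as np

# Toy labels: 8 background samples, 1 wine, 1 banana
ytrue = np.array(["background"] * 8 + ["wine"] + ["banana"])
ypred = np.array(["background"] * 9 + ["banana"])   # the wine sample is misclassified

classes, counts = np.unique(ytrue, return_counts=True)
weight = {c: 1.0 / n for c, n in zip(classes, counts)}  # each class gets total weight 1
w = np.array([weight[y] for y in ytrue])

plain_acc = (ytrue == ypred).mean()                 # dominated by the background class
weighted_acc = np.sum(w * (ytrue == ypred)) / np.sum(w)
print(plain_acc, weighted_acc)                      # plain 0.9 vs balanced 2/3
```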
# +
# NN with 2 hidden layers and 15 neurons per layer
xtrain, ytrain, xtest, ytest = split_train_test(df_db,0.75)
start = time()  # note: 'from time import time' above, so time.time() would fail
clf = MLPClassifier(hidden_layer_sizes=(15,15))
clf.fit(xtrain,ytrain)
score = clf.score(xtest,ytest)
final = time()
printResults(2,15,score,final-start)
# +
# Adding early stopping and more iterations
xtrain, ytrain, xtest, ytest = split_train_test(df_db,0.75)
start = time()
clf = MLPClassifier(hidden_layer_sizes=(15,15),early_stopping=True,max_iter=2000)
clf.fit(xtrain,ytrain)
score = clf.score(xtest,ytest)
final = time()
printResults(2,15,score,final-start)
# -
# Score analysis
print('Background proportion:', len(ytest[ytest=='background'])/len(ytest))
printScores(xtest,ytest,clf)
# Too much bias toward the background; we need to reduce it even if the overall score drops
# # Removing excess of background
# prop: number of non-background samples per background sample kept
def remove_bg(df, prop=2):
    new_df = df[df['class']!='background'].copy()
    useful_samples = new_df.shape[0]
    # keep one background sample for every `prop` non-background samples
    new_df = new_df.append(df[df['class']=='background'].sample(n=int(useful_samples/prop)).copy())
    return new_df
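A sketch of the same downsampling on a toy frame (using `pd.concat`, since `DataFrame.append` was removed in pandas 2.x; the class counts and `random_state` are arbitrary):

```python
import pandas as pd

# Toy frame: 100 background rows vs 10 + 10 signal rows
df = pd.DataFrame({"class": ["background"] * 100 + ["wine"] * 10 + ["banana"] * 10})

signal = df[df["class"] != "background"]
# prop = 2 means two signal rows are kept per background row
background = df[df["class"] == "background"].sample(n=len(signal) // 2, random_state=0)
balanced = pd.concat([signal, background])

print(balanced["class"].value_counts())
```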
# To avoid the bias we remove samples classified as background, but only in the train set
df_train, df_test = split_series_byID(0.75, df_db)
df_train, df_test = norm_train_test(df_train, df_test)
df_train = remove_bg(df_train)
features = ['R1', 'R2', 'R3', 'R4', 'R5', 'R6', 'R7', 'R8', 'Temp.', 'Humidity']
xtrain, ytrain = df_train[features].values, df_train['class'].values
xtest, ytest = df_test[features].values, df_test['class'].values
# +
start = time()
clf = MLPClassifier(hidden_layer_sizes=(15,15),early_stopping=True,max_iter=2000)
clf.fit(xtrain,ytrain)
score = clf.score(xtest,ytest)
final = time()
printResults(2,15,score,final-start)
# -
# Score analysis
printScores(xtest,ytest,clf)
# Even with the same amount of background as banana or wine, there is still a bias toward the background.
# # Hyperparameter analysis
# +
xtrain, ytrain, xtest, ytest = split_train_test(df_db,0.75)
start_total = time()
for n_hid_layers in range(2,5):
    for n_neur in [10,20,40]:
        tup = []
        for i in range(n_hid_layers):
            tup.append(n_neur)
        tup = tuple(tup)
        start = time()
        clf_nn = MLPClassifier(
            hidden_layer_sizes = tup,
            max_iter=2000,
            early_stopping=True
        )
        clf_nn.fit(xtrain, ytrain)
        ypred = clf_nn.predict(xtest)
        final = time()
        metric_report(ytest, ypred)
        print('\n====> Elapsed time (minutes):', (final-start)/(60))
        print('Number of hidden layers:', n_hid_layers)
        print('Number of neurons per layer:', n_neur)
end_total = time()
print('\n====> Total elapsed time (hours):', (end_total-start_total)/(60*60))
# -
# # Two Neural Networks
# ## 1. Classify background
def printScoresBack(xtest,ytest,clf):
    xback, yback = xtest[ytest=='background'], ytest[ytest=='background']
    print('Background score:', clf.score(xback,yback))
    xrest, yrest = xtest[ytest!='background'], ytest[ytest!='background']
    print('Score on the rest:', clf.score(xrest,yrest))
    num_back = len(yback)
    num_rest = len(ytest)-num_back
    func = lambda x: 1/num_back if x=='background' else 1/num_rest
    weights = np.array([func(x) for x in ytest])
    # Score where both classes weigh equally
    print('Weighted score:', clf.score(xtest,ytest,weights))
    print('========================================')
df_db = group_datafiles_byID('../datasets/raw/HT_Sensor_metadata.dat', '../datasets/raw/HT_Sensor_dataset.dat')
df_db = reclassify_series_samples(df_db)
df_db.loc[df_db['class']!='background','class'] = 'not-background'
df_db[df_db['class']!='background'].head()
# +
# First we try without removing the excess background
df_train, df_test = split_series_byID(0.75, df_db)
df_train, df_test = norm_train_test(df_train, df_test)
features = ['R1', 'R2', 'R3', 'R4', 'R5', 'R6', 'R7', 'R8', 'Temp.', 'Humidity']
xtrain, ytrain = df_train[features].values, df_train['class'].values
xtest, ytest = df_test[features].values, df_test['class'].values
start_total = time.time()
for n_hid_layers in range(2,5):
for n_neur in [10,20,40]:
tup = []
for i in range(n_hid_layers):
tup.append(n_neur)
tup = tuple(tup)
start = time.time()
clf_nn = MLPClassifier(
hidden_layer_sizes = tup,
max_iter=2000,
early_stopping=True
)
clf_nn.fit(xtrain, ytrain)
ypred = clf_nn.predict(xtest)
final = time.time()
metric_report(ytest, ypred)
print('\n====> Elapsed time (minutes):', (final-start)/60)
end_total = time.time()
print('\n====> Total elapsed time (hours):', (end_total-start_total)/(60*60))
# More than half of the non-background samples are misclassified.
# Let's see whether removing the excess background helps.
# +
# Now the same, but removing the excess background
df_train, df_test = split_series_byID(0.75, df_db)
df_train = remove_bg(df_train,prop=1)
df_train, df_test = norm_train_test(df_train, df_test)
features = ['R1', 'R2', 'R3', 'R4', 'R5', 'R6', 'R7', 'R8', 'Temp.', 'Humidity']
xtrain, ytrain = df_train[features].values, df_train['class'].values
xtest, ytest = df_test[features].values, df_test['class'].values
start_total = time.time()
for n_hid_layers in range(2,5):
for n_neur in [10,20,40]:
tup = []
for i in range(n_hid_layers):
tup.append(n_neur)
tup = tuple(tup)
start = time.time()
clf_nn = MLPClassifier(
hidden_layer_sizes = tup,
max_iter=2000,
early_stopping=True,
shuffle=True
)
clf_nn.fit(xtrain, ytrain)
score = clf_nn.score(xtest, ytest)
final = time.time()
printResults(n_hid_layers,n_neur,score,final-start)
printScoresBack(xtest,ytest,clf_nn)
end_total = time.time()
print('\n====> Total elapsed time (hours):', (end_total-start_total)/(60*60))
# -
# ## 2. Classify wine and bananas
df_db = group_datafiles_byID('../datasets/raw/HT_Sensor_metadata.dat', '../datasets/raw/HT_Sensor_dataset.dat')
df_db = reclassify_series_samples(df_db)
df_db = df_db[df_db['class']!='background']
df_db.head()
# +
xtrain, ytrain, xtest, ytest = split_train_test(df_db,0.75)
start_total = time.time()
for n_hid_layers in range(1,5):
for n_neur in [5,10,15,20,40]:
tup = []
for i in range(n_hid_layers):
tup.append(n_neur)
tup = tuple(tup)
start = time.time()
clf_nn = MLPClassifier(
hidden_layer_sizes = tup,
max_iter=2000,
early_stopping=True,
shuffle=True
)
clf_nn.fit(xtrain, ytrain)
score = clf_nn.score(xtest, ytest)
final = time.time()
printResults(n_hid_layers,n_neur,score,final-start)
end_total = time.time()
print('\n====> Total elapsed time (hours):', (end_total-start_total)/(60*60))
# -
# # 3. Merge the 2 NN
class doubleNN:
def __init__(self, n_hid_layers, n_neur):
self.hid_layers = n_hid_layers
self.neur = n_neur
tup = []
for i in range(n_hid_layers):
tup.append(n_neur)
tup = tuple(tup)
self.backNN = MLPClassifier(
hidden_layer_sizes = tup,
max_iter=2000,
early_stopping=True,
shuffle=True
)
self.wineNN = MLPClassifier(
hidden_layer_sizes = tup,
max_iter=2000,
early_stopping=True,
shuffle=True
)
def fit_bg(self, xtrain, ytrain):
ytrain_copy = np.array([x if x=='background' else 'not-background' for x in ytrain])
self.backNN.fit(xtrain, ytrain_copy)
def fit_wine(self,xtrain,ytrain):
self.wineNN.fit(xtrain, ytrain)
def predict(self,xtest):
ypred = self.backNN.predict(xtest)
ypred[ypred=='not-background'] = self.wineNN.predict(xtest[ypred=='not-background'])
return ypred
def score(self,xtest,ytest):
ypred = self.predict(xtest)
score = np.sum(np.equal(ypred,ytest))/len(ytest)
return score
# +
# With all the background
xtrain, ytrain, xtest, ytest = split_train_test(df_db,0.75)
start_total = time.time()
for n_hid_layers in range(2,4):
for n_neur in [10,20]:
tup = []
for i in range(n_hid_layers):
tup.append(n_neur)
tup = tuple(tup)
start = time.time()
clf_nn = doubleNN(n_hid_layers, n_neur)
clf_nn.fit_bg(xtrain, ytrain)
xtrain_notbg = xtrain[ytrain != 'background']
ytrain_notbg = ytrain[ytrain != 'background']
clf_nn.fit_wine(xtrain_notbg, ytrain_notbg)
ypred = clf_nn.predict(xtest)
final = time.time()
metric_report(ytest, ypred)
print('\n====> Elapsed time (minutes):', (final-start)/60)
print('Number of hidden layers:', n_hid_layers)
print('Number of neurons per layer:', n_neur)
end_total = time.time()
print('\n====> Total elapsed time (hours):', (end_total-start_total)/(60*60))
# +
# Removing background
df_train, df_test = split_series_byID(0.75, df_db)
df_train = remove_bg(df_train,prop=1)
features = ['R1', 'R2', 'R3', 'R4', 'R5', 'R6', 'R7', 'R8', 'Temp.', 'Humidity']
xtrain, ytrain = df_train[features].values, df_train['class'].values
xtest, ytest = df_test[features].values, df_test['class'].values
start_total = time.time()
for n_hid_layers in range(2,4):
for n_neur in [10,20]:
tup = []
for i in range(n_hid_layers):
tup.append(n_neur)
tup = tuple(tup)
start = time.time()
clf_nn = doubleNN(n_hid_layers, n_neur)
clf_nn.fit_bg(xtrain, ytrain)
xtrain_notbg = xtrain[ytrain != 'background']
ytrain_notbg = ytrain[ytrain != 'background']
clf_nn.fit_wine(xtrain_notbg, ytrain_notbg)
ypred = clf_nn.predict(xtest)
final = time.time()
metric_report(ytest, ypred)
print('\n====> Elapsed time (minutes):', (final-start)/60)
print('Number of hidden layers:', n_hid_layers)
print('Number of neurons per layer:', n_neur)
end_total = time.time()
print('\n====> Total elapsed time (hours):', (end_total-start_total)/(60*60))
# -
# # Creating Windows
# +
# with open('../datasets/preprocessed/window120_dataset.pkl', 'wb') as f:
# pickle.dump(win_df, f)
win_df = pd.read_pickle('../datasets/preprocessed/window120_dataset.pkl')
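# The window dataset is only loaded from a pickle here, so the construction code is not shown. Below is a hypothetical sketch of how the `_mean`/`_std` window features could be built with pandas rolling statistics; the 3-sample window and the toy column are assumptions (the filename suggests the real dataset used 120-sample windows, presumably computed per series ID):

```python
import numpy as np
import pandas as pd

# Toy single-series frame; the real data would be grouped by series ID first
df = pd.DataFrame({'R1': np.arange(10, dtype=float), 'class': 'wine'})

win = 3  # stand-in for the 120-sample window suggested by the filename
df['R1_mean'] = df['R1'].rolling(win, min_periods=1).mean()
df['R1_std'] = df['R1'].rolling(win, min_periods=1).std().fillna(0.0)
```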
# +
xtrain, ytrain, xtest, ytest = split_train_test(win_df,0.75)
start = time.time()
clf_nn = MLPClassifier(
hidden_layer_sizes = (32,16),
max_iter=2000,
early_stopping=True,
shuffle=True,
alpha=0.01,
learning_rate_init=0.01
)
clf_nn.fit(xtrain, ytrain)
ypred = clf_nn.predict(xtest)
final = time.time()
metric_report(ytest, ypred)
print('\n====> Elapsed time (minutes):', (final-start)/60)
# +
features = ['R1', 'R2', 'R3', 'R4', 'R5', 'R6', 'R7', 'R8', 'Temp.', 'Humidity',
'R1_mean', 'R2_mean', 'R3_mean', 'R4_mean', 'R5_mean', 'R6_mean', 'R7_mean',
'R8_mean', 'Temp._mean', 'Humidity_mean', 'R1_std', 'R2_std', 'R3_std', 'R4_std',
'R5_std', 'R6_std', 'R7_std', 'R8_std', 'Temp._std', 'Humidity_std']
# Varies certain hyperparameters with windows and prints the most relevant results
def hyper_sim(win_df,num_val,n_hid_layers,n_neur,alpha):
errs_acc = []
errs_f1 = []
rec_ban = []
loss = []
for i in range(num_val):
df_train, df_test = split_series_byID(0.75, win_df)
df_train, df_test = norm_train_test(df_train,df_test,features_to_norm=features)
xtrain, ytrain = df_train[features].values, df_train['class'].values
xtest, ytest = df_test[features].values, df_test['class'].values
tup = []
for i in range(n_hid_layers):
tup.append(n_neur)
tup = tuple(tup)
clf_nn = MLPClassifier(
hidden_layer_sizes=tup,
max_iter=2000,
early_stopping=True,
shuffle=True,
alpha=alpha,
learning_rate='adaptive'
)
clf_nn.fit(xtrain, ytrain)
ypred = clf_nn.predict(xtest)
errs_acc.append(accuracy_score(ytest,ypred))
errs_f1.append(f1_score(ytest,ypred,average='weighted'))
rec_ban.append(np.sum(np.logical_and(ytest=='banana',ypred=='banana'))/np.sum(ytest=='banana'))
loss.append(clf_nn.loss_)
errs_acc = np.array(errs_acc)
errs_f1 = np.array(errs_f1)
rec_ban = np.array(rec_ban)
loss = np.array(loss)
print('Train loss:',np.mean(loss),'+-',np.std(loss))
print('Accuracy:',np.mean(errs_acc),'+-',np.std(errs_acc))
print('F1-score:',np.mean(errs_f1),'+-',np.std(errs_f1))
print('Recall bananas:',np.mean(rec_ban),'+-',np.std(rec_ban))
# -
for alpha in [0.1,0.01,0.001]:
print('<<<<<<<<<<<<<<<<<<<<<<<<<<<<<>>>>>>>>>>>>>>>>>')
print('Alpha:',alpha)
for n_hid_layers in range(1,4):
print('##############################################')
print('\t Hidden layers:',n_hid_layers)
for n_neur in [4,8,16]:
print('==============================================')
print('\t \t Neurons per layer:',n_neur)
hyper_sim(win_df,3,n_hid_layers,n_neur,alpha)
print('==============================================')
# +
# We keep:
# alpha: 0.01
# hidden_layers: 3
# n_neurons: 4
features = ['R1', 'R2', 'R3', 'R4', 'R5', 'R6', 'R7', 'R8', 'Temp.', 'Humidity',
'R1_mean', 'R2_mean', 'R3_mean', 'R4_mean', 'R5_mean', 'R6_mean', 'R7_mean',
'R8_mean', 'Temp._mean', 'Humidity_mean', 'R1_std', 'R2_std', 'R3_std', 'R4_std',
'R5_std', 'R6_std', 'R7_std', 'R8_std', 'Temp._std', 'Humidity_std']
errs_acc = []
errs_f1 = []
rec_ban = []
for i in range(5):
df_train, df_test = split_series_byID(0.75, win_df)
df_train, df_test = norm_train_test(df_train,df_test,features_to_norm=features)
xtrain, ytrain = df_train[features].values, df_train['class'].values
xtest, ytest = df_test[features].values, df_test['class'].values
clf_nn = MLPClassifier(
hidden_layer_sizes=(4,4,4),
max_iter=2000,
early_stopping=True,
shuffle=True,
alpha=0.01,
learning_rate='adaptive'
)
bag = BaggingClassifier(base_estimator=clf_nn,n_estimators=100,n_jobs=3)
bag.fit(xtrain, ytrain)
ypred = bag.predict(xtest)
metric_report(ytest, ypred)
errs_acc.append(accuracy_score(ytest,ypred))
errs_f1.append(f1_score(ytest,ypred,average='weighted'))
rec_ban.append(np.sum(np.logical_and(ytest=='banana',ypred=='banana'))/np.sum(ytest=='banana'))
errs_acc = np.array(errs_acc)
errs_f1 = np.array(errs_f1)
rec_ban = np.array(rec_ban)
print('Accuracy:',np.mean(errs_acc),'+-',np.std(errs_acc))
print('F1-score:',np.mean(errs_f1),'+-',np.std(errs_f1))
print('Recall bananas:',np.mean(rec_ban),'+-',np.std(rec_ban))
# -
with open('../datasets/preprocessed/nn_optimal.pkl', 'wb') as f:
pickle.dump(bag, f)
| tests/neural_networks.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # This is Notebook B
# +
import scrapbook as sb
sb.glue('title', 'Dashboard B with a Long Title')
sb.glue('description', 'Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat. Duis aute irure dolor in reprehenderit in voluptate velit esse cillum dolore eu fugiat nulla pariatur. Excepteur sint occaecat cupidatat non proident, sunt in culpa qui officia deserunt mollit anim id est laborum.')
# -
| released/notebook_b.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Classification of leukemia types by gene expression data
# https://www.kaggle.com/varimp/gene-expression-classification/notebook
#
# +
# Import all the libraries that we shall be using
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
from mpl_toolkits.mplot3d import Axes3D
# %matplotlib inline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.model_selection import GridSearchCV, cross_val_score
from sklearn.metrics import accuracy_score, confusion_matrix
from sklearn.cluster import KMeans
from sklearn.svm import SVC
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import GaussianNB
from sklearn.ensemble import RandomForestClassifier
import xgboost as xgb
# -
# Import labels (for the whole dataset, both training and testing)
y = pd.read_csv('actual.csv')
print(y.shape)
y.head()
# In the combined training and testing sets there are 72 patients, each of whom are labelled either "ALL" or "AML" depending on the type of leukemia they have.
#
# Here's the breakdown:
#
# AML - Acute myeloid leukemia
#
# ALL - Acute lymphoblastic leukemia
y['cancer'].value_counts()
# We actually need our labels to be numeric, so let's just do that now.
# Recode label to numeric
y = y.replace({'ALL':0,'AML':1})
labels = ['ALL', 'AML'] # for plotting convenience later on
# +
# Import training data
df_train = pd.read_csv('data_set_ALL_AML_train.csv')
print(df_train.shape)
# Import testing data
df_test = pd.read_csv('data_set_ALL_AML_independent.csv')
print(df_test.shape)
# -
# The 7129 gene descriptions are provided as the rows and the values for each patient as the columns. This will clearly require some tidying up.
df_train.head()
df_test.head()
# +
# Remove "call" columns from training and testing data
train_to_keep = [col for col in df_train.columns if "call" not in col]
test_to_keep = [col for col in df_test.columns if "call" not in col]
X_train_tr = df_train[train_to_keep]
X_test_tr = df_test[test_to_keep]
# -
# Neither the training nor the testing column names are in numeric order, so it's important that we reorder them at some point so that the labels line up with the corresponding data.
# +
train_columns_titles = ['Gene Description', 'Gene Accession Number', '1', '2', '3', '4', '5', '6', '7', '8', '9', '10',
'11', '12', '13', '14', '15', '16', '17', '18', '19', '20', '21', '22', '23', '24', '25',
'26', '27', '28', '29', '30', '31', '32', '33', '34', '35', '36', '37', '38']
X_train_tr = X_train_tr.reindex(columns=train_columns_titles)
# +
test_columns_titles = ['Gene Description', 'Gene Accession Number','39', '40', '41', '42', '43', '44', '45', '46',
'47', '48', '49', '50', '51', '52', '53', '54', '55', '56', '57', '58', '59',
'60', '61', '62', '63', '64', '65', '66', '67', '68', '69', '70', '71', '72']
X_test_tr = X_test_tr.reindex(columns=test_columns_titles)
# -
# Now we can simply transpose the columns and rows so that genes become features and each patient's observations occupies a single row.
# +
X_train = X_train_tr.T
X_test = X_test_tr.T
print(X_train.shape)
X_train.head()
# +
# Clean up the column names for training and testing data
X_train.columns = X_train.iloc[1]
X_train = X_train.drop(["Gene Description", "Gene Accession Number"]).apply(pd.to_numeric)
# Clean up the column names for Testing data
X_test.columns = X_test.iloc[1]
X_test = X_test.drop(["Gene Description", "Gene Accession Number"]).apply(pd.to_numeric)
print(X_train.shape)
print(X_test.shape)
X_train.head()
# -
# +
# Split into train and test (we first need to reset the index, as the indexes of the two dataframes need to be the same before combining them).
# Subset the first 38 patient's cancer types
X_train = X_train.reset_index(drop=True)
y_train = y[y.patient <= 38].reset_index(drop=True)
# Subset the rest for testing
X_test = X_test.reset_index(drop=True)
y_test = y[y.patient > 38].reset_index(drop=True)
# -
# Let's now take a look at some summary statistics:
X_train.describe()
# +
# Convert from integer to float
X_train_fl = X_train.astype(np.float64)
X_test_fl = X_test.astype(np.float64)
# Apply the same scaling to both datasets
scaler = StandardScaler()
X_train_scl = scaler.fit_transform(X_train_fl)
X_test_scl = scaler.transform(X_test_fl) # note that we transform rather than fit_transform
# -
pd.DataFrame(X_train_scl)
# With 7129 features, it's also worth considering whether we might be able to reduce the dimensionality of the dataset. One very common approach to this is principal component analysis (PCA). Let's start by leaving the number of desired components as an open question:
pca = PCA()
pca.fit_transform(X_train)
# Let's set a threshold for explained variance of 90% and see how many features are required to meet that threshold.
# +
total = sum(pca.explained_variance_)
k = 0
current_variance = 0
while current_variance/total < 0.90:
current_variance += pca.explained_variance_[k]
k = k + 1
print(k, " features explain around 90% of the variance. From 7129 features to ", k, ", not too bad.", sep='')
pca = PCA(n_components=k)
pca.fit(X_train)
X_train_pca = pca.transform(X_train)
X_test_pca = pca.transform(X_test)
var_exp = pca.explained_variance_ratio_.cumsum()
var_exp = var_exp*100
plt.bar(range(k), var_exp);
# -
# +
pca3 = PCA(n_components=3).fit(X_train)
X_train_reduced = pca3.transform(X_train)
plt.clf()
fig = plt.figure(1, figsize=(10,6 ))
ax = Axes3D(fig, elev=-150, azim=110,)
ax.scatter(X_train_reduced[:, 0], X_train_reduced[:, 1], X_train_reduced[:, 2], c = y_train.iloc[:,1], cmap = plt.cm.Paired, linewidths=10)
ax.set_title("First three PCA directions")
ax.set_xlabel("1st eigenvector")
ax.w_xaxis.set_ticklabels([])
ax.set_ylabel("2nd eigenvector")
ax.w_yaxis.set_ticklabels([])
ax.set_zlabel("3rd eigenvector")
ax.w_zaxis.set_ticklabels([])
# +
pca3 = PCA(n_components=10).fit(X_train)
X_train_reduced = pca3.transform(X_train)
plt.clf()
fig = plt.figure(1, figsize=(10,6 ))
ax = Axes3D(fig, elev=-150, azim=110,)
ax.scatter(X_train_reduced[:, 8], X_train_reduced[:, 9], X_train_reduced[:, 7], c = y_train.iloc[:,1], cmap = plt.cm.Paired, linewidths=10)
ax.set_title("First three PCA directions")
ax.set_xlabel("3rd eigenvector")
ax.w_xaxis.set_ticklabels([])
ax.set_ylabel("4th eigenvector")
ax.w_yaxis.set_ticklabels([])
ax.set_zlabel("5th eigenvector")
ax.w_zaxis.set_ticklabels([])
# -
fig = plt.figure(1, figsize = (10, 6))
plt.scatter(X_train_reduced[:, 0], X_train_reduced[:, 1], c = y_train.iloc[:,1], cmap = plt.cm.Paired, linewidths=10)
plt.annotate('Note the Brown Cluster', xy = (30000,-2000))
plt.title("2D Transformation of the Above Graph ")
fig = plt.figure(1, figsize = (10, 6))
plt.scatter(X_train_reduced[:, 0], X_train_reduced[:, 4], c = y_train.iloc[:,1], cmap = plt.cm.Paired, linewidths=10)
plt.annotate('Note the Brown Cluster', xy = (30000,-2000))
plt.title("2D Transformation of the Above Graph ")
# # Model Building
# Let's start by establishing a naive baseline. This doesn't require a model; we simply take the proportion of test patients that belong to the majority class as a baseline. In other words, let's see what happens if we were to predict that every patient belongs to the "ALL" class.
print("Simply predicting everything as acute lymphoblastic leukemia
(ALL) results in an accuracy of ", round(1 - np.mean(y_test.iloc[:,1]), 3), ".", sep = '')
# ### K-Means Clustering
# +
kmeans = KMeans(n_clusters=2, random_state=0).fit(X_train_scl)
km_pred = kmeans.predict(X_test_scl)
print('K-means accuracy:', round(accuracy_score(y_test.iloc[:,1], km_pred), 3))
cm_km = confusion_matrix(y_test.iloc[:,1], km_pred)
ax = plt.subplot()
sns.heatmap(cm_km, annot=True, ax = ax, fmt='g', cmap='Greens')
# labels, title and ticks
ax.set_xlabel('Predicted labels')
ax.set_ylabel('True labels')
ax.set_title('K-means Confusion Matrix')
ax.xaxis.set_ticklabels(labels)
ax.yaxis.set_ticklabels(labels, rotation=360);
# -
# Is k-means better than the baseline?
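# One caveat when scoring k-means this way: the cluster ids 0/1 are assigned arbitrarily, so comparing them directly to the class labels can understate (or overstate) the agreement. A minimal sketch of aligning binary cluster ids to the labels before scoring:

```python
import numpy as np

def aligned_accuracy(y_true, clusters):
    # for two clusters, try both possible id-to-label assignments
    y_true = np.asarray(y_true)
    clusters = np.asarray(clusters)
    acc = np.mean(y_true == clusters)
    return float(max(acc, 1.0 - acc))

# clusters that match the labels perfectly, but with the ids swapped
assert aligned_accuracy([0, 0, 1, 1], [1, 1, 0, 0]) == 1.0
```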
# ### Naive Bayes
# +
# Create a Gaussian classifier
nb_model = GaussianNB()
nb_model.fit(X_train, y_train.iloc[:,1])
nb_pred = nb_model.predict(X_test)
print('Naive Bayes accuracy:', round(accuracy_score(y_test.iloc[:,1], nb_pred), 3))
cm_nb = confusion_matrix(y_test.iloc[:,1], nb_pred)
ax = plt.subplot()
sns.heatmap(cm_nb, annot=True, ax = ax, fmt='g', cmap='Greens')
# labels, title and ticks
ax.set_xlabel('Predicted labels')
ax.set_ylabel('True labels')
ax.set_title('Naive Bayes Confusion Matrix')
ax.xaxis.set_ticklabels(labels)
ax.yaxis.set_ticklabels(labels, rotation=360);
# -
# The Naive Bayes model is pretty good: just three incorrect classifications.
# ### Logistic Regression
# +
log_grid = {'C': [1e-03, 1e-2, 1e-1, 1, 10],
'penalty': ['l1', 'l2']}
log_estimator = LogisticRegression(solver='liblinear')
log_model = GridSearchCV(estimator=log_estimator,
param_grid=log_grid,
cv=3,
scoring='accuracy')
log_model.fit(X_train, y_train.iloc[:,1])
print("Best Parameters:\n", log_model.best_params_)
# Select best log model
best_log = log_model.best_estimator_
# Make predictions using the optimised parameters
log_pred = best_log.predict(X_test)
print('Logistic Regression accuracy:', round(accuracy_score(y_test.iloc[:,1], log_pred), 3))
cm_log = confusion_matrix(y_test.iloc[:,1], log_pred)
ax = plt.subplot()
sns.heatmap(cm_log, annot=True, ax = ax, fmt='g', cmap='Greens')
# labels, title and ticks
ax.set_xlabel('Predicted labels')
ax.set_ylabel('True labels')
ax.set_title('Logistic Regression Confusion Matrix')
ax.xaxis.set_ticklabels(labels)
ax.yaxis.set_ticklabels(labels, rotation=360);
# -
# This logistic regression model manages perfect classification - ZERO mistakes
# +
# Parameter grid
svm_param_grid = {'C': [0.1, 1, 10, 100], 'gamma': [1, 0.1, 0.01, 0.001, 0.00001, 10], "kernel": ["linear", "rbf", "poly"], "decision_function_shape" : ["ovo", "ovr"]}
# Create SVM grid search classifier
svm_grid = GridSearchCV(SVC(), svm_param_grid, cv=3)
# Train the classifier
svm_grid.fit(X_train_pca, y_train.iloc[:,1])
print("Best Parameters:\n", svm_grid.best_params_)
# Select best svc
best_svc = svm_grid.best_estimator_
# Make predictions using the optimised parameters
svm_pred = best_svc.predict(X_test_pca)
print('SVM accuracy:', round(accuracy_score(y_test.iloc[:,1], svm_pred), 3))
cm_svm = confusion_matrix(y_test.iloc[:,1], svm_pred)
ax = plt.subplot()
sns.heatmap(cm_svm, annot=True, ax = ax, fmt='g', cmap='Greens')
# Labels, title and ticks
ax.set_xlabel('Predicted labels')
ax.set_ylabel('True labels')
ax.set_title('SVM Confusion Matrix')
ax.xaxis.set_ticklabels(labels)
ax.yaxis.set_ticklabels(labels, rotation=360);
# -
# ### Random Forest
# +
# Hyperparameters search grid
rf_param_grid = {'bootstrap': [False, True],
'n_estimators': [60, 70, 80, 90, 100],
'max_features': [0.6, 0.65, 0.7, 0.75, 0.8],
'min_samples_leaf': [8, 10, 12, 14],
'min_samples_split': [3, 5, 7]
}
# Instantiate random forest classifier
rf_estimator = RandomForestClassifier(random_state=0)
# Create the GridSearchCV object
rf_model = GridSearchCV(estimator=rf_estimator, param_grid=rf_param_grid, cv=3, scoring='accuracy')
# Fine-tune the hyperparameters
rf_model.fit(X_train, y_train.iloc[:,1])
print("Best Parameters:\n", rf_model.best_params_)
# Get the best model
rf_model_best = rf_model.best_estimator_
# Make predictions using the optimised parameters
rf_pred = rf_model_best.predict(X_test)
print('Random Forest accuracy:', round(accuracy_score(y_test.iloc[:,1], rf_pred), 3))
cm_rf = confusion_matrix(y_test.iloc[:,1], rf_pred)
ax = plt.subplot()
sns.heatmap(cm_rf, annot=True, ax = ax, fmt='g', cmap='Greens')
# labels, title and ticks
ax.set_xlabel('Predicted labels')
ax.set_ylabel('True labels')
ax.set_title('Random Forest Confusion Matrix')
ax.xaxis.set_ticklabels(labels)
ax.yaxis.set_ticklabels(labels, rotation=360);
| Bioinformatics/2nd term/Spr_Sem/9 - (03.17.21)/CancerGeneExpression.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
# # Data preparation
# Open **train2017.tsv**, clear the tweets of links, words with # and @, and unnecessary symbols. Replace some positive and negative emoticons with 'positive_tag'/'negative_tag' correspondingly (for better features later). <br> Tweet cleaning also involves tokenizing, removing stopwords, and lemmatizing the words.
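# As a worked example of the clearing steps described above, applied to a single made-up tweet (the regexes mirror the ones defined in `clear_tweets` below):

```python
import re

tweet = "I love this :) http://t.co/abc #happy @user!!"
s = re.sub(r'https?:\/\/[^ ]*', '', tweet).strip()   # drop links
s = re.sub('[:;]-?[)Dp]+|<3', ' positive_tag ', s)   # positive emoticons
s = re.sub(':-?\'?[(/Oo]+', ' negative_tag ', s)     # negative emoticons
s = re.sub(r'#[^ ]*', '', s)                         # words with hashtags
s = re.sub(r'@[^ ]*', '', s)                         # mentions
s = re.sub("[^A-Za-z_' ]+", "", s).strip()           # everything else
# s now tokenizes to: I love this positive_tag
```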
# +
import pandas as pd
import nltk
from nltk import word_tokenize
from nltk.corpus import stopwords
from nltk import pos_tag
from nltk.stem import StemmerI, RegexpStemmer, LancasterStemmer, ISRIStemmer, PorterStemmer, SnowballStemmer, RSLPStemmer
from nltk.stem import WordNetLemmatizer
import matplotlib.pyplot as plt
import numpy as np
from collections import Counter
from nltk.corpus import stopwords
import pickle
import seaborn as sns
from wordcloud import WordCloud, STOPWORDS, ImageColorGenerator
import gensim
from sklearn.model_selection import train_test_split
from sklearn.metrics import f1_score
from sklearn import svm
import os
import re
# %matplotlib inline
#read train2017.csv and cleanup
data = pd.read_csv("twitter_data/train2017.tsv", engine='python', sep="\t+", escapechar='\\', header=None, names=['id1','id2','sent','tweet'])
data.info()
print "\n"
for col in data.columns:
print col + ": " + str(len(data[col].unique())) + " unique values."
# -
# #### Functions with implementation of the above
# +
#clear tweets
def clear_tweets(data):
tweets = [re.sub(r'https?:\/\/[^ ]*', '',s).strip() for s in data['tweet']] #clear links
#replace emoticons with positive/negative tags !!!
pos_regex = '[:;]-?[)Dp]+|<3'
neg_regex = ':-?\'?[(/Oo]+'
tweets = [re.sub(pos_regex, ' positive_tag ',s).strip() for s in tweets]
tweets = [re.sub(neg_regex, ' negative_tag ',s).strip() for s in tweets]
#clear tweets of everything else not necessary
tweets = [re.sub(r'#[^ ]*', '', s).strip() for s in tweets] #clear words with hashtag
tweets = [re.sub(r'@[^ ]*', '', s).strip() for s in tweets] #clear words with @ sign
tweets = [re.sub("[^A-Za-z_' ]+", "", s).strip() for s in tweets] #clear all others
return tweets
def tokenize_tweets(tweets):
tokens = []
for sentence in tweets:
#tokens.append(word_tokenize(sentence))
tokens.append([w.lower() for w in word_tokenize(sentence)])
return tokens
def extra_clear(tokens):
#take words, clear stray apostrophes and leftover "st"/"th" fragments
for idx, item in enumerate(tokens):
#build a new list instead of removing items while iterating
tokens[idx] = [w for w in item if not (("'" in w and w != "c'mon") or w in ("st", "th"))]
return tokens
def remove_stopwords(tokens):
filtered = []
for lst in tokens:
filtered.append([w for w in lst if not w in stopwords.words('english')])
return filtered
# +
#Lemmatize with POS Tags
#it may take some minutes !!
from nltk.corpus import wordnet
def get_wordnet_pos(word):
"""Map POS tag to first character lemmatize() accepts"""
tag = nltk.pos_tag([word])[0][1][0].upper()
tag_dict = {"J": wordnet.ADJ,
"N": wordnet.NOUN,
"V": wordnet.VERB,
"R": wordnet.ADV}
return tag_dict.get(tag, wordnet.NOUN)
def lemmatize_words(tokens):
lemmatizer = WordNetLemmatizer()
lems = []
for lst in tokens:
lems.append([ lemmatizer.lemmatize(w, get_wordnet_pos(w)) for w in lst ])
return lems
# +
#check if directory exists
if not os.path.isdir("pkl_files"):
os.mkdir("pkl_files")
if not os.path.isfile("pkl_files/words.pkl"):
tweets = clear_tweets(data)
words = tokenize_tweets(tweets)
words = extra_clear(words)
words = remove_stopwords(words)
lems = lemmatize_words(words)
pickle.dump(lems, open("pkl_files/words.pkl", "wb"))
lems = pickle.load(open("pkl_files/words.pkl", "rb"))
print "End result:"
print lems[:10]
# -
# Keep a list of tuples with every word and its frequency of occurrence in each tweet
#fix tuples
total = []
for lst in lems:
if len(lst) > 0:
count = Counter(lst)
total.append(count.most_common(len(count)))
print len(total)
# ## Analyze data of training set
# Some code for finding useful statistics and wordclouds for presenting
# +
# find most common words in whole corpus -> wordcloud
buf = []
for lst in lems:
for value in lst:
buf.append(value)
count = Counter(buf)
#freq is a string with the 20 most common words
freq = ""
for x in count.most_common(20):
freq += x[0] + ' '
wordcloud = WordCloud(max_font_size=50, max_words=20, background_color="white").generate(freq)
# Display the generated image:
plt.figure()
plt.imshow(wordcloud, interpolation='bilinear')
plt.axis("off")
plt.show()
# +
# find most frequent words for each sentiment category -> unique(set) -> wordcloud
#positive
pos_words = []
neg_words = []
neutral_words = []
for idx, item in enumerate(lems):
for x in item:
if data['sent'][idx] == "positive":
pos_words.append(x)
elif data['sent'][idx] == "negative":
neg_words.append(x)
else:
neutral_words.append(x)
count1 = Counter(pos_words)
count2 = Counter(neg_words)
count3 = Counter(neutral_words)
out = ' '.join([x[0] for x in count1.most_common(10)])
out += ' '.join([x[0] for x in count2.most_common(10)])
out += ' '.join([x[0] for x in count3.most_common(10)])
wordcloud = WordCloud(max_font_size=40, max_words=20, background_color="white").generate(out)
# Display the generated image:
plt.figure()
plt.imshow(wordcloud, interpolation='bilinear')
plt.axis("off")
plt.show()
# -
#find number of unique tokens
unique = set()
for lst in lems: #lems holds the cleanest words
for value in lst:
unique.add(value)
print len(unique)
print list(unique)[50:100]
# ## Open testing set and prepare data
test_data = pd.read_csv("twitter_data/test2017.tsv", engine='python', sep="\t+", escapechar='\\', header=None, names=['id1','id2','sent','tweet'])
test_data.columns
# +
#open gold.csv for f1 score only!
val_data = pd.read_csv("twitter_data/SemEval2017_task4_subtaskA_test_english_gold.txt", engine='python', sep="\t+", header=None,names=['id','sent'])
print val_data.columns
print val_data.info()
# -
# #### Make X,y sets for validation and test sets
# **yval**: Contains y of training (validation) set<br>
# **ycor**: Contains y of testing set<br>
# **X**: Contains the tweets of the testing set
#y value, only for validation
y = data.sent
yval = []
for i in range(len(y)):
if len(lems[i]) == 0:
continue
v = y[i]
if v == 'neutral':
yval.append(0)
if v == 'positive':
yval.append(1)
if v == 'negative':
yval.append(-1)
print len(yval)
# +
#X and y values --> FOR test2017.csv
X = test_data.tweet
tmp = val_data.sent
ycor = []
for v in tmp:
if v == 'neutral':
ycor.append(0)
if v == 'positive':
ycor.append(1)
if v == 'negative':
ycor.append(-1)
print len(ycor)
# -
# Same list of tuples as above, but for test data
# +
words = []
for s in X:
words.append(s.split())
total_test = []
for lst in words:
count = Counter(lst)
total_test.append(count.most_common(len(count)))
print total_test[:5]
# -
# ## Bag of words
# **NOTE**: All the useful data saved in **.pkl** files will exist in **"pkl_files"** directory
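# The recurring "compute once, then pickle" pattern below can be wrapped in a small helper; `cached` is a hypothetical name, not part of the original code:

```python
import os
import pickle

def cached(path, compute):
    # load the pickled result if present; otherwise compute, save and return it
    if not os.path.isfile(path):
        with open(path, "wb") as f:
            pickle.dump(compute(), f)
    with open(path, "rb") as f:
        return pickle.load(f)
```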
# +
from sklearn.feature_extraction.text import CountVectorizer
#concatenate words into sentences
con_tweets = []
for lst in lems:
if len(lst)>0:
con_tweets.append(' '.join(lst))
print len(con_tweets)
#make bag-of-words
if not os.path.isfile("pkl_files/bow_train.pkl"):
bow_vectorizer = CountVectorizer(max_df=0.90, min_df=2, max_features=3000, stop_words='english')
bow_xtrain = bow_vectorizer.fit_transform(con_tweets)
pickle.dump(bow_xtrain, open("pkl_files/bow_train.pkl", "wb"))
bow_xtrain = pickle.load(open( "pkl_files/bow_train.pkl", "rb" ))
print bow_xtrain.shape
# +
#bag of words for testing
if not os.path.isfile("pkl_files/bow_test.pkl"):
bow_vectorizer = CountVectorizer(max_df=0.90, min_df=2, max_features=3000, stop_words='english')
bow_xtest = bow_vectorizer.fit_transform(X) #X : a list with the actual tweets to test
pickle.dump(bow_xtest, open("pkl_files/bow_test.pkl", "wb")) #save in bow.pkl
bow_xtest = pickle.load(open( "pkl_files/bow_test.pkl", "rb" ))
print(bow_xtest.shape)
# -
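# Note that above, a fresh CountVectorizer is fit on the test tweets, so its 3000 columns need not line up with the training columns. A sketch of the usual pattern: fit the vocabulary on the training texts only and reuse it for the test texts:

```python
from sklearn.feature_extraction.text import CountVectorizer

train_docs = ["good day", "bad day", "good mood"]
test_docs = ["good bad day"]

vec = CountVectorizer()
Xtr = vec.fit_transform(train_docs)  # learns the vocabulary from train only
Xte = vec.transform(test_docs)       # reuses it, so the columns line up
```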
# ## TF-IDF
# +
from sklearn.feature_extraction.text import TfidfVectorizer
if not os.path.isfile("pkl_files/ifidf_train.pkl"):
tfidf_vectorizer = TfidfVectorizer(max_df=0.90, min_df=2, max_features=3000, stop_words='english')
tfidf_temp = tfidf_vectorizer.fit_transform(con_tweets)
pickle.dump(tfidf_temp, open("pkl_files/tfidf_train.pkl", "wb")) #save tf_idf.pkl
tfidf_train = pickle.load(open( "pkl_files/tfidf_train.pkl", "rb" ))
print tfidf_train.shape
# +
#tfidf for testing
if not os.path.isfile("pkl_files/ifidf_test.pkl"):
tfidf_vectorizer = TfidfVectorizer(max_df=0.90, min_df=2, max_features=3000, stop_words='english')
tfidf_temp = tfidf_vectorizer.fit_transform(X)
pickle.dump(tfidf_temp, open("pkl_files/tfidf_test.pkl", "wb")) #save tf_idf.pkl
tfidf_test = pickle.load(open( "pkl_files/tfidf_test.pkl", "rb" ))
print tfidf_test.shape
# -
# ## Word embeddings
# +
from gensim.test.utils import common_texts, get_tmpfile
from gensim.models import Word2Vec
if not os.path.isfile("pkl_files/wemb_train.pkl"):
model_w2v = gensim.models.Word2Vec(
lems, #give lemmatized words!
size=300, # desired no. of features/independent variables
window=5, # context window size
min_count=1,
sg = 1, # 1 for skip-gram model
hs = 0,
negative = 10, # for negative sampling
workers= 2, # no.of cores
seed = 34)
model_w2v.train(lems, total_examples= len(lems), epochs=20)
pickle.dump(model_w2v, open("pkl_files/wemb_train.pkl", "wb"))
model_w2v_train = pickle.load(open( "pkl_files/wemb_train.pkl", "rb" ))
#little test :p
model_w2v_train.wv.most_similar(positive="mcgregor")
# +
#word embeddings for testing
if not os.path.isfile("pkl_files/wemb_test.pkl"):
model_w2v = gensim.models.Word2Vec(
words, #give our set X - its words!
size=300, # desired no. of features/independent variables
window=5, # context window size
min_count=1,
sg = 1, # 1 for skip-gram model
hs = 0,
negative = 10, # for negative sampling
workers= 2, # no.of cores
seed = 34)
    model_w2v.train(words, total_examples=len(words), epochs=20)
pickle.dump(model_w2v, open("pkl_files/wemb_test.pkl", "wb"))
model_w2v_test = pickle.load(open( "pkl_files/wemb_test.pkl", "rb" ))
# -
# Word embeddings **visualization**
# +
#function to show the word embeddings visualization
from sklearn.decomposition import PCA
from sklearn.manifold import TSNE
def tsne_plot(model):
labels = []
tokens = []
count = 0
for word in model.wv.vocab:
        tokens.append(model.wv[word])
labels.append(word)
count += 1
if count == 1000:
break
tsne_model = TSNE(perplexity=40, n_components=2,
init='pca', n_iter=2500, random_state=23)
new_values = tsne_model.fit_transform(tokens)
x = []
y = []
for value in new_values:
x.append(value[0])
y.append(value[1])
plt.figure(figsize=(16, 16))
for i in range(180):
plt.scatter(x[i],y[i])
plt.annotate(labels[i],
xy=(x[i], y[i]),
xytext=(5, 2),
textcoords='offset points',
ha='right',
va='bottom')
plt.show()
# +
#call
tsne_plot(model_w2v_train)
# -
# #### Combine the vectors of each word into one for the whole tweet
# +
#func to add all vectors of words into one (for every tweet)
from functools import reduce  # reduce is no longer a builtin in Python 3
def fix_vectors(tuples, model):
    tv = []
for i in range(len(tuples)):
sent = tuples[i]
temp = []
count=0
for tpl in sent:
            v = model.wv[tpl[0]]
v = v*tpl[1]
temp.append(v)
count += 1
if temp:
a = reduce(lambda x,y: x+y, temp)
tv.append(a/count)
return tv
# -
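# The combination rule above (count-weighted word vectors averaged into one tweet vector) can be sketched on toy data; the 3-d vectors and words below are made up, while the real model produces 300-d vectors:

```python
import numpy as np

# made-up 3-d "embeddings" and one tweet as (word, count) tuples
fake_model = {"good": np.array([2.0, 0.0, 4.0]),
              "day": np.array([0.0, 2.0, 2.0])}
tweet = [("good", 1), ("day", 1)]

# weight each word vector by its count, then average over the words
weighted = [fake_model[word] * count for word, count in tweet]
tweet_vec = sum(weighted) / len(weighted)
print(tweet_vec)  # [1. 1. 3.]
```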
# Lexicon analysis --> search lexicon files, compute the **-mean-** value of valence of each tweet and add as extra feature.<br>
# In our case, each word will have **300** features and if we perform searches in **N** files, each vector will end up having 300+N features
# +
#lex_file must contain whole path from current to lexicon
import collections as cl
def lexicon_analysis(lex_file, total):
lex_data = pd.read_csv(lex_file, engine='python', sep="\t+", escapechar='\\', header=None, names=['word','val'])
word_dict = cl.defaultdict()
for row in lex_data.itertuples():
word_dict[row.word] = row.val
lexicon_vals = []
    for i in range(len(total)):
        tweet = total[i]
        val_sum = 0
        for word, count in tweet:
            if word_dict.get(word):
                #word in lexicon
                val_sum += word_dict[word] * count
        lexicon_vals.append(val_sum)
    #threshold the summed valence to a sentiment label in {-1, 0, 1}
    for i,v in enumerate(lexicon_vals):
        if v < 0:
            lexicon_vals[i] = -1
        elif v == 0:
            lexicon_vals[i] = 0
        else:
            lexicon_vals[i] = 1
    return lexicon_vals
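# The per-tweet scoring and sign thresholding above can be sketched with an in-memory toy lexicon instead of a lexicon file (the words and valence values here are made up):

```python
# made-up lexicon: word -> valence
toy_lexicon = {"great": 3.0, "awful": -2.5}

def valence_sign(tweet, lexicon):
    # count-weighted sum of valences, thresholded to -1 / 0 / +1
    s = sum(lexicon.get(word, 0.0) * count for word, count in tweet)
    return -1 if s < 0 else (1 if s > 0 else 0)

print(valence_sign([("great", 1), ("awful", 1)], toy_lexicon))  # 1
print(valence_sign([("great", 1), ("awful", 2)], toy_lexicon))  # -1
```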
# +
tweet_vectors = fix_vectors(total, model_w2v_train)
print("Total length: ", len(tweet_vectors))
print("Length of individual vector: ", len(tweet_vectors[0]))
tweet_vectors_test = fix_vectors(total_test, model_w2v_test)
print("Total length: ", len(tweet_vectors_test))
print("Length of individual vector: ", len(tweet_vectors_test[0]))
# -
# #### Add some extra features in each vector
# Count positive/negative tags in tweets
index=0
for i in range(len(con_tweets)):
    tweet_vectors[index] = np.append(tweet_vectors[index], [con_tweets[i].count("positive_tag"), con_tweets[i].count("negative_tag")])
index += 1
# +
pos_count, neg_count = [],[]
pos_regex = r'[:;]-?[)Dp]+|<3'
neg_regex = r':-?\'?[(/Oo]+'
for sent in X:
plen = len(re.findall(pos_regex, sent))
pos_count.append(plen) if plen > 0 else pos_count.append(0)
nlen = len(re.findall(neg_regex, sent))
neg_count.append(nlen) if nlen > 0 else neg_count.append(0)
#add them to tweet_vectors_test
index=0
for i in range(len(words)):
tweet_vectors_test[index] = np.append(tweet_vectors_test[index], [pos_count[index], neg_count[index]])
index += 1
# +
#run above function for the wanted files in lexica
v1_train = lexicon_analysis("lexica/affin/affin.txt", total)
v2_train = lexicon_analysis("lexica/emotweet/valence_tweet.txt", total)
v3_train = lexicon_analysis("lexica/generic/generic.txt", total)
v4_train = lexicon_analysis("lexica/mydict/mydict.txt", total)
#add the vectors calculated to the tweets
i=0
for index in range(len(tweet_vectors)):
tweet_vectors[index] = np.append(tweet_vectors[index], [v1_train[index], v2_train[index], v3_train[index], v4_train[index], len(total[i])])
i += 1
print(len(tweet_vectors[0]))
# +
#run above function for the wanted files in lexica
v1_test = lexicon_analysis("lexica/affin/affin.txt", total_test)
v2_test = lexicon_analysis("lexica/emotweet/valence_tweet.txt", total_test)
v3_test = lexicon_analysis("lexica/generic/generic.txt", total_test)
v4_test = lexicon_analysis("lexica/mydict/mydict.txt", total_test)
#add the vectors calculated to the tweets
i=0
for index in range(len(tweet_vectors_test)):
tweet_vectors_test[index] = np.append(tweet_vectors_test[index], [v1_test[index], v2_test[index], v3_test[index], v4_test[index], len(words[i])])
i += 1
print(len(tweet_vectors_test[0]))
# -
# ## SVM
# **BOW** and validation set
#split training set
xtrain_bow, xvalid_bow, ytrain, yvalid = train_test_split(bow_xtrain, yval, random_state=42, test_size=0.2)
svc = svm.SVC(kernel='linear', C=1, probability=True)
svc = svc.fit(xtrain_bow[:5000], ytrain[:5000])
prediction_bow = svc.predict(xvalid_bow)
svm_score_bow_train = f1_score(yvalid, prediction_bow, average='micro')
print(svm_score_bow_train)
# **BOW** and testing set
prediction_bow = svc.predict(bow_xtest)
svm_score_bow_test = f1_score(ycor, prediction_bow, average='micro')
print(svm_score_bow_test)
# **TF_IDF** and validation set
xtrain_idf, xvalid_idf, ytrain, yvalid = train_test_split(tfidf_train, yval, random_state=42, test_size=0.2)
svc = svm.SVC(kernel='linear', C=1, probability=True)
svc = svc.fit(xtrain_idf[:5000], ytrain[:5000])
prediction_idf = svc.predict(xvalid_idf)
svm_score_idf_train = f1_score(yvalid, prediction_idf, average='micro')
print(svm_score_idf_train)
# **TF_IDF** and testing set
prediction_idf = svc.predict(tfidf_test)
svm_score_idf_test = f1_score(ycor, prediction_idf, average='micro')
print(svm_score_idf_test)
# **Word Embeddings** and validation set
xtrain_we, xvalid_we, ytrain, yvalid = train_test_split(tweet_vectors, yval, random_state=42, test_size=0.2)
svc = svm.SVC(kernel='linear', C=1, probability=True)
svc = svc.fit(xtrain_we[:6000], ytrain[:6000])
prediction_we = svc.predict(xvalid_we)
svm_score_we_train = f1_score(yvalid, prediction_we, average='micro')
print(svm_score_we_train)
# **Word Embeddings** and testing set
prediction_we = svc.predict(tweet_vectors_test)
svm_score_we_test = f1_score(ycor, prediction_we, average='micro')
print(svm_score_we_test)
# ## KNN
# **BOW** and validation set
# +
from sklearn.neighbors import KNeighborsClassifier
xtrain_bow, xvalid_bow, ytrain, yvalid = train_test_split(bow_xtrain, yval, random_state=42, test_size=0.2)
knn = KNeighborsClassifier(n_neighbors=20)
knn.fit(xtrain_bow[:5000], ytrain[:5000])
# -
prediction_bow = knn.predict(xvalid_bow)
knn_score_bow_train = f1_score(yvalid, prediction_bow, average="micro")
print(knn_score_bow_train)
# **BOW** and testing set
prediction_bow = knn.predict(bow_xtest)
knn_score_bow_test = f1_score(ycor, prediction_bow, average="micro")
print(knn_score_bow_test)
# **TF_IDF** and validation set
xtrain_idf, xvalid_idf, ytrain, yvalid = train_test_split(tfidf_train, yval, random_state=42, test_size=0.2)
knn = KNeighborsClassifier(n_neighbors=20)
knn.fit(xtrain_idf[:5000], ytrain[:5000])
prediction_idf = knn.predict(xvalid_idf)
knn_score_idf_train = f1_score(yvalid, prediction_idf, average="micro")
print(knn_score_idf_train)
# **TF_IDF** and testing set
prediction_idf = knn.predict(tfidf_test)
knn_score_idf_test = f1_score(ycor, prediction_idf, average="micro")
print(knn_score_idf_test)
# **Word Embeddings** and validation set
xtrain_we, xvalid_we, ytrain, yvalid = train_test_split(tweet_vectors, yval, random_state=42, test_size=0.2)
knn = KNeighborsClassifier(n_neighbors=20)
knn.fit(xtrain_we[:6000], ytrain[:6000])
prediction_we = knn.predict(xvalid_we)
knn_score_we_train = f1_score(yvalid, prediction_we, average="micro")
print(knn_score_we_train)
# **Word Embeddings** and testing set
prediction_we = knn.predict(tweet_vectors_test)
knn_score_we_test = f1_score(ycor, prediction_we, average='micro')
print(knn_score_we_test)
# ### Show f1 score in plots
# **SVM**
svm_scores = [svm_score_bow_test, svm_score_bow_train, svm_score_idf_test, svm_score_idf_train, svm_score_we_test, svm_score_we_train]
knn_scores = [knn_score_bow_test, knn_score_bow_train, knn_score_idf_test, knn_score_idf_train, knn_score_we_test, knn_score_we_train]
labels = ("bow test", "bow train", "tf_idf test", "tf_idf train", "w-embed test", "w-embed train")
# +
x = np.arange(6)
plt.figure(figsize=(15, 5))
plt.title("SVM scores")
plt.bar(range(len(svm_scores)), svm_scores, color="r", align="center")
plt.xticks(x, labels)
plt.show()
# -
# **KNN**
plt.figure(figsize=(15, 5))
plt.title("KNN scores")
plt.bar(range(len(knn_scores)), knn_scores, color="b", align="center")
plt.xticks(x, labels)
plt.show()
# ## Bonus: RoundRobin Classification
# Implementation of the **Round Robin Classification** algorithm (pairwise, one-vs-one), executed on the BOW and TF_IDF sets
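# This "round robin" scheme is the one-vs-one multiclass reduction, which scikit-learn also provides ready-made; a minimal sketch on synthetic clusters (not the tweet features used below):

```python
import numpy as np
from sklearn.multiclass import OneVsOneClassifier
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.RandomState(0)
# three well-separated clusters standing in for negative / neutral / positive
Xs = np.vstack([rng.randn(30, 2) + c for c in (-5, 0, 5)])
ys = np.repeat([-1, 0, 1], 30)

# one binary KNN per class pair, combined by voting
ovo = OneVsOneClassifier(KNeighborsClassifier(n_neighbors=5)).fit(Xs, ys)
print(len(ovo.estimators_))  # 3 pairwise classifiers for 3 classes
print(ovo.score(Xs, ys))
```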
# +
from scipy.sparse import lil_matrix
def Round_Robin(y, ytest, train_data, test_data):
c1 = c2 = c3 = 0
for i,v in enumerate(y):
if v != 0:
c1 += 1
if v != -1:
c2 += 1
if v != 1:
c3 += 1
pos_neg_x = lil_matrix((c1,3000))
pos_neg_y = []
pos_neu_x = lil_matrix((c2, 3000))
pos_neu_y = []
neg_neu_x = lil_matrix((c3, 3000))
neg_neu_y = []
c1 = c2 = c3 = 0
    for i,v in enumerate(y):
        if v != 0:
            pos_neg_x[c1] = train_data[i]
            pos_neg_y.append(v)
            c1 += 1
        if v != -1:
            pos_neu_x[c2] = train_data[i]
            pos_neu_y.append(v)
            c2 += 1
        if v != 1:
            neg_neu_x[c3] = train_data[i]
            neg_neu_y.append(v)
            c3 += 1
    print(pos_neg_x.shape, len(pos_neg_y))
    print(pos_neu_x.shape, len(pos_neu_y))
    print(neg_neu_x.shape, len(neg_neu_y))
#Train Classifiers
pos_neg_knn = KNeighborsClassifier(n_neighbors=20)
pos_neg_knn.fit(pos_neg_x, pos_neg_y)
pos_neu_knn = KNeighborsClassifier(n_neighbors=20)
pos_neu_knn.fit(pos_neu_x, pos_neu_y)
neg_neu_knn = KNeighborsClassifier(n_neighbors=20)
neg_neu_knn.fit(neg_neu_x, neg_neu_y)
#Get predictions
pos_neg_train_pred = pos_neg_knn.predict_proba(train_data)
pos_neg_test_pred = pos_neg_knn.predict_proba(test_data)
pos_neu_train_pred = pos_neu_knn.predict_proba(train_data)
    pos_neu_test_pred = pos_neu_knn.predict_proba(test_data)
neg_neu_train_pred = neg_neu_knn.predict_proba(train_data)
neg_neu_test_pred = neg_neu_knn.predict_proba(test_data)
    print(neg_neu_train_pred.shape)
#Train the final KNN classifier
train_pred = np.concatenate((pos_neg_train_pred,pos_neu_train_pred, neg_neu_train_pred),axis=1)
test_pred = np.concatenate((pos_neg_test_pred,pos_neu_test_pred, neg_neu_test_pred),axis=1)
#Run KNN
fin_classifier = KNeighborsClassifier(n_neighbors=20)
fin_classifier.fit(train_pred, y)
prediction = fin_classifier.predict(train_pred)
train_score = f1_score(y, prediction, average='micro')
prediction = fin_classifier.predict(test_pred)
test_score = f1_score(ytest, prediction, average='micro')
return train_score, test_score
# -
# **Bag of Words**
rr_score_bow_train, rr_score_bow_test = Round_Robin(yval, ycor, bow_xtrain, bow_xtest)
print("f1 score on training set: ", rr_score_bow_train)
print("f1 score on testing set: ", rr_score_bow_test)
# **TF-IDF**
rr_score_idf_train, rr_score_idf_test = Round_Robin(yval, ycor, tfidf_train, tfidf_test)
print("f1 score on training set: ", rr_score_idf_train)
print("f1 score on testing set: ", rr_score_idf_test)
| exercise.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import pandas as pd
import requests
def get_json(from_year, to_year):
    if not 2000 < int(from_year) < 2050 or not 2000 < int(to_year) < 2050:
return []
all_holidays = []
for i in range(from_year, to_year+1):
r = requests.get(f'https://date.nager.at/api/v2/publicholidays/{i}/CH')
all_holidays.extend(r.json())
return all_holidays
def set_date_index(df, col='date'):
return df.set_index(pd.to_datetime(df[col])).drop(col, axis=1)
def filter_canton(df, can='BS'):
return df[[(str('CH-' + can) in row) if row is not None else True for row in df.counties]]
canton = 'BS'
from_year = 2014
to_year = 2020
data = get_json(from_year, to_year)
# -
holidays = (pd.DataFrame.from_records(data)
.pipe(set_date_index)
.pipe(filter_canton, can=canton))
holidays = holidays.drop(columns=['fixed', 'localName', 'countryCode', 'global',
                                  'counties', 'launchYear', 'type'])
holidays.to_csv(f'../raw_data/holidays_{canton}_{from_year}_{to_year}.csv', columns=['name'])
holidays
| notebooks/Holidays.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import numpy as np
import numpy.polynomial as P
import scipy as sp
from matplotlib import pyplot as plt
from tqdm import tqdm
#from sklearn.preprocessing import PolynomialFeatures
from multiprocessing import Pool
import multiprocessing
import ZVnbrosse
from sklearn.preprocessing import PolynomialFeatures
from potentials import GaussPotential,GaussMixture,GausMixtureIdent,GausMixtureSame,BananaShape
from samplers import MCMC_sampler,Generate_train,ULA_light
from baselines import set_function,construct_ESVM_kernel,GenerateSigma
from martingale import approx_q
from optimize import Run_eval_test,optimize_parallel_new
from utils import *
import copy
def H(k, x):
    """Normalized probabilists' Hermite polynomial He_k(x)/sqrt(k!)"""
    if k==0:
return 1.0
if k ==1:
return x
if k==2:
return (x**2 - 1)/np.sqrt(2)
c = np.zeros(k+1,dtype = float)
c[k] = 1.0
h = P.hermite_e.hermeval(x,c) / np.sqrt(sp.special.factorial(k))
return h
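# As a sanity check, the hard-coded low-degree cases of `H` can be compared against the general normalized Hermite branch (this check re-implements that branch so it is self-contained):

```python
import numpy as np
import numpy.polynomial as P
from scipy.special import factorial

def herme_normalized(k, x):
    # He_k(x) / sqrt(k!), the general branch of H above
    c = np.zeros(k + 1)
    c[k] = 1.0
    return P.hermite_e.hermeval(x, c) / np.sqrt(factorial(k))

x = np.linspace(-2.0, 2.0, 5)
assert np.allclose(herme_normalized(0, x), 1.0)
assert np.allclose(herme_normalized(1, x), x)
assert np.allclose(herme_normalized(2, x), (x**2 - 1) / np.sqrt(2))
print("low-degree Hermite cases match the general formula")
```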
def test_traj(coefs_poly_regr,gamma,r_seed,lag,K_max,S_max,N_test,f_type,x0):
"""
function to perform 1-dimensional martingale decomposition
"""
X_test,Noise = generate_traj(x0,N_test,gamma,r_seed)
test_stat_vanilla = np.zeros(N_test,dtype = float)
test_stat_vr = np.zeros_like(test_stat_vanilla)
#compute number of basis polynomials
num_basis_funcs = K_max+1
#compute polynomials of noise variables Z_l
poly_vals = np.zeros((num_basis_funcs,N_test), dtype = float)
for k in range(len(poly_vals)):
poly_vals[k,:] = H(k,Noise)
#initialize function
f_vals_vanilla = X_test**2
#array to store control variates values
cvfs = np.zeros_like(f_vals_vanilla)
#compute coeffitients bar_a
bar_a_1 = np.zeros((lag,N_test),dtype=float)
bar_a_2 = np.zeros_like(bar_a_1)
for i in range(lag):
#second-order coefficients
bar_a_2[i,1:] = coefs_poly_regr[i,2]*np.sqrt(2)*gamma*(sigma(X_test[:-1]))**2
bar_a_2[i,0] = coefs_poly_regr[i,2]*np.sqrt(2)*gamma*(sigma(x0))**2
#first-order coefficients
bar_a_1[i,1:] = coefs_poly_regr[i,1]*np.sqrt(gamma)*sigma(X_test[:-1]) +\
coefs_poly_regr[i,2]*2*np.sqrt(gamma)*sigma(X_test[:-1])*(X_test[:-1]+gamma*b(X_test[:-1]))
bar_a_1[i,0] = coefs_poly_regr[i,1]*np.sqrt(gamma)*sigma(x0) +\
coefs_poly_regr[i,2]*2*np.sqrt(gamma)*sigma(x0)*(x0+gamma*b(x0))
bar_a_1 = bar_a_1*poly_vals[1,:]
bar_a_2 = bar_a_2*poly_vals[2,:]
#compute martingale sums
M_n_1 = 0.0
M_n_2 = 0.0
for l in range(N_test):
for r in range(min(N_test-l,lag)):
M_n_1 += bar_a_1[r,l]
M_n_2 += bar_a_2[r,l]
print("M_n_2: ",M_n_2)
print("M_n_1: ",M_n_1)
return np.mean(f_vals_vanilla),np.mean(f_vals_vanilla)-M_n_1/N_test,np.mean(f_vals_vanilla)-M_n_1/N_test-M_n_2/N_test
def approx_q(X_train,Y_train,N_traj_train,lag,max_deg):
"""
Function to regress q functions on a polynomial basis;
Args:
        X_train - train trajectory;
Y_train - function values;
N_traj_train - number of training trajectories;
lag - truncation point for coefficients, those for |p-l| > lag are set to 0;
max_deg - maximum degree of polynomial in regression
"""
dim = X_train[0,:].shape[0]
print("dimension = ",dim)
coefs_poly = np.array([])
for i in range(lag):
x_all = np.array([])
y_all = np.array([])
for j in range(N_traj_train):
y = Y_train[j,i:,0]
if i == 0:
x = X_train[j,:]
else:
x = X_train[j,:-i]
#concatenate results
if x_all.size == 0:
x_all = x
else:
x_all = np.concatenate((x_all,x),axis = 0)
y_all = np.concatenate([y_all,y])
        #build polynomial features of the trajectory for the regression
print("variance: ",np.var(y_all))
print(y_all[:50])
poly = PolynomialFeatures(max_deg)
X_features = poly.fit_transform(x_all)
print(X_features.shape)
lstsq_results = np.linalg.lstsq(X_features,y_all,rcond = None)
coefs = copy.deepcopy(lstsq_results[0])
coefs.resize((1,X_features.shape[1]))
if coefs_poly.size == 0:
coefs_poly = copy.deepcopy(coefs)
else:
coefs_poly = np.concatenate((coefs_poly,coefs),axis=0)
return coefs_poly
# +
a = 5.0
c = 5.0
sig = 2.0
def b(X_t):
"""
b function in the diffusion
"""
return a*(c-X_t)
def sigma(X_t):
    """
    sigma (diffusion coefficient) function in the diffusion
    """
return sig*np.sqrt(X_t)
def sample_discretized_diffusion(X_t,gamma_t):
"""
args: X_t - current value,
gamma_t - step size;
returns: (X_{t+1},xi_{t+1}) - value at the next time moment and the corresponding noise variable
"""
xi = np.random.randn()
return X_t + gamma_t*b(X_t) + np.sqrt(gamma_t)*sigma(X_t)*xi,xi
#currently we use this function without the burn-in
def generate_traj(x0,n,gamma,r_seed):
"""
args:
x0 - starting point;
n - number of steps;
gamma - step size (assumed to be fixed for now);
returns:
x_all,noise_all - np.arrays of shape (n,)
"""
x_all = np.zeros(n,dtype = float)
noise_all = np.zeros(n,dtype = float)
np.random.seed(r_seed)
x_all[0],noise_all[0] = sample_discretized_diffusion(x0,gamma)
for i in range(1,n):
x_all[i],noise_all[i] = sample_discretized_diffusion(x_all[i-1],gamma)
return x_all,noise_all
def run_monte_carlo(x,f_type):
if f_type == "quadratic":
f_vals = x**2
    else:
        raise NotImplementedError("only the quadratic test function is supported")
return np.mean(f_vals,axis=1)
# -
n = 2*10**3 #sample size
gamma = 5e-2 # Step size
n_traj = 1
n_traj_test = 100 # Number of independent MCMC trajectories for test
f_type = "quadratic"
K_max = 2 #max degree of Hermite polynomial
S_max = 2 #max degree of polynomial during regression stage
lag = 50 #maximal lag order
N_test = 2*10**3
# Sample discretized diffusion
x0 = 1
r_seed = 1812
X_train, noise_train = generate_traj(x0,n,gamma,r_seed)
#set target function
Y_train = X_train**2
X_train = X_train.reshape((1,-1,1))
Y_train = Y_train.reshape((1,-1,1))
# ### Optimize coefficients by solving regression with polynomial features
#polynomial coefficients
coefs_poly = approx_q(X_train,Y_train,n_traj,lag,S_max)
print(coefs_poly.shape)
print(coefs_poly)
regr_vals = np.zeros((lag,X_train.shape[1]),dtype=float)
for i in range(len(regr_vals)):
for j in range(S_max+1):
regr_vals[i,:] += coefs_poly[i,j]*X_train[0,:,0]**j
# Test our regressors
cur_lag = 1
N_pts = 500
plt.figure(figsize=(10, 10))
plt.title("Testing regression model",fontsize=20)
plt.plot(Y_train[0,cur_lag:N_pts+cur_lag,0],color='r',label='true function')
plt.plot(regr_vals[cur_lag,:N_pts],color='g',label = 'practical approximation')
plt.legend(loc = 'upper left',fontsize = 16)
plt.show()
test_seed = 1453
nbcores = multiprocessing.cpu_count()
trav = Pool(nbcores)
res = trav.starmap(test_traj, [(coefs_poly,gamma,test_seed+i,lag,K_max,S_max,N_test,f_type,x0) for i in range (n_traj_test)])
#res = trav.starmap(test_traj, [(Cur_pot,coefs_poly,step,test_seed+i,lag,K_max,S_max,N_burn,N_test,d,f_type,inds_arr,params,x0,fixed_start) for i in range (n_traj_test)])
trav.close()
res_new = np.asarray(res)
print(res_new.shape)
# ### Comparison plots
title = ""
labels = ['Vanilla\n Euler scheme', 'Euler scheme \nwith MDCV-1']#, 'ULA \nwith MDCV-2']
data = [res_new[:,0],res_new[:,1]]#,res_new[:,2]]
boxplot_ind(data, title, labels,path="./diffusion_quadratic.pdf")
| Code/Cox_Ross_semi_implicit.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: python 3.6
# language: python
# name: python36
# ---
# # Exercise
# According to DIN 1946-6, doors must be provided with an overflow opening. This is required so that air can flow from the supply-air rooms to the extract-air rooms even when the doors are closed.
#
# The overflow opening is usually realized by shortening the door gap by a certain undercut. According to DIN 1946-6, the following table must be observed:
#
# Air flow rate<br>door with seal | $\frac{m^3}{h}$ | 10 | 20 | 30 | 40 | 50 | 60 | 70 | 80 | 90 | 100
# ------------------------------|-----------------|----|----|----|----|----|----|----|----|----|----
# Overflow area | $cm^2$ | 25 | 50 | 75 | 100|125 |150 |175 |200 | 225| 250
# Undercut | $mm$ | 3 | 6 | 8 | 11 | 14 | 17 | 20 | 22 | 25 | 28
#
# - Calculate the clear widths of the door,
# - calculate the flow velocities $v$ of the air in the door gap,
# - plot the overflow area and the undercut, each in its own diagram, with the volume flow in $\frac{m^3}{h}$ on the $x$-axis.
# +
# pandas is the library for DataFrames (tables)
import pandas as pd
# numpy is the library for numerical computing (num py)
import numpy as np
# matplotlib is needed for plotting
import matplotlib.pyplot as plt, matplotlib.ticker as tk
# %matplotlib inline
# %config InlineBackend.figure_format='retina'
# -
# **The initial table as a DataFrame**
df = pd.DataFrame(
{
'dV_tuer': range(10,101,10),
'A_tuer': range(25,251,25),
'k_tuer': [3,6,8,11,14,17,20,22,25,28]
}
)
df
# - Clear widths of the door:
#
# The clear widths are not integrated into the DataFrame `df`, because it is already known that these quantities
# are not meaningful:
#
# The formula follows from the formula for the area:
#
# \begin{align}
# A_\text{tuer} &= k_\text{tuer}\cdot b_\text{tuer} \\[2ex]
# b_\text{tuer} &= \dfrac{A_\text{tuer}}{k_\text{tuer}}
# \quad = \quad
# \dfrac{A_\text{tuer}\cdot\dfrac{100\,mm^2}{cm^2}}{k_\text{tuer}}
# \end{align}
df_b_tuer = pd.DataFrame(
{'A_tuer': df.A_tuer,
'k_tuer': df.k_tuer,
'b_tuer': df.A_tuer*100/df.k_tuer
}
)
df_b_tuer
# It is more realistic to start from the standard width of an interior door, `b = 900 # mm`, and compute the undercuts
# (`df.k_ber`) and compare them with the values given in the table. This makes clear that the tabulated undercuts are values rounded to whole $mm$.
# - Computing the undercut and the flow velocity in the air gap:
# +
b = 900 # width of a standard interior door in mm
df['k_ber'] = df.A_tuer*100/b
df
# -
# The flow velocity follows from the relation between volume flow, flow cross-section and flow velocity:
#
# \begin{align}
# \dot V &= A\,v \\
# v &= \dfrac{\dot V}{A}
# \quad = \quad
# \dfrac{\dot V\cdot\dfrac{1\,h}{3600\,s}}{A\cdot\dfrac{1\,m^2}{10^4\,cm^2}}
# \end{align}
# +
df['v_tuer'] = df.dV_tuer/3600/(df.A_tuer/1e4)
df
# -
# The flow velocity is constant at about $1.1\,\frac{m}{s}$
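# This constant can be verified directly from the first table row (10 m³/h through 25 cm²):

```python
# 10 m^3/h -> m^3/s, 25 cm^2 -> m^2
v = (10 / 3600) / (25 / 1e4)
print(round(v, 2))  # 1.11 m/s
```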
# - Diagram with overflow area and undercut:
# +
ax = df.set_index('dV_tuer')[['A_tuer','k_ber']]\
.plot(subplots=True,
color=['b','r'],
grid=True,
rot=90,
)
ax[0].set(ylabel=r'Area in $\left[cm^2\right]$')
ax[1].scatter(df.dV_tuer,df.k_tuer,c='r',label='k_tuer')
ax[1].legend(loc='best')
ax[1].set(
    xlabel=r'Volume flow in $\left[\dfrac{m^3}{h}\right]$',
    ylabel=r'Undercut in $\left[mm\right]$'
)
plt.show() # suppresses unwanted text output
| src/05-Uebung_lsg_mit_pandas.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] slideshow={"slide_type": "slide"}
# ## Linear Regression - Introduction
#
# Linear regression relates a continuous response (dependent) variable to one or more predictors (features, independent variables), using the assumption that the relationship is linear in nature:
# - The relationship between each feature and the response is a straight line when we keep other features constant.
# - The slope of this line does not depend on the values of the other variables.
# - The effects of each variable on the response are additive (but we can include new variables that represent the interaction of two variables).
#
# In other words, the model assumes that the response variable can be explained or predicted by a linear combination of the features, except for random deviations from this linear relationship.
# + slideshow={"slide_type": "notes"}
from pathlib import Path
# %matplotlib inline
import matplotlib.pyplot as plt
plt.style.use('ggplot')
import pandas as pd
pd.options.display.float_format = '{:,.2f}'.format
import numpy as np
from mpl_toolkits.mplot3d import Axes3D
from matplotlib import cm
import statsmodels.api as sm
from sklearn.linear_model import SGDRegressor
from sklearn.preprocessing import StandardScaler
# -
# ### Simple Regression
# #### Generate random data
# + slideshow={"slide_type": "slide"}
x = np.linspace(-5, 50, 100)
y = 50 + 2 * x + np.random.normal(0, 20, size=len(x))
data = pd.DataFrame({'X': x, 'Y': y})
ax = data.plot.scatter(x='X', y='Y');
# + [markdown] slideshow={"slide_type": "slide"}
# Our linear model with a single independent variable on the left-hand side assumes the following form:
#
# $$y = \beta_0 + \beta_1 X_1 + \epsilon$$
#
# $\epsilon$ accounts for the deviations or errors that we will encounter when our data do not actually fit a straight line. When $\epsilon$ materializes, that is when we run the model of this type on actual data, the errors are called **residuals**.
# -
# #### Estimate a simple regression with statsmodels
# + slideshow={"slide_type": "slide"}
X = sm.add_constant(data['X'])
model = sm.OLS(data['Y'], X).fit()
print(model.summary())
# -
# #### Verify calculation
beta = np.linalg.inv(X.T.dot(X)).dot(X.T.dot(y))
pd.Series(beta, index=X.columns)
# #### Display model & residuals
# + slideshow={"slide_type": "slide"}
data['y-hat'] = model.predict()
data['residuals'] = model.resid
ax = data.plot.scatter(x='X', y='Y', c='darkgrey')
data.plot.line(x='X', y='y-hat', ax=ax);
for _, row in data.iterrows():
plt.plot((row.X, row.X), (row.Y, row['y-hat']), 'k-')
# -
# ### Multiple Regression
#
# + [markdown] slideshow={"slide_type": "slide"}
# For two independent variables, the model simply changes as follows:
#
# $$y = \beta_0 + \beta_1 X_1 + \beta_2 X_2 + \epsilon$$
# -
# #### Generate new random data
# + slideshow={"slide_type": "slide"}
## Create data
size = 25
X_1, X_2 = np.meshgrid(np.linspace(-50, 50, size), np.linspace(-50, 50, size), indexing='ij')
data = pd.DataFrame({'X_1': X_1.ravel(), 'X_2': X_2.ravel()})
data['Y'] = 50 + data.X_1 + 3 * data.X_2 + np.random.normal(0, 50, size=size**2)
## Plot
three_dee = plt.figure(figsize=(15, 5)).gca(projection='3d')
three_dee.scatter(data.X_1, data.X_2, data.Y, c='g');
# -
X = data[['X_1', 'X_2']]
y = data['Y']
# #### Estimate multiple regression model with statsmodels
# + slideshow={"slide_type": "slide"}
X_ols = sm.add_constant(X)
model = sm.OLS(y, X_ols).fit()
print(model.summary())
# -
# #### Verify computation
beta = np.linalg.inv(X_ols.T.dot(X_ols)).dot(X_ols.T.dot(y))
pd.Series(beta, index=X_ols.columns)
# #### Save output as image
plt.rc('figure', figsize=(12, 7))
plt.text(0.01, 0.05, str(model.summary()), {'fontsize': 14}, fontproperties = 'monospace')
plt.axis('off')
plt.tight_layout()
plt.subplots_adjust(left=0.2, right=0.8, top=0.8, bottom=0.1)
plt.savefig('multiple_regression_summary.png', bbox_inches='tight', dpi=300);
# #### Display model & residuals
# + slideshow={"slide_type": "slide"}
three_dee = plt.figure(figsize=(15, 5)).gca(projection='3d')
three_dee.scatter(data.X_1, data.X_2, data.Y, c='g')
data['y-hat'] = model.predict()
to_plot = data.set_index(['X_1', 'X_2']).unstack().loc[:, 'y-hat']
three_dee.plot_surface(X_1, X_2, to_plot.values, color='black', alpha=0.2, linewidth=1, antialiased=True)
for _, row in data.iterrows():
plt.plot((row.X_1, row.X_1), (row.X_2, row.X_2), (row.Y, row['y-hat']), 'k-');
three_dee.set_xlabel('$X_1$');three_dee.set_ylabel('$X_2$');three_dee.set_zlabel(r'$Y, \hat{Y}$')
plt.savefig('multiple_regression_plot.png', dpi=300);
# -
# Additional [diagnostic tests](https://www.statsmodels.org/dev/diagnostic.html)
# ## Stochastic Gradient Descent Regression
# ### Prepare data
#
# The gradient is sensitive to scale and so is SGDRegressor. Use the `StandardScaler` or `scale` to adjust the features.
scaler = StandardScaler()
X_ = scaler.fit_transform(X)
# ### Configure SGDRegressor
sgd = SGDRegressor(loss='squared_loss', fit_intercept=True,
shuffle=True, random_state=42,
learning_rate='invscaling',
eta0=0.01, power_t=0.25)
# ### Fit Model
# sgd.n_iter = np.ceil(10**6 / len(y))
sgd.fit(X=X_, y=y)
coeffs = sgd.coef_ / scaler.scale_  # undo standardization: slope_orig = slope_scaled / scale
pd.Series(coeffs, index=X.columns)
resids = pd.DataFrame({'sgd': y - sgd.predict(X_),
'ols': y - model.predict(sm.add_constant(X))})
resids.pow(2).sum().div(len(y)).pow(.5)
resids.plot.scatter(x='sgd', y='ols');
| Chapter07/01_linear_regression/linear_regression_intro.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
from datetime import datetime
import pandas as pd
import numpy as np
import codecs, json
import matplotlib.pyplot as plt
import seaborn as sns
sns.set()
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error
import scipy
# +
with open('/Users/calmaleh/Desktop/school/project_course/jeppesen/data_rich_ac.bsad') as json_file:
json_data = json.load(json_file)
frames = []
for j in range(len(json_data['tables'])):
df = pd.DataFrame(np.array(json_data['tables'][j]['table'])[:,:],
columns = json_data['tables'][j]['header']['variables'][:])
df['state'] = json_data['tables'][j]['header']['flightphase']
if df['state'][0] == 'cruise':
frames.append(df)
df = pd.concat(frames,ignore_index=True)
df = df[['DISA','ALTITUDE','MASS','MACH','FUELFLOW']]
X = df.drop(['FUELFLOW'], axis=1)
y = df.FUELFLOW
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)
#X_train.insert(4, 'FUELFLOW', y_train, False)
#X = X_train.append(X_test)
#X = X.sort_index(axis=0)
test = X_test.iloc[0]
y_check = y_test.iloc[0]
# -
| old/pandas_approach/.ipynb_checkpoints/test_jepperson-checkpoint.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: 'Python 3.9.7 64-bit (''conda-forge'': conda)'
# language: python
# name: python3
# ---
# +
import bisect
def cookies(k, A):
A.sort()
niter = 0
    while len(A) >= 2:
        if A[0] >= k: break
        least1, least2 = A[0], A[1]
        A = A[2:]
        bisect.insort(A, least1+2*least2)
        niter += 1
        print(A)
    if A[0] < k: return -1
    return niter
print(cookies(9, [2, 7, 3, 6, 4, 6]))
print(cookies(7, [1, 2, 3, 9, 10, 12]))
'''
Well, this implementation exceeds the time limit since insertion sort has to move all elements to the right of the insertion location, which takes O(n).
'''
# +
import heapq
def cookies(k, A):
heapq.heapify(A)
niter = 0
while len(A) >= 2:
if A[0] >= k: break
least1, least2 = heapq.heappop(A), heapq.heappop(A)
heapq.heappush(A, least1+2*least2)
niter += 1
if A[0] < k: return -1
return niter
print(cookies(9, [2, 7, 3, 6, 4, 6]))
print(cookies(7, [1, 2, 3, 9, 10, 12]))
# -
| hacker-rank/jesse-and-cookies.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import tushare as ts
import matplotlib
import numpy as np
import matplotlib.pyplot as plt
df = ts.get_hist_data('300036')  # fetch all daily K-line (candlestick) data at once
df
x = df["ma5"]
y = df["low"]
plt.plot(x, y)
plt.title('300036 Hist-Data')
plt.show()
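The `ma5` column plotted above is the 5-day moving average of the closing price. A minimal pure-Python sketch of that rolling mean (the sample prices are made up for illustration) looks like:

```python
def moving_average(prices, window=5):
    # Simple rolling mean: None until a full window is available.
    out = []
    for i in range(len(prices)):
        if i + 1 < window:
            out.append(None)
        else:
            out.append(sum(prices[i + 1 - window:i + 1]) / window)
    return out

closes = [10.0, 10.5, 11.0, 10.8, 11.2, 11.5]
print(moving_average(closes))
```

The leading `None` entries mirror the NaN padding pandas produces for an incomplete window.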
| databook/tushare/ts_histdata.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .jl
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Julia 1.1.1
# language: julia
# name: julia-1.1
# ---
# # Set-Up
# +
# <NAME>
# 2019 January
# DMF Package
# Simulations
# DMF Package
push!(LOAD_PATH, "../src/")
using DMF
# Basic Linear Algebra Functionality
using LinearAlgebra
using Statistics
using StatsBase
# Plotting and Output
using Plots
using Measures
using LaTeXStrings
# File IO
using FileIO
using JLD2
# SOBI Wrapper Function
using RCall
include("../src/SOBI_Wrapper.jl")
# Set plotting interface
ENV["MPLBACKEND"]="qt5agg"
pyplot()
# -
# ## Reload Data
# +
# To Load Data
tmp = load("data/synthetic_data.jld2")
n_list = tmp["n_list"]
k = tmp["k"]
p = tmp["p"]
p_missing = tmp["p_missing"]
q_list = tmp["q_list"]
sigma_list = tmp["sigma_list"]
f_first = tmp["f_first"]
f_second = tmp["f_second"]
fs = tmp["fs"]
arma_std = tmp["arma_std"]
ar = tmp["ar"]
ma = tmp["ma"]
trials = tmp["trials"]
trace_size = tmp["trace_size"]
cos_eigenvalue_error = tmp["cos_eigenvalue_error"]
cos_eigenvector_error = tmp["cos_eigenvector_error"]
cos_signal_error = tmp["cos_signal_error"]
arma_eigenvalue_error = tmp["arma_eigenvalue_error"]
arma_eigenvector_error = tmp["arma_eigenvector_error"]
arma_signal_error = tmp["arma_signal_error"]
a_trials = tmp["a_trials"]
ac_35 = tmp["ac_35"]
ac_27 = tmp["ac_27"]
cos_phase_eigenvalues = tmp["cos_phase_eigenvalues"]
cos_phase_eigenvalue_error = tmp["cos_phase_eigenvalue_error"]
cos_phase_eigenvector_error = tmp["cos_phase_eigenvector_error"]
cos_phase_signal_error = tmp["cos_phase_signal_error"]
cos_missing_eigenvalue_error = tmp["cos_missing_eigenvalue_error"]
cos_missing_eigenvector_error = tmp["cos_missing_eigenvector_error"]
cos_missing_signal_error = tmp["cos_missing_signal_error"]
cos_missing_n_eigenvalue_error = tmp["cos_missing_n_eigenvalue_error"]
cos_missing_n_eigenvector_error = tmp["cos_missing_n_eigenvector_error"]
cos_missing_n_signal_error = tmp["cos_missing_n_signal_error"]
lags = tmp["lags"]
cos_sobi_signal_error = tmp["cos_sobi_signal_error"]
cos_sobi_eigenvector_error = tmp["cos_sobi_eigenvector_error"];
# -
# ## Parameters
# +
n_list = Int.(round.(exp10.(range(log10(500.0); stop = 4.0, length = 10))))
n = maximum(n_list)
k = 2 # Number of modes
# Mode vectors
p = 100
p_missing = 500
# Missing Data Observation Probability
q_list = exp10.(range(-2.0; stop = 0.0, length = 15))
# Noise Variance
sigma_list = [0.0; exp10.(range(-2.0; stop = log10(2.0), length = 15))]
sigma_list = [sigma_list; 10.0.^(log10(2.0) .+ diff(log10.(sigma_list))[end] * (1:1:4))]
# Cosine Parameters
f_first = [0.25; 0.5]
f_second = [0.25; 2.0]
fs = 1.0
# ARMA Parameters
arma_std = 1.0
ar = [[0.2; 0.7], [0.3; 0.5]]
ma = [[], []]
trials = 200 # For missing data, noise, ARMA
trace_size = (400, 250); # For plotting
# -
# # Cosines
# +
# Eigenvectors
Q = randn(p, k)
Q = mapslices(normalize, Q; dims = 1)
# Two time series
S1_full = [gen_cos_sequence(n_list[end], f_first[1], fs)[1] gen_cos_sequence(n_list[end], f_first[2], fs)[1]]
S2_full = [gen_cos_sequence(n_list[end], f_second[1], fs)[1] gen_cos_sequence(n_list[end], f_second[2], fs)[1]]
# Magnitudes
D = Diagonal([1.0; 1.0])
# True Eigenvalues
w1_true = sort(cos.(f_first))
w2_true = sort(cos.(f_second))
cos_eigenvalue_error = zeros(length(n_list), k, 2)
cos_eigenvector_error = zeros(length(n_list), 2)
cos_signal_error = zeros(length(n_list), 2)
for nn = 1:1:length(n_list) # Loop over signal length
n = n_list[nn]
S1 = mapslices(normalize, S1_full[1:n, :]; dims = 1)
S2 = mapslices(normalize, S2_full[1:n, :]; dims = 1)
X1 = Q * D * S1'
X2 = Q * D * S2'
w1, Q1, C1, A = dmf(X1; C_nsv = k, lag = 1)
w2, Q2, C2, A = dmf(X2; C_nsv = k, lag = 1)
# Compute (squared) error
cos_eigenvalue_error[nn, :, 1] = (w1_true - sort(real.(w1[1:k]))[:]).^2.0
cos_eigenvalue_error[nn, :, 2] = (w2_true - sort(real.(w2[1:k]))[:]).^2.0
cos_eigenvector_error[nn, 1] = eigenvector_error(Q[:, 1:k], Q1[:, 1:k])
cos_eigenvector_error[nn, 2] = eigenvector_error(Q[:, 1:k], Q2[:, 1:k])
cos_signal_error[nn, 1] = eigenvector_error(S1[:, 1:k], C1[:, 1:k])
cos_signal_error[nn, 2] = eigenvector_error(S2[:, 1:k], C2[:, 1:k])
end
# +
# Q ERROR
n_pl = [10^2.6; n_list; 10^4.1]
p1 = plot(reuse = false, size = trace_size, markersize = 6,
yscale = :log10, xscale = :log10,
xlim = (minimum(n_pl), maximum(n_pl)),
ylim = (minimum(cos_eigenvector_error[:]) / 2, 10.0 * maximum(cos_eigenvector_error[:])),
xlabel = "n = #Samples", ylabel = "Squared Error",
xticks = [10^2.7, 10^3.0, 10^3.5, 10^4.0]
)
p1 = scatter!(n_list, cos_eigenvector_error[:, 1], label = L"\omega_2 = 0.5", marker = :circle, color = :blue, markersize = 6)
p1 = scatter!(n_list, cos_eigenvector_error[:, 2], label = L"\omega_2 = 2", marker = :star4, color = :red, markersize = 6)
boundline1 = (5.0 ./ n_pl).^(1.0)
p1 = plot!(n_pl, boundline1, label = "", linestyle = :dash)
boundline2 = (0.003 ./ n_pl).^(1.0)
p1 = plot!(n_pl, boundline2, label = "", linestyle = :dash)
p1 = plot!(ann = (n_pl[end - 3], 2 * boundline1[end - 4], L"5/n", 12))
p1 = plot!(ann = (n_pl[end - 3], 2 * boundline2[end - 4], L"0.003/n", 12))
font = Plots.font("Helvetica", 12)
p1 = plot!(legend = :bottomleft, legendfont = font, guidefont = font, xtickfont = font, ytickfont = font)
# +
# S ERROR
n_pl = [10^2.6; n_list; 10^4.1]
p1 = plot(reuse = false, size = trace_size, markersize = 6,
yscale = :log10, xscale = :log10,
xlim = (minimum(n_pl), maximum(n_pl)),
ylim = (minimum(cos_signal_error[:]) / 2, 10.0 * maximum(cos_signal_error[:])),
xlabel = "n = #Samples", ylabel = "Squared Error",
xticks = [10^2.7, 10^3.0, 10^3.5, 10^4.0]
)
p1 = scatter!(n_list, cos_signal_error[:, 1], label = L"\omega_2 = 0.5", marker = :circle, color = :blue, markersize = 6)
p1 = scatter!(n_list, cos_signal_error[:, 2], label = L"\omega_2 = 2", marker = :star4, color = :red, markersize = 6)
boundline1 = (5.0 ./ n_pl).^(1.0)
p1 = plot!(n_pl, boundline1, label = "", linestyle = :dash)
boundline2 = (0.003 ./ n_pl).^(1.0)
p1 = plot!(n_pl, boundline2, label = "", linestyle = :dash)
p1 = plot!(ann = (n_pl[end - 3], 2 * boundline1[end - 4], L"5/n", 12))
p1 = plot!(ann = (n_pl[end - 3], 2 * boundline2[end - 4], L"0.003/n", 12))
font = Plots.font("Helvetica", 12)
p1 = plot!(legend = :bottomleft, legendfont = font, guidefont = font, xtickfont = font, ytickfont = font)
# +
# Eigenvalue ERROR
n_pl = [10^2.6; n_list; 10^4.1]
p1 = plot(reuse = false, size = trace_size, markersize = 6,
yscale = :log10, xscale = :log10,
xlim = (minimum(n_pl), maximum(n_pl)),
ylim = (minimum(cos_eigenvalue_error[:]) / 2, 10.0 * maximum(cos_eigenvalue_error[:])),
xlabel = "n = #Samples", ylabel = "Squared Error",
xticks = [10^2.7, 10^3.0, 10^3.5, 10^4.0]
)
p1 = scatter!(n_list, cos_eigenvalue_error[:, 1, 1], label = L"\omega_2 = 0.5, \lambda_2", marker = :circle, color = :blue, markersize = 6)
p1 = scatter!(n_list, cos_eigenvalue_error[:, 2, 1], label = L"\omega_2 = 0.5, \lambda_1", marker = :star4, color = :blue, markersize = 6)
p1 = scatter!(n_list, cos_eigenvalue_error[:, 1, 2], label = L"\omega_2 = 2, \lambda_2", marker = :square, color = :red, markersize = 6)
p1 = scatter!(n_list, cos_eigenvalue_error[:, 2, 2], label = L"\omega_2 = 2, \lambda_1", marker = :utriangle, color = :red, markersize = 6)
n_pl = [10^2.6; n_list; 10^4.1]
boundline1 = 0.003 * (1.0 ./ n_pl).^(1.0)
p1 = plot!(n_pl, boundline1, label = "", linestyle = :dash)
p1 = plot!(ann = (n_pl[end - 3], 2 * boundline1[end - 4], L"0.003/n", 12))
font = Plots.font("Helvetica", 12)
p1 = plot!(legend = :bottomleft, legendfont = Plots.font("Helvetica", 11), guidefont = font, xtickfont = font, ytickfont = font)
# -
# # ARMA
# +
# Eigenvectors
Q = randn(p, k)
Q = mapslices(normalize, Q; dims = 1)
# Magnitudes
D = Diagonal([1.0; 1.0])
# True Eigenvalues: Lag 1, Lag 2
w1_true = sort([0.6; 2.0 / 3.0])
w2_true = sort([0.68; 5.0 / 6.0])
arma_eigenvalue_error = zeros(length(n_list), k, 2, trials)
arma_eigenvector_error = zeros(length(n_list), 2, trials)
arma_signal_error = zeros(length(n_list), 2, trials)
for tr = 1:1:trials
S_full = [gen_arma_sequence(n_list[end], ar[1], ma[1], arma_std) gen_arma_sequence(n_list[end], ar[2], ma[2], arma_std)]
for nn = 1:1:length(n_list) # Loop over signal length
n = n_list[nn]
S_inner = mapslices(normalize, S_full[1:n, :]; dims = 1)
X = Q * D * S_inner'
w1, Q1, C1, A = dmf(X; C_nsv = k, lag = 1)
w2, Q2, C2, A = dmf(X; C_nsv = k, lag = 2)
# Compute (squared) error
arma_eigenvalue_error[nn, :, 1, tr] = (w1_true - sort(real.(w1[1:k]))[:]).^2.0
arma_eigenvalue_error[nn, :, 2, tr] = (w2_true - sort(real.(w2[1:k]))[:]).^2.0
arma_eigenvector_error[nn, 1, tr] = eigenvector_error(Q[:, 1:k], Q1[:, 1:k])
arma_eigenvector_error[nn, 2, tr] = eigenvector_error(Q[:, 1:k], Q2[:, 1:k])
arma_signal_error[nn, 1, tr] = eigenvector_error(S_inner[:, 1:k], C1[:, 1:k])
arma_signal_error[nn, 2, tr] = eigenvector_error(S_inner[:, 1:k], C2[:, 1:k])
end
end
# +
# Q ERROR
arma_eigenvector_error_m = dropdims(mean(arma_eigenvector_error; dims = 3); dims = 3)
n_pl = [10^2.6; n_list; 10^4.1]
p1 = plot(reuse = false, size = trace_size, markersize = 6,
yscale = :log10, xscale = :log10,
xlim = (minimum(n_pl), maximum(n_pl)),
ylim = (minimum(arma_eigenvector_error_m[:]) / 2, 10.0 * maximum(arma_eigenvector_error_m[:])),
xlabel = "n = #Samples", ylabel = "Squared Error",
xticks = [10^2.7, 10^3.0, 10^3.5, 10^4.0]
)
p1 = scatter!(n_list, arma_eigenvector_error_m[:, 1], label = "Lag 1", marker = :circle, color = :blue, markersize = 6)
p1 = scatter!(n_list, arma_eigenvector_error_m[:, 2], label = "Lag 2", marker = :star4, color = :red, markersize = 6)
boundline1 = 1200 * (log.(log.(n_pl)) ./ n_pl).^(1.0)
p1 = plot!(n_pl, boundline1, label = "", linestyle = :dash)
boundline2 = 100 * (log.(log.(n_pl)) ./ n_pl).^(1.0)
p1 = plot!(n_pl, boundline2, label = "", linestyle = :dash)
p1 = plot!(ann = (n_pl[end - 3], 2 * boundline1[end - 4], L"1200 \log(\log(n))/n", 12))
p1 = plot!(ann = (n_pl[end - 3], 2 * boundline2[end - 3], L"100 \log(\log(n))/n", 12))
font = Plots.font("Helvetica", 12)
p1 = plot!(legend = :bottomleft, legendfont = font, guidefont = font, xtickfont = font, ytickfont = font)
# +
# S ERROR
arma_signal_error_m = dropdims(mean(arma_signal_error; dims = 3); dims = 3)
n_pl = [10^2.6; n_list; 10^4.1]
p1 = plot(reuse = false, size = trace_size, markersize = 6,
yscale = :log10, xscale = :log10,
xlim = (minimum(n_pl), maximum(n_pl)),
ylim = (minimum(arma_signal_error_m[:]) / 2, 10.0 * maximum(arma_signal_error_m[:])),
xlabel = "n = #Samples", ylabel = "Squared Error",
xticks = [10^2.7, 10^3.0, 10^3.5, 10^4.0]
)
p1 = scatter!(n_list, arma_signal_error_m[:, 1], label = "Lag 1", marker = :circle, color = :blue, markersize = 6)
p1 = scatter!(n_list, arma_signal_error_m[:, 2], label = "Lag 2", marker = :star4, color = :red, markersize = 6)
boundline1 = 1200 * (log.(log.(n_pl)) ./ n_pl).^(1.0)
p1 = plot!(n_pl, boundline1, label = "", linestyle = :dash)
boundline2 = 100 * (log.(log.(n_pl)) ./ n_pl).^(1.0)
p1 = plot!(n_pl, boundline2, label = "", linestyle = :dash)
p1 = plot!(ann = (n_pl[end - 3], 2 * boundline1[end - 4], L"1200 \log(\log(n))/n", 12))
p1 = plot!(ann = (n_pl[end - 3], 2 * boundline2[end - 3], L"100 \log(\log(n))/n", 12))
font = Plots.font("Helvetica", 12)
p1 = plot!(legend = :bottomleft, legendfont = font, guidefont = font, xtickfont = font, ytickfont = font)
# +
# Eigenvalue ERROR
arma_eigenvalue_error_m = dropdims(mean(arma_eigenvalue_error; dims = 4); dims = 4)
n_pl = [10^2.6; n_list; 10^4.1]
p1 = plot(reuse = false, size = trace_size, markersize = 6,
yscale = :log10, xscale = :log10,
xlim = (minimum(n_pl), maximum(n_pl)),
ylim = (minimum(arma_eigenvalue_error_m[:]) / 2, 10.0 * maximum(arma_eigenvalue_error_m[:])),
xlabel = "n = #Samples", ylabel = "Squared Error",
xticks = [10^2.7, 10^3.0, 10^3.5, 10^4.0]
)
p1 = scatter!(n_list, arma_eigenvalue_error_m[:, 1, 1], label = L"Lag\,\,1, \lambda_2", marker = :circle, color = :blue, markersize = 6)
p1 = scatter!(n_list, arma_eigenvalue_error_m[:, 2, 1], label = L"Lag\,\,1, \lambda_1", marker = :star4, color = :blue, markersize = 6)
p1 = scatter!(n_list, arma_eigenvalue_error_m[:, 1, 2], label = L"Lag\,\,2, \lambda_2", marker = :square, color = :red, markersize = 6)
p1 = scatter!(n_list, arma_eigenvalue_error_m[:, 2, 2], label = L"Lag\,\,2, \lambda_1", marker = :utriangle, color = :red, markersize = 6)
n_pl = [10^2.6; n_list; 10^4.1]
boundline1 = 2.5 * (log.(log.(n_pl)) ./ n_pl).^(1.0)
p1 = plot!(n_pl, boundline1, label = "", linestyle = :dash)
p1 = plot!(ann = (n_pl[4], 2 * boundline1[3], L"2.5 \log(\log(n))/n", 12))
font = Plots.font("Helvetica", 12)
p1 = plot!(legend = :topright, legendfont = Plots.font("Helvetica", 11), guidefont = font, xtickfont = font, ytickfont = font)
# +
# Autocorrelation Function
"""
a_trials = 100
ac_35 = zeros(n_list[end])
ac_27 = zeros(n_list[end])
for tr = 1:1:a_trials
ar35 = gen_arma_sequence(n_list[end], [0.3; 0.5], [], 1.0)
ar27 = gen_arma_sequence(n_list[end], [0.2; 0.7], [], 1.0)
ac_35 += autocorrelation(ar35)
ac_27 += autocorrelation(ar27)
end
ac_35 /= a_trials
ac_27 /= a_trials
"""
p_idx = 1:20
p1 = plot(reuse = false, size = (trace_size[1], trace_size[2] / 2))
p1 = scatter!(p_idx, ac_35[p_idx .+ 1], label = "AR(2), (0.3, 0.5)", xlim = (p_idx[1] - 0.5, p_idx[end] + 0.5), ylim = (0, 1.5), markersize = 6, marker = :circle, color = :blue, ylabel = "Correlation", xticks = [1, 2, 5, 10, 15, 20])
for pp = 1:1:length(p_idx)
p1 = plot!([p_idx[pp], p_idx[pp]], [0, ac_35[p_idx[pp] .+ 1]], label = "", color = :black)
end
font = Plots.font("Helvetica", 12)
p1 = plot!(legend = :topright, legendfont = font, guidefont = font, xtickfont = font, ytickfont = font)
p2 = plot(reuse = false, size = (trace_size[1], trace_size[2] / 2))
p2 = scatter!(p_idx, ac_27[p_idx .+ 1], label = "AR(2), (0.2, 0.7)", xlim = (p_idx[1] - 0.5, p_idx[end] + 0.5), ylim = (0, 3), markersize = 6, marker = :star4, color = :red, xlabel = "Lag", ylabel = "Correlation", yticks = ([0.0, 1.0, 2.0, 3.0], ["0.0", "1.0", "2.0", "3.0"]), xticks = [1, 2, 5, 10, 15, 20])
for pp = 1:1:length(p_idx)
p2 = plot!([p_idx[pp], p_idx[pp]], [0, ac_27[p_idx[pp] .+ 1]], label = "", color = :black)
end
font = Plots.font("Helvetica", 12)
p2 = plot!(legend = :topright, legendfont = font, guidefont = font, xtickfont = font, ytickfont = font)
plot(p1, p2, layout = (2, 1), size = trace_size)
# -
# # Cosine: Randomized Phase
# +
# Eigenvectors
Q = randn(p, k)
Q = mapslices(normalize, Q; dims = 1)
# Magnitudes
D = Diagonal([1.0; 1.0])
phase_list = pi * (-1.0:0.1:1.0)
# True Eigenvalues
w2_true = sort(cos.(f_second))
cos_phase_eigenvalues = zeros(ComplexF64, length(n_list), length(phase_list), length(phase_list), k)
cos_phase_eigenvector_error = zeros(length(n_list), length(phase_list), length(phase_list))
cos_phase_signal_error = zeros(length(n_list), length(phase_list), length(phase_list))
for p1 = 1:1:length(phase_list) # First phase
ph1 = phase_list[p1]
for p2 = 1:1:length(phase_list) # Second phase
ph2 = phase_list[p2]
S_full = [gen_cos_sequence(n_list[end], f_second[1], fs, ph1)[1] gen_cos_sequence(n_list[end], f_second[2], fs, ph2)[1]]
for nn = 1:1:length(n_list) # Loop over signal length
n = n_list[nn]
S_inner = mapslices(normalize, S_full[1:n, :]; dims = 1)
X = Q * D * S_inner'
w1, Q1, C1, A = dmf(X; C_nsv = k, lag = 1)
cos_phase_eigenvalues[nn, p1, p2, :] = w1[1:k]
cos_phase_eigenvector_error[nn, p1, p2] = eigenvector_error(Q[:, 1:k], Q1[:, 1:k])
cos_phase_signal_error[nn, p1, p2] = eigenvector_error(S_inner[:, 1:k], C1[:, 1:k])
end
end
end
# -
cos_phase_eigenvalue_error = min.(abs.(cos_phase_eigenvalues .- w2_true[1]), abs.(cos_phase_eigenvalues .- w2_true[2])).^2.0;
# +
# Q ERROR
cos_phase_eigenvector_error_m = mapslices(maximum, cos_phase_eigenvector_error; dims = [2; 3])
n_pl = [10^2.6; n_list; 10^4.1]
p1 = plot(reuse = false, size = trace_size, markersize = 6,
yscale = :log10, xscale = :log10,
xlim = (minimum(n_pl), maximum(n_pl)),
ylim = (minimum(cos_phase_eigenvector_error_m[:]) / 2, 10.0 * maximum(cos_phase_eigenvector_error_m[:])),
xlabel = "n = #Samples", ylabel = "Squared Error",
xticks = [10^2.7, 10^3.0, 10^3.5, 10^4.0]
)
p1 = scatter!(n_list, cos_phase_eigenvector_error_m[:, 1], label = "", marker = :circle, color = :blue, markersize = 6)
boundline2 = (0.005 ./ n_pl).^(1.0)
p1 = plot!(n_pl, boundline2, label = "", linestyle = :dash)
# +
# S ERROR
cos_phase_signal_error_m = mapslices(maximum, cos_phase_signal_error; dims = [2; 3])
n_pl = [10^2.6; n_list; 10^4.1]
p1 = plot(reuse = false, size = trace_size, markersize = 6,
yscale = :log10, xscale = :log10,
xlim = (minimum(n_pl), maximum(n_pl)),
ylim = (minimum(cos_phase_signal_error_m[:]) / 2, 10.0 * maximum(cos_phase_signal_error_m[:])),
xlabel = "n = #Samples", ylabel = "Squared Error",
xticks = [10^2.7, 10^3.0, 10^3.5, 10^4.0]
)
p1 = scatter!(n_list, cos_phase_signal_error_m[:, 1], label = "", marker = :circle, color = :blue, markersize = 6)
boundline2 = (0.005 ./ n_pl).^(1.0)
p1 = plot!(n_pl, boundline2, label = "", linestyle = :dash)
# +
# Eigenvalue ERROR
cos_phase_eigenvalue_error_m = dropdims(mapslices(maximum, cos_phase_eigenvalue_error; dims = [2; 3]); dims = (2, 3))
n_pl = [10^2.6; n_list; 10^4.1]
p1 = plot(reuse = false, size = trace_size, markersize = 6,
yscale = :log10, xscale = :log10,
xlim = (minimum(n_pl), maximum(n_pl)),
ylim = (minimum(cos_phase_eigenvalue_error_m[:]) / 2, 10.0 * maximum(cos_phase_eigenvalue_error_m[:])),
xlabel = "n = #Samples", ylabel = "Squared Error",
xticks = [10^2.7, 10^3.0, 10^3.5, 10^4.0]
)
p1 = scatter!(n_list, cos_phase_eigenvalue_error_m[:, 1], label = L"\lambda_2", marker = :circle, color = :blue, markersize = 6)
p1 = scatter!(n_list, cos_phase_eigenvalue_error_m[:, 2], label = L"\lambda_1", marker = :star4, color = :blue, markersize = 6)
boundline1 = 0.002 * (1.0 ./ n_pl).^(1.0)
p1 = plot!(n_pl, boundline1, label = "", linestyle = :dash)
# -
# # Missing Data
# +
# Fixing observation probability
e_val_err_fcn = (truth, est) -> (est = real.(est);
min.(abs.(truth - sort(est)),
abs.(truth + sort(est)),
abs.(truth - sort(est; rev = true)),
abs.(truth + sort(est; rev = true))).^2
)
p_missing = 2000
trials = 50
# Magnitudes
D = Diagonal([2.0; 1.0])
# True Eigenvalues
w2_true = sort(cos.(f_second))
cos_missing_eigenvalue_error = zeros(length(n_list), k, 2, trials)
cos_missing_eigenvector_error = zeros(length(n_list), 2, trials)
cos_missing_signal_error = zeros(length(n_list), 2, trials)
S_full = [gen_cos_sequence(n_list[end], f_second[1], fs)[1] gen_cos_sequence(n_list[end], f_second[2], fs)[1]]
q_fix = 0.1
t1 = time()
for nn = 1:1:length(n_list) # Loop over signal length
n = n_list[nn]
S_inner = mapslices(normalize, S_full[1:n, :]; dims = 1)
for tr = 1:1:trials
# Eigenvectors
# Q = randn(p_missing, k)
# Q = mapslices(normalize, Q; dims = 1)
Q = qr(randn(p_missing, k)).Q[:, 1:k]
X = (Q * D) * S_inner'
mask = float.(rand(p_missing, n) .<= q_fix)
Xm = X .* mask
# DMD v. tSVD + DMD
w1, Q1, C1, _ = dmf(Xm; tsvd = false, nsv = k, C_nsv = k, lag = 1)
w2, Q2, C2, _ = dmf(Xm; tsvd = true, nsv = k, C_nsv = k, lag = 1)
# Compute (squared) error
cos_missing_eigenvalue_error[nn, :, 1, tr] = e_val_err_fcn(w2_true, w1[1:k])
cos_missing_eigenvector_error[nn, 1, tr] = eigenvector_error(Q[:, 1:k], Q1[:, 1:k])
cos_missing_signal_error[nn, 1, tr] = eigenvector_error(S_inner[:, 1:k], C1[:, 1:k])
cos_missing_eigenvalue_error[nn, :, 2, tr] = e_val_err_fcn(w2_true, w2[1:k])
cos_missing_eigenvector_error[nn, 2, tr] = eigenvector_error(Q[:, 1:k], Q2[:, 1:k])
cos_missing_signal_error[nn, 2, tr] = eigenvector_error(S_inner[:, 1:k], C2[:, 1:k])
end
end
t2 = time()
println(t2 - t1)
# +
# Q ERROR
cos_missing_eigenvector_error_m = dropdims(mean(cos_missing_eigenvector_error; dims = 3); dims = 3)
n_pl = [10^2.6; n_list; 10^4.1]
p1 = plot(reuse = false, size = trace_size, markersize = 6,
yscale = :log10, xscale = :log10,
xlim = (minimum(n_pl), maximum(n_pl)),
ylim = (minimum(cos_missing_eigenvector_error_m[:]) / 2, 10.0 * maximum(cos_missing_eigenvector_error_m[:])),
xlabel = "n = #Samples", ylabel = "Squared Error",
xticks = [10^2.7, 10^3.0, 10^3.5, 10^4.0]
)
p1 = scatter!(n_list, cos_missing_eigenvector_error_m[:, 1], label = "DMD", marker = :circle, color = :blue, markersize = 6)
p1 = scatter!(n_list, cos_missing_eigenvector_error_m[:, 2], label = "tSVD + DMD", marker = :star4, color = :red, markersize = 6)
boundline2 = 5 * (1 ./ n_pl).^(0.5)
p1 = plot!(n_pl, boundline2, label = "", linestyle = :dash)
p1 = plot!(ann = (n_pl[end - 3], 2 * boundline2[end - 3], L"5 / \sqrt{n}", 12))
font = Plots.font("Helvetica", 12)
p1 = plot!(legend = :topright, legendfont = font, guidefont = font, xtickfont = font, ytickfont = font)
# +
# S ERROR
cos_missing_signal_error_m = dropdims(mean(cos_missing_signal_error; dims = 3); dims = 3)
n_pl = [10^2.6; n_list; 10^4.1]
p1 = plot(reuse = false, size = trace_size, markersize = 6,
yscale = :log10, xscale = :log10,
xlim = (minimum(n_pl), maximum(n_pl)),
ylim = (minimum(cos_missing_signal_error_m[:]) / 2, 10.0 * maximum(cos_missing_signal_error_m[:])),
xlabel = "n = #Samples", ylabel = "Squared Error",
xticks = [10^2.7, 10^3.0, 10^3.5, 10^4.0]
)
p1 = scatter!(n_list, cos_missing_signal_error_m[:, 1], label = "DMD", marker = :circle, color = :blue, markersize = 6)
p1 = scatter!(n_list, cos_missing_signal_error_m[:, 2], label = "tSVD + DMD", marker = :star4, color = :red, markersize = 6)
font = Plots.font("Helvetica", 12)
p1 = plot!(legend = :topright, legendfont = font, guidefont = font, xtickfont = font, ytickfont = font)
# +
# Eigenvalue ERROR
cos_missing_eigenvalue_error_m = dropdims(maximum(cos_missing_eigenvalue_error; dims = 4); dims = 4)
n_pl = [10^2.6; n_list; 10^4.1]
p1 = plot(reuse = false, size = trace_size, markersize = 6,
yscale = :log10, xscale = :log10,
xlim = (minimum(n_pl), maximum(n_pl)),
ylim = (minimum(cos_missing_eigenvalue_error_m[:]) / 2, 500.0),# * maximum(cos_missing_eigenvalue_error_m[:])),
xlabel = "n = #Samples", ylabel = "Squared Error",
xticks = [10^2.7, 10^3.0, 10^3.5, 10^4.0]
)
p1 = scatter!(n_list, cos_missing_eigenvalue_error_m[:, 1, 1], label = L"DMD,\lambda_2", marker = :circle, color = :blue, markersize = 6)
p1 = scatter!(n_list, cos_missing_eigenvalue_error_m[:, 2, 1], label = L"DMD, \lambda_1", marker = :star4, color = :blue, markersize = 6)
p1 = scatter!(n_list, cos_missing_eigenvalue_error_m[:, 1, 2], label = L"tSVD + DMD,\lambda_2", marker = :square, color = :red, markersize = 6)
p1 = scatter!(n_list, cos_missing_eigenvalue_error_m[:, 2, 2], label = L"tSVD + DMD, \lambda_1", marker = :utriangle, color = :red, markersize = 6)
boundline1 = 0.050 * (1.0 ./ n_pl).^(0.5)
p1 = plot!(n_pl, boundline1, label = "", linestyle = :dash)
p1 = plot!(ann = (n_pl[3], 2.5 * boundline1[3], L"0.05 / \sqrt{n}", 12))
font = Plots.font("Helvetica", 12)
p1 = plot!(legend = :topright, legendfont = Plots.font("Helvetica", 10), guidefont = font, xtickfont = font, ytickfont = font)
# -
# ## Fixed n
# +
# Fixing sample size
e_val_err_fcn = (truth, est) -> (est = real.(est);
min.(abs.(truth - sort(est)),
abs.(truth + sort(est)),
abs.(truth - sort(est; rev = true)),
abs.(truth + sort(est; rev = true))).^2
)
p_missing = 2000
trials = 20
# Magnitudes
D = Diagonal([2.0; 1.0])
# True Eigenvalues
w2_true = sort(cos.(f_second))
cos_missing_n_eigenvalue_error = zeros(length(q_list), k, 2, trials)
cos_missing_n_eigenvector_error = zeros(length(q_list), 2, trials)
cos_missing_n_signal_error = zeros(length(q_list), 2, trials)
S_full = [gen_cos_sequence(n_list[end], f_second[1], fs)[1] gen_cos_sequence(n_list[end], f_second[2], fs)[1]]
t1 = time()
for qq = 1:1:length(q_list) # Loop over observation probability
q = q_list[qq]
# Eigenvectors
Q = randn(p_missing, k)
Q = mapslices(normalize, Q; dims = 1)
X = Q * D * S_full'
for tr = 1:1:trials
mask = float.(rand(p_missing, n_list[end]) .<= q)
Xm = X .* mask
# DMD v. tSVD + DMD
w1, Q1, C1, A = dmf(Xm; tsvd = false, nsv = k, C_nsv = k, lag = 1)
w2, Q2, C2, A = dmf(Xm; tsvd = true, nsv = k, C_nsv = k, lag = 1)
# Compute (squared) error
cos_missing_n_eigenvalue_error[qq, :, 1, tr] = e_val_err_fcn(w2_true, w1[1:k]) # (w2_true - sort(real.(w1[1:k]))[:]).^2.0
cos_missing_n_eigenvector_error[qq, 1, tr] = eigenvector_error(Q[:, 1:k], Q1[:, 1:k])
cos_missing_n_signal_error[qq, 1, tr] = eigenvector_error(S_full[:, 1:k], C1[:, 1:k])
cos_missing_n_eigenvalue_error[qq, :, 2, tr] = e_val_err_fcn(w2_true, w2[1:k]) # (w2_true - sort(real.(w2[1:k]))[:]).^2.0
cos_missing_n_eigenvector_error[qq, 2, tr] = eigenvector_error(Q[:, 1:k], Q2[:, 1:k])
cos_missing_n_signal_error[qq, 2, tr] = eigenvector_error(S_full[:, 1:k], C2[:, 1:k])
end
end
t2 = time()
println(t2 - t1)
# +
# Q ERROR
cos_missing_n_eigenvector_error_m = dropdims(mean(cos_missing_n_eigenvector_error; dims = 3); dims = 3)
n_pl = [q_list[1] / 1.2; q_list; 1.2]
p1 = plot(reuse = false, size = trace_size, markersize = 6,
yscale = :log10, xscale = :log10,
xlim = (minimum(n_pl), maximum(n_pl)),
ylim = (minimum(cos_missing_n_eigenvector_error_m[:]) / 2, 100.0 * maximum(cos_missing_n_eigenvector_error_m[:])),
xlabel = "q = P(obs)", ylabel = "Squared Error"
)
p1 = scatter!(q_list, cos_missing_n_eigenvector_error_m[:, 1], label = "DMD", marker = :circle, color = :blue, markersize = 6)
p1 = scatter!(q_list, cos_missing_n_eigenvector_error_m[:, 2], label = "tSVD + DMD", marker = :star4, color = :red, markersize = 6)
boundline1 = 0.01 * (1.0 ./ n_pl).^(1.5)
p1 = plot!(n_pl, boundline1, label = "", linestyle = :dash)
p1 = plot!(ann = (n_pl[end - 3], 0.6 * boundline1[10], L"0.01 / q^{3/2}", 12))
font = Plots.font("Helvetica", 12)
p1 = plot!(legend = :bottomleft, legendfont = font, guidefont = font, xtickfont = font, ytickfont = font)
# +
# S ERROR
cos_missing_n_signal_error_m = dropdims(mean(cos_missing_n_signal_error; dims = 3); dims = 3)
n_pl = [q_list[1] / 1.2; q_list; 1.2]
p1 = plot(reuse = false, size = trace_size, markersize = 6,
yscale = :log10, xscale = :log10,
xlim = (minimum(n_pl), maximum(n_pl)),
ylim = (minimum(cos_missing_n_signal_error_m[:]) / 2, 100.0 * maximum(cos_missing_n_signal_error_m[:])),
xlabel = "q = P(obs)", ylabel = "Squared Error"
)
p1 = scatter!(q_list, cos_missing_n_signal_error_m[:, 1], label = "DMD", marker = :circle, color = :blue, markersize = 6)
p1 = scatter!(q_list, cos_missing_n_signal_error_m[:, 2], label = "tSVD + DMD", marker = :star4, color = :red, markersize = 6)
boundline1 = 0.01 * (1.0 ./ n_pl).^(1.5)
p1 = plot!(n_pl, boundline1, label = "", linestyle = :dash)
p1 = plot!(ann = (n_pl[4], 2 * boundline1[3], L"0.01 / q^{3/2}", 12))
font = Plots.font("Helvetica", 12)
p1 = plot!(legend = :bottomleft, legendfont = font, guidefont = font, xtickfont = font, ytickfont = font)
# +
# Eigenvalue ERROR
cos_missing_n_eigenvalue_error_m = dropdims(maximum(cos_missing_n_eigenvalue_error; dims = 4); dims = 4)
n_pl = [q_list[1] / 1.2; q_list; 1.2]
p1 = plot(reuse = false, size = trace_size, markersize = 6,
yscale = :log10, xscale = :log10,
xlim = (minimum(n_pl), maximum(n_pl)),
ylim = (minimum(cos_missing_n_eigenvalue_error_m[:]) / 50, 100.0 * maximum(cos_missing_n_eigenvalue_error_m[:])),
xlabel = "q = P(obs)", ylabel = "Squared Error"
)
p1 = scatter!(q_list, cos_missing_n_eigenvalue_error_m[:, 1, 1], label = L"DMD,\lambda_2", marker = :circle, color = :blue, markersize = 6)
p1 = scatter!(q_list, cos_missing_n_eigenvalue_error_m[:, 2, 1], label = L"DMD, \lambda_1", marker = :star4, color = :blue, markersize = 6)
p1 = scatter!(q_list, cos_missing_n_eigenvalue_error_m[:, 1, 2], label = L"tSVD + DMD,\lambda_2", marker = :square, color = :red, markersize = 6)
p1 = scatter!(q_list, cos_missing_n_eigenvalue_error_m[:, 2, 2], label = L"tSVD + DMD, \lambda_1", marker = :utriangle, color = :red, markersize = 6)
boundline1 = 0.001 * (1.0 ./ n_pl).^(1.5)
p1 = plot!(n_pl, boundline1, label = "", linestyle = :dash)
p1 = plot!(ann = (n_pl[6], 10 * boundline1[6], L"0.001 / q^{3/2}", 12))
font = Plots.font("Helvetica", 12)
p1 = plot!(legend = :bottomleft, legendfont = Plots.font("Helvetica", 11), guidefont = font, xtickfont = font, ytickfont = font)
# -
# # SOBI Comparison
# +
# Eigenvectors
Q = randn(p_missing, k)
Q = mapslices(normalize, Q; dims = 1)
# Magnitudes
D = Diagonal([2.0; 1.0])
# True Eigenvalues
w2_true = sort(cos.(f_second))
lags = 1
cos_sobi_signal_error = zeros(length(n_list), 2)
cos_sobi_eigenvector_error = zeros(length(n_list), 2)
S_full = [gen_cos_sequence(n_list[end], f_second[1], fs)[1] gen_cos_sequence(n_list[end], f_second[2], fs)[1]]
for nn = 1:1:length(n_list)
n = n_list[nn]
S_inner = mapslices(normalize, S_full[1:n, :]; dims = 1)
X = Q * D * S_inner'
# DMD
_, Q_DMD, S_DMD, _ = dmf(X; C_nsv = k)
# SOBI
Q_SOBI, S_SOBI = SOBI_Wrapper(X, lags)
cos_sobi_signal_error[nn, 1] = eigenvector_error(S_inner[:, 1:k], S_DMD[:, 1:k])
cos_sobi_signal_error[nn, 2] = eigenvector_error(S_inner[:, 1:k], S_SOBI[:, 1:k])
cos_sobi_eigenvector_error[nn, 1] = eigenvector_error(Q[:, 1:k], Q_DMD[:, 1:k])
cos_sobi_eigenvector_error[nn, 2] = eigenvector_error(Q[:, 1:k], Q_SOBI[:, 1:k])
end
# +
markers = [:circle, :star4]
colors = [:red, :blue]
labels = ["DMD", "SOBI"]
n_pl = [10^2.6; n_list; 10^4.1]
cos_sobi_signal_error[isnan.(cos_sobi_signal_error)] .= 2 * k
cos_sobi_eigenvector_error[isnan.(cos_sobi_eigenvector_error)] .= 2 * k
p1 = plot(reuse = false, size = trace_size, markersize = 8,
yscale = :log10, xscale = :log10,
xlim = (minimum(n_pl), maximum(n_pl)),
ylim = (minimum(cos_sobi_signal_error[:]) / 2, 10.0 * maximum(cos_sobi_signal_error[:])),
xlabel = "n = #Samples", ylabel = "Squared Error"
)
for p_idx = 1:1:length(labels)
p1 = scatter!(n_list, cos_sobi_signal_error[:, p_idx], label = labels[p_idx], marker = markers[p_idx], color = colors[p_idx], markersize = 6)
end
font = Plots.font("Helvetica", 12)
p1 = plot!(legend = :bottomleft, legendfont = font, guidefont = font, xtickfont = font, ytickfont = font)
display(plot(p1))
# +
markers = [:circle, :star4]
colors = [:red, :blue]
labels = ["DMD", "SOBI"]
n_pl = [10^2.6; n_list; 10^4.1]
cos_sobi_eigenvector_error[isnan.(cos_sobi_eigenvector_error)] .= 2 * k
p1 = plot(reuse = false, size = trace_size, markersize = 10,
yscale = :log10, xscale = :log10,
xlim = (minimum(n_pl), maximum(n_pl)),
ylim = (minimum(cos_sobi_eigenvector_error[:]) / 2, 10.0 * maximum(cos_sobi_eigenvector_error[:])),
xlabel = "n = #Samples", ylabel = "Squared Error"
)
for p_idx = 1:1:length(labels)
p1 = scatter!(n_list, cos_sobi_eigenvector_error[:, p_idx], label = labels[p_idx], marker = markers[p_idx], color = colors[p_idx], markersize = 6)
end
font = Plots.font("Helvetica", 12)
p1 = plot!(legend = :bottomleft, legendfont = font, guidefont = font, xtickfont = font, ytickfont = font)
display(plot(p1))
# -
# # Clean-Up
save("data/synthetic_data.jld2",
Dict("n_list" => n_list,
"k" => k,
"p" => p,
"p_missing" => p_missing,
"q_list" => q_list,
"sigma_list" => sigma_list,
"f_first" => f_first,
"f_second" => f_second,
"fs" => fs,
"arma_std" => arma_std,
"ar" => ar,
"ma" => ma,
"trials" => trials,
"trace_size" => trace_size,
"cos_eigenvalue_error" => cos_eigenvalue_error,
"cos_eigenvector_error" => cos_eigenvector_error,
"cos_signal_error" => cos_signal_error,
"arma_eigenvalue_error" => arma_eigenvalue_error,
"arma_eigenvector_error" => arma_eigenvector_error,
"arma_signal_error" => arma_signal_error,
"a_trials" => a_trials,
"ac_35" => ac_35,
"ac_27" => ac_27,
"cos_phase_eigenvalues" => cos_phase_eigenvalues,
"cos_phase_eigenvalue_error" => cos_phase_eigenvalue_error,
"cos_phase_eigenvector_error" => cos_phase_eigenvector_error,
"cos_phase_signal_error" => cos_phase_signal_error,
"cos_missing_eigenvalue_error" => cos_missing_eigenvalue_error,
"cos_missing_eigenvector_error" => cos_missing_eigenvector_error,
"cos_missing_signal_error" => cos_missing_signal_error,
"cos_missing_n_eigenvalue_error" => cos_missing_n_eigenvalue_error,
"cos_missing_n_eigenvector_error" => cos_missing_n_eigenvector_error,
"cos_missing_n_signal_error" => cos_missing_n_signal_error,
"lags" => lags,
"cos_sobi_signal_error" => cos_sobi_signal_error,
"cos_sobi_eigenvector_error" => cos_sobi_eigenvector_error)
)
| examples/.ipynb_checkpoints/SyntheticSimulations-checkpoint.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: py3.8-torch
# language: python
# name: py3.8-torch
# ---
# # Temporality Fine-Tuning
#
# Use this notebook for temporality fine-tuning on an LM using MLM.
from __future__ import print_function, absolute_import, division
# %load_ext autoreload
import sys, os, json, time, datetime, logging, multiprocessing, itertools
from pathlib import Path
import pandas as pd
import numpy as np
def console_log(msg, end='\n'):
os.write(1, ('[LOG/{}]'.format(multiprocessing.current_process().name)+msg+end).encode('utf-8'))
# +
import torch
import spacy
import transformers
import nltk
import lemminflect
from transformers import (AutoTokenizer, AutoModelForMaskedLM, AutoModelForCausalLM,
                          Trainer, DataCollatorForLanguageModeling, TrainingArguments)
print(torch.cuda.is_available())
TORCH_DEV = torch.device(f'cuda:0') if torch.cuda.is_available() \
else torch.device("cpu")
# +
DATA_PATH = './exp_data/nyt_fine_tune.csv'
# where to save checkpoints
MODEL_PATH = "./tmp/"
# -
# ## Fine-Tuning
import datasets
ft_dataset = pd.read_csv(DATA_PATH)
raw_data = datasets.Dataset.from_pandas(ft_dataset)
transformers.set_seed(hash("some_random_str") % (2 **32 - 1))
# ## MLM on RoBERTa
model = AutoModelForMaskedLM.from_pretrained("roberta-base")
tokenizer = AutoTokenizer.from_pretrained("roberta-base")
tokenizer.pad_token = tokenizer.eos_token
model.resize_token_embeddings(len(tokenizer))
tr_cfg = TrainingArguments(
output_dir=MODEL_PATH,
do_train=True,
do_eval=False,
save_total_limit=2,
    seed=42,
disable_tqdm=False,
)
tokenized_datasets = raw_data.map(
lambda s: tokenizer(s['sent'], return_special_tokens_mask=True),
batched=True, num_proc=4,
batch_size=500,
)
# +
trainer = Trainer(
model=model,
train_dataset=tokenized_datasets,
args=tr_cfg,
tokenizer=tokenizer,
data_collator=DataCollatorForLanguageModeling(tokenizer=tokenizer,
mlm_probability=0.15)
)
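# The data collator above applies masked-language-model (MLM) masking on the
# fly. As a rough illustration of the standard BERT-style recipe (a simplified
# sketch, not HuggingFace's actual implementation), 15% of positions are
# selected; of those, 80% become the mask token, 10% a random token, and 10%
# stay unchanged:

```python
import random

def mlm_mask(token_ids, mask_id, vocab_size, mlm_probability=0.15, seed=0):
    """Toy MLM masking: returns (inputs, labels); labels is -100 (ignored by
    the loss) everywhere except at the positions chosen for prediction."""
    rng = random.Random(seed)
    inputs, labels = list(token_ids), [-100] * len(token_ids)
    for i, tok in enumerate(token_ids):
        if rng.random() < mlm_probability:
            labels[i] = tok                            # predict the original token
            r = rng.random()
            if r < 0.8:
                inputs[i] = mask_id                    # 80%: replace with [MASK]
            elif r < 0.9:
                inputs[i] = rng.randrange(vocab_size)  # 10%: random token
            # remaining 10%: leave the input token unchanged
    return inputs, labels

inputs, labels = mlm_mask(list(range(100)), mask_id=999, vocab_size=1000)
```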
# + jupyter={"outputs_hidden": true}
train_result = trainer.train()
| nyt_finetune.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] id="YHhuHtKbZUnh" colab_type="text"
# # Step-By-Step into Machine Learning
# Learning Machine Learning Project in Python
#
# Beginners Need A Small End-to-End Project
#
# 1. Define Problem.
# 2. Prepare Data.
# 3. Evaluate Algorithms.
# 4. Improve Results.
# 5. Present Results.
#
# + [markdown] pycharm={"name": "#%% md\n"}
# ## Hello World of Machine Learning
#
# >Example project: The best small project to start with on a new tool is the classification of iris flowers (e.g. the iris dataset).
#
# This is a good project because it is so well understood.
#
# - Attributes are numeric so you have to figure out how to load and handle data.
# - It is a classification problem, allowing you to practice with perhaps an easier type of supervised learning algorithm.
# - It is a multi-class classification problem (multinomial) that may require some specialized handling.
# - It only has 4 attributes and 150 rows, meaning it is small and easily fits into memory (and a screen or A4 page).
# - All of the numeric attributes are in the same units and the same scale, not requiring any special scaling or transforms to get started.
#
# Let’s get started with your hello world machine learning project in Python.
# + id="eEyyTiknZDdg" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 136} outputId="94f8f7ce-acb5-4bbd-9062-25f25faca69e" pycharm={"is_executing": false}
# Check the versions of libraries
# Python version
import sys
print('Python: {}'.format(sys.version))
# scipy
import scipy
print('scipy: {}'.format(scipy.__version__))
# numpy
import numpy
print('numpy: {}'.format(numpy.__version__))
# matplotlib
import matplotlib
print('matplotlib: {}'.format(matplotlib.__version__))
# pandas
import pandas
print('pandas: {}'.format(pandas.__version__))
# scikit-learn
import sklearn
print('sklearn: {}'.format(sklearn.__version__))
# + id="uHeJeWqaamkl" colab_type="code" colab={} pycharm={"is_executing": false}
# Load libraries
import pandas
from pandas.plotting import scatter_matrix
import matplotlib.pyplot as plt
from sklearn import model_selection
from sklearn.metrics import classification_report
from sklearn.metrics import confusion_matrix
from sklearn.metrics import accuracy_score
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC
# + id="2V8QOrtAapx-" colab_type="code" colab={} pycharm={"is_executing": false}
# Load dataset
url = "https://raw.githubusercontent.com/jbrownlee/Datasets/master/iris.csv"
names = ['sepal-length', 'sepal-width', 'petal-length', 'petal-width', 'class']
dataset = pandas.read_csv(url, names=names)
# + [markdown] id="2heJvIqSa9SP" colab_type="text"
# ## 3.1 Dimensions of Dataset
# We can get a quick idea of how many instances (rows) and how many attributes (columns) the data contains with the shape property.
# + id="ma0USRTEawca" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="1cef38b8-fe88-4072-f84a-2a5d236762cd"
# shape
print(dataset.shape)
# + [markdown] id="ulUxN4ska3sF" colab_type="text"
# ## 3.2 Peek at the Data
# It is also always a good idea to actually eyeball your data.
# + id="Bbr7wNfma1Xo" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 374} outputId="39e39378-5363-4b08-f74d-f328afa14c92"
# head
print(dataset.head(20))
# + [markdown] id="F2plI_iqbHeZ" colab_type="text"
# ## 3.3 Statistical Summary
# Now we can take a look at a summary of each attribute.
#
# This includes the count, mean, the min and max values as well as some percentiles.
# + id="XOm1A5m8bLpk" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 170} outputId="0317b728-fa2e-4f91-b4ec-082d85067eb9"
# descriptions
print(dataset.describe())
# + [markdown] id="Yw0FrufwbS9j" colab_type="text"
# ## 3.4 Class Distribution
# Let’s now take a look at the number of instances (rows) that belong to each class. We can view this as an absolute count.
# + id="_XlBdxydbWBK" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 102} outputId="bfa3455b-f3ac-4208-d636-778f952dce0b"
# class distribution
print(dataset.groupby('class').size())
# + [markdown] id="cj7F0EKobdoj" colab_type="text"
# ## 4. Data Visualization
# We now have a basic idea about the data. We need to extend that with some visualizations.
#
# We are going to look at two types of plots:
#
# - Univariate plots to better understand each attribute.
# - Multivariate plots to better understand the relationships between attributes.
# ## 4.1 Univariate Plots
# We start with some univariate plots, that is, plots of each individual variable.
#
# Given that the input variables are numeric, we can create box and whisker plots of each.
# + id="E4I64FGObkXn" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 269} outputId="662c4b92-0252-4334-d062-9f8aa3781816"
# box and whisker plots
dataset.plot(kind='box', subplots=True, layout=(2,2), sharex=False, sharey=False)
plt.show()
# + id="dK6dyWv7bp46" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 281} outputId="d913fd7b-63f7-4446-98f4-61f64b37e08e"
# histograms
dataset.hist()
plt.show()
# + [markdown] id="Oo-ErnL7bukS" colab_type="text"
# ## 4.2 Multivariate Plots
# Now we can look at the interactions between the variables.
#
# First, let’s look at scatterplots of all pairs of attributes. This can be helpful to spot structured relationships between input variables.
# + id="eL3Ob1S0byhC" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 286} outputId="d284fa6c-1294-445a-d38b-a85fffef8898"
# scatter plot matrix
scatter_matrix(dataset)
plt.show()
# + [markdown] id="YmTspZSgb32N" colab_type="text"
# ## 5. Evaluate Some Algorithms
# Now it is time to create some models of the data and estimate their accuracy on unseen data.
#
# Here is what we are going to cover in this step:
#
# - Separate out a validation dataset.
# - Set up the test harness to use 10-fold cross validation.
# - Build 6 different models to predict species from flower measurements.
# - Select the best model.
#
# ## 5.1 Create a Validation Dataset
# We need to know that the model we created is any good.
#
# Later, we will use statistical methods to estimate the accuracy of the models that we create on unseen data. We also want a more concrete estimate of the accuracy of the best model on unseen data by evaluating it on actual unseen data.
#
#
# That is, we are going to hold back some data that the algorithms will not get to see and we will use this data to get a second and independent idea of how accurate the best model might actually be.
#
#
# We will split the loaded dataset into two, 80% of which we will use to train our models and 20% that we will hold back as a validation dataset.
#
# + id="chn2JV40cEV4" colab_type="code" colab={}
# Split-out validation dataset
array = dataset.values
X = array[:,0:4]
Y = array[:,4]
validation_size = 0.20
seed = 7
X_train, X_validation, Y_train, Y_validation = model_selection.train_test_split(X, Y, test_size=validation_size, random_state=seed)
# + [markdown] id="mmZrQDyIcJk4" colab_type="text"
# You now have training data in X_train and Y_train for preparing models, and X_validation and Y_validation sets that we can use later.
#
# Notice that we used a Python slice to select the columns in the NumPy array.
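# As a quick refresher on the slicing used above, `array[:,0:4]` keeps all rows
# and the first four columns, while `array[:,4]` keeps the last column. A toy
# example (an illustrative array, not the iris data):

```python
import numpy as np

arr = np.arange(15).reshape(3, 5)   # 3 "flowers", 4 features + 1 label column
X = arr[:, 0:4]   # all rows, columns 0..3 (the features)
Y = arr[:, 4]     # all rows, column 4 (the class label)
```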
# + [markdown] id="2Ke5GezPcM7A" colab_type="text"
# ## 5.2 Test Harness
# We will use 10-fold cross validation to estimate accuracy.
#
# This will split our dataset into 10 parts, train on 9 and test on 1 and repeat for all combinations of train-test splits.
# + id="dkggh1mNcWRa" colab_type="code" colab={}
# Test options and evaluation metric
seed = 7
scoring = 'accuracy'
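# To make the 10-fold idea concrete, here is a minimal sketch of how k-fold
# cross validation partitions sample indices (scikit-learn's KFold does this,
# plus optional shuffling):

```python
def kfold_indices(n_samples, n_splits):
    """Return (train_idx, test_idx) pairs, mimicking an unshuffled k-fold split."""
    base, extra = divmod(n_samples, n_splits)
    folds, start = [], 0
    for i in range(n_splits):
        size = base + (1 if i < extra else 0)     # spread any remainder
        test = list(range(start, start + size))   # this fold is held out
        train = list(range(0, start)) + list(range(start + size, n_samples))
        folds.append((train, test))
        start += size
    return folds

folds = kfold_indices(120, 10)  # 120 rows = the 80% training split of iris
```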
# + [markdown] id="21pkaq9IcblA" colab_type="text"
# ## 5.3 Build Models
# We don’t know which algorithms would be good on this problem or what configurations to use. We get an idea from the plots that some of the classes are partially linearly separable in some dimensions, so we are expecting generally good results.
#
# Let’s evaluate 6 different algorithms:
#
# - **Logistic Regression** (LR)
# - **Linear Discriminant Analysis** (LDA)
# - **K-Nearest Neighbors** (KNN).
# - **Classification and Regression Trees** (CART).
# - **Gaussian Naive Bayes** (NB).
# - **Support Vector Machines** (SVM).
#
# This is a good mixture of simple linear (LR and LDA), nonlinear (KNN, CART, NB and SVM) algorithms.
#
# We reset the random number seed before each run to ensure that the evaluation of each algorithm is performed using exactly the same data splits. It ensures the results are directly comparable.
#
# Let’s build and evaluate our models:
# + id="Ecwp2Y2Tcurf" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 119} outputId="379888c5-0781-45c2-fb08-206efbbe73a6"
# Spot Check Algorithms
models = [('LR', LogisticRegression(solver='liblinear', multi_class='ovr')), ('LDA', LinearDiscriminantAnalysis()), ('KNN', KNeighborsClassifier()), ('CART', DecisionTreeClassifier()), ('NB', GaussianNB()), ('SVM', SVC(gamma='auto'))]
# evaluate each model in turn
results = []
names = []
for name, model in models:
    kfold = model_selection.KFold(n_splits=10, shuffle=True, random_state=seed)
cv_results = model_selection.cross_val_score(model, X_train, Y_train, cv=kfold, scoring=scoring)
results.append(cv_results)
names.append(name)
msg = "%s: %f (%f)" % (name, cv_results.mean(), cv_results.std())
print(msg)
# + [markdown] id="sMVUqiFEc1TT" colab_type="text"
# ## 5.4 Select Best Model
# We now have 6 models and accuracy estimations for each. We need to compare the models to each other and select the most accurate.
#
# + id="_Dwa1Oh6c9qw" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 294} outputId="3c8c9b89-3eb2-4cdc-fb06-49d1abfb68c0"
# Compare Algorithms
fig = plt.figure()
fig.suptitle('Algorithm Comparison')
ax = fig.add_subplot(111)
plt.boxplot(results)
ax.set_xticklabels(names)
plt.show()
# + [markdown] id="ibB07nK4dB1K" colab_type="text"
# ## 6. Make Predictions
# The KNN algorithm is very simple and was an accurate model based on our tests. Now we want to get an idea of the accuracy of the model on our validation set.
#
# This will give us an independent final check on the accuracy of the best model. It is valuable to keep a validation set just in case you made a slip during training, such as overfitting to the training set or a data leak. Both will result in an overly optimistic result.
#
# We can run the KNN model directly on the validation set and summarize the results as a final accuracy score, a confusion matrix and a classification report.
# + id="JUsYKapjdFzO" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 255} outputId="e57218a1-56e5-45a3-9228-060b63bd4f45" pycharm={"name": "#%% \n"}
# Make predictions on validation dataset
knn = KNeighborsClassifier()
# Fitting
knn.fit(X_train, Y_train)
# Predicting
predictions = knn.predict(X_validation)
# Printing score
print(accuracy_score(Y_validation, predictions))
# Matrix
print(confusion_matrix(Y_validation, predictions))
# Report
print(classification_report(Y_validation, predictions))
| 03 Misc/machine_learning_in_python_step_by_step.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import numpy as np
import matplotlib.pyplot as plt
# %matplotlib inline
from information_imbalance import feature_selection, plot_imbalances
# -
distances = np.loadtxt('distances.dat')
print('n. samples =', distances.shape[0])
print('n. features =', distances.shape[1])
# +
# Example with less samples, less features
dist = distances[:1000][:,:50]
# create a dictionary with a name for each feature
dist_dict = {'d_%04d'%i:d for i,d in enumerate(dist.T)}
# -
dist_dict.keys()
# +
# %%time
selected_features, imbalances = feature_selection(dist_dict, max_feats=5, mode='vectorized')
# -
fig, ax = plot_imbalances(imbalances, logplot=True)
# +
# Example with fewer samples, all features
dist = distances[:1000]
# create a dictionary with a name for each feature
dist_dict = {'d_%04d'%i:d for i,d in enumerate(dist.T)}
# +
# %%time
selected_features, imbalances = feature_selection(dist_dict, max_feats=5, mode='vectorized')
# -
fig, ax = plot_imbalances(imbalances, logplot=True)
# **Note**: Handling many features is not *so* tragic. On the other hand, handling many points *is* tragic, as the algorithm scales as O(N^2). Furthermore, with many points, the vectorized version of the algorithm may fail to run due to memory requirements. Switching to a sequential version is possible, although slower.
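# A back-of-the-envelope check of that scaling: a dense pairwise matrix over N
# points needs N^2 entries, so memory (and work) grows quadratically. Assuming
# float64 entries:

```python
def pairwise_matrix_gib(n_points, bytes_per_entry=8):
    """Memory for a dense n x n matrix of float64 entries, in GiB."""
    return n_points ** 2 * bytes_per_entry / 2 ** 30

small = pairwise_matrix_gib(1_000)    # about 0.007 GiB -- fine
large = pairwise_matrix_gib(100_000)  # about 74.5 GiB -- not fine
```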
# +
# Example with many samples, fewer features
dist = distances[:,:10]
# create a dictionary with a name for each feature
dist_dict = {'d_%04d'%i:d for i,d in enumerate(dist.T)}
# +
# %%time
# max_feats=1 just to show the speed
selected_features, imbalances = feature_selection(dist_dict, max_feats=1, mode='sequential')
# -
# **Note**: as running the algorithm on many samples quickly becomes slow (beyond roughly 1000-2000 samples), one may consider reducing the number of samples through, e.g., Farthest Point Sampling or CUR decomposition.
| Example2.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## Import libs, set paths and load params
# +
import os, glob
import numpy as np
import pandas as pd
import sys
sys.path.insert(0, "../src")
import auxilary_functions as f
import subprocess
import csv
import matplotlib.pyplot as plt
cfg_file = "../src/config-ecoli.json"
cfg = f.get_actual_parametrization("../src/config-ecoli.json")
networks = ['fflatt']
organisms = ['ecoli']
sizes = ['500','750']
n_trials = 10
cascades=['1']
p2=['0.5','0.7','0.9'] #0.2, 0.5, 0.8 (and 0.3?)
p4=['0.5','0.7','0.9'] #0.2, 0.5, 0.8 (and 0.3?)
os.chdir('../networks/')
fflattdir = '../snippets/'
topology_dir = os.path.join(os.getcwd(), 'fflatt_motif_depletion', 'no_depletion')
# -
topology_dir
#collect data
for size in sizes:
for cascade in cascades:
for network in p2:
for organism in p4:
current_dir = os.path.join(topology_dir, size, cascade, network, organism)
if not os.path.exists(os.path.abspath(current_dir)):
print('making dirs...')
os.makedirs(os.path.abspath(current_dir), exist_ok=True)
print('running fflatt...')
subprocess.call(['python3', fflattdir+'parameter_space_exploration.py',\
cfg_file, size, str(n_trials), current_dir, network, organism, cascade])
# ## Display and save z-scores
for size in sizes:
for cascade in cascades:
for network in p2:
for organism in p4:
current_dir = os.path.join(topology_dir, size, cascade, network, organism)
for rep, file in enumerate(glob.glob(os.path.join(current_dir, '*sv'))):
if not os.path.exists(os.path.join(topology_dir, 'z-scores', size+'_'+cascade+'_'+network+'_'+organism+'_'+str(rep)+'_z_score.tsv')):
pandas_df_lst = []
print(rep, file)
report = f.analyze_exctracted_network(cfg, file, network, rep, size, stability_motifs=True)
print(report)
pandas_df_lst.append(report)
pandas_df_list = sum(pandas_df_lst)/len(pandas_df_lst)
pandas_df_list['size'] = size
pandas_df_list['p2_value'] = network
pandas_df_list['p4_value'] = organism
pandas_df_list['cascade_value'] = cascade
pandas_df_list['rep_num'] = rep
print(pandas_df_list)
pandas_df_list.to_csv(os.path.join(topology_dir, 'z-scores', size+'_'+cascade+'_'+network+'_'+organism+'_'+str(rep)+'_z_score.tsv'))
# +
#df_topo
# -
# ## Group-by z-scores and save as table
zscore_stats_lst = []
for rep, file in enumerate(glob.glob(os.path.join(topology_dir, 'z-scores', '*.tsv'))):
zscore_stats_df = pd.io.parsers.read_csv(file, sep=",", index_col=0, header=None, skiprows=1)
zscore_stats_df['motif'] = zscore_stats_df.index
zscore_stats_df.reset_index()
zscore_stats_df.columns = ['counts_ori', 'counts_rand', 'sd_rand',\
'z-score', 'p-val', 'size', 'p2', 'p4', 'cascades', 'rep_num', 'motif']
print(zscore_stats_df)
zscore_stats_lst.append(zscore_stats_df)
zscore_stats_df = pd.concat(zscore_stats_lst)
zscore_stats_df.reset_index(drop=True, inplace=True)
zscore_stats_df = zscore_stats_df[zscore_stats_df['cascades']==1]
zscore_stats_df = zscore_stats_df.drop(columns='cascades')
zscore_stats_df
zscore_stats_df_mean = zscore_stats_df.groupby(['p2', 'p4', 'motif']).mean()
zscore_stats_df_mean = zscore_stats_df_mean['z-score'].unstack()
zscore_stats_df_mean = zscore_stats_df_mean.round(3)
zscore_stats_df_mean
zscore_stats_df_std = zscore_stats_df.groupby(['p2', 'p4', 'motif']).std()
zscore_stats_df_std = zscore_stats_df_std['z-score'].unstack()
zscore_stats_df_std = zscore_stats_df_std.pow(2, axis = 1).div(n_trials).round(3)
zscore_stats_df_std
final_table_s2 = zscore_stats_df_mean.astype(str) + u"\u00B1" + zscore_stats_df_std.astype(str)
final_table_s2
final_table_s2.to_csv("s2_table_no_motif_depletion_500_and_750.csv", sep="\t")
| snippets/TableS2_no_depletion.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import pandas as pd
import os
import json
def score_text(text):
from nltk.sentiment.vader import SentimentIntensityAnalyzer
sid = SentimentIntensityAnalyzer()
score = sid.polarity_scores(text)
return score["compound"]
# -
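# The compound score returned above is a normalized value in [-1, 1]. A common
# convention (an assumption here, not something this notebook relies on) is to
# bucket it with a +/-0.05 threshold:

```python
def label_from_compound(compound, threshold=0.05):
    """Map a VADER compound score to a coarse sentiment label."""
    if compound >= threshold:
        return "positive"
    if compound <= -threshold:
        return "negative"
    return "neutral"

labels = [label_from_compound(c) for c in (0.6, -0.4, 0.01)]
```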
# ## 1. Vocabulary-based sentiment scoring is evaluated for the query "Amazon Company"
# ## Positive and negative paragraphs are defined
# +
query = "Amazon Company"
negative_paragraph = """{0} is very bad. And author doesn't provide any justification. People don't like {0}.
Some even hate {0}, because {0} is evil. Some groups believe {0} is their main enemy.
""".format(query)
positive_paragraph = """{0} is very good. And author doesn't provide any justification. People like {0}.
Some even love {0}, because {0} is honest. Some groups believe {0} is their best friend.""".format(query)
negative_paragraph
# -
# ## 2. For the sake of simplicity, the top 5 relevant articles were retrieved.
# ## These articles are read.
folder = "./amazon_company/"
files = os.listdir(folder)
df = pd.DataFrame(index=list(range(0,len(files))), columns=['title', 'link', 'text'])
for i, filename in enumerate(files):
with open(folder + filename, 'r') as f:
data = json.load(f)
df.loc[i] = data['title'], data['link'], data['text']
# ## 3. Then these articles are scored and ranking is built
ranking = df.copy()
ranking['score'] = df['text'].apply(score_text)
ranking = ranking.sort_values(by='score', ascending=False)
ranking
# ## 4. Article with median score is chosen for reference
reference = ranking.iloc[len(files)//2]
reference
# ## 5. Two copies of the reference article are added, with a positive/negative paragraph appended to the text
df_extended = df.copy()
df_extended
df_extended = pd.concat([df_extended,
                         pd.DataFrame(data={'title': ['neg_edit', 'pos_edit'],
                                            'link': ['', ''],
                                            'text': [reference['text'] + negative_paragraph,
                                                     reference['text'] + positive_paragraph]})],
                        ignore_index=True)
df_extended
# ## 6. New ranking is built
ranking = df_extended.copy()
ranking['score'] = df_extended['text'].apply(score_text)
ranking = ranking.sort_values(by='score', ascending=False)
ranking
# ## 7. We observe that, as expected, the article with the positive edit ranks higher, while the article with the negative edit ranks lower. That is, even though the exact sentiment scores are not especially precise, the ranking behaves properly.
| evaluation/VADER_Evaluation.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python [conda root]
# language: python
# name: conda-root-py
# ---
# ## Notes on computations in ForwardHG class
# Let $s=(s^i)_{i=1}^C$ be the state vector divided in its components (e.g. various weights of a neural network, accumulated gradients, ...). Assume that each component $s^i\in\mathbb{R}^{d_i}$ is a (column) vector, as is $s\in\mathbb{R}^d$, with $d=\sum_i d_i$. We should compute the update of the total derivative of $s_t$ w.r.t. a single scalar hyperparameter $\lambda$. $s_t$ is the $t$-th iterate of the mapping $\Phi$:
# $$
# s_0 = \Phi_0(\lambda) \qquad
# s_t = \Phi_t(s_{t-1},\lambda),\quad t \in \{1,\dots,T\}
# $$.
# Let $\Phi^i$ denote the components of the iterative mapping relative to $i$-th state vector.
# Calling
# $$
# A_t = \partial_s \Phi_t(s_{t-1},\lambda) \in \mathbb{R}^{d\times d} \qquad B_t = \partial_{\lambda} \Phi_t(s_{t-1},\lambda)\in \mathbb{R}^d,
# $$
# the update on the variable $Z=\frac{\mathrm{d} s}{\mathrm{d} \lambda}$ will be
# \begin{equation}
# Z_t = A_t Z_{t-1} + B_t. \end{equation}
# Let $Z=(Z^i)_{i=1}^C$ be also divided in its components, where each $Z^i$ has the same dimensionality of $s^i$ (since the hyperparameter is a scalar). Componentwise, the above update becomes
# $$
# Z^i_t = \partial_s \Phi^i_{t}(s_{t-1}, \lambda)^T Z_{t-1} + \partial_{\lambda} \Phi_t^i(s_{t-1}, \lambda) =
# \sum_{j=1}^C \partial_{s^j} \Phi^i_t(s_{t-1}, \lambda)^T Z^j_{t-1} + \partial_{\lambda} \Phi_t^i(s_{t-1}, \lambda)
# $$
# ### How to compute the $Z$-update using only scalar gradients
# Introduce the auxiliary variable $v=(v_i)_{i=1}^C$ divided in the same way as the state $s$.
# Then, dropping the iteration index for simplicity, $B$ can be computed as
# $$
# B = \partial_v \left[\sum_{i=1}^C \partial_{\lambda} ( \Phi^i(s, \lambda)^T v_i) \right]
# $$
# by noting that $\Phi^i(s, \lambda)^T v_i$ is a scalar quantity as well as $\partial_{\lambda} ( \Phi^i(s, \lambda)^T v_i)$.
# For the computation of $A_t Z_{t-1}$, instead, (dropping the iteration index) define
# $$
# \Psi_i = \partial_{s^i} \left[\sum_{j=1}^C \Phi^j(s, \lambda)^T v_j \right].
# $$
# Then
# $$
# \partial_s \Phi^i(s, \lambda)^T Z = \partial_{v_i} \left[\sum_{j=1}^C \Psi_j^TZ^j \right],
# $$
# which proves to be the right quantity by exchanging the order of summations and derivatives, and makes use only of gradients of scalar functions.
# On a practical note... the variables $v_i$ may take any value since they do not appear in the final computation
| notes/Notes on forward HG.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## Good review of numpy https://www.youtube.com/watch?v=GB9ByFAIAH4
# ## Numpy library - Remember to do pip install numpy if running on your laptop
# ### Numpy provides support for math and logical operations on arrays
# #### https://www.tutorialspoint.com/numpy/index.htm
# ### It supports many more data types than python
# #### https://www.tutorialspoint.com/numpy/numpy_data_types.htm
# ### Only a single data type is allowed in any particular array
import numpy as np
a = np.array([1,2,3,4,5,6])
b = a + 1
c = np.reshape(a,(2,3)) # note the tuple
d = np.transpose(c)
d = np.ones((3,2))
d
import numpy as np
import matplotlib.pyplot as plt
a = np.array(np.arange(0,100))
print(id(a))
b = np.array(a)
print(f'b = {id(b)}')
a = a + b
a
# +
# Remember slicing of an array
# -
# # <img src='numpyArray.png' width ='400'>
# arange vs linspace - both generate a numpy array of numbers
import numpy as np
a = np.linspace(0,10,5) # specifies No. of values with 0 and 10 being first and last
b = np.arange(0, 10, 5) # specifies step size=5 starting at 0 up to but NOT including last
print(a)
print(b)
x = np.linspace(0,10,10) # generate 10 evenly spaced numbers from 0 to 10 (use 11 for a step size of 1)
x = x + 1 # operates on all elements of the array
x
# +
# generate points and use function to transform them
import numpy as np
import matplotlib.pyplot as plt
x = np.arange(0,10,0.1)
y = np.sin(x)
plt.plot(x,y)
plt.plot(x,-y)
plt.plot(x, 2*y)
plt.title('Sin Waves')
plt.xlabel('time')
plt.ylabel('cost')
# -
# ### Subplots
# https://matplotlib.org/stable/gallery/subplots_axes_and_figures/subplots_demo.html
x = np.array(np.arange(1,20,0.1))
y = np.cos(x)
fig, axs = plt.subplots(3)
fig.suptitle('Vertically stacked subplots')
axs[0].plot(x, y)
axs[1].plot(x, -y)
axs[2].plot(x, np.sin(2*x))
# <img src='figureax.png' width = '600'>
# +
import numpy as np
import matplotlib.pyplot as plt
np.random.seed(1)
a = np.random.choice([0,1,2,3,4,5,6,7,8,9],100)
plt.hist(a,bins=10, density=True)
a
# -
plt.hist(a, bins = 10, range=(0,10) ,density=True)
# Use bins 1/2 wide - what does the plot mean?
plt.hist(a,bins=np.arange(0,10,0.5),density=True)
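# With density=True, matplotlib rescales bar heights so the total area
# (height times bin width) is 1 -- which is why halving the bin width roughly
# doubles the heights. A quick check with np.histogram (same normalization):

```python
import numpy as np

np.random.seed(1)
a = np.random.choice(range(10), 100)

heights, edges = np.histogram(a, bins=np.arange(0, 10.5, 0.5), density=True)
total_area = float(np.sum(heights * np.diff(edges)))   # should be 1.0
```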
# +
# Data as sampling from an unseen population
# Choose at random from 1 through 10
import numpy as np
import matplotlib.pyplot as plt
a = np.random.random(100)*10.0 # random.random produces numbers 0-1
plt.hist(a, bins=20)
# -
# what is the mean of the following
a = np.array([9, 2, 9, 6, 4, 8, 7, 5, 3, 7])
def mymean(ar):
n = len(ar)
total = 0
for x in ar:
total = total + x
return total/n
print(mymean(a))
# # Normal Distribution
#
# $
# \text{the normal distribution is given by} \\
# $
# $$
# f(x)=\frac{1}{\sqrt{2 \pi}}e^{-x^2/2}
# $$
# $
# \text{This can be rewritten in term of the mean and variance} \\
# $
# $$
# f(x)=\frac{1}{\sigma \sqrt{2 \pi}}e^{-(x- \mu)^2/2 \sigma^2}
# $$
# The random variable $X$ described by the PDF is a normal variable that follows a normal distribution with mean $\mu$ and variance $\sigma^2$.
#
# $
# \text{Normal distribution notation is} \\
# $
# $$
# X \sim N(\mu,\sigma^2) \\
# $$
#
# The total area under the PDF curve equals 1.
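# This can be checked numerically: integrating the standard normal PDF over a
# wide interval (here [-8, 8], where the tails are negligible) gives an area of
# essentially 1:

```python
import numpy as np

x = np.linspace(-8, 8, 100_001)
pdf = np.exp(-x**2 / 2) / np.sqrt(2 * np.pi)
dx = x[1] - x[0]
area = float(np.sum((pdf[:-1] + pdf[1:]) / 2) * dx)   # trapezoidal rule
```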
# ## Mean and Variance
#
# $$
# \mu = \frac{\sum(x)}{N}
# $$
# $$
# \sigma^{2} =\sum{\frac{(x - \mu)^{2}}{N} }
# $$
# <img src='normalDist.png' width = '400'>
# +
# here calculate the mean and variance
# then plot the theoretical curve by inputting the formula above
import numpy as np
import matplotlib.pyplot as plt
Npoints = 100
np.random.seed(0)
p = np.random.normal(0, 1, Npoints)  # standard normal sample, so the curve below is centred at 0
def myMean(sample):
N = len(sample) # calculate mean of data
total = 0
for x in sample:
total += x
return total/N
pMean = myMean(p)
print(f'mean= {pMean}')
def myVar(sample,mean): # calculate variance of data
N = len(sample)
tsample = sample - mean
var = np.sum(tsample * tsample)
return var/N
pVar = myVar(p, pMean)
print(f'variance= {pVar}')
plt.hist(p, density=True)
# Plot theoretical normal distribution
x = np.linspace(-4,4,Npoints)
y = np.exp(-(x*x)/(2*pVar))/(np.sqrt(2*np.pi*pVar))
plt.plot(x,y)
# +
# EXERCISE - use the above formula to plot the normal distribution over x = -4 to 4
# take mean = 0, and sigma = 1
# Compare data to the curve given by the formula
import matplotlib.pyplot as plt
import numpy as np
Npoints = 100
np.random.seed(0)
p = np.random.normal(10,2,Npoints) # generate normal distribution with meaan of 0 and standard deviation of 1
plt.hist(p, density=True)
# here calculate the mean and variance
# then plot the theoretical curve by inputting the formula above
def myMean(sample): # calculate mean of data
N = len(sample)
total = 0
for x in sample:
total = total + x
return total/N
pMean = myMean(p)
print(f'mean= {pMean}')
def myVar(sample,mean): # calculate variance of data
tsample = sample - mean
var = np.sum(tsample * tsample)
return var / len(sample)
pVar = myVar(p, pMean)
print(f'variance= {pVar}')
# Plot theoretical normal distribution
x = np.linspace(6,14,Npoints)
tx = x - pMean
y = np.exp(-(tx*tx)/(2*pVar))/(np.sqrt(2*np.pi*pVar))
plt.plot(x,y)
# -
# +
# here calculate the mean and variance
# then plot the theoretical curve by inputting the formula above
def myMean(sample): # calculate mean of data
N = len(sample)
total = 0
for x in sample:
total = total + x
    return total/N
pMean = myMean(p)
print(f'mean= {pMean}')
def myVar(sample,mean): # calculate variance of data
tsample = sample - mean
var = np.sum(tsample * tsample)
return var / len(sample)
pVar = myVar(p, pMean)
print(f'variance= {pVar}')
# Plot theoretical normal distribution
x = np.linspace(pMean - 4, pMean + 4, Npoints)
tx = x - pMean
y = np.exp(-(tx*tx)/(2*pVar))/(np.sqrt(2*np.pi*pVar))
plt.plot(x,y)
# -
# Something to think about - What is the area under the curve between say -4 to 4
# +
# Normal Data
a = np.random.normal(10,2,100)
plt.hist(a,bins=np.arange(5,16,1),density=True)
#plt.scatter(np.arange(0,10),a)
# -
plt.hist(a,bins=np.arange(5,16,0.5), density=True)
plt.hist(a,bins=np.arange(5,16,1))
# +
import numpy as np
import matplotlib.pyplot as plt
a = np.random.normal(0,2,100) # normal takes mean and SD as args
plt.hist(a, density = True)
# -
# ## Mean and Variance
#
# $$
# \mu = \frac{\sum(x)}{N}
# $$
# $$
# \sigma^{2} =\sum{\frac{(x - \mu)^{2}}{N} }
# $$
#
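# The two formulas above map directly onto numpy -- a quick sketch (np.mean and np.var use these same population definitions):

```python
import numpy as np

np.random.seed(1)
sample = np.random.normal(0, 10, 1000)

mu = np.sum(sample) / len(sample)             # mean: sum(x)/N
var = np.sum((sample - mu)**2) / len(sample)  # variance: sum((x - mu)^2)/N

# numpy's built-ins agree with the hand-rolled versions
print(np.isclose(mu, np.mean(sample)), np.isclose(var, np.var(sample)))
```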
# +
# IN CLASS - Generate a Population and calculate its mean and variance
import matplotlib.pyplot as plt
Npoints = 10
p = np.random.normal(0,10,Npoints)
def myMean(sample):
print('COMPLETE THIS')
pmean = myMean(p)
print(f'mean= {pmean}')
def myVar(sample,mean):
print('COMPLETE THIS')
print(f'Variance = {False}')
# +
# Problem of Variance in Sample vs Population
population = np.array([5, 4, 4, 9, 9, 8, 2, 0, 3, 6])
sample = np.array([5, 4, 4, 8, 6])
def variance(sample, mean, bias):
N = len(sample)
s2 = 0
for i in range(0,N):
x = sample[i]-mean
s2 = s2 + (x * x)
var = s2/(N - bias)
return var
N = 10
var_pop = []
var_sample = []
for mean in range(0,N): # shift the mean from 0 to N
var_pop.append( variance(population, mean, 0))
for mean in range(0,N): # shift the mean from 0 to N
var_sample.append( variance(sample, mean, 1))
# Plotting
plt.scatter(range(N),var_pop,color='b', label='Population')
plt.scatter(range(N),var_sample,color='r', label='Sample')
plt.ylim([0, 50])
plt.ylabel('Variance')
plt.xlabel('Mean')
plt.legend(loc='upper right')
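# numpy exposes the same population-vs-sample split through ddof -- a sketch (ddof=1 divides by N - 1, the bias correction applied to the sample above):

```python
import numpy as np

sample = np.array([5, 4, 4, 8, 6], dtype=float)
mean = sample.mean()

pop_var = np.sum((sample - mean)**2) / len(sample)         # divide by N
samp_var = np.sum((sample - mean)**2) / (len(sample) - 1)  # divide by N - 1

print(np.isclose(pop_var, np.var(sample, ddof=0)))   # population variance
print(np.isclose(samp_var, np.var(sample, ddof=1)))  # sample (Bessel-corrected) variance
```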
# +
# How would you check that your functions are working ?
# +
# Use this plotting to draw a normal distribution
import numpy as np
import matplotlib.pyplot as plt
from scipy.stats import norm  # needed for norm.pdf below
x = np.arange(34,40,0.01)
# Plot it
plt.style.use('ggplot')
fig, ax = plt.subplots()
lines = ax.plot(x, norm.pdf(x,loc=37,scale=1))
ax.set_ylim(0,0.45) # range
ax.set_xlabel('x',fontsize=20) # set x label
ax.set_ylabel('pdf(x)',fontsize=20,rotation=90) # set y label
ax.xaxis.set_label_coords(0.55, -0.05) # x label coordinate
ax.yaxis.set_label_coords(-0.1, 0.5) # y label coordinate
px=np.arange(36,37,0.1)
plt.fill_between(px,norm.pdf(px,loc=37,scale=1),color='r',alpha=0.5)
plt.show()
# -
# ### Numpy 2D Arrays
#
# ## Multi-Dimensional Arrays
# <img src='multiArray.png' width = 500>
import numpy as np
a = np.arange(0,9)
z = a.reshape(3,3)
y = z[0:2,0:2]
q = z[0:2][0:2] # This is legal but DO NOT USE IT - can you tell me why
q
# +
# Create Numpy 2_D Arrays - Remember [1,2,3] is different to np.array([1,2,3])
a = np.array([0,1,2])
b = np.array([3,4,5])
c = np.array([6,7,8])
z = np.array([a,
b,
c])
type(z)
# -
a = np.arange(0,9)
z = a.reshape(3,3)
z
z[2,2]
z[0:3:2,0:3:2]
# +
## Exercise - Produce a 8x8 checkerboard of 1s and 0s
### Code it so you could also produce a 20x20 checkerboard later
# +
import numpy as np
import seaborn as sns
from matplotlib.colors import ListedColormap as lc
Z = [[1,1,1],[2,2,2],[3,3,3]]
print(Z)
sns.heatmap(Z, annot=True,linewidths=5,cbar=False)
import seaborn as sns
sns.heatmap(Z, annot=True,linewidths=5,cbar=False)
# -
# ### Your solution should look like this
# <img src='checker8.png' width ='400'>
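# One possible solution sketch (slicing with a step of 2, so any even board size works; not the only approach, and the 0/1 phase may need flipping to match the image exactly):

```python
import numpy as np

def checkerboard(n):
    """Return an n-by-n array of alternating 1s and 0s."""
    board = np.zeros((n, n), dtype=int)
    board[::2, 1::2] = 1   # odd columns of even rows
    board[1::2, ::2] = 1   # even columns of odd rows
    return board

print(checkerboard(8))
```

# Changing the argument to 20 produces the 20x20 board mentioned in the exercise.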
# +
# Fancy Plotting
from matplotlib import collections as matcoll
Npoints = 20
x = np.arange(0,Npoints)
y = np.random.normal(loc=10, scale=2, size=Npoints )
lines = []
for i in range(Npoints):
pair=[(x[i],0), (x[i], y[i])]
lines.append(pair)
linecoll = matcoll.LineCollection(lines)
fig, ax = plt.subplots()
ax.add_collection(linecoll)
plt.scatter(x,y, marker='o', color='blue')
plt.xticks(x)
plt.ylim(0,40)
plt.show()
# +
# More Fancy plotting using other libraries
from mpl_toolkits.mplot3d import Axes3D
import matplotlib.pyplot as plt
import numpy as np
fig = plt.figure()
ax = fig.add_subplot(111,projection='3d')
for c, z in zip(['r', 'g', 'b', 'y'], [30, 20, 10, 0]):
xs = np.arange(20)
ys = np.random.rand(20)
# You can provide either a single color or an array. To demonstrate this,
# the first bar of each set will be colored cyan.
cs = [c] * len(xs)
cs[0] = 'c'
ax.bar(xs, ys, zs=z, zdir='y', color=cs, alpha=0.8)
ax.set_xlabel('X')
ax.set_ylabel('Y')
ax.set_zlabel('Z')
plt.show()
# -
| 04-Linear_Regresion_Python/NumpyIntro5.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
import rosbag
import pymap3d as pm
import numba as nb
from scipy.signal import savgol_filter
# %matplotlib inline
# +
def wrap_angle(angle):
return (angle + np.pi) % (2 * np.pi) - np.pi
@nb.njit()
def to_euler(x, y, z, w):
"""Dari Coursera: Return as xyz (roll pitch yaw) Euler angles."""
roll = np.arctan2(2 * (w * x + y * z), 1 - 2 * (x**2 + y**2))
pitch = np.arcsin(2 * (w * y - z * x))
yaw = np.arctan2(2 * (w * z + x * y), 1 - 2 * (y**2 + z**2))
return np.array([roll, pitch, yaw])
# Compile the to_euler
_ = to_euler(1.5352300785980803e-15, -1.3393747145983517e-15, -0.7692164172827881, 0.638988343698562)
# -
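# A quick sanity check of the conversion above (a pure-python sketch, no numba needed): the quaternion for a 90-degree rotation about z should give yaw = pi/2, and wrap_angle should map 3*pi to -pi.

```python
import numpy as np

def to_euler_plain(x, y, z, w):
    # same formulas as to_euler above, without the numba decorator
    roll = np.arctan2(2 * (w * x + y * z), 1 - 2 * (x**2 + y**2))
    pitch = np.arcsin(2 * (w * y - z * x))
    yaw = np.arctan2(2 * (w * z + x * y), 1 - 2 * (y**2 + z**2))
    return np.array([roll, pitch, yaw])

# 90-degree yaw: q = (0, 0, sin(45 deg), cos(45 deg))
q = (0.0, 0.0, np.sin(np.pi / 4), np.cos(np.pi / 4))
print(to_euler_plain(*q))  # roll = pitch = 0, yaw = pi/2

# wrap_angle maps any angle into [-pi, pi)
print((3 * np.pi + np.pi) % (2 * np.pi) - np.pi)  # -> -pi
```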
class get_data_from_bag(object):
def __init__(self, path):
self.bag = rosbag.Bag(path)
self.cs = self._read_msg_from_topic('/control_signal', ['t', 'action_throttle', 'action_steer',
'error_lateral', 'error_yaw','error_speed',
'actual_x','actual_y','actual_yaw','actual_speed',
'ref_x', 'ref_y', 'ref_yaw', 'ref_curvature', 'ref_speed',
'wp_idx', 'deg_ref_yaw', 'deg_actual_yaw', 'deg_error_yaw'])
self.ar = self._read_msg_from_topic('/logging_arduino', ['t', 'steering_setpoint', 'steering_angle', 'throttle_voltage'])
self.gnss = self._read_gnss()
self.imu = self._read_imu('/imu', ['t', 'yaw'])
self.ekf = self._read_msg_from_topic('/state_2d_new', ['t', 'yaw', 'yaw_imu'])
def _read_msg_from_topic(self, topic, columns_name):
data = []
for _, msg, _ in self.bag.read_messages(topics=[topic]):
temp = []
for name in columns_name:
if name == 't':
temp.append(msg.header.stamp.to_sec())
else:
nm = 'msg.' + name
temp.append(eval(nm))
data.append(temp)
return pd.DataFrame(data, columns = columns_name)
def _read_gnss(self):
lat0, lon0, h0 = -6.8712, 107.5738, 768
data = []
for _, msg, _ in self.bag.read_messages(topics='/fix'):
temp = []
temp.append(msg.header.stamp.to_sec())
pos = pm.geodetic2enu(msg.latitude, msg.longitude, msg.altitude, lat0, lon0, h0)
temp.append(pos[0])
temp.append(pos[1])
temp.append(pos[2])
temp.append(msg.position_covariance[0])
data.append(temp)
return pd.DataFrame(data, columns=['t', 'x', 'y', 'z', 'cov_x'])
def _read_imu(self, topic, columns_name):
data = []
for _, msg, _ in self.bag.read_messages(topics=[topic]):
temp = []
for name in columns_name:
if name == 't':
temp.append(msg.header.stamp.to_sec())
elif name == 'yaw':
q = msg.orientation
euler = to_euler(q.x, q.y, q.z, q.w)
temp.append(euler[-1])
else:
nm = 'msg.' + name
temp.append(eval(nm))
data.append(temp)
return pd.DataFrame(data, columns = columns_name)
# df = get_data_from_bag('bag/LURUS_1.bag')
df = get_data_from_bag('bag/LURUS_2.bag')
# # ADDITIONAL
# +
num_f = 51
dst = 0.1
X = np.copy(df.gnss.x)
Y = np.copy(df.gnss.y)
x = np.copy(df.gnss.x)
y = np.copy(df.gnss.y)
t = np.copy(df.gnss.t)
XX = np.copy(df.cs.ref_x)
YY = np.copy(df.cs.ref_y)
wp_x = [X[0]]
wp_y = [Y[0]]
wp_xx = [XX[0]]
wp_yy = [YY[0]]
wp_t = [t[0]]
for i in range(1, X.shape[0]):
dist = np.sqrt((X[i] - wp_x[-1])**2 + (Y[i] - wp_y[-1])**2)
ddist = np.sqrt((XX[i] - wp_xx[-1])**2 + (YY[i] - wp_yy[-1])**2)
while dist >= dst:
# if dist >= dst:
wp_x.append(wp_x[-1] + dst*(X[i] - wp_x[-1])/dist)
wp_y.append(wp_y[-1] + dst*(Y[i] - wp_y[-1])/dist)
wp_t.append(wp_t[-1] + dst*(t[i] - wp_t[-1])/dist)
wp_xx.append(wp_xx[-1] + dst*(XX[i] - wp_xx[-1])/ddist)
wp_yy.append(wp_yy[-1] + dst*(YY[i] - wp_yy[-1])/ddist)
dist = np.sqrt((X[i] - wp_x[-1])**2 + (Y[i] - wp_y[-1])**2)
ddist = np.sqrt((XX[i] - wp_xx[-1])**2 + (YY[i] - wp_yy[-1])**2)
wp_x = np.array(wp_x)
wp_y = np.array(wp_y)
wp_x_f = savgol_filter(wp_x, num_f, 3)
wp_y_f = savgol_filter(wp_y, num_f, 3)
wp_xx = np.array(wp_xx)
wp_yy = np.array(wp_yy)
wp_ref_yaw = np.zeros_like(wp_x)
diffx = wp_xx[2:] - wp_xx[:-2]
diffy = wp_yy[2:] - wp_yy[:-2]
wp_ref_yaw[1:-1] = np.arctan2(diffy, diffx)
wp_ref_yaw[0] = wp_ref_yaw[1]
wp_ref_yaw[-1] = wp_ref_yaw[-2]
wp_ref_yaw_f = wrap_angle(savgol_filter(np.unwrap(wp_ref_yaw), num_f, 3))
act_ref_yaw_dydx = np.copy(wp_ref_yaw)
act_ref_yaw_dydx_f = np.copy(wp_ref_yaw_f)
wp_yaw = np.zeros_like(wp_x)
diffx = wp_x[2:] - wp_x[:-2]
diffy = wp_y[2:] - wp_y[:-2]
wp_yaw[1:-1] = np.arctan2(diffy, diffx)
wp_yaw[0] = wp_yaw[1]
wp_yaw[-1] = wp_yaw[-2]
wp_yaw_f = wrap_angle(savgol_filter(np.unwrap(wp_yaw), num_f, 3))
act_yaw_dydx = np.copy(wp_yaw)
act_yaw_dydx_f = np.copy(wp_yaw_f)
s = np.zeros(wp_x.shape[0])
for i in range(1, s.shape[0]):
s[i] = s[i-1] + np.sqrt((wp_x[i] - wp_x[i-1])**2 + (wp_y[i] - wp_y[i-1])**2)
width = 15
height = 15
plt.figure(figsize=(width, height))
plt.subplot(1,2,1)
plt.plot(wp_x, wp_y, label='Processed')
plt.scatter(x, y, color='red',s=2., label='RAW')
plt.xlabel("X (m)")
plt.ylabel("Y (m)")
plt.legend()
plt.title("PATH")
plt.subplot(1,2,2)
plt.plot(s, wp_yaw*180./np.pi)
plt.plot(s, wp_yaw_f*180./np.pi, label='post filtered')
plt.title("YAW")
plt.xlabel('s (m)')
plt.ylabel(r'$\degree$')
plt.legend()
#plt.savefig('waypoints.png', dpi=600, transparent=True)
plt.show()
# -
act_yaw_dydx_interp = np.interp(df.imu.t, wp_t, act_yaw_dydx)
act_yaw_dydx_f_interp = np.interp(df.imu.t, wp_t, act_yaw_dydx_f)
act_ref_yaw_interp = np.interp(df.imu.t, wp_t, act_ref_yaw_dydx)
act_ref_yaw_f_interp = np.interp(df.imu.t, wp_t, act_ref_yaw_dydx_f)
ekf_yaw = np.interp(df.imu.t, df.cs.t, wrap_angle(df.cs.actual_yaw))
yaw_gnss = np.zeros_like(df.gnss.x.values)
n = 2
diffx = df.gnss.x.values[n:] - df.gnss.x.values[:-n]
diffy = df.gnss.y.values[n:] - df.gnss.y.values[:-n]
yaw_gnss[n:] = np.arctan2(diffy, diffx)
yaw_gnss[:n] = yaw_gnss[n]
yaw_gnss = np.interp(df.imu.t, df.gnss.t, yaw_gnss)
# plt.plot(df.gnss.t-df.gnss.t[0], yaw_gnss*180./np.pi, label='yaw gnss')
plt.plot(wp_t - df.gnss.t[0], wrap_angle(act_yaw_dydx_f)*180./np.pi, label='ground truth')
# plt.plot(df.imu.t-df.gnss.t[0], wrap_angle(df.imu.yaw + np.pi/2)*180./np.pi, label='Compass')
plt.plot(df.cs.t-df.gnss.t[0], wrap_angle(df.cs.actual_yaw)*180./np.pi, label='dy dx')
plt.plot(df.imu.t - df.imu.t[0], yaw_gnss*180./np.pi, label='gnss dy dx')
# plt.xlim(10., 33.)
plt.xlabel("Waktu (s)")
plt.ylabel(r"Yaw (\degree)")
plt.legend()
# plt.savefig('gt_vs_compass.png', dpi=600)
# plt.ylim(-180., 180.)
plt.show()
plt.plot(wp_t - df.gnss.t[0], wrap_angle(act_yaw_dydx_f)*180./np.pi, label='ground truth')
plt.plot(df.imu.t-df.gnss.t[0], wrap_angle(df.imu.yaw + np.pi/2)*180./np.pi, label='Compass')
# plt.plot(df.ekf.t-df.gnss.t[0], wrap_angle(df.ekf.yaw)*180./np.pi, label='dy dx')
plt.xlabel("Waktu (s)")
plt.ylabel(r"Yaw ($\degree$)")
plt.legend()
plt.ylim(-180., 180.)
plt.savefig('gagal/profil_yaw.png', dpi=600)
plt.show()
plt.plot(wp_t - df.gnss.t[0], wrap_angle(act_yaw_dydx_f)*180./np.pi, label='ground truth')
plt.plot(df.imu.t-df.gnss.t[0], wrap_angle(df.imu.yaw + np.pi/2)*180./np.pi, label='Compass')
# plt.plot(df.cs.t-df.gnss.t[0], wrap_angle(df.cs.actual_yaw)*180./np.pi, label='dy dx')
plt.xlabel("Waktu (s)")
plt.ylabel(r"Yaw ($\degree$)")
plt.legend()
plt.ylim(-120., -80.)
plt.xlim(10., 15.)
plt.savefig('gagal/profil_yaw_zoom.png', dpi=600)
plt.show()
plt.plot(df.cs.ref_x, df.cs.ref_y, label='ref')
plt.plot(df.cs.actual_x, df.cs.actual_y, label='actual')
# plt.scatter(df.gnss.x,df.gnss.y, color='black', s=1.0)
plt.axis('square')
plt.legend()
plt.xlabel("X (m)")
plt.ylabel("Y (m)")
plt.savefig('gagal/posisi.png', dpi=600)
plt.show()
plt.plot(df.cs.t - df.cs.t[0], df.cs.error_yaw, label='yaw error (rad)')
plt.plot(df.cs.t - df.cs.t[0], df.cs.error_lateral, label='lateral error (m)')
plt.legend()
plt.xlabel("Time (s)")
plt.savefig('gagal/galat.png', dpi=600)
plt.show()
plt.plot(df.cs.t - df.ar.t[0], df.cs.action_steer, label='steering setpoint')
plt.plot(df.ar.t - df.ar.t[0], df.ar.steering_angle, label='actual steering')
plt.legend()
plt.xlabel("Time (s)")
plt.ylabel(r'Steering ($\degree$)')
plt.savefig('gagal/sudut_kemudi.png', dpi=600)
plt.show()
plt.plot(df.cs.deg_ref_yaw)
plt.plot(180/np.pi*np.ones_like(df.cs.deg_ref_yaw)*np.arctan2(df.cs.actual_y.values[-1]-df.cs.actual_y.values[0], df.cs.actual_x.values[-1]-df.cs.actual_x.values[0]))
plt.scatter(act_yaw_dydx_interp*180./np.pi, df.imu.yaw*180./np.pi, s=1.)
plt.xlabel(r"ground truth $(\degree)$")
plt.ylabel(r"compass $(\degree)$")
plt.axis('square')
plt.legend()
plt.show()
# plt.savefig('ground_truth_vs_compass.png', dpi=600)
plt.scatter(act_yaw_dydx_f_interp*180./np.pi, df.imu.yaw*180./np.pi, s=0.5)
plt.xlabel(r"ground truth $(\degree)$")
plt.ylabel(r"compass $(\degree)$")
plt.axis('square')
plt.legend()
plt.savefig('gagal/cek_bias.png', dpi=600)
plt.show()
# +
# plt.plot(df.cs.t-df.gnss.t[0], df.cs.actual_speed)
# plt.xlim(8.)
# -
| Archieved FP/pkg_ta/scripts/Archieved/8 September 2020/data percobaan/plot_gagal.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
# Some of the emulator's accuracy issues could stem from the satellite fraction. We look at those explicitly here.
from pearce.emulator import SpicyBuffalo, LemonPepperWet, OriginalRecipe
from pearce.mocks import cat_dict
import numpy as np
from os import path
import matplotlib
#matplotlib.use('Agg')
from matplotlib import pyplot as plt
# %matplotlib inline
import seaborn as sns
sns.set()
#xi gg
training_file = '/scratch/users/swmclau2/xi_zheng07_cosmo_lowmsat/PearceRedMagicXiCosmoFixedNd.hdf5'
#test_file= '/scratch/users/swmclau2/xi_zheng07_cosmo_test_lowmsat2/'
test_file = '/scratch/users/swmclau2/xi_zheng07_cosmo_test_lowmsat2/PearceRedMagicXiCosmoFixedNd_Test.hdf5'
# + active=""
# #xi gm
# training_file = '/scratch/users/swmclau2/xi_gm_cosmo/PearceRedMagicXiGMCosmoFixedNd.hdf5'
# test_file = '/scratch/users/swmclau2/xi_gm_cosmo_test2/PearceRedMagicXiGMCosmoFixedNdTest.hdf5'
# -
em_method = 'gp'
split_method = 'random'
a = 1.0
z = 1.0/a - 1.0
scale_bin_centers = np.array([ 0.09581734, 0.13534558, 0.19118072, 0.27004994,
0.38145568, 0.53882047, 0.76110414, 1.07508818,
1.51860241, 2.14508292, 3.03001016, 4.28000311,
6.04566509, 8.53972892, 12.06268772, 17.0389993 ,
24.06822623, 33.99727318])
bin_idx = 1
fixed_params = {'z':z, 'r': scale_bin_centers[bin_idx]}#, 'cosmo': 0}#, 'r':24.06822623}
np.random.seed(0)
emu = OriginalRecipe(training_file, method = em_method, fixed_params=fixed_params,
custom_mean_function = 'linear', downsample_factor = 0.1)
emu.scale_bin_centers
pred_y, data_y = emu.goodness_of_fit(test_file, statistic = None)
# +
test_x, test_y, test_cov, _ = emu.get_data(test_file, emu.fixed_params)
t, old_idxs = emu._whiten(test_x)
# -
resmat_flat = 10**pred_y - 10**data_y
datamat_flat = 10**data_y
t_bin = t
acc_bin = np.abs(resmat_flat)/datamat_flat
from pearce.mocks.kittens import TrainingBox
boxno = 0
cat = TrainingBox(boxno, system = 'sherlock')
cat.load(a, HOD='zheng07')
nd = 1e-4
hod_pnames = emu.get_param_names()[7:]
mf = cat.calc_mf()
for pname in hod_pnames:
print pname, emu.get_param_bounds(pname)
# +
from scipy.optimize import minimize_scalar
def add_logMmin(hod_params, cat):
"""
In the fixed number density case, find the logMmin value that will match the nd given hod_params
:param: hod_params:
The other parameters besides logMmin
:param cat:
the catalog in question
:return:
None. hod_params will have logMmin added to it.
"""
hod_params['logMmin'] = 13.0 #initial guess
#cat.populate(hod_params) #may be overkill, but will ensure params are written everywhere
def func(logMmin, hod_params):
hod_params.update({'logMmin':logMmin})
return (cat.calc_analytic_nd(hod_params) - nd)**2
res = minimize_scalar(func, bounds = (12.0, 16.0), args = (hod_params,), options = {'maxiter':100},\
method = 'Bounded')
# assuming this doesn't fail
#print 'logMmin', res.x
hod_params['logMmin'] = res.x
#print hod_params
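# The fixed-nd matching above can also be sketched without scipy: bisect on a toy, monotonically decreasing nd(logMmin). Here toy_nd is a hypothetical stand-in for cat.calc_analytic_nd.

```python
def toy_nd(logMmin):
    # hypothetical stand-in for cat.calc_analytic_nd: nd falls as logMmin rises
    return 1e-2 * 10**(-(logMmin - 12.0))

def match_nd(target_nd, lo=12.0, hi=16.0, tol=1e-12):
    # bisection: toy_nd is monotonically decreasing on [lo, hi]
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if toy_nd(mid) > target_nd:
            lo = mid   # nd too high -> raise logMmin
        else:
            hi = mid
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)

print(match_nd(1e-4))  # 14.0 for this toy nd
```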
# +
sat_fracs = np.zeros((1000,))
sat_nd = np.zeros((1000,))
actual_nd = np.zeros_like(sat_fracs)
log_mMins = np.zeros_like(sat_fracs)
for idx, x in enumerate(test_x[:1000, 7:]):
hod_params = dict(zip(hod_pnames, x))
add_logMmin(hod_params, cat)
log_mMins[idx] = hod_params['logMmin']
sat_hod = cat.calc_hod(hod_params, component='satellite')
sat_nd[idx] = np.sum(mf*sat_hod)/((cat.Lbox/cat.h)**3)
#sat_fracs[idx] = sat_nd/nd
actual_nd[idx] = cat.calc_analytic_nd(hod_params)
sat_fracs = sat_nd/actual_nd
# -
plt.hist(sat_fracs)
sat_fracs.mean()
sat_fracs.std()
plt.hist(log_mMins)
hod_pnames
plt.scatter(test_x[:1000, 9], acc_bin[:1000])
test_x[:5000,0]
pnames = emu.get_param_names()
for i in xrange(7):
for j in xrange(7):
mean_acc = np.mean(acc_bin[j*5000:(j+1)*5000])
plt.scatter(test_x[j*5000, i], mean_acc, label = 'Cosmo %d'%j)
plt.xlabel(pnames[i])
plt.ylabel('Avg. Percent Accurate')
plt.title('r = %.2f'%scale_bin_centers[bin_idx])
plt.legend(loc = 'best')
plt.show()
test_x[0*35::1000, 9]
pnames = emu.get_param_names()
for i in xrange(7,11):
for j in xrange(0,1000):
mean_acc = np.mean(acc_bin[j::1000])
plt.scatter(test_x[j, i], mean_acc, label = 'HOD %d'%j, alpha = 0.6)
plt.xlabel(pnames[i])
plt.ylabel('Avg. Percent Accurate')
plt.title('r = %.2f'%scale_bin_centers[bin_idx])
#plt.legend(loc = 'best')
plt.show()
mcut = 13.5
sub_test_idx = np.logical_and(test_x[:, 9]>mcut, test_x[:, 7] < mcut)
print np.mean(acc_bin[sub_test_idx]), np.sum(sub_test_idx)
plt.scatter(test_x[:1000, 9], sat_fracs)
plt.xlabel('logM1')
plt.ylabel('Sat Frac')
plt.scatter(test_x[:1000, 9], log_mMins)
plt.xlabel('logM1')
plt.ylabel('logMmin')
plt.hist(1e4*(actual_nd-nd) )
plt.scatter(test_x[:1000, 9], 1e4*(actual_nd-nd) )
plt.xlabel('logM1')
plt.ylabel('Actual nd - Fixed nd')
good_nd_idxs = np.isclose(actual_nd, nd)
print np.sum(good_nd_idxs)/1000.
# NOTE sat_fracs uses actual_nd, so this is a slightly odd selection
good_satfrac_idxs = np.logical_and(0.1 < sat_fracs, sat_fracs < 0.5)
print np.sum(good_satfrac_idxs)/1000.
print np.sum(np.logical_and(good_satfrac_idxs, good_nd_idxs))/1000.
| notebooks/Satellite Fraction Tests.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python [default]
# language: python
# name: python2
# ---
# # Goals
#
# * compute books that have been marked GITenberg in the OPDS feed
# +
opds_gitenberg_url = "https://unglue.it/api/opds/kw.GITenberg/"
# -
from StringIO import StringIO
from lxml import etree
import requests
doc = etree.parse(StringIO(requests.get(opds_gitenberg_url).content))
doc.findall("{http://www.w3.org/2005/Atom}entry")
# +
from StringIO import StringIO
from lxml import etree
import requests
ATOM_NS = "http://www.w3.org/2005/Atom"
def elements_for_feed(url, starting_page=0):
page = starting_page
while True:
page_url = url + "?page={}".format(page)
doc = etree.parse(StringIO(requests.get(page_url).content))
entries = doc.findall("{{{}}}entry".format(ATOM_NS))
if entries:
for entry in entries:
yield entry
else:
break
page += 1
# -
for (i, entry) in enumerate(elements_for_feed(opds_gitenberg_url)):
title = entry.find("{{{}}}{}".format(ATOM_NS, 'title')).text
print (i, title)
| opds_parsing.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + hide_input=false
import os
import itertools
from collections import Counter
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
import matplotlib.cm as cm
from matplotlib import pyplot
plt.rc("font", size=10)
from sklearn import preprocessing, metrics
from sklearn.linear_model import LogisticRegression, RidgeCV, LassoCV, Ridge, Lasso
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import MultinomialNB
from sklearn.multiclass import OneVsRestClassifier
from sklearn.metrics import accuracy_score, confusion_matrix, classification_report
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer
from sklearn.feature_selection import SelectKBest, f_classif, chi2
from sklearn.preprocessing import FunctionTransformer, StandardScaler
from sklearn.pipeline import FeatureUnion
from imblearn.over_sampling import RandomOverSampler
from imblearn.pipeline import Pipeline, make_pipeline
# pd.set_option('display.max_colwidth', None)
# pd.set_option('display.max_columns', None)
# sns.set(style="white")
# sns.set(style="whitegrid", color_codes=True)
# + hide_input=false
# plot confusion matrix
def plot_confusion_matrix(cm, classes, normalize=False, title='Confusion matrix', cmap=plt.cm.Blues):
"""
This function prints and plots the confusion matrix.
Normalization can be applied by setting `normalize=True`.
"""
if normalize:
cm = cm.astype('float') / cm.sum(axis=1)[:, np.newaxis]
print("Normalized confusion matrix")
else:
print('Confusion matrix, without normalization')
print(cm)
plt.imshow(cm, interpolation='nearest', cmap=cmap)
plt.title(title)
plt.colorbar()
tick_marks = np.arange(len(classes))
plt.xticks(tick_marks, classes, rotation=45)
plt.yticks(tick_marks, classes)
fmt = '.2f' if normalize else 'd'
thresh = cm.max() / 2.
for i, j in itertools.product(range(cm.shape[0])
, range(cm.shape[1])):
plt.text(j, i, format(cm[i, j], fmt),
horizontalalignment="center",
color="white" if cm[i, j] > thresh else "black")
plt.tight_layout()
plt.ylabel('True label')
plt.xlabel('Predicted label')
# -
def show_most_informative_features(vectorizer, clf, n=20):
print("most informative words:")
feature_names = vectorizer.get_feature_names()
coef=clf.coef_[0]
coefs_with_fns = sorted(zip(clf.coef_[0], feature_names))
top = zip(coefs_with_fns[:n], coefs_with_fns[:-(n + 1):-1])
for (coef_1, fn_1), (coef_2, fn_2) in top:
print("\t%.4f\t%-15s\t\t%.4f\t%-15s" % (coef_1, fn_1, coef_2, fn_2))
# + hide_input=false
path = "../../data/processed/stylo_cupid2_liwc.csv"
filepath = os.path.join(os.path.dirname(os.path.abspath("__file__")), path)
df = pd.read_csv(filepath)
df.rename(columns={'A':'age', 'B':'sex', 'C':'text', 'D':'isced', 'E': 'isced2', 'F': '#anwps', 'G':'clean_text', 'H': 'count_char', 'I':'count_punct', 'J':'count_word', 'K':'avg_wordlength', 'L':'count_misspelled', 'M':'word_uniqueness'}, inplace=True)
df.dropna(subset=['isced', 'clean_text'], inplace=True)
df['isced'].mask(df['isced'].isin([3.0, 5.0, 1.0]) , 0, inplace=True)
df['isced'].mask(df['isced'].isin([6.0, 7.0, 8.0]) , 1, inplace=True)
df['clean_text'] = df['clean_text'].str.replace(r'\d+', ' ', regex=True)
clean_text = df['clean_text']
target = df.isced
meta = df.iloc[:, 5:13]
liwc = df.iloc[:, 13:]
liwc.replace(',','.',inplace=True, regex=True)
liwc= liwc.astype(float)
liwc_text = pd.concat([liwc, clean_text], axis=1)
# -
# # LIWC Output
liwc.head()
# # Summary statistics
liwc.describe()
# # Text classification by Logistic regression (Bag of words)
# + hide_input=true
Xt_rest,Xt_test, yt_rest, yt_test = train_test_split(clean_text, target, stratify=target, test_size = 0.25, random_state=0)
Xt_train, Xt_val, yt_train, yt_val = train_test_split(Xt_rest, yt_rest, stratify=yt_rest, test_size = 0.25, random_state=0)
clf_text = Pipeline([('vec', CountVectorizer(max_df=0.60, max_features=200000, stop_words='english', binary=True, lowercase=True, ngram_range=(1, 2))),
('clf', LogisticRegression(random_state=0, max_iter=10000, solver='lbfgs', penalty='l2', class_weight='balanced'))])
clf_text.fit(Xt_train, yt_train)
# clf_text = make_pipeline(CountVectorizer(ngram_range=(1, 2)), LogisticRegression(random_state=0, max_iter=10000, solver='lbfgs', penalty='l2', class_weight='balanced')).fit(Xt_train, yt_train)
predictions_t = clf_text.predict(Xt_val)
# RandomOverSampler(),
print("Final Accuracy for Logistic: %s"% accuracy_score(yt_val, predictions_t))
cm = confusion_matrix(yt_val,predictions_t)
print(classification_report(yt_val, predictions_t))
plt.figure()
plot_confusion_matrix(cm, classes=[0,1], normalize=False,
title='Confusion Matrix')
# + hide_input=true
show_most_informative_features(clf_text.get_params()['vec'], clf_text.get_params()['clf'], n=15)
# -
# ### Lasso for bag of words
# +
# from sklearn.feature_extraction.text import CountVectorizer
# from nltk.stem.snowball import FrenchStemmer
# stemmer = FrenchStemmer()
# analyzer = CountVectorizer().build_analyzer()
# def stemmed_words(doc):
# return (stemmer.stem(w) for w in analyzer(doc))
# analyzer=stemmed_words,
Xt_rest,Xt_test, yt_rest, yt_test = train_test_split(clean_text, target, stratify=target, test_size = 0.25, random_state=0)
Xt_train, Xt_val, yt_train, yt_val = train_test_split(Xt_rest, yt_rest, stratify=yt_rest, test_size = 0.25, random_state=0)
vectorizer = CountVectorizer(min_df=0.00, max_df=0.60, max_features=1000, stop_words='english', binary=True, lowercase=True, ngram_range=(1, 2))
grams = vectorizer.fit_transform(Xt_train)
words = vectorizer.get_feature_names()
# -
reg = LassoCV(max_iter=1000)
model_lasso = reg.fit(grams, yt_train)
# +
print("Best alpha using built-in LassoCV: %f" % reg.alpha_)
print("Best score using built-in LassoCV: %f" %reg.score(grams,yt_train))
coef = pd.Series(reg.coef_, index = words)
print("Lasso picked " + str(sum(coef != 0)) + " variables and eliminated the other " + str(sum(coef == 0)) + " variables")
# -
imp_coef = coef.sort_values()
new_col = imp_coef.nlargest(565)
cols = new_col.index.tolist()
cols
# ### Logistic Regression for bag of words with selected features
countvec_subset = CountVectorizer(vocabulary= cols)
Xt_train_subset = countvec_subset.fit_transform(Xt_train)
Xt_val_subset = countvec_subset.transform(Xt_val)
c = LogisticRegression(random_state=0, max_iter=10000, solver='lbfgs', penalty='l2', class_weight='balanced')
c.fit(Xt_train_subset, yt_train)
p = c.predict(Xt_val_subset)
print("Final Accuracy for Logistic: %s"% accuracy_score(yt_val, p))
cm = confusion_matrix(yt_val,p)
print(classification_report(yt_val, p))
plt.figure()
plot_confusion_matrix(cm, classes=[0,1], normalize=False,
title='Confusion Matrix')
show_most_informative_features(countvec_subset, c, n=20)
# # The Logistic regression (Text + Language features)
#
# +
Xtm_rest,Xtm_test, ytm_rest, ytm_test = train_test_split(meta, target, stratify=target, test_size = 0.25, random_state=0)
Xtm_train, Xtm_val, ytm_train, ytm_val = train_test_split(Xtm_rest, ytm_rest, stratify=ytm_rest, test_size = 0.25, random_state=0)
cols = meta.loc[:, meta.columns != 'clean_text'].columns
get_text_data = FunctionTransformer(lambda x: x['clean_text'], validate=False)
get_numeric_data = FunctionTransformer(lambda x: x[cols], validate=False)
process_and_join_features = Pipeline([
('features', FeatureUnion([
('numeric_features', Pipeline([
('selector', get_numeric_data),
('scaler', preprocessing.StandardScaler())
])),
('text_features', Pipeline([
('selector', get_text_data),
('vec', CountVectorizer(binary=False, ngram_range=(1, 2), lowercase=True))
]))
])),
('clf', LogisticRegression(random_state=0, max_iter=10000, solver='lbfgs', penalty='l2', class_weight='balanced'))
])
# merge vectorized text data and scaled numeric data
process_and_join_features.fit(Xtm_train, ytm_train)
predictions_tm = process_and_join_features.predict(Xtm_val)
print("Final Accuracy for Logistic: %s"% accuracy_score(ytm_val, predictions_tm))
cm = confusion_matrix(ytm_val,predictions_tm)
plt.figure()
plot_confusion_matrix(cm, classes=[0,1], normalize=False,
title='Confusion Matrix')
print(classification_report(ytm_val, predictions_tm))
# -
show_most_informative_features(process_and_join_features.get_params()['features'].get_params()['text_features'].get_params()['vec'], process_and_join_features.get_params()['clf'], n=20)
# ### selected features from language features
# +
meta = meta.loc[:, meta.columns != 'clean_text']
X_train_meta,X_test_meta, y_train_meta, y_test_meta = train_test_split(meta, target,stratify=target, test_size = 0.25, random_state=0)
scaler = StandardScaler()
meta_scaled = scaler.fit_transform(X_train_meta)
reg = LassoCV(max_iter=10000)
reg.fit(meta_scaled, y_train_meta)
print("Best alpha using built-in LassoCV: %f" % reg.alpha_)
print("Best score using built-in LassoCV: %f" %reg.score(meta_scaled,y_train_meta))
coef = pd.Series(reg.coef_, index = meta.columns)
print("Lasso picked " + str(sum(coef != 0)) + " variables and eliminated the other " + str(sum(coef == 0)) + " variables")
# -
imp_coef = coef.sort_values()
import matplotlib
matplotlib.rcParams['figure.figsize'] = (4.0, 7.0)
imp_coef.plot(kind = "barh")
plt.title("Feature importance using Lasso Model")
# # The Logistic regression (LIWC only)
# +
Xl_rest,Xl_test, yl_rest, yl_test = train_test_split(liwc, target, stratify=target, test_size = 0.25, random_state=0)
Xl_train, Xl_val, yl_train, yl_val = train_test_split(Xl_rest, yl_rest, stratify=yl_rest, test_size = 0.25, random_state=0)
scaler = preprocessing.StandardScaler()
Xl_train_scaled = scaler.fit_transform(Xl_train)
Xl_val_scaled = scaler.transform(Xl_val)
LogisticRegr = LogisticRegression(random_state=0, max_iter=10000, solver='lbfgs', penalty='l2', class_weight='balanced')
LogisticRegr.fit(Xl_train_scaled, yl_train)
predictions = LogisticRegr.predict(Xl_val_scaled)
print("Final Accuracy for Logistic: %s"% accuracy_score(yl_val, predictions))
cm = confusion_matrix(yl_val,predictions)
plt.figure()
plot_confusion_matrix(cm, classes=[0,1], normalize=False,
title='Confusion Matrix')
print(classification_report(yl_val, predictions))
# +
# importance = LogisticRegr.coef_[0]
# # summarize feature importance
# for i,v in enumerate(importance):
# print('Feature: %0d, Score: %.5f' % (i,v))
# # plot feature importance
# pyplot.bar([x for x in range(len(importance))], importance)
# pyplot.show()
coef_liwc = pd.Series(LogisticRegr.coef_[0], index = liwc.columns)
imp_coef_liwc = coef_liwc.sort_values()
import matplotlib
matplotlib.rcParams['figure.figsize'] = (8.0, 15.0)
imp_coef_liwc.plot(kind = "barh")
plt.title("Feature importance ")
# -
# ## Lasso for LIWC
# +
X_train,X_test, y_train, y_test = train_test_split(liwc, target,stratify=target, test_size = 0.25, random_state=0)
scaler = StandardScaler()
liwc_scaled = scaler.fit_transform(X_train)
reg = LassoCV(max_iter=10000)
reg.fit(liwc_scaled, y_train)
print("Best alpha using built-in LassoCV: %f" % reg.alpha_)
print("Best score using built-in LassoCV: %f" %reg.score(liwc_scaled,y_train))
coef = pd.Series(reg.coef_, index = liwc.columns)
print("Lasso picked " + str(sum(coef != 0)) + " variables and eliminated the other " + str(sum(coef == 0)) + " variables")
imp_coef = coef.sort_values()
# import matplotlib
# matplotlib.rcParams['figure.figsize'] = (8.0, 15.0)
# imp_coef.plot(kind = "barh")
# plt.title("Feature importance using Lasso Model")
new_col = abs(imp_coef).nlargest(43)
new_liwc = liwc[new_col.index]
text = df['clean_text']
liwc_text_new = pd.concat([new_liwc, text], axis=1)
Xl_rest,Xl_test, yl_rest, yl_test = train_test_split(new_liwc, target, stratify=target, test_size = 0.25, random_state=0)
Xl_train, Xl_val, yl_train, yl_val = train_test_split(Xl_rest, yl_rest, stratify=yl_rest, test_size = 0.25, random_state=0)
scaler = preprocessing.StandardScaler()
Xl_train_scaled = scaler.fit_transform(Xl_train)
Xl_val_scaled = scaler.transform(Xl_val)
LogisticRegr = LogisticRegression(random_state=0, max_iter=10000, solver='lbfgs', penalty='l2', class_weight='balanced')
LogisticRegr.fit(Xl_train_scaled, yl_train)
predictions = LogisticRegr.predict(Xl_val_scaled)
print("Final Accuracy for Logistic: %s"% accuracy_score(yl_val, predictions))
cm = confusion_matrix(yl_val,predictions)
plt.figure()
plot_confusion_matrix(cm, classes=[0,1], normalize=False,
title='Confusion Matrix')
print(classification_report(yl_val, predictions))
# -
n = 10
print("most informative words:")
feature_names = Xl_train.columns
coef=LogisticRegr.coef_[0]
coefs_with_fns = sorted(zip(LogisticRegr.coef_[0], feature_names))
top = zip(coefs_with_fns[:n], coefs_with_fns[:-(n + 1):-1])
for (coef_1, fn_1), (coef_2, fn_2) in top:
print("\t%.4f\t%-15s\t\t%.4f\t%-15s" % (coef_1, fn_1, coef_2, fn_2))
# # The Logistic regression (Bag of words + LIWC)
# +
Xlt_rest,Xlt_test, ylt_rest, ylt_test = train_test_split(liwc_text, target,stratify=target, test_size = 0.25, random_state=0)
Xlt_train, Xlt_val, ylt_train, ylt_val = train_test_split(Xlt_rest, ylt_rest, stratify=ylt_rest, test_size = 0.25, random_state=0)
cols = liwc_text.loc[:, liwc_text.columns != 'clean_text'].columns
get_text_data = FunctionTransformer(lambda x: x['clean_text'], validate=False)
get_numeric_data = FunctionTransformer(lambda x: x[cols], validate=False)
process_and_join_features = Pipeline([
('features', FeatureUnion([
('numeric_features', Pipeline([
('selector', get_numeric_data),
('scaler', preprocessing.StandardScaler())
])),
('text_features', Pipeline([
('selector', get_text_data),
('vec', CountVectorizer(binary=False, ngram_range=(1, 2), lowercase=True))
]))
])),
('clf', LogisticRegression(random_state=0, max_iter=5000, solver='lbfgs', penalty='l2', class_weight='balanced'))
])
# ('reducer', SelectKBest(chi2, k=100000)
# merge vectorized text data and scaled numeric data
process_and_join_features.fit(Xlt_train, ylt_train)
predictions_lt = process_and_join_features.predict(Xlt_val)
print("Final Accuracy for Logistic: %s"% accuracy_score(ylt_val, predictions_lt))
cm = confusion_matrix(ylt_val,predictions_lt)
plt.figure()
plot_confusion_matrix(cm, classes=[0,1], normalize=False,
title='Confusion Matrix')
print(classification_report(ylt_val, predictions_lt))
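The FeatureUnion pattern above — FunctionTransformer selectors feeding a scaled numeric branch and a CountVectorizer text branch — can be reproduced on a toy frame. This is a minimal sketch with made-up column names and data, not the project's actual pipeline:

```python
import pandas as pd
from sklearn.pipeline import Pipeline, FeatureUnion
from sklearn.preprocessing import FunctionTransformer, StandardScaler
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Toy data standing in for the LIWC + text frame
df = pd.DataFrame({
    "clean_text": ["good day", "bad news", "great news", "awful day"],
    "score": [0.9, 0.1, 0.8, 0.2],
})
y = [1, 0, 1, 0]

get_text = FunctionTransformer(lambda x: x["clean_text"], validate=False)
get_num = FunctionTransformer(lambda x: x[["score"]], validate=False)

model = Pipeline([
    ("features", FeatureUnion([
        # numeric branch: select the column, then scale
        ("numeric", Pipeline([("sel", get_num), ("scale", StandardScaler())])),
        # text branch: select the column, then vectorize
        ("text", Pipeline([("sel", get_text), ("vec", CountVectorizer())])),
    ])),
    ("clf", LogisticRegression()),
])
model.fit(df, y)
print(model.predict(df))
```

FeatureUnion stacks the sparse text matrix next to the dense scaled column automatically, which is why the full DataFrame can be passed straight to `fit`.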
# +
def show_most_informative_features(vectorizer, clf, n=20):
feature_names = vectorizer.get_feature_names()
coefs_with_fns = sorted(zip(clf.coef_[0], feature_names))
top = zip(coefs_with_fns[:n], coefs_with_fns[:-(n + 1):-1])
for (coef_1, fn_1), (coef_2, fn_2) in top:
print("\t%.4f\t%-15s\t\t%.4f\t%-15s" % (coef_1, fn_1, coef_2, fn_2))
show_most_informative_features(process_and_join_features.get_params()['features'].get_params()['text_features'].get_params()['vec'], process_and_join_features.get_params()['clf'], n=20)
# -
# # Feature selection(Lasso)
# + hide_input=false
X_train,X_test, y_train, y_test = train_test_split(liwc, target,stratify=target, test_size = 0.25, random_state=0)
scaler = StandardScaler()
liwc_scaled = scaler.fit_transform(X_train)
reg = LassoCV(max_iter=10000)
reg.fit(liwc_scaled, y_train)
print("Best alpha using built-in LassoCV: %f" % reg.alpha_)
print("Best score using built-in LassoCV: %f" %reg.score(liwc_scaled,y_train))
coef = pd.Series(reg.coef_, index = liwc.columns)
print("Lasso picked " + str(sum(coef != 0)) + " variables and eliminated the other " + str(sum(coef == 0)) + " variables")
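The same LassoCV selection step can be sanity-checked on synthetic data where the number of informative features is known in advance. The sketch below uses a regression toy (the notebook applies LassoCV to a binary target); dataset shape and noise level are illustrative assumptions:

```python
from sklearn.datasets import make_regression
from sklearn.linear_model import LassoCV
from sklearn.preprocessing import StandardScaler

# Synthetic regression: 50 features, only 5 actually informative
X, y = make_regression(n_samples=200, n_features=50, n_informative=5,
                       noise=1.0, random_state=0)
X_scaled = StandardScaler().fit_transform(X)

reg = LassoCV(cv=5, max_iter=10000).fit(X_scaled, y)
kept = int((reg.coef_ != 0).sum())  # nonzero coefficients survive the L1 penalty
print("alpha: %f, features kept: %d" % (reg.alpha_, kept))
```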
# + hide_input=false
imp_coef = coef.sort_values()
import matplotlib
matplotlib.rcParams['figure.figsize'] = (8.0, 15.0)
imp_coef.plot(kind = "barh")
plt.title("Feature importance using Lasso Model")
# +
new_col = abs(imp_coef).nlargest(43)
new_liwc = liwc[new_col.index]
text = df['clean_text']
liwc_text_new = pd.concat([new_liwc, text], axis=1)
# +
Xlt_rest,Xlt_test, ylt_rest, ylt_test = train_test_split(liwc_text_new, target,stratify=target, test_size = 0.25, random_state=0)
Xlt_train, Xlt_val, ylt_train, ylt_val = train_test_split(Xlt_rest, ylt_rest, stratify=ylt_rest, test_size = 0.25, random_state=0)
cols = liwc_text_new.loc[:, liwc_text_new.columns != 'clean_text'].columns
get_text_data = FunctionTransformer(lambda x: x['clean_text'], validate=False)
get_numeric_data = FunctionTransformer(lambda x: x[cols], validate=False)
process_and_join_features = Pipeline([
('features', FeatureUnion([
('numeric_features', Pipeline([
('selector', get_numeric_data),
('scaler', preprocessing.StandardScaler())
])),
('text_features', Pipeline([
('selector', get_text_data),
('vec', CountVectorizer(binary=False, ngram_range=(1, 2), lowercase=True))
]))
])),
('clf', LogisticRegression(random_state=0, max_iter=5000, solver='lbfgs', penalty='l2', class_weight='balanced'))
])
# merge vectorized text data and scaled numeric data
process_and_join_features.fit(Xlt_train, ylt_train)
predictions_lt = process_and_join_features.predict(Xlt_val)
print("Final Accuracy for Logistic: %s"% accuracy_score(ylt_val, predictions_lt))
cm = confusion_matrix(ylt_val,predictions_lt)
plt.figure()
plot_confusion_matrix(cm, classes=[0,1], normalize=False,
title='Confusion Matrix')
print(classification_report(ylt_val, predictions_lt))
# -
# ## Most informative words
show_most_informative_features(process_and_join_features.get_params()['features'].get_params()['text_features'].get_params()['vec'], process_and_join_features.get_params()['clf'], n=20)
| docs/reports/report7_lasso.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import torch
import torch.nn as nn
import torch.nn.functional as F
# ### Intro to NNs
#
# Define the net
# +
class Net(nn.Module):
"""
A simple NN
"""
def __init__(self):
super(Net, self).__init__()
# 1 input image channel, 6 output channels, 3x3 square convolution
# kernel
self.conv1 = nn.Conv2d(1, 6, 3)
self.conv2 = nn.Conv2d(6, 16, 3)
# an affine operation: y = Wx + b
self.fc1 = nn.Linear(16 * 6 * 6, 120) # 6*6 from image dimension
self.fc2 = nn.Linear(120, 84)
self.fc3 = nn.Linear(84, 10)
def forward(self, x):
# Max pooling over a (2, 2) window
x = F.max_pool2d(F.relu(self.conv1(x)), (2, 2))
# If the size is a square you can only specify a single number
x = F.max_pool2d(F.relu(self.conv2(x)), 2)
x = x.view(-1, self.num_flat_features(x))
x = F.relu(self.fc1(x))
x = F.relu(self.fc2(x))
x = self.fc3(x)
return x
def num_flat_features(self, x):
size = x.size()[1:] # all dimensions except the batch dimension
num_features = 1
for s in size:
num_features *= s
return num_features
net = Net()
print(net)
# -
# get parameters of the net
params = list(net.parameters())
print(len(params))
print(params[0].size())
# run the net on a random 32 x 32 input
input = torch.randn(1, 1, 32, 32)
out = net(input)
print(out)
# Zero the gradient buffers of all parameters and
# backprops with random gradients:
net.zero_grad()
out.backward(torch.randn(1, 10))
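The tutorial stops at backprop with random gradients; a typical next step is computing a real loss and applying an optimizer update. The sketch below uses a stand-in linear model, `MSELoss`, and `SGD` as assumed choices — any `nn.Module` (including the `Net` above) plugs into the same loop:

```python
import torch
import torch.nn as nn
import torch.optim as optim

# Stand-in for the Net above; the training step is identical for any module
model = nn.Linear(32, 10)
criterion = nn.MSELoss()
optimizer = optim.SGD(model.parameters(), lr=0.01)

input = torch.randn(1, 32)
target = torch.randn(1, 10)       # dummy target for illustration

optimizer.zero_grad()             # zero the gradient buffers
output = model(input)
loss = criterion(output, target)
loss.backward()                   # backpropagate
optimizer.step()                  # apply the parameter update
print(loss.item())
```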
| PyTorch_Blitz/Neural_Network_Basics.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python [conda root]
# language: python
# name: conda-root-py
# ---
import numpy as np
# +
# constants of our universe
k = 9e9  # Coulomb constant (N·m²/C²); note Python's ^ is XOR, so 9*10^9 would be wrong
# set up the system
num_charges = 1
charge_coord_m = [np.array([3, 2, 5])]
charge_C = 2e-9  # 2 nC; again, use e-notation or **, never ^, for powers
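The loop further down evaluates dE = k·q·r/|r|³ per charge. As a standalone check, the same formula wrapped in a small helper, verified against the hand value kq/r² (the geometry here is illustrative, not one of the labeled points):

```python
import numpy as np

k = 9e9   # Coulomb constant, N·m²/C²
q = 2e-9  # charge, C

def e_field(source, location, q=q, k=k):
    """Electric field of a point charge q at `source`, evaluated at `location`."""
    r = np.asarray(location, dtype=float) - np.asarray(source, dtype=float)
    return k * q * r / np.linalg.norm(r)**3

# 2 nC charge, observed 0.1 m away along x: |E| = kq/r^2 = 9e9*2e-9/0.01 = 1800 N/C
E = e_field([0, 0, 0], [0.1, 0, 0])
print(E)  # ≈ [1800., 0., 0.]
```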
# +
# Label the points in space we're interested in
location_dict = dict()
center = charge_coord_m[0]
label = 1
radius = 0.06
num_points = 10
for idx in list(range(1,num_points+1)):
angle = idx*2*np.pi/num_points
location = center + np.array([radius*np.cos(angle), radius*np.sin(angle), 0.05])
#print (location)
location_dict[label+idx-1] = location
label = 10
radius = 0.12
num_points = 15
for idx in list(range(1,num_points+1)):
angle = idx*2*np.pi/num_points
location = center + np.array([radius*np.cos(angle), radius*np.sin(angle), 0.05])
#print (location)
location_dict[label+idx-1] = location
label = 25
radius = 0.06
num_points = 10
for idx in list(range(1,num_points+1)):
angle = idx*2*np.pi/num_points
location = center + np.array([radius*np.cos(angle), radius*np.sin(angle), -0.05])
#print (location)
location_dict[label+idx-1] = location
label = 35
radius = 0.12
num_points = 15
for idx in list(range(1,num_points+1)):
angle = idx*2*np.pi/num_points
# z was supposed to be -0.05 but mistakes were made
location = center + np.array([radius*np.cos(angle), radius*np.sin(angle), 0.05])
#print (location)
location_dict[label+idx-1] = location
#print (location_dict)
# +
# choose the location you're interested in
#location_m = np.array([-0.09, 0, -0.0173])
location_m = location_dict[12]
# calculate electric field due to each charge
for idx in list(range(0,num_charges)):
source = charge_coord_m[idx]
r = location_m - source
print("point: " + repr(idx+1))
print("r: " + repr(r) + repr(np.linalg.norm(r)))
print("r hat: " + repr(r/np.linalg.norm(r)))
print("dE: " + repr(k*charge_C*r/(np.linalg.norm(r)**3)))
print("loc: " + repr(location_m))
# -
| point_charge_calc.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Z-test for two proportions
# +
import numpy as np
import pandas as pd
import scipy
from statsmodels.stats.weightstats import *
from statsmodels.stats.proportion import proportion_confint
# -
# ## Loading the data
data = pd.read_csv('banner_click_stat.txt', header = None, sep = '\t')
data.columns = ['banner_a', 'banner_b']
data.head()
data.describe()
# ## Interval estimates of the proportions
# $$\frac1{ 1 + \frac{z^2}{n} } \left( \hat{p} + \frac{z^2}{2n} \pm z \sqrt{ \frac{ \hat{p}\left(1-\hat{p}\right)}{n} + \frac{z^2}{4n^2} } \right), \;\; z \equiv z_{1-\frac{\alpha}{2}}$$
conf_interval_banner_a = proportion_confint(sum(data.banner_a),
data.shape[0],
method = 'wilson')
conf_interval_banner_b = proportion_confint(sum(data.banner_b),
data.shape[0],
method = 'wilson')
print('95%% confidence interval for a click probability, banner a: [%f, %f]' % conf_interval_banner_a)
print('95%% confidence interval for a click probability, banner b: [%f, %f]' % conf_interval_banner_b)
# ## Z-test for the difference of proportions (independent samples)
# | | $X_1$ | $X_2$ |
# |---|---|---|
# | 1 | a | b |
# | 0 | c | d |
# | $\sum$ | $n_1$ | $n_2$ |
#
# $$ \hat{p}_1 = \frac{a}{n_1}$$
#
# $$ \hat{p}_2 = \frac{b}{n_2}$$
#
#
# $$\text{Confidence interval for }p_1 - p_2\colon \;\; \hat{p}_1 - \hat{p}_2 \pm z_{1-\frac{\alpha}{2}}\sqrt{\frac{\hat{p}_1(1 - \hat{p}_1)}{n_1} + \frac{\hat{p}_2(1 - \hat{p}_2)}{n_2}}$$
#
# $$\text{Z-statistic: } Z(X_1, X_2) = \frac{\hat{p}_1 - \hat{p}_2}{\sqrt{P(1 - P)\left(\frac{1}{n_1} + \frac{1}{n_2}\right)}}$$
# $$P = \frac{\hat{p}_1 n_1 + \hat{p}_2 n_2}{n_1 + n_2}$$
def proportions_diff_confint_ind(sample1, sample2, alpha = 0.05):
z = scipy.stats.norm.ppf(1 - alpha / 2.)
p1 = float(sum(sample1)) / len(sample1)
p2 = float(sum(sample2)) / len(sample2)
left_boundary = (p1 - p2) - z * np.sqrt(p1 * (1 - p1)/ len(sample1) + p2 * (1 - p2)/ len(sample2))
right_boundary = (p1 - p2) + z * np.sqrt(p1 * (1 - p1)/ len(sample1) + p2 * (1 - p2)/ len(sample2))
return (left_boundary, right_boundary)
def proportions_diff_z_stat_ind(sample1, sample2):
n1 = len(sample1)
n2 = len(sample2)
p1 = float(sum(sample1)) / n1
p2 = float(sum(sample2)) / n2
P = float(p1*n1 + p2*n2) / (n1 + n2)
return (p1 - p2) / np.sqrt(P * (1 - P) * (1. / n1 + 1. / n2))
def proportions_diff_z_test(z_stat, alternative = 'two-sided'):
if alternative not in ('two-sided', 'less', 'greater'):
raise ValueError("alternative not recognized\n"
"should be 'two-sided', 'less' or 'greater'")
if alternative == 'two-sided':
return 2 * (1 - scipy.stats.norm.cdf(np.abs(z_stat)))
if alternative == 'less':
return scipy.stats.norm.cdf(z_stat)
if alternative == 'greater':
return 1 - scipy.stats.norm.cdf(z_stat)
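As a quick sanity check of the independent-samples Z-statistic, the same computation inlined on synthetic Bernoulli samples (sample sizes and click rates below are arbitrary illustrative values):

```python
import numpy as np
import scipy.stats

rng = np.random.RandomState(0)
sample1 = rng.binomial(1, 0.30, size=1000)   # clicks for banner 1, true p1 = 0.30
sample2 = rng.binomial(1, 0.25, size=1000)   # clicks for banner 2, true p2 = 0.25

n1, n2 = len(sample1), len(sample2)
p1, p2 = sample1.mean(), sample2.mean()
P = (p1 * n1 + p2 * n2) / (n1 + n2)          # pooled proportion
z = (p1 - p2) / np.sqrt(P * (1 - P) * (1.0 / n1 + 1.0 / n2))
p_value = 2 * (1 - scipy.stats.norm.cdf(abs(z)))  # two-sided
print(z, p_value)
```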
print("95%% confidence interval for a difference between proportions: [%f, %f]"
      % proportions_diff_confint_ind(data.banner_a, data.banner_b))
print("p-value: %f" % proportions_diff_z_test(proportions_diff_z_stat_ind(data.banner_a, data.banner_b)))
print("p-value (one-sided, 'less'): %f" % proportions_diff_z_test(proportions_diff_z_stat_ind(data.banner_a, data.banner_b), 'less'))
# ## Z-test for the difference of proportions (paired samples)
# | $X_1$ \ $X_2$ | 1 | 0 | $\sum$ |
# |---|---|---|---|
# | 1 | e | f | e + f |
# | 0 | g | h | g + h |
# | $\sum$ | e + g | f + h | n |
#
# $$ \hat{p}_1 = \frac{e + f}{n}$$
#
# $$ \hat{p}_2 = \frac{e + g}{n}$$
#
# $$ \hat{p}_1 - \hat{p}_2 = \frac{f - g}{n}$$
#
#
# $$\text{Confidence interval for }p_1 - p_2\colon \;\; \frac{f - g}{n} \pm z_{1-\frac{\alpha}{2}}\sqrt{\frac{f + g}{n^2} - \frac{(f - g)^2}{n^3}}$$
#
# $$\text{Z-statistic: } Z(X_1, X_2) = \frac{f - g}{\sqrt{f + g - \frac{(f-g)^2}{n}}}$$
def proportions_diff_confint_rel(sample1, sample2, alpha = 0.05):
z = scipy.stats.norm.ppf(1 - alpha / 2.)
    sample = list(zip(sample1, sample2))  # materialize: zip is a one-shot iterator in Python 3
n = len(sample)
f = sum([1 if (x[0] == 1 and x[1] == 0) else 0 for x in sample])
g = sum([1 if (x[0] == 0 and x[1] == 1) else 0 for x in sample])
left_boundary = float(f - g) / n - z * np.sqrt(float((f + g)) / n**2 - float((f - g)**2) / n**3)
right_boundary = float(f - g) / n + z * np.sqrt(float((f + g)) / n**2 - float((f - g)**2) / n**3)
return (left_boundary, right_boundary)
def proportions_diff_z_stat_rel(sample1, sample2):
    sample = list(zip(sample1, sample2))  # materialize: zip is a one-shot iterator in Python 3
n = len(sample)
f = sum([1 if (x[0] == 1 and x[1] == 0) else 0 for x in sample])
g = sum([1 if (x[0] == 0 and x[1] == 1) else 0 for x in sample])
return float(f - g) / np.sqrt(f + g - float((f - g)**2) / n )
print("95%% confidence interval for a difference between proportions: [%f, %f]"
      % proportions_diff_confint_rel(data.banner_a, data.banner_b))
print("p-value: %f" % proportions_diff_z_test(proportions_diff_z_stat_rel(data.banner_a, data.banner_b)))
print("p-value (one-sided, 'less'): %f" % proportions_diff_z_test(proportions_diff_z_stat_rel(data.banner_a, data.banner_b), 'less'))
| coursera/ml_yandex/course4/course4week2/stat.two_proportions_diff_test_orig.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/tylee33/nlp-tensorflow/blob/master/bert_chat.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + id="j5DlMVxzFGlS" colab_type="code" colab={}
# coding=utf-8
# Copyright 2018 The Google AI Language Team Authors and The HugginFace Inc. team.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# + id="2hftlucmDVZ7" colab_type="code" outputId="0e814a55-b4cf-4fbc-9f63-5e86108de1db" colab={"base_uri": "https://localhost:8080/", "height": 306}
# !nvidia-smi
# + id="_iMMTBKcDtZF" colab_type="code" outputId="5b404ee2-2970-44d8-9ab8-608cad59509a" colab={"base_uri": "https://localhost:8080/", "height": 54}
# http://pytorch.org/
from os.path import exists
from wheel.pep425tags import get_abbr_impl, get_impl_ver, get_abi_tag
platform = '{}{}-{}'.format(get_abbr_impl(), get_impl_ver(), get_abi_tag())
# cuda_output = !ldconfig -p|grep cudart.so|sed -e 's/.*\.\([0-9]*\)\.\([0-9]*\)$/cu\1\2/'
accelerator = cuda_output[0] if exists('/dev/nvidia0') else 'cpu'
# !pip install -q http://download.pytorch.org/whl/{accelerator}/torch-0.4.1-{platform}-linux_x86_64.whl torchvision
# !pip install -q konlpy
# + id="qKVlFXn2KSNz" colab_type="code" colab={}
# # !apt-get install -y -qq software-properties-common python-software-properties module-init-tools
# # !add-apt-repository -y ppa:alessandro-strada/ppa 2>&1 > /dev/null
# # !apt-get update -qq 2>&1 > /dev/null
# # !apt-get -y install -qq google-drive-ocamlfuse fuse
# from google.colab import auth
# auth.authenticate_user()
# from oauth2client.client import GoogleCredentials
# creds = GoogleCredentials.get_application_default()
# import getpass
# # !google-drive-ocamlfuse -headless -id={creds.client_id} -secret={creds.client_secret} < /dev/null 2>&1 | grep URL
# vcode = getpass.getpass()
# # !echo {vcode} | google-drive-ocamlfuse -headless -id={creds.client_id} -secret={creds.client_secret}
# + id="rbv3kJucKSVa" colab_type="code" colab={}
# # !mkdir -p drive # create the Google Drive mount directory
# # !google-drive-ocamlfuse drive # mount Google Drive onto the created directory
# + id="cF1PR_EOE-Sp" colab_type="code" outputId="c5f63cea-03a4-4ff0-ff13-699b12462444" colab={"base_uri": "https://localhost:8080/", "height": 357}
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import argparse
import collections
import unicodedata
import six
import math
import torch
import torch.nn as nn
import copy
import json
import logging
import re
import random
import numpy as np
import os
from tqdm import tqdm, trange
from torch.optim import Optimizer
from torch.nn.utils import clip_grad_norm_
from torch.nn import CrossEntropyLoss
from torch.utils.data import TensorDataset, DataLoader, RandomSampler, SequentialSampler
from torch.utils.data.distributed import DistributedSampler
from argparse import Namespace
from konlpy.tag import Okt
logging.basicConfig(format = '%(asctime)s - %(levelname)s - %(name)s - %(message)s',
datefmt = '%m/%d/%Y %H:%M:%S',
level = logging.INFO)
logger = logging.getLogger(__name__)
# + [markdown] id="wN6Epjlnp9FR" colab_type="text"
# ## 1.context finder
# + id="zRjuj14gLGK_" colab_type="code" colab={}
class ContextFinder:
# BASE_PATH : Project path
# vectorize : Initialize TF-IDF matrix
# documents : documents ( contexts in selected title or paragraph )
# X : Generated TF-IDF weights matrix after fitting input documents
# features : a.k.a vocabulary
# tokenizer : Open-Korean-Text for Korean language processing
def __init__(self):
self.BASE_PATH = os.path.dirname(os.path.abspath(__name__))
self.vectorize = self.init_tf_idf_vector()
self.documents = []
self.X = None
self.features = None
self.tokenizer = Okt()
# tokenization
# norm : ㅋㅋㅋㅋㅋ ---> ㅋㅋ
# stem : 들어간다 ---> 들어가다.
def convert_to_lemma(self, text: str) -> list:
return self.tokenizer.morphs(text, norm=True, stem=True)
# for testing, pos tagged tuple list
def check_pos(self, text: str) -> list:
return self.tokenizer.pos(text)
# loading dev data
def load_context_by_title(self, dataset_path):
if dataset_path is None:
dataset_path = 'dev-v1.1.json'
with open(dataset_path) as f:
data = json.load(f)['data']
for article in data:
for paragraph in article.get('paragraphs'):
self.documents.append(paragraph.get('context'))
# initializing vectorizer object, adding custom tokenizer above (convert_to_lemma)
def init_tf_idf_vector(self):
from sklearn.feature_extraction.text import TfidfVectorizer
return TfidfVectorizer(
tokenizer=self.convert_to_lemma,
min_df=1,
sublinear_tf=True
)
def generate_tf_idf_vector(self):
self.X = self.vectorize.fit_transform(self.documents)
self.features = self.vectorize.get_feature_names()
# e.g) after fitting 5 sentences and 7 features, matrix X looks like below
# ([[0. , 0.40824829, 0.81649658, 0. , 0. , 0. , 0.40824829],
# [0. , 0.40824829, 0.40824829, 0. , 0. , 0. , 0.81649658],
# [0.41680418, 0. , 0. , 0.69197025, 0.41680418, 0.41680418, 0. ],
# [0.76944707, 0. , 0. , 0.63871058, 0. , 0. , 0. ],
# [0. , 0. , 0. , 0.8695635 , 0.34918428, 0.34918428, 0. ]])
def build_model(self, dataset_path=None):
self.load_context_by_title(dataset_path)
self.generate_tf_idf_vector()
def get_ntop_context(self, query: str, n: int) -> str:
if self.X is None or self.features is None:
self.build_model()
# check input query keywords if they are in feature(vocabulary)
        # use self: calling the method through the class would pass the string as `self`
        keywords = [word for word in self.convert_to_lemma(query) if word in self.features]
# get indexes of keywords in X( TF-IDF matrix )
matched_keywords = np.asarray(self.X.toarray())[:, [self.vectorize.vocabulary_.get(i) for i in keywords]]
# word 1 word 2
# 0 0.000000 0.000000 doc 1
# 1 0.000000 0.000000 doc 2
# 2 0.416804 0.691970 doc 3
# 3 0.769447 0.638711 doc 4
# 4 0.000000 0.869563 doc 5
        # sum each word's weight document by document, then rank documents in descending order
        sums = matched_keywords.sum(axis=1)
        ranked_idx = sums.argsort()[::-1]
        for i in ranked_idx[:n]:
            if sums[i] > 0:  # compare the score, not the argsort index
                yield self.documents[i]
def get_ntop_context_by_cosine_similarity(self, query: str, n: int):
from sklearn.metrics.pairwise import linear_kernel
if self.X is None or self.features is None:
self.build_model()
query_vector = self.vectorize.transform([query])
# linear_kernel is dot product between query_vector and all documents vector and transform 1 dim array
cosine_similar = linear_kernel(query_vector, self.X).flatten()
ranked_idx = cosine_similar.argsort()[::-1]
for i in ranked_idx[:n]:
if cosine_similar[i] > 0:
yield self.documents[i]
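The retrieval idea inside `get_ntop_context_by_cosine_similarity` — transform the query, dot it against the document matrix, rank by similarity — can be shown standalone. The corpus and query below are toy examples:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import linear_kernel

docs = ["the cat sat on the mat",
        "dogs chase cats",
        "stock prices fell sharply"]
vec = TfidfVectorizer()
X = vec.fit_transform(docs)                     # document-term TF-IDF matrix

query_vec = vec.transform(["cat on a mat"])
# linear_kernel on L2-normalized TF-IDF rows is cosine similarity
scores = linear_kernel(query_vec, X).flatten()
best = scores.argsort()[::-1][0]
print(docs[best])
```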
# + [markdown] id="4Q8Q4c_7Foyn" colab_type="text"
# ## 2.optimization
# + id="w1t6v1DIFpaV" colab_type="code" outputId="9078c340-96d8-46ad-e0d5-59946533102e" colab={"base_uri": "https://localhost:8080/", "height": 235}
def warmup_cosine(x, warmup=0.002):
if x < warmup:
return x/warmup
return 0.5 * (1.0 + torch.cos(math.pi * x))
def warmup_constant(x, warmup=0.002):
if x < warmup:
return x/warmup
return 1.0
def warmup_linear(x, warmup=0.002):
if x < warmup:
return x/warmup
return 1.0 - x
SCHEDULES = {
'warmup_cosine':warmup_cosine,
'warmup_constant':warmup_constant,
'warmup_linear':warmup_linear,
}
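A quick standalone illustration of how `warmup_linear` shapes the learning rate over training progress (the base LR and warmup fraction below are assumed values, not ones prescribed by the notebook):

```python
def warmup_linear(x, warmup=0.002):
    # fraction of the base LR at training progress x in [0, 1]
    if x < warmup:
        return x / warmup        # linear ramp-up during the warmup fraction
    return 1.0 - x               # linear decay afterwards

lr_base = 5e-5
for step, total in [(5, 1000), (100, 1000), (900, 1000)]:
    print(step, lr_base * warmup_linear(step / total, warmup=0.01))
```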
class BERTAdam(Optimizer):
    """Implements the BERT version of the Adam algorithm with weight decay fix.
Params:
lr: learning rate
warmup: portion of t_total for the warmup, -1 means no warmup. Default: -1
t_total: total number of training steps for the learning
rate schedule, -1 means constant learning rate. Default: -1
schedule: schedule to use for the warmup (see above). Default: 'warmup_linear'
b1: Adams b1. Default: 0.9
b2: Adams b2. Default: 0.999
e: Adams epsilon. Default: 1e-6
weight_decay_rate: Weight decay. Default: 0.01
max_grad_norm: Maximum norm for the gradients (-1 means no clipping). Default: 1.0
"""
def __init__(self, params, lr, warmup=-1, t_total=-1, schedule='warmup_linear',
b1=0.9, b2=0.999, e=1e-6, weight_decay_rate=0.01,
max_grad_norm=1.0):
if not lr >= 0.0:
raise ValueError("Invalid learning rate: {} - should be >= 0.0".format(lr))
if schedule not in SCHEDULES:
raise ValueError("Invalid schedule parameter: {}".format(schedule))
if not 0.0 <= warmup < 1.0 and not warmup == -1:
raise ValueError("Invalid warmup: {} - should be in [0.0, 1.0[ or -1".format(warmup))
if not 0.0 <= b1 < 1.0:
raise ValueError("Invalid b1 parameter: {} - should be in [0.0, 1.0[".format(b1))
if not 0.0 <= b2 < 1.0:
raise ValueError("Invalid b2 parameter: {} - should be in [0.0, 1.0[".format(b2))
if not e >= 0.0:
raise ValueError("Invalid epsilon value: {} - should be >= 0.0".format(e))
defaults = dict(lr=lr, schedule=schedule, warmup=warmup, t_total=t_total,
b1=b1, b2=b2, e=e, weight_decay_rate=weight_decay_rate,
max_grad_norm=max_grad_norm)
super(BERTAdam, self).__init__(params, defaults)
def get_lr(self):
lr = []
for group in self.param_groups:
for p in group['params']:
state = self.state[p]
if len(state) == 0:
return [0]
if group['t_total'] != -1:
schedule_fct = SCHEDULES[group['schedule']]
lr_scheduled = group['lr'] * schedule_fct(state['step']/group['t_total'], group['warmup'])
else:
lr_scheduled = group['lr']
lr.append(lr_scheduled)
return lr
def to(self, device):
""" Move the optimizer state to a specified device"""
for state in self.state.values():
state['exp_avg'].to(device)
state['exp_avg_sq'].to(device)
def initialize_step(self, initial_step):
        """Initialize state at a given step number (without stored moment averages).
Arguments:
initial_step (int): Initial step number.
"""
for group in self.param_groups:
for p in group['params']:
state = self.state[p]
# State initialization
state['step'] = initial_step
# Exponential moving average of gradient values
state['exp_avg'] = torch.zeros_like(p.data)
# Exponential moving average of squared gradient values
state['exp_avg_sq'] = torch.zeros_like(p.data)
def step(self, closure=None):
"""Performs a single optimization step.
Arguments:
closure (callable, optional): A closure that reevaluates the model
and returns the loss.
"""
loss = None
if closure is not None:
loss = closure()
for group in self.param_groups:
for p in group['params']:
if p.grad is None:
continue
grad = p.grad.data
if grad.is_sparse:
raise RuntimeError('Adam does not support sparse gradients, please consider SparseAdam instead')
state = self.state[p]
# State initialization
if len(state) == 0:
state['step'] = 0
# Exponential moving average of gradient values
state['next_m'] = torch.zeros_like(p.data)
# Exponential moving average of squared gradient values
state['next_v'] = torch.zeros_like(p.data)
next_m, next_v = state['next_m'], state['next_v']
beta1, beta2 = group['b1'], group['b2']
# Add grad clipping
if group['max_grad_norm'] > 0:
clip_grad_norm_(p, group['max_grad_norm'])
# Decay the first and second moment running average coefficient
# In-place operations to update the averages at the same time
next_m.mul_(beta1).add_(1 - beta1, grad)
next_v.mul_(beta2).addcmul_(1 - beta2, grad, grad)
update = next_m / (next_v.sqrt() + group['e'])
# Just adding the square of the weights to the loss function is *not*
# the correct way of using L2 regularization/weight decay with Adam,
# since that will interact with the m and v parameters in strange ways.
#
                # Instead we want to decay the weights in a manner that doesn't interact
# with the m/v parameters. This is equivalent to adding the square
# of the weights to the loss with plain (non-momentum) SGD.
if group['weight_decay_rate'] > 0.0:
update += group['weight_decay_rate'] * p.data
if group['t_total'] != -1:
schedule_fct = SCHEDULES[group['schedule']]
lr_scheduled = group['lr'] * schedule_fct(state['step']/group['t_total'], group['warmup'])
else:
lr_scheduled = group['lr']
update_with_lr = lr_scheduled * update
p.data.add_(-update_with_lr)
state['step'] += 1
# step_size = lr_scheduled * math.sqrt(bias_correction2) / bias_correction1
# bias_correction1 = 1 - beta1 ** state['step']
# bias_correction2 = 1 - beta2 ** state['step']
return loss
# + [markdown] id="EQgxzNLJEn_s" colab_type="text"
# ## 3.tokenization
# + id="n2EKsoFFFSbl" colab_type="code" colab={}
def convert_to_unicode(text):
"""Converts `text` to Unicode (if it's not already), assuming utf-8 input."""
if six.PY3:
if isinstance(text, str):
return text
elif isinstance(text, bytes):
return text.decode("utf-8", "ignore")
else:
raise ValueError("Unsupported string type: %s" % (type(text)))
elif six.PY2:
if isinstance(text, str):
return text.decode("utf-8", "ignore")
elif isinstance(text, unicode):
return text
else:
raise ValueError("Unsupported string type: %s" % (type(text)))
else:
raise ValueError("Not running on Python2 or Python 3?")
def printable_text(text):
"""Returns text encoded in a way suitable for print or `tf.logging`."""
# These functions want `str` for both Python2 and Python3, but in one case
# it's a Unicode string and in the other it's a byte string.
if six.PY3:
if isinstance(text, str):
return text
elif isinstance(text, bytes):
return text.decode("utf-8", "ignore")
else:
raise ValueError("Unsupported string type: %s" % (type(text)))
elif six.PY2:
if isinstance(text, str):
return text
elif isinstance(text, unicode):
return text.encode("utf-8")
else:
raise ValueError("Unsupported string type: %s" % (type(text)))
else:
raise ValueError("Not running on Python2 or Python 3?")
def load_vocab(vocab_file):
"""Loads a vocabulary file into a dictionary."""
vocab = collections.OrderedDict()
index = 0
with open(vocab_file, "r") as reader:
while True:
token = convert_to_unicode(reader.readline())
if not token:
break
token = token.strip()
vocab[token] = index
index += 1
return vocab
def convert_tokens_to_ids(vocab, tokens):
"""Converts a sequence of tokens into ids using the vocab."""
ids = []
for token in tokens:
ids.append(vocab[token])
return ids
def whitespace_tokenize(text):
    """Runs basic whitespace cleaning and splitting on a piece of text."""
text = text.strip()
if not text:
return []
tokens = text.split()
return tokens
def _is_whitespace(char):
"""Checks whether `chars` is a whitespace character."""
    # \t, \n, and \r are technically control characters but we treat them
# as whitespace since they are generally considered as such.
if char == " " or char == "\t" or char == "\n" or char == "\r":
return True
cat = unicodedata.category(char)
if cat == "Zs":
return True
return False
def _is_control(char):
"""Checks whether `chars` is a control character."""
# These are technically control characters but we count them as whitespace
# characters.
if char == "\t" or char == "\n" or char == "\r":
return False
cat = unicodedata.category(char)
if cat.startswith("C"):
return True
return False
def _is_punctuation(char):
"""Checks whether `chars` is a punctuation character."""
cp = ord(char)
# We treat all non-letter/number ASCII as punctuation.
# Characters such as "^", "$", and "`" are not in the Unicode
# Punctuation class but we treat them as punctuation anyways, for
# consistency.
if ((cp >= 33 and cp <= 47) or (cp >= 58 and cp <= 64) or
(cp >= 91 and cp <= 96) or (cp >= 123 and cp <= 126)):
return True
cat = unicodedata.category(char)
if cat.startswith("P"):
return True
return False
# + [markdown] id="0Gf9ZAwAMgm_" colab_type="text"
# ### 3.1 FullTokenizer
# + id="fCGsDh_yMgwz" colab_type="code" colab={}
class FullTokenizer(object):
    """Runs end-to-end tokenization."""
def __init__(self, vocab_file, do_lower_case=True):
self.vocab = load_vocab(vocab_file)
self.basic_tokenizer = BasicTokenizer(do_lower_case=do_lower_case)
self.wordpiece_tokenizer = WordpieceTokenizer(vocab=self.vocab)
def tokenize(self, text):
split_tokens = []
for token in self.basic_tokenizer.tokenize(text):
for sub_token in self.wordpiece_tokenizer.tokenize(token):
split_tokens.append(sub_token)
return split_tokens
def convert_tokens_to_ids(self, tokens):
return convert_tokens_to_ids(self.vocab, tokens)
# + [markdown] id="Dozc7fx3MRYt" colab_type="text"
# ### 3.2 BasicTokenizer
# + id="NFRFCWdaMRgd" colab_type="code" colab={}
class BasicTokenizer(object):
"""Runs basic tokenization (punctuation splitting, lower casing, etc.)."""
def __init__(self, do_lower_case=True):
"""Constructs a BasicTokenizer.
Args:
do_lower_case: Whether to lower case the input.
"""
self.do_lower_case = do_lower_case
def tokenize(self, text):
"""Tokenizes a piece of text."""
text = convert_to_unicode(text)
text = self._clean_text(text)
orig_tokens = whitespace_tokenize(text)
split_tokens = []
for token in orig_tokens:
if self.do_lower_case:
token = token.lower()
token = self._run_strip_accents(token)
split_tokens.extend(self._run_split_on_punc(token))
output_tokens = whitespace_tokenize(" ".join(split_tokens))
return output_tokens
def _run_strip_accents(self, text):
"""Strips accents from a piece of text."""
text = unicodedata.normalize("NFD", text)
output = []
for char in text:
cat = unicodedata.category(char)
if cat == "Mn":
continue
output.append(char)
return "".join(output)
def _run_split_on_punc(self, text):
"""Splits punctuation on a piece of text."""
chars = list(text)
i = 0
start_new_word = True
output = []
while i < len(chars):
char = chars[i]
if _is_punctuation(char):
output.append([char])
start_new_word = True
else:
if start_new_word:
output.append([])
start_new_word = False
output[-1].append(char)
i += 1
return ["".join(x) for x in output]
def _clean_text(self, text):
"""Performs invalid character removal and whitespace cleanup on text."""
output = []
for char in text:
cp = ord(char)
if cp == 0 or cp == 0xfffd or _is_control(char):
continue
if _is_whitespace(char):
output.append(" ")
else:
output.append(char)
return "".join(output)
# + [markdown] id="86GI1W9tMRsx" colab_type="text"
# ### 3.3 WordpieceTokenizer
# + id="Whh85ituMR9F" colab_type="code" colab={}
class WordpieceTokenizer(object):
"""Runs WordPiece tokenization."""
def __init__(self, vocab, unk_token="[UNK]", max_input_chars_per_word=100):
self.vocab = vocab
self.unk_token = unk_token
self.max_input_chars_per_word = max_input_chars_per_word
def tokenize(self, text):
"""Tokenizes a piece of text into its word pieces.
This uses a greedy longest-match-first algorithm to perform tokenization
using the given vocabulary.
For example:
input = "unaffable"
output = ["un", "##aff", "##able"]
Args:
text: A single token or whitespace separated tokens. This should have
already been passed through `BasicTokenizer`.
Returns:
A list of wordpiece tokens.
"""
text = convert_to_unicode(text)
output_tokens = []
for token in whitespace_tokenize(text):
chars = list(token)
if len(chars) > self.max_input_chars_per_word:
output_tokens.append(self.unk_token)
continue
is_bad = False
start = 0
sub_tokens = []
while start < len(chars):
end = len(chars)
cur_substr = None
while start < end:
substr = "".join(chars[start:end])
if start > 0:
substr = "##" + substr
if substr in self.vocab:
cur_substr = substr
break
end -= 1
if cur_substr is None:
is_bad = True
break
sub_tokens.append(cur_substr)
start = end
if is_bad:
output_tokens.append(self.unk_token)
else:
output_tokens.extend(sub_tokens)
return output_tokens
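The greedy longest-match-first loop in `tokenize` can be tried with a toy vocabulary. A minimal standalone sketch, assuming a hand-built vocab set in place of the real WordPiece vocabulary and omitting the `max_input_chars_per_word` guard:

```python
def greedy_wordpiece(token, vocab, unk="[UNK]"):
    # Greedy longest-match-first, as in WordpieceTokenizer.tokenize:
    # repeatedly take the longest prefix (prefixed with "##" after the
    # first piece) that appears in the vocabulary.
    chars = list(token)
    start, pieces = 0, []
    while start < len(chars):
        end = len(chars)
        cur = None
        while start < end:
            substr = "".join(chars[start:end])
            if start > 0:
                substr = "##" + substr
            if substr in vocab:
                cur = substr
                break
            end -= 1
        if cur is None:
            return [unk]  # no piece matches: the whole token becomes [UNK]
        pieces.append(cur)
        start = end
    return pieces

vocab = {"un", "##aff", "##able"}
print(greedy_wordpiece("unaffable", vocab))  # ['un', '##aff', '##able']
```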
# + [markdown] id="r3TOVWNtGE0z" colab_type="text"
# ## 4.modeling
# + [markdown] id="lTP-mIZvuxjD" colab_type="text"
# ### 4.1 gelu
# + id="046Plon9uMRY" colab_type="code" colab={}
def gelu(x):
"""Implementation of the gelu activation function.
For information: OpenAI GPT's gelu is slightly different (and gives slightly different results):
0.5 * x * (1 + torch.tanh(math.sqrt(2 / math.pi) * (x + 0.044715 * torch.pow(x, 3))))
"""
return x * 0.5 * (1.0 + torch.erf(x / math.sqrt(2.0)))
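The erf form above and the tanh approximation quoted in the docstring can be compared numerically on plain floats (no torch needed):

```python
import math

def gelu_exact(x):
    # the erf form used above, on a plain float
    return 0.5 * x * (1.0 + math.erf(x / math.sqrt(2.0)))

def gelu_tanh(x):
    # OpenAI GPT's tanh approximation quoted in the docstring
    return 0.5 * x * (1.0 + math.tanh(math.sqrt(2.0 / math.pi)
                                      * (x + 0.044715 * x ** 3)))

# The two forms agree to well under 1e-2 over a typical activation range.
for v in [-3.0, -1.0, 0.0, 0.5, 2.0]:
    assert abs(gelu_exact(v) - gelu_tanh(v)) < 1e-2
```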
# + [markdown] id="sZ4Ipqi8uzkY" colab_type="text"
# ### 4.2 Config
# + id="UPnMEZ3MuMUa" colab_type="code" colab={}
class BertConfig(object):
"""Configuration class to store the configuration of a `BertModel`.
"""
def __init__(self,
vocab_size,
hidden_size=768,
num_hidden_layers=12,
num_attention_heads=12,
intermediate_size=3072,
hidden_act="gelu",
hidden_dropout_prob=0.1,
attention_probs_dropout_prob=0.1,
max_position_embeddings=512,
type_vocab_size=16,
initializer_range=0.02):
"""Constructs BertConfig.
Args:
vocab_size: Vocabulary size of `inputs_ids` in `BertModel`.
hidden_size: Size of the encoder layers and the pooler layer.
num_hidden_layers: Number of hidden layers in the Transformer encoder.
num_attention_heads: Number of attention heads for each attention layer in
the Transformer encoder.
intermediate_size: The size of the "intermediate" (i.e., feed-forward)
layer in the Transformer encoder.
hidden_act: The non-linear activation function (function or string) in the
encoder and pooler.
hidden_dropout_prob: The dropout probability for all fully connected
layers in the embeddings, encoder, and pooler.
attention_probs_dropout_prob: The dropout ratio for the attention
probabilities.
max_position_embeddings: The maximum sequence length that this model might
ever be used with. Typically set this to something large just in case
(e.g., 512 or 1024 or 2048).
type_vocab_size: The vocabulary size of the `token_type_ids` passed into
`BertModel`.
initializer_range: The stddev of the truncated_normal_initializer for
initializing all weight matrices.
"""
self.vocab_size = vocab_size
self.hidden_size = hidden_size
self.num_hidden_layers = num_hidden_layers
self.num_attention_heads = num_attention_heads
self.hidden_act = hidden_act
self.intermediate_size = intermediate_size
self.hidden_dropout_prob = hidden_dropout_prob
self.attention_probs_dropout_prob = attention_probs_dropout_prob
self.max_position_embeddings = max_position_embeddings
self.type_vocab_size = type_vocab_size
self.initializer_range = initializer_range
@classmethod
def from_dict(cls, json_object):
"""Constructs a `BertConfig` from a Python dictionary of parameters."""
config = BertConfig(vocab_size=None)
for (key, value) in six.iteritems(json_object):
config.__dict__[key] = value
return config
@classmethod
def from_json_file(cls, json_file):
"""Constructs a `BertConfig` from a json file of parameters."""
with open(json_file, "r") as reader:
text = reader.read()
return cls.from_dict(json.loads(text))
def to_dict(self):
"""Serializes this instance to a Python dictionary."""
output = copy.deepcopy(self.__dict__)
return output
def to_json_string(self):
"""Serializes this instance to a JSON string."""
return json.dumps(self.to_dict(), indent=2, sort_keys=True) + "\n"
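`to_json_string` and `from_json_file` round-trip through plain JSON. The same round trip can be sketched with a bare dict (the field names here are just a subset of the real config):

```python
import json

# A config serialized the way to_json_string writes it ...
config_dict = {"vocab_size": 32000, "hidden_size": 512, "num_hidden_layers": 8}
json_text = json.dumps(config_dict, indent=2, sort_keys=True) + "\n"

# ... and read back the way from_json_file / from_dict consume it,
# key by key into the config's __dict__.
restored = json.loads(json_text)
assert restored == config_dict
```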
# + [markdown] id="Lt_ynQWEu2Uu" colab_type="text"
# ### 4.3 LayerNorm
# + id="KhyATpFHuMXn" colab_type="code" colab={}
class BERTLayerNorm(nn.Module):
def __init__(self, config, variance_epsilon=1e-12):
"""Construct a layernorm module in the TF style (epsilon inside the square root).
"""
super(BERTLayerNorm, self).__init__()
self.gamma = nn.Parameter(torch.ones(config.hidden_size))
self.beta = nn.Parameter(torch.zeros(config.hidden_size))
self.variance_epsilon = variance_epsilon
def forward(self, x):
u = x.mean(-1, keepdim=True)
s = (x - u).pow(2).mean(-1, keepdim=True)
x = (x - u) / torch.sqrt(s + self.variance_epsilon)
return self.gamma * x + self.beta
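The normalization in `forward` is easy to check numerically. A pure-Python sketch of the same computation on one vector (biased variance, epsilon inside the square root, TF style):

```python
import math

def layer_norm(x, gamma=1.0, beta=0.0, eps=1e-12):
    # Same computation as BERTLayerNorm.forward on a single vector
    u = sum(x) / len(x)                          # mean over the last axis
    s = sum((v - u) ** 2 for v in x) / len(x)    # biased variance
    return [gamma * (v - u) / math.sqrt(s + eps) + beta for v in x]

out = layer_norm([1.0, 2.0, 3.0, 4.0])
assert abs(sum(out)) < 1e-6                                   # zero mean
assert abs(sum(v * v for v in out) / len(out) - 1.0) < 1e-6   # unit variance
```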
# + [markdown] id="vL4Xi_icvAt9" colab_type="text"
# ### 4.4 Embeddings
# + id="SMFJEkQCuMav" colab_type="code" colab={}
class BERTEmbeddings(nn.Module):
def __init__(self, config):
super(BERTEmbeddings, self).__init__()
"""Construct the embedding module from word, position and token_type embeddings.
"""
self.word_embeddings = nn.Embedding(config.vocab_size, config.hidden_size)
self.position_embeddings = nn.Embedding(config.max_position_embeddings, config.hidden_size)
self.token_type_embeddings = nn.Embedding(config.type_vocab_size, config.hidden_size)
# self.LayerNorm is not snake-cased to stick with TensorFlow model variable name and be able to load
# any TensorFlow checkpoint file
self.LayerNorm = BERTLayerNorm(config)
self.dropout = nn.Dropout(config.hidden_dropout_prob)
def forward(self, input_ids, token_type_ids=None):
seq_length = input_ids.size(1)
position_ids = torch.arange(seq_length, dtype=torch.long, device=input_ids.device)
position_ids = position_ids.unsqueeze(0).expand_as(input_ids)
if token_type_ids is None:
token_type_ids = torch.zeros_like(input_ids)
words_embeddings = self.word_embeddings(input_ids)
position_embeddings = self.position_embeddings(position_ids)
token_type_embeddings = self.token_type_embeddings(token_type_ids)
embeddings = words_embeddings + position_embeddings + token_type_embeddings
embeddings = self.LayerNorm(embeddings)
embeddings = self.dropout(embeddings)
return embeddings
# + [markdown] id="BWPXfUR6vEls" colab_type="text"
# ### 4.5 SelfAttention
# + id="b1-oWyIwuMiV" colab_type="code" colab={}
class BERTSelfAttention(nn.Module):
def __init__(self, config):
super(BERTSelfAttention, self).__init__()
if config.hidden_size % config.num_attention_heads != 0:
raise ValueError(
"The hidden size (%d) is not a multiple of the number of attention "
"heads (%d)" % (config.hidden_size, config.num_attention_heads))
self.num_attention_heads = config.num_attention_heads
self.attention_head_size = int(config.hidden_size / config.num_attention_heads)
self.all_head_size = self.num_attention_heads * self.attention_head_size
self.query = nn.Linear(config.hidden_size, self.all_head_size)
self.key = nn.Linear(config.hidden_size, self.all_head_size)
self.value = nn.Linear(config.hidden_size, self.all_head_size)
self.dropout = nn.Dropout(config.attention_probs_dropout_prob)
def transpose_for_scores(self, x):
new_x_shape = x.size()[:-1] + (self.num_attention_heads, self.attention_head_size)
x = x.view(*new_x_shape)
return x.permute(0, 2, 1, 3)
def forward(self, hidden_states, attention_mask):
mixed_query_layer = self.query(hidden_states)
mixed_key_layer = self.key(hidden_states)
mixed_value_layer = self.value(hidden_states)
query_layer = self.transpose_for_scores(mixed_query_layer)
key_layer = self.transpose_for_scores(mixed_key_layer)
value_layer = self.transpose_for_scores(mixed_value_layer)
# Take the dot product between "query" and "key" to get the raw attention scores.
attention_scores = torch.matmul(query_layer, key_layer.transpose(-1, -2))
attention_scores = attention_scores / math.sqrt(self.attention_head_size)
# Apply the attention mask (precomputed for all layers in the BertModel forward() function)
attention_scores = attention_scores + attention_mask
# Normalize the attention scores to probabilities.
attention_probs = nn.Softmax(dim=-1)(attention_scores)
# This is actually dropping out entire tokens to attend to, which might
# seem a bit unusual, but is taken from the original Transformer paper.
attention_probs = self.dropout(attention_probs)
context_layer = torch.matmul(attention_probs, value_layer)
context_layer = context_layer.permute(0, 2, 1, 3).contiguous()
new_context_layer_shape = context_layer.size()[:-2] + (self.all_head_size,)
context_layer = context_layer.view(*new_context_layer_shape)
return context_layer
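The core of `forward` is scaled dot-product attention, softmax(QK^T / sqrt(d)) V. A single-head, no-mask sketch on nested lists (the function name is just for illustration):

```python
import math

def attention(Q, K, V):
    # softmax(Q K^T / sqrt(d)) V for one head, no attention mask
    d = len(Q[0])
    scores = [[sum(q * k for q, k in zip(qr, kr)) / math.sqrt(d) for kr in K]
              for qr in Q]
    probs = []
    for row in scores:
        m = max(row)                         # subtract max for stability
        exps = [math.exp(s - m) for s in row]
        z = sum(exps)
        probs.append([e / z for e in exps])
    context = [[sum(p * V[j][c] for j, p in enumerate(prow))
                for c in range(len(V[0]))] for prow in probs]
    return context, probs

context, probs = attention(
    [[1.0, 0.0]],                    # one query
    [[1.0, 0.0], [0.0, 1.0]],        # two keys
    [[10.0, 0.0], [0.0, 10.0]])      # two values
assert abs(sum(probs[0]) - 1.0) < 1e-9   # weights form a distribution
assert context[0][0] > context[0][1]     # the query attends more to key 0
```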
# + [markdown] id="qNKf8CzXvG9K" colab_type="text"
# ### 4.6 SelfOutput
# + id="0t3tHYDhuMlh" colab_type="code" colab={}
class BERTSelfOutput(nn.Module):
def __init__(self, config):
super(BERTSelfOutput, self).__init__()
self.dense = nn.Linear(config.hidden_size, config.hidden_size)
self.LayerNorm = BERTLayerNorm(config)
self.dropout = nn.Dropout(config.hidden_dropout_prob)
def forward(self, hidden_states, input_tensor):
hidden_states = self.dense(hidden_states)
hidden_states = self.dropout(hidden_states)
hidden_states = self.LayerNorm(hidden_states + input_tensor)
return hidden_states
# + [markdown] id="U1P38JvjvJK6" colab_type="text"
# ### 4.7 Attention
# + id="1l1e5x__uMr2" colab_type="code" colab={}
class BERTAttention(nn.Module):
def __init__(self, config):
super(BERTAttention, self).__init__()
self.self = BERTSelfAttention(config)
self.output = BERTSelfOutput(config)
def forward(self, input_tensor, attention_mask):
self_output = self.self(input_tensor, attention_mask)
attention_output = self.output(self_output, input_tensor)
return attention_output
# + [markdown] id="GNfszLWJvUuj" colab_type="text"
# ### 4.8 Intermediate
# + id="Gdte3tzzuMvi" colab_type="code" colab={}
class BERTIntermediate(nn.Module):
def __init__(self, config):
super(BERTIntermediate, self).__init__()
self.dense = nn.Linear(config.hidden_size, config.intermediate_size)
self.intermediate_act_fn = gelu
def forward(self, hidden_states):
hidden_states = self.dense(hidden_states)
hidden_states = self.intermediate_act_fn(hidden_states)
return hidden_states
# + [markdown] id="Wj1c1SNyvVOK" colab_type="text"
# ### 4.9 Output
# + id="4jFgoHnTuMpH" colab_type="code" colab={}
class BERTOutput(nn.Module):
def __init__(self, config):
super(BERTOutput, self).__init__()
self.dense = nn.Linear(config.intermediate_size, config.hidden_size)
self.LayerNorm = BERTLayerNorm(config)
self.dropout = nn.Dropout(config.hidden_dropout_prob)
def forward(self, hidden_states, input_tensor):
hidden_states = self.dense(hidden_states)
hidden_states = self.dropout(hidden_states)
hidden_states = self.LayerNorm(hidden_states + input_tensor)
return hidden_states
# + [markdown] id="6mvcKZINvVjp" colab_type="text"
# ### 4.10 Layer
# + id="QH1orn43uMf1" colab_type="code" colab={}
class BERTLayer(nn.Module):
def __init__(self, config):
super(BERTLayer, self).__init__()
self.attention = BERTAttention(config)
self.intermediate = BERTIntermediate(config)
self.output = BERTOutput(config)
def forward(self, hidden_states, attention_mask):
attention_output = self.attention(hidden_states, attention_mask)
intermediate_output = self.intermediate(attention_output)
layer_output = self.output(intermediate_output, attention_output)
return layer_output
# + [markdown] id="V157G-OwvWIt" colab_type="text"
# ### 4.11 Encoder
# + id="JCKkSAlcujbq" colab_type="code" colab={}
class BERTEncoder(nn.Module):
def __init__(self, config):
super(BERTEncoder, self).__init__()
layer = BERTLayer(config)
self.layer = nn.ModuleList([copy.deepcopy(layer) for _ in range(config.num_hidden_layers)])
def forward(self, hidden_states, attention_mask):
all_encoder_layers = []
for layer_module in self.layer:
hidden_states = layer_module(hidden_states, attention_mask)
all_encoder_layers.append(hidden_states)
return all_encoder_layers
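The `copy.deepcopy` here matters: each of the `num_hidden_layers` layers gets its own parameters instead of one shared set. A sketch with plain dicts standing in for modules:

```python
import copy

layer = {"weight": [0.1, 0.2]}               # stand-in for one BERTLayer
layers = [copy.deepcopy(layer) for _ in range(3)]
layers[0]["weight"][0] = 99.0                # update layer 0 only
assert layers[1]["weight"][0] == 0.1         # other layers are untouched

shared = [layer for _ in range(3)]           # without deepcopy: shared state
shared[0]["weight"][0] = 99.0
assert shared[2]["weight"][0] == 99.0        # every "layer" changed at once
```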
# + [markdown] id="jbgCD-IMvWoF" colab_type="text"
# ### 4.12 Pooler
# + id="F6jABaSMuMdv" colab_type="code" colab={}
class BERTPooler(nn.Module):
def __init__(self, config):
super(BERTPooler, self).__init__()
self.dense = nn.Linear(config.hidden_size, config.hidden_size)
self.activation = nn.Tanh()
def forward(self, hidden_states):
# We "pool" the model by simply taking the hidden state corresponding
# to the first token.
first_token_tensor = hidden_states[:, 0]
pooled_output = self.dense(first_token_tensor)
pooled_output = self.activation(pooled_output)
return pooled_output
# + [markdown] id="Y3TwHE0YvXCp" colab_type="text"
# ### 4.13 Bert Model
# + id="QPY-GIOaFS4U" colab_type="code" colab={}
class BertModel(nn.Module):
"""BERT model ("Bidirectional Embedding Representations from a Transformer").
Example usage:
```python
# Already been converted into WordPiece token ids
input_ids = torch.LongTensor([[31, 51, 99], [15, 5, 0]])
input_mask = torch.LongTensor([[1, 1, 1], [1, 1, 0]])
token_type_ids = torch.LongTensor([[0, 0, 1], [0, 2, 0]])
config = modeling.BertConfig(vocab_size=32000, hidden_size=512,
num_hidden_layers=8, num_attention_heads=6, intermediate_size=1024)
model = modeling.BertModel(config=config)
all_encoder_layers, pooled_output = model(input_ids, token_type_ids, input_mask)
```
"""
def __init__(self, config: BertConfig):
"""Constructor for BertModel.
Args:
config: `BertConfig` instance.
"""
super(BertModel, self).__init__()
self.embeddings = BERTEmbeddings(config)
self.encoder = BERTEncoder(config)
self.pooler = BERTPooler(config)
def forward(self, input_ids, token_type_ids=None, attention_mask=None):
if attention_mask is None:
attention_mask = torch.ones_like(input_ids)
if token_type_ids is None:
token_type_ids = torch.zeros_like(input_ids)
# We create a 3D attention mask from a 2D tensor mask.
# Sizes are [batch_size, 1, 1, from_seq_length]
# So we can broadcast to [batch_size, num_heads, to_seq_length, from_seq_length]
# This attention mask is simpler than the triangular masking of causal attention
# used in OpenAI GPT; we just need to prepare the broadcast dimension here.
extended_attention_mask = attention_mask.unsqueeze(1).unsqueeze(2)
# Since attention_mask is 1.0 for positions we want to attend and 0.0 for
# masked positions, this operation will create a tensor which is 0.0 for
# positions we want to attend and -10000.0 for masked positions.
# Since we are adding it to the raw scores before the softmax, this is
# effectively the same as removing these entirely.
extended_attention_mask = extended_attention_mask.float()
extended_attention_mask = (1.0 - extended_attention_mask) * -10000.0
embedding_output = self.embeddings(input_ids, token_type_ids)
all_encoder_layers = self.encoder(embedding_output, extended_attention_mask)
sequence_output = all_encoder_layers[-1]
pooled_output = self.pooler(sequence_output)
return all_encoder_layers, pooled_output
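The mask arithmetic in `forward` can be checked on plain floats: 1.0 marks real tokens, 0.0 padding, and adding `(1 - mask) * -10000` to the raw scores drives masked positions to effectively zero probability after softmax:

```python
import math

attention_mask = [1.0, 1.0, 0.0]   # 1.0 = real token, 0.0 = padding
extended = [(1.0 - m) * -10000.0 for m in attention_mask]
assert extended[0] == 0.0 and extended[2] == -10000.0

# Added to raw scores before softmax, the padded position vanishes.
scores = [2.0, 1.0, 3.0]
masked = [s + e for s, e in zip(scores, extended)]
mx = max(masked)
exps = [math.exp(s - mx) for s in masked]
probs = [e / sum(exps) for e in exps]
assert probs[2] < 1e-9             # padding gets ~zero attention
```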
# + [markdown] id="NiEAp81zt5NL" colab_type="text"
# ### 4.14 BertForQuestionAnswering
# + id="CqNXRN7Dt309" colab_type="code" colab={}
class BertForQuestionAnswering(nn.Module):
"""BERT model for Question Answering (span extraction).
This module is composed of the BERT model with a linear layer on top of
the sequence output that computes start_logits and end_logits
Example usage:
```python
# Already been converted into WordPiece token ids
input_ids = torch.LongTensor([[31, 51, 99], [15, 5, 0]])
input_mask = torch.LongTensor([[1, 1, 1], [1, 1, 0]])
token_type_ids = torch.LongTensor([[0, 0, 1], [0, 2, 0]])
config = BertConfig(vocab_size=32000, hidden_size=512,
num_hidden_layers=8, num_attention_heads=6, intermediate_size=1024)
model = BertForQuestionAnswering(config)
start_logits, end_logits = model(input_ids, token_type_ids, input_mask)
```
"""
def __init__(self, config):
super(BertForQuestionAnswering, self).__init__()
self.bert = BertModel(config)
# TODO check with Google if it's normal there is no dropout on the token classifier of SQuAD in the TF version
# self.dropout = nn.Dropout(config.hidden_dropout_prob)
self.qa_outputs = nn.Linear(config.hidden_size, 2)
def init_weights(module):
if isinstance(module, (nn.Linear, nn.Embedding)):
# Slightly different from the TF version which uses truncated_normal for initialization
# cf https://github.com/pytorch/pytorch/pull/5617
module.weight.data.normal_(mean=0.0, std=config.initializer_range)
elif isinstance(module, BERTLayerNorm):
module.beta.data.normal_(mean=0.0, std=config.initializer_range)
module.gamma.data.normal_(mean=0.0, std=config.initializer_range)
if isinstance(module, nn.Linear):
module.bias.data.zero_()
self.apply(init_weights)
def forward(self, input_ids, token_type_ids, attention_mask, start_positions=None, end_positions=None):
all_encoder_layers, _ = self.bert(input_ids, token_type_ids, attention_mask)
sequence_output = all_encoder_layers[-1]
logits = self.qa_outputs(sequence_output)
start_logits, end_logits = logits.split(1, dim=-1)
start_logits = start_logits.squeeze(-1)
end_logits = end_logits.squeeze(-1)
if start_positions is not None and end_positions is not None:
# If we are on multi-GPU, the split adds a dimension; if not, this is a no-op
start_positions = start_positions.squeeze(-1)
end_positions = end_positions.squeeze(-1)
# sometimes the start/end positions are outside our model inputs, we ignore these terms
ignored_index = start_logits.size(1)
start_positions.clamp_(0, ignored_index)
end_positions.clamp_(0, ignored_index)
loss_fct = CrossEntropyLoss(ignore_index=ignored_index)
start_loss = loss_fct(start_logits, start_positions)
end_loss = loss_fct(end_logits, end_positions)
total_loss = (start_loss + end_loss) / 2
return total_loss, (start_logits, end_logits)
else:
return start_logits, end_logits
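The clamping step in `forward` maps out-of-chunk answer positions onto `ignored_index` (one past the last valid logit position), which `CrossEntropyLoss(ignore_index=...)` then skips. A sketch of just that step, with an assumed sequence length of 384:

```python
seq_length = 384                  # assumed max_seq_length
ignored_index = seq_length        # = start_logits.size(1) in forward()

def clamp(pos, lo=0, hi=ignored_index):
    # mirrors start_positions.clamp_(0, ignored_index)
    return max(lo, min(pos, hi))

assert clamp(120) == 120              # in-range positions pass through
assert clamp(512) == ignored_index    # truncated-away answers get ignored
assert clamp(-1) == 0
```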
# + [markdown] id="i2h0fVJ6GKfI" colab_type="text"
# ## 5.extract_features
# + id="Z_wjmRijGIxa" colab_type="code" colab={}
class InputExample(object):
def __init__(self, unique_id, text_a, text_b):
self.unique_id = unique_id
self.text_a = text_a
self.text_b = text_b
class InputFeatures(object):
"""A single set of features of data."""
def __init__(self, unique_id, tokens, input_ids, input_mask, input_type_ids):
self.unique_id = unique_id
self.tokens = tokens
self.input_ids = input_ids
self.input_mask = input_mask
self.input_type_ids = input_type_ids
def _truncate_seq_pair(tokens_a, tokens_b, max_length):
"""Truncates a sequence pair in place to the maximum length."""
# This is a simple heuristic which will always truncate the longer sequence
# one token at a time. This makes more sense than truncating an equal percent
# of tokens from each, since if one sequence is very short then each token
# that's truncated likely contains more information than a longer sequence.
while True:
total_length = len(tokens_a) + len(tokens_b)
if total_length <= max_length:
break
if len(tokens_a) > len(tokens_b):
tokens_a.pop()
else:
tokens_b.pop()
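The truncation heuristic can be exercised directly (a standalone copy of the same loop):

```python
def truncate_seq_pair(tokens_a, tokens_b, max_length):
    # Same heuristic as above: always pop one token from the longer side.
    while len(tokens_a) + len(tokens_b) > max_length:
        if len(tokens_a) > len(tokens_b):
            tokens_a.pop()
        else:
            tokens_b.pop()

a, b = list("abcdefgh"), list("xy")
truncate_seq_pair(a, b, 6)
assert (len(a), len(b)) == (4, 2)   # only the longer sequence was trimmed
```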
def read_examples(input_file):
"""Read a list of `InputExample`s from an input file."""
examples = []
unique_id = 0
with open(input_file, "r") as reader:
while True:
line = tokenization.convert_to_unicode(reader.readline())
if not line:
break
line = line.strip()
text_a = None
text_b = None
m = re.match(r"^(.*) \|\|\| (.*)$", line)
if m is None:
text_a = line
else:
text_a = m.group(1)
text_b = m.group(2)
examples.append(
InputExample(unique_id=unique_id, text_a=text_a, text_b=text_b))
unique_id += 1
return examples
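`read_examples` splits each line on a literal ` ||| ` delimiter to get an optional sentence pair; a line without the delimiter is a single-sentence example with `text_b = None`. The regex behaves like this (`parse_line` is just an illustrative name):

```python
import re

def parse_line(line):
    # One example per line: "text_a ||| text_b", or just "text_a".
    m = re.match(r"^(.*) \|\|\| (.*)$", line)
    if m is None:
        return line, None
    return m.group(1), m.group(2)

assert parse_line("Who was Jim Henson ? ||| Jim Henson was a puppeteer") == \
    ("Who was Jim Henson ?", "Jim Henson was a puppeteer")
assert parse_line("A single sentence .") == ("A single sentence .", None)
```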
# + [markdown] id="n-TPYBk3tcCK" colab_type="text"
# ### 5.1 convert_examples_to_features
#
# + id="0SsBwBYQtbxJ" colab_type="code" colab={}
def convert_examples_to_features(examples, tokenizer, max_seq_length,
doc_stride, max_query_length, is_training):
"""Loads a data file into a list of `InputBatch`s."""
unique_id = 1000000000
features = []
for (example_index, example) in enumerate(examples):
query_tokens = tokenizer.tokenize(example.question_text)
if len(query_tokens) > max_query_length:
query_tokens = query_tokens[0:max_query_length]
tok_to_orig_index = []
orig_to_tok_index = []
all_doc_tokens = []
for (i, token) in enumerate(example.doc_tokens):
orig_to_tok_index.append(len(all_doc_tokens))
sub_tokens = tokenizer.tokenize(token)
for sub_token in sub_tokens:
tok_to_orig_index.append(i)
all_doc_tokens.append(sub_token)
tok_start_position = None
tok_end_position = None
if is_training:
tok_start_position = orig_to_tok_index[example.start_position]
if example.end_position < len(example.doc_tokens) - 1:
tok_end_position = orig_to_tok_index[example.end_position + 1] - 1
else:
tok_end_position = len(all_doc_tokens) - 1
(tok_start_position, tok_end_position) = _improve_answer_span(
all_doc_tokens, tok_start_position, tok_end_position, tokenizer,
example.orig_answer_text)
# The -3 accounts for [CLS], [SEP] and [SEP]
max_tokens_for_doc = max_seq_length - len(query_tokens) - 3
# We can have documents that are longer than the maximum sequence length.
# To deal with this we do a sliding window approach, where we take chunks
# of up to our max length with a stride of `doc_stride`.
_DocSpan = collections.namedtuple( # pylint: disable=invalid-name
"DocSpan", ["start", "length"])
doc_spans = []
start_offset = 0
while start_offset < len(all_doc_tokens):
length = len(all_doc_tokens) - start_offset
if length > max_tokens_for_doc:
length = max_tokens_for_doc
doc_spans.append(_DocSpan(start=start_offset, length=length))
if start_offset + length == len(all_doc_tokens):
break
start_offset += min(length, doc_stride)
for (doc_span_index, doc_span) in enumerate(doc_spans):
tokens = []
token_to_orig_map = {}
token_is_max_context = {}
segment_ids = []
tokens.append("[CLS]")
segment_ids.append(0)
for token in query_tokens:
tokens.append(token)
segment_ids.append(0)
tokens.append("[SEP]")
segment_ids.append(0)
for i in range(doc_span.length):
split_token_index = doc_span.start + i
token_to_orig_map[len(tokens)] = tok_to_orig_index[split_token_index]
is_max_context = _check_is_max_context(doc_spans, doc_span_index,
split_token_index)
token_is_max_context[len(tokens)] = is_max_context
tokens.append(all_doc_tokens[split_token_index])
segment_ids.append(1)
tokens.append("[SEP]")
segment_ids.append(1)
input_ids = tokenizer.convert_tokens_to_ids(tokens)
# The mask has 1 for real tokens and 0 for padding tokens. Only real
# tokens are attended to.
input_mask = [1] * len(input_ids)
# Zero-pad up to the sequence length.
while len(input_ids) < max_seq_length:
input_ids.append(0)
input_mask.append(0)
segment_ids.append(0)
assert len(input_ids) == max_seq_length
assert len(input_mask) == max_seq_length
assert len(segment_ids) == max_seq_length
start_position = None
end_position = None
if is_training:
# For training, if our document chunk does not contain an annotation
# we throw it out, since there is nothing to predict.
doc_start = doc_span.start
doc_end = doc_span.start + doc_span.length - 1
if (example.start_position < doc_start or
example.end_position < doc_start or
example.start_position > doc_end or example.end_position > doc_end):
continue
doc_offset = len(query_tokens) + 2
start_position = tok_start_position - doc_start + doc_offset
end_position = tok_end_position - doc_start + doc_offset
if example_index < 20:
logger.info("*** Example ***")
logger.info("unique_id: %s" % (unique_id))
logger.info("example_index: %s" % (example_index))
logger.info("doc_span_index: %s" % (doc_span_index))
logger.info("tokens: %s" % " ".join(tokens))
logger.info("token_to_orig_map: %s" % " ".join([
"%d:%d" % (x, y) for (x, y) in token_to_orig_map.items()]))
logger.info("token_is_max_context: %s" % " ".join([
"%d:%s" % (x, y) for (x, y) in token_is_max_context.items()
]))
logger.info("input_ids: %s" % " ".join([str(x) for x in input_ids]))
logger.info(
"input_mask: %s" % " ".join([str(x) for x in input_mask]))
logger.info(
"segment_ids: %s" % " ".join([str(x) for x in segment_ids]))
if is_training:
answer_text = " ".join(tokens[start_position:(end_position + 1)])
logger.info("start_position: %d" % (start_position))
logger.info("end_position: %d" % (end_position))
logger.info(
"answer: %s" % (answer_text))
features.append(
InputFeatures(
unique_id=unique_id,
example_index=example_index,
doc_span_index=doc_span_index,
tokens=tokens,
token_to_orig_map=token_to_orig_map,
token_is_max_context=token_is_max_context,
input_ids=input_ids,
input_mask=input_mask,
segment_ids=segment_ids,
start_position=start_position,
end_position=end_position))
unique_id += 1
return features
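The sliding-window loop above can be sketched standalone; with a hypothetical 10-token document, a 4-token budget, and a stride of 3 it produces three overlapping spans, the last one reaching the end of the document:

```python
import collections

DocSpan = collections.namedtuple("DocSpan", ["start", "length"])

def make_doc_spans(n_doc_tokens, max_tokens_for_doc, doc_stride):
    # Same sliding-window loop as above: fixed-size chunks stepped by
    # doc_stride, with the final chunk shortened to cover the tail.
    spans, start = [], 0
    while start < n_doc_tokens:
        length = min(n_doc_tokens - start, max_tokens_for_doc)
        spans.append(DocSpan(start=start, length=length))
        if start + length == n_doc_tokens:
            break
        start += min(length, doc_stride)
    return spans

spans = make_doc_spans(n_doc_tokens=10, max_tokens_for_doc=4, doc_stride=3)
assert spans == [DocSpan(0, 4), DocSpan(3, 4), DocSpan(6, 4)]
```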
# + [markdown] id="oaflPCLNGKpY" colab_type="text"
# ## 6.run_squad
# + id="i_qmoK19GtUY" colab_type="code" colab={}
class SquadExample(object):
"""A single training/test example for simple sequence classification."""
def __init__(self,
qas_id,
question_text,
doc_tokens,
orig_answer_text=None,
start_position=None,
end_position=None):
self.qas_id = qas_id
self.question_text = question_text
self.doc_tokens = doc_tokens
self.orig_answer_text = orig_answer_text
self.start_position = start_position
self.end_position = end_position
def __str__(self):
return self.__repr__()
def __repr__(self):
s = ""
s += "qas_id: %s" % (printable_text(self.qas_id))
s += ", question_text: %s" % (
printable_text(self.question_text))
s += ", doc_tokens: [%s]" % (" ".join(self.doc_tokens))
if self.start_position:
s += ", start_position: %d" % (self.start_position)
if self.end_position:
s += ", end_position: %d" % (self.end_position)
return s
class InputFeatures(object):
"""A single set of features of data."""
def __init__(self,
unique_id,
example_index,
doc_span_index,
tokens,
token_to_orig_map,
token_is_max_context,
input_ids,
input_mask,
segment_ids,
start_position=None,
end_position=None):
self.unique_id = unique_id
self.example_index = example_index
self.doc_span_index = doc_span_index
self.tokens = tokens
self.token_to_orig_map = token_to_orig_map
self.token_is_max_context = token_is_max_context
self.input_ids = input_ids
self.input_mask = input_mask
self.segment_ids = segment_ids
self.start_position = start_position
self.end_position = end_position
def read_squad_examples(input_file, is_training):
"""Read a SQuAD json file into a list of SquadExample."""
with open(input_file, "r") as reader:
input_data = json.load(reader)["data"]
def is_whitespace(c):
if c == " " or c == "\t" or c == "\r" or c == "\n" or ord(c) == 0x202F:
return True
return False
examples = []
for entry in input_data:
for paragraph in entry["paragraphs"]:
paragraph_text = paragraph["context"]
doc_tokens = []
char_to_word_offset = []
prev_is_whitespace = True
for c in paragraph_text:
if is_whitespace(c):
prev_is_whitespace = True
else:
if prev_is_whitespace:
doc_tokens.append(c)
else:
doc_tokens[-1] += c
prev_is_whitespace = False
char_to_word_offset.append(len(doc_tokens) - 1)
for qa in paragraph["qas"]:
qas_id = qa["id"]
question_text = qa["question"]
start_position = None
end_position = None
orig_answer_text = None
if is_training:
if len(qa["answers"]) != 1:
raise ValueError(
"For training, each question should have exactly 1 answer.")
answer = qa["answers"][0]
orig_answer_text = answer["text"]
answer_offset = answer["answer_start"]
answer_length = len(orig_answer_text)
start_position = char_to_word_offset[answer_offset]
end_position = char_to_word_offset[answer_offset + answer_length - 1]
# Only add answers where the text can be exactly recovered from the
# document. If it can't be, it's likely due to weird Unicode
# stuff, so we will just skip the example.
#
# Note that this means for training mode, every example is NOT
# guaranteed to be preserved.
actual_text = " ".join(doc_tokens[start_position:(end_position + 1)])
cleaned_answer_text = " ".join(
whitespace_tokenize(orig_answer_text))
if actual_text.find(cleaned_answer_text) == -1:
logger.warning("Could not find answer: '%s' vs. '%s'",
actual_text, cleaned_answer_text)
continue
example = SquadExample(
qas_id=qas_id,
question_text=question_text,
doc_tokens=doc_tokens,
orig_answer_text=orig_answer_text,
start_position=start_position,
end_position=end_position)
examples.append(example)
return examples
def convert_examples_to_features(examples, tokenizer, max_seq_length,
doc_stride, max_query_length, is_training):
"""Loads a data file into a list of `InputBatch`s."""
unique_id = 1000000000
features = []
for (example_index, example) in enumerate(examples):
query_tokens = tokenizer.tokenize(example.question_text)
if len(query_tokens) > max_query_length:
query_tokens = query_tokens[0:max_query_length]
tok_to_orig_index = []
orig_to_tok_index = []
all_doc_tokens = []
for (i, token) in enumerate(example.doc_tokens):
orig_to_tok_index.append(len(all_doc_tokens))
sub_tokens = tokenizer.tokenize(token)
for sub_token in sub_tokens:
tok_to_orig_index.append(i)
all_doc_tokens.append(sub_token)
tok_start_position = None
tok_end_position = None
if is_training:
tok_start_position = orig_to_tok_index[example.start_position]
if example.end_position < len(example.doc_tokens) - 1:
tok_end_position = orig_to_tok_index[example.end_position + 1] - 1
else:
tok_end_position = len(all_doc_tokens) - 1
(tok_start_position, tok_end_position) = _improve_answer_span(
all_doc_tokens, tok_start_position, tok_end_position, tokenizer,
example.orig_answer_text)
# The -3 accounts for [CLS], [SEP] and [SEP]
max_tokens_for_doc = max_seq_length - len(query_tokens) - 3
# We can have documents that are longer than the maximum sequence length.
# To deal with this we do a sliding window approach, where we take chunks
# of up to our max length with a stride of `doc_stride`.
_DocSpan = collections.namedtuple( # pylint: disable=invalid-name
"DocSpan", ["start", "length"])
doc_spans = []
start_offset = 0
while start_offset < len(all_doc_tokens):
length = len(all_doc_tokens) - start_offset
if length > max_tokens_for_doc:
length = max_tokens_for_doc
doc_spans.append(_DocSpan(start=start_offset, length=length))
if start_offset + length == len(all_doc_tokens):
break
start_offset += min(length, doc_stride)
for (doc_span_index, doc_span) in enumerate(doc_spans):
tokens = []
token_to_orig_map = {}
token_is_max_context = {}
segment_ids = []
tokens.append("[CLS]")
segment_ids.append(0)
for token in query_tokens:
tokens.append(token)
segment_ids.append(0)
tokens.append("[SEP]")
segment_ids.append(0)
for i in range(doc_span.length):
split_token_index = doc_span.start + i
token_to_orig_map[len(tokens)] = tok_to_orig_index[split_token_index]
is_max_context = _check_is_max_context(doc_spans, doc_span_index,
split_token_index)
token_is_max_context[len(tokens)] = is_max_context
tokens.append(all_doc_tokens[split_token_index])
segment_ids.append(1)
tokens.append("[SEP]")
segment_ids.append(1)
input_ids = tokenizer.convert_tokens_to_ids(tokens)
# The mask has 1 for real tokens and 0 for padding tokens. Only real
# tokens are attended to.
input_mask = [1] * len(input_ids)
# Zero-pad up to the sequence length.
while len(input_ids) < max_seq_length:
input_ids.append(0)
input_mask.append(0)
segment_ids.append(0)
assert len(input_ids) == max_seq_length
assert len(input_mask) == max_seq_length
assert len(segment_ids) == max_seq_length
start_position = None
end_position = None
if is_training:
# For training, if our document chunk does not contain an annotation
# we throw it out, since there is nothing to predict.
doc_start = doc_span.start
doc_end = doc_span.start + doc_span.length - 1
if (example.start_position < doc_start or
example.end_position < doc_start or
example.start_position > doc_end or example.end_position > doc_end):
continue
doc_offset = len(query_tokens) + 2
start_position = tok_start_position - doc_start + doc_offset
end_position = tok_end_position - doc_start + doc_offset
if example_index < 20:
logger.info("*** Example ***")
logger.info("unique_id: %s" % (unique_id))
logger.info("example_index: %s" % (example_index))
logger.info("doc_span_index: %s" % (doc_span_index))
logger.info("tokens: %s" % " ".join(
[printable_text(x) for x in tokens]))
logger.info("token_to_orig_map: %s" % " ".join(
["%d:%d" % (x, y) for (x, y) in six.iteritems(token_to_orig_map)]))
logger.info("token_is_max_context: %s" % " ".join([
"%d:%s" % (x, y) for (x, y) in six.iteritems(token_is_max_context)
]))
logger.info("input_ids: %s" % " ".join([str(x) for x in input_ids]))
logger.info(
"input_mask: %s" % " ".join([str(x) for x in input_mask]))
logger.info(
"segment_ids: %s" % " ".join([str(x) for x in segment_ids]))
if is_training:
answer_text = " ".join(tokens[start_position:(end_position + 1)])
logger.info("start_position: %d" % (start_position))
logger.info("end_position: %d" % (end_position))
logger.info(
"answer: %s" % (printable_text(answer_text)))
features.append(
InputFeatures(
unique_id=unique_id,
example_index=example_index,
doc_span_index=doc_span_index,
tokens=tokens,
token_to_orig_map=token_to_orig_map,
token_is_max_context=token_is_max_context,
input_ids=input_ids,
input_mask=input_mask,
segment_ids=segment_ids,
start_position=start_position,
end_position=end_position))
unique_id += 1
return features
def _improve_answer_span(doc_tokens, input_start, input_end, tokenizer,
orig_answer_text):
"""Returns tokenized answer spans that better match the annotated answer."""
# The SQuAD annotations are character based. We first project them to
# whitespace-tokenized words. But then after WordPiece tokenization, we can
# often find a "better match". For example:
#
# Question: What year was <NAME> born?
# Context: The leader was <NAME> (1895-1943).
# Answer: 1895
#
# The original whitespace-tokenized answer will be "(1895-1943).". However
# after tokenization, our tokens will be "( 1895 - 1943 ) .". So we can match
# the exact answer, 1895.
#
# However, this is not always possible. Consider the following:
#
# Question: What country is the top exporter of electronics?
# Context: The Japanese electronics industry is the largest in the world.
# Answer: Japan
#
# In this case, the annotator chose "Japan" as a character sub-span of
# the word "Japanese". Since our WordPiece tokenizer does not split
# "Japanese", we just use "Japanese" as the annotation. This is fairly rare
# in SQuAD, but does happen.
tok_answer_text = " ".join(tokenizer.tokenize(orig_answer_text))
for new_start in range(input_start, input_end + 1):
for new_end in range(input_end, new_start - 1, -1):
text_span = " ".join(doc_tokens[new_start:(new_end + 1)])
if text_span == tok_answer_text:
return (new_start, new_end)
return (input_start, input_end)
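The 1895 example in the comment block can be replayed without a real WordPiece tokenizer. The sketch below is a hypothetical restatement of the same nested search over text that has already been tokenized; the token list and span indices are made up for illustration.

```python
# Hypothetical standalone restatement of the span search above, operating
# on tokens that have already been WordPiece-tokenized.
def improve_span(doc_tokens, input_start, input_end, tok_answer_text):
    for new_start in range(input_start, input_end + 1):
        for new_end in range(input_end, new_start - 1, -1):
            if " ".join(doc_tokens[new_start:new_end + 1]) == tok_answer_text:
                return (new_start, new_end)
    return (input_start, input_end)

# "(1895-1943)." becomes "( 1895 - 1943 ) ." after tokenization, so the
# whitespace-level answer span (4, 9) tightens to the exact year at (5, 5).
doc = ["The", "leader", "was", "born", "(", "1895", "-", "1943", ")", "."]
improve_span(doc, 4, 9, "1895")       # → (5, 5)
improve_span(doc, 4, 9, "unmatched")  # → (4, 9): falls back to the input span
```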
def _check_is_max_context(doc_spans, cur_span_index, position):
"""Check if this is the 'max context' doc span for the token."""
# Because of the sliding window approach taken to scoring documents, a single
# token can appear in multiple documents. E.g.
# Doc: the man went to the store and bought a gallon of milk
# Span A: the man went to the
# Span B: to the store and bought
# Span C: and bought a gallon of
# ...
#
# Now the word 'bought' will have two scores from spans B and C. We only
# want to consider the score with "maximum context", which we define as
# the *minimum* of its left and right context (the *sum* of left and
# right context will always be the same, of course).
#
# In the example the maximum context for 'bought' would be span C since
# it has 1 left context and 3 right context, while span B has 4 left context
# and 0 right context.
best_score = None
best_span_index = None
for (span_index, doc_span) in enumerate(doc_spans):
end = doc_span.start + doc_span.length - 1
if position < doc_span.start:
continue
if position > end:
continue
num_left_context = position - doc_span.start
num_right_context = end - position
score = min(num_left_context, num_right_context) + 0.01 * doc_span.length
if best_score is None or score > best_score:
best_score = score
best_span_index = span_index
return cur_span_index == best_span_index
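The scoring rule described in the comment can be checked numerically on its own example. Below is a hypothetical replay of just the score computation (min of left/right context plus a small length bonus) for the word 'bought' in spans B and C.

```python
from collections import namedtuple

# Hypothetical replay of the max-context scoring rule on the example above.
DocSpan = namedtuple("DocSpan", ["start", "length"])

def max_context_score(span, position):
    # minimum of left and right context, plus a small bonus for longer spans
    end = span.start + span.length - 1
    return min(position - span.start, end - position) + 0.01 * span.length

# Doc tokens: "the man went to the store and bought a gallon of milk";
# 'bought' is token 7.
span_b = DocSpan(start=3, length=5)  # "to the store and bought"
span_c = DocSpan(start=6, length=5)  # "and bought a gallon of"
max_context_score(span_b, 7)  # 0.05 — 4 left, 0 right
max_context_score(span_c, 7)  # 1.05 — 1 left, 3 right: span C wins
```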
RawResult = collections.namedtuple("RawResult",
["unique_id", "start_logits", "end_logits"])
def write_predictions(all_examples, all_features, all_results, n_best_size,
max_answer_length, do_lower_case, output_prediction_file,
output_nbest_file, verbose_logging):
"""Write final predictions to the json file."""
logger.info("Writing predictions to: %s" % (output_prediction_file))
logger.info("Writing nbest to: %s" % (output_nbest_file))
example_index_to_features = collections.defaultdict(list)
for feature in all_features:
example_index_to_features[feature.example_index].append(feature)
unique_id_to_result = {}
for result in all_results:
unique_id_to_result[result.unique_id] = result
_PrelimPrediction = collections.namedtuple( # pylint: disable=invalid-name
"PrelimPrediction",
["feature_index", "start_index", "end_index", "start_logit", "end_logit"])
all_predictions = collections.OrderedDict()
all_nbest_json = collections.OrderedDict()
for (example_index, example) in enumerate(all_examples):
features = example_index_to_features[example_index]
prelim_predictions = []
for (feature_index, feature) in enumerate(features):
result = unique_id_to_result[feature.unique_id]
start_indexes = _get_best_indexes(result.start_logits, n_best_size)
end_indexes = _get_best_indexes(result.end_logits, n_best_size)
for start_index in start_indexes:
for end_index in end_indexes:
# We could hypothetically create invalid predictions, e.g., predict
# that the start of the span is in the question. We throw out all
# invalid predictions.
if start_index >= len(feature.tokens):
continue
if end_index >= len(feature.tokens):
continue
if start_index not in feature.token_to_orig_map:
continue
if end_index not in feature.token_to_orig_map:
continue
if not feature.token_is_max_context.get(start_index, False):
continue
if end_index < start_index:
continue
length = end_index - start_index + 1
if length > max_answer_length:
continue
prelim_predictions.append(
_PrelimPrediction(
feature_index=feature_index,
start_index=start_index,
end_index=end_index,
start_logit=result.start_logits[start_index],
end_logit=result.end_logits[end_index]))
prelim_predictions = sorted(
prelim_predictions,
key=lambda x: (x.start_logit + x.end_logit),
reverse=True)
_NbestPrediction = collections.namedtuple( # pylint: disable=invalid-name
"NbestPrediction", ["text", "start_logit", "end_logit"])
seen_predictions = {}
nbest = []
for pred in prelim_predictions:
if len(nbest) >= n_best_size:
break
feature = features[pred.feature_index]
tok_tokens = feature.tokens[pred.start_index:(pred.end_index + 1)]
orig_doc_start = feature.token_to_orig_map[pred.start_index]
orig_doc_end = feature.token_to_orig_map[pred.end_index]
orig_tokens = example.doc_tokens[orig_doc_start:(orig_doc_end + 1)]
tok_text = " ".join(tok_tokens)
# De-tokenize WordPieces that have been split off.
tok_text = tok_text.replace(" ##", "")
tok_text = tok_text.replace("##", "")
# Clean whitespace
tok_text = tok_text.strip()
tok_text = " ".join(tok_text.split())
orig_text = " ".join(orig_tokens)
final_text = get_final_text(tok_text, orig_text, do_lower_case, verbose_logging)
if final_text in seen_predictions:
continue
seen_predictions[final_text] = True
nbest.append(
_NbestPrediction(
text=final_text,
start_logit=pred.start_logit,
end_logit=pred.end_logit))
# In very rare edge cases we could have no valid predictions. So we
# just create a nonce prediction in this case to avoid failure.
if not nbest:
nbest.append(
_NbestPrediction(text="empty", start_logit=0.0, end_logit=0.0))
assert len(nbest) >= 1
total_scores = []
for entry in nbest:
total_scores.append(entry.start_logit + entry.end_logit)
probs = _compute_softmax(total_scores)
nbest_json = []
for (i, entry) in enumerate(nbest):
output = collections.OrderedDict()
output["text"] = entry.text
output["probability"] = probs[i]
output["start_logit"] = entry.start_logit
output["end_logit"] = entry.end_logit
nbest_json.append(output)
assert len(nbest_json) >= 1
all_predictions[example.qas_id] = nbest_json[0]["text"]
all_nbest_json[example.qas_id] = nbest_json
with open(output_prediction_file, "w") as writer:
writer.write(json.dumps(all_predictions, indent=4) + "\n")
with open(output_nbest_file, "w") as writer:
writer.write(json.dumps(all_nbest_json, indent=4) + "\n")
def _get_best_indexes(logits, n_best_size):
"""Get the n-best logits from a list."""
index_and_score = sorted(enumerate(logits), key=lambda x: x[1], reverse=True)
best_indexes = []
for i in range(len(index_and_score)):
if i >= n_best_size:
break
best_indexes.append(index_and_score[i][0])
return best_indexes
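A quick hypothetical check of the top-k selection above: Python's `sorted` is stable, so tied logits keep their original left-to-right order in the result.

```python
# Hypothetical restatement of the n-best index selection above.
def best_indexes(logits, n_best_size):
    ranked = sorted(enumerate(logits), key=lambda x: x[1], reverse=True)
    return [i for i, _ in ranked[:n_best_size]]

best_indexes([0.1, 2.5, -1.0, 2.5], 2)  # → [1, 3]: ties keep input order
```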
def _compute_softmax(scores):
"""Compute softmax probability over raw logits."""
if not scores:
return []
max_score = None
for score in scores:
if max_score is None or score > max_score:
max_score = score
exp_scores = []
total_sum = 0.0
for score in scores:
x = math.exp(score - max_score)
exp_scores.append(x)
total_sum += x
probs = []
for score in exp_scores:
probs.append(score / total_sum)
return probs
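A hypothetical sanity check of the stable softmax above: subtracting the maximum logit before exponentiating avoids overflow on large logits without changing the resulting probabilities.

```python
import math

# Compact restatement of the numerically stable softmax above.
def softmax(scores):
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

probs = softmax([1.0, 2.0, 3.0])             # sums to 1, largest logit wins
shifted = softmax([1001.0, 1002.0, 1003.0])  # same probabilities, no overflow
```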
# + [markdown] id="08mLR83Pv_bZ" colab_type="text"
# ### 6.1 chatbot_prediction_answer
# + id="ihFLNdX8v_Ok" colab_type="code" colab={}
def chatbot_prediction_answer(all_examples, all_features, all_results, n_best_size,
max_answer_length, do_lower_case):
example_index_to_features = collections.defaultdict(list)
for feature in all_features:
example_index_to_features[feature.example_index].append(feature)
unique_id_to_result = {}
for result in all_results:
unique_id_to_result[result.unique_id] = result
_PrelimPrediction = collections.namedtuple( # pylint: disable=invalid-name
"PrelimPrediction",
["feature_index", "start_index", "end_index", "start_logit", "end_logit"])
all_predictions = collections.OrderedDict()
all_nbest_json = collections.OrderedDict()
for (example_index, example) in enumerate(all_examples):
features = example_index_to_features[example_index]
prelim_predictions = []
for (feature_index, feature) in enumerate(features):
result = unique_id_to_result[feature.unique_id]
start_indexes = _get_best_indexes(result.start_logits, n_best_size)
end_indexes = _get_best_indexes(result.end_logits, n_best_size)
for start_index in start_indexes:
for end_index in end_indexes:
# We could hypothetically create invalid predictions, e.g., predict
# that the start of the span is in the question. We throw out all
# invalid predictions.
if start_index >= len(feature.tokens):
continue
if end_index >= len(feature.tokens):
continue
if start_index not in feature.token_to_orig_map:
continue
if end_index not in feature.token_to_orig_map:
continue
if not feature.token_is_max_context.get(start_index, False):
continue
if end_index < start_index:
continue
length = end_index - start_index + 1
if length > max_answer_length:
continue
prelim_predictions.append(
_PrelimPrediction(
feature_index=feature_index,
start_index=start_index,
end_index=end_index,
start_logit=result.start_logits[start_index],
end_logit=result.end_logits[end_index]))
prelim_predictions = sorted(
prelim_predictions,
key=lambda x: (x.start_logit + x.end_logit),
reverse=True)
_NbestPrediction = collections.namedtuple( # pylint: disable=invalid-name
"NbestPrediction", ["text", "start_logit", "end_logit"])
seen_predictions = {}
nbest = []
for pred in prelim_predictions:
if len(nbest) >= n_best_size:
break
feature = features[pred.feature_index]
tok_tokens = feature.tokens[pred.start_index:(pred.end_index + 1)]
orig_doc_start = feature.token_to_orig_map[pred.start_index]
orig_doc_end = feature.token_to_orig_map[pred.end_index]
orig_tokens = example.doc_tokens[orig_doc_start:(orig_doc_end + 1)]
tok_text = " ".join(tok_tokens)
# De-tokenize WordPieces that have been split off.
tok_text = tok_text.replace(" ##", "")
tok_text = tok_text.replace("##", "")
# Clean whitespace
tok_text = tok_text.strip()
tok_text = " ".join(tok_text.split())
orig_text = " ".join(orig_tokens)
final_text = get_final_text(tok_text, orig_text, do_lower_case, verbose_logging=False)
if final_text in seen_predictions:
continue
seen_predictions[final_text] = True
nbest.append(
_NbestPrediction(
text=final_text,
start_logit=pred.start_logit,
end_logit=pred.end_logit))
# In very rare edge cases we could have no valid predictions. So we
# just create a nonce prediction in this case to avoid failure.
if not nbest:
nbest.append(
_NbestPrediction(text="empty", start_logit=0.0, end_logit=0.0))
assert len(nbest) >= 1
total_scores = []
for entry in nbest:
total_scores.append(entry.start_logit + entry.end_logit)
probs = _compute_softmax(total_scores)
nbest_json = []
for (i, entry) in enumerate(nbest):
output = collections.OrderedDict()
output["text"] = entry.text
output["probability"] = probs[i]
output["start_logit"] = entry.start_logit
output["end_logit"] = entry.end_logit
nbest_json.append(output)
assert len(nbest_json) >= 1
return nbest_json[0]["text"]
# + [markdown] id="ZqdHWTavwBTQ" colab_type="text"
# ### 6.2 get_final_text
# + id="BFCUK-W0wBdG" colab_type="code" colab={}
def get_final_text(pred_text, orig_text, do_lower_case, verbose_logging=False):
"""Project the tokenized prediction back to the original text."""
# When we created the data, we kept track of the alignment between original
# (whitespace tokenized) tokens and our WordPiece tokenized tokens. So
# now `orig_text` contains the span of our original text corresponding to the
# span that we predicted.
#
# However, `orig_text` may contain extra characters that we don't want in
# our prediction.
#
# For example, let's say:
# pred_text = <NAME>
# orig_text = <NAME>
#
# We don't want to return `orig_text` because it contains the extra "'s".
#
# We don't want to return `pred_text` because it's already been normalized
# (the SQuAD eval script also does punctuation stripping/lower casing but
# our tokenizer does additional normalization like stripping accent
# characters).
#
# What we really want to return is "<NAME>".
#
# Therefore, we have to apply a semi-complicated alignment heuristic between
# `pred_text` and `orig_text` to get a character-to-character alignment. This
# can fail in certain cases in which case we just return `orig_text`.
def _strip_spaces(text):
ns_chars = []
ns_to_s_map = collections.OrderedDict()
for (i, c) in enumerate(text):
if c == " ":
continue
ns_to_s_map[len(ns_chars)] = i
ns_chars.append(c)
ns_text = "".join(ns_chars)
return (ns_text, ns_to_s_map)
# We first tokenize `orig_text`, strip whitespace from the result
# and `pred_text`, and check if they are the same length. If they are
# NOT the same length, the heuristic has failed. If they are the same
# length, we assume the characters are one-to-one aligned.
tokenizer = BasicTokenizer(do_lower_case=do_lower_case)
tok_text = " ".join(tokenizer.tokenize(orig_text))
start_position = tok_text.find(pred_text)
if start_position == -1:
if verbose_logging:
logger.info(
"Unable to find text: '%s' in '%s'" % (pred_text, orig_text))
return orig_text
end_position = start_position + len(pred_text) - 1
(orig_ns_text, orig_ns_to_s_map) = _strip_spaces(orig_text)
(tok_ns_text, tok_ns_to_s_map) = _strip_spaces(tok_text)
if len(orig_ns_text) != len(tok_ns_text):
if verbose_logging:
logger.info("Length not equal after stripping spaces: '%s' vs '%s'",
orig_ns_text, tok_ns_text)
return orig_text
# We then project the characters in `pred_text` back to `orig_text` using
# the character-to-character alignment.
tok_s_to_ns_map = {}
for (i, tok_index) in six.iteritems(tok_ns_to_s_map):
tok_s_to_ns_map[tok_index] = i
orig_start_position = None
if start_position in tok_s_to_ns_map:
ns_start_position = tok_s_to_ns_map[start_position]
if ns_start_position in orig_ns_to_s_map:
orig_start_position = orig_ns_to_s_map[ns_start_position]
if orig_start_position is None:
if verbose_logging:
logger.info("Couldn't map start position")
return orig_text
orig_end_position = None
if end_position in tok_s_to_ns_map:
ns_end_position = tok_s_to_ns_map[end_position]
if ns_end_position in orig_ns_to_s_map:
orig_end_position = orig_ns_to_s_map[ns_end_position]
if orig_end_position is None:
if verbose_logging:
logger.info("Couldn't map end position")
return orig_text
output_text = orig_text[orig_start_position:(orig_end_position + 1)]
return output_text
# + [markdown] id="1hPk4byxtKAG" colab_type="text"
# ### 6.3 parse_input_examples
# + id="i-wvzICgVwRs" colab_type="code" colab={}
def parse_input_examples(qas_id, paragraph_text, question_text):
def is_whitespace(c):
if c == " " or c == "\t" or c == "\r" or c == "\n" or ord(c) == 0x202F:
return True
return False
examples = []
doc_tokens = []
char_to_word_offset = []
prev_is_whitespace = True
for c in paragraph_text:
if is_whitespace(c):
prev_is_whitespace = True
else:
if prev_is_whitespace:
doc_tokens.append(c)
else:
doc_tokens[-1] += c
prev_is_whitespace = False
start_position = None
end_position = None
orig_answer_text = None
example = SquadExample(
qas_id=qas_id,
question_text=question_text,
doc_tokens=doc_tokens,
orig_answer_text=orig_answer_text,
start_position=start_position,
end_position=end_position)
examples.append(example)
return examples
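The character loop above builds `doc_tokens` by appending characters to the current token and starting a new token after any whitespace. A hypothetical standalone version of just that splitting step:

```python
# Hypothetical standalone run of the whitespace splitting used in
# parse_input_examples above.
def split_on_whitespace(text):
    tokens, prev_is_whitespace = [], True
    for c in text:
        if c in " \t\r\n" or ord(c) == 0x202F:
            prev_is_whitespace = True
        else:
            if prev_is_whitespace:
                tokens.append(c)   # start a new token
            else:
                tokens[-1] += c    # extend the current token
            prev_is_whitespace = False
    return tokens

split_on_whitespace("What  is\tSQuAD?\n")  # → ['What', 'is', 'SQuAD?']
```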
# + id="XTMhBvybEOYl" colab_type="code" colab={}
BERT_CONFIG={
'bert_config_file' : 'bert_config.json',
'vocab_file' : 'vocab.txt',
'output_dir' : 'output',
'train_file' : 'train.json',
'predict_file' : 'dev.json',
'init_checkpoint': None,  # checkpoint filename or None
'do_lower_case' : False,
'max_seq_length' : 384,
'doc_stride' : 128,
'max_query_length' : 64,
'do_train' : False,
'do_predict' : False,
'do_chat' : True,
'train_batch_size' : 32,
'predict_batch_size' : 8,
'learning_rate' : 5e-5,
'num_train_epochs' : 3.0,
'warmup_proportion' : 0.1,
'save_checkpoints_steps' : 1000,
'iterations_per_loop' : 1000,
'n_best_size' : 20,
'max_answer_length' : 30,
'verbose_logging' : False,
'no_cuda' : False,
'local_rank' : -1,
'accumulate_gradients' : 1,
'seed' : 42,
'gradient_accumulation_steps' : 1
}
# + [markdown] id="mTp-2IXWr9tR" colab_type="text"
# ## 7. main
# + id="utvdkhbmkhJO" colab_type="code" colab={}
args = Namespace(**BERT_CONFIG)
print(args)
if args.local_rank == -1 or args.no_cuda:
device = torch.device("cuda" if torch.cuda.is_available() and not args.no_cuda else "cpu")
n_gpu = torch.cuda.device_count()
else:
device = torch.device("cuda", args.local_rank)
n_gpu = 1
# Initializes the distributed backend which will take care of synchronizing nodes/GPUs
torch.distributed.init_process_group(backend='nccl')
logger.info("device %s n_gpu %d distributed training %r", device, n_gpu, bool(args.local_rank != -1))
if args.accumulate_gradients < 1:
raise ValueError("Invalid accumulate_gradients parameter: {}, should be >= 1".format(
args.accumulate_gradients))
args.train_batch_size = int(args.train_batch_size / args.accumulate_gradients)
random.seed(args.seed)
np.random.seed(args.seed)
torch.manual_seed(args.seed)
if n_gpu > 0:
torch.cuda.manual_seed_all(args.seed)
if args.do_train:
if not args.train_file:
raise ValueError(
"If `do_train` is True, then `train_file` must be specified.")
if args.do_predict:
if not args.predict_file:
raise ValueError(
"If `do_predict` is True, then `predict_file` must be specified.")
bert_config = BertConfig.from_json_file(args.bert_config_file)
if args.max_seq_length > bert_config.max_position_embeddings:
raise ValueError(
"Cannot use sequence length %d because the BERT model "
"was only trained up to sequence length %d" %
(args.max_seq_length, bert_config.max_position_embeddings))
if os.path.exists(args.output_dir) and os.listdir(args.output_dir):
raise ValueError("Output directory ({}) already exists and is not empty.".format(args.output_dir))
os.makedirs(args.output_dir, exist_ok=True)
tokenizer = FullTokenizer(
vocab_file=args.vocab_file, do_lower_case=args.do_lower_case)
train_examples = None
num_train_steps = None
if args.do_train:
train_examples = read_squad_examples(
input_file=args.train_file, is_training=True)
num_train_steps = int(
len(train_examples) / args.train_batch_size * args.num_train_epochs)
model = BertForQuestionAnswering(bert_config)
if args.init_checkpoint is not None:
model.load_state_dict(torch.load(args.init_checkpoint, map_location='cpu'))
model.to(device)
if args.local_rank != -1:
model = torch.nn.parallel.DistributedDataParallel(model, device_ids=[args.local_rank],
output_device=args.local_rank)
elif n_gpu > 1:
model = torch.nn.DataParallel(model)
no_decay = ['bias', 'gamma', 'beta']
optimizer_parameters = [
{'params': [p for n, p in model.named_parameters() if not any(nd in n for nd in no_decay)], 'weight_decay_rate': 0.01},
{'params': [p for n, p in model.named_parameters() if any(nd in n for nd in no_decay)], 'weight_decay_rate': 0.0}
]
optimizer = BERTAdam(optimizer_parameters,
lr=args.learning_rate,
warmup=args.warmup_proportion,
t_total=num_train_steps)
global_step = 0
# + [markdown] id="QQvxBmpNsfHz" colab_type="text"
# ### 7.1 train
# + id="K8haOibhsHhT" colab_type="code" colab={}
if args.do_train:
train_features = convert_examples_to_features(
examples=train_examples,
tokenizer=tokenizer,
max_seq_length=args.max_seq_length,
doc_stride=args.doc_stride,
max_query_length=args.max_query_length,
is_training=True)
logger.info("***** Running training *****")
logger.info(" Num orig examples = %d", len(train_examples))
logger.info(" Num split examples = %d", len(train_features))
logger.info(" Batch size = %d", args.train_batch_size)
logger.info(" Num steps = %d", num_train_steps)
all_input_ids = torch.tensor([f.input_ids for f in train_features], dtype=torch.long)
all_input_mask = torch.tensor([f.input_mask for f in train_features], dtype=torch.long)
all_segment_ids = torch.tensor([f.segment_ids for f in train_features], dtype=torch.long)
all_start_positions = torch.tensor([f.start_position for f in train_features], dtype=torch.long)
all_end_positions = torch.tensor([f.end_position for f in train_features], dtype=torch.long)
train_data = TensorDataset(all_input_ids, all_input_mask, all_segment_ids,
all_start_positions, all_end_positions)
if args.local_rank == -1:
train_sampler = RandomSampler(train_data)
else:
train_sampler = DistributedSampler(train_data)
train_dataloader = DataLoader(train_data, sampler=train_sampler, batch_size=args.train_batch_size)
model.train()
for epoch in trange(int(args.num_train_epochs), desc="Epoch"):
for step, batch in enumerate(tqdm(train_dataloader, desc="Iteration")):
input_ids, input_mask, segment_ids, start_positions, end_positions = batch
input_ids = input_ids.to(device)
input_mask = input_mask.to(device)
segment_ids = segment_ids.to(device)
start_positions = start_positions.to(device)
end_positions = end_positions.to(device)
start_positions = start_positions.view(-1, 1)
end_positions = end_positions.view(-1, 1)
loss, _ = model(input_ids, segment_ids, input_mask, start_positions, end_positions)
if n_gpu > 1:
loss = loss.mean() # mean() to average on multi-gpu.
loss.backward()
if (step + 1) % args.gradient_accumulation_steps == 0:
optimizer.step()  # We have accumulated enough gradients
model.zero_grad()
global_step += 1
# + [markdown] id="XxA9FcoWsmSV" colab_type="text"
# ### 7.2 predict
# + id="a69tIqhvsHzr" colab_type="code" colab={}
if args.do_predict:
eval_examples = read_squad_examples(
input_file=args.predict_file, is_training=False)
eval_features = convert_examples_to_features(
examples=eval_examples,
tokenizer=tokenizer,
max_seq_length=args.max_seq_length,
doc_stride=args.doc_stride,
max_query_length=args.max_query_length,
is_training=False)
logger.info("***** Running predictions *****")
logger.info(" Num orig examples = %d", len(eval_examples))
logger.info(" Num split examples = %d", len(eval_features))
logger.info(" Batch size = %d", args.predict_batch_size)
all_input_ids = torch.tensor([f.input_ids for f in eval_features], dtype=torch.long)
all_input_mask = torch.tensor([f.input_mask for f in eval_features], dtype=torch.long)
all_segment_ids = torch.tensor([f.segment_ids for f in eval_features], dtype=torch.long)
all_example_index = torch.arange(all_input_ids.size(0), dtype=torch.long)
eval_data = TensorDataset(all_input_ids, all_input_mask, all_segment_ids, all_example_index)
if args.local_rank == -1:
eval_sampler = SequentialSampler(eval_data)
else:
eval_sampler = DistributedSampler(eval_data)
eval_dataloader = DataLoader(eval_data, sampler=eval_sampler, batch_size=args.predict_batch_size)
model.eval()
all_results = []
logger.info("Start evaluating")
for input_ids, input_mask, segment_ids, example_indices in tqdm(eval_dataloader, desc="Evaluating"):
if len(all_results) % 1000 == 0:
logger.info("Processing example: %d" % (len(all_results)))
input_ids = input_ids.to(device)
input_mask = input_mask.to(device)
segment_ids = segment_ids.to(device)
with torch.no_grad():
batch_start_logits, batch_end_logits = model(input_ids, segment_ids, input_mask)
for i, example_index in enumerate(example_indices):
start_logits = batch_start_logits[i].detach().cpu().tolist()
end_logits = batch_end_logits[i].detach().cpu().tolist()
eval_feature = eval_features[example_index.item()]
unique_id = int(eval_feature.unique_id)
all_results.append(RawResult(unique_id=unique_id,
start_logits=start_logits,
end_logits=end_logits))
output_prediction_file = os.path.join(args.output_dir, "predictions.json")
output_nbest_file = os.path.join(args.output_dir, "nbest_predictions.json")
write_predictions(eval_examples, eval_features, all_results,
args.n_best_size, args.max_answer_length,
args.do_lower_case, output_prediction_file,
output_nbest_file, args.verbose_logging)
# + [markdown] id="hF-8EhS2sou7" colab_type="text"
# ### 7.3 chat
# + [markdown] id="aMYWxHqZePTz" colab_type="text"
# #### 7.3.1
# + id="7oCps-cNePo0" colab_type="code" colab={}
if args.do_chat:
logger.setLevel('CRITICAL')
input_text = None
qas_id = '56be4db0acb8001400a502ed'
c = ContextFinder()
c.build_model(args.predict_file)
print('READY')
# + id="3E7fIUt2jhDV" colab_type="code" colab={}
question_text = input('QUESTION: ')
question_text = ' '.join(c.convert_to_lemma(question_text))
context_text = ""
for i in c.get_ntop_context_by_cosine_similarity(question_text, 5):
context = ' '.join(c.convert_to_lemma(i))
print(context)
context_text += context + ' '
print('context_result:', context_text)
# + [markdown] id="rzI_KUrBeSqF" colab_type="text"
# #### 7.3.2
# + id="sN8BXBirsH4C" colab_type="code" colab={}
eval_examples = parse_input_examples(qas_id, context_text, question_text)
eval_features = convert_examples_to_features(
examples=eval_examples,
tokenizer=tokenizer,
max_seq_length=args.max_seq_length,
doc_stride=args.doc_stride,
max_query_length=args.max_query_length,
is_training=False)
# + [markdown] id="QWoUzIKHjVRv" colab_type="text"
# #### 7.3.3
# + id="onpRlMyljVa1" colab_type="code" colab={}
all_input_ids = torch.tensor([f.input_ids for f in eval_features], dtype=torch.long)
all_input_mask = torch.tensor([f.input_mask for f in eval_features], dtype=torch.long)
all_segment_ids = torch.tensor([f.segment_ids for f in eval_features], dtype=torch.long)
all_example_index = torch.arange(all_input_ids.size(0), dtype=torch.long)
eval_data = TensorDataset(all_input_ids, all_input_mask, all_segment_ids, all_example_index)
# Run prediction for full data
eval_sampler = SequentialSampler(eval_data)
eval_dataloader = DataLoader(eval_data, sampler=eval_sampler, batch_size=args.predict_batch_size)
model.eval()
all_results = []
for input_ids, input_mask, segment_ids, example_indices in tqdm(eval_dataloader, desc="CHAT_INFERENCE"):
input_ids = input_ids.to(device)
input_mask = input_mask.to(device)
segment_ids = segment_ids.to(device)
with torch.no_grad():
batch_start_logits, batch_end_logits = model(input_ids, segment_ids, input_mask)
for i, example_index in enumerate(example_indices):
start_logits = batch_start_logits[i].detach().cpu().tolist()
end_logits = batch_end_logits[i].detach().cpu().tolist()
eval_feature = eval_features[example_index.item()]
unique_id = int(eval_feature.unique_id)
all_results.append(RawResult(unique_id=unique_id,
start_logits=start_logits,
end_logits=end_logits))
# + [markdown] id="zp4e4yb4zQMr" colab_type="text"
# #### 7.3.4
# + id="iiz-hr5yzQUI" colab_type="code" colab={}
answer = chatbot_prediction_answer(eval_examples, eval_features, all_results,
args.n_best_size, args.max_answer_length, args.do_lower_case)
print('QUESTION:', question_text)
print('ANSWER:', answer)
print('')
| bert_chat.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
# Import Python packages
import pickle
# Import Third party packages
import numpy as np
import matplotlib.pyplot as plt
# -
def compute_spurious_terms(results):
# Count the number of incorrectly ID'ed terms and missing terms
for result in results:
coeffs = result['coeffs']
spurious_terms = 0
# Count the incorrect terms identified
for term in coeffs:
if term not in ['u', 'du/dx', 'f', 'u^{2}']:
# if it isn't, increment counter
spurious_terms += 1
# Count if any terms are missing from learned model
for term in ['u', 'du/dx', 'f', 'u^{2}']:
if term not in coeffs:
# if it isn't, increment counter
spurious_terms += 1
result['spurious'] = spurious_terms
return results
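The counting logic above can be restated compactly: the spurious count is the number of learned terms not in the true NLSL model plus the number of true terms the learned model misses. A hypothetical standalone version:

```python
# Hypothetical restatement of the spurious-term count above.
TRUE_TERMS = ['u', 'du/dx', 'f', 'u^{2}']

def count_spurious(coeffs):
    extra = sum(1 for t in coeffs if t not in TRUE_TERMS)    # false positives
    missing = sum(1 for t in TRUE_TERMS if t not in coeffs)  # false negatives
    return extra + missing

count_spurious(['u', 'du/dx', 'f', 'u^{2}'])  # → 0: exact recovery
count_spurious(['u', 'u_xx'])                 # → 4: one extra, three missing
```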
# Prepare the results list
file_stem = "./data/Fig4a-NLSL-"
results = pickle.load(open(file_stem +"results.pickle", "rb"))
results = compute_spurious_terms(results)
# Prepare lists for plotting
plot_nm = [result['noise_mag'] for result in results]
plot_losses = [result['loss'] for result in results]
min_loss = min(plot_losses)
plot_losses = [loss-min_loss for loss in plot_losses]
plot_spurious = [result['spurious'] for result in results]
# +
import matplotlib as mpl
mpl.rcParams["legend.markerscale"] = 1.5
mpl.rcParams["legend.labelspacing"] = 1.2
mpl.rcParams["legend.handlelength"] = 3.5
mpl.rcParams["legend.handletextpad"] = 20
figsize = (6,4)
# Create figure
plt.figure(figsize=figsize)
# set axes
ax1 = plt.gca()
ax1.autoscale(False, axis='y')
ax2 = ax1.twinx()
pltstyle = dict(linestyle='None', marker='o')  # the string 'None' draws markers with no connecting line
ax1.plot(plot_nm, plot_losses, color='black', label = "PDE Find Loss Error", **pltstyle)
ax2.plot(plot_nm, plot_spurious, color='red', label="# Spurious Terms", **pltstyle)
ax2.spines['right'].set_color('red')
# Place the legend
lines = ax1.get_lines()+ax2.get_lines()
labels = [line.get_label() for line in lines]
labels = ['' for line in lines]
# adjust axis scales
ax1.set_ylim([0,50])
ax2.set_ylim([0,10])
# Turn off all the tick labels
#ax1.tick_params(labelbottom=False, labelleft=False)
#ax2.tick_params(labelright=False)
#ax2.tick_params(axis='y', colors='red')
## Save figure
#plt.savefig('./Figs/4a-NLSL-noise-vs-error.svg', dpi=600, transparent=True)
plt.show()
# -
# Prepare the results list
file_stem = "./data/Fig4b-NLSL-"
results = pickle.load(open(file_stem +"results.pickle", "rb"))
results = compute_spurious_terms(results)
# Prepare lists for plotting
plot_trials = [result['num_trials'] for result in results]
plot_losses = [result['loss']/result['num_trials'] for result in results]
min_loss = min(plot_losses)
plot_losses = [loss-min_loss for loss in plot_losses]
plot_spurious = [result['spurious'] for result in results]
# +
import matplotlib as mpl
mpl.rcParams["legend.markerscale"] = 1.5
mpl.rcParams["legend.labelspacing"] = 1.2
mpl.rcParams["legend.handlelength"] = 3.5
mpl.rcParams["legend.handletextpad"] = 20
figsize = (6,4)
# Create figure
plt.figure(figsize=figsize)
# set axes
ax1 = plt.gca()
ax1.autoscale(False, axis='y')
ax2 = ax1.twinx()
pltstyle = dict(linestyle='None', marker='o')  # the string 'None' draws markers with no connecting line
ax1.plot(plot_trials, plot_losses, color='black', label = "PDE Find Loss Error", **pltstyle)
ax2.plot(plot_trials, plot_spurious, color='red', label="# Spurious Terms", **pltstyle)
ax2.spines['right'].set_color('red')
# Place the legend
lines = ax1.get_lines()+ax2.get_lines()
labels = [line.get_label() for line in lines]
labels = ['' for line in lines]
# adjust axis scales
ax1.set_ylim([0, 3])
ax2.set_ylim([0, 15])
ax1.set_yticks([0,1,2,3])
ax2.set_yticks([0,5,10,15])
plt.xticks([0,50,100,150,200])
plt.xlim([0, 200])
# Turn off all the tick labels
#ax1.tick_params(labelbottom=False, labelleft=False)
#ax2.tick_params(labelright=False)
#ax2.tick_params(axis='y', colors='red')
#
## Save figure
#plt.savefig('./Figs/4b-NLSL-trials-vs-error.svg', dpi=600, transparent=True)
plt.show()
# -
# Create separate axes
legend_figsize = (figsize[0]*2, figsize[1]/5)
plt.figure(figsize=legend_figsize)
ax = plt.gca()
for spine in ax.spines:
ax.spines[spine].set_visible(False)
ax.tick_params(labelleft=False, labelbottom=False, left=False, bottom=False)
plt.legend(lines, labels, ncol=2, loc='center', frameon=False)
#plt.savefig('./Figs/4-legend.svg', dpi=600, transparent=True)
| Fig 4 - Analysis - NLSL.ipynb |
# ---
# jupyter:
# jupytext:
# split_at_heading: true
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Copyright : fast.ai - <NAME> & <NAME> - 2020 (GPLv3)
# Code cells and notebook outline adapted from the book:
#
# Deep Learning for Coders with fastai & PyTorch by <NAME> and <NAME>.
#
# The code in the original notebooks (and thus the code in this notebook) is covered by the GPL v3 license; see the LICENSE file for details.
# # Collaborative filtering
# When you have many users and many products, you want to recommend the products that are most likely to be useful to each user. There are many variants of this: recommending movies (as on Netflix), deciding what to highlight for a user on a home page, choosing which stories to show in a social media feed, and so on.
#
# There is a general solution to this problem, called **collaborative filtering**, which works as follows: look at the products the current user has used or liked, find other users who have used or liked similar products, then recommend the products those other users have used or liked.
#
# For example, on Netflix you may have watched a lot of action-packed science-fiction movies made in the 1970s. Netflix may not know these particular properties of the movies you watched, but it can see that other people who watched the same movies you did also tended to watch other action-packed science-fiction movies made in the 70s. In other words, to use this approach we don't necessarily need to know anything about these movies, only who likes to watch them.
#
# There is actually a more general class of problems this approach can solve, not just problems involving users and products. The items can be links you click on, diagnoses selected for patients, and so on.
#
# The fundamental idea is that of **latent factors**. In the Netflix example above, we assumed you like old action science-fiction movies. But you never actually told Netflix that you like that kind of movie, and Netflix never needed to add columns to its movie table saying which movies are of that kind. Still, there must be an underlying concept of science fiction, action, and movie age, and these concepts must be relevant to the movie-watching decisions of at least some people.
# ## A first look at the data
from fastai2.collab import *
from fastai2.tabular.all import *
# We will use a dataset called [MovieLens](https://grouplens.org/datasets/movielens/). This dataset contains tens of millions of movie ratings (a combination of a movie ID, a user ID, and a numeric rating), although we will only use a subset of 100,000 of them for our example.
path = untar_data(URLs.ML_100k)
ratings = pd.read_csv(path/'u.data', delimiter='\t', header=None, names=['user','movie','rating','timestamp'])
ratings.head(10)
len(ratings)
# Example representation of the "latent factors" of movies and users.
# If we knew, for each user, how much they like each important category a movie can belong to, such as genre, age, favorite directors and actors, and so on, and if we knew the same information about each movie, then a simple way to predict the "rating" column would be to multiply this information together for each movie and add it up.
#
# For example, assuming these factors range between -1 and +1, with positive meaning a strong match and negative a weak match, and the categories are science fiction, action, and old movies, then we could represent the movie The Last Skywalker as:
last_skywalker = np.array([0.98,0.9,-0.9])
user1 = np.array([0.9,0.8,-0.6])
(user1*last_skywalker).sum()
casablanca = np.array([-0.99,-0.3,0.8])
(user1*casablanca).sum()
# When we multiply two vectors together and add up the results, this is called a dot product.
#
# jargon: dot product: the mathematical operation of multiplying the elements of two vectors together and then summing the result.
# ## Learning the latent factors
# There is surprisingly little distance between specifying the structure of a model, as we did in the last section, and learning one, since we can just use our gradient descent approach.
#
# Step 1 of this approach is to randomly initialize some parameters. These parameters will be a set of latent factors for each user and each movie. We will have to decide how many to use. We will discuss how to select this shortly, but for illustration let's use 5 for now. Each user will have 5 factors, and each movie will have 5 factors: we can show these randomly initialized values right next to the users and movies in a cross-tabulation.
#
# Step 2 of this approach is to calculate our predictions. As we have seen, we can do this by simply taking the dot product of each movie with each user. If, for instance, the first latent factor of a user represents how much they like action movies, and the first latent factor of a movie represents whether it has a lot of action or not, the product of these two factors will be particularly high either when the user likes action movies and the movie has lots of action, or when the user dislikes action movies and the movie has none. On the other hand, if we have a mismatch (a user loves action movies but the movie isn't one, or the user dislikes action movies and it is one), the product will be very low.
#
# Step 3 is to calculate the prediction error. We can use any loss function; let's pick mean squared error for now, since it is a reasonable way to represent the accuracy of a prediction.
#
# That's all we need. With this in place, we can optimize our parameters (the latent factors) using stochastic gradient descent, so as to minimize the prediction error. At each step, the training loop will calculate the match between each movie and each user using the dot product, compare it to the actual rating each user gave each movie, calculate the derivative of this value, and step the weights by multiplying the derivative by the learning rate. After doing this lots of times, the prediction error will get smaller and smaller, and the recommendations will also get better.
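# The three steps above can be sketched end to end in plain NumPy on a toy ratings matrix. Everything here (the ratings, the learning rate, the epoch count) is made up for illustration; it is not the MovieLens setup used below.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy ratings matrix (hypothetical data): rows are users, columns are movies,
# 0 marks "not rated"
ratings = np.array([[5., 3., 0.],
                    [4., 0., 1.],
                    [0., 2., 5.]])
mask = ratings > 0

n_factors, lr = 2, 0.02
user_factors = rng.normal(0, 0.1, (3, n_factors))    # step 1: random init
movie_factors = rng.normal(0, 0.1, (3, n_factors))

for _ in range(2000):
    pred = user_factors @ movie_factors.T            # step 2: dot products
    err = np.where(mask, pred - ratings, 0.0)        # step 3: error on observed cells
    # Gradient step for the squared-error loss, restricted to observed ratings
    user_factors -= lr * err @ movie_factors
    movie_factors -= lr * err.T @ user_factors

final_mse = np.mean((user_factors @ movie_factors.T - ratings)[mask] ** 2)
print(round(final_mse, 4))
```

After training, the reconstructed dot products closely match the observed ratings, which is exactly the behavior the fastai models below learn at scale.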
# ## Creating the DataLoaders
movies = pd.read_csv(path/'u.item', delimiter='|', encoding='latin-1',
usecols=(0,1), names=('movie','title'), header=None)
movies.head()
ratings = ratings.merge(movies)
ratings.head()
dls = CollabDataLoaders.from_df(ratings, item_name='title', bs=64)
dls.show_batch()
# Random initialization of the latent factors:
# +
n_users = len(dls.classes['user'])
n_movies = len(dls.classes['title'])
n_factors = 5
user_factors = torch.randn(n_users, n_factors)
movie_factors = torch.randn(n_movies, n_factors)
# -
user_factors[3]
# ## Collaborative filtering "from scratch"
# Object-oriented programming in Python:
class Example:
def __init__(self, a): self.a = a
def say(self,x): return f'Hello {self.a}, {x}.'
ex = Example('Sylvain')
ex.say('nice to meet you')
# Creating a PyTorch module - using the **Embedding** module:
class DotProduct(Module):
def __init__(self, n_users, n_movies, n_factors):
self.user_factors = Embedding(n_users, n_factors)
self.movie_factors = Embedding(n_movies, n_factors)
def forward(self, x):
users = self.user_factors(x[:,0])
movies = self.movie_factors(x[:,1])
return (users * movies).sum(dim=1)
x,y = dls.one_batch()
x.shape
x[0,0],x[0,1],y[0]
model = DotProduct(n_users, n_movies, 50)
learn = Learner(dls, model, loss_func=MSELossFlat())
learn.fit_one_cycle(5, 5e-3)
class DotProduct(Module):
def __init__(self, n_users, n_movies, n_factors, y_range=(0,5.5)):
self.user_factors = Embedding(n_users, n_factors)
self.movie_factors = Embedding(n_movies, n_factors)
self.y_range = y_range
def forward(self, x):
users = self.user_factors(x[:,0])
movies = self.movie_factors(x[:,1])
return sigmoid_range((users * movies).sum(dim=1), *self.y_range)
model = DotProduct(n_users, n_movies, 50)
learn = Learner(dls, model, loss_func=MSELossFlat())
learn.fit_one_cycle(3, 5e-3)
# There is one obvious missing piece in the puzzle: some users are simply more positive or negative in their recommendations than others, and some movies are simply better or worse than others. But in our dot product representation we have no way to encode either of these. If all you can say about a movie is, for instance, that it is very sci-fi, very action-oriented, and not very old, then you have no real way to say that most people like it.
#
# That's because at this point we only have weights; we have no biases. If we have a single number for each user that we add to our scores, and ditto for each movie, that will handle this missing piece very nicely.
class DotProductBias(Module):
def __init__(self, n_users, n_movies, n_factors, y_range=(0,5.5)):
self.user_factors = Embedding(n_users, n_factors)
self.user_bias = Embedding(n_users, 1)
self.movie_factors = Embedding(n_movies, n_factors)
self.movie_bias = Embedding(n_movies, 1)
self.y_range = y_range
def forward(self, x):
users = self.user_factors(x[:,0])
movies = self.movie_factors(x[:,1])
res = (users * movies).sum(dim=1, keepdim=True)
res += self.user_bias(x[:,0]) + self.movie_bias(x[:,1])
return sigmoid_range(res, *self.y_range)
model = DotProductBias(n_users, n_movies, 50)
learn = Learner(dls, model, loss_func=MSELossFlat())
learn.fit_one_cycle(5, 5e-3)
# Instead of being better, the model ends up being worse (at least at the end of training). Why?
#
# If we look carefully at the two error measures, we can see that the validation error stopped improving in the middle and started getting worse. As we have seen, this is a clear sign of overfitting.
#
# In this case there is no way to use data augmentation, so we will have to use another regularization technique. One approach that can be helpful is called weight decay.
# ### Weight decay
# Weight decay, or L2 regularization, consists of adding the sum of all the weights squared to your loss function. Why do that? Because when we compute the gradients, it adds a contribution that will encourage the weights to be as small as possible.
#
# Why would that prevent overfitting? The idea is that the larger the coefficients, the sharper the canyons in the loss function will be. Taking the basic example of a parabola, y = a * (x**2), the larger a is, the narrower the parabola.
# + hide_input=true
x = np.linspace(-2,2,100)
a_s = [1,2,5,10,50]
ys = [a * x**2 for a in a_s]
_,ax = plt.subplots(figsize=(8,6))
for a,y in zip(a_s,ys): ax.plot(x,y, label=f'a={a}')
ax.set_ylim([0,5])
ax.legend();
# -
# Limiting the weights from growing too large will hinder the training of the model, but it will lead to a state where it generalizes better.
#
# Going back to the theory briefly, weight decay (or just wd) is a parameter that controls the sum of squares we add to our loss function (assuming params is a tensor of all parameters):
#
# ```
# loss_with_wd = loss + wd * (params**2).sum()
# ```
#
# In practice, though, it would be very inefficient (and maybe numerically unstable) to compute that big sum and add it to the loss. If you remember a little high-school math, you may recall that the derivative of p**2 with respect to p is 2*p, so adding that big sum to our loss is exactly the same as doing:
#
# ```
# weight.grad += wd * 2 * weight
# ```
#
# In practice, since wd is a parameter that we choose, we can just make it twice as big, so we don't even need the *2 in the equation above.
#
# To use L2 regularization, or weight decay, in fastai, just pass wd in your call to fit():
model = DotProductBias(n_users, n_movies, 50)
learn = Learner(dls, model, loss_func=MSELossFlat())
learn.fit_one_cycle(5, 5e-3, wd=0.1)
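# As a quick sanity check of the gradient identity above, the `wd * 2 * weight` term can be compared against finite differences of the penalized loss. The quadratic base loss here is a made-up stand-in, not anything from the model.

```python
import numpy as np

# A hypothetical base loss and its analytic gradient, just for the check
def loss(w):
    return ((w - 3.0) ** 2).sum()

def grad_loss(w):
    return 2.0 * (w - 3.0)

wd = 0.1
w = np.array([1.0, -2.0, 0.5])

# "weight.grad += wd * 2 * weight" applied analytically
analytic = grad_loss(w) + wd * 2 * w

# Central finite differences of loss + wd * (w**2).sum()
def penalized(v):
    return loss(v) + wd * (v ** 2).sum()

eps = 1e-6
numeric = np.array([(penalized(w + eps * e) - penalized(w - eps * e)) / (2 * eps)
                    for e in np.eye(len(w))])
print(np.max(np.abs(analytic - numeric)))
```

The two gradients agree to numerical precision, confirming that adding the decay term to the gradient is equivalent to adding the squared-weight penalty to the loss.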
# ### Creating our own Embedding module
# nn.Parameter
# +
class T(Module):
def __init__(self): self.a = torch.ones(3)
L(T().parameters())
# +
class T(Module):
def __init__(self): self.a = nn.Parameter(torch.ones(3))
L(T().parameters())
# -
# An nn.Module containing an nn.Parameter
# +
class T(Module):
def __init__(self): self.a = nn.Linear(1, 3, bias=False)
t = T()
L(t.parameters())
# -
type(t.a.weight)
# Replacing Embedding: creating the parameters and the forward function
def create_params(size):
return nn.Parameter(torch.zeros(*size).normal_(0, 0.01))
class DotProductBias(Module):
def __init__(self, n_users, n_movies, n_factors, y_range=(0,5.5)):
self.user_factors = create_params([n_users, n_factors])
self.user_bias = create_params([n_users])
self.movie_factors = create_params([n_movies, n_factors])
self.movie_bias = create_params([n_movies])
self.y_range = y_range
def forward(self, x):
users = self.user_factors[x[:,0]]
movies = self.movie_factors[x[:,1]]
res = (users*movies).sum(dim=1)
res += self.user_bias[x[:,0]] + self.movie_bias[x[:,1]]
return sigmoid_range(res, *self.y_range)
model = DotProductBias(n_users, n_movies, 50)
learn = Learner(dls, model, loss_func=MSELossFlat())
learn.fit_one_cycle(5, 5e-3, wd=0.1)
# ## Interpreting the embeddings and biases (latent factors)
movie_bias = learn.model.movie_bias.squeeze()
idxs = movie_bias.argsort()[:5]
[dls.classes['title'][i] for i in idxs]
idxs = movie_bias.argsort(descending=True)[:5]
[dls.classes['title'][i] for i in idxs]
# It is not that easy to interpret the embedding matrices directly. There are just too many factors for a human to consider. But there is a technique that can pull out the most important underlying directions in such a matrix, called **principal component analysis** (**PCA**).
#
# We will not go into the details of this technique in this book, but if you are interested, we suggest you check out the fast.ai course Computational Linear Algebra for Coders.
# + hide_input=true
g = ratings.groupby('title')['rating'].count()
top_movies = g.sort_values(ascending=False).index.values[:1000]
top_idxs = tensor([learn.dls.classes['title'].o2i[m] for m in top_movies])
movie_w = learn.model.movie_factors[top_idxs].cpu().detach()
movie_pca = movie_w.pca(3)
fac0,fac1,fac2 = movie_pca.t()
idxs = list(range(50))
X = fac0[idxs]
Y = fac2[idxs]
plt.figure(figsize=(12,12))
plt.scatter(X, Y)
for i, x, y in zip(top_movies[idxs], X, Y):
plt.text(x,y,i, color=np.random.rand(3)*0.7, fontsize=11)
plt.show()
# -
# We can see here that the model seems to have discovered a concept of classic movies as opposed to pop-culture movies!
#
# Jeremy: "No matter how many models I train, I never stop being moved and amazed by how these bunches of randomly initialized numbers, trained with such a simple mechanism, manage to discover things about my data all by themselves. It almost feels like cheating that I can create code that does useful things without ever telling it how to do them!"
# ### Utiliser fastai.collab
learn = collab_learner(dls, n_factors=50, y_range=(0, 5.5))
learn.fit_one_cycle(5, 5e-3, wd=0.1)
learn.model
movie_bias = learn.model.i_bias.weight.squeeze()
idxs = movie_bias.argsort(descending=True)[:5]
[dls.classes['title'][i] for i in idxs]
# ### Embedding distance
# On a two-dimensional map, we can compute the distance between two points using the Pythagorean formula: $\sqrt{x^{2}+y^{2}}$ (assuming x and y are the distances between the coordinates on each axis).
#
# For a 50-dimensional embedding we can do exactly the same thing, except that we add up the squares of all 50 coordinate distances.
#
# If two movies were nearly identical, their embedding vectors would also have to be nearly identical, because the users who like them would be almost exactly the same. There is a more general idea here: movie similarity can be defined by the similarity of the users who like those movies.
#
# And that directly means that the distance between the embedding vectors of two movies can define that similarity. We can use this to find the movie most similar to Silence of the Lambs:
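# A minimal sketch of that 50-dimensional distance computation, using random stand-in embeddings rather than the trained `movie_factors`:

```python
import numpy as np

rng = np.random.default_rng(42)
emb = rng.normal(size=(5, 50))     # 5 hypothetical movie embeddings, 50 factors each

target = emb[0]
# Same Pythagorean formula, just summed over 50 coordinates instead of 2
dists = np.sqrt(((emb - target) ** 2).sum(axis=1))

# Index 0 is the movie itself (distance 0), so the most similar other movie
# is the one with the smallest non-zero distance
most_similar = int(dists[1:].argmin()) + 1
print(dists.round(2), most_similar)
```

Note that the cell below actually uses cosine similarity, another common measure of embedding closeness; both capture the same intuition of nearby vectors.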
movie_factors = learn.model.i_weight.weight
idx = dls.classes['title'].o2i['Silence of the Lambs, The (1991)']
distances = nn.CosineSimilarity(dim=1)(movie_factors, movie_factors[idx][None])
idx = distances.argsort(descending=True)[1]
dls.classes['title'][idx]
# ## How do you bootstrap a recommendation system?
# The biggest challenge with using collaborative filtering models in practice is the bootstrapping problem. The most extreme version of this problem is when you have no users at all, and therefore no history to learn from. What product do you recommend to your very first user?
#
# But even if you are a well-established company with a long history of user transactions, you still have to ask: what do you do when a new user signs up? And indeed, what do you do when you add a new product to your portfolio? There is no magic solution to this problem, and the solutions we suggest are really just variations on using your common sense. You could initialize your new users to have the average of all the embedding vectors of your other users, although this has the problem that this particular combination of latent factors may not be common at all (for example, the average for the science-fiction factor could be high and the average for the action factor low, but it is not that common to find people who like science fiction without action). It would probably be better to pick a particular user to represent the average taste.
#
# Better still, use a tabular model based on user metadata to construct your initial embedding vector. When a user signs up, think about what questions you could ask that would help you understand their tastes. Then you can create a model in which the dependent variable is the user's embedding vector, and the independent variables are the answers to the questions you ask them, along with their signup metadata. We will learn in the next section how to create these kinds of tabular models. You may have noticed that when you sign up for services such as Pandora and Netflix, they tend to ask you a few questions about what genres of movies or music you like; this is how they come up with your initial collaborative filtering recommendations.
#
# One thing to watch out for is that a small number of extremely enthusiastic users can end up effectively setting the recommendations for your whole user base. This is a very common problem, for instance, in movie recommendation systems. People who watch anime tend to watch a lot of it, not much else, and spend a lot of time rating it on websites. As a result, anime tends to be heavily over-represented in many "best movies" lists. In this particular case it may be fairly obvious that you have a representation bias problem, but if the bias occurs in the latent factors, it may not be obvious at all.
#
# Such a problem can change the entire makeup of your user base and the behavior of your system. This is particularly true because of positive feedback loops. If a small number of your users tend to set the direction of your recommendation system, they will naturally end up attracting more people like them to your system. And that will, of course, amplify the original representation bias. This kind of bias has a natural tendency to be amplified exponentially. You may have seen examples of company executives expressing surprise at how rapidly their online platforms deteriorated, to the point of expressing values at odds with those of the founders. In the presence of these kinds of feedback loops, it is easy to see how such a divergence can happen both quickly and in a hidden way, until it is too late.
#
# In a self-reinforcing system like this, we should probably expect these kinds of feedback loops to be the norm, not the exception. Therefore, you should assume you will see them, plan for them, and decide up front how you will deal with them. Try to think about all the ways feedback loops could be represented in your system, and how you might identify them in your data. In the end, this comes back to our original advice on how to avoid disaster when deploying any kind of machine learning system: it is all about making sure there are humans in the loop, careful monitoring, and gradual, thoughtful rollout.
#
# Our dot product model works quite well, and it is the basis of many successful real-world recommendation systems. This approach to collaborative filtering is known as probabilistic matrix factorization (PMF). Another approach, which generally works equally well given the same data, is deep learning.
# ## Deep learning for collaborative filtering
# To turn our architecture into a deep learning model, the first step is to take the user and movie embeddings and concatenate those activations together. This gives us a matrix that we can then pass through linear layers and nonlinearities in the usual way.
#
# Since we will be concatenating the embedding matrices rather than taking their dot product, the two embedding matrices can have different sizes (that is, different numbers of latent factors).
#
# fastai has a **get_emb_sz** function that returns recommended sizes for the embedding matrices of your data, based on a heuristic that fast.ai has found tends to work well in practice:
embs = get_emb_sz(dls)
embs
class CollabNN(Module):
def __init__(self, user_sz, item_sz, y_range=(0,5.5), n_act=100):
self.user_factors = Embedding(*user_sz)
self.item_factors = Embedding(*item_sz)
self.layers = nn.Sequential(
nn.Linear(user_sz[1]+item_sz[1], n_act),
nn.ReLU(),
nn.Linear(n_act, 1))
self.y_range = y_range
def forward(self, x):
embs = self.user_factors(x[:,0]),self.item_factors(x[:,1])
x = self.layers(torch.cat(embs, dim=1))
return sigmoid_range(x, *self.y_range)
model = CollabNN(*embs)
learn = Learner(dls, model, loss_func=MSELossFlat())
learn.fit_one_cycle(5, 5e-3, wd=0.01)
# Although the results of EmbeddingNN are a bit worse than the dot product approach (which shows the power of carefully choosing an architecture suited to a domain), it lets us do something very important: we can now directly incorporate other information about users and movies, for example duration and other information that may be relevant to the recommendation.
learn = collab_learner(dls, use_nn=True, y_range=(0, 5.5), layers=[100,50])
learn.fit_one_cycle(5, 5e-3, wd=0.1)
learn.model
@delegates(TabularModel)
class EmbeddingNN(TabularModel):
def __init__(self, emb_szs, layers, **kwargs):
super().__init__(emb_szs, layers=layers, n_cont=0, out_sz=1, **kwargs)
# ### Note : kwargs and delegates
# EmbeddingNN includes \**kwargs as a parameter to __init__. In Python, \**kwargs in a parameter list means "put any additional named arguments into a dict called kwargs". And \**kwargs in an argument list means "insert all the key/value pairs of the kwargs dict as named arguments here".
#
# This approach is used in many popular libraries, such as matplotlib, in which the main plotting function simply has the signature (*args, \**kwargs). The documentation says "The kwargs are Line2D properties" and then lists those properties.
#
# We use \**kwargs in EmbeddingNN to avoid having to write all the arguments of TabularModel a second time, and to keep them in sync. However, this makes our API quite hard to work with, because now Jupyter Notebook doesn't know which parameters are available, so things like parameter name completion won't work.
#
# fastai solves this by providing a special @delegates decorator, which automatically changes the signature of the class or function (EmbeddingNN in this case) to insert all the named arguments into the signature.
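# A minimal, self-contained illustration of the forwarding pattern described above (with made-up classes, not the fastai ones):

```python
import inspect

class Base:
    def __init__(self, a, b=2, c=3):
        self.a, self.b, self.c = a, b, c

class Wrapper(Base):
    # **kwargs in the parameter list collects extra named arguments into a dict;
    # **kwargs in the call expands that dict back into named arguments for Base
    def __init__(self, extra, **kwargs):
        super().__init__(**kwargs)
        self.extra = extra

w = Wrapper(extra='x', a=1, c=30)
print(w.a, w.b, w.c, w.extra)

# The downside mentioned above: the visible signature only shows **kwargs,
# so tooling cannot see that a, b, and c are accepted
print(inspect.signature(Wrapper.__init__))
```

This is exactly the "kept in sync but opaque" trade-off that @delegates is designed to fix by rewriting the visible signature.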
| notebooks/06b_collab.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Sample : Training on MNIST
# !pip install --upgrade pip
# !pip install pillow
import tensorflow as tf
import numpy as np
import matplotlib.pyplot as plt
from PIL import Image
print('tensorflow ver.', tf.__version__)
# Download the MNIST training and test data
#
# | | |
# |--|--|
# |Image size|28x28|
# |Value range|0-255 (uint8)|
# |Sample count|60,000 training images|
# | |10,000 test images|
mnist = tf.keras.datasets.mnist
(x_train, y_train),(x_test, y_test) = mnist.load_data()
print('class      :', type(x_train))
print('data shape :', x_train.shape)
print('dtype      :', x_train.dtype)
print('value range:', x_train.min(), '-', x_train.max())
print('label range:', y_train.min(), '-', y_train.max())
u, count = np.unique(y_train, return_counts=True)
print('labels     :', u)
print('label count:', count)
Image.fromarray(x_train[0]).resize((112,112))
# Normalize the data to the 0-1 range, then split off a validation set
print(x_train.shape)
x_train, x_test = x_train / 255.0, x_test / 255.0
x_train, x_valid = np.split(x_train, [55000])
y_train, y_valid = np.split(y_train, [55000])
print(x_valid.shape)
print(x_train.shape)
# # Building the model
#
# - Flatten : flattens the input into a one-dimensional array
# - Dense : fully connected layer, with a specified activation function
# - Dropout : specifies the dropout rate; here, 20% of the neurons are randomly dropped out during training.
# +
model = tf.keras.models.Sequential([
tf.keras.layers.Flatten(input_shape=(28,28)),
tf.keras.layers.Dense(128, activation='relu'),
tf.keras.layers.Dropout(0.2),
tf.keras.layers.Dense(10, activation='softmax')
], name='tf_tutorial_model')
model.summary()
# -
# Training a neural network is a kind of optimization: the goal is to find the parameters that minimize the value of the loss function. So we need to specify the algorithm used to search for the optimal parameters, the loss function, and the evaluation metrics.
#
# - `optimizer` : the optimization algorithm
# - `loss` : the loss function (commonly sum-of-squares error, cross-entropy error, etc.)
#
# | | |
# |:--|:--|
# |Sum-of-squares error|$ E = \frac{1}{2}\sum_{k} (y_k-t_k)^2$|
# |Cross-entropy error|$ E=-\sum_{k}t_k \log y_k $|
#
# - `metrics` : the evaluation metrics. If you specify "accuracy", the appropriate metric such as "categorical_accuracy" is chosen automatically based on the loss function and the output tensor. [3]
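# The two error functions in the table can be computed directly. The output vector below is a made-up example, not an actual network output:

```python
import numpy as np

# Hypothetical softmax output of a network for one digit image
y = np.array([0.1, 0.05, 0.6, 0.0, 0.05, 0.1, 0.0, 0.1, 0.0, 0.0])
t = np.zeros(10)
t[2] = 1.0                           # one-hot label: the correct class is "2"

sse = 0.5 * ((y - t) ** 2).sum()     # sum-of-squares error
cee = -(t * np.log(y + 1e-7)).sum()  # cross-entropy error (eps avoids log(0))
print(round(sse, 4), round(cee, 4))
```

With a one-hot target, the cross-entropy reduces to -log of the probability assigned to the correct class, which is why it is the usual choice for classification.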
model.compile(optimizer='adam',
loss='sparse_categorical_crossentropy',
metrics=['accuracy'])
# # Training and evaluating the model
#
# - `model.fit` : train the model
# - `model.evaluate` : evaluate the model
#
# For details on the arguments, see:
# - [Sequential model - Keras Documentation](https://keras.io/ja/models/sequential/)
# +
fit = model.fit(x_train, y_train, epochs=5, verbose=1, validation_data=(x_valid, y_valid))
model.evaluate(x_test, y_test, verbose=2)
# -
print(fit.history)
# +
fig, (axL, axR) = plt.subplots(ncols=2, figsize=(10,4))
axR.plot(fit.history['accuracy'],label="acc for training")
axR.plot(fit.history['val_accuracy'],label="acc for validation")
axR.set_title('model accuracy')
axR.set_xlabel('epoch')
axR.set_ylabel('accuracy')
axR.legend(loc='upper right')
axR.set_ylim([0.95, 1])
axL.plot(fit.history['loss'],label="loss for training")
axL.plot(fit.history['val_loss'],label="loss for validation")
axL.set_title('model loss')
axL.set_xlabel('epoch')
axL.set_ylabel('loss')
axL.legend(loc='upper right')
axL.set_ylim([0, 0.3])
# -
# # References
#
# 1. [TensorFlow 2.0 quickstart for beginners | TensorFlow Core](https://www.tensorflow.org/tutorials/quickstart/beginner?hl=ja)
# 1. [Basic usage of TensorFlow and Keras (building, training, evaluating, and predicting with models) | note.nkmk.me](https://note.nkmk.me/python-tensorflow-keras-basics/)
# 1. [Introduction to Keras (4): Keras metrics - Qiita](https://qiita.com/FukuharaYohei/items/f7df70b984a4c7a53d58)
# 1. [Tweaking hyperparameters on MNIST and inspecting the loss/accuracy plots - Qiita](https://qiita.com/hiroyuki827/items/213146d551a6e2227810)
| src/hello_docker_tensorflow.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Sampling, distributions, and means
#
# Almost everything so far has assumed "a sample drawn at random from a normally distributed population", but in reality there are various things to watch out for. This section summarizes the points to keep in mind when doing practical statistics.
# ## Sampling methods
#
# ### Populations
#
# The **general universe** is the population we actually want to study; the **working universe** is the population from which samples are actually drawn.
# There is no guarantee that the two coincide.
# Even if we want to survey the monkeys on Yakushima, some of them live in places that are hard to reach.
#
# If the working universe does not correspond correctly to the general universe, samples drawn from it will naturally be biased.
#
# ### How samples are drawn
#
# What matters most in sampling is that it be **random**. Random sampling must be both **equal** and **independent**.
# - **Equal** : every sampling unit in the population is drawn with the same probability
# - **Independent** : the probability of a unit being drawn does not depend on any other sampling unit
#
# So a survey that "calls randomly chosen phone numbers, and never calls a number again once an answer is obtained" is not random sampling,
# even though the phone numbers themselves are chosen at random, because
# - people differ in how likely they are to be free to answer the phone at the calling time = not equal
# - once one family member answers, the probability that another member of the same household is selected becomes zero = not independent
#
# Moreover, even when random numbers are used for the selection, a poor random number generator can still introduce bias. Results drawn by rolling a loaded die may well be skewed.
#
# In practice, perfectly random sampling is difficult, so the following methods are often used instead.
# - **Convenience sampling (unconscious sampling)** : choose whatever is close at hand. "Closeness," however, may not be equal: even when "randomly" approaching people in front of a station, we are likely to unconsciously pick those who look easy to approach.
# - **Systematic sampling** : build a list and select from it by a fixed rule.
# - **Stratified sampling** : divide the population into (meaningful) groups and sample randomly from each subgroup. Used when every subgroup should have equal influence.
# - **Cluster sampling** : draw a few subgroups and take every unit within them.
# - **Purposive sampling** : choose samples so that they fit the purpose of the study.
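# The sampling methods above can be illustrated with a small pandas sketch (added here as an example; the population data is made up):

```python
import numpy as np
import pandas as pd

# Hypothetical population: 1000 units with a two-level group label
rng = np.random.default_rng(0)
population = pd.DataFrame({
    "value": rng.normal(size=1000),
    "group": rng.choice(["A", "B"], size=1000, p=[0.7, 0.3]),
})

# Systematic sampling: every k-th unit from an ordered list
k = 10
systematic = population.iloc[::k]

# Stratified sampling: 20 units drawn at random from each group
stratified = population.groupby("group", group_keys=False).apply(
    lambda g: g.sample(n=20, random_state=0)
)

print(len(systematic), len(stratified))
```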
# ## Correlation and causation
#
# When some kind of test is run on samples to discuss correlation or causation, the following factors can affect the results.
# They must be carefully ruled out.
#
# - **History** : external factors unrelated to the test have a non-negligible effect. Rule this out with a control group, or by creating an environment that shuts out external factors.
# - **Maturation** : results change because subjects get used to the test. Rule this out by setting up a control group and comparing it with the experimental group.
# - **Selection** : the sampling or the group assignment may not have been properly random.
# - **Testing** : a pretest may let subjects guess the content or purpose of the test, which can influence the results. Giving the same test to both the experimental and control groups equalizes the influence.
# - **Instrumentation** : the way scores are assigned may be flawed.
# - **Hawthorne effect** : results may change because subjects are aware of being test subjects. Rule this out with a double-blind design.
#
# With this in mind, experimental designs for establishing causation come in the following stages.
# - **Non-experimental design** : run one test on one group. Correlation can be seen, but not causation.
# - **Pre-experimental design** : run two tests, a pretest and a posttest, on one group.
# - **Quasi-experimental design** : set up an experimental group and a control group. Assignment is not random, however, so the groups may be biased.
# - **Experimental design** : set up an experimental group and a control group with random assignment. If there is a significant difference between the groups, it can be attributed to the treatment.
# ## Classifying scores
#
# Sample scores can themselves be classified in several ways, and it is necessary to know which kind of score has which properties.
#
# - **Nominal** : just a label. Neither the magnitude nor the order of the numbers means anything, as in "answer 0 for yes, 1 for no."
# - **Ordinal** : the numbers express an order. The samples can be ranked, but the distances between them are unknown: whether the gap between 1st and 2nd place is one second or five minutes, the ranking is the same.
# - **Interval** : the numbers correspond to distances between scores. We know the order and how far apart the scores are, but ratios cannot be used: 20 °C is 10 degrees warmer than 10 °C, but we do not say it is twice as warm.
# - **Ratio** : has a true "zero," so ratios can be used.
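# As a small added illustration (not part of the original notes), an ordered pandas Categorical models an ordinal scale: values can be compared and ranked, but the distances between them carry no meaning.

```python
import pandas as pd

# Hypothetical medal results on an ordinal scale
ranks = pd.Series(pd.Categorical(
    ["gold", "bronze", "silver", "gold"],
    categories=["bronze", "silver", "gold"],
    ordered=True,
))

# The ordering is defined, so comparisons and ranking work...
print((ranks > "silver").tolist())
# ...but "gold minus silver" is undefined: only the order is meaningful
```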
# ## Averages
#
# As long as a normal distribution is assumed, "the average" is a single well-defined value. Otherwise there are three kinds of average, and we must pay close attention to which one best reflects the central tendency of the data.
#
# - **Mean** : the sum of all values divided by the number of samples
# - **Median** : the value that sits exactly in the middle when all samples are ranked; that is, 50% of the samples lie above it and 50% below
# - **Mode** : the most frequent value (for a continuous quantity, a local maximum of the probability density function). There can be more than one mode.
#
# ### Python Tips: NumPy and Pandas
#
# Pandas handles non-numeric data flexibly, which makes it convenient for working with times and strings.
# The example below looks at the distribution of finishing times on a certain course: the "minutes:seconds" strings are split apart, converted to seconds, and plotted.
#
# The mean, median, and mode can be obtained with `mean()`, `median()`, and `mode()` respectively.
# +
import pandas
import datetime
from matplotlib import pyplot
# %matplotlib inline
record = pandas.read_csv("./dat/kawachi-huuketsu.txt", sep='\t', dtype='str')
def to_sec(time_str):
tmp = time_str.split(":")
return int(tmp[0]) * 60 + int(tmp[1])
pyplot.hist(record['time'].map(to_sec), bins=30)
pyplot.show()
print(record['time'].map(to_sec).mean())
print(record['time'].map(to_sec).median())
print(record['time'].map(to_sec).mode()[0])
# -
| 06_Sapmling.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
from utils._utils import *
import pandas as pd
import numpy as np
root = get_root()
truth = pd.read_csv(open(root/'data-truth'/'zoltar-truth.csv', 'r'), parse_dates=['timezero'])
truth[(truth['target'] == '20 wk ahead inc death') & (truth['value'] == 0.0)].sort_values('timezero')
truthinc = pd.read_csv(open(root/'data-truth'/'truth-Incident Deaths.csv', 'r'))
pd.to_datetime(truthinc[truthinc['date'] == '2020-09-03']['date']).iloc[0].month_name()
truth
| code/truth-zero-check.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3.9.5 64-bit
# language: python
# name: python3
# ---
# requirements
# !pip install keras
# !pip install pandas
# !pip install scipy
# !pip install tensorflow
# <h2>Titanic</h2>
# The task is a binary classification one. 0 -> died, 1 -> survived.
# +
# imports
import os
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
import keras
import csv
import tensorflow as tf
from tensorflow import keras
from keras import layers, Sequential
from keras.layers import Input, Add, Dense, Activation, ZeroPadding1D, Flatten, Conv1D, MaxPooling1D, GlobalMaxPooling1D, Dropout
from keras.models import Model
from sklearn.model_selection import train_test_split
# -
# Create a dataframe from the training data. Name and PassengerId carry no useful signal as raw features, so they will be dropped.
# +
working_directory = os.getcwd()
test_data_dir = os.path.join(working_directory,'test.csv')
training_data_dir = os.path.join(working_directory,'train.csv')
def load_csv(file_dir):
return pd.read_csv(file_dir)
train_dataframe = load_csv(training_data_dir)
print(train_dataframe.columns)
# -
train_dataframe = train_dataframe.drop(columns=['Name','PassengerId'])
# +
# count how many distinct values there are for each column
#train_dataframe['Sex'].value_counts()
#train_dataframe['Ticket'].nunique()
train_dataframe['Sex'].nunique()
# -
# Some values are missing -> 1st option delete all rows where a value is missing (more advanced options later)
# +
train_dataframe.isna().any(axis=1)#train_dataframe.isna().any(axis=1) boolean indexing
train_dataframe = train_dataframe.dropna(axis = 0)
#train_dataframe.drop(axis = 0,index = (train_dataframe.isna().any(axis=1)[0]))
train_dataframe['Cabin'].nunique() #133 unique cabins
# +
def convert_to_onehot(cabin_str):
temp = np.zeros(133,dtype=np.float32)
unique_arr = train_dataframe['Cabin'].unique()
temp[np.where(unique_arr == cabin_str)] = 1.0
return temp.flatten()
train_dataframe['Cabin'] = train_dataframe['Cabin'].apply(convert_to_onehot)
# -
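# A simpler alternative sketch (not the notebook's approach, shown here on a tiny made-up frame):
# pandas can build equivalent one-hot columns with `get_dummies`, avoiding the per-row `unique()` lookup.

```python
import pandas as pd

# Hypothetical frame with a categorical 'Cabin' column
df = pd.DataFrame({"Cabin": ["C85", "C123", "E46", "C85"]})

# One column per distinct cabin, 0/1 encoded
one_hot = pd.get_dummies(df["Cabin"], prefix="Cabin")
print(one_hot.columns.tolist())
```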
train_df, validation_df = train_test_split(train_dataframe, test_size=0.2)
# +
# TRAIN ARRAYS
x_ = train_df[['Pclass','Age','SibSp','Parch','Fare']].values
x_cabins = np.array(np.stack(train_df['Cabin'].values))
x_train = np.concatenate((x_,x_cabins),axis=1)
y_train = train_df[['Survived']].values
# VALIDATION ARRAYS
x_ = validation_df[['Pclass','Age','SibSp','Parch','Fare']].values
x_cabins = np.array(np.stack(validation_df['Cabin'].values))
x_valid = np.concatenate((x_,x_cabins),axis=1)
y_valid = validation_df[['Survived']].values
# -
print(x_train[0])
# +
def neuralNet():
return keras.Sequential([
keras.layers.Input(shape = (138,)),
keras.layers.Dense(64, activation=tf.nn.relu),
keras.layers.Dropout(rate=0.2),
keras.layers.Dense(32, activation=tf.nn.relu),
keras.layers.Dropout(rate=0.2),
keras.layers.Dense(32, activation=tf.nn.relu),
keras.layers.Dense(1,activation=tf.nn.sigmoid)
])
nn = neuralNet()
nn.summary()
learning_rate = 0.005
nn.compile(
optimizer = keras.optimizers.Adam(learning_rate),
loss = 'binary_crossentropy',
metrics = ['accuracy']
)
history = nn.fit(
x = x_train,
y = y_train,
batch_size = 8,
validation_data=(x_valid, y_valid),
epochs = 40
).history
#validation_data=(X_val_filt, y_val),
# -
| Titanic/titanic.ipynb |
// -*- coding: utf-8 -*-
// ---
// jupyter:
// jupytext:
// text_representation:
// extension: .cpp
// format_name: light
// format_version: '1.5'
// jupytext_version: 1.14.4
// kernelspec:
// display_name: C++17
// language: C++
// name: cling-cpp17
// ---
// In this notebook I use cling, a C++ kernel for Jupyter Notebook. Jupyter Notebook is usually used for interactive experiments in Python or R, but CERN made it possible to write C++ here as well.
// I installed it following this guide: http://shuvomoy.github.io/blog/programming/2016/08/04/Cpp-kernel-for-Jupyter.html.
//
// Here you can do things that are impossible in an ordinary C++ program: have no main and execute code outside of functions. That is convenient for demonstration purposes.
// # Random number generation
//
// Sometimes an algorithm wants to use random values in some way. For example, you may want to shuffle an array randomly, generate a random test case, or generate a random password.
//
// The problem is that a computer is a machine for executing deterministic algorithms. The processor executes instructions according to a strict specification and is normally unable to take a random number from somewhere and put it in memory. Even if you try to leave memory or a register uninitialized and then read from it, what you read will most likely be related to whatever was once written there. In the usual model of a computer there is simply no way to conjure an honestly random value without using some input device.
//
// In fact, a PC has many input devices. The Linux kernel can generate honestly random values by analyzing the timing of keyboard and mouse usage and of processor interrupts. One could also use the noise in a microphone or a webcam. The problem with such honest generation is that this data is refreshed very rarely compared to the frequency at which the processor executes instructions. That is why the kernel maintains a so-called entropy pool. It gradually accumulates entropy, i.e. the "honesty of the randomness," using everything it can, and spends it whenever processes ask the kernel for genuinely random values. If the entropy runs out, a process may hang waiting for the kernel's reply. The OpenSSL cryptography library can switch to less honest generation in this situation, which is less secure. This is an entirely possible situation on servers running in data centers with a stable climate and no keyboard, mouse, microphone, or webcam attached.
//
// Perhaps you have at some point prepared an RSA key with PuTTY in order to use ssh or git on a remote server without a password, and PuTTY asked you to move the mouse around for quite a while to gather entropy. That was exactly this.
// In short, obtaining many honestly random numbers from the environment is hard and potentially slow. So the practical approach is to generate a *pseudo*random sequence with a special algorithm. *Pseudo*randomness means the algorithm is actually deterministic: if it is initialized the same way, the generated sequence will be the same. Yet with good algorithms the sequence looks, and passes all sorts of tests, almost like a truly random one. Only up to a point, of course: if the algorithm uses a finite amount of memory to store its position in the sequence, sooner or later it arrives at a state it has already been in, and from then on the sequence becomes periodic, which never happens with truly random sequences.
// ## minstd
//
// Until the mid-nineties a fairly simple algorithm for generating pseudorandom numbers was especially popular. It even has an established name: minstd. It looks as follows.
//
// Let $\kappa_{i-1}$ be the number generated last time. If nothing has been generated yet, it is some number that must be taken from an external source, usually called the seed of the generator.
//
// Then the next number is found by the formula $\kappa_{i} = g \cdot \kappa_{i-1} \mod M$, where $g$ and $M$ are fixed, carefully chosen constants.
//
// That's all. https://en.wikipedia.org/wiki/Lehmer_random_number_generator
#include <iostream>
// +
class MyMinstd {
public:
// M and G are static members. This means they are stored not in every instance of the class but
// in a single place in the program's memory, like global variables.
// At the same time they live in the scope of the class:
// - public static members are accessible from outside only with an explicit class qualifier, e.g.
//   MyMinstd::G;
// - private static members are inaccessible from outside, just like non-static ones.
//
// Apostrophes may be written inside numeric literals for readability. The compiler ignores them.
static const long long M = 2'147'483'647;
static const long long G = 48'271;
explicit MyMinstd(int seed = 1)
: state(seed)
{}
int operator()() {
state = (state * G) % M;
return state;
}
private:
int state;
};
// -
MyMinstd myGenerator11(11);
std::cout << myGenerator11() << '\n';
std::cout << myGenerator11() << '\n';
std::cout << myGenerator11() << '\n';
std::cout << myGenerator11() << '\n';
template<class Gen>
void show(Gen gen, int n) {
for (int i = 0; i < n; ++i) {
std::cout << gen() % 100 << ' ';
}
std::cout << '\n';
}
show(MyMinstd(42), 20);
show(MyMinstd(90320905), 10);
// The C++ standard library has a minstd implementation:
#include <random>
show(std::minstd_rand(42), 20);
// The quality of algorithms of this kind has been widely criticized, [some of them](https://en.wikipedia.org/wiki/RANDU) especially, but they are very simple and fast.
// ## `rand()`
// The algorithm that the rand() function, inherited by C++ from C, must implement is not fixed by any standard. The glibc implementation usually used on Linux apparently does something similar to minstd: [link](https://www.mathstat.dal.ca/~selinger/random/).
// Using this function in C++ is strongly discouraged, because better alternatives exist.
//
// * Since the algorithm is not fixed, the function could literally return the same [number 4](https://xkcd.com/221/) every time. According to [cppreference](http://ru.cppreference.com/w/cpp/numeric/random/rand), *there are no guarantees about the cryptographic quality of the generated random numbers; in the past, some implementations of rand() had serious defects in the distribution of the numbers (for example, the low-order bits simply alternated 1-0-1-0-... between calls)*. Especially if you [take the result modulo something](https://stackoverflow.com/questions/14678957/libc-random-number-generator-flawed).
//
// * In addition, it has global state, so only one pseudorandom sequence can be generated in your whole program at a time. You cannot generate some numbers in one part of the program without it affecting another, completely unrelated part, unless, of course, you change the seed.
//
// * The function may not work correctly at all if your program has several threads of execution.
// ## What to use instead
// The algorithm in fashion nowadays is called the Mersenne Twister. It is high quality and has a very long period, up to $2^{19937} − 1$. The Python standard library has [used it](https://docs.python.org/2.7/library/random.html) by default since version 2.3.
// Using it in C++ is as easy as using std::minstd_rand or MyMinstd:
#include <random>
// +
std::mt19937 twister(42);
std::cout << twister() << '\n';
std::cout << twister() << '\n';
show(twister, 20);
// -
// std::mt19937 generates 32 bits at a time. If you want 64 bits at once, you can use another variation of the same class:
std::mt19937_64 twister64(42);
twister64()
// Keep in mind, though, that its state is rather large compared to other algorithms:
sizeof (std::mt19937)
sizeof (std::minstd_rand)
// That is in bytes. So it is like with a vector: if you pass a generator into a function and do not need a copy, pass it by reference.
// ## Distributions
// Most likely you need not just 32 or 64 random bits but a number from some distribution. Say, you need to pick an element index in an array uniformly, or pick a real number uniformly between -1 and 1, or draw a normally distributed real number around 0, and so on. All of this can be done by taking 32 or 64 random bits and transforming them with some formulas, but getting it wrong is easier than getting it right.
//
// That is why the standard library has classes modeling many one-dimensional distributions. They are classes precisely because distributions usually have parameters: the bounds of the interval from which numbers should be drawn uniformly, or the mean and variance of a normal distribution. These parameters are remembered, and then, by passing some random number generator into operator(), you can obtain numbers from the distribution.
std::uniform_int_distribution<int> uniformInt(0, 9);
std::cout << uniformInt(twister) << '\n';
std::cout << uniformInt(twister) << '\n';
std::cout << uniformInt(twister) << '\n';
std::cout << uniformInt(twister) << '\n';
#include <vector>
template<class Distr, class Gen>
void showDistr(Distr distr, Gen& gen, int n) {
std::vector<int> count(n, 0);
for (int i = 0; i < 100000; ++i) {
int randValue = distr(gen);
++count[randValue];
}
for (auto c : count) {
std::cout << c << ' ';
}
std::cout << '\n';
for (int i = 0; i < n; ++i) {
std::cout << i << ": ";
for (int j = 0; j < count[i] / 150; ++j) {
std::cout << '*';
}
std::cout << '\n';
}
}
showDistr(std::uniform_int_distribution<int>(0, 19), twister, 20);
showDistr(std::binomial_distribution<int>(49, 0.420), twister, 50);
showDistr(std::bernoulli_distribution(0.228), twister, 2);
// std::uniform_int_distribution might be implemented something like this:
// +
template<class T>
class MyUniform {
public:
explicit MyUniform(T lower, T upper)
: lower(lower)
, upper(upper)
{}
template<class Gen>
T operator()(Gen& gen) {
return lower + gen() % (upper - lower + 1);
}
private:
T lower, upper;
};
// -
showDistr(MyUniform<int>(3, 22), twister, 25);
// ## True randomness
// Besides pseudorandom number generators, C++ has a class with a similar interface that tries, when possible, to obtain a genuinely random number from the system:
std::random_device trueRandom;
trueRandom()
trueRandom()
showDistr(MyUniform(0, 19), trueRandom, 20);
// It should be used sparingly (not the way I just did, rapidly drawing $10^5$ numbers), so that the entropy pool is not exhausted. In the examples on cppreference, random_device is used to seed mt19937.
// ## Further reading
//
// - https://en.wikipedia.org/wiki/Entropy_%28computing%29
//
// - https://www.atlasobscura.com/places/encryption-lava-lamps : Cloudflare has a *lava lamp farm* for obtaining true randomness.
//
// - http://en.cppreference.com/w/cpp/numeric/random
//
// - https://ru.wikipedia.org/wiki/Генератор_псевдослучайных_чисел
//
// - https://ru.wikipedia.org/wiki/Тасование_Фишера_—_Йетса, http://ru.cppreference.com/w/cpp/algorithm/random_shuffle : uniformly random shuffling of an array in O(n). Note that std::random_shuffle is deprecated and has already been removed from the latest C++ standard because it uses std::rand(); use std::shuffle instead.
//
// - https://ru.wikipedia.org/wiki/Reservoir_sampling : choosing a random combination of fixed size k from a stream of length n in O(n) time and only O(k) memory.
//
// - https://ru.wikipedia.org/wiki/Преобразование_Бокса_—_Мюллера : how to obtain a normal distribution from a uniform one.
| cpp-algo/random.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## IMDb : Internet Movie Database reviews
#
# The IMDb task is a sentiment classification task. It consists of movie reviews collected from IMDB. The training and test set sizes are both 25,000. In addition there is a set of 50,000 unlabeled reviews.
#
# See [website](http://ai.stanford.edu/~amaas/data/sentiment/) and [paper](http://ai.stanford.edu/~amaas/papers/wvSent_acl2011.pdf) for more info.
# +
import numpy as np
import pandas as pd
import os
import sys
import csv
import re
from sklearn import metrics
from sklearn.metrics import classification_report
from sklearn.utils import shuffle
sys.path.append("../")
from bert_sklearn import BertClassifier
from bert_sklearn import load_model
DATADIR = "./aclImdb"
# + language="bash"
# wget http://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz
# tar -xf aclImdb_v1.tar.gz
# rm aclImdb_v1.tar.gz
# +
"""
IMDB train data size: 25000
IMDB unsup data size: 50000
IMDB test data size: 25000
"""
def clean(text):
text = re.sub(r'<.*?>', '', text)
text = re.sub(r"\"", "", text)
return text
def slurp(filename):
with open(filename) as f:
data = clean(f.read())
return data
def get_imdb_df(datadir,val=None):
data = [(slurp(datadir + filename),val) for filename in os.listdir(datadir)]
return pd.DataFrame(data,columns=['text','label'])
def get_imdb_data(train_dir = DATADIR + "/train",test_dir = DATADIR + "/test",random_state=42 ):
label_list = [0,1]
pos = get_imdb_df(train_dir + "/pos/",1)
neg = get_imdb_df(train_dir + "/neg/",0)
train = shuffle(pd.concat([pos, neg]),random_state=random_state)
print("IMDB train data size: %d "%(len(train)))
unsup = get_imdb_df(train_dir + "/unsup/")
print("IMDB unsup data size: %d "%(len(unsup)))
pos = get_imdb_df(test_dir + "/pos/",1)
neg = get_imdb_df(test_dir + "/neg/",0)
test = shuffle(pd.concat([pos, neg]),random_state=random_state)
print("IMDB test data size: %d "%(len(test)))
return train, test, label_list, unsup
train, test, label_list, unsup = get_imdb_data()
# -
train.head()
train[:1].values
# As you can see, each review is much longer than a sentence or two. The Google AI BERT models were trained on sequences of max length 512. Let's look at the performance for max_seq_length equal to 128, 256, and 512.
#
# ### max_seq_length = 128
# +
# %%time
train, test, label_list, unsup = get_imdb_data()
X_train = train['text']
y_train = train['label']
X_test = test['text']
y_test = test['label']
model = BertClassifier()
model.max_seq_length = 128
model.learning_rate = 2e-05
model.epochs = 4
print(model)
model.fit(X_train, y_train)
accy = model.score(X_test, y_test)
# -
# ### max_seq_length = 256
# +
# %%time
train, test, label_list, unsup = get_imdb_data()
X_train = train['text']
y_train = train['label']
X_test = test['text']
y_test = test['label']
model = BertClassifier()
model.max_seq_length = 256
model.train_batch_size = 32
model.learning_rate = 2e-05
model.epochs = 4
print(model)
model.fit(X_train, y_train)
accy = model.score(X_test, y_test)
# -
# ### max_seq_length = 512
# +
# %%time
train, test, label_list, unsup = get_imdb_data()
X_train = train['text']
y_train = train['label']
X_test = test['text']
y_test = test['label']
model = BertClassifier()
model.max_seq_length = 512
# max_seq_length=512 will use a lot more GPU mem, so I am turning down batch size
# and adding gradient accumulation steps
model.train_batch_size = 16
model.gradient_accumulation_steps = 4
model.learning_rate = 2e-05
model.epochs = 4
print(model)
model.fit(X_train, y_train)
accy = model.score(X_test, y_test)
| other_examples/IMDb.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: shopee_data_science
# language: python
# name: shopee_data_science
# ---
# %cd /home/adityasidharta/git/shopee_data_science
# %env PROJECT_PATH /home/adityasidharta/git/shopee_data_science
from sklearn.externals import joblib
import pandas as pd
from utils.envs import *
from model.text.common.prediction import *
from utils.common import get_datetime
beauty_result_path = '/home/adityasidharta/git/shopee_data_science/output/result/lgb-3/beauty_result_dict.pkl'
fashion_result_path = '/home/adityasidharta/git/shopee_data_science/output/result/lgb-3/fashion_result_dict.pkl'
mobile_result_path = '/home/adityasidharta/git/shopee_data_science/output/result/lgb-3/mobile_result_dict.pkl'
beauty_result_dict = joblib.load(beauty_result_path)
fashion_result_dict = joblib.load(fashion_result_path)
mobile_result_dict = joblib.load(mobile_result_path)
beauty_test_df = pd.read_csv(beauty_test_repo)
fashion_test_df = pd.read_csv(fashion_test_repo)
mobile_test_df = pd.read_csv(mobile_test_repo)
prediction_df = build_prediction_list(beauty_test_df, fashion_test_df, mobile_test_df)
# +
#beauty_prediction = predict_single(beauty_test_df, beauty_result_dict)
#fashion_prediction = predict_single(fashion_test_df, fashion_result_dict)
#mobile_prediction = predict_single(mobile_test_df, mobile_result_dict)
# -
beauty_prediction = predict_double(beauty_test_df, beauty_result_dict)
fashion_prediction = predict_double(fashion_test_df, fashion_result_dict)
mobile_prediction = predict_double(mobile_test_df, mobile_result_dict)
# +
#beauty_prediction = predict_threshold(beauty_test_df, beauty_result_dict, 0.8)
#fashion_prediction = predict_threshold(fashion_test_df, fashion_result_dict, 0.8)
#mobile_prediction = predict_threshold(mobile_test_df, mobile_result_dict, 0.8)
# -
final_result_df = concat_submission(beauty_prediction, fashion_prediction, mobile_prediction, beauty_test_df, fashion_test_df, mobile_test_df)
final_result_df
final_result_df.to_csv(os.path.join(result_path, 'result_{}.csv'.format(get_datetime())), index=False)
| notebooks/adi/submission.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Importing Libraries
# +
from IPython.display import Image
import pandas as pd
import numpy as np
# PDF text extraction
import tabula
import re
# Matching
from fuzzywuzzy import fuzz
from fuzzywuzzy import process
# Interactive visualisations
import os
import plotly.express as px
from plotly import graph_objects as go
import chart_studio
import chart_studio.plotly as py
# -
data_one = pd.read_csv('flats_berlin_1.csv')
data_two = pd.read_csv('flats_berlin_2.csv')
data = data_one.append(data_two, sort=True).drop_duplicates(subset='Url')
data.reset_index(inplace=True, drop=True)
data.info()
# # Data Check
# ### Check for duplicated rows
pd.set_option('display.max_columns', 500)
data[data.duplicated()]
# ### Check percentage of missing values
data.isna().mean()
data.info()
# # Data Transformation
#
# Showcasing different methods for manipulating and cleaning up the data:
#
# - str.replace
# - .replace with regex
# - loc selection and assignment
# - custom functions
# - lambda functions
data['Kaltmiete'] = pd.to_numeric(data['Kaltmiete'].str.replace('.','').str.replace(',','.').str.replace(' €',''))
data['Nebenkosten'] = data['Nebenkosten'].str.replace('.','').str.replace(',','.').str.replace(' €','')
data.loc[data['Nebenkosten'] == 'keine Angabe', 'Nebenkosten' ] = np.nan
data['Nebenkosten'] = pd.to_numeric(data['Nebenkosten'])
# +
def transform_heizkosten(x):
x = x.replace('.','').replace(',','.').replace(' €','')
if x == 'in Nebenkosten enthalten' or 'inkl' in x:
x = 0
elif x == 'nicht in Nebenkosten enthalten' or x == 'keine Angabe':
x = np.nan
else:
x
return x
data['Heizkosten'] = pd.to_numeric(data['Heizkosten'].apply(transform_heizkosten))
# -
data['Gesamtmiete'] = data['Gesamtmiete'].replace({r" €":"", r"\.":""},regex=True).replace({r"\,":"."},regex=True)
data['Gesamtmiete'] = pd.to_numeric(data['Gesamtmiete'].replace(to_replace=[r" \(zzgl Nebenkosten & Heizkosten\)",
r" \(zzgl Heizkosten\)",
r" \(zzgl Nebenkosten\)"],value='',regex=True))
data['Wohnfläche ca.'] = pd.to_numeric(
data['Wohnfläche ca.'].apply(lambda string: string.replace(' m²','').replace('.','').replace(',','.')))
data['Kaution o. Genossenschaftsanteile'] = data['Kaution o. Genossenschaftsanteile'].replace(to_replace= [r" EUR",r" \€", r"\.",r"\€"],
value= '',
regex=True).replace().replace('\,','.',regex=True)
# To be further transformed if necessary
data['Kaution o. Genossenschaftsanteile'].sort_values(ascending=False).head()
data['Baujahr'] = pd.to_numeric(data['Baujahr'].replace("unbekannt",''))
data['Modernisierung/ Sanierung'] = pd.to_numeric(data['Modernisierung/ Sanierung'].replace(r"zuletzt ",'',regex=True))
data["Energieverbrauchskennwert"] = pd.to_numeric(data["Energieverbrauchskennwert"].replace(r" kWh/\(m\²\*a\)","",regex=True).replace(r"\.","",regex=True).replace(r"\,",".",regex=True))
# Correcting small spelling mistake
data.rename(columns={'Adreese':'Adresse'},inplace=True)
# +
def split_address(x):
address_list = x.split(',')
length = len(address_list)
street = np.nan
zip_code = np.nan
kiez = np.nan
if length == 3:
street = address_list[0].strip()
zip_code = address_list[1].strip()
kiez = address_list[2].strip()
#return street, zip_code, kiez
if length == 2:
street = np.nan
zip_code = address_list[0].strip()
kiez = address_list[1].strip()
kiez = kiez.replace(' Die vollständige Adresse der Immobilie erhalten Sie vom Anbieter.','').strip()
return street, zip_code, kiez
address = pd.DataFrame.from_records(data['Adresse'].apply(split_address)).rename(columns={0:'Straße',1:'PLZ',2:'Kiez'})
data = data.join(address)
# -
def split_street_house_number(string):
street = np.nan
number = np.nan
if isinstance(string, str):
match = re.search('\d{1,5}\s?\w*',string)
if match:
street = string[:match.span()[0]].strip()
street = street.replace('Str.','Straße').replace('str.','straße')
number = string[match.span()[0]:match.span()[1]].strip().replace(' ','')
return pd.Series({'Straße':street, 'Housenumber':number})
data[['Straße','Hausnummer']] = data['Straße'].apply(split_street_house_number)
data['Hausnummer'] = data['Hausnummer'].str.upper()
data['Bezirk'] = data['Kiez'].str.split('(',expand=True)[0].str.strip(')').str.strip()
data['Ortsteil'] = data['Kiez'].str.split('(',expand=True)[1].str.strip(')').str.strip()
data.drop(columns = 'Kiez',inplace=True)
data['Straße'].isnull().mean()
# ## Feature Engineering
data['Kaltmiete pro m²'] = data['Kaltmiete']/data['Wohnfläche ca.']
# ## Location Rating
#
# PDF Source: https://www.stadtentwicklung.berlin.de/wohnen/mietspiegel/de/download/Strassenverzeichnis2019.pdf
# ### PDF Extraction with tabula (Part 1: Bezirk, Ortsteil)
# +
# Extracting street information from the left-hand tables
top = 120
left = 44
height = 666
width = 164
y1 = top
x1 = left
y2 = top + height
x2 = left + width
area=(y1,x1,y2,x2)
columns = [44,73,164]
abbrevations_one = tabula.read_pdf("Strassenverzeichnis2019.pdf",
pages='2',
area=area,
columns=columns,
pandas_options={'columns':['Unnamed: 0','Bezirk','Ortsteil','Ortsteil Abkürzung']},
guess=False)[0]
top = 120
left = 216
height = 666
width = 164
y1 = top
x1 = left
y2 = top + height
x2 = left + width
area=(y1,x1,y2,x2)
columns = [216,245,336]
abbrevations_two = tabula.read_pdf("Strassenverzeichnis2019.pdf",
pages='2',
area=area,
columns=columns,
pandas_options={'columns':['Unnamed: 0','Bezirk','Ortsteil','Ortsteil Abkürzung']},
guess=False)[0]
top = 120
left = 388
height = 666
width = 164
y1 = top
x1 = left
y2 = top + height
x2 = left + width
area=(y1,x1,y2,x2)
columns = [388,418,509]
abbrevations_three = tabula.read_pdf("Strassenverzeichnis2019.pdf",
pages='2',
area=area,
columns=columns,
pandas_options={'columns':['Unnamed: 0','Bezirk','Ortsteil','Ortsteil Abkürzung']},
guess=False)[0]
# +
abbreviations = abbrevations_one.append(abbrevations_two).append(abbrevations_three).drop(columns='Unnamed: 0')
#File was exported and quickly cleaned with GSheets
abbreviations.to_csv('to_be_cleaned_abreviations.csv')
# +
# After quick cleaning with GSheets reading in data again
abbreviations = pd.read_csv('cleaned_abreviations.csv')
abbreviations.drop_duplicates(subset='Ortsteil',keep='last',inplace=True)
abbreviations.rename(columns={'Bezirk':'Bezirk Abkürzung'},inplace=True)
# Adjust one Ortsteil with other naming convention
data.loc[data['Ortsteil'] == 'Treptow', 'Ortsteil' ] = 'Alt-Treptow'
# Add Ortsteil info to existing data
data = pd.merge(data, abbreviations, on='Ortsteil', how='left')
# Drop records where no Ortsteil could be matched
data.dropna(subset=['Ortsteil'],inplace=True)
# -
# ### PDF Extraction with tabula (Part 2: Location quality)
# +
# Extracting left tables
top = 69
left = 39
height = 723
width = 253
y1 = top
x1 = left
y2 = top + height
x2 = left + width
area=(y1,x1,y2,x2)
columns = [39,164.23,186.03,193.35,244.96,256.63,286.21]
dfs_1 = tabula.read_pdf("Strassenverzeichnis2019.pdf",
pages='3-262',
area=area,
columns=columns,
pandas_options={'columns':['Unnamed: 0',
'Straße',
'Bezirk Abkürzung',
'Gebietsstand',
'Hausnummer',
'Buchstabe',
'Wohnlage Einstufung',
'Lage im Stadtgebiet']},
guess=False)
# Extracting right tables
top = 69
left = 303
height = 723
width = 253
y1 = top
x1 = left
y2 = top + height
x2 = left + width
area=(y1,x1,y2,x2)
columns = [303,428, 450, 457,509, 520,550]
dfs_2 = tabula.read_pdf("Strassenverzeichnis2019.pdf",
pages='3-262',
area=area,
columns=columns,
pandas_options={'columns':['Unnamed: 0',
'Straße',
'Bezirk Abkürzung',
'Gebietsstand',
'Hausnummer',
'Buchstabe',
'Wohnlage Einstufung',
'Lage im Stadtgebiet']},
guess=False)
# Combine the left and right tables into one flat list of dataframes
dfs_1.extend(dfs_2)
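# The repeated area bookkeeping above (top/left/height/width turned into a
# (y1, x1, y2, x2) tuple) can be wrapped in a small helper. This is a sketch,
# not part of the original extraction script:

```python
def tabula_area(top, left, height, width):
    # tabula-py expects the table area as (y1, x1, y2, x2) in PDF points.
    return (top, left, top + height, left + width)

# The right-hand tables above, for example:
right_area = tabula_area(69, 303, 723, 253)
```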
# +
# Create new empty final dataframe
location_info = pd.DataFrame()
tables = dfs_1
for table in tables:
location_info = location_info.append(table)
location_info.drop(columns='Unnamed: 0',inplace=True)
# Filter out letters, e.g. A, B ...
location_info = location_info[location_info['Straße'].apply(lambda x: len(x)) != 1]
location_info['Ortsteil Abkürzung'] = location_info['Straße'].str.split('(',expand=True)[1].str.strip(')').str.strip()
location_info['Straße'] = location_info['Straße'].str.split('(',expand=True)[0].str.strip()
location_info = location_info.applymap(lambda x: x.strip() if isinstance(x, str) else x)
# +
def transform_housenumbers(df):
if isinstance(df['Hausnummer'], str):
# Split house numbers into start and end
start = df['Hausnummer'].split('-')[0].strip()
end = df['Hausnummer'].split('-')[1].strip()
# Check for letters e.g. 5A
start_letter = re.search('[A-Z]', start)
end_letter = re.search('[A-Z]', end)
# Find actual start and ending housenumber
        start = int(re.search(r'\d*', start).group())
        end = int(re.search(r'\d*', end).group())
# Continuing streets, e.g. 1, 2, 3, 4, 5
if df['Buchstabe'] == 'F':
house_number_list = list(range(start,end+1))
# Even or uneven streets, e.g. 1, 3, 5 or 2, 4, 6
        elif df['Buchstabe'] == 'G' or df['Buchstabe'] == 'U':
            house_number_list = list(range(start,end+1,2))
        else:
            # Unknown marker letter: keep the raw value to avoid a NameError below
            return df['Hausnummer']
        house_number_list = [str(e) for e in house_number_list]
if start_letter:
start_letter = start_letter.group().strip()
house_number_list[0] += start_letter
elif end_letter:
end_letter = end_letter.group().strip()
house_number_list[-1] += end_letter
return house_number_list
else:
return df['Hausnummer']
location_info['Hausnummer'] = location_info.apply(transform_housenumbers, axis=1)
location_info = location_info.reset_index().explode('Hausnummer')
# -
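# A minimal sketch (toy data, not the real street register) of how the
# list-valued house numbers above are expanded into one row per house number
# by explode:

```python
import pandas as pd

toy = pd.DataFrame({
    'Straße': ['Beispielstraße'],        # hypothetical street name
    'Hausnummer': [['1', '2', '3']],     # list produced by the range expansion
})
# explode repeats the other columns once per list element
expanded = toy.explode('Hausnummer')
```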
# ## Merging Location Rating features to Scraped Data
# Percentage of listings without an address, for which we cannot assess the location quality
data['Adresse'].apply(lambda x: True if 'Die vollständige Adresse der Immobilie erhalten Sie vom Anbieter.' in x else False).mean()
# +
def match_street(x):
if isinstance(x, str):
match = process.extractOne(x, list(location_info['Straße'].unique()))
if match[1] > 90:
return match[0]
else:
return x
else:
return x
# Computationally quite expensive (only run this once when there is new data)
# %time data['Straße'] = data['Straße'].apply(match_street)
#Results temporarily saved to iterate over data faster
data.to_csv('temp_data.csv',index=False)
# -
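# The fuzzy matching idea above can be sketched with the standard library's
# difflib instead of process.extractOne; the street names below are made-up
# examples, not taken from the scraped data:

```python
import difflib

known_streets = ['Karl-Marx-Allee', 'Kurfürstendamm', 'Unter den Linden']

def match_street_stdlib(name, candidates=known_streets, cutoff=0.9):
    # Return the closest known street name if the similarity exceeds the
    # cutoff, otherwise keep the original spelling unchanged.
    matches = difflib.get_close_matches(name, candidates, n=1, cutoff=cutoff)
    return matches[0] if matches else name
```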
# Reload the optionally saved df to iterate on problems much faster
data = pd.read_csv('temp_data.csv')
# +
# Strip one more time all strings of whitespace to guarantee correct merge
data = data.applymap(lambda x: x.strip() if isinstance(x, str) else x)
location_info = location_info.applymap(lambda x: x.strip() if isinstance(x, str) else x)
# First merge attempt on three criteria
matching_one = pd.merge(data, location_info,on=['Straße','Bezirk Abkürzung','Hausnummer'],how='left')
# Percentage of data for which we could add location rating
matching_one.dropna(subset=['Url'])['Wohnlage Einstufung'].notnull().mean()
# +
# In the PDF there is location info for entire streets (marked with the letter K), which allows merging without the exact house number
matching_two = pd.merge(data, location_info[location_info['Buchstabe'] == 'K'], on=['Straße','Bezirk Abkürzung'], how='left')
# Drop duplicated columns
matching_two = matching_two.drop(columns='Hausnummer_y').rename(columns={'Hausnummer_x':'Hausnummer'})
# Percentage of data for which we could add location rating
matching_two.dropna(subset=['Url'])['Wohnlage Einstufung'].notnull().mean()
# +
# Combine both merges from above to create final_data_frame
matching = matching_one.append(matching_two)
# Drop all records for which we could not find a location rating
matching.dropna(subset=['Wohnlage Einstufung'], inplace=True)
# Some street names exist in several districts, resulting in duplicated rows
# In the PDF only those streets carry a district abbreviation; I filter these records out now to add them back to the df later
matching_three = matching[matching['Ortsteil Abkürzung_x'] == matching['Ortsteil Abkürzung_y']]
# Get rid of all duplicated records in the file
matching.drop_duplicates(subset=['Url'], keep=False, inplace=True)
# Add back the records which had many streets with the same name
matching = matching.append(matching_three)
# Get rid of all duplicated records in the file once more
matching.drop_duplicates(subset='Url',inplace=True)
matching.reset_index(inplace=True, drop=True)
matching.drop(columns='Ortsteil Abkürzung_y',inplace=True)
matching.rename(columns={'Ortsteil Abkürzung_x': 'Ortsteil Abkürzung'},inplace=True)
# -
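# The deduplication trick above relies on keep=False removing *every* row of a
# duplicated key, so that the ambiguous matches can be re-added from a verified
# subset afterwards. A toy illustration:

```python
import pandas as pd

toy = pd.DataFrame({'Url': ['a', 'a', 'b'], 'rating': [1, 2, 3]})
# keep=False drops both 'a' rows, not just the later duplicates
unique_only = toy.drop_duplicates(subset=['Url'], keep=False)
```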
# ## Calculation of Rental Price Ceiling
#
# The main information for the calculation can be found here: https://stadtentwicklung.berlin.de/wohnen/wohnraum/mietendeckel/
# ### 1) Base Table (year of building, bathroom, central heating)
# The following base table takes the year the building was built, central heating, and the existence of a bathroom to come up with a base price.
Image("img/mietentabelle.png")
# #### English Translation
# +
rental_ceiling_info = {
'min_year': [1918, 1918, 1918, 1919, 1919, 1919, 1950, 1950, 1965, 1973, 1991, 2003 ],
'max_year': [1918, 1918, 1918, 1949, 1949, 1949, 1964, 1964, 1972, 1990, 2002, 2013 ],
'central_heating' : [True, True, False, True, True, False, True, True, True, True, True, True],
'condition': ['and', 'or', 'and', 'and', 'or', 'and', 'and', 'or', 'and', 'and', 'and', 'and'],
'bathroom' : [True, True, False, True, True, False, True, True, True, True, True, True],
'price_per_square_meter' : [6.45, 5.00, 3.92, 6.27, 5.22, 4.59, 6.08, 5.62, 5.95, 6.04, 8.13, 9.8]
}
rental_ceiling_info = pd.DataFrame.from_dict(rental_ceiling_info)
rental_ceiling_info
# -
# ### 2) Modern equipment
#
# Furthermore, if at least three of the following criteria are met, one euro per square meter is added to the cold rent to account for modern equipment:
#
# *Für Wohnungen mit moderner Ausstattung erhöht sich der Wert um 1,00 Euro. Eine moderne Ausstattung liegt vor, wenn mindestens drei der folgenden Merkmale vorhanden sind:*
#
# - *schwellenlos erreichbarer Aufzug,*
# - *Einbauküche,*
# - *hochwertige Sanitärausstattung,*
# - *hochwertiger Bodenbelag in der überwiegenden Zahl der Wohnräume,*
# - *Energieverbrauchskennwert von weniger als 120 kWh/(m² a)*
#
#
# ### 3) Location Rating
#
# Next, there is an additional premium or deduction based on the location of the flat. The exact location criteria are not yet published, but the PDF above gives the best available idea of the forthcoming location rating:
#
# *Für Wohnungen in einfacher Wohnlage ist bei der Berechnung der Mietobergrenze ein Abschlag beim maßgeblichen Mietpreis in der Mietentabelle von 0,28 Euro zu berücksichtigen, für Wohnungen in mittlerer Wohnlage werden 0,09 Euro abgezogen und für Wohnungen in guter Wohnlage ist ein Zuschlag von 0,74 Euro zu berücksichtigen. (Die Lageeinordnung wird demnächst veröffentlicht.)*
# ### 4) Single or double family house
#
# Since I scraped only flats, this criterion can be neglected and I do not need to consider the additional 10% price increase.
#
# *9. Was ist mit Ein- und Zweifamilienhäusern? Auch für sie gilt das Gesetz. Liegt der Wohnraum in Gebäuden mit nicht mehr als zwei Wohnungen, erhöht sich jedoch die Mietobergrenze um einen Zuschlag von zehn Prozent.*
# ## Assumptions
#
# Given the criteria above, I made the following conservative assumptions to calculate the rental price ceiling per square meter:
#
# - maximum price from the base table (1) for the respective year of construction, i.e. it is assumed that all the flats have central heating and a bathroom
# - all the flats fulfill at least three criteria, i.e. it is assumed that they all have modern equipment
# - the rent index PDF document published in 2019 is a good approximation for the location rating (3)
# - none of the flats are single- or two-family houses (the scraped data covers only apartments)
# ## Calculation
#
# max_price_per_square_meter = (base_price + location_factor + modern_equipment_factor) * (1 + 20%)
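# As a worked check of the formula (illustrative values: a pre-1918 building in
# a simple location with modern equipment — not a record from the dataset):

```python
base_price = 6.45             # base table price for buildings built before 1918
location_factor = -0.28       # deduction for a simple ('einfach') location
modern_equipment_factor = 1   # surcharge for modern equipment
percent = 0.20                # maximum allowed overpricing

max_price_per_square_meter = (base_price + location_factor
                              + modern_equipment_factor) * (1 + percent)
```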
# ### Max rental price ceiling
def find_base_price(df):
"""Takes in a dataframe and finds base price for the year the building was built.
Args:
- df: Dataframe
Returns:
- column (float): base price for the respective year the house was built
"""
    # Initialise the base price
    base_price = np.nan
# Age of the building
if df['Baujahr'] <= 1918:
base_price = 6.45
elif df['Baujahr'] >= 1919 and df['Baujahr'] <= 1949:
base_price = 6.27
elif df['Baujahr'] >= 1950 and df['Baujahr'] <= 1964:
base_price = 6.08
elif df['Baujahr'] >= 1965 and df['Baujahr'] <= 1972:
base_price = 5.95
elif df['Baujahr'] >= 1973 and df['Baujahr'] <= 1990:
base_price = 6.04
elif df['Baujahr'] >= 1991 and df['Baujahr'] <= 2002:
base_price = 8.13
elif df['Baujahr'] >= 2003 and df['Baujahr'] <= 2013:
base_price = 9.80
return base_price
def find_location_factor(df):
"""Takes in a dataframe and finds the location factor
Args:
- df: Dataframe
Returns:
- column (float): location factor depending on the quality of the location
"""
    # Location factor of the flat (NaN if the rating is unknown)
    location_factor = np.nan
    if df['Wohnlage Einstufung'].strip(' *') == 'einfach':
location_factor = -0.28
elif df['Wohnlage Einstufung'].strip(' *') == 'mittel':
location_factor = -0.09
elif df['Wohnlage Einstufung'].strip(' *') == 'gut':
location_factor = 0.74
return location_factor
# +
base_price = matching.apply(find_base_price, axis=1)
location_factor = matching.apply(find_location_factor, axis=1)
# Modern equipment factor of the flat
modern_equipment_factor = 1
# Maximum overpricing
percent = 0.2
max_price_per_square_meter = (base_price + location_factor + modern_equipment_factor) * (1+percent)
matching['Zulässige Mietobergrenze pro m²'] = max_price_per_square_meter
# -
# # Data Analysis
illegal_pricing = matching[matching['Kaltmiete pro m²'] > matching['Zulässige Mietobergrenze pro m²']]
legal_pricing = matching[matching['Kaltmiete pro m²'] < matching['Zulässige Mietobergrenze pro m²']]
# +
username = os.getenv('PLOTLY_USERNAME')
api_key = os.getenv('PLOTLY_API_KEY') # your api key - go to profile > settings > regenerate key
chart_studio.tools.set_credentials_file(username=username, api_key=api_key)
# +
fig = go.Figure(go.Funnel(
y = ['All Scraped Listings', 'Listings with sufficient Information', 'Listings over rental cap'],
x = [len(data),len(legal_pricing) + len(illegal_pricing),len(illegal_pricing)],
text = ["All","Sufficient Info", "Overpriced"],
opacity = 0.65,
marker = {"color": ["silver", "green", "red"]}
))
fig.update_layout(
title="Overview of Analysed Listings")
fig.show()
# -
py.plot(fig, filename = 'funnel_listings', auto_open=True)
# ### Average price before and after the law
matching['Kaltmiete pro m²'].mean()
matching['Zulässige Mietobergrenze pro m²'].mean()
(matching['Kaltmiete pro m²'].mean() - matching['Zulässige Mietobergrenze pro m²'].mean())/ matching['Kaltmiete pro m²'].mean()
# ### Pricing difference between districts
# +
average_rent_difference_per_district = matching.groupby('Ortsteil')[['Kaltmiete pro m²','Zulässige Mietobergrenze pro m²']].mean().stack().reset_index()
average_rent_difference_per_district.rename(columns={
'level_1':'Comparison', 0 :'Average Rent'
}, inplace=True)
# -
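# The groupby().mean().stack() reshape above turns the wide per-district means
# into long form for grouped bars. A toy sketch with made-up numbers:

```python
import pandas as pd

toy = pd.DataFrame({
    'Ortsteil': ['Mitte', 'Mitte', 'Neukölln'],
    'Kaltmiete pro m²': [12.0, 14.0, 9.0],
    'Zulässige Mietobergrenze pro m²': [8.0, 8.0, 7.0],
})
# One row per (district, measure) pair after stacking the two mean columns
long_form = (toy.groupby('Ortsteil')[['Kaltmiete pro m²',
                                      'Zulässige Mietobergrenze pro m²']]
                .mean().stack().reset_index())
long_form.columns = ['Ortsteil', 'Comparison', 'Average Rent']
```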
fig = px.bar(average_rent_difference_per_district,
x='Ortsteil',
y='Average Rent',
color='Comparison',
barmode='group',
             title='Comparison of Current and Newly Calculated Rent Cap per District')
fig.show()
py.plot(fig, filename = 'average_rent_difference_per_district', auto_open=True)
# +
percentage_rent_difference_per_district = matching.groupby('Ortsteil')[['Kaltmiete pro m²','Zulässige Mietobergrenze pro m²']].mean()
percentage_rent_difference_per_district = pd.DataFrame((percentage_rent_difference_per_district['Kaltmiete pro m²'] - percentage_rent_difference_per_district['Zulässige Mietobergrenze pro m²']) / percentage_rent_difference_per_district['Kaltmiete pro m²']*100)
percentage_rent_difference_per_district = percentage_rent_difference_per_district.rename(columns={0:'Percentage Change in Average Price'}).reset_index()
percentage_rent_difference_per_district = percentage_rent_difference_per_district.sort_values(['Percentage Change in Average Price'],ascending=False)
# -
fig = px.bar(percentage_rent_difference_per_district,
x='Ortsteil',
y='Percentage Change in Average Price',
title='Comparison of Price Change per District')
fig.show()
py.plot(fig, filename = 'percentage_rent_difference_per_district', auto_open=True)
# ## Excess Rent Paid
# Excess amount paid per month
illegal_cold_rent = illegal_pricing['Kaltmiete pro m²'] * illegal_pricing['Wohnfläche ca.']
illegal_rent_cap = illegal_pricing['Zulässige Mietobergrenze pro m²'] * illegal_pricing['Wohnfläche ca.']
illegal_excess_rent = illegal_cold_rent.sum() - illegal_rent_cap.sum()
illegal_excess_rent
illegal_pricing[['Titel', 'Anbieter' ,'Url', 'Adresse', 'Baujahr', 'Wohnlage Einstufung', 'Wohnfläche ca.', 'Kaltmiete', 'Kaltmiete pro m²', 'Zulässige Mietobergrenze pro m²']]
# +
top_10_illegal_real_estate_comp = illegal_pricing.groupby('Anbieter')[['Url']].count().sort_values(by='Url',ascending=False).head(10)
average_illegal_current_rent = illegal_pricing[illegal_pricing['Anbieter'].isin(top_10_illegal_real_estate_comp.index)].groupby('Anbieter')[['Kaltmiete pro m²']].mean()
average_illegal_rent_cap = illegal_pricing[illegal_pricing['Anbieter'].isin(top_10_illegal_real_estate_comp.index)].groupby('Anbieter')[['Zulässige Mietobergrenze pro m²']].mean()
top_10_illegal_real_estate_comp = top_10_illegal_real_estate_comp.join(average_illegal_current_rent).join(average_illegal_rent_cap)
top_10_illegal_real_estate_comp['Average excess rent'] = top_10_illegal_real_estate_comp['Kaltmiete pro m²'] - top_10_illegal_real_estate_comp['Zulässige Mietobergrenze pro m²']
top_10_illegal_real_estate_comp.rename(columns = {'Zulässige Mietobergrenze pro m²':'Rent cap'}, inplace=True)
# -
top_10_illegal_real_estate_comp.sort_values('Average excess rent',ascending=False, inplace=True)
top_10_illegal_real_estate_comp = pd.DataFrame(top_10_illegal_real_estate_comp.drop(columns=['Kaltmiete pro m²','Url']).stack()).reset_index()
top_10_illegal_real_estate_comp.rename(columns={'level_1':'Explanation',0:'Average rent per m²'},inplace=True)
fig = px.bar(top_10_illegal_real_estate_comp,
x="Anbieter",
y="Average rent per m²",
color='Explanation',
title='Calculated average rent cap per landlord (blue) and current average excess rent asked (red)')
fig.show()
py.plot(fig, filename = 'top_10_illegal_real_estate_comp', auto_open=True)
schöneberg = illegal_pricing[illegal_pricing['Ortsteil'] == 'Schöneberg']
(schöneberg['Kaltmiete pro m²'] - schöneberg['Zulässige Mietobergrenze pro m²']).sort_values(ascending=False).head(1)
example = illegal_pricing.loc[[1815]]
(example['Kaltmiete pro m²'] - example['Zulässige Mietobergrenze pro m²']) * example['Wohnfläche ca.']
example['Kaltmiete pro m²'] * example['Wohnfläche ca.']
example['Zulässige Mietobergrenze pro m²'] * example['Wohnfläche ca.']
excess_dist = pd.DataFrame(illegal_pricing['Kaltmiete pro m²'] - illegal_pricing['Zulässige Mietobergrenze pro m²'])
excess_dist.rename(columns={0:'Excess rent per m² under new rental cap'},inplace=True)
fig = px.histogram(excess_dist,
x="Excess rent per m² under new rental cap",
title="Excess rent per m² under new rental cap")
fig.show()
py.plot(fig, filename = 'excess_dist', auto_open=True)
len(illegal_pricing)/(len(legal_pricing) + len(illegal_pricing))
# +
table_price = example.apply(find_base_price, axis=1).sum()
location_factor = example.apply(find_location_factor,axis=1).sum()
modern_equipment_factor = 1
rent_cap = table_price + location_factor + modern_equipment_factor
cold_rent = example['Kaltmiete pro m²'].sum()
additional_maximum = rent_cap * 0.20
max_rent = additional_maximum + rent_cap
excess_rent = cold_rent - max_rent
x = ["Table price",
"Location factor",
"Modern equipment",
"Rent cap",
"Maximum additional 20%",
"Maximum rent cap",
"Excess rent",
"Cold rent"]
y = [table_price,
location_factor,
modern_equipment_factor,
rent_cap,
additional_maximum,
max_rent,
excess_rent,
cold_rent]
y = [round(e,2) for e in y]
text = [str(round(e,2))+' €' for e in y]
fig = go.Figure(go.Waterfall(
name = "Rent price composition", orientation = "v",
measure = ["relative","relative","relative","total","relative","total","relative","total"],
x = x,
textposition = "outside",
y = y,
text = text,
connector = {"line":{"color":"rgb(63, 63, 63)"}},
))
fig.update_layout(
    title = "Rent price composition and comparison with cold rent",
showlegend = True
)
fig.show()
# -
py.plot(fig, filename = 'rent_price_composition_example', auto_open=True)
# +
table_price = (illegal_pricing.apply(find_base_price, axis=1) * illegal_pricing['Wohnfläche ca.']).sum()
location_factor = (illegal_pricing.apply(find_location_factor,axis=1) * illegal_pricing['Wohnfläche ca.']).sum()
modern_equipment_factor = (1 * illegal_pricing['Wohnfläche ca.']).sum()
rent_cap = table_price + location_factor + modern_equipment_factor
cold_rent = (illegal_pricing['Kaltmiete pro m²'] * illegal_pricing['Wohnfläche ca.']).sum()
additional_maximum = rent_cap * 0.20
max_rent = additional_maximum + rent_cap
excess_rent = cold_rent - max_rent
x = ["Table price",
"Location factor",
"Modern equipment",
"Rent cap",
"Maximum additional 20%",
"Maximum rent cap",
"Excess rent",
"Cold rent"]
y = [table_price,
location_factor,
modern_equipment_factor,
rent_cap,
additional_maximum,
max_rent,
excess_rent,
cold_rent]
y = [round(e,2) for e in y]
# round() with one argument returns an int, so format the thousands label explicitly
text = [str(round(e / 1000)) + 'K €' for e in y]
fig = go.Figure(go.Waterfall(
name = "Rent price composition", orientation = "v",
measure = ["relative","relative","relative","total","relative","total","relative","total"],
x = x,
textposition = "outside",
y = y,
text = text,
connector = {"line":{"color":"rgb(63, 63, 63)"}},
))
fig.update_layout(
    title = "Rent price composition and comparison with cold rent for all illegal listings",
showlegend = True
)
fig.show()
# -
py.plot(fig, filename = 'rent_price_composition_dataset', auto_open=True)
| data_analysis.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
data = pd.read_csv("final_3a_prob.csv")
col = []
for i in range(1, 37):
j = "_traj_Group_T" + str(i)
col.append(j)
data = data[col]
data.head(2)
group1 = []
group2 = []
group3 = []
group4 = []
for i in col:
t = list(data[i])
group1.append(t.count(1))
group2.append(t.count(2))
group3.append(t.count(3))
group4.append(t.count(4))
# combine = [group1, group2, group3, group4]
# combine = pd.DataFrame(combine)
group1[:5]
group2[:5]
group3[:5]
group4[:5]
x = []
for i in range(36):
x.append(i)
fig = plt.figure(figsize=(8,5))
plt.plot(x,group1)
plt.plot(x,group2)
plt.plot(x,group3)
plt.plot(x,group4)
plt.legend(loc='upper center', labels=['group1', 'group2', 'group3', 'group4'])
plt.xlabel('Time')
plt.ylabel('Number of Patients')
plt.title('Patients Converge Time')
plt.show()
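# The per-column counting loop above can also be expressed directly with
# pandas. A sketch on toy data, not the clinic file:

```python
import pandas as pd

toy = pd.DataFrame({'T1': [1, 1, 2, 3], 'T2': [1, 2, 2, 4]})
# One value_counts per column: rows are trajectory groups, columns are time points.
counts = toy.apply(pd.Series.value_counts).fillna(0).astype(int)
```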
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
data = pd.read_csv("final_3b_prob.csv")
col = []
for i in range(1, 37):
j = "_traj_Group_T" + str(i)
col.append(j)
data = data[col]
group1 = []
group2 = []
group3 = []
group4 = []
for i in col:
t = list(data[i])
group1.append(t.count(1))
group2.append(t.count(2))
group3.append(t.count(3))
group4.append(t.count(4))
x = []
for i in range(36):
x.append(i)
fig = plt.figure(figsize=(8,5))
plt.plot(x,group1)
plt.plot(x,group2)
plt.plot(x,group3)
plt.plot(x,group4)
plt.legend(loc='upper center', labels=['group1', 'group2', 'group3', 'group4'])
plt.xlabel('Time')
plt.ylabel('Number of Patients')
plt.title('Patients Converge Time')
plt.show()
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
data = pd.read_csv("final_4_prob.csv")
col = []
for i in range(1, 37):
j = "_traj_Group_T" + str(i)
col.append(j)
data = data[col]
group1 = []
group2 = []
group3 = []
group4 = []
for i in col:
t = list(data[i])
group1.append(t.count(1))
group2.append(t.count(2))
group3.append(t.count(3))
group4.append(t.count(4))
x = []
for i in range(36):
x.append(i)
fig = plt.figure(figsize=(8,5))
plt.plot(x,group1)
plt.plot(x,group2)
plt.plot(x,group3)
plt.plot(x,group4)
plt.legend(loc='upper center', labels=['group1', 'group2', 'group3', 'group4'])
plt.xlabel('Time')
plt.ylabel('Number of Patients')
plt.title('Patients Converge Time')
plt.show()
| Clinic/convergence.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + colab={"base_uri": "https://localhost:8080/", "height": 315} colab_type="code" executionInfo={"elapsed": 13218, "status": "ok", "timestamp": 1540890868505, "user": {"displayName": "\u5bae\u672c\u572d\u4e00\u90ce", "photoUrl": "https://lh5.googleusercontent.com/-5BLtx8oPSy8/AAAAAAAAAAI/AAAAAAAALtI/-tIwIsmAvCs/s64/photo.jpg", "userId": "00037817427736046144"}, "user_tz": -540} id="0dQutTXVUp-k" outputId="9d94a14e-95ea-47f6-bd57-8ec190b37865"
# # If you are using Colab, use the installs below:
# # !pip install torch==0.4.1
# # !pip install torchvision==0.2.1
# # !pip install numpy==1.14.6
# # !pip install matplotlib==2.1.2
# # !pip install pillow==5.0.0
# # !pip install opencv-python==3.4.3.18
# -
# # Chapter 9: The torch.nn Package
# + [markdown] colab_type="text" id="o5ino9EEfgTZ"
# # 9.12 DataParallel Layers
# + colab={} colab_type="code" id="hzyWSFwA2TF-"
import torch
import torchvision
import torch.nn as nn
import torch.optim as optim
import torch.nn.functional as F
import numpy as np
import torchvision.transforms as transforms
# %matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
# + colab={} colab_type="code" id="NGmwtY0rd2jV"
import torch.nn as nn
import torch.nn.functional as F
class Model(nn.Module):
def __init__(self):
super(Model, self).__init__()
self.conv1 = nn.Conv2d(1, 20, 5)
self.conv2 = nn.Conv2d(20, 20, 5)
def forward(self, x):
x = F.relu(self.conv1(x))
return F.relu(self.conv2(x))
net = Model()
# + colab={"base_uri": "https://localhost:8080/", "height": 87} colab_type="code" executionInfo={"elapsed": 498, "status": "ok", "timestamp": 1540898795800, "user": {"displayName": "\u5bae\u672c\u572d\u4e00\u90ce", "photoUrl": "https://lh5.googleusercontent.com/-5BLtx8oPSy8/AAAAAAAAAAI/AAAAAAAALtI/-tIwIsmAvCs/s64/photo.jpg", "userId": "00037817427736046144"}, "user_tz": -540} id="5FH-A7t0uYEr" outputId="25ffbf1d-fd9f-434d-e172-6576acfca063"
#cpu
net.to("cpu")
# + colab={"base_uri": "https://localhost:8080/", "height": 87} colab_type="code" executionInfo={"elapsed": 3319, "status": "ok", "timestamp": 1540898799953, "user": {"displayName": "\u5bae\u672c\u572d\u4e00\u90ce", "photoUrl": "https://lh5.googleusercontent.com/-5BLtx8oPSy8/AAAAAAAAAAI/AAAAAAAALtI/-tIwIsmAvCs/s64/photo.jpg", "userId": "00037817427736046144"}, "user_tz": -540} id="CXbCJhsquZmx" outputId="a78d18b6-90ce-42b3-c49c-985557249094"
#gpu
net.to("cuda")
# + colab={} colab_type="code" id="7yg7ONstd2Yo"
# Data parallel on a single GPU
net = torch.nn.DataParallel(net, device_ids=[0])
# + colab={} colab_type="code" id="dfsX5DRluhTg"
# Data parallel across four GPUs
# net = torch.nn.DataParallel(net, device_ids=[0, 1, 2, 3])
# + colab={} colab_type="code" id="Uum0UVYSd2t5"
inputs = torch.randn(20, 1, 28, 28)  # avoid shadowing the built-in input()
output = net(inputs)
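# Conceptually, DataParallel splits the input batch along dimension 0, runs a
# model replica on each device, and gathers the outputs. The chunking can be
# sketched in plain Python without a GPU (this mirrors, but is not, the actual
# torch implementation):

```python
def scatter_batch(batch_size, n_devices):
    # Near-equal chunks along dim 0, largest chunks first (ceiling division),
    # mimicking how a batch is scattered across devices.
    chunk = -(-batch_size // n_devices)  # ceiling division
    sizes = []
    remaining = batch_size
    while remaining > 0:
        sizes.append(min(chunk, remaining))
        remaining -= sizes[-1]
    return sizes
```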
# + colab={} colab_type="code" id="5SLralFYumto"
| chapter9/section9_12.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Data generation with numpy and pandas
# Imports and settings:
# + jupyter={"outputs_hidden": false}
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import string
# %matplotlib inline
# -
# Generate four arrays: the first is random; the second is correlated with the first; the third is random and independent; and the first half of the fourth array is correlated with the first array, the second half with the third array.
# + tags=[]
p1 = np.random.uniform(size=20)
# + tags=[]
p2 = p1 + np.random.uniform(-0.05, 0.05, 20)
# + tags=[]
p3 = np.random.uniform(size=20)
# + tags=[]
p4 = np.append(p1[:10] + np.random.uniform(-0.1, 0.1, 10), p3[10:] + np.random.uniform(-0.05, 0.05, 10))
# + jupyter={"outputs_hidden": false}
plt.plot(p1, p2, 'o');
# + jupyter={"outputs_hidden": false}
plt.plot(p1, p4, 'o');
# + jupyter={"outputs_hidden": false}
plt.plot(p3, p4, 'o');
# -
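# The intended correlation structure can be verified numerically. A sketch with
# a fixed seed (the exact coefficients vary with the draw):

```python
import numpy as np

rng = np.random.default_rng(0)
a = rng.uniform(size=20)
b = a + rng.uniform(-0.05, 0.05, 20)   # strongly correlated with a
c = rng.uniform(size=20)               # independent of a

corr_ab = np.corrcoef(a, b)[0, 1]
corr_ac = np.corrcoef(a, c)[0, 1]
```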
# Create a data frame from the arrays; the columns are patient IDs.
# + tags=[]
ge = pd.DataFrame({1: p1, 2: p2, 3: p3, 4: p4})
ge
# -
# As an index, random gene names are generated.
# + jupyter={"outputs_hidden": false}
letters = [a for a in string.ascii_uppercase]
name_chars = np.random.choice(letters, size=(20, 4))
names = [''.join(x) for x in name_chars]
names
# + jupyter={"outputs_hidden": false}
ge.index = names
ge
# -
# Just for fun, save this as HTML.
# + tags=[]
ge.to_html('genes_table.html')
# -
# Show data as scatter matrix.
# + jupyter={"outputs_hidden": false}
pd.plotting.scatter_matrix(ge, figsize=(14,10));
| source-code/pandas/data_generation.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# #### Python for Data Science
#
# # Define Functions and Getting Help
# You've already seen and used functions such as <b>print</b> and <b>abs</b>. But Python has many more functions, and defining your own functions is a big part of Python programming.
#
# In this lesson you will learn more about using and defining functions.
# ## Agenda
# - help()
# - Define functions
# - Docstrings
# - Functions that don't return
# - Default arguments
# - Functions Applied to Functions
# #### Getting Help
# You saw the <b>abs</b> function in the previous tutorial, but what if you've forgotten what it does?
#
# The <b>help()</b> function is possibly the most important Python function you can learn. If you can remember how to use help(), you hold the key to understanding most other functions.
#
# Here is an example:
help(round)
# help() displays two things:
#
# - the header of that function round(number[, ndigits]). In this case, this tells us that round() takes an argument we can describe as number. Additionally, we can optionally give a separate argument which could be described as ndigits.
# - A brief English description of what the function does.
#
# <b>Common pitfall</b>: when you're looking up a function, remember to pass in the name of the function itself, and not the result of calling that function.
#
# What happens if we invoke help on a call to the function abs() ? Unhide the output of the cell below to see.
#
#
help(round(-2.01))
# Python evaluates an expression like this from the inside out. First it calculates the value of round(-2.01), then it provides help on the output of that expression.
#
# (And it turns out to have a lot to say about integers! After we talk later about objects, methods, and attributes in Python, the voluminous help output above will make more sense.)
#
# round is a very simple function with a short docstring. help shines even more when dealing with more complex, configurable functions like print. Don't worry if the following output looks inscrutable... for now, just see if you can pick anything new out from this help.
help(print)
# <hr>
# If you were looking for it, you might learn that print can take an argument called sep, and that this describes what we put between all the other arguments when we print them.
# #### Defining functions
# Builtin functions are great, but we can only get so far with them before we need to start defining our own functions. Below is a simple example.
def least_difference(a, b, c):
diff1 = abs(a - b)
diff2 = abs(b - c)
diff3 = abs(a - c)
return min(diff1, diff2, diff3)
# This creates a function called least_difference, which takes three arguments, a, b, and c.
#
# Functions start with a header introduced by the def keyword. The indented block of code following the : is run when the function is called.
#
# return is another keyword uniquely associated with functions. When Python encounters a return statement, it exits the function immediately, and passes the value on the right hand side to the calling context.
#
# Is it clear what least_difference() does from the source code? If we're not sure, we can always try it out on a few examples:
print(
least_difference(1, 10, 100),
least_difference(1, 10, 10),
least_difference(5, 6, 7), # Python allows trailing commas in argument lists. How nice is that?
)
# Or maybe the help() function can tell us something about it.
help(least_difference)
# Python isn't smart enough to read my code and turn it into a nice English description. However, when I write a function, I can provide a description in what's called the docstring.
# #### Docstrings
def least_difference(a, b, c):
"""Return the smallest difference between any two numbers
among a, b and c.
>>> least_difference(1, 5, -5)
4
"""
diff1 = abs(a - b)
diff2 = abs(b - c)
diff3 = abs(a - c)
return min(diff1, diff2, diff3)
# The docstring is a triple-quoted string (which may span multiple lines) that comes immediately after the header of a function. When we call help() on a function, it shows the docstring.
help(least_difference)
# Good programmers use docstrings unless they expect to throw away the code soon after it's used (which is rare). So, you should start writing docstrings too.
# ### Functions that don't return
# What would happen if we didn't include the return keyword in our function?
# +
def least_difference(a, b, c):
"""Return the smallest difference between any two numbers
among a, b and c.
"""
diff1 = abs(a - b)
diff2 = abs(b - c)
diff3 = abs(a - c)
min(diff1, diff2, diff3)
print(
least_difference(1, 10, 100),
least_difference(1, 10, 10),
least_difference(5, 6, 7),
)
# -
# Python allows us to define such functions. The result of calling them is the special value None. (This is similar to the concept of "null" in other languages.)
#
# Without a return statement, least_difference is completely pointless, but a function with side effects may do something useful without returning anything. We've already seen two examples of this: print() and help() don't return anything. We only call them for their side effects (putting some text on the screen). Other examples of useful side effects include writing to a file, or modifying an input.
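# A quick check that a function without a return statement really yields None:

```python
def no_return():
    pass  # no return statement anywhere

result = no_return()  # the call still produces a value: None
```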
# ### Default arguments
# When we called help(print), we saw that the print function has several optional arguments. For example, we can specify a value for sep to put some special string in between our printed arguments:
print(1, 2, 3, sep=' < ')
# But if we don't specify a value, sep is treated as having a default value of ' ' (a single space).
print(1, 2, 3)
# Adding optional arguments with default values to the functions we define turns out to be pretty easy:
# +
def greet(who="Colin"):
print("Hello,", who)
greet()
greet(who="Kaggle")
# (In this case, we don't need to specify the name of the argument, because it's unambiguous.)
greet("world")
# -
# ## Functions Applied to Functions
# Here's something that's powerful, though it can feel very abstract at first. You can supply functions as arguments to other functions. Some example may make this clearer:
# +
def mult_by_five(x):
return 5 * x
def call(fn, arg):
"""Call fn on arg"""
return fn(arg)
def squared_call(fn, arg):
"""Call fn on the result of calling fn on arg"""
return fn(fn(arg))
print(
call(mult_by_five, 1),
squared_call(mult_by_five, 1),
sep='\n', # '\n' is the newline character - it starts a new line
)
# -
# Functions that operate on other functions are called "higher-order functions." You probably won't write your own for a little while. But there are higher-order functions built into Python that you might find useful to call.
#
# Here's an interesting example using the max function.
#
# By default, max returns the largest of its arguments. But if we pass in a function using the optional key argument, it returns the argument x that maximizes key(x) (aka the 'argmax').
# +
def mod_5(x):
"""Return the remainder of x after dividing by 5"""
return x % 5
print(
'Which number is biggest?',
max(100, 51, 14),
'Which number is the biggest modulo 5?',
max(100, 51, 14, key=mod_5),
sep='\n',
)
# -
# ## Thanks for watching
from IPython.core.display import HTML
def css_styling():
    with open("styles/custom.css", "r") as f:  # close the file after reading
        styles = f.read()
    return HTML(styles)
css_styling()
| Define Functions and Getting Help.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import time
import pandas as pd
import numpy as np
import seaborn as sns
from matplotlib import pyplot as plt
# %matplotlib inline
pd.set_option('display.max_columns', 250)
pd.set_option('display.max_rows', 200)
pd.options.display.float_format = '{:20,.1f}'.format
# -
ruta = "./../data/"
file1 = "iris2.csv"
iris = pd.read_csv(ruta + file1)
iris.sample(5)
iris = iris.drop("Unnamed: 5",axis= 1)
iris.sample(5)
from sklearn.decomposition import PCA
pca = PCA(n_components=2)
pca_result = pca.fit_transform(iris[["p1", "p2", "p3", "p4"]])
iris['pca1'] = pca_result[:,0]
iris['pca2'] = pca_result[:,1]
plt.figure(figsize=(5,5))
sns.scatterplot(
x="pca1", y="pca2",
hue="tipo",
data=iris,
legend="full",
)
#
resultomp = pd.read_csv(ruta + "groupresult.csv")
resultsec = pd.read_csv(ruta + "groupresultsec.csv")
resultomp = resultomp.drop("Unnamed: 1", axis = 1)
resultsec = resultsec.drop("Unnamed: 1", axis = 1)
resultomp.sample()
resultsec.sample()
plt.figure(figsize=(5,5))
sns.scatterplot(
x="pca1", y="pca2",
hue=resultomp.omp,
data=iris,
legend="full",
)
irisp = iris.copy()
irisp["omp"] = resultomp.omp
#sns.catplot(x='tipo', y='p1', data =iris, hue= resultomp.omp)
sns.catplot(x='tipo', y='p1', data =irisp, hue= "omp")
plt.figure(figsize=(5,5))
sns.scatterplot(
x="pca1", y="pca2",
hue=resultsec.sec,
data=iris,
legend="full",
)
irisp["sec"] = resultsec.sec
sns.catplot(x='tipo', y='p1', data =irisp, hue= "sec")
# # Validating other groupings with k from 2 to 5
l = ["res2","res3","res4","res5"]
for e in l:
aux = pd.read_csv(ruta + e + ".csv")
iris[e] = aux.res
iris.sample()
# # Two groups (res2)
plt.figure(figsize=(5,5))
sns.scatterplot(
x="pca1", y="pca2",
hue= "res2",
data=iris,
legend="full",
)
sns.catplot(x='tipo', y='p1', data =iris, hue= "res2")
# # res3
plt.figure(figsize=(5,5))
sns.scatterplot(
x="pca1", y="pca2",
hue= "res3",
data=iris,
legend="full",
)
sns.catplot(x='tipo', y='p1', data =iris, hue= "res3")
# # res4
plt.figure(figsize=(5,5))
sns.scatterplot(
x="pca1", y="pca2",
hue= "res4",
data=iris,
legend="full",
)
sns.catplot(x='tipo', y='p1', data =iris, hue= "res4")
# # res5
plt.figure(figsize=(5,5))
sns.scatterplot(
x="pca1", y="pca2",
hue= "res5",
data=iris,
legend="full",
)
sns.catplot(x='tipo', y='p1', data =iris, hue= "res5")
| analysis/.ipynb_checkpoints/integridad-checkpoint.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Evaluation of Analyzers using Input Perturbations
# This notebook will guide you through an example of how to evaluate analyzers via perturbing the input according to the importance that the different input dimensions have.
#
# In particular, several analyzers will be applied to a simple multi-layer perceptron trained on MNIST digits, namely:
# * Sensitivity Analysis ("gradient")
# * Deconvolution
# * Layer-wise relevance propagation (epsilon- and z-rules)
#
# The input images are divided into square regions that are sorted according to their importance w.r.t. the pixel-wise saliency scores assigned by those analyzers. Then, the information content of the image is gradually destroyed by perturbation of the most important regions. The effect of this perturbation on the classifier performance is measured. This procedure is repeated several times.
#
# We expect that the classifier performance drops quickly if important information is removed and remains largely unaffected when perturbing unimportant regions.
#
# Thus, different analyzers can be compared by measuring how quickly the classifier performance drops: the quicker it drops after perturbing the input according to an analyzer's ranking, the better that analyzer identifies the input components responsible for the model's output.
#
# Similarly, several models can be compared, e.g. with random perturbations on the data, towards their resilience to noisy input data: The faster the model prediction declines with ongoing perturbations, the more susceptible the classifier is to noise.
#
# Reference:
#
# *[Samek et al.](http://dx.doi.org/10.1109/TNNLS.2016.2599820)*, "Evaluating the visualization of what a deep neural network has learned." *IEEE transactions on neural networks and learning systems* 28.11 (2017): 2660-2673.
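# As a rough sketch of the procedure above (an illustrative NumPy toy, not the innvestigate implementation; the 4x4 region size and the "zeros" perturbation are assumptions made for the example):

```python
import numpy as np

def perturb_most_important(image, saliency, region=4, n_regions=2):
    """Zero out the n_regions regions with the highest average saliency."""
    img = image.copy()
    h, w = saliency.shape
    # Aggregate ("pool") pixel saliency into per-region scores by averaging
    scores = {}
    for i in range(0, h, region):
        for j in range(0, w, region):
            scores[(i, j)] = saliency[i:i + region, j:j + region].mean()
    # Rank regions from most to least important
    ranked = sorted(scores, key=scores.get, reverse=True)
    # Destroy the most important regions ("zeros" perturbation)
    for i, j in ranked[:n_regions]:
        img[i:i + region, j:j + region] = 0.0
    return img

rng = np.random.default_rng(0)
image = rng.random((8, 8))
saliency = rng.random((8, 8))
perturbed = perturb_most_important(image, saliency)
print(int((perturbed == 0.0).sum()))  # 2 regions of 4x4 pixels -> 32
```

# Repeating this for a growing number of perturbed regions and re-evaluating the classifier after each step traces out the perturbation curve described above.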
# # Imports
import warnings
warnings.simplefilter('ignore')
# +
# %matplotlib inline
import imp
import keras.backend
import keras.models
import matplotlib.pyplot as plt
import numpy as np
import os
import keras
from keras.datasets import mnist
from keras.models import Model
from keras.optimizers import RMSprop
import innvestigate
import innvestigate.applications
import innvestigate.applications.mnist
import innvestigate.utils as iutils
import innvestigate.utils.visualizations as ivis
from innvestigate.tools import Perturbation, PerturbationAnalysis
eutils = imp.load_source("utils", "../utils.py")
mnistutils = imp.load_source("utils_mnist", "../utils_mnist.py")
# -
# # Data
# Then, the MNIST data is loaded in its entirety, formatted according to the specifications of the Keras backend.
channels_first = keras.backend.image_data_format() == "channels_first"
data = mnistutils.fetch_data(channels_first) #returns x_train, y_train, x_test, y_test as numpy.ndarray
num_classes = len(np.unique(data[1]))
# # Model
# We have prepared an (extendable) dictionary of neural network architectures to play around with, some of which are already pre-trained and some which have not seen any `MNIST` data yet.
# MODELNAME INPUT RANGE EPOCHS BATCH_SZ MODEL CREATION KWARGS
models = {'mlp_2dense': ([-1, 1], 15, 128, {'dense_units':1024, 'dropout_rate':0.25}),
'mlp_3dense': ([-1, 1], 20, 128, {'dense_units':1024, 'dropout_rate':0.25}),
'cnn_2convb_2dense': ([-.5, .5], 20, 64, {}),
# pre-trained model from [https://doi.org/10.1371/journal.pone.0130140 , http://jmlr.org/papers/v17/15-618.html]
'pretrained_plos_long_relu': ([-1, 1], 0, 0, {}),
'pretrained_plos_short_relu': ([-1, 1], 0, 0, {}),
'pretrained_plos_long_tanh': ([-1, 1], 0, 0, {}),
'pretrained_plos_short_tanh': ([-1, 1], 0, 0, {}),
}
#Adapt and Play around!
# You can select one of the above models by setting the variable `modelname` as below. The corresponding parameters regarding expected input data range, number of training epochs and optional model definition parameters will be fetched from the dictionary.
# Unpack model params by name. The line below currently selects an already pretrained network, which saves some time.
modelname = 'pretrained_plos_long_relu'
activation_type = 'relu'
input_range, epochs, batch_size, kwargs = models[modelname]
# Now, preprocess the data according to the requirements of the model, build the model, optionally train it for `epochs` epochs and then test it.
# +
data_preprocessed = (mnistutils.preprocess(data[0], input_range), data[1],
mnistutils.preprocess(data[2], input_range), data[3])
x_test, y_test = data_preprocessed[2:]
# TODO use small subset for developing purposes
x_test, y_test = x_test[:10], y_test[:10]
y_test = keras.utils.to_categorical(y_test, num_classes)
test_sample = np.copy(x_test[0:1])
generator = iutils.BatchSequence([x_test, y_test], batch_size=256)
model_without_softmax, model_with_softmax = mnistutils.create_model(channels_first, modelname, **kwargs)
mnistutils.train_model(model_with_softmax, data_preprocessed, batch_size=batch_size, epochs=epochs)
model_without_softmax.set_weights(model_with_softmax.get_weights())
# -
# # Perturbation Analysis
# ### Setup analyzer and perturbation
# The perturbation analysis takes several parameters:
# * `perturbation_function`: This is the method with which the pixels in the most important regions are perturbated. You can pass your own function or pass a string to select one of the predefined functions, e.g. "zeros", "mean" or "gaussian".
# * `region_shape`: The shape of the regions that are considered for perturbation. In this case, we use single pixels. Regions are aggregated ("pooled") using a (customizable) aggregation function that is average pooling by default. The input image is padded such that it can be subdivided into an integer number of regions.
# * `steps`: Number of perturbation steps.
# * `ratio`: In each perturbation step, the `ratio` * 100% most important pixels are perturbed.
#
# Feel free to play around with different analyzers, e.g. by selecting them from the `methods` list via `selected_methods_indices`.
# +
perturbation_function = "zeros"  # one of the predefined functions; a custom callable could be passed instead
region_shape = (1, 1)
steps = 5
ratio = 0.05  # Perturb 5% of pixels per perturbation step
methods = [
# NAME OPT.PARAMS POSTPROC FXN TITLE
# Show input
("input", {}, mnistutils.image, "Input"),
# Function
("gradient", {}, mnistutils.graymap, "Gradient"),
("smoothgrad", {"noise_scale": 50}, mnistutils.graymap, "SmoothGrad"),
("integrated_gradients", {}, mnistutils.graymap, "Integrated Gradients"),
# Signal
("deconvnet", {}, mnistutils.bk_proj, "Deconvnet"),
("guided_backprop", {}, mnistutils.bk_proj, "Guided Backprop",),
("pattern.net", {}, mnistutils.bk_proj, "PatternNet"),
# Interaction
("lrp.z_baseline", {}, mnistutils.heatmap, "Gradient*Input"),
("lrp.z", {}, mnistutils.heatmap, "LRP-Z"),
("lrp.epsilon", {"epsilon": 1}, mnistutils.heatmap, "LRP-Epsilon"),
("lrp.sequential_preset_a",{}, mnistutils.heatmap, "LRP-PresetA"),
#("lrp.sequential_preset_b",{"epsilon": 1}, mnistutils.heatmap, "LRP-PresetB"),
]
# Select methods of your choice
selected_methods_indices = [1, 4, 8, 9]
selected_methods = [methods[i] for i in selected_methods_indices]
print('Using method(s) "{}".'.format([method[0] for method in selected_methods]))
analyzers = [innvestigate.create_analyzer(method[0],
model_without_softmax,
**method[1]) for method in selected_methods]
for analyzer in analyzers:
analyzer.fit(data_preprocessed[0],
pattern_type=activation_type,
batch_size=256, verbose=1)
# -
# ## 1. Evaluate the model after several perturbation steps
# ### Setup perturbation
# The perturbation analysis consists of two parts:
# 1. An object of the class `Perturbation` that performs the actual perturbation of input images. Here, we use (1, 1)-regions (i.e. single pixels) and set the original values of the most important pixels to zero (the "zeros" perturbation function chosen above).
# 2. An object of the class `PerturbationAnalysis` that computes the analysis, performs several perturbation steps and evaluates the model performance. In each step, the 5% most important pixels are perturbed.
scores_selected_methods = dict()
perturbation_analyses = list()
for method, analyzer in zip(selected_methods, analyzers):
print("Method: {}".format(method[0]))
# Set up the perturbation analysis
perturbation = Perturbation(perturbation_function, region_shape=region_shape, in_place=False)
perturbation_analysis = PerturbationAnalysis(analyzer, model_with_softmax, generator, perturbation,
steps=steps, ratio=ratio, verbose=True)
scores = perturbation_analysis.compute_perturbation_analysis()
# Store the scores and perturbation analyses for later use
scores_selected_methods[method[0]] = np.array(scores)
perturbation_analyses.append(perturbation_analysis)
print()
plt.figure()
for method_name in scores_selected_methods.keys():
scores = scores_selected_methods[method_name]
plt.plot(scores[:, 1], label=method_name)
plt.xlabel("Perturbation steps")
plt.ylabel("Test accuracy")
plt.xticks(np.array(range(scores.shape[0])))
plt.legend()
plt.show()
# As mentioned above, a steeper decrease shows a better identification of the relevant information.
# ## 2. Plot perturbed sample
# Finally, we plot the perturbations on a selected test sample and show them along with the respective analyses, both before and after the perturbation.
# +
plt.figure()
plt.subplot(2, steps + 1, 1)
plt.imshow(np.squeeze(test_sample), cmap="Greys_r")
plt.axis("off")
plt.title("Sample")
plt.figure()
grid = list()
row_labels = list()
col_labels = ["Step {}".format(i+1) for i in range(steps)]
for perturbation_analysis, method in zip(perturbation_analyses, selected_methods):
row_labels.extend([["Samples\n{}".format(method[0])], ["Analysis before"], ["Analysis after"]])
analyses_before = list()
analyses_after = list()
samples = list()
# Reset the perturbation_analysis
perturbation_analysis.perturbation.ratio = 0.0
for i in range(steps):
perturbated_test_sample, analysis_before = perturbation_analysis.compute_on_batch(test_sample, return_analysis=True)
# Compute analysis after perturbation
analysis_after = perturbation_analysis.analyzer.analyze(perturbated_test_sample)
samples.append(np.squeeze(perturbated_test_sample))
analyses_before.append(np.squeeze(analysis_before))
analyses_after.append(np.squeeze(analysis_after))
grid.extend([samples, analyses_before, analyses_after])
eutils.plot_image_grid(grid, row_labels, list(), col_labels)
plt.show()
| examples/notebooks/mnist_perturbation.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/jkbells/Monte-Carlo-Simulations/blob/main/Monte_Carlo_Simulations%2C_Estimating_Pi%2C_Random_Walks.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + [markdown] id="eXENubzbRlYA"
# # **Monte Carlo Simulation**
#
# We can run simulations to, well, simulate the real world in a low-risk environment. Monte Carlo simulation is by far the most widely used kind of simulation, and the concept is very simple: try to do something n times and calculate the probabilities of landing in a particular state.
#
# Examples:
#
# * Estimating pi
# * Random Walks
# * Finance: predicting stock prices
# * Aeronautical engineering: airflow around the wings
# * Car safety: crash simulations
# * Medicine: spread of disease and contagions
#
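# Before the pi example, here is a minimal sketch of that core idea — repeat a random experiment n times and count how often we land in the state of interest (a hypothetical dice example, not part of the original notebook):

```python
import random

def prob_sum_is_7(n):
    """Estimate the probability that two dice sum to 7 via n simulated rolls."""
    random.seed(42)  # fixed seed so the estimate is reproducible
    hits = 0
    for _ in range(n):
        roll = random.randint(1, 6) + random.randint(1, 6)
        if roll == 7:
            hits += 1
    return hits / n

print(prob_sum_is_7(100_000))  # close to the exact value 6/36 = 0.1667
```

# The same pattern — repeat, count, divide — underlies the pi estimate below.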
# # **Estimating Pi**
#
# Area of a circle is given as: pi(r^2)
#
# So pi is essentially the ratio between the area and the radius squared. We can use that to estimate the value of pi using Monte Carlo simulation.
#
# We can simulate throwing darts and see which ones land within the circle and which ones land outside it.
#
# Also, if we throw darts only in the top-right quadrant, it will have the same ratio.
#
# area(circle) = pi(r^2)
#
# area(square) = (2r)^2 = 4(r^2)
#
# So, if we divide the area of the circle by the area of the square, we get pi/4. Multiply by 4 and you get an estimate of the value of pi.
#
#
# + id="cWmOGW8-SJaa"
import random as r
from math import sqrt
# + id="6x47AGInVKVX"
def distance(a, b):
return sqrt(a**2 + b**2)
def estimate_pi(darts):
inside = 0
for i in range(darts):
x = r.random()
y = r.random()
if distance(x, y) < 1.0:
inside += 1
pi = (inside / darts) * 4
return pi
# + colab={"base_uri": "https://localhost:8080/"} id="3CwXb8ypVmvf" outputId="6c61917b-95bf-4cf3-b68b-8cbdbb223bfb"
num_sim = int( 1e+6 )
pi = estimate_pi(num_sim)
print(pi)
# + [markdown] id="ull6vcUhYWdd"
# # **Random Walks**
# + id="ZbUySZEIV1S_"
import random
def get_action():
    return random.choice([(0, 1), (0, -1), (1, 0), (-1, 0)])
def random_walk(n):
"""Return path after 'n' block random walk"""
path = []
x, y = 0, 0
# y = 0
for i in range(n):
(dx, dy) = get_action() # get step deltas
x = x + dx
y = y + dy
        path.append((x, y))  # create a new tuple and append it to the path
return path
walk = random_walk(150)
# + colab={"base_uri": "https://localhost:8080/", "height": 424} id="7Hicqs_mbRC5" outputId="900b07e6-9183-4614-9872-6e3bb5cafb1b"
import matplotlib.pyplot as plt
# %matplotlib inline
plt.figure(num=None, figsize=(15,10))
point_color = list(range(150))
x_numbers = [x[0] for x in walk]
y_numbers = [x[1] for x in walk]
plt.scatter(x_numbers, y_numbers, marker=',' , s=150 , c=point_color)
plt.show()
# + id="F0Ea_qFtcr7e"
| Monte_Carlo_Simulations,_Estimating_Pi,_Random_Walks.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
# DT is a nonlinear, non-continuous regressor
# -
# # Data Preprocessing
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
# # Importing the Datasets
dataset=pd.read_csv("E:\\Edu\\Data Science and ML\\Machinelearningaz\\Datasets\\Part 2 - Regression\\Section 8 - Decision Tree Regression\\Position_Salaries.csv")
dataset.head()
dataset.shape
dataset.describe()
X=dataset.iloc[:,1:2].values # (Matrix)
y=dataset.iloc[:,2].values # (Vector)
print(X)
print(y)
# # Fitting DT Regression to Training Set
from sklearn.tree import DecisionTreeRegressor
regressor=DecisionTreeRegressor(random_state=0)
regressor.fit(X,y)
# # Predicting new Result with DT Regression
regressor.predict(np.array([[6.5]]))  # Predict the salary for level 6.5
# # Visualising the DT Regression results
plt.scatter(X,y,color='red')
plt.plot(X,regressor.predict(X),color='blue')
plt.title("Truth or Bluff (DT Regression)")
plt.xlabel('Level/Position of work')
plt.ylabel('Salary')
plt.show()
# This plot is a trap: the DT model is non-continuous, so straight lines between data points misrepresent it
# # Visualising the DT Regression with Higher Resolution and smoother Curve
X_grid=np.arange(min(X),max(X),0.01)
X_grid=X_grid.reshape((len(X_grid),1))
plt.scatter(X,y,color='red')
plt.plot(X_grid,regressor.predict(X_grid),color='blue')
plt.title("Truth or Bluff (DT Regression)")
plt.xlabel('Level/Position of work')
plt.ylabel('Salary')
plt.show()
# +
# A decision tree predicts by taking the average of the target values within each interval (leaf)
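# A quick sketch with made-up one-feature data (illustrative only) showing that a depth-1 tree splits the axis into intervals and predicts the mean target within each:

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

X_demo = np.array([[1], [2], [3], [10], [11], [12]])
y_demo = np.array([5.0, 6.0, 7.0, 50.0, 60.0, 70.0])

# With a single split the tree forms two intervals and averages y within each
tree = DecisionTreeRegressor(max_depth=1, random_state=0)
tree.fit(X_demo, y_demo)

print(tree.predict([[2.5]]))   # mean of [5, 6, 7]    -> [6.]
print(tree.predict([[11.5]]))  # mean of [50, 60, 70] -> [60.]
```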
| 2 Regression/5 Decision Tree Regression.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] papermill={} tags=[]
# <img width="10%" alt="Naas" src="https://landen.imgix.net/jtci2pxwjczr/assets/5ice39g4.png?w=160"/>
# + [markdown] papermill={} tags=[]
# # Notion - Sent Gmail On New Item
# <a href="https://app.naas.ai/user-redirect/naas/downloader?url=https://raw.githubusercontent.com/jupyter-naas/awesome-notebooks/master/Notion/Notion_Sent_Gmail_On_New_Item.ipynb" target="_parent"><img src="https://naasai-public.s3.eu-west-3.amazonaws.com/open_in_naas.svg"/></a>
# + [markdown] papermill={} tags=[]
# ## Input
# + [markdown] papermill={} tags=[]
# ### Import libraries
#
# Let's import all the necessary libraries.
# + papermill={} tags=[]
import naas
from naas_drivers import notion, gsheet
from naas_drivers import html
import pandas as pd
# + [markdown] papermill={} tags=[]
# ### Variables
#
# Replace all the variables below with appropriate values.
# + papermill={} tags=[]
# Notion
token = "<PASSWORD>"
database_id = "NOTION_DATABASE_ID"
# Gsheet
spreadsheet_id = "SPREADSHEET_ID"
mail_list_sheet_name = "Sheet1"
item_list_sheet_name = "Sheet2"
your_email = "YOUR_EMAIL_ID"
# + [markdown] papermill={} tags=[]
# ### Read the gsheet
# + papermill={} tags=[]
email_list_data = gsheet.connect(spreadsheet_id).get(sheet_name = mail_list_sheet_name)
try:
item_list_history = gsheet.connect(spreadsheet_id).get(sheet_name = item_list_sheet_name)
except:
item_list_history = []
# + [markdown] papermill={} tags=[]
# ### Setting up email
# + papermill={} tags=[]
firstname_list = email_list_data['FIRSTNAME']
email_list = email_list_data['EMAIL']
# + [markdown] papermill={} tags=[]
# ### Get database from notion
# + papermill={} tags=[]
def create_notion_connection():
database = notion.connect(token).database.get(database_id)
df_db = database.df()
print(df_db)
return df_db
# + [markdown] papermill={} tags=[]
# ## Model
# + [markdown] papermill={} tags=[]
# ### Send data to Gsheet
# + papermill={} tags=[]
#Send data to Gsheet
def send_data_to_gsheet(data):
gsheet.connect(spreadsheet_id)
gsheet.send(
sheet_name = item_list_sheet_name,
data = data
)
# + [markdown] papermill={} tags=[]
# ### Get new items from Notion
#
# Let's fetch out the new items from Notion
#
# Here our unique key is **Id**
# + papermill={} tags=[]
#Get new notion items list
def get_new_items_list(df_db):
if not list(item_list_history):
new_items = df_db
else:
item_list_history['Id'] = item_list_history['Id'].astype(int)
df_db['Id'] = df_db['Id'].astype(int)
common = df_db.merge(item_list_history, on=["Id"])
new_items = df_db[~df_db.Id.isin(common.Id)]
data = []
for i in range(len(new_items.index)):
dictionary = {}
for col in new_items.columns:
dictionary[col] = str(new_items.iloc[i][col])
data.append(dictionary)
send_data_to_gsheet(data)
return data
# + [markdown] papermill={} tags=[]
# ### Create email content
# + papermill={} tags=[]
#Get email contents
def get_mail_content():
email_content = html.generate(
display = 'iframe',
title = 'Updates here!!',
        heading = 'Hi {first_name}, you have some new items in your Notion list',
        text_1 = 'Following is the new list of items, separated by commas: ',
text_2 = '{new_items_list}',
text_3 = 'Have a great day!!'
)
#print(email_content)
return email_content
# + [markdown] papermill={} tags=[]
# ### Sending Emails
# + papermill={} tags=[]
#Send mail to recipients
def send_mail(new_items_list):
email_content = get_mail_content()
for i in range(len(email_list_data)):
subject = "Update on Notion items"
content = email_content.replace("{first_name}",firstname_list[i]).replace("{new_items_list}",new_items_list)
naas.notifications.send(email_to=email_list[i], subject=subject, html=content, email_from=your_email)
# + [markdown] papermill={} tags=[]
# ## Output
# + papermill={} tags=[]
df = create_notion_connection()
# + papermill={} tags=[]
new_items_list = get_new_items_list(df)
new_items_list = ', '.join([data['Books'] for data in new_items_list])
# + papermill={} tags=[]
if new_items_list:
send_mail(new_items_list)
else:
print('No new items!!')
# + [markdown] papermill={} tags=[]
# ### Setting up the scheduler
#
# Let's schedule the notebook for every 15mins ⏰
#
# Ps: to remove the "Scheduler", just replace .add by .delete
# + papermill={} tags=[]
#Schedule the notebook to run every 15 minutes
naas.scheduler.add(cron="*/15 * * * *")
| Notion/Notion_Sent_Gmail_On_New_Item.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
# # Exercise multimodal recognition: RGB-D scene recognition
#
# This exercise consists of three parts: two tutorials and the deliverable. The students must modify the code of the tutorial part, and write and discuss the results in the deliverable part that will be used to evaluate the exercise.
#
# If you are not familiar with jupyter notebooks please check __[this tutorial](https://jupyter-notebook.readthedocs.io/en/latest/examples/Notebook/What%20is%20the%20Jupyter%20Notebook.html)__ first.
#
# # Part 1 (tutorial): RGB baseline
#
# In this tutorial, you will use a pretrained convolutional network and replace the classifier for the target dataset using PyTorch. The code is loosely based on the __[PyTorch transfer learning tutorial](http://pytorch.org/tutorials/beginner/transfer_learning_tutorial.html)__. Just execute the code sequentially, paying attention to the comments.
# +
import torch
import torch.nn as nn
import sys
from tqdm import tqdm
from torch import optim
from torch import cuda
from torch.utils import data
from torch.autograd import Variable
import torchvision
from torchvision import datasets, models, transforms
import matplotlib.pyplot as plt
import time
import os
import copy
import itertools
from utils import imshow_unimodal, MEAN_RGB, STD_RGB
plt.ion() # interactive mode
# -
# Load Data
# ---------
#
# We will use torchvision, torch.utils.data and RGBDutils packages for loading the
# data. The dataset is structured hierarchically in splits\modalities\classes (check the folder).
# +
# Data augmentation and normalization for training
data_transforms = {
'train': transforms.Compose([
transforms.RandomResizedCrop(224),
transforms.RandomHorizontalFlip(),
transforms.ToTensor(),
transforms.Normalize(MEAN_RGB, STD_RGB)
]),
'val': transforms.Compose([
transforms.Resize(256),
transforms.CenterCrop(224),
transforms.ToTensor(),
transforms.Normalize(MEAN_RGB, STD_RGB)
]),
'test': transforms.Compose([
transforms.Resize(256),
transforms.CenterCrop(224),
transforms.ToTensor(),
transforms.Normalize(MEAN_RGB, STD_RGB)
]),
}
# Modalities
modality = 'rgb'
# Path to the dataset
data_dir = '/home/mcv/datasets/sunrgbd_lite'
# Preparing dataset and dataloaders
partitions = ['train', 'val', 'test']
image_datasets = {x: datasets.ImageFolder(os.path.join(data_dir, x, modality),
data_transforms[x])
for x in partitions}
dataloaders = {x: data.DataLoader(image_datasets[x], batch_size=4,
shuffle=True, num_workers=4)
for x in partitions}
dataset_sizes = {x: len(image_datasets[x]) for x in partitions}
class_names = image_datasets['train'].classes
use_gpu = cuda.is_available()
# -
image_datasets
# **Visualize a few images**
#
# Let's visualize a few images to get familiar with the dataset.
# +
# Get a batch of training data
inputs, classes = next(iter(dataloaders['train']))
inputs, classes = inputs[0:4], classes[0:4]
# Make a grid from batch
out = torchvision.utils.make_grid(inputs)
imshow_unimodal(out, title=[class_names[x] for x in classes])
# -
# Training the model
# ------------------
#
# Now, let's write a general function to train a model. Details:
#
# - Uses the Adam algorithm for gradient descent.
# - Early stopping using the best validation accuracy.
def train_model(model, criterion, optimizer, scheduler, num_epochs=25):
since = time.time()
best_model_wts = copy.deepcopy(model.state_dict())
best_acc = 0.0
for epoch in tqdm(range(num_epochs), file=sys.stdout, desc="Training"):
print('Epoch {}/{}'.format(epoch, num_epochs - 1))
print('-' * 10)
# Each epoch has a training and validation phase
for phase in ['train', 'val']:
print('Phase %s' % phase)
if phase == 'train':
if scheduler is not None:
scheduler.step()
model.train(True) # Set model to training mode
else:
model.train(False) # Set model to evaluate mode
running_loss = 0.0
running_corrects = 0
# Iterate over data.
for data in dataloaders[phase]:
# get the inputs
inputs, labels = data
# wrap them in Variable
if use_gpu:
inputs = Variable(inputs.cuda())
labels = Variable(labels.cuda())
else:
inputs, labels = Variable(inputs), Variable(labels)
# zero the parameter gradients
optimizer.zero_grad()
# forward
outputs = model(inputs)
_, preds = torch.max(outputs.data, 1)
loss = criterion(outputs, labels)
# backward + optimize only if in training phase
if phase == 'train':
loss.backward()
optimizer.step()
# statistics
                # running_loss += loss.data[0] * inputs.size(0)  # PyTorch 0.4
                running_loss += loss.data.item() * inputs.size(0)  # PyTorch 1.0
running_corrects += torch.sum(preds == labels.data)
epoch_loss = running_loss / dataset_sizes[phase]
            epoch_acc = running_corrects.double() / dataset_sizes[phase]
print('{} Loss: {:.4f} Acc: {:.4f}'.format(
phase, epoch_loss, epoch_acc))
# deep copy the model
if phase == 'val' and epoch_acc > best_acc:
best_acc = epoch_acc
best_model_wts = copy.deepcopy(model.state_dict())
print()
time_elapsed = time.time() - since
print('Training complete in {:.0f}m {:.0f}s'.format(
time_elapsed // 60, time_elapsed % 60))
print('Best val Acc: {:4f}'.format(best_acc))
# load best model weights
model.load_state_dict(best_model_wts)
return model
# And now, a function to evaluate the model on a particular set.
def evaluate_model(model, partition, criterion):
since = time.time()
model.train(False) # Set model to evaluate mode
running_loss = 0.0
running_corrects = 0
# Iterate over data.
for data in dataloaders[partition]:
# get the inputs
inputs, labels = data
# wrap them in Variable
if use_gpu:
inputs = Variable(inputs.cuda())
labels = Variable(labels.cuda())
else:
inputs, labels = Variable(inputs), Variable(labels)
# forward
outputs = model(inputs)
_, preds = torch.max(outputs.data, 1)
loss = criterion(outputs, labels)
# statistics
# running_loss += loss.data[0] * inputs.size(0) # Pytorch 0.4
running_loss += loss.data.item() * inputs.size(0) # Pytorch 1.0
running_corrects += torch.sum(preds == labels.data)
test_loss = running_loss / dataset_sizes[partition]
    test_acc = running_corrects.double() / dataset_sizes[partition]
print()
time_elapsed = time.time() - since
print('Tested in {:.0f}m {:.0f}s Loss: {:.4f} Acc: {:.4f}'.format(
time_elapsed // 60, time_elapsed % 60, test_loss, test_acc))
return test_acc, test_loss
# The network
# ----------------------
#
# The architecture of the network is shown in the following figure:
# <img src="figures/rgb_network.png" />
#
# The following code creates the RGB network by (downloading and) instantiating an AlexNet trained on ImageNet.
# +
# Instantiate the model
model = models.alexnet(pretrained=True)
# You can visualize the network
print(model)
# -
# Set up the training/fine tuning parameters
# ----------------------
#
# The following code creates the optimization criterion and sets per-layer learning rates to better control the fine-tuning and training process. We use a very simple setup in which all layers are effectively frozen (learning rate 0) except the last fully connected one, i.e. the classifier, so it should be easy to improve the performance.
# +
for param in model.parameters():
param.requires_grad = False
num_classes = len(class_names)
c = model.classifier
num_ftrs = c[6].in_features
model.classifier = nn.Sequential(c[0], c[1], c[2], c[3], c[4], c[5], nn.Linear(num_ftrs, num_classes))
if use_gpu:
model = model.cuda()
criterion = nn.CrossEntropyLoss()
learning_rate = 0.001
perlayer_optim = [
{'params': model.features[0].parameters(), 'lr': 0.00}, # conv1
{'params': model.features[3].parameters(), 'lr': 0.00}, # conv2
{'params': model.features[6].parameters(), 'lr': 0.00}, # conv3
{'params': model.features[8].parameters(), 'lr': 0.00}, # conv4
{'params': model.features[10].parameters(), 'lr': 0.00}, # conv5
{'params': model.classifier[1].parameters(), 'lr': 0.000}, # fc6
{'params': model.classifier[4].parameters(), 'lr': 0.000}, # fc7
{'params': model.classifier[6].parameters(), 'lr': 0.001} # fc8
]
for param in itertools.chain(model.features[0].parameters(), model.features[3].parameters(),
model.features[6].parameters(), model.features[8].parameters(),
model.features[10].parameters(), model.classifier[1].parameters(),
model.classifier[4].parameters(), model.classifier[6].parameters()):
param.requires_grad = True
optimizer = optim.Adam(perlayer_optim, lr=learning_rate)
# -
# Train and evaluate the model
# -----------------
#
# It shouldn't take more than 2 minutes to train with the GPU on the server.
# +
# Train
model = train_model(model, criterion, optimizer, None, num_epochs=25)
# Evaluate
train_acc, _ = evaluate_model(model, 'train', criterion)
val_acc, _ = evaluate_model(model, 'val', criterion)
test_acc, _ = evaluate_model(model, 'test', criterion)
print('Accuracy. Train: %1.2f%% val: %1.2f%% test: %1.2f%%' %
(train_acc * 100, val_acc * 100, test_acc * 100))
| src/single_RGB.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3.7.9 64-bit (conda)
# name: python3
# ---
# # Health Insurance Dataset
# ### Objective
# - Supervised Learning (Binary Classification problem). Predict whether the policyholders (customers) from the previous year will also be interested in Vehicle Insurance provided by the company.
#
#
# ## 1. DATA COLLECTION
# - Collect the data from Kaggle in CSV format
#
# ## 2. EXPLORATORY DATA ANALYSIS & DATA CLEANING
# - Statistical summary on numerical features and objects
# - Dataset shape
# - Datatypes (numerical, categorical)
# - Categorical (ordinal and nominal)
# - Pearson Correlation
# - Target values visualization
# - Plot Distribution
# - Check for Imbalanced Dataset
#
# ## 3. FEATURE ENGINEERING
# - Find outliers
# - Fill Missing Values
# - Binary Classification Problem
# - Ordinal and Label Encode
# - If the model is a tree-based method (Decision Tree, Random Forest, XGBoost), no scaling is needed
#
#
# ## 4. FEATURE SELECTION
# - Dropping low variance features
# - Information Gain-Mutual Information in Classification Problems
# - Pearson Correlation
# - Fisher Score-ChiSquare Test for Feature Selection
# - Tree-based Selection using ExtraTreesClassifier (Feature Importance)
# - Univariate Selection
#
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
import pylab
from pprint import pprint
import scipy.stats as stats
from sklearn.preprocessing import LabelEncoder
from sklearn.feature_selection import VarianceThreshold, SelectKBest, chi2, mutual_info_classif
from sklearn.ensemble import ExtraTreesClassifier
from collections import Counter
from imblearn.under_sampling import NearMiss
# # DATA COLLECTION
# - importing files
train = pd.read_csv('../inputs/train.csv')
test = pd.read_csv('../inputs/test.csv')
sample = pd.read_csv('../inputs/sample_submission.csv')
print(f'Training shape: {train.shape}, Testing Shape: {test.shape}, Sample Shape: {sample.shape}')
# define the index id and join test with sample dataframe
test = test.set_index('id').join(sample.set_index('id'))
df = train.set_index('id')
df.head()
# # EXPLORATORY DATA ANALYSIS & DATA CLEANING
# - Statistical summary on numerical features and objects
# - Dataset shape
# - Datatypes (numerical, categorical)
# - Categorical (ordinal and nominal)
# - Pearson Correlation
# - Target values visualization
# - Plot Distribution
# - Check for Imbalanced Dataset
# find null values in DataFrame
if df.isnull().sum().any() == False:
print('Data is Clean, No Null values found')
else:
print('Found Null Values')
# explore the shape (rows and columns) for dataframe
print(f'Number of rows of DataFrame: {df.shape[0]}')
print(f'Number of columns of DataFrame: {df.shape[1]}')
# +
features = df.columns
numerical_feat = [col for col in df.columns if df[col].dtypes != 'O']
print(f'Number of Numerical Features: {len(numerical_feat)}')
categorical_feat = [col for col in df.columns if df[col].dtypes == 'O']
print(f'Number of Categorical Features: {len(categorical_feat)}')
# -
pprint(df.columns.to_series().groupby(df.dtypes).groups)
# statistical summary for quantitative columns
df.describe()
# statistical summary of object dtypes columns (categorical)
df.describe(include=['object'])  # np.object is deprecated in recent NumPy versions
pct_response = (df.Response.value_counts()[1] / df.Response.value_counts()[0]) * 100
print(f'Ratio of interested to not-interested customers: {pct_response:0.2f}%')
def plot_distribution(dataframe, feature):
plt.figure(figsize=(25,6))
# first row, 1st column
plt.subplot(1, 3, 1)
sns.histplot(dataframe[feature])
# first row, 2nd column
plt.subplot(1, 3, 2)
stats.probplot(dataframe[feature], dist='norm', plot=pylab)
# first row, 3rd column
plt.subplot(1, 3, 3)
sns.boxplot(dataframe[feature], orient="h", palette="Set2")
plt.show()
plot_distribution(df, 'Age')
# Annual Premium has outliers as shown in the boxplot
plot_distribution(df, 'Annual_Premium')
print(df.Annual_Premium.mean())
# 1 : Customer is interested, 0 : Customer is not interested
plt.title("Count of Interest or Not")
sns.countplot(x = "Response", data=df)
# +
plt.title("Response based on Vehicle Damage")
sns.countplot(x = "Response", hue = 'Vehicle_Damage', data=df)
# Customer with their vehicle damaged in the past tend to be interested
# -
plt.title("Response based on Previously Insured")
sns.countplot(x = "Response", hue = 'Previously_Insured', data=df)
plt.title("Response based on Vehicle Age")
sns.countplot(x = "Response", hue = 'Vehicle_Age', data=df)
plt.figure(figsize = (13,5))
plt.title("Response based on Gender Category")
sns.countplot(x='Gender', hue='Response', data=df)
num_cols = ['Age', 'Vintage', 'Policy_Sales_Channel']
plt.subplots(figsize=(15,8))
sns.boxplot(data=df[num_cols], orient="h", palette="Set2")
# no outliers found
plot_distribution(df, 'Age')
# +
# plot correlation
def pearson_corr(dataframe):
# compute corr array and generate a mask for the upper triangle
corr = dataframe.corr()
mask = np.triu(np.ones_like(corr, dtype=bool))
f, ax = plt.subplots(figsize=(11, 9))
# Generate a custom diverging colormap
cmap = sns.diverging_palette(230, 20, as_cmap=True)
    # plot the heatmap with the mask applied
sns.heatmap(corr, mask=mask, cmap=cmap, vmax=.3, center=0,
square=True, linewidths=.5, cbar_kws={"shrink": .5})
pearson_corr(df)
# -
# # FEATURE ENGINEERING
# - Find outliers
# - Fill Missing Values
# - Binary Classification Problem
# - Ordinal and Label Encode
# - If the model is a tree-based method (Decision Tree, Random Forest, XGBoost), no scaling is needed
#
# +
# select the categorical feature columns
# and list each column's unique class names
cat_features = df[['Gender', 'Vehicle_Age', 'Vehicle_Damage']]
# printing unique values of each column
for col in cat_features.columns:
print(f"{col}: {cat_features[col].unique()}")
# -
# ## Nominal and Ordinal
nominal_col = ['Gender', 'Vehicle_Damage']
df[nominal_col] = df[nominal_col].apply(LabelEncoder().fit_transform)
vehicle_age_map = {'< 1 Year':1, '1-2 Year':2, '> 2 Years':3}
df['Vehicle_Age'] = df.Vehicle_Age.map(vehicle_age_map)
# Inspect the distribution of Policy_Sales_Channel
plot_distribution(df, 'Policy_Sales_Channel')
# build a "column dtype" string (e.g. for a SQL CREATE TABLE statement)
replacements = {
'int64': 'int',
'float64': 'float'
}
col_str = ", ".join('{} {}'.format(n,d) for (n,d) in zip(df.columns, df.dtypes.replace(replacements)))
# find null values in DataFrame
if df.isnull().sum().any() == False:
print('Data is Clean, No Null values found')
else:
print('Found Null Values')
# # Detect Outliers
plot_distribution(df, 'Annual_Premium')
# +
# filter out values that lie more than 3 standard deviations from the mean
def detect_outliers(col):
    """Return the values of col whose z-score exceeds 3 in absolute value."""
    outliers = []
    mu = np.mean(col)
    std = np.std(col)
    for i in col:
        z_score = (i - mu) / std
        if np.abs(z_score) > 3:
            outliers.append(i)
    return outliers
outlier_pt = detect_outliers(df.Annual_Premium)
rows_before = df.shape[0]
df = df[~df.Annual_Premium.isin(outlier_pt)]
rows_after = df.shape[0]
print(f'Total Outliers: {rows_before - rows_after}')
# -
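# The z-score loop above can also be written as a vectorized filter. A minimal sketch on
# synthetic data (the column name mirrors `Annual_Premium`; the values are illustrative,
# not from the dataset):

```python
import numpy as np
import pandas as pd

# Synthetic stand-in for the Annual_Premium column, with two injected outliers
rng = np.random.default_rng(0)
values = np.concatenate([rng.normal(30000, 5000, 1000), [200000, 250000]])
df_demo = pd.DataFrame({'Annual_Premium': values})

# Vectorized z-score filter: keep rows within 3 standard deviations of the mean
z = (df_demo['Annual_Premium'] - df_demo['Annual_Premium'].mean()) / df_demo['Annual_Premium'].std()
df_clean = df_demo[z.abs() <= 3]
print(f'Removed {len(df_demo) - len(df_clean)} outliers')
```

# On large frames the vectorized form avoids the per-row Python loop.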
plot_distribution(df, 'Annual_Premium')
# # FEATURE SELECTION
# - Dropping low-variance features (variance below 5%)
# - Information Gain-Mutual Information in Classification Problems
# - Pearson Correlation
# - Fisher Score-ChiSquare Test for Feature Selection
# - Tree-based Selection using ExtraTreesClassifier (Feature Importance)
# - Univariate Selection
pearson_corr(df)
# +
var_threshold = VarianceThreshold(threshold=0.05)
var_threshold.fit(df)
# get the feature columns whose variance falls below the threshold
constant_col = [col for col in df.columns if col not in df.columns[var_threshold.get_support()]]
# drop the low-variance (near-constant) features
print(constant_col)
df.drop(constant_col, axis=1, inplace=True)
# -
targets = df.Response
features = df.drop('Response', axis=1)
# +
# determine the mutual information for classification
# output a value between [0,1], the higher the value the more dependent on target values
mutual_info = mutual_info_classif(features, targets)
# convert into series and get column names
mutual_info = pd.Series(mutual_info)
mutual_info.index = features.columns
# plot ordered mutual_info values per feature
mutual_info.sort_values(ascending=False).plot(kind='barh', figsize=(20,10))
# +
model = ExtraTreesClassifier()
model.fit(features, targets)
# plot the feature importance
feat_importance = pd.Series(model.feature_importances_, index = features.columns)
feat_importance.nlargest(5).plot(kind='barh', figsize=(20,10))
plt.show()
# +
# get top 5 features using Chi2
best_features = SelectKBest(score_func=chi2, k=5)
fit = best_features.fit(features, targets)
# get a dataframe of score and column names
df_scores = pd.DataFrame(fit.scores_)
df_col = pd.DataFrame(features.columns)
# concatenate both dataframes
feat_scores = pd.concat([df_col, df_scores], axis=1)
feat_scores.columns = ['features', 'score']
feat_scores.index = features.columns
feat_scores.sort_values(by='score').plot(kind='barh', figsize=(20,10))
# +
fig = plt.figure(figsize=(20,10))
ax = plt.axes(projection="3d")
x_points = df['Annual_Premium']
y_points = df['Policy_Sales_Channel']
z_points = df['Response']
ax.scatter3D(x_points, y_points, z_points, c=z_points, cmap='hsv')
ax.set_xlabel('Annual_Premium')
ax.set_ylabel('Policy_Sales_Channel')
ax.set_zlabel('Response')
plt.show()
# -
df.to_csv('../inputs/health_insurance_clean.csv', index=False)
# # IMBALANCED DATASET
#
# ### 14% of target values are interested
#
# # How to deal with Imbalanced Dataset
# - Stratified KFold
pct_response = (df.Response.value_counts()[1] / df.Response.value_counts()[0]) * 100
print(f'Ratio of interested to not-interested customers: {pct_response:0.2f}%')
# +
## Get interested and not_interested count values
interested = df[df['Response']==1]
not_interested = df[df['Response']==0]
print(interested.shape)
print(not_interested.shape)
targets = df['Response']
features = df.drop('Response', axis=1)
# Implement undersampling (NearMiss) to handle the imbalanced dataset
nm = NearMiss()
features_res, targets_res = nm.fit_resample(features,targets)
# -
print(f'Original dataset shape {Counter(targets)}')
print(f'Resampled dataset shape {Counter(targets_res)}')
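# NearMiss picks the majority samples closest to the minority class; when imblearn is
# unavailable, plain random undersampling is a quick baseline (it is NOT the NearMiss
# algorithm). A minimal sketch on synthetic labels, with illustrative column names:

```python
import numpy as np
import pandas as pd

# Synthetic imbalanced data: roughly 14% positive responses
rng = np.random.default_rng(42)
demo = pd.DataFrame({
    'feature': rng.normal(size=1000),
    'Response': np.where(rng.random(1000) < 0.14, 1, 0),
})

# Randomly downsample the majority class to the minority class size, then shuffle
minority = demo[demo['Response'] == 1]
majority = demo[demo['Response'] == 0].sample(n=len(minority), random_state=42)
balanced = pd.concat([minority, majority]).sample(frac=1, random_state=42)
print(balanced['Response'].value_counts())
```

# Random undersampling discards information; it is only a baseline to compare against
# informed methods such as NearMiss.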
| notebooks/healthinsurance.ipynb |
# +
# Copyright 2010-2017 Google
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""SAT code samples used in documentation."""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import collections
from ortools.sat.python import cp_model
def CodeSample():
model = cp_model.CpModel()
x = model.NewBoolVar('x')
print(x)
def LiteralSample():
model = cp_model.CpModel()
x = model.NewBoolVar('x')
not_x = x.Not()
print(x)
print(not_x)
def BoolOrSample():
model = cp_model.CpModel()
x = model.NewBoolVar('x')
y = model.NewBoolVar('y')
model.AddBoolOr([x, y.Not()])
def ReifiedSample():
"""Showcase creating a reified constraint."""
model = cp_model.CpModel()
x = model.NewBoolVar('x')
y = model.NewBoolVar('y')
b = model.NewBoolVar('b')
# First version using a half-reified bool and.
model.AddBoolAnd([x, y.Not()]).OnlyEnforceIf(b)
# Second version using implications.
model.AddImplication(b, x)
model.AddImplication(b, y.Not())
# Third version using bool or.
model.AddBoolOr([b.Not(), x])
model.AddBoolOr([b.Not(), y.Not()])
def RabbitsAndPheasants():
"""Solves the rabbits + pheasants problem."""
model = cp_model.CpModel()
r = model.NewIntVar(0, 100, 'r')
p = model.NewIntVar(0, 100, 'p')
# 20 heads.
model.Add(r + p == 20)
# 56 legs.
model.Add(4 * r + 2 * p == 56)
# Solves and prints out the solution.
solver = cp_model.CpSolver()
status = solver.Solve(model)
if status == cp_model.FEASIBLE:
print('%i rabbits and %i pheasants' % (solver.Value(r), solver.Value(p)))
def BinpackingProblem():
"""Solves a bin-packing problem."""
# Data.
bin_capacity = 100
slack_capacity = 20
num_bins = 10
all_bins = range(num_bins)
items = [(20, 12), (15, 12), (30, 8), (45, 5)]
num_items = len(items)
all_items = range(num_items)
# Model.
model = cp_model.CpModel()
# Main variables.
x = {}
for i in all_items:
num_copies = items[i][1]
for b in all_bins:
x[(i, b)] = model.NewIntVar(0, num_copies, 'x_%i_%i' % (i, b))
# Load variables.
load = [model.NewIntVar(0, bin_capacity, 'load_%i' % b) for b in all_bins]
# Slack variables.
slacks = [model.NewBoolVar('slack_%i' % b) for b in all_bins]
# Links load and x.
for b in all_bins:
model.Add(load[b] == sum(x[(i, b)] * items[i][0] for i in all_items))
# Place all items.
for i in all_items:
model.Add(sum(x[(i, b)] for b in all_bins) == items[i][1])
# Links load and slack through an equivalence relation.
safe_capacity = bin_capacity - slack_capacity
for b in all_bins:
# slack[b] => load[b] <= safe_capacity.
model.Add(load[b] <= safe_capacity).OnlyEnforceIf(slacks[b])
# not(slack[b]) => load[b] > safe_capacity.
model.Add(load[b] > safe_capacity).OnlyEnforceIf(slacks[b].Not())
# Maximize sum of slacks.
model.Maximize(sum(slacks))
# Solves and prints out the solution.
solver = cp_model.CpSolver()
status = solver.Solve(model)
print('Solve status: %s' % solver.StatusName(status))
if status == cp_model.OPTIMAL:
print('Optimal objective value: %i' % solver.ObjectiveValue())
print('Statistics')
print(' - conflicts : %i' % solver.NumConflicts())
print(' - branches : %i' % solver.NumBranches())
print(' - wall time : %f s' % solver.WallTime())
def IntervalSample():
model = cp_model.CpModel()
horizon = 100
start_var = model.NewIntVar(0, horizon, 'start')
    duration = 10  # Python CP-SAT code accepts integer variables or constants.
end_var = model.NewIntVar(0, horizon, 'end')
interval_var = model.NewIntervalVar(start_var, duration, end_var, 'interval')
print('start = %s, duration = %i, end = %s, interval = %s' %
(start_var, duration, end_var, interval_var))
def OptionalIntervalSample():
model = cp_model.CpModel()
horizon = 100
start_var = model.NewIntVar(0, horizon, 'start')
    duration = 10  # Python CP-SAT code accepts integer variables or constants.
end_var = model.NewIntVar(0, horizon, 'end')
presence_var = model.NewBoolVar('presence')
interval_var = model.NewOptionalIntervalVar(start_var, duration, end_var,
presence_var, 'interval')
print('start = %s, duration = %i, end = %s, presence = %s, interval = %s' %
(start_var, duration, end_var, presence_var, interval_var))
def MinimalCpSat():
"""Minimal CP-SAT example to showcase calling the solver."""
# Creates the model.
model = cp_model.CpModel()
# Creates the variables.
num_vals = 3
x = model.NewIntVar(0, num_vals - 1, 'x')
y = model.NewIntVar(0, num_vals - 1, 'y')
z = model.NewIntVar(0, num_vals - 1, 'z')
# Creates the constraints.
model.Add(x != y)
# Creates a solver and solves the model.
solver = cp_model.CpSolver()
status = solver.Solve(model)
if status == cp_model.FEASIBLE:
print('x = %i' % solver.Value(x))
print('y = %i' % solver.Value(y))
print('z = %i' % solver.Value(z))
def MinimalCpSatWithTimeLimit():
"""Minimal CP-SAT example to showcase calling the solver."""
# Creates the model.
model = cp_model.CpModel()
# Creates the variables.
num_vals = 3
x = model.NewIntVar(0, num_vals - 1, 'x')
y = model.NewIntVar(0, num_vals - 1, 'y')
z = model.NewIntVar(0, num_vals - 1, 'z')
# Adds an all-different constraint.
model.Add(x != y)
# Creates a solver and solves the model.
solver = cp_model.CpSolver()
# Sets a time limit of 10 seconds.
solver.parameters.max_time_in_seconds = 10.0
status = solver.Solve(model)
if status == cp_model.FEASIBLE:
print('x = %i' % solver.Value(x))
print('y = %i' % solver.Value(y))
print('z = %i' % solver.Value(z))
# You need to subclass the cp_model.CpSolverSolutionCallback class.
class VarArrayAndObjectiveSolutionPrinter(cp_model.CpSolverSolutionCallback):
"""Print intermediate solutions."""
def __init__(self, variables):
cp_model.CpSolverSolutionCallback.__init__(self)
self.__variables = variables
self.__solution_count = 0
def OnSolutionCallback(self):
print('Solution %i' % self.__solution_count)
print(' objective value = %i' % self.ObjectiveValue())
for v in self.__variables:
print(' %s = %i' % (v, self.Value(v)), end=' ')
print()
self.__solution_count += 1
def SolutionCount(self):
return self.__solution_count
def MinimalCpSatPrintIntermediateSolutions():
"""Showcases printing intermediate solutions found during search."""
# Creates the model.
model = cp_model.CpModel()
# Creates the variables.
num_vals = 3
x = model.NewIntVar(0, num_vals - 1, 'x')
y = model.NewIntVar(0, num_vals - 1, 'y')
z = model.NewIntVar(0, num_vals - 1, 'z')
# Creates the constraints.
model.Add(x != y)
model.Maximize(x + 2 * y + 3 * z)
# Creates a solver and solves.
solver = cp_model.CpSolver()
solution_printer = VarArrayAndObjectiveSolutionPrinter([x, y, z])
status = solver.SolveWithSolutionCallback(model, solution_printer)
print('Status = %s' % solver.StatusName(status))
print('Number of solutions found: %i' % solution_printer.SolutionCount())
class VarArraySolutionPrinter(cp_model.CpSolverSolutionCallback):
"""Print intermediate solutions."""
def __init__(self, variables):
cp_model.CpSolverSolutionCallback.__init__(self)
self.__variables = variables
self.__solution_count = 0
def OnSolutionCallback(self):
self.__solution_count += 1
for v in self.__variables:
print('%s=%i' % (v, self.Value(v)), end=' ')
print()
def SolutionCount(self):
return self.__solution_count
def MinimalCpSatAllSolutions():
"""Showcases calling the solver to search for all solutions."""
# Creates the model.
model = cp_model.CpModel()
# Creates the variables.
num_vals = 3
x = model.NewIntVar(0, num_vals - 1, 'x')
y = model.NewIntVar(0, num_vals - 1, 'y')
z = model.NewIntVar(0, num_vals - 1, 'z')
# Create the constraints.
model.Add(x != y)
# Create a solver and solve.
solver = cp_model.CpSolver()
solution_printer = VarArraySolutionPrinter([x, y, z])
status = solver.SearchForAllSolutions(model, solution_printer)
print('Status = %s' % solver.StatusName(status))
print('Number of solutions found: %i' % solution_printer.SolutionCount())
def SolvingLinearProblem():
"""CP-SAT linear solver problem."""
# Create a model.
model = cp_model.CpModel()
# x and y are integer non-negative variables.
x = model.NewIntVar(0, 17, 'x')
y = model.NewIntVar(0, 17, 'y')
model.Add(2 * x + 14 * y <= 35)
model.Add(2 * x <= 7)
obj_var = model.NewIntVar(0, 1000, 'obj_var')
model.Add(obj_var == x + 10 * y)
model.Maximize(obj_var)
# Create a solver and solve.
solver = cp_model.CpSolver()
status = solver.Solve(model)
if status == cp_model.OPTIMAL:
print('Objective value: %i' % solver.ObjectiveValue())
print()
print('x= %i' % solver.Value(x))
print('y= %i' % solver.Value(y))
def MinimalJobShop():
"""Minimal jobshop problem."""
# Create the model.
model = cp_model.CpModel()
machines_count = 3
jobs_count = 3
all_machines = range(0, machines_count)
all_jobs = range(0, jobs_count)
# Define data.
machines = [[0, 1, 2], [0, 2, 1], [1, 2]]
processing_times = [[3, 2, 2], [2, 1, 4], [4, 3]]
# Computes horizon.
horizon = 0
for job in all_jobs:
horizon += sum(processing_times[job])
task_type = collections.namedtuple('task_type', 'start end interval')
assigned_task_type = collections.namedtuple('assigned_task_type',
'start job index')
# Creates jobs.
all_tasks = {}
for job in all_jobs:
for index in range(0, len(machines[job])):
start_var = model.NewIntVar(0, horizon, 'start_%i_%i' % (job, index))
duration = processing_times[job][index]
end_var = model.NewIntVar(0, horizon, 'end_%i_%i' % (job, index))
interval_var = model.NewIntervalVar(start_var, duration, end_var,
'interval_%i_%i' % (job, index))
all_tasks[(job, index)] = task_type(
start=start_var, end=end_var, interval=interval_var)
# Creates sequence variables and add disjunctive constraints.
for machine in all_machines:
intervals = []
for job in all_jobs:
for index in range(0, len(machines[job])):
if machines[job][index] == machine:
intervals.append(all_tasks[(job, index)].interval)
model.AddNoOverlap(intervals)
    # Add precedence constraints.
for job in all_jobs:
for index in range(0, len(machines[job]) - 1):
model.Add(all_tasks[(job, index + 1)].start >= all_tasks[(job,
index)].end)
# Makespan objective.
obj_var = model.NewIntVar(0, horizon, 'makespan')
model.AddMaxEquality(
obj_var,
[all_tasks[(job, len(machines[job]) - 1)].end for job in all_jobs])
model.Minimize(obj_var)
# Solve model.
solver = cp_model.CpSolver()
status = solver.Solve(model)
if status == cp_model.OPTIMAL:
# Print out makespan.
print('Optimal Schedule Length: %i' % solver.ObjectiveValue())
print()
# Create one list of assigned tasks per machine.
assigned_jobs = [[] for _ in range(machines_count)]
for job in all_jobs:
for index in range(len(machines[job])):
machine = machines[job][index]
assigned_jobs[machine].append(
assigned_task_type(
start=solver.Value(all_tasks[(job, index)].start),
job=job,
index=index))
disp_col_width = 10
sol_line = ''
sol_line_tasks = ''
print('Optimal Schedule', '\n')
for machine in all_machines:
# Sort by starting time.
assigned_jobs[machine].sort()
sol_line += 'Machine ' + str(machine) + ': '
sol_line_tasks += 'Machine ' + str(machine) + ': '
for assigned_task in assigned_jobs[machine]:
name = 'job_%i_%i' % (assigned_task.job, assigned_task.index)
# Add spaces to output to align columns.
sol_line_tasks += name + ' ' * (disp_col_width - len(name))
start = assigned_task.start
duration = processing_times[assigned_task.job][assigned_task.index]
sol_tmp = '[%i,%i]' % (start, start + duration)
# Add spaces to output to align columns.
sol_line += sol_tmp + ' ' * (disp_col_width - len(sol_tmp))
sol_line += '\n'
sol_line_tasks += '\n'
print(sol_line_tasks)
print('Time Intervals for task_types\n')
print(sol_line)
print('--- CodeSample ---')
CodeSample()
print('--- LiteralSample ---')
LiteralSample()
print('--- BoolOrSample ---')
BoolOrSample()
print('--- ReifiedSample ---')
ReifiedSample()
print('--- RabbitsAndPheasants ---')
RabbitsAndPheasants()
print('--- BinpackingProblem ---')
BinpackingProblem()
print('--- IntervalSample ---')
IntervalSample()
print('--- OptionalIntervalSample ---')
OptionalIntervalSample()
print('--- MinimalCpSat ---')
MinimalCpSat()
print('--- MinimalCpSatWithTimeLimit ---')
MinimalCpSatWithTimeLimit()
print('--- MinimalCpSatPrintIntermediateSolutions ---')
MinimalCpSatPrintIntermediateSolutions()
print('--- MinimalCpSatAllSolutions ---')
MinimalCpSatAllSolutions()
print('--- SolvingLinearProblem ---')
SolvingLinearProblem()
print('--- MinimalJobShop ---')
MinimalJobShop()
| examples/notebook/code_samples_sat.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
"""
1. Plot a histogram for the column ‘wt’.
a. Map the ‘wt’ onto the x-axis.
b. Provide the x-axis label as ‘weight of the cars’.
c. Provide the y-axis label as ‘Count’
d. Set the number of bins as 30.
e. Set the title as ‘Histogram for the weight values’
"""
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
mtcars = pd.read_csv("C:\\Users\\black\\Desktop\\PyforDS\\datasets\\mtcars.csv")
# +
plt.hist(mtcars['wt'], bins = 30)
plt.xlabel('weight of the cars')
plt.ylabel('Count')
plt.title('Histogram for the weight values')
plt.show()
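# The binning can also be checked programmatically: numpy.histogram returns the per-bin
# counts and edges that plt.hist draws. A small sketch with synthetic weights standing in
# for the local mtcars.csv (which has 32 rows):

```python
import numpy as np

# Synthetic stand-in for mtcars['wt'] (32 car weights)
rng = np.random.default_rng(1)
weights = rng.normal(3.2, 0.9, 32)

# 30 bins -> 30 counts and 31 edges; every value falls in some bin
counts, edges = np.histogram(weights, bins=30)
print(counts.sum(), len(edges))
```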
| matplotlib_histogram.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="kCeYA79m1DEX"
# # Using side features: feature preprocessing
#
# + [markdown] id="TFJUp0Vdu-TG"
# ## Learning Objectives
#
# 1. Turning categorical features into embeddings.
# 2. Normalizing continuous features.
# 3. Processing text features.
# 4. Building a User and Movie model.
#
# ## Introduction
#
# One of the great advantages of using a deep learning framework to build recommender models is the freedom to build rich, flexible feature representations.
#
# The first step in doing so is preparing the features, as raw features will usually not be immediately usable in a model.
#
# For example:
#
# - User and item ids may be strings (titles, usernames) or large, noncontiguous integers (database IDs).
# - Item descriptions could be raw text.
# - Interaction timestamps could be raw Unix timestamps.
#
# These need to be appropriately transformed in order to be useful in building models:
#
# - User and item ids have to be translated into embedding vectors: high-dimensional numerical representations that are adjusted during training to help the model predict its objective better.
# - Raw text needs to be tokenized (split into smaller parts such as individual words) and translated into embeddings.
# - Numerical features need to be normalized so that their values lie in a small interval around 0.
#
# Fortunately, by using TensorFlow we can make such preprocessing part of our model rather than a separate preprocessing step. This is not only convenient, but also ensures that our pre-processing is exactly the same during training and during serving. This makes it safe and easy to deploy models that include even very sophisticated pre-processing.
#
# In this notebook, we are going to focus on recommenders and the preprocessing we need to do on the [MovieLens dataset](https://grouplens.org/datasets/movielens/). If you're interested in a larger tutorial without a recommender system focus, have a look at the full [Keras preprocessing guide](https://www.tensorflow.org/guide/keras/preprocessing_layers).
#
# Each learning objective will correspond to a __#TODO__ in the [student lab notebook](../labs/featurization.ipynb) -- try to complete that notebook first before reviewing this solution notebook.
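# The normalization point above can be sketched directly: standardizing a continuous
# feature such as a raw timestamp centers it near 0 with unit spread. The values below
# are illustrative, not taken from the dataset:

```python
import numpy as np

# Raw Unix timestamps (illustrative values)
timestamps = np.array([879024327, 875654590, 882075110], dtype=np.float64)

# Standardize so values lie in a small interval around 0
normalized = (timestamps - timestamps.mean()) / timestamps.std()
print(normalized.round(3))
```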
# + [markdown] id="dh8vCHpi52gD"
# ## The MovieLens dataset
#
# Let's first have a look at what features we can use from the MovieLens dataset:
# + id="N3oCG2SE-dgf" colab={"base_uri": "https://localhost:8080/"} outputId="ed2657c5-31ba-4534-e017-4e1a59ea7174"
# !pip install -q --upgrade tensorflow-datasets
# -
# Please re-run the above cell if you see any incompatibility warnings or errors.
# + id="BxQ_hy7xPH3N" colab={"base_uri": "https://localhost:8080/", "height": 493} outputId="4bd519ed-ae8a-40b5-c4b4-52e12bdedae7"
import pprint
import tensorflow_datasets as tfds
ratings = tfds.load("movielens/100k-ratings", split="train")
for x in ratings.take(1).as_numpy_iterator():
pprint.pprint(x)
# + [markdown] id="_6ypp_nVub8J"
# There are a couple of key features here:
#
# - Movie title is useful as a movie identifier.
# - User id is useful as a user identifier.
# - Timestamps will allow us to model the effect of time.
#
# The first two are categorical features; timestamps are a continuous feature.
# + [markdown] id="cp2rd--gvW9w"
# ## Turning categorical features into embeddings
#
# A [categorical feature](https://en.wikipedia.org/wiki/Categorical_variable) is a feature that does not express a continuous quantity, but rather takes on one of a set of fixed values.
#
# Most deep learning models express these features by turning them into high-dimensional vectors. During model training, the value of that vector is adjusted to help the model predict its objective better.
#
# For example, suppose that our goal is to predict which user is going to watch which movie. To do that, we represent each user and each movie by an embedding vector. Initially, these embeddings will take on random values - but during training, we will adjust them so that embeddings of users and the movies they watch end up closer together.
#
# Taking raw categorical features and turning them into embeddings is normally a two-step process:
#
# 1. Firstly, we need to translate the raw values into a range of contiguous integers, normally by building a mapping (called a "vocabulary") that maps raw values ("Star Wars") to integers (say, 15).
# 2. Secondly, we need to take these integers and turn them into embeddings.
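# The two steps can be sketched without TensorFlow: a dict-based vocabulary maps raw
# strings to contiguous integers (index 0 reserved for out-of-vocabulary tokens), and
# those integers index rows of an embedding matrix. A minimal NumPy sketch with
# illustrative titles and sizes:

```python
import numpy as np

titles = ["Star Wars (1977)", "Toy Story (1995)", "Heat (1995)"]

# Step 1: vocabulary -- index 0 is reserved for out-of-vocabulary tokens
vocab = {title: i + 1 for i, title in enumerate(titles)}
def lookup(title):
    return vocab.get(title, 0)

# Step 2: embedding table -- one row per id (random init stands in for trained values)
embedding_dim = 4
embeddings = np.random.default_rng(0).normal(size=(len(vocab) + 1, embedding_dim))

ids = [lookup(t) for t in ["Star Wars (1977)", "Unknown Movie"]]
vectors = embeddings[ids]
print(ids)            # [1, 0] -- a known title and the OOV bucket
print(vectors.shape)  # (2, 4)
```

# In Keras, `StringLookup` plays the role of `lookup` and an `Embedding` layer plays the
# role of the `embeddings` matrix, with its rows updated during training.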
# + [markdown] id="aa-7so1D_9B2"
# ### Defining the vocabulary
#
# The first step is to define a vocabulary. We can do this easily using Keras preprocessing layers.
# + id="IkA1HOXKyaEo"
import numpy as np
import tensorflow as tf
movie_title_lookup = tf.keras.layers.experimental.preprocessing.StringLookup()
# + [markdown] id="7We60Iduy2SP"
# The layer itself does not have a vocabulary yet, but we can build it using our data.
# + id="GKluOy3ly7Pg" colab={"base_uri": "https://localhost:8080/"} outputId="208e82f9-e6f3-4ad9-a792-c18eb350707f"
movie_title_lookup.adapt(ratings.map(lambda x: x["movie_title"]))
print(f"Vocabulary: {movie_title_lookup.get_vocabulary()[:3]}")
# + [markdown] id="1cH2Je_KBQZy"
# Once we have this we can use the layer to translate raw tokens to embedding ids:
# + id="zXYpfmWDBVOq" colab={"base_uri": "https://localhost:8080/"} outputId="5f40c247-02cd-4719-8436-368c59e302a1"
movie_title_lookup(["Star Wars (1977)", "One Flew Over the Cuckoo's Nest (1975)"])
# + [markdown] id="PYXiq04dzTaq"
# Note that the layer's vocabulary includes one (or more!) unknown (or "out of vocabulary", OOV) tokens. This is really handy: it means that the layer can handle categorical values that are not in the vocabulary. In practical terms, this means that the model can continue to learn about and make recommendations even using features that have not been seen during vocabulary construction.
# + [markdown] id="qseZxzmBBJvv"
# ### Using feature hashing
#
# In fact, the `StringLookup` layer allows us to configure multiple OOV indices. If we do that, any raw value that is not in the vocabulary will be deterministically hashed to one of the OOV indices. The more such indices we have, the less likely it is that two different raw feature values will hash to the same OOV index. Consequently, if we have enough such indices the model should be able to train about as well as a model with an explicit vocabulary without the disadvantage of having to maintain the token list.
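The mechanism behind hashing unknown values into one of several OOV buckets can be sketched in plain Python; here `zlib.crc32` is a stand-in for whatever hash function the layer uses internally:

```python
import zlib

def oov_bucket(token, num_oov_indices):
    # Deterministically assign an out-of-vocabulary token to one of
    # num_oov_indices buckets; more buckets means fewer collisions.
    return zlib.crc32(token.encode("utf-8")) % num_oov_indices

print(oov_bucket("Unseen Movie (2020)", 5))
print(oov_bucket("Unseen Movie (2020)", 5))  # same token, same bucket
```

TF's layers use their own internal hash, so the bucket assignments here won't match theirs; only the mechanism is the same.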
# + [markdown] id="t0gOaMjJAC17"
# We can take this to its logical extreme and rely entirely on feature hashing, with no vocabulary at all. This is implemented in the `tf.keras.layers.experimental.preprocessing.Hashing` layer.
# + id="1Os5gwGxzSaG"
# We set up a large number of bins to reduce the chance of hash collisions.
num_hashing_bins = 200_000
movie_title_hashing = tf.keras.layers.experimental.preprocessing.Hashing(
num_bins=num_hashing_bins
)
# + [markdown] id="rvcVNCzNB8GE"
# We can do the lookup as before without the need to build vocabularies:
# + id="OkEWdeflCAY6" colab={"base_uri": "https://localhost:8080/"} outputId="ade1c060-a2a7-4dfa-c15f-c034d8140d59"
movie_title_hashing(["Star Wars (1977)", "One Flew Over the Cuckoo's Nest (1975)"])
# + [markdown] id="-QFinPDA0LxM"
# ### Defining the embeddings
#
# Now that we have integer ids, we can use the [`Embedding`](https://www.tensorflow.org/api_docs/python/tf/keras/layers/Embedding) layer to turn those into embeddings.
#
# An embedding layer has two dimensions: the first dimension tells us how many distinct categories we can embed; the second tells us how large the vector representing each of them can be.
#
# When creating the embedding layer for movie titles, we are going to set the first value to the size of our title vocabulary (or the number of hashing bins). The second is up to us: the larger it is, the higher the capacity of the model, but the slower it is to fit and serve.
# + id="RUftFomv0nGO"
# Turns positive integers (indexes) into dense vectors of fixed size.
movie_title_embedding = tf.keras.layers.Embedding(
# Let's use the explicit vocabulary lookup.
input_dim=movie_title_lookup.vocab_size(),
output_dim=32
)
# + [markdown] id="8JNyTTQq1RIw"
# We can put the two together into a single layer which takes raw text in and yields embeddings.
# + id="RSbQd_mn1YYe"
movie_title_model = tf.keras.Sequential([movie_title_lookup, movie_title_embedding])
# + [markdown] id="4QoA9YHw1gQc"
# Just like that, we can directly get the embeddings for our movie titles:
# + id="T-s6uPqM1fZz" colab={"base_uri": "https://localhost:8080/"} outputId="fb8ed692-c30e-45cb-de79-34031f8f7921"
movie_title_model(["Star Wars (1977)"])
# + [markdown] id="2chJv4jTSg04"
# We can do the same with user embeddings:
# + id="3ot3bfX8SgWT"
user_id_lookup = tf.keras.layers.experimental.preprocessing.StringLookup()
user_id_lookup.adapt(ratings.map(lambda x: x["user_id"]))
user_id_embedding = tf.keras.layers.Embedding(user_id_lookup.vocab_size(), 32)
user_id_model = tf.keras.Sequential([user_id_lookup, user_id_embedding])
# + [markdown] id="abZNsN3oDf1F"
# ## Normalizing continuous features
#
# Continuous features also need normalization. For example, the `timestamp` feature is far too large to be used directly in a deep model:
# + id="GGcKKOyLDsEY" colab={"base_uri": "https://localhost:8080/"} outputId="1bc2b554-f0f7-42d7-ab99-4f1f4ad552cc"
for x in ratings.take(3).as_numpy_iterator():
print(f"Timestamp: {x['timestamp']}.")
# + [markdown] id="4aL_GMuaEBy0"
# We need to process it before we can use it. While there are many ways in which we can do this, discretization and standardization are two common ones.
# + [markdown] id="iCe-ch7eENNR"
# ### Standardization
#
# [Standardization](https://en.wikipedia.org/wiki/Feature_scaling#Standardization_(Z-score_Normalization)) rescales features to normalize their range by subtracting the feature's mean and dividing by its standard deviation. It is a common preprocessing transformation.
#
# This can be easily accomplished using the [`tf.keras.layers.experimental.preprocessing.Normalization`](https://www.tensorflow.org/api_docs/python/tf/keras/layers/experimental/preprocessing/Normalization) layer:
# + id="WxPsx6iSLGrp" colab={"base_uri": "https://localhost:8080/"} outputId="4d047c63-6e97-40b1-fa1d-082bc6a5e741"
# Feature-wise normalization of the data.
timestamp_normalization = tf.keras.layers.experimental.preprocessing.Normalization()
timestamp_normalization.adapt(ratings.map(lambda x: x["timestamp"]).batch(1024))
for x in ratings.take(3).as_numpy_iterator():
print(f"Normalized timestamp: {timestamp_normalization(x['timestamp'])}.")
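As a plain-Python sanity check of what the layer computes (the timestamp values below are made up, not taken from the dataset):

```python
# Standardization: subtract the mean, divide by the standard deviation.
values = [874724710.0, 882075110.0, 879024327.0]  # hypothetical raw timestamps

mean = sum(values) / len(values)
variance = sum((v - mean) ** 2 for v in values) / len(values)
std = variance ** 0.5

standardized = [(v - mean) / std for v in values]
print(standardized)  # roughly zero-mean, unit-variance values
```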
# + [markdown] id="zW1B974ZPn71"
# ### Discretization
#
# Another common transformation is to turn a continuous feature into a number of categorical features. This makes good sense if we have reasons to suspect that a feature's effect is non-continuous.
#
# To do this, we first need to establish the boundaries of the buckets we will use for discretization. The easiest way is to identify the minimum and maximum value of the feature, and divide the resulting interval equally:
# + id="YlJK0rYyQGEf" colab={"base_uri": "https://localhost:8080/"} outputId="c51c3906-99ef-4cf3-8b01-0005dd84ea4b"
max_timestamp = ratings.map(lambda x: x["timestamp"]).reduce(
tf.cast(0, tf.int64), tf.maximum).numpy().max()
min_timestamp = ratings.map(lambda x: x["timestamp"]).reduce(
np.int64(1e9), tf.minimum).numpy().min()
timestamp_buckets = np.linspace(
min_timestamp, max_timestamp, num=1000)
print(f"Buckets: {timestamp_buckets[:3]}")
# + [markdown] id="iPS3fh5JQhkO"
# Given the bucket boundaries we can transform timestamps into embeddings:
# + id="VCizNzPkQmwK" colab={"base_uri": "https://localhost:8080/"} outputId="3310cff5-1d4d-4c8a-ea64-498f88caa62f"
timestamp_embedding_model = tf.keras.Sequential([
tf.keras.layers.experimental.preprocessing.Discretization(timestamp_buckets.tolist()),
tf.keras.layers.Embedding(len(timestamp_buckets) + 1, 32)
])
for timestamp in ratings.take(1).map(lambda x: x["timestamp"]).batch(1).as_numpy_iterator():
print(f"Timestamp embedding: {timestamp_embedding_model(timestamp)}.")
# + [markdown] id="BWOg0NlGEeWh"
# ## Processing text features
#
# We may also want to add text features to our model. Usually, things like product descriptions are free form text, and we can hope that our model can learn to use the information they contain to make better recommendations, especially in a cold-start or long tail scenario.
#
# While the MovieLens dataset does not give us rich textual features, we can still use movie titles. This may help us capture the fact that movies with very similar titles are likely to belong to the same series.
#
# The first transformation we need to apply to text is tokenization (splitting into constituent words or word-pieces), followed by vocabulary learning, followed by an embedding.
#
# The Keras [`tf.keras.layers.experimental.preprocessing.TextVectorization`](https://www.tensorflow.org/api_docs/python/tf/keras/layers/experimental/preprocessing/TextVectorization) layer can do the first two steps for us:
# + id="TdRa-_BXF7IJ"
# Text vectorization layer.
title_text = tf.keras.layers.experimental.preprocessing.TextVectorization()
title_text.adapt(ratings.map(lambda x: x["movie_title"]))
# + [markdown] id="rJkYkgMQGxHL"
# Let's try it out:
# + id="YAIj7TGOHAXs" colab={"base_uri": "https://localhost:8080/"} outputId="e36dc61f-a222-4c35-8efb-f9c45bc92881"
for row in ratings.batch(1).map(lambda x: x["movie_title"]).take(1):
print(title_text(row))
# + [markdown] id="CsQi_QGSH0it"
# Each title is translated into a sequence of tokens, one for each piece we've tokenized.
#
# We can check the learned vocabulary to verify that the layer is using the correct tokenization:
# + id="0gkJtiNyHzKq" colab={"base_uri": "https://localhost:8080/"} outputId="09b969a6-b250-4ee3-bdb1-518efaf7eb2b"
title_text.get_vocabulary()[40:45]
# + [markdown] id="V_v-HFg0ICQS"
# This looks correct: the layer is tokenizing titles into individual words.
#
# To finish the processing, we now need to embed the text. Because each title contains multiple words, we will get multiple embeddings for each title. For use in a downstream model these are usually compressed into a single embedding. Models like RNNs or Transformers are useful here, but averaging all the words' embeddings together is a good starting point.
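Averaging per-word embeddings into one fixed-size title embedding can be sketched without TF (the vectors below are made up):

```python
# Hypothetical per-word embedding vectors for the title "star wars".
word_embeddings = [
    [0.1, -0.2, 0.4],   # "star"
    [0.3, 0.0, -0.2],   # "wars"
]

# Average element-wise to get one fixed-size vector per title,
# regardless of how many words the title contains.
dim = len(word_embeddings[0])
title_embedding = [
    sum(vec[i] for vec in word_embeddings) / len(word_embeddings)
    for i in range(dim)
]
print(title_embedding)
```

This is exactly what `GlobalAveragePooling1D` does for us in the movie model later in this notebook.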
# + [markdown] id="RomTZJ6N-z3Y"
# ## Putting it all together
#
# With these components in place, we can build a model that does all the preprocessing together.
# + [markdown] id="HMukupD2ggQh"
# ### User model
#
# The full user model may look like the following:
# + id="TL_eYNyD-80t"
class UserModel(tf.keras.Model):
def __init__(self):
super().__init__()
self.user_embedding = tf.keras.Sequential([
user_id_lookup,
tf.keras.layers.Embedding(user_id_lookup.vocab_size(), 32),
])
self.timestamp_embedding = tf.keras.Sequential([
tf.keras.layers.experimental.preprocessing.Discretization(timestamp_buckets.tolist()),
tf.keras.layers.Embedding(len(timestamp_buckets) + 2, 32)
])
self.normalized_timestamp = tf.keras.layers.experimental.preprocessing.Normalization()
def call(self, inputs):
# Take the input dictionary, pass it through each input layer,
# and concatenate the result.
return tf.concat([
self.user_embedding(inputs["user_id"]),
self.timestamp_embedding(inputs["timestamp"]),
self.normalized_timestamp(inputs["timestamp"])
], axis=1)
# + [markdown] id="6brsz6mnDZV2"
# Let's try it out:
# + id="LJlCFMgTDdC4" colab={"base_uri": "https://localhost:8080/"} outputId="15f9fd1b-d778-4cc3-a5a4-7a1a9f593748"
user_model = UserModel()
user_model.normalized_timestamp.adapt(
ratings.map(lambda x: x["timestamp"]).batch(128))
for row in ratings.batch(1).take(1):
print(f"Computed representations: {user_model(row)[0, :3]}")
# + [markdown] id="F_-_kmurEN4E"
# ### Movie model
# We can do the same for the movie model:
# + id="n5k7fGmcETPl"
class MovieModel(tf.keras.Model):
def __init__(self):
super().__init__()
max_tokens = 10_000
self.title_embedding = tf.keras.Sequential([
movie_title_lookup,
tf.keras.layers.Embedding(movie_title_lookup.vocab_size(), 32)
])
self.title_text_embedding = tf.keras.Sequential([
tf.keras.layers.experimental.preprocessing.TextVectorization(max_tokens=max_tokens),
tf.keras.layers.Embedding(max_tokens, 32, mask_zero=True),
# We average the embedding of individual words to get one embedding vector
# per title.
tf.keras.layers.GlobalAveragePooling1D(),
])
def call(self, inputs):
return tf.concat([
self.title_embedding(inputs["movie_title"]),
self.title_text_embedding(inputs["movie_title"]),
], axis=1)
# + [markdown] id="QzXelC5kJbsQ"
# Let's try it out:
# + id="Tq3BWpzhJapY" colab={"base_uri": "https://localhost:8080/"} outputId="9d21c872-163f-4d0d-8c7f-10ef4c5d37e9"
movie_model = MovieModel()
movie_model.title_text_embedding.layers[0].adapt(
ratings.map(lambda x: x["movie_title"]))
for row in ratings.batch(1).take(1):
print(f"Computed representations: {movie_model(row)[0, :3]}")
# + [markdown] id="-2dK71mPKoTw"
# ## Next steps
#
# With the two models above we've taken the first steps to representing rich features in a recommender model: to take this further and explore how these can be used to build an effective deep recommender model, take a look at our Deep Recommenders tutorial.
| courses/machine_learning/deepdive2/recommendation_systems/solutions/featurization.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Theta
# 1 contaminant
# 180924 field
# 1-P_chance prior
# theta to 3.5"
# %matplotlib notebook
# +
# imports
from importlib import reload
import numpy as np
from matplotlib import pyplot as plt
from astropy.table import Table
from astropy import units
from astropy.coordinates import SkyCoord
from frb.associate import bayesian
from frb.galaxies import hosts
# -
# # Load
sky = Table.read('tst_DES_180924.fits')
frbs = Table.read('tst_FRB_180924_faint_theta3.5.fits')
# # Model
sigR = 0.25 * units.arcsec
theta_u = dict(method='uniform', max=4.) # This is 0.5" larger than truth
fov = 6 * units.arcsec
ncontam = 1
# # Run
model_dict = bayesian.mock_run(sky, frbs, sigR, theta_u, fov, ncontam=ncontam)
# # Explore
# ## Parse
model_mags, model_theta, max_PMix = bayesian.parse_model(model_dict)
# ## Magnitudes
bins_mag = np.linspace(19., 25.5, 20)
plt.clf()
ax = plt.gca()
# True
weights = np.ones_like(frbs['DES_r'].data)/float(len(frbs))
ax.hist(frbs['DES_r'], weights=weights, bins=bins_mag, color='b', label='True', alpha=0.5)
# Recovered
weights2 = np.ones_like(model_mags)/model_mags.size
ax.hist(model_mags, weights=weights2, bins=bins_mag, color='k', label='Model', histtype='step')
#
ax.set_xlabel('r (mag)')
legend = ax.legend(loc='upper right', scatterpoints=1, borderpad=0.2,
fontsize='large')
#
plt.show()
# ## Theta
bins_theta = np.linspace(0., 5., 20)
plt.clf()
ax = plt.gca()
# True
weights1 = np.ones_like(frbs['theta'].data)/float(len(frbs))
ax.hist(frbs['theta'], weights=weights1, bins=bins_theta, color='b', label='True', alpha=0.5)
# Recovered
weights2 = np.ones_like(model_theta)/model_theta.size
ax.hist(model_theta, weights=weights2, bins=bins_theta, color='k', label='Model', histtype='step')
#
ax.set_xlabel('theta (arcsec)')
legend = ax.legend(loc='upper right', scatterpoints=1, borderpad=0.2,
fontsize='large')
#
plt.show()
| frb/associate/dev/Theta.ipynb |
# ---
# title: "Handling Long Lines Of Code"
# author: "<NAME>"
# date: 2020-07-07T11:53:49-07:00
# description: "How to handle long lines of code in Python."
# type: technical_note
# draft: false
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# Often engineers and data scientists run into a situation where they have a very long line of code. This is both ugly and breaks Pythonic best practices.
# ## Preliminaries
import statistics
# ## Create Data
ages_of_community_members = [39, 23, 55, 23, 53, 27, 34, 67, 32, 34, 56]
number_of_ages = [4, 4, 5, 6, 7, 8, 5, 7, 3, 2, 4]
# ## Create Long Line Of Code
member_years_by_age = [first_list_element * second_list_element for first_list_element, second_list_element in zip(ages_of_community_members, number_of_ages)]
# ## Shorten Long Line
#
# While you can use `\` to break up lines of code, a simpler and more readable option is to take advantage of the fact that line breaks are ignored inside `()`, `{}`, and `[]`. Then use comments to help the reader understand the line.
# Create a variable with the count of members per age
member_years_by_age = [# multiply the first list's element by the second list's element
first_list_element * second_list_element
# for the first list's elements and second list's element
for first_list_element, second_list_element
# for each element in a zip between the age of community members
in zip(ages_of_community_members,
# and the number of members by age
number_of_ages)
]
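For comparison, the `\` approach mentioned above looks like this, shown on a short copy of the data so the snippet stands alone; it works, but the bracket-based style above is generally preferred:

```python
# The same computation using an explicit backslash line continuation.
ages = [39, 23, 55]
counts = [4, 4, 5]

member_years = \
    [age * count
     for age, count in zip(ages, counts)]

print(member_years)  # [156, 92, 275]
```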
| docs/python/basics/handling_long_lines_of_code.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Running This Notebook
# This notebook should be run using the [github/mdtok container on DockerHub](https://cloud.docker.com/u/github/repository/docker/github/mdtok). The Dockerfile that defines this container is located at the root of this repository named: [cpu.Dockerfile](https://github.com/machine-learning-apps/IssuesLanguageModel/blob/master/cpu.Dockerfile)
#
# This will ensure that you are able to run this notebook properly as many of the dependencies in this project are rapidly changing. To run this notebook using this container, the commands are:
#
# Get the container: `docker pull github/mdtok`
#
# Run the container: `docker run -it --net=host -v <host_dir>:/ds github/mdtok bash`
from mdparse.parser import transform_pre_rules, compose
import pandas as pd
from tqdm import tqdm_notebook
from fastai.text.transform import defaults
# # Source of Data
# The [GHArchive project](https://www.gharchive.org/) ingests large amounts of data from GitHub repositories. This data is stored in [BigQuery](https://cloud.google.com/bigquery/docs/) for public consumption.
#
# For this project, we gathered over 18 million GitHub issues by executing [this query](https://console.cloud.google.com/bigquery?sq=1073071082706:8b10cab0a54b4884b8bf948e70f2f22f). This query attempts to remove duplicate issues where the content of the issue is roughly the same.
#
# This query results in over 18 Million GitHub issues. The results of this query are split into 100 csv files for free download on the following Google Cloud Storage Bucket:
#
# `https://storage.googleapis.com/issue_label_bot/language_model_data/0000000000{00-99}.csv.gz`, each file contains approximately 180,000 issues and is 55MB compressed.
# # Preview Data
# #### Download Sample
#
# The below dataframe illustrates what the format of the raw data looks like:
# +
df = pd.read_csv(f'https://storage.googleapis.com/issue_label_bot/language_model_data/000000000000.csv.gz').sample(5)
df.head(1)
# -
# #### Illustrate Markdown Parsing Using `mdparse`
#
# [mdparse](https://github.com/machine-learning-apps/mdparse) is a library that parses markdown text and annotates the text with fields with meta-data for deep learning. Below is an illustration of `mdparse` at work. The parsed and annotated text can be seen in the `clean_body` field:
#
# The changes are often subtle, but can make a big difference with regard to feature extraction for language modeling.
# +
pd.set_option('max_colwidth', 1000)
df['clean_body'] = ''
for i, b in tqdm_notebook(enumerate(df.body), total=len(df)):
try:
df['clean_body'].iloc[i] = compose(transform_pre_rules+defaults.text_pre_rules)(b)
except:
print(f'error at: {i}')
break
df[['body', 'clean_body']]
# -
# # Download And Pre-Process Data
#
# We download the data from GCP and pre-process this data before saving to disk.
from fastai.text.transform import ProcessPoolExecutor, partition_by_cores
import numpy as np
from fastai.core import parallel
from itertools import chain
transforms = transform_pre_rules + defaults.text_pre_rules
# +
def process_dict(dfdict, _):
"""process the data, but allow failure."""
t = compose(transforms)
title = dfdict['title']
body = dfdict['body']
try:
text = 'xxxfldtitle '+ t(title) + ' xxxfldbody ' + t(body)
except:
return None
return {'url': dfdict['url'], 'text':text}
def download_data(i, _):
"""Since the data is in 100 chunks already, just do the processing by chunk."""
fn = f'https://storage.googleapis.com/issue_label_bot/language_model_data/{str(i).zfill(12)}.csv.gz'
    dicts = [process_dict(d, 0) for d in pd.read_csv(fn).to_dict(orient='records')]
df = pd.DataFrame([d for d in dicts if d])
df.to_csv(f'/ds/IssuesLanguageModel/data/1_processed_csv/processed_part{str(i).zfill(4)}.csv', index=False)
return df
# -
# **Note**: The below procedure took over 30 hours on a [p3.8xlarge](https://aws.amazon.com/ec2/instance-types/p3/) instance on AWS with 32 Cores and 64GB of Memory. You may have to change the number of workers based on your memory and compute constraints.
dfs = parallel(download_data, list(range(100)), max_workers=31)
dfs_rows = sum([x.shape[0] for x in dfs])
print(f'number of rows in pre-processed data: {dfs_rows:,}')
del dfs
# ### Cached pre-processed data
#
# Since ~19M GitHub issues take a long time to pre-process, the pre-processed files are available here:
#
# `https://storage.googleapis.com/issue_label_bot/pre_processed_data/1_processed_csv/processed_part00{00-99}.csv`
# # Partition Data Into Train/Validation Set
# Set aside 10 random files (out of 100) as the validation set
# +
from pathlib import Path
from random import shuffle
# shuffle the files
p = Path('/ds/IssuesLanguageModel/data/1_processed_csv/')
files = p.ls()
shuffle(files)
# show a preview of files
files[:5]
# +
valid_df = pd.concat([pd.read_csv(f) for f in files[:10]]).dropna().drop_duplicates()
train_df = pd.concat([pd.read_csv(f) for f in files[10:]]).dropna().drop_duplicates()
print(f'rows in train_df:, {train_df.shape[0]:,}')
print(f'rows in valid_df:, {valid_df.shape[0]:,}')
# -
valid_df.to_hdf('/ds/IssuesLanguageModel/data/2_partitioned_df/valid_df.hdf', key='df')
train_df.to_hdf('/ds/IssuesLanguageModel/data/2_partitioned_df/train_df.hdf', key='df')
# ### Location of Train/Validation DataFrames
#
# You can download the above saved dataframes (in hdf format) from Google Cloud Storage:
#
# **train_df.hdf (9GB)**:
#
# `https://storage.googleapis.com/issue_label_bot/pre_processed_data/2_partitioned_df/train_df.hdf`
#
# **valid_df.hdf (1GB)**
#
# `https://storage.googleapis.com/issue_label_bot/pre_processed_data/2_partitioned_df/valid_df.hdf`
| Issue_Embeddings/notebooks/01_AcquireData.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
# %load_ext autoreload
# %autoreload 2
import pydicom as dm
import numpy as np
import os
import sys
import matplotlib.pyplot as plt
import cv2
import imageio
import shutil
# My scripts
os.chdir('../')
from src import image_manip
# -
# ## Dicom Image manipulation
# +
#os.chdir('../')
dicom = dm.dcmread('1-1.dcm')
array = dicom.pixel_array
rows, cols = array.shape
row_inc = int(round(0.05*rows))
col_inc = int(round(0.05*cols))
arr = array[row_inc:rows-row_inc, col_inc:cols-col_inc]
image = cv2.resize(arr, (int(cols * 0.4), int(rows * 0.4)))
image = cv2.normalize(image, None, 0, 255, cv2.NORM_MINMAX)
image = np.uint8(image)
print(os.getcwd())
cv2.imwrite("testimage4.jpg", image)
# -
os.getcwd()
# +
# Full directory
path = '/home/maureen/Documents/Galvanize/Capstone1/Capstone3/Cancer_Prediction/data/CBIS-DDSM'
os.chdir(path)
dirs = [d for d in os.listdir()]
for d in dirs:
path = os.path.join(os.getcwd(), d)
for root,dirs,files in os.walk(path):
for f in files:
file_path = os.path.join(root,f)
#print(file_path)
try:
dicom = dm.dcmread(file_path)
array = dicom.pixel_array
# Crop 10% off all sides
rows, cols = array.shape
row_inc = int(round(0.05*rows))
col_inc = int(round(0.05*cols))
arr = array[row_inc:rows-row_inc, col_inc:cols-col_inc]
# Save as image. Matplotlib adds lots of crap we don't want
image = cv2.resize(arr, (int(cols * 0.4), int(rows * 0.4)))
image = cv2.normalize(image, None, 0, 255, cv2.NORM_MINMAX)
image = np.uint8(image)
cv2.imwrite(f'{d}.png', image)
except:
print(d)
# -
# ## Normal mammograms (ljpeg)
# Cropping and resizing mammograms. This will eventually be integrated into fixing AR
files = [f for f in os.listdir(path)]
for f in files:
image_manip.crop_mammograms(f)
# +
# Cropping and resizing mammogram images
path = '/home/maureen/Documents/Galvanize/Capstone1/Capstone3/Cancer_Prediction/data/Mammograms/normals/mlo'
os.chdir(path)
img_path = 'A_0200_1.RIGHT_MLO.jpg'
def crop_mammograms(img_path):
# Read image
im = cv2.imread(img_path)
image_name = os.path.splitext(img_path)[0]
# Crop and normalize
rows, cols, channels = im.shape
row_inc = int(round(0.05*rows))
col_inc = int(round(0.05*cols))
arr = im[row_inc:rows-row_inc, col_inc:cols-col_inc, :]
image = cv2.resize(arr, (int(cols * 0.3), int(rows * 0.3)))
cv2.normalize(image, None, 0, 255, cv2.NORM_MINMAX)
# Save
image = np.uint8(image)
cv2.imwrite(f'{image_name}.png', image)
return 0
crop_mammograms(img_path)
# -
# ## Changing AR and size
path = '/home/maureen/Documents/Galvanize/Capstone1/Capstone3/Cancer_Prediction/data/Mammograms/raw_images/cc/'
os.chdir(path)
files = [f for f in os.listdir() if '.png' in f]
for f in files:
image_manip.uniform_size(f)
os.chdir('/home/maureen/Documents/Galvanize/Capstone1/Capstone3/Cancer_Prediction')
# ## Image exploration
# +
# Check image channels
img_path = 'data/Mammograms/normals/MLO/A_0200_1.RIGHT_MLO_ar.png'
im_io = imageio.imread(img_path)
im_cv = cv2.imread(img_path)
im_cv = cv2.normalize(im_cv, None, 0, 255, cv2.NORM_MINMAX)
plt.imshow(im_cv)
# +
## Separating cancer and non cancer images
path = '/home/maureen/Documents/Galvanize/Capstone1/Capstone3/Cancer_Prediction/data/Mammograms/raw_images/cancers'
os.chdir(path)
overlay_files = [f for f in os.listdir(path) if 'OVERLAY' in f]
image_files = [f for f in os.listdir(path) if '.png' in f]
overlay_names = [os.path.splitext(f)[0] for f in overlay_files]
print(len(overlay_files), len(image_files))
i = 0
for name in overlay_names:
if name+'.png' in image_files:
shutil.move(f'{name}.png', f'overlay/{name}.png')
# -
| notebooks/Image_Manipulation.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="Tce3stUlHN0L"
# ##### Copyright 2020 The TensorFlow Authors.
# + cellView="form" id="tuOe1ymfHZPu"
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# + [markdown] id="MfBg1C5NB3X0"
# # Model Averaging
#
# <table class="tfo-notebook-buttons" align="left">
# <td>
# <a target="_blank" href="https://www.tensorflow.org/addons/tutorials/average_optimizers_callback"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a>
# </td>
# <td>
# <a target="_blank" href="https://colab.research.google.com/github/tensorflow/addons/blob/master/docs/tutorials/average_optimizers_callback.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
# </td>
# <td>
# <a target="_blank" href="https://github.com/tensorflow/addons/blob/master/docs/tutorials/average_optimizers_callback.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
# </td>
# <td>
# <a href="https://storage.googleapis.com/tensorflow_docs/addons/docs/tutorials/average_optimizers_callback.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a>
# </td>
# </table>
#
# + [markdown] id="xHxb-dlhMIzW"
# ## Overview
#
# This notebook demonstrates how to use the Moving Average optimizer along with the Model Average Checkpoint from the TensorFlow Addons package.
#
# + [markdown] id="o2UNySlpXkbl"
# ## Moving Averaging
#
# > The advantage of moving averages is that they are less prone to rampant loss shifts or irregular data representation in the latest batch. They give a smoother and more general idea of the model's training up to some point.
#
# ## Stochastic Averaging
#
# > Stochastic Weight Averaging converges to wider optima. By doing so, it resembles geometric ensembling. SWA is a simple method to improve model performance when used as a wrapper around other optimizers and averaging results from different points of the trajectory of the inner optimizer.
#
# ## Model Average Checkpoint
#
# > `callbacks.ModelCheckpoint` doesn't give you the option to save moving average weights in the middle of training, which is why model-averaging optimizers required a custom callback. Using the ```update_weights``` parameter, ```AverageModelCheckpoint``` allows you to:
# 1. Assign the moving average weights to the model, and save them.
# 2. Keep the old non-averaged weights, but the saved model uses the average weights.
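At its core, the moving-average update these wrappers apply to model weights is an exponential moving average. A minimal plain-Python sketch (the decay value and weights are illustrative, and this is not TFA's exact implementation):

```python
def update_moving_average(avg_weights, new_weights, decay=0.99):
    # avg <- decay * avg + (1 - decay) * new, applied element-wise.
    return [decay * a + (1.0 - decay) * w
            for a, w in zip(avg_weights, new_weights)]

avg = [0.0, 0.0]
for step_weights in ([1.0, 2.0], [1.0, 2.0], [1.0, 2.0]):
    avg = update_moving_average(avg, step_weights, decay=0.5)
print(avg)  # drifts toward the raw weights, but smoothly
```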
# + [markdown] id="MUXex9ctTuDB"
# ## Setup
# + id="sXEOqj5cIgyW"
# !pip install -U tensorflow-addons
# + id="IqR2PQG4ZaZ0"
import tensorflow as tf
import tensorflow_addons as tfa
# + id="4hnJ2rDpI38-"
import numpy as np
import os
# + [markdown] id="Iox_HZNNYLEB"
# ## Build Model
# + id="KtylpxOmceaC"
def create_model(opt):
model = tf.keras.models.Sequential([
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(64, activation='relu'),
tf.keras.layers.Dense(64, activation='relu'),
tf.keras.layers.Dense(10, activation='softmax')
])
model.compile(optimizer=opt,
loss='sparse_categorical_crossentropy',
metrics=['accuracy'])
return model
# + [markdown] id="pwdM2pl3RSPb"
# ## Prepare Dataset
# + id="mMOeXVmbdilM"
#Load Fashion MNIST dataset
train, test = tf.keras.datasets.fashion_mnist.load_data()
images, labels = train
images = images/255.0
labels = labels.astype(np.int32)
fmnist_train_ds = tf.data.Dataset.from_tensor_slices((images, labels))
fmnist_train_ds = fmnist_train_ds.shuffle(5000).batch(32)
test_images, test_labels = test
# + [markdown] id="iEbhI_eajpJe"
# We will be comparing three optimizers here:
#
# * Unwrapped SGD
# * SGD with Moving Average
# * SGD with Stochastic Weight Averaging
#
# And see how they perform with the same model.
# + id="_Q76K1fNk7Va"
#Optimizers
sgd = tf.keras.optimizers.SGD(0.01)
moving_avg_sgd = tfa.optimizers.MovingAverage(sgd)
stocastic_avg_sgd = tfa.optimizers.SWA(sgd)
# + [markdown] id="nXlMX4p9qHwg"
# Both the ```MovingAverage``` and ```SWA``` optimizers use the ```AverageModelCheckpoint``` callback.
# + id="SnvZjt34qEHY"
#Callback
checkpoint_path = "./training/cp-{epoch:04d}.ckpt"
checkpoint_dir = os.path.dirname(checkpoint_path)
cp_callback = tf.keras.callbacks.ModelCheckpoint(filepath=checkpoint_dir,
save_weights_only=True,
verbose=1)
avg_callback = tfa.callbacks.AverageModelCheckpoint(filepath=checkpoint_dir,
update_weights=True)
# + [markdown] id="uabQmjMtRtzs"
# ## Train Model
#
# + [markdown] id="SPmifETHmPix"
# ### Vanilla SGD Optimizer
# + id="Xy8W4LYppadJ"
#Build Model
model = create_model(sgd)
#Train the network
model.fit(fmnist_train_ds, epochs=5, callbacks=[cp_callback])
# + id="uU2iQ6HAZ6-E"
#Evaluate results
model.load_weights(checkpoint_dir)
loss, accuracy = model.evaluate(test_images, test_labels, batch_size=32, verbose=2)
print("Loss :", loss)
print("Accuracy :", accuracy)
# + [markdown] id="lAvhD4unmc6W"
# ### Moving Average SGD
# + id="--NIjBp-mhVb"
#Build Model
model = create_model(moving_avg_sgd)
#Train the network
model.fit(fmnist_train_ds, epochs=5, callbacks=[avg_callback])
# + id="zRAym9EBmnW9"
#Evaluate results
model.load_weights(checkpoint_dir)
loss, accuracy = model.evaluate(test_images, test_labels, batch_size=32, verbose=2)
print("Loss :", loss)
print("Accuracy :", accuracy)
# + [markdown] id="K98lbU07m_Bk"
# ### Stochastic Weight Average SGD
# + id="Ia7ALKefnXWQ"
#Build Model
model = create_model(stocastic_avg_sgd)
#Train the network
model.fit(fmnist_train_ds, epochs=5, callbacks=[avg_callback])
# + id="EOT2E9NBoeHI"
#Evaluate results
model.load_weights(checkpoint_dir)
loss, accuracy = model.evaluate(test_images, test_labels, batch_size=32, verbose=2)
print("Loss :", loss)
print("Accuracy :", accuracy)
| docs/tutorials/average_optimizers_callback.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
#     language: python
# name: python3
# ---
# + id="PF3wW89RPfOk" colab_type="code" colab={}
from google.colab import drive
import pandas as pd
import numpy as np
# + id="6fZwhiWAP6FB" colab_type="code" colab={}
# #!pip install datadotworld
# #!pip install datadotworld[pandas]
# + id="Vr6cCKu8TAxN" colab_type="code" colab={}
# #!dw configure
# + id="D4lEhcuNTHsM" colab_type="code" colab={}
import datadotworld as dw
# + id="Q8tZfFoiTL4S" colab_type="code" colab={}
#drive.mount('/content/drive')
# + id="r_ZGzfsgTZ1G" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 35} outputId="2bf21801-b66f-416a-e86a-2f0c0396700f" executionInfo={"status": "ok", "timestamp": 1581509389612, "user_tz": -60, "elapsed": 596, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "05162319889358748368"}}
# cd "drive/My Drive/Colab Notebooks/dataWorkshop"
# + id="l4bBUKzOTkmx" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 35} outputId="b2d5ca05-ec96-4cad-9b17-f5008d0f3316" executionInfo={"status": "ok", "timestamp": 1581509401617, "user_tz": -60, "elapsed": 2205, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "05162319889358748368"}}
# ls
# + id="skjKMusyTpXU" colab_type="code" colab={}
# !mkdir data
# + id="pL1RcAuKT9gN" colab_type="code" colab={}
# !echo 'data' > .gitignore
# + id="_ztbnKPdUTNW" colab_type="code" colab={}
# !git add .gitignore
# + id="hSpVhGlKUFRR" colab_type="code" colab={}
data = dw.load_dataset('datafiniti/mens-shoe-prices')
# + id="69-zGE8ZUqR7" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 129} outputId="e73a0d62-a6d1-46d1-df6f-5b9ce7a1806f" executionInfo={"status": "ok", "timestamp": 1581509718742, "user_tz": -60, "elapsed": 1930, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "05162319889358748368"}}
df = data.dataframes['7004_1']
df.shape
# + id="3nVWhmPdU3oo" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 591} outputId="89e771a8-0b1f-42c9-eb0e-7743820e319c" executionInfo={"status": "ok", "timestamp": 1581509742274, "user_tz": -60, "elapsed": 592, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "05162319889358748368"}}
df.sample(5)
# + id="dossFg5IU6_t" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 237} outputId="8a9b5e59-3805-4ea1-a9ab-b0ae5463cc78" executionInfo={"status": "ok", "timestamp": 1581509756483, "user_tz": -60, "elapsed": 576, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "05162319889358748368"}}
df.columns
# + id="qZbVJXKHVAZO" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 109} outputId="c263d84c-61d1-45ee-e1f2-9ba63aa4a795" executionInfo={"status": "ok", "timestamp": 1581509788943, "user_tz": -60, "elapsed": 585, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "05162319889358748368"}}
df.prices_currency.unique()
# + id="8GL0fL6jVGg3" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 274} outputId="9f19c4d1-06c4-4280-8cc5-8693fb04f1a7" executionInfo={"status": "ok", "timestamp": 1581509831754, "user_tz": -60, "elapsed": 587, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "05162319889358748368"}}
df.prices_currency.value_counts()
# + id="LrkGkpSJVQbp" colab_type="code" colab={}
dfUSD = df[df.prices_currency == 'USD'].copy()
# + id="ITTFkp_4Vjvk" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 35} outputId="92412671-1517-4c3c-974f-120c7ac2c3c5" executionInfo={"status": "ok", "timestamp": 1581509931066, "user_tz": -60, "elapsed": 593, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "05162319889358748368"}}
dfUSD.shape
# + id="1SUBIGL7Vsb0" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 283} outputId="9dc8ac19-42e8-4cce-f5e2-a04d7257fca5" executionInfo={"status": "ok", "timestamp": 1581510095152, "user_tz": -60, "elapsed": 798, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "05162319889358748368"}}
dfUSD.prices_amountmin = dfUSD.prices_amountmin.astype(float)  # np.float was removed in NumPy 1.24; use the builtin float
dfUSD.prices_amountmin.hist()
dfUSD.prices_amountmin.hist()
# + id="f5uWDxKrWTdf" colab_type="code" colab={}
filterMax = np.percentile(dfUSD['prices_amountmin'],99)
# + id="PLArjO11WsVd" colab_type="code" colab={}
dfUSDfilter = dfUSD[dfUSD['prices_amountmin'] < filterMax]
# + id="020QYr9XXLv_" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 283} outputId="2cf24232-cdaf-4442-fb3c-b50d80cf066f" executionInfo={"status": "ok", "timestamp": 1581510448773, "user_tz": -60, "elapsed": 857, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "05162319889358748368"}}
dfUSDfilter['prices_amountmin'].hist(bins=100)
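# The percentile cut used above can be mimicked without NumPy. A rough
# nearest-rank sketch (it does not reproduce `np.percentile`'s linear
# interpolation, but the filtering idea is the same):

```python
def percentile_cutoff(values, pct):
    # nearest-rank percentile: the value below which roughly pct% of the data falls
    s = sorted(values)
    k = max(0, min(len(s) - 1, int(round(pct / 100 * len(s))) - 1))
    return s[k]

prices = [1, 2, 3, 4, 5, 6, 7, 8, 9, 100]
cut = percentile_cutoff(prices, 90)
kept = [v for v in prices if v < cut]  # drops the extreme outlier
```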
# + id="_8uYO8rUYDcI" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 35} outputId="1a507686-69e4-49f0-b813-d9ed97d68774" executionInfo={"status": "ok", "timestamp": 1581510593095, "user_tz": -60, "elapsed": 2304, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "05162319889358748368"}}
# ls matrixOne/
# + id="rPxqaW1fYNpZ" colab_type="code" colab={}
# !git add matrixOne/day3.ipynb
# + id="QB6MTR23YR87" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 54} outputId="e22b8346-1ee2-4261-8baa-87d81a2a6837" executionInfo={"status": "ok", "timestamp": 1581511199279, "user_tz": -60, "elapsed": 2110, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "05162319889358748368"}}
# !git commit -m "Read Men's Shoe Prices dataset from data.world"
# + id="nvAanVHHYgeL" colab_type="code" colab={}
# !git config --global user.email "<EMAIL>"
# !git config --global user.name "Pauka"
# + id="s0t6B6QdYqSo" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 164} outputId="d4b80527-f0a5-43f6-accc-351e919de8c0" executionInfo={"status": "ok", "timestamp": 1581511206365, "user_tz": -60, "elapsed": 2002, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "05162319889358748368"}}
# !git push -u origin master
| matrixOne/day3m.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ### Pickle
# - When an object is saved to a file, it goes through a process called serialization.
# - Serialization
#     - The object and the file it is stored in have different data types.
#     - Serialization is the process of converting between these different data types.
#     - It also makes reading and writing files faster.
import pickle
class A:
def __init__(self,data):
self.data = data
def disp(self):
print(self.data)
obj = A("pickle test")
obj
# Save the object to a file
with open("obj.pkl", "wb") as f:
pickle.dump(obj,f)
# !ls | grep obj
# Load the object back from the file
with open("obj.pkl", "rb") as f:
load_obj = pickle.load(f)
load_obj.disp()
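# `pickle` can also serialize to bytes in memory, without touching the
# filesystem — a minimal stdlib example:

```python
import pickle

data = {"name": "pickle test", "nums": [1, 2, 3]}
blob = pickle.dumps(data)      # object -> bytes (serialization)
restored = pickle.loads(blob)  # bytes -> object (deserialization)
```

# Note: only unpickle data you trust — `pickle.load`/`loads` can execute
# arbitrary code embedded in the byte stream.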
| python/10_pickle.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import sqlite3
import pandas as pd
import numpy as np
from pymongo import MongoClient
from datetime import datetime
conn=sqlite3.connect('archive/FPA_FOD_20170508.sqlite')
df=pd.read_sql("""
SELECT * from fires""", con=conn)
df['CONT_TIME'].fillna("1200", inplace=True)
df['DISCOVERY_TIME'].fillna("1200", inplace=True)
# +
# Julian date of the Unix epoch (1970-01-01T00:00:00Z), i.e. 2440587.5
epoch = pd.to_datetime(0, unit='s').to_julian_date()
def fix_time(date, time):
if np.isnan(date):
return None
else:
date=pd.to_datetime(date - epoch, unit='D').strftime("%Y-%m-%d")
return pd.to_datetime(date + ' ' + time)
#combine the dates and times into one column as a complete datetime
#df['DISCOVERY_DATE']=pd.to_datetime(df['DISCOVERY_DATE'].astype(str) + ' ' + df['DISCOVERY_TIME'].astype(str))
#df['CONT_DATE']=pd.to_datetime(df['CONT_DATE'].astype(str) + ' ' + df['CONT_TIME'].astype(str))
df['DISCOVERY_DATE']=df.apply(lambda x: fix_time(x['DISCOVERY_DATE'],x['DISCOVERY_TIME']), axis=1)
# -
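# `fix_time` relies on the stored dates being Julian dates: subtracting `epoch`
# shifts them to days since the Unix epoch before `pd.to_datetime` converts them.
# The same conversion can be expressed with the stdlib alone (illustrative sketch,
# names are my own):

```python
from datetime import datetime, timedelta, timezone

UNIX_EPOCH_JD = 2440587.5  # Julian date of 1970-01-01T00:00:00Z

def julian_to_datetime(jd):
    # days since the Unix epoch, added onto 1970-01-01 UTC
    return datetime(1970, 1, 1, tzinfo=timezone.utc) + timedelta(days=jd - UNIX_EPOCH_JD)
```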
df['CONT_DATE']=df.apply(lambda x: fix_time(x['CONT_DATE'],x['CONT_TIME']), axis=1)
# +
#df['CONT_DATE'].fillna("None", inplace=True)
#df['DISCOVERY_DATE'].fillna("None", inplace=True)
df['CONT_DATE'] = df['CONT_DATE'].astype(object).where(df['CONT_DATE'].notnull(), None)
df['DISCOVERY_DATE'] = df['DISCOVERY_DATE'].astype(object).where(df['DISCOVERY_DATE'].notnull(), None)
# -
client=MongoClient('mongodb://famp.fishermenmedia.com:27017/')
db=client['fires']
firesCollection=db['fires']
df.reset_index(inplace=True)
df_dict=df.to_dict("records")
firesCollection.insert_many(df_dict)
firesCollection.count_documents({})
| load_to_mongodb.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import urllib.request
import json
import glob
import pandas as pd
import numpy as np
# Get data from sensors
# this cell gets data
URL = "http://192.168.127.12:8881/luftdatenGet/22FQ8dJEApww33p31935/9d93d9d8cv7js9sj4765s120sllkudp389cm/"
response = urllib.request.urlopen(URL)
data = json.loads(response.read())
print(data)
# read from json data to be read into pd.DataFrame
# +
columns = ['index','chip_id','P1','P2']
rows = []
for i, values in enumerate(data):
    chip_id_val = values.get('esp8266id')
    sensor_vals = values.get('sensordatavalues')
    P1 = sensor_vals[0]['value']
    P2 = sensor_vals[1]['value']
    rows.append([i, chip_id_val, P1, P2])
df_P = pd.DataFrame(rows, columns=columns)
print(df_P)
# -
# Get columns of interest
#this cells only selects interesting columns
#columns of interest are timestamp, PM10, PM2.5, temp, humidity
# NOTE: `df_in` is not defined anywhere above; the frame built in this notebook is `df_P`,
# and it currently only carries chip_id, P1 (PM10) and P2 (PM2.5) — no timestamp, temp or humidity yet.
df_P[['chip_id', 'P1', 'P2']]
# Resample Data to current data point and means for hour, week, month, year
# Output values from here to dashboard
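# The hourly-mean resampling described above would usually be done with pandas
# (`df.resample('H').mean()` on a DatetimeIndex). A stdlib-only sketch of the
# same bucketing idea (names here are illustrative):

```python
from collections import defaultdict
from datetime import datetime

def hourly_means(samples):
    """samples: iterable of (datetime, value) pairs; returns {hour_start: mean}."""
    buckets = defaultdict(list)
    for ts, v in samples:
        # truncate the timestamp to the start of its hour
        buckets[ts.replace(minute=0, second=0, microsecond=0)].append(v)
    return {h: sum(vs) / len(vs) for h, vs in buckets.items()}
```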
| notebooks/get_means.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# # Eurydice processing
#
# Process hack `.md` files:
#
# - split on `--`; first item is always assumed to be `TEXT`;
# - check for `TEXT`
# - add article entries to db;
# - add images to db;
# - rewrite articles to useful MyST format;
# - rewrite images to useful MyST format.
# +
import yaml
with open("_toc.yml", "r") as f:
toc = yaml.safe_load(f)
raw_files = [f["file"].strip('_') for p in toc["parts"] for f in p["chapters"] if f["file"].startswith("__")]
raw_files
# +
from sqlite_utils import Database
db_name = "eurydice-demo.db"
# Uncomment the following lines to connect to a pre-existing database
db = Database(db_name)
# +
# Do not run this cell if your database already exists!
# While developing the script, recreate database each time...
db = Database(db_name, recreate=True)
# +
# This schema has been evolved iteratively as I have identified structure
# that can be usefully mined...
db["sources"].create({
"url": str,
"fn": str,
"publication": str,
"published_date": str, # this may range from year to actual date
"title": int, # Title of section
"date": str, # optional; the second date field; may be eg correspondence date
"author": str, # attempt at provenance
"pages": str, # or pages like
"text": str,
},# pk=("url", "title") # Need an autoincrement; no natural key?
)
# Enable full text search
# This creates an extra virtual table (sources_fts) to support the full text search
db["sources"].enable_fts(["publication","title", "text", "published_date"], create_triggers=True)
# +
fn = "training_ship.md"
def get_file_contents(fn):
"""Open file from filename and get file contents."""
with open(fn) as f:
txt = f.read().strip()
return txt
txt = get_file_contents(fn)
txt[:100]
# +
def get_sections(txt):
"""Get sections from file."""
txt_sections = [s.strip('-').strip() for s in txt.split("--") if s.strip('-').strip()]
return txt_sections
txt_sections = get_sections(txt)
txt_sections
# +
# First section is text, but then we need to parse type
def structure_record(txt_sections):
typ_s = [("TEXT", txt_sections[0])]
for s in txt_sections[1:]:
s = s.strip()
if s.startswith("TEXT"):
typ_s.append(("TEXT", s.replace("TEXT","").strip()))
elif s.startswith("!["):
typ_s.append(("IMAGE", s))
else:
# Should we assume text unless we get eg an http at the start of a record?
typ_s.append(("RESOURCE", s))
return typ_s
typ_s = structure_record(txt_sections)
typ_s
# +
# #%pip install dateparser
# https://dateparser.readthedocs.io/en/latest/usage.html
from dateparser.date import DateDataParser
ddp = DateDataParser(languages=['en'])
# +
import re
dt = "%Y-%m-%d"
def parse_sections(txt_sections, fn=None):
"""Parse file section."""
records = []
for section in txt_sections:
txt_lines = [l.strip() for l in section.split('\n') if l.strip()]
#print(txt_lines)
record = {"fn":fn}
for i, line in enumerate(txt_lines):
line = line.strip()
# This is inefficient...
# We should test as fallback...
try_url = line.startswith("http")
try_date = ddp.get_date_data(line.replace('Publication date', '').strip())
try_pages = re.search(r"^pp?\.?\s?([0-9ivxlcm\?].*)", line)
if try_url:
record["url"] = line
elif try_date["date_obj"]:
if "published_date" in record:
record["date"] = try_date.date_obj.strftime(dt)
else:
record["published_date"] = try_date.date_obj.strftime(dt)
elif try_pages:
record["pages"] = try_pages.group(1)
elif not "publication" in record:
record["publication"] = line
# We take pages as the last item of metadata...
if try_pages:
break
txt = f'{record["pages"]}'.join(section.split(try_pages.group(0))[1:]).strip()
if len(txt.split("\n")[0]) > 200:
record["title"] = txt[:100]
record["text"] = txt[100:]
else:
record["title"] = txt.split("\n")[0]
record["text"] = txt.replace(record["title"], "").strip()
#if len(txt_lines[i+1])>200:
# record["title"] = txt_lines[i+1][:100]
# record["text"] = "\n\n".join(txt_lines[i+1:])[100:]
#else:
# record["title"] = txt_lines[i+1]
# record["text"] = "\n\n".join(txt_lines[i+2:])
records.append(record)
return records
# -
parse_sections([typ_s[1][1]])
# +
from datetime import datetime
import humanize
def admonition_generator(record):
"""Generate MyST admonition markdown for the record."""
dt_ = datetime.fromisoformat(record["published_date"])
# The humanize package gives us things like 3rd, 27th, etc.
daynum = humanize.ordinal(dt_.day)
# Format the date to something like: Wednesday, April 3rd, 1878
# # %A is the day of the week (Monday, Tuesday, etc.)
# # %B is the month (March, April, etc.)
# # %Y is the 4-digit year (eg 1878)
dt = dt_.strftime(f'%A, %B {daynum}, %Y')
admonition = f"""
```{{admonition}} {record["title"]} - {dt}
:class: note dropdown
[{record["publication"]}]({record["url"]}), {record["published_date"]}, p. {record["pages"]}
{record["text"]}
```
"""
return admonition
# -
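# `humanize.ordinal` supplies the "3rd", "27th" style suffixes used in
# `admonition_generator`. If that dependency is unavailable, a stdlib-only
# equivalent is short (a sketch, not the humanize implementation):

```python
def ordinal(n):
    # 1 -> '1st', 2 -> '2nd', 3 -> '3rd', but 11-13 -> 'th', and 21 -> '21st'
    if 10 <= n % 100 <= 20:
        suffix = "th"
    else:
        suffix = {1: "st", 2: "nd", 3: "rd"}.get(n % 10, "th")
    return f"{n}{suffix}"
```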
print(admonition_generator(parse_sections([typ_s[1][1]])[0]))
# +
# Parse image
# if we have an image, we need to pattern match until we get to a \n\n
# then create a figure and replace the original image matched pattern
xx="""
ssdsdd

Illustrated London News — H.M.S. Eurydice as she lay at eight a.m. on March 25 off Dunnose Point, Isle of Wight, April 6, 1878
asaa

Illustrated London News — H.M.S. Eurydice as she lay at eight a.m. on March 25 off Dunnose Point, Isle of Wight, April 6, 1878
asaa
"""
# The following says: .*? lazy search, (?=\n\n) lookahead to next \n\n
# re.MULTILINE | re.DOTALL give us the search over multiple lines
p = re.findall(r'!\[.*?(?=\n\n)', xx, re.MULTILINE | re.DOTALL)
p
# -
re.findall("!\[[^\]]*\]\(([^\)]*)\)(.*)$", p[0], re.MULTILINE | re.DOTALL)[0][1]
def generate_figure(doc):
images = re.findall(r'!\[.*?(?=\n\n)', doc, re.MULTILINE | re.DOTALL)
for image in images:
path = re.findall("!\[[^\]]*\]\(([^\)]*)\)(.*)$", image, re.MULTILINE | re.DOTALL)
if not path:
continue
txt = f"""
```{{figure}} {path[0][0]}
---
---
{path[0][1]}
```
"""
doc = doc.replace(image, txt)
return doc
# +
# Parse types
def create_admontions(typ_s):
parsed = []
for s in typ_s:
if s[0] =="TEXT" or s[0]=="IMAGE" or s[0].startswith("!["):
# parse image
parsed.append( s )
elif s[0] == "RESOURCE":
# parse resource
# Put things into an admonition block
#print("\n\n\*****\n"+s[1])
_parsed = admonition_generator(parse_sections([s[1]])[0])
parsed.append( (s[0], _parsed) )
else:
# This should be null
pass
return parsed
# -
parsed = create_admontions(typ_s)
# generate_figure() expects a markdown string, not a list of (type, text) tuples,
# so apply it to each section's text
parsed = [(typ, generate_figure(txt)) for typ, txt in parsed]
parsed
# +
myst_txt = "\n\n".join([t[1] for t in parsed ])
with open("test.md", "w") as f:
f.write(myst_txt)
# -
for fn in raw_files:
    txt = get_file_contents(f"{fn}.md")
    txt_sections = get_sections(txt)
    typ_s = structure_record(txt_sections)
    parsed = create_admontions(typ_s)
    # avoid shadowing the filename with the file handle
    with open(f"__{fn}.md", "w") as out_f:
        myst_txt = "\n\n".join([t[1] for t in parsed])
        myst_txt = generate_figure(myst_txt)
        out_f.write(myst_txt)
fn
| resources/Eurydice processing.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## Creating And Cleaning Features: Convert Categorical Features To Numeric
# ### Read In Data
# +
# Read in data
import pandas as pd
from sklearn.preprocessing import LabelEncoder
titanic_df = pd.read_csv('../Data/titanic_family_cnt.csv')
titanic_df.head()
# -
# ### Convert Categorical Features To Numeric
# +
# Convert categorical features to numeric levels
for feature in ['Sex', 'Cabin', 'Embarked', 'Embarked_clean', 'Title']:
le = LabelEncoder()
    titanic_df[feature] = le.fit_transform(titanic_df[feature].astype(str))  # cast to str so NaN becomes the label 'nan' instead of raising a mixed-type error
titanic_df.head()
# -
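# What `LabelEncoder` does under the hood — assign each distinct value an
# integer in sorted order — can be sketched in plain Python:

```python
def label_encode(values):
    # map each distinct value to an integer by sorted order, like sklearn's LabelEncoder
    classes = sorted(set(values))
    mapping = {c: i for i, c in enumerate(classes)}
    return [mapping[v] for v in values]
```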
# Create new CSV with updated data
titanic_df.to_csv('../Data/titanic_numeric.csv', index=False)
| ML - Applied Machine Learning - Feature Engineering/04.Create and Clean Features/07.Convert Categorical Features To Numeric.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
#     language: python
# name: python3
# ---
# + [markdown] colab_type="text" id="view-in-github"
# <a href="https://colab.research.google.com/github/NeuromatchAcademy/course-content-dl/blob/main/tutorials/W1D2_LinearDeepLearning/student/W1D2_Tutorial3.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# -
# # Tutorial 3: Deep linear neural networks
# **Week 1, Day 2: Linear Deep Learning**
#
# **By Neuromatch Academy**
#
# __Content creators:__ <NAME>, <NAME>, <NAME>
#
# __Content reviewers:__ <NAME>, <NAME>
#
# __Content editors:__ <NAME>
#
# __Production editors:__ <NAME>, <NAME>
#
# **Our 2021 Sponsors, including Presenting Sponsor Facebook Reality Labs**
#
# <p align='center'><img src='https://github.com/NeuromatchAcademy/widgets/blob/master/sponsors.png?raw=True'/></p>
# ---
# # Tutorial Objectives
#
# * Deep linear neural networks
# * Learning dynamics and singular value decomposition
# * Representational Similarity Analysis
# * Illusory correlations & ethics
# + cellView="form"
# @title Tutorial slides
# @markdown These are the slides for the videos in this tutorial
from IPython.display import IFrame
IFrame(src=f"https://mfr.ca-1.osf.io/render?url=https://osf.io/bncr8/?direct%26mode=render%26action=download%26mode=render", width=854, height=480)
# -
# ---
# # Setup
# +
# Imports
import math
import torch
import matplotlib
import numpy as np
import matplotlib.pyplot as plt
import torch.nn as nn
import torch.optim as optim
# + cellView="form"
# @title Figure settings
from matplotlib import gridspec
from ipywidgets import interact, IntSlider, FloatSlider, fixed
from ipywidgets import FloatLogSlider, Layout, VBox
from ipywidgets import interactive_output
from mpl_toolkits.axes_grid1 import make_axes_locatable
import warnings
warnings.filterwarnings("ignore")
# %config InlineBackend.figure_format = 'retina'
plt.style.use("https://raw.githubusercontent.com/NeuromatchAcademy/content-creation/main/nma.mplstyle")
# + cellView="form"
# @title Plotting functions
def plot_x_y_hier_data(im1, im2, subplot_ratio=[1, 2]):
fig = plt.figure(figsize=(12, 5))
gs = gridspec.GridSpec(1, 2, width_ratios=subplot_ratio)
ax0 = plt.subplot(gs[0])
ax1 = plt.subplot(gs[1])
ax0.imshow(im1, cmap="cool")
ax1.imshow(im2, cmap="cool")
# plt.suptitle("The whole dataset as imshow plot", y=1.02)
ax0.set_title("Labels of all samples")
ax1.set_title("Features of all samples")
ax0.set_axis_off()
ax1.set_axis_off()
plt.tight_layout()
plt.show()
def plot_x_y_hier_one(im1, im2, subplot_ratio=[1, 2]):
fig = plt.figure(figsize=(12, 1))
gs = gridspec.GridSpec(1, 2, width_ratios=subplot_ratio)
ax0 = plt.subplot(gs[0])
ax1 = plt.subplot(gs[1])
ax0.imshow(im1, cmap="cool")
ax1.imshow(im2, cmap="cool")
ax0.set_title("Labels of a single sample")
ax1.set_title("Features of a single sample")
ax0.set_axis_off()
ax1.set_axis_off()
plt.tight_layout()
plt.show()
def plot_tree_data(label_list = None, feature_array = None, new_feature = None):
cmap = matplotlib.colors.ListedColormap(['cyan', 'magenta'])
n_features = 10
n_labels = 8
im1 = np.eye(n_labels)
if feature_array is None:
im2 = np.array([[1, 1, 1, 1, 1, 1, 1, 1],
[0, 0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 1, 1, 1, 1],
[1, 1, 1, 1, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 1, 1],
[0, 0, 1, 1, 0, 0, 0, 0],
[1, 1, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 1, 1, 0, 0],
[0, 1, 1, 1, 0, 0, 0, 0],
[0, 0, 0, 0, 1, 1, 0, 1]]).T
im2[im2 == 0] = -1
feature_list = ['can_grow',
'is_mammal',
'has_leaves',
'can_move',
'has_trunk',
'can_fly',
'can_swim',
'has_stem',
'is_warmblooded',
'can_flower']
else:
im2 = feature_array
if label_list is None:
label_list = ['Goldfish', 'Tuna', 'Robin', 'Canary',
'Rose', 'Daisy', 'Pine', 'Oak']
fig = plt.figure(figsize=(12, 7))
gs = gridspec.GridSpec(1, 2, width_ratios=[1, 1.35])
ax1 = plt.subplot(gs[0])
ax2 = plt.subplot(gs[1])
ax1.imshow(im1, cmap=cmap)
if feature_array is None:
implt = ax2.imshow(im2, cmap=cmap, vmin=-1.0, vmax=1.0)
else:
implt = ax2.imshow(im2[:, -n_features:], cmap=cmap, vmin=-1.0, vmax=1.0)
divider = make_axes_locatable(ax2)
cax = divider.append_axes("right", size="5%", pad=0.1)
cbar = plt.colorbar(implt, cax=cax, ticks=[-0.5, 0.5])
cbar.ax.set_yticklabels(['no', 'yes'])
ax1.set_title("Labels")
ax1.set_yticks(ticks=np.arange(n_labels))
ax1.set_yticklabels(labels=label_list)
ax1.set_xticks(ticks=np.arange(n_labels))
ax1.set_xticklabels(labels=label_list, rotation='vertical')
ax2.set_title("{} random Features".format(n_features))
ax2.set_yticks(ticks=np.arange(n_labels))
ax2.set_yticklabels(labels=label_list)
if feature_array is None:
ax2.set_xticks(ticks=np.arange(n_features))
ax2.set_xticklabels(labels=feature_list, rotation='vertical')
else:
ax2.set_xticks(ticks=[n_features-1])
ax2.set_xticklabels(labels=[new_feature], rotation='vertical')
plt.tight_layout()
plt.show()
def plot_loss(loss_array, title="Training loss (Mean Squared Error)", c="r"):
plt.figure(figsize=(10, 5))
plt.plot(loss_array, color=c)
plt.xlabel("Epoch")
plt.ylabel("MSE")
plt.title(title)
plt.show()
def plot_loss_sv(loss_array, sv_array):
n_sing_values = sv_array.shape[1]
sv_array = sv_array / np.max(sv_array)
cmap = plt.cm.get_cmap("Set1", n_sing_values)
_, (plot1, plot2) = plt.subplots(2, 1, sharex=True, figsize=(10, 10))
plot1.set_title("Training loss (Mean Squared Error)")
plot1.plot(loss_array, color='r')
plot2.set_title("Evolution of singular values (modes)")
for i in range(n_sing_values):
plot2.plot(sv_array[:, i], c=cmap(i))
plot2.set_xlabel("Epoch")
plt.show()
def plot_loss_sv_twin(loss_array, sv_array):
n_sing_values = sv_array.shape[1]
sv_array = sv_array / np.max(sv_array)
cmap = plt.cm.get_cmap("winter", n_sing_values)
fig = plt.figure(figsize=(10, 5))
ax1 = plt.gca()
ax1.set_title("Learning Dynamics")
ax1.set_xlabel("Epoch")
ax1.set_ylabel("Mean Squared Error", c='r')
ax1.tick_params(axis='y', labelcolor='r')
ax1.plot(loss_array, color='r')
ax2 = ax1.twinx()
ax2.set_ylabel("Singular values (modes)", c='b')
ax2.tick_params(axis='y', labelcolor='b')
for i in range(n_sing_values):
ax2.plot(sv_array[:, i], c=cmap(i))
fig.tight_layout()
plt.show()
def plot_ills_sv_twin(ill_array, sv_array, ill_label):
n_sing_values = sv_array.shape[1]
sv_array = sv_array / np.max(sv_array)
cmap = plt.cm.get_cmap("winter", n_sing_values)
fig = plt.figure(figsize=(10, 5))
ax1 = plt.gca()
ax1.set_title("Network training and the Illusory Correlations")
ax1.set_xlabel("Epoch")
ax1.set_ylabel(ill_label, c='r')
ax1.tick_params(axis='y', labelcolor='r')
ax1.plot(ill_array, color='r', linewidth=3)
ax1.set_ylim(-1.05, 1.05)
# ax1.set_yticks([-1, 0, 1])
# ax1.set_yticklabels(['False', 'Not sure', 'True'])
ax2 = ax1.twinx()
ax2.set_ylabel("Singular values (modes)", c='b')
ax2.tick_params(axis='y', labelcolor='b')
for i in range(n_sing_values):
ax2.plot(sv_array[:, i], c=cmap(i))
fig.tight_layout()
plt.show()
def plot_loss_sv_rsm(loss_array, sv_array, rsm_array, i_ep):
n_ep = loss_array.shape[0]
rsm_array = rsm_array / np.max(rsm_array)
sv_array = sv_array / np.max(sv_array)
n_sing_values = sv_array.shape[1]
cmap = plt.cm.get_cmap("winter", n_sing_values)
fig = plt.figure(figsize=(14, 5))
gs = gridspec.GridSpec(1, 2, width_ratios=[5, 3])
ax0 = plt.subplot(gs[1])
ax0.yaxis.tick_right()
implot = ax0.imshow(rsm_array[i_ep], cmap="Purples", vmin=0.0, vmax=1.0)
divider = make_axes_locatable(ax0)
cax = divider.append_axes("right", size="5%", pad=0.9)
cbar = plt.colorbar(implot, cax=cax, ticks=[])
cbar.ax.set_ylabel('Similarity', fontsize=12)
ax0.set_title("RSM at epoch {}".format(i_ep), fontsize=16)
# ax0.set_axis_off()
ax0.set_yticks(ticks=np.arange(n_sing_values))
ax0.set_yticklabels(labels=item_names)
# ax0.set_xticks([])
ax0.set_xticks(ticks=np.arange(n_sing_values))
ax0.set_xticklabels(labels=item_names, rotation='vertical')
ax1 = plt.subplot(gs[0])
ax1.set_title("Learning Dynamics", fontsize=16)
ax1.set_xlabel("Epoch")
ax1.set_ylabel("Mean Squared Error", c='r')
ax1.tick_params(axis='y', labelcolor='r', direction="in")
ax1.plot(np.arange(n_ep), loss_array, color='r')
ax1.axvspan(i_ep-2, i_ep+2, alpha=0.2, color='m')
ax2 = ax1.twinx()
ax2.set_ylabel("Singular values", c='b')
ax2.tick_params(axis='y', labelcolor='b', direction="in")
for i in range(n_sing_values):
ax2.plot(np.arange(n_ep), sv_array[:, i], c=cmap(i))
ax1.set_xlim(-1, n_ep+1)
ax2.set_xlim(-1, n_ep+1)
plt.show()
# + cellView="form"
#@title Helper functions
def build_tree(n_levels, n_branches, probability, to_np_array=True):
"""Builds a tree
"""
assert 0.0 <= probability <= 1.0
tree = {}
tree["level"] = [0]
for i in range(1, n_levels+1):
tree["level"].extend([i]*(n_branches**i))
tree["pflip"] = [probability]*len(tree["level"])
tree["parent"] = [None]
k = len(tree["level"])-1
for j in range(k//n_branches):
tree["parent"].extend([j]*n_branches)
if to_np_array:
tree["level"] = np.array(tree["level"])
tree["pflip"] = np.array(tree["pflip"])
tree["parent"] = np.array(tree["parent"])
return tree
def sample_from_tree(tree, n):
""" Generates n samples from a tree
"""
items = [i for i, v in enumerate(tree["level"]) if v == max(tree["level"])]
n_items = len(items)
x = np.zeros(shape=(n, n_items))
rand_temp = np.random.rand(n, len(tree["pflip"]))
flip_temp = np.repeat(tree["pflip"].reshape(1, -1), n, 0)
samp = (rand_temp > flip_temp) * 2 - 1
for i in range(n_items):
j = items[i]
prop = samp[:, j]
while tree["parent"][j] is not None:
j = tree["parent"][j]
prop = prop * samp[:, j]
x[:, i] = prop.T
return x
def generate_hsd():
# building the tree
n_branches = 2 # 2 branches at each node
probability = .15 # flipping probability
n_levels = 3 # number of levels (depth of tree)
tree = build_tree(n_levels, n_branches, probability, to_np_array=True)
tree["pflip"][0] = 0.5
n_samples = 10000 # Sample this many features
tree_labels = np.eye(n_branches**n_levels)
tree_features = sample_from_tree(tree, n_samples).T
return tree_labels, tree_features
def linear_regression(X, Y):
"""Analytical Linear regression
"""
assert isinstance(X, np.ndarray)
assert isinstance(Y, np.ndarray)
M, Dx = X.shape
N, Dy = Y.shape
assert Dx == Dy
W = Y @ X.T @ np.linalg.inv(X @ X.T)
return W
def add_feature(existing_features, new_feature):
assert isinstance(existing_features, np.ndarray)
assert isinstance(new_feature, list)
new_feature = np.array([new_feature]).T
    # return np.hstack((existing_features, new_feature*2-1))
    # use the passed-in features rather than the global `tree_features`
    return np.hstack((existing_features, new_feature))
def net_svd(model, in_dim):
"""Performs a Singular Value Decomposition on a given model weights
Args:
model (torch.nn.Module): neural network model
in_dim (int): the input dimension of the model
Returns:
U, Σ, V (Tensors): Orthogonal, diagonal, and orthogonal matrices
"""
W_tot = torch.eye(in_dim)
for weight in model.parameters():
W_tot = weight.detach() @ W_tot
U, SIGMA, V = torch.svd(W_tot)
return U, SIGMA, V
def net_rsm(h):
"""Calculates the Representational Similarity Matrix
Arg:
h (torch.Tensor): activity of a hidden layer
Returns:
(torch.Tensor): Representational Similarity Matrix
"""
rsm = h @ h.T
return rsm
def initializer_(model, gamma=1e-12):
"""(in-place) Re-initialization of weights
Args:
model (torch.nn.Module): PyTorch neural net model
gamma (float): initialization scale
"""
for weight in model.parameters():
n_out, n_in = weight.shape
sigma = gamma / math.sqrt(n_in + n_out)
nn.init.normal_(weight, mean=0.0, std=sigma)
def test_initializer_ex(seed):
torch.manual_seed(seed)
model = LNNet(5000, 5000, 1)
try:
ex_initializer_(model, gamma=1)
std = torch.std(next(iter(model.parameters())).detach()).item()
if -1e-5 <= (std - 0.01) <= 1e-5:
print("Well done! Seems to be correct!")
else:
print("Please double check your implementation!")
except:
print("Faulty Implementation!")
def test_net_svd_ex(seed):
torch.manual_seed(seed)
model = LNNet(8, 30, 100)
try:
U_ex, Σ_ex, V_ex = ex_net_svd(model, 8)
U, Σ, V = net_svd(model, 8)
if (torch.all(torch.isclose(U_ex.detach(), U.detach(), atol=1e-6)) and
torch.all(torch.isclose(Σ_ex.detach(), Σ.detach(), atol=1e-6)) and
torch.all(torch.isclose(V_ex.detach(), V.detach(), atol=1e-6))):
print("Well done! Seems to be correct!")
else:
print("Please double check your implementation!")
except:
print("Faulty Implementation!")
def test_net_rsm_ex(seed):
torch.manual_seed(seed)
x = torch.rand(7, 17)
try:
y_ex = ex_net_rsm(x)
y = x @ x.T
if (torch.all(torch.isclose(y_ex, y, atol=1e-6))):
print("Well done! Seems to be correct!")
else:
print("Please double check your implementation!")
except:
print("Faulty Implementation!")
# + cellView="form"
#@title Set random seed
#@markdown Executing `set_seed(seed=seed)` you are setting the seed
# For DL it's critical to set the random seed so that students can have a
# baseline to compare their results to expected results.
# Read more here: https://pytorch.org/docs/stable/notes/randomness.html
# Call `set_seed` function in the exercises to ensure reproducibility.
import random
import torch
def set_seed(seed=None, seed_torch=True):
if seed is None:
seed = np.random.choice(2 ** 32)
random.seed(seed)
np.random.seed(seed)
if seed_torch:
torch.manual_seed(seed)
torch.cuda.manual_seed_all(seed)
torch.cuda.manual_seed(seed)
torch.backends.cudnn.benchmark = False
torch.backends.cudnn.deterministic = True
print(f'Random seed {seed} has been set.')
# In case that `DataLoader` is used
def seed_worker(worker_id):
worker_seed = torch.initial_seed() % 2**32
np.random.seed(worker_seed)
random.seed(worker_seed)
# + cellView="form"
#@title Set device (GPU or CPU). Execute `set_device()`
# especially if torch modules used.
# inform the user if the notebook uses GPU or CPU.
def set_device():
device = "cuda" if torch.cuda.is_available() else "cpu"
if device != "cuda":
print("WARNING: For this notebook to perform best, "
"if possible, in the menu under `Runtime` -> "
"`Change runtime type.` select `GPU` ")
else:
print("GPU is enabled in this notebook.")
return device
# -
SEED = 2021
set_seed(seed=SEED)
DEVICE = set_device()
# This colab notebook is GPU free!
# ---
# # Section 0: Prelude
#
# Throughout this tutorial, we will use a linear neural net with a single hidden layer. We have also excluded `bias` from the layers.
#
# **Important to remember**: Besides the network output (prediction), the forward pass also returns the hidden activation. We will need it in Section 3.
class LNNet(nn.Module):
"""A Linear Neural Net with one hidden layer
"""
def __init__(self, in_dim, hid_dim, out_dim):
"""
Args:
in_dim (int): input dimension
out_dim (int): output dimension
hid_dim (int): hidden dimension
"""
super().__init__()
self.in_hid = nn.Linear(in_dim, hid_dim, bias=False)
self.hid_out = nn.Linear(hid_dim, out_dim, bias=False)
def forward(self, x):
"""
Args:
x (torch.Tensor): input tensor
"""
hid = self.in_hid(x) # hidden activity
out = self.hid_out(hid) # output (prediction)
return out, hid
# Other than the `net_svd` and `net_rsm` functions, the training loop should be mostly familiar to you. We will define these functions in the coming sections.
#
# **Important**: Note that the two functions are part of the inner training loop, so they are executed and their results recorded at every iteration.
def train(model, inputs, targets, n_epochs, lr, illusory_i=0):
"""Training function
Args:
model (torch nn.Module): the neural network
inputs (torch.Tensor): features (input) with shape `[batch_size, input_dim]`
targets (torch.Tensor): targets (labels) with shape `[batch_size, output_dim]`
n_epochs (int): number of training epochs (iterations)
lr (float): learning rate
illusory_i (int): index of illusory feature
Returns:
np.ndarray: record (evolution) of training loss
np.ndarray: record (evolution) of singular values (dynamic modes)
np.ndarray: record (evolution) of representational similarity matrices
np.ndarray: record of network prediction for the last feature
"""
in_dim = inputs.size(1)
losses = np.zeros(n_epochs) # loss records
modes = np.zeros((n_epochs, in_dim)) # singular values (modes) records
rs_mats = [] # representational similarity matrices
illusions = np.zeros(n_epochs) # prediction for the given feature
optimizer = optim.SGD(model.parameters(), lr=lr)
criterion = nn.MSELoss()
for i in range(n_epochs):
optimizer.zero_grad()
predictions, hiddens = model(inputs)
loss = criterion(predictions, targets)
loss.backward()
optimizer.step()
# Section 2 Singular value decomposition
U, Σ, V = net_svd(model, in_dim)
# Section 3 calculating representational similarity matrix
RSM = net_rsm(hiddens.detach())
# Section 4 network prediction of illusory_i inputs for the last feature
pred_ij = predictions.detach()[illusory_i, -1]
# logging (recordings)
losses[i] = loss.item()
modes[i] = Σ.detach().numpy()
rs_mats.append(RSM.numpy())
illusions[i] = pred_ij.numpy()
return losses, modes, np.array(rs_mats), illusions
# We also need to take over the initialization of the weights. In PyTorch, [`nn.init`](https://pytorch.org/docs/stable/nn.init.html) provides functions to initialize tensors from a given distribution.
#
# **Important**: To make sure the plots are correct (so the tutorial's message is delivered), we test your exercise implementations, but we do not use them for the plots and training.
# ## Coding Exercise 0: Re-initialization
#
# Complete the function `ex_initializer_`, such that the weights are sampled from the following distribution:
#
# \begin{equation}
# \mathcal{N}\left(\mu=0, ~~\sigma=\gamma \sqrt{\dfrac{1}{n_{in} + n_{out}}} \right)
# \end{equation}
#
# where $\gamma$ is the initialization scale, and $n_{in}$ and $n_{out}$ are, respectively, the input and output dimensions of the layer. The underscore ("_") in `ex_initializer_` and other function names denotes an "[in-place](https://discuss.pytorch.org/t/what-is-in-place-operation/16244/2)" operation.
#
# **Important note**: Since we did not include bias in the layers, `model.parameters()` returns only the weights of each layer.
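# As a quick numerical sanity check of the formula above (a minimal NumPy sketch with arbitrary example dimensions, not the exercise solution): values drawn from $\mathcal{N}(0, \sigma)$ with $\sigma = \gamma \sqrt{1/(n_{in} + n_{out})}$ should show an empirical standard deviation close to $\sigma$.

```python
import numpy as np

rng = np.random.default_rng(0)

gamma = 1e-2          # initialization scale (arbitrary for this check)
n_in, n_out = 8, 30   # example layer dimensions

# standard deviation from the equation above
sigma = gamma * np.sqrt(1.0 / (n_in + n_out))

# draw many samples and compare the empirical spread to sigma
samples = rng.normal(loc=0.0, scale=sigma, size=100_000)
print(f"target sigma:    {sigma:.3e}")
print(f"empirical sigma: {samples.std():.3e}")
```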
# +
def ex_initializer_(model, gamma=1e-12):
"""(in-place) Re-initialization of weights
Args:
model (torch.nn.Module): PyTorch neural net model
gamma (float): initialization scale
"""
for weight in model.parameters():
n_out, n_in = weight.shape
#################################################
## Define the standard deviation (sigma) for the normal distribution
# as given in the equation above
# Complete the function and remove or comment the line below
raise NotImplementedError("Function `ex_initializer_`")
#################################################
sigma = ...
nn.init.normal_(weight, mean=0.0, std=sigma)
## uncomment and run
# test_initializer_ex(SEED)
# + [markdown] colab_type="text"
# [*Click for solution*](https://github.com/NeuromatchAcademy/course-content-dl/tree/main//tutorials/W1D2_LinearDeepLearning/solutions/W1D2_Tutorial3_Solution_34500ee0.py)
#
#
# -
# ---
# # Section 1: Deep Linear Neural Nets
# + cellView="form"
# @title Video 1: Intro to Representation Learning
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id=f"BV1iM4y1T7eJ", width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id=f"DqMSU4Bikt0", width=854, height=480, fs=1, rel=0)
print("Video available at https://youtube.com/watch?v=" + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
display(out)
# -
# So far, depth just seems to slow down the learning. And we know that a single nonlinear hidden layer (given a sufficient number of neurons and infinite training samples) has the potential to approximate any function. So it seems fair to ask: **What is depth good for?**
#
# One reason can be that shallow nonlinear neural networks hardly meet their true potential in practice. In contrast, deep neural nets are often surprisingly powerful in learning complex functions without sacrificing generalization. A core intuition behind deep learning is that deep nets derive their power through learning internal representations. How does this work? To address representation learning, we have to go beyond the 1D chain.
#
# For this and the next couple of exercises, we use synthetically generated, hierarchically structured data produced by a *branching diffusion process* (see [this reference](https://www.pnas.org/content/pnas/suppl/2019/05/16/1820226116.DCSupplemental/pnas.1820226116.sapp.pdf) for more details).
#
# <center><img src="https://raw.githubusercontent.com/ssnio/statics/main/neuromatch/tree.png" alt="Simple nn graph" width="600"/></center>
#
# <center> hierarchically structured data (a tree) </center>
#
# The inputs to the network are labels (i.e., names), while the outputs are the features (i.e., attributes). For example, for the label "Goldfish", the network has to learn all the (artificially created) features, such as "*can swim*", "*is cold-blooded*", "*has fins*", and more. Given that we are training on hierarchically structured data, the network could also learn the tree structure: that Goldfish and Tuna have rather similar features, and that Robin has more in common with Tuna than with Rose.
# + cellView="form"
# @markdown #### Run to generate and visualize training samples from tree
tree_labels, tree_features = generate_hsd()
# convert (cast) data from np.ndarray to torch.Tensor
label_tensor = torch.tensor(tree_labels).float()
feature_tensor = torch.tensor(tree_features).float()
item_names = ['Goldfish', 'Tuna', 'Robin', 'Canary',
'Rose', 'Daisy', 'Pine', 'Oak']
plot_tree_data()
# dimensions
print("---------------------------------------------------------------")
print("Input Dimension: {}".format(tree_labels.shape[1]))
print("Output Dimension: {}".format(tree_features.shape[1]))
print("Number of samples: {}".format(tree_features.shape[0]))
# -
# To continue this tutorial, it is vital to understand the premise of our training data and what the task is. Therefore, please take your time to discuss them with your pod.
#
# <center><img src="https://raw.githubusercontent.com/ssnio/statics/main/neuromatch/neural_net.png" alt="neural net" width="600"/></center>
#
# <center> The neural network used for this tutorial </center>
# ## Interactive Demo 1: Training the deep LNN
#
# Training a neural net on our data is straightforward. But before executing the next cell, recall the training loss curve from the previous tutorial.
# + cellView="form"
# @markdown #### Make sure you execute this cell to train the network and plot
lr = 100.0 # learning rate
gamma = 1e-12 # initialization scale
n_epochs = 250 # number of epochs
dim_input = 8 # input dimension = `label_tensor.size(1)`
dim_hidden = 30 # hidden neurons
dim_output = 10000 # output dimension = `feature_tensor.size(1)`
# model instantiation
dlnn_model = LNNet(dim_input, dim_hidden, dim_output)
# weights re-initialization
initializer_(dlnn_model, gamma)
# training
losses, *_ = train(dlnn_model,
label_tensor,
feature_tensor,
n_epochs=n_epochs,
lr=lr)
# plotting
plot_loss(losses)
# -
# **Question**: Why haven't we seen these "bumps" in training before? And should we look for them in the future? What do these bumps mean?
#
# Recall from the previous tutorial that we are always interested in the learning rate ($\eta$) and initialization scale ($\gamma$) that give us the fastest yet stable (reliable) convergence. Try finding the optimal $\eta$ and $\gamma$ using the following widgets. More specifically, try a large $\gamma$ and see if you can recover the bumps by tuning $\eta$.
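# To build intuition for where the bumps come from, here is a hedged toy sketch (not the tutorial's model): gradient descent on the smallest possible "deep" net, a product of two scalar weights with tiny initialization, already produces a long plateau followed by a sudden transition.

```python
import numpy as np

# Toy model: prediction = w2 * w1 * x, target 1 for x = 1,
# loss L = 0.5 * (1 - w2 * w1) ** 2, with tiny initialization.
w1, w2 = 1e-3, 1e-3   # small initialization (analogous to gamma)
lr = 0.1              # learning rate (analogous to eta)

products = []
for _ in range(200):
    err = 1.0 - w2 * w1
    # simultaneous gradient descent step on both weights
    w1, w2 = w1 + lr * err * w2, w2 + lr * err * w1
    products.append(w1 * w2)

print(f"after  20 steps: {products[19]:.6f}")   # still on the plateau
print(f"after 200 steps: {products[-1]:.6f}")   # converged after the "bump"
```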
# + cellView="form"
# @markdown #### Make sure you execute this cell to enable the widget!
def loss_lr_init(lr, gamma):
"""Trains and plots the loss evolution given lr and initialization
Args:
lr (float): learning rate
gamma (float): initialization scale
"""
n_epochs = 250 # number of epochs
dim_input = 8 # input dimension = `label_tensor.size(1)`
dim_hidden = 30 # hidden neurons
dim_output = 10000 # output dimension = `feature_tensor.size(1)`
# model instantiation
dlnn_model = LNNet(dim_input, dim_hidden, dim_output)
# weights re-initialization
initializer_(dlnn_model, gamma)
losses, *_ = train(dlnn_model,
label_tensor,
feature_tensor,
n_epochs=n_epochs,
lr=lr)
plot_loss(losses)
_ = interact(loss_lr_init,
lr = FloatSlider(min=1.0, max=200.0,
step=1.0, value=100.0,
continuous_update=False,
readout_format='.1f',
description='eta'),
epochs = fixed(250),
gamma = FloatLogSlider(min=-15, max=1,
step=1, value=1e-12, base=10,
continuous_update=False,
description='gamma')
)
# -
# ---
# # Section 2: Singular Value Decomposition (SVD)
# + cellView="form"
# @title Video 2: SVD
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id=f"BV1bw411R7DJ", width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id=f"18oNWRziskM", width=854, height=480, fs=1, rel=0)
print("Video available at https://youtube.com/watch?v=" + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
display(out)
# -
# In this section, we intend to study the learning (training) dynamics we just saw. First, we should know that a linear neural network performs sequential matrix multiplications, which can be simplified to:
#
# \begin{align}
# \mathbf{y} &= \mathbf{W}_{L}~\mathbf{W}_{L-1}~\dots~\mathbf{W}_{1} ~ \mathbf{x} \\
# &= (\prod_{i=1}^{L}{\mathbf{W}_{i}}) ~ \mathbf{x} \\
# &= \mathbf{W}_{tot} ~ \mathbf{x}
# \end{align}
#
# where $L$ denotes the number of layers in our network.
#
# Learning through gradient descent is very much like the evolution of a dynamical system. Both are described by a set of differential equations. Dynamical systems often have a "time constant" that describes the rate of change, similar to the learning rate, only instead of time, gradient descent evolves through epochs.
#
# [Saxe et al. (2013)](https://arxiv.org/abs/1312.6120) showed that to analyze and understand the nonlinear learning dynamics of a deep LNN, we can use [Singular Value Decomposition (SVD)](https://en.wikipedia.org/wiki/Singular_value_decomposition) to decompose $\mathbf{W}_{tot}$ into orthogonal vectors, where the orthogonality of the vectors ensures their "individuality (independence)". This means we can break a deep wide LNN into multiple deep narrow LNNs, so their activities are untangled from each other.
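# The collapse of the layer-wise product into $\mathbf{W}_{tot}$ is easy to verify numerically. Below is a minimal sketch (using NumPy and arbitrary layer dimensions, not the tutorial's model): a layer-by-layer forward pass gives the same output as a single multiplication by the pre-computed product of the weight matrices.

```python
import numpy as np

rng = np.random.default_rng(0)

# weights of a hypothetical 3-layer LNN: 8 -> 30 -> 30 -> 10
Ws = [rng.normal(size=(30, 8)),
      rng.normal(size=(30, 30)),
      rng.normal(size=(10, 30))]
x = rng.normal(size=(8,))

# layer-by-layer forward pass
y_seq = x
for W in Ws:
    y_seq = W @ y_seq

# collapsed forward pass through W_tot = W_3 @ W_2 @ W_1
W_tot = Ws[2] @ Ws[1] @ Ws[0]
y_tot = W_tot @ x

print(np.allclose(y_seq, y_tot))
```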
#
# <br/>
#
# __A Quick intro to SVD__
#
# Any real-valued matrix $A$ (yes, ANY) can be decomposed (factorized) into three matrices:
#
# \begin{equation}
# \mathbf{A} = \mathbf{U} \mathbf{Σ} \mathbf{V}^{\top}
# \end{equation}
#
# where $U$ is an orthogonal matrix, $\Sigma$ is a diagonal matrix, and $V$ is again an orthogonal matrix. The diagonal elements of $\Sigma$ are called **singular values**.
#
# The main difference between SVD and eigenvalue decomposition (EVD) is that EVD requires $A$ to be square and does not guarantee orthogonal eigenvectors. For a complex-valued matrix $A$, the factorization changes to $A = U \Sigma V^*$, where $U$ and $V$ are unitary matrices.
#
# We strongly recommend the [Singular Value Decomposition (the SVD)](https://www.youtube.com/watch?v=mBcLRGuAFUk) lecture by the amazing [Gilbert Strang](http://www-math.mit.edu/~gs/) if you would like to learn more.
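# The factorization is easy to check numerically. Here is a minimal sketch using NumPy (note one convention difference: the exercise below uses `torch.svd`, which returns $V$ itself, whereas `np.linalg.svd` returns $V^\top$):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(5, 3))   # any real-valued matrix

U, S, Vh = np.linalg.svd(A, full_matrices=False)

# A is recovered as U @ diag(S) @ V^T
print(np.allclose(A, U @ np.diag(S) @ Vh))

# singular values are non-negative and sorted in descending order
print(S)

# the columns of U (and of V) are orthonormal
print(np.allclose(U.T @ U, np.eye(3)))
```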
#
# ## Coding Exercise 2: SVD
#
# The goal is to perform the SVD on $\mathbf{W}_{tot}$ in every epoch, and record the singular values (modes) during the training.
#
# Complete the function `ex_net_svd`, by first calculating the $\mathbf{W}_{tot} = \prod_{i=1}^{L}{\mathbf{W}_{i}}$ and finally performing SVD on the $\mathbf{W}_{tot}$. Please use the PyTorch [`torch.svd`](https://pytorch.org/docs/stable/generated/torch.svd.html) instead of NumPy [`np.linalg.svd`](https://numpy.org/doc/stable/reference/generated/numpy.linalg.svd.html).
# +
def ex_net_svd(model, in_dim):
"""Performs a Singular Value Decomposition on a given model weights
Args:
model (torch.nn.Module): neural network model
in_dim (int): the input dimension of the model
Returns:
U, Σ, V (Tensors): Orthogonal, diagonal, and orthogonal matrices
"""
W_tot = torch.eye(in_dim)
for weight in model.parameters():
#################################################
## Calculate the W_tot by multiplication of all weights
# and then perform SVD on the W_tot using pytorch's `torch.svd`
# Remember that weights need to be `.detach()` from the graph
# Complete the function and remove or comment the line below
raise NotImplementedError("Function `ex_net_svd`")
#################################################
W_tot = ...
U, Σ, V = ...
return U, Σ, V
## Uncomment and run
# test_net_svd_ex(SEED)
# + [markdown] colab_type="text"
# [*Click for solution*](https://github.com/NeuromatchAcademy/course-content-dl/tree/main//tutorials/W1D2_LinearDeepLearning/solutions/W1D2_Tutorial3_Solution_e15a99dc.py)
#
#
# + cellView="form"
# @markdown #### Make sure you execute this cell to train the network and plot
lr = 100.0 # learning rate
gamma = 1e-12 # initialization scale
n_epochs = 250 # number of epochs
dim_input = 8 # input dimension = `label_tensor.size(1)`
dim_hidden = 30 # hidden neurons
dim_output = 10000 # output dimension = `feature_tensor.size(1)`
# model instantiation
dlnn_model = LNNet(dim_input, dim_hidden, dim_output)
# weights re-initialization
initializer_(dlnn_model, gamma)
# training
losses, modes, *_ = train(dlnn_model,
label_tensor,
feature_tensor,
n_epochs=n_epochs,
lr=lr)
plot_loss_sv_twin(losses, modes)
# -
# **Questions**: In eigenvalue decomposition, the amount of variance explained by an eigenvector is proportional to its eigenvalue. What about the SVD? We see that gradient descent guides the network to first learn the features that carry more information (have a higher singular value)!
# + cellView="form"
# @title Video 3: SVD - Discussion
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id=f"BV1t54y1J7Tb", width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id=f"JEbRPPG2kUI", width=854, height=480, fs=1, rel=0)
print("Video available at https://youtube.com/watch?v=" + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
display(out)
# -
# ---
# # Section 3: Representational Similarity Analysis (RSA)
# + cellView="form"
# @title Video 4: RSA
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id=f"BV19f4y157zD", width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id=f"YOs1yffysX8", width=854, height=480, fs=1, rel=0)
print("Video available at https://youtube.com/watch?v=" + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
display(out)
# -
# The previous section ended with an interesting remark. SVD helped to break our deep "wide" linear neural net into 8 deep "narrow" linear neural nets. Although the naive interpretation could be that each narrow net is learning one item (e.g., Goldfish), the structure of the modes' evolution implies something deeper. The first narrow net (highest singular value) converges fastest, while the last four narrow nets converge almost simultaneously and have the smallest singular values. Maybe one narrow net is learning the difference between "living things" and "objects", while another is learning the difference between fish and birds. And the narrow nets that are learning the more informative distinctions are trained first. So, how could we check this hypothesis?
#
# Representational Similarity Analysis (RSA) is an approach that could help us understand the internal representation of our network. The main idea is that the activity of hidden units (neurons) in the network must be similar when the network is presented with similar input. For our dataset (hierarchically structured data), we expect the activity of neurons in the hidden layer to be more similar for Tuna and Canary, and less similar for Tuna and Oak.
#
# If we perform RSA in every training iteration, we may be able to see whether the narrow nets are learning the representations or our hypothesis is empty.
# ## Coding Exercise 3: RSA
#
# The task is simple. We need to measure the similarity between the hidden layer activities $\mathbf{h} = \mathbf{x} \mathbf{W_1}$ for every input $\mathbf{x}$.
#
# For the similarity measure, we can use the good old dot (scalar) product (which equals the cosine similarity for unit-norm vectors). For calculating the dot product between multiple vectors (which will be our case), you can simply use matrix multiplication. Therefore, the Representational Similarity Matrix for a multiple-input (batch) activity can be calculated as follows:
#
# \begin{equation}
# RSM = \mathbf{H} \mathbf{H}^{\top}
# \end{equation}
#
# where $\mathbf{H} = \mathbf{X} \mathbf{W_1}$ is the activity of hidden neurons for a given batch $\mathbf{X}$.
#
# If we perform RSA in every iteration, we could also see the evolution of representation learning.
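# To build intuition for what $\mathbf{H} \mathbf{H}^{\top}$ computes, here is a small sketch with random activity standing in for a real hidden layer: entry $(i, j)$ of the resulting matrix is the dot product of the activity vectors for inputs $i$ and $j$, so the matrix is symmetric and its diagonal holds the squared activity norms.

```python
import numpy as np

rng = np.random.default_rng(0)
H = rng.normal(size=(8, 30))   # batch of 8 inputs, 30 hidden units

RSM = H @ H.T                  # shape (8, 8): pairwise dot products

print(RSM.shape)
print(np.allclose(RSM, RSM.T))                          # symmetric
print(np.allclose(np.diag(RSM), (H ** 2).sum(axis=1)))  # squared norms
```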
# +
def ex_net_rsm(h):
"""Calculates the Representational Similarity Matrix
Arg:
h (torch.Tensor): activity of a hidden layer
Returns:
(torch.Tensor): Representational Similarity Matrix
"""
#################################################
## Calculate the Representational Similarity Matrix
# Complete the function and remove or comment the line below
raise NotImplementedError("Function `ex_net_rsm`")
#################################################
rsm = ...
return rsm
## Uncomment and run
# test_net_rsm_ex(SEED)
# + [markdown] colab_type="text"
# [*Click for solution*](https://github.com/NeuromatchAcademy/course-content-dl/tree/main//tutorials/W1D2_LinearDeepLearning/solutions/W1D2_Tutorial3_Solution_63447a86.py)
#
#
# -
# Now we can train the model while recording the losses, modes, and RSMs at every iteration. First, use the epoch slider to explore the evolution of the RSM without changing the default lr ($\eta$) and initialization ($\gamma$). Then, as we did before, set $\eta$ and $\gamma$ to larger values to see whether you can retrieve the sequential, structured learning of representations.
# + cellView="form"
#@markdown #### Make sure you execute this cell to enable widgets
def loss_svd_rsm_lr_gamma(lr, gamma, i_ep):
"""
Args:
lr (float): learning rate
gamma (float): initialization scale
i_ep (int): which epoch to show
"""
n_epochs = 250 # number of epochs
dim_input = 8 # input dimension = `label_tensor.size(1)`
dim_hidden = 30 # hidden neurons
dim_output = 10000 # output dimension = `feature_tensor.size(1)`
# model instantiation
dlnn_model = LNNet(dim_input, dim_hidden, dim_output)
# weights re-initialization
initializer_(dlnn_model, gamma)
# training
losses, modes, rsms, _ = train(dlnn_model,
label_tensor,
feature_tensor,
n_epochs=n_epochs,
lr=lr)
plot_loss_sv_rsm(losses, modes, rsms, i_ep)
i_ep_slider = IntSlider(min=10, max=241, step=1, value=61,
continuous_update=False,
description='Epoch',
layout=Layout(width='630px'))
lr_slider = FloatSlider(min=20.0, max=200.0, step=1.0, value=100.0,
continuous_update=False,
readout_format='.1f',
description='eta')
gamma_slider = FloatLogSlider(min=-15, max=1, step=1,
value=1e-12, base=10,
continuous_update=False,
description='gamma')
widgets_ui = VBox([lr_slider, gamma_slider, i_ep_slider])
widgets_out = interactive_output(loss_svd_rsm_lr_gamma,
{'lr': lr_slider,
'gamma': gamma_slider,
'i_ep': i_ep_slider})
display(widgets_ui, widgets_out)
# -
# Let's take a moment to analyze this further. A deep neural net learns representations, rather than a naive mapping (look-up table). This is thought to be the reason for deep neural nets' superior generalization and transfer-learning ability. Unsurprisingly, neural nets with no hidden layer are incapable of representation learning, even with extremely small initialization.
# + cellView="form"
# @title Video 5: RSA - Discussion
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id=f"BV18y4y1j7Xr", width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id=f"vprldATyq1o", width=854, height=480, fs=1, rel=0)
print("Video available at https://youtube.com/watch?v=" + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
display(out)
# -
# ---
# # Section 4: Illusory Correlations
# + cellView="form"
# @title Video 6: Illusory Correlations
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id=f"BV1vv411E7Sq", width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id=f"RxsAvyIoqEo", width=854, height=480, fs=1, rel=0)
print("Video available at https://youtube.com/watch?v=" + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
display(out)
# -
# So far, everything looks great: all our training runs are successful (training loss converging to zero), and very fast. We could even interpret the dynamics of our deep linear networks and relate them to the data. Unfortunately, this rarely happens in practice. Real-world problems often require very deep, nonlinear networks with many hyperparameters. And ordinarily, these complex networks take hours, if not days, to train.
#
# Let's recall the training loss curves. There was often a long plateau (where the weights are stuck at a saddle point), followed by a sudden drop. For very deep, complex neural nets, such plateaus can last for hours of training, and we often decide to stop the training because we believe it is "as good as it gets"! This raises the question of whether the network has learned all the "intended" hidden representations. But more importantly, the network might find an illusory correlation between features that it has never seen.
#
# To better understand this, let's do the next demonstration and exercise.
# ## Demonstration: Illusory Correlations
#
# Our original dataset has 4 animals: Canary, Robin, Goldfish, and Tuna. These animals all have bones. Therefore, if we include a "has bones" feature, the network will learn it at the second level (i.e., second bump, second mode convergence), when it learns the animal-plant distinction.
#
# What if the dataset had Shark instead of Goldfish? Sharks don't have bones (their skeletons are made of cartilage, which is much lighter than true bone and more flexible). Then we would have a feature that is *True* (i.e., +1) for Tuna, Robin, and Canary, but *False* (i.e., -1) for all the plants and the Shark! Let's see what the network does.
#
# First, we add the new feature to the targets. We then start training our LNN and in every epoch, record the network prediction for "sharks having bones".
#
# <center><img src="https://raw.githubusercontent.com/ssnio/statics/main/neuromatch/shark_tree.png" alt="Simple nn graph" width="600"/></center>
# +
# sampling new data from the tree
tree_labels, tree_features = generate_hsd()
# replacing Goldfish with Shark
item_names = ['Shark', 'Tuna', 'Robin', 'Canary',
'Rose', 'Daisy', 'Pine', 'Oak']
# index of label to record
illusion_idx = 0 # Shark is the first element
# the new feature (has bones) vector
new_feature = [-1, 1, 1, 1, -1, -1, -1, -1]
its_label = 'has_bones'
# adding feature has_bones to the feature array
tree_features = add_feature(tree_features, new_feature)
# plotting
plot_tree_data(item_names, tree_features, its_label)
# -
# You can see the new feature shown in the last column of the plot above.
#
# Now we can train the network on the new data, and record the network prediction (output) for Shark (indexed 0) label and "has bone" feature (last feature, indexed -1), during the training.
#
# Here is the snippet from the training loop that keeps track of network prediction for `illusory_i`th label and last (`-1`) feature:
#
# ```python
# pred_ij = predictions.detach()[illusory_i, -1]
# ```
# + cellView="form"
#@markdown #### Make sure you execute this cell to train the network and plot
# convert (cast) data from np.ndarray to torch.Tensor
label_tensor = torch.tensor(tree_labels).float()
feature_tensor = torch.tensor(tree_features).float()
lr = 100.0 # learning rate
gamma = 1e-12 # initialization scale
n_epochs = 250 # number of epochs
dim_input = 8 # input dimension = `label_tensor.size(1)`
dim_hidden = 30 # hidden neurons
dim_output = feature_tensor.size(1)
# model instantiation
dlnn_model = LNNet(dim_input, dim_hidden, dim_output)
# weights re-initialization
initializer_(dlnn_model, gamma)
# training
_, modes, _, ill_predictions = train(dlnn_model,
label_tensor,
feature_tensor,
n_epochs=n_epochs,
lr=lr,
illusory_i=illusion_idx)
# a label for the plot
ill_label = f"Prediction for {item_names[illusion_idx]} {its_label}"
# plotting
plot_ills_sv_twin(ill_predictions, modes, ill_label)
# -
# It seems that the network starts by learning an "illusory correlation" that sharks have bones, and in later epochs, as it learns deeper representations, it can see (learn) beyond the illusory correlation. It is important to remember that we never presented the network with any data saying that sharks have bones.
# ## Exercise 4: Illusory Correlations
#
# This exercise is just for you to explore the idea of illusory correlations. Think of medical, natural, or possibly social illusory correlations which can test the learning power of deep linear neural nets.
#
# **Important note**: The generated data are independent of the tree labels, therefore the names are just for convenience.
#
# Here is our example for **Non-human Living things do not speak**. The lines marked by `{edit}` are for you to change in your example.
#
# +
# sampling new data from the tree
tree_labels, tree_features = generate_hsd()
# {edit} replacing Canary with Parrot
item_names = ['Goldfish', 'Tuna', 'Robin', 'Parrot',
'Rose', 'Daisy', 'Pine', 'Oak']
# {edit} index of label to record
illusion_idx = 3 # Parrot is the fourth element
# {edit} the new feature (cannot speak) vector
new_feature = [1, 1, 1, -1, 1, 1, 1, 1]
its_label = 'cannot_speak'
# adding feature cannot_speak to the feature array
tree_features = add_feature(tree_features, new_feature)
# plotting
plot_tree_data(item_names, tree_features, its_label)
# + cellView="form"
# @markdown #### Make sure you execute this cell to train the network and plot
# convert (cast) data from np.ndarray to torch.Tensor
label_tensor = torch.tensor(tree_labels).float()
feature_tensor = torch.tensor(tree_features).float()
lr = 100.0 # learning rate
gamma = 1e-12 # initialization scale
n_epochs = 250 # number of epochs
dim_input = 8 # input dimension = `label_tensor.size(1)`
dim_hidden = 30 # hidden neurons
dim_output = feature_tensor.size(1)
# model instantiation
dlnn_model = LNNet(dim_input, dim_hidden, dim_output)
# weights re-initialization
initializer_(dlnn_model, gamma)
# training
_, modes, _, ill_predictions = train(dlnn_model,
label_tensor,
feature_tensor,
n_epochs=n_epochs,
lr=lr,
illusory_i=illusion_idx)
# a label for the plot
ill_label = f"Prediction for {item_names[illusion_idx]} {its_label}"
# plotting
plot_ills_sv_twin(ill_predictions, modes, ill_label)
# + cellView="form"
# @title Video 7: Illusory Correlations - Discussion
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id=f"BV1vv411E7rg", width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id=f"6VLHKQjQJmI", width=854, height=480, fs=1, rel=0)
print("Video available at https://youtube.com/watch?v=" + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
display(out)
# -
# ---
# # Summary
#
# The linear deep learning day, the second day of the course, has now ended. In this third tutorial we covered more advanced topics: we implemented a deep linear neural network, studied its learning dynamics using the linear algebra tool called singular value decomposition, and then explored representational similarity analysis and illusory correlations.
# + cellView="form"
# @title Video 8: Outro
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id=f"BV1AL411n7ns", width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id=f"N2szOIsKyXE", width=854, height=480, fs=1, rel=0)
print("Video available at https://youtube.com/watch?v=" + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
display(out)
# -
# ---
# # Bonus
# + cellView="form"
# @title Video 9: Linear Regression
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id=f"BV1Pf4y1L71L", width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id=f"uULOAbhYaaE", width=854, height=480, fs=1, rel=0)
print("Video available at https://youtube.com/watch?v=" + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
display(out)
# -
# ## Section 5.1: Linear Regression
#
# Generally, *regression* refers to a set of methods for modeling the mapping (relationship) between one (or more) independent variable(s) (i.e., features) and one (or more) dependent variable(s) (i.e., labels). For example, we might examine the relative impacts of calendar date, GPS coordinates, and time of the day (the independent variables) on air temperature (the dependent variable). Regression can also be used for predictive analysis, which is why the independent variables are also called predictors. When the model contains more than one predictor, the method is called *multiple regression*, and when it contains more than one dependent variable, it is called *multivariate regression*. Regression problems pop up whenever we want to predict a numerical (usually continuous) value.
#
# The independent variables are collected in vector $\mathbf{x} \in \mathbb{R}^M$, where $M$ denotes the number of independent variables, while the dependent variables are collected in vector $\mathbf{y} \in \mathbb{R}^N$, where $N$ denotes the number of dependent variables. The mapping between them is represented by the weight matrix $\mathbf{W} \in \mathbb{R}^{N \times M}$ and a bias vector $\mathbf{b} \in \mathbb{R}^{N}$ (generalizing to affine mappings).
#
# The multivariate regression model can be written as:
#
# \begin{equation}
# \mathbf{y} = \mathbf{W} ~ \mathbf{x} + \mathbf{b}
# \end{equation}
#
# or it can be written in matrix format as:
#
# \begin{equation}
# \begin{bmatrix} y_{1} \\ y_{2} \\ \vdots \\ y_{N} \\ \end{bmatrix} = \begin{bmatrix} w_{1,1} & w_{1,2} & \dots & w_{1,M} \\ w_{2,1} & w_{2,2} & \dots & w_{2,M} \\ \vdots & \ddots & \ddots & \vdots \\ w_{N,1} & w_{N,2} & \dots & w_{N,M} \end{bmatrix} \begin{bmatrix} x_{1} \\ x_{2} \\ \vdots \\ x_{M} \\ \end{bmatrix} + \begin{bmatrix} b_{1} \\ b_{2} \\ \vdots \\b_{N} \\ \end{bmatrix}
# \end{equation}
#
# ## Section 5.2: Vectorized regression
#
# Linear regression extends straightforwardly to multiple ($D$) samples, whose inputs we can collect in a matrix $\mathbf{X} \in \mathbb{R}^{M \times D}$, sometimes called the design matrix. The sample dimension also shows up in the output matrix $\mathbf{Y} \in \mathbb{R}^{N \times D}$. Thus, linear regression takes the following form:
#
# \begin{equation}
# \mathbf{Y} = \mathbf{W} ~ \mathbf{X} + \mathbf{b}
# \end{equation}
#
# where matrix $\mathbf{W} \in \mathbb{R}^{N \times M}$ and the vector $\mathbf{b} \in \mathbb{R}^{N}$ (broadcast over the sample dimension) are the desired parameters to find.
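# To make the shapes and the broadcasting of $\mathbf{b}$ concrete, here is a small numpy sketch (the sizes are made up for illustration):

```python
import numpy as np

M, N, D = 3, 2, 5                     # features, outputs, samples (hypothetical sizes)
rng = np.random.default_rng(0)
W = rng.normal(size=(N, M))           # weight matrix
b = rng.normal(size=(N, 1))           # bias as a column so it broadcasts over the D samples
X = rng.normal(size=(M, D))           # design matrix: one sample per column
Y = W @ X + b                         # b is repeated across the sample dimension
print(Y.shape)                        # (2, 5)
```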
# ## Section 5.3: Analytical Linear Regression
# Linear regression is a relatively simple optimization problem. Unlike most other models that we will see in this course, linear regression for mean squared loss can be solved analytically.
#
# For $D$ samples (batch size), $\mathbf{X} \in \mathbb{R}^{M \times D}$, and $\mathbf{Y} \in \mathbb{R}^{N \times D}$, the goal of linear regression is to find $\mathbf{W} \in \mathbb{R}^{N \times M}$ such that:
#
# \begin{equation}
# \mathbf{Y} = \mathbf{W} ~ \mathbf{X}
# \end{equation}
#
# Given the Squared Error loss function, we have:
#
# \begin{equation}
# Loss(\mathbf{W}) = ||\mathbf{Y} - \mathbf{W} ~ \mathbf{X}||^2
# \end{equation}
#
# So, using matrix notation, the optimization problem is given by:
#
# \begin{align}
# \mathbf{W^{*}} &= \underset{\mathbf{W}}{\mathrm{argmin}} \left( Loss (\mathbf{W}) \right) \\
# &= \underset{\mathbf{W}}{\mathrm{argmin}} \left( ||\mathbf{Y} - \mathbf{W} ~ \mathbf{X}||^2 \right) \\
# &= \underset{\mathbf{W}}{\mathrm{argmin}} \left( \mathrm{Tr} \left[ \left( \mathbf{Y} - \mathbf{W} ~ \mathbf{X}\right)^{\top} \left( \mathbf{Y} - \mathbf{W} ~ \mathbf{X}\right) \right] \right)
# \end{align}
#
# To solve the minimization problem, we can simply set the derivative of the loss with respect to $\mathbf{W}$ to zero.
#
# \begin{equation}
# \dfrac{\partial Loss}{\partial \mathbf{W}} = 0
# \end{equation}
#
# Assuming that $\mathbf{X}\mathbf{X}^{\top}$ is full rank and thus invertible, we can write:
#
# \begin{equation}
# \mathbf{W}^{\mathbf{*}} = \mathbf{Y} \mathbf{X}^{\top} \left( \mathbf{X} \mathbf{X}^{\top} \right) ^{-1}
# \end{equation}
#
#
# ### Coding Exercise 5.3.1: Analytical solution to LR
#
# Complete the function `linear_regression` for finding the analytical solution to linear regression.
#
# +
def linear_regression(X, Y):
"""Analytical Linear regression
Args:
X (np.ndarray): design matrix
Y (np.ndarray): target outputs
return:
np.ndarray: estimated weights (mapping)
"""
assert isinstance(X, np.ndarray)
assert isinstance(Y, np.ndarray)
M, Dx = X.shape
N, Dy = Y.shape
assert Dx == Dy
#################################################
## Complete the linear_regression_exercise function
# Complete the function and remove or comment the line below
raise NotImplementedError("Linear Regression `linear_regression`")
#################################################
W = ...
return W
W_true = np.random.randint(low=0, high=10, size=(3, 3)).astype(float)
X_train = np.random.rand(3, 37) # 37 samples
noise = np.random.normal(scale=0.01, size=(3, 37))
Y_train = W_true @ X_train + noise
## Uncomment and run
# W_estimate = linear_regression(X_train, Y_train)
# print(f"True weights:\n {W_true}")
# print(f"\nEstimated weights:\n {np.round(W_estimate, 1)}")
# + [markdown] colab_type="text"
# [*Click for solution*](https://github.com/NeuromatchAcademy/course-content-dl/tree/main//tutorials/W1D2_LinearDeepLearning/solutions/W1D2_Tutorial3_Solution_55ea556e.py)
#
#
# -
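# As a quick numerical check, the closed form above can be compared against numpy's least-squares solver (our sketch with made-up sizes, separate from the exercise function):

```python
import numpy as np

rng = np.random.default_rng(1)
M, N, D = 4, 3, 50                                     # features, outputs, samples
X = rng.normal(size=(M, D))
Y = rng.normal(size=(N, M)) @ X + rng.normal(scale=0.01, size=(N, D))

W_closed = Y @ X.T @ np.linalg.inv(X @ X.T)            # the analytical solution above
W_lstsq = np.linalg.lstsq(X.T, Y.T, rcond=None)[0].T   # numpy's solver on the same problem
print(np.allclose(W_closed, W_lstsq))                  # True
```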
# ## Demonstration: Linear Regression vs. DLNN
#
# A linear neural network with NO hidden layer is, at its core, very similar to linear regression. We also know that no matter how many hidden layers a linear network has, it can be collapsed to linear regression (no hidden layers).
#
# In this demonstration, we use the hierarchically structured data to:
#
# * analytically find the mapping between features and labels
# * train a zero-depth LNN to find the mapping
# * compare them to the $W_{tot}$ from the already trained deep LNN
# +
# sampling new data from the tree
tree_labels, tree_features = generate_hsd()
# convert (cast) data from np.ndarray to torch.Tensor
label_tensor = torch.tensor(tree_labels).float()
feature_tensor = torch.tensor(tree_features).float()
# +
# calculating the W_tot for deep network (already trained model)
lr = 100.0 # learning rate
gamma = 1e-12 # initialization scale
n_epochs = 250 # number of epochs
dim_input = 8 # input dimension = `label_tensor.size(1)`
dim_hidden = 30 # hidden neurons
dim_output = 10000 # output dimension = `feature_tensor.size(1)`
# model instantiation
dlnn_model = LNNet(dim_input, dim_hidden, dim_output)
# weights re-initialization
initializer_(dlnn_model, gamma)
# training
losses, modes, rsms, ills = train(dlnn_model,
label_tensor,
feature_tensor,
n_epochs=n_epochs,
lr=lr)
deep_W_tot = torch.eye(dim_input)
for weight in dlnn_model.parameters():
deep_W_tot = weight @ deep_W_tot
deep_W_tot = deep_W_tot.detach().numpy()
# -
# analytically estimation of weights
# our data has the sample (batch) dimension first, so we need to transpose it
analytical_weights = linear_regression(tree_labels.T, tree_features.T)
class LRNet(nn.Module):
"""A Linear Neural Net with ZERO hidden layer (LR net)
"""
def __init__(self, in_dim, out_dim):
"""
Args:
in_dim (int): input dimension
out_dim (int): output dimension
"""
super().__init__()
self.in_out = nn.Linear(in_dim, out_dim, bias=False)
def forward(self, x):
"""
Args:
x (torch.Tensor): input tensor
"""
out = self.in_out(x) # output (prediction)
return out
# +
lr = 1000.0 # learning rate
gamma = 1e-12 # initialization scale
n_epochs = 250 # number of epochs
dim_input = 8 # input dimension = `label_tensor.size(1)`
dim_output = 10000 # output dimension = `feature_tensor.size(1)`
# model instantiation
LR_model = LRNet(dim_input, dim_output)
optimizer = optim.SGD(LR_model.parameters(), lr=lr)
criterion = nn.MSELoss()
losses = np.zeros(n_epochs) # loss records
for i in range(n_epochs): # training loop
optimizer.zero_grad()
predictions = LR_model(label_tensor)
loss = criterion(predictions, feature_tensor)
loss.backward()
optimizer.step()
losses[i] = loss.item()
# trained weights from the zero-depth LR_model
LR_model_weights = next(iter(LR_model.parameters())).detach().numpy()
plot_loss(losses, "Training loss for zero depth LNN", c="r")
# -
print("The final weights from all methods are approximately equal?! "
"{}!".format(
(np.allclose(analytical_weights, LR_model_weights, atol=1e-02) and \
np.allclose(analytical_weights, deep_W_tot, atol=1e-02))
)
)
# As you may have guessed, they all arrive at the same results but through very different paths.
# + cellView="form"
# @title Video 10: Linear Regression - Discussion
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id=f"BV18v411E7Wg", width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id=f"gG15_J0i05Y", width=854, height=480, fs=1, rel=0)
print("Video available at https://youtube.com/watch?v=" + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
display(out)
| tutorials/W1D2_LinearDeepLearning/student/W1D2_Tutorial3.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import numpy as np
# ### Lecture 11:
#
# - Learn about **lambda** functions
#
# - How to use **map( )**, **filter( )**, and **reduce( )**
#
# - Explore the joys of "List comprehension"
#
#
# ### Lambda functions
#
# You can spell any Greek letter and use it as a variable name EXCEPT for **lambda**. As we learned in Lecture 2, **lambda** is a _reserved word_. Why? Because **lambda** has a special meaning in Python; it is reserved for _anonymous functions_.
#
# The syntax of a **lambda** function consists of a **name =**, followed by the word **lambda** followed by an _argument list_, a colon (:), and ending with an _expression_. Here is a simple example of an anonymous function that returns the product of the argument list:
f=lambda x,y : x*y
# Let's dissect the statement.
#
# - **f** is a new kind of object that represents the function,
#
# - $x$ and $y$ are the arguments of the anonymous function,
#
# - and the expression $x*y$ is what is returned when the function is called.
#
# We're familiar with the following syntax for a "normal" function:
def g(x, y):
return x*y
# Both $f$ and $g$ take the same arguments and return the same value. They are essentially the same function.
#
# Let us verify this, by calling both functions with the arguments $x=2$ and $y=10$:
print (f(2,10))
print (g(2,10))
# Yup. Both the **lambda** function $f$ and the 'regular' function $g$ defined with the keyword **def** are of the type: _function_
print (type(f))
print (type(g))
# **lambda** functions should seem familiar. They follow the same syntax you use in math to define functions:
#
# f(x) = x<sup>2</sup> +5x + 9
#
# So we could easily write this as a **lambda** function like this:
#
#
#
#
h = lambda x: x**2+5.*x+9
#
# For a multivariate function (one with more than one argument), you need to list all the arguments after the reserved word **lambda**. For example,
# In math, you’d write the equation for the hypotenuse given two sides, $a$ and $b$, as:
#
# hypotenuse($a$, $b$) = $\sqrt{a^2+ b^2}$.
#
# In Python it would be:
#
hypotenuse = lambda a, b: np.sqrt(a**2+b**2)
print (hypotenuse(3,4))
# ### Uses of lambda functions
#
# You may be wondering why **lambda** functions are useful. The answer is that **lambda** functions are anonymous- you don't have to give them a name (although we did when we assigned the function to $f$ in the above examples). This comes in handy if you 1) write or use functions that take in other functions as arguments or 2) you just want a quickie one-off calculation.
#
# For the first reason, examples of such functions that take **lambda** functions are **map( )**, **reduce( )**, and **filter( )**.
#
# Anticipating your further questions, you can look at this useful blog post on the subject: https://stackoverflow.com/questions/890128/why-are-python-lambdas-useful
# ### map( )
#
# **lambda** is often used in conjunction with the function **map( )**.
#
# **map(func, seq)** is a function that takes two arguments: a function, **func**, and a sequence, **seq**.
# **func** may be an ordinary function or an anonymous function defined in the first argument of **map( )**.
# **seq** is one or more lists on which the function is performed. So **map( )** returns an iterator (a map object) holding the results of whatever **func** did to the elements in **seq**.
#
# Here is an example which converts kilometers to miles (1 km ≈ 5/8 mile).
km_to_mi=map(lambda x:(5./8.)*x,[8,10,24])
print (km_to_mi) # see the map object (an iterator)
print (list(km_to_mi)) # see the list
# The anonymous function was defined as the first argument of **map( )**. This **lambda** function takes a single variable _x_ (in km), converts it to miles by multiplying by 5/8, and returns the value. The **map( )** function then takes a sequence as the second argument, in this case, the sequence is a list with 8,10, and 24 as elements. **map( )** converts each of the values in the list to miles by applying the anonymous function.
#
# If our **lambda** function has TWO variables, e.g., $x,y$, we must pass **map( )** TWO lists of the same length for **seq**:
map(lambda x,y : x*y,[2,3,4],[5,6,7]);
# The values for $x$ get taken from the first list of numbers, while $y$ gets taken from the second list. **map( )** returns a list with the product of the two input lists:
#
#
list(map(lambda x,y : x*y,[2,3,4],[5,6,7]))
# Another way to use **map( )** is to define the lists and functions ahead of time, then apply the **map( )** to them as follows:
a=[2,3,4]
b=[5,6,7]
f=lambda x,y : x*y
map(f,a,b)
# but let's see what it does with a print statement:
print (list(map(f,a,b) ))
# Well that was cool....
#
# You can see that $x$ snags values from the first list, $a$ and $y$ uses values from the second list, $b$.
# ### filter( )
#
# **filter(func, seq)** is another example of a _function_ that takes a _function_, **func**, and a sequence, **seq** as arguments. The function supplied to filter must return a boolean- either **True** or **False**. **filter()** then applies that function to all the values in the sequence and returns the values that are **True**. Let's walk through this step by step beginning with a function that returns **True** or **False**.
#
# Remember that _modulo_ is the remainder and in Python we can find the modulo of a given variable $x$ with, for example 2 by this syntax: $x\%2$ (spoken as 'x mod 2'). As an example of a boolean function, we can apply _modulo_ 2 to test whether a number is even or odd. When you divide $x$ by 2, even numbered values of $x$ will return 0 (and odd numbers return 1). And, remember that 0 is **False** and 1 is **True**.
print ('modulo of 2 divided by 2: ',2%2)
# and you can see that modulo is handy for keeping values between 0 and 360
print ('modulo of 400 divided by 360: ',400%360)
# Now let's create an anonymous function that tests whether numbers are even or odd by the value they return. As you just learned, if modulo returns 0 then the remainder is 0 and the original value was even, whereas, if it returns 1, then the original value was odd:
# +
f= lambda x: x % 2
print (f(2))
print (f(3))
print (f(4))
print (f(5))
# -
# We can add the relational operator **==** and return **True** or **False** instead of 0 or 1:
# +
f= lambda x: x % 2 == 0
print (f(2))
print (f(3))
print (f(4))
print (f(5))
# -
# Now, we can use **filter( )** and the function we defined to find the even values in a sequence. Similar to **map( )**, **filter ( )** applies the function to every value in the **list**, but **filter ( )** will only return the values that evaluate to **True**. The output of **filter( )** is an iterator (a filter object), not itself a **list**, but we can turn it into a **list**. For example:
# +
f= lambda x: x % 2 == 0 # tests if a number is even or odd
mylist = list(range(20))
list(filter(f, mylist)) # returns only the even ones
# -
# ### reduce( )
#
# **reduce( )** is another function that regularly uses a **lambda** function. Like **map( )** and **filter( )**, **reduce(func, seq)** takes two arguments: a function and a sequence. With **reduce( )**, the function is applied to sequential pairs of elements in the list until the list is reduced to a single value, which is then returned. In detail, the function is first performed on the first two elements. The result then replaces the first element and the second is removed. The same function is again applied to the first two elements of the new list, replacing them with the single value from the function, and so on until only a single value remains.
#
# **reduce( )** is no longer a built-in in Python 3, so it must be imported with the command:
#
# **from functools import reduce**
#
# So let's do that.
#
#
from functools import reduce
# Let's try an example. We could use **reduce( )** to return the factorial of a number $n$.
# Remember that the factorial is "the product of an integer and all the positive integers below it".
# So, we can use our **lambda** function defined above, which returns the product of two numbers. If we use the **reduce** function and make the **lambda** operate on a list of numbers from 1 to $n$, we will get the desired product at the end.
n=6
reduce(lambda x,y:x*y,range(1,n+1)) # performs the lambda function sequentially on the list
# We can compare our function with the standard library's **math.factorial( )** (older code sometimes reached it via `np.math.factorial`, an alias removed in recent NumPy versions):
import math
math.factorial(n)
# Whew!
# ### List comprehensions
#
# Another succinct way to iterate over sequences and apply different operations is through List, Dictionary, and Set comprehensions.
#
# A List comprehension is a convenient way of applying an operation to a collection of objects. It takes this basic form:
#
# \[**expression for** element **in** collection **if** condition\]
#
# Here is an example that takes a list of strings, looks for those with lengths greater than 5 and returns the upper case version using the **string.upper( )** method for strings:
mtList=['Andes','Mt. Everest','Mauna Loa','SP Mountain']
[s.upper() for s in mtList if len(s)>5]
# [Fun fact: you can get the lower case equivalents with the method **string.lower( )**.]
#
#
# Note that you could achieve the same result (the upper case list of all volcanoes with names having more than 5 characters) using our old friend the **for** loop:
another_list = []
for s in mtList:
if(len(s)>5):
another_list.append(s.upper())
another_list
# Or (challenge!) by using **filter( )** and **map( )** and an anonymous function:
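# One possible solution (ours, not from the lecture) uses **filter( )** to select the long names and **map( )** to upper-case them:

```python
mtList = ['Andes', 'Mt. Everest', 'Mauna Loa', 'SP Mountain']
# filter keeps names longer than 5 characters; map upper-cases the survivors
result = list(map(lambda s: s.upper(), filter(lambda s: len(s) > 5, mtList)))
print(result)  # ['MT. EVEREST', 'MAUNA LOA', 'SP MOUNTAIN']
```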
# Each of these three approaches performs similarly, but the list comprehension is the most succinct.
# ### Dictionary Comprehension
# Dictionary comprehensions are similar to list comprehensions, but they generate key-value pairs instead of lists. Dictionary comprehensions follow the format:
#
# {**key:value for** variable **in** collection **if** condition}
#
#
# The following Dictionary comprehension generates a dictionary with a word from **mtList** as the key and the length of the word as the value
mtList=['Andes','Mt. Everest','Mauna Loa','SP Mountain'] # to remind you what mtList was
{s:len(s) for s in mtList} # dictionary comprehension with mtList
# Notice the {key:value, key:value} structure of the output is a dictionary.
# ### Set comprehension
#
# A Set comprehension, returns a set and follows this format:
#
# {**expression for** value **in** collection **if** condition}
#
#
#
# The following Set comprehension creates a set composed of the lengths of the words in mylist
{len(s) for s in mtList}
# You can tell that a set was returned because it is in curly braces with no keys.
# ### Complicated comprehensions
# List, Dictionary, and Set comprehensions can also replace complicated, nested loops. Here's an example that generates a list of x,y,z triplets if the values obey Pythagoras' rule for right triangles. Chew on it until you get it:
[(x,y,z) for x in range(1,30) \
for y in range(x,30) for z in range(y,30) \
if x**2 + y**2 == z**2]
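# The same triplets can be produced with explicit nested loops, for comparison (our expansion of the comprehension above):

```python
triplets = []
for x in range(1, 30):
    for y in range(x, 30):
        for z in range(y, 30):
            if x**2 + y**2 == z**2:   # Pythagorean check
                triplets.append((x, y, z))
print(triplets[:3])  # [(3, 4, 5), (5, 12, 13), (6, 8, 10)]
```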
| Lecture_11.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## The first step in gap analysis is to determine the AEP based on operational data.
# %load_ext autoreload
# %autoreload 2
# This notebook provides an overview and walk-through of the steps taken to produce a plant-level operational energy assessment (OA) of a wind plant in the PRUF project. The La Haute-Borne wind farm is used here and throughout the example notebooks.
#
# Uncertainty in the annual energy production (AEP) estimate is calculated through a Monte Carlo approach. Specifically, inputs into the OA code as well as intermediate calculations are randomly sampled based on their specified or calculated uncertainties. By performing the OA assessment thousands of times under different combinations of the random sampling, a distribution of AEP values results from which uncertainty can be deduced. Details on the Monte Carlo approach will be provided throughout this notebook.
# ### Step 1: Import plant data into notebook
#
# A zip file included in the OpenOA 'examples/data' folder needs to be unzipped to run this step. Note that this zip file should be unzipped automatically as part of the project.prepare() function call below. Once unzipped, 4 CSV files will appear in the 'examples/data/la_haute_borne' folder.
# +
# Import required packages
import os
import matplotlib.pyplot as plt
import numpy as np
import statsmodels.api as sm
import pandas as pd
import copy
from project_ENGIE import Project_Engie
from operational_analysis.methods import plant_analysis
# -
# In the call below, make sure the appropriate path to the CSV input files is specified. In this example, the CSV files are located directly in the 'examples/data/la_haute_borne' folder
# Load plant object
project = Project_Engie('./data/la_haute_borne/')
# Prepare data
project.prepare()
# ### Step 2: Review the data
#
# Several Pandas data frames have now been loaded. Histograms showing the distribution of the plant-level metered energy, availability, and curtailment are shown below:
# Review plant data
fig, (ax1, ax2, ax3) = plt.subplots(ncols = 3, figsize = (15,5))
ax1.hist(project._meter.df['energy_kwh'], 40) # Metered energy data
ax2.hist(project._curtail.df['availability_kwh'], 40) # Curtailment and availability loss data
ax3.hist(project._curtail.df['curtailment_kwh'], 40) # Curtailment and availability loss data
plt.tight_layout()
plt.show()
# ### Step 3: Process the data into monthly averages and sums
#
# The raw plant data can be in different time resolutions (in this case 10-minute periods). The following steps process the data into monthly averages and combine them into a single 'monthly' data frame to be used in the OA assessment.
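# The kind of aggregation performed here can be sketched with plain pandas on synthetic data (illustrative only; the column and index names are our assumptions, not OpenOA's internals):

```python
import numpy as np
import pandas as pd

# hypothetical 10-minute energy series spanning January and February 2014
idx = pd.date_range("2014-01-01", periods=144 * 59, freq="10min")  # 144 records/day, 59 days
df = pd.DataFrame({"energy_kwh": np.random.default_rng(0).uniform(0, 300, len(idx))}, index=idx)

# roll the 10-minute records up to calendar months
monthly = pd.DataFrame({
    "energy_kwh": df["energy_kwh"].resample("MS").sum(),                     # monthly energy sum
    "energy_nan_perc": df["energy_kwh"].resample("MS").apply(lambda s: s.isna().mean()),  # missing fraction
})
monthly["num_days_expected"] = monthly.index.days_in_month
print(monthly.shape)  # (2, 3)
```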
project._meter.df.head()
# First, we'll create a MonteCarloAEP object which is used to calculate long-term AEP. Two reanalysis products are specified as arguments.
pa = plant_analysis.MonteCarloAEP(project, reanal_products = ['era5', 'merra2'])
# Let's view the result. Note the extra fields we've calculated that we'll use later for filtering:
# - energy_nan_perc : the percentage of NaN values in the raw revenue meter data used in calculating the monthly sum. If this value is too large, we shouldn't include this month
# - nan_flag : if too much energy, availability, or curtailment data was missing for a given month, flag the result
# - num_days_expected : number of days in the month (useful for normalizing monthly gross energy later)
# - num_days_actual : actual number of days per month as found in the data (used when trimming monthly data frame)
# View the monthly data frame
pa._aggregate.df.head()
# ### Step 4: Review reanalysis data
#
# Reanalysis data will be used to long-term correct the operational energy over the plant period of operation to the long-term. It is important that we only use reanalysis data that show reasonable trends over time with no noticeable discontinuities. A plot like below, in which normalized annual wind speeds are shown from 1997 to present, provides a good first look at data quality.
#
# The plot shows that both of the reanalysis products track each other reasonably well and seem well-suited for the analysis.
pa.plot_reanalysis_normalized_rolling_monthly_windspeed().show()
# ### Step 5: Review energy data
#
# It is useful to take a look at the energy data and make sure the values make sense. We begin with scatter plots of gross energy and wind speed for each reanalysis product. We also show a time series of gross energy, as well as availability and curtailment loss.
#
# Let's start with the scatter plots of gross energy vs wind speed for each reanalysis product. Here we use the 'Robust Linear Model' (RLM) module of the Statsmodels package with the default Huber algorithm to produce a regression fit that excludes outliers. Data points in red show the outliers, and were excluded based on a Huber sensitivity factor of 3.0 (the factor is varied between 2.0 and 3.0 in the Monte Carlo simulation).
#
# The plots below reveal that:
# - there are some outliers
# - both reanalysis products are strongly correlated with plant energy
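# To make the robust-fit idea concrete, here is a hand-rolled numpy sketch of Huber-style iteratively reweighted least squares on synthetic data (our illustration only; the analysis itself relies on statsmodels' RLM):

```python
import numpy as np

def huber_fit(x, y, t=3.0, n_iter=20):
    """Line fit by iteratively reweighted least squares with Huber weights.

    Points whose scaled residual exceeds t are down-weighted; a final
    weight < 1 marks a point treated as an outlier.
    """
    A = np.column_stack([x, np.ones_like(x)])   # design matrix for slope + intercept
    w = np.ones_like(y)
    for _ in range(n_iter):
        Aw = A * w[:, None]                     # apply weights to the normal equations
        slope, intercept = np.linalg.solve(A.T @ Aw, A.T @ (w * y))
        r = y - A @ np.array([slope, intercept])
        scale = 1.4826 * np.median(np.abs(r - np.median(r)))  # robust sigma via MAD
        z = np.abs(r) / max(scale, 1e-12)
        w = np.where(z <= t, 1.0, t / z)        # Huber weights
    return slope, intercept, w

rng = np.random.default_rng(0)
x = rng.uniform(4.0, 10.0, 60)                  # synthetic "wind speed"
y = 3.0 * x + rng.normal(0.0, 0.2, 60)          # synthetic "gross energy"
y[0] += 25.0                                    # inject one gross outlier
slope, intercept, w = huber_fit(x, y)
outliers = np.flatnonzero(w < 0.99)             # indices of down-weighted points
```

# Points flagged in `outliers` play the role of the red points in the plots below; varying `t` between 2 and 3 mimics the Monte Carlo sampling of the Huber sensitivity factor.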
pa.plot_reanalysis_gross_energy_data(outlier_thres=3).show()
# Next we show time series plots of the monthly gross energy, availability, and curtailment. Note that the availability and curtailment data were estimated based on SCADA data from the plant.
pa.plot_aggregate_plant_data_timeseries().show()
# ### Step 6: Specify availability and curtailment data not representative of actual plant performance
#
# There may be anomalies in the reported availability that shouldn't be considered representative of actual plant performance. Force majeure events (e.g. lightning) are a good example. Such losses aren't typically considered in pre-construction AEP estimates; therefore, plant availability loss reported in an operational AEP analysis should also not include such losses.
#
# The 'availability_typical' and 'curtailment_typical' fields in the monthly data frame are initially set to True. Below, individual months can be set to 'False' if it is deemed those months are unrepresentative of long-term plant losses. By flagging these months as false, they will be omitted when assessing average availability and curtailment loss for the plant.
#
# Justification for removing months from assessing average availability or curtailment should come from conversations with the owner/operator. For example, if a high-loss month is found, reasons for the high loss should be discussed with the owner/operator to determine if those losses can be considered representative of average plant operation.
# For illustrative purposes, let's suppose a few months aren't representative of long-term losses
pa._aggregate.df.loc['2014-11-01',['availability_typical','curtailment_typical']] = False
pa._aggregate.df.loc['2015-07-01',['availability_typical','curtailment_typical']] = False
# ### Step 7: Calculate long-term annual losses
#
# Once unrepresentative losses have been identified, long-term availability and curtailment losses for the plant are calculated based on average losses for each calendar month (in energy units). Summing those average values yields the long-term annual estimates.
pa.calculate_long_term_losses()
pa.long_term_losses
# ### Step 8: Select reanalysis products to use
#
# Based on the assessment of reanalysis products above (both the long-term trend and the relationship with plant energy), we now set which reanalysis products we will include in the OA. For this particular case study, we use both products given their strong regression relationships with plant energy.
# ### Step 9: Set up Monte Carlo inputs
#
# The next step is to set up the Monte Carlo framework for the analysis. Specifically, we identify each source of uncertainty in the OA estimate and use that uncertainty to create distributions of the input and intermediate variables from which we can sample for each iteration of the OA code. For input variables, we can create such distributions beforehand. For intermediate variables, we must sample separately for each iteration.
#
# Detailed descriptions of the sampled Monte Carlo inputs, which can be specified when initializing the MonteCarloAEP object if values other than the defaults are desired, are provided below:
#
# - slope, intercept, and num_outliers : These are intermediate variables that are calculated for each iteration of the code
#
# - outlier_threshold : Sample values between 2 and 3 which set the Huber algorithm outlier detection parameter. Varying this threshold accounts for analyst subjectivity on what data points constitute outliers and which do not.
#
# - metered_energy_fraction : Revenue meter energy measurements are associated with a measurement uncertainty of around 0.5%. This uncertainty is used to create a distribution centered at 1 (with a standard deviation of 0.005). This column represents random samples from that distribution. For each iteration of the OA code, a value from this column is multiplied by the monthly revenue meter energy data before the data enter the OA code, thereby capturing the 0.5% uncertainty.
#
# - loss_fraction : Reported availability and curtailment losses are estimates and are associated with uncertainty. For now, we assume the reported values are associated with an uncertainty of 5%. Similar to above, we therefore create a distribution centered at 1 (with a standard deviation of 0.05) from which we sample for each iteration of the OA code. These sampled values are then multiplied by the availability and curtailment data independently before entering the OA code to capture the 5% uncertainty in the reported values.
#
# - num_years_windiness : This is intended to capture the uncertainty associated with the number of historical years an analyst chooses to use in the windiness correction. The industry standard is typically 20 years and is based on the assumption that year-to-year wind speeds are uncorrelated. However, a growing body of research suggests that there is some correlation in year-to-year wind speeds and that there are trends in the resource on the decadal timescale. To capture this uncertainty both in the long-term trend of the resource and the analyst choice, we randomly sample integer values between 10 and 20 as the number of years to use in the windiness correction.
#
# - loss_threshold : Due to uncertainty in reported availability and curtailment estimates, months with high combined losses are associated with high uncertainty in the calculated gross energy. It is common to remove such data from analysis. For this analysis, we randomly sample float values between 0.1 and 0.2 (i.e. 10% and 20%) to serve as the threshold for the combined availability and curtailment losses. Specifically, months are excluded from analysis if their combined losses exceed that threshold for the given OA iteration.
#
# - reanalysis_product : This captures the uncertainty of using different reanalysis products and, lacking a better method, is a proxy way of capturing uncertainty in the modelled monthly wind speeds. For each iteration of the OA code, one of the reanalysis products that we've already determined as valid (see the cells above) is selected.
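# The sampling described above can be sketched with numpy. This is illustrative only — the actual defaults and distributions live inside the MonteCarloAEP implementation:

```python
import numpy as np

rng = np.random.default_rng(42)
num_sim = 2000

inputs = {
    # 0.5% revenue meter uncertainty: normal, centered at 1, std 0.005
    "metered_energy_fraction": rng.normal(1.0, 0.005, num_sim),
    # 5% uncertainty on reported availability/curtailment losses
    "loss_fraction": rng.normal(1.0, 0.05, num_sim),
    # Huber outlier detection parameter sampled between 2 and 3
    "outlier_threshold": rng.uniform(2.0, 3.0, num_sim),
    # 10-20 years of history for the windiness correction (inclusive)
    "num_years_windiness": rng.integers(10, 21, num_sim),
    # combined-loss exclusion threshold between 10% and 20%
    "loss_threshold": rng.uniform(0.1, 0.2, num_sim),
    # one of the valid reanalysis products per iteration
    "reanalysis_product": rng.choice(["era5", "merra2"], num_sim),
}
```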
# ### Step 10: Run the OA code
#
# We're now ready to run the Monte-Carlo based OA code. We repeat the OA process "num_sim" times using different sampling combinations of the input and intermediate variables to produce a distribution of AEP values.
#
# A single line of code here in the notebook performs this step, but below is more detail on what is being done.
#
# Steps in OA process:
#
# - Set the wind speed and gross energy data to be used in the regression based on i) the reanalysis product to be used (Monte-Carlo sampled); ii) the NaN energy data criteria (1%); iii) the combined availability and curtailment loss criteria (Monte-Carlo sampled); and iv) the outlier criteria (Monte-Carlo sampled)
# - Normalize gross energy to 30-day months
# - Perform linear regression and determine slope and intercept values, their standard errors, and the covariance between the two
# - Use the information above to create distributions of possible slope and intercept values (e.g. mean equal to the slope, std equal to the standard error) from which we randomly sample a slope and intercept value (note that slope and intercept values are highly negatively correlated, so the sampling from both distributions is constrained accordingly)
# - To perform the long-term correction, first determine the long-term monthly average wind speeds (i.e. average January wind speed, average February wind speed, etc.) based on a 10-20 year historical period as determined by the Monte Carlo process
# - Apply the Monte-Carlo sampled slope and intercept values to the long-term monthly average wind speeds to calculate long-term monthly gross energy
# - 'Denormalize' monthly long-term gross energy back to the normal number of days
# - Calculate AEP by subtracting out the long-term availability loss (curtailment loss is left in as part of AEP)
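# The constrained slope/intercept sampling described above amounts to drawing from a bivariate normal distribution parameterized by the regression covariance matrix. A sketch with made-up regression outputs (the point estimates and covariance values below are purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

# Regression outputs: point estimates and their covariance matrix.
# The negative off-diagonal term encodes the slope/intercept anti-correlation.
slope, intercept = 2.5, -4.0
cov = np.array([[0.04, -0.15],
                [-0.15, 0.80]])  # [[var_slope, cov], [cov, var_intercept]]

samples = rng.multivariate_normal([slope, intercept], cov, size=5000)
slopes, intercepts = samples[:, 0], samples[:, 1]

# Sampled pairs preserve the negative correlation from the regression,
# theoretically -0.15 / sqrt(0.04 * 0.80), roughly -0.84
print(np.corrcoef(slopes, intercepts)[0, 1])
```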
# Run Monte-Carlo based OA
pa.run(num_sim=2000, reanal_subset=['era5', 'merra2'])
# The key result is shown below: a distribution of AEP values from which uncertainty can be deduced. In this case, uncertainty is around 8%, which is on the higher end of a typical industry OA estimate (~4-7%). Note that we're including interannual variability (IAV) uncertainty in our uncertainty calculations, which typically dominates the uncertainty in an industry OA. IAV uncertainty is fundamentally a future or forward-looking uncertainty (i.e. what will annual energy production look like next year, or the next 10 years, based on what we've seen so far). IAV is estimated using the standard deviation of monthly wind speeds from the reanalysis data.
# +
# Plot a distribution of AEP values from the Monte-Carlo OA method
pa.plot_result_aep_distributions().show()
# -
# ### Step 11: Post-analysis visualization
#
# Here we show some supplementary results of the Monte Carlo OA approach to help illustrate how it works.
#
# First, it's worth looking at the Monte-Carlo tracker data frame again, now that the slope, intercept, and number of outlier fields have been completed. Note that for transparency, debugging, and analysis purposes, we've also included in the tracker data frame the number of data points used in the regression.
# Produce histograms of the various MC-parameters
mc_reg = pd.DataFrame(data = {'slope': pa._mc_slope.ravel(),
'intercept': pa._mc_intercept,
'num_points': pa._mc_num_points,
'metered_energy_fraction': pa._inputs.metered_energy_fraction,
'loss_fraction': pa._inputs.loss_fraction,
'num_years_windiness': pa._inputs.num_years_windiness,
'loss_threshold': pa._inputs.loss_threshold,
'reanalysis_product': pa._inputs.reanalysis_product})
# It's useful to plot distributions of each variable to show what is happening in the Monte Carlo OA method. Based on the plot below, we observe the following:
#
# - metered_energy_fraction and loss_fraction sampling follow normal distributions, as expected
# - The slope and intercept distributions appear normally distributed, even though different reanalysis products are considered, resulting in different regression relationships. This is likely because the reanalysis products agree with each other closely.
# - 19 data points were used for all iterations, indicating that there was no variation in the number of outlier months removed
# - We see approximately equal sampling of the num_years_windiness, loss_threshold, and reanalysis_product, as expected
plt.figure(figsize=(15,15))
for s in np.arange(mc_reg.shape[1]):
plt.subplot(4,3,s+1)
plt.hist(mc_reg.iloc[:,s],40)
plt.title(mc_reg.columns[s])
plt.show()
# It's worth highlighting the inverse relationship between slope and intercept values under the Monte Carlo approach. As stated earlier, slope and intercept values are strongly negatively correlated (e.g. as slope goes up, intercept goes down), which is captured by the covariance result when performing linear regression. By constrained random sampling of slope and intercept values based on this covariance, we ensure we aren't sampling unrealistic combinations.
#
# The plot below shows that the values are being sampled appropriately
# +
# Produce a scatter plot of the sampled slope and intercept values to confirm
# the negative correlation. Here we focus on the ERA-5 data
plt.figure(figsize=(8,6))
plt.plot(mc_reg.intercept[mc_reg.reanalysis_product =='era5'],mc_reg.slope[mc_reg.reanalysis_product =='era5'],'.')
plt.xlabel('Intercept (GWh)')
plt.ylabel('Slope (GWh / (m/s))')
plt.show()
# -
# We can look further at the influence of certain Monte Carlo parameters on the AEP result. For example, let's see what effect the choice of reanalysis product has on the result:
# Boxplot of AEP based on choice of reanalysis product
tmp_df=pd.DataFrame(data={'aep':pa.results.aep_GWh,'reanalysis_product':mc_reg['reanalysis_product']})
tmp_df.boxplot(column='aep',by='reanalysis_product',figsize=(8,6))
plt.ylabel('AEP (GWh/yr)')
plt.xlabel('Reanalysis product')
plt.title('AEP estimates by reanalysis product')
plt.suptitle("")
plt.show()
# In this case, the two reanalysis products lead to similar AEP estimates, although MERRA2 yields slightly higher uncertainty.
#
# We can also look at the effect of the number of years used in the windiness correction:
# +
# Boxplot of AEP based on number of years in windiness correction
tmp_df=pd.DataFrame(data={'aep':pa.results.aep_GWh,'num_years_windiness':mc_reg['num_years_windiness']})
tmp_df.boxplot(column='aep',by='num_years_windiness',figsize=(8,6))
plt.ylabel('AEP (GWh/yr)')
plt.xlabel('Number of years in windiness correction')
plt.title('AEP estimates by windiness years')
plt.suptitle("")
plt.show()
# -
# As seen above, the number of years used in the windiness correction does not significantly impact the AEP estimate.
| examples/02_plant_aep_analysis.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# <p><font size="6"><b>03 - Pandas: Indexing and selecting data - part I</b></font></p>
#
# > *DS Data manipulation, analysis and visualization in Python*
# > *May/June, 2021*
# >
# > *© 2021, <NAME> and <NAME> (<mailto:<EMAIL>>, <mailto:<EMAIL>>). Licensed under [CC BY 4.0 Creative Commons](http://creativecommons.org/licenses/by/4.0/)*
#
# ---
import pandas as pd
# +
# redefining the example DataFrame
data = {'country': ['Belgium', 'France', 'Germany', 'Netherlands', 'United Kingdom'],
'population': [11.3, 64.3, 81.3, 16.9, 64.9],
'area': [30510, 671308, 357050, 41526, 244820],
'capital': ['Brussels', 'Paris', 'Berlin', 'Amsterdam', 'London']}
countries = pd.DataFrame(data)
countries
# -
# # Subsetting data
# ## Subset variables (columns)
# For a DataFrame, basic indexing selects the columns (cf. dictionaries in pure Python)
#
# Selecting a **single column**:
countries['area'] # single []
# Remember that the same syntax can also be used to *add* a new column: `df['new'] = ...`.
#
# We can also select **multiple columns** by passing a list of column names into `[]`:
countries[['area', 'population']] # double [[]]
# ## Subset observations (rows)
# Using `[]`, slicing or boolean indexing accesses the **rows**:
# ### Slicing
countries[0:4]
# ### Boolean indexing (filtering)
# Often, you want to select rows based on a certain condition. This can be done with *'boolean indexing'* (like a where clause in SQL), comparable to boolean masking in numpy.
#
# The indexer (or boolean mask) should be 1-dimensional and the same length as the thing being indexed.
countries['area'] > 100000
countries[countries['area'] > 100000]
countries[countries['population'] > 50]
# An overview of the possible comparison operations:
#
# Operator | Description
# ------ | --------
# == | Equal
# != | Not equal
# \> | Greater than
# \>= | Greater than or equal
# \< | Less than
# <= | Less than or equal
#
# and to combine multiple conditions:
#
# Operator | Description
# ------ | --------
# & | And (`cond1 & cond2`)
# \| | Or (`cond1 \| cond2`)
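# Combining conditions requires parentheses around each comparison, because `&` and `|` bind more tightly than comparison operators in Python. A quick self-contained illustration (the example DataFrame is rebuilt here so the snippet runs on its own):

```python
import pandas as pd

# Rebuild the example DataFrame so this snippet runs on its own
data = {'country': ['Belgium', 'France', 'Germany', 'Netherlands', 'United Kingdom'],
        'population': [11.3, 64.3, 81.3, 16.9, 64.9],
        'area': [30510, 671308, 357050, 41526, 244820]}
countries = pd.DataFrame(data)

# Parentheses around each condition are required
large_and_populous = countries[(countries['area'] > 100000) & (countries['population'] > 50)]
print(large_and_populous['country'].tolist())  # ['France', 'Germany', 'United Kingdom']
```

# Without the parentheses, the expression typically raises an error instead of filtering.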
# <div class="alert alert-info" style="font-size:120%">
# <b>REMEMBER</b>: <br><br>
#
# So as a summary, `[]` provides the following convenience shortcuts:
#
# * **Series**: selecting a **label**: `s[label]`
# * **DataFrame**: selecting a single or multiple **columns**:`df['col']` or `df[['col1', 'col2']]`
# * **DataFrame**: slicing or filtering the **rows**: `df['row_label1':'row_label2']` or `df[mask]`
#
# </div>
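# The row-slicing shortcut `df['row_label1':'row_label2']` also works with a label-based index, and label slices include *both* endpoints, unlike positional slicing. A self-contained illustration that sets 'country' as the index for demonstration:

```python
import pandas as pd

data = {'country': ['Belgium', 'France', 'Germany', 'Netherlands'],
        'population': [11.3, 64.3, 81.3, 16.9]}
countries = pd.DataFrame(data).set_index('country')

# Label-based slices include BOTH endpoints
print(countries['France':'Netherlands'].index.tolist())
# ['France', 'Germany', 'Netherlands']
```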
# ## Some other useful methods: `isin` and `string` methods
# The `isin` method of Series is very useful to select rows that may contain certain values:
s = countries['capital']
# +
# s.isin?
# -
s.isin(['Berlin', 'London'])
# This can then be used to filter the dataframe with boolean indexing:
countries[countries['capital'].isin(['Berlin', 'London'])]
# Let's say we want to select all data for which the capital starts with a 'B'. In Python, when having a string, we could use the `startswith` method:
string = 'Berlin'
string.startswith('B')
# In pandas, these are available on a Series through the `str` namespace:
countries['capital'].str.startswith('B')
# For an overview of all string methods, see: https://pandas.pydata.org/pandas-docs/stable/reference/series.html#string-handling
# # Exercises using the Titanic dataset
df = pd.read_csv("data/titanic.csv")
df.head()
# <div class="alert alert-success">
#
# <b>EXERCISE</b>:
#
# <ul>
# <li>Select all rows for male passengers and calculate the mean age of those passengers. Do the same for the female passengers.</li>
# </ul>
# </div>
# + clear_cell=true
males = df[df['Sex'] == 'male']
# + clear_cell=true
males['Age'].mean()
# + clear_cell=true
df[df['Sex'] == 'female']['Age'].mean()
# -
# We will later see an easier way to calculate both averages at the same time with groupby.
# <div class="alert alert-success">
#
# <b>EXERCISE</b>:
#
# <ul>
# <li>How many passengers older than 70 were on the Titanic?</li>
# </ul>
# </div>
# + clear_cell=true
len(df[df['Age'] > 70])
# + clear_cell=true
(df['Age'] > 70).sum()
# -
# <div class="alert alert-success">
#
# <b>EXERCISE</b>:
#
# <ul>
# <li>Select the passengers that are between 30 and 40 years old.</li>
# </ul>
# </div>
# + clear_cell=true
df[(df['Age'] > 30) & (df['Age'] <= 40)]
# -
# <div class="alert alert-success">
#
# <b>EXERCISE</b>:
#
# Split the 'Name' column on the `,`, extract the first part (the surname), and add this as a new column 'Surname'.
#
# * Get the first value of the 'Name' column.
# * Split this string (check the `split()` method of a string) and get the first element of the resulting list.
# * Write the previous step as a function, and 'apply' this function to each element of the 'Name' column (check the `apply()` method of a Series).
#
# </div>
# + clear_cell=true
name = df['Name'][0]
name
# + clear_cell=true
name.split(",")
# + clear_cell=true
name.split(",")[0]
# + clear_cell=true
def get_surname(name):
return name.split(",")[0]
# + clear_cell=true
df['Name'].apply(get_surname)
# + clear_cell=true
df['Surname'] = df['Name'].apply(get_surname)
# + clear_cell=true
# alternative using an "inline" lambda function
df['Surname'] = df['Name'].apply(lambda x: x.split(',')[0])
# + clear_cell=true
# alternative solution with pandas' string methods
df['Surname'] = df['Name'].str.split(",").str.get(0)
# -
# <div class="alert alert-success">
#
# <b>EXERCISE</b>:
#
# <ul>
# <li>Select all passengers that have a surname starting with 'Williams'.</li>
# </ul>
# </div>
# + clear_cell=true
df[df['Surname'].str.startswith('Williams')]
# -
# <div class="alert alert-success">
#
# <b>EXERCISE</b>:
#
# <ul>
# <li>Select all rows for the passengers with a surname of more than 15 characters.</li>
# </ul>
#
# </div>
# + clear_cell=true
df[df['Surname'].str.len() > 15]
# -
# # [OPTIONAL] more exercises
# For the quick ones among you, here are some more exercises using larger dataframes with film data. These exercises are based on the [PyCon tutorial of <NAME>](https://github.com/brandon-rhodes/pycon-pandas-tutorial/) (so all credit to him!) and the datasets he prepared for that. You can download these data from here: [`titles.csv`](https://drive.google.com/open?id=0B3G70MlBnCgKajNMa1pfSzN6Q3M) and [`cast.csv`](https://drive.google.com/open?id=0B3G70MlBnCgKal9UYTJSR2ZhSW8) and put them in the `/notebooks/data` folder.
cast = pd.read_csv('data/cast.csv')
cast.head()
titles = pd.read_csv('data/titles.csv')
titles.head()
# <div class="alert alert-success">
#
# <b>EXERCISE</b>:
#
# <ul>
# <li>How many movies are listed in the titles dataframe?</li>
# </ul>
#
# </div>
# + clear_cell=true
len(titles)
# -
# <div class="alert alert-success">
#
# <b>EXERCISE</b>:
#
# <ul>
# <li>What are the earliest two films listed in the titles dataframe?</li>
# </ul>
# </div>
# + clear_cell=true
titles.sort_values('year').head(2)
# -
# <div class="alert alert-success">
#
# <b>EXERCISE</b>:
#
# <ul>
# <li>How many movies have the title "Hamlet"?</li>
# </ul>
# </div>
# + clear_cell=true
len(titles[titles['title'] == 'Hamlet'])
# -
# <div class="alert alert-success">
#
# <b>EXERCISE</b>:
#
# <ul>
# <li>List all of the "Treasure Island" movies from earliest to most recent.</li>
# </ul>
# </div>
# + clear_cell=true
titles[titles['title'] == 'Treasure Island'].sort_values('year')
# -
# <div class="alert alert-success">
#
# <b>EXERCISE</b>:
#
# <ul>
# <li>How many movies were made from 1950 through 1959?</li>
# </ul>
# </div>
# + clear_cell=true
len(titles[(titles['year'] >= 1950) & (titles['year'] <= 1959)])
# + clear_cell=true
len(titles[titles['year'] // 10 == 195])
# -
# <div class="alert alert-success">
#
# <b>EXERCISE</b>:
#
# <ul>
# <li>How many roles in the movie "Inception" are NOT ranked by an "n" value?</li>
# </ul>
# </div>
# + clear_cell=true
inception = cast[cast['title'] == 'Inception']
# + clear_cell=true
len(inception[inception['n'].isna()])
# + clear_cell=true
inception['n'].isna().sum()
# -
# <div class="alert alert-success">
#
# <b>EXERCISE</b>:
#
# <ul>
# <li>But how many roles in the movie "Inception" did receive an "n" value?</li>
# </ul>
# </div>
# + clear_cell=true
len(inception[inception['n'].notna()])
# -
# <div class="alert alert-success">
#
# <b>EXERCISE</b>:
#
# <ul>
# <li>Display the cast of the "Titanic" (the most famous 1997 one) in their correct "n"-value order, ignoring roles that did not earn a numeric "n" value.</li>
# </ul>
# </div>
# + clear_cell=true
titanic = cast[(cast['title'] == 'Titanic') & (cast['year'] == 1997)]
titanic = titanic[titanic['n'].notna()]
titanic.sort_values('n')
# -
# <div class="alert alert-success">
#
# <b>EXERCISE</b>:
#
# <ul>
# <li>List the supporting roles (having n=2) played by <NAME> in the 1990s, in order by year.</li>
# </ul>
# </div>
# + clear_cell=true
brad = cast[cast['name'] == '<NAME>']
brad = brad[brad['year'] // 10 == 199]
brad = brad[brad['n'] == 2]
brad.sort_values('year')
# -
# # Acknowledgement
#
#
# > The optional exercises are based on the [PyCon tutorial of <NAME>](https://github.com/brandon-rhodes/pycon-pandas-tutorial/) (so all credit to him!) and the datasets he prepared for that.
#
# ---
| _solved/pandas_03a_selecting_data.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
# Example of a procedural-style program
# Typical (naive) code
# Student 1
student_name_1 = 'Kim'
student_number_1 = 1
student_grade_1 = 1
student_detail_1 = [
{'gender': 'Male'},
{'score1': 95},
{'score2': 88}
]
# Student 2
student_name_2 = 'Lee'
student_number_2 = 2
student_grade_2 = 2
student_detail_2 = [
{'gender': 'Female'},
{'score1': 77},
{'score2': 92}
]
# Student 3
student_name_3 = 'Park'
student_number_3 = 3
student_grade_3 = 3
student_detail_3 = [
{'gender': 'Male'},
{'score1': 99},
{'score2': 100}
]
# List-based structure
# Inconvenient to manage
# Data must be accessed by mapping to its exact position (index)
student_names_list = ['Kim', 'Lee', 'Park']
student_numbers_list = [1, 2, 3]
student_grades_list = [1, 2, 4]
student_details_list = [
{'gender' : 'Male', 'score1': 95, 'score2': 88},
{'gender' : 'Female', 'score1': 77, 'score2': 92},
{'gender' : 'Male', 'score1': 99, 'score2': 100}
]
# Delete a student
del student_names_list[1]
del student_numbers_list[1]
del student_grades_list[1]
print(student_names_list)
print(student_numbers_list)
print(student_grades_list)
print(student_details_list)
# Dictionary-based structure
# Code repetition continues, and nesting becomes a problem
students_dicts = [
{'student_name': 'Kim',
'student_number': 1,
'student_grade': 1,
'student_detail': {'gender': 'Male', 'score1': 95, 'score2': 88}
},
{'student_name': 'Lee',
'student_number': 2,
'student_grade': 2,
'student_detail': {'gender': 'Female', 'score1': 77, 'score2': 92}
},
{'student_name': 'Park',
'student_number': 3,
'student_grade': 3,
'student_detail': {'gender': 'Male', 'score1': 99, 'score2': 100}
}
]
del students_dicts[1]
print(students_dicts)
# When working with 3rd-party apps, the preferred data type is the dictionary.
# -
# chapter01-1
# Advanced Python
# Object-oriented programming (OOP) -> code reuse, prevention of code duplication, etc.
# +
# Detailed explanation of classes
# Class variables vs. instance variables
# -
# Procedural programming
# A program that proceeds by reading and executing code from top to bottom
# Fast execution speed
# Downside: once you face thousands of lines of top-to-bottom code, maintenance becomes difficult
# Debugging also becomes harder
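# As a preview of the class-based approach this chapter builds toward, the parallel student variables above collapse into a single class. A minimal illustrative sketch (not the chapter's final code):

```python
class Student:
    """Groups the name/number/grade/detail fields that were previously
    kept in parallel variables or parallel lists."""

    def __init__(self, name, number, grade, detail):
        self.name = name
        self.number = number
        self.grade = grade
        self.detail = detail


students = [
    Student('Kim', 1, 1, {'gender': 'Male', 'score1': 95, 'score2': 88}),
    Student('Lee', 2, 2, {'gender': 'Female', 'score1': 77, 'score2': 92}),
    Student('Park', 3, 3, {'gender': 'Male', 'score1': 99, 'score2': 100}),
]

# Deleting a student removes all of their data in one operation,
# instead of one `del` per parallel list
del students[1]
print([s.name for s in students])  # ['Kim', 'Park']
```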
| python/python_advanced_class_02.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# <NAME> - Spotify Project
# Import Libraries #
import spotipy
from spotipy.oauth2 import SpotifyClientCredentials #To access authorised Spotify data
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.pyplot import figure
import seaborn as sns
from pylab import rcParams
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler
from sklearn.preprocessing import normalize
import scipy.cluster.hierarchy as shc
from sklearn.cluster import AgglomerativeClustering
# Connect to Spotify API #
sp = spotipy.Spotify()
cid =""
secret = ""
client_credentials_manager = SpotifyClientCredentials(client_id=cid, client_secret=secret)
sp = spotipy.Spotify(client_credentials_manager=client_credentials_manager)
sp.trace=False
# Data Collection / Wrangling
# Create get_playlist_tracks function to get all tracks from a user's playlist
def get_playlist_tracks(username,playlist_id):
results = sp.user_playlist_tracks(username,playlist_id)
tracks = results['items']
while results['next']:
results = sp.next(results)
tracks.extend(results['items'])
return tracks
# Use get_playlist_tracks function to pull all tracks from 'pmattingly's' playlist 'Billboard #1 Hits from 2000-2019'
playlist = get_playlist_tracks('pmattingly', '06mZvzwge07R64LQNRGnFB')
#create list of songs in playlist
song_list = []
for track_item in playlist:
    #print(track_item.keys())
    song_list.append(track_item)
#Split song_list into 3 groups - to get all songs from playlist
song_list_1 = song_list[:75]
song_list_2 = song_list[75:150]
song_list_3 = song_list[150:]
# +
# For song_list_1, get audio features and track/album/artist/release date info for each song and combine into one dataframe, df_tracks_1
ids = []
for i in range(len(song_list_1)):
ids.append(song_list_1[i]["track"]["id"])
features1 = sp.audio_features(ids)
df_audio_1 = pd.DataFrame(features1)
df_names_1 = pd.DataFrame(columns = ['track_name', 'album_name', 'artist_name', 'release_date', 'artist_uri', 'track_uri', 'popularity'])
for i in range(len(song_list_1)):
track_name = song_list_1[i]['track']['name']
album_name = song_list_1[i]['track']['album']['name']
artist_name = song_list_1[i]['track']['album']['artists'][0]['name']
release_date = song_list_1[i]['track']['album']['release_date']
artist_uri = (song_list_1[i]['track']['album']['artists'][0]['uri']).split(":")[2]
track_uri = (song_list_1[i]['track']['uri']).split(":")[2]
popularity = song_list_1[i]['track']['popularity']
values = [track_name, album_name, artist_name, release_date, artist_uri, track_uri, popularity]
s = pd.Series(values, index=df_names_1.columns)
df_names_1 = df_names_1.append(s, ignore_index=True)
df_tracks_1 = pd.concat([df_names_1, df_audio_1], axis=1)
# +
# For song_list_2, get audio features and track/album/artist/release date info for each song and combine into one dataframe, df_tracks_2
ids=[]
for i in range(len(song_list_2)):
ids.append(song_list_2[i]["track"]["id"])
features2 = sp.audio_features(ids)
df_audio_2 = pd.DataFrame(features2)
df_names_2 = pd.DataFrame(columns = ['track_name', 'album_name', 'artist_name', 'release_date', 'artist_uri', 'track_uri', 'popularity'])
for i in range(len(song_list_2)):
track_name = song_list_2[i]['track']['name']
album_name = song_list_2[i]['track']['album']['name']
artist_name = song_list_2[i]['track']['album']['artists'][0]['name']
release_date = song_list_2[i]['track']['album']['release_date']
artist_uri = (song_list_2[i]['track']['album']['artists'][0]['uri']).split(":")[2]
track_uri = (song_list_2[i]['track']['uri']).split(":")[2]
popularity = song_list_2[i]['track']['popularity']
values = [track_name, album_name, artist_name, release_date, artist_uri, track_uri, popularity]
s = pd.Series(values, index=df_names_2.columns)
df_names_2 = df_names_2.append(s, ignore_index=True)
df_tracks_2 = pd.concat([df_names_2, df_audio_2], axis=1)
# +
# For song_list_3, get audio features and track/album/artist/release date info for each song and combine into one dataframe, df_tracks_3
ids=[]
for i in range(len(song_list_3)):
ids.append(song_list_3[i]["track"]["id"])
features3 = sp.audio_features(ids)
df_audio_3 = pd.DataFrame(features3)
df_names_3 = pd.DataFrame(columns = ['track_name', 'album_name', 'artist_name', 'release_date', 'artist_uri', 'track_uri', 'popularity'])
for i in range(len(song_list_3)):
track_name = song_list_3[i]['track']['name']
album_name = song_list_3[i]['track']['album']['name']
artist_name = song_list_3[i]['track']['album']['artists'][0]['name']
release_date = song_list_3[i]['track']['album']['release_date']
artist_uri = (song_list_3[i]['track']['album']['artists'][0]['uri']).split(":")[2]
track_uri = (song_list_3[i]['track']['uri']).split(":")[2]
popularity = song_list_3[i]['track']['popularity']
values = [track_name, album_name, artist_name, release_date, artist_uri, track_uri, popularity]
s = pd.Series(values, index=df_names_3.columns)
df_names_3 = df_names_3.append(s, ignore_index=True)
df_tracks_3 = pd.concat([df_names_3, df_audio_3], axis=1)
# -
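# The three near-identical cells above can be factored into a single helper. A sketch — here `audio_features_fn` is a stand-in for `sp.audio_features`, which accepts a limited batch of track ids per call (the reason the playlist was chunked):

```python
import pandas as pd

TRACK_COLUMNS = ['track_name', 'album_name', 'artist_name', 'release_date',
                 'artist_uri', 'track_uri', 'popularity']


def track_info(song):
    """Flatten one playlist item into the track/album/artist fields used above."""
    track = song['track']
    album = track['album']
    return {
        'track_name': track['name'],
        'album_name': album['name'],
        'artist_name': album['artists'][0]['name'],
        'release_date': album['release_date'],
        'artist_uri': album['artists'][0]['uri'].split(':')[2],
        'track_uri': track['uri'].split(':')[2],
        'popularity': track['popularity'],
    }


def build_track_frame(songs, audio_features_fn):
    """Combine name/album/artist info with audio features for a chunk of songs.

    `audio_features_fn` is expected to behave like `sp.audio_features`:
    take a list of track ids and return a list of feature dicts.
    """
    ids = [song['track']['id'] for song in songs]
    df_audio = pd.DataFrame(audio_features_fn(ids))
    df_names = pd.DataFrame([track_info(song) for song in songs],
                            columns=TRACK_COLUMNS)
    return pd.concat([df_names, df_audio], axis=1)

# Usage with the real client: build_track_frame(song_list_1, sp.audio_features)
```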
#combine 3 track datasets into 1, df_tracks
df_tracks = pd.concat([df_tracks_1,df_tracks_2,df_tracks_3], ignore_index=True)
#view head of df_tracks
df_tracks.head(10)
list(df_tracks.columns.values)
# view first two observations of id, uri, track_uri, and artist_uri in df_tracks
df_tracks[['id', 'uri', 'track_uri', 'artist_uri']].head(2)
# drop columns containing duplicate information
df_tracks = df_tracks.drop(['uri', 'id', 'analysis_url', 'track_href'], axis=1)
# +
# use release_date column to create year, month, and month_year
#create year column
df_tracks['year'] = pd.DatetimeIndex(df_tracks['release_date']).year
#create month column
df_tracks['month'] = pd.DatetimeIndex(df_tracks['release_date']).month
#create month-year column
df_tracks['month_year'] = pd.to_datetime(df_tracks['release_date']).dt.to_period('M')
#view year, month, and month_year of the first two observations in df_tracks
df_tracks[['year', 'month', 'month_year']].head(2)
# -
# Exploratory Data Analysis
df_tracks.columns.values
df_tracks.shape
df_tracks.info()
df_tracks.describe()
# +
# View Top 20 Artists by Number of #1 Billboard Hits from 2000-2019
# magenta highlights artists with more than 3 hits during this period; cyan marks the remaining artists
top_20_artists = pd.DataFrame(df_tracks['artist_name'].value_counts()[:20])
#sns.countplot(x="artist_name", data=top_20_artists)
rcParams['figure.figsize'] = 15, 7
top_20_artists.plot(kind='bar', width = .8, color=[np.where(top_20_artists["artist_name"]>3, 'm', 'c')], legend=None)
plt.title('Top 20 Artists by #1 Billboard Hits from 2000-2019')
plt.xlabel('Artist Name')
plt.xticks(rotation = 45)
plt.ylabel('Number of Hits')
plt.show()
# -
# Top 5 Albums by Number of #1 Billboard Hits from 2000 - 2019
top_albums = pd.DataFrame(df_tracks['album_name'].value_counts()[:5])
rcParams['figure.figsize'] = 10, 5
top_albums.plot(kind='bar', width = .8, color='tab:purple', legend = None)
plt.title('Top 5 Albums by Number of #1 Billboard Hits from 2000-2019')
plt.xlabel('Album Name')
plt.xticks(rotation = 45)
plt.ylabel('Number of Hits')
plt.show()
#histogram of popularity scores of #1 Billboard Hits from 2000 to 2019
#something to think about in terms of next steps -- some songs may have a higher popularity score if they've come out more recently
#possibly standardize the popularity score variable (with time)
df_tracks["popularity"] = pd.to_numeric(df_tracks["popularity"])
ax = df_tracks['popularity'].hist(bins=10)
ax.set_title('Histogram of Popularity Scores of Billboard Hits from 2000-2019')
plt.show()
# Number of Billboard #1 Hits from 2000-2019 by year
sns.set(style="whitegrid")
sns.set(rc={'figure.figsize':(11.7,8.27)})
sns.countplot(x='year', data=df_tracks)
plt.title('Billboard #1 Hits from 2000-2019')
plt.xlabel('Year')
plt.ylabel('Number of Hits')
# Create subset of df_tracks for clustering
df_tracks_subset = df_tracks[['track_uri','popularity', 'acousticness', 'danceability', 'duration_ms', 'energy',
'instrumentalness', 'key', 'liveness', 'loudness', 'mode', 'speechiness', 'tempo',
'time_signature']]
df_tracks_subset.loc[:, df_tracks_subset.columns != 'track_uri'].head()
# K-Means Clustering : use K-Means clustering to cluster songs in playlist
# +
#standardize data in df_tracks
# * note: only using numeric / continuous variables as we are using k-means algorithm & need to standardize
scaler = StandardScaler()
data_scaled = scaler.fit_transform(df_tracks_subset.loc[:, df_tracks_subset.columns != 'track_uri'])
# statistics of scaled data
pd.DataFrame(data_scaled).describe()
# +
# fitting multiple k-means algorithms and storing the values in an empty list
SSE = []
for cluster in range(1,15):
kmeans = KMeans(n_jobs = -1, n_clusters = cluster, init='k-means++')
kmeans.fit(data_scaled)
SSE.append(kmeans.inertia_)
# converting the results into a dataframe and plot them (use this elbow method to determine the optimal number of clusters)
frame = pd.DataFrame({'Cluster':range(1,15), 'SSE':SSE})
plt.figure(figsize=(12,6))
plt.plot(frame['Cluster'], frame['SSE'], marker='o', color='black')
plt.xlabel('Number of clusters')
plt.xticks(range(1, 15))
plt.ylabel('Inertia')
# -
# k means using 5 clusters (based off of elbow method results above) and k-means++ initialization
kmeans = KMeans(n_jobs = -1, n_clusters = 5, init='k-means++')
kmeans.fit(data_scaled)
pred = kmeans.predict(data_scaled)
# get counts of number of observations/songs in each cluster
frame = pd.DataFrame(data_scaled)
frame['cluster'] = pred
frame['cluster'].value_counts()
#combine track info variables (track_uri, track_name, album_name, and artist_name) with frame (predicted cluster value of each observation)
df_final = pd.concat([df_tracks[['track_uri', 'track_name', 'album_name', 'artist_name']], frame], axis=1)
#view combined dataset created above
df_final.head()
# randomly select songs from each cluster and store in track_list
# seed wasn't set, so the playlist created will show a different set of songs on each run
random_tracks = df_final.groupby('cluster').apply(lambda x: x.sample(1)).reset_index(drop=True)
track_list = random_tracks[['track_uri', 'track_name', 'album_name', 'artist_name', 'cluster']]
track_list
# Hierarchical Clustering
#normalize data to be used in hierarchical clustering
data_scaled_h = normalize(df_tracks_subset.loc[:, df_tracks_subset.columns != 'track_uri'])
data_scaled_h = pd.DataFrame(data_scaled_h, columns=df_tracks_subset.loc[:, df_tracks_subset.columns != 'track_uri'].columns)
data_scaled_h.head()
# plot dendrogram -- the resulting dendrogram suggests 3 clusters, using Ward's method
plt.figure(figsize=(10, 7))
plt.title("Dendrograms")
dend = shc.dendrogram(shc.linkage(data_scaled_h, method='ward'))
plt.axhline(y=.0015, color='r', linestyle='--')
#agglomerative hierarchical clustering technique with 3 clusters
cluster = AgglomerativeClustering(n_clusters=3, affinity='euclidean', linkage='ward')
cluster.fit_predict(data_scaled_h)
data_scaled_h.columns.values
a = cluster.fit_predict(data_scaled_h)
a = pd.Series(a)
# combine track info (track_uri, track_name, album_name and artist_name) with predicted cluster for each observation/song
df_final_h = pd.concat([df_tracks[['track_uri', 'track_name', 'album_name', 'artist_name']], a], axis=1)
# view combined dataset created above
df_final_h.head()
#select random tracks for playlist from each cluster created using hierarchical agglomerative clustering technique
# seed wasn't set, so the playlist created will show a different set of songs on each run
random_tracks_h = df_final_h.groupby(0).apply(lambda x: x.sample(1)).reset_index(drop=True)
track_list_h = random_tracks_h[['track_uri', 'track_name', 'album_name', 'artist_name', 0]]
track_list_h
# Resources:
# - https://www.analyticsvidhya.com/blog/2019/05/beginners-guide-hierarchical-clustering/
# - https://medium.com/@RareLoot/extracting-spotify-data-on-your-favourite-artist-via-python-d58bc92a4330
# - https://www.kaggle.com/geomack/how-to-grab-data-using-the-spotipy-library
# - https://stackoverflow.com/questions/39086287/spotipy-how-to-read-more-than-100-tracks-from-a-playlist
# - https://www.analyticsvidhya.com/blog/2019/08/comprehensive-guide-k-means-clustering/
#
| Patel, Payal - Spotify Analysis.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
# # Wrangle OpenStreetMap data
# [<NAME>](https://github.com/ccampguilhem/Udacity-DataAnalyst), August 2017
# <a id="Top"/>
# ## Table of contents
# - [Introduction](#Introduction)
# - [Project organisation](#Project organisation)
# - [Map area selection](#Area selection)
# - [XML data structure](#XML data structure)
# - [Data quality audit](#Data quality)
# - [Validity](#Data validity)
# - [Accuracy](#Data accuracy)
# - [Completeness](#Data completeness)
# - [Consistency](#Data consistency)
# - [Uniformity](#Data uniformity)
# - [Conclusion](#Audit conclusion)
# - [Data cleaning](#Data cleaning)
# - [Method](#Method)
# - [Converting to dictionary-like structure](#Converting to dictionnary-like structure)
# - [Cleaning accuracy issues](#Cleaning accuracy issues)
# - [Cleaning completeness issues](#Cleaning completeness issues)
# - [Cleaning consistency issues](#Cleaning consistency issues)
# - [Cleaning uniformity issues](#Cleaning uniformities issues)
# - [Conclusion](#Cleaning conclusion)
# - [Data export](#Data export)
# - [To JSON and MongoDB](#JSON MongoDB)
# - [To csv and SQLite](#csv SQLite)
# - [Conclusion](#Conclusion)
# - [Further work](#Further work)
# - [Appendix](#Appendix)
# <a id="Introduction"/>
# ## Introduction *[top](#Top)*
#
# This project is related to Data Wrangling with MongoDB course for Udacity Data Analyst Nanodegree program.
# The purpose of this project is to:
#
# - Collect data from [OpenStreetMap](https://www.openstreetmap.org) web services.
# - Clean the data by fixing few issues introduced by users.
# - Store the dataset in a database to make any further analysis easier.
#
# OpenStreetMap is open data, licensed under the Open Data Commons Open Database License (ODbL) by the OpenStreetMap Foundation (OSMF).
#
# This project covers various aspects of data wrangling phase:
# - **screen scraping** with [Requests](http://requests.readthedocs.io/en/master/), an http Python library for making requests on web services,
# - **parsing** XML files with iterative and SAX parsers with Python standard library [xml.etree.ElementTree](https://docs.python.org/2/library/xml.etree.elementtree.html?highlight=iterparse#module-xml.etree.ElementTree) and [xml.sax](https://docs.python.org/2/library/xml.sax.html),
# - **auditing** (validity, accuracy, completeness, consistency and uniformity) and **cleaning** data with Python,
# - validity: does data conform to a schema ?
# - accuracy: does data conform to gold standard (a dataset we trust) ?
# - completeness: do we have all records ?
# - consistency: is dataset providing contradictory information ?
# - uniformity: are all data provided in the same units ?
# - **storing** data into a SQL database (SQLite) with the Python [sqlite3](https://docs.python.org/2/library/sqlite3.html) module and the [MongoDB](https://www.mongodb.com/) no-SQL database.
# - exploring dataset **statistics**.
#
# The storing step will make use of [csv](https://docs.python.org/2/library/csv.html?highlight=csv#module-csv) and [json](https://docs.python.org/2/library/json.html?highlight=json#module-json) formats respectively for SQL and MongoDB exports.
#
# I am already a bit familiar with SQL, so I will provide SQL output in addition to MongoDB output for the cleaned dataset.
# <a id='Project organisation'/>
# ## Project organisation *[top](#Top)*
#
# The project is decomposed in the following manner:
#
# - This notebook (data_wrangling.ipynb) contains top-level code as well as results and report.
# - The [data_wrangling.html](./data_wrangling.html) file is a html export of this notebook.
# - The [environment.yml](./environment.yml) file contains the anaconda environment I used for this project.
# - The [data_wrangling.py](./data_wrangling.py) file is an export of this notebook.
# - The [download.py](./download.py) module contains a function to download OpenStreetMap dataset with editable configuration.
# - The [parse.py](./parse.py) module contains functions to parse OpenStreetMap dataset.
# - The [handler.py](./handler.py) module contains a class used as content handler for SAX XML parser.
# - The [utils.py](./utils.py) module contains functions used by audit classes.
# - The [validity_audit.py](./validity_audit.py) module contains a callback class for validity audit.
# - The [accuracy_audit.py](./accuracy_audit.py) module contains a callback class for accuracy audit.
# - The [completeness_audit.py](./completeness_audit.py) module contains a callback class for completeness audit.
# - The [consistency_audit.py](./consistency_audit.py) module contains a callback class for consistency audit.
# - The [uniformity_audit.py](./uniformity_audit.py) module contains a callback class for uniformity audit.
# - The [clean_data.py](./clean_data.py) module contains functions used to clean dataset.
# - The [dictionnary_export.py](./dictionnary_export.py) module contains a callback class to export data to a dictionnary.
# - The [export_data.py](./export_data.py) module contains functions used to export data (into csv or SQLite).
# + tags=["magic", "hide_export"]
#Enable auto-reload of modules, will help as we have a lot of modules
# %load_ext autoreload
# %autoreload 2
# -
# <a id="Area selection"/>
# ## Map area selection *[top](#Top)*
#
# If you don't want details on how the data from OpenStreetMap is retrieved, you can skip this section. At the end of the processing, you should have a *data.osm* file in the same directory as this notebook.
#
# I have made the map area selection dynamic. By configuring few variables, a different map area may be extracted from OpenStreetMap. Some pre-selections are available:
#
# | Pre-selection | Description | Usage | File size (bytes) | OpenStreetMap link |
# |:------------- |:------------------------- |:------------------- | -----------------:|:------------------ |
# | Tournefeuille | The city I live in | Project review | 103 143 437 | [link](https://www.openstreetmap.org/relation/35735)
# | City center | Tournefeuille city center | Testing, debugging | 583 419 | [link](https://www.openstreetmap.org/export#map=14/43.5848/1.3516)
# | Toulouse | Toulouse and surroundings | Benchmark | 1 271 859 210 | [link](https://www.openstreetmap.org/search?query=toulouse#map=11/43.6047/1.4442)
#
# The box variables are in the following order (south-west to north-east):
#
# - minimum latitude
# - minimum longitude
# - maximum latitude
# - maximum longitude
#
# **Note:** The data cleaning provided in this project works for a French area; if you select a non-French area, no data cleaning will be performed.
#
# **Note:** You can modify the CONFIG variable to edit the configuration (see the [download.py](./download.py) module for help).
# I have used screen scraping techniques presented throughout the course to extract data from OpenStreetMap:
#
# - I use the Overpass API (http://wiki.openstreetmap.org/wiki/Overpass_API).
# - The query form (http://overpass-api.de/query_form.html) sends a POST request to http://overpass-api.de/api/interpreter.
# - From the api/interpreter we can just make a GET request which takes a data parameter containing the box selection:
#
# ```
# (
# node(51.249,7.148,51.251,7.152);
# <;
# );
# out meta;
# ```
#
# The idea is to send an HTTP GET request using [Requests](http://requests.readthedocs.io/en/master/) and collect the results in a stream. This is because the data we get from the request may be huge and may not fit into memory.
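# As an illustration, a streamed download along these lines can be written with only the standard library. This is a sketch, not the project's implementation (which lives in [download.py](./download.py) with retries and configuration), shown in Python 3 syntax even though the notebook itself runs Python 2:

```python
from urllib.parse import urlencode
from urllib.request import urlopen

OVERPASS_URL = "http://overpass-api.de/api/interpreter"


def build_overpass_query(min_lat, min_lon, max_lat, max_lon):
    # Box order: south-west corner first, then north-east corner.
    return "(node({},{},{},{});<;);out meta;".format(
        min_lat, min_lon, max_lat, max_lon)


def stream_download(path, query, chunk_size=1 << 20):
    # Read the response chunk by chunk so that large extracts
    # never have to fit into memory at once.
    url = OVERPASS_URL + "?" + urlencode({"data": query})
    with urlopen(url) as response, open(path, "wb") as out:
        while True:
            chunk = response.read(chunk_size)
            if not chunk:
                break
            out.write(chunk)
```

# With Requests, the equivalent is `requests.get(url, params={"data": query}, stream=True)` and iterating over `iter_content`.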
#
# The following function `download_map_area` downloads the map area data and stores it in a *data.osm* file:
from download import download_map_area
#Download dataset
status_code, dataset_path, dataset_size = download_map_area()
if status_code is None:
    print "The file {} is re-used from a previous download. Its size is {} bytes.".format(dataset_path, dataset_size)
elif status_code == 200:
    print "The file {} has been successfully downloaded. Its size is {} bytes.".format(dataset_path, dataset_size)
else:
    print "An error occurred while downloading the file. Http status code is {}.".format(status_code)
# <a id="XML data structure"/>
# ## XML data structure *[top](#Top)*
#
# In the previous section, we have downloaded a dataset from OpenStreetMap web service. The XML file retrieved this way is stored in the file named *data.osm*.
#
# In this section we are going to familiarize ourselves with the dataset to understand how it's built. As the dataset may be a very large file (depending on the map area extracted) we are going to use an iterative parser that does not need to load the entire document in memory.
# +
#Import the XML library
import xml.etree.cElementTree as et
from collections import Counter, defaultdict
from pprint import pprint
import tabulate
from IPython.core.display import display, HTML
# -
#Iterative parsing
element_tags = Counter()
for (event, elem) in et.iterparse(dataset_path):
    element_tags[elem.tag] += 1
pprint(dict(element_tags))
# In OpenStreetMap, data is structured this way:
# - A **node** is a location in space defined by its latitude and longitude. It might indicate a standalone point and/or can be used to define shape of a way.
# - A **way** can be either a polyline to represent roads, rivers... or a closed polygon to delimit areas (buildings, parks...).
# - A **nd** is used within way to reference nodes.
# - A **relation** can be defined from **member** nodes and ways to represent routes, or bigger areas such as regions or city boundaries.
# - A **member** is a subpart of a relation pointing either to a node or a way.
# - A **tag** is a (key, value) information attached to nodes, ways and relations to document in more detail the item.
# - **osm** is the root node in .osm files.
# - **note** and **meta** are metadata.
#
# We are now going to parse the XML file again to get the full path of each tag in the dataset. We need to use a SAX parser with a custom handler.
import xml.sax
from handler import OpenStreetMapXmlHandler
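# The real handler lives in [handler.py](./handler.py) and is not shown here; the core idea can be sketched as a `ContentHandler` that maintains a stack of currently open elements (a simplified, illustrative version, not the project's class):

```python
import xml.sax
from collections import Counter, defaultdict


class PathCountingHandler(xml.sax.ContentHandler):
    """Count elements and record each tag's ancestor path."""

    def __init__(self):
        xml.sax.ContentHandler.__init__(self)
        self._stack = []                      # currently open elements
        self.tag_counts = Counter()
        self.tag_ancestors = defaultdict(set)

    def startElement(self, name, attrs):
        self.tag_counts[name] += 1
        # '' for the root, 'osm' for its children, 'osm.way' for nd, ...
        self.tag_ancestors[name].add(".".join(self._stack))
        self._stack.append(name)

    def endElement(self, name):
        self._stack.pop()
```

# Because SAX fires callbacks as elements open and close, the stack always holds exactly the ancestors of the current element, whatever the file size.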
# We can now use the handler in SAX parsing:
parser = xml.sax.make_parser()
with OpenStreetMapXmlHandler() as handler:
    parser.setContentHandler(handler)
    parser.parse(dataset_path)
#Get tag counts
pprint(handler.getTagsCount())
# The returned tag count is the same as the one we calculated using `et.iterparse`.
#Get tag ancestors
pprint(handler.getTagsAncestors())
# As discussed earlier:
# - **osm** element has no ancestor (it's root element)
# - **meta** and **note** only appear in **osm** element
# - **node**, **way** and **relation** are direct children of **osm**
# - **tag** can be used to document any of **node**, **way** and **relation**
# - **member** elements are only used in **relation** elements (to reference either nodes, ways or other relations)
# - **nd** elements are only used in **way** elements (to reference nodes)
#
# Such result will help us a lot when auditing [data quality](#Data quality).
# <a id='Data quality'/>
# ## Data quality audit *[top](#Top)*
#
# This chapter is divided into 5 sections for each kind of data quality audit:
# - [Validity](#Data validity)
# - [Accuracy](#Data accuracy)
# - [Completeness](#Data completeness)
# - [Consistency](#Data consistency)
# - [Uniformity](#Data uniformity)
# <a id='Data validity'/>
# ### Validity *[audit](#Data quality)*
#
# Validity is about compliance to a schema. The data we have retrieved from OpenStreetMap servers is an XML file. Techniques exist to validate XML structures, such as XML Schema. We won't use such a technique here because the schema is relatively simple, and because XML files can be large, so we want to stick to the SAX parser.
#
# Actually, the SAX content handler introduced in the previous [section](#XML data structure) will be helpful here, as it's already able to list ancestors for each element. We can then define a schema in a similar form and compare both to see if there is any issue.
#
# The schema is a dictionary structured this way:
# - key: element tag
# - value: dictionary with the following keys / values:
# - *ancestors*: List of any acceptable ancestor path. For example, the path ('osm.way') means that element shall be a children of a way element which itself is a children of a osm element.
# - *minOccurences*: minimum number of element in the dataset (greater or equal to 0), optional
# - *maxOccurences*: maximum number of element in the dataset (greater or equal to 1), optional
# - *requiredAttributes*: list of attribute names that shall be defined for element
# - *requiredChildren*: list of required children element
# - *attributesFuncs*: list of callable objects to be run on the element attributes for further checks
# +
import functools
#Function to check numbers
check_digit = lambda name, attr: attr[name].isdigit()
check_id_digit = functools.partial(check_digit, 'id')
check_uid_digit = functools.partial(check_digit, 'uid')
check_ref_digit = functools.partial(check_digit, 'ref')
#Define a schema
schema = {
#osm is root node. There shall be exactly one.
'osm': {
'ancestors': {''},
'minOccurences': 1,
'maxOccurences': 1},
#meta shall be within osm element. There shall be exactly one of those.
'meta': {
'ancestors': {'osm'},
'minOccurences': 1,
'maxOccurences': 1},
#note shall be within osm element. There shall be exactly one of those.
'note': {
'ancestors': {'osm'},
'minOccurences': 1,
'maxOccurences': 1},
#node shall be within osm element. A node shall have id, lat (latitude) and lon (longitude) attributes.
#Additionally, lat shall be in the range [-90, 90] and longitude in the range [-180, 180]. Id shall be a digit
#number
'node': {
'ancestors': {'osm'},
'requiredAttributes': ['id', 'lat', 'lon', 'uid'],
'attributesFuncs': [lambda attr: -90 <= float(attr['lat']) <= 90,
lambda attr: -180 <= float(attr['lon']) <= 180,
check_id_digit,
check_uid_digit]},
#way shall be within osm element. A way shall have id attribute. It shall have at least one nd children.
#id shall be a digit.
'way': {
'ancestors': {'osm'},
'requiredAttributes': ['id', 'uid'],
'requiredChildren': ['nd'],
'attributesFuncs': [check_id_digit, check_uid_digit]},
#nd shall be within way element. A nd shall have ref attribute. ref attribute shall be a digit.
'nd': {
'ancestors': {'osm.way'},
'requiredAttributes': ['ref'],
'attributesFuncs': [check_ref_digit]},
#relation shall be within an osm element. It shall have an id attribute and at least one member child. id shall
#be a digit
'relation': {
'ancestors': {'osm'},
'requiredAttributes': ['id', 'uid'],
'requiredChildren': ['member'],
'attributesFuncs': [check_id_digit, check_uid_digit]},
#member shall be within a relation element. It shall have type, ref and role attributes. The type attribute shall
#be either way or node. The ref attribute shall be a digit.
'member': {
'ancestors': {'osm.relation'},
'requiredAttributes': ['type', 'ref', 'role'],
'attributesFuncs': [lambda attr: attr['type'] in ['way', 'node', 'relation'],
check_ref_digit]},
#tag shall be within node, way or relation. It shall have k and v attributes.
'tag': {
'ancestors': {'osm.node', 'osm.way', 'osm.relation'},
'requiredAttributes': ['k', 'v']},
}
# -
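# To make the schema rules concrete, here is a sketch of how a single element could be checked against such a schema. This is illustrative only, not the `DataValidityAudit` implementation:

```python
def check_element(schema, tag, ancestor_path, attributes):
    """Return a list of nonconformities for a single element."""
    rules = schema.get(tag)
    if rules is None:
        return [(tag, "element not allowed by schema")]
    issues = []
    if "ancestors" in rules and ancestor_path not in rules["ancestors"]:
        issues.append((tag, "unexpected ancestor path '{}'".format(ancestor_path)))
    missing = [name for name in rules.get("requiredAttributes", [])
               if name not in attributes]
    issues.extend((tag, "missing attribute '{}'".format(name)) for name in missing)
    if not missing:
        # Value checks assume the required attributes are present
        for func in rules.get("attributesFuncs", []):
            if not func(attributes):
                issues.append((tag, "attribute value check failed"))
    return issues
```

# Occurrence counts (minOccurences / maxOccurences) and required children need bookkeeping across the whole parse, so they can only be checked once parsing ends.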
# In order to have this schema validated, we are going to create a callback to be passed to SAX content handler we have created earlier:
from validity_audit import DataValidityAudit
# We create a function that will help us parse and audit the data. This function returns a list of nonconformities.
from parse import parse_and_audit
#Parse and audit
audit = [DataValidityAudit(schema)]
nonconformities = parse_and_audit(dataset_path, audit)
display(HTML(tabulate.tabulate(nonconformities, tablefmt='html')))
# The returned list above is empty, meaning that no nonconformity has been detected by the validity audit. The data we get from OpenStreetMap may be trusted in terms of schema compliance.
#
# As a reference, it takes approximately 15 seconds to parse and audit a dataset of around 100 MB.
# <a id='Data accuracy'/>
# ### Accuracy *[audit](#Data quality)*
#
# Accuracy is a measurement of conformity with a gold standard. On a dataset such as the one from OpenStreetMap it may be difficult to find a gold standard. We are then going to limit this audit to values that are sometimes provided in the dataset for items which represent a town:
# - INSEE identifier (ref:INSEE in the example below)
# - Population
# - Date of last census (source:population in the example below)
#
# Here is an example:
#
# ```xml
# <node id="26691412" lat="43.5827846" lon="1.3466543" version="17" timestamp="2017-08-22T17:20:54Z" changeset="51349577" uid="6523296" user="ccampguilhem">
# <tag k="addr:postcode" v="31170"/>
# <tag k="name" v="Tournefeuille"/>
# <tag k="name:fr" v="Tournefeuille"/>
# <tag k="name:oc" v="Tornafuèlha"/>
# <tag k="place" v="town"/>
# <tag k="population" v="26674"/>
# <tag k="ref:FR:SIREN" v="213105570"/>
# <tag k="ref:INSEE" v="31557"/>
# <tag k="source:population" v="INSEE 2014"/>
# <tag k="wikidata" v="Q328022"/>
# <tag k="wikipedia" v="fr:Tournefeuille"/>
# </node>
# ```
#
# But this information may also be attached to a relation element instead:
# ```xml
# <relation id="158881" version="20" timestamp="2017-06-22T16:33:19Z" changeset="49751028" uid="94578" user="andygol">
# <member type="node" ref="534672451" role="admin_centre"/>
# <member type="way" ref="36353842" role="outer"/>
# <member type="way" ref="166581580" role="outer"/>
# ...
# <member type="way" ref="502733025" role="outer"/>
# <member type="way" ref="502733024" role="outer"/>
# <member type="way" ref="36353843" role="outer"/>
# <tag k="addr:postcode" v="31820"/>
# <tag k="admin_level" v="8"/>
# <tag k="boundary" v="administrative"/>
# <tag k="name" v="Pibrac"/>
# <tag k="name:fr" v="Pibrac"/>
# <tag k="name:ru" v="Пибрак"/>
# <tag k="name:uk" v="Пібрак"/>
# <tag k="name:zh" v="皮布拉克"/>
# <tag k="population" v="8091"/>
# <tag k="ref:FR:SIREN" v="213104177"/>
# <tag k="ref:INSEE" v="31417"/>
# <tag k="source:population" v="INSEE 2013"/>
# ```
#
# Our audit code shall take this into account.
#
# For this example, I have updated the OpenStreetMap database manually to match official data published by [INSEE](https://www.insee.fr/en/accueil). I will use INSEE data as gold standard (see [here](https://www.insee.fr/fr/statistiques/1405599?geo=COM-31557+COM-31291+COM-31149+COM-31424+COM-31157+COM-31417)). The last census in my region is from 2014.
#
# We are going to define a gold standard in a dictionary for a few towns in the surroundings of Tournefeuille. If you have selected a user-defined map area, it may not be suitable for you:
# +
#Used to convert digits in XML with thousand separators into a Python integer
convert_to_int = lambda x: int(x.replace(" ", ""))
gold_standard_insee = {
u'Tournefeuille': {
'population': (convert_to_int, 26674),
'source:population': (str, 'INSEE 2014'),
'ref:INSEE': (convert_to_int, 31557)},
u'Léguevin': {
'population': (convert_to_int, 8892),
'source:population': (str, 'INSEE 2014'),
'ref:INSEE': (convert_to_int, 31291)},
u'Colomiers': {
'population': (convert_to_int, 38541),
'source:population': (str, 'INSEE 2014'),
'ref:INSEE': (convert_to_int, 31149)},
u'Plaisance-du-Touch': {
'population': (convert_to_int, 17278),
'source:population': (str, 'INSEE 2014'),
'ref:INSEE': (convert_to_int, 31424)},
u'Cugnaux': {
'population': (convert_to_int, 17004),
'source:population': (str, 'INSEE 2014'),
'ref:INSEE': (convert_to_int, 31157)},
u'Pibrac': {
'population': (convert_to_int, 8226),
'source:population': (str, 'INSEE 2014'),
'ref:INSEE': (convert_to_int, 31417)},
u'Toulouse': {
'population': (convert_to_int, 466297),
'source:population': (str, 'INSEE 2014'),
'ref:INSEE': (convert_to_int, 31555)},
}
# -
# Let's create an audit class for accuracy. It will compare each information from items having a "population" tag to the standard above.
from accuracy_audit import DataAccuracyAudit
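# The comparison against the gold standard boils down to the following logic, sketched here with plain dicts (the real checks, including relation handling, live in [accuracy_audit.py](./accuracy_audit.py)):

```python
def audit_item_accuracy(tags, gold_standard):
    """Compare one item's tags (a plain dict) against the gold standard."""
    expected = gold_standard.get(tags.get("name"))
    if expected is None or "population" not in tags:
        return []  # not a town we have a reference for
    issues = []
    for key, (convert, reference) in expected.items():
        if key not in tags:
            issues.append((tags["name"], key, "missing", reference))
        elif convert(tags[key]) != reference:
            issues.append((tags["name"], key, tags[key], reference))
    return issues
```

# Each gold-standard entry carries its own conversion function, so values typed as strings in the XML can be compared to integers or dates in the reference.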
#Parse and audit
audit = [DataAccuracyAudit(gold_standard_insee)]
nonconformities = parse_and_audit(dataset_path, audit)
display(HTML(tabulate.tabulate(nonconformities, tablefmt='html')))
# Some accuracy issues are reported because data in OpenStreetMap is not up to date with census of 2014.
# No issue is reported for Tournefeuille because I have manually updated the OpenStreetMap database.
#
# There are many more accuracy checks that we could do. For example, buildings with commercial activities have their phone number and web site mentioned in the OpenStreetMap database. Accuracy could be assessed by checking the existence of the web site or by comparing the phone number to official records.
# <a id='Data completeness'/>
# ### Completeness *[audit](#Data quality)*
#
# Assessing completeness of data is a difficult task. We'll do the work for pharmacies. We will use another standard: [Pages Jaunes](https://www.pagesjaunes.fr/annuaire/tournefeuille-31/pharmacies). Pages Jaunes provides the same kind of services as Yellow Pages.
#
# Here is the list of pharmacies we expect to find:
gold_standard_pages_jaunes = [
(u'Pharmacie Denise Ribère', (u'2', u'Rue Platanes', 31170, u'Tournefeuille')),
    (u'Pharmacie De La Ramée', (u'102', u'<NAME>', 31170, u'Tournefeuille')),
    (u'Pharmacie Cap 2000', (u'1', u'<NAME>', 31170, u'Tournefeuille')),
    (u'Pharmacie De La Commanderie', (u'110', u'Avenue Marquisat', 31170, u'Tournefeuille')),
    (u'Pharmacie <NAME>', (u'18', u'<NAME>', 31170, u'Tournefeuille')),
    (u'Pharmacie Arc En Ciel', (u'19', u'Avenue Al<NAME>', 31170, u'Tournefeuille')),
    (u'Pharmacie Du Centre', (u'67', u'<NAME>', 31170, u'Tournefeuille')),
    (u'La Pharmacie Du Vieux Pigeonnier', (u'3', u'<NAME>', 31170, u'Tournefeuille')),
    (u'Pharmacie De Pahin', (u'37', u'<NAME>', 31170, u'Tournefeuille'))]
# Let's create an audit class for completeness. It will compare each information from items having a "amenity/pharmacy" tag to the standard above.
from completeness_audit import DataCompletenessAudit
#Parse and audit
audit = [DataCompletenessAudit(gold_standard_pages_jaunes, warnings=True)]
nonconformities = parse_and_audit(dataset_path, audit)
display(HTML(tabulate.tabulate(nonconformities, tablefmt='html')))
# 4 pharmacies are reported as missing. Some others are reported as present but not expected: this is because the dataset extends beyond the boundary of Tournefeuille. We can notice two things:
#
# - Pharmacie <NAME> is missing and Pharmacie Ribère has been found. It may be the pharmacy we expected.
# - Pharmacie Arc En Ciel is missing and Pharmac**c**ie Arc-en-Ciel has been found. Our string comparison function converts to lower case and replaces - by a space, but there is a typo in the OpenStreetMap database. Use of [fuzzy string](https://streamhacker.com/2011/10/31/fuzzy-string-matching-python/) matching algorithms might have helped in this situation.
#
# For the last two missing items (La Pharmacie Du <NAME> and Pharmacie <NAME>), it seems at first glance that the dataset is simply incomplete. But after a closer look at Pages Jaunes for the location of the pharmacies, it seems that they match the positions of Pharmacie <NAME> and Pharmacie Robin.
#
# A simple rename of the pharmacies in the OpenStreetMap dataset would be enough to ensure 100% completeness.
# <a id='Data consistency'/>
# ### Consistency *[audit](#Data quality)*
#
# Consistency auditing consists of finding contradictory information in the dataset, or issues that prevent us from using some of its information.
#
# In a previous [section](#XML data structure), we have seen that **relation** elements refer to **node**, **way** or other **relation** elements through the **member** element. Similarly, **way** elements refer to nodes through **nd** elements.
#
# A consistent dataset would have every **relation** and **way** point only to nodes and ways that are also present in the dataset. This is the check we are going to implement:
from consistency_audit import DataConsistencyAudit
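# In essence, the audit collects every id seen in the file and then flags references that point outside that set. A sketch, assuming the ids and references have already been gathered into plain dicts and sets (not the DataConsistencyAudit implementation):

```python
def find_dangling_references(known_ids, way_node_refs, relation_members):
    """Flag references to entities absent from the extract.

    known_ids:        {'node': set, 'way': set, 'relation': set}
    way_node_refs:    {way id: [referenced node ids]}
    relation_members: {relation id: [(member type, referenced id), ...]}
    """
    issues = []
    for way_id, refs in way_node_refs.items():
        for ref in refs:
            if ref not in known_ids["node"]:
                issues.append(("way", way_id, "node", ref))
    for rel_id, members in relation_members.items():
        for member_type, ref in members:
            if ref not in known_ids.get(member_type, set()):
                issues.append(("relation", rel_id, member_type, ref))
    return issues
```

# Since node ids are only needed as a membership set, memory stays proportional to the number of entities, not to the XML size.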
#Parse and audit
audit = [DataConsistencyAudit()]
nonconformities = parse_and_audit(dataset_path, audit)
display(HTML(tabulate.tabulate(nonconformities, tablefmt='html')))
# Some ways (a very low percentage) refer to non-present nodes. A significant number of relations refer to non-present entities. One possible explanation is that towns may not be completely extracted, and relations define town boundaries with nodes or ways. Those nodes or ways are missing because they are outside the box we have extracted from OpenStreetMap.
# <a id='Data uniformity'/>
# ### Uniformity *[audit](#Data quality)*
#
# To audit uniformity, we are going to focus on the way addresses are provided in the dataset.
#
# Here is an example:
#
# ```xml
# <relation id="1246249" version="2" timestamp="2017-08-21T10:30:22Z" changeset="51299391" uid="922338" user="<NAME>">
# <member type="way" ref="74688949" role="outer"/>
# <member type="way" ref="74695300" role="outer"/>
# <member type="way" ref="74692941" role="outer"/>
# <member type="way" ref="74688530" role="outer"/>
# <tag k="addr:city" v="Toulouse"/>
# <tag k="addr:housenumber" v="42"/>
# <tag k="addr:postcode" v="31057"/>
# <tag k="addr:street" v="Avenue Gaspard Coriolis"/>
# ...
# ```
#
# The item (a relation here) is documented with the tags addr:city, addr:housenumber, addr:postcode and addr:street. The addresses in the dataset will be considered uniform if each of them contains all those components. In addition, the way addr:street values are recorded will be analyzed to check whether multiple ways of writing street components (Rue, Avenue, Boulevard, Place, ...) are used. The audit class will report any non-uniformity throughout the dataset:
from uniformity_audit import DataUniformityAudit
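# The two checks at work can be sketched as follows: which addr:* components each address carries, and how the leading street-type word is spelled (illustrative only, not the DataUniformityAudit implementation):

```python
from collections import Counter

ADDRESS_KEYS = ("addr:housenumber", "addr:street", "addr:postcode", "addr:city")


def missing_address_components(tags):
    # Only items carrying at least one address component are audited.
    present = [key for key in ADDRESS_KEYS if key in tags]
    if not present:
        return []
    return [key for key in ADDRESS_KEYS if key not in tags]


def street_patterns(street_names):
    # Count spellings of the leading street-type word (Rue, Av., avenue, ...)
    return Counter(name.split()[0] for name in street_names if name.strip())
```

# The pattern counter makes non-uniform spellings visible at a glance, which is what the `getStreetsPatterns` output below shows.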
#Parse and audit
audit = [DataUniformityAudit(warnings=False)]
nonconformities = parse_and_audit(dataset_path, audit)
display(HTML(tabulate.tabulate(nonconformities, tablefmt='html')))
streets_patterns = audit[0].getStreetsPatterns()
print streets_patterns
# In terms of uniformity of providing the same address components, the most common issue is to have neither postcode nor city. Occasionally the housenumber is missing too. Fixing housenumbers automatically seems difficult. Fixing the postcode may be easy when the city, as recorded in OpenStreetMap, has a single postcode and the item has a city attached. There is nothing obvious we can do for items having neither postcode nor city fields. One possible solution would be to check inclusion of a node (given its latitude / longitude) in the polygon delimiting the city (as provided by **relation** elements), which can be solved with a little [maths](http://geomalgorithms.com/a03-_inclusion.html).
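# The inclusion test mentioned above is the standard ray-casting check. A sketch (the city boundary polygon would come from the relation's member ways; points exactly on an edge are not handled consistently):

```python
def point_in_polygon(lat, lon, polygon):
    """Ray casting: count crossings of a horizontal ray from the point.

    polygon: list of (lat, lon) vertices in order, not necessarily closed.
    """
    inside = False
    n = len(polygon)
    for i in range(n):
        lat1, lon1 = polygon[i]
        lat2, lon2 = polygon[(i + 1) % n]
        # Does edge (i, i+1) straddle the point's latitude?
        if (lat1 > lat) != (lat2 > lat):
            # Longitude where the edge crosses that latitude
            crossing = lon1 + (lat - lat1) * (lon2 - lon1) / (lat2 - lat1)
            if lon < crossing:
                inside = not inside
    return inside
```

# An odd number of crossings means the point is inside; this works for any simple polygon, convex or not.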
#
# There is also a lack of uniformity in the way streets are recorded. For example we can see Av., avenue, or Avenue, but this is not a big deal and can be fixed easily.
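# Fixing the street-type spellings comes down to a simple mapping applied to the first word of each street name. A sketch with an illustrative mapping (the actual cleaning lives in [clean_data.py](./clean_data.py)):

```python
# Illustrative mapping; the real one would be built from the audit output
STREET_TYPE_MAPPING = {
    "Av.": "Avenue",
    "avenue": "Avenue",
    "Bd": "Boulevard",
    "rue": "Rue",
}


def normalize_street_name(name):
    # Replace a known abbreviation or lower-case variant of the first word.
    first, _, rest = name.partition(" ")
    first = STREET_TYPE_MAPPING.get(first, first)
    return (first + " " + rest).strip()
```

# Unknown first words pass through untouched, so the cleaning is safe to run on every addr:street value.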
# <a id='Audit conclusion'/>
# ### Conclusion *[audit](#Data quality)*
#
# The audit performed is rather incomplete in terms of the checks that could be performed. But we have seen how to audit every kind of nonconformity (validity, accuracy, completeness, consistency and uniformity).
#
# Yet, the frontier between each type of audit may be tenuous:
# - The completeness issues identified with missing pharmacies turned out to be an accuracy problem (the pharmacies are in the dataset but with a different name).
# - The inconsistency issue with nodes or ways referenced in ways or relations but missing from the dataset may be seen as a completeness issue. The nodes and ways probably exist in the full OpenStreetMap database, so this is probably more related to how the data is extracted from the full database with a box selection.
# - The uniformity issues in the way addresses are recorded may also be seen as a completeness issue because housenumbers are missing.
#
# The classification of nonconformities is not that important, but the list (validity, accuracy, completeness, consistency and uniformity) is probably a good hint to be sure we do not forget some kind of checks.
#
# On large datasets, running a full quality audit seems possible but very tedious. Knowing the scope of the analysis helps in selecting the minimum set of audits to run on the dataset.
#
# Using an iterative parser is clearly an additional difficulty in writing audit code; things would have been much simpler with a full parser like the ones provided by the [lxml](http://lxml.de/) library. lxml also implements pretty advanced XPath queries that would have made both auditing and cleaning much faster.
#
# The following code wraps all audit tasks and returns a table with all kinds of nonconformities:
#Parse and audit
full_audit = [DataValidityAudit(schema), DataAccuracyAudit(gold_standard_insee),
DataCompletenessAudit(gold_standard_pages_jaunes), DataConsistencyAudit(), DataUniformityAudit()]
nonconformities = parse_and_audit(dataset_path, full_audit)
display(HTML(tabulate.tabulate(nonconformities, tablefmt='html')))
# Note that we have not found any validity issue. The XML structure is pretty simple and OpenStreetMap provides XML files that can be trusted in terms of structure. The issues come from data that has been provided by users.
#
# In the [next section](#Data cleaning), we are going to clean this dataset before importing it into a database.
# <a id='Data cleaning'/>
# ## Data cleaning *[top](#Top)*
# <a id='Method'/>
# ### Method *[cleaning](#Data cleaning)*
#
# We are not going to clean the XML file we have downloaded from OpenStreetMap. As we need to parse it iteratively, writing a cleaning algorithm would be pretty difficult.
#
# Here are the steps we are going to follow:
#
# - Export the dataset into a Python dictionary-like structure.
# - Clean the dictionary.
# - Save the dictionary into a JSON file.
# - Import the JSON file into a MongoDB database.
# - Write csv files from the dictionary-like structure.
# - Import the csv files into a SQLite database.
#
# **Note:** exporting the dataset to a dictionary may be memory consuming. I have not used any technique here to reduce the memory footprint, but [shelve](https://docs.python.org/2/library/shelve.html?highlight=shelve#module-shelve) in the standard library may help. Alternatives may be [diskcache](http://www.grantjenks.com/docs/diskcache/tutorial.html) and even directly [pymongo](https://api.mongodb.com/python/current/tutorial.html).
# <a id='Converting to dictionnary-like structure'/>
# ### Converting to dictionary-like structure *[cleaning](#Data cleaning)*
#
# We are going to use a technique similar to the one used for the data quality audit. We are basically going to plug a callback class into the SAX content handler to load the OpenStreetMap dataset into a dictionary-like structure.
#
# First we need to define the dictionnary structure we want to have (this is pseudo-code not actually used in the program):
#
# ```python
# schema_dict = {
#     'nodes': [
#         {'osmid': 0,         # the OpenStreetMap id of the node
#          'latitude': 0.,     # latitude [-90, 90]
#          'longitude': 0.,    # longitude [-180, 180]
#          'userid': 0,        # OpenStreetMap id of owner
#          'tags': [           # list of associated tags
#              {'key': '',     # tag key
#               'value': ''}]  # tag value
#         }
#     ],
#     'ways': [
#         {'osmid': 0,         # the OpenStreetMap id of the way
#          'userid': 0,        # OpenStreetMap id of owner
#          'tags': [           # list of associated tags
#              {'key': '',     # tag key
#               'value': ''}], # tag value
#          'nodes': [ ]        # a list of node osmids
#         }
#     ],
#     'relations': [
#         {'osmid': 0,         # the OpenStreetMap id of the relation
#          'userid': 0,        # OpenStreetMap id of owner
#          'tags': [           # list of associated tags
#              {'key': '',     # tag key
#               'value': ''}], # tag value
#          'nodes': [ ],       # a list of node osmids
#          'ways': [ ],        # a list of way osmids
#          'relations': [ ]    # a list of relation osmids
#         }
#     ]
# }
# ```
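# As a minimal illustration of the SAX callback technique (the real handler lives in the `dictionnary_export` module; the class and the sample XML below are made up for illustration): a content handler receives one event per element, so the file never has to be loaded whole.

```python
import xml.sax

class ElementCounter(xml.sax.ContentHandler):
    """Count OSM elements by tag name without loading the whole XML file."""
    def __init__(self):
        super().__init__()
        self.counts = {}

    def startElement(self, name, attrs):
        # Called once per opening tag as the parser streams through the file
        self.counts[name] = self.counts.get(name, 0) + 1

handler = ElementCounter()
xml.sax.parseString(
    b"<osm><node id='1' lat='43.6' lon='1.44'/>"
    b"<node id='2' lat='43.7' lon='1.45'/><way id='3'/></osm>",
    handler)
```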
from dictionnary_export import DictionnaryExport
from parse import parse
#Parse and extract
dataset_dict = { }
with DictionnaryExport(dataset_dict) as dict_export:
parse(dataset_path, [dict_export])
# All nodes, ways and relations have been exported.
# + [markdown] tags=["hide_export"]
# We can inspect a few items:
# + tags=["hide_export"]
print(dataset_dict['nodes'][-1])
# + tags=["hide_export"]
print(dataset_dict['ways'][-1])
# + tags=["hide_export"]
print(dataset_dict['relations'][-1])
# -
# Having the dataset in this format is much handier, but querying it requires writing dedicated functions. That is the whole point of using a database: the query machinery is already implemented for us. For now, the dictionary is convenient enough to perform the data cleaning.
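# For example, something as simple as looking up a node by its OpenStreetMap id means writing a linear scan by hand — exactly what a database index would give us for free (a sketch against the structure defined above):

```python
def find_node(dataset, osmid):
    """Linear scan over the nodes list; a database index would avoid this."""
    for node in dataset['nodes']:
        if node['osmid'] == osmid:
            return node
    return None

# Tiny made-up dataset following the schema_dict layout
sample = {'nodes': [{'osmid': 42, 'latitude': 43.6, 'longitude': 1.44, 'tags': []}]}
found = find_node(sample, 42)
```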
# <a id='Cleaning accuracy issues'/>
# ### Cleaning accuracy issues *[cleaning](#Data cleaning)*
#
# In the data accuracy audit [section](#Data accuracy) we spotted a few accuracy issues regarding city populations. The idea here is simply to update nodes and relations carrying a population tag so that they match the INSEE standard.
#
# A simple function may be written for that purpose:
from clean_data import clean_accuracy
inodes, irelations = clean_accuracy(dataset_dict, gold_standard_insee)
# + tags=["hide_export"]
for inode in inodes:
pprint(dataset_dict["nodes"][inode]["tags"])
for irelation in irelations:
pprint(dataset_dict["relations"][irelation]["tags"])
# -
# <a id='Cleaning completeness issues'/>
# ### Cleaning completeness issues *[cleaning](#Data cleaning)*
#
# In the data completeness audit [section](#Data completeness) we identified missing records in the dataset. It turned out that the issues flagged as completeness issues can also be seen as accuracy issues, since the pharmacies are present but not recorded under their proper names.
#
# We can deal with these issues with a simple function:
from clean_data import clean_completeness
pharmacy_mapping = {u"<NAME>": u"Pharmacie <NAME>",
u"Pharmaccie Arc-en-Ciel": u"Pharmacie Arc En Ciel",
u"<NAME>": u"Pharmacie <NAME>",
u"Ph<NAME>": u"La Pharmacie Du Vieux Pigeonnier",
u"Pharmacie de la Ramée": u"Pharmacie De La Ramée",
u"Pharmacie de la Commanderie": u"Pharmacie De La Commanderie",
u"Pharmacie du Centre": u"Pharmacie Du Centre",
u"Pharmacie de Pahin": "Pharmacie De Pahin",
u"Pharmacie CAP 2000": u"Pharmacie Cap 2000"}
inodes = clean_completeness(dataset_dict, pharmacy_mapping)
# + tags=["hide_export"]
for inode in inodes:
pprint(dataset_dict["nodes"][inode])
# -
# <a id='Cleaning consistency issues'/>
# ### Cleaning consistency issues *[cleaning](#Data cleaning)*
#
# Some **node**, **way** or **relation** items are referenced by **way** or **relation** items but are missing from the extracted dataset. I have decided to remove any reference to those items from the dictionary-like dataset in order to keep the output database consistent.
#
# The audit class can return the sets of missing items. Given those sets, we just have to remove any reference to them in our cleaned dataset. This can be done with the following function:
from clean_data import clean_consistency
missing_nodes = full_audit[3].getMissingNodes()
missing_ways = full_audit[3].getMissingWays()
missing_relations = full_audit[3].getMissingRelations()
iways, irelations = clean_consistency(dataset_dict, missing_nodes, missing_ways, missing_relations)
# The number of updated ways and relations matches the number of issues we have identified earlier.
# <a id='Cleaning uniformity issues'/>
# ### Cleaning uniformity issues *[cleaning](#Data cleaning)*
#
# In the data quality audit [section](#Data uniformity) we identified two different kinds of nonconformities:
# - missing addr:housenumber or addr:postcode
# - non-uniform naming of street components
#
# We will not fix addr:housenumber because there is no obvious way to do it. We will try to fix addr:postcode, as postcode information may be available in the city data. We may not be able to fix 100% of the postcodes though:
# - some big cities have multiple postcodes
# - the city in addr:city may not match the city name
# - addr:city may not be provided
#
# Finally, we can easily solve the second kind of non-uniformity by providing a mapping.
#
# The following function performs all cleaning related to uniformity issues:
from clean_data import clean_uniformity
#Reminder of all patterns encountered
print(streets_patterns)
#Mapping
street_mapping = {u"rue": u"Rue",
u"impasse": u"Impasse",
u"avenue": u"Avenue",
u"Av.": u"Avenue",
u"place": u"Place",
u"allée": u"Allée"}
inodes, iways, irelations = clean_uniformity(dataset_dict, street_mapping)
# We have been able to fix a few postcode issues (all those for which a city was provided and differed from Toulouse).
# <a id='Cleaning conclusion'/>
# ### Conclusion *[cleaning](#Data cleaning)*
#
# Cleaning is far from perfect, but we have illustrated different techniques to clean a dataset. This is definitely a time-consuming activity :)
#
# It's now time to export dataset into files and databases.
# <a id='Data export'/>
# ## Data export *[top](#Top)*
# <a id='JSON MongoDB'/>
# ### To JSON and MongoDB *[export](#Data export)*
#
# This is the pair I am least comfortable with. Luckily, our dictionary-like structure is well suited both to conversion to a JSON file and to mass import into MongoDB.
#
# Let's start with dumping a JSON file:
import json
import os
with open('data.json', 'w') as fobj:
fobj.write(json.dumps(dataset_dict))
print("Size of JSON file: {} bytes.".format(os.path.getsize('data.json')))
# Writing is pretty fast for a 68 MB file.
#
# Let's try to reload the file:
with open('data.json', 'r') as fobj:
dataset_dict = json.loads(fobj.read())
print("Dataset with {} nodes, {} ways and {} relations.".format(len(dataset_dict["nodes"]),
      len(dataset_dict["ways"]), len(dataset_dict["relations"])))
# We have not lost anything. Reading a JSON file back takes longer than writing it, though. We can now store the data into a MongoDB database:
#Connect to MongoDB and remove any previous database (if any)
from pymongo import MongoClient
mongodb_client = MongoClient()
mongodb_client.drop_database('udacity-wrangling')
#Mass import from JSON file of documents
db = mongodb_client['udacity-wrangling']
nodes = db['nodes']
nodes.insert_many(dataset_dict["nodes"])
ways = db['ways']
ways.insert_many(dataset_dict["ways"])
relations = db["relations"]
relations.insert_many(dataset_dict["relations"])
print(db.collection_names())
# + tags=["hide_export"]
#What's in there?
print(nodes.find_one())
print(ways.find_one())
print(relations.find_one())
# -
#Request by OpenStreetMap id:
for item in nodes.find({'osmid': 8138771}):
pprint(item)
#Request the east-most nodes (at most 1 node is returned here), SQL LIMIT equivalent
for item in nodes.find({'longitude': {'$gt': 1.39}}).limit(1):
pprint(item)
#Refined latitude / longitude box (equivalent to City center dataset)
for item in nodes.find({'longitude': {'$gt': 1.3434, '$lt': 1.3496},
'latitude': {'$gt': 43.5799, '$lt': 43.5838}}).limit(1):
pprint(item)
#Find in document attributes (list)
for item in relations.find({"nodes": 265545746}):
pprint(item)
#Find in document attributes (dict) using the $and operator, plus a kind of SQL UNION:
filter_tags = lambda x: x["key"] in ('name:fr', 'ref:INSEE', 'population', 'source:population')
city_criteria = {"$and": [{"tags.key": "ref:INSEE"}, {"tags.key": "population"}]}
items = [node for node in nodes.find(city_criteria)]
items.extend([relation for relation in relations.find(city_criteria)])
for item in items:
pprint(dict((t["key"], t["value"]) for t in filter(filter_tags, item["tags"])))
#Look for pharmacies in Tournefeuille either by city name or postcode, combining the $and and $or operators:
find_criteria = {"$and": [{"tags.key": "amenity", "tags.value": "pharmacy"},
{"$or": [{"tags.key": "addr:postcode", "tags.value": "31170"},
{"tags.key": "addr:city", "tags.value": "Tournefeuille"}]}]}
for node in nodes.find(find_criteria):
pprint(node)
# There is a single pharmacy in Tournefeuille with either the addr:postcode or addr:city attribute set.
# The cleaning we performed is insufficient here because we were not able to assign a postcode or a city to the nodes missing both of them.
#Get major contributors: usage of aggregation, grouping and sorting (descending)
#We need to build an aggregation pipeline
for item in nodes.aggregate([{"$group": {"_id": "$userid", "count": {"$sum": 1}}}, #group by userid and count
{"$project": {"count": { "$multiply": [ "$count", 100. / nodes.count()]}}}, # calculate %
{"$sort": {"count": -1}}, #sort by descending order
{"$limit": 3}]): #limit to 3 users
    print(item)
# The user owning the largest number of nodes in this extract owns more than 68% of all of them! Let's find out if we can learn more about him/her:
#More information about userid 1685
for item in ways.find({"userid": 1685, "tags.key": "source"}).limit(1):
pprint(item)
# It seems that he/she works at the Direction Générale des Impôts (the French tax directorate), in the land registry office (French: cadastre). The French land registry office seems to use OpenStreetMap ;)
#
# To end this MongoDB capabilities overview, let's try a join to get the latitude and longitude of all the nodes in the previous way:
#First let's look at what the $unwind operator does. The $match operator lets us use a find-style filter in an aggregation
for item in ways.aggregate([{"$match": {"osmid": 30907996}},
{"$unwind": "$nodes"},
{"$limit": 2}]):
pprint(item)
# The \$unwind operator "deconstructs" an array, producing one document per array element. We need it to perform a join with the \$lookup operator, which performs a left outer join. The join syntax is:
#Now let's join
server_version = tuple(int(x) for x in mongodb_client.server_info()['version'].split('.'))
if server_version >= (3, 2, 0):
    try:
        for item in ways.aggregate([{"$match": {"osmid": 30907996}},
                                    {"$unwind": "$nodes"},
                                    {"$lookup":
                                     {"from": "nodes", "localField": "nodes",
                                      "foreignField": "osmid", "as": "node_data"}}]):
            pprint(item)
    except Exception:
        print("Sorry for that, this code is untested because I don't have a recent "
              "version of MongoDB yet; I need to upgrade.")
else:
    print("Your version of MongoDB does not support the $lookup operator.")
# I am using Ubuntu 16.04 LTS, which ships with MongoDB 2.6; the $lookup operator is only supported from version 3.2.
#
# MongoDB differs from what I know of SQL:
# - no schema is needed; it's very easy to store collections from Python
# - aggregate, group, sort, limit, etc. work a bit differently from their SQL equivalents, but the product is well documented and it's easy to find answers on [StackOverflow](https://stackoverflow.com/questions/tagged/mongodb) (as always...)
# - "documents" are also easier to read, since information is not spread across multiple tables
# - joining is a bit trickier than in SQL, but that's probably the price to pay for the lack of a strict schema and the self-contained nature of documents. I definitely need to install a newer version to take advantage of it.
# <a id='csv SQLite'/>
# ### To csv and SQLite *[export](#Data export)*
#
# I have not written another SAX content handler for csv export. I simply use the dictionary-like structure to export csv files matching the way I am going to structure the SQL database.
#
# I will create the following files:
#
# | File | Description |
# |:----------------------- |:---------------------------------------------------- |
# | nodes.csv | nodes attributes |
# | nodes_tags.csv | nodes tags |
# | ways.csv | ways attributes |
# | ways_nodes.csv | references to nodes from ways |
# | ways_tags.csv | ways tags |
# | relations.csv | relations attributes |
# | relations_nodes.csv | references to nodes from relations |
# | relations_ways.csv | references to ways from relations |
# | relations_relations.csv | references to relations from relations |
#
# Contrary to MongoDB, preparation for the mass import requires more work!
#
# The following function creates all those files. It takes the dictionary-like structure as argument:
from export_data import export_to_csv
# +
#Reload the dataset from JSON (insert_many added MongoDB `_id` fields to the in-memory copy)
with open('data.json', 'r') as fobj:
dataset_csv = json.loads(fobj.read())
#Export to csv
export_to_csv(dataset_csv)
#Clean dataset
del dataset_csv
# -
# We now have to create the SQLite database. We need to define a schema:
sql_schema = """
CREATE TABLE nodes (
osmid INTEGER PRIMARY KEY NOT NULL,
latitude REAL,
longitude REAL,
userid INTEGER
);
CREATE TABLE nodes_tags (
node_id INTEGER NOT NULL,
key TEXT NOT NULL,
value TEXT NOT NULL
);
CREATE TABLE ways (
osmid INTEGER PRIMARY KEY NOT NULL,
userid INTEGER
);
CREATE TABLE ways_tags (
way_id INTEGER NOT NULL,
key TEXT NOT NULL,
value TEXT NOT NULL
);
CREATE TABLE ways_nodes (
way_id INTEGER NOT NULL,
node_id INTEGER NOT NULL
);
CREATE TABLE relations (
osmid INTEGER PRIMARY KEY NOT NULL,
userid INTEGER
);
CREATE TABLE relations_tags (
relation_id INTEGER NOT NULL,
key TEXT NOT NULL,
value TEXT NOT NULL
);
CREATE TABLE relations_nodes (
relation_id INTEGER NOT NULL,
node_id INTEGER NOT NULL
);
CREATE TABLE relations_ways (
relation_id INTEGER NOT NULL,
way_id INTEGER NOT NULL
);
CREATE TABLE relations_relations (
relation_container INTEGER NOT NULL,
relation_content INTEGER NOT NULL
);
"""
#Create the database
import sqlite3
sql_database = 'data.sql'
#Delete any previous version of database:
try:
os.remove(sql_database)
except OSError:
pass
#Close any previous connection
try:
sql_client.close()
except NameError:
pass
sql_client = sqlite3.connect(sql_database)
cursor = sql_client.cursor()
cursor.executescript(sql_schema)
sql_client.commit()
from export_data import import_csv_into_sqlite
import_csv_into_sqlite(cursor)
print("Database {} ready [{} bytes].".format(sql_database, os.path.getsize(sql_database)))
# Now that the database has been populated with our dataset, we can run some queries. Note that the SQLite file is around 35 MB, smaller than the JSON file (which was 68 MB).
#
# Here are a few queries against the SQL database:
#Request by OpenStreetMap id:
cursor.execute("SELECT * FROM nodes WHERE osmid = ?", (8138771,))
pprint(cursor.fetchall())
#Request east-most nodes (max 3 nodes are returned)
cursor.execute("SELECT * FROM nodes WHERE longitude > 1.39 LIMIT 3")
pprint(cursor.fetchall())
#Refined latitude / longitude box (equivalent to City center dataset)
cursor.execute("""SELECT * FROM nodes
WHERE longitude > ? AND longitude < ? AND
latitude > ? AND latitude < ?
LIMIT 3""", (1.3434, 1.3496, 43.5799, 43.5838))
pprint(cursor.fetchall())
#Find in list attributes (contrary to MongoDB we need a join here)
cursor.execute("""SELECT relations.* FROM relations
JOIN relations_nodes ON relations_nodes.relation_id = relations.osmid
JOIN nodes ON relations_nodes.node_id = nodes.osmid
WHERE nodes.osmid = ?""", (265545746,))
pprint(cursor.fetchall())
#Find in dict attributes
cursor.execute("""SELECT cities.node, nodes_tags.key, nodes_tags.value
FROM nodes_tags
JOIN --- This is a comment
(SELECT nodes.osmid AS node FROM nodes --- This is a subquery getting nodes
JOIN nodes_tags ON nodes_tags.node_id = nodes.osmid --- with tags ref:INSEE and population
WHERE nodes_tags.key = ? OR nodes_tags.key = ? --- we cannot use AND here so instead
GROUP BY nodes.osmid --- we use GROUP BY, count() and HAVING
HAVING count(*) = 2) cities ON cities.node = nodes_tags.node_id
WHERE nodes_tags.key IN (?, ?, ?, ?)""",
(u"ref:INSEE", u"population", u"ref:INSEE", u"population", u"source:population", u"name:fr"))
pprint(cursor.fetchall())
# In both cases (SQL and MongoDB), making requests based on tag keys and values requires relatively complex code.
# Storing tags as key/value rows is highly flexible because new tags can be added easily, but the cost is that queries and exploration become harder. We could instead have selected a subset of tags of interest and turned them into fields in the database.
#
# Looking for pharmacies in Tournefeuille would require a similar kind of request.
#Get major contributors - we can make arithmetics in SQL requests
cursor.execute("""SELECT userid, count(*) * 100. / (SELECT count(*) FROM nodes) as n
FROM nodes GROUP BY userid ORDER BY n DESC LIMIT 3""")
pprint(cursor.fetchall())
#More info on user 1685
cursor.execute("""SELECT ways_tags.key, ways_tags.value
FROM ways_tags
JOIN ways ON ways.osmid = ways_tags.way_id
WHERE ways.userid = ?
GROUP BY ways.userid""", (1685,))
pprint(cursor.fetchall())
# Finally, the latest MongoDB request requiring a join is simple in SQL:
#Now let's join
cursor.execute("""SELECT nodes.osmid FROM nodes
JOIN ways_nodes ON ways_nodes.node_id = nodes.osmid
WHERE ways_nodes.way_id = ?""", (30907996,))
pprint(cursor.fetchall())
#We are done with the database
sql_client.close()
# We stated earlier that joining in MongoDB was more difficult (moreover, it is only supported from version 3.2), but SQL actually requires more join operations due to the design of the tables.
#
# In both cases, it is not straightforward to write requests involving tags. The way OpenStreetMap uses tags is very flexible, but this flexibility also works against consistency in an open database (we found a lot of non-uniformity in the way addresses are recorded, for example).
#
# As we have kept a schema very close to the OpenStreetMap one (both for SQLite and MongoDB), we face the same difficulty: the way tags are recorded brings maximum flexibility, but writing requests becomes much harder. There is a trade-off to make between a database structure that enables maximum flexibility (and, by extension, a higher number of applications) and one that is stricter about tagging (fixed tags, for example), which would greatly improve ease of use.
# <a id='Conclusion'/>
# ## Conclusions *[top](#Top)*
#
# Udacity instructors said that many data analysts report spending most of their time (up to around 75%) wrangling data; at the end of this project I can say that I understand that statement better :)
#
# You can refer to this [Forbes article](https://www.forbes.com/sites/gilpress/2016/03/23/data-preparation-most-time-consuming-least-enjoyable-data-science-task-survey-says/#4ed61cb66f63) for data analysts survey results or to New York Times [one](https://www.nytimes.com/2014/08/18/technology/for-big-data-scientists-hurdle-to-insights-is-janitor-work.html?_r=1) for sources.
#
# Wrangling data means making compromises:
#
# - We could probably spend an infinite amount of time auditing and cleaning large datasets. The more time spent on wrangling, the less time is left for data exploration, but also the more potential applications the dataset can serve.
#
# - The way the dataset is persisted also plays an important role. A strict schema improves ease of use during data exploration but decreases the potential for reuse for different purposes.
#
# Another interesting article from [Openbridge](https://www.openbridge.com/data-wrangling-losing-the-battle/) on data wrangling "battle".
#
# In terms of techniques, there are a lot of improvements we can make:
#
# - The dataset of the project remains very small. Even the 1 GB dataset is small enough to fit into memory, so I haven't really explored the techniques for dealing with large datasets: while I used SAX XML parsers, I also, at some point, held a Python dictionary with the whole dataset in memory, and writing the JSON also requires the full dataset in memory. I gave a brief try to the shelve and diskcache modules but found them very slow, probably a misuse.
#
# - String comparisons may fail due to different case, accented characters, or the use of '-' in names. We could probably make the cleaning (and auditing) steps more robust by using fuzzy comparisons. In addition, caution is needed to properly deal with unicode strings, encoding and decoding ([Ned Batchelder talk at Pycon 2012 is worth the detour.](https://www.youtube.com/watch?v=sgHbC6udIqc))
#
# - Finally, my first experience in SQL helps but is not enough when it comes to bigger requests (with subqueries, pivoting...), so I need more practice. MongoDB is the first NoSQL database I have used, so I need to practice even more with aggregation pipelines and lookups once I have updated my configuration. I have also seen that there are a lot of Object Relational Mapping systems based on Python and MongoDB; this would probably be worth a try when I have some spare time :)
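# As a sketch of the fuzzy-comparison idea mentioned above, the standard library's `difflib` already provides a similarity ratio; combined with accent stripping and lowercasing it tolerates the kinds of differences we met in pharmacy names (the helper names are my own):

```python
import unicodedata
from difflib import SequenceMatcher

def normalize(s):
    # Strip accents (NFKD decomposition drops combining marks) and lowercase
    s = unicodedata.normalize('NFKD', s)
    s = ''.join(c for c in s if not unicodedata.combining(c))
    return s.lower()

def similarity(a, b):
    """Similarity ratio in [0, 1]; 1.0 means identical after normalization."""
    return SequenceMatcher(None, normalize(a), normalize(b)).ratio()

score = similarity(u"Pharmacie de la Ramée", u"PHARMACIE DE LA RAMEE")
```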
# <a id='Further work'/>
# ## Further work *[top](#Top)*
#
# We have cleaned pharmacies data in this project, so let's stick to it and say we want to build a dataset to map all pharmacies in France with the following attributes:
# - full address (housenumber, street, city)
# - [FINESS](http://finess.sante.gouv.fr/fininter/jsp/index.jsp) identifier (already in dataset as a tag with ref:FR:FINESS key) (this is a reference number provided by Ministry of Health in France)
#
# We know that we will have some troubles:
# - addresses are not given in a uniform way
# - addr:city and addr:postcode may be both missing
# - we are not sure that every single pharmacy exists in the database
# - if we want to map all pharmacies in France, the dataset is going to be huge!
# - the way we have structured our database is not optimal in terms of use (querying by tag value requires quite complex queries, and we want to make the data as easy to use as possible)
# - we do not want to sacrifice potential reuse of the dataset if we later need to map doctors, hospitals, clinics...
#
# We have seen that fixing uniformity in addresses is easily doable.
#
# To fix addr:city and addr:postcode we can use information provided in the OpenStreetMap data: city boundaries. They are given as collections of ways inside relations, which define a polygon. This [site](http://geomalgorithms.com/a03-_inclusion.html) provides algorithms to check the inclusion of a 2D point in a 2D polygon, which means we could link each pharmacy node (latitude, longitude) to one of those city boundaries. It is not clear, though, whether we can find boundaries with a unique postcode. This would probably be computationally demanding, and we might be tempted to parallelize the computation.
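# A sketch of the inclusion test mentioned above, using the classic ray-casting rule (an odd number of edge crossings means the point is inside); this is one of the algorithms described on the linked site, written from scratch here, not code taken from it:

```python
def point_in_polygon(lon, lat, polygon):
    """Ray casting: cast a horizontal ray from the point and count how many
    polygon edges it crosses; an odd count means the point is inside.
    polygon is a list of (lon, lat) vertices; closure is implicit."""
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        if (y1 > lat) != (y2 > lat):  # edge straddles the ray's latitude
            x_cross = x1 + (lat - y1) * (x2 - x1) / (y2 - y1)
            if lon < x_cross:
                inside = not inside
    return inside

# A made-up square "city boundary" roughly around Toulouse
square = [(1.0, 43.0), (2.0, 43.0), (2.0, 44.0), (1.0, 44.0)]
```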
#
# We could web-scrape data from Pages Jaunes for each department in France, compare that list to the one we have in OpenStreetMap, and make sure they match (fixing them if they don't). Adding missing pharmacies to our dataset would require deriving latitude and longitude from a given address, which would probably mean looking at nearby ways and existing nodes and making an educated "guess" about the coordinates.
#
# The dataset is going to be huge, and that would require reworking our cleaning algorithms, as we would no longer be able to hold the full dataset in memory. For example, we could first store an uncleaned dataset iteratively into MongoDB and then run the cleaning algorithms on top of it. We could combine this with a sliding-window box selection of OpenStreetMap data to reduce the chunks of data we collect, store and clean; this might also help in writing parallel algorithms. The only downside is that we risk ending up with incomplete city boundaries in relation items.
#
# To make the cleaned dataset reusable yet still friendly for its consumers, we could keep a structure like the one used in this project, with the addition of views ([SQLite](https://sqlite.org/lang_createview.html) and [MongoDB](https://docs.mongodb.com/manual/core/views/)). Creating such a view requires "pivoting" the tags (stored as rows in the database) so that they become fields (columns). SQLite does not support a PIVOT instruction, but one can be emulated as explained [here](https://stackoverflow.com/questions/1237068/pivot-in-sqlite); there is no such capability in MongoDB either, but it can be emulated with an [aggregation](https://stackoverflow.com/questions/15499115/mongo-and-pivot). The idea would then be to expose only the views to dataset users, update the views on request, and eventually extend the approach to other medical amenities (hospitals, clinics...).
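# A sketch of the PIVOT emulation in SQLite, assuming the nodes_tags layout used earlier: one MAX(CASE ...) column per tag key of interest, wrapped in a view (table contents here are made up):

```python
import sqlite3

conn = sqlite3.connect(':memory:')
cur = conn.cursor()
cur.executescript("""
CREATE TABLE nodes_tags (node_id INTEGER, key TEXT, value TEXT);
INSERT INTO nodes_tags VALUES (1, 'amenity', 'pharmacy'),
                              (1, 'name', 'Pharmacie Du Centre');
-- Emulate PIVOT: one MAX(CASE ...) column per tag key of interest.
-- MAX ignores the NULLs produced by non-matching CASE branches.
CREATE VIEW pharmacies AS
SELECT node_id,
       MAX(CASE WHEN key = 'name' THEN value END) AS name,
       MAX(CASE WHEN key = 'amenity' THEN value END) AS amenity
FROM nodes_tags GROUP BY node_id;
""")
rows = cur.execute("SELECT * FROM pharmacies WHERE amenity = 'pharmacy'").fetchall()
conn.close()
```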
# <a id="Appendix"/>
# ## Appendix *[top](#Top)*
# ### References
#
# [OpenStreetData wiki](http://wiki.openstreetmap.org/wiki/Main_Page)<hr>
# [INSEE](https://www.insee.fr/en/accueil) is French National Institute of Statistics and Economic Information. In this project, it is used as *gold* standard.<hr>
# [Pages Jaunes](https://www.pagesjaunes.fr/annuaire/tournefeuille-31/pharmacies) used as another *gold* standard.<hr>
# Validating an XML tree against an [XML Schema](https://www.w3schools.com/xml/schema_intro.asp) can be done with the [lxml](http://lxml.de/validation.html) library. This technique has not been used here, as the structure of the XML is simple enough. Additionally, XML Schema validation requires loading the XML data into memory and may not be suitable for large files like the ones we have here.<hr>
# Get line number in a content handler with SAX parser on [StackOverflow](https://stackoverflow.com/a/15477803/8500344).<hr>
# Display lists as html tables in notebook on [StackOverflow](https://stackoverflow.com/a/42323522/8500344).<hr>
# Fuzzy string matching blog post on [streamhacker.com](https://streamhacker.com/2011/10/31/fuzzy-string-matching-python/).<hr>
# MongoDB from Python: [pymongo](http://api.mongodb.com/python/current/tutorial.html) tutorial.<hr>
# MongoDB [manual](https://docs.mongodb.com/manual/).<hr>
# Geometry algorithms for [inclusion](http://geomalgorithms.com/a03-_inclusion.html) checks.<hr>
# StackOverflow for [MongoDB](https://stackoverflow.com/questions/tagged/mongodb).<hr>
# <NAME> talk at Pycon 2012 on [YouTube](https://www.youtube.com/watch?v=sgHbC6udIqc): How to stop unicode pain?<hr>
# Python SQLite import from csv on [StackOverflow](https://stackoverflow.com/a/2888042/8500344).<hr>
# [Customization](http://nbconvert.readthedocs.io/en/latest/customizing.html) of templates for nbconvert exports to python and html.<hr>
# + tags=["hide_export", "magic"]
#Export to html
# !jupyter nbconvert --to html --template html_minimal.tpl data_wrangling.ipynb
# + tags=["hide_export", "magic"]
#Export to python
# !jupyter nbconvert --to python --template python_minimal.tpl data_wrangling.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
from sklearn.model_selection import train_test_split
from sklearn import datasets

data = datasets.load_iris()
x = data.data
y = data.target
x_train, x_test, y_train, y_test = train_test_split(x, y)

from sklearn.neural_network import MLPClassifier
clf = MLPClassifier(hidden_layer_sizes=(20,), max_iter=3000)
clf.fit(x_train, y_train)
clf.score(x_test, y_test)
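# One possible refinement (my suggestion, not part of the original notebook): MLPs are sensitive to feature scale, so standardizing the inputs inside a pipeline often stabilizes training:

```python
from sklearn import datasets
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

x, y = datasets.load_iris(return_X_y=True)
x_train, x_test, y_train, y_test = train_test_split(x, y, random_state=0)

# Scale features to zero mean / unit variance before feeding the MLP
pipe = make_pipeline(StandardScaler(),
                     MLPClassifier(hidden_layer_sizes=(20,), max_iter=3000,
                                   random_state=0))
pipe.fit(x_train, y_train)
score = pipe.score(x_test, y_test)
```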
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python [conda env:datajoint_interview] *
# language: python
# name: conda-env-datajoint_interview-py
# ---
# ### Understand our data
# ##### Read in the pickle file
import numpy as np
import pandas as pd
import pickle
data = pd.read_pickle('ret1_data.pkl')
type(data)
type(data[0])
len(data)
# The data in the pickle file is a list of $16$ dictionaries, as stated in the description of the data structure.
# +
example = data[15]
print(example.keys())
name = example['subject_name']
print(f'Subject names are of the form "{name}"')
# -
# See how many unique subject names there are.
# +
subjects = set([item['subject_name'] for item in data])
print(f'There are {len(subjects)} unique subjects.')
print(subjects)
# -
# Therefore, we can assume that each subject may have been used in more than one sample, much like in the tutorial where a single mouse could be used for multiple experiment sessions.
# ## Import DataJoint and start creating pipeline
#
# ### Part (1) of the challenge
# +
import datajoint as dj
# dj.config['database.user'] = 'USERNAME_GOES_HERE'
# dj.config['database.password'] = '<PASSWORD>' # replace with real password
dj.config['database.host'] = 'tutorial-db.datajoint.io' # DataJoint tutorial database
dj.config.save_global()
dj.conn()
# -
# The first step is to create a `dj.Manual` table for the mice, as we did in the tutorial. Here the only unique identifier provided is the `subject_name` key, so we will use that as the primary attribute for the new `Subject` table. We can also add attributes, such as `type` for knockout (KO) vs. wild type (WT).
#
# Start by defining a new schema.
schema = dj.schema('rnvoleti_interview')
@schema
class Subject(dj.Manual):
definition = """
# Experimental animals
subject_name : varchar(30)
---
gene : varchar(30)
genotype : enum('homozygous', 'heterozygous')
type : enum('WT', 'KO')
bax='unknown' : enum('-/-', '+/+', '+/-', 'unknown')
"""
Subject()
subjects
# +
# Create a list of dictionaries for all subjects
# Note: I am entering this manually since there are only 5 unique mice,
# but it would probably be better to write code that parses subject_name
subject_list = [{'subject_name': 'KO (chx10)' , 'gene': 'chx10', 'genotype': 'homozygous', 'type': 'KO', 'bax': 'unknown'},
{'subject_name': 'KO (pax6)' , 'gene': 'pax6','genotype': 'homozygous', 'type': 'KO', 'bax': 'unknown'},
{'subject_name': 'KO bax -/- (chx10)' , 'gene': 'chx10', 'genotype': 'homozygous', 'type': 'KO', 'bax': '-/-'},
{'subject_name': 'WT (chx10 het)' , 'gene': 'chx10', 'genotype': 'heterozygous', 'type': 'WT', 'bax': 'unknown'},
{'subject_name': 'WT (pax6 het)' , 'gene': 'pax6', 'genotype': 'heterozygous', 'type': 'WT', 'bax': 'unknown'}
]
# -
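# A sketch of the parsing alternative mentioned in the comment above; the regular expression only covers the five name patterns seen in this dataset:

```python
import re

def parse_subject_name(name):
    """Extract type (WT/KO), gene, genotype, and bax status from names such as
    'KO bax -/- (chx10)' or 'WT (pax6 het)'."""
    m = re.match(r"(?P<type>WT|KO)(?: bax (?P<bax>[+-]/[+-]))?"
                 r" \((?P<gene>\w+)(?P<het> het)?\)", name)
    if m is None:
        raise ValueError('Unrecognized subject name: {!r}'.format(name))
    return {'subject_name': name,
            'type': m.group('type'),
            'gene': m.group('gene'),
            'genotype': 'heterozygous' if m.group('het') else 'homozygous',
            'bax': m.group('bax') or 'unknown'}

parsed = parse_subject_name('KO bax -/- (chx10)')
```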
Subject.insert(subject_list, skip_duplicates=True)
Subject()
# #### Do a test query
Subject() & 'genotype = "heterozygous"'
# #### Create Session Table
#
# This will depend on the `Subject`.
#
# Using the data loaded from the pickle file, we can use the `dj.Manual` tier here to define a Session.
@schema
class Session(dj.Manual):
definition = """
# Experiment session
-> Subject
sample_number : int # sample number
session_date : date # date of session in YYYY-MM-DD format
---
stimulations : longblob # list of dicts for stimulations
"""
# #### Visualize schema
dj.Diagram(schema)
Session()
Session.insert(data, skip_duplicates=True)
Session()
# #### Stimulations Table
#
# The stimulations will each be stored in a `dj.Computed` table that depends on a given `Session`. We use the `dj.Computed` tier because each stimulation entry can be derived entirely from the `stimulations` blob already stored in our database.
#
# **Note:** One Session can have zero or more stimulations
@schema
class Stimulations(dj.Computed):
definition = """
-> Session
stim_id: int
---
fps=0: float # frames per second of movie
movie=null: longblob # numpy array of movie stimulus shaped as (horiz blocks, vert blocks, frames)
n_frames: int # integer number of frames
pixel_size=0: float # pixel size on retina in um/pixel
stim_height=0: int # height of stimulus in pixels
stim_width=0: int # width of the stimulus in pixels
stimulus_onset=0: float # onset of stimulus in seconds from start of recording
x_block_size=0: int # size of horizontal blocks in pixels
y_block_size=0: int # size of vertical blocks in pixels
spikes=null: longblob # list of spike times for recorded neurons, each element is np.ndarray()
"""
def make(self, key):
# load stimulations for a given key as a list
stims = (Session() & key).fetch1('stimulations')
print('Populating stimulation(s) for subject_name={subject_name}, sample_number={sample_number} on session_date={session_date}'.format(**key))
for idx, item in enumerate(stims):
key['stim_id'] = idx
key['fps'] = item['fps']
key['n_frames'] = item['n_frames']
key['movie'] = item['movie']
key['pixel_size'] = item['pixel_size']
key['stim_height'] = item['stim_height']
key['stim_width'] = item['stim_width']
key['stimulus_onset'] = item['stimulus_onset']
key['x_block_size'] = item['x_block_size']
key['y_block_size'] = item['y_block_size']
key['spikes'] = item['spikes']
# Insert key into self
self.insert1(key)
print('\tPopulated stimulation {stim_id}'.format(**key))
Stimulations.populate()
Stimulations()
dj.Diagram(schema)
# ### Spike trains table
#
# The next entity we care about is the spike train associated with each Stimulation. We can use another `dj.Computed` table here.
#
# Each stimulation's `spikes` attribute contains a list of numpy arrays for the spike times (in seconds) for each neuron, of which there can be one or more.
#
# **Note:** The spike times are from the start of data recording, so we want to subtract the `stimulus_onset` attribute from all the spike times to get the spike times after the stimulus.
keys = Stimulations.fetch('KEY')
(Stimulations() & keys[2]).fetch1('spikes')[25].reshape(-1) - (Stimulations() & keys[2]).fetch1('stimulus_onset')
# The above is just an example of what some adjusted spike times look like (corrected for movie stimulus onset) for neuron 25 of a particular stimulation experiment.
(Stimulations() & keys[2]).fetch1('movie').shape
@schema
class SpikeTrain(dj.Computed):
definition = """
-> Stimulations
neuron_id: int
---
spike_times=null: longblob # numpy array of spike times AFTER stimulus onset
"""
def make(self, key):
# load stimulations for a given key as a list
spike_trains = (Stimulations() & key).fetch1('spikes')
print('Populating spike train(s) for subject_name={subject_name}, sample_number={sample_number} on session_date={session_date} for stim_id={stim_id}'.format(**key))
for idx, item in enumerate(spike_trains):
key['neuron_id'] = idx
# subtract onset time from spike times to see the spike times in relation to the stimulus
key['spike_times'] = item.reshape(-1,) - (Stimulations() & key).fetch1('stimulus_onset')
# Insert key into self
self.insert1(key)
print('\tPopulated spike times for neuron {neuron_id}'.format(**key))
SpikeTrain.populate()
SpikeTrain()
# We have $10$ **stimulations** total across our subjects, and $177$ **neurons** in total with recorded spike trains in this data set.
dj.Diagram(schema)
# This concludes **Part 1**.
# ### Part (2) of the challenge - Compute Spatio-Temporal Receptive Fields (STRF)
#
# In order to compute the STRF, we need to make use of the other attributes in the `Stimulations()` table.
Stimulations()
# The computations will then all be done on each `spike_times` entry in `SpikeTrain()`.
SpikeTrain()
# The **Spike-triggered Average (STA)** is defined (on Wikipedia) as:
# > Mathematically, the STA is the average stimulus preceding a spike. To compute the STA, the stimulus in the time window preceding each spike is extracted, and the resulting (spike-triggered) stimuli are averaged.
#
# In our case, we are told our stimulus vector for the $i$'th time bin, $\mathbf{x_i}$, is *white noise*, so the standard STA definition from Wikipedia applies.
#
# This is:
#
# \begin{equation}
# \mathrm{STA} = \frac{1}{n_{sp}} \sum_{i=1}^{T}{y_i \mathbf{x_i}},
# \end{equation}
# where $T$ is the total number of time bins, $y_i$ is the number of spikes in bin $i$, and $n_{sp} = \sum y_i$ is the total number of spikes.
#
# In matrix form, we can let the rows of a matrix $X$ be equal to $\mathbf{x_i}^T$ and a column vector $\mathbf{y}$ whose $i$'th element can be equal to $y_i$. Then we have
#
# \begin{equation}
# \mathrm{STA} = \frac{1}{n_{sp}} X^T \mathbf{y}
# \end{equation}
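# As a concrete illustration, the matrix-form STA can be sketched in NumPy (the shapes and random data below are assumptions for demonstration only, not values from our dataset):

```python
import numpy as np

rng = np.random.default_rng(0)
T, d = 255, 640                    # assumed: number of bins, flattened stimulus size
X = rng.standard_normal((T, d))    # row i is the stimulus window x_i
y = rng.poisson(2.0, size=T)       # y[i] = spike count in bin i

n_sp = y.sum()                     # total number of spikes
sta = (X.T @ y) / n_sp             # shape (d,): the spike-triggered average
```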
#
# In order to do this for our experiment, we need to determine a time bin/window size for each recording, which we can do with some trial and error for now.
#
# We have to consider the following in our experiment:
# * Time bin window size in seconds (a parameter we can set with a `Lookup` table)
# * Segment of the movie to match this time in seconds to determine the size of $\mathbf{x_i}$.
# * We may need to use the `n_frames`, `framerate`, `x_block_size`, `y_block_size`, and the `movie` array to determine these
# * Count the number of spikes in each time bin and total number of spikes to determine $y_i$ and $n_{sp}$
#
#
# #### UPDATE: STRF Computation
#
# This was not a part of my initial submission, but I have subsequently better understood what the STRF is and how it is computed.
#
# We are able to compute the STA for a given neuron using the formulation above. For a given window of stimulus $\mathbf{x_i}$, we can define the STRF computation as follows:
#
# \begin{equation}
# \mathrm{STRF}_{\tau_j} = \frac{1}{n_{sp}} \sum_{i=1}^{T}{y_{i + \tau_j} \mathbf{x_i}}, \quad j = 1, \ldots, K,
# \end{equation}
#
# where $\tau_1, \ldots, \tau_K$ represent various delays in number of bins.
#
# We can say that the STRF is the STA computed to view the neuronal response to a stimulus $\mathbf{x_i}$ at $K$ different time bins following that stimulus.
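# The delayed-STA computation above can be sketched as follows (shapes, delay parameters, and random data here are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)
T, d = 255, 640                   # assumed bin count and flattened stimulus size
tau_step, K = 3, 4                # assumed delay increment (bins) and number of delays
X = rng.standard_normal((T, d))
y = rng.poisson(2.0, size=T)
n_sp = y.sum()

strf = []
for j in range(K):
    delay = j * tau_step
    y_shift = np.roll(y, -delay)  # advance spike counts by `delay` bins
    if delay > 0:
        y_shift[-delay:] = 0      # drop counts that wrapped around the end
    strf.append((X.T @ y_shift) / n_sp)  # STA at this delay
```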
# ### Just playing around trying to understand how to compute STA with a single example with a fixed window size
def count_spikes_in_bin(times, spikes):
    # Count spikes in the half-open interval [times[0], times[-1]).
    # Note: times[-1] is the time of the *last frame* in the window, so spikes
    # arriving between that frame and the start of the next window are missed.
    start = times[0]
    stop = times[-1]
    mask = np.logical_and(spikes >= start, spikes < stop)
    return len(spikes[mask])
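# A vectorized alternative (a sketch, not used below): bin all spikes at once with `np.histogram`, using the window boundary times as bin edges. The edge convention here is an assumption.

```python
import numpy as np

def count_spikes_per_bin(bin_edges, spikes):
    # counts[i] = number of spikes with bin_edges[i] <= t < bin_edges[i+1]
    # (np.histogram includes the right edge only in the final bin)
    counts, _ = np.histogram(spikes, bins=bin_edges)
    return counts
```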
# Look at a random example:
# +
keys = SpikeTrain.fetch('KEY')
window_size = 15
movie = (Stimulations() & keys[2]).fetch1('movie')
movie_dims = movie.shape
n_fr = movie_dims[2]
fr = (Stimulations() & keys[2]).fetch1('fps')
x_size = (Stimulations() & keys[2]).fetch1('x_block_size')
x_dim = (Stimulations() & keys[2]).fetch1('stim_width')
y_size = (Stimulations() & keys[2]).fetch1('y_block_size')
y_dim = (Stimulations() & keys[2]).fetch1('stim_height')
print(f'Movie has dimensions {movie_dims}')
print(f'FPS is {fr}')
print(f'Video dimensions are {x_dim} x {y_dim} with {movie_dims[2]} frames')
print(f'x_block_size is {x_size}')
print(f'y_block_size is {y_size}')
# -
# We want to:
# * Create each $\mathbf{x_i}$ as a window of a particular size (15 frames here) of the movie.
# * In this example, each $\mathbf{x_i}$ should have dimensions (640, 1, 15)
# * Windowed movie array should now have dimensions (640, 1, 255, 15), where 255 is the number of bins in this example
# +
# Padding so I can reshape the index mask to the correct size given a particular frame count.
num_to_pad = window_size - n_fr % window_size # maximum value padding amount
mask_indices = np.pad(np.arange(n_fr), (0, num_to_pad), mode='maximum').reshape(-1, window_size)
spikes = (SpikeTrain() & keys[2]).fetch1('spike_times')
# create array of frame times
times = np.arange(0, n_fr) / fr
fr_times_windows = times[mask_indices]
spike_counts = np.apply_along_axis(lambda x: count_spikes_in_bin(x, spikes), axis=1, arr=fr_times_windows)
print(f'We have {fr_times_windows.shape[0]} bins of size {fr_times_windows.shape[1]}')
print('Sum of spike counts across all bins:', np.sum(spike_counts))
print('Length of original spike train array:', len(spikes))
# -
# Some spikes are missed by this method because each window's count stops at the time of its last frame rather than at the start of the next window; the discrepancy shrinks as the window size increases, based on trial/error experimentation.
mask_indices
# +
windowed_movie = movie[:, :, mask_indices]
windowed_movie[:, :, 0].shape
# -
# ### Intuition
# Above we see the shape for the first window of 15 frames. This is essentially the first stimulus bin $\mathbf{x_1}$, and the next row would be $\mathbf{x_2}$, etc.
#
#
# Compute an average stimulus
average_movie = np.mean(windowed_movie, axis=3)
average_movie.shape
# This is the average movie stimulus for each of the 255 bins, i.e. $\mathbf{x_1}$ to $\mathbf{x_{255}}$
(average_movie @ spike_counts / np.sum(spike_counts)).shape
# I believe this is the **STA for a given neuron** with a defined window size and **no delay, $\tau = 0$**.
#
# We can **shift** the $y_i$ array of spike counts and repeat to get the STA when the neuronal output is delayed by $\tau$ bins.
#
# Need to reshape to actual image size using block size for visualizing.
# +
tau = 3
shifted_counts = np.roll(spike_counts, -1 * tau)
shifted_counts[-1 * tau:] = 0
print(spike_counts)
print('\n')
print(shifted_counts)
# -
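# For visualization, a block-resolution STA can be upsampled back to stimulus pixels with `np.repeat` along each axis (the toy shapes below are assumptions):

```python
import numpy as np

sta_blocks = np.arange(6, dtype=float).reshape(3, 2)  # toy 3x2 block-level STA
x_block_size, y_block_size = 4, 5                     # assumed block sizes in pixels
sta_pixels = np.repeat(sta_blocks, x_block_size, axis=0)  # repeat rows per block
sta_pixels = np.repeat(sta_pixels, y_block_size, axis=1)  # repeat columns per block
```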
# ### Back to defining schema
# Define lookup table for STA parameters
@schema
class STRF_Param(dj.Lookup):
definition = """
strf_param_id: int # unique id for STRF parameter set
---
bin_size=1: float # time window bin size in seconds
    tau=1: int # number of bins to jump for each consecutive delay
"""
# Insert a few parameter sets with different window sizes (in seconds)
STRF_Param.insert([(0, 1, 5)
],
)
STRF_Param.insert([(1, 0.5, 10),
(2, 2.0, 2)
],
)
STRF_Param()
@schema
class STRF_Compute(dj.Computed):
definition = """
-> SpikeTrain
-> STRF_Param
---
n_spikes = null: longblob # array for number of spikes in each bin
srtf_values = null: longblob # STA at each time delay
"""
def make(self, key):
window_seconds = (STRF_Param() & key).fetch1('bin_size') # window/bin size in seconds
tau = (STRF_Param() & key).fetch1('tau') # incremental number of bins to delay
# Next, we want to slide window over movie and spike train
# Start by extracting the relevant parameters
x_blocks = (Stimulations() & key).fetch1('x_block_size')
y_blocks = (Stimulations() & key).fetch1('y_block_size')
fr = (Stimulations() & key).fetch1('fps')
n_fr = (Stimulations() & key).fetch1('n_frames')
x_dim = (Stimulations() & key).fetch1('stim_width')
y_dim = (Stimulations() & key).fetch1('stim_height')
spikes = (SpikeTrain() & key).fetch1('spike_times')
movie = (Stimulations() & key).fetch1('movie')
# create a frame time array (which is found by dividing frame_number / framerate)
frames = np.arange(0, n_fr)
fr_times = frames / fr
# Create sliding window mask for looking at subarrays of this fr_times array
# compute best window size:
window_size = round(window_seconds * fr)
# Padding so I can reshape the index mask to the correct size given a particular frame count.
num_to_pad = window_size - n_fr % window_size # maximum value padding amount
mask_indices = np.pad(np.arange(n_fr), (0, num_to_pad), mode='maximum').reshape(-1, window_size)
# mask frame times with sliding windows as rows
fr_times_windows = fr_times[mask_indices]
# Create spike_counts array, which is essentially the vector y in the definition above (spike counts for each bin)
        # TODO: replace apply_along_axis with a fully vectorized computation; this per-window approach works but is slow.
print('Populating spike counts for subject {subject_name}, sample {sample_number} on {session_date} with stim_id={stim_id} for neuron {neuron_id}'.format(**key))
spike_counts = np.apply_along_axis(lambda x: count_spikes_in_bin(x, spikes), axis=1, arr=fr_times_windows)
tot_spikes = np.sum(spike_counts)
print(f'\tFound {tot_spikes} total spikes with window size of {window_seconds} seconds.')
key['n_spikes'] = spike_counts
# Compute STA for each of the windows with delays incrementing by tau bins
windowed_movie = movie[:, :, mask_indices]
average_movie = np.mean(windowed_movie, axis=3)
shifted_spike_counts = spike_counts.copy()
# Do initial case with no delay
sta_0 = ((average_movie @ spike_counts) / tot_spikes)
sta_0 = np.repeat(sta_0, repeats=x_blocks, axis=0)
sta_0 = np.repeat(sta_0, repeats=y_blocks, axis=1)
srtf = [(0, sta_0)]
        for t in range(tau, len(spike_counts), tau):
            # shift the *original* counts by t bins so each pass reflects a delay of exactly t bins
            # (rolling the previously shifted array would make the delays accumulate incorrectly)
            shifted_spike_counts = np.roll(spike_counts, -t)
            shifted_spike_counts[-t:] = 0  # zero out counts that wrapped around the end
if np.max(shifted_spike_counts) == 0:
break
sta_t = ((average_movie @ shifted_spike_counts) / tot_spikes)
# probably not efficient for storage but doing this temporarily to see if it works
sta_t = np.repeat(sta_t, repeats=x_blocks, axis=0)
sta_t = np.repeat(sta_t, repeats=y_blocks, axis=1)
srtf.append((t, sta_t))
key['srtf_values'] = srtf
self.insert1(key)
dj.Diagram(schema)
STRF_Compute.populate()
STRF_Compute()
# #### Check an example key
# +
keys = STRF_Compute.fetch('KEY')
key = keys[50]
# -
key
# +
images = (STRF_Compute() & key).fetch1('srtf_values')
# images # uncomment if you want to see how this looks structurally
# +
import matplotlib.pyplot as plt
plt.imshow(images[2][1])
plt.xlabel(f'tau = {images[2][0]} bins delay')
# -
# ### Visualization code, part 3
# +
plt.figure(figsize=(20, 5))
plt.subplots_adjust(hspace=0.01)
plt.suptitle("STRF Plots", fontsize=18, y=0.95)
nrows = 1
ncols = len(images) // nrows + (len(images) % nrows > 0)
for n, plot in enumerate(images):
# add a new subplot iteratively
ax = plt.subplot(nrows, ncols, n + 1)
    # display the STA image for this delay on the new subplot axis
plt.imshow(plot[1])
# chart formatting
ax.set_xlabel(f"tau = {plot[0]} bins")
# -
| Develop_pipeline.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Calculating a PSF for Roman Wide Field Instrument (WFI)
#
# This demonstration notebook offers a graphical interface to the basic functionality of WebbPSF-Roman, as well as an example of performing a calculation with the Python scripting interface suited for more advanced calculations.
#
# ## Background and boilerplate
#
# Before we can do a calculation, we must set up the notebook by importing the packages we use and setting up logging output so we can follow the progress of the calculations. The cell below imports WebbPSF and standard scientific Python tools, and configures some options to make plots prettier.
#
# **You shouldn't need to edit anything in the next cell, just go ahead and run it.**
#
# *(Note: click in a cell and use **Shift + Enter** or click the play button <i class="fa-step-forward fa"></i> above to run it)*
# %matplotlib inline
import matplotlib
matplotlib.rcParams['figure.figsize'] = (16, 7) # make the default figure size larger
matplotlib.rcParams['image.interpolation'] = 'nearest' # don't blur/smooth image plots
from matplotlib import pyplot as plt
import webbpsf
import webbpsf.roman
# The next cell tells WebbPSF to log information about what it is doing so that we can watch the progress of the calculation:
webbpsf.setup_logging()
# **Note:** As you explore in this notebook, you may see certain warnings that look like this:
#
# <div class="output_subarea output_stderr" style="font-family: monospace; margin: 1em">Warning: something happened! </div>
#
# For the most part, warnings are safe to ignore. In particular, warnings referencing the matplotlib plotting library or the FITS library in Astropy don't indicate anything that could affect the accuracy of the calculations.
# # Using the WFI model in WebbPSF
#
# Each instrument in WebbPSF is represented as a Python `class`, and the Wide Field Instrument model is in `webbpsf.roman.WFI`. We can instantiate one to work with, in the same way as any of the JWST instruments.
wfi = webbpsf.roman.WFI()
# ## Using the notebook interface
#
# There's a notebook-friendly interface for the Wide Field Instrument PSF model. Bring it up in your notebook by running the following cell, then experiment with the different options, or read on for more explanation.
#
# Note that the calculations will typically take several seconds to run.
webbpsf.show_notebook_interface('wfi')
# ### Calculate PSF
#
# When you click the "Calculate PSF" button, you will see some output as the calculation progresses. When it completes, it will display a plot with four panels (counting left-to-right, top-to-bottom) representing the optical planes in the model:
#
# 1. The entrance pupil transmission in black and white, and the phase ranging from red to blue to show the wavefront error. We include here a plausible estimate for wavefront error due to mirror polishing variations. (This is currently approximated by a map of high-frequency errors in the Hubble primary, since such data is not yet available for the Roman primary mirror.)
# 2. The exit pupil, showing the same data but with a change in coordinate system due to passing through focus. This is the pupil orientation as seen by the WFI looking outwards at the sky.
# 3. The same transmission and phase map with the addition of the phase term due to field-dependent optical aberrations and intermediate instrument optics. (Specifically, this is based on Zernike coefficients derived from the Cycle 5 optical modeling effort at GSFC.)
# 4. The final oversampled detector plane, with log-scaled intensity
#
# Below that, you will see a side-by-side comparison of the oversampled PSF, and the PSF binned down to detector pixels.
#
# Also, a button labeled "Download FITS image from last calculation" will appear below the "Calculate PSF" button. Click that to download the oversampled and detector-pixel-binned images as a multi-extension FITS file. (WebbPSF also offers tools to analyze PSFs within the notebook or your own scripts, which are described in the next section.)
#
# ### Display Optical System
#
# This shows a 3 x 2 grid of plots. The left hand side shows transmission (e.g. pupil or mask shape), and the right side shows optical path difference (which is converted to phase across the pupil). The first row represents the pupil plane at the primary mirror, the second row is the telescope exit pupil as seen by the instrument, and the third row is the notional pupil plane after all the field dependent aberrations have been applied to the wavefront, but before the final propagation to a detector or image plane.
#
# ### Clear Output
#
# The output from the calculation process can be pretty verbose, so this button is here to clear both text output and plots.
# ## Using the Python programming interface
#
# Alternatively, you can configure the WFI instance yourself in Python. A more detailed example is presented in the [WebbPSF-Roman documentation page](https://pythonhosted.org/webbpsf/roman.html), but here we will just show a simple monochromatic calculation at the default field position.
#
# The `wfi.calc_psf()` method returns a [FITS HDUList object](http://docs.astropy.org/en/stable/io/fits/index.html), which you can then write out to a file or analyze further in the notebook.
plt.figure(figsize=(8,10))
mono_psf = wfi.calc_psf(monochromatic=1.2e-6, display=True)
# Now you have the calculation result in the `mono_psf` variable, and can use various utility functions in WebbPSF to analyze it. The FITS object has an extension called `OVERSAMP` with each pixel split according to the default oversampling factor (4), and an extension called `DET_SAMP` with that image binned down to detector pixels.
mono_psf.info()
# Let's plot the PSF in detector pixels:
webbpsf.display_psf(mono_psf, ext='DET_SAMP')
# WebbPSF also includes functions for measuring encircled energy (EE), radial profiles, and centroids (described in the [WebbPSF documentation](http://pythonhosted.org/webbpsf/api_reference.html#functions) and the [POPPY documentation](http://pythonhosted.org/poppy/api.html#functions)). Below we measure the radial profile and encircled energy curve for the monochromatic PSF. (Note that the FWHM is also computed and labeled on the radial profile plot.)
plt.figure(figsize=(8, 6))
webbpsf.display_profiles(mono_psf)
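# For intuition, a radial profile similar to what `webbpsf.display_profiles` plots can be sketched in plain NumPy. The 1-pixel-wide annulus binning below is an assumption for illustration, not necessarily what WebbPSF uses internally.

```python
import numpy as np

def radial_profile(image, center=None):
    # Mean intensity in concentric 1-pixel-wide annuli around `center`.
    ny, nx = image.shape
    if center is None:
        center = ((nx - 1) / 2, (ny - 1) / 2)  # geometric center of the array
    y, x = np.indices(image.shape)
    r = np.hypot(x - center[0], y - center[1]).astype(int).ravel()
    # sum of pixel values per annulus divided by pixel count per annulus
    return np.bincount(r, weights=image.ravel()) / np.bincount(r)
```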
# If you want, the FITS object containing the PSF can be written out to a file and downloaded to your computer. This can be useful if you need it as an input to another tool.
mono_psf.writeto('./mono_psf_1.2um.fits', overwrite=True)
# After you run the previous cell, this link will take you to download the FITS image: <a href="files/mono_psf_1.2um.fits">Download mono_psf_1.2um.fits</a>
#
# How that works is a little tricky: when you write `./mono_psf_1.2um.fits`, you're saying you want to save the file in the current working directory for the *Python* process. If you're working locally, that's just the directory where you started the `jupyter notebook` command. If you're working on a remote server, files saved from the notebook will be available at the URL `files/your_filename.fits` relative to this page.
#
# For example, if you're viewing this notebook at `https://example.com/user/janedoe/notebooks/WebbPSF-Roman_Tutorial.ipynb`, your file will be at `https://example.com/user/janedoe/notebooks/`**files/mono_psf_1.2um.fits**.
#
# # What next?
#
# Keep working in this notebook, if you like! For reference, there's always a [pristine copy of this notebook](https://github.com/mperrin/webbpsf/blob/master/notebooks/WebbPSF-Roman_Tutorial.ipynb) to refer back to in the [WebbPSF GitHub repository](https://github.com/mperrin/webbpsf). If you have not previously used this notebook interface to Python, the Help menu available above has a tutorial and a useful list of keyboard shortcuts.
#
# * **Review the [WebbPSF documentation](https://pythonhosted.org/webbpsf/) and the [POPPY documentation](https://pythonhosted.org/poppy/)**
# * **Report any issues to us on GitHub** —
# WebbPSF and POPPY are developed on GitHub: [mperrin/webbpsf](https://github.com/mperrin/webbpsf) and [mperrin/poppy](https://github.com/mperrin/poppy) respectively.
# The best way to report bugs is through the GitHub issue trackers: [WebbPSF](https://github.com/mperrin/webbpsf/issues) or [POPPY](https://github.com/mperrin/poppy/issues). (We also welcome pull requests from the community, if there's functionality you think should be included!)
# * **Contact us through the STScI helpdesk** — You can always email your questions to <a href="mailto:<EMAIL>"><EMAIL></a>, and our helpdesk people will make sure your request gets to the right person.
# * **Sign up for WebbPSF update announcements** —
# This is entirely optional, but you may wish to sign up to the mailing list <EMAIL>. This is a low-traffic moderated announce-only list, to which we will periodically post announcements of updates to this software. To subscribe, visit the [maillist.stsci.edu](http://maillist.stsci.edu) server
| notebooks/WebbPSF-Roman_Tutorial.ipynb |