Dataset columns (string-length ranges): markdown (0-1.02M), code (0-832k), output (0-1.02M), license (3-36), path (6-265), repo_name (6-127)
Create dummy variables
n = -10
final = general.MWh.tail(-n)          # last 10 observations, held out for the forecast
onlyMWh = pd.DataFrame(general.MWh)
general['Month'] = general.index.month
general['Weekday_Name'] = general.index.day_name()   # .weekday_name was removed in pandas 1.0
dates = general.index

dummies = pd.get_dummies(general['Weekday_Name']).astype(int)
dummies2 = pd.get_dummies(general['Month']).astype(int)
Dum = pd.DataFrame(dummies.join(dummies2))

t = np.arange(0, len(onlyMWh))
Dum["t"] = np.arange(0, len(onlyMWh))
Dum["tiempo"] = np.arange(1, len(onlyMWh) + 1)
Dum["ones"] = np.ones(len(t))
Dum = Dum.set_index('t')

# One dummy column per holiday/special day: 1 on the dates listed in special_days
holidays = ["Dom santo", "NewYear", "Constitucion", "Benito", "Jue santo",
            "Vie santo", "Trabajo", "Madre", "Grito", "virgen", "muertos",
            "Virgen2", "Navidad", "elecciones", "toma", "sab santo", "rev"]
for h in holidays:
    Dum[h] = general.index.isin(special_days[h]).astype(int)

del Dum["Friday"]                      # drop one weekday dummy to avoid collinearity
Dum.drop(Dum.columns[[15]], axis=1, inplace=True)
_____no_output_____
MIT
Efecto Arima en final.ipynb
ramirezdiana/Forecast-with-fourier
Inspect the seasonal decomposition
part = general.MWh.tail(100)
result = seasonal_decompose(part, model='multiplicative')
fig = result.seasonal.plot(figsize=(20, 5))
_____no_output_____
MIT
Efecto Arima en final.ipynb
ramirezdiana/Forecast-with-fourier
Looking at the decomposition, its shape shows that the Fourier terms should be set up as absolute sines and cosines so that they resemble the seasonality of the series. This weekly seasonality, which appears to be fundamental in the data, is added to the dummy variables. Detect the effect of the dummy variables
t = np.arange(1, len(onlyMWh) + 1)
Tiempo = pd.DataFrame(t)
Tiempo["one"] = np.ones(len(onlyMWh))
Tiempo['sen'] = np.abs(np.sin(((2 * np.pi) / 14) * t))
Tiempo['cos'] = np.abs(np.cos(((2 * np.pi) / 14) * t))
Combinacion = kronecker(Dum, Tiempo)

model = LinearRegression()
prediction = regresion_linear(Combinacion[:n], general.MWh.values[:n])

plt.figure(figsize=(10, 5))
plt.plot(onlyMWh.MWh.values[:n], label="Datos")
plt.plot(prediction, label="Predicción")
plt.ylabel("demanda en MWh")
plt.xlabel("días")
plt.legend()
#plt.axis([1630,1650,120000,160000])
plt.show()

comp = comparacion(onlyMWh.MWh.values[:n], prediction)
MAPE = comp.error.mean()
print("MAPE = ", round(MAPE, 4), "%")
_____no_output_____
MIT
Efecto Arima en final.ipynb
ramirezdiana/Forecast-with-fourier
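The point above about absolute sines and cosines can be checked with plain NumPy, independently of the notebook's data: rectifying a sine halves its period, so |sin(2πt/14)| on a daily index repeats every 7 days and acts as a weekly seasonal feature. A minimal sketch (no project code assumed):

```python
import numpy as np

t = np.arange(56)                         # eight weeks of daily time steps
s = np.abs(np.sin(2 * np.pi * t / 14))    # rectified 14-day sine

# taking |sin| halves the period: the feature repeats every 7 days
assert np.allclose(s, np.abs(np.sin(2 * np.pi * (t + 7) / 14)))
```

This is why a 14-day base frequency, once rectified, matches the weekly pattern seen in the decomposition.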
Obtain the error of the dummy-variable fit versus the real data
Tabla = pd.DataFrame(columns=['regresion', 'datos', 'resta'])
Tabla["regresion"] = prediction
Tabla["datos"] = onlyMWh.MWh.values[:n]
Tabla["resta"] = Tabla.datos - Tabla.regresion
plt.plot(Tabla.resta)
plt.show()
_____no_output_____
MIT
Efecto Arima en final.ipynb
ramirezdiana/Forecast-with-fourier
Determine the frequencies to consider in the Fourier series
f, Pxx_den = signal.periodogram(Tabla.resta, 1)
plt.plot(1 / f, Pxx_den)
plt.xlabel('periodo')
plt.ylabel('PSD')
plt.show()

top_50_periods = {}
# indices of the 3rd- to 12th-highest Pxx values (the two largest are skipped)
top50_freq_indices = np.flip(np.argsort(Pxx_den), 0)[2:12]
freqs = f[top50_freq_indices]
power = Pxx_den[top50_freq_indices]
periods = 1 / np.array(freqs)

matrix = pd.DataFrame(columns=["power", "periods"])
matrix.power = power
matrix.periods = periods
print(matrix)
          power      periods
0  8.749328e+09  1911.333333
1  5.225123e+09    45.507937
2  4.883263e+09   819.142857
3  4.626260e+09  1146.800000
4  4.413181e+09    36.522293
5  4.324008e+09   955.666667
6  4.069662e+09   573.400000
7  4.064267e+09   521.272727
8  3.968931e+09    61.000000
9  3.914966e+09    91.015873
MIT
Efecto Arima en final.ipynb
ramirezdiana/Forecast-with-fourier
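As a sanity check of this period-extraction step, `signal.periodogram` recovers a known cycle from synthetic data. This sketch uses a toy series with a planted 7-day cycle, not the demand residuals:

```python
import numpy as np
from scipy import signal

# two years of daily data with a known 7-day cycle plus mild noise
rng = np.random.default_rng(0)
t = np.arange(730)
x = np.sin(2 * np.pi * t / 7) + 0.1 * rng.standard_normal(len(t))

f, pxx = signal.periodogram(x, fs=1)          # fs = 1/day, so f is in cycles/day
best_period = 1 / f[np.argmax(pxx[1:]) + 1]   # skip the zero-frequency bin
print(round(best_period, 2))                  # close to 7 days
```

The same idea underlies the cell above: the largest periodogram peaks of the residual series point at the periods worth adding as Fourier terms.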
Run the regression of the crossed effect of the dummy variables and the absolute sines/cosines at the error frequencies
sencos = pd.DataFrame()
sencos["t"] = np.arange(1, len(onlyMWh) + 1)
for i in matrix.periods:
    sencos["{}_sen".format(i)] = np.abs(np.sin(((2 * np.pi) / i) * t))
    sencos["{}_cos".format(i)] = np.abs(np.cos(((2 * np.pi) / i) * t))
sencos["unos"] = 1
sencos['sen'] = np.abs(np.sin(((2 * np.pi) / 14) * t))
sencos['cos'] = np.abs(np.cos(((2 * np.pi) / 14) * t))
sencos['sen1'] = np.abs(np.sin(((2 * np.pi) / 365) * t))
sencos['cos1'] = np.abs(np.cos(((2 * np.pi) / 365) * t))
sencos['sen2'] = np.abs(np.sin(((2 * np.pi) / 28) * t))
sencos['cos2'] = np.abs(np.cos(((2 * np.pi) / 28) * t))

sencos_test = sencos[n:]
sencos_train = sencos[0:n]
Dum_test = Dum[n:]
Dum_train = Dum[0:n]

Combinacion = kronecker(Dum_train, sencos_train)
model = LinearRegression()
prediction = regresion_linear(Combinacion, general.MWh.values[0:n])
_____no_output_____
MIT
Efecto Arima en final.ipynb
ramirezdiana/Forecast-with-fourier
MAPE of the regression
plt.figure(figsize=(10, 5))
plt.plot(onlyMWh.MWh[0:n].values, label="Datos")
plt.plot(prediction, label="Predicción")
plt.ylabel("demanda en MWh")
plt.xlabel("días")
plt.legend()
plt.show()

#%% obtener mape de regresión
comp = comparacion(onlyMWh.MWh.values[:n], prediction)
MAPE = comp.error.mean()
print("MAPE = ", round(MAPE, 4), "%")
_____no_output_____
MIT
Efecto Arima en final.ipynb
ramirezdiana/Forecast-with-fourier
Plot the regression residuals
Tabla = pd.DataFrame(columns=['regresion', 'datos', 'resta'])
Tabla["regresion"] = prediction
Tabla["datos"] = onlyMWh.MWh[0:n].values
Tabla["resta"] = Tabla.datos - Tabla.regresion
plt.plot(Tabla.resta)
plt.show()

plt.hist(Tabla["resta"], bins=50)
plt.show()

resta = pd.DataFrame(Tabla["resta"])

# statsmodels.tsa.arima_model.ARIMA was removed in statsmodels 0.13;
# on newer versions use statsmodels.tsa.arima.model.ARIMA instead
from statsmodels.tsa.arima_model import ARIMA
mod = ARIMA(resta, order=(1, 0, 4))
results = mod.fit()

plt.plot(resta)
plt.plot(results.fittedvalues, color='red')

T = pd.DataFrame(columns=['regresion', 'datos', 'nuevo'])
T["regresion"] = results.fittedvalues
T["datos"] = resta
T["nuevo"] = T.datos - T.regresion
plt.plot(T.nuevo)
plt.show()

plt.figure(figsize=(10, 5))
plt.plot(onlyMWh.MWh[0:n].values, label="Reales")
plt.plot(prediction + results.fittedvalues, label="Predicción")
#plt.axis([1630,1650,120000,160000])
plt.ylabel("demanda en MWh")
plt.xlabel("días")
plt.legend()
plt.show()

#%% obtener mape de regresión
comp = comparacion(onlyMWh.MWh[0:n].values, prediction + results.fittedvalues)
MAPE = comp.error.mean()
print("MAPE = ", round(MAPE, 4), "%")
_____no_output_____
MIT
Efecto Arima en final.ipynb
ramirezdiana/Forecast-with-fourier
Plot dynamically
extra = results.predict(len(onlyMWh.MWh[0:n]), len(onlyMWh.MWh[0:n]) - n)
extra = extra.iloc[1:]

from sklearn.linear_model import Lasso

Combinaciontest = kronecker(Dum_test, sencos_test)

# Initializing the Lasso regressor with normalization
# (Lasso's `normalize` argument was removed in scikit-learn 1.2;
# on newer versions scale the features beforehand, e.g. with StandardScaler)
lasso_reg = Lasso(normalize=True)

# Fitting the training data to the Lasso regressor
lasso_reg.fit(Combinacion, onlyMWh.MWh[0:n])
coeff = lasso_reg.coef_
#coeff

# Predicting for X_test
y_pred_lass = lasso_reg.predict(Combinaciontest)

coeff = np.sum(abs(lasso_reg.coef_) == 0)
coeff
len(lasso_reg.coef_)

#comb = Combinacion
#comb2 = Combinaciontest
#x = np.where(lasso_reg.coef_ == 0)
#comb = comb.drop(comb.columns[x], axis=1)
#comb2 = comb2.drop(comb2.columns[x], axis=1)
#from sklearn.linear_model import HuberRegressor
#huber = HuberRegressor().fit(comb, onlyMWh.MWh[0:n])
#hubpredict = huber.predict(comb2)
_____no_output_____
MIT
Efecto Arima en final.ipynb
ramirezdiana/Forecast-with-fourier
Everything for the forecast
comp_pronostico = comparacion(final, y_pred_lass + extra.values)
#comp_pronostico = comparacion(final, hubpredict + extra.values)
MAPE = comp_pronostico.error.mean()

plt.figure(figsize=(10, 5))
plt.plot(final, label="Real")
plt.plot(comp_pronostico.prediccion, label="Pronóstico")
plt.ylabel("demanda en MWh")
plt.xlabel("días")
plt.legend()
plt.show()

print("MAPE = ", round(MAPE, 4), "%")
comp_pronostico

end = time.time()
print((end - start) / 60)

# NOTE: `comb` and `comb2` are the Lasso-pruned design matrices built in the
# commented-out block of the previous cell; uncomment it before running this
model = LinearRegression()
model.fit(comb, onlyMWh.MWh[0:n])
prediction = model.predict(comb2)
comp_pronostico = comparacion(final, prediction + extra.values)
MAPE = comp_pronostico.error.mean()

plt.figure(figsize=(10, 5))
plt.plot(final, label="Real")
plt.plot(comp_pronostico.prediccion, label="Pronóstico")
plt.ylabel("demanda en MWh")
plt.xlabel("días")
plt.legend()
plt.show()

print("MAPE = ", round(MAPE, 4), "%")
comp_pronostico

lasso_reg = Lasso(normalize=True)
# Fitting the training data to the Lasso regressor
lasso_reg.fit(comb, onlyMWh.MWh[0:n])
coeff = lasso_reg.coef_
#coeff

# Predicting for X_test
y_pred_lass = lasso_reg.predict(comb2)
comp_pronostico = comparacion(final, y_pred_lass + extra.values)
MAPE = comp_pronostico.error.mean()

plt.figure(figsize=(10, 5))
plt.plot(final, label="Real")
plt.plot(comp_pronostico.prediccion, label="Pronóstico")
plt.ylabel("demanda en MWh")
plt.xlabel("días")
plt.legend()
plt.show()

print("MAPE = ", round(MAPE, 4), "%")
#coeff = lasso_reg.coef_
#coeff
_____no_output_____
MIT
Efecto Arima en final.ipynb
ramirezdiana/Forecast-with-fourier
MADMO, MIPT Phystech School of Applied Mathematics and Informatics, Neural Networks and Deep Learning Lab (DeepHackLab). Homework must be uploaded to the shared repository, into a folder with your name. Homework 1: Python basics and the NumPy package---
import numpy as np
import random
import scipy.stats as sps
_____no_output_____
MIT
Students/Tunitskaya/Hometask_1(1).ipynb
Alken1/Sberbank_ML
Task 1. In the first task you are asked to multiply two square matrices in two ways -- without the ***numpy*** package and with it.
# To generate the matrices we use the random module -- it is used for generating random objects.
# The sample function creates a random sample. It takes a tuple (i, j) as its argument,
# where i is the number of rows and j is the number of columns.
a = np.random.sample((1000, 1000))
b = np.random.sample((1000, 1000))

# Print the rank of each matrix using np.linalg.matrix_rank
# (there is no np.linalg.rank). Use the shape attribute -- what did it print?
# ========
rank_a = np.linalg.matrix_rank(a)
print(rank_a)
print(a.shape)
print(np.linalg.matrix_rank(b))
print(b.shape)
# ========
#print(a)
#print(b)

# Matrix multiplication without NumPy
def mult(a, b):
    rows_a = len(a)
    cols_a = len(a[0])
    rows_b = len(b)
    cols_b = len(b[0])
    if cols_a != rows_b:
        return 'Error'
    c = [[0 for row in range(cols_b)] for col in range(rows_a)]
    for i in range(rows_a):
        for j in range(cols_b):
            for k in range(cols_a):
                c[i][j] += a[i][k] * b[k][j]
    return c  # was commented out, so the function returned None

def np_mult(a, b):
    # Matrix multiplication with NumPy
    return np.dot(a, b)

%%time
# time the function without NumPy
mult(a, b)

%%time
# time the function with NumPy
np_mult(a, b)
Wall time: 46.8 ms
MIT
Students/Tunitskaya/Hometask_1(1).ipynb
Alken1/Sberbank_ML
Task 2. Write a function that, for a given sequence $\{A_i\}_{i=1}^n$, builds the sequence $S_n$, where $S_k = \frac{A_1 + ... + A_k}{k}$. Again, do it both with the **NumPy** library and without it. Compare the speed and explain the result.
# solution using NumPy
def sec_av(A):
    return np.cumsum(A) / list(range(1, len(A) + 1))

# solution without NumPy
def stupid_sec_av(A):
    S = [0 for i in range(len(A))]
    S[0] = A[0]
    for i in range(len(A) - 1):
        S[i + 1] = A[i + 1] + S[i]
    numb = list(range(1, len(A) + 1))
    for i in range(len(A)):
        S[i] = S[i] / numb[i]
    return S

# Define a sequence and check it on your functions.
# The first function should be ~50x faster
A = sps.uniform.rvs(size=10 ** 7)
%time S1 = sec_av(A)
%time S2 = stupid_sec_av(A)

# check correctness:
np.abs(S1 - S2).sum()
Wall time: 1.39 s Wall time: 10.7 s
MIT
Students/Tunitskaya/Hometask_1(1).ipynb
Alken1/Sberbank_ML
Task 3. Given an array $X$, build a new array in which every element with an odd index is replaced by the number $a$ (1 if it is not specified). Every element of the original array with an even index must be cubed and written in reverse order relative to the positions of those elements. The array $X$ itself must remain unchanged. Finally, merge X with the transformed X and print the result in reverse order.
# solution using NumPy
def transformation(X, a=1):
    Y = X.copy()          # the task requires X to stay unchanged
    Y[1::2] = a
    Y[::2] **= 3
    Y[::2] = Y[::2][::-1]
    return Y

# solution without NumPy
def stupid_transformation(X):
    t_odd = []
    t_ev = []
    t_ev_inv = []
    Y = []
    t_odd = int(round(len(X) / 2)) * [1]
    for i in range(0, len(X), 2):
        t_ev = t_ev + [round(X[i] ** 3, 8)]
    for i in range(len(t_ev), 0, -1):
        t_ev_inv = t_ev_inv + [t_ev[i - 1]]   # was `temp_ev`, an undefined name
    for i in range(min(len(t_ev_inv), len(t_odd))):
        Y = Y + [t_ev_inv[i]] + [t_odd[i]]
    if len(t_ev_inv) > len(t_odd):
        Y = Y + [t_ev_inv[-1]]
    if len(t_ev_inv) < len(t_odd):
        Y = Y + [t_odd[-1]]
    return Y

X = sps.uniform.rvs(size=10 ** 7)
# the NumPy version is roughly 20x more efficient here.
# if you decide to print the array without np -- check its size first
%time S1 = transformation(X)
%time S2 = stupid_transformation(X)

# check correctness:
np.abs(S1 - S2).sum()
Wall time: 172 ms
MIT
Students/Tunitskaya/Hometask_1(1).ipynb
Alken1/Sberbank_ML
Why do ***numpy*** methods turn out to be more efficient?
# They are implemented in C: NumPy runs its loops in compiled code over
# contiguous arrays, avoiding the per-element overhead of the Python interpreter
_____no_output_____
MIT
Students/Tunitskaya/Hometask_1(1).ipynb
Alken1/Sberbank_ML
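The one-line answer above can be made concrete with a small timing sketch: the same reduction done element-by-element in Python and in a single NumPy call gives identical results, with the NumPy version typically orders of magnitude faster because its loop runs in compiled C. This is a minimal illustration, not part of the homework tasks:

```python
import time
import numpy as np

a = np.random.sample(10 ** 6)

# pure Python: every element passes through the interpreter loop
t0 = time.perf_counter()
loop_sum = 0.0
for v in a:
    loop_sum += v * v
t_loop = time.perf_counter() - t0

# NumPy: one call, the loop runs in compiled C over a contiguous buffer
t0 = time.perf_counter()
np_sum = float(np.dot(a, a))
t_np = time.perf_counter() - t0

print(f"loop: {t_loop:.3f}s, numpy: {t_np:.5f}s")
```

Both sums agree to floating-point tolerance; only the cost per element differs.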
Additional tasks. The additional tasks assume that you will figure out some ***numpy*** functions on your own in order to solve them. These tasks are optional, but they can improve your grade (the exact rules for counting the extra tasks will be announced later). Task 4*. Given a function of two variables $f(x, y) = sin(x)cos(y)$ (it simply makes a pretty 3D plot), and a plotting function for $f(x, y)$ (`draw_f()`) that takes as input a two-dimensional grid on which the function is evaluated. You need to figure out how to build such grids (hint: it is one specific ***numpy*** function) and pass such a grid to the plotting function.
from matplotlib import pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
%matplotlib inline

def f(x, y):
    '''Function of two variables'''
    return np.sin(x) * np.cos(y)

def draw_f(grid_x, grid_y):
    '''Plots the function f(x, y)'''
    fig = plt.figure(figsize=(10, 8))
    ax = Axes3D(fig)
    ax.plot_surface(grid_x, grid_y, f(grid_x, grid_y), cmap='inferno')
    plt.show()

i = np.arange(-1, 1, 0.02)
grid_x, grid_y = np.meshgrid(i, i)
draw_f(grid_x, grid_y)
_____no_output_____
MIT
Students/Tunitskaya/Hometask_1(1).ipynb
Alken1/Sberbank_ML
Task 5*. Pick any picture and put it in the folder with the code. When loaded, its dimensionality is 3: **(w, h, num_channels)**, where **w** is the width of the picture in pixels, **h** is the height in pixels, and **num_channels** is the number of channels *(R, G, B, alpha)*. You need to "unroll" the picture into a one-dimensional array of size w \* h \* num_channels by writing **one line of code**.
from matplotlib import pyplot as plt
%matplotlib inline

path_to_image = './image.png'
image_array = plt.imread(path_to_image)
plt.imshow(image_array);

flat_image_array = image_array.flatten()   # was `= =`, a SyntaxError
print(len(flat_image_array))
_____no_output_____
MIT
Students/Tunitskaya/Hometask_1(1).ipynb
Alken1/Sberbank_ML
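NumPy offers several equivalent spellings of that one-liner. A toy check with a small synthetic (w, h, num_channels) array (no image file assumed):

```python
import numpy as np

img = np.arange(4 * 3 * 4, dtype=float).reshape(4, 3, 4)  # toy (w, h, num_channels)

flat = img.flatten()         # always returns a copy
flat_view = img.reshape(-1)  # returns a view when the memory layout allows it
flat_ravel = img.ravel()     # copies only if necessary

assert flat.shape == (4 * 3 * 4,)
assert np.array_equal(flat, flat_view) and np.array_equal(flat, flat_ravel)
```

For a read-only unroll, `ravel()` or `reshape(-1)` avoids the copy that `flatten()` always makes.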
Colab-pytorch-image-classification Original repo: [bentrevett/pytorch-image-classification](https://github.com/bentrevett/pytorch-image-classification) [SqueezeNet code](https://github.com/pytorch/vision/blob/master/torchvision/models/squeezenet.py): [pytorch/vision](https://github.com/pytorch/vision) My fork: [styler00dollar/Colab-image-classification](https://github.com/styler00dollar/Colab-image-classification) This colab is a combination of [this Colab](https://colab.research.google.com/github/bentrevett/pytorch-image-classification/blob/master/5_resnet.ipynb) and [my other Colab](https://colab.research.google.com/github/styler00dollar/Colab-image-classification/blob/master/5_(small)_ResNet.ipynb) to do SqueezeNet training.
!nvidia-smi
_____no_output_____
MIT
9_SqueezeNet.ipynb
styler00dollar/Colab-image-classification
DATASET CREATION
#@title Mount Google Drive
from google.colab import drive
drive.mount('/content/drive')
print('Google Drive connected.')

# copy data somehow
!mkdir '/content/classification'
!mkdir '/content/classification/images'
!cp "/content/drive/MyDrive/classification_v2.7z" "/content/classification/images/classification.7z"
%cd /content/classification/images
!7z x "classification.7z"
!rm -rf /content/classification/images/classification.7z

#@title dataset creation
TRAIN_RATIO = 0.90 #@param {type:"number"}

import os
import shutil
from tqdm import tqdm

#data_dir = os.path.join(ROOT, 'CUB_200_2011')
data_dir = '/content/classification' #@param {type:"string"}
images_dir = os.path.join(data_dir, 'images')
train_dir = os.path.join(data_dir, 'train')
test_dir = os.path.join(data_dir, 'test')

if os.path.exists(train_dir):
    shutil.rmtree(train_dir)
if os.path.exists(test_dir):
    shutil.rmtree(test_dir)

os.makedirs(train_dir)
os.makedirs(test_dir)

classes = os.listdir(images_dir)

for c in classes:
    class_dir = os.path.join(images_dir, c)
    images = os.listdir(class_dir)
    n_train = int(len(images) * TRAIN_RATIO)
    train_images = images[:n_train]
    test_images = images[n_train:]
    os.makedirs(os.path.join(train_dir, c), exist_ok=True)
    os.makedirs(os.path.join(test_dir, c), exist_ok=True)
    for image in tqdm(train_images):
        image_src = os.path.join(class_dir, image)
        image_dst = os.path.join(train_dir, c, image)
        shutil.copyfile(image_src, image_dst)
    for image in tqdm(test_images):
        image_src = os.path.join(class_dir, image)
        image_dst = os.path.join(test_dir, c, image)
        shutil.copyfile(image_src, image_dst)
_____no_output_____
MIT
9_SqueezeNet.ipynb
styler00dollar/Colab-image-classification
CALC MEANS & STDS
#@title print means and stds
import torch
import torchvision.transforms as transforms
import torchvision.datasets as datasets
from tqdm import tqdm

train_data = datasets.ImageFolder(root=train_dir, transform=transforms.ToTensor())

means = torch.zeros(3)
stds = torch.zeros(3)

for img, label in tqdm(train_data):
    means += torch.mean(img, dim=(1, 2))
    stds += torch.std(img, dim=(1, 2))

means /= len(train_data)
stds /= len(train_data)

print("\n")
print(f'Calculated means: {means}')
print(f'Calculated stds: {stds}')
_____no_output_____
MIT
9_SqueezeNet.ipynb
styler00dollar/Colab-image-classification
TRAIN
#@title import, seed, transforms, dataloader, functions, plot, model, parameter
%cd /content/
from tqdm import tqdm
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
import torch.optim.lr_scheduler as lr_scheduler
from torch.optim.lr_scheduler import _LRScheduler
import torch.utils.data as data
import torchvision.transforms as transforms
import torchvision.datasets as datasets
import torchvision.models as models
from sklearn import decomposition
from sklearn import manifold
from sklearn.metrics import confusion_matrix
from sklearn.metrics import ConfusionMatrixDisplay
import matplotlib.pyplot as plt
import numpy as np
import copy
from collections import namedtuple
import os
import random
import shutil

SEED = 1234 #@param {type:"number"}
random.seed(SEED)
np.random.seed(SEED)
torch.manual_seed(SEED)
torch.cuda.manual_seed(SEED)
torch.backends.cudnn.deterministic = True

train_dir = '/content/classification/train' #@param {type:"string"}
test_dir = '/content/classification/test' #@param {type:"string"}
pretrained_size = 256 #@param {type:"number"}
pretrained_means = [0.6838, 0.6086, 0.6063] #@param {type:"raw"}
pretrained_stds = [0.2411, 0.2403, 0.2306] #@param {type:"raw"}

# https://github.com/mit-han-lab/data-efficient-gans/blob/master/DiffAugment_pytorch.py
import torch
import torch.nn.functional as F

def DiffAugment(x, policy='', channels_first=True):
    if policy:
        if not channels_first:
            x = x.permute(0, 3, 1, 2)
        for p in policy.split(','):
            for f in AUGMENT_FNS[p]:
                x = f(x)
        if not channels_first:
            x = x.permute(0, 2, 3, 1)
        x = x.contiguous()
    return x

def rand_brightness(x):
    x = x + (torch.rand(x.size(0), 1, 1, 1, dtype=x.dtype, device=x.device) - 0.5)
    return x

def rand_saturation(x):
    x_mean = x.mean(dim=1, keepdim=True)
    x = (x - x_mean) * (torch.rand(x.size(0), 1, 1, 1, dtype=x.dtype, device=x.device) * 2) + x_mean
    return x

def rand_contrast(x):
    x_mean = x.mean(dim=[1, 2, 3], keepdim=True)
    x = (x - x_mean) * (torch.rand(x.size(0), 1, 1, 1, dtype=x.dtype, device=x.device) + 0.5) + x_mean
    return x

def rand_translation(x, ratio=0.125):
    shift_x, shift_y = int(x.size(2) * ratio + 0.5), int(x.size(3) * ratio + 0.5)
    translation_x = torch.randint(-shift_x, shift_x + 1, size=[x.size(0), 1, 1], device=x.device)
    translation_y = torch.randint(-shift_y, shift_y + 1, size=[x.size(0), 1, 1], device=x.device)
    grid_batch, grid_x, grid_y = torch.meshgrid(
        torch.arange(x.size(0), dtype=torch.long, device=x.device),
        torch.arange(x.size(2), dtype=torch.long, device=x.device),
        torch.arange(x.size(3), dtype=torch.long, device=x.device),
    )
    grid_x = torch.clamp(grid_x + translation_x + 1, 0, x.size(2) + 1)
    grid_y = torch.clamp(grid_y + translation_y + 1, 0, x.size(3) + 1)
    x_pad = F.pad(x, [1, 1, 1, 1, 0, 0, 0, 0])
    x = x_pad.permute(0, 2, 3, 1).contiguous()[grid_batch, grid_x, grid_y].permute(0, 3, 1, 2)
    return x

def rand_cutout(x, ratio=0.5):
    cutout_size = int(x.size(2) * ratio + 0.5), int(x.size(3) * ratio + 0.5)
    offset_x = torch.randint(0, x.size(2) + (1 - cutout_size[0] % 2), size=[x.size(0), 1, 1], device=x.device)
    offset_y = torch.randint(0, x.size(3) + (1 - cutout_size[1] % 2), size=[x.size(0), 1, 1], device=x.device)
    grid_batch, grid_x, grid_y = torch.meshgrid(
        torch.arange(x.size(0), dtype=torch.long, device=x.device),
        torch.arange(cutout_size[0], dtype=torch.long, device=x.device),
        torch.arange(cutout_size[1], dtype=torch.long, device=x.device),
    )
    grid_x = torch.clamp(grid_x + offset_x - cutout_size[0] // 2, min=0, max=x.size(2) - 1)
    grid_y = torch.clamp(grid_y + offset_y - cutout_size[1] // 2, min=0, max=x.size(3) - 1)
    mask = torch.ones(x.size(0), x.size(2), x.size(3), dtype=x.dtype, device=x.device)
    mask[grid_batch, grid_x, grid_y] = 0
    x = x * mask.unsqueeze(1)
    return x

AUGMENT_FNS = {
    'color': [rand_brightness, rand_saturation, rand_contrast],
    'translation': [rand_translation],
    'cutout': [rand_cutout],
}

train_transforms = transforms.Compose([
    transforms.Resize(pretrained_size),
    transforms.RandomRotation(5),
    transforms.RandomHorizontalFlip(0.5),
    transforms.RandomCrop(pretrained_size, padding=10),
    transforms.ToTensor(),
    transforms.Normalize(mean=pretrained_means, std=pretrained_stds)
])

test_transforms = transforms.Compose([
    transforms.Resize(pretrained_size),
    transforms.CenterCrop(pretrained_size),
    transforms.ToTensor(),
    transforms.Normalize(mean=pretrained_means, std=pretrained_stds)
])

train_data = datasets.ImageFolder(root=train_dir, transform=train_transforms)
test_data = datasets.ImageFolder(root=test_dir, transform=test_transforms)

VALID_RATIO = 0.90 #@param {type:"number"}
n_train_examples = int(len(train_data) * VALID_RATIO)
n_valid_examples = len(train_data) - n_train_examples
train_data, valid_data = data.random_split(train_data, [n_train_examples, n_valid_examples])
valid_data = copy.deepcopy(valid_data)
valid_data.dataset.transform = test_transforms

print(f'Number of training examples: {len(train_data)}')
print(f'Number of validation examples: {len(valid_data)}')
print(f'Number of testing examples: {len(test_data)}')

BATCH_SIZE = 32 #@param {type:"number"}
train_iterator = data.DataLoader(train_data, shuffle=True, batch_size=BATCH_SIZE)
valid_iterator = data.DataLoader(valid_data, batch_size=BATCH_SIZE)
test_iterator = data.DataLoader(test_data, batch_size=BATCH_SIZE)

def normalize_image(image):
    image_min = image.min()
    image_max = image.max()
    image.clamp_(min=image_min, max=image_max)
    image.add_(-image_min).div_(image_max - image_min + 1e-5)
    return image

def plot_images(images, labels, classes, normalize=True):
    n_images = len(images)
    rows = int(np.sqrt(n_images))
    cols = int(np.sqrt(n_images))
    fig = plt.figure(figsize=(15, 15))
    for i in range(rows * cols):
        ax = fig.add_subplot(rows, cols, i + 1)
        image = images[i]
        if normalize:
            image = normalize_image(image)
        ax.imshow(image.permute(1, 2, 0).cpu().numpy())
        label = classes[labels[i]]
        ax.set_title(label)
        ax.axis('off')

N_IMAGES = 25 #@param {type:"number"}
images, labels = zip(*[(image, label) for image, label in [train_data[i] for i in range(N_IMAGES)]])
classes = test_data.classes
plot_images(images, labels, classes)

def format_label(label):
    label = label.split('.')[-1]
    label = label.replace('_', ' ')
    label = label.title()
    label = label.replace(' ', '')
    return label

test_data.classes = [format_label(c) for c in test_data.classes]
classes = test_data.classes
plot_images(images, labels, classes)

# https://github.com/pytorch/vision/blob/master/torchvision/models/squeezenet.py
import torch
import torch.nn as nn
import torch.nn.init as init
#from .utils import load_state_dict_from_url
from typing import Any

#__all__ = ['SqueezeNet', 'squeezenet1_0', 'squeezenet1_1']

model_urls = {
    '1_0': 'https://download.pytorch.org/models/squeezenet1_0-a815701f.pth',
    '1_1': 'https://download.pytorch.org/models/squeezenet1_1-f364aa15.pth',
}

class Fire(nn.Module):
    def __init__(
        self,
        inplanes: int,
        squeeze_planes: int,
        expand1x1_planes: int,
        expand3x3_planes: int
    ) -> None:
        super(Fire, self).__init__()
        self.inplanes = inplanes
        self.squeeze = nn.Conv2d(inplanes, squeeze_planes, kernel_size=1)
        self.squeeze_activation = nn.ReLU(inplace=True)
        self.expand1x1 = nn.Conv2d(squeeze_planes, expand1x1_planes, kernel_size=1)
        self.expand1x1_activation = nn.ReLU(inplace=True)
        self.expand3x3 = nn.Conv2d(squeeze_planes, expand3x3_planes, kernel_size=3, padding=1)
        self.expand3x3_activation = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.squeeze_activation(self.squeeze(x))
        return torch.cat([
            self.expand1x1_activation(self.expand1x1(x)),
            self.expand3x3_activation(self.expand3x3(x))
        ], 1)

class SqueezeNet(nn.Module):
    def __init__(
        self,
        version: str = '1_0',
        num_classes: int = 1000
    ) -> None:
        super(SqueezeNet, self).__init__()
        self.num_classes = num_classes
        if version == '1_0':
            self.features = nn.Sequential(
                nn.Conv2d(3, 96, kernel_size=7, stride=2),
                nn.ReLU(inplace=True),
                nn.MaxPool2d(kernel_size=3, stride=2, ceil_mode=True),
                Fire(96, 16, 64, 64),
                Fire(128, 16, 64, 64),
                Fire(128, 32, 128, 128),
                nn.MaxPool2d(kernel_size=3, stride=2, ceil_mode=True),
                Fire(256, 32, 128, 128),
                Fire(256, 48, 192, 192),
                Fire(384, 48, 192, 192),
                Fire(384, 64, 256, 256),
                nn.MaxPool2d(kernel_size=3, stride=2, ceil_mode=True),
                Fire(512, 64, 256, 256),
            )
        elif version == '1_1':
            self.features = nn.Sequential(
                nn.Conv2d(3, 64, kernel_size=3, stride=2),
                nn.ReLU(inplace=True),
                nn.MaxPool2d(kernel_size=3, stride=2, ceil_mode=True),
                Fire(64, 16, 64, 64),
                Fire(128, 16, 64, 64),
                nn.MaxPool2d(kernel_size=3, stride=2, ceil_mode=True),
                Fire(128, 32, 128, 128),
                Fire(256, 32, 128, 128),
                nn.MaxPool2d(kernel_size=3, stride=2, ceil_mode=True),
                Fire(256, 48, 192, 192),
                Fire(384, 48, 192, 192),
                Fire(384, 64, 256, 256),
                Fire(512, 64, 256, 256),
            )
        else:
            # FIXME: Is this needed? SqueezeNet should only be called from the
            # FIXME: squeezenet1_x() functions
            # FIXME: This checking is not done for the other models
            raise ValueError("Unsupported SqueezeNet version {version}:"
                             "1_0 or 1_1 expected".format(version=version))

        # Final convolution is initialized differently from the rest
        final_conv = nn.Conv2d(512, self.num_classes, kernel_size=1)
        self.classifier = nn.Sequential(
            nn.Dropout(p=0.5),
            final_conv,
            nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool2d((1, 1))
        )

        for m in self.modules():
            if isinstance(m, nn.Conv2d):
                if m is final_conv:
                    init.normal_(m.weight, mean=0.0, std=0.01)
                else:
                    init.kaiming_uniform_(m.weight)
                if m.bias is not None:
                    init.constant_(m.bias, 0)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.features(x)
        x = self.classifier(x)
        return torch.flatten(x, 1)

def _squeezenet(version: str, pretrained: bool, progress: bool, **kwargs: Any) -> SqueezeNet:
    model = SqueezeNet(version, **kwargs)
    if pretrained:
        arch = 'squeezenet' + version
        state_dict = load_state_dict_from_url(model_urls[arch], progress=progress)
        model.load_state_dict(state_dict)
    return model

def squeezenet1_0(pretrained: bool = False, progress: bool = True, **kwargs: Any) -> SqueezeNet:
    r"""SqueezeNet model architecture from the `"SqueezeNet: AlexNet-level
    accuracy with 50x fewer parameters and <0.5MB model size"
    <https://arxiv.org/abs/1602.07360>`_ paper.

    Args:
        pretrained (bool): If True, returns a model pre-trained on ImageNet
        progress (bool): If True, displays a progress bar of the download to stderr
    """
    return _squeezenet('1_0', pretrained, progress, **kwargs)

def squeezenet1_1(pretrained: bool = False, progress: bool = True, **kwargs: Any) -> SqueezeNet:
    r"""SqueezeNet 1.1 model from the `official SqueezeNet repo
    <https://github.com/DeepScale/SqueezeNet/tree/master/SqueezeNet_v1.1>`_.
    SqueezeNet 1.1 has 2.4x less computation and slightly fewer parameters
    than SqueezeNet 1.0, without sacrificing accuracy.

    Args:
        pretrained (bool): If True, returns a model pre-trained on ImageNet
        progress (bool): If True, displays a progress bar of the download to stderr
    """
    return _squeezenet('1_1', pretrained, progress, **kwargs)

"""
#https://github.com/pytorch/vision/blob/master/torchvision/models/utils.py
try:
    from torch.hub import load_state_dict_from_url
except ImportError:
    from torch.utils.model_zoo import load_url as load_state_dict_from_url
"""

model_train = '1_1' #@param ["1_0", "1_1"] {type:"string"}
if model_train == '1_0':
    model = SqueezeNet(num_classes=len(test_data.classes), version='1_0')
    #state_dict = load_state_dict_from_url(model_urls[model_train], progress=True)
    #model.load_state_dict(state_dict)
elif model_train == '1_1':
    model = SqueezeNet(num_classes=len(test_data.classes), version='1_1')
    #state_dict = load_state_dict_from_url(model_urls[model_train], progress=True)
    #model.load_state_dict(state_dict)

def count_parameters(model):
    return sum(p.numel() for p in model.parameters() if p.requires_grad)

print(f'The model has {count_parameters(model):,} trainable parameters')

START_LR = 1e-7 #@param {type:"number"}
optimizer = optim.Adam(model.parameters(), lr=START_LR)
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
criterion = nn.CrossEntropyLoss()
model = model.to(device)
criterion = criterion.to(device)

class LRFinder:
    def __init__(self, model, optimizer, criterion, device):
        self.optimizer = optimizer
        self.model = model
        self.criterion = criterion
        self.device = device
        torch.save(model.state_dict(), 'init_params.pt')

    def range_test(self, iterator, end_lr=10, num_iter=100, smooth_f=0.05, diverge_th=5):
        lrs = []
        losses = []
        best_loss = float('inf')
        lr_scheduler = ExponentialLR(self.optimizer, end_lr, num_iter)
        iterator = IteratorWrapper(iterator)
        for iteration in tqdm(range(num_iter)):
            loss = self._train_batch(iterator)
            # update lr
            lr_scheduler.step()
            lrs.append(lr_scheduler.get_lr()[0])
            if iteration > 0:
                loss = smooth_f * loss + (1 - smooth_f) * losses[-1]
            if loss < best_loss:
                best_loss = loss
            losses.append(loss)
            if loss > diverge_th * best_loss:
                print("Stopping early, the loss has diverged")
                break
        # reset model to initial parameters
        model.load_state_dict(torch.load('init_params.pt'))
        return lrs, losses

    def _train_batch(self, iterator):
        self.model.train()
        self.optimizer.zero_grad()
        x, y = iterator.get_batch()
        x = x.to(self.device)
        y = y.to(self.device)
        # SqueezeNet's forward returns a single tensor, not the (pred, features)
        # tuple of the original ResNet tutorial, so no tuple unpacking here
        y_pred = self.model(x)
        loss = self.criterion(y_pred, y)
        loss.backward()
        self.optimizer.step()
        return loss.item()

class ExponentialLR(_LRScheduler):
    def __init__(self, optimizer, end_lr, num_iter, last_epoch=-1):
        self.end_lr = end_lr
        self.num_iter = num_iter
        super(ExponentialLR, self).__init__(optimizer, last_epoch)

    def get_lr(self):
        curr_iter = self.last_epoch + 1
        r = curr_iter / self.num_iter
        return [base_lr * (self.end_lr / base_lr) ** r for base_lr in self.base_lrs]

class IteratorWrapper:
    def __init__(self, iterator):
        self.iterator = iterator
        self._iterator = iter(iterator)

    def __next__(self):
        try:
            inputs, labels = next(self._iterator)
        except StopIteration:
            self._iterator = iter(self.iterator)
            inputs, labels, *_ = next(self._iterator)
        return inputs, labels

    def get_batch(self):
        return next(self)

def calculate_topk_accuracy(y_pred, y, k=5):
    with torch.no_grad():
        batch_size = y.shape[0]
        _, top_pred = y_pred.topk(k=1)
        top_pred = top_pred.t()
        correct = top_pred.eq(y.view(1, -1).expand_as(top_pred))
        correct_1 = correct[:1].view(-1).float().sum(0, keepdim=True)
        #correct_k = correct[:k].view(-1).float().sum(0, keepdim=True)
        acc_1 = correct_1 / batch_size
        #acc_k = correct_k / batch_size
        acc_k = 0
    return acc_1, acc_k

def train(model, iterator, optimizer, criterion, scheduler, device, current_epoch):
    epoch_loss = 0
    epoch_acc_1 = 0
    epoch_acc_5 = 0
    model.train()
    policy = 'color,translation,cutout' #@param {type:"string"}
    diffaug_activate = True #@param ["False", "True"] {type:"raw"}
    # https://stackoverflow.com/questions/45465031/printing-text-below-tqdm-progress-bar
    with tqdm(iterator, position=1, bar_format='{desc}') as desc:
        for (x, y) in tqdm(iterator, position=0):
            x = x.to(device)
            y = y.to(device)
            optimizer.zero_grad()
            if diffaug_activate == False:
                y_pred = model(x)
            else:
                y_pred = model(DiffAugment(x, policy=policy))
            loss = criterion(y_pred, y)
            acc_1, acc_5 = calculate_topk_accuracy(y_pred, y)
            loss.backward()
            optimizer.step()
            scheduler.step()
            epoch_loss += loss.item()
            epoch_acc_1 += acc_1.item()
            #epoch_acc_5 += acc_5.item()
        epoch_loss /= len(iterator)
        epoch_acc_1 /= len(iterator)
        desc.set_description(f'Epoch: {current_epoch+1}')
        desc.set_description(f'\tTrain Loss: {epoch_loss:.3f} | Train Acc @1: {epoch_acc_1*100:6.2f}% | ' \
                             f'Train Acc @5: {epoch_acc_5*100:6.2f}%')
    return epoch_loss, epoch_acc_1, epoch_acc_5

def evaluate(model, iterator, criterion, device):
    epoch_loss = 0
    epoch_acc_1 = 0
    epoch_acc_5 = 0
    model.eval()
    with torch.no_grad():
        with tqdm(iterator, position=0, bar_format='{desc}', leave=True) as desc:
            for (x, y) in iterator:
                x = x.to(device)
                y = y.to(device)
                y_pred = model(x)
                loss = criterion(y_pred, y)
                acc_1, acc_5 = calculate_topk_accuracy(y_pred, y)
                epoch_loss += loss.item()
                epoch_acc_1 += acc_1.item()
                #epoch_acc_5 += acc_5.item()
            epoch_loss /= len(iterator)
            epoch_acc_1 /= len(iterator)
            #epoch_acc_5 /= len(iterator)
            desc.set_description(f'\tValid Loss: {epoch_loss:.3f} | Valid Acc @1: {epoch_acc_1*100:6.2f}% | ' \
                                 f'Valid Acc @5: {epoch_acc_5*100:6.2f}%')
    return epoch_loss, epoch_acc_1, epoch_acc_5

def epoch_time(start_time, end_time):
    elapsed_time = end_time - start_time
    elapsed_mins = int(elapsed_time / 60)
    elapsed_secs = int(elapsed_time - (elapsed_mins * 60))
    return elapsed_mins, elapsed_secs

#@title lr_finder
END_LR = 10 #@param {type:"number"}
NUM_ITER = 100 #@param {type:"number"}
lr_finder = LRFinder(model, optimizer, criterion, device)
lrs, losses = lr_finder.range_test(train_iterator, END_LR, NUM_ITER)

#@title plot_lr_finder
def plot_lr_finder(lrs, losses, skip_start=5, skip_end=5):
    if skip_end == 0:
        lrs = lrs[skip_start:]
        losses = losses[skip_start:]
    else:
        lrs = lrs[skip_start:-skip_end]
        losses = losses[skip_start:-skip_end]
    fig = plt.figure(figsize=(16, 8))
    ax = fig.add_subplot(1, 1, 1)
    ax.plot(lrs, losses)
    ax.set_xscale('log')
    ax.set_xlabel('Learning rate')
    ax.set_ylabel('Loss')
    ax.grid(True, 'both', 'x')
    plt.show()

plot_lr_finder(lrs, losses, skip_start=30, skip_end=30)

#@title config
FOUND_LR = 2e-4 #@param {type:"number"}
"""
params = [
    {'params': model.conv1.parameters(), 'lr': FOUND_LR / 10},
    {'params': model.bn1.parameters(), 'lr': FOUND_LR / 10},
    {'params': model.layer1.parameters(), 'lr': FOUND_LR / 8},
    {'params': model.layer2.parameters(), 'lr': FOUND_LR / 6},
    {'params': model.layer3.parameters(), 'lr': FOUND_LR / 4},
    {'params': model.layer4.parameters(), 'lr': FOUND_LR / 2},
    {'params': model.fc.parameters()}
]
"""
#optimizer = optim.Adam(params, lr = FOUND_LR)
optimizer = optim.Adam(model.parameters(), lr=FOUND_LR)

EPOCHS = 100 #@param {type:"number"}
STEPS_PER_EPOCH = len(train_iterator)
TOTAL_STEPS = EPOCHS * STEPS_PER_EPOCH
MAX_LRS = [p['lr'] for p in optimizer.param_groups]
scheduler = lr_scheduler.OneCycleLR(optimizer,
max_lr = MAX_LRS, total_steps = TOTAL_STEPS) #@title training without topk import time best_valid_loss = float('inf') best_valid_accuracy = 0 for epoch in range(EPOCHS): start_time = time.monotonic() train_loss, train_acc_1, train_acc_5 = train(model, train_iterator, optimizer, criterion, scheduler, device, epoch) valid_loss, valid_acc_1, valid_acc_5 = evaluate(model, valid_iterator, criterion, device) if valid_loss < best_valid_loss: best_valid_loss = valid_loss torch.save(model.state_dict(), 'best-validation-loss.pt') if best_valid_accuracy < valid_acc_1: best_valid_accuracy = valid_acc_1 torch.save(model.state_dict(), 'best-validation-accuracy.pt') end_time = time.monotonic() epoch_mins, epoch_secs = epoch_time(start_time, end_time)
MIT
9_SqueezeNet.ipynb
styler00dollar/Colab-image-classification
## TESTING
#@title Calc test loss
model.load_state_dict(torch.load('best-validation-accuracy.pt'))
print("best-validation-accuracy.pt")
test_loss, test_acc_1, test_acc_5 = evaluate(model, test_iterator, criterion, device)
print("-----------------------------")
model.load_state_dict(torch.load('best-validation-loss.pt'))
print("best-validation-loss.pt")
test_loss, test_acc_1, test_acc_5 = evaluate(model, test_iterator, criterion, device)

#@title plot_confusion_matrix
def get_predictions(model, iterator):
    model.eval()
    images = []
    labels = []
    probs = []
    with torch.no_grad():
        for (x, y) in iterator:
            x = x.to(device)
            y_pred = model(x)
            y_prob = F.softmax(y_pred, dim=-1)
            top_pred = y_prob.argmax(1, keepdim=True)
            images.append(x.cpu())
            labels.append(y.cpu())
            probs.append(y_prob.cpu())
    images = torch.cat(images, dim=0)
    labels = torch.cat(labels, dim=0)
    probs = torch.cat(probs, dim=0)
    return images, labels, probs

images, labels, probs = get_predictions(model, test_iterator)
pred_labels = torch.argmax(probs, 1)

def plot_confusion_matrix(labels, pred_labels, classes):
    fig = plt.figure(figsize=(50, 50))
    ax = fig.add_subplot(1, 1, 1)
    cm = confusion_matrix(labels, pred_labels)
    cm = ConfusionMatrixDisplay(cm, display_labels=classes)
    cm.plot(values_format='d', cmap='Blues', ax=ax)
    fig.delaxes(fig.axes[1])  # delete colorbar
    plt.xticks(rotation=90)
    plt.xlabel('Predicted Label', fontsize=50)
    plt.ylabel('True Label', fontsize=50)

plot_confusion_matrix(labels, pred_labels, classes)

#@title plot
corrects = torch.eq(labels, pred_labels)
incorrect_examples = []
for image, label, prob, correct in zip(images, labels, probs, corrects):
    if not correct:
        incorrect_examples.append((image, label, prob))
incorrect_examples.sort(reverse=True, key=lambda x: torch.max(x[2], dim=0).values)

def plot_most_incorrect(incorrect, classes, n_images, normalize=True):
    rows = int(np.sqrt(n_images))
    cols = int(np.sqrt(n_images))
    fig = plt.figure(figsize=(25, 20))
    for i in range(rows * cols):
        ax = fig.add_subplot(rows, cols, i + 1)
        image, true_label, probs = incorrect[i]
        image = image.permute(1, 2, 0)
        true_prob = probs[true_label]
        incorrect_prob, incorrect_label = torch.max(probs, dim=0)
        true_class = classes[true_label]
        incorrect_class = classes[incorrect_label]
        if normalize:
            image = normalize_image(image)
        ax.imshow(image.cpu().numpy())
        ax.set_title(f'true label: {true_class} ({true_prob:.3f})\n' \
                     f'pred label: {incorrect_class} ({incorrect_prob:.3f})')
        ax.axis('off')
    fig.subplots_adjust(hspace=0.4)

N_IMAGES = 36
plot_most_incorrect(incorrect_examples, classes, N_IMAGES)

#@title plot_representations
def get_representations(model, iterator):
    model.eval()
    outputs = []
    intermediates = []
    labels = []
    with torch.no_grad():
        for (x, y) in iterator:
            x = x.to(device)
            y_pred, _ = model(x)
            outputs.append(y_pred.cpu())
            labels.append(y)
    outputs = torch.cat(outputs, dim=0)
    labels = torch.cat(labels, dim=0)
    return outputs, labels

outputs, labels = get_representations(model, train_iterator)

def get_pca(data, n_components=2):
    pca = decomposition.PCA()
    pca.n_components = n_components
    pca_data = pca.fit_transform(data)
    return pca_data

def plot_representations(data, labels, classes, n_images=None):
    if n_images is not None:
        data = data[:n_images]
        labels = labels[:n_images]
    fig = plt.figure(figsize=(15, 15))
    ax = fig.add_subplot(111)
    scatter = ax.scatter(data[:, 0], data[:, 1], c=labels, cmap='hsv')
    #handles, _ = scatter.legend_elements(num=None)
    #legend = plt.legend(handles=handles, labels=classes)

output_pca_data = get_pca(outputs)
plot_representations(output_pca_data, labels, classes)

#@title get_tsne
def get_tsne(data, n_components=2, n_images=None):
    if n_images is not None:
        data = data[:n_images]
    tsne = manifold.TSNE(n_components=n_components, random_state=0)
    tsne_data = tsne.fit_transform(data)
    return tsne_data

output_tsne_data = get_tsne(outputs)
plot_representations(output_tsne_data, labels, classes)

#@title plot_filtered_images
def plot_filtered_images(images, filters, n_filters=None, normalize=True):
    images = torch.cat([i.unsqueeze(0) for i in images], dim=0).cpu()
    filters = filters.cpu()
    if n_filters is not None:
        filters = filters[:n_filters]
    n_images = images.shape[0]
    n_filters = filters.shape[0]
    filtered_images = F.conv2d(images, filters)
    fig = plt.figure(figsize=(30, 30))
    for i in range(n_images):
        image = images[i]
        if normalize:
            image = normalize_image(image)
        ax = fig.add_subplot(n_images, n_filters + 1, i + 1 + (i * n_filters))
        ax.imshow(image.permute(1, 2, 0).numpy())
        ax.set_title('Original')
        ax.axis('off')
        for j in range(n_filters):
            image = filtered_images[i][j]
            if normalize:
                image = normalize_image(image)
            ax = fig.add_subplot(n_images, n_filters + 1, i + 1 + (i * n_filters) + j + 1)
            ax.imshow(image.numpy(), cmap='bone')
            ax.set_title(f'Filter {j+1}')
            ax.axis('off')
    fig.subplots_adjust(hspace=-0.7)

N_IMAGES = 5
N_FILTERS = 7
images = [image for image, label in [train_data[i] for i in range(N_IMAGES)]]
filters = model.conv1.weight.data
plot_filtered_images(images, filters, N_FILTERS)

#@title plot_filters
#filters = model.conv1.weight.data
def plot_filters(filters, normalize=True):
    filters = filters.cpu()
    n_filters = filters.shape[0]
    rows = int(np.sqrt(n_filters))
    cols = int(np.sqrt(n_filters))
    fig = plt.figure(figsize=(30, 15))
    for i in range(rows * cols):
        image = filters[i]
        if normalize:
            image = normalize_image(image)
        ax = fig.add_subplot(rows, cols, i + 1)
        ax.imshow(image.permute(1, 2, 0))
        ax.axis('off')
    fig.subplots_adjust(wspace=-0.9)

plot_filters(filters)
# From Variables to Classes

## A short Introduction

Python - as any programming language - has many extensions and libraries at its disposal. Basically, there are libraries for everything. But what are **libraries**? **Libraries** are collections of methods (_small pieces of code where you put sth in and get sth else out_) which you can use to analyse your data, visualise your data, run models ... do anything you like. As said, methods usually take _something_ as input. That _something_ is usually a **variable**. In the following, we will work our way from **variables** to **libraries**.

## Variables

Variables are among the simplest types of objects in a programming language. An [object](https://en.wikipedia.org/wiki/Object_(computer_science)) is a value stored in the memory of your computer, marked by a specific identifier. Variables can have different types, such as [strings, numbers, and booleans](https://www.learnpython.org/en/Variables_and_Types). Unlike in other programming languages, you do not need to declare the type of a variable, as variables are handled as objects in Python.

```python
x = 4.2             # floating point number
y = 'Hello World!'  # string
z = True            # boolean
```
x = 4.2
print(type(x))
y = 'Hello World!'
print(type(y))
z = True
print(type(z))
<class 'float'> <class 'str'> <class 'bool'>
MIT
00_Variables_to_Classes.ipynb
darius74/geothermics
We can apply operations (normal arithmetic operations) to variables to get the results we want. With numbers, you can add, subtract, multiply, and divide - basically taking the values from the memory assigned to the variable names and performing calculations. Let's have a look at operations with numbers and strings. We leave booleans aside for the moment and will simply add the variables below.

```python
n1 = 7
n2 = 42
s1 = 'Looking good, '
s2 = 'you are.'
```
n1 = 7
n2 = 42
s1 = 'Looking good, '
s2 = 'you are.'

first_sum = n1 + n2
print(first_sum)

first_conc = s1 + s2
print(first_conc)
49 Looking good, you are.
Variables can be more than just a number. If you think of an Excel spreadsheet, a variable can be the content of a single cell, or multiple cells can be combined in one variable (e.g. one column of an Excel table). So let's create a list - _a collection of variables_ - from `x`, `n1`, and `n2`. Lists in Python are created using `[ ]`. Now, if you want to calculate the sum of this list, it is really exhausting to sum up every item of this list manually.

```python
first_list = [x, n1, n2]
# a sum of a list could look like
second_sum = some_list[0] + some_list[1] + ... + some_list[n]
# where n is the index of the last item of the list, e.g. 2 for first_list.
```

Actually, writing the second sum like this is the same as before. It would be great if this step of calculating the sum could be reused many times without writing it out. And this is what functions are for. For example, there already exists a sum function:

```python
sum(first_list)
```
first_list = [x, n1, n2]
second_sum = first_list[0] + first_list[1] + first_list[2]
print('manual sum {}'.format(second_sum))

# This can also be done with a function
print('sum function {}'.format(sum(first_list)))
manual sum 53.2 sum function 53.2
## Functions

The `sum()` method we used above is a **function**. Functions (later we will call them methods) are pieces of code which take an input, perform some kind of operation, and (_optionally_) return an output. In Python, functions are written like:

```python
def func(input):
    """ Description of the function's content
        called the function header """
    # some kind of operation on input - called the function body
    return output
```

As an example, we write a `sumup` function which sums up a list.
def sumup(inp):
    """
    input: inp - list/array with floating point or integer numbers
    return: sumd - scalar value of the summed up list
    """
    val = 0
    for i in inp:
        val = val + i
    return val

# let's compare the implemented standard sum function with the new sumup function
sum1 = sum(first_list)
sum2 = sumup(first_list)
print("The python sum function yields {}, \nand our sumup function yields {}.".format(*(sum1, sum2)))

# summing up the numbers from 1 to 100
import numpy as np
ar_2_sum = np.linspace(1, 100, 100, dtype='i')
print("the sum of the array is: {}".format(sumup(ar_2_sum)))
the sum of the array is: 5050
As we see above, functions are quite practical and save a lot of time. Further, they help structure your code. Some functions are directly available in Python without any libraries or other external software. In the example above, however, you might have noticed that we `import`ed a library called `numpy`. In those libraries, functions are merged into one package, with the advantage that you don't need to import each single function at a time. Imagine you move and have to pack all your belongings. You can think of libraries as packing things with a similar purpose in the same box (= library).

## Functions to Methods as part of classes

When we talk about functions in the environment of classes, we usually call them methods. But what are **classes**? [Classes](https://docs.python.org/3/tutorial/classes.html) are ways to bundle functionality together - logically, functionality with a similar purpose (or another kind of similarity). One example could be: think of **apples**. Apples are now a class. You can apply methods to this class, such as `eat()` or `cut()`. Or more sophisticated methods, including various recipes using apples comprised in a cookbook. The `eat()` method is straightforward. But the `cut()` method may be more interesting, since there are various ways to cut an apple. Let's assume there are two apples to be cut differently. In Python, once you have assigned a class to a variable, you have created an **instance** of that class. Then, methods are applied to that instance by using a `.` notation.

```python
Golden_Delicious = apple()
Yoya = apple()
Golden_Delicious.cut(4)
Yoya.cut(8)
```

The two apples Golden Delicious and Yoya are _instances_ of the class apple - real _incarnations_ of the abstract concept _apple_. The Golden Delicious is cut into 4 pieces, while the Yoya is cut into 8 pieces. This is similar to more complex libraries, such as `scikit-learn`.
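The apple example above can be made runnable with a minimal class definition. This is a hypothetical sketch: the attribute name `pieces` and the method bodies are our own invention, chosen only to illustrate instances and methods.

```python
class apple:
    """Toy class for the example above (hypothetical sketch)."""
    def __init__(self):
        self.pieces = 1  # a whole apple to begin with

    def cut(self, n):
        """Cut the apple into n pieces."""
        self.pieces = n

    def eat(self):
        """Eat the apple - no pieces left."""
        self.pieces = 0

# two instances of the class apple, cut differently
Golden_Delicious = apple()
Yoya = apple()
Golden_Delicious.cut(4)
Yoya.cut(8)
print(Golden_Delicious.pieces, Yoya.pieces)
```

Note that calling `cut()` on one instance does not affect the other: each instance keeps its own state.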
In one exercise, you used the command:

```python
from sklearn.cluster import KMeans
```

which simply imports the **class** `KMeans` from the library part `sklearn.cluster`. `KMeans` comprises several methods for clustering, which you can use by calling them, similar to the apple example before. For this, you need to create an _instance_ of the `KMeans` class.

```python
...
kmeans_inst = KMeans(n_clusters=n_clusters)  # first we create the instance of the KMeans class called kmeans_inst
kmeans_inst.fit(data)                        # then we apply a method to the instance kmeans_inst
...
```

An example:
# here we just create the data for clustering
from sklearn.datasets import make_blobs  # 'sklearn.datasets.samples_generator' is deprecated
import matplotlib.pyplot as plt
%matplotlib inline

X, y = make_blobs(n_samples=100, centers=3, cluster_std=0.5, random_state=0)
plt.scatter(X[:, 0], X[:, 1], s=70)

# now we create an instance of the KMeans class
from sklearn.cluster import KMeans
nr_of_clusters = 3  # because we see 3 clusters in the plot above
kmeans_inst = KMeans(n_clusters=nr_of_clusters)  # create the instance kmeans_inst
kmeans_inst.fit(X)  # apply a method to the instance
y_predict = kmeans_inst.predict(X)  # apply another method to the instance and save it in another variable

# lets plot the predicted cluster centers colored in the cluster color
plt.scatter(X[:, 0], X[:, 1], c=y_predict, s=50, cmap='Accent')
centers = kmeans_inst.cluster_centers_  # apply the method to find the new centers of the determined clusters
plt.scatter(centers[:, 0], centers[:, 1], c='red', s=200, alpha=0.6)  # plot the cluster centers
# Implementing LSH algorithm

## 0. Dataset

### 0.1 Import 20-news dataset
newsgroup_dataset = datasets.fetch_20newsgroups(data_home='./dataset/', subset='train',
                                                remove=('headers', 'footers', 'quotes'),
                                                download_if_missing=True)
raw_documents = newsgroup_dataset['data'][:]
len(raw_documents)
raw_documents[0]
MIT
AI506/hw1/hw1-20194331/hw1_problem.ipynb
sungnyun/AI-assignments
### 0.2 Preprocess the documents
K = 5  # number of word tokens to shingle

def preprocess(documents):
    processed_words = defaultdict(list)
    cnt = 0
    for doc in documents:
        # first, filter out some unnecessary symbols like punctuation
        doc = re.sub('\/|\-|\'|\@|\%|\$|\#|\,|\(|\)|\}|\"|\{|\?|\.|\!|\;|\:', '', doc)
        # second, split the document into the words
        for word in doc.split():
            # third, let word be lower-case
            if word.isalpha():
                processed_words[cnt].append(word.lower())
        # fourth, filter out the articles that have less than k shingles
        if len(processed_words[cnt]) < K:
            continue
        else:
            processed_words[cnt] = ' '.join(processed_words[cnt])
            cnt += 1
    return list(processed_words.values())

documents = preprocess(raw_documents)
del raw_documents
len(documents)
documents[0]
## 1. Shingling
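Before diving into the implementation, here is a tiny illustration of what a k-shingle is (the sentence is made up; as in the assignment below, each shingle is a tuple of k consecutive word tokens):

```python
# each k-shingle is a tuple of k consecutive word tokens
k = 5
words = "the quick brown fox jumps over the lazy dog".split()
demo_shingles = {tuple(words[i:i + k]) for i in range(len(words) - k + 1)}
print(sorted(demo_shingles))  # a 9-word sentence yields 9 - 5 + 1 = 5 shingles
```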
########################################################################################################################
# Programming 1 [15pt]
# In this section, you will implement the shingling algorithm to convert the documents into the characteristic matrix.
# However, since storing the whole characteristic matrix in the form of a dense matrix is expensive in terms of space,
# your implementation should store the characteristic matrix in the form of a dictionary.
#
# i) get all the unique shingles from the documents [10pt]
# ii) create the dictionary that maps each document to the list of shingles [5pt]
#
# Note that shingling is divided into 2 steps just for the readability of the algorithm
########################################################################################################################
### 1.1 Get Shingles from the documents
def get_shingles(documents):
    ######################################################################################
    # Programming 1.1 [10pt]
    # Implement the 'get_shingles' function to get k-shingles from the preprocessed
    # documents. You should especially take care of your algorithm's computational
    # efficiency.
    #
    # Parameters:
    #   documents (dict)
    #
    # Returns:
    #   shingles (set) set of tuples where each element is a k-shingle
    #   ex) shingles = {('its', 'hard', 'to', 'say', 'whether'),
    #                   ('known', 'bugs', 'in', 'the', 'warning'), ...}
    ######################################################################################
    shingles = set()
    for doc in documents:
        doc_split = doc.split()
        for i in range(len(doc_split) - (K - 1)):
            shingles.add(tuple(doc_split[i:i + K]))
    return shingles

start = time.time()
shingles = get_shingles(documents)
end = time.time()

# Check whether your implementation is correct [5pt]
if len(shingles) == 1766049:
    pass_test1_1_1 = True
    print('Test1 passed')

# Check whether your implementation is efficient enough [5pt]
# With 4-lines of my implementations, it took 4.8 seconds with i7-8700 cpu
if (end - start) < 20:
    pass_test1_1_2 = True
    print('Test2 passed')
print(end - start)
1.0299334526062012
### 1.2 Build document to shingles dictionary
def build_doc_to_shingle_dictionary(documents, shingles):
    ################################################################################################################################
    # Programming 1.2 [5pt]
    # Implement the 'build_doc_to_shingle_dictionary' function to convert documents into a shingle dictionary
    # using documents & shingles. You need to construct and utilize a shingle2idx dictionary that maps
    # each shingle to a unique integer index.
    #
    # Parameters:
    #   documents (dict)
    #   shingles (set)
    #
    # Returns:
    #   doc_to_shingles (dict)
    #     key: index of the documents
    #     value: list of the shingle indexes
    #   ex) doc_to_shingles = {0: [1705196, 422880, 491967, ...],
    #                          1: [863922, 1381606, 1524066, ...],
    #                          ...}
    ################################################################################################################################
    doc_to_shingles = {}
    shingle2idx = {}
    for idx, shingle in enumerate(shingles):
        shingle2idx[shingle] = idx
    for idx, doc in enumerate(documents):
        shingle_list = [shingle2idx[s] for s in get_shingles([doc])]
        doc_to_shingles[idx] = shingle_list
    return doc_to_shingles

doc_to_shingles = build_doc_to_shingle_dictionary(documents, shingles)

# Check whether your implementation is correct [5pt]
if len(doc_to_shingles) == 10882 and len(doc_to_shingles[0]) == 84:
    pass_test1_2 = True
    print('Test passed')
Test passed
## 2. Min-Hashing
############################################################################################################################
# Programming 2 [25pt]
# In this section, you will implement the min-hashing algorithm to convert the characteristic matrix into the signatures.
#
# i) implement the jaccard-similarity algorithm [5pt]
# ii) implement the min-hash algorithm to create the signatures for the documents [20pt]
############################################################################################################################
### 2.1 Generate Prime numbers for Universal Hashing
def is_prime(n):
    for i in range(2, int(np.sqrt(n)) + 1):
        if not n % i:
            return False
    return True

def generate_prime_numbers(M, N):
    # this function generates M prime numbers where each prime number is greater than N
    primes = []
    cnt = 0
    n = N + 1
    while cnt < M:
        if is_prime(n):
            primes.append(n)
            cnt += 1
        n += 1
    return primes

# Test prime number generation
generate_prime_numbers(M=3, N=3)
### 2.2 Jaccard Similarity
def jaccard_similarity(s1, s2):
    ##################################################################################
    # Programming 2.2 [5pt]
    # Implement the jaccard similarity algorithm to get the similarity of two sets
    #
    # Parameters
    #   s1 (set)
    #   s2 (set)
    # Returns
    #   similarity (float)
    ##################################################################################
    similarity = len(s1 & s2) / len(s1 | s2)
    return similarity

s1 = {1, 3, 4}
s2 = {3, 4, 6}
if abs(jaccard_similarity(s1, s2) - 0.5) < 1e-3:  # abs() added: a negative difference would otherwise always pass
    pass_test2_1 = True
    print('Test passed')
Test passed
### 2.3 Min Hash
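The implementation below relies on a classical property: under a random permutation of the shingle universe, the probability that two sets share the same minimum-ranked element equals their Jaccard similarity. A small self-contained simulation of that property (all names and numbers are ours, not part of the assignment):

```python
import random

random.seed(0)
universe = list(range(500))
set1 = set(random.sample(universe, 200))
set2 = set(random.sample(universe, 200))
true_sim = len(set1 & set2) / len(set1 | set2)

agree = 0
n_perm = 1000
for _ in range(n_perm):
    perm = universe[:]
    random.shuffle(perm)
    rank = {v: i for i, v in enumerate(perm)}  # position of each element in the permutation
    # the min-hash of a set is its element with the smallest rank
    if min(set1, key=rank.get) == min(set2, key=rank.get):
        agree += 1

print(true_sim, agree / n_perm)  # the two values should be close
```

The assignment code below approximates random permutations with universal hash functions instead of explicit shuffles, which is far cheaper for a universe of ~1.8M shingles.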
M = 100  # Number of Hash functions to use
N = len(shingles)

# First we will create M universal hashing functions
# You can also modify or implement your own hash functions for implementing min_hash function
class Hash():
    def __init__(self, M, N):
        self.M = M
        self.N = N
        self.p = generate_prime_numbers(M, N)
        self.a = np.random.choice(9999, M)
        self.b = np.random.choice(9999, M)

    def __call__(self, x):
        return np.mod(np.mod((self.a * x + self.b), self.p), self.N)

    def __len__(self):
        return self.M

#primes = generate_prime_numbers(M, N)
hash_functions = Hash(M, N)

def min_hash(doc_to_shingles, hash_functions):
    ###########################################################################################
    # Programming 2.3 [20pt]
    # Implement the min-hash algorithm to create the signatures for the documents.
    # It would take about ~10 minutes to finish the computation,
    # while it would take ~20 seconds if you parallelize your hash functions.
    #
    # Parameters
    #   doc_to_shingles: (dict) dictionary that maps each document to the list of shingles
    #   hash_functions: [list] list of hash functions
    # Returns
    #   signatures (np.array) numpy array of size (M, C) where C is the number of documents
    ###########################################################################################
    C = len(doc_to_shingles)
    M = len(hash_functions)
    signatures = np.array(np.ones((M, C)) * 999999999999, dtype=np.int64)  # np.int is deprecated; use np.int64
    for doc_id in range(C):
        shingles = doc_to_shingles[doc_id]
        for shingle in shingles:
            hash_shingle = hash_functions(shingle)
            signatures[:, doc_id] = np.where(hash_shingle < signatures[:, doc_id], hash_shingle, signatures[:, doc_id])
    return signatures

def compare(signatures, doc_to_shingles, trials=10000):
    M, C = signatures.shape
    diff_list = []
    for t in tqdm(range(trials)):
        doc1, doc2 = np.random.choice(C, 2, replace=False)
        shingle1, shingle2 = set(doc_to_shingles[doc1]), set(doc_to_shingles[doc2])
        sig1, sig2 = signatures[:, doc1], signatures[:, doc2]
        true_sim = jaccard_similarity(shingle1, shingle2)
        approx_sim = sum(np.equal(sig1, sig2)) / M
        diff_list.append(abs(true_sim - approx_sim))
    return diff_list

start = time.time()
signatures = min_hash(doc_to_shingles, hash_functions)
end = time.time()
diff_list = compare(signatures, doc_to_shingles)

# Check whether your implementation is correct [20pt]
# Average difference of document's jaccard similarity between characteristic matrix and signatures should be at most 1%
# With 10 random seeds, difference was around 1e-5 ~ 1e-6%
if np.mean(diff_list) < 0.01:
    pass_test2_2 = True
    print('Test passed')
100%|██████████| 10000/10000 [00:04<00:00, 2437.12it/s]
### 2.4 Qualitative Analysis
print('Document 3542')
print(documents[3542])
print('-------------')
print('Document 8033')
print(documents[8033])
print('-------------')
print('true jaccard similarity:', jaccard_similarity(set(doc_to_shingles[3542]), set(doc_to_shingles[8033])))
print('approx jaccard similarity:', sum(np.equal(signatures[:, 3542], signatures[:, 8033])) / M)
print('Do you think signature well reflects the characteristic matrix?')
Document 3542 i have one complaint for the cameramen doing the jerseypitt series show the shots not the hits on more than one occassion the camera zoomed in on a check along the boards while the puck was in the slot they panned back to show the rebound maybe moms camera people were a little more experienced ------------- Document 8033 i have one complaint for the cameramen doing the jerseypitt series show the shots not the hits on more than one occassion the camera zoomed in on a check along the boards while the puck was in the slot they panned back to show the rebound maybe moms camera people were a little more experienced joseph stiehm exactly that is my biggest complaint about the coverage so far follow that damn puck ravi shah ------------- true jaccard similarity: 0.7285714285714285 approx jaccard similarity: 0.74 Do you think signature well reflects the characteristic matrix?
## 3. Locality Sensitive Hashing
########################################################################################################################
# Programming 3 [35pt]
# In this section, you will implement the Min-Hash based Locality Sensitive Hashing algorithm to convert signatures
# into the similar document pair candidates.
# Finally, we will test our results based on the precision, recall and F1 score.
#
# 1) get the similar document pair candidates [20pt]
# 2) calculate precision, recall, and f1 score [10pt]
########################################################################################################################
### 3.1 Min-Hash based LSH
def lsh(signatures, b, r):
    #########################################################################################################
    # Programming 3.1 [20pt]
    # Implement the min-hash based LSH algorithm to find the candidate pairs of the similar documents.
    # In the implementation, use python's dictionary to make your hash table,
    # where each column is hashed into a bucket.
    # Convert each column vector (within a band) into a tuple and use it as a key of the dictionary.
    #
    # Parameters
    #   signatures: (np.array) numpy array of size (M, C) where
    #               M is the number of min-hash functions, C is the number of documents
    #   b: (int) the number of bands
    #   r: (int) the number of rows per each band
    #
    # Requirements
    #   1) M should be equivalent to b * r
    #
    # Returns
    #   candidatePairs (Set[Tuple[int, int]]) set of the pairs of indexes of candidate document pairs
    #########################################################################################################
    M = signatures.shape[0]  # The number of min-hash functions
    C = signatures.shape[1]  # The number of documents
    assert M == b * r

    candidatePairs = set()
    # TODO: Write down your code here
    for num_b in range(b):
        bucket = {}
        bands = signatures[num_b * r:(num_b + 1) * r]
        for col in range(C):
            key = tuple(bands[:, col])
            if key in bucket:
                bucket[key].append(col)
            else:
                bucket[key] = [col]
        for value in bucket.values():
            if len(value) >= 2:
                combi = combinations(value, 2)
                candidatePairs.update(list(combi))
    #import ipdb; ipdb.set_trace()
    ### Implementation End ###
    return candidatePairs

# You can test your implementation here
b = 10
n = 0
tmpPairs = list(lsh(signatures, b, M // b))
print(f"b={b}")
print(f"# of candidate pairs = {len(tmpPairs)}")
samplePair = tmpPairs[n]
shingle1, shingle2 = set(doc_to_shingles[samplePair[0]]), set(doc_to_shingles[samplePair[1]])
print(f"{n}th sample pair: {samplePair}")
print(f"Jaccard similarity: {jaccard_similarity(shingle1, shingle2)}")
print('-------------')
print(documents[samplePair[0]])
print('-------------')
print(documents[samplePair[1]])
print('-------------')
b=10
# of candidate pairs = 162
0th sample pair: (1658, 5780)
Jaccard similarity: 0.8108108108108109
-------------
from paynecrldeccom andrew payne messageid organization dec cambridge research lab date tue apr gmt does anyone know if a source for the modem chips as used in the baycom and my pmp modems ideally something that is geared toward hobbyists small quantity mail order etc for years weve been buying them from a distributor marshall by the hundreds for pmp kits but orders have dropped to the point where we can no longer afford to offer this service and all of the distributors ive checked have some crazy minimum order or so id like to find a source for those still interested in building pmp kits any suggestions andrew c payne dec cambridge research lab
-------------
does anyone know if a source for the modem chips as used in the baycom and my pmp modems ideally something that is geared toward hobbyists small quantity mail order etc for years weve been buying them from a distributor marshall by the hundreds for pmp kits but orders have dropped to the point where we can no longer afford to offer this service and all of the distributors ive checked have some crazy minimum order or so id like to find a source for those still interested in building pmp kits any suggestions
-------------
MIT
AI506/hw1/hw1-20194331/hw1_problem.ipynb
sungnyun/AI-assignments
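As a quick sanity check of the banding idea, independent of the assignment data, here is a minimal sketch on a tiny hand-made signature matrix (the array values and shapes are invented for illustration):

```python
import numpy as np
from itertools import combinations

# Toy signature matrix: M=4 min-hash values for C=4 documents.
# Documents 0 and 2 have identical signatures, so with b=2 bands
# (r=2 rows each) they must collide in every band.
signatures = np.array([
    [1, 7, 1, 3],
    [2, 5, 2, 3],
    [9, 9, 9, 4],
    [0, 1, 0, 4],
])

def lsh(signatures, b, r):
    M, C = signatures.shape
    assert M == b * r
    pairs = set()
    for band in range(b):
        bucket = {}
        rows = signatures[band * r:(band + 1) * r]
        for col in range(C):
            # The column vector within the band, as a hashable dict key
            bucket.setdefault(tuple(rows[:, col]), []).append(col)
        for cols in bucket.values():
            if len(cols) >= 2:
                pairs.update(combinations(cols, 2))
    return pairs

print(lsh(signatures, 2, 2))  # {(0, 2)}
```

With narrower bands (b=4, r=1) the same matrix also reports (0, 1) and (1, 2), since documents 0, 1, 2 share the value 9 in the third row — more bands with fewer rows means more (and noisier) candidates.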
3.2 Compute the precision, recall, and F1-score
# Compute the number of condition positives: every document pair whose
# Jaccard similarity is greater than or equal to the threshold
s = 0.8  # similarity threshold for checking condition positives
numConditionPositives = 151  # precomputed result for s=0.8, provided to save time
computeConditionPositives = False  # set to True to recompute (takes ~30 minutes)

if computeConditionPositives:
    numConditionPositives = 0
    numDocs = len(documents)
    for i in tqdm(range(numDocs)):
        shingle1 = set(doc_to_shingles[i])
        for j in range(i + 1, numDocs):
            shingle2 = set(doc_to_shingles[j])
            true_sim = jaccard_similarity(shingle1, shingle2)
            if true_sim >= s:
                numConditionPositives += 1

print(f"The number of condition positives: {numConditionPositives} when s={s}")

def query_analysis(signatures, b, s, numConditionPositives):
    #########################################################################
    # Programming 3.2 [10pt]
    # Calculate the query time, precision, recall, and F1 score for the
    # given configuration.
    #
    # Parameters
    #   signatures: (np.array) numpy array of size (M, C), where M is the
    #               number of min-hash functions and C is the number of documents
    #   b: (int) the number of bands
    #   s: (float) similarity threshold for checking condition positives
    #   numConditionPositives: (int) the number of condition positives
    #
    # Requirements
    #   1) b must be a divisor of M
    #   2) 0 <= s <= 1
    #
    # Returns
    #   query_time: (float) execution time of finding the candidate pairs
    #   precision: (float)
    #   recall: (float)
    #   f1: (float) F1 score
    #########################################################################
    M = signatures.shape[0]  # the number of min-hash functions
    assert M % b == 0

    TP = 0
    t = time.time()
    candidatePairs = lsh(signatures, b, M // b)
    query_time = time.time() - t
    for pair in candidatePairs:
        shingle1, shingle2 = set(doc_to_shingles[pair[0]]), set(doc_to_shingles[pair[1]])
        if jaccard_similarity(shingle1, shingle2) >= s:
            TP += 1
    precision = TP / len(candidatePairs)
    recall = TP / numConditionPositives
    f1 = 2 * precision * recall / (precision + recall)
    return query_time, precision, recall, f1

# Return the list of every divisor of a given integer
def find_divisors(x):
    divisors = list()
    for i in range(1, x + 1):
        if x % i == 0:
            divisors.append(i)
    return divisors

b_list = find_divisors(M)
query_time_list = list()
precision_list = list()
recall_list = list()
f1_list = list()
for b in tqdm(b_list):
    query_time, precision, recall, f1 = query_analysis(signatures, b, s, numConditionPositives)
    query_time_list.append(query_time)
    precision_list.append(precision)
    recall_list.append(recall)
    f1_list.append(f1)

print("b: ", b_list)
print("Query times: ", query_time_list)
print("Precisions: ", precision_list)
print("Recalls: ", recall_list)
print("F1 scores: ", f1_list)

plt.title(f"Query time (s={s})")
plt.xlabel("b")
plt.ylabel("Query time [sec]")
plt.plot(b_list, query_time_list)
plt.show()

plt.title(f"Precision (s={s})")
plt.xlabel("b")
plt.ylabel("Precision")
plt.plot(b_list, precision_list)
plt.show()

plt.title(f"Recall (s={s})")
plt.xlabel("b")
plt.ylabel("Recall")
plt.plot(b_list, recall_list)
plt.show()

plt.title(f"F1 Score (s={s})")
plt.xlabel("b")
plt.ylabel("F1 Score")
plt.plot(b_list, f1_list)
plt.show()

# Check whether the tests passed
test_msg = {True: "Passed", False: "Failed"}
print("-----Test results-----")
print(f"[Test 1.1 (1)]: {test_msg[pass_test1_1_1]}")
print(f"[Test 1.1 (2)]: {test_msg[pass_test1_1_2]}")
print(f"[Test 1.2]: {test_msg[pass_test1_2]}")
print(f"[Test 2.1]: {test_msg[pass_test2_1]}")
print(f"[Test 2.2]: {test_msg[pass_test2_2]}")
print("----------------------")
-----Test results-----
[Test 1.1 (1)]: Passed
[Test 1.1 (2)]: Passed
[Test 1.2]: Passed
[Test 2.1]: Passed
[Test 2.2]: Passed
----------------------
MIT
AI506/hw1/hw1-20194331/hw1_problem.ipynb
sungnyun/AI-assignments
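The precision/recall/F1 arithmetic in query_analysis can be checked in isolation. The numbers below are invented for illustration (they are not the assignment's results): suppose LSH returns 200 candidate pairs, 150 of which are truly similar, with 151 condition positives overall.

```python
# Toy counts, made up for illustration
TP = 150                      # candidate pairs that are truly similar
num_candidates = 200          # all pairs LSH reported
num_condition_positives = 151 # all truly similar pairs in the corpus

precision = TP / num_candidates             # fraction of candidates that are real
recall = TP / num_condition_positives       # fraction of real pairs that were found
f1 = 2 * precision * recall / (precision + recall)  # harmonic mean

print(precision, recall, round(f1, 4))  # 0.75 0.9933774834437086 0.8547
```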
Plotting functions
from recnn.preprocessing import sequentialize_by_pt

def load_tf(filename_train, preprocess=None, n_events_train=-1):
    # Make training data
    print("Loading training data...")
    fd = open(filename_train, "rb")
    X, y = pickle.load(fd)
    fd.close()
    y = np.array(y)

    if n_events_train > 0:
        indices = check_random_state(123).permutation(len(X))[:n_events_train]
        X = [X[i] for i in indices]
        y = y[indices]

    print("\tfilename = %s" % filename_train)
    print("\tX size = %d" % len(X))
    print("\ty size = %d" % len(y))

    # Preprocessing
    print("Preprocessing...")
    X = [rewrite_content(jet) for jet in X]
    if preprocess:
        X = [preprocess(jet) for jet in X]
    X = [extract(permute_by_pt(jet)) for jet in X]
    tf = RobustScaler().fit(np.vstack([jet["content"] for jet in X]))
    return tf

def load_test(tf, filename_test, preprocess=None, cropping=True):
    # Make test data
    print("Loading test data...")
    fd = open(filename_test, "rb")
    X, y = pickle.load(fd)
    fd.close()
    y = np.array(y)

    print("\tfilename = %s" % filename_test)
    print("\tX size = %d" % len(X))
    print("\ty size = %d" % len(y))

    # Preprocessing
    print("Preprocessing...")
    X = [rewrite_content(jet) for jet in X]
    if preprocess:
        X = [preprocess(jet) for jet in X]
    X = [extract(permute_by_pt(jet)) for jet in X]
    for jet in X:
        jet["content"] = tf.transform(jet["content"])

    if not cropping:
        return X, y

    # Cropping
    X_ = [j for j in X if 250 < j["pt"] < 300 and 50 < j["mass"] < 110]
    y_ = [y[i] for i, j in enumerate(X) if 250 < j["pt"] < 300 and 50 < j["mass"] < 110]
    X = X_
    y = np.array(y_)

    print("\tX size = %d" % len(X))
    print("\ty size = %d" % len(y))

    # Weights for flatness in pt
    w = np.zeros(len(y))

    X0 = [X[i] for i in range(len(y)) if y[i] == 0]
    pdf, edges = np.histogram([j["pt"] for j in X0], density=True, range=[250, 300], bins=50)
    pts = [j["pt"] for j in X0]
    indices = np.searchsorted(edges, pts) - 1
    inv_w = 1. / pdf[indices]
    inv_w /= inv_w.sum()
    w[y == 0] = inv_w

    X1 = [X[i] for i in range(len(y)) if y[i] == 1]
    pdf, edges = np.histogram([j["pt"] for j in X1], density=True, range=[250, 300], bins=50)
    pts = [j["pt"] for j in X1]
    indices = np.searchsorted(edges, pts) - 1
    inv_w = 1. / pdf[indices]
    inv_w /= inv_w.sum()
    w[y == 1] = inv_w

    return X, y, w

from recnn.recnn import grnn_transform_simple
from recnn.recnn import grnn_predict_simple
from recnn.recnn import grnn_predict_gated
from recnn.recnn import grnn_predict_simple_join

def predict(X, filename, func=grnn_predict_simple):
    fd = open(filename, "rb")
    params = pickle.load(fd)
    fd.close()
    y_pred = func(params, X)
    return y_pred

def evaluate_models(X, y, w, pattern, func=grnn_predict_simple):
    rocs = []
    fprs = []
    tprs = []
    for filename in glob.glob(pattern):
        print("Loading %s" % filename)
        y_pred = predict(X, filename, func=func)
        # ROC
        rocs.append(roc_auc_score(y, y_pred, sample_weight=w))
        fpr, tpr, _ = roc_curve(y, y_pred, sample_weight=w)
        fprs.append(fpr)
        tprs.append(tpr)
        print("ROC AUC = %.4f" % rocs[-1])
    print("Mean ROC AUC = %.4f" % np.mean(rocs))
    return rocs, fprs, tprs

def build_rocs(prefix_train, prefix_test, model_pattern, preprocess=None, gated=False):
    tf = load_tf("../data/w-vs-qcd/final/%s-train.pickle" % prefix_train, preprocess=preprocess)
    X, y, w = load_test(tf, "../data/w-vs-qcd/final/%s-test.pickle" % prefix_test, preprocess=preprocess)
    if not gated:
        rocs, fprs, tprs = evaluate_models(X, y, w, "../models/jet-study-2/model-w-s-%s-[0-9]*.pickle" % model_pattern)
    else:
        rocs, fprs, tprs = evaluate_models(X, y, w, "../models/jet-study-2/model-w-g-%s-[0-9]*.pickle" % model_pattern, func=grnn_predict_gated)
    return rocs, fprs, tprs

def remove_outliers(rocs, fprs, tprs):
    inv_fprs = []
    base_tpr = np.linspace(0.05, 1, 476)
    for fpr, tpr in zip(fprs, tprs):
        inv_fpr = interp(base_tpr, tpr, 1. / fpr)
        inv_fprs.append(inv_fpr)
    inv_fprs = np.array(inv_fprs)
    scores = inv_fprs[:, 225]

    p25 = np.percentile(scores, 1 / 6. * 100.)
    p75 = np.percentile(scores, 5 / 6. * 100.)
    robust_mean = np.mean([scores[i] for i in range(len(scores)) if p25 <= scores[i] <= p75])
    robust_std = np.std([scores[i] for i in range(len(scores)) if p25 <= scores[i] <= p75])
    indices = [i for i in range(len(scores))
               if robust_mean - 3 * robust_std <= scores[i] <= robust_mean + 3 * robust_std]

    new_r, new_f, new_t = [], [], []
    for i in indices:
        new_r.append(rocs[i])
        new_f.append(fprs[i])
        new_t.append(tprs[i])
    return new_r, new_f, new_t

def report_score(rocs, fprs, tprs, label, latex=False, input="particles", short=False):
    inv_fprs = []
    base_tpr = np.linspace(0.05, 1, 476)
    for fpr, tpr in zip(fprs, tprs):
        inv_fpr = interp(base_tpr, tpr, 1. / fpr)
        inv_fprs.append(inv_fpr)
    inv_fprs = np.array(inv_fprs)
    mean_inv_fprs = inv_fprs.mean(axis=0)

    if not latex:
        print("%32s\tROC AUC=%.4f+-%.2f\t1/FPR@TPR=0.5=%.2f+-%.2f" %
              (label, np.mean(rocs), np.std(rocs),
               np.mean(inv_fprs[:, 225]), np.std(inv_fprs[:, 225])))
    else:
        if not short:
            print("%10s \t& %30s \t& %.4f $\pm$ %.4f \t& %.1f $\pm$ %.1f \\\\" %
                  (input, label, np.mean(rocs), np.std(rocs),
                   np.mean(inv_fprs[:, 225]), np.std(inv_fprs[:, 225])))
        else:
            print("%30s \t& %.4f $\pm$ %.4f \t& %.1f $\pm$ %.1f \\\\" %
                  (label, np.mean(rocs), np.std(rocs),
                   np.mean(inv_fprs[:, 225]), np.std(inv_fprs[:, 225])))

def plot_rocs(rocs, fprs, tprs, label="", color="r", show_all=False):
    inv_fprs = []
    base_tpr = np.linspace(0.05, 1, 476)
    for fpr, tpr in zip(fprs, tprs):
        inv_fpr = interp(base_tpr, tpr, 1. / fpr)
        inv_fprs.append(inv_fpr)
        if show_all:
            plt.plot(base_tpr, inv_fpr, alpha=0.1, color=color)
    inv_fprs = np.array(inv_fprs)
    mean_inv_fprs = inv_fprs.mean(axis=0)
    plt.plot(base_tpr, mean_inv_fprs, color, label="%s" % label)

def plot_show(filename=None):
    plt.xlabel("Signal efficiency")
    plt.ylabel("1 / Background efficiency")
    plt.xlim([0.1, 1.0])
    plt.ylim(1, 500)
    plt.yscale("log")
    plt.legend(loc="best")
    plt.grid()
    if filename:
        plt.savefig(filename)
    plt.show()
_____no_output_____
BSD-3-Clause
notebooks/04-jet-study.ipynb
glouppe/recnn
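The pt-flattening weights in load_test follow a simple recipe: estimate the per-class pt density with a histogram, then weight each jet by the inverse density of its bin so the weighted pt spectrum becomes flat. A self-contained sketch on synthetic data (the pt values here are random, not the notebook's jets):

```python
import numpy as np

rng = np.random.RandomState(0)
pts = rng.uniform(250, 300, size=1000)  # synthetic jet pTs in the cropped range

# Histogram density over the same range/binning used in load_test
pdf, edges = np.histogram(pts, density=True, range=[250, 300], bins=50)

# Map each pt back to its bin and take the inverse of that bin's density
indices = np.searchsorted(edges, pts) - 1
inv_w = 1. / pdf[indices]
inv_w /= inv_w.sum()  # normalize so the weights sum to 1

print(inv_w.sum())  # 1.0 (up to floating-point rounding)
```

Jets in over-populated pt bins get small weights and jets in sparse bins get large ones, which is what makes the classifier's performance insensitive to the pt spectrum rather than exploiting it.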
Count parameters
def count(params):
    # Recursively count scalar parameters in nested lists of numpy arrays
    def _count(thing):
        if isinstance(thing, list):
            c = 0
            for stuff in thing:
                c += _count(stuff)
            return c
        elif isinstance(thing, np.ndarray):
            return np.prod(thing.shape)
        return 0  # ignore anything that is neither a list nor an array

    c = 0
    for k, v in params.items():
        c += _count(v)
    return c

# Simple vs gated
fd = open("../models/jet-study-2/model-w-s-antikt-kt-1.pickle", "rb")
params = pickle.load(fd)
fd.close()
print("Simple =", count(params))

fd = open("../models/jet-study-2/model-w-g-antikt-kt-1.pickle", "rb")
params = pickle.load(fd)
fd.close()
print("Gated =", count(params))

# Double (concatenated embeddings): simple vs gated
fd = open("../models/jet-study-2/model-w-sd-antikt-kt-1.pickle", "rb")
params = pickle.load(fd)
fd.close()
print("Simple =", count(params))

fd = open("../models/jet-study-2/model-w-gd-antikt-kt-1.pickle", "rb")
params = pickle.load(fd)
fd.close()
print("Gated =", count(params))
('Simple =', 10081)
('Gated =', 50361)
BSD-3-Clause
notebooks/04-jet-study.ipynb
glouppe/recnn
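The counting helper only needs the on-disk parameter format: a dict whose values are numpy arrays or (possibly nested) lists of arrays. A toy structure, with made-up shapes, exercises both branches:

```python
import numpy as np

def count(params):
    # Recursively count scalar parameters in nested lists of numpy arrays
    def _count(thing):
        if isinstance(thing, list):
            return sum(_count(stuff) for stuff in thing)
        elif isinstance(thing, np.ndarray):
            return int(np.prod(thing.shape))
        return 0

    return sum(_count(v) for v in params.values())

# Hypothetical parameter dict: a matrix, a vector, and a nested list of arrays
toy = {
    "W": np.zeros((3, 4)),                      # 12 parameters
    "b": np.zeros(4),                           # 4 parameters
    "layers": [np.zeros((2, 2)), np.zeros(2)],  # 4 + 2 parameters
}
print(count(toy))  # 22
```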
Embedding visualization
prefix_train = "antikt-kt"
prefix_test = prefix_train

tf = load_tf("../data/w-vs-qcd/final/%s-train.pickle" % prefix_train)
X, y, w = load_test(tf, "../data/w-vs-qcd/final/%s-test.pickle" % prefix_test)

fd = open("../models/jet-study-2/model-w-s-antikt-kt-1.pickle", "rb")
params = pickle.load(fd)
fd.close()

Xt = grnn_transform_simple(params, X[:5000])

from sklearn.manifold import TSNE
Xtt = TSNE(n_components=2).fit_transform(Xt)
for i in range(5000):
    plt.scatter(Xtt[i, 0], Xtt[i, 1], color="b" if y[i] == 1 else "r", alpha=0.5)
plt.show()

from sklearn.decomposition import PCA
Xtt = PCA(n_components=2).fit_transform(Xt)
for i in range(5000):
    plt.scatter(Xtt[i, 0], Xtt[i, 1], color="b" if y[i] == 1 else "r", alpha=0.5)
plt.show()
_____no_output_____
BSD-3-Clause
notebooks/04-jet-study.ipynb
glouppe/recnn
Generate all ROCs
for pattern, gated in [
    # Simple
    ## Particles
    ("antikt-kt", False),
    ("antikt-cambridge", False),
    ("antikt-antikt", False),
    ("antikt-random", False),
    ("antikt-seqpt", False),
    ("antikt-seqpt-reversed", False),
    ## Towers
    ("antikt-kt-delphes", False),
    ("antikt-cambridge-delphes", False),
    ("antikt-antikt-delphes", False),
    ("antikt-random-delphes", False),
    ("antikt-seqpt-delphes", False),
    ("antikt-seqpt-reversed-delphes", False),
    ## Images
    ("antikt-kt-images", False),
    # Gated
    ## Particles
    ("antikt-kt", True),
    ("antikt-antikt", True),
    ("antikt-seqpt", True),
    ("antikt-seqpt-reversed", True),
    ("antikt-cambridge", True),
    ("antikt-random", True),
    ## Towers
    ("antikt-kt-delphes", True),
    ("antikt-antikt-delphes", True),
    ("antikt-seqpt-delphes", True),
    ("antikt-seqpt-reversed-delphes", True),
    ("antikt-cambridge-delphes", True),
    ("antikt-random-delphes", True),
    ## Images
    ("antikt-kt-images", True)
]:
    r, f, t = build_rocs(pattern, pattern, pattern, gated=gated)

    # Save
    fd = open("../models/jet-study-2/rocs/rocs-%s-%s.pickle" % ("s" if not gated else "g", pattern), "wb")
    pickle.dump((r, f, t), fd)
    fd.close()

# sd/gd == concatenate embeddings of h1_L + h1_R
for pattern, gated in [
    # Simple
    ## Particles
    ("antikt-kt", False),
    ## Towers
    ("antikt-kt-delphes", False),
    ## Images
    ("antikt-kt-images", False),
    # Gated
    ## Particles
    ("antikt-kt", True),
    ## Towers
    ("antikt-kt-delphes", True),
    ## Images
    ("antikt-kt-images", True)
]:
    r, f, t = build_rocs(pattern, pattern, pattern, gated=gated)

    # Save
    fd = open("../models/jet-study-2/rocs/rocs-%s-%s.pickle" % ("sd" if not gated else "gd", pattern), "wb")
    pickle.dump((r, f, t), fd)
    fd.close()
_____no_output_____
BSD-3-Clause
notebooks/04-jet-study.ipynb
glouppe/recnn
Table
for pattern, gated, label in [
    # Simple
    ## Particles
    ("antikt-kt", False, "RNN $k_t$"),
    ("antikt-cambridge", False, "RNN C/A"),
    ("antikt-antikt", False, "RNN anti-$k_t$"),
    ("antikt-random", False, "RNN random"),
    ("antikt-seqpt", False, "RNN asc-$p_T$"),
    ("antikt-seqpt-reversed", False, "RNN desc-$p_T$"),
    ## Towers
    ("antikt-kt-delphes", False, "RNN $k_t$"),
    ("antikt-cambridge-delphes", False, "RNN C/A"),
    ("antikt-antikt-delphes", False, "RNN anti-$k_t$"),
    ("antikt-random-delphes", False, "RNN random"),
    ("antikt-seqpt-delphes", False, "RNN asc-$p_T$"),
    ("antikt-seqpt-reversed-delphes", False, "RNN desc-$p_T$"),
    ## Images
    ("antikt-kt-images", False, "RNN $k_t$"),
    # Gated
    ## Particles
    ("antikt-kt", True, "RNN $k_t$ (gated)"),
    ("antikt-cambridge", True, "RNN C/A (gated)"),
    ("antikt-antikt", True, "RNN anti-$k_t$ (gated)"),
    ("antikt-random", True, "RNN random (gated)"),
    ("antikt-seqpt", True, "RNN asc-$p_T$ (gated)"),
    ("antikt-seqpt-reversed", True, "RNN desc-$p_T$ (gated)"),
    ## Towers
    ("antikt-kt-delphes", True, "RNN $k_t$ (gated)"),
    ("antikt-cambridge-delphes", True, "RNN C/A (gated)"),
    ("antikt-antikt-delphes", True, "RNN anti-$k_t$ (gated)"),
    ("antikt-random-delphes", True, "RNN random (gated)"),
    ("antikt-seqpt-delphes", True, "RNN asc-$p_T$ (gated)"),
    ("antikt-seqpt-reversed-delphes", True, "RNN desc-$p_T$ (gated)"),
    # Images
    ("antikt-kt-images", False, "RNN $k_t$"),
    ("antikt-kt-images", True, "RNN $k_t$ (gated)")
]:
    fd = open("../models/jet-study-2/rocs/rocs-%s-%s.pickle" % ("s" if not gated else "g", pattern), "rb")
    r, f, t = pickle.load(fd)
    fd.close()
    r, f, t = remove_outliers(r, f, t)
    report_score(r, f, t, label=label, latex=True,
                 input="particles" if "delphes" not in pattern and "images" not in pattern else "towers")

for pattern, gated, label in [
    # Simple
    ## Particles
    ("antikt-kt", False, "RNN $k_t$"),
    ## Towers
    ("antikt-kt-delphes", False, "RNN $k_t$"),
    ## Images
    ("antikt-kt-images", False, "RNN $k_t$"),
    # Gated
    ## Particles
    ("antikt-kt", True, "RNN $k_t$ (gated)"),
    ## Towers
    ("antikt-kt-delphes", True, "RNN $k_t$ (gated)"),
    # Images
    ("antikt-kt-images", True, "RNN $k_t$ (gated)")
]:
    fd = open("../models/jet-study-2/rocs/rocs-%s-%s.pickle" % ("sd" if not gated else "gd", pattern), "rb")
    r, f, t = pickle.load(fd)
    fd.close()
    r, f, t = remove_outliers(r, f, t)
    report_score(r, f, t, label=label, latex=True,
                 input="particles" if "delphes" not in pattern and "images" not in pattern else "towers")
particles & RNN $k_t$ & 0.9178 $\pm$ 0.0008 & 67.0 $\pm$ 2.6 \\
towers & RNN $k_t$ & 0.8801 $\pm$ 0.0012 & 24.1 $\pm$ 0.6 \\
towers & RNN $k_t$ & 0.8332 $\pm$ 0.0029 & 12.6 $\pm$ 0.4 \\
particles & RNN $k_t$ (gated) & 0.9187 $\pm$ 0.0011 & 70.2 $\pm$ 3.8 \\
towers & RNN $k_t$ (gated) & 0.8810 $\pm$ 0.0012 & 24.5 $\pm$ 0.5 \\
towers & RNN $k_t$ (gated) & 0.8255 $\pm$ 0.0050 & 12.2 $\pm$ 0.6 \\
BSD-3-Clause
notebooks/04-jet-study.ipynb
glouppe/recnn
Plots
# Simple vs gated
for pattern, gated, label, color in [
    ("antikt-kt", False, "RNN $k_t$ (simple)", "r"),
    ("antikt-kt", True, "RNN $k_t$ (gated)", "b")
]:
    fd = open("../models/jet-study-2/rocs/rocs-%s-%s.pickle" % ("s" if not gated else "g", pattern), "rb")
    r, f, t = pickle.load(fd)
    fd.close()
    r, f, t = remove_outliers(r, f, t)
    plot_rocs(r, f, t, label=label, color=color)
    report_score(r, f, t, label=label)
plot_show()

# Topologies (particles, simple)
for pattern, gated, label, color in [
    ("antikt-kt", False, "$k_t$", "r"),
    ("antikt-cambridge", False, "C/A", "g"),
    ("antikt-antikt", False, "anti-$k_t$", "b"),
    ("antikt-seqpt", False, "asc-$p_T$", "c"),
    ("antikt-seqpt-reversed", False, "desc-$p_T$", "m"),
    ("antikt-random", False, "random", "orange")
]:
    fd = open("../models/jet-study-2/rocs/rocs-%s-%s.pickle" % ("s" if not gated else "g", pattern), "rb")
    r, f, t = pickle.load(fd)
    fd.close()
    r, f, t = remove_outliers(r, f, t)
    plot_rocs(r, f, t, label=label, color=color)
    report_score(r, f, t, label=label)
plot_show()

# Topologies (towers, simple)
for pattern, gated, label, color in [
    ("antikt-kt-delphes", False, "RNN $k_t$", "r"),
    ("antikt-cambridge-delphes", False, "RNN C/A", "g"),
    ("antikt-antikt-delphes", False, "RNN anti-$k_t$", "b"),
    ("antikt-seqpt-delphes", False, "RNN asc-$p_T$", "c"),
    ("antikt-seqpt-reversed-delphes", False, "RNN desc-$p_T$", "m"),
    ("antikt-random-delphes", False, "RNN random", "orange")
]:
    fd = open("../models/jet-study-2/rocs/rocs-%s-%s.pickle" % ("s" if not gated else "g", pattern), "rb")
    r, f, t = pickle.load(fd)
    fd.close()
    r, f, t = remove_outliers(r, f, t)
    plot_rocs(r, f, t, label=label, color=color)
    report_score(r, f, t, label=label)
plot_show()

# Topologies (particles, gated)
for pattern, gated, label, color in [
    ("antikt-kt", True, "RNN $k_t$", "r"),
    ("antikt-antikt", True, "RNN anti-$k_t$", "b"),
    ("antikt-seqpt", True, "RNN asc-$p_T$", "c"),
    ("antikt-seqpt-reversed", True, "RNN desc-$p_T$", "m"),
]:
    fd = open("../models/jet-study-2/rocs/rocs-%s-%s.pickle" % ("s" if not gated else "g", pattern), "rb")
    r, f, t = pickle.load(fd)
    fd.close()
    r, f, t = remove_outliers(r, f, t)
    plot_rocs(r, f, t, label=label, color=color)
    report_score(r, f, t, label=label)
plot_show()

# Topologies (towers, gated)
for pattern, gated, label, color in [
    ("antikt-kt-delphes", True, "RNN $k_t$", "r"),
    ("antikt-antikt-delphes", True, "RNN anti-$k_t$", "b"),
    ("antikt-seqpt-delphes", True, "RNN asc-$p_T$", "c"),
    ("antikt-seqpt-reversed-delphes", True, "RNN desc-$p_T$", "m"),
]:
    fd = open("../models/jet-study-2/rocs/rocs-%s-%s.pickle" % ("s" if not gated else "g", pattern), "rb")
    r, f, t = pickle.load(fd)
    fd.close()
    r, f, t = remove_outliers(r, f, t)
    plot_rocs(r, f, t, label=label, color=color)
    report_score(r, f, t, label=label)
plot_show()

# Particles vs towers vs images (simple)
for pattern, gated, label, color in [
    ("antikt-kt", False, "particles", "r"),
    ("antikt-kt-delphes", False, "towers", "g"),
    ("antikt-kt-images", False, "images", "b"),
]:
    fd = open("../models/jet-study-2/rocs/rocs-%s-%s.pickle" % ("s" if not gated else "g", pattern), "rb")
    r, f, t = pickle.load(fd)
    fd.close()
    r, f, t = remove_outliers(r, f, t)
    plot_rocs(r, f, t, label=label, color=color)
    report_score(r, f, t, label=label)
plot_show(filename="particles-towers-images.pdf")

# Particles vs towers vs images (gated)
for pattern, gated, label, color in [
    ("antikt-kt", True, "particles", "r"),
    ("antikt-kt-delphes", True, "towers", "g"),
    ("antikt-kt-images", True, "images", "b"),
]:
    fd = open("../models/jet-study-2/rocs/rocs-%s-%s.pickle" % ("s" if not gated else "g", pattern), "rb")
    r, f, t = pickle.load(fd)
    fd.close()
    r, f, t = remove_outliers(r, f, t)
    plot_rocs(r, f, t, label=label, color=color)
    report_score(r, f, t, label=label)
plot_show()
particles ROC AUC=0.9195+-0.00 1/FPR@TPR=0.5=74.34+-2.40
towers ROC AUC=0.8822+-0.00 1/FPR@TPR=0.5=25.41+-0.36
images ROC AUC=0.8277+-0.00 1/FPR@TPR=0.5=12.36+-0.30
BSD-3-Clause
notebooks/04-jet-study.ipynb
glouppe/recnn
Trimming
for pattern_train, pattern_test, gated in [
    ("antikt-kt", "antikt-kt", False),
    ("antikt-kt", "antikt-kt-trimmed", False),
    ("antikt-kt-trimmed", "antikt-kt-trimmed", False),
    ("antikt-kt-trimmed", "antikt-kt", False),
]:
    r, f, t = build_rocs(pattern_train, pattern_test, pattern_train, gated=gated)

    # Save
    fd = open("../models/jet-study-2/rocs/rocs-%s-%s-%s.pickle" % ("s" if not gated else "g", pattern_train, pattern_test), "wb")
    pickle.dump((r, f, t), fd)
    fd.close()

for pattern_train, pattern_test, gated, label, color in [
    ("antikt-kt", "antikt-kt", False, "$k_t$ on $k_t$", "b"),
    ("antikt-kt", "antikt-kt-trimmed", False, "$k_t$ on $k_t$-trimmed", "c"),
    ("antikt-kt-trimmed", "antikt-kt-trimmed", False, "$k_t$-trimmed on $k_t$-trimmed", "r"),
    ("antikt-kt-trimmed", "antikt-kt", False, "$k_t$-trimmed on $k_t$", "orange"),
]:
    fd = open("../models/jet-study-2/rocs/rocs-%s-%s-%s.pickle" % ("s" if not gated else "g", pattern_train, pattern_test), "rb")
    r, f, t = pickle.load(fd)
    fd.close()
    r, f, t = remove_outliers(r, f, t)
    plot_rocs(r, f, t, label=label, color=color)
    report_score(r, f, t, label=label)
plot_show()
$k_t$ on $k_t$ ROC AUC=0.9185+-0.00 1/FPR@TPR=0.5=68.32+-1.80
$k_t$ on $k_t$-trimmed ROC AUC=0.8952+-0.00 1/FPR@TPR=0.5=28.70+-1.30
$k_t$-trimmed on $k_t$-trimmed ROC AUC=0.9034+-0.00 1/FPR@TPR=0.5=33.05+-1.85
$k_t$-trimmed on $k_t$ ROC AUC=0.8893+-0.01 1/FPR@TPR=0.5=36.78+-5.88
BSD-3-Clause
notebooks/04-jet-study.ipynb
glouppe/recnn
Colinear splits
from functools import partial
from recnn.preprocessing import sequentialize_by_pt

preprocess_seqpt = partial(sequentialize_by_pt, reverse=False)
preprocess_seqpt_rev = partial(sequentialize_by_pt, reverse=True)

for pattern_train, pattern_test, gated, preprocess in [
    # kt
    ("antikt-kt", "antikt-kt-colinear1", False, None),
    ("antikt-kt", "antikt-kt-colinear10", False, None),
    ("antikt-kt", "antikt-kt-colinear1-max", False, None),
    ("antikt-kt", "antikt-kt-colinear10-max", False, None),
    # asc-pt
    ("antikt-seqpt", "antikt-kt-colinear1", False, preprocess_seqpt),
    ("antikt-seqpt", "antikt-kt-colinear10", False, preprocess_seqpt),
    ("antikt-seqpt", "antikt-kt-colinear1-max", False, preprocess_seqpt),
    ("antikt-seqpt", "antikt-kt-colinear10-max", False, preprocess_seqpt),
    # desc-pt
    ("antikt-seqpt-reversed", "antikt-kt-colinear1", False, preprocess_seqpt_rev),
    ("antikt-seqpt-reversed", "antikt-kt-colinear10", False, preprocess_seqpt_rev),
    ("antikt-seqpt-reversed", "antikt-kt-colinear1-max", False, preprocess_seqpt_rev),
    ("antikt-seqpt-reversed", "antikt-kt-colinear10-max", False, preprocess_seqpt_rev),
]:
    r, f, t = build_rocs(pattern_train, pattern_test, pattern_train, gated=gated, preprocess=preprocess)

    # Save
    fd = open("../models/jet-study-2/rocs/rocs-%s-%s-%s.pickle" % ("s" if not gated else "g", pattern_train, pattern_test), "wb")
    pickle.dump((r, f, t), fd)
    fd.close()

for pattern_train, pattern_test, gated, label in [
    # kt
    ("antikt-kt", "antikt-kt-colinear1", False, "$k_t$ colinear1"),
    ("antikt-kt", "antikt-kt-colinear10", False, "$k_t$ colinear10"),
    ("antikt-kt", "antikt-kt-colinear1-max", False, "$k_t$ colinear1-max"),
    ("antikt-kt", "antikt-kt-colinear10-max", False, "$k_t$ colinear10-max"),
    # asc-pt
    ("antikt-seqpt", "antikt-kt-colinear1", False, "asc-$p_T$ colinear1"),
    ("antikt-seqpt", "antikt-kt-colinear10", False, "asc-$p_T$ colinear10"),
    ("antikt-seqpt", "antikt-kt-colinear1-max", False, "asc-$p_T$ colinear1-max"),
    ("antikt-seqpt", "antikt-kt-colinear10-max", False, "asc-$p_T$ colinear10-max"),
    # desc-pt
    ("antikt-seqpt-reversed", "antikt-kt-colinear1", False, "desc-$p_T$ colinear1"),
    ("antikt-seqpt-reversed", "antikt-kt-colinear10", False, "desc-$p_T$ colinear10"),
    ("antikt-seqpt-reversed", "antikt-kt-colinear1-max", False, "desc-$p_T$ colinear1-max"),
    ("antikt-seqpt-reversed", "antikt-kt-colinear10-max", False, "desc-$p_T$ colinear10-max"),
]:
    fd = open("../models/jet-study-2/rocs/rocs-%s-%s-%s.pickle" % ("s" if not gated else "g", pattern_train, pattern_test), "rb")
    r, f, t = pickle.load(fd)
    fd.close()
    r, f, t = remove_outliers(r, f, t)
    report_score(r, f, t, label=label, latex=True, short=True)
$k_t$ colinear1 & 0.9183 $\pm$ 0.0006 & 68.7 $\pm$ 2.0 \\
$k_t$ colinear10 & 0.9174 $\pm$ 0.0006 & 67.5 $\pm$ 2.6 \\
$k_t$ colinear1-max & 0.9184 $\pm$ 0.0006 & 68.5 $\pm$ 2.8 \\
$k_t$ colinear10-max & 0.9159 $\pm$ 0.0009 & 65.7 $\pm$ 2.7 \\
asc-$p_T$ colinear1 & 0.9129 $\pm$ 0.0031 & 53.0 $\pm$ 7.5 \\
asc-$p_T$ colinear10 & 0.9126 $\pm$ 0.0033 & 53.6 $\pm$ 7.2 \\
asc-$p_T$ colinear1-max & 0.9129 $\pm$ 0.0032 & 54.2 $\pm$ 7.9 \\
asc-$p_T$ colinear10-max & 0.9090 $\pm$ 0.0039 & 50.2 $\pm$ 8.2 \\
desc-$p_T$ colinear1 & 0.9188 $\pm$ 0.0010 & 70.7 $\pm$ 4.0 \\
desc-$p_T$ colinear10 & 0.9178 $\pm$ 0.0011 & 67.9 $\pm$ 4.3 \\
desc-$p_T$ colinear1-max & 0.9191 $\pm$ 0.0010 & 72.4 $\pm$ 4.3 \\
desc-$p_T$ colinear10-max & 0.9140 $\pm$ 0.0016 & 63.5 $\pm$ 5.2 \\
BSD-3-Clause
notebooks/04-jet-study.ipynb
glouppe/recnn
Soft particles
from functools import partial
from recnn.preprocessing import sequentialize_by_pt

preprocess_seqpt = partial(sequentialize_by_pt, reverse=False)
preprocess_seqpt_rev = partial(sequentialize_by_pt, reverse=True)

for pattern_train, pattern_test, gated, preprocess in [
    ("antikt-kt", "antikt-kt-soft", False, None),
    ("antikt-seqpt", "antikt-kt-soft", False, preprocess_seqpt),
    ("antikt-seqpt-reversed", "antikt-kt-soft", False, preprocess_seqpt_rev),
]:
    r, f, t = build_rocs(pattern_train, pattern_test, pattern_train, gated=gated, preprocess=preprocess)

    # Save
    fd = open("../models/jet-study-2/rocs/rocs-%s-%s-%s.pickle" % ("s" if not gated else "g", pattern_train, pattern_test), "wb")
    pickle.dump((r, f, t), fd)
    fd.close()

for pattern_train, pattern_test, gated, label in [
    ("antikt-kt", "antikt-kt-soft", False, "$k_t$ soft"),
    ("antikt-seqpt", "antikt-kt-soft", False, "asc-$p_T$ soft"),
    ("antikt-seqpt-reversed", "antikt-kt-soft", False, "desc-$p_T$ soft"),
]:
    fd = open("../models/jet-study-2/rocs/rocs-%s-%s-%s.pickle" % ("s" if not gated else "g", pattern_train, pattern_test), "rb")
    r, f, t = pickle.load(fd)
    fd.close()
    r, f, t = remove_outliers(r, f, t)
    report_score(r, f, t, label=label, latex=True, short=True)
$k_t$ soft & 0.9179 $\pm$ 0.0006 & 68.2 $\pm$ 2.3 \\
asc-$p_T$ soft & 0.9121 $\pm$ 0.0032 & 51.3 $\pm$ 6.0 \\
desc-$p_T$ soft & 0.9188 $\pm$ 0.0009 & 70.2 $\pm$ 3.7 \\
BSD-3-Clause
notebooks/04-jet-study.ipynb
glouppe/recnn
Learning curve
for pattern, gated, n_events in [
    # ("antikt-kt", False, 6000),
    # ("antikt-seqpt-reversed", False, 6000),
    ("antikt-kt", True, 6000),
    ("antikt-seqpt-reversed", True, 6000),
    # ("antikt-kt", False, 15000),
    # ("antikt-seqpt-reversed", False, 15000),
    ("antikt-kt", True, 15000),
    ("antikt-seqpt-reversed", True, 15000),
]:
    tf = load_tf("../data/w-vs-qcd/final/%s-train.pickle" % pattern, n_events_train=n_events)
    X, y, w = load_test(tf, "../data/w-vs-qcd/final/%s-test.pickle" % pattern)

    if not gated:
        rocs, fprs, tprs = evaluate_models(X, y, w, "../models/jet-study-2/model-w-s-%s-%d-[0-9]*.pickle" % (pattern, n_events))
    else:
        rocs, fprs, tprs = evaluate_models(X, y, w, "../models/jet-study-2/model-w-g-%s-%d-[0-9]*.pickle" % (pattern, n_events), func=grnn_predict_gated)

    # Save
    fd = open("../models/jet-study-2/rocs/rocs-%s-%s-%d.pickle" % ("s" if not gated else "g", pattern, n_events), "wb")
    pickle.dump((rocs, fprs, tprs), fd)
    fd.close()

for pattern, label, color in [
    ("s-antikt-kt", "$k_t$ 100k", "r"),
    ("s-antikt-kt-15000", "$k_t$ 10k", "g"),
    ("s-antikt-kt-6000", "$k_t$ 1k", "b"),
    ("s-antikt-seqpt-reversed", "desc-$p_T$ 100k", "r--"),
    ("s-antikt-seqpt-reversed-15000", "desc-$p_T$ 10k", "g--"),
    ("s-antikt-seqpt-reversed-6000", "desc-$p_T$ 1k", "b--"),
]:
    fd = open("../models/jet-study-2/rocs/rocs-%s.pickle" % pattern, "rb")
    r, f, t = pickle.load(fd)
    fd.close()
    r, f, t = remove_outliers(r, f, t)
    plot_rocs(r, f, t, label=label, color=color)
    report_score(r, f, t, label=label)
plot_show()

for pattern, label, color in [
    ("g-antikt-kt", "$k_t$ 100k", "r"),
    ("g-antikt-kt-15000", "$k_t$ 10k", "g"),
    ("g-antikt-kt-6000", "$k_t$ 1k", "b"),
    ("g-antikt-seqpt-reversed", "desc-$p_T$ 100k", "r--"),
    ("g-antikt-seqpt-reversed-15000", "desc-$p_T$ 10k", "g--"),
    ("g-antikt-seqpt-reversed-6000", "desc-$p_T$ 1k", "b--"),
]:
    fd = open("../models/jet-study-2/rocs/rocs-%s.pickle" % pattern, "rb")
    r, f, t = pickle.load(fd)
    fd.close()
    r, f, t = remove_outliers(r, f, t)
    plot_rocs(r, f, t, label=label, color=color)
    report_score(r, f, t, label=label)
plot_show()
$k_t$ 100k ROC AUC=0.9195+-0.00 1/FPR@TPR=0.5=74.34+-2.40
$k_t$ 10k ROC AUC=0.9042+-0.00 1/FPR@TPR=0.5=46.42+-5.09
$k_t$ 1k ROC AUC=0.8726+-0.01 1/FPR@TPR=0.5=22.63+-2.97
desc-$p_T$ 100k ROC AUC=0.9212+-0.00 1/FPR@TPR=0.5=83.27+-3.08
desc-$p_T$ 10k ROC AUC=0.9123+-0.00 1/FPR@TPR=0.5=56.12+-4.75
desc-$p_T$ 1k ROC AUC=0.8860+-0.00 1/FPR@TPR=0.5=27.35+-2.51
BSD-3-Clause
notebooks/04-jet-study.ipynb
glouppe/recnn
Tau21
import h5py

f = h5py.File("../data/w-vs-qcd/h5/w_100000_j1p0_sj0p30_delphes_jets_images.h5", "r")["auxvars"]
tau1 = f["tau_1"]
tau2 = f["tau_2"]
tau21 = np.true_divide(tau2, tau1)
pt = f["pt_trimmed"]
mass = f["mass_trimmed"]
mask = (f["mass_trimmed"] < 110) & (f["mass_trimmed"] > 50) & (f["pt_trimmed"] < 300) & (f["pt_trimmed"] > 250)
# mask = mask & np.isfinite(tau21) & (tau21 != 0.)
signal_tau21 = tau21[mask]
signal_pt = pt[mask]
signal_mass = mass[mask]

f = h5py.File("../data/w-vs-qcd/h5/qcd_100000_j1p0_sj0p30_delphes_jets_images.h5", "r")["auxvars"]
tau1 = f["tau_1"]
tau2 = f["tau_2"]
tau21 = np.true_divide(tau2, tau1)
pt = f["pt_trimmed"]
mass = f["mass_trimmed"]
mask = (f["mass_trimmed"] < 110) & (f["mass_trimmed"] > 50) & (f["pt_trimmed"] < 300) & (f["pt_trimmed"] > 250)
# mask = mask & np.isfinite(tau21) & (tau21 != 0.)
bkg_tau21 = tau21[mask]
bkg_pt = pt[mask]
bkg_mass = mass[mask]

plt.hist(bkg_mass, histtype="step", bins=40, density=True)  # normed=1 is deprecated; density=True is equivalent
plt.hist(signal_mass, histtype="step", bins=40, density=True)

tau21 = np.concatenate((signal_tau21, bkg_tau21))
pts = np.concatenate((signal_pt, bkg_pt))
masss = np.concatenate((signal_mass, bkg_mass))
X = np.hstack([tau21.reshape(-1, 1), masss.reshape(-1, 1)])
y = np.concatenate((np.ones(len(signal_tau21)), np.zeros(len(bkg_tau21))))

# Weights for flatness in pt, per class
w = np.zeros(len(y))

pdf, edges = np.histogram(pts[y == 0], density=True, range=[250, 300], bins=50)
indices = np.searchsorted(edges, pts[y == 0]) - 1
inv_w = 1. / pdf[indices]
inv_w /= inv_w.sum()
w[y == 0] = inv_w

pdf, edges = np.histogram(pts[y == 1], density=True, range=[250, 300], bins=50)
indices = np.searchsorted(edges, pts[y == 1]) - 1
inv_w = 1. / pdf[indices]
inv_w /= inv_w.sum()
w[y == 1] = inv_w

X_train, X_test, y_train, y_test, w_train, w_test = train_test_split(X, y, w, train_size=0.5)

# Note: this redefines the earlier evaluate_models to score precomputed variables
def evaluate_models(X, y, w):
    rocs = []
    fprs = []
    tprs = []
    y_pred = X
    # ROC
    rocs.append(roc_auc_score(y, y_pred, sample_weight=w))
    fpr, tpr, _ = roc_curve(y, y_pred, sample_weight=w)
    fprs.append(fpr)
    tprs.append(tpr)
    return rocs, fprs, tprs

r, f, t = evaluate_models(-tau21, y, w)
plot_rocs(r, f, t, label="tau21")
report_score(r, f, t, label="tau21")

r, f, t = evaluate_models(masss, y, w)
plot_rocs(r, f, t, label="mass")
report_score(r, f, t, label="mass")
plot_show()

clf = ExtraTreesClassifier(n_estimators=1000, min_samples_leaf=100, max_features=1)
clf.fit(X_train, y_train)

r, f, t = evaluate_models(-clf.predict_proba(X_test)[:, 0], y_test, w_test)
plot_rocs(r, f, t, label="tau21+mass")
report_score(r, f, t, label="tau21+mass")
plot_show()
tau21+mass ROC AUC=0.8207+-0.00 1/FPR@TPR=0.5=11.06+-0.00
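The inverse-density reweighting used above (histogram the pT spectrum, then weight each event by the reciprocal of its own bin's density) can be checked in isolation. A minimal sketch on synthetic pT values (hypothetical data, not the Delphes samples): because each event is weighted by 1/pdf of the bin it fell into, the weighted spectrum comes out flat by construction.

```python
import numpy as np

# Toy pT values (hypothetical), drawn from a falling-ish spectrum on [250, 300].
rng = np.random.default_rng(0)
pts = 250 + 50 * rng.power(0.3, size=10000)

# Same recipe as above: bin the spectrum, then weight each event by the
# inverse of its bin's density.
pdf, edges = np.histogram(pts, density=True, range=[250, 300], bins=50)
indices = np.searchsorted(edges, pts) - 1
weights = 1.0 / pdf[indices]
weights /= weights.sum()

# The weighted histogram is now (numerically) uniform across bins.
flat, _ = np.histogram(pts, range=[250, 300], bins=50, weights=weights)
print(flat.std() / flat.mean())  # ~0: the weighted spectrum is flat
```

This is why the ROC comparisons above are meaningful: after reweighting, neither class can be separated by pT alone.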
BSD-3-Clause
notebooks/04-jet-study.ipynb
glouppe/recnn
RandomizedSearchCV
big_param_grid = {
    'C': np.arange(0, 10, 0.01),
    'tol': np.arange(0, 0.001, 1e-5),
}
big_param_grid

# Create the RandomizedSearch estimator along with a parameter object containing the values to adjust
from sklearn.model_selection import RandomizedSearchCV
random_clf = RandomizedSearchCV(model, big_param_grid, n_iter=100, random_state=1, verbose=3)
random_clf

# Fit the model by using the randomized search estimator.
# This will take the LogisticRegression model and try a random sample of combinations of parameters
random_clf.fit(X_train, y_train)

# List the best parameters for this dataset
print(random_clf.best_params_)

# List the best score
print(random_clf.best_score_)

# Make predictions with the hypertuned model
predictions = random_clf.predict(X_test)

# Calculate the classification report
from sklearn.metrics import classification_report
print(classification_report(y_test, predictions, target_names=["blue", "red"]))
              precision    recall  f1-score   support

        blue       0.83      1.00      0.91        10
         red       1.00      0.87      0.93        15

    accuracy                           0.92        25
   macro avg       0.92      0.93      0.92        25
weighted avg       0.93      0.92      0.92        25
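The cell above relies on `model`, `X_train`, etc. defined in earlier cells. A self-contained sketch of the same pattern on synthetic data (note: `C` must be strictly positive for `LogisticRegression`, so this sketch starts the sampled ranges just above zero):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report
from sklearn.model_selection import RandomizedSearchCV, train_test_split

# Synthetic two-class data standing in for the notebook's dataset.
X, y = make_classification(n_samples=200, n_features=4, random_state=1)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=1)

# RandomizedSearchCV samples n_iter parameter combinations at random
# from these arrays, instead of trying the full grid.
param_grid = {
    "C": np.arange(0.01, 10, 0.01),
    "tol": np.arange(1e-5, 1e-3, 1e-5),
}
search = RandomizedSearchCV(LogisticRegression(), param_grid,
                            n_iter=20, random_state=1)
search.fit(X_train, y_train)

print(search.best_params_)
print(search.best_score_)
print(classification_report(y_test, search.predict(X_test)))
```

With 999 values of `C` and 99 of `tol`, the full grid would be ~99k fits per CV fold; random search covers the space with 20.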
ADSL
01-Lesson-Plans/19-Supervised-Machine-Learning/1/Extra-Activities/01-Ins_Hyperparameters/Solved/Ins_Hyperparameters.ipynb
anirudhmungre/sneaky-lessons
AveragePooling3D

**[pooling.AveragePooling3D.0] input 4x4x4x2, pool_size=(2, 2, 2), strides=None, padding='valid', data_format='channels_last'**
data_in_shape = (4, 4, 4, 2)
L = AveragePooling3D(pool_size=(2, 2, 2), strides=None, padding='valid', data_format='channels_last')

layer_0 = Input(shape=data_in_shape)
layer_1 = L(layer_0)
model = Model(inputs=layer_0, outputs=layer_1)

# set weights to random (use seed for reproducibility)
np.random.seed(290)
data_in = 2 * np.random.random(data_in_shape) - 1
result = model.predict(np.array([data_in]))
data_out_shape = result[0].shape
data_in_formatted = format_decimal(data_in.ravel().tolist())
data_out_formatted = format_decimal(result[0].ravel().tolist())
print('')
print('in shape:', data_in_shape)
print('in:', data_in_formatted)
print('out shape:', data_out_shape)
print('out:', data_out_formatted)

DATA['pooling.AveragePooling3D.0'] = {
    'input': {'data': data_in_formatted, 'shape': data_in_shape},
    'expected': {'data': data_out_formatted, 'shape': data_out_shape}
}
in shape: (4, 4, 4, 2) in: [-0.453175, -0.475078, 0.486234, -0.949643, -0.349099, -0.837108, 0.933439, 0.167853, -0.995191, 0.459466, -0.788337, -0.120985, 0.06215, 0.138625, -0.102201, 0.976605, -0.591051, -0.592066, 0.469334, -0.435067, 0.621416, 0.817698, 0.790015, 0.485862, 0.469679, -0.611443, 0.582845, -0.503885, -0.379174, -0.451035, 0.052289, 0.131836, -0.609312, -0.828722, -0.422428, -0.648754, -0.339801, -0.758017, 0.754244, -0.544823, 0.691656, 0.076848, -0.32539, 0.306448, -0.662415, 0.334329, 0.030666, -0.414111, -0.757096, -0.20427, -0.893088, -0.681919, -0.619269, -0.640749, 0.867436, 0.971453, -0.42039, -0.574905, -0.34642, 0.588678, -0.247265, 0.436084, 0.220126, 0.114202, 0.613623, 0.401452, -0.270262, -0.591146, -0.872383, 0.818368, 0.336808, 0.338197, -0.275646, 0.375308, -0.928722, -0.836727, -0.504007, -0.503397, -0.636099, 0.948482, -0.639661, -0.026878, -0.122643, -0.634018, -0.247016, 0.517246, -0.398639, 0.752174, -0.014633, -0.170534, -0.463453, -0.289716, 0.837207, 0.769962, -0.401357, 0.076406, 0.270433, -0.036538, -0.05766, 0.625256, 0.626847, -0.27321, -0.217219, -0.775553, -0.182939, 0.327385, -0.976376, -0.337729, -0.178467, -0.1545, -0.334259, -0.537949, 0.499229, 0.775269, -0.657598, 0.921864, 0.376821, 0.420375, -0.937835, -0.176425, -0.516753, 0.737286, 0.867807, -0.515893, 0.710035, 0.680377, -0.350561, -0.28645] out shape: (2, 2, 2, 2) out: [-0.301993, -0.272552, 0.040873, -0.117081, -0.185773, -0.37686, 0.163197, 0.233169, -0.225944, -0.009092, -0.222347, -0.017445, -0.130963, 0.099673, -0.051418, 0.344208]
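As a sanity check on what this fixture captures: with pool_size=(2, 2, 2), strides=None (which Keras treats as strides equal to pool_size) and padding='valid', average pooling is just the per-channel mean over each non-overlapping 2x2x2 block, so a 4x4x4 volume shrinks to 2x2x2. A minimal NumPy sketch (hypothetical helper, not part of the notebook):

```python
import numpy as np

def avg_pool3d_valid(x, pool=(2, 2, 2)):
    """Mean over non-overlapping blocks of a (D, H, W, C) array."""
    d, h, w, c = x.shape
    pd, ph, pw = pool
    out = np.zeros((d // pd, h // ph, w // pw, c))
    for i in range(d // pd):
        for j in range(h // ph):
            for k in range(w // pw):
                block = x[i*pd:(i+1)*pd, j*ph:(j+1)*ph, k*pw:(k+1)*pw, :]
                out[i, j, k] = block.mean(axis=(0, 1, 2))
    return out

x = np.arange(4 * 4 * 4 * 2, dtype=float).reshape(4, 4, 4, 2)
print(avg_pool3d_valid(x).shape)  # (2, 2, 2, 2), matching the fixture's out shape
```

The same rule predicts the output shapes of the later cases: floor((size - pool) / stride) + 1 per spatial axis for 'valid' padding.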
MIT
notebooks/layers/pooling/AveragePooling3D.ipynb
GTDev87/tpt-hackathon
**[pooling.AveragePooling3D.1] input 4x4x4x2, pool_size=(2, 2, 2), strides=(1, 1, 1), padding='valid', data_format='channels_last'**
data_in_shape = (4, 4, 4, 2)
L = AveragePooling3D(pool_size=(2, 2, 2), strides=(1, 1, 1), padding='valid', data_format='channels_last')

layer_0 = Input(shape=data_in_shape)
layer_1 = L(layer_0)
model = Model(inputs=layer_0, outputs=layer_1)

# set weights to random (use seed for reproducibility)
np.random.seed(291)
data_in = 2 * np.random.random(data_in_shape) - 1
result = model.predict(np.array([data_in]))
data_out_shape = result[0].shape
data_in_formatted = format_decimal(data_in.ravel().tolist())
data_out_formatted = format_decimal(result[0].ravel().tolist())
print('')
print('in shape:', data_in_shape)
print('in:', data_in_formatted)
print('out shape:', data_out_shape)
print('out:', data_out_formatted)

DATA['pooling.AveragePooling3D.1'] = {
    'input': {'data': data_in_formatted, 'shape': data_in_shape},
    'expected': {'data': data_out_formatted, 'shape': data_out_shape}
}
in shape: (4, 4, 4, 2) in: [-0.803361, 0.348731, 0.30124, -0.168638, 0.516406, -0.258765, 0.297839, 0.993235, 0.958465, -0.273175, -0.704992, 0.261477, -0.301255, 0.263104, 0.678631, -0.644936, -0.029034, -0.320266, 0.307733, -0.479016, 0.608177, 0.034951, 0.456908, -0.929353, 0.594982, -0.243058, -0.524918, 0.455339, 0.034216, 0.356824, 0.63906, -0.259773, -0.084724, 0.248472, -0.608134, 0.0077, 0.400591, -0.960703, -0.247926, -0.774509, 0.496174, -0.319044, -0.324046, -0.616632, -0.322142, -0.472846, 0.171825, -0.030013, 0.992861, -0.645264, 0.524886, 0.673229, 0.883122, 0.25346, -0.706988, -0.654436, 0.918349, -0.139113, 0.742737, 0.338472, -0.812719, 0.860081, 0.003489, 0.667897, 0.362284, -0.283972, 0.995162, 0.67962, -0.700244, -0.137142, 0.045695, -0.450433, 0.929977, 0.157542, -0.720517, -0.939063, 0.295004, 0.308728, -0.094057, -0.374756, -0.400976, -0.539654, 0.27965, 0.977688, -0.361264, -0.027757, -0.67149, 0.57064, -0.861888, 0.616985, -0.027436, 0.40181, -0.30391, 0.92268, -0.486416, -0.335828, 0.138558, -0.445691, 0.156253, -0.967633, 0.127471, -0.783301, -0.691353, -0.76759, 0.618771, -0.377474, 0.50152, 0.126867, -0.872708, 0.649418, 0.697987, -0.33456, 0.906767, 0.604333, -0.129366, 0.579445, -0.033479, -0.22433, -0.979529, -0.251153, 0.839734, 0.550506, -0.599678, 0.584326, 0.211245, -0.66845, 0.948298, -0.004521] out shape: (3, 3, 3, 2) out: [-0.096172, -0.063889, -0.130291, -0.243163, 0.149246, -0.235679, 0.277756, -0.214836, 0.083935, -0.010284, 0.183535, -0.272509, 0.440949, -0.04496, 0.220404, 0.311668, 0.138158, 0.041206, 0.130772, -0.133172, -0.123041, -0.266292, -0.056407, -0.361459, 0.222251, -0.1564, 0.031836, 0.019601, -0.100749, -0.053372, 0.271023, 0.210519, 0.115633, 0.549958, -0.307022, 0.282092, 0.372751, -0.256226, -0.027257, -0.132813, -0.149026, -0.236204, 0.248228, 0.07371, -0.130145, 0.181375, -0.252442, 0.039529, 0.000851, 0.47193, -0.12053, 0.318177, -0.209568, -0.00234]
MIT
notebooks/layers/pooling/AveragePooling3D.ipynb
GTDev87/tpt-hackathon
**[pooling.AveragePooling3D.2] input 4x5x2x3, pool_size=(2, 2, 2), strides=(2, 1, 1), padding='valid', data_format='channels_last'**
data_in_shape = (4, 5, 2, 3)
L = AveragePooling3D(pool_size=(2, 2, 2), strides=(2, 1, 1), padding='valid', data_format='channels_last')

layer_0 = Input(shape=data_in_shape)
layer_1 = L(layer_0)
model = Model(inputs=layer_0, outputs=layer_1)

# set weights to random (use seed for reproducibility)
np.random.seed(282)
data_in = 2 * np.random.random(data_in_shape) - 1
result = model.predict(np.array([data_in]))
data_out_shape = result[0].shape
data_in_formatted = format_decimal(data_in.ravel().tolist())
data_out_formatted = format_decimal(result[0].ravel().tolist())
print('')
print('in shape:', data_in_shape)
print('in:', data_in_formatted)
print('out shape:', data_out_shape)
print('out:', data_out_formatted)

DATA['pooling.AveragePooling3D.2'] = {
    'input': {'data': data_in_formatted, 'shape': data_in_shape},
    'expected': {'data': data_out_formatted, 'shape': data_out_shape}
}
in shape: (4, 5, 2, 3) in: [-0.263147, -0.216555, -0.75766, -0.396007, 0.85243, 0.98415, -0.230197, -0.979579, 0.117628, -0.66833, 0.714058, -0.907302, -0.574249, 0.299573, 0.101165, 0.655872, -0.104788, 0.242064, -0.409262, -0.124059, 0.105687, -0.969325, -0.167941, 0.382377, 0.710487, 0.793042, 0.180663, -0.80231, 0.684253, -0.516992, 0.471203, -0.152325, 0.509501, 0.613742, -0.877379, 0.755416, 0.427677, 0.931956, 0.827636, -0.860685, 0.562326, -0.716081, 0.028046, 0.594422, -0.862333, 0.336131, 0.713855, 0.386247, -0.986659, 0.242413, 0.753777, -0.159358, 0.166548, -0.437388, 0.291152, -0.775555, 0.796086, -0.592021, -0.251661, 0.187174, 0.899283, 0.431861, -0.685273, -0.085991, -0.629026, -0.478334, 0.714983, 0.53745, -0.310438, 0.973848, -0.675219, 0.422743, -0.992263, 0.374017, -0.687462, -0.190455, -0.560081, 0.22484, -0.079631, 0.815275, 0.338641, -0.538279, -0.10891, -0.929005, 0.514762, 0.322038, 0.702195, -0.697122, 0.925468, -0.274158, 0.148379, 0.333239, 0.63072, -0.652956, -0.356451, -0.71114, 0.111465, 0.31787, 0.242578, 0.8926, -0.60807, 0.218759, 0.42079, 0.71253, -0.082496, 0.272704, 0.277213, -0.099807, -0.899322, -0.175367, -0.642213, -0.661032, 0.730145, 0.37799, 0.935939, -0.448007, -0.320652, -0.352629, 0.258139, 0.30254] out shape: (2, 4, 1, 3) out: [-0.113218, 0.104366, 0.101661, -0.110717, 0.341478, -0.101372, -0.259851, 0.202503, 0.083949, -0.364662, 0.07088, 0.181423, 0.375201, -0.081043, -0.083798, 0.275459, 0.046964, -0.00891, -0.333436, 0.258103, -0.187439, -0.222164, 0.289848, -0.055583]
MIT
notebooks/layers/pooling/AveragePooling3D.ipynb
GTDev87/tpt-hackathon
**[pooling.AveragePooling3D.3] input 4x4x4x2, pool_size=(3, 3, 3), strides=None, padding='valid', data_format='channels_last'**
data_in_shape = (4, 4, 4, 2)
L = AveragePooling3D(pool_size=(3, 3, 3), strides=None, padding='valid', data_format='channels_last')

layer_0 = Input(shape=data_in_shape)
layer_1 = L(layer_0)
model = Model(inputs=layer_0, outputs=layer_1)

# set weights to random (use seed for reproducibility)
np.random.seed(283)
data_in = 2 * np.random.random(data_in_shape) - 1
result = model.predict(np.array([data_in]))
data_out_shape = result[0].shape
data_in_formatted = format_decimal(data_in.ravel().tolist())
data_out_formatted = format_decimal(result[0].ravel().tolist())
print('')
print('in shape:', data_in_shape)
print('in:', data_in_formatted)
print('out shape:', data_out_shape)
print('out:', data_out_formatted)

DATA['pooling.AveragePooling3D.3'] = {
    'input': {'data': data_in_formatted, 'shape': data_in_shape},
    'expected': {'data': data_out_formatted, 'shape': data_out_shape}
}
in shape: (4, 4, 4, 2) in: [0.19483, -0.346754, 0.281648, -0.656271, 0.588328, 0.864284, -0.661556, 0.344578, 0.534692, 0.187914, -0.172976, 0.100575, 0.287857, 0.151936, 0.679748, 0.137527, 0.726773, -0.503042, -0.902524, -0.895315, 0.870645, 0.792427, -0.102238, -0.748643, -0.048728, -0.025835, 0.358631, 0.804295, -0.300104, -0.99179, -0.699454, -0.943476, -0.448011, 0.628611, 0.060595, 0.716813, -0.33607, 0.549002, 0.810379, 0.074881, -0.689823, 0.17513, -0.975426, 0.961779, -0.030624, -0.914643, -0.735591, 0.031988, -0.554272, 0.253033, 0.73405, 0.426412, -0.361457, 0.787875, -0.266747, -0.166595, 0.922155, -0.04597, -0.465312, 0.157074, -0.201136, -0.004584, -0.158067, 0.244864, -0.495687, 0.416834, -0.583545, 0.654634, -0.318258, -0.709804, -0.393463, 0.589381, -0.900991, 0.266171, 0.955916, -0.6571, 0.990855, -0.078764, 0.609356, -0.526011, -0.902476, 0.040574, -0.045497, -0.110604, 0.035908, -0.91532, -0.170028, -0.02148, -0.994139, 0.020418, 0.989168, -0.802385, 0.353583, -0.981395, -0.959128, 0.785969, -0.325003, -0.541583, -0.929888, 0.40832, 0.565713, 0.449217, -0.21377, -0.491438, -0.352481, 0.469042, 0.272024, 0.101279, -0.70562, -0.296457, 0.210789, -0.049051, -0.002596, 0.630726, 0.023403, -0.216062, 0.510835, -0.446393, -0.075211, 0.4807, 0.581605, 0.705887, 0.741715, -0.448041, 0.88241, -0.866934, -0.341714, -0.245376] out shape: (1, 1, 1, 2) out: [-0.053909, 0.080977]
MIT
notebooks/layers/pooling/AveragePooling3D.ipynb
GTDev87/tpt-hackathon
**[pooling.AveragePooling3D.4] input 4x4x4x2, pool_size=(3, 3, 3), strides=(3, 3, 3), padding='valid', data_format='channels_last'**
data_in_shape = (4, 4, 4, 2)
L = AveragePooling3D(pool_size=(3, 3, 3), strides=(3, 3, 3), padding='valid', data_format='channels_last')

layer_0 = Input(shape=data_in_shape)
layer_1 = L(layer_0)
model = Model(inputs=layer_0, outputs=layer_1)

# set weights to random (use seed for reproducibility)
np.random.seed(284)
data_in = 2 * np.random.random(data_in_shape) - 1
result = model.predict(np.array([data_in]))
data_out_shape = result[0].shape
data_in_formatted = format_decimal(data_in.ravel().tolist())
data_out_formatted = format_decimal(result[0].ravel().tolist())
print('')
print('in shape:', data_in_shape)
print('in:', data_in_formatted)
print('out shape:', data_out_shape)
print('out:', data_out_formatted)

DATA['pooling.AveragePooling3D.4'] = {
    'input': {'data': data_in_formatted, 'shape': data_in_shape},
    'expected': {'data': data_out_formatted, 'shape': data_out_shape}
}
in shape: (4, 4, 4, 2) in: [0.691755, -0.79282, -0.953135, 0.756956, -0.736874, 0.171061, -0.801845, 0.588236, -0.884749, 0.06721, -0.585121, -0.546211, -0.605281, -0.998989, 0.309413, -0.260604, -0.123585, 0.168908, -0.179496, 0.657412, -0.973664, 0.146258, -0.851615, -0.320588, 0.375102, -0.048494, 0.822789, 0.063572, -0.956466, 0.083595, 0.121146, 0.789353, -0.815498, -0.056454, -0.472042, -0.423572, 0.460752, 0.784129, -0.964421, -0.02912, -0.194265, 0.17147, -0.336383, -0.785223, 0.978845, 0.88826, -0.498649, -0.958507, 0.055052, -0.991654, -0.027882, 0.079693, 0.901998, 0.036266, -0.73015, -0.472116, 0.651073, 0.821196, 0.562183, 0.42342, -0.236111, 0.661076, -0.983951, -0.116893, -0.179815, 0.375962, -0.018703, -0.242038, -0.561415, 0.322072, 0.468695, 0.768235, -0.354887, 0.528139, 0.796988, -0.976979, 0.279858, -0.790546, 0.485339, 0.693701, -0.130412, 0.211269, -0.346429, 0.06497, 0.932512, -0.675758, -0.636085, 0.065187, -0.720225, -0.060809, -0.783716, -0.1708, 0.256143, 0.365727, -0.458241, 0.515217, -0.269055, 0.378065, 0.066507, 0.207271, -0.303131, 0.632455, 0.147251, 0.35156, -0.852052, -0.382054, 0.42108, -0.350071, 0.092818, 0.516404, 0.448487, 0.503722, -0.86555, 0.747871, 0.320894, -0.163714, 0.107681, 0.562623, 0.757089, 0.85338, -0.875069, 0.324594, -0.093024, 0.016279, -0.507882, -0.549638, -0.913588, 0.078328] out shape: (1, 1, 1, 2) out: [-0.125255, -0.068526]
MIT
notebooks/layers/pooling/AveragePooling3D.ipynb
GTDev87/tpt-hackathon
**[pooling.AveragePooling3D.5] input 4x4x4x2, pool_size=(2, 2, 2), strides=None, padding='same', data_format='channels_last'**
data_in_shape = (4, 4, 4, 2)
L = AveragePooling3D(pool_size=(2, 2, 2), strides=None, padding='same', data_format='channels_last')

layer_0 = Input(shape=data_in_shape)
layer_1 = L(layer_0)
model = Model(inputs=layer_0, outputs=layer_1)

# set weights to random (use seed for reproducibility)
np.random.seed(285)
data_in = 2 * np.random.random(data_in_shape) - 1
result = model.predict(np.array([data_in]))
data_out_shape = result[0].shape
data_in_formatted = format_decimal(data_in.ravel().tolist())
data_out_formatted = format_decimal(result[0].ravel().tolist())
print('')
print('in shape:', data_in_shape)
print('in:', data_in_formatted)
print('out shape:', data_out_shape)
print('out:', data_out_formatted)

DATA['pooling.AveragePooling3D.5'] = {
    'input': {'data': data_in_formatted, 'shape': data_in_shape},
    'expected': {'data': data_out_formatted, 'shape': data_out_shape}
}
in shape: (4, 4, 4, 2) in: [-0.495196, -0.886872, 0.220815, 0.126844, 0.168234, -0.640849, 0.457897, -0.375014, 0.001134, -0.486501, -0.819617, -0.468351, 0.15859, 0.39238, -0.590545, -0.402922, 0.821619, -0.208255, -0.512219, -0.586151, -0.365648, -0.195611, -0.280978, -0.08818, -0.449229, 0.169082, 0.075074, -0.719751, 0.657827, -0.060862, -0.217533, 0.907503, 0.902317, 0.613945, 0.670047, -0.808346, 0.060215, -0.446612, -0.710328, -0.018744, 0.348018, -0.294409, 0.623986, -0.216504, 0.270099, -0.216285, -0.433193, -0.197968, -0.829926, -0.93864, -0.901724, -0.388869, -0.658339, -0.931401, -0.654674, -0.469503, 0.970661, 0.008063, -0.751014, 0.519043, 0.197895, 0.959095, 0.875405, 0.700615, 0.301314, -0.980157, 0.275373, -0.082646, 0.100727, -0.027273, -0.322366, 0.26563, 0.668139, 0.890289, 0.854229, -0.85773, -0.07833, -0.319645, -0.948873, 0.403526, 0.683097, 0.174958, 0.926944, -0.418256, -0.406667, -0.333808, 0.102223, -0.00576, 0.182281, 0.979655, 0.230246, 0.422968, -0.381217, 0.146697, 0.660828, 0.060741, -0.201812, 0.587619, 0.188211, -0.652713, -0.937225, -0.814998, 0.277993, -0.539363, -0.665425, 0.72739, 0.919326, 0.710163, -0.819091, -0.089805, -0.778517, -0.593048, 0.945303, -0.078936, 0.303422, 0.206755, -0.899923, -0.868598, 0.249905, -0.47891, 0.006871, -0.263386, -0.484493, -0.75917, 0.857292, 0.401094, -0.077826, -0.44546] out shape: (2, 2, 2, 2) out: [0.181438, -0.302524, -0.077379, -0.238252, -0.197095, -0.268185, -0.055756, 0.102707, 0.292419, 0.042777, -0.43821, -0.214372, 0.349209, 0.033074, 0.013077, -0.1905]
MIT
notebooks/layers/pooling/AveragePooling3D.ipynb
GTDev87/tpt-hackathon
**[pooling.AveragePooling3D.6] input 4x4x4x2, pool_size=(2, 2, 2), strides=(1, 1, 1), padding='same', data_format='channels_last'**
data_in_shape = (4, 4, 4, 2)
L = AveragePooling3D(pool_size=(2, 2, 2), strides=(1, 1, 1), padding='same', data_format='channels_last')

layer_0 = Input(shape=data_in_shape)
layer_1 = L(layer_0)
model = Model(inputs=layer_0, outputs=layer_1)

# set weights to random (use seed for reproducibility)
np.random.seed(286)
data_in = 2 * np.random.random(data_in_shape) - 1
result = model.predict(np.array([data_in]))
data_out_shape = result[0].shape
data_in_formatted = format_decimal(data_in.ravel().tolist())
data_out_formatted = format_decimal(result[0].ravel().tolist())
print('')
print('in shape:', data_in_shape)
print('in:', data_in_formatted)
print('out shape:', data_out_shape)
print('out:', data_out_formatted)

DATA['pooling.AveragePooling3D.6'] = {
    'input': {'data': data_in_formatted, 'shape': data_in_shape},
    'expected': {'data': data_out_formatted, 'shape': data_out_shape}
}
in shape: (4, 4, 4, 2) in: [-0.709952, -0.532913, -0.169956, -0.391538, 0.729034, -0.2004, -0.67324, -0.973672, 0.879975, -0.981827, -0.4828, -0.887985, 0.843364, 0.710745, -0.260613, 0.20082, 0.309563, 0.721671, -0.967848, -0.976471, -0.13058, 0.052684, 0.666494, -0.319759, -0.060338, 0.359151, -0.795562, 0.70488, 0.100816, 0.466479, 0.992415, 0.066527, -0.690663, -0.741365, -0.251801, -0.479328, 0.62187, 0.578729, 0.598481, 0.817115, -0.913801, -0.694569, 0.397726, -0.31274, 0.163147, 0.087004, -0.744957, -0.920201, 0.440377, -0.191648, -0.227724, -0.562736, -0.484598, -0.230876, 0.019055, 0.988723, 0.656988, 0.185623, -0.629304, -0.321252, 0.329452, 0.355461, 0.734458, 0.496983, 0.181439, 0.414232, 0.776873, 0.68191, -0.846744, -0.442164, -0.526272, 0.92696, -0.704629, -0.800248, 0.643923, 0.775996, -0.203863, -0.756864, -0.398058, -0.914275, 0.980404, 0.329099, -0.576086, 0.851052, -0.74133, -0.23673, -0.001628, 0.972916, -0.571033, 0.669151, -0.977945, -0.707472, 0.371069, -0.772292, -0.207482, -0.094619, -0.604913, 0.111706, -0.123427, 0.284132, 0.292284, -0.490954, -0.873365, 0.109881, -0.40172, 0.103223, 0.396366, -0.415444, 0.766823, -0.057373, 0.619422, 0.30151, -0.126582, 0.862041, -0.083425, -0.018503, 0.744106, -0.681409, 0.556506, -0.628066, -0.697587, -0.201239, 0.051677, -0.585768, 0.202332, -0.634928, -0.410351, 0.005911] out shape: (4, 4, 4, 2) out: [-0.242659, -0.627783, 0.231323, -0.111939, 0.159636, 0.037518, -0.270082, -0.218984, -0.070567, -0.485788, -0.111164, -0.265047, 0.008914, 0.071142, -0.080005, -0.012604, -0.159231, -0.010098, -0.350668, -0.063979, 0.278439, 0.234528, 0.603106, 0.308118, -0.207054, 0.232101, -0.248649, 0.301392, 0.539285, 0.346362, 0.863437, 0.281755, -0.070117, -0.144514, 0.162641, 0.016568, -0.167049, -0.077962, -0.267701, -0.0226, 0.005024, -0.075724, -0.128601, -0.048237, -0.299029, -0.126288, -0.281397, 0.031791, -0.11304, 0.031477, -0.367058, -0.203106, 0.002375, 0.184946, 0.136101, 0.591001, -0.380324, 
-0.043487, -0.226682, -0.361389, 0.306874, -0.003617, 0.263488, 0.201182, 0.020489, 0.144438, 0.212779, -0.052595, -0.146222, -0.16541, -0.294568, 0.106019, 0.016031, 0.210902, 0.118314, -0.067409, 0.167747, -0.250036, 0.194061, -0.066979, -0.250072, 0.149795, -0.1262, -0.348256, 0.064153, -0.258652, -0.015739, 0.064035, -0.548722, -0.206332, -0.088217, -0.675115, -0.011108, -0.373982, -0.308916, -0.044354, -0.183423, 0.020904, 0.333012, -0.16991, 0.201291, -0.034234, -0.126972, 0.205695, -0.05384, 0.132829, 0.455967, -0.293182, 0.671714, -0.266335, 0.587964, -0.163278, -0.213979, 0.014133, 0.228672, -0.480152, 0.273148, -0.484623, 0.073078, -0.311078, -0.322955, -0.393504, 0.127005, -0.610348, -0.10401, -0.314508, -0.410351, 0.005911]
MIT
notebooks/layers/pooling/AveragePooling3D.ipynb
GTDev87/tpt-hackathon
**[pooling.AveragePooling3D.7] input 4x5x4x2, pool_size=(2, 2, 2), strides=(1, 2, 1), padding='same', data_format='channels_last'**
data_in_shape = (4, 5, 4, 2)
L = AveragePooling3D(pool_size=(2, 2, 2), strides=(1, 2, 1), padding='same', data_format='channels_last')

layer_0 = Input(shape=data_in_shape)
layer_1 = L(layer_0)
model = Model(inputs=layer_0, outputs=layer_1)

# set weights to random (use seed for reproducibility)
np.random.seed(287)
data_in = 2 * np.random.random(data_in_shape) - 1
result = model.predict(np.array([data_in]))
data_out_shape = result[0].shape
data_in_formatted = format_decimal(data_in.ravel().tolist())
data_out_formatted = format_decimal(result[0].ravel().tolist())
print('')
print('in shape:', data_in_shape)
print('in:', data_in_formatted)
print('out shape:', data_out_shape)
print('out:', data_out_formatted)

DATA['pooling.AveragePooling3D.7'] = {
    'input': {'data': data_in_formatted, 'shape': data_in_shape},
    'expected': {'data': data_out_formatted, 'shape': data_out_shape}
}
in shape: (4, 5, 4, 2) in: [-0.71103, 0.421506, 0.752321, 0.542455, -0.557162, -0.963774, 0.910303, -0.933284, 0.67521, 0.588709, -0.782848, -0.108964, -0.767069, 0.338318, -0.660374, -0.967294, -0.501079, -0.917532, -0.087991, -0.160473, 0.520493, 0.612007, -0.955448, -0.809749, -0.627003, 0.494441, 0.985405, 0.99813, -0.278165, 0.090068, 0.803872, 0.287682, 0.162199, 0.1796, -0.630223, 0.044743, 0.9092, 0.023879, -0.403203, -0.005329, -0.29237, -0.510033, -0.190427, 0.149011, 0.873547, -0.58793, -0.302525, 0.102122, -0.804112, 0.965834, 0.302039, -0.806929, 0.627682, 0.876256, 0.176245, 0.051969, 0.005712, -0.877694, -0.776877, -0.360984, 0.172577, 0.953108, 0.755911, 0.973515, 0.745292, 0.765506, 0.119956, 0.378346, 0.425789, 0.048668, 0.363691, -0.499862, 0.315721, 0.243267, 0.333434, -0.001645, -0.007235, -0.463152, -0.002048, 0.862117, -0.575785, 0.594789, 0.068012, 0.165267, 0.081581, 0.128645, 0.559305, -0.494595, 0.10207, 0.278472, -0.815856, 0.817863, 0.101417, -0.432774, -0.36832, 0.682055, -0.852236, 0.756063, 0.741739, 0.403911, 0.363444, -0.853088, 0.429379, 0.95063, 0.19365, 0.707334, 0.883575, 0.037535, 0.735855, -0.597979, -0.328964, -0.63363, 0.533345, 0.628204, -0.831273, -0.475492, -0.120719, -0.049689, 0.474126, -0.534385, 0.898272, -0.060213, 0.134975, -0.81603, -0.09329, -0.56951, 0.744421, 0.561587, -0.094, 0.780616, -0.206093, 0.992174, 0.563806, 0.562612, -0.754387, -0.36159, -0.288989, 0.195871, -0.156575, 0.108674, 0.037465, 0.115865, -0.313897, -0.290763, -0.920174, -0.943574, 0.610507, -0.749795, -0.802955, 0.296183, -0.862404, 0.174227, 0.6721, 0.674769, -0.198663, 0.163696, -0.572753, 0.149323, 0.456671, 0.162098] out shape: (4, 3, 4, 2) out: [-0.131402, 0.155199, 0.03226, -0.070195, 0.037581, -0.260452, 0.030912, -0.436622, -0.017073, 0.039968, 0.135148, 0.319859, 0.22609, 0.20693, 0.242007, -0.012103, 0.045283, 0.116491, 0.151294, -0.099044, 0.124178, 0.104379, -0.202626, 0.428394, -0.275804, 0.206784, 0.130999, 0.038676, 0.218617, 
0.040718, 0.016176, 0.085388, 0.132601, 0.226252, 0.333257, 0.00119, 0.36471, 0.04267, 0.305004, 0.197663, 0.087807, 0.098584, -0.156448, -0.247495, 0.086031, -0.046277, 0.236039, 0.163866, -0.061051, 0.344117, -0.020681, 0.106031, 0.104317, 0.009554, 0.045255, 0.096864, 0.026437, 0.064502, 0.301632, -0.154837, -0.09276, -0.104819, -0.268972, 0.050116, 0.043877, 0.247794, -0.430852, -0.053041, 0.059331, -0.068163, 0.465398, -0.186144, 0.183288, 0.224137, 0.099849, 0.042311, 0.115137, 0.048274, -0.004983, 0.099998, -0.188808, -0.347206, -0.07789, -0.057268, -0.485448, 0.073878, -0.588151, -0.058268, 0.236719, 0.419233, -0.385708, 0.156509, -0.058041, 0.15571, 0.456671, 0.162098]
MIT
notebooks/layers/pooling/AveragePooling3D.ipynb
GTDev87/tpt-hackathon
**[pooling.AveragePooling3D.8] input 4x4x4x2, pool_size=(3, 3, 3), strides=None, padding='same', data_format='channels_last'**
data_in_shape = (4, 4, 4, 2)
L = AveragePooling3D(pool_size=(3, 3, 3), strides=None, padding='same', data_format='channels_last')

layer_0 = Input(shape=data_in_shape)
layer_1 = L(layer_0)
model = Model(inputs=layer_0, outputs=layer_1)

# set weights to random (use seed for reproducibility)
np.random.seed(288)
data_in = 2 * np.random.random(data_in_shape) - 1
result = model.predict(np.array([data_in]))
data_out_shape = result[0].shape
data_in_formatted = format_decimal(data_in.ravel().tolist())
data_out_formatted = format_decimal(result[0].ravel().tolist())
print('')
print('in shape:', data_in_shape)
print('in:', data_in_formatted)
print('out shape:', data_out_shape)
print('out:', data_out_formatted)

DATA['pooling.AveragePooling3D.8'] = {
    'input': {'data': data_in_formatted, 'shape': data_in_shape},
    'expected': {'data': data_out_formatted, 'shape': data_out_shape}
}
in shape: (4, 4, 4, 2) in: [0.106539, 0.430065, 0.625063, -0.956042, 0.681684, 0.345995, -0.589061, 0.186737, 0.535452, -0.125905, -0.396262, -0.44893, 0.39021, 0.253402, -0.238515, 0.337141, 0.178107, 0.244331, -0.93179, -0.081267, 0.895223, 0.820023, 0.365435, -0.738456, 0.893031, -0.787916, -0.518813, 0.661518, -0.464144, -0.639165, -0.252917, 0.784083, 0.577398, 0.769552, 0.036096, 0.847521, -0.171916, 0.07536, -0.830068, 0.734205, -0.437818, 0.295701, 0.252657, -0.859452, -0.425833, -0.650296, -0.584695, 0.163986, 0.43905, -0.521755, 0.620616, 0.066707, -0.101702, 0.941175, 0.479202, 0.624312, -0.372154, 0.625845, 0.980521, -0.834695, -0.40269, 0.784157, 0.814068, -0.485038, -0.150738, 0.682911, 0.406096, -0.405868, -0.337905, 0.803583, -0.764964, 0.96897, -0.057235, 0.403604, -0.605392, 0.389273, 0.235543, -0.095585, -0.860692, 0.937457, -0.928888, 0.702073, -0.18066, 0.033968, -0.082046, -0.237205, 0.922919, 0.064731, -0.026908, -0.865491, 0.881128, 0.265603, -0.132321, -0.701801, 0.490064, 0.718745, -0.884446, 0.538162, -0.086979, -0.734317, 0.089006, 0.349945, -0.415428, -0.621358, 0.892372, 0.090398, 0.883604, 0.772612, 0.589633, 0.187399, 0.807184, -0.128627, -0.534439, 0.258966, 0.141399, -0.777263, 0.911318, -0.359087, 0.789361, 0.470019, 0.836149, 0.415029, 0.920315, -0.916153, 0.645573, -0.446523, -0.899169, 0.78844] out shape: (2, 2, 2, 2) out: [0.162391, -0.005936, -0.221024, 0.180816, 0.161071, -0.078404, 0.166559, 0.261386, 0.04966, 0.217097, -0.082203, 0.300223, 0.138512, -0.110408, 0.330712, 0.037165]
MIT
notebooks/layers/pooling/AveragePooling3D.ipynb
GTDev87/tpt-hackathon
**[pooling.AveragePooling3D.9] input 4x4x4x2, pool_size=(3, 3, 3), strides=(3, 3, 3), padding='same', data_format='channels_last'**
data_in_shape = (4, 4, 4, 2)
L = AveragePooling3D(pool_size=(3, 3, 3), strides=(3, 3, 3), padding='same', data_format='channels_last')

layer_0 = Input(shape=data_in_shape)
layer_1 = L(layer_0)
model = Model(inputs=layer_0, outputs=layer_1)

# set weights to random (use seed for reproducibility)
np.random.seed(289)
data_in = 2 * np.random.random(data_in_shape) - 1
result = model.predict(np.array([data_in]))
data_out_shape = result[0].shape
data_in_formatted = format_decimal(data_in.ravel().tolist())
data_out_formatted = format_decimal(result[0].ravel().tolist())
print('')
print('in shape:', data_in_shape)
print('in:', data_in_formatted)
print('out shape:', data_out_shape)
print('out:', data_out_formatted)

DATA['pooling.AveragePooling3D.9'] = {
    'input': {'data': data_in_formatted, 'shape': data_in_shape},
    'expected': {'data': data_out_formatted, 'shape': data_out_shape}
}
in shape: (4, 4, 4, 2) in: [0.454263, 0.047178, -0.644362, 0.432654, 0.776147, -0.088086, -0.16527, -0.152361, -0.723283, 0.119471, -0.020663, 0.230897, 0.249349, -0.825224, 0.809245, 0.37136, 0.649976, 0.690981, -0.5766, 0.750394, -0.777363, -0.359006, 0.398419, -0.851015, -0.479232, -0.924962, -0.898893, 0.135445, 0.819369, -0.867218, 0.039715, 0.304805, -0.865872, -0.891635, 0.730554, 0.178083, 0.981329, 0.047786, -0.466968, -0.89441, -0.037018, -0.880158, 0.635061, 0.108217, 0.405675, 0.242025, 0.524396, -0.46013, -0.98454, 0.227442, -0.159924, -0.396205, -0.843265, 0.181395, -0.743803, 0.445469, 0.05215, 0.837067, -0.756402, -0.959109, -0.580594, -0.677936, -0.929683, -0.165592, -0.870784, 0.91887, 0.542361, 0.46359, -0.521332, 0.778263, 0.662447, 0.692057, 0.224535, -0.087731, 0.904644, 0.207457, -0.564079, -0.389642, 0.590403, -0.861828, -0.280471, -0.593786, -0.542645, 0.788946, -0.808773, -0.334536, -0.973711, 0.68675, 0.383992, -0.38838, 0.278601, -0.89188, -0.582918, -0.190511, -0.493528, 0.635115, -0.375152, 0.586508, -0.986557, -0.449484, 0.216757, 0.746825, -0.144795, 0.448144, -0.828083, 0.224525, -0.958965, -0.566069, -0.850394, -0.261458, -0.589888, 0.75667, 0.531888, 0.146437, -0.877887, -0.575355, 0.06156, -0.714865, 0.710365, -0.439259, -0.084566, -0.854224, 0.467254, 0.59934, 0.527409, 0.791222, -0.66992, 0.644258] out shape: (2, 2, 2, 2) out: [-0.058915, -0.081912, 0.389238, -0.21988, -0.394183, 0.045132, -0.327151, -0.248637, -0.2935, 0.162208, -0.15011, 0.238629, -0.015479, -0.221113, -0.27869, 0.134772]
MIT
notebooks/layers/pooling/AveragePooling3D.ipynb
GTDev87/tpt-hackathon
**[pooling.AveragePooling3D.10] input 2x3x3x4, pool_size=(3, 3, 3), strides=(2, 2, 2), padding='valid', data_format='channels_first'**
data_in_shape = (2, 3, 3, 4)
L = AveragePooling3D(pool_size=(3, 3, 3), strides=(2, 2, 2), padding='valid', data_format='channels_first')

layer_0 = Input(shape=data_in_shape)
layer_1 = L(layer_0)
model = Model(inputs=layer_0, outputs=layer_1)

# set weights to random (use seed for reproducibility)
np.random.seed(290)
data_in = 2 * np.random.random(data_in_shape) - 1
result = model.predict(np.array([data_in]))
data_out_shape = result[0].shape
data_in_formatted = format_decimal(data_in.ravel().tolist())
data_out_formatted = format_decimal(result[0].ravel().tolist())
print('')
print('in shape:', data_in_shape)
print('in:', data_in_formatted)
print('out shape:', data_out_shape)
print('out:', data_out_formatted)

DATA['pooling.AveragePooling3D.10'] = {
    'input': {'data': data_in_formatted, 'shape': data_in_shape},
    'expected': {'data': data_out_formatted, 'shape': data_out_shape}
}
in shape: (2, 3, 3, 4) in: [-0.453175, -0.475078, 0.486234, -0.949643, -0.349099, -0.837108, 0.933439, 0.167853, -0.995191, 0.459466, -0.788337, -0.120985, 0.06215, 0.138625, -0.102201, 0.976605, -0.591051, -0.592066, 0.469334, -0.435067, 0.621416, 0.817698, 0.790015, 0.485862, 0.469679, -0.611443, 0.582845, -0.503885, -0.379174, -0.451035, 0.052289, 0.131836, -0.609312, -0.828722, -0.422428, -0.648754, -0.339801, -0.758017, 0.754244, -0.544823, 0.691656, 0.076848, -0.32539, 0.306448, -0.662415, 0.334329, 0.030666, -0.414111, -0.757096, -0.20427, -0.893088, -0.681919, -0.619269, -0.640749, 0.867436, 0.971453, -0.42039, -0.574905, -0.34642, 0.588678, -0.247265, 0.436084, 0.220126, 0.114202, 0.613623, 0.401452, -0.270262, -0.591146, -0.872383, 0.818368, 0.336808, 0.338197] out shape: (2, 1, 1, 1) out: [-0.096379, -0.08704]
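The reported output shape can be sanity-checked with the standard `'valid'`-padding pooling arithmetic. This is a sketch of that formula, not code from the notebook itself:

```python
def valid_pool_out_dim(n, pool, stride):
    # 'valid' padding keeps only windows that fit entirely inside the input
    return (n - pool) // stride + 1

# channels_first input (2, 3, 3, 4): 2 channels, spatial dims (3, 3, 4)
spatial_out = tuple(valid_pool_out_dim(n, pool=3, stride=2) for n in (3, 3, 4))
out_shape = (2,) + spatial_out
print(out_shape)  # (2, 1, 1, 1), matching the reported out shape
```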
**[pooling.AveragePooling3D.11] input 2x3x3x4, pool_size=(3, 3, 3), strides=(1, 1, 1), padding='same', data_format='channels_first'**
data_in_shape = (2, 3, 3, 4)
L = AveragePooling3D(pool_size=(3, 3, 3), strides=(1, 1, 1), padding='same', data_format='channels_first')

layer_0 = Input(shape=data_in_shape)
layer_1 = L(layer_0)
model = Model(inputs=layer_0, outputs=layer_1)

# set weights to random (use seed for reproducibility)
np.random.seed(291)
data_in = 2 * np.random.random(data_in_shape) - 1
result = model.predict(np.array([data_in]))
data_out_shape = result[0].shape
data_in_formatted = format_decimal(data_in.ravel().tolist())
data_out_formatted = format_decimal(result[0].ravel().tolist())

print('')
print('in shape:', data_in_shape)
print('in:', data_in_formatted)
print('out shape:', data_out_shape)
print('out:', data_out_formatted)

DATA['pooling.AveragePooling3D.11'] = {
    'input': {'data': data_in_formatted, 'shape': data_in_shape},
    'expected': {'data': data_out_formatted, 'shape': data_out_shape}
}
in shape: (2, 3, 3, 4) in: [-0.803361, 0.348731, 0.30124, -0.168638, 0.516406, -0.258765, 0.297839, 0.993235, 0.958465, -0.273175, -0.704992, 0.261477, -0.301255, 0.263104, 0.678631, -0.644936, -0.029034, -0.320266, 0.307733, -0.479016, 0.608177, 0.034951, 0.456908, -0.929353, 0.594982, -0.243058, -0.524918, 0.455339, 0.034216, 0.356824, 0.63906, -0.259773, -0.084724, 0.248472, -0.608134, 0.0077, 0.400591, -0.960703, -0.247926, -0.774509, 0.496174, -0.319044, -0.324046, -0.616632, -0.322142, -0.472846, 0.171825, -0.030013, 0.992861, -0.645264, 0.524886, 0.673229, 0.883122, 0.25346, -0.706988, -0.654436, 0.918349, -0.139113, 0.742737, 0.338472, -0.812719, 0.860081, 0.003489, 0.667897, 0.362284, -0.283972, 0.995162, 0.67962, -0.700244, -0.137142, 0.045695, -0.450433] out shape: (2, 3, 3, 4) out: [-0.073055, 0.083417, 0.109908, 0.160761, 0.061998, 0.11563, 0.00915, 0.030844, 0.154595, 0.132854, -0.051119, 0.025479, 0.01321, 0.103228, 0.096798, 0.132983, 0.091705, 0.092372, 0.008749, 0.004411, 0.149296, 0.121109, -0.012738, -0.001443, 0.044439, 0.121335, 0.01906, 0.021515, 0.096866, 0.117315, -0.031152, -0.075063, 0.106077, 0.137015, -0.045408, -0.108109, 0.13765, 0.028927, -0.316498, -0.265803, 0.090454, 0.069218, -0.177051, -0.075283, 0.162245, 0.098457, -0.146385, -0.134885, 0.102239, 0.081747, -0.04865, 0.018312, 0.020763, 0.058465, -0.029871, 0.057668, 0.044907, 0.081293, -0.050427, 0.015913, 0.201232, 0.2022, 0.197264, 0.272857, 0.129309, 0.175371, 0.153743, 0.238277, 0.144593, 0.186112, 0.056922, 0.123728]
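With `'same'` padding the output spatial size depends only on the stride: `out = ceil(n / stride)`. A quick sketch of that arithmetic (an assumption about the standard formula, not the notebook's own code):

```python
import math

def same_pool_out_dim(n, stride):
    # 'same' padding pads the input so every stride position gets a window
    return math.ceil(n / stride)

# channels_first input (2, 3, 3, 4) with strides=(1, 1, 1):
spatial_out = tuple(same_pool_out_dim(n, stride=1) for n in (3, 3, 4))
print((2,) + spatial_out)  # (2, 3, 3, 4) -- same spatial shape as the input
```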
**[pooling.AveragePooling3D.12] input 3x4x4x3, pool_size=(2, 2, 2), strides=None, padding='valid', data_format='channels_first'**
data_in_shape = (3, 4, 4, 3)
L = AveragePooling3D(pool_size=(2, 2, 2), strides=None, padding='valid', data_format='channels_first')

layer_0 = Input(shape=data_in_shape)
layer_1 = L(layer_0)
model = Model(inputs=layer_0, outputs=layer_1)

# set weights to random (use seed for reproducibility)
np.random.seed(292)
data_in = 2 * np.random.random(data_in_shape) - 1
result = model.predict(np.array([data_in]))
data_out_shape = result[0].shape
data_in_formatted = format_decimal(data_in.ravel().tolist())
data_out_formatted = format_decimal(result[0].ravel().tolist())

print('')
print('in shape:', data_in_shape)
print('in:', data_in_formatted)
print('out shape:', data_out_shape)
print('out:', data_out_formatted)

DATA['pooling.AveragePooling3D.12'] = {
    'input': {'data': data_in_formatted, 'shape': data_in_shape},
    'expected': {'data': data_out_formatted, 'shape': data_out_shape}
}
in shape: (3, 4, 4, 3) in: [-0.497409, -0.250345, 0.196124, -0.044334, -0.324906, 0.560065, 0.220435, -0.167776, -0.923771, 0.77337, -0.862909, -0.584756, -0.70451, 0.870272, 0.841773, -0.312016, 0.599915, 0.073955, 0.944336, -0.4175, 0.865698, 0.609184, 0.033839, -0.72494, -0.239473, 0.514968, -0.318523, -0.244443, 0.275468, -0.85993, -0.262732, 0.026767, -0.937574, 0.872647, 0.540013, 0.055422, 0.322167, 0.972206, 0.92596, -0.82368, -0.63508, 0.671616, -0.678809, 0.202761, -0.260164, -0.241878, 0.188534, -0.47291, -0.077436, -0.016304, 0.548747, -0.236224, -0.780147, -0.013071, -0.67362, -0.807763, -0.351361, 0.533701, 0.274553, -0.933379, -0.49029, 0.928012, -0.719924, 0.453519, 0.173223, 0.030778, 0.12229, 0.547074, -0.860491, 0.206434, 0.248515, -0.189106, -0.393127, -0.152128, -0.822508, -0.361768, -0.702917, 0.998304, -0.011396, -0.644766, -0.150506, -0.153633, -0.772981, -0.470261, -0.056372, 0.082635, 0.017418, 0.26302, 0.730468, 0.268813, -0.163174, 0.332229, -0.698119, -0.397122, -0.426552, -0.931893, -0.6652, -0.986456, -0.062208, -0.90263, -0.12278, -0.277462, 0.072233, 0.466157, -0.917268, 0.053668, -0.45609, 0.072386, 0.376642, 0.133363, -0.799663, -0.984724, 0.337956, -0.088779, -0.04311, 0.520989, -0.611655, -0.456082, -0.662569, 0.09705, 0.256941, -0.987104, -0.939188, -0.296892, 0.123336, 0.710366, -0.675095, -0.037022, -0.616184, -0.925359, -0.734003, -0.629605, 0.071318, 0.6548, -0.019787, -0.435663, -0.412123, 0.967997, -0.80066, -0.337085, 0.471011, -0.945095, -0.74401, 0.924175] out shape: (3, 2, 2, 1) out: [-0.082917, 0.141622, 0.017767, 0.080913, -0.005706, 0.056398, -0.073774, -0.279674, -0.351729, -0.0631, -0.128173, -0.649791]
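When `strides=None`, Keras falls back to the pool size, so this case averages non-overlapping 2x2x2 blocks. The shape arithmetic, as a sketch:

```python
def valid_pool_out_dim(n, pool, stride=None):
    if stride is None:  # strides=None means stride = pool_size
        stride = pool
    return (n - pool) // stride + 1

# channels_first input (3, 4, 4, 3): 3 channels, spatial dims (4, 4, 3)
spatial_out = tuple(valid_pool_out_dim(n, pool=2) for n in (4, 4, 3))
out_shape = (3,) + spatial_out
print(out_shape)  # (3, 2, 2, 1), matching the reported out shape
```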
export for Keras.js tests
print(json.dumps(DATA))
{"pooling.AveragePooling3D.0": {"input": {"data": [-0.453175, -0.475078, 0.486234, -0.949643, -0.349099, -0.837108, 0.933439, 0.167853, -0.995191, 0.459466, -0.788337, -0.120985, 0.06215, 0.138625, -0.102201, 0.976605, -0.591051, -0.592066, 0.469334, -0.435067, 0.621416, 0.817698, 0.790015, 0.485862, 0.469679, -0.611443, 0.582845, -0.503885, -0.379174, -0.451035, 0.052289, 0.131836, -0.609312, -0.828722, -0.422428, -0.648754, -0.339801, -0.758017, 0.754244, -0.544823, 0.691656, 0.076848, -0.32539, 0.306448, -0.662415, 0.334329, 0.030666, -0.414111, -0.757096, -0.20427, -0.893088, -0.681919, -0.619269, -0.640749, 0.867436, 0.971453, -0.42039, -0.574905, -0.34642, 0.588678, -0.247265, 0.436084, 0.220126, 0.114202, 0.613623, 0.401452, -0.270262, -0.591146, -0.872383, 0.818368, 0.336808, 0.338197, -0.275646, 0.375308, -0.928722, -0.836727, -0.504007, -0.503397, -0.636099, 0.948482, -0.639661, -0.026878, -0.122643, -0.634018, -0.247016, 0.517246, -0.398639, 0.752174, -0.014633, -0.170534, -0.463453, -0.289716, 0.837207, 0.769962, -0.401357, 0.076406, 0.270433, -0.036538, -0.05766, 0.625256, 0.626847, -0.27321, -0.217219, -0.775553, -0.182939, 0.327385, -0.976376, -0.337729, -0.178467, -0.1545, -0.334259, -0.537949, 0.499229, 0.775269, -0.657598, 0.921864, 0.376821, 0.420375, -0.937835, -0.176425, -0.516753, 0.737286, 0.867807, -0.515893, 0.710035, 0.680377, -0.350561, -0.28645], "shape": [4, 4, 4, 2]}, "expected": {"data": [-0.301993, -0.272552, 0.040873, -0.117081, -0.185773, -0.37686, 0.163197, 0.233169, -0.225944, -0.009092, -0.222347, -0.017445, -0.130963, 0.099673, -0.051418, 0.344208], "shape": [2, 2, 2, 2]}}, "pooling.AveragePooling3D.1": {"input": {"data": [-0.803361, 0.348731, 0.30124, -0.168638, 0.516406, -0.258765, 0.297839, 0.993235, 0.958465, -0.273175, -0.704992, 0.261477, -0.301255, 0.263104, 0.678631, -0.644936, -0.029034, -0.320266, 0.307733, -0.479016, 0.608177, 0.034951, 0.456908, -0.929353, 0.594982, -0.243058, -0.524918, 0.455339, 0.034216, 
0.356824, 0.63906, -0.259773, -0.084724, 0.248472, -0.608134, 0.0077, 0.400591, -0.960703, -0.247926, -0.774509, 0.496174, -0.319044, -0.324046, -0.616632, -0.322142, -0.472846, 0.171825, -0.030013, 0.992861, -0.645264, 0.524886, 0.673229, 0.883122, 0.25346, -0.706988, -0.654436, 0.918349, -0.139113, 0.742737, 0.338472, -0.812719, 0.860081, 0.003489, 0.667897, 0.362284, -0.283972, 0.995162, 0.67962, -0.700244, -0.137142, 0.045695, -0.450433, 0.929977, 0.157542, -0.720517, -0.939063, 0.295004, 0.308728, -0.094057, -0.374756, -0.400976, -0.539654, 0.27965, 0.977688, -0.361264, -0.027757, -0.67149, 0.57064, -0.861888, 0.616985, -0.027436, 0.40181, -0.30391, 0.92268, -0.486416, -0.335828, 0.138558, -0.445691, 0.156253, -0.967633, 0.127471, -0.783301, -0.691353, -0.76759, 0.618771, -0.377474, 0.50152, 0.126867, -0.872708, 0.649418, 0.697987, -0.33456, 0.906767, 0.604333, -0.129366, 0.579445, -0.033479, -0.22433, -0.979529, -0.251153, 0.839734, 0.550506, -0.599678, 0.584326, 0.211245, -0.66845, 0.948298, -0.004521], "shape": [4, 4, 4, 2]}, "expected": {"data": [-0.096172, -0.063889, -0.130291, -0.243163, 0.149246, -0.235679, 0.277756, -0.214836, 0.083935, -0.010284, 0.183535, -0.272509, 0.440949, -0.04496, 0.220404, 0.311668, 0.138158, 0.041206, 0.130772, -0.133172, -0.123041, -0.266292, -0.056407, -0.361459, 0.222251, -0.1564, 0.031836, 0.019601, -0.100749, -0.053372, 0.271023, 0.210519, 0.115633, 0.549958, -0.307022, 0.282092, 0.372751, -0.256226, -0.027257, -0.132813, -0.149026, -0.236204, 0.248228, 0.07371, -0.130145, 0.181375, -0.252442, 0.039529, 0.000851, 0.47193, -0.12053, 0.318177, -0.209568, -0.00234], "shape": [3, 3, 3, 2]}}, "pooling.AveragePooling3D.2": {"input": {"data": [-0.263147, -0.216555, -0.75766, -0.396007, 0.85243, 0.98415, -0.230197, -0.979579, 0.117628, -0.66833, 0.714058, -0.907302, -0.574249, 0.299573, 0.101165, 0.655872, -0.104788, 0.242064, -0.409262, -0.124059, 0.105687, -0.969325, -0.167941, 0.382377, 0.710487, 0.793042, 0.180663, -0.80231, 
0.684253, -0.516992, 0.471203, -0.152325, 0.509501, 0.613742, -0.877379, 0.755416, 0.427677, 0.931956, 0.827636, -0.860685, 0.562326, -0.716081, 0.028046, 0.594422, -0.862333, 0.336131, 0.713855, 0.386247, -0.986659, 0.242413, 0.753777, -0.159358, 0.166548, -0.437388, 0.291152, -0.775555, 0.796086, -0.592021, -0.251661, 0.187174, 0.899283, 0.431861, -0.685273, -0.085991, -0.629026, -0.478334, 0.714983, 0.53745, -0.310438, 0.973848, -0.675219, 0.422743, -0.992263, 0.374017, -0.687462, -0.190455, -0.560081, 0.22484, -0.079631, 0.815275, 0.338641, -0.538279, -0.10891, -0.929005, 0.514762, 0.322038, 0.702195, -0.697122, 0.925468, -0.274158, 0.148379, 0.333239, 0.63072, -0.652956, -0.356451, -0.71114, 0.111465, 0.31787, 0.242578, 0.8926, -0.60807, 0.218759, 0.42079, 0.71253, -0.082496, 0.272704, 0.277213, -0.099807, -0.899322, -0.175367, -0.642213, -0.661032, 0.730145, 0.37799, 0.935939, -0.448007, -0.320652, -0.352629, 0.258139, 0.30254], "shape": [4, 5, 2, 3]}, "expected": {"data": [-0.113218, 0.104366, 0.101661, -0.110717, 0.341478, -0.101372, -0.259851, 0.202503, 0.083949, -0.364662, 0.07088, 0.181423, 0.375201, -0.081043, -0.083798, 0.275459, 0.046964, -0.00891, -0.333436, 0.258103, -0.187439, -0.222164, 0.289848, -0.055583], "shape": [2, 4, 1, 3]}}, "pooling.AveragePooling3D.3": {"input": {"data": [0.19483, -0.346754, 0.281648, -0.656271, 0.588328, 0.864284, -0.661556, 0.344578, 0.534692, 0.187914, -0.172976, 0.100575, 0.287857, 0.151936, 0.679748, 0.137527, 0.726773, -0.503042, -0.902524, -0.895315, 0.870645, 0.792427, -0.102238, -0.748643, -0.048728, -0.025835, 0.358631, 0.804295, -0.300104, -0.99179, -0.699454, -0.943476, -0.448011, 0.628611, 0.060595, 0.716813, -0.33607, 0.549002, 0.810379, 0.074881, -0.689823, 0.17513, -0.975426, 0.961779, -0.030624, -0.914643, -0.735591, 0.031988, -0.554272, 0.253033, 0.73405, 0.426412, -0.361457, 0.787875, -0.266747, -0.166595, 0.922155, -0.04597, -0.465312, 0.157074, -0.201136, -0.004584, -0.158067, 0.244864, -0.495687, 
0.416834, -0.583545, 0.654634, -0.318258, -0.709804, -0.393463, 0.589381, -0.900991, 0.266171, 0.955916, -0.6571, 0.990855, -0.078764, 0.609356, -0.526011, -0.902476, 0.040574, -0.045497, -0.110604, 0.035908, -0.91532, -0.170028, -0.02148, -0.994139, 0.020418, 0.989168, -0.802385, 0.353583, -0.981395, -0.959128, 0.785969, -0.325003, -0.541583, -0.929888, 0.40832, 0.565713, 0.449217, -0.21377, -0.491438, -0.352481, 0.469042, 0.272024, 0.101279, -0.70562, -0.296457, 0.210789, -0.049051, -0.002596, 0.630726, 0.023403, -0.216062, 0.510835, -0.446393, -0.075211, 0.4807, 0.581605, 0.705887, 0.741715, -0.448041, 0.88241, -0.866934, -0.341714, -0.245376], "shape": [4, 4, 4, 2]}, "expected": {"data": [-0.053909, 0.080977], "shape": [1, 1, 1, 2]}}, "pooling.AveragePooling3D.4": {"input": {"data": [0.691755, -0.79282, -0.953135, 0.756956, -0.736874, 0.171061, -0.801845, 0.588236, -0.884749, 0.06721, -0.585121, -0.546211, -0.605281, -0.998989, 0.309413, -0.260604, -0.123585, 0.168908, -0.179496, 0.657412, -0.973664, 0.146258, -0.851615, -0.320588, 0.375102, -0.048494, 0.822789, 0.063572, -0.956466, 0.083595, 0.121146, 0.789353, -0.815498, -0.056454, -0.472042, -0.423572, 0.460752, 0.784129, -0.964421, -0.02912, -0.194265, 0.17147, -0.336383, -0.785223, 0.978845, 0.88826, -0.498649, -0.958507, 0.055052, -0.991654, -0.027882, 0.079693, 0.901998, 0.036266, -0.73015, -0.472116, 0.651073, 0.821196, 0.562183, 0.42342, -0.236111, 0.661076, -0.983951, -0.116893, -0.179815, 0.375962, -0.018703, -0.242038, -0.561415, 0.322072, 0.468695, 0.768235, -0.354887, 0.528139, 0.796988, -0.976979, 0.279858, -0.790546, 0.485339, 0.693701, -0.130412, 0.211269, -0.346429, 0.06497, 0.932512, -0.675758, -0.636085, 0.065187, -0.720225, -0.060809, -0.783716, -0.1708, 0.256143, 0.365727, -0.458241, 0.515217, -0.269055, 0.378065, 0.066507, 0.207271, -0.303131, 0.632455, 0.147251, 0.35156, -0.852052, -0.382054, 0.42108, -0.350071, 0.092818, 0.516404, 0.448487, 0.503722, -0.86555, 0.747871, 0.320894, 
-0.163714, 0.107681, 0.562623, 0.757089, 0.85338, -0.875069, 0.324594, -0.093024, 0.016279, -0.507882, -0.549638, -0.913588, 0.078328], "shape": [4, 4, 4, 2]}, "expected": {"data": [-0.125255, -0.068526], "shape": [1, 1, 1, 2]}}, "pooling.AveragePooling3D.5": {"input": {"data": [-0.495196, -0.886872, 0.220815, 0.126844, 0.168234, -0.640849, 0.457897, -0.375014, 0.001134, -0.486501, -0.819617, -0.468351, 0.15859, 0.39238, -0.590545, -0.402922, 0.821619, -0.208255, -0.512219, -0.586151, -0.365648, -0.195611, -0.280978, -0.08818, -0.449229, 0.169082, 0.075074, -0.719751, 0.657827, -0.060862, -0.217533, 0.907503, 0.902317, 0.613945, 0.670047, -0.808346, 0.060215, -0.446612, -0.710328, -0.018744, 0.348018, -0.294409, 0.623986, -0.216504, 0.270099, -0.216285, -0.433193, -0.197968, -0.829926, -0.93864, -0.901724, -0.388869, -0.658339, -0.931401, -0.654674, -0.469503, 0.970661, 0.008063, -0.751014, 0.519043, 0.197895, 0.959095, 0.875405, 0.700615, 0.301314, -0.980157, 0.275373, -0.082646, 0.100727, -0.027273, -0.322366, 0.26563, 0.668139, 0.890289, 0.854229, -0.85773, -0.07833, -0.319645, -0.948873, 0.403526, 0.683097, 0.174958, 0.926944, -0.418256, -0.406667, -0.333808, 0.102223, -0.00576, 0.182281, 0.979655, 0.230246, 0.422968, -0.381217, 0.146697, 0.660828, 0.060741, -0.201812, 0.587619, 0.188211, -0.652713, -0.937225, -0.814998, 0.277993, -0.539363, -0.665425, 0.72739, 0.919326, 0.710163, -0.819091, -0.089805, -0.778517, -0.593048, 0.945303, -0.078936, 0.303422, 0.206755, -0.899923, -0.868598, 0.249905, -0.47891, 0.006871, -0.263386, -0.484493, -0.75917, 0.857292, 0.401094, -0.077826, -0.44546], "shape": [4, 4, 4, 2]}, "expected": {"data": [0.181438, -0.302524, -0.077379, -0.238252, -0.197095, -0.268185, -0.055756, 0.102707, 0.292419, 0.042777, -0.43821, -0.214372, 0.349209, 0.033074, 0.013077, -0.1905], "shape": [2, 2, 2, 2]}}, "pooling.AveragePooling3D.6": {"input": {"data": [-0.709952, -0.532913, -0.169956, -0.391538, 0.729034, -0.2004, -0.67324, -0.973672, 
0.879975, -0.981827, -0.4828, -0.887985, 0.843364, 0.710745, -0.260613, 0.20082, 0.309563, 0.721671, -0.967848, -0.976471, -0.13058, 0.052684, 0.666494, -0.319759, -0.060338, 0.359151, -0.795562, 0.70488, 0.100816, 0.466479, 0.992415, 0.066527, -0.690663, -0.741365, -0.251801, -0.479328, 0.62187, 0.578729, 0.598481, 0.817115, -0.913801, -0.694569, 0.397726, -0.31274, 0.163147, 0.087004, -0.744957, -0.920201, 0.440377, -0.191648, -0.227724, -0.562736, -0.484598, -0.230876, 0.019055, 0.988723, 0.656988, 0.185623, -0.629304, -0.321252, 0.329452, 0.355461, 0.734458, 0.496983, 0.181439, 0.414232, 0.776873, 0.68191, -0.846744, -0.442164, -0.526272, 0.92696, -0.704629, -0.800248, 0.643923, 0.775996, -0.203863, -0.756864, -0.398058, -0.914275, 0.980404, 0.329099, -0.576086, 0.851052, -0.74133, -0.23673, -0.001628, 0.972916, -0.571033, 0.669151, -0.977945, -0.707472, 0.371069, -0.772292, -0.207482, -0.094619, -0.604913, 0.111706, -0.123427, 0.284132, 0.292284, -0.490954, -0.873365, 0.109881, -0.40172, 0.103223, 0.396366, -0.415444, 0.766823, -0.057373, 0.619422, 0.30151, -0.126582, 0.862041, -0.083425, -0.018503, 0.744106, -0.681409, 0.556506, -0.628066, -0.697587, -0.201239, 0.051677, -0.585768, 0.202332, -0.634928, -0.410351, 0.005911], "shape": [4, 4, 4, 2]}, "expected": {"data": [-0.242659, -0.627783, 0.231323, -0.111939, 0.159636, 0.037518, -0.270082, -0.218984, -0.070567, -0.485788, -0.111164, -0.265047, 0.008914, 0.071142, -0.080005, -0.012604, -0.159231, -0.010098, -0.350668, -0.063979, 0.278439, 0.234528, 0.603106, 0.308118, -0.207054, 0.232101, -0.248649, 0.301392, 0.539285, 0.346362, 0.863437, 0.281755, -0.070117, -0.144514, 0.162641, 0.016568, -0.167049, -0.077962, -0.267701, -0.0226, 0.005024, -0.075724, -0.128601, -0.048237, -0.299029, -0.126288, -0.281397, 0.031791, -0.11304, 0.031477, -0.367058, -0.203106, 0.002375, 0.184946, 0.136101, 0.591001, -0.380324, -0.043487, -0.226682, -0.361389, 0.306874, -0.003617, 0.263488, 0.201182, 0.020489, 0.144438, 0.212779, 
-0.052595, -0.146222, -0.16541, -0.294568, 0.106019, 0.016031, 0.210902, 0.118314, -0.067409, 0.167747, -0.250036, 0.194061, -0.066979, -0.250072, 0.149795, -0.1262, -0.348256, 0.064153, -0.258652, -0.015739, 0.064035, -0.548722, -0.206332, -0.088217, -0.675115, -0.011108, -0.373982, -0.308916, -0.044354, -0.183423, 0.020904, 0.333012, -0.16991, 0.201291, -0.034234, -0.126972, 0.205695, -0.05384, 0.132829, 0.455967, -0.293182, 0.671714, -0.266335, 0.587964, -0.163278, -0.213979, 0.014133, 0.228672, -0.480152, 0.273148, -0.484623, 0.073078, -0.311078, -0.322955, -0.393504, 0.127005, -0.610348, -0.10401, -0.314508, -0.410351, 0.005911], "shape": [4, 4, 4, 2]}}, "pooling.AveragePooling3D.7": {"input": {"data": [-0.71103, 0.421506, 0.752321, 0.542455, -0.557162, -0.963774, 0.910303, -0.933284, 0.67521, 0.588709, -0.782848, -0.108964, -0.767069, 0.338318, -0.660374, -0.967294, -0.501079, -0.917532, -0.087991, -0.160473, 0.520493, 0.612007, -0.955448, -0.809749, -0.627003, 0.494441, 0.985405, 0.99813, -0.278165, 0.090068, 0.803872, 0.287682, 0.162199, 0.1796, -0.630223, 0.044743, 0.9092, 0.023879, -0.403203, -0.005329, -0.29237, -0.510033, -0.190427, 0.149011, 0.873547, -0.58793, -0.302525, 0.102122, -0.804112, 0.965834, 0.302039, -0.806929, 0.627682, 0.876256, 0.176245, 0.051969, 0.005712, -0.877694, -0.776877, -0.360984, 0.172577, 0.953108, 0.755911, 0.973515, 0.745292, 0.765506, 0.119956, 0.378346, 0.425789, 0.048668, 0.363691, -0.499862, 0.315721, 0.243267, 0.333434, -0.001645, -0.007235, -0.463152, -0.002048, 0.862117, -0.575785, 0.594789, 0.068012, 0.165267, 0.081581, 0.128645, 0.559305, -0.494595, 0.10207, 0.278472, -0.815856, 0.817863, 0.101417, -0.432774, -0.36832, 0.682055, -0.852236, 0.756063, 0.741739, 0.403911, 0.363444, -0.853088, 0.429379, 0.95063, 0.19365, 0.707334, 0.883575, 0.037535, 0.735855, -0.597979, -0.328964, -0.63363, 0.533345, 0.628204, -0.831273, -0.475492, -0.120719, -0.049689, 0.474126, -0.534385, 0.898272, -0.060213, 0.134975, -0.81603, 
-0.09329, -0.56951, 0.744421, 0.561587, -0.094, 0.780616, -0.206093, 0.992174, 0.563806, 0.562612, -0.754387, -0.36159, -0.288989, 0.195871, -0.156575, 0.108674, 0.037465, 0.115865, -0.313897, -0.290763, -0.920174, -0.943574, 0.610507, -0.749795, -0.802955, 0.296183, -0.862404, 0.174227, 0.6721, 0.674769, -0.198663, 0.163696, -0.572753, 0.149323, 0.456671, 0.162098], "shape": [4, 5, 4, 2]}, "expected": {"data": [-0.131402, 0.155199, 0.03226, -0.070195, 0.037581, -0.260452, 0.030912, -0.436622, -0.017073, 0.039968, 0.135148, 0.319859, 0.22609, 0.20693, 0.242007, -0.012103, 0.045283, 0.116491, 0.151294, -0.099044, 0.124178, 0.104379, -0.202626, 0.428394, -0.275804, 0.206784, 0.130999, 0.038676, 0.218617, 0.040718, 0.016176, 0.085388, 0.132601, 0.226252, 0.333257, 0.00119, 0.36471, 0.04267, 0.305004, 0.197663, 0.087807, 0.098584, -0.156448, -0.247495, 0.086031, -0.046277, 0.236039, 0.163866, -0.061051, 0.344117, -0.020681, 0.106031, 0.104317, 0.009554, 0.045255, 0.096864, 0.026437, 0.064502, 0.301632, -0.154837, -0.09276, -0.104819, -0.268972, 0.050116, 0.043877, 0.247794, -0.430852, -0.053041, 0.059331, -0.068163, 0.465398, -0.186144, 0.183288, 0.224137, 0.099849, 0.042311, 0.115137, 0.048274, -0.004983, 0.099998, -0.188808, -0.347206, -0.07789, -0.057268, -0.485448, 0.073878, -0.588151, -0.058268, 0.236719, 0.419233, -0.385708, 0.156509, -0.058041, 0.15571, 0.456671, 0.162098], "shape": [4, 3, 4, 2]}}, "pooling.AveragePooling3D.8": {"input": {"data": [0.106539, 0.430065, 0.625063, -0.956042, 0.681684, 0.345995, -0.589061, 0.186737, 0.535452, -0.125905, -0.396262, -0.44893, 0.39021, 0.253402, -0.238515, 0.337141, 0.178107, 0.244331, -0.93179, -0.081267, 0.895223, 0.820023, 0.365435, -0.738456, 0.893031, -0.787916, -0.518813, 0.661518, -0.464144, -0.639165, -0.252917, 0.784083, 0.577398, 0.769552, 0.036096, 0.847521, -0.171916, 0.07536, -0.830068, 0.734205, -0.437818, 0.295701, 0.252657, -0.859452, -0.425833, -0.650296, -0.584695, 0.163986, 0.43905, -0.521755, 
0.620616, 0.066707, -0.101702, 0.941175, 0.479202, 0.624312, -0.372154, 0.625845, 0.980521, -0.834695, -0.40269, 0.784157, 0.814068, -0.485038, -0.150738, 0.682911, 0.406096, -0.405868, -0.337905, 0.803583, -0.764964, 0.96897, -0.057235, 0.403604, -0.605392, 0.389273, 0.235543, -0.095585, -0.860692, 0.937457, -0.928888, 0.702073, -0.18066, 0.033968, -0.082046, -0.237205, 0.922919, 0.064731, -0.026908, -0.865491, 0.881128, 0.265603, -0.132321, -0.701801, 0.490064, 0.718745, -0.884446, 0.538162, -0.086979, -0.734317, 0.089006, 0.349945, -0.415428, -0.621358, 0.892372, 0.090398, 0.883604, 0.772612, 0.589633, 0.187399, 0.807184, -0.128627, -0.534439, 0.258966, 0.141399, -0.777263, 0.911318, -0.359087, 0.789361, 0.470019, 0.836149, 0.415029, 0.920315, -0.916153, 0.645573, -0.446523, -0.899169, 0.78844], "shape": [4, 4, 4, 2]}, "expected": {"data": [0.162391, -0.005936, -0.221024, 0.180816, 0.161071, -0.078404, 0.166559, 0.261386, 0.04966, 0.217097, -0.082203, 0.300223, 0.138512, -0.110408, 0.330712, 0.037165], "shape": [2, 2, 2, 2]}}, "pooling.AveragePooling3D.9": {"input": {"data": [0.454263, 0.047178, -0.644362, 0.432654, 0.776147, -0.088086, -0.16527, -0.152361, -0.723283, 0.119471, -0.020663, 0.230897, 0.249349, -0.825224, 0.809245, 0.37136, 0.649976, 0.690981, -0.5766, 0.750394, -0.777363, -0.359006, 0.398419, -0.851015, -0.479232, -0.924962, -0.898893, 0.135445, 0.819369, -0.867218, 0.039715, 0.304805, -0.865872, -0.891635, 0.730554, 0.178083, 0.981329, 0.047786, -0.466968, -0.89441, -0.037018, -0.880158, 0.635061, 0.108217, 0.405675, 0.242025, 0.524396, -0.46013, -0.98454, 0.227442, -0.159924, -0.396205, -0.843265, 0.181395, -0.743803, 0.445469, 0.05215, 0.837067, -0.756402, -0.959109, -0.580594, -0.677936, -0.929683, -0.165592, -0.870784, 0.91887, 0.542361, 0.46359, -0.521332, 0.778263, 0.662447, 0.692057, 0.224535, -0.087731, 0.904644, 0.207457, -0.564079, -0.389642, 0.590403, -0.861828, -0.280471, -0.593786, -0.542645, 0.788946, -0.808773, -0.334536, 
-0.973711, 0.68675, 0.383992, -0.38838, 0.278601, -0.89188, -0.582918, -0.190511, -0.493528, 0.635115, -0.375152, 0.586508, -0.986557, -0.449484, 0.216757, 0.746825, -0.144795, 0.448144, -0.828083, 0.224525, -0.958965, -0.566069, -0.850394, -0.261458, -0.589888, 0.75667, 0.531888, 0.146437, -0.877887, -0.575355, 0.06156, -0.714865, 0.710365, -0.439259, -0.084566, -0.854224, 0.467254, 0.59934, 0.527409, 0.791222, -0.66992, 0.644258], "shape": [4, 4, 4, 2]}, "expected": {"data": [-0.058915, -0.081912, 0.389238, -0.21988, -0.394183, 0.045132, -0.327151, -0.248637, -0.2935, 0.162208, -0.15011, 0.238629, -0.015479, -0.221113, -0.27869, 0.134772], "shape": [2, 2, 2, 2]}}, "pooling.AveragePooling3D.10": {"input": {"data": [-0.453175, -0.475078, 0.486234, -0.949643, -0.349099, -0.837108, 0.933439, 0.167853, -0.995191, 0.459466, -0.788337, -0.120985, 0.06215, 0.138625, -0.102201, 0.976605, -0.591051, -0.592066, 0.469334, -0.435067, 0.621416, 0.817698, 0.790015, 0.485862, 0.469679, -0.611443, 0.582845, -0.503885, -0.379174, -0.451035, 0.052289, 0.131836, -0.609312, -0.828722, -0.422428, -0.648754, -0.339801, -0.758017, 0.754244, -0.544823, 0.691656, 0.076848, -0.32539, 0.306448, -0.662415, 0.334329, 0.030666, -0.414111, -0.757096, -0.20427, -0.893088, -0.681919, -0.619269, -0.640749, 0.867436, 0.971453, -0.42039, -0.574905, -0.34642, 0.588678, -0.247265, 0.436084, 0.220126, 0.114202, 0.613623, 0.401452, -0.270262, -0.591146, -0.872383, 0.818368, 0.336808, 0.338197], "shape": [2, 3, 3, 4]}, "expected": {"data": [-0.096379, -0.08704], "shape": [2, 1, 1, 1]}}, "pooling.AveragePooling3D.11": {"input": {"data": [-0.803361, 0.348731, 0.30124, -0.168638, 0.516406, -0.258765, 0.297839, 0.993235, 0.958465, -0.273175, -0.704992, 0.261477, -0.301255, 0.263104, 0.678631, -0.644936, -0.029034, -0.320266, 0.307733, -0.479016, 0.608177, 0.034951, 0.456908, -0.929353, 0.594982, -0.243058, -0.524918, 0.455339, 0.034216, 0.356824, 0.63906, -0.259773, -0.084724, 0.248472, -0.608134, 0.0077, 
0.400591, -0.960703, -0.247926, -0.774509, 0.496174, -0.319044, -0.324046, -0.616632, -0.322142, -0.472846, 0.171825, -0.030013, 0.992861, -0.645264, 0.524886, 0.673229, 0.883122, 0.25346, -0.706988, -0.654436, 0.918349, -0.139113, 0.742737, 0.338472, -0.812719, 0.860081, 0.003489, 0.667897, 0.362284, -0.283972, 0.995162, 0.67962, -0.700244, -0.137142, 0.045695, -0.450433], "shape": [2, 3, 3, 4]}, "expected": {"data": [-0.073055, 0.083417, 0.109908, 0.160761, 0.061998, 0.11563, 0.00915, 0.030844, 0.154595, 0.132854, -0.051119, 0.025479, 0.01321, 0.103228, 0.096798, 0.132983, 0.091705, 0.092372, 0.008749, 0.004411, 0.149296, 0.121109, -0.012738, -0.001443, 0.044439, 0.121335, 0.01906, 0.021515, 0.096866, 0.117315, -0.031152, -0.075063, 0.106077, 0.137015, -0.045408, -0.108109, 0.13765, 0.028927, -0.316498, -0.265803, 0.090454, 0.069218, -0.177051, -0.075283, 0.162245, 0.098457, -0.146385, -0.134885, 0.102239, 0.081747, -0.04865, 0.018312, 0.020763, 0.058465, -0.029871, 0.057668, 0.044907, 0.081293, -0.050427, 0.015913, 0.201232, 0.2022, 0.197264, 0.272857, 0.129309, 0.175371, 0.153743, 0.238277, 0.144593, 0.186112, 0.056922, 0.123728], "shape": [2, 3, 3, 4]}}, "pooling.AveragePooling3D.12": {"input": {"data": [-0.497409, -0.250345, 0.196124, -0.044334, -0.324906, 0.560065, 0.220435, -0.167776, -0.923771, 0.77337, -0.862909, -0.584756, -0.70451, 0.870272, 0.841773, -0.312016, 0.599915, 0.073955, 0.944336, -0.4175, 0.865698, 0.609184, 0.033839, -0.72494, -0.239473, 0.514968, -0.318523, -0.244443, 0.275468, -0.85993, -0.262732, 0.026767, -0.937574, 0.872647, 0.540013, 0.055422, 0.322167, 0.972206, 0.92596, -0.82368, -0.63508, 0.671616, -0.678809, 0.202761, -0.260164, -0.241878, 0.188534, -0.47291, -0.077436, -0.016304, 0.548747, -0.236224, -0.780147, -0.013071, -0.67362, -0.807763, -0.351361, 0.533701, 0.274553, -0.933379, -0.49029, 0.928012, -0.719924, 0.453519, 0.173223, 0.030778, 0.12229, 0.547074, -0.860491, 0.206434, 0.248515, -0.189106, -0.393127, -0.152128, 
-0.822508, -0.361768, -0.702917, 0.998304, -0.011396, -0.644766, -0.150506, -0.153633, -0.772981, -0.470261, -0.056372, 0.082635, 0.017418, 0.26302, 0.730468, 0.268813, -0.163174, 0.332229, -0.698119, -0.397122, -0.426552, -0.931893, -0.6652, -0.986456, -0.062208, -0.90263, -0.12278, -0.277462, 0.072233, 0.466157, -0.917268, 0.053668, -0.45609, 0.072386, 0.376642, 0.133363, -0.799663, -0.984724, 0.337956, -0.088779, -0.04311, 0.520989, -0.611655, -0.456082, -0.662569, 0.09705, 0.256941, -0.987104, -0.939188, -0.296892, 0.123336, 0.710366, -0.675095, -0.037022, -0.616184, -0.925359, -0.734003, -0.629605, 0.071318, 0.6548, -0.019787, -0.435663, -0.412123, 0.967997, -0.80066, -0.337085, 0.471011, -0.945095, -0.74401, 0.924175], "shape": [3, 4, 4, 3]}, "expected": {"data": [-0.082917, 0.141622, 0.017767, 0.080913, -0.005706, 0.056398, -0.073774, -0.279674, -0.351729, -0.0631, -0.128173, -0.649791], "shape": [3, 2, 2, 1]}}}
# Debugging strategies

In this notebook, we'll talk about what happens when you get an error message (it will happen often!) and some steps you can take to resolve them.

Run the code in the next cell.
x = 10

if x > 20
    print(f'{x} is greater than 20!')
MIT
reference/Debugging strategies.ipynb
ireapps/cfj-2018
The "traceback" message shows you a couple of useful things:

- What line the error is on: `line 3`
- The class of error: `SyntaxError` (very common)
- Exactly _where_ the error occurred -- see where the `^` symbol is pointing?

What's the problem?

### Googling

If it's not immediately clear what's wrong -- if you're not sure what a `SyntaxError` even is -- I might start by Googling the error message, the word "python" and maybe some keywords for what I was trying to do when I got the error. Something like [`"SyntaxError: invalid syntax" python if statement`](https://www.google.com/search?q=%22SyntaxError%3A+invalid+syntax%22+python+if+statement)

Click through the first couple of links -- you'll become _very_ familiar with StackOverflow -- and see if you spot the problem.

If you're still stuck, maybe it's time to ...

### Read the docs

My next stop would be the Python documentation to find some examples of the thing I'm trying to do. [Here's the page outlining how to write an `if` statement in Python](https://docs.python.org/3/tutorial/controlflow.html). From there, I would copy the example code, run it, compare it line by line with my code and see what's different.

If I'm _still_ stuck, I might see if there are other keywords to search on and take another run at Google.

### Use `print()` liberally

The `print()` function can be a lifesaver -- it can show you _what_ a value is before you try to do something to it, and whether it matches up with your expectations of what that value should be, thereby giving you a clue about why your script is failing. An example can help clarify this idea.

**Scenario:** Your newsroom is handing out longevity bonuses. (Congratulations!) Each employee's bonus will be the number of years they've been with the company, times 50.

So we're going to loop over our staff data, held in a list of dictionaries, and calculate each person's bonus.
staff = [
    {'name': 'Fran', 'years_of_service': 2, 'job': 'Reporter'},
    {'name': 'Graham', 'years_of_service': 7, 'job': 'Reporter'},
    {'name': 'Pat', 'years_of_service': 4, 'job': 'Web Producer'},
    {'name': 'John', 'years_of_service': '26', 'job': 'Managing Editor'},
    {'name': 'Sue', 'years_of_service': 33, 'job': 'Executive Editor'}
]

for person in staff:
    name = person['name']
    bonus = person['years_of_service'] * 50
    print(f'{name} is getting a bonus of {bonus}')
We didn't get an exception, but something is _clearly_ wrong with John's bonus. What's going on?

Maybe you spot the error already. If not, we might Google something like ["python multiply numbers repeating"](https://www.google.com/search?q=python+multiply+numbers+repeating) -- which leads us to [this StackOverflow answer](https://stackoverflow.com/questions/20401871/want-to-multiply-not-repeat-variable). Is that what's going on here? Let's add a `print()` statement before we do the multiplication and use the [`type()`](https://docs.python.org/3/library/functions.html#type) function to check the value that we're pulling out of each dictionary.
for person in staff:
    name = person['name']
    bonus = person['years_of_service'] * 50
    print(name, type(person['years_of_service']))
    print(f'{name} is getting a bonus of {bonus}')
Aha! John's value for `years_of_service` has been stored as a string, not an integer. Let's fix that by using the [`int()`](https://docs.python.org/3/library/functions.html#int) function to coerce the value to an integer.
for person in staff:
    name = person['name']
    bonus = int(person['years_of_service']) * 50
    print(f'{name} is getting a bonus of {bonus}')
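If the data might contain values that can't be coerced at all, a `try`/`except` makes the failure explicit instead of silent. This is an illustrative extension, not part of the original lesson -- the `staff` list here is a trimmed-down copy:

```python
# A more defensive sketch: coerce with try/except so a truly non-numeric
# value fails with a clear message instead of a bare traceback.
staff = [
    {'name': 'Fran', 'years_of_service': 2},
    {'name': 'John', 'years_of_service': '26'},
]

bonuses = {}
for person in staff:
    try:
        years = int(person['years_of_service'])
    except (TypeError, ValueError):
        raise ValueError(f"Bad years_of_service for {person['name']}: "
                         f"{person['years_of_service']!r}")
    bonuses[person['name']] = years * 50
    print(f"{person['name']} is getting a bonus of {bonuses[person['name']]}")
```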
Winner winner, chicken dinner.

Here are some more debugging exercises for you to work through. See if you can figure out what's wrong and fix them.
print(Hello, Pittsburgh!)

desk = {
    'wood': 'fir',
    'color': 'black',
    'height_in': 36,
    'width_in': 48,
    'length_in': 68
}

print(desk['drawer_count'])

students = ['Kelly', 'Larry', 'José', 'Frank', 'Sarah', 'Sue']

for student in students:
    if student = 'Kelly':
        print('It's Kelly!')
    elif student == 'José':
        print("It's José!")

import cvs

with open('../../../data/eels.csv', 'r') as o:
    reader = csv.DictReader(o)
    for row in Reader:
        print(row)
Make the Dataset
CA_x, CA_y = [], []
KS_x, KS_y = [], []
MT_x, MT_y = [], []
TX_x, TX_y = [], []
OH_x, OH_y = [], []

states = {"CA": [CA_x, CA_y, "Roi_1"],
          "KS": [KS_x, KS_y, "Roi_2"],
          "MT": [MT_x, MT_y, "Roi_3"],
          "TX": [TX_x, TX_y, "Roi_4"],
          "OH": [OH_x, OH_y, "Roi_5"]}
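The same mapping can be built more compactly with a dict comprehension -- a sketch, not from the notebook; note each state must get its own fresh lists:

```python
# Build {state: [x_list, y_list, roi_name]} without five pairs of assignments.
rois = {"CA": "Roi_1", "KS": "Roi_2", "MT": "Roi_3", "TX": "Roi_4", "OH": "Roi_5"}
states = {st: [[], [], roi] for st, roi in rois.items()}
```

Because the comprehension body runs once per state, every state gets distinct list objects (unlike `dict.fromkeys`, which would share a single list across all keys).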
MIT
Notebooks/GCN_cuda_Implement_speed.ipynb
oqbrady/Sentinel2_Traffic
Load into RAM
master_df = pd.read_csv("Sentinel2_Traffic/Traffic_Data/5_state_traffic.csv")
master_df = master_df.set_index("Unnamed: 0")

CA_x, CA_y = [], []
KS_x, KS_y = [], []
MT_x, MT_y = [], []
TX_x, TX_y = [], []
OH_x, OH_y = [], []

states = {"Cali": [CA_x, CA_y, "Roi_1"],
          "KS": [KS_x, KS_y, "Roi_2"],
          "MT": [MT_x, MT_y, "Roi_3"],
          "TX": [TX_x, TX_y, "Roi_4"],
          "Ohio": [OH_x, OH_y, "Roi_5"]}

j = 0
for st in ["Cali", "KS", "MT", "TX", "Ohio"]:
    # for st in ["TX"]:
    # path_check = "R/" + states[st][2] + "/greedy_a/"
    path = "new_roi/" + st  # + "/sent_cloud_90p_raw/"
    # imgs_check = os.listdir(path_check)
    imgs = os.listdir(path)
    # for img, img_check in zip(imgs, imgs_check):
    for img in imgs:
        date = img[len(st):len(st) + 10]
        # print(date)
        # break
        try:
            photo = pd.read_csv(path + '/' + img)
        except:
            continue
        # photo_check = np.loadtxt(path_check + img_check).reshape(-1, 7, 3)
        # cali_pixs = 72264
        # # kansas_pixs = 69071
        # # mont_pixs = 72099
        # # texas_pixs = 71764
        # ohio_pixs = 62827
        if photo.shape[0] < 50000:
            continue
        if date in list(master_df.index):
            if st == "Cali":
                lookup_st = "CA"
            elif st == "Ohio":
                lookup_st = "OH"
            else:
                lookup_st = st
            if not pd.isna(master_df.loc[date][lookup_st]):
                states[st][0].append(photo)
                states[st][1].append(master_df.loc[date][lookup_st])
                print(j, st, photo.shape)
                j += 1

def gen_around(x, y):
    return [(x, y), (x, y + 10), (x, y - 10), (x + 10, y), (x - 10, y),
            (x + 10, y + 10), (x + 10, y - 10), (x - 10, y + 10), (x - 10, y - 10)]

def gen_around_strict(x, y):
    return [(x, y), (x, y + 10), (x, y - 10), (x + 10, y), (x - 10, y)]

def neighbors(road, coords, x, y, diagonal=True):
    neigh = []
    if diagonal:
        cand = gen_around(x, y)
    else:
        cand = gen_around_strict(x, y)
    for pix in cand:
        if pix[0] in coords:
            if pix[1] in coords[pix[0]]:
                neigh.append(coords[pix[0]][pix[1]]['idx'])
    return neigh

def src_dst(road, coords, diagonal=True):
    src, dst, values = [], [], []
    for row in range(road.shape[0]):
        x = road["x"][row]
        y = road["y"][row]
        idx = coords[x][y]['idx']
        val = coords[x][y]['val']
        # if val[0] != road[row][:3][0]:
        #     assert(False)
        for c in neighbors(road, coords, x, y, diagonal):
            src.append(idx)
            dst.append(c)
            values.append(val)
    return src, dst  # , values

device = torch.cuda.current_device()

class RoadDataset(DGLDataset):
    def __init__(self, states):
        self.states = states
        super().__init__(name='road_graphs')

    def process(self):
        self.graphs = []
        self.labels = []
        self.state = []
        for st in self.states.keys():
            # for st in ["TX"]:
            print(st)
            for i in range(len(self.states[st][0])):
                print(i)
                img = states[st][0][i]
                coords = {}
                vals = []
                print(img.shape[0])
                for j in range(img.shape[0]):
                    # print(img[j].shape)
                    lon = img["x"][j].astype(int)
                    # print(lon)
                    lat = img["y"][j].astype(int)
                    val = [img["B2"][j], img["B3"][j], img["B4"][j]]
                    vals.append(val)
                    if lon not in coords:
                        coords[lon] = {}
                    coords[lon][lat] = {'idx': j, 'val': val}
                src, dst = src_dst(img, coords)
                # src, dst, values = src_dst(img, coords)
                # print(np.mean(src), np.mean(dst), np.mean(values))
                graph = dgl.graph((src, dst), num_nodes=img.shape[0])
                graph.ndata['feat'] = torch.from_numpy(np.array(vals))
                # graph = graph.add_self_loop(graph)
                graph = graph.to(device)
                self.graphs.append(graph)
                self.labels.append(self.states[st][1][i])
                self.state.append(st)
                # assert(False)

    def __getitem__(self, i):
        return self.graphs[i], self.labels[i], self.state[i]

    def __len__(self):
        return len(self.graphs)

class RoadDatasetLoad(DGLDataset):
    def __init__(self, states):
        self.states = states
        super().__init__(name='road_graphs')

    def process(self):
        self.graphs = load_graphs("graphs/data_new.bin")[0]
        self.labels = np.loadtxt("graphs/labels_new.csv")
        self.state = np.loadtxt("graphs/states_new.csv", dtype=np.str)

    def __getitem__(self, i):
        return self.graphs[i], self.labels[i]  # , self.state[i]

    def __len__(self):
        return len(self.graphs)

Road_Graphs = RoadDataset(states)
dataset = Road_Graphs
dataset[100]

# Road_Graphs = RoadDataset(states)
save_graphs('graphs/data_new.bin', dataset.graphs)
labels = np.array(dataset.labels)
states = np.array(dataset.state)
np.savetxt("graphs/labels_new.csv", labels)
np.savetxt('graphs/states_new.csv', states, fmt="%s")

Road_Load = RoadDatasetLoad(states)
dataset_save = dataset

# Generate a synthetic dataset with 10000 graphs, ranging from 10 to 500 nodes.
# dataset = dgl.data.GINDataset('PROTEINS', self_loop=True)
dataset = Road_Load
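The edge-building logic above hinges on `gen_around` and the nested `coords` lookup. Here is a minimal, DGL-free sketch of that neighbor search; the tiny `coords` dict is made up for illustration, with pixels sitting on a 10 m grid:

```python
def gen_around(x, y):
    # the pixel itself plus its 8 diagonal/orthogonal neighbors on a 10 m grid
    return [(x + dx, y + dy) for dx in (-10, 0, 10) for dy in (-10, 0, 10)]

def neighbors(coords, x, y):
    # collect the node index of every grid neighbor that exists in coords
    out = []
    for px, py in gen_around(x, y):
        if px in coords and py in coords[px]:
            out.append(coords[px][py]['idx'])
    return out

# Three pixels at (0,0), (0,10), (10,0) -> node indices 0, 1, 2.
coords = {0: {0: {'idx': 0}, 10: {'idx': 1}}, 10: {0: {'idx': 2}}}
print(sorted(neighbors(coords, 0, 0)))  # node 0 neighbors itself, 1 and 2
```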
Train the Model
# X = dataset[:][0]
# y = dataset[:][1]
print(dataset.state[0:37])
print(dataset.state[37:64])
print(dataset.state[64:88])
print(dataset.state[88:119])
print(dataset.state[119:124])

from dgl.dataloading import GraphDataLoader
from torch.utils.data.sampler import SubsetRandomSampler
from torch.utils.data import DataLoader

state_val = False
one_sample = False
state = "TX"
lookup_state = {"CA": 0, "KS": 1, "MT": 2, "TX": 3, "OH": 4}
state_idxs = [(0, 37), (37, 64), (64, 88), (88, 119), (119, 124)]

num_examples = len(dataset)
if state_val:
    x = torch.arange(num_examples)
    start = state_idxs[lookup_state[state]][0]
    end = state_idxs[lookup_state[state]][1]
    test_sample = x[start + 3: end]
    val_sample = x[start: start + 3]
    train_sample = torch.cat((x[:start], x[end:]))
    train_sample = train_sample[torch.randperm(train_sample.shape[0])]
    print(train_sample)
else:
    num_train = int(num_examples * 0.7)
    num_val = int(num_examples * 0.85)
    x = torch.randperm(num_examples)
    train_sample = x[:num_train]
    val_sample = x[num_train: num_val]
    test_sample = x[num_val:]

train_sampler = SubsetRandomSampler(train_sample)
val_sampler = SubsetRandomSampler(val_sample)
test_sampler = SubsetRandomSampler(test_sample)

train_dataloader = GraphDataLoader(dataset, sampler=train_sampler, batch_size=16, drop_last=False)
val_dataloader = GraphDataLoader(dataset, sampler=val_sampler, batch_size=16, drop_last=False)
test_dataloader = GraphDataLoader(dataset, sampler=test_sampler, batch_size=16, drop_last=False)
# print(train_sample, val_sample, test_sample)

it = iter(test_dataloader)
batch = next(it)
print(batch)

batched_graph, labels = batch
print('Number of nodes for each graph element in the batch:', batched_graph.batch_num_nodes())
print('Number of edges for each graph element in the batch:', batched_graph.batch_num_edges())

# Recover the original graph elements from the minibatch
graphs = dgl.unbatch(batched_graph)
print('The original graphs in the minibatch:')
print(graphs)
print(labels)

from dgl.nn import GraphConv, DenseGraphConv, GATConv

class GCN(nn.Module):
    def __init__(self, in_feats, conv_hidden, lin_hidden):
        super(GCN, self).__init__()
        self.conv_layers = nn.ModuleList()
        self.LR = nn.LeakyReLU(0.2)
        self.lin_layers = nn.ModuleList()
        self.conv_layers.append(GraphConv(in_feats, conv_hidden[0]))
        for i in range(1, len(conv_hidden)):
            self.conv_layers.append(GraphConv(conv_hidden[i - 1], conv_hidden[i]))
        for i in range(1, len(lin_hidden) - 1):
            self.lin_layers.append(nn.Linear(lin_hidden[i - 1], lin_hidden[i]))
            # self.lin_layers.append(nn.BatchNorm1d(lin_hidden[i]))
        self.lin_layers.append(nn.Linear(lin_hidden[-2], lin_hidden[-1]))

    def forward(self, g, in_feat):
        output = in_feat
        for layer in self.conv_layers:
            output = self.LR(layer(g, output))
        # print(torch.mean(output))
        graphs = dgl.unbatch(g)
        flat_arr = torch.zeros((g.batch_size, max_pixs))
        prev = 0
        # print("Before", torch.mean(output))
        for i in range(len(batched_graph.batch_num_nodes())):
            end = prev + int(batched_graph.batch_num_nodes()[i].item())
            entry = output[prev: end]
            entry = entry / int(g.batch_num_nodes()[i].item())
            pad_val = int(torch.mean(entry).item())
            pad_length = (max_pixs - entry.shape[0]) // 2
            entry = torch.nn.functional.pad(entry.flatten(), (pad_length, pad_length), value=pad_val)
            flat_arr[i][:entry.shape[0]] = entry
            prev = end
        flat_arr = flat_arr.to(device)
        # print("After", torch.mean(flat_arr))
        output = flat_arr
        for i, layer in enumerate(self.lin_layers):
            output = layer(output)
            if i != (len(self.lin_layers) - 1):
                output = self.LR(output)
        # print(flat_arr.shape)
        # g.ndata['h'] = h
        # print(dgl.mean_nodes(g, 'h'))
        # assert(False)
        return output  # dgl.mean_nodes(g, 'h')

# Create the model with given dimensions
model = GCN(3, [10, 10, 1], [max_pixs, 1000, 500, 100, 50, 10, 1])
# model = GCN(3, 16, 1)
model.cuda()

criterion = nn.MSELoss()
# model.to('cuda:0')
optimizer = torch.optim.Adam(model.parameters(), lr=0.01)

del criterion
del optimizer
del model
torch.cuda.empty_cache()

def init_weights(m):
    if type(m) == nn.Linear:
        torch.nn.init.xavier_uniform(m.weight)
        m.bias.data.fill_(0.01)
model.apply(init_weights)

best_model = model
min_val = 1e9
j = 0
for epoch in range(100):
    loss_tot = 0
    loss = 0
    batches = 0
    model.train()
    for batched_graph, labels in train_dataloader:
        batched_graph = batched_graph.to(device)
        labels = labels.to(device)
        pred = model(batched_graph, batched_graph.ndata['feat'].float())
        # print(pred, labels)
        labels = labels.to(device)
        loss = criterion(pred, labels.reshape(labels.shape[0], 1).float())
        loss_tot += loss.item()
        batches += 1
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    if j % 10 == 0:
        print("Train Loss:", loss_tot / batches)
    num_tests = 0
    loss_i = 0
    with torch.no_grad():
        model.eval()
        for batched_graph, labels in val_dataloader:
            batched_graph = batched_graph.to(device)
            labels = labels.to(device)
            pred = model(batched_graph, batched_graph.ndata['feat'].float())
            loss_i += criterion(pred, labels.reshape(labels.shape[0], 1).float()).item()
            # x.extend([x[0] for x in pred.cpu().detach().numpy().tolist()])
            # y.extend([x[0] for x in labels.reshape(labels.shape[0], 1).cpu().detach().numpy().tolist()])
            # print(type(pred))
            num_tests += 1
    val_loss = loss_i / num_tests
    if j % 10 == 0:
        print('Val loss:', val_loss)
    # val_loss.append(loss_v.item())
    if val_loss < min_val:
        print("new_best:", val_loss)
        min_val = val_loss
        best_model = copy.deepcopy(model)
    j += 1

# num_correct = 0
num_tests = 0
x = []
y = []
loss = 0
with torch.no_grad():
    for batched_graph, labels in test_dataloader:
        # print(batched_graph)
        batched_graph = batched_graph.to(device)
        labels = labels.to(device)
        pred = best_model(batched_graph, batched_graph.ndata['feat'].float())
        loss += criterion(pred, labels.reshape(labels.shape[0], 1).float()).item()
        x.extend([x[0] for x in pred.cpu().detach().numpy().tolist()])
        y.extend([x[0] for x in labels.reshape(labels.shape[0], 1).cpu().detach().numpy().tolist()])
        num_tests += 1
print('Test loss:', loss / num_tests)

x_temp = y
y_temp = x
# print(y_temp)
# for i in range(len(y_temp)):
#     if y_temp[i] < 600:
#         y_temp.pop(i)
#         x_temp.pop(i)
#         break

x_plot = np.array(y_temp)
y_plot = np.array(x_temp)
new_x = np.array(x_plot).reshape(-1, 1)
new_y = np.array(y_plot)
fit = LinearRegression().fit(new_x, new_y)
score = fit.score(new_x, new_y)
plt.xlabel("Prediction")
plt.ylabel("Actual Traffic")
print(score)
plt.scatter(new_x, new_y)
axes = plt.gca()
x_vals = np.array(axes.get_xlim())
y_vals = x_vals
plt.plot(x_vals, y_vals, '--')
pre_y = fit.predict(new_x)
# plt.plot
plt.plot(new_x, pre_y)
plt.plot(x_vals, y_vals, '--')
# plt.savefig("GCN_MSE_143_r_881.png")
plt.show()

y
labels

class ChebNet(nn.Module):
    def __init__(self, k, in_feats, hiddens, out_feats):
        super(ChebNet, self).__init__()
        self.pool = nn.MaxPool1d(2)
        self.layers = nn.ModuleList()
        self.readout = MaxPooling()
        # Input layer
        self.layers.append(ChebConv(in_feats, hiddens[0], k))
        for i in range(1, len(hiddens)):
            self.layers.append(ChebConv(hiddens[i - 1], hiddens[i], k))
        self.cls = nn.Sequential(
            nn.Linear(hiddens[-1], out_feats),
            nn.LogSoftmax()
        )

    def forward(self, g_arr, feat):
        for g, layer in zip(g_arr, self.layers):
            feat = self.pool(layer(g, feat, [2] * g.batch_size).transpose(-1, -2).unsqueeze(0))\
                .squeeze(0).transpose(-1, -2)
        return self.cls(self.readout(g_arr[-1], feat))
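The 70/15/15 split above uses `torch.randperm`; the same index logic can be sketched with the stdlib alone (the dataset size of 124 matches the state index table above):

```python
import random

def split_indices(n, seed=0):
    # shuffle 0..n-1, then cut at 70% and 85% for train/val/test
    idx = list(range(n))
    random.Random(seed).shuffle(idx)
    n_train, n_val = int(n * 0.7), int(n * 0.85)
    return idx[:n_train], idx[n_train:n_val], idx[n_val:]

train, val, test = split_indices(124)
print(len(train), len(val), len(test))  # 86 19 19
```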
Bidirectional GRU encoder
class EncoderRNN(nn.Module):
    def __init__(self, hidden_size, embedding, n_layers=1, dropout=0):
        super(EncoderRNN, self).__init__()
        self.n_layers = n_layers
        self.hidden_size = hidden_size
        self.embedding = embedding
        self.gru = nn.GRU(hidden_size, hidden_size, n_layers,
                          dropout=(0 if n_layers == 1 else dropout), bidirectional=True)

    def forward(self, input_seq, input_lengths, hidden=None):
        embedded = self.embedding(input_seq)
        packed = torch.nn.utils.rnn.pack_padded_sequence(embedded, input_lengths)
        outputs, hidden = self.gru(packed, hidden)
        outputs, _ = torch.nn.utils.rnn.pad_packed_sequence(outputs)
        outputs = outputs[:, :, :self.hidden_size] + outputs[:, :, self.hidden_size:]
        return outputs, hidden
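The last line of `forward` sums the forward and backward halves of the bidirectional GRU output. A toy sketch of that slicing, using plain nested lists instead of tensors (the numbers are made up):

```python
hidden_size = 2
# One timestep, one batch element: a bidirectional output of width 2*hidden_size,
# where the first half is the forward pass and the second half the backward pass.
outputs = [[[1.0, 2.0, 10.0, 20.0]]]

summed = [[[f + b for f, b in zip(step[0][:hidden_size], step[0][hidden_size:])]]
          for step in outputs]
print(summed)  # forward + backward: [[[11.0, 22.0]]]
```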
Apache-2.0
ChatBot.ipynb
DEK11/Chatbot-RNN
Attention module
class Attn(torch.nn.Module):
    def __init__(self, method, hidden_size):
        super(Attn, self).__init__()
        self.method = method
        if self.method not in ['dot', 'general', 'concat']:
            raise ValueError(self.method, "is not an appropriate attention method.")
        self.hidden_size = hidden_size
        if self.method == 'general':
            self.attn = torch.nn.Linear(self.hidden_size, hidden_size)
        elif self.method == 'concat':
            self.attn = torch.nn.Linear(self.hidden_size * 2, hidden_size)
            self.v = torch.nn.Parameter(torch.FloatTensor(hidden_size))

    def dot_score(self, hidden, encoder_output):
        return torch.sum(hidden * encoder_output, dim=2)

    def general_score(self, hidden, encoder_output):
        energy = self.attn(encoder_output)
        return torch.sum(hidden * energy, dim=2)

    def concat_score(self, hidden, encoder_output):
        energy = self.attn(torch.cat((hidden.expand(encoder_output.size(0), -1, -1),
                                      encoder_output), 2)).tanh()
        return torch.sum(self.v * energy, dim=2)

    def forward(self, hidden, encoder_outputs):
        if self.method == 'general':
            attn_energies = self.general_score(hidden, encoder_outputs)
        elif self.method == 'concat':
            attn_energies = self.concat_score(hidden, encoder_outputs)
        elif self.method == 'dot':
            attn_energies = self.dot_score(hidden, encoder_outputs)
        attn_energies = attn_energies.t()
        return F.softmax(attn_energies, dim=1).unsqueeze(1)

class AttnDecoderRNN(nn.Module):
    def __init__(self, attn_model, embedding, hidden_size, output_size, n_layers=1, dropout=0.1):
        super(AttnDecoderRNN, self).__init__()
        self.attn_model = attn_model
        self.hidden_size = hidden_size
        self.output_size = output_size
        self.n_layers = n_layers
        self.dropout = dropout
        self.embedding = embedding
        self.embedding_dropout = nn.Dropout(dropout)
        self.gru = nn.GRU(hidden_size, hidden_size, n_layers,
                          dropout=(0 if n_layers == 1 else dropout))
        self.concat = nn.Linear(hidden_size * 2, hidden_size)
        self.out = nn.Linear(hidden_size, output_size)
        self.attn = Attn(attn_model, hidden_size)

    def forward(self, input_step, last_hidden, encoder_outputs):
        embedded = self.embedding(input_step)
        embedded = self.embedding_dropout(embedded)
        rnn_output, hidden = self.gru(embedded, last_hidden)
        attn_weights = self.attn(rnn_output, encoder_outputs)
        context = attn_weights.bmm(encoder_outputs.transpose(0, 1))
        rnn_output = rnn_output.squeeze(0)
        context = context.squeeze(1)
        concat_input = torch.cat((rnn_output, context), 1)
        concat_output = torch.tanh(self.concat(concat_input))
        output = self.out(concat_output)
        output = F.softmax(output, dim=1)
        return output, hidden

def maskNLLLoss(inp, target, mask):
    nTotal = mask.sum()
    crossEntropy = -torch.log(torch.gather(inp, 1, target.view(-1, 1)))
    loss = crossEntropy.masked_select(mask).mean()
    loss = loss.to(device)
    return loss, nTotal.item()

def train(input_variable, lengths, target_variable, mask, max_target_len, encoder, decoder,
          encoder_optimizer, decoder_optimizer, batch_size, clip):
    encoder_optimizer.zero_grad()
    decoder_optimizer.zero_grad()
    input_variable = input_variable.to(device)
    lengths = lengths.to(device)
    target_variable = target_variable.to(device)
    mask = mask.to(device)
    loss = 0
    print_losses = []
    n_totals = 0
    encoder_outputs, encoder_hidden = encoder(input_variable, lengths)
    decoder_input = torch.LongTensor([[SOS_token for _ in range(batch_size)]])
    decoder_input = decoder_input.to(device)
    decoder_hidden = encoder_hidden[:decoder.n_layers]
    use_teacher_forcing = True if random.random() < teacher_forcing_ratio else False
    if use_teacher_forcing:
        for t in range(max_target_len):
            decoder_output, decoder_hidden = decoder(decoder_input, decoder_hidden, encoder_outputs)
            decoder_input = target_variable[t].view(1, -1)
            mask_loss, nTotal = maskNLLLoss(decoder_output, target_variable[t], mask[t])
            loss += mask_loss
            print_losses.append(mask_loss.item() * nTotal)
            n_totals += nTotal
    else:
        for t in range(max_target_len):
            decoder_output, decoder_hidden = decoder(decoder_input, decoder_hidden, encoder_outputs)
            _, topi = decoder_output.topk(1)
            decoder_input = torch.LongTensor([[topi[i][0] for i in range(batch_size)]])
            decoder_input = decoder_input.to(device)
            mask_loss, nTotal = maskNLLLoss(decoder_output, target_variable[t], mask[t])
            loss += mask_loss
            print_losses.append(mask_loss.item() * nTotal)
            n_totals += nTotal
    loss.backward()
    _ = torch.nn.utils.clip_grad_norm_(encoder.parameters(), clip)
    _ = torch.nn.utils.clip_grad_norm_(decoder.parameters(), clip)
    encoder_optimizer.step()
    decoder_optimizer.step()
    return sum(print_losses) / n_totals

def trainIters(model_name, voc, pairs, encoder, decoder, encoder_optimizer, decoder_optimizer,
               encoder_n_layers, decoder_n_layers, save_dir, n_iteration, batch_size,
               print_every, save_every, clip, corpus_name, loadFilename):
    training_batches = [batch2TrainData(voc, [random.choice(pairs) for _ in range(batch_size)])
                        for _ in range(n_iteration)]
    print('Initializing ...')
    start_iteration = 1
    print_loss = 0
    if loadFilename:
        start_iteration = checkpoint['iteration'] + 1
    print("Training...")
    for iteration in range(start_iteration, n_iteration + 1):
        training_batch = training_batches[iteration - 1]
        input_variable, lengths, target_variable, mask, max_target_len = training_batch
        # Run a training iteration with batch
        loss = train(input_variable, lengths, target_variable, mask, max_target_len, encoder,
                     decoder, encoder_optimizer, decoder_optimizer, batch_size, clip)
        print_loss += loss
        if iteration % print_every == 0:
            print_loss_avg = print_loss / print_every
            print("Iteration: {}; Percent complete: {:.1f}%; Average loss: {:.4f}".format(
                iteration, iteration / n_iteration * 100, print_loss_avg))
            print_loss = 0
        if (iteration % save_every == 0):
            directory = os.path.join(save_dir, model_name, corpus_name,
                                     '{}-{}_{}'.format(encoder_n_layers, decoder_n_layers, hidden_size))
            if not os.path.exists(directory):
                os.makedirs(directory)
            torch.save({
                'iteration': iteration,
                'en': encoder.state_dict(),
                'de': decoder.state_dict(),
                'en_opt': encoder_optimizer.state_dict(),
                'de_opt': decoder_optimizer.state_dict(),
                'loss': loss,
                'voc_dict': voc.__dict__,
                'embedding': embedding.state_dict()
            }, os.path.join(directory, '{}_{}.tar'.format(iteration, 'checkpoint')))

class GreedySearchDecoder(nn.Module):
    def __init__(self, encoder, decoder):
        super(GreedySearchDecoder, self).__init__()
        self.encoder = encoder
        self.decoder = decoder

    def forward(self, input_seq, input_length, max_length):
        encoder_outputs, encoder_hidden = self.encoder(input_seq, input_length)
        decoder_hidden = encoder_hidden[:decoder.n_layers]
        decoder_input = torch.ones(1, 1, device=device, dtype=torch.long) * SOS_token
        all_tokens = torch.zeros([0], device=device, dtype=torch.long)
        all_scores = torch.zeros([0], device=device)
        for _ in range(max_length):
            decoder_output, decoder_hidden = self.decoder(decoder_input, decoder_hidden, encoder_outputs)
            decoder_scores, decoder_input = torch.max(decoder_output, dim=1)
            all_tokens = torch.cat((all_tokens, decoder_input), dim=0)
            all_scores = torch.cat((all_scores, decoder_scores), dim=0)
            decoder_input = torch.unsqueeze(decoder_input, 0)
        return all_tokens, all_scores

def evaluate(encoder, decoder, searcher, voc, sentence, max_length=MAX_LENGTH):
    indexes_batch = [indexesFromSentence(voc, sentence)]
    lengths = torch.tensor([len(indexes) for indexes in indexes_batch])
    input_batch = torch.LongTensor(indexes_batch).transpose(0, 1)
    input_batch = input_batch.to(device)
    lengths = lengths.to(device)
    tokens, scores = searcher(input_batch, lengths, max_length)
    decoded_words = [voc.index2word[token.item()] for token in tokens]
    return decoded_words

def evaluateInput(encoder, decoder, searcher, voc):
    input_sentence = ''
    while(1):
        try:
            input_sentence = input('> ')
            # Check if it is quit case
            if input_sentence == 'q' or input_sentence == 'quit':
                break
            input_sentence = normalizeString(input_sentence)
            output_words = evaluate(encoder, decoder, searcher, voc, input_sentence)
            output_words[:] = [x for x in output_words if not (x == 'EOS' or x == 'PAD')]
            print('Bot:', ' '.join(output_words))
        except KeyError:
            print("Error: Encountered unknown word.")

model_name = 'cb_model'
attn_model = 'dot'
# attn_model = 'general'
# attn_model = 'concat'
hidden_size = 500
encoder_n_layers = 2
decoder_n_layers = 2
dropout = 0.1
batch_size = 64

# Set checkpoint to load from; set to None if starting from scratch
loadFilename = None
checkpoint_iter = 4000
# loadFilename = os.path.join(save_dir, model_name, corpus_name,
#                             '{}-{}_{}'.format(encoder_n_layers, decoder_n_layers, hidden_size),
#                             '{}_checkpoint.tar'.format(checkpoint_iter))

# Load model if a loadFilename is provided
if loadFilename:
    # If loading on same machine the model was trained on
    checkpoint = torch.load(loadFilename)
    # If loading a model trained on GPU to CPU
    # checkpoint = torch.load(loadFilename, map_location=torch.device('cpu'))
    encoder_sd = checkpoint['en']
    decoder_sd = checkpoint['de']
    encoder_optimizer_sd = checkpoint['en_opt']
    decoder_optimizer_sd = checkpoint['de_opt']
    embedding_sd = checkpoint['embedding']
    voc.__dict__ = checkpoint['voc_dict']

save_dir = os.path.join("dialogues", "save")

print('Building encoder and decoder ...')
embedding = nn.Embedding(voc.num_words, hidden_size)
if loadFilename:
    embedding.load_state_dict(embedding_sd)

# Initialize encoder & decoder models
encoder = EncoderRNN(hidden_size, embedding, encoder_n_layers, dropout)
decoder = AttnDecoderRNN(attn_model, embedding, hidden_size, voc.num_words, decoder_n_layers, dropout)
if loadFilename:
    encoder.load_state_dict(encoder_sd)
    decoder.load_state_dict(decoder_sd)

# Use appropriate device
encoder = encoder.to(device)
decoder = decoder.to(device)
print('Models built and ready to go!')

clip = 50.0
teacher_forcing_ratio = 1.0
learning_rate = 0.0001
decoder_learning_ratio = 5.0
n_iteration = 4000
print_every = 100
save_every = 500

encoder.train()
decoder.train()

print('Building optimizers ...')
encoder_optimizer = optim.Adam(encoder.parameters(), lr=learning_rate)
decoder_optimizer = optim.Adam(decoder.parameters(), lr=learning_rate * decoder_learning_ratio)
if loadFilename:
    encoder_optimizer.load_state_dict(encoder_optimizer_sd)
    decoder_optimizer.load_state_dict(decoder_optimizer_sd)

print("Starting Training!")
trainIters(model_name, voc, pairs, encoder, decoder, encoder_optimizer, decoder_optimizer,
           encoder_n_layers, decoder_n_layers, save_dir, n_iteration, batch_size,
           print_every, save_every, clip, corpus_name, loadFilename)

encoder.eval()
decoder.eval()
searcher = GreedySearchDecoder(encoder, decoder)
evaluateInput(encoder, decoder, searcher, voc)
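The masking idea in `maskNLLLoss` can be sketched without torch: average `-log p(target)` only over unmasked (non-padding) positions. The per-step probabilities below are made up for illustration:

```python
from math import log

def mask_nll(probs, targets, mask):
    # negative log-likelihood of each target token, keeping only unmasked steps
    losses = [-log(p[t]) for p, t, m in zip(probs, targets, mask) if m]
    return sum(losses) / len(losses), sum(mask)

probs = [[0.7, 0.3], [0.2, 0.8], [0.5, 0.5]]  # per-step token distributions
targets = [0, 1, 0]
mask = [1, 1, 0]  # third step is padding, so it does not count
loss, n_total = mask_nll(probs, targets, mask)
print(round(loss, 4), n_total)
```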
> Hi
Bot: hi .
> How are you?
Bot: i m fine .
> Are you in love?
Bot: i m not .
> What are you thinking?
Bot: i m thinking of a little .
> Do you watch movies?
Bot: sure .
> which is your favourite?
Bot: the big one .
> Okay, bye now!
Bot: okay .
> q
When can we start watching?

---

Henry Rachootin - December 2018

MIT License: https://opensource.org/licenses/MIT

---

BitTorrent allows people to download movies without staying strictly within the confines of the law, but because of the peer-to-peer nature of the download, the file will not download sequentially. The VLC player can play the incomplete movie, but if it encounters a missing piece while streaming it will fail.

Our pirate friend is downloading _Avengers: Infinity War_, which is 149 minutes long and 12.91 GB. The torrent downloads in 4 MB pieces. If we start watching the movie when their torrent client says it is $x$ percent downloaded, what is the probability that we can get $t$ seconds into the movie without VLC failing on a missing piece?
# Configure Jupyter so figures appear in the notebook
%matplotlib inline

# Configure Jupyter to display the assigned value after an assignment
%config InteractiveShell.ast_node_interactivity='last_expr_or_assign'

import numpy as np
from scipy.stats import poisson
from math import ceil, exp, floor
from thinkbayes2 import Suite
import thinkplot
import pandas as pd
from itertools import product
MIT
examples/.ipynb_checkpoints/Greedy pirates-checkpoint.ipynb
sportsracer48/ThinkBayes2
First we will just define some values.

Let's define $T$ to be the runtime of the movie in seconds and $N$ to be the number of 4 MB pieces in the movie. From these, we can define $t_p$, the runtime of a single 4 MB piece, as $\frac{T}{N}$.
T = 149 * 60                # movie runtime in seconds
N = ceil(12.91 * 1000 / 4)  # number of 4MB pieces in the whole movie
t_p = T / N                 # runtime of a single 4MB piece
print(f"The runtime of a single piece is {t_p:.2f} seconds")
The runtime of a single piece is 2.77 seconds
Let's now consider where we are going with this calculation. When watching the movie, we need to have the next piece every 2.77 seconds. If we assume that each piece is equally likely to be downloaded, we can define a function $P_p(t)$ which tells us the probability of having a specific piece after $t$ seconds, and that will be the probability of having the next piece. We will find the actual form of $P_p(t)$ later.

We want to find $P(t)$, the probability of making it $t$ seconds into the movie without missing a piece. Let's define $n(t)=\lceil\frac{t}{t_p}\rceil$ to be the number of pieces needed to get $t$ seconds into the movie. We need to have each of those $n$ pieces at the time that they are played, and we have a function to tell us the probability that we will have them at that time. We can then say that

$$P(t)=\prod_{i=0}^{n(t)} P_p(i~t_p).$$

As for the actual form of $P_p(t)$, we will first find the distribution of the number of pieces downloaded at time $t$. Let's define the probability distribution $P_n(n,t)$, the probability of having $n$ pieces downloaded at time $t$. If we model piece arrival as a Poisson process, we can define $P_n(n,t)$ as

$$P_n(n,t)=\text{poisson}(n;\lambda t)$$

where $\lambda$ is the unknown mean piece arrival rate in pieces per second. We will find a distribution for $\lambda$ using real data. If we further assume that each piece is equally likely to be downloaded at any time, we can define $P_p(t)$ by the law of total probability as

$$P_p(t)=\sum_{n=n_0}^{N} \frac{n}{N}P_n(n-n_0,t)$$

where $n_0$ is the number of pieces downloaded when we start watching the movie, which we can just approximate as $\left\lfloor\frac{xN}{100}\right\rfloor$, where $x$ is still the percent downloaded at the start.

Of course, whatever probabilities we get out of that will be dependent on $\lambda$, so we will have to sum them over our probability distribution for $\lambda$, once we have that.
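The product formula for $P(t)$ translates almost directly into code. A toy sketch follows; the linear `toy_P_p` below is made up for illustration, standing in for the real $P_p$ derived later:

```python
from math import ceil, prod

t_p = 2.77  # runtime of one piece, in seconds

def P(t, P_p):
    # probability of reaching time t: we need piece i at time i * t_p
    n = ceil(t / t_p)
    return prod(P_p(i * t_p) for i in range(n))

def toy_P_p(t):
    # toy stand-in: chance of having a given piece rises linearly with time
    return min(1.0, 0.9 + t / 600)

print(round(P(10, toy_P_p), 3))
```

Because $P(t)$ multiplies one factor per piece, it is never larger than its smallest factor and shrinks as $t$ grows.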
We will use a grid algorithm to find that $\lambda$ distribution, by starting with a uniform prior over a number of sample $\lambda$ values and updating it with measured interarrival times, remembering that the likelihood of an interarrival time $t$ is $\lambda e^{-\lambda t}$ for a Poisson process.
# wireshark dump
data = pd.read_csv('torrent pieces.csv')

# this finds the piece packets
data = data[data.Info == "Piece[Malformed Packet]"]

# extract the time each piece arrived at
times = np.array(data.Time)

# dump the initial times, they don't represent the long term behavior
times = times[45:]
interTimes = np.diff(times)

class Lambda(Suite):
    def Likelihood(self, inter, lam):
        # poisson process interarrival likelihood
        return lam * exp(-lam * inter)

# start with a uniform distribution for lambda
lamPrior = np.linspace(0.5, 1.8, 25)
lam = Lambda(lamPrior)
thinkplot.Pdf(lam, label='prior')

lam.UpdateSet(interTimes)
thinkplot.Pdf(lam, label='posterior')
thinkplot.decorate(title="PMF for $\lambda$", xlabel="$\lambda$ (pieces/s)", ylabel="PMF")
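For readers without `thinkbayes2`, the same grid update can be sketched with the stdlib alone; the interarrival times below are hypothetical stand-ins for the wireshark data:

```python
from math import exp

lams = [0.5 + 0.05 * i for i in range(27)]  # candidate lambda values, 0.5..1.8
post = [1.0 / len(lams)] * len(lams)        # uniform prior

inter_times = [1.1, 0.8, 1.4, 0.9, 1.2]     # hypothetical interarrivals (s)

for t in inter_times:
    # multiply by the exponential interarrival likelihood, then renormalize
    post = [p * lam * exp(-lam * t) for p, lam in zip(post, lams)]
    total = sum(post)
    post = [p / total for p in post]

mean_lam = sum(l * p for l, p in zip(lams, post))
print(f"posterior mean lambda: {mean_lam:.3f}")
```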
And we can implement all the functions we defined above:
def P_n(n, t, lam):
    """probability of having exactly n pieces at time t for rate lambda"""
    return poisson.pmf(n, lam * t)

def P_p(t, n_0, lam):
    """probability of having a specific piece at time t for rate lambda"""
    # all the numbers of pieces there could be
    ns = np.array(range(n_0, N + 1))
    # the probabilities of having them
    ps = P_n(ns - n_0, t, lam)
    # the total probability
    # (since we are cutting off the poisson distribution at N
    # this is not always 1)
    P = np.sum(ps)
    if(P == 0):
        # if lam*t is so large that we have cut off the whole poisson distribution, we can
        # just assume that we will have downloaded the whole movie
        return 1
    return np.sum(ns * ps) / (N * P)

def P(t, n_0, lam):
    """probability of getting to time t without missing a piece"""
    # total pieces we will need
    nt = ceil(t / t_p)
    # times we need each piece at
    ts = np.array(range(nt)) * t_p
    # probabilities of having each piece in time
    ps = np.array([P_p(t, n_0, lam) for t in ts])
    # total probability
    return np.product(ps)
With those done, we can make our final $P(t,x)$ function, which will give us the probability of getting to time $t$ if we start at $x$ percent downloaded with our derived distribution for $\lambda$.
def PWatch(t,x):
    """Probability of getting to time t with initial download percentage x"""
    #initial piece number approximation
    n0 = floor(x*N/100)
    Ptot = 0
    #law of total probability
    for l,p in lam.Items():
        Ptot += p*P(t,n0,l)
    return Ptot
_____no_output_____
MIT
examples/.ipynb_checkpoints/Greedy pirates-checkpoint.ipynb
sportsracer48/ThinkBayes2
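The marginalization inside `PWatch` is just the law of total probability: weight each conditional probability by the posterior mass of its $\lambda$ and sum. The pattern with toy numbers (the distribution and the stand-in conditional probability here are invented for illustration):

```python
# toy posterior over lambda: {value: probability}
posterior = {0.8: 0.2, 1.0: 0.5, 1.2: 0.3}

def p_success_given(lam):
    # stand-in for P(t, n0, lam); any conditional probability works here
    return min(1.0, lam / 1.5)

# law of total probability: sum over l of P(success | l) * P(l)
p_total = sum(p * p_success_given(l) for l, p in posterior.items())
# p_total = 0.2*(0.8/1.5) + 0.5*(1.0/1.5) + 0.3*(1.2/1.5) = 0.68
```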
Unfortunately that function is prohibitively slow. We can speed it up quite a lot by making our $P_p$ function less accurate but much faster. We will approximate it by $$P_p(t)=\frac{\min(\lambda t+n_0,N)}{N}$$ which amounts to assuming that we get one piece every $1/\lambda$ seconds. This ignores the uncertainty of the Poisson distribution, but is much faster to calculate since it does not involve a sum.
def P_p_fast(t,n_0,lam):
    return min(lam*t+n_0,N)/N

testLam = lam.Mean()
ts = np.linspace(0,4000)
ps = np.array([P_p(t,0,testLam) for t in ts])
psFast = np.array([P_p_fast(t,0,testLam) for t in ts])

thinkplot.plot(ts,ps,label='Correct')
thinkplot.plot(ts,psFast,label='Fast')
thinkplot.decorate(title='Probability of having a specific piece over time',
                   xlabel='time (s)',
                   ylabel='probability')
_____no_output_____
MIT
examples/.ipynb_checkpoints/Greedy pirates-checkpoint.ipynb
sportsracer48/ThinkBayes2
From the graph we can see that this is an acceptable approximation. With that done, we can start making graphs and answering the original question.
P_p = P_p_fast #use the fast function from now on

ts = np.linspace(0,500)
xs = [50,90,95,99]
for x in xs:
    ps = [PWatch(t,x) for t in ts]
    thinkplot.plot(ts,ps,label=f'start at {x}%')
thinkplot.decorate(title='Probability of getting to different times in the movie',
                   xlabel='Time (s)',
                   ylabel='Probability')
_____no_output_____
MIT
examples/.ipynb_checkpoints/Greedy pirates-checkpoint.ipynb
sportsracer48/ThinkBayes2
That graph is zoomed in near the start of the movie, but here's what it looks like over the whole runtime:
ts = np.linspace(0,T)
xs = [50,90,95,99]
for x in xs:
    ps = [PWatch(t,x) for t in ts]
    thinkplot.plot(ts,ps,label=f'start at {x}%')
thinkplot.decorate(title='Probability of getting to different times in the movie',
                   xlabel='Time (s)',
                   ylabel='Probability')
_____no_output_____
MIT
examples/.ipynb_checkpoints/Greedy pirates-checkpoint.ipynb
sportsracer48/ThinkBayes2
So we can see there is a definite falling-off period, and after that we will probably finish the movie. With that in mind, we can ask what the probability of finishing the movie is for different starting percentages.
xs = np.linspace(0,100)
ps = [PWatch(T,x) for x in xs]
thinkplot.plot(xs,ps)
thinkplot.decorate(title='Probability of finishing movie',
                   xlabel='Starting percent downloaded',
                   ylabel='Probability of finishing movie')
_____no_output_____
MIT
examples/.ipynb_checkpoints/Greedy pirates-checkpoint.ipynb
sportsracer48/ThinkBayes2
Here's the nonzero portion of that graph:
xs = np.linspace(90,100)
ps = [PWatch(T,x) for x in xs]
thinkplot.plot(xs,ps)
thinkplot.decorate(title='Probability of finishing movie',
                   xlabel='Starting percent downloaded',
                   ylabel='Probability of finishing movie')
_____no_output_____
MIT
examples/.ipynb_checkpoints/Greedy pirates-checkpoint.ipynb
sportsracer48/ThinkBayes2
Notebook for PAN - Authorship Attribution - 2018
%matplotlib inline

#python basic libs
from __future__ import print_function
from tempfile import mkdtemp
from shutil import rmtree
import os;
from os.path import join as pathjoin;
import re;
import glob;
import json;
import codecs;
from collections import defaultdict;
import pprint;
from pprint import pprint
from time import time
import logging

#data analysis libs
import numpy as np;
import pandas as pd;
import matplotlib.pyplot as plt;
import random;

#machine learning libs
#feature extraction
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer

#preprocessing and transformation
from sklearn.preprocessing import normalize, MaxAbsScaler, MinMaxScaler;
from sklearn.preprocessing import LabelBinarizer;
from sklearn.decomposition import PCA;
from sklearn.metrics.pairwise import cosine_similarity;
from sklearn.base import BaseEstimator, ClassifierMixin

#classifiers
from sklearn.svm import LinearSVC, SVC
from sklearn.multiclass import OneVsOneClassifier, OneVsRestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier
from sklearn.feature_selection import RFE,SelectFpr,SelectPercentile, chi2;
# from sklearn.ensemble import AdaBoostClassifier, BaggingClassifier
from sklearn.ensemble import VotingClassifier
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import Pipeline

#model evaluation
from sklearn.model_selection import train_test_split;
from sklearn.metrics import roc_auc_score, f1_score, precision_score, recall_score, accuracy_score;

import seaborn as sns;
sns.set(color_codes=True);
from pandas.plotting import scatter_matrix

import platform; print(platform.platform())
print("NumPy", np.__version__)
import scipy; print("SciPy", scipy.__version__)
import sklearn; print("Scikit-Learn", sklearn.__version__)
print("seaborn", sns.__version__)
Darwin-17.5.0-x86_64-i386-64bit NumPy 1.14.2 SciPy 1.0.1 Scikit-Learn 0.19.1 seaborn 0.8.1
Apache-2.0
2018/PAN_AA_2018-char-simplified.ipynb
jeleandro/PANAA2018
paths configuration
baseDir = '/Users/joseeleandrocustodio/Dropbox/mestrado/02 - Pesquisa/code';

inputDir= pathjoin(baseDir,'pan18aa');
outputDir= pathjoin(baseDir,'out',"oficial");

if not os.path.exists(outputDir):
    os.mkdir(outputDir);
_____no_output_____
Apache-2.0
2018/PAN_AA_2018-char-simplified.ipynb
jeleandro/PANAA2018
loading the dataset
def readCollectionsOfProblems(path):
    # Reading information about the collection
    infocollection = path+os.sep+'collection-info.json'
    with open(infocollection, 'r') as f:
        problems = [
            {
                'problem': attrib['problem-name'],
                'language': attrib['language'],
                'encoding': attrib['encoding'],
            }
            for attrib in json.load(f)
        ]
    return problems;

problems = readCollectionsOfProblems(inputDir);
problems[0]

def readProblem(path, problem):
    # Reading information about the problem
    infoproblem = path+os.sep+problem+os.sep+'problem-info.json'
    candidates = []
    with open(infoproblem, 'r') as f:
        fj = json.load(f)
        unk_folder = fj['unknown-folder']
        for attrib in fj['candidate-authors']:
            candidates.append(attrib['author-name'])
    return unk_folder, candidates;

def read_files(path,label):
    # Reads all text files located in the 'path' and assigns them to 'label' class
    files = glob.glob(pathjoin(path,label,'*.txt'))
    texts=[]
    for i,v in enumerate(files):
        f=codecs.open(v,'r',encoding='utf-8')
        texts.append((f.read(),label, os.path.basename(v)))
        f.close()
    return texts

for index,problem in enumerate(problems):
    unk_folder, candidates_folder = readProblem(inputDir, problem['problem']);
    problem['candidates_folder_count'] = len(candidates_folder);
    problem['candidates'] = [];
    for candidate in candidates_folder:
        problem['candidates'].extend(read_files(pathjoin(inputDir, problem['problem']),candidate));
    problem['unknown'] = read_files(pathjoin(inputDir, problem['problem']),unk_folder);

pd.DataFrame(problems)

#*******************************************************************************************************
import warnings
from sklearn.metrics import f1_score, precision_score, recall_score, accuracy_score
from sklearn.preprocessing import LabelEncoder

def eval_measures(gt, pred):
    """Compute macro-averaged F1-scores, macro-averaged precision,
    macro-averaged recall, and micro-averaged accuracy according the
    ad hoc rules discussed at the top of this file.

    Parameters
    ----------
    gt : dict
        Ground truth, where keys indicate text file names
        (e.g. `unknown00002.txt`), and values represent
        author labels (e.g. `candidate00003`)
    pred : dict
        Predicted attribution, with the same key/value convention

    Returns
    -------
    f1 : float
        Macro-averaged F1-score
    precision : float
        Macro-averaged precision
    recall : float
        Macro-averaged recall
    accuracy : float
        Micro-averaged accuracy
    """
    actual_authors = list(gt.values())
    encoder = LabelEncoder().fit(['<UNK>'] + actual_authors)

    text_ids, gold_authors, silver_authors = [], [], []
    for text_id in sorted(gt):
        text_ids.append(text_id)
        gold_authors.append(gt[text_id])
        try:
            silver_authors.append(pred[text_id])
        except KeyError:
            # missing attributions get <UNK>:
            silver_authors.append('<UNK>')

    assert len(text_ids) == len(gold_authors)
    assert len(text_ids) == len(silver_authors)

    # replace non-existent silver authors with '<UNK>':
    silver_authors = [a if a in encoder.classes_ else '<UNK>'
                      for a in silver_authors]

    gold_author_ints = encoder.transform(gold_authors)
    silver_author_ints = encoder.transform(silver_authors)

    # get scores for individual classes (and suppress warnings):
    with warnings.catch_warnings():
        warnings.simplefilter('ignore')
        f1 = f1_score(gold_author_ints, silver_author_ints,
                      labels=list(set(gold_author_ints)), average='macro')
        precision = precision_score(gold_author_ints, silver_author_ints,
                                    labels=list(set(gold_author_ints)), average='macro')
        recall = recall_score(gold_author_ints, silver_author_ints,
                              labels=list(set(gold_author_ints)), average='macro')
        accuracy = accuracy_score(gold_author_ints, silver_author_ints)
    return f1, precision, recall, accuracy

def evaluate(ground_truth_file, predictions_file):
    # Calculates evaluation measures for a single attribution problem
    gt = {}
    with open(ground_truth_file, 'r') as f:
        for attrib in json.load(f)['ground_truth']:
            gt[attrib['unknown-text']] = attrib['true-author']

    pred = {}
    with open(predictions_file, 'r') as f:
        for attrib in json.load(f):
            if attrib['unknown-text'] not in pred:
                pred[attrib['unknown-text']] = attrib['predicted-author']
    f1, precision, recall, accuracy = eval_measures(gt, pred)
    return f1, precision, recall, accuracy

from sklearn.base import BaseEstimator
from scipy.sparse import issparse

class DenseTransformer(BaseEstimator):
    """Convert a sparse array into a dense array."""
    def __init__(self, return_copy=True):
        self.return_copy = return_copy
        self.is_fitted = False

    def transform(self, X, y=None):
        """Return a dense version of the input array."""
        if issparse(X):
            return X.toarray()
        elif self.return_copy:
            return X.copy()
        else:
            return X

    def fit(self, X, y=None):
        """Mock method. Does nothing."""
        self.is_fitted = True
        return self

    def fit_transform(self, X, y=None):
        """Return a dense version of the input array."""
        return self.transform(X=X, y=y)
_____no_output_____
Apache-2.0
2018/PAN_AA_2018-char-simplified.ipynb
jeleandro/PANAA2018
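The key convention in `eval_measures` above is that missing or out-of-vocabulary predictions are mapped to a reserved `<UNK>` label before macro-averaging. A pure-Python sketch of that behavior with invented labels (per-class precision/recall computed by hand rather than via scikit-learn):

```python
def macro_prf(gold, pred):
    """Macro-averaged precision and recall over the classes seen in gold."""
    classes = sorted(set(gold))
    precisions, recalls = [], []
    for c in classes:
        tp = sum(1 for g, p in zip(gold, pred) if g == c and p == c)
        fp = sum(1 for g, p in zip(gold, pred) if g != c and p == c)
        fn = sum(1 for g, p in zip(gold, pred) if g == c and p != c)
        precisions.append(tp / (tp + fp) if tp + fp else 0.0)
        recalls.append(tp / (tp + fn) if tp + fn else 0.0)
    n = len(classes)
    return sum(precisions) / n, sum(recalls) / n

gt = {'u1.txt': 'cand1', 'u2.txt': 'cand2', 'u3.txt': 'cand1'}
pred = {'u1.txt': 'cand1'}  # u2/u3 missing -> treated as '<UNK>'

gold = [gt[k] for k in sorted(gt)]
silver = [pred.get(k, '<UNK>') for k in sorted(gt)]
p, r = macro_prf(gold, silver)
# p = 0.5 (cand1: 1.0, cand2: 0.0), r = 0.25 (cand1: 0.5, cand2: 0.0)
```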
examining the `min_df` parameter in isolation
def runML(problem):
    print ("\nProblem: %s, language: %s, " %(problem['problem'],problem['language']))

    train_docs, train_labels, _ = zip(*problem['candidates'])
    problem['training_docs_size'] = len(train_docs);
    test_docs, _, test_filename = zip(*problem['unknown'])

    pipeline = Pipeline([
        ('vect', TfidfVectorizer(analyzer='char',
                                 min_df=0.05,
                                 max_df=1.0,
                                 norm='l1',
                                 ngram_range=(3,5),
                                 sublinear_tf=True,
                                 smooth_idf=True,
                                 lowercase=False)),
        ('dense', DenseTransformer()),
        ('scaler', MaxAbsScaler()),
        ('transf', PCA(0.999)),
        ('clf', LogisticRegression(random_state=0, multi_class='multinomial', solver='newton-cg')),
    ])

    # uncommenting more parameters will give better exploring power but will
    # increase processing time in a combinatorial way
    parameters = {
        'vect__min_df': (2, 0.01, 0.05, 0.1)
    }

    grid_search = GridSearchCV(pipeline, parameters, cv=5, n_jobs=-1, verbose=False, scoring='f1_macro')

    print("Performing grid search...")
    t0 = time()
    grid_search.fit(train_docs, train_labels)
    print("done in %0.3fs" % (time() - t0))

    print("Best score: %0.3f" % grid_search.best_score_)
    print("Best parameters set:")
    best_parameters = grid_search.best_estimator_.get_params()
    for param_name in sorted(parameters.keys()):
        print("\t%s: %r" % (param_name, best_parameters[param_name]))

    train_pred = grid_search.predict(train_docs);
    test_pred = grid_search.predict(test_docs);

    # Writing output file
    out_data = []
    for i, v in enumerate(test_pred):
        out_data.append({'unknown-text': test_filename[i], 'predicted-author': v})
    answerFile = pathjoin(outputDir, 'answers-'+problem['problem']+'.json');
    with open(answerFile, 'w') as f:
        json.dump(out_data, f, indent=4)

    # evaluation on train
    f1, precision, recall, accuracy = evaluate(
        pathjoin(inputDir, problem['problem'], 'ground-truth.json'),
        answerFile)

    return {
        'problem-name'        : problem['problem'],
        "language"            : problem['language'],
        'AuthorCount'         : len(set(train_labels)),
        "train_doc_size"      : len(train_docs),
        "train_caract_per_doc": sum([len(l) for l in train_docs])/len(train_docs),
        "test_doc_size"       : len(test_docs),
        "test_caract_per_doc" : sum([len(l) for l in test_docs])/len(test_docs),
        'macro-f1'            : round(f1,3),
        'macro-precision'     : round(precision,3),
        'macro-recall'        : round(recall,3),
        'micro-accuracy'      : round(accuracy,3),
    }, grid_search.cv_results_, best_parameters;

result = [];
cv_result = [];
best_parameters = [];
for problem in problems:
    r, c, b = runML(problem);
    result.append(r);
    cv_result.append(c);
    b['problem'] = problem['problem'];
    best_parameters.append(b);

pd.DataFrame(best_parameters)[['problem','vect__min_df']]
_____no_output_____
Apache-2.0
2018/PAN_AA_2018-char-simplified.ipynb
jeleandro/PANAA2018
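The grid over `vect__min_df` above is pruning rare character n-grams by document frequency before TF-IDF weighting. The effect of a fractional `min_df` can be sketched in pure Python (toy corpus and threshold; this mirrors the idea rather than scikit-learn's implementation):

```python
def char_ngrams(text, n):
    """Set of character n-grams occurring in the text."""
    return {text[i:i + n] for i in range(len(text) - n + 1)}

def vocabulary(corpus, n=3, min_df=0.5):
    """Keep n-grams that appear in at least min_df (fraction) of the documents."""
    docs = [char_ngrams(doc, n) for doc in corpus]
    counts = {}
    for grams in docs:
        for g in grams:
            counts[g] = counts.get(g, 0) + 1
    threshold = min_df * len(docs)
    return {g for g, c in counts.items() if c >= threshold}

corpus = ["the cat sat", "the dog sat", "a cat ran"]
vocab = vocabulary(corpus, n=3, min_df=2 / 3)
# common trigrams like "the" and "at " survive; rare ones like "dog" are pruned
```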
analyzing the remaining parameters
def runML(problem):
    print ("\nProblem: %s, language: %s, " %(problem['problem'],problem['language']))

    train_docs, train_labels, _ = zip(*problem['candidates'])
    problem['training_docs_size'] = len(train_docs);
    test_docs, _, test_filename = zip(*problem['unknown'])

    pipeline = Pipeline([
        ('vect', TfidfVectorizer(analyzer='char',
                                 min_df=0.01,
                                 max_df=1.0,
                                 norm='l1',
                                 lowercase=False,
                                 sublinear_tf=True)),
        ('dense', DenseTransformer()),
        ('scaler', MaxAbsScaler()),
        ('transf', PCA()),
        ('clf', LogisticRegression(random_state=0, multi_class='multinomial', solver='newton-cg')),
    ])

    # uncommenting more parameters will give better exploring power but will
    # increase processing time in a combinatorial way
    parameters = {
        'vect__ngram_range'   : ((2,3),(2,4),(2,5),(3,5)),
        'vect__sublinear_tf'  : (True, False),
        'vect__norm'          : ('l1','l2'),
        'transf__n_components': (0.1,0.25,0.5,0.75,0.9,0.99),
    }

    grid_search = GridSearchCV(pipeline, parameters, cv=3, n_jobs=-1, verbose=False, scoring='f1_macro')

    print("Performing grid search...")
    t0 = time()
    grid_search.fit(train_docs, train_labels)
    print("done in %0.3fs" % (time() - t0))

    print("Best score: %0.3f" % grid_search.best_score_)
    print("Best parameters set:")
    best_parameters = grid_search.best_estimator_.get_params()
    for param_name in sorted(parameters.keys()):
        print("\t%s: %r" % (param_name, best_parameters[param_name]))

    train_pred = grid_search.predict(train_docs);
    test_pred = grid_search.predict(test_docs);

    # Writing output file
    out_data = []
    for i, v in enumerate(test_pred):
        out_data.append({'unknown-text': test_filename[i], 'predicted-author': v})
    answerFile = pathjoin(outputDir, 'answers-'+problem['problem']+'.json');
    with open(answerFile, 'w') as f:
        json.dump(out_data, f, indent=4)

    # evaluation on train
    f1, precision, recall, accuracy = evaluate(
        pathjoin(inputDir, problem['problem'], 'ground-truth.json'),
        answerFile)

    return {
        'problem-name'        : problem['problem'],
        "language"            : problem['language'],
        'AuthorCount'         : len(set(train_labels)),
        "train_doc_size"      : len(train_docs),
        "train_caract_per_doc": sum([len(l) for l in train_docs])/len(train_docs),
        "test_doc_size"       : len(test_docs),
        "test_caract_per_doc" : sum([len(l) for l in test_docs])/len(test_docs),
        'macro-f1'            : round(f1,3),
        'macro-precision'     : round(precision,3),
        'macro-recall'        : round(recall,3),
        'micro-accuracy'      : round(accuracy,3),
    }, grid_search.cv_results_, best_parameters;

result = [];
cv_result = [];
best_parameters = [];
for problem in problems:
    r, c, b = runML(problem);
    result.append(r);
    cv_result.append(c);
    b['problem'] = problem['problem'];
    best_parameters.append(b);

df = pd.DataFrame(result)[['problem-name', "language", 'AuthorCount',
                           "train_doc_size","train_caract_per_doc",
                           "test_doc_size", "test_caract_per_doc",
                           'macro-f1','macro-precision','macro-recall','micro-accuracy']]
df

print(df[["macro-f1"]].reset_index().to_latex(index=False).replace(" "," "))

languages = {
    'en':'inglesa',
    'sp':'espanhola',
    'it':'italiana',
    'pl':'polonesa',
    'fr':'francesa'
}

cv_result2 = [];
dfCV = pd.DataFrame();
for i, c in enumerate(cv_result):
    temp = pd.DataFrame(c);
    temp['problem'] = i+1;
    temp['language'] = languages[problems[i]['language']]
    dfCV = dfCV.append(temp);

for p in ['param_transf__n_components',
          'mean_test_score','std_test_score','mean_train_score',
          'split0_test_score','split0_train_score',
          'split1_test_score','split1_train_score',
          'split2_test_score','split2_train_score']:
    dfCV[p] = dfCV[p].astype(np.float32);

dfCV = dfCV[[
    'problem',
    'language',
    'rank_test_score',
    'param_transf__n_components',
    'param_vect__ngram_range',
    'param_vect__sublinear_tf',
    'param_vect__norm',
    'mean_test_score',
    'std_test_score',
    'mean_train_score',
    'split0_test_score','split0_train_score',
    'split1_test_score','split1_train_score',
    'split2_test_score','split2_train_score',
    'mean_score_time',
    'mean_fit_time',
    'std_fit_time',
    'std_score_time',
    'std_train_score',
]];

dfCV.rename(columns={
    'param_transf__n_components':'PCA_componentes',
    'param_vect__ngram_range':'ngram_range',
    'param_vect__sublinear_tf':'sublinear_tf',
    'param_vect__smooth_idf':'smooth_idf',
    'param_vect__norm':'norm'
}, inplace=True);

#print('\',\n\''.join(dfCV.columns))
dfCV.to_csv('PANAA2018_CHAR.csv', index=False)
dfCV = pd.read_csv('PANAA2018_CHAR.csv', na_values='')

(dfCV[dfCV.rank_test_score == 1])[
    ['problem', 'language', 'rank_test_score',
     'mean_test_score', 'std_test_score',
     'ngram_range', 'sublinear_tf', 'norm', 'PCA_componentes']
].sort_values(by=['problem', 'mean_test_score', 'ngram_range',
                  'sublinear_tf', 'PCA_componentes'],
              ascending=[True, False, False, False, False])

dfCV.pivot_table(
    index=['problem','language','PCA_componentes'],
    columns=['norm','sublinear_tf', 'ngram_range'],
    values='mean_test_score'
)

pd.options.display.precision = 3
print(u"\\begin{table}[h]\n\\centering\n\\caption{Medida F1 para os parâmetros }")
print(re.sub(r'[ ]{2,}',' ',
    dfCV[dfCV.PCA_componentes >= 0.99].pivot_table(
        index=['problem','language','sublinear_tf','norm'],
        columns=['ngram_range'],
        values='mean_test_score'
    ).to_latex()))
print("\\label{tab:modelocaracter}")
print(r"\end{table}")
\begin{table}[h] \centering \caption{Medida F1 para os parâmetros } \begin{tabular}{llllrrrr} \toprule & & & ngram\_range & (2, 3) & (2, 4) & (2, 5) & (3, 5) \\ problem & language & sublinear\_tf & norm & & & & \\ \midrule 1 & inglesa & False & l1 & 0.680 & 0.787 & 0.804 & 0.803 \\ & & & l2 & 0.670 & 0.760 & 0.715 & 0.734 \\ & & True & l1 & 0.791 & 0.827 & 0.833 & 0.827 \\ & & & l2 & 0.816 & 0.818 & 0.819 & 0.826 \\ 2 & inglesa & False & l1 & 0.883 & 0.940 & 0.940 & 0.971 \\ & & & l2 & 0.883 & 0.879 & 0.940 & 0.940 \\ & & True & l1 & 0.883 & 0.971 & 0.971 & 0.971 \\ & & & l2 & 0.910 & 0.971 & 0.971 & 0.971 \\ 3 & francesa & False & l1 & 0.800 & 0.782 & 0.772 & 0.772 \\ & & & l2 & 0.794 & 0.761 & 0.732 & 0.724 \\ & & True & l1 & 0.778 & 0.788 & 0.762 & 0.775 \\ & & & l2 & 0.786 & 0.769 & 0.763 & 0.776 \\ 4 & francesa & False & l1 & 0.775 & 0.800 & 0.825 & 0.825 \\ & & & l2 & 0.744 & 0.800 & 0.854 & 0.854 \\ & & True & l1 & 0.744 & 0.800 & 0.854 & 0.854 \\ & & & l2 & 0.799 & 0.828 & 0.854 & 0.854 \\ 5 & italiana & False & l1 & 0.654 & 0.629 & 0.643 & 0.651 \\ & & & l2 & 0.628 & 0.550 & 0.393 & 0.409 \\ & & True & l1 & 0.681 & 0.665 & 0.677 & 0.666 \\ & & & l2 & 0.682 & 0.688 & 0.659 & 0.660 \\ 6 & italiana & False & l1 & 0.940 & 0.971 & 0.940 & 0.940 \\ & & & l2 & 0.880 & 0.971 & 0.940 & 0.940 \\ & & True & l1 & 0.911 & 0.940 & 0.910 & 0.910 \\ & & & l2 & 0.911 & 0.971 & 0.940 & 0.940 \\ 7 & polonesa & False & l1 & 0.782 & 0.793 & 0.770 & 0.770 \\ & & & l2 & 0.748 & 0.723 & 0.713 & 0.707 \\ & & True & l1 & 0.784 & 0.816 & 0.814 & 0.799 \\ & & & l2 & 0.784 & 0.810 & 0.814 & 0.799 \\ 8 & polonesa & False & l1 & 0.712 & 0.810 & 0.810 & 0.810 \\ & & & l2 & 0.760 & 0.810 & 0.845 & 0.845 \\ & & True & l1 & 0.810 & 0.810 & 0.810 & 0.760 \\ & & & l2 & 0.810 & 0.810 & 0.810 & 0.810 \\ 9 & espanhola & False & l1 & 0.823 & 0.848 & 0.871 & 0.848 \\ & & & l2 & 0.831 & 0.823 & 0.784 & 0.740 \\ & & True & l1 & 0.862 & 0.881 & 0.894 & 0.886 \\ & & & l2 & 0.879 & 0.888 & 0.879 & 
0.879 \\ 10 & espanhola & False & l1 & 0.912 & 0.909 & 0.882 & 0.882 \\ & & & l2 & 0.882 & 0.909 & 0.882 & 0.939 \\ & & True & l1 & 0.882 & 0.909 & 0.882 & 0.882 \\ & & & l2 & 0.882 & 0.909 & 0.882 & 0.882 \\ \bottomrule \end{tabular} \label{tab:modelocaracter} \end{table}
Apache-2.0
2018/PAN_AA_2018-char-simplified.ipynb
jeleandro/PANAA2018
Example usage

To use `webtoon_data` in a project:
import webtoon_data

print(webtoon_data.__version__)
_____no_output_____
MIT
docs/example.ipynb
ekimz/webtoons_data
Copyright 2018 The TensorFlow Authors.
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

#@title MIT License
#
# Copyright (c) 2017 François Chollet
#
# Permission is hereby granted, free of charge, to any person obtaining a
# copy of this software and associated documentation files (the "Software"),
# to deal in the Software without restriction, including without limitation
# the rights to use, copy, modify, merge, publish, distribute, sublicense,
# and/or sell copies of the Software, and to permit persons to whom the
# Software is furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
# THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
# FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
# DEALINGS IN THE SOFTWARE.
_____no_output_____
Apache-2.0
site/en/2/tutorials/keras/save_and_restore_models.ipynb
allenlavoie/docs
Save and restore models

Model progress can be saved during—and after—training. This means a model can resume where it left off and avoid long training times. Saving also means you can share your model and others can recreate your work. When publishing research models and techniques, most machine learning practitioners share:

* code to create the model, and
* the trained weights, or parameters, for the model

Sharing this data helps others understand how the model works and try it themselves with new data.

Caution: Be careful with untrusted code—TensorFlow models are code. See [Using TensorFlow Securely](https://github.com/tensorflow/tensorflow/blob/master/SECURITY.md) for details.

Options

There are different ways to save TensorFlow models—depending on the API you're using. This guide uses [tf.keras](https://www.tensorflow.org/guide/keras), a high-level API to build and train models in TensorFlow. For other approaches, see the TensorFlow [Save and Restore](https://www.tensorflow.org/guide/saved_model) guide or [Saving in eager](https://www.tensorflow.org/guide/eager#object_based_saving).

Setup

Installs and imports

Install and import TensorFlow and dependencies:
!pip install h5py pyyaml
_____no_output_____
Apache-2.0
site/en/2/tutorials/keras/save_and_restore_models.ipynb
allenlavoie/docs
Get an example dataset

We'll use the [MNIST dataset](http://yann.lecun.com/exdb/mnist/) to train our model to demonstrate saving weights. To speed up these demonstration runs, only use the first 1000 examples:
from __future__ import absolute_import, division, print_function

import os

!pip install tf-nightly-2.0-preview
import tensorflow as tf
keras = tf.keras

tf.__version__

(train_images, train_labels), (test_images, test_labels) = tf.keras.datasets.mnist.load_data()

train_labels = train_labels[:1000]
test_labels = test_labels[:1000]

train_images = train_images[:1000].reshape(-1, 28 * 28) / 255.0
test_images = test_images[:1000].reshape(-1, 28 * 28) / 255.0
_____no_output_____
Apache-2.0
site/en/2/tutorials/keras/save_and_restore_models.ipynb
allenlavoie/docs
Define a model Let's build a simple model we'll use to demonstrate saving and loading weights.
# Returns a short sequential model
def create_model():
    model = tf.keras.models.Sequential([
        keras.layers.Dense(512, activation=tf.keras.activations.relu, input_shape=(784,)),
        keras.layers.Dropout(0.2),
        keras.layers.Dense(10, activation=tf.keras.activations.softmax)
    ])

    model.compile(optimizer='adam',
                  loss=tf.keras.losses.sparse_categorical_crossentropy,
                  metrics=['accuracy'])

    return model

# Create a basic model instance
model = create_model()
model.summary()
_____no_output_____
Apache-2.0
site/en/2/tutorials/keras/save_and_restore_models.ipynb
allenlavoie/docs
Save checkpoints during training

The primary use case is to automatically save checkpoints *during* and at *the end* of training. This way you can use a trained model without having to retrain it, or pick up training where you left off—in case the training process was interrupted.

`tf.keras.callbacks.ModelCheckpoint` is a callback that performs this task. The callback takes a couple of arguments to configure checkpointing.

Checkpoint callback usage

Train the model and pass it the `ModelCheckpoint` callback:
checkpoint_path = "training_1/cp.ckpt"
checkpoint_dir = os.path.dirname(checkpoint_path)

# Create checkpoint callback
cp_callback = tf.keras.callbacks.ModelCheckpoint(checkpoint_path,
                                                 save_weights_only=True,
                                                 verbose=1)

model = create_model()

model.fit(train_images, train_labels, epochs=10,
          validation_data=(test_images, test_labels),
          callbacks=[cp_callback])  # pass callback to training
_____no_output_____
Apache-2.0
site/en/2/tutorials/keras/save_and_restore_models.ipynb
allenlavoie/docs
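The callback mechanism itself is simple: after every epoch, the trainer hands control to each registered callback, and a checkpoint callback persists the current parameters. A language-level sketch of that pattern in pure Python (this illustrates the idea only, not TensorFlow's implementation; the names and the toy "training step" are invented):

```python
import json
import os
import tempfile

class Checkpoint:
    """Minimal sketch of an epoch-end checkpoint callback:
    overwrite the file with the current parameters after every epoch."""
    def __init__(self, path):
        self.path = path

    def on_epoch_end(self, epoch, weights):
        with open(self.path, "w") as f:
            json.dump({"epoch": epoch, "weights": weights}, f)

def train(epochs, callbacks):
    weights = [0.0]
    for epoch in range(epochs):
        weights = [w + 0.1 for w in weights]  # stand-in for a training step
        for cb in callbacks:
            cb.on_epoch_end(epoch, weights)
    return weights

path = os.path.join(tempfile.mkdtemp(), "cp.json")
train(3, [Checkpoint(path)])

with open(path) as f:
    state = json.load(f)
# the file always holds the latest epoch's weights
```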
This creates a single collection of TensorFlow checkpoint files that are updated at the end of each epoch:
!ls {checkpoint_dir}
_____no_output_____
Apache-2.0
site/en/2/tutorials/keras/save_and_restore_models.ipynb
allenlavoie/docs
Create a new, untrained model. When restoring a model from weights only, you must have a model with the same architecture as the original model. Since it's the same model architecture, we can share weights even though it's a different *instance* of the model.

Now rebuild a fresh, untrained model and evaluate it on the test set. An untrained model will perform at chance levels (~10% accuracy):
model = create_model()

loss, acc = model.evaluate(test_images, test_labels)
print("Untrained model, accuracy: {:5.2f}%".format(100*acc))
_____no_output_____
Apache-2.0
site/en/2/tutorials/keras/save_and_restore_models.ipynb
allenlavoie/docs
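The ~10% chance-level figure is easy to sanity-check: uniformly random guesses over 10 balanced classes are right about one time in ten. A seeded toy simulation, independent of the model above:

```python
import random

random.seed(0)
classes = list(range(10))

labels = [random.choice(classes) for _ in range(10000)]
guesses = [random.choice(classes) for _ in range(10000)]

accuracy = sum(g == l for g, l in zip(guesses, labels)) / len(labels)
# accuracy comes out close to 1/10
```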
Then load the weights from the checkpoint, and re-evaluate:
model.load_weights(checkpoint_path)
loss, acc = model.evaluate(test_images, test_labels)
print("Restored model, accuracy: {:5.2f}%".format(100*acc))
_____no_output_____
Apache-2.0
site/en/2/tutorials/keras/save_and_restore_models.ipynb
allenlavoie/docs