So df3 has 500 records and 3 columns. The data represents factory production numbers and reported numbers of defects on certain days of the week. 1. Recreate this scatter plot of 'produced' vs 'defective'. Note the color and size of the points. Also note the figure size. See if you can figure out how to stretch it in a similar fashion.
# 1. Recreate this scatter plot of 'produced' vs 'defective'. Note the color and size of the points.
# Also note the figure size. See if you can figure out how to stretch it in a similar fashion.
# CODE HERE
df3.plot.scatter(x='produced', y='defective', c='red', figsize=(12,3), s=20)
# DON'T WRITE HERE
MIT
pacote-download/02-Pandas-Data-Visualization-Exercise.ipynb
stephacastro/Coleta_dados
2. Create a histogram of the 'produced' column.
# 2. Create a histogram of the 'produced' column.
df3['produced'].plot.hist()
# DON'T WRITE HERE
3. Recreate the following histogram of 'produced', tightening the x-axis and adding lines between bars.
# 3. Recreate the following histogram of 'produced', tightening the x-axis and adding lines between bars.
ax = df3['produced'].plot.hist(edgecolor='k')
ax.autoscale(axis='x', tight=True)
# DON'T WRITE HERE
4. Create a boxplot that shows 'produced' for each 'weekday' (hint: this is a groupby operation)
# 4. Create a boxplot that shows 'produced' for each 'weekday' (hint: this is a groupby operation).
df3[['weekday','produced']].boxplot(by='weekday', figsize=(12,5))
# DON'T WRITE HERE
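As a hedged side note, the groupby hint can be made explicit on synthetic data (the real df3 isn't reproduced here; the `demo` frame and its values are hypothetical, only the column names 'weekday' and 'produced' follow the exercise):

```python
import numpy as np
import pandas as pd

# Synthetic stand-in for df3 (hypothetical data, for illustration only)
rng = np.random.default_rng(0)
demo = pd.DataFrame({
    'weekday': rng.choice(['Mon', 'Tue', 'Wed'], size=30),
    'produced': rng.integers(50, 150, size=30),
})

# DataFrame.boxplot(by='weekday') groups internally; an explicit groupby
# exposes the same per-weekday splits that each box is drawn from
groups = {day: grp['produced'].to_numpy() for day, grp in demo.groupby('weekday')}
print({day: len(vals) for day, vals in groups.items()})
```

In the exercise itself, `boxplot(by='weekday')` performs this grouping and draws one box per group in a single call.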
5. Create a KDE plot of the 'defective' column
# 5. Create a KDE plot of the 'defective' column.
df3['defective'].plot.kde()
# DON'T WRITE HERE
6. For the above KDE plot, figure out how to increase the linewidth and make the linestyle dashed. (Note: you would usually not dash a KDE plot line.)
# 6. For the KDE plot above, figure out how to increase the line width and make the line style dashed.
# (Note: you would usually not dash a KDE plot line.)
df3['defective'].plot.kde(ls='--', lw=5)
# DON'T WRITE HERE
7. Create a blended area plot of all the columns for just the rows up to 30. (hint: use .loc)
# 7. Create a blended area plot of all the columns for just the rows up to 30. (hint: use .loc)
ax = df3.loc[0:30].plot.area(stacked=False, alpha=0.4)
ax.legend(loc=0)
# DON'T WRITE HERE
Bonus Challenge! Notice how the legend in our previous figure overlapped some of the actual diagram. Can you figure out how to display the legend outside of the plot as shown below?
ax = df3.loc[0:30].plot.area(stacked=False, alpha=0.4)
ax.legend(loc=0, bbox_to_anchor=(1.3,0.5))
# DON'T WRITE HERE
Disk Stacking [link](https://www.algoexpert.io/questions/Disk%20Stacking) My Solution
def diskStacking(disks):
    # Write your code here.
    disks.sort(key=lambda x: x[1])
    globalMaxHeight = 0
    prevDiskIdx = [None for _ in range(len(disks) + 1)]
    opt = [0 for _ in range(len(disks) + 1)]
    for i in range(len(disks)):
        opt[i + 1] = disks[i][2]
    for i in range(len(opt)):
        maxHeight = opt[i]
        for j in range(i):
            if disks[i - 1][0] > disks[j][0] \
               and disks[i - 1][1] > disks[j][1] \
               and disks[i - 1][2] > disks[j][2]:
                height = opt[j + 1] + disks[i - 1][2]
                if height > maxHeight:
                    maxHeight = height
                    prevDiskIdx[i] = j + 1
        opt[i] = maxHeight
        if maxHeight > globalMaxHeight:
            globalMaxHeight = maxHeight
            globalMaxHeightIdx = i
    res = []
    idx = globalMaxHeightIdx
    while prevDiskIdx[idx] != None:
        res.append(idx - 1)
        idx = prevDiskIdx[idx]
    if idx != 0:
        res.append(idx - 1)
    return [disks[res[i]] for i in reversed(range(len(res)))]

def diskStacking(disks):
    # Write your code here.
    disks.sort(key=lambda x: x[1])
    globalMaxHeight = 0
    prevDiskIdx = [None for _ in disks]
    opt = [disk[2] for disk in disks]
    for i in range(len(opt)):
        for j in range(i):
            if isStackable(disks, i, j):
                height = opt[j] + disks[i][2]
                if height > opt[i]:
                    opt[i] = height
                    prevDiskIdx[i] = j
        if opt[i] > globalMaxHeight:
            globalMaxHeight = opt[i]
            globalMaxHeightIdx = i
    res = []
    idx = globalMaxHeightIdx
    while idx is not None:
        res.append(idx)
        idx = prevDiskIdx[idx]
    return [disks[res[i]] for i in reversed(range(len(res)))]

def isStackable(disks, lower, upper):
    return disks[lower][0] > disks[upper][0] \
       and disks[lower][1] > disks[upper][1] \
       and disks[lower][2] > disks[upper][2]
MIT
algoExpert/disk_stacking/solution.ipynb
maple1eaf/learning_algorithm
Expert Solution
# O(n^2) time | O(n) space
def diskStacking(disks):
    disks.sort(key=lambda disk: disk[2])
    heights = [disk[2] for disk in disks]
    sequences = [None for disk in disks]
    maxHeightIdx = 0
    for i in range(1, len(disks)):
        currentDisk = disks[i]
        for j in range(0, i):
            otherDisk = disks[j]
            if areValidDimensions(otherDisk, currentDisk):
                if heights[i] <= currentDisk[2] + heights[j]:
                    heights[i] = currentDisk[2] + heights[j]
                    sequences[i] = j
        if heights[i] >= heights[maxHeightIdx]:
            maxHeightIdx = i
    return buildSequence(disks, sequences, maxHeightIdx)

def areValidDimensions(o, c):
    return o[0] < c[0] and o[1] < c[1] and o[2] < c[2]

def buildSequence(array, sequences, currentIdx):
    sequence = []
    while currentIdx is not None:
        sequence.append(array[currentIdx])
        currentIdx = sequences[currentIdx]
    return list(reversed(sequence))
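The expert approach can be sanity-checked on a small input. The block below restates the algorithm self-containedly; the sample `disks` list is an assumed test case (each disk is `[width, depth, height]`), not data taken from the notebook:

```python
def diskStacking(disks):
    # O(n^2) time | O(n) space -- sort by height, then DP on the tallest
    # stack that ends with each disk at the bottom of the remaining pile
    disks.sort(key=lambda disk: disk[2])
    heights = [disk[2] for disk in disks]
    sequences = [None for _ in disks]
    maxHeightIdx = 0
    for i in range(1, len(disks)):
        for j in range(i):
            # a disk may sit on another only if strictly smaller in all dims
            if all(disks[j][d] < disks[i][d] for d in range(3)):
                if heights[i] <= disks[i][2] + heights[j]:
                    heights[i] = disks[i][2] + heights[j]
                    sequences[i] = j
        if heights[i] >= heights[maxHeightIdx]:
            maxHeightIdx = i
    stack, idx = [], maxHeightIdx
    while idx is not None:
        stack.append(disks[idx])
        idx = sequences[idx]
    return list(reversed(stack))

disks = [[2, 1, 2], [3, 2, 3], [2, 2, 8], [2, 3, 4], [1, 3, 1], [4, 4, 5]]
print(diskStacking(disks))  # -> [[2, 1, 2], [3, 2, 3], [4, 4, 5]], total height 10
```

The tallest valid tower here skips the single tall disk `[2, 2, 8]` because nothing strictly larger in all three dimensions can sit under it profitably.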
Active Contour Model

The active contour model is a method to fit open or closed splines to lines or edges in an image [1]_. It works by minimising an energy that is in part defined by the image and in part by the spline's shape: length and smoothness. The minimization is done implicitly in the shape energy and explicitly in the image energy.

In the following two examples the active contour model is used (1) to segment the face of a person from the rest of an image by fitting a closed curve to the edges of the face and (2) to find the darkest curve between two fixed points while obeying smoothness considerations. Typically it is a good idea to smooth images a bit before analyzing, as done in the following examples.

We initialize a circle around the astronaut's face and use the default boundary condition ``boundary_condition='periodic'`` to fit a closed curve. The default parameters ``w_line=0, w_edge=1`` will make the curve search towards edges, such as the boundaries of the face.

.. [1] *Snakes: Active contour models*. Kass, M.; Witkin, A.; Terzopoulos, D. International Journal of Computer Vision 1 (4): 321 (1988). DOI: `10.1007/BF00133570`
import numpy as np
import matplotlib.pyplot as plt
from skimage.color import rgb2gray
from skimage import data
from skimage.filters import gaussian
from skimage.segmentation import active_contour

img = data.astronaut()
img = rgb2gray(img)

s = np.linspace(0, 2*np.pi, 400)
r = 100 + 100*np.sin(s)
c = 220 + 100*np.cos(s)
init = np.array([r, c]).T

snake = active_contour(gaussian(img, 3), init, alpha=0.015, beta=10,
                       gamma=0.001, coordinates='rc')

fig, ax = plt.subplots(figsize=(7, 7))
ax.imshow(img, cmap=plt.cm.gray)
ax.plot(init[:, 1], init[:, 0], '--r', lw=3)
ax.plot(snake[:, 1], snake[:, 0], '-b', lw=3)
ax.set_xticks([]), ax.set_yticks([])
ax.axis([0, img.shape[1], img.shape[0], 0])
plt.show()
MIT
digital-image-processing/notebooks/edges/plot_active_contours.ipynb
sinamedialab/courses
Here we initialize a straight line between two points, `(5, 136)` and `(424, 50)`, and require that the spline has its end points there by giving the boundary condition `boundary_condition='fixed'`. We furthermore make the algorithm search for dark lines by giving a negative `w_line` value.
img = data.text()

r = np.linspace(136, 50, 100)
c = np.linspace(5, 424, 100)
init = np.array([r, c]).T

snake = active_contour(gaussian(img, 1), init, boundary_condition='fixed',
                       alpha=0.1, beta=1.0, w_line=-5, w_edge=0, gamma=0.1,
                       coordinates='rc')

fig, ax = plt.subplots(figsize=(9, 5))
ax.imshow(img, cmap=plt.cm.gray)
ax.plot(init[:, 1], init[:, 0], '--r', lw=3)
ax.plot(snake[:, 1], snake[:, 0], '-b', lw=3)
ax.set_xticks([]), ax.set_yticks([])
ax.axis([0, img.shape[1], img.shape[0], 0])
plt.show()
Question 1
df0 = df[df.DEATH_EVENT == 0]
df0Feature = df0[['creatinine_phosphokinase', 'serum_creatinine', 'serum_sodium', 'platelets']]
df1 = df[df.DEATH_EVENT == 1]
df1Feature = df1[['creatinine_phosphokinase', 'serum_creatinine', 'serum_sodium', 'platelets']]

# corr matrix for death_event 0
sn.heatmap(df0Feature.corr(), annot=True)
plt.show()

# corr matrix for death_event 1
sn.heatmap(df1Feature.corr(), annot=True)
plt.show()
MIT
Assignment_7/heart_failure_linear_model/HeartFali.ipynb
KyleLeePiupiupiu/CS677_Assignment
Question 2, Group 3 --> X = serum sodium, Y = serum creatinine
def residual(yTest, yPredict):
    temp = 0
    for (a, b) in zip(yTest, yPredict):
        temp += (a - b) * (a - b)
    return temp

zeroSet = df0[['serum_creatinine', 'serum_sodium']]
oneSet = df1[['serum_creatinine', 'serum_sodium']]

def fit_and_plot(x, y, degree, title, exp_residual=False):
    """Fit a polynomial of the given degree on a 50/50 split, print the weights
    and the residual on the test half, and plot true vs predicted values."""
    xTrain, xTest, yTrain, yTest = train_test_split(x, y, test_size=0.5, random_state=0)
    weights = np.polyfit(xTrain, yTrain, degree)
    model = np.poly1d(weights)
    yPredict = model(xTest)
    print(weights)
    if exp_residual:  # fit was done on log(y), so compare back on the original scale
        print(residual(np.exp(yTest), np.exp(yPredict)))
    else:
        print(residual(yTest, yPredict))
    plt.figure()
    plt.scatter(xTest, yTest, color='green', label='True Data')
    plt.scatter(xTest, yPredict, color='red', label='Predict')
    plt.grid()
    plt.legend()
    plt.title(title)

# same five models for death_event = 0 (zeroSet) and death_event = 1 (oneSet)
for dataset in (zeroSet, oneSet):
    x = dataset['serum_sodium']
    y = dataset['serum_creatinine']
    fit_and_plot(x, y, 1, 'linear')
    fit_and_plot(x, y, 2, 'quadratic')
    fit_and_plot(x, y, 3, 'cubic')
    fit_and_plot(np.log(x), y, 1, 'y = alog(x) + b')                                   # GLM
    fit_and_plot(np.log(x), np.log(y), 1, 'log(y) = alog(x) + b', exp_residual=True)   # GLM
[-0.01236989 3.8018824 ] 38.63069354525988 [ 9.69912611e-03 -2.65111621e+00 1.83120347e+02] 61.60974727399684 [ 6.93416020e-03 -2.82186374e+00 3.82460077e+02 -1.72620285e+04] 2607.9355538202913 [-1.80477484 10.98470858] 38.58311899419233 [-3.18264613 16.14955754] 22.58026509775523
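The polyfit workflow used above (fit on a split, predict with `np.poly1d`, score with a sum-of-squares residual) can be isolated in a minimal sketch; the data here is synthetic and noise-free (assumed example values, not the notebook's dataset):

```python
import numpy as np

def residual(y_true, y_pred):
    # Sum of squared errors, matching the notebook's residual() helper
    return float(np.sum((np.asarray(y_true) - np.asarray(y_pred)) ** 2))

# Synthetic data with an exact linear relationship y = 2x + 1
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = 2.0 * x + 1.0

weights = np.polyfit(x, y, 1)   # degree-1 fit -> [slope, intercept]
model = np.poly1d(weights)
sse = residual(y, model(x))
print(weights, sse)             # slope ~2, intercept ~1, SSE ~0
```

On noise-free data the recovered coefficients match the generating line and the residual is numerically zero, which is a useful smoke test before fitting real data.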
Functions
def my_gaussian_kernel1d(sigma, order, radius):
    """Compute a 1D Gaussian convolution kernel.
    sigma: std; order: a positive order corresponds to convolution with that
    derivative of a Gaussian (use 0 here); radius: radius of the filter."""
    if order < 0:
        raise ValueError('order must be non-negative')
    p = np.polynomial.Polynomial([0, 0, -0.5 / (sigma * sigma)])
    x = np.arange(-radius, radius + 1)
    phi_x = np.exp(p(x), dtype=np.double)
    phi_x /= phi_x.sum()
    if order > 0:
        q = np.polynomial.Polynomial([1])
        p_deriv = p.deriv()
        for _ in range(order):
            # f(x) = q(x) * phi(x) = q(x) * exp(p(x))
            # f'(x) = (q'(x) + q(x) * p'(x)) * phi(x)
            q = q.deriv() + q * p_deriv
        phi_x *= q(x)
    return phi_x

def my_gaussian_kernel2d(sigma, order, radius):
    """2D Gaussian kernel as the normalized outer product of two 1D kernels."""
    g_ker_1d = my_gaussian_kernel1d(sigma, order, radius)
    g_ker_2d = np.outer(g_ker_1d, g_ker_1d)
    g_ker_2d /= g_ker_2d.sum()
    return g_ker_2d

def my_DOG_kernel1d(centersigma, order, radius):
    """1D difference-of-Gaussians kernel; surround sigma = 1.5*centersigma.
    radius: default 3*centersigma, so kernel length is 1 + 3*centersigma*2."""
    surroundsigma = 1.5 * centersigma
    center_kernel1d = my_gaussian_kernel1d(centersigma, order, radius)
    surround_kernel1d = my_gaussian_kernel1d(surroundsigma, order, radius)
    return center_kernel1d - surround_kernel1d

def my_DOG_kernel2d(centersigma, order, radius):
    """2D difference-of-Gaussians kernel, mimicking retinal center-surround ON/OFF."""
    surroundsigma = 1.5 * centersigma
    center_kernel2d = my_gaussian_kernel2d(centersigma, order, radius)
    surround_kernel2d = my_gaussian_kernel2d(surroundsigma, order, radius)
    return center_kernel2d - surround_kernel2d

def ONOFF_single(img, xx, yy, centersigma):
    """ON/OFF value for a single pixel.
    img: gray image, float, ~1.0 max (phase-scrambled images may slightly exceed 1.0);
    (xx, yy): center coordinate, xx along height, yy along width."""
    surroundsigma = np.round(1.5 * centersigma)
    kernelradius = 3 * centersigma
    temp = img[xx-kernelradius:xx+kernelradius+1, yy-kernelradius:yy+kernelradius+1]
    center_kernel2d = my_gaussian_kernel2d(centersigma, 0, kernelradius)
    surround_kernel2d = my_gaussian_kernel2d(surroundsigma, 0, kernelradius)
    centermean = np.sum(temp * center_kernel2d)
    surroundmean = np.sum(temp * surround_kernel2d)
    onoff = (centermean - surroundmean) / (centermean + surroundmean + 1e-8)
    return onoff

def onoff_wholeimg(img, centersigma):
    """ON/OFF image for a whole image or region; output values in [-1.0, 1.0]."""
    kernelradius = 3 * centersigma
    onoff_img = np.zeros((img.shape[0], img.shape[1]))
    for ii in np.arange(kernelradius, img.shape[0]-kernelradius-1):
        for jj in np.arange(kernelradius, img.shape[1]-kernelradius-1):
            onoff_img[ii, jj] = ONOFF_single(img, ii, jj, centersigma)
    if img.shape[0] == 437:  # whole image frame, not crop
        mask_con = np.zeros((437, 437), np.uint8)
        cv2.circle(mask_con, (218, 218), radius=218-kernelradius, color=255, thickness=-1)
        mask_con = np.float32(mask_con / 255.0)
        onoff_img = np.multiply(onoff_img, mask_con)
    return onoff_img

def onoff_random(onoff_seed, onoff_num, centersigma, img):
    """ON/OFF value distribution at onoff_num random positions (onoff_seed: random seed)."""
    kernelradius = 3 * centersigma
    np.random.seed(onoff_seed + 866)
    walk_height = np.random.choice(np.arange(kernelradius, img.shape[0]-kernelradius-1), onoff_num, replace=False)
    np.random.seed(onoff_seed + 899)
    walk_width = np.random.choice(np.arange(kernelradius, img.shape[1]-kernelradius-1), onoff_num, replace=False)
    onoffs = np.zeros(onoff_num)
    for ii in range(onoff_num):
        onoffs[ii] = ONOFF_single(img, walk_height[ii], walk_width[ii], centersigma)
    return onoffs

def onoff_random_imgs(onoff_seed, onoff_num, centersigma, imgs):
    """ON/OFF distribution over a stack of gray images (numberofimages*height*width);
    onoff_num is the total number of picks across all images."""
    num_imgs = imgs.shape[0]
    onoffs = []
    for ii in range(num_imgs):
        onoffs.append(onoff_random(onoff_seed+ii, int(np.round(onoff_num/num_imgs)), centersigma, imgs[ii]))
    onoffs = np.array(onoffs)
    return onoffs.flatten()

def onoff_rms2_random(onoff_seed, onoff_num, centersigma, RFradius, img):
    """ON/OFF and local contrast (rms2) distributions at onoff_num random positions;
    centersigma is for ON/OFF, RFradius for rms2."""
    kernelradius = 3 * centersigma
    np.random.seed(onoff_seed + 1866)
    walk_height = np.random.choice(np.arange(kernelradius, img.shape[0]-kernelradius-1), onoff_num, replace=False)
    np.random.seed(onoff_seed + 2899)
    walk_width = np.random.choice(np.arange(kernelradius, img.shape[1]-kernelradius-1), onoff_num, replace=False)
    onoffs = np.zeros(onoff_num)
    rms2s = np.zeros(onoff_num)
    tempdisk = np.float64(disk(RFradius))
    for ii in range(onoff_num):
        onoffs[ii] = ONOFF_single(img, walk_height[ii], walk_width[ii], centersigma)
        temp = img[walk_height[ii]-RFradius:walk_height[ii]+RFradius+1,
                   walk_width[ii]-RFradius:walk_width[ii]+RFradius+1]
        temp = temp[np.nonzero(tempdisk)]
        rms2s[ii] = np.std(temp, ddof=1) / (np.mean(temp) + 1e-8)
    return onoffs, rms2s

def onoff_rms2_random_imgs(onoff_seed, onoff_num, centersigma, RFradius, imgs):
    """ON/OFF and rms2 distributions over a stack of gray images."""
    num_imgs = imgs.shape[0]
    onoffs = []
    rms2s = []
    for ii in range(num_imgs):
        temp_onoff, temp_rms2 = onoff_rms2_random(onoff_seed+ii, int(np.round(onoff_num/num_imgs)),
                                                  centersigma, RFradius, imgs[ii])
        onoffs.append(temp_onoff)
        rms2s.append(temp_rms2)
    return np.array(onoffs).flatten(), np.array(rms2s).flatten()

def rms2_wholeimg(img, RFradius):
    """rms2 (local contrast) image; RFradius is the radius of the crop area
    over which each rms2 value is estimated. Output is float, nonnegative."""
    tempdisk = np.float64(disk(RFradius))
    rms2_img = np.zeros((img.shape[0], img.shape[1]))
    for ii in np.arange(RFradius, img.shape[0]-RFradius-1):
        for jj in np.arange(RFradius, img.shape[1]-RFradius-1):
            temp = img[ii-RFradius:ii+RFradius+1, jj-RFradius:jj+RFradius+1]
            temp = temp[np.nonzero(tempdisk)]  # circular kernel
            rms2_img[ii, jj] = np.std(temp, ddof=1) / (np.mean(temp) + 1e-8)
    if img.shape[0] == 437:  # whole image frame, not crop
        mask_con = np.zeros((437, 437), np.uint8)
        cv2.circle(mask_con, (218, 218), radius=218-RFradius, color=255, thickness=-1)
        mask_con = np.float32(mask_con / 255.0)
        rms2_img = np.multiply(rms2_img, mask_con)
    return rms2_img

def rms2_random(onoff_seed, onoff_num, RFradius, img):
    """Local contrast (rms2) distribution at onoff_num random positions."""
    np.random.seed(onoff_seed + 1866)
    walk_height = np.random.choice(np.arange(RFradius, img.shape[0]-RFradius-1), onoff_num)
    np.random.seed(onoff_seed + 2899)
    walk_width = np.random.choice(np.arange(RFradius, img.shape[1]-RFradius-1), onoff_num)
    rms2s = np.zeros(onoff_num)
    tempdisk = np.float64(disk(RFradius))
    for ii in range(onoff_num):
        temp = img[walk_height[ii]-RFradius:walk_height[ii]+RFradius+1,
                   walk_width[ii]-RFradius:walk_width[ii]+RFradius+1]
        temp = temp[np.nonzero(tempdisk)]
        rms2s[ii] = np.std(temp, ddof=1) / (np.mean(temp) + 1e-8)
    return rms2s

def bootstrap(statistics, data, num_exp=10000, seed=66):
    """Bootstrap the 2.5-97.5 percentile interval for a statistic.
    statistics: 'offratios' (careful with the threshold), 'median', or 'mean';
    data: numpy array with shape (sample_size, 1);
    num_exp: number of resampling experiments, with replacement."""
    if statistics == 'offratios':
        def func(x):
            return len(x[np.where(x < 0)]) / len(x[np.where(x > 0)])
    elif statistics == 'median':
        def func(x):
            return np.median(x)
    elif statistics == 'mean':
        def func(x):
            return np.mean(x)
    sta_boot = np.zeros((num_exp))
    num_data = len(data)
    for ii in range(num_exp):
        np.random.seed(seed + ii)
        tempind = np.random.choice(num_data, num_data, replace=True)
        sta_boot[ii] = func(data[tempind])
    return np.percentile(sta_boot, 2.5), np.percentile(sta_boot, 97.5)
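As a sketch of how the bootstrap helper is meant to be used, here is a self-contained percentile-bootstrap confidence interval for the mean on synthetic data. The generator-based RNG and the `bootstrap_ci` name are modernized assumptions for this sketch; the notebook's own helper reseeds `np.random` on every iteration instead:

```python
import numpy as np

def bootstrap_ci(data, num_exp=2000, seed=66):
    # Percentile bootstrap: resample with replacement, recompute the mean,
    # and report the 2.5th and 97.5th percentiles of the resampled means
    data = np.asarray(data)
    n = len(data)
    stats = np.zeros(num_exp)
    rng = np.random.default_rng(seed)
    for i in range(num_exp):
        idx = rng.choice(n, n, replace=True)
        stats[i] = data[idx].mean()
    return np.percentile(stats, 2.5), np.percentile(stats, 97.5)

sample = np.random.default_rng(0).normal(loc=5.0, scale=1.0, size=200)
low, high = bootstrap_ci(sample)
print(low, high)  # the interval should bracket the sample mean (~5)
```

With 200 points of unit-variance data the interval width is roughly four standard errors (about 0.3), which is the scale of the error bars drawn in the later sunrise figures.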
MIT
code/image_analysis_twilight.ipynb
yongrong-qiu/mouse-scene-cam
Sunrise: Intensity profile in the dome, radiated from the sun
def createLineIterator(P1, P2, img):
    """
    Produces an array that consists of the coordinates and intensities of each
    pixel in a line between two points

    Parameters:
        -P1: a numpy array that consists of the coordinate of the first point (x,y)
        -P2: a numpy array that consists of the coordinate of the second point (x,y)
        -img: the image being processed

    Returns:
        -it: a numpy array that consists of the coordinates and intensities of
             each pixel in the radii (shape: [numPixels, 3], row = [x,y,intensity])
    """
    # define local variables for readability
    imageH = img.shape[0]
    imageW = img.shape[1]
    P1X = P1[0]
    P1Y = P1[1]
    P2X = P2[0]
    P2Y = P2[1]

    # difference and absolute difference between points
    # used to calculate slope and relative location between points
    dX = P2X - P1X
    dY = P2Y - P1Y
    dXa = np.abs(dX)
    dYa = np.abs(dY)

    # predefine numpy array for output based on distance between points
    itbuffer = np.empty(shape=(np.maximum(dYa, dXa), 3), dtype=np.float32)
    itbuffer.fill(np.nan)

    # obtain coordinates along the line using a form of Bresenham's algorithm
    negY = P1Y > P2Y
    negX = P1X > P2X
    if P1X == P2X:  # vertical line segment
        itbuffer[:, 0] = P1X
        if negY:
            itbuffer[:, 1] = np.arange(P1Y - 1, P1Y - dYa - 1, -1)
        else:
            itbuffer[:, 1] = np.arange(P1Y + 1, P1Y + dYa + 1)
    elif P1Y == P2Y:  # horizontal line segment
        itbuffer[:, 1] = P1Y
        if negX:
            itbuffer[:, 0] = np.arange(P1X - 1, P1X - dXa - 1, -1)
        else:
            itbuffer[:, 0] = np.arange(P1X + 1, P1X + dXa + 1)
    else:  # diagonal line segment
        steepSlope = dYa > dXa
        if steepSlope:
            slope = dX / dY
            if negY:
                itbuffer[:, 1] = np.arange(P1Y - 1, P1Y - dYa - 1, -1)
            else:
                itbuffer[:, 1] = np.arange(P1Y + 1, P1Y + dYa + 1)
            # note: np.int was removed from modern numpy, use the builtin int
            itbuffer[:, 0] = (slope * (itbuffer[:, 1] - P1Y)).astype(int) + P1X
        else:
            slope = dY / dX
            if negX:
                itbuffer[:, 0] = np.arange(P1X - 1, P1X - dXa - 1, -1)
            else:
                itbuffer[:, 0] = np.arange(P1X + 1, P1X + dXa + 1)
            itbuffer[:, 1] = (slope * (itbuffer[:, 0] - P1X)).astype(int) + P1Y

    # remove points outside of image
    colX = itbuffer[:, 0]
    colY = itbuffer[:, 1]
    itbuffer = itbuffer[(colX >= 0) & (colY >= 0) & (colX < imageW) & (colY < imageH)]

    # get intensities from img ndarray (np.uint is also removed in modern numpy)
    itbuffer[:, 2] = img[itbuffer[:, 1].astype(np.uint64), itbuffer[:, 0].astype(np.uint64)]
    return itbuffer

# show line
temp = img_real2view(img_sunrises[0])
lineeg = cv2.line(temp, (198, 233), (53, 161), (0, 0, 255), 5)
plt.imshow(lineeg[..., ::-1])

# one example
point1 = (198, 233)
point2 = (53, 161)
temp = createLineIterator(point1, point2, img_sunrises[0, ..., 0])
print(temp.shape)

# intensity profile
point1s = [[198, 233], [198, 233], [201, 222]]
point2s = [[53, 161], [53, 161], [56, 150]]
intenpro = np.zeros((3, 2, 145), np.uint8)  # 3 time points, 2 color channels (UV and G), 145 pixels
for ii in range(3):
    for jj in range(2):
        intenpro[ii, jj] = createLineIterator(point1s[ii], point2s[ii], img_sunrises[ii*2, ..., jj])[:, 2]
intenpro = intenpro / 255.0

# plot intensity profile at 3 time points
fig, ax = plt.subplots(nrows=1, ncols=1, figsize=(3, 3))
ax.plot(intenpro[0, 0], color='purple', linestyle='-', label='UV; Time 0')
ax.plot(intenpro[1, 0], color='purple', linestyle='--', label='UV; Time 2')
ax.plot(intenpro[2, 0], color='purple', linestyle=':', label='UV; Time 4')
ax.plot(intenpro[0, 1], color='g', linestyle='-', label='G; Time 0')
ax.plot(intenpro[1, 1], color='g', linestyle='--', label='G; Time 2')
ax.plot(intenpro[2, 1], color='g', linestyle=':', label='G; Time 4')
ax.legend(loc='best', fontsize=16)
ax.set_xticks([0, 75, 150])
ax.set_xticklabels([0, 35, 70])
ax.set_ylim([0, 1.0])
ax.set_yticks([0, 0.5, 1.0])
ax.set_xlabel('RF (degree)', fontsize=16)
ax.set_ylabel('Intensity', fontsize=16)
adjust_spines(ax, ['left', 'bottom'])
handles, labels = ax.get_legend_handles_labels()
lgd = ax.legend(handles, labels, loc='center left', frameon=False, bbox_to_anchor=(1, 0.5))

# plot intensity profile at 3 time points (solid lines, varied hues)
fig, ax = plt.subplots(nrows=1, ncols=1, figsize=(3, 3))
ax.plot(intenpro[0, 0], color='blueviolet', linestyle='-', label='UV; Time 0')
ax.plot(intenpro[1, 0], color='violet', linestyle='-', label='UV; Time 2')
ax.plot(intenpro[2, 0], color='purple', linestyle='-', label='UV; Time 4')
ax.plot(intenpro[0, 1], color='lime', linestyle='-', label='G; Time 0')
ax.plot(intenpro[1, 1], color='g', linestyle='-', label='G; Time 2')
ax.plot(intenpro[2, 1], color='yellowgreen', linestyle='-', label='G; Time 4')
ax.legend(loc='best', fontsize=16)
ax.set_xticks([0, 75, 150])
ax.set_xticklabels([0, 35, 70])
ax.set_ylim([0, 1.0])
ax.set_yticks([0, 0.5, 1.0])
ax.set_xlabel('RF (degree)', fontsize=16)
ax.set_ylabel('Intensity', fontsize=16)
adjust_spines(ax, ['left', 'bottom'])
handles, labels = ax.get_legend_handles_labels()
lgd = ax.legend(handles, labels, loc='center left', frameon=False, bbox_to_anchor=(1, 0.5))
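For comparison, a much shorter line sampler gets a similar intensity profile by rounding evenly spaced points instead of Bresenham-style stepping. This is a hedged alternative sketch, not the notebook's method; also note that `createLineIterator` takes `(x, y)` points while this `line_profile` helper (a hypothetical name) uses `(row, col)`:

```python
import numpy as np

def line_profile(img, p1, p2, num=None):
    # Sample intensities along the segment p1 -> p2 (both (row, col)) by
    # rounding linearly spaced coordinates to the nearest pixel
    (r0, c0), (r1, c1) = p1, p2
    if num is None:
        num = int(max(abs(r1 - r0), abs(c1 - c0))) + 1
    rr = np.round(np.linspace(r0, r1, num)).astype(int)
    cc = np.round(np.linspace(c0, c1, num)).astype(int)
    return img[rr, cc]

img = np.arange(25, dtype=float).reshape(5, 5)
print(line_profile(img, (0, 0), (4, 4)))  # main diagonal: [0, 6, 12, 18, 24]
```

Nearest-pixel rounding can visit a pixel twice on shallow diagonals, which Bresenham avoids; for smooth profiles like the dome gradient the difference is negligible.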
Sunrise: Dome and tree intensity change along time points
temp = img_real2view(img_sunrises[5])
recteg = cv2.rectangle(temp, (168, 35), (228, 55), (255, 255, 255), 1)
plt.imshow(recteg[..., ::-1])

# dome intensity
domeinten_median = np.zeros((6, 2))  # 6 time points, 2 color channels (UV and G)
domeinten_std = np.zeros((6, 2))
domeinten_lowq_higq = np.zeros((6, 2, 2))  # 6 time points, 2 channels, low and high percentiles
for ii in range(6):
    for jj in range(2):
        temp = img_sunrises[ii, 35:55, 168:228, jj] / 255
        domeinten_median[ii, jj] = np.median(temp)
        domeinten_std[ii, jj] = np.std(temp)
        low_perc, high_perc = bootstrap('median', temp, num_exp=10000, seed=66)
        domeinten_lowq_higq[ii, jj, 0] = domeinten_median[ii, jj] - low_perc    # low
        domeinten_lowq_higq[ii, jj, 1] = -domeinten_median[ii, jj] + high_perc  # high

# tree intensity
treeinten_median = np.zeros((6, 2))  # 6 time points, 2 color channels (UV and G)
treeinten_std = np.zeros((6, 2))
treeinten_lowq_higq = np.zeros((6, 2, 2))  # 6 time points, 2 channels, low and high percentiles
for ii in range(6):
    for jj in range(2):
        temp = img_sunrises[ii, 80:100, 230:280, jj] / 255
        treeinten_median[ii, jj] = np.median(temp)
        treeinten_std[ii, jj] = np.std(temp)
        low_perc, high_perc = bootstrap('median', temp, num_exp=10000, seed=6666)
        treeinten_lowq_higq[ii, jj, 0] = treeinten_median[ii, jj] - low_perc    # low
        treeinten_lowq_higq[ii, jj, 1] = -treeinten_median[ii, jj] + high_perc  # high

# median, errorbar: 2.5-97.5 percentiles
timepoints = [0, 1, 2, 3, 4, 5]
fig, ax = plt.subplots(nrows=1, ncols=1, figsize=(3, 3))
ax.errorbar(timepoints, domeinten_median[:, 0],
            yerr=(domeinten_lowq_higq[:, 0, 0], domeinten_lowq_higq[:, 0, 1]),
            marker='o', color='purple', linestyle='-', label='Dome UV', alpha=1.0, capsize=4)
ax.errorbar(timepoints, domeinten_median[:, 1],
            yerr=(domeinten_lowq_higq[:, 1, 0], domeinten_lowq_higq[:, 1, 1]),
            marker='o', color='g', linestyle='-', label='Dome G', alpha=1.0, capsize=4)
ax.errorbar(timepoints, treeinten_median[:, 0],
            yerr=(treeinten_lowq_higq[:, 0, 0], treeinten_lowq_higq[:, 0, 1]),
            marker='o', color='purple', linestyle='--', label='Tree UV', alpha=1.0, capsize=4)
ax.errorbar(timepoints, treeinten_median[:, 1],
            yerr=(treeinten_lowq_higq[:, 1, 0], treeinten_lowq_higq[:, 1, 1]),
            marker='o', color='g', linestyle='--', label='Tree G', alpha=1.0, capsize=4)
ax.legend(loc='best', fontsize=16)
ax.set_xticks([0, 1, 2, 3, 4, 5])
ax.set_ylim([0, 0.09])
ax.set_yticks([0, 0.03, 0.06, 0.09])
ax.set_xlabel('Time point', fontsize=16)
ax.set_ylabel('Intensity median', fontsize=16)
adjust_spines(ax, ['left', 'bottom'])
handles, labels = ax.get_legend_handles_labels()
lgd = ax.legend(handles, labels, loc='center left', frameon=False, bbox_to_anchor=(1, 0.5))
_____no_output_____
MIT
code/image_analysis_twilight.ipynb
yongrong-qiu/mouse-scene-cam
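The `bootstrap` helper called in the cells above is defined elsewhere in the notebook. Assuming it draws `num_exp` resamples with replacement and returns the 2.5th and 97.5th percentiles of the resampled statistic (an assumption inferred from how its outputs are turned into asymmetric error bars), a minimal NumPy sketch might look like this; the function name and signature here are hypothetical:

```python
import numpy as np

def bootstrap_percentiles(stat, data, num_exp=10000, seed=0):
    """Percentile bootstrap for a sample (hypothetical stand-in for
    the notebook's `bootstrap` helper).

    Resamples `data` with replacement `num_exp` times, computes `stat`
    ('median' or 'mean') of each resample, and returns the 2.5th and
    97.5th percentiles of the resampled statistics.
    """
    rng = np.random.default_rng(seed)
    data = np.asarray(data).ravel()
    stat_fn = {"median": np.median, "mean": np.mean}[stat]
    # Draw all resample indices at once: shape (num_exp, len(data)).
    idx = rng.integers(0, len(data), size=(num_exp, len(data)))
    stats = stat_fn(data[idx], axis=1)
    low, high = np.percentile(stats, [2.5, 97.5])
    return low, high

low, high = bootstrap_percentiles("median", np.arange(100), seed=66)
```

In the plotting code the interval is then converted to the asymmetric error-bar lengths `median - low` and `high - median`, matching how `domeinten_lowq_higq` and friends are filled.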
Sunrise: Crms change along time points
#pick a rectangular area for the tree, not close to the sun, near the edge temp=img_real2view(img_sunrises[5]) recteg=cv2.rectangle(temp,(160,50),(340,100),(0,0,255),5) plt.imshow(recteg[...,::-1]) #RF: 2,10 degrees RFradius=np.array([2,12]) onoff_num=100 #Crms rms2_time=np.zeros((6,2,2,onoff_num))#6 time points, 2 color channel (UV and G),2 RFs, 100 data rms2_means=np.zeros((6,2,2))#6 time points, 2 color channel (UV and G),2 RFs rms2_stds=np.zeros((6,2,2)) rms2_lowq_higq=np.zeros((6,2,2,2)) #the last channel: low and high quantiles(percentiles) for ii in range(6): for jj in range(2): for kk in range(2): temp=img_sunrises[ii,50:100,160:340,jj]/255 temprms2s=rms2_random(566+ii*10,onoff_num,RFradius[kk],temp) rms2_time[ii,jj,kk]=temprms2s rms2_means[ii,jj,kk]=np.mean(temprms2s) rms2_stds[ii,jj,kk]=np.std(temprms2s) low_perc,high_perc=bootstrap('mean',temprms2s,num_exp=10000,seed=888) rms2_lowq_higq[ii,jj,kk,0] = rms2_means[ii,jj,kk]-low_perc #low rms2_lowq_higq[ii,jj,kk,1] =-rms2_means[ii,jj,kk]+high_perc #high #mean, errorbar: 2.5-97.5 percentiles timepoints=[0,1,2,3,4,5] fig, ax = plt.subplots(nrows=1, ncols=1,figsize=(3,3)) ax.errorbar(timepoints,rms2_means[:,0,1],yerr=(rms2_lowq_higq[:,0,1,0],rms2_lowq_higq[:,0,1,1]),marker='o',\ color='purple',linestyle='-',label='UV; RF=10',alpha=1.0, capsize=4) ax.errorbar(timepoints,rms2_means[:,1,1],yerr=(rms2_lowq_higq[:,1,1,0],rms2_lowq_higq[:,1,1,1]),marker='o',\ color='g', linestyle='-',label='G; RF=10',alpha=1.0, capsize=4) ax.legend(loc='best',fontsize=16) ax.set_xticks([0,1,2,3,4,5]) ax.set_yticks([0,0.2,0.4]) ax.set_xlabel('Time point', fontsize=16) ax.set_ylabel('Crms mean', fontsize=16) adjust_spines(ax, ['left', 'bottom']) handles, labels = ax.get_legend_handles_labels() lgd = ax.legend(handles, labels, loc='center left',frameon=False, bbox_to_anchor=(1, 0.5))
_____no_output_____
MIT
code/image_analysis_twilight.ipynb
yongrong-qiu/mouse-scene-cam
Sunset: Conoff and Crms of tree
#pick a rectangular area for the tree temp=img_real2view(img_sunsets[1]) recteg=cv2.rectangle(temp,(130,50),(340,200),(0,0,255),5) plt.imshow(recteg[...,::-1]) RFradius=np.array([2,7,12,16]) onoff_num=200 #upper visual field, UV channel upper_UV_RF_rms2s=np.zeros((4,onoff_num)) for ii in range(4): temp=img_sunsets[1,50:200,130:340,0]/255 upper_UV_RF_rms2s[ii]=rms2_random(566+ii*10,onoff_num,RFradius[ii],temp) #upper visual field, G channel upper_G_RF_rms2s=np.zeros((4,onoff_num)) for ii in range(4): temp=img_sunsets[1,50:200,130:340,1]/255 upper_G_RF_rms2s[ii]=rms2_random(566+ii*10,onoff_num,RFradius[ii],temp) #calculate rms2medians RFradius=np.array([2,7,12,16]) #upper visual field, UV channel upper_UV_RF_rms2medians=np.zeros(4) upper_UV_RF_rms2stds=np.zeros(4) upper_UV_RF_rms2lowqs=np.zeros(4) #lower_quartile upper_UV_RF_rms2higqs=np.zeros(4) #upper_quartile for ii in range(4): upper_UV_RF_rms2medians[ii]=np.median(upper_UV_RF_rms2s[ii]) upper_UV_RF_rms2stds[ii]=np.std(upper_UV_RF_rms2s[ii]) low_perc,high_perc=bootstrap('median',upper_UV_RF_rms2s[ii],num_exp=10000,seed=66) upper_UV_RF_rms2lowqs[ii] = upper_UV_RF_rms2medians[ii]-low_perc upper_UV_RF_rms2higqs[ii] =-upper_UV_RF_rms2medians[ii]+high_perc #upper visual field, G channel upper_G_RF_rms2medians=np.zeros(4) upper_G_RF_rms2stds=np.zeros(4) upper_G_RF_rms2lowqs=np.zeros(4) #lower_quartile upper_G_RF_rms2higqs=np.zeros(4) #upper_quartile for ii in range(4): upper_G_RF_rms2medians[ii]=np.median(upper_G_RF_rms2s[ii]) upper_G_RF_rms2stds[ii]=np.std(upper_G_RF_rms2s[ii]) low_perc,high_perc=bootstrap('median',upper_G_RF_rms2s[ii],num_exp=10000,seed=66) upper_G_RF_rms2lowqs[ii] = upper_G_RF_rms2medians[ii]-low_perc upper_G_RF_rms2higqs[ii] =-upper_G_RF_rms2medians[ii]+high_perc #median, errorbar: 2.5-97.5 percentiles RFs=np.array([2,6,10,14]) fig, ax = plt.subplots(nrows=1, ncols=1,figsize=(3,3)) ax.errorbar(RFs,upper_UV_RF_rms2medians,yerr=(upper_UV_RF_rms2lowqs,upper_UV_RF_rms2higqs),marker='o',\ 
color='purple',linestyle='-',label='Upper UV',alpha=1.0, capsize=4) ax.errorbar(RFs,upper_G_RF_rms2medians, yerr=(upper_G_RF_rms2lowqs,upper_G_RF_rms2higqs), marker='o',\ color='g', linestyle='-',label='Upper G', alpha=1.0, capsize=4) ax.legend(loc='best',fontsize=16) ax.set_xticks([2,6,10,14]) ax.set_yticks([0,0.2,0.4,0.6]) ax.set_xlabel('RF (degree)', fontsize=16) ax.set_ylabel('Crms median', fontsize=16) adjust_spines(ax, ['left', 'bottom']) handles, labels = ax.get_legend_handles_labels() lgd = ax.legend(handles, labels, loc='center left',frameon=False, bbox_to_anchor=(1, 0.5))
_____no_output_____
MIT
code/image_analysis_twilight.ipynb
yongrong-qiu/mouse-scene-cam
How to export 🤗 Transformers Models to ONNX? [ONNX](http://onnx.ai/) is an open format for machine learning models. It allows you to save your neural network's computation graph in a framework-agnostic way, which can be particularly helpful when deploying deep learning models. Indeed, businesses might have other requirements _(languages, hardware, ...)_ for which the training framework might not be the best suited in inference scenarios. In that context, having a representation of the actual computation graph that can be shared across various business units and logics across an organization might be a desirable component. Along with the serialization format, ONNX also provides a runtime library which allows efficient and hardware-specific execution of the ONNX graph. This is done through the [onnxruntime](https://microsoft.github.io/onnxruntime/) project, which already includes collaborations with many hardware vendors to seamlessly deploy models on various platforms. Through this notebook we'll walk you through the process of converting a PyTorch or TensorFlow transformers model to [ONNX](http://onnx.ai/) and leveraging [onnxruntime](https://microsoft.github.io/onnxruntime/) to run inference tasks on models from 🤗 __transformers__. Exporting 🤗 transformers model to ONNX --- Exporting models _(either PyTorch or TensorFlow)_ is easily achieved through the conversion tool provided as part of the 🤗 __transformers__ repository. Under the hood the process is essentially the following: 1. Allocate the model from transformers (**PyTorch or TensorFlow**) 2. Forward dummy inputs through the model; this way **ONNX** can record the set of operations executed 3. Optionally define dynamic axes on input and output tensors 4. Save the graph along with the network parameters
import sys !{sys.executable} -m pip install --upgrade git+https://github.com/huggingface/transformers !{sys.executable} -m pip install --upgrade torch==1.6.0+cpu torchvision==0.7.0+cpu -f https://download.pytorch.org/whl/torch_stable.html !{sys.executable} -m pip install --upgrade onnxruntime==1.4.0 !{sys.executable} -m pip install -i https://test.pypi.org/simple/ ort-nightly !{sys.executable} -m pip install --upgrade onnxruntime-tools !rm -rf onnx/ from pathlib import Path from transformers.convert_graph_to_onnx import convert # Handles all the above steps for you convert(framework="pt", model="bert-base-cased", output=Path("onnx/bert-base-cased.onnx"), opset=11) # Tensorflow # convert(framework="tf", model="bert-base-cased", output="onnx/bert-base-cased.onnx", opset=11)
loading configuration file https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-cased-config.json from cache at /home/mfuntowicz/.cache/torch/transformers/b945b69218e98b3e2c95acf911789741307dec43c698d35fad11c1ae28bda352.9da767be51e1327499df13488672789394e2ca38b877837e52618a67d7002391 Model config BertConfig { "architectures": [ "BertForMaskedLM" ], "attention_probs_dropout_prob": 0.1, "gradient_checkpointing": false, "hidden_act": "gelu", "hidden_dropout_prob": 0.1, "hidden_size": 768, "initializer_range": 0.02, "intermediate_size": 3072, "layer_norm_eps": 1e-12, "max_position_embeddings": 512, "model_type": "bert", "num_attention_heads": 12, "num_hidden_layers": 12, "pad_token_id": 0, "type_vocab_size": 2, "vocab_size": 28996 }
MIT
methods/transformers/notebooks/04-onnx-export.ipynb
INK-USC/RiddleSense
How to leverage runtime for inference over an ONNX graph --- As mentioned in the introduction, **ONNX** is a serialization format and many side projects can load the saved graph and run the actual computations from it. Here, we'll focus on the official [onnxruntime](https://microsoft.github.io/onnxruntime/). The runtime is implemented in C++ for performance reasons and provides APIs/bindings for C++, C, C#, Java and Python. In the case of this notebook, we will use the Python API to highlight how to load a serialized **ONNX** graph and run inference workloads on various backends through **onnxruntime**. **onnxruntime** is available on PyPI: - onnxruntime: ONNX + MLAS (Microsoft Linear Algebra Subprograms) - onnxruntime-gpu: ONNX + MLAS + CUDA
!pip install transformers onnxruntime-gpu onnx psutil matplotlib
Requirement already satisfied: transformers in /home/mfuntowicz/miniconda3/envs/pytorch/lib/python3.8/site-packages (3.0.2) Requirement already satisfied: onnxruntime-gpu in /home/mfuntowicz/miniconda3/envs/pytorch/lib/python3.8/site-packages (1.3.0) Requirement already satisfied: onnx in /home/mfuntowicz/miniconda3/envs/pytorch/lib/python3.8/site-packages (1.7.0) Requirement already satisfied: psutil in /home/mfuntowicz/.local/lib/python3.8/site-packages/psutil-5.7.0-py3.8-linux-x86_64.egg (5.7.0) Requirement already satisfied: matplotlib in /home/mfuntowicz/miniconda3/envs/pytorch/lib/python3.8/site-packages (3.3.1) Requirement already satisfied: tqdm>=4.27 in /home/mfuntowicz/miniconda3/envs/pytorch/lib/python3.8/site-packages (from transformers) (4.46.1) Requirement already satisfied: numpy in /home/mfuntowicz/miniconda3/envs/pytorch/lib/python3.8/site-packages (from transformers) (1.18.1) Requirement already satisfied: sacremoses in /home/mfuntowicz/miniconda3/envs/pytorch/lib/python3.8/site-packages (from transformers) (0.0.43) Requirement already satisfied: regex!=2019.12.17 in /home/mfuntowicz/miniconda3/envs/pytorch/lib/python3.8/site-packages (from transformers) (2020.6.8) Requirement already satisfied: filelock in /home/mfuntowicz/miniconda3/envs/pytorch/lib/python3.8/site-packages (from transformers) (3.0.12) Requirement already satisfied: sentencepiece!=0.1.92 in /home/mfuntowicz/miniconda3/envs/pytorch/lib/python3.8/site-packages (from transformers) (0.1.91) Requirement already satisfied: requests in /home/mfuntowicz/miniconda3/envs/pytorch/lib/python3.8/site-packages (from transformers) (2.23.0) Requirement already satisfied: packaging in /home/mfuntowicz/miniconda3/envs/pytorch/lib/python3.8/site-packages (from transformers) (20.4) Requirement already satisfied: tokenizers==0.8.1.rc2 in /home/mfuntowicz/miniconda3/envs/pytorch/lib/python3.8/site-packages (from transformers) (0.8.1rc2) Requirement already satisfied: protobuf in 
/home/mfuntowicz/miniconda3/envs/pytorch/lib/python3.8/site-packages (from onnxruntime-gpu) (3.12.2) Requirement already satisfied: six in /home/mfuntowicz/miniconda3/envs/pytorch/lib/python3.8/site-packages (from onnx) (1.15.0) Requirement already satisfied: typing-extensions>=3.6.2.1 in /home/mfuntowicz/miniconda3/envs/pytorch/lib/python3.8/site-packages (from onnx) (3.7.4.2) Requirement already satisfied: pyparsing!=2.0.4,!=2.1.2,!=2.1.6,>=2.0.3 in /home/mfuntowicz/miniconda3/envs/pytorch/lib/python3.8/site-packages (from matplotlib) (2.4.7) Requirement already satisfied: kiwisolver>=1.0.1 in /home/mfuntowicz/miniconda3/envs/pytorch/lib/python3.8/site-packages (from matplotlib) (1.2.0) Requirement already satisfied: python-dateutil>=2.1 in /home/mfuntowicz/miniconda3/envs/pytorch/lib/python3.8/site-packages (from matplotlib) (2.8.1) Requirement already satisfied: cycler>=0.10 in /home/mfuntowicz/miniconda3/envs/pytorch/lib/python3.8/site-packages (from matplotlib) (0.10.0) Requirement already satisfied: pillow>=6.2.0 in /home/mfuntowicz/miniconda3/envs/pytorch/lib/python3.8/site-packages (from matplotlib) (7.2.0) Requirement already satisfied: certifi>=2020.06.20 in /home/mfuntowicz/miniconda3/envs/pytorch/lib/python3.8/site-packages (from matplotlib) (2020.6.20) Requirement already satisfied: click in /home/mfuntowicz/miniconda3/envs/pytorch/lib/python3.8/site-packages (from sacremoses->transformers) (7.1.2) Requirement already satisfied: joblib in /home/mfuntowicz/miniconda3/envs/pytorch/lib/python3.8/site-packages (from sacremoses->transformers) (0.15.1) Requirement already satisfied: urllib3!=1.25.0,!=1.25.1,<1.26,>=1.21.1 in /home/mfuntowicz/miniconda3/envs/pytorch/lib/python3.8/site-packages (from requests->transformers) (1.25.9) Requirement already satisfied: chardet<4,>=3.0.2 in /home/mfuntowicz/miniconda3/envs/pytorch/lib/python3.8/site-packages (from requests->transformers) (3.0.4) Requirement already satisfied: idna<3,>=2.5 in 
/home/mfuntowicz/miniconda3/envs/pytorch/lib/python3.8/site-packages (from requests->transformers) (2.9) Requirement already satisfied: setuptools in /home/mfuntowicz/miniconda3/envs/pytorch/lib/python3.8/site-packages (from protobuf->onnxruntime-gpu) (47.1.1.post20200604)
MIT
methods/transformers/notebooks/04-onnx-export.ipynb
INK-USC/RiddleSense
Preparing for an Inference Session --- Inference is done using a specific backend definition which turns on hardware-specific optimizations of the graph. Optimizations are basically of three kinds: - **Constant Folding**: Convert static variables to constants in the graph - **Dead Code Elimination**: Remove nodes never accessed in the graph - **Operator Fusing**: Merge multiple instructions into one (e.g. Linear -> ReLU can be fused into LinearReLU) ONNX Runtime automatically applies most optimizations by setting specific `SessionOptions`. Note: Some of the latest optimizations that are not yet integrated into ONNX Runtime are available in an [optimization script](https://github.com/microsoft/onnxruntime/tree/master/onnxruntime/python/tools/transformers) that tunes models for the best performance.
# # An optional step unless # # you want to get a model with mixed precision for perf accelartion on newer GPU # # or you are working with Tensorflow(tf.keras) models or pytorch models other than bert # !pip install onnxruntime-tools # from onnxruntime_tools import optimizer # # Mixed precision conversion for bert-base-cased model converted from Pytorch # optimized_model = optimizer.optimize_model("bert-base-cased.onnx", model_type='bert', num_heads=12, hidden_size=768) # optimized_model.convert_model_float32_to_float16() # optimized_model.save_model_to_file("bert-base-cased.onnx") # # optimizations for bert-base-cased model converted from Tensorflow(tf.keras) # optimized_model = optimizer.optimize_model("bert-base-cased.onnx", model_type='bert_keras', num_heads=12, hidden_size=768) # optimized_model.save_model_to_file("bert-base-cased.onnx") # optimize transformer-based models with onnxruntime-tools from onnxruntime_tools import optimizer from onnxruntime_tools.transformers.onnx_model_bert import BertOptimizationOptions # disable embedding layer norm optimization for better model size reduction opt_options = BertOptimizationOptions('bert') opt_options.enable_embed_layer_norm = False opt_model = optimizer.optimize_model( 'onnx/bert-base-cased.onnx', 'bert', num_heads=12, hidden_size=768, optimization_options=opt_options) opt_model.save_model_to_file('bert.opt.onnx') from os import environ from psutil import cpu_count # Constants from the performance optimization available in onnxruntime # It needs to be done before importing onnxruntime environ["OMP_NUM_THREADS"] = str(cpu_count(logical=True)) environ["OMP_WAIT_POLICY"] = 'ACTIVE' from onnxruntime import GraphOptimizationLevel, InferenceSession, SessionOptions, get_all_providers from contextlib import contextmanager from dataclasses import dataclass from time import time from tqdm import trange def create_model_for_provider(model_path: str, provider: str) -> InferenceSession: assert provider in get_all_providers(), 
f"provider {provider} not found, {get_all_providers()}" # Few properties that might have an impact on performances (provided by MS) options = SessionOptions() options.intra_op_num_threads = 1 options.graph_optimization_level = GraphOptimizationLevel.ORT_ENABLE_ALL # Load the model as a graph and prepare the CPU backend session = InferenceSession(model_path, options, providers=[provider]) session.disable_fallback() return session @contextmanager def track_infer_time(buffer: [int]): start = time() yield end = time() buffer.append(end - start) @dataclass class OnnxInferenceResult: model_inference_time: [int] optimized_model_path: str
_____no_output_____
MIT
methods/transformers/notebooks/04-onnx-export.ipynb
INK-USC/RiddleSense
Forwarding through our optimized ONNX model running on CPU --- When the model is loaded for inference over a specific provider, for instance **CPUExecutionProvider** as above, an optimized graph can be saved. This graph might include various optimizations, and you might be able to see some **higher-level** operations in the graph _(through [Netron](https://github.com/lutzroeder/Netron) for instance)_ such as: - **EmbedLayerNormalization** - **Attention** - **FastGeLU** These operations are an example of the kind of optimization **onnxruntime** is doing, for instance here gathering multiple operations into a bigger one _(Operator Fusing)_.
from transformers import BertTokenizerFast tokenizer = BertTokenizerFast.from_pretrained("bert-base-cased") cpu_model = create_model_for_provider("onnx/bert-base-cased.onnx", "CPUExecutionProvider") # Inputs are provided through numpy array model_inputs = tokenizer("My name is Bert", return_tensors="pt") inputs_onnx = {k: v.cpu().detach().numpy() for k, v in model_inputs.items()} # Run the model (None = get all the outputs) sequence, pooled = cpu_model.run(None, inputs_onnx) # Print information about outputs print(f"Sequence output: {sequence.shape}, Pooled output: {pooled.shape}")
loading file https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-cased-vocab.txt from cache at /home/mfuntowicz/.cache/torch/transformers/5e8a2b4893d13790ed4150ca1906be5f7a03d6c4ddf62296c383f6db42814db2.e13dbb970cb325137104fb2e5f36fe865f27746c6b526f6352861b1980eb80b1
MIT
methods/transformers/notebooks/04-onnx-export.ipynb
INK-USC/RiddleSense
Benchmarking PyTorch model _Note: the PyTorch model benchmark is run on CPU_
from transformers import BertModel PROVIDERS = { ("cpu", "PyTorch CPU"), # Uncomment this line to enable GPU benchmarking # ("cuda:0", "PyTorch GPU") } results = {} for device, label in PROVIDERS: # Move inputs to the correct device model_inputs_on_device = { arg_name: tensor.to(device) for arg_name, tensor in model_inputs.items() } # Add PyTorch to the providers model_pt = BertModel.from_pretrained("bert-base-cased").to(device) for _ in trange(10, desc="Warming up"): model_pt(**model_inputs_on_device) # Compute time_buffer = [] for _ in trange(100, desc=f"Tracking inference time on PyTorch"): with track_infer_time(time_buffer): model_pt(**model_inputs_on_device) # Store the result results[label] = OnnxInferenceResult( time_buffer, None )
loading configuration file https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-cased-config.json from cache at /home/mfuntowicz/.cache/torch/transformers/b945b69218e98b3e2c95acf911789741307dec43c698d35fad11c1ae28bda352.9da767be51e1327499df13488672789394e2ca38b877837e52618a67d7002391 Model config BertConfig { "architectures": [ "BertForMaskedLM" ], "attention_probs_dropout_prob": 0.1, "gradient_checkpointing": false, "hidden_act": "gelu", "hidden_dropout_prob": 0.1, "hidden_size": 768, "initializer_range": 0.02, "intermediate_size": 3072, "layer_norm_eps": 1e-12, "max_position_embeddings": 512, "model_type": "bert", "num_attention_heads": 12, "num_hidden_layers": 12, "pad_token_id": 0, "type_vocab_size": 2, "vocab_size": 28996 } loading weights file https://cdn.huggingface.co/bert-base-cased-pytorch_model.bin from cache at /home/mfuntowicz/.cache/torch/transformers/d8f11f061e407be64c4d5d7867ee61d1465263e24085cfa26abf183fdc830569.3fadbea36527ae472139fe84cddaa65454d7429f12d543d80bfc3ad70de55ac2 All model checkpoint weights were used when initializing BertModel. All the weights of BertModel were initialized from the model checkpoint at bert-base-cased. If your task is similar to the task the model of the checkpoint was trained on, you can already use BertModel for predictions without further training. Warming up: 100%|██████████| 10/10 [00:00<00:00, 39.30it/s] Tracking inference time on PyTorch: 100%|██████████| 100/100 [00:02<00:00, 41.09it/s]
MIT
methods/transformers/notebooks/04-onnx-export.ipynb
INK-USC/RiddleSense
Benchmarking PyTorch & ONNX on CPU _**Disclaimer: results may vary depending on the actual hardware used to run the model**_
PROVIDERS = { ("CPUExecutionProvider", "ONNX CPU"), # Uncomment this line to enable GPU benchmarking # ("CUDAExecutionProvider", "ONNX GPU") } for provider, label in PROVIDERS: # Create the model with the specified provider model = create_model_for_provider("onnx/bert-base-cased.onnx", provider) # Keep track of the inference time time_buffer = [] # Warm up the model model.run(None, inputs_onnx) # Compute for _ in trange(100, desc=f"Tracking inference time on {provider}"): with track_infer_time(time_buffer): model.run(None, inputs_onnx) # Store the result results[label] = OnnxInferenceResult( time_buffer, model.get_session_options().optimized_model_filepath ) %matplotlib inline import matplotlib import matplotlib.pyplot as plt import numpy as np import os # Compute average inference time + std time_results = {k: np.mean(v.model_inference_time) * 1e3 for k, v in results.items()} time_results_std = np.std([v.model_inference_time for v in results.values()]) * 1000 plt.rcdefaults() fig, ax = plt.subplots(figsize=(16, 12)) ax.set_ylabel("Avg Inference time (ms)") ax.set_title("Average inference time (ms) for each provider") ax.bar(time_results.keys(), time_results.values(), yerr=time_results_std) plt.show()
_____no_output_____
MIT
methods/transformers/notebooks/04-onnx-export.ipynb
INK-USC/RiddleSense
Quantization support from transformers Quantization enables the use of integer (_instead of floating point_) arithmetic to run neural network models faster. From a high-level point of view, quantization works by mapping the float32 range of values onto int8 with as little loss as possible in model performance. Hugging Face provides a conversion tool as part of the transformers repository to easily export quantized models to ONNX Runtime. For more information, please refer to the following: - [Hugging Face Documentation on ONNX Runtime quantization support](https://huggingface.co/transformers/master/serialization.html#quantization) - [Intel's Explanation of Quantization](https://nervanasystems.github.io/distiller/quantization.html) With this method, the accuracy of the model remains at the same level as the full-precision model. If you want to see benchmarks on model performance, we recommend reading the [ONNX Runtime notebook](https://github.com/microsoft/onnxruntime/blob/master/onnxruntime/python/tools/quantization/notebooks/Bert-GLUE_OnnxRuntime_quantization.ipynb) on the subject. Benchmarking PyTorch quantized model
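To make the float32-to-int8 mapping concrete, here is a minimal sketch of the standard asymmetric affine quantization scheme. This is a conceptual illustration of the idea described above, not the exact algorithm ONNX Runtime or PyTorch implement internally:

```python
import numpy as np

def quantize_int8(x):
    """Asymmetric affine quantization of a float32 array to int8.

    Maps [x.min(), x.max()] onto the int8 range [-128, 127] via
        q = round(x / scale) + zero_point
    and returns the int8 tensor plus the (scale, zero_point) needed to
    dequantize. Assumes x is not constant (scale would be zero).
    """
    x = np.asarray(x, dtype=np.float32)
    qmin, qmax = -128, 127
    scale = (x.max() - x.min()) / (qmax - qmin)
    zero_point = int(round(qmin - x.min() / scale))
    q = np.clip(np.round(x / scale) + zero_point, qmin, qmax).astype(np.int8)
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    """Recover approximate float values from the int8 representation."""
    return (q.astype(np.float32) - zero_point) * scale

x = np.array([-1.0, 0.0, 0.5, 2.0], dtype=np.float32)
q, s, zp = quantize_int8(x)
x_hat = dequantize(q, s, zp)
```

Each dequantized value differs from the original by at most about half a quantization step (`scale / 2`), which is why accuracy is largely preserved when the value range is well covered.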
import torch # Quantize model_pt_quantized = torch.quantization.quantize_dynamic( model_pt.to("cpu"), {torch.nn.Linear}, dtype=torch.qint8 ) # Warm up model_pt_quantized(**model_inputs) # Benchmark PyTorch quantized model time_buffer = [] for _ in trange(100): with track_infer_time(time_buffer): model_pt_quantized(**model_inputs) results["PyTorch CPU Quantized"] = OnnxInferenceResult( time_buffer, None )
100%|██████████| 100/100 [00:01<00:00, 90.15it/s]
MIT
methods/transformers/notebooks/04-onnx-export.ipynb
INK-USC/RiddleSense
Benchmarking ONNX quantized model
from transformers.convert_graph_to_onnx import quantize # Transformers allow you to easily convert float32 model to quantized int8 with ONNX Runtime quantized_model_path = quantize(Path("bert.opt.onnx")) # Then you just have to load through ONNX runtime as you would normally do quantized_model = create_model_for_provider(quantized_model_path.as_posix(), "CPUExecutionProvider") # Warm up the overall model to have a fair comparaison outputs = quantized_model.run(None, inputs_onnx) # Evaluate performances time_buffer = [] for _ in trange(100, desc=f"Tracking inference time on CPUExecutionProvider with quantized model"): with track_infer_time(time_buffer): outputs = quantized_model.run(None, inputs_onnx) # Store the result results["ONNX CPU Quantized"] = OnnxInferenceResult( time_buffer, quantized_model_path )
As of onnxruntime 1.4.0, models larger than 2GB will fail to quantize due to protobuf constraint. This limitation will be removed in the next release of onnxruntime. Quantized model has been written at bert.onnx: ✔
MIT
methods/transformers/notebooks/04-onnx-export.ipynb
INK-USC/RiddleSense
Show the inference performance of each provider
%matplotlib inline import matplotlib import matplotlib.pyplot as plt import numpy as np import os # Compute average inference time + std time_results = {k: np.mean(v.model_inference_time) * 1e3 for k, v in results.items()} time_results_std = np.std([v.model_inference_time for v in results.values()]) * 1000 plt.rcdefaults() fig, ax = plt.subplots(figsize=(16, 12)) ax.set_ylabel("Avg Inference time (ms)") ax.set_title("Average inference time (ms) for each provider") ax.bar(time_results.keys(), time_results.values(), yerr=time_results_std) plt.show()
_____no_output_____
MIT
methods/transformers/notebooks/04-onnx-export.ipynb
INK-USC/RiddleSense
Lecture 1. Introduction to CUDA Programming Model and Toolkit. Welcome to the GPU short course exercises! During this course we will introduce you to the syntax of CUDA programming in Python using the `numba.cuda` package. During the first lecture we will focus on optimizing the following function, which adds two vectors together using a single CPU core.
import numpy as np def add_vectors(a, b): assert len(a) == len(b) n = len(a) result = [None]*n for i in range(n): result[i] = a[i] + b[i] return result add_vectors([1, 2, 3, 4], [4, 5, 6, 7])
_____no_output_____
Unlicense
exercises/1_CUDA_programming_model.ipynb
pjarosik/ius-2021-gpu-short-course
Lets measure the time needed to execute the `add_vectors` for big-scale arrays:
a = np.random.rand(2**24) # ~ 1e7 elements b = np.random.rand(2**24) %%timeit -n 2 add_vectors(a, b)
2 loops, best of 5: 6.46 s per loop
Unlicense
exercises/1_CUDA_programming_model.ipynb
pjarosik/ius-2021-gpu-short-course
In the following sections we will show you how to optimize the above implementation by reimplementing vector addition on the GPU. Exercise 1.1. CUDA kernels and CUDA threads. 1.1.1. One-dimensional grid. Let's do all the necessary Python imports first.
import math from numba import cuda import numpy as np
_____no_output_____
Unlicense
exercises/1_CUDA_programming_model.ipynb
pjarosik/ius-2021-gpu-short-course
The `numba.cuda` package makes it possible to write CUDA kernels directly in Python. We will describe Numba in more detail later. For now, it is enough to understand that: - the code that is executed by each GPU core separately is called a *GPU kernel*, - in Numba, a GPU kernel is a Python function with the `@cuda.jit` decorator. The code below creates a CUDA kernel which simply does nothing (*NOP*).
@cuda.jit def my_first_gpu_kernel(): pass
_____no_output_____
Unlicense
exercises/1_CUDA_programming_model.ipynb
pjarosik/ius-2021-gpu-short-course
The following line launches the kernel with 64 thread blocks, each block consisting of 256 threads:
my_first_gpu_kernel[64, 256]()
_____no_output_____
Unlicense
exercises/1_CUDA_programming_model.ipynb
pjarosik/ius-2021-gpu-short-course
Ok, now that we know the syntax for writing and launching CUDA kernel code, we can proceed with porting the `add_vectors` function to the GPU device. The key is to note that vector element additions are *independent tasks* - that is, for each `i` and `j`, the result of `a[i]+b[i]` does not depend on `a[j]+b[j]` and vice versa. So, let's move line 9 of the `add_vectors` function to the new GPU kernel implementation:
@cuda.jit def add_vectors_kernel(result, a, b): i = cuda.blockIdx.x*cuda.blockDim.x + cuda.threadIdx.x # Make sure not to go outside the grid area! if i >= len(result): return result[i] = a[i] + b[i]
_____no_output_____
Unlicense
exercises/1_CUDA_programming_model.ipynb
pjarosik/ius-2021-gpu-short-course
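The port above relies on the elementwise additions being order-independent. As a quick CPU-side sanity check (a didactic sketch, not part of the original course code), we can execute the additions in a shuffled order and confirm the result is unchanged:

```python
import random

def add_vectors_any_order(a, b, order):
    """Compute result[i] = a[i] + b[i], visiting indices in an arbitrary
    order: each element depends only on its own inputs, so any schedule
    (including a parallel one) yields the same result."""
    result = [None] * len(a)
    for i in order:
        result[i] = a[i] + b[i]
    return result

a = [1, 2, 3, 4]
b = [4, 5, 6, 7]
order = list(range(len(a)))
random.shuffle(order)  # simulate an unpredictable execution schedule
assert add_vectors_any_order(a, b, order) == [5, 7, 9, 11]
```

This independence is exactly what lets each CUDA thread handle one element without any synchronization.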
The above code is executed by each CUDA thread separately. The fields `blockIdx`, `blockDim` and `threadIdx` allow determining the position of the current thread in the whole grid of threads executed by the CUDA device: - `blockIdx` is the identifier of the currently executed block of threads in the grid, - `blockDim` is the size of a single block of threads, - `threadIdx` is the identifier of the currently executed thread within a single block. A grid of threads can have one, two or three dimensions, described by the `(cuda.gridDim.z, cuda.gridDim.y, cuda.gridDim.x)` tuple. Each thread in a block has coordinates `(cuda.threadIdx.z, cuda.threadIdx.y, cuda.threadIdx.x)`. Each block in a grid has coordinates `(cuda.blockIdx.z, cuda.blockIdx.y, cuda.blockIdx.x)`. The `x` coordinate changes the fastest: two adjacent threads in the same block differ in the value of the `x` coordinate by 1. Grid and block dimensions can be specified via the `grid_size` and `block_size` launch parameters: a single scalar value means that a 1-D grid will be used, a pair of values sets a 2-D grid, and three values set a 3-D grid. Now, we would like to run the above kernel for each `result[i]`. Let's assume for a moment that we want the above CUDA kernel to be executed by 256 threads in parallel - i.e. one block will consist of 256 threads. To cover the entire input array, the kernel has to be executed by $\left\lceil \frac{n}{256} \right\rceil$ blocks of threads.
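The index arithmetic can be verified without a GPU. The following pure-Python simulation (a didactic sketch) enumerates the global index `i = blockIdx.x*blockDim.x + threadIdx.x` for every launched thread, applying the same bounds guard as the kernel:

```python
import math

def simulate_1d_launch(grid_size, block_size, n):
    """Return the global indices actually written by a 1-D launch of
    `grid_size` blocks of `block_size` threads, with the `i >= n` guard
    applied (mirrors add_vectors_kernel)."""
    covered = []
    for block_idx in range(grid_size):
        for thread_idx in range(block_size):
            i = block_idx * block_size + thread_idx
            if i >= n:  # threads past the end of the array do nothing
                continue
            covered.append(i)
    return covered

n = 1000
block_size = 256
grid_size = math.ceil(n / block_size)  # 4 blocks -> 1024 threads launched
covered = simulate_1d_launch(grid_size, block_size, n)
assert covered == list(range(n))  # every element covered exactly once
```

With n = 1000 the launch starts 1024 threads, so the last 24 are masked off by the guard; this is why the `if i >= len(result): return` check in the kernel is necessary.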
def add_vectors_gpu(a, b): assert len(a) == len(b) # Create output array in the GPU memory. result = cuda.device_array(shape=a.shape, dtype=a.dtype) block_size = 256 grid_size = math.ceil(len(a)/block_size) add_vectors_kernel[grid_size, block_size](result, a, b) return result.copy_to_host() %%timeit add_vectors_gpu(a, b)
1 loop, best of 5: 138 ms per loop
Unlicense
exercises/1_CUDA_programming_model.ipynb
pjarosik/ius-2021-gpu-short-course
Congratulations! Your very first GPU kernel, wrapped in the `add_vectors_gpu` function, executes much faster than its CPU counterpart. Of course, writing CUDA kernels is not the only part of preparing a GPU processing pipeline. One of the other important things to consider is the heterogeneous nature of CPU-GPU processing, i.e. the data transfer between the GPU and the host computer. 1.1.2. Two-dimensional grid. In the previous example, the grid of threads was defined in a single dimension, i.e. the variables `grid_size` and `block_size` were single scalar values. This time we will implement a function which adds two **matrices**, and we will use a 2-D grid of threads for this purpose. Let's implement `add_matrices` for the CPU first.
import itertools import numpy as np def add_matrices(a, b): a = np.array(a) b = np.array(b) assert a.shape == b.shape height, width = a.shape result = np.zeros(a.shape, dtype=a.dtype) for i, j in itertools.product(range(height), range(width)): result[i, j] = a[i, j] + b[i, j] return result add_matrices( # a = [[ 1, 2, 3, 4], [ 4, 5, 6, 7]], # b = [[-1, -2, -3, -4], [ 1, 1, 1, 1]]) A = np.random.rand(2**12, 2**12) # ~ 1e7 elements B = np.random.rand(2**12, 2**12) C = add_matrices(A, B) np.testing.assert_equal(C, A+B) %%timeit -n 2 add_matrices(A, B)
2 loops, best of 5: 8.95 s per loop
Unlicense
exercises/1_CUDA_programming_model.ipynb
pjarosik/ius-2021-gpu-short-course
Similarly to the `add_vectors_kernel` implementation, `add_matrices_kernel` will compute a single matrix element. This time, we will use the `y` coordinate to address matrix elements in the second dimension:
@cuda.jit
def add_matrices_kernel(result, a, b):
    i = cuda.blockIdx.x*cuda.blockDim.x + cuda.threadIdx.x
    j = cuda.blockIdx.y*cuda.blockDim.y + cuda.threadIdx.y
    height, width = result.shape
    # Make sure we are not accessing data outside available space!
    if i >= width or j >= height:
        return
    result[j, i] = a[j, i] + b[j, i]
Now, we must also pass the second dimension of the grid to the GPU kernel invocation parameters, as the implementation assumes a 2-D grid layout. The parameters `grid_size` and `block_size` now have to be pairs of integer values:
def add_matrices_gpu(a, b):
    assert a.shape == b.shape
    # Create output array in the GPU memory.
    result = cuda.device_array(shape=a.shape, dtype=a.dtype)
    height, width = a.shape
    block_size = (16, 16)
    grid_size = (math.ceil(width/block_size[0]),
                 math.ceil(height/block_size[1]))
    add_matrices_kernel[grid_size, block_size](result, a, b)
    return result.copy_to_host()

add_matrices_gpu(
    np.array([[ 1,  2,  3,  4],
              [ 4,  5,  6,  7]]),
    np.array([[-1, -2, -3, -4],
              [ 1,  1,  1,  1]]))
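The grid arithmetic above is worth a CPU-side sanity check: `math.ceil` rounds up, so matrix sides that are not multiples of the block size get one extra, partially filled block per dimension, which is exactly why the kernel guards against out-of-range indices. A minimal sketch (the helper name `grid_2d` is ours; no GPU required):

```python
import math

def grid_2d(width, height, block=(16, 16)):
    # One block covers a block[0] x block[1] tile; round up so the
    # grid always covers the whole matrix, possibly with a partial tile.
    return (math.ceil(width / block[0]), math.ceil(height / block[1]))

# 4096 is a multiple of 16: the grid fits exactly.
print(grid_2d(4096, 4096))   # (256, 256)
# 4100 is not: one extra block per dimension, so some threads stay idle.
print(grid_2d(4100, 4100))   # (257, 257)
```

For the 4096x4096 matrices used here the grid fits exactly; for a 4100x4100 matrix the last row and column of blocks would contain threads that return early at the bounds check.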
Let's test the implementation first:
C = add_matrices_gpu(A, B)
np.testing.assert_equal(C, A+B)
Now let's compare the CPU and GPU processing times:
%%timeit
add_matrices_gpu(A, B)
10 loops, best of 5: 140 ms per loop
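Extending the same element-wise pattern to a third dimension gives a CPU reference that a 3-D GPU kernel can be tested against. A sketch using plain nested lists; the name `add_arrays_3d` is our own:

```python
import itertools

def add_arrays_3d(a, b):
    # a and b: nested lists with the same d0 x d1 x d2 shape.
    d0, d1, d2 = len(a), len(a[0]), len(a[0][0])
    result = [[[0] * d2 for _ in range(d1)] for _ in range(d0)]
    for i, j, k in itertools.product(range(d0), range(d1), range(d2)):
        result[i][j][k] = a[i][j][k] + b[i][j][k]
    return result

A3 = [[[1, 2], [3, 4]], [[5, 6], [7, 8]]]
B3 = [[[1, 1], [1, 1]], [[1, 1], [1, 1]]]
assert add_arrays_3d(A3, B3) == [[[2, 3], [4, 5]], [[6, 7], [8, 9]]]
```

A GPU version would follow the 2-D kernel above, with a third global index computed from `cuda.blockIdx.z`, `cuda.blockDim.z` and `cuda.threadIdx.z`, and triples for `grid_size` and `block_size`.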
We leave the implementation of adding two 3-D arrays as homework for the students. 1.1.3. See also- CUDA kernels in Numba: introduction: https://numba.readthedocs.io/en/0.52.0/cuda/kernels.html#introduction- Matrix multiplication example: https://numba.readthedocs.io/en/0.52.0/cuda/examples.html#matrix-multiplication Exercise 1.2. Transferring data to and from GPU memory. In the previous examples, we passed the `numpy` arrays directly to the GPU kernel code. The `numpy` arrays were stored in the host PC's operating memory. As GPU computing can be performed only on data located in GPU memory, the Numba package implicitly transferred the data from the PC's memory to the GPU global memory first. The data transfers can also be run explicitly, if necessary:
block_size = 256
grid_size = math.ceil(len(a)/block_size)

# Create an array for the result in the GPU global memory.
result_gpu = cuda.device_array(shape=a.shape, dtype=a.dtype)

# Here are the explicit data transfers from host PC memory to GPU global memory:
a_gpu = cuda.to_device(a)
b_gpu = cuda.to_device(b)

add_vectors_kernel[grid_size, block_size](result_gpu, a_gpu, b_gpu)

# After the computations are done, transfer the results to the host PC memory.
result = result_gpu.copy_to_host()
Data transfer to and from GPU memory is only possible with GPU global memory. The following functions are available in Numba:- create an array in the GPU global memory: `numba.cuda.device_array` or `numba.cuda.device_array_like`,- host PC to GPU global memory transfer: `numba.cuda.to_device`,- GPU global memory to host PC memory transfer: `gpu_array.copy_to_host()`, where `gpu_array` is a GPU array. The complete list of Numba's functions for data transfer to and from the GPU is available here: https://numba.readthedocs.io/en/0.52.0/cuda/memory.html#data-transfer The advantage of heterogeneous programming with CUDA is that computing performed on the GPU can run in parallel with the operations performed by the CPU -- both CPU and GPU are separate processing devices that can work simultaneously. In other words, CUDA kernel invocations are **asynchronous**: when we invoke a GPU kernel, the only job the CPU does is enqueue the kernel to be executed on the GPU, then it returns immediately. (In fact, this is not always true for kernels written in Numba - the first launch of a CUDA kernel may also require Python code compilation. This topic will be discussed later in this lecture.) For example, the following GPU kernel call takes much less time than what we've seen so far:
%%timeit -n 100
add_vectors_kernel[grid_size, block_size](result_gpu, a_gpu, b_gpu)
100 loops, best of 5: 172 µs per loop
The difference is that we did not transfer data from the GPU to the CPU. Let's try now with the GPU -> CPU transfer:
%%timeit -n 100
add_vectors_kernel[grid_size, block_size](result_gpu, a_gpu, b_gpu)
result = result_gpu.copy_to_host()
100 loops, best of 5: 34.6 ms per loop
The difference is due to the fact that the data transfer is a blocking operation - it waits for all queued operations to be performed, then performs the transfer. To explicitly wait for the kernels to finish, without transferring the result data to the host PC, run `cuda.default_stream().synchronize()`.
%%timeit -n 100
add_vectors_kernel[grid_size, block_size](result_gpu, a_gpu, b_gpu)
cuda.default_stream().synchronize()
100 loops, best of 5: 1.8 ms per loop
CUDA streams will be covered in more detail later in this short-course. 1.2.1 See also - CUDA Toolkit documentation: device memory management (CUDA C++): https://docs.nvidia.com/cuda/cuda-c-programming-guide/index.html#device-memory- The complete list of Numba's functions for data transfer to and from GPU: https://numba.readthedocs.io/en/0.52.0/cuda/memory.html#data-transfer Exercise 1.3. CUDA Toolkit and Numba package. Nvidia provides several tools in its Toolkit that help in the implementation and testing of GPU code. In this exercise we will show you how to:- check what parameters your hardware has, e.g. how to programmatically check how many GPUs are available, how much of each kind of memory they have, and so on,- debug and memcheck your Python CUDA kernels,- profile CUDA code execution time. Also, we will introduce more details of the Numba Python package, which we will use during the whole course. Exercise 1.3.1. CUDA device diagnostics. The most basic diagnostic tool for GPU cards is `nvidia-smi`, which displays the current status of all available GPU cards. (NOTE: the `nvidia-smi` tool is not available on Nvidia Jetson processors. For SoC chips, please use the built-in `tegrastats` or install [`jtop`](https://pypi.org/project/jetson-stats/).)
! nvidia-smi
Sun Jun 27 10:32:17 2021 +-----------------------------------------------------------------------------+ | NVIDIA-SMI 465.27 Driver Version: 460.32.03 CUDA Version: 11.2 | |-------------------------------+----------------------+----------------------+ | GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC | | Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. | | | | MIG M. | |===============================+======================+======================| | 0 Tesla T4 Off | 00000000:00:04.0 Off | 0 | | N/A 60C P0 66W / 70W | 1256MiB / 15109MiB | 86% Default | | | | N/A | +-------------------------------+----------------------+----------------------+ +-----------------------------------------------------------------------------+ | Processes: | | GPU GI CI PID Type Process name GPU Memory | | ID ID Usage | |=============================================================================| +-----------------------------------------------------------------------------+
`nvidia-smi` outputs information about:- the installed NVIDIA driver and CUDA Toolkit,- for each available GPU: - temperature, memory usage and GPU utilization, - processes that are currently running on that GPU. `nvidia-smi` is a command line tool, so use it in your shell to quickly check the state of your GPU. The CUDA SDK also provides a programmatic way to access the device description in your application's run-time, e.g. to check that we are not exceeding the available GPU global memory. In the CUDA SDK, the device description is called *device properties*. To get the device properties, we will use the `cupy` package, which exposes the CUDA SDK interface in Python in a convenient way. Let's first check how many GPU cards we have:
import cupy as cp

cp.cuda.runtime.getDeviceCount()
Now, let's check:- the name of the device and its compute capability,- the GPU clock frequency,- how much global, shared and constant memory our GPU card has.
device_props = cp.cuda.runtime.getDeviceProperties(0)
print(f"Device: {device_props['name']} (cc {device_props['major']}.{device_props['minor']})")
print(f"GPU clock frequency: {device_props['clockRate']/1e3} MHz")
print("Available memory: ")
print(f"- global memory: {device_props['totalGlobalMem']/2**20} MiB")
print(f"- shared memory per thread block: {device_props['sharedMemPerBlock']} B")
print(f"- constant memory: {device_props['totalConstMem']} B")
Device: b'Tesla T4' (cc 7.5) GPU clock frequency: 1590.0 MHz Available memory: - global memory: 15109.75 MiB - shared memory per thread block: 49152 B - constant memory: 65536 B
The complete list of device properties is available [here](https://docs.nvidia.com/cuda/cuda-runtime-api/group__CUDART__DEVICE.html#group__CUDART__DEVICE_1g1bf9d625a931d657e08db2b4391170f0). Exercise 1.3.2. Numba. - Numba is a just-in-time (JIT) compiler for Python.- It generates machine code from Python bytecode using the LLVM compiler library, which results in a significant speed-up.- It works best on code that uses NumPy arrays and functions, and loops.- Numba can target NVIDIA CUDA and (experimentally) AMD ROC GPUs. In other words, it allows for (relatively) easy creation of Python code executed on the GPU, which results (potentially) in a significant speed-up. Numba documentation is available here: https://numba.pydata.org/numba-doc/latest/index.html One thing that needs to be stressed here is that Numba is a JIT compiler - it compiles a given function to machine code *lazily*, **on the first function call**. Compilation is performed only once - the first time a given function is run. After that, a cached version of the machine code is used. Let's see how long it takes to execute a brand new kernel the first time:
@cuda.jit
def increment(x):
    i = cuda.blockIdx.x*cuda.blockDim.x + cuda.threadIdx.x
    x[i] += 2

%time increment[1, 5](np.arange(5))
CPU times: user 125 ms, sys: 5.01 ms, total: 130 ms Wall time: 132 ms
The second kernel execution takes:
%time increment[1, 5](np.arange(5))
CPU times: user 1.52 ms, sys: 10 µs, total: 1.53 ms Wall time: 1.34 ms
Exercise 1.3.3. CUDA-MEMCHECK and debugging Numba code. 1.3.3.1 CUDA-MEMCHECK CUDA-MEMCHECK is a tool available in the CUDA SDK which checks whether a CUDA application makes any of the following errors:- misaligned and out-of-bounds memory access errors,- shared memory data races,- uninitialized accesses to global memory. Let's debug the Python script below to detect any memory issues it may cause. According to the Numba [documentation](https://numba.pydata.org/numba-doc/latest/user/troubleshoot.html#debug-info), we can pass the `debug=True` parameter to the `@cuda.jit` decorator in order to get some more information about the analyzed kernel. Let's do that, save the below cell to a Python script, and run CUDA-MEMCHECK on the Python interpreter.
%%writefile 1_3_3_memcheck.py
import os
import numpy as np
from numba import cuda
import math

@cuda.jit(debug=True)
def add_vectors_invalid(result, a, b):
    i = cuda.blockIdx.x*cuda.blockDim.x + cuda.threadIdx.x
    # What are we missing here?
    result[i] = a[i] + b[i]

a = np.arange(255)
b = np.arange(255)
result = cuda.device_array(a.shape, dtype=a.dtype)
add_vectors_invalid[1, 256](result, a, b)
result_host = result.copy_to_host()
Writing 1_3_3_memcheck.py
The only thing the above cell does is save the Python code to the `1_3_3_memcheck.py` script in the current directory (you can check it using the `! pwd` command). Now, we can run `cuda-memcheck` along with the Python interpreter to see if there are any issues with the script.
! cuda-memcheck --show-backtrace no python 1_3_3_memcheck.py
========= CUDA-MEMCHECK ========= Invalid __global__ read of size 8 ========= at 0x00000490 in cudapy::__main__::add_vectors_invalid$241(Array<__int64, int=1, C, mutable, aligned>, Array<__int64, int=1, C, mutable, aligned>, Array<__int64, int=1, C, mutable, aligned>) ========= by thread (255,0,0) in block (0,0,0) ========= Address 0x7fb806200ff8 is out of bounds ========= ========= Program hit CUDA_ERROR_LAUNCH_FAILED (error 719) due to "unspecified launch failure" on CUDA API call to cuMemcpyDtoH_v2. ========= Traceback (most recent call last): File "1_3_3_memcheck.py", line 19, in <module> add_vectors_invalid[1, 256](result, a, b) File "/usr/local/lib/python3.7/dist-packages/numba/cuda/compiler.py", line 770, in __call__ self.stream, self.sharedmem) File "/usr/local/lib/python3.7/dist-packages/numba/cuda/compiler.py", line 862, in call kernel.launch(args, griddim, blockdim, stream, sharedmem) File "/usr/local/lib/python3.7/dist-packages/numba/cuda/compiler.py", line 655, in launch driver.device_to_host(ctypes.addressof(excval), excmem, excsz) File "/usr/local/lib/python3.7/dist-packages/numba/cuda/cudadrv/driver.py", line 2345, in device_to_host fn(host_pointer(dst), device_pointer(src), size, *varargs) File "/usr/local/lib/python3.7/dist-packages/numba/cuda/cudadrv/driver.py", line 302, in safe_cuda_api_call self._check_error(fname, retcode) File "/usr/local/lib/python3.7/dist-packages/numba/cuda/cudadrv/driver.py", line 337, in _check_error raise CudaAPIError(retcode, msg) numba.cuda.cudadrv.driver.CudaAPIError: [719] Call to cuMemcpyDtoH results in CUDA_ERROR_LAUNCH_FAILED ========= ERROR SUMMARY: 2 errors
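The report above points at thread `(255,0,0)`: the script launches one block of 256 threads over 255-element arrays. A rough host-side model of the launch (just an index check, not how kernels really execute) shows which thread reads out of bounds:

```python
def simulate_launch(n_threads, n_elements):
    # Return the thread indices that would read past the end of the array.
    return [i for i in range(n_threads) if i >= n_elements]

# One block of 256 threads over arrays of length 255:
print(simulate_launch(256, 255))   # [255] -- exactly the thread memcheck flagged
```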
As we can see, CUDA-MEMCHECK detected that the `add_vectors_invalid` kernel was not properly executed by thread `255`. What is causing the issue? 1.3.3.2 Debugging Numba kernels The CUDA SDK toolkit includes debuggers that can be run on the GPU kernel code, in case it's necessary to trace the cause of an issue. A list of CUDA debuggers that can be run on C++ CUDA kernel code is available here: [Linux](https://docs.nvidia.com/cuda/cuda-gdb/index.html), [Windows](https://docs.nvidia.com/nsight-visual-studio-edition/cuda-debugger/). For applications that use Python with Numba to generate GPU machine code, users can run the Python debugger (`pdb`) directly on the kernel code using the CUDA simulator. More details about the simulator can be found [here](https://numba.pydata.org/numba-doc/dev/cuda/simulator.html). Let's use the CUDA simulator to print debug data directly in the kernel code. In order to be able to run the CUDA simulator, set the `NUMBA_ENABLE_CUDASIM` environment variable to the value `1`.
%%writefile 1_3_3_numba_debugger.py
# Turn on CUDA simulator.
import os
os.environ['NUMBA_ENABLE_CUDASIM'] = '1'

import numpy as np
from numba import cuda
import math

@cuda.jit
def add_vectors(result, a, b):
    i = cuda.blockIdx.x*cuda.blockDim.x + cuda.threadIdx.x
    result[i] = a[i] + b[i]
    if i == 10:
        print(f"{result[i]} = {a[i]} + {b[i]}")
        # or use PDB: import pdb; pdb.set_trace()

a = np.arange(255)
b = np.arange(255)
result = cuda.device_array(a.shape, dtype=a.dtype)
add_vectors[1, 255](result, a, b)
result_host = result.copy_to_host()

! python 1_3_3_numba_debugger.py
20 = 10 + 10
Exercise 1.3.4. Profiling GPU code. Sometimes, to better understand and optimize the performance of a GPU application, it is necessary to perform a dynamic program analysis and to measure specific metrics, for example the execution time and memory requirements of a particular CUDA kernel. The utility that performs such analysis is usually called a *code profiler*. NVIDIA provides a number of tools that enable code profiling. Some of them allow you to perform inspections from the command line, while others provide a graphical user interface that clearly presents various code metrics. In this exercise we will introduce the tools available in the CUDA ecosystem. NOTE: Currently, NVIDIA is migrating to a new profiling toolkit consisting of *NVIDIA Nsight Systems* and *NVIDIA Nsight Compute*. We will extend this exercise with examples of their use in the future. 1.3.4.1. NVPROF `nvprof` is a CUDA SDK tool that allows acquiring profiling data directly from the command line. Documentation for the tool is available here. We will use `nvprof` for the rest of this course. Let's try profiling our `add_vectors_gpu` code first.
%%writefile 1_3_4_nvprof_add_vectors.py
import math
from numba import cuda
import numpy as np

@cuda.jit
def add_vectors_kernel(result, a, b):
    i = cuda.blockIdx.x*cuda.blockDim.x + cuda.threadIdx.x
    # Make sure not to go outside the grid area!
    if i >= len(result):
        return
    result[i] = a[i] + b[i]

y = cuda.device_array(4)
add_vectors_kernel[1, 4](y, np.array([1, 2, 3, 4]), np.array([4, 5, 6, 7]))
result = y.copy_to_host()
np.testing.assert_equal(result, [5, 7, 9, 11])
Writing 1_3_4_nvprof_add_vectors.py
The usage is the following:

```
nvprof [options] [application] [application-arguments]
```

For example, to run the above Python script:
! nvprof python 1_3_4_nvprof_add_vectors.py
==568== NVPROF is profiling process 568, command: python3 1_3_4_nvprof_add_vectors.py ==568== Profiling application: python3 1_3_4_nvprof_add_vectors.py ==568== Profiling result: Type Time(%) Time Calls Avg Min Max Name GPU activities: 46.64% 5.7600us 3 1.9200us 1.6000us 2.5280us [CUDA memcpy DtoH] 26.94% 3.3270us 1 3.3270us 3.3270us 3.3270us cudapy::__main__::add_vectors_kernel$241(Array<double, int=1, C, mutable, aligned>, Array<__int64, int=1, C, mutable, aligned>, Array<__int64, int=1, C, mutable, aligned>) 26.43% 3.2640us 2 1.6320us 1.3760us 1.8880us [CUDA memcpy HtoD] API calls: 77.42% 192.63ms 1 192.63ms 192.63ms 192.63ms cuDevicePrimaryCtxRetain 21.86% 54.384ms 1 54.384ms 54.384ms 54.384ms cuLinkAddData 0.22% 540.14us 1 540.14us 540.14us 540.14us cuDeviceTotalMem 0.15% 384.52us 1 384.52us 384.52us 384.52us cuModuleLoadDataEx 0.09% 223.19us 3 74.397us 7.6010us 199.53us cuMemAlloc 0.08% 199.26us 101 1.9720us 243ns 80.285us cuDeviceGetAttribute 0.04% 103.58us 1 103.58us 103.58us 103.58us cuLinkComplete 0.04% 94.693us 2 47.346us 35.640us 59.053us cuDeviceGetName 0.03% 67.610us 3 22.536us 15.027us 36.832us cuMemcpyDtoH 0.02% 56.992us 1 56.992us 56.992us 56.992us cuLinkCreate 0.01% 29.286us 2 14.643us 10.217us 19.069us cuMemcpyHtoD 0.01% 28.299us 1 28.299us 28.299us 28.299us cuMemGetInfo 0.01% 25.681us 1 25.681us 25.681us 25.681us cuLaunchKernel 0.00% 11.264us 13 866ns 242ns 3.7900us cuCtxGetCurrent 0.00% 6.1790us 1 6.1790us 6.1790us 6.1790us cuDeviceGetPCIBusId 0.00% 5.6830us 11 516ns 235ns 1.7140us cuCtxGetDevice 0.00% 3.5440us 3 1.1810us 706ns 1.7690us cuDeviceGet 0.00% 3.0540us 1 3.0540us 3.0540us 3.0540us cuInit 0.00% 2.7270us 4 681ns 274ns 1.2660us cuDeviceGetCount 0.00% 1.7460us 5 349ns 177ns 736ns cuFuncGetAttribute 0.00% 1.7400us 1 1.7400us 1.7400us 1.7400us cuLinkDestroy 0.00% 1.7180us 1 1.7180us 1.7180us 1.7180us cuCtxPushCurrent 0.00% 1.6600us 1 1.6600us 1.6600us 1.6600us cuDriverGetVersion 0.00% 1.4600us 1 1.4600us 1.4600us 1.4600us 
cuModuleGetFunction 0.00% 1.0940us 1 1.0940us 1.0940us 1.0940us cuDeviceComputeCapability 0.00% 725ns 1 725ns 725ns 725ns cudaRuntimeGetVersion 0.00% 368ns 1 368ns 368ns 368ns cuDeviceGetUuid
By default, `nvprof` outputs all GPU and API call activity. We are mainly interested in the CUDA GPU tracing -- we can turn off API calls by using the `--trace gpu` option.
! nvprof --trace gpu python 1_3_4_nvprof_add_vectors.py
==590== NVPROF is profiling process 590, command: python3 1_3_4_nvprof_add_vectors.py ==590== Profiling application: python3 1_3_4_nvprof_add_vectors.py ==590== Profiling result: Type Time(%) Time Calls Avg Min Max Name GPU activities: 44.50% 5.3120us 3 1.7700us 1.6000us 2.0800us [CUDA memcpy DtoH] 27.88% 3.3280us 1 3.3280us 3.3280us 3.3280us cudapy::__main__::add_vectors_kernel$241(Array<double, int=1, C, mutable, aligned>, Array<__int64, int=1, C, mutable, aligned>, Array<__int64, int=1, C, mutable, aligned>) 27.61% 3.2960us 2 1.6480us 1.3760us 1.9200us [CUDA memcpy HtoD] No API activities were profiled.
CUDA GPU activities include:- CUDA kernel activities,- GPU to host memory transfers (`DtoH`) and host to GPU memory transfers (`HtoD`). Let's add one more kernel to the above code:
%%writefile 1_3_4_nvprof_increment_add_vectors.py
import math
from numba import cuda
import numpy as np

@cuda.jit
def increment(a):
    i = cuda.blockIdx.x*cuda.blockDim.x + cuda.threadIdx.x
    a[i] += 1

@cuda.jit
def add_vectors_kernel(result, a, b):
    i = cuda.blockIdx.x*cuda.blockDim.x + cuda.threadIdx.x
    result[i] = a[i] + b[i]

result_gpu = cuda.device_array(2**24)
a_gpu = cuda.to_device(np.random.rand(2**24))
b_gpu = cuda.to_device(np.random.rand(2**24))

block_size = 256
grid_size = math.ceil(len(result_gpu)/block_size)
increment[grid_size, block_size](a_gpu)
add_vectors_kernel[grid_size, block_size](result_gpu, a_gpu, b_gpu)
result = result_gpu.copy_to_host()

! nvprof --trace gpu python 1_3_4_nvprof_increment_add_vectors.py
==612== NVPROF is profiling process 612, command: python3 1_3_4_nvprof_increment_add_vectors.py ==612== Profiling application: python3 1_3_4_nvprof_increment_add_vectors.py ==612== Profiling result: Type Time(%) Time Calls Avg Min Max Name GPU activities: 54.69% 57.733ms 2 28.867ms 28.240ms 29.493ms [CUDA memcpy HtoD] 42.87% 45.252ms 1 45.252ms 45.252ms 45.252ms [CUDA memcpy DtoH] 1.45% 1.5324ms 1 1.5324ms 1.5324ms 1.5324ms cudapy::__main__::add_vectors_kernel$242(Array<double, int=1, C, mutable, aligned>, Array<double, int=1, C, mutable, aligned>, Array<double, int=1, C, mutable, aligned>) 0.99% 1.0485ms 1 1.0485ms 1.0485ms 1.0485ms cudapy::__main__::increment$241(Array<double, int=1, C, mutable, aligned>) No API activities were profiled.
In the above case, `nvprof` displays the execution times for both kernels. 1.3.4.2. NVIDIA Visual Profiler NVIDIA Visual Profiler (NVVP) allows inspecting the profiling results in a graphical timeline view. Let's export the profiling results to a file that can be loaded by NVVP. We can use `nvprof` for this purpose; just use the `--export-profile` parameter.
! nvprof --trace gpu --export-profile nvvp_example.nvvp -f python 1_3_4_nvprof_increment_add_vectors.py
==634== NVPROF is profiling process 634, command: python3 1_3_4_nvprof_increment_add_vectors.py ==634== Generated result file: /content/nvvp_example.nvvp
**CS224W - Colab 2** In Colab 2, we will work to construct our own graph neural network using PyTorch Geometric (PyG) and then apply that model on two Open Graph Benchmark (OGB) datasets. These two datasets will be used to benchmark your model's performance on two different graph-based tasks: 1) node property prediction, predicting properties of single nodes and 2) graph property prediction, predicting properties of entire graphs or subgraphs.First, we will learn how PyTorch Geometric stores graphs as PyTorch tensors.Then, we will load and inspect one of the Open Graph Benchmark (OGB) datasets by using the `ogb` package. OGB is a collection of realistic, large-scale, and diverse benchmark datasets for machine learning on graphs. The `ogb` package not only provides data loaders for each dataset but also model evaluators.Lastly, we will build our own graph neural network using PyTorch Geometric. We will then train and evaluate our model on the OGB node property prediction and graph property prediction tasks.**Note**: Make sure to **sequentially run all the cells in each section**, so that the intermediate variables / packages will carry over to the next cellWe recommend you save a copy of this colab in your drive so you don't lose progress!Have fun and good luck on Colab 2 :) DeviceYou might need to use a GPU for this Colab to run quickly.Please click `Runtime` and then `Change runtime type`. Then set the `hardware accelerator` to **GPU**. SetupAs discussed in Colab 0, the installation of PyG on Colab can be a little bit tricky. First let us check which version of PyTorch you are running
import torch
import os
print("PyTorch has version {}".format(torch.__version__))
PyTorch has version 1.10.0
Unlicense
Colab 2/CS224W - Colab 2_tobias.ipynb
victorcroisfelt/aau-cs224w-ml-with-graphs
Download the necessary packages for PyG. Make sure that your version of torch matches the output from the cell above. In case of any issues, more information can be found on the [PyG's installation page](https://pytorch-geometric.readthedocs.io/en/latest/notes/installation.html).
# Install torch geometric
if 'IS_GRADESCOPE_ENV' not in os.environ:
  !pip install torch-scatter -f https://pytorch-geometric.com/whl/torch-1.9.0+cu111.html
  !pip install torch-sparse -f https://pytorch-geometric.com/whl/torch-1.9.0+cu111.html
  !pip install torch-geometric
  !pip install ogb
Looking in links: https://pytorch-geometric.com/whl/torch-1.9.0+cu111.html Collecting torch-scatter Downloading https://data.pyg.org/whl/torch-1.9.0%2Bcu111/torch_scatter-2.0.9-cp38-cp38-win_amd64.whl (4.0 MB) Installing collected packages: torch-scatter Successfully installed torch-scatter-2.0.9 Looking in links: https://pytorch-geometric.com/whl/torch-1.9.0+cu111.html Collecting torch-sparse Downloading https://data.pyg.org/whl/torch-1.9.0%2Bcu111/torch_sparse-0.6.12-cp38-cp38-win_amd64.whl (4.0 MB) Requirement already satisfied: scipy in c:\users\yx84ax\anaconda3\lib\site-packages (from torch-sparse) (1.7.1) Requirement already satisfied: numpy<1.23.0,>=1.16.5 in c:\users\yx84ax\anaconda3\lib\site-packages (from scipy->torch-sparse) (1.20.3) Installing collected packages: torch-sparse Successfully installed torch-sparse-0.6.12 Collecting torch-geometric Downloading torch_geometric-2.0.2.tar.gz (325 kB) Requirement already satisfied: numpy in c:\users\yx84ax\anaconda3\lib\site-packages (from torch-geometric) (1.20.3) Requirement already satisfied: tqdm in c:\users\yx84ax\anaconda3\lib\site-packages (from torch-geometric) (4.62.3) Requirement already satisfied: scipy in c:\users\yx84ax\anaconda3\lib\site-packages (from torch-geometric) (1.7.1) Requirement already satisfied: networkx in c:\users\yx84ax\anaconda3\lib\site-packages (from torch-geometric) (2.6.3) Requirement already satisfied: scikit-learn in c:\users\yx84ax\anaconda3\lib\site-packages (from torch-geometric) (1.0.1) Requirement already satisfied: requests in c:\users\yx84ax\anaconda3\lib\site-packages (from torch-geometric) (2.26.0) Requirement already satisfied: pandas in c:\users\yx84ax\anaconda3\lib\site-packages (from torch-geometric) (1.3.4) Collecting rdflib Downloading rdflib-6.0.2-py3-none-any.whl (407 kB) Collecting googledrivedownloader Downloading googledrivedownloader-0.4-py2.py3-none-any.whl (3.9 kB) Requirement already satisfied: jinja2 in c:\users\yx84ax\anaconda3\lib\site-packages 
(from torch-geometric) (2.11.3) Requirement already satisfied: pyparsing in c:\users\yx84ax\anaconda3\lib\site-packages (from torch-geometric) (3.0.4) Collecting yacs Downloading yacs-0.1.8-py3-none-any.whl (14 kB) Requirement already satisfied: PyYAML in c:\users\yx84ax\anaconda3\lib\site-packages (from torch-geometric) (6.0) Requirement already satisfied: MarkupSafe>=0.23 in c:\users\yx84ax\anaconda3\lib\site-packages (from jinja2->torch-geometric) (1.1.1) Requirement already satisfied: python-dateutil>=2.7.3 in c:\users\yx84ax\anaconda3\lib\site-packages (from pandas->torch-geometric) (2.8.2) Requirement already satisfied: pytz>=2017.3 in c:\users\yx84ax\anaconda3\lib\site-packages (from pandas->torch-geometric) (2021.3) Requirement already satisfied: six>=1.5 in c:\users\yx84ax\anaconda3\lib\site-packages (from python-dateutil>=2.7.3->pandas->torch-geometric) (1.16.0) Requirement already satisfied: setuptools in c:\users\yx84ax\anaconda3\lib\site-packages (from rdflib->torch-geometric) (58.0.4) Collecting isodate Downloading isodate-0.6.0-py2.py3-none-any.whl (45 kB) Requirement already satisfied: certifi>=2017.4.17 in c:\users\yx84ax\anaconda3\lib\site-packages (from requests->torch-geometric) (2021.10.8) Requirement already satisfied: charset-normalizer~=2.0.0 in c:\users\yx84ax\anaconda3\lib\site-packages (from requests->torch-geometric) (2.0.4) Requirement already satisfied: urllib3<1.27,>=1.21.1 in c:\users\yx84ax\anaconda3\lib\site-packages (from requests->torch-geometric) (1.26.7) Requirement already satisfied: idna<4,>=2.5 in c:\users\yx84ax\anaconda3\lib\site-packages (from requests->torch-geometric) (3.3) Requirement already satisfied: threadpoolctl>=2.0.0 in c:\users\yx84ax\anaconda3\lib\site-packages (from scikit-learn->torch-geometric) (2.2.0) Requirement already satisfied: joblib>=0.11 in c:\users\yx84ax\anaconda3\lib\site-packages (from scikit-learn->torch-geometric) (1.1.0) Requirement already satisfied: colorama in 
c:\users\yx84ax\anaconda3\lib\site-packages (from tqdm->torch-geometric) (0.4.4) Building wheels for collected packages: torch-geometric Building wheel for torch-geometric (setup.py): started Building wheel for torch-geometric (setup.py): finished with status 'done' Created wheel for torch-geometric: filename=torch_geometric-2.0.2-py3-none-any.whl size=535570 sha256=a9e81f1d66cb10c742eff0a1d0d81fad65898556c4f39f8a94dee2f77ab5364a Stored in directory: c:\users\yx84ax\appdata\local\pip\cache\wheels\41\cd\57\4187ae4860bff8a3a432ca291ea2574a7682f87331bfa0551d Successfully built torch-geometric Installing collected packages: isodate, yacs, rdflib, googledrivedownloader, torch-geometric Successfully installed googledrivedownloader-0.4 isodate-0.6.0 rdflib-6.0.2 torch-geometric-2.0.2 yacs-0.1.8 Collecting ogb Downloading ogb-1.3.2-py3-none-any.whl (78 kB) Requirement already satisfied: tqdm>=4.29.0 in c:\users\yx84ax\anaconda3\lib\site-packages (from ogb) (4.62.3) Requirement already satisfied: scikit-learn>=0.20.0 in c:\users\yx84ax\anaconda3\lib\site-packages (from ogb) (1.0.1) Requirement already satisfied: numpy>=1.16.0 in c:\users\yx84ax\anaconda3\lib\site-packages (from ogb) (1.20.3) Collecting outdated>=0.2.0 Downloading outdated-0.2.1-py3-none-any.whl (7.5 kB) Requirement already satisfied: torch>=1.6.0 in c:\users\yx84ax\anaconda3\lib\site-packages (from ogb) (1.10.0) Requirement already satisfied: pandas>=0.24.0 in c:\users\yx84ax\anaconda3\lib\site-packages (from ogb) (1.3.4) Requirement already satisfied: urllib3>=1.24.0 in c:\users\yx84ax\anaconda3\lib\site-packages (from ogb) (1.26.7) Requirement already satisfied: six>=1.12.0 in c:\users\yx84ax\anaconda3\lib\site-packages (from ogb) (1.16.0) Requirement already satisfied: requests in c:\users\yx84ax\anaconda3\lib\site-packages (from outdated>=0.2.0->ogb) (2.26.0) Collecting littleutils Downloading littleutils-0.2.2.tar.gz (6.6 kB) Requirement already satisfied: pytz>=2017.3 in 
c:\users\yx84ax\anaconda3\lib\site-packages (from pandas>=0.24.0->ogb) (2021.3) Requirement already satisfied: python-dateutil>=2.7.3 in c:\users\yx84ax\anaconda3\lib\site-packages (from pandas>=0.24.0->ogb) (2.8.2) Requirement already satisfied: scipy>=1.1.0 in c:\users\yx84ax\anaconda3\lib\site-packages (from scikit-learn>=0.20.0->ogb) (1.7.1) Requirement already satisfied: joblib>=0.11 in c:\users\yx84ax\anaconda3\lib\site-packages (from scikit-learn>=0.20.0->ogb) (1.1.0) Requirement already satisfied: threadpoolctl>=2.0.0 in c:\users\yx84ax\anaconda3\lib\site-packages (from scikit-learn>=0.20.0->ogb) (2.2.0) Requirement already satisfied: typing_extensions in c:\users\yx84ax\anaconda3\lib\site-packages (from torch>=1.6.0->ogb) (3.10.0.2) Requirement already satisfied: colorama in c:\users\yx84ax\anaconda3\lib\site-packages (from tqdm>=4.29.0->ogb) (0.4.4) Requirement already satisfied: idna<4,>=2.5 in c:\users\yx84ax\anaconda3\lib\site-packages (from requests->outdated>=0.2.0->ogb) (3.3) Requirement already satisfied: certifi>=2017.4.17 in c:\users\yx84ax\anaconda3\lib\site-packages (from requests->outdated>=0.2.0->ogb) (2021.10.8) Requirement already satisfied: charset-normalizer~=2.0.0 in c:\users\yx84ax\anaconda3\lib\site-packages (from requests->outdated>=0.2.0->ogb) (2.0.4) Building wheels for collected packages: littleutils Building wheel for littleutils (setup.py): started Building wheel for littleutils (setup.py): finished with status 'done' Created wheel for littleutils: filename=littleutils-0.2.2-py3-none-any.whl size=7048 sha256=cd4b0d285ba3b55442cc053ab4775f139c4f7cac897b1330ce6d202aff5d7587 Stored in directory: c:\users\yx84ax\appdata\local\pip\cache\wheels\6a\33\c4\0ef84d7f5568c2823e3d63a6e08988852fb9e4bc822034870a Successfully built littleutils Installing collected packages: littleutils, outdated, ogb Successfully installed littleutils-0.2.2 ogb-1.3.2 outdated-0.2.1
Unlicense
Colab 2/CS224W - Colab 2_tobias.ipynb
victorcroisfelt/aau-cs224w-ml-with-graphs
1) PyTorch Geometric (Datasets and Data) PyTorch Geometric has two classes for storing and/or transforming graphs into tensor format. One is `torch_geometric.datasets`, which contains a variety of common graph datasets. Another is `torch_geometric.data`, which provides the data handling of graphs in PyTorch tensors. In this section, we will learn how to use `torch_geometric.datasets` and `torch_geometric.data` together. PyG Datasets The `torch_geometric.datasets` class has many common graph datasets. Here we will explore its usage through one example dataset.
import os from torch_geometric.datasets import TUDataset if 'IS_GRADESCOPE_ENV' not in os.environ: root = './enzymes' name = 'ENZYMES' # The ENZYMES dataset pyg_dataset = TUDataset(root, name) # You will find that there are 600 graphs in this dataset print(pyg_dataset)
Downloading https://www.chrsmrrs.com/graphkerneldatasets/ENZYMES.zip Extracting enzymes\ENZYMES\ENZYMES.zip Processing...
Unlicense
Colab 2/CS224W - Colab 2_tobias.ipynb
victorcroisfelt/aau-cs224w-ml-with-graphs
Question 1: What is the number of classes and number of features in the ENZYMES dataset? (5 points)
def get_num_classes(pyg_dataset): # TODO: Implement a function that takes a PyG dataset object # and returns the number of classes for that dataset. num_classes = 0 ############# Your code here ############ ## (~1 line of code) ## Note ## 1. Colab autocomplete functionality might be useful. num_classes = pyg_dataset.num_classes ######################################### return num_classes def get_num_features(pyg_dataset): # TODO: Implement a function that takes a PyG dataset object # and returns the number of features for that dataset. num_features = 0 ############# Your code here ############ ## (~1 line of code) ## Note ## 1. Colab autocomplete functionality might be useful. num_features = pyg_dataset.num_features ######################################### return num_features if 'IS_GRADESCOPE_ENV' not in os.environ: num_classes = get_num_classes(pyg_dataset) num_features = get_num_features(pyg_dataset) print("{} dataset has {} classes".format(name, num_classes)) print("{} dataset has {} features".format(name, num_features))
ENZYMES dataset has 6 classes ENZYMES dataset has 3 features
Unlicense
Colab 2/CS224W - Colab 2_tobias.ipynb
victorcroisfelt/aau-cs224w-ml-with-graphs
PyG Data Each PyG dataset stores a list of `torch_geometric.data.Data` objects, where each `torch_geometric.data.Data` object represents a graph. We can easily get the `Data` object by indexing into the dataset. For more information such as what is stored in the `Data` object, please refer to the [documentation](https://pytorch-geometric.readthedocs.io/en/latest/modules/data.html#torch_geometric.data.Data). Question 2: What is the label of the graph with index 100 in the ENZYMES dataset? (5 points)
def get_graph_class(pyg_dataset, idx): # TODO: Implement a function that takes a PyG dataset object, # an index of a graph within the dataset, and returns the class/label # of the graph (as an integer). label = -1 ############# Your code here ############ ## (~1 line of code) graph = pyg_dataset[idx] label = int(graph.y) ######################################### return label # Here pyg_dataset is a dataset for graph classification if 'IS_GRADESCOPE_ENV' not in os.environ: graph_0 = pyg_dataset[0] print(graph_0) idx = 100 label = get_graph_class(pyg_dataset, idx) print('Graph with index {} has label {}'.format(idx, label))
Data(edge_index=[2, 168], x=[37, 3], y=[1]) Graph with index 100 has label 4
Unlicense
Colab 2/CS224W - Colab 2_tobias.ipynb
victorcroisfelt/aau-cs224w-ml-with-graphs
Question 3: How many edges does the graph with index 200 have? (5 points)
def get_graph_num_edges(pyg_dataset, idx): # TODO: Implement a function that takes a PyG dataset object, # the index of a graph in the dataset, and returns the number of # edges in the graph (as an integer). You should not count an edge # twice if the graph is undirected. For example, in an undirected # graph G, if two nodes v and u are connected by an edge, this edge # should only be counted once. num_edges = 0 ############# Your code here ############ ## Note: ## 1. You can't return the data.num_edges directly - WHY NOT? ## 2. We assume the graph is undirected ## 3. Look at the PyG dataset built-in functions ## (~4 lines of code) # Each undirected edge is stored twice in edge_index, so halve the count. # Integer division keeps the result an int, as the docstring requires. num_edges = pyg_dataset[idx].num_edges // 2 num_edges_2 = pyg_dataset[idx]['edge_index'].size()[1] // 2 assert num_edges == num_edges_2 ######################################### return num_edges if 'IS_GRADESCOPE_ENV' not in os.environ: idx = 200 num_edges = get_graph_num_edges(pyg_dataset, idx) print('Graph with index {} has {} edges'.format(idx, num_edges))
_____no_output_____
Unlicense
Colab 2/CS224W - Colab 2_tobias.ipynb
victorcroisfelt/aau-cs224w-ml-with-graphs
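A quick way to see why the edge count must be halved: PyG stores an undirected graph's `edge_index` in COO format with every edge listed once per direction. A pure-Python sketch of the same bookkeeping (no PyG involved; the toy `edge_index` below is made up for illustration):

```python
# An undirected graph stored in COO form lists every edge twice,
# once per direction, so the true edge count is half the stored pairs.
edge_index = [
    (0, 1), (1, 0),   # edge {0, 1} stored in both directions
    (1, 2), (2, 1),   # edge {1, 2}
    (0, 2), (2, 0),   # edge {0, 2}
]

def count_undirected_edges(edge_index):
    """Count unique undirected edges from a doubled COO edge list."""
    return len({frozenset(e) for e in edge_index})

assert count_undirected_edges(edge_index) == len(edge_index) // 2  # 3 edges
```

Deduplicating with `frozenset` and dividing the raw length by two agree, which is exactly the invariant the `assert` in `get_graph_num_edges` checks.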
2) Open Graph Benchmark (OGB) The Open Graph Benchmark (OGB) is a collection of realistic, large-scale, and diverse benchmark datasets for machine learning on graphs. Its datasets are automatically downloaded, processed, and split using the OGB Data Loader. Model performance can then be evaluated by using the OGB Evaluator in a unified manner. Dataset and Data OGB also supports PyG dataset and data classes. Here we take a look at the `ogbn-arxiv` dataset.
import torch_geometric.transforms as T from ogb.nodeproppred import PygNodePropPredDataset if 'IS_GRADESCOPE_ENV' not in os.environ: dataset_name = 'ogbn-arxiv' # Load the dataset and transform it to sparse tensor dataset = PygNodePropPredDataset(name=dataset_name, transform=T.ToSparseTensor()) print('The {} dataset has {} graph'.format(dataset_name, len(dataset))) # Extract the graph data = dataset[0] print(data)
Downloading http://snap.stanford.edu/ogb/data/nodeproppred/arxiv.zip
Unlicense
Colab 2/CS224W - Colab 2_tobias.ipynb
victorcroisfelt/aau-cs224w-ml-with-graphs
Question 4: How many features are in the ogbn-arxiv graph? (5 points)
def graph_num_features(data): # TODO: Implement a function that takes a PyG data object, # and returns the number of features in the graph (as an integer). num_features = 0 ############# Your code here ############ ## (~1 line of code) num_features = data.num_features ######################################### return num_features if 'IS_GRADESCOPE_ENV' not in os.environ: num_features = graph_num_features(data) print('The graph has {} features'.format(num_features))
The graph has 128 features
Unlicense
Colab 2/CS224W - Colab 2_tobias.ipynb
victorcroisfelt/aau-cs224w-ml-with-graphs
3) GNN: Node Property Prediction In this section we will build our first graph neural network using PyTorch Geometric. Then we will apply it to the task of node property prediction (node classification). Specifically, we will use GCN as the foundation for our graph neural network ([Kipf et al. (2017)](https://arxiv.org/pdf/1609.02907.pdf)). To do so, we will work with PyG's built-in `GCNConv` layer. Setup
import torch import pandas as pd import torch.nn.functional as F print(torch.__version__) # The PyG built-in GCNConv from torch_geometric.nn import GCNConv import torch_geometric.transforms as T from ogb.nodeproppred import PygNodePropPredDataset, Evaluator
1.10.0
Unlicense
Colab 2/CS224W - Colab 2_tobias.ipynb
victorcroisfelt/aau-cs224w-ml-with-graphs
Load and Preprocess the Dataset
if 'IS_GRADESCOPE_ENV' not in os.environ: dataset_name = 'ogbn-arxiv' dataset = PygNodePropPredDataset(name=dataset_name, transform=T.ToSparseTensor()) data = dataset[0] # Make the adjacency matrix symmetric data.adj_t = data.adj_t.to_symmetric() device = 'cpu' # If you use GPU, the device should be cuda print('Device: {}'.format(device)) data = data.to(device) split_idx = dataset.get_idx_split() train_idx = split_idx['train'].to(device) print(graph_num_features(data))
Device: cpu 128
Unlicense
Colab 2/CS224W - Colab 2_tobias.ipynb
victorcroisfelt/aau-cs224w-ml-with-graphs
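The call `data.adj_t.to_symmetric()` above makes the citation graph undirected by adding each edge's reverse direction. A minimal pure-Python analog of that operation on an edge set (illustrative only — not the actual sparse-tensor implementation, and the toy edges are made up):

```python
# Illustrative analog of to_symmetric(): union a directed edge set
# with its reverse so every arc has a matching reverse arc.
def symmetrize(edges):
    """Return the edge set closed under direction reversal."""
    return set(edges) | {(v, u) for (u, v) in edges}

citations = {(0, 1), (0, 2), (2, 1)}   # arxiv-style directed citations
undirected = symmetrize(citations)

assert (1, 0) in undirected and (1, 2) in undirected and (2, 0) in undirected
```

After symmetrization every citation can be traversed in both directions, which is what lets the GCN propagate messages from cited papers back to citing ones.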
GCN Model Now we will implement our GCN model! Please follow the figure below to implement the `forward` function. ![test](https://drive.google.com/uc?id=128AuYAXNXGg7PIhJJ7e420DoPWKb-RtL)
class GCN(torch.nn.Module): def __init__(self, input_dim, hidden_dim, output_dim, num_layers, dropout, return_embeds=False): # TODO: Implement a function that initializes self.convs, # self.bns, and self.softmax. super(GCN, self).__init__() # A list of GCNConv layers self.convs = None # A list of 1D batch normalization layers self.bns = None # The log softmax layer self.softmax = None ############# Your code here ############ ## Note: ## 1. You should use torch.nn.ModuleList for self.convs and self.bns ## 2. self.convs has num_layers GCNConv layers ## 3. self.bns has num_layers - 1 BatchNorm1d layers ## 4. You should use torch.nn.LogSoftmax for self.softmax ## 5. The parameters you can set for GCNConv include 'in_channels' and ## 'out_channels'. For more information please refer to the documentation: ## https://pytorch-geometric.readthedocs.io/en/latest/modules/nn.html#torch_geometric.nn.conv.GCNConv ## 6. The only parameter you need to set for BatchNorm1d is 'num_features' ## For more information please refer to the documentation: ## https://pytorch.org/docs/stable/generated/torch.nn.BatchNorm1d.html ## (~10 lines of code) self.num_layers = num_layers self.convs = torch.nn.ModuleList() # convolutional layers self.bns = torch.nn.ModuleList() # batch normalization layers in_size = [input_dim] + (num_layers - 1)*[hidden_dim] # [input_dim, hidden_dim, ..., hidden_dim] out_size = (num_layers - 1)*[hidden_dim] + [output_dim] # [hidden_dim, ... 
, hidden_dim, output_dim] for i in range(num_layers): self.convs.append(GCNConv(in_channels = in_size[i], out_channels = out_size[i])) if i < num_layers - 1: self.bns.append(torch.nn.BatchNorm1d(out_size[i])) self.softmax = torch.nn.LogSoftmax(1) # along feature vectors ######################################### # Probability of an element getting zeroed self.dropout = dropout # Skip classification layer and return node embeddings self.return_embeds = return_embeds def reset_parameters(self): for conv in self.convs: conv.reset_parameters() for bn in self.bns: bn.reset_parameters() def forward(self, x, adj_t): # TODO: Implement a function that takes the feature tensor x and # edge_index tensor adj_t and returns the output tensor as # shown in the figure. out = None ############# Your code here ############ ## Note: ## 1. Construct the network as shown in the figure ## 2. torch.nn.functional.relu and torch.nn.functional.dropout are useful ## For more information please refer to the documentation: ## https://pytorch.org/docs/stable/nn.functional.html ## 3. Don't forget to set F.dropout training to self.training ## 4. If return_embeds is True, then skip the last softmax layer ## (~7 lines of code) for i in range(self.num_layers): x = self.convs[i](x, adj_t) # GCN convolutional if i < self.num_layers - 1: x = self.bns[i](x) x = F.relu(x) x = F.dropout(x, p = self.dropout, training = self.training) # last layer if self.return_embeds: out = x if not self.return_embeds: out = self.softmax(x) ######################################### return out def train(model, data, train_idx, optimizer, loss_fn): # TODO: Implement a function that trains the model by # using the given optimizer and loss_fn. model.train() loss = 0 ############# Your code here ############ ## Note: ## 1. Zero grad the optimizer ## 2. Feed the data into the model ## 3. Slice the model output and label by train_idx ## 4. 
Feed the sliced output and label to loss_fn ## (~4 lines of code) optimizer.zero_grad() output = model(data['x'], data['adj_t'])[train_idx] label = (data.y.to(device)[train_idx]).flatten() loss = loss_fn(output,label) ######################################### loss.backward() optimizer.step() return loss.item() # Test function here @torch.no_grad() def test(model, data, split_idx, evaluator, save_model_results=False): # TODO: Implement a function that tests the model by # using the given split_idx and evaluator. model.eval() # The output of model on all data out = None ############# Your code here ############ ## (~1 line of code) ## Note: ## 1. No index slicing here out = model(data['x'], data['adj_t']) ######################################### y_pred = out.argmax(dim=-1, keepdim=True) train_acc = evaluator.eval({ 'y_true': data.y[split_idx['train']], 'y_pred': y_pred[split_idx['train']], })['acc'] valid_acc = evaluator.eval({ 'y_true': data.y[split_idx['valid']], 'y_pred': y_pred[split_idx['valid']], })['acc'] test_acc = evaluator.eval({ 'y_true': data.y[split_idx['test']], 'y_pred': y_pred[split_idx['test']], })['acc'] if save_model_results: print ("Saving Model Predictions") data = {} data['y_pred'] = y_pred.view(-1).cpu().detach().numpy() df = pd.DataFrame(data=data) # Save locally as csv df.to_csv('ogbn-arxiv_node.csv', sep=',', index=False) return train_acc, valid_acc, test_acc # Please do not change the args if 'IS_GRADESCOPE_ENV' not in os.environ: args = { 'device': device, 'num_layers': 3, 'hidden_dim': 256, 'dropout': 0.5, 'lr': 0.01, 'epochs': 100, } args if 'IS_GRADESCOPE_ENV' not in os.environ: model = GCN(data.num_features, args['hidden_dim'], dataset.num_classes, args['num_layers'], args['dropout']).to(device) evaluator = Evaluator(name='ogbn-arxiv') import copy if 'IS_GRADESCOPE_ENV' not in os.environ: # reset the parameters to initial random value model.reset_parameters() optimizer = torch.optim.Adam(model.parameters(), lr=args['lr']) loss_fn = 
F.nll_loss best_model = None best_valid_acc = 0 for epoch in range(1, 1 + args["epochs"]): loss = train(model, data, train_idx, optimizer, loss_fn) result = test(model, data, split_idx, evaluator) train_acc, valid_acc, test_acc = result if valid_acc > best_valid_acc: best_valid_acc = valid_acc best_model = copy.deepcopy(model) print(f'Epoch: {epoch:02d}, ' f'Loss: {loss:.4f}, ' f'Train: {100 * train_acc:.2f}%, ' f'Valid: {100 * valid_acc:.2f}% ' f'Test: {100 * test_acc:.2f}%')
Epoch: 01, Loss: 4.0669, Train: 19.65%, Valid: 26.05% Test: 23.55% Epoch: 02, Loss: 2.3643, Train: 21.23%, Valid: 20.62% Test: 26.14% Epoch: 03, Loss: 1.9636, Train: 38.83%, Valid: 39.31% Test: 43.51% Epoch: 04, Loss: 1.7614, Train: 45.58%, Valid: 46.42% Test: 43.25% Epoch: 05, Loss: 1.6479, Train: 44.97%, Valid: 43.31% Test: 41.47% Epoch: 06, Loss: 1.5548, Train: 41.26%, Valid: 37.87% Test: 38.38% Epoch: 07, Loss: 1.4922, Train: 38.69%, Valid: 34.45% Test: 36.02% Epoch: 08, Loss: 1.4355, Train: 38.75%, Valid: 33.85% Test: 35.05% Epoch: 09, Loss: 1.3906, Train: 40.15%, Valid: 35.92% Test: 37.75% Epoch: 10, Loss: 1.3588, Train: 41.95%, Valid: 39.16% Test: 41.82% Epoch: 11, Loss: 1.3255, Train: 44.21%, Valid: 41.95% Test: 44.63% Epoch: 12, Loss: 1.3049, Train: 47.28%, Valid: 44.72% Test: 46.91% Epoch: 13, Loss: 1.2808, Train: 50.88%, Valid: 48.63% Test: 49.80% Epoch: 14, Loss: 1.2598, Train: 53.35%, Valid: 51.80% Test: 52.59% Epoch: 15, Loss: 1.2450, Train: 56.01%, Valid: 55.26% Test: 56.23% Epoch: 16, Loss: 1.2245, Train: 58.28%, Valid: 58.04% Test: 58.90% Epoch: 17, Loss: 1.2115, Train: 59.59%, Valid: 59.43% Test: 60.01% Epoch: 18, Loss: 1.1933, Train: 60.01%, Valid: 59.97% Test: 60.65% Epoch: 19, Loss: 1.1832, Train: 60.30%, Valid: 60.00% Test: 61.01% Epoch: 20, Loss: 1.1701, Train: 60.35%, Valid: 59.16% Test: 60.97% Epoch: 21, Loss: 1.1615, Train: 61.14%, Valid: 59.69% Test: 61.31% Epoch: 22, Loss: 1.1469, Train: 62.09%, Valid: 60.61% Test: 61.84% Epoch: 23, Loss: 1.1390, Train: 62.83%, Valid: 61.29% Test: 62.25% Epoch: 24, Loss: 1.1310, Train: 63.19%, Valid: 61.81% Test: 62.68% Epoch: 25, Loss: 1.1229, Train: 63.38%, Valid: 62.14% Test: 62.86% Epoch: 26, Loss: 1.1150, Train: 63.84%, Valid: 62.45% Test: 63.11% Epoch: 27, Loss: 1.1105, Train: 64.55%, Valid: 63.18% Test: 63.62% Epoch: 28, Loss: 1.1051, Train: 65.49%, Valid: 64.38% Test: 64.56% Epoch: 29, Loss: 1.0975, Train: 66.58%, Valid: 65.90% Test: 66.08% Epoch: 30, Loss: 1.0912, Train: 67.25%, Valid: 66.93% 
Test: 67.32% Epoch: 31, Loss: 1.0838, Train: 67.47%, Valid: 67.48% Test: 67.89% Epoch: 32, Loss: 1.0811, Train: 67.64%, Valid: 67.63% Test: 68.16% Epoch: 33, Loss: 1.0747, Train: 67.93%, Valid: 67.66% Test: 68.23% Epoch: 34, Loss: 1.0667, Train: 68.22%, Valid: 67.94% Test: 68.32% Epoch: 35, Loss: 1.0592, Train: 68.48%, Valid: 68.26% Test: 68.44% Epoch: 36, Loss: 1.0587, Train: 68.80%, Valid: 68.55% Test: 68.72% Epoch: 37, Loss: 1.0526, Train: 69.13%, Valid: 68.97% Test: 69.01% Epoch: 38, Loss: 1.0482, Train: 69.65%, Valid: 69.53% Test: 69.22% Epoch: 39, Loss: 1.0459, Train: 70.07%, Valid: 69.81% Test: 69.19% Epoch: 40, Loss: 1.0429, Train: 70.31%, Valid: 69.79% Test: 68.95% Epoch: 41, Loss: 1.0346, Train: 70.46%, Valid: 69.87% Test: 69.21% Epoch: 42, Loss: 1.0320, Train: 70.56%, Valid: 70.19% Test: 69.51% Epoch: 43, Loss: 1.0282, Train: 70.75%, Valid: 70.35% Test: 69.91% Epoch: 44, Loss: 1.0241, Train: 71.00%, Valid: 70.51% Test: 70.03% Epoch: 45, Loss: 1.0210, Train: 71.16%, Valid: 70.68% Test: 70.12% Epoch: 46, Loss: 1.0168, Train: 71.18%, Valid: 70.66% Test: 70.21% Epoch: 47, Loss: 1.0149, Train: 71.14%, Valid: 70.61% Test: 70.16% Epoch: 48, Loss: 1.0110, Train: 71.16%, Valid: 70.60% Test: 69.92% Epoch: 49, Loss: 1.0082, Train: 71.31%, Valid: 70.49% Test: 69.56% Epoch: 50, Loss: 1.0033, Train: 71.48%, Valid: 70.29% Test: 69.13% Epoch: 51, Loss: 0.9980, Train: 71.61%, Valid: 70.31% Test: 69.33% Epoch: 52, Loss: 1.0007, Train: 71.76%, Valid: 70.52% Test: 69.63% Epoch: 53, Loss: 0.9961, Train: 71.80%, Valid: 70.70% Test: 69.87% Epoch: 54, Loss: 0.9917, Train: 71.92%, Valid: 70.72% Test: 69.76% Epoch: 55, Loss: 0.9929, Train: 72.02%, Valid: 70.76% Test: 69.62% Epoch: 56, Loss: 0.9883, Train: 71.97%, Valid: 70.97% Test: 70.09% Epoch: 57, Loss: 0.9884, Train: 71.92%, Valid: 71.08% Test: 70.27% Epoch: 58, Loss: 0.9825, Train: 71.98%, Valid: 70.94% Test: 69.95% Epoch: 59, Loss: 0.9790, Train: 72.09%, Valid: 70.51% Test: 69.01% Epoch: 60, Loss: 0.9784, Train: 72.03%, 
Valid: 70.01% Test: 67.76% Epoch: 61, Loss: 0.9755, Train: 72.11%, Valid: 70.22% Test: 68.10% Epoch: 62, Loss: 0.9731, Train: 72.19%, Valid: 70.74% Test: 69.52% Epoch: 63, Loss: 0.9744, Train: 72.29%, Valid: 70.82% Test: 69.93% Epoch: 64, Loss: 0.9696, Train: 72.34%, Valid: 70.77% Test: 69.60% Epoch: 65, Loss: 0.9667, Train: 72.44%, Valid: 70.56% Test: 69.21% Epoch: 66, Loss: 0.9670, Train: 72.57%, Valid: 71.00% Test: 69.74% Epoch: 67, Loss: 0.9659, Train: 72.65%, Valid: 71.29% Test: 70.09% Epoch: 68, Loss: 0.9591, Train: 72.76%, Valid: 71.47% Test: 70.33% Epoch: 69, Loss: 0.9604, Train: 72.74%, Valid: 71.57% Test: 70.46% Epoch: 70, Loss: 0.9569, Train: 72.81%, Valid: 71.43% Test: 70.59% Epoch: 71, Loss: 0.9561, Train: 72.76%, Valid: 71.34% Test: 70.63% Epoch: 72, Loss: 0.9527, Train: 72.90%, Valid: 71.48% Test: 70.74% Epoch: 73, Loss: 0.9495, Train: 72.98%, Valid: 71.46% Test: 70.35% Epoch: 74, Loss: 0.9488, Train: 72.92%, Valid: 71.26% Test: 69.92% Epoch: 75, Loss: 0.9486, Train: 72.98%, Valid: 71.55% Test: 70.66% Epoch: 76, Loss: 0.9453, Train: 72.95%, Valid: 71.54% Test: 71.07% Epoch: 77, Loss: 0.9422, Train: 73.00%, Valid: 71.32% Test: 70.65% Epoch: 78, Loss: 0.9434, Train: 72.98%, Valid: 71.05% Test: 69.73% Epoch: 79, Loss: 0.9423, Train: 72.86%, Valid: 70.51% Test: 68.76% Epoch: 80, Loss: 0.9423, Train: 73.12%, Valid: 71.00% Test: 70.03% Epoch: 81, Loss: 0.9396, Train: 73.23%, Valid: 71.70% Test: 71.04% Epoch: 82, Loss: 0.9378, Train: 73.27%, Valid: 71.73% Test: 70.89% Epoch: 83, Loss: 0.9375, Train: 73.27%, Valid: 71.44% Test: 70.31% Epoch: 84, Loss: 0.9313, Train: 73.33%, Valid: 71.40% Test: 70.19% Epoch: 85, Loss: 0.9337, Train: 73.43%, Valid: 71.55% Test: 70.41% Epoch: 86, Loss: 0.9318, Train: 73.49%, Valid: 71.44% Test: 70.28% Epoch: 87, Loss: 0.9307, Train: 73.43%, Valid: 71.56% Test: 70.45% Epoch: 88, Loss: 0.9281, Train: 73.28%, Valid: 71.70% Test: 71.24% Epoch: 89, Loss: 0.9269, Train: 73.23%, Valid: 71.87% Test: 71.41% Epoch: 90, Loss: 0.9253, 
Train: 73.47%, Valid: 71.74% Test: 70.77% Epoch: 91, Loss: 0.9198, Train: 73.59%, Valid: 71.60% Test: 70.51% Epoch: 92, Loss: 0.9225, Train: 73.62%, Valid: 71.79% Test: 71.17% Epoch: 93, Loss: 0.9195, Train: 73.60%, Valid: 71.84% Test: 71.16% Epoch: 94, Loss: 0.9202, Train: 73.74%, Valid: 71.45% Test: 70.24% Epoch: 95, Loss: 0.9155, Train: 73.81%, Valid: 71.58% Test: 70.70% Epoch: 96, Loss: 0.9169, Train: 73.88%, Valid: 71.90% Test: 71.29% Epoch: 97, Loss: 0.9154, Train: 73.99%, Valid: 71.77% Test: 70.85% Epoch: 98, Loss: 0.9128, Train: 73.94%, Valid: 71.38% Test: 70.30% Epoch: 99, Loss: 0.9101, Train: 73.67%, Valid: 71.29% Test: 70.26% Epoch: 100, Loss: 0.9117, Train: 73.53%, Valid: 71.39% Test: 70.86%
Unlicense
Colab 2/CS224W - Colab 2_tobias.ipynb
victorcroisfelt/aau-cs224w-ml-with-graphs
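The model above ends with `torch.nn.LogSoftmax` and trains with `F.nll_loss`; together these compute the standard cross-entropy loss. A stdlib-only sketch of that equivalence for a single row of logits (the numbers are arbitrary, and no torch is used):

```python
import math

def log_softmax(logits):
    """Numerically stable log-softmax over one row of logits."""
    m = max(logits)
    log_z = m + math.log(sum(math.exp(x - m) for x in logits))
    return [x - log_z for x in logits]

def nll_loss(log_probs, target):
    """Negative log-likelihood of the target class."""
    return -log_probs[target]

logits, target = [2.0, 0.5, -1.0], 0
loss = nll_loss(log_softmax(logits), target)

# Cross-entropy computed directly agrees with log-softmax followed by NLL.
direct = -logits[target] + math.log(sum(math.exp(x) for x in logits))
assert abs(loss - direct) < 1e-12
```

This is why the `return_embeds=True` path skips the softmax: downstream heads that apply their own loss want raw embeddings, not log-probabilities.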
Question 5: What are your `best_model` validation and test accuracies? (20 points) Run the cell below to see the results of your best model and save your model's predictions to a file named *ogbn-arxiv_node.csv*. You can view this file by clicking on the *Folder* icon on the left side panel. As in Colab 1, when you submit your assignment, you will have to download this file and attach it to your submission.
if 'IS_GRADESCOPE_ENV' not in os.environ: best_result = test(best_model, data, split_idx, evaluator, save_model_results=True) train_acc, valid_acc, test_acc = best_result print(f'Best model: ' f'Train: {100 * train_acc:.2f}%, ' f'Valid: {100 * valid_acc:.2f}% ' f'Test: {100 * test_acc:.2f}%')
Saving Model Predictions Best model: Train: 73.88%, Valid: 71.90% Test: 71.29%
Unlicense
Colab 2/CS224W - Colab 2_tobias.ipynb
victorcroisfelt/aau-cs224w-ml-with-graphs
4) GNN: Graph Property Prediction In this section we will create a graph neural network for graph property prediction (graph classification). Load and preprocess the dataset
from ogb.graphproppred import PygGraphPropPredDataset, Evaluator from torch_geometric.data import DataLoader from tqdm.notebook import tqdm if 'IS_GRADESCOPE_ENV' not in os.environ: # Load the dataset dataset = PygGraphPropPredDataset(name='ogbg-molhiv') device = 'cuda' if torch.cuda.is_available() else 'cpu' print('Device: {}'.format(device)) split_idx = dataset.get_idx_split() # Check task type print('Task type: {}'.format(dataset.task_type)) # Load the dataset splits into corresponding dataloaders # We will train the graph classification task on a batch of 32 graphs # Shuffle the order of graphs for training set if 'IS_GRADESCOPE_ENV' not in os.environ: train_loader = DataLoader(dataset[split_idx["train"]], batch_size=32, shuffle=True, num_workers=0) valid_loader = DataLoader(dataset[split_idx["valid"]], batch_size=32, shuffle=False, num_workers=0) test_loader = DataLoader(dataset[split_idx["test"]], batch_size=32, shuffle=False, num_workers=0) if 'IS_GRADESCOPE_ENV' not in os.environ: # Please do not change the args args = { 'device': device, 'num_layers': 5, 'hidden_dim': 256, 'dropout': 0.5, 'lr': 0.001, 'epochs': 30, } args
_____no_output_____
Unlicense
Colab 2/CS224W - Colab 2_tobias.ipynb
victorcroisfelt/aau-cs224w-ml-with-graphs
Graph Prediction Model Graph Mini-Batching Before diving into the actual model, we introduce the concept of mini-batching with graphs. In order to parallelize the processing of a mini-batch of graphs, PyG combines the graphs into a single disconnected graph data object (*torch_geometric.data.Batch*). *torch_geometric.data.Batch* inherits from *torch_geometric.data.Data* (introduced earlier) and contains an additional attribute called `batch`. The `batch` attribute is a vector mapping each node to the index of its corresponding graph within the mini-batch: batch = [0, ..., 0, 1, ..., n - 2, n - 1, ..., n - 1]. This attribute is crucial for associating which graph each node belongs to and can be used to e.g. average the node embeddings for each graph individually to compute graph-level embeddings. Implementation Now, we have all of the tools to implement a GCN Graph Prediction model! We will reuse the existing GCN model to generate `node_embeddings` and then use `Global Pooling` over the nodes to create graph-level embeddings that can be used to predict properties for each graph. Remember that the `batch` attribute will be essential for performing Global Pooling over our mini-batch of graphs.
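The per-graph averaging described above can be sketched in pure Python. This is only an illustration of what the `batch` vector does — PyG's `global_mean_pool` performs the same computation with vectorized scatter operations, and the toy embeddings below are made up:

```python
# Global mean pooling driven by a `batch` vector: node i's embedding
# is accumulated into graph batch[i], then divided by that graph's size.
def global_mean_pool(node_embeds, batch, num_graphs):
    dim = len(node_embeds[0])
    sums = [[0.0] * dim for _ in range(num_graphs)]
    counts = [0] * num_graphs
    for emb, g in zip(node_embeds, batch):
        counts[g] += 1
        for d in range(dim):
            sums[g][d] += emb[d]
    return [[s / counts[g] for s in sums[g]] for g in range(num_graphs)]

# Two graphs in one mini-batch: nodes 0-1 belong to graph 0, node 2 to graph 1.
node_embeds = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]
batch = [0, 0, 1]

assert global_mean_pool(node_embeds, batch, 2) == [[2.0, 3.0], [5.0, 6.0]]
```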
from ogb.graphproppred.mol_encoder import AtomEncoder from torch_geometric.nn import global_add_pool, global_mean_pool ### GCN to predict graph property class GCN_Graph(torch.nn.Module): def __init__(self, hidden_dim, output_dim, num_layers, dropout): super(GCN_Graph, self).__init__() # Load encoders for Atoms in molecule graphs self.node_encoder = AtomEncoder(hidden_dim) # Node embedding model # Note that the input_dim and output_dim are set to hidden_dim self.gnn_node = GCN(hidden_dim, hidden_dim, hidden_dim, num_layers, dropout, return_embeds=True) self.pool = None ############# Your code here ############ ## Note: ## 1. Initialize self.pool as a global mean pooling layer ## For more information please refer to the documentation: ## https://pytorch-geometric.readthedocs.io/en/latest/modules/nn.html#global-pooling-layers self.pool = global_mean_pool # global mean pooling over the nodes of each graph ######################################### # Output layer self.linear = torch.nn.Linear(hidden_dim, output_dim) def reset_parameters(self): self.gnn_node.reset_parameters() self.linear.reset_parameters() def forward(self, batched_data): # TODO: Implement a function that takes as input a # mini-batch of graphs (torch_geometric.data.Batch) and # returns the predicted graph property for each graph. # # NOTE: Since we are predicting graph level properties, # your output will be a tensor with dimension equaling # the number of graphs in the mini-batch # Extract important attributes of our mini-batch x, edge_index, batch = batched_data.x, batched_data.edge_index, batched_data.batch embed = self.node_encoder(x) out = None ############# Your code here ############ ## Note: ## 1. Construct node embeddings using existing GCN model ## 2. Use the global pooling layer to aggregate features for each individual graph ## For more information please refer to the documentation: ## https://pytorch-geometric.readthedocs.io/en/latest/modules/nn.html#global-pooling-layers ## 3. Use a linear layer to predict each graph's property ## (~3 lines of code) node_embed = self.gnn_node(embed, edge_index) # node embeddings graph_embed = self.pool(node_embed, batch) # one embedding per graph out = self.linear(graph_embed) # predicted property per graph ######################################### return out def train(model, device, data_loader, optimizer, loss_fn): # TODO: Implement a function that trains your model by # using the given optimizer and loss_fn. model.train() loss = 0 for step, batch in enumerate(tqdm(data_loader, desc="Iteration")): batch = batch.to(device) if batch.x.shape[0] == 1 or batch.batch[-1] == 0: pass else: ## ignore nan targets (unlabeled) when computing training loss. is_labeled = batch.y == batch.y ############# Your code here ############ ## Note: ## 1. Zero grad the optimizer ## 2. Feed the data into the model ## 3. Use `is_labeled` mask to filter output and labels ## 4. You may need to change the type of label to torch.float32 ## 5. Feed the output and label to the loss_fn ## (~3 lines of code) optimizer.zero_grad() out = model(batch) loss = loss_fn(out[is_labeled], batch.y[is_labeled].type(torch.float32)) ######################################### loss.backward() optimizer.step() return loss.item() # The evaluation function def eval(model, device, loader, evaluator, save_model_results=False, save_file=None): model.eval() y_true = [] y_pred = [] for step, batch in enumerate(tqdm(loader, desc="Iteration")): batch = batch.to(device) if batch.x.shape[0] == 1: pass else: with torch.no_grad(): pred = model(batch) y_true.append(batch.y.view(pred.shape).detach().cpu()) y_pred.append(pred.detach().cpu()) y_true = torch.cat(y_true, dim = 0).numpy() y_pred = torch.cat(y_pred, dim = 0).numpy() input_dict = {"y_true": y_true, "y_pred": y_pred} if save_model_results: print ("Saving Model Predictions") # Create a pandas dataframe with two columns # y_pred | y_true data = {} data['y_pred'] = y_pred.reshape(-1) data['y_true'] = y_true.reshape(-1) df = pd.DataFrame(data=data) # Save to csv df.to_csv('ogbg-molhiv_graph_' + save_file + '.csv', sep=',', index=False) return evaluator.eval(input_dict) if 'IS_GRADESCOPE_ENV' not in os.environ: model = GCN_Graph(args['hidden_dim'], dataset.num_tasks, args['num_layers'], args['dropout']).to(device)
evaluator = Evaluator(name='ogbg-molhiv') import copy if 'IS_GRADESCOPE_ENV' not in os.environ: model.reset_parameters() optimizer = torch.optim.Adam(model.parameters(), lr=args['lr']) loss_fn = torch.nn.BCEWithLogitsLoss() best_model = None best_valid_acc = 0 for epoch in range(1, 1 + args["epochs"]): print('Training...') loss = train(model, device, train_loader, optimizer, loss_fn) print('Evaluating...') train_result = eval(model, device, train_loader, evaluator) val_result = eval(model, device, valid_loader, evaluator) test_result = eval(model, device, test_loader, evaluator) train_acc, valid_acc, test_acc = train_result[dataset.eval_metric], val_result[dataset.eval_metric], test_result[dataset.eval_metric] if valid_acc > best_valid_acc: best_valid_acc = valid_acc best_model = copy.deepcopy(model) print(f'Epoch: {epoch:02d}, ' f'Loss: {loss:.4f}, ' f'Train: {100 * train_acc:.2f}%, ' f'Valid: {100 * valid_acc:.2f}% ' f'Test: {100 * test_acc:.2f}%')
_____no_output_____
Unlicense
Colab 2/CS224W - Colab 2_tobias.ipynb
victorcroisfelt/aau-cs224w-ml-with-graphs
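The training loop above pairs `torch.nn.BCEWithLogitsLoss` with the `is_labeled` mask, exploiting the fact that NaN compares unequal to itself. A stdlib-only sketch of both pieces (the logits and targets below are made up; `bce_with_logits` is the standard numerically stable formula, not the torch implementation):

```python
import math

def bce_with_logits(logit, target):
    """Numerically stable binary cross-entropy on a raw logit:
    max(x, 0) - x*y + log(1 + exp(-|x|))."""
    return max(logit, 0.0) - logit * target + math.log1p(math.exp(-abs(logit)))

# ogbg-molhiv labels may be NaN (unlabeled); mask them out the way the
# training loop does with `is_labeled = batch.y == batch.y`.
logits  = [1.2, -0.7, 0.3]
targets = [1.0, float("nan"), 0.0]

is_labeled = [t == t for t in targets]           # NaN != NaN
losses = [bce_with_logits(x, t)
          for x, t, ok in zip(logits, targets, is_labeled) if ok]
mean_loss = sum(losses) / len(losses)
assert len(losses) == 2                          # the NaN target was skipped
```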
Question 6: What are your `best_model` validation and test ROC-AUC scores? (20 points) Run the cell below to see the results of your best model and save your model's predictions over the validation and test datasets. The resulting files are named *ogbg-molhiv_graph_valid.csv* and *ogbg-molhiv_graph_test.csv*. Again, you can view these files by clicking on the *Folder* icon on the left side panel. As in Colab 1, when you submit your assignment, you will have to download these files and attach them to your submission.
if 'IS_GRADESCOPE_ENV' not in os.environ: train_acc = eval(best_model, device, train_loader, evaluator)[dataset.eval_metric] valid_acc = eval(best_model, device, valid_loader, evaluator, save_model_results=True, save_file="valid")[dataset.eval_metric] test_acc = eval(best_model, device, test_loader, evaluator, save_model_results=True, save_file="test")[dataset.eval_metric] print(f'Best model: ' f'Train: {100 * train_acc:.2f}%, ' f'Valid: {100 * valid_acc:.2f}% ' f'Test: {100 * test_acc:.2f}%')
_____no_output_____
Unlicense
Colab 2/CS224W - Colab 2_tobias.ipynb
victorcroisfelt/aau-cs224w-ml-with-graphs
Import Data
def boston_df(sklearn_dataset):
    X = pd.DataFrame(sklearn_dataset.data, columns=sklearn_dataset.feature_names)
    y = pd.DataFrame(sklearn_dataset.target, columns = ['MEDV'])
    return X, y

X, y = boston_df(load_boston())
X.columns
_____no_output_____
MIT
notebooks/00-data/04-boston.ipynb
mixerupper/mltools-fi_cate
Save data
folder_name = 'boston'

try:
    os.mkdir(processed_root(folder_name))
except FileExistsError:
    print("Folder already exists")

X.to_csv(processed_root(f"{folder_name}/X.csv"), index = False)
y.to_csv(processed_root(f"{folder_name}/y.csv"), index = False)
_____no_output_____
MIT
notebooks/00-data/04-boston.ipynb
mixerupper/mltools-fi_cate
Example 1: Detecting an obvious outlier
import numpy as np
from isotree import IsolationForest

### Random data from a standard normal distribution
np.random.seed(1)
n = 100
m = 2
X = np.random.normal(size = (n, m))

### Will now add obvious outlier point (3, 3) to the data
X = np.r_[X, np.array([3, 3]).reshape((1, m))]

### Fit a small isolation forest model
iso = IsolationForest(ntrees = 10, ndim = 2, nthreads = 1)
iso.fit(X)

### Check which row has the highest outlier score
pred = iso.predict(X)
print("Point with highest outlier score: ", X[np.argsort(-pred)[0], ])
Point with highest outlier score: [3. 3.]
BSD-2-Clause
example/isotree_example.ipynb
ankane/isotree-1
Example 2: Plotting outlier and density regions
import numpy as np, pandas as pd
from isotree import IsolationForest
import matplotlib.pyplot as plt
from pylab import rcParams
%matplotlib inline
rcParams['figure.figsize'] = 10, 8

np.random.seed(1)
group1 = pd.DataFrame({
    "x" : np.random.normal(loc=-1, scale=.4, size = 1000),
    "y" : np.random.normal(loc=-1, scale=.2, size = 1000),
})
group2 = pd.DataFrame({
    "x" : np.random.normal(loc=+1, scale=.2, size = 1000),
    "y" : np.random.normal(loc=+1, scale=.4, size = 1000),
})
X = pd.concat([group1, group2], ignore_index=True)

### Now add an obvious outlier which is within the 1d ranges
### (As an interesting test, remove it and see what happens,
### or check how its score changes when using sub-sampling)
X = X.append(pd.DataFrame({"x" : [-1], "y" : [1]}), ignore_index = True)

### Single-variable Isolation Forest
iso_simple = IsolationForest(ndim=1, ntrees=100,
                             penalize_range=False,
                             prob_pick_pooled_gain=0)
iso_simple.fit(X)

### Extended Isolation Forest
iso_ext = IsolationForest(ndim=2, ntrees=100,
                          penalize_range=False,
                          prob_pick_pooled_gain=0)
iso_ext.fit(X)

### SCiForest
iso_sci = IsolationForest(ndim=2, ntrees=100, ntry=10,
                          penalize_range=True,
                          prob_pick_avg_gain=1,
                          prob_pick_pooled_gain=0)
iso_sci.fit(X)

### Fair-Cut Forest
iso_fcf = IsolationForest(ndim=2, ntrees=100,
                          penalize_range=False,
                          prob_pick_avg_gain=0,
                          prob_pick_pooled_gain=1)
iso_fcf.fit(X)

### Plot as a heatmap
pts = np.linspace(-3, 3, 250)
space = np.array( np.meshgrid(pts, pts) ).reshape((2, -1)).T
Z_sim = iso_simple.predict(space)
Z_ext = iso_ext.predict(space)
Z_sci = iso_sci.predict(space)
Z_fcf = iso_fcf.predict(space)
space_index = pd.MultiIndex.from_arrays([space[:, 0], space[:, 1]])

def plot_space(Z, space_index, X):
    df = pd.DataFrame({"z" : Z}, index = space_index)
    df = df.unstack()
    df = df[df.columns.values[::-1]]
    plt.imshow(df, extent = [-3, 3, -3, 3], cmap = 'hot_r')
    plt.scatter(x = X['x'], y = X['y'], alpha = .15, c = 'navy')

plt.suptitle("Outlier and Density Regions", fontsize = 20)
plt.subplot(2, 2, 1)
plot_space(Z_sim, space_index, X)
plt.title("Isolation Forest", fontsize=15)
plt.subplot(2, 2, 2)
plot_space(Z_ext, space_index, X)
plt.title("Extended Isolation Forest", fontsize=15)
plt.subplot(2, 2, 3)
plot_space(Z_sci, space_index, X)
plt.title("SCiForest", fontsize=15)
plt.subplot(2, 2, 4)
plot_space(Z_fcf, space_index, X)
plt.title("Fair-Cut Forest", fontsize=15)
plt.show()
print("(Note that the upper-left corner has an outlier point,\n\
and that there is a slight slide in the axes of the heat colors and the points)")
_____no_output_____
BSD-2-Clause
example/isotree_example.ipynb
ankane/isotree-1
Example 3: calculating pairwise distances
import numpy as np, pandas as pd
from isotree import IsolationForest
from scipy.spatial.distance import cdist

### Generate random multivariate-normal data
np.random.seed(1)
n = 1000
m = 10

### This is a random PSD matrix to use as covariance
S = np.random.normal(size = (m, m))
S = S.T.dot(S)
mu = np.random.normal(size = m, scale = 2)
X = np.random.multivariate_normal(mu, S, n)

### Fitting the model
iso = IsolationForest(prob_pick_avg_gain=0, prob_pick_pooled_gain=0)
iso.fit(X)

### Calculate approximate distance
D_sep = iso.predict_distance(X, square_mat = True)

### Compare against other distances
D_euc = cdist(X, X, metric = "euclidean")
D_cos = cdist(X, X, metric = "cosine")
D_mah = cdist(X, X, metric = "mahalanobis")

### Correlations
print("Correlations between different distance metrics")
pd.DataFrame(
    np.corrcoef([D_sep.reshape(-1), D_euc.reshape(-1), D_cos.reshape(-1), D_mah.reshape(-1)]),
    columns = ['SeparationDepth', 'Euclidean', 'Cosine', 'Mahalanobis'],
    index = ['SeparationDepth', 'Euclidean', 'Cosine', 'Mahalanobis']
)
Correlations between different distance metrics
BSD-2-Clause
example/isotree_example.ipynb
ankane/isotree-1
Example 4: imputing missing values
import numpy as np
from isotree import IsolationForest

### Generate random multivariate-normal data
np.random.seed(1)
n = 1000
m = 5

### This is a random PSD matrix to use as covariance
S = np.random.normal(size = (m, m))
S = S.T.dot(S)
mu = np.random.normal(size = m)
X = np.random.multivariate_normal(mu, S, n)

### Set some values randomly as missing
values_NA = (np.random.random(size = n * m) <= .15).reshape((n, m))
X_na = X.copy()
X_na[values_NA] = np.nan

### Fitting the model
iso = IsolationForest(build_imputer=True, prob_pick_pooled_gain=1, ntry=10)
iso.fit(X_na)

### Impute missing values
X_imputed = iso.transform(X_na)
print("MSE for imputed values w/model: %f\n" % np.mean((X[values_NA] - X_imputed[values_NA])**2))

### Comparison against simple mean imputation
X_means = np.nanmean(X_na, axis = 0)
X_imp_mean = X_na.copy()
for cl in range(m):
    X_imp_mean[np.isnan(X_imp_mean[:,cl]), cl] = X_means[cl]

print("MSE for imputed values w/means: %f\n" % np.mean((X[values_NA] - X_imp_mean[values_NA])**2))
MSE for imputed values w/model: 3.176113 MSE for imputed values w/means: 5.540559
BSD-2-Clause
example/isotree_example.ipynb
ankane/isotree-1
Decision Trees
# Setup

# Common imports
import numpy as np
import os

# to make this notebook's output stable across runs
np.random.seed(42)

# To plot pretty figures
%matplotlib inline
import matplotlib
import matplotlib.pyplot as plt
plt.rcParams['axes.labelsize'] = 14
plt.rcParams['xtick.labelsize'] = 12
plt.rcParams['ytick.labelsize'] = 12

# Where to save the figures
PROJECT_ROOT_DIR = ".."
CHAPTER_ID = "decision_trees"

def image_path(fig_id):
    return os.path.join(PROJECT_ROOT_DIR, "images", CHAPTER_ID, fig_id)

def save_fig(fig_id, tight_layout=True):
    print("Saving figure", fig_id)
    if tight_layout:
        plt.tight_layout()
    plt.savefig(image_path(fig_id) + ".png", format='png', dpi=300)
_____no_output_____
Apache-2.0
06_blog/06_blog_01.ipynb
yunshuipiao/hands-on-ml-with-sklearn-tf-python3
Training and Visualization
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier

iris = load_iris()
X = iris.data[:, 2:] # petal length and width
y = iris.target

tree_clf = DecisionTreeClassifier(max_depth=2, random_state=42)
tree_clf.fit(X, y)

from sklearn.tree import export_graphviz

export_graphviz(tree_clf,
                out_file=image_path("iris_tree.dot"),
                feature_names=iris.feature_names[2:],
                class_names = iris.target_names,
                rounded=True,
                filled=True,
                )
_____no_output_____
Apache-2.0
06_blog/06_blog_01.ipynb
yunshuipiao/hands-on-ml-with-sklearn-tf-python3
From the dot file generated above, you can produce an image with the command `$ dot -Tpng iris_tree.dot -o iris_tree.png`: ![iris_tree.png](attachment:iris_tree.png) The figure shows how the tree makes predictions. Suppose you want to classify an iris flower: start at the root node. First look at the petal length. If it is less than 2.45 cm, move down to the left child node (depth 1, left). In this case it is a leaf node, so no further questions are asked and the tree directly predicts Setosa. If the petal length is greater than 2.45 cm, move down to the right child node. Since it is not a leaf, the tree asks another question: if the petal width is less than 1.75 cm, the flower is most likely a Versicolor (depth 2, left); otherwise it is likely a Virginica (depth 2, right). The node attributes mean the following: `samples` counts how many training instances the node applies to. For example, 100 instances have a petal length greater than 2.45 cm (depth 1, right), and 54 of those have a petal width less than 1.75 cm. `value` counts how many instances of each class the node applies to. The `gini` score measures a node's impurity: it equals 0 when all the training instances it applies to belong to the same class, as in the Setosa leaf. The gini score of the i-th node is given by $G_i = 1 - \sum_{k=1}^{n} p_{i,k}^{2}$ where $p_{i,k}$ is the ratio of class-k instances among the node's training instances. For example, the depth-2 left node has gini $1-(0/54)^{2}-(49/54)^{2}-(5/54)^{2} \approx 0.168$. Note: sklearn uses the CART algorithm, which produces only binary trees, whereas algorithms such as ID3 can produce trees whose nodes have more than two children.
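The gini computation above can be checked with a few lines of Python. This is a standalone sketch; the class counts `[0, 49, 5]` are the depth-2 left node's `value` from the tree diagram:

```python
def gini(counts):
    """Gini impurity of a node, given per-class instance counts."""
    m = sum(counts)                               # total instances at the node
    return 1.0 - sum((c / m) ** 2 for c in counts)

print(gini([0, 49, 5]))   # depth-2 left node, approximately 0.168
print(gini([50, 0, 0]))   # pure Setosa leaf: exactly 0.0
```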
from matplotlib.colors import ListedColormap

def plot_decision_boundary(clf, X, y, axes=[0, 7.5, 0, 3], iris=True, legend=False, plot_training=True):
    x1s = np.linspace(axes[0], axes[1], 100)
    x2s = np.linspace(axes[2], axes[3], 100)
    x1, x2 = np.meshgrid(x1s, x2s)
    X_new = np.c_[x1.ravel(), x2.ravel()]
    y_pred = clf.predict(X_new).reshape(x1.shape)
    custom_cmap = ListedColormap(['#fafab0','#9898ff','#a0faa0'])
    plt.contourf(x1, x2, y_pred, alpha=0.3, cmap=custom_cmap, linewidth=10)
    if not iris:
        custom_cmap2 = ListedColormap(['#7d7d58','#4c4c7f','#507d50'])
        plt.contour(x1, x2, y_pred, cmap=custom_cmap2, alpha=0.8)
    if plot_training:
        plt.plot(X[:, 0][y==0], X[:, 1][y==0], "yo", label="Iris-Setosa")
        plt.plot(X[:, 0][y==1], X[:, 1][y==1], "bs", label="Iris-Versicolor")
        plt.plot(X[:, 0][y==2], X[:, 1][y==2], "g^", label="Iris-Virginica")
        plt.axis(axes)
    if iris:
        plt.xlabel("Petal length", fontsize=14)
        plt.ylabel("Petal width", fontsize=14)
    else:
        plt.xlabel(r"$x_1$", fontsize=18)
        plt.ylabel(r"$x_2$", fontsize=18, rotation=0)
    if legend:
        plt.legend(loc="lower right", fontsize=14)

plt.figure(figsize=(8, 4))
plot_decision_boundary(tree_clf, X, y)
plt.plot([2.45, 2.45], [0, 3], "k-", linewidth=2)
plt.plot([2.45, 7.5], [1.75, 1.75], "k--", linewidth=2)
plt.plot([4.95, 4.95], [0, 1.75], "k:", linewidth=2)
plt.plot([4.85, 4.85], [1.75, 3], "k:", linewidth=2)
plt.text(1.40, 1.0, "Depth=0", fontsize=15)
plt.text(3.2, 1.80, "Depth=1", fontsize=13)
plt.text(4.05, 0.5, "(Depth=2)", fontsize=11)
save_fig("decision_tree_decision_boundaries_plot")
plt.show()
/anaconda3/lib/python3.6/site-packages/matplotlib/contour.py:967: UserWarning: The following kwargs were not used by contour: 'linewidth' s)
Apache-2.0
06_blog/06_blog_01.ipynb
yunshuipiao/hands-on-ml-with-sklearn-tf-python3
The figure above shows this decision tree's decision boundaries. The vertical line corresponds to the root node (depth 0): petal length = 2.45 cm. Since the left region has a gini of 0 (a single class), it is not split further. The right region is impure, so the depth-1 right node splits it at petal width = 1.75 cm. Because max_depth was set to 2, the tree stops there; setting max_depth to 3 would let the two depth-2 nodes each add another decision boundary (shown as dotted lines). Note: the decision process is easy to follow, which is why decision trees are called white-box models. In contrast, random forests and neural networks are generally considered black-box models: they make great predictions, and you can easily check the calculations that they performed, yet it is hard to explain why those predictions were made. Decision trees provide nice, simple classification rules that can even be applied manually if needed. Making predictions and estimating class probabilities
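The class probabilities that `predict_proba` returns in the next cell can be reproduced by hand from a leaf's `value` counts. A sketch, assuming (per the tree diagram above) that a flower with 5 cm long, 1.5 cm wide petals lands in the depth-2 left node with counts `[0, 49, 5]`:

```python
# class counts at the leaf reached by [5, 1.5]: [setosa, versicolor, virginica]
leaf_value = [0, 49, 5]
total = sum(leaf_value)

# each class's probability is its fraction of the leaf's training instances
probs = [c / total for c in leaf_value]

# the predicted class is the one with the highest fraction (versicolor, index 1)
pred = probs.index(max(probs))
```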
tree_clf.predict_proba([[5, 1.5]])
tree_clf.predict([[5, 1.5]])
_____no_output_____
Apache-2.0
06_blog/06_blog_01.ipynb
yunshuipiao/hands-on-ml-with-sklearn-tf-python3
CART: Classification and Regression Trees. Scikit-learn uses the CART algorithm to train (grow) decision trees. The idea is simple: split the training set into two subsets using a single feature k and a threshold $t_k$ (e.g. petal length ≤ 2.45 cm). The important question is how to choose this pair: the algorithm searches for the (k, $t_k$) pair that produces the purest subsets, weighted by their size, by minimizing the following cost function. CART cost function for classification: $J(k, t_k) = \frac{m_{left}}{m}G_{left} + \frac{m_{right}}{m}G_{right}$ ![gini.png](attachment:gini.png) Once it has split the training set in two, it recursively splits the subsets using the same logic. It stops when it reaches the given maximum depth (max_depth), or when it cannot find a split that reduces impurity (the subsets are already pure). A few other hyperparameters control additional stopping conditions (min_samples_split, min_samples_leaf, min_weight_fraction_leaf, max_leaf_nodes). Computational complexity: by default the gini impurity measure is used, but the entropy impurity measure can be used instead. ![gini_2.png](attachment:gini_2.png)
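As an illustration of this search (my own brute-force sketch, not scikit-learn's optimized implementation), the (k, $t_k$) pair minimizing $J$ can be found by trying every feature and every observed threshold:

```python
def gini(labels):
    """Gini impurity of a list of class labels."""
    m = len(labels)
    if m == 0:
        return 0.0
    counts = {}
    for lab in labels:
        counts[lab] = counts.get(lab, 0) + 1
    return 1.0 - sum((c / m) ** 2 for c in counts.values())

def best_split(X, y):
    """Search every (feature k, threshold t_k) pair, minimizing
    J = (m_left/m) * G_left + (m_right/m) * G_right."""
    m = len(y)
    best_k, best_t, best_cost = None, None, float('inf')
    for k in range(len(X[0])):
        for t in sorted({row[k] for row in X}):
            left = [lab for row, lab in zip(X, y) if row[k] <= t]
            right = [lab for row, lab in zip(X, y) if row[k] > t]
            cost = len(left) / m * gini(left) + len(right) / m * gini(right)
            if cost < best_cost:
                best_k, best_t, best_cost = k, t, cost
    return best_k, best_t, best_cost

# Tiny sanity check: feature 0 separates the two classes perfectly at t = 1.5
X = [[1.0, 5.0], [1.5, 4.0], [3.0, 5.0], [3.5, 4.0]]
y = [0, 0, 1, 1]
k, t, cost = best_split(X, y)
```

Real implementations avoid this quadratic scan by sorting each feature once and updating class counts incrementally, but the criterion being minimized is the same.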
not_widest_versicolor = (X[:, 1] != 1.8) | (y==2)
X_tweaked = X[not_widest_versicolor]
y_tweaked = y[not_widest_versicolor]

tree_clf_tweaked = DecisionTreeClassifier(max_depth=2, random_state=40)
tree_clf_tweaked.fit(X_tweaked, y_tweaked)

plt.figure(figsize=(8, 4))
plot_decision_boundary(tree_clf_tweaked, X_tweaked, y_tweaked, legend=False)
plt.plot([0, 7.5], [0.8, 0.8], "k-", linewidth=2)
plt.plot([0, 7.5], [1.75, 1.75], "k-", linewidth=2)
plt.text(1.0, 0.9, "Depth=0", fontsize=15)
plt.text(1.0, 1.80, "Depth=1", fontsize=13)
save_fig("decision_tree_instability_plot")
plt.show()
/anaconda3/lib/python3.6/site-packages/matplotlib/contour.py:967: UserWarning: The following kwargs were not used by contour: 'linewidth' s)
Apache-2.0
06_blog/06_blog_01.ipynb
yunshuipiao/hands-on-ml-with-sklearn-tf-python3
Regularization hyperparameters. To avoid overfitting the training data, as shown below, you need to restrict the decision tree's degrees of freedom during training; this process is called regularization. The max_depth hyperparameter controls how closely the tree fits (it is unlimited by default); reducing max_depth regularizes the model and reduces the risk of overfitting.
from sklearn.datasets import make_moons

Xm, ym = make_moons(n_samples=100, noise=0.25, random_state=53)

deep_tree_clf1 = DecisionTreeClassifier(random_state=42)
deep_tree_clf2 = DecisionTreeClassifier(min_samples_leaf=4, random_state=42)
deep_tree_clf1.fit(Xm, ym)
deep_tree_clf2.fit(Xm, ym)

plt.figure(figsize=(11, 4))
plt.subplot(121)
plot_decision_boundary(deep_tree_clf1, Xm, ym, axes=[-1.5, 2.5, -1, 1.5], iris=False)
plt.title("No restrictions", fontsize=16)
plt.subplot(122)
plot_decision_boundary(deep_tree_clf2, Xm, ym, axes=[-1.5, 2.5, -1, 1.5], iris=False)
plt.title("min_samples_leaf = {}".format(deep_tree_clf2.min_samples_leaf), fontsize=14)
save_fig("min_samples_leaf_plot")
plt.show()
/anaconda3/lib/python3.6/site-packages/matplotlib/contour.py:967: UserWarning: The following kwargs were not used by contour: 'linewidth' s)
Apache-2.0
06_blog/06_blog_01.ipynb
yunshuipiao/hands-on-ml-with-sklearn-tf-python3
DecisionTreeClassifier has several other regularization hyperparameters: min_samples_split is the minimum number of samples a node must have before it can be split; min_samples_leaf is the minimum number of samples a leaf node must have; min_weight_fraction_leaf is the same as min_samples_leaf but expressed as a fraction of the total number of weighted instances; max_leaf_nodes is the maximum number of leaf nodes; and max_features is the maximum number of features evaluated for splitting at each node. Increasing min_* hyperparameters or reducing max_* hyperparameters will regularize the model. Other algorithms first train the tree without restrictions and afterwards delete unnecessary nodes, a process called pruning: a node whose children all provide no statistically significant purity improvement is considered unnecessary, and it and its children are removed.
angle = np.pi / 180 * 20
rotation_matrix = np.array([[np.cos(angle), -np.sin(angle)], [np.sin(angle), np.cos(angle)]])
Xr = X.dot(rotation_matrix)

tree_clf_r = DecisionTreeClassifier(random_state=42)
tree_clf_r.fit(Xr, y)

plt.figure(figsize=(8, 3))
plot_decision_boundary(tree_clf_r, Xr, y, axes=[0.5, 7.5, -1.0, 1], iris=False)
plt.show()
/anaconda3/lib/python3.6/site-packages/matplotlib/contour.py:967: UserWarning: The following kwargs were not used by contour: 'linewidth' s)
Apache-2.0
06_blog/06_blog_01.ipynb
yunshuipiao/hands-on-ml-with-sklearn-tf-python3
Instability. So far, decision trees have a lot going for them: they are easy to understand and interpret, easy to use, versatile, and powerful. But they also have some limitations. First, decision trees favor orthogonal decision boundaries (all splits are perpendicular to an axis), which makes them sensitive to rotations of the training set. As the right plot below shows, after a 45° rotation the tree still classifies well, but is unlikely to generalize. One way to limit this problem is PCA (covered later). More generally, decision trees are very sensitive to small variations in the training data; for example, removing a single instance as in the plot above changes the classification considerably. Random forests limit this instability by averaging predictions over many trees, making them more robust to outliers and small variations.
np.random.seed(6)
Xs = np.random.rand(100, 2) - 0.5
ys = (Xs[:, 0] > 0).astype(np.float32) * 2

angle = np.pi / 4
rotation_matrix = np.array([[np.cos(angle), -np.sin(angle)], [np.sin(angle), np.cos(angle)]])
Xsr = Xs.dot(rotation_matrix)

tree_clf_s = DecisionTreeClassifier(random_state=42)
tree_clf_s.fit(Xs, ys)
tree_clf_sr = DecisionTreeClassifier(random_state=42)
tree_clf_sr.fit(Xsr, ys)

plt.figure(figsize=(11, 4))
plt.subplot(121)
plot_decision_boundary(tree_clf_s, Xs, ys, axes=[-0.7, 0.7, -0.7, 0.7], iris=False)
plt.subplot(122)
plot_decision_boundary(tree_clf_sr, Xsr, ys, axes=[-0.7, 0.7, -0.7, 0.7], iris=False)
save_fig("sensitivity_to_rotation_plot")
plt.show()
/anaconda3/lib/python3.6/site-packages/matplotlib/contour.py:967: UserWarning: The following kwargs were not used by contour: 'linewidth' s)
Apache-2.0
06_blog/06_blog_01.ipynb
yunshuipiao/hands-on-ml-with-sklearn-tf-python3
Regression Trees
import numpy as np

# Noisy quadratic training set
np.random.seed(42)
m = 200
X = np.random.rand(m ,1)
y = 4 * (X - 0.5) ** 2
y = y + np.random.randn(m, 1) / 10

from sklearn.tree import DecisionTreeRegressor

tree_reg = DecisionTreeRegressor(max_depth=2, random_state=42)
tree_reg.fit(X, y)
_____no_output_____
Apache-2.0
06_blog/06_blog_01.ipynb
yunshuipiao/hands-on-ml-with-sklearn-tf-python3
This regression tree has a maximum depth of 2; rendered from the dot file it looks like this: ![regression_tree.png](attachment:regression_tree.png) It is very similar to the classification tree. The main difference is that instead of predicting a class at each node, it predicts a value. For example, to make a prediction for x1 = 0.6, traverse the tree from the root until you reach a leaf node, which predicts value = 0.1106. This prediction is simply the average target value of the 110 training instances associated with that leaf; over those 110 instances it yields a mean squared error (MSE) of 0.0151. Note that the predicted value for each region is always the average target value of the instances in that region; the algorithm splits each region in a way that makes most training instances as close as possible to that predicted value.
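For regression, CART minimizes the weighted MSE of the two subsets instead of gini. A minimal sketch of that criterion (my own illustration, not scikit-learn's code), where each region predicts its mean target value:

```python
def region_mse(ys):
    """MSE of a region when it predicts the region's mean target value."""
    if not ys:
        return 0.0
    mean = sum(ys) / len(ys)
    return sum((v - mean) ** 2 for v in ys) / len(ys)

def regression_cost(y_left, y_right):
    """CART regression cost: J = (m_left/m)*MSE_left + (m_right/m)*MSE_right."""
    m = len(y_left) + len(y_right)
    return len(y_left) / m * region_mse(y_left) + len(y_right) / m * region_mse(y_right)

# A split that groups identical targets together has zero cost,
# while one that leaves them mixed does not.
pure_split = regression_cost([0.2, 0.2], [0.8, 0.8])
mixed_split = regression_cost([0.2, 0.8], [0.2, 0.8])
```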
from sklearn.tree import DecisionTreeRegressor

tree_reg1 = DecisionTreeRegressor(random_state=42, max_depth=2)
tree_reg2 = DecisionTreeRegressor(random_state=42, max_depth=3)
tree_reg1.fit(X, y)
tree_reg2.fit(X, y)

def plot_regression_predictions(tree_reg, X, y, axes=[0, 1, -0.2, 1], ylabel="$y$"):
    x1 = np.linspace(axes[0], axes[1], 500).reshape(-1, 1)
    y_pred = tree_reg.predict(x1)
    plt.axis(axes)
    if ylabel:
        plt.ylabel(ylabel, fontsize=18, rotation=0)
    plt.plot(X, y, "b.")
    plt.plot(x1, y_pred, "r.-", linewidth=2, label=r"$\hat{y}$")

plt.figure(figsize=(11, 4))
plt.subplot(121)
plot_regression_predictions(tree_reg1, X, y)
for split, style in ((0.1973, "k-"), (0.0917, "k--"), (0.7718, "k--")):
    plt.plot([split, split], [-0.2, 1], style, linewidth=2)
plt.text(0.21, 0.65, "Depth=0", fontsize=15)
plt.text(0.01, 0.2, "Depth=1", fontsize=13)
plt.text(0.65, 0.8, "Depth=1", fontsize=13)
plt.legend(loc="upper center", fontsize=18)
plt.title("max_depth=2", fontsize=14)

plt.subplot(122)
plot_regression_predictions(tree_reg2, X, y, ylabel=None)
for split, style in ((0.1973, "k-"), (0.0917, "k--"), (0.7718, "k--")):
    plt.plot([split, split], [-0.2, 1], style, linewidth=2)
for split in (0.0458, 0.1298, 0.2873, 0.9040):
    plt.plot([split, split], [-0.2, 1], "k:", linewidth=1)
plt.text(0.3, 0.5, "Depth=2", fontsize=13)
plt.title("max_depth=3", fontsize=14)

save_fig("tree_regression_plot")
plt.show()

# Export the tree diagram
export_graphviz(
    tree_reg1,
    out_file=image_path("regression_tree.dot"),
    feature_names=["x1"],
    rounded=True,
    filled=True
)

tree_reg1 = DecisionTreeRegressor(random_state=42)
tree_reg2 = DecisionTreeRegressor(random_state=42, min_samples_leaf=10)
tree_reg1.fit(X, y)
tree_reg2.fit(X, y)

x1 = np.linspace(0, 1, 500).reshape(-1, 1)
y_pred1 = tree_reg1.predict(x1)
y_pred2 = tree_reg2.predict(x1)

plt.figure(figsize=(11, 4))

plt.subplot(121)
plt.plot(X, y, "b.")
plt.plot(x1, y_pred1, "r.-", linewidth=2, label=r"$\hat{y}$")
plt.axis([0, 1, -0.2, 1.1])
plt.xlabel("$x_1$", fontsize=18)
plt.ylabel("$y$", fontsize=18, rotation=0)
plt.legend(loc="upper center", fontsize=18)
plt.title("No restrictions", fontsize=14)

plt.subplot(122)
plt.plot(X, y, "b.")
plt.plot(x1, y_pred2, "r.-", linewidth=2, label=r"$\hat{y}$")
plt.axis([0, 1, -0.2, 1.1])
plt.xlabel("$x_1$", fontsize=18)
plt.title("min_samples_leaf={}".format(tree_reg2.min_samples_leaf), fontsize=14)

save_fig("tree_regression_regularization_plot")
Saving figure tree_regression_regularization_plot
Apache-2.0
06_blog/06_blog_01.ipynb
yunshuipiao/hands-on-ml-with-sklearn-tf-python3
Use OSMnx to construct place boundaries and street networks, and save as various file formats for working with later - [Overview of OSMnx](http://geoffboeing.com/2016/11/osmnx-python-street-networks/) - [GitHub repo](https://github.com/gboeing/osmnx) - [Examples, demos, tutorials](https://github.com/gboeing/osmnx/tree/master/examples)
import osmnx as ox
%matplotlib inline
ox.config(log_file=True, log_console=True, use_cache=True)

place = 'Piedmont, California, USA'
_____no_output_____
MIT
examples/05-example-save-load-networks-shapes.ipynb
tanvim/test
Get place shape geometries from OSM and save as shapefile
gdf = ox.gdf_from_place(place)
gdf.loc[0, 'geometry']

# save place boundary geometry as ESRI shapefile
ox.save_gdf_shapefile(gdf, filename='place-shape')
_____no_output_____
MIT
examples/05-example-save-load-networks-shapes.ipynb
tanvim/test
Construct street network and save as shapefile to work with in GIS
G = ox.graph_from_place(place, network_type='drive')
G_projected = ox.project_graph(G)

# save street network as ESRI shapefile
ox.save_graph_shapefile(G_projected, filename='network-shape')
_____no_output_____
MIT
examples/05-example-save-load-networks-shapes.ipynb
tanvim/test
Save street network as GraphML to work with in Gephi or NetworkX
# save street network as GraphML file
ox.save_graphml(G_projected, filename='network.graphml')
_____no_output_____
MIT
examples/05-example-save-load-networks-shapes.ipynb
tanvim/test
Save street network as SVG to work with in Illustrator
# save street network as SVG
fig, ax = ox.plot_graph(G_projected, show=False, save=True, filename='network', file_format='svg')
_____no_output_____
MIT
examples/05-example-save-load-networks-shapes.ipynb
tanvim/test
Load street network from saved GraphML file
G2 = ox.load_graphml('network.graphml')
fig, ax = ox.plot_graph(G2)
_____no_output_____
MIT
examples/05-example-save-load-networks-shapes.ipynb
tanvim/test
TAG MATRIX DATA SCRAPER
- author: Richard Castro
- sticky_rank: 1
- toc: true
- badges: true
- comments: false
- categories: [Matrix]
- image: images/scraper.jpg
import pandas as pd
import requests
import datetime
date=datetime.datetime.now().strftime("%Y-%m-%d")
import dash
import dash_core_components as dcc

df=pd.read_csv("https://raw.githubusercontent.com/pcm-dpc/COVID-19/master/dati-province/dpc-covid19-ita-province.csv")
df.to_csv('../data/clients/CHEP/'+date+'-Italy.csv')
_____no_output_____
Apache-2.0
_notebooks/Matrix Data Scrapper.ipynb
lalalaNomNomNom/Notebooks
CANADA STUFF
#SOURCE - https://github.com/eebrown/data2019nCoV
df=pd.read_csv('https://raw.githubusercontent.com/eebrown/data2019nCoV/master/data-raw/covid19.csv')
df=df.rename(columns={'pruid':'uid', 'prname':'province'})
col=['uid', 'province','date', 'numconf', 'numprob', 'numdeaths', 'numtotal', 'numtested', 'numrecover',
     'percentrecover', 'ratetested', 'numtoday', 'percentoday', 'ratetotal', 'ratedeaths', 'numdeathstoday',
     'percentdeath', 'numtestedtoday', 'numrecoveredtoday', 'percentactive', 'numactive', 'rateactive',
     'numtotal_last14', 'ratetotal_last14', 'numdeaths_last14', 'ratedeaths_last14']
df_ca=df[col]
df_ca.set_index('date', inplace=True)
df_ca.to_csv('../data/sources/canada/'+date+'-eeBrown.csv')

#SOURCE - https://www12.statcan.gc.ca/census-recensement/index-eng.cfm
df=pd.read_csv('https://www12.statcan.gc.ca/census-recensement/2016/dp-pd/hlt-fst/pd-pl/Tables/CompFile.cfm?Lang=Eng&T=301&OFT=FULLCSV')
df_cacen=df
df_cacen.to_csv('../data/sources/canada/'+date+'-ca_census.csv')

#SOURCE - ISHABERRY
#PROVINCE LEVEL CASE DATA
df=pd.read_csv('https://raw.githubusercontent.com/ishaberry/Covid19Canada/master/timeseries_prov/cases_timeseries_prov.csv')
df.rename(columns={'date_report':'date'}, inplace=True)
df.set_index('date')
df_Isha=df
df_Isha.to_csv('../data/sources/canada/'+date+'Isha_Prov_Cases.csv')

#SOURCE - ISHABERRY
#HEALTH REGION LEVEL CASE DATA
df=pd.read_csv('https://raw.githubusercontent.com/ishaberry/Covid19Canada/master/timeseries_hr/cases_timeseries_hr.csv')
df.rename(columns={'date_report':'date'}, inplace=True)
df.set_index('date')
df_Isha=df
df_Isha.to_csv('../data/sources/canada/'+date+'Isha_HR_Cases.csv')

#SOURCE - ISHABERRY
#PROVINCE LEVEL TEST DATA
df=pd.read_csv('https://raw.githubusercontent.com/ishaberry/Covid19Canada/master/timeseries_prov/testing_timeseries_prov.csv')
df.rename(columns={'date_testing':'date'}, inplace=True)
df.set_index('date')
df_Isha=df
df_Isha.to_csv('../data/sources/canada/'+date+'Isha_Province_Testing.csv')
_____no_output_____
Apache-2.0
_notebooks/Matrix Data Scrapper.ipynb
lalalaNomNomNom/Notebooks
World-o-Meter
#WORLD O METER DATA
#NEW YORK COUNTY DATA
import datetime
date=datetime.datetime.now().strftime("%Y-%m-%d")
web=requests.get('https://www.worldometers.info/coronavirus/usa/new-york')
ny=pd.read_html(web.text)
ny=ny[1]
ny.columns=map(str.lower, ny.columns)
ny.to_csv('../data/sources/worldometer/'+date+'-NY-County-Data.csv')

#CALIFORNIA COUNTY DATA
cad=requests.get('https://www.worldometers.info/coronavirus/usa/california')
ca=pd.read_html(cad.text)
ca=ca[1]
ca.columns=map(str.lower, ca.columns)
ca.to_csv('../data/sources/worldometer/'+date+'-CA-County-Data.csv')

#NEW JERSEY COUNTY DATA
njd=requests.get('https://www.worldometers.info/coronavirus/usa/new-jersey')
nj=pd.read_html(njd.text)
nj=nj[1]
nj.columns=map(str.lower, nj.columns)
nj.to_csv('../data/sources/worldometer/'+date+'-NJ-County-Data.csv')

#OHIO COUNTY DATA
ohd=requests.get('https://www.worldometers.info/coronavirus/usa/ohio/')
oh=pd.read_html(ohd.text)
oh=oh[1]
oh.columns=map(str.lower, oh.columns)
oh.to_csv('../data/sources/worldometer/'+date+'-OH-County-Data.csv')

#SOUTH CAROLINA COUNTY DATA
scd=requests.get('https://www.worldometers.info/coronavirus/usa/south-carolina/')
sc=pd.read_html(scd.text)
sc=sc[1]
sc.columns=map(str.lower, sc.columns)
sc.to_csv('../data/sources/worldometer/'+date+'-SC-County-Data.csv')

#PA COUNTY DATA
pad=requests.get('https://www.worldometers.info/coronavirus/usa/pennsylvania/')
pa=pd.read_html(pad.text)
pa=pa[1]
pa.columns=map(str.lower, pa.columns)
pa.to_csv('../data/sources/worldometer/'+date+'-PA-County-Data.csv')

#WASHINGTON COUNTY DATA
wad=requests.get('https://www.worldometers.info/coronavirus/usa/washington/')
wa=pd.read_html(wad.text)
wa=wa[1]
wa.columns=map(str.lower, wa.columns)
wa.to_csv('../data/sources/worldometer/'+date+'-WA-County-Data.csv')

#US STATE LEVEL DATA
we=requests.get('https://www.worldometers.info/coronavirus/country/us/')
us=pd.read_html(we.text)
us=us[1]
us.to_csv('../data/sources/worldometer/'+date+'-US-State-Data.csv')
_____no_output_____
Apache-2.0
_notebooks/Matrix Data Scrapper.ipynb
lalalaNomNomNom/Notebooks
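The seven near-identical state blocks above differ only in the page slug and the file prefix, so they could be collapsed into a loop. A refactor sketch, under the assumption that every state page keeps its county table at index 1 of `pd.read_html`'s result (as the cells above assume):

```python
STATES = {
    'new-york': 'NY', 'california': 'CA', 'new-jersey': 'NJ',
    'ohio': 'OH', 'south-carolina': 'SC', 'pennsylvania': 'PA',
    'washington': 'WA',
}

def county_url(slug):
    """Worldometer per-state county page URL."""
    return f'https://www.worldometers.info/coronavirus/usa/{slug}/'

def scrape_state(slug, prefix, out_dir='../data/sources/worldometer'):
    """Fetch one state's county table and save it, mirroring the cells above."""
    # imports kept local so the sketch can be read without the deps installed
    import datetime
    import pandas as pd
    import requests
    date = datetime.datetime.now().strftime("%Y-%m-%d")
    resp = requests.get(county_url(slug))
    df = pd.read_html(resp.text)[1]          # county table assumed at index 1
    df.columns = map(str.lower, df.columns)
    df.to_csv(f'{out_dir}/{date}-{prefix}-County-Data.csv')

# for slug, prefix in STATES.items():
#     scrape_state(slug, prefix)
```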
rt live
rtlive=pd.read_csv('https://d14wlfuexuxgcm.cloudfront.net/covid/rt.csv')
rtlive.to_csv('../_data/data_sources/rtlive/rtlive'+date+'.csv')
_____no_output_____
Apache-2.0
_notebooks/Matrix Data Scrapper.ipynb
lalalaNomNomNom/Notebooks
Mobility Reports: Google Mobility Reports and Apple Mobility Reports
#GOOGLE AND APPLE MOBILITY DATA BY COUNTY
#apple=pd.read_csv('https://covid19-static.cdn-apple.com/covid19-mobility-data/2014HotfixDev8/v3/en-us/applemobilitytrends-2020-08-08.csv')
#apple.to_csv('../_data/Data_Sources/Mobility_Reports/apple.csv')

google=pd.read_csv('https://www.gstatic.com/covid19/mobility/Global_Mobility_Report.csv')
google.to_csv('../_data/Data_Sources/google/google.csv')
_____no_output_____
Apache-2.0
_notebooks/Matrix Data Scrapper.ipynb
lalalaNomNomNom/Notebooks
WORLD-O-METER DATASETS: NEW YORK, CALIFORNIA, NEW JERSEY, PA, SOUTH CAROLINA, OHIO, WASHINGTON STATE
healthDepartment=requests.get('https://data.ct.gov/Health-and-Human-Services/COVID-19-Tests-Cases-and-Deaths-By-Town-/28fr-iqnx/data')
hd=pd.read_html(healthDepartment.text)
_____no_output_____
Apache-2.0
_notebooks/Matrix Data Scrapper.ipynb
lalalaNomNomNom/Notebooks