| repo_name | path | license | cells | types |
|---|---|---|---|---|
zzsza/Datascience_School
|
09. 기초 확률론2 - 확률 변수/01. NumPy를 사용한 난수 발생.ipynb
|
mit
|
[
"Generating Random Numbers with NumPy\nWe look at how to generate random numbers and shuffle data randomly in Python.\nThese features are mainly provided by NumPy's random subpackage.",
"import numpy as np",
"Setting the Seed\nIn a computer program, every algorithm involving randomness is in fact not random at all: given a starting number, it generates a sequence that merely looks like random numbers, following a fixed algorithm. The output numbers simply appear to have no correlation with one another. \nTo make different numbers come out on each run of the same algorithm, the starting number is changed every time, for example by using the current time. This starting number is called the seed. \nTherefore, if the seed is set manually, the random numbers generated afterwards can be predicted.\nIn Python, the command to set the seed is seed. Its argument is an integer greater than or equal to 0.",
"np.random.seed(0)",
"After setting the seed like this, generate 5 random numbers with the rand command. You must run the following command immediately, without running any other random-number commands in between.",
"np.random.rand(5)",
"Generate random numbers a few more times.",
"np.random.rand(10)\n\nnp.random.rand(10)",
"Now reset the seed to 0 and generate random numbers again.",
"np.random.seed(0)\n\nnp.random.rand(5)\n\nnp.random.rand(10)\n\nnp.random.rand(10)",
"You can see that the same numbers as before come out.\nShuffling Existing Data\nTo change the order of existing data, use the shuffle command.",
"x = np.arange(10)\nnp.random.shuffle(x)\nx",
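One detail worth noting, shown here as a self-contained sketch (not part of the original notebook): for arrays with more than one dimension, shuffle reorders the first axis only, so rows move as whole units.

```python
import numpy as np

np.random.seed(0)
m = np.arange(12).reshape(4, 3)
np.random.shuffle(m)  # reorders rows (the first axis) only

# Every original row is still present, just in a different order.
rows = {tuple(r) for r in m}
assert rows == {(0, 1, 2), (3, 4, 5), (6, 7, 8), (9, 10, 11)}
```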
"Sampling from Existing Data\nSelecting a subset from an existing data set is called sampling. Use the choice command for sampling. The choice command accepts the following arguments:\nnumpy.random.choice(a, size=None, replace=True, p=None)\n\na : if an array, the source data; if an integer, data is generated as range(a)\nsize : integer; the number of samples\nreplace : boolean; if True, the same element may be selected more than once\np : array; the probability with which each element is selected",
"np.random.choice(5, 5, replace=False) # same as the shuffle command\n\nnp.random.choice(5, 3, replace=False) # select only 3\n\nnp.random.choice(5, 10) # select 10 with replacement\n\nnp.random.choice(5, 10, p=[0.1, 0, 0.3, 0.6, 0]) # select 10 with non-uniform selection probabilities",
"Generating Random Numbers\nNumPy's random subpackage provides three commands for generating random numbers:\n\nrand: uniform distribution between 0 and 1 \nrandn: standard normal (Gaussian) distribution\nrandint: uniformly distributed random integers\n\nThe rand command generates real-valued random numbers uniformly distributed between 0 and 1. The numeric argument is the number of random values to generate. Passing several arguments produces an array of that shape.",
"np.random.rand(10)\n\nnp.random.rand(3, 5)",
"The randn command generates random numbers following the standard normal (Gaussian) distribution with mean 0 and standard deviation 1. Its arguments are used in the same way as for rand.",
"np.random.randn(10)\n\nnp.random.randn(3, 5)",
"The randint command takes the following arguments:\nnumpy.random.randint(low, high=None, size=None)\nIf high is not given, it returns integers between 0 and low; if high is given, integers between low and high (high itself excluded). size is the number of random values.",
"np.random.randint(10, size=10)\n\nnp.random.randint(10, 20, size=10)\n\nnp.random.randint(10, 20, size=(3,5))",
"Statistics of Integer Random Numbers\nIf the generated random numbers are real-valued, they can be analyzed with tools such as a histogram. \nIf they are integer-valued, the values can be analyzed with the unique or bincount commands. These commands are found directly under NumPy, not in the random subpackage.\nThe unique command removes duplicate values from the data and returns the list of distinct values. If the return_counts argument is set to True, it also returns how many times each value occurs.",
"np.unique([11, 11, 2, 2, 34, 34])\n\na = np.array(['a', 'b', 'b', 'c', 'a'])\nindex, count = np.unique(a, return_counts=True)\n\nindex\n\ncount",
"However, unique only counts values that actually occur in the data, so it gives no information about values that could occur but happen to be absent.\nTherefore, when the data fall within a known range, like the faces of a die, it is more convenient to use bincount with the minlength argument. bincount counts occurrences of each integer from 0 to minlength - 1; values absent from the data get a count of 0.",
"np.bincount([1, 1, 2, 2, 2, 3], minlength=6)"
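Applied to the dice example mentioned above, a small sketch (assuming faces 1 to 6 and the legacy np.random API):

```python
import numpy as np

np.random.seed(0)
rolls = np.random.randint(1, 7, size=100)  # 100 dice rolls, values 1..6
counts = np.bincount(rolls, minlength=7)   # counts[k] = how often face k appeared

assert counts[0] == 0        # a die never shows 0
assert counts.sum() == 100   # every roll is counted exactly once
```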
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
Upward-Spiral-Science/team1
|
code/Assignment10_Akash.ipynb
|
apache-2.0
|
[
"Assignment 10 - Akash",
"import matplotlib.pyplot as plt\n%matplotlib inline \nimport numpy as np\nimport urllib2\nimport scipy.stats as stats\n\nurl = ('https://raw.githubusercontent.com/Upward-Spiral-Science/data/master/syn-density/output.csv')\ndata = urllib2.urlopen(url)\ncsv = np.genfromtxt(data, delimiter=\",\")[1:] # Remove label row\n\n# Clip data based on thresholds on x and y coordinates. Found from Bijan visual\nx_bounds = (409, 3529)\ny_bounds = (1564, 3124)\n\ndef check_in_bounds(row, x_bounds, y_bounds):\n if row[0] < x_bounds[0] or row[0] > x_bounds[1]: #check x inbounds\n return False\n if row[1] < y_bounds[0] or row[1] > y_bounds[1]: #check y inbounds\n return False\n if row[3] == 0: # remove zeros of unmasked values\n return False\n return True\n\nindices_in_bound = np.where(np.apply_along_axis(check_in_bounds, 1, csv, x_bounds, y_bounds))\ndata_clipped = csv[indices_in_bound]\n\ndensity = np.divide(data_clipped[:, 4],data_clipped[:, 3])*(64**3)\n\ndata_density = np.vstack((data_clipped[:,0],data_clipped[:,1],data_clipped[:,2],density))\ndata_density = data_density.T\n\nprint 'End removing zero unmasked and removing image edges'",
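The row-wise check_in_bounds/apply_along_axis filter above can also be expressed with vectorized boolean masks, which is typically much faster on large arrays. A minimal sketch on a hypothetical three-row stand-in for csv (column order assumed as in the notebook: x, y, z, unmasked, synapses):

```python
import numpy as np

# Hypothetical stand-in rows: x, y, z, unmasked, synapses
csv = np.array([[500., 2000., 0., 10., 2.],   # in bounds
                [100., 2000., 0., 10., 1.],   # x out of bounds
                [500., 2000., 0.,  0., 1.]])  # unmasked == 0
x_bounds = (409, 3529)
y_bounds = (1564, 3124)

mask = ((csv[:, 0] >= x_bounds[0]) & (csv[:, 0] <= x_bounds[1]) &
        (csv[:, 1] >= y_bounds[0]) & (csv[:, 1] <= y_bounds[1]) &
        (csv[:, 3] != 0))
data_clipped = csv[mask]
assert data_clipped.shape == (1, 5)  # only the first row survives
```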
"Set up different sections of the data based on clusters of 3, and run regressions\nCluster borders estimated visually from the previous homework's cluster plots (Emily)",
"#edges of k-means, k = 3 clusters\ncluster3x1bounds = (409,1500)\ncluster3x2bounds = (1501,2500)\ncluster3x3bounds = (2501,3529)\n\n#edges of k-means, k = 4 clusters\ncluster4x1bounds = (409,1100)\ncluster4x4bounds = (2750,3529)\n\ndef check_in_cluster(row, x_bounds):\n if row[0] < x_bounds[0] or row[0] > x_bounds[1]: #check x inbounds\n return False\n return True\n\nindices_in_bound = np.where(np.apply_along_axis(check_in_cluster, 1, data_density, cluster3x1bounds))\ndata_k3_1 = data_density[indices_in_bound]\n\nindices_in_bound = np.where(np.apply_along_axis(check_in_cluster, 1, data_density, cluster3x2bounds))\ndata_k3_2 = data_density[indices_in_bound]\n\nindices_in_bound = np.where(np.apply_along_axis(check_in_cluster, 1, data_density, cluster3x3bounds))\ndata_k3_3 = data_density[indices_in_bound]\n\nfrom sklearn.linear_model import LinearRegression\nfrom sklearn.svm import LinearSVR\nfrom sklearn.neighbors import KNeighborsRegressor as KNN\nfrom sklearn.ensemble import RandomForestRegressor as RF\nfrom sklearn.preprocessing import PolynomialFeatures as PF\nfrom sklearn.pipeline import Pipeline\nfrom sklearn import cross_validation\n\nnames = ['Linear Regression','SVR','KNN Regression','Random Forest Regression','Polynomial Regression']\nregressions = [LinearRegression(),\n LinearSVR(C=1.0),\n KNN(n_neighbors=10, algorithm='auto'),\n RF(max_depth=5, max_features=1),\n Pipeline([('poly', PF(degree=2)),('linear', LinearRegression(fit_intercept=False))])]\nk_fold = 10\n",
"Start regressions in cluster\n1.1) Cluster of 3, check 1st cluster",
"print('Regression on X=(x,y,z), Y=syn/unmasked')\nX = data_k3_1[:, (0, 1, 2)] # x,y,z\nY = data_k3_1[:, 3] # syn/unmasked\nfor idx2, reg in enumerate(regressions):\n scores = cross_validation.cross_val_score(reg, X, Y, scoring='r2', cv=k_fold)\n print(\"R^2 of %s: \\t %0.2f (+/- %0.2f)\" % (names[idx2], scores.mean(), scores.std() * 2))\n\n\n# x \nprint\nprint('Regressions on x and density')\nX = data_k3_1[:,[0]] # x,y,z\nY = data_k3_1[:,3] # syn/unmasked\nfor idx2, reg in enumerate(regressions):\n scores = cross_validation.cross_val_score(reg, X, Y, scoring='r2', cv=k_fold)\n print(\"R^2 of %s: \\t %0.2f (+/- %0.2f)\" % (names[idx2], scores.mean(), scores.std() * 2))\n\n# y \nprint\nprint('Regression on y and density')\nX = data_k3_1[:,[1]] # x,y,z\nY = data_k3_1[:,3] # syn/unmasked\nfor idx2, reg in enumerate(regressions):\n scores = cross_validation.cross_val_score(reg, X, Y, scoring='r2', cv=k_fold)\n print(\"R^2 of %s: \\t %0.2f (+/- %0.2f)\" % (names[idx2], scores.mean(), scores.std() * 2))\n\n# z -> syn/unmasked\nprint\nprint('Regression on z and density')\nX = data_k3_1[:,[2]] # x,y,z\nY = data_k3_1[:,3] # syn/unmasked\nfor idx2, reg in enumerate(regressions):\n scores = cross_validation.cross_val_score(reg, X, Y, scoring='r2', cv=k_fold)\n print(\"R^2 of %s: \\t %0.2f (+/- %0.2f)\" % (names[idx2], scores.mean(), scores.std() * 2))",
"1.2) 2nd cluster",
"print('Regression on X=(x,y,z), Y=syn/unmasked')\nX = data_k3_2[:, (0, 1, 2)] # x,y,z\nY = data_k3_2[:, 3] # syn/unmasked\nfor idx2, reg in enumerate(regressions):\n scores = cross_validation.cross_val_score(reg, X, Y, scoring='r2', cv=k_fold)\n print(\"R^2 of %s: \\t %0.2f (+/- %0.2f)\" % (names[idx2], scores.mean(), scores.std() * 2))\n\n\n# x \nprint\nprint('Regressions on x and density')\nX = data_k3_2[:,[0]] # x,y,z\nY = data_k3_2[:,3] # syn/unmasked\nfor idx2, reg in enumerate(regressions):\n scores = cross_validation.cross_val_score(reg, X, Y, scoring='r2', cv=k_fold)\n print(\"R^2 of %s: \\t %0.2f (+/- %0.2f)\" % (names[idx2], scores.mean(), scores.std() * 2))\n\n# y \nprint\nprint('Regression on y and density')\nX = data_k3_2[:,[1]] # x,y,z\nY = data_k3_2[:,3] # syn/unmasked\nfor idx2, reg in enumerate(regressions):\n scores = cross_validation.cross_val_score(reg, X, Y, scoring='r2', cv=k_fold)\n print(\"R^2 of %s: \\t %0.2f (+/- %0.2f)\" % (names[idx2], scores.mean(), scores.std() * 2))\n\n# z -> syn/unmasked\nprint\nprint('Regression on z and density')\nX = data_k3_2[:,[2]] # x,y,z\nY = data_k3_2[:,3] # syn/unmasked\nfor idx2, reg in enumerate(regressions):\n scores = cross_validation.cross_val_score(reg, X, Y, scoring='r2', cv=k_fold)\n print(\"R^2 of %s: \\t %0.2f (+/- %0.2f)\" % (names[idx2], scores.mean(), scores.std() * 2))",
"1.3) 3rd cluster",
"print('Regression on X=(x,y,z), Y=syn/unmasked')\nX = data_k3_3[:, (0, 1, 2)] # x,y,z\nY = data_k3_3[:, 3] # syn/unmasked\nfor idx2, reg in enumerate(regressions):\n scores = cross_validation.cross_val_score(reg, X, Y, scoring='r2', cv=k_fold)\n print(\"R^2 of %s: \\t %0.2f (+/- %0.2f)\" % (names[idx2], scores.mean(), scores.std() * 2))\n\n\n# x \nprint\nprint('Regressions on x and density')\nX = data_k3_3[:,[0]] # x,y,z\nY = data_k3_3[:,3] # syn/unmasked\nfor idx2, reg in enumerate(regressions):\n scores = cross_validation.cross_val_score(reg, X, Y, scoring='r2', cv=k_fold)\n print(\"R^2 of %s: \\t %0.2f (+/- %0.2f)\" % (names[idx2], scores.mean(), scores.std() * 2))\n\n# y \nprint\nprint('Regression on y and density')\nX = data_k3_3[:,[1]] # x,y,z\nY = data_k3_3[:,3] # syn/unmasked\nfor idx2, reg in enumerate(regressions):\n scores = cross_validation.cross_val_score(reg, X, Y, scoring='r2', cv=k_fold)\n print(\"R^2 of %s: \\t %0.2f (+/- %0.2f)\" % (names[idx2], scores.mean(), scores.std() * 2))\n\n# z -> syn/unmasked\nprint\nprint('Regression on z and density')\nX = data_k3_3[:,[2]] # x,y,z\nY = data_k3_3[:,3] # syn/unmasked\nfor idx2, reg in enumerate(regressions):\n scores = cross_validation.cross_val_score(reg, X, Y, scoring='r2', cv=k_fold)\n print(\"R^2 of %s: \\t %0.2f (+/- %0.2f)\" % (names[idx2], scores.mean(), scores.std() * 2))",
"2) Set up 4 clusters",
"#edges of k-means, k = 4 clusters, Not sure how to get full clusters\n\ncluster4x1bounds = (409,1100)\ncluster4x4bounds = (2750,3529)\n\ndef check_in_cluster(row, x_bounds):\n if row[0] < x_bounds[0] or row[0] > x_bounds[1]: #check x inbounds\n return False\n return True\n\nindices_in_bound = np.where(np.apply_along_axis(check_in_cluster, 1, data_density, cluster4x1bounds))\ndata_k4_1 = data_density[indices_in_bound]\n\nindices_in_bound = np.where(np.apply_along_axis(check_in_cluster, 1, data_density, cluster4x4bounds))\ndata_k4_4 = data_density[indices_in_bound]",
"2.1) 4 cluster, section 1",
"print('Regression on X=(x,y,z), Y=syn/unmasked')\nX = data_k4_1[:, (0, 1, 2)] # x,y,z\nY = data_k4_1[:, 3] # syn/unmasked\nfor idx2, reg in enumerate(regressions):\n scores = cross_validation.cross_val_score(reg, X, Y, scoring='r2', cv=k_fold)\n print(\"R^2 of %s: \\t %0.2f (+/- %0.2f)\" % (names[idx2], scores.mean(), scores.std() * 2))\n\n\n# x \nprint\nprint('Regressions on x and density')\nX = data_k4_1[:,[0]] # x,y,z\nY = data_k4_1[:,3] # syn/unmasked\nfor idx2, reg in enumerate(regressions):\n scores = cross_validation.cross_val_score(reg, X, Y, scoring='r2', cv=k_fold)\n print(\"R^2 of %s: \\t %0.2f (+/- %0.2f)\" % (names[idx2], scores.mean(), scores.std() * 2))\n\n# y \nprint\nprint('Regression on y and density')\nX = data_k4_1[:,[1]] # x,y,z\nY = data_k4_1[:,3] # syn/unmasked\nfor idx2, reg in enumerate(regressions):\n scores = cross_validation.cross_val_score(reg, X, Y, scoring='r2', cv=k_fold)\n print(\"R^2 of %s: \\t %0.2f (+/- %0.2f)\" % (names[idx2], scores.mean(), scores.std() * 2))\n\n# z -> syn/unmasked\nprint\nprint('Regression on z and density')\nX = data_k4_1[:,[2]] # x,y,z\nY = data_k4_1[:,3] # syn/unmasked\nfor idx2, reg in enumerate(regressions):\n scores = cross_validation.cross_val_score(reg, X, Y, scoring='r2', cv=k_fold)\n print(\"R^2 of %s: \\t %0.2f (+/- %0.2f)\" % (names[idx2], scores.mean(), scores.std() * 2))",
"2.2) 4 cluster, 4th section",
"print('Regression on X=(x,y,z), Y=syn/unmasked')\nX = data_k4_4[:, (0, 1, 2)] # x,y,z\nY = data_k4_4[:, 3] # syn/unmasked\nfor idx2, reg in enumerate(regressions):\n scores = cross_validation.cross_val_score(reg, X, Y, scoring='r2', cv=k_fold)\n print(\"R^2 of %s: \\t %0.2f (+/- %0.2f)\" % (names[idx2], scores.mean(), scores.std() * 2))\n\n\n# x \nprint\nprint('Regressions on x and density')\nX = data_k4_4[:,[0]] # x,y,z\nY = data_k4_4[:,3] # syn/unmasked\nfor idx2, reg in enumerate(regressions):\n scores = cross_validation.cross_val_score(reg, X, Y, scoring='r2', cv=k_fold)\n print(\"R^2 of %s: \\t %0.2f (+/- %0.2f)\" % (names[idx2], scores.mean(), scores.std() * 2))\n\n# y \nprint\nprint('Regression on y and density')\nX = data_k4_4[:,[1]] # x,y,z\nY = data_k4_4[:,3] # syn/unmasked\nfor idx2, reg in enumerate(regressions):\n scores = cross_validation.cross_val_score(reg, X, Y, scoring='r2', cv=k_fold)\n print(\"R^2 of %s: \\t %0.2f (+/- %0.2f)\" % (names[idx2], scores.mean(), scores.std() * 2))\n\n# z -> syn/unmasked\nprint\nprint('Regression on z and density')\nX = data_k4_4[:,[2]] # x,y,z\nY = data_k4_4[:,3] # syn/unmasked\nfor idx2, reg in enumerate(regressions):\n scores = cross_validation.cross_val_score(reg, X, Y, scoring='r2', cv=k_fold)\n print(\"R^2 of %s: \\t %0.2f (+/- %0.2f)\" % (names[idx2], scores.mean(), scores.std() * 2))",
"3) Cluster Explore\nFocusing on the centroids, we can see that the GMM centroid centers are not near the centers calculated with k-means.",
"import sklearn.mixture as mixture\nfrom mpl_toolkits.mplot3d import Axes3D  # needed for projection='3d'\n\nn_clusters = 4\n###########################################\ngmm = mixture.GMM(n_components=n_clusters, n_iter=1000, covariance_type='diag', random_state=1)\nclusters = [[] for i in xrange(n_clusters)]\ncentroidmatrix = [0]*4\nprint centroidmatrix\n\npredicted = gmm.fit_predict(data_density)\nfor label, row in zip(predicted, data_density[:,]):\n clusters[label].append(row)\n\n \nfor i in xrange(n_clusters):\n clusters[i] = np.array(clusters[i])\n print \"# of samples in cluster %d: %d\" % (i+1, len(clusters[i])) \n print \"centroid: \", np.average(clusters[i], axis=0)\n centroidmatrix = np.vstack((centroidmatrix,np.average(clusters[i], axis=0)))\n # print \"cluster covariance: \"\n covar = np.cov(clusters[i].T)\n # print covar\n print \"determinant of covariance matrix: \", np.linalg.det(covar)\n print\n\ncentroidmatrix = np.delete(centroidmatrix, (0), axis = 0) \n \nprint centroidmatrix\n\n\nfig = plt.figure(figsize=(10, 7))\nax = fig.gca(projection='3d')\nax.view_init()\nax.dist = 10 # distance\nax.set_xlabel('x')\nax.set_ylabel('y')\nax.set_zlabel('z')\nax.set_title('Scatter Plot of Centroids, size weighted by density')\n\n\nax.set_xticks(np.arange(500, 3500, 500))\nax.set_yticks(np.arange(1200,3200, 500))\nax.set_zticks(np.arange(0,1200, 150))\n\n\nax.scatter(\n centroidmatrix[:, 0], centroidmatrix[:, 1], centroidmatrix[:, 2], # data\n c='blue', # marker colour\n marker='o', # marker shape\n s=centroidmatrix[:,3]*10 # marker size\n)\n\n\n\n\nplt.show()",
"4) Hex plot\nHexbin plots for the bottom 4 z-levels. Nothing interesting seems to show up, though the bins may simply be too small and reduce to single points (without density data).",
"from mpl_toolkits.mplot3d import Axes3D\n\n# Random Sample\n\nsamples = 20000\nperm = np.random.permutation(xrange(1, len(data_density[:])))\ndata_density_sample = data_density[perm[:samples]]\n\ndata_uniques, UIndex, UCounts = np.unique(data_density_sample[:,2], return_index = True, return_counts = True)\n\nprint 'uniques'\nprint 'index: ' + str(UIndex)\nprint 'counts: ' + str(UCounts)\nprint 'values: ' + str(data_uniques)\n\nxmin = data_density_sample[:,0].min()\nxmax = data_density_sample[:,0].max()\nymin = data_density_sample[:,1].min()\nymax = data_density_sample[:,1].max()\n\ndef check_z(row):\n if row[2] == 55:\n return True\n return False\n\nindex_true = np.where(np.apply_along_axis(check_z, 1, data_density_sample))\ndds55 = data_density_sample[index_true]\n\ndata_uniques, UIndex, UCounts = np.unique(dds55[:,2], return_index = True, return_counts = True)\n\nprint 'uniques check'\nprint 'index: ' + str(UIndex)\nprint 'counts: ' + str(UCounts)\nprint 'values: ' + str(data_uniques)\n\n#plt.subplots_adjust(hspace=1)\n#plt.subplot(121)\nplt.hexbin(dds55[:,0], dds55[:,1], cmap=plt.cm.YlOrRd_r,mincnt=1)\nplt.axis([xmin, xmax, ymin, ymax])\nplt.title(\"Hexagon binning\")\nplt.xlabel('x coordinates')\nplt.ylabel('y coordinates')\ncb = plt.colorbar()\ncb.set_label('density')\n\nplt.show()\n\ndef check_z(row):\n if row[2] == 166:\n return True\n return False\n\nindex_true = np.where(np.apply_along_axis(check_z, 1, data_density_sample))\nddsZ = data_density_sample[index_true]\n\nplt.hexbin(ddsZ[:,0], ddsZ[:,1], cmap=plt.cm.YlOrRd_r,mincnt=1)\nplt.axis([xmin, xmax, ymin, ymax])\nplt.title(\"Hexagon binning\")\nplt.xlabel('x coordinates')\nplt.ylabel('y coordinates')\ncb = plt.colorbar()\ncb.set_label('density')\n\nplt.show()\n\ndef check_z(row):\n if row[2] == 277:\n return True\n return False\n\nindex_true = np.where(np.apply_along_axis(check_z, 1, data_density_sample))\nddsZ = data_density_sample[index_true]\n\nplt.hexbin(ddsZ[:,0], ddsZ[:,1], 
cmap=plt.cm.YlOrRd_r,mincnt=1)\nplt.axis([xmin, xmax, ymin, ymax])\nplt.title(\"Hexagon binning\")\nplt.xlabel('x coordinates')\nplt.ylabel('y coordinates')\ncb = plt.colorbar()\ncb.set_label('density')\n\nplt.show()\n\ndef check_z(row):\n if row[2] == 388:\n return True\n return False\n\nindex_true = np.where(np.apply_along_axis(check_z, 1, data_density_sample))\nddsZ = data_density_sample[index_true]\n\nplt.hexbin(ddsZ[:,0], ddsZ[:,1], cmap=plt.cm.YlGnBu,mincnt=1)\nplt.axis([xmin, xmax, ymin, ymax])\nplt.title(\"Hexagon binning\")\nplt.xlabel('x coordinates')\nplt.ylabel('y coordinates')\ncb = plt.colorbar()\ncb.set_label('density')\n\nplt.show()",
"5) Stats of the spike shown to Jovo\nSomething isn't right here, though.",
"\ndef check_spike(row):\n if row[3] > 0.0013 and row[3] < 0.00135:\n return True\n return False\n\nindex_true = np.where(np.apply_along_axis(check_spike, 1, data_density))\nspike = data_density[index_true]\n\nprint \"Spike\"\nprint \"length: \", len(spike)\nprint \"Mean: \", np.mean(spike)\nprint \"Median: \", np.median(spike)\nprint \"STD: \", np.std(spike)",
"Explore the spike"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
mne-tools/mne-tools.github.io
|
0.20/_downloads/2e7ef25ccf0fd2af7902f12debe11fc1/plot_stats_cluster_1samp_test_time_frequency.ipynb
|
bsd-3-clause
|
[
"%matplotlib inline",
"Non-parametric 1 sample cluster statistic on single trial power\nThis script shows how to estimate significant clusters\nin time-frequency power estimates. It uses a non-parametric\nstatistical procedure based on permutations and cluster\nlevel statistics.\nThe procedure consists of:\n\nextracting epochs\ncomputing single trial power estimates\nbaseline correcting the power estimates (power ratios)\ncomputing stats to see if the ratio deviates from 1.",
"# Authors: Alexandre Gramfort <alexandre.gramfort@inria.fr>\n#\n# License: BSD (3-clause)\n\nimport numpy as np\nimport matplotlib.pyplot as plt\n\nimport mne\nfrom mne.time_frequency import tfr_morlet\nfrom mne.stats import permutation_cluster_1samp_test\nfrom mne.datasets import sample\n\nprint(__doc__)",
"Set parameters",
"data_path = sample.data_path()\nraw_fname = data_path + '/MEG/sample/sample_audvis_raw.fif'\ntmin, tmax, event_id = -0.3, 0.6, 1\n\n# Setup for reading the raw data\nraw = mne.io.read_raw_fif(raw_fname)\nevents = mne.find_events(raw, stim_channel='STI 014')\n\ninclude = []\nraw.info['bads'] += ['MEG 2443', 'EEG 053'] # bads + 2 more\n\n# picks MEG gradiometers\npicks = mne.pick_types(raw.info, meg='grad', eeg=False, eog=True,\n stim=False, include=include, exclude='bads')\n\n# Load condition 1\nevent_id = 1\nepochs = mne.Epochs(raw, events, event_id, tmin, tmax, picks=picks,\n baseline=(None, 0), preload=True,\n reject=dict(grad=4000e-13, eog=150e-6))\n\n# Take only one channel\nch_name = 'MEG 1332'\nepochs.pick_channels([ch_name])\n\nevoked = epochs.average()\n\n# Factor to down-sample the temporal dimension of the TFR computed by\n# tfr_morlet. Decimation occurs after frequency decomposition and can\n# be used to reduce memory usage (and possibly computational time of downstream\n# operations such as nonparametric statistics) if you don't need high\n# spectrotemporal resolution.\ndecim = 5\nfreqs = np.arange(8, 40, 2) # define frequencies of interest\nsfreq = raw.info['sfreq'] # sampling in Hz\ntfr_epochs = tfr_morlet(epochs, freqs, n_cycles=4., decim=decim,\n average=False, return_itc=False, n_jobs=1)\n\n# Baseline power\ntfr_epochs.apply_baseline(mode='logratio', baseline=(-.100, 0))\n\n# Crop in time to keep only what is between 0 and 400 ms\nevoked.crop(0., 0.4)\ntfr_epochs.crop(0., 0.4)\n\nepochs_power = tfr_epochs.data[:, 0, :, :] # take the 1 channel",
"Compute statistic",
"threshold = 2.5\nn_permutations = 100 # Warning: 100 is too small for real-world analysis.\nT_obs, clusters, cluster_p_values, H0 = \\\n permutation_cluster_1samp_test(epochs_power, n_permutations=n_permutations,\n threshold=threshold, tail=0)",
"View time-frequency plots",
"evoked_data = evoked.data\ntimes = 1e3 * evoked.times\n\nplt.figure()\nplt.subplots_adjust(0.12, 0.08, 0.96, 0.94, 0.2, 0.43)\n\n# Create new stats image with only significant clusters\nT_obs_plot = np.nan * np.ones_like(T_obs)\nfor c, p_val in zip(clusters, cluster_p_values):\n if p_val <= 0.05:\n T_obs_plot[c] = T_obs[c]\n\nvmax = np.max(np.abs(T_obs))\nvmin = -vmax\nplt.subplot(2, 1, 1)\nplt.imshow(T_obs, cmap=plt.cm.gray,\n extent=[times[0], times[-1], freqs[0], freqs[-1]],\n aspect='auto', origin='lower', vmin=vmin, vmax=vmax)\nplt.imshow(T_obs_plot, cmap=plt.cm.RdBu_r,\n extent=[times[0], times[-1], freqs[0], freqs[-1]],\n aspect='auto', origin='lower', vmin=vmin, vmax=vmax)\nplt.colorbar()\nplt.xlabel('Time (ms)')\nplt.ylabel('Frequency (Hz)')\nplt.title('Induced power (%s)' % ch_name)\n\nax2 = plt.subplot(2, 1, 2)\nevoked.plot(axes=[ax2], time_unit='s')\nplt.show()"
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
opesci/devito
|
examples/seismic/tutorials/04_dask_pickling.ipynb
|
mit
|
[
"04 - Full waveform inversion with Dask and Devito pickling\nIntroduction\nHere, we revisit 04_dask.ipynb: Full Waveform Inversion with Devito and Dask, but with a twist: we now want to show that it is possible to use pickle to serialize (deserialize) a Devito object structure into (from) a byte stream. This is especially useful in our example, as the geometry of all source experiments remains essentially the same; only the source location changes. In other words, we can convert a solver object (built on top of generic Devito objects) into a byte stream to store it. Later on, this byte stream can be retrieved and de-serialized back to an instance of the original solver object by the dask workers, and then populated with the correct geometry for the i-th source location. We can still benefit from the simplicity of the example and create only one solver object, which can be used both to generate the observed data set and to compute the predicted data and gradient in the FWI process. Further examples of pickling can be found here. \nThe tutorial roughly follows the structure of 04_dask.ipynb. Technical details about Dask and scipy.optimize.minimize will therefore be treated only superficially.\nWhat is different from 04_dask.ipynb\n\n\nThe big difference between 04_dask.ipynb and this tutorial is that the former creates a solver object for each source in both the forward modeling and FWI gradient kernels, while here only one solver object is created and reused throughout the optimization process. This is done through pickling and unpickling, respectively.\n\n\nAnother difference is that 04_dask.ipynb creates a list with the observed shots, and each observed shot record in the list is passed as a parameter to a single-shot FWI objective function executed in parallel using the submit() method. Here, a single observed shot record, along with information about its source location, is stored in a dictionary, which is saved into a pickle file. Later, dask workers retrieve the corresponding pickled data when computing the gradient for a single shot. The same applies to the model object in the optimization process: it is serialized each time the model's velocity is updated, and dask workers then unpickle it from file back into a model object.\n\n\nMoreover, the global functional-gradient is obtained differently. In 04_dask.ipynb we had to wait for all computations to finish via wait(futures) and then sum the function values and gradients from all workers. Here, a type fg_pair is defined so that a reduction with sum can be used: it takes all the futures given to it and, after they are completed, combines them to obtain the estimate of the global functional-gradient. \n\n\nscipy.optimize.minimize\nAs in 04_dask.ipynb, here we are going to focus on using L-BFGS via scipy.optimize.minimize(method=’L-BFGS-B’)\npython\nscipy.optimize.minimize(fun, x0, args=(), method='L-BFGS-B', jac=None, bounds=None, tol=None, callback=None, options={'disp': None, 'maxls': 20, 'iprint': -1, 'gtol': 1e-05, 'eps': 1e-08, 'maxiter': 15000, 'ftol': 2.220446049250313e-09, 'maxcor': 10, 'maxfun': 15000})\nThe argument fun is a callable function that returns the misfit between the simulated and the observed data. If jac is a Boolean and is True, fun is assumed to return the gradient along with the objective function - as is our case when applying the adjoint-state method.\nDask\nDask is a task-based parallelization framework for Python. It allows us to distribute our work among a collection of workers controlled by a central scheduler. Dask is well-documented, flexible, and currently under active development.\nIn the same way as in 04_dask.ipynb, we are going to use it here to parallelise the computation of the functional and gradient, as this is the vast bulk of the computational expense of FWI and it is trivially parallel over data shots.\nForward modeling\nWe define the functions used for the forward modeling, as well as the other functions used in constructing and deconstructing Python/Devito objects to/from binary data, as follows:",
"#NBVAL_IGNORE_OUTPUT\n\n# Set up inversion parameters.\nparam = {'t0': 0.,\n 'tn': 1000., # Simulation last 1 second (1000 ms)\n 'f0': 0.010, # Source peak frequency is 10Hz (0.010 kHz)\n 'nshots': 5, # Number of shots to create gradient from\n 'shape': (101, 101), # Number of grid points (nx, nz).\n 'spacing': (10., 10.), # Grid spacing in m. The domain size is now 1km by 1km.\n 'origin': (0, 0), # Need origin to define relative source and receiver locations.\n 'nbl': 40} # nbl thickness.\n\nimport numpy as np\n\nimport scipy\nfrom scipy import signal, optimize\n\nfrom devito import Grid\n\nfrom distributed import Client, LocalCluster, wait\n\nimport cloudpickle as pickle\n\n# Import acoustic solver, source and receiver modules.\nfrom examples.seismic import Model, demo_model, AcquisitionGeometry, Receiver\nfrom examples.seismic.acoustic import AcousticWaveSolver\nfrom examples.seismic import AcquisitionGeometry\n\n# Import convenience function for plotting results\nfrom examples.seismic import plot_image\nfrom examples.seismic import plot_shotrecord\n\n\ndef get_true_model():\n ''' Define the test phantom; in this case we are using\n a simple circle so we can easily see what is going on.\n '''\n return demo_model('circle-isotropic', vp_circle=3.0, vp_background=2.5, \n origin=param['origin'], shape=param['shape'],\n spacing=param['spacing'], nbl=param['nbl'])\n\ndef get_initial_model():\n '''The initial guess for the subsurface model.\n '''\n # Make sure both model are on the same grid\n grid = get_true_model().grid\n return demo_model('circle-isotropic', vp_circle=2.5, vp_background=2.5, \n origin=param['origin'], shape=param['shape'],\n spacing=param['spacing'], nbl=param['nbl'],\n grid=grid)\n\ndef wrap_model(x, astype=None):\n '''Wrap a flat array as a subsurface model.\n '''\n model = get_initial_model()\n v_curr = 1.0/np.sqrt(x.reshape(model.shape))\n \n if astype:\n model.update('vp', v_curr.astype(astype).reshape(model.shape))\n else:\n 
model.update('vp', v_curr.reshape(model.shape))\n return model\n\ndef load_model(filename):\n \"\"\" Returns the current model. This is used by the\n worker to get the current model.\n \"\"\"\n pkl = pickle.load(open(filename, \"rb\"))\n \n return pkl['model']\n\ndef dump_model(filename, model):\n ''' Dump model to disk.\n '''\n pickle.dump({'model':model}, open(filename, \"wb\"))\n \ndef load_shot_data(shot_id, dt):\n ''' Load shot data from disk, resampling to the model time step.\n '''\n pkl = pickle.load(open(\"shot_%d.p\"%shot_id, \"rb\"))\n \n return pkl['geometry'], pkl['rec'].resample(dt)\n\ndef dump_shot_data(shot_id, rec, geometry):\n ''' Dump shot data to disk.\n '''\n pickle.dump({'rec':rec, 'geometry': geometry}, open('shot_%d.p'%shot_id, \"wb\"))\n \ndef generate_shotdata_i(param):\n \"\"\" Inversion crime alert! Here the worker is creating the\n 'observed' data using the real model. For a real case\n the worker would be reading seismic data from disk.\n \"\"\"\n # Reconstruct objects\n with open(\"arguments.pkl\", \"rb\") as cp_file:\n cp = pickle.load(cp_file)\n \n solver = cp['solver']\n\n # source position changes according to the index\n shot_id=param['shot_id']\n \n solver.geometry.src_positions[0,:]=[20, shot_id*1000./(param['nshots']-1)]\n true_d = solver.forward()[0]\n dump_shot_data(shot_id, true_d.resample(4.0), solver.geometry.src_positions)\n\ndef generate_shotdata(solver):\n # Pickle devito objects (save on disk)\n cp = {'solver': solver}\n with open(\"arguments.pkl\", \"wb\") as cp_file:\n pickle.dump(cp, cp_file) \n\n work = [dict(param) for i in range(param['nshots'])]\n # synthetic data is generated here twice: serial (loop below) and parallel (via dask map functionality) \n for i in range(param['nshots']):\n work[i]['shot_id'] = i\n generate_shotdata_i(work[i])\n\n # Map worklist to cluster. We pass our function and the dictionary to the map() function of the client.\n # This returns a list of futures that represents each task\n futures 
= c.map(generate_shotdata_i, work)\n\n # Wait for all futures\n wait(futures)\n\n#NBVAL_IGNORE_OUTPUT\nfrom examples.seismic import plot_shotrecord\n\n# Client setup\ncluster = LocalCluster(n_workers=2, death_timeout=600)\nc = Client(cluster)\n\n# Generate shot data.\ntrue_model = get_true_model()\n# Source coords definition\nsrc_coordinates = np.empty((1, len(param['shape'])))\n# Number of receiver locations per shot.\nnreceivers = 101\n# Set up receiver data and geometry.\nrec_coordinates = np.empty((nreceivers, len(param['shape'])))\nrec_coordinates[:, 1] = np.linspace(param['spacing'][0], true_model.domain_size[0] - param['spacing'][0], num=nreceivers)\nrec_coordinates[:, 0] = 980. # 20m from the right end\n# Geometry \ngeometry = AcquisitionGeometry(true_model, rec_coordinates, src_coordinates,\n param['t0'], param['tn'], src_type='Ricker',\n f0=param['f0'])\n# Set up solver\nsolver = AcousticWaveSolver(true_model, geometry, space_order=4)\ngenerate_shotdata(solver)",
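The serialize/deserialize round-trip that generate_shotdata and generate_shotdata_i rely on can be sketched in isolation. The stand-in class and file name below are hypothetical; the tutorial itself pickles a real AcousticWaveSolver with cloudpickle:

```python
import pickle  # the notebook uses cloudpickle, which follows the same dump/load API

class FakeSolver(object):
    """Hypothetical stand-in for the solver object being serialized."""
    def __init__(self, src_positions):
        self.src_positions = src_positions

solver = FakeSolver(src_positions=[20.0, 500.0])
with open("arguments_demo.pkl", "wb") as cp_file:
    pickle.dump({'solver': solver}, cp_file)   # serialize to disk

with open("arguments_demo.pkl", "rb") as cp_file:
    restored = pickle.load(cp_file)['solver']  # a worker deserializes it

restored.src_positions = [20.0, 250.0]  # repopulate geometry for the i-th shot
assert restored.src_positions == [20.0, 250.0]
```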
"Dask specifics\nPreviously in 03_fwi.ipynb, we defined a function to calculate the individual contribution to the functional and gradient for each shot, which was then used in a loop over all shots. However, when using distributed frameworks such as Dask we instead think in terms of creating a worklist which gets mapped onto the worker pool. The sum reduction is also performed in parallel. For now, however, we assume that scipy.optimize.minimize itself is running on the master process; this is a reasonable simplification because the computational cost of calculating (f, g) far exceeds the other compute costs.\nBecause we want to be able to use standard reduction operators such as sum on (f, g), we first define it as a type so that we can define the __add__ (and __radd__) methods.",
"# Define a type to store the functional and gradient.\nclass fg_pair:\n def __init__(self, f, g):\n self.f = f\n self.g = g\n \n def __add__(self, other):\n f = self.f + other.f\n g = self.g + other.g\n \n return fg_pair(f, g)\n \n def __radd__(self, other):\n if other == 0:\n return self\n else:\n return self.__add__(other)",
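The role of `__radd__` deserves a quick check: Python's builtin `sum()` starts its accumulation from the integer `0`, so the very first addition is `0 + pair`, which dispatches to `__radd__`. A minimal self-contained sketch (the class name `FGPair` is hypothetical, mirroring the `fg_pair` above):

```python
import numpy as np

# Summing functional/gradient pairs with the builtin sum(), which starts
# from 0 -- hence the __radd__ guard against the integer zero.
class FGPair:
    def __init__(self, f, g):
        self.f = f          # scalar functional value
        self.g = g          # gradient array
    def __add__(self, other):
        return FGPair(self.f + other.f, self.g + other.g)
    def __radd__(self, other):
        # sum() begins with 0; return self unchanged in that case
        return self if other == 0 else self.__add__(other)

pairs = [FGPair(1.0, np.array([1.0, 2.0])), FGPair(2.0, np.array([3.0, 4.0]))]
total = sum(pairs)          # works only because of __radd__
print(total.f)              # 3.0
print(total.g)              # [4. 6.]
```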
"Create operators for gradient-based inversion\nTo perform the inversion we are going to use scipy.optimize.minimize(method='L-BFGS-B').\nFirst we define the functional, f, and gradient, g, operator (i.e. the function fun) for a single shot of data. This is the work that is going to be performed by the worker on a unit of data.",
"#NBVAL_IGNORE_OUTPUT\nfrom devito import Function\n\n# Create FWI gradient kernel for a single shot\ndef fwi_gradient_i(param):\n\n    # Load the current model and the shot data for this worker.\n    # Note, unlike the serial example the model is not passed in\n    # as an argument. Broadcasting large datasets is considered\n    # a programming anti-pattern and at the time of writing\n    # it only worked reliably with Dask master. Therefore, the\n    # model is communicated via a file.\n    model0 = load_model(param['model'])\n    \n    dt = model0.critical_dt\n    nbl = model0.nbl\n\n    # Get src_position and data\n    src_positions, rec = load_shot_data(param['shot_id'], dt)\n\n    # Set up solver -- load the solver used above in the generation of the synthetic data. \n    with open(\"arguments.pkl\", \"rb\") as cp_file:\n        cp = pickle.load(cp_file)\n    solver = cp['solver']\n    \n    # Set attributes to solver\n    solver.geometry.src_positions=src_positions\n    solver.geometry.resample(dt)\n\n    # Compute simulated data and full forward wavefield u0\n    d, u0 = solver.forward(vp=model0.vp, dt=dt, save=True)[0:2]\n    \n    # Compute the data misfit (residual) and objective function\n    residual = Receiver(name='rec', grid=model0.grid,\n                        time_range=solver.geometry.time_axis,\n                        coordinates=solver.geometry.rec_positions)\n\n    #residual.data[:] = d.data[:residual.shape[0], :] - rec.data[:residual.shape[0], :]\n    residual.data[:] = d.data[:] - rec.data[0:d.data.shape[0], :]\n    f = .5*np.linalg.norm(residual.data.flatten())**2\n\n    # Compute gradient using the adjoint-state method. Note, this\n    # backpropagates the data misfit through the model.\n    grad = Function(name=\"grad\", grid=model0.grid)\n    solver.gradient(rec=residual, u=u0, vp=model0.vp, dt=dt, grad=grad)\n    \n    # Copying here to avoid a (probably overzealous) destructor deleting\n    # the gradient before Dask has had a chance to communicate it.\n    g = np.array(grad.data[:])[nbl:-nbl, nbl:-nbl]    \n    \n    # return the objective functional and gradient.\n    return fg_pair(f, g)",
"Define the global functional-gradient operator. This does the following:\n* Maps the worklist (shots) to the workers so that the individual contributions to (f, g) are computed.\n* Sums the individual contributions to (f, g) and returns the result.",
"def fwi_gradient(model, param):\n # Dump a copy of the current model for the workers\n # to pick up when they are ready.\n param['model'] = \"model_0.p\"\n dump_model(param['model'], wrap_model(model))\n\n # Define work list\n work = [dict(param) for i in range(param['nshots'])]\n for i in range(param['nshots']):\n work[i]['shot_id'] = i\n \n # Distribute worklist to workers.\n fgi = c.map(fwi_gradient_i, work, retries=1)\n \n # Perform reduction.\n fg = c.submit(sum, fgi).result()\n \n # L-BFGS in scipy expects a flat array in 64-bit floats.\n return fg.f, fg.g.flatten().astype(np.float64)",
"FWI with L-BFGS-B\nEquipped with a function to calculate the functional and gradient, we are finally ready to define the optimization function.",
"from scipy import optimize\n\n# Many optimization methods in scipy.optimize.minimize accept a callback\n# function that can operate on the solution after every iteration. Here\n# we use this to monitor the true relative solution error.\nrelative_error = []\ndef fwi_callbacks(x): \n # Calculate true relative error\n true_vp = get_true_model().vp.data[param['nbl']:-param['nbl'], param['nbl']:-param['nbl']]\n true_m = 1.0 / (true_vp.reshape(-1).astype(np.float64))**2\n relative_error.append(np.linalg.norm((x-true_m)/true_m))\n\n# FWI with L-BFGS\nftol = 0.1\nmaxiter = 5\n\ndef fwi(model, param, ftol=ftol, maxiter=maxiter):\n # Initial guess\n v0 = model.vp.data[param['nbl']:-param['nbl'], param['nbl']:-param['nbl']]\n m0 = 1.0 / (v0.reshape(-1).astype(np.float64))**2\n \n # Define bounding box constraints on the solution.\n vmin = 1.4 # do not allow velocities slower than water\n vmax = 4.0\n bounds = [(1.0/vmax**2, 1.0/vmin**2) for _ in range(np.prod(model.shape))] # in [s^2/km^2]\n \n result = optimize.minimize(fwi_gradient,\n m0, args=(param, ), method='L-BFGS-B', jac=True,\n bounds=bounds, callback=fwi_callbacks,\n options={'ftol':ftol,\n 'maxiter':maxiter,\n 'disp':True})\n\n return result",
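The `scipy.optimize.minimize` calling pattern used above (a function returning the pair `(f, g)` with `jac=True`, box `bounds`, and `method='L-BFGS-B'`) can be sanity-checked on a toy quadratic, independent of any seismic machinery; the target point `1.0` here is arbitrary:

```python
import numpy as np
from scipy import optimize

# Toy objective: f(m) = 0.5*||m - 1||^2, with its analytic gradient.
# Returning (f, g) together and passing jac=True mirrors fwi_gradient.
def fun(m):
    r = m - 1.0
    return 0.5 * np.dot(r, r), r

m0 = np.zeros(4)                       # initial guess
bounds = [(0.0, 2.0)] * 4              # box constraints, as with vmin/vmax
res = optimize.minimize(fun, m0, method='L-BFGS-B', jac=True, bounds=bounds)
print(res.x)                           # all entries close to 1.0
```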
"We now apply our FWI function and have a look at the result.",
"#NBVAL_IGNORE_OUTPUT\n\nmodel0 = get_initial_model()\n\n# Baby steps\nresult = fwi(model0, param)\n\n# Print out results of optimizer.\nprint(result)\n\n#NBVAL_SKIP\n\n# Plot FWI result\nfrom examples.seismic import plot_image\n\nslices = tuple(slice(param['nbl'],-param['nbl']) for _ in range(2))\nvp = 1.0/np.sqrt(result['x'].reshape(true_model.shape))\nplot_image(true_model.vp.data[slices], vmin=2.4, vmax=2.8, cmap=\"cividis\")\nplot_image(vp, vmin=2.4, vmax=2.8, cmap=\"cividis\")\n\n#NBVAL_SKIP\nimport matplotlib.pyplot as plt\n\n# Plot model error\nplt.plot(range(1, maxiter+1), relative_error); plt.xlabel('Iteration number'); plt.ylabel('L2-model error')\nplt.show()",
"As can be observed in the last figures, the results we obtain are exactly the same as the ones obtained in 04_dask.ipynb"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
g2214370/bigdata_wilson
|
model/ex02 data dowload 0804.ipynb
|
gpl-3.0
|
[
"import pprint\nimport json\nimport urllib\nimport pandas as pd\nimport random\nimport numpy as np",
"cnv2utf8 (the Chinese parts of the URL must be URL-encoded as UTF-8)",
"def cnv2utf8(mstr):\n #print mstr\n #print urllib.quote(mstr.encode(u\"utf8\"))\n return urllib.quote(mstr.encode(u\"utf8\"))\n",
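For reference, the notebook above targets Python 2 (`urllib.quote`, `print` statements). A hedged Python 3 equivalent of `cnv2utf8`: `quote` moved to `urllib.parse`, and `str` is already Unicode, so no explicit `.encode` is needed:

```python
from urllib.parse import quote

def cnv2utf8(mstr):
    # quote() percent-encodes using UTF-8 by default in Python 3
    return quote(mstr)

print(cnv2utf8("台北二"))   # %E5%8F%B0%E5%8C%97%E4%BA%8C
```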
"Helper code for displaying Chinese text inside JSON output",
"class MyPrettyPrinter(pprint.PrettyPrinter):\n def format(self, object, context, maxlevels, level):\n if isinstance(object, unicode):\n return (object.encode('utf8'), True, False)\n return pprint.PrettyPrinter.format(self, object, context, maxlevels, level)\ndef printJson(aObj):\n MyPrettyPrinter().pprint(aObj)",
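In Python 3 a custom `MyPrettyPrinter` is no longer necessary for this purpose: `json.dumps(..., ensure_ascii=False)` already emits non-ASCII characters directly. A small sketch:

```python
import json

data = {"Market": "台北二", "Crop": "椰子"}
print(json.dumps(data, ensure_ascii=False))  # Chinese characters shown as-is
print(json.dumps(data))                      # default escapes to \uXXXX
```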
"Takes six arguments and returns a JSON object:\n\nlocation\nproduct\nnumber of top records to return\nnumber of records to skip\ntransaction end date\ntransaction start date",
"def getData(location, product, top, skip, EndDate, StartDate):\n    \n    ## Variable declarations\n    # url is the API endpoint\n    # ahash maps each placeholder to its replacement value\n    url  = u\"http://m.coa.gov.tw/OpenData/FarmTransData.aspx?$top={top}&$skip={skip}&Market={Market}&Crop={Crop}&EndDate={EndDate}&StartDate={StartDate}\"\n    ahash={\n        u\"{top}\"      :top,\n        u\"{skip}\"     :skip,\n        u\"{Market}\"   :cnv2utf8(location),\n        u\"{Crop}\"     :cnv2utf8(product),\n        u\"{EndDate}\"  :EndDate,\n        u\"{StartDate}\":StartDate,\n    }\n    \n    ## Replace each placeholder key in the URL with its value from ahash\n    for abc in ahash:\n        url=url.replace(abc,ahash[abc])\n    #print url\n\n    # Fetch the data from the API\n    rsps = urllib.urlopen( url.encode(u\"utf8\") )\n    \n    np.random.seed(1337)    \n    alist = [1, 2, 3, 4, 5]\n\n    for x in alist:\n        if int(np.random.random()*10)>7 :\n            print \"X\"\n        else:\n            print x\n    return json.loads(rsps.read())\n    \n\npd.read_json(json.dumps(getData(u\"\", u\"本島萵苣\", u\"10000\", u\"\",u\"105.06.30\",u\"103.05.01\"))).to_csv(u\"123.csv\",encoding='utf-8')",
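The placeholder-replacement loop in `getData` can be expressed more directly with `str.format`; a hedged Python 3 sketch (the query parameters mirror the call above, and no request is actually sent):

```python
from urllib.parse import quote

# Build the same query URL with str.format instead of looping over a
# hash of {placeholder: value} replacements.
url_tmpl = ("http://m.coa.gov.tw/OpenData/FarmTransData.aspx"
            "?$top={top}&$skip={skip}&Market={market}&Crop={crop}"
            "&EndDate={end}&StartDate={start}")
url = url_tmpl.format(top="100", skip="0",
                      market=quote("台北二"), crop=quote("椰子"),
                      end="105.06.30", start="103.05.01")
print(url)
```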
"url = u\"http://m.coa.gov.tw/OpenData/FarmTransData.aspx?$top={top}&$skip={skip}&Market={Market}&Crop={Crop}\"\nahash={\n    u\"{top}\"   :u\"100\",\n    u\"{skip}\"  :u\"0\",\n    u\"{Market}\":cnv2utf8(u\"台北二\"),\n    u\"{Crop}\"  :cnv2utf8(u\"椰子\"),\n}\n\n## Replace each placeholder key in the URL with its value from ahash\n\nfor abc in ahash:\n    url=url.replace(abc,ahash[abc])\n#print url\n\n# Fetch the data from the API\n\nrsps = urllib.urlopen( url.encode(u\"utf8\") )\ndata = json.loads(rsps.read())\n\nprintJson(data)",
"#url = u\"http://m.coa.gov.tw/OpenData/FarmTransData.aspx?$top={top}&$skip={skip}&filter={filter}\"\n\n#ahash={u\"99110001\":u\"August\",\n# u\"99110002\":u\"vicky\"}\n\n# ahash[u\"99110001\"]\n\n# ahash={\n u\"{top}\":u\"20\",\n u\"{skip}\":u\"100\",\n u\"{filter}\":u\"Market=台北二&Crop=椰子\",\n# }\n\n# ahash\n\n#ahash\n\n# for abc in ahash:\n print abc,ahash[abc]\n\n#url.replace(u\"gov\",u\"com\")\n\n#urllib.quote(u\"椰子\".encode(u\"utf8\"))\n\n#ahash={\n u\"{top}\":u\"20\",\n u\"{skip}\":u\"100\",\n u\"{filter}\":urllib.quote(u\"Market=台北二&Crop=椰子\".encode(u\"utf8\")),\n#}"
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
4dsolutions/Python5
|
QuadraysJN.ipynb
|
mit
|
[
"STEM for Philosophers series<br/>Oregon Curriculum Network\nQuadray Coordinates: Getting Started\nWhat are quadray coordinates and how are they used in philosophy? For more background on this question, read my Thinking Outside the Box: Language Games in Mathematics, on Medium\nLet's start out with a stripped down XYZ Vector class that works pretty much as expected, allowing vector addition and subtraction, and multiplication by a scalar.",
"from math import radians, degrees, cos, sin, acos\nimport math\nfrom operator import add, sub, mul, neg\nfrom collections import namedtuple\n\nXYZ = namedtuple(\"xyz_vector\", \"x y z\")\nIVM = namedtuple(\"ivm_vector\", \"a b c d\")\n\nroot2 = 2.0**0.5\n\n    \nclass Qvector:\n    \"\"\"Quadray vector\"\"\"\n\n    def __init__(self, arg):\n        \"\"\"Initialize a vector at an (x,y,z)\"\"\"\n        self.coords = self.norm(arg)\n\n    def __repr__(self):\n        return repr(self.coords)\n\n    def norm(self, arg):\n        \"\"\"Normalize such that 4-tuple all non-negative members.\"\"\"\n        return IVM(*tuple(map(sub, arg, [min(arg)] * 4)))  \n    \n    def norm0(self):\n        \"\"\"Normalize such that sum of 4-tuple members = 0\"\"\"\n        q = self.coords\n        return IVM(*tuple(map(sub, q, [sum(q)/4.0] * 4)))  \n\n    @property\n    def a(self):\n        return self.coords.a\n\n    @property\n    def b(self):\n        return self.coords.b\n\n    @property\n    def c(self):\n        return self.coords.c\n\n    @property\n    def d(self):\n        return self.coords.d\n    \n    def __eq__(self, other):\n        return self.coords == other.coords\n    \n    def __lt__(self, other):\n        return self.coords < other.coords\n\n    def __gt__(self, other):\n        return self.coords > other.coords\n    \n    def __hash__(self):\n        return hash(self.coords)\n    \n    def __mul__(self, scalar):\n        \"\"\"Return vector (self) * scalar.\"\"\"\n        newcoords = [scalar * dim for dim in self.coords]\n        return Qvector(newcoords)\n\n    __rmul__ = __mul__ # allow scalar * vector\n\n    def __truediv__(self,scalar):\n        \"\"\"Return vector (self) * 1/scalar\"\"\"        \n        return self.__mul__(1.0/scalar)\n    \n    def __add__(self,v1):\n        \"\"\"Add a vector to this vector, return a vector\"\"\"  \n        newcoords = tuple(map(add, v1.coords, self.coords))\n        return Qvector(newcoords)\n    \n    def __sub__(self,v1):\n        \"\"\"Subtract vector from this vector, return a vector\"\"\"\n        return self.__add__(-v1)\n    \n    def __neg__(self):      \n        \"\"\"Return a vector, the negative of this one.\"\"\"\n        return Qvector(tuple(map(neg, self.coords)))\n    \n    def dot(self,v1):\n        \"\"\"Return the dot product of self with another vector.\n        return a scalar\n        \n        >>> s1 = a.dot(b)/(a.length() * b.length())\n        >>> degrees(acos(s1))\n        109.47122063449069\n        \"\"\"\n        return 0.5 * sum(map(mul, self.norm0(), v1.norm0()))\n\n    def length(self):\n        \"\"\"Return this vector's length\"\"\"\n        return self.dot(self) ** 0.5\n    \n    def cross(self,v1):\n        \"\"\"Return the cross product of self with another vector.\n        return a Qvector\"\"\"\n        A = Qvector((1,0,0,0))\n        B = Qvector((0,1,0,0))\n        C = Qvector((0,0,1,0))\n        D = Qvector((0,0,0,1))\n        a1,b1,c1,d1 = v1.coords\n        a2,b2,c2,d2 = self.coords\n        k= (2.0**0.5)/4.0\n        sum =   (A*c1*d2 - A*d1*c2 - A*b1*d2 + A*b1*c2\n               + A*b2*d1 - A*b2*c1 - B*c1*d2 + B*d1*c2 \n               + b1*C*d2 - b1*D*c2 - b2*C*d1 + b2*D*c1 \n               + a1*B*d2 - a1*B*c2 - a1*C*d2 + a1*D*c2\n               + a1*b2*C - a1*b2*D - a2*B*d1 + a2*B*c1 \n               + a2*C*d1 - a2*D*c1 - a2*b1*C + a2*b1*D)\n        return k*sum\n\n    def angle(self, v1):\n        return self.xyz().angle(v1.xyz())\n    \n    def xyz(self):\n        a,b,c,d     =  self.coords\n        k           =  0.5/root2\n        xyz         = (k * (a - b - c + d),\n                       k * (a - b + c - d),\n                       k * (a + b - c - d))\n        return Vector(xyz)",
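Two numbers that appear in the `dot` docstring and in the discussion further on can be spot-checked directly from the `xyz()` mapping (scale factor k = 0.5/√2): the central angle between any two quadrays is ≈ 109.47°, and each quadray has length √6/4.

```python
from math import acos, degrees, sqrt

# xyz images of two quadray basis vectors, per the xyz() method above
k = 0.5 / sqrt(2)
a = (k, k, k)       # image of quadray (1,0,0,0)
b = (-k, -k, k)     # image of quadray (0,1,0,0)

norm = lambda v: sqrt(sum(x * x for x in v))
dot = sum(x * y for x, y in zip(a, b))

angle = degrees(acos(dot / (norm(a) * norm(b))))
print(round(angle, 5))                     # 109.47122, the caltrop's central angle
print(abs(norm(a) - sqrt(6) / 4) < 1e-12)  # each quadray has length sqrt(6)/4
```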
"Converting to xyz will not work yet, as the Vector class is not yet defined. That's what's coming.",
"class Vector:\n\n def __init__(self, arg):\n \"\"\"Initialize a vector at an (x,y,z)\"\"\"\n self.xyz = XYZ(*map(float,arg))\n\n def __repr__(self):\n return repr(self.xyz)\n \n @property\n def x(self):\n return self.xyz.x\n\n @property\n def y(self):\n return self.xyz.y\n\n @property\n def z(self):\n return self.xyz.z\n \n def __mul__(self, scalar):\n \"\"\"Return vector (self) * scalar.\"\"\"\n newcoords = [scalar * dim for dim in self.xyz]\n return type(self)(newcoords)\n\n __rmul__ = __mul__ # allow scalar * vector\n\n def __truediv__(self,scalar):\n \"\"\"Return vector (self) * 1/scalar\"\"\" \n return self.__mul__(1.0/scalar)\n \n def __add__(self,v1):\n \"\"\"Add a vector to this vector, return a vector\"\"\" \n newcoords = map(add, v1.xyz, self.xyz)\n return type(self)(newcoords)\n \n def __sub__(self,v1):\n \"\"\"Subtract vector from this vector, return a vector\"\"\"\n return self.__add__(-v1)\n \n def __neg__(self): \n \"\"\"Return a vector, the negative of this one.\"\"\"\n return type(self)(tuple(map(neg, self.xyz)))\n\n def unit(self):\n return self.__mul__(1.0/self.length())\n\n def dot(self,v1):\n \"\"\"Return scalar dot product of this with another vector.\"\"\"\n return sum(map(mul , v1.xyz, self.xyz))\n\n def cross(self,v1):\n \"\"\"Return the vector cross product of this with another vector\"\"\"\n newcoords = (self.y * v1.z - self.z * v1.y, \n self.z * v1.x - self.x * v1.z,\n self.x * v1.y - self.y * v1.x )\n return type(self)(newcoords)\n \n def length(self):\n \"\"\"Return this vector's length\"\"\"\n return self.dot(self) ** 0.5\n\n def quadray(self):\n \"\"\"return (a, b, c, d) quadray based on current (x, y, z)\"\"\"\n x, y, z = self.xyz\n k = 2/root2\n a = k * ((x >= 0)* ( x) + (y >= 0) * ( y) + (z >= 0) * ( z))\n b = k * ((x < 0)* (-x) + (y < 0) * (-y) + (z >= 0) * ( z))\n c = k * ((x < 0)* (-x) + (y >= 0) * ( y) + (z < 0) * (-z))\n d = k * ((x >= 0)* ( x) + (y < 0) * (-y) + (z < 0) * (-z))\n return Qvector((a, b, c, d))",
"At the end is a method for outputting in quadray coordinates. \nSome design decisions were taken, conventions followed, in how the XYZ and IVM systems were overlaid. \nLet's not worry about that for now and just imagine a cube with edges sqrt(2)/2, one corner in each octant. The face diagonals will have length 1 in this case.\nFor example, in the all positive octant (+ + +) we would have a point at (sqrt(2)/4, sqrt(2)/4, sqrt(2)/4).",
"octant0 = Vector((root2/4, root2/4, root2/4))\nprint(octant0.xyz)\nq0 = octant0.quadray()\nprint(q0)\n\nq0.length()",
"This might seem strange already. What appears to be a unit vector has some irrational length. The cube below is flipped over somehow, but gives the idea. Think of (1,0,0,0) as being in the all positive octant (+, +, +) of XYZ.\nAnother issue with this cube, in the context of the surrounding computations, is all edges are doubled, with cube face diagonals set at 2. Divide every linear dimension through by 2 to get the computations here.\nThe body diagonal of a cube with unit face diagonals is $\\sqrt{6}/2$, meaning each quadray, which goes from the cube center to a corner, has a length of $\\sqrt{6}/4$. That's the length of q0 shown above.\n\n<div style=\"text-align: center;\">\nFig 1: Divide Through by 2 for Dimensions\n</div>",
"octant1 = Vector((-root2/4, root2/4, root2/4)) # neighboring octant\ndiff = octant0 - octant1\nprint(\"diff.quadray()\", diff.quadray())\nprint(\"Length between adjacent corners: \", diff.length())",
"This confirms the cube has the expected edge length. This is not the length between two quadray tips, which is $1.0$, but between two adjacent corners of our cube, so $\\sqrt{2}/2$.",
"diff.quadray().length()\n\na = Qvector((1,0,0,0))\nb = Qvector((0,1,0,0))\n(a-b).length()",
"Half the cube's vertices will align with the four spokes of the caltrop (in blue). These correspond to the vertices of an embedded tetrahedron of edges 2 (in red), or edges 1 if measuring in D units.\n\nNote that the Qvector class comes with two ways to express a Qray in canonical lowest terms. One way preserves the non-negative coordinate address for every point. The other way assures that the 4-tuple coordinates sum to zero. I'm using the former for all representations (repr) whereas the latter gets used in various internal computations.",
"# add up three quadrays and negate their sum, to get the other Qray\na = Qvector((1,0,0,0))\nc = Qvector((0,0,1,0))\nd = Qvector((0,0,0,1))\nv_sum = -(a + c + d)\nprint(\"Canonical representation:\", v_sum)\nprint(\"Alternative expression: \", v_sum.norm0())\nprint(\"v_sum length: \", v_sum.length())",
"Related reading:<br />\n* Computing Volumes\n* Polyhedrons 101"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
NREL/bifacial_radiance
|
docs/tutorials/17 - AgriPV - Jack Solar Site Modeling.ipynb
|
bsd-3-clause
|
[
"17 - AgriPV - Jack Solar Site Modeling\nModeling the Jack Solar AgriPV site in Longmont, CO, for the crop season May to September. The site has two configurations:\n<b> Configuration A: </b>\n* Under 6 ft panels : 1.8288m\n* Hub height: 6 ft : 1.8288m  \nConfiguration B:\n* 8 ft panels : 2.4384m\n* Hub height 8 ft : 2.4384m\nOther general parameters:\n* Module Size: 3ft x 6ft (portrait mode)\n* Row-to-row spacing: 17 ft --> 5.1816 m\n* Torquetube: square, diam 15 cm, zgap = 0\n* Albedo = green grass\nSteps in this Journal:\n<ol>\n    <li> <a href='#step1'> Load Bifacial Radiance and other essential packages</a> </li>\n    <li> <a href='#step2'> Define all the system variables </a> </li>\n    <li> <a href='#step3'> Build Scene for a pretty Image </a> </li>\n</ol>\n\nMore details\nThere are three methods to perform the following analysis: \n    <ul><li>A. Hourly with Fixed tilt, getTrackerAngle to update tilt of tracker </li>\n        <li>B. Hourly with gendaylit1axis using the tracking dictionary </li>\n        <li>C. Cumulatively with gencumsky1axis </li>\n    </ul>\nThe analysis itself is performed on the HPC with method A, and results are compared to GHI (equations below). The code below shows how to build the geometry and view it for accuracy, how to evaluate monthly GHI, and how to model the system with gencumsky1axis, which is better suited for non-HPC environments. \n\n<a id='step1'></a>\n1. Load Bifacial Radiance and other essential packages",
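The metric values listed above follow from the exact foot-to-meter definition (1 ft = 0.3048 m); a quick check, including the resulting ground coverage ratio for the 2 m module:

```python
FT_TO_M = 0.3048          # exact, by definition

height_a = 6 * FT_TO_M    # Configuration A: 1.8288 m
height_b = 8 * FT_TO_M    # Configuration B: 2.4384 m
pitch = 17 * FT_TO_M      # row-to-row spacing: 5.1816 m
gcr = 2.0 / pitch         # module y-dimension (2 m) over pitch

print(round(height_a, 4), round(height_b, 4), round(pitch, 4), round(gcr, 3))
```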
"import bifacial_radiance\nimport numpy as np\nimport os # this operative system to do the relative-path testfolder for this example.\nimport pprint # We will be pretty-printing the trackerdictionary throughout to show its structure.\nfrom pathlib import Path\nimport pandas as pd",
"<a id='step2'></a>\n2. Define all the system variables",
"testfolder = str(Path().resolve().parent.parent / 'bifacial_radiance' / 'Tutorial_17')\nif not os.path.exists(testfolder):\n    os.makedirs(testfolder)\n    \ntimestamp = 4020 # Noon, June 17th.\nsimulationName = 'tutorial_17'    # Optionally adding a simulation name when defining RadianceObj\n\n#Location\nlat = 40.1217  # Given for the project site in Colorado\nlon = -105.1310  # Given for the project site in Colorado\n\n# MakeModule Parameters\nmoduletype='test-module'\nnumpanels = 1  # This site has 1 module in Y-direction\nx = 1  \ny = 2\n#xgap = 0.15 # Leaving 15 centimeters between modules on x direction\n#ygap = 0.10 # Leaving 10 centimeters between modules on y direction\nzgap = 0 # no gap to torquetube.\nsensorsy = 6  # this will give 6 sensors per module in y-direction\nsensorsx = 3   # this will give 3 sensors per module in x-direction\n\ntorquetube = True\naxisofrotationTorqueTube = True \ndiameter = 0.15  # 15 cm diameter for the torquetube\ntubetype = 'square'    # Use one of the keywords accepted by the documentation\nmaterial = 'black'   # Torque tube of this material (0% reflectivity)\n\n# Scene variables\nnMods = 20\nnRows = 7\nhub_height = 1.8 # meters\npitch = 5.1816 # meters      # Pitch is the known parameter \nalbedo = 0.2  #'Grass'     # ground albedo\ngcr = y/pitch\n\ncumulativesky = False\nlimit_angle = 60 # tracker rotation limit angle\nangledelta = 0.01 # we will be doing hourly simulation, we want the angle to be as close to real tracking as possible.\nbacktrack = True \n\ntest_folder_fmt = 'Hour_{}'  ",
"<a id='step3'></a>\n3. Build Scene for a pretty Image",
"idx = 272\n\ntest_folderinner = os.path.join(testfolder, test_folder_fmt.format(f'{idx:04}'))\nif not os.path.exists(test_folderinner):\n os.makedirs(test_folderinner)\n\nrad_obj = bifacial_radiance.RadianceObj(simulationName,path = test_folderinner) # Create a RadianceObj 'object'\nrad_obj.setGround(albedo) \nepwfile = rad_obj.getEPW(lat,lon) \nmetdata = rad_obj.readWeatherFile(epwfile, label='center', coerce_year=2021)\nsolpos = rad_obj.metdata.solpos.iloc[idx]\nzen = float(solpos.zenith)\nazm = float(solpos.azimuth) - 180\ndni = rad_obj.metdata.dni[idx]\ndhi = rad_obj.metdata.dhi[idx]\nrad_obj.gendaylit(idx)\n# rad_obj.gendaylit2manual(dni, dhi, 90 - zen, azm)\n#print(rad_obj.metdata.datetime[idx])\ntilt = round(rad_obj.getSingleTimestampTrackerAngle(rad_obj.metdata, idx, gcr, limit_angle=65),1)\nsceneDict = {'pitch': pitch, 'tilt': tilt, 'azimuth': 90, 'hub_height':hub_height, 'nMods':nMods, 'nRows': nRows} \nscene = rad_obj.makeScene(module=moduletype,sceneDict=sceneDict)\noctfile = rad_obj.makeOct() ",
"The scene generated can be viewed by navigating in a terminal to the testfolder and typing\n\nrvu -vf views\\front.vp -e .0265652 -vp 2 -21 2.5 -vd 0 1 0 tutorial_17.oct\n\nOR uncomment the ! line below to run rvu from the Jupyter notebook instead of your terminal.",
"\n## Uncomment the ! line below to run rvu from the Jupyter notebook instead of your terminal.\n## The simulation will pause until you close the rvu window.\n\n#!rvu -vf views\\front.vp -e .0265652 -vp 2 -21 2.5 -vd 0 1 0 tutorial_17.oct\n",
"<a id='step4'></a>\nGHI Calculations\nFrom Weather File",
"# BOULDER\n# Simple method: since I know the indexes where each month starts and ends, I collect the monthly values this way.\n\n# In the 8760 TMY, these were the indexes:\nstarts = [2881, 3626, 4346, 5090, 5835]\nends = [3621, 4341, 5085, 5829, 6550]\n\nstarts = [metdata.datetime.index(pd.to_datetime('2021-05-01 6:0:0 -7')),\n          metdata.datetime.index(pd.to_datetime('2021-06-01 6:0:0 -7')),\n          metdata.datetime.index(pd.to_datetime('2021-07-01 6:0:0 -7')),\n          metdata.datetime.index(pd.to_datetime('2021-08-01 6:0:0 -7')),\n          metdata.datetime.index(pd.to_datetime('2021-09-01 6:0:0 -7'))]\n\nends = [metdata.datetime.index(pd.to_datetime('2021-05-31 18:0:0 -7')),\n        metdata.datetime.index(pd.to_datetime('2021-06-30 18:0:0 -7')),\n        metdata.datetime.index(pd.to_datetime('2021-07-31 18:0:0 -7')),\n        metdata.datetime.index(pd.to_datetime('2021-08-31 18:0:0 -7')),\n        metdata.datetime.index(pd.to_datetime('2021-09-30 18:0:0 -7'))]\n\nghi_Boulder = []\nfor ii in range(0, len(starts)):\n    start = starts[ii]\n    end = ends[ii]\n    ghi_Boulder.append(metdata.ghi[start:end].sum())\nprint(\" GHI Boulder Monthly May to September Wh/m2:\", ghi_Boulder)\n",
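An alternative to hand-picking start/end indexes is to let pandas do the monthly aggregation; a hedged sketch on a synthetic hourly series (1 Wh/m2 every hour, so May sums to 744):

```python
import numpy as np
import pandas as pd

# Stand-in hourly GHI series for May through September
idx = pd.date_range("2021-05-01", "2021-09-30 23:00", freq="h")
ghi = pd.Series(np.ones(len(idx)), index=idx)

# One sum per month start ("MS"), no manual index bookkeeping needed
monthly = ghi.resample("MS").sum()
print(monthly)
```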
"With raytrace",
"simulationName = 'EMPTYFIELD_MAY'\nstarttime = pd.to_datetime('2021-05-01 6:0:0')\nendtime = pd.to_datetime('2021-05-31 18:0:0')\nrad_obj = bifacial_radiance.RadianceObj(simulationName) \nrad_obj.setGround(albedo) \nmetdata = rad_obj.readWeatherFile(epwfile, label='center', coerce_year=2021, starttime=starttime, endtime=endtime)\nrad_obj.genCumSky()\n#print(rad_obj.metdata.datetime[idx])\nsceneDict = {'pitch': pitch, 'tilt': 0, 'azimuth': 90, 'hub_height':-0.2, 'nMods':1, 'nRows': 1} \nscene = rad_obj.makeScene(module=moduletype,sceneDict=sceneDict)\noctfile = rad_obj.makeOct() \nanalysis = bifacial_radiance.AnalysisObj()\nfrontscan, backscan = analysis.moduleAnalysis(scene, sensorsy=1)\nfrontscan['zstart'] = 0.5\nfrontdict, backdict = analysis.analysis(octfile = octfile, name='FIELDTotal', frontscan=frontscan, backscan=backscan)\nprint(\"FIELD TOTAL MAY:\", analysis.Wm2Front[0])",
"Next steps: raytrace every hour of the month on the HPC -- check the HPC scripts for Jack Solar"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
DistrictDataLabs/yellowbrick
|
examples/pbwitt/testing.ipynb
|
apache-2.0
|
[
"Using Yellowbrick for Machine Learning Visualizations on Facebook Data\nPaul Witt\nThe dataset below was provided to the UCI Machine Learning Repository by researchers who used Neural Networks and Decision Trees to predict how many comments a given Facebook post would generate. \nThere are five variants of the dataset. This notebook only uses the first. \nThe full paper can be found here: \nhttp://uksim.info/uksim2015/data/8713a015.pdf\nThe primary purpose of this notebook is to test Yellowbrick.\nAttribute Information:\nAll features are integers or float values. \n1 \nPage Popularity/likes \nDecimal Encoding \nPage feature \nDefines the popularity or support for the source of the document. \n2 \nPage Checkins’s \nDecimal Encoding \nPage feature \nDescribes how many individuals have visited this place so far. This feature is only associated with places, e.g. some institution, theater, etc. \n3 \nPage talking about \nDecimal Encoding \nPage feature \nDefines the daily interest of individuals towards the source of the document/post, i.e. the people who actually come back to the page after liking it. This includes activities such as comments, likes on a post, shares, etc. by visitors to the page.\n4 \nPage Category \nValue Encoding \nPage feature \nDefines the category of the source of the document, e.g. place, institution, brand, etc. \n5 - 29 \nDerived \nDecimal Encoding \nDerived feature \nThese features are aggregated by page, by calculating the min, max, average, median and standard deviation of essential features. \n30 \nCC1 \nDecimal Encoding \nEssential feature \nThe total number of comments before the selected base date/time. \n31 \nCC2 \nDecimal Encoding \nEssential feature \nThe number of comments in the last 24 hours, relative to base date/time. \n32 \nCC3 \nDecimal Encoding \nEssential feature \nThe number of comments in the last 48 to last 24 hours relative to base date/time. \n33 \nCC4 \nDecimal Encoding \nEssential feature \nThe number of comments in the first 24 hours after the publication of the post but before base date/time. \n34 \nCC5 \nDecimal Encoding \nEssential feature \nThe difference between CC2 and CC3. \n35 \nBase time \nDecimal(0-71) Encoding \nOther feature \nSelected time in order to simulate the scenario. \n36 \nPost length \nDecimal Encoding \nOther feature \nCharacter count in the post. \n37 \nPost Share Count \nDecimal Encoding \nOther feature \nThis feature counts the number of shares of the post, i.e. how many people have shared this post to their timeline. \n38 \nPost Promotion Status \nBinary Encoding \nOther feature \nTo reach more people with posts in News Feed, individuals promote their posts, and this feature tells whether the post is promoted (1) or not (0). \n39 \nH Local \nDecimal(0-23) Encoding \nOther feature \nThis describes the H hrs for which we have the target variable/comments received. \n40-46 \nPost published weekday \nBinary Encoding \nWeekdays feature \nThis represents the day (Sunday...Saturday) on which the post was published. \n47-53 \nBase DateTime weekday \nBinary Encoding \nWeekdays feature \nThis represents the day (Sunday...Saturday) of the selected base Date/Time. \n54 \nTarget Variable \nDecimal \nTarget \nThe number of comments in the next H hrs (H is given in Feature 39).\nData Exploration",
"%matplotlib inline\n\nimport os\nimport json\nimport time\nimport pickle\nimport requests\n\n\nimport numpy as np\nimport pandas as pd\nimport yellowbrick as yb \nimport matplotlib.pyplot as plt\n\n\ndf=pd.read_csv(\"/Users/pwitt/Documents/machine-learning/examples/pbwitt/Dataset/Training/Features_Variant_1.csv\")\n\n# Fetch the data if required\nDATA = df\nprint('Data Shape ' + str(df.shape))\n\nprint(df.dtypes)\n\nFEATURES = [\n \"Page Popularity/likes\",\n \"Page Checkins’s\",\n \"Page talking about\", \n \"Page Category\",\n \"Derived5\", \n \"Derived6\", \n \"Derived7\", \n \"Derived8\", \n \"Derived9\", \n \"Derived10\", \n \"Derived11\", \n \"Derived12\", \n \"Derived13\", \n \"Derived14\", \n \"Derived15\", \n \"Derived16\", \n \"Derived17\", \n \"Derived18\", \n \"Derived19\", \n \"Derived20\", \n \"Derived21\", \n \"Derived22\", \n \"Derived23\", \n \"Derived24\", \n \"Derived25\", \n \"Derived26\", \n \"Derived27\", \n \"Derived28\", \n \"Derived29\",\n \"CC1\",\n \"CC2\",\n \"CC3\",\n 'CC4',\n 'CC5',\n \"Base time\",\n \"Post length\",\n \"Post Share Count\",\n \"Post Promotion Status\",\n \"H Local\",\n \"Post published weekday-Sun\",\n \"Post published weekday-Mon\",\n \"Post published weekday-Tues\",\n \"Post published weekday-Weds\",\n \"Post published weekday-Thurs\",\n \"Post published weekday-Fri\",\n \"Post published weekday-Sat\",\n \"Base DateTime weekday-Sun\",\n \"Base DateTime weekday-Mon\",\n \"Base DateTime weekday-Tues\",\n \"Base DateTime weekday-Wed\",\n \"Base DateTime weekday-Thurs\",\n \"Base DateTime weekday-Fri\",\n \"Base DateTime weekday-Sat\",\n \"Target_Variable\"\n \n]\n\n# Read the data into a DataFrame\ndf.columns=FEATURES\ndf.head()\n#Note: Dataset is sorted. There is variation in the distributions. \n\n# Determine the shape of the data\nprint(\"{} instances with {} columns\\n\".format(*df.shape))\n",
"Test Yellowbrick Covariance Ranking",
"from yellowbrick.features.rankd import Rank2D \nfrom yellowbrick.features.radviz import RadViz \nfrom yellowbrick.features.pcoords import ParallelCoordinates\n\n# Specify the features of interest\n# All features were used for testing purposes\nfeatures = FEATURES\n\n# Extract the numpy arrays from the data frame \nX = df[features].as_matrix()\ny = df[\"Base time\"].as_matrix()\n\n# Instantiate the visualizer with the Covariance ranking algorithm \nvisualizer = Rank2D(features=features, algorithm='covariance')\n\nvisualizer.fit(X, y)                # Fit the data to the visualizer\nvisualizer.transform(X)             # Transform the data\nvisualizer.show()                   # Draw/show the data\n\n# Instantiate the visualizer with the Pearson ranking algorithm \nvisualizer = Rank2D(features=features, algorithm='pearson')\n\nvisualizer.fit(X, y)                # Fit the data to the visualizer\nvisualizer.transform(X)             # Transform the data\nvisualizer.show()                   # Draw/show the data",
"Data Extraction\nCreate a bunch object to hold the data loaded from disk. \n\ndata: array of shape n_samples * n_features\ntarget: array of length n_samples\nfeature_names: names of the features\nfilenames: names of the files that were loaded\nDESCR: contents of the readme",
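The `Bunch` used below (imported from `sklearn.datasets.base`, a path deprecated in newer scikit-learn in favor of `sklearn.utils.Bunch`) is essentially a dict with attribute access; a minimal stand-in sketch:

```python
class Bunch(dict):
    """Dict whose keys are also readable as attributes."""
    def __getattr__(self, key):
        try:
            return self[key]
        except KeyError as e:
            raise AttributeError(key) from e

# Toy values only; the real bunch holds the arrays described above
ds = Bunch(data=[[1, 2]], target=[0], feature_names=["a", "b"], DESCR="demo")
print(ds.data, ds.target, ds.feature_names)
```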
"from sklearn.datasets.base import Bunch\nDATA_DIR = os.path.abspath(os.path.join(\".\", \"..\", \"pbwitt\",\"data\")) \n # Show the contents of the data directory\nfor name in os.listdir(DATA_DIR):\n if name.startswith(\".\"): continue\n print (\"- {}\".format(name))\n\ndef load_data(root=DATA_DIR):\n filenames = {\n 'meta': os.path.join(root, 'meta.json'),\n 'rdme': os.path.join(root, 'README.md'),\n 'data': os.path.join(root, 'Features_Variant_1.csv'),\n \n }\n \n #Load the meta data from the meta json\n with open(filenames['meta'], 'r') as f:\n meta = json.load(f) \n feature_names = meta['feature_names'] \n\n # Load the description from the README. \n with open(filenames['rdme'], 'r') as f:\n DESCR = f.read()\n\n\n # Load the dataset from the data file.\n dataset = pd.read_csv(filenames['data'], header=None)\n #tranform to numpy\n data = dataset.iloc[:,0:53] \n target = dataset.iloc[:,-1] \n \n # Extract the target from the data\n data = np.array(data) \n target = np.array(target)\n\n # Create the bunch object\n return Bunch(\n data=data,\n target=target,\n filenames=filenames,\n \n feature_names=feature_names,\n DESCR=DESCR\n )\n\n# Save the dataset as a variable we can use.\ndataset = load_data()\n\nprint(dataset.data.shape)\nprint(dataset.target.shape) \n \n\nfrom yellowbrick.regressor import PredictionError, ResidualsPlot\n\nfrom sklearn import metrics\nfrom sklearn import cross_validation\nfrom sklearn.model_selection import KFold\n\nfrom sklearn.svm import SVC\nfrom sklearn.neighbors import KNeighborsClassifier\nfrom sklearn.ensemble import RandomForestClassifier\n\nfrom sklearn import linear_model\nfrom sklearn.model_selection import train_test_split\nfrom sklearn import preprocessing\nfrom sklearn.linear_model import ElasticNet, Lasso\nfrom sklearn.linear_model import Ridge, Lasso \nfrom sklearn.model_selection import KFold\n",
"Build and Score Regression Models\n\nCreate function -- add parameters for Yellowbrick target visualizations\nScore models using Mean Absolute Error, Mean Squared Error, Median Absolute Error, R2",
"def fit_and_evaluate(dataset, model, label, vis, **kwargs):\n    \"\"\"\n    Because of the Scikit-Learn API, we can create a function to\n    do all of the fit and evaluate work on our behalf!\n    \"\"\"\n    start = time.time() # Start the clock!\n    scores = {'Mean Absolute Error': [], 'Mean Squared Error': [], 'Median Absolute Error': [], 'R2': []}\n\n    for train, test in KFold(n_splits=12, shuffle=True).split(dataset.data):\n        X_train, X_test = dataset.data[train], dataset.data[test]\n        y_train, y_test = dataset.target[train], dataset.target[test]\n\n        estimator = model(**kwargs)\n        estimator.fit(X_train, y_train)\n\n        expected = y_test\n        predicted = estimator.predict(X_test)\n\n        # For the visualizers below: return the first train/test split instead of scoring\n        if vis in ('Lasso_vis', 'Ridge_vis'):\n            return [X_train, y_train, X_test, y_test]\n\n        scores['Mean Absolute Error'].append(metrics.mean_absolute_error(expected, predicted))\n        scores['Mean Squared Error'].append(metrics.mean_squared_error(expected, predicted))\n        scores['Median Absolute Error'].append(metrics.median_absolute_error(expected, predicted))\n        scores['R2'].append(metrics.r2_score(expected, predicted))\n\n    # Report\n    print(\"Build and Validation of {} took {:0.3f} seconds\".format(label, time.time()-start))\n    print(\"Validation scores are as follows:\\n\")\n    print(pd.DataFrame(scores).mean())\n\n    # Write official estimator to disk\n    estimator = model(**kwargs)\n    estimator.fit(dataset.data, dataset.target)\n\n    outpath = label.lower().replace(\" \", \"-\") + \".pickle\"\n    with open(outpath, 'wb') as f:\n        pickle.dump(estimator, f)\n\n    print(\"\\nFitted model written to:\\n{}\".format(os.path.abspath(outpath)))\n\nprint(\"Lasso Scores and Visualization Below: \\n\")\nfit_and_evaluate(dataset, Lasso, \"Facebook Lasso\", 'NA')\n\n# Instantiate the linear model and visualizer\nlasso = Lasso()\nvisualizer = PredictionError(lasso)\nsplits = fit_and_evaluate(dataset, Lasso, \"Facebook Lasso\", 'Lasso_vis') # X_train, y_train, X_test, y_test\nvisualizer.fit(splits[0], splits[1]) # Fit the training data to the visualizer\nvisualizer.score(splits[2], splits[3]) # Evaluate the model on the test data\ng = visualizer.show() # Draw/show the data\n\nprint(\"Ridge Scores and Target Visualization Below:\\n\")\nfit_and_evaluate(dataset, Ridge, \"Facebook Ridge\", 'NA')\n\n# Instantiate the linear model and visualizer\nridge = Ridge()\nvisualizer = ResidualsPlot(ridge)\nsplits = fit_and_evaluate(dataset, Ridge, \"Facebook Ridge\", 'Ridge_vis')\nvisualizer.fit(splits[0], splits[1]) # Fit the training data to the visualizer\nvisualizer.score(splits[2], splits[3]) # Evaluate the model on the test data\ng = visualizer.show() # Draw/show the data\n\nfit_and_evaluate(dataset, ElasticNet, \"Facebook ElasticNet\", 'NA')"
] |
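The K-fold scoring loop above originally relied on the removed `sklearn.cross_validation` module. A minimal, self-contained sketch of the same pattern with the modern `sklearn.model_selection` API (the toy data below is made up and stands in for the Facebook dataset):

```python
import numpy as np
from sklearn.model_selection import KFold
from sklearn.linear_model import Lasso
from sklearn import metrics

# Synthetic regression data standing in for dataset.data / dataset.target
rng = np.random.RandomState(0)
X = rng.rand(120, 5)
y = X @ np.arange(1.0, 6.0) + rng.normal(scale=0.1, size=120)

# KFold(n_splits=...).split(X) replaces the old KFold(n, n_folds=...)
scores = []
for train, test in KFold(n_splits=12, shuffle=True, random_state=0).split(X):
    model = Lasso(alpha=0.01)
    model.fit(X[train], y[train])
    scores.append(metrics.r2_score(y[test], model.predict(X[test])))

print(len(scores))  # one R2 score per fold
```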
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
Britefury/deep-learning-tutorial-pydata2016
|
SUPPLEMENTARY - Standardisation.ipynb
|
mit
|
[
"Dataset standardisation for images\nPlease ensure that you have downloaded and converted the CIFAR-10 dataset using fuel:\nfuel-download cifar10\nfuel-convert cifar10\nLet's start by loading the CIFAR-10 training dataset, composed of 32x32 RGB images.",
"%matplotlib inline\n\nimport numpy as np\nfrom matplotlib import pyplot as plt\nimport seaborn\nfrom mpl_toolkits.mplot3d import Axes3D\nfrom sklearn.decomposition import PCA\n\nseaborn.set_style('white')\n\n# We are using the fuel library to acquire our data.\nfrom fuel.datasets.cifar10 import CIFAR10\n\ndataset = CIFAR10(which_sets=['train'], load_in_memory=True)",
"Load the entire dataset into an array and scale from the range [0,255] to [0,1].",
"X, y = dataset.get_data(request=range(dataset.num_examples))\nX = X / 255.0",
"Get all the pixels in one long array.",
"samples_rgb = np.rollaxis(X, 1, 4).reshape((-1, 3))",
"Create a scatter plot to show the distribution of the pixel RGB values\nSelect a random subset of samples for the scatter plot.",
"indices = np.arange(samples_rgb.shape[0])\nnp.random.shuffle(indices)\nsamples_rgb_subset = samples_rgb[indices[:50000]]",
"Create a 3D plot.",
"fig = plt.figure(figsize=(12,8))\nax = fig.add_subplot(111, projection='3d')\nax.set_xlabel('R')\nax.set_ylabel('G')\nax.set_zlabel('B')\nax.scatter(samples_rgb_subset[:,0], samples_rgb_subset[:,1], samples_rgb_subset[:,2],\n marker='+', alpha=0.1, c='black')\nax.view_init(elev=45., azim=-45)",
"Ideal distribution\nThe ideal distribution for training a neural network.",
"z_ideal = np.random.normal(size=(20000,3))\n\nfig = plt.figure(figsize=(12,8))\nax = fig.add_subplot(111, projection='3d')\nax.set_xlabel('R (ideal)')\nax.set_ylabel('G (ideal)')\nax.set_zlabel('B (ideal)')\nax.scatter(z_ideal[:,0], z_ideal[:,1], z_ideal[:,2],\n marker='+', alpha=0.1, c='black')\n\n# Explicitly set the range that we are plotting to ensure uniform scale\nax.set_xlim(-3, 3)\nax.set_ylim(-3, 3)\nax.set_zlim(-3, 3)\nax.view_init(elev=45., azim=-45)",
"Standardisation\nThe normal approach to standardisation will only help a little. Let's compute the mean and standard deviation and plot them over the scatter plot to show the distribution.",
"rgb_mean = np.mean(samples_rgb, axis=0)\nrgb_std = np.std(samples_rgb, axis=0)\nprint('Mean = {0}'.format(rgb_mean))\nprint('Std-dev = {0}'.format(rgb_std))",
"There isn't much difference between the channels in terms of the mean and the standard deviation.",
"N = 20000\n\nfig = plt.figure(figsize=(12,8))\nax = fig.add_subplot(111, projection='3d')\nax.set_xlabel('R')\nax.set_ylabel('G')\nax.set_zlabel('B')\nax.scatter(samples_rgb_subset[:N,0], samples_rgb_subset[:N,1], samples_rgb_subset[:N,2],\n marker='+', alpha=0.1, c='black')\nax.plot(np.array([rgb_mean[0]-rgb_std[0], rgb_mean[0]+rgb_std[0]]),\n np.array([rgb_mean[1], rgb_mean[1]]),\n np.array([rgb_mean[2], rgb_mean[2]]), color='red')\n\nax.plot(np.array([rgb_mean[0], rgb_mean[0]]),\n np.array([rgb_mean[1]-rgb_std[1], rgb_mean[1]+rgb_std[1]]),\n np.array([rgb_mean[2], rgb_mean[2]]), color='green')\n\nax.plot(np.array([rgb_mean[0], rgb_mean[0]]),\n np.array([rgb_mean[1], rgb_mean[1]]),\n np.array([rgb_mean[2]-rgb_std[2], rgb_mean[2]+rgb_std[2]]), color='blue')\n\nax.view_init(elev=45., azim=-45)\n\n# Extract 5000 samples so that the scatter plot is not too dense\nz = samples_rgb_subset[:N,:]\n\n# Standardise\nz_std = (z - rgb_mean) / rgb_std\n\nfig = plt.figure(figsize=(12,8))\nax = fig.add_subplot(111, projection='3d')\nax.set_xlabel('(R-mu) / std')\nax.set_ylabel('(G-mu) / std')\nax.set_zlabel('(B-mu) / std')\nax.scatter(z_std[:,0], z_std[:,1], z_std[:,2],\n marker='+', alpha=0.1, c='black')\n\n# Create lines for unit standard deviation and plot them\nr = np.array([[-1,0,0], [1,0,0]])\ng = np.array([[0,-1,0], [0,1,0]])\nb = np.array([[0,0,-1], [0,0,1]])\nax.plot(r[:,0], r[:,1], r[:,2], color='red')\nax.plot(g[:,0], g[:,1], g[:,2], color='green')\nax.plot(b[:,0], b[:,1], b[:,2], color='blue')\n\n# Uniform plot scale\nax.set_xlim(-3.0, 3.0)\nax.set_ylim(-3.0, 3.0)\nax.set_zlim(-3.0, 3.0)\nax.view_init(elev=45., azim=-45)",
"A subset of the samples with the mean and standard deviation plotted; the mean is at the centre of the cross, and the lines that pass through it extend 1 standard deviation to either side.\nPCA whitening\nWe can use PCA to extract the primary axes of variance and plot them. Note that we retain all axes; we do not use PCA to reduce dimensionality.",
"pca = PCA(n_components=3, whiten=True)\npca.fit(samples_rgb)\n\nfig = plt.figure(figsize=(12,8))\nax = fig.add_subplot(111, projection='3d')\nax.set_xlabel('R')\nax.set_ylabel('G')\nax.set_zlabel('B')\nax.scatter(z[:,0], z[:,1], z[:,2],\n marker='+', alpha=0.1, c='black')\n# Get the mean and principal component axes and scale the axes by the standard deviation\nmu = pca.mean_[None,:]\npc0 = pca.components_[0:1,:] * np.sqrt(pca.explained_variance_[0])\npc1 = pca.components_[1:2,:] * np.sqrt(pca.explained_variance_[1])\npc2 = pca.components_[2:3,:] * np.sqrt(pca.explained_variance_[2])\n\n# Create the principal component lines\nr = np.append(mu-pc0, mu+pc0, axis=0)\ng = np.append(mu-pc1, mu+pc1, axis=0)\nb = np.append(mu-pc2, mu+pc2, axis=0)\n\n# Plot the principal components\nax.plot(r[:,0], r[:,1], r[:,2], color='red')\nax.plot(g[:,0], g[:,1], g[:,2], color='green')\nax.plot(b[:,0], b[:,1], b[:,2], color='blue')\n\nax.set_xlim(-0.25, 1.25)\nax.set_ylim(-0.25, 1.25)\nax.set_zlim(-0.25, 1.25)\nax.view_init(elev=45., azim=-45)",
"The samples with the principal components extracted by PCA.\nRotate the samples onto the standard basis axes.\nUse the PCA instance to rotate the samples onto the standard basis axes.",
"# Turn whitening off temporarily so that we don't scale the samples yet.\npca.whiten = False\nz_basis = pca.transform(z)\n\nfig = plt.figure(figsize=(12,8))\nax = fig.add_subplot(111, projection='3d')\nax.set_xlabel('pc-0')\nax.set_ylabel('pc-1')\nax.set_zlabel('pc-2')\nax.scatter(z_basis[:,0], z_basis[:,1], z_basis[:,2],\n marker='+', alpha=0.1, c='black')\n# Get the mean and principal component axes and scale the axes by the standard deviation\n# The mean is now at 0,0,0 since PCA will have subtracted it from each sample, moving\n# the mean there\nmu = np.array([[0,0,0]])\n# The principal components have now been rotated onto the standard basis axes.\n# Scale by the standard deviation.\npc0 = np.array([[1,0,0]]) * np.sqrt(pca.explained_variance_[0])\npc1 = np.array([[0,1,0]]) * np.sqrt(pca.explained_variance_[1])\npc2 = np.array([[0,0,1]]) * np.sqrt(pca.explained_variance_[2])\n\n# Create the principal component lines\nr = np.append(mu-pc0, mu+pc0, axis=0)\ng = np.append(mu-pc1, mu+pc1, axis=0)\nb = np.append(mu-pc2, mu+pc2, axis=0)\n\n# Plot the principal components\nax.plot(r[:,0], r[:,1], r[:,2], color='red')\nax.plot(g[:,0], g[:,1], g[:,2], color='green')\nax.plot(b[:,0], b[:,1], b[:,2], color='blue')\n\n# Explicitly set the range that we are plotting to ensure uniform scale\nax.set_xlim(-0.75, 0.75)\nax.set_ylim(-0.75, 0.75)\nax.set_zlim(-0.75, 0.75)\nax.view_init(elev=45., azim=-45)",
"The samples are primarily aligned along the X-axis.\nFull PCA whitening\nPCA whitening adds one final step; after rotating the samples so that the principal components are aligned with the standard basis axes, the samples are scaled by the reciprocal of the standard deviation so that the distribution has unit variance.",
"# Turn whitening back on.\npca.whiten = True\nz_whitened = pca.transform(z)\n\nfig = plt.figure(figsize=(12,8))\nax = fig.add_subplot(111, projection='3d')\nax.set_xlabel('pcw-0')\nax.set_ylabel('pcw-1')\nax.set_zlabel('pcw-2')\nax.scatter(z_whitened[:,0], z_whitened[:,1], z_whitened[:,2],\n marker='+', alpha=0.1, c='black')\n# Get the mean and principal component axes and scale the axes by the standard deviation\nmu = np.array([[0,0,0]])\npc0 = np.array([[1,0,0]])\npc1 = np.array([[0,1,0]])\npc2 = np.array([[0,0,1]])\n\n# Create the principal component lines\nr = np.append(mu-pc0, mu+pc0, axis=0)\ng = np.append(mu-pc1, mu+pc1, axis=0)\nb = np.append(mu-pc2, mu+pc2, axis=0)\n\n# Plot the principal components\nax.plot(r[:,0], r[:,1], r[:,2], color='red')\nax.plot(g[:,0], g[:,1], g[:,2], color='green')\nax.plot(b[:,0], b[:,1], b[:,2], color='blue')\n\n# Explicitly set the range that we are plotting to ensure uniform scale\nax.set_xlim(-3, 3)\nax.set_ylim(-3, 3)\nax.set_zlim(-3, 3)\nax.view_init(elev=45., azim=-45)"
] |
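A quick sanity check of the whitening step described above (a sketch on synthetic correlated data, not part of the original notebook): after `PCA(whiten=True)`, the empirical covariance of the transformed samples is the identity, i.e. unit variance per component and no cross-correlation.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.RandomState(0)
# Correlated 3-D samples standing in for the RGB pixel cloud
A = rng.normal(size=(3, 3))
samples = rng.normal(size=(20000, 3)) @ A.T

pca = PCA(n_components=3, whiten=True)
z = pca.fit_transform(samples)

# The sample covariance of the whitened data is (numerically) the identity
cov = np.cov(z, rowvar=False)
print(np.round(cov, 3))
```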
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
sz2472/foundations-homework
|
data and database/.ipynb_checkpoints/june 7 Allison's classnotes-checkpoint.ipynb
|
mit
|
[
"Scrapin' the Webz",
"!pip3 install bs4\n\nfrom bs4 import BeautifulSoup\n\nfrom urllib.request import urlopen\nhtml_str = urlopen(\"http://static.decontextualize.com/kittens.html\").read()\n\nprint(html_str)\n\ndocument = BeautifulSoup(html_str,\"html.parser\")\n\ntype(document)\n\nh1_tag = document.find('h1')\n\nh1_tag.string\n\nimg_tag = document.find('img')\n\nimg_tag.string\n\nimg_tag('src')\n\nimg_tag['src']\n\ndocument.find_all('img')\n\nimg_tags=document.find_all('img')\n\ntype(img_tags)\n\nfirst_img = img_tags[0]\n\nfirst_img['src']\n\nsecond_img = img_tags[1]\n\nsecond_img['src']\n\nfor item in img_tags:\n print(item['src'])\n\nh2_tags = document.find_all('h2')\nfor item in h2_tags:\n print(item.string)\n\ncheckups = document.find_all('span',{'class':'lastcheckup'})\nfor item in checkups:\n print(item.string)\n\nkittens = document.find_all('div', {'class': 'kitten'})\nfor item in kittens:\n h2_tag = item.find('h2')\n print(h2_tag.string)\n checkup = item.find('span')\n print(checkup.string)\n\nkittens = document.find_all('div', {'class': 'kitten'})\n\nfirst_kitten = kittens[0]\nfirst_kitten_h2 = first_kitten.find('h2')\nprint(first_kitten_h2.string)\n\nplanets = [\"Mercury\", \"Venus\", \"Earth\", \"Mars\", \"Jupiter\", \"Saturn\", \"Uranus\", \"Neptune\"]\n\nseparator = \",\"\n\nseparator.join(planets)",
"But first, an aside about joining strings",
"print(\"&\\n\".join(planets))\n\nprint(\"&\\n\".join(planets[:4]))\n\nkittens = document.find_all('div', {'class': 'kitten'})\nfor item in kittens:\n    h2_tag = item.find('h2')\n    print(h2_tag.string)\n    a_tags = item.find_all('a') # anchor tags\n    all_shows_str = [] # create a new list\n    for a_tag_item in a_tags:\n        tag_str = a_tag_item.string\n        all_shows_str.append(tag_str)\n    string_with_all_show_names = \",\".join(all_shows_str)\n    print(h2_tag.string + \":\", string_with_all_show_names)\n\nkittens_data = list() # create an empty list\nkittens = document.find_all('div', {'class': 'kitten'})\nfor item in kittens:\n    h2_tag = item.find('h2')\n    a_tags = item.find_all('a')\n    all_shows_str = []\n    for a_tag_item in a_tags:\n        tag_str = a_tag_item.string\n        all_shows_str.append(tag_str)\n    # 1: create a dictionary with the relevant key/value pairs\n    kitten_map = {\"name\": h2_tag.string, \"tvshows\": all_shows_str}\n    # 2: append that dictionary to kittens_data\n    kittens_data.append(kitten_map)\nkittens_data\n\nkittens_data = list() # create an empty list\nkittens = document.find_all('div', {'class': 'kitten'})\nfor item in kittens:\n    h2_tag = item.find('h2')\n    a_tags = item.find_all('a')\n    all_shows_str = []\n    for a_tag_item in a_tags:\n        tag_str = a_tag_item.string\n        all_shows_str.append(tag_str)\n    # add the kitten's last checkup date as well\n    checkup = item.find('span') # get the string with checkup.string\n    kittens_data.append(\n        {\"name\": h2_tag.string,\n         \"tvshows\": all_shows_str,\n         \"last_checkup\": checkup.string})\nkittens_data",
"Another Aside: lists and ...lists",
"# Our next goal is to create a data structure that looks like this:\n# [\n#   {'name': 'Fluffy',\n#    'tvshows': ['Deep Space Nine', 'Mr. Belvedere']},\n#   ...\n# ]\n\nx = [\"a\", \"b\", \"c\", \"d\"]\n\nx[0]\n\nx.append(\"e\")\n\nlen(x)\n\nx[4]\n\nnumbers = [1,2,3,4,5,6]\n# end up with: [1,4,9,16,25,36]\n\n# either with a list comprehension...\nsquared = [item * item for item in numbers]\n\n# ...or with an explicit loop\nsquared = []\nfor item in numbers:\n    s = item*item\n    squared.append(s)\n\nsquared\n\n## Aside the Third: Making dictionaries\n# declaring a dictionary\nx = {'a':1, 'b':2, 'c':3}\n\n# get a value out of a dictionary\nx['a']\n\nx.keys()\n\nfor key in x.keys():\n    print(key) # print out the keys\n\n# target: {1:1, 2:4, 3:9, 4:16, 5:25, ...}\nsquares = {}\nfor n in range(1,11):\n    squares[n] = n*n\nsquares\n\nsquares[7]\n\nnames = [\"Aaron\", \"Bob\", \"Caroline\", \"Daphne\"]\n# target: {\"Aaron\": 5, ...} -- map each name to its number of characters\nname_length_map = {}\nfor item in names:\n    name_length_map[item] = len(item)\nname_length_map # take a list and create a new dictionary",
"Scraping the Faculty: what percentage of the CJ faculty are adjunct faculty?\nOur hypothesis:\nfind all the <li> tags\ninside the <li> tags, find the <h4>\ninside the <h4>, the name is the content of an <a>\ninside the <li> tags, find the <p> with class description\nthe title of the professor is the content of that tag.",
"from urllib.request import urlopen\nfaculty_html = urlopen(\"http://www.journalism.columbia.edu/page/10/10?category_ids%5B%5D=2&category_ids%5B%5D=3&category_ids%5B%5D=37\").read()\n\ndocument = BeautifulSoup(faculty_html, \"html.parser\")\n\ndocument.find('h2').string\n\nh2_tag = document.find('h2')\nh2_tag.string",
"Very first task: print out the names of all the faculty members.",
"ul_tag = document.find('ul', {'class': 'experts-list'})\nli_tags = ul_tag.find_all('li')\nfor item in li_tags:\n    h4_tag = item.find('h4')\n    if h4_tag: # None counts as False in Python; only proceed if we actually found an h4 tag under the li tag\n        a_tag = h4_tag.find('a') # name of the faculty member\n        p_tag = item.find('p', {'class':'description'}) # title of the faculty member\n        print(a_tag.string, \"/\", p_tag.string)",
"Now, we want to make a list of dictionaries of faculty members along with their titles\n[{'name': 'Bodarky George', 'title': 'Adjunct Assistant Professor '},\n {'name':''}]",
"profs = []\nul_tag = document.find('ul', {'class': 'experts-list'})\nli_tags= ul_tag.find_all('li')\nfor item in li_tags:\n h4_tag = item.find('h4')\n if h4_tag: #none counts as false in python, only proceed if we actually found a h4-tag under li tags\n a_tag = h4_tag.find('a')\n p_tag = item.find('p', {'class':'description'})\n prof_map = {'name': a_tag.string, 'title': p_tag.string}\n profs.append(prof_map)\nprofs\n\nfor item in profs:\n print(item['name'])",
"String Indexing\nPrint all of the professors whose last name starts with 'M'.",
"# print all of the professors whose last name starts with 'M'\nmcount = 0\nfor item in profs:\n    prof_name = item['name']\n    if prof_name[0] == 'M':\n        print(item['name'])\n        mcount += 1 # same as mcount = mcount + 1\n\nprint(mcount)\n\n# find all of the professors listed as \"Adjunct Faculty\"\nadjunct_profs = []\n# the condition acts like a WHERE clause\nfor item in profs:\n    if item['title'] is not None and (\"Adjunct\" in item['title']):\n        adjunct_profs.append(item)\nlen(adjunct_profs)\n\n# the same loop, written on one line\nfor item in profs:\n    if item['title'] is not None and (\"Adjunct\" in item['title']): adjunct_profs.append(item)\n\nmessage = \"bungalow\"\nmessage[0]\n\nmessage[2:6]\n\nmessage[-1]\n\nmessage[0:3]\n\nmessage[:3]\n\nmessage[4:]\n\nmessage[-5:-2]",
"lost count of asides",
"x=5\n\nx\n\nx = x-1\n\nx\n\nx -= 1\n\nx\n\nx *=2\n\nx"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
iutzeler/Introduction-to-Python-for-Data-Sciences
|
1_Basics.ipynb
|
mit
|
[
"<table>\n<tr>\n<td width=15%><img src=\"./img/UGA.png\"></img></td>\n<td><center><h1>Introduction to Python for Data Sciences</h1></center></td>\n<td width=15%><a href=\"http://www.iutzeler.org\" style=\"font-size: 16px; font-weight: bold\">Franck Iutzeler</a> </td>\n</tr>\n</table>\n\n<br/><br/>\n<center><a style=\"font-size: 40pt; font-weight: bold\">Chap. 1 - The Basics </a></center> \n<br/><br/>\n0 - Installation and Quick Start\nPython is a programming language that is widely used nowadays, in many different domains, thanks to its versatility.\nIt is an interpreted language, meaning that the code is not compiled but translated by a running Python engine.\nInstallation\nSee https://www.python.org/about/gettingstarted/ for how to install Python (but it is probably already installed).\nIn Data Science, it is common to use Anaconda to download and install Python and its environment (see also the quickstart).\nWriting Code\nSeveral options exist, more or less user-friendly.\nIn the python shell\nThe python shell can be launched by typing the command python in a terminal (this works on Linux, Mac, and Windows with PowerShell). To exit it, type exit().\n<img src=\"./img/python.png\" width=\"600\">\nWarning: Python (version 2.x) and Python3 (version 3.x) coexist on some systems as two different programs. The differences appear small but are real, and Python 2 is no longer supported; to be sure to get Python 3, you can type python3.\nFrom the shell, you can enter Python code that will be executed on the run as you press Enter. As long as you are in the same shell, you keep your variables, but as soon as you exit it, everything is lost. It might not be the best option...\nFrom a file\nYou can write your code in a file and then execute it with Python. The extension of Python files is typically .py.\nIf you create a file test.py (using any text editor) containing the following code:\n\n~~~\na = 10\na = a + 7\nprint(a)\n~~~\n\nThen, you can run it using the command python test.py in a terminal from the same folder as the file.\n<img src=\"./img/python_file.png\" width=\"600\">\nThis is a convenient solution to run some code but it is probably not the best way to code.\nUsing an integrated development environment (IDE)\nYou can edit your Python code files with IDEs that offer debuggers, syntax checking, etc. Two popular examples are:\n* Spyder which is quite similar to MATLAB or RStudio \n<img src=\"https://www.spyder-ide.org/spyder_website_banner.png\" width=\"800\">\n* VS Code which has very good Python integration while not being restricted to it. \n<img src=\"https://code.visualstudio.com/assets/home/home-screenshot-linux-lg.png\" width=\"400px\">\nJupyter notebooks\nJupyter notebooks are browser-based notebooks for Julia, Python, and R; they correspond to .ipynb files. The main features of Jupyter notebooks are:\n* In-browser editing for code, with automatic syntax highlighting, indentation, and tab completion/introspection.\n* The ability to execute code from the browser and plot inline.\n* In-browser editing for rich text using the Markdown markup language.\n* The ability to include mathematical notation within markdown cells using LaTeX, rendered natively by MathJax.\nInstallation\nIn a terminal, enter python -m pip install notebook or simply pip install notebook\nNote: Anaconda comes with notebooks directly; they can be launched from the Navigator.\nUse\nTo launch Jupyter, enter jupyter notebook. \nThis starts a kernel (a process that runs and interfaces the notebook content with an (i)Python shell) and opens a tab in the browser. The whole interface of Jupyter notebook is web-based and can be accessed at the address http://localhost:8888 .\nThen, you can either create a new notebook or open a notebook (.ipynb file) from the current folder. \nNote: Closing the tab does not terminate the notebook; it can still be accessed at the above address. To terminate it, use the interface (File -> Close and Halt) or type Ctrl+C in the kernel terminal.\nRemote notebook execution\nWithout any installation, you can:\n* view notebooks using NBViewer\n* fully interact with notebooks (create/modify/run) using UGA's Jupyter hub, Binder or Google Colab\nInterface\nNotebook documents contain the inputs and outputs of an interactive python shell as well as additional text that accompanies the code but is not meant for execution. In this way, notebook files can serve as a complete computational record of a session, interleaving executable code with explanatory text, mathematics, and representations of resulting objects. These documents are saved with the .ipynb extension. \nNotebooks may be exported to a range of static formats, including HTML (for example, for blog posts), LaTeX, PDF, etc. by File->Download as\nAccessing notebooks\nYou can open a notebook from the file explorer in the Home (welcome) tab or using File->Open from an opened notebook. To create a new notebook, use the New button at the top-right of the Home (welcome) tab or File->New Notebook from an opened notebook; you will be asked for the programming language.\nEditing notebooks\nYou can modify the title (that is, the file name) by clicking on it next to the Jupyter logo. \nThe notebooks are a succession of cells, which can be of four types:\n* code for python code (as in ipython)\n* markdown for text in Markdown formatting (see this Cheatsheet). You may additionally use HTML and LaTeX math formulas.\n* raw and heading are less used, for raw text and titles\nCells\nYou can edit a cell by double-clicking on it.\nYou can run a cell by using the menu or typing Ctrl+Enter (you can also run all cells, or all cells above a certain point). If it is a text cell, it will be formatted. If it is a code cell, it will run as if it were entered in an ipython shell, which means all previous actions (functions called, variables defined) are persistent. To get a clean slate, you have to restart the kernel using Kernel->Restart.\nUseful commands\n\nTab autocompletes\nShift+Tab gives the docstring of the input function\n? returns the help\n\n1- Numbers and Variables\nVariables",
"2 + 2 + 1 # comment\n\na = 4\nprint(a)\nprint(type(a))\n\na,x = 4, 9000\nprint(a)\nprint(x)",
"Variable names can contain a-z, A-Z, 0-9 and some special characters such as _ but must always begin with a letter. By convention, variable names are lowercase.\nTypes\nVariables are dynamically typed in Python, which means that their type is deduced from the context: the initialization or the types of the variables used in its computation. Observe the following example.",
"print(\"Integer\")\na = 3\nprint(a,type(a))\n\nprint(\"\\nFloat\")\nb = 3.14\nprint(b,type(b))\n\nprint(\"\\nComplex\")\nc = 3.14 + 2j\nprint(c,type(c))\nprint(c.real,type(c.real))\nprint(c.imag,type(c.imag))",
"This typing can lead to some variables having unwanted types, which can be resolved by casting.",
"d = 1j*1j\nprint(d,type(d))\nd = d.real\nprint(d,type(d))\nd = int(d)\nprint(d,type(d))\n\ne = 10/3\nprint(e,type(e))\nf = (10/3)/(10/3)\nprint(f,type(f))\nf = int((10/3)/(10/3))\nprint(f,type(f))",
"Operations on numbers\nThe usual operations are\n* Multiplication and Division with respectively * and /\n* Exponent with **\n* Modulo with %",
"print(7 * 3., type(7 * 3.)) # int x float -> float\n\nprint(3/2, type(3/2)) # Warning: int in Python 2, float in Python 3\nprint(3/2., type(3/2.)) # To be sure\n\nprint(2**10, type(2**10)) \n\nprint(8%2, type(8%2)) ",
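One operator worth mentioning alongside these (an addition, not in the original cell) is floor division `//`, which pairs naturally with `%`:

```python
print(7 / 2)         # true division: 3.5
print(7 // 2)        # floor division: 3
print(7 % 2)         # remainder: 1
print(divmod(7, 2))  # quotient and remainder at once: (3, 1)
```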
"Booleans\nBoolean is the type of a variable that is True or False, and thus booleans are extremely useful when coding.\n* They can be obtained by comparisons >, >= (greater, greater or equal), <, <= (smaller, smaller or equal) or equality tests ==, != (equal, not equal).\n* They can be manipulated by the logical operations and, not, or.",
"print('2 > 1\\t', 2 > 1) \nprint('2 > 2\\t', 2 > 2) \nprint('2 >= 2\\t',2 >= 2) \nprint('2 == 2\\t',2 == 2) \nprint('2 == 2.0',2 == 2.0) \nprint('2 != 1.9',2 != 1.9) \n\nprint(True and False)\nprint(True or True)\nprint(not False)",
"Lists\nLists are the base element for sequences of variables in Python; they are themselves a variable type.\n* The syntax to write them is [ ... , ... ]\n* The types of the elements need not all be the same\n* The indices begin at $0$ (l[0] is the first element of l)\n* Lists can be nested (lists of lists of ...)\nWarning: Another type called tuple with the syntax ( ... , ... ) exists in Python. It has almost the same structure as a list, with the notable exception that one cannot add or remove elements from a tuple. We will see them briefly later.",
"l = [1, 2, 3, [4,8] , True , 2.3]\nprint(l, type(l))\n\nprint(l[0],type(l[0]))\nprint(l[3],type(l[3]))\nprint(l[3][1],type(l[3][1]))\n\nprint(l)\nprint(l[4:]) # l[4:] is l from the position 4 (included)\nprint(l[:5]) # l[:5] is l up to position 5 (excluded)\nprint(l[4:5]) # l[4:5] is l between 4 (included) and 5 (excluded) so just 4\nprint(l[1:6:2]) # l[1:6:2] is l between 1 (included) and 6 (excluded) by steps of 2 thus 1,3,5\nprint(l[::-1]) # reversed order\nprint(l[-1]) # last element",
"Operations on lists\nOne can easily add, insert, remove, count, or test whether an element is in a list.",
"l.append(10) # Add an element to l (the list is not copied, it is actually l that is modified)\nprint(l)\n\nl.insert(1,'u') # Insert an element at position 1 in l (the list is not copied, it is actually l that is modified)\nprint(l)\n\nl.remove(10) # Remove the first element 10 of l \nprint(l)\n\nprint(len(l)) # length of a list\nprint(2 in l) # test if 2 is in l",
"Handling lists\nLists are pointer-like types, meaning that if you write l2=l, you do not copy l to l2 but rather copy the reference, so modifying one will modify the other.\nThe proper way to copy a list is to use the dedicated copy method of list variables.",
"l2 = l \nl.append('Something')\nprint(l,l2)\n\nl3 = list(l) # l.copy() works in Python 3 \nl.remove('Something')\nprint(l,l3)",
"You can have empty lists and concatenate lists simply by using the + operator, or even repeat them with * .",
"l4 = []\nl5 =[4,8,10.9865]\nprint(l+l4+l5)\nprint(l5*3)",
"Tuples, Dictionaries [*]\n\nTuples are similar to lists but are created with (...,...) or simply commas. They cannot be changed once created.",
"t = (1,'b',876876.908)\nprint(t,type(t))\nprint(t[0])\n\na,b = 12,[987,98987]\nu = a,b\nprint(a,b,u)\n\ntry:\n u[1] = 2\nexcept Exception as error: \n print(error)",
"Dictionaries store key-value pairs with the syntax {key1 : value1, ...}\n\nThis type is often used as a return type in libraries.",
"d = {\"param1\" : 1.0, \"param2\" : True, \"param3\" : \"red\"}\nprint(d,type(d))\n\nprint(d[\"param1\"])\nd[\"param1\"] = 2.4\nprint(d)",
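A common companion pattern (an addition to the cell above) is iterating over the key/value pairs of a dictionary with `items()`, and testing key membership with `in`:

```python
d = {"param1": 1.0, "param2": True, "param3": "red"}

# items() yields (key, value) pairs
for key, value in d.items():
    print(key, "->", value)

# membership tests look at the keys
print("param1" in d)
print("param4" in d)
```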
"Strings and text formatting\n\nStrings are delimited with (double) quotes. They can be handled largely the same way as lists (see above).\n<tt>print</tt> displays (tuples of) variables (not necessarily strings).\nTo include variables in a string, it is preferable to use the <tt>format</tt> method.\n\nWarning: text formatting, and notably the print method, is one of the major differences between Python 2 and Python 3. The method presented here is clean and works in both versions.",
"s = \"test\"\nprint(s,type(s))\n\nprint(s[0])\nprint(s + \"42\")\n\nprint(s,42)\nprint(s+\"42\")\n\ntry:\n print(s+42)\nexcept Exception as error: \n print(error)",
"The format method",
"print( \"test {}\".format(42) )\n\nprint( \"test with an int {:d}, a float {} (or {:e} which is roughly {:.1f})\".format(4 , 3.141 , 3.141 , 3.141 ))",
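On Python 3.6+, f-strings offer a more concise alternative to `format` with the same format specifiers (an addition, not part of the original cell):

```python
value = 3.141
# the {:d}, {:e}, {:.1f} specifiers work the same inside the braces
print(f"test with an int {42:d}, a float {value} (or {value:e} which is roughly {value:.1f})")
```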
"2- Branching and Loops\nIf, Elif, Else\nIn Python, the formulation for branching is if condition: (mind the :) followed by an indentation of one tab that represents what is executed if the condition is true. Indentation is essential and at the core of Python.",
"statement1 = False\nstatement2 = False\n\nif statement1:\n print(\"statement1 is True\")\nelif statement2:\n print(\"statement2 is True\")\nelse:\n print(\"statement1 and statement2 are False\")\n\nstatement1 = statement2 = True\n\nif statement1:\n if statement2:\n print(\"both statement1 and statement2 are True\")\n\nif statement1:\n if statement2: # Bad indentation!\n #print(\"both statement1 and statement2 are True\") # Uncommenting Would cause an error\n print(\"here it is ok\")\n print(\"after the previous line, here also\")\n\nstatement1 = True \n\nif statement1:\n print(\"printed if statement1 is True\")\n \n \n \n print(\"still inside the if block\")\n\nstatement1 = False \n\nif statement1:\n print(\"printed if statement1 is True\")\n \nprint(\"outside the if block\")",
"For loop\nThe syntax of for loops is for x in something: followed by an indentation of one tab which represents what will be executed. \nThe something above can be of various kinds: a list, a dictionary, etc.",
"for x in [1, 2, 3]:\n print(x)\n\nsentence = \"\"\nfor word in [\"Python\", \"for\", \"data\", \"Science\"]:\n sentence = sentence + word + \" \"\nprint(sentence)",
"A useful function is <tt>range</tt> which generates sequences of numbers that can be used in loops.",
"print(\"Range (from 0) to 4 (excluded) \")\nfor x in range(4): \n print(x) \n\nprint(\"Range from 2 (included) to 6 (excluded) \")\nfor x in range(2,6): \n print(x)\n\nprint(\"Range from 1 (included) to 12 (excluded) by steps of 3 \")\nfor x in range(1,12,3): \n print(x)",
"If the index is needed along with the value, the function enumerate is useful.",
"for idx, x in enumerate(range(-3,3)):\n print(idx, x)",
"While loop\nSimilarly to for loops, the syntax is while condition: followed by an indentation of one tab which represents what will be executed.",
"i = 0\n\nwhile i<5:\n print(i)\n i+=1",
"Try [*]\nWhen a command may fail, you can try to execute it and optionally catch the Exception (i.e. the error).",
"a = [1,2,3]\nprint(a)\n\ntry:\n a[1] = 3\n print(\"command ok\")\nexcept Exception as error: \n print(error)\n \nprint(a) # The command went through\n\ntry:\n a[6] = 3\n print(\"command ok\")\nexcept Exception as error: \n print(error)\n \nprint(a) # The command failed",
"3- Functions\nIn Python, a function is defined as def function_name(function_arguments): followed by an indentation representing what is inside the function. (No return arguments are provided a priori)",
"def fun0():\n print(\"\\\"fun0\\\" just prints\")\n\nfun0()",
"A docstring can be added to document the function; it will appear when calling help.",
"def fun1(l):\n \"\"\"\n Prints a list and its length\n \"\"\"\n print(l, \" is of length \", len(l))\n \nfun1([1,'iuoiu',True])\n\nhelp(fun1)",
"Outputs\nreturn outputs a variable, tuple, dictionary, ...",
"def square(x):\n \"\"\"\n Return x squared.\n \"\"\"\n return(x ** 2)\n\nhelp(square)\nres = square(12)\nprint(res)\n\ndef powers(x):\n \"\"\"\n Return the first powers of x.\n \"\"\"\n return(x ** 2, x ** 3, x ** 4)\n\nhelp(powers)\n\nres = powers(12)\nprint(res, type(res))\n\ntwo,three,four = powers(3)\nprint(three,type(three))\n\ndef powers_dict(x):\n \"\"\"\n Return the first powers of x as a dictionary.\n \"\"\"\n return{\"two\": x ** 2, \"three\": x ** 3, \"four\": x ** 4}\n\n\nres = powers_dict(12)\nprint(res, type(res))\nprint(res[\"two\"],type(res[\"two\"]))",
"Arguments\nIt is possible to \n* Give the arguments in any order provided that you write the corresponding argument variable name\n* Set default values for variables so that they become optional",
"def fancy_power(x, p=2, debug=False):\n \"\"\"\n Here is a fancy version of power that computes the square of the argument or other powers if p is set\n \"\"\"\n if debug:\n print( \"\\\"fancy_power\\\" is called with x =\", x, \" and p =\", p)\n return(x**p)\n\nprint(fancy_power(5))\nprint(fancy_power(5,p=3))\n\nres = fancy_power(p=8,x=2,debug=True)\nprint(res)",
"4- Classes [*]\nClasses are at the core of object-oriented programming; they are used to represent an object with related attributes (variables) and methods (functions). \nThey are defined like functions but with the keyword class: class my_class(object): followed by an indentation. The definition of a class usually contains some methods:\n* The first argument of a method must be self, a reference to the object itself.\n* Some method names have a specific meaning: \n * __init__: method executed at the creation of the object\n * __str__ : method executed to represent the object as a string, for instance when the object is passed to the function print",
"class Point(object):\n \"\"\"\n Class of a point in the 2D plane.\n \"\"\"\n def __init__(self, x=0.0, y=0.0):\n \"\"\"\n Creation of a new point at position (x, y).\n \"\"\"\n self.x = x\n self.y = y\n \n def translate(self, dx, dy):\n \"\"\"\n Translate the point by (dx , dy).\n \"\"\"\n self.x += dx\n self.y += dy\n \n def __str__(self):\n return(\"Point: ({:.2f}, {:.2f})\".format(self.x, self.y))\n\np1 = Point()\nprint(p1)\n\np1.translate(3,2)\nprint(p1)\n\np2 = Point(1.2,3)\nprint(p2)",
"5- Reading and writing files\nopen returns a file object, and is most commonly used with two arguments: open(filename, mode).\nThe first argument is a string containing the filename. The second argument is another string containing a few characters describing the way in which the file will be used (optional, 'r' will be assumed if it’s omitted.):\n* 'r' when the file will only be read\n* 'w' for only writing (an existing file with the same name will be erased)\n* 'a' opens the file for appending; any data written to the file is automatically added to the end",
"f = open('./data/test.txt', 'w')\nprint(f)",
"f.write(string) writes the contents of string to the file.",
"f.write(\"This is a test\\n\")\n\nf.close()",
"Warning: For the file to be actually written and being able to be opened and modified again without mistakes, it is primordial to close the file handle with f.close()\nf.read() will read an entire file and put the pointer at the end.",
"f = open('./data/test.txt', 'r')\nf.read()\n\nf.read()",
"The end of the file has been reached, so the command returns ''. \nTo get back to the top, use f.seek(offset, from_what). The position is computed by adding offset to a reference point; the reference point is selected by the from_what argument. A from_what value of 0 measures from the beginning of the file, 1 uses the current file position, and 2 uses the end of the file as the reference point. from_what can be omitted and defaults to 0, using the beginning of the file as the reference point. Thus f.seek(0) goes to the top.",
"f.seek(0)",
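As a self-contained sketch of seek and tell (it writes its own temporary file rather than reusing the handle above, so it runs on its own; the file name seek_demo.txt is arbitrary):

```python
import os
import tempfile

# Write a small file to a temporary directory, then reopen it for reading
path = os.path.join(tempfile.mkdtemp(), "seek_demo.txt")
with open(path, "w") as out:
    out.write("This is a test\n")

f = open(path, "r")
f.seek(0, 2)      # from_what=2: seek relative to the end of the file
end = f.tell()    # position after seeking to the end
f.seek(0)         # back to the top (from_what defaults to 0)
first = f.readline()
f.close()
print(end, repr(first))
```

Note that in text mode, seeking relative to the end is only allowed with a zero offset, i.e. f.seek(0, 2).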
"f.readline() reads a single line from the file; a newline character (\\n) is left at the end of the string",
"f.readline()\n\nf.readline()",
"For reading lines from a file, you can loop over the file object. This is memory efficient, fast, and leads to simple code:",
"f.seek(0)\nfor line in f:\n print(line)\n\nf.close()",
"6- Exercises\n\nExercise 1: Odd or Even\nThe code snippet below enables the user to enter a number. Check if this number is odd or even. Optionally, handle bad inputs (characters, floats, signs, etc.).",
"num = input(\"Enter a number: \")\nprint(num)",
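One possible approach to Exercise 1, written as a function so the parity check can be exercised without interactive input (the name odd_or_even is our own choice, not part of the exercise statement):

```python
def odd_or_even(s):
    """Classify a user-supplied string as 'odd', 'even', or 'invalid'."""
    try:
        n = int(s)  # int() rejects characters and floats such as "1.5"
    except ValueError:
        return "invalid"
    return "even" if n % 2 == 0 else "odd"

print(odd_or_even("42"))   # even
print(odd_or_even("-7"))   # odd
print(odd_or_even("1.5"))  # invalid
```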
"Exercise 2: Fibonacci \nThe Fibonacci sequence is a sequence of numbers where the next number in the sequence is the sum of the previous two numbers in the sequence. The sequence looks like this: 1, 1, 2, 3, 5, 8, 13. Write a function that generates a given number of elements of the Fibonacci sequence.\n\n\n\nExercise 3: Implement quicksort\nThe Wikipedia page describing this sorting algorithm gives the following pseudocode:\n\nfunction quicksort('array')\n if length('array') <= 1\n return 'array'\n select and remove a pivot value 'pivot' from 'array'\n create empty lists 'less' and 'greater'\n for each 'x' in 'array'\n if 'x' <= 'pivot' then append 'x' to 'less'\n else append 'x' to 'greater'\n return concatenate(quicksort('less'), 'pivot', quicksort('greater'))\n\n\nCreate a function that sorts a list using quicksort.",
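For Exercise 2, one possible sketch (starting the sequence at 1, 1 as in the statement; the function name fibonacci is our own choice):

```python
def fibonacci(n):
    """Return the first n elements of the Fibonacci sequence."""
    seq = []
    a, b = 1, 1
    for _ in range(n):
        seq.append(a)
        a, b = b, a + b  # each element is the sum of the previous two
    return seq

print(fibonacci(7))  # [1, 1, 2, 3, 5, 8, 13]
```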
"def quicksort(l):\n # ...\n return None\n\nres = quicksort([-2, 3, 5, 1, 3])\nprint(res)",
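One possible completion of the stub above, translating the pseudocode line by line (pivot = first element, which the pseudocode allows):

```python
def quicksort(l):
    """Sort a list using the quicksort pseudocode given above."""
    if len(l) <= 1:
        return l
    pivot, rest = l[0], l[1:]  # select and remove a pivot value
    less = [x for x in rest if x <= pivot]
    greater = [x for x in rest if x > pivot]
    return quicksort(less) + [pivot] + quicksort(greater)

res = quicksort([-2, 3, 5, 1, 3])
print(res)  # [-2, 1, 3, 3, 5]
```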
"Exercise 4: Project Euler\nProject Euler is a competitive programming website mainly based on cleverly solving otherwise computation-intensive mathematical problems. It is a good way to learn a new scientific programming language.\nProblem 1 reads\n\nIf we list all the natural numbers below 10 that are multiples of 3 or 5, we get 3, 5, 6 and 9. The sum of these multiples is 23.\n\nFind the sum of all the multiples of 3 or 5 below 1000.\n\n\nWrite a script that solves this problem.\n\nYou can continue by solving other problems. The first ones (e.g. 4, 31) are the easiest."
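For reference, Problem 1 fits in a single generator expression (233168 is the well-known answer, obtainable by inclusion-exclusion over multiples of 3, 5, and 15):

```python
# Sum every natural number below 1000 divisible by 3 or 5
total = sum(x for x in range(1000) if x % 3 == 0 or x % 5 == 0)
print(total)  # 233168
```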
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
conversationai/conversationai-models
|
experiments/tf_trainer/tf_hub_tfjs/notebook/BiasEvaluation.ipynb
|
apache-2.0
|
[
"Bias Evaluation for TF Javascript Model\nBased on the FAT* Tutorial Measuring Unintended Bias in Text Classification Models with Real Data.\nCopyright 2019 Google LLC.\nSPDX-License-Identifier: Apache-2.0",
"!pip3 install --quiet \"tensorflow>=1.11\"\n!pip3 install --quiet sentencepiece\n\nfrom __future__ import absolute_import\nfrom __future__ import division\nfrom __future__ import print_function\n\nimport re\nimport pandas as pd\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport seaborn as sns\nimport tensorflow as tf\nimport sentencepiece\nfrom google.colab import auth\nfrom IPython.display import HTML, display\n\nfrom sklearn import metrics\n\n%matplotlib inline\n\n# autoreload makes it easier to interactively work on code in imported libraries\n%load_ext autoreload\n%autoreload 2\n\n# Set pandas display options so we can read more of the comment text.\npd.set_option('max_colwidth', 300)\n\n# Seed for Pandas sampling, to get consistent sampling results\nRANDOM_STATE = 123456789\n\nauth.authenticate_user()\n\n!mkdir -p tfjs_model\n!gsutil -m cp -R gs://conversationai-public/public_models/tfjs/v1/* tfjs_model\n\ntest_df = pd.read_csv(\n 'https://raw.githubusercontent.com/conversationai/unintended-ml-bias-analysis/master/unintended_ml_bias/new_madlibber/output_data/English/intersectional_madlibs.csv')\nprint('test data has %d rows' % len(test_df))\n\n\nmadlibs_words = pd.read_csv(\n 'https://raw.githubusercontent.com/conversationai/unintended-ml-bias-analysis/master/unintended_ml_bias/new_madlibber/input_data/English/words.csv')\n\nidentity_columns = madlibs_words[madlibs_words.type=='identity'].word.tolist()\n\nfor term in identity_columns:\n test_df[term] = test_df['phrase'].apply(\n lambda x: bool(re.search(r'\\b{}\\b'.format(term), x,\n flags=re.UNICODE|re.IGNORECASE)))\n",
"Score test set with our text classification model\nUsing our new model, we can score the set of test comments for toxicity.",
"TOXICITY_COLUMN = 'toxicity'\nTEXT_COLUMN = 'phrase'\n\npredict_fn = tf.contrib.predictor.from_saved_model(\n 'tfjs_model', signature_def_key='predict')\n\nsp = sentencepiece.SentencePieceProcessor()\nsp.Load('tfjs_model/assets/universal_encoder_8k_spm.model')\n\ndef progress(value, max=100):\n return HTML(\"\"\"\n <progress\n value='{value}'\n max='{max}',\n style='width: 100%'\n >\n {value}\n </progress>\n \"\"\".format(value=value, max=max))\n\ntox_scores = []\nnrows = test_df.shape[0]\nout = display(progress(0, nrows), display_id=True)\nfor offset in range(0, nrows):\n out.update(progress(offset, nrows))\n values = sp.EncodeAsIds(test_df[TEXT_COLUMN][offset])\n tox_scores.append(predict_fn({\n 'values': values,\n 'indices': [(0, i) for i in range(len(values))],\n 'dense_shape': [1, len(values)]})['toxicity/probabilities'][0,1])\n\nMODEL_NAME = 'tfjs_model'\ntest_df[MODEL_NAME] = tox_scores",
"Evaluate the overall ROC-AUC\nThis calculates the model's performance on the entire test set using the ROC-AUC metric.",
"SUBGROUP_AUC = 'subgroup_auc'\nBACKGROUND_POSITIVE_SUBGROUP_NEGATIVE_AUC = 'background_positive_subgroup_negative_auc'\nBACKGROUND_NEGATIVE_SUBGROUP_POSITIVE_AUC = 'background_negative_subgroup_positive_auc'\n\ndef compute_auc(y_true, y_pred):\n try:\n return metrics.roc_auc_score(y_true, y_pred)\n except ValueError:\n return np.nan\n\n\ndef compute_subgroup_auc(df, subgroup, label, model_name):\n subgroup_examples = df[df[subgroup]]\n return compute_auc(subgroup_examples[label], subgroup_examples[model_name])\n\n\ndef compute_background_positive_subgroup_negative_auc(df, subgroup, label, model_name):\n \"\"\"Computes the AUC of the within-subgroup negative examples and the background positive examples.\"\"\"\n index = df[label] == 'toxic'\n subgroup_negative_examples = df[df[subgroup] & ~index]\n non_subgroup_positive_examples = df[~df[subgroup] & index]\n examples = subgroup_negative_examples.append(non_subgroup_positive_examples)\n return compute_auc(examples[label], examples[model_name])\n\n\ndef compute_background_negative_subgroup_positive_auc(df, subgroup, label, model_name):\n \"\"\"Computes the AUC of the within-subgroup positive examples and the background negative examples.\"\"\"\n index = df[label] == 'toxic'\n subgroup_positive_examples = df[df[subgroup] & index]\n non_subgroup_negative_examples = df[~df[subgroup] & ~index]\n examples = subgroup_positive_examples.append(non_subgroup_negative_examples)\n return compute_auc(examples[label], examples[model_name])\n\n\ndef compute_bias_metrics_for_model(dataset,\n subgroups,\n model,\n label_col,\n include_asegs=False):\n \"\"\"Computes per-subgroup metrics for all subgroups and one model.\"\"\"\n records = []\n for subgroup in subgroups:\n record = {\n 'subgroup': subgroup,\n 'subgroup_size': len(dataset[dataset[subgroup]])\n }\n record[SUBGROUP_AUC] = compute_subgroup_auc(\n dataset, subgroup, label_col, model)\n record[BACKGROUND_POSITIVE_SUBGROUP_NEGATIVE_AUC] = compute_background_positive_subgroup_negative_auc(\n dataset, subgroup, label_col, model)\n record[BACKGROUND_NEGATIVE_SUBGROUP_POSITIVE_AUC] = compute_background_negative_subgroup_positive_auc(\n dataset, subgroup, label_col, model)\n records.append(record)\n return pd.DataFrame(records).sort_values('subgroup_auc', ascending=True)\n\nbias_metrics_df = compute_bias_metrics_for_model(test_df, identity_columns, MODEL_NAME, TOXICITY_COLUMN)",
"Plot a heatmap of bias metrics\nPlot a heatmap of the bias metrics. Higher scores indicate better results.\n* Subgroup AUC measures the ability to separate toxic and non-toxic comments for this identity.\n* Negative cross AUC measures the ability to separate non-toxic comments for this identity from toxic comments from the background distribution.\n* Positive cross AUC measures the ability to separate toxic comments for this identity from non-toxic comments from the background distribution.",
"def plot_auc_heatmap(bias_metrics_results, models):\n metrics_list = [SUBGROUP_AUC, BACKGROUND_POSITIVE_SUBGROUP_NEGATIVE_AUC, BACKGROUND_NEGATIVE_SUBGROUP_POSITIVE_AUC]\n df = bias_metrics_results.set_index('subgroup')\n columns = []\n vlines = [i * len(models) for i in range(len(metrics_list))]\n for metric in metrics_list:\n for model in models:\n columns.append(metric)\n num_rows = len(df)\n num_columns = len(columns)\n fig = plt.figure(figsize=(num_columns, 0.5 * num_rows))\n ax = sns.heatmap(df[columns], annot=True, fmt='.2', cbar=True, cmap='Reds_r',\n vmin=0.5, vmax=1.0)\n ax.xaxis.tick_top()\n plt.xticks(rotation=90)\n ax.vlines(vlines, *ax.get_ylim())\n return ax\n\nplot_auc_heatmap(bias_metrics_df, [MODEL_NAME])"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
AtmaMani/pyChakras
|
udemy_ml_bootcamp/Python-for-Data-Analysis/NumPy/NumPy Arrays.ipynb
|
mit
|
[
"<a href='http://www.pieriandata.com'> <img src='../Pierian_Data_Logo.png' /></a>\n\nNumPy\nNumPy (or Numpy) is a Linear Algebra Library for Python, the reason it is so important for Data Science with Python is that almost all of the libraries in the PyData Ecosystem rely on NumPy as one of their main building blocks.\nNumpy is also incredibly fast, as it has bindings to C libraries. For more info on why you would want to use Arrays instead of lists, check out this great StackOverflow post.\nWe will only learn the basics of NumPy, to get started we need to install it!\nInstallation Instructions\nIt is highly recommended you install Python using the Anaconda distribution to make sure all underlying dependencies (such as Linear Algebra libraries) all sync up with the use of a conda install. If you have Anaconda, install NumPy by going to your terminal or command prompt and typing:\nconda install numpy\n\nIf you do not have Anaconda and can not install it, please refer to Numpy's official documentation on various installation instructions.\nUsing NumPy\nOnce you've installed NumPy you can import it as a library:",
"import numpy as np",
"Numpy has many built-in functions and capabilities. We won't cover them all but instead we will focus on some of the most important aspects of Numpy: vectors, arrays, matrices, and number generation. Let's start by discussing arrays.\nNumpy Arrays\nNumPy arrays are the main way we will use Numpy throughout the course. Numpy arrays essentially come in two flavors: vectors and matrices. Vectors are strictly 1-d arrays and matrices are 2-d (but you should note a matrix can still have only one row or one column).\nLet's begin our introduction by exploring how to create NumPy arrays.\nCreating NumPy Arrays\nFrom a Python List\nWe can create an array by directly converting a list or list of lists:",
"my_list = [1,2,3]\nmy_list\n\nnp.array(my_list)\n\nmy_matrix = [[1,2,3],[4,5,6],[7,8,9]]\nmy_matrix\n\nnp.array(my_matrix)",
"Built-in Methods\nThere are lots of built-in ways to generate Arrays\narange\nReturn evenly spaced values within a given interval.",
"np.arange(0,10)\n\nnp.arange(0,11,2)",
"zeros and ones\nGenerate arrays of zeros or ones",
"np.zeros(3)\n\nnp.zeros((5,5))\n\nnp.ones(3)\n\nnp.ones((3,3))",
"linspace\nReturn evenly spaced numbers over a specified interval.",
"np.linspace(0,10,3)\n\nnp.linspace(0,10,50)",
"eye\nCreates an identity matrix",
"np.eye(4)",
"Random\nNumpy also has lots of ways to create random number arrays:\nrand\nCreate an array of the given shape and populate it with\nrandom samples from a uniform distribution\nover [0, 1).",
"np.random.rand(2)\n\nnp.random.rand(5,5)",
"randn\nReturn a sample (or samples) from the \"standard normal\" distribution. Unlike rand which is uniform:",
"np.random.randn(2)\n\nnp.random.randn(5,5)",
"randint\nReturn random integers from low (inclusive) to high (exclusive).",
"np.random.randint(1,100)\n\nnp.random.randint(1,100,10)",
"Array Attributes and Methods\nLet's discuss some useful attributes and methods of an array:",
"arr = np.arange(25)\nranarr = np.random.randint(0,50,10)\n\narr\n\nranarr",
"Reshape\nReturns an array containing the same data with a new shape.",
"arr.reshape(5,5)",
"max,min,argmax,argmin\nThese are useful methods for finding max or min values. Or to find their index locations using argmin or argmax",
"ranarr\n\nranarr.max()\n\nranarr.argmax()\n\nranarr.min()\n\nranarr.argmin()",
"Shape\nShape is an attribute that arrays have (not a method):",
"# Vector\narr.shape\n\n# Notice the two sets of brackets\narr.reshape(1,25)\n\narr.reshape(1,25).shape\n\narr.reshape(25,1)\n\narr.reshape(25,1).shape",
"dtype\nYou can also grab the data type of the object in the array:",
"arr.dtype",
"Great Job!"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
bekbote/project_repository
|
0207_Vectors-1549598493596.ipynb
|
apache-2.0
|
[
"import numpy as np\nimport matplotlib.pyplot as plt",
"Vector plotting",
"plt.quiver(0,0,3,4)\nplt.show()\n\nplt.quiver(0,0,3,4, scale_units='xy', angles='xy', scale=1)\nplt.show()\n\nplt.quiver(0,0,3,4, scale_units='xy', angles='xy', scale=1)\nplt.xlim(-10,10)\nplt.ylim(-10,10)\nplt.show()\n\nplt.quiver(0,0,3,4, scale_units='xy', angles='xy', scale=1, color='r')\nplt.quiver(0,0,-3,4, scale_units='xy', angles='xy', scale=1, color='g')\nplt.xlim(-10,10)\nplt.ylim(-10,10)\nplt.show()\n\ndef plot_vectors(vecs):\n colors = ['r', 'b', 'g', 'y']\n i = 0\n for vec in vecs:\n plt.quiver(vec[0], vec[1], vec[2], vec[3], scale_units='xy', angles='xy', scale=1, color=colors[i%len(colors)])\n i += 1\n plt.xlim(-10,10)\n plt.ylim(-10,10)\n plt.show()\n\nplot_vectors([(0,0,3,4), (0,0,-3,4), (0,0,-3,-2), (0,0,4,-1)])",
"Vector addition and subtraction",
"vecs = [np.asarray([0,0,3,4]), np.asarray([0,0,-3,4]), np.asarray([0,0,-3,-2]), np.asarray([0,0,4,-1])]\n\nplot_vectors([vecs[0], vecs[3]])\n\nvecs[0] + vecs[3]\n\nplot_vectors([vecs[0], vecs[3], vecs[0] + vecs[3]])\n\nplot_vectors([vecs[0], vecs[0], vecs[0] + vecs[0]])\n\nplot_vectors([vecs[0], vecs[3], vecs[0] - vecs[3]])\n\nplot_vectors([-vecs[0], vecs[3], - vecs[0] + (vecs[3])])",
"Vector dot product",
"vecs = [np.asarray([0,0,5,4]), np.asarray([0,0,-3,4]), np.asarray([0,0,-3,-2]), np.asarray([0,0,4,-1])]\n\nplot_vectors(vecs)\n\na = np.asarray([5, 4])\nb = np.asarray([-3, -2])",
"$\\vec{a}\\cdot\\vec{b} = |\\vec{a}| |\\vec{b}| \\cos(\\theta) = a_x b_x + a_y b_y$",
"a_dot_b = np.dot(a, b)\n\nprint(a_dot_b)",
"$a_b = |\\vec{a}| \\cos(\\theta) = |\\vec{a}|\\frac{\\vec{a}\\cdot\\vec{b}}{|\\vec{a}||\\vec{b}|} = \\frac{\\vec{a}\\cdot\\vec{b}}{|\\vec{b}|}$",
"a_b = np.dot(a, b)/np.linalg.norm(b)\n\nprint(a_b)",
"$\\vec{a_b} = a_b \\hat{b} = a_b \\frac{\\vec{b}}{|\\vec{b}|}$",
"vec_a_b = (a_b/np.linalg.norm(b))*b\nprint(vec_a_b)\n\nplot_vectors([np.asarray([0,0,3,4]), np.asarray([0,0,4,-1]), np.asarray([0, 0, 1.88235294, -0.47058824])])",
"Linear combination\n$\\vec{c} = w_1 \\vec{a} + w_2 \\vec{b}$",
"def plot_linear_combination(a, b, w1, w2):\n \n plt.quiver(0,0,a[0],a[1], scale_units='xy', angles='xy', scale=1, color='r')\n plt.quiver(0,0,b[0],b[1], scale_units='xy', angles='xy', scale=1, color='b')\n \n c = w1 * a + w2 * b\n \n plt.quiver(0,0,c[0],c[1], scale_units='xy', angles='xy', scale=1, color='g')\n\n plt.xlim(-10,10)\n plt.ylim(-10,10)\n plt.show()\n\na = np.asarray([3, 4])\nb = np.asarray([1.5, 2])\n\nplot_linear_combination(a, b, -1, 1)\n\ndef plot_span(a, b):\n \n for i in range(1000):\n w1 = (np.random.random(1) - 0.5) * 3\n w2 = (np.random.random(1) - 0.5) * 3\n c = w1 * a + w2 * b\n plt.quiver(0,0,c[0],c[1], scale_units='xy', angles='xy', scale=1, color='g')\n \n plt.quiver(0,0,a[0],a[1], scale_units='xy', angles='xy', scale=1, color='r')\n plt.quiver(0,0,b[0],b[1], scale_units='xy', angles='xy', scale=1, color='b')\n\n plt.xlim(-10,10)\n plt.ylim(-10,10)\n plt.show()\n\nplot_span(a, b)"
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
herruzojm/udacity-deep-learning
|
tv-script-generation/.ipynb_checkpoints/dlnd_tv_script_generation-Copy1-checkpoint.ipynb
|
mit
|
[
"TV Script Generation\nIn this project, you'll generate your own Simpsons TV scripts using RNNs. You'll be using part of the Simpsons dataset of scripts from 27 seasons. The Neural Network you'll build will generate a new TV script for a scene at Moe's Tavern.\nGet the Data\nThe data is already provided for you. You'll be using a subset of the original dataset. It consists of only the scenes in Moe's Tavern. This doesn't include other versions of the tavern, like \"Moe's Cavern\", \"Flaming Moe's\", \"Uncle Moe's Family Feed-Bag\", etc..",
"\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\nimport helper\n\ndata_dir = './data/simpsons/moes_tavern_lines.txt'\ntext = helper.load_data(data_dir)\n# Ignore notice, since we don't use it for analysing the data\ntext = text[81:]",
"Explore the Data\nPlay around with view_sentence_range to view different parts of the data.",
"view_sentence_range = (10, 20)\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\nimport numpy as np\n\nprint('Dataset Stats')\nprint('Roughly the number of unique words: {}'.format(len({word: None for word in text.split()})))\nscenes = text.split('\\n\\n')\nprint('Number of scenes: {}'.format(len(scenes)))\nsentence_count_scene = [scene.count('\\n') for scene in scenes]\nprint('Average number of sentences in each scene: {}'.format(np.average(sentence_count_scene)))\n\nsentences = [sentence for scene in scenes for sentence in scene.split('\\n')]\nprint('Number of lines: {}'.format(len(sentences)))\nword_count_sentence = [len(sentence.split()) for sentence in sentences]\nprint('Average number of words in each line: {}'.format(np.average(word_count_sentence)))\n\nprint()\nprint('The sentences {} to {}:'.format(*view_sentence_range))\nprint('\\n'.join(text.split('\\n')[view_sentence_range[0]:view_sentence_range[1]]))",
"Implement Preprocessing Functions\nThe first thing to do to any dataset is preprocessing. Implement the following preprocessing functions below:\n- Lookup Table\n- Tokenize Punctuation\nLookup Table\nTo create a word embedding, you first need to transform the words to ids. In this function, create two dictionaries:\n- Dictionary to go from the words to an id, we'll call vocab_to_int\n- Dictionary to go from the id to word, we'll call int_to_vocab\nReturn these dictionaries in the following tuple (vocab_to_int, int_to_vocab)",
"import numpy as np\nimport problem_unittests as tests\nfrom collections import Counter\n\ndef create_lookup_tables(text):\n \"\"\"\n Create lookup tables for vocabulary\n :param text: The text of tv scripts split into words\n :return: A tuple of dicts (vocab_to_int, int_to_vocab)\n \"\"\"\n word_counts = Counter(text)\n sorted_vocab = sorted(word_counts, key=word_counts.get, reverse=True)\n vocab_to_int = {word: idx for idx, word in enumerate(sorted_vocab)} \n int_to_vocab = {idx: word for idx, word in enumerate(sorted_vocab)} \n \n return (vocab_to_int, int_to_vocab)\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_create_lookup_tables(create_lookup_tables)",
"Tokenize Punctuation\nWe'll be splitting the script into a word array using spaces as delimiters. However, punctuation like periods and exclamation marks makes it hard for the neural network to distinguish between the word \"bye\" and \"bye!\".\nImplement the function token_lookup to return a dict that will be used to tokenize symbols like \"!\" into \"||Exclamation_Mark||\". Create a dictionary for the following symbols where the symbol is the key and value is the token:\n- Period ( . )\n- Comma ( , )\n- Quotation Mark ( \" )\n- Semicolon ( ; )\n- Exclamation mark ( ! )\n- Question mark ( ? )\n- Left Parentheses ( ( )\n- Right Parentheses ( ) )\n- Dash ( -- )\n- Return ( \\n )\nThis dictionary will be used to tokenize the symbols and add the delimiter (space) around them. This separates each symbol as its own word, making it easier for the neural network to predict the next word. Make sure you don't use a token that could be confused as a word. Instead of using the token \"dash\", try using something like \"||dash||\".",
"def token_lookup():\n \"\"\"\n Generate a dict to turn punctuation into a token.\n :return: Tokenize dictionary where the key is the punctuation and the value is the token\n \"\"\"\n punctuation = {}\n punctuation['.'] = '<PERIOD>'\n punctuation[','] = '<COMMA>'\n punctuation['\"'] = '<QUOTATION_MARK>'\n punctuation[';'] = '<SEMICOLON>'\n punctuation['!'] = '<EXCLAMATION_MARK>'\n punctuation['?'] = '<QUESTION_MARK>'\n punctuation['('] = '<LEFT_PAREN>'\n punctuation[')'] = '<RIGHT_PAREN>'\n punctuation['--'] = '<DASH>'\n punctuation['\\n'] = '<NEW_LINE>'\n return punctuation\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_tokenize(token_lookup)",
"Preprocess all the data and save it\nRunning the code cell below will preprocess all the data and save it to file.",
"\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\n# Preprocess Training, Validation, and Testing Data\nhelper.preprocess_and_save_data(data_dir, token_lookup, create_lookup_tables)",
"Check Point\nThis is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.",
"\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\nimport helper\nimport numpy as np\nimport problem_unittests as tests\n\nint_text, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()",
"Build the Neural Network\nYou'll build the components necessary to build a RNN by implementing the following functions below:\n- get_inputs\n- get_init_cell\n- get_embed\n- build_rnn\n- build_nn\n- get_batches\nCheck the Version of TensorFlow and Access to GPU",
"\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\nfrom distutils.version import LooseVersion\nimport warnings\nimport tensorflow as tf\n\n# Check TensorFlow Version\nassert LooseVersion(tf.__version__) >= LooseVersion('1.0'), 'Please use TensorFlow version 1.0 or newer'\nprint('TensorFlow Version: {}'.format(tf.__version__))\n\n# Check for a GPU\nif not tf.test.gpu_device_name():\n warnings.warn('No GPU found. Please use a GPU to train your neural network.')\nelse:\n print('Default GPU Device: {}'.format(tf.test.gpu_device_name()))",
"Input\nImplement the get_inputs() function to create TF Placeholders for the Neural Network. It should create the following placeholders:\n- Input text placeholder named \"input\" using the TF Placeholder name parameter.\n- Targets placeholder\n- Learning Rate placeholder\nReturn the placeholders in the following the tuple (Input, Targets, LearningRate)",
"def get_inputs():\n \"\"\"\n Create TF Placeholders for input, targets, and learning rate.\n :return: Tuple (input, targets, learning rate)\n \"\"\"\n inputs = tf.placeholder(tf.int32, [None, None], name='input')\n targets = tf.placeholder(tf.int32, [None, None])\n learning_rate = tf.placeholder(tf.float32) # learning rates are floats, not ints\n return (inputs, targets, learning_rate)\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_get_inputs(get_inputs)",
"Build RNN Cell and Initialize\nStack one or more BasicLSTMCells in a MultiRNNCell.\n- The RNN size should be set using rnn_size\n- Initialize Cell State using the MultiRNNCell's zero_state() function\n - Apply the name \"initial_state\" to the initial state using tf.identity()\nReturn the cell and initial state in the following tuple (Cell, InitialState)",
"lstm_layers = 1\n\ndef get_init_cell(batch_size, rnn_size):\n \"\"\"\n Create an RNN Cell and initialize it.\n :param batch_size: Size of batches\n :param rnn_size: Size of RNNs\n :return: Tuple (cell, initialize state)\n \"\"\"\n lstm = tf.contrib.rnn.BasicLSTMCell(rnn_size)\n cell = tf.contrib.rnn.MultiRNNCell([lstm]*lstm_layers)\n initial_state = cell.zero_state(batch_size, tf.float32) # cell state holds floats\n initial_state = tf.identity(initial_state, name='initial_state')\n return cell, initial_state\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_get_init_cell(get_init_cell)",
"Word Embedding\nApply embedding to input_data using TensorFlow. Return the embedded sequence.",
"def get_embed(input_data, vocab_size, embed_dim):\n \"\"\"\n Create embedding for <input_data>.\n :param input_data: TF placeholder for text input.\n :param vocab_size: Number of words in vocabulary.\n :param embed_dim: Number of embedding dimensions\n :return: Embedded input.\n \"\"\"\n # Embedding matrix initialized uniformly in [-1, 1), looked up per word id\n embedding = tf.Variable(tf.random_uniform((vocab_size, embed_dim), -1, 1))\n embed = tf.nn.embedding_lookup(embedding, input_data)\n return embed\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_get_embed(get_embed)",
"Build RNN\nYou created a RNN Cell in the get_init_cell() function. Time to use the cell to create a RNN.\n- Build the RNN using the tf.nn.dynamic_rnn()\n - Apply the name \"final_state\" to the final state using tf.identity()\nReturn the outputs and final state in the following tuple (Outputs, FinalState)",
"def build_rnn(cell, inputs):\n \"\"\"\n Create a RNN using a RNN Cell\n :param cell: RNN Cell\n :param inputs: Input text data\n :return: Tuple (Outputs, Final State)\n \"\"\"\n outputs, final_state = tf.nn.dynamic_rnn(cell, inputs, dtype=tf.float32)\n final_state = tf.identity(final_state, name='final_state')\n return outputs, final_state\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_build_rnn(build_rnn)",
"Build the Neural Network\nApply the functions you implemented above to:\n- Apply embedding to input_data using your get_embed(input_data, vocab_size, embed_dim) function.\n- Build RNN using cell and your build_rnn(cell, inputs) function.\n- Apply a fully connected layer with a linear activation and vocab_size as the number of outputs.\nReturn the logits and final state in the following tuple (Logits, FinalState)",
"def build_nn(cell, rnn_size, input_data, vocab_size):\n \"\"\"\n Build part of the neural network\n :param cell: RNN cell\n :param rnn_size: Size of rnns\n :param input_data: Input data\n :param vocab_size: Vocabulary size\n :return: Tuple (Logits, FinalState)\n \"\"\"\n # TODO: Implement Function\n return None, None\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_build_nn(build_nn)",
"Batches\nImplement get_batches to create batches of input and targets using int_text. The batches should be a Numpy array with the shape (number of batches, 2, batch size, sequence length). Each batch contains two elements:\n- The first element is a single batch of input with the shape [batch size, sequence length]\n- The second element is a single batch of targets with the shape [batch size, sequence length]\nIf you can't fill the last batch with enough data, drop the last batch.\nFor exmple, get_batches([1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15], 2, 3) would return a Numpy array of the following:\n```\n[\n # First Batch\n [\n # Batch of Input\n [[ 1 2 3], [ 7 8 9]],\n # Batch of targets\n [[ 2 3 4], [ 8 9 10]]\n ],\n# Second Batch\n [\n # Batch of Input\n [[ 4 5 6], [10 11 12]],\n # Batch of targets\n [[ 5 6 7], [11 12 13]]\n ]\n]\n```",
"def get_batches(int_text, batch_size, seq_length):\n \"\"\"\n Return batches of input and target\n :param int_text: Text with the words replaced by their ids\n :param batch_size: The size of batch\n :param seq_length: The length of sequence\n :return: Batches as a Numpy array\n \"\"\"\n # TODO: Implement Function\n return None\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_get_batches(get_batches)",
"Neural Network Training\nHyperparameters\nTune the following parameters:\n\nSet num_epochs to the number of epochs.\nSet batch_size to the batch size.\nSet rnn_size to the size of the RNNs.\nSet seq_length to the length of sequence.\nSet learning_rate to the learning rate.\nSet show_every_n_batches to the number of batches the neural network should print progress.",
"# Number of Epochs\nnum_epochs = None\n# Batch Size\nbatch_size = None\n# RNN Size\nrnn_size = None\n# Sequence Length\nseq_length = None\n# Learning Rate\nlearning_rate = None\n# Show stats for every n number of batches\nshow_every_n_batches = None\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\nsave_dir = './save'",
"Build the Graph\nBuild the graph using the neural network you implemented.",
"\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\nfrom tensorflow.contrib import seq2seq\n\ntrain_graph = tf.Graph()\nwith train_graph.as_default():\n vocab_size = len(int_to_vocab)\n input_text, targets, lr = get_inputs()\n input_data_shape = tf.shape(input_text)\n cell, initial_state = get_init_cell(input_data_shape[0], rnn_size)\n logits, final_state = build_nn(cell, rnn_size, input_text, vocab_size)\n\n # Probabilities for generating words\n probs = tf.nn.softmax(logits, name='probs')\n\n # Loss function\n cost = seq2seq.sequence_loss(\n logits,\n targets,\n tf.ones([input_data_shape[0], input_data_shape[1]]))\n\n # Optimizer\n optimizer = tf.train.AdamOptimizer(lr)\n\n # Gradient Clipping\n gradients = optimizer.compute_gradients(cost)\n capped_gradients = [(tf.clip_by_value(grad, -1., 1.), var) for grad, var in gradients]\n train_op = optimizer.apply_gradients(capped_gradients)",
"Train\nTrain the neural network on the preprocessed data. If you have a hard time getting a good loss, check the forms to see if anyone is having the same problem.",
"\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\nbatches = get_batches(int_text, batch_size, seq_length)\n\nwith tf.Session(graph=train_graph) as sess:\n sess.run(tf.global_variables_initializer())\n\n for epoch_i in range(num_epochs):\n state = sess.run(initial_state, {input_text: batches[0][0]})\n\n for batch_i, (x, y) in enumerate(batches):\n feed = {\n input_text: x,\n targets: y,\n initial_state: state,\n lr: learning_rate}\n train_loss, state, _ = sess.run([cost, final_state, train_op], feed)\n\n # Show every <show_every_n_batches> batches\n if (epoch_i * len(batches) + batch_i) % show_every_n_batches == 0:\n print('Epoch {:>3} Batch {:>4}/{} train_loss = {:.3f}'.format(\n epoch_i,\n batch_i,\n len(batches),\n train_loss))\n\n # Save Model\n saver = tf.train.Saver()\n saver.save(sess, save_dir)\n print('Model Trained and Saved')",
"Save Parameters\nSave seq_length and save_dir for generating a new TV script.",
"\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\n# Save parameters for checkpoint\nhelper.save_params((seq_length, save_dir))",
"Checkpoint",
"\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\nimport tensorflow as tf\nimport numpy as np\nimport helper\nimport problem_unittests as tests\n\n_, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()\nseq_length, load_dir = helper.load_params()",
"Implement Generate Functions\nGet Tensors\nGet tensors from loaded_graph using the function get_tensor_by_name(). Get the tensors using the following names:\n- \"input:0\"\n- \"initial_state:0\"\n- \"final_state:0\"\n- \"probs:0\"\nReturn the tensors in the following tuple (InputTensor, InitialStateTensor, FinalStateTensor, ProbsTensor)",
"def get_tensors(loaded_graph):\n \"\"\"\n Get input, initial state, final state, and probabilities tensor from <loaded_graph>\n :param loaded_graph: TensorFlow graph loaded from file\n :return: Tuple (InputTensor, InitialStateTensor, FinalStateTensor, ProbsTensor)\n \"\"\"\n # TODO: Implement Function\n return None, None, None, None\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_get_tensors(get_tensors)",
"Choose Word\nImplement the pick_word() function to select the next word using probabilities.",
"def pick_word(probabilities, int_to_vocab):\n \"\"\"\n Pick the next word in the generated text\n :param probabilities: Probabilites of the next word\n :param int_to_vocab: Dictionary of word ids as the keys and words as the values\n :return: String of the predicted word\n \"\"\"\n # TODO: Implement Function\n return None\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_pick_word(pick_word)",
"Generate TV Script\nThis will generate the TV script for you. Set gen_length to the length of TV script you want to generate.",
"gen_length = 200\n# homer_simpson, moe_szyslak, or Barney_Gumble\nprime_word = 'moe_szyslak'\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\nloaded_graph = tf.Graph()\nwith tf.Session(graph=loaded_graph) as sess:\n # Load saved model\n loader = tf.train.import_meta_graph(load_dir + '.meta')\n loader.restore(sess, load_dir)\n\n # Get Tensors from loaded model\n input_text, initial_state, final_state, probs = get_tensors(loaded_graph)\n\n # Sentences generation setup\n gen_sentences = [prime_word + ':']\n prev_state = sess.run(initial_state, {input_text: np.array([[1]])})\n\n # Generate sentences\n for n in range(gen_length):\n # Dynamic Input\n dyn_input = [[vocab_to_int[word] for word in gen_sentences[-seq_length:]]]\n dyn_seq_length = len(dyn_input[0])\n\n # Get Prediction\n probabilities, prev_state = sess.run(\n [probs, final_state],\n {input_text: dyn_input, initial_state: prev_state})\n \n pred_word = pick_word(probabilities[dyn_seq_length-1], int_to_vocab)\n\n gen_sentences.append(pred_word)\n \n # Remove tokens\n tv_script = ' '.join(gen_sentences)\n for key, token in token_dict.items():\n ending = ' ' if key in ['\\n', '(', '\"'] else ''\n tv_script = tv_script.replace(' ' + token.lower(), key)\n tv_script = tv_script.replace('\\n ', '\\n')\n tv_script = tv_script.replace('( ', '(')\n \n print(tv_script)",
"The TV Script is Nonsensical\nIt's ok if the TV script doesn't make any sense. We trained on less than a megabyte of text. In order to get good results, you'll have to use a smaller vocabulary or get more data. Luckly there's more data! As we mentioned in the begging of this project, this is a subset of another dataset. We didn't have you train on all the data, because that would take too long. However, you are free to train your neural network on all the data. After you complete the project, of course.\nSubmitting This Project\nWhen submitting this project, make sure to run all the cells before saving the notebook. Save the notebook file as \"dlnd_tv_script_generation.ipynb\" and save it as a HTML file under \"File\" -> \"Download as\". Include the \"helper.py\" and \"problem_unittests.py\" files in your submission."
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
DLTK/DLTK
|
examples/tutorials/03_building_a_model_fn.ipynb
|
apache-2.0
|
[
"Building a model function\ndltk/examples/applications are build on Tensorflow's high-level API tf.Estimator. To use this API, you will need to define a model function that specifies the computational graph, training function, etc. Great resources can be found here and here.\nCreating a test image\nTo start, we build a read_fn (c.f. dltk/examples/tutorials/01_Reading_data.ipynb) that we can\nuse for feeding data during training. To demonstrate, we create a toy example with overlapping\ncircles in a noisy image:",
"import SimpleITK as sitk\nimport tensorflow as tf\nimport os\n\nfrom dltk.io.augmentation import *\nfrom dltk.io.preprocessing import *\n\ntf.logging.set_verbosity(tf.logging.ERROR)\n\nos.environ['CUDA_VISIBLE_DEVICES'] = '0'\n\n# Generate a simple toy dataset\nim_width = 256\nim_height = 256\nnum_imgs = 200\n\ndef create_test_image(width, height, num_objs=12, rad_max=30):\n '''Return a noisy 2D image with `num_objs' circles and a 2D mask image.'''\n image = np.zeros((width, height))\n\n for i in range(num_objs):\n x = np.random.randint(rad_max, width - rad_max)\n y = np.random.randint(rad_max, height - rad_max)\n rad = np.random.randint(10, rad_max)\n \n spy, spx = np.ogrid[-x:width - x, -y:height - y]\n circle = (spx * spx + spy * spy) <= rad * rad\n image[circle] = np.random.random() * 0.5 + 0.5\n\n norm = np.random.uniform(0, 0.25, size=image.shape)\n\n return np.maximum(image, norm), (image > 0).astype(np.int32) \n\n# Note, that the read_fn does not rely on the `file_references` in this case and \n# just ignores it. Other than that, we create a similar `read_fn` as in \n# `dltk/examples/tutorials/01_Reading_data.ipynb`:\ndef read_fn(file_references, mode, params=None):\n \n im, mask = create_test_image(im_width,im_height)\n \n yield {'features': {'x': im[np.newaxis, :, :, np.newaxis]},\n 'labels': {'y': mask[np.newaxis]}}",
"(OPTIONAL) Building plotting hooks for jupyter notebooks\nYou can skip this part, if this is not relevant for you. The next code block is only relevant important for logging in notebooks. However, it might be an interesting example for how to create custom tf.SessionRunHooks. We define a hook for Tensorflow and call it with every session run. Here, we tell the tf.Session to fetch us certain tensors. Those can then be extracted from run_values in the after_run function. We then use that together with the IPython display bits to plot the progress during training.",
"from matplotlib import pyplot as plt\nfrom IPython import display\n\n%matplotlib inline\n\n\nclass NotebookLoggingHook(tf.train.SessionRunHook):\n def __init__(self, fetches):\n self.fetches = fetches\n self.loss = []\n \n def before_run(self, run_context):\n # Add the tensors to fetch to the session\n return tf.train.SessionRunArgs(self.fetches)\n \n def after_run(self, run_context, run_values):\n # Extract the results of the fetched tensors\n fetch_dict = run_values.results\n \n # Assume to have {'loss': scalar, 'input': img, 'output': img, 'truth': img}\n self.loss.append(fetch_dict['loss'])\n \n # Plot stuff using `matplotlib`\n f, axarr = plt.subplots(2, 2, figsize=(16,8))\n axarr[0,0].imshow(np.squeeze(fetch_dict['x'][0,0,:,:,0]), cmap='gray')\n axarr[0,0].set_title('Input: x')\n axarr[0,0].axis('off')\n\n axarr[0,1].plot(self.loss)\n axarr[0,1].set_title('Crossentropy loss')\n axarr[0,1].set_yscale('log')\n axarr[0,1].axis('on')\n\n axarr[1,0].imshow(np.squeeze(fetch_dict['y_'][0,0,:,:]), cmap='gray', vmin=0, vmax=1)\n axarr[1,0].set_title('Prediction: y_')\n axarr[1,0].axis('off')\n\n axarr[1,1].imshow(np.squeeze(fetch_dict['y'][0,0,:,:]), cmap='gray', vmin=0, vmax=1)\n axarr[1,1].set_title('Truth: y')\n axarr[1,1].axis('off')\n\n display.clear_output(wait=True)\n display.display(plt.gcf())\n plt.close(f)",
"Building a model_fn\nHere, we define a model function that requires to have the arguments features, labels, mode, and params. The function has to return a tf.estimator.EstimatorSpec that is used by a tf.estimator.Estimator to perform training, predtion, etc. The EstimatorSpec holds additional information, such as the optimiser, evaluation metrics, etc.",
"# Create a `NotebookLoggingHook`\nnl_hook = NotebookLoggingHook(None)\nhooks = [nl_hook]\n\n# Create the model function with its required arguments `features`, `labels`, \n# `mode`, and `params`:\ndef model_fn(features, labels, mode, params):\n \n # 1. Define a model to train and its outputs. It can be any model you \n # would like to create, however, here we import a pre-built \n # `dltk/networks/segmentation/fcn`:\n from dltk.networks.segmentation.fcn import residual_fcn_3d\n net_output_ops = residual_fcn_3d(features['x'], 2, num_res_units=1, filters=(16, 32, 64),\n strides=((1, 1, 1), (1, 2, 2), (1, 2, 2)), mode=mode)\n \n # 1.1 Generate predictions only (for `ModeKeys.PREDICT`)\n if mode == tf.estimator.ModeKeys.PREDICT:\n return tf.estimator.EstimatorSpec(\n mode=mode,\n predictions=net_output_ops,\n export_outputs={'out': tf.estimator.export.PredictOutput(net_output_ops)})\n \n # 2. Set up a loss function\n loss = tf.losses.sparse_softmax_cross_entropy(labels['y'],\n net_output_ops['logits'])\n \n # 3. Define a training op and ops for updating moving averages \n # (e.g. for batch normalisation:\n global_step = tf.train.get_global_step()\n optimiser = tf.train.AdamOptimizer(learning_rate=params[\"learning_rate\"],\n epsilon=1e-5)\n \n update_ops = tf.get_collection(tf.GraphKeys.UPDATE_OPS)\n with tf.control_dependencies(update_ops):\n train_op = optimiser.minimize(loss, global_step=global_step)\n \n # 4.1 (Optional) Create custom summaries to be plotted with our\n # custom `NotebookLoggingHook`\n my_notebook_fetches = {}\n my_notebook_fetches['loss'] = loss\n my_notebook_fetches['x'] = features['x']\n my_notebook_fetches['y'] = labels['y']\n my_notebook_fetches['y_'] = net_output_ops['y_']\n nl_hook.fetches = my_notebook_fetches\n \n # 5. Return EstimatorSpec object\n return tf.estimator.EstimatorSpec(mode=mode,\n predictions=net_output_ops, \n loss=loss,\n train_op=train_op,\n eval_metric_ops=None)",
"Putting it all together to train a model\nLet's create a dltk reader as in dltk/examples/tutorials/01_Reading_data.ipynb and pass the file_references as None (see the test image creator above):",
"from dltk.io.abstract_reader import Reader\n\n# Set up a data reader to handle the file i/o. \nreader_example_shapes = {'features': {'x': [1, im_width, im_height, 1]},\n 'labels': {'y': [1, im_width, im_height]}}\n\nreader = Reader(read_fn, {'features': {'x': tf.float32},\n 'labels': {'y': tf.int32}})\n\ninput_fn, qinit_hook = reader.get_inputs(file_references=None,\n mode=tf.estimator.ModeKeys.TRAIN,\n example_shapes=reader_example_shapes)",
"Now, we use our custom model_fn to build a tf.estimator.Estimator. It does automatic checkpointing if you set a model_dir and the params argument is passed directly to the model_fn:",
"nn = tf.estimator.Estimator(model_fn=model_fn, \n model_dir=None, \n params={\"learning_rate\": 0.001})",
"Finally, we train using the estimator by simply calling the estimator.train function, passing the input_fn created and the custom NotebookLoggingHook for logging. Here, we train the model for 200 steps:",
"_ = nn.train(input_fn=input_fn, hooks=hooks + [qinit_hook], steps=200)",
"Similarly, we can evaluate the model:",
"nl_hook.loss = []\neval_input_fn, eval_qinit_hook = reader.get_inputs(file_references=None, \n mode=tf.estimator.ModeKeys.EVAL,\n example_shapes=reader_example_shapes)\n\n_ = nn.evaluate(input_fn=eval_input_fn, hooks=hooks + [eval_qinit_hook], steps=1)"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
ozorich/phys202-2015-work
|
assignments/assignment04/TheoryAndPracticeEx01.ipynb
|
mit
|
[
"Theory and Practice of Visualization Exercise 1\nImports",
"from IPython.display import Image",
"Graphical excellence and integrity\nFind a data-focused visualization on one of the following websites that is a positive example of the principles that Tufte describes in The Visual Display of Quantitative Information.\n\nVox\nUpshot\n538\nBuzzFeed\n\nUpload the image for the visualization to this directory and display the image inline in this notebook.",
"# Add your filename and uncomment the following line:\nImage(filename='drugs_and_poverty_Chicago.0.png')",
"Describe in detail the ways in which the visualization exhibits graphical integrity and excellence:\nThis visual has graphical integrity because it acurrately displays its data and qoutes where the data it represents comes from. It labels the different representations on the graphic itself. The visual explains what each color represents and which visualization represents which group of data to avoid any ambiguity.\nThis visual has graphical excellence because it balances data representation and visual appeal of design. It has the three data sets and invites comparison of the data. It makes a large data set visually understandable. Avoids any extraneous ink usage, keeping white space where data isn't represented."
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
mmaelicke/felis_python1
|
felis_python1/lectures/03_Libraries and Conditions.ipynb
|
mit
|
[
"<h1><i><center>Python 1</center></i></h1>\n\nSection 3\nNative Libraries\nIn Python we have lots of functions that are available for usage directly. That's what we saw last section when we talked about Built-in functions. However, there are some functions we need, which are not available right away. In this case, we need to import them, like you did in the second section with the math library.",
"'''The first thing to do, is to use the 'import' built-in function to\nmake specific libraries available for usage. The 'import' function can\nbe used in different aways. Let's see a few of them.'''\n\n# Let's use the math package that you already know\nimport math #simple way to do it\nimport math as mt #import library with shorter name\nfrom math import sqrt #import specific functions from a library\nfrom math import sqrt as sq #import specific functions from a library in abbreviated format\nfrom math import * #import everything inside a library \n\n# But what is the difference between them?\n\n#Sandbox",
"math",
"# So, let's use some functions from math library\nimport math\n\n#Square Root\nn = input('Enter a value: ')\nprint 'The square root of {} is {}'.format(n,math.sqrt(n))\n\n# Trigonometry\n\nn = input('Enter a value: ')\nprint 'The sine of {} is {}'.format(n,math.sin(n))\nprint 'The cossine of {} is {}'.format(n,math.cos(n))\nprint 'The tangent of {} is {}'.format(n,math.tan(n))\n\n# Log\nn = input('Enter a value: ')\nprint 'The natural logarithm of {} is {}'.format(n,math.log(n))\nprint 'The base-10 logarithm of {} is {}'.format(n,math.log10(n))\n\n# Constants\nprint 'The constant pi is equal to {}.'.format(math.pi)\n# Euler's number: the unique number whose natural logarithm is equal to one.\nprint 'The constant e is equal to {}.'.format(math.e)",
"That are other calculations that can be performed. However, it is necessary to have some data structure. We will see that again in further sections.\nos",
"# Let's import the library first\nimport os\n\n# getcwd and chdir\n# this function sets your home directory\n\npath = r\"C:\\Users\\Pereira\\Desktop\"\nprint 'Former working environment:',os.getcwd()# get the path ot the current working directory\nos.chdir(path)\nprint 'Current working environment:',os.getcwd()\n\n# mkdir\n# this functions creates a new folder in the working environment\npath = r\"second_folder\"\nos.mkdir(path)\n\n# rename\npath = r\"C:\\Users\\Pereira\\Desktop\\first_folder\"\nos.rename(path, 'New_name')\n\n# remove\n# deletes the file that was just created\npath = r\"C:\\Users\\Pereira\\Desktop\\New_name\"\nos.removedirs(path)\n\n# open\nstring = 'Python is awesome!!'\nfile_ = open('test.txt', 'w')\nfile_.write(string)\nfile_.close()\n\n# remove\npath = r\"C:\\Users\\Pereira\\Desktop\\test.txt\"\nos.remove(path)\n\n# read\nfile_ = open('test.txt', 'r')\nprint file_.read()\nfile_.close() # very important\n\n# append\nstring = '\\nIt really is!!'\nfile_ = open('test.txt', 'a')\nfile_.write(string)\nfile_.close()",
"<body>\n<b>The os.open function has different modes in order to do different things.</b>\n<p>‘r’ – Read mode which is used when the file is only being read </p>\n<p></p>\n<p>‘w’ – Write mode which is used to edit and write new information to the file (any existing files with the same name will be erased when this mode is activated)</p> \n<p></p>\n<p>‘a’ – Appending mode, which is used to add new data to the end of the file; that is new information is automatically amended to the end</p> \n<p></p>\n<p>‘r+’ – Special read and write mode, which is used to handle both actions when working with a file</p>",
"# listdir\n# provide a list of files in a determined path given\npath = r'F:\\Joao\\Dropbox\\Python_I\\02_SS_2017\\material\\exercises'\nos.listdir(path)",
"glob",
"from glob import glob\npaths = glob(r'F:\\Joao\\Dropbox\\Python_I\\02_SS_2017\\material\\exercises\\*1.docx')\n# Don't worry if you don't get that. We will talk about it next section.\nfor path in paths:\n print path",
"time",
"import time\nprint 'GMT time\\n',time.gmtime()\nprint '\\nLocal time\\n',time.localtime()\n\ntime.asctime()\n\ntime.strftime(\"%a, %d %b %Y %H:%M:%S +0000\", time.gmtime())\n# https://docs.python.org/2/library/time.html#time.strftime\n\ntime.gmtime()",
"datetime",
"from datetime import datetime\nstartTime = datetime.now()\nlen(str(2**10000000))\nprint datetime.now() - startTime",
"random",
"import random\nrandom.randint(0,100)",
"Conditions\nConditions are already pretty well known, having the common sysntax 'if' and 'else'. In Python that is not different, but with the addition of another one called 'elif'. Now let's see how to use it in Python. But before we talk about conditions, we have to adress operators and boolean variables.\nIn Python we have operators, which can be represented by:\n - equal (=)\n - comparison (==)\n - bigger than (>)\n - smaller than (<)\n - bigger or equal than (>=)\n - smaller or equal than (<=)\n - different than (!=)\nWhen these operators are used, a boolean (True or False) variable is generated. Let's have a look.",
"print 4 > 2\nprint 2 < 3.7\nprint 2 > 5\nprint 2 == 7 # Be careful here\nprint 5 == 5 # Be careful here",
"When working with 'if' and 'else' statements, conditions are evaluated, generating boolean values as a result. Let's see.",
"a = 4\nb = 'Python'\nif a == b: # watch the obligatory element ':'\n print 'Variable a is equal variable b.'\nelse: # watch the obligatory element ':'\n print 'Variable a is different than variable b.'\n\nif a != b: # watch the obligatory element ':'\n print 'Variable a is different than variable b.'\nelse: # watch the obligatory element ':'\n print 'Variable a is equal variable b.'\n\nb = 'Python'\nif 'P' in b: # Try changing the letter 'y'\n print 'Yes, \"p\" is in {}.'.format(b)\nelse:\n print 'No, \"p\" isn\\'t in {}.'.format(b) # Have you asked yourself why this '\\' is there?",
"Now, let's take a look at how to use 'elif'. Working with intervals is a good way to understand it.",
"# The 'elif' keyword allows us to add extra conditions to our if statement.\na = 5\nb = 2\n\nif a > b:\n print 'The variable a is bigger than the variable b.'\nelif a < b:\n print 'The variable a is smaller than the variable b.'\nelse:\n print 'The variable a is equal to the variable b.'\n\n# Let's make the example before a bit better by asking the user for the numbers\na = input('Provide a number: ')\nb = input('Provide a second number: ')\n\nif a > b:\n print '{} is bigger than {}.'.format(a,b)\nelif a < b:\n print '{} is smaller than {}.'.format(a,b)\nelse:\n print '{} is equal to {}.'.format(a,b)\n\n# Task 01: ask the user for a number and use an if statement to see if this number is even or odd\na = input('Please, enter a number: ')\nif a%2==0:\n print 'The number {} is even.'.format(a)\nelse:\n print 'The number {} is odd.'.format(a)\n\n# Task 02: create a game that asks the user for a number. Then, tell the user if he/she won the game.\na = input('Guess a number between 0 and 20: ')\nnumber = 17\nif a==number:\n print 'Great!! You won!!'\nelse:\n print 'Haha...You lost!! =)'\n\n# Task 03: create a login/password system where \n# you ask the user for his/her login and password. \n# If either password or login are wrong, print \n# a message with an error.\nlogin = raw_input('Login: ')\npassw = raw_input('Password: ')\nLogin = 'user'\nPassw = 'admin'\nif login == Login and passw==Passw:\n print 'Access granted!'\nelif login != Login and passw==Passw:\n print 'Your login is wrong. Try again!!'\nelif login == Login and passw!=Passw:\n print 'Your password is wrong. Try again!!'\nelse:\n print 'Your login and password are wrong. Try again!!'\n\n# Task 04: write a code that asks the user to write a sentence.\n# If the sentence has more than 10 words, print a message that the\n# sentence is valid. If the sentence has between 5 and 10, print a\n# message that the sentence is short but can be used. If the sentence\n# has fewer than 5, print a message that the sentence is too short and\n# won't be used. Use the 'string.split()' function to help you with that.\n#string = 'Today we are learning about native functions and conditions in Python.'.split()\n#string = 'Today we are learning about native functions'.split()\nstring = 'Today we are learning something'.split()\n\nif len(string)>=10:\n print 'The sentence is valid.'\nelif len(string)>=5 and len(string)<10:\n print 'The sentence is short, but valid.'\nelse:\n print 'The sentence is not valid.'\n\n# Task 05: create a coin conversion tool. Ask the user to provide a value \n# either in dollars or euros in a specific format (US$ XX,XX or EUR XX,XX) \n# and then convert it to the other option.\n# For example: \n# user's value: US$ XX,XX\n# other option: EUR XX,XX\n# Your tool must recognize automatically the type of coin entered and then\n# convert the value using the rates:\n# 1 dollar = 0,919 EUR\n# 1 EUR = 1,088 dollar\n# Print the final result in a nice format. For example:\n# US$ XX,XX in euros is EUR XX,XX. or EUR XX,XX in dollars is US$ XX,XX.\n###########################################################################\n\nm = raw_input('Please provide a value to convert. Please provide the input like US$ 23,12 or EUR 23,12: ')\nm_ = float(m[4:].replace(',','.'))\n\nif m[:3]=='US$':\n print 'US$ {} in euros is EUR {}.'.format(m[4:], m_*0.919)\nelif m[:3]=='EUR':\n print 'EUR {} in dollars is US$ {}.'.format(m[4:], m_*1.088)\nelse:\n print 'The value entered is invalid. Please try again.'",
"Python still have other operators called 'not', 'and' and 'or' (NAO). An easy way to understand is to use boolean variables. Let's try.",
"# Let's compare True and False using the operators.\nprint 'True and True is',True and True\nprint '0 and True is',False and True\nprint 'False and False is',False and False\nprint 'True or True is',True or True\nprint '1 or 0 is',False or True\nprint '0 or False is',False or False\nprint 'not True is',not True\nprint 'not 0 is',not False\n\n# You can also combine them.\nprint 'True and not True is',True and not True\nprint 'True and False or True is',True and False or True\nprint 'not True or True and not True and True is', not True or True and not True and True\n\nTrue and not True or False and True or False and not False and not True or False and True or not False and True\n\n# Operator can also be used in if statements\nstring = 'Python'\nstatement = 'Python is awesome!!'\nif string not in statement:\n print 'Something went wrong!!'\nelse:\n print 'Hell yeah python is awesome \\o/'",
"For the next episode of Python 1...\nyou will have a journey through:\n- loops (while and for)\n- list comprehentions\n- a loooot of exercises\n<h1><i><center>To be continued...</center></i></h1>"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
gigjozsa/HI_analysis_course
|
chapter_01_somename/01_01_somename2.ipynb
|
gpl-2.0
|
[
"Content\nGlossary\n1. Somename \nNext: 1.2 Somename 3\n\n\n\n\nImport standard modules:",
"import numpy as np\nimport matplotlib.pyplot as plt\n%matplotlib inline\nfrom IPython.display import HTML \nHTML('../style/course.css') #apply general CSS",
"Import section specific modules:",
"pass",
"<ol start=\"1\">\n <li>[Somename2](#somename:sec:somename2)</li>\n <ol start=\"1\">\n <li>[Somename21](#somename:sec:somename21)</li>\n <li>[Somename22](#somename:sec:somename22)</li>\n </ol>\n </ol>\n\n1.1 Somename2 <a id='somename:sec:somename2'></a>\nText and code.\n1.1.1 Somename21<a id='somename:sec:somename21'></a>\nText and code.\n1.1.2 Somename22<a id='somename:sec:somename22'></a>\nText and code\n\n\nNext: 1.2 Somename 3"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
ctroupin/OceanData_NoteBooks
|
PythonNotebooks/PlatformPlots/Read_drifter_data_2.ipynb
|
gpl-3.0
|
[
"This notebook uses what was shown in the previous one.<br/>\nThe goal is represent all the temperature data corresponding to a given month, in this case July 2015",
"year = 2015\nmonth = 7\n\n%matplotlib inline\nimport glob\nimport os\nimport netCDF4\nimport numpy as np\nimport matplotlib.pyplot as plt\nfrom mpl_toolkits.basemap import Basemap\nfrom matplotlib import colors",
"The directory where we store the data files:",
"basedir = \"~/DataOceano/MyOcean/INSITU_GLO_NRT_OBSERVATIONS_013_030/monthly/\" + str(year) + str(month).zfill(2) + '/'\nbasedir = os.path.expanduser(basedir)",
"Simple plot\nConfiguration\nWe start by defining some options for the scatter plot:\n* the range of temperature that will be shown;\n* the colormap;\n* the ticks to be put on the colorbar.",
"tempmin, tempmax = 5., 30.\ncmaptemp = plt.cm.RdYlBu_r\nnormtemp = colors.Normalize(vmin=tempmin, vmax=tempmax)\ntempticks = np.arange(tempmin, tempmax+0.1,2.5)",
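colors.Normalize maps data values linearly into [0, 1] before the colormap is applied; the mapping itself is just (value - vmin) / (vmax - vmin). A pure-Python sketch of the idea (norm_temp is a hypothetical stand-in; the real Normalize object leaves clipping to the colormap unless clip=True is passed):

```python
def norm_temp(value, vmin=5.0, vmax=30.0):
    """Map a temperature linearly onto [0, 1], clipping out-of-range values."""
    t = (value - vmin) / (vmax - vmin)
    return min(max(t, 0.0), 1.0)

print(norm_temp(5.0))   # 0.0 -> cold (blue) end of RdYlBu_r
print(norm_temp(17.5))  # 0.5 -> middle of the colormap
print(norm_temp(35.0))  # 1.0 -> clipped at the warm (red) end
```

This is why extend='both' is passed to the colorbar below: temperatures outside [5, 30] still get a colour, the arrow-shaped ends of the bar.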
"Loop on the files\nWe create a loop on the netCDF files located in our directory. Longitude, latitude and dept are read from every file, while the temperature is not always available. For the plot, we only take the data with have a depth dimension equal to 1.",
"fig = plt.figure(figsize=(12, 8))\n\nnfiles_notemp = 0\nfilelist = sorted(glob.glob(basedir+'*.nc'))\nfor datafiles in filelist:\n\n with netCDF4.Dataset(datafiles) as nc:\n lon = nc.variables['LONGITUDE'][:]\n lat = nc.variables['LATITUDE'][:]\n depth = nc.variables['DEPH'][:]\n\n try:\n temperature = nc.variables['TEMP'][:,0]\n\n if(depth.shape[1] == 1):\n\n scat = plt.scatter(lon.mean(), lat.mean(), s=15., c=temperature.mean(), edgecolor='None', \n cmap=cmaptemp, norm=normtemp)\n\n except KeyError:\n # print 'No variable temperature in this file'\n #temperature = np.nan*np.ones_like(lat)\n nfiles_notemp+=1\n \n# Add colorbar and title\ncbar = plt.colorbar(scat, extend='both')\ncbar.set_label('$^{\\circ}$C', rotation=0, ha='left')\nplt.title('Temperature from surface drifters\\n' + str(year) + '-' + str(month).zfill(2))\nplt.show()",
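The `try`/`except KeyError` block above is the standard way to handle a variable that only some files provide; the same pattern can be seen in isolation with plain dictionaries standing in for each file's `nc.variables` mapping:

```python
# each dict stands in for one file's nc.variables mapping
files = [
    {"LONGITUDE": 1.0, "TEMP": 12.3},
    {"LONGITUDE": 2.0},              # no temperature in this file
    {"LONGITUDE": 3.0, "TEMP": 14.1},
]

nfiles_notemp = 0
temps = []
for variables in files:
    try:
        temps.append(variables["TEMP"])
    except KeyError:
        nfiles_notemp += 1

print(temps)          # [12.3, 14.1]
print(nfiles_notemp)  # 1
```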
"We also counted how many files don't have the temperature variable:",
"print('Number of files: ' + str(len(filelist)))\nprint('Number of files without temperature: ' + str(nfiles_notemp))",
"Plot on a map\nConfiguration of the projection\nWe choose a Mollweide ('moll') projection centered on 0ºE, with a crude ('c') resolution for the coastline.",
"m = Basemap(projection='moll', lon_0=0, resolution='c')",
"The rest of the configuration of the plot can be kept as it was.\nLoop on the files\nWe can copy the part of the code used before. We need to add a line for the projection of the coordinates: lon, lat = m(lon, lat). After the loop we can add the coastline and the continents.",
"fig = plt.figure(figsize=(12, 8))\n\nnfiles_notemp = 0\nfilelist = sorted(glob.glob(basedir+'*.nc'))\nfor datafiles in filelist:\n\n with netCDF4.Dataset(datafiles) as nc:\n lon = nc.variables['LONGITUDE'][:]\n lat = nc.variables['LATITUDE'][:]\n depth = nc.variables['DEPH'][:]\n \n lon, lat = m(lon, lat)\n try:\n temperature = nc.variables['TEMP'][:,0]\n\n if(depth.shape[1] == 1):\n\n scat = m.scatter(lon.mean(), lat.mean(), s=15., c=temperature.mean(), edgecolor='None', \n cmap=cmaptemp, norm=normtemp)\n\n except KeyError:\n # print 'No variable temperature in this file'\n #temperature = np.nan*np.ones_like(lat)\n nfiles_notemp+=1\n \n# Add colorbar and title\ncbar = plt.colorbar(scat, extend='both', shrink=0.7)\ncbar.set_label('$^{\\circ}$C', rotation=0, ha='left')\nm.drawcoastlines(linewidth=0.2)\nm.fillcontinents(color = 'gray')\nplt.title('Temperature from surface drifters\\n' + str(year) + '-' + str(month).zfill(2))\nplt.show()"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
jinntrance/MOOC
|
coursera/deep-neural-network/quiz and assignments/Week 3/week3-Planar+data+classification+with+one+hidden+layer.ipynb
|
cc0-1.0
|
[
"Planar data classification with one hidden layer\nWelcome to your week 3 programming assignment. It's time to build your first neural network, which will have a hidden layer. You will see a big difference between this model and the one you implemented using logistic regression. \nYou will learn how to:\n- Implement a 2-class classification neural network with a single hidden layer\n- Use units with a non-linear activation function, such as tanh \n- Compute the cross entropy loss \n- Implement forward and backward propagation\n1 - Packages\nLet's first import all the packages that you will need during this assignment.\n- numpy is the fundamental package for scientific computing with Python.\n- sklearn provides simple and efficient tools for data mining and data analysis. \n- matplotlib is a library for plotting graphs in Python.\n- testCases provides some test examples to assess the correctness of your functions\n- planar_utils provide various useful functions used in this assignment",
"# Package imports\nimport numpy as np\nimport matplotlib.pyplot as plt\nfrom testCases import *\nimport sklearn\nimport sklearn.datasets\nimport sklearn.linear_model\nfrom planar_utils import plot_decision_boundary, sigmoid, load_planar_dataset, load_extra_datasets\n\n%matplotlib inline\n\nnp.random.seed(1) # set a seed so that the results are consistent",
"2 - Dataset\nFirst, let's get the dataset you will work on. The following code will load a \"flower\" 2-class dataset into variables X and Y.",
"X, Y = load_planar_dataset()\n# Y = Y[0,:]",
"Visualize the dataset using matplotlib. The data looks like a \"flower\" with some red (label y=0) and some blue (y=1) points. Your goal is to build a model to fit this data.",
"print(X.shape)\n# Visualize the data:\nplt.scatter(X[0, :], X[1, :], c=Y[0,], s=40, cmap=plt.cm.Spectral);",
"You have:\n - a numpy-array (matrix) X that contains your features (x1, x2)\n - a numpy-array (vector) Y that contains your labels (red:0, blue:1).\nLet's first get a better sense of what our data is like. \nExercise: How many training examples do you have? In addition, what is the shape of the variables X and Y? \nHint: How do you get the shape of a numpy array? (help)",
"### START CODE HERE ### (≈ 3 lines of code)\nshape_X = X.shape\nshape_Y = Y.shape\nm = shape_Y[1] # training set size\n### END CODE HERE ###\n\nprint ('The shape of X is: ' + str(shape_X))\nprint ('The shape of Y is: ' + str(shape_Y))\nprint ('I have m = %d training examples!' % (m))",
"Expected Output:\n<table style=\"width:20%\">\n\n <tr>\n <td>**shape of X**</td>\n <td> (2, 400) </td> \n </tr>\n\n <tr>\n <td>**shape of Y**</td>\n <td>(1, 400) </td> \n </tr>\n\n <tr>\n <td>**m**</td>\n <td> 400 </td> \n </tr>\n\n</table>\n\n3 - Simple Logistic Regression\nBefore building a full neural network, let's first see how logistic regression performs on this problem. You can use sklearn's built-in functions to do that. Run the code below to train a logistic regression classifier on the dataset.",
"# Train the logistic regression classifier\nclf = sklearn.linear_model.LogisticRegressionCV();\nclf.fit(X.T, Y[0,:].T);",
"You can now plot the decision boundary of these models. Run the code below.",
"# Plot the decision boundary for logistic regression\nplot_decision_boundary(lambda x: clf.predict(x), X, Y[0,:])\nplt.title(\"Logistic Regression\")\n\n# Print accuracy\nLR_predictions = clf.predict(X.T)\nprint ('Accuracy of logistic regression: %d ' % float((np.dot(Y,LR_predictions) + np.dot(1-Y,1-LR_predictions))/float(Y.size)*100) +\n '% ' + \"(percentage of correctly labelled datapoints)\")",
"Expected Output:\n<table style=\"width:20%\">\n <tr>\n <td>**Accuracy**</td>\n <td> 47% </td> \n </tr>\n\n</table>\n\nInterpretation: The dataset is not linearly separable, so logistic regression doesn't perform well. Hopefully a neural network will do better. Let's try this now! \n4 - Neural Network model\nLogistic regression did not work well on the \"flower dataset\". You are going to train a Neural Network with a single hidden layer.\nHere is our model:\n<img src=\"images/classification_kiank.png\" style=\"width:600px;height:300px;\">\nMathematically:\nFor one example $x^{(i)}$:\n$$z^{[1] (i)} = W^{[1]} x^{(i)} + b^{[1] (i)}\\tag{1}$$ \n$$a^{[1] (i)} = \\tanh(z^{[1] (i)})\\tag{2}$$\n$$z^{[2] (i)} = W^{[2]} a^{[1] (i)} + b^{[2] (i)}\\tag{3}$$\n$$\\hat{y}^{(i)} = a^{[2] (i)} = \\sigma(z^{ [2] (i)})\\tag{4}$$\n$$y^{(i)}_{prediction} = \\begin{cases} 1 & \\mbox{if } a^{[2] (i)} > 0.5 \\\\ 0 & \\mbox{otherwise } \\end{cases}\\tag{5}$$\nGiven the predictions on all the examples, you can also compute the cost $J$ as follows: \n$$J = - \\frac{1}{m} \\sum\\limits_{i = 1}^{m} \\large\\left(\\small y^{(i)}\\log\\left(a^{[2] (i)}\\right) + (1-y^{(i)})\\log\\left(1- a^{[2] (i)}\\right) \\large \\right) \\small \\tag{6}$$\nReminder: The general methodology to build a Neural Network is to:\n 1. Define the neural network structure ( # of input units, # of hidden units, etc). \n 2. Initialize the model's parameters\n 3. Loop:\n - Implement forward propagation\n - Compute loss\n - Implement backward propagation to get the gradients\n - Update parameters (gradient descent)\nYou often build helper functions to compute steps 1-3 and then merge them into one function we call nn_model(). Once you've built nn_model() and learnt the right parameters, you can make predictions on new data.\n4.1 - Defining the neural network structure\nExercise: Define three variables:\n - n_x: the size of the input layer\n - n_h: the size of the hidden layer (set this to 4) \n - n_y: the size of the output layer\nHint: Use shapes of X and Y to find n_x and n_y. Also, hard code the hidden layer size to be 4.",
"# GRADED FUNCTION: layer_sizes\n\ndef layer_sizes(X, Y):\n \"\"\"\n Arguments:\n X -- input dataset of shape (input size, number of examples)\n Y -- labels of shape (output size, number of examples)\n \n Returns:\n n_x -- the size of the input layer\n n_h -- the size of the hidden layer\n n_y -- the size of the output layer\n \"\"\"\n ### START CODE HERE ### (≈ 3 lines of code)\n n_x = X.shape[0] # size of input layer\n n_h = 4\n n_y = Y.shape[0] # size of output layer\n ### END CODE HERE ###\n return (n_x, n_h, n_y)\n\nX_assess, Y_assess = layer_sizes_test_case()\n(n_x, n_h, n_y) = layer_sizes(X_assess, Y_assess)\nprint(\"The size of the input layer is: n_x = \" + str(n_x))\nprint(\"The size of the hidden layer is: n_h = \" + str(n_h))\nprint(\"The size of the output layer is: n_y = \" + str(n_y))",
"Expected Output (these are not the sizes you will use for your network, they are just used to assess the function you've just coded).\n<table style=\"width:20%\">\n <tr>\n <td>**n_x**</td>\n <td> 5 </td> \n </tr>\n\n <tr>\n <td>**n_h**</td>\n <td> 4 </td> \n </tr>\n\n <tr>\n <td>**n_y**</td>\n <td> 2 </td> \n </tr>\n\n</table>\n\n4.2 - Initialize the model's parameters\nExercise: Implement the function initialize_parameters().\nInstructions:\n- Make sure your parameters' sizes are right. Refer to the neural network figure above if needed.\n- You will initialize the weights matrices with random values. \n - Use: np.random.randn(a,b) * 0.01 to randomly initialize a matrix of shape (a,b).\n- You will initialize the bias vectors as zeros. \n - Use: np.zeros((a,b)) to initialize a matrix of shape (a,b) with zeros.",
"# GRADED FUNCTION: initialize_parameters\n\ndef initialize_parameters(n_x, n_h, n_y):\n \"\"\"\n Argument:\n n_x -- size of the input layer\n n_h -- size of the hidden layer\n n_y -- size of the output layer\n \n Returns:\n params -- python dictionary containing your parameters:\n W1 -- weight matrix of shape (n_h, n_x)\n b1 -- bias vector of shape (n_h, 1)\n W2 -- weight matrix of shape (n_y, n_h)\n b2 -- bias vector of shape (n_y, 1)\n \"\"\"\n \n np.random.seed(2) # we set up a seed so that your output matches ours although the initialization is random.\n \n ### START CODE HERE ### (≈ 4 lines of code)\n # \n W1 = np.random.randn(n_h, n_x) * 0.01\n b1 = np.zeros((n_h, 1))\n W2 = np.random.randn(n_y, n_h) * 0.01\n b2 = np.zeros((n_y, 1))\n ### END CODE HERE ###\n \n assert (W1.shape == (n_h, n_x))\n assert (b1.shape == (n_h, 1))\n assert (W2.shape == (n_y, n_h))\n assert (b2.shape == (n_y, 1))\n \n parameters = {\"W1\": W1,\n \"b1\": b1,\n \"W2\": W2,\n \"b2\": b2}\n \n return parameters\n\nn_x, n_h, n_y = initialize_parameters_test_case()\n\nparameters = initialize_parameters(n_x, n_h, n_y)\nprint(\"W1 = \" + str(parameters[\"W1\"]))\nprint(\"b1 = \" + str(parameters[\"b1\"]))\nprint(\"W2 = \" + str(parameters[\"W2\"]))\nprint(\"b2 = \" + str(parameters[\"b2\"]))",
"Expected Output:\n<table style=\"width:90%\">\n <tr>\n <td>**W1**</td>\n <td> [[-0.00416758 -0.00056267]\n [-0.02136196 0.01640271]\n [-0.01793436 -0.00841747]\n [ 0.00502881 -0.01245288]] </td> \n </tr>\n\n <tr>\n <td>**b1**</td>\n <td> [[ 0.]\n [ 0.]\n [ 0.]\n [ 0.]] </td> \n </tr>\n\n <tr>\n <td>**W2**</td>\n <td> [[-0.01057952 -0.00909008 0.00551454 0.02292208]]</td> \n </tr>\n\n\n <tr>\n <td>**b2**</td>\n <td> [[ 0.]] </td> \n </tr>\n\n</table>\n\n4.3 - The Loop\nQuestion: Implement forward_propagation().\nInstructions:\n- Look above at the mathematical representation of your classifier.\n- You can use the function sigmoid(). It is built-in (imported) in the notebook.\n- You can use the function np.tanh(). It is part of the numpy library.\n- The steps you have to implement are:\n 1. Retrieve each parameter from the dictionary \"parameters\" (which is the output of initialize_parameters()) by using parameters[\"..\"].\n 2. Implement Forward Propagation. Compute $Z^{[1]}, A^{[1]}, Z^{[2]}$ and $A^{[2]}$ (the vector of all your predictions on all the examples in the training set).\n- Values needed in the backpropagation are stored in \"cache\". The cache will be given as an input to the backpropagation function.",
"# GRADED FUNCTION: forward_propagation\n\ndef forward_propagation(X, parameters):\n \"\"\"\n Argument:\n X -- input data of size (n_x, m)\n parameters -- python dictionary containing your parameters (output of initialization function)\n \n Returns:\n A2 -- The sigmoid output of the second activation\n cache -- a dictionary containing \"Z1\", \"A1\", \"Z2\" and \"A2\"\n \"\"\"\n # Retrieve each parameter from the dictionary \"parameters\"\n ### START CODE HERE ### (≈ 4 lines of code)\n W1 = parameters['W1']\n b1 = parameters['b1']\n W2 = parameters['W2']\n b2 = parameters['b2']\n ### END CODE HERE ###\n \n # Implement Forward Propagation to calculate A2 (probabilities)\n ### START CODE HERE ### (≈ 4 lines of code)\n Z1 = np.dot(W1, X) + b1\n A1 = np.tanh(Z1)\n Z2 = np.dot(W2, A1) + b2\n A2 = sigmoid(Z2)\n ### END CODE HERE ###\n \n assert(A2.shape == (1, X.shape[1]))\n \n cache = {\"Z1\": Z1,\n \"A1\": A1,\n \"Z2\": Z2,\n \"A2\": A2}\n \n return A2, cache\n\nX_assess, parameters = forward_propagation_test_case()\n\nA2, cache = forward_propagation(X_assess, parameters)\n\n# Note: we use the mean here just to make sure that your output matches ours. \nprint(np.mean(cache['Z1']) ,np.mean(cache['A1']),np.mean(cache['Z2']),np.mean(cache['A2']))",
"Expected Output:\n<table style=\"width:55%\">\n <tr>\n <td> -0.000499755777742 -0.000496963353232 0.000438187450959 0.500109546852 </td> \n </tr>\n</table>\n\nNow that you have computed $A^{[2]}$ (in the Python variable \"A2\"), which contains $a^{[2] (i)}$ for every example, you can compute the cost function as follows:\n$$J = - \\frac{1}{m} \\sum\\limits_{i = 1}^{m} \\large{(} \\small y^{(i)}\\log\\left(a^{[2] (i)}\\right) + (1-y^{(i)})\\log\\left(1- a^{[2] (i)}\\right) \\large{)} \\small\\tag{13}$$\nExercise: Implement compute_cost() to compute the value of the cost $J$.\nInstructions:\n- There are many ways to implement the cross-entropy loss. To help you, we give you how we would have implemented\n$- \\sum\\limits_{i=1}^{m} y^{(i)}\\log(a^{[2] (i)})$:\npython\nlogprobs = np.multiply(np.log(A2),Y)\ncost = - np.sum(logprobs) # no need to use a for loop!\n(you can use either np.multiply() and then np.sum() or directly np.dot()).",
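The elementwise multiply-and-sum in the hint can be checked on a tiny hand-made example (the probabilities below are made up):

```python
import numpy as np

A2 = np.array([[0.8, 0.2]])   # predicted probabilities for two examples
Y  = np.array([[1,   0 ]])    # true labels
m = Y.shape[1]

# both label terms of the cross-entropy, summed over examples
logprobs = np.multiply(np.log(A2), Y) + np.multiply(np.log(1 - A2), 1 - Y)
cost = - np.sum(logprobs) / m

# both examples are predicted with probability 0.8, so cost = -log(0.8) ≈ 0.223
print(cost)
```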
"# GRADED FUNCTION: compute_cost\n\ndef compute_cost(A2, Y, parameters):\n \"\"\"\n Computes the cross-entropy cost given in equation (13)\n \n Arguments:\n A2 -- The sigmoid output of the second activation, of shape (1, number of examples)\n Y -- \"true\" labels vector of shape (1, number of examples)\n parameters -- python dictionary containing your parameters W1, b1, W2 and b2\n \n Returns:\n cost -- cross-entropy cost given equation (13)\n \"\"\"\n \n m = Y.shape[1] # number of example\n \n # Retrieve W1 and W2 from parameters\n ### START CODE HERE ### (≈ 2 lines of code)\n W1 = parameters['W1']\n W2 = parameters['W2']\n ### END CODE HERE ###\n \n # Compute the cross-entropy cost\n ### START CODE HERE ### (≈ 2 lines of code)\n cost = - np.mean((np.multiply(np.log(A2), Y) + np.multiply(np.log(1 - A2), 1 - Y)))\n ### END CODE HERE ###\n \n cost = np.squeeze(cost) # makes sure cost is the dimension we expect. \n # E.g., turns [[17]] into 17 \n assert(isinstance(cost, float))\n \n return cost\n\nA2, Y_assess, parameters = compute_cost_test_case()\nprint(\"cost = \" + str(compute_cost(A2, Y_assess, parameters)))",
"Expected Output:\n<table style=\"width:20%\">\n <tr>\n <td>**cost**</td>\n <td> 0.692919893776 </td> \n </tr>\n\n</table>\n\nUsing the cache computed during forward propagation, you can now implement backward propagation.\nQuestion: Implement the function backward_propagation().\nInstructions:\nBackpropagation is usually the hardest (most mathematical) part in deep learning. To help you, here again is the slide from the lecture on backpropagation. You'll want to use the six equations on the right of this slide, since you are building a vectorized implementation. \n<img src=\"images/grad_summary.png\" style=\"width:600px;height:300px;\">\n<!--\n$\\frac{\\partial \\mathcal{J} }{ \\partial z_{2}^{(i)} } = \\frac{1}{m} (a^{[2](i)} - y^{(i)})$\n\n$\\frac{\\partial \\mathcal{J} }{ \\partial W_2 } = \\frac{\\partial \\mathcal{J} }{ \\partial z_{2}^{(i)} } a^{[1] (i) T} $\n\n$\\frac{\\partial \\mathcal{J} }{ \\partial b_2 } = \\sum_i{\\frac{\\partial \\mathcal{J} }{ \\partial z_{2}^{(i)}}}$\n\n$\\frac{\\partial \\mathcal{J} }{ \\partial z_{1}^{(i)} } = W_2^T \\frac{\\partial \\mathcal{J} }{ \\partial z_{2}^{(i)} } * ( 1 - a^{[1] (i) 2}) $\n\n$\\frac{\\partial \\mathcal{J} }{ \\partial W_1 } = \\frac{\\partial \\mathcal{J} }{ \\partial z_{1}^{(i)} } X^T $\n\n$\\frac{\\partial \\mathcal{J} _i }{ \\partial b_1 } = \\sum_i{\\frac{\\partial \\mathcal{J} }{ \\partial z_{1}^{(i)}}}$\n\n- Note that $*$ denotes elementwise multiplication.\n- The notation you will use is common in deep learning coding:\n - dW1 = $\\frac{\\partial \\mathcal{J} }{ \\partial W_1 }$\n - db1 = $\\frac{\\partial \\mathcal{J} }{ \\partial b_1 }$\n - dW2 = $\\frac{\\partial \\mathcal{J} }{ \\partial W_2 }$\n - db2 = $\\frac{\\partial \\mathcal{J} }{ \\partial b_2 }$\n\n!-->\n\n\nTips:\nTo compute dZ1 you'll need to compute $g^{[1]'}(Z^{[1]})$. Since $g^{[1]}(.)$ is the tanh activation function, if $a = g^{[1]}(z)$ then $g^{[1]'}(z) = 1-a^2$. So you can compute \n$g^{[1]'}(Z^{[1]})$ using (1 - np.power(A1, 2)).",
"# GRADED FUNCTION: backward_propagation\n\ndef backward_propagation(parameters, cache, X, Y):\n \"\"\"\n Implement the backward propagation using the instructions above.\n \n Arguments:\n parameters -- python dictionary containing our parameters \n cache -- a dictionary containing \"Z1\", \"A1\", \"Z2\" and \"A2\".\n X -- input data of shape (2, number of examples)\n Y -- \"true\" labels vector of shape (1, number of examples)\n \n Returns:\n grads -- python dictionary containing your gradients with respect to different parameters\n \"\"\"\n m = X.shape[1]\n \n # First, retrieve W1 and W2 from the dictionary \"parameters\".\n ### START CODE HERE ### (≈ 2 lines of code)\n W1 = parameters['W1']\n W2 = parameters['W2']\n ### END CODE HERE ###\n \n # Retrieve also A1 and A2 from dictionary \"cache\".\n ### START CODE HERE ### (≈ 2 lines of code)\n A1 = cache['A1']\n A2 = cache['A2']\n ### END CODE HERE ###\n \n # Backward propagation: calculate dW1, db1, dW2, db2. \n ### START CODE HERE ### (≈ 6 lines of code, corresponding to 6 equations on slide above)\n dZ2 = A2 - Y\n dW2 = np.dot(dZ2, A1.T) / m \n db2 = np.mean(dZ2, axis = 1, keepdims = True)\n dZ1 = np.dot(W2.T, dZ2) * (1 - np.power(A1,2))\n dW1 = np.dot(dZ1, X.T) / m\n db1 = np.mean(dZ1, axis = 1, keepdims = True)\n ### END CODE HERE ###\n \n grads = {\"dW1\": dW1,\n \"db1\": db1,\n \"dW2\": dW2,\n \"db2\": db2}\n \n return grads\n\nparameters, cache, X_assess, Y_assess = backward_propagation_test_case()\n\ngrads = backward_propagation(parameters, cache, X_assess, Y_assess)\nprint (\"dW1 = \"+ str(grads[\"dW1\"]))\nprint (\"db1 = \"+ str(grads[\"db1\"]))\nprint (\"dW2 = \"+ str(grads[\"dW2\"]))\nprint (\"db2 = \"+ str(grads[\"db2\"]))",
"Expected output:\n<table style=\"width:80%\">\n <tr>\n <td>**dW1**</td>\n <td> [[ 0.01018708 -0.00708701]\n [ 0.00873447 -0.0060768 ]\n [-0.00530847 0.00369379]\n [-0.02206365 0.01535126]] </td> \n </tr>\n\n <tr>\n <td>**db1**</td>\n <td> [[-0.00069728]\n [-0.00060606]\n [ 0.000364 ]\n [ 0.00151207]] </td> \n </tr>\n\n <tr>\n <td>**dW2**</td>\n <td> [[ 0.00363613 0.03153604 0.01162914 -0.01318316]] </td> \n </tr>\n\n\n <tr>\n <td>**db2**</td>\n <td> [[ 0.06589489]] </td> \n </tr>\n\n</table>\n\nQuestion: Implement the update rule. Use gradient descent. You have to use (dW1, db1, dW2, db2) in order to update (W1, b1, W2, b2).\nGeneral gradient descent rule: $ \\theta = \\theta - \\alpha \\frac{\\partial J }{ \\partial \\theta }$ where $\\alpha$ is the learning rate and $\\theta$ represents a parameter.\nIllustration: The gradient descent algorithm with a good learning rate (converging) and a bad learning rate (diverging). Images courtesy of Adam Harley.\n<img src=\"images/sgd.gif\" style=\"width:400;height:400;\"> <img src=\"images/sgd_bad.gif\" style=\"width:400;height:400;\">",
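The converging/diverging behaviour in the two animations can be reproduced on a one-dimensional cost $J(\theta) = \theta^2$, whose gradient is $2\theta$ (a toy sketch, not part of the assignment):

```python
def run_gd(theta, alpha, steps=20):
    """Apply the update theta = theta - alpha * dJ/dtheta for J = theta**2."""
    for _ in range(steps):
        theta = theta - alpha * 2 * theta   # gradient of theta**2 is 2*theta
    return theta

# a small learning rate converges towards the minimum at 0
print(abs(run_gd(1.0, 0.1)))   # shrinks towards 0

# too large a learning rate overshoots and diverges
print(abs(run_gd(1.0, 1.1)))   # grows without bound
```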
"# GRADED FUNCTION: update_parameters\n\ndef update_parameters(parameters, grads, learning_rate = 1.2):\n \"\"\"\n Updates parameters using the gradient descent update rule given above\n \n Arguments:\n parameters -- python dictionary containing your parameters \n grads -- python dictionary containing your gradients \n \n Returns:\n parameters -- python dictionary containing your updated parameters \n \"\"\"\n # Retrieve each parameter from the dictionary \"parameters\"\n ### START CODE HERE ### (≈ 4 lines of code)\n W1 = parameters['W1']\n b1 = parameters['b1']\n W2 = parameters['W2']\n b2 = parameters['b2']\n ### END CODE HERE ###\n \n # Retrieve each gradient from the dictionary \"grads\"\n ### START CODE HERE ### (≈ 4 lines of code)\n dW1 = grads['dW1']\n db1 = grads['db1']\n dW2 = grads['dW2']\n db2 = grads['db2']\n ## END CODE HERE ###\n \n # Update rule for each parameter\n ### START CODE HERE ### (≈ 4 lines of code)\n W1 -= learning_rate * dW1\n b1 -= learning_rate * db1\n W2 -= learning_rate * dW2\n b2 -= learning_rate * db2\n ### END CODE HERE ###\n \n parameters = {\"W1\": W1,\n \"b1\": b1,\n \"W2\": W2,\n \"b2\": b2}\n \n return parameters\n\nparameters, grads = update_parameters_test_case()\nparameters = update_parameters(parameters, grads)\n\nprint(\"W1 = \" + str(parameters[\"W1\"]))\nprint(\"b1 = \" + str(parameters[\"b1\"]))\nprint(\"W2 = \" + str(parameters[\"W2\"]))\nprint(\"b2 = \" + str(parameters[\"b2\"]))",
"Expected Output:\n<table style=\"width:80%\">\n <tr>\n <td>**W1**</td>\n <td> [[-0.00643025 0.01936718]\n [-0.02410458 0.03978052]\n [-0.01653973 -0.02096177]\n [ 0.01046864 -0.05990141]]</td> \n </tr>\n\n <tr>\n <td>**b1**</td>\n <td> [[ -1.02420756e-06]\n [ 1.27373948e-05]\n [ 8.32996807e-07]\n [ -3.20136836e-06]]</td> \n </tr>\n\n <tr>\n <td>**W2**</td>\n <td> [[-0.01041081 -0.04463285 0.01758031 0.04747113]] </td> \n </tr>\n\n\n <tr>\n <td>**b2**</td>\n <td> [[ 0.00010457]] </td> \n </tr>\n\n</table>\n\n4.4 - Integrate parts 4.1, 4.2 and 4.3 in nn_model()\nQuestion: Build your neural network model in nn_model().\nInstructions: The neural network model has to use the previous functions in the right order.",
"# GRADED FUNCTION: nn_model\n\ndef nn_model(X, Y, n_h, num_iterations = 10000, print_cost=False):\n \"\"\"\n Arguments:\n X -- dataset of shape (2, number of examples)\n Y -- labels of shape (1, number of examples)\n n_h -- size of the hidden layer\n num_iterations -- Number of iterations in gradient descent loop\n print_cost -- if True, print the cost every 1000 iterations\n \n Returns:\n parameters -- parameters learnt by the model. They can then be used to predict.\n \"\"\"\n \n np.random.seed(3)\n n_x = layer_sizes(X, Y)[0]\n n_y = layer_sizes(X, Y)[2]\n \n # Initialize parameters, then retrieve W1, b1, W2, b2. Inputs: \"n_x, n_h, n_y\". Outputs = \"W1, b1, W2, b2, parameters\".\n ### START CODE HERE ### (≈ 5 lines of code)\n parameters = initialize_parameters(n_x, n_h, n_y)\n W1 = parameters['W1']\n b1 = parameters['b1']\n W2 = parameters['W2']\n b2 = parameters['b2']\n ### END CODE HERE ###\n \n # Loop (gradient descent)\n for i in range(0, num_iterations):\n \n ### START CODE HERE ### (≈ 4 lines of code)\n # Forward propagation. Inputs: \"X, parameters\". Outputs: \"A2, cache\".\n A2, cache = forward_propagation(X, parameters)\n # Cost function. Inputs: \"A2, Y, parameters\". Outputs: \"cost\".\n cost = compute_cost(A2, Y, parameters)\n # Backpropagation. Inputs: \"parameters, cache, X, Y\". Outputs: \"grads\".\n grads = backward_propagation(parameters, cache, X, Y)\n # Gradient descent parameter update. Inputs: \"parameters, grads\". Outputs: \"parameters\".\n parameters = update_parameters(parameters, grads)\n ### END CODE HERE ###\n \n # Print the cost every 1000 iterations\n if print_cost and i % 1000 == 0:\n print (\"Cost after iteration %i: %f\" %(i, cost))\n\n return parameters\n\nX_assess, Y_assess = nn_model_test_case()\n\nparameters = nn_model(X_assess, Y_assess, 4, num_iterations=10000, print_cost=False)\nprint(\"W1 = \" + str(parameters[\"W1\"]))\nprint(\"b1 = \" + str(parameters[\"b1\"]))\nprint(\"W2 = \" + str(parameters[\"W2\"]))\nprint(\"b2 = \" + str(parameters[\"b2\"]))",
"Expected Output:\n<table style=\"width:90%\">\n <tr>\n <td>**W1**</td>\n <td> [[-4.18494056 5.33220609]\n [-7.52989382 1.24306181]\n [-4.1929459 5.32632331]\n [ 7.52983719 -1.24309422]]</td> \n </tr>\n\n <tr>\n <td>**b1**</td>\n <td> [[ 2.32926819]\n [ 3.79458998]\n [ 2.33002577]\n [-3.79468846]]</td> \n </tr>\n\n <tr>\n <td>**W2**</td>\n <td> [[-6033.83672146 -6008.12980822 -6033.10095287 6008.06637269]] </td> \n </tr>\n\n\n <tr>\n <td>**b2**</td>\n <td> [[-52.66607724]] </td> \n </tr>\n\n</table>\n\n4.5 Predictions\nQuestion: Use your model to predict by building predict().\nUse forward propagation to predict results.\nReminder: predictions = $y_{prediction} = \\mathbb 1 \\text{{activation > 0.5}} = \\begin{cases}\n 1 & \\text{if}\\ activation > 0.5 \\\n 0 & \\text{otherwise}\n \\end{cases}$ \nAs an example, if you would like to set the entries of a matrix X to 0 and 1 based on a threshold you would do: X_new = (X > threshold)",
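The thresholding trick mentioned above produces a boolean array directly, which can then be cast to 0/1 labels:

```python
import numpy as np

A2 = np.array([[0.1, 0.7, 0.5, 0.9]])   # made-up activations
predictions = (A2 > 0.5)                 # boolean mask; note 0.5 itself maps to False

print(predictions)
print(predictions.astype(int))           # [[0 1 0 1]]
```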
"# GRADED FUNCTION: predict\n\ndef predict(parameters, X):\n \"\"\"\n Using the learned parameters, predicts a class for each example in X\n \n Arguments:\n parameters -- python dictionary containing your parameters \n X -- input data of size (n_x, m)\n \n Returns\n predictions -- vector of predictions of our model (red: 0 / blue: 1)\n \"\"\"\n \n # Computes probabilities using forward propagation, and classifies to 0/1 using 0.5 as the threshold.\n ### START CODE HERE ### (≈ 2 lines of code)\n A2, cache = forward_propagation(X, parameters)\n predictions = A2 >= 0.5\n ### END CODE HERE ###\n \n return predictions\n\nparameters, X_assess = predict_test_case()\n\npredictions = predict(parameters, X_assess)\nprint(\"predictions mean = \" + str(np.mean(predictions)))",
"Expected Output: \n<table style=\"width:40%\">\n <tr>\n <td>**predictions mean**</td>\n <td> 0.666666666667 </td> \n </tr>\n\n</table>\n\nIt is time to run the model and see how it performs on a planar dataset. Run the following code to test your model with a single hidden layer of $n_h$ hidden units.",
"# Build a model with a n_h-dimensional hidden layer\nparameters = nn_model(X, Y, n_h = 4, num_iterations = 10000, print_cost=True)\n\n# Plot the decision boundary\nplot_decision_boundary(lambda x: predict(parameters, x.T), X, Y[0,])\nplt.title(\"Decision Boundary for hidden layer size \" + str(4))",
"Expected Output:\n<table style=\"width:40%\">\n <tr>\n <td>**Cost after iteration 9000**</td>\n <td> 0.218607 </td> \n </tr>\n\n</table>",
"# Print accuracy\npredictions = predict(parameters, X)\nprint ('Accuracy: %d' % float((np.dot(Y,predictions.T) + np.dot(1-Y,1-predictions.T))/float(Y.size)*100) + '%')",
"Expected Output: \n<table style=\"width:15%\">\n <tr>\n <td>**Accuracy**</td>\n <td> 90% </td> \n </tr>\n</table>\n\nAccuracy is really high compared to Logistic Regression. The model has learnt the leaf patterns of the flower! Neural networks are able to learn even highly non-linear decision boundaries, unlike logistic regression. \nNow, let's try out several hidden layer sizes.\n4.6 - Tuning hidden layer size (optional/ungraded exercise)\nRun the following code. It may take 1-2 minutes. You will observe different behaviors of the model for various hidden layer sizes.",
"# This may take about 2 minutes to run\n\nplt.figure(figsize=(16, 32))\nhidden_layer_sizes = [1, 2, 3, 4, 5, 20, 50]\nfor i, n_h in enumerate(hidden_layer_sizes):\n plt.subplot(5, 2, i+1)\n plt.title('Hidden Layer of size %d' % n_h)\n parameters = nn_model(X, Y, n_h, num_iterations = 5000)\n plot_decision_boundary(lambda x: predict(parameters, x.T), X, Y[0,])\n predictions = predict(parameters, X)\n accuracy = float((np.dot(Y,predictions.T) + np.dot(1-Y,1-predictions.T))/float(Y.size)*100)\n print (\"Accuracy for {} hidden units: {} %\".format(n_h, accuracy))",
"Interpretation:\n- The larger models (with more hidden units) are able to fit the training set better, until eventually the largest models overfit the data. \n- The best hidden layer size seems to be around n_h = 5. Indeed, a value around here seems to fit the data well without incurring noticeable overfitting.\n- You will also learn later about regularization, which lets you use very large models (such as n_h = 50) without much overfitting. \nOptional questions:\nNote: Remember to submit the assignment by clicking the blue \"Submit Assignment\" button at the upper-right. \nSome optional/ungraded questions that you can explore if you wish: \n- What happens when you change the tanh activation for a sigmoid activation or a ReLU activation?\n- Play with the learning_rate. What happens?\n- What if we change the dataset? (See part 5 below!)\n<font color='blue'>\nYou've learnt to:\n- Build a complete neural network with a hidden layer\n- Make good use of a non-linear unit\n- Implement forward propagation and backpropagation, and train a neural network\n- See the impact of varying the hidden layer size, including overfitting.\nNice work! \n5) Performance on other datasets\nIf you want, you can rerun the whole notebook (minus the dataset part) for each of the following datasets.",
"# Datasets\nnoisy_circles, noisy_moons, blobs, gaussian_quantiles, no_structure = load_extra_datasets()\n\ndatasets = {\"noisy_circles\": noisy_circles,\n \"noisy_moons\": noisy_moons,\n \"blobs\": blobs,\n \"gaussian_quantiles\": gaussian_quantiles}\n\n### START CODE HERE ### (choose your dataset)\ndataset = \"noisy_moons\"\n### END CODE HERE ###\n\nX, Y = datasets[dataset]\nX, Y = X.T, Y.reshape(1, Y.shape[0])\n\n# make blobs binary\nif dataset == \"blobs\":\n Y = Y%2\n\n# Visualize the data\nplt.scatter(X[0, :], X[1, :], c=Y[0,], s=40, cmap=plt.cm.Spectral);",
"Congrats on finishing this Programming Assignment!\nReference:\n- http://scs.ryerson.ca/~aharley/neural-networks/\n- http://cs231n.github.io/neural-networks-case-study/"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
camillescott/boink
|
examples/Streaming Sourmash Demo.ipynb
|
mit
|
[
"Streaming Sourmash\nThis notebook demonstrates how to use goetia to perform a streaming analysis of sourmash minhash signatures. Goetia includes the sourmash C++ header and exposes it with cppyy, and wraps it so it can be used with goetia's sequence processors. This enables a simple way to perform fast streaming signature analysis in Python.",
"# First, import the necessary libraries\n\nfrom goetia import libgoetia\nfrom goetia.alphabets import DNAN_SIMPLE\nfrom goetia.signatures import SourmashSignature\n\nfrom sourmash import load_one_signature, MinHash\nimport screed\n\nfrom ficus import FigureManager\nimport seaborn as sns\nimport numpy as np",
"The Signature Class\nThe goetia SourmashSignature.Signature is derived from sourmash::MinHash, and so follows the same interface. This signature will contain 10000 hashes at a $K$ of 31.",
"all_signature = SourmashSignature.Signature.build(10000, 31, False, False, False, 42, 0)",
"The Signature Processor\nSourmashSignature contains its own sequence processor. This processor is iterable; it will process the given sequences in chunks given by fine_interval. The default fine interval is 10000. Alternatively, we can call process to consume the entire sample.",
"processor = SourmashSignature.Processor.build(all_signature)",
"Let's get some data. We'll start with something small, an ecoli sequencing run.",
"!curl -L https://osf.io/wa57n/download > ecoli.1.fastq.gz\n\n!curl -L https://osf.io/khqaz/download > ecoli.2.fastq.gz",
"Consuming Files\nThe processor can handle single or paired mode natively. There's nothing special to be done with paired reads for sourmash, so the paired mode just consumes each sequence in the pair one after the other. This is a full sequencing run of ~3.5 million sequences; it should take about ten-to-twenty seconds to consume.",
"%time processor.process('ecoli.1.fastq.gz', 'ecoli.2.fastq.gz')",
"If sourmash is installed, we can convert the signature to a sourmash.MinHash object to do further analysis.",
"all_mh = all_signature.to_sourmash()\n\nall_mh",
"Chunked Processing\nNow let's do it in chunked mode. We'll print info on the medium interval, which is contained in the state object.",
"chunked_signature = SourmashSignature.Signature.build(10000, 31, False, False, False, 42, 0)\nprocessor = SourmashSignature.Processor.build(chunked_signature, 10000, 250000)\n\nfor n_reads, n_skipped, state in processor.chunked_process('ecoli.1.fastq.gz', 'ecoli.2.fastq.gz'):\n if state.medium:\n print('Processed', n_reads, 'sequences.')",
"Since both runs consumed the same data, the similarity should be exactly 1.0:",
"chunked_mh = chunked_signature.to_sourmash()\nchunked_mh.similarity(all_mh)",
"Streaming Distance Calculation\nUsing the chunked processor only for reporting is a bit boring. A more interesting use-case is to track within-sample distances -- streaming analysis. We'll write a quick function to perform this analysis.",
"def sourmash_stream(left, right, N=1000, K=31):\n distances = []\n times = []\n\n streaming_sig = SourmashSignature.Signature.build(N, K, False, False, False, 42, 0)\n # We'll set the medium_interval to 250000\n processor = SourmashSignature.Processor.build(streaming_sig, 10000, 250000)\n\n # Calculate a distance at each interval. The iterator is over fine chunks.\n prev_mh = None\n for n_reads, _, state in processor.chunked_process(left, right):\n curr_mh = streaming_sig.to_sourmash()\n if prev_mh is not None:\n distances.append(prev_mh.similarity(curr_mh))\n times.append(n_reads)\n prev_mh = curr_mh\n\n if state.medium:\n print(n_reads, 'reads.')\n \n return np.array(distances), np.array(times), streaming_sig\n\ndistances_small, times_small, mh_small = sourmash_stream('ecoli.1.fastq.gz', 'ecoli.2.fastq.gz', N=1000)\n\nwith FigureManager(show=True, figsize=(12,8)) as (fig, ax):\n sns.lineplot(times_small, distances_small, ax=ax)\n ax.set_title('Streaming Minhash Distance')\n ax.set_xlabel('Sequence')\n ax.set_ylabel('Minhash (Jaccard) Similarity')",
"We can see that the similarity saturates at ~2 million sequences, with the exception of a dip later on -- this could be an instrument error. If we increase the size of the signature, the saturation curve will smooth out.",
"distances_large, times_large, mh_large = sourmash_stream('ecoli.1.fastq.gz', 'ecoli.2.fastq.gz', N=50000)\n\nwith FigureManager(show=True, figsize=(12,8)) as (fig, ax):\n sns.lineplot(times_small, distances_small, label='$N$=1,000', ax=ax)\n sns.lineplot(times_large, distances_large, label='$N$=50,000', ax=ax)\n ax.set_title('Streaming Minhash Distance')\n ax.set_xlabel('Sequence')\n ax.set_ylabel('Minhash (Jaccard) Similarity')\n ax.set_ylim(bottom=.8)\n\nwith FigureManager(show=True, figsize=(10,5)) as (fig, ax):\n ax.set_title('Distribution of Distances')\n sns.distplot(distances_small, vertical=True, hist=False, ax=ax, label='$N$=1,000')\n sns.distplot(distances_large, vertical=True, hist=False, ax=ax, label='$N$=50,000')",
"Smaller minhashes are more susceptible to sequence error, and so the saturation curve is noisy; additionally, the larger minhash detects saturation more clearly, and the distribution of distances leans more heavily toward 1.0.\nSome Signal Processing\nThere are a number of metrics we could use to detect \"saturation\": what exactly we count as such is a user decision. A simplistic approach is to measure the standard deviation of the distance over a sliding window and consider the sample saturated once it drops below a threshold.",
"def sliding_window_stddev(distances, window_size=10):\n stddevs = np.zeros(len(distances) - window_size + 1)\n for i in range(0, len(distances) - window_size + 1):\n stddevs[i] = np.std(distances[i:i+window_size])\n return stddevs\n\nwith FigureManager(show=True, figsize=(12,8)) as (fig, ax):\n cutoff = .0005\n window_size = 10\n \n std_small = sliding_window_stddev(distances_small, window_size=window_size)\n sat_small = None\n for i, val in enumerate(std_small):\n if val < cutoff:\n sat_small = i\n break\n \n std_large = sliding_window_stddev(distances_large, window_size=window_size)\n sat_large = None\n for i, val in enumerate(std_large):\n if val < cutoff:\n sat_large = i\n break\n \n ax = sns.lineplot(times_small[:-window_size + 1], std_small, label='$N$=1,000', color=sns.xkcd_rgb['purple'], ax=ax)\n ax = sns.lineplot(times_large[:-window_size + 1], std_large, label='$N$=50,000', ax=ax, color=sns.xkcd_rgb['gold'])\n \n if sat_small is not None:\n ax.axvline(times_small[sat_small + window_size // 2], alpha=.5, color=sns.xkcd_rgb['light purple'],\n label='$N$=1,000 Saturation')\n if sat_large is not None:\n ax.axvline(times_large[sat_large + window_size // 2], alpha=.5, color=sns.xkcd_rgb['goldenrod'], \n label='$N$=50,000 Saturation')\n \n ax.set_ylabel('Rolling stdev of Distance')\n ax.set_xlabel('Sequence')"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
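The streaming-similarity idea in the record above does not depend on sourmash itself. A minimal pure-Python bottom-k minhash sketch (an illustrative stand-in for `sourmash.MinHash`, not goetia's actual implementation) shows the mechanics of snapshotting at a "fine interval" and comparing adjacent snapshots:

```python
import copy
import hashlib

def stable_hash(item):
    # Deterministic 64-bit hash so sketches are comparable across runs.
    return int(hashlib.sha1(item.encode()).hexdigest()[:16], 16)

class BottomKSketch:
    """Keep the k smallest hash values seen so far (a bottom-k minhash)."""
    def __init__(self, k):
        self.k = k
        self.hashes = set()

    def add(self, item):
        self.hashes.add(stable_hash(item))
        if len(self.hashes) > self.k:
            self.hashes.discard(max(self.hashes))

    def jaccard(self, other):
        # Estimate Jaccard similarity from the k smallest hashes of the union.
        union = sorted(self.hashes | other.hashes)[:self.k]
        if not union:
            return 1.0
        hits = sum(1 for v in union if v in self.hashes and v in other.hashes)
        return hits / len(union)

# Streaming use: compare the sketch against a snapshot taken one chunk earlier.
stream = ['read%d' % i for i in range(100)]
sketch, prev, similarities = BottomKSketch(k=32), None, []
for i, read in enumerate(stream, 1):
    sketch.add(read)
    if i % 25 == 0:                      # the "fine interval"
        snapshot = copy.deepcopy(sketch)
        if prev is not None:
            similarities.append(prev.jaccard(snapshot))
        prev = snapshot
```

As in the notebook, the similarity between adjacent snapshots rises toward 1.0 as the sketch saturates.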
iRipVanWinkle/ml
|
Data Science UA - September 2017/Lecture 07 - Simulation Modeling/Bank_Customer_Arrival_Model.ipynb
|
mit
|
[
"Queuing Simulation\nThis notebook presents a simple bank customer arrival model.\nArrivals and departures are modeled as Poisson processes, i.e. the inter-arrival and service times are exponentially distributed.",
"import random\n\n#define and initialize the parameters of the Poisson distributions\nlambd_in = 0.5\nlambd_out = 0.4\n\n#bank variables\nclosing_time = 100 #initialize the bank closing time\novertime = 0 #overtime the employees need to be paid for\n\n#queue variables\nnum_arrivals = 0 #total number of customers who have arrived\nnum_departures = 0 #number of people who have been served\nn = 0 #length of the queue\nmax_line_length = 0 #the maximum length of the waiting line\n\n#time variables\nt = 0 #set the time of first arrival to 0\ntime_depart = float('inf') #set the first time of departure to infinity\ntime_arrive = random.expovariate(lambd_in) #generate the first arrival",
"Next we initialize two arrays to keep track of the customers who arrive and who depart:",
"departures = []\narrivals = []",
"We simulate the arrivals and departures in a bank branch using a while loop",
"while t < closing_time or n >= 0:\n    \n    # case 1 - within business hours, a customer arrives before any customer leaves the queue\n    if time_arrive <= time_depart and time_arrive <= closing_time:\n        \n        t = time_arrive # move time along to the time of the new arrival\n        num_arrivals += 1 # increase the number of customers with the additional arrival\n        n += 1 # we have an additional customer, increase the size of the waiting line by 1\n        \n        # generate time of next arrival\n        time_arrive = random.expovariate(lambd_in) + t\n        \n        #append the new customer to the arrival list\n        arrivals.append(t)\n        print(\"Arrival \", num_arrivals, \"at time \", t)\n        \n        # generate time of departure\n        if n == 1:\n            Y = random.expovariate(lambd_out)\n            time_depart = t + Y\n        \n        '''\n        print('Arrivals', arrivals)\n        print('Departures', departures)\n        '''\n    \n    # case 2 - within business hours, a customer departs before the next arrival\n    elif time_depart < time_arrive and time_depart <= closing_time:\n        \n        # advance time to the next departure time\n        t = time_depart\n        \n        # one more person served -> increase the count of clients who have been served\n        num_departures += 1\n        \n        #update the departure list\n        departures.append(t)\n        print(\"Departure \", num_departures, \"at time \", t)\n        \n        # one less person in line -> decrease the size of the waiting line\n        n -= 1\n        \n        # if the queue is empty -> set the time of the next departure to infinity\n        if n == 0:\n            time_depart = float('inf')\n        \n        # if the queue isn't empty, generate the next time of departure\n        else:\n            Y = random.expovariate(lambd_out)\n            time_depart = t + Y\n        \n        '''\n        print('Arrivals', arrivals)\n        print('Departures', departures)\n        '''\n    \n    # case 3 - next arrival/departure happens after closing time and there are people still in the queue\n    elif min(time_arrive, time_depart) > closing_time and n > 0:\n        \n        # advance time to next departure\n        t = time_depart\n        \n        #update the departure list\n        departures.append(t)\n        \n        #update the number of departures/clients served\n        num_departures += 1 # one more person served\n        \n        print(\"Departure \", num_departures, \"at time \", t)\n        \n        #update the queue\n        n -= 1 # one less person in the waiting line\n        \n        # if line isn't empty, generate the time of the next departure\n        if n > 0:\n            Y = random.expovariate(lambd_out)\n            time_depart = t + Y\n        \n        '''\n        print('Arrivals', arrivals)\n        print('Departures', departures)\n        '''\n    \n    # case 4 - next arrival/departure happens after closing time and there is nobody left in the queue\n    elif min(time_arrive, time_depart) > closing_time and n == 0:\n        \n        # calculate overtime\n        overtime = max(t - closing_time, 0)\n        print('Overtime = ', overtime)\n        \n        '''\n        print('Arrivals', arrivals)\n        print('Departures', departures)\n        '''\n        \n        break"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
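The four-case event loop in the record above can be condensed into one function. This is my own compact re-sketch of the same logic (not the notebook's exact code), returning the number of customers served and the overtime:

```python
import random

def simulate_bank(lambd_in=0.5, lambd_out=0.4, closing_time=100, seed=0):
    """Event-driven sketch of the bank queue: exponential inter-arrival
    and service times, i.e. Poisson arrivals and departures."""
    rng = random.Random(seed)
    t = 0.0
    t_arrive = rng.expovariate(lambd_in)
    t_depart = float('inf')          # no one in service yet
    n = served = 0
    while True:
        # case 1: next event is an arrival within business hours
        if t_arrive <= t_depart and t_arrive <= closing_time:
            t = t_arrive
            n += 1
            t_arrive = t + rng.expovariate(lambd_in)
            if n == 1:               # server was idle: start a service
                t_depart = t + rng.expovariate(lambd_out)
        # cases 2-3: a departure (before closing, or after closing while
        # people are still waiting)
        elif t_depart <= closing_time or n > 0:
            t = t_depart
            n -= 1
            served += 1
            t_depart = t + rng.expovariate(lambd_out) if n > 0 else float('inf')
        # case 4: after closing with an empty queue -> compute overtime
        else:
            return served, max(t - closing_time, 0.0)
```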
google/neuroglancer
|
python/examples/jupyter-notebook-demo.ipynb
|
apache-2.0
|
[
"import neuroglancer\nimport numpy as np",
"Create a new (initially empty) viewer. This starts a webserver in a background thread, which serves a copy of the Neuroglancer client, and which also can serve local volume data and handles sending and receiving Neuroglancer state updates.",
"viewer = neuroglancer.Viewer()",
"Print a link to the viewer (only valid while the notebook kernel is running). Note that while the Viewer is running, anyone with the link can obtain any authentication credentials that the neuroglancer Python module obtains. Therefore, be very careful about sharing the link, and keep in mind that sharing the notebook will likely also share viewer links.",
"viewer",
"Add some example layers using the precomputed data source (HHMI Janelia FlyEM FIB-25 dataset).",
"with viewer.txn() as s:\n s.layers['image'] = neuroglancer.ImageLayer(source='precomputed://gs://neuroglancer-public-data/flyem_fib-25/image')\n s.layers['segmentation'] = neuroglancer.SegmentationLayer(source='precomputed://gs://neuroglancer-public-data/flyem_fib-25/ground_truth', selected_alpha=0.3)\n ",
"Move the viewer position. (A numpy array will also be displayed as an additional layer below; a reference to the numpy array is kept only as long as the layer remains in the viewer.)",
"with viewer.txn() as s:\n s.voxel_coordinates = [3000.5, 3000.5, 3000.5]",
"Hide the segmentation layer.",
"with viewer.txn() as s:\n s.layers['segmentation'].visible = False\n\nimport tensorstore as ts\nimage_vol = await ts.open({'driver': 'neuroglancer_precomputed', 'kvstore': 'gs://neuroglancer-public-data/flyem_fib-25/image/'})\na = np.zeros((200,200,200), np.uint8)\ndef make_thresholded(threshold):\n a[...] = image_vol[3000:3200,3000:3200,3000:3200][...,0].read().result() > threshold\nmake_thresholded(110)\n# This volume handle can be used to notify the viewer that the data has changed.\nvolume = neuroglancer.LocalVolume(\n a,\n dimensions=neuroglancer.CoordinateSpace(\n names=['x', 'y', 'z'],\n units='nm',\n scales=[8, 8, 8],\n ),\n voxel_offset=[3000, 3000, 3000])\nwith viewer.txn() as s:\n s.layers['overlay'] = neuroglancer.ImageLayer(\n source=volume,\n # Define a custom shader to display this mask array as red+alpha.\n shader=\"\"\"\nvoid main() {\n float v = toNormalized(getDataValue(0)) * 255.0;\n emitRGBA(vec4(v, 0.0, 0.0, v));\n}\n\"\"\",\n )",
"Modify the overlay volume, and call invalidate() to notify the Neuroglancer client.",
"make_thresholded(100)\nvolume.invalidate()",
"Select a couple segments.",
"with viewer.txn() as s:\n s.layers['segmentation'].segments.update([1752, 88847])\n s.layers['segmentation'].visible = True",
"Print the neuroglancer viewer state. The Neuroglancer Python library provides a set of Python objects that wrap the JSON-encoded viewer state. viewer.state returns a read-only snapshot of the state. To modify the state, use the viewer.txn() function, or viewer.set_state.",
"viewer.state",
"Print the set of selected segments.",
"viewer.state.layers['segmentation'].segments",
"Update the state by calling set_state directly.",
"import copy\nnew_state = copy.deepcopy(viewer.state)\nnew_state.layers['segmentation'].segments.add(10625)\nviewer.set_state(new_state)",
"Bind the 't' key in neuroglancer to a Python action.",
"num_actions = 0\ndef my_action(s):\n global num_actions\n num_actions += 1\n with viewer.config_state.txn() as st:\n st.status_messages['hello'] = ('Got action %d: mouse position = %r' %\n (num_actions, s.mouse_voxel_coordinates))\n print('Got my-action')\n print(' Mouse position: %s' % (s.mouse_voxel_coordinates,))\n print(' Layer selected values: %s' % (s.selected_values,))\nviewer.actions.add('my-action', my_action)\nwith viewer.config_state.txn() as s:\n s.input_event_bindings.viewer['keyt'] = 'my-action'\n s.status_messages['hello'] = 'Welcome to this example'",
"Change the view layout to 3-d.",
"with viewer.txn() as s:\n s.layout = '3d'\n s.projection_scale = 3000",
"Take a screenshot (useful for creating publication figures, or for generating videos). While capturing the screenshot, we hide the UI and specify the viewer size so that we get a result independent of the browser size.",
"from ipywidgets import Image\nscreenshot = viewer.screenshot(size=[1000, 1000])\nscreenshot_image = Image(value=screenshot.screenshot.image)\nscreenshot_image",
"Change the view layout to show the segmentation side by side with the image, rather than overlaid. This can also be done from the UI by dragging and dropping. The side-by-side views by default have synchronized position, orientation, and zoom level, but this can be changed.",
"with viewer.txn() as s:\n s.layout = neuroglancer.row_layout(\n [neuroglancer.LayerGroupViewer(layers=['image', 'overlay']),\n neuroglancer.LayerGroupViewer(layers=['segmentation'])])",
"Remove the overlay layer.",
"with viewer.txn() as s:\n s.layout = neuroglancer.row_layout(\n [neuroglancer.LayerGroupViewer(layers=['image']),\n neuroglancer.LayerGroupViewer(layers=['segmentation'])])",
"Create a publicly sharable URL to the viewer state (only works for external data sources, not layers served from Python). The Python objects for representing the viewer state (neuroglancer.ViewerState and friends) can also be used independently from the interactive Python-tied viewer to create Neuroglancer links.",
"print(neuroglancer.to_url(viewer.state))",
"Stop the Neuroglancer web server, which invalidates any existing links to the Python-tied viewer.",
"neuroglancer.stop()"
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
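The overlay in the neuroglancer record above is built by thresholding an image volume into a 0/1 uint8 mask (`image > threshold`). A standalone numpy sketch of just that step, with the viewer and shader parts omitted:

```python
import numpy as np

def thresholded_mask(volume, threshold):
    # 1 where the voxel intensity exceeds the threshold, else 0 --
    # the same uint8 dtype the LocalVolume overlay in the notebook uses.
    return (np.asarray(volume) > threshold).astype(np.uint8)

vol = np.array([[100, 120], [90, 200]], dtype=np.uint8)
mask = thresholded_mask(vol, 110)   # [[0, 1], [0, 1]]
```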
bbfamily/abu
|
abupy_lecture/21-A股UMP决策(ABU量化使用文档).ipynb
|
gpl-3.0
|
[
"ABU Quantitative System Documentation\n<center>\n    <img src=\"./image/abu_logo.png\" alt=\"\" style=\"vertical-align:middle;padding:10px 20px;\"><font size=\"6\" color=\"black\"><b>Section 21: A-share UMP Decisions</b></font>\n</center>\n\nAuthor: Abu\nCopyright Abu Quant; reproduction without permission is prohibited\nabu quant system on GitHub (stars welcome)\nThis section's ipython notebook\nThe previous section split the A-share market symbols into training and test sets and backtested each; this section demonstrates main-umpire and edge-umpire (ump) decisions for A-shares.\nFirst, import the modules used in this section:",
"# 基础库导入\n\nfrom __future__ import print_function\nfrom __future__ import division\n\nimport warnings\nwarnings.filterwarnings('ignore')\nwarnings.simplefilter('ignore')\n\nimport numpy as np\nimport pandas as pd\nimport matplotlib.pyplot as plt\nimport ipywidgets\n%matplotlib inline\n\nimport os\nimport sys\n# 使用insert 0即只使用github,避免交叉使用了pip安装的abupy,导致的版本不一致问题\nsys.path.insert(0, os.path.abspath('../'))\nimport abupy\n\n# 使用沙盒数据,目的是和书中一样的数据环境\nabupy.env.enable_example_env_ipython()\n\nfrom abupy import AbuFactorAtrNStop, AbuFactorPreAtrNStop, AbuFactorCloseAtrNStop, AbuFactorBuyBreak\nfrom abupy import abu, EMarketTargetType, AbuMetricsBase, ABuMarketDrawing, ABuProgress, ABuSymbolPd\nfrom abupy import EMarketTargetType, EDataCacheType, EMarketSourceType, EMarketDataFetchMode, EStoreAbu, AbuUmpMainMul\nfrom abupy import AbuUmpMainDeg, AbuUmpMainJump, AbuUmpMainPrice, AbuUmpMainWave, feature, AbuFeatureDegExtend\nfrom abupy import AbuUmpEdgeDeg, AbuUmpEdgePrice, AbuUmpEdgeWave, AbuUmpEdgeFull, AbuUmpEdgeMul, AbuUmpEegeDegExtend\nfrom abupy import AbuUmpMainDegExtend, ump, Parallel, delayed, AbuMulPidProgress\n\n# 关闭沙盒数据\nabupy.env.disable_example_env_ipython()",
"Next, load the training-set and test-set backtest data stored in the previous section:",
"abupy.env.g_market_target = EMarketTargetType.E_MARKET_TARGET_CN\nabupy.env.g_data_fetch_mode = EMarketDataFetchMode.E_DATA_FETCH_FORCE_LOCAL\nabu_result_tuple = abu.load_abu_result_tuple(n_folds=5, store_type=EStoreAbu.E_STORE_CUSTOM_NAME, \n custom_name='train_cn')\nabu_result_tuple_test = abu.load_abu_result_tuple(n_folds=5, store_type=EStoreAbu.E_STORE_CUSTOM_NAME, \n custom_name='test_cn')\nABuProgress.clear_output()\nprint('训练集结果:')\nmetrics_train = AbuMetricsBase.show_general(*abu_result_tuple, returns_cmp=True ,only_info=True)\nprint('测试集结果:')\nmetrics_test = AbuMetricsBase.show_general(*abu_result_tuple_test, returns_cmp=True, only_info=True)",
"1. Training the A-share main umpires\nNow use the training-set trade data to train the main umpires. The lineup combines two built-in abupy umpires, AbuUmpMainDeg and AbuUmpMainPrice, with the two custom umpires written in 'Section 18: Custom umpire trading decisions': AbuUmpMainMul and AbuUmpMainDegExtend.\nOn the first run choose select: train main ump, then click run select; if the umpires have already been trained, choose select: load main ump to load them directly:",
"# 需要全局设置为A股市场,在ump会根据市场类型保存读取对应的ump\nabupy.env.g_market_target = EMarketTargetType.E_MARKET_TARGET_CN\nump_deg=None\nump_mul=None\nump_price=None\nump_main_deg_extend=None\n# 使用训练集交易数据训练主裁\norders_pd_train_cn = abu_result_tuple.orders_pd\n\ndef train_main_ump():\n print('AbuUmpMainDeg begin...')\n AbuUmpMainDeg.ump_main_clf_dump(orders_pd_train_cn, save_order=False, show_order=False)\n print('AbuUmpMainPrice begin...')\n AbuUmpMainPrice.ump_main_clf_dump(orders_pd_train_cn, save_order=False, show_order=False)\n print('AbuUmpMainMul begin...')\n AbuUmpMainMul.ump_main_clf_dump(orders_pd_train_cn, save_order=False, show_order=False)\n print('AbuUmpMainDegExtend begin...')\n AbuUmpMainDegExtend.ump_main_clf_dump(orders_pd_train_cn, save_order=False, show_order=False)\n # 依然使用load_main_ump,避免下面多进程内存拷贝过大\n load_main_ump()\n \ndef load_main_ump():\n global ump_deg, ump_mul, ump_price, ump_main_deg_extend\n ump_deg = AbuUmpMainDeg(predict=True)\n ump_mul = AbuUmpMainMul(predict=True)\n ump_price = AbuUmpMainPrice(predict=True)\n ump_main_deg_extend = AbuUmpMainDegExtend(predict=True)\n print('load main ump complete!')\n\ndef select(select):\n if select == 'train main ump':\n train_main_ump()\n else:\n load_main_ump()\n\n_ = ipywidgets.interact_manual(select, select=['train main ump', 'load main ump'])",
"2. Verifying that the A-share main umpires are competent\nFirst filter the test-set trades down to those that already have a trade result:",
"# 选取有交易结果的数据order_has_result\norder_has_result = abu_result_tuple_test.orders_pd[abu_result_tuple_test.orders_pd.result != 0]",
"The trades in order_has_result record the features captured at buy time, as shown below:",
"order_has_result.filter(regex='^buy(_deg_|_price_|_wave_|_jump)').head()",
"We can iterate over the trades one by one, passing each trade's buy-time features to the main-umpire deciders and letting every main umpire decide whether to intercept. This yields each umpire's interception accuracy as well as the overall interception rate:\nNotes:\n\nThe code below demonstrates abupy's joblib-based multi-process scheduling wrapper and its multi-process progress bar\nOn Python 3.4 and below, the ump's inner classes cannot be pickled in child processes, so the trades are processed one by one in a single process",
"def apply_ml_features_ump(order, predicter, progress, need_hit_cnt):\n if not isinstance(order.ml_features, dict):\n import ast\n # 低版本pandas dict对象取出来会成为str\n ml_features = ast.literal_eval(order.ml_features)\n else:\n ml_features = order.ml_features\n progress.show()\n # 将交易单中的买入时刻特征传递给ump主裁决策器,让每一个主裁来决策是否进行拦截\n return predicter.predict_kwargs(need_hit_cnt=need_hit_cnt, **ml_features)\n\ndef pararllel_func(ump_object, ump_name):\n with AbuMulPidProgress(len(order_has_result), '{} complete'.format(ump_name)) as progress:\n # 启动多进程进度条,对order_has_result进行apply\n ump_result = order_has_result.apply(apply_ml_features_ump, axis=1, args=(ump_object, progress, 2,))\n return ump_name, ump_result\n\nif sys.version_info > (3, 4, 0):\n # python3.4以上并行处理4个主裁,每一个主裁启动一个进程进行拦截决策\n parallel = Parallel(\n n_jobs=4, verbose=0, pre_dispatch='2*n_jobs')\n out = parallel(delayed(pararllel_func)(ump_object, ump_name)\n for ump_object, ump_name in zip([ump_deg, ump_mul, ump_price, ump_main_deg_extend], \n ['ump_deg', 'ump_mul', 'ump_price', 'ump_main_deg_extend']))\nelse:\n # 3.4下由于子进程中pickle ump的内部类会找不到,所以暂时只使用一个进程一个一个的处理\n out = [pararllel_func(ump_object, ump_name) for ump_object, ump_name in zip([ump_deg, ump_mul, ump_price, ump_main_deg_extend], \n ['ump_deg', 'ump_mul', 'ump_price', 'ump_main_deg_extend'])]\n\n# 将每一个进程中的裁判的拦截决策进行汇总\nfor sub_out in out:\n order_has_result[sub_out[0]] = sub_out[1]",
"Summing all main umpires' decisions, any trade receiving at least one vote of 1 is intercepted. Overall interception accuracy of the four umpires:",
"block_pd = order_has_result.filter(regex='^ump_*')\n# 把所有主裁的决策进行相加\nblock_pd['sum_bk'] = block_pd.sum(axis=1)\nblock_pd['result'] = order_has_result['result']\n# 有投票1的即会进行拦截\nblock_pd = block_pd[block_pd.sum_bk > 0]\nprint('四个裁判整体拦截正确率{:.2f}%'.format(block_pd[block_pd.result == -1].result.count() / block_pd.result.count() * 100))\nblock_pd.tail()",
"Next, the interception accuracy of each individual main umpire:",
"from sklearn import metrics\ndef sub_ump_show(block_name):\n    sub_block_pd = block_pd[(block_pd[block_name] == 1)]\n    # 如果失败就正确 -1->1 1->0\n    sub_block_pd.result = np.where(sub_block_pd.result == -1, 1, 0)\n    return metrics.accuracy_score(sub_block_pd[block_name], sub_block_pd.result) * 100, sub_block_pd.result.count()\n\nprint('角度裁判拦截正确率{:.2f}%, 拦截交易数量{}'.format(*sub_ump_show('ump_deg')))\nprint('角度扩展裁判拦截正确率{:.2f}%, 拦截交易数量{}'.format(*sub_ump_show('ump_main_deg_extend')))\nprint('单混裁判拦截正确率{:.2f}%, 拦截交易数量{}'.format(*sub_ump_show('ump_mul')))\nprint('价格裁判拦截正确率{:.2f}%, 拦截交易数量{}'.format(*sub_ump_show('ump_price')))",
"3. Training the A-share edge umpires\nNow use the training-set trade data to train the edge umpires. The lineup again combines two built-in abupy umpires, AbuUmpEdgeDeg and AbuUmpEdgePrice, with the two custom umpires from 'Section 18: Custom umpire trading decisions': AbuUmpEdgeMul and AbuUmpEegeDegExtend, as shown below\nNote: because of how edge umpires work, their training is very fast, so we train directly here instead of loading umpire decision data from disk",
"# 需要全局设置为A股市场,在ump会根据市场类型保存读取对应的ump\nabupy.env.g_market_target = EMarketTargetType.E_MARKET_TARGET_CN\n\nprint('AbuUmpEdgeDeg begin...')\nAbuUmpEdgeDeg.ump_edge_clf_dump(orders_pd_train_cn)\nedge_deg = AbuUmpEdgeDeg(predict=True)\n\nprint('AbuUmpEdgePrice begin...')\nAbuUmpEdgePrice.ump_edge_clf_dump(orders_pd_train_cn)\nedge_price = AbuUmpEdgePrice(predict=True)\n\nprint('AbuUmpEdgeMul begin...')\nAbuUmpEdgeMul.ump_edge_clf_dump(orders_pd_train_cn)\nedge_mul = AbuUmpEdgeMul(predict=True)\n\nprint('AbuUmpEegeDegExtend begin...')\nAbuUmpEegeDegExtend.ump_edge_clf_dump(orders_pd_train_cn)\nedge_deg_extend = AbuUmpEegeDegExtend(predict=True)\n\nprint('fit edge complete!')",
"4. Verifying that the A-share edge umpires are competent\nAs with the main umpires, iterate over the trades one by one, pass each trade's buy-time features to the edge-umpire deciders, let every edge umpire decide whether to intercept, and compute each edge umpire's interception accuracy and the overall interception rate:\nNote: the code below demonstrates abupy's joblib-based multi-process scheduling wrapper and its multi-process progress bar",
"def apply_ml_features_edge(order, predicter, progress):\n if not isinstance(order.ml_features, dict):\n import ast\n # 低版本pandas dict对象取出来会成为str\n ml_features = ast.literal_eval(order.ml_features)\n else:\n ml_features = order.ml_features\n # 边裁进行裁决\n progress.show()\n # 将交易单中的买入时刻特征传递给ump边裁决策器,让每一个边裁来决策是否进行拦截\n edge = predicter.predict(**ml_features)\n return edge.value\n\n\ndef edge_pararllel_func(edge, edge_name):\n with AbuMulPidProgress(len(order_has_result), '{} complete'.format(edge_name)) as progress:\n # # 启动多进程进度条,对order_has_result进行apply\n edge_result = order_has_result.apply(apply_ml_features_edge, axis=1, args=(edge, progress,))\n return edge_name, edge_result\n\nif sys.version_info > (3, 4, 0):\n # python3.4以上并行处理4个边裁的决策,每一个边裁启动一个进程进行拦截决策\n parallel = Parallel(\n n_jobs=4, verbose=0, pre_dispatch='2*n_jobs')\n out = parallel(delayed(edge_pararllel_func)(edge, edge_name)\n for edge, edge_name in zip([edge_deg, edge_price, edge_mul, edge_deg_extend], \n ['edge_deg', 'edge_price', 'edge_mul', 'edge_deg_extend']))\nelse:\n # 3.4下由于子进程中pickle ump的内部类会找不到,所以暂时只使用一个进程一个一个的处理\n out = [edge_pararllel_func(edge, edge_name) for edge, edge_name in zip([edge_deg, edge_price, edge_mul, edge_deg_extend], \n ['edge_deg', 'edge_price', 'edge_mul', 'edge_deg_extend'])]\n \n# 将每一个进程中的裁判的拦截决策进行汇总\nfor sub_out in out:\n order_has_result[sub_out[0]] = sub_out[1]",
"Aggregating all edge umpires' decisions, any trade with a vote of -1 (judged loss_top) is taken together with the real trade result to form a result set, from which we compute the four edge umpires' overall interception accuracy and interception rate:",
"block_pd = order_has_result.filter(regex='^edge_*')\n\"\"\"\n 由于predict返回的结果中1代表win top\n 但是我们只需要知道loss_top,所以只保留-1, 其他1转换为0。\n\"\"\"\nblock_pd['edge_block'] = \\\n np.where(np.min(block_pd, axis=1) == -1, -1, 0)\n\n# 拿出真实的交易结果\nblock_pd['result'] = order_has_result['result']\n# 拿出-1的结果,即判定loss_top的\nblock_pd = block_pd[block_pd.edge_block == -1]\n\n\nprint('四个裁判整体拦截正确率{:.2f}%'.format(block_pd[block_pd.result == -1].result.count() / \n block_pd.result.count() * 100))\n\nprint('四个边裁拦截交易总数{}, 拦截率{:.2f}%'.format(\n block_pd.shape[0],\n block_pd.shape[0] / order_has_result.shape[0] * 100))\nblock_pd.head()",
"And the interception accuracy of each individual edge umpire:",
"from sklearn import metrics\ndef sub_edge_show(edge_name):\n sub_edge_block_pd = order_has_result[(order_has_result[edge_name] == -1)]\n return metrics.accuracy_score(sub_edge_block_pd[edge_name], sub_edge_block_pd.result) * 100, sub_edge_block_pd.shape[0]\n\nprint('角度边裁拦截正确率{0:.2f}%, 拦截交易数量{1:}'.format(*sub_edge_show('edge_deg')))\nprint('单混边裁拦截正确率{0:.2f}%, 拦截交易数量{1:}'.format(*sub_edge_show('edge_mul')))\nprint('价格边裁拦截正确率{0:.2f}%, 拦截交易数量{1:}'.format(*sub_edge_show('edge_price')))\nprint('角度扩展边裁拦截正确率{0:.2f}%, 拦截交易数量{1:}'.format(*sub_edge_show('edge_deg_extend')))",
"5. Enabling main-umpire and edge-umpire interception in the abu system\nThe built-in umpires are enabled through simple env settings; below we enable two built-in main umpires and two built-in edge umpires:",
"# 开启内置主裁\nabupy.env.g_enable_ump_main_deg_block = True\nabupy.env.g_enable_ump_main_price_block = True\n\n# 开启内置边裁\nabupy.env.g_enable_ump_edge_deg_block = True\nabupy.env.g_enable_ump_edge_price_block = True\n\n# 回测时需要开启特征生成,因为裁判开启需要生成特征做为输入\nabupy.env.g_enable_ml_feature = True\n# 回测时使用上一次切割好的测试集数据\nabupy.env.g_enable_last_split_test = True\n\nabupy.beta.atr.g_atr_pos_base = 0.05",
"Enabling user-defined umpires was demonstrated in 'Section 18: Custom umpire trading decisions' via ump.manager.append_user_ump\n\n\nNote that AbuFeatureDegExtend, the 10/30/50/90/120-day trend-fit angle features, must also be registered as a new viewpoint for recording the backtest (i.e. recording backtest features), because the lineup contains AbuUmpEegeDegExtend and AbuUmpMainDegExtend, which need backtest trade records carrying those angle features\n\n\nThe code is as follows:",
"feature.clear_user_feature()\n# 10,30,50,90,120日走势拟合角度特征的AbuFeatureDegExtend,做为回测时的新的视角来录制比赛\nfeature.append_user_feature(AbuFeatureDegExtend)\n\n# 打开使用用户自定义裁判开关\nump.manager.g_enable_user_ump = True\n# 先clear一下\nump.manager.clear_user_ump()\n# 把新的裁判AbuUmpMainDegExtend类名称使用append_user_ump添加到系统中\nump.manager.append_user_ump(AbuUmpEegeDegExtend)\n# 把新的裁判AbuUmpMainDegExtend类名称使用append_user_ump添加到系统中\nump.manager.append_user_ump(AbuUmpMainDegExtend)",
"The buy factors, sell factors, and other settings stay the same as before:",
"# 初始化资金500万\nread_cash = 5000000\n\n# 买入因子依然延用向上突破因子\nbuy_factors = [{'xd': 60, 'class': AbuFactorBuyBreak},\n {'xd': 42, 'class': AbuFactorBuyBreak}]\n\n# 卖出因子继续使用上一节使用的因子\nsell_factors = [\n {'stop_loss_n': 1.0, 'stop_win_n': 3.0,\n 'class': AbuFactorAtrNStop},\n {'class': AbuFactorPreAtrNStop, 'pre_atr_n': 1.5},\n {'class': AbuFactorCloseAtrNStop, 'close_atr_n': 1.5}\n]\nabupy.env.g_market_target = EMarketTargetType.E_MARKET_TARGET_CN\nabupy.env.g_data_fetch_mode = EMarketDataFetchMode.E_DATA_FETCH_FORCE_LOCAL",
"With the umpire lineup enabled, the backtest can begin; the workflow is the same as before:\nOn the first run choose select: run loop back ump, then click run select_ump; if the backtest has already been run, choose select: load test ump data to read it directly from the cache:",
"abupy.env.g_market_target = EMarketTargetType.E_MARKET_TARGET_CN\nabupy.env.g_data_fetch_mode = EMarketDataFetchMode.E_DATA_FETCH_FORCE_LOCAL\n\nabu_result_tuple_test_ump = None\ndef run_loop_back_ump():\n global abu_result_tuple_test_ump\n abu_result_tuple_test_ump, _ = abu.run_loop_back(read_cash,\n buy_factors,\n sell_factors,\n choice_symbols=None,\n start='2012-08-08', end='2017-08-08')\n # 把运行的结果保存在本地,以便之后分析回测使用,保存回测结果数据代码如下所示\n abu.store_abu_result_tuple(abu_result_tuple_test_ump, n_folds=5, store_type=EStoreAbu.E_STORE_CUSTOM_NAME, \n custom_name='test_ump_cn')\n ABuProgress.clear_output()\n\ndef run_load_ump():\n global abu_result_tuple_test_ump\n abu_result_tuple_test_ump = abu.load_abu_result_tuple(n_folds=5, store_type=EStoreAbu.E_STORE_CUSTOM_NAME, \n custom_name='test_ump_cn')\n\ndef select_ump(select):\n if select == 'run loop back ump':\n run_loop_back_ump()\n else:\n run_load_ump()\n\n_ = ipywidgets.interact_manual(select_ump, select=['run loop back ump', 'load test ump data'])",
"Comparing the A-share test-set backtest with main/edge-umpire interception enabled against the one without, nearly half of the trades are intercepted, and both the win rate and the profit/loss ratio improve substantially:",
"AbuMetricsBase.show_general(*abu_result_tuple_test_ump, returns_cmp=True, only_info=True)\n\nAbuMetricsBase.show_general(*abu_result_tuple_test, returns_cmp=True, only_info=True)",
"abu quant documentation chapters\n\nDeveloping timing strategies\nOptimizing timing strategies\nSlippage strategies and trading commissions\nMulti-stock timing backtests and position management\nDeveloping stock-selection strategies\nMeasuring backtest results\nFinding optimal strategy parameters and scoring\nBacktesting the A-share market\nBacktesting the Hong Kong market\nBacktesting Bitcoin and Litecoin\nBacktesting futures markets\nMachine learning with a Bitcoin example\nApplied quantitative technical analysis\nApplied quantitative correlation analysis\nQuantitative trading and search engines\nUMP main-umpire trading decisions\nUMP edge-umpire trading decisions\nCustom umpire trading decisions\nData sources\nA-share full-market backtest\nA-share UMP decisions\nUS-stock full-market backtest\nUS-stock UMP decisions\n\nThe abu quant documentation is continuously updated; follow the official WeChat account for update notices.\nChapters and companion code of 'The Road of Quantitative Trading'\n\nChapter 2 The quant language: Python\nChapter 3 Quant tools: NumPy\nChapter 4 Quant tools: pandas\nChapter 5 Quant tools: visualization\nChapter 6 Quant tools: mathematics - how much happiness can a lifetime of pursuit bring\nChapter 7 Quant system introduction: the three little pigs' stock-investment story\nChapter 8 Quant system development\nChapter 9 Quant system measurement and optimization\nChapter 10 Quant system machine learning: Pig No. 3\nChapter 11 Quant system machine learning: ABU\nAppendix A Deploying the quant environment\nAppendix B Quantitative correlation analysis\nAppendix C Quantitative statistical analysis and indicator applications\n\nMore Abu Quant technical articles\nFor more on quantitative trading, read 'The Road of Quantitative Trading'\nFor more on quantitative trading and machine learning, read 'The Road of Machine Learning'\nFor more on the abu quant system, follow the WeChat account: abu_quant"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
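The main-umpire aggregation in the abupy record above is a simple veto ensemble: a trade is blocked if any judge votes 1, and the blocks are then scored against the real trade results. A pandas-free sketch of that bookkeeping (the judge votes and results below are made up for illustration):

```python
import numpy as np

def veto_block(votes):
    # Block a trade if any judge votes 1 (the `sum_bk > 0` rule above).
    return np.asarray(votes).sum(axis=1) > 0

def interception_accuracy(blocked, results):
    # Share of blocked trades that were actually losers (result == -1).
    blocked = np.asarray(blocked)
    results = np.asarray(results)
    hits = np.sum(blocked & (results == -1))
    total = np.sum(blocked)
    return hits / total if total else float('nan')

# One row per trade, one column per judge; -1 = losing trade, 1 = winning.
votes = [[1, 0, 0, 0], [0, 0, 0, 0], [1, 1, 0, 0], [0, 0, 1, 0]]
results = [-1, 1, 1, -1]
blocked = veto_block(votes)                    # trades 0, 2, 3 blocked
acc = interception_accuracy(blocked, results)  # 2 of 3 blocks were losers
```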
mwcraig/reducer
|
reducer/reducer-template.ipynb
|
bsd-3-clause
|
[
"Reducer: (Put your name here)\nReviewer: (Put your name here)\njupyter notebook crash course\nClick on a code cell (has grey background) then press Shift-Enter (at the same time) to run a code cell. That will add the controls (buttons, etc) you use to do the reduction one-by-one; then use them for reduction.\nreducer crash course\nRule 0: Run the code cells in order\nThe world won't end if you break this rule, but you are more likely to end up with nonsensical results or errors. Incidentally, welcome to python indexing, which starts numbering at zero.\nRule 1: Do not run this notebook in the directory containing your unreduced data\nreducer will not overwrite anything but the idea is that you will keep a copy of this notebook with your reduced data.\nRule 2: Keep the cells you need, delete the ones you don't\nIPython notebooks are essentially customizable apps. If you don't shoot dark frames, for example, delete the stuff related to darks.\nRule 3: If you find bugs, please report them\nYou can report bugs, make feature requests or (best of all) submit pull requests from reducer's home on github\nBonus Pro Tip: Feel free to ignore the code in the code cells\nCode is there so that people who know python can see what is going on, but if you don't know python you should still be able to use the notebook. Just remember to Shift-Enter on each code cell to run it, then fill in the form(s) that appear in the notebook.",
"import reducer.gui\nimport reducer.astro_gui as astro_gui\nfrom reducer.image_browser import ImageBrowser\n\nfrom ccdproc import ImageFileCollection\n\nfrom reducer import __version__\nprint(__version__)",
"Enter name of directory that contains your data in the cell below, or...\n...leave it unchanged to try out reducer with low-resolution dataset\nThat low-resolution dataset will expand to about 300MB when uncompressed",
"# To use the sample data set:\ndata_dir = reducer.notebook_dir.get_data_path()\n\n# Or, uncomment line below and modify as needed\n# data_dir = 'path/to/your/data'\n\ndestination_dir = '.'",
"Type any comments about this dataset here\nDouble-click on the cell to start editing it.\nLoad your data set",
"images = ImageFileCollection(location=data_dir, keywords='*')",
"Image Summary\nIncludes browser and image/metadata viewer\nThis is not, strictly speaking, part of reduction, but is a handy way to take a quick look at your files.",
"fits_browser = ImageBrowser(images, keys=['imagetyp', 'exposure'])\nfits_browser.display()",
"You can reconfigure the image browser if you want (or not)\nBy passing different keys into the tree constructor you can generate a navigable tree based on any keys you want.",
"im_a_tree_too = ImageBrowser(images, keys=['filter', 'imagetyp', 'exposure'])\nim_a_tree_too.display()",
"Make a combined bias image\nReduce the bias images",
"bias_reduction = astro_gui.Reduction(description='Reduce bias frames',\n toggle_type='button',\n allow_bias=False,\n allow_dark=False,\n allow_flat=False,\n input_image_collection=images,\n apply_to={'imagetyp': 'bias'},\n destination=destination_dir)\nbias_reduction.display()\n\nprint(bias_reduction)",
"Combine bias images to make combined bias",
"reduced_collection = ImageFileCollection(location=destination_dir, keywords='*')\nbias = astro_gui.Combiner(description=\"Combined Bias Settings\",\n toggle_type='button',\n file_name_base='combined_bias',\n image_source=reduced_collection,\n apply_to={'imagetyp': 'bias'},\n destination=destination_dir)\nbias.display()\n\nprint(bias)",
"Make a combined dark\nReduce dark images",
"reduced_collection = ImageFileCollection(location=destination_dir, keywords='*')\ndark_reduction = astro_gui.Reduction(description='Reduce dark frames',\n toggle_type='button',\n allow_bias=True,\n master_source=reduced_collection,\n allow_dark=False,\n allow_flat=False,\n input_image_collection=images,\n destination=destination_dir,\n apply_to={'imagetyp': 'dark'})\n\ndark_reduction.display()\n\nprint(dark_reduction)",
"Combine reduced darks\nNote the Group by option in the controls that appear after you execute the cell below. reducer will make a master for each value of the FITS keyword listed in Group by. By default this keyword is named exposure for darks, so if you have darks with exposure times of 10 sec, 15 sec and 120 sec you will get three master darks, one for each exposure time.",
"reduced_collection = ImageFileCollection(location=destination_dir, keywords='*')\n\ndark = astro_gui.Combiner(description=\"Make Combined Dark(s)\",\n toggle_type='button',\n file_name_base='combined_dark',\n group_by='exposure',\n image_source=reduced_collection,\n apply_to={'imagetyp': 'dark'},\n destination=destination_dir)\ndark.display()\n\nprint(dark)",
"Make combined flats\nReduce flat images",
"reduced_collection = ImageFileCollection(location=destination_dir, keywords='*')\nflat_reduction = astro_gui.Reduction(description='Reduce flat frames',\n toggle_type='button',\n allow_bias=True,\n master_source=reduced_collection,\n allow_dark=True,\n allow_flat=False,\n input_image_collection=images,\n destination=destination_dir,\n apply_to={'imagetyp': 'flat'})\n\nflat_reduction.display()\n\nprint(flat_reduction)",
"Build combined flats\nAgain, note the presence of Group by. If you typically use twilight flats you will almost certainly want to group by filter, not by filter and exposure.",
"reduced_collection = ImageFileCollection(location=destination_dir, keywords='*')\n\nflat = astro_gui.Combiner(description=\"Make Combined Flat(s)\",\n toggle_type='button',\n file_name_base='combined_flat',\n group_by='filter',\n image_source=reduced_collection,\n apply_to={'imagetyp': 'flat'},\n destination=destination_dir)\nflat.display()\n\nprint(flat)",
"Reduce the science images\nThere is some autmatic matching going on here:\n\nIf darks are subtracted a dark of the same edxposure time will be used, if available. If not, and dark scaling is enabled, the dark with the closest exposure time will be scaled to match the science image.\nIf the dark you want to scale appears not to be bias-subtracted an error will be raised.\nFlats are matched by filter.",
"reduced_collection = ImageFileCollection(location=destination_dir, keywords='*')\nlight_reduction = astro_gui.Reduction(description='Reduce light frames',\n toggle_type='button',\n allow_cosmic_ray=True,\n master_source=reduced_collection,\n input_image_collection=images,\n destination=destination_dir,\n apply_to={'imagetyp': 'light'})\n\nlight_reduction.display()",
"Wonder what the reduced images look like? Make another image browser...",
"reduced_collection = ImageFileCollection(location=destination_dir, keywords='*')\n\nreduced_browser = ImageBrowser(reduced_collection, keys=['imagetyp', 'filter'])\nreduced_browser.display()"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
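The Group by behaviour described in the dark- and flat-combination cells of this notebook (one combined master per distinct value of a FITS keyword such as exposure or filter) can be sketched independently of the reducer GUI. This is a minimal illustration under stated assumptions: `combine_by_group` and the toy frames are hypothetical helpers, not part of the reducer or ccdproc API.

```python
import numpy as np

def combine_by_group(frames, keyvals, reducer=np.median):
    """Group 2-D frames by a header value, then combine each group pixelwise."""
    groups = {}
    for frame, key in zip(frames, keyvals):
        groups.setdefault(key, []).append(frame)
    return {key: reducer(np.stack(stack), axis=0) for key, stack in groups.items()}

# Three toy 2x2 "dark" frames: two taken at 10 s, one at 120 s.
darks = [np.full((2, 2), v) for v in (10.0, 12.0, 99.0)]
masters = combine_by_group(darks, keyvals=[10, 10, 120])
# masters[10] is the pixelwise median of the two 10 s frames (all pixels 11.0);
# masters[120] is just the single 120 s frame.
```

Grouping first and combining per group is exactly why darks shot at 10 s, 15 s, and 120 s end up as three separate masters.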
tuanvu216/udacity-course
|
deep_learning/examples/4_convolutions.ipynb
|
mit
|
[
"Deep Learning\nAssignment 4\nPreviously in 2_fullyconnected.ipynb and 3_regularization.ipynb, we trained fully connected networks to classify notMNIST characters.\nThe goal of this assignment is make the neural network convolutional.",
"# These are all the modules we'll be using later. Make sure you can import them\n# before proceeding further.\nimport cPickle as pickle\nimport numpy as np\nimport tensorflow as tf\n\npickle_file = 'notMNIST.pickle'\n\nwith open(pickle_file, 'rb') as f:\n save = pickle.load(f)\n train_dataset = save['train_dataset']\n train_labels = save['train_labels']\n valid_dataset = save['valid_dataset']\n valid_labels = save['valid_labels']\n test_dataset = save['test_dataset']\n test_labels = save['test_labels']\n del save # hint to help gc free up memory\n print 'Training set', train_dataset.shape, train_labels.shape\n print 'Validation set', valid_dataset.shape, valid_labels.shape\n print 'Test set', test_dataset.shape, test_labels.shape",
"Reformat into a TensorFlow-friendly shape:\n- convolutions need the image data formatted as a cube (width by height by #channels)\n- labels as float 1-hot encodings.",
"image_size = 28\nnum_labels = 10\nnum_channels = 1 # grayscale\n\nimport numpy as np\n\ndef reformat(dataset, labels):\n dataset = dataset.reshape(\n (-1, image_size, image_size, num_channels)).astype(np.float32)\n labels = (np.arange(num_labels) == labels[:,None]).astype(np.float32)\n return dataset, labels\ntrain_dataset, train_labels = reformat(train_dataset, train_labels)\nvalid_dataset, valid_labels = reformat(valid_dataset, valid_labels)\ntest_dataset, test_labels = reformat(test_dataset, test_labels)\nprint 'Training set', train_dataset.shape, train_labels.shape\nprint 'Validation set', valid_dataset.shape, valid_labels.shape\nprint 'Test set', test_dataset.shape, test_labels.shape\n\ndef accuracy(predictions, labels):\n return (100.0 * np.sum(np.argmax(predictions, 1) == np.argmax(labels, 1))\n / predictions.shape[0])",
"Let's build a small network with two convolutional layers, followed by one fully connected layer. Convolutional networks are more expensive computationally, so we'll limit its depth and number of fully connected nodes.",
"batch_size = 16\npatch_size = 5\ndepth = 16\nnum_hidden = 64\n\ngraph = tf.Graph()\n\nwith graph.as_default():\n\n # Input data.\n tf_train_dataset = tf.placeholder(\n tf.float32, shape=(batch_size, image_size, image_size, num_channels))\n tf_train_labels = tf.placeholder(tf.float32, shape=(batch_size, num_labels))\n tf_valid_dataset = tf.constant(valid_dataset)\n tf_test_dataset = tf.constant(test_dataset)\n \n # Variables.\n layer1_weights = tf.Variable(tf.truncated_normal(\n [patch_size, patch_size, num_channels, depth], stddev=0.1))\n layer1_biases = tf.Variable(tf.zeros([depth]))\n layer2_weights = tf.Variable(tf.truncated_normal(\n [patch_size, patch_size, depth, depth], stddev=0.1))\n layer2_biases = tf.Variable(tf.constant(1.0, shape=[depth]))\n layer3_weights = tf.Variable(tf.truncated_normal(\n [image_size / 4 * image_size / 4 * depth, num_hidden], stddev=0.1))\n layer3_biases = tf.Variable(tf.constant(1.0, shape=[num_hidden]))\n layer4_weights = tf.Variable(tf.truncated_normal(\n [num_hidden, num_labels], stddev=0.1))\n layer4_biases = tf.Variable(tf.constant(1.0, shape=[num_labels]))\n \n # Model.\n def model(data):\n conv = tf.nn.conv2d(data, layer1_weights, [1, 2, 2, 1], padding='SAME')\n hidden = tf.nn.relu(conv + layer1_biases)\n conv = tf.nn.conv2d(hidden, layer2_weights, [1, 2, 2, 1], padding='SAME')\n hidden = tf.nn.relu(conv + layer2_biases)\n shape = hidden.get_shape().as_list()\n reshape = tf.reshape(hidden, [shape[0], shape[1] * shape[2] * shape[3]])\n hidden = tf.nn.relu(tf.matmul(reshape, layer3_weights) + layer3_biases)\n return tf.matmul(hidden, layer4_weights) + layer4_biases\n \n # Training computation.\n logits = model(tf_train_dataset)\n loss = tf.reduce_mean(\n tf.nn.softmax_cross_entropy_with_logits(logits, tf_train_labels))\n \n # Optimizer.\n optimizer = tf.train.GradientDescentOptimizer(0.05).minimize(loss)\n \n # Predictions for the training, validation, and test data.\n train_prediction = tf.nn.softmax(logits)\n 
valid_prediction = tf.nn.softmax(model(tf_valid_dataset))\n test_prediction = tf.nn.softmax(model(tf_test_dataset))\n\nnum_steps = 1001\n\nwith tf.Session(graph=graph) as session:\n tf.initialize_all_variables().run()\n print \"Initialized\"\n for step in xrange(num_steps):\n offset = (step * batch_size) % (train_labels.shape[0] - batch_size)\n batch_data = train_dataset[offset:(offset + batch_size), :, :, :]\n batch_labels = train_labels[offset:(offset + batch_size), :]\n feed_dict = {tf_train_dataset : batch_data, tf_train_labels : batch_labels}\n _, l, predictions = session.run(\n [optimizer, loss, train_prediction], feed_dict=feed_dict)\n if (step % 50 == 0):\n print \"Minibatch loss at step\", step, \":\", l\n print \"Minibatch accuracy: %.1f%%\" % accuracy(predictions, batch_labels)\n print \"Validation accuracy: %.1f%%\" % accuracy(\n valid_prediction.eval(), valid_labels)\n print \"Test accuracy: %.1f%%\" % accuracy(test_prediction.eval(), test_labels)",
"Problem 1\nThe convolutional model above uses convolutions with stride 2 to reduce the dimensionality. Replace the strides a max pooling operation (nn.max_pool()) of stride 2 and kernel size 2.\n\n\nProblem 2\nTry to get the best performance you can using a convolutional net. Look for example at the classic LeNet5 architecture, adding Dropout, and/or adding learning rate decay."
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
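Problem 1 in this notebook asks to swap the stride-2 convolutions for 2x2 max pooling with stride 2. What `tf.nn.max_pool` computes on a single channel can be sketched in plain NumPy; `max_pool_2x2` is a hypothetical helper written for illustration, not TensorFlow API.

```python
import numpy as np

def max_pool_2x2(x):
    """2x2 max pooling with stride 2 over an (H, W) array; H and W must be even."""
    h, w = x.shape
    # Split into non-overlapping 2x2 blocks, then take the max of each block.
    return x.reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))

img = np.array([[1, 2, 0, 1],
                [3, 4, 1, 0],
                [0, 0, 5, 6],
                [0, 0, 7, 8]], dtype=float)
pooled = max_pool_2x2(img)  # each output pixel is the max of one 2x2 block
```

Like the stride-2 convolution it replaces, this halves both spatial dimensions, so the `image_size / 4` factor in the fully connected layer's shape stays the same.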
tritemio/PyBroMo
|
notebooks/PyBroMo - 2. Generate smFRET data, including mixtures.ipynb
|
gpl-2.0
|
[
"PyBroMo - 2. Generate smFRET data, including mixtures\n<small><i>\nThis notebook is part of <a href=\"http://tritemio.github.io/PyBroMo\" target=\"_blank\">PyBroMo</a> a \npython-based single-molecule Brownian motion diffusion simulator \nthat simulates confocal smFRET\nexperiments.\n</i></small>\nOverview\nIn this notebook we show how to generated smFRET data files from the diffusion trajectories.\nLoading the software\nImport all the relevant libraries:",
"%matplotlib inline\nfrom pathlib import Path\nimport numpy as np\nimport tables\nimport matplotlib.pyplot as plt\nimport seaborn as sns\nimport pybromo as pbm\nprint('Numpy version:', np.__version__)\nprint('PyTables version:', tables.__version__)\nprint('PyBroMo version:', pbm.__version__)",
"Create smFRET data-files\nCreate a file for a single FRET efficiency\nIn this section we show how to save a single smFRET data file. In the next section we will perform the same steps in a loop to generate a sequence of smFRET data files.\nHere we load a diffusion simulation opening a file to save\ntimstamps in write mode. Use 'a' (i.e. append) to keep \npreviously simulated timestamps for the given diffusion.",
"S = pbm.ParticlesSimulation.from_datafile('0168', mode='w')\n\nS.particles.diffusion_coeff_counts\n\n#S = pbm.ParticlesSimulation.from_datafile('0168')",
"Simulate timestamps of smFRET\nExample1: single FRET population\nDefine the simulation parameters with the following syntax:",
"params = dict(\n em_rates = (200e3,), # Peak emission rates (cps) for each population (D+A)\n E_values = (0.75,), # FRET efficiency for each population\n num_particles = (20,), # Number of particles in each population\n bg_rate_d = 1500, # Poisson background rate (cps) Donor channel\n bg_rate_a = 800, # Poisson background rate (cps) Acceptor channel\n )",
"Create the object that will run the simulation and print a summary:",
"mix_sim = pbm.TimestapSimulation(S, **params)\nmix_sim.summarize()",
"Run the simualtion:",
"rs = np.random.RandomState(1234)\nmix_sim.run(rs=rs, overwrite=False, skip_existing=True)",
"Save simulation to a smFRET Photon-HDF5 file:",
"mix_sim.save_photon_hdf5(identity=dict(author='John Doe', \n author_affiliation='Planet Mars'))",
"Example 2: 2 FRET populations\nTo simulate 2 population we just define the parameters with \none value per population, except for the Poisson background \nrate that is a single value for each channel.",
"params = dict(\n em_rates = (200e3, 180e3), # Peak emission rates (cps) for each population (D+A)\n E_values = (0.75, 0.35), # FRET efficiency for each population\n num_particles = (20, 15), # Number of particles in each population\n bg_rate_d = 1500, # Poisson background rate (cps) Donor channel\n bg_rate_a = 800, # Poisson background rate (cps) Acceptor channel\n )\n\nmix_sim = pbm.TimestapSimulation(S, **params)\nmix_sim.summarize()\n\nrs = np.random.RandomState(1234)\nmix_sim.run(rs=rs, overwrite=False, skip_existing=True)\n\nmix_sim.save_photon_hdf5()",
"Burst analysis\nThe generated Photon-HDF5 files can be analyzed by any smFRET burst\nanalysis program. Here we show an example using the opensource\nFRETBursts program:",
"import fretbursts as fb\n\nfilepath = list(Path('./').glob('smFRET_*'))\nfilepath\n\nd = fb.loader.photon_hdf5(str(filepath[1]))\n\nd\n\nd.A_em\n\nfb.dplot(d, fb.timetrace);\n\nd.calc_bg(fun=fb.bg.exp_fit, tail_min_us='auto', F_bg=1.7)\n\nd.bg_dd, d.bg_ad\n\nd.burst_search(F=7)\n\nd.num_bursts\n\nds = d.select_bursts(fb.select_bursts.size, th1=20)\n\nds.num_bursts\n\nfb.dplot(d, fb.timetrace, bursts=True);\n\nfb.dplot(ds, fb.hist_fret, pdf=False)\nplt.axvline(0.75);",
"NOTE: Unless you simulated a diffusion of 30s or more the previous histogram will be very poor.",
"fb.bext.burst_data(ds)"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
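The `bg_rate_d` and `bg_rate_a` parameters in this notebook describe Poisson background photons in each channel. Independently of PyBroMo's internals, such a timestamp stream can be sketched by drawing exponential inter-arrival times; `poisson_timestamps` is a hypothetical helper, not part of the PyBroMo API.

```python
import numpy as np

def poisson_timestamps(rate_cps, duration_s, rs):
    """Arrival times (s) of a Poisson process with the given rate, via exponential gaps."""
    # Draw more gaps than we expect to need, then clip to the duration.
    n_draw = int(rate_cps * duration_s * 1.5) + 100
    t = np.cumsum(rs.exponential(1.0 / rate_cps, size=n_draw))
    return t[t < duration_s]

rs = np.random.RandomState(1234)
bg_d = poisson_timestamps(1500, 1.0, rs)  # donor background, ~1500 counts expected
bg_a = poisson_timestamps(800, 1.0, rs)   # acceptor background, ~800 counts expected
```

Passing an explicit `RandomState`, as `mix_sim.run(rs=rs, ...)` does above, is what makes a stochastic simulation like this reproducible.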
zasdfgbnm/qutip-notebooks
|
examples/example-stinespring.ipynb
|
lgpl-3.0
|
[
"QuTiP Example: Stinespring Dilation of Quantum Channels\nChristopher Granade <br>\nUniversity of Sydney\n$\\newcommand{\\ket}[1]{\\left|#1\\right\\rangle}$\n$\\newcommand{\\bra}[1]{\\left\\langle#1\\right|}$\n$\\newcommand{\\cnot}{{\\scriptstyle \\rm CNOT}}$\n$\\newcommand{\\TT}{\\operatorname{T}}$\n$\\newcommand{\\Tr}{\\operatorname{Tr}}$\n$\\newcommand{\\Uni}{\\operatorname{U}}$\n$\\newcommand{\\Chan}{\\operatorname{C}}$\n$\\newcommand{\\Hil}{\\mathcal{H}}$\nIntroduction\nIn this notebook, we will demonstrate the to_stinespring function, which converts a Qobj representing the quantum channel $\\Phi$ to a pair of partial isometries that describe the action of $\\Phi$.\nIn introducing the Stinespring dilation, it is helpful to first adopt some notation. Let $\\Hil_X$, $\\Hil_Y$ and $\\Hil_Z$ be finite-dimensional Hilbert spaces. Let $\\Uni(\\Hil_X, \\Hil_Y)$ be the set of partial isometries from $\\Hil_X$ to $\\Hil_Y$, and $\\TT(\\Hil_X, \\Hil_Y)$ be the set of maps (that is, superoperators) from operators on $\\Hil_X$ to those acting on $\\Hil_Y$. Finally, let $\\Chan(\\Hil_X, \\Hil_Y) \\subset \\TT(\\Hil_X, \\Hil_Y)$ be the set of channels; that is, those maps which are completely positive and trace-preserving.\nFor a quantum map $\\Phi \\in \\TT(\\Hil_X, \\Hil_Y)$, then, the Stinespring dilation of $\\Phi$ is a pair of partial isometries $A, B \\in \\Uni(\\Hil_X, \\Hil_Y \\otimes \\Hil_Z)$ for some space $\\Hil_Z$ such that\n\\begin{equation}\n \\Phi(X) = \\Tr_Z (A X B^\\dagger).\n \\label{eq:stinespring-action}\n\\end{equation}\nIf $\\Phi$ is completely positive, then $A = B$. We do not insist on complete positivity, however, as it is common to consider the Stinespring dilation for the difference between two quantum channels, $\\Phi - \\Phi'$. By construction, such a map is not completely positive, nor is it trace-preserving.\nInformally, the ancillary space $\\Hil_Z$ serves the same role as the Kraus index. 
If $\\Phi$ is a channel with Kraus operators $\\{K_i : i \\in \\{0, \\dots, k - 1\\}\\}$, then $A = B = \\sum_i K_i \\otimes \\ket{i}$, such that summation over $i$ is performed by the partial trace in \\eqref{eq:stinespring-action}.\nThe Stinespring representation is useful for evaluating norms and for building system/environment models of a quantum channel.\nFurther Resources\n\nWatrous. CS 766: Theory of Quantum Information (lecture notes) (2013).\nWood, Biamonte and Cory. Tensor networks and graphical calculus for open quantum systems. arXiv:1111.6950 (2011).\nWatrous. Semidefinite Programs for Completely Bounded Norms. Theory of Computing 5 217-238, arXiv:0901.4709 (2009).\n\nPreamble\nFeatures\nWe enable a few features such that this notebook runs in both Python 2 and 3.",
"from __future__ import division, print_function",
"Imports",
"import numpy as np\nimport qutip as qt\n\nfrom qutip.ipynbtools import version_table",
"Plotting Support",
"%matplotlib inline",
"Settings",
"qt.settings.colorblind_safe = True",
"Dilations of CPTP Maps\nWe'll start by considering the case in which $\\Phi(X) = U X U^\\dagger$ for a random unitary operator $U \\in \\Uni(\\Hil_X, \\Hil_X)$.",
"phi = qt.to_super(qt.rand_unitary_haar(4))\n\nqt.visualization.hinton(phi)",
"By construction, $\\Phi$ has one Kraus operator--- the random unitary itself. Because of numerical precision in finding eigenvalues of the Choi matrix, a decomposition will result in many operators with very small norms.",
"for K in qt.to_kraus(phi):\n print(K.norm())",
"Because $\\Phi$ only has one Kraus operator, the Stinespring dilation will use a 1-dimensional ancillary space.",
"A, B = qt.to_stinespring(phi)\nprint(A.dims)",
"If we ask for a random channel of Choi rank 2 (that is, with 2 Kraus operators), then the corresponding space will be two-dimensional instead.",
"A, B = qt.to_stinespring(qt.rand_super_bcsz(4, rank=2))\nprint(A.dims)",
"Moreover, since we have demanded that $A$ and $B$ are the Stinespring pair for a channel, we can easily verify that $A = B$.",
"A - B",
"Differences of Random Channels\nNext, let's consider two random qubit channels from the [BCSZ distribution](qutip.visualization.hinton.",
"phi1 = qt.rand_super_bcsz(2)\nphi2 = qt.rand_super_bcsz(2)",
"Looking at the Pauli-basis Hinton diagram for the difference between these channels gives some insight into how this map behaves.",
"qt.visualization.hinton(phi1 - phi2)",
"In particular, note that $\\Delta\\Phi := \\Phi_1 - \\Phi_2$ annilates the traceful parts of its input, leaving an operator with negative eigenvalues.",
"d_phi = qt.to_super(phi1 - phi2)\nrho_in = qt.rand_dm(2)\nrho = qt.vector_to_operator(d_phi * qt.operator_to_vector(rho_in))\nprint(\"Trace: {}\\nEigenvalues:{}\".format(rho.tr(), rho.eigenenergies()))\nrho",
"Because $\\Delta \\Phi$ is not completely positive, we cannot obtain a Kraus decomposition where the left and right operators are the same:",
"Ks = qt.to_kraus(d_phi)\n\nrho - sum(K * rho_in * K.dag() for K in Ks)",
"On the other hand, if we allow the left and right Kraus operators to be different, or consider a Stinespring dilation, we get the right answer:",
"A, B = qt.to_stinespring(d_phi)\nrho - (A * rho_in * B.dag()).ptrace((0,))",
"Constructing System-Environment Models\nFor a channel $\\Phi \\in \\Chan(\\Hil_X, \\Hil_X)$, once we have found the Stinespring dilation $A$, we then represent $\\Phi$ as a unitary on the full space, $U \\in \\Uni(\\Hil_X \\otimes \\Hil_Z)$, interpreting $\\Hil_Z$ as a preparation of an environment. Concretely, we want to find $U$ such that for a preparation $\\ket{\\psi}$ of the environment,\n\\begin{equation}\n \\Phi(X) = \\Tr_Z(U \\rho \\otimes \\ket{\\psi}\\bra{\\psi} U^\\dagger).\n\\end{equation}\nBy convention, we'll choose a basis for $\\Hil_Z$ such that $\\ket{\\psi} = \\ket{0}$. Then, if we let $V = A \\otimes \\bra{0}$,\n\\begin{equation}\n \\Tr_Z(V \\rho \\otimes \\ket{0}\\bra{0} V^\\dagger) =\n \\Tr_Z(A \\rho \\otimes \\bra{0}\\ket{0}\\bra{0}\\ket{0} A) =\n \\Tr_Z(A \\rho A^\\dagger)\n\\end{equation}\nas desired.",
"A, B = qt.to_stinespring(qt.rand_super_bcsz(3))\nV = qt.tensor(A, qt.basis(A.dims[0][-1], 0).dag())\n# This adds an annoying left index, so let's drop it now.\ndel V.dims[0][-1]\nV\n\nrho = qt.rand_dm_ginibre(3)\n(A * rho * A.dag()).ptrace((0,)) - (V * qt.tensor(rho, qt.ket2dm(qt.basis(A.dims[0][-1], 0))) * V.dag()).ptrace((0,))",
"It thus remains to extend $V$ to act non-trivially on the full space. This is done by noting that the singular values of $V$ are each one or zero, convienently partitioning $\\Hil_X \\otimes \\Hil_Z$ into the null space of $V$ and its complement.",
"Vu, Vs, Vv = np.linalg.svd(V.data.todense())\nU = qt.Qobj(Vu, dims=V.dims) * qt.Qobj(Vv, dims=V.dims)\nU * U.dag()",
"We finish by verifying that preparing an environment state, evolving under the system/environment unitary $U$, then partial tracing over the environment produces the same map as the original channel.",
"(A * rho * A.dag()).ptrace((0,)) - (U * qt.tensor(rho, qt.ket2dm(qt.basis(A.dims[0][-1], 0))) * U.dag()).ptrace((0,))",
"Epilouge",
"version_table()"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
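The remark in this notebook that the ancillary space plays the role of the Kraus index, with $A = B = \sum_i K_i \otimes \ket{i}$ for a channel, can be checked numerically without QuTiP. A minimal NumPy sketch, using amplitude-damping Kraus operators chosen purely for illustration:

```python
import numpy as np

# Amplitude-damping Kraus operators on a qubit (gamma = 0.3, chosen arbitrarily).
g = 0.3
K = [np.array([[1.0, 0.0], [0.0, np.sqrt(1 - g)]]),
     np.array([[0.0, np.sqrt(g)], [0.0, 0.0]])]
d, k = 2, len(K)

# Stinespring isometry A = sum_i K_i (x) |i>, mapping H_X into H_X (x) H_Z.
A = sum(np.kron(Ki, np.eye(k)[:, [i]]) for i, Ki in enumerate(K))

rho = np.array([[0.6, 0.2], [0.2, 0.4]])
M = (A @ rho @ A.conj().T).reshape(d, k, d, k)  # indices: (sys, anc, sys, anc)
lhs = np.einsum('aibi->ab', M)                   # Tr_Z(A rho A^dagger)
rhs = sum(Ki @ rho @ Ki.conj().T for Ki in K)    # Kraus form of Phi(rho)
# lhs equals rhs, and A^dagger A = I because Phi is trace-preserving.
```

The partial trace over the ancilla reproduces the sum over the Kraus index, which is exactly the role `to_stinespring`'s ancillary space plays above.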
abhay1/tf_rundown
|
notebooks/Nearest Neighbors.ipynb
|
mit
|
[
"Classification - MNIST dataset\n\nExploring the popular MNIST dataset. \nTensorflow provides a function to ingest the data.",
"# Necessary imports\nimport tensorflow as tf\nimport numpy as np\nfrom PIL import Image, ImageOps\nfrom matplotlib.pyplot import imshow\n%matplotlib inline\n\nfrom tensorflow.examples.tutorials.mnist import input_data\n\n# Read the mnist dataset\nmnist = input_data.read_data_sets(\"/tmp/data/\", one_hot=True)",
"Lets look at a random image and its label",
"# Pull out a random image & its label\nrandom_image_index = 200\nrandom_image = mnist.train.images[random_image_index]\nrandom_image_label = mnist.train.labels[random_image_index]\n\n# Print the label and the image as grayscale\nprint(\"Image label: %d\"%(random_image_label.argmax()))\npil_image = Image.fromarray(((random_image.reshape(28,28)) * 256).astype('uint8'), \"L\")\nimshow(ImageOps.invert(pil_image), cmap='gray')",
"Nearest Neighbor Classifier\n\nBuild a nearest neighbors classifier using a subset of mnist data.",
"# Read only a subsample\nX_train_input, Y_train_input = mnist.train.next_batch(5000)\nX_test_input, Y_test_input = mnist.test.next_batch(200)",
"Step 1: Create placeholders to hold the images. \n\nUsing None for a dimension in shape means it can be any number.",
"# Create placeholders\nx_train_tensor = tf.placeholder(tf.float32, shape=(None, 784))\nx_test_tensor = tf.placeholder(tf.float32, shape=[784])",
"Step 2: Lets build a graph using the following operations\n\n\nFirst get the deltas along each dimension (pixel value): Use tf.substract to subtract X_train_input (train) with x_test_tensor (test).\n (Note that X_train_input has 784 columns (28x28) and can have any number of rows (examples). x_test_tensor however is a vector of only 784 elements. tf.substract is a broadcast operation where each row of X_train_input is subtracted by x_test_tensor.)\n\n\nNext, get the squared deltas for each dimension: Use tf.square which performs elementwise squaring.\n\n\nNow compute the L2 distance. First use tf.reduce_sum to sum up all the 784 columns with squared deltas. Then use tf.srqt to compute the square root to get the L2 distance. Note that distance is a vector comprising distance of a particular test image with each of the train image.\n\n\nFind out the nearest neighbor. Use tf.arg_min (similar to numpy) to get the index of the closest training example.",
"# Image deltas\ndeltas = tf.subtract(x_train_tensor, x_test_tensor)\nsquared_deltas = tf.square(deltas)\n\n# L2 distance: Root of the sum of squared deltas\ndistance = tf.sqrt(tf.reduce_sum(squared_deltas, axis=1))\n\n# This is the nearest neighbor\nnearest_neighbor = tf.arg_min(distance, 0)",
"Now lets create a session and run the graph over the entire test set",
"# A variable to keep track of the accuracy\naccuracy = 0\n\n# Initializing global variables\ninit = tf.global_variables_initializer()\n\n# Create a session to run the graph\nwith tf.Session() as sess:\n # Run initialization\n sess.run(init)\n \n # Loop over all the test data\n for i in range(len(X_test_input)):\n\n # Get the nearest neighbor, i.e the row number/example number from the training dataset\n nearest_neighbor_index = sess.run(nearest_neighbor, \n feed_dict = { x_train_tensor: X_train_input, \n x_test_tensor: X_test_input[i,:]\n })\n\n # Extract the predicted label\n predicted_label = np.argmax(Y_train_input[nearest_neighbor_index,:])\n\n # Get the actual label and compare it \n #print(\"Example: %d\\t\"%i, \"Predicted: %d\\t\"%predicted_label, \"Actual: %d\"%np.argmax(Y_test_input[i]))\n \n # Calculate accuracy\n if predicted_label == np.argmax(Y_test_input[i]):\n accuracy += 1\n \n print(\"Classification done. Accuracy: %f\"%(accuracy/len(X_test_input)))",
"Now we can extend this to more than just one nearest neighbor and feed \"k\" in at runtime.",
"# Lets create a placeholder for k\nk = tf.placeholder(tf.int32)\n\n# Distances computation doesn't change.\n\n# Create a new variable that holds a vector of nearest neighbors\n# Use the top k function but flip the distance scores with a engative sign\n# K is fed in\n# The top k returns both values and indicies as a tuple. Using [1] only gives us the indicies\nnearest_neighbors = tf.nn.top_k(tf.negative(distance), k=k)[1]\n\n# Rest of the code remains mostly the same.\n# Create the session as usual, initialize and run the the graph\n# Note that now k has to be \"fed\" into the graph\naccuracy = 0\n\n# Initializing global variables\ninit = tf.global_variables_initializer()\n\nwith tf.Session() as sess:\n # Run initialization\n sess.run(init)\n \n # Loop over all the test data\n for i in range(len(X_test_input)):\n # Get the nearest neighbor, i.e the row number/example number from the training dataset\n nearest_neighbor_indices = sess.run( nearest_neighbors,\n feed_dict = { x_train_tensor: X_train_input, \n x_test_tensor: X_test_input[i,:],\n k:5})\n \n # Extract the predicted labels by summing the different predictions and picking the one with highest votes\n # Note that in case of equal votes, ideally the label of the nearest neighbor must be picked.\n # For the demonstration purposes, it is ignored & argmax picks the first highest element\n predicted_label = np.argmax(np.sum(Y_train_input[nearest_neighbor_indices,:], 0))\n \n # Get the actual label and compare it \n #print(\"Example: %d\\t\"%i, \"Predicted: %d\\t\"%predicted_label, \"Actual: %d\"%np.argmax(Y_test_input[i]))\n \n # Calculate accuracy\n if predicted_label == np.argmax(Y_test_input[i]):\n accuracy += 1 \n print(\"Classification done. Accuracy: %f\"%(accuracy/len(X_test_input)))"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
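The broadcast subtract, square, reduce-sum, sqrt, and argmin pipeline built in this notebook is the entire classifier. The same steps in plain NumPy, with `nearest_neighbor` as a hypothetical stand-in for the TensorFlow graph:

```python
import numpy as np

def nearest_neighbor(X_train, y_train, x):
    """Label of the single closest training row under the L2 distance."""
    deltas = X_train - x                        # broadcast: x subtracted from every row
    dists = np.sqrt((deltas ** 2).sum(axis=1))  # one distance per training example
    return y_train[np.argmin(dists)]

X = np.array([[0.0, 0.0], [10.0, 10.0], [0.5, 0.0]])
y = np.array([0, 1, 0])
pred = nearest_neighbor(X, y, np.array([0.2, 0.1]))  # closest row is X[0] -> label 0
```

Because the square root is monotonic, dropping `np.sqrt` (or `tf.sqrt`) leaves the argmin, and therefore the prediction, unchanged.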
sdpython/ensae_teaching_cs
|
_doc/notebooks/td1a_algo/td1a_correction_session4_5_jaccard.ipynb
|
mit
|
[
"1A.algo - distance de Jaccard (dictionnaires) - correction\nDe la distance de Jaccard à la distance de Levenshtein.",
"from jyquickhelper import add_notebook_menu\nadd_notebook_menu()",
"Exercice 1 : Constuire l'ensemble des lettres supprimées et ajoutées\nDeux mots n'ont pas forcément la même longueur, il est donc difficile de les comparer lettre à lettre. De même, on ne doit pas tenir compte de l'ordre des lettres dans chaque mot. Cette contrainte peut paraître plus difficile à mettre en oeuvre. Pourtant, si on construit un résultat intermédiaire : le décompte de chaque lettre dans un mot.",
"def compte_lettre(mot):\n d = {}\n for c in mot:\n d[c] = d.get(c,0) + 1\n return d\n\ncompte_lettre(\"lettre\"), compte_lettre(\"etre\")",
"Une lettre est ajoutée ou supprimée entre deux mots si son décompe est différent dans les deux dictionnaires. On s'en sert pour contruire la liste des lettres supprimées et ajoutées. Cela revient à construire la différence entre deux dictionnaires.",
"mot1 = \"lettre\"\nmot2 = \"etres\"\n\nd1 = compte_lettre(mot1)\nd2 = compte_lettre(mot2)\n\nsuppression = {}\nfor l in d1:\n c1 = d1[l]\n c2 = d2.get(l, 0) # la lettre l n'appartient pas forcément au second mot\n if c2 != c1:\n suppression[l] = c2 - c1\n \najout = {}\nfor l in d2:\n if l not in d1:\n c1 = 0\n c2 = d2[l]\n if c2 != c1:\n ajout[l] = c2 - c1 \n else:\n # on a déjà compté les lettres présentes dans les deux mots\n # lors de la première boucle\n pass\n \nsuppression, ajout ",
"Exercice 2 : écrire une fonction qui calcule la distance de Jaccard\nOn copie et on colle le code précédent pour créer la distance.",
"def jaccard(mot1, mot2): \n d1 = compte_lettre(mot1)\n d2 = compte_lettre(mot2)\n\n suppression = {}\n for l in d1:\n c1 = d1[l]\n c2 = d2.get(l, 0) # la lettre l n'appartient pas forcément au second mot\n if c2 != c1:\n suppression[l] = c2 - c1\n\n ajout = {}\n for l in d2:\n if l not in d1:\n c1 = 0\n c2 = d2[l]\n if c2 != c1:\n ajout[l] = c2 - c1 \n else:\n # on a déjà compté les lettres présentes dans les deux mots\n # lors de la première boucle\n pass\n \n dist = sum(abs(x) for x in suppression.values()) + sum(abs(x) for x in ajout.values())\n return dist\n\njaccard(\"lettre\", \"etre\"), jaccard(\"lettre\", \"etres\")\n\njaccard(\"marie\", \"aimer\")\n\njaccard(\"hôpital\", \"hospital\")",
"Exercice 3 : calculer la matrice des distances\nTout d'abord, on découpe un texte en mot. On remplace les tirets en espaces.",
"texte = \"à combien sont ces six saucissons-ci, ces six saucisson-ci sont à six sous\".replace(\"-\", \" \").split()\ntexte",
"liste de listes",
"distl = [ [ jaccard(texte[i], texte[j]) for i in range(len(texte))] for j in range(len(texte))]\ndistl",
"dictionnaire",
"distd = { (w,y): jaccard(w,y) for w in texte for y in texte}\ndistd",
"On compare le nombre de coefficients :",
"sum(len(_) for _ in distl)\n\nlen(distd)",
"Le dictionnaire paraît une meilleure option en terme de nombre de coefficients. Mais en terme d'espace mémoire occupé, le dictionnaire est largement plus conséquent.",
"import sys\nsys.getsizeof(distl), sys.getsizeof(distd)",
"Pour ce petit exemple, la matrice paraît une bonne option mais lorsque les mots sont répétés un grand nombre de fois :",
"texte_grand = texte * 40\n\ndistl = [ [ jaccard(texte_grand[i], texte_grand[j]) for i in range(len(texte_grand))] for j in range(len(texte_grand))]\ndistd = { (w,y): jaccard(w,y) for w in texte_grand for y in texte_grand}\nsum(len(_) for _ in distl), len(distd)\n\nsys.getsizeof(distl), sys.getsizeof(distd)",
"Le dictionnaire permet également de vérifier aisément qu'un calcul a déjà été effectué afin de gagner du temps et de ne pas le refaire.",
"distd = {}\nfor w in texte_grand:\n for y in texte_grand:\n if (w,y) not in distd:\n distd[w,y] = jaccard(w,y) \n distd[y,w] = distd[w,y]\nlen(distd) "
] |
repo_name: jGaboardi/AAG_16
path: .ipynb_checkpoints/AAG_2016-checkpoint.ipynb
license: lgpl-3.0
[
"# GNU LESSER GENERAL PUBLIC LICENSE\n# Version 3, 29 June 2007\n# Copyright (C) 2007 Free Software Foundation, Inc. <http://fsf.org/>\n# Everyone is permitted to copy and distribute verbatim copies\n# of this license document, but changing it is not allowed.\n\n# James Gaboardi, 2016",
"Automating Multiple Single-Objective Spatial Optimization Models for Efficiency and Reproducibility\n\n\nJames D. Gaboardi | Association of American Geographers 2016\n\n\nFlorida State University | Department of Geography\n\nOutline\n\n\nBackground & Information\n\n\nModels\n\nPMP\nPCP\nCentDian\nPMCP Method\n\n\n\nData & Processing\n\n\nSolutions\n\n\nVisualizations\n\n\nFuture Work\n\nCOIN-OR\n\n\n\n\nInitial Imports",
"import IPython.display as IPd\n\n# Local path on user's machine\npath = '/Users/jgaboardi/AAG_16/Data/'",
"Background & Information\n$\\Rightarrow$ Automating solutions for p-median and p-center problems with p={n(p)} facilities\n$\\Rightarrow$ Compare coverage and costs numerically and visually \nPySAL 1.11.0\nPython Spatial Analysis Library\n[https://www.pysal.readthedocs.org]\nSergio Rey at Arizona State University leads the PySAL project. [https://geoplan.asu.edu/people/sergio-j-rey]\n\"PySAL is an open source library of spatial analysis functions written in Python intended to support the development of high level applications. PySAL is open source under the BSD License.\" [https://pysal.readthedocs.org/en/latest/]\nI will be only be demonstrating a portion of the functionality in PySAL.Network, but there are many other classes and functions for statistical spatial analysis within PySAL. \nPySAL.Network\nPySAL.Network was principally developed by Jay Laura at Arizona State Universty and the United States Geological Suvery. [https://geoplan.asu.edu/people/jay-laura]\nGurobi 6.5.0\nRelatively new company founded by optimization experts formerly at key positions with CPLEX.\n[http://www.gurobi.com] [http://www.gurobi.com/company/about-gurobi]\ngurobipy\nPython wrapper for Gurobi\nNumPy 1.10.4\n\"NumPy is the fundamental package for scientific computing with Python.\" [http://www.numpy.org]\nShapely 1.5.13\n\"Python package for manipulation and analysis of geometric objects in the Cartesian plane.\" [https://github.com/Toblerity/Shapely]\nGeoPandas 0.1.1\n\"GeoPandas is an open source project to make working with geospatial data in python easier.\" [http://geopandas.org]\nPandas 0.17.1\n\"pandas is an open source, BSD-licensed library providing high-performance, easy-to-use data structures and data analysis tools...\" [http://pandas.pydata.org]\nBokeh 0.11.1\n\"Bokeh is a Python interactive visualization library that targets modern web browsers for presentation.\" [http://bokeh.pydata.org/en/latest/]\n\nModels\nThe p-Median Problem\nThe objective of the p-median 
problem, also known as the minisum problem and the PMP, is to minimize the total weighted cost while siting [p] facilities to serve all demand/client nodes. It was originally proposed by Hakimi (1964) and is well-studied in Geography, Operations Research, Mathematics, etc. In this particular project the network-based vertex PMP is used, meaning the cost will be calculated on a road network and solutions will be determined based on discrete locations. Cost is generally defined as either travel time or distance and it is the latter in this project. Population (demand) is utilized as a weight at each client node. The average cost can be calculated by dividing the minimized total cost by the total demand.\nFor more information refer to the references section.\nMinimize\n    $\\displaystyle {Z} = {\\sum_{i \\in n}\\sum_{j \\in m} a_i c_{ij} x_{ij}}$\nSubject to\n    $\\displaystyle\\sum_{j\\in m} x_{ij} = 1,$ $\\forall i \\in n$\n    $\\displaystyle\\sum_{j \\in m} y_j = p$\n    $x_{ij} - y_j \\geq 0,$ $\\forall i \\in n, j \\in m$\n    $x_{ij}, y_j \\in \\{0,1\\}$ $\\forall i \\in n, j \\in m$\nwhere\n − $i$ = a specific origin\n − $j$ = a specific destination\n − $n$ = the set of origins\n − $m$ = the set of destinations\n − $a_i$ = weight at each node\n − $c_{ij}$ = travel costs between nodes\n − $x_{ij}$ = the decision variable at each node in the matrix\n − $y_j$ = nodes chosen as service facilities\n − $p$ = the number of facilities to be sited\n\nAdapted from:\n- Daskin, M. S. 1995. Network and Discrete Location: Models, Algorithms, and Applications. Hoboken, NJ, USA: John Wiley & Sons, Inc.\n\nThe p-Center Problem\nThe objective of the p-center problem, also known as the minimax problem and the PCP, is to 
In this particular project the network-based vertice PCP is used meaning the cost will be calculated on a road network and solutions will be determined based on discrete locations. Cost is generally defined as either travel time or distance and it is the latter in the project.\nFor more information refer to references section.\nMinimize\n $W$\nSubject to\n $\\displaystyle\\sum_{j\\in m} x_{ij} = 1,$ $\\forall i \\in n$\n $\\displaystyle\\sum_{j \\in m} y_j = p$\n $x_{ij} - y_j \\geq 0,$ $\\forall i\\in n, j \\in m$\n $\\displaystyle W \\geq \\sum_{j \\in m} c_{ij} x_{ij}$ $\\forall i \\in n$\n $x_{ij}, y_j \\in {0,1}$ $\\forall i \\in n, j \\in m$\nwhere\n − $W$ = the worst case cost between a client and a service node\n − $i$ = a specific origin\n − $j$ = a specific destination\n − $n$ = the set of origins\n − $m$ = the set of destinations\n − $a_i$ = weight at each node\n − $c_{ij}$ = travel costs between nodes\n − $x_{ij}$ = the decision variable at each node in the matrix\n − $y_j$ = nodes chosen as service facilities\n − $p$ = the number of facilities to be sited\n\nAdapted from:\n- Daskin, M. S. 1995. Network and Discrete Location: Models, Algorithms, and Applications. Hoboken, NJ, USA: John Wiley & Sons, Inc.\n\nThe p-CentDian Problem\nThe p-CentDian Problem was first descibed by Halpern (1976). It is a combination of the p-median problem and the p-center problem with a dual objective of minimizing both the worst case scenario and the total travel distance. The objective used for the model in this demonstration is the average of (1) the p-center objective function and (2) the p-median objective function divided by the total demand. An alternative formulation is the p-$\\lambda$-CentDian Problem, where ( $\\lambda$ ) represents the weight attributed to the p-center objective function and (1 - $\\lambda$) represents the weight attributed to the p-median objective function which was was proposed by Pérez-Brito, et al (1997). 
\nFor more information refer to the references section.\nMinimize\n    $ \\displaystyle {W + {Z \\over \\sum_{i \\in n} a_i} \\over 2}$\nSubject to\n    $\\displaystyle\\sum_{j\\in m} x_{ij} = 1,$ $\\forall i \\in n$\n    $\\displaystyle\\sum_{j \\in m} y_j = p$\n    $x_{ij} - y_j \\geq 0,$ $\\forall i\\in n, j \\in m$\n    $\\displaystyle W \\geq \\sum_{j \\in m} c_{ij} x_{ij}$ $\\forall i \\in n$\n    $x_{ij}, y_j \\in \\{0,1\\}$ $\\forall i \\in n, j \\in m$\nwhere\n − $W$ = the maximum travel cost between client and service nodes\n − $Z$ = the minimized total travel cost $\\big({\\sum_{i \\in n}\\sum_{j \\in m} a_i c_{ij} x_{ij}}\\big)$\n − $i$ = a specific origin\n − $j$ = a specific destination\n − $n$ = the set of origins\n − $m$ = the set of destinations\n − $a_i$ = weight at each node\n − $c_{ij}$ = travel costs between nodes\n − $x_{ij}$ = the decision variable at each node in the matrix\n − $y_j$ = nodes chosen as service facilities\n − $p$ = the number of facilities to be sited\n\nAdapted from:\n\nHalpern, J. 1976. The Location of a Center-Median Convex Combination on an Undirected Tree*. Journal of Regional Science 16 (2):237–245\n\n\nThe PMCP Method\n$\\Rightarrow$ solve the p-median problem and the p-center problem concurrently to determine whether optimal locations can be sited with equivalent [p]\n$\\Rightarrow$ \"poor man's\" p-CentDian Problem?\n\n\nautomated & efficient decision making for those who don't have access to multiple-objective capable solvers\n\n\nwhat it is:\n\na comparison to determine equivalent site selection of single objective solutions\nprobably best used with low cost sites\nan opportunity for finding optimal solutions without sacrificing either efficiency or equity\n\n\n\nwhat it is not:\n\nan optimization solution with multiple objective functions\ncapable of a true 'best solution' trade-off between efficiency and equity\nguaranteed to find identical solutions\n\n\n\nWorkflow",
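The three objectives above, and the PMCP comparison, can be illustrated on a toy cost matrix by enumerating the candidate facility subsets directly. This is a hedged sketch for intuition only — the notebook solves the actual integer programs with Gurobi below; `pmp_obj`, `pcp_obj`, `centdian_obj`, and `brute_force` are hypothetical helpers, not part of the original code.

```python
from itertools import combinations

def pmp_obj(sites, cost, weight):
    # Z: total weighted cost, each client served by its nearest open site
    return sum(w * min(row[j] for j in sites) for row, w in zip(cost, weight))

def pcp_obj(sites, cost, weight):
    # W: worst-case client-to-nearest-open-site cost
    return max(min(row[j] for j in sites) for row in cost)

def centdian_obj(sites, cost, weight):
    # (W + Z / total demand) / 2, per the p-CentDian objective above
    z = pmp_obj(sites, cost, weight)
    w = pcp_obj(sites, cost, weight)
    return (w + z / float(sum(weight))) / 2.0

def brute_force(objective, cost, weight, p):
    # enumerate all size-p subsets of candidate sites
    m = len(cost[0])
    return min((objective(s, cost, weight), s)
               for s in combinations(range(m), p))

cost = [[1, 4, 5],   # client 0 to candidate sites 0..2
        [3, 1, 2],   # client 1
        [4, 2, 1]]   # client 2
weight = [2, 1, 1]   # demand a_i

for p in (1, 2):
    _, med = brute_force(pmp_obj, cost, weight, p)
    _, cen = brute_force(pcp_obj, cost, weight, p)
    # the PMCP method keeps p only when both models pick the same sites
    print(p, med, cen, med == cen)
```

Enumeration is exponential in [p], which is exactly why the notebook hands the real instances to an integer-programming solver.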
"# Conceptual Model Workflow\nworkflow = IPd.Image(path+'/AAG_16.png')\nworkflow",
"Data & Processing\nProcess Imports",
"import pysal as ps\nimport geopandas as gpd\nimport numpy as np\nimport networkx as nx\nimport shapefile as shp\nfrom shapely.geometry import Point\nimport shapely\nfrom collections import OrderedDict\nimport pandas as pd\nimport qgrid\nqgrid.nbinstall(overwrite=True) # copies javascript dependencies to your /nbextensions folder\nqgrid.set_defaults(remote_js=True)\nimport gurobipy as gbp\nimport time\nfrom bokeh.plotting import figure, show, ColumnDataSource\nfrom bokeh.io import output_notebook\nfrom bokeh.models import (HoverTool, BoxAnnotation, GeoJSONDataSource, \n GMapPlot, GMapOptions, ColumnDataSource, Circle, \n DataRange1d, PanTool, WheelZoomTool, BoxSelectTool)\nimport utm\nfrom cylp.cy import CyCbcModel, CyClpSimplex\n%pylab inline\n\nfigsize(15,15)",
"Define the function to calculate the cost matrix and convert to miles",
"def c_s_matrix(): # Define Client to Service Matrix Function\n global All_Dist_MILES # in meters\n All_Neigh_Dist = ntw.allneighbordistances(\n sourcepattern=ntw.pointpatterns['Rand_Points_CLIENT'],\n destpattern=ntw.pointpatterns['Rand_Points_SERVICE'])\n All_Dist_MILES = All_Neigh_Dist * 0.000621371 # to miles",
"Define the function to solve the p-Median + p-Center Problems concurrently",
"def Gurobi_PMCP(sites, Ai, AiSum, All_Dist_Miles):\n \n # Define Global Variables\n global pydf_M\n global selected_M\n global NEW_Records_PMP\n global VAL_PMP\n global AVG_PMP\n \n global pydf_C\n global selected_C\n global NEW_Records_PCP\n global VAL_PCP\n \n global pydf_CentDian\n global selected_CentDian\n global NEW_Records_Pcentdian\n global VAL_CentDian\n \n global pydf_MC\n global VAL_PMCP\n global p_dens\n \n for p in range(1, sites+1):\n\n # DATA\n # [p] --> sites\n # Demand --> Ai\n # Demand Sum --> AiSum\n # Travel Costs\n Cij = All_Dist_MILES\n # Weighted Costs\n Sij = Ai * Cij\n # Total Client and Service nodes\n client_nodes = range(len(Sij))\n service_nodes = range(len(Sij[0]))\n\n ##################################################################\n # PMP\n t1_PMP = time.time()\n \n # Create Model, Add Variables, & Update Model\n # Instantiate Model\n mPMP = gbp.Model(' -- p-Median -- ')\n # Turn off Gurobi's output\n mPMP.setParam('OutputFlag',False)\n\n # Add Client Decision Variables (iXj)\n client_var = []\n for orig in client_nodes:\n client_var.append([])\n for dest in service_nodes:\n client_var[orig].append(mPMP.addVar(vtype=gbp.GRB.BINARY,\n lb=0,\n ub=1,\n obj=Sij[orig][dest], \n name='x'+str(orig+1)+'_'+str(dest+1)))\n\n # Add Service Decision Variables (j)\n serv_var = []\n for dest in service_nodes:\n serv_var.append([])\n serv_var[dest].append(mPMP.addVar(vtype=gbp.GRB.BINARY,\n lb=0,\n ub=1,\n name='y'+str(dest+1)))\n\n # Update the model\n mPMP.update()\n\n # 3. Set Objective Function\n mPMP.setObjective(gbp.quicksum(Sij[orig][dest]*client_var[orig][dest] \n for orig in client_nodes for dest in service_nodes), \n gbp.GRB.MINIMIZE)\n\n\n # 4. 
Add Constraints\n # Assignment Constraints\n for orig in client_nodes:\n mPMP.addConstr(gbp.quicksum(client_var[orig][dest] \n for dest in service_nodes) == 1)\n # Opening Constraints\n for orig in service_nodes:\n for dest in client_nodes:\n mPMP.addConstr((serv_var[orig][0] - client_var[dest][orig] >= 0))\n\n # Facility Constraint\n mPMP.addConstr(gbp.quicksum(serv_var[dest][0] for dest in service_nodes) == p)\n\n # 5. Optimize and Print Results\n # Solve\n mPMP.optimize()\n\n # Write LP\n mPMP.write(path+'LP_Files/PMP'+str(p)+'.lp')\n t2_PMP = time.time()-t1_PMP\n\n # Record and Display Results\n print '\\n*************************************************************************'\n selected_M = OrderedDict()\n dbf1 = ps.open(path+'Snapped/SERVICE_Snapped.dbf')\n NEW_Records_PMP = []\n for v in mPMP.getVars():\n if 'x' in v.VarName:\n pass\n elif v.x > 0:\n var = '%s' % v.VarName\n selected_M[var]=(u\"\\u2588\")\n for i in range(dbf1.n_records):\n if var in dbf1.read_record(i):\n x = dbf1.read_record(i)\n NEW_Records_PMP.append(x)\n else:\n pass\n print ' | ', var\n \n pydf_M = pydf_M.append(selected_M, ignore_index=True)\n \n # Instantiate Shapefile\n SHP_Median = shp.Writer(shp.POINT)\n # Add Points\n for idy,idx,x,y in NEW_Records_PMP:\n SHP_Median.point(float(x), float(y))\n # Add Fields\n SHP_Median.field('y_ID')\n SHP_Median.field('x_ID')\n SHP_Median.field('LAT')\n SHP_Median.field('LON')\n # Add Records\n for idy,idx,x,y in NEW_Records_PMP:\n SHP_Median.record(idy,idx,x,y)\n # Save Shapefile\n SHP_Median.save(path+'Results/Selected_Locations_Pmedian'+str(p)+'.shp') \n\n print ' | Selected Facility Locations -------------- ^^^^ '\n print ' | Candidate Facilities [p] ----------------- ', len(selected_M)\n val_m = mPMP.objVal\n VAL_PMP.append(round(val_m, 3))\n print ' | Objective Value (miles) ------------------ ', val_m\n avg_m = float(mPMP.objVal)/float(AiSum)\n AVG_PMP.append(round(avg_m, 3))\n print ' | Avg. 
Value / Client (miles) -------------- ', avg_m\n print ' | Real Time to Optimize (sec.) ------------- ', t2_PMP\n print '*************************************************************************'\n print ' -- The p-Median Problem -- '\n print ' [p] = ', str(p), '\\n\\n'\n \n \n ##################################################################\n # PCP\n t1_PCP = time.time()\n \n # Instantiate P-Center Model\n mPCP = gbp.Model(' -- p-Center -- ')\n \n # Add Client Decision Variables (iXj)\n client_var_PCP = []\n for orig in client_nodes:\n client_var_PCP.append([])\n for dest in service_nodes:\n client_var_PCP[orig].append(mPCP.addVar(vtype=gbp.GRB.BINARY,\n lb=0,\n ub=1,\n obj=Cij[orig][dest], \n name='x'+str(orig+1)+'_'+str(dest+1)))\n\n # Add Service Decision Variables (j)\n serv_var_PCP = []\n for dest in service_nodes:\n serv_var_PCP.append([])\n serv_var_PCP[dest].append(mPCP.addVar(vtype=gbp.GRB.BINARY,\n lb=0,\n ub=1,\n name='y'+str(dest+1)))\n\n # Add the Maximum travel cost variable\n W = mPCP.addVar(vtype=gbp.GRB.CONTINUOUS,\n lb=0.,\n name='W') \n \n # Update the model\n mPCP.update()\n\n # 3. Set Objective Function\n mPCP.setObjective(W, gbp.GRB.MINIMIZE)\n\n # 4. Add Constraints\n # Assignment Constraints\n for orig in client_nodes:\n mPCP.addConstr(gbp.quicksum(client_var_PCP[orig][dest] \n for dest in service_nodes) == 1)\n # Opening Constraints\n for orig in service_nodes:\n for dest in client_nodes:\n mPCP.addConstr((serv_var_PCP[orig][0] - client_var_PCP[dest][orig] >= 0))\n\n # Add Maximum travel cost constraints\n for orig in client_nodes:\n mPCP.addConstr(gbp.quicksum(Cij[orig][dest]*client_var_PCP[orig][dest]\n for dest in service_nodes) - W <= 0)\n \n # Facility Constraint\n mPCP.addConstr(gbp.quicksum(serv_var_PCP[dest][0] for dest in service_nodes) == p)\n\n # 5. 
Optimize and Print Results\n # Solve\n mPCP.optimize()\n\n # Write LP\n mPCP.write(path+'LP_Files/PCP'+str(p)+'.lp')\n t2_PCP = time.time()-t1_PCP\n\n # Record and Display Results\n print '\\n*************************************************************************'\n selected_C = OrderedDict()\n dbf1 = ps.open(path+'Snapped/SERVICE_Snapped.dbf')\n NEW_Records_PCP = []\n for v in mPCP.getVars():\n if 'x' in v.VarName:\n pass\n elif 'W' in v.VarName:\n pass\n elif v.x > 0:\n var = '%s' % v.VarName\n selected_C[var]=(u\"\\u2588\")\n for i in range(dbf1.n_records):\n if var in dbf1.read_record(i):\n x = dbf1.read_record(i)\n NEW_Records_PCP.append(x)\n else:\n pass\n print ' | ', var, ' '\n pydf_C = pydf_C.append(selected_C, ignore_index=True)\n \n # Instantiate Shapefile\n SHP_Center = shp.Writer(shp.POINT)\n # Add Points\n for idy,idx,x,y in NEW_Records_PCP:\n SHP_Center.point(float(x), float(y))\n # Add Fields\n SHP_Center.field('y_ID')\n SHP_Center.field('x_ID')\n SHP_Center.field('LAT')\n SHP_Center.field('LON')\n # Add Records\n for idy,idx,x,y in NEW_Records_PCP:\n SHP_Center.record(idy,idx,x,y)\n # Save Shapefile\n SHP_Center.save(path+'Results/Selected_Locations_Pcenter'+str(p)+'.shp') \n\n print ' | Selected Facility Locations -------------- ^^^^ '\n print ' | Candidate Facilities [p] ----------------- ', len(selected_C)\n val_c = mPCP.objVal\n VAL_PCP.append(round(val_c, 3))\n print ' | Objective Value (miles) ------------------ ', val_c\n print ' | Real Time to Optimize (sec.) 
------------- ', t2_PCP\n print '*************************************************************************'\n print ' -- The p-Center Problem -- '\n print ' [p] = ', str(p), '\\n\\n'\n\n ###########################################################################\n # p-CentDian\n \n t1_centdian = time.time()\n \n # Instantiate P-Center Model\n mPcentdian = gbp.Model(' -- p-CentDian -- ')\n \n # Add Client Decision Variables (iXj)\n client_var_CentDian = []\n for orig in client_nodes:\n client_var_CentDian.append([])\n for dest in service_nodes:\n client_var_CentDian[orig].append(mPcentdian.addVar(vtype=gbp.GRB.BINARY,\n lb=0,\n ub=1,\n obj=Cij[orig][dest], \n name='x'+str(orig+1)+'_'+str(dest+1)))\n\n # Add Service Decision Variables (j)\n serv_var_CentDian = []\n for dest in service_nodes:\n serv_var_CentDian.append([])\n serv_var_CentDian[dest].append(mPcentdian.addVar(vtype=gbp.GRB.BINARY,\n lb=0,\n ub=1,\n name='y'+str(dest+1)))\n\n # Add the Maximum travel cost variable\n W_CD = mPcentdian.addVar(vtype=gbp.GRB.CONTINUOUS,\n lb=0.,\n name='W') \n \n # Update the model\n mPcentdian.update()\n\n # 3. Set Objective Function\n M = gbp.quicksum(Sij[orig][dest]*client_var_CentDian[orig][dest] \n for orig in client_nodes for dest in service_nodes)\n \n Zt = M/AiSum\n \n mPcentdian.setObjective((W_CD + Zt) / 2, gbp.GRB.MINIMIZE)\n\n # 4. 
Add Constraints\n # Assignment Constraints\n for orig in client_nodes:\n mPcentdian.addConstr(gbp.quicksum(client_var_CentDian[orig][dest] \n for dest in service_nodes) == 1)\n # Opening Constraints\n for orig in service_nodes:\n for dest in client_nodes:\n mPcentdian.addConstr((serv_var_CentDian[orig][0] - client_var_CentDian[dest][orig] \n >= 0))\n\n # Add Maximum travel cost constraints\n for orig in client_nodes:\n mPcentdian.addConstr(gbp.quicksum(Cij[orig][dest]*client_var_CentDian[orig][dest]\n for dest in service_nodes) - W_CD <= 0)\n \n # Facility Constraint\n mPcentdian.addConstr(gbp.quicksum(serv_var_CentDian[dest][0] for dest in service_nodes) \n == p)\n\n # 5. Optimize and Print Results\n # Solve\n mPcentdian.optimize()\n\n # Write LP\n mPcentdian.write(path+'LP_Files/CentDian'+str(p)+'.lp')\n t2_centdian = time.time()-t1_centdian\n\n # Record and Display Results\n print '\\n*************************************************************************'\n selected_CentDian = OrderedDict()\n dbf1 = ps.open(path+'Snapped/SERVICE_Snapped.dbf')\n NEW_Records_Pcentdian = []\n for v in mPcentdian.getVars():\n if 'x' in v.VarName:\n pass\n elif 'W' in v.VarName:\n pass\n elif v.x > 0:\n var = '%s' % v.VarName\n selected_CentDian[var]=(u\"\\u2588\")\n for i in range(dbf1.n_records):\n if var in dbf1.read_record(i):\n x = dbf1.read_record(i)\n NEW_Records_Pcentdian.append(x)\n else:\n pass\n print ' | ', var, ' '\n pydf_CentDian = pydf_CentDian.append(selected_CentDian, ignore_index=True)\n \n # Instantiate Shapefile\n SHP_CentDian = shp.Writer(shp.POINT)\n # Add Points\n for idy,idx,x,y in NEW_Records_Pcentdian:\n SHP_CentDian.point(float(x), float(y))\n # Add Fields\n SHP_CentDian.field('y_ID')\n SHP_CentDian.field('x_ID')\n SHP_CentDian.field('LAT')\n SHP_CentDian.field('LON')\n # Add Records\n for idy,idx,x,y in NEW_Records_Pcentdian:\n SHP_CentDian.record(idy,idx,x,y)\n # Save Shapefile\n 
SHP_CentDian.save(path+'Results/Selected_Locations_CentDian'+str(p)+'.shp') \n\n print ' | Selected Facility Locations -------------- ^^^^ '\n print ' | Candidate Facilities [p] ----------------- ', len(selected_CentDian)\n val_cd = mPcentdian.objVal\n VAL_CentDian.append(round(val_cd, 3))\n print ' | Objective Value (miles) ------------------ ', val_cd\n print ' | Real Time to Optimize (sec.) ------------- ', t2_centdian\n print '*************************************************************************'\n print ' -- The p-CentDian Problem -- '\n print ' [p] = ', str(p), '\\n\\n'\n \n ###########################################################################\n # p-Median + p-Center Method\n \n # Record solutions that record identical facility selection\n if selected_M.keys() == selected_C.keys() == selected_CentDian.keys():\n \n pydf_MC = pydf_MC.append(selected_C, ignore_index=True) # append PMCP dataframe\n p_dens.append('p='+str(p)) # density of [p] \n VAL_PMCP.append([round(val_m,3), round(avg_m,3), \n round(val_c,3), round(val_cd,3)]) # append PMCP list\n \n # Instantiate Shapefile\n SHP_PMCP = shp.Writer(shp.POINT)\n # Add Points\n for idy,idx,x,y in NEW_Records_PCP:\n SHP_PMCP.point(float(x), float(y))\n # Add Fields\n SHP_PMCP.field('y_ID')\n SHP_PMCP.field('x_ID')\n SHP_PMCP.field('LAT')\n SHP_PMCP.field('LON')\n # Add Records\n for idy,idx,x,y in NEW_Records_PCP:\n SHP_PMCP.record(idy,idx,x,y)\n # Save Shapefile\n SHP_PMCP.save(path+'Results/Selected_Locations_PMCP'+str(p)+'.shp')\n else:\n pass ",
"Reproject the street network with GeoPandas",
"STREETS_Orig = gpd.read_file(path+'Waverly_Trim/Waverly.shp')\nSTREETS = gpd.read_file(path+'Waverly_Trim/Waverly.shp')\nSTREETS.to_crs(epsg=2779, inplace=True) # NAD83(HARN) / Florida North\nSTREETS.to_file(path+'WAVERLY/WAVERLY.shp')\nSTREETS[:5]",
"Instantiate Network and read in WAVERLY.shp",
"ntw = ps.Network(path+'WAVERLY/WAVERLY.shp')\nshp_W = ps.open(path+'WAVERLY/WAVERLY.shp')",
"Create Buffer of 200 meters",
"buff = STREETS.buffer(200) #Buffer\nbuff[:5]",
"Plot Buffers of Individual Streets",
"buff.plot()",
"Create a Unary Union of the individual street buffers",
"buffU = buff.unary_union #Buffer Union\nbuff1 = gpd.GeoSeries(buffU)\nbuff1.crs = STREETS.crs\nBuff = gpd.GeoDataFrame(buff1, crs=STREETS.crs)\nBuff.columns = ['geometry']\nBuff",
"Plot the unary union buffer",
"Buff.plot()",
"Create 1000 random ppoints within the bounds of WAVERLY.shp",
"np.random.seed(352)\nx = np.random.uniform(shp_W.bbox[0], shp_W.bbox[2], 1000)\nnp.random.seed(850)\ny = np.random.uniform(shp_W.bbox[1], shp_W.bbox[3], 1000) \ncoords0= zip(x,y)\ncoords = [shapely.geometry.Point(i) for i in coords0]\nRand = gpd.GeoDataFrame(coords)\nRand.crs = STREETS.crs\nRand.columns = ['geometry']\nRand[:5]",
"Plot the 1000 random",
"Rand.plot()",
"Create GeoPandas DF of the random points within the Unary Buffer",
"Inter = [Buff['geometry'].intersection(p) for p in Rand['geometry']]\nINTER = gpd.GeoDataFrame(Inter, crs=STREETS.crs)\nINTER.columns = ['geometry']\nINTER[:5]",
"Plot the points within the Unary Buffer",
"INTER.plot()",
"Add only intersecting records to a list",
"# Add records that are points within the buffer\npoint_in = []\nfor p in INTER['geometry']:\n if type(p) == shapely.geometry.point.Point:\n point_in.append(p)\npoint_in[:5]",
"Keep the first 100 for clients and the last 15 for service facilities",
"CLIENT = gpd.GeoDataFrame(point_in[:100], crs=STREETS.crs)\nCLIENT.columns = ['geometry']\nSERVICE = gpd.GeoDataFrame(point_in[-15:], crs=STREETS.crs)\nSERVICE.columns = ['geometry']\nCLIENT.to_file(path+'CLIENT')\nSERVICE.to_file(path+'SERVICE')\n\nCLIENT[:5]\n\nSERVICE[:5]",
"Plot the Unary Union, Simulated Clients, Simulated Service, and Streets",
"Buff.plot()\nSTREETS.plot()\nCLIENT.plot()\nSERVICE.plot(colormap=True)",
"Instaniate non-solution graphs to be drawn",
"g = nx.Graph() # Roads & Nodes\ng1 = nx.MultiGraph() # Edges and Vertices\nGRAPH_client = nx.Graph() # Clients \ng_client = nx.Graph() # Snapped Clients\nGRAPH_service = nx.Graph() # Service\ng_service = nx.Graph() # Snapped Service",
"Instantiate and fill Client and Service point dictionaries",
"points_client = {} \npoints_service = {}\n\nCLI = ps.open(path+'CLIENT/CLIENT.shp')\nfor idx, coords in enumerate(CLI):\n GRAPH_client.add_node(idx)\n points_client[idx] = coords\n GRAPH_client.node[idx] = coords\n \nSER = ps.open(path+'SERVICE/SERVICE.shp')\nfor idx, coords in enumerate(SER):\n GRAPH_service.add_node(idx)\n points_service[idx] = coords\n GRAPH_service.node[idx] = coords",
"Simulate weights for Client Demand",
"# Client Weights for demand\nnp.random.seed(850)\nAi = np.random.randint(1, 5, len(CLI))\nAi = Ai.reshape(len(Ai),1)\nAiSum = np.sum(Ai) # Sum of Weights (Total Demand)",
"Instantiate Client .shp",
"client = shp.Writer(shp.POINT) # Client Shapefile\n# Add Random Points\nfor i,j in CLI:\n client.point(i,j)\n# Add Fields\nclient.field('client_ID')\nclient.field('Weight')\ncounter = 0\nfor i in range(len(CLI)):\n counter = counter + 1\n client.record('client_' + str(counter), Ai[i])\nclient.save(path+'Simulated/RandomPoints_CLIENT') # Save Shapefile ",
"Instantiate Service .shp",
"service = shp.Writer(shp.POINT) #Service Shapefile\n# Add Random Points\nfor i,j in SER:\n service.point(i,j)\n# Add Fields\nservice.field('y_ID')\nservice.field('x_ID')\ncounter = 0\nfor i in range(len(SER)):\n counter = counter + 1\n service.record('y' + str(counter), 'x' + str(counter))\nservice.save(path+'Simulated/RandomPoints_SERVICE') # Save Shapefile ",
"Snap Client and Service points to the network",
"# Snap\nSnap_C = ntw.snapobservations(path+'Simulated/RandomPoints_CLIENT.shp', \n 'Rand_Points_CLIENT', attribute=True)\nSnap_S = ntw.snapobservations(path+'Simulated/RandomPoints_SERVICE.shp', \n 'Rand_Points_SERVICE', attribute=True)",
"Create lat/lon lists of snapped service coords",
"# Create Lat & Lon lists of the snapped service locations\ny_snapped = []\nx_snapped = []\nfor i,j in ntw.pointpatterns['Rand_Points_SERVICE'].snapped_coordinates.iteritems():\n y_snapped.append(j[0]) \n x_snapped.append(j[1])",
"Instantiate snapped Service .shp",
"service_SNAP = shp.Writer(shp.POINT) # Snapped Service Shapefile\n# Add Points\nfor i,j in ntw.pointpatterns['Rand_Points_SERVICE'].snapped_coordinates.iteritems():\n service_SNAP.point(j[0],j[1])\n# Add Fields\nservice_SNAP.field('y_ID')\nservice_SNAP.field('x_ID')\nservice_SNAP.field('LAT')\nservice_SNAP.field('LON')\ncounter = 0\nfor i in range(len(ntw.pointpatterns['Rand_Points_SERVICE'].snapped_coordinates)):\n counter = counter + 1\n service_SNAP.record('y' + str(counter), 'x' + str(counter), y_snapped[i], x_snapped[i])\nservice_SNAP.save(path+'Snapped/SERVICE_Snapped') # Save Shapefile ",
"Call Client to Service Matrix Function",
"# Call Client to Service Matrix Function\nc_s_matrix() ",
"Create Lists to fill index and columns of GeoPandas Data Frames",
"# PANDAS DATAFRAME OF p/y results\np_list = []\nfor i in range(1, len(SER)+1):\n p = 'p='+str(i)\n p_list.append(p)\ny_list = []\nfor i in range(1, len(SER)+1):\n y = 'y'+str(i)\n y_list.append(y)",
"Instantiate GeoPandas Dataframes",
"pydf_M = pd.DataFrame(index=p_list,columns=y_list)\npydf_C = pd.DataFrame(index=p_list,columns=y_list)\npydf_CentDian = pd.DataFrame(index=p_list,columns=y_list)\npydf_MC = pd.DataFrame(index=p_list,columns=y_list)\n\n\nqgrid.show_grid(pydf_M)",
"Create PMP, PCP, and CentDian solution graphs",
"# p-Median\nP_Med_Graphs = OrderedDict()\nfor x in range(1, len(SER)+1):\n P_Med_Graphs[\"{0}\".format(x)] = nx.Graph()\n \n# p-Center\nP_Cent_Graphs = OrderedDict()\nfor x in range(1, len(SER)+1):\n P_Cent_Graphs[\"{0}\".format(x)] = nx.Graph()\n \n# p-CentDian\nP_CentDian_Graphs = OrderedDict()\nfor x in range(1, len(SER)+1):\n P_CentDian_Graphs[\"{0}\".format(x)] = nx.Graph()",
"Instantiate lists for objective values and average values of all models",
"# PMP\nVAL_PMP = []\nAVG_PMP = []\n\n# PCP\nVAL_PCP = []\n\n# CentDian\nVAL_CentDian = []\n\n# PMCP\nVAL_PMCP = []\np_dens = [] # when the facilities for the p-median & p-center are the same",
"Solutions\nSolve all",
"Gurobi_PMCP(len(SER), Ai, AiSum, All_Dist_MILES)",
"Calculate and record percentage decrease",
"# PMP Total\nPMP_Tot_Diff = []\nfor i in range(len(VAL_PMP)):\n if i == 0:\n PMP_Tot_Diff.append('0%')\n elif i <= len(VAL_PMP):\n n1 = VAL_PMP[i-1]\n n2 = VAL_PMP[i]\n diff = n2 - n1\n perc_change = (diff/n1)*100.\n PMP_Tot_Diff.append(str(round(perc_change, 2))+'%')\n\n# PMP Average\nPMP_Avg_Diff = []\nfor i in range(len(AVG_PMP)):\n if i == 0:\n PMP_Avg_Diff.append('0%')\n elif i <= len(AVG_PMP):\n n1 = AVG_PMP[i-1]\n n2 = AVG_PMP[i]\n diff = n2 - n1\n perc_change = (diff/n1)*100.\n PMP_Avg_Diff.append(str(round(perc_change, 2))+'%')\n \n# PCP\nPCP_Diff = []\nfor i in range(len(VAL_PCP)):\n if i == 0:\n PCP_Diff.append('0%')\n elif i <= len(VAL_PCP):\n n1 = VAL_PCP[i-1]\n n2 = VAL_PCP[i]\n diff = n2 - n1\n perc_change = (diff/n1)*100.\n PCP_Diff.append(str(round(perc_change, 2))+'%')\n\n# p-CentDian\nCentDian_Diff = []\nfor i in range(len(VAL_CentDian)):\n if i == 0:\n CentDian_Diff.append('0%')\n elif i <= len(VAL_CentDian):\n n1 = VAL_CentDian[i-1]\n n2 = VAL_CentDian[i]\n diff = n2 - n1\n perc_change = (diff/n1)*100.\n CentDian_Diff.append(str(round(perc_change, 2))+'%')\n\n# PMCP \nPMCP_Diff = []\ncounter = 0\nfor i in range(len(VAL_PMCP)):\n PMCP_Diff.append([])\n for j in range(len(VAL_PMCP[0])):\n if i == 0:\n PMCP_Diff[i].append('0%')\n elif i <= len(VAL_PMCP):\n n1 = VAL_PMCP[i-1][j]\n n2 = VAL_PMCP[i][j]\n diff = n2 - n1\n perc_change = (diff/n1*100.)\n PMCP_Diff[i].append(str(round(perc_change, 2))+'%')",
"Data Frames adjust",
"# PMP\npydf_M = pydf_M[len(SER):]\npydf_M.reset_index()\npydf_M.index = p_list\npydf_M.columns.name = 'Decision\\nVariables'\npydf_M.index.name = 'Facility\\nDensity'\npydf_M['Tot. Obj. Value'] = VAL_PMP\npydf_M['Tot. % Change'] = PMP_Tot_Diff\npydf_M['Avg. Obj. Value'] = AVG_PMP\npydf_M['Avg. % Change'] = PMP_Avg_Diff\npydf_M = pydf_M.fillna('')\n#pydf_M.to_csv(path+'CSV') <-- need to change squares to alphanumeric to use\n\n# PCP\npydf_C = pydf_C[len(SER):]\npydf_C.reset_index()\npydf_C.index = p_list\npydf_C.columns.name = 'Decision\\nVariables'\npydf_C.index.name = 'Facility\\nDensity'\npydf_C['Worst Case Obj. Value'] = VAL_PCP\npydf_C['Worst Case % Change'] = PCP_Diff\npydf_C = pydf_C.fillna('')\n#pydf_C.to_csv(path+'CSV') <-- need to change squares to alphanumeric to use\n\npydf_CentDian = pydf_CentDian[len(SER):]\npydf_CentDian.reset_index()\npydf_CentDian.index = p_list\npydf_CentDian.columns.name = 'Decision\\nVariables'\npydf_CentDian.index.name = 'Facility\\nDensity'\npydf_CentDian['CentDian Obj. 
Value'] = VAL_CentDian\npydf_CentDian['CentDian % Change'] = CentDian_Diff\npydf_CentDian = pydf_CentDian.fillna('')\n#pydf_CentDian.to_csv(path+'CSV') <-- need to change squares to alphanumeric to use\n\n# PMCP\npydf_MC = pydf_MC[len(SER):]\npydf_MC.reset_index()\npydf_MC.index = p_dens\npydf_MC.columns.name = 'D.V.'\npydf_MC.index.name = 'F.D.'\npydf_MC['Min.\\nTotal'] = [VAL_PMCP[x][0] for x in range(len(VAL_PMCP))]\npydf_MC['Min.\\nTotal\\n%\\nChange'] = [PMCP_Diff[x][0] for x in range(len(PMCP_Diff))]\npydf_MC['Avg.\\nTotal'] = [VAL_PMCP[x][1] for x in range(len(VAL_PMCP))]\npydf_MC['Avg.\\nTotal\\n%\\nChange'] = [PMCP_Diff[x][1] for x in range(len(PMCP_Diff))]\npydf_MC['Worst\\nCase'] = [VAL_PMCP[x][2] for x in range(len(VAL_PMCP))]\npydf_MC['Worst\\nCase\\n%\\nChange'] = [PMCP_Diff[x][2] for x in range(len(PMCP_Diff))]\npydf_MC['Center\\nMedian'] = [VAL_PMCP[x][3] for x in range(len(VAL_PMCP))]\npydf_MC['Center\\nMedian\\n%\\nChange'] = [PMCP_Diff[x][3] for x in range(len(PMCP_Diff))]\npydf_MC = pydf_MC.fillna('')\n#pydf_MC.to_csv(path+'CSV') <-- need to change squares to alphanumeric to use",
"Create Graphs of the PMCP results",
"# Create Graphs of the PMCP results\nPMCP_Graphs = OrderedDict()\nfor x in pydf_MC.index:\n PMCP_Graphs[x[2:]] = nx.Graph()",
"Visualizations\nDraw PMP figure [p=1] large $ \\rightarrow $ small [p=15]",
"figsize(10,10)\n\n# Draw Network Actual Roads and Nodes\nfor e in ntw.edges:\n g.add_edge(*e)\nnx.draw(g, ntw.node_coords, node_size=5, alpha=0.25, edge_color='r', width=2)\n\n#PMP\nsize = 3000\nfor i,j in P_Med_Graphs.iteritems():\n size=size-120\n # p-Median\n P_Med = ps.open(path+'Results/Selected_Locations_Pmedian'+str(i)+'.shp')\n points_median = {}\n for idx, coords in enumerate(P_Med):\n P_Med_Graphs[i].add_node(idx)\n points_median[idx] = coords\n P_Med_Graphs[i].node[idx] = coords\n nx.draw(P_Med_Graphs[i], \n points_median, \n node_size=size, \n alpha=.1, \n node_color='k')\n\n# Legend (Ordered Dictionary)\nLEGEND = OrderedDict()\nLEGEND['Network Nodes']=g\nLEGEND['Roads']=g\nfor i in P_Med_Graphs:\n LEGEND['Optimal PMP '+str(i)]=P_Med_Graphs[i]\nlegend(LEGEND, \n loc='upper left', \n fancybox=True, \n framealpha=0.5, \n scatterpoints=1)\n\n# Title\ntitle('Waverly Hills\\n Tallahassee, Florida', family='Times New Roman', \n size=40, color='k', backgroundcolor='w', weight='bold')\n\n# North Arrow and 'N' --> Must be changed for different spatial resolutions, etc.\narrow(624000, 164050, 0.0, 500, width=50, head_width=125, \n head_length=75, fc='k', ec='k',alpha=0.75,)\nannotate('N', xy=(623900, 164700), fontstyle='italic', fontsize='xx-large',\n fontweight='heavy', alpha=0.75)",
"Pandas PMP Data Frame",
"qgrid.show_grid(pydf_M)\n#pydf_M",
"Bokeh PMP [p vs. cost] trade off",
"#import bokeh\n#from bokeh.charts import Scatter, show\nfrom bokeh.plotting import figure, show, ColumnDataSource\nfrom bokeh.io import output_notebook\nfrom bokeh.models import HoverTool, BoxAnnotation\noutput_notebook()\n\nsource_m = ColumnDataSource(\n data=dict(\n x=range(1, len(SER)+1),\n y=AVG_PMP,\n avg=AVG_PMP,\n desc=p_list,\n change=PMP_Avg_Diff))\n\nTOOLS = 'wheel_zoom, pan, reset, crosshair, save'\n\nhover = HoverTool(line_policy=\"nearest\", mode=\"hline\", tooltips=\"\"\"\n <div>\n <div>\n \n </div>\n <div>\n <span style=\"font-size: 17px; font-weight: bold;\">@desc</span> \n </div>\n <div>\n <span style=\"font-size: 15px;\">Average Minimized Cost</span>\n <span style=\"font-size: 15px; font-weight: bold; color: #ff4d4d;\">[@avg]</span>\n </div>\n <div>\n <span style=\"font-size: 15px;\">Variation</span>\n <span style=\"font-size: 15px; font-weight: bold; color: #ff4d4d;\">[@change]</span>\n </div>\n </div>\"\"\")\n\n# Instantiate Plot\npmp_plot = figure(plot_width=600, plot_height=600, tools=[TOOLS,hover],\n title=\"Average Distance vs. p-Facilities\", y_range=(0,2))\n\n# Create plot points and set source\npmp_plot.circle('x', 'y', size=15, color='red',source=source_m, \n legend='Total Minimized Cost / Total Demand')\npmp_plot.line('x', 'y', line_width=2, color='red', alpha=.5, source=source_m, \n legend='Total Minimized Cost / Total Demand')\n\npmp_plot.xaxis.axis_label = '[p = n]'\npmp_plot.yaxis.axis_label = 'Miles'\n\none_quarter = BoxAnnotation(plot=pmp_plot, top=.35, \n fill_alpha=0.1, fill_color='green')\nhalf = BoxAnnotation(plot=pmp_plot, bottom=.35, top=.7, \n fill_alpha=0.1, fill_color='blue')\nthree_quarter = BoxAnnotation(plot=pmp_plot, bottom=.7, top=1.05,\n fill_alpha=0.1, fill_color='gray')\n\npmp_plot.renderers.extend([one_quarter, half, three_quarter])\n\n# Display the figure\nshow(pmp_plot)",
"Draw PCP figure [p=1] large $ \\rightarrow $ small [p=15]",
"figsize(10,10)\n# Draw Network Actual Roads and Nodes\nfor e in ntw.edges:\n g.add_edge(*e)\nnx.draw(g, ntw.node_coords, node_size=5, alpha=0.25, edge_color='r', width=2)\n\n#PCP\nsize = 3000\nfor i,j in P_Cent_Graphs.iteritems():\n size=size-150\n # p-Center\n P_Cent = ps.open(path+'Results/Selected_Locations_Pcenter'+str(i)+'.shp')\n points_center = {}\n for idx, coords in enumerate(P_Cent):\n P_Cent_Graphs[i].add_node(idx)\n points_center[idx] = coords\n P_Cent_Graphs[i].node[idx] = coords\n nx.draw(P_Cent_Graphs[i], \n points_center, \n node_size=size, \n alpha=.1, \n node_color='k')\n\n# Legend (Ordered Dictionary)\nLEGEND = OrderedDict()\nLEGEND['Network Nodes']=g\nLEGEND['Roads']=g\nfor i in P_Cent_Graphs:\n LEGEND['Optimal PCP '+str(i)]=P_Cent_Graphs[i]\nlegend(LEGEND, \n loc='upper left', \n fancybox=True, \n framealpha=0.5, \n scatterpoints=1)\n\n# Title\ntitle('Waverly Hills\\n Tallahassee, Florida', family='Times New Roman', \n size=40, color='k', backgroundcolor='w', weight='bold')\n\n# North Arrow and 'N' --> Must be changed for different spatial resolutions, etc.\narrow(624000, 164050, 0.0, 500, width=50, head_width=125, \n head_length=75, fc='k', ec='k',alpha=0.75,)\nannotate('N', xy=(623900, 164700), fontstyle='italic', fontsize='xx-large',\n fontweight='heavy', alpha=0.75)",
"Pandas PCP Data Frame",
"pydf_C",
"Bokeh PCP [p vs. cost] trade off",
"#output_notebook()\n\nsource_c = ColumnDataSource(\n data=dict(\n x=range(1, len(SER)+1),\n y=VAL_PCP,\n obj=VAL_PCP,\n desc=p_list,\n change=PCP_Diff))\n \nTOOLS = 'wheel_zoom, pan, reset, crosshair, save'\n \nhover = HoverTool(line_policy=\"nearest\", mode=\"vline\", tooltips=\"\"\"\n <div>\n <div>\n \n </div>\n <div>\n <span style=\"font-size: 17px; font-weight: bold;\">@desc</span> \n </div>\n <div>\n <span style=\"font-size: 15px;\">Worst Case Cost</span>\n <span style=\"font-size: 15px; font-weight: bold; color: #00b300;\">[@obj]</span>\n </div>\n <div>\n <span style=\"font-size: 15px;\">Variation</span>\n <span style=\"font-size: 15px; font-weight: bold; color: #00b300;\">[@change]</span>\n </div>\n </div>\"\"\")\n\n# Instantiate Plot\npcp_plot = figure(plot_width=600, plot_height=600, tools=[TOOLS,hover],\n title=\"Worst Case Distance vs. p-Facilities\", y_range=(0,2))\n\n# Create plot points and set source\npcp_plot.circle('x', 'y', size=15, color='green', source=source_c,\n legend='Minimized Worst Case')\npcp_plot.line('x', 'y', line_width=2, color='green', alpha=.5, source=source_c,\n legend='Minimized Worst Case')\n\npcp_plot.xaxis.axis_label = '[p = n]'\npcp_plot.yaxis.axis_label = 'Miles'\n\none_quarter = BoxAnnotation(plot=pcp_plot, top=.35, \n fill_alpha=0.1, fill_color='green')\nhalf = BoxAnnotation(plot=pcp_plot, bottom=.35, top=.7, \n fill_alpha=0.1, fill_color='blue')\nthree_quarter = BoxAnnotation(plot=pcp_plot, bottom=.7, top=1.05,\n fill_alpha=0.1, fill_color='gray')\n\npcp_plot.renderers.extend([one_quarter, half, three_quarter])\n\n# Display the figure\nshow(pcp_plot)",
"Draw CentDian figure [p=1] large $ \\rightarrow $ small [p=15]",
"pydf_CentDian",
"Pandas CentDian Data Frame",
"figsize(10,10)\n# Draw Network Actual Roads and Nodes\nfor e in ntw.edges:\n g.add_edge(*e)\nnx.draw(g, ntw.node_coords, node_size=5, alpha=0.25, edge_color='r', width=2)\n\n#CentDian\nsize = 3000\nfor i,j in P_CentDian_Graphs.iteritems():\n size=size-150\n P_CentDian = ps.open(path+'Results/Selected_Locations_CentDian'+str(i)+'.shp')\n points_centdian = {}\n for idx, coords in enumerate(P_CentDian):\n P_CentDian_Graphs[i].add_node(idx)\n points_centdian[idx] = coords\n P_CentDian_Graphs[i].node[idx] = coords\n nx.draw(P_CentDian_Graphs[i], \n points_centdian, \n node_size=size, \n alpha=.1, \n node_color='k')\n\n# Legend (Ordered Dictionary)\nLEGEND = OrderedDict()\nLEGEND['Network Nodes']=g\nLEGEND['Roads']=g\nfor i in P_CentDian_Graphs:\n LEGEND['Optimal CentDian '+str(i)]=P_CentDian_Graphs[i]\nlegend(LEGEND, \n loc='upper left', \n fancybox=True, \n framealpha=0.5, \n scatterpoints=1)\n\n# Title\ntitle('Waverly Hills\\n Tallahassee, Florida', family='Times New Roman', \n size=40, color='k', backgroundcolor='w', weight='bold')\n\n# North Arrow and 'N' --> Must be changed for different spatial resolutions, etc.\narrow(624000, 164050, 0.0, 500, width=50, head_width=125, \n head_length=75, fc='k', ec='k',alpha=0.75,)\nannotate('N', xy=(623900, 164700), fontstyle='italic', fontsize='xx-large',\n fontweight='heavy', alpha=0.75)",
"Bokeh CentDian [p vs. cost] trade off",
"#output_notebook()\n\nsource_centdian = ColumnDataSource(\n data=dict(\n x=range(1, len(SER)+1),\n y=VAL_CentDian,\n obj=VAL_CentDian,\n desc=p_list,\n change=CentDian_Diff))\n \nTOOLS = 'wheel_zoom, pan, reset, crosshair, save'\n \nhover = HoverTool(line_policy=\"nearest\", mode=\"vline\" ,tooltips=\"\"\"\n <div>\n <div>\n \n </div>\n <div>\n <span style=\"font-size: 17px; font-weight: bold;\">@desc</span> \n </div>\n <div>\n <span style=\"font-size: 15px;\">Center Median Cost</span>\n <span style=\"font-size: 15px; font-weight: bold; color: #3385ff;\">[@obj]</span>\n </div>\n <div>\n <span style=\"font-size: 15px;\">Variation</span>\n <span style=\"font-size: 15px; font-weight: bold; color: #3385ff;\">[@change]</span>\n </div>\n </div>\"\"\")\n\n# Instantiate Plot\ncentdian_plot = figure(plot_width=600, plot_height=600, tools=[TOOLS,hover],\n title=\"Center Median Distance vs. p-Facilities\", y_range=(0,2))\n\n# Create plot points and set source\ncentdian_plot.circle('x', 'y', size=15, color='blue', source=source_centdian,\n legend='Center Median')\ncentdian_plot.line('x', 'y', line_width=2, color='blue', alpha=.5, source=source_centdian,\n legend='Center Median')\n\ncentdian_plot.xaxis.axis_label = '[p = n]'\ncentdian_plot.yaxis.axis_label = 'Miles'\n\none_quarter = BoxAnnotation(plot=pcp_plot, top=.35, \n fill_alpha=0.1, fill_color='green')\nhalf = BoxAnnotation(plot=pcp_plot, bottom=.35, top=.7, \n fill_alpha=0.1, fill_color='blue')\nthree_quarter = BoxAnnotation(plot=pcp_plot, bottom=.7, top=1.05,\n fill_alpha=0.1, fill_color='gray')\n\ncentdian_plot.renderers.extend([one_quarter, half, three_quarter])\n\n# Display the figure\nshow(centdian_plot)",
"Draw PMCP figure",
"figsize(10,10)\n# Draw Network Actual Roads and Nodes\nfor e in ntw.edges:\n g.add_edge(*e)\nnx.draw(g, ntw.node_coords, node_size=5, alpha=0.25, edge_color='r', width=2)\n\nsize = 500\nshape = 'sdh^Vp<8>'\ncounter = -1\nfor i,j in PMCP_Graphs.iteritems():\n if int(i) <= len(SER)-1:\n counter = counter+1\n pmcp = ps.open(path+'Results/Selected_Locations_PMCP'+str(i)+'.shp')\n points_pmcp = {}\n for idx, coords in enumerate(pmcp):\n PMCP_Graphs[i].add_node(idx)\n points_pmcp[idx] = coords\n PMCP_Graphs[i].node[idx] = coords\n nx.draw(PMCP_Graphs[i], \n points_pmcp, \n node_size=size,\n node_shape=shape[counter],\n alpha=.5, \n node_color='k')\n else:\n pass\n \n# Legend (Ordered Dictionary)\nLEGEND = OrderedDict()\nLEGEND['Network Nodes']=g\nLEGEND['Roads']=g\nfor i in PMCP_Graphs:\n if int(i) <= len(SER)-1:\n LEGEND['PMP/PCP == '+str(i)]=PMCP_Graphs[i]\nlegend(LEGEND, \n loc='upper left', \n fancybox=True, \n framealpha=0.5, \n scatterpoints=1)\n\n# Title\ntitle('Waverly Hills\\n Tallahassee, Florida', family='Times New Roman', \n size=40, color='k', backgroundcolor='w', weight='bold')\n\n# North Arrow and 'N' --> Must be changed for different spatial resolutions, etc.\narrow(624000, 164050, 0.0, 500, width=50, head_width=125, \n head_length=75, fc='k', ec='k',alpha=0.75,)\nannotate('N', xy=(623900, 164700), fontstyle='italic', fontsize='xx-large',\n fontweight='heavy', alpha=0.75)",
"Pandas PMCP Data Frame",
"pydf_MC",
"Bokeh PMP & PCP [p vs. cost] comparision",
"#output_notebook()\n\nTOOLS = 'wheel_zoom, pan, reset, crosshair, save, hover'\n\nbokeh_df_PMCP = pd.DataFrame()\nbokeh_df_PMCP['p'] = [int(i[2:]) for i in p_dens]\nbokeh_df_PMCP['Total Obj. Value'] = [VAL_PMCP[x][0] for x in range(len(VAL_PMCP))]\nbokeh_df_PMCP['Avg. Obj. Value'] = [VAL_PMCP[x][1] for x in range(len(VAL_PMCP))]\nbokeh_df_PMCP['Worst Case Obj. Value'] = [VAL_PMCP[x][2] for x in range(len(VAL_PMCP))]\nbokeh_df_PMCP['CentDian Obj. Value'] = [VAL_PMCP[x][3] for x in range(len(VAL_PMCP))]\n\nplot_PMCP = figure(title=\"Optimal PMP & PCP Selections without Sacrifice\", \n plot_width=800, plot_height=600, tools=[TOOLS], y_range=(0,2))\n\nplot_PMCP.circle('x', 'y', size=5, color='red', source=source_m, legend='PMP')\nplot_PMCP.line('x', 'y', \n color=\"#ff4d4d\", alpha=0.2, line_width=2, source=source_m, legend='PMP')\n\nplot_PMCP.circle('x', 'y', size=5, color='green', source=source_c, legend='PCP')\nplot_PMCP.line('x', 'y', \n color='#00b300', alpha=0.2, line_width=2, source=source_c, legend='PCP')\n\nplot_PMCP.circle('x', 'y', size=5, color='blue', source=source_centdian, legend='CentDian')\nplot_PMCP.line('x', 'y', \n color='#3385ff', alpha=0.2, line_width=2, source=source_centdian, legend='CentDian')\n\nplot_PMCP.circle_x(bokeh_df_PMCP['p'], \n bokeh_df_PMCP['Avg. Obj. Value'], \n legend=\"Location PMP=PCP for PM+CP\", \n color=\"#ff4d4d\",\n fill_alpha=0.2,\n size=15)\n\nplot_PMCP.circle_x(bokeh_df_PMCP['p'], \n bokeh_df_PMCP['Worst Case Obj. Value'], \n legend=\"Location PCP=PMP for PM+CP\", \n color='#00b300',\n fill_alpha=0.2,\n size=15)\n\nplot_PMCP.circle_x(bokeh_df_PMCP['p'], \n bokeh_df_PMCP['CentDian Obj. 
Value'], \n legend=\"Location CentDian = PMCP\", \n color='#3385ff',\n fill_alpha=0.2,\n size=15)\n\nplot_PMCP.xaxis.axis_label = '[p = n]'\nplot_PMCP.yaxis.axis_label = 'Miles'\n\none_quarter = BoxAnnotation(plot=plot_PMCP, top=.35, \n fill_alpha=0.1, fill_color='green')\nhalf = BoxAnnotation(plot=plot_PMCP, bottom=.35, top=.7, \n fill_alpha=0.1, fill_color='blue')\nthree_quarter = BoxAnnotation(plot=plot_PMCP, bottom=.7, top=1.05,\n fill_alpha=0.1, fill_color='gray')\n\nplot_PMCP.renderers.extend([one_quarter, half, three_quarter])\n\nshow(plot_PMCP)",
"Convert Service Facilities Back to Longitude/Latitude for Google Maps Plots",
"points = SERVICE\npoints.to_crs(epsg=32616, inplace=True) # UTM 16N\nLonLat_Dict = OrderedDict()\nLonLat_List = []\n\nfor i,j in points['geometry'].iteritems():\n LonLat_Dict[y_list[i]] = utm.to_latlon(j.xy[0][-1], j.xy[1][-1], 16, 'N') \n LonLat_List.append((utm.to_latlon(j.xy[0][-1], j.xy[1][-1], 16, 'N')))\n\nService_Lat_List = []\nService_Lon_List = []\n \nfor i in LonLat_List:\n Service_Lat_List.append(i[0])\nfor i in LonLat_List:\n Service_Lon_List.append(i[1])",
"Create Lists of Selected Locations for Google Maps Plot",
"# p-Median Selected Sites\nlist_of_p_MEDIAN = []\nfor y in range(len(y_list)):\n list_of_p_MEDIAN.append([])\n for p in range(len(p_list)):\n if pydf_M[y_list[y]][p_list[p]] == u'\\u2588':\n list_of_p_MEDIAN[y].append([p_list[p]])\n \n# p-Center Selected Sites\nlist_of_p_CENTER = []\nfor y in range(len(y_list)):\n list_of_p_CENTER.append([])\n for p in range(len(p_list)):\n if pydf_C[y_list[y]][p_list[p]] == u'\\u2588':\n list_of_p_CENTER[y].append([p_list[p]])\n \n# p-CentDian Selected Sites\nlist_of_p_CentDian = []\nfor y in range(len(y_list)):\n list_of_p_CentDian.append([])\n for p in range(len(p_list)):\n if pydf_CentDian[y_list[y]][p_list[p]] == u'\\u2588':\n list_of_p_CentDian[y].append([p_list[p]])\n\n# PMCP Selected Sites\nlist_of_PMCP = []\nfor y in range(len(y_list)):\n list_of_PMCP.append([])\n for p in p_dens:\n if pydf_MC[y_list[y]][p] == u'\\u2588':\n list_of_PMCP[y].append(p)\n",
"Google Maps Plot",
"from bokeh.io import output_notebook, output_file, show\nfrom bokeh.models import (GMapPlot, GMapOptions, ColumnDataSource, Circle, MultiLine, \n DataRange1d, PanTool, WheelZoomTool, BoxSelectTool, ResetTool)\n\nmap_options = GMapOptions(lat=30.4855, lng=-84.265, map_type=\"hybrid\", zoom=14)\n\nplot = GMapPlot(\n x_range=DataRange1d(), y_range=DataRange1d(), map_options=map_options, title=\"Waverly Hills\")\n\nhover = HoverTool(tooltips=\"\"\"\n <div>\n <div>\n \n </div>\n <div>\n <span style=\"font-size: 30px; font-weight: bold;\">Site @desc</span> \n </div>\n <div>\n <span> \\b </span>\n </div>\n <div>\n <span style=\"font-size: 17px; font-weight: bold;\">p-Median: </span> \n </div>\n <div>\n <span style=\"font-size: 15px; font-weight: bold; color: #ff4d4d;\">@p_select_median</span>\n </div>\n <div>\n <span> \\b </span>\n </div>\n <div>\n <span style=\"font-size: 17px; font-weight: bold;\">p-Center</span> \n \n <div>\n <span style=\"font-size: 14px; font-weight: bold; color: #00b300;\">@p_select_center</span>\n </div>\n <div>\n <span> \\b </span>\n </div>\n <div>\n <span style=\"font-size: 17px; font-weight: bold;\">p-CentDian</span> \n </div>\n <div>\n <span style=\"font-size: 14px; font-weight: bold; color: #3385ff;\">@p_select_centdian</span>\n </div>\n <div>\n <span> \\b </span>\n </div>\n <span style=\"font-size: 17px; font-weight: bold;\">PMCP Method</span> \n </div>\n <div>\n <span style=\"font-size: 14px; font-weight: bold; color: 'gray';\">@p_select_pmcp</span>\n </div>\n </div>\"\"\")\n\nsource_1 = ColumnDataSource(\n data=dict(\n lat=Service_Lat_List,\n lon=Service_Lon_List,\n desc=y_list,\n p_select_center=list_of_p_CENTER,\n p_select_median=list_of_p_MEDIAN,\n p_select_centdian= list_of_p_CentDian,\n p_select_pmcp=list_of_PMCP))\n\n#source_2 = ColumnDataSource(\n# data=dict(\n# xs=line1x,\n# ys=line1y))\n\nfacilties = Circle(x=\"lon\", y=\"lat\", size=10, fill_color=\"yellow\", fill_alpha=0.6, line_color=None)\n\n#streets = MultiLine(xs=\"xs\", 
ys=\"ys\", line_width=20, line_color='red')\n#plot.title = \"Waverly\"\n\nplot.add_glyph(source_1, facilties)\n#plot.add_glyph(source_2, streets)\n\nplot.add_tools(PanTool(), WheelZoomTool(), BoxSelectTool(), ResetTool(), hover)\noutput_file(\"gmap_plot.html\")\nshow(plot)",
"Future Work & Vision\n$\\Longrightarrow$ Develop a python library for bring together in one package spatial analysis & spatial optimization [spanoptpy] potentially incorporating:\n|QGIS| PySAL | NetworkX | Pandas | GeoPandas | NumPy | Shapely | Bokeh | CyLP\n|----|---------|-------------------------------------------------------------------------\n|GIS|network analysis|network analysis|data frames|geo dataframes|scientific computing|geometric objects|visualizations|optimization\n$\\Longrightarrow$ Need PySAL.Network to be able to handle larger networks\n$\\Longrightarrow$ Develop functionality within a Linux environment\n$\\Longrightarrow$ scipy.spatial.cKDTree(dist_matrix)\n $\\Longrightarrow$ query_ball_point() for close neighbors of the selected sites\n$\\Longrightarrow$ Master CyLP from COIN-OR or develop an open-source optimization suite\n\ninterface with CLP, CBC, CGL\n[ http://mpy.github.io/CyLPdoc/ ]\n\nrelatively steep learning curve \n\n\nComputational Optimization Infrastructure for Operations Research\n\n[ http://www.coin-or.org ]\n\n$\\ast$ CyLP example\nMinimize\n $ 3x + 2y $\nSubject To\n $ x \\geq 3$\n $ y \\geq 5$\n $ x - y \\leq 20$",
"# Gurobi\nm = gbp.Model()\nm.setParam( 'OutputFlag', False ) \nx = m.addVar(vtype=gbp.GRB.CONTINUOUS, name='x')\ny = m.addVar(vtype=gbp.GRB.CONTINUOUS, name='y')\nm.update()\nm.setObjective(3*x + 2*y, gbp.GRB.MINIMIZE)\nm.addConstr(x >= 3)\nm.addConstr(y >= 5)\nm.addConstr(x - y <= 20)\nm.optimize()\n#m.write('path_m.lp')\nprint m.objVal\nprint m.getVars()\n\n# CyLP\ns = CyClpSimplex()\nx = s.addVariable('x', 1)\ny = s.addVariable('y', 1)\ns += x >= 3\ns += y >= 5\ns += x - y <= 20\ns.objective = 3 * x + 2 * y\ns.primal()\n#s.writeLp('path_s')\nprint s.objectiveValue\nprint s.primalVariableSolution\nprint 'Gurobi & CLP Objective Values match? --> ', m.objVal == s.objectiveValue",
"email $\\Longrightarrow$ jgaboardi@fsu.edu\n\nGitHub $\\Longrightarrow$ https://github.com/jGaboardi/AAG_16",
"IPd.HTML('https://github.com/jGaboardi')",
"System Specs",
"import datetime as dt\nimport os\nimport platform\nimport sys\nimport bokeh\nimport cylp\n\nnames = ['OSX', 'Processor ', 'Machine ', 'Python ','PySAL ','Gurobi ','Pandas ','GeoPandas ',\n 'Shapely ', 'NumPy ', 'Bokeh ', 'CyLP', 'Date & Time']\nversions = [platform.mac_ver()[0], platform.processor(), platform.machine(), platform.python_version(),\n ps.version, gbp.gurobi.version(), pd.__version__, gpd.__version__, \n str(shapely.__version__), np.__version__, \n bokeh.__version__, '0.7.1', dt.datetime.now()]\nspecs = pd.DataFrame(index=names, columns=['Version'])\nspecs.columns.name = 'Platform & Software Specs'\nspecs['Version'] = versions\nspecs # Pandas DF of specifications",
"References\n\n\nBehnel, S., R. Bradshaw, C. Citro, L. Dalcin, D. S. Seljebotn, and K. Smith. 2011. Cython: The best of both worlds. Computing in Science and Engineering 13 (2):31–39.\n\n\nBokeh Development Team. 2014. Bokeh: Python library for interactive visualization.\n\n\nChurch, R. L. 2002. Geographical information systems and location science. Computers and Operations Research 29:541–562.\n\n\nChurch, R. L., and A. T. Murray. 2009. Business Site Selections, Locational Analysis, and GIS. Hoboken, NJ, USA: John Wiley & Sons, Inc.\n\n\nConde, E. 2008. A note on the minmax regret centdian location on trees. Operations Research Letters 36 (2):271–275.\n\n\nCurrent, J., M. S. Daskin, and D. A. Schilling. 2002. Discrete Network Location Models. In Facility Location Applications and Theory, eds. Z. Drezner and H. W. Hamacher, 81–118. New York: Springer Berlin Heidelberg.\n\n\nDan Fylstra, L. Hafer, B. Hart, B. Kristjannson, C. Phillips, T. Ralphs, (Matthew Saltzman, E. Straver, (Jean-Paul Watson, and H. G. Santos. CBC. https://projects.coin-or.org/Cbc.\n\n\nDaskin, M. 2013. Network and Discrete Location: Models, Algorithms and Applications 2nd ed. Hoboken, NJ, USA: John Wiley & Sons, Inc.\n\n\nDaskin, M. S. 2008. What You Should Know About Location Modeling. Naval Research Logistics 55 (2):283–294.\n\n\nGeoPandas Developers. 2013. GeoPandas. http://geopandas.org.\n\n\nGillies, S., A. Bierbaum, and K. Lautaportti. 2013. Shapely.\n\n\nGurobi. 2013. Gurobi optimizer quick start guide.\n\n\nHagberg, A. A., D. A. Schult, and P. J. Swart. 2008. Exploring network structure, dynamics, and function using NetworkX. Proceedings of the 7th Python in Science Conference (SciPy 2008) (SciPy):11–15.\n\n\nHakimi, S. L. 1964. Optimum Locations of Switching Centers and the Absolute Centers and Medians of a Graph. Operations Research 12 (3):450–459.\n\n\nHall, J., L. Hafer, M. Saltzman, and J. Forrest. CLP.\n\n\nHalpern, J. 1976. 
The Location of a Center-Median Convex Combination on an Undirected Tree. Journal of Regional Science 16 (2):237–245.\n\n\nHamacher, H. W., and S. Nickel. 1998. Classification of location models. Location Science 6 (1-4):229–242.\n\n\nHorner, M. W., and M. J. Widener. 2010. How do socioeconomic characteristics interact with equity and efficiency considerations? An analysis of hurricane disaster relief goods provision. Geospatial Analysis and Modelling of Urban Structure and Dynamics 99:393–414.\n\n\nHorner, M. W., and M. J. Widener. 2011. The effects of transportation network failure on people’s accessibility to hurricane disaster relief goods: A modeling approach and application to a Florida case study. Natural Hazards 59:1619–1634.\n\n\nHunter, J. D. 2007. Matplotlib: A 2D graphics environment. Computing in Science and Engineering 9 (3):99–104.\n\n\nLima, I. 2006. Python for Scientific Computing Python Overview. Marine Chemistry :10–20.\n\n\nLougee-Heimer, R. 2003. The Common Optimization INterface for Operations Research. IBM Journal of Research and Development 47 (1):57–66.\n\n\nMarcelin, J. M., M. W. Horner, E. E. Ozguven, and A. Kocatepe. 2016. How does accessibility to post-disaster relief compare between the aging and the general population? A spatial network optimization analysis of hurricane relief facility locations. International Journal of Disaster Risk Reduction 15:61–72.\n\n\nMcKinney, W. 2010. Data Structures for Statistical Computing in Python. In Proceedings of the 9th Python in Science Conference, 51–56.\n\n\nMiller, H. J., and S.-L. Shaw. 2001. Geographic Information Systems for Transportation. New York: Oxford University Press.\n\n\nMillman, K. J., and M. Aivazis. 2011. Python for scientists and engineers. Computing in Science and Engineering 13 (2):9–12.\n\n\nMinieka, E. 1970. The m-Center Problem. SIAM Review 12:38–39.\n\n\nOwen, S. H., and M. S. Daskin. 1998. Strategic facility location: A review. 
European Journal of Operational Research 111 (3):423–447.\n\n\nPérez, F., and B. E. Granger. 2007. IPython: A system for interactive scientific computing. Computing in Science and Engineering 9 (3):21–29.\n\n\nPérez-Brito, D., J. A. Moreno-Pérez, and R.-M. Inmaculada. 1997. Finite dominating set for the p-facility cent-dian network location problem. Studies in Locational Analysis (August):1–16.\n\n\nPérez-Brito, D., J. A. Moreno-Pérez, and I. Rodrı́guez-Martı́n. 1998. The 2-facility centdian network problem. Location Science 6 (1-4):369–381.\n\n\nQGIS Development Team. Open Source Geospatial Foundation Project. 2016. QGIS Geographic Information System.\n\n\nReVelle, C. S., and R. W. Swain. 1970. Central facilities location. Geographical Analysis 2 (1):30–42.\n\n\nRey, S. J., and L. Anselin. 2010. PySAL: A Python Library of Spatial Analytical Methods. In Handbook of Applied Spatial Analysis, eds. M. M. Fischer and A. Getis, 175–193. Springer Berlin Heidelberg.\n\n\nShier, D. R. 1977. A Min–Max Theorem for p-Center Problems on a Tree. Transportation Science 11:243–52.\n\n\nSuzuki, A., and Z. Drezner. 1996. The p-center location problem in an area. Location Science 4 (1-2):69–82.\n\n\nTamir, A., J. Puerto, and D. Pérez-Brito. 2002. The centdian subtree on tree networks. Discrete Applied Mathematics 118 (3):263–278.\n\n\nTeitz, M. B., and P. Bart. 1968. Heuristic Methods for Estimating the Generalized Vertex Median of a Weighted Graph. Operations Research 16 (5):955–961.\n\n\nTong, D., and A. T. Murray. 2012. Spatial Optimization in Geography. Annals of the Association of American Geographers 102 (6):1290–1309.\n\n\nTowhidi, M., and D. Orban. 2011. CyLP.\n\n\nUS Census Bureau. 2015. TIGER/Line® Shapefiles and TIGER/Line® Files. U.S. Census Bureau Geography. https://www.census.gov/geo/maps-data/data/tiger-line.html.\n\n\nWalt, S. van der, S. C. Colbert, and G. Varoquaux. 2011. The NumPy Array: A Struture for Efficient Numerical Computation. 
Computing in Science & Engineering 13:22–30.\n\n\nWilliam, R. 1971. The M-center problem."
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
ikeyasu/til
|
cnn_dogs_cats/dataset_setup.ipynb
|
mit
|
[
"Preparing dataset of kaggle \"dogs and cats\".\nhttps://www.kaggle.com/c/dogs-vs-cats-redux-kernels-edition\nThis script is based on\nhttps://github.com/fastai/courses/blob/master/deeplearning1/nbs/dogs_cats_redux.ipynb\nInstalling \"kg\" command\nhttps://github.com/floydwch/kaggle-cli",
"%%bash\nsource activate root # you need to change here to your env name\npip install kaggle-cli",
"download dataset",
"%%bash\nsource activate root # you need to change here to your env name\n\nrm -rf data\nmkdir -p data\n\npushd data\n kg download\n unzip -q train.zip\n unzip -q test.zip\npopd",
"Copy files to valid and sample",
"from glob import glob\nimport numpy as np\nfrom shutil import move, copyfile\n\n%mkdir -p data/train\n%mkdir -p data/valid\n%mkdir -p data/sample/train\n%mkdir -p data/sample/valid\n%pushd data/train\ng = glob('*.jpg')\nshuf = np.random.permutation(g)\nfor i in range(200): copyfile(shuf[i], '../sample/train/' + shuf[i])\nshuf = np.random.permutation(g)\nfor i in range(200): copyfile(shuf[i], '../sample/valid/' + shuf[i])\n\n# validation files are moved \nshuf = np.random.permutation(g)\nfor i in range(1000): move(shuf[i], '../valid/' + shuf[i])\n%popd",
"Arrangement files",
"%pushd data/train\n% mkdir cat dog\n% mv cat*.jpg cat\n% mv dog*.jpg dog\n%popd\n%pushd data/valid\n% mkdir cat dog\n% mv cat*.jpg cat\n% mv dog*.jpg dog\n%popd\n%pushd data/sample/train\n% mkdir cat dog\n% mv cat*.jpg cat\n% mv dog*.jpg dog\n%popd\n%pushd data/sample/valid\n% mkdir cat dog\n% mv cat*.jpg cat\n% mv dog*.jpg dog\n%popd\n%pushd data/test\n% mkdir unknown\n% mv *.jpg unknown\n%popd\n"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
tpin3694/tpin3694.github.io
|
misc/library_counts.ipynb
|
mit
|
[
"Title: Finding Your Most Used Python Libraries\nSlug: most-used-libraries\nSummary: A Simple Script To Scan Through A Given Directory, Finding Your Most Commonly Used Python Libraries\nDate: 2018-03-11 13:49\nCategory: Misc\nTags: Misc\nAuthors: Thomas Pinder\nFinding My Most Commonly Used Libraries\nThis is by no means a polished item, moreover a quick and simple script that scans through my Documents directory, locating any files with the .py extension. For each found file, the script will scan through the file, extracting the libraries loaded into the file and creating a total count for each library. This script could be enhanced by removing duplicates from files as there are cases where the same library is loaded multiple i.e. from sklearn.linear_model import linear_model and from sklearn.metrics import confusion_matrix would be classed as seperate libraries when in fact it is just the sklearn library being loaded. For now this will do though, maybe later down the line I will refine this script if I have time.",
"import os\nfrom collections import Counter\nimport matplotlib.pyplot as plt\nimport numpy as np\n%matplotlib inline\n\ndirectory = '/home/tpin3694/Documents/'\n\npython_files = [os.path.join(root, name) \n for root, dirs, files in os.walk(directory) \n for name in files if name.endswith(('.py'))]\nprint('Found {} Python files\\n'.format(len(python_files)))\n\nlibraries = []\nerror_count = 0\nfor file in python_files:\n try:\n file_import = open(file, 'r')\n file_data = file_import.readlines()\n for line in file_data:\n if line.startswith(('import')) or line.startswith(('from')):\n libraries.append(line.split()[1])\n except UnicodeDecodeError:\n error_count += 1\n\nprint('{}% files raising encoding errors.\\n'.format(round(100*error_count/len(python_files),2)))\n\nlibrary_counts = Counter(libraries)",
"With the counts stored in a Counter object, lets now quickly print out the top ten libraries and their respective counts.",
"print('Top 15 Libraries')\nfor label, count in library_counts.most_common(15):\n print('{}: {}'.format(label, count))\n",
"Nothing there seems particularly suprising as this is run from my laptop meaning that a lot of the scripts written are statistical or just simple automations and the above libraries are pretty useful for that. I'd be interested to run this same script on my GPU desktop as it is there that I run any neural networks and heavy machine learning files. I suspect PyTorch and TensorFlow may start to appear in the top few items then.\nJust to round things off, I will make a quick bar plot to depict the counts of each library.",
"labels, counts = zip(*library_counts.most_common(15))\nplt.figure(figsize=(12, 8))\nplt.bar(labels, counts)\nplt.xticks(rotation=60);",
"There we go then, that seems to work quite well. Like I said above, this is quite a rough and ready script and I will consider tweaking it in the future to make it more accurate, however, for now this does just fine."
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
peendebak/SPI-rack
|
examples/S4g.ipynb
|
mit
|
[
"S4g example notebook\nExample notebook of the S4g 4 channel, 18-bit current source module. To use this, we need an S4g module and a controller module. This can be either the C1b/C2 combination or the C1.\nTo use the S4g module, we also need to import the SPI rack class. All communication with the SPI Rack will go through this object. Only one SPI_rack object can be active at a time on the PC. So before running another script, the connection in this one should be closed.",
"# Import SPI rack and S4g module\nfrom spirack import SPI_rack, S4g_module",
"Open the SPI rack connection and unlock the controller. This is necessary after bootup of the controller module. If not unlocked, no communication with the modules can take place. The virtual COM port baud rate is irrelevant as it doesn't change the actual speed. Timeout can be changed, but 1 second is a good value.",
"COM_speed = 1e6 # Baud rate, doesn't matter much\ntimeout = 1 # In seconds\n\nspi_rack = SPI_rack('COM4', COM_speed, timeout)\nspi_rack.unlock() # Unlock the controller to be able to send data",
"Create a new S4g module object at the correct (set) module address using the SPI object. By default the module resets the output currents to 0 mA. Before it does this, it will read back the current value. If this value is non-zero it will slowly ramp it to zero. If reset_currents = False then the output will not be changed.\nMost S4g units have a span of ±50mA. If this is not the case, the maximum span can be set at initialisation using the max_current parameter. In this case we set it to 50 mA.",
"S4g = S4g_module(spi_rack, module=2, max_current=50e-3, reset_currents=True)",
"The output span of the module can be set in software to multiple ranges. In all these ranges the unit keeps the 18-bit resolution. The actual current output depends on the maximum current. Here we assume the default 50mA:\n* 0 to 50mA: range_max_uni\n* -50 to 50 mA: range_max_bi\n* -25 to 25 mA: range_min_bi\nBy default it is set to ±50mA. Changing the span determines the smallest step size possible. The DACs used internally are 18-bit, which gives a smallest step size of:\n$$\\text{LSB} = \\frac{\\text{Span}}{2^{18}}$$\nIn the case of a ±25mA span this gives an LSB of ~190 nA. You can get the current stepsize of a certain current output with the get_stepsize function.",
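As a quick numeric check of the LSB formula above (plain Python, independent of the spirack driver):

```python
# LSB = Span / 2**18 for an 18-bit DAC.
# For the ±25 mA range the total span is 50 mA.
span = 50e-3
lsb = span / 2**18
print(round(lsb / 1e-9, 1), "nA")  # ~190.7 nA, matching the ~190 nA quoted above
```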
"S4g.change_span_update(0, S4g.range_min_bi)\n\nstepsize = S4g.get_stepsize(1)\nprint(\"Stepsize: \" + str(stepsize/1e-9) + \" nA\")",
"The current can be set using the set_current function. If you want the output to be precisely equal to the set value, the current should be an integer multiple of the stepsize. This is especially recommended if sweeps are performed, otherwise the steps might not be equidistant. Here we set the current of output 0 to 1 mA. In the software the output count starts at 0, while on the front it starts at 1.\nOptionally the output current might also be changed by setting a digital bit value ranging from 0 to $2^{18}-1$ using change_value_update.",
"# Changing the output by current\nS4g.set_current(0, 1e-3)\n\n# Changing the output by digital values\nS4g.change_value_update(0, 165535)",
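To keep set currents an integer multiple of the step size, one can snap the target value first. A minimal sketch (the `stepsize` value here is an assumed ~LSB of the ±25 mA span; in practice it would come from `get_stepsize`):

```python
def snap_to_stepsize(current, stepsize):
    """Round a target current to the nearest integer multiple of the DAC step size."""
    return round(current / stepsize) * stepsize

stepsize = 190.7e-9  # assumed LSB of the ±25 mA span
target = 1e-3        # 1 mA
print(snap_to_stepsize(target, stepsize))  # nearest exactly representable current
```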
"When done with the measurement, it is recommended to close the SPI Rack connection. This will allow other measurement scripts to access the device.",
"spi_rack.close()"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
ioos/system-test
|
content/downloads/notebooks/2015-10-26-HF_Radar_Currents_west_coast.ipynb
|
unlicense
|
[
"\"\"\"\nThe original notebook is HF_Radar_Currents_west_coast.ipynb in\nTheme_2_Extreme_Events/Scenario_2B/HF_Radar_Currents/\n\"\"\"\n\ntitle = \"Near real-time current data\"\nname = '2015-10-26-HF_Radar_Currents_west_coast'\n\n%matplotlib inline\nimport seaborn\nseaborn.set(style='ticks')\n\nimport os\nfrom datetime import datetime\nfrom IPython.core.display import HTML\n\nimport warnings\nwarnings.simplefilter(\"ignore\")\n\n# Metadata and markdown generation.\nhour = datetime.utcnow().strftime('%H:%M')\ncomments = \"true\"\n\ndate = '-'.join(name.split('-')[:3])\nslug = '-'.join(name.split('-')[3:])\n\nmetadata = dict(title=title,\n date=date,\n hour=hour,\n comments=comments,\n slug=slug,\n name=name)\n\nmarkdown = \"\"\"Title: {title}\ndate: {date} {hour}\ncomments: {comments}\nslug: {slug}\n\n{{% notebook {name}.ipynb cells[2:] %}}\n\"\"\".format(**metadata)\n\ncontent = os.path.abspath(os.path.join(os.getcwd(), os.pardir,\n os.pardir, '{}.md'.format(name)))\n\nwith open('{}'.format(content), 'w') as f:\n f.writelines(markdown)\n\n\nhtml = \"\"\"\n<small>\n<p> This post was written as an IPython notebook. It is available for\n<a href=\"http://ioos.github.com/system-test/downloads/\nnotebooks/%s.ipynb\">download</a>. You can also try an interactive version on\n<a href=\"http://mybinder.org/repo/ioos/system-test/\">binder</a>.</p>\n<p></p>\n\"\"\" % (name)",
"The System Integration Test is divided into three main themes:\n\nBaseline Assessment,\nExtreme Events, and\nSpecies Protection and Marine Habitat Classification.\n\nThe second theme, extreme events, serves to test discovery, access,\nand usefulness of meteorological and oceanographic (metocean) data commonly needed to plan for or respond to extreme events such as coastal storms,\noil spills, or search and rescue.\nThe tests in this theme are focused around four questions:\n\nIs it possible to discover long enough records of metocean data to successfully perform extreme values analysis and predict extreme conditions for a given location?\nIs it possible to discover and compare modeled and observed metocean parameters for a given geographic location?\nWhat is the extent of relevant meteorological and oceanographic data available in near real-time that could be used to support rapid deployment of a common operational picture to support response?\nIs it possible to discover and use information about the consequences of an event (e.g. determining whether extreme water levels cause coastal flooding and to what extent)?\n\nWe already know that (2) is possible for sea surface height.\nWe also know we can query very long time-series over OPeNDAP, while SOS is limited to ~1 week of data.\nTherefore, (1) is possible depending on the endpoint available for the data in question.\nFinally, to properly answer question (4) we need first to be able to do (3) correctly.\nThe notebook shown below will try to answer (3) for ocean currents measured\nby HF-Radar.\nWe are interested in looking at near real-time data to analyze extreme events for a certain area of interest.\nLet's start by defining a small bounding box for San Francisco Bay,\nand query data from the past week up to now.",
"from datetime import datetime, timedelta\n\n\nbounding_box_type = \"box\"\nbounding_box = [-123, 36, -121, 40]\n\n# Temporal range.\nnow = datetime.utcnow()\nstart, stop = now - timedelta(days=(7)), now\n\n# URL construction.\niso_start = start.strftime('%Y-%m-%dT%H:%M:00Z')\niso_end = stop.strftime('%Y-%m-%dT%H:%M:00Z')\n\nprint('{} to {}'.format(iso_start, iso_end))",
"The cell below defines the variable of choice: currents.",
"data_dict = dict()\nsos_name = 'Currents'\ndata_dict['currents'] = {\"names\": ['currents',\n 'surface_eastward_sea_water_velocity',\n '*surface_eastward_sea_water_velocity*'],\n \"sos_name\": [sos_name]}",
"The next 3 cells are very similar to what we did in the fetching data example:\n\nInstantiate a csw object using the NGDC catalog.\nCreate the owslib.fes filter.\nApply the filter!",
"from owslib.csw import CatalogueServiceWeb\n\n\nendpoint = 'http://www.ngdc.noaa.gov/geoportal/csw'\n\ncsw = CatalogueServiceWeb(endpoint, timeout=60)\n\nfrom owslib import fes\nfrom utilities import fes_date_filter\n \nbegin, end = fes_date_filter(start, stop)\nbbox = fes.BBox(bounding_box)\n\nor_filt = fes.Or([fes.PropertyIsLike(propertyname='apiso:AnyText',\n literal=('*%s*' % val),\n escapeChar='\\\\', wildCard='*',\n singleChar='?') for val in\n data_dict['currents']['names']])\n\nval = 'Averages'\nnot_filt = fes.Not([fes.PropertyIsLike(propertyname='apiso:AnyText',\n literal=('*%s*' % val),\n escapeChar='\\\\',\n wildCard='*',\n singleChar='?')])\n\nfilter_list = [fes.And([bbox, begin, end, or_filt, not_filt])]\n\ncsw.getrecords2(constraints=filter_list, maxrecords=1000, esn='full')",
"The OPeNDAP URLs found were:",
"from utilities import service_urls\n\ndap_urls = service_urls(csw.records, service='odp:url')\n\nprint(\"Total DAP: {}\".format(len(dap_urls)))\nprint(\"\\n\".join(dap_urls))",
"We found 12 endpoints for currents data.\nNote that the last four look like model data, while all others are HF-Radar endpoints.\nCan we find more near real-time data?\nLet's try the Center for Operational Oceanographic Products and Services (CO-OPS) and the National Data Buoy Center (NDBC).\nBoth services can be queried using the pyoos library.",
"from pandas import read_csv\n\nbox_str = ','.join(str(e) for e in bounding_box)\n\nurl = (('http://opendap.co-ops.nos.noaa.gov/ioos-dif-sos/SOS?'\n 'service=SOS&request=GetObservation&version=1.0.0&'\n 'observedProperty=%s&bin=1&'\n 'offering=urn:ioos:network:NOAA.NOS.CO-OPS:CurrentsActive&'\n 'featureOfInterest=BBOX:%s&responseFormat=text/csv') % (sos_name,\n box_str))\n\nobs_loc_df = read_csv(url)\n\nfrom utilities.ioos import processStationInfo\n\nst_list = processStationInfo(obs_loc_df, \"coops\")\n\nprint(st_list)",
"We got a few CO-OPS stations inside the time/box queried.\nHow about NDBC buoys?",
"box_str = ','.join(str(e) for e in bounding_box)\n\nurl = (('http://sdf.ndbc.noaa.gov/sos/server.php?'\n 'request=GetObservation&service=SOS&'\n 'version=1.0.0&'\n 'offering=urn:ioos:network:noaa.nws.ndbc:all&'\n 'featureofinterest=BBOX:%s&'\n 'observedproperty=%s&'\n 'responseformat=text/csv&') % (box_str, sos_name))\n\nobs_loc_df = read_csv(url)\n\nlen(obs_loc_df)",
"No recent observations were found for NDBC buoys!\nWe need to get the data from the stations identified above.\n(There is some boilerplate code in the cell below to show a progress bar.\nThat can be useful when downloading from many stations,\nbut it is not necessary.)",
"import uuid\nfrom IPython.display import HTML, Javascript, display\nfrom utilities.ioos import coopsCurrentRequest, ndbcSOSRequest\n\n\ndivid = str(uuid.uuid4())\npb = HTML(\"\"\"\n<div style=\"border: 1px solid black; width:500px\">\n <div id=\"%s\" style=\"background-color:blue; width:0%%\"> </div>\n</div>\n\"\"\" % divid)\n\ndisplay(pb)\n\ncount = 0\nfor station_index in st_list.keys():\n st = station_index.split(\":\")[-1]\n tides_dt_start = start.strftime('%Y%m%d %H:%M')\n tides_dt_end = stop.strftime('%Y%m%d %H:%M')\n\n if st_list[station_index]['source'] == \"coops\":\n df = coopsCurrentRequest(st, tides_dt_start, tides_dt_end)\n elif st_list[station_index]['source'] == \"ndbc\":\n df = ndbcSOSRequest(station_index, date_range_string)\n\n if (df is not None) and (len(df) > 0):\n st_list[station_index]['hasObsData'] = True\n else:\n st_list[station_index]['hasObsData'] = False\n st_list[station_index]['obsData'] = df\n\n print('{}'.format(station_index))\n\n count += 1\n percent_compelte = (float(count)/float(len(st_list.keys()))) * 100\n display(Javascript(\"$('div#%s').width('%i%%')\" %\n (divid, int(percent_compelte))))",
"We found two CO-OPS stations and downloaded the data for both.\nNow let's get the data for the HF-Radar we found.\nThe function get_hr_radar_dap_data filters the OPeNDAP endpoints\nreturning the data only for HF-Radar that are near the observations.\n(We are downloading only 6 km HF-Radar from GNOME.\nThe function can be adapted to get any other resolution and source.)",
"from utilities.ioos import get_hr_radar_dap_data\n\ndf_list = get_hr_radar_dap_data(dap_urls, st_list, start, stop)",
"The model endpoints found in the CSW search are all global.\nGlobal model runs are not very useful when looking at a scale as small as the San Francisco Bay.\nSince we do know that the SFO ports operational model exists,\nwe can hard-code a data search for SFO and data points near the observations.",
"from shapely.geometry import Point\n\n\ndef findSFOIndexs(lats, lons, lat_lon_list):\n index_list, dist_list = [], []\n for val in lat_lon_list:\n point1 = Point(val[1], val[0])\n dist = 999999999\n index = -1\n for k in range(0, len(lats)):\n point2 = Point(lons[k], lats[k])\n val = point1.distance(point2)\n if val < dist:\n index = k\n dist = val\n index_list.append(index)\n dist_list.append(dist)\n print(index_list, dist_list)\n return index_list, dist_list\n\n\ndef buildSFOUrls(start, stop):\n \"\"\"\n Multiple files for time step, we're only looking at Nowcast (past) values\n times are 3z, 9z, 15z, 21z\n\n \"\"\"\n url_list = []\n time_list = ['03z', '09z', '15z', '21z']\n delta = stop - start\n for i in range((delta.days)+1):\n model_file_date = start + timedelta(days=i)\n base_url = ('http://opendap.co-ops.nos.noaa.gov/'\n 'thredds/dodsC/NOAA/SFBOFS/MODELS/')\n val_month, val_year, val_day = '', '', ''\n # Month.\n if model_file_date.month < 10:\n val_month = \"0\" + str(model_file_date.month)\n else:\n val_month = str(model_file_date.month)\n # Year.\n val_year = str(model_file_date.year)\n # Day.\n if model_file_date.day < 10:\n val_day = \"0\" + str(model_file_date.day)\n else:\n val_day = str(model_file_date.day)\n file_name = '/nos.sfbofs.stations.nowcast.'\n file_name += val_year + val_month + val_day\n for t in time_list:\n t_val = '.t' + t + '.nc'\n url_list.append(base_url + val_year + val_month +\n file_name + t_val)\n return url_list\n\nfrom utilities.ioos import extractSFOModelData\n\nst_known = st_list.keys()\n\nname_list, lat_lon_list = [], []\nfor st in st_list:\n if st in st_known:\n lat = st_list[st]['lat']\n lon = st_list[st]['lon']\n name_list.append(st)\n lat_lon_list.append([lat, lon])\n\nmodel_data = extractSFOModelData(lat_lon_list, name_list, start, stop)",
"Let's visualize everything we got so far.",
"import matplotlib.pyplot as plt\n\n\nfor station_index in st_list.keys():\n df = st_list[station_index]['obsData']\n if st_list[station_index]['hasObsData']:\n fig = plt.figure(figsize=(16, 3))\n plt.plot(df.index, df['sea_water_speed (cm/s)'])\n fig.suptitle('Station:'+station_index, fontsize=14)\n plt.xlabel('Date', fontsize=14)\n plt.ylabel('sea_water_speed (cm/s)', fontsize=14)\n\n if station_index in model_data:\n df_model = model_data[station_index]['data']\n plt.plot(df_model.index, df_model['sea_water_speed (cm/s)'])\n for ent in df_list:\n if ent['ws_pts'] > 4:\n if station_index == ent['name']:\n df = ent['data']\n plt.plot(df.index, df['sea_water_speed (cm/s)'])\n ent['valid'] = True\n l = plt.legend(('Station Obs', 'Model', 'HF Radar'), loc='upper left')",
"Any comparison will only make sense if we put that in a geographic context.\nSo let's create a map and place the observations, model grid point, and HF-Radar\ngrid point on it.",
"import folium\nfrom utilities import get_coordinates\n\nstation = st_list[list(st_list.keys())[0]]\nmapa = folium.Map(location=[station[\"lat\"], station[\"lon\"]], zoom_start=10)\nmapa.line(get_coordinates(bounding_box),\n line_color='#FF0000',\n line_weight=5)\n\nfor st in st_list:\n lat = st_list[st]['lat']\n lon = st_list[st]['lon']\n\n popupString = ('<b>Obs Location:</b><br>' + st + '<br><b>Source:</b><br>' +\n st_list[st]['source'])\n\n if 'hasObsData' in st_list[st] and st_list[st]['hasObsData'] == False:\n mapa.circle_marker([lat, lon], popup=popupString, radius=1000,\n line_color='#FF0000', fill_color='#FF0000',\n fill_opacity=0.2)\n\n elif st_list[st]['source'] == \"coops\":\n mapa.simple_marker([lat, lon], popup=popupString,\n marker_color=\"darkblue\", marker_icon=\"star\")\n elif st_list[st]['source'] == \"ndbc\":\n mapa.simple_marker([lat, lon], popup=popupString,\n marker_color=\"darkblue\", marker_icon=\"star\")\n\ntry:\n for ent in df_list:\n lat = ent['lat']\n lon = ent['lon']\n popupstring = (\"HF Radar: [\" + str(lat) + \":\"+str(lon) + \"]\" +\n \"<br>for<br>\" + ent['name'])\n mapa.circle_marker([lat, lon], popup=popupstring, radius=500,\n line_color='#FF0000', fill_color='#FF0000',\n fill_opacity=0.5)\nexcept:\n pass\n\ntry:\n for st in model_data:\n lat = model_data[st]['lat']\n lon = model_data[st]['lon']\n popupstring = (\"HF Radar: [\" + str(lat) + \":\" + str(lon) + \"]\" +\n \"<br>for<br>\" + ent['name'])\n mapa.circle_marker([lat, lon], popup=popupstring, radius=500,\n line_color='#66FF33', fill_color='#66FF33',\n fill_opacity=0.5)\nexcept:\n pass",
"Bonus: we can add WMS tile layer for today's HF-Radar data on the map.\nTo enable the layer we must click on the top right menu of the interactive map.",
"now = datetime.utcnow()\nmapa.add_tile_layer(tile_name='hfradar 2km',\n tile_url=('http://hfradar.ndbc.noaa.gov/tilesavg.php?'\n 's=10&e=100&x={x}&y={y}&z={z}&t=' +\n str(now.year) + '-' + str(now.month) +\n '-' + str(now.day) + ' ' +\n str(now.hour-2) + ':00:00&rez=2'),\n attr='NDBC/NOAA')\n\nmapa.add_layers_to_map()\n\nmapa",
"Map legend:\n\nBlue: observed station\nGreen: model grid point\nRed: HF Radar grid point\n\nIn the beginning we raised the question:\nWhat is the extent of relevant meteorological and oceanographic data available in near real-time that could be used to support rapid deployment of a common operational picture to support response?\n\nIn order to answer that we tested how to get currents data for an area of interest, like the San Francisco Bay.\nThe main downside was the fact that we had to augment the catalog search by hard-coding a known model for the area, the SFBOFS Model.\nStill, the catalog was successful in finding near real-time observations from surrounding buoys/moorings, and HF-radar.\nYou can see the original IOOS System Test notebook here.",
"HTML(html)"
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
darioizzo/d-CGP
|
doc/sphinx/notebooks/dCGPANNs_for_classification.ipynb
|
gpl-3.0
|
[
"Training a FFNN in dCGPANN vs. Keras (classification)\nA Feed Forward Neural Network is a widely used ANN model for regression and classification. Here we show how to encode it into a dCGPANN and train it with stochastic gradient descent on a classification task. To check the correctness of the result we perform the same training using the widely used Keras Deep Learning toolbox.",
"# Initial import\nimport dcgpy\nimport matplotlib.pyplot as plt\nimport numpy as np\nfrom tqdm import tqdm\nfrom sklearn.utils import shuffle\nimport timeit\n\n%matplotlib inline",
"Data set",
"# We import the data for a classification task.\nfrom numpy import genfromtxt\n# https://archive.ics.uci.edu/ml/datasets/Abalone\nmy_data = genfromtxt('abalone_data_set.csv', delimiter=',')\npoints = my_data[:,:-1]\nlabels_tmp = my_data[:,-1]\n\n# We transform the categorical variables to one hot encoding\n# The problem is treated as a three class problem\nlabels = np.zeros((len(labels_tmp), 3))\nfor i,l in enumerate(labels_tmp):\n if l < 9:\n labels[i][0] = 1\n elif l > 10:\n labels[i][2] = 1\n else:\n labels[i][1] = 1\n\n# And split the data into training and test\nX_train = points[:3000]\nY_train = labels[:3000]\nX_test = points[3000:]\nY_test = labels[3000:]\n\n\n# Stable implementation of the softmax function\ndef softmax(x):\n \"\"\"Compute softmax values for each set of scores in x.\"\"\"\n e_x = np.exp(x - np.max(x))\n return e_x / e_x.sum()\n\n# We define the accuracy metric\ndef accuracy(ex, points, labels):\n acc = 0.\n for p,l in zip(points, labels):\n ps = softmax(ex(p))\n if np.argmax(ps) == np.argmax(l):\n acc += 1.\n return acc / len(points)",
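A quick check of why the `np.max` subtraction in the softmax above matters: for large scores a naive `exp` overflows, while the shifted version stays finite and gives the same probabilities, since softmax is invariant to a constant shift of its input:

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax for a 1-D score vector."""
    e_x = np.exp(x - np.max(x))
    return e_x / e_x.sum()

scores = np.array([1000.0, 1001.0, 1002.0])  # naive np.exp(1000.) would overflow to inf
p = softmax(scores)
print(p, p.sum())  # finite probabilities summing to 1
```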
"Encoding and training a FFNN using dCGP\nThere are many ways the same FFNN could be encoded into a CGP chromosome. The utility encode_ffnn selects one for you returning the expression.",
"# We encode a FFNN into a dCGP expression. Note that the last layer is made by a sum activation function\n# so that categorical cross entropy can be used and produce a softmax activation last layer. \n# In a dCGP the concept of layers is absent and neurons are defined by activation functions R->R.\ndcgpann = dcgpy.encode_ffnn(8,3,[50,20],[\"sig\", \"sig\", \"sum\"], 5)\n\n# By default all weights (and biases) are set to 1 (and 0). We initialize the weights normally distributed\ndcgpann.randomise_weights(mean = 0., std = 1.)\ndcgpann.randomise_biases(mean = 0., std = 1.)\n\n\nprint(\"Starting error:\", dcgpann.loss(X_test,Y_test, \"CE\"))\nprint(\"Net complexity (number of active weights):\", dcgpann.n_active_weights())\nprint(\"Net complexity (number of unique active weights):\", dcgpann.n_active_weights(unique=True))\nprint(\"Net complexity (number of active nodes):\", len(dcgpann.get_active_nodes()))\n\n#dcgpann.visualize(show_nonlinearities=True, legend=True)\n\nres = []\n\n# We train\nn_epochs = 100\nprint(\"Start error (training set):\", dcgpann.loss(X_train,Y_train, \"CE\"), flush=True)\nprint(\"Start error (test):\", dcgpann.loss(X_test,Y_test, \"CE\"), flush=True)\n\nstart_time = timeit.default_timer()\nfor i in tqdm(range(n_epochs)):\n res.append(dcgpann.sgd(X_train, Y_train, 1., 32, \"CE\", parallel = 4))\nelapsed = timeit.default_timer() - start_time\n\nprint(\"End error (training set):\", dcgpann.loss(X_train,Y_train, \"CE\"), flush=True)\nprint(\"End error (test):\", dcgpann.loss(X_test,Y_test, \"CE\"), flush=True)\nprint(\"Time:\", elapsed, flush=True)\n\n\n\n\n\nplt.plot(res)\nprint(\"Accuracy (test): \", accuracy(dcgpann, X_test, Y_test))",
"Same training is done using Keras (Tensor Flow backend)\nIMPORTANT: no GPU is used for the comparison. The values are thus only to be taken as indications of the performances on a simple environment with 4 CPUs.",
"import keras\nfrom keras.models import Sequential\nfrom keras.layers import Dense, Activation\nfrom keras import optimizers\n\n# We define Stochastic Gradient Descent as an optimizer\nsgd = optimizers.SGD(lr=1.)\n# We define weight initialization\ninitializerw = keras.initializers.RandomNormal(mean=0.0, stddev=1, seed=None)\ninitializerb = keras.initializers.RandomNormal(mean=0.0, stddev=1, seed=None)\n\nmodel = Sequential([\n Dense(50, input_dim=8, kernel_initializer=initializerw, bias_initializer=initializerb),\n Activation('sigmoid'),\n Dense(20, kernel_initializer=initializerw, bias_initializer=initializerb),\n Activation('sigmoid'),\n Dense(3, kernel_initializer=initializerw, bias_initializer=initializerb),\n Activation('softmax'),\n])\nmodel.compile(optimizer=sgd,\n loss='categorical_crossentropy', metrics=['acc'])\n\n\nstart_time = timeit.default_timer()\nhistory = model.fit(X_train, Y_train, epochs=100, batch_size=32, verbose=False)\nelapsed = timeit.default_timer() - start_time\nprint(\"End error (training set):\", model.evaluate(X_train,Y_train, verbose=False))\nprint(\"End error (test):\", model.evaluate(X_test,Y_test, verbose=False))\nprint(\"Time:\", elapsed)\n\n# We plot for comparison the loss during learning in the two cases\nplt.plot(history.history['loss'], label='Keras')\nplt.plot(res, label='dCGP')\nplt.title('dCGP vs Keras')\nplt.xlabel('epochs')\nplt.legend()\n_ = plt.ylabel('Cross Entropy Loss')"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
jeanpat/DeepFISH
|
notebooks/Loading_13434_LowRes_Dataset.ipynb
|
gpl-3.0
|
[
"import h5py\nimport numpy as np\nimport skimage as sk\n#print sk.__version__\nfrom skimage import io\nfrom matplotlib import pyplot as plt\n\nfrom skimage import filters\nfrom skimage import feature\nfrom skimage import io\nfrom scipy import ndimage as nd\nfrom scipy import misc\n\nfrom subprocess import check_output\nprint(check_output([\"ls\", \"../dataset\"]).decode(\"utf8\"))",
"Loading the dataset stored in hdf5 format",
"h5f = h5py.File('../dataset/LowRes_13434_overlapping_pairs.h5','r')\npairs = h5f['dataset_1'][:]\nh5f.close()\n\nprint(pairs.shape)\nprint(pairs[0,:,:,0].dtype)\nprint(pairs[0,:,:,0].max())",
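The same write/read round trip can be seen on a tiny synthetic file with the same (pairs, rows, cols, channels) layout (a sketch only; `tiny_pairs.h5` is a throwaway file name, not part of the dataset):

```python
import numpy as np
import h5py

# Create a tiny dataset with the same layout as LowRes_13434_overlapping_pairs.h5
data = np.zeros((4, 8, 8, 2), dtype=np.uint8)
with h5py.File('tiny_pairs.h5', 'w') as h5f:
    h5f.create_dataset('dataset_1', data=data)

# Read it back exactly as in the cell above
with h5py.File('tiny_pairs.h5', 'r') as h5f:
    pairs = h5f['dataset_1'][:]
print(pairs.shape)  # (4, 8, 8, 2)
```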
"Looking at some examples",
"grey = pairs[0,:,:,0]\nmask = pairs[0,:,:,1]\n%matplotlib inline\nplt.figure(figsize = (10,8))\nplt.subplot(121)\nplt.imshow(grey)\nplt.title('max='+str(grey.max()))\nplt.subplot(122)\nplt.imshow(mask, interpolation = 'nearest')",
"Problem in the groundtruth label at low resolution:\nThe labels are a little noisy at the edges of the groundtruth label.",
"plt.figure(figsize = (10,12))\nplt.subplot(141)\nplt.imshow(mask, interpolation = 'nearest')\nplt.subplot(142)\nplt.title('label 1')\nplt.imshow(mask == 1, interpolation = 'nearest')\nplt.subplot(143)\nplt.title('label 2')\nplt.imshow(mask == 2, interpolation = 'nearest')\nplt.subplot(144)\nplt.title('label 3')\nplt.imshow(mask == 3, interpolation = 'nearest')"
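One possible way to suppress such stray edge pixels is a morphological opening on each label (a hedged sketch using `scipy.ndimage`, not part of the original pipeline; the synthetic mask below is only for illustration):

```python
import numpy as np
from scipy import ndimage as nd

def clean_label(mask, label, iterations=1):
    """Binary-open one label of a label image to remove isolated noisy pixels."""
    binary = (mask == label)
    return nd.binary_opening(binary, iterations=iterations)

# Synthetic example: a 4x4 block of label 1 plus one isolated noise pixel.
mask = np.zeros((10, 10), dtype=np.uint8)
mask[2:6, 2:6] = 1
mask[8, 8] = 1  # isolated noise pixel
cleaned = clean_label(mask, 1)
print(cleaned[8, 8])  # the stray pixel is removed; the block core survives
```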
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
turi-code/tutorials
|
webinars/pattern-mining/deployment.ipynb
|
apache-2.0
|
[
"Introduction to ML Deployment\nDeploying models created using python in a Turi Predictive Service is very easy. This notebook walks you through the step-by-step process. \n<img src='images/predictive_services_overview.png'></img>\n\nDeployment Steps\nThe notebook has three sections: \n\n<a href='#cpo'>Create a model</a>\n<a href='#create'>Create a predictive service</a>\n<a href='#query'>Query the model</a>\n\nIf you are deploying a model in an existing Predictive Service instance you can go to step (2) directly.\n1. Create a model <a id='cpo'></a>\nLet's train a simple pattern mining model\n<img src=\"images/left.png\"></img>",
"# In order to run this code, you need an already trained model (see the accompanying notebook)\nimport graphlab as gl\nmodel = gl.load_model('pattern_mining_model.gl')\nmodel",
"We can expose the trained model as a REST endpoint. This will allow other applications to consume the predictions from the model. \nIn order to do that, we wrap the model object in a Python function and add it to the Predictive Service. In the function you may add your own logic for transform input to the model, ensemble different models or manipulate output before returning. Checkout out user guide for more details.\nThe result of the function needs to be a JSON serializable object.",
"def predict(x):\n # Construct an SFrame\n sf = gl.SFrame(x)\n\n # Add your own business logic here \n \n # Call the predict method on the model.\n predictions = model.predict(sf)\n return predictions['prediction']",
"2. Create a Predictive Service (One time) <a id='create'></a>\nThis section shows you how to deploy a Predictive Service to EC2. The EC2 instances used by the Predictive Service will be launched in your own AWS account, so you will be responsible for the cost. \n<img src=\"images/middle.png\"></img>\nTo create a Predictive Service in Amazon AWS, we first configure the EC2 Config object, which contains the configuration parameters required for launching a Predictive Service cluster in EC2. These fields are optional and include the region, instance type, CIDR rules etc. Predictive Service uses this configuration for service creation.\nHaving configured our EC2 Config object, we're ready to launch a Predictive Service Deployment. There are a few aspects of the Predictive Service that can be customized:\n* Number of nodes in the service - By default the number of hosts (num_hosts) is 1. To obtain good cache utility and high availability, we recommend setting num_hosts to at least 3.\n* State path to persist service state and service logs. This is an S3 location. \n* Port to be used by the server.\n* Other settings, such as SSL credentials etc.\nThe following code snippet shows you how to create a Predictive Service. You will have to replace the ps_state_path and credentials for your Predictive Service.",
"import graphlab as gl\n\n# Replace with your path.\nps_state_path = 's3://<your-bucket-name>/predictive_service/ps'\n\n# Set your AWS credentials.\ngl.aws.set_credentials(<key>, <secret>)\n\n# Create an EC2 config\nec2_config = gl.deploy.Ec2Config()\n\n# Launch a predictive service\nps = gl.deploy.predictive_service.create(name = 'sklearn-predictive-service', \n ec2_config = ec2_config, state_path = ps_state_path, num_hosts = 1)",
"Load an already created service",
"import graphlab as gl\nps = gl.deploy.predictive_service.load('s3://gl-demo-usw2/predictive_service/demolab/ps-1.6')\n\nps\n\n# ps.add('pattern-mining', predict) (When you add this for the first time)\nps.update('pattern-mining', predict)\n\nps.apply_changes()",
"Query the model <a id='query'></a>\nYou may do a test query before really deploying it to production. This will help detect errors in the function before deploying it to the Predictive Service. \n<img src=\"images/right.png\"></img>",
"# test query to make sure the model works fine\nps.query('pattern-mining', x={'Receipt': [1], 'StoreNum': [2], 'Item': ['CherryTart']})",
"Query from external applications via REST\nNow other applications can interact with our model! In the next section we will illustrate how to consume the model. We can also use other APIs like ps.update() to update a model, ps.remove() to remove a model.\nThe model query is exposed through REST API. The path is:\nhttp(s)://<your-ps-endpoint>/data/<model-name>\n\nAnd the payload is a JSON serialized string in the following format:\n{\"api_key\": <api key>,\n \"data\": <data-passed-to-custom-query>}\n\nHere the 'api key' may be obtained through ps.api_key, and data is the actual data passed to the custom predictive object in the Predictive Service. It will be passed to the query using the **kwargs format.\nHere is a sample curl command to query your model:\ncurl -X POST -d '{\"api_key\":\"b437e588-0f2b-45e1-81c8-ce3acfa81ade\", \"data\":{\"x\":{\"Receipt\": [1], \"StoreNum\": [2], \"Item\": [\"CherryTart\"]}}}' http://demolab-one-six-2015364754.us-west-2.elb.amazonaws.com/query/pattern-mining\n\nYou can also query through Python using the requests module.\nQuery through Python",
"import json\nimport requests\n\ndef restful_query(x):\n headers = {'content-type': 'application/json'}\n payload = {'api_key':'b437e588-0f2b-45e1-81c8-ce3acfa81ade', \"data\":{\"x\": x}}\n end_point = 'http://demolab-one-six-2015364754.us-west-2.elb.amazonaws.com/query/pattern-mining'\n return requests.post(end_point, json.dumps(payload), headers=headers).json()\n\nrestful_query({'Receipt': [1], 'StoreNum': [2], 'Item': ['CherryTart']})"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
tensorflow/docs-l10n
|
site/en-snapshot/probability/examples/JointDistributionAutoBatched_A_Gentle_Tutorial.ipynb
|
apache-2.0
|
[
"Auto-Batched Joint Distributions: A Gentle Tutorial\nCopyright 2020 The TensorFlow Authors.\nLicensed under the Apache License, Version 2.0 (the \"License\");",
"#@title Licensed under the Apache License, Version 2.0 (the \"License\"); { display-mode: \"form\" }\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.",
"<table class=\"tfo-notebook-buttons\" align=\"left\">\n <td>\n <a target=\"_blank\" href=\"https://www.tensorflow.org/probability/examples/Modeling_with_JointDistribution\"><img src=\"https://www.tensorflow.org/images/tf_logo_32px.png\" />View on TensorFlow.org</a>\n </td>\n <td>\n <a target=\"_blank\" href=\"https://colab.research.google.com/github/tensorflow/probability/blob/main/tensorflow_probability/examples/jupyter_notebooks/Modeling_with_JointDistribution.ipynb\"><img src=\"https://www.tensorflow.org/images/colab_logo_32px.png\" />Run in Google Colab</a>\n </td>\n <td>\n <a target=\"_blank\" href=\"https://github.com/tensorflow/probability/blob/main/tensorflow_probability/examples/jupyter_notebooks/Modeling_with_JointDistribution.ipynb\"><img src=\"https://www.tensorflow.org/images/GitHub-Mark-32px.png\" />View source on GitHub</a>\n </td>\n <td>\n <a href=\"https://storage.googleapis.com/tensorflow_docs/probability/tensorflow_probability/examples/jupyter_notebooks/Modeling_with_JointDistribution.ipynb\"><img src=\"https://www.tensorflow.org/images/download_logo_32px.png\" />Download notebook</a>\n </td>\n</table>\n\nIntroduction\nTensorFlow Probability (TFP) offers a number of JointDistribution abstractions that make probabilistic inference easier by allowing a user to easily express a probabilistic graphical model in a near-mathematical form; the abstraction generates methods for sampling from the model and evaluating the log probability of samples from the model. In this tutorial, we review \"autobatched\" variants, which were developed after the original JointDistribution abstractions. Relative to the original, non-autobatched abstractions, the autobatched versions are simpler to use and more ergonomic, allowing many models to be expressed with less boilerplate. 
In this colab, we explore a simple model in (perhaps tedious) detail, making clear the problems autobatching solves, and (hopefully) teaching the reader more about TFP shape concepts along the way.\nPrior to the introduction of autobatching, there were a few different variants of JointDistribution, corresponding to different syntactic styles for expressing probabilistic models: JointDistributionSequential, JointDistributionNamed, and JointDistributionCoroutine. Autobatching exists as a mixin, so we now have AutoBatched variants of all of these. In this tutorial, we explore the differences between JointDistributionSequential and JointDistributionSequentialAutoBatched; however, everything we do here is applicable to the other variants with essentially no changes.\nDependencies & Prerequisites",
"#@title Import and set ups{ display-mode: \"form\" }\n\nimport functools\nimport numpy as np\n\nimport tensorflow.compat.v2 as tf\ntf.enable_v2_behavior()\n\nimport tensorflow_probability as tfp\n\ntfd = tfp.distributions",
"Prerequisite: A Bayesian Regression Problem\nWe'll consider a very simple Bayesian regression scenario:\n$$\n\\begin{align}\nm & \\sim \\text{Normal}(0, 1) \\\nb & \\sim \\text{Normal}(0, 1) \\\nY & \\sim \\text{Normal}(mX + b, 1)\n\\end{align}\n$$\nIn this model, m and b are drawn from standard normals, and the observations Y are drawn from a normal distribution whose mean depends on the random variables m and b, and some (nonrandom, known) covariates X. (For simplicity, in this example, we assume the scale of all random variables is known.)\nTo perform inference in this model, we'd need to know both the covariates X and the observations Y, but for the purposes of this tutorial, we'll only need X, so we define a simple dummy X:",
"X = np.arange(7)\nX",
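To make the generative process above concrete, here is a plain-NumPy sketch of one draw from the model (the seed and the use of NumPy's `default_rng` are my own choices for illustration; TFP is not needed here):

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.arange(7)

# One draw from the generative model: m, b ~ Normal(0, 1),
# then Y ~ Normal(m * X + b, 1), elementwise over the 7 covariates.
m = rng.standard_normal()
b = rng.standard_normal()
Y = rng.normal(loc=m * X + b, scale=1.0)

print(np.shape(m), np.shape(b), Y.shape)  # () () (7,)
```

Note that a single draw already consists of two scalars and one length-7 vector, which is exactly the `[(), (), (7,)]` structure the desiderata below ask for.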
"Desiderata\nIn probabilistic inference, we often want to perform two basic operations:\n- sample: Drawing samples from the model.\n- log_prob: Computing the log probability of a sample from the model.\nThe key contribution of TFP's JointDistribution abstractions (as well as of many other approaches to probabilistic programming) is to allow users to write a model once and have access to both sample and log_prob computations.\nNoting that we have 7 points in our data set (X.shape = (7,)), we can now state the desiderata for an excellent JointDistribution:\n\nsample() should produce a list of Tensors having shape [(), (), (7,)], corresponding to the scalar slope, scalar bias, and vector observations, respectively.\nlog_prob(sample()) should produce a scalar: the log probability of a particular slope, bias, and observations.\nsample([5, 3]) should produce a list of Tensors having shape [(5, 3), (5, 3), (5, 3, 7)], representing a (5, 3)-batch of samples from the model.\nlog_prob(sample([5, 3])) should produce a Tensor with shape (5, 3).\n\nWe'll now look at a succession of JointDistribution models, see how to achieve the above desiderata, and hopefully learn a little more about TFP shapes along the way. \nSpoiler alert: The approach that satisfies the above desiderata without added boilerplate is autobatching. \nFirst Attempt; JointDistributionSequential",
"jds = tfd.JointDistributionSequential([\n tfd.Normal(loc=0., scale=1.), # m\n tfd.Normal(loc=0., scale=1.), # b\n lambda b, m: tfd.Normal(loc=m*X + b, scale=1.) # Y\n])",
"This is more or less a direct translation of the model into code. The slope m and bias b are straightforward. Y is defined using a lambda-function: the general pattern is that a lambda-function of $k$ arguments in a JointDistributionSequential (JDS) uses the previous $k$ distributions in the model. Note the \"reverse\" order.\nWe'll call sample_distributions, which returns both a sample and the underlying \"sub-distributions\" that were used to generate the sample. (We could have produced just the sample by calling sample; later in the tutorial it will be convenient to have the distributions as well.) The sample we produce is fine:",
"dists, sample = jds.sample_distributions()\nsample",
"But log_prob produces a result with an undesired shape:",
"jds.log_prob(sample)",
"And multiple sampling doesn't work:",
"try:\n jds.sample([5, 3])\nexcept tf.errors.InvalidArgumentError as e:\n print(e)",
"Let's try to understand what's going wrong.\nA Brief Review: Batch and Event Shape\nIn TFP, an ordinary (not a JointDistribution) probability distribution has an event shape and a batch shape, and understanding the difference is crucial to effective use of TFP:\n\nEvent shape describes the shape of a single draw from the distribution; the draw may be dependent across dimensions. For scalar distributions, the event shape is []. For a 5-dimensional MultivariateNormal, the event shape is [5].\nBatch shape describes independent, not identically distributed draws, aka a \"batch\" of distributions. Representing a batch of distributions in a single Python object is one of the key ways TFP achieves efficiency at scale.\n\nFor our purposes, a critical fact to keep in mind is that if we call log_prob on a single sample from a distribution, the result will always have a shape that matches (i.e., has as rightmost dimensions) the batch shape.\nFor a more in-depth discussion of shapes, see the \"Understanding TensorFlow Distributions Shapes\" tutorial.\nWhy Doesn't log_prob(sample()) Produce a Scalar?\nLet's use our knowledge of batch and event shape to explore what's happening with log_prob(sample()). Here's our sample again:",
"sample",
"And here are our distributions:",
"dists",
"The log probability is computed by summing the log probabilities of the sub-distributions at the (matched) elements of the parts:",
"log_prob_parts = [dist.log_prob(s) for (dist, s) in zip(dists, sample)]\nlog_prob_parts\n\nnp.sum(log_prob_parts) - jds.log_prob(sample)",
"So, one level of explanation is that the log probability calculation is returning a 7-Tensor because the third subcomponent of log_prob_parts is a 7-Tensor. But why?\nWell, we see that the last element of dists, which corresponds to our distribution over Y in the mathematical formulation, has a batch_shape of [7]. In other words, our distribution over Y is a batch of 7 independent normals (with different means and, in this case, the same scale).\nWe now understand what's wrong: in JDS, the distribution over Y has batch_shape=[7], so a sample from the JDS represents scalars for m and b and a \"batch\" of 7 independent normals, and log_prob computes 7 separate log-probabilities, each of which represents the log probability of drawing m and b and a single observation Y[i] at some X[i].\nFixing log_prob(sample()) with Independent\nRecall that dists[2] has event_shape=[] and batch_shape=[7]:",
"dists[2]",
"By using TFP's Independent metadistribution, which converts batch dimensions to event dimensions, we can convert this into a distribution with event_shape=[7] and batch_shape=[] (we'll rename it y_dist_i because it's a distribution on Y, with the _i standing in for our Independent wrapping):",
"y_dist_i = tfd.Independent(dists[2], reinterpreted_batch_ndims=1)\ny_dist_i",
"Now, the log_prob of a 7-vector is a scalar:",
"y_dist_i.log_prob(sample[2])",
"Under the covers, Independent sums over the batch:",
"y_dist_i.log_prob(sample[2]) - tf.reduce_sum(dists[2].log_prob(sample[2]))",
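The same identity can be checked without TFP: for independent unit-scale normals, the joint log-density is the sum of the per-element log-densities. A small NumPy sketch (the `normal_logpdf` helper and the example values are my own, not part of the TFP API):

```python
import numpy as np

def normal_logpdf(x, loc, scale=1.0):
    # log N(x; loc, scale) for a univariate normal
    return -0.5 * ((x - loc) / scale) ** 2 - np.log(scale) - 0.5 * np.log(2 * np.pi)

loc = np.linspace(-1.0, 1.0, 7)  # stand-in for m*X + b
y = loc + 0.1                    # a fake observation vector

per_element = normal_logpdf(y, loc)  # shape (7,): like batch_shape=[7]
joint = per_element.sum()            # scalar: what Independent(..., 1) returns

print(per_element.shape, joint)
```

The per-element vector plays the role of `batch_shape=[7]`, and the summed scalar is what `Independent` with `reinterpreted_batch_ndims=1` produces.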
"And indeed, we can use this to construct a new jds_i (the i again stands for Independent) where log_prob returns a scalar:",
"jds_i = tfd.JointDistributionSequential([\n tfd.Normal(loc=0., scale=1.), # m\n tfd.Normal(loc=0., scale=1.), # b\n lambda b, m: tfd.Independent( # Y\n tfd.Normal(loc=m*X + b, scale=1.),\n reinterpreted_batch_ndims=1)\n])\n\njds_i.log_prob(sample)",
"A couple of notes:\n- jds_i.log_prob(s) is not the same as tf.reduce_sum(jds.log_prob(s)). The former produces the \"correct\" log probability of the joint distribution. The latter sums over a 7-Tensor, each element of which is the sum of the log probability of m, b, and a single element of the log probability of Y, so it overcounts m and b. (log_prob(m) + log_prob(b) + log_prob(Y) returns a result rather than throwing an exception because TFP follows TF and NumPy's broadcasting rules; adding a scalar to a vector produces a vector-sized result.)\n- In this particular case, we could have solved the problem and achieved the same result using MultivariateNormalDiag instead of Independent(Normal(...)). MultivariateNormalDiag is a vector-valued distribution (i.e., it already has vector event shape). Indeed, MultivariateNormalDiag could be (but isn't) implemented as a composition of Independent and Normal. It's worthwhile to remember that, given a vector V, samples from n1 = Normal(loc=V) and n2 = MultivariateNormalDiag(loc=V) are indistinguishable; the difference between these distributions is that n1.log_prob(n1.sample()) is a vector and n2.log_prob(n2.sample()) is a scalar.\nMultiple Samples?\nDrawing multiple samples still doesn't work:",
"try:\n jds_i.sample([5, 3])\nexcept tf.errors.InvalidArgumentError as e:\n print(e)",
"Let's think about why. When we call jds_i.sample([5, 3]), we'll first draw samples for m and b, each with shape (5, 3). Next, we're going to try to construct a Normal distribution via:\ntfd.Normal(loc=m*X + b, scale=1.)\nBut if m has shape (5, 3) and X has shape (7,), we can't multiply them together, and indeed this is the error we're hitting:",
"m = tfd.Normal(0., 1.).sample([5, 3])\ntry:\n m * X\nexcept tf.errors.InvalidArgumentError as e:\n print(e)",
"To resolve this issue, let's think about what properties the distribution over Y has to have. If we've called jds_i.sample([5, 3]), then we know m and b will both have shape (5, 3). What shape should a call to sample on the Y distribution produce? The obvious answer is (5, 3, 7): for each batch point, we want a sample with the same size as X. We can achieve this by using TensorFlow's broadcasting capabilities, adding extra dimensions:",
"m[..., tf.newaxis].shape\n\n(m[..., tf.newaxis] * X).shape",
"Adding an axis to both m and b, we can define a new JDS that supports multiple samples:",
"jds_ia = tfd.JointDistributionSequential([\n tfd.Normal(loc=0., scale=1.), # m\n tfd.Normal(loc=0., scale=1.), # b\n lambda b, m: tfd.Independent( # Y\n tfd.Normal(loc=m[..., tf.newaxis]*X + b[..., tf.newaxis], scale=1.),\n reinterpreted_batch_ndims=1)\n])\n\nshaped_sample = jds_ia.sample([5, 3])\nshaped_sample\n\njds_ia.log_prob(shaped_sample)",
"As an extra check, we'll verify that the log probability for a single batch point matches what we had before:",
"(jds_ia.log_prob(shaped_sample)[3, 1] -\n jds_i.log_prob([shaped_sample[0][3, 1],\n shaped_sample[1][3, 1],\n shaped_sample[2][3, 1, :]]))",
"<a id='AutoBatching-For-The-Win'></a>\nAutoBatching For The Win\nExcellent! We now have a version of JointDistribution that handles all our desiderata: log_prob returns a scalar thanks to the use of tfd.Independent, and multiple samples work now that we fixed broadcasting by adding extra axes.\nWhat if I told you there was an easier, better way? There is, and it's called JointDistributionSequentialAutoBatched (JDSAB):",
"jds_ab = tfd.JointDistributionSequentialAutoBatched([\n tfd.Normal(loc=0., scale=1.), # m\n tfd.Normal(loc=0., scale=1.), # b\n lambda b, m: tfd.Normal(loc=m*X + b, scale=1.) # Y\n])\n\njds_ab.log_prob(jds.sample())\n\nshaped_sample = jds_ab.sample([5, 3])\njds_ab.log_prob(shaped_sample)\n\njds_ab.log_prob(shaped_sample) - jds_ia.log_prob(shaped_sample)",
"How does this work? While you could attempt to read the code for a deep understanding, we'll give a brief overview which is sufficient for most use cases:\n- Recall that our first problem was that our distribution for Y had batch_shape=[7] and event_shape=[], and we used Independent to convert the batch dimension to an event dimension. JDSAB ignores the batch shapes of component distributions; instead it treats batch shape as an overall property of the model, which is assumed to be [] (unless specified otherwise by setting batch_ndims > 0). The effect is equivalent to using tfd.Independent to convert all batch dimensions of component distributions into event dimensions, as we did manually above.\n- Our second problem was a need to massage the shapes of m and b so that they could broadcast appropriately with X when creating multiple samples. With JDSAB, you write a model to generate a single sample, and we \"lift\" the entire model to generate multiple samples using TensorFlow's vectorized_map. (This feature is analogous to JAX's vmap.)\nExploring the batch shape issue in more detail, we can compare the batch shapes of our original \"bad\" joint distribution jds, our batch-fixed distributions jds_i and jds_ia, and our autobatched jds_ab:",
"jds.batch_shape\n\njds_i.batch_shape\n\njds_ia.batch_shape\n\njds_ab.batch_shape",
"We see that the original jds has subdistributions with different batch shapes. jds_i and jds_ia fix this by creating subdistributions with the same (empty) batch shape. jds_ab has only a single (empty) batch shape.\nIt's worth noting that JointDistributionSequentialAutoBatched offers some additional generality for free. Suppose we make the covariates X (and, implicitly, the observations Y) two-dimensional:",
"X = np.arange(14).reshape((2, 7))\nX",
"Our JointDistributionSequentialAutoBatched works with no changes (we need to redefine the model because the shape of X is cached by jds_ab.log_prob):",
"jds_ab = tfd.JointDistributionSequentialAutoBatched([\n tfd.Normal(loc=0., scale=1.), # m\n tfd.Normal(loc=0., scale=1.), # b\n lambda b, m: tfd.Normal(loc=m*X + b, scale=1.) # Y\n])\n\nshaped_sample = jds_ab.sample([5, 3])\nshaped_sample\n\njds_ab.log_prob(shaped_sample)",
"On the other hand, our carefully crafted JointDistributionSequential no longer works:",
"jds_ia = tfd.JointDistributionSequential([\n tfd.Normal(loc=0., scale=1.), # m\n tfd.Normal(loc=0., scale=1.), # b\n lambda b, m: tfd.Independent( # Y\n tfd.Normal(loc=m[..., tf.newaxis]*X + b[..., tf.newaxis], scale=1.),\n reinterpreted_batch_ndims=1)\n])\n\ntry:\n jds_ia.sample([5, 3])\nexcept tf.errors.InvalidArgumentError as e:\n print(e)",
"To fix this, we'd have to add a second tf.newaxis to both m and b to match the shape, and increase reinterpreted_batch_ndims to 2 in the call to Independent. In this case, letting the auto-batching machinery handle the shape issues is shorter, easier, and more ergonomic.\nOnce again, we note that while this notebook explored JointDistributionSequentialAutoBatched, the other variants of JointDistribution have equivalent AutoBatched variants. (For users of JointDistributionCoroutine, JointDistributionCoroutineAutoBatched has the additional benefit that you no longer need to specify Root nodes; if you've never used JointDistributionCoroutine you can safely ignore this statement.)\nConcluding Thoughts\nIn this notebook, we introduced JointDistributionSequentialAutoBatched and worked through a simple example in detail. Hopefully you learned something about TFP shapes and about autobatching!"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
streety/biof509
|
Wk03-Numpy-model-package-survey.ipynb
|
mit
|
[
"Week 8 - Implementing a model in numpy and a survey of machine learning packages for python\nThis week we will be looking in detail at how to implement a supervised regression model using the base scientific computing packages available with python.\nWe will also be looking at the different packages available for python that implement many of the algorithms we might want to use.\nRegression with numpy\nWhy implement algorithms from scratch when dedicated packages already exist? \nThe packages available are very powerful and a real time saver but they can obscure some issues we might encounter if we don't know to look for them. By starting with just numpy these problems will be more obvious. We can address them here and then when we move on we will know what to look for and will be less likely to miss them.\nThe dedicated machine learning packages implement the different algorithms but we are still responsible for getting our data in a suitable format.",
"import matplotlib.pyplot as plt\nimport numpy as np\n\n%matplotlib inline\n\nn = 20\nx = np.random.random((n,1))\ny = 5 + 6 * x ** 2 + np.random.normal(0,0.5, size=(n,1))\n\n\nplt.plot(x, y, 'b.')\nplt.show()",
"This is a very simple dataset. There is only one input value for each record and then there is the output value. Our goal is to determine the output value or dependent variable, shown on the y-axis, from the input or independent variable, shown on the x-axis.\nOur approach should scale to handle multiple input, or independent, variables. The independent variables can be stored in a vector, a 1-dimensional array:\n$$X^T = (X_{1}, X_{2}, X_{3})$$\nAs we have multiple records these can be stacked in a 2-dimensional array. Each record becomes one row in the array. Our x variable is already set up in this way.\nIn linear regression we can compute the value of the dependent variable using the following formula:\n$$f(X) = \\beta_{0} + \\sum_{j=1}^p X_j\\beta_j$$\nThe $\\beta_{0}$ term is the intercept, and represents the value of the dependent variable when the independent variable is zero.\nCalculating a solution is easier if we don't treat the intercept as special. Instead of having an intercept coefficient that is handled separately, we can add a variable to each of our records with a value of one.",
"intercept_x = np.hstack((np.ones((n,1)), x))\nintercept_x",
"Numpy contains the linalg module with many common functions for performing linear algebra. Using this module, finding a solution is quite simple.",
"np.linalg.lstsq(intercept_x,y)",
"The values returned are:\n\nThe least-squares solution\nThe sum of squared residuals\nThe rank of the independent variables\nThe singular values of the independent variables\n\nExercise\n\nCalculate the predictions our model would make\nCalculate the sum of squared residuals from our predictions. Does this match the value returned by lstsq?",
"coeff, residuals, rank, sing_vals = np.linalg.lstsq(intercept_x,y)\n\n# One possible solution to the exercise:\npredictions = np.dot(intercept_x, coeff)\nour_rss = np.sum((y - predictions) ** 2)\nprint(our_rss, residuals)",
"Least squares refers to the cost function for this algorithm. The objective is to minimize the residual sum of squares. The difference between the actual and predicted values is calculated, squared, and then summed over all records. The function is as follows:\n$$RSS(\\beta) = \\sum_{i=1}^{N}(y_i - x_i^T\\beta)^2$$\nMatrix arithmetic\nWithin lstsq all the calculations are performed using matrix arithmetic rather than the more familiar element-wise arithmetic numpy arrays generally perform. Numpy does have a matrix type but matrix arithmetic can also be performed on standard arrays using dedicated methods.\n\nSource: Wikimedia Commons (User:Bilou)\nIn matrix multiplication the resulting value in any position is the sum of multiplying each value in a row in the first matrix by the corresponding value in a column in the second matrix.\nThe residual sum of squares can be calculated with the following formula:\n$$RSS(\\beta) = (y - X\\beta)^T(y-X\\beta)$$\nThe value of our coefficients can be calculated with:\n$$\\hat\\beta = (X^TX)^{-1}X^Ty$$\nUnfortunately, the result is not as visually appealing as in languages that use matrix arithmetic by default.",
"our_coeff = np.dot(np.dot(np.linalg.inv(np.dot(intercept_x.T, intercept_x)), intercept_x.T), y)\n\nprint(coeff, '\\n', our_coeff)\n\nour_predictions = np.dot(intercept_x, our_coeff)\n\nnp.hstack((predictions, \n our_predictions\n ))\n\nplt.plot(x, y, 'ko', label='True values')\nplt.plot(x, our_predictions, 'ro', label='Predictions')\nplt.legend(numpoints=1, loc=4)\nplt.show()",
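As an aside, the chain of np.dot calls above reads more easily with Python's @ matrix-multiplication operator (available since Python 3.5 / NumPy 1.10). A small self-contained check that the two spellings of the normal equation agree (the toy data here is my own; in practice np.linalg.lstsq or np.linalg.solve is numerically preferable to forming an explicit inverse):

```python
import numpy as np

# Tiny reproducible example of the normal equation: beta = (X^T X)^{-1} X^T y
rng = np.random.default_rng(0)
X = np.hstack((np.ones((20, 1)), rng.random((20, 1))))
y = 5 + 6 * X[:, 1:] + rng.normal(0, 0.5, size=(20, 1))

beta_dot = np.dot(np.dot(np.linalg.inv(np.dot(X.T, X)), X.T), y)
beta_at = np.linalg.inv(X.T @ X) @ X.T @ y  # same computation, with @

print(np.allclose(beta_dot, beta_at))  # True
```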
"Exercise\n\nPlot the residuals. The x axis will be the independent variable (x) and the y axis the residual between our prediction and the true value.\nPlot the predictions generated for our model over the entire range of 0-1. One approach is to use the np.linspace method to create equally spaced values over a specified range.\n\nTypes of independent variable\nThe independent variables can be of many different types.\n\nQuantitative inputs\nCategorical inputs coded using dummy values\nInteractions between multiple inputs\nTransformations of other inputs, e.g. logs, raised to different powers, etc.\n\nIt is important to note that a linear model is only linear with respect to its inputs. Those input variables can take any form.\nOne approach we can take to improve the predictions from our model would be to add in the square, cube, etc. of our existing variable.",
"x_expanded = np.hstack([x**i for i in range(1,20)])\n\nb, residuals, rank, s = np.linalg.lstsq(x_expanded, y)\nprint(b)\n\nplt.plot(x, y, 'ko', label='True values')\nplt.plot(x, np.dot(x_expanded, b), 'ro', label='Predictions')\nplt.legend(numpoints=1, loc=4)\nplt.show()",
"There is a tradeoff with model complexity. As we add more complexity to our model we can fit our training data increasingly well but eventually will lose our ability to generalize to new data.\nVery simple models underfit the data and have high bias.\nVery complex models overfit the data and have high variance.\nThe goal is to detect true sources of variation in the data and ignore variation that is just noise.\nHow do we know if we have a good model? A common approach is to break up our data into a training set, a validation set, and a test set. \n\nWe train models with different parameters on the training set.\nWe evaluate each model on the validation set, and choose the best\nWe then measure the performance of our best model on the test set.\n\nWhat would our best model look like? Because we are using dummy data here we can easily make more.",
"n = 20\np = 12\ntraining = []\nval = []\nfor i in range(1, p):\n np.random.seed(0)\n x = np.random.random((n,1))\n y = 5 + 6 * x ** 2 + np.random.normal(0,0.5, size=(n,1))\n x = np.hstack([x**j for j in np.arange(i)])\n our_coeff = np.dot(\n np.dot(\n np.linalg.inv(\n np.dot(\n x.T, x\n )\n ), x.T\n ), y\n )\n our_predictions = np.dot(x, our_coeff)\n our_training_rss = np.sum((y - our_predictions) ** 2)\n training.append(our_training_rss)\n \n val_x = np.random.random((n,1))\n val_y = 5 + 6 * val_x ** 2 + np.random.normal(0,0.5, size=(n,1))\n val_x = np.hstack([val_x**j for j in np.arange(i)])\n our_val_pred = np.dot(val_x, our_coeff)\n our_val_rss = np.sum((val_y - our_val_pred) ** 2)\n val.append(our_val_rss)\n #print(i, our_training_rss, our_val_rss)\n\nplt.plot(range(1, p), training, 'ko-', label='training')\nplt.plot(range(1, p), val, 'ro-', label='validation')\nplt.legend(loc=2)\nplt.show()",
"Gradient descent\nOne limitation of our current implementation is that it is resource intensive. For very large datasets an alternative is needed. Gradient descent is often preferred, and particularly stochastic gradient descent for very large datasets.\nGradient descent is an iterative process, repetitively calculating the error and changing the coefficients slightly to reduce that error. It does this by calculating a gradient and then descending to a minimum in small steps.\nStochastic gradient descent calculates the gradient on a small batch of the data, updates the coefficients, loads the next chunk of the data and repeats the process.\nWe will just look at a basic gradient descent model.",
"np.random.seed(0)\nn = 200\nx = np.random.random((n,1))\ny = 5 + 6 * x ** 2 + np.random.normal(0,0.5, size=(n,1))\nintercept_x = np.hstack((np.ones((n,1)), x))\ncoeff, residuals, rank, sing_vals = np.linalg.lstsq(intercept_x,y)\nprint('lstsq', coeff)\n\n\n\ndef gradient_descent(x, y, rounds = 1000, alpha=0.01):\n theta = np.zeros((x.shape[1], 1))\n costs = []\n for i in range(rounds):\n prediction = np.dot(x, theta)\n error = prediction - y\n gradient = np.dot(x.T, error / y.shape[0])\n theta -= gradient * alpha\n costs.append(np.sum(error ** 2))\n return (theta, costs) \ntheta, costs = gradient_descent(intercept_x, y, rounds=10000)\nprint(theta, costs[::500])\n\nnp.random.seed(0)\nn = 200\n\nx = np.random.random((n,1))\ny = 5 + 6 * x ** 2 + np.random.normal(0,0.5, size=(n,1))\nx = np.hstack([x**j for j in np.arange(20)])\n\ncoeff, residuals, rank, sing_vals = np.linalg.lstsq(x,y)\nprint('lstsq', coeff)\n\ntheta, costs = gradient_descent(x, y, rounds=10000)\nprint(theta, costs[::500])\n\nplt.plot(x[:,1], y, 'ko')\nplt.plot(x[:,1], np.dot(x, coeff), 'co')\nplt.plot(x[:,1], np.dot(x, theta), 'ro')\nplt.show()",
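The text above mentions stochastic gradient descent but only implements full-batch descent. A minimal mini-batch variant on a linear toy problem might look like this (a sketch; the batch size, learning rate, shuffling scheme, and toy data are my own choices, not from the original notebook):

```python
import numpy as np

def sgd(x, y, epochs=200, batch_size=20, alpha=0.05, seed=0):
    rng = np.random.default_rng(seed)
    theta = np.zeros((x.shape[1], 1))
    n = x.shape[0]
    for _ in range(epochs):
        order = rng.permutation(n)          # reshuffle each epoch
        for start in range(0, n, batch_size):
            idx = order[start:start + batch_size]
            xb, yb = x[idx], y[idx]
            error = xb @ theta - yb
            grad = xb.T @ error / len(idx)  # gradient on the mini-batch only
            theta -= alpha * grad
    return theta

rng = np.random.default_rng(0)
x1 = rng.random((200, 1))
y = 5 + 6 * x1 + rng.normal(0, 0.5, size=(200, 1))  # linear target: true coeffs [5, 6]
x = np.hstack((np.ones((200, 1)), x1))

theta = sgd(x, y)
print(theta.ravel())  # roughly [5, 6]
```

Because each update only touches a small batch, the data never needs to fit in memory at once, which is the point made in the text.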
"Machine learning packages available in the python ecosystem\nOverview in the python wiki\nGeneral\n* scikit-learn\n* milk\n* Orange\n* Shogun\n* GraphLab Create (dato)\nThere is a collection of field-specific packages including some with machine learning components on the scipy website. Other packages can often be found searching the python package index.\nDeep learning is receiving a lot of attention recently and a number of different packages have been developed.\n* Theano\n* pylearn2\n* keras\n* Blocks\n* Lasagne\nScikit-learn\nScikit-learn is now widely used. It includes modules for:\n* Classification\n* Regression\n* Clustering\n* Dimensionality reduction\n* Model selection\n* Preprocessing\nThere are modules for training online models, enabling very large datasets to be analyzed.\nThere is also a semi-supervised module for situations when you have a large dataset, but only have labels for part of the dataset.\nMilk\nMilk works very well with mahotas, a package for image processing. With the recent improvements in scikit-image, milk is now less attractive, although still a strong option.\nOrange and Shogun\nThese are both large packages but for whatever reason do not receive the attention that scikit-learn does.\nDato\nDato is a relative newcomer and has been receiving a lot of attention lately. Time will tell whether it can compete with scikit-learn."
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
molpopgen/fwdpy
|
docs/examples/temporalSampling.ipynb
|
gpl-3.0
|
[
"Temporal sampling\nSampling nothing\nLet's evolve 40 populations to mutation-drift equilibrium:",
"import fwdpy as fp\nimport numpy as np\nimport pandas as pd\nnregions=[fp.Region(0,1,1)]\nsregions=[fp.GammaS(0,1,0.1,0.1,0.1,1.0),\n fp.GammaS(0,1,0.9,-0.2,9.0,0.0)\n ]\nrecregions=nregions\nN=1000\nnlist=np.array([N]*(10*N),dtype=np.uint32)\nmutrate_neutral=50.0/float(4*N)\nrecrate=mutrate_neutral\nmutrate_sel=mutrate_neutral*0.2\nrng=fp.GSLrng(101)\npops=fp.SpopVec(40,1000)\nsampler=fp.NothingSampler(len(pops))\n#This function implicitly uses a \"nothing sampler\"\nfp.evolve_regions_sampler(rng,pops,sampler,nlist,\n mutrate_neutral,\n 0.0, #No selected mutations....\n recrate,\n nregions,sregions,recregions,\n #Only sample every 10N generations,\n #which is fine b/c we're not sampling anything\n 10*N)",
"Take samples from population",
"#Take sample of size n=20\nsampler=fp.PopSampler(len(pops),20,rng)\nfp.evolve_regions_sampler(rng,pops,sampler,\n nlist[:N], #Evolve for N generations\n mutrate_neutral,\n mutrate_sel, \n recrate,\n nregions,sregions,recregions,\n #Sample every 100 generations\n 100)",
"The output from this particular sampler type is a generator. Let's look at the first element of the first sample:",
"data=sampler[0]\n\nprint(data[0])",
"These \"genotypes\" blocks can be used to calculate summary statistics. See the example on using pylibseq for that task.",
"print(data[1])",
"Each element in data[0] is a tuple:",
"#The first element are the genotypes\ndata[0][0]\n\n#The first element in the genotypes are the neutral variants.\n#The first value is the position. The second value is a string\n#of genotypes for chromosomes 1 through n. 0 = ancestral/1=derived\ndata[0][0][0]\n\n#Same format for selected variants\ndata[0][0][1]\n\n#This is a dict relating to info re:\n#the selected variants.\n#dcount = derived freq in sample\n#ftime = fixation time. 2^32-1 = has not fixed\n#generation = generation when sampling occurred\n#h = dominance\n#origin = generation when mutation entered population\n#p = population frequency\n#s = effect size/selection coefficient\ndata[0][1]",
"Tracking mutation frequencies\nSee the example on tracking mutation frequencies.\nSee the example on fixation times for the use of this sampler."
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
tum-pbs/PhiFlow
|
demos/Learn_to_Throw_Tutorial.ipynb
|
mit
|
[
"Learn to Throw\n\nIn this notebook, we will train a fully-connected neural network to solve an inverse ballistics problem.\nWe will compare supervised training to differentiable physics training, both numerically and visually.\nΦ-Flow\n Documentation\n API\n Demos",
"# !pip install phiflow\n\n# from phi.tf.flow import *\nfrom phi.torch.flow import *\n# from phi.jax.stax.flow import *",
"To define the physics problem, we write a function to simulate the forward physics. This function takes the initial position, height, speed and angle of a thrown object and computes where the object will land, assuming the object follows a parabolic trajectory. Neglecting friction, we can compute this by solving a quadratic equation.",
"def simulate_hit(pos, height, vel, angle, gravity=1.):\n vel_x, vel_y = math.cos(angle) * vel, math.sin(angle) * vel\n height = math.maximum(height, .5)\n hit_time = (vel_y + math.sqrt(vel_y**2 + 2 * gravity * height)) / gravity\n return pos + vel_x * hit_time, hit_time, height, vel_x, vel_y\n\nsimulate_hit(10, 1, 1, 0)[0]",
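As a standalone sanity check of the closed-form hit time (plain NumPy, independent of Φ-Flow): hit_time is the positive root of height + vel_y*t - gravity/2*t**2 = 0, so substituting it back should return ground level. The example values here are my own:

```python
import numpy as np

def hit_time(height, vel_y, gravity=1.0):
    # Positive root of: height + vel_y * t - gravity/2 * t**2 = 0
    return (vel_y + np.sqrt(vel_y**2 + 2 * gravity * height)) / gravity

h, vy, g = 1.0, 0.7, 1.0
t = hit_time(h, vy, g)
residual = h + vy * t - g / 2 * t**2
print(t, residual)  # residual should be ~0
```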
"Let's plot the trajectory! We define y(x) and sample it on a grid. We have to drop invalid values, such as negative flight times and below-ground points.",
"def sample_trajectory(pos, height, vel, angle, gravity=1.):\n hit, hit_time, height, vel_x, vel_y = simulate_hit(pos, height, vel, angle, gravity)\n def y(x):\n t = (x.vector[0] - pos) / vel_x\n y_ = height + vel_y * t - gravity / 2 * t ** 2\n return math.where((y_ > 0) & (t > 0), y_, NAN)\n return CenteredGrid(y, x=2000, bounds=Box(x=(min(pos.min, hit.min), max(pos.max, hit.max))))\n\nvis.plot(sample_trajectory(tensor(10), 1, 1, math.linspace(-PI/4, 1.5, 7)), title=\"Varying Angle\")",
"Before we train neural networks on this problem, let's perform a classical optimization using gradient descent in the initial velocity vel. We need to define a loss function to optimize. Here we desire the object to hit at x=0.",
"vel = 1\n\ndef loss_function(vel):\n return math.l2_loss(simulate_hit(10, 1, vel, 0)[0] - 0)\n\nloss_function(0)",
"Φ<sub>Flow</sub> uses the selected library (PyTorch/TensorFlow/Jax) to derive analytic derivatives.\nBy default, the gradient function also returns the function value.",
"gradient = math.functional_gradient(loss_function)\ngradient(0)",
"Now we can just subtract the gradient times a learning rate $\\eta = 0.2$ until we converge.",
"trj = [vel]\nfor i in range(10):\n loss, (grad,) = gradient(vel)\n vel = vel - .2 * grad\n trj.append(vel)\n print(f\"vel={vel} - loss={loss}\")\ntrj = math.stack(trj, channel('opt'))\nvis.plot(sample_trajectory(tensor(10), 1, trj, 0))",
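The same optimization can be reproduced in closed form with plain NumPy. With angle 0 and gravity 1 (and height 1, as above), the hit point is 10 + vel*sqrt(2), so the loss 0.5*(10 + vel*sqrt(2))**2 has gradient sqrt(2)*(10 + vel*sqrt(2)), and gradient descent drives vel toward -10/sqrt(2) ≈ -7.07. This sketch assumes the L2 loss halves the squared error; it is an illustration, not the Φ-Flow implementation:

```python
import numpy as np

# Closed form for the setup above: pos=10, height=1, angle=0, gravity=1,
# so hit(vel) = 10 + vel * sqrt(2).
sqrt2 = np.sqrt(2.0)

def loss(vel):
    return 0.5 * (10 + vel * sqrt2) ** 2

def grad(vel):
    return sqrt2 * (10 + vel * sqrt2)

vel = 1.0
for _ in range(10):
    vel -= 0.2 * grad(vel)  # same learning rate as above

print(vel)  # approaches -10/sqrt(2) ≈ -7.07
```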
"Next, we generate a training set and a test set by sampling random values.",
"def generate_data(shape):\n pos = math.random_normal(shape)\n height = math.random_uniform(shape) + .5\n vel = math.random_uniform(shape)\n angle = math.random_uniform(shape) * PI/2\n return math.stack(dict(pos=pos, height=height, vel=vel, angle=angle), channel('vector'))\n\nx_train = generate_data(batch(examples=1000))\nx_test = generate_data(batch(examples=1000))\ny_train = simulate_hit(*x_train.vector)[0]\ny_test = simulate_hit(*x_test.vector)[0]",
"Now, let's create a fully-connected neural network with three hidden layers. We can reset the seed to make the weight initialization predictable.",
"math.seed(0)\nnet_sup = dense_net(1, 4, [32, 64, 32])\nnet_sup\n\n# net_sup.trainable_weights[0] # TensorFlow\nnet_sup.state_dict()['linear0.weight'].flatten() # PyTorch\n# net_sup.parameters[0][0] # Stax",
"For the differentiable physics network we do the same thing again.",
"math.seed(0)\nnet_dp = dense_net(1, 4, [32, 64, 32])\n# net_dp.trainable_weights[0] # TensorFlow\nnet_dp.state_dict()['linear0.weight'].flatten() # PyTorch\n# net_dp.parameters[0][0] # Stax",
"Indeed, the networks were initialized identically! Alternatively, we could have saved and loaded the network state.\nSupervised Training\nNow we can train the network. We feed the desired hit position into the network and predict a possible initial state.\nFor supervised training, we compare the network prediction to the ground truth from our training set.",
"opt_sup = adam(net_sup)\n\ndef supervised_loss(x, y, net=net_sup):\n prediction = math.native_call(net, y)\n return math.l2_loss(prediction - x)\n\nprint(f\"Supervised loss (training set): {supervised_loss(x_train, y_train)}\")\nprint(f\"Supervised loss (test set): {supervised_loss(x_test, y_test)}\")\n\nfor i in range(100):\n update_weights(net_sup, opt_sup, supervised_loss, x_train, y_train)\n\nprint(f\"Supervised loss (training set): {supervised_loss(x_train, y_train)}\")\nprint(f\"Supervised loss (test set): {supervised_loss(x_test, y_test)}\")",
"What? That's almost no progress! Feel free to run more iterations, but there is a deeper problem at work here. Before we get into that, let's train the network again, but with a differentiable physics loss function.\nTraining with Differentiable Physics\nFor the differentiable physics loss, we simulate the trajectory given the initial conditions predicted by the network. Then we can measure how close to the desired location the network got.",
"def physics_loss(y, net=net_dp):\n prediction = math.native_call(net, y)\n y_sim = simulate_hit(*prediction.vector)[0]\n return math.l2_loss(y_sim - y)\n\nopt_dp = adam(net_dp)\n\nfor i in range(100):\n update_weights(net_dp, opt_dp, physics_loss, y_train)\n\nprint(f\"Supervised loss (training set): {supervised_loss(x_train, y_train, net=net_dp)}\")\nprint(f\"Supervised loss (test set): {supervised_loss(x_test, y_test, net=net_dp)}\")",
"This looks even worse! The differentiable physics network seems to stray even further from the ground truth.\nWell, we're not trying to match the ground truth, though. Let's instead measure how close to the desired location the network threw the object.",
"print(f\"Supervised network (test set): {physics_loss(y_test, net=net_sup)}\")\nprint(f\"Diff.Phys. network (test set): {physics_loss(y_test, net=net_dp)}\")",
"Now this is much more promising. The diff.phys. network seems to hit the desired location very accurately considering it was only trained for 100 iterations. With more training steps, this loss will go down even further, unlike the supervised network.\nSo what is going on here? Why does the supervised network perform so poorly?\nThe answer lies in the problem itself. The task is multi-modal, i.e. there are many initial states that will hit the same target.\nThe network only gets the target position and must decide on a single initial state. With supervised training, there is no way to know which ground truth solution occurs in the test set. The best the network can do is average nearby solutions from the training set. But since the problem is non-linear, this will give only a rough guess.\nThe diff.phys. network completely ignores the ground truth solutions, which are not even passed to the physics_loss function. Instead, it learns to hit the desired spot, which is exactly what we want.\nWe can visualize the difference by looking at a couple of trajectories.",
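The averaging failure can be sketched with plain NumPy on a hypothetical ground-launched projectile (not the `simulate_hit` model above): two distinct launch angles reach the same target, so a supervised regressor trained on both ground-truth samples can only average them, and the averaged angle misses.

```python
import numpy as np

# Hypothetical 1-D projectile launched from the ground at fixed speed:
# the angles a and (pi/2 - a) produce the same range, so one target
# position maps to (at least) two valid ground-truth angles.
speed, g = 1.0, 1.0
angle_a = np.pi / 6              # 30 degrees
angle_b = np.pi / 2 - angle_a    # 60 degrees -> same landing point

def hit_distance(angle):
    # Range of a ground-launched projectile: v^2 * sin(2a) / g
    return speed**2 * np.sin(2 * angle) / g

print(hit_distance(angle_a), hit_distance(angle_b))  # both ~0.866

# Averaging the two ground-truth labels, as a supervised L2 loss
# encourages, yields 45 degrees -- which overshoots this target.
averaged = (angle_a + angle_b) / 2
print(hit_distance(averaged))  # 1.0, not 0.866
```

A physics-based loss sidesteps this: any of the valid angles minimizes it, so the network never has to average incompatible labels.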
"predictions = math.stack({\n \"Ground Truth\": x_test.examples[:4],\n \"Supervised\": math.native_call(net_sup, y_test.examples[:4]),\n \"Diff.Phys\": math.native_call(net_dp, y_test.examples[:4]),\n}, channel('curves'))\nvis.plot(sample_trajectory(*predictions.vector), size=(16, 4))",
"We can see that the differentiable physics network matches the hit point much more precisely than the supervised network, as expected from the loss values."
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
mathLab/RBniCS
|
tutorials/13_elliptic_optimal_control/tutorial_elliptic_optimal_control_2_rb.ipynb
|
lgpl-3.0
|
[
"TUTORIAL 13 - Elliptic Optimal Control\nKeywords: optimal control, inf-sup condition, reduced basis\n1. Introduction\nThis tutorial addresses a distributed optimal control problem for the Graetz conduction-convection equation on the domain $\\Omega$ shown below:\n<img src=\"data/mesh2.png\" width=\"60%\"/>\nThe problem is characterized by 3 parameters. The first parameter $\\mu_0$ represents the Péclet number, which describes the heat transfer between the two domains. The second and third parameters, $\\mu_1$ and $\\mu_2$, control the parameter dependent observation function $y_d(\\boldsymbol{\\mu})$ such that:\n$$ y_d(\\boldsymbol{\\mu})=\n\\begin{cases}\n \\mu_1 \\quad \\text{in} \\; \\hat{\\Omega}_1 \\\n \\mu_2 \\quad \\text{in} \\; \\hat{\\Omega}_2\n\\end{cases}\n$$\nThe ranges of the three parameters are the following: $$\\mu_0 \\in [3,20], \\mu_1 \\in [0.5,1.5], \\mu_2 \\in [1.5,2.5]$$\nThe parameter vector $\\boldsymbol{\\mu}$ is thus given by $$\\boldsymbol{\\mu}=(\\mu_0,\\mu_1,\\mu_2)$$ on the parameter domain $$\\mathbb{P}=[3,20] \\times [0.5,1.5] \\times [1.5,2.5].$$\nIn order to obtain a faster approximation of the optimal control problem, we pursue an optimize-then-discretize approach using the reduced basis method.\n2. Parametrized Formulation\nLet $y(\\boldsymbol{\\mu})$, the state function, be the temperature field in the domain $\\Omega$ and $u(\\boldsymbol{\\mu})$, the control function, act as a heat source. The observation domain $\\hat{\\Omega}$ is defined as: $\\hat{\\Omega}=\\hat{\\Omega}_1 \\cup \\hat{\\Omega}_2$.\nConsider the following optimal control problem:\n$$\n\\underset{y,u}{min} \\; J(y,u;\\boldsymbol{\\mu}) = \\frac{1}{2} \\left\\lVert y(\\boldsymbol{\\mu})-y_d(\\boldsymbol{\\mu})\\right\\rVert ^2_{L^2(\\hat{\\Omega})}, \\\ns.t. 
\n\\begin{cases}\n -\\frac{1}{\\mu_0}\\Delta y(\\boldsymbol{\\mu}) + x_2(1-x_2)\\frac{\\partial y(\\boldsymbol{\\mu})}{\\partial x_1} = u(\\boldsymbol{\\mu}) \\quad \\text{in} \\; \\Omega, \\\n \\frac{1}{\\mu_0} \\nabla y(\\boldsymbol{\\mu}) \\cdot \\boldsymbol{n} = 0 \\qquad \\qquad \\qquad \\quad \\enspace \\; \\text{on} \\; \\Gamma_N, \\\n y(\\boldsymbol{\\mu})=1 \\qquad \\qquad \\qquad \\qquad \\qquad \\enspace \\text{on} \\; \\Gamma_{D1}, \\\n y(\\boldsymbol{\\mu})=2 \\qquad \\qquad \\qquad \\qquad \\qquad \\enspace \\text{on} \\; \\Gamma_{D2}\n\\end{cases}\n$$\nThe corresponding weak formulation comes from solving for the gradient of the Lagrangian function as detailed in the previous tutorial. \nSince this problem is recast in the framework of saddle-point problems, the reduced basis problem must satisfy the inf-sup condition, thus an aggregated space for the state and adjoint variables is defined.",
"from dolfin import *\nfrom rbnics import *",
"3. Affine Decomposition\nFor this problem the affine decomposition is straightforward.",
"class EllipticOptimalControl(EllipticOptimalControlProblem):\n\n # Default initialization of members\n def __init__(self, V, **kwargs):\n # Call the standard initialization\n EllipticOptimalControlProblem.__init__(self, V, **kwargs)\n # ... and also store FEniCS data structures for assembly\n assert \"subdomains\" in kwargs\n assert \"boundaries\" in kwargs\n self.subdomains, self.boundaries = kwargs[\"subdomains\"], kwargs[\"boundaries\"]\n yup = TrialFunction(V)\n (self.y, self.u, self.p) = split(yup)\n zvq = TestFunction(V)\n (self.z, self.v, self.q) = split(zvq)\n self.dx = Measure(\"dx\")(subdomain_data=subdomains)\n self.ds = Measure(\"ds\")(subdomain_data=boundaries)\n # Regularization coefficient\n self.alpha = 0.01\n # Store the velocity expression\n self.vel = Expression(\"x[1] * (1 - x[1])\", element=self.V.sub(0).ufl_element())\n # Customize linear solver parameters\n self._linear_solver_parameters.update({\n \"linear_solver\": \"mumps\"\n })\n\n # Return custom problem name\n def name(self):\n return \"EllipticOptimalControl2RB\"\n\n # Return stability factor\n def get_stability_factor_lower_bound(self):\n return 1.\n\n # Return theta multiplicative terms of the affine expansion of the problem.\n def compute_theta(self, term):\n mu = self.mu\n if term in (\"a\", \"a*\"):\n theta_a0 = 1.0 / mu[0]\n theta_a1 = 1.0\n return (theta_a0, theta_a1)\n elif term in (\"c\", \"c*\"):\n theta_c0 = 1.0\n return (theta_c0,)\n elif term == \"m\":\n theta_m0 = 1.0\n return (theta_m0,)\n elif term == \"n\":\n theta_n0 = self.alpha\n return (theta_n0,)\n elif term == \"f\":\n theta_f0 = 1.0\n return (theta_f0,)\n elif term == \"g\":\n theta_g0 = mu[1]\n theta_g1 = mu[2]\n return (theta_g0, theta_g1)\n elif term == \"h\":\n theta_h0 = 0.24 * mu[1]**2 + 0.52 * mu[2]**2\n return (theta_h0,)\n elif term == \"dirichlet_bc_y\":\n theta_bc0 = 1.\n return (theta_bc0,)\n else:\n raise ValueError(\"Invalid term for compute_theta().\")\n\n # Return forms resulting from the 
discretization of the affine expansion of the problem operators.\n def assemble_operator(self, term):\n dx = self.dx\n if term == \"a\":\n y = self.y\n q = self.q\n vel = self.vel\n a0 = inner(grad(y), grad(q)) * dx\n a1 = vel * y.dx(0) * q * dx\n return (a0, a1)\n elif term == \"a*\":\n z = self.z\n p = self.p\n vel = self.vel\n as0 = inner(grad(z), grad(p)) * dx\n as1 = - vel * p.dx(0) * z * dx\n return (as0, as1)\n elif term == \"c\":\n u = self.u\n q = self.q\n c0 = u * q * dx\n return (c0,)\n elif term == \"c*\":\n v = self.v\n p = self.p\n cs0 = v * p * dx\n return (cs0,)\n elif term == \"m\":\n y = self.y\n z = self.z\n m0 = y * z * dx(1) + y * z * dx(2)\n return (m0,)\n elif term == \"n\":\n u = self.u\n v = self.v\n n0 = u * v * dx\n return (n0,)\n elif term == \"f\":\n q = self.q\n f0 = Constant(0.0) * q * dx\n return (f0,)\n elif term == \"g\":\n z = self.z\n g0 = z * dx(1)\n g1 = z * dx(2)\n return (g0, g1)\n elif term == \"h\":\n h0 = 1.0\n return (h0,)\n elif term == \"dirichlet_bc_y\":\n bc0 = [DirichletBC(self.V.sub(0), Constant(i), self.boundaries, i) for i in (1, 2)]\n return (bc0,)\n elif term == \"dirichlet_bc_p\":\n bc0 = [DirichletBC(self.V.sub(2), Constant(0.0), self.boundaries, i) for i in (1, 2)]\n return (bc0,)\n elif term == \"inner_product_y\":\n y = self.y\n z = self.z\n x0 = inner(grad(y), grad(z)) * dx\n return (x0,)\n elif term == \"inner_product_u\":\n u = self.u\n v = self.v\n x0 = u * v * dx\n return (x0,)\n elif term == \"inner_product_p\":\n p = self.p\n q = self.q\n x0 = inner(grad(p), grad(q)) * dx\n return (x0,)\n else:\n raise ValueError(\"Invalid term for assemble_operator().\")",
"4. Main program\n4.1. Read the mesh for this problem\nThe mesh was generated by the data/generate_mesh_2.ipynb notebook.",
"mesh = Mesh(\"data/mesh2.xml\")\nsubdomains = MeshFunction(\"size_t\", mesh, \"data/mesh2_physical_region.xml\")\nboundaries = MeshFunction(\"size_t\", mesh, \"data/mesh2_facet_region.xml\")",
"4.2. Create Finite Element space (Lagrange P1)",
"scalar_element = FiniteElement(\"Lagrange\", mesh.ufl_cell(), 1)\nelement = MixedElement(scalar_element, scalar_element, scalar_element)\nV = FunctionSpace(mesh, element, components=[\"y\", \"u\", \"p\"])",
"4.3. Allocate an object of the EllipticOptimalControl class",
"problem = EllipticOptimalControl(V, subdomains=subdomains, boundaries=boundaries)\nmu_range = [(3.0, 20.0), (0.5, 1.5), (1.5, 2.5)]\nproblem.set_mu_range(mu_range)",
"4.4. Prepare reduction with a reduced basis method",
"reduced_basis_method = ReducedBasis(problem)\nreduced_basis_method.set_Nmax(40)",
"4.5. Perform the offline phase",
"lifting_mu = (3.0, 1.0, 2.0)\nproblem.set_mu(lifting_mu)\nreduced_basis_method.initialize_training_set(100)\nreduced_problem = reduced_basis_method.offline()",
"4.6. Perform an online solve",
"online_mu = (15.0, 0.6, 1.8)\nreduced_problem.set_mu(online_mu)\nreduced_solution = reduced_problem.solve()\nprint(\"Reduced output for mu =\", online_mu, \"is\", reduced_problem.compute_output())\n\nplot(reduced_solution, reduced_problem=reduced_problem, component=\"y\")\n\nplot(reduced_solution, reduced_problem=reduced_problem, component=\"u\")\n\nplot(reduced_solution, reduced_problem=reduced_problem, component=\"p\")",
"4.7. Perform an error analysis",
"reduced_basis_method.initialize_testing_set(100)\nreduced_basis_method.error_analysis()",
"4.8. Perform a speedup analysis",
"reduced_basis_method.speedup_analysis()"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
squishbug/DataScienceProgramming
|
DataScienceProgramming/08-Machine-Learning-I/HW5_orig.ipynb
|
cc0-1.0
|
[
"<div style=\"float: right; color: red;\">Please, rename this file to <code style=\"color: red;\">HW5.ipynb</code> and save it in <code style=\"color: red;\">MSA8010F16/HW5</code>\n</div>\n\nHomework 5: Evaluating Classifiers",
"%matplotlib inline",
"Classifier comparison\nA comparison of several classifiers in scikit-learn on synthetic datasets.\nThe point of this example is to illustrate the nature of decision boundaries\nof different classifiers.\nThis should be taken with a grain of salt, as the intuition conveyed by\nthese examples does not necessarily carry over to real datasets.\nParticularly in high-dimensional spaces, data can more easily be separated\nlinearly and the simplicity of classifiers such as naive Bayes and linear SVMs\nmight lead to better generalization than is achieved by other classifiers.\nThe plots show training points in solid colors and testing points\nsemi-transparent. The lower right shows the classification accuracy on the test\nset.",
"print(__doc__)\n\n\n# Code source: Gaël Varoquaux\n# Andreas Müller\n# Modified for documentation by Jaques Grobler\n# License: BSD 3 clause\n\nimport numpy as np\nimport matplotlib.pyplot as plt\nfrom matplotlib.colors import ListedColormap\nfrom sklearn.model_selection import train_test_split\nfrom sklearn.preprocessing import StandardScaler\nfrom sklearn.datasets import make_moons, make_circles, make_classification\nfrom sklearn.neural_network import MLPClassifier\nfrom sklearn.neighbors import KNeighborsClassifier\nfrom sklearn.svm import SVC\nfrom sklearn.gaussian_process import GaussianProcessClassifier\nfrom sklearn.gaussian_process.kernels import RBF\nfrom sklearn.tree import DecisionTreeClassifier\nfrom sklearn.ensemble import RandomForestClassifier, AdaBoostClassifier\nfrom sklearn.naive_bayes import GaussianNB\nfrom sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis\n\nh = .02 # step size in the mesh\n\nnames = [\"Decision Tree\", \"Nearest Neighbors\", \"Naive Bayes\", \"Linear SVM\"]\n\nclassifiers = [\n DecisionTreeClassifier(max_depth=5),\n KNeighborsClassifier(3),\n GaussianNB(),\n SVC(kernel=\"linear\", C=0.025),\n ]\n\nX, y = make_classification(n_features=2, n_redundant=0, n_informative=2,\n random_state=1, n_clusters_per_class=1)\nrng = np.random.RandomState(2)\nX += 2 * rng.uniform(size=X.shape)\nlinearly_separable = (X, y)\n\ndatasets = [make_moons(noise=0.3, random_state=0),\n make_circles(noise=0.2, factor=0.5, random_state=1),\n linearly_separable\n ]\n\nfigure = plt.figure(figsize=(27, 9))\ni = 1\n# iterate over datasets\nfor ds_cnt, ds in enumerate(datasets):\n # preprocess dataset, split into training and test part\n X, y = ds\n X = StandardScaler().fit_transform(X)\n X_train, X_test, y_train, y_test = \\\n train_test_split(X, y, test_size=.4, random_state=42)\n\n x_min, x_max = X[:, 0].min() - .5, X[:, 0].max() + .5\n y_min, y_max = X[:, 1].min() - .5, X[:, 1].max() + .5\n xx, yy = np.meshgrid(np.arange(x_min, x_max, 
h),\n np.arange(y_min, y_max, h))\n\n # just plot the dataset first\n cm = plt.cm.RdBu\n cm_bright = ListedColormap(['#FF0000', '#0000FF'])\n ax = plt.subplot(len(datasets), len(classifiers) + 1, i)\n if ds_cnt == 0:\n ax.set_title(\"Input data\")\n # Plot the training points\n ax.scatter(X_train[:, 0], X_train[:, 1], c=y_train, cmap=cm_bright)\n # and testing points\n ax.scatter(X_test[:, 0], X_test[:, 1], c=y_test, cmap=cm_bright, alpha=0.6)\n ax.set_xlim(xx.min(), xx.max())\n ax.set_ylim(yy.min(), yy.max())\n ax.set_xticks(())\n ax.set_yticks(())\n i += 1\n\n # iterate over classifiers\n for name, clf in zip(names, classifiers):\n ax = plt.subplot(len(datasets), len(classifiers) + 1, i)\n clf.fit(X_train, y_train)\n score = clf.score(X_test, y_test)\n\n # Plot the decision boundary. For that, we will assign a color to each\n # point in the mesh [x_min, x_max]x[y_min, y_max].\n if hasattr(clf, \"decision_function\"):\n Z = clf.decision_function(np.c_[xx.ravel(), yy.ravel()])\n else:\n Z = clf.predict_proba(np.c_[xx.ravel(), yy.ravel()])[:, 1]\n\n # Put the result into a color plot\n Z = Z.reshape(xx.shape)\n ax.contourf(xx, yy, Z, cmap=cm, alpha=.8)\n\n # Plot also the training points\n ax.scatter(X_train[:, 0], X_train[:, 1], c=y_train, cmap=cm_bright)\n # and testing points\n ax.scatter(X_test[:, 0], X_test[:, 1], c=y_test, cmap=cm_bright,\n alpha=0.6)\n\n ax.set_xlim(xx.min(), xx.max())\n ax.set_ylim(yy.min(), yy.max())\n ax.set_xticks(())\n ax.set_yticks(())\n if ds_cnt == 0:\n ax.set_title(name)\n ax.text(xx.max() - .3, yy.min() + .3, ('%.2f' % score).lstrip('0'),\n size=15, horizontalalignment='right')\n i += 1\n\nplt.tight_layout()\nplt.show()",
"Standard Scaler\nhttp://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.StandardScaler.html#sklearn.preprocessing.StandardScaler\nStandardize features by removing the mean and scaling to unit variance\nCentering and scaling happen independently on each feature by computing the relevant statistics on the samples in the training set. Mean and standard deviation are then stored to be used on later data using the transform method.\nStandardization of a dataset is a common requirement for many machine learning estimators: they might behave badly if the individual features do not more or less look like standard normally distributed data (e.g. Gaussian with 0 mean and unit variance).\nFor instance many elements used in the objective function of a learning algorithm (such as the RBF kernel of Support Vector Machines or the L1 and L2 regularizers of linear models) assume that all features are centered around 0 and have variance in the same order. If a feature has a variance that is orders of magnitude larger than others, it might dominate the objective function and make the estimator unable to learn from other features correctly as expected.\nThis scaler can also be applied to sparse CSR or CSC matrices by passing with_mean=False to avoid breaking the sparsity structure of the data.",
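A minimal sketch of the fit-then-transform workflow described above (the toy feature values are made up): the statistics are computed on the training split only and then reused for later data.

```python
import numpy as np
from sklearn.preprocessing import StandardScaler

# Two features on very different scales.
X_train = np.array([[1.0, 200.0], [2.0, 400.0], [3.0, 600.0]])
X_test = np.array([[2.0, 500.0]])

scaler = StandardScaler().fit(X_train)   # stores per-feature mean and std
X_train_std = scaler.transform(X_train)  # zero mean, unit variance per column
X_test_std = scaler.transform(X_test)    # same training statistics applied

print(scaler.mean_)              # per-feature means: 2.0 and 400.0
print(X_train_std.mean(axis=0))  # approximately zero for each column
```

Note that `fit_transform(X_train)` followed by `transform(X_test)` is equivalent; fitting on the test data instead would leak test statistics into the model.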
"import itertools\n\nfrom sklearn.metrics import confusion_matrix\n\ndef plot_confusion_matrix(cm, classes,\n normalize=False,\n title='Confusion matrix',\n cmap=plt.cm.Blues):\n \"\"\"\n This function prints and plots the confusion matrix.\n Normalization can be applied by setting `normalize=True`.\n \"\"\"\n if normalize:\n cm = cm.astype('float') / cm.sum(axis=1)[:, np.newaxis]\n print(\"Normalized confusion matrix\")\n else:\n print('Confusion matrix, without normalization')\n\n print(cm)\n\n plt.imshow(cm, interpolation='nearest', cmap=cmap)\n plt.title(title)\n plt.colorbar()\n tick_marks = np.arange(len(classes))\n plt.xticks(tick_marks, classes, rotation=45)\n plt.yticks(tick_marks, classes)\n\n thresh = cm.max() / 2.\n for i, j in itertools.product(range(cm.shape[0]), range(cm.shape[1])):\n plt.text(j, i, cm[i, j],\n horizontalalignment=\"center\",\n color=\"white\" if cm[i, j] > thresh else \"black\")\n\n plt.tight_layout()\n plt.ylabel('True label')\n plt.xlabel('Predicted label')"
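The plotting helper above expects a matrix such as the one produced by `sklearn.metrics.confusion_matrix`; a minimal sketch on made-up labels for three classes:

```python
from sklearn.metrics import confusion_matrix

# Toy labels: rows of the resulting matrix are true classes, columns predicted.
y_true = [0, 0, 1, 1, 1, 2]
y_pred = [0, 1, 1, 1, 0, 2]

cm = confusion_matrix(y_true, y_pred)
print(cm)
# [[1 1 0]
#  [1 2 0]
#  [0 0 1]]
# Diagonal entries count the correct predictions for each class.
```

This matrix could then be passed to the helper, e.g. `plot_confusion_matrix(cm, classes=['a', 'b', 'c'])`, where the class names are purely illustrative.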
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
msanterre/deep_learning
|
language-translation/dlnd_language_translation.ipynb
|
mit
|
[
"Language Translation\nIn this project, you’re going to take a peek into the realm of neural network machine translation. You’ll be training a sequence to sequence model on a dataset of English and French sentences that can translate new sentences from English to French.\nGet the Data\nSince translating the whole language of English to French will take lots of time to train, we have provided you with a small portion of the English corpus.",
"\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\nimport helper\nimport problem_unittests as tests\n \nsource_path = 'data/small_vocab_en'\ntarget_path = 'data/small_vocab_fr'\nsource_text = helper.load_data(source_path)\ntarget_text = helper.load_data(target_path)",
"Explore the Data\nPlay around with view_sentence_range to view different parts of the data.",
"view_sentence_range = (0, 10)\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\nimport numpy as np\n\nprint('Dataset Stats')\nprint('Roughly the number of unique words: {}'.format(len({word: None for word in source_text.split()})))\n\nsentences = source_text.split('\\n')\nword_counts = [len(sentence.split()) for sentence in sentences]\nprint('Number of sentences: {}'.format(len(sentences)))\nprint('Average number of words in a sentence: {}'.format(np.average(word_counts)))\n\nprint()\nprint('English sentences {} to {}:'.format(*view_sentence_range))\nprint('\\n'.join(source_text.split('\\n')[view_sentence_range[0]:view_sentence_range[1]]))\nprint()\nprint('French sentences {} to {}:'.format(*view_sentence_range))\nprint('\\n'.join(target_text.split('\\n')[view_sentence_range[0]:view_sentence_range[1]]))",
"Implement Preprocessing Function\nText to Word Ids\nAs you did with other RNNs, you must turn the text into a number so the computer can understand it. In the function text_to_ids(), you'll turn source_text and target_text from words to ids. However, you need to add the <EOS> word id at the end of target_text. This will help the neural network predict when the sentence should end.\nYou can get the <EOS> word id by doing:\npython\ntarget_vocab_to_int['<EOS>']\nYou can get other word ids using source_vocab_to_int and target_vocab_to_int.",
"def text_to_ids(source_text, target_text, source_vocab_to_int, target_vocab_to_int):\n \"\"\"\n Convert source and target text to proper word ids\n :param source_text: String that contains all the source text.\n :param target_text: String that contains all the target text.\n :param source_vocab_to_int: Dictionary to go from the source words to an id\n :param target_vocab_to_int: Dictionary to go from the target words to an id\n :return: A tuple of lists (source_id_text, target_id_text)\n \"\"\"\n \n source_ids = []\n target_ids = []\n \n for sentence in source_text.split(\"\\n\"):\n source_ids.append([source_vocab_to_int[word] for word in sentence.split(' ') if word != ''])\n \n for sentence in target_text.split(\"\\n\"):\n target_ids.append([target_vocab_to_int[word] for word in sentence.split(' ') if word != ''] + [target_vocab_to_int['<EOS>']])\n \n return source_ids, target_ids\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_text_to_ids(text_to_ids)",
"Preprocess all the data and save it\nRunning the code cell below will preprocess all the data and save it to file.",
"\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\nhelper.preprocess_and_save_data(source_path, target_path, text_to_ids)",
"Check Point\nThis is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.",
"\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\nimport numpy as np\nimport helper\nimport problem_unittests as tests\n\n(source_int_text, target_int_text), (source_vocab_to_int, target_vocab_to_int), _ = helper.load_preprocess()",
"Check the Version of TensorFlow and Access to GPU\nThis will check to make sure you have the correct version of TensorFlow and access to a GPU",
"\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\nfrom distutils.version import LooseVersion\nimport warnings\nimport tensorflow as tf\nfrom tensorflow.python.layers.core import Dense\n\n# Check TensorFlow Version\nassert LooseVersion(tf.__version__) >= LooseVersion('1.1'), 'Please use TensorFlow version 1.1 or newer'\nprint('TensorFlow Version: {}'.format(tf.__version__))\n\n# Check for a GPU\nif not tf.test.gpu_device_name():\n warnings.warn('No GPU found. Please use a GPU to train your neural network.')\nelse:\n print('Default GPU Device: {}'.format(tf.test.gpu_device_name()))",
"Build the Neural Network\nYou'll build the components necessary to build a Sequence-to-Sequence model by implementing the following functions below:\n- model_inputs\n- process_decoder_input\n- encoding_layer\n- decoding_layer_train\n- decoding_layer_infer\n- decoding_layer\n- seq2seq_model\nInput\nImplement the model_inputs() function to create TF Placeholders for the Neural Network. It should create the following placeholders:\n\nInput text placeholder named \"input\" using the TF Placeholder name parameter with rank 2.\nTargets placeholder with rank 2.\nLearning rate placeholder with rank 0.\nKeep probability placeholder named \"keep_prob\" using the TF Placeholder name parameter with rank 0.\nTarget sequence length placeholder named \"target_sequence_length\" with rank 1\nMax target sequence length tensor named \"max_target_len\" getting its value from applying tf.reduce_max on the target_sequence_length placeholder. Rank 0.\nSource sequence length placeholder named \"source_sequence_length\" with rank 1\n\nReturn the placeholders in the following the tuple (input, targets, learning rate, keep probability, target sequence length, max target sequence length, source sequence length)",
"def model_inputs():\n \"\"\"\n Create TF Placeholders for input, targets, learning rate, and lengths of source and target sequences.\n :return: Tuple (input, targets, learning rate, keep probability, target sequence length,\n max target sequence length, source sequence length)\n \"\"\"\n\n input_text = tf.placeholder(tf.int32, [None, None], name=\"input\")\n target_text = tf.placeholder(tf.int32, [None, None], name=\"targets\")\n lr = tf.placeholder(tf.float32, name=\"learning_rate\")\n keep_prob = tf.placeholder(tf.float32, name=\"keep_prob\")\n target_sequence_length = tf.placeholder(tf.int32, (None,), name='target_sequence_length')\n max_target_sequence_length = tf.reduce_max(target_sequence_length, name='max_target_len')\n source_sequence_length = tf.placeholder(tf.int32, (None,), name='source_sequence_length')\n \n return input_text, target_text, lr, keep_prob, target_sequence_length, max_target_sequence_length, source_sequence_length\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_model_inputs(model_inputs)",
"Process Decoder Input\nImplement process_decoder_input by removing the last word id from each batch in target_data and concatenating the GO ID to the beginning of each batch.",
"def process_decoder_input(target_data, target_vocab_to_int, batch_size):\n \"\"\"\n Preprocess target data for encoding\n :param target_data: Target Placehoder\n :param target_vocab_to_int: Dictionary to go from the target words to an id\n :param batch_size: Batch Size\n :return: Preprocessed target data\n \"\"\"\n ending = tf.strided_slice(target_data, [0,0], [batch_size, -1], [1,1])\n dec_input = tf.concat([tf.fill([batch_size, 1], target_vocab_to_int['<GO>']), ending], 1)\n return dec_input\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_process_encoding_input(process_decoder_input)",
"Encoding\nImplement encoding_layer() to create an encoder RNN layer:\n * Embed the encoder input using tf.contrib.layers.embed_sequence\n * Construct a stacked tf.contrib.rnn.LSTMCell wrapped in a tf.contrib.rnn.DropoutWrapper\n * Pass cell and embedded input to tf.nn.dynamic_rnn()",
"from imp import reload\nreload(tests)\n\ndef encoding_layer(rnn_inputs, rnn_size, num_layers, keep_prob, \n source_sequence_length, source_vocab_size, \n encoding_embedding_size):\n \"\"\"\n Create encoding layer\n :param rnn_inputs: Inputs for the RNN\n :param rnn_size: RNN Size\n :param num_layers: Number of layers\n :param keep_prob: Dropout keep probability\n :param source_sequence_length: a list of the lengths of each sequence in the batch\n :param source_vocab_size: vocabulary size of source data\n :param encoding_embedding_size: embedding size of source data\n :return: tuple (RNN output, RNN state)\n \"\"\"\n def make_cell(rnn_size, keep_prob):\n cell = tf.contrib.rnn.LSTMCell(rnn_size, initializer=tf.random_uniform_initializer(-0.1, 0.1))\n drop = tf.contrib.rnn.DropoutWrapper(cell, output_keep_prob=keep_prob)\n return drop\n \n embed_input = tf.contrib.layers.embed_sequence(rnn_inputs, source_vocab_size, encoding_embedding_size)\n \n cells = tf.contrib.rnn.MultiRNNCell([make_cell(rnn_size, keep_prob) for i in range(num_layers)])\n outputs, state = tf.nn.dynamic_rnn(cells, embed_input, \n sequence_length=source_sequence_length,\n dtype=tf.float32)\n return outputs, state\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_encoding_layer(encoding_layer)",
"Decoding - Training\nCreate a training decoding layer:\n* Create a tf.contrib.seq2seq.TrainingHelper \n* Create a tf.contrib.seq2seq.BasicDecoder\n* Obtain the decoder outputs from tf.contrib.seq2seq.dynamic_decode",
"\ndef decoding_layer_train(encoder_state, dec_cell, dec_embed_input, \n target_sequence_length, max_summary_length, \n output_layer, keep_prob):\n \"\"\"\n Create a decoding layer for training\n :param encoder_state: Encoder State\n :param dec_cell: Decoder RNN Cell\n :param dec_embed_input: Decoder embedded input\n :param target_sequence_length: The lengths of each sequence in the target batch\n :param max_summary_length: The length of the longest sequence in the batch\n :param output_layer: Function to apply the output layer\n :param keep_prob: Dropout keep probability\n :return: BasicDecoderOutput containing training logits and sample_id\n \"\"\"\n \n with tf.variable_scope(\"decode\"):\n training_helper = tf.contrib.seq2seq.TrainingHelper(inputs=dec_embed_input, \n sequence_length=target_sequence_length,\n time_major=False)\n \n training_decoder = tf.contrib.seq2seq.BasicDecoder(dec_cell, training_helper, encoder_state, output_layer)\n \n output = tf.contrib.seq2seq.dynamic_decode(training_decoder, \n impute_finished=True, \n maximum_iterations=max_summary_length)[0]\n return output\n\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_decoding_layer_train(decoding_layer_train)",
"Decoding - Inference\nCreate inference decoder:\n* Create a tf.contrib.seq2seq.GreedyEmbeddingHelper\n* Create a tf.contrib.seq2seq.BasicDecoder\n* Obtain the decoder outputs from tf.contrib.seq2seq.dynamic_decode",
"def decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id,\n end_of_sequence_id, max_target_sequence_length,\n vocab_size, output_layer, batch_size, keep_prob):\n \"\"\"\n Create a decoding layer for inference\n :param encoder_state: Encoder state\n :param dec_cell: Decoder RNN Cell\n :param dec_embeddings: Decoder embeddings\n :param start_of_sequence_id: GO ID\n :param end_of_sequence_id: EOS ID\n :param max_target_sequence_length: Maximum length of target sequences\n :param vocab_size: Size of decoder/target vocabulary\n :param output_layer: Function to apply the output layer\n :param batch_size: Batch size\n :param keep_prob: Dropout keep probability\n :return: BasicDecoderOutput containing inference logits and sample_id\n \"\"\"\n start_tokens = tf.tile(tf.constant([start_of_sequence_id], dtype=tf.int32), [batch_size])\n \n inference_helper = tf.contrib.seq2seq.GreedyEmbeddingHelper(dec_embeddings,\n start_tokens,\n end_of_sequence_id)\n inference_decoder = tf.contrib.seq2seq.BasicDecoder(dec_cell,\n inference_helper,\n encoder_state,\n output_layer)\n output = tf.contrib.seq2seq.dynamic_decode(inference_decoder,\n impute_finished=True,\n maximum_iterations=max_target_sequence_length)[0]\n return output\n\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_decoding_layer_infer(decoding_layer_infer)",
"Build the Decoding Layer\nImplement decoding_layer() to create a Decoder RNN layer.\n\nEmbed the target sequences\nConstruct the decoder LSTM cell (just like you constructed the encoder cell above)\nCreate an output layer to map the outputs of the decoder to the elements of our vocabulary\nUse the your decoding_layer_train(encoder_state, dec_cell, dec_embed_input, target_sequence_length, max_target_sequence_length, output_layer, keep_prob) function to get the training logits.\nUse your decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id, end_of_sequence_id, max_target_sequence_length, vocab_size, output_layer, batch_size, keep_prob) function to get the inference logits.\n\nNote: You'll need to use tf.variable_scope to share variables between training and inference.",
"def decoding_layer(dec_input, encoder_state,\n target_sequence_length, max_target_sequence_length,\n rnn_size,\n num_layers, target_vocab_to_int, target_vocab_size,\n batch_size, keep_prob, decoding_embedding_size):\n \"\"\"\n Create decoding layer\n :param dec_input: Decoder input\n :param encoder_state: Encoder state\n :param target_sequence_length: The lengths of each sequence in the target batch\n :param max_target_sequence_length: Maximum length of target sequences\n :param rnn_size: RNN Size\n :param num_layers: Number of layers\n :param target_vocab_to_int: Dictionary to go from the target words to an id\n :param target_vocab_size: Size of target vocabulary\n :param batch_size: The size of the batch\n :param keep_prob: Dropout keep probability\n :param decoding_embedding_size: Decoding embedding size\n :return: Tuple of (Training BasicDecoderOutput, Inference BasicDecoderOutput)\n \"\"\"\n def make_cell(rnn_size, keep_prob):\n cell = tf.contrib.rnn.BasicLSTMCell(rnn_size)\n drop = tf.contrib.rnn.DropoutWrapper(cell, output_keep_prob=keep_prob)\n return drop\n \n embeddings = tf.Variable(tf.random_uniform([target_vocab_size, decoding_embedding_size]))\n embed_input = tf.nn.embedding_lookup(embeddings, dec_input)\n \n dec_cell = tf.contrib.rnn.MultiRNNCell([make_cell(rnn_size, keep_prob) for i in range(num_layers)])\n output_layer = Dense(target_vocab_size,\n kernel_initializer=tf.truncated_normal_initializer(mean=0.0, stddev=0.1))\n\n with tf.variable_scope(\"decode\"):\n training_decoder_output = decoding_layer_train(encoder_state,\n dec_cell,\n embed_input,\n target_sequence_length,\n max_target_sequence_length,\n output_layer,\n keep_prob)\n \n with tf.variable_scope(\"decode\", reuse=True):\n inferer_decoder_output = decoding_layer_infer(encoder_state,\n dec_cell,\n embeddings,\n target_vocab_to_int['<GO>'],\n target_vocab_to_int['<EOS>'],\n max_target_sequence_length,\n target_vocab_size,\n output_layer,\n batch_size,\n keep_prob)\n return 
training_decoder_output, inferer_decoder_output\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_decoding_layer(decoding_layer)",
"Build the Neural Network\nApply the functions you implemented above to:\n\nEncode the input using your encoding_layer(rnn_inputs, rnn_size, num_layers, keep_prob, source_sequence_length, source_vocab_size, encoding_embedding_size).\nProcess target data using your process_decoder_input(target_data, target_vocab_to_int, batch_size) function.\nDecode the encoded input using your decoding_layer(dec_input, enc_state, target_sequence_length, max_target_sentence_length, rnn_size, num_layers, target_vocab_to_int, target_vocab_size, batch_size, keep_prob, dec_embedding_size) function.",
"def seq2seq_model(input_data, target_data, keep_prob, batch_size,\n source_sequence_length, target_sequence_length,\n max_target_sentence_length,\n source_vocab_size, target_vocab_size,\n enc_embedding_size, dec_embedding_size,\n rnn_size, num_layers, target_vocab_to_int):\n \"\"\"\n Build the Sequence-to-Sequence part of the neural network\n :param input_data: Input placeholder\n :param target_data: Target placeholder\n :param keep_prob: Dropout keep probability placeholder\n :param batch_size: Batch Size\n :param source_sequence_length: Sequence Lengths of source sequences in the batch\n :param target_sequence_length: Sequence Lengths of target sequences in the batch\n :param source_vocab_size: Source vocabulary size\n :param target_vocab_size: Target vocabulary size\n :param enc_embedding_size: Decoder embedding size\n :param dec_embedding_size: Encoder embedding size\n :param rnn_size: RNN Size\n :param num_layers: Number of layers\n :param target_vocab_to_int: Dictionary to go from the target words to an id\n :return: Tuple of (Training BasicDecoderOutput, Inference BasicDecoderOutput)\n \"\"\"\n _, enc_state = encoding_layer(input_data, rnn_size, num_layers, keep_prob, \n source_sequence_length, source_vocab_size,\n enc_embedding_size)\n \n dec_input = process_decoder_input(target_data, target_vocab_to_int, batch_size)\n train_dec, infer_dec = decoding_layer(dec_input, enc_state, \n target_sequence_length, max_target_sentence_length,\n rnn_size, num_layers,\n target_vocab_to_int, target_vocab_size, \n batch_size, keep_prob, dec_embedding_size)\n return train_dec, infer_dec\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_seq2seq_model(seq2seq_model)",
"Neural Network Training\nHyperparameters\nTune the following parameters:\n\nSet epochs to the number of epochs.\nSet batch_size to the batch size.\nSet rnn_size to the size of the RNNs.\nSet num_layers to the number of layers.\nSet encoding_embedding_size to the size of the embedding for the encoder.\nSet decoding_embedding_size to the size of the embedding for the decoder.\nSet learning_rate to the learning rate.\nSet keep_probability to the Dropout keep probability\nSet display_step to state how many steps between each debug output statement",
"# Number of Epochs\nepochs = 5\n# Batch Size\nbatch_size = 64\n# RNN Size\nrnn_size = 128\n# Number of Layers\nnum_layers = 2\n# Embedding Size\nencoding_embedding_size = 128\ndecoding_embedding_size = 128\n# Learning Rate\nlearning_rate = 0.001\n# Dropout Keep Probability\nkeep_probability = 0.5\ndisplay_step = 100",
"Build the Graph\nBuild the graph using the neural network you implemented.",
"\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\nsave_path = 'checkpoints/dev'\n(source_int_text, target_int_text), (source_vocab_to_int, target_vocab_to_int), _ = helper.load_preprocess()\nmax_target_sentence_length = max([len(sentence) for sentence in source_int_text])\n\ntrain_graph = tf.Graph()\nwith train_graph.as_default():\n input_data, targets, lr, keep_prob, target_sequence_length, max_target_sequence_length, source_sequence_length = model_inputs()\n\n #sequence_length = tf.placeholder_with_default(max_target_sentence_length, None, name='sequence_length')\n input_shape = tf.shape(input_data)\n\n train_logits, inference_logits = seq2seq_model(tf.reverse(input_data, [-1]),\n targets,\n keep_prob,\n batch_size,\n source_sequence_length,\n target_sequence_length,\n max_target_sequence_length,\n len(source_vocab_to_int),\n len(target_vocab_to_int),\n encoding_embedding_size,\n decoding_embedding_size,\n rnn_size,\n num_layers,\n target_vocab_to_int)\n\n\n training_logits = tf.identity(train_logits.rnn_output, name='logits')\n inference_logits = tf.identity(inference_logits.sample_id, name='predictions')\n\n masks = tf.sequence_mask(target_sequence_length, max_target_sequence_length, dtype=tf.float32, name='masks')\n\n with tf.name_scope(\"optimization\"):\n # Loss function\n cost = tf.contrib.seq2seq.sequence_loss(\n training_logits,\n targets,\n masks)\n\n # Optimizer\n optimizer = tf.train.AdamOptimizer(lr)\n\n # Gradient Clipping\n gradients = optimizer.compute_gradients(cost)\n capped_gradients = [(tf.clip_by_value(grad, -1., 1.), var) for grad, var in gradients if grad is not None]\n train_op = optimizer.apply_gradients(capped_gradients)\n",
"Batch and pad the source and target sequences",
"\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\ndef pad_sentence_batch(sentence_batch, pad_int):\n \"\"\"Pad sentences with <PAD> so that each sentence of a batch has the same length\"\"\"\n max_sentence = max([len(sentence) for sentence in sentence_batch])\n return [sentence + [pad_int] * (max_sentence - len(sentence)) for sentence in sentence_batch]\n\n\ndef get_batches(sources, targets, batch_size, source_pad_int, target_pad_int):\n \"\"\"Batch targets, sources, and the lengths of their sentences together\"\"\"\n for batch_i in range(0, len(sources)//batch_size):\n start_i = batch_i * batch_size\n\n # Slice the right amount for the batch\n sources_batch = sources[start_i:start_i + batch_size]\n targets_batch = targets[start_i:start_i + batch_size]\n\n # Pad\n pad_sources_batch = np.array(pad_sentence_batch(sources_batch, source_pad_int))\n pad_targets_batch = np.array(pad_sentence_batch(targets_batch, target_pad_int))\n\n # Need the lengths for the _lengths parameters\n pad_targets_lengths = []\n for target in pad_targets_batch:\n pad_targets_lengths.append(len(target))\n\n pad_source_lengths = []\n for source in pad_sources_batch:\n pad_source_lengths.append(len(source))\n\n yield pad_sources_batch, pad_targets_batch, pad_source_lengths, pad_targets_lengths\n",
"Train\nTrain the neural network on the preprocessed data. If you have a hard time getting a good loss, check the forms to see if anyone is having the same problem.",
"\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\ndef get_accuracy(target, logits):\n \"\"\"\n Calculate accuracy\n \"\"\"\n max_seq = max(target.shape[1], logits.shape[1])\n if max_seq - target.shape[1]:\n target = np.pad(\n target,\n [(0,0),(0,max_seq - target.shape[1])],\n 'constant')\n if max_seq - logits.shape[1]:\n logits = np.pad(\n logits,\n [(0,0),(0,max_seq - logits.shape[1])],\n 'constant')\n\n return np.mean(np.equal(target, logits))\n\n# Split data to training and validation sets\ntrain_source = source_int_text[batch_size:]\ntrain_target = target_int_text[batch_size:]\nvalid_source = source_int_text[:batch_size]\nvalid_target = target_int_text[:batch_size]\n(valid_sources_batch, valid_targets_batch, valid_sources_lengths, valid_targets_lengths ) = next(get_batches(valid_source,\n valid_target,\n batch_size,\n source_vocab_to_int['<PAD>'],\n target_vocab_to_int['<PAD>'])) \nwith tf.Session(graph=train_graph) as sess:\n sess.run(tf.global_variables_initializer())\n\n for epoch_i in range(epochs):\n for batch_i, (source_batch, target_batch, sources_lengths, targets_lengths) in enumerate(\n get_batches(train_source, train_target, batch_size,\n source_vocab_to_int['<PAD>'],\n target_vocab_to_int['<PAD>'])):\n\n _, loss = sess.run(\n [train_op, cost],\n {input_data: source_batch,\n targets: target_batch,\n lr: learning_rate,\n target_sequence_length: targets_lengths,\n source_sequence_length: sources_lengths,\n keep_prob: keep_probability})\n\n\n if batch_i % display_step == 0 and batch_i > 0:\n\n\n batch_train_logits = sess.run(\n inference_logits,\n {input_data: source_batch,\n source_sequence_length: sources_lengths,\n target_sequence_length: targets_lengths,\n keep_prob: 1.0})\n\n\n batch_valid_logits = sess.run(\n inference_logits,\n {input_data: valid_sources_batch,\n source_sequence_length: valid_sources_lengths,\n target_sequence_length: valid_targets_lengths,\n keep_prob: 1.0})\n\n train_acc = get_accuracy(target_batch, batch_train_logits)\n\n 
valid_acc = get_accuracy(valid_targets_batch, batch_valid_logits)\n\n print('Epoch {:>3} Batch {:>4}/{} - Train Accuracy: {:>6.4f}, Validation Accuracy: {:>6.4f}, Loss: {:>6.4f}'\n .format(epoch_i, batch_i, len(source_int_text) // batch_size, train_acc, valid_acc, loss))\n\n # Save Model\n saver = tf.train.Saver()\n saver.save(sess, save_path)\n print('Model Trained and Saved')",
"Save Parameters\nSave the batch_size and save_path parameters for inference.",
"\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\n# Save parameters for checkpoint\nhelper.save_params(save_path)",
"Checkpoint",
"\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\nimport tensorflow as tf\nimport numpy as np\nimport helper\nimport problem_unittests as tests\n\n_, (source_vocab_to_int, target_vocab_to_int), (source_int_to_vocab, target_int_to_vocab) = helper.load_preprocess()\nload_path = helper.load_params()",
"Sentence to Sequence\nTo feed a sentence into the model for translation, you first need to preprocess it. Implement the function sentence_to_seq() to preprocess new sentences.\n\nConvert the sentence to lowercase\nConvert words into ids using vocab_to_int\nConvert words not in the vocabulary, to the <UNK> word id.",
"def sentence_to_seq(sentence, vocab_to_int):\n \"\"\"\n Convert a sentence to a sequence of ids\n :param sentence: String\n :param vocab_to_int: Dictionary to go from the words to an id\n :return: List of word ids\n \"\"\"\n unk_id = vocab_to_int[\"<UNK>\"]\n return [vocab_to_int.get(word, unk_id) for word in sentence.split(\" \")]\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_sentence_to_seq(sentence_to_seq)",
"Translate\nThis will translate translate_sentence from English to French.",
"translate_sentence = 'he saw a old yellow truck .'\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\ntranslate_sentence = sentence_to_seq(translate_sentence, source_vocab_to_int)\n\nloaded_graph = tf.Graph()\nwith tf.Session(graph=loaded_graph) as sess:\n # Load saved model\n loader = tf.train.import_meta_graph(load_path + '.meta')\n loader.restore(sess, load_path)\n\n input_data = loaded_graph.get_tensor_by_name('input:0')\n logits = loaded_graph.get_tensor_by_name('predictions:0')\n target_sequence_length = loaded_graph.get_tensor_by_name('target_sequence_length:0')\n source_sequence_length = loaded_graph.get_tensor_by_name('source_sequence_length:0')\n keep_prob = loaded_graph.get_tensor_by_name('keep_prob:0')\n\n translate_logits = sess.run(logits, {input_data: [translate_sentence]*batch_size,\n target_sequence_length: [len(translate_sentence)*2]*batch_size,\n source_sequence_length: [len(translate_sentence)]*batch_size,\n keep_prob: 1.0})[0]\n\nprint('Input')\nprint(' Word Ids: {}'.format([i for i in translate_sentence]))\nprint(' English Words: {}'.format([source_int_to_vocab[i] for i in translate_sentence]))\n\nprint('\\nPrediction')\nprint(' Word Ids: {}'.format([i for i in translate_logits]))\nprint(' French Words: {}'.format(\" \".join([target_int_to_vocab[i] for i in translate_logits])))\n",
"Imperfect Translation\nYou might notice that some sentences translate better than others. Since the dataset you're using only has a vocabulary of 227 English words of the thousands that you use, you're only going to see good results using these words. For this project, you don't need a perfect translation. However, if you want to create a better translation model, you'll need better data.\nYou can train on the WMT10 French-English corpus. This dataset has more vocabulary and richer in topics discussed. However, this will take you days to train, so make sure you've a GPU and the neural network is performing well on dataset we provided. Just make sure you play with the WMT10 corpus after you've submitted this project.\nSubmitting This Project\nWhen submitting this project, make sure to run all the cells before saving the notebook. Save the notebook file as \"dlnd_language_translation.ipynb\" and save it as a HTML file under \"File\" -> \"Download as\". Include the \"helper.py\" and \"problem_unittests.py\" files in your submission."
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
kwinkunks/timefreak
|
stft.ipynb
|
apache-2.0
|
[
"STFT and ISTFT\nI'd like to make my own spectrogram, so that I can play with Gabor logons, AKA Heisenberg boxes.",
"import numpy as np\nfrom scipy.fftpack import fft, ifft, rfft, irfft, fftfreq, rfftfreq\nimport scipy.signal\nimport matplotlib.pyplot as plt\n%matplotlib inline",
"From Stack Overflow",
"def stft(x, fs, framesz, hop):\n framesamp = int(framesz*fs)\n hopsamp = int(hop*fs)\n w = scipy.signal.hann(framesamp)\n X = np.array([fft(w*x[i:i+framesamp]) for i in range(0, len(x)-framesamp, hopsamp)])\n return X\n\ndef istft(X, fs, T, hop):\n x = np.zeros(T*fs)\n framesamp = X.shape[1]\n hopsamp = int(hop*fs)\n for n,i in enumerate(range(0, len(x)-framesamp, hopsamp)):\n x[i:i+framesamp] += np.real(ifft(X[n]))\n return x\n\nf0 = 440 # Compute the STFT of a 440 Hz sinusoid\nfs = 8000 # sampled at 8 kHz\nT = 5. # lasting 5 seconds\nframesz = 0.050 # with a frame size of 50 milliseconds\nhop = 0.020 # and hop size of 20 milliseconds.\n\n# Create test signal and STFT.\nt = np.linspace(0, T, T*fs, endpoint=False)\nx = np.sin(2*scipy.pi*f0*t)\nX = stft(x, fs, framesz, hop)\n\n# Plot the magnitude spectrogram.\nplt.figure(figsize=(12,8))\nplt.subplot(311)\nplt.imshow(np.absolute(X.T), origin='lower', aspect='auto', interpolation='none')\nplt.ylabel('Frequency')\n\n# Compute the ISTFT.\nxhat = istft(X, fs, T, hop)\n\n# Plot the input and output signals over 0.1 seconds.\nT1 = int(0.1*fs)\n\nplt.subplot(312)\nplt.plot(t[:T1], x[:T1], t[:T1], xhat[:T1])\nplt.ylabel('Amplitude')\n\nplt.subplot(313)\nplt.plot(t[-T1:], x[-T1:], t[-T1:], xhat[-T1:])\nplt.xlabel('Time (seconds)')\nplt.ylabel('Ampltitude')\nplt.show()\n\nf0 = 10\nfs = 1000\nT = 2\nframesz = 0.050\nhop = 0.020\n\ndelta = int(T*fs/4)\n\ny = np.sin(2*np.pi*f0*t) + np.sin(2*np.pi*3*f0*t)\ny[delta] = 3.\ny[-delta] = 3.\n\nY = stft(y, fs, framesz, hop)\n\nplt.imshow(np.absolute(Y.T), origin='lower', aspect='auto', interpolation='none')",
"Benchmark signals",
"# Let's make a function to plot a signal and its specgram\n\ndef tf(signal, fs, w=256, wtime=False, poverlap=None, xlim=None, ylim=None, colorbar=False, vmin=None, vmax=None, filename=None, interpolation=\"bicubic\"):\n\n dt = 1./fs\n n = signal.size\n t = np.arange(0.0, n*dt, dt)\n \n if wtime:\n # Then the window length is time so change to samples\n w *= fs\n w = int(w)\n \n if poverlap:\n # Then overlap is a percentage\n noverlap = int(w * poverlap/100.)\n else:\n noverlap = w - 1\n \n plt.figure(figsize=(12,8))\n ax1 = plt.subplot(211)\n ax1.plot(t, signal)\n if xlim:\n ax1.set_xlim((0,xlim))\n \n ax2 = plt.subplot(212)\n Pxx, freqs, bins, im = plt.specgram(signal, NFFT=w, Fs=fs, noverlap=noverlap, cmap='Greys', vmin=vmin, vmax=vmax, interpolation=interpolation)\n if colorbar: plt.colorbar()\n if ylim:\n ax2.set_ylim((0,ylim))\n if xlim:\n ax2.set_xlim((0,xlim))\n \n if filename: plt.savefig(filename)\n \n plt.show()\n\nsynthetic = np.loadtxt('benchmark_signals/synthetic.txt')\n\ntf(synthetic, 800, w=256, xlim=10, ylim=256, vmin=-30, filename=\"/Users/matt/Pictures/stft_interpolated.png\")\n\nsynthetic.shape\n\nprint(\"length of signal =\", 8192/800.)\n\nSYN = stft(synthetic, 800, 0.128, 0.010)\nprint(SYN.shape)\n\nfreqs = fftfreq(128, d=1/800.)\n\nplt.figure(figsize=(12,4))\nplt.imshow(np.absolute(SYN.T[:SYN.shape[1]/2.,:1000]), origin='lower', aspect='auto', interpolation='none', cmap=\"Greys\")\n#plt.ylim(freqs[0],freqs[-65])\n#plt.colorbar()\nplt.savefig(\"/Users/matt/Pictures/stft_uninterpolated.png\")\n\nfftfreq(128, d=1/800.)\n\n# Gabor boxes\nfw = 1/.128\ntw = .128\n\nprint(fw, \"Hz \", tw, \"s\")",
"Complex display",
"import colorsys\n\nfm = np.amax(np.abs(SYN))\n\nnp.amin(np.angle(SYN))\n\ntimesteps = []\nfor t in SYN.T:\n freqs = []\n for f in t:\n # This is not right, need phase angle and mag:\n #rgb = colorsys.hsv_to_rgb(f.imag, f.real, f.real)\n hue = 0.5 + (np.angle(f) / (2*np.pi))\n rgb = colorsys.hsv_to_rgb(hue, np.abs(f)/fm, 1.0)\n freqs.append(rgb)\n timesteps.append(freqs)\nrgb_arr = np.array(timesteps)\n\nrgb_arr.shape",
"The matplotlib function imshow interprets arrays like this as 3-channel colour images, so we can just display this array directly. We'll chop off the negative frequencies again.",
"plt.figure(figsize=(12,4))\nplt.imshow(rgb_arr[:rgb_arr.shape[0]/2.,...], aspect=\"auto\", origin=\"lower\", interpolation=\"none\")\nplt.savefig('/Users/matt/Pictures/stft_complex.png')\nplt.show()",
"Inverse STFT",
"SYN.shape\n8192/800.\n\nfs=800\nT = 8192/800.\nhop = 0.010\n\nsyn = istft(SYN, fs, T, hop)\n\nplt.figure(figsize=(12,4))\nplt.plot(syn)\nplt.show()",
"PyTFD implementation\nI found a lightweight library for doing all sorts of time-frequency stuff: pytfd. https://github.com/endolith/pytfd",
"import pytfd.stft\n\nN = 256\nt_max = 10\nt = np.linspace(0, t_max, N)\nfs = N/t_max\nf = np.linspace(0, fs, N)\n\nplt.figure(figsize=(15,6))\nfor i, T in enumerate([32, 64, 128]):\n w = scipy.signal.boxcar(T) # Rectangular window\n \n delta1 = np.zeros(N)\n delta1[N/4] = 5\n \n delta2 = np.zeros(N)\n delta2[-N/4] = 5\n \n y = np.sin(2*np.pi*10*t) + np.sin(2*np.pi*30*t) + delta1 + delta2\n Y = pytfd.stft.stft(y, w)\n \n plt.subplot(3, 2, 2*i + 1)\n plt.plot(t, y)\n #plt.xlabel(\"Time\")\n #plt.ylabel(\"Amplitude\")\n #plt.title(r\"Signal\")\n \n plt.subplot(3, 2, 2*i + 2) \n plt.imshow(np.absolute(Y)[N/2:], interpolation=\"none\", aspect=0.5, origin=\"lower\")\n #plt.xlabel(\"Time\")\n #plt.ylabel(\"Frequency\")\n #plt.title(r\"STFT T = %d$T_s$\"%T)\n \nplt.show()",
"Nice! But no inverse STFT.\nUncertainty principle\nThe time and frequency localication of the window determine the localization of the result.",
"w = scipy.signal.hann(256)\n\nplt.plot(w)\nplt.show()\n\nW = rfft(w)\n\nplt.plot(np.absolute(W)[:10])\nplt.show()\n\n1/0.256"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
cristhro/Machine-Learning
|
ejercicio 2/Ejercicio_2.ipynb
|
gpl-3.0
|
[
"Alumnos: Cristhian Rodríguez y Jesus Perucha\n1. Preparar los datos para aplicar algoritmos de ML",
"import sys #only needed to determine Python version number\n\n# Handle table-like data and matrices\nimport numpy as np\nimport pandas as pd\n\n# Visualisation\nimport matplotlib as mpl\nimport matplotlib.pyplot as plt\nimport matplotlib.pylab as pylab\nimport seaborn as sns\n\n# Enable inline plotting\n%matplotlib inline\n\n# Modelo\nfrom sklearn.ensemble import RandomForestClassifier\nfrom sklearn.model_selection import train_test_split\n\nprint(\"ok\")",
"1.1 Importamos las librerias que vamos a necesitar",
"# Realizamos la lectura del fichero y incluimos en el fichero el nombre de las columnas\nnames_features = [\n'age', # Edad del individuo: [17 - 90] media 38, \n'type_employer', # Tipo de trabajo : [ State-gov, Self-emp-not-inc, Private, Federal-gov, Local-gov, ?, Self-emp-inc, Without-pay, Never-worked ]\n'fnlwgt', # Numero de personas encuestadas\n'education', # Nivel mas alto de educacion para el individuo\n'education-num', # Nivel mas alto de educacion en forma numerica [1 - 16]\n'marital-status', # Estado civil de la persona: [ Never-married, Married-civ-spouse, Divorced, Married-spouse-absent, Separated, Married-AF-spouse, Widowed ]\n'occupation', # Ocupacion de la persona: [ Adm-clerical, Exec-managerial, Handlers-cleaners, Prof-specialty, Other-service, Sales, Craft-repair, Transport-moving, Farming-fishing, Machine-op-inspct, Tech-support, ?, Protective-serv, Armed-Forces, Priv-house-serv']\n'relationship', # Relacion familiar: [ Not-in-family, Husband, Wife, Own-child, Unmarried, Other-Relative]\n'race', # Raza de un individuo: [ White, Black, Asian-Pac-Islander, Amer-Indian-Eskimo, Other ]\n'sex', # Sexo de un individuo : [ Male, Female]\n'capital_gain', # Ganancias de capital : [ 0 - 99999 ]\n'capital_loss', # Perdidas de caapital : [ 0 - 4356 ]\n'hours-per-week', # Horas trabajadas por semana : [ 1 - 99 ]\n'native-country', # Pais de origen de la persona : [ ]\n'income' # Indica si una persona gana o no mas de 50.000 \n]\ndf = pd.read_csv('adult.txt' , header=None, names=names_features)\ndf.head(3)\n#print('ok')\n\ncolumna_unica = df['native-country'].unique()\n\n# Obtenemos las edades de las personas\nedad = df['age']\n\n# Hacemos una busqueda con las edades que pertenezcan a un rango\nedad.where( edad < 30 ).where( edad > 18).count()\nprint('ok')\n#\",\".join(ocupacion )",
"• Agrupa los países en grupos\nAgrupamos las personas por ciudad, Las que son de EEUU en uno y los que no son de EEUU en otro",
"\ndef isEEUU(x):\n \n if x == ' United-States':\n return 1\n else :\n return 0\n \n# Aplicamos una funcion a la columna, para convertir los valores en enumerados\nenum_isEEUU = df[\"native-country\"].apply(isEEUU)\n\n# Cramos la nueva columna enum_isEEUU con los datos anteriores\ndf[\"enum_isEEUU\"] = enum_isEEUU \nprint('ok')",
"• Mapeamos las razas",
"from sklearn import preprocessing\nle = preprocessing.LabelEncoder()\n\n# Mapea los valores unicos de type_employer a enumerados\nle.fit(df[\"race\"].unique())\n\n# Muestra como lo ha mapeado\nprint(list(le.classes_))\n\n# Aplicamos una funcion a la columna, para convertir los valores en enumerados\nenum_razas = le.transform( df[\"race\"])\n\n# Cramos la nueva columna enum_razas con los datos anteriores\ndf[\"enum_race\"] = enum_razas\n\n#muestra la inversa\n#list(le.inverse_transform(df[\"enum_razas\"].head()))\nprint('ok')",
"• Mapea el tipo de trabajo",
" from sklearn import preprocessing\nle = preprocessing.LabelEncoder()\n\n# Mapea los valores unicos de type_employer a enumerados\nle.fit(df[\"type_employer\"].unique())\n\n# Transformamos la columna type_employer con 'labelEncoder' en enumerados\nenum_type_employer = le.transform( df[\"type_employer\"])\n\n# Cramos la nueva columna enum_type_employer con los datos anteriores\ndf[\"enum_type_employer\"] = enum_type_employer\nprint('ok')",
"• Mapea el tipo de familia",
"from sklearn import preprocessing\nle = preprocessing.LabelEncoder()\n\nle.fit(df[\"relationship\"].unique())\n\nenum_relationship = le.transform( df[\"relationship\"])\n\ndf[\"enum_relationship\"] = enum_relationship\n\n#muestra la inversa\n#list(le.inverse_transform(df[\"enum_rel\"].head()))\nprint('ok')",
"• Mapea la ocupacion",
"from sklearn import preprocessing\nle = preprocessing.LabelEncoder()\nle.fit(df[\"occupation\"].unique())\ndf[\"enum_occupation\"] = le.transform( df[\"occupation\"])\nprint('ok')",
"• Mapea las ganancias (>50k o <=50k)",
"from sklearn import preprocessing\nle = preprocessing.LabelEncoder()\nle.fit(df[\"income\"].unique())\n\n# Muestra como lo ha mapeado\nprint(list(le.classes_))\n\ndf[\"enum_income\"] = le.transform( df[\"income\"])\nprint('ok')",
"• Mapea el sexo",
"from sklearn import preprocessing\nle = preprocessing.LabelEncoder()\nle.fit(df[\"sex\"].unique())\n\n# Muestra como lo ha mapeado\nprint(list(le.classes_))\n\ndf[\"enum_sex\"] = le.transform( df[\"sex\"])\nprint('ok')",
"• Mapea el estado civil de cada persona",
"from sklearn import preprocessing\nle = preprocessing.LabelEncoder()\nle.fit(df[\"marital-status\"].unique())\n\n# Muestra como lo ha mapeado\nprint(list(le.classes_))\n\ndf[\"enum_marital-status\"] = le.transform( df[\"marital-status\"])\nprint('ok')",
"2. Haz visualizaciones de diferentes valores con seaborn.\nDescribe lo que ves en las visualizaciones",
"plt.title('Personas que ganan mas de 50 o lo contrario', y=1.1, size=15)\nsns.countplot('income', data=df)\n\nplt.title('Numero de personas que ganan mas o menos de 50k diferenciadas por sexos', size=20, y=1.1)\nsns.countplot(x = 'income', hue='sex', data=df)\n\nplt.title('media de las que ganan mas o menos de 50k diferenciadas por sexos', size=20, y=1.1)\nsns.barplot(x = 'income', y='enum_sex',data=df)\n\nplt.title('Numero de personas que ganan mas o menos de 50k diferenciadas por el tipo de trabajador', size=20, y=1.1)\nsns.countplot(x = 'income', hue='type_employer', data=df)",
"En esta grafica podemos ver que hay menos mujeres que ganan mas de 50k.",
"plt.title('Media de las ganancias diferenciadas por el tipo de trabajador', size=10, y=1.1)\nsns.barplot(x = 'enum_type_employer', y='enum_income', data=df, hue=\"type_employer\")",
"Esta grafica nos muestra que los que mas ganan son los trabajadores con los puestos:\n\nSelf-emp-not-inc\nFederal-gov\nLocal-gov",
"plt.title('Numero de personas que ganan mas o menos de 50k diferenciadas por la raza', size=20, y=1.1)\nsns.countplot(x = 'income', hue='race', data=df)",
"Aqui podemos ver que las personas de raza blanca son las mas numerosas.\nDel mismo modo se ve que los blancos son los que ganan mas de 50k.",
"plt.title('Media de las ganancias diferenciadas por la raza', size=20, y=1.1)\nsns.barplot(x = 'enum_race', y='enum_income', data=df, hue=\"race\")",
"En cambio al hacer la media de las personas nos da como resultado que que los que mas ganan son los asiaticos.",
"plt.title('Numero de personas que ganan mas de 50k diferenciadas por la relacion familiar', size=20, y=1.1)\nsns.countplot(x = 'income', hue='relationship', data=df)",
"En esta grafica se aprecia que el numero de personas que ganan mas de 50k son los 'maridos'",
"plt.title('Media de las ganancias diferenciadas por la relacion familiar', size=20, y=1.1)\nsns.barplot(x = 'enum_relationship', y='enum_income', data=df, hue=\"relationship\")",
"Aqui podemos ver las 'esposas' tienen la mayor media de ganancias (> 50k)",
"plt.title('Numero de personas que ganan mas de 50k diferenciadas por la ocupacion', size=20, y=1.1)\nsns.countplot(x = 'income', hue='occupation', data=df)\n\nplt.title('Media de las ganancias diferenciadas por la relacion familiar', size=20, y=1.1)\nsns.barplot(x = 'enum_occupation', y='enum_income', data=df, hue=\"occupation\")\n\nplt.title('Distribucion de las horas por semana', size=20, y=1.1)\nsns.distplot(df['hours-per-week'])\ndf['hours-per-week'].describe()",
"Podemos observar que la mayoria de las personas trabajan 40h a la semana\nCorrelacion",
"colormap = plt.cm.viridis\nplt.figure(figsize=(12, 12))\nplt.title(\"Correlacion entre todas las features\", y=1.05, size=20)\nsns.heatmap(df.corr(), linewidths=0.1, square=True, vmax=1.0, annot=True, cmap=colormap)\n\nfeatures = [\"hours-per-week\",\"enum_sex\",\"enum_type_employer\",\"enum_race\",\"enum_income\",\"age\",\"education-num\",\"capital_gain\",\"capital_loss\",\"enum_marital-status\",\"enum_occupation\",\"enum_relationship\"]\n\nX = df[features] # features\nvar_correlations = {c: np.abs(X['enum_income'].corr(df[c])) for c in X.columns}\n\ncorr_dataframe = pd.DataFrame(var_correlations, index=['Correlation']).T.sort_values(by='Correlation')\nplt.title('Correlacion entre las features y la ganancia', y=1.1, size=20)\nplt.barh(range(corr_dataframe.shape[0]), corr_dataframe['Correlation'].values, tick_label=X.columns.values)\n\ng = sns.FacetGrid(df, col='enum_income')\ng.map(plt.hist, 'education-num', bins=20)",
"4. Utiliza random forest con diferentes parámetros, diferentes features, y cross_validation para conseguir el mejor modelo posible.",
"from sklearn import svm\nfrom sklearn.model_selection import cross_val_score\nimport pandas as pd\nimport numpy as np\n\n# modelo\nmodelo = RandomForestClassifier(n_estimators=9, min_samples_split=2 ,min_samples_leaf =1, n_jobs = 49)\n\n# Seleccionamos las features con las que vamos a usar el algoritmo\nfeatures = [\"education-num\",\"capital_gain\",\"capital_loss\",\"enum_marital-status\",\"enum_occupation\",\"enum_relationship\"]\n\nX = df[features] # features\ny = df[\"enum_income\"] # targets\n\n#SCORE\nscores = cross_val_score(modelo, X, y, cv = 5 )\nprint(scores.mean())\n\n\nfrom sklearn.grid_search import GridSearchCV\n# define the parameter values that should be searched\nN_estimators = list(range(1, 20))\nMin_samples_split= list(range(2, 10))\nMin_samples_leaf = list(range(1, 2))\nN_jobs = list(range(1, 20))\nprint(Min_samples_leaf)\nprint(Min_samples_split)\n\n# create a parameter grid: map the parameter names to the values that should be searched\nparam_grid = dict(n_estimators=N_estimators, min_samples_split=Min_samples_split ,min_samples_leaf =Min_samples_leaf, n_jobs = N_jobs)\n\n# instantiate and fit the grid\ngrid = GridSearchCV(modelo, param_grid, cv=5, scoring='accuracy')\ngrid.fit(X, y)\n\n\n# view the complete results\ngrid.grid_scores_\n",
"TESTS"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
hainm/bag_of_tips
|
pytraj_users/pytraj_md.ipynb
|
bsd-2-clause
|
[
"%matplotlib inline\nimport pytraj as pt\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport parmed as pmd\n\nfrom matplotlib import rcParams\nplt.rcParams['xtick.direction'] = 'in'\nplt.rcParams['ytick.direction'] = 'in'",
"Read in the trajectory file (.dcd) and the topology file (.psf)",
"traj = pt.Trajectory('step5_production.dcd', '../step3_pbcsetup.xplor.ext.psf')\nprint(traj)",
"Calculate the water-water radial distribution function. In statistical mechanics, the radial distribution function (or pair correlation function) g(r) in a system of particles (atoms, molecules, colloids, etc.) describes how density varies as a function of distance from a reference particle.\n$$g(r)=\\frac{1}{N\\rho}\\sum_{i=1}^N\\sum_{k\\neq i}^N\\langle\\delta(r+r_k-r_i)\\rangle$$\nwhere $\\rho$ is the number density and $\\delta()$ is the Dirac delta function",
"goo = pt.rdf(traj, solvent_mask=':TIP3@OH2', solute_mask=':TIP3@OH2', bin_spacing=0.05, maximum=8.)\ngoh1 = pt.rdf(traj, solvent_mask=':TIP3@OH2', solute_mask=':TIP3@H1', bin_spacing=0.05, maximum=8.)\ngoh2 = pt.rdf(traj, solvent_mask=':TIP3@OH2', solute_mask=':TIP3@H2', bin_spacing=0.05, maximum=8.)",
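To make the definition above concrete (a sketch only — `pt.rdf` additionally normalizes the counts by shell volume and number density), the raw pair counting behind g(r) can be written in plain Python; `pair_distance_histogram` is an illustrative helper, not a pytraj function:

```python
import math

def pair_distance_histogram(points, bin_width, r_max):
    """Histogram of pairwise distances between 3D points (unnormalized g(r))."""
    n_bins = int(r_max / bin_width)
    hist = [0] * n_bins
    # count every unordered pair once
    for i in range(len(points)):
        for k in range(i + 1, len(points)):
            r = math.dist(points[i], points[k])
            if r < r_max:
                hist[int(r / bin_width)] += 1
    return hist

pts = [(0, 0, 0), (1, 0, 0), (0, 2, 0)]
hist = pair_distance_histogram(pts, 1.0, 4.0)
print(hist)  # [0, 1, 2, 0]
```

Dividing each bin by the volume of its spherical shell and by the average density would turn these raw counts into the normalized g(r) that the plots below show.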
"Plot the g(r) versus r for O-O and O-H",
"plt.plot(goo[0],goo[1])\nplt.plot(goo[0],goh1[1])\nplt.plot((0, 8), (1, 1), 'k--')\nplt.xlim([1.5,8])\nplt.ylim([0,3])\nplt.xlabel('$r_{ox} / \\AA$', fontsize=14)\nplt.ylabel('$g(r_{ox})$', fontsize=14)\nplt.show()",
"Calculate molecule - water rdf",
"gon = pt.rdf(traj, solvent_mask=':TIP3@OH2', solute_mask=':DOP@N23', bin_spacing=0.05, maximum=8.)\ngoh231 = pt.rdf(traj, solvent_mask=':TIP3@OH2', solute_mask=':DOP@H231', bin_spacing=0.05, maximum=8.)\ngoo1 = pt.rdf(traj, solvent_mask=':TIP3@OH2', solute_mask=':DOP@O1', bin_spacing=0.05, maximum=8.)\ngoh4 = pt.rdf(traj, solvent_mask=':TIP3@OH2', solute_mask=':DOP@H4', bin_spacing=0.05, maximum=8.)\nghwn23 = pt.rdf(traj, solvent_mask=':TIP3@H1', solute_mask=':DOP@N23', bin_spacing=0.05, maximum=8.)\nghwh231 = pt.rdf(traj, solvent_mask=':TIP3@H1', solute_mask=':DOP@H231', bin_spacing=0.05, maximum=8.)\nghwo1 = pt.rdf(traj, solvent_mask=':TIP3@H1', solute_mask=':DOP@O1', bin_spacing=0.05, maximum=8.)\nghwh4 = pt.rdf(traj, solvent_mask=':TIP3@H1', solute_mask=':DOP@H4', bin_spacing=0.05, maximum=8.)",
"Plot the g(r) versus r for Ow-x",
"plt.plot(gon[0],gon[1],label='OW...NH$_{3}$')\nplt.plot(goh231[0],goh231[1],label='OW...HN')\nplt.plot(goo1[0],goo1[1],label='OW...O1')\nplt.plot(goh4[0],goh4[1],label='OW...HO1')\nplt.plot((0, 8), (1, 1), 'k--')\nplt.xlim([1.5,8])\nplt.ylim([0,0.0015])\nplt.xlabel('$r_{ox} / \\AA$', fontsize=14)\nplt.ylabel('$g(r_{ox})$', fontsize=14)\nplt.legend(loc='upper right',frameon=False, bbox_to_anchor=(1.35, 0.95),numpoints=1)\nplt.show()",
"Plot the g(r) versus r for Hw-x",
"plt.plot(ghwn23[0],ghwn23[1],label='HW...NH$_{3}$')\nplt.plot(ghwh231[0],ghwh231[1],label='HW...HN')\nplt.plot(ghwo1[0],ghwo1[1],label='HW...O1')\nplt.plot(ghwh4[0],ghwh4[1],label='HW...HO1')\nplt.plot((0, 8), (1, 1), 'k--')\nplt.xlim([1.5,8])\nplt.ylim([0,0.0008])\nplt.xlabel('$r_{hwx} / \\AA$', fontsize=14)\nplt.ylabel('$g(r_{hwx})$', fontsize=14)\nplt.legend(loc='upper right',frameon=False, bbox_to_anchor=(1.35, 0.95),numpoints=1)\nplt.show()",
"Print the distance between dop (residue 0) and residue 1 (H2O)",
"data = pt.calc_pairdist(traj, mask=':TIP3@OH2', mask2=':DOP@N23')\nprint(data[:,0][:1])\n\natom_indices = pt.select(':DOP@N23', traj.top)\nprint(atom_indices)",
"Calculate a dihedral angle as a function of time",
"dataset = pt.dihedral(traj, ':DOP@N23 :DOP@C2 :DOP@C3 :DOP@C1')",
"Plot the results",
"time = [i*0.002 for i in range(len(dataset))]\nplt.plot(time,dataset, 'ro', label='$\\phi$')\nplt.xlabel('Time / ns', fontsize=14)\nplt.ylabel('Angle / $\\circ$', fontsize=14)\nplt.legend(loc='upper right',frameon=False, bbox_to_anchor=(1.2, 0.95),numpoints=1)\nplt.show()\n\nhist0 = plt.hist(dataset,25)\ntmp = plt.setp(hist0[2], 'facecolor', 'r', 'alpha', 0.75)"
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
AllenDowney/ThinkStats2
|
solutions/chap04soln.ipynb
|
gpl-3.0
|
[
"Chapter 4\nExamples and Exercises from Think Stats, 2nd Edition\nhttp://thinkstats2.com\nCopyright 2016 Allen B. Downey\nMIT License: https://opensource.org/licenses/MIT",
"import numpy as np\n\nfrom os.path import basename, exists\n\n\ndef download(url):\n filename = basename(url)\n if not exists(filename):\n from urllib.request import urlretrieve\n\n local, _ = urlretrieve(url, filename)\n print(\"Downloaded \" + local)\n\n\ndownload(\"https://github.com/AllenDowney/ThinkStats2/raw/master/code/thinkstats2.py\")\ndownload(\"https://github.com/AllenDowney/ThinkStats2/raw/master/code/thinkplot.py\")\ndownload(\"https://github.com/AllenDowney/ThinkStats2/raw/master/code/nsfg.py\")\ndownload(\"https://github.com/AllenDowney/ThinkStats2/raw/master/code/first.py\")\n\n\ndownload(\"https://github.com/AllenDowney/ThinkStats2/raw/master/code/2002FemPreg.dct\")\ndownload(\n \"https://github.com/AllenDowney/ThinkStats2/raw/master/code/2002FemPreg.dat.gz\"\n)\n\nimport thinkstats2\nimport thinkplot",
"Examples\nOne more time, I'll load the data from the NSFG.",
"import first\n\nlive, firsts, others = first.MakeFrames()",
"And compute the distribution of birth weight for first babies and others.",
"first_wgt = firsts.totalwgt_lb\nfirst_wgt_dropna = first_wgt.dropna()\nprint('Firsts', len(first_wgt), len(first_wgt_dropna))\n \nother_wgt = others.totalwgt_lb\nother_wgt_dropna = other_wgt.dropna()\nprint('Others', len(other_wgt), len(other_wgt_dropna))\n\nfirst_pmf = thinkstats2.Pmf(first_wgt_dropna, label='first')\nother_pmf = thinkstats2.Pmf(other_wgt_dropna, label='other')",
"We can plot the PMFs on the same scale, but it is hard to see if there is a difference.",
"width = 0.4 / 16\n\n# plot PMFs of birth weights for first babies and others\nthinkplot.PrePlot(2)\nthinkplot.Hist(first_pmf, align='right', width=width)\nthinkplot.Hist(other_pmf, align='left', width=width)\nthinkplot.Config(xlabel='Weight (pounds)', ylabel='PMF')",
"PercentileRank computes the fraction of scores less than or equal to your_score.",
"def PercentileRank(scores, your_score):\n count = 0\n for score in scores:\n if score <= your_score:\n count += 1\n\n percentile_rank = 100.0 * count / len(scores)\n return percentile_rank",
"If this is the list of scores.",
"t = [55, 66, 77, 88, 99]",
"If you got the 88, your percentile rank is 80.",
"PercentileRank(t, 88)",
"Percentile takes a percentile rank and computes the corresponding percentile.",
"def Percentile(scores, percentile_rank):\n scores.sort()\n for score in scores:\n if PercentileRank(scores, score) >= percentile_rank:\n return score",
"The median is the 50th percentile, which is 77.",
"Percentile(t, 50)",
"Here's a more efficient way to compute percentiles.",
"def Percentile2(scores, percentile_rank):\n scores.sort()\n index = percentile_rank * (len(scores)-1) // 100\n return scores[int(index)]",
"Let's hope we get the same answer.",
"Percentile2(t, 50)",
"The Cumulative Distribution Function (CDF) is almost the same as PercentileRank. The only difference is that the result is 0-1 instead of 0-100.",
"def EvalCdf(sample, x):\n count = 0.0\n for value in sample:\n if value <= x:\n count += 1\n\n prob = count / len(sample)\n return prob",
"In this list",
"t = [1, 2, 2, 3, 5]",
"We can evaluate the CDF for various values:",
"EvalCdf(t, 0), EvalCdf(t, 1), EvalCdf(t, 2), EvalCdf(t, 3), EvalCdf(t, 4), EvalCdf(t, 5)",
"Here's an example using real data, the distribution of pregnancy length for live births.",
"cdf = thinkstats2.Cdf(live.prglngth, label='prglngth')\nthinkplot.Cdf(cdf)\nthinkplot.Config(xlabel='Pregnancy length (weeks)', ylabel='CDF', loc='upper left')",
"Cdf provides Prob, which evaluates the CDF; that is, it computes the fraction of values less than or equal to the given value. For example, 94% of pregnancy lengths are less than or equal to 41.",
"cdf.Prob(41)",
"Value evaluates the inverse CDF; given a fraction, it computes the corresponding value. For example, the median is the value that corresponds to 0.5.",
"cdf.Value(0.5)",
"In general, CDFs are a good way to visualize distributions. They are not as noisy as PMFs, and if you plot several CDFs on the same axes, any differences between them are apparent.",
"first_cdf = thinkstats2.Cdf(firsts.totalwgt_lb, label='first')\nother_cdf = thinkstats2.Cdf(others.totalwgt_lb, label='other')\n\nthinkplot.PrePlot(2)\nthinkplot.Cdfs([first_cdf, other_cdf])\nthinkplot.Config(xlabel='Weight (pounds)', ylabel='CDF')",
"In this example, we can see that first babies are slightly, but consistently, lighter than others.\nWe can use the CDF of birth weight to compute percentile-based statistics.",
"weights = live.totalwgt_lb\nlive_cdf = thinkstats2.Cdf(weights, label='live')",
"Again, the median is the 50th percentile.",
"median = live_cdf.Percentile(50)\nmedian",
"The interquartile range is the interval from the 25th to 75th percentile.",
"iqr = (live_cdf.Percentile(25), live_cdf.Percentile(75))\niqr",
"We can use the CDF to look up the percentile rank of a particular value. For example, my second daughter was 10.2 pounds at birth, which is near the 99th percentile.",
"live_cdf.PercentileRank(10.2)",
"Next we draw a random sample from the observed weights and map each weight to its percentile rank.",
"sample = np.random.choice(weights, 100, replace=True)\nranks = [live_cdf.PercentileRank(x) for x in sample]",
"The resulting list of ranks should be approximately uniform from 0 to 100.",
"rank_cdf = thinkstats2.Cdf(ranks)\nthinkplot.Cdf(rank_cdf)\nthinkplot.Config(xlabel='Percentile rank', ylabel='CDF')",
"That observation is the basis of Cdf.Sample, which generates a random sample from a Cdf. Here's an example.",
"resample = live_cdf.Sample(1000)\nthinkplot.Cdf(live_cdf)\nthinkplot.Cdf(thinkstats2.Cdf(resample, label='resample'))\nthinkplot.Config(xlabel='Birth weight (pounds)', ylabel='CDF')",
"This confirms that the random sample has the same distribution as the original data.\nExercises\nExercise: How much did you weigh at birth? If you don’t know, call your mother or someone else who knows. Using the NSFG data (all live births), compute the distribution of birth weights and use it to find your percentile rank. If you were a first baby, find your percentile rank in the distribution for first babies. Otherwise use the distribution for others. If you are in the 90th percentile or higher, call your mother back and apologize.",
"# Solution\n\nfirst_cdf.PercentileRank(8.5)\n\n# Solution\n\nother_cdf.PercentileRank(8.5)",
"Exercise: The numbers generated by numpy.random.random are supposed to be uniform between 0 and 1; that is, every value in the range should have the same probability.\nGenerate 1000 numbers from numpy.random.random and plot their PMF. What goes wrong?\nNow plot the CDF. Is the distribution uniform?",
"# Solution\n\nt = np.random.random(1000)\n\n# Solution\n\npmf = thinkstats2.Pmf(t)\nthinkplot.Pmf(pmf, linewidth=0.1)\nthinkplot.Config(xlabel='Random variate', ylabel='PMF')\n\n# Solution\n\ncdf = thinkstats2.Cdf(t)\nthinkplot.Cdf(cdf)\nthinkplot.Config(xlabel='Random variate', ylabel='CDF')"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
google/eng-edu
|
ml/cc/prework/fr/intro_to_pandas.ipynb
|
apache-2.0
|
[
"Copyright 2017 Google LLC.",
"# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.",
"# Quick Introduction to pandas\nLearning objectives:\n * Gain an introduction to the DataFrame and Series data structures of the pandas library\n * Access and manipulate data within a DataFrame and a Series\n * Import data from a CSV file into a pandas DataFrame\n * Reindex a DataFrame to shuffle data\npandas is a column-oriented data analysis API. It's a great tool for handling and analyzing input data, and many machine learning frameworks accept pandas data structures as inputs.\nA comprehensive introduction to the pandas API would span many pages, but the core concepts are fairly straightforward, so we present them below. For a more complete reference, consult the pandas documentation site, which contains extensive documentation and many tutorials.\n## Basic Concepts\nThe following line of code imports the pandas API and prints its version:",
"from __future__ import print_function\n\nimport pandas as pd\npd.__version__",
"The primary data structures in pandas fall into two categories:\n\nThe DataFrame, a relational data table, with labeled rows and columns\nThe Series, which is a single column. A DataFrame contains one or more Series, each with a label.\n\nThe DataFrame is a commonly used abstraction for data manipulation. Spark and R provide similar implementations.\nOne way to create a Series is to construct a Series object. For example:",
"pd.Series(['San Francisco', 'San Jose', 'Sacramento'])",
"DataFrame objects can be created by passing a dict mapping column names (strings) to their respective Series. If the Series don't match in length, the missing values are filled with special NA/NaN values. Example:",
"city_names = pd.Series(['San Francisco', 'San Jose', 'Sacramento'])\npopulation = pd.Series([852469, 1015785, 485199])\n\npd.DataFrame({ 'City name': city_names, 'Population': population })",
"Most of the time, though, you load an entire file into a DataFrame. In the following example, the loaded file contains housing data for California. Run the following cell to load the data and define the features:",
"california_housing_dataframe = pd.read_csv(\"https://download.mlcc.google.com/mledu-datasets/california_housing_train.csv\", sep=\",\")\ncalifornia_housing_dataframe.describe()",
"In the example above, the DataFrame.describe method displays interesting statistics about a DataFrame. The DataFrame.head function is also useful for displaying the first few records of a DataFrame:",
"california_housing_dataframe.head()",
"Another powerful feature of pandas is graphing. With DataFrame.hist, for example, you can quickly check how the values in a column are distributed:",
"california_housing_dataframe.hist('housing_median_age')",
"## Accessing Data\nYou can access DataFrame data using familiar Python list or dict operations:",
"cities = pd.DataFrame({ 'City name': city_names, 'Population': population })\nprint(type(cities['City name']))\ncities['City name']\n\nprint(type(cities['City name'][1]))\ncities['City name'][1]\n\nprint(type(cities[0:2]))\ncities[0:2]",
"In addition, pandas provides an extremely rich API with advanced indexing and selection features, which is unfortunately too extensive to cover in detail here.\n## Manipulating Data\nYou may apply Python's basic arithmetic operations to Series. For example:",
"population / 1000.",
"NumPy is a popular toolkit for scientific computing. pandas Series can be used as arguments to most NumPy functions:",
"import numpy as np\n\nnp.log(population)",
"The Series.apply method is suited to more complex single-column transformations. Like Python's map function, it accepts as an argument a lambda function, which is applied to each value.\nThe example below creates a new Series that indicates whether the population exceeds one million:",
"population.apply(lambda val: val > 1000000)",
"Modifying DataFrames is also very simple. For example, the following code adds two Series to an existing DataFrame:",
"cities['Area square miles'] = pd.Series([46.87, 176.53, 97.92])\ncities['Population density'] = cities['Population'] / cities['Area square miles']\ncities",
"## Exercise #1\nModify the cities table by adding a new boolean column that is True if and only if both of the following conditions are met:\n\nThe city is named after a saint.\nThe city has an area greater than 50 square miles.\n\nNote: To combine boolean Series, use the bitwise operators, not the traditional boolean operators. For example, for logical and, use & instead of and.\nHint: \"San\" in Spanish means \"saint\".",
"# Your code here",
"### Solution\nClick below for a solution.",
"cities['Is wide and has saint name'] = (cities['Area square miles'] > 50) & cities['City name'].apply(lambda name: name.startswith('San'))\ncities",
"## Indexes\nBoth Series and DataFrame objects also define an index property that assigns an identifier to each Series item or DataFrame row. \nBy default, pandas assigns index values that reflect the ordering of the source data. Once assigned, these values are stable; they do not change when the data is reordered.",
"city_names.index\n\ncities.index",
"Call DataFrame.reindex to manually reorder the rows. For example, the following has the same effect as sorting the data by city name:",
"cities.reindex([2, 0, 1])",
"Reindexing is a great way to shuffle (randomly reorder) the data in a DataFrame. In the example below, the array-like index is passed to NumPy's random.permutation function, which shuffles its values. Calling reindex with this shuffled array shuffles the rows of the DataFrame in the same way.\nTry running the following cell multiple times!",
"cities.reindex(np.random.permutation(cities.index))",
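As a plain-Python aside (illustrative only), the key property of np.random.permutation used above is that it returns a reordering that keeps every index label exactly once:

```python
import random

index = list(range(5))
shuffled = index[:]       # copy the index labels...
random.shuffle(shuffled)  # ...and reorder them in place

# A permutation changes the order but never the set of labels,
# which is why reindexing with it loses no rows.
print(sorted(shuffled) == index)  # True
```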
"For more information, see the Index documentation.\n## Exercise #2\nThe reindex method allows index values that are not in the original DataFrame's index. Try it and see what happens if you use such values! Why do you think this is allowed?",
"# Your code here",
"### Solution\nClick below for a solution.\nIf your reindex input array includes values not in the original DataFrame's index, reindex adds new rows for these \\'missing\\' indices and populates all corresponding columns with NaN values:",
"cities.reindex([0, 4, 5, 2])",
"This behavior is desirable because index values are often strings pulled from the actual data (the pandas documentation for reindex gives an example in which the index values are browser names).\nIn this case, allowing \\'missing\\' indices makes it easy to reindex using an external list, as you don't have to worry about sanitizing the input data."
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
ramseylab/networkscompbio
|
class21_reveal_python3_template.ipynb
|
apache-2.0
|
[
"Class 21: joint entropy and the REVEAL algorithm\nWe'll use the bladder cancer gene expression data to test out the REVEAL algorithm. First, we'll load the data and filter to include only genes for which the median log2 expression level is > 12 (as we did in class session 20). That should give us 164 genes to work with.\nImport the Python modules that we will need for this exercise",
"import pandas\nimport numpy\nimport itertools",
"Load the data file shared/bladder_cancer_genes_tcga.txt into a pandas.DataFrame, convert it to a numpy.ndarray matrix, and print the matrix dimensions",
"gene_matrix_for_network_df = \ngene_matrix_for_network = \nprint(gene_matrix_for_network.shape)",
"Filter the matrix to include only rows for which the column-wise median is > 14; matrix should now be 13 x 414.",
"\ngenes_keep = numpy.where( ## fill in here ## > 14)\nmatrix_filt = \nmatrix_filt.shape\nN = matrix_filt.shape[0]\nM = matrix_filt.shape[1]",
"Binarize the gene expression matrix using the mean value as a breakpoint, turning it into a NxM matrix of booleans (True/False). Call it gene_matrix_binarized.",
"gene_matrix_binarized = ## fill in here ##",
"Test your matrix by printing the first four columns of the first four rows:",
"gene_matrix_binarized[0:4,0:4]",
"The core part of the REVEAL algorithm is a function that can compute the joint entropy of a collection of binary (TRUE/FALSE) vectors X1, X2, ..., Xn (where length(X1) = length(Xi) = M).\nWrite a function entropy_multiple_vecs that takes as its input a nxM matrix (where n is the number of variables, i.e., genes, and M is the number of samples in which gene expression was measured). The function should use the log2 definition of the Shannon entropy. It should return the joint entropy H(X1, X2, ..., Xn) as a scalar numeric value. I have created a skeleton version of this function for you, in which you can fill in the code. I have also created some test code that you can use to test your function, below.",
"def entropy_multiple_vecs(binary_vecs):\n ## use shape to get the numbers of rows and columns as [n,M]\n [n, M] = binary_vecs.shape\n \n # make a \"M x n\" dataframe from the transpose of the matrix binary_vecs\n binary_df = \n \n # use the groupby method to obtain a data frame of counts of unique occurrences of the 2^n possible logical states\n binary_df_counts = \n \n # divide the matrix of counts by M, to get a probability matrix\n probvec = \n \n # compute the shannon entropy using the formula\n hvec = \n \n return numpy.sum(hvec)",
"This test case should produce the value 3.938:",
"print(entropy_multiple_vecs(gene_matrix_binarized[0:4,]))",
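As an independent cross-check (a sketch only — it deliberately avoids the pandas groupby approach the skeleton above asks for), the joint entropy of binary row-vectors can be computed by counting the distinct column states with a Counter:

```python
import math
from collections import Counter

def joint_entropy(rows):
    """Shannon joint entropy (log2) of binary row-vectors.

    rows: list of equal-length lists of 0/1 values; each column is one
    observed joint state of the n variables.
    """
    n_samples = len(rows[0])
    # Count how often each joint state (a column tuple) occurs.
    counts = Counter(tuple(row[j] for row in rows) for j in range(n_samples))
    probs = [c / n_samples for c in counts.values()]
    return -sum(p * math.log2(p) for p in probs)

# Two independent fair coins -> joint entropy of 2 bits.
x1 = [0, 0, 1, 1]
x2 = [0, 1, 0, 1]
print(joint_entropy([x1, x2]))  # 2.0
```

The same counting applied to the first four binarized genes should reproduce the 3.938 quoted above.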
"Example implementation of the REVEAL algorithm:\nWe'll go through stage 3",
"ratio_thresh = 0.1\ngenes_to_fit = list(range(0,N))\nstage = 0\nregulators = [None]*N\nentropies_for_stages = [None]*N\nmax_stage = 4\n\nentropies_for_stages[0] = numpy.zeros(N)\n\nfor i in range(0,N):\n single_row_matrix = gene_matrix_binarized[i,:,None].transpose()\n entropies_for_stages[0][i] = entropy_multiple_vecs(single_row_matrix)\n \ngenes_to_fit = set(range(0,N))\n\nfor stage in range(1,max_stage + 1):\n for gene in genes_to_fit.copy():\n # we are trying to find regulators for gene \"gene\"\n poss_regs = set(range(0,N)) - set([gene])\n poss_regs_combs = [list(x) for x in itertools.combinations(poss_regs, stage)]\n HGX = numpy.array([ entropy_multiple_vecs(gene_matrix_binarized[[gene] + poss_regs_comb,:]) for poss_regs_comb in poss_regs_combs ])\n HX = numpy.array([ entropy_multiple_vecs(gene_matrix_binarized[poss_regs_comb,:]) for poss_regs_comb in poss_regs_combs ])\n HG = entropies_for_stages[0][gene]\n min_value = numpy.min(HGX - HX)\n if HG - min_value >= ratio_thresh * HG:\n regulators[gene]=poss_regs_combs[numpy.argmin(HGX - HX)]\n genes_to_fit.remove(gene)\nregulators"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
info-370/classification
|
decision-trees/decision-tree.ipynb
|
mit
|
[
"Decision Trees\nAn introductory example of decision trees using data from this interactive visualization. This is an over-simplified example that doesn't use normalization as a pre-processing step, or cross validation as a mechanism for tuning the model.\nSet up",
"from __future__ import division\n\n# Load packages\nimport pandas as pd\nfrom sklearn import tree\nfrom sklearn.model_selection import train_test_split\nfrom sklearn.neighbors import KNeighborsClassifier\n%matplotlib inline\nimport matplotlib.pyplot as plt\nimport numpy as np\n\n# Read data\ndf = pd.read_csv('./data/housing-data.csv')",
"Data Exploration\nSome basic exploratory analysis before creating a decision tree",
"# What is the shape of our data?\n\n\n# What variables are present in the dataset?\n\n\n# What is the distribution of our outcome variable `in_sf`?\n\n\n# How does elevation vary for houses in/not-in sf (I suggest an overlapping histogram)\n",
"Build a decision tree using all variables",
"# Create variables to hold features and outcomes separately\n\n\n# Split data into testing and training sets\n\n\n# Create a classifier and fit your features to your outcome\n",
"Assess Model Fit",
"# Generate a set of predictions for your test data\n\n\n# Calculate accuracy for our test set (percentage of the time that prediction == truth)\n\n\n# By comparison, how well do we predict in our training data?\n",
"Show the tree\nA little bit of a pain, though there are some alternatives to the documentation presented here. You may have to do the following:\n```\nInstall graphviz in your terminal\nconda install graphviz\n```\nI then suggest the following solution:\ntree.export_graphviz(clf, out_file=\"mytree.dot\")\nwith open(\"mytree.dot\") as f:\n dot_graph = f.read()\ngraphviz.Source(dot_graph)",
"# Create tree diagram\n",
"Comparison to KNN\nPurely out of curiosity, how well does this model fit with KNN (for K=3)?",
"# Create a knn classifier\n\n# Fit our classifier to our training data\n\n# Predict on our test data and assess accuracy\n"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
jorisvandenbossche/DS-python-data-analysis
|
notebooks/python_recap/python_rehearsal.ipynb
|
bsd-3-clause
|
[
"<p><font size=\"6\"><b> Python rehearsal</b></font></p>\n\n\nDS Data manipulation, analysis and visualization in Python\nMay/June, 2021\n© 2021, Joris Van den Bossche and Stijn Van Hoey (jorisvandenbossche@gmail.com, stijnvanhoey@gmail.com). Licensed under CC BY 4.0 Creative Commons\n\n\nI measure air pressure",
"pressure_hPa = 1010 # hPa",
"<div class=\"alert alert-info\">\n <b>REMEMBER</b>: Use meaningful variable names\n</div>\n\nI'm measuring at sea level; what would the air pressure of this measured value be at other altitudes?\nI'm curious what the equivalent pressure would be at other altitudes...\nThe barometric formula, sometimes called the exponential atmosphere or isothermal atmosphere, is a formula used to model how the pressure (or density) of the air changes with altitude. The pressure drops approximately by 11.3 Pa per meter in the first 1000 meters above sea level.\n$$P=P_0 \\cdot \\exp \\left[\\frac{-g \\cdot M \\cdot h}{R \\cdot T}\\right]$$\nsee https://www.math24.net/barometric-formula/ or https://en.wikipedia.org/wiki/Atmospheric_pressure\nwhere:\n* $T$ = standard temperature, 288.15 (K)\n* $R$ = universal gas constant, 8.3144598 (J/mol/K)\n* $g$ = gravitational acceleration, 9.81 (m/s$^2$)\n* $M$ = molar mass of Earth's air, 0.02896 (kg/mol)\nand:\n* $P_0$ = sea level pressure (hPa)\n* $h$ = height above sea level (m)\nLet's implement this...\nTo calculate the formula, I need the exponential operator. Pure Python provides a number of mathematical functions, e.g. https://docs.python.org/3.7/library/math.html#math.exp within the math library",
"import math\n\n# ...modules and libraries...",
"<div class=\"alert alert-danger\">\n <b>DON'T</b>: <code>from os import *</code>. Just don't!\n</div>",
"standard_temperature = 288.15\ngas_constant = 8.31446\ngravit_acc = 9.81\nmolar_mass_earth = 0.02896",
"<div class=\"alert alert-success\">\n <b>EXERCISE</b>: \n <ul>\n <li>Calculate the equivalent air pressure at the altitude of 2500 m above sea level for our measured value of <code>pressure_hPa</code> (1010 hPa)</li>\n </ul> \n</div>",
"height = 2500\npressure_hPa * math.exp(-gravit_acc * molar_mass_earth* height/(gas_constant*standard_temperature))\n\n# ...function/definition for barometric_formula... \n\ndef barometric_formula(pressure_sea_level, height=2500):\n    \"\"\"Apply barometric formula\n    \n    Apply the barometric formula to calculate the air pressure at a given height\n    \n    Parameters\n    ----------\n    pressure_sea_level : float\n        pressure, measured at sea level\n    height : float\n        height above sea level (m)\n    \n    Notes\n    ------\n    see https://www.math24.net/barometric-formula/ or \n    https://en.wikipedia.org/wiki/Atmospheric_pressure\n    \"\"\"\n    standard_temperature = 288.15\n    gas_constant = 8.3144598\n    gravit_acc = 9.81\n    molar_mass_earth = 0.02896\n    \n    pressure_altitude = pressure_sea_level * math.exp(-gravit_acc * molar_mass_earth* height/(gas_constant*standard_temperature))\n    return pressure_altitude\n\nbarometric_formula(pressure_hPa, 2000)\n\nbarometric_formula(pressure_hPa)\n\n# ...formula not valid above 11000m... \n# barometric_formula(pressure_hPa, 12000)\n\ndef barometric_formula(pressure_sea_level, height=2500):\n    \"\"\"Apply barometric formula\n    \n    Apply the barometric formula to calculate the air pressure at a given height\n    \n    Parameters\n    ----------\n    pressure_sea_level : float\n        pressure, measured at sea level\n    height : float\n        height above sea level (m)\n    \n    Notes\n    ------\n    see https://www.math24.net/barometric-formula/ or \n    https://en.wikipedia.org/wiki/Atmospheric_pressure\n    \"\"\"\n    if height > 11000:\n        raise Exception(\"Barometric formula only valid for heights lower than 11000m above sea level\")\n    \n    standard_temperature = 288.15\n    gas_constant = 8.3144598\n    gravit_acc = 9.81\n    molar_mass_earth = 0.02896\n    \n    pressure_altitude = pressure_sea_level * math.exp(-gravit_acc * molar_mass_earth* height/(gas_constant*standard_temperature))\n    return pressure_altitude\n\n# ...combining logical statements...\n\nheight > 11000 or pressure_hPa < 9000\n\n# ...load function from file...",
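As a sanity check (a sketch using the constants given earlier in the notebook), the formula can be evaluated directly at 2500 m for a sea-level pressure of 1010 hPa:

```python
import math

# Constants from the notebook.
standard_temperature = 288.15   # K
gas_constant = 8.3144598        # J/mol/K
gravit_acc = 9.81               # m/s^2
molar_mass_earth = 0.02896      # kg/mol

pressure_sea_level = 1010  # hPa
height = 2500              # m

# Barometric formula: P = P0 * exp(-g*M*h / (R*T))
pressure_altitude = pressure_sea_level * math.exp(
    -gravit_acc * molar_mass_earth * height
    / (gas_constant * standard_temperature))
print(round(pressure_altitude, 1))  # roughly 751 hPa
```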
"Instead of having the functions in a notebook, you can import a function from a file just as you would import a function from an installed package. Save the function barometric_formula in a file called barometric_formula.py and add the required import statement import math at the top of the file. Next, run the following cell:",
"from barometric_formula import barometric_formula",
"<div class=\"alert alert-info\">\n <b>REMEMBER</b>: \n <ul>\n <li>Write functions to prevent copy-pasting of code and maximize reuse</li>\n <li>Add documentation to functions for your future self</li>\n <li>Named arguments provide default values</li>\n <li>Import functions from a file just as other modules</li>\n </ul> \n\n</div>\n\nI measure air pressure multiple times\nWe can store these in a Python list:",
"pressures_hPa = [1013, 1003, 1010, 1020, 1032, 993, 989, 1018, 889, 1001]\n\n# ...check methods of lists... append vs insert",
"<div class=\"alert alert-warning\">\n <b>Notice</b>: \n <ul>\n <li>A list is a general container, so it can hold mixed data types as well.</li>\n </ul> \n</div>",
"# ...list is a container...",
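A quick illustration of the point above (values chosen arbitrarily) — one list can hold elements of completely different types:

```python
# A list is a general container: mixed data types can live side by side.
mixed = [1013, "Ghent - Sterre", 23.5, True, [1, 2, 3]]

# Each element keeps its own type.
element_types = [type(item).__name__ for item in mixed]
print(element_types)
```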
"I want to apply my function to each of these measurements\nI want to calculate the barometric formula for each of these measured values.",
"# ...for loop... dummy example",
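As a minimal dummy example (the pressure values are arbitrary), a `for` loop visits each element of a list in turn:

```python
# Dummy example: loop over a few pressures and collect a derived value.
doubled = []
for pressure in [1013, 1003, 1010]:
    doubled.append(pressure * 2)
print(doubled)
```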
"<div class=\"alert alert-success\">\n <b>EXERCISE</b>: \n <ul>\n <li>Write a <code>for</code> loop that prints the adjusted value for altitude 3000m for each of the pressures in <code>pressures_hPa</code> </li>\n </ul> \n</div>",
"for pressure in pressures_hPa:\n print(barometric_formula(pressure, 3000))\n\n# ...list comprehensions...",
"<div class=\"alert alert-success\">\n <b>EXERCISE</b>: \n <ul>\n <li>Write a <code>for</code> loop as a list comprehension to calculate the adjusted value for altitude 3000m for each of the pressures in <code>pressures_hPa</code> and store these values in a new variable <code>pressures_hPa_adjusted</code></li>\n </ul> \n</div>",
"pressures_hPa_adjusted = [barometric_formula(pressure, 3000) for pressure in pressures_hPa]\npressures_hPa_adjusted",
"The power of numpy",
"import numpy as np\n\npressures_hPa = [1013, 1003, 1010, 1020, 1032, 993, 989, 1018, 889, 1001]\n\nnp_pressures_hPa = np.array([1013, 1003, 1010, 1020, 1032, 993, 989, 1018, 889, 1001])\n\n# ...slicing/subselecting is similar...\n\nprint(np_pressures_hPa[0], pressures_hPa[0])",
"<div class=\"alert alert-info\">\n <b>REMEMBER</b>: \n <ul>\n <li> <code>[]</code> for accessing elements\n <li> <code>[start:end:step]</code>\n </ul>\n</div>",
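To make the `[start:end:step]` pattern concrete, here is a short sketch using the same pressure values as above:

```python
import numpy as np

np_pressures_hPa = np.array([1013, 1003, 1010, 1020, 1032,
                             993, 989, 1018, 889, 1001])

first_three = np_pressures_hPa[:3]      # start defaults to 0, end is exclusive
every_other = np_pressures_hPa[::2]     # step of 2 takes every second element
reversed_all = np_pressures_hPa[::-1]   # a negative step walks backwards
print(first_three, every_other, reversed_all)
```

The same slicing syntax also works on plain Python lists.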
"# ...original function using numpy array instead of list... do both\n\nnp_pressures_hPa * math.exp(-gravit_acc * molar_mass_earth* height/(gas_constant*standard_temperature))",
"<div class=\"alert alert-info\">\n <b>REMEMBER</b>: The operations work on all elements of the array at the same time; you don't need a <strike>`for` loop</strike>\n</div>\n\nIt is also a matter of calculation speed:",
"lots_of_pressures = np.random.uniform(990, 1040, 1000)\n\n%timeit [barometric_formula(pressure, 3000) for pressure in list(lots_of_pressures)]\n\n%timeit lots_of_pressures * np.exp(-gravit_acc * molar_mass_earth* height/(gas_constant*standard_temperature))",
"<div class=\"alert alert-info\">\n <b>REMEMBER</b>: for numerical calculations, numpy outperforms plain Python\n</div>\n\nBoolean indexing and filtering (!)",
"np_pressures_hPa\n\nnp_pressures_hPa > 1000",
"You can use this as a filter to select elements of an array:",
"boolean_mask = np_pressures_hPa > 1000\nnp_pressures_hPa[boolean_mask]",
"or, also to change the values in the array corresponding to these conditions:",
"boolean_mask = np_pressures_hPa < 900\nnp_pressures_hPa[boolean_mask] = 900\nnp_pressures_hPa",
"Intermezzo: exercises on boolean indexing:",
"AR = np.random.randint(0, 20, 15)\nAR",
"<div class=\"alert alert-success\">\n <b>EXERCISE</b>: Count the number of values in AR that are larger than 10 \n\n_Tip:_ You can count with True = 1 and False = 0 and take the sum of these values\n</div>",
"sum(AR > 10)",
"<div class=\"alert alert-success\">\n <b>EXERCISE</b>: Change all even numbers of `AR` into zero-values.\n</div>",
"AR[AR%2 == 0] = 0\nAR",
"<div class=\"alert alert-success\">\n <b>EXERCISE</b>: Change all even positions of the array AR into the value 30\n</div>",
"AR[1::2] = 30\nAR",
"<div class=\"alert alert-success\">\n <b>EXERCISE</b>: Select all values above the 75th percentile of the following array AR2 and take the square root of these values\n\n_Tip_: numpy provides a function `percentile` to calculate a given percentile\n</div>",
"AR2 = np.random.random(10)\nAR2\n\nnp.sqrt(AR2[AR2 > np.percentile(AR2, 75)])",
"<div class=\"alert alert-success\">\n <b>EXERCISE</b>: Convert all values -99 of the array AR3 into NaN values \n\n_Tip_: NaN values can be represented in float arrays as `np.nan`, and numpy provides a specialized function to compare float values, i.e. `np.isclose()`\n</div>",
"AR3 = np.array([-99., 2., 3., 6., 8, -99., 7., 5., 6., -99.])\n\nAR3[np.isclose(AR3, -99)] = np.nan\nAR3",
"I also have measurement locations",
"location = 'Ghent - Sterre'\n\n# ...check methods of strings... split, upper,...\n\nlocations = ['Ghent - Sterre', 'Ghent - Coupure', 'Ghent - Blandijn', \n 'Ghent - Korenlei', 'Ghent - Kouter', 'Ghent - Coupure',\n 'Antwerp - Groenplaats', 'Brussels- Grand place', \n 'Antwerp - Justitipaleis', 'Brussels - Tour & taxis']",
"<div class=\"alert alert-success\">\n <b>EXERCISE</b>: Use a list comprehension to convert all locations to lower case.\n\n_Tip:_ check the available methods of strings by writing: `location.` + TAB button\n</div>",
"[location.lower() for location in locations]",
"I also measure temperature",
"pressures_hPa = [1013, 1003, 1010, 1020, 1032, 993, 989, 1018, 889, 1001]\ntemperature_degree = [23, 20, 17, 8, 12, 5, 16, 22, -2, 16]\nlocations = ['Ghent - Sterre', 'Ghent - Coupure', 'Ghent - Blandijn', \n 'Ghent - Korenlei', 'Ghent - Kouter', 'Ghent - Coupure',\n 'Antwerp - Groenplaats', 'Brussels- Grand place', \n 'Antwerp - Justitipaleis', 'Brussels - Tour & taxis']",
"Python dictionaries are a convenient way to store multiple types of data together, so you don't end up with too many separate variables:",
"measurement = {}\nmeasurement['pressure_hPa'] = 1010\nmeasurement['temperature'] = 23\n\nmeasurement\n\n# ...select on name, iterate over keys or items...\n\nmeasurements = {'pressure_hPa': pressures_hPa,\n 'temperature_degree': temperature_degree,\n 'location': locations}\n\nmeasurements",
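Selecting by key name and iterating over keys or items can be sketched as follows (a shortened version of the `measurements` dict above):

```python
# Shortened measurements dict, using the first two values of each series.
measurements = {'pressure_hPa': [1013, 1003],
                'temperature_degree': [23, 20],
                'location': ['Ghent - Sterre', 'Ghent - Coupure']}

# Select a value series by its key name.
pressures = measurements['pressure_hPa']

# Iterating over a dict yields its keys...
keys = [key for key in measurements]

# ...while .items() yields key/value pairs together.
for name, values in measurements.items():
    print(name, values)
```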
"But: I want to apply my barometric function to measurements taken in Ghent when the temperature was below 10 degrees...",
"for idx, pressure in enumerate(measurements['pressure_hPa']):\n if measurements['location'][idx].startswith(\"Ghent\") and \\\n measurements['temperature_degree'][idx]< 10:\n print(barometric_formula(pressure, 3000))\n ",
"when a table would be more appropriate... Pandas!",
"import pandas as pd\n\nmeasurements = pd.DataFrame(measurements)\nmeasurements\n\nbarometric_formula(measurements[(measurements[\"location\"].str.contains(\"Ghent\")) & \n (measurements[\"temperature_degree\"] < 10)][\"pressure_hPa\"], 3000)",
"<div class=\"alert alert-info\">\n <b>REMEMBER</b>: We can combine the speed of numpy with the convenience of dictionaries and much more!\n</div>"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
ES-DOC/esdoc-jupyterhub
|
notebooks/miroc/cmip6/models/nicam16-9s/toplevel.ipynb
|
gpl-3.0
|
[
"ES-DOC CMIP6 Model Properties - Toplevel\nMIP Era: CMIP6\nInstitute: MIROC\nSource ID: NICAM16-9S\nSub-Topics: Radiative Forcings. \nProperties: 85 (42 required)\nModel descriptions: Model description details\nInitialized From: -- \nNotebook Help: Goto notebook help page\nNotebook Initialised: 2018-02-20 15:02:41\nDocument Setup\nIMPORTANT: to be executed each time you run the notebook",
"# DO NOT EDIT ! \nfrom pyesdoc.ipython.model_topic import NotebookOutput \n\n# DO NOT EDIT ! \nDOC = NotebookOutput('cmip6', 'miroc', 'nicam16-9s', 'toplevel')",
"Document Authors\nSet document authors",
"# Set as follows: DOC.set_author(\"name\", \"email\") \n# TODO - please enter value(s)",
"Document Contributors\nSpecify document contributors",
"# Set as follows: DOC.set_contributor(\"name\", \"email\") \n# TODO - please enter value(s)",
"Document Publication\nSpecify document publication status",
"# Set publication status: \n# 0=do not publish, 1=publish. \nDOC.set_publication_status(0)",
"Document Table of Contents\n1. Key Properties\n2. Key Properties --> Flux Correction\n3. Key Properties --> Genealogy\n4. Key Properties --> Software Properties\n5. Key Properties --> Coupling\n6. Key Properties --> Tuning Applied\n7. Key Properties --> Conservation --> Heat\n8. Key Properties --> Conservation --> Fresh Water\n9. Key Properties --> Conservation --> Salt\n10. Key Properties --> Conservation --> Momentum\n11. Radiative Forcings\n12. Radiative Forcings --> Greenhouse Gases --> CO2\n13. Radiative Forcings --> Greenhouse Gases --> CH4\n14. Radiative Forcings --> Greenhouse Gases --> N2O\n15. Radiative Forcings --> Greenhouse Gases --> Tropospheric O3\n16. Radiative Forcings --> Greenhouse Gases --> Stratospheric O3\n17. Radiative Forcings --> Greenhouse Gases --> CFC\n18. Radiative Forcings --> Aerosols --> SO4\n19. Radiative Forcings --> Aerosols --> Black Carbon\n20. Radiative Forcings --> Aerosols --> Organic Carbon\n21. Radiative Forcings --> Aerosols --> Nitrate\n22. Radiative Forcings --> Aerosols --> Cloud Albedo Effect\n23. Radiative Forcings --> Aerosols --> Cloud Lifetime Effect\n24. Radiative Forcings --> Aerosols --> Dust\n25. Radiative Forcings --> Aerosols --> Tropospheric Volcanic\n26. Radiative Forcings --> Aerosols --> Stratospheric Volcanic\n27. Radiative Forcings --> Aerosols --> Sea Salt\n28. Radiative Forcings --> Other --> Land Use\n29. Radiative Forcings --> Other --> Solar \n1. Key Properties\nKey properties of the model\n1.1. Model Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nTop level overview of coupled model",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.model_overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"1.2. Model Name\nIs Required: TRUE Type: STRING Cardinality: 1.1\nName of coupled model.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.model_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"2. Key Properties --> Flux Correction\nFlux correction properties of the model\n2.1. Details\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe if/how flux corrections are applied in the model",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.flux_correction.details') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"3. Key Properties --> Genealogy\nGenealogy and history of the model\n3.1. Year Released\nIs Required: TRUE Type: STRING Cardinality: 1.1\nYear the model was released",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.genealogy.year_released') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"3.2. CMIP3 Parent\nIs Required: FALSE Type: STRING Cardinality: 0.1\nCMIP3 parent if any",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.genealogy.CMIP3_parent') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"3.3. CMIP5 Parent\nIs Required: FALSE Type: STRING Cardinality: 0.1\nCMIP5 parent if any",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.genealogy.CMIP5_parent') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"3.4. Previous Name\nIs Required: FALSE Type: STRING Cardinality: 0.1\nPreviously known as",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.genealogy.previous_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"4. Key Properties --> Software Properties\nSoftware properties of model\n4.1. Repository\nIs Required: FALSE Type: STRING Cardinality: 0.1\nLocation of code for this component.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.software_properties.repository') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"4.2. Code Version\nIs Required: FALSE Type: STRING Cardinality: 0.1\nCode version identifier.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.software_properties.code_version') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"4.3. Code Languages\nIs Required: FALSE Type: STRING Cardinality: 0.N\nCode language(s).",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.software_properties.code_languages') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"4.4. Components Structure\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe how model realms are structured into independent software components (coupled via a coupler) and internal software components.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.software_properties.components_structure') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"4.5. Coupler\nIs Required: FALSE Type: ENUM Cardinality: 0.1\nOverarching coupling framework for model.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.software_properties.coupler') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"OASIS\" \n# \"OASIS3-MCT\" \n# \"ESMF\" \n# \"NUOPC\" \n# \"Bespoke\" \n# \"Unknown\" \n# \"None\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"5. Key Properties --> Coupling\n5.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview of coupling in the model",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.coupling.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"5.2. Atmosphere Double Flux\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs the atmosphere passing a double flux to the ocean and sea ice (as opposed to a single one)?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_double_flux') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"5.3. Atmosphere Fluxes Calculation Grid\nIs Required: FALSE Type: ENUM Cardinality: 0.1\nWhere are the air-sea fluxes calculated",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_fluxes_calculation_grid') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Atmosphere grid\" \n# \"Ocean grid\" \n# \"Specific coupler grid\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"5.4. Atmosphere Relative Winds\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nAre relative or absolute winds used to compute the flux? I.e. do ocean surface currents enter the wind stress calculation?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_relative_winds') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"6. Key Properties --> Tuning Applied\nTuning methodology for model\n6.1. Description\nIs Required: TRUE Type: STRING Cardinality: 1.1\nGeneral overview description of tuning: explain and motivate the main targets and metrics/diagnostics retained. Document the relative weight given to climate performance metrics/diagnostics versus process oriented metrics/diagnostics, and on the possible conflicts with parameterization level tuning. In particular describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.tuning_applied.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"6.2. Global Mean Metrics Used\nIs Required: FALSE Type: STRING Cardinality: 0.N\nList set of metrics/diagnostics of the global mean state used in tuning model",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.tuning_applied.global_mean_metrics_used') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"6.3. Regional Metrics Used\nIs Required: FALSE Type: STRING Cardinality: 0.N\nList of regional metrics/diagnostics of mean state (e.g THC, AABW, regional means etc) used in tuning model/component",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.tuning_applied.regional_metrics_used') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"6.4. Trend Metrics Used\nIs Required: FALSE Type: STRING Cardinality: 0.N\nList observed trend metrics/diagnostics used in tuning model/component (such as 20th century)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.tuning_applied.trend_metrics_used') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"6.5. Energy Balance\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe how energy balance was obtained in the full system: in the various components independently or at the components coupling stage?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.tuning_applied.energy_balance') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"6.6. Fresh Water Balance\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe how fresh_water balance was obtained in the full system: in the various components independently or at the components coupling stage?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.tuning_applied.fresh_water_balance') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"7. Key Properties --> Conservation --> Heat\nGlobal heat conservation properties of the model\n7.1. Global\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe if/how heat is conserved globally",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.conservation.heat.global') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"7.2. Atmos Ocean Interface\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe if/how heat is conserved at the atmosphere/ocean coupling interface",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_ocean_interface') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"7.3. Atmos Land Interface\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe if/how heat is conserved at the atmosphere/land coupling interface",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_land_interface') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"7.4. Atmos Sea-ice Interface\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe if/how heat is conserved at the atmosphere/sea-ice coupling interface",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_sea-ice_interface') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"7.5. Ocean Seaice Interface\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe if/how heat is conserved at the ocean/sea-ice coupling interface",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.conservation.heat.ocean_seaice_interface') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"7.6. Land Ocean Interface\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe if/how heat is conserved at the land/ocean coupling interface",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.conservation.heat.land_ocean_interface') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"8. Key Properties --> Conservation --> Fresh Water\nGlobal fresh water conservation properties of the model\n8.1. Global\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe if/how fresh_water is conserved globally",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.global') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"8.2. Atmos Ocean Interface\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe if/how fresh_water is conserved at the atmosphere/ocean coupling interface",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_ocean_interface') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"8.3. Atmos Land Interface\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe if/how fresh water is conserved at the atmosphere/land coupling interface",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_land_interface') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"8.4. Atmos Sea-ice Interface\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe if/how fresh water is conserved at the atmosphere/sea-ice coupling interface",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_sea-ice_interface') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"8.5. Ocean Seaice Interface\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe if/how fresh water is conserved at the ocean/sea-ice coupling interface",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.ocean_seaice_interface') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"8.6. Runoff\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe how runoff is distributed and conserved",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.runoff') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"8.7. Iceberg Calving\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe if/how iceberg calving is modeled and conserved",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.iceberg_calving') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"8.8. Endoreic Basins\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe if/how endoreic basins (no ocean access) are treated",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.endoreic_basins') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"8.9. Snow Accumulation\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe how snow accumulation over land and over sea-ice is treated",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.snow_accumulation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"9. Key Properties --> Conservation --> Salt\nGlobal salt conservation properties of the model\n9.1. Ocean Seaice Interface\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe if/how salt is conserved at the ocean/sea-ice coupling interface",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.conservation.salt.ocean_seaice_interface') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"10. Key Properties --> Conservation --> Momentum\nGlobal momentum conservation properties of the model\n10.1. Details\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe if/how momentum is conserved in the model",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.conservation.momentum.details') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"11. Radiative Forcings\nRadiative forcings of the model for historical and scenario (aka Table 12.1 IPCC AR5)\n11.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview of radiative forcings (GHG and aerosols) implementation in model",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"12. Radiative Forcings --> Greenhouse Gases --> CO2\nCarbon dioxide forcing\n12.1. Provision\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nHow this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CO2.provision') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"M\" \n# \"Y\" \n# \"E\" \n# \"ES\" \n# \"C\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"12.2. Additional Information\nIs Required: FALSE Type: STRING Cardinality: 0.1\nAdditional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CO2.additional_information') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"13. Radiative Forcings --> Greenhouse Gases --> CH4\nMethane forcing\n13.1. Provision\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nHow this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CH4.provision') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"M\" \n# \"Y\" \n# \"E\" \n# \"ES\" \n# \"C\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"13.2. Additional Information\nIs Required: FALSE Type: STRING Cardinality: 0.1\nAdditional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CH4.additional_information') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"14. Radiative Forcings --> Greenhouse Gases --> N2O\nNitrous oxide forcing\n14.1. Provision\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nHow this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.N2O.provision') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"M\" \n# \"Y\" \n# \"E\" \n# \"ES\" \n# \"C\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"14.2. Additional Information\nIs Required: FALSE Type: STRING Cardinality: 0.1\nAdditional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.N2O.additional_information') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"15. Radiative Forcings --> Greenhouse Gases --> Tropospheric O3\nTropospheric ozone forcing\n15.1. Provision\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nHow this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.tropospheric_O3.provision') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"M\" \n# \"Y\" \n# \"E\" \n# \"ES\" \n# \"C\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"15.2. Additional Information\nIs Required: FALSE Type: STRING Cardinality: 0.1\nAdditional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.tropospheric_O3.additional_information') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"16. Radiative Forcings --> Greenhouse Gases --> Stratospheric O3\nStratospheric ozone forcing\n16.1. Provision\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nHow this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.stratospheric_O3.provision') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"M\" \n# \"Y\" \n# \"E\" \n# \"ES\" \n# \"C\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"16.2. Additional Information\nIs Required: FALSE Type: STRING Cardinality: 0.1\nAdditional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.stratospheric_O3.additional_information') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"17. Radiative Forcings --> Greenhouse Gases --> CFC\nOzone-depleting and non-ozone-depleting fluorinated gases forcing\n17.1. Provision\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nHow this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.provision') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"M\" \n# \"Y\" \n# \"E\" \n# \"ES\" \n# \"C\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"17.2. Equivalence Concentration\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nDetails of any equivalence concentrations used",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.equivalence_concentration') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"Option 1\" \n# \"Option 2\" \n# \"Option 3\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"17.3. Additional Information\nIs Required: FALSE Type: STRING Cardinality: 0.1\nAdditional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.additional_information') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"18. Radiative Forcings --> Aerosols --> SO4\nSO4 aerosol forcing\n18.1. Provision\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nHow this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.SO4.provision') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"M\" \n# \"Y\" \n# \"E\" \n# \"ES\" \n# \"C\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"18.2. Additional Information\nIs Required: FALSE Type: STRING Cardinality: 0.1\nAdditional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.SO4.additional_information') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"19. Radiative Forcings --> Aerosols --> Black Carbon\nBlack carbon aerosol forcing\n19.1. Provision\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nHow this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.black_carbon.provision') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"M\" \n# \"Y\" \n# \"E\" \n# \"ES\" \n# \"C\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"19.2. Additional Information\nIs Required: FALSE Type: STRING Cardinality: 0.1\nAdditional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.black_carbon.additional_information') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"20. Radiative Forcings --> Aerosols --> Organic Carbon\nOrganic carbon aerosol forcing\n20.1. Provision\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nHow this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.organic_carbon.provision') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"M\" \n# \"Y\" \n# \"E\" \n# \"ES\" \n# \"C\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"20.2. Additional Information\nIs Required: FALSE Type: STRING Cardinality: 0.1\nAdditional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.organic_carbon.additional_information') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"21. Radiative Forcings --> Aerosols --> Nitrate\nNitrate forcing\n21.1. Provision\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nHow this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.nitrate.provision') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"M\" \n# \"Y\" \n# \"E\" \n# \"ES\" \n# \"C\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"21.2. Additional Information\nIs Required: FALSE Type: STRING Cardinality: 0.1\nAdditional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.nitrate.additional_information') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"22. Radiative Forcings --> Aerosols --> Cloud Albedo Effect\nCloud albedo effect forcing (RFaci)\n22.1. Provision\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nHow this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.provision') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"M\" \n# \"Y\" \n# \"E\" \n# \"ES\" \n# \"C\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"22.2. Aerosol Effect On Ice Clouds\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nRadiative effects of aerosols on ice clouds are represented?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.aerosol_effect_on_ice_clouds') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"22.3. Additional Information\nIs Required: FALSE Type: STRING Cardinality: 0.1\nAdditional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.additional_information') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"23. Radiative Forcings --> Aerosols --> Cloud Lifetime Effect\nCloud lifetime effect forcing (ERFaci)\n23.1. Provision\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nHow this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.provision') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"M\" \n# \"Y\" \n# \"E\" \n# \"ES\" \n# \"C\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"23.2. Aerosol Effect On Ice Clouds\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nRadiative effects of aerosols on ice clouds are represented?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.aerosol_effect_on_ice_clouds') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"23.3. RFaci From Sulfate Only\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nRadiative forcing from aerosol cloud interactions from sulfate aerosol only?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.RFaci_from_sulfate_only') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"23.4. Additional Information\nIs Required: FALSE Type: STRING Cardinality: 0.1\nAdditional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.additional_information') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"24. Radiative Forcings --> Aerosols --> Dust\nDust forcing\n24.1. Provision\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nHow this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.dust.provision') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"M\" \n# \"Y\" \n# \"E\" \n# \"ES\" \n# \"C\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"24.2. Additional Information\nIs Required: FALSE Type: STRING Cardinality: 0.1\nAdditional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.dust.additional_information') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"25. Radiative Forcings --> Aerosols --> Tropospheric Volcanic\nTropospheric volcanic forcing\n25.1. Provision\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nHow this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.provision') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"M\" \n# \"Y\" \n# \"E\" \n# \"ES\" \n# \"C\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"25.2. Historical Explosive Volcanic Aerosol Implementation\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nHow explosive volcanic aerosol is implemented in historical simulations",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.historical_explosive_volcanic_aerosol_implementation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Type A\" \n# \"Type B\" \n# \"Type C\" \n# \"Type D\" \n# \"Type E\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"25.3. Future Explosive Volcanic Aerosol Implementation\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nHow explosive volcanic aerosol is implemented in future simulations",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.future_explosive_volcanic_aerosol_implementation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Type A\" \n# \"Type B\" \n# \"Type C\" \n# \"Type D\" \n# \"Type E\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"25.4. Additional Information\nIs Required: FALSE Type: STRING Cardinality: 0.1\nAdditional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.additional_information') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"26. Radiative Forcings --> Aerosols --> Stratospheric Volcanic\nStratospheric volcanic forcing\n26.1. Provision\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nHow this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.provision') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"M\" \n# \"Y\" \n# \"E\" \n# \"ES\" \n# \"C\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"26.2. Historical Explosive Volcanic Aerosol Implementation\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nHow explosive volcanic aerosol is implemented in historical simulations",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.historical_explosive_volcanic_aerosol_implementation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Type A\" \n# \"Type B\" \n# \"Type C\" \n# \"Type D\" \n# \"Type E\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"26.3. Future Explosive Volcanic Aerosol Implementation\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nHow explosive volcanic aerosol is implemented in future simulations",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.future_explosive_volcanic_aerosol_implementation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Type A\" \n# \"Type B\" \n# \"Type C\" \n# \"Type D\" \n# \"Type E\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"26.4. Additional Information\nIs Required: FALSE Type: STRING Cardinality: 0.1\nAdditional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.additional_information') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"27. Radiative Forcings --> Aerosols --> Sea Salt\nSea salt forcing\n27.1. Provision\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nHow this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.sea_salt.provision') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"M\" \n# \"Y\" \n# \"E\" \n# \"ES\" \n# \"C\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"27.2. Additional Information\nIs Required: FALSE Type: STRING Cardinality: 0.1\nAdditional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.sea_salt.additional_information') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"28. Radiative Forcings --> Other --> Land Use\nLand use forcing\n28.1. Provision\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nHow this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.provision') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"M\" \n# \"Y\" \n# \"E\" \n# \"ES\" \n# \"C\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"28.2. Crop Change Only\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nLand use change represented via crop change only?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.crop_change_only') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"28.3. Additional Information\nIs Required: FALSE Type: STRING Cardinality: 0.1\nAdditional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.additional_information') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"29. Radiative Forcings --> Other --> Solar\nSolar forcing\n29.1. Provision\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nHow solar forcing is provided",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.other.solar.provision') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"irradiance\" \n# \"proton\" \n# \"electron\" \n# \"cosmic ray\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"29.2. Additional Information\nIs Required: FALSE Type: STRING Cardinality: 0.1\nAdditional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.other.solar.additional_information') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"©2017 ES-DOC"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
bjsmith/motivation-simulation
|
.ipynb_checkpoints/test-jupyter-widgets-clone-checkpoint.ipynb
|
gpl-3.0
|
[
"%matplotlib inline\nfrom pylab import *\nfrom pylab import *\n\n#importing widgets\nfrom ipywidgets import widgets\nfrom IPython.display import display",
"Basic plot example",
"from matplotlib.pyplot import figure, plot, xlabel, ylabel, title, show\n\nx=linspace(0,5,10)\ny=x**2\n\nfigure()\nplot(x,y,'r')\nxlabel('x')\nylabel('y')\ntitle('title')\nshow()",
"$$c = \\sqrt{a^2 + b^2}$$\n$$\n\\begin{align}\nc =& \\sqrt{a^2 + b^2} \\\n=&\\sqrt{4+16} \\\n\\end{align}\n$$\n$$\n\\begin{align}\nf(x)= & x^2 \\\n = & {{x[1]}}\n\\end{align}\n$$\nThis text contains a value like $x[1]$ is {{x[1]}}\nWidget example",
"from IPython.display import display\ntext = widgets.FloatText()\n\nfloatText = widgets.FloatText(description='MyField',min=-5,max=5)\nfloatSlider = widgets.FloatSlider(description='MyField',min=-5,max=5)\n\n\n#https://ipywidgets.readthedocs.io/en/stable/examples/Widget%20Basics.html\nfloat_link = widgets.jslink((floatText, 'value'), (floatSlider, 'value'))",
"Here we will set the fields to one of several values so that we can see pre-configured examples.",
"floatSlider.value=1\n\n\nbSatiationPreset=widgets.Button(\n description='Salt Satiation',\n button_style='', # 'success', 'info', 'warning', 'danger' or ''\n tooltip='Click me'\n)\n\nbDeprivationPreset=widgets.Button(\n description='Salt Deprivation2',\n button_style='', # 'success', 'info', 'warning', 'danger' or ''\n tooltip='Click me'\n)\n\ndef bDeprivationPreset_on_click(b):\n floatSlider.value=0.5\n \n \ndef bSatiationPreset_on_click(b):\n floatSlider.value=0.71717\n\n \nbDeprivationPreset.on_click(bDeprivationPreset_on_click)\nbSatiationPreset.on_click(bSatiationPreset_on_click)\n\n#floatSlider.observe(bDeprivationPreset_on_click,names='value')\n\n\nmyw=widgets.HTMLMath('$a+b=$'+str(floatSlider.value))\n\ndisplay(floatText,floatSlider)\ndisplay(bSatiationPreset)\ndisplay(bDeprivationPreset)\ndisplay(myw)\n\n\n\ntxtArea = widgets.Text()\ndisplay(txtArea)\n\nmyb= widgets.Button(description=\"234\")\ndef add_text(b):\n txtArea.value = txtArea.value + txtArea.value\n\n\nmyb.on_click(add_text)\n\ndisplay(myb)\n\nfrom IPython.display import display, Markdown, Latex\n\n#display(Markdown('*some markdown* $\\phi$'))\n# If you particularly want to display maths, this is more direct:\ndisplay(Latex('\\phi=' + str(round(floatText.value,2))))\ndisplay(Latex('$\\begin{align}\\phi=&' + str(round(floatText.value,2)) + \" \\\\ =& \\alpha\\end{align}$\"))\ndisplay(Markdown('$$'))\ndisplay(Markdown('\\\\begin{align}'))\ndisplay( '\\phi=' + str(round(floatText.value,2)))\ndisplay(Markdown('\\\\end{align}'))\ndisplay(Markdown('$$'))\n\n\nfrom IPython.display import Markdown\n\none = 1\ntwo = 2\nmyanswer = one + two**2\n\nMarkdown(\"# Title\")\nMarkdown(\"\"\"\n# Math\n## Addition\nHere is a simple addition example: ${one} + {two}^2 = {myanswer}$\n\nHere is a multi-line equation\n\n$$\n\\\\begin{{align}}\n\\\\begin{{split}}\nc = & a + b \\\\\\\\\n = & {one} + {two}^2 \\\\\\\\\n = & {myanswer}\n \\end{{split}}\n\\end{{align}}\n$$\n\"\"\".format(one=one, two=two, 
myanswer=myanswer))\n\na = widgets.IntSlider(description='a')\nb = widgets.IntSlider(description='b')\nc = widgets.IntSlider(description='c')\n\ndef g(a, b, c):\n print('${}*{}*{}={}$'.format(a, b, c, a*b*c))\n\ndef h(a, b, c):\n print(Markdown('${}\\\\times{}\\\\times{}={}$'.format(a, b, c, a*b*c)))\n \ndef f(a, b, c):\n print('{}*{}*{}={}'.format(a, b, c, a*b*c))\n\ndef f2(a, b, c):\n widgets.HTMLMath(value=Markdown('${}\\\\times{}\\\\times{}={}$'.format(a, b, c, a*b*c))._repr_markdown_())\n\ndef f3(a, b, c):\n print(Markdown('${}\\\\times{}\\\\times{}={}$'.format(a, b, c, a*b*c))._repr_markdown_())\n\ndef f4(a, b, c):\n return Markdown('${}\\\\times{}\\\\times{}={}$'.format(a, b, c, a*b*c))._repr_markdown_()\n\ndef f5(a, b, c):\n print('${}\\\\times{}\\\\times{}={}$'.format(a, b, c, a*b*c))\n\ndef f6(a, b, c):\n display(widgets.HTMLMath(value='${}\\\\times{}\\\\times{}={}$'.format(a, b, c, a*b*c)))\n \ndef f7(a, b, c):\n display(Markdown('${}\\\\times{}\\\\times{}={}$'.format(a, b, c, a*b*c)))\n\nout = widgets.interactive_output(f7, {'a': a, 'b': b, 'c': c})\n\n\nwidgets.HBox([widgets.VBox([a, b, c]), out])\n#widgets.HBox([widgets.VBox([a, b, c]), widgets.HTMLMath('$x=y+z$')])\n\nMarkdown('$${}\\\\times{}\\\\times{}={}$$'.format(a.value, b.value, c.value, a.value*b.value*c.value))\n\nmyMarkDown\n\n\n%matplotlib inline\nfrom ipywidgets import interactive\nimport matplotlib.pyplot as plt\nimport numpy as np\n\ndef f(m, b):\n plt.figure(2)\n x = np.linspace(-10, 10, num=1000)\n plt.plot(x, m * x + b)\n plt.ylim(-5, 5)\n plt.show()\n\ninteractive_plot = interactive(f, m=(-2.0, 2.0), b=(-3, 3, 0.5))\noutput = interactive_plot.children[-1]\noutput.layout.height = '350px'\ninteractive_plot\n\ndef g(x,y):\n return Markdown(\"\"\"\n$$\n{x}\\\\times {y}={z}\n$$\n\"\"\".format(x=3,y=4,z=4*5))\n\nprint(g(6,7))\n\nx=3\ny=4\nMarkdown(\"\"\"\n$$\n\\\\begin{{align}}\n\\\\begin{{split}}\nz & =x \\\\times y \\\\\\\\\n & = {x} \\\\times {y} \\\\\\\\\n & = 
{z}\n\\end{{split}}\n\\end{{align}}\n$$\n\"\"\".format(x=x,y=y,z=x*y))\n\n\n\nLatex(\"\"\"\n$$\n\\\\begin{{align}}\n\\\\begin{{split}}\nz & =x \\\\times y \\\\\\\\\n & = {x} \\\\times {y} \\\\\\\\\n & = {z}\n\\end{{split}}\n\\end{{align}}\n$$\n\"\"\".format(x=x,y=y,z=x*y))"
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
qutip/qutip-notebooks
|
examples/brmesolve-time-dependent-Liouvillian.ipynb
|
lgpl-3.0
|
[
"QuTiP example: Phonon-assisted initialization using the time-dependent Bloch-Redfield master equation solver\nK.A. Fischer, Stanford University\nThis Jupyter notebook demonstrates how to use the time-dependent Bloch-Redfield master equation solver to simulate the phonon-assited initialization of a quantum dot, using QuTiP: The Quantum Toolbox in Python. The purpose is to show how environmentally-driven dissipative interactions can be leveraged to initialize a quantum dot into its excited state. This notebook closely follows the work, <a href=\"https://arxiv.org/abs/1409.6014\">Dissipative preparation of the exciton and biexciton in self-assembled quantum\ndots on picosecond time scales</a>, Phys. Rev. B 90, 241404(R) (2014).\nFor more information about QuTiP see the project web page: http://qutip.org/",
"%matplotlib inline\n%config InlineBackend.figure_format = 'retina'\n\nimport matplotlib.pyplot as plt\nimport numpy as np\nimport itertools\n\nfrom qutip import *\nfrom numpy import *",
"Introduction\nThe quantum two-level system (TLS) is the simplest possible model to describe the quantum light-matter interaction between light and an artificial atom (quantum dot). While the version in the paper (both experiment and simulation) used a three-level system model, I decided to show only a TLS model here to minimize the notebook's runtime.\nIn the version we simulate here, the system is driven by a continuous-mode coherent state, whose dipolar interaction with the system is represented by the following Hamiltonain\n$$ H =\\hbar \\omega_0 \\sigma^\\dagger \\sigma + \\frac{\\hbar\\Omega(t)}{2}\\left( \\sigma\\textrm{e}^{-i\\omega_dt} + \\sigma^\\dagger \\textrm{e}^{i\\omega_dt}\\right),$$\nwhere $\\omega_0$ is the system's transition frequency, $\\sigma$ is the system's atomic lowering operator, $\\omega_d$ is the coherent state's center frequency, and $\\Omega(t)$ is the coherent state's driving strength.\nThe time-dependence can be removed to simplify the simulation by a rotating frame transformation. Then,\n$$ H_r =\\hbar \\left(\\omega_0-\\omega_d\\right) \\sigma^\\dagger \\sigma + \\frac{\\hbar\\Omega(t)}{2}\\left( \\sigma+ \\sigma^\\dagger \\right).$$\nAdditionally, the quantum dot exists in a solid-state matrix, where environmental interactions are extremely important. In particular, the coupling between acoustic phonons and the artifical atom leads to important dephasing effects. While the collapse operators of the quantum-optical master equation can be used to phenomenologically model these effects, they do not necessarily provide any direct connection to the underlying physics of the system-environmental interaction.\nInstead, the Bloch-Redfield master equation allows for a direct connection between the quantum dynamics and the underlying physical interaction mechanism. 
Furthermore, the interaction strengths can be derived from first principles and can include complex power-dependences, such as those that exist in the quantum-dot-phonon interaction. Though we note that there are important non-Markovian effects which are now being investigated, e.g. in the paper <a href=\"https://journals.aps.org/prb/abstract/10.1103/PhysRevB.95.201305\">Limits to coherent scattering and photon coalescence from solid-state quantum emitters</a>, Phys. Rev. B 95, 201305(R) (2017).\nProblem parameters\nNote, we use units where $\\hbar=1$.",
"n_Pi = 13 # 8 pi pulse area\n\nOm_list = np.linspace(0.001, n_Pi, 80) # driving strengths\nwd_list_e = np.array([-1, 0, 1]) # laser offsets in meV\nwd_list = wd_list_e*1.5 # in angular frequency\ntlist = np.linspace(0, 50, 40) # tmax ~ 2x FWHM\n\n# normalized Gaussian pulse shape, ~10ps long in energy\nt0 = 17 / (2 * np.sqrt(2 * np.log(2)))\n#pulse_shape = np.exp(-(tlist - 24) ** 2 / (2 * t0 ** 2))\n\npulse_shape = '0.0867 * exp(-(t - 24) ** 2 / (2 * {0} ** 2))'.format(t0)",
"Setup the operators, Hamiltonian, and initial state",
"# initial state\npsi0 = fock(2, 1) # ground state\n\n# system's atomic lowering operator\nsm = sigmam()\n\n# Hamiltonian components\nH_S = -sm.dag() * sm # self-energy, varies with drive frequency\nH_I = sm + sm.dag()\n\n# we ignore spontaneous emission since the pulse is much faster than\n# the decay time\nc_ops = []",
"Below, we define the terms specific to the Bloch-Redfield solver's system-environmental coupling. The quantum dot couples to acoustic phonons in its solid-state environment through a dispersive electron-phonon interaction of the form\n$$ H_\\textrm{phonon}=\\hbar J(\\omega)\\sigma^\\dagger \\sigma,$$\nwhere $J(\\omega)$ is the spectra density of the coupling.",
"# operator that couples the quantum dot to acoustic phonons\na_op = sm.dag()*sm\n \n# This spectrum is a displaced gaussian multiplied by w^3, which\n# models coupling to LA phonons. The electron and hole\n# interactions contribute constructively.\n\n\"\"\"\n# fitting parameters ae/ah\nah = 1.9e-9 # m\nae = 3.5e-9 # m\n# GaAs material parameters\nDe = 7\nDh = -3.5\nv = 5110 # m/s\nrho_m = 5370 # kg/m^3\nhbar = 1.05457173e-34 # Js\nT = 4.2 # Kelvin, temperature\n\n# results in ~3THz angular frequency width, w in THz\n# zero T spectrum, for w>0\nJ = 1.6*1e-13*w**3/(4*numpy.pi**2*rho_m*hbar*v**5) * \\\n (De*numpy.exp(-(w*1e12*ae/(2*v))**2) -\n Dh*numpy.exp(-(w*1e12*ah/(2*v))**2))**2\n\n# for temperature dependence, the 'negative' frequency \n# components correspond to absorption vs emission\n\n# w > 0:\nJT_p = J*(1 + numpy.exp(-w*0.6582119/(T*0.086173)) / \\\n (1-numpy.exp(-w*0.6582119/(T*0.086173))))\n# w < 0:\nJT_m = -J*numpy.exp(w*0.6582119/(T*0.086173)) / \\\n (1-numpy.exp(w*0.6582119/(T*0.086173)))\n\"\"\"\n\n# the Bloch-Redfield solver requires the spectra to be \n# formatted as a string\nspectra_cb =' 1.6*1e-13*w**3/(4*pi**2*5370*1.05457173e-34*5110**5) * ' + \\\n '(7*exp(-(w*1e12*3.5e-9/(2*5110))**2) +' + \\\n '3.5*exp(-(w*1e12*1.9e-9 /(2*5110))**2))**2 *' + \\\n '((1 + exp(-w*0.6582119/(4.2*0.086173)) /' + \\\n '(1+1e-9-exp(-w*0.6582119/(4.2*0.086173))))*(w>=0)' + \\\n '-exp(w*0.6582119/(4.2*0.086173)) /' + \\\n '(1+1e-9-exp(w*0.6582119/(4.2*0.086173)))*(w<0))'",
"Visualize the dot-phonon interaction spectrum\n$J(\\omega)$ has two components that give rise to its shape: a rising component due to the increasing acoustic phonon density of states and a roll-off that occurs due to the physical size of the quantum dot.",
"spec_list = np.linspace(-5, 10, 200)\n\nplt.figure(figsize=(8, 5))\nplt.plot(spec_list, [eval(spectra_cb.replace('w', str(_))) for _ in spec_list])\nplt.xlim(-5, 10)\nplt.xlabel('$\\omega$ [THz]')\nplt.ylabel('$J(\\omega)$ [THz]')\nplt.title('Quantum-dot-phonon interaction spectrum');",
"Calculate the pulse-system interaction dynamics\nThe Bloch-Redfield master equation solver takes the Hamiltonian time-dependence in list-string format. We calculate the final population at the end of the interaction of the pulse with the system, which represents the population initialized into the excited state.",
"# we will calculate the dot population expectation value\ne_ops = [sm.dag()*sm]\n\n# define callback for parallelization\ndef brme_step(args):\n wd = args[0]\n Om = args[1]\n H = [wd * H_S, [Om * H_I, pulse_shape]]\n \n # calculate the population after the pulse interaction has\n # finished using the Bloch-Redfield time-dependent solver\n return qutip.brmesolve(H, psi0, tlist, [[a_op, spectra_cb]],\n e_ops,options=Options(rhs_reuse=True)).expect[0][-1]\n\n# use QuTiP's builtin parallelized for loop, parfor\nresults = parfor(brme_step, itertools.product(wd_list, Om_list))\n\n# unwrap the results into a 2d array\ninv_mat_X = np.array(results).reshape((len(wd_list), len(Om_list)))",
"Visualize the quantum dot's initialization fidelity\nBelow, consider the trace of excited state occupation for increasing pulse area at a detuning of $\\omega_d-\\omega_L=0$. Here, the oscillations represent the standard Rabi oscillations of a driven two-level system, damped for increasing pulse area by a Markovian-like dephasing. This damping could be represented with a power-dependent collapse operator in the normal quantum-optical master equation. However, for nonzero pulse detunings, the results are quite nontrivial and difficult to model with a collapse operator. Herein lies the power of the Bloch-Redfield approach: it captures the dephasing in a more natural basis, the dressed atom basis, from first principles.\nIn this basis, the dispersive phonon-induced dephasing drives a population difference between the dressed states. This amounts to driving the system towards a dissipative quasi-steady state that initializes the population into the excited state with almost unity fidelity. The initialization effect is very insensitive to precise pulse area or laser detuning, as discussed in our paper, and hence is a powerfully robust way to pump a quantum dot into its excited state. Below is an example trace showing this dissipative initialization for a +1meV laser detuning. The high fidelity of the initialization relies on a low temperature bath that prefers phonon emission over absorption. As a complement, for a laser detuning of -1meV, the excited state is barely populated.",
"plt.figure(figsize=(8, 5))\n\nplt.plot(Om_list, inv_mat_X[0])\nplt.plot(Om_list, inv_mat_X[1])\nplt.plot(Om_list, inv_mat_X[2])\n\nplt.legend(['laser detuning, -1 meV', \n 'laser detuning, 0 meV', \n 'laser detuning, +1 meV'], loc=4)\n\nplt.xlim(0, 13)\nplt.xlabel('Pulse area [$\\pi$]')\nplt.ylabel('Excited state population')\nplt.title('Effects of phonon dephasing for different pulse detunings');",
"Versions",
"from qutip.ipynbtools import version_table\n\nversion_table()"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
rflamary/POT
|
notebooks/plot_barycenter_lp_vs_entropic.ipynb
|
mit
|
[
"%matplotlib inline",
"1D Wasserstein barycenter comparison between exact LP and entropic regularization\nThis example illustrates the computation of regularized Wasserstein Barycenter\nas proposed in [3] and exact LP barycenters using standard LP solver.\nIt reproduces approximately Figure 3.1 and 3.2 from the following paper:\nCuturi, M., & Peyré, G. (2016). A smoothed dual approach for variational\nWasserstein problems. SIAM Journal on Imaging Sciences, 9(1), 320-343.\n[3] Benamou, J. D., Carlier, G., Cuturi, M., Nenna, L., & Peyré, G. (2015).\nIterative Bregman projections for regularized transportation problems\nSIAM Journal on Scientific Computing, 37(2), A1111-A1138.",
"# Author: Remi Flamary <remi.flamary@unice.fr>\n#\n# License: MIT License\n\nimport numpy as np\nimport matplotlib.pylab as pl\nimport ot\n# necessary for 3d plot even if not used\nfrom mpl_toolkits.mplot3d import Axes3D # noqa\nfrom matplotlib.collections import PolyCollection # noqa\n\n#import ot.lp.cvx as cvx",
"Gaussian Data",
"#%% parameters\n\nproblems = []\n\nn = 100 # nb bins\n\n# bin positions\nx = np.arange(n, dtype=np.float64)\n\n# Gaussian distributions\n# Gaussian distributions\na1 = ot.datasets.make_1D_gauss(n, m=20, s=5) # m= mean, s= std\na2 = ot.datasets.make_1D_gauss(n, m=60, s=8)\n\n# creating matrix A containing all distributions\nA = np.vstack((a1, a2)).T\nn_distributions = A.shape[1]\n\n# loss matrix + normalization\nM = ot.utils.dist0(n)\nM /= M.max()\n\n\n#%% plot the distributions\n\npl.figure(1, figsize=(6.4, 3))\nfor i in range(n_distributions):\n pl.plot(x, A[:, i])\npl.title('Distributions')\npl.tight_layout()\n\n#%% barycenter computation\n\nalpha = 0.5 # 0<=alpha<=1\nweights = np.array([1 - alpha, alpha])\n\n# l2bary\nbary_l2 = A.dot(weights)\n\n# wasserstein\nreg = 1e-3\not.tic()\nbary_wass = ot.bregman.barycenter(A, M, reg, weights)\not.toc()\n\n\not.tic()\nbary_wass2 = ot.lp.barycenter(A, M, weights, solver='interior-point', verbose=True)\not.toc()\n\npl.figure(2)\npl.clf()\npl.subplot(2, 1, 1)\nfor i in range(n_distributions):\n pl.plot(x, A[:, i])\npl.title('Distributions')\n\npl.subplot(2, 1, 2)\npl.plot(x, bary_l2, 'r', label='l2')\npl.plot(x, bary_wass, 'g', label='Reg Wasserstein')\npl.plot(x, bary_wass2, 'b', label='LP Wasserstein')\npl.legend()\npl.title('Barycenters')\npl.tight_layout()\n\nproblems.append([A, [bary_l2, bary_wass, bary_wass2]])",
"Dirac Data",
"#%% parameters\n\na1 = 1.0 * (x > 10) * (x < 50)\na2 = 1.0 * (x > 60) * (x < 80)\n\na1 /= a1.sum()\na2 /= a2.sum()\n\n# creating matrix A containing all distributions\nA = np.vstack((a1, a2)).T\nn_distributions = A.shape[1]\n\n# loss matrix + normalization\nM = ot.utils.dist0(n)\nM /= M.max()\n\n\n#%% plot the distributions\n\npl.figure(1, figsize=(6.4, 3))\nfor i in range(n_distributions):\n pl.plot(x, A[:, i])\npl.title('Distributions')\npl.tight_layout()\n\n\n#%% barycenter computation\n\nalpha = 0.5 # 0<=alpha<=1\nweights = np.array([1 - alpha, alpha])\n\n# l2bary\nbary_l2 = A.dot(weights)\n\n# wasserstein\nreg = 1e-3\not.tic()\nbary_wass = ot.bregman.barycenter(A, M, reg, weights)\not.toc()\n\n\not.tic()\nbary_wass2 = ot.lp.barycenter(A, M, weights, solver='interior-point', verbose=True)\not.toc()\n\n\nproblems.append([A, [bary_l2, bary_wass, bary_wass2]])\n\npl.figure(2)\npl.clf()\npl.subplot(2, 1, 1)\nfor i in range(n_distributions):\n pl.plot(x, A[:, i])\npl.title('Distributions')\n\npl.subplot(2, 1, 2)\npl.plot(x, bary_l2, 'r', label='l2')\npl.plot(x, bary_wass, 'g', label='Reg Wasserstein')\npl.plot(x, bary_wass2, 'b', label='LP Wasserstein')\npl.legend()\npl.title('Barycenters')\npl.tight_layout()\n\n#%% parameters\n\na1 = np.zeros(n)\na2 = np.zeros(n)\n\na1[10] = .25\na1[20] = .5\na1[30] = .25\na2[80] = 1\n\n\na1 /= a1.sum()\na2 /= a2.sum()\n\n# creating matrix A containing all distributions\nA = np.vstack((a1, a2)).T\nn_distributions = A.shape[1]\n\n# loss matrix + normalization\nM = ot.utils.dist0(n)\nM /= M.max()\n\n\n#%% plot the distributions\n\npl.figure(1, figsize=(6.4, 3))\nfor i in range(n_distributions):\n pl.plot(x, A[:, i])\npl.title('Distributions')\npl.tight_layout()\n\n\n#%% barycenter computation\n\nalpha = 0.5 # 0<=alpha<=1\nweights = np.array([1 - alpha, alpha])\n\n# l2bary\nbary_l2 = A.dot(weights)\n\n# wasserstein\nreg = 1e-3\not.tic()\nbary_wass = ot.bregman.barycenter(A, M, reg, weights)\not.toc()\n\n\not.tic()\nbary_wass2 = 
ot.lp.barycenter(A, M, weights, solver='interior-point', verbose=True)\not.toc()\n\n\nproblems.append([A, [bary_l2, bary_wass, bary_wass2]])\n\npl.figure(2)\npl.clf()\npl.subplot(2, 1, 1)\nfor i in range(n_distributions):\n pl.plot(x, A[:, i])\npl.title('Distributions')\n\npl.subplot(2, 1, 2)\npl.plot(x, bary_l2, 'r', label='l2')\npl.plot(x, bary_wass, 'g', label='Reg Wasserstein')\npl.plot(x, bary_wass2, 'b', label='LP Wasserstein')\npl.legend()\npl.title('Barycenters')\npl.tight_layout()",
"Final figure",
"#%% plot\n\nnbm = len(problems)\nnbm2 = (nbm // 2)\n\n\npl.figure(2, (20, 6))\npl.clf()\n\nfor i in range(nbm):\n\n A = problems[i][0]\n bary_l2 = problems[i][1][0]\n bary_wass = problems[i][1][1]\n bary_wass2 = problems[i][1][2]\n\n pl.subplot(2, nbm, 1 + i)\n for j in range(n_distributions):\n pl.plot(x, A[:, j])\n if i == nbm2:\n pl.title('Distributions')\n pl.xticks(())\n pl.yticks(())\n\n pl.subplot(2, nbm, 1 + i + nbm)\n\n pl.plot(x, bary_l2, 'r', label='L2 (Euclidean)')\n pl.plot(x, bary_wass, 'g', label='Reg Wasserstein')\n pl.plot(x, bary_wass2, 'b', label='LP Wasserstein')\n if i == nbm - 1:\n pl.legend()\n if i == nbm2:\n pl.title('Barycenters')\n\n pl.xticks(())\n pl.yticks(())"
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
sujitpal/polydlot
|
src/tensorflow/05-experiment-from-tf-estimator.ipynb
|
apache-2.0
|
[
"Build Experiment from TF Estimator\nEmbeds a 3 layer FCN model to predict MNIST handwritten digits in a Tensorflow Experiment. FCN model is built using the Estimator DNNClassifier from the tf.contrib.learn package.",
"from __future__ import division, print_function\nimport matplotlib.pyplot as plt\nimport numpy as np\nimport os\nimport shutil\nimport tensorflow as tf\n\nDATA_DIR = \"../../data\"\nTRAIN_FILE = os.path.join(DATA_DIR, \"mnist_train.csv\")\nTEST_FILE = os.path.join(DATA_DIR, \"mnist_test.csv\")\n\nMODEL_DIR = os.path.join(DATA_DIR, \"expt-tf-model\")\n\nNUM_FEATURES = 784\nNUM_CLASSES = 10\nNUM_STEPS = 100",
"Prepare Data",
"def parse_file(filename):\n xdata, ydata = [], []\n fin = open(filename, \"rb\")\n i = 0\n for line in fin:\n if i % 10000 == 0:\n print(\"{:s}: {:d} lines read\".format(\n os.path.basename(filename), i))\n cols = line.strip().split(\",\")\n ydata.append(int(cols[0]))\n xdata.append([float(x) / 255. for x in cols[1:]])\n i += 1\n fin.close()\n print(\"{:s}: {:d} lines read\".format(os.path.basename(filename), i))\n y = np.array(ydata)\n X = np.array(xdata)\n return X, y\n\nXtrain, ytrain = parse_file(TRAIN_FILE)\nXtest, ytest = parse_file(TEST_FILE)\nprint(Xtrain.shape, ytrain.shape, Xtest.shape, ytest.shape)\n\n# these functions are parameters to the classifier\ndef train_input_fn():\n return tf.constant(Xtrain), tf.constant(ytrain)\n\ndef test_input_fn():\n return tf.constant(Xtest), tf.constant(ytest)",
"Define Estimator\nThe tf.contrib.learn package provides several Estimators out of the box.\n * tf.contrib.learn.LinearRegressor\n * tf.contrib.learn.LinearClassifier\n * tf.contrib.learn.DNNRegressor\n * tf.contrib.learn.DNNClassifier",
"shutil.rmtree(MODEL_DIR, ignore_errors=True)\nfeature_cols = [tf.contrib.layers.real_valued_column(\"\", \n dimension=NUM_FEATURES)]\nestimator = tf.contrib.learn.DNNClassifier(feature_columns=feature_cols,\n hidden_units=[512, 256], n_classes=NUM_CLASSES,\n model_dir=MODEL_DIR)",
"Train Estimator",
"estimator.fit(input_fn=train_input_fn, steps=NUM_STEPS)",
"Evaluate Estimator",
"accuracy_score = estimator.evaluate(input_fn=test_input_fn,\n steps=1)[\"accuracy\"]\nprint(\"accuracy: {:.3f}\".format(accuracy_score))",
"alternatively...\nDefine Experiment\nA model is wrapped in an Estimator, which is then wrapped in an Experiment. Once you have an Experiment, you can run this in a distributed manner on CPU, GPU or TPU.",
"def experiment_fn(run_config, params):\n feature_cols = [tf.contrib.layers.real_valued_column(\"\",\n dimension=NUM_FEATURES)]\n estimator = tf.contrib.learn.DNNClassifier(\n feature_columns=feature_cols,\n hidden_units=[512, 256],\n n_classes=NUM_CLASSES,\n config=run_config)\n return tf.contrib.learn.Experiment(\n estimator=estimator,\n train_input_fn=train_input_fn,\n train_steps=NUM_STEPS,\n eval_input_fn=test_input_fn)",
"Run Experiment",
"shutil.rmtree(MODEL_DIR, ignore_errors=True)\ntf.contrib.learn.learn_runner.run(experiment_fn, \n run_config=tf.contrib.learn.RunConfig(\n model_dir=MODEL_DIR))"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
andrzejkrawczyk/python-course
|
ipython_course/ipython.ipynb
|
apache-2.0
|
[
"Instalacja\n* pip install ipython\n* Z strony internetowej: https://www.anaconda.com/distribution/#windows\nCiekawostki:\n* https://orgmode.org/\nipython profile create andrzejkrawczyk\nstworzenie własnego profilu z konfiguracją\nipython --profile andrzejkrawczyk\nMiejsce skryptów konfiguracyjnych\n\nWindows: u'C:\\Users\\akrawczyk\\.ipython\\profile_andrzejkrawczyk\\ipython_config.py'\nLinux: \"~/.ipython/\"\n\nDo modyfikacji konfiguracji jest potrzebne pobranie konfiguracji, dodajemy na starcie pliku:\n\nc = get_config()\n\nkonfiguracja zapamiętywania aliasów / makr / zmiennych przez %store <nazwa>\nc.StoreMagics.autorestore = False \nAuto uzupełnianie nawiasów i uruchamianie obiektów callable\nnp. def foo(x): print(x) \nfoo 5 -> #5 \n0 - wyłącz\n1 - 'smart' - tylko jak są argumenty uruchom call\n2 - force call \nc.InteractiveShell.autocall = 1\nUstawia możliwość uruchamiania magicznych komend bez %\nc.InteractiveShell.automagic = True\nUstawienie koloru shella opcje: (NoColor, Neutral, Linux, or LightBG)\nc.InteractiveShell.colors = 'Neutral'\nUstawienie edytora plików\nc.TerminalInteractiveShell.editor = 'C:\\Program Files\\Notepad++\\notepad++.exe'\nWykonanie wskazanych linii na start shell'a\nc.InteractiveShellApp.exec_lines = [\"import sys\", \"import os\"]\n\nFolder startup w config - ładowanie skryptów przy uruchomieniu.\nMożna pisać skrypty .py / .ipy\nSkrypty ładowane w kolejności leksykograficznej.",
"%lsmagic\n\n%ls\n\n%pwd\n\n# Konfiguracja zakładek do folderów\n%bookmark -l\n%bookmark my_pwd\n%bookmark -l\n%cd my_pwd\n\nprint(4)\nprint(5)\n\n%macro foo #n1-n2 n1 n2\n\n%store foo",
"Alias na komendę systemową z powłoki\nalias",
"%alias whole_line echo \"%l\"\n%whole_line ala ma kota\n\n%alias params_echo echo ala ma %s a tomek ma %s a zuzia ma %s\n%params_echo psa kota zyrafe\n\ndane = !dir\nprint(dane)\n\ndane = \"Ala ma kota\"\n!echo $dane\n\n!echo $$\n\nfrom IPython.core.magic import register_line_magic\n\n@register_line_magic\ndef my_custom_quadratic(line):\n print(int(line) ** 2)\ndel my_custom_quadratic\n\n%my_custom_quadratic 4\n\ndef foo(a, b): # foo(\"5\", \"6\")\n print(a, b)\n print(type(a), type(b))\n\n,foo 5, 6 \n\ndef foo(a, b=None): # foo(\"5 6\")\n print(a, b)\n print(type(a), type(b))\n\n;foo 5, 6 \n\n%hist -n\n\n%hist -g\n\n%hist -o -g -f ipython_history.md\n\n%save example.py <zakres>\n\n%reset\n\n%paste\n\n%timeit [x for x in range(10000)]\n\n%%timeit L = []\nfor x in range(10000):\n L.append(x**x)\nL = L[:int(len(L) / 2)]",
"pdb z ipythonem\npip install ipdb"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
kimkipyo/dss_git_kkp
|
Python 복습/12일차.금_Pandas의 고급기능_DB/12일차_4T_Pandas Basic (4) - 파일 입출력 ( csv, excel, sql ).ipynb
|
mit
|
[
"4T_Pandas Basic (4) - 파일 입출력 ( csv, excel, sql )",
"-실제 엑셀 파일 데이터를 바탕으로 위의 것들을 다시 한 번 실습\n-국가별 파일 입출력했음\n번외로 수학계산을 해 볼 것이다. max, mean, min, sum\n\ndf = pd.DataFrame([{\"Name\": \"KiPyo Kim\", \"Age\": 29}, {\"Name\": \"KiDong Kim\", \"Age\": 33}])\n\ndf\n\n# 옵션에 대해서만 알아가자\ndf.to_csv(\"fastcampus.csv\")\ndf.to_csv(\"fastcampus.csv\", index=False)\ndf.to_csv(\"fastcampus.csv\", index=False, header=False)",
"CSV(Comma Seperated Value) => 각각의 데이터가 \",\"를 기준으로 나뉜 데이터\n예를 들어 김기표 | 29 | 분석가 // sep=\"|\" 이거였어 // 이렇게 하면 ,가 |이걸로 바뀌게 된다.",
"df.to_csv(\"fastcampus.csv\", index=False, header=False, sep=\"|\")",
"데이터 분석을 사용할 때 2가지 양식이 있다\ncsv(엑셀), XML, JSON == 데이터베이스\nPickle(데이터 분석에서 상당히 중요하다) => 파이썬의 객체 그대로 저장할 수 있다. 파이썬 코드를 그대로 저장\n즉, 클래스나 함수를 바이너리 형태로 저장해서 언제든 쓸 수 있도록",
"df = pd.read_csv\n# 이렇게 간단하다\n# 엑셀 파일을 일괄적으로 csv 형태로 바꿔주는 프로그래밍 => Pandas로 하면 금방 한다\n# read_excel().to_csv 이런 식으로 하면. 해보자\n\npd.read_csv(\"fastcampus.csv\")\n\npd.read_csv(\"fastcampus.csv\", header=None, sep=\"|\")\n\ndf = pd.read_csv(\"fastcampus.csv\", header=None, sep=\"|\")\ndf.rename(columns={0: \"Age\", 1: \"Name\"}, inplace=True)\ndf"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
theislab/scanpy_usage
|
170522_visualizing_one_million_cells/plot.ipynb
|
bsd-3-clause
|
[
"Plotting 1.3M cells",
"import scanpy.api as sc\nsc.settings.set_figure_params(dpi=70) # dots (pixels) per inch determine size of inline figures",
"We illustrate the concept using the subsampled data, simply replace the filename... An indepth analysis with 1.3M cells will follow in the first half of 2018.",
"filename = './write/subsampled.h5ad'",
"Opening an '.h5ad' file in \"backed\" mode does not load the data into memory.",
"adata = sc.read(filename, backed='r') # open file in backed mode for reading ('r')\nsc.pl.tsne(adata, color='louvain_groups')\n\nsc.logging.print_memory_usage()",
"Loading the data in \"memory\" mode needs 100 MB for the subsampled data.",
"adata = sc.read(filename, backed=False) # backed=False is the default of sc.read(...)\n\nsc.logging.print_memory_usage()"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
jsharpna/DavisSML
|
lectures/lecture1/.ipynb_checkpoints/lecture1-checkpoint.ipynb
|
mit
|
[
"Intro to Statistical Machine Learning\nStats 208\nLecture 1\nProf. Sharpnack\nLecture slides at course github page\nSome content of these slides are from STA 251 notes and STA 141B lectures.\nThe life satisfaction data example and some of the code is based on the notebook file 01 in Aurélien Geron's github page\nMachine Learning\nA computer program learns from experience, E, with respect to class of tasks, T, and performance measure, P, if its performance at T improves by P with E.\nExamples of these categories are,\n- E: data (training)\n- P: loss (test), reward\n- T: classification, regression, expert selection, etc. \nInference vs. Prediction\n\nstatistical inference: is this effect significant? is the model correct? etc.\nprediction: does this algorithm predict the response variable well?\n\nterms\n\nsupervised learning: predicting one variable from many others\npredictor variables: X variables\nresponse variable: Y variable\nX: $n \\times p$ design matrix / features\nY: $n$ label vector",
"## I will be using Python 3, for install instructions see \n## http://anson.ucdavis.edu/~jsharpna/DSBook/unit1/intro.html#installation-and-workflow\n\n## The following packages are numpy (linear algebra), pandas (data munging), \n## sklearn (machine learning), matplotlib (graphics), statsmodels (statistical models)\n\nimport numpy as np\nimport pandas as pd\nimport statsmodels.api as sm\nimport matplotlib.pyplot as plt\nfrom sklearn.linear_model import LinearRegression\n\n## Lines that start with ! run a bash command\n\n!ls -l ../../data/winequality-red.csv\n\n!head ../../data/winequality-red.csv",
"Wine dataset description\n- 84199 bytes (not large, feel free to load into memory)\n- header with quotations \" in the text\n- each line has floats without quotations\n- each datum separated by ;\nSome Python basics:\n- file input/output\n- [f(a) for a in L] list comprehensions\n- iterables, basic types, built-in functions",
"datapath = \"../../data/\"\nwith open(datapath + 'winequality-red.csv','r') as winefile:\n header = winefile.readline()\n wine_list = [line.strip().split(';') for line in winefile]\n\nwine_ar = np.array(wine_list,dtype=np.float64)\n\nnames = [name.strip('\"') for name in header.strip().split(';')]\nprint(names)\n\n#Subselect the predictor X and response y\ny = wine_ar[:,-1]\nX = wine_ar[:,:-1]\nn,p = X.shape\n\ny.shape, X.shape #just checking\n\nimport statsmodels.api as sm\n\nX = np.hstack((np.ones((n,1)),X)) #add intercept\nwine_ols = sm.OLS(y,X) #Initialize the OLS \nwine_res = wine_ols.fit()\n\nwine_res.summary()",
"Linear model\n$$f_\\beta(x_i) = \\beta_0 + \\sum_{j=1}^p \\beta_j x_{i,j}$$\nInference in linear models\n\nstatistically test for significance of effects\nrequires normality assumptions, homoscedasticity, linear model is correct\nhard to obtain significance for individual effect under colinearity\n\nPrediction perspective\n\nthink of OLS as a black-box model for predicting $Y | X$\nhow do we evaluate performance of prediction?\nhow do we choose between multiple OLS models?\n\nSupervised learning\nLearning machine that takes $p$-dimensional data $x_i = (x_{i,1}, \\ldots, x_{i,p})$ and predicts $y_i \\in \\mathcal Y$. \n\nTask: Predict $y$ given $x$ as $f_\\beta(x)$\nPerformance Metric: Loss measured with some function $\\ell(\\beta; x,y)$\nExperience: Fit the model with training data ${x_i,y_i}_{i=1}^{n}$\n\nLinear Regression\n\nFit: Compute $\\hat \\beta$ from OLS with training data ${x_i,y_i}_{i=1}^{n}$\nPredict: For a new predictor $x_{n+1}$ predict $$\\hat y = f_{\\hat \\beta}(x_{n+1}) = \\hat \\beta_0 + \\sum_{j=1}^p \\hat \\beta_j x_{n+1,j}$$\nLoss: Observe new response $y_{n+1}$ and see loss $$\\ell(\\hat \\beta; x_{n+1},y_{n+1}) = (f_{\\hat \\beta}(x_{n+1}) - y_{n+1})^2$$",
"## The following uses pandas!\n\ndatapath = \"../../data/\"\noecd_bli = pd.read_csv(datapath + \"oecd_bli_2015.csv\", thousands=',')\noecd_bli = oecd_bli[oecd_bli[\"INEQUALITY\"]==\"TOT\"]\noecd_bli = oecd_bli.pivot(index=\"Country\", columns=\"Indicator\", values=\"Value\")\n\n# Load and prepare GDP per capita data\n\n# Download data from http://goo.gl/j1MSKe (=> imf.org)\ngdp_per_capita = pd.read_csv(datapath+\"gdp_per_capita.csv\", thousands=',', delimiter='\\t',\n encoding='latin1', na_values=\"n/a\")\ngdp_per_capita = gdp_per_capita.rename(columns={\"2015\": \"GDP per capita\"})\ngdp_per_capita = gdp_per_capita.set_index(\"Country\")\n\nfull_country_stats = pd.merge(left=oecd_bli, right=gdp_per_capita, left_index=True, right_index=True)\nfull_country_stats = full_country_stats.sort_values(by=\"GDP per capita\")\n\nfull_country_stats.head()\n\n_ = full_country_stats.plot(\"GDP per capita\",'Life satisfaction',kind='scatter')\nplt.title('Life Satisfaction Index')\n\nkeepvars = full_country_stats.dtypes[full_country_stats.dtypes == float].index.values\nkeepvars = keepvars[:-1]\ncountry = full_country_stats[keepvars]\n\nY = np.array(country['Life satisfaction'])\ndel country['Life satisfaction']\nX_vars = country.columns.values\nX = np.array(country)\n\ndef loss(yhat,y):\n \"\"\"sqr error loss\"\"\"\n return (yhat - y)**2\n\ndef fit(X,Y):\n \"\"\"fit the OLS from training w/ intercept\"\"\"\n lin1 = LinearRegression(fit_intercept=True) # OLS from sklearn\n lin1.fit(X,Y) # fit OLS\n return np.append(lin1.intercept_,lin1.coef_) # return betahat\n\ndef predict(x, betahat):\n \"\"\"predict for point x\"\"\"\n return betahat[0] + x @ betahat[1:]",
"Summary\n\nSupervised learning task is to predict $Y$ given $X$\nFit is using training data to fit parameters\nPredict uses the fitted parameters to do prediction\nLoss is a function that says how poorly you did on datum $x_i,y_i$\n\nIntermission\nRisk and Empirical Risk\nGiven a loss $\\ell(\\theta; X,Y)$, for parameters $\\theta$, the risk is \n$$\nR(\\theta) = \\mathbb E \\ell(\\theta; X,Y).\n$$\nAnd given training data ${x_i,y_i}{i=1}^{n_0}$ (drawn iid to $X,Y$), then the empirical risk is\n$$\nR_n(\\theta) = \\frac 1n \\sum{i=1}^n \\ell(\\theta; x_i, y_i).\n$$\nNotice that $\\mathbb E R_n(\\theta) = R(\\theta)$ for fixed $\\theta$.\nFor a class of parameters $\\Theta$, the empirical risk minimizer (ERM) is the \n$$\n\\hat \\theta = \\arg \\min_{\\theta \\in \\Theta} R_n(\\theta)\n$$\n(may not be unique).\nOLS is the ERM\nOLS minimizes the following objective,\n$$\nR_n(\\beta) = \\frac 1n \\sum_{i=1}^n \\left(y_i - x_i^\\top \\beta - \\beta_0 \\right)^2\n$$\nwith respect to $\\beta,\\beta_0$.\nThis is the ERM for square error loss and linear predictor.\nWhy is ERM a good idea?\nFor a fixed $\\theta$ we know by the Law of Large Numbers (as long as expectations exist and data is iid),\n$$\nR_n(\\theta) = \\frac 1n \\sum_{i=1}^n \\ell(\\theta; x_i, y_i) \\rightarrow \\mathbb E \\ell(\\theta; X,Y) = R(\\theta),\n$$\nwhere convergence is in probability (or almost surely).\nWe want to minimize $R(\\theta)$ so $R_n(\\theta)$ is a pretty good surrogate.\nExample: Binary classification\nMortgage insurer pays the mortgage company if the insuree defaults on loan. To determine how much to charge want to predict if they will default (1) or not (0).\nAn actuary (from 19th century) says that people that are young (less than 30) are irresponsible and will not insure them. 
Let $x$ be the age in years, $y = 1$ if they default, and $\\theta = 30$.\n$$\ng_\\theta(x) = \\left\\{ \\begin{array}{ll}\n1, &x < \\theta\\\\\n0, &x \\ge \\theta \n\\end{array}\n\\right.\n$$\nThe 0-1 loss is\n$$\n\\ell_{0-1}(\\theta; X,Y) = \\mathbf 1 \\{g_\\theta(X)\\ne Y\\}.\n$$\nThe risk is\n$$\nR(\\theta) = \\mathbb E \\mathbf 1 \\{g_\\theta(X)\\ne Y\\} = \\mathbb P \\{ g_\\theta(X) \\ne Y \\}.\n$$\nHow well will he do?\nUnsupervised learning\nWant to summarize/compress/learn the distribution of $X$. Clustering for example is the problem of assigning each datum to a cluster.\n<img width=\"500px\" src=\"kmeans.png\">\nImage from https://rpubs.com/cyobero/k-means\nClustering for example is the problem of assigning each datum to a cluster in the index set $[C] = \\{1,\\ldots,C\\}$ for cluster centers $z_k$,\n$$\n\\theta = \\left\\{ \\textrm{cluster centers, } \\{ z_k \\}_{k=1}^C \\subset \\mathbb R^p, \\textrm{ cluster assignments, } \\sigma:[n] \\to [C] \\right\\}\n$$\nThe loss is \n$$\n\\ell(\\theta;x_i) = \\| x_i - z_{\\sigma(i)} \\|^2 = \\sum_{j=1}^p (x_{i,j} - z_{\\sigma(i),j})^2.\n$$\nLoss, risk, and empirical risk can still be defined, but many concepts are not the same (such as the bias-variance tradeoff).\nIssue with training error in Supervised learning.\nLet $\\hat \\theta$ be the ERM, then the training error is\n$$\nR_n(\\hat \\theta) = \\min_{\\theta \\in \\Theta} R_n(\\theta)\n$$\nwhich does NOT converge to $\\min_\\theta R(\\theta)$ because\n$$\n\\mathbb E \\min_\\theta R_n(\\theta) \\ne \\min_{\\theta} \\mathbb E R_n(\\theta) = \\min_\\theta R(\\theta).\n$$\nSolution\nSplit the data randomly into training and test sets: \n- train $\\hat \\theta$ with the training data\n- test $\\hat \\theta$ with the test data\nBecause the test data is independent of $\\hat \\theta$, we can think of the training process as fixed, and the test error is now unbiased for the risk of $\\hat \\theta$.",
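The optimism of the training error can be seen in a small simulation of the age-threshold classifier under 0-1 loss. The default-probability model below is an assumption for illustration only:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample(n):
    """Toy data: default probability decreases with age (illustrative assumption)."""
    x = rng.uniform(20, 60, size=n)                       # ages
    y = (rng.random(n) < np.clip(1.2 - x / 50, 0, 1)).astype(int)
    return x, y

def emp_risk(theta, x, y):
    g = (x < theta).astype(int)       # predict default iff younger than theta
    return np.mean(g != y)            # empirical 0-1 risk R_n(theta)

x_tr, y_tr = sample(200)
x_te, y_te = sample(100000)

# ERM over a grid of thresholds
thetas = np.linspace(20, 60, 200)
train_risks = np.array([emp_risk(t, x_tr, y_tr) for t in thetas])
theta_hat = thetas[train_risks.argmin()]

train_err = train_risks.min()                       # optimistically biased
test_err = emp_risk(theta_hat, x_te, y_te)          # unbiased for R(theta_hat)
```

Because `theta_hat` was chosen to minimize the training risk, `train_err` tends to sit below `test_err`, which is exactly the bias the train/test split corrects for.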
"## randomly shuffle data and split\nn,p = X.shape\nInd = np.arange(n) \nnp.random.shuffle(Ind) \ntrain_size = 2 * n // 3 +1 # set training set size\nX_tr, X_te = X[Ind[:train_size],:], X[Ind[train_size:],:]\nY_tr, Y_te = Y[Ind[:train_size]], Y[Ind[train_size:]]\n\n## compute losses on test set\nbetahat = fit(X_tr,Y_tr)\nY_hat_te = [predict(x,betahat) for x in X_te]\ntest_losses = [loss(yhat,y) for yhat,y in zip(Y_hat_te,Y_te)]\n\n## compute losses on train set\nY_hat_tr = [predict(x,betahat) for x in X_tr]\ntrain_losses = [loss(yhat,y) for yhat,y in zip(Y_hat_tr,Y_tr)]\n\ntrain_losses\n\ntest_losses\n\nprint(\"train avg loss: {}\\ntest avg loss: {}\".format(np.mean(train_losses), np.mean(test_losses)))\nprint(\"n p :\",n,p)\n\ndef train_test_split(X,Y,split_pr = 0.5):\n \"\"\"train-test split\"\"\"\n n,p = X.shape\n Ind = np.arange(n) \n np.random.shuffle(Ind) \n train_size = int(split_pr * n) # set training set size\n X_tr, X_te = X[Ind[:train_size],:], X[Ind[train_size:],:]\n Y_tr, Y_te = Y[Ind[:train_size]], Y[Ind[train_size:]]\n return (X_tr,Y_tr), (X_te, Y_te)\n\nY = wine_ar[:,-1]\nX = wine_ar[:,:-1]\n(X_tr,Y_tr), (X_te, Y_te) = train_test_split(X,Y)\n\n## compute losses on test set\nbetahat = fit(X_tr,Y_tr)\nY_hat_te = [predict(x,betahat) for x in X_te]\ntest_losses = [loss(yhat,y) for yhat,y in zip(Y_hat_te,Y_te)]\n\n## compute losses on train set\nY_hat_tr = [predict(x,betahat) for x in X_tr]\ntrain_losses = [loss(yhat,y) for yhat,y in zip(Y_hat_tr,Y_tr)]\n\nprint(\"train avg loss: {}\\ntest avg loss: {}\".format(np.mean(train_losses), np.mean(test_losses)))",
"Summary\n\nWant to minimize true risk (expected loss)\nInstead we minimize empirical risk (training error)\nTraining error is now biased, so we do training test split"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
texib/deeplearning_homework
|
tensor-flow-exercises/7_SimpleLSTM.ipynb
|
mit
|
[
"import numpy as np\nfrom random import shuffle\ntrain_input = ['{0:020b}'.format(i) for i in range(2**20)]\n\nshuffle(train_input)\ntrain_input = [map(int,i) for i in train_input]\n\nti = []\nfor i in train_input:\n temp_list = []\n for j in i:\n temp_list.append([j])\n ti.append(np.array(temp_list))\ntrain_input = ti\n\ntrain_output = []\n \nfor i in train_input:\n count = 0\n for j in i:\n if j[0] == 1:\n count+=1\n temp_list = ([0]*21)\n temp_list[count]=1\n train_output.append(temp_list)\n\nNUM_EXAMPLES = 10000\ntest_input = train_input[NUM_EXAMPLES:]\ntest_output = train_output[NUM_EXAMPLES:] #everything beyond 10,000\n \ntrain_input = train_input[:NUM_EXAMPLES]\ntrain_output = train_output[:NUM_EXAMPLES] #till 10,000\n\nprint len(train_input[0])\nprint len(train_output[0])\n\nimport tensorflow as tf\n\ndata = tf.placeholder(tf.float32, [None, 20,1])\ntarget = tf.placeholder(tf.float32, [None, 21])",
"Basic LSTM from https://arxiv.org/abs/1409.2329\n\n<img align=\"left\" width=\"600px\" src='./basic_lstm.png' />",
"num_hidden = 24\ncell = tf.nn.rnn_cell.LSTMCell(num_hidden,state_is_tuple=True)\n\nval, state = tf.nn.dynamic_rnn(cell, data, dtype=tf.float32)\n\nval = tf.transpose(val, [1, 0, 2])\n\nlast = tf.gather(val, int(val.get_shape()[0]) - 1)\n\nweight = tf.Variable(tf.truncated_normal([num_hidden, int(target.get_shape()[1])]))\nbias = tf.Variable(tf.constant(0.1, shape=[target.get_shape()[1]]))\n",
"Softmax\n<img width='200px' align=\"left\" src=\"./softmax.png\" />",
"prediction = tf.nn.softmax(tf.matmul(last, weight) + bias)",
"什麼是 Corss Entropy\n\nhttp://blog.csdn.net/rtygbwwwerr/article/details/50778098",
"cross_entropy = -tf.reduce_sum(target * tf.log(tf.clip_by_value(prediction,1e-10,1.0)))",
"建立 Optimize Function",
"optimizer = tf.train.AdamOptimizer()\nminimize = optimizer.minimize(cross_entropy)\n\nmistakes = tf.not_equal(tf.argmax(target, 1), tf.argmax(prediction, 1))\nerror = tf.reduce_mean(tf.cast(mistakes, tf.float32))\n\ninit_op = tf.initialize_all_variables()\nsess = tf.Session()\nsess.run(init_op)\n",
"作 Training",
"batch_size = 1000\nno_of_batches = int(len(train_input)/batch_size)\nepoch =500\nfor i in range(epoch):\n ptr = 0\n for j in range(no_of_batches):\n inp, out = train_input[ptr:ptr+batch_size], train_output[ptr:ptr+batch_size]\n ptr+=batch_size\n sess.run(minimize,{data: inp, target: out})\n# print \"Epoch - \",str(i)\nincorrect = sess.run(error,{data: test_input, target: test_output})\nprint('Epoch {:2d} error {:3.1f}%'.format(i + 1, 100 * incorrect))\n# sess.close()\n\n\ntest_result = sess.run(prediction,{data: [inp[0]]})\n\ncount =0\nfor i in inp[0]:\n if i == [1]:\n count += 1\nprint count\n\ntest_result.argmax()"
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
GoogleCloudPlatform/training-data-analyst
|
courses/machine_learning/deepdive/06_structured/3_keras_wd.ipynb
|
apache-2.0
|
[
"<h1> Create Keras Wide-and-Deep model </h1>\n\nThis notebook illustrates:\n<ol>\n<li> Creating a model using Keras. This requires TensorFlow 2.1\n</ol>",
"# Ensure the right version of Tensorflow is installed.\n!pip freeze | grep tensorflow==2.1\n\n# change these to try this notebook out\nBUCKET = 'cloud-training-demos-ml'\nPROJECT = 'cloud-training-demos'\nREGION = 'us-central1'\n\nimport os\nos.environ['BUCKET'] = BUCKET\nos.environ['PROJECT'] = PROJECT\nos.environ['REGION'] = REGION\n\n%%bash\nif ! gsutil ls | grep -q gs://${BUCKET}/; then\n gsutil mb -l ${REGION} gs://${BUCKET}\nfi\n\n%%bash\nls *.csv",
"Create Keras model\n<p>\nFirst, write an input_fn to read the data.",
"import shutil\nimport numpy as np\nimport tensorflow as tf\nprint(tf.__version__)\n\n# Determine CSV, label, and key columns\nCSV_COLUMNS = 'weight_pounds,is_male,mother_age,plurality,gestation_weeks,key'.split(',')\nLABEL_COLUMN = 'weight_pounds'\nKEY_COLUMN = 'key'\n\n# Set default values for each CSV column. Treat is_male and plurality as strings.\nDEFAULTS = [[0.0], ['null'], [0.0], ['null'], [0.0], ['nokey']]\n\ndef features_and_labels(row_data):\n for unwanted_col in ['key']:\n row_data.pop(unwanted_col)\n label = row_data.pop(LABEL_COLUMN)\n return row_data, label # features, label\n\n# load the training data\ndef load_dataset(pattern, batch_size=1, mode=tf.estimator.ModeKeys.EVAL):\n dataset = (tf.data.experimental.make_csv_dataset(pattern, batch_size, CSV_COLUMNS, DEFAULTS)\n .map(features_and_labels) # features, label\n )\n if mode == tf.estimator.ModeKeys.TRAIN:\n dataset = dataset.shuffle(1000).repeat()\n dataset = dataset.prefetch(1) # take advantage of multi-threading; 1=AUTOTUNE\n return dataset",
"Next, define the feature columns. mother_age and gestation_weeks should be numeric.\nThe others (is_male, plurality) should be categorical.",
"## Build a Keras wide-and-deep model using its Functional API\ndef rmse(y_true, y_pred):\n return tf.sqrt(tf.reduce_mean(tf.square(y_pred - y_true))) \n\n# Helper function to handle categorical columns\ndef categorical_fc(name, values):\n orig = tf.feature_column.categorical_column_with_vocabulary_list(name, values)\n wrapped = tf.feature_column.indicator_column(orig)\n return orig, wrapped\n\ndef build_wd_model(dnn_hidden_units = [64, 32], nembeds = 3):\n # input layer\n deep_inputs = {\n colname : tf.keras.layers.Input(name=colname, shape=(), dtype='float32')\n for colname in ['mother_age', 'gestation_weeks']\n }\n wide_inputs = {\n colname : tf.keras.layers.Input(name=colname, shape=(), dtype='string')\n for colname in ['is_male', 'plurality'] \n }\n inputs = {**wide_inputs, **deep_inputs}\n \n # feature columns from inputs\n deep_fc = {\n colname : tf.feature_column.numeric_column(colname)\n for colname in ['mother_age', 'gestation_weeks']\n }\n wide_fc = {}\n is_male, wide_fc['is_male'] = categorical_fc('is_male', ['True', 'False', 'Unknown'])\n plurality, wide_fc['plurality'] = categorical_fc('plurality',\n ['Single(1)', 'Twins(2)', 'Triplets(3)',\n 'Quadruplets(4)', 'Quintuplets(5)','Multiple(2+)'])\n \n # bucketize the float fields. This makes them wide\n age_buckets = tf.feature_column.bucketized_column(deep_fc['mother_age'],\n boundaries=np.arange(15,45,1).tolist())\n wide_fc['age_buckets'] = tf.feature_column.indicator_column(age_buckets)\n gestation_buckets = tf.feature_column.bucketized_column(deep_fc['gestation_weeks'],\n boundaries=np.arange(17,47,1).tolist())\n wide_fc['gestation_buckets'] = tf.feature_column.indicator_column(gestation_buckets)\n \n # cross all the wide columns. 
We have to do the crossing before we one-hot encode\n crossed = tf.feature_column.crossed_column(\n [is_male, plurality, age_buckets, gestation_buckets], hash_bucket_size=20000)\n deep_fc['crossed_embeds'] = tf.feature_column.embedding_column(crossed, nembeds)\n\n # the constructor for DenseFeatures takes a list of numeric columns\n # The Functional API in Keras requires that you specify: LayerConstructor()(inputs)\n wide_inputs = tf.keras.layers.DenseFeatures(wide_fc.values(), name='wide_inputs')(inputs)\n deep_inputs = tf.keras.layers.DenseFeatures(deep_fc.values(), name='deep_inputs')(inputs)\n \n # hidden layers for the deep side\n layers = [int(x) for x in dnn_hidden_units]\n deep = deep_inputs\n for layerno, numnodes in enumerate(layers):\n deep = tf.keras.layers.Dense(numnodes, activation='relu', name='dnn_{}'.format(layerno+1))(deep) \n deep_out = deep\n \n # linear model for the wide side\n wide_out = tf.keras.layers.Dense(10, activation='relu', name='linear')(wide_inputs)\n \n # concatenate the two sides\n both = tf.keras.layers.concatenate([deep_out, wide_out], name='both')\n\n # final output is a linear activation because this is regression\n output = tf.keras.layers.Dense(1, activation='linear', name='weight')(both)\n model = tf.keras.models.Model(inputs, output)\n model.compile(optimizer='adam', loss='mse', metrics=[rmse, 'mse'])\n return model\n\nprint(\"Here is our Wide-and-Deep architecture so far:\\n\")\nmodel = build_wd_model()\nprint(model.summary())",
"We can visualize the DNN using the Keras plot_model utility.",
"tf.keras.utils.plot_model(model, 'wd_model.png', show_shapes=False, rankdir='LR')",
"Train and evaluate",
"TRAIN_BATCH_SIZE = 32\nNUM_TRAIN_EXAMPLES = 10000 * 5 # training dataset repeats, so it will wrap around\nNUM_EVALS = 5 # how many times to evaluate\nNUM_EVAL_EXAMPLES = 10000 # enough to get a reasonable sample, but not so much that it slows down\n\ntrainds = load_dataset('train*', TRAIN_BATCH_SIZE, tf.estimator.ModeKeys.TRAIN)\nevalds = load_dataset('eval*', 1000, tf.estimator.ModeKeys.EVAL).take(NUM_EVAL_EXAMPLES//1000)\n\nsteps_per_epoch = NUM_TRAIN_EXAMPLES // (TRAIN_BATCH_SIZE * NUM_EVALS)\n\nhistory = model.fit(trainds, \n validation_data=evalds,\n epochs=NUM_EVALS, \n steps_per_epoch=steps_per_epoch)",
"Visualize loss curve",
"# plot\nimport matplotlib.pyplot as plt\nnrows = 1\nncols = 2\nfig = plt.figure(figsize=(10, 5))\n\nfor idx, key in enumerate(['loss', 'rmse']):\n ax = fig.add_subplot(nrows, ncols, idx+1)\n plt.plot(history.history[key])\n plt.plot(history.history['val_{}'.format(key)])\n plt.title('model {}'.format(key))\n plt.ylabel(key)\n plt.xlabel('epoch')\n plt.legend(['train', 'validation'], loc='upper left');",
"Save the model",
"import shutil, os, datetime\nOUTPUT_DIR = 'babyweight_trained'\nshutil.rmtree(OUTPUT_DIR, ignore_errors=True)\nEXPORT_PATH = os.path.join(OUTPUT_DIR, datetime.datetime.now().strftime('%Y%m%d%H%M%S'))\ntf.saved_model.save(model, EXPORT_PATH) # with default serving function\nprint(\"Exported trained model to {}\".format(EXPORT_PATH))\n\n!ls $EXPORT_PATH",
"Copyright 2020 Google Inc. Licensed under the Apache License, Version 2.0 (the \"License\"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
folivetti/PIPYTHON
|
Aula01.ipynb
|
mit
|
[
"Introdução à Programação em Python\nNesse Notebook aprenderemos:\n\noperações básicas do Python, \ntipos básicos de variáveis,\nentrada e saída de dados e, \nbibliotecas extras auxiliares.\n\n Operações Aritméticas \nO computador é essencialmente uma máquina de calcular. Para tanto temos diversas operações aritméticas disponíveis no Python:\n\nsoma (+): 1+2\nsubtração (-): 2-1\nmultiplicação (*): 3*5\ndivisão (/): 7/2\ndivisão inteira (//): 7//2\nexponenciação (**): 2**3\nresto da divisão (%): 5%2",
"# todo texto escrito após \"#\" é um comentário e não é interpretado pelo Python\n1+2 # realiza a operação 1+2 e imprime na tela\n\n2-1\n\n3*5\n\n7/2\n\n7//2\n\n2**3\n\n5%2",
"A ordem que as operações são executadas seguem uma ordem de precedência:\n- primeiro as operações de divisão e multiplicação,\n- em seguida adição e subtração.",
"1 + 2 * 3",
"A ordem pode ser alterada com o uso de parênteses.",
"(1+2)*3\n\n1 + (2*3)",
"Além das operações básicas temos algumas operações mais complexas disponíveis.\nPrecisamos indicar que queremos usar tais funções com o comando:\npython\nimport math\nNota: As funções matemáticas disponíveis podem ser consultadas na documentação.",
"import math\n\nmath.sqrt(2.0) # raiz de 2\n\nmath.log(10) # ln(10)\n\nmath.exp(1.0) # e^1 \n\nmath.log(math.exp(1.0)) # ln(e)\n\nmath.cos(math.pi) # cos(pi)",
"Os tipos numéricos em Python são chamados de:\n- int, para números inteiros e,\n- float, para números reais.\nComplete os códigos abaixo e aperte SHIFT+ENTER para verificar se a saída é a esperada.",
"# Exercício 01\n\n# Vamos fazer algumas operações básicas: escreva o código logo abaixo das instruções, ao completar a tarefa aperte \"shift+enter\"\n# e verifique se a resposta está de acordo com o esperado\n\n# 1) A razão áurea é dada pela eq. (1 + raiz(5))/2, imprima o resultado\n# resultado deve ser 1.61803398875\n \n\n\n\n# 2) A entropia é dada por: -p*log(p) - (1-p)*log(1-p), \n# Note que o logaritmo é na base 2, procure na documentação como calcular log2.\n# Calcule a entropia para:\n\n# p = 0.5, resultado = 1.0\n\n\n\n\n# p = 1.0, resultado = nan\n\n\n\n# p = 0.0, resultado = nan\n\n\n\n# p = 0.4, resultado = 0.970950594455\n\n",
"Quando uma operação tem resultado indefinido o Python indica com o valor nan.\nVariáveis\nNo exercício anterior, para calcular cada valor de entropia foi necessário digitar toda a fórmula para cada valor de p.\nAlém disso o valor de p era utilizado duas vezes em cada equação, tornando o trabalho tedioso e com tendência a erros.\nSeria interessante criar uma célula do Notebook customizável para calcular a entropia de qualquer valor de p.\nDa mesma forma que na matemática, podemos utilizar variáveis para generalizar uma expressão e alterar apenas os valores de tais variáveis.\nNo computador, tais variáveis armazenam valores em memória para uso posterior.\nPara utilizarmos variáveis no Python basta dar um nome a ela e, em seguida, atribuir um valor utilizando o operador \"=\".\nUma vez que tal variável tem um valor, podemos realizar qualquer operação com ela:",
"x = 10 # o computador armazena o valor 10 na variável de nome x\nx + 2\n\nx * 3",
"Dessa forma o calculo da entropia poderia ser feito como:",
"p = 0.4\n-p*math.log(p,2) - (1.0-p)*math.log(1.0-p,2)",
"Ao alterar o valor de p, o programa retorna o resultado da nova entropia.\nPodemos pedir que o usuário entre com os valores utilizando o comando input().\nEsse comando vai esperar que o usuário digite algum valor e o atribuirá em uma variável.\nNota: precisamos especificar ao comando input o tipo de variável desejada, no nosso caso float.",
"p = float(input(\"Digite o valor de p: \"))\n-p*math.log(p,2) - (1.0-p)*math.log(1.0-p,2)",
"O nome das variáveis deve:\n- começar com uma letra e,\n- conter apenas letras, números e o símbolo \"_\".\nAlém disso existem alguns nomes já utilizados pelo Python e que não devem ser utilizados:\npython\n and, as, assert, break, class, continue, def, del, elif, \n else, except, exec, finally, for, from, global, if, \n import, in, is, lambda, not, or, pass, print, raise,\n return, try, while, with, yield \nO Python determina automaticamente o tipo que a variável vai conter, de acordo com o que é atribuído a ela.\nPodemos utilizar o comando type() para determinar o tipo atual da variável:",
"x = 1\ntype(x)\n\nx = 1.0\ntype(x)",
"Outros tipos de dados\nBooleanos\nAlém dos tipos numéricos temos também os tipos lógicos que podem conter o valor de verdadeiro (True) ou falso (False).\nEsses valores são gerados através de operadores relacionais:\n- Igualdade: 1 == 2 => False\n- Desigualdade: 1 != 2 => True\n- Maior e Maior ou igual: 1 > 2, 1 >= 2 => False\n- Menor e Menor ou igual: 1 < 2, 1 <= 2 => True\nE compostos com operadores booleanos:\n- E: True and False => False\n- Ou: True or False => True\n- Não: not True => False",
"True and False\n\nTrue and True\n\nTrue or False\n\nTrue or True\n\nnot True\n\nnot False\n\nx = 1\ny = 2\nx == y\n\nx != y\n\nx > y\n\nx <= y\n\nx <= y and x != y # x < y?\n\nx <= y and not (x != y) # x >= y e x == y ==> x == y?",
"Strings\nTemos também a opção de trabalhar com textos, ou strings.\nEsse tipo é caracterizado por aspas simples ou duplas.",
"texto = \"Olá Mundo\"\ntype(texto)\n\nlen(texto)",
"Podemos usar os operadores + e *, representando concatenação e repetição:",
"ola = \"Olá\"\nespaco = \" \"\nmundo = \"Mundo\"\nola + espaco + mundo\n\n(ola + espaco) * 6 + mundo",
"Podemos também obter um caractere da string utilizando o conceito de indexação.\nNa indexação, colocamos um valor entre colchetes após o nome da variável para indicar a posição do elemento que nos interessa naquele momento.\nA contagem de posição inicia do 0.",
"texto = \"Olá Mundo\"\ntexto[0]\n\ntexto[2] # terceira letra",
"Podemos também usar a indexação para pegar faixas de valores:",
"texto[0:3]\n\ntexto[4:] # omitir o último valor significa que quero ir até o fim\n\ntexto[:3] # nesse caso omiti o valor inicial\n\ntexto[-1] # índice negativo representa uma contagem de trás para frente\n\ntexto[-1:-6:-1] # o último valor, -1, indica que quero andar de trás para frente.",
"Listas\nListas de valores são criadas colocando elementos de quaisquer tipos entre colchetes e separados por vírgulas.\nAs listas podem ser úteis para armazenar uma coleção de valores que utilizaremos em nosso programa.",
"coordenadas = [1.0, 3.0]\ncoordenadas\n\nlista = [1,2,3.0,'batata',True]\nlista\n\nlen(lista)",
"O acesso aos elementos da lista é feito de forma similar às strings:",
"lista[0] # imprime o primeiro elemento da lista\n\nlista[0:3] # imprime os 3 primeiros elementos\n\nlista[-1] # último elemento",
"Existem outros tipos mais avançados que veremos ao longo do curso.\nComplete o código abaixo para realizar o que é pedido:",
"#Exercício 02\nimport math\n# 1) entre com o numero do mes do seu nascimento na variavel 'mes' e siga as instruções abaixo colocando o resultado\n# na variável 'resultado'.\nmes = int(input('Digite o mes do seu nascimento: '))\nresultado = 0\n\n# 1) multiplique o número por 2 e armazene na variavel de nome resultado\n\n# 2) some 5 ao resultado e armazene nela mesma\n\n# 3) multiplique por 50 armazenando em resultado\n\n# 4) some sua idade ao resultado\nidade = int(input('Digite sua idade: '))\n\n# 5) subtraia resultado por 250\n\n# o primeiro digito deve ser o mês e os dois últimos sua idade\nprint(resultado)\n\n#Exercício 03\n\n# 1) Peça para o usuario digitar o nome e imprima \"Ola <nome>, como vai?\"\n# não esqueça que o nome deve ser digitado entre aspas.\nnome = \nprint()\n\n\n\n# 2) Crie uma lista com 3 zeros peça para o usuário digitar 2 valores, armazenando nas 2 primeiras\n# posicoes. Calcule a media dos dois primeiros valores e armazene na terceira posição.\n\n# utilize input() para capturar os valores\nlista =\n\nprint(lista)",
"Entrada e Saída de Dados\nEntrada de Dados\nDurante essa primeira aula vimos sobre os comandos input() para capturar valores digitados pelo usuário e print() para imprimir os valores dos resultados.\nO comando input() captura uma string digitada pelo usuário, essa string deve ser convertida de acordo com o que se espera dela:",
"x = float(input(\"Entre com um valor numérico: \")) # o valor digitado será um número em ponto flutuante\nx/2",
"Alguns dos comandos de conversão disponíveis são: int() para inteiros, float() para ponto flutuante e str() para strings.",
"x = 10\nint(x) + 2\n\nfloat(x) * 2\n\nstr(x) * 2",
"Saída de Dados\nA função print() permite escrever o valor de uma variável ou expressão na tela pulando para a próxima linha ao final.\nEsse comando permite a impressão de diversos valores na mesma linha separando-os por \",\".\nAdicionalmente, se o último argumento da função for end=\" \" ele não pula para a próxima linha após a impressão.",
"print(2) # imprime 2 e vai para a próxima linha\nprint(3, \"ola\", 4+5) # imprime múltiplos valores de diferentes tipos e vai para a próxima linha\nprint(1, 2, end=\" \") # imprime os dois valores mas não pula para a próxima linha\nprint(3,4) # continua impressão na mesma linha",
"A função print() permite o uso de alguns caracteres especiais como:\n- '\\t': adiciona uma tabulação\n- '\\n': pula uma linha\nPodemos complementar a formatação da string com o comando format():",
"m = 1\nn = 6\nd = 1/float(n)\nprint('{}: {}/{} = {:.4}'.format(\"O valor da divisão\", m, n, d)) \n# {} indica que as lacunas devem ser preenchidas e .4 é o número de casas decimais para o número.\n\nprint(\"Elemento 1 \\t Elemento 2 \\nElemento 3 \\t Elemento 4\")",
"Bibliotecas Extras\nO Python conta com diversas bibliotecas que automatizam muitas tarefas e permitem realizarmos experimentos interessantes.\nUsualmente as bibliotecas, ou módulos, são importados através do comando:\npython\nimport NOME \ne as funções das bibliotecas são chamadas precedidas pelo NOME do módulo.\nVimos isso anteriormente com a biblioteca math:",
"import math\nmath.sqrt(2)",
"Se soubermos exatamente as funções que iremos utilizar do módulo, podemos importá-los seletivamente com a sintaxe:\npython\nfrom NOME import FUNCAO\nNesse caso não precisamos preceder a função com NOME para executá-la:",
"from math import sqrt, exp\nprint(sqrt(2), exp(3))",
"Finalmente podemos apelidar o nome do módulo para facilitar a chamada de funções:",
"import math as mt\nprint(mt.sqrt(2), mt.exp(3))",
"O comando dir lista todas as funções disponíveis em uma biblioteca:",
"print(dir(math))",
"E o comando help imprime uma ajuda sobre determinada função:",
"help(math.sin)",
"A biblioteca matplotlib permite plotar gráficos diversos.\nPara usá-la no Jupyter Notebook primeiro devemos importar os comandos de plotagem:\npython\n%matplotlib inline \nimport matplotlib.pyplot as plt\nO comando %matplotlib inline indica para o Jupyter que todos os gráficos devem ser mostrados dentro do próprio notebook.\nVamos plotar alguns pontos de uma função quadrática utilizando o comando:\npython\nplt.plot(x,y)\nonde x e y são listas com valores correspondentes a abscissa e ordenada, respectivamente.\nBasicamente temos que:\n- x será uma lista de pontos que queremos plotar e,\n- y a aplicação da função $f(x) = x^2$ em cada ponto de x.",
"%matplotlib inline\nimport matplotlib.pyplot as plt\n\nx = [-5,-4,-3,-2,-1,0,1,2,3,4,5]\ny = [25,16,9,4,1,0,1,4,9,16,25]\n\nplt.figure(figsize=(8,8)) # Cria uma nova figura com largura e altura definida por figsize\nplt.style.use('fivethirtyeight') # define um estilo de plotagem: # http://tonysyu.github.io/raw_content/matplotlib-style-gallery/gallery.html\n\nplt.title('Eq. do Segundo Grau') # Escreve um título para o gráfico\nplt.xlabel('x') # nome do eixo x\nplt.ylabel(r'$x^2$') # nome do eixo y, as strings entre $ $ são formatadas como no latex\n\nplt.plot(x,y, color='green') # cria um gráfico de linha com os valores de x e y e a cor definida por \"color\"\n\nplt.show() # mostra o gráfico\n\n# Leiam mais em: http://matplotlib.org/users/pyplot_tutorial.html"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
snucsne/CSNE-Course-Source-Code
|
CSNE2444-Intro-to-CS-I/jupyter-notebooks/ch12-tuples.ipynb
|
mit
|
[
"Chapter 12: Tuples\n\nContents\n- Tuples are immutable\n- Tuple assignment\n- Tuples as return values\n- Variable-length argument tuples\n- Lists and tuples\n- Dictionaries and tuples\n- Comparing tuples\n- Sequences of sequences\n- Debugging\n- Exercises\n\nThis notebook is based on \"Think Python, 2Ed\" by Allen B. Downey <br>\nhttps://greenteapress.com/wp/think-python-2e/\n\nTuples are immutable\n\nA tuple is a sequence of values\nThey values can be any type and are index by integers\nThey are similar to lists, except:\nTuples are immutable\nTuples values usually have different types (unlike lists which generally hold only a single type)\n\n\nThere are multiple ways to create a tuple",
"a_tuple = ( 'a', 'b', 'c', 'd', 'e' )\na_tuple = 'a', 'b', 'c', 'd', 'e'\na_tuple = 'a',\n\ntype( a_tuple )",
"If you want to create a tuple with a single value, add a comma (,) after the value, but don’t add parenthesis\nYou can also use the built in function tuple",
"a_tuple = tuple()\nprint( a_tuple )\na_tuple = tuple( 'lupins' )\nprint( a_tuple )",
"Most list operators (e.g., bracket and slice) work on tuples\nIf you try to modify one of the elements, you get an error",
"a_tuple = ( 'a', 'b', 'c', 'd', 'e' )\nprint( a_tuple[1:3] )\nprint( a_tuple[:3] )\nprint( a_tuple[1:] )\n\n# Uncomment to see error\n# a_tuple[0] = 'z'",
"Tuple assignment\n\nOften, you may want to swap the values of two variables\nThis is done conventionally using a temporary variable for storage",
"a = 5\nb = 6\n\n# Conventional value swap\ntemp = a\na = b\nb = temp\n\nprint( a, b )",
"Python allows you to do it using a tuple assignment",
"a, b = b, a\nprint( a, b )",
"On the left is a tuple of varaibles and on the right is a tuple of expressions\nNote that the number on each side of the equality sign must be the same\nThe right side can be any kind of sequence (e.g., a string or list)",
"addr = 'monty@python.org'\nuname, domain = addr.split( '@' )\nprint( uname )\nprint( domain )",
"Tuples as return values\n\nA function can only return one value\nHowever, if we make that value a tuple, we can effectively return multiple values\nFor example, the divmod function takes two (2) arguments and retunrs a tuple of two (2) values, the quotient and remainder",
"quotient, remainder = divmod( 7, 3 )\nprint( quotient )\nprint( remainder )",
"Note the use of a tuple assignment\nHere is how to build a function that returns a tuple",
"def min_max( a_tuple ):\n return min( a_tuple ), max( a_tuple )\n\nnumbers = ( 13, 7, 55, 42 )\nmin_num, max_num = min_max( numbers )\nprint( min_num )\nprint( max_num )",
"Note that min and max are built-in functions\n\nVariable-length argument tuples\n\nAll the functions we have built and used required a specific number of arguments\nYou can use tuples to build functions that accept a variable number of arguments\nPrepend the argument’s variable name with an * to do this\nIt is referred to as the gather operator",
"def printall( *args ):\n print( args )\n\nprintall( 1 , 2.0 , '3' )",
"The complement is the scatter operator\nIt allows you to pass a sequence of values as individual arguments to the function",
"a_tuple = ( 7, 3 )\n# divmod( a_tuple ) # Uncomment to see error\ndivmod( *a_tuple )",
"Lists and tuples\n\nThe zip function takes two or more sequences and \"zips\" them into a list of tuples",
"a_string = 'abc'\na_list = [ 0, 1, 2 ]\nfor element in zip( a_string, a_list ):\n print( element )",
"Essentially, it returns an iterator over a list of tuples\nIf the sequences aren't the same length, the result has the length of the shorter one",
"for element in zip( 'Peter', 'Tony' ):\n print( element )",
"You can also use tuple assignment to get the individual values",
"a_list = [ ('a', 0), ('b', 1), ('c', 2) ]\nfor letter, number in a_list:\n print( letter, number )",
"You can combine zip, for and tuple assignment to traverse two (or more) sequences at the same time",
"def has_match( tuple1, tuple2 ):\n result = False\n for x, y in zip( tuple1, tuple2 ):\n if( x == y ):\n result = True\n return result",
"If you need the indices, use the enumerate function",
"for index , element in enumerate( 'abc' ):\n print( index, element )",
"Dictionaries and tuples\n\nThe method items returns a tuple of key-value pairs from a dictionary",
"a_dict = { 'a': 0, 'b':1, 'c':2 }\ndict_items = a_dict.items()\n\nprint( type( dict_items ) )\nprint( dict_items )\n\nfor element in dict_items:\n print( element )",
"Remember that there is no particular ordering in a dictionary\nYou can also add a list of tuples to a dictionary using the update method\nUsing items, tuple assignment, and a for loop is an easy way to traverse the keys and values of a dictionary",
"for key, value in a_dict.items():\n print( key, value )",
"Since tuples are immutable, you can even use them as keys in a dictionary",
"directory = dict()\ndirectory[ 'Smith', 'Bob' ] = '555-1234'\ndirectory[ 'Doe', 'Jane' ] = '555-9786'\n\nfor last, first in directory:\n print( first, last, directory[last, first] )",
"Examples of a state diagram for a tuple can be found on pg. 121\n\nComparing tuples\n\nRelational operators work with tuples as well\nThe process is:\nCompare the first item\nIf they are equal, go on to the next one\nContinue until elements that differ are found\nSubsequent elements are ignored\n\n\n\nSequences of sequences\n\nIn many situations, the different kinds of sequences (i.e., strings, lists, and tuples) can be used interchangeably\nWhen do you choose one over the other?\nStrings are more limited since they are immutable and the elements have to be characters\nLists are more common than tuples (mainly because they are mutable)\nPrefer tuples when:\nIt is syntactically easier to create a tuple (like a return statement)\nYou want to use a sequence as a dictionary key. If so, you need to use a string or a tuple since they are immutable\nIf you passing a sequence as an argument to a function, using tuples reduces the potential for unexpected behavior due to aliasing\n\n\nSince tuples aren’t immutable, you can’t sort or reverse them\nHowever, there are the functions sorted and reversed which return a new sequence with the ordering changed\n\nDebugging\n\nLists, dictionaries and tuples are known generically as data structures\nWe dive into these data structures more in other courses\n\nExercises\n\nMany of the built-in functions use variable-length argument tuples. For example, max and min can take any number of arguments, but sum does not. Write a function called sum_all that takes any number of arguments and returns their sum.\nWrite a function called print_all that takes a variable number of arguments and prints each one on its own line.\nWrite a function called distance that takes two (2) two-dimensional points and returns the distance between them. Then, extend your function to use three-dimensional points. 
Can you generalize it to take two points of any dimension?\nWrite a function called merge that takes two (2) sorted tuples and returns a new tuple that contains all the elements in the two tuples in sorted order. Write both a recursive and an iterative solution."
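A minimal sketch of the element-by-element comparison described above:

```python
# The first differing element decides; later elements are ignored
assert (0, 1, 2) < (0, 3, 4)
assert (0, 1, 2000000) < (0, 3, 4)  # the large last element is never compared
assert ('a', 'b', 'c') < ('a', 'c', 'a')
assert (1, 2) == (1, 2)
print('all comparisons behave as described')
```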
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
LimeeZ/phys292-2015-work
|
days/day11/Interpolation.ipynb
|
mit
|
[
"Interpolation\nLearning Objective: Learn to interpolate 1d and 2d datasets of structured and unstructured points using SciPy.",
"%matplotlib inline\nimport matplotlib.pyplot as plt\nimport numpy as np\nimport seaborn as sns",
"Overview\nWe have already seen how to evaluate a Python function at a set of numerical points:\n$$ f(x) \\rightarrow f_i = f(x_i) $$\nHere is an array of points:",
"x = np.linspace(0,4*np.pi,10)\nx",
"This creates a new array of points that are the values of $\\sin(x_i)$ at each point $x_i$:",
"f = np.sin(x)\nprint(f)\n\nc = np.cos(x)\nprint (c)\n\nplt.plot(x, f, marker='o')\nplt.xlabel('x')\nplt.ylabel('f(x)');",
"This plot shows that the points in this numerical array are an approximation to the actual function as they don't have the function's value at all possible points. In this case we know the actual function ($\\sin(x)$). What if we only know the value of the function at a limited set of points, and don't know the analytical form of the function itself? This is common when the data points come from a set of measurements.\nInterpolation is a numerical technique that enables you to construct an approximation of the actual function from a set of points:\n$$ {x_i,f_i} \\rightarrow f(x) $$\nIt is important to note that unlike curve fitting or regression, interpolation doesn't not allow you to incorporate a statistical model into the approximation. Because of this, interpolation has limitations:\n\nIt cannot accurately construct the function's approximation outside the limits of the original points.\nIt cannot tell you the analytical form of the underlying function.\n\nOnce you have performed interpolation you can:\n\nEvaluate the function at other points not in the original dataset.\nUse the function in other calculations that require an actual function.\nCompute numerical derivatives or integrals.\nPlot the approximate function on a finer grid that the original dataset.\n\nWarning:\nThe different functions in SciPy work with a range of different 1d and 2d arrays. To help you keep all of that straight, I will use lowercase variables for 1d arrays (x, y) and uppercase variables (X,Y) for 2d arrays. \n1d data\nWe begin with a 1d interpolation example with regularly spaced data. The function we will use it interp1d:",
"from scipy.interpolate import interp1d",
"Let's create the numerical data we will use to build our interpolation.",
"x = np.linspace(0,4*np.pi,10) # only use 10 points to emphasize this is an approx\nf = np.sin(x)",
"To create our approximate function, we call interp1d as follows, with the numerical data. Options for the kind argument includes:\n\nlinear: draw a straight line between initial points.\nnearest: return the value of the function of the nearest point.\nslinear, quadratic, cubic: use a spline (particular kinds of piecewise polynomial of a given order.\n\nThe most common case you will want to use is cubic spline (try other options):",
"sin_approx = interp1d(x, f, kind='cubic')",
"The sin_approx variabl that interp1d returns is a callable object that can be used to compute the approximate function at other points. Compute the approximate function on a fine grid:",
"newx = np.linspace(0,4*np.pi,100)\nnewf = sin_approx(newx)",
"Plot the original data points, along with the approximate interpolated values. It is quite amazing to see how the interpolation has done a good job of reconstructing the actual function with relatively few points.",
"plt.plot(x, f, marker='o', linestyle='', label='original data')\nplt.plot(newx, newf, marker='.', label='interpolated');\nplt.legend();\nplt.xlabel('x')\nplt.ylabel('f(x)');",
"Let's look at the absolute error between the actual function and the approximate interpolated function:",
"plt.plot(newx, np.abs(np.sin(newx)-sin_approx(newx)))\nplt.xlabel('x')\nplt.ylabel('Absolute error');",
"1d non-regular data\nIt is also possible to use interp1d when the x data is not regularly spaced. To show this, let's repeat the above analysis with randomly distributed data in the range $[0,4\\pi]$. Everything else is the same.",
"x = 4*np.pi*np.random.rand(15)\nf = np.sin(x)\n\nsin_approx = interp1d(x, f, kind='cubic')\n\n# We have to be careful about not interpolating outside the range\nnewx = np.linspace(np.min(x), np.max(x),100)\nnewf = sin_approx(newx)\n\nplt.plot(x, f, marker='o', linestyle='', label='original data')\nplt.plot(newx, newf, marker='.', label='interpolated');\nplt.legend();\nplt.xlabel('x')\nplt.ylabel('f(x)');\n\nplt.plot(newx, np.abs(np.sin(newx)-sin_approx(newx)))\nplt.xlabel('x')\nplt.ylabel('Absolute error');",
"Notice how the absolute error is larger in the intervals where there are no points.\n2d structured\nFor the 2d case we want to construct a scalar function of two variables, given\n$$ {x_i, y_i, f_i} \\rightarrow f(x,y) $$\nFor now, we will assume that the points ${x_i,y_i}$ are on a structured grid of points. This case is covered by the interp2d function:",
"from scipy.interpolate import interp2d",
"Here is the actual function we will use the generate our original dataset:",
"def wave2d(x, y):\n return np.sin(2*np.pi*x)*np.sin(3*np.pi*y)",
"Build 1d arrays to use as the structured grid:",
"x = np.linspace(0.0, 1.0, 10)\ny = np.linspace(0.0, 1.0, 10)",
"Build 2d arrays to use in computing the function on the grid points:",
"X, Y = np.meshgrid(x, y)\nZ = wave2d(X, Y)",
"Here is a scatter plot of the points overlayed with the value of the function at those points:",
"plt.pcolor(X, Y, Z)\nplt.colorbar();\nplt.scatter(X, Y);\nplt.xlim(0,1)\nplt.ylim(0,1)\nplt.xlabel('x')\nplt.ylabel('y');",
"You can see in this plot that the function is not smooth as we don't have its value on a fine grid.\nNow let's compute the interpolated function using interp2d. Notice how we are passing 2d arrays to this function:",
"wave2d_approx = interp2d(X, Y, Z, kind='cubic')",
"Compute the interpolated function on a fine grid:",
"xnew = np.linspace(0.0, 1.0, 40)\nynew = np.linspace(0.0, 1.0, 40)\nXnew, Ynew = np.meshgrid(xnew, ynew) # We will use these in the scatter plot below\nFnew = wave2d_approx(xnew, ynew) # The interpolating function automatically creates the meshgrid!\n\nFnew.shape",
"Plot the original course grid of points, along with the interpolated function values on a fine grid:",
"plt.pcolor(xnew, ynew, Fnew);\nplt.colorbar();\nplt.scatter(X, Y, label='original points')\nplt.scatter(Xnew, Ynew, marker='.', color='green', label='interpolated points')\nplt.xlim(0,1)\nplt.ylim(0,1)\nplt.xlabel('x')\nplt.ylabel('y');\nplt.legend(bbox_to_anchor=(1.2, 1), loc=2, borderaxespad=0.);",
"Notice how the interpolated values (green points) are now smooth and continuous. The amazing thing is that the interpolation algorithm doesn't know anything about the actual function. It creates this nice approximation using only the original course grid (blue points).\n2d unstructured\nIt is also possible to perform interpolation when the original data is not on a regular grid. For this, we will use the griddata function:",
"from scipy.interpolate import griddata",
"There is an important difference between griddata and the interp1d/interp2d:\n\ninterp1d and interp2d return callable Python objects (functions).\ngriddata returns the interpolated function evaluated on a finer grid.\n\nThis means that you have to pass griddata an array that has the finer grid points to be used. Here is the course unstructured grid we will use:",
"x = np.random.rand(100)\ny = np.random.rand(100)",
"Notice how we pass these 1d arrays to our function and don't use meshgrid:",
"f = wave2d(x, y)",
"It is clear that our grid is very unstructured:",
"plt.scatter(x, y);\nplt.xlim(0,1)\nplt.ylim(0,1)\nplt.xlabel('x')\nplt.ylabel('y');",
"To use griddata we need to compute the final (strcutured) grid we want to compute the interpolated function on:",
"xnew = np.linspace(x.min(), x.max(), 40)\nynew = np.linspace(y.min(), y.max(), 40)\nXnew, Ynew = np.meshgrid(xnew, ynew)\n\nXnew.shape, Ynew.shape\n\nFnew = griddata((x,y), f, (Xnew, Ynew), method='cubic', fill_value=0.0)\n\nFnew.shape\n\nplt.pcolor(Xnew, Ynew, Fnew, label=\"points\")\nplt.colorbar()\nplt.scatter(x, y, label='original points')\nplt.scatter(Xnew, Ynew, marker='.', color='green', label='interpolated points')\nplt.xlim(0,1)\nplt.ylim(0,1)\nplt.xlabel('x')\nplt.ylabel('y');\nplt.legend(bbox_to_anchor=(1.2, 1), loc=2, borderaxespad=0.);",
"Notice how the interpolated function is smooth in the interior regions where the original data is defined. However, outside those points, the interpolated function is missing (it returns nan)."
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
apozas/BIST-Python-Bootcamp
|
6_Optimization_Integration_ODEs.ipynb
|
gpl-3.0
|
[
"6 Optimizing Parametric Functions, Integration, and ODEs\nWe will cover two things today:\n\nNumerical integration of ODE systems\nNumerical optimization\n\nFirst of all we include some basic packages to allow us to make nice-looking plots, manipulate data, ...:",
"import numpy as np\n\nimport matplotlib.pyplot as plt\n%matplotlib inline\n\nimport seaborn as sns\nsns.set()",
"Some Motivation for Numerical Integration\nLet's say we have a system, defined as a system of differential equations\n$$\n\\frac{d\\mathbf{y}}{dt} = f(t, \\mathbf{y}(t))\n$$\nwhere $\\mathbf{y}$ represents some vector of dependent variables, $t$ is time, and the system has some specified initial condition $\\mathbf{y}(t_0) = \\mathbf{y}_0 $.\nIn a number of cases, we can find an analytical solution through direct integration or other means. \nEventually, finding an analytical solution becomes tricky or intractable (this is the case in most real-world ODE models, particularly nonlinear ODE systems). \nAt that point it is necessary to find a numerical approximation to the 'real' solution, which is where numerical integration comes in.\nWe will be dealing today with the case of initial value problems, as defined above.\nThe Euler Method\nIn the general definition of the system above, the right hand side of the equation defines the rate of change of some quantity $\\mathbf{y}$ with respect to time.\nWe won't go into the derivation here, but for a sufficiently small time step, the slope will not change much and so a sequence of points can be computed by iteratively computing this rate of change ($f(t,y)$) based on the value of a previous point and a time increment $h$:\n$$\ny_{n+1} = y_n + h(f(t_n, y_n))\n$$\nThis is the most basic form of numerical integration, the so called Euler method.\nLet's make an example to see how it works, by integrating the function\n$$\n\\frac{dy}{dt} = -y(t)\n$$\nwhose exact solution is\n$$\ny(t) = y_0 \\cdot e^{-t}\n$$\nwhere $y_0$ is the initial value of the system.",
"def rhs_function(y, t):\n \"\"\"The right hand side of our ODE.\"\"\"\n dydt = -y\n return dydt\n\ndef real_solution(y0, t):\n \"\"\"The real (analytical) solution to the ODE\"\"\"\n return y0*np.exp(-t)\n\ndef euler_solve(t0, t1, h, y0, func):\n \"\"\"Solve a simple equation using the Euler method\n t0: initial time\n t1: final time\n h: integration timestep\n y0: initial function value\n func: the function to integrate\n \"\"\"\n # Generate all timesteps\n T = np.arange(t0, t1, h)\n \n # Pre-allocate space for output\n Y = np.empty(T.shape)\n\n # Set the initial value\n Y[0] = y0\n\n # Iterate over timesteps\n for i in range(len(T) - 1):\n # The value at the next timestep is equal to\n # the current value, plus the rate of change multiplied by the timestep.\n Y[i + 1] = Y[i] + h*func(Y[i], T[i])\n \n return T,Y\n\nt0 = 0\nt1 = 10\nh = 0.5\n\ny0 = 1.0\n\nT, Y = euler_solve(t0, t1, h, y0, rhs_function)\nt_fine = np.linspace(T[0], T[-1], 1000)\n\nplt.plot(T, Y, 'k-o')\nplt.plot(t_fine, real_solution(y0, t_fine), 'r--')\nplt.xlabel('Time')\nplt.legend(['Euler method', 'Exact solution'])",
"As you can see, the smaller the timestep becomes the closer our numerical soultion converges to the real solution.\nThe Euler method is low order, and requires a small timestep in general in order to converge to a correct solution.\nHowever it is important to note is that (despite its low-order nature) it is quite useful in the case where you need to include stochastic elements in the dynamics (e.g., a noise term modelled as a Wiener process).\nIn these cases, higher-order methods in their default formulation do not take into account additional noise terms and will generally provide an incorrect solution or flat out not work. \nSolving Initial Value Problems in SciPy\n1. Using scipy.integrate.odeint\nIn terms of ready-made numerical integration for ODEs, SciPy provides us with a few options. We will quickly take a look at two of them, imaginatively called odeint, shown here, and ode, below. These functions can be found in SciPy's scipy.integrate package.\nFirst, the odeint function (documentation here). \nThis function essentially wraps a solver called lsoda, originally written in FORTRAN, in a nicer Python interface. The solver can be used to solve initial value problems of the form \n$$\n\\frac{d\\mathbf{y}}{dt} = f(\\mathbf{y}, ~t_0, ~\\ldots)\n$$\nwhere $\\mathbf{y}$ is our vector of dependent variables, as before.",
"from scipy.integrate import odeint",
"The basic form of the right hand side function is the same as before.",
"# Solve with Euler\nt0 = 0\nt1 = 10\nh = 0.5\n\ny0 = 1.0\n\nT, Y = euler_solve(t0, t1, h, y0, rhs_function)\n\n# Solve with odeint\nsol = odeint(rhs_function, y0, T)\n\ntFine = np.linspace(T[0], T[-1], 1000)\n\nplt.plot(T, Y, 'k-o')\nplt.plot(T, sol, 'g-o')\nplt.plot(t_fine, real_solution(y0, tFine), 'r--')\nplt.xlabel('Time')\nplt.legend(['Euler method', 'odeint', 'Exact solution'])",
"It is evident that the more stable scheme used in odeint allows us to obtain a better solution using a larger timestep.\nNow we investigate the behaviour of a more interesting system, the 'Brusselator'.\nThe Brusselator is a theoretical model for a type of autocatalytic chemical reaction.\nIts behaviour is characterised by the following set of reactions:\n\n$ A \\rightarrow X $\n$ 2X + Y \\rightarrow 3X $\n$ B + X \\rightarrow Y + D $\n$ X \\rightarrow E $\n\nThus the rate equations for the full set of reactions can be written as \n$$\n\\frac{d[X]}{dt} = [A] + [X]^2[Y] - [B][X] - [X]\n$$\n$$\n\\frac{d[Y]}{dt} = [B][X] - [X]^2[Y]\n$$\nThis system is interesting, since it exhibits stable dynamics for parameters in the range $ B < 1 + A^2 $; otherwise it operates in an unstable regime where the dynamics approach a limit cycle.",
"def bruss_odeint(y, t, a, b):\n \"\"\"RHS equations for the brusselator system.\n Arguments:\n y: the current state of the system\n t: time\n a, b: free parameters governing the dynamics\n Returns:\n dydt: the computed derivative at (y, t)\n \"\"\"\n # Unpack both species\n X, Y = y\n # Derivatives\n dydt = [\n a + Y*(X**2) - (b + 1)*X, # d[X]/dt\n b*X - Y*(X**2), # d[Y]/dt\n ]\n return dydt",
"The first thing to do is to define the parameters of interest:",
"a = 1.0\nb = 1.0",
"It is also necessary to define the initial conditions ($X(t_0)$ and $Y(t_0)$), and the time points over which we will integrate:",
"# Initial conditions\ny0 = [0.5, 0.5]\n# Time from 0 to 25\nt = np.linspace(0, 25, 101)",
"Finally we can solve the system through a single call to odeint.\nNote the function signature. odeint() requires the following arguments:\n- The function we wish to integrate\n- The initial point(s); these must have the same dimension as the derivative i.e., you need to have initial conditions specified for all dependent variables\n- The time over which you wish to integrate\n- Any additional arguments (optional) which will be passed to the function we're integrating; this is how we pass in our parameters",
"sol, info = odeint(bruss_odeint, y0, t, args = (a, b), full_output = True)",
"The variable sol (first returned value) contains the solution over the desired time points.",
"print('The number of time points is', len(t))\nprint('The solution matrix contains', sol.shape, 'entries')",
"As you can see, there is a column of solutions for each variable (X and Y, in this case).\nSo, we can plot these and observe the dynamics of our system.",
"plt.figure()\nplt.plot(t, sol[:, 0])\nplt.plot(t, sol[:, 1])\nplt.xlabel('Time (a.u.)')\nplt.ylabel('Concentration (a.u.)')\nplt.legend(['[X]', '[Y]'])",
"Additionally, you may have noticed that we return a second value called info. \nThis stores a record about the numerical process underlying your results. This information can be useful in case you need to debug some experiments, and additionally if you want to start building some routines that use the integrator (e.g., you may want to adapt step sizes yourself, check if a part of the integration was successful, manually model discontinuities, ...).",
"print(\"Let's see what 'info' contains...:\\n\")\nprint(info)",
"These cryptically named fields are further detailed in the documentation, but in summary:\n\n'hu': vector of step sizes successfully used for each time step.\n'tcur' vector with the value of t reached for each time step. (will always be at least as large as the input times).\n'tolsf': vector of tolerance scale factors, greater than 1.0, computed when a request for too much accuracy was detected.\n'tsw': value of t at the time of the last method switch (given for each time step)\n'nst': cumulative number of time steps\n'nfe': cumulative number of function evaluations for each time step\n'nje': cumulative number of jacobian evaluations for each time step\n'nqu': a vector of method orders for each successful step.\n'imxer': index of the component of largest magnitude in the weighted local error vector (e / ewt) on an error return, -1 otherwise.\n'lenrw': the length of the double work array required.\n'leniw': the length of integer work array required.\n'mused': a vector of method indicators for each successful time step: 1: adams (nonstiff), 2: bdf (stiff)\n\nAccording to the condition defined previously ($ B < 1 + A^2 $) we ought to be able to destabilise the system by increasing the value of our parameter $B$.\nLet's choose $ B = 2.5 $ and see what happens...",
"# Same variable as before\nb = 2.5\n# Again the solution method is the same; we are just passing in a new value of b\nsol, info = odeint(bruss_odeint, y0, t, args=(a, b), full_output=True)\n# Make the plot\nplt.figure()\nplt.plot(t, sol[:, 0])\nplt.plot(t, sol[:, 1])\nplt.xlabel('Time (a.u.)')\nplt.ylabel('Concentration (a.u.)')\nplt.legend(['[X]', '[Y]'])",
"Since we are simulating both $[X]$ and $[Y]$, we can quite trivially plot one against the other in a phase plane.",
"# Steady-state behaviour, for a longer time\nb = 1.0\nt = np.linspace(0, 50, 501)\n\n# Construct a grid of initial points in the phase plane\nX, Y = np.meshgrid(np.linspace(0, 2, 5), np.linspace(0, 2, 5))\n\nplt.figure()\n\nfor y0 in zip(X.ravel(), Y.ravel()):\n sol, info = odeint(bruss_odeint, y0, t, args=(a, b), full_output=True)\n # Phase plane: we plt X vs Y, rather than a timeseries\n plt.plot(sol[:, 0], sol[:, 1], 'k--')\n # The initial point of the trajectory is a circle\n plt.plot(sol[0, 0], sol[0, 1], 'ko')\n \nplt.xlim([0, 3])\nplt.ylim([0, 2.5])\n\nplt.xlabel('[X]')\nplt.ylabel('[Y]')",
"If we're interested in seeing approximately when the transition from a stable steady state to a limit cycle occurs, we might instead plot a number of trajectories, using different values of $B$ rather than different intial conditions.\n_Aside: Numerical methods ('continuation') for finding the actual value of $B$ at which the transition occurs (the 'bifurcation point') are beyond the scope of this tutorial, but are discussed briefly at the end of the section. _\n2. Using scipy.integrate.ode\nIn many cases we will actually want more control over the numerical integration process. \nHelpfully, we have an alternative interface to various integration routines called simply ode (documentation here).\nThis is sold as a more object-oriented part of the module, but in fact it also provides more flexibility in terms of available solvers relative to odeint.",
"from scipy.integrate import ode",
"⚠ Warning ⚠ the arguments of the right-hand-side function $f(t, y)$, that is passed to ode(), are reversed relative to odeint()!\nI.e.,\n- odeint expects a RHS function of the form func(y, t, ...)\n- ode expects a RHS function of the form func(t, y, ...)\nConsequently, we define a new function for the Brusselator, where these two arguments are now in the order expected by ode().",
"def bruss_ode(t, y, a, b):\n \"\"\"The brusselator system, defined for scipy.integrate.ode\"\"\"\n # Unpack both species\n X, Y = y\n # Derivatives\n dydt = [\n a + Y*(X**2) - (b + 1)*X, # d[X]/dt\n b*X - Y*(X**2), # d[Y]/dt\n ]\n return dydt",
"Aside: We could also define this function just by swapping the arguments t and y in a wrapper function. \nThis stops us from repeating code unnecessarily.\nOne way of doing this is as follows:\npython\ndef bruss_ode(t, y, a, b):\n return bruss_odeint(y, t, a, b)\nGenerating a solution of the system is quite straightforward, it only requires us to call the function ode.integrate(). However obtaining the actual values of the solution can be done in multiple ways and depends a little on the solver you want to use.\nHere we will illustrate the use of two solvers:\n1. dopri5: higher-order solver\n2. vode: another higher-order solver that has a nice implicit method (good for stiff systems) and can take adaptively-sized time steps.",
"def make_plot(traj):\n \"\"\"A helper function to plot trajectories\"\"\"\n t = traj[:, 0]\n y = traj[:, 1:]\n plt.figure()\n plt.plot(t, y[:, 0])\n plt.plot(t, y[:, 1])\n plt.xlabel('Time (a.u.)')\n plt.ylabel('Concentration (a.u.)')\n plt.legend(['[X]', '[Y]'])",
"Dopri5 Solver",
"# Define parameters as before\na = 1.0\nb = 2.5\n\n# Define initial conditions\ny0 = [0.5, 0.5]\nt0 = 0\n\n# Final time\nt1 = 25\n\n# Define the ODE solver object\nsolver = ode(bruss_ode).set_integrator('dopri5', nsteps=10000)\n\n# System/solver parameters are set as properties on the object\nsolver.set_initial_value(y0, t0).set_f_params(a, b)\n\n# Dopri5 allows for a `solout` function to be attached.\n# This function is run at the end of every complete timestep.\n# We use it here just to append solutions to the full trajectory.\ntraj = []\ndef solout(t, y):\n traj.append([t, *y])\n\nsolver.set_solout(solout)\n\n# Integrate until the full time\n# (Also possible to take smaller sub-steps in a loop, as in the next section)\nsolver.integrate(t1)\n\n# Form an array for easier manipulation\ntraj = np.asarray(traj)\nmake_plot(traj)",
"Vode Solver",
"# Define parameters as before\na = 1.0\nb = 10\n\n# Define initial conditions\ny0 = [0.5, 0.5]\nt0 = 0\n\n# Final time\nt1 = 100\n\n# Define the ODE solver object\nsolver = ode(bruss_ode).set_integrator('vode', method='bdf')\n\n# System/solver parameters are set as properties on the object\nsolver.set_initial_value(y0, t0).set_f_params(a, b)\n\n# We can't define a `solout` function, but we can automatically use\n# the internal automatic step of the solver\ntraj = []\nwhile solver.successful() and solver.t < t1:\n s = solver.integrate(t1, step = True)\n traj.append([solver.t, *s])\n\n# Integrate until the full time\n# (Also possible to take smaller sub-steps in a loop, as in the next section)\nsolver.integrate(t1)\n\ntraj = np.asarray(traj)\nmake_plot(traj)",
"Some Performance Improvements: Using the Jacobian\nWe can specify the Jacobian, if known, to help the solver along a little bit. This will generally make the solution faster, and potentially allow the solver to perform fewer steps by providing knowledge about the system's derivatives a priori. \nMore advanced packages, such as PyDSTool (mentioned below), may automatically compute a numerical or symbolic representation of the Jacobian in order to optimise simulation efficiency.",
"def bruss_jac(t, y, a, b):\n \"\"\"Jacobian function of the Brusselator.\"\"\"\n j = np.empty((2, 2))\n j[0, 0] = 2*y[0]*y[1] - (b + 1)\n j[0, 1] = y[0]**2\n j[1, 0] = b - 2*y[0]*y[1]\n j[1, 1] = - y[0]**2\n return j\n\n# Previous trajectory\nold_trajectory = traj.copy()\n\n# Define parameters as before\na = 1.0\nb = 10.0\n\n# Define initial conditions\ny0 = [0.5, 0.5]\nt0 = 0\n\n# Final time\nt1 = 100\n\n# Define the ODE solver object\nsolver = ode(bruss_ode, jac=bruss_jac).set_integrator('vode', method='bdf')\n\n# System/solver parameters are set as properties on the object\nsolver.set_initial_value(y0, t0).set_f_params(a, b).set_jac_params(a, b)\n\n# We can't define a `solout` function, but we can automatically use\n# the internal automatic step of the solver\ntraj = []\nwhile solver.successful() and solver.t < t1:\n s = solver.integrate(t1, step = True)\n traj.append([solver.t, *s])\n\n# Integrate until the full time\n# (Also possible to take smaller sub-steps in a loop, as in the next section)\nsolver.integrate(t1)\n\ntraj = np.asarray(traj)\nmake_plot(traj)\n\nprint('Original trajectory had dimensions', old_trajectory.shape)\nprint('New trajectory (w/ Jacobian) has dimensions', traj.shape)\n\n# Sampling density of original trajectory\nplt.figure(figsize=(12, 4))\nplt.hist(old_trajectory[:, 0], bins=50)\nplt.xlabel('Time')\nplt.ylabel('Sampling density')\nplt.title('Sampling: no Jacobian')\nplt.ylim([0, 1200])\n\n# Sampling density of original trajectory\nplt.figure(figsize=(12, 4))\nplt.hist(traj[:, 0], bins=50)\nplt.xlabel('Time')\nplt.ylabel('Sampling density')\nplt.title('Sampling: WITH Jacobian')\nplt.ylim([0, 1200])",
"An Illustration of Adaptive Sampling\nOne thing to bear in mind is that it's worth using an integration scheme that supports adaptive sampling of the integration step. this means that when the derivative is large, small steps are taken and vice versa; as a result, simulations tend to be more efficient.\nWhat does this look like in practice?",
"# Sampling density of new trajectory, as timeseries\nplt.figure(figsize=(12, 4))\nplt.plot(traj[:, 0], traj[:, 1], '.')\nplt.xlabel('Time')\nplt.ylabel('[X]')",
"It is quite obvious here that the sampling is dense in the regions where changes in $[X]$ are steep, and spreads out significantly in the regions where the gradient is small.\nClosing Comments\nFinally, in reality you will probably want to do some more in-depth experiments and analysis with your system (and your system may be somewhat more complex than those presented today).\nAs an alternative to coding up your own solutions using the building blocks presented today, you could consider using the PyDSTool package (its source code repository is here. \nIt provides quite a nice interface for modelling systems of ODEs, and additionally it does some quite clever stuff for you, saving time, effort, and testing, e.g., \n- it will compile your RHS functions in order to make simulations run faster\n- integrators support adaptive timestepping\n- automatic generation of Jacobians (and other numerical specifics) under the hood\n- automatic generation of large ODE systems (e.g., networks of coupled systems)\nand much more.\nPyDSTool is partially integrated with the extremely powerful numerical continuation package AUTO which allows you to perform bifurcation/continuation analysis of your system directly from PyDSTool. \nSomewhat tangentially, if you are eventually interested in numerical continuation you'll find that modern versions of AUTO include their own Python API which allows you to build your own analysis using the full feature set of AUTO.\nScipy.optimize\nAt some point, you may find that you need to fit a model to some data. \nHere we refer to a model in quite a generic sense: this could be a regression or curve fit, the outcome of an ODE or PDE model, a complex black-box system such as a multi-scale finite element simulation of heat transfer in a nuclear reactor... \nThe important thing to consider is that the model receives a set of parameters (is parametrised) as input, and provides some data as output. 
\nThese simulated data may be directly comparable to experimentally acquired data, and in this case we may use an optimization routine to try to estimate the parameters of our model based on the experimental data.\nThus we may be able to use the model to infer hidden properties of the system, or to predict future behaviour.\nTo this end, a baseline set of useful tools is provided by the scipy.optimize package, and we will briefly go through some of these here.\nCurve Fitting\nArguably the most basic optimization procedure one can perform is to fit a curve to some experimentally observed points. \nThis could be a straightforward linear regression, or a more complicated nonlinear curve.\nThis functionality is provided by the scipy.optimize.curve_fit function.\nInternally the function uses a nonlinear least-squares method in order to minimize the distance between some data points and a curve function.",
"from scipy.optimize import curve_fit",
"In order to test the performance of the curve fitting provided by scipy, we will generate some synthetic data.\nThis data will take the form $ A \\sin (\\omega (x - t)) $, i.e., a sinusoidal curve with amplitude $A$, frequency $\\omega$ and horizontal offset $t$.",
"def sinus(x, A, t, omega):\n \"\"\"Sine wave, over x\n A: the amplitude\n omega: the frequency\n t: the horizontal translation of the curve\n \"\"\"\n return A*np.sin(omega*(t + x))\n\n# The range of x over which we will observe the function\nx = np.linspace(0, 2*np.pi, 100)\n\n# The 'true' data, underlying an experimental observation\ny_true = sinus(x, 1.0, 0, 2.0)\n\n# 'Measured' experimental data (what we observed); noisy\ny_measured = y_true + 0.1*np.random.randn(*y_true.shape)\n\nplt.plot(x, y_true, 'g')\nplt.plot(x, y_measured, '*k')\nplt.legend(['True data', 'Measured data'])",
"To make our task a bit easier for this illustrative example we fix a couple of parameters and define a new function with these ($A = 1.0$ and $t = 0.0$), with only the frequency varying.",
"def fixed_sinus(x, p):\n \"\"\"Sinusoidal curve with all parameters fixed apart from frequency.\n p: frequency\n \"\"\"\n A = 1.0\n t = 0.0\n return sinus(x, A, t, p)",
"We can now try to fit this function to our noisy 'measured' data.",
"# Curve fit function: fits fixedSinus to measured data over x\npopt, pcov = curve_fit(fixed_sinus, x, y_measured)\n\n# Success?\nprint('*** Fitted value of wavelength was: ', popt, '\\n')\n\nplt.plot(x, y_measured, '*g')\nplt.plot(x, fixed_sinus(x, popt), '--k')\nplt.legend(['Experimental data', 'Fitted model'])",
"Hmm... This is clearly quite wrong!\nIt looks like although the function is quite powerful, maybe it needs a little help.\nWe can sugegst an initial value for the unknown parameter to see if that helps:",
"# Curve fit function: fits fixedSinus to measured data over x\n# Now we add the keyword argument p0, which specifies an initial guess\npopt, pcov = curve_fit(fixed_sinus, x, y_measured, p0=1.5)\n\n# Success?\nprint('*** Fitted value of wavelength was: ', popt, '\\n')\n\nplt.plot(x, y_measured, '*g')\nplt.plot(x, fixed_sinus(x, popt), '--k')\nplt.legend(['Experimental data', 'Fitted model'])",
"Much better!\nObviously this is not great as in general you may not know a good starting parameter value (although you should have an idea of bounds and constraints especially if your model is not too abstract).\nOne way of getting around this is to initialise your fitting function from a bounded range of random initial values and choosing some that are markedly better.\nSome optimisation routines will do (basically) this internally.\nAnother option is to choose a more robust optimisation method.\nMore general minimization\nscipy.optimize contains a fairly wide range of optimization routines.\nIn general these aim to minimize some generic function, rather than to 'fit a curve' as we did a moment ago.\nTherefore, in order to fit a model to some data using these methods, we must formulate an objective function.\nThis function can simply compute the distance, or error, between our desired outcome (the measured data) and the current output of the model.",
"def objective_function(par):\n \"\"\"Compute the sum of squares difference between the measured data and model output.\"\"\"\n # The current model prediction, based on the current parameters' values\n y_model = sinus(x, *par)\n # Sum-of-squares difference between the model and 'measured' data\n return np.sum(np.power(y_measured - y_model, 2.0))",
"The scipy.optimize.minimize function provides a general interface for minimization.",
"from scipy.optimize import minimize\n\n# We attempt to minimize our objective\n# This function now contains the full (3 parameter) sin function,\n# and so we need to provide 3 starting points; we choose [0.5, 0.5, 0.5]\nopt = minimize(objective_function, [0.5]*3)\nprint('Optimal parameters: ')\nprint('Amplitude: ', opt.x[0])\nprint('Offset: ', opt.x[1])\nprint('Frequency: ', opt.x[2])\n\nplt.plot(x, y_measured, '*g')\nplt.plot(x, sinus(x, *opt.x), '--k')\nplt.legend(['Experimental data', 'Fitted model'])",
"Not bad, considering it is more generic.\nIf we wanted further control over the process, we can also provide bounds as an optional keyword argument to the minimizer:",
"# Here we specify that the parameters must all be positive\nbopt = minimize(objective_function, np.random.rand(3), bounds=[(0, None)]*3)\nprint('Optimal parameters: ')\nprint('Amplitude: ', bopt.x[0])\nprint('Offset: ', bopt.x[1])\nprint('Frequency: ', bopt.x[2])\n\nplt.plot(x, y_measured, '*g')\nplt.plot(x, sinus(x, *bopt.x), '--k')\nplt.legend(['Experimental data', 'Fitted model'])",
"Closing comments\nThis has just touched the surface of possibilities for numerical optimisation. As you can see, even the basic scipy installation provides some powerful algorithms: they may require a little tuning to get the results you need, but are in general quite robust and flexible.\nIn particular, the routines here are designed simply to perform one function: minimize an objective function.\nTherefore, as long as you can specify an objective function appropriately, these algorithms may be useful (even if the actual computation of the objective function becomes more complicated).\nFor example fitting an ODE model to some time series would basically require that you specify your objective function as the difference between an experimental time series and your model under a certain parametrisation.\nFor working on more complex systems, or in cases where you have complicated or noisy data, other packages may present a more viable alternative. \nIn reality, especially when dealing with noisy experimental data, it is crucial to choose the right optimisation routine for job. \nIn many cases it may be beneficial to use a probabilistic method, for example, and often you will end up using global optimizers, which we didn't have time to go into today.\nFor example, some that I like to use are:\n- PySwarm: a simple constrained particle swarm optimisation tool\n- PyGMO: a very powerful serial and parallel toolbox for constructing global optimisation routines\n- DEAP: another global optimisation package, strongly focused on large scale parallel evolutionary algorithms."
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
cgivre/oreilly-sec-ds-fundamentals
|
Notebooks/EDA/EDA Worksheet - Python Version - Answers.ipynb
|
apache-2.0
|
[
"import pandas as pd\nimport numpy as np\nimport matplotlib \n%matplotlib inline\nmatplotlib.pyplot.style.use = 'ggplot'",
"First, load up the data\nFirst you're going to want to create a data frame from the dailybots.csv file which can be found in the data directory. You should be able to do this with the pd.read_csv() function. Take a minute to look at the dataframe because we are going to be using it for this entire worksheet.",
"data = pd.read_csv( '../../data/dailybots.csv' )\n#Look at a summary of the data\ndata.describe()\n\ndata['botfam'].value_counts()",
"Exercise 1: Which industry sees the most Ramnit infections? Least?\nCount the number of infected days for \"Ramnit\" in each industry industry. \nHow: \n1. First filter the data to remove all the infections we don't care about\n2. Aggregate the data on the column of interest. HINT: You might want to use the groupby() function\n3. Add up the results",
"grouped_df = data[data.botfam == \"Ramnit\"].groupby(['industry'])\ngrouped_df.sum()",
"Exercise 2: Calculate the min, max, median and mean infected orgs for each bot family, sort by median\nIn this exercise, you are asked to calculate the min, max, median and mean of infected orgs for each bot family sorted by median. HINT:\n1. Using the groupby() function, create a grouped data frame\n2. You can do this one metric at a time OR you can use the .agg() function. You might want to refer to the documentation here: http://pandas.pydata.org/pandas-docs/stable/groupby.html#applying-multiple-functions-at-once\n3. Sort the values (HINT HINT) by the median column",
"group2 = data[['botfam','orgs']].groupby( ['botfam'])\nsummary = group2.agg([np.min, np.max, np.mean, np.median, np.std])\nsummary.sort_values( [('orgs', 'median')], ascending=False)",
"Exercise 3: Which date had the total most bot infections and how many infections on that day?\nIn this exercise you are asked to aggregate and sum the number of infections (hosts) by date. Once you've done that, the next step is to sort in descending order.",
"df3 = data[['date','hosts']].groupby('date').sum()\ndf3.sort_values(by='hosts', ascending=False).head(10)",
"Exercise 4: Plot the daily infected hosts for Necurs, Ramnit and PushDo\nIn this exercise you're going to plot the daily infected hosts for three infection types. In order to do this, you'll need to do the following steps:\n1. Filter the data to remove the botfamilies we don't care about. \n2. Use groupby() to aggregate the data by date and family, then sum up the hosts in each group\n3. Plot the data. Hint: You might want to use the unstack() function to prepare the data for plotting.",
"filteredData = data[ data['botfam'].isin(['Necurs', 'Ramnit', 'PushDo']) ][['date', 'botfam', 'hosts']]\ngroupedFilteredData = filteredData.groupby( ['date', 'botfam']).sum()\ngroupedFilteredData.unstack(level=1).plot(kind='line', subplots=False)",
"Exercise 5: What are the distribution of infected hosts for each day-of-week across all bot families?\nHint: try a box plot and/or violin plot. In order to do this, there are two steps:\n1. First create a day column where the day of the week is represented as an integer. You'll need to convert the date column to an actual date/time object. See here: http://pandas.pydata.org/pandas-docs/stable/timeseries.html\n2. Next, use the .boxplot() method to plot the data. This has grouping built in, so you don't have to group by first.",
"data.date = pd.to_datetime( data.date )\ndata['day'] = data.date.dt.weekday\ndata[['hosts', 'day']].boxplot( by='day')\n\ngrouped = data[['hosts', 'day']].groupby('day')\nprint( grouped.sum() )\n\n\ngrouped.box"
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
karlstroetmann/Artificial-Intelligence
|
Python/Python-Tutorial.ipynb
|
gpl-2.0
|
[
"from IPython.core.display import display, HTML\ndisplay(HTML(\"<style>.container { width:100% !important; }</style>\"))",
"CS228 Python Tutorial\nAdapted by Volodymyr Kuleshov and Isaac Caswell from the CS231n Python tutorial by Justin Johnson\n<a href=\"http://cs231n.github.io/python-numpy-tutorial/\">Python Numpy Tutorial</a>.\nIntroduction\nPython is a great general-purpose programming language on its own, but with the help of a few popular libraries (numpy, scipy, matplotlib) it becomes a powerful environment for scientific computing.\nWe expect that many of you will have some experience with Python and numpy; for the rest of you, this section will serve as a quick crash course both on the Python programming language and on the use of Python for scientific computing.\nSome of you may have previous knowledge in Matlab, in which case we also recommend the numpy for Matlab users page (https://docs.scipy.org/doc/numpy-dev/user/numpy-for-matlab-users.html).\nIn this tutorial, we will cover:\n\nBasic Python: Basic data types (Containers, Lists, Dictionaries, Sets, Tuples), Functions, Classes\nNumpy: Arrays, Array indexing, Datatypes, Array math, Broadcasting\nMatplotlib: Plotting, Subplots, Images\nIPython: Creating notebooks, Typical workflows\n\nBasics of Python\nPython is a high-level, dynamically typed multiparadigm programming language. Python code is often said to be almost like pseudocode, since it allows you to express very powerful ideas in very few lines of code while being very readable. As an example, here is an implementation of the classic quicksort algorithm in Python:",
"def quicksort(arr):\n if len(arr) <= 1:\n return arr\n pivot = arr[len(arr) // 2]\n left = [x for x in arr if x < pivot]\n middle = [x for x in arr if x == pivot]\n right = [x for x in arr if x > pivot]\n return quicksort(left) + middle + quicksort(right)\n\nquicksort([3,6,8,10,1,2,1])",
"Python versions\nThis version of the notebook has been adapted to work with Python 3.6.\nYou can check your Python version at the command line by running python --version.",
"!python --version",
"Basic data types\nNumbers\nIntegers and floats work as you would expect from other languages:",
"x = 3\nx, type(x)\n\nprint(x + 1) # Addition;\nprint(x - 1) # Subtraction;\nprint(x * 2) # Multiplication;\nprint(x ** 2) # Exponentiation;\n\nx += 1\nprint(x) # Prints \"4\"\nx *= 2\nprint(x) # Prints \"8\"\n\ny = 2.5\nprint(type(y))\nprint(y, y + 1, y * 2, y ** 2) ",
"Note that unlike certain other languages like <tt>C</tt>, <tt>C++</tt>, <em>Java</em>, or <tt>C#</tt>, Python does not have unary increment (x++) or decrement (x--) operators.\nPython also has built-in types for long integers and complex numbers; you can find all of the details in the documentation.\nBooleans\nPython implements all of the usual operators for Boolean logic, but uses English words rather than symbols (&&, ||, etc.):",
"t, f = True, False\ntype(t) \n\nprint(type(t))",
"Now we let's look at the operations:",
"print(t and f) # Logical AND\nprint(t or f) # Logical OR\nprint( not t) # Logical NOT\nprint(t != f) # Logical XOR",
"Strings",
"hello = 'hello' # String literals can use single quotes\nworld = \"world\" # or double quotes; it does not matter.\nprint(hello, len(hello))\n\nhw = hello + ' ' + world # String concatenation\nhw \n\nhw12 = '%s, %s! %d' % (hello, world, 12) # sprintf style string formatting\nhw12 ",
"String objects have a bunch of useful methods; for example:",
"s = \"hello\"\nprint(s.capitalize()) # Capitalize a string\nprint(s.upper()) # Convert a string to uppercase\nprint(s.rjust(7)) # Right-justify a string, padding with spaces\nprint(s.center(7)) # Center a string, padding with spaces\nprint(s.replace('l', '\\N{greek small letter lamda}')) # Replace all instances of one substring with another\nprint(' world '.strip()) # Strip leading and trailing whitespace",
"You can find a list of all string methods in the documentation.\nContainers\nPython includes several built-in container types: lists, dictionaries, sets, and tuples.\nLists\nA list is the Python equivalent of an array, but is resizeable and can contain elements of different types:",
"xs = [3, 1, 2] # Create a list\nprint(xs, xs[2]) # Indexing starts at 0\nprint(xs[-1]) # Negative indices count from the end of the list; prints \"2\"\n\nxs[2] = 'foo' # Lists can contain elements of different types\nxs\n\nxs.append('bar') # Add a new element to the end of the list\nxs \n\nx = xs.pop() # Remove and return the last element of the list\nx, xs ",
"As usual, you can find all the gory details about lists in the documentation.\nSlicing\nIn addition to accessing list elements one at a time, Python provides concise syntax to access sublists; this is known as slicing:",
"nums = list(range(5)) \nprint(nums) \nprint(nums[2:4]) # Get a slice from index 2 to 4 (exclusive)\nprint(nums[2:]) # Get a slice from index 2 to the end\nprint(nums[:2]) # Get a slice from the start to index 2 (exclusive)\nprint(nums[:]) # Get a slice of the whole list, creates a shallow copy\nprint(nums[:-1]) # Slice indices can be negative\nnums[2:4] = [8, 9, 10] # Assign a new sublist to a slice\nnums ",
"Loops\nYou can loop over the elements of a list like this:",
"animals = ['cat', 'dog', 'monkey']\nfor animal in animals:\n print(animal)",
"If you want access to the index of each element within the body of a loop, use the built-in enumerate function:",
"animals = ['cat', 'dog', 'monkey']\nfor idx, animal in enumerate(animals):\n print('#%d: %s' % (idx + 1, animal))",
"List comprehensions:\nWhen programming, frequently we want to transform one type of data into another. As a simple example, consider the following code that computes square numbers:",
"nums = [0, 1, 2, 3, 4]\nsquares = []\nfor x in nums:\n squares.append(x ** 2)\nsquares",
"You can make this code simpler using a list comprehension:",
"nums = [0, 1, 2, 3, 4]\nsquares = [x ** 2 for x in nums]\nsquares",
"List comprehensions can also contain conditions:",
"nums = [0, 1, 2, 3, 4]\neven_squares = [x ** 2 for x in nums if x % 2 == 0]\neven_squares",
"Dictionaries\nA dictionary stores (key, value) pairs, similar to a Map in Java or an object in Javascript. You can use it like this:",
"d = {'cat': 'cute', 'dog': 'furry'} # Create a new dictionary with some data\nprint(d['cat']) # Get an entry from a dictionary\nprint('cat' in d) # Check if a dictionary has a given key\n\nd['fish'] = 'wet' # Set an entry in a dictionary\nd['fish'] \n\nd['monkey'] # KeyError: 'monkey' not a key of d\n\nprint(d.get('monkey', 'N/A')) # Get an element with a default\nprint(d.get('fish', 'N/A')) # Get an element with a default\n\ndel d['fish'] # Remove an element from a dictionary\nd.get('fish', 'N/A') # \"fish\" is no longer a key",
"You can find all you need to know about dictionaries in the documentation.\nIt is easy to iterate over the keys in a dictionary:",
"d = {'person': 2, 'cat': 4, 'spider': 8}\nfor animal in d:\n legs = d[animal]\n print('A %s has %d legs.' % (animal.ljust(6), legs))",
"If you want access to keys and their corresponding values, use the <tt>items</tt> method:",
"d = {'person': 2, 'cat': 4, 'spider': 8}\nfor animal, legs in d.items():\n print('A %s has %d legs.' % (animal.ljust(6), legs))",
"Dictionary comprehensions: These are similar to list comprehensions, but allow you to easily construct dictionaries. For example:",
"nums = [0, 1, 2, 3, 4, 5, 6]\neven_num_to_square = {x: x ** 2 for x in nums if x % 2 == 0}\neven_num_to_square",
"Sets\nA set is an unordered collection of distinct elements. As a simple example, consider the following:",
"animals = {'cat', 'dog'}\nprint('cat' in animals) # Check if an element is in a set\nprint('fish' in animals)\n\nanimals.add('fish') # Add an element to a set\nprint('fish' in animals)\nprint(len(animals)) # Number of elements in a set\n\nanimals.add('cat') # Adding an element that is already in the set does nothing\nprint(len(animals)) \nanimals.remove('cat') # Remove an element from a set\nprint(len(animals))\nanimals",
"Loops: Iterating over a set has the same syntax as iterating over a list; however since sets are unordered, you cannot make assumptions about the order in which you visit the elements of the set:",
"animals = {'cat', 'dog', 'fish'}\nfor idx, animal in enumerate(animals):\n print('#%d: %s' % (idx + 1, animal))",
"Set comprehensions: Like lists and dictionaries, we can easily construct sets using set comprehensions:",
"from math import sqrt\n{ int(sqrt(x)) for x in range(30) }",
"Tuples\nA tuple is an (immutable) ordered list of values. A tuple is in many ways similar to a list; one of the most important differences is that tuples can be used as keys in dictionaries and as elements of sets, while lists cannot. Here is a trivial example:",
"d = { (x, x + 1): x for x in range(10) } # Create a dictionary with tuple keys\nt = (5, 6) # Create a tuple\nprint(type(t))\nprint(d[t]) \nprint(d[(1, 2)])\nd\n\nt[0] = 1",
"Functions\nPython functions are defined using the def keyword. For example:",
"def sign(x):\n if x > 0:\n return 'positive'\n elif x < 0:\n return 'negative'\n else:\n return 'zero'\n\nfor x in [-1, 0, 1]:\n print(sign(x))",
"We will often define functions to take optional keyword arguments, like this:",
"def hello(name, loud=False):\n if loud:\n print('HELLO, %s' % name.upper())\n else:\n print('Hello, %s!' % name)\n\nhello('Bob')\nhello('Fred', loud=True)",
"Classes\nThe syntax for defining classes in Python is straightforward:",
"class Greeter:\n\n # Constructor\n def __init__(self, name):\n self.name = name # Create an instance variable\n\n # Instance method\n def greet(self, loud=False):\n if loud:\n print('HELLO, %s!' % self.name.upper())\n else:\n print('Hello, %s' % self.name)\n\ng = Greeter('Fred') # Construct an instance of the Greeter class\ng.greet() # Call an instance method\ng.greet(loud=True) ",
"Numpy\nNumpy is the core library for scientific computing in Python. It provides a high-performance multidimensional array object, and tools for working with these arrays. \nTo use Numpy, we first need to import the numpy package:",
"import numpy as np",
"Arrays\nA numpy array is a grid of values, all of the same type, and is indexed by a tuple of nonnegative integers. The number of dimensions is called the rank of the array; the shape of an array is a tuple of integers giving the size of the array along each dimension.\nWe can initialize numpy arrays from nested Python lists, and access elements using square brackets:",
"a = np.array([1, 2, 3]) # Create a rank 1 array\nprint(type(a), a.shape, a[0], a[1], a[2])\na[0] = 5 # Change an element of the array\na \n\nb = np.array([[1,2,3],[4,5,6]]) # Create a rank 2 array\nb\n\nprint(b.shape) \nprint(b[0, 0], b[0, 1], b[1, 0])",
"Numpy also provides many functions to create arrays:",
"np.zeros((2,2)) # Create an array of all zeros\n\nnp.ones((1,2)) # Create an array of all ones\n\nnp.full((2,2), 7) # Create a constant array \n\nnp.eye(2) # Create a 2x2 identity matrix\n\nnp.random.random((2,2)) # Create an array filled with random values",
"Array indexing\nNumpy offers several ways to index into arrays.\nSlicing: Similar to Python lists, numpy arrays can be sliced. Since arrays may be multidimensional, you must specify a slice for each dimension of the array:",
"import numpy as np\n\n# Create the following rank 2 array with shape (3, 4)\n# [[ 1 2 3 4]\n# [ 5 6 7 8]\n# [ 9 10 11 12]]\na = np.array([[1,2,3,4], [5,6,7,8], [9,10,11,12]])\nprint(a)\n# Use slicing to pull out the subarray consisting of the first 2 rows\n# and columns 1 and 2; b is the following array of shape (2, 2):\n# [[2 3]\n# [6 7]]\nb = a[:2, 1:3]\nprint(b)",
"A slice of an array is a view into the same data, so modifying it will modify the original array.",
"print(a[0, 1])\nb[0, 0] = 77 # b[0, 0] is the same piece of data as a[0, 1]\nprint(a[0, 1]) ",
"You can also mix integer indexing with slice indexing. However, doing so will yield an array of lower rank than the original array.",
"# Create the following rank 2 array with shape (3, 4)\na = np.array([[1,2,3,4], [5,6,7,8], [9,10,11,12]])\na",
"Two ways of accessing the data in the middle row of the array.\nMixing integer indexing with slices yields an array of lower rank,\nwhile using only slices yields an array of the same rank as the\noriginal array:",
"row_r1 = a[1, :] # Rank 1 view of the second row of a \nrow_r2 = a[1:2, :] # Rank 2 view of the second row of a\nrow_r3 = a[[1], :] # Rank 2 view of the second row of a\nprint(row_r1, row_r1.shape)\nprint(row_r2, row_r2.shape)\nprint(row_r3, row_r3.shape)\n\n# We can make the same distinction when accessing columns of an array:\ncol_r1 = a[:, 1]\ncol_r2 = a[:, 1:2]\nprint(col_r1, col_r1.shape)\nprint()\nprint(col_r2, col_r2.shape)",
"Integer array indexing: When you index into numpy arrays using slicing, the resulting array view will always be a subarray of the original array. In contrast, integer array indexing allows you to construct arbitrary arrays using the data from another array. Here is an example:",
"a\n\nnp.array([a[0, 0], a[1, 1], a[2, 0]])",
"The following expression will return an array containing the elements a[0,1]and a[2,3].",
"# When using integer array indexing, you can reuse the same\n# element from the source array:\na[[0, 2], [1, 3]]\n\na[0,1], a[2,3]\n\n# Equivalent to the previous integer array indexing example\nnp.array([a[0, 1], a[2, 3]])",
"One useful trick with integer array indexing is selecting or mutating one element from each row of a matrix:",
"# Create a new array from which we will select elements\na = np.array([[1,2,3], [4,5,6], [7,8,9], [10, 11, 12]])\na\n\n# Create an array of indices\nb = np.array([0, 2, 0, 1])\nb\n\n# Select one element from each row of a using the indices in b\na[[0, 1, 2, 3], b] ",
"same as a[0,0], a[1,2], a[2,0], a[3,1],",
"a[0,0], a[1,2], a[2,0], a[3,1]\n\n# Mutate one element from each row of a using the indices in b\na[[0, 1, 2, 3], b] += 100\na",
"Boolean array indexing: Boolean array indexing lets you pick out arbitrary elements of an array. Frequently this type of indexing is used to select the elements of an array that satisfy some condition. Here is an example:",
"import numpy as np\n\na = np.array([[1,2], [3, 4], [5, 6]])\nprint('a = \\n', a, sep='')\nbool_idx = (a > 2) # Find the elements of a that are bigger than 2;\n # this returns a numpy array of Booleans of the same\n # shape as a, where each slot of bool_idx tells\n # whether that element of a is > 2.\nbool_idx\n\n# We use boolean array indexing to construct a rank 1 array\n# consisting of the elements of a corresponding to the True values\n# of bool_idx\na[bool_idx]\n\n# We can do all of the above in a single concise statement:\na[a > 2]",
"For brevity we have left out a lot of details about numpy array indexing; if you want to know more you should read the documentation.\nDatatypes\nEvery numpy array is a grid of elements of the same type. Numpy provides a large set of numeric datatypes that you can use to construct arrays. Numpy tries to guess a datatype when you create an array, but functions that construct arrays usually also include an optional argument to explicitly specify the datatype. Here is an example:",
"x = np.array([1, 2]) # Let numpy choose the datatype\ny = np.array([1.0, 2.0]) # Let numpy choose the datatype\nz = np.array([1, 2], dtype=np.int64) # Force a particular datatype\n\nx.dtype, y.dtype, z.dtype",
"You can read all about numpy datatypes in the documentation.\nArray math\nBasic mathematical functions operate elementwise on arrays, and are available both as operator overloads and as functions in the numpy module:",
"x = np.array([[1,2],[3,4]], dtype=np.float64)\ny = np.array([[5,6],[7,8]], dtype=np.float64)\n\n# Elementwise sum\nx + y\n\nnp.add(x, y)\n\n# Elementwise difference\nx - y\n\nnp.subtract(x, y)\n\n# Elementwise product\nx * y\n\nnp.multiply(x, y)\n\n# Elementwise division\nx / y\n\nnp.divide(x, y)\n\n# Elementwise square root\nnp.sqrt(x)",
"Note that unlike MATLAB, * is elementwise multiplication, not matrix multiplication. We instead use the dot function to compute inner products of vectors, to multiply a vector by a matrix, and to multiply matrices. dot is available both as a function in the numpy module and as an instance method of array objects:",
"x = np.array([[1,2],[3,4]])\nx\n\ny = np.array([[5,6],[7,8]])\ny\n\nv = np.array([9,10])\nv\n\nw = np.array([11, 12])\nw\n\n# Inner product of vectors\nprint(v.dot(w))\nprint(np.dot(v, w))\n\n# Matrix / vector product\nprint(x.dot(v))\nprint(np.dot(x, v))\n\n# Matrix / matrix product; both produce the rank 2 array\nprint(x.dot(y))\nprint(np.dot(x, y))",
"Numpy provides many useful functions for performing computations on arrays; one of the most useful is sum:",
"x\n\nnp.sum(x) # Compute sum of all elements\n\nnp.sum(x, axis=0) # Compute sum of each column\n\nnp.sum(x, axis=1) # Compute sum of each row",
"You can find the full list of mathematical functions provided by numpy in the documentation.\nApart from computing mathematical functions using arrays, we frequently need to reshape or otherwise manipulate data in arrays. The simplest example of this type of operation is transposing a matrix; to transpose a matrix, simply use the T attribute of an array object:",
"x\n\nx.T\n\nv = np.array([[1,2,3]])\nprint(v) \nprint(v.T)",
"Broadcasting\nBroadcasting is a powerful mechanism that allows numpy to work with arrays of different shapes when performing arithmetic operations. Frequently we have a smaller array and a larger array, and we want to use the smaller array multiple times to perform some operation on the larger array.\nFor example, suppose that we want to add a constant vector to each row of a matrix. We could do it like this:",
"# We will add the vector v to each row of the matrix x,\n# storing the result in the matrix y\nx = np.array([[1,2,3], [4,5,6], [7,8,9], [10, 11, 12]])\nx\n\nv = np.array([1, 0, 1])\nv",
"Create an empty matrix with the same shape as x. The elements of this matrix are initialized arbitrarily.",
"y = np.empty_like(x) \ny\n\n# Add the vector v to each row of the matrix x with an explicit loop\nfor i in range(4):\n y[i, :] = x[i, :] + v\ny",
"This works; however when the matrix x is very large, computing an explicit loop in Python could be slow. Note that adding the vector v to each row of the matrix x is equivalent to forming a matrix vv by stacking multiple copies of v vertically, then performing elementwise summation of x and vv. We could implement this approach like this:",
"vv = np.tile(v, (4, 1)) # Stack 4 copies of v on top of each other\nvv \n\ny = x + vv # Add x and vv elementwise\ny",
"Numpy broadcasting allows us to perform this computation without actually creating multiple copies of v. Consider this version, using broadcasting:",
"# We will add the vector v to each row of the matrix x,\n# storing the result in the matrix y\ny = x + v # Add v to each row of x using broadcasting\ny",
"The line y = x + v works even though x has shape (4, 3) and v has shape (3,) due to broadcasting; this line works as if v actually had shape (4, 3), where each row was a copy of v, and the sum was performed elementwise.\nBroadcasting two arrays together follows these rules:\n\nIf the arrays do not have the same rank, prepend the shape of the lower rank array with 1s until both shapes have the same length.\nThe two arrays are said to be compatible in a dimension if they have the same size in the dimension, or if one of the arrays has size 1 in that dimension.\nThe arrays can be broadcast together if they are compatible in all dimensions.\nAfter broadcasting, each array behaves as if it had shape equal to the elementwise maximum of shapes of the two input arrays.\nIn any dimension where one array had size 1 and the other array had size greater than 1, the first array behaves as if it were copied along that dimension\n\nIf this explanation does not make sense, try reading the explanation from the documentation or this explanation.\nFunctions that support broadcasting are known as universal functions. You can find the list of all universal functions in the documentation.\nHere are some applications of broadcasting:",
"# Compute outer product of vectors\nv = np.array([1,2,3]) # v has shape (3,)\nw = np.array([4,5]) # w has shape (2,)\n# To compute an outer product, we first reshape v to be a column\n# vector of shape (3, 1); we can then broadcast it against w to yield\n# an output of shape (3, 2), which is the outer product of v and w:\n\nnp.reshape(v, (3, 1)) * w\n\n# Add a vector to each row of a matrix\nx = np.array([[1,2,3], [4,5,6]])\n# x has shape (2, 3) and v has shape (3,) so they broadcast to (2, 3),\n# giving the following matrix:\nx + v",
"Add a vector to each column of a matrix\nx has shape (2, 3) and w has shape (2,).\nIf we transpose x then it has shape (3, 2) and can be broadcast\nagainst w to yield a result of shape (3, 2); transposing this result\nyields the final result of shape (2, 3) which is the matrix x with\nthe vector w added to each column. Gives the following matrix:",
"(x.T + w).T",
"Another solution is to reshape w to be a row vector of shape (2, 1);\nwe can then broadcast it directly against x to produce the same\noutput.",
"x + np.reshape(w, (2, 1))",
"Multiply a matrix by a constant:\nx has shape (2, 3). Numpy treats scalars as arrays of shape ();\nthese can be broadcast together to shape (2, 3), producing the\nfollowing array:",
"x * 2",
"Broadcasting typically makes your code more concise and faster, so you should strive to use it where possible.\nThis brief overview has touched on many of the important things that you need to know about numpy, but is far from complete. Check out the numpy reference to find out much more about numpy.\nMatplotlib\nMatplotlib is a plotting library. In this section give a brief introduction to the matplotlib.pyplot module, which provides a plotting system similar to that of MATLAB.",
"import matplotlib.pyplot as plt",
"By running this special iPython command, we will be displaying plots inline:",
"%matplotlib inline",
"Plotting\nThe most important function in matplotlib is plot, which allows you to plot 2D data. Here is a simple example:",
"# Compute the x and y coordinates for points on a sine curve\nx = np.arange(0, 3 * np.pi, 0.1)\ny = np.sin(x)\n\n# Plot the points using matplotlib\nplt.plot(x, y)",
"With just a little bit of extra work we can easily plot multiple lines at once, and add a title, legend, and axis labels:",
"y_sin = np.sin(x)\ny_cos = np.cos(x)\n\n# Plot the points using matplotlib\nplt.plot(x, y_sin)\nplt.plot(x, y_cos)\nplt.xlabel('x axis label')\nplt.ylabel('y axis label')\nplt.title('Sine and Cosine')\nplt.legend(['Sine', 'Cosine'])",
"Subplots\nYou can plot different things in the same figure using the subplot function. Here is an example:",
"# Compute the x and y coordinates for points on sine and cosine curves\nx = np.arange(0, 3 * np.pi, 0.1)\ny_sin = np.sin(x)\ny_cos = np.cos(x)\n\n# Set up a subplot grid that has height 2 and width 1,\n# and set the first such subplot as active.\nplt.subplot(2, 1, 1)\n\n# Make the first plot\nplt.plot(x, y_sin)\nplt.title('Sine')\n\n# Set the second subplot as active, and make the second plot.\nplt.subplot(2, 1, 2)\nplt.plot(x, y_cos)\nplt.title('Cosine')\n\n# Show the figure.\nplt.show()",
"You can read much more about the subplot function in the documentation."
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
alexvmarch/atomic
|
docs/source/notebooks/01_basics.ipynb
|
apache-2.0
|
[
"Welcome to exatomic\nThis notebook demonstrates some basics of working with exatomic.",
"import exatomic\nexatomic.__version__",
"Getting help in the Jupyter notebook is easy, just put a \"?\" after a class or function.\nDon't forget to use tab to help with syntax completion",
"exatomic.Universe?",
"The Universe object contains all of the information about a simulation, nuclear coordinates, orbitals, etc.\nData is stored in pandas DataFrames (see pandas for more information)",
"uni = exatomic.Universe()\nuni",
"Empty universes can be useful...but it is more interesting with data\nNote that exatomic uses Hartree atomic units",
"atom = exatomic.Atom.from_dict({'x': [0.0, 0.0], 'y': [0.0, 0.0], 'z': [-0.34, 0.34],\n 'symbol': [\"H\", \"H\"], 'frame': [0, 0]})\nuni = exatomic.Universe(atom=atom)\nuni.atom",
"The frame column is how we track state (e.g. time, theory, etc.)\nThe simplest dataframe is the frame object which by default only contains the number of atoms",
"uni.frame # This was computed on-the-fly as we didn't instantiate it above",
"Visualization of this simple universe can be accomplished directly in the notebook",
"exatomic.UniverseWidget(uni)",
"In building the visualization, bonds were automatically computed\nFor small systems this is the default behavior, but for large systems it is not",
"uni.atom_two",
"Note again that distances are in atomic units",
"uni.molecule",
"Center of masses can also be computed",
"uni.compute_molecule_com()\n\nuni.molecule"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
Kaggle/learntools
|
notebooks/nlp/raw/ex3.ipynb
|
apache-2.0
|
[
"Vectorizing Language\nEmbeddings are both conceptually clever and practically effective. \nSo let's try them for the sentiment analysis model you built for the restaurant. Then you can find the most similar review in the data set given some example text. It's a task where you can easily judge for yourself how well the embeddings work.",
"%matplotlib inline\n\nimport matplotlib.pyplot as plt\nimport numpy as np\nimport pandas as pd\nimport spacy\n\n# Set up code checking\nfrom learntools.core import binder\nbinder.bind(globals())\nfrom learntools.nlp.ex3 import *\nprint(\"\\nSetup complete\")\n\n# Load the large model to get the vectors\nnlp = spacy.load('en_core_web_lg')\n\nreview_data = pd.read_csv('../input/nlp-course/yelp_ratings.csv')\nreview_data.head()",
"Here's an example of loading some document vectors. \nCalculating 44,500 document vectors takes about 20 minutes, so we'll get only the first 100. To save time, we'll load pre-saved document vectors for the hands-on coding exercises.",
"reviews = review_data[:100]\n# We just want the vectors so we can turn off other models in the pipeline\nwith nlp.disable_pipes():\n vectors = np.array([nlp(review.text).vector for idx, review in reviews.iterrows()])\n \nvectors.shape",
"The result is a matrix of 100 rows and 300 columns. \nWhy 100 rows?\nBecause we have 1 row for each column.\nWhy 300 columns?\nThis is the same length as word vectors. See if you can figure out why document vectors have the same length as word vectors (some knowledge of linear algebra or vector math would be needed to figure this out).\nGo ahead and run the following cell to load in the rest of the document vectors.",
"# Loading all document vectors from file\nvectors = np.load('../input/nlp-course/review_vectors.npy')",
"1) Training a Model on Document Vectors\nNext you'll train a LinearSVC model using the document vectors. It runs pretty quick and works well in high dimensional settings like you have here.\nAfter running the LinearSVC model, you might try experimenting with other types of models to see whether it improves your results.",
"from sklearn.svm import LinearSVC\nfrom sklearn.model_selection import train_test_split\n\nX_train, X_test, y_train, y_test = train_test_split(vectors, review_data.sentiment, \n test_size=0.1, random_state=1)\n\n# Create the LinearSVC model\nmodel = LinearSVC(random_state=1, dual=False)\n# Fit the model\n____\n\n# Uncomment and run to see model accuracy\n# print(f'Model test accuracy: {model.score(X_test, y_test)*100:.3f}%')\n\n# Uncomment to check your work\n#q_1.check()\n\n# Lines below will give you a hint or solution code\n#_COMMENT_IF(PROD)_\nq_1.hint()\n#_COMMENT_IF(PROD)_\nq_1.solution()\n\n#%%RM_IF(PROD)%%\nmodel = LinearSVC(random_state=1, dual=False)\nmodel.fit(X_train, y_train)\nq_1.assert_check_passed()\n\n# Scratch space in case you want to experiment with other models\n\n#second_model = ____\n#second_model.fit(X_train, y_train)\n#print(f'Model test accuracy: {second_model.score(X_test, y_test)*100:.3f}%')",
"Document Similarity\nFor the same tea house review, find the most similar review in the dataset using cosine similarity.\n2) Centering the Vectors\nSometimes people center document vectors when calculating similarities. That is, they calculate the mean vector from all documents, and they subtract this from each individual document's vector. Why do you think this could help with similarity metrics?\nRun the following line after you've decided your answer.",
"# Check your answer (Run this code cell to receive credit!)\nq_2.solution()",
"3) Find the most similar review\nGiven an example review below, find the most similar document within the Yelp dataset using the cosine similarity.",
"review = \"\"\"I absolutely love this place. The 360 degree glass windows with the \nYerba buena garden view, tea pots all around and the smell of fresh tea everywhere \ntransports you to what feels like a different zen zone within the city. I know \nthe price is slightly more compared to the normal American size, however the food \nis very wholesome, the tea selection is incredible and I know service can be hit \nor miss often but it was on point during our most recent visit. Definitely recommend!\n\nI would especially recommend the butternut squash gyoza.\"\"\"\n\ndef cosine_similarity(a, b):\n return np.dot(a, b)/np.sqrt(a.dot(a)*b.dot(b))\n\nreview_vec = nlp(review).vector\n\n## Center the document vectors\n# Calculate the mean for the document vectors, should have shape (300,)\nvec_mean = vectors.mean(axis=0)\n# Subtract the mean from the vectors\ncentered = ____\n\n# Calculate similarities for each document in the dataset\n# Make sure to subtract the mean from the review vector\nsims = ____\n\n# Get the index for the most similar document\nmost_similar = ____\n\n# Uncomment to check your work\n#q_3.check()\n\n# Lines below will give you a hint or solution code\n#_COMMENT_IF(PROD)_\nq_3.hint()\n#_COMMENT_IF(PROD)_\nq_3.solution()\n\n#%%RM_IF(PROD)%%\nreview_vec = nlp(review).vector\n\n## Center the document vectors\n# Calculate the mean for the document vectors\nvec_mean = vectors.mean(axis=0)\n# Subtract the mean from the vectors\ncentered = vectors - vec_mean\n\n# Calculate similarities for each document in the dataset\n# Make sure to subtract the mean from the review vector\nsims = np.array([cosine_similarity(review_vec - vec_mean, vec) for vec in centered])\n\n# Get the index for the most similar document\nmost_similar = sims.argmax()\nq_3.assert_check_passed()\n\nprint(review_data.iloc[most_similar].text)",
"Even though there are many different sorts of businesses in our Yelp dataset, you should have found another tea shop. \n4) Looking at similar reviews\nIf you look at other similar reviews, you'll see many coffee shops. Why do you think reviews for coffee are similar to the example review which mentions only tea?",
"# Check your answer (Run this code cell to receive credit!)\nq_4.solution()",
"Congratulations!\nYou've finished the NLP course. It's an exciting field that will help you make use of vast amounts of data you didn't know how to work with before.\nThis course should be just your introduction. Try a project with text. You'll have fun with it, and your skills will continue growing."
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
idekerlab/sdcsb-advanced-tutorial
|
tutorials/answers/Lesson_1_Introduction_to_cyREST_answer.ipynb
|
mit
|
[
"VIZBI Tutorial Session\nPart 2: Cytoscape, IPython, Docker, and reproducible network data visualization workflows\nTuesday, 3/24/2015\nLesson 1: Introduction to cyREST\nby Keiichiro Ono\n\n\n\nWelcome!\nThis is an introduction to cyREST and its basic API. In this section, you will learn how to access Cytoscape through its RESTful API.\nPrerequisites\n\nBasic knowledge of RESTful APIs\nThis is a good introduction to REST\n\n\nBasic Python skill - only basics, such as conditional statements, loops, basic data types.\nBasic knowledge of Cytoscape\nCytoscape data types - Networks, Tables, and Styles.\n\n\n\nSystem Requirements\nThis tutorial is tested on the following platform:\nClient machine running Cytoscape\n\nJava SE 8\nCytoscape 3.2.1\nLatest version of cyREST app\n\nServer Running IPython Notebook\n\nDocker running this image\n\n\n1. Import Python Libraries and Basic Setup\nLibraries\nIn this tutorial, we will use several popular Python libraries to make this workflow more realistic.\nDo I need to install all of them?\nNO, because we are running this notebook server in a Docker container.\nHTTP Client\nSince you need to access Cytoscape via its RESTful API, an HTTP client library is the most important tool you need to understand. In this example, we use the Requests library to simplify API call code.\nJSON Encoding and Decoding\nData will be exchanged as JSON between Cytoscape and Python code. Python has built-in support for JSON and we will use it in this workflow.\nBasic Setup for the API\nAt this point, there is only one option for the cy-rest module: the port number.\nChange Port Number\nBy default, the port number used by the cy-rest module is 1234. To change this, you need to set a global Cytoscape property from Edit → Preferences → Properties... and add a new property rest.port.\nWhat is happening in your machine?\nMac / Windows\n\nLinux\n\nActual Docker runtime is only available on the Linux operating system, and if you use the Mac or Windows version of Docker, it is running on a Linux virtual machine (called boot2docker).\nURL to Access Cytoscape REST API\nWe assume you are running the Cytoscape desktop application and an IPython Notebook server in a Docker container we provide. To access the Cytoscape REST API, use the following URL:\nurl\nhttp://IP_of_your_machine:PORT_NUMBER/v1/\nwhere v1 is the current version number of the API. Once the final release is ready, we guarantee compatibility of your scripts as long as the major version number is the same.\nCheck your machine's IP\n\nFor Linux and Mac:\nbash\nifconfig\nFor Windows:\nipconfig\n\nViewing JSON\nAll data exchanged between Cytoscape and other applications is in JSON. You can view the JSON data by using browser extensions:\n\nJSONView for Firefox\nJSONView for Chrome\n\nIf you prefer the command line, jq is the best choice.",
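Rather than hardcoding the machine's IP address, one way to look it up from Python is the standard UDP-socket trick. A sketch — `probe_host` is just a routing probe (no packets are actually sent), and the function falls back to localhost when the machine has no route:

```python
import socket

def local_ip(probe_host='8.8.8.8', probe_port=80):
    """Best-effort lookup of the machine's LAN IP address.

    Connecting a UDP socket toward a public address sends no packets,
    but it makes the OS choose an outgoing interface, whose address we
    then read back.  Falls back to 127.0.0.1 on failure."""
    try:
        s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        try:
            s.connect((probe_host, probe_port))
            return s.getsockname()[0]
        finally:
            s.close()
    except OSError:
        return '127.0.0.1'

# Same shape of base URL as the setup cell below builds by hand.
BASE = 'http://{}:{}/v1/'.format(local_ip(), 1234)
```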
"# HTTP Client for Python\nimport requests\n\n# Standard JSON library\nimport json\n\n# Basic Setup\nPORT_NUMBER = 1234 # This is the default port number of CyREST\n\n# IP address of your PHYSICAL MACHINE (NOT VM)\nIP = '192.168.100.172'\n\nBASE = 'http://' + IP + ':' + str(PORT_NUMBER) + '/v1/'\n\n# Header for posting data to the server as JSON\nHEADERS = {'Content-Type': 'application/json'}\n\n# Clean-up\nrequests.delete(BASE + 'session')",
"2. Test Cytoscape REST API\nCheck the status of the server\nFirst, send a simple request and check the server status.\nRoundtrip between JSON and Python Objects\nThe object returned by requests contains the return value of the API as JSON. Let's convert it into a Python object. The JSON library in Python converts a JSON string into a simple Python object.",
"# Get server status\nres = requests.get(BASE)\nstatus_object = res.json()\nprint(json.dumps(status_object, indent=4))\n\nprint(status_object['apiVersion'])\nprint(status_object['memoryStatus']['usedMemory'])",
"If you are comfortable with this data type conversion, you are ready to go!\n\n3. Import Networks from various data sources\nThere are many ways to load networks into Cytoscape from the REST API:\n\nLoad from files\nLoad from web services\nSend Cytoscape.js style JSON directly to Cytoscape\nSend an edgelist\n\n3.1 Create networks from local files and URLs\nLet's start with a simple file-loading example. The POST method is used to create new Cytoscape objects. For example,\nbash\nPOST http://localhost:1234/v1/networks\nmeans create new network(s) by the specified method. If you want to create networks from files on your machine or remote servers, all you need to do is create a list of file locations and post it to Cytoscape.",
"# Small utility function to create networks from list of URLs\ndef create_from_list(network_list, collection_name='Yeast Collection'):\n payload = {'source': 'url', 'collection': collection_name}\n server_res = requests.post(BASE + 'networks', data=json.dumps(network_list), headers=HEADERS, params=payload)\n return server_res.json()\n\n\n# Array of data source. \nnetwork_files = [\n #This should be path in the LOCAL file system! \n 'file:////Users/kono/git/vizbi-2015/tutorials/data/yeast.json',\n # SIF file on a web server\n 'http://chianti.ucsd.edu/cytoscape-data/galFiltered.sif'\n \n # And of course, you can add as many files as you need...\n]\n\n# Create!\nprint(json.dumps(create_from_list(network_files), indent=4))",
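The file:// entry in the list above must be an absolute path (the next section explains why). Python's pathlib can build such URLs portably, so the hardcoded path doesn't have to be typed by hand. A sketch — the example data path is hypothetical:

```python
from pathlib import Path

def to_file_url(relative_path):
    """Turn a (possibly relative) local path into the absolute
    file:// URL form that Cytoscape expects for local data files."""
    return Path(relative_path).resolve().as_uri()

# Hypothetical usage -- adjust to wherever your data actually lives:
# network_files = [to_file_url('../data/yeast.json'), ...]
url = to_file_url('.')
```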
"What is SUID?\nSUID is a session-unique identifier for all graph objects in Cytoscape. You can access any object in the current session as long as you have its SUID.\nWhere is my local data file?\nThis is a bit tricky. When you specify a local file, you need to use an absolute path.\nOn the Docker container, your data file is mounted at:\n/notebooks/data\nHowever, the actual file is in:\nPATH_TO_YOUR_WORKSPACE/vizbi-2015-cytoscape-tutorial/notebooks/data\nAlthough you can see the data directory at /notebooks/data, you need to use the absolute path to access the actual data from Cytoscape. You may think this is a bit annoying, but actually, this is the power of container technology: you can use a completely isolated environment to run your workflow.\n3.2 Create networks from public RESTful web services\nThere are many public network data services. If the service supports Cytoscape-readable file formats, you can specify the query URL as a network location. For example, the following URL calls the KEGG REST API and loads the TCA cycle pathway diagram for human.",
"# Utility function to display JSON (Pretty-printer)\ndef pp(json_data):\n print(json.dumps(json_data, indent=4))\n\n# You need KEGGScape App to load this file!\nqueries = [ 'http://rest.kegg.jp/get/hsa00020/kgml' ]\npp(create_from_list(queries, 'KEGG Metabolic Pathways'))",
"And of course, you can mix local files, URLs, and web service queries in the same list:",
"mixed = [\n 'http://chianti.ucsd.edu/cytoscape-data/galFiltered.sif',\n 'http://www.ebi.ac.uk/Tools/webservices/psicquic/intact/webservices/current/search/query/brca1?format=xml25'\n]\nresult = create_from_list(mixed, 'Mixed Collection')\npp(result)\n\nmixed1 = result[0]['networkSUID'][0]\nmixed1",
"Understand REST Principles\nWe used modern best practices to design the cyREST API. All HTTP verbs are mapped to Cytoscape resources: \n| HTTP Verb | Description |\n|:----------:|:------------|\n| GET | Retrieving resources (in most cases, Cytoscape data objects, such as networks or tables) |\n| POST | Creating resources | \n| PUT | Changing/replacing resources or collections |\n| DELETE | Deleting resources |\nThis design style is called Resource Oriented Architecture (ROA).\nActually, the basic idea is very simple: map all operations to existing HTTP verbs. It is easy to understand once you try actual examples.\nGET (Get a resource)",
"# Get a list of network IDs\nget_all_networks_url = BASE + 'networks'\nprint(get_all_networks_url)\nres = requests.get(get_all_networks_url)\npp(res.json())\n\n# Pick the first network from the list above:\nnetwork_suid = res.json()[0]\nget_network_url = BASE + 'networks/' + str(network_suid)\nprint(get_network_url)\n\n# Get number of nodes in the network\nget_nodes_count_url = BASE + 'networks/' + str(network_suid) + '/nodes/count'\nprint(get_nodes_count_url)\n\n# Get all nodes\nget_nodes_url = BASE + 'networks/' + str(network_suid) + '/nodes'\nprint(get_nodes_url)\n\n# Get Node data table as CSV\nget_node_table_url = BASE + 'networks/' + str(network_suid) + '/tables/defaultnode.csv'\nprint(get_node_table_url)",
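All of the URLs printed above follow the same resource-path pattern; a tiny helper makes the ROA convention explicit. This is a sketch, not part of cyREST itself, and the SUID in the examples is made up:

```python
def resource_url(base, *segments):
    """Compose a cyREST resource URL from path segments, e.g.
    resource_url(BASE, 'networks', suid, 'nodes', 'count')."""
    return base.rstrip('/') + '/' + '/'.join(str(s) for s in segments)

BASE = 'http://192.168.100.172:1234/v1/'  # mirrors the setup cell

# GET -> list all networks
networks_url = resource_url(BASE, 'networks')
# GET -> nodes of one network (52 is a hypothetical SUID)
nodes_url = resource_url(BASE, 'networks', 52, 'nodes')
```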
"Exercise 1: Guess URLs\nIf a system's RESTful API is well designed based on ROA best practices, it should be easy to guess the URLs for similar functions.\nDisplay clickable URLs for the following functions:\n\nShow the number of networks in the current session\nShow all edges in a network\nShow full information for a node (can be any node)\nShow information for all columns in the default node table\nShow all values in the default node table under the \"name\" column",
"# Write your answers here...\n\n# 1\nprint(BASE + 'networks/count')\n\n#2\nprint(BASE + 'networks/' + str(network_suid) + '/edges')\n\n#3\n# First, get available node SUID. Let's use the URL in the last section:\nres = requests.get(get_nodes_url)\nfirst_node_id = res.json()[0]\nprint(BASE + 'networks/' + str(network_suid) + '/nodes/' + str(first_node_id))\n\n#4\nprint(BASE + 'networks/' + str(network_suid) + '/tables/defaultnode/columns')\n\n#5\nprint(BASE + 'networks/' + str(network_suid) + '/tables/defaultnode/columns/name')",
"POST (Create a new resource)\nTo create a new resource (object), you should use the POST method. URLs follow ROA standards, but you need to read the API documentation to understand the data format for each object.",
"# Add new nodes to an existing network (with time stamps)\nimport datetime\n\nnew_nodes =[\n 'Node created at ' + str(datetime.datetime.now()),\n 'Node created at ' + str(datetime.datetime.now())\n]\n\nres = requests.post(get_nodes_url, data=json.dumps(new_nodes), headers=HEADERS)\nnew_node_ids = res.json()\npp(new_node_ids)",
"DELETE (Delete a resource)",
"# Delete all nodes\nrequests.delete(BASE + 'networks/' + str(mixed1) + '/nodes')\n\n# Delete a network\nrequests.delete(BASE + 'networks/' + str(mixed1))",
"PUT (Update a resource)\nThe PUT method is used to update information for existing resources. Just like with POST, you need to know the data format to be sent.",
"# Update a node name\nnew_values = [\n {\n 'SUID': new_node_ids[0]['SUID'],\n 'value' : 'updated 1'\n },\n {\n 'SUID': new_node_ids[1]['SUID'],\n 'value' : 'updated 2'\n }\n]\nrequests.put(BASE + 'networks/' + str(network_suid) + '/tables/defaultnode/columns/name', data=json.dumps(new_values), headers=HEADERS)",
"3.3 Create networks from Python objects\nThis is the most powerful feature in the Cytoscape REST API. You can easily convert Python objects into Cytoscape networks, tables, or Visual Styles.\nHow does this work?\nThe Cytoscape REST API sends and receives data as JSON. For networks, it uses Cytoscape.js style JSON (support for more file formats is coming!). You can programmatically generate networks by converting a Python dictionary into JSON.\n3.3.1 Prepare Network as Cytoscape.js JSON\nLet's start with the simplest network JSON:",
"# Start from a clean slate: remove all networks from current session\n# requests.delete(BASE + 'networks')\n\n# Manually generate JSON as a Python dictionary\ndef create_network():\n network = { \n 'data': {\n 'name': 'I\\'m empty!'\n },\n 'elements': {\n 'nodes':[],\n 'edges':[]\n }\n }\n return network\n\n\n# Define a simple utility function\ndef postNetwork(data):\n url_params = {\n 'collection': 'My Network Collection'\n }\n res = requests.post(BASE + 'networks', params=url_params, data=json.dumps(data), headers=HEADERS)\n return res.json()['networkSUID']\n\n\n# POST data to Cytoscape\nempty_net_1 = create_network()\nempty_net_1_suid = postNetwork(empty_net_1)\nprint('Empty network has SUID ' + str(empty_net_1_suid))",
"Modify network data programmatically\nSince it's a simple Python dictionary, it is easy to add data to the network. Let's add some nodes and edges:",
"# Create sequence of letters (A-Z)\nseq_letters = list(map(chr, range(ord('A'), ord('Z')+1)))\nprint(seq_letters)\n\n# Option 1: Add nodes and edges with for loops\ndef add_nodes_edges(network):\n nodes = []\n edges = []\n \n for lt in seq_letters:\n node = {\n 'data': {\n 'id': lt\n }\n }\n nodes.append(node)\n for lt in seq_letters:\n edge = {\n 'data': { \n 'source': lt, \n 'target': 'A' \n }\n }\n edges.append(edge)\n network['elements']['nodes'] = nodes\n network['elements']['edges'] = edges\n network['data']['name'] = 'A is the hub.'\n\n# Option 2: Add nodes and edges in functional way\ndef add_nodes_edges_functional(network):\n network['elements']['nodes'] = list(map(lambda x: {'data': { 'id': x }}, seq_letters))\n network['elements']['edges'] = list(map(lambda x: {'data': { 'source': x, 'target': 'A' }}, seq_letters))\n network['data']['name'] = 'A is the hub (Functional Way)'\n\n# Uncomment this if you want to see the actual JSON object\n# print(json.dumps(empty_network, indent=4))\n\nnet1 = create_network()\nnet2 = create_network()\n\nadd_nodes_edges_functional(net1)\nadd_nodes_edges(net2)\n\nnetworks = [net1, net2]\n\ndef visualize(net):\n suid = postNetwork(net)\n net['data']['SUID'] = suid\n # Apply layout and Visual Style\n requests.get(BASE + 'apply/layouts/force-directed/' + str(suid))\n requests.get(BASE + 'apply/styles/Directed/' + str(suid))\n\nfor net in networks:\n visualize(net)",
"Now, your Cytoscape window should look like this:\n\nEmbed images in IPython Notebook\ncyREST has a function to generate a PNG image directly from the current network view. Let's try to see the result in this notebook.",
"from IPython.display import Image\n\nImage(url=BASE+'networks/' + str(net1['data']['SUID'])+ '/views/first.png', embed=True)",
"Introduction to Cytoscape Data Model\nEssentially, writing your workflow as a notebook is programming. To control Cytoscape efficiently from notebooks, you need to understand the basic data model of Cytoscape. Let me explain it as a notebook...\nFirst, let's create a data file to visualize the Cytoscape data model",
"%%writefile ../data/model.sif\nModel parent_of ViewModel_1\nModel parent_of ViewModel_2\nModel parent_of ViewModel_3\nViewModel_1 parent_of Presentation_A\nViewModel_1 parent_of Presentation_B\nViewModel_2 parent_of Presentation_C\nViewModel_3 parent_of Presentation_D\nViewModel_3 parent_of Presentation_E\nViewModel_3 parent_of Presentation_F\n\nmodel = [\n 'file:////Users/kono/git/vizbi-2015/tutorials/data/model.sif'\n]\n\n# Create!\nres = create_from_list(model)\nmodel_suid = res[0]['networkSUID'][0]\n\nrequests.get(BASE + 'apply/layouts/force-directed/' + str(model_suid))\nImage(url=BASE+'networks/' + str(model_suid)+ '/views/first.png', embed=True)",
"Model, View Model, and Presentation\nModel\nEssentially, Model in Cytoscape means networks and tables. Internally, a Model can have multiple View Models.\nView Model\nThe state of the view.\nThis is why you need to use views instead of view in the API:\n/v1/networks/SUID/views\nHowever, Cytoscape 3.2.x has only one rendering engine for now, and end users do not have access to this feature. Until Cytoscape Desktop supports multiple renderers, the best practice is to just use one view per model. To access the default view, there is a utility method first:",
"view_url = BASE + 'networks/' + str(model_suid) + '/views/first'\nprint('You can access (default) network view from this URL: ' + view_url)",
"Presentation\nPresentation is the stateless, actual graphics you see in the window. A View Model can have multiple Presentations. For now, you can assume there is always one presentation per View Model.\n\nWhat do you need to know as a cyREST user?\nThe cyREST API is fairly low level, and you can access all levels of Cytoscape data structures. But if you want to use Cytoscape as a simple network visualization engine for IPython Notebook, here are some tips:\nTip 1: Always keep the SUID when you create any new object\nALL Cytoscape objects - networks, nodes, edges, and tables - have a session-unique ID, called a SUID. When you create any new data object in Cytoscape, it returns SUIDs. You need to keep them as Python data objects (list, dict, map, etc.) to access them later.\nTip 2: Create one view per model\nUntil Cytoscape Desktop fully supports the multiple view/presentation feature, keep it simple: one view per model.\nTip 3: Minimize the number of API calls\nOf course, there is an API to add / remove / update one data object per call, but it is extremely inefficient!\n3.3.2 Prepare Network as edgelist\nEdgelist is a minimalistic data format for networks, and it is widely used in popular libraries including NetworkX and igraph. Preparing an edgelist in Python is straightforward. You just need to prepare a list of edges as a string like:\na b\nb c\na c\nc d\nd f\nb f\nf g\nf h\nIn Python, there are many ways to generate a string like this. Here is a naive approach:",
"data_str = ''\nn = 0\nwhile n <100:\n data_str = data_str + str(n) + '\\t' + str(n+1) + '\\n'\n n = n + 1\n\n# Join the first and last nodes\ndata_str = data_str + '100\\t0\\n'\n\n# print(data_str)\n\nres = requests.post(BASE + 'networks?format=edgelist&collection=Ring', data=data_str, headers=HEADERS)\ncircle_suid = res.json()['networkSUID']\nrequests.get(BASE + 'apply/layouts/circular/' + str(circle_suid))\n\nImage(url=BASE+'networks/' + str(circle_suid) + '/views/first.png', embed=True)",
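The while loop above grows `data_str` by repeated string concatenation; a join-based version produces the identical string and is the usual Python idiom:

```python
def ring_edgelist(n):
    """Tab-separated edge list for a ring over nodes 0..n, one edge
    per line, in the plain-text format posted to cyREST above."""
    edges = ['{}\t{}'.format(i, i + 1) for i in range(n)]
    edges.append('{}\t0'.format(n))  # close the ring
    return '\n'.join(edges) + '\n'

data_str = ring_edgelist(100)
```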
"Exercise 2: Create a network from a simple edge list file\nAn edge list is a human-editable text file representing a graph structure. Using the sample data above (the edge list example in 3.3.2), create a new network in Cytoscape from the edge list and visualize it just like the ring network above. \nHint: Use Magic!",
"%%writefile ../data/small1.txt\na b\nb c\na c\nc d\nd f\nb f\nf g\nf h\n\n# Write your code here...\nf = open('../data/small1.txt', 'r')\ndata = f.read()\n\nres = requests.post(BASE + 'networks?format=edgelist&collection=Edge List Sample', data=data, headers=HEADERS)\nsmall_network_suid = res.json()['networkSUID']\n\nrequests.get(BASE + 'apply/layouts/force-directed/' + str(small_network_suid))\nrequests.get(BASE + 'apply/styles/Directed/' + str(small_network_suid))\n\nImage(url=BASE+'networks/' + str(small_network_suid) + '/views/first.png', embed=True)",
"Discussion\nIn this section, we've learned how to generate networks programmatically in Python. But for real-world problems, it is not a good idea to use low-level Python code to generate networks, because there are lots of cool graph libraries, such as NetworkX or igraph, which provide high-level graph APIs. In the next session, let's use them to analyze real network data sets.\nContinue to Lesson 2: Working with Graph Libraries"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
Gezort/YSDA_deeplearning17
|
Seminar1/Homework 1 (Face Recognition).ipynb
|
mit
|
[
"%pylab inline\n\nimport numpy as np\nfrom matplotlib import pyplot as plt",
"Face recognition\nThe goal of this seminar is to build two simple (and very similar) face recognition pipelines using the scikit-learn package. Overall, we'd like to explore different representations and see which one works better. \nPrepare dataset",
"import scipy.io\n\nimage_h, image_w = 32, 32\n\ndata = scipy.io.loadmat('faces_data.mat')\n\nX_train = data['train_faces'].reshape((image_w, image_h, -1)).transpose((2, 1, 0)).reshape((-1, image_h * image_w))\ny_train = (data['train_labels'] - 1).reshape((-1,))\nX_test = data['test_faces'].reshape((image_w, image_h, -1)).transpose((2, 1, 0)).reshape((-1, image_h * image_w))\ny_test = (data['test_labels'] - 1).reshape((-1,))\n\nn_features = X_train.shape[1]\nn_train = len(y_train)\nn_test = len(y_test)\nn_classes = len(np.unique(y_train))\n\nprint('Dataset loaded.')\nprint(' Image size : {}x{}'.format(image_h, image_w))\nprint(' Train images : {}'.format(n_train))\nprint(' Test images : {}'.format(n_test))\nprint(' Number of classes : {}'.format(n_classes))",
"Now we are going to plot some samples from the dataset using the provided helper function.",
"def plot_gallery(images, titles, h, w, n_row=3, n_col=6):\n \"\"\"Helper function to plot a gallery of portraits\"\"\"\n plt.figure(figsize=(1.5 * n_col, 1.7 * n_row))\n plt.subplots_adjust(bottom=0, left=.01, right=.99, top=.90, hspace=.35)\n for i in range(n_row * n_col):\n plt.subplot(n_row, n_col, i + 1)\n plt.imshow(images[i].reshape((h, w)), cmap=plt.cm.gray, interpolation='nearest')\n plt.title(titles[i], size=12)\n plt.xticks(())\n plt.yticks(())\n\ntitles = [str(y) for y in y_train]\n\nplot_gallery(X_train, titles, image_h, image_w)",
"Nearest Neighbour baseline\nThe simplest way to do face recognition is to treat raw pixels as features and perform nearest neighbour search in Euclidean space. Let's use the KNeighborsClassifier class.",
"from sklearn.neighbors import KNeighborsClassifier\n\ny_train = y_train.ravel()\ny_test = y_test.ravel()\n\n# Use KNeighborsClassifier to calculate test score for the Nearest Neighbour classifier.\nclf = KNeighborsClassifier(n_neighbors=1, n_jobs=-1)\nclf.fit(X_train, y_train)\ntest_score = clf.score(X_test, y_test)\n\nprint('Test score: {}'.format(test_score))",
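For intuition, the same 1-nearest-neighbour rule fits in a few lines of NumPy. This is a sketch on synthetic, well-separated clusters rather than the face data:

```python
import numpy as np

def one_nn_predict(X_train, y_train, X_test):
    """Label each test sample with the label of its closest training
    sample, using squared Euclidean distances computed via the
    identity |a-b|^2 = |a|^2 - 2 a.b + |b|^2."""
    d2 = (np.square(X_test).sum(axis=1)[:, None]
          - 2.0 * X_test @ X_train.T
          + np.square(X_train).sum(axis=1)[None, :])
    return y_train[d2.argmin(axis=1)]

# Two well-separated synthetic clusters standing in for face classes.
rng = np.random.default_rng(1)
X_tr = np.vstack([rng.normal(0.0, 0.1, (20, 64)),
                  rng.normal(5.0, 0.1, (20, 64))])
y_tr = np.array([0] * 20 + [1] * 20)
X_te = np.vstack([rng.normal(0.0, 0.1, (5, 64)),
                  rng.normal(5.0, 0.1, (5, 64))])
preds = one_nn_predict(X_tr, y_tr, X_te)
```

`KNeighborsClassifier(n_neighbors=1)` above does the same thing, with better distance algorithms and parallelism.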
"Not very impressive, is it?\nEigenfaces\nAll the dirty work will be done by the scikit-learn package. First we need to learn a dictionary of codewords. For that, we preprocess the training set by normalizing each face (zero mean and unit variance).",
"X_train.shape\n\nX_train.mean(axis=0).shape\n\n# Populate variable 'X_train_processed' with samples each of which has zero mean and unit variance.\nX_train_processed = X_train * 1.\nmean = X_train_processed.mean(axis=0)\nX_train_processed -= mean\nstd = X_train_processed.std(axis=0)\nX_train_processed /= std\nX_test_processed = X_test * 1.\nX_test_processed -= mean\nX_test_processed /= std\n\nX_train_processed.shape",
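The manual centering and scaling above can be wrapped in a helper that makes the convention explicit: all statistics come from the training set only, so nothing about the test set leaks into preprocessing. A sketch on synthetic data — the `eps` guard against zero-variance pixels is an addition, not part of the notebook's version:

```python
import numpy as np

def standardize(X_train, X_test, eps=1e-12):
    """Zero-mean / unit-variance scaling using training-set
    statistics only, so no test-set information leaks into the
    preprocessing.  eps guards against constant pixels."""
    mean = X_train.mean(axis=0)
    std = X_train.std(axis=0)
    return (X_train - mean) / (std + eps), (X_test - mean) / (std + eps)

rng = np.random.default_rng(2)
Xtr = rng.normal(3.0, 2.0, size=(200, 16))
Xte = rng.normal(3.0, 2.0, size=(50, 16))
Xtr_s, Xte_s = standardize(Xtr, Xte)
```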
"Now we are going to apply PCA to obtain a dictionary of codewords. \nThe PCA class is what we need (use svd_solver='randomized' for randomized PCA).",
"from sklearn.decomposition import PCA\n\nn_components = 64\n\n# Populate 'pca' with a trained instance of PCA with svd_solver='randomized'.\npca = PCA(copy=True, n_components=n_components, svd_solver='randomized', random_state=123)\nX_train_pca = pca.fit_transform(X_train_processed)\nX_test_pca = pca.transform(X_test_processed)",
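Conceptually, the randomized fit above approximates the top right-singular vectors of the centered data; here is a NumPy-only sketch of the same projection using a plain SVD (without the randomized speed-up), on small synthetic data:

```python
import numpy as np

def pca_project(X, n_components):
    """Project rows of X onto the top principal components obtained
    from an SVD of the centered data (the same subspace the fitted
    PCA object exposes, up to sign)."""
    Xc = X - X.mean(axis=0)
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    components = Vt[:n_components]  # shape (k, n_features)
    return Xc @ components.T, components

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 10)) @ rng.normal(size=(10, 10))
Z, comps = pca_project(X, 3)
```

The projected columns come out in order of decreasing variance, which is the property the eigenface plots below rely on.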
"We plot a bunch of principal components.",
"plt.figure(figsize=(20,10))\nfor i in range(5):\n plt.subplot(1, 5, i + 1)\n plt.imshow(pca.components_[i].reshape(32,32), cmap=plt.cm.gray, interpolation='nearest')",
"Transform training data, train an SVM and apply it to the encoded test data.",
"from sklearn.svm import SVC\n\nsvc = SVC(kernel='linear', random_state=123)\n\n# Populate 'test_score' with test accuracy of an SVM classifier.\nsvc.fit(X_train_pca, y_train)\ntest_score = svc.score(X_test_pca, y_test)\nprint('Test score: {}'.format(test_score))",
"How many components are sufficient to reach the same accuracy level?",
"from sklearn.decomposition import PCA\n\nn_components = [1, 2, 4, 8, 16, 32, 64]\naccuracy = []\n\n# Try different numbers of components and populate 'accuracy' list.\nfor n_comp in n_components:\n pca = PCA(n_components=n_comp, svd_solver='randomized', copy=True, random_state=123)\n X_train_pca = pca.fit_transform(X_train_processed)\n X_test_pca = pca.transform(X_test_processed)\n svc.fit(X_train_pca, y_train)\n accuracy.append(svc.score(X_test_pca, y_test))\nplt.figure(figsize=(10, 6))\nplt.plot(n_components, test_score * np.ones(len(n_components)), 'r')\nplt.plot(n_components, accuracy)\n\nprint('Max accuracy: {}'.format(max(accuracy)))",
"As we can see, 32 components are essentially sufficient (for the linear kernel)."
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
alexandratutino/rna-analysis-notebooks
|
Bar Graph for Multiple Genes.ipynb
|
apache-2.0
|
[
"Bar Graph for Multiple Genes given from RNA Sequencing",
"import numpy as np\nimport matplotlib.pyplot as plt\nimport csv\naortaData = [] \naortaDataNumbers = []\ncerebellumData = []\ncerebellumDataNumbers= []\narteryData = []\narteryDataNumbers = []\nwith open (\"genomicdata.csv\") as csvfile:\n readCSV = csv.reader(csvfile, delimiter= '\\t') #gives access to the CSV file\n\n for col in readCSV:\n if col[7] == '':\n aortaData.append('0')\n else:\n aortaData.append(col[7])\n if col[13] == '': #cerebellum\n cerebellumData.append('0')\n else:\n cerebellumData.append(col[13])\n if col[15] == '': #coronary artery\n arteryData.append('0')\n else:\n arteryData.append(col[15])\n aortaDataNumbers = list(map(float, aortaData[19:24]))\n cerebellumDataNumbers = list(map(float, cerebellumData[19:24]))\n arteryDataNumbers = list(map(float, arteryData[19:24]))\n \nind = np.arange(len(aortaDataNumbers)) # the x locations for the groups\nwidth = 0.35 # the width of the bars\n\nfig, ax = plt.subplots()\nrects1 = ax.bar(ind, aortaDataNumbers, width, color='r') #creates rectangles\nrects2 = ax.bar(ind+width, cerebellumDataNumbers, width, color='g') #creates rectangles\nrects3 = ax.bar(ind+width*2, arteryDataNumbers, width, color='b') #creates rectangles\n\nax.set_ylabel('Expression Vector') #Y axis label\nax.set_title('Gene Expressed') #X axis label\nax.set_xticks(ind + width) #the distance between each bar\n\n\n# ax.legend((rects1[0]), ('Expression Vector of Each Gene Expressed in the Aorta')) #Creates a legend so people know\n#what they are looking at\n\n\ndef autolabel(rects): #creates a different label for each bar to show the height\n for rect in rects:\n height = rect.get_height() #height of each bar\n ax.text(rect.get_x() + rect.get_width()/2., 1.05*height,\n '%d' % int(height),\n ha='center', va='bottom') #gives the value \n\n\nplt.show()\n\n",
"In this code, I combined what I worked on last week (creating a gene expression bar chart) with the task for this week (creating a gene expression bar chart for multiple genes). The three tissues I chose were the cerebellum, the coronary artery, and the aorta. I originally decided to do a bar chart with two tissues: the cerebellum and the coronary artery. I selected a random set of genes (19-24) to see how tissues differ. After graphing these genes for the cerebellum and the coronary artery, I found that the expression vectors for all the genes except the last one (ENSG00000002549 LAP3) were relatively similar. For ENSG00000002549 LAP3, however, the coronary artery had significantly higher expression for the gene than the cerebellum. Because of this, I decided to add in the gene expression vectors for the same set of genes for the aorta, since the aorta and the coronary artery are both part of the cardiovascular system. As I predicted, the aorta also has a significantly higher expression vector for this gene.",
"import numpy as np\nimport matplotlib.pyplot as plt\nimport csv\naortaData = [] \naortaDataNumbers = []\ncerebellumData = []\ncerebellumDataNumbers= []\narteryData = []\narteryDataNumbers = []\nwith open (\"genomicdata.csv\") as csvfile:\n readCSV = csv.reader(csvfile, delimiter= '\\t') #gives access to the CSV file\n\n for col in readCSV:\n if col[7] == '':\n aortaData.append('0')\n else:\n aortaData.append(col[7])\n if col[13] == '': #cerebellum\n cerebellumData.append('0')\n else:\n cerebellumData.append(col[13])\n if col[15] == '': #coronary artery\n arteryData.append('0')\n else:\n arteryData.append(col[15])\n aortaDataNumbers = list(map(float, aortaData[73:80]))\n cerebellumDataNumbers = list(map(float, cerebellumData[73:80]))\n arteryDataNumbers = list(map(float, arteryData[73:80]))\n \nind = np.arange(len(aortaDataNumbers)) # the x locations for the groups\nwidth = 0.35 # the width of the bars\n\nfig, ax = plt.subplots()\nrects1 = ax.bar(ind, aortaDataNumbers, width, color='r') #creates rectangles\nrects2 = ax.bar(ind+width, cerebellumDataNumbers, width, color='g') #creates rectangles\nrects3 = ax.bar(ind+width*2, arteryDataNumbers, width, color='b') #creates rectangles\n\nax.set_ylabel('Expression Vector') #Y axis label\nax.set_title('Gene Expressed') #X axis label\nax.set_xticks(ind + width) #the distance between each bar\n\n\n\ndef autolabel(rects): #creates a different label for each bar to show the height\n for rect in rects:\n height = rect.get_height() #height of each bar\n ax.text(rect.get_x() + rect.get_width()/2., 1.05*height,\n '%d' % int(height),\n ha='center', va='bottom') #gives the value \n\n\nplt.show()",
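These cells repeat the same empty-cell handling and gene slicing once per tissue; a small helper removes the duplication. This is a sketch — the column index follows the notebook's CSV layout (7 = aorta, 13 = cerebellum, 15 = coronary artery), and the sample rows below are hypothetical stand-ins for the parsed genomicdata.csv content:

```python
def tissue_values(rows, column, gene_slice):
    """Pull one tissue's expression values out of parsed CSV rows,
    treating empty cells as 0.0 and keeping only a slice of genes."""
    values = [float(row[column]) if row[column] != '' else 0.0
              for row in rows]
    return values[gene_slice]

# Hypothetical rows: gene name, six padding columns, aorta value.
rows = [['gene%d' % i] + [''] * 6 + [str(i * 0.5)] for i in range(10)]
aorta = tissue_values(rows, 7, slice(2, 5))
```

With the real file, one call per tissue (`tissue_values(rows, 13, slice(19, 24))`, etc.) replaces the three copies of the if/else block.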
"In this graph, along with upcoming graphs, I chose a gene for which the cerebellum had a significantly high expression. This gene, number 75 in the array, proved my previous prediction wrong, since the cerebellum, the coronary artery, and the aorta all showed significantly elevated expression for it.",
"import numpy as np\nimport matplotlib.pyplot as plt\nimport csv\naortaData = [] \naortaDataNumbers = []\ncerebellumData = []\ncerebellumDataNumbers= []\narteryData = []\narteryDataNumbers = []\nwith open (\"genomicdata.csv\") as csvfile:\n readCSV = csv.reader(csvfile, delimiter= '\\t') #gives access to the CSV file\n\n for col in readCSV:\n if col[7] == '':\n aortaData.append('0')\n else:\n aortaData.append(col[7])\n if col[13] == '': #cerebellum\n cerebellumData.append('0')\n else:\n cerebellumData.append(col[13])\n if col[15] == '': #coronary artery\n arteryData.append('0')\n else:\n arteryData.append(col[15])\n aortaDataNumbers = list(map(float, aortaData[722:727]))\n cerebellumDataNumbers = list(map(float, cerebellumData[722:727]))\n arteryDataNumbers = list(map(float, arteryData[722:727]))\n \n\nind = np.arange(len(aortaDataNumbers)) # the x locations for the groups\nwidth = 0.35 # the width of the bars\n\nfig, ax = plt.subplots()\nrects1 = ax.bar(ind, aortaDataNumbers, width, color='r') #creates rectangles for aorta\nrects2 = ax.bar(ind+width, cerebellumDataNumbers, width, color='g') #creates rectangles for cerebellum\nrects3 = ax.bar(ind+width*2, arteryDataNumbers, width, color='b') #creates rectangles for coronary artery\n\nax.set_ylabel('Expression Vector') #Y axis label\nax.set_title('Gene Expressed') #X axis label\nax.set_xticks(ind + width) #the distance between each bar\n\n\ndef autolabel(rects): #creates a different label for each bar to show the height\n for rect in rects:\n height = rect.get_height() #height of each bar\n ax.text(rect.get_x() + rect.get_width()/2., 1.05*height,\n '%d' % int(height),\n ha='center', va='bottom') #gives the value \n\n\nplt.show()",
"This graph shows two different things. \nThe first is what I was expecting in the previous graph. For the gene in spot 722 in the array, the expression vector for the cerebellum is high, whereas there is no expression for the gene in either the coronary artery or the aorta.\nThe second is something that I thought of after the results given in the previous graph. For the gene in spot 725 in the array, the gene expression vector for the aorta is significantly higher than the gene expression vector for the coronary artery."
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
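The missing-value handling in the genomics notebook above (empty fields become `'0'`, then the collected strings are converted to floats) can be sketched with an inline tab-separated sample; `sample` and the column indices here are hypothetical stand-ins for `genomicdata.csv`, not the real file:

```python
import csv
import io

# Hypothetical stand-in for genomicdata.csv: tab-separated, some fields empty.
sample = "g1\t3.5\t\ng2\t\t1.2\n"

aorta, cerebellum = [], []
for row in csv.reader(io.StringIO(sample), delimiter='\t'):
    # empty fields are recorded as '0', as in the notebook
    aorta.append(row[1] if row[1] != '' else '0')
    cerebellum.append(row[2] if row[2] != '' else '0')

# convert the collected strings to floats in one pass
aorta_numbers = list(map(float, aorta))            # -> [3.5, 0.0]
cerebellum_numbers = list(map(float, cerebellum))  # -> [0.0, 1.2]
```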
mne-tools/mne-tools.github.io
|
0.18/_downloads/925ca2811e0c01ab2f8d558f074c31a3/plot_simulated_raw_data_using_subject_anatomy.ipynb
|
bsd-3-clause
|
[
"%matplotlib inline",
"Simulate raw data using subject anatomy\nThis example illustrates how to generate source estimates and simulate raw data\nusing subject anatomy with the :class:mne.simulation.SourceSimulator class.\nOnce the raw data is simulated, generated source estimates are reconstructed\nusing dynamic statistical parametric mapping (dSPM) inverse operator.",
"# Author: Ivana Kojcic <ivana.kojcic@gmail.com>\n# Eric Larson <larson.eric.d@gmail.com>\n# Kostiantyn Maksymenko <kostiantyn.maksymenko@gmail.com>\n# Samuel Deslauriers-Gauthier <sam.deslauriers@gmail.com>\n\n# License: BSD (3-clause)\n\nimport os.path as op\n\nimport numpy as np\n\nimport mne\nfrom mne.datasets import sample\n\nprint(__doc__)\n\n# To simulate the sample dataset, information about the sample subject needs to\n# be loaded. This step will download the data if it is not already on your\n# machine. The subjects directory is also set so it doesn't need to be given to\n# functions.\ndata_path = sample.data_path()\nsubjects_dir = op.join(data_path, 'subjects')\nsubject = 'sample'\nmeg_path = op.join(data_path, 'MEG', subject)\n\n# First, we get an info structure from the sample subject.\nfname_info = op.join(meg_path, 'sample_audvis_raw.fif')\ninfo = mne.io.read_info(fname_info)\ntstep = 1 / info['sfreq']\n\n# To simulate sources, we also need a source space. It can be obtained from the\n# forward solution of the sample subject.\nfwd_fname = op.join(meg_path, 'sample_audvis-meg-eeg-oct-6-fwd.fif')\nfwd = mne.read_forward_solution(fwd_fname)\nsrc = fwd['src']\n\n# To simulate raw data, we need to define when the activity occurs using an\n# events matrix and specify the IDs of each event.\n# The noise covariance matrix also needs to be defined.\n# Here, both are loaded from the sample dataset, but they can also be specified\n# by the user.\n\nfname_event = op.join(meg_path, 'sample_audvis_raw-eve.fif')\nfname_cov = op.join(meg_path, 'sample_audvis-cov.fif')\n\nevents = mne.read_events(fname_event)\nnoise_cov = mne.read_cov(fname_cov)\n\n# Standard sample event IDs. These values will correspond to the third column\n# in the events matrix.\nevent_id = {'auditory/left': 1, 'auditory/right': 2, 'visual/left': 3,\n            'visual/right': 4, 'smiley': 5, 'button': 32}",
"In order to simulate source time courses, labels of the desired active regions\nneed to be specified for each of the 4 simulation conditions.\nMake a dictionary that maps conditions to activation strengths within\naparc.a2009s [1]_ labels. In the aparc.a2009s parcellation:\n\n'G_temp_sup-G_T_transv' is the label for the primary auditory area\n'S_calcarine' is the label for the primary visual area\n\nIn each of the 4 conditions, only the primary area is activated. This means\nthat during activations of the auditory areas there are no activations in the\nvisual areas, and vice versa.\nMoreover, for each condition, the contralateral region is more active (here,\ntwice as active) than the ipsilateral one.",
"activations = {\n 'auditory/left':\n [('G_temp_sup-G_T_transv-lh', 100), # label, activation (nAm)\n ('G_temp_sup-G_T_transv-rh', 200)],\n 'auditory/right':\n [('G_temp_sup-G_T_transv-lh', 200),\n ('G_temp_sup-G_T_transv-rh', 100)],\n 'visual/left':\n [('S_calcarine-lh', 100),\n ('S_calcarine-rh', 200)],\n 'visual/right':\n [('S_calcarine-lh', 200),\n ('S_calcarine-rh', 100)],\n}\n\nannot = 'aparc.a2009s'\n\n# Load the 4 necessary label names.\nlabel_names = sorted(set(activation[0]\n for activation_list in activations.values()\n for activation in activation_list))\nregion_names = list(activations.keys())\n\n# Define the time course of the activity for each region to activate. We use a\n# sine wave and it will be the same for all 4 regions.\nsource_time_series = np.sin(np.linspace(0, 4 * np.pi, 100)) * 10e-9",
"Create simulated source activity\nHere, :class:~mne.simulation.SourceSimulator is used, which lets us\nspecify where (label), what (source_time_series), and when (events) each\nevent type occurs.\nWe will add data for 4 areas, each of which contains 2 labels. Since the\nadd_data method accepts 1 label per call, it will be called 2 times per area.\nAll activations will contain the same waveform, but the amplitude will be 2\ntimes higher in the contralateral label, as explained before.\nWhen the activity occurs is defined using events. In this case, they are\ntaken from the original raw data. The first column is the sample of the\nevent, the second is not used, and the third is the event ID, which is\ndifferent for each of the 4 areas.",
"source_simulator = mne.simulation.SourceSimulator(src, tstep=tstep)\nfor region_id, region_name in enumerate(region_names, 1):\n events_tmp = events[np.where(events[:, 2] == region_id)[0], :]\n for i in range(2):\n label_name = activations[region_name][i][0]\n label_tmp = mne.read_labels_from_annot(subject, annot,\n subjects_dir=subjects_dir,\n regexp=label_name,\n verbose=False)\n label_tmp = label_tmp[0]\n amplitude_tmp = activations[region_name][i][1]\n source_simulator.add_data(label_tmp,\n amplitude_tmp * source_time_series,\n events_tmp)\n\n# To obtain a SourceEstimate object, we need to use `get_stc()` method of\n# SourceSimulator class.\nstc_data = source_simulator.get_stc()",
"Simulate raw data\nProject the source time series to sensor space. Three types of noise will be\nadded to the simulated raw data:\n\nmultivariate Gaussian noise obtained from the noise covariance from the\n sample data\nblink (EOG) noise\nECG noise\n\nThe :class:~mne.simulation.SourceSimulator can be given directly to the\n:func:~mne.simulation.simulate_raw function.",
"raw_sim = mne.simulation.simulate_raw(info, source_simulator, forward=fwd,\n cov=None)\nraw_sim.set_eeg_reference(projection=True).crop(0, 60) # for speed\n\nmne.simulation.add_noise(raw_sim, cov=noise_cov, random_state=0)\nmne.simulation.add_eog(raw_sim, random_state=0)\nmne.simulation.add_ecg(raw_sim, random_state=0)\n\n# Plot original and simulated raw data.\nraw_sim.plot(title='Simulated raw data')",
"Reconstruct simulated source time courses using dSPM inverse operator\nHere, source time courses for auditory and visual areas are reconstructed\nseparately and their difference is shown. This was done merely for better\nvisual representation of source reconstruction.\nAs expected, when high activations appear in primary auditory areas, primary\nvisual areas will have low activations and vice versa.",
"method, lambda2 = 'dSPM', 1. / 9.\nepochs = mne.Epochs(raw_sim, events, event_id)\ninv = mne.minimum_norm.make_inverse_operator(epochs.info, fwd, noise_cov)\nstc_aud = mne.minimum_norm.apply_inverse(\n epochs['auditory/left'].average(), inv, lambda2, method)\nstc_vis = mne.minimum_norm.apply_inverse(\n epochs['visual/right'].average(), inv, lambda2, method)\nstc_diff = stc_aud - stc_vis\n\nbrain = stc_diff.plot(subjects_dir=subjects_dir, initial_time=0.1,\n hemi='split', views=['lat', 'med'])",
"References\n.. [1] Destrieux C, Fischl B, Dale A, Halgren E (2010). Automatic\n parcellation of human cortical gyri and sulci using standard\n anatomical nomenclature, vol. 53(1), 1-15, NeuroImage."
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
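The source waveform and amplitude scaling used in the notebook above (a sine burst scaled into the nAm range, with the contralateral label driven at twice the ipsilateral amplitude) can be sketched with plain NumPy; `ipsi` and `contra` are illustrative names for the two scaled time courses, not variables from the notebook:

```python
import numpy as np

# Same waveform as the notebook: two cycles of a sine, scaled to ~10 nAm.
source_time_series = np.sin(np.linspace(0, 4 * np.pi, 100)) * 10e-9

# Per-label amplitudes from the activations dict: contralateral is 2x stronger.
ipsi = 100 * source_time_series
contra = 200 * source_time_series
```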
lago-project/lago
|
docs/examples/lago_sdk_one_vm_one_net.ipynb
|
gpl-2.0
|
[
"Lago SDK Example - one VM one Network",
"import logging\nimport tempfile\nfrom textwrap import dedent\nfrom lago import sdk",
"Create a LagoInitFile. Normally this file would be saved to disk; here we will use a temporary file instead. Our environment includes one CentOS 7.3 VM with one network.",
"with tempfile.NamedTemporaryFile(delete=False) as init_file:\n init_file.write(dedent(\"\"\"\n domains:\n vm-01:\n memory: 1024\n nics:\n - net: net-01\n disks:\n - template_name: el7.3-base\n type: template\n name: root\n dev: sda\n format: qcow2\n nets:\n net-01:\n type: nat\n dhcp:\n start: 100\n end: 254\n \"\"\"))",
"Now we will initialize the environment from the init file. Our workdir will be created automatically if it does not exist. If this is the first time you are running Lago, it might take a while, as Lago will download the CentOS 7.3 template. You can monitor progress by watching the log file we configured at /tmp/lago.log.",
"env = sdk.init(config=init_file.name,\n workdir='/tmp/lago_sdk_simple_example',\n loglevel=logging.DEBUG,\n log_fname='/tmp/lago.log')",
"When the method returns, the environment can be started:",
"env.start()",
"Check which VMs are available and get some meta data:",
"vms = env.get_vms()\nprint(vms)\n\nvm = vms['vm-01']\n\n\nvm.distro()\n\n\nvm.ip()",
"Executing commands in the VM can be done with ssh method:",
"res = vm.ssh(['hostname', '-f'])\n\nres",
"Let's stop the environment. Here we use the destroy method, but you may instead use stop and start if you would only like to turn the environment off.",
"env.destroy()"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
ozak/CompEcon
|
notebooks/ipympl.ipynb
|
gpl-3.0
|
[
"The Matplotlib Jupyter Widget Backend\nEnabling interaction with matplotlib charts in the Jupyter notebook and JupyterLab\nhttps://github.com/matplotlib/ipympl",
"# Enabling the `widget` backend.\n# This requires jupyter-matplotlib a.k.a. ipympl.\n# ipympl can be installed via pip or conda.\n%matplotlib widget\n\nimport matplotlib.pyplot as plt\nimport numpy as np\n\n# Testing matplotlib interactions with a simple plot\nfig = plt.figure()\nplt.plot(np.sin(np.linspace(0, 20, 100)));\n\n# Always hide the toolbar\nfig.canvas.toolbar_visible = False\n\n# Put it back to its default\nfig.canvas.toolbar_visible = 'fade-in-fade-out'\n\n# Change the toolbar position\nfig.canvas.toolbar_position = 'top'\n\n# Hide the Figure name at the top of the figure\nfig.canvas.header_visible = False\n\n# Hide the footer\nfig.canvas.footer_visible = False\n\n# Disable the resizing feature\nfig.canvas.resizable = False\n\n# If true then scrolling while the mouse is over the canvas will not move the entire notebook\nfig.canvas.capture_scroll = True",
"You can also call display on fig.canvas to display the interactive plot anywhere in the notebook",
"fig.canvas.toolbar_visible = True\ndisplay(fig.canvas)",
"Or you can display(fig) to embed the current plot as a png",
"display(fig)",
"3D plotting",
"from mpl_toolkits.mplot3d import axes3d\n\nfig = plt.figure()\nax = fig.add_subplot(111, projection='3d')\n\n# Grab some test data.\nX, Y, Z = axes3d.get_test_data(0.05)\n\n# Plot a basic wireframe.\nax.plot_wireframe(X, Y, Z, rstride=10, cstride=10)\n\nplt.show()",
"Subplots",
"# A more complex example from the matplotlib gallery\nnp.random.seed(0)\n\nn_bins = 10\nx = np.random.randn(1000, 3)\n\nfig, axes = plt.subplots(nrows=2, ncols=2)\nax0, ax1, ax2, ax3 = axes.flatten()\n\ncolors = ['red', 'tan', 'lime']\nax0.hist(x, n_bins, density=1, histtype='bar', color=colors, label=colors)\nax0.legend(prop={'size': 10})\nax0.set_title('bars with legend')\n\nax1.hist(x, n_bins, density=1, histtype='bar', stacked=True)\nax1.set_title('stacked bar')\n\nax2.hist(x, n_bins, histtype='step', stacked=True, fill=False)\nax2.set_title('stack step (unfilled)')\n\n# Make a multiple-histogram of data-sets with different length.\nx_multi = [np.random.randn(n) for n in [10000, 5000, 2000]]\nax3.hist(x_multi, n_bins, histtype='bar')\nax3.set_title('different sample sizes')\n\nfig.tight_layout()\nplt.show()\n\nfig.canvas.toolbar_position = 'right'\n\nfig.canvas.toolbar_visible = False",
"Interactions with other widgets and layouting\nWhen you want to embed the figure into a layout of other widgets you should call plt.ioff() before creating the figure, otherwise plt.figure() will trigger a display of the canvas automatically and outside of your layout. \nWithout using ioff\nHere we will end up with the figure being displayed twice. The button won't do anything; it is just placed as an example of layouting.",
"import ipywidgets as widgets\n\n# ensure we are in interactive mode\n# this is the default, but if this notebook is executed out of order it may have been turned off\nplt.ion()\n\nfig = plt.figure()\nax = fig.gca()\nax.imshow(Z)\n\nwidgets.AppLayout(\n    center=fig.canvas,\n    footer=widgets.Button(icon='check'),\n    pane_heights=[0, 6, 1]\n)",
"Fixing the double display with ioff\nIf we make sure interactive mode is off when we create the figure then the figure will only display where we want it to.\nThere is ongoing work to allow usage of ioff as a context manager, see the ipympl issue and the matplotlib issue",
"plt.ioff()\nfig = plt.figure()\nplt.ion()\n\nax = fig.gca()\nax.imshow(Z)\n\nwidgets.AppLayout(\n center=fig.canvas,\n footer=widgets.Button(icon='check'),\n pane_heights=[0, 6, 1]\n)",
"Interacting with other widgets\nChanging a line plot with a slider",
"# When using the `widget` backend from ipympl,\n# fig.canvas is a proper Jupyter interactive widget, which can be embedded in\n# an ipywidgets layout. See https://ipywidgets.readthedocs.io/en/stable/examples/Layout%20Templates.html\n\n# One can bound figure attributes to other widget values.\nfrom ipywidgets import AppLayout, FloatSlider\n\nplt.ioff()\n\nslider = FloatSlider(\n orientation='horizontal',\n description='Factor:',\n value=1.0,\n min=0.02,\n max=2.0\n)\n\nslider.layout.margin = '0px 30% 0px 30%'\nslider.layout.width = '40%'\n\nfig = plt.figure()\nfig.canvas.header_visible = False\nfig.canvas.layout.min_height = '400px'\nplt.title('Plotting: y=sin({} * x)'.format(slider.value))\n\nx = np.linspace(0, 20, 500)\n\nlines = plt.plot(x, np.sin(slider.value * x))\n\ndef update_lines(change):\n plt.title('Plotting: y=sin({} * x)'.format(change.new))\n lines[0].set_data(x, np.sin(change.new * x))\n fig.canvas.draw()\n fig.canvas.flush_events()\n\nslider.observe(update_lines, names='value')\n\nAppLayout(\n center=fig.canvas,\n footer=slider,\n pane_heights=[0, 6, 1]\n)",
"Update image data in a performant manner\nTwo useful tricks to improve performance when updating an image displayed with matplotlib are to:\n1. Use the set_data method instead of calling imshow\n2. Precompute and then index the array",
"# precomputing all images\nx = np.linspace(0,np.pi,200)\ny = np.linspace(0,10,200)\nX,Y = np.meshgrid(x,y)\nparameter = np.linspace(-5,5)\nexample_image_stack = np.sin(X)[None,:,:]+np.exp(np.cos(Y[None,:,:]*parameter[:,None,None]))\n\nplt.ioff()\nfig = plt.figure()\nplt.ion()\nim = plt.imshow(example_image_stack[0])\n\ndef update(change):\n im.set_data(example_image_stack[change['new']])\n fig.canvas.draw_idle()\n \n \nslider = widgets.IntSlider(value=0, min=0, max=len(parameter)-1)\nslider.observe(update, names='value')\nwidgets.VBox([slider, fig.canvas])",
"Debugging widget updates and matplotlib callbacks\nIf an error is raised in the update function then it will not always display in the notebook, which can make debugging difficult. The same issue also applies to matplotlib callbacks on user events such as mouse movement; for example, see issue. There are two ways to see the output:\n1. In JupyterLab the output will show up in the Log Console (View > Show Log Console)\n2. Using ipywidgets.Output\nHere is an example of using an Output to capture errors in the update function from the previous example. To induce errors we changed the slider limits so that out-of-bounds errors will occur:\nFrom: slider = widgets.IntSlider(value=0, min=0, max=len(parameter)-1)\nTo: slider = widgets.IntSlider(value=0, min=0, max=len(parameter)+10)\nIf you move the slider all the way to the right you should see errors from the Output widget",
"plt.ioff()\nfig = plt.figure()\nplt.ion()\nim = plt.imshow(example_image_stack[0])\n\nout = widgets.Output()\n@out.capture()\ndef update(change):\n    with out:\n        if change['name'] == 'value':\n            im.set_data(example_image_stack[change['new']])\n            fig.canvas.draw_idle()\n\n\nslider = widgets.IntSlider(value=0, min=0, max=len(parameter)+10)\nslider.observe(update)\ndisplay(widgets.VBox([slider, fig.canvas]))\ndisplay(out)"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
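The precompute-then-index trick from the notebook above can be shown without any widgets or plotting; the array sizes here (50x50 images, 20 parameter values) are arbitrary reductions of the notebook's values:

```python
import numpy as np

# Precompute the full image stack once; each slider update then becomes a
# cheap array lookup instead of recomputing the image.
x = np.linspace(0, np.pi, 50)
y = np.linspace(0, 10, 50)
X, Y = np.meshgrid(x, y)
parameter = np.linspace(-5, 5, 20)
stack = np.sin(X)[None, :, :] + np.exp(np.cos(Y[None, :, :] * parameter[:, None, None]))

# In the update callback, indexing replaces recomputation:
frame = stack[3]
```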
kaleoyster/nbi-data-science
|
Deterioration Curves/(Southwest) Deterioration+Curves+and+Classification+of+Bridges+in+the+Southwest+United+States.ipynb
|
gpl-2.0
|
[
"Libraries and Packages",
"import pymongo\nfrom pymongo import MongoClient\nimport time\nimport pandas as pd\nimport numpy as np\nimport seaborn as sns\nimport matplotlib.pyplot as plt\nimport csv",
"Connecting to National Data Service: The Lab Benchwork's NBI - MongoDB instance",
"Client = MongoClient(\"mongodb://bridges:readonly@nbi-mongo.admin/bridge\")\ndb = Client.bridge\ncollection = db[\"bridges\"]",
"Deterioration curves of Southwest United States\nFor demonstration purposes, the results focus only on the states in the Southwest United States: \nTexas, Oklahoma, New Mexico, Arizona\nThe classification of bridges into slow deteriorating, fast deteriorating, and average deteriorating is done based on each bridge's rate of deterioration. This section therefore demonstrates how bridges deteriorate over time in the Southwest United States. To plot the deterioration curve of bridges in every state of the Southwest United States, bridges were grouped by their age. As a result, there are 60 groups of bridges, from age 1 to 60, and the mean condition rating of the deck, superstructure, and substructure is plotted for every age.\nExtracting Data of Southwest states of the United States from 1992 - 2016\nThe following query will extract data from the MongoDB instance and project only selected attributes such as structure number, yearBuilt, deck, year, superstructure, and substructure.",
"def getData(state):\n pipeline = [{\"$match\":{\"$and\":[{\"year\":{\"$gt\":1991, \"$lt\":2017}},{\"stateCode\":state}]}},\n {\"$project\":{\"_id\":0,\n \"structureNumber\":1,\n \"yearBuilt\":1,\n \"deck\":1, ## rating of deck\n \"year\":1, ## survey year\n \"substructure\":1, ## rating of substructure\n \"superstructure\":1, ## rating of superstructure\n }}]\n \n dec = collection.aggregate(pipeline)\n conditionRatings = pd.DataFrame(list(dec)) \n conditionRatings['Age'] = conditionRatings['year'] - conditionRatings['yearBuilt']\n return conditionRatings\n\n",
"Filtering Null Values, Converting JSON Format to Dataframes, and Calculating Mean Condition Ratings of Deck, Superstructure, and Substructure\nAfter the NBI data is extracted, it has to be filtered to remove data points with missing values such as 'N' and 'NA'.\nThe mean condition rating of each component (deck, substructure, and superstructure) is then calculated.",
"def getMeanRatings(state, startAge, endAge, startYear, endYear):\n    conditionRatings = getData(state)\n    conditionRatings = conditionRatings[['structureNumber','Age','superstructure','deck','substructure','year']]\n    conditionRatings = conditionRatings.loc[~conditionRatings['superstructure'].isin(['N','NA'])]\n    conditionRatings = conditionRatings.loc[~conditionRatings['substructure'].isin(['N','NA'])]\n    conditionRatings = conditionRatings.loc[~conditionRatings['deck'].isin(['N','NA'])]\n    #conditionRatings = conditionRatings.loc[~conditionRatings['Structure Type'].isin([19])]\n    #conditionRatings = conditionRatings.loc[~conditionRatings['Type of Wearing Surface'].isin(['6'])]\n\n    maxAge = conditionRatings['Age'].unique()\n    tempConditionRatingsDataFrame = conditionRatings.loc[conditionRatings['year'].isin([i for i in range(startYear, endYear+1, 1)])]\n\n    MeanDeck = []\n    StdDeck = []\n\n    MeanSubstructure = []\n    StdSubstructure = []\n\n    MeanSuperstructure = []\n    StdSuperstructure = []\n\n    ## ages range from startAge to endAge\n    for age in range(startAge, endAge+1, 1):\n        ## select all the bridges with Age == age\n        tempAgeDf = tempConditionRatingsDataFrame.loc[tempConditionRatingsDataFrame['Age'] == age]\n\n        ## type conversion of deck rating into int\n        listOfMeanDeckOfAge = list(tempAgeDf['deck'])\n        listOfMeanDeckOfAge = [ int(deck) for deck in listOfMeanDeckOfAge ]\n\n        ## taking the mean and standard deviation of deck rating at this age\n        meanDeck = np.mean(listOfMeanDeckOfAge)\n        stdDeck = np.std(listOfMeanDeckOfAge)\n\n        ## type conversion of substructure rating into int\n        listOfMeanSubstructureOfAge = list(tempAgeDf['substructure'])\n        listOfMeanSubstructureOfAge = [ int(substructure) for substructure in listOfMeanSubstructureOfAge ]\n\n        meanSub = np.mean(listOfMeanSubstructureOfAge)\n        stdSub = np.std(listOfMeanSubstructureOfAge)\n\n        ## type conversion of superstructure rating into int\n        listOfMeanSuperstructureOfAge = list(tempAgeDf['superstructure'])\n        listOfMeanSuperstructureOfAge = [ int(superstructure) for superstructure in listOfMeanSuperstructureOfAge ]\n\n        meanSup = np.mean(listOfMeanSuperstructureOfAge)\n        stdSup = np.std(listOfMeanSuperstructureOfAge)\n\n        # Append Deck\n        MeanDeck.append(meanDeck)\n        StdDeck.append(stdDeck)\n\n        # Append Substructure\n        MeanSubstructure.append(meanSub)\n        StdSubstructure.append(stdSub)\n\n        # Append Superstructure\n        MeanSuperstructure.append(meanSup)\n        StdSuperstructure.append(stdSup)\n\n    return [MeanDeck, StdDeck, MeanSubstructure, StdSubstructure, MeanSuperstructure, StdSuperstructure]\n",
"Creating DataFrames of the Mean Condition Ratings of the Deck, Superstructure, and Substructure\nThe calculated mean condition ratings of the deck, superstructure, and substructure are now stored in separate dataframes for convenience.",
"#Texas, Oklahoma, New Mexico, Arizona\nstates = ['48','40','35','04']\n\n# state code to state abbreviation \nstateNameDict = {'25':'MA',\n '04':'AZ',\n '08':'CO',\n '38':'ND',\n '09':'CT',\n '19':'IA',\n '26':'MI',\n '48':'TX',\n '35':'NM',\n '17':'IL',\n '51':'VA',\n '23':'ME',\n '16':'ID',\n '36':'NY',\n '56':'WY',\n '29':'MO',\n '39':'OH',\n '28':'MS',\n '11':'DC',\n '21':'KY',\n '18':'IN',\n '06':'CA',\n '47':'TN',\n '12':'FL',\n '24':'MD',\n '34':'NJ',\n '46':'SD',\n '13':'GA',\n '55':'WI',\n '30':'MT',\n '54':'WV',\n '15':'HI',\n '32':'NV',\n '37':'NC',\n '10':'DE',\n '33':'NH',\n '44':'RI',\n '50':'VT',\n '42':'PA',\n '05':'AR',\n '20':'KS',\n '45':'SC',\n '22':'LA',\n '40':'OK',\n '72':'PR',\n '41':'OR',\n '27':'MN',\n '53':'WA',\n '01':'AL',\n '31':'NE',\n '02':'AK',\n '49':'UT'\n }\n\ndef getBulkMeanRatings(states, stateNameDict):\n # Initializaing the dataframes for deck, superstructure and subtructure\n df_mean_deck = pd.DataFrame({'Age':range(1,61)})\n df_mean_sup = pd.DataFrame({'Age':range(1,61)})\n df_mean_sub = pd.DataFrame({'Age':range(1,61)})\n \n df_std_deck = pd.DataFrame({'Age':range(1,61)})\n df_std_sup = pd.DataFrame({'Age':range(1,61)})\n df_std_sub = pd.DataFrame({'Age':range(1,61)})\n\n for state in states:\n meanDeck, stdDeck, meanSub, stdSub, meanSup, stdSup = getMeanRatings(state,1,100,1992,2016)\n stateName = stateNameDict[state]\n df_mean_deck[stateName] = meanDeck[:60]\n df_mean_sup[stateName] = meanSup[:60]\n df_mean_sub[stateName] = meanSub[:60]\n \n df_std_deck[stateName] = stdDeck[:60]\n df_std_sup[stateName] = stdSup[:60]\n df_std_sub[stateName] = stdSub[:60]\n \n return df_mean_deck, df_mean_sup, df_mean_sub, df_std_deck, df_std_sup, df_std_sub\n \ndf_mean_deck, df_mean_sup, df_mean_sub, df_std_deck, df_std_sup, df_std_sub = getBulkMeanRatings(states, stateNameDict)",
"Deterioration Curves - Deck",
"%matplotlib inline\npalette = [ 'blue', 'blue', 'green', 'magenta', 'cyan', 'brown', 'grey',\n 'red','silver','purple', 'gold', 'black','olive' ]\n\nplt.figure(figsize = (10,8))\nindex = 0\nfor state in states:\n index = index + 1\n stateName = stateNameDict[state]\n plt.plot(df_mean_deck['Age'],df_mean_deck[stateName], color = palette[index])\nplt.legend([stateNameDict[state] for state in states],loc='upper right', ncol = 2) \nplt.xlim(1,60)\nplt.ylim(1,9)\nplt.title('Mean Deck Rating Vs Age')\nplt.xlabel('Age')\nplt.ylabel('Mean Deck Rating')\n\n\nplt.figure(figsize = (16,12))\nplt.xlabel('Age')\nplt.ylabel('Mean')\n\n\n# Initialize the figure\nplt.style.use('seaborn-darkgrid')\n \n# create a color palette\n#palette = plt.get_cmap('gist_ncar')\npalette = [\n 'blue', 'blue', 'green','magenta','cyan','brown','grey','red','silver','purple','gold','black','olive'\n]\n# multiple line plot\nnum=1\nfor column in df_mean_deck.drop('Age', axis=1):\n \n # Find the right spot on the plot\n plt.subplot(4,3, num)\n \n # Plot the lineplot\n plt.plot(df_mean_deck['Age'], df_mean_deck[column], marker='', color=palette[num], linewidth=4, alpha=0.9, label=column)\n \n # Same limits for everybody!\n plt.xlim(1,60)\n plt.ylim(1,9)\n \n # Not ticks everywhere\n if num in range(10) :\n plt.tick_params(labelbottom='off')\n if num not in [1,4,7,10]:\n plt.tick_params(labelleft='off')\n \n # Add title\n plt.title(column, loc='left', fontsize=12, fontweight=0, color=palette[num])\n plt.text(30, -1, 'Age', ha='center', va='center')\n plt.text(1, 4, 'Mean Deck Rating', ha='center', va='center', rotation='vertical')\n num = num + 1\n \n# general title\nplt.suptitle(\"Mean Deck Rating vs Age \\nIndividual State Deterioration Curves\", fontsize=13, fontweight=0, color='black', style='italic', y=1.02)\n ",
"Deterioration Curve - Superstructure",
"palette = [ 'blue', 'blue', 'green', 'magenta', 'cyan', 'brown', 'grey',\n 'red','silver','purple', 'gold', 'black','olive' ]\n\nplt.figure(figsize = (10,8))\nindex = 0\nfor state in states:\n index = index + 1\n stateName = stateNameDict[state]\n plt.plot(df_mean_sub['Age'],df_mean_sup[stateName], color = palette[index])\nplt.legend([stateNameDict[state] for state in states],loc='upper right', ncol = 2) \nplt.xlim(1,60)\nplt.ylim(1,9)\nplt.title('Mean Superstructure Rating Vs Age')\nplt.xlabel('Age')\nplt.ylabel('Mean Superstructure Rating')\n\n\nplt.figure(figsize = (16,12))\nplt.xlabel('Age')\nplt.ylabel('Mean')\n\n\n# Initialize the figure\nplt.style.use('seaborn-darkgrid')\n \n# create a color palette\n#palette = plt.get_cmap('gist_ncar')\npalette = [\n 'blue',\n 'blue',\n 'green',\n 'magenta',\n 'cyan',\n 'brown',\n 'grey',\n 'red',\n 'silver',\n 'purple',\n 'gold',\n 'black',\n 'olive'\n]\n# multiple line plot\nnum=1\nfor column in df_mean_sup.drop('Age', axis=1):\n \n # Find the right spot on the plot\n plt.subplot(4,3, num)\n \n # Plot the lineplot\n plt.plot(df_mean_sup['Age'], df_mean_sup[column], marker='', color=palette[num], linewidth=4, alpha=0.9, label=column)\n \n # Same limits for everybody!\n plt.xlim(1,60)\n plt.ylim(1,9)\n \n # Not ticks everywhere\n if num in range(10) :\n plt.tick_params(labelbottom='off')\n if num not in [1,4,7,10]:\n plt.tick_params(labelleft='off')\n \n # Add title\n plt.title(column, loc='left', fontsize=12, fontweight=0, color=palette[num])\n plt.text(30, -1, 'Age', ha='center', va='center')\n plt.text(1, 4, 'Mean Superstructure Rating', ha='center', va='center', rotation='vertical')\n num = num + 1\n \n# general title\nplt.suptitle(\"Mean Superstructure Rating vs Age \\nIndividual State Deterioration Curves\", fontsize=13, fontweight=0, color='black', style='italic', y=1.02)",
"Deterioration Curves - Substructure",
"palette = [ 'blue', 'blue', 'green', 'magenta', 'cyan', 'brown', 'grey',\n 'red','silver','purple', 'gold', 'black','olive' ]\n\nplt.figure(figsize = (10,8))\nindex = 0\nfor state in states:\n    index = index + 1\n    stateName = stateNameDict[state]\n    plt.plot(df_mean_sub['Age'],df_mean_sub[stateName], color = palette[index], linewidth=4)\nplt.legend([stateNameDict[state] for state in states],loc='upper right', ncol = 2) \nplt.xlim(1,60)\nplt.ylim(1,9)\nplt.title('Mean Substructure Rating Vs Age')\nplt.xlabel('Age')\nplt.ylabel('Mean Substructure Rating')\n\n\nplt.figure(figsize = (16,12))\nplt.xlabel('Age')\nplt.ylabel('Mean')\n\n\n# Initialize the figure\nplt.style.use('seaborn-darkgrid')\n\n# create a color palette\npalette = [\n 'blue', 'blue', 'green', 'magenta', 'cyan', 'brown', 'grey', 'red', 'silver', 'purple', 'gold', 'black','olive'\n]\n# multiple line plot\nnum=1\nfor column in df_mean_sub.drop('Age', axis=1):\n\n    # Find the right spot on the plot\n    plt.subplot(4,3, num)\n\n    # Plot the lineplot\n    plt.plot(df_mean_sub['Age'], df_mean_sub[column], marker='', color=palette[num], linewidth=4, alpha=0.9, label=column)\n\n    # Same limits for everybody!\n    plt.xlim(1,60)\n    plt.ylim(1,9)\n\n    # No ticks everywhere\n    if num in range(7) :\n        plt.tick_params(labelbottom='off')\n    if num not in [1,4,7] :\n        plt.tick_params(labelleft='off')\n\n    # Add title\n    plt.title(column, loc='left', fontsize=12, fontweight=0, color=palette[num])\n    plt.text(30, -1, 'Age', ha='center', va='center')\n    plt.text(1, 4, 'Mean Substructure Rating', ha='center', va='center', rotation='vertical')\n    num = num + 1\n\n# general title\nplt.suptitle(\"Mean Substructure Rating vs Age \\nIndividual State Deterioration Curves\", fontsize=13, fontweight=0, color='black', style='italic', y=1.02)\n\n\ndef getDataOneYear(state):\n    pipeline = [{\"$match\":{\"$and\":[{\"year\":{\"$gt\":2015, \"$lt\":2017}},{\"stateCode\":state}]}},\n                {\"$project\":{\"_id\":0,\n                             \"Structure Type\":\"$structureTypeMain.typeOfDesignConstruction\",\n                             \"Type of Wearing Surface\":\"$wearingSurface/ProtectiveSystem.typeOfWearingSurface\",\n                             \"structureNumber\":1,\n                             \"yearBuilt\":1,\n                             \"deck\":1, ## rating of deck\n                             \"year\":1, ## survey year\n                             \"substructure\":1, ## rating of substructure\n                             \"superstructure\":1, ## rating of superstructure\n                             }}]\n\n    dec = collection.aggregate(pipeline)\n    conditionRatings = pd.DataFrame(list(dec))\n    conditionRatings['Age'] = conditionRatings['year'] - conditionRatings['yearBuilt']\n\n    return conditionRatings\n\n## Condition ratings of all states concatenated into one single data frame\nframes = []\nfor state in states:\n    f = getDataOneYear(state)\n    frames.append(f)\n\ndf_nbi_sw = pd.concat(frames)\n\ndf_nbi_sw = df_nbi_sw.loc[~df_nbi_sw['deck'].isin(['N','NA'])]\ndf_nbi_sw = df_nbi_sw.loc[~df_nbi_sw['substructure'].isin(['N','NA'])]\ndf_nbi_sw = df_nbi_sw.loc[~df_nbi_sw['superstructure'].isin(['N','NA'])]\ndf_nbi_sw = df_nbi_sw.loc[~df_nbi_sw['Type of Wearing Surface'].isin(['6'])]",
"The mean deterioration curve can be used as a measure to evaluate the rate of deterioration. If the condition rating of a bridge lies above the deterioration curve, the bridge is deteriorating at a slower pace than the mean deterioration of the bridges; if it lies below the curve, the bridge is deteriorating at a faster pace than the mean.\nThis concept can be extended to a deterioration score, which denotes the rate of deterioration. A positive score means the individual bridge is deteriorating more slowly than the mean rate of deterioration, and a negative score means it is deteriorating faster than the mean.\nLetting $z_{ia}$ be the condition rating of bridge $i$ at age $a$, and $\bar x_a$ and $\sigma(x_a)$ the mean and standard deviation of condition ratings at age $a$, the deterioration score is defined (as computed in the code below) as\n$$\text{score}_{ia} = \frac{z_{ia} - \bar x_a}{\sigma(x_a)}$$\nClassification Criteria\nThe classification criteria used to classify bridges into slow, average, and fast deterioration. Bridges are classified based on how far an individual bridge's condition rating is from the mean for its age.\n| Categories | Value |\n|------------------------|-------------------------------|\n| Slow Deterioration | $z_{ia} \geq \bar x_a + 1\sigma(x_a)$ |\n| Average Deterioration | $\bar x_a - 1\sigma(x_a) < z_{ia} < \bar x_a + 1\sigma(x_a)$ |\n| Fast Deterioration | $z_{ia} \leq \bar x_a - 1\sigma(x_a)$ |",
"stat = ['48','40','35','04']\nAgeList = list(df_nbi_sw['Age'])\ndeckList = list(df_nbi_sw['deck'])\nnum = 1\nfor st in stat:\n deckR = []\n deckR = getDataOneYear(st)\n deckR = deckR[['Age','deck']]\n deckR= deckR.loc[~deckR['deck'].isin(['N','NA'])]\n stateName = stateNameDict[st]\n labels = []\n for deckRating, Age in zip (deckList,AgeList):\n if Age < 60:\n mean_age_conditionRating = df_mean_deck[stateName][Age]\n std_age_conditionRating = df_std_deck[stateName][Age]\n\n detScore = (int(deckRating) - mean_age_conditionRating) / std_age_conditionRating\n\n if (mean_age_conditionRating - std_age_conditionRating) < int(deckRating) <= (mean_age_conditionRating + std_age_conditionRating):\n # Append a label\n labels.append('Average Deterioration')\n # else, if more than a value,\n elif int(deckRating) > (mean_age_conditionRating + std_age_conditionRating):\n # Append a label\n labels.append('Slow Deterioration')\n # else, if more than a value,\n elif int(deckRating) < (mean_age_conditionRating - std_age_conditionRating):\n # Append a label\n labels.append('Fast Deterioration')\n else:\n labels.append('Null Value')\n D = dict((x,labels.count(x)) for x in set(labels))\n \n plt.figure(figsize=(12,6))\n plt.title(stateName)\n plt.bar(range(len(D)), list(D.values()), align='center')\n plt.xticks(range(len(D)), list(D.keys()))\n plt.xlabel('Categories')\n plt.ylabel('Number of Bridges')\n plt.show()\n num = num + 1\n\nstat = ['48','40','35','04']\nAgeList = list(df_nbi_sw['Age'])\ndeckList = list(df_nbi_sw['deck'])\nnum = 1\nlabel = []\nfor st in stat:\n deckR = []\n deckR = getDataOneYear(st)\n deckR = deckR[['Age','deck']]\n deckR= deckR.loc[~deckR['deck'].isin(['N','NA'])]\n stateName = stateNameDict[st]\n \n for deckRating, Age in zip (deckList,AgeList):\n if Age < 60:\n mean_age_conditionRating = df_mean_deck[stateName][Age]\n std_age_conditionRating = df_std_deck[stateName][Age]\n\n detScore = (int(deckRating) - mean_age_conditionRating) / 
std_age_conditionRating\n\n if (mean_age_conditionRating - std_age_conditionRating) < int(deckRating) <= (mean_age_conditionRating + std_age_conditionRating):\n # Append a label\n labels.append('Average Deterioration')\n # else, if more than a value,\n elif int(deckRating) > (mean_age_conditionRating + std_age_conditionRating):\n # Append a label\n labels.append('Slow Deterioration')\n # else, if more than a value,\n elif int(deckRating) < (mean_age_conditionRating - std_age_conditionRating):\n # Append a label\n labels.append('Fast Deterioration')\n else:\n labels.append('Null Value')\n\n ",
"Classification of all the bridges in the Southwest United States",
"D = dict((x,labels.count(x)) for x in set(labels))\nplt.figure(figsize=(12,6))\nplt.title('Classification of Bridges in Southwest United States')\nplt.bar(range(len(D)), list(D.values()), align='center')\nplt.xticks(range(len(D)), list(D.keys()))\nplt.xlabel('Categories of Bridges')\nplt.ylabel('Number of Bridges')\nplt.show()\n"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
clarka34/exploring-ship-logbooks
|
scripts/classifier-example.ipynb
|
mit
|
[
"Example of how to use classification code for ship logbooks\nImports",
"from exploringShipLogbooks.config import non_slave_ships\nfrom exploringShipLogbooks.classification import LogbookClassifier",
"Initialize classifier\n\nClassification algorithm can be set to \"Naive Bayes\" or \"Decision Tree\"",
"cl = LogbookClassifier(classification_algorithm=\"Naive Bayes\")",
"Load Data, Clean data, and Perform Classification\n\nThis function loads, cleans, and classifies data\nOptions include:\nfuzz - boolean value for whether to perform fuzzy string matching on values or not\nexport_csv - boolean value to determine whether classification output is saved. The csv contains information for every log used in the code, with the following key:\n0 = unclassified data\n1 = data used as negative training data\n2 = data used as positive validation data from cliwoc database\n3 = slave voyages database data\n4 = data classified as a non-slave ship\n5 = data classified a slave ship",
"cl.load_clean_and_classify(fuzz=False, export_csv=True)",
"How to access data from outside of the classifier",
"# data that was classified (unknown class before classification)\ncl.unclassified_logs.head()\n\n# data used for validation: 20% of slave voyage logs\ncl.validation_set_2.head()\n\n# data used for validation: logs that mention slaves in cliwoc data set\ncl.validation_set_1.head()\n\n# data used for training classifier\ncl.training_data.head()",
"Sample plots of data",
"import exploringShipLogbooks\nimport os.path as op\nimport matplotlib.pyplot as plt\nimport matplotlib\n%matplotlib inline\nimport pandas as pd\n\n# load un-cleaned slave_voyage_logs data\ndata_path = op.join(exploringShipLogbooks.__path__[0], 'data')\nfile_name = data_path + '/tastdb-exp-2010'\nslave_voyage_logs = pd.read_pickle(file_name)\n\nfig1, ax1 = plt.subplots()\n\nax1.hist(pd.concat([cl.validation_set_2, cl.training_data], ignore_index = True)['Year'])\nax1.set_xlabel('Year', fontsize = 30)\nax1.set_ylabel('Counts', fontsize = 30)\nplt.xlim([1750, 1850])\n\nfor tick in ax1.xaxis.get_major_ticks():\n tick.label.set_fontsize(26) \n\nfor tick in ax1.yaxis.get_major_ticks():\n tick.label.set_fontsize(26) \n\nfig1.set_size_inches(10, 8)\n\nplt.savefig('slave_voyage_years.png')\n\nfig2, ax2 = plt.subplots()\n\nax2.hist(pd.concat([cl.validation_set_1, cl.unclassified_logs], ignore_index = True)['Year'])\nax2.set_xlabel('Year', fontsize = 30)\nax2.set_ylabel('Counts', fontsize = 30)\nplt.xlim([1750, 1850])\n\nfor tick in ax2.xaxis.get_major_ticks():\n tick.label.set_fontsize(26) \n\nfor tick in ax2.yaxis.get_major_ticks():\n tick.label.set_fontsize(26) \n\nfig2.set_size_inches(11, 8)\n\nplt.savefig('cliwoc_years.jpeg')\n\nfractions = []\nfract_dict = dict(slave_voyage_logs['national'].value_counts(normalize=True))\nfractions = []\nnats = []\nfor key in fract_dict:\n if fract_dict[key] > 0.01:\n nats.append(key)\n fractions.append(fract_dict[key])\n\nexplode=[0.05] * len(fractions)\n\nfig2, ax2 = plt.subplots()\nfig2.set_size_inches(10,10)\nmatplotlib.rcParams['font.size'] = 30\n\nmatplotlib.pylab.pie(fractions, labels = nats, explode = explode)\n\nplt.savefig('slave_voyages_nats.png')\n\nfractions = []\nfract_dict = dict(cl.cliwoc_data_all['Nationality'].value_counts(normalize=True))\nfractions = []\nnats = []\nfor key in fract_dict:\n if fract_dict[key] > 0.01:\n nats.append(key)\n fractions.append(fract_dict[key])\n\nexplode=[0.05] * len(fractions)\n\nfig2, ax2 
= plt.subplots()\nfig2.set_size_inches(10,10)\nmatplotlib.rcParams['font.size'] = 30\n\nmatplotlib.pylab.pie(fractions, labels = nats, explode = explode)\n\nplt.savefig('cliwoc_nats.png')"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
dvirsamuel/MachineLearningCourses
|
Visual Recognision - Stanford/assignment2/FullyConnectedNets.ipynb
|
gpl-3.0
|
[
"Fully-Connected Neural Nets\nIn the previous homework you implemented a fully-connected two-layer neural network on CIFAR-10. The implementation was simple but not very modular since the loss and gradient were computed in a single monolithic function. This is manageable for a simple two-layer network, but would become impractical as we move to bigger models. Ideally we want to build networks using a more modular design so that we can implement different layer types in isolation and then snap them together into models with different architectures.\nIn this exercise we will implement fully-connected networks using a more modular approach. For each layer we will implement a forward and a backward function. The forward function will receive inputs, weights, and other parameters and will return both an output and a cache object storing data needed for the backward pass, like this:\n```python\ndef layer_forward(x, w):\n \"\"\" Receive inputs x and weights w \"\"\"\n # Do some computations ...\n z = # ... 
some intermediate value\n # Do some more computations ...\n out = # the output\ncache = (x, w, z, out) # Values we need to compute gradients\nreturn out, cache\n```\nThe backward pass will receive upstream derivatives and the cache object, and will return gradients with respect to the inputs and weights, like this:\n```python\ndef layer_backward(dout, cache):\n \"\"\"\n Receive derivative of loss with respect to outputs and cache,\n and compute derivative with respect to inputs.\n \"\"\"\n # Unpack cache values\n x, w, z, out = cache\n# Use values in cache to compute derivatives\n dx = # Derivative of loss with respect to x\n dw = # Derivative of loss with respect to w\nreturn dx, dw\n```\nAfter implementing a bunch of layers this way, we will be able to easily combine them to build classifiers with different architectures.\nIn addition to implementing fully-connected networks of arbitrary depth, we will also explore different update rules for optimization, and introduce Dropout as a regularizer and Batch Normalization as a tool to more efficiently optimize deep networks.",
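As a concrete, runnable instance of this contract, here is a hypothetical elementwise-multiply layer (not one of the assignment's layers) with its matching forward and backward functions:

```python
import numpy as np

def mul_forward(x, w):
    """Forward: z = x * w elementwise; cache the inputs for backward."""
    out = x * w
    cache = (x, w)
    return out, cache

def mul_backward(dout, cache):
    """Backward: chain rule through z = x * w."""
    x, w = cache
    dx = dout * w   # dz/dx = w
    dw = dout * x   # dz/dw = x
    return dx, dw

x = np.array([1.0, 2.0, 3.0])
w = np.array([4.0, 5.0, 6.0])
out, cache = mul_forward(x, w)
dx, dw = mul_backward(np.ones(3), cache)
```

Layers written this way compose freely: the output of one forward call feeds the next, and the caches are replayed in reverse order during the backward pass.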
"# As usual, a bit of setup\nimport time\nimport numpy as np\nimport matplotlib\nmatplotlib.use('TkAgg')\nimport matplotlib.pyplot as plt\nfrom cs231n.classifiers.fc_net import *\nfrom cs231n.data_utils import get_CIFAR10_data\nfrom cs231n.gradient_check import eval_numerical_gradient, eval_numerical_gradient_array\nfrom cs231n.solver import Solver\n\n%matplotlib inline\nplt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots\nplt.rcParams['image.interpolation'] = 'nearest'\nplt.rcParams['image.cmap'] = 'gray'\n\n# for auto-reloading external modules\n# see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython\n%load_ext autoreload\n%autoreload 2\n\ndef rel_error(x, y):\n \"\"\" returns relative error \"\"\"\n return np.max(np.abs(x - y) / (np.maximum(1e-8, np.abs(x) + np.abs(y))))\n\n# Load the (preprocessed) CIFAR10 data.\n\ndata = get_CIFAR10_data()\nfor k, v in data.iteritems():\n print '%s: ' % k, v.shape",
"Affine layer: foward\nOpen the file cs231n/layers.py and implement the affine_forward function.\nOnce you are done you can test your implementaion by running the following:",
"# Test the affine_forward function\n\nnum_inputs = 2\ninput_shape = (4, 5, 6)\noutput_dim = 3\n\ninput_size = num_inputs * np.prod(input_shape)\nweight_size = output_dim * np.prod(input_shape)\n\nx = np.linspace(-0.1, 0.5, num=input_size).reshape(num_inputs, *input_shape)\nw = np.linspace(-0.2, 0.3, num=weight_size).reshape(np.prod(input_shape), output_dim)\nb = np.linspace(-0.3, 0.1, num=output_dim)\n\nout, _ = affine_forward(x, w, b)\ncorrect_out = np.array([[ 1.49834967, 1.70660132, 1.91485297],\n [ 3.25553199, 3.5141327, 3.77273342]])\n\n# Compare your output with ours. The error should be around 1e-9.\nprint 'Testing affine_forward function:'\nprint 'difference: ', rel_error(out, correct_out)",
"Affine layer: backward\nNow implement the affine_backward function and test your implementation using numeric gradient checking.",
"# Test the affine_backward function\n\nx = np.random.randn(10, 2, 3)\nw = np.random.randn(6, 5)\nb = np.random.randn(5)\ndout = np.random.randn(10, 5)\n\ndx_num = eval_numerical_gradient_array(lambda x: affine_forward(x, w, b)[0], x, dout)\ndw_num = eval_numerical_gradient_array(lambda w: affine_forward(x, w, b)[0], w, dout)\ndb_num = eval_numerical_gradient_array(lambda b: affine_forward(x, w, b)[0], b, dout)\n\n_, cache = affine_forward(x, w, b)\ndx, dw, db = affine_backward(dout, cache)\n\n# The error should be around 1e-10\nprint 'Testing affine_backward function:'\nprint 'dx error: ', rel_error(dx_num, dx)\nprint 'dw error: ', rel_error(dw_num, dw)\nprint 'db error: ', rel_error(db_num, db)",
"ReLU layer: forward\nImplement the forward pass for the ReLU activation function in the relu_forward function and test your implementation using the following:",
"# Test the relu_forward function\n\nx = np.linspace(-0.5, 0.5, num=12).reshape(3, 4)\n\nout, _ = relu_forward(x)\ncorrect_out = np.array([[ 0., 0., 0., 0., ],\n [ 0., 0., 0.04545455, 0.13636364,],\n [ 0.22727273, 0.31818182, 0.40909091, 0.5, ]])\n\n# Compare your output with ours. The error should be around 1e-8\nprint 'Testing relu_forward function:'\nprint 'difference: ', rel_error(out, correct_out)",
"ReLU layer: backward\nNow implement the backward pass for the ReLU activation function in the relu_backward function and test your implementation using numeric gradient checking:",
"x = np.random.randn(10, 10)\ndout = np.random.randn(*x.shape)\n\ndx_num = eval_numerical_gradient_array(lambda x: relu_forward(x)[0], x, dout)\n\n_, cache = relu_forward(x)\ndx = relu_backward(dout, cache)\n\n# The error should be around 1e-12\nprint 'Testing relu_backward function:'\nprint 'dx error: ', rel_error(dx_num, dx)",
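For reference, a compact ReLU pair under the same forward/backward contract (a sketch; the cached value is simply the input):

```python
import numpy as np

def relu_forward(x):
    out = np.maximum(0, x)  # zero out negative activations
    return out, x           # cache the input for the backward pass

def relu_backward(dout, cache):
    x = cache
    return dout * (x > 0)   # gradient flows only where the input was positive

x = np.array([[-1.0, 2.0], [3.0, -4.0]])
out, cache = relu_forward(x)
dx = relu_backward(np.ones_like(x), cache)
```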
"\"Sandwich\" layers\nThere are some common patterns of layers that are frequently used in neural nets. For example, affine layers are frequently followed by a ReLU nonlinearity. To make these common patterns easy, we define several convenience layers in the file cs231n/layer_utils.py.\nFor now take a look at the affine_relu_forward and affine_relu_backward functions, and run the following to numerically gradient check the backward pass:",
"from cs231n.layer_utils import affine_relu_forward, affine_relu_backward\n\nx = np.random.randn(2, 3, 4)\nw = np.random.randn(12, 10)\nb = np.random.randn(10)\ndout = np.random.randn(2, 10)\n\nout, cache = affine_relu_forward(x, w, b)\ndx, dw, db = affine_relu_backward(dout, cache)\n\ndx_num = eval_numerical_gradient_array(lambda x: affine_relu_forward(x, w, b)[0], x, dout)\ndw_num = eval_numerical_gradient_array(lambda w: affine_relu_forward(x, w, b)[0], w, dout)\ndb_num = eval_numerical_gradient_array(lambda b: affine_relu_forward(x, w, b)[0], b, dout)\n\nprint 'Testing affine_relu_forward:'\nprint 'dx error: ', rel_error(dx_num, dx)\nprint 'dw error: ', rel_error(dw_num, dw)\nprint 'db error: ', rel_error(db_num, db)",
"Loss layers: Softmax and SVM\nYou implemented these loss functions in the last assignment, so we'll give them to you for free here. You should still make sure you understand how they work by looking at the implementations in cs231n/layers.py.\nYou can make sure that the implementations are correct by running the following:",
"num_classes, num_inputs = 10, 50\nx = 0.001 * np.random.randn(num_inputs, num_classes)\ny = np.random.randint(num_classes, size=num_inputs)\n\ndx_num = eval_numerical_gradient(lambda x: svm_loss(x, y)[0], x, verbose=False)\nloss, dx = svm_loss(x, y)\n\n# Test svm_loss function. Loss should be around 9 and dx error should be 1e-9\nprint 'Testing svm_loss:'\nprint 'loss: ', loss\nprint 'dx error: ', rel_error(dx_num, dx)\n\ndx_num = eval_numerical_gradient(lambda x: softmax_loss(x, y)[0], x, verbose=False)\nloss, dx = softmax_loss(x, y)\n\n# Test softmax_loss function. Loss should be 2.3 and dx error should be 1e-8\nprint '\\nTesting softmax_loss:'\nprint 'loss: ', loss\nprint 'dx error: ', rel_error(dx_num, dx)",
"Two-layer network\nIn the previous assignment you implemented a two-layer neural network in a single monolithic class. Now that you have implemented modular versions of the necessary layers, you will reimplement the two layer network using these modular implementations.\nOpen the file cs231n/classifiers/fc_net.py and complete the implementation of the TwoLayerNet class. This class will serve as a model for the other networks you will implement in this assignment, so read through it to make sure you understand the API. You can run the cell below to test your implementation.",
"N, D, H, C = 3, 5, 50, 7\nX = np.random.randn(N, D)\ny = np.random.randint(C, size=N)\n\nstd = 1e-2\nmodel = TwoLayerNet(input_dim=D, hidden_dim=H, num_classes=C, weight_scale=std)\n\nprint 'Testing initialization ... '\nW1_std = abs(model.params['W1'].std() - std)\nb1 = model.params['b1']\nW2_std = abs(model.params['W2'].std() - std)\nb2 = model.params['b2']\nassert W1_std < std / 10, 'First layer weights do not seem right'\nassert np.all(b1 == 0), 'First layer biases do not seem right'\nassert W2_std < std / 10, 'Second layer weights do not seem right'\nassert np.all(b2 == 0), 'Second layer biases do not seem right'\n\nprint 'Testing test-time forward pass ... '\nmodel.params['W1'] = np.linspace(-0.7, 0.3, num=D*H).reshape(D, H)\nmodel.params['b1'] = np.linspace(-0.1, 0.9, num=H)\nmodel.params['W2'] = np.linspace(-0.3, 0.4, num=H*C).reshape(H, C)\nmodel.params['b2'] = np.linspace(-0.9, 0.1, num=C)\nX = np.linspace(-5.5, 4.5, num=N*D).reshape(D, N).T\nscores = model.loss(X)\ncorrect_scores = np.asarray(\n [[11.53165108, 12.2917344, 13.05181771, 13.81190102, 14.57198434, 15.33206765, 16.09215096],\n [12.05769098, 12.74614105, 13.43459113, 14.1230412, 14.81149128, 15.49994135, 16.18839143],\n [12.58373087, 13.20054771, 13.81736455, 14.43418138, 15.05099822, 15.66781506, 16.2846319 ]])\nscores_diff = np.abs(scores - correct_scores).sum()\nassert scores_diff < 1e-6, 'Problem with test-time forward pass'\n\nprint 'Testing training loss (no regularization)'\ny = np.asarray([0, 5, 1])\nloss, grads = model.loss(X, y)\ncorrect_loss = 3.4702243556\nassert abs(loss - correct_loss) < 1e-10, 'Problem with training-time loss'\n\nmodel.reg = 1.0\nloss, grads = model.loss(X, y)\ncorrect_loss = 26.5948426952\nassert abs(loss - correct_loss) < 1e-10, 'Problem with regularization loss'\n\nfor reg in [0.0, 0.7]:\n print 'Running numeric gradient check with reg = ', reg\n model.reg = reg\n loss, grads = model.loss(X, y)\n\n for name in sorted(grads):\n f = lambda _: model.loss(X, 
y)[0]\n grad_num = eval_numerical_gradient(f, model.params[name], verbose=False)\n print '%s relative error: %.2e' % (name, rel_error(grad_num, grads[name]))",
"Solver\nIn the previous assignment, the logic for training models was coupled to the models themselves. Following a more modular design, for this assignment we have split the logic for training models into a separate class.\nOpen the file cs231n/solver.py and read through it to familiarize yourself with the API. After doing so, use a Solver instance to train a TwoLayerNet that achieves at least 50% accuracy on the validation set.",
"model = TwoLayerNet()\nsolver = None\n\n##############################################################################\n# TODO: Use a Solver instance to train a TwoLayerNet that achieves at least #\n# 50% accuracy on the validation set. #\n##############################################################################\ndata1 = {\n 'X_train': data['X_train'],# training data\n 'y_train': data['y_train'],# training labels\n 'X_val': data['X_val'],# validation data\n 'y_val': data['y_val'] # validation labels\n }\nsolver = Solver(model, data1,\n update_rule='sgd',\n optim_config={\n 'learning_rate': 1e-3,\n },\n lr_decay=0.95,\n num_epochs=10, batch_size=100,\n print_every=100)\nsolver.train()\n##############################################################################\n# END OF YOUR CODE #\n##############################################################################\n\n# Run this cell to visualize training loss and train / val accuracy\nplt.subplot(2, 1, 1)\nplt.title('Training loss')\nplt.plot(solver.loss_history, 'o')\nplt.xlabel('Iteration')\n\nplt.subplot(2, 1, 2)\nplt.title('Accuracy')\nplt.plot(solver.train_acc_history, '-o', label='train')\nplt.plot(solver.val_acc_history, '-o', label='val')\nplt.plot([0.5] * len(solver.val_acc_history), 'k--')\nplt.xlabel('Epoch')\nplt.legend(loc='lower right')\nplt.gcf().set_size_inches(14, 12)\nplt.show(block=True)\nplt.show()",
"Multilayer network\nNext you will implement a fully-connected network with an arbitrary number of hidden layers.\nRead through the FullyConnectedNet class in the file cs231n/classifiers/fc_net.py.\nImplement the initialization, the forward pass, and the backward pass. For the moment don't worry about implementing dropout or batch normalization; we will add those features soon.\nInitial loss and gradient check\nAs a sanity check, run the following to check the initial loss and to gradient check the network both with and without regularization. Do the initial losses seem reasonable?\nFor gradient checking, you should expect to see errors around 1e-6 or less.",
"N, D, H1, H2, C = 2, 15, 20, 30, 10\nX = np.random.randn(N, D)\ny = np.random.randint(C, size=(N,))\n\nfor reg in [0, 3.14]:\n print 'Running check with reg = ', reg\n model = FullyConnectedNet([H1, H2], input_dim=D, num_classes=C,\n reg=reg, weight_scale=5e-2, dtype=np.float64)\n\n loss, grads = model.loss(X, y)\n print 'Initial loss: ', loss\n\n for name in sorted(grads):\n f = lambda _: model.loss(X, y)[0]\n grad_num = eval_numerical_gradient(f, model.params[name], verbose=False, h=1e-5)\n print '%s relative error: %.2e' % (name, rel_error(grad_num, grads[name]))",
"As another sanity check, make sure you can overfit a small dataset of 50 images. First we will try a three-layer network with 100 units in each hidden layer. You will need to tweak the learning rate and initialization scale, but you should be able to overfit and achieve 100% training accuracy within 20 epochs.",
"# TODO: Use a three-layer Net to overfit 50 training examples.\n\nnum_train = 50\nsmall_data = {\n 'X_train': data['X_train'][:num_train],\n 'y_train': data['y_train'][:num_train],\n 'X_val': data['X_val'],\n 'y_val': data['y_val'],\n}\n\nlearning_rate = 1e-3\nweight_scale = 1e-1\nmodel = FullyConnectedNet([100, 100],\n weight_scale=weight_scale, dtype=np.float64)\nsolver = Solver(model, small_data,\n print_every=10, num_epochs=20, batch_size=25,\n update_rule='sgd',\n optim_config={\n 'learning_rate': learning_rate,\n }\n )\nsolver.train()\n\nplt.plot(solver.loss_history, 'o')\nplt.title('Training loss history')\nplt.xlabel('Iteration')\nplt.ylabel('Training loss')\nplt.show()",
"Now try to use a five-layer network with 100 units on each layer to overfit 50 training examples. Again you will have to adjust the learning rate and weight initialization, but you should be able to achieve 100% training accuracy within 20 epochs.",
"# TODO: Use a five-layer Net to overfit 50 training examples.\nimport random\nnum_train = 50\nsmall_data = {\n 'X_train': data['X_train'][:num_train],\n 'y_train': data['y_train'][:num_train],\n 'X_val': data['X_val'],\n 'y_val': data['y_val'],\n}\n\nmax_count = 1 #before=30\n# find the best learning rate and weight scale by changing the uniform :)\nfor count in xrange(max_count):\n #learning_rate = 10**random.uniform(-4,0)\n #weight_scale = 10**random.uniform(-4,0)\n # best values found for 100% accuracy on training set\n learning_rate=0.000894251014039\n weight_scale=0.0736354068714\n model = FullyConnectedNet([100, 100, 100, 100],\n weight_scale=weight_scale, dtype=np.float64)\n solver = Solver(model, small_data,\n num_epochs=20, batch_size=25,\n update_rule='sgd', verbose=True,\n optim_config={\n 'learning_rate': learning_rate,\n }\n )\n #solver = Solver(model, small_data,\n # print_every=10, num_epochs=20, batch_size=25,\n # update_rule='sgd',\n # optim_config={\n # 'learning_rate': learning_rate,\n # }\n # )\n solver.train()\n #print \"lr=\" + str(learning_rate) + \",ws=\" + str(weight_scale) \\\n # + \"loss=\" + str(solver.loss_history[-1])\n plt.plot(solver.loss_history, 'o')\n plt.title('Training loss history')\n plt.xlabel('Iteration')\n plt.ylabel('Training loss')\n plt.show()\n",
"Inline question:\nDid you notice anything about the comparative difficulty of training the three-layer net vs training the five layer net?\nAnswer:\nIt took the five layer net more epochs to overfit the small data. This is ofcourse becouse the net is much bigger and we need to adjsut more parameters\nUpdate rules\nSo far we have used vanilla stochastic gradient descent (SGD) as our update rule. More sophisticated update rules can make it easier to train deep networks. We will implement a few of the most commonly used update rules and compare them to vanilla SGD.\nSGD+Momentum\nStochastic gradient descent with momentum is a widely used update rule that tends to make deep networks converge faster than vanilla stochstic gradient descent.\nOpen the file cs231n/optim.py and read the documentation at the top of the file to make sure you understand the API. Implement the SGD+momentum update rule in the function sgd_momentum and run the following to check your implementation. You should see errors less than 1e-8.",
"from cs231n.optim import sgd_momentum\n\nN, D = 4, 5\nw = np.linspace(-0.4, 0.6, num=N*D).reshape(N, D)\ndw = np.linspace(-0.6, 0.4, num=N*D).reshape(N, D)\nv = np.linspace(0.6, 0.9, num=N*D).reshape(N, D)\n\nconfig = {'learning_rate': 1e-3, 'velocity': v}\nnext_w, _ = sgd_momentum(w, dw, config=config)\n\nexpected_next_w = np.asarray([\n [ 0.1406, 0.20738947, 0.27417895, 0.34096842, 0.40775789],\n [ 0.47454737, 0.54133684, 0.60812632, 0.67491579, 0.74170526],\n [ 0.80849474, 0.87528421, 0.94207368, 1.00886316, 1.07565263],\n [ 1.14244211, 1.20923158, 1.27602105, 1.34281053, 1.4096 ]])\nexpected_velocity = np.asarray([\n [ 0.5406, 0.55475789, 0.56891579, 0.58307368, 0.59723158],\n [ 0.61138947, 0.62554737, 0.63970526, 0.65386316, 0.66802105],\n [ 0.68217895, 0.69633684, 0.71049474, 0.72465263, 0.73881053],\n [ 0.75296842, 0.76712632, 0.78128421, 0.79544211, 0.8096 ]])\n\nprint 'next_w error: ', rel_error(next_w, expected_next_w)\nprint 'velocity error: ', rel_error(expected_velocity, config['velocity'])",
"Once you have done so, run the following to train a six-layer network with both SGD and SGD+momentum. You should see the SGD+momentum update rule converge faster.",
"num_train = 4000\nsmall_data = {\n 'X_train': data['X_train'][:num_train],\n 'y_train': data['y_train'][:num_train],\n 'X_val': data['X_val'],\n 'y_val': data['y_val'],\n}\n\nsolvers = {}\n\nfor update_rule in ['sgd', 'sgd_momentum']:\n print 'running with ', update_rule\n model = FullyConnectedNet([100, 100, 100, 100, 100], weight_scale=5e-2)\n\n solver = Solver(model, small_data,\n num_epochs=5, batch_size=100,\n update_rule=update_rule,\n optim_config={\n 'learning_rate': 1e-2,\n },\n verbose=True)\n solvers[update_rule] = solver\n solver.train()\n print \n\nplt.subplot(3, 1, 1)\nplt.title('Training loss')\nplt.xlabel('Iteration')\n\nplt.subplot(3, 1, 2)\nplt.title('Training accuracy')\nplt.xlabel('Epoch')\n\nplt.subplot(3, 1, 3)\nplt.title('Validation accuracy')\nplt.xlabel('Epoch')\n\nfor update_rule, solver in solvers.iteritems():\n plt.subplot(3, 1, 1)\n plt.plot(solver.loss_history, 'o', label=update_rule)\n \n plt.subplot(3, 1, 2)\n plt.plot(solver.train_acc_history, '-o', label=update_rule)\n\n plt.subplot(3, 1, 3)\n plt.plot(solver.val_acc_history, '-o', label=update_rule)\n\n\nfor i in [1, 2, 3]:\n plt.subplot(3, 1, i)\n plt.legend(loc='upper center', ncol=4)\nplt.gcf().set_size_inches(12, 9)\nplt.show()\n\n#plt.show()\n",
"RMSProp and Adam\nRMSProp [1] and Adam [2] are update rules that set per-parameter learning rates by using a running average of the second moments of gradients.\nIn the file cs231n/optim.py, implement the RMSProp update rule in the rmsprop function and implement the Adam update rule in the adam function, and check your implementations using the tests below.\n[1] Tijmen Tieleman and Geoffrey Hinton. \"Lecture 6.5-rmsprop: Divide the gradient by a running average of its recent magnitude.\" COURSERA: Neural Networks for Machine Learning 4 (2012).\n[2] Diederik Kingma and Jimmy Ba, \"Adam: A Method for Stochastic Optimization\", ICLR 2015.",
"# Test RMSProp implementation; you should see errors less than 1e-7\nfrom cs231n.optim import rmsprop\n\nN, D = 4, 5\nw = np.linspace(-0.4, 0.6, num=N*D).reshape(N, D)\ndw = np.linspace(-0.6, 0.4, num=N*D).reshape(N, D)\ncache = np.linspace(0.6, 0.9, num=N*D).reshape(N, D)\n\nconfig = {'learning_rate': 1e-2, 'cache': cache}\nnext_w, _ = rmsprop(w, dw, config=config)\n\nexpected_next_w = np.asarray([\n [-0.39223849, -0.34037513, -0.28849239, -0.23659121, -0.18467247],\n [-0.132737, -0.08078555, -0.02881884, 0.02316247, 0.07515774],\n [ 0.12716641, 0.17918792, 0.23122175, 0.28326742, 0.33532447],\n [ 0.38739248, 0.43947102, 0.49155973, 0.54365823, 0.59576619]])\nexpected_cache = np.asarray([\n [ 0.5976, 0.6126277, 0.6277108, 0.64284931, 0.65804321],\n [ 0.67329252, 0.68859723, 0.70395734, 0.71937285, 0.73484377],\n [ 0.75037008, 0.7659518, 0.78158892, 0.79728144, 0.81302936],\n [ 0.82883269, 0.84469141, 0.86060554, 0.87657507, 0.8926 ]])\n\nprint 'next_w error: ', rel_error(expected_next_w, next_w)\nprint 'cache error: ', rel_error(expected_cache, config['cache'])\n\n# Test Adam implementation; you should see errors around 1e-7 or less\nfrom cs231n.optim import adam\n\nN, D = 4, 5\nw = np.linspace(-0.4, 0.6, num=N*D).reshape(N, D)\ndw = np.linspace(-0.6, 0.4, num=N*D).reshape(N, D)\nm = np.linspace(0.6, 0.9, num=N*D).reshape(N, D)\nv = np.linspace(0.7, 0.5, num=N*D).reshape(N, D)\n\nconfig = {'learning_rate': 1e-2, 'm': m, 'v': v, 't': 5}\nnext_w, _ = adam(w, dw, config=config)\n\nexpected_next_w = np.asarray([\n [-0.40094747, -0.34836187, -0.29577703, -0.24319299, -0.19060977],\n [-0.1380274, -0.08544591, -0.03286534, 0.01971428, 0.0722929],\n [ 0.1248705, 0.17744702, 0.23002243, 0.28259667, 0.33516969],\n [ 0.38774145, 0.44031188, 0.49288093, 0.54544852, 0.59801459]])\nexpected_v = np.asarray([\n [ 0.69966, 0.68908382, 0.67851319, 0.66794809, 0.65738853,],\n [ 0.64683452, 0.63628604, 0.6257431, 0.61520571, 0.60467385,],\n [ 0.59414753, 0.58362676, 0.57311152, 
0.56260183, 0.55209767,],\n [ 0.54159906, 0.53110598, 0.52061845, 0.51013645, 0.49966, ]])\nexpected_m = np.asarray([\n [ 0.48, 0.49947368, 0.51894737, 0.53842105, 0.55789474],\n [ 0.57736842, 0.59684211, 0.61631579, 0.63578947, 0.65526316],\n [ 0.67473684, 0.69421053, 0.71368421, 0.73315789, 0.75263158],\n [ 0.77210526, 0.79157895, 0.81105263, 0.83052632, 0.85 ]])\n\nprint 'next_w error: ', rel_error(expected_next_w, next_w)\nprint 'v error: ', rel_error(expected_v, config['v'])\nprint 'm error: ', rel_error(expected_m, config['m'])",
"Once you have debugged your RMSProp and Adam implementations, run the following to train a pair of deep networks using these new update rules:",
"learning_rates = {'rmsprop': 1e-4, 'adam': 1e-3}\nfor update_rule in ['adam', 'rmsprop']:\n print 'running with ', update_rule\n model = FullyConnectedNet([100, 100, 100, 100, 100], weight_scale=5e-2)\n\n solver = Solver(model, small_data,\n num_epochs=5, batch_size=100,\n update_rule=update_rule,\n optim_config={\n 'learning_rate': learning_rates[update_rule]\n },\n verbose=True)\n solvers[update_rule] = solver\n solver.train()\n print\nfig = plt.figure()\nplt.subplot(3, 1, 1)\nplt.title('Training loss')\nplt.xlabel('Iteration')\n\nplt.subplot(3, 1, 2)\nplt.title('Training accuracy')\nplt.xlabel('Epoch')\n\nplt.subplot(3, 1, 3)\nplt.title('Validation accuracy')\nplt.xlabel('Epoch')\n\nfor update_rule, solver in solvers.iteritems():\n plt.subplot(3, 1, 1)\n plt.plot(solver.loss_history, 'o', label=update_rule)\n \n plt.subplot(3, 1, 2)\n plt.plot(solver.train_acc_history, '-o', label=update_rule)\n\n plt.subplot(3, 1, 3)\n plt.plot(solver.val_acc_history, '-o', label=update_rule)\n \nfor i in [1, 2, 3]:\n plt.subplot(3, 1, i)\n plt.legend(loc='upper center', ncol=4)\n\n#plt.gcf().set_size_inches(15,15)\nplt.show(block=True)\n#fig.savefig('vs.png')",
"Train a good model!\nTrain the best fully-connected model that you can on CIFAR-10, storing your best model in the best_model variable. We require you to get at least 50% accuracy on the validation set using a fully-connected net.\nIf you are careful it should be possible to get accuracies above 55%, but we don't require it for this part and won't assign extra credit for doing so. Later in the assignment we will ask you to train the best convolutional network that you can on CIFAR-10, and we would prefer that you spend your effort working on convolutional nets rather than fully-connected nets.\nYou might find it useful to complete the BatchNormalization.ipynb and Dropout.ipynb notebooks before completing this part, since those techniques can help you train powerful models.",
"best_model = None\n################################################################################\n# TODO: Train the best FullyConnectedNet that you can on CIFAR-10. You might #\n# find batch normalization and dropout useful. Store your best model in the #\n# best_model variable. #\n################################################################################\nmax_count = 1 #before=30\n# find the best learning rate and weight scale by changing the uniform :)\nfor count in xrange(max_count):\n    #lr = 10**random.uniform(-5,0)\n    #ws = 10**random.uniform(-5,0)\n    #print \"lr=\" + str(lr) + \",ws=\" + str(ws)\n    lr = 1e-4\n    ws = 5e-2\n    best_model = FullyConnectedNet([100, 100, 100, 100], weight_scale=ws, use_batchnorm=True)\n    solver = Solver(best_model, data,\n                    num_epochs=20, batch_size=500,\n                    update_rule='adam',\n                    optim_config={\n                        'learning_rate': lr,\n                    },\n                    verbose=True)\n    solver.train()\n\n    y_test_pred = np.argmax(best_model.loss(data['X_test']), axis=1)\n    y_val_pred = np.argmax(best_model.loss(data['X_val']), axis=1)\n    print 'Val_acc: ', (y_val_pred == data['y_val']).mean()\n    print 'Test_acc: ', (y_test_pred == data['y_test']).mean()\n    print '------------'\n################################################################################\n# END OF YOUR CODE #\n################################################################################",
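The commented-out lines in the cell above sample the learning rate and weight scale log-uniformly; isolated, the pattern looks like this (pure Python; the exponent range is illustrative):

```python
import random

random.seed(0)
# Uniform in the exponent means every decade (1e-5..1e-4, 1e-4..1e-3, ...)
# is equally likely -- the usual way to search over learning rates.
lrs = [10 ** random.uniform(-5, -1) for _ in range(5)]
```

Sampling `random.uniform(-5, -1)` directly would instead concentrate almost all draws near the largest decade, which is why the search is done in log space.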
"Test your model\nRun your best model on the validation and test sets. You should achieve above 50% accuracy on the validation set.",
"y_test_pred = np.argmax(best_model.loss(data['X_test']), axis=1)\ny_val_pred = np.argmax(best_model.loss(data['X_val']), axis=1)\nprint 'Validation set accuracy: ', (y_val_pred == data['y_val']).mean()\nprint 'Test set accuracy: ', (y_test_pred == data['y_test']).mean()"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
israelzuniga/spark_streaming_class
|
spark_streaming_class/Spark_Streaming.ipynb
|
mit
|
[
"Spark Streaming\nSpark Streaming is an extension of the core Spark API that enables scalable, high-throughput, fault-tolerant stream processing of live data streams. Data can be ingested from many sources such as Kafka, Flume, Kinesis, or TCP sockets, and can be processed with algorithms expressed using high-level functions like map, reduce, join and window. \nFinally, processed data can be pushed out to filesystems, databases, and live dashboards. You can also apply Spark's Machine Learning and Graph Processing algorithms to data streams.\n\nInternally, Spark Streaming receives data from live streams and divides it into batches, which are processed by the Spark engine to generate the final stream of results, also in batches.\n\nSpark Streaming provides a high-level abstraction called a discretized stream, or DStream, which represents a continuous stream of data. DStreams can be created from input streams such as Kafka, Flume and Kinesis, or by applying high-level operations on other DStreams. Internally, a DStream is represented as a sequence of RDDs.\nExample\nWe import StreamingContext, the entry point for all Spark Streaming functionality. We create an instance of the object with two execution threads and a batch interval of 10 seconds.",
"from pyspark import SparkContext\n# https://spark.apache.org/docs/latest/api/python/pyspark.streaming.html#pyspark.streaming.StreamingContext\nfrom pyspark.streaming import StreamingContext\n\nsc = SparkContext(\"local[2]\", \"NetworkWordCount\")\nssc = StreamingContext(sc, 10)",
"Using the context above, we create a DStream that represents the stream of data from a TCP source (socket), specifying a hostname and a port.",
"lines = ssc.socketTextStream(\"localhost\", 9999)",
"The lines DStream represents the stream of data that will be received from another server. Each record in this DStream is a line of text. After receiving a record, we split it into words using the spaces between them.",
"words = lines.flatMap(lambda line: line.split(\" \"))",
"flatMap is a one-to-many DStream transformation that creates a new DStream by generating multiple records for each record in the source DStream. In this case, each line is split to obtain multiple words, represented in the words DStream. Next, we are going to count and print those words.",
"pairs = words.map(lambda word: (word, 1))\nwordCounts = pairs.reduceByKey(lambda x, y: x + y)\n\n\nwordCounts.pprint()",
"The words DStream is transformed (map, one-to-one) into a DStream of key-value pairs of the form (word, 1), which is then reduced to obtain the frequency of each word in every batch of data. Finally, wordCounts.pprint() prints the counts generated in that interval.\nKeep in mind that even though the lines of code above have already been executed, Spark Streaming only sets up the computation; no real data processing has taken place yet.\nTo start processing data, once the transformations have been set up, we call the functions:",
"ssc.start()\nssc.awaitTermination()\n\nssc.stop() # 😉",
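To see what the flatMap/map/reduceByKey pipeline computes on a single micro-batch, here is a plain-Python simulation (no Spark required; the batch contents are made up for illustration):

```python
from collections import Counter

# One simulated micro-batch of lines, as the socket DStream would deliver it.
batch = ["to be or not to be", "be quick"]

# flatMap: each line yields many words
words = [w for line in batch for w in line.split(" ")]

# map to (word, 1) pairs, then reduceByKey by summing -- Counter does both
counts = Counter(words)
print(counts)
```

In Spark the same computation runs once per batch interval, on the lines that arrived during that interval.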
"Deploying the Spark program\nbash\n$ /usr/local/spark/bin/spark-submit network_wordcount.py localhost 9999\nGo to http://localhost:4040 to see Spark's monitoring interface\nFull list of DStream transformations: https://spark.apache.org/docs/latest/streaming-programming-guide.html#transformations-on-dstreams\nPoints to remember:\n- Once a context has been started, no new streaming computations can be added to it and the existing ones cannot be modified.\n- Once a context has been stopped, it cannot be restarted.\n- Only one StreamingContext can be active per JVM at a time.\n- stop() on a StreamingContext also stops the SparkContext. To stop only the StreamingContext, set the stopSparkContext parameter of stop() to False.\n- A SparkContext instance can be reused to create multiple StreamingContext objects, as long as the previous streaming context has been stopped without stopping the SparkContext"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
jupyter-widgets/ipywidgets
|
docs/source/examples/Using Interact.ipynb
|
bsd-3-clause
|
[
"Using Interact\nThe interact function (ipywidgets.interact) automatically creates user interface (UI) controls for exploring code and data interactively. It is the easiest way to get started using IPython's widgets.",
"from __future__ import print_function\nfrom ipywidgets import interact, interactive, fixed, interact_manual\nimport ipywidgets as widgets",
"Basic interact\nAt the most basic level, interact autogenerates UI controls for function arguments, and then calls the function with those arguments when you manipulate the controls interactively. To use interact, you need to define a function that you want to explore. Here is a function that returns its only argument x.",
"def f(x):\n return x",
"When you pass this function as the first argument to interact along with an integer keyword argument (x=10), a slider is generated and bound to the function parameter.",
"interact(f, x=10);",
"When you move the slider, the function is called, and its return value is printed.\nIf you pass True or False, interact will generate a checkbox:",
"interact(f, x=True);",
"If you pass a string, interact will generate a text box.",
"interact(f, x='Hi there!');",
"interact can also be used as a decorator. This allows you to define a function and interact with it in a single shot. As this example shows, interact also works with functions that have multiple arguments.",
"@interact(x=True, y=1.0)\ndef g(x, y):\n return (x, y)",
"Fixing arguments using fixed\nThere are times when you may want to explore a function using interact, but fix one or more of its arguments to specific values. This can be accomplished by wrapping values with the fixed function.",
"def h(p, q):\n return (p, q)",
"When we call interact, we pass fixed(20) for q to hold it fixed at a value of 20.",
"interact(h, p=5, q=fixed(20));",
"Notice that a slider is only produced for p as the value of q is fixed.\nWidget abbreviations\nWhen you pass an integer-valued keyword argument of 10 (x=10) to interact, it generates an integer-valued slider control with a range of [-10,+3*10]. In this case, 10 is an abbreviation for an actual slider widget:\npython\nIntSlider(min=-10, max=30, step=1, value=10)\nIn fact, we can get the same result if we pass this IntSlider as the keyword argument for x:",
"interact(f, x=widgets.IntSlider(min=-10, max=30, step=1, value=10));",
"The following table gives an overview of different argument types, and how they map to interactive controls:\n<table class=\"table table-condensed table-bordered\">\n <tr><td><strong>Keyword argument</strong></td><td><strong>Widget</strong></td></tr> \n <tr><td>`True` or `False`</td><td>Checkbox</td></tr> \n <tr><td>`'Hi there'`</td><td>Text</td></tr>\n <tr><td>`value` or `(min,max)` or `(min,max,step)` if integers are passed</td><td>IntSlider</td></tr>\n <tr><td>`value` or `(min,max)` or `(min,max,step)` if floats are passed</td><td>FloatSlider</td></tr>\n <tr><td>`['orange','apple']` or `[('one', 1), ('two', 2)]`</td><td>Dropdown</td></tr>\n</table>\nNote that a dropdown is used if a list or a list of tuples is given (signifying discrete choices), and a slider is used if a tuple is given (signifying a range).\nYou have seen how the checkbox and text widgets work above. Here, more details about the different abbreviations for sliders and dropdowns are given.\nIf a 2-tuple of integers is passed (min, max), an integer-valued slider is produced with those minimum and maximum values (inclusively). In this case, the default step size of 1 is used.",
"interact(f, x=(0,4));",
"If a 3-tuple of integers is passed (min,max,step), the step size can also be set.",
"interact(f, x=(0,8,2));",
"A float-valued slider is produced if any of the elements of the tuples are floats. Here the minimum is 0.0, the maximum is 10.0 and step size is 0.1 (the default).",
"interact(f, x=(0.0,10.0));",
"The step size can be changed by passing a third element in the tuple.",
"interact(f, x=(0.0,10.0,0.01));",
"For both integer and float-valued sliders, you can pick the initial value of the widget by passing a default keyword argument to the underlying Python function. Here we set the initial value of a float slider to 5.5.",
"@interact(x=(0.0,20.0,0.5))\ndef h(x=5.5):\n return x",
"Dropdown menus are constructed by passing a list of strings. In this case, the strings are both used as the names in the dropdown menu UI and passed to the underlying Python function.",
"interact(f, x=['apples','oranges']);",
"If you want a dropdown menu that passes non-string values to the Python function, you can pass a list of ('label', value) pairs. The first items are the names in the dropdown menu UI and the second items are values that are the arguments passed to the underlying Python function.",
"interact(f, x=[('one', 10), ('two', 20)]);",
"Finally, if you need more granular control than that afforded by the abbreviation, you can pass a ValueWidget instance as the argument. A ValueWidget is a widget that aims to control a single value. Most of the widgets bundled with ipywidgets inherit from ValueWidget. For more information, see this section on widget types.",
"interact(f, x=widgets.Combobox(options=[\"Chicago\", \"New York\", \"Washington\"], value=\"Chicago\"));",
"interactive\nIn addition to interact, IPython provides another function, interactive, that is useful when you want to reuse the widgets that are produced or access the data that is bound to the UI controls.\nNote that unlike interact, the return value of the function will not be displayed automatically, but you can display a value inside the function with IPython.display.display.\nHere is a function that displays the sum of its two arguments and returns the sum. The display line may be omitted if you don't want to show the result of the function.",
"from IPython.display import display\ndef f(a, b):\n display(a + b)\n return a+b",
"Unlike interact, interactive returns a Widget instance rather than immediately displaying the widget.",
"w = interactive(f, a=10, b=20)",
"The widget is an interactive, a subclass of VBox, which is a container for other widgets.",
"type(w)",
"The children of the interactive are two integer-valued sliders and an output widget, produced by the widget abbreviations above.",
"w.children",
"To actually display the widgets, you can use IPython's display function.",
"display(w)",
"At this point, the UI controls work just like they would if interact had been used. You can manipulate them interactively and the function will be called. However, the widget instance returned by interactive also gives you access to the current keyword arguments and return value of the underlying Python function. \nHere are the current keyword arguments. If you rerun this cell after manipulating the sliders, the values will have changed.",
"w.kwargs",
"Here is the current return value of the function.",
"w.result",
"Disabling continuous updates\nWhen interacting with long running functions, realtime feedback is a burden instead of being helpful. See the following example:",
"def slow_function(i):\n print(int(i),list(x for x in range(int(i)) if \n str(x)==str(x)[::-1] and \n str(x**2)==str(x**2)[::-1]))\n return\n\n%%time\nslow_function(1e6)",
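The predicate buried inside slow_function keeps numbers that are palindromes whose squares are also palindromes; pulled out as a standalone function for clarity (an illustrative rewrite, not part of the original notebook):

```python
def double_palindrome(x):
    # x and x**2 must both read the same forwards and backwards
    s, sq = str(x), str(x * x)
    return s == s[::-1] and sq == sq[::-1]

print([x for x in range(30) if double_palindrome(x)])  # [0, 1, 2, 3, 11, 22]
```

The notebook's version is slow simply because it scans every integer up to `i`, which is what makes it a good stress test for continuous widget updates.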
"Notice that the output is updated even while dragging the mouse on the slider. This is not useful for long running functions due to lagging:",
"from ipywidgets import FloatSlider\ninteract(slow_function,i=FloatSlider(min=1e5, max=1e7, step=1e5));",
"There are two ways to mitigate this. You can either only execute on demand, or restrict execution to mouse release events.\ninteract_manual\nThe interact_manual function provides a variant of interaction that allows you to restrict execution so it is only done on demand. A button is added to the interact controls that allows you to trigger an execute event.",
"interact_manual(slow_function,i=FloatSlider(min=1e5, max=1e7, step=1e5));",
"You can do the same thing with interactive by using a dict as the second argument, as shown below.",
"slow = interactive(slow_function, {'manual': True}, i=widgets.FloatSlider(min=1e4, max=1e6, step=1e4))\nslow",
"continuous_update\nIf you are using slider widgets, you can set the continuous_update kwarg to False. continuous_update is a kwarg of slider widgets that restricts executions to mouse release events.",
"interact(slow_function,i=FloatSlider(min=1e5, max=1e7, step=1e5, continuous_update=False));",
"More control over the user interface: interactive_output\ninteractive_output provides additional flexibility: you can control how the UI elements are laid out.\nUnlike interact, interactive, and interact_manual, interactive_output does not generate a user interface for the widgets. This is powerful, because it means you can create a widget, put it in a box, and then pass the widget to interactive_output, and have control over the widget and its layout.",
"a = widgets.IntSlider()\nb = widgets.IntSlider()\nc = widgets.IntSlider()\nui = widgets.HBox([a, b, c])\ndef f(a, b, c):\n print((a, b, c))\n\nout = widgets.interactive_output(f, {'a': a, 'b': b, 'c': c})\n\ndisplay(ui, out)",
"Arguments that are dependent on each other\nArguments that are dependent on each other can be expressed manually using observe. See the following example, where one variable is used to describe the bounds of another. For more information, please see the widget events example notebook.",
"x_widget = FloatSlider(min=0.0, max=10.0, step=0.05)\ny_widget = FloatSlider(min=0.5, max=10.0, step=0.05, value=5.0)\n\ndef update_x_range(*args):\n x_widget.max = 2.0 * y_widget.value\ny_widget.observe(update_x_range, 'value')\n\ndef printer(x, y):\n print(x, y)\ninteract(printer,x=x_widget, y=y_widget);",
"Flickering and jumping output\nOn occasion, you may notice interact output flickering and jumping, causing the notebook scroll position to change as the output is updated. The interactive control has a layout, so we can set its height to an appropriate value (currently chosen manually) so that it will not change size as it is updated.",
"%matplotlib inline\nfrom ipywidgets import interactive\nimport matplotlib.pyplot as plt\nimport numpy as np\n\ndef f(m, b):\n plt.figure(2)\n x = np.linspace(-10, 10, num=1000)\n plt.plot(x, m * x + b)\n plt.ylim(-5, 5)\n plt.show()\n\ninteractive_plot = interactive(f, m=(-2.0, 2.0), b=(-3, 3, 0.5))\noutput = interactive_plot.children[-1]\noutput.layout.height = '350px'\ninteractive_plot",
"Interact with multiple functions\nYou may want to have a single widget interact with multiple functions. This is possible by simply linking the widget to both functions using the interactive_output() function. The order of execution of the functions will be the order they were linked to the widget.",
"import ipywidgets as widgets\nfrom IPython.display import display\n\na = widgets.IntSlider(value=5, min=0, max=10)\n\ndef f1(a):\n display(a)\n \ndef f2(a):\n display(a * 2)\n \nout1 = widgets.interactive_output(f1, {'a': a})\nout2 = widgets.interactive_output(f2, {'a': a})\n\ndisplay(a)\ndisplay(out1)\ndisplay(out2)"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
eflautt/ga-data-science
|
AdmissionsProject/Part3/starter-code/Flautt-project3-submission.ipynb
|
mit
|
[
"Project 3\nIn this project, you will perform a logistic regression on the admissions data we've been working with in projects 1 and 2.",
"%matplotlib inline\nimport matplotlib.pyplot as plt\nimport pandas as pd\nimport statsmodels.api as sm\nimport pylab as pl\nimport numpy as np\n\ndf_raw = pd.read_csv(\"../assets/admissions.csv\")\ndf = df_raw.dropna() \nprint df_raw.count() # showing 'old' df total count\nprint df.count() # showing 'new' df total count \nprint df.head()",
"Part 1. Frequency Tables\n1. Let's create a frequency table of our variables",
"# frequency table for prestige and whether or not someone was admitted\nprint df.columns # finding columns for dataframe\npd.crosstab(df['admit'],df['prestige']) # displaying var presitge over var admit in a pandas crosstab function",
"Part 2. Return of dummy variables\n2.1 Create class or dummy variables for prestige",
"# creating dummy variables for var 'prestige' using pandas get_dummies function\ndummy_ranks = pd.get_dummies(df['prestige'],prefix = 'prestige')\nprint dummy_ranks.head(5)\n# still need to append these dummy values to df using .join function; see part 3",
"2.2 When modeling our class variables, how many do we need?\nAnswer: When modeling dummy or class variables, n-1 variables are required. We take prestige value '4' as the base case, since it is the lowest ranked. For this model we will therefore need three (3) class variables.\nPart 3. Hand calculating odds ratios\nDevelop your intuition about expected outcomes by hand calculating odds ratios.",
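The n-1 rule from question 2.2 can be sketched in plain Python: with k = 4 prestige levels, only k-1 indicator columns are needed, because the base case (prestige 4 here) is encoded as all zeros (illustrative helper, not part of the assignment code):

```python
def prestige_dummies(value, levels=(1, 2, 3)):
    # prestige 4 is the implied base case: it maps to all zeros
    return [1 if value == lv else 0 for lv in levels]

print(prestige_dummies(2))  # [0, 1, 0]
print(prestige_dummies(4))  # [0, 0, 0]
```

Including all four indicators plus an intercept would make the design matrix perfectly collinear, which is why one level is always dropped.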
"cols_to_keep = ['admit', 'gre', 'gpa']\nhandCalc = df[cols_to_keep].join(dummy_ranks.ix[:, 'prestige_1':])\nprint handCalc.head()\n\n# crosstab prestige 1 admission \n# frequency table cutting prestige and whether or not someone was admitted\nprint handCalc.columns\npd.crosstab(handCalc['prestige_1.0'],handCalc['admit'])",
"3.1 Use the cross tab above to calculate the odds of being admitted to grad school if you attended a #1 ranked college",
"# odds of admission for applicants from a #1 ranked college:\n# 33 admitted vs. 28 rejected (from the crosstab above)\nodds = float(33.0/28.0)\nprint odds",
"3.2 Now calculate the odds of admission if you did not attend a #1 ranked college",
"# odds of admission for applicants who did not attend a #1 ranked college:\n# 93 admitted vs. 243 rejected (from the crosstab above)\nodds = float(93.0/243.0)\nprint odds",
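Putting the two odds together, the odds ratio can be hand-checked in plain Python (the counts are read off the prestige_1 crosstab above: 33 admitted / 28 rejected for prestige-1 applicants, 93 admitted / 243 rejected for everyone else):

```python
odds_p1 = 33.0 / 28.0      # odds of admission, prestige-1 applicants
odds_rest = 93.0 / 243.0   # odds of admission, everyone else
odds_ratio = odds_p1 / odds_rest
print(round(odds_ratio, 2))  # ~3.08
```

An odds ratio of about 3 means attending a #1 ranked college roughly triples the odds of admission relative to everyone else.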
"3.3 Calculate the odds ratio\nThe odds ratio is (33/28) / (93/243) ≈ 3.08.\n3.4 Write this finding in a sentence:\nAnswer: The odds of being admitted to UCLA are roughly three times higher for applicants who attended a #1 ranked college than for those who did not. \n3.5 Print the cross tab for prestige_4",
"pd.crosstab(handCalc['prestige_4.0'],handCalc['admit'])",
"3.6 Calculate the OR\n3.7 Write this finding in a sentence\nAnswer: For every 12 applicants from a prestige-4 college who get into UCLA, 114 applicants from higher-ranked colleges are admitted; attending a prestige-4 college is associated with substantially lower odds of admission.\nPart 4. Analysis",
"# create a clean data frame for the regression\ncols_to_keep = ['admit', 'gre', 'gpa']\ndata = df[cols_to_keep].join(dummy_ranks.ix[:, 'prestige_2':])\nprint data.head()",
"We're going to add a constant term for our Logistic Regression. The statsmodels function we're going to be using requires that intercepts/constants are specified explicitly.",
"# manually add the intercept\ndata['intercept'] = 1",
"4.1 Set the covariates to a variable called train_cols",
"train_cols = ['prestige_2.0','prestige_3.0','prestige_4.0','gre', 'gpa']",
"4.2 Fit the model",
"data_predictors = data[train_cols]\nprint data_predictors.head(2)\nlogit = sm.Logit(data['admit'], data_predictors)\nresults = logit.fit()",
"4.3 Print the summary results",
"results.summary()",
"4.4 Calculate the odds ratios of the coeffiencents and their 95% CI intervals\nhint 1: np.exp(X)\nhint 2: conf['OR'] = params\n conf.columns = ['2.5%', '97.5%', 'OR']",
"# use the hinted pattern: exponentiate the coefficients and their 95% CI\n# bounds to convert log-odds into odds ratios\nparams = results.params\nconf = results.conf_int()\nconf['OR'] = params\nconf.columns = ['2.5%', '97.5%', 'OR']\nprint np.exp(conf)",
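To connect a single coefficient to an interpretable quantity: exponentiating a log-odds coefficient gives an odds ratio, and OR/(1+OR) gives the implied probability at baseline odds of 1. Using the prestige_2 coefficient reported in the summary above:

```python
import math

coef = -0.9562                             # prestige_2 log-odds coefficient (4.3)
odds_ratio = math.exp(coef)                # ~0.38
implied_p = odds_ratio / (1 + odds_ratio)  # ~0.28
print(round(odds_ratio, 3), round(implied_p, 3))
```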
"4.5 Interpret the OR of Prestige_2\nAnswer: Attending a prestige-2 college multiplies the odds of admission by about 0.38, which corresponds to an implied probability of about 28 percent (p = OR/(1+OR) = 0.38/1.38). This is statistically significant because the 95% CI for the coefficient does not include 0 (equivalently, the CI for the OR does not include 1).\n4.6 Interpret the OR of GPA\nAnswer: GPA decreases the odds in this fit, but we cannot say the effect is statistically significant because the CI for the coefficient encompasses 0.\nPart 5: Predicted probabilities\nAs a way of evaluating our classifier, we're going to recreate the dataset with every logical combination of input values. This will allow us to see how the predicted probability of admission increases/decreases across different variables. First we're going to generate the combinations using a helper function called cartesian (defined in the next cell).\nWe're going to use np.linspace to create a range of values for \"gre\" and \"gpa\". This creates a range of linearly spaced values from a specified min and maximum value--in our case just the min/max observed values.",
"def cartesian(arrays, out=None):\n \"\"\"\n Generate a cartesian product of input arrays.\n Parameters\n ----------\n arrays : list of array-like\n 1-D arrays to form the cartesian product of.\n out : ndarray\n Array to place the cartesian product in.\n Returns\n -------\n out : ndarray\n 2-D array of shape (M, len(arrays)) containing cartesian products\n formed of input arrays.\n Examples\n --------\n >>> cartesian(([1, 2, 3], [4, 5], [6, 7]))\n array([[1, 4, 6],\n [1, 4, 7],\n [1, 5, 6],\n [1, 5, 7],\n [2, 4, 6],\n [2, 4, 7],\n [2, 5, 6],\n [2, 5, 7],\n [3, 4, 6],\n [3, 4, 7],\n [3, 5, 6],\n [3, 5, 7]])\n \"\"\"\n\n arrays = [np.asarray(x) for x in arrays]\n dtype = arrays[0].dtype\n\n n = np.prod([x.size for x in arrays])\n if out is None:\n out = np.zeros([n, len(arrays)], dtype=dtype)\n\n m = n / arrays[0].size\n out[:,0] = np.repeat(arrays[0], m)\n if arrays[1:]:\n cartesian(arrays[1:], out=out[0:m,1:])\n for j in xrange(1, arrays[0].size):\n out[j*m:(j+1)*m,1:] = out[0:m,1:]\n return out\n\n# instead of generating all possible values of GRE and GPA, we're going\n# to use an evenly spaced range of 10 values from the min to the max \ngres = np.linspace(data['gre'].min(), data['gre'].max(), 10)\nprint gres\n# array([ 220. , 284.44444444, 348.88888889, 413.33333333,\n# 477.77777778, 542.22222222, 606.66666667, 671.11111111,\n# 735.55555556, 800. ])\ngpas = np.linspace(data['gpa'].min(), data['gpa'].max(), 10)\nprint gpas\n# array([ 2.26 , 2.45333333, 2.64666667, 2.84 , 3.03333333,\n# 3.22666667, 3.42 , 3.61333333, 3.80666667, 4. ])\n\n\n# enumerate all possibilities\ncombos = pd.DataFrame(cartesian([gres, gpas, [1, 2, 3, 4], [1.]]))",
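The cartesian helper above is equivalent to the standard library's itertools.product followed by stacking into an array; a quick stdlib check of the ordering, using the small inputs from the docstring:

```python
from itertools import product

combos_small = list(product([1, 2, 3], [4, 5], [6, 7]))
print(combos_small[:3])   # [(1, 4, 6), (1, 4, 7), (1, 5, 6)]
print(len(combos_small))  # 12 = 3 * 2 * 2
```

Both iterate the last factor fastest, which is why the docstring output above starts with (1, 4, 6), (1, 4, 7), (1, 5, 6).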
"5.1 Recreate the dummy variables",
"# the enumerated DataFrame has integer column labels; name them first\ncombos.columns = ['gre', 'gpa', 'prestige', 'intercept']\n# recreate the dummy variables, this time from the enumerated prestige column\ndummy_ranks = pd.get_dummies(combos['prestige'], prefix='prestige')\n# keep only what we need for making predictions\ntraining_data = combos[['gre', 'gpa', 'intercept']].join(dummy_ranks.ix[:, 'prestige_2':])\ntraining_data.head(2)",
"5.2 Make predictions on the enumerated dataset",
"# make predictions on the enumerated dataset using the model fitted in Part 4\ntraining_data['admit_pred'] = results.predict(training_data[train_cols])\nprint training_data.head()",
"5.3 Interpret findings for the last 4 observations\nAnswer: \nBonus\nPlot the probability of being admitted into graduate school, stratified by GPA and GRE score."
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
HazyResearch/snorkel
|
test/learning/tensorflow/test_TF_notebook.ipynb
|
apache-2.0
|
[
"Testing TFNoiseAwareModel\nWe'll start by testing the TextRNN model on a categorical problem from tutorials/crowdsourcing. In particular we'll test for (a) basic performance and (b) proper construction / re-construction of the TF computation graph, both (i) across repeated notebook calls and (ii) with GridSearch in particular.",
"%load_ext autoreload\n%autoreload 2\n%matplotlib inline\n\nimport os\nos.environ['SNORKELDB'] = 'sqlite:///{0}{1}crowdsourcing.db'.format(os.getcwd(), os.sep)\n\nfrom snorkel import SnorkelSession\nsession = SnorkelSession()",
"Load candidates and training marginals",
"from snorkel.models import candidate_subclass\nfrom snorkel.contrib.models.text import RawText\nTweet = candidate_subclass('Tweet', ['tweet'], cardinality=5)\ntrain_tweets = session.query(Tweet).filter(Tweet.split == 0).order_by(Tweet.id).all()\nlen(train_tweets)\n\nfrom snorkel.annotations import load_marginals\ntrain_marginals = load_marginals(session, train_tweets, split=0)\ntrain_marginals.shape",
"Train LogisticRegression",
"# Simple unigram featurizer\ndef get_unigram_tweet_features(c):\n for w in c.tweet.text.split():\n yield w, 1\n\n# Construct feature matrix\nfrom snorkel.annotations import FeatureAnnotator\nfeaturizer = FeatureAnnotator(f=get_unigram_tweet_features)\n\n%time F_train = featurizer.apply(split=0)\nF_train\n\n%time F_test = featurizer.apply_existing(split=1)\nF_test\n\nfrom snorkel.learning.tensorflow import LogisticRegression\n\nmodel = LogisticRegression(cardinality=Tweet.cardinality)\nmodel.train(F_train.todense(), train_marginals)",
"Train SparseLogisticRegression\nNote: Testing doesn't currently work with LogisticRegression above, but no real reason to use that over this...",
"from snorkel.learning.tensorflow import SparseLogisticRegression\n\nmodel = SparseLogisticRegression(cardinality=Tweet.cardinality)\nmodel.train(F_train, train_marginals, n_epochs=50, print_freq=10)\n\nimport numpy as np\ntest_labels = np.load('crowdsourcing_test_labels.npy')\nacc = model.score(F_test, test_labels)\nprint(acc)\nassert acc > 0.6\n\n# Test with batch size s.t. N % batch_size == 1...\nmodel.score(F_test, test_labels, batch_size=9)",
"Train basic LSTM\nWith dev set scoring during execution (note we use test set here to be simple)",
"from snorkel.learning.tensorflow import TextRNN\ntest_tweets = session.query(Tweet).filter(Tweet.split == 1).order_by(Tweet.id).all()\n\ntrain_kwargs = {\n 'dim': 100,\n 'lr': 0.001,\n 'n_epochs': 25,\n 'dropout': 0.2,\n 'print_freq': 5\n}\nlstm = TextRNN(seed=123, cardinality=Tweet.cardinality)\nlstm.train(train_tweets, train_marginals, X_dev=test_tweets, Y_dev=test_labels, **train_kwargs)\n\nacc = lstm.score(test_tweets, test_labels)\nprint(acc)\nassert acc > 0.60\n\n# Test with batch size s.t. N % batch_size == 1...\nlstm.score(test_tweets, test_labels, batch_size=9)",
"Run GridSearch",
"from snorkel.learning.utils import GridSearch\n\n# Searching over learning rate and hidden dimension\nparam_ranges = {'lr': [1e-3, 1e-4], 'dim': [50, 100]}\nmodel_class_params = {'seed' : 123, 'cardinality': Tweet.cardinality}\nmodel_hyperparams = {\n    'dim': 100,\n    'n_epochs': 20,\n    'dropout': 0.1,\n    'print_freq': 10\n}\nsearcher = GridSearch(TextRNN, param_ranges, train_tweets, train_marginals,\n                      model_class_params=model_class_params,\n                      model_hyperparams=model_hyperparams)\n\n# Use test set here (just for testing)\nlstm, run_stats = searcher.fit(test_tweets, test_labels)\n\nacc = lstm.score(test_tweets, test_labels)\nprint(acc)\nassert acc > 0.60",
"Reload saved model outside of GridSearch",
"lstm = TextRNN(seed=123, cardinality=Tweet.cardinality)\nlstm.load('TextRNN_best', save_dir='checkpoints/grid_search')\nacc = lstm.score(test_tweets, test_labels)\nprint(acc)\nassert acc > 0.60",
"Reload a model with different structure",
"lstm.load('TextRNN_0', save_dir='checkpoints/grid_search')\nacc = lstm.score(test_tweets, test_labels)\nprint(acc)\nassert acc < 0.60",
"Testing GenerativeModel\nTesting GridSearch on crowdsourcing data",
"from snorkel.annotations import load_label_matrix\nimport numpy as np\n\nL_train = load_label_matrix(session, split=0)\ntrain_labels = np.load('crowdsourcing_train_labels.npy')\n\nfrom snorkel.learning import GenerativeModel\n\n# Searching over learning rate\nsearcher = GridSearch(GenerativeModel, {'epochs': [0, 10, 30]}, L_train)\n\n# Use training set labels here (just for testing)\ngen_model, run_stats = searcher.fit(L_train, train_labels)\n\nacc = gen_model.score(L_train, train_labels)\nprint(acc)\nassert acc > 0.97"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
ivukotic/ML_platform_tests
|
tutorial/DeepLearningImages/DeepLearning-MNIST.ipynb
|
gpl-3.0
|
[
"MNIST: Simple Deep Neural Network for Image Classification\nThis tutorial is part of the EFI Data Analytics for Physics workshop. It is meant for the beginner HEP undergraduate or graduate student (or postdoc/faculty) who wants to get started implementing a simple Convolutional Neural Network (ConvNet) in Python with Keras. The task is to classify images from the ''Fashion-MNIST'' dataset, which shares the same image size (28x28 pixels) and split between training and testing datasets as the original MNIST handwritten digit dataset, which is often used as the \"Hello World\" example for object recognition with deep learning. The tutorial was develped by Ben Nachman (UC Berkeley) and Joakim Olsson (University of Chicago), with inspiration from tutorials online (such as this.\nGoals\nIn this tutorial you will learn:\n- How to load a datasets into Keras from an HDF5 file.\n- How to implement a simple multi-layer perceptron neural network in Keras.\n- How to implement and evaluate a simple Convolutional Neural Network (ConvNet).\n- How to implement a close to state-of-the-art deep learning model in Keras.\n- How to modify the structure and function of models in Keras. \nExercises\n1.) Run through the notebook with the fashion-MNIST dataset\nLike the previous tutorial, we recommend your first go through the tutorial carefully first, don't hesitate to ask us question of something is not clear!\n1a) What is the performance of each neural network model?\n1b) Which model has the best accuracy on the test dataset?\n1c) What is the difference between accuracy and loss?\n2. 
Switch to the MNIST dataset of handwritten digits and rerun the training\nBy changing the \"do_handwritten_digit_mnist = True\" boolean below.\n2a) Switch to the MNIST dataset and rerun the notebook\n2b) How does the accuracy and loss of the training and test datasets compare with the \"fashion-MNIST\" dataset?\n3) Modify the number of training samples\nTry reducing the number of training (and test) samples by 1-2 orders of magnitude. How does that change the performance for each model?\n4) Modify the neural networks\n4a) Try tweaking the feed-forward network\nSuggestions: What happens if you increase the number of neurons in the hidden layer? What happens if you increase/decrease the number of training epochs? What happens if you vary the batch size? What happens if you change the activation function (in particular try switching from 'sigmoid' to 'linear')? What happens if you add additional layers?\n4b) Try tweaking the simple Convolutional Neural Network\n\nWhat does Conv2D do? Try to modify its parameters (filter, kernel-size, stride, etc.). (See: https://keras.io/layers/convolutional/)\nWhat does MaxPooling2D do? Try to modify its parameters. (See: https://keras.io/layers/pooling/#maxpooling2d)\nEtc...\n\n4c) Try tweaking the more complicated Deep Neural Network\n\nTry training for more epochs. \nTry to modify the structure of the network, adding additional layers etc. \nGoogle something like \"fashion-MNIST Kaggle\" etc. for tons of example networks from various groups.\n\nLoading the Fashion-MNIST dataset in Keras\nWe will work with the Fashion-MNIST dataset. Each image is a 28 by 28 pixel square grayscale image (784 pixels in total), associated with a label from 10 classes. The dataset is split into 60,000 images used for training and 10,000 examples for testing. 
\nEach training and test example is assigned to one of the following labels:\n| Label | Description |\n| --- | --- |\n| 0 | T-shirt/top |\n| 1 | Trouser |\n| 2 | Pullover |\n| 3 | Dress |\n| 4 | Coat |\n| 5 | Sandal |\n| 6 | Shirt |\n| 7 | Sneaker |\n| 8 | Bag |\n| 9 | Ankle boot |\nLet's begin by loading the dataset (if you want to try the standard handwritten digit MNIST instead of Fashion-MNIST, just set do_handwritten_digit_mnist = True).\nFirst here's a function to load the data from HDF5.",
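Since the integer labels are not self-explanatory, a small lookup helper can make printed predictions readable. This is our own addition, not part of the tutorial's code; the name `FASHION_LABELS` is arbitrary, and the names are taken from the table above:

```python
# Label-to-name lookup for Fashion-MNIST, taken from the table above.
# (FASHION_LABELS and label_name are our own helper names.)
FASHION_LABELS = {
    0: "T-shirt/top", 1: "Trouser", 2: "Pullover", 3: "Dress", 4: "Coat",
    5: "Sandal", 6: "Shirt", 7: "Sneaker", 8: "Bag", 9: "Ankle boot",
}

def label_name(y):
    """Return the human-readable class name for an integer label 0-9."""
    return FASHION_LABELS[int(y)]

print(label_name(5))  # Sandal
```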
"# Function to import the dataset \n# (if 'do_handwritten_digit_mnist = True' the standard handwritten digit MNIST dataset will be used instead of the fashion dataset)\nimport h5py\ndef import_data(do_handwritten_digit_mnist = False):\n \n # Load the data from hdf5 files\n if do_handwritten_digit_mnist:\n h5_file_readonly = h5py.File('data/handwritten-mnist.h5','r')\n else:\n h5_file_readonly = h5py.File('data/fashion-mnist.h5','r')\n X_train = h5_file_readonly['X_train'][:]\n X_test = h5_file_readonly['X_test'][:]\n y_train = h5_file_readonly['y_train'][:]\n y_test = h5_file_readonly['y_test'][:]\n h5_file_readonly.close()\n return X_train, X_test, y_train, y_test",
"Next, let's actually import the data.",
"# Set to 'True' if you want to use the handwritten digit dataset instead\ndo_handwritten_digit_mnist = False\nX_train, X_test, y_train, y_test = import_data(do_handwritten_digit_mnist)",
"To see the shape of numpy arrays, you can do the following:",
"print(X_train.shape)\nprint(y_train.shape)\nprint(X_test.shape)\nprint(y_test.shape)",
"We can quickly plot a few examples of the images from the dataset to get an idea of what they look like (each image is an array of 28x28 pixels). matplotlib.pyplot.imshow is useful for plotting arrays.",
"import matplotlib.pyplot as plt\n\n# plot 4 images as gray scale\nplt.subplot(221)\nplt.imshow(X_train[0], cmap=plt.get_cmap('gray'))\nplt.subplot(222)\nplt.imshow(X_train[1], cmap=plt.get_cmap('gray'))\nplt.subplot(223)\nplt.imshow(X_train[2], cmap=plt.get_cmap('gray'))\nplt.subplot(224)\nplt.imshow(X_train[3], cmap=plt.get_cmap('gray'))\n\n# show the plot\nplt.show()",
"1. Simple feed-forward Neural Network\nBefore we get to the more complex convolutional neural network, let's start with a simple multi-layer perceptron model as a baseline. \nInitialize a random number generator, which is useful if you want to reproduce the results later.",
"import numpy as np\n# fix random seed for reproducibility\nseed = 7\nnp.random.seed(seed)",
"Load the dataset again, just to make sure we start with a clean slate.",
"# Set to 'True' if you want to use the handwritten digit dataset instead\ndo_handwritten_digit_mnist = False\nX_train, X_test, y_train, y_test = import_data(do_handwritten_digit_mnist)",
"Before training this simple neural network we will flatten each 28x28 2D image into a 1D vector of 784 (28x28) values. This is because a simple feed-forward network makes no use of the 2D structure of its inputs.\nLet's perform some numpy array tricks.",
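The flattening step can first be illustrated on a small synthetic array (shapes only, not the real data):

```python
import numpy as np

# A fake batch of 4 grayscale 28x28 "images", just to show the shapes
fake_images = np.zeros((4, 28, 28), dtype=np.float32)

# reshape keeps the batch dimension and merges the two pixel dimensions
flattened = fake_images.reshape(fake_images.shape[0], 28 * 28)
print(fake_images.shape, "->", flattened.shape)  # (4, 28, 28) -> (4, 784)
```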
"print(\"Before flattening:\")\nprint(\"X_train.shape {}\".format(X_train.shape))\n# flatten 28*28 images to a 784 vector for each image\nnum_pixels = X_train.shape[1] * X_train.shape[2] # 28x28\nX_train = X_train.reshape(X_train.shape[0], num_pixels).astype('float32')\nX_test = X_test.reshape(X_test.shape[0], num_pixels).astype('float32')\nprint(\"After flattening:\")\nprint(\"X_train.shape {}\".format(X_train.shape))",
"Before training, input features are often normalized to within the range [0, 1].",
"# normalize inputs from 0-255 to 0-1\nX_train = X_train / 255\nX_test = X_test / 255",
"The training labels (i.e. the 'y' values) are often converted into what's called \"one-hot\" encoding. All this means is that instead of encoding the labels with scalar values between 0-9 (as in the table above), each y-label is represented by a vector of length 10 where the ith entry, corresponding to the scalar value of y, is 1 (hot) and the rest 0. \nEx. 5 - sandal would look like: $[0 0 0 0 0 1 0 0 0 0]^T$",
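For intuition, the same encoding can be written in a few lines of plain NumPy; this sketch mirrors what Keras' `np_utils.to_categorical` does below (the function name `one_hot` is our own):

```python
import numpy as np

def one_hot(labels, num_classes=10):
    """Encode integer labels as one-hot rows: a single 1 per row."""
    labels = np.asarray(labels, dtype=int)
    encoded = np.zeros((labels.size, num_classes), dtype=np.float32)
    # Place a 1 at (row i, column labels[i]) for every row
    encoded[np.arange(labels.size), labels] = 1.0
    return encoded

print(one_hot([5]))  # [[0. 0. 0. 0. 0. 1. 0. 0. 0. 0.]]
```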
"from keras.utils import np_utils\n# Ex. say we have the following labels\nyy = np.array([0, 1, 2, 3, 4 , 5, 6 ,7 ,8 ,9], np.float32)\nprint('yy',yy)\nprint('yy.shape',yy.shape)\nyy_onehot = np_utils.to_categorical(yy)\nprint('yy_onehot',yy_onehot)\nprint('yy_onehot.shape',yy_onehot.shape)",
"Let's now do the same thing for our actual dataset.",
"# one hot encode outputs\nfrom keras.utils import np_utils\ny_train_onehot = np_utils.to_categorical(y_train)\ny_test_onehot = np_utils.to_categorical(y_test)\nnum_classes = y_test_onehot.shape[1]\nprint(\"y_train.shape: \",y_train.shape)\nprint(\"y_train_onehot.shape: \",y_train_onehot.shape)",
"Alright, so let's start doing some machine learning. Again, you are strongly encouraged to go and read the Keras documentation.\nThe core data structure of Keras is a model, a way to organize layers. The simplest type of model is the Sequential model, a linear stack of layers. For more complex architectures, you should use the Keras functional API, which allows you to build arbitrary graphs of layers.\nWe compile our model and choose 'categorical_crossentropy' as our loss function and 'ADAM' (adaptive moment estimation) as the optimizer (an extension of stochastic gradient descent).",
"# Import the Sequential model\nfrom keras.models import Sequential\n# Dense is just the regular densely-connected NN layer\nfrom keras.layers import Dense\n\n# define baseline model\ndef baseline_model():\n    # create model\n    model = Sequential()\n    model.add(Dense(num_pixels, input_dim=num_pixels, kernel_initializer='normal', activation='relu'))\n    model.add(Dense(num_classes, kernel_initializer='normal', activation='softmax'))\n    # Compile model\n    model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])\n    return model",
"Build and show the model architecture.",
"# build the model\nmodel = baseline_model()\nmodel.summary()",
"We then train our model on the training data. Keras automatically runs validation on the test data.\nYou can experiment with the number of 'epochs' and the 'batch size':\n- one epoch = one forward pass and one backward pass over all of the training examples.\n- batch size = the number of training examples in one forward/backward pass. The higher the batch size, the more memory you'll need.",
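As a quick sanity check of what these numbers imply for this dataset (60,000 training images, batch size 200, 10 epochs):

```python
import math

n_train = 60000     # training examples in (fashion-)MNIST
batch_size = 200
epochs = 10

# Number of forward/backward passes (weight updates) per epoch, and in total
batches_per_epoch = math.ceil(n_train / batch_size)
total_weight_updates = batches_per_epoch * epochs
print(batches_per_epoch, total_weight_updates)  # 300 3000
```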
"# Fit (train) the model\nhistory = model.fit(X_train, y_train_onehot, validation_data=(X_test, y_test_onehot), epochs=10, batch_size=200, verbose=2)\n# Final evaluation of the model\nscores = model.evaluate(X_test, y_test_onehot, verbose=0)\nprint(\"Baseline Error: %.2f%%\" % (100-scores[1]*100))",
"The baseline error for this simple multi-layer perceptron neural net should be around 11%. \nYou can plot the accuracy and loss vs. epoch.",
"# list all data in history\nprint(history.history.keys())\n\n# summarize history for accuracy\nplt.plot(history.history['accuracy'])\nplt.plot(history.history['val_accuracy'])\nplt.title('model accuracy')\nplt.ylabel('accuracy')\nplt.xlabel('epoch')\nplt.legend(['train', 'test'], loc='upper left')\nplt.show()\n\n# summarize history for loss\nplt.plot(history.history['loss'])\nplt.plot(history.history['val_loss'])\nplt.title('model loss')\nplt.ylabel('loss')\nplt.xlabel('epoch')\nplt.legend(['train', 'test'], loc='upper left')\nplt.show()",
"Simple Convolutional Neural Network\nIn this section we will create a simple CNN for MNIST that demonstrates how to use all of the aspects of a modern CNN implementation, including Convolutional layers, Pooling layers and Dropout layers.\nWe start by importing the classes and functions needed.",
"import numpy as np\nfrom keras.datasets import mnist\nfrom keras.models import Sequential\nfrom keras.layers import Dense\nfrom keras.layers import Dropout\nfrom keras.layers import Flatten\nfrom keras.layers.convolutional import Conv2D\nfrom keras.layers.convolutional import MaxPooling2D\nfrom keras.utils import np_utils\nfrom keras import backend as K\nK.set_image_data_format(\"channels_first\")\n\n# fix random seed for reproducibility\nseed = 7\nnp.random.seed(seed)\n\n# Set to 'True' if you want to use the handwritten digit dataset instead\ndo_handwritten_digit_mnist = False\nX_train, X_test, y_train, y_test = import_data(do_handwritten_digit_mnist)\n\n# reshape to be [samples][pixels][width][height]\nX_train = X_train.reshape(X_train.shape[0], 1, 28, 28).astype('float32')\nX_test = X_test.reshape(X_test.shape[0], 1, 28, 28).astype('float32')\n\n# normalize inputs from 0-255 to 0-1\nX_train = X_train / 255\nX_test = X_test / 255\n# one hot encode outputs\ny_train = np_utils.to_categorical(y_train)\ny_test = np_utils.to_categorical(y_test)\nnum_classes = y_test.shape[1]",
"Next we define our neural network model.\nConvolutional neural networks are more complex than standard multi-layer perceptrons, so we will start with a simple structure that nevertheless uses all of the elements needed for state-of-the-art results. The network architecture is summarized below.\nThe first hidden layer is a convolutional layer (Conv2D). The layer has 32 feature maps with a kernel size of 5×5 and a rectifier activation function. This is the input layer, expecting images with the structure outlined above: [pixels][width][height].\nNext we define a pooling layer that takes the max, called MaxPooling2D. It is configured with a pool size of 2×2.\nThe next layer is a regularization layer using dropout, called Dropout. It is configured to randomly exclude 20% of neurons in the layer in order to reduce overfitting.\nNext is a layer that converts the 2D matrix data to a vector, called Flatten. It allows the output to be processed by standard fully connected layers.\nNext is a fully connected layer with 128 neurons and a rectifier activation function.\nFinally, the output layer has 10 neurons for the 10 classes and a softmax activation function to output probability-like predictions for each class.",
"def baseline_model():\n # create model\n model = Sequential()\n model.add(Conv2D(32, (5, 5), input_shape=(1, 28, 28), activation='relu'))\n model.add(MaxPooling2D(pool_size=(2, 2)))\n model.add(Dropout(0.2))\n model.add(Flatten())\n model.add(Dense(128, activation='relu'))\n model.add(Dense(num_classes, activation='softmax'))\n # Compile model\n model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])\n return model",
"We evaluate the model the same way as before with the multi-layer perceptron. The CNN is fit over 10 epochs with a batch size of 200.",
"# build the model\nmodel = baseline_model()\n# Print a summary of the model structure\nmodel.summary()\n# Fit the model\nhistory = model.fit(X_train, y_train, validation_data=(X_test, y_test), epochs=10, batch_size=200, verbose=2)\n# Final evaluation of the model\nscores = model.evaluate(X_test, y_test, verbose=0)\nprint(\"CNN Error: %.2f%%\" % (100-scores[1]*100))",
"The error for this simple convolutional neural net should be about 8.5% (after 10 epochs).",
"# list all data in history\nprint(history.history.keys())\n# summarize history for accuracy\nplt.plot(history.history['accuracy'])\nplt.plot(history.history['val_accuracy'])\nplt.title('model accuracy')\nplt.ylabel('accuracy')\nplt.xlabel('epoch')\nplt.legend(['train', 'test'], loc='upper left')\nplt.show()\n# summarize history for loss\nplt.plot(history.history['loss'])\nplt.plot(history.history['val_loss'])\nplt.title('model loss')\nplt.ylabel('loss')\nplt.xlabel('epoch')\nplt.legend(['train', 'test'], loc='upper left')\nplt.show()",
"Larger Convolutional Neural Network\nFinally, let's implement a model closer to state-of-the-art results.",
"# Larger CNN for the MNIST Dataset\nimport numpy\nfrom keras.datasets import mnist\nfrom keras.models import Sequential\nfrom keras.layers import Dense\nfrom keras.layers import Dropout\nfrom keras.layers import Flatten\nfrom keras.layers.convolutional import Conv2D\nfrom keras.layers.convolutional import MaxPooling2D\nfrom keras.utils import np_utils\nfrom keras import backend as K\nK.set_image_data_format(\"channels_first\")\n\n# fix random seed for reproducibility\nseed = 7\nnumpy.random.seed(seed)\n\n# load the dataset\ndo_handwritten_digit_mnist = False\nX_train, X_test, y_train, y_test = import_data(do_handwritten_digit_mnist)\n\n# reshape to be [samples][pixels][width][height]\nX_train = X_train.reshape(X_train.shape[0], 1, 28, 28).astype('float32')\nX_test = X_test.reshape(X_test.shape[0], 1, 28, 28).astype('float32')\n\n# normalize inputs from 0-255 to 0-1\nX_train = X_train / 255\nX_test = X_test / 255\n\n# one hot encode outputs\ny_train = np_utils.to_categorical(y_train)\ny_test = np_utils.to_categorical(y_test)\nnum_classes = y_test.shape[1]",
"This time we define a large CNN architecture with additional convolutional, max-pooling and fully connected layers. The network topology can be summarized as follows.\n\nConvolutional layer with 30 feature maps of size 5×5.\nPooling layer taking the max over 2×2 patches.\nConvolutional layer with 15 feature maps of size 3×3.\nPooling layer taking the max over 2×2 patches.\nDropout layer with a probability of 20%.\nFlatten layer.\nFully connected layer with 128 neurons and rectifier activation.\nFully connected layer with 50 neurons and rectifier activation.\nOutput layer.",
"# define the larger model\ndef larger_model():\n # create model\n model = Sequential()\n model.add(Conv2D(30, (5, 5), input_shape=(1, 28, 28), activation='relu'))\n model.add(MaxPooling2D(pool_size=(2, 2)))\n model.add(Conv2D(15, (3, 3), activation='relu'))\n model.add(MaxPooling2D(pool_size=(2, 2)))\n model.add(Dropout(0.2))\n model.add(Flatten())\n model.add(Dense(128, activation='relu'))\n model.add(Dense(50, activation='relu'))\n model.add(Dense(num_classes, activation='softmax'))\n # Compile model\n model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])\n return model",
"Training: Like the previous two experiments, the model is fit over 10 epochs with a batch size of 200.",
"# build the model\nmodel = larger_model()\n# Print a summary of the model structure\nmodel.summary()\n# Fit the model\nmodel.fit(X_train, y_train, validation_data=(X_test, y_test), epochs=10, batch_size=200)\n# Final evaluation of the model\nscores = model.evaluate(X_test, y_test, verbose=0)\nprint(\"Large CNN Error: %.2f%%\" % (100-scores[1]*100))",
"It is often very useful to save a partly trained model. This is easy in Keras.",
"# Save partly trained model\nmodel.save('partly_trained.h5')",
"To reload the model and continue training you can do:",
"# Load partly trained model\nfrom keras.models import load_model\nmodel = load_model('partly_trained.h5')\n\n# model.fit(X_train, y_train, validation_data=(X_test, y_test), epochs=10, batch_size=200)",
"The model takes about 100 seconds to run per epoch.\nCongratulations, you are now done with the default tutorial. We strongly encourage you to tweak parameters and modify the networks in order to improve the model."
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
ray-project/ray
|
doc/source/tune/examples/tune-wandb.ipynb
|
apache-2.0
|
[
"Using Weights & Biases with Tune\n(tune-wandb-ref)=\nWeights & Biases (Wandb) is a tool for experiment\ntracking, model optimization, and dataset versioning. It is very popular\nin the machine learning and data science community for its superb visualization\ntools.\n{image} /images/wandb_logo_full.png\n:align: center\n:alt: Weights & Biases\n:height: 80px\n:target: https://www.wandb.ai/\nRay Tune currently offers two lightweight integrations for Weights & Biases.\nOne is the {ref}WandbLoggerCallback <tune-wandb-logger>, which automatically logs\nmetrics reported to Tune to the Wandb API.\nThe other one is the {ref}@wandb_mixin <tune-wandb-mixin> decorator, which can be\nused with the function API. It automatically\ninitializes the Wandb API with Tune's training information. You can just use the\nWandb API like you would normally do, e.g. using wandb.log() to log your training\nprocess.\n{contents}\n:backlinks: none\n:local: true\nRunning A Weights & Biases Example\nIn the following example we're going to use both of the above methods, namely the WandbLoggerCallback and\nthe wandb_mixin decorator to log metrics.\nLet's start with a few crucial imports:",
"import numpy as np\nimport wandb\n\nfrom ray import tune\nfrom ray.tune import Trainable\nfrom ray.tune.integration.wandb import (\n WandbLoggerCallback,\n WandbTrainableMixin,\n wandb_mixin,\n)",
"Next, let's define an easy objective function (a Tune Trainable) that reports a random loss to Tune.\nThe objective function itself is not important for this example, since we want to focus on the Weights & Biases\nintegration primarily.",
"def objective(config, checkpoint_dir=None):\n for i in range(30):\n loss = config[\"mean\"] + config[\"sd\"] * np.random.randn()\n tune.report(loss=loss)",
"Given that you provide an api_key_file pointing to your Weights & Biases API key, you can define a\nsimple grid-search Tune run using the WandbLoggerCallback as follows:",
"def tune_function(api_key_file):\n \"\"\"Example for using a WandbLoggerCallback with the function API\"\"\"\n analysis = tune.run(\n objective,\n metric=\"loss\",\n mode=\"min\",\n config={\n \"mean\": tune.grid_search([1, 2, 3, 4, 5]),\n \"sd\": tune.uniform(0.2, 0.8),\n },\n callbacks=[\n WandbLoggerCallback(api_key_file=api_key_file, project=\"Wandb_example\")\n ],\n )\n return analysis.best_config",
"To use the wandb_mixin decorator, you can simply decorate the objective function from earlier.\nNote that we also use wandb.log(...) to log the loss to Weights & Biases as a dictionary.\nOtherwise, the decorated version of our objective is identical to its original.",
"@wandb_mixin\ndef decorated_objective(config, checkpoint_dir=None):\n for i in range(30):\n loss = config[\"mean\"] + config[\"sd\"] * np.random.randn()\n tune.report(loss=loss)\n wandb.log(dict(loss=loss))",
"With the decorated_objective defined, running a Tune experiment is as simple as providing this objective and\npassing the api_key_file to the wandb key of your Tune config:",
"def tune_decorated(api_key_file):\n \"\"\"Example for using the @wandb_mixin decorator with the function API\"\"\"\n analysis = tune.run(\n decorated_objective,\n metric=\"loss\",\n mode=\"min\",\n config={\n \"mean\": tune.grid_search([1, 2, 3, 4, 5]),\n \"sd\": tune.uniform(0.2, 0.8),\n \"wandb\": {\"api_key_file\": api_key_file, \"project\": \"Wandb_example\"},\n },\n )\n return analysis.best_config",
"Finally, you can also define a class-based Tune Trainable by using the WandbTrainableMixin to define your objective:",
"class WandbTrainable(WandbTrainableMixin, Trainable):\n def step(self):\n for i in range(30):\n loss = self.config[\"mean\"] + self.config[\"sd\"] * np.random.randn()\n wandb.log({\"loss\": loss})\n return {\"loss\": loss, \"done\": True}",
"Running Tune with this WandbTrainable works exactly the same as with the function API.\nThe below tune_trainable function differs from tune_decorated above only in the first argument we pass to\ntune.run():",
"def tune_trainable(api_key_file):\n    \"\"\"Example for using the WandbTrainableMixin with the class API\"\"\"\n    analysis = tune.run(\n        WandbTrainable,\n        metric=\"loss\",\n        mode=\"min\",\n        config={\n            \"mean\": tune.grid_search([1, 2, 3, 4, 5]),\n            \"sd\": tune.uniform(0.2, 0.8),\n            \"wandb\": {\"api_key_file\": api_key_file, \"project\": \"Wandb_example\"},\n        },\n    )\n    return analysis.best_config",
"Since you may not have an API key for Wandb, we can mock the Wandb logger and test all three of our training\nfunctions as follows.\nIf you do have an API key file, make sure to set mock_api to False and pass in the right api_key_file below.",
"import tempfile\nfrom unittest.mock import MagicMock\n\nmock_api = True\n\napi_key_file = \"~/.wandb_api_key\"\n\nif mock_api:\n WandbLoggerCallback._logger_process_cls = MagicMock\n decorated_objective.__mixins__ = tuple()\n WandbTrainable._wandb = MagicMock()\n wandb = MagicMock() # noqa: F811\n temp_file = tempfile.NamedTemporaryFile()\n temp_file.write(b\"1234\")\n temp_file.flush()\n api_key_file = temp_file.name\n\ntune_function(api_key_file)\ntune_decorated(api_key_file)\ntune_trainable(api_key_file)\n\nif mock_api:\n temp_file.close()",
"This completes our Tune and Wandb walk-through.\nIn the following sections you can find more details on the API of the Tune-Wandb integration.\nTune Wandb API Reference\nWandbLoggerCallback\n(tune-wandb-logger)=\n{eval-rst}\n.. autoclass:: ray.tune.integration.wandb.WandbLoggerCallback\n :noindex:\nWandb-Mixin\n(tune-wandb-mixin)=\n{eval-rst}\n.. autofunction:: ray.tune.integration.wandb.wandb_mixin\n :noindex:"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
albahnsen/CostSensitiveClassification
|
doc/tutorials/slides_edcs_fraud_detection.ipynb
|
bsd-3-clause
|
[
"<h1 class=\"title\">Example-Dependent Cost-Sensitive Fraud Detection using CostCla</h1>\n\n<center>\n<h2>Alejandro Correa Bahnsen, PhD</h2>\n<p>\n<h2>Data Scientist</h2>\n<p>\n<div>\n<img img class=\"logo\" src=\"https://raw.githubusercontent.com/albahnsen/CostSensitiveClassification/master/doc/tutorials/files/logo_easysol.jpg\" style=\"width: 400px;\">\n</div>\n\n<h3>PyCaribbean, Santo Domingo, Dominican Republic, Feb 2016</h3>\n</center>\n\n<h1 class=\"bigtitle\">About Me</h1>\n\n%%html\n<style>\ntable,td,tr,th {border:none!important}\n</style>\n\n### A brief bio:\n\n* PhD in **Machine Learning** at Luxembourg University\n* Data Scientist at Easy Solutions\n* Worked for +8 years as a data scientist at GE Money, Scotiabank and SIX Financial Services\n* Bachelor in Industrial Engineering and Master in Financial Engineering\n* Organizer of Big Data & Data Science Bogota Meetup\n* Sport addict, love to swim, play tennis, squash, and volleyball, among others.\n\n<p>\n\n<table style=\"border-collapse: collapse; border-top-color: rgb(255, 255, 255); border-right-color: rgb(255, 255, 255); border-bottom-color: rgb(255, 255, 255); border-left-color: rgb(255, 255, 255); border-top-width: 1px; border-right-width: 1px; border-bottom-width: 1px; border-left-width: 1px; \" border=\"0\" bordercolor=\"#888\" cellspacing=\"0\" align=\"left\">\n <tr>\n <td>\n<a href=\"mailto: al.bahnsen@gmail.com\"><svg width=\"40px\" height=\"40px\" viewBox=\"0 0 60 60\" version=\"1.1\" xmlns=\"http://www.w3.org/2000/svg\" xmlns:xlink=\"http://www.w3.org/1999/xlink\" xmlns:sketch=\"http://www.bohemiancoding.com/sketch/ns\">\n <path d=\"M0.224580688,30 C0.224580688,13.4314567 13.454941,0 29.7754193,0 C46.0958976,0 59.3262579,13.4314567 59.3262579,30 C59.3262579,46.5685433 46.0958976,60 29.7754193,60 C13.454941,60 0.224580688,46.5685433 0.224580688,30 Z M0.224580688,30\" fill=\"#FFFFFF\" sketch:type=\"MSShapeGroup\"></path>\n <path d=\"M35.0384324,31.6384006 L47.2131148,40.5764264 L47.2131148,20 
L35.0384324,31.6384006 Z M13.7704918,20 L13.7704918,40.5764264 L25.9449129,31.6371491 L13.7704918,20 Z M30.4918033,35.9844891 L27.5851037,33.2065217 L13.7704918,42 L47.2131148,42 L33.3981762,33.2065217 L30.4918033,35.9844891 Z M46.2098361,20 L14.7737705,20 L30.4918033,32.4549304 L46.2098361,20 Z M46.2098361,20\" id=\"Shape\" fill=\"#333333\" sketch:type=\"MSShapeGroup\"></path>\n <path d=\"M59.3262579,30 C59.3262579,46.5685433 46.0958976,60 29.7754193,60 C23.7225405,60 18.0947051,58.1525134 13.4093244,54.9827754 L47.2695458,5.81941103 C54.5814438,11.2806503 59.3262579,20.0777973 59.3262579,30 Z M59.3262579,30\" id=\"reflec\" fill-opacity=\"0.08\" fill=\"#000000\" sketch:type=\"MSShapeGroup\"></path>\n</svg></a> \n</td> \n<td>\n<a href=\"mailto: al.bahnsen@gmail.com\" target=\"_blank\">al.bahnsen@gmail.com</a> \n</td> </tr><tr> <td>\n\n<a href=\"http://github.com/albahnsen\"><svg width=\"40px\" height=\"40px\" viewBox=\"0 0 60 60\" version=\"1.1\" xmlns=\"http://www.w3.org/2000/svg\" xmlns:xlink=\"http://www.w3.org/1999/xlink\" xmlns:sketch=\"http://www.bohemiancoding.com/sketch/ns\">\n <path d=\"M0.336871032,30 C0.336871032,13.4314567 13.5672313,0 29.8877097,0 C46.208188,0 59.4385483,13.4314567 59.4385483,30 C59.4385483,46.5685433 46.208188,60 29.8877097,60 C13.5672313,60 0.336871032,46.5685433 0.336871032,30 Z M0.336871032,30\" id=\"Github\" fill=\"#333333\" sketch:type=\"MSShapeGroup\"></path>\n <path d=\"M18.2184245,31.9355566 C19.6068506,34.4507902 22.2845295,36.0156764 26.8007287,36.4485173 C26.1561023,36.9365335 25.3817877,37.8630984 25.2749857,38.9342607 C24.4644348,39.4574749 22.8347506,39.62966 21.5674303,39.2310659 C19.7918469,38.6717023 19.1119377,35.1642642 16.4533306,35.6636959 C15.8773626,35.772144 15.9917933,36.1507609 16.489567,36.4722998 C17.3001179,36.9955141 18.0629894,37.6500075 18.6513541,39.04366 C19.1033554,40.113871 20.0531304,42.0259813 23.0569369,42.0259813 C24.2489236,42.0259813 25.0842679,41.8832865 25.0842679,41.8832865 
C25.0842679,41.8832865 25.107154,44.6144649 25.107154,45.6761142 C25.107154,46.9004355 23.4507693,47.2457569 23.4507693,47.8346108 C23.4507693,48.067679 23.9990832,48.0895588 24.4396415,48.0895588 C25.3102685,48.0895588 27.1220883,47.3646693 27.1220883,46.0918317 C27.1220883,45.0806012 27.1382993,41.6806599 27.1382993,41.0860982 C27.1382993,39.785673 27.8372803,39.3737607 27.8372803,39.3737607 C27.8372803,39.3737607 27.924057,46.3153869 27.6704022,47.2457569 C27.3728823,48.3397504 26.8360115,48.1846887 26.8360115,48.6727049 C26.8360115,49.3985458 29.0168704,48.8505978 29.7396911,47.2571725 C30.2984945,46.0166791 30.0543756,39.2072834 30.0543756,39.2072834 L30.650369,39.1949165 C30.650369,39.1949165 30.6837446,42.3123222 30.6637192,43.7373675 C30.6427402,45.2128317 30.5426134,47.0792797 31.4208692,47.9592309 C31.9977907,48.5376205 33.868733,49.5526562 33.868733,48.62514 C33.868733,48.0857536 32.8436245,47.6424485 32.8436245,46.1831564 L32.8436245,39.4688905 C33.6618042,39.4688905 33.5387911,41.6768547 33.5387911,41.6768547 L33.5988673,45.7788544 C33.5988673,45.7788544 33.4186389,47.2733446 35.2190156,47.8992991 C35.8541061,48.1209517 37.2139245,48.1808835 37.277815,47.8089257 C37.3417055,47.4360167 35.6405021,46.8814096 35.6252446,45.7236791 C35.6157088,45.0178155 35.6567131,44.6059032 35.6567131,41.5379651 C35.6567131,38.470027 35.2438089,37.336079 33.8048426,36.4323453 C38.2457082,35.9766732 40.9939527,34.880682 42.3337458,31.9450695 C42.4383619,31.9484966 42.8791491,30.5737742 42.8219835,30.5742482 C43.1223642,29.4659853 43.2844744,28.1550957 43.3168964,26.6025764 C43.3092677,22.3930799 41.2895654,20.9042975 40.9014546,20.205093 C41.4736082,17.0182425 40.8060956,15.5675121 40.4961791,15.0699829 C39.3518719,14.6637784 36.5149435,16.1145088 34.9653608,17.1371548 C32.438349,16.3998984 27.0982486,16.4712458 25.0957109,17.3274146 C21.4005522,14.6875608 19.445694,15.0918628 19.445694,15.0918628 C19.445694,15.0918628 18.1821881,17.351197 19.1119377,20.6569598 
C17.8961113,22.2028201 16.9902014,23.2968136 16.9902014,26.1963718 C16.9902014,27.8297516 17.1828264,29.2918976 17.6176632,30.5685404 C17.5643577,30.5684093 18.2008493,31.9359777 18.2184245,31.9355566 Z M18.2184245,31.9355566\" id=\"Path\" fill=\"#FFFFFF\" sketch:type=\"MSShapeGroup\"></path>\n <path d=\"M59.4385483,30 C59.4385483,46.5685433 46.208188,60 29.8877097,60 C23.8348308,60 18.2069954,58.1525134 13.5216148,54.9827754 L47.3818361,5.81941103 C54.6937341,11.2806503 59.4385483,20.0777973 59.4385483,30 Z M59.4385483,30\" id=\"reflec\" fill-opacity=\"0.08\" fill=\"#000000\" sketch:type=\"MSShapeGroup\"></path>\n</svg></a>\n\n</td><td>\n<a href=\"http://github.com/albahnsen\" target=\"_blank\">http://github.com/albahnsen</a> \n\n</td> </tr><tr> <td>\n\n<a href=\"http://linkedin.com/in/albahnsen\"><svg width=\"40px\" height=\"40px\" viewBox=\"0 0 60 60\" version=\"1.1\" xmlns=\"http://www.w3.org/2000/svg\" xmlns:xlink=\"http://www.w3.org/1999/xlink\" xmlns:sketch=\"http://www.bohemiancoding.com/sketch/ns\">\n <path d=\"M0.449161376,30 C0.449161376,13.4314567 13.6795217,0 30,0 C46.3204783,0 59.5508386,13.4314567 59.5508386,30 C59.5508386,46.5685433 46.3204783,60 30,60 C13.6795217,60 0.449161376,46.5685433 0.449161376,30 Z M0.449161376,30\" fill=\"#007BB6\" sketch:type=\"MSShapeGroup\"></path>\n <path d=\"M22.4680392,23.7098144 L15.7808366,23.7098144 L15.7808366,44.1369537 L22.4680392,44.1369537 L22.4680392,23.7098144 Z M22.4680392,23.7098144\" id=\"Path\" fill=\"#FFFFFF\" sketch:type=\"MSShapeGroup\"></path>\n <path d=\"M22.9084753,17.3908761 C22.8650727,15.3880081 21.4562917,13.862504 19.1686418,13.862504 C16.8809918,13.862504 15.3854057,15.3880081 15.3854057,17.3908761 C15.3854057,19.3522579 16.836788,20.9216886 19.0818366,20.9216886 L19.1245714,20.9216886 C21.4562917,20.9216886 22.9084753,19.3522579 22.9084753,17.3908761 Z M22.9084753,17.3908761\" id=\"Path\" fill=\"#FFFFFF\" sketch:type=\"MSShapeGroup\"></path>\n <path d=\"M46.5846502,32.4246563 
C46.5846502,26.1503226 43.2856534,23.2301456 38.8851658,23.2301456 C35.3347011,23.2301456 33.7450983,25.2128128 32.8575489,26.6036896 L32.8575489,23.7103567 L26.1695449,23.7103567 C26.2576856,25.6271338 26.1695449,44.137496 26.1695449,44.137496 L32.8575489,44.137496 L32.8575489,32.7292961 C32.8575489,32.1187963 32.9009514,31.5097877 33.0777669,31.0726898 C33.5610713,29.8530458 34.6614937,28.5902885 36.5089747,28.5902885 C38.9297703,28.5902885 39.8974476,30.4634101 39.8974476,33.2084226 L39.8974476,44.1369537 L46.5843832,44.1369537 L46.5846502,32.4246563 Z M46.5846502,32.4246563\" id=\"Path\" fill=\"#FFFFFF\" sketch:type=\"MSShapeGroup\"></path>\n <path d=\"M59.5508386,30 C59.5508386,46.5685433 46.3204783,60 30,60 C23.9471212,60 18.3192858,58.1525134 13.6339051,54.9827754 L47.4941264,5.81941103 C54.8060245,11.2806503 59.5508386,20.0777973 59.5508386,30 Z M59.5508386,30\" id=\"reflec\" fill-opacity=\"0.08\" fill=\"#000000\" sketch:type=\"MSShapeGroup\"></path>\n</svg></a> \n</td> <td>\n<a href=\"http://linkedin.com/in/albahnsen\" target=\"_blank\">http://linkedin.com/in/albahnsen</a> \n\n</td> </tr><tr> <td>\n\n<a href=\"http://twitter.com/albahnsen\"><svg width=\"40px\" height=\"40px\" viewBox=\"0 0 60 60\" version=\"1.1\" xmlns=\"http://www.w3.org/2000/svg\" xmlns:xlink=\"http://www.w3.org/1999/xlink\" xmlns:sketch=\"http://www.bohemiancoding.com/sketch/ns\">\n <path d=\"M0,30 C0,13.4314567 13.4508663,0 30.0433526,0 C46.6358389,0 60.0867052,13.4314567 60.0867052,30 C60.0867052,46.5685433 46.6358389,60 30.0433526,60 C13.4508663,60 0,46.5685433 0,30 Z M0,30\" fill=\"#4099FF\" sketch:type=\"MSShapeGroup\"></path>\n <path d=\"M29.2997675,23.8879776 L29.3627206,24.9260453 L28.3135016,24.798935 C24.4943445,24.3116787 21.1578281,22.6592444 18.3249368,19.8840023 L16.9399677,18.5069737 L16.5832333,19.5238563 C15.8277956,21.7906572 16.3104363,24.1845684 17.8842648,25.7946325 C18.72364,26.6844048 18.5347806,26.8115152 17.0868584,26.2818888 C16.5832333,26.1124083 
16.1425613,25.985298 16.1005925,26.0488532 C15.9537019,26.1971486 16.457327,28.1249885 16.8560302,28.8876505 C17.4016241,29.9469033 18.5137962,30.9849709 19.7308902,31.5993375 L20.7591248,32.0865938 L19.5420308,32.1077788 C18.3669055,32.1077788 18.3249368,32.1289639 18.4508431,32.57385 C18.8705307,33.9508786 20.5282967,35.4126474 22.3749221,36.048199 L23.6759536,36.4930852 L22.5427971,37.1710069 C20.8640467,38.1455194 18.891515,38.6963309 16.9189833,38.738701 C15.9746862,38.759886 15.1982642,38.8446262 15.1982642,38.9081814 C15.1982642,39.1200319 17.7583585,40.306395 19.2482495,40.7724662 C23.7179224,42.1494948 29.0269705,41.5563132 33.0140027,39.2047722 C35.846894,37.5311528 38.6797853,34.2050993 40.0018012,30.9849709 C40.7152701,29.2689815 41.428739,26.1335934 41.428739,24.6294545 C41.428739,23.654942 41.4916922,23.5278317 42.6668174,22.3626537 C43.359302,21.6847319 44.0098178,20.943255 44.135724,20.7314044 C44.3455678,20.3288884 44.3245835,20.3288884 43.2543801,20.6890343 C41.4707078,21.324586 41.2188952,21.2398458 42.1002392,20.2865183 C42.750755,19.6085965 43.527177,18.3798634 43.527177,18.0197174 C43.527177,17.9561623 43.2124113,18.0620876 42.8556769,18.252753 C42.477958,18.4646036 41.6385828,18.7823794 41.0090514,18.9730449 L39.8758949,19.3331908 L38.8476603,18.634084 C38.281082,18.252753 37.4836756,17.829052 37.063988,17.7019416 C35.9937846,17.4053509 34.357003,17.447721 33.3917215,17.7866818 C30.768674,18.7400093 29.110908,21.1974757 29.2997675,23.8879776 Z M29.2997675,23.8879776\" id=\"Path\" fill=\"#FFFFFF\" sketch:type=\"MSShapeGroup\"></path>\n <path d=\"M60.0867052,30 C60.0867052,46.5685433 46.6358389,60 30.0433526,60 C23.8895925,60 18.1679598,58.1525134 13.4044895,54.9827754 L47.8290478,5.81941103 C55.2628108,11.2806503 60.0867052,20.0777973 60.0867052,30 Z M60.0867052,30\" id=\"reflec\" fill-opacity=\"0.08\" fill=\"#000000\" sketch:type=\"MSShapeGroup\"></path>\n</svg></a>\n</td> <td>\n<a href=\"http://twitter.com/albahnsen\" 
target=\"_blank\">@albahnsen</a> \n\n</td> </tr>\n</table>\n\n# Agenda\n\n* Quick Intro to Fraud Detection\n* Financial Evaluation of a Fraud Detection Model\n* Example-Dependent Classification\n* CostCla Library\n* Conclusion and Future Work\n\n# Fraud Detection\n\nEstimate the **probability** of a transaction being **fraud** based on analyzing customer patterns and recent fraudulent behavior\n\n<center>\n<div>\n<img class=\"logo\" src=\"https://raw.githubusercontent.com/albahnsen/CostSensitiveClassification/master/doc/tutorials/files/trx_flow.png\" style=\"width: 800px;\">\n</div>\n</center>\n\n# Fraud Detection\n\nIssues when constructing a fraud detection system:\n\n* Skewness of the data\n* **Cost-sensitivity**\n* Short time response of the system\n* Dimensionality of the search space\n* Feature preprocessing\n* Model selection\n\nDifferent machine learning methods are used in practice and in the literature: logistic regression, neural networks, discriminant analysis, genetic programming, decision trees, and random forests, among others\n\n# Fraud Detection\n\nFormally, a fraud detection system is a statistical model that allows the estimation of the probability of transaction $i$ being a fraud ($y_i=1$)\n\n $$\\hat p_i=P(y_i=1|\\mathbf{x}_i)$$ \n\n<h1 class=\"bigtitle\">Data!</h1>\n<center>\n<img class=\"logo\" src=\"http://www.sei-security.com/wp-content/uploads/2015/12/shutterstock_144683186.jpg\" style=\"width: 400px;\">\n</center>\n\n# Load dataset from CostCla package",
"import pandas as pd\nimport numpy as np\nfrom costcla import datasets\n\nfrom costcla.datasets.base import Bunch\ndef load_fraud(cost_mat_parameters=dict(Ca=10)):\n# data_ = pd.read_pickle(\"trx_fraud_data.pk\")\n data_ = pd.read_pickle(\"/home/al/DriveAl/EasySol/Projects/DetectTA/Tests/trx_fraud_data_v3_agg.pk\")\n target = data_['fraud'].values\n data = data_.drop('fraud', 1)\n n_samples = data.shape[0]\n cost_mat = np.zeros((n_samples, 4))\n cost_mat[:, 0] = cost_mat_parameters['Ca']\n cost_mat[:, 1] = data['amount']\n cost_mat[:, 2] = cost_mat_parameters['Ca']\n cost_mat[:, 3] = 0.0\n return Bunch(data=data.values, target=target, cost_mat=cost_mat,\n target_names=['Legitimate Trx', 'Fraudulent Trx'], DESCR='',\n feature_names=data.columns.values, name='FraudDetection')\ndatasets.load_fraud = load_fraud\n\ndata = datasets.load_fraud()",
"Data file",
"print(data.keys())\nprint('Number of examples ', data.target.shape[0])",
"Class Label",
"target = pd.DataFrame(pd.Series(data.target).value_counts(), columns=('Frequency',))\ntarget['Percentage'] = (target['Frequency'] / target['Frequency'].sum()) * 100\ntarget.index = ['Negative (Legitimate Trx)', 'Positive (Fraud Trx)']\ntarget.loc['Total Trx'] = [data.target.shape[0], 1.]\nprint(target)",
"Features",
"pd.DataFrame(data.feature_names[:4], columns=('Features',))",
"Features",
"df = pd.DataFrame(data.data[:, :4], columns=data.feature_names[:4])\ndf.head(10)",
"Aggregated Features",
"df = pd.DataFrame(data.data[:, 4:], columns=data.feature_names[4:])\ndf.head(10)",
"Fraud Detection as a classification problem\nSplit into training and testing sets",
"from sklearn.cross_validation import train_test_split\nX = data.data[:, [2, 3] + list(range(4, data.data.shape[1]))].astype(np.float)\nX_train, X_test, y_train, y_test, cost_mat_train, cost_mat_test = \\\ntrain_test_split(X, data.target, data.cost_mat, test_size=0.33, random_state=10)",
"Fraud Detection as a classification problem\nFit models",
"from sklearn.ensemble import RandomForestClassifier\nfrom sklearn.tree import DecisionTreeClassifier\n\nclassifiers = {\"RF\": {\"f\": RandomForestClassifier()},\n \"DT\": {\"f\": DecisionTreeClassifier()}}\n\nci_models = ['DT', 'RF']\n# Fit the classifiers using the training dataset\nfor model in classifiers.keys():\n classifiers[model][\"f\"].fit(X_train, y_train)\n classifiers[model][\"c\"] = classifiers[model][\"f\"].predict(X_test)\n classifiers[model][\"p\"] = classifiers[model][\"f\"].predict_proba(X_test)\n classifiers[model][\"p_train\"] = classifiers[model][\"f\"].predict_proba(X_train)",
"Models performance\nEvaluate metrics and plot results",
"import warnings\nwarnings.filterwarnings('ignore')\n\n%matplotlib inline\nimport matplotlib.pyplot as plt\nfrom IPython.core.pylabtools import figsize\nimport seaborn as sns\ncolors = sns.color_palette()\nfigsize(12, 8)\n\nfrom sklearn.metrics import f1_score, precision_score, recall_score, accuracy_score\n\nmeasures = {\"F1Score\": f1_score, \"Precision\": precision_score, \n \"Recall\": recall_score, \"Accuracy\": accuracy_score}\n\nresults = pd.DataFrame(columns=measures.keys())\n\nfor model in ci_models:\n results.loc[model] = [measures[measure](y_test, classifiers[model][\"c\"]) for measure in measures.keys()]",
"Models performance",
"def fig_acc():\n plt.bar(np.arange(results.shape[0])-0.3, results['Accuracy'], 0.6, label='Accuracy', color=colors[0])\n plt.xticks(range(results.shape[0]), results.index) \n plt.tick_params(labelsize=22); plt.title('Accuracy', size=30)\n plt.show()\n\nfig_acc()",
"Models performance",
"def fig_f1():\n plt.bar(np.arange(results.shape[0])-0.3, results['Precision'], 0.2, label='Precision', color=colors[0])\n plt.bar(np.arange(results.shape[0])-0.3+0.2, results['Recall'], 0.2, label='Recall', color=colors[1])\n plt.bar(np.arange(results.shape[0])-0.3+0.4, results['F1Score'], 0.2, label='F1Score', color=colors[2])\n\n plt.xticks(range(results.shape[0]), results.index) \n plt.tick_params(labelsize=22)\n plt.ylim([0, 1])\n plt.legend(loc='center left', bbox_to_anchor=(1, 0.5),fontsize=22)\n plt.show()\n\nfig_f1()",
"Models performance\n\n\nNone of these measures takes into account the business and economic realities that take place in fraud detection. \n\n\nLosses due to fraud and customer satisfaction costs are not considered in the evaluation of the different models. \n\n\n<h1 class=\"bigtitle\">Financial Evaluation of a Fraud Detection Model</h1>\n\nMotivation\n\nTypically, a fraud model is evaluated using standard cost-insensitive measures.\nHowever, in practice, the cost associated with approving a fraudulent transaction (False Negative) is quite different from the cost associated with declining a legitimate transaction (False Positive).\nFurthermore, the costs are not constant among transactions. \n\nCost Matrix\n| | Actual Positive ($y_i=1$) | Actual Negative ($y_i=0$)|\n|--- |:-: |:-: |\n| Pred. Positive ($c_i=1$) | $C_{TP_i}=C_a$ | $C_{FP_i}=C_a$ |\n| Pred. Negative ($c_i=0$) | $C_{FN_i}=Amt_i$ | $C_{TN_i}=0$ |\nWhere:\n\n$C_{FN_i}$ = Amount of the transaction $i$\n$C_a$ is the administrative cost of dealing with an alert\n\nFor more info see <a href=\"http://albahnsen.com/files/%20Improving%20Credit%20Card%20Fraud%20Detection%20by%20using%20Calibrated%20Probabilities%20-%20Publish.pdf\" target=\"_blank\">[Correa Bahnsen et al., 2014]</a>",
"# The cost matrix is already calculated for the dataset\n# cost_mat[C_FP,C_FN,C_TP,C_TN]\nprint(data.cost_mat[[10, 17, 50]])",
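The cost-matrix table above maps to one row per transaction. The sketch below (a hypothetical `build_cost_mat` helper, mirroring the `[C_FP, C_FN, C_TP, C_TN]` column order used by costcla and the `load_fraud` loader earlier) shows how those rows are built:

```python
import numpy as np

def build_cost_mat(amounts, Ca=10.0):
    """One row per transaction: [C_FP, C_FN, C_TP, C_TN]."""
    amounts = np.asarray(amounts, dtype=float)
    n = amounts.shape[0]
    cost_mat = np.zeros((n, 4))
    cost_mat[:, 0] = Ca        # false positive: administrative cost of the alert
    cost_mat[:, 1] = amounts   # false negative: lose the transaction amount
    cost_mat[:, 2] = Ca        # true positive: the alert cost is still paid
    cost_mat[:, 3] = 0.0       # true negative: no cost
    return cost_mat
```

Note that only the false-negative cost varies across examples — this is what makes the problem example-dependent rather than merely class-dependent.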
"Financial savings\nThe financial cost of using a classifier $f$ on $\\mathcal{S}$ is calculated by\n$$ Cost(f(\\mathcal{S})) = \\sum_{i=1}^N y_i \\left( c_i C_{TP_i} + (1-c_i) C_{FN_i} \\right) + (1-y_i) \\left( c_i C_{FP_i} + (1-c_i) C_{TN_i} \\right).$$\nThen the financial savings are defined as the reduction in cost achieved by the algorithm relative to using no algorithm at all:\n$$ Savings(f(\\mathcal{S})) = \\frac{ Cost_l(\\mathcal{S}) - Cost(f(\\mathcal{S}))} {Cost_l(\\mathcal{S})},$$\nwhere $Cost_l(\\mathcal{S})$ is the cost of the costless class, i.e., of trivially predicting every transaction as belonging to the cheaper class.\nModels Savings\ncostcla.metrics.savings_score(y_true, y_pred, cost_mat)",
"# Calculation of the cost and savings\nfrom costcla.metrics import savings_score, cost_loss\n\n# Evaluate the savings for each model\nresults[\"Savings\"] = np.zeros(results.shape[0])\nfor model in ci_models:\n results[\"Savings\"].loc[model] = savings_score(y_test, classifiers[model][\"c\"], cost_mat_test)\n\n# Plot the results\ndef fig_sav():\n plt.bar(np.arange(results.shape[0])-0.4, results['Precision'], 0.2, label='Precision', color=colors[0])\n plt.bar(np.arange(results.shape[0])-0.4+0.2, results['Recall'], 0.2, label='Recall', color=colors[1])\n plt.bar(np.arange(results.shape[0])-0.4+0.4, results['F1Score'], 0.2, label='F1Score', color=colors[2])\n plt.bar(np.arange(results.shape[0])-0.4+0.6, results['Savings'], 0.2, label='Savings', color=colors[3])\n\n plt.xticks(range(results.shape[0]), results.index) \n plt.tick_params(labelsize=22)\n plt.ylim([0, 1])\n plt.xlim([-0.5, results.shape[0] -1 + .5])\n plt.legend(loc='center left', bbox_to_anchor=(1, 0.5),fontsize=22)\n plt.show()",
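As a sanity check, the savings metric can be reproduced with plain NumPy. This is a sketch with hypothetical helpers (`total_cost`, `savings`), not costcla's implementation; it evaluates the full four-entry cost matrix per transaction:

```python
import numpy as np

def total_cost(y, c, cost_mat):
    # cost_mat columns: [C_FP, C_FN, C_TP, C_TN]
    C_FP, C_FN, C_TP, C_TN = cost_mat.T
    return np.sum(y * (c * C_TP + (1 - c) * C_FN) +
                  (1 - y) * (c * C_FP + (1 - c) * C_TN))

def savings(y, c, cost_mat):
    # Cost_l: cost of trivially predicting all negatives or all positives,
    # whichever is cheaper
    cost_l = min(total_cost(y, np.zeros_like(y), cost_mat),
                 total_cost(y, np.ones_like(y), cost_mat))
    return (cost_l - total_cost(y, c, cost_mat)) / cost_l
```

A savings of 0 means the classifier is no better than the trivial prediction; 1 would mean all costs are avoided.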
"Models Savings",
"fig_sav()",
"Threshold Optimization\nMake a classifier cost-sensitive by selecting a proper threshold\nfrom the training instances according to the savings\n$$ t \\quad = \\quad argmax_t \\: Savings(c(t), y) $$\nThreshold Optimization - Code\ncostcla.models.ThresholdingOptimization(calibration=True)\nfit(y_prob_train=None, cost_mat, y_true_train)\n- Parameters\n - y_prob_train : Predicted probabilities of the training set\n - cost_mat : Cost matrix of the classification problem. \n - y_true_train : True class of the training set\npredict(y_prob)\n- Parameters\n - y_prob : Predicted probabilities\n\nReturns\ny_pred : Predicted class\n\n\n\nThreshold Optimization",
"from costcla.models import ThresholdingOptimization\n\nfor model in ci_models:\n classifiers[model+\"-TO\"] = {\"f\": ThresholdingOptimization()}\n # Fit\n classifiers[model+\"-TO\"][\"f\"].fit(classifiers[model][\"p_train\"], cost_mat_train, y_train)\n # Predict\n classifiers[model+\"-TO\"][\"c\"] = classifiers[model+\"-TO\"][\"f\"].predict(classifiers[model][\"p\"])\n\nprint('New thresholds')\nfor model in ci_models:\n print(model + '-TO - ' + str(classifiers[model+'-TO']['f'].threshold_))\n\nfor model in ci_models:\n # Evaluate\n results.loc[model+\"-TO\"] = 0\n results.loc[model+\"-TO\", measures.keys()] = \\\n [measures[measure](y_test, classifiers[model+\"-TO\"][\"c\"]) for measure in measures.keys()]\n results[\"Savings\"].loc[model+\"-TO\"] = savings_score(y_test, classifiers[model+\"-TO\"][\"c\"], cost_mat_test) ",
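The argmax over thresholds is just a one-dimensional search. A standalone sketch (hypothetical `optimize_threshold`; it omits the probability calibration that `ThresholdingOptimization` can also perform):

```python
import numpy as np

def optimize_threshold(y_prob, y_true, cost_mat, grid=None):
    """Grid-search the threshold t that maximizes the savings on training data."""
    if grid is None:
        grid = np.linspace(0.0, 1.0, 101)
    C_FP, C_FN, C_TP, C_TN = cost_mat.T

    def cost(c):
        return np.sum(y_true * (c * C_TP + (1 - c) * C_FN) +
                      (1 - y_true) * (c * C_FP + (1 - c) * C_TN))

    # Baseline: the cheaper of predicting all negatives or all positives
    cost_l = min(cost(np.zeros_like(y_true)), cost(np.ones_like(y_true)))
    savings = [(cost_l - cost((y_prob >= t).astype(int))) / cost_l for t in grid]
    return grid[int(np.argmax(savings))]
```

Because the savings curve is evaluated on the training set, the selected threshold should be checked on held-out data before deployment.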
"Threshold Optimization",
"fig_sav()",
"Models Savings\n\n\nThere are significant differences in the results when evaluating a model using traditional cost-insensitive measures\n\n\nTrain models that take into account the different financial costs\n\n\n<h1 class=\"bigtitle\">Example-Dependent Cost-Sensitive Classification</h1>\n\nWhy \"Example-Dependent\"?\nCost-sensitive classification usually refers to class-dependent costs, where the cost depends on the class but is assumed constant across examples.\nIn fraud detection, different transactions have different amounts, which implies that the costs are not constant.\nBayes Minimum Risk (BMR)\nThe BMR classifier is a decision model based on quantifying tradeoffs between various decisions using probabilities and the costs that accompany such decisions. \nIn particular:\n$$ R(c_i=0|\\mathbf{x}_i)=C_{TN_i}(1-\\hat p_i)+C_{FN_i} \\cdot \\hat p_i, $$\nand\n$$ R(c_i=1|\\mathbf{x}_i)=C_{TP_i} \\cdot \\hat p_i + C_{FP_i}(1- \\hat p_i). $$\nBMR Code\ncostcla.models.BayesMinimumRiskClassifier(calibration=True)\nfit(y_true_cal=None, y_prob_cal=None)\n- Parameters\n - y_true_cal : True class\n - y_prob_cal : Predicted probabilities\npredict(y_prob, cost_mat)\n- Parameters\n - y_prob : Predicted probabilities\n - cost_mat : Cost matrix of the classification problem. \n\nReturns\ny_pred : Predicted class\n\n\n\nBMR Code",
"from costcla.models import BayesMinimumRiskClassifier\n\nfor model in ci_models:\n classifiers[model+\"-BMR\"] = {\"f\": BayesMinimumRiskClassifier()}\n # Fit\n classifiers[model+\"-BMR\"][\"f\"].fit(y_test, classifiers[model][\"p\"])\n # Calibration must be made in a validation set\n # Predict\n classifiers[model+\"-BMR\"][\"c\"] = classifiers[model+\"-BMR\"][\"f\"].predict(classifiers[model][\"p\"], cost_mat_test)\n\nfor model in ci_models:\n # Evaluate\n results.loc[model+\"-BMR\"] = 0\n results.loc[model+\"-BMR\", measures.keys()] = \\\n [measures[measure](y_test, classifiers[model+\"-BMR\"][\"c\"]) for measure in measures.keys()]\n results[\"Savings\"].loc[model+\"-BMR\"] = savings_score(y_test, classifiers[model+\"-BMR\"][\"c\"], cost_mat_test) ",
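The decision rule behind BMR — predict fraud whenever $R(c_i=1|\mathbf{x}_i) \le R(c_i=0|\mathbf{x}_i)$ — can be written directly. A minimal sketch (hypothetical `bmr_predict`; the library version additionally calibrates the probabilities first):

```python
import numpy as np

def bmr_predict(y_prob, cost_mat):
    # cost_mat columns: [C_FP, C_FN, C_TP, C_TN]
    C_FP, C_FN, C_TP, C_TN = cost_mat.T
    risk_neg = C_TN * (1 - y_prob) + C_FN * y_prob   # R(c=0 | x): predict legitimate
    risk_pos = C_TP * y_prob + C_FP * (1 - y_prob)   # R(c=1 | x): predict fraud
    return (risk_pos <= risk_neg).astype(int)
```

With $C_a=10$ and a $500 transaction, the rule flags the transaction as soon as $\hat p \ge 10/500 = 0.02$, far below the usual 0.5 cutoff — which is why BMR trades precision for recall on large-amount transactions.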
"BMR Results",
"fig_sav()",
"BMR Results\nWhy is focusing on the Recall so important?\n\nAverage cost of a False Negative",
"print(data.data[data.target == 1, 2].mean())",
"Average cost of a False Positive",
"print(data.cost_mat[:,0].mean())",
"BMR Results\n\nBayes Minimum Risk increases the savings by using a cost-insensitive method and then introducing the costs at prediction time\nWhy not introduce the costs during the estimation of the methods?\n\nCost-Sensitive Decision Trees (CSDT)\nA new cost-based impurity measure that takes into account the costs of classifying all the examples in a leaf\ncostcla.models.CostSensitiveDecisionTreeClassifier(criterion='direct_cost', criterion_weight=False, pruned=True)\nCost-Sensitive Random Forest (CSRF)\nEnsemble of CSDTs\ncostcla.models.CostSensitiveRandomForestClassifier(n_estimators=10, max_samples=0.5, max_features=0.5, combination='majority_voting')\nCSDT & CSRF Code",
"from costcla.models import CostSensitiveDecisionTreeClassifier\nfrom costcla.models import CostSensitiveRandomForestClassifier\n\n\nclassifiers = {\"CSDT\": {\"f\": CostSensitiveDecisionTreeClassifier()},\n \"CSRF\": {\"f\": CostSensitiveRandomForestClassifier(combination='majority_bmr')}}\n\n# Fit the classifiers using the training dataset\nfor model in classifiers.keys():\n classifiers[model][\"f\"].fit(X_train, y_train, cost_mat_train)\n if model == \"CSRF\":\n classifiers[model][\"c\"] = classifiers[model][\"f\"].predict(X_test, cost_mat_test)\n else:\n classifiers[model][\"c\"] = classifiers[model][\"f\"].predict(X_test)\n\nfor model in ['CSDT', 'CSRF']:\n # Evaluate\n results.loc[model] = 0\n results.loc[model, measures.keys()] = \\\n [measures[measure](y_test, classifiers[model][\"c\"]) for measure in measures.keys()]\n results[\"Savings\"].loc[model] = savings_score(y_test, classifiers[model][\"c\"], cost_mat_test)",
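The idea behind the cost-based impurity can be sketched as follows: the cost of a leaf is the cheaper of labeling all its examples as legitimate or all as fraud. This is a hypothetical illustration of the concept, not the library's internal implementation:

```python
import numpy as np

def leaf_cost(y, cost_mat):
    """Cost of a leaf: best of the two constant labelings of its examples."""
    # cost_mat columns: [C_FP, C_FN, C_TP, C_TN]
    C_FP, C_FN, C_TP, C_TN = cost_mat.T
    cost_as_neg = np.sum(y * C_FN + (1 - y) * C_TN)  # label whole leaf legitimate
    cost_as_pos = np.sum(y * C_TP + (1 - y) * C_FP)  # label whole leaf fraud
    return min(cost_as_neg, cost_as_pos)
```

A split is then chosen to minimize the summed leaf costs of its children, so a leaf holding a few very large frauds can justify being labeled positive even when frauds are the minority there.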
"CSDT & CSRF Results",
"fig_sav()",
"Lessons Learned (so far ...)\n\n\nSelecting models based on traditional statistics does not give the best results in terms of cost\n\n\nModels should be evaluated taking into account real financial costs of the application\n\n\nAlgorithms should be developed to incorporate those financial costs\n\n\n<center>\n<img src=\"https://raw.githubusercontent.com/albahnsen/CostSensitiveClassification/master/logo.png\" style=\"width: 600px;\" align=\"middle\">\n</center>\nCostCla Library\n\n\nCostCla is a Python open source cost-sensitive classification library built on top of Scikit-learn, Pandas and Numpy. \n\n\nSource code, binaries and documentation are distributed under 3-Clause BSD license in the website http://albahnsen.com/CostSensitiveClassification/\n\n\nCostCla Algorithms\n\n\nCost-proportionate over-sampling <a href=\"http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.29.514\" target=\"_blank\">[Elkan, 2001]</a> \n\n\nSMOTE <a href=\"http://arxiv.org/abs/1106.1813\" target=\"_blank\">[Chawla et al., 2002]</a> \n\n\nCost-proportionate rejection-sampling <a href=\"http://ieeexplore.ieee.org/lpdocs/epic03/wrapper.htm?arnumber=1250950\" target=\"_blank\">[Zadrozny et al., 2003]</a>\n\n\nThresholding optimization <a href=\"http://www.aaai.org/Papers/AAAI/2006/AAAI06-076.pdf\" target=\"_blank\">[Sheng and Ling, 2006]</a> \n\n\nBayes minimum risk <a href=\"http://albahnsen.com/files/%20Improving%20Credit%20Card%20Fraud%20Detection%20by%20using%20Calibrated%20Probabilities%20-%20Publish.pdf\" target=\"_blank\">[Correa Bahnsen et al., 2014a]</a> \n\n\nCost-sensitive logistic regression <a href=\"http://albahnsen.com/files/Example-Dependent%20Cost-Sensitive%20Logistic%20Regression%20for%20Credit%20Scoring_publish.pdf\" target=\"_blank\">[Correa Bahnsen et al., 2014b]</a> \n\n\nCost-sensitive decision trees <a href=\"http://albahnsen.com/files/Example-Dependent%20Cost-Sensitive%20Decision%20Trees.pdf\" target=\"_blank\">[Correa Bahnsen et al., 2015a]</a> 
\n\n\nCost-sensitive ensemble methods: cost-sensitive bagging, cost-sensitive pasting, cost-sensitive random forest and cost-sensitive random patches <a href=\"http://arxiv.org/abs/1505.04637\" target=\"_blank\">[Correa Bahnsen et al., 2015c]</a>\n\n\nCostCla Databases\n\n\nCredit Scoring 1 - Kaggle credit competition <a href=\"https://www.kaggle.com/c/GiveMeSomeCredit\" target=\"_blank\">[Data]</a>, cost matrix: <a href=\"http://albahnsen.com/files/Example-Dependent%20Cost-Sensitive%20Logistic%20Regression%20for%20Credit%20Scoring_publish.pdf\" target=\"_blank\">[Correa Bahnsen et al., 2014]</a> \n\n\nCredit Scoring 2 - PAKDD2009 Credit <a href=\"http://sede.neurotech.com.br/PAKDD2009/\" target=\"_blank\">[Data]</a>, cost matrix: <a href=\"http://albahnsen.com/files/Example-Dependent%20Cost-Sensitive%20Logistic%20Regression%20for%20Credit%20Scoring_publish.pdf\" target=\"_blank\">[Correa Bahnsen et al., 2014a]</a> \n\n\nDirect Marketing - UCI Bank Marketing <a href=\"https://archive.ics.uci.edu/ml/datasets/Bank+Marketing\" target=\"_blank\">[Data]</a>, cost matrix: <a href=\"http://albahnsen.com/files/%20Improving%20Credit%20Card%20Fraud%20Detection%20by%20using%20Calibrated%20Probabilities%20-%20Publish.pdf\" target=\"_blank\">[Correa Bahnsen et al., 2014b]</a> \n\n\nChurn Modeling, soon \n\n\nFraud Detection, soon\n\n\nFuture Work\n\nCSDT in Cython\nCost-sensitive class-dependent algorithms\nSampling algorithms\nProbability calibration (Only ROCCH)\nOther algorithms\nMore databases\n\nYou can find the presentation and the IPython Notebook here:\n\n<a href=\"http://nbviewer.ipython.org/format/slides/github/albahnsen/CostSensitiveClassification/blob/master/doc/tutorials/slides_edcs_fraud_detection.ipynb#/\" target=\"_blank\">http://nbviewer.ipython.org/format/slides/github/\nalbahnsen/CostSensitiveClassification/blob/\nmaster/doc/tutorials/slides_edcs_fraud_detection.ipynb#/</a>\n<a 
href=\"https://github.com/albahnsen/CostSensitiveClassification/blob/master/doc/tutorials/slides_edcs_fraud_detection.ipynb\" target=\"_blank\">https://github.com/albahnsen/CostSensitiveClassification/ blob/master/doc/tutorials/slides_edcs_fraud_detection.ipynb</a>\n\n<h1 class=\"bigtitle\">Thanks!</h1>\n<center>\n<table style=\"border-collapse: collapse; border-top-color: rgb(255, 255, 255); border-right-color: rgb(255, 255, 255); border-bottom-color: rgb(255, 255, 255); border-left-color: rgb(255, 255, 255); border-top-width: 1px; border-right-width: 1px; border-bottom-width: 1px; border-left-width: 1px; \" border=\"0\" bordercolor=\"#888\" cellspacing=\"0\" align=\"left\">\n <tr>\n <td>\n<a href=\"mailto: al.bahnsen@gmail.com\"><svg width=\"40px\" height=\"40px\" viewBox=\"0 0 60 60\" version=\"1.1\" xmlns=\"http://www.w3.org/2000/svg\" xmlns:xlink=\"http://www.w3.org/1999/xlink\" xmlns:sketch=\"http://www.bohemiancoding.com/sketch/ns\">\n <path d=\"M0.224580688,30 C0.224580688,13.4314567 13.454941,0 29.7754193,0 C46.0958976,0 59.3262579,13.4314567 59.3262579,30 C59.3262579,46.5685433 46.0958976,60 29.7754193,60 C13.454941,60 0.224580688,46.5685433 0.224580688,30 Z M0.224580688,30\" fill=\"#FFFFFF\" sketch:type=\"MSShapeGroup\"></path>\n <path d=\"M35.0384324,31.6384006 L47.2131148,40.5764264 L47.2131148,20 L35.0384324,31.6384006 Z M13.7704918,20 L13.7704918,40.5764264 L25.9449129,31.6371491 L13.7704918,20 Z M30.4918033,35.9844891 L27.5851037,33.2065217 L13.7704918,42 L47.2131148,42 L33.3981762,33.2065217 L30.4918033,35.9844891 Z M46.2098361,20 L14.7737705,20 L30.4918033,32.4549304 L46.2098361,20 Z M46.2098361,20\" id=\"Shape\" fill=\"#333333\" sketch:type=\"MSShapeGroup\"></path>\n <path d=\"M59.3262579,30 C59.3262579,46.5685433 46.0958976,60 29.7754193,60 C23.7225405,60 18.0947051,58.1525134 13.4093244,54.9827754 L47.2695458,5.81941103 C54.5814438,11.2806503 59.3262579,20.0777973 59.3262579,30 Z M59.3262579,30\" id=\"reflec\" fill-opacity=\"0.08\" 
fill=\"#000000\" sketch:type=\"MSShapeGroup\"></path>\n</svg></a> \n</td> \n<td>\n<a href=\"mailto: al.bahnsen@gmail.com\" target=\"_blank\">al.bahnsen@gmail.com</a> \n</td> </tr><tr> <td>\n\n<a href=\"http://github.com/albahnsen\"><svg width=\"40px\" height=\"40px\" viewBox=\"0 0 60 60\" version=\"1.1\" xmlns=\"http://www.w3.org/2000/svg\" xmlns:xlink=\"http://www.w3.org/1999/xlink\" xmlns:sketch=\"http://www.bohemiancoding.com/sketch/ns\">\n <path d=\"M0.336871032,30 C0.336871032,13.4314567 13.5672313,0 29.8877097,0 C46.208188,0 59.4385483,13.4314567 59.4385483,30 C59.4385483,46.5685433 46.208188,60 29.8877097,60 C13.5672313,60 0.336871032,46.5685433 0.336871032,30 Z M0.336871032,30\" id=\"Github\" fill=\"#333333\" sketch:type=\"MSShapeGroup\"></path>\n <path d=\"M18.2184245,31.9355566 C19.6068506,34.4507902 22.2845295,36.0156764 26.8007287,36.4485173 C26.1561023,36.9365335 25.3817877,37.8630984 25.2749857,38.9342607 C24.4644348,39.4574749 22.8347506,39.62966 21.5674303,39.2310659 C19.7918469,38.6717023 19.1119377,35.1642642 16.4533306,35.6636959 C15.8773626,35.772144 15.9917933,36.1507609 16.489567,36.4722998 C17.3001179,36.9955141 18.0629894,37.6500075 18.6513541,39.04366 C19.1033554,40.113871 20.0531304,42.0259813 23.0569369,42.0259813 C24.2489236,42.0259813 25.0842679,41.8832865 25.0842679,41.8832865 C25.0842679,41.8832865 25.107154,44.6144649 25.107154,45.6761142 C25.107154,46.9004355 23.4507693,47.2457569 23.4507693,47.8346108 C23.4507693,48.067679 23.9990832,48.0895588 24.4396415,48.0895588 C25.3102685,48.0895588 27.1220883,47.3646693 27.1220883,46.0918317 C27.1220883,45.0806012 27.1382993,41.6806599 27.1382993,41.0860982 C27.1382993,39.785673 27.8372803,39.3737607 27.8372803,39.3737607 C27.8372803,39.3737607 27.924057,46.3153869 27.6704022,47.2457569 C27.3728823,48.3397504 26.8360115,48.1846887 26.8360115,48.6727049 C26.8360115,49.3985458 29.0168704,48.8505978 29.7396911,47.2571725 C30.2984945,46.0166791 30.0543756,39.2072834 30.0543756,39.2072834 
L30.650369,39.1949165 C30.650369,39.1949165 30.6837446,42.3123222 30.6637192,43.7373675 C30.6427402,45.2128317 30.5426134,47.0792797 31.4208692,47.9592309 C31.9977907,48.5376205 33.868733,49.5526562 33.868733,48.62514 C33.868733,48.0857536 32.8436245,47.6424485 32.8436245,46.1831564 L32.8436245,39.4688905 C33.6618042,39.4688905 33.5387911,41.6768547 33.5387911,41.6768547 L33.5988673,45.7788544 C33.5988673,45.7788544 33.4186389,47.2733446 35.2190156,47.8992991 C35.8541061,48.1209517 37.2139245,48.1808835 37.277815,47.8089257 C37.3417055,47.4360167 35.6405021,46.8814096 35.6252446,45.7236791 C35.6157088,45.0178155 35.6567131,44.6059032 35.6567131,41.5379651 C35.6567131,38.470027 35.2438089,37.336079 33.8048426,36.4323453 C38.2457082,35.9766732 40.9939527,34.880682 42.3337458,31.9450695 C42.4383619,31.9484966 42.8791491,30.5737742 42.8219835,30.5742482 C43.1223642,29.4659853 43.2844744,28.1550957 43.3168964,26.6025764 C43.3092677,22.3930799 41.2895654,20.9042975 40.9014546,20.205093 C41.4736082,17.0182425 40.8060956,15.5675121 40.4961791,15.0699829 C39.3518719,14.6637784 36.5149435,16.1145088 34.9653608,17.1371548 C32.438349,16.3998984 27.0982486,16.4712458 25.0957109,17.3274146 C21.4005522,14.6875608 19.445694,15.0918628 19.445694,15.0918628 C19.445694,15.0918628 18.1821881,17.351197 19.1119377,20.6569598 C17.8961113,22.2028201 16.9902014,23.2968136 16.9902014,26.1963718 C16.9902014,27.8297516 17.1828264,29.2918976 17.6176632,30.5685404 C17.5643577,30.5684093 18.2008493,31.9359777 18.2184245,31.9355566 Z M18.2184245,31.9355566\" id=\"Path\" fill=\"#FFFFFF\" sketch:type=\"MSShapeGroup\"></path>\n <path d=\"M59.4385483,30 C59.4385483,46.5685433 46.208188,60 29.8877097,60 C23.8348308,60 18.2069954,58.1525134 13.5216148,54.9827754 L47.3818361,5.81941103 C54.6937341,11.2806503 59.4385483,20.0777973 59.4385483,30 Z M59.4385483,30\" id=\"reflec\" fill-opacity=\"0.08\" fill=\"#000000\" sketch:type=\"MSShapeGroup\"></path>\n</svg></a>\n\n</td><td>\n<a 
href=\"http://github.com/albahnsen\" target=\"_blank\">http://github.com/albahnsen</a> \n\n</td> </tr><tr> <td>\n\n<a href=\"http://linkedin.com/in/albahnsen\"><svg width=\"40px\" height=\"40px\" viewBox=\"0 0 60 60\" version=\"1.1\" xmlns=\"http://www.w3.org/2000/svg\" xmlns:xlink=\"http://www.w3.org/1999/xlink\" xmlns:sketch=\"http://www.bohemiancoding.com/sketch/ns\">\n <path d=\"M0.449161376,30 C0.449161376,13.4314567 13.6795217,0 30,0 C46.3204783,0 59.5508386,13.4314567 59.5508386,30 C59.5508386,46.5685433 46.3204783,60 30,60 C13.6795217,60 0.449161376,46.5685433 0.449161376,30 Z M0.449161376,30\" fill=\"#007BB6\" sketch:type=\"MSShapeGroup\"></path>\n <path d=\"M22.4680392,23.7098144 L15.7808366,23.7098144 L15.7808366,44.1369537 L22.4680392,44.1369537 L22.4680392,23.7098144 Z M22.4680392,23.7098144\" id=\"Path\" fill=\"#FFFFFF\" sketch:type=\"MSShapeGroup\"></path>\n <path d=\"M22.9084753,17.3908761 C22.8650727,15.3880081 21.4562917,13.862504 19.1686418,13.862504 C16.8809918,13.862504 15.3854057,15.3880081 15.3854057,17.3908761 C15.3854057,19.3522579 16.836788,20.9216886 19.0818366,20.9216886 L19.1245714,20.9216886 C21.4562917,20.9216886 22.9084753,19.3522579 22.9084753,17.3908761 Z M22.9084753,17.3908761\" id=\"Path\" fill=\"#FFFFFF\" sketch:type=\"MSShapeGroup\"></path>\n <path d=\"M46.5846502,32.4246563 C46.5846502,26.1503226 43.2856534,23.2301456 38.8851658,23.2301456 C35.3347011,23.2301456 33.7450983,25.2128128 32.8575489,26.6036896 L32.8575489,23.7103567 L26.1695449,23.7103567 C26.2576856,25.6271338 26.1695449,44.137496 26.1695449,44.137496 L32.8575489,44.137496 L32.8575489,32.7292961 C32.8575489,32.1187963 32.9009514,31.5097877 33.0777669,31.0726898 C33.5610713,29.8530458 34.6614937,28.5902885 36.5089747,28.5902885 C38.9297703,28.5902885 39.8974476,30.4634101 39.8974476,33.2084226 L39.8974476,44.1369537 L46.5843832,44.1369537 L46.5846502,32.4246563 Z M46.5846502,32.4246563\" id=\"Path\" fill=\"#FFFFFF\" sketch:type=\"MSShapeGroup\"></path>\n <path 
d=\"M59.5508386,30 C59.5508386,46.5685433 46.3204783,60 30,60 C23.9471212,60 18.3192858,58.1525134 13.6339051,54.9827754 L47.4941264,5.81941103 C54.8060245,11.2806503 59.5508386,20.0777973 59.5508386,30 Z M59.5508386,30\" id=\"reflec\" fill-opacity=\"0.08\" fill=\"#000000\" sketch:type=\"MSShapeGroup\"></path>\n</svg></a> \n</td> <td>\n<a href=\"http://linkedin.com/in/albahnsen\" target=\"_blank\">http://linkedin.com/in/albahnsen</a> \n\n</td> </tr><tr> <td>\n\n<a href=\"http://twitter.com/albahnsen\"><svg width=\"40px\" height=\"40px\" viewBox=\"0 0 60 60\" version=\"1.1\" xmlns=\"http://www.w3.org/2000/svg\" xmlns:xlink=\"http://www.w3.org/1999/xlink\" xmlns:sketch=\"http://www.bohemiancoding.com/sketch/ns\">\n <path d=\"M0,30 C0,13.4314567 13.4508663,0 30.0433526,0 C46.6358389,0 60.0867052,13.4314567 60.0867052,30 C60.0867052,46.5685433 46.6358389,60 30.0433526,60 C13.4508663,60 0,46.5685433 0,30 Z M0,30\" fill=\"#4099FF\" sketch:type=\"MSShapeGroup\"></path>\n <path d=\"M29.2997675,23.8879776 L29.3627206,24.9260453 L28.3135016,24.798935 C24.4943445,24.3116787 21.1578281,22.6592444 18.3249368,19.8840023 L16.9399677,18.5069737 L16.5832333,19.5238563 C15.8277956,21.7906572 16.3104363,24.1845684 17.8842648,25.7946325 C18.72364,26.6844048 18.5347806,26.8115152 17.0868584,26.2818888 C16.5832333,26.1124083 16.1425613,25.985298 16.1005925,26.0488532 C15.9537019,26.1971486 16.457327,28.1249885 16.8560302,28.8876505 C17.4016241,29.9469033 18.5137962,30.9849709 19.7308902,31.5993375 L20.7591248,32.0865938 L19.5420308,32.1077788 C18.3669055,32.1077788 18.3249368,32.1289639 18.4508431,32.57385 C18.8705307,33.9508786 20.5282967,35.4126474 22.3749221,36.048199 L23.6759536,36.4930852 L22.5427971,37.1710069 C20.8640467,38.1455194 18.891515,38.6963309 16.9189833,38.738701 C15.9746862,38.759886 15.1982642,38.8446262 15.1982642,38.9081814 C15.1982642,39.1200319 17.7583585,40.306395 19.2482495,40.7724662 C23.7179224,42.1494948 29.0269705,41.5563132 33.0140027,39.2047722 
C35.846894,37.5311528 38.6797853,34.2050993 40.0018012,30.9849709 C40.7152701,29.2689815 41.428739,26.1335934 41.428739,24.6294545 C41.428739,23.654942 41.4916922,23.5278317 42.6668174,22.3626537 C43.359302,21.6847319 44.0098178,20.943255 44.135724,20.7314044 C44.3455678,20.3288884 44.3245835,20.3288884 43.2543801,20.6890343 C41.4707078,21.324586 41.2188952,21.2398458 42.1002392,20.2865183 C42.750755,19.6085965 43.527177,18.3798634 43.527177,18.0197174 C43.527177,17.9561623 43.2124113,18.0620876 42.8556769,18.252753 C42.477958,18.4646036 41.6385828,18.7823794 41.0090514,18.9730449 L39.8758949,19.3331908 L38.8476603,18.634084 C38.281082,18.252753 37.4836756,17.829052 37.063988,17.7019416 C35.9937846,17.4053509 34.357003,17.447721 33.3917215,17.7866818 C30.768674,18.7400093 29.110908,21.1974757 29.2997675,23.8879776 Z M29.2997675,23.8879776\" id=\"Path\" fill=\"#FFFFFF\" sketch:type=\"MSShapeGroup\"></path>\n <path d=\"M60.0867052,30 C60.0867052,46.5685433 46.6358389,60 30.0433526,60 C23.8895925,60 18.1679598,58.1525134 13.4044895,54.9827754 L47.8290478,5.81941103 C55.2628108,11.2806503 60.0867052,20.0777973 60.0867052,30 Z M60.0867052,30\" id=\"reflec\" fill-opacity=\"0.08\" fill=\"#000000\" sketch:type=\"MSShapeGroup\"></path>\n</svg></a>\n</td> <td>\n<a href=\"http://twitter.com/albahnsen\" target=\"_blank\">@albahnsen</a> \n\n</td> </tr>\n</table>\n</center>",
"#Format from https://github.com/ellisonbg/talk-2013-scipy\nfrom IPython.display import display, HTML\ns = \"\"\"\n\n<style>\n\n.rendered_html {\n font-family: \"proxima-nova\", helvetica;\n font-size: 100%;\n line-height: 1.3;\n}\n\n.rendered_html h1 {\n margin: 0.25em 0em 0.5em;\n color: #015C9C;\n text-align: center;\n line-height: 1.2; \n page-break-before: always;\n}\n\n.rendered_html h2 {\n margin: 1.1em 0em 0.5em;\n color: #26465D;\n line-height: 1.2;\n}\n\n.rendered_html h3 {\n margin: 1.1em 0em 0.5em;\n color: #002845;\n line-height: 1.2;\n}\n\n.rendered_html li {\n line-height: 1.5; \n}\n\n.prompt {\n font-size: 120%; \n}\n\n.CodeMirror-lines {\n font-size: 120%; \n}\n\n.output_area {\n font-size: 120%; \n}\n\n#notebook {\n background-image: url('files/images/witewall_3.png');\n}\n\nh1.bigtitle {\n margin: 4cm 1cm 4cm 1cm;\n font-size: 300%;\n}\n\nh3.point {\n font-size: 200%;\n text-align: center;\n margin: 2em 0em 2em 0em;\n #26465D\n}\n\n.logo {\n margin: 20px 0 20px 0;\n}\n\na.anchor-link {\n display: none;\n}\n\nh1.title { \n font-size: 250%;\n}\n\n</style>\n\"\"\"\ndisplay(HTML(s))"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
thempel/adaptivemd
|
examples/rp/1_example_setup_project.ipynb
|
lgpl-2.1
|
[
"AdaptiveMD\nExample 1 - Setup\n0. Imports",
"import sys, os",
"We want to stop RP from reporting all sorts of stuff for this example, so we set a specific environment variable to tell RP to do so. If you want to see what RP reports, change it to REPORT.",
"# verbose = os.environ.get('RADICAL_PILOT_VERBOSE', 'REPORT')\nos.environ['RADICAL_PILOT_VERBOSE'] = 'ERROR'",
"We will import the appropriate parts from AdaptiveMD as we go along, so it is clear what is needed at each stage. Usually you would place the block of imports at the beginning of your script or notebook, as suggested in PEP8.",
"from adaptivemd import Project",
"Let's open a project with a UNIQUE name. This will be the name used in the DB, so make sure it is new and not too short. Opening a project will always create a non-existing project and reopen an existing one. You cannot choose between opening modes as you would with a file. This is a precaution against accidentally deleting your project.",
"project = Project('test')",
"Now we have a handle for our project. First thing is to set it up to work on a resource.\n1. Set the resource\nWhat is a resource? A Resource specifies a shared filesystem with one or more clusters attached to it. This can be your local machine, a regular cluster, or even a group of clusters that can access the same FS (like Titan, Eos and Rhea do).\nOnce you have chosen the place to store your results, it is set for the project and can (at least should) not be altered, since all file references are made to match this resource. Currently you can use the Fu Berlin Allegro Cluster or run locally. There are two specific local adaptations that already include the path to your conda installation. This simplifies the use of openmm or pyemma.\nLet us pick a local resource on a laptop for now.",
"from adaptivemd import LocalJHP, LocalSheep, AllegroCluster\n\nresource_id = 'local.jhp'\n\nif resource_id == 'local.jhp':\n    project.initialize(LocalJHP())\nelif resource_id == 'local.sheep':\n    project.initialize(LocalSheep())\nelif resource_id == 'fub.allegro':\n    project.initialize(AllegroCluster())",
"TaskGenerators\nTaskGenerators are instances whose purpose is to create tasks to be executed. This is similar to the\nway Kernels work. A TaskGenerator will generate Task objects for you which will be translated into a ComputeUnitDescription and executed. In simple terms:\nThe task generator creates the bash scripts for you that run a simulation or run pyemma.\nA task generator will be initialized with all parameters needed to make it work, and it will know what needs to be staged to be used.",
"from adaptivemd.engine.openmm import OpenMMEngine\nfrom adaptivemd.analysis.pyemma import PyEMMAAnalysis\n\nfrom adaptivemd import File, Directory",
"The engine\nA task generator that will create jobs to run simulations. Currently it uses a little python script that will execute OpenMM. It requires conda to be added to the PATH variable or at least openmm to be installed on the cluster. If you set up your resource correctly then this should all happen automatically.\nFirst we define a File object. These are used to represent files anywhere, on the cluster or in your local application. A File, like any complex object in adaptivemd, can have a .name attribute that makes it easier to find later.",
"pdb_file = File('file://../files/alanine/alanine.pdb').named('initial_pdb')",
"Here we used a special prefix that can point to specific locations. \n\nfile:// points to files on your local machine. \nunit:// specifies files in the current working directory of the executing node. Usually these are temporary files for a single execution.\nshared:// specifies the root shared FS directory (e.g. NO_BACKUP/ on Allegro). Use this to import and export files that are already on the cluster.\nstaging:// a special scheduler-specific directory where files are moved after they are completed on a node and should be kept for later. Use this to refer to files that should be stored or reused. After one execution is done you usually move all important files to this place.\nsandbox:// this should not concern you and is a special RP folder where all pilot/session folders are located.\n\nSo let's do an example for an OpenMM engine. This is simply a small python script that makes OpenMM look like an executable. It runs a simulation given an initial frame, OpenMM-specific system.xml and integrator.xml files, and some additional parameters like the platform name, how often to store simulation frames, etc.",
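(Aside: the prefix scheme above can be pictured as a tiny resolver. The sketch below is purely hypothetical; the root directories are invented stand-ins for illustration and are not AdaptiveMD's actual path handling, which is scheduler/resource specific.)

```python
# Hypothetical sketch of how location prefixes could map to concrete paths.
# The root directories here are invented assumptions, not AdaptiveMD's layout.
ROOTS = {
    'file': '',                  # local machine, path used as-is
    'unit': '/tmp/unit-cwd',     # working directory of the executing node
    'shared': '/NO_BACKUP',      # root of the shared filesystem
    'staging': '/staging-area',  # scheduler-specific staging directory
}

def resolve(url):
    """Split 'prefix://path' and join the path with the assumed root."""
    prefix, _, path = url.partition('://')
    if prefix not in ROOTS:
        raise ValueError('unknown prefix: %s' % prefix)
    # file:// paths are taken verbatim; other prefixes get a root prepended
    return path if prefix == 'file' else ROOTS[prefix] + '/' + path

print(resolve('staging://alanine.pdb'))   # -> /staging-area/alanine.pdb
```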
"engine = OpenMMEngine(\n pdb_file=pdb_file,\n system_file=File('file://../files/alanine/system.xml'),\n integrator_file=File('file://../files/alanine/integrator.xml'),\n args='-r --report-interval 1 -p CPU --store-interval 1'\n).named('openmm')",
"To explain this: we now have an OpenMMEngine which uses the previously made pdb File object and the location defined in there. We do the same with File objects for the OpenMM XML files, and pass some args to store each frame (to keep it fast) and to run using the CPU platform.\nLast we name the engine openmm to find it later.",
"engine.name",
"The modeller\nThe instance to compute an MSM model of existing trajectories that you pass it. It is initialized with a .pdb file that is used to create features between the $c_\\alpha$ atoms. This implementation requires a PDB, but in general this is not necessary. It is specific to my PyEMMAAnalysis show case.",
"modeller = PyEMMAAnalysis(\n pdb_file=pdb_file\n).named('pyemma')",
"Again we name it pyemma for later reference.\nAdd generators to project\nNext step is to add these to the project for later usage. We pick the .generators store and just add it. Consider a store to work like a set() in python. It contains objects only once and is not ordered. Therefore we need a name to find the objects later. Of course you can always iterate over all objects, but the order is not given.\nTo be precise there is an order in the time of creation of the object, but it is only accurate to seconds and it really is the time it was created and not stored.",
"import datetime\ndatetime.datetime.fromtimestamp(modeller.__time__).strftime(\"%Y-%m-%d %H:%M:%S\")\n\nproject.generators.add(engine)\nproject.generators.add(modeller)\n\nprint(project.generators)",
"Note that you cannot add the same engine twice. But if you create a new engine it will be considered different and hence you can store it again. \nCreate one initial trajectory\nFinally we are ready to run a first trajectory that we will store as a point of reference in the project. Also it is nice to see how it works in general.\n1. Open a scheduler\na job on the cluster to execute tasks\nthe .get_scheduler function delegates to the resource and uses the get_scheduler functions from there. This is merely a convenience since a Scheduler has the responsibility to open queues on the resource for you. \nYou have the same options as the queue has in the resource. This is often the number of cores and walltime, but can be additional ones, too. \nLet's open the default queue and use a single core for it since we only want to run one simulation.",
"scheduler = project.get_scheduler(cores=1)",
"Next we create the parameters for the engine to run the simulation. Since it seemed appropriate we use a Trajectory object (a special File with initial frame and length) as the input. You could of course pass these things separately, but this way we can actually reference the not-yet-existing trajectory and do stuff with it.\nA Trajectory should have a unique name, and so there is a project function to get you one. It uses numbers and makes sure that this number has not been used yet in the project.",
"trajectory = project.new_trajectory(engine['pdb_file'], 100)\ntrajectory",
"This says: the initial frame is alanine.pdb, run for 100 frames, and the result is named xxxxxxxx.dcd.\nNow, we want this trajectory to actually exist, so we have to make it (on the cluster which is waiting for things to do). So we need a Task object to run a simulation. Since Task objects are very flexible there are helper functions to get them to do what you want, like the ones we already created just before. Let's use the openmm engine to create an openmm task.",
"task = engine.task_run_trajectory(trajectory)",
"That's it: just take a trajectory description and turn it into a task that contains the shell commands, needed files, etc. \nThe last step is to really run the task. You can just use a scheduler as a function or call the .submit() method.",
"scheduler(task)",
"Now we have to wait. To see if we are done, you can check whether the scheduler is still running tasks.",
"scheduler.is_idle\n\nprint(scheduler.generators)",
"or you wait until it becomes idle using .wait()",
"# scheduler.wait()",
"If all went as expected we will now have our first trajectory.",
"print(project.files)\nprint(project.trajectories)",
"Excellent, so cleanup and close our queue",
"scheduler.exit()",
"and close the project.",
"project.close()",
"The final project.close() will also shut down all open schedulers for you, so the exit command would not be necessary here. It is relevant if you want to exit the queue as soon as possible to save walltime.\nSummary\nYou have finally created an AdaptiveMD project and run your first trajectory. Since the project exists now, it is\nmuch easier to run more trajectories."
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
ga7g08/ga7g08.github.io
|
_notebooks/09-02-2017-The-Gelman-Rubin-Statistic-and-emcee.ipynb
|
mit
|
[
"%matplotlib inline\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport emcee",
"The Gelman-Rubin statistic and emcee\nIn this post, I'll give a simple demo of using the Gelman-Rubin statistic (see this paper for an overview and alternatives) to quantify convergence when using the emcee software package. There are issues with this approach which remain to be addressed. The Gelman-Rubin statistic applies to cases where one runs multiple independent MCMC simulations. emcee uses an ensemble sampler: n parallel walkers are run and, when jumping, the position of the other walkers is used to generate proposal steps (for details, you may want to read this). As such, the n walkers are certainly not independent. I won't go into details of the issues this may cause here as I've not yet investigated it fully, but instead I will simply implement it naively and see if it works. \nThe initial setup of the problem follows this post with some minor modifications.\nData\nFirst, we generate some fake data to which we will fit a model:",
"# Model parameters\nA = 2.3\nf = 1\nX0 = 2.5\n\n# Noise parameter\nsigma = 0.6\n\nN = 500\nt = np.sort(np.random.uniform(0, 10, N))\nX = X0 + A * np.sin(2 * np.pi * f * t) + np.random.normal(0, sigma, N)\n\nplt.plot(t, X, \"o\")\nplt.xlabel(\"$t$\")\nplt.ylabel(r\"$X_{\\mathrm{obs}}$\")\nplt.show()",
"Set-up for using emcee\nDefine some generic helper functions",
"def Generic_lnuniformprior(theta, theta_lims):\n    \"\"\" Generic uniform priors on theta \n    \n    Parameters\n    ----------\n    theta : array_like \n        Value of the parameters\n    theta_lims: array_like of shape (len(theta), 2)\n        Array of pairs [min, max] for each theta parameter\n        \n    \"\"\"\n    \n    theta_lims = np.array(theta_lims)\n    \n    if all(theta - theta_lims[:, 0] > 0 ) and all(theta_lims[:, 1] - theta > 0):\n        return np.prod(1.0/np.diff(theta_lims, axis=1))\n    return -np.inf\n\ndef Generic_lnlike(theta, t, x, model):\n    \"\"\" Generic likelihood function for signal in Gaussian noise\n    \n    Parameters\n    ----------\n    theta : array_like\n        Value of the parameters, the noise strength `sigma` should ALWAYS be\n        the last element in the list\n    t : array_like\n        The independent variable\n    x : array_like\n        The observed dependent variable\n    model : func\n        Signal model, calling `model(theta[:-1], t)` should\n        produce the corresponding value of the signal \n        alone `x_val`, without noise. \n        \n    \"\"\"\n    \n    sigma2 = theta[-1]**2\n    return -0.5*(np.sum((x-model(theta[:-1], t))**2 / sigma2 + np.log(2*np.pi*sigma2))) \n    \ndef PlotWalkers(sampler, symbols=None, alpha=0.4, color=\"k\", PSRF=None, PSRFx=None):\n    \"\"\" Plot all the chains from a sampler \"\"\"\n    nwalkers, nsteps, ndim = sampler.chain.shape\n\n    fig, axes = plt.subplots(ndim, 1, sharex=True, figsize=(8, 9))\n\n    for i in range(ndim):\n        axes[i].plot(sampler.chain[:, :, i].T, color=\"k\", alpha=alpha)\n        if symbols:\n            axes[i].set_ylabel(symbols[i])\n        if PSRF is not None:\n            PSRF = np.array(PSRF)\n            PSRFx = np.array(PSRFx)\n            ax = axes[i].twinx()\n            idx = np.argmin(np.abs(PSRF[:, i] - 1.1))\n            ax.plot(PSRFx[:idx+1], PSRF[:idx+1, i], '-r')\n            ax.plot(PSRFx[idx:], PSRF[idx:, i], '--r') ",
"Define model and uniform prior",
"def model(theta, t):\n A, f, X0 = theta \n return X0 + A * np.sin(2 * np.pi * f * t)\n \nlnlike = lambda theta, x, y: Generic_lnlike(theta, x, y, model) \n\n# Tie the two together\ndef lnprob(theta, x, y):\n \"\"\" Generic function ties together the lnprior and lnlike \"\"\" \n lp = lnprior(theta)\n if not np.isfinite(lp):\n return -np.inf\n return lp + lnlike(theta, x, y)\n \n# Prior\ntheta_lims = [[-100, 200], \n [0.95, 1.05], \n [0.0, 100.0],\n [0.1, 1.3]\n ]\nlnprior = lambda theta: Generic_lnuniformprior(theta, theta_lims) ",
"Calculating the PSRF\nHere, I follow the description on this blog for how to calculate the Potential Scale Reduction Factor (PSRF), a measure of the convergence. For PSRF < 1.2, Gelman & Rubin state that the chains have converged.\nIn practical terms, the sampler is run for a number of steps, the PSRF is calculated, and then the sampler is run again. Here I simply plot how this changes for each parameter. This becomes practically useful when some measure of the convergence (say, for example, the maximum PSRF) is used to end a run prematurely (before the maximum number of steps). \nFirst, here is a convenience function which computes the PSRF:",
"def get_convergence_statistic(i, sampler, nwalkers, convergence_length=10,\n convergence_period=10):\n s = sampler.chain[:, i-convergence_length+1:i+1, :]\n within_std = np.mean(np.var(s, axis=1), axis=0)\n per_walker_mean = np.mean(s, axis=1)\n mean = np.mean(per_walker_mean, axis=0)\n between_std = np.sqrt(np.mean((per_walker_mean-mean)**2, axis=0))\n W = within_std\n B_over_n = between_std**2 / convergence_period\n Vhat = ((convergence_period-1.)/convergence_period * W\n + B_over_n + B_over_n / float(nwalkers))\n c = Vhat/W\n return i - convergence_period/2, c",
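Before wiring it into emcee, the statistic itself can be sanity-checked on synthetic chains. The helper below is a standalone, textbook-form PSRF (note its between-chain normalization differs slightly from the convenience function above, which follows the blog's convention): chains drawn from the same distribution should give a value near 1, and chains with different means a value well above 1.

```python
import numpy as np

def psrf(chains):
    """Potential Scale Reduction Factor for an array of shape
    (nchains, nsteps): Vhat / W in the Gelman & Rubin sense."""
    m, n = chains.shape
    W = np.mean(np.var(chains, axis=1))   # mean within-chain variance
    means = np.mean(chains, axis=1)
    B_over_n = np.var(means)              # between-chain variance of the means
    Vhat = (n - 1) / n * W + B_over_n + B_over_n / m
    return Vhat / W

rng = np.random.RandomState(0)
same = rng.randn(4, 1000)                   # four chains, same distribution
shifted = same + np.arange(4)[:, None]      # four chains, different means
print(psrf(same), psrf(shifted))            # near 1 vs. well above 1
```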
"Okay, now here is an example",
"ndim = 4\nnwalkers = 100\nnsteps = 400\n# Initialise the walkers\nwalker_pos = [[np.random.uniform(tl[0], tl[1]) for tl in theta_lims] \n for i in range(nwalkers)] \n\n# Run the sampler\nsampler = emcee.EnsembleSampler(nwalkers, ndim, lnprob, args=(t, X))\n\nPSRF = []\nPSRFx = []\nfor i, output in enumerate(sampler.sample(walker_pos, iterations=nsteps)):\n idx, c = get_convergence_statistic(i, sampler, nwalkers, \n convergence_length=10, convergence_period=10)\n PSRF.append(c)\n PSRFx.append(idx)\n \nsymbols = [\"$A$\", \"$f$\", \"$X_{0}$\", \"$\\sigma$\"] \nPlotWalkers(sampler, symbols=symbols, PSRF=PSRF, PSRFx=PSRFx)\nplt.show()",
"The reality is that the convergence_period and convergence_length may need to be fine-tuned. And one has to decide how to interpret the four convergence tests. From simple experimentation, if the length is too long, the test provides evidence for convergence when visual inspection indicates the chains have not converged. It is likely this is because the Gelman & Rubin statistic assumes independent chains, which the ensemble walkers are not."
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
MingChen0919/learning-apache-spark
|
notebooks/02-data-manipulation/2.8-sql-functions-to-extend-column-expressions.ipynb
|
mit
|
[
"# create entry points to spark\ntry:\n sc.stop()\nexcept:\n pass\nfrom pyspark import SparkContext, SparkConf\nfrom pyspark.sql import SparkSession\nsc=SparkContext()\nspark = SparkSession(sparkContext=sc)",
"pyspark.sql.functions functions\npyspark.sql.functions is a collection of built-in functions for creating column expressions. These functions greatly extend the methods we can use to manipulate DataFrames and DataFrame columns.\nThere are many SQL functions in the pyspark.sql.functions module. Here I only choose a few to show how they extend the ability to create column expressions.",
"from pyspark.sql import functions as F",
"abs(): create column expression that returns absolute values of a column",
"from pyspark.sql import Row\ndf = sc.parallelize([Row(x=1), Row(x=-1), Row(x=-2)]).toDF()\ndf.show()\n\nx_abs = F.abs(df.x)\nx_abs\n\ndf.select(df.x, x_abs).show()",
"concat(): create column expression that concatenates multiple column values into one",
"df = sc.parallelize([Row(a='apple', b='tree'), Row(a='orange', b='flowers')]).toDF()\ndf.show()\n\nab_concat = F.concat(df.a, df.b)\nab_concat\n\ndf.select(df.a, df.b, ab_concat).show()",
"corr(): create a column expression that returns the Pearson correlation coefficient between two columns",
"mtcars = spark.read.csv('../../data/mtcars.csv', inferSchema=True, header=True)\nmtcars.show(5)\n\ndrat_wt_corr = F.corr(mtcars.drat, mtcars.wt)\ndrat_wt_corr\n\nmtcars.select(drat_wt_corr).show()",
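For reference, the number F.corr returns is the ordinary Pearson coefficient. A plain-Python version (using numpy for the arithmetic, outside Spark) makes the formula explicit:

```python
import numpy as np

def pearson(x, y):
    """Pearson correlation: covariance of the centered columns divided
    by the product of their standard deviations."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    xc, yc = x - x.mean(), y - y.mean()
    return (xc * yc).sum() / np.sqrt((xc ** 2).sum() * (yc ** 2).sum())

print(pearson([1, 2, 3, 4], [2, 4, 6, 8]))   # perfectly linear -> 1.0
```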
"array(): create a column expression that merges multiple column values into an array\nThis function can be used to build the feature column in machine learning models.",
"cols = [eval('mtcars.' + col) for col in mtcars.columns[1:]]\ncols\n\ncols_array = F.array(cols)\ncols_array\n\nmtcars.select(cols_array).show(truncate=False)"
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
GoogleCloudPlatform/training-data-analyst
|
courses/machine_learning/deepdive/05_review/4_preproc.ipynb
|
apache-2.0
|
[
"Preprocessing Using Dataflow\nLearning Objectives\n- Creating datasets for Machine Learning using Dataflow\nIntroduction\nWhile Pandas is fine for experimenting, for operationalization of your workflow, it is better to do preprocessing in Apache Beam. This will also help if you need to preprocess data in flight, since Apache Beam also allows for streaming.",
"#Ensure that we have Apache Beam version installed.\n!pip freeze | grep apache-beam || sudo pip install apache-beam[gcp]==2.12.0\n\nimport tensorflow as tf\nimport apache_beam as beam\nimport shutil\nimport os\nprint(tf.__version__)",
"Next, set the environment variables related to your GCP Project.",
"PROJECT = \"cloud-training-demos\" # Replace with your PROJECT\nBUCKET = \"cloud-training-bucket\" # Replace with your BUCKET\nREGION = \"us-central1\" # Choose an available region for Cloud MLE\nTFVERSION = \"1.14\" # TF version for CMLE to use\n\nimport os\nos.environ[\"BUCKET\"] = BUCKET\nos.environ[\"PROJECT\"] = PROJECT\nos.environ[\"REGION\"] = REGION\n\n%%bash\nif ! gsutil ls | grep -q gs://${BUCKET}/; then\n gsutil mb -l ${REGION} gs://${BUCKET}\nfi",
"Save the query from earlier\nThe data is natality data (record of births in the US). My goal is to predict the baby's weight given a number of factors about the pregnancy and the baby's mother. Later, we will want to split the data into training and eval datasets. The hash of the year-month will be used for that.",
"# Create SQL query using natality data after the year 2000\nquery_string = \"\"\"\nSELECT\n weight_pounds,\n is_male,\n mother_age,\n plurality,\n gestation_weeks,\n FARM_FINGERPRINT(CONCAT(CAST(YEAR AS STRING), CAST(month AS STRING))) AS hashmonth\nFROM\n publicdata.samples.natality\nWHERE\n year > 2000\n\"\"\"\n\n# Call BigQuery and examine in dataframe\nfrom google.cloud import bigquery\nbq = bigquery.Client(project = PROJECT)\n\ndf = bq.query(query_string + \"LIMIT 100\").to_dataframe()\ndf.head()",
"Create ML dataset using Dataflow\nLet's use Cloud Dataflow to read in the BigQuery data, do some preprocessing, and write it out as CSV files.\nInstead of using Beam/Dataflow, I had three other options:\n\nUse Cloud Dataprep to visually author a Dataflow pipeline. Cloud Dataprep also allows me to explore the data, so we could have avoided much of the handcoding of Python/Seaborn calls above as well!\nRead from BigQuery directly using TensorFlow.\nUse the BigQuery console (http://bigquery.cloud.google.com) to run a Query and save the result as a CSV file. For larger datasets, you may have to select the option to \"allow large results\" and save the result into a CSV file on Google Cloud Storage. \n\nHowever, in this case, I want to do some preprocessing, modifying data so that we can simulate what is known if no ultrasound has been performed. If I didn't need preprocessing, I could have used the web console. Also, I prefer to script it out rather than run queries on the user interface, so I am using Cloud Dataflow for the preprocessing.\nThe preprocess function below includes an argument in_test_mode. When this is set to True, running preprocess initiates a local Beam job. This is helpful for quickly debugging your pipeline and ensuring it works before submitting a job to the Cloud. Setting in_test_mode to False will launch a processing job on the Cloud. Go to the Dataflow section of the GCP web console and monitor the running job. It took about 20 minutes for me.\nIf you wish to continue without doing this step, you can copy my preprocessed output:\n<pre>\ngsutil -m cp -r gs://cloud-training-demos/babyweight/preproc gs://YOUR_BUCKET/\n</pre>",
"import apache_beam as beam\nimport datetime, os\n\ndef to_csv(rowdict):\n    # Pull columns from BQ and create a line\n    import hashlib\n    import copy\n    CSV_COLUMNS = \"weight_pounds,is_male,mother_age,plurality,gestation_weeks\".split(',')\n\n    # Create synthetic data where we assume that no ultrasound has been performed\n    # and so we don\"t know sex of the baby. Let\"s assume that we can tell the difference\n    # between single and multiple, but that the error rate in determining the exact number\n    # is high in the absence of an ultrasound.\n    no_ultrasound = copy.deepcopy(rowdict)\n    w_ultrasound = copy.deepcopy(rowdict)\n\n    no_ultrasound[\"is_male\"] = \"Unknown\"\n    if rowdict[\"plurality\"] > 1:\n        no_ultrasound[\"plurality\"] = \"Multiple(2+)\"\n    else:\n        no_ultrasound[\"plurality\"] = \"Single(1)\"\n\n    # Change the plurality column to strings\n    w_ultrasound[\"plurality\"] = [\"Single(1)\", \"Twins(2)\", \"Triplets(3)\", \"Quadruplets(4)\", \"Quintuplets(5)\"][rowdict[\"plurality\"] - 1]\n\n    # Write out two rows for each input row, one with ultrasound and one without\n    for result in [no_ultrasound, w_ultrasound]:\n        data = ','.join([str(result[k]) if k in result else \"None\" for k in CSV_COLUMNS])\n        yield str(\"{}\".format(data))\n    \ndef preprocess(in_test_mode):\n    import shutil, os, subprocess\n    job_name = \"preprocess-babyweight-features\" + \"-\" + datetime.datetime.now().strftime(\"%y%m%d-%H%M%S\")\n\n    if in_test_mode:\n        print(\"Launching local job ... hang on\")\n        OUTPUT_DIR = \"./preproc\"\n        shutil.rmtree(OUTPUT_DIR, ignore_errors=True)\n        os.makedirs(OUTPUT_DIR)\n    else:\n        print(\"Launching Dataflow job {} ... hang on\".format(job_name))\n        OUTPUT_DIR = \"gs://{0}/babyweight/preproc/\".format(BUCKET)\n        try:\n            subprocess.check_call(\"gsutil -m rm -r {}\".format(OUTPUT_DIR).split())\n        except:\n            pass\n\n    options = {\n        \"staging_location\": os.path.join(OUTPUT_DIR, \"tmp\", \"staging\"),\n        \"temp_location\": os.path.join(OUTPUT_DIR, \"tmp\"),\n        \"job_name\": job_name,\n        \"project\": PROJECT,\n        \"teardown_policy\": \"TEARDOWN_ALWAYS\",\n        \"no_save_main_session\": True\n    }\n    opts = beam.pipeline.PipelineOptions(flags = [], **options)\n    if in_test_mode:\n        RUNNER = \"DirectRunner\"\n    else:\n        RUNNER = \"DataflowRunner\"\n        \n    p = beam.Pipeline(RUNNER, options = opts)\n    query = \"\"\"\nSELECT\n    weight_pounds,\n    is_male,\n    mother_age,\n    plurality,\n    gestation_weeks,\n    FARM_FINGERPRINT(CONCAT(CAST(YEAR AS STRING), CAST(month AS STRING))) AS hashmonth\nFROM\n    publicdata.samples.natality\nWHERE\n    year > 2000\n    AND weight_pounds > 0\n    AND mother_age > 0\n    AND plurality > 0\n    AND gestation_weeks > 0\n    AND month > 0\n\"\"\"\n\n    if in_test_mode:\n        query = query + \" LIMIT 100\" \n\n    for step in [\"train\", \"eval\"]:\n        if step == \"train\":\n            selquery = \"SELECT * FROM ({}) WHERE ABS(MOD(hashmonth, 100)) < 80\".format(query)\n        elif step == \"eval\":\n            selquery = \"SELECT * FROM ({}) WHERE ABS(MOD(hashmonth, 100)) >= 80 AND ABS(MOD(hashmonth, 100)) < 90\".format(query)\n        else: \n            selquery = \"SELECT * FROM ({}) WHERE ABS(MOD(hashmonth, 100)) >= 90\".format(query)\n        (p \n         | \"{}_read\".format(step) >> beam.io.Read(beam.io.BigQuerySource(query = selquery, use_standard_sql = True))\n         | \"{}_csv\".format(step) >> beam.FlatMap(to_csv)\n         | \"{}_out\".format(step) >> beam.io.Write(beam.io.WriteToText(os.path.join(OUTPUT_DIR, \"{}.csv\".format(step))))\n        )\n\n    job = p.run()\n    if in_test_mode:\n        job.wait_until_finish()\n        print(\"Done!\")\n    \npreprocess(in_test_mode = False)",
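The `MOD(hashmonth, 100)` splitting used in the queries can be sketched in plain Python. A hashlib digest stands in here for BigQuery's FARM_FINGERPRINT (an assumption made purely for illustration), but the key property is the same: every row from a given year-month hashes to the same bucket, so no month straddles the train/eval boundary.

```python
import hashlib

def bucket(year, month):
    """Deterministic 0-99 bucket from the year-month string.
    hashlib.md5 stands in for BigQuery's FARM_FINGERPRINT here."""
    key = '{}{}'.format(year, month).encode()
    return int(hashlib.md5(key).hexdigest(), 16) % 100

def split(year, month):
    """Mirror of the WHERE clauses: 80% train, 10% eval, 10% held out."""
    b = bucket(year, month)
    if b < 80:
        return 'train'
    elif b < 90:
        return 'eval'
    return 'test'

# Every row from the same month always lands in the same split.
print(split(2003, 7), split(2003, 7))
```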
"For a Cloud preprocessing job (i.e. setting in_test_mode to False), the above step will take 20+ minutes. Go to the GCP web console, navigate to the Dataflow section and <b>wait for the job to finish</b> before you run the following step.\nView results\nWe can have a look at the elements in our bucket to see the results of our pipeline above.",
"!gsutil ls gs://$BUCKET/babyweight/preproc/*-00000*",
"Preprocessing with BigQuery\nCreate a SQL query for BigQuery that will union the ultrasound and no-ultrasound datasets.",
"query = \"\"\"\nWITH CTE_Raw_Data AS (\nSELECT\n weight_pounds,\n CAST(is_male AS STRING) AS is_male,\n mother_age,\n plurality,\n gestation_weeks,\n FARM_FINGERPRINT(CONCAT(CAST(YEAR AS STRING), CAST(month AS STRING))) AS hashmonth\nFROM\n publicdata.samples.natality\nWHERE\n year > 2000\n AND weight_pounds > 0\n AND mother_age > 0\n AND plurality > 0\n AND gestation_weeks > 0\n AND month > 0)\n\n-- Ultrasound\nSELECT\n weight_pounds,\n is_male,\n mother_age,\n CASE\n WHEN plurality = 1 THEN \"Single(1)\"\n WHEN plurality = 2 THEN \"Twins(2)\"\n WHEN plurality = 3 THEN \"Triplets(3)\"\n WHEN plurality = 4 THEN \"Quadruplets(4)\"\n WHEN plurality = 5 THEN \"Quintuplets(5)\"\n ELSE \"NULL\"\n END AS plurality,\n gestation_weeks,\n hashmonth\nFROM\n CTE_Raw_Data\nUNION ALL\n-- No ultrasound\nSELECT\n weight_pounds,\n \"Unknown\" AS is_male,\n mother_age,\n CASE\n WHEN plurality = 1 THEN \"Single(1)\"\n WHEN plurality > 1 THEN \"Multiple(2+)\"\n END AS plurality,\n gestation_weeks,\n hashmonth\nFROM\n CTE_Raw_Data\n\"\"\"",
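The plurality CASE expressions in the query above are easier to audit as a small mapping. This Python sketch simply mirrors the SQL for illustration; it is not part of the pipeline itself.

```python
# Names used by the SQL CASE expressions above.
NAMES = {1: 'Single(1)', 2: 'Twins(2)', 3: 'Triplets(3)',
         4: 'Quadruplets(4)', 5: 'Quintuplets(5)'}

def plurality_label(n, ultrasound=True):
    """Mirror of the SQL CASE: with an ultrasound the exact count is
    known; without one we only distinguish single vs. multiple."""
    if ultrasound:
        return NAMES.get(n, 'NULL')
    return 'Single(1)' if n == 1 else 'Multiple(2+)'

print(plurality_label(2), plurality_label(2, ultrasound=False))
```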
"Create temporary BigQuery dataset",
"from google.cloud import bigquery\n\n# Construct a BigQuery client object.\nclient = bigquery.Client()\n\n# Set dataset_id to the ID of the dataset to create.\ndataset_name = \"temp_babyweight_dataset\"\ndataset_id = \"{}.{}\".format(client.project, dataset_name)\n\n# Construct a full Dataset object to send to the API.\ndataset = bigquery.Dataset.from_string(dataset_id)\n\n# Specify the geographic location where the dataset should reside.\ndataset.location = \"US\"\n\n# Send the dataset to the API for creation.\n# Raises google.api_core.exceptions.Conflict if the Dataset already\n# exists within the project.\ntry:\n dataset = client.create_dataset(dataset) # API request\n print(\"Created dataset {}.{}\".format(client.project, dataset.dataset_id))\nexcept:\n print(\"Dataset {}.{} already exists\".format(client.project, dataset.dataset_id))",
"Execute query and write to BigQuery table.",
"job_config = bigquery.QueryJobConfig()\nfor step in [\"train\", \"eval\"]:\n if step == \"train\":\n selquery = \"SELECT * FROM ({}) WHERE ABS(MOD(hashmonth, 100)) < 80\".format(query)\n elif step == \"eval\":\n selquery = \"SELECT * FROM ({}) WHERE ABS(MOD(hashmonth, 100)) >= 80 AND ABS(MOD(hashmonth, 100)) < 90\".format(query)\n else: \n selquery = \"SELECT * FROM ({}) WHERE ABS(MOD(hashmonth, 100)) >= 90\".format(query)\n # Set the destination table\n table_name = \"babyweight_{}\".format(step)\n table_ref = client.dataset(dataset_name).table(table_name)\n job_config.destination = table_ref\n job_config.write_disposition = \"WRITE_TRUNCATE\"\n\n # Start the query, passing in the extra configuration.\n query_job = client.query(\n query=selquery,\n # Location must match that of the dataset(s) referenced in the query\n # and of the destination table.\n location=\"US\",\n job_config=job_config) # API request - starts the query\n\n query_job.result() # Waits for the query to finish\n print(\"Query results loaded to table {}\".format(table_ref.path))",
"Export BigQuery table to CSV in GCS.",
"dataset_ref = client.dataset(dataset_id=dataset_name, project=PROJECT)\n\nfor step in [\"train\", \"eval\"]:\n destination_uri = \"gs://{}/{}\".format(BUCKET, \"babyweight/bq_data/{}*.csv\".format(step))\n table_name = \"babyweight_{}\".format(step)\n table_ref = dataset_ref.table(table_name)\n extract_job = client.extract_table(\n table_ref,\n destination_uri,\n # Location must match that of the source table.\n location=\"US\",\n ) # API request\n extract_job.result() # Waits for job to complete.\n\n print(\"Exported {}:{}.{} to {}\".format(PROJECT, dataset_name, table_name, destination_uri))",
"View results\nWe can have a look at the elements in our bucket to see the results of our pipeline above.",
"!gsutil ls gs://$BUCKET/babyweight/bq_data/*000000000000*",
"Copyright 2017 Google Inc. Licensed under the Apache License, Version 2.0 (the \"License\"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
hcchengithub/project-k
|
Play with the FORTH kernel on jupyter notebook.ipynb
|
mit
|
[
"A rewrite of: https://github.com/hcchengithub/project-k/wiki/Play-with-the-forth-kernel-on-python<br>\nYou can play with this article online directly through the jupyter notebook binder: https://mybinder.org/v2/gh/hcchengithub/project-k/master\nPlay with the FORTH kernel on jupyter notebook\nThe small file projectk.py from project-k on GitHub, placed in the same directory where you launched this jupyter notebook, is the only thing we need. Project-k's purpose is to put the FORTH programming language's fundamental components into a small kernel file that bridges FORTH into the host system, which is Python here. Obviously projectk.js is for JavaScript. Project-k supports these two host systems so far (2018.3.15).\nWith the project-k kernel, it takes only 15 minutes to build your own FORTH system. So when you need a user interface to communicate with your machine, FORTH is a remarkable choice. \n1. Import the project-k kernel\nNow let's create a project-k vm object:",
"import projectk as vm # vm means 'Virtual Machine'. ",
"The above python statement created an instance of project-k object.\n2. Try the Virtual Machine's basic features\nLet's try some project-k vm methods:",
"vm.dictate(\"123\").stack",
"The vm.dictate() method is how the project-k VM receives your commands (as a multi-line string); it is in fact also how we feed it an entire FORTH source code file. \"123\" means: \"please push this number onto the FORTH data stack\". vm.stack is the FORTH VM data stack, which was empty at first and now holds the one item, 123, that we've just put in it. Some methods, such as vm.dictate(), return the vm object itself so we can cascade multiple calls in one line. The statement vm.dictate(\"123\").stack above is therefore cascaded from the two lines:\nvm.dictate(\"123\")\nvm.stack\n\n3. Use code and end-code to define a high-level FORTH word\nIn FORTH we call an identifier a word, whether it is a command or a variable name. Now we can start to define a new word:",
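As an aside, the cascading style above works only because dictate() returns the vm object itself. A minimal sketch of that pattern, using a hypothetical ToyVM class (not project-k's actual implementation):

```python
class ToyVM:
    """Minimal stack machine whose methods return self for chaining."""
    def __init__(self):
        self.stack = []

    def dictate(self, text=""):
        # Push every whitespace-separated integer token onto the stack.
        for token in text.split():
            self.stack.append(int(token))
        return self  # returning self is what enables vm.dictate("123").stack

vm = ToyVM()
print(vm.dictate("123").stack)  # [123]
```

Any method that returns self can participate in such a chain, which is why vm.dictate(...).execute(...) also works.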
"vm.dictate(\"code hi print('Hello World!!') end-code hi\");",
"A code word is defined between code and end-code. The first token after the leading code, which is 'hi' in this example, is the name of the new FORTH word. Everything after that, down to the closing end-code, is Python statements. Note that right after end-code above we immediately execute hi, the new word, and it works!\n4. Define the 'words' command\n'words' is a basic FORTH word that lists all words. This example shows you how to define it.",
"vm.dictate(\"code words print([w.name for w in words['forth'][1:]]) end-code words\");",
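The code … end-code scanning used in these definitions can be illustrated with a toy parser that extracts the word name and the Python body between the two markers (a hypothetical sketch, simpler than project-k's real compiler):

```python
def parse_code_word(source):
    """Parse 'code NAME body... end-code' into (name, body_string)."""
    tokens = source.split()
    assert tokens[0] == "code" and tokens[-1] == "end-code"
    name = tokens[1]
    body = " ".join(tokens[2:-1])  # everything between the name and end-code
    return name, body

name, body = parse_code_word("code hi print('Hello World!!') end-code")
print(name)  # hi
print(body)  # print('Hello World!!')
```

A real FORTH kernel would then store `body` (compiled with exec/eval machinery) into the word-list under `name`.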
"The vm.words that appears in the definition above is the project-k vm's word-list, a common component of any FORTH system. We can inspect it directly:",
"vm.words",
"5. Define commands + , .s , and s\".\nThese examples demonstrate project-k built-in methods push(), pop(), and nexttoken().",
"vm.dictate(\"code + push(pop(1)+pop()) end-code\");",
"The vm knows how to do the '+' now, let's try:",
"vm.stack=[] # clear the data stack\nvm.dictate(\"123 456 +\").stack ",
"FORTH is a programming language of postfix-expression. \"123 456 +\" means: \"Please push 123 into the data stack, please push 456 too. Now please get the top two cells out of the data stack, add them and push the result back to the data stack\". The result is 579 left in the data stack. \nThe common FORTH word to view its data stack is .s, we can have it by this definition:",
"vm.dictate(\"code .s print(stack) end-code\");\nvm.execute('.s');",
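The postfix evaluation described above ("123 456 +" leaving 579 on the stack) can be sketched as a plain Python stack loop, independent of project-k (hypothetical helper for illustration):

```python
def eval_postfix(source):
    """Evaluate a whitespace-separated postfix expression like '123 456 +'."""
    stack = []
    for token in source.split():
        if token == "+":
            b, a = stack.pop(), stack.pop()  # top two cells
            stack.append(a + b)              # push the sum back
        else:
            stack.append(int(token))
    return stack

print(eval_postfix("123 456 +"))  # [579]
```

This is the whole trick behind postfix notation: operands go onto the stack, operators consume them in place.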
"Another FORTH word that quotes a text string is s\" that can be defined like this:",
"vm.dictate('code s\" push(nexttoken(\\'\"\\'));nexttoken() end-code');",
"The FORTH term TIB (terminal input buffer) is simply the string given to vm.dictate('I am the TIB'). nexttoken() is a project-k built-in function that reads a token out of the TIB from the current position up to the given delimiter, which is \" in the example above. So FORTH strings can now be expressed as s\" this is a string\", and the string will be pushed onto the FORTH data stack. Let's use it:",
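A rough sketch of what a nexttoken-style helper does — consume the TIB from the current position up to a delimiter and advance the read position (hypothetical and simplified, not the real projectk.py implementation):

```python
def next_token(tib, pos, delimiter=" "):
    """Return (token, new_pos): text from pos up to delimiter (exclusive)."""
    end = tib.find(delimiter, pos)
    if end == -1:  # delimiter not found: take the rest of the buffer
        return tib[pos:], len(tib)
    return tib[pos:end], end + len(delimiter)

tib = 'The wise build bridges, " trailing text'
token, pos = next_token(tib, 0, '"')
print(repr(token))  # everything up to (not including) the quote
```

The returned position points just past the delimiter, so repeated calls walk through the buffer.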
"# Put a string onto the TOS (Top of the data stack)\nvm.dictate('s\" The wise build bridges, \" .s');\n\nvm.dictate('s\" while the foolish build barriers.\" .s');",
"and according to the + word we defined above, it can concatenate two strings too. So let's execute + and check the result on the data stack:",
"vm.execute('+').execute('.s');",
"The two strings are correctly concatenated into one.\nLook into the project-k module object\n6. See them all at a glance\nFirst, let's see what is in the vm. Within a code word definition, these properties are the global variables and global functions visible to that new word.",
"# List all global functions and global variables seen in a code word definition.\n\nprint([prop for prop in dir(vm) if not prop.startswith('__')])\n\n# List only the functions out of the above\n\nprint([method for method in dir(vm) if callable(getattr(vm,method))])",
"7. Get help of project-k global functions\nProject-k built-in functions are explained with comments in the projectk.py source code. View them by using the python help() function.",
"help(vm.tos)\nhelp(vm.pop)\nhelp(vm.push)\nhelp(vm.nexttoken)",
"8. Self-reference of a code word",
"vm.dictate(\"code see-locals print(locals()) end-code see-locals\");",
"The _me object points to the new word itself. For example, this word prints its own name:",
"vm.dictate(\"code who-am-I? print('My name is: ' + _me.name) end-code\").execute('who-am-I?');",
"9. Let's see the globals\nYou will want to know about the globals visible from the viewpoint of a code word definition. Many of them have been seen above, but this is a different view of them.",
"vm.dictate(\"code see-globals print(globals().keys()) end-code see-globals\");",
"The __name__ is 'projectk', as shown below, which indicates that the namespace of this small world is within the FORTH virtual machine. We can't see anything in the outside world unless vm.push() passes it into the data stack.",
"vm.dictate(\"code see__name__ print(globals()['__name__']) end-code see__name__\");",
"For example, this jupyter notebook itself is the main program we are running and through vm.push() we can pass this information into the FORTH vm and get it back by vm.pop():",
"vm.push(__name__).pop()\nvm.push(__IPYTHON__).pop() # see another property from the main program",
"List of all project-k global variables and built-in functions\nList of all project-k global variables and built-in functions can be found at the end of this page on project-k project wiki on GitHub.\n<hr>\n\nYou can start building your own FORTH system now. Don't hesitate to let me know anything that is unclear above.\nMay the FORTH be with you!\nHave fun!\nH.C. Chen @ FitTaiwan<br>\nhcchen5600@gmail.com<br>\n2018.3.15<br>"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
tensorflow/workshops
|
extras/amld/notebooks/exercises/2_keras.ipynb
|
apache-2.0
|
[
"Using tf.keras\nThis Colab shows how to use Keras to define and train simple models on the data generated in the previous Colab, 1_data.ipynb.",
"# In Jupyter, you would need to install TF 2.0 via !pip.\n%tensorflow_version 2.x\n\nimport tensorflow as tf\nimport json, os\n\n# Tested with TensorFlow 2.1.0\nprint('version={}, CUDA={}, GPU={}, TPU={}'.format(\n tf.__version__, tf.test.is_built_with_cuda(),\n # GPU attached?\n len(tf.config.list_physical_devices('GPU')) > 0,\n # TPU accessible? (only works on Colab)\n 'COLAB_TPU_ADDR' in os.environ))",
"Attention: Please avoid using the TPU runtime (TPU=True) for now. The notebook contains an optional part on TPU usage at the end if you're interested. You can change the runtime via: \"Runtime > Change runtime type > Hardware Accelerator\" in Colab.\n\nData from Protobufs",
"# Load data from Drive (Colab only).\ndata_path = '/content/gdrive/My Drive/amld_data/zoo_img'\n\n# Or, you can load data from different sources, such as:\n# From your local machine:\n# data_path = './amld_data'\n\n# Or use a prepared dataset from Cloud (Colab only).\n# - 50k training examples, including pickled DataFrame.\n# data_path = 'gs://amld-datasets/zoo_img_small'\n# - 1M training examples, without pickled DataFrame.\n# data_path = 'gs://amld-datasets/zoo_img'\n# - 4.1M training examples, without pickled DataFrame.\n# data_path = 'gs://amld-datasets/animals_img'\n# - 29M training examples, without pickled DataFrame.\n# data_path = 'gs://amld-datasets/all_img'\n\n# Store models on Drive (Colab only).\nmodels_path = '/content/gdrive/My Drive/amld_data/models'\n\n# Or, store models to local machine.\n# models_path = './amld_models'\n\nif data_path.startswith('/content/gdrive/'):\n from google.colab import drive\n drive.mount('/content/gdrive')\n\nif data_path.startswith('gs://'):\n from google.colab import auth\n auth.authenticate_user()\n !gsutil ls -lh \"$data_path\"\nelse:\n !sleep 1 # wait a bit for the mount to become ready\n !ls -lh \"$data_path\"\n\nlabels = [label.strip() for label \n in tf.io.gfile.GFile('{}/labels.txt'.format(data_path))]\nprint('All labels in the dataset:', ' '.join(labels))\n\ncounts = json.load(tf.io.gfile.GFile('{}/counts.json'.format(data_path)))\nprint('Splits sizes:', counts)\n\n# This dictionary specifies what \"features\" we want to extract from the\n# tf.train.Example protos (i.e. what they look like on disk). We only\n# need the image data \"img_64\" and the \"label\". Both features are tensors\n# with a fixed length.\n# You need to specify the correct \"shape\" and \"dtype\" parameters for\n# these features.\nfeature_spec = {\n # Single label per example => shape=[1] (we could also use shape=() and\n # then do a transformation in the input_fn).\n 'label': tf.io.FixedLenFeature(shape=[1], dtype=tf.int64),\n 'img_64': tf.io.FixedLenFeature(shape=[64, 64], dtype=tf.int64),\n}\n\ndef parse_example(serialized_example):\n # Convert string to tf.train.Example and then extract features/label.\n features = tf.io.parse_single_example(serialized_example, feature_spec)\n\n label = features['label']\n label = tf.one_hot(tf.squeeze(label), len(labels))\n\n features['img_64'] = tf.cast(features['img_64'], tf.float32) / 255.\n return features['img_64'], label\n\nbatch_size = 100\nsteps_per_epoch = counts['train'] // batch_size\neval_steps_per_epoch = counts['eval'] // batch_size\n\n# Create datasets from TFRecord files.\ntrain_ds = tf.data.TFRecordDataset(tf.io.gfile.glob(\n '{}/train-*'.format(data_path)))\ntrain_ds = train_ds.map(parse_example)\ntrain_ds = train_ds.batch(batch_size).repeat()\n\neval_ds = tf.data.TFRecordDataset(tf.io.gfile.glob(\n '{}/eval-*'.format(data_path)))\neval_ds = eval_ds.map(parse_example)\neval_ds = eval_ds.batch(batch_size)\n\n# Read a single batch of examples from the training set and display shapes.\nfor img_feature, label in train_ds:\n break\nprint('img_feature.shape (batch_size, image_height, image_width) =', \n img_feature.shape)\nprint('label.shape (batch_size, number_of_labels) =', label.shape)\n\n# Visualize some examples from the training set.\nfrom matplotlib import pyplot as plt\n\ndef show_img(img_64, title='', ax=None):\n \"\"\"Displays an image.\n \n Args:\n img_64: Array (or Tensor) with monochrome image data.\n title: Optional title.\n ax: Optional Matplotlib axes to show the image in.\n \"\"\"\n # Convert a Tensor to a NumPy array before reshaping/plotting.\n if isinstance(img_64, tf.Tensor):\n img_64 = img_64.numpy()\n (ax if ax else plt).matshow(img_64.reshape((64, -1)), cmap='gray')\n ax = ax if ax else plt.gca()\n ax.set_xticks([])\n ax.set_yticks([])\n ax.set_title(title)\n \nrows, cols = 3, 5\nfor img_feature, label in train_ds:\n break\n_, axs = plt.subplots(rows, cols, figsize=(2*cols, 2*rows))\nfor i in range(rows):\n for j in range(cols):\n show_img(img_feature[i*cols+j].numpy(), \n title=labels[label[i*cols+j].numpy().argmax()], ax=axs[i][j])",
"Linear model",
"# Sample linear model.\nlinear_model = tf.keras.Sequential()\nlinear_model.add(tf.keras.layers.Flatten(input_shape=(64, 64,)))\nlinear_model.add(tf.keras.layers.Dense(len(labels), activation='softmax'))\n\n# \"adam, categorical_crossentropy, accuracy\" and other string constants can be\n# found at https://keras.io.\nlinear_model.compile(\n optimizer='adam',\n loss='categorical_crossentropy',\n metrics=['accuracy', tf.keras.metrics.categorical_accuracy])\n\nlinear_model.summary()\n\nlinear_model.fit(train_ds,\n validation_data=eval_ds,\n steps_per_epoch=steps_per_epoch,\n validation_steps=eval_steps_per_epoch,\n epochs=1,\n verbose=True)",
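The Dense layer above ends with a softmax activation, which turns the raw per-class scores into probabilities. A pure-Python sketch of what softmax computes (illustrative only; Keras uses an optimized tensor implementation):

```python
import math

def softmax(logits):
    """Convert raw scores into probabilities that sum to 1."""
    shifted = [x - max(logits) for x in logits]  # shift for numerical stability
    exps = [math.exp(x) for x in shifted]
    total = sum(exps)
    return [e / total for e in exps]

probs = softmax([2.0, 1.0, 0.1])
print(probs)       # largest logit gets the largest probability
print(sum(probs))  # sums to 1 (up to rounding)
```

The `categorical_crossentropy` loss then compares these probabilities against the one-hot labels produced in `parse_example`.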
"Convolutional model",
"# Let's define a convolutional model:\nconv_model = tf.keras.Sequential([\n tf.keras.layers.Reshape(target_shape=(64, 64, 1), input_shape=(64, 64)),\n tf.keras.layers.Conv2D(filters=32, \n kernel_size=(10, 10), \n padding='same', \n activation='relu'),\n tf.keras.layers.Conv2D(filters=32, \n kernel_size=(10, 10), \n padding='same', \n activation='relu'), \n tf.keras.layers.MaxPooling2D(pool_size=(4, 4), strides=(4,4)),\n tf.keras.layers.Conv2D(filters=64, \n kernel_size=(5, 5), \n padding='same', \n activation='relu'),\n tf.keras.layers.MaxPooling2D(pool_size=(4, 4), strides=(4,4)),\n\n tf.keras.layers.Flatten(),\n tf.keras.layers.Dense(256, activation='relu'),\n tf.keras.layers.Dropout(0.3),\n tf.keras.layers.Dense(len(labels), activation='softmax'),\n])\n\n# YOUR ACTION REQUIRED:\n# Compile + print summary of the model (analogous to the linear model above).\n\n\n# YOUR ACTION REQUIRED:\n# Train the model (analogous to the linear model above).\n# Note: You might want to reduce the number of steps if it takes too long.\n# Pro tip: Change the runtime type (\"Runtime\" menu) to GPU! After the change you\n# will need to rerun the cells above because the Python kernel's state is reset.\n",
"Store model",
"tf.io.gfile.makedirs(models_path)\n\n# Save model as Keras model.\nkeras_path = os.path.join(models_path, 'linear.h5')\nlinear_model.save(keras_path)\n\n# Keras model is a single file.\n!ls -hl \"$keras_path\"\n\n# Load Keras model.\nloaded_keras_model = tf.keras.models.load_model(keras_path)\nloaded_keras_model.summary()\n\n# Save model as Tensorflow Saved Model.\nsaved_model_path = os.path.join(models_path, 'saved_model/linear')\nlinear_model.save(saved_model_path, save_format='tf')\n\n# Inspect saved model directory structure.\n!find \"$saved_model_path\"\n\nsaved_model = tf.keras.models.load_model(saved_model_path)\nsaved_model.summary()\n\n# YOUR ACTION REQUIRED:\n# Store the convolutional model and any additional models that you trained\n# in the previous sections in Keras format so we can use them in later\n# notebooks for prediction.\n",
"----- Optional part -----\nLearn from errors\nLooking at classification mistakes is a great way to better understand how a model is performing. This section walks you through the necessary steps to load some examples from the dataset, make predictions, and plot the mistakes.",
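In miniature, the mistake-collection logic below amounts to taking the argmax of the one-hot label and of the predicted probability vector, and recording the pairs that disagree. A pure-Python sketch with hypothetical toy data (no TensorFlow needed):

```python
def argmax(vec):
    """Index of the largest element."""
    return max(range(len(vec)), key=lambda i: vec[i])

def collect_mistakes(labels_onehot, preds):
    """Return (label_index, pred_index) pairs where the model was wrong."""
    mistakes = []
    for label, pred in zip(labels_onehot, preds):
        label_i, pred_i = argmax(label), argmax(pred)
        if label_i != pred_i:
            mistakes.append((label_i, pred_i))
    return mistakes

labels_onehot = [[0, 1, 0], [1, 0, 0]]
preds = [[0.1, 0.8, 0.1], [0.2, 0.7, 0.1]]
print(collect_mistakes(labels_onehot, preds))  # [(0, 1)]: class 0 mistaken for 1
```

In the notebook, `numpy`'s `.argmax()` plays the role of the `argmax` helper here.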
"import collections\nMistake = collections.namedtuple('Mistake', 'label pred img_64')\nmistakes = []\neval_ds_iter = iter(eval_ds)\n\nfor img_64_batch, label_onehot_batch in eval_ds_iter:\n break\nimg_64_batch.shape, label_onehot_batch.shape\n\n# YOUR ACTION REQUIRED:\n# Use model.predict() to get a batch of predictions.\npreds =\n\n\n# Iterate through the batch:\nfor label_onehot, pred, img_64 in zip(label_onehot_batch, preds, img_64_batch):\n # YOUR ACTION REQUIRED:\n # Both `label_onehot` and pred are vectors with length=len(labels), with every\n # element corresponding to a probability of the corresponding class in\n # `labels`. Get the value with the highest value to get the index within\n # `labels`.\n label_i = \n pred_i =\n if label_i != pred_i:\n mistakes.append(Mistake(label_i, pred_i, img_64.numpy()))\n\n# You can run this and above 2 cells multiple times to get more mistakes. \nlen(mistakes)\n\n# Let's examine the cases when our model gets it wrong. Would you recognize\n# these images correctly?\n\n# YOUR ACTION REQUIRED:\n# Run above cell but using a different model to get a different set of\n# classification mistakes. Then copy over this cell to plot the mistakes for\n# comparison purposes. Can you spot a pattern?\nrows, cols = 5, 5\nplt.figure(figsize=(cols*2.5, rows*2.5))\nfor i, mistake in enumerate(mistakes[:rows*cols]):\n ax = plt.subplot(rows, cols, i + 1)\n title = '{}? {}!'.format(labels[mistake.pred], labels[mistake.label])\n show_img(mistake.img_64, title, ax)",
"Data from DataFrame\nFor comparison, this section shows how you would load data from a pandas.DataFrame and then use Keras for training. Note that this approach does not scale well and can only be used for quite small datasets.",
"# Note: used memory BEFORE loading the DataFrame.\n!free -h\n\n# Loading all the data in memory takes a while (~40s).\nimport pickle\ndf = pickle.load(tf.io.gfile.GFile('%s/dataframe.pkl' % data_path, mode='rb'))\nprint(len(df))\nprint(df.columns)\n\ndf_train = df[df.split == b'train']\nlen(df_train)\n\n# Note: used memory AFTER loading the DataFrame.\n!free -h\n\n# Show some images from the dataset.\n\nfrom matplotlib import pyplot as plt\n\ndef show_img(img_64, title='', ax=None):\n (ax if ax else plt).matshow(img_64.reshape((64, -1)), cmap='gray')\n ax = ax if ax else plt.gca()\n ax.set_xticks([])\n ax.set_yticks([])\n ax.set_title(title)\n\nrows, cols = 3, 3\n_, axs = plt.subplots(rows, cols, figsize=(2*cols, 2*rows))\nfor i in range(rows):\n for j in range(cols):\n d = df.sample(1).iloc[0]\n show_img(d.img_64, title=labels[d.label], ax=axs[i][j])\n\ndf_x = tf.convert_to_tensor(df_train.img_64, dtype=tf.float32)\ndf_y = tf.one_hot(df_train.label, depth=len(labels), dtype=tf.float32)\n\n# Note: used memory AFTER defining the Tensors based on the DataFrame.\n!free -h\n\n# Checkout the shape of these rather large tensors.\ndf_x.shape, df_x.dtype, df_y.shape, df_y.dtype\n\n# Copied code from section \"Linear model\" above.\nlinear_model = tf.keras.Sequential()\nlinear_model.add(tf.keras.layers.Flatten(input_shape=(64 * 64,)))\nlinear_model.add(tf.keras.layers.Dense(len(labels), activation='softmax'))\n\n# \"adam, categorical_crossentropy, accuracy\" and other string constants can be\n# found at https://keras.io.\nlinear_model.compile(\n optimizer='adam',\n loss='categorical_crossentropy',\n metrics=['accuracy', tf.keras.metrics.categorical_accuracy])\nlinear_model.summary()\n\n# How much of a speedup do you see because the data is already in memory?\n# How would this compare to the convolutional model?\nlinear_model.fit(df_x, df_y, epochs=1, batch_size=100)",
"TPU Support\nFor using TF with a TPU we'll need to make some adjustments. Generally, please note that several TF TPU features are experimental and might not work as smoothly as they do with a CPU or GPU.\n\nAttention: Please make sure to switch the runtime to TPU for this part. You can do so via: \"Runtime > Change runtime type > Hardware Accelerator\" in Colab. As this might create a new environment, this section can be executed in isolation from anything above.",
"%tensorflow_version 2.x\n\nimport json, os\nimport numpy as np\nfrom matplotlib import pyplot as plt\nimport tensorflow as tf\n\n# Disable duplicate logging output in TF.\nlogger = tf.get_logger()\nlogger.propagate = False\n\n# This will fail if no TPU is connected...\ntpu = tf.distribute.cluster_resolver.TPUClusterResolver()\n# Set up distribution strategy.\ntf.config.experimental_connect_to_cluster(tpu)\ntf.tpu.experimental.initialize_tpu_system(tpu);\nstrategy = tf.distribute.experimental.TPUStrategy(tpu)\n\n# Tested with TensorFlow 2.1.0\nprint('\\n\\nTF version={} TPUs={} accelerators={}'.format(\n tf.__version__, tpu.cluster_spec().as_dict()['worker'],\n strategy.num_replicas_in_sync))",
"Attention: TPUs require all files (input and models) to be stored in cloud storage buckets (gs://bucket-name/...). If you plan to use TPUs please choose the data_path below accordingly. Otherwise, you might run into File system scheme '[local]' not implemented errors.",
"from google.colab import auth\nauth.authenticate_user()\n\n# Browse datasets:\n# https://console.cloud.google.com/storage/browser/amld-datasets\n\n# - 50k training examples, including pickled DataFrame.\ndata_path = 'gs://amld-datasets/zoo_img_small'\n# - 1M training examples, without pickled DataFrame.\n# data_path = 'gs://amld-datasets/zoo_img'\n# - 4.1M training examples, without pickled DataFrame.\n# data_path = 'gs://amld-datasets/animals_img'\n# - 29M training examples, without pickled DataFrame.\n# data_path = 'gs://amld-datasets/all_img'\n\n#@markdown **Copied and adjusted data definition code from above**\n#@markdown\n#@markdown Note: You can double-click this cell to see its code.\n#@markdown\n#@markdown The changes have been highlighted with `!` in the contained code\n#@markdown (things like the `batch_size` and added `drop_remainder=True`).\n#@markdown\n#@markdown Feel free to just **click \"execute\"** and ignore the details for now.\n\nlabels = [label.strip() for label \n in tf.io.gfile.GFile('{}/labels.txt'.format(data_path))]\nprint('All labels in the dataset:', ' '.join(labels))\n\ncounts = json.load(tf.io.gfile.GFile('{}/counts.json'.format(data_path)))\nprint('Splits sizes:', counts)\n\n# This dictionary specifies what \"features\" we want to extract from the\n# tf.train.Example protos (i.e. what they look like on disk). We only\n# need the image data \"img_64\" and the \"label\". Both features are tensors\n# with a fixed length.\n# You need to specify the correct \"shape\" and \"dtype\" parameters for\n# these features.\nfeature_spec = {\n # Single label per example => shape=[1] (we could also use shape=() and\n # then do a transformation in the input_fn).\n 'label': tf.io.FixedLenFeature(shape=[1], dtype=tf.int64),\n 'img_64': tf.io.FixedLenFeature(shape=[64, 64], dtype=tf.int64),\n}\n\ndef parse_example(serialized_example):\n # Convert string to tf.train.Example and then extract features/label.\n features = tf.io.parse_single_example(serialized_example, feature_spec)\n\n # Important step: remove \"label\" from features!\n # Otherwise our classifier would simply learn to predict\n # label=features['label'].\n label = features['label']\n label = tf.one_hot(tf.squeeze(label), len(labels))\n\n features['img_64'] = tf.cast(features['img_64'], tf.float32)\n return features['img_64'], label\n\n# Adjust the batch size to the given hardware (#accelerators).\nbatch_size = 64 * strategy.num_replicas_in_sync\n# !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!\nsteps_per_epoch = counts['train'] // batch_size\neval_steps_per_epoch = counts['eval'] // batch_size\n\n# Create datasets from TFRecord files.\ntrain_ds = tf.data.TFRecordDataset(tf.io.gfile.glob(\n '{}/train-*'.format(data_path)))\ntrain_ds = train_ds.map(parse_example)\ntrain_ds = train_ds.batch(batch_size, drop_remainder=True).repeat()\n# !!!!!!!!!!!!!!!!!!!\n\neval_ds = tf.data.TFRecordDataset(tf.io.gfile.glob(\n '{}/eval-*'.format(data_path)))\neval_ds = eval_ds.map(parse_example)\neval_ds = eval_ds.batch(batch_size, drop_remainder=True)\n# !!!!!!!!!!!!!!!!!!!\n# Read a single example and display shapes.\nfor img_feature, label in train_ds:\n break\nprint('img_feature.shape (batch_size, image_height, image_width) =', \n img_feature.shape)\nprint('label.shape (batch_size, number_of_labels) =', label.shape)\n\n# Model definition code needs to be wrapped in scope.\nwith strategy.scope():\n linear_model = tf.keras.Sequential()\n linear_model.add(tf.keras.layers.Flatten(input_shape=(64, 64,)))\n linear_model.add(tf.keras.layers.Dense(len(labels), activation='softmax'))\n\nlinear_model.compile(\n optimizer='adam',\n loss='categorical_crossentropy',\n metrics=['accuracy', tf.keras.metrics.categorical_accuracy])\nlinear_model.summary()\n\nlinear_model.fit(train_ds,\n validation_data=eval_ds,\n steps_per_epoch=steps_per_epoch,\n validation_steps=eval_steps_per_epoch,\n epochs=1,\n verbose=True)\n\n# Model definition code needs to be wrapped in scope.\nwith strategy.scope():\n conv_model = tf.keras.Sequential([\n tf.keras.layers.Reshape(target_shape=(64, 64, 1), input_shape=(64, 64)),\n tf.keras.layers.Conv2D(filters=32, \n kernel_size=(10, 10), \n padding='same', \n activation='relu'),\n tf.keras.layers.ZeroPadding2D((1,1)),\n tf.keras.layers.Conv2D(filters=32, \n kernel_size=(10, 10), \n padding='same', \n activation='relu'), \n tf.keras.layers.MaxPooling2D(pool_size=(4, 4), strides=(4,4)),\n tf.keras.layers.Conv2D(filters=64, \n kernel_size=(5, 5), \n padding='same', \n activation='relu'),\n tf.keras.layers.MaxPooling2D(pool_size=(4, 4), strides=(4,4)),\n\n tf.keras.layers.Flatten(),\n tf.keras.layers.Dense(256, activation='relu'),\n tf.keras.layers.Dropout(0.3),\n tf.keras.layers.Dense(len(labels), activation='softmax'),\n ])\n\nconv_model.compile(\n optimizer='adam',\n loss='categorical_crossentropy',\n metrics=['accuracy'])\nconv_model.summary()\n\nconv_model.fit(train_ds,\n validation_data=eval_ds,\n steps_per_epoch=steps_per_epoch,\n validation_steps=eval_steps_per_epoch,\n epochs=3,\n verbose=True)\nconv_model.evaluate(eval_ds, steps=eval_steps_per_epoch)\n\n!nvidia-smi"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
chapagain/kaggle-competitions-solution
|
Sentiment Analysis on Movie Reviews/Sentiment-Analysis-on-Movie-Reviews-Detail-using-multiple-models.ipynb
|
mit
|
[
"Sentiment Analysis on Movie Reviews\nUsing Logistic Regression, SGD, Naive Bayes, OneVsOne Models\n\n\n0 - negative\n\n\n1 - somewhat negative\n\n\n2 - neutral\n\n\n3 - somewhat positive\n\n\n4 - positive\n\n\nLoad Libraries",
"import nltk\nimport pandas as pd\nimport numpy as np\n\nfrom sklearn.feature_extraction.text import CountVectorizer, TfidfTransformer\nfrom sklearn.pipeline import Pipeline\nfrom sklearn.linear_model import LogisticRegression, SGDClassifier\nfrom sklearn.ensemble import RandomForestClassifier\nfrom sklearn.naive_bayes import MultinomialNB\nfrom sklearn.svm import LinearSVC\nfrom sklearn.multiclass import OneVsRestClassifier, OneVsOneClassifier\n\nfrom sklearn.metrics import classification_report, confusion_matrix",
"Load & Read Datasets",
"train = pd.read_csv('train.tsv', delimiter='\\t')\ntest = pd.read_csv('test.tsv', delimiter='\\t')\n\ntrain.shape, test.shape\n\ntrain.head()\n\ntest.head()\n\n# unique sentiment labels\ntrain.Sentiment.unique()\n\ntrain.info()\n\ntrain.Sentiment.value_counts()\n\ntrain.Sentiment.value_counts() / train.Sentiment.count()",
"Extracting features\nIn order to perform machine learning on text documents, we first need to turn the text content into numerical feature vectors.\nBags of words\nThe most intuitive way to do so is the bags of words representation:\n\n\nassign a fixed integer id to each word occurring in any document of the training set (for instance by building a dictionary from words to integer indices).\n\n\nfor each document $i$, count the number of occurrences of each word $w$ and store it in $X[i, j]$ as the value of feature $j$, where $j$ is the index of word $w$ in the dictionary\n\n\nReference: http://scikit-learn.org/stable/tutorial/text_analytics/working_with_text_data.html\nThe Bag of Words model learns a vocabulary from all of the documents, then models each document by counting the number of times each word appears.\nWe'll be using the CountVectorizer feature extractor module from scikit-learn to create bag-of-words features.",
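Before reaching for CountVectorizer, the two steps above — build a vocabulary, then count occurrences per document — can be sketched in a few lines of plain Python (a toy stand-in, not scikit-learn's implementation):

```python
def bag_of_words(documents):
    """Return (sorted_vocab, count_matrix) for a list of text documents."""
    vocab = sorted({word for doc in documents for word in doc.lower().split()})
    index = {word: j for j, word in enumerate(vocab)}  # word -> column index
    counts = []
    for doc in documents:
        row = [0] * len(vocab)
        for word in doc.lower().split():
            row[index[word]] += 1  # X[i, j]: occurrences of word j in doc i
        counts.append(row)
    return vocab, counts

vocab, counts = bag_of_words(["good movie", "bad bad movie"])
print(vocab)   # ['bad', 'good', 'movie']
print(counts)  # [[0, 1, 1], [2, 0, 1]]
```

CountVectorizer does the same thing with proper tokenization and a sparse matrix instead of Python lists.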
"X_train = train['Phrase']\ny_train = train['Sentiment']\n\n# Convert a collection of text documents to a matrix of token counts\ncount_vect = CountVectorizer() \n\n# Fit followed by Transform\n# Learn the vocabulary dictionary and return term-document matrix\nX_train_counts = count_vect.fit_transform(X_train)\n\n#X_train_count = X_train_count.toarray()\n\n# 156060 rows of train data & 15240 features (one for each vocabulary word)\nX_train_counts.shape\n\n# get all words in the vocabulary\nvocab = count_vect.get_feature_names()\nprint (vocab)\n\n# get index of any word\ncount_vect.vocabulary_.get(u'100')\n\n# Sum up the counts of each vocabulary word\ndist = np.sum(X_train_counts, axis=0)\n# print (dist) # matrix\n\ndist = np.squeeze(np.asarray(dist))\nprint (dist) # array\n\n# zip() returns an iterator in Python 3, so materialize it as a list first\nzipped = list(zip(vocab, dist))\nzipped.sort(key = lambda t: t[1], reverse=True) # sort words by highest number of occurrence\n\n# For each, print the vocabulary word and the number of times it \n# appears in the training set\nfor tag, count in zipped:\n print (count, tag)",
"Convert Occurrence to Frequency\nProblem with occurrence count of words:\n- longer documents will have higher average count values than shorter documents, even though they might talk about the same topics\nSolution:\n- divide the number of occurrences of each word in a document by the total number of words in the document\n- new features formed by this method are called tf (Term Frequencies)\nRefinement on tf:\n- downscale weights for words that occur in many documents in the corpus and are therefore less informative than those that occur only in a smaller portion of the corpus\n- this downscaling is called tf-idf (Term Frequency times Inverse Document Frequency)\nLet's compute tf and tf-idf :",
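First, as a sanity check, here are the raw definitions written out in pure Python (a simplified sketch: scikit-learn's TfidfTransformer additionally smooths the idf and L2-normalizes each row, so its exact numbers differ):

```python
import math

def term_frequencies(row):
    """tf: occurrences of each word divided by total words in the document."""
    total = sum(row)
    return [count / total for count in row]

def inverse_document_frequency(count_matrix, j):
    """idf: log(N / number of documents containing word j)."""
    n_docs = len(count_matrix)
    df = sum(1 for row in count_matrix if row[j] > 0)
    return math.log(n_docs / df)

counts = [[0, 1, 1], [2, 0, 1]]  # rows: documents, columns: vocabulary words
print(term_frequencies(counts[1]))            # [0.666..., 0.0, 0.333...]
print(inverse_document_frequency(counts, 2))  # 0.0: the word is in every doc
```

A tf-idf value is just the product of the two: words that are frequent in one document but rare across the corpus score highest.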
"tf_transformer = TfidfTransformer(use_idf=False).fit(X_train_counts)\nX_train_tf = tf_transformer.transform(X_train_counts)\n\n# 156060 rows of train data & 15240 features (one for each vocabulary word)\nX_train_tf.shape\n\n# print some values of the tf transformed feature vector\nprint (X_train_tf[1:2])",
"In the above code, we first used the fit() method to fit our estimator and then the transform() method to transform our count-matrix to a tf-idf representation.\nThese two steps can be combined using fit_transform() method.",
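The fit-then-transform versus fit_transform equivalence holds for any scikit-learn-style estimator. A toy mean-centering transformer illustrates the contract (a hypothetical class, not from scikit-learn):

```python
class MeanCenterer:
    """Toy transformer following the fit/transform/fit_transform contract."""
    def fit(self, xs):
        self.mean_ = sum(xs) / len(xs)  # learn the statistic from the data
        return self                     # fit() returns self, enabling chaining

    def transform(self, xs):
        return [x - self.mean_ for x in xs]

    def fit_transform(self, xs):
        return self.fit(xs).transform(xs)  # the combined convenience method

data = [1.0, 2.0, 3.0]
print(MeanCenterer().fit(data).transform(data))  # [-1.0, 0.0, 1.0]
print(MeanCenterer().fit_transform(data))        # same result
```

In real scikit-learn estimators, fit_transform can also be faster than the two separate calls because intermediate work is shared.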
"tfidf_transformer = TfidfTransformer()\nX_train_tfidf = tfidf_transformer.fit_transform(X_train_counts)\nX_train_tfidf.shape",
"Train Classifier\nWe train our classifier by feeding in our features and expecting the classifier to output/predict the sentiment value for each phrase in the test dataset.\nNaive Bayes Classifier",
"clf = MultinomialNB().fit(X_train_tfidf, y_train)\n\npredicted = clf.predict(X_train_tfidf)\n\nnp.mean(predicted == y_train)",
"Building a Pipeline\nIn order to make the vectorizer => transformer => classifier sequence easier to work with, scikit-learn provides a Pipeline class that behaves like a compound classifier.\nYou can compare the accuracy result of the classifier above, without Pipeline, against the accuracy result below, using Pipeline: they are the same. The Pipeline class thus greatly simplifies the tokenizing and tf-idf conversion steps.",
"text_clf = Pipeline([\n ('vect', CountVectorizer()),\n ('tfidf', TfidfTransformer()),\n ('clf', MultinomialNB()),\n])\n\ntext_clf.fit(X_train, y_train)\n\npredicted = text_clf.predict(X_train)\n\nnp.mean(predicted == y_train)",
"Let's use the stop-word filter in the CountVectorizer and see how it affects the classifier's accuracy. We see that this increases accuracy.",
"text_clf = Pipeline([\n ('vect', CountVectorizer(stop_words='english')),\n ('tfidf', TfidfTransformer()),\n ('clf', MultinomialNB()),\n])\n\ntext_clf.fit(X_train, y_train)\npredicted = text_clf.predict(X_train)\nnp.mean(predicted == y_train)",
"Classification Report (precision, recall, f1-score)",
"target_names = y_train.unique()\n#np.array(map(str, target_names))\n#np.char.mod('%d', target_names)\ntarget_names = ['0', '1', '2', '3', '4']\n\nprint (classification_report(\n y_train, \\\n predicted, \\\n target_names = target_names\n))",
"Confusion Matrix",
"print (confusion_matrix(y_train, predicted))",
"Stochastic Gradient Descent (SGD) Classifier",
"text_clf = Pipeline([\n ('vect', CountVectorizer(stop_words='english')),\n ('tfidf', TfidfTransformer()),\n ('clf', SGDClassifier(loss='modified_huber', shuffle=True, penalty='l2', alpha=1e-3, random_state=42, max_iter=5, tol=None)),\n])\n\ntext_clf.fit(X_train, y_train)\npredicted = text_clf.predict(X_train)\nnp.mean(predicted == y_train)",
"Logistic Regression Classifier",
"text_clf = Pipeline([\n ('vect', CountVectorizer(stop_words='english', max_features=5000)),\n ('tfidf', TfidfTransformer()),\n ('clf', LogisticRegression())\n])\n\ntext_clf.fit(X_train, y_train)\npredicted = text_clf.predict(X_train)\nnp.mean(predicted == y_train)",
"OneVsOne Classifier",
"text_clf = Pipeline([\n ('vect', CountVectorizer(stop_words='english', max_features=5000)),\n ('tfidf', TfidfTransformer()),\n ('clf', OneVsOneClassifier(LinearSVC()))\n])\n\ntext_clf.fit(X_train, y_train)\npredicted = text_clf.predict(X_train)\nnp.mean(predicted == y_train)",
"Create Submission",
"test.info()\n\nX_test = test['Phrase']\nphraseIds = test['PhraseId']\npredicted = text_clf.predict(X_test)\noutput = pd.DataFrame( data={\"PhraseId\":phraseIds, \"Sentiment\":predicted} )\n#output.to_csv( \"submission.csv\", index=False, quoting=3 )"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
makism/dyfunconn
|
tutorials/fMRI - 0 - Retrieve and parse.ipynb
|
bsd-3-clause
|
[
"The fMRI time series are taken from https://paris-saclay-cds.github.io/autism_challenge/.\nCheck out their GitHub repository at: https://github.com/ramp-kits/autism/.\nIf you run this notebook on Binder, the data have already been downloaded automatically for you.",
"import numpy as np\nimport pandas as pd\nimport os\nimport pathlib",
"Fetch the dataset\nBelow you may find some general instructions how to fetch the dataset.\n\ncd /opt/Temp/\ngit clone https://github.com/ramp-kits/autism.git\ncd autism/data/fmri/\nwget -c wget https://zenodo.org/record/3625740/files/msdl.zip\nunzip msdl.zip\n\nParse dataset",
"curr_dir = pathlib.Path(\"./\")\nrsfmri_basedir = str((curr_dir / \"raw_data/autism/\").resolve())",
"The following code is heavily based on the code provided by the competition's organizers.",
"def parse_dataset():\n _target_column_name = 'asd'\n _prediction_label_names = [0, 1]\n \n subject_id = pd.read_csv(os.path.join(rsfmri_basedir, 'data', 'train.csv'), header=None)\n # read the list of the subjects\n df_participants = pd.read_csv(os.path.join(rsfmri_basedir, 'data', 'participants.csv'), index_col=0)\n df_participants.columns = ['participants_' + col for col in df_participants.columns]\n \n # load the structural and functional MRI data\n df_anatomy = pd.read_csv(os.path.join(rsfmri_basedir, 'data', 'anatomy.csv'), index_col=0)\n df_anatomy.columns = ['anatomy_' + col for col in df_anatomy.columns]\n df_fmri = pd.read_csv(os.path.join(rsfmri_basedir, 'data', 'fmri_filename.csv'), index_col=0)\n df_fmri.columns = ['fmri_' + col for col in df_fmri.columns]\n \n # load the QC for structural and functional MRI data\n df_anatomy_qc = pd.read_csv(os.path.join(rsfmri_basedir, 'data', 'anatomy_qc.csv'), index_col=0)\n df_fmri_qc = pd.read_csv(os.path.join(rsfmri_basedir, 'data', 'fmri_qc.csv'), index_col=0)\n \n # rename the columns for the QC to have distinct names\n df_anatomy_qc = df_anatomy_qc.rename(columns={\"select\": \"anatomy_select\"})\n df_fmri_qc = df_fmri_qc.rename(columns={\"select\": \"fmri_select\"})\n\n X = pd.concat([df_participants, df_anatomy, df_anatomy_qc, df_fmri, df_fmri_qc], axis=1)\n X = X.loc[subject_id[0]]\n \n y = X['participants_asd']\n y.columns = [_target_column_name]\n \n X = X.drop('participants_asd', axis=1)\n\n return X, y.values\n\ndata, labels = parse_dataset()\n\nfmri_data = data[[col for col in data.columns if col.startswith('fmri')]]\n\nfmri_msdl_filenames = fmri_data['fmri_msdl']\n\nfmri = np.array([pd.read_csv(rsfmri_basedir + \"/\" + subject_filename, header=None).values \n for subject_filename in fmri_msdl_filenames])\n\nanatomy = data[[col for col in data.columns if col.startswith('anatomy')]]\nanatomy = anatomy.drop(columns='anatomy_select')",
"Dump arrays",
"np.save('data/fmri_autism_ts.npy', fmri)\nnp.save('data/fmri_autism_anatomy.npy', anatomy)\nnp.save('data/fmri_autism_labels.npy', labels)"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
sbenthall/bigbang
|
examples/experimental_notebooks/Git Interaction Graph.ipynb
|
agpl-3.0
|
[
"This notebook creates a graph representation of the collaboration between contributors of a Git repository, where nodes are authors, and edges are weighted by the parent/child dependencies between the commits of authors.",
"%matplotlib inline\nfrom bigbang.git_repo import GitRepo;\nfrom bigbang import repo_loader;\n\nimport matplotlib.pyplot as plt\nimport networkx as nx\nimport pandas as pd\n\nrepos = repo_loader.get_org_repos(\"codeforamerica\")\nrepo = repo_loader.get_multi_repo(repos=repos)\nfull_info = repo.commit_data;",
"Nodes will be Author objects, each of which holds a list of Commit objects.",
"class Commit:\n def __init__(self, message, hexsha, parents):\n self.message = message\n self.hexsha = hexsha\n self.parents = parents\n \n def __repr__(self):\n return ' '.join(self.message.split(' ')[:4])\n\n \nclass Author:\n def __init__(self, name, commits):\n self.name = name\n self.commits = commits\n self.number_of_commits = 1\n \n def add_commit(self, commit):\n self.commits.append(commit)\n self.number_of_commits += 1\n \n def __repr__(self):\n return self.name",
"We create a list of authors, also separately keeping track of committer names to make sure we only add each author once. If a commit by an already stored author is found, we add it to that authors list of commits.",
"def get_authors():\n authors = []\n names = []\n\n for index, row in full_info.iterrows():\n name = row[\"Committer Name\"]\n hexsha = row[\"HEXSHA\"]\n parents = row[\"Parent Commit\"]\n message = row[\"Commit Message\"]\n\n if name not in names:\n authors.append(Author(name, [Commit(message, hexsha, parents)]))\n names.append(name)\n\n else:\n for author in authors:\n if author.name == name:\n author.add_commit(Commit(message, hexsha, parents))\n\n return authors",
"We create our graph by forming an edge whenever an author has a commit which is the parent of another author's commit, and only increasing the weight of that edge if an edge between those two authors already exists.",
"def make_graph(nodes):\n G = nx.Graph()\n \n for author in nodes:\n for commit in author.commits:\n for other in nodes:\n for other_commit in other.commits:\n if commit.hexsha in other_commit.parents:\n if G.has_edge(author, other):\n G[author][other]['weight'] += 1\n else:\n G.add_edge(author, other, weight = 1)\n \n return G\n\nnodes = get_authors()\nG = make_graph(nodes)\n\npos = nx.spring_layout(G, iterations=100)\nnx.draw(G, pos, font_size=8, with_labels = False)\n# nx.draw_networkx_labels(G, pos);"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
gallantlab/cottoncandy
|
cottoncandy/examples/ccexample_openneuro.ipynb
|
bsd-2-clause
|
[
"Using cottoncandy to access OpenNeuro data from AWS S3\nThis examples shows how to use cottoncandy to access data from OpenNeuro.\nTo find out more about cottoncandy, checkout our GitHub repo: \nhttps://github.com/gallantlab/cottoncandy\n\nContributed by:\nAnwar O Nunez-Elizalde (Aug, 2018)",
"!pip install cottoncandy nibabel",
"A function to download nifti images as nibabel objects\nWe will describe each step in the function in the rest of the document.",
"def download_nifti(object_name, cci):\n '''Use cottoncandy to download a nifti image from the OpenNeuro S3 database\n\n Parameters\n ----------\n object_name : str\n The name of the image to download\n cci : object\n A cottoncandy instance\n\n Returns\n -------\n nifti_image : nibabel.Nifti1Image\n\n Example\n -------\n >>> import cottoncandy as cc\n >>> cci = cc.get_interface('openneuro', ACCESS_KEY='FAKEAC', SECRET_KEY='FAKESK', endpoint_url='https://s3.amazonaws.com')\n >>> nifti_image = download_nifti('ds000255/ds000255_R1.0.0/uncompressed/sub-02/ses-02/func/sub-02_ses-02_task-viewFigure_run-06_bold.nii.gz', cci)\n >>> nifti_image.get_data() # return numpy array\n\n See\n ---\n https://github.com/gallantlab/cottoncandy\n '''\n \n import cottoncandy as cc\n cci.set_bucket('openneuro.org')\n data_stream = cci.download_stream(object_name)\n # Uncompress the data\n uncompressed_data = cc.utils.GzipInputStream(data_stream.content)\n\n try:\n from cStringIO import StringIO\n except ImportError:\n from io import BytesIO as StringIO\n\n # make a file-object\n container = StringIO()\n container.write(uncompressed_data.read())\n container.seek(0)\n\n import nibabel as nib\n\n # make an image container\n nifti_map = nib.Nifti1Image.make_file_map()\n nifti_map['image'].fileobj = container\n\n # make a nifti image\n nii = nib.Nifti1Image.from_file_map(nifti_map)\n return nii",
"Setting up the cottoncandy connection\nIn order to run this example, you will need to enter your AWS keys below",
"ACCESSKEY = 'FAKEAK' \nSECRETKEY = 'FAKESK'\n\nimport cottoncandy as cc\ncci = cc.get_interface('openneuro.org', ACCESS_KEY=ACCESSKEY, SECRET_KEY=SECRETKEY, endpoint_url='https://s3.amazonaws.com')",
"Downloading and uncompressing the data",
"# Get the data stream\nnifti_object_name = 'ds000255/sub-02/ses-02/func/sub-02_ses-02_task-viewFigure_run-06_bold.nii.gz'\ndata_stream = cci.download_stream(nifti_object_name)\n# This is a GZIP Nifti so we need to uncompress it\nuncompressed_data = cc.utils.GzipInputStream(data_stream.content)",
"Creating a file object",
"try:\n from cStringIO import StringIO\nexcept ImportError:\n from io import BytesIO as StringIO\n# make a file-object\ncontainer = StringIO()\ncontainer.write(uncompressed_data.read())\ncontainer.seek(0)",
"Creating a nibabel image",
"import nibabel as nib\n\n# make an image container\nnifti_map = nib.Nifti1Image.make_file_map()\nnifti_map['image'].fileobj = container\n# make a nifti image\nnii = nib.Nifti1Image.from_file_map(nifti_map)\n\n# Get the data!\narr = nii.get_data().T\nprint(arr.shape)",
"Plotting the data",
"import matplotlib.pyplot as plt\nplt.matshow(arr[100,15], cmap='inferno')\nplt.grid(False)\n__ = plt.title('Sample slice from image', fontsize=30)\n\ntsnr = arr.mean(0)/arr.std(0)\nplt.matshow(tsnr[15], cmap='inferno')\nplt.grid(False)\n__ = plt.title('Temporal SNR image', fontsize=30)",
"Iterating through multiple nifti images from OpenNeuro S3 bucket",
"cci = cc.get_interface('openneuro.org', ACCESS_KEY=ACCESSKEY, SECRET_KEY=SECRETKEY, endpoint_url='https://s3.amazonaws.com')\ndirs = cci.lsdir()\nprint('Sample datasets:\\n%s'%', '.join(dirs[-10:]))",
"Explore one dataset",
"cci.lsdir('ds000255')",
"Information about the dataset",
"from pprint import pprint\n# print metadata from JSON file\npprint(cci.download_json('ds000255/dataset_description.json'))\n\n# print the 1000 characters in the readm\nprint(cci.download_object('ds000255/README')[:1000]) \n\n# list the contents of the functional directory for sub02 in session 02\ncci.lsdir('ds000255/sub-02/ses-02/func/')",
"Let's find all the nifti images for subject 02 session 02",
"cci.search('ds000255/sub-02/ses-02/func/*.nii.gz')\n# note that the beggining of the file name is cut off in the printed output. \n# this occurs because the object names are very long. \n\nnifti_files = cci.glob('ds000255/sub-02/ses-02/func/*.nii.gz')\nprint(nifti_files[0]) # the full object name",
"Compute temporal SNR images for individual nifti runs",
"for fl in nifti_files[:5]:\n print('Working on: %s'%fl)\n\n # Use the function defined at the beginning\n nii = download_nifti(fl, cci)\n\n arr = nii.get_data().T\n tsnr = arr.mean(0)/arr.std(0)\n plt.matshow(tsnr[15], cmap='inferno')\n plt.grid(False)\n\n description = fl.split('/')[-1]\n __ = plt.title('Temporal SNR image:\\n%s'%description, fontsize=20)"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
Neuroglycerin/neukrill-net-work
|
notebooks/tutorials/convolutional_network.ipynb
|
mit
|
[
"pylearn2 tutorial: Convolutional network\nby Ian Goodfellow\nIntroduction\nThis ipython notebook will teach you the basics of how convolutional networks work, and show you how to use multilayer perceptrons in pylearn2.\nTo do this, we will go over several concepts:\nPart 1: What pylearn2 is doing for you in this example\n\n\nReview of multilayer perceptrons, and how convolutional networks are similar\n\n\nConvolution and the equivariance property\n\n\nPooling and the invariance property\n\n\nA note on using convolution in research papers\n\n\nPart 2: How to use pylearn2 to train a convolutional network\n- pylearn2 Spaces\n\n- MNIST classification example\n\nNote that this won't explain in detail how the individual classes are implemented. The classes\nfollow pretty good naming conventions and have pretty good docstrings, but if you have trouble\nunderstanding them, write to me and I might add a part 3 explaining how some of the parts work\nunder the hood.\nPlease write to pylearn-dev@googlegroups.com if you encounter any problem with this tutorial.\nRequirements\nBefore running this notebook, you must have installed pylearn2.\nFollow the download and installation instructions\nif you have not yet done so.\nThis tutorial also assumes you already know about multilayer perceptrons, and know how to train and evaluate a multilayer perceptron in pylearn2. If not, work through multilayer_perceptron.ipynb before starting this tutorial.\nIt's also strongly recommend that you run this notebook with THEANO_FLAGS=\"device=gpu\". This is a processing intensive example and the GPU will make it run a lot faster, if you have one available. Execute the next cell to verify that you are using the GPU.",
"import theano\nprint theano.config.device",
"Part 1: What pylearn2 is doing for you in this example\nIn this part, we won't get into any specifics of pylearn2 yet. We'll just discuss what a convolutional network is. If you already know about convolutional networks, feel free to skip to part 2.\nReview of multilayer perceptrons, and how convolutional networks are similar\nIn multilayer_perceptron.ipynb, we saw how the multilayer perceptron (MLP) is a versatile model that can do many things. In this series of tutorials, we think of it as a classification model that learns to map an input vector $x$ to a probability distribution $p(y\\mid x)$ where $y$ is a categorical value with $k$ different values. Using a dataset $\\mathcal{D}$ of $(x, y)$, we can train any such probabilistic model by maximizing the log likelihood,\n$$ \\sum_{x,y \\in \\mathcal{D} } \\log P(y \\mid x). $$\nThe multilayer perceptron defines $P(y \\mid x)$ to be the composition of several simpler functions. Each function being composed can be thought of as another \"layer\" or \"stage\" of processing.\nA convolutional network is nothing but a multilayer perceptron where some layers take a very special form, which we will call \"convolutional layers\". These layers are specially designed for processing inputs where the indices of the elements have some topological significance.\nFor example, if we represent a grayscale image as an array $I$ with the array indices corresponding to physical locations in the image, then we know that the element $I_{i,j}$ represents something that is spatially close to the element $I_{i+1,j}$. This is in contrast to a vector representation of an image. 
If $I$ is a vector, then $I_i$ might not be very close at all to $I_{i+1}$, depending on whether the image was converted to vector form in row-major or column-major format and depending on whether $i$ is close to the end of a row or column.\nOther kinds of data with topological structure in the indices include time series data, where some series $S$ can be indexed by a time variable $t$. We know that $S_t$ and $S_{t+1}$ come from close together in time. We can also think of the (row, column, time) indices of video data as providing topological information.\nSuppose $T$ is a function that can translate (move) an input in the space defined by its indices by some amount $x$.\nIn other words,\n$T(S,x)_i = S_j$ where $j=i-x$ (a MathJax or ipython bug seems to prevent me from putting $i-x$ in a subscript).\nConvolutional layers are an example of a function $f$ designed with the property $f(T(S,x)) \\approx f(S)$ for small $x$.\nThis means if a neural network can recognize a handwritten digit in one position, it can recognize it when it is slightly shifted to a nearby position. Being able to recognize shifted versions of previously seen inputs greatly improves the generalization performance of convolutional networks.\nConvolution and the equivariance property\nTODO\nPooling and the invariance property\nTODO\nA note on using convolution in research papers\nTODO\nPart 2: How to use pylearn2 to train a convolutional network\nNow that we've described the theory of what we're going to do, it's time to do it! This part describes\nhow to use pylearn2 to run the algorithms described above.\nAs in the MLP tutorial, we will use the convolutional net to do optical character recognition on the MNIST dataset.\npylearn2 Spaces\nIn many places in pylearn2, we would like to be able to process several different kinds of data. In previous tutorials, we've just talked about data that could be preprocessed into a vector representation. Our algorithms all worked on vector spaces.
However, it's often useful to format data in other ways. The pylearn2 Space object is used to specify the format for data. The VectorSpace class represents the typical vector formatted data we've used so far. The only thing it needs to encode about the data is its dimensionality, i.e., how many elements the vector has. In this tutorial we will start to explicitly represent images as having 2D structure, so we need to use the Conv2DSpace. The Conv2DSpace object describes how to represent a collection of images as a 4-tensor.\nOne thing the Conv2DSpace object needs to describe is the shape of the space--how big is the image in terms of rows and columns of pixels? Also, the image may have multiple channels. In this example, we use a grayscale input image, so the input only has one channel. Color images require three channels to store the red, green, and blue pixels at each location. We can also think of the output of each convolution layer as living in a Conv2DSpace, where each kernel outputs a different channel. Finally, the Conv2DSpace specifies what each axis of the 4-tensor means. The default is for the first axis to index over different examples, the second axis to index over channels, and the last two to index over rows and columns, respectively. This is the format that theano's 2D convolution code uses, but other libraries exist that use other formats and we often need to convert between them.\nMNIST classification example\nSetting up a convolutional network in pylearn2 is essentially the same as setting up any other MLP. In the YAML experiment description below, there are really just two things to take note of.\nFirst, rather than using \"nvis\" to specify the input that the MLP will take, we use a parameter called \"input_space\". \"nvis\" is actually shorthand; if you pass an integer n to nvis, it will set input_space to VectorSpace(n). 
Now that we are using a convolutional network, we need the input to be formatted as a collection of images so that the convolution operator will have a 2D space to work on.\nSecond, we make a few layers of the network be \"ConvRectifiedLinear\" layers. Putting some convolutional layers in the network makes those layers invariant to small translations, so the job of the remaining layers is much easier.\nWe don't need to do anything special to make the Softmax layer on top work with these convolutional layers. The MLP class will tell the Softmax class that its input is now coming from a Conv2DSpace. The Softmax layer will then use the Conv2DSpace's convert method to convert the 2D output from the convolutional layer into a batch of vector-valued examples.\nThe model and training are defined in the conv.yaml file. Here we load it and set some of its hyper-parameters.",
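The translation property from Part 1, $f(T(S,x)) \approx f(S')$ shifted accordingly, is easy to check numerically before training anything: convolving a shifted signal gives the shifted output, as long as the shift keeps the signal away from the boundary. A small NumPy sketch:

```python
import numpy as np

# a 1-D signal with a single "bump", and a tiny difference kernel
signal = np.zeros(20)
signal[8:12] = 1.0
kernel = np.array([1.0, -1.0])

shifted = np.roll(signal, 3)  # T(S, 3): translate the signal by 3

# convolve-then-shift equals shift-then-convolve (away from the edges)
out_then_shift = np.roll(np.convolve(signal, kernel, mode='same'), 3)
shift_then_out = np.convolve(shifted, kernel, mode='same')

print(np.allclose(out_then_shift, shift_then_out))  # True
```

This is exactly the equivariance that lets a convolutional layer detect the same penstroke fragment anywhere in an MNIST digit.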
"!wget https://raw.githubusercontent.com/lisa-lab/pylearn2/master/pylearn2/scripts/tutorials/convolutional_network/conv.yaml\n\ntrain = open('conv.yaml', 'r').read()\ntrain_params = {'train_stop': 50000,\n 'valid_stop': 60000,\n 'test_stop': 10000,\n 'batch_size': 100,\n 'output_channels_h2': 64, \n 'output_channels_h3': 64, \n 'max_epochs': 500,\n 'save_path': '.'}\ntrain = train % (train_params)\nprint train",
"Now, we use pylearn2's yaml_parse.load to construct the Train object, and run its main loop. The same thing could be accomplished by running pylearn2's train.py script on a file containing the yaml string.\nExecute the next cell to train the model. This will take several minutes and possible as much as a few hours depending on how fast your computer is.\nMake sure the dataset is present:",
"%%bash\nmkdir -p /disk/scratch/neuroglycerin/mnist/\ncd /disk/scratch/neuroglycerin/mnist/\nwget http://yann.lecun.com/exdb/mnist/train-images-idx3-ubyte.gz\nwget http://yann.lecun.com/exdb/mnist/train-labels-idx1-ubyte.gz\nwget http://yann.lecun.com/exdb/mnist/t10k-images-idx3-ubyte.gz\nwget http://yann.lecun.com/exdb/mnist/t10k-labels-idx1-ubyte.gz\ngzip -d *\ncd -",
"And make sure the pylearn2 environment variable for it is set:",
"%env PYLEARN2_DATA_PATH=/disk/scratch/neuroglycerin\n!echo $PYLEARN2_DATA_PATH\n\n%pdb\n\nfrom pylearn2.config import yaml_parse\ntrain = yaml_parse.load(train)\ntrain.main_loop()",
"Giving up on that yaml file. Trying the mnist.yaml that is supplied with the repo:",
"%pdb\n\n%run /afs/inf.ed.ac.uk/user/s08/s0805516/repos/pylearn2/pylearn2/scripts/train.py --time-budget 600 ~/repos/pylearn2/pylearn2/scripts/papers/maxout/mnist.yaml\n\n!du -h /afs/inf.ed.ac.uk/user/s08/s0805516/repos/pylearn2/pylearn2/scripts/papers/maxout/mnist_best.pkl\n\n!rm /afs/inf.ed.ac.uk/user/s08/s0805516/repos/pylearn2/pylearn2/scripts/papers/maxout/mnist_best.pkl",
"Compiling the theano functions used to run the network will take a long time for this example. This is because the number of theano variables and ops used to specify the computation is relatively large. There is no single theano op for doing max pooling with overlapping pooling windows, so pylearn2 builds a large expression graph using indexing operations to accomplish the max pooling.\nAfter the model is trained, we can use the print_monitor script to print the last monitoring entry of a saved model. By running it on \"convolutional_network_best.pkl\", we can see the performance of the model at the point where it did the best on the validation set.",
"!print_monitor.py convolutional_network_best.pkl | grep test_y_misclass",
"The test set error has dropped to 0.74%! This is a big improvement over the standard MLP.\nWe can also look at the convolution kernels learned by the first layer, to see that the network is looking for shifted versions of small pieces of penstrokes.",
"!show_weights.py convolutional_network_best.pkl",
"Further reading\nYou can find more information on convolutional networks from the following sources:\nLISA lab's Deep Learning Tutorials: Convolutional Neural Networks (LeNet)\nThis is by no means a complete list."
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |