Apply PCA

Finally, apply a PCA transformation. This is needed because the only way to visualise the decision boundary in 2D is for the KNN algorithm to run in 2D as well. Note that removing the PCA step will improve the accuracy, since KNeighbors would then be applied to the entire training data rather than just the two principal components.
from sklearn.decomposition import PCA

# Just like the preprocessing transformation, create a PCA
# transformation as well. Fit it against the training data, and then
# project the training and testing features into PCA space using the
# PCA model's .transform() method.
pca_reducer = PCA(n_components=2).fit(X_train_normalised)

X_train_pca = pca_reducer.transform(X_train_normalised)
X_test_pca = pca_reducer.transform(X_test_normalised)
02-Classification/knn.ipynb
Mashimo/datascience
apache-2.0
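Before accepting the 2-D projection, it can be worth checking how much variance two components actually retain. The wheat features themselves are not shown here, so the sketch below uses synthetic stand-in data and plain NumPy SVD rather than the notebook's fitted `pca_reducer`; with sklearn you would simply read `pca_reducer.explained_variance_ratio_`.

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic 5-feature data with most variance along two directions
# (a stand-in for the normalised wheat features, which are not shown here).
X = rng.normal(size=(200, 5)) * np.array([5.0, 3.0, 0.5, 0.4, 0.3])

Xc = X - X.mean(axis=0)                 # centre the data, as PCA does
_, s, _ = np.linalg.svd(Xc, full_matrices=False)
explained = s**2 / np.sum(s**2)         # variance ratio per component

print(explained[:2].sum())              # share of variance kept by a 2-D projection
```

If this number is low for the real data, the 2-D decision boundary plot is only a rough picture of what the classifier would do in full feature space.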
KNN algorithm

Now we finally apply the K-neighbours algorithm, using the related module from SKlearn. For K-Neighbours, generally the higher your "K" value, the smoother and less jittery your decision surface becomes. Higher K values also result in your model providing probabilistic information about the ratio of samples per class. There is a tradeoff, though: higher K values make the algorithm less sensitive to local fluctuations, since farther samples are taken into account. This causes it to model only the overall classification function, without much attention to detail, and it increases the computational complexity of the classification. Here we use K = 9.
from sklearn.neighbors import KNeighborsClassifier

# Create and train a KNeighborsClassifier. Start with K=9 neighbors.
# NOTE: Be sure to train the classifier against the pre-processed,
# PCA-transformed training data above!
knn = KNeighborsClassifier(n_neighbors=9)
knn.fit(X_train_pca, y_train)
02-Classification/knn.ipynb
Mashimo/datascience
apache-2.0
Decision Boundaries

A unique feature of supervised classification algorithms is their decision boundary, or more generally their n-dimensional decision surface: a threshold or region which, if crossed, results in a sample being assigned to that class. The decision surface isn't always spherical; in fact, it can take many different shapes depending on the algorithm that generated it. Let's prepare a function to plot the decision boundaries, which we can reuse in later examples.
import matplotlib.pyplot as plt
import matplotlib
import numpy as np

matplotlib.style.use('ggplot')  # Look Pretty

def plotDecisionBoundary(model, X, y, colors, padding=0.6, resolution=0.0025):
    fig = plt.figure(figsize=(8, 6))
    ax = fig.add_subplot(111)

    # Calculate the boundaries
    x_min, x_max = X[:, 0].min(), X[:, 0].max()
    y_min, y_max = X[:, 1].min(), X[:, 1].max()
    x_range = x_max - x_min
    y_range = y_max - y_min
    x_min -= x_range * padding
    y_min -= y_range * padding
    x_max += x_range * padding
    y_max += y_range * padding

    # Create a 2D grid matrix. The values stored in the matrix
    # are the predictions of the class at said location
    xx, yy = np.meshgrid(np.arange(x_min, x_max, resolution),
                         np.arange(y_min, y_max, resolution))

    # What class does the classifier say?
    Z = model.predict(np.c_[xx.ravel(), yy.ravel()])
    Z = Z.reshape(xx.shape)

    # Plot the contour map using the rainbow colourmap
    ax.contourf(xx, yy, Z, cmap=plt.cm.rainbow)
    fig.tight_layout(pad=2)

    # Plot the original testing points as well...
    for label in np.unique(y):
        indices = np.where(y == label)
        ax.scatter(X[indices, 0], X[indices, 1], c=colors[label], alpha=0.8)

    # Print the title
    p = model.get_params()
    fig.suptitle('Decision boundaries, K = ' + str(p['n_neighbors']))
02-Classification/knn.ipynb
Mashimo/datascience
apache-2.0
Just a reminder: these are the wheat labels:
for label in np.unique(y_original):
    print(label)

myColours = ['royalblue', 'forestgreen', 'ghostwhite']
plotDecisionBoundary(knn, X_train_pca, y_train, colors=myColours)
02-Classification/knn.ipynb
Mashimo/datascience
apache-2.0
The KNN algorithm (with K = 9) divided the space into three regions, one for each wheat type. The regions fit the testing data quite well, but not perfectly: some data points are misclassified.
# Display the accuracy score of the test data/labels, computed by
# the KNeighbors model.
#
# NOTE: You do NOT have to run .predict before calling .score, since
# .score will take care of running the predictions automatically.
print(knn.score(X_test_pca, y_test))
02-Classification/knn.ipynb
Mashimo/datascience
apache-2.0
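To make the K tradeoff above concrete, here is a minimal KNN classifier written from scratch: a majority vote among the K nearest training points under Euclidean distance. The two synthetic clusters are stand-ins for the wheat classes; this is an illustrative sketch, not the sklearn model used above.

```python
import numpy as np

def knn_predict(X_train, y_train, X_query, k):
    """Majority vote among the k nearest training points (Euclidean)."""
    preds = []
    for q in X_query:
        d = np.linalg.norm(X_train - q, axis=1)   # distance to every training point
        nearest = y_train[np.argsort(d)[:k]]      # labels of the k closest
        preds.append(np.bincount(nearest).argmax())
    return np.array(preds)

# Two well-separated synthetic clusters (stand-ins for the wheat classes)
rng = np.random.default_rng(7)
X0 = rng.normal(loc=0.0, scale=0.5, size=(30, 2))
X1 = rng.normal(loc=3.0, scale=0.5, size=(30, 2))
X = np.vstack([X0, X1])
y = np.array([0] * 30 + [1] * 30)

# A query deep inside cluster 1 is labelled 1 for any sensible K
print(knn_predict(X, y, np.array([[3.0, 3.0]]), k=9))   # → [1]
```

With K = 1 the prediction follows the single closest point, so one noisy neighbour can flip it; raising K averages that noise away, at the cost of a coarser boundary — exactly the tradeoff described above.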
K-Neighbours is particularly useful when no other model fits your data well, as it is a parameter-free approach to classification. For example, you don't have to worry about whether your data is linearly separable. One caution to keep in mind while using K-Neighbours is that your data needs to be measurable: if there is no metric for discerning distance between your features, K-Neighbours cannot help you. As with all algorithms dependent on distance measures, it is also sensitive to feature scaling, to perturbations, and to the local structure of your dataset, particularly at lower "K" values.

KNN with hyper-parameters

We now explore more deeply how the algorithm's parameters affect it, using as an example a dataset for classifying a breast tumour as benign or malignant. Breast cancer doesn't develop overnight and, like any other cancer, can be treated extremely effectively if detected in its earlier stages. Part of understanding cancer is knowing that not all irregular cell growths are malignant; some are benign, non-dangerous, non-cancerous growths. Being able to properly assess whether a tumour is benign and ignorable, or malignant and alarming, is therefore important, and it is a problem that might be solvable through data and machine learning. We use the Breast Cancer Wisconsin Original data set, provided courtesy of UCI's Machine Learning Repository.

Read the data
import pandas as pd

# Load in the dataset, identify nans, and set proper headers.
X = pd.read_csv("../Datasets/breast-cancer-wisconsin.data", header=None,
                names=['sample', 'thickness', 'size', 'shape', 'adhesion',
                       'epithelial', 'nuclei', 'chromatin', 'nucleoli',
                       'mitoses', 'status'],
                index_col='sample', na_values='?')
X.head()
02-Classification/knn.ipynb
Mashimo/datascience
apache-2.0
Data Pre-processing

Extract the target values, replace all NaN values, and split into training and testing data.
from sklearn.model_selection import train_test_split

# Copy out the status column into a slice, then drop it from the main
# dataframe.
y = X.status.copy()
X.drop(['status'], axis=1, inplace=True)

# With the labels safely extracted from the dataset, replace any nan
# values with the mean feature / column value
if X.isnull().values.any():
    print("Preprocessing data: substituted all NaN with mean value")
    X.fillna(X.mean(), inplace=True)
else:
    print("Preprocessing data: No NaN found!")

# Do train_test_split. Set random_state=7 for reproducibility, and keep
# the test_size at 0.5 (50%).
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.5,
                                                    random_state=7)
02-Classification/knn.ipynb
Mashimo/datascience
apache-2.0
Define hyper-parameters. We will loop the KNN algorithm with different parameters, specifically:
- different scalers for normalisation
- reduced or not reduced (here PCA, but isomap can also be used for reduction)
- different weight functions
- different values of K
from sklearn import preprocessing
from sklearn.decomposition import PCA
from sklearn import manifold

# Automate the tuning of hyper-parameters using for-loops to traverse
# the search space.
reducers = [False, True]
weights = ['uniform', 'distance']

# Experiment with the basic SKLearn preprocessing scalers. We know that
# the features consist of different units mixed in together, so it might
# be reasonable to assume feature scaling is necessary.
scalers = [preprocessing.Normalizer, preprocessing.StandardScaler,
           preprocessing.MinMaxScaler, preprocessing.RobustScaler]
02-Classification/knn.ipynb
Mashimo/datascience
apache-2.0
Hyper-parameter tuning

Loop through all the parameters: fit the model and print the result each time.
# The f-string print syntax works from Python 3.6; use print() otherwise
separator = "--------------------------------------"
print('*** Starting K-neighbours classifier')
print(separator)

bestScore = 0.0

# Outer loop: the scalers
for scaler in scalers:
    print("* Scaler = ", scaler)
    scalerTrained = scaler().fit(X_train)
    X_train_scaled = scalerTrained.transform(X_train)
    X_test_scaled = scalerTrained.transform(X_test)

    print("PCA? | K | Weight | Score")
    print(separator)

    # Next, loop through PCA reduction or not
    reducer = None
    for isPCA in reducers:
        if isPCA:
            # Implement PCA here: reduce down to two dimensions.
            reducer = PCA(n_components=2).fit(X_train_scaled)
        else:
            # Implement Isomap here. K values can be from 5 to 10.
            # Reduce down to two dimensions.
            reducer = manifold.Isomap(n_neighbors=10, n_components=2).fit(X_train_scaled)

        # 2D transformation on both datasets
        X_train_reduced = reducer.transform(X_train_scaled)
        X_test_reduced = reducer.transform(X_test_scaled)

        # Implement and train KNeighborsClassifier on the projected 2D
        # training data. You can use any K value from 1 to 15, so play
        # around with it and see what results you can come up with. Your
        # goal is to find a good balance where you aren't too specific
        # (low K) nor too general (high K). You should also experiment
        # with how changing the weights parameter affects the results.
        for k in range(1, 16):
            for weight in weights:
                # Train the model against data_train.
                knmodel = KNeighborsClassifier(n_neighbors=k, weights=weight)
                knmodel.fit(X_train_reduced, y_train)

                # INFO: Be sure to always keep the domain of the problem
                # in mind! It's WAY better to errantly classify a benign
                # tumor as malignant, and have it removed, than to
                # incorrectly leave a malignant tumor, believing it to be
                # benign, and then having the patient progress in cancer.
                # Since the UDF weights don't give you any class
                # information, the only way to introduce this data into
                # SKLearn's KNN Classifier is by "baking" it into your
                # data. For example, randomly reducing the ratio of
                # benign samples compared to malignant samples in the
                # training set.

                # Calculate + print the accuracy of the testing set
                currentScore = knmodel.score(X_test_reduced, y_test)
                print(f"{isPCA} | {k} | {weight} | {currentScore}")

                # Save the best model for plotting it later
                if currentScore > bestScore:
                    bestScore = currentScore
                    bestPCA = isPCA
                    bestK = k
                    bestWeight = weight
                    bestScaler = scaler
02-Classification/knn.ipynb
Mashimo/datascience
apache-2.0
Re-apply the best parameters to the model
print("These are the best parameters for the model:")
print("PCA? | K | Weight | Scaler | Score")
print(f"{bestPCA} | {bestK} | {bestWeight} | {bestScaler} | {bestScore}")

bestScalerTrained = bestScaler().fit(X_train)
X_train_scaled = bestScalerTrained.transform(X_train)
X_test_scaled = bestScalerTrained.transform(X_test)

if bestPCA:
    # Implement PCA here: reduce down to two dimensions.
    reducer = PCA(n_components=2).fit(X_train_scaled)
else:
    # Implement Isomap here (K values from 5 to 10):
    # reduce down to two dimensions.
    reducer = manifold.Isomap(n_neighbors=10, n_components=2).fit(X_train_scaled)

# Transform both data_train and data_test using the fitted reducer,
# saving the results right back into the variables themselves.
X_train_reduced = reducer.transform(X_train_scaled)
X_test_reduced = reducer.transform(X_test_scaled)

# Re-train KNeighborsClassifier on the projected 2D training data,
# using the best K and weight found above.
bestKnmodel = KNeighborsClassifier(n_neighbors=bestK, weights=bestWeight)
bestKnmodel.fit(X_train_reduced, y_train)
02-Classification/knn.ipynb
Mashimo/datascience
apache-2.0
Plotting the decision boundaries
# 2 for benign (blue colour), 4 for malignant (red colour)
myColours = {2: 'royalblue', 4: 'lightsalmon'}

plotDecisionBoundary(bestKnmodel, X_test_reduced, y_test, colors=myColours,
                     padding=0.1, resolution=0.1)
02-Classification/knn.ipynb
Mashimo/datascience
apache-2.0
Another example for KNN with reduction
import scipy.io
import math

# Same dataset as in the PCA example! Load up face_data.mat, calculate
# the num_pixels value, and rotate the images to be right-side-up
# instead of sideways.
mat = scipy.io.loadmat('../Datasets/face_data.mat')
df = pd.DataFrame(mat['images']).T

num_images, num_pixels = df.shape
num_pixels = int(math.sqrt(num_pixels))

# Rotate the pictures, so we don't have to crane our necks:
for i in range(num_images):
    df.loc[i, :] = df.loc[i, :].values.reshape(num_pixels, num_pixels).T.reshape(-1)

# Load up the face_labels dataset. It only has a single column, and
# we're only interested in that single column. We have to slice the
# column out so that we have access to it as a "Series" rather than as
# a "Dataframe".
labelsDF = pd.read_csv("../Datasets/face_labels.csv", header=None)
labels = labelsDF.iloc[:, 0]

# Do train_test_split. The labels are passed in as a Series (instead of
# as an NDArray) so we can access their underlying indices later on.
# This is necessary to find the samples in the original dataframe, which
# is used to plot the testing data as images rather than as points:
data_train, data_test, label_train, label_test = train_test_split(df, labels,
                                                                  test_size=0.15,
                                                                  random_state=7)

# If you'd like to try PCA instead of Isomap as the dimensionality
# reduction technique:
Test_PCA = False

if Test_PCA:
    # INFO: PCA is used *before* KNeighbors to simplify the
    # high-dimensionality image samples down to just 2 principal
    # components! A lot of information (variance) is lost during the
    # process, as I'm sure you can imagine. But you have to drop the
    # dimension down to two, otherwise you wouldn't be able to visualize
    # a 2D decision surface / boundary. In the wild, you'd probably
    # leave in a lot more dimensions, but wouldn't need to plot the
    # boundary; simply checking the results would suffice.
    #
    # The model should only be trained (fit) against the training data
    # (data_train). Once we've done this, we use the model to transform
    # both data_train and data_test from their original high-D image
    # feature space down to 2D, storing the results back into
    # data_train and data_test.
    pca_reducer = PCA(n_components=2).fit(data_train)
    data_train = pca_reducer.transform(data_train)
    data_test = pca_reducer.transform(data_test)
else:
    # INFO: Isomap is used *before* KNeighbors to simplify the
    # high-dimensionality image samples down to just 2 components! A lot
    # of information is lost during the process, as I'm sure you can
    # imagine. But if you have non-linear data that can be represented
    # on a 2D manifold, you will probably be left with a far superior
    # dataset to use for classification. Plus, by having the images in
    # 2D space you can plot them, as well as visualize a 2D decision
    # surface / boundary. In the wild, you'd probably leave in a lot
    # more dimensions, but wouldn't need to plot the boundary; simply
    # checking the results would suffice.
    #
    # As with PCA, fit Isomap against the training data only, then
    # transform both data_train and data_test down to 2D, storing the
    # results back into data_train and data_test.
    iso_reducer = manifold.Isomap(n_neighbors=5, n_components=2).fit(data_train)
    data_train = iso_reducer.transform(data_train)
    data_test = iso_reducer.transform(data_test)

# Implement KNeighborsClassifier.
knn = KNeighborsClassifier(n_neighbors=5)
knn.fit(data_train, label_train)

# Calculate + print the accuracy of the testing set (data_test and
# label_test).
print(knn.score(data_test, label_test))
# isomap: 0.961904761905
# pca:    0.571428571429

matplotlib.style.use('ggplot')  # Look Pretty

def Plot2DBoundary(model, DTrain, LTrain, DTest, LTest):
    # The dots are training samples (image not drawn), and the pics are
    # testing samples (image drawn). Play around with the K values: this
    # is a very controlled dataset, so the model should be able to get
    # perfect classification on the testing entries.
    fig = plt.figure(figsize=(9, 8))
    ax = fig.add_subplot(111)
    ax.set_title('Transformed Boundary, Image Space -> 2D')

    padding = 0.1    # Zoom out
    resolution = 1   # Don't get too detailed; smaller values (finer rez) take longer to compute
    colors = ['blue', 'green', 'orange', 'red']

    # Calculate the boundaries of the mesh grid. The mesh grid is a
    # standard grid (think graph paper), where each point will be sent
    # to the classifier (KNeighbors) to predict what class it belongs
    # to. This is why KNeighbors has to be trained against 2D data, so
    # we can produce this contour. Once we have the label for each point
    # on the grid, we can color it appropriately and plot it.
    x_min, x_max = DTrain[:, 0].min(), DTrain[:, 0].max()
    y_min, y_max = DTrain[:, 1].min(), DTrain[:, 1].max()
    x_range = x_max - x_min
    y_range = y_max - y_min
    x_min -= x_range * padding
    y_min -= y_range * padding
    x_max += x_range * padding
    y_max += y_range * padding

    # Using the boundaries, actually make the 2D grid matrix:
    xx, yy = np.meshgrid(np.arange(x_min, x_max, resolution),
                         np.arange(y_min, y_max, resolution))

    # What class does the classifier say about each spot on the chart?
    # The values stored in the matrix are the predictions of the model
    # at said location:
    Z = model.predict(np.c_[xx.ravel(), yy.ravel()])
    Z = Z.reshape(xx.shape)

    # Plot the mesh grid as a filled contour plot:
    ax.contourf(xx, yy, Z, cmap=plt.cm.terrain)

    # When plotting the testing images, used to validate that the
    # algorithm is functioning correctly, size them as 5% of the overall
    # chart size
    x_size = x_range * 0.05
    y_size = y_range * 0.05

    # First, plot the images in the TEST dataset
    img_num = 0
    for index in LTest.index:
        # DTest is a regular NDArray, so iterate over it one at a time.
        x0, y0 = DTest[img_num, 0] - x_size / 2., DTest[img_num, 1] - y_size / 2.
        x1, y1 = DTest[img_num, 0] + x_size / 2., DTest[img_num, 1] + y_size / 2.

        # DTest holds our images isomap-transformed into 2D. But we
        # still want to plot the original image, so we look to the
        # original, untouched dataset (at index) to get the pixels:
        img = df.iloc[index, :].values.reshape(num_pixels, num_pixels)
        ax.imshow(img, aspect='auto', cmap=plt.cm.gray,
                  interpolation='nearest', zorder=100000,
                  extent=(x0, x1, y0, y1), alpha=0.8)
        img_num += 1

    # Plot the TRAINING points as well... as points rather than images
    for label in range(len(np.unique(LTrain))):
        indices = np.where(LTrain == label)
        ax.scatter(DTrain[indices, 0], DTrain[indices, 1],
                   c=colors[label], alpha=0.8, marker='o')

# Chart the combined decision boundary, the training data as 2D points,
# and the testing data as small images so we can visually validate
# performance.
Plot2DBoundary(knn, data_train, label_train, data_test, label_test)
02-Classification/knn.ipynb
Mashimo/datascience
apache-2.0
Create some text to use....
text = "Compatibility of systems of linear constraints over the set of natural numbers. Criteria of compatibility of a system of linear Diophantine equations, strict inequations, and nonstrict inequations are considered. Upper bounds for components of a minimal set of solutions and algorithms of construction of minimal generating sets of solutions for all types of systems are given. These criteria and the corresponding algorithms for constructing a minimal supporting set of solutions can be used in solving all the considered types systems and systems of mixed types."
explain_summ.ipynb
ceteri/pytextrank
apache-2.0
Then add PyTextRank into the spaCy pipeline...
import spacy
import pytextrank

# A spaCy pipeline must be loaded first; the model name below is an
# assumption (any English model works).
nlp = spacy.load("en_core_web_sm")

tr = pytextrank.TextRank()
nlp.add_pipe(tr.PipelineComponent, name="textrank", last=True)

doc = nlp(text)
explain_summ.ipynb
ceteri/pytextrank
apache-2.0
Examine the results: a list of top-ranked phrases in the document
for p in doc._.phrases:
    print("{:.4f} {:5d}  {}".format(p.rank, p.count, p.text))
    print(p.chunks)
explain_summ.ipynb
ceteri/pytextrank
apache-2.0
Construct a list of the sentence boundaries with a phrase vector (initialized to empty set) for each...
sent_bounds = [[s.start, s.end, set([])] for s in doc.sents]
sent_bounds
explain_summ.ipynb
ceteri/pytextrank
apache-2.0
Iterate through the top-ranked phrases, adding them to the phrase vector for each sentence...
limit_phrases = 4

phrase_id = 0
unit_vector = []

for p in doc._.phrases:
    print(phrase_id, p.text, p.rank)
    unit_vector.append(p.rank)

    for chunk in p.chunks:
        print(" ", chunk.start, chunk.end)

        for sent_start, sent_end, sent_vector in sent_bounds:
            if chunk.start >= sent_start and chunk.start <= sent_end:
                print(" ", sent_start, chunk.start, chunk.end, sent_end)
                sent_vector.add(phrase_id)
                break

    phrase_id += 1

    if phrase_id == limit_phrases:
        break
explain_summ.ipynb
ceteri/pytextrank
apache-2.0
Let's take a look at the results...
sent_bounds

for sent in doc.sents:
    print(sent)
explain_summ.ipynb
ceteri/pytextrank
apache-2.0
We also construct a unit_vector for all of the phrases, up to the limit requested...
unit_vector

sum_ranks = sum(unit_vector)
unit_vector = [rank / sum_ranks for rank in unit_vector]
unit_vector
explain_summ.ipynb
ceteri/pytextrank
apache-2.0
Iterate through each sentence, calculating its Euclidean distance from the unit vector...
from math import sqrt

sent_rank = {}
sent_id = 0

for sent_start, sent_end, sent_vector in sent_bounds:
    print(sent_vector)
    sum_sq = 0.0

    for phrase_id in range(len(unit_vector)):
        print(phrase_id, unit_vector[phrase_id])

        if phrase_id not in sent_vector:
            sum_sq += unit_vector[phrase_id]**2.0

    sent_rank[sent_id] = sqrt(sum_sq)
    sent_id += 1

print(sent_rank)
explain_summ.ipynb
ceteri/pytextrank
apache-2.0
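The loop above can also be written as a single dictionary comprehension: for each sentence, sum the squared ranks of the phrases it is *missing*, then take the square root. The values below are hypothetical stand-ins for the notebook's `unit_vector` and `sent_bounds`, chosen so the arithmetic is easy to check by hand.

```python
from math import sqrt

# Hypothetical stand-ins for the notebook's variables: a 4-phrase unit
# vector (already normalised to sum to 1) and per-sentence phrase-id sets.
unit_vector = [0.4, 0.3, 0.2, 0.1]
sent_bounds = [(0, 10, {0, 2}), (10, 20, {1}), (20, 30, set())]

sent_rank = {
    sent_id: sqrt(sum(r * r
                      for pid, r in enumerate(unit_vector)
                      if pid not in sent_vector))
    for sent_id, (_, _, sent_vector) in enumerate(sent_bounds)
}
print(sent_rank)
```

A sentence containing the high-ranked phrases (like sentence 0 here) gets a small distance, and a sentence containing none of them gets the largest, so sorting ascending puts the most representative sentences first.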
Sort the sentence indexes by distance from the unit vector, in ascending order (closest first)
from operator import itemgetter

sorted(sent_rank.items(), key=itemgetter(1))
explain_summ.ipynb
ceteri/pytextrank
apache-2.0
Extract the sentences with the lowest distance, up to the limit requested...
limit_sentences = 2

sent_text = {}
sent_id = 0

for sent in doc.sents:
    sent_text[sent_id] = sent.text
    sent_id += 1

num_sent = 0

for sent_id, rank in sorted(sent_rank.items(), key=itemgetter(1)):
    print(sent_id, sent_text[sent_id])
    num_sent += 1

    if num_sent == limit_sentences:
        break
explain_summ.ipynb
ceteri/pytextrank
apache-2.0
Normalizations

<table class="tfo-notebook-buttons" align="left"> <td><a target="_blank" href="https://www.tensorflow.org/addons/tutorials/layers_normalizations"><img src="https://www.tensorflow.org/images/tf_logo_32px.png">View on TensorFlow.org</a></td> <td><a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/ko/addons/tutorials/layers_normalizations.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png">Run in Google Colab</a></td> <td><a target="_blank" href="https://github.com/tensorflow/docs-l10n/blob/master/site/ko/addons/tutorials/layers_normalizations.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png">View source on GitHub</a></td> <td><a href="https://storage.googleapis.com/tensorflow_docs/docs-l10n/site/ko/addons/tutorials/layers_normalizations.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png">Download notebook</a></td> </table>

Overview

This notebook gives a brief introduction to TensorFlow's normalization layers. Currently supported layers are:

Group Normalization (TensorFlow Addons)
Instance Normalization (TensorFlow Addons)
Layer Normalization (TensorFlow Core)

The basic idea behind these layers is to normalize the output of an activation layer to improve convergence during training. In contrast to batch normalization, these normalizations do not work on batches; instead, they normalize the activations of a single sample, which also makes them suitable for recurrent neural networks. Typically, the normalization is performed by computing the mean and standard deviation of a subgroup in your input tensor. It is also possible to apply a scale and an offset factor:

$y_{i} = \frac{\gamma ( x_{i} - \mu )}{\sigma }+ \beta$

$y$: output
$x$: input
$\gamma$: scale factor
$\mu$: mean
$\sigma$: standard deviation
$\beta$: offset factor

The following image shows the difference between these techniques. Each subplot shows an input tensor, with N as the batch axis, C as the channel axis, and (H, W) as the spatial axes (for example the height and width of a picture). The pixels in blue are normalized by the same mean and variance, computed by aggregating the values of those pixels. Source: (https://arxiv.org/pdf/1803.08494.pdf)

The weights gamma and beta are trainable in all normalization layers to compensate for the possible loss of representational ability. You can activate these factors by setting the center or scale flag to True. Of course, you can use initializers, constraints and regularizers for beta and gamma to tune these values during the training process.

Setup

Install Tensorflow 2.0 and Tensorflow-Addons
!pip install -U tensorflow-addons

import tensorflow as tf
import tensorflow_addons as tfa
site/ko/addons/tutorials/layers_normalizations.ipynb
tensorflow/docs-l10n
apache-2.0
๋ฐ์ดํ„ฐ์„ธํŠธ ์ค€๋น„ํ•˜๊ธฐ
mnist = tf.keras.datasets.mnist

(x_train, y_train), (x_test, y_test) = mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0
site/ko/addons/tutorials/layers_normalizations.ipynb
tensorflow/docs-l10n
apache-2.0
Group normalization tutorial

Introduction

Group Normalization (GN) divides the input channels into smaller subgroups and normalizes the values based on their mean and variance. Since GN works on a single example, this technique is independent of the batch size. GN experimentally scored close to batch normalization on image classification tasks. It can be beneficial to use GN instead of batch normalization when your overall batch_size is low, since batch normalization's performance can degrade in that case.

Example: splitting 10 channels after a Conv2D layer into 5 subgroups in a standard "channels last" setting.
model = tf.keras.models.Sequential([
    # Reshape into "channels last" setup.
    tf.keras.layers.Reshape((28, 28, 1), input_shape=(28, 28)),
    tf.keras.layers.Conv2D(filters=10, kernel_size=(3, 3), data_format="channels_last"),
    # GroupNorm layer
    tfa.layers.GroupNormalization(groups=5, axis=3),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(128, activation='relu'),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(10, activation='softmax')
])

model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])
model.fit(x_test, y_test)
site/ko/addons/tutorials/layers_normalizations.ipynb
tensorflow/docs-l10n
apache-2.0
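To connect the formula above to what `GroupNormalization` does, here is a NumPy sketch of group normalization for a "channels last" batch, with gamma fixed to 1 and beta to 0 (i.e. center and scale disabled). This is an illustrative re-implementation, not the TensorFlow Addons layer itself.

```python
import numpy as np

def group_norm(x, groups, eps=1e-5):
    """Normalise an (N, H, W, C) batch per sample, per channel group.

    Mirrors the formula above with gamma=1 and beta=0: each group of
    C // groups channels is shifted to zero mean and scaled to unit
    variance, independently for every sample in the batch.
    """
    n, h, w, c = x.shape
    g = x.reshape(n, h, w, groups, c // groups)
    mu = g.mean(axis=(1, 2, 4), keepdims=True)    # per sample, per group
    var = g.var(axis=(1, 2, 4), keepdims=True)
    return ((g - mu) / np.sqrt(var + eps)).reshape(n, h, w, c)

x = np.random.default_rng(0).normal(size=(2, 4, 4, 10))
out = group_norm(x, groups=5)      # 10 channels -> 5 groups of 2, as in the model above

# Each sample/group now has near-zero mean and near-unit variance
print(np.abs(out.reshape(2, 4, 4, 5, 2).mean(axis=(1, 2, 4))).max())
```

Setting `groups` equal to the channel count gives instance normalization, and `groups=1` gives layer normalization, which is exactly the relationship the next two sections describe.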
Instance normalization tutorial

Introduction

Instance Normalization is a special case of group normalization where the group size is the same as the channel size (or the axis size). Experimental results show that instance normalization performs well in style transfer when replacing batch normalization. Recently, instance normalization has also been used as a replacement for batch normalization in GANs.

Example: applying InstanceNormalization after a Conv2D layer, with uniformly initialized scale and offset factors.
model = tf.keras.models.Sequential([
    # Reshape into "channels last" setup.
    tf.keras.layers.Reshape((28, 28, 1), input_shape=(28, 28)),
    tf.keras.layers.Conv2D(filters=10, kernel_size=(3, 3), data_format="channels_last"),
    # InstanceNorm layer
    tfa.layers.InstanceNormalization(axis=3,
                                     center=True,
                                     scale=True,
                                     beta_initializer="random_uniform",
                                     gamma_initializer="random_uniform"),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(128, activation='relu'),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(10, activation='softmax')
])

model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])
model.fit(x_test, y_test)
site/ko/addons/tutorials/layers_normalizations.ipynb
tensorflow/docs-l10n
apache-2.0
๋ ˆ์ด์–ด ์ •๊ทœํ™” ํŠœํ† ๋ฆฌ์–ผ ์†Œ๊ฐœ ๋ ˆ์ด์–ด ์ •๊ทœํ™”๋Š” ๊ทธ๋ฃน ํฌ๊ธฐ๊ฐ€ 1์ธ ๊ทธ๋ฃน ์ •๊ทœํ™”์˜ ํŠน์ˆ˜ํ•œ ๊ฒฝ์šฐ์ž…๋‹ˆ๋‹ค. ํ‰๊ท ๊ณผ ํ‘œ์ค€ ํŽธ์ฐจ๋Š” ๋‹จ์ผ ์ƒ˜ํ”Œ์˜ ๋ชจ๋“  ํ™œ์„ฑํ™”์—์„œ ๊ณ„์‚ฐ๋ฉ๋‹ˆ๋‹ค. ์‹คํ—˜ ๊ฒฐ๊ณผ๋Š” ๋ ˆ์ด์–ด ์ •๊ทœํ™”๊ฐ€ ๋ฐฐ์น˜ ํฌ๊ธฐ์™€๋Š” ๋…๋ฆฝ์ ์œผ๋กœ ๋™์ž‘ํ•˜๊ธฐ ๋•Œ๋ฌธ์— ์ˆœํ™˜ ์‹ ๊ฒฝ๋ง์— ์ ํ•ฉํ•˜๋‹ค๋Š” ๊ฒƒ์„ ๋ณด์—ฌ์ค๋‹ˆ๋‹ค. Example Conv2D ๋ ˆ์ด์–ด ๋‹ค์Œ์— ๋ ˆ์ด์–ด ์ •๊ทœํ™”๋ฅผ ์ ์šฉํ•˜๊ณ  ์Šค์ผ€์ผ ๋ฐ ์˜คํ”„์…‹ ์ธ์ž๋ฅผ ์‚ฌ์šฉํ•ฉ๋‹ˆ๋‹ค.
model = tf.keras.models.Sequential([
    # Reshape into "channels last" setup.
    tf.keras.layers.Reshape((28, 28, 1), input_shape=(28, 28)),
    tf.keras.layers.Conv2D(filters=10, kernel_size=(3, 3), data_format="channels_last"),
    # LayerNorm layer
    tf.keras.layers.LayerNormalization(axis=3, center=True, scale=True),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(128, activation='relu'),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(10, activation='softmax')
])

model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])
model.fit(x_test, y_test)
site/ko/addons/tutorials/layers_normalizations.ipynb
tensorflow/docs-l10n
apache-2.0
Train SVM on features

Using the multiclass SVM code developed earlier in the assignment, train SVMs on top of the features extracted above; this should achieve better results than training SVMs directly on top of raw pixels.

|Number of Bins|Validation Accuracy|Learning Rate|Regularization Strength|Test Accuracy|
|------|------|------|------|------|
|10|0.426000|8.000000e-07|5.000000e+04||
|50|0.440000|8.000000e-07|5.000000e+04|0.428|
|50|0.441000|3.000000e-07|1.000000e+05|0.428|
|100|0.440000|2.000000e-07|8.000000e+04|0.414|
|150|0.428000|8.000000e-07|2.000000e+04|0.388|

lr 3.000000e-07 reg 1.000000e+05 train accuracy: 0.426041 val accuracy: 0.441000
# Use the validation set to tune the learning rate and regularization strength from cs231n.classifiers.linear_classifier import LinearSVM learning_rates = [1e-7, 2e-7, 3e-7, 5e-5, 8e-7] regularization_strengths = [1e4, 2e4, 3e4, 4e4, 5e4, 6e4, 7e4, 8e4, 7e5] results = {} best_val = -1 best_svm = None ################################################################################ # TODO: # # Use the validation set to set the learning rate and regularization strength. # # This should be identical to the validation that you did for the SVM; save # # the best trained classifer in best_svm. You might also want to play # # with different numbers of bins in the color histogram. If you are careful # # you should be able to get accuracy of near 0.44 on the validation set. # ################################################################################ for lr in learning_rates: for rs in regularization_strengths: svm = LinearSVM() svm.train(X_train_feats, y_train, learning_rate = lr, reg = rs, num_iters = 2000) train_accuracy = np.mean(y_train == svm.predict(X_train_feats)) val_accuracy = np.mean(y_val == svm.predict(X_val_feats)) results[(lr, rs)] = (train_accuracy, val_accuracy) if val_accuracy > best_val: best_val = val_accuracy best_svm = svm ################################################################################ # END OF YOUR CODE # ################################################################################ # Print out results. for lr, reg in sorted(results): train_accuracy, val_accuracy = results[(lr, reg)] print 'lr %e reg %e train accuracy: %f val accuracy: %f' % ( lr, reg, train_accuracy, val_accuracy) print 'best validation accuracy achieved during cross-validation: %f' % best_val # Evaluate your trained SVM on the test set y_test_pred = best_svm.predict(X_test_feats) test_accuracy = np.mean(y_test == y_test_pred) print test_accuracy # An important way to gain intuition about how an algorithm works is to # visualize the mistakes that it makes. 
# In this visualization, we show examples of images that are misclassified by
# our current system. The first column shows images that our system labeled as
# "plane" but whose true label is something other than "plane".
examples_per_class = 8
classes = ['plane', 'car', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck']
for cls, cls_name in enumerate(classes):
    idxs = np.where((y_test != cls) & (y_test_pred == cls))[0]
    idxs = np.random.choice(idxs, examples_per_class, replace=False)
    for i, idx in enumerate(idxs):
        plt.subplot(examples_per_class, len(classes), i * len(classes) + cls + 1)
        plt.imshow(X_test[idx].astype('uint8'))
        plt.axis('off')
        if i == 0:
            plt.title(cls_name)
plt.show()
assignment1/features.ipynb
Hasil-Sharma/Neural-Networks-CS231n
gpl-3.0
Inline question 1: Describe the misclassification results that you see. Do they make sense?

Neural Network on image features

Earlier in this assignment we saw that training a two-layer neural network on raw pixels achieved better classification performance than linear classifiers on raw pixels. In this notebook we have seen that linear classifiers on image features outperform linear classifiers on raw pixels. For completeness, we should also try training a neural network on image features. This approach should outperform all previous approaches: you should easily be able to achieve over 55% classification accuracy on the test set; our best model achieves about 60% classification accuracy.
print X_train_feats.shape
assignment1/features.ipynb
Hasil-Sharma/Neural-Networks-CS231n
gpl-3.0
| Learning Rate | Regularization Rate | Validation Accuracy | Test Accuracy |
| --- | --- | --- | --- |
| 0.1 | 0.0001 | 0.544 | 0.534 |
| 0.1 | 0.000215443469003 | 0.544 | 0.538 |
| 0.1 | 0.000464158883361 | 0.542 | 0.534 |
| 0.1 | 0.001 | 0.537 | 0.535 |
| 0.1 | 0.00215443469003 | 0.536 | 0.533 |
| 0.1 | 0.00464158883361 | 0.529 | 0.533 |
| 0.1 | 0.01 | 0.524 | 0.522 |
| 0.1 | 0.0215443469003 | 0.508 | 0.508 |
| 0.1 | 0.0464158883361 | 0.51 | 0.489 |
| 0.1 | 0.1 | 0.434 | 0.446 |
| 0.215443469003 | 0.0001 | 0.594 | 0.58 |
| 0.215443469003 | 0.000215443469003 | 0.604 | 0.578 |
| 0.215443469003 | 0.000464158883361 | 0.601 | 0.58 |
| 0.215443469003 | 0.001 | 0.593 | 0.586 |
| 0.215443469003 | 0.00215443469003 | 0.597 | 0.569 |
| 0.215443469003 | 0.00464158883361 | 0.579 | 0.56 |
| 0.215443469003 | 0.01 | 0.554 | 0.539 |
| 0.215443469003 | 0.0215443469003 | 0.515 | 0.517 |
| 0.215443469003 | 0.0464158883361 | 0.508 | 0.491 |
| 0.215443469003 | 0.1 | 0.441 | 0.446 |
| 0.464158883361 | 0.0001 | 0.595 | 0.599 |
| 0.464158883361 | 0.000215443469003 | 0.601 | 0.597 |
| 0.464158883361 | 0.000464158883361 | 0.594 | 0.6 |
| 0.464158883361 | 0.001 | 0.616 | 0.596 |
| 0.464158883361 | 0.00215443469003 | 0.609 | 0.601 |
| 0.464158883361 | 0.00464158883361 | 0.603 | 0.575 |
| 0.464158883361 | 0.01 | 0.573 | 0.551 |
| 0.464158883361 | 0.0215443469003 | 0.525 | 0.517 |
| 0.464158883361 | 0.0464158883361 | 0.502 | 0.503 |
| 0.464158883361 | 0.1 | 0.44 | 0.447 |
| 1.0 | 0.0001 | 0.568 | 0.566 |
| 1.0 | 0.000215443469003 | 0.588 | 0.589 |
| 1.0 | 0.000464158883361 | 0.591 | 0.571 |
| 1.0 | 0.001 | 0.61 | 0.587 |
| 1.0 | 0.00215443469003 | 0.614 | 0.603 |
| 1.0 | 0.00464158883361 | 0.62 | 0.587 |
| 1.0 | 0.01 | 0.574 | 0.557 |
| 1.0 | 0.0215443469003 | 0.521 | 0.517 |
| 1.0 | 0.0464158883361 | 0.498 | 0.492 |
| 1.0 | 0.1 | 0.433 | 0.441 |
| 2.15443469003 | 0.0001 | 0.547 | 0.559 |
| 2.15443469003 | 0.000215443469003 | 0.571 | 0.564 |
| 2.15443469003 | 0.000464158883361 | 0.563 | 0.578 |
| 2.15443469003 | 0.001 | 0.6 | 0.592 |
| 2.15443469003 | 0.00215443469003 | 0.615 | 0.613 |
| 2.15443469003 | 0.00464158883361 | 0.611 | 0.6 |
| 2.15443469003 | 0.01 | 0.578 | 0.558 |
| 2.15443469003 | 0.0215443469003 | 0.525 | 0.511 |
| 2.15443469003 | 0.0464158883361 | 0.491 | 0.485 |
| 2.15443469003 | 0.1 | 0.449 | 0.454 |
| 4.64158883361 | 0.0001 | 0.087 | 0.103 |
| 4.64158883361 | 0.000215443469003 | 0.087 | 0.103 |
| 4.64158883361 | 0.000464158883361 | 0.087 | 0.103 |
| 4.64158883361 | 0.001 | 0.087 | 0.103 |
| 4.64158883361 | 0.00215443469003 | 0.087 | 0.103 |
| 4.64158883361 | 0.00464158883361 | 0.087 | 0.103 |
| 4.64158883361 | 0.01 | 0.087 | 0.103 |
| 4.64158883361 | 0.0215443469003 | 0.087 | 0.103 |
| 4.64158883361 | 0.0464158883361 | 0.087 | 0.103 |
| 4.64158883361 | 0.1 | 0.087 | 0.103 |
| 10.0 | 0.0001 | 0.087 | 0.103 |
| 10.0 | 0.000215443469003 | 0.087 | 0.103 |
| 10.0 | 0.000464158883361 | 0.087 | 0.103 |
| 10.0 | 0.001 | 0.087 | 0.103 |
| 10.0 | 0.00215443469003 | 0.087 | 0.103 |
| 10.0 | 0.00464158883361 | 0.087 | 0.103 |
| 10.0 | 0.01 | 0.087 | 0.103 |
| 10.0 | 0.0215443469003 | 0.087 | 0.103 |
| 10.0 | 0.0464158883361 | 0.087 | 0.103 |
| 10.0 | 0.1 | 0.087 | 0.103 |
| 21.5443469003 | 0.0001 | 0.087 | 0.103 |
| 21.5443469003 | 0.000215443469003 | 0.087 | 0.103 |
| 21.5443469003 | 0.000464158883361 | 0.087 | 0.103 |
| 21.5443469003 | 0.001 | 0.087 | 0.103 |
| 21.5443469003 | 0.00215443469003 | 0.087 | 0.103 |
| 21.5443469003 | 0.00464158883361 | 0.087 | 0.103 |
| 21.5443469003 | 0.01 | 0.087 | 0.103 |
| 21.5443469003 | 0.0215443469003 | 0.087 | 0.103 |
| 21.5443469003 | 0.0464158883361 | 0.087 | 0.103 |
| 21.5443469003 | 0.1 | 0.087 | 0.103 |
| 46.4158883361 | 0.0001 | 0.087 | 0.103 |
| 46.4158883361 | 0.000215443469003 | 0.087 | 0.103 |
| 46.4158883361 | 0.000464158883361 | 0.087 | 0.103 |
| 46.4158883361 | 0.001 | 0.087 | 0.103 |
| 46.4158883361 | 0.00215443469003 | 0.087 | 0.103 |
| 46.4158883361 | 0.00464158883361 | 0.087 | 0.103 |
| 46.4158883361 | 0.01 | 0.087 | 0.103 |
| 46.4158883361 | 0.0215443469003 | 0.087 | 0.103 |
| 46.4158883361 | 0.0464158883361 | 0.087 | 0.103 |
| 46.4158883361 | 0.1 | 0.087 | 0.103 |
| 100.0 | 0.0001 | 0.087 | 0.103 |
| 100.0 | 0.000215443469003 | 0.087 | 0.103 |
| 100.0 | 0.000464158883361 | 0.087 | 0.103 |
| 100.0 | 0.001 | 0.087 | 0.103 |
| 100.0 | 0.00215443469003 | 0.087 | 0.103 |
| 100.0 | 0.00464158883361 | 0.087 | 0.103 |
| 100.0 | 0.01 | 0.087 | 0.103 |
| 100.0 | 0.0215443469003 | 0.087 | 0.103 |
| 100.0 | 0.0464158883361 | 0.087 | 0.103 |
| 100.0 | 0.1 | 0.087 | 0.103 |
from cs231n.classifiers.neural_net import TwoLayerNet

input_dim = X_train_feats.shape[1]
hidden_dim = 500
num_classes = 10

best_net = None
best_val_acc = 0.0
best_hidden_size = None
best_learning_rate = None
best_regularization_strength = None

################################################################################
# TODO: Train a two-layer neural network on image features. You may want to    #
# cross-validate various parameters as in previous sections. Store your best   #
# model in the best_net variable.                                              #
################################################################################
learning_rates = np.logspace(-1, 2, 10)
regularization_strengths = np.logspace(-4, -1, 10)

print '| Learning Rate| Regularization Rate | Validation Accuracy | Test Accuracy |'
print '| --- | --- | --- | --- |'
for learning_rate in learning_rates:
    for regularization_strength in regularization_strengths:
        net = TwoLayerNet(input_dim, hidden_dim, num_classes)
        # Train the network
        stats = net.train(X_train_feats, y_train, X_val_feats, y_val,
                          num_iters=5000, batch_size=500,
                          learning_rate=learning_rate, learning_rate_decay=0.95,
                          reg=regularization_strength, verbose=False)
        # Predict on the validation set
        val_acc = (net.predict(X_val_feats) == y_val).mean()
        test_acc = (net.predict(X_test_feats) == y_test).mean()
        if best_val_acc < val_acc:
            best_val_acc = val_acc
            best_net = net
            best_learning_rate = learning_rate
            best_regularization_strength = regularization_strength
        print '|', learning_rate, '|', regularization_strength, '|', val_acc, '|', test_acc, '|'
################################################################################
#                              END OF YOUR CODE                                #
################################################################################

# Run your neural net classifier on the test set. You should be able to
# get more than 55% accuracy.
test_acc = (best_net.predict(X_test_feats) == y_test).mean()
print test_acc
assignment1/features.ipynb
Hasil-Sharma/Neural-Networks-CS231n
gpl-3.0
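The learning-rate and regularization grids in the sweep above come from `np.logspace`, which spaces values evenly on a logarithmic scale; a quick look at the grids it produces:

```python
import numpy as np

learning_rates = np.logspace(-1, 2, 10)            # 10 values from 10**-1 up to 10**2
regularization_strengths = np.logspace(-4, -1, 10) # 10 values from 10**-4 up to 10**-1

# consecutive grid points differ by a constant factor (10**(1/3) here),
# which is why the values in the table read 0.1, 0.2154..., 0.4641..., 1.0, ...
print(learning_rates[1] / learning_rates[0])
```

This is why a log-spaced grid is the usual choice for hyperparameters whose useful range spans several orders of magnitude.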
ไฝฟ็”จไน‹ๅ‰ไธ‹่ผ‰็š„ mnist ่ณ‡ๆ–™๏ผŒ่ผ‰ๅ…ฅ่จ“็ทด่ณ‡ๆ–™ train_set ๅ’Œๆธฌ่ฉฆ่ณ‡ๆ–™ test_set
import gzip
import pickle

with gzip.open('../Week02/mnist.pkl.gz', 'rb') as f:
    train_set, validation_set, test_set = pickle.load(f, encoding='latin1')

train_X, train_y = train_set
validation_X, validation_y = validation_set
test_X, test_y = test_set
Week05/From NumPy to Logistic Regression.ipynb
tjwei/HackNTU_Data_2017
mit
ไน‹ๅ‰็š„็œ‹ๅœ–็‰‡ๅ‡ฝๆ•ธ
from PIL import Image
from IPython.display import display

def showX(X):
    int_X = (X*255).clip(0, 255).astype('uint8')
    # N*784 -> N*28*28 -> 28*N*28 -> 28 * 28N
    int_X_reshape = int_X.reshape(-1, 28, 28).swapaxes(0, 1).reshape(28, -1)
    display(Image.fromarray(int_X_reshape))

# the first 20 training examples of X
showX(train_X[:20])
Week05/From NumPy to Logistic Regression.ipynb
tjwei/HackNTU_Data_2017
mit
train_set is used to train our model.

Our model is a very simple logistic regression model; its only parameters are a 784x10 matrix W and a length-10 vector b.

We first initialize W and b with uniform random numbers.
import numpy as np

W = np.random.uniform(low=-1, high=1, size=(28*28, 10))
b = np.random.uniform(low=-1, high=1, size=10)
Week05/From NumPy to Logistic Regression.ipynb
tjwei/HackNTU_Data_2017
mit
The complete model is as follows:

Treat the image as a vector $x$ of length 784. Compute $Wx+b$, then take $\exp$. This gives ten values; divide them by their total. We want the resulting numbers to represent the probability that the image shows each digit.

$ \Pr(Y=i|x, W, b) = \frac {e^{W_i x + b_i}} {\sum_j e^{W_j x + b_j}}$

Let's try the first training example. $x$ is the input, and $y$ is the digit this image corresponds to (in this example, $y=5$).
x = train_X[0]
y = train_y[0]
showX(x)
y
Week05/From NumPy to Logistic Regression.ipynb
tjwei/HackNTU_Data_2017
mit
First compute $e^{Wx+b}$
Pr = np.exp(x @ W + b)
Pr.shape
Week05/From NumPy to Logistic Regression.ipynb
tjwei/HackNTU_Data_2017
mit
Then normalize so the total becomes 1 (matching the interpretation as probabilities).
Pr = Pr/Pr.sum()
Pr
Week05/From NumPy to Logistic Regression.ipynb
tjwei/HackNTU_Data_2017
mit
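A side note not covered in the tutorial's code: computing `np.exp(x @ W + b)` directly can overflow when the scores are large. A common remedy is to subtract the maximum score before exponentiating, which leaves the probabilities unchanged; a minimal sketch:

```python
import numpy as np

def softmax_stable(z):
    # subtracting max(z) does not change the ratios e^{z_i} / sum_j e^{z_j},
    # but keeps the exponentials from overflowing
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

z = np.array([1000.0, 1001.0, 1002.0])   # naive np.exp(z) would overflow here
p = softmax_stable(z)
print(p.sum())   # 1.0
```

With random, moderately sized W and b this never bites, which is why the tutorial can get away without it.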
Since $W$ and $b$ were set randomly, the probabilities computed above are also random. The correct answer is $y=5$; with luck we might guess it.

To judge the quality of our prediction, we need a way to score the error. We use the following (not the usual squared error, but an entropy-based loss: it is easy to differentiate and works well):

$ loss = - \log(\Pr(Y=y|x, W,b)) $

This error score is usually called the error or loss. The formula may look intimidating, but computing it is simple, just the expression below.
loss = -np.log(Pr[y])
loss
Week05/From NumPy to Logistic Regression.ipynb
tjwei/HackNTU_Data_2017
mit
Now we want to improve. We use a method called gradient descent.

Since the gradient is the direction in which a function increases fastest, stepping a little in the opposite direction (the direction of fastest descent) should make the function value a little smaller.

Recall that our variables are $W$ and $b$ (784*10 + 10 = 7850 parameters in total), so we need the partial derivative of $loss$ with respect to every entry of $W$ and $b$. Fortunately these partial derivatives can be worked out by hand, and the final expressions are not complicated.

Expanding $loss$, we can write it as

$loss = \log(\sum_j e^{W_j x + b_j}) - W_i x - b_i$

For $k \neq i$, the partial derivative of $loss$ with respect to $b_k$ is

$$ \frac{e^{W_k x + b_k}}{\sum_j e^{W_j x + b_j}} = \Pr(Y=k | x, W, b)$$

For $k = i$, the partial derivative of $loss$ with respect to $b_k$ is

$$ \Pr(Y=k | x, W, b) - 1$$
gradb = Pr.copy()
gradb[y] -= 1
print(gradb)
Week05/From NumPy to Logistic Regression.ipynb
tjwei/HackNTU_Data_2017
mit
ๅฐ $W$ ็š„ๅๅพฎๅˆ†ไนŸไธ้›ฃ ๅฐ $k \neq i$ ๆ™‚, $loss$ ๅฐ $W_{k,t}$ ็š„ๅๅพฎๅˆ†ๆ˜ฏ $$ \frac{e^{W_k x + b_k} W_{k,t} x_t}{\sum_j e^{W_j x + b_j}} = \Pr(Y=k | x, W, b) x_t$$ ๅฐ $k = i$ ๆ™‚, $loss$ ๅฐ $W_{k,t}$ ็š„ๅๅพฎๅˆ†ๆ˜ฏ $$ \Pr(Y=k | x, W, b) x_t - x_t$$
print(Pr.shape, x.shape, W.shape)
gradW = x.reshape(784, 1) @ Pr.reshape(1, 10)
gradW[:, y] -= x
Week05/From NumPy to Logistic Regression.ipynb
tjwei/HackNTU_Data_2017
mit
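To double-check hand-derived gradients like the ones above, a finite-difference comparison is the standard sanity check. Here is a standalone sketch on a tiny random problem (the 4-feature, 3-class sizes are made up for illustration, independent of the actual W and b):

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.uniform(-1, 1, size=(4, 3))   # tiny 4-feature, 3-class example
b = rng.uniform(-1, 1, size=3)
x = rng.uniform(0, 1, size=4)
y = 1

def loss_fn(W, b):
    Pr = np.exp(x @ W + b)
    Pr = Pr / Pr.sum()
    return -np.log(Pr[y])

# analytic gradient with respect to b, as derived above: Pr - onehot(y)
Pr = np.exp(x @ W + b)
Pr /= Pr.sum()
gradb = Pr.copy()
gradb[y] -= 1

# central finite difference for each entry of b
eps = 1e-6
gradb_num = np.zeros_like(b)
for k in range(3):
    bp = b.copy(); bp[k] += eps
    bm = b.copy(); bm[k] -= eps
    gradb_num[k] = (loss_fn(W, bp) - loss_fn(W, bm)) / (2 * eps)

print(np.abs(gradb - gradb_num).max())  # should be tiny (close to zero)
```

The same check works entry-by-entry for gradW.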
Having computed the gradients, move W and b a small step in the direction opposite the gradient to obtain new W and b.
W -= 0.1 * gradW
b -= 0.1 * gradb
Week05/From NumPy to Logistic Regression.ipynb
tjwei/HackNTU_Data_2017
mit
ๅ†ไธ€ๆฌก่จˆ็ฎ— $\Pr$ ไปฅๅŠ $loss$
Pr = np.exp(x @ W + b)
Pr = Pr/Pr.sum()
loss = -np.log(Pr[y])
loss
Week05/From NumPy to Logistic Regression.ipynb
tjwei/HackNTU_Data_2017
mit
Q

Look at Pr, find the class with the highest probability, and use it as the predicted y. Rerun the code above: does the loss decrease?

Try other test examples. What have our W and b learned?

Next we apply the same procedure to the fifty thousand training examples in turn and see what happens.
W = np.random.uniform(low=-1, high=1, size=(28*28, 10))
b = np.random.uniform(low=-1, high=1, size=10)

score = 0
N = 50000*20
d = 0.001
learning_rate = 1e-2
for i in range(N):
    if i % 50000 == 0:
        print(i, "%5.3f%%"%(score*100))
    x = train_X[i % 50000]
    y = train_y[i % 50000]
    Pr = np.exp(x @ W + b)
    Pr = Pr/Pr.sum()
    loss = -np.log(Pr[y])
    score *= (1-d)
    if Pr.argmax() == y:
        score += d
    gradb = Pr.copy()
    gradb[y] -= 1
    gradW = x.reshape(784, 1) @ Pr.reshape(1, 10)
    gradW[:, y] -= x
    W -= learning_rate * gradW
    b -= learning_rate * gradb
Week05/From NumPy to Logistic Regression.ipynb
tjwei/HackNTU_Data_2017
mit
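The `score` variable in the loop above is an exponentially weighted moving average of the 0/1 correctness indicator, so it tracks the recent training accuracy. A standalone sketch showing it converge toward a simulated 90% accuracy (the accuracy value is made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(2)
d = 0.001          # same decay rate as in the training loop
score = 0.0
true_acc = 0.9
for _ in range(20000):
    correct = rng.random() < true_acc   # simulate a 90%-accurate predictor
    score *= (1 - d)
    if correct:
        score += d
print(score)   # close to 0.9
```

The effective averaging window is roughly 1/d iterations, so score mostly reflects the last thousand or so examples.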
็ตๆžœ็™ผ็พๆญฃ็ขบ็އๅคง็ด„ๆ˜ฏ 92%๏ผŒ ไฝ†้€™ๆ˜ฏๅฐ่จ“็ทด่ณ‡ๆ–™่€Œไธๆ˜ฏๅฐๆธฌ่ฉฆ่ณ‡ๆ–™ ่€Œไธ”๏ผŒไธ€็ญ†ไธ€็ญ†็š„่จ“็ทด่ณ‡ไนŸๆœ‰้ปžๆ…ข๏ผŒ็ทšๆ€งไปฃๆ•ธ็š„็‰น้ปžๅฐฑๆ˜ฏ่ƒฝๅค ๅ‘้‡้‹็ฎ—ใ€‚ๅฆ‚ๆžœๆŠŠๅพˆๅคš็ญ† $x$ ็•ถๆˆๅˆ—ๅ‘้‡็ต„ๅˆๆˆไธ€ๅ€‹็Ÿฉ้™ฃ๏ผˆ็„ถๅพŒๅซๅš $X$๏ผ‰๏ผŒ็”ฑๆ–ผ็Ÿฉ้™ฃไน˜ๆณ•็š„ๅŽŸ็†๏ผŒๆˆ‘ๅ€‘้‚„ๆ˜ฏไธ€ๆจฃ่จˆ็ฎ— $WX+b$ ๏ผŒ ๅฐฑๅฏไปฅๅŒๆ™‚ๅพ—ๅˆฐๅคš็ญ†็ตๆžœใ€‚ ไธ‹้ข็š„ๅ‡ฝๆ•ธ๏ผŒๅฏไปฅไธ€ๆฌก่ผธๅ…ฅๅคš็ญ† $x$๏ผŒ ๅŒๆ™‚ไธ€ๆฌก่จˆ็ฎ—ๅคš็ญ† $x$ ็š„็ตๆžœๅ’Œๆบ–็ขบ็އใ€‚
def compute_Pr(X):
    Pr = np.exp(X @ W + b)
    return Pr/Pr.sum(axis=1, keepdims=True)

def compute_accuracy(Pr, y):
    return (Pr.argmax(axis=1) == y).mean()
Week05/From NumPy to Logistic Regression.ipynb
tjwei/HackNTU_Data_2017
mit
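The claim that stacking inputs as rows lets one matrix multiplication replace many vector ones is easy to verify on random data:

```python
import numpy as np

rng = np.random.default_rng(1)
W = rng.normal(size=(784, 10))
b = rng.normal(size=10)
X = rng.normal(size=(5, 784))               # 5 inputs stacked as rows

batched = X @ W + b                          # one matrix multiply for all 5
single = np.stack([x @ W + b for x in X])    # one input at a time

print(np.allclose(batched, single))   # True
```

The batched form is what makes mini-batch training practical: the work moves into one large, highly optimized matrix multiplication.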
ไธ‹้ขๆ˜ฏๆ›ดๆ–ฐ้Žๅพ—่จ“็ทด้Ž็จ‹๏ผŒ ็•ถ i%100000 ๆ™‚๏ผŒ้ †ไพฟ่จˆ็ฎ—ไธ€ไธ‹ test accuracy ๅ’Œ valid accuracyใ€‚
%%timeit -r 1 -n 1
def compute_Pr(X):
    Pr = np.exp(X @ W + b)
    return Pr/Pr.sum(axis=1, keepdims=True)

def compute_accuracy(Pr, y):
    return (Pr.argmax(axis=1) == y).mean()

W = np.random.uniform(low=-1, high=1, size=(28*28, 10))
b = np.random.uniform(low=-1, high=1, size=10)

score = 0
N = 20000
batch_size = 128
learning_rate = 0.5
for i in range(0, N):
    if (i+1) % 2000 == 0:
        test_score = compute_accuracy(compute_Pr(test_X), test_y)*100
        train_score = compute_accuracy(compute_Pr(train_X), train_y)*100
        print(i+1, "%5.2f%%"%test_score, "%5.2f%%"%train_score)
    # randomly pick a mini-batch of training data
    rndidx = np.random.choice(train_X.shape[0], batch_size, replace=False)
    X, y = train_X[rndidx], train_y[rndidx]
    # compute all the Pr at once
    Pr = compute_Pr(X)
    # compute the average gradient
    Pr_one_y = Pr - np.eye(10)[y]
    gradb = Pr_one_y.mean(axis=0)
    gradW = X.T @ (Pr_one_y) / batch_size
    # update W and b
    W -= learning_rate * gradW
    b -= learning_rate * gradb
Week05/From NumPy to Logistic Regression.ipynb
tjwei/HackNTU_Data_2017
mit
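The mini-batch gradient above builds one-hot rows for a whole batch of labels with the `np.eye(10)[y]` indexing idiom; a minimal illustration with 3 classes:

```python
import numpy as np

y = np.array([2, 0, 1])
onehot = np.eye(3)[y]   # row i is the one-hot encoding of y[i]
print(onehot)
# [[0. 0. 1.]
#  [1. 0. 0.]
#  [0. 1. 0.]]
```

Subtracting this matrix from Pr gives exactly the per-example gradient `Pr - onehot(y)` derived earlier, for the whole batch at once.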
The final accuracy is about 92-93%. Not perfect, but then again this is only a single matrix.

Raw numbers alone are hard to get a feel for, so let's look at the first ten test examples.

We can see that only one of the first ten is wrong.
Pr = compute_Pr(test_X[:10])
pred_y = Pr.argmax(axis=1)
for i in range(10):
    print(pred_y[i], test_y[i])
    showX(test_X[i])
Week05/From NumPy to Logistic Regression.ipynb
tjwei/HackNTU_Data_2017
mit
็œ‹็œ‹ๅ‰ไธ€็™พ็ญ†่ณ‡ๆ–™ไธญ๏ผŒๆ˜ฏๅ“ชไบ›ๆƒ…ๆณ็ฎ—้Œฏ
Pr = compute_Pr(test_X[:100])
pred_y = Pr.argmax(axis=1)
for i in range(100):
    if pred_y[i] != test_y[i]:
        print(pred_y[i], test_y[i])
        showX(test_X[i])
Week05/From NumPy to Logistic Regression.ipynb
tjwei/HackNTU_Data_2017
mit
You can definitely begin to make out some of the structure that is occurring in the photovoltaic performance of this device. This image looks great, but there are still many areas for improvement. For example, I will need to extensively prove that this image is not purely a result of topographical cross-talk. If this image is correct, it is a significant improvement on our current imaging technique.

The Features

In the next cell we show an image of the various features that were calculated from the raw deflection signal. Some features clearly matter more than others, which indicates that the search for better and more representative features is worthwhile. However, I think this is a great start to a project I hope to continue developing in the future.
import matplotlib.pyplot as plt

fig, axs = plt.subplots(nrows=3)

axs[0].imshow(real_sum_img, 'hot')
axs[0].set_title('Total Signal Sum')

axs[1].imshow(fft_sum_img, cmap='hot')
axs[1].set_title('Sum of the FFT Power Spectrum')

axs[2].imshow(amp_diff_img, cmap='hot')
axs[2].set_title('Difference in Amplitude After Trigger')

plt.tight_layout()
plt.show()
Examples/demo.ipynb
jarrison/trEFM-learn
mit
www.topuniversities.com
root_url_1 = 'https://www.topuniversities.com'
# we use the link to the API from where the website fetches its data instead of BeautifulSoup
# much much cleaner
list_url_1 = root_url_1 + '/sites/default/files/qs-rankings-data/357051_indicators.txt'

r = requests.get(list_url_1)
top_universities = pd.DataFrame()
top_universities = top_universities.from_dict(r.json()['data'])[['uni', 'overall_rank', 'location', 'region']]

# get the university name and details URL with a regex
top_universities['name'] = top_universities['uni'].apply(lambda name: html.unescape(re.findall('<a[^>]+href=\"(.*?)\"[^>]*>(.*)?</a>', name)[0][1]))
top_universities['url'] = top_universities['uni'].apply(lambda name: html.unescape(re.findall('<a[^>]+href=\"(.*?)\"[^>]*>(.*)?</a>', name)[0][0]))
top_universities.drop('uni', axis=1, inplace=True)
top_universities['overall_rank'] = top_universities['overall_rank'].astype(int)

# selects the top N rows based on the column_name of the dataframe df
def select_top_N(df, column_name, N):
    df = df.sort_values(by=column_name)
    df = df[df[column_name] <= N]
    return df

# get only the first top-200 universities by overall rank
top_universities = select_top_N(top_universities, 'overall_rank', NUM_OBS)
top_universities.head()

students_total = []
students_inter = []
faculty_total = []
faculty_inter = []

def get_num(soup, selector):
    scraped = soup.select(selector)
    # some universities don't have stats, return NaN in these cases
    if scraped:
        return int(scraped[0].contents[0].replace(',', ''))
    else:
        return np.NaN

for details_url in tqdm_notebook(top_universities['url']):
    soup = BeautifulSoup(requests.get(root_url_1 + details_url).text, 'html.parser')
    students_total.append(get_num(soup, 'div.total.student div.number'))
    students_inter.append(get_num(soup, 'div.total.inter div.number'))
    faculty_total.append(get_num(soup, 'div.total.faculty div.number'))
    faculty_inter.append(get_num(soup, 'div.inter.faculty div.number'))

top_universities['students_total'] = students_total
top_universities['students_international'] = students_inter
top_universities['students_national'] = top_universities['students_total'] - top_universities['students_international']
top_universities['faculty_total'] = faculty_total
top_universities['faculty_international'] = faculty_inter
top_universities['faculty_national'] = top_universities['faculty_total'] - top_universities['faculty_international']
top_universities.head()

# defining colors for each type of plot
colors_1 = ['#FF9F9A', '#D0BBFF']
colors_2 = ['#92C6FF', '#97F0AA']
plt.style.use('ggplot')
HW02/Homework 2.ipynb
Timonzimm/CS-401
mit
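The regex used above to pull the link target and text out of each `<a>` tag can be exercised on a small made-up anchor string (the URL and class attribute below are invented for illustration):

```python
import html
import re

anchor = '<a href="/universities/mit" class="uni-link">Massachusetts Institute of Technology (MIT)</a>'

# same pattern as in the scraping cell: group 1 is the href, group 2 the link text
url, name = re.findall('<a[^>]+href=\"(.*?)\"[^>]*>(.*)?</a>', anchor)[0]
print(html.unescape(name))  # Massachusetts Institute of Technology (MIT)
print(url)                  # /universities/mit
```

`html.unescape` matters because the API returns names with entities such as `&amp;` embedded in the markup.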
Best universities in terms of:

We selected the top 10 universities for points (a) and (b). For points (c) and (d), the top 200 universities were used in order to have more data.

(a) ratio between faculty members and students
top = 10
top_universities_ratio = select_top_N(top_universities, 'overall_rank', top)

top_universities_ratio_sf = top_universities_ratio[['name', 'students_total', 'faculty_total']]
top_universities_ratio_sf = top_universities_ratio_sf.set_index(['name'])
top_universities_ratio_sf.index.name = None

fig, axes = plt.subplots(1, 1, figsize=(10, 5), sharey=True)
top_universities_ratio_sf.plot.bar(stacked=True, color=colors_1, ax=axes)
axes.set_title('Top 10 ratio\'s between students and faculty members among universities')
axes.legend(labels=['students', 'faculty members'], bbox_to_anchor=(1.05, 1), loc=2, borderaxespad=0.)
fig.autofmt_xdate()
plt.show()
HW02/Homework 2.ipynb
Timonzimm/CS-401
mit
Comments:

We see that it is rather difficult to compare the ratios of the different universities. This is due to their different population sizes. In order to draw more precise conclusions, we need to normalize the data with respect to each university.
# normalize the data to be able to make a good comparison
top_universities_ratio_normed = top_universities_ratio_sf.div(top_universities_ratio_sf.sum(1), axis=0).sort_values(by='faculty_total', ascending=False)
top_universities_ratio_normed.index.name = None

fig, axes = plt.subplots(1, 1, figsize=(10, 5), sharey=True)
top_universities_ratio_normed.plot.bar(stacked=True, color=colors_1, ax=axes)
axes.set_title('Top 10 ratio\'s between students and faculty members among universities')
axes.legend(labels=['students', 'faculty members'], bbox_to_anchor=(1.05, 1), loc=2, borderaxespad=0.)
# we can restrict the range on the y axis to avoid displaying unnecessary content
axes.set_ylim([0.7, 1])
fig.autofmt_xdate()
plt.show()
HW02/Homework 2.ipynb
Timonzimm/CS-401
mit
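The `df.div(df.sum(1), axis=0)` normalization used above simply rescales each row so its entries sum to 1; the same operation in plain NumPy (the counts below are made up for illustration):

```python
import numpy as np

counts = np.array([[2000.0, 150.0],    # students_total, faculty_total
                   [8000.0, 400.0]])
ratios = counts / counts.sum(axis=1, keepdims=True)  # divide each row by its own total
print(ratios.sum(axis=1))   # [1. 1.]
```

Row-wise normalization is what makes universities of very different sizes comparable in the stacked bar charts.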
Comments:

You noticed that the y-axis ranges from 0.7 to 1. We limited the visualization to this interval because the complete interval does not add meaningful insight about the data. Analyzing the results, we see that Caltech is the top-10 university offering the most faculty members per student; ETHZ is in last position.

(b) ratio of international students
top_universities_ratio_s = top_universities_ratio[['name', 'students_international', 'students_national']]
top_universities_ratio_s = top_universities_ratio_s.set_index(['name'])
top_universities_ratio_s_normed = top_universities_ratio_s.div(top_universities_ratio_s.sum(1), axis=0).sort_values(by='students_international', ascending=False)
top_universities_ratio_s_normed.index.name = None

fig, axes = plt.subplots(1, 1, figsize=(10, 5))
top_universities_ratio_s_normed.plot.bar(stacked=True, color=colors_2, ax=axes)
axes.set_title('Top 10 ratio\'s of international and national students among universities')
axes.legend(labels=['international students', 'national students'], bbox_to_anchor=(1.05, 1), loc=2, borderaxespad=0.)
axes.set_ylim([0, 0.6])
fig.autofmt_xdate()
plt.show()
HW02/Homework 2.ipynb
Timonzimm/CS-401
mit
Comments:

The most international university among the top 10, by share of students, is Imperial College London. Notice that ETHZ is in third position.

(c) same comparisons by country
ratio_country_sf = top_universities.groupby(['location'])['students_total', 'faculty_total'].sum()
ratio_country_sf_normed = ratio_country_sf.div(ratio_country_sf.sum(1), axis=0).sort_values(by='faculty_total', ascending=False)
ratio_country_sf_normed.index.name = None

fig, axes = plt.subplots(1, 1, figsize=(15, 5))
ratio_country_sf_normed.plot.bar(stacked=True, color=colors_1, ax=axes)
axes.set_title('Ratio of students and faculty members by country')
axes.legend(labels=['students', 'faculty members'], bbox_to_anchor=(1.05, 1), loc=2, borderaxespad=0.)
axes.set_ylim([0.8, 1])
fig.autofmt_xdate()
plt.show()

ratio_country_s = top_universities.groupby(['location'])['students_international', 'students_national'].sum()
ratio_country_s_normed = ratio_country_s.div(ratio_country_s.sum(1), axis=0).sort_values(by='students_international', ascending=False)
ratio_country_s_normed.index.name = None

fig, axes = plt.subplots(1, 1, figsize=(15, 5))
ratio_country_s_normed.plot.bar(stacked=True, color=colors_2, ax=axes)
axes.set_title('Ratio of international and national students by country')
axes.legend(labels=['international students', 'national students'], bbox_to_anchor=(1.05, 1), loc=2, borderaxespad=0.)
axes.set_ylim([0, 0.4])
fig.autofmt_xdate()
plt.show()
HW02/Homework 2.ipynb
Timonzimm/CS-401
mit
Comments:

Aggregating the data by country, we see that Russia is the country offering the most faculty members per student, followed by Denmark and Saudi Arabia. The most international country in terms of students is Australia, followed by the United Kingdom and Hong Kong. Switzerland is in fifth position, and India is the country with the lowest ratio of international students.

(d) same comparisons by region
ratio_region_s = top_universities.groupby(['region'])['students_total', 'faculty_total'].sum()
ratio_region_s_normed = ratio_region_s.div(ratio_region_s.sum(1), axis=0).sort_values(by='faculty_total', ascending=False)
ratio_region_s_normed.index.name = None

fig, axes = plt.subplots(1, 1, figsize=(10, 5), sharey=True)
ratio_region_s_normed.plot.bar(stacked=True, color=colors_1, ax=axes)
axes.set_title('Ratio of students and faculty members by region')
axes.legend(labels=['students', 'faculty members'], bbox_to_anchor=(1.05, 1), loc=2, borderaxespad=0.)
axes.set_ylim([0.8, 1])
axes.yaxis.grid(True)
fig.autofmt_xdate()
plt.show()

ratio_region_s = top_universities.groupby(['region'])['students_international', 'students_national'].sum()
ratio_region_s_normed = ratio_region_s.div(ratio_region_s.sum(1), axis=0).sort_values(by='students_international', ascending=False)
ratio_region_s_normed.index.name = None

fig, axes = plt.subplots(1, 1, figsize=(10, 5), sharey=True)
ratio_region_s_normed.plot.bar(stacked=True, color=colors_2, ax=axes)
axes.set_title('Ratio of international and national students by region')
axes.legend(labels=['international students', 'national students'], bbox_to_anchor=(1.05, 1), loc=2, borderaxespad=0.)
axes.set_ylim([0, 0.4])
axes.yaxis.grid(True)
fig.autofmt_xdate()
plt.show()
HW02/Homework 2.ipynb
Timonzimm/CS-401
mit
Comments:

Asia is the region offering the most faculty members per student, followed by North America and Europe. The most international region in terms of students is Oceania; Europe is second.

Analysis of the two methods

The results by country and by region are consistent for the ratio of international students: Australia by country, Oceania by region. This makes sense, as Australia hosts nine of the eleven Oceanian universities in the ranking.

www.timeshighereducation.com
# we repeat the same procedure as for www.topuniversities.com
root_url_2 = 'https://www.timeshighereducation.com'
list_url_2 = root_url_2 + '/sites/default/files/the_data_rankings/world_university_rankings_2018_limit0_369a9045a203e176392b9fb8f8c1cb2a.json'

r = requests.get(list_url_2)
times_higher_education = pd.DataFrame()
times_higher_education = times_higher_education.from_dict(r.json()['data'])[['rank', 'location', 'location', 'name', 'url', 'stats_number_students', 'stats_pc_intl_students', 'stats_student_staff_ratio']]
# rename columns as in the first dataframe
times_higher_education.columns = ['overall_rank', 'location', 'region', 'name', 'url', 'students_total', 'ratio_inter', 'student_staff_ratio']

# as the ranks have a different representation we had to delete the '=' in front of universities that have the same rank,
# rewrite the rank when it is represented as an interval (ex: 201-250) and finally delete the '+' at the end for the last ones
times_higher_education['overall_rank'] = times_higher_education['overall_rank'].apply(lambda rank: re.sub('[=]', '', rank))
times_higher_education['overall_rank'] = times_higher_education['overall_rank'].apply(lambda rank: rank.split('โ€“')[0])
times_higher_education['overall_rank'] = times_higher_education['overall_rank'].apply(lambda rank: re.sub('[+]', '', rank)).astype(int)

# remaps a ranking in order to make selection by ranking easier
# ex: 1,2,3,3,5,6,7 -> 1,2,3,3,4,5,6
def remap_ranking(rank):
    last = 0
    for i in range(len(rank)):
        if last == rank[i]-1:
            # no problem
            last = rank[i]
        elif last != rank[i]:
            last = last+1
            rank[i] = last
            rank[(i+1):] = rank[(i+1):]-1
    return rank

times_higher_education['overall_rank'] = remap_ranking(times_higher_education['overall_rank'].copy())

# in the following lines we make the necessary transformations in order to get the right types of numbers for each column
times_higher_education['students_total'] = times_higher_education['students_total'].apply(lambda x: re.sub('[^0-9]', '', x)).astype(int)
times_higher_education['ratio_inter'] = times_higher_education['ratio_inter'].apply(lambda x: re.sub('[^0-9]', '', x)).astype(float)
times_higher_education['student_staff_ratio'] = times_higher_education['student_staff_ratio'].astype(float)
times_higher_education['students_international'] = (times_higher_education['students_total'] * (times_higher_education['ratio_inter']/100)).astype(int)
times_higher_education['students_national'] = times_higher_education['students_total'] - times_higher_education['students_international']
times_higher_education['faculty_total'] = (times_higher_education['students_total'] / times_higher_education['student_staff_ratio']).astype(int)
times_higher_education['faculty_international'] = np.NaN
times_higher_education['faculty_national'] = np.NaN
times_higher_education['region'] = np.NaN

# resolve ties
times_higher_education['overall_rank'] = np.arange(1, times_higher_education.shape[0]+1)

# resolve N/A region
loc_to_reg = top_universities[['location', 'region']]
loc_to_reg = set(loc_to_reg.apply(lambda x: '{}_{}'.format(x['location'], x['region']), axis=1).values)
loc_to_reg = {x.split('_')[0]: x.split('_')[1] for x in loc_to_reg}

from collections import defaultdict
loc_to_reg = defaultdict(lambda: 'N/A', loc_to_reg)

def resolve_uni(x):
    x['region'] = loc_to_reg[x['location']]
    return x

times_higher_education = times_higher_education.apply(resolve_uni, axis=1)

del times_higher_education['ratio_inter']
del times_higher_education['student_staff_ratio']

# get only the first top-200 universities by overall rank
times_higher_education = select_top_N(times_higher_education, 'overall_rank', NUM_OBS)
times_higher_education.head()
HW02/Homework 2.ipynb
Timonzimm/CS-401
mit
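The `remap_ranking` helper above is easiest to understand by running the example given in its own comment; here it is standalone (the function body is copied from the cell above):

```python
import numpy as np

# copy of the helper defined in the scraping cell
def remap_ranking(rank):
    last = 0
    for i in range(len(rank)):
        if last == rank[i]-1:
            last = rank[i]
        elif last != rank[i]:
            # close the gap and shift every later rank down by one
            last = last+1
            rank[i] = last
            rank[(i+1):] = rank[(i+1):]-1
    return rank

ranks = np.array([1, 2, 3, 3, 5, 6, 7])
print(remap_ranking(ranks))   # [1 2 3 3 4 5 6]
```

Note that the input must be a NumPy array (or similar): the slice assignment `rank[(i+1):] = rank[(i+1):]-1` relies on vectorized arithmetic and would fail on a plain list.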
Best universities in terms of:

We selected the top 10 universities for points (a) and (b). For points (c) and (d), the top 200 universities were used in order to have more data.

(a) ratio between faculty members and students
top = 10
times_higher_education_ratio = select_top_N(times_higher_education, 'overall_rank', top)

times_higher_education_ratio_sf = times_higher_education_ratio[['name', 'students_total', 'faculty_total']]
times_higher_education_ratio_sf = times_higher_education_ratio_sf.set_index(['name'])
times_higher_education_ratio_normed = times_higher_education_ratio_sf.div(times_higher_education_ratio_sf.sum(1), axis=0).sort_values(by='faculty_total', ascending=False)
times_higher_education_ratio_normed.index.name = None

fig, axes = plt.subplots(1, 1, figsize=(10, 5), sharey=True)
times_higher_education_ratio_normed.plot.bar(stacked=True, color=colors_1, ax=axes)
axes.set_title('Top 10 ratio\'s between students and faculty members among universities')
axes.legend(labels=['students', 'faculty members'], bbox_to_anchor=(1.05, 1), loc=2, borderaxespad=0.)
axes.set_ylim([0.8, 1])
fig.autofmt_xdate()
plt.show()
HW02/Homework 2.ipynb
Timonzimm/CS-401
mit
Comments:

The University of Chicago is the university with the most faculty members per student, closely followed by the California Institute of Technology.

(b) ratio of international students
times_higher_education_ratio_s = times_higher_education_ratio[['name', 'students_international', 'students_national']]
times_higher_education_ratio_s = times_higher_education_ratio_s.set_index(['name'])
times_higher_education_ratio_s_normed = times_higher_education_ratio_s.div(times_higher_education_ratio_s.sum(1), axis=0).sort_values(by='students_international', ascending=False)
times_higher_education_ratio_s_normed.index.name = None

fig, axes = plt.subplots(1, 1, figsize=(10, 5))
times_higher_education_ratio_s_normed.plot.bar(stacked=True, color=colors_2, ax=axes)
axes.set_title('Top 10 ratio\'s of international and national students among universities')
axes.legend(labels=['international students', 'national students'], bbox_to_anchor=(1.05, 1), loc=2, borderaxespad=0.)
axes.set_ylim([0.2, 0.6])
fig.autofmt_xdate()
plt.show()
Comments: Imperial College London has a strong lead in the internationalization of its students. Oxford and ETHZ follow, bunched together. (c) same comparisons by country
ratio_country_sf = times_higher_education.groupby('location')[['students_total', 'faculty_total']].sum()
ratio_country_sf_normed = ratio_country_sf.div(ratio_country_sf.sum(1), axis=0).sort_values(by='faculty_total', ascending=False)
ratio_country_sf_normed.index.name = None

fig, axes = plt.subplots(1, 1, figsize=(15, 5))
ratio_country_sf_normed.plot.bar(stacked=True, color=colors_1, ax=axes)
axes.set_title('Ratio of students and faculty members by country')
axes.legend(labels=['students', 'faculty members'], bbox_to_anchor=(1.05, 1), loc=2, borderaxespad=0.)
axes.set_ylim([0.8, 1])
fig.autofmt_xdate()
plt.show()
Comments: Denmark is in first position, with the Russian Federation in second place. This is the reverse of the result obtained with the Top Universities website. It shows that either the universities of each country are ranked differently on each website, or each website has different information about each university.
ratio_country_s = times_higher_education.groupby('location')[['students_international', 'students_national']].sum()
ratio_country_s_normed = ratio_country_s.div(ratio_country_s.sum(1), axis=0).sort_values(by='students_international', ascending=False)
ratio_country_s_normed.index.name = None

fig, axes = plt.subplots(1, 1, figsize=(15, 5))
ratio_country_s_normed.plot.bar(stacked=True, color=colors_2, ax=axes)
axes.set_title('Ratio of international and national students by country')
axes.legend(labels=['international students', 'national students'], bbox_to_anchor=(1.05, 1), loc=2, borderaxespad=0.)
axes.set_ylim([0, 0.6])
fig.autofmt_xdate()
plt.show()
Comments: Luxembourg has more international than national students, which puts it comfortably in first position. Switzerland is in sixth position (versus fifth on the Top Universities website). (d) same comparisons by region
# Some countries have their field 'region' filled with 'N/A': this is due to the technique we used to write the
# correct region for each university. In the sample we are considering, let's see how many universities are concerned:
times_higher_education[times_higher_education['region'] == 'N/A']

# As there are only two universities concerned, we can quickly fix them by hand. Of course we should have developed
# a more general approach if we had a much larger sample.
times_higher_education.at[178, 'region'] = 'Europe'
times_higher_education.at[193, 'region'] = 'Europe'

ratio_region_s = times_higher_education.groupby('region')[['students_total', 'faculty_total']].sum()
ratio_region_s_normed = ratio_region_s.div(ratio_region_s.sum(1), axis=0).sort_values(by='faculty_total', ascending=False)
ratio_region_s_normed.index.name = None

fig, axes = plt.subplots(1, 1, figsize=(10, 5), sharey=True)
ratio_region_s_normed.plot.bar(stacked=True, color=colors_1, ax=axes)
axes.set_title('Ratio of students and faculty members by region')
axes.legend(labels=['students', 'faculty members'], bbox_to_anchor=(1.05, 1), loc=2, borderaxespad=0.)
axes.set_ylim([0.8, 1])
axes.yaxis.grid(True)
fig.autofmt_xdate()
plt.show()

ratio_region_s = times_higher_education.groupby('region')[['students_international', 'students_national']].sum()
ratio_region_s_normed = ratio_region_s.div(ratio_region_s.sum(1), axis=0).sort_values(by='students_international', ascending=False)
ratio_region_s_normed.index.name = None

fig, axes = plt.subplots(1, 1, figsize=(10, 5), sharey=True)
ratio_region_s_normed.plot.bar(stacked=True, color=colors_2, ax=axes)
axes.set_title('Ratio of international and national students by region')
axes.legend(labels=['international students', 'national students'], bbox_to_anchor=(1.05, 1), loc=2, borderaxespad=0.)
axes.set_ylim([0, 0.4])
axes.yaxis.grid(True)
fig.autofmt_xdate()
plt.show()
Comments: In the first plot, we see that Africa is the region with the highest ratio of faculty members to students; the two following regions are very close to each other. In the second plot, Oceania is the most internationalized region in terms of its students, with Europe second. The other website gave similar results for this last outcome.
# Detect universities that appear under different names in the two dataframes before merging,
# using Jaccard similarity plus a same-location rule (which seems to keep only matching entries)
def t(x):
    # Compute the Jaccard score (intersection over union)
    def jaccard(a, b):
        u = set(a.split(' '))
        v = set(b.split(' '))
        return len(u.intersection(v)) / len(u.union(v))

    names = top_universities['name'].tolist()
    locations = top_universities['location'].tolist()

    scores = np.array([jaccard(x['name'], n) for n in names])
    m = scores.max()
    i = scores.argmax()

    # Require both a high Jaccard score and a location match to filter out
    # similar names attached to different locations
    if m > 0.5 and x['location'] == locations[i]:
        x['name'] = names[i]

    return x

# Match university names in both dataframes
times_higher_education = times_higher_education.apply(t, axis=1)

# Intersection on the name column of the two datasets
merged = pd.merge(top_universities, times_higher_education, on='name', how='inner')
merged.head()
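The word-level Jaccard score used for the name matching above can be illustrated in isolation. This is a self-contained sketch with made-up university names, not data from either ranking site:

```python
def jaccard(a, b):
    """Jaccard similarity between two names, treated as sets of words."""
    u = set(a.split(' '))
    v = set(b.split(' '))
    return len(u.intersection(v)) / len(u.union(v))

# two spellings of the same (hypothetical) entry share 2 of 3 distinct words
score = jaccard('university of oxford', 'oxford university')

# identical names score exactly 1
perfect = jaccard('eth zurich', 'eth zurich')
```

Because short filler words like "of" dilute the intersection, even a correct match rarely scores 1.0, which is why the matching rule above uses a 0.5 threshold rather than exact equality.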
Insights Here we first proceed by creating the correlation matrix (since it's a symmetric matrix we only kept the lower triangle). We then plot it as a heatmap to see the correlation between columns of the dataframe. We also made another heatmap with only the correlations whose absolute value is greater than 0.5. Finally we averaged the features when they were available on both websites (except rankings). Correlations analysis Some correlations bring interesting information: - $Corr(\text{overall\_rank\_x}, \text{overall\_rank\_y}) = 0.7$ <br /> We get a strong correlation between the ranking of the first website and that of the second one. It shows that the two websites' ranking methods lead to similar results (since the correlation is positive). This is insightful since, even if the features are approximately the same for the two websites, their methodologies for attributing a rank could be really different. This strong positive correlation reveals that the methodologies don't differ much between the two websites. - $Corr(\text{students\_international\_avg}, \text{faculty\_international\_avg}) = 0.59$ <br /> Here we have an interesting correlation between the number of international students and the number of international staff members. - We have strong correlations between the same features coming from different websites. This is not really interesting since differences in the same features across the two websites are likely to be small. We also have strong correlations between "total" features and their sub-categories like "international" and "national". These are not interesting either because they follow a simple relation: when the total is higher, the sub-categories are likely to be higher too (i.e. if we have more students, we are likely to have more national or international students).
merged_num = merged.select_dtypes(include=[np.number])
merged_num = merged_num.dropna(how='all', axis=1)
merged_num = merged_num.dropna(how='any', axis=0)

def avg_feature(row):
    cols = set([c for c in row.index if 'overall' not in c])
    cols_common = set([c[0:-2] for c in cols])
    for cc in cols_common:
        cc_x = '{}_x'.format(cc)
        cc_y = '{}_y'.format(cc)
        if cc_y in cols:
            row['{}_avg'.format(cc)] = (row[cc_x] + row[cc_y]) / 2
        else:
            row['{}_avg'.format(cc)] = row[cc_x] / 2

    for c in cols:
        del row[c]
    return row

merged_num_avg = merged_num.apply(avg_feature, axis=1)
merged_num.head()

corr = merged_num.corr()
mask = np.zeros_like(corr, dtype=bool)
mask[np.triu_indices_from(mask)] = True

fig, ax = plt.subplots(figsize=(10, 10))
sns.heatmap(corr, ax=ax, mask=mask, annot=True, square=True)
plt.show()

# Keep only correlations with an absolute value greater than 0.5
corr[(corr < 0.5) & (corr > -0.5)] = 0
fig, ax = plt.subplots(figsize=(10, 10))
sns.heatmap(corr, ax=ax, mask=mask, annot=True, square=True)
plt.show()

# Keep only correlations with an absolute value greater than 0.5 for the averaged features
corr = merged_num_avg.corr()
mask = np.zeros_like(corr, dtype=bool)
mask[np.triu_indices_from(mask)] = True
corr[(corr < 0.5) & (corr > -0.5)] = 0
fig, ax = plt.subplots(figsize=(10, 10))
sns.heatmap(corr, ax=ax, mask=mask, annot=True, square=True)
plt.show()
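The "mechanical" correlation between a total and one of its sub-categories mentioned above can be reproduced on synthetic data. The counts below are invented for illustration; the only assumption is that international enrolment roughly tracks overall university size:

```python
import numpy as np

# made-up student counts for five universities
national = np.array([8000, 12000, 15000, 9000, 20000])
international = np.array([900, 1100, 1500, 1100, 1800])  # roughly ~10% of national
total = national + international

# the total co-moves with its own sub-category almost by construction
corr = np.corrcoef(total, international)[0, 1]
```

This is why such correlations were dismissed as uninteresting: they reflect an accounting identity (total = national + international) rather than any real relationship.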
Best university First we have to transform each ranking into a score. Here we assume a linear relation between score and ranking, so we gave a score of 1 to the best ranking and 0 to the worst ranking, with a linear mapping in between. We did this for each of the two websites' rankings. Also, we don't really know whether one website is more trustworthy than the other, so a good way to merge the rankings is to take the average of the two scores with equal weights. Finally we also took into account the ratio of staff members per student: $finalScore = mean(score_1, score_2, \text{staff per student})$ After computing these values, we found that Caltech is the best university (according to our assumptions). Per-website ranking: Caltech: top_universities -> 4 | times_higher_education -> 3 | staff per student ratio -> 0.15 | => final score: 0.71
r = merged[['name', 'overall_rank_x', 'overall_rank_y']].copy()
r.head()

def lin(df):
    best_rank = df.min()
    worst_rank = df.max()
    a = 1 / (best_rank - worst_rank)
    b = 1 - a * best_rank
    return df.apply(lambda x: a * x + b)

# ratio of (international) faculty to students, averaged over the two websites
r['stud_staff_ratio'] = merged[['faculty_international_x', 'faculty_international_y']].mean(axis=1) / \
                        merged[['students_total_x', 'students_total_y']].mean(axis=1)
r['score_x'] = lin(r['overall_rank_x'])
r['score_y'] = lin(r['overall_rank_y'])
r['overall_score'] = r[['score_x', 'score_y', 'stud_staff_ratio']].mean(axis=1)
r = r.dropna()
r[r['overall_score'] == r['overall_score'].max()]
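The linear rank-to-score mapping implemented by `lin` can be checked by hand on a tiny made-up ranking (ranks 1 through 5, which are assumptions for the sake of the example):

```python
def rank_to_score(rank, best_rank, worst_rank):
    """Linear map sending the best rank to 1 and the worst rank to 0."""
    a = 1 / (best_rank - worst_rank)
    b = 1 - a * best_rank
    return a * rank + b

best, worst = 1, 5
s_best = rank_to_score(1, best, worst)   # best rank maps to 1.0
s_mid = rank_to_score(3, best, worst)    # midpoint maps to 0.5
s_worst = rank_to_score(5, best, worst)  # worst rank maps to 0.0
```

With `best=1` and `worst=5` the slope is `a = -0.25` and the intercept `b = 1.25`, so each step down the ranking costs a quarter of a point of score.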
Customizing the atoms involved in the contact ContactFrequency takes the parameters query and haystack, which are lists of atom indices. It will then search for all contacts between atoms in query and atoms in haystack. This allows you to, for example, focus on the contacts between two distinct parts of a protein. By only including some atoms in the search, the contacts are calculated more quickly. This also allows you to fundamentally change the definition of a contact by making it about C$_\alpha$ or about all atoms, instead of heavy atoms as is the default (though if you change that, you should also change the cutoff value). In general, it is easiest to get the list of atom indices from MDTraj using its atom selection language. The default behavior is to look for contacts between all heavy (i.e., non-hydrogen), non-water atoms.
# the default selection is
default_selection = topology.select("not water and symbol != 'H'")

print(len(default_selection))
examples/changing_defaults.ipynb
dwhswenson/contact_map
lgpl-2.1
Note that the general assumption is that the query is no larger than the haystack. If this isn't obeyed, you'll still get correct answers, but some algorithms may be less efficient, and visualizations have also been designed with this in mind. Changing the query Now let's focus in on contacts involving specific regions of KRas. In your work, this might be contacts between different parts of one molecule, or contacts between two different molecules, such as in drug binding or DNA-protein interactions. First, let's look at the contacts between the switch 1 region and all other atoms in our default selection. So switch 1 will be our query. MDTraj allows queries based on different numbering systems: resid and resSeq. The resid is the internally-used residue number, and starts from 0. On the other hand, resSeq is the residue number given in the PDB, which usually starts from 1 (and is the number we usually refer to in literature).
switch1 = topology.select("resSeq 32 to 38 and symbol != 'H'")

%%time
sw1_contacts = ContactFrequency(trajectory=traj, query=switch1)

sw1_contacts.residue_contacts.plot();
This shows all contacts of switch 1 with anything else in the system. Here, we automatically zoom in to have query on the x axis and the rest on the y axis. The boxes are long rectangles instead of squares as in the default selection. The box represents the residue number (in the resid numbering system) that is to its left and under it. Let's also zoom out to see the complete symmetric plot instead:
fig, ax = sw1_contacts.residue_contacts.plot()
ax.set_xlim(0, sw1_contacts.residue_contacts.max_size)
ax.set_ylim(0, sw1_contacts.residue_contacts.max_size);
Changing query and haystack What if we wanted to zoom in even more, and only look at the contacts between switch 1 and the cations in the system? We make one of them the query and the other the haystack. Since switch1 contains more atoms than cations, we'll use switch1 as the haystack.
cations = topology.select("resname NA or resname MG")

%%time
cations_switch1 = ContactFrequency(trajectory=traj, query=cations, haystack=switch1)

cations_switch1.residue_contacts.plot();
Now we'll plot again, but we'll change the x and y axes so that we can now see switch 1 along x and the cations (the query) along y:
fig, ax = cations_switch1.residue_contacts.plot()
ax.set_xlim(*cations_switch1.haystack_residue_range)
ax.set_ylim(*cations_switch1.query_residue_range);
Here you can see that the most significant contacts here are between residue 36 and the ion listed as residue 167. Let's see just how frequently that contact is made:
print(cations_switch1.residue_contacts.counter[frozenset([36, 167])])
So about half the time. Now, which residue/ion are these? Remember, these indices start at 0, even though the tradition in science (and the PDB) is to count from 1. Furthermore, the PDB residue numbers for the ions skip the section of the protein that has been removed. But we can easily obtain the relevant residues:
print(topology.residue(36))
print(topology.residue(167))
So this is a contact between Glu37 and the magnesium ion (which is listed as residue 202 in the PDB). Changing the cutoff Depending on the atoms you use to select contacts, you might choose different cutoff distances. The default cutoff of 0.45 nm is reasonable for heavy atom contacts. However, if you use all atoms (including hydrogens), you'll probably want a smaller cutoff distance. If you use $\textrm{C}_\alpha$ distances, you'll want a larger cutoff distance. The cutoff distance is controlled using the cutoff parameter to ContactFrequency. The performance of Contact Map Explorer largely depends on how many atoms are within the cutoff distance, so making the cutoff distance larger while keeping the same number of atoms will have a significant effect.
%%time
large_cutoff = ContactFrequency(trajectory=traj[0], cutoff=1.5)

%%time
large_cutoff.residue_contacts.plot();
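Independently of the library, the effect of the cutoff on contact density can be illustrated with a naive pairwise-distance count. This is a NumPy sketch on random made-up coordinates, not Contact Map Explorer's actual (much faster) algorithm:

```python
import numpy as np

def count_contacts(coords, cutoff):
    """Count point pairs closer than `cutoff`, naive O(n^2) sketch."""
    diff = coords[:, None, :] - coords[None, :, :]
    dist = np.sqrt((diff ** 2).sum(axis=-1))
    # upper triangle only: count each pair once, exclude self-pairs
    iu = np.triu_indices(len(coords), k=1)
    return int((dist[iu] < cutoff).sum())

rng = np.random.default_rng(0)
coords = rng.uniform(0.0, 3.0, size=(100, 3))  # 100 fake atoms in a 3 nm box

n_default = count_contacts(coords, 0.45)  # default-sized cutoff
n_large = count_contacts(coords, 1.5)     # large cutoff: far more pairs
```

The pair count grows roughly with the cube of the cutoff radius, which is why both computing and plotting a large-cutoff contact map are noticeably slower.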
The larger cutoff leads to a more dense contact matrix. The performance of plotting depends on how dense the contact matrix is -- for tricks to plot dense matrices more quickly, see the documentation on customizing plotting. Changing the number of ignored neighbors By default, Contact Map Explorer ignore atoms from 2 residues on either side of the given residue (and in the same chain). This is easily changed. However, even when you say to ignore no neighbors, you still ignore the residue's interactions with itself. Note: for non-protein contacts, the chain is often poorly defined. In this example, the GTP and the Mg are listed sequentially in residue order, and therefore they are considered "neighbors" and their contacts are ignored.
%%time
ignore_none = ContactFrequency(trajectory=traj, n_neighbors_ignored=0)

ignore_none.residue_contacts.plot();
Rossman Data Preparation Individual Data Source In addition to the data provided by the competition, we will be using external datasets put together by participants in the Kaggle competition. We can download all of them here. Then we should untar them in the directory to which data_dir is pointing to.
data_dir = 'rossmann'
print('available files: ', os.listdir(data_dir))

file_names = ['train', 'store', 'store_states', 'state_names', 'googletrend', 'weather', 'test']
path_names = {file_name: os.path.join(data_dir, file_name + '.csv') for file_name in file_names}

df_train = pd.read_csv(path_names['train'], low_memory=False)
df_test = pd.read_csv(path_names['test'], low_memory=False)
print('training data dimension: ', df_train.shape)
print('testing data dimension: ', df_test.shape)
df_train.head()
projects/kaggle_rossman_store_sales/rossman_data_prep.ipynb
ethen8181/machine-learning
mit
We turn state holidays into booleans, to make them more convenient for modeling.
df_train['StateHoliday'] = df_train['StateHoliday'] != '0'
df_test['StateHoliday'] = df_test['StateHoliday'] != '0'
For the weather and state names data, we perform a join on a state name field and create a single dataframe.
df_weather = pd.read_csv(path_names['weather'])
print('weather data dimension: ', df_weather.shape)
df_weather.head()

df_state_names = pd.read_csv(path_names['state_names'])
print('state names data dimension: ', df_state_names.shape)
df_state_names.head()

df_weather = df_weather.rename(columns={'file': 'StateName'})
df_weather = df_weather.merge(df_state_names, on="StateName", how='left')
df_weather.head()
For the google trend data. We're going to extract the state and date information from the raw dataset, also replace all instances of state name 'NI' to match the usage in the rest of the data: 'HB,NI'.
df_googletrend = pd.read_csv(path_names['googletrend'])
print('google trend data dimension: ', df_googletrend.shape)
df_googletrend.head()

df_googletrend['Date'] = df_googletrend['week'].str.split(' - ', expand=True)[0]
df_googletrend['State'] = df_googletrend['file'].str.split('_', expand=True)[2]
df_googletrend.loc[df_googletrend['State'] == 'NI', 'State'] = 'HB,NI'
df_googletrend.head()
The following code chunk extracts particular date fields from a complete datetime for the purpose of constructing categoricals. We should always consider this feature extraction step when working with date-times. Without expanding our date-time into these additional fields, we can't capture any trend/cyclical behavior as a function of time at any of these granularities. We'll apply it to every table with a date field.
DEFAULT_DT_ATTRIBUTES = [
    'Year', 'Month', 'Week', 'Day', 'Dayofweek', 'Dayofyear',
    'Is_month_end', 'Is_month_start', 'Is_quarter_end',
    'Is_quarter_start', 'Is_year_end', 'Is_year_start'
]


def add_datepart(df, colname, drop_original_col=False,
                 dt_attributes=DEFAULT_DT_ATTRIBUTES, add_elapse_col=True):
    """
    Extract various date time components out of a date column, this
    modifies the dataframe inplace.

    References
    ----------
    - https://pandas.pydata.org/pandas-docs/stable/user_guide/timeseries.html#time-date-components
    """
    df[colname] = pd.to_datetime(df[colname], infer_datetime_format=True)
    if dt_attributes:
        for attr in dt_attributes:
            df[attr] = getattr(df[colname].dt, attr.lower())

    # representing the number of seconds elapsed from 1970-01-01 00:00:00
    # https://stackoverflow.com/questions/15203623/convert-pandas-datetimeindex-to-unix-time
    if add_elapse_col:
        df['Elapsed'] = df[colname].astype(np.int64) // 10 ** 9

    if drop_original_col:
        df = df.drop(colname, axis=1)

    return df


df_weather.head()

df_weather = add_datepart(
    df_weather, 'Date', dt_attributes=None, add_elapse_col=False)
df_googletrend = add_datepart(
    df_googletrend, 'Date', drop_original_col=True,
    dt_attributes=['Year', 'Week'], add_elapse_col=False)
df_train = add_datepart(df_train, 'Date')
df_test = add_datepart(df_test, 'Date')

print('training data dimension: ', df_train.shape)
df_train.head()
The Google trends data has a special category for the whole of Germany - we'll pull that out so we can use it explicitly.
df_trend_de = df_googletrend.loc[df_googletrend['file'] == 'Rossmann_DE', ['Year', 'Week', 'trend']]
df_trend_de.head()
Merging Various Data Sources Now we can outer join all of our data into a single dataframe. Recall that in outer joins, every time a value in the joining field on the left table does not have a corresponding value on the right table, the corresponding row in the new table has Null values for all right table fields. One way to check that all records are consistent and complete is to check for Null values post-join, as we do here. Aside: Why not just do an inner join? If we are assuming that all records are complete and match on the field we desire, an inner join will do the same thing as an outer join. However, in the event we are not sure, an outer join followed by a null-check will catch it. (Comparing the before/after number of rows for an inner join is an equivalent approach.) During the merging process, we'll print out the first few rows of the dataframe and the column names so we can keep track of how the dataframe evolves as we join with each new data source.
df_store = pd.read_csv(path_names['store'])
print('store data dimension: ', df_store.shape)
df_store.head()

df_store_states = pd.read_csv(path_names['store_states'])
print('store states data dimension: ', df_store_states.shape)
df_store_states.head()

df_store = df_store.merge(df_store_states, on='Store', how='left')
print('null count: ', len(df_store[df_store['State'].isnull()]))
df_store.head()

df_joined_train = df_train.merge(df_store, on='Store', how='left')
df_joined_test = df_test.merge(df_store, on='Store', how='left')

null_count_train = len(df_joined_train[df_joined_train['StoreType'].isnull()])
null_count_test = len(df_joined_test[df_joined_test['StoreType'].isnull()])
print('null count: ', null_count_train, null_count_test)
print('dimension: ', df_joined_train.shape)
df_joined_train.head()

df_joined_train.columns

df_joined_train = df_joined_train.merge(df_weather, on=['State', 'Date'], how='left')
df_joined_test = df_joined_test.merge(df_weather, on=['State', 'Date'], how='left')

null_count_train = len(df_joined_train[df_joined_train['Mean_TemperatureC'].isnull()])
null_count_test = len(df_joined_test[df_joined_test['Mean_TemperatureC'].isnull()])
print('null count: ', null_count_train, null_count_test)
print('dimension: ', df_joined_train.shape)
df_joined_train.head()

df_joined_train.columns

df_joined_train = df_joined_train.merge(df_googletrend, on=['State', 'Year', 'Week'], how='left')
df_joined_test = df_joined_test.merge(df_googletrend, on=['State', 'Year', 'Week'], how='left')

null_count_train = len(df_joined_train[df_joined_train['trend'].isnull()])
null_count_test = len(df_joined_test[df_joined_test['trend'].isnull()])
print('null count: ', null_count_train, null_count_test)
print('dimension: ', df_joined_train.shape)
df_joined_train.head()

df_joined_train.columns

df_joined_train = df_joined_train.merge(df_trend_de, on=['Year', 'Week'],
                                        suffixes=('', '_DE'), how='left')
df_joined_test = df_joined_test.merge(df_trend_de, on=['Year', 'Week'],
                                      suffixes=('', '_DE'), how='left')

null_count_train = len(df_joined_train[df_joined_train['trend_DE'].isnull()])
null_count_test = len(df_joined_test[df_joined_test['trend_DE'].isnull()])
print('null count: ', null_count_train, null_count_test)
print('dimension: ', df_joined_train.shape)
df_joined_train.head()

df_joined_train.columns
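The join-then-null-check pattern used above can be demonstrated on a toy pair of tables. The data below is hypothetical, not from the Rossmann files:

```python
import pandas as pd

stores = pd.DataFrame({'Store': [1, 2, 3], 'Sales': [10, 20, 30]})
states = pd.DataFrame({'Store': [1, 2], 'State': ['HE', 'BY']})  # store 3 has no state

# left join keeps every store; the missing match shows up as a null
joined = stores.merge(states, on='Store', how='left')
null_count = int(joined['State'].isnull().sum())

# inner join silently drops the incomplete record instead;
# comparing row counts before/after is the equivalent check
inner = stores.merge(states, on='Store', how='inner')
rows_lost = len(stores) - len(inner)
```

Both checks flag the same problem (here, one store without a state), but the left join keeps the offending rows around so they can be inspected.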
Final Data After merging all the various data sources to create our master dataframe, we'll still perform some additional feature engineering steps, including: Some of the rows contain missing values for some columns; we'll impute them here. What values to impute is pretty subjective, since we don't really know the root cause of why they are missing, so we won't spend too much time on it. One common strategy for imputing missing categorical features is to pick an arbitrary signal value that otherwise doesn't appear in the data, e.g. -1, -999. Alternatively, impute with the mean or majority value and create another column that takes on a binary value indicating whether the value was missing in the first place. Create some duration features from the Competition and Promo columns.
for df in (df_joined_train, df_joined_test):
    df['CompetitionOpenSinceYear'] = (df['CompetitionOpenSinceYear']
                                      .fillna(1900)
                                      .astype(np.int32))
    df['CompetitionOpenSinceMonth'] = (df['CompetitionOpenSinceMonth']
                                       .fillna(1)
                                       .astype(np.int32))
    df['Promo2SinceYear'] = df['Promo2SinceYear'].fillna(1900).astype(np.int32)
    df['Promo2SinceWeek'] = df['Promo2SinceWeek'].fillna(1).astype(np.int32)

for df in (df_joined_train, df_joined_test):
    df['CompetitionOpenSince'] = pd.to_datetime(dict(
        year=df['CompetitionOpenSinceYear'],
        month=df['CompetitionOpenSinceMonth'],
        day=15
    ))
    df['CompetitionDaysOpen'] = df['Date'].subtract(df['CompetitionOpenSince']).dt.days
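The alternative imputation strategy mentioned above (fill with the mean and keep a binary "was missing" flag) can be sketched on a toy column. The column name below is hypothetical:

```python
import numpy as np
import pandas as pd

# toy frame with one missing value (made-up data, not the Rossmann columns)
df = pd.DataFrame({'CompetitionDistance': [50.0, np.nan, 200.0]})

# flag which rows were missing, then fill with the column mean
df['CompetitionDistance_missing'] = df['CompetitionDistance'].isnull()
df['CompetitionDistance'] = df['CompetitionDistance'].fillna(df['CompetitionDistance'].mean())
```

The indicator column lets a downstream model learn that "this value was imputed" is itself a signal, which a bare mean-fill discards.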
For the CompetitionMonthsOpen field, we limit the maximum to 2 years to limit the number of unique categories.
for df in (df_joined_train, df_joined_test):
    df['CompetitionMonthsOpen'] = df['CompetitionDaysOpen'] // 30
    df.loc[df['CompetitionMonthsOpen'] > 24, 'CompetitionMonthsOpen'] = 24
    df.loc[df['CompetitionMonthsOpen'] < -24, 'CompetitionMonthsOpen'] = -24

df_joined_train['CompetitionMonthsOpen'].unique()
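The two `.loc` capping assignments above are equivalent to a single `Series.clip` call, shown here on made-up values:

```python
import pandas as pd

months_open = pd.Series([-30, -5, 0, 12, 40])

# cap values to the [-24, 24] range in one step
capped = months_open.clip(lower=-24, upper=24)
```

Either form works; `clip` just expresses the "limit the number of unique categories" intent more directly.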
Repeat the same process for Promo
from isoweek import Week

for df in (df_joined_train, df_joined_test):
    df['Promo2Since'] = pd.to_datetime(df.apply(lambda x: Week(
        x.Promo2SinceYear, x.Promo2SinceWeek).monday(), axis=1))
    df['Promo2Days'] = df['Date'].subtract(df['Promo2Since']).dt.days

for df in (df_joined_train, df_joined_test):
    df['Promo2Weeks'] = df['Promo2Days'] // 7
    df.loc[df['Promo2Weeks'] < 0, 'Promo2Weeks'] = 0
    df.loc[df['Promo2Weeks'] > 25, 'Promo2Weeks'] = 25

df_joined_train['Promo2Weeks'].unique()

df_joined_train.columns
Durations It is common when working with time series data to extract features that capture relationships across rows instead of between columns, e.g. time until the next event, or time since the last event. Here, we would like to compute features such as days since the last promotion and days until the next promotion. The same process can be repeated for state/school holidays.
columns = ['Date', 'Store', 'Promo', 'StateHoliday', 'SchoolHoliday']
df = pd.concat([df_joined_train[columns], df_joined_test[columns]])
df['DateUnixSeconds'] = df['Date'].astype(np.int64) // 10 ** 9
df.head()


@numba.njit
def compute_duration(store_arr, date_unix_seconds_arr, field_arr):
    """
    For each store, track the days since/before the occurrence of a field.
    The store and date are assumed to be already sorted.

    Parameters
    ----------
    store_arr : 1d ndarray[int]

    date_unix_seconds_arr : 1d ndarray[int]
        The date should be represented in unix timestamp (seconds).

    field_arr : 1d ndarray[bool]/ndarray[int]
        The field that we're interested in. If int, it should take the
        value 1/0 indicating whether the field/event occurred or not.

    Returns
    -------
    result : list[int]
        Days since/before the occurrence of a field.
    """
    result = []
    last_store = 0
    zipped = zip(store_arr, date_unix_seconds_arr, field_arr)
    for store, date_unix_seconds, field in zipped:
        if store != last_store:
            last_store = store
            last_date = date_unix_seconds

        if field:
            last_date = date_unix_seconds

        diff_day = (date_unix_seconds - last_date) // 86400
        result.append(diff_day)

    return result


df = df.sort_values(['Store', 'Date'])

start = time.time()
for col in ('SchoolHoliday', 'StateHoliday', 'Promo'):
    result = compute_duration(df['Store'].values,
                              df['DateUnixSeconds'].values,
                              df[col].values)
    df['After' + col] = result

end = time.time()
print('elapsed: ', end - start)
df.head(10)
If we look at the values in the AfterStateHoliday column, we can see that the first row of the StateHoliday column is True; the corresponding AfterStateHoliday is therefore 0, indicating that day is itself a state holiday. After encountering a state holiday, the AfterStateHoliday column keeps incrementing until the next StateHoliday, which resets the counter. Note that Promo starts out with a 0, yet AfterPromo starts accumulating immediately: we're not exactly sure when the last promo before 2013-01-01 was, since we don't have the data for it. Nonetheless we still start incrementing the counter. Another approach would be to fill these leading values with 0.
df = df.sort_values(['Store', 'Date'], ascending=[True, False])

start = time.time()
for col in ('SchoolHoliday', 'StateHoliday', 'Promo'):
    result = compute_duration(df['Store'].values,
                              df['DateUnixSeconds'].values,
                              df[col].values)
    df['Before' + col] = result

end = time.time()
print('elapsed: ', end - start)
df.head(10)
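The counter logic behind the After/Before columns can be illustrated with a simplified, single-store, plain-Python version (consecutive days, 0/1 event flags; a sketch of the idea, not the numba implementation above):

```python
def days_since_event(flags):
    """Days since the last event for one store, one entry per consecutive day.

    The counter resets to 0 on event days and increments afterwards; before
    the first event it counts from the first observed day, mirroring the
    behavior described above when history is unknown.
    """
    result = []
    days_since = 0
    for flag in flags:
        if flag:
            days_since = 0
        result.append(days_since)
        days_since += 1
    return result

# events (e.g. promos) on days 0 and 3
counts = days_since_event([1, 0, 0, 1, 0])
```

Running the same function over the reversed sequence gives the "days before the next event" variant, which is exactly why the real dataframe is re-sorted in descending date order above.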
After creating these new features, we join it back to the original dataframe.
df = df.drop(['Promo', 'StateHoliday', 'SchoolHoliday', 'DateUnixSeconds'], axis=1)
df_joined_train = df_joined_train.merge(df, on=['Date', 'Store'], how='inner')
df_joined_test = df_joined_test.merge(df, on=['Date', 'Store'], how='inner')
print('dimension: ', df_joined_train.shape)
df_joined_train.head()

df_joined_train.columns
We save the cleaned data so we won't have to repeat this data preparation step again.
output_dir = 'cleaned_data'
if not os.path.isdir(output_dir):
    os.makedirs(output_dir, exist_ok=True)

engine = 'pyarrow'
output_path_train = os.path.join(output_dir, 'train_clean.parquet')
output_path_test = os.path.join(output_dir, 'test_clean.parquet')
df_joined_train.to_parquet(output_path_train, engine=engine)
df_joined_test.to_parquet(output_path_test, engine=engine)
Use the identified lane pixels to fit a polygon and draw it back on the original image
def write_stats(img):
    """
    Write lane stats on the image
    """
    font = cv2.FONT_HERSHEY_SIMPLEX
    size = 1
    weight = 2
    color = (255, 70, 0)
    cv2.putText(img, 'Left Curve : ' + '{0:.2f}'.format(left_line.radius_of_curvature) + ' m', (10, 30), font, size, color, weight)
    cv2.putText(img, 'Right Curve : ' + '{0:.2f}'.format(right_line.radius_of_curvature) + ' m', (10, 60), font, size, color, weight)
    cv2.putText(img, 'Left Lane Pos: ' + '{0:.2f}'.format(left_line.bestx), (10, 100), font, size, color, weight)
    cv2.putText(img, 'Right Lane Pos: ' + '{0:.2f}'.format(right_line.bestx), (10, 130), font, size, color, weight)
    cv2.putText(img, 'Distance from center: ' + '{0:.2f}'.format(left_line.line_base_pos) + ' m', (10, 180), font, size, color, weight)


def draw_lane(undist, img, Minv):
    """
    Draw the detected lane back on the image
    """
    # Generate x and y values for plotting
    ploty = np.linspace(300, 700)
    # Create an image to draw the lines on
    warp_zero = np.zeros_like(img).astype(np.uint8)
    color_warp = np.dstack((warp_zero, warp_zero, warp_zero))

    left_fit = left_line.best_fit
    right_fit = right_line.best_fit

    if left_fit is not None and right_fit is not None:
        left_fitx = left_fit[0]*ploty**2 + left_fit[1]*ploty + left_fit[2]
        # Recast the x and y points into usable format for cv2.fillPoly()
        pts_left = np.array([np.transpose(np.vstack([left_fitx, ploty]))])
        right_fitx = right_fit[0]*ploty**2 + right_fit[1]*ploty + right_fit[2]
        pts_right = np.array([np.flipud(np.transpose(np.vstack([right_fitx, ploty])))])
        pts = np.hstack((pts_left, pts_right))

        # Draw the lane onto the warped blank image
        cv2.fillPoly(color_warp, np.int_([pts]), (20, 120, 80))

        # Warp the blank back to original image space using the inverse perspective matrix (Minv)
        newwarp = cv2.warpPerspective(color_warp, Minv, (img.shape[1], img.shape[0]))
        # Combine the result with the original image
        result = cv2.addWeighted(undist, 1, newwarp, 0.6, 0)
        write_stats(result)
        return result

    return undist
car-lane-detection.ipynb
neerajdixit/car-lane-detection
apache-2.0
See the distribution of predictions over time
fs.display_model_drift('deployed_models','twimlcon_regression', 5)
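`display_model_drift` is Splice Machine's built-in view of this; conceptually, drift monitoring compares recent predictions against a reference window of older ones. A minimal numpy sketch of that idea (my own illustration — the mean-shift rule and the `threshold` value are arbitrary, not what the feature store uses):

```python
import numpy as np

def mean_shift_drift(reference, recent, threshold=0.1):
    """Flag drift when the mean prediction moves by more than
    `threshold` standard deviations of the reference window."""
    shift = abs(np.mean(recent) - np.mean(reference))
    return shift > threshold * np.std(reference)

rng = np.random.default_rng(42)
reference = rng.normal(loc=100.0, scale=10.0, size=1000)
shifted = rng.normal(loc=110.0, scale=10.0, size=1000)

print(mean_shift_drift(reference, reference))  # False: identical windows
print(mean_shift_drift(reference, shifted))    # True: the mean moved ~1 std
```

Real monitoring tools typically use richer distribution comparisons (histogram overlap, KS statistics) over sliding time windows, which is what the time-bucketed view above visualizes.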
twimlcon-workshop-materials/5 - Model Governance.ipynb
splicemachine/splice-community-sample-code
apache-2.0
See the distribution of features at the time a model was trained, and the distribution seen by the deployed model
fs.display_model_feature_drift('deployed_models','twimlcon_regression')
twimlcon-workshop-materials/5 - Model Governance.ipynb
splicemachine/splice-community-sample-code
apache-2.0
Investigate individual predictions
%%sql
select * from deployed_models.twimlcon_regression
where customerid = 12526
and (eval_time >= '2020-11-01' and eval_time <= '2020-11-07')

from splicemachine.notebook import get_mlflow_ui
get_mlflow_ui()
#tags."Run ID" = {runid}

spark.stop()
twimlcon-workshop-materials/5 - Model Governance.ipynb
splicemachine/splice-community-sample-code
apache-2.0
In this tutorial we are going to estimate the connectivity and subsequently filter it. Load data
import sys
import tqdm
import warnings
warnings.simplefilter(action='ignore', category=FutureWarning)

import numpy as np
np.set_printoptions(threshold=sys.maxsize)

fmri = np.load('data/fmri_autism_ts.npy', allow_pickle=True)
labels = np.load('data/fmri_autism_labels.npy')

num_subjects = len(fmri)
num_samples, num_rois = np.shape(fmri[0])
tutorials/fMRI - 1 - Graph Analysis (Group).ipynb
makism/dyfunconn
bsd-3-clause
Compute the connectivity
conn_mtx = np.zeros((num_subjects, num_rois, num_rois))

for subj in tqdm.tqdm(range(num_subjects)):
    fmri_ts = fmri[subj]
    conn_mtx[subj, ...] = np.corrcoef(fmri_ts.T)

np.save('data/fmri_autism_conn_mtx.npy', conn_mtx)
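As a quick sanity check of what `np.corrcoef` produces here, a toy example with made-up data (three "ROIs", four time samples — not from the dataset):

```python
import numpy as np

# Two perfectly correlated ROIs and one anti-correlated ROI,
# arranged as (samples, rois) just like each subject's fMRI matrix.
ts = np.array([[1.0, 2.0, -1.0],
               [2.0, 4.0, -2.0],
               [3.0, 6.0, -3.0],
               [4.0, 8.0, -4.0]])

# np.corrcoef treats rows as variables, hence the transpose.
conn = np.corrcoef(ts.T)
print(conn.shape)   # (3, 3): one row/column per ROI
print(conn[0, 1])   # 1.0: ROIs 0 and 1 move together
print(conn[0, 2])   # -1.0: ROI 2 is anti-correlated
```

This is why each subject's slice of `conn_mtx` is a symmetric `num_rois` × `num_rois` matrix with ones on the diagonal.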
tutorials/fMRI - 1 - Graph Analysis (Group).ipynb
makism/dyfunconn
bsd-3-clause
Filter connectivity matrices
thres_conn_mtx = np.zeros_like(conn_mtx)

from dyconnmap.graphs import threshold_eco

for subj in tqdm.tqdm(range(num_subjects)):
    subj_conn_mtx = np.abs(conn_mtx[subj])
    _, CIJtree, _ = threshold_eco(subj_conn_mtx)
    thres_conn_mtx[subj] = CIJtree

np.save('data/fmri_autism_thres_conn_mtx.npy', thres_conn_mtx)
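`threshold_eco` implements dyconnmap's economical thresholding scheme. As a rough illustration of what thresholding a connectivity matrix does, here is a much simpler density-based filter (my own sketch, not the ECO algorithm — `keep_fraction` is an illustrative parameter, and the result holds absolute weights):

```python
import numpy as np

def threshold_by_density(conn, keep_fraction=0.2):
    """Keep only the strongest `keep_fraction` of off-diagonal weights."""
    mtx = np.abs(conn.copy())
    np.fill_diagonal(mtx, 0.0)
    # Find the cutoff weight so roughly keep_fraction of edges survive.
    weights = mtx[np.triu_indices_from(mtx, k=1)]
    cutoff = np.quantile(weights, 1.0 - keep_fraction)
    mtx[mtx < cutoff] = 0.0
    return mtx

rng = np.random.default_rng(0)
sym = rng.random((8, 8))
sym = (sym + sym.T) / 2.0            # make it symmetric, like a corrcoef matrix
sparse = threshold_by_density(sym, keep_fraction=0.25)
print(np.count_nonzero(sparse))      # far fewer edges than the dense matrix
```

Either way, the point is the same: the dense correlation matrix is reduced to a sparse graph whose surviving edges carry the subsequent graph analysis.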
tutorials/fMRI - 1 - Graph Analysis (Group).ipynb
makism/dyfunconn
bsd-3-clause
You can use each cell to write any code you want, and if you suddenly forget a function, or wonder whether its name is correct, IPython is very helpful in that regard. To find out about a function, that is, what it returns or which parameters it needs, you can put a question mark after the function's name.

Exercise 2

In the following cell, look up these functions: sum, max, round, mean. Don't forget to run the cell after writing the functions.
sum?
max?
round?
mean?
UsoJupyter/CuadernoJupyter.ipynb
PyladiesMx/Empezando-con-Python
mit
As you may have noticed, when it can't find the function it gives you an error... IPython, and therefore Jupyter, has a Tab-completion utility. This means that if you start typing the name of a variable, function, or attribute, you don't have to type it all out: you can start with a few letters and (if it's the only name that starts that way) it will be completed for you automatically. All of us lazy and/or forgetful people love this tool. If there are several options it won't complete, but if you press Tab again it will show you all the options in the cell...
variable = 50
saludo = 'Hola'
UsoJupyter/CuadernoJupyter.ipynb
PyladiesMx/Empezando-con-Python
mit
Exercise 3

Start typing the first three letters of each element in the previous cell and press Tab to see whether it autocompletes.
vars?
UsoJupyter/CuadernoJupyter.ipynb
PyladiesMx/Empezando-con-Python
mit
There are also magic functions that let us do various tasks, such as displaying the plots produced by the code inside a cell, measuring code execution time, and changing the working directory, among others. To see which magic functions exist in Jupyter you just have to type `%magic`. All "magic" functions begin with the percent sign %.
%magic
UsoJupyter/CuadernoJupyter.ipynb
PyladiesMx/Empezando-con-Python
mit
Plots

Now we'll look at some plotting examples and how to make them interactive. These examples were taken from the Nature demo notebook.
# Import matplotlib (plotting package) and numpy (array package).
# Note the magic function that makes the plot appear inside the cell.
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np

# Create an array of 30 x values going from 0 to 5.
x = np.linspace(0, 5, 30)
y = np.sin(x)

# plot y versus x
fig, ax = plt.subplots(nrows=1, ncols=1)
ax.plot(x, y, color='red')
ax.set_xlabel('x')
ax.set_ylabel('y')
ax.set_title('A simple graph of $y=\sin(x)$')
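To make a plot like this interactive, one dependency-free option is matplotlib's own `Slider` widget (inside a notebook you would pair this with the `%matplotlib notebook` magic instead of `inline`). This is a sketch; the slider range and layout numbers are arbitrary:

```python
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.widgets import Slider

x = np.linspace(0, 5, 200)

fig, ax = plt.subplots()
fig.subplots_adjust(bottom=0.25)          # leave room for the slider
line, = ax.plot(x, np.sin(x), color='red')
ax.set_xlabel('x')
ax.set_ylabel('y')

# A horizontal slider controlling the frequency of the sine wave.
slider_ax = fig.add_axes([0.15, 0.1, 0.7, 0.04])
freq_slider = Slider(slider_ax, 'freq', 0.5, 5.0, valinit=1.0)

def update(val):
    # Redraw the curve with the frequency currently selected on the slider.
    line.set_ydata(np.sin(freq_slider.val * x))
    fig.canvas.draw_idle()

freq_slider.on_changed(update)
```

Dragging the slider calls `update`, which replaces the curve's y data in place instead of redrawing the whole figure.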
UsoJupyter/CuadernoJupyter.ipynb
PyladiesMx/Empezando-con-Python
mit
Google Cloud Storage Let's see if we can create a bucket with boto (using credentials, project ID, etc. specified in boto config file)...
import datetime
import boto
import gcs_oauth2_boto_plugin

project_id = %env GPRED_PROJECT_ID
now = datetime.datetime.now()
# lower-case letters required, no upper case allowed
BUCKET_NAME = 'test_' + project_id + now.strftime("%Y-%m-%d")

header_values = {"x-goog-project-id": project_id}
boto.storage_uri(BUCKET_NAME, 'gs').create_bucket(headers=header_values)
credentials/Test.ipynb
louisdorard/bml-base
mit
Listing existing buckets...
uri = boto.storage_uri('', 'gs')

# If the default project is defined, call get_all_buckets() without arguments.
for bucket in uri.get_all_buckets(headers=header_values):
    print bucket.name
credentials/Test.ipynb
louisdorard/bml-base
mit
Upload a file to the new bucket
import os
os.system("echo 'hello!' > newfile")

filename = 'newfile'
boto.storage_uri(BUCKET_NAME + '/' + filename, 'gs').new_key().set_contents_from_file(open(filename))
credentials/Test.ipynb
louisdorard/bml-base
mit