One-hot encode Just like the previous code cell, you'll be implementing a function for preprocessing. This time, you'll implement the one_hot_encode function. The input, x, is a list of labels. Implement the function to return the list of labels as a one-hot encoded NumPy array. The possible values for labels are 0 t...
def one_hot_encode(x):
    """
    One hot encode a list of sample labels. Return a one-hot encoded vector for each label.
    : x: List of sample labels
    : return: Numpy array of one-hot encoded labels
    """
    a = np.zeros(shape=(len(x), 10))  # TODO: Implement Function
    for i in range(len(x)):
        a[i...
image-classification/dlnd_image_classification.ipynb
mikelseverson/Udacity-Deep_Learning-Nanodegree
mit
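The truncated cell above can be completed as a minimal NumPy sketch, assuming 10 classes as the prompt states; the loop body filling `a` is my reconstruction, not the notebook's original:

```python
import numpy as np

def one_hot_encode(x):
    # One-hot encode a list of labels with values 0..9
    # into an array of shape (len(x), 10)
    a = np.zeros(shape=(len(x), 10))
    for i, label in enumerate(x):
        a[i][label] = 1
    return a
```

For example, `one_hot_encode([0, 3, 9])` yields a 3x10 array with a single 1 per row.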
Build the network For the neural network, you'll build each layer into a function. Most of the code you've seen has been outside of functions. To test your code more thoroughly, we require that you put each layer in a function. This allows us to give you better feedback and test for simple mistakes using our unittest...
import tensorflow as tf

def neural_net_image_input(image_shape):
    """
    Return a Tensor for a batch of image input
    : image_shape: Shape of the images
    : return: Tensor for image input.
    """
    # TODO: Implement Function
    return tf.placeholder(tf.float32, shape=(None, image_shape[0], image_shape[1], ...
image-classification/dlnd_image_classification.ipynb
mikelseverson/Udacity-Deep_Learning-Nanodegree
mit
Convolution and Max Pooling Layer Convolution layers have a lot of success with images. For this code cell, you should implement the function conv2d_maxpool to apply convolution then max pooling: * Create the weight and bias using conv_ksize, conv_num_outputs and the shape of x_tensor. * Apply a convolution to x_tensor...
def conv2d_maxpool(x_tensor, conv_num_outputs, conv_ksize, conv_strides, pool_ksize, pool_strides):
    """
    Apply convolution then max pooling to x_tensor
    :param x_tensor: TensorFlow Tensor
    :param conv_num_outputs: Number of outputs for the convolutional layer
    :param conv_ksize: kernel size 2-D Tuple fo...
image-classification/dlnd_image_classification.ipynb
mikelseverson/Udacity-Deep_Learning-Nanodegree
mit
Output Layer Implement the output function to apply a fully connected layer to x_tensor with the shape (Batch Size, num_outputs). Shortcut option: you can use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages for this layer. For more of a challenge, only use other TensorFlow packages. Note: Act...
def output(x_tensor, num_outputs):
    """
    Apply an output layer to x_tensor using weight and bias
    : x_tensor: A 2-D tensor where the first dimension is batch size.
    : num_outputs: The number of outputs that the new tensor should have.
    : return: A 2-D tensor where the second dimension is num_outputs.
    """...
image-classification/dlnd_image_classification.ipynb
mikelseverson/Udacity-Deep_Learning-Nanodegree
mit
Create Convolutional Model Implement the function conv_net to create a convolutional neural network model. The function takes in a batch of images, x, and outputs logits. Use the layers you created above to create this model: Apply 1, 2, or 3 Convolution and Max Pool layers Apply a Flatten Layer Apply 1, 2, or 3 Full...
def conv_net(x_tensor, keep_prob):
    """
    Create a convolutional neural network model
    : x: Placeholder tensor that holds image data.
    : keep_prob: Placeholder tensor that holds dropout keep probability.
    : return: Tensor that represents logits
    """
    # TODO: Apply 1, 2, or 3 Convolution and ...
image-classification/dlnd_image_classification.ipynb
mikelseverson/Udacity-Deep_Learning-Nanodegree
mit
Show Stats Implement the function print_stats to print loss and validation accuracy. Use the global variables valid_features and valid_labels to calculate validation accuracy. Use a keep probability of 1.0 to calculate the loss and validation accuracy.
def print_stats(session, feature_batch, label_batch, cost, accuracy):
    """
    Print information about loss and validation accuracy
    : session: Current TensorFlow session
    : feature_batch: Batch of Numpy image data
    : label_batch: Batch of Numpy label data
    : cost: TensorFlow cost function
    : accuracy...
image-classification/dlnd_image_classification.ipynb
mikelseverson/Udacity-Deep_Learning-Nanodegree
mit
Hyperparameters Tune the following parameters: * Set epochs to the number of iterations until the network stops learning or starts overfitting * Set batch_size to the highest number that your machine has memory for. Most people set them to common sizes of memory: * 64 * 128 * 256 * ... * Set keep_probability to the...
# TODO: Tune Parameters
epochs = 30
batch_size = 128
keep_probability = 0.8
image-classification/dlnd_image_classification.ipynb
mikelseverson/Udacity-Deep_Learning-Nanodegree
mit
Writing to hdf5 using the Microdata objects
# Code source: Chris Smith -- cq6@ornl.gov
# License: MIT
import numpy as np
import pycroscopy as px
docs/auto_examples/microdata_example.ipynb
anugrah-saxena/pycroscopy
mit
Create some MicroDatasets and MicroDataGroups that will be written to the file. With h5py, groups and datasets must be created from the top down, but the Microdata objects allow us to build them in any order and link them later.
# First create some data
data1 = np.random.rand(5, 7)
docs/auto_examples/microdata_example.ipynb
anugrah-saxena/pycroscopy
mit
Now use the array to build the dataset. This dataset will live directly under the root of the file. The MicroDataset class also implements the compression and chunking parameters from h5py.Dataset.
ds_main = px.MicroDataset('Main_Data', data=data1, parent='/')
docs/auto_examples/microdata_example.ipynb
anugrah-saxena/pycroscopy
mit
We can also create an empty dataset and write the values in later. With this method, it is necessary to specify the dtype and maxshape kwarg parameters.
ds_empty = px.MicroDataset('Empty_Data', data=[], dtype=np.float32, maxshape=[7, 5, 3])
docs/auto_examples/microdata_example.ipynb
anugrah-saxena/pycroscopy
mit
We can also create groups and add other MicroData objects as children. If the group's parent is not given, it will be set to root.
data_group = px.MicroDataGroup('Data_Group', parent='/')
root_group = px.MicroDataGroup('/')

# After creating the group, we then add an existing object as its child.
data_group.addChildren([ds_empty])
root_group.addChildren([ds_main, data_group])
docs/auto_examples/microdata_example.ipynb
anugrah-saxena/pycroscopy
mit
The showTree method allows us to view the data structure before the hdf5 file is created.
root_group.showTree()
docs/auto_examples/microdata_example.ipynb
anugrah-saxena/pycroscopy
mit
Now that we have created the objects, we can write them to an hdf5 file
# First we specify the path to the file
h5_path = 'microdata_test.h5'

# Then we use the ioHDF5 class to build the file from our objects.
hdf = px.ioHDF5(h5_path)
docs/auto_examples/microdata_example.ipynb
anugrah-saxena/pycroscopy
mit
The writeData method builds the hdf5 file using the structure defined by the MicroData objects. It returns a list of references to all h5py objects in the new file.
h5_refs = hdf.writeData(root_group, print_log=True)

# We can use these references to get the h5py dataset and group objects
h5_main = px.io.hdf_utils.getH5DsetRefs(['Main_Data'], h5_refs)[0]
h5_empty = px.io.hdf_utils.getH5DsetRefs(['Empty_Data'], h5_refs)[0]
docs/auto_examples/microdata_example.ipynb
anugrah-saxena/pycroscopy
mit
Compare the data in our dataset to the original
print(np.allclose(h5_main[()], data1))
docs/auto_examples/microdata_example.ipynb
anugrah-saxena/pycroscopy
mit
As mentioned above, we can now write to the Empty_Data object
data2 = np.random.rand(*h5_empty.shape)
h5_empty[:] = data2[:]
docs/auto_examples/microdata_example.ipynb
anugrah-saxena/pycroscopy
mit
Now that we are using h5py objects, we must use flush to write the data to the file after it has been altered. We need the file object to do this. It can be accessed as an attribute of the hdf object.
h5_file = hdf.file
h5_file.flush()
docs/auto_examples/microdata_example.ipynb
anugrah-saxena/pycroscopy
mit
Now that we are done, we should close the file so that it can be accessed elsewhere.
h5_file.close()
docs/auto_examples/microdata_example.ipynb
anugrah-saxena/pycroscopy
mit
Making training mini-batches Here is where we'll make our mini-batches for training. Remember that we want our batches to be multiple sequences of some desired number of sequence steps. Considering a simple example, our batches would look like this: <img src="assets/sequence_batching@1x.png" width=500px> <br> We have o...
def get_batches(arr, n_seqs, n_steps):
    '''Create a generator that returns batches of size
       n_seqs x n_steps from arr.

       Arguments
       ---------
       arr: Array you want to make batches from
       n_seqs: the number of sequences per batch
       n_steps: Number of sequence steps per batch
       ...
intro-to-rnns/Anna_KaRNNa_Exercises.ipynb
hparik11/Deep-Learning-Nanodegree-Foundation-Repository
mit
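The body of the truncated generator above can be sketched as follows. Trimming to full batches and shifting the targets by one step follow the docstring; wrapping the last target back to the batch's first element is an assumption of this sketch, not necessarily the notebook's exact choice:

```python
import numpy as np

def get_batches(arr, n_seqs, n_steps):
    '''Yield (inputs, targets) batches of shape n_seqs x n_steps from arr.'''
    chars_per_batch = n_seqs * n_steps
    n_batches = len(arr) // chars_per_batch
    # Keep only enough elements for full batches, then lay out one row per sequence
    arr = arr[:n_batches * chars_per_batch].reshape((n_seqs, -1))
    for n in range(0, arr.shape[1], n_steps):
        x = arr[:, n:n + n_steps]
        # Targets are the inputs shifted one step, wrapping at the window's end
        y = np.zeros_like(x)
        y[:, :-1], y[:, -1] = x[:, 1:], x[:, 0]
        yield x, y
```

For example, `get_batches(np.arange(40), 2, 5)` yields 4 batches of shape (2, 5).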
Training loss Next up is the training loss. We get the logits and targets and calculate the softmax cross-entropy loss. First we need to one-hot encode the targets, since we're getting them as encoded characters. Then reshape the one-hot targets so they form a 2D tensor with size $(MN) \times C$ where $C$ is the number of classe...
def build_loss(logits, targets, lstm_size, num_classes):
    ''' Calculate the loss from the logits and the targets.

        Arguments
        ---------
        logits: Logits from final fully connected layer
        targets: Targets for supervised learning
        lstm_size: Number of LSTM hidden units
        nu...
intro-to-rnns/Anna_KaRNNa_Exercises.ipynb
hparik11/Deep-Learning-Nanodegree-Foundation-Repository
mit
Build the network Now we can put all the pieces together and build a class for the network. To actually run data through the LSTM cells, we will use tf.nn.dynamic_rnn. This function will pass the hidden and cell states across LSTM cells appropriately for us. It returns the outputs for each LSTM cell at each step for ea...
class CharRNN:

    def __init__(self, num_classes, batch_size=64, num_steps=50,
                 lstm_size=128, num_layers=2, learning_rate=0.001,
                 grad_clip=5, sampling=False):

        # When we're using this network for sampling later, we'll be passing in
        # one characte...
intro-to-rnns/Anna_KaRNNa_Exercises.ipynb
hparik11/Deep-Learning-Nanodegree-Foundation-Repository
mit
Steps 4 & 5: Sample data from a setting similar to the data and record classification accuracy
np.random.seed(12345678)  # for reproducibility, set random seed
r = 20   # define number of rois
N = 100  # number of samples at each iteration
p0 = 0.10
p1 = 0.15
# define number of subjects per class
S = np.array((8, 16, 20, 32, 40, 64, 80, 100, 120, 200, 320, 400, 600))
S = np.array((200, 300))
names = ...
Code/classificationANDregression_simulation_AL.ipynb
Upward-Spiral-Science/the-vat
apache-2.0
STEP 6: PLOTTING ACCURACY VS. N FOR EACH REGRESSOR
plt.errorbar(S, errors[:,0,0], yerr=errors[:,0,1], hold=True, label=names[0])
plt.errorbar(S, errors[:,1,0], yerr=errors[:,1,1], color='green', hold=True, label=names[1])
plt.errorbar(S, errors[:,2,0], yerr=errors[:,2,1], color='red', hold=True, label=names[2])
plt.errorbar(S, errors[:,3,0], yerr=errors[:,3,1],...
Code/classificationANDregression_simulation_AL.ipynb
Upward-Spiral-Science/the-vat
apache-2.0
STEP 7: APPLYING REGRESSIONS TO COLUMNS OF FEATURES
#### RUN AT BEGINNING AND TRY NOT TO RUN AGAIN - TAKES WAY TOO LONG ####
csvfile = "data_normalized/shortenedFeatures_normalized.txt"

# load in the feature data
list_of_features = []
with open(csvfile) as file:
    for line in file:
        inner_list = [float(elt.strip()) for elt in line.split(',')]
        ...
Code/classificationANDregression_simulation_AL.ipynb
Upward-Spiral-Science/the-vat
apache-2.0
STEP 8: DISCUSSION:
X, y, coef = datasets.make_regression(n_samples=300, n_features=1, n_informative=1,
                                      noise=1, coef=True, random_state=0)
X_train, X_test, y_train, y_test = cross_validation.train_test_split(X, y, test_size=0.4, random_state=0)

# create regression ...
Code/classificationANDregression_simulation_AL.ipynb
Upward-Spiral-Science/the-vat
apache-2.0
Logistic Regression Hyperparameter tuning: For the Logistic Regression classifier, we can seek to optimize the following classifier parameters: penalty (l1 or l2), C (inverse of regularization strength), solver ('newton-cg', 'lbfgs', 'liblinear', or 'sag') Model calibration: See above LR with L1-Penalty Hyperparameter ...
cValsL1 = [7.5, 10.0, 12.5, 20.0]
methods = ['sigmoid', 'isotonic']
cv = 2
tol = 0.01
for c in cValsL1:
    for m in methods:
        ccvL1 = CalibratedClassifierCV(LogisticRegression(penalty='l1', C=c, tol=tol), method=m, cv=cv)
        ccvL1.fit(mini_train_data, mini_train_labels)
        print(ccvL1.get_params())
        ...
iterations/KK_scripts/W207_Final_Project_logisticRegressionOptimization_updated_08_20_1911.ipynb
samgoodgame/sf_crime
mit
LR with L2-Penalty Hyperparameter Tuning
cValsL2 = [75.0, 100.0, 150.0, 250.0]
methods = ['sigmoid', 'isotonic']
cv = 2
tol = 0.01
for c in cValsL2:
    for m in methods:
        ccvL2 = CalibratedClassifierCV(LogisticRegression(penalty='l2', solver='newton-cg', C=c, tol=tol), method=m, cv=cv)
        ccvL2.fit(mini_train_data, mini_train_labels)
        prin...
iterations/KK_scripts/W207_Final_Project_logisticRegressionOptimization_updated_08_20_1911.ipynb
samgoodgame/sf_crime
mit
Since the coefficients in the triangle on the rhs are part of Pascal's triangle, namely A104712, the following is a generalization: $$ f_{2n+1} - 1 = \sum_{k=1}^{n}{{{n+1}\choose{k+1}}f_{k}} $$
gen_odd_fibs = Eq(f[2*n+1]-1, Sum(binomial(n+1, k+1)*f[k], (k,1,n)))
Eq(gen_odd_fibs, Sum(binomial(n+1, n-k)*f[k], (k,1,n)))
expand_sum_in_eq(gen_odd_fibs.subs(n, 8))
eq_sym.subs(fibs)
eq_17 = Eq(f[17], f[-1] + rhs_sym[-1])
eq_18_shift = Eq(f[n], f[n-18]+8*f[n-17]+36*f[n-16]+84*f[n-15]+126*f[n-14]+126*f[n-13]+84*f[n-...
notebooks/binomial-transform-applied-to-fibonacci-numbers.ipynb
massimo-nocentini/PhD
apache-2.0
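The identity can also be checked numerically without sympy. A quick sketch using math.comb, with `fib` 1-indexed so that $f_1 = f_2 = 1$:

```python
from math import comb

def fib(n):
    # Iterative Fibonacci with fib(1) == fib(2) == 1
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

# f_{2n+1} - 1 == sum_{k=1}^{n} C(n+1, k+1) * f_k
for n in range(1, 10):
    lhs = fib(2 * n + 1) - 1
    rhs = sum(comb(n + 1, k + 1) * fib(k) for k in range(1, n + 1))
    assert lhs == rhs
```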
Again, Fibonacci numbers, A000045.
from itertools import accumulate

to_accumulate = rhs_sym + ones(9,1)*f[-1]
even_rhs = Matrix(list(accumulate(to_accumulate,
    lambda folded, current_row: Add(folded, current_row))))
even_lhs = Matrix([f[i] for i in range(2,19,2)])
even_fibs_matrix_eq = Eq(even_lhs, even_rhs)
even_fibs_matrix_eq
even_transformed_matr...
notebooks/binomial-transform-applied-to-fibonacci-numbers.ipynb
massimo-nocentini/PhD
apache-2.0
Since the coefficients in the triangle on the rhs are part of Pascal's triangle, namely A104713, the following is a generalization: $$ f_{2n} - n = \sum_{k=1}^{n-1}{{{n+1}\choose{k+2}}f_{k}} $$
gen_even_fibs = Eq(f[2*n]-n, Sum(binomial(n+1, k+2)*f[k], (k,1,n-1)))
Eq(gen_even_fibs, Sum(binomial(n+1, n-k-1)*f[k], (k,1,n-1)))
expand_sum_in_eq(gen_even_fibs.subs(n, 9))
even_fibs_matrix_eq_minus1_appear = even_fibs_matrix_eq.subs(fibs)
Eq(even_fibs_matrix_eq.lhs, even_fibs_matrix_eq_minus1_appear, evaluate=False...
notebooks/binomial-transform-applied-to-fibonacci-numbers.ipynb
massimo-nocentini/PhD
apache-2.0
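The even-index identity checks out numerically as well, with the same 1-indexed `fib` convention:

```python
from math import comb

def fib(n):
    # Iterative Fibonacci with fib(1) == fib(2) == 1
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

# f_{2n} - n == sum_{k=1}^{n-1} C(n+1, k+2) * f_k
for n in range(2, 12):
    lhs = fib(2 * n) - n
    rhs = sum(comb(n + 1, k + 2) * fib(k) for k in range(1, n))
    assert lhs == rhs
```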
Summands on the rhs form a known sequence, A054452.
list(accumulate([fibonacci(2*i+1)-1 for i in range(21)]))

def n_gf(t):
    return t/(1-t)**2

n_gf(t).series(n=20)

def odd_fib_gf(t):
    return t**2/((1-t)**2*(1-3*t+t**2))

odd_fib_gf(t).series(n=20)

composite_odd_fibs_gf = n_gf(t)+odd_fib_gf(t)
composite_odd_fibs_gf.factor(), composite_odd_fibs_gf.series(n=20)

def odd_i...
notebooks/binomial-transform-applied-to-fibonacci-numbers.ipynb
massimo-nocentini/PhD
apache-2.0
Generating a Spiral Training Dataset We'll be using this 2D dataset because it's easy to visually see the classifier performance, and because it's impossible to linearly separate the classes nicely.
N = 100  # points per class
D = 2    # dimensionality at 2 so we can eyeball it
K = 3    # number of classes
X = np.zeros((N*K, D))            # generate an empty matrix to hold X features
y = np.zeros(N*K, dtype='uint8')  # generate an empty vector to hold y labels

# for 3 classes, evenly generates spiral arms
for j in xrange(K):
    ...
codelab_1_NN_Numpy.ipynb
thinkingmachines/deeplearningworkshop
mit
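The truncated loop body can be sketched in Python 3 as below. The radius/angle formulas are an assumption of this sketch (the standard CS231n-style spiral recipe, which this notebook appears to follow); only the array setup is visible in the original cell:

```python
import numpy as np

N, D, K = 100, 2, 3  # points per class, dimensionality, number of classes
X = np.zeros((N * K, D))
y = np.zeros(N * K, dtype='uint8')
for j in range(K):
    ix = range(N * j, N * (j + 1))
    r = np.linspace(0.0, 1, N)  # radius grows from the origin
    # angle sweeps one arc per class, with a little noise (assumed parameters)
    t = np.linspace(j * 4, (j + 1) * 4, N) + np.random.randn(N) * 0.2
    X[ix] = np.c_[r * np.sin(t), r * np.cos(t)]
    y[ix] = j
```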
Quick question: what are the dimensions of X and y? Let's visualize this. Setting s=20 (size of points) so that the color/label differences are more visible.
plt.scatter(X[:,0], X[:,1], c=y, s=20, cmap=plt.cm.Spectral) plt.show()
codelab_1_NN_Numpy.ipynb
thinkingmachines/deeplearningworkshop
mit
Training a Linear Classifier Let's start by training a simple y = WX + b linear classifier on this dataset. We need to compute some weights (W) and a bias vector (b) for all classes.
# random initialization of starting params. recall that it's best to randomly initialize at a small value.
# how many parameters should this linear classifier have? remember there are K output classes, and 2 features per observation.
W = 0.01 * np.random.randn(D,K)
b = np.zeros((1,K))

print "W shape", W.shape
print ...
codelab_1_NN_Numpy.ipynb
thinkingmachines/deeplearningworkshop
mit
We're going to compute the normalized softmax of these scores...
num_examples = X.shape[0]
exp_scores = np.exp(scores)
probs = exp_scores / np.sum(exp_scores, axis=1, keepdims=True)

# Let's look at one example to verify the softmax transform
print "Score: ", scores[50]
print "Class Probabilities: ", probs[50]
codelab_1_NN_Numpy.ipynb
thinkingmachines/deeplearningworkshop
mit
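One practical refinement the cell above omits: exponentiating raw scores can overflow for large values. A small sketch of the standard fix, subtracting the per-row maximum first (the function name `softmax_rows` is mine, not the notebook's):

```python
import numpy as np

def softmax_rows(scores):
    # Softmax is invariant to shifting each row by a constant,
    # so subtract the row max before exponentiating to avoid overflow.
    shifted = scores - scores.max(axis=1, keepdims=True)
    exp_scores = np.exp(shifted)
    return exp_scores / exp_scores.sum(axis=1, keepdims=True)
```

With this version, even `softmax_rows(np.array([[1000.0, 1000.0]]))` returns finite probabilities.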
The array correct_logprobs is a 1D array of the negative log probabilities assigned to the correct classes for each example.
correct_logprobs = -np.log(probs[range(num_examples),y])

# data loss is the average cross-entropy loss; total loss adds the regularization loss
data_loss = np.sum(correct_logprobs)/num_examples
reg_loss = 0.5*reg*np.sum(W*W)
loss = data_loss + reg_loss

# this gets the gradient of the scores:
# class probabilities minus 1 at the correct class, divided by num_examples
dscores...
codelab_1_NN_Numpy.ipynb
thinkingmachines/deeplearningworkshop
mit
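The loss and gradient computed above can be bundled into one self-contained sketch (the helper name `softmax_loss_and_grad` is mine; the math follows the cell):

```python
import numpy as np

def softmax_loss_and_grad(scores, y, W, reg):
    """Average cross-entropy loss plus L2 regularization, and the score gradient."""
    num_examples = scores.shape[0]
    exp_scores = np.exp(scores - scores.max(axis=1, keepdims=True))
    probs = exp_scores / exp_scores.sum(axis=1, keepdims=True)
    correct_logprobs = -np.log(probs[range(num_examples), y])
    loss = correct_logprobs.mean() + 0.5 * reg * np.sum(W * W)
    # Gradient of cross-entropy wrt scores: probs with 1 subtracted
    # at each correct class, averaged over the batch
    dscores = probs.copy()
    dscores[range(num_examples), y] -= 1
    dscores /= num_examples
    return loss, dscores
```

A sanity check of the gradient: each row of `dscores` sums to zero, since the probabilities sum to one.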
Updating the Parameters We update the parameters W and b in the direction of the negative gradient in order to decrease the loss.
# this updates the W and b parameters
W += -learning_rate * dW
b += -learning_rate * db
codelab_1_NN_Numpy.ipynb
thinkingmachines/deeplearningworkshop
mit
Full Code for Training the Linear Softmax Classifier Using gradient descent for optimization and softmax cross-entropy for the loss function. This ought to converge to a loss of around 0.78 after 150 iterations
# initialize parameters randomly
W = 0.01 * np.random.randn(D,K)
b = np.zeros((1,K))

# some hyperparameters
step_size = 1e-0
reg = 1e-3  # regularization strength

# gradient descent loop
num_examples = X.shape[0]
# evaluated for 200 steps
for i in xrange(200):
    # evaluate class scores, [N x K]
    scores = np.dot(X...
codelab_1_NN_Numpy.ipynb
thinkingmachines/deeplearningworkshop
mit
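A compact, self-contained Python 3 re-sketch of the same loop (cross-entropy loss, same step_size and reg). The toy data here is mine, generated so the classes are learnable by a linear model; the notebook trains on the spiral data instead:

```python
import numpy as np

# toy data: 3 classes determined by a hidden linear map (assumed, for self-containment)
rng = np.random.RandomState(0)
N, D, K = 150, 2, 3
X = rng.randn(N, D)
y = np.argmax(X.dot(rng.randn(D, K)), axis=1)

W = 0.01 * rng.randn(D, K)
b = np.zeros((1, K))
step_size, reg = 1e-0, 1e-3

losses = []
for i in range(200):
    scores = X.dot(W) + b
    exp = np.exp(scores - scores.max(axis=1, keepdims=True))
    probs = exp / exp.sum(axis=1, keepdims=True)
    losses.append(-np.log(probs[range(N), y]).mean() + 0.5 * reg * np.sum(W * W))
    # gradient on scores, then backprop into W and b
    dscores = probs
    dscores[range(N), y] -= 1
    dscores /= N
    dW = X.T.dot(dscores) + reg * W  # includes the regularization gradient
    db = dscores.sum(axis=0, keepdims=True)
    W += -step_size * dW
    b += -step_size * db
```

The loss should decrease steadily over the 200 steps.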
Evaluating the Training Accuracy The training accuracy here ought to be at around 0.5. This is better than chance for 3 classes, where the expected accuracy of randomly selecting one out of 3 labels is 0.33. But not that much better.
scores = np.dot(X, W) + b
predicted_class = np.argmax(scores, axis=1)
print 'training accuracy: %.2f' % (np.mean(predicted_class == y))
codelab_1_NN_Numpy.ipynb
thinkingmachines/deeplearningworkshop
mit
Let's eyeball the decision boundaries to get a better feel for the split.
# plot the resulting classifier
h = 0.02
x_min, x_max = X[:, 0].min() - 1, X[:, 0].max() + 1
y_min, y_max = X[:, 1].min() - 1, X[:, 1].max() + 1
xx, yy = np.meshgrid(np.arange(x_min, x_max, h), np.arange(y_min, y_max, h))
Z = np.dot(np.c_[xx.ravel(), yy.ravel()], W) + b
Z = np.argmax(Z, axis=1)
Z =...
codelab_1_NN_Numpy.ipynb
thinkingmachines/deeplearningworkshop
mit
Training a 2 Layer Neural Network Let's see what kind of improvement we'll get by adding a single hidden layer.
# init parameters
np.random.seed(100)  # so we all have the same numbers
h = 100  # size of hidden layer. a hyperparam in itself.
W = 0.01 * np.random.randn(D,h)
b = np.zeros((1,h))
W2 = 0.01 * np.random.randn(h,K)
b2 = np.zeros((1,K))
codelab_1_NN_Numpy.ipynb
thinkingmachines/deeplearningworkshop
mit
Let's use a ReLU activation function. See how we're passing the scores from one layer into the hidden layer.
hidden_layer = np.maximum(0, np.dot(X, W) + b)
scores = np.dot(hidden_layer, W2) + b2
codelab_1_NN_Numpy.ipynb
thinkingmachines/deeplearningworkshop
mit
The loss computation and the dscores gradient computation remain the same. The major difference lies in the chained backpropagation of dscores all the way back up to the parameters W and b.
# backpropagate the gradient to the parameters of the hidden layer
dW2 = np.dot(hidden_layer.T, dscores)
db2 = np.sum(dscores, axis=0, keepdims=True)

# gradient of the outputs of the hidden layer (the local gradient)
dhidden = np.dot(dscores, W2.T)

# backprop through the ReLU function
dhidden[hidden_layer <= 0] = 0

# ...
codelab_1_NN_Numpy.ipynb
thinkingmachines/deeplearningworkshop
mit
Full Code for Training the 2 Layer NN with ReLU activation Very similar to the linear classifier!
# initialize parameters randomly
np.random.seed(100)  # so we all have the same numbers
h = 100  # size of hidden layer
W = 0.01 * np.random.randn(D,h)
b = np.zeros((1,h))
W2 = 0.01 * np.random.randn(h,K)
b2 = np.zeros((1,K))

# some hyperparameters
step_size = 1e-0
reg = 1e-3  # regularization strength

# optimization...
codelab_1_NN_Numpy.ipynb
thinkingmachines/deeplearningworkshop
mit
Evaluating the Training Set Accuracy This should be around 0.98, which is hugely better than the 0.50 we were getting from the linear classifier!
hidden_layer = np.maximum(0, np.dot(X, W) + b)
scores = np.dot(hidden_layer, W2) + b2
predicted_class = np.argmax(scores, axis=1)
print 'training accuracy: %.2f' % (np.mean(predicted_class == y))
codelab_1_NN_Numpy.ipynb
thinkingmachines/deeplearningworkshop
mit
Let's visualize this to get a more dramatic sense of just how good the split is.
# plot the resulting classifier
h = 0.02
x_min, x_max = X[:, 0].min() - 1, X[:, 0].max() + 1
y_min, y_max = X[:, 1].min() - 1, X[:, 1].max() + 1
xx, yy = np.meshgrid(np.arange(x_min, x_max, h), np.arange(y_min, y_max, h))
Z = np.dot(np.maximum(0, np.dot(np.c_[xx.ravel(), yy.ravel()], W) + b), W2) +...
codelab_1_NN_Numpy.ipynb
thinkingmachines/deeplearningworkshop
mit
Tokens The basic atomic parts of each text are the tokens. A token is the NLP name for a sequence of characters that we want to treat as a group. We have seen how we can extract tokens by splitting the text at the blank spaces. NLTK has a function word_tokenize() for this:
import nltk

s1Tokens = nltk.word_tokenize(sampleText1)
s1Tokens
len(s1Tokens)
03-NLP/introNLTK.ipynb
Mashimo/datascience
apache-2.0
21 tokens extracted, which include words and punctuation. Note that the tokens are different from what a split by blank spaces would obtain, e.g. "can't" is considered by NLTK to be TWO tokens: "can" and "n't" (= "not"), while a tokeniser that splits text by spaces would consider it a single token: "can't". Let's see anothe...
s2Tokens = nltk.word_tokenize(sampleText2)
s2Tokens
03-NLP/introNLTK.ipynb
Mashimo/datascience
apache-2.0
And we can apply it to an entire book, "The Prince" by Machiavelli that we used last time:
# If you would like to work with the raw text you can use 'bookRaw'
with open('../datasets/ThePrince.txt', 'r') as f:
    bookRaw = f.read()

bookTokens = nltk.word_tokenize(bookRaw)
bookText = nltk.Text(bookTokens)  # special format
nBookTokens = len(bookTokens)  # or alternatively len(bookText)

print ("*** Analysing bo...
03-NLP/introNLTK.ipynb
Mashimo/datascience
apache-2.0
As mentioned above, the NLTK tokeniser works in a more sophisticated way than just splitting by spaces, therefore this time we got more tokens. Sentences NLTK has a function to tokenise a text not into words but into sentences.
text1 = "This is the first sentence. A liter of milk in the U.S. costs $0.99. Is this the third sentence? Yes, it is!"
sentences = nltk.sent_tokenize(text1)
len(sentences)
sentences
03-NLP/introNLTK.ipynb
Mashimo/datascience
apache-2.0
As you see, it is not splitting just after each full stop but checks whether the stop is part of an acronym (U.S.) or a number (0.99). It also splits sentences correctly after question or exclamation marks, but not after commas.
sentences = nltk.sent_tokenize(bookRaw)  # extract sentences
nSent = len(sentences)
print ("The book has {} sentences".format (nSent))
print ("and each sentence has on average {} tokens".format (nBookTokens / nSent))
03-NLP/introNLTK.ipynb
Mashimo/datascience
apache-2.0
Most common tokens What are the 20 most frequently occurring (unique) tokens in the text? What is their frequency? The NLTK FreqDist class is used to encode "frequency distributions", which count the number of times that something occurs, for example a token. Its most_common() method then returns a list of tuples where...
def get_top_words(tokens):
    # Calculate frequency distribution
    fdist = nltk.FreqDist(tokens)
    return fdist.most_common()

topBook = get_top_words(bookTokens)
# Output top 20 words
topBook[:20]
03-NLP/introNLTK.ipynb
Mashimo/datascience
apache-2.0
The comma is the most common: we need to remove the punctuation. Most common alphanumeric tokens We can use isalpha() to check if the token is a word and not punctuation.
topWords = [(freq, word) for (word, freq) in topBook if word.isalpha() and freq > 400]
topWords
03-NLP/introNLTK.ipynb
Mashimo/datascience
apache-2.0
We can also remove any capital letters before tokenising:
def preprocessText(text, lowercase=True):
    if lowercase:
        tokens = nltk.word_tokenize(text.lower())
    else:
        tokens = nltk.word_tokenize(text)
    return [word for word in tokens if word.isalpha()]

bookWords = preprocessText(bookRaw)
topBook = get_top_words(bookWords)
# Output top 20 words
topBook...
03-NLP/introNLTK.ipynb
Mashimo/datascience
apache-2.0
Now we have removed the punctuation and the capital letters, but the most common token is "the", not a significant word ... As we saw last time, these are so-called stop words, which are very common and are normally stripped from a text when doing this kind of analysis. Meaningful most common tokens A simple approach ...
meaningfulWords = [word for (word, freq) in topBook if len(word) > 5 and freq > 80]
sorted(meaningfulWords)
03-NLP/introNLTK.ipynb
Mashimo/datascience
apache-2.0
This would work but would also leave out tokens such as I and you, which are actually significant. The better approach, which we saw earlier, is to remove stopwords using external files containing the stop words. NLTK has a corpus of stop words in several languages:
from nltk.corpus import stopwords

stopwordsEN = set(stopwords.words('english'))  # english language
betterWords = [w for w in bookWords if w not in stopwordsEN]
topBook = get_top_words(betterWords)
# Output top 20 words
topBook[:20]
03-NLP/introNLTK.ipynb
Mashimo/datascience
apache-2.0
Now we have excluded words such as the, but we can further improve the list by looking at semantically similar words, such as plural and singular versions.
'princes' in betterWords
betterWords.count("prince") + betterWords.count("princes")
03-NLP/introNLTK.ipynb
Mashimo/datascience
apache-2.0
Stemming Above, in the list of words we have both prince and princes, which are respectively the singular and plural versions of the same word (the stem). The same would happen with verb conjugation (love and loving are considered different words but are actually inflections of the same verb). A stemmer is the tool that re...
input1 = "List listed lists listing listings"
words1 = input1.lower().split(' ')
words1
03-NLP/introNLTK.ipynb
Mashimo/datascience
apache-2.0
And now we apply one of the NLTK stemmers, the Porter stemmer:
porter = nltk.PorterStemmer()
[porter.stem(t) for t in words1]
03-NLP/introNLTK.ipynb
Mashimo/datascience
apache-2.0
As you see, all 5 different words have been reduced to the same stem and would be now the same lexical token.
stemmedWords = [porter.stem(w) for w in betterWords]
topBook = get_top_words(stemmedWords)
topBook[:20]  # Output top 20 words
03-NLP/introNLTK.ipynb
Mashimo/datascience
apache-2.0
Now the word princ is counted 281 times, exactly like the sum of prince and princes. A note here: Stemming usually refers to a crude heuristic process that chops off the ends of words in the hope of achieving this goal correctly most of the time, and often includes the removal of derivational affixes. Prince and prin...
from nltk.stem.snowball import SnowballStemmer

stemmerIT = SnowballStemmer("italian")
inputIT = "Io ho tre mele gialle, tu hai una mela gialla e due pere verdi"
wordsIT = inputIT.split(' ')
[stemmerIT.stem(w) for w in wordsIT]
03-NLP/introNLTK.ipynb
Mashimo/datascience
apache-2.0
Lemma Lemmatization usually refers to doing things properly with the use of a vocabulary and morphological analysis of words, normally aiming to remove inflectional endings only and to return the base or dictionary form of a word, which is known as the lemma. While a stemmer operates on a single word without knowledge ...
from nltk.stem import WordNetLemmatizer

lemmatizer = WordNetLemmatizer()
words1
[lemmatizer.lemmatize(w, 'n') for w in words1]  # n = nouns
03-NLP/introNLTK.ipynb
Mashimo/datascience
apache-2.0
We tell the lemmatizer that the words are nouns. In this case it considers as the same lemma words such as list (singular noun) and lists (plural noun), but leaves the other words as they are.
[lemmatizer.lemmatize(w, 'v') for w in words1] # v = verbs
03-NLP/introNLTK.ipynb
Mashimo/datascience
apache-2.0
We get a different result if we say that the words are verbs. They all have the same lemma; in fact they could all be different inflections or conjugations of a verb. The types of words that can be used are: 'n' = noun, 'v' = verb, 'a' = adjective, 'r' = adverb
words2 = ['good', 'better']
[porter.stem(w) for w in words2]
[lemmatizer.lemmatize(w, 'a') for w in words2]
03-NLP/introNLTK.ipynb
Mashimo/datascience
apache-2.0
It works with different adjectives; it doesn't look only at prefixes and suffixes. You might wonder why stemmers are used instead of always using lemmatisers: stemmers are much simpler, smaller and faster, and good enough for many applications. Now we lemmatise the book:
lemmatisedWords = [lemmatizer.lemmatize(w, 'n') for w in betterWords]
topBook = get_top_words(lemmatisedWords)
topBook[:20]  # Output top 20 words
03-NLP/introNLTK.ipynb
Mashimo/datascience
apache-2.0
Yes, the lemma now is prince. But note that we treated all words in the book as nouns, while a proper approach would be to apply the correct type to each single word. Part of speech (PoS) In traditional grammar, a part of speech (abbreviated form: PoS or POS) is a category of words which have similar grammatical p...
text1 = "Children shouldn't drink a sugary drink before bed."
tokensT1 = nltk.word_tokenize(text1)
nltk.pos_tag(tokensT1)
03-NLP/introNLTK.ipynb
Mashimo/datascience
apache-2.0
The NLTK function pos_tag() will tag each token with the estimated PoS. NLTK has 13 categories of PoS. You can check the acronym using the NLTK help function:
nltk.help.upenn_tagset('RB')
03-NLP/introNLTK.ipynb
Mashimo/datascience
apache-2.0
Which are the most common PoS in The Prince book?
tokensAndPos = nltk.pos_tag(bookTokens)
posList = [thePOS for (word, thePOS) in tokensAndPos]
fdistPos = nltk.FreqDist(posList)
fdistPos.most_common(5)

nltk.help.upenn_tagset('IN')
03-NLP/introNLTK.ipynb
Mashimo/datascience
apache-2.0
It's not nouns (NN) but prepositions and subordinating conjunctions (IN). Extra note: Parsing the grammar structure Words can be ambiguous, and sometimes it is not easy to understand which kind of PoS a word is; for example, in the sentence "visiting aunts can be a nuisance", is visiting a verb or an adjective? Tagging ...
# Parsing sentence structure
text2 = nltk.word_tokenize("Alice loves Bob")
grammar = nltk.CFG.fromstring("""
S -> NP VP
VP -> V NP
NP -> 'Alice' | 'Bob'
V -> 'loves'
""")
parser = nltk.ChartParser(grammar)
trees = parser.parse_all(text2)
for tree in trees:
    print(tree)
03-NLP/introNLTK.ipynb
Mashimo/datascience
apache-2.0
MCQ The correct answers are in bold. What does the following program do? It sorts. It checks that a list is sorted. Nothing, because the loop does not start at 0.
l = [0, 1, 2, 3, 4, 6, 5, 8, 9, 10]
res = True
for i in range(1, len(l)):
    if l[i-1] > l[i]:   # a list is not sorted if two consecutive elements
        res = False     # are not in the right order
print(res)
_doc/notebooks/examen/solution_2016.ipynb
sdpython/actuariat_python
mit
The following function does not work on ... The number 0. The constant "123". Strictly negative numbers.
def somme(n):
    return sum([int(c) for c in str(n)])
# a minus sign would trigger the computation of int('-'), which is invalid
somme(0), somme("123")
_doc/notebooks/examen/solution_2016.ipynb
sdpython/actuariat_python
mit
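The failure mode can be demonstrated directly: str(-12) contains the character '-', and int('-') raises a ValueError. A minimal sketch:

```python
# The digit-sum trick works on non-negative ints and on digit strings,
# but fails on negatives because str(-12) contains '-'.
def somme(n):
    return sum(int(c) for c in str(n))

print(somme(0))       # 0
print(somme("123"))   # 6
try:
    somme(-12)
except ValueError as e:
    print("negative input fails:", e)
```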
The following program raises an error. Which exception will it produce? SyntaxError TypeError IndexError
# raises an exception
li = list(range(0, 10))
sup = [0, 9]
for i in sup:
    del li[i]   # the first element is removed; after that,
                # the last element has index 8, not 9
_doc/notebooks/examen/solution_2016.ipynb
sdpython/actuariat_python
mit
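A quick sketch confirms the IndexError and shows a common fix: deleting indices in descending order, so that earlier deletions cannot shift the positions still to be deleted.

```python
# Deleting by ascending index shifts later elements, so del li[9]
# ends up pointing past the end of the shortened list.
li = list(range(10))
try:
    for i in [0, 9]:
        del li[i]
except IndexError:
    print("IndexError, as expected")

# Deleting in descending index order avoids the shift problem.
li = list(range(10))
for i in sorted([0, 9], reverse=True):
    del li[i]
print(li)  # [1, 2, 3, 4, 5, 6, 7, 8]
```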
Circle what is true about the following function. It is recursive. It is missing a stopping condition. fibo(4) calls fibo recursively 8 times: fibo(3) once, fibo(2) twice, fibo(1) three times and fibo(0) twice.
def fibo(n):
    print("fibo", n)
    if n < 1:
        return 0
    elif n == 1:
        return 1
    else:
        return fibo(n-1) + fibo(n-2)

fibo(4)
_doc/notebooks/examen/solution_2016.ipynb
sdpython/actuariat_python
mit
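The claim of 8 recursive calls can be verified by counting invocations (9 in total, the initial fibo(4) plus 8 recursive ones):

```python
# Count invocations to confirm: fibo(4) triggers 8 recursive calls,
# i.e. 9 invocations counting the initial one.
calls = 0

def fibo(n):
    global calls
    calls += 1
    if n < 1:
        return 0
    elif n == 1:
        return 1
    return fibo(n - 1) + fibo(n - 2)

result = fibo(4)
print(result, calls)  # 3 9
```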
The function is obviously recursive since it calls itself; it even makes two recursive calls within the same invocation, which explains the many calls. How many rows does the dataframe df2 have? 3 4 5 6 7 8 9 None, the code raises an error.
import pandas
df = pandas.DataFrame([dict(x=1, t="e"), dict(x=3, t="f"), dict(x=4, t="e")])
df2 = df.merge(df, left_on="x", right_on="x")
df2
_doc/notebooks/examen/solution_2016.ipynb
sdpython/actuariat_python
mit
The initial dataframe has 3 rows. We merge it with itself on a column that contains only distinct values. Each row merges with exactly one row. The result has 3 rows. How many rows does the dataframe df3 have? 3 4 5 6 7 8 9 None, the code raises an error.
import pandas
df = pandas.DataFrame([dict(x=1, t="e"), dict(x=3, t="f"), dict(x=4, t="e")])
df3 = df.merge(df, left_on="t", right_on="t")
df3.head()
_doc/notebooks/examen/solution_2016.ipynb
sdpython/actuariat_python
mit
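A quick check of both row counts (a minimal verification sketch using the same toy dataframe):

```python
import pandas

df = pandas.DataFrame([dict(x=1, t="e"), dict(x=3, t="f"), dict(x=4, t="e")])
df2 = df.merge(df, left_on="x", right_on="x")   # unique key: 1x1 matches, 3 rows
df3 = df.merge(df, left_on="t", right_on="t")   # 'e' x 'e' gives 2x2 = 4, plus 'f': 5 rows
print(len(df2), len(df3))  # 3 5
```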
The initial dataframe has 3 rows. We merge it with itself on a column whose values are not all distinct: there are two 'e' and one 'f'. The unique key 'f' merges with itself; the 'e' keys merge with one another, giving 2 x 2 = 4 rows. Result: 1 + 4 = 5. Dataframes Suppose we have a file...
import pandas
from urllib.error import URLError
url_ = "https://archive.ics.uci.edu/ml/machine-learning-databases/00350/"
name = "default%20of%20credit%20card%20clients.xls"
url = url_ + name
try:
    df = pandas.read_excel(url, skiprows=1)
except URLError:
    # backup plan
    url_ = "http://www.xavierdupre.fr/enseig...
_doc/notebooks/examen/solution_2016.ipynb
sdpython/actuariat_python
mit
Q1 Write a function that aggregates a dataset by AGE and computes, in a single pass, the minimum, maximum and mean of the variables LIMIT_BAL and default payment next month, as well as the number of observations sharing the same AGE.
import numpy
res = df.groupby("AGE").agg({"LIMIT_BAL": (min, max, numpy.mean),
                             "ID": len,
                             "default payment next month": (min, max, numpy.mean)})
res.head()
_doc/notebooks/examen/solution_2016.ipynb
sdpython/actuariat_python
mit
Q2 Read the documentation of read_csv. We want to load a file in several chunks and, for each chunk, compute the aggregation above. The column names are only present on the first line of the file.
aggs = []
step = 10000
columns = None
for i in range(0, df.shape[0], step):
    part = pandas.read_csv("data.txt", encoding="utf-8", sep="\t",
                           skiprows=i, nrows=step,
                           header=0 if columns is None else None,
                           names=columns)
    ...
_doc/notebooks/examen/solution_2016.ipynb
sdpython/actuariat_python
mit
The important points: we use the read_csv function to read the file in chunks with skiprows and nrows; we compute the statistics on each chunk; the column names only appear on the first line, so they must be kept in order to add them back when loading the second chunk of the file (and the following...
def agg_exo(df):
    gr = df.groupby("AGE").agg({
        #'LIMIT_BAL': {'LB_min': 'min', 'LB_max': 'max', 'LB_avg': 'mean'},
        'LIMIT_BAL': ['min', 'max', 'mean'],
        'default payment next month': ['min', 'max', 'mean'],
        #'ID': {'len': 'count'}
        'ID': ['count'],
    })...
_doc/notebooks/examen/solution_2016.ipynb
sdpython/actuariat_python
mit
Q3 The dataframe tout is the concatenation of two dataframes containing the aggregated information for each chunk. We now want to obtain the same aggregated information for the whole dataset using only the dataframe tout. Write the code that performs this aggregation.
tout[("LIMIT_BAL", "w")] = tout[("LIMIT_BAL", "mean")] * tout[("ID", "len")]
# be careful with the weights here
tout[("default payment next month", "w")] = tout[("default payment next month", "mean")] * tout[("ID", "len")]
toutm = tout.reset_index()
tout_agg = toutm.groupby("AGE").agg({
    ("LIMIT_BAL", "min"): min,
    ...
_doc/notebooks/examen/solution_2016.ipynb
sdpython/actuariat_python
mit
Computing a mean over observations is easy enough, but things get trickier when taking a mean of means. You have to keep track of how many observations each mean represents, otherwise the final mean will be wrong. This explains line 3. Q4 Plot a histogram with the mean value of the variable LI...
tout_agg.head()
data = tout_agg
f, ax = plt.subplots(figsize=(10, 4))
data.plot.bar(y=("LIMIT_BAL", "mean"), label="avg LIMIT_BAL", ax=ax, color="pink")
data.reset_index(drop=True).plot(y=("LIMIT_BAL", "min"), label="min", kind="line", ax=ax, color="green")
data.reset_index(drop=True).plot(y=("LIMIT_BAL", "max"), label...
_doc/notebooks/examen/solution_2016.ipynb
sdpython/actuariat_python
mit
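The pitfall of taking a mean of means without weights can be demonstrated on toy numbers (made up purely for illustration):

```python
# Mean of means goes wrong unless each chunk mean is weighted by its size.
chunk_means = [10.0, 20.0]
chunk_sizes = [1, 3]          # chunk 1 has 1 row, chunk 2 has 3 rows

naive = sum(chunk_means) / len(chunk_means)                        # 15.0, wrong
weighted = (sum(m * s for m, s in zip(chunk_means, chunk_sizes))
            / sum(chunk_sizes))                                    # 70/4 = 17.5
print(naive, weighted)

# The full data would be [10, 20, 20, 20], whose true mean is 17.5:
# only the weighted version recovers it.
```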
A curious thing about the years: the oldest sighting is from 1762! This dataset includes sightings from history. How significant can these be? To figure it out, it's time to plot some charts. Humans are visual beings, and a picture really is worth much more than a bunch of numbers and words. To do so we will use the default...
sightings_by_year = ufo.groupby('Year').size().reset_index()
sightings_by_year.columns = ['Year', 'Sightings']

import matplotlib as mpl
import matplotlib.pyplot as plt
import matplotlib.ticker as ticker
import seaborn as sns

plt.style.use('seaborn-white')
%matplotlib inline

plt.xticks(rotation=90)
sns.barplot(
Python3/.ipynb_checkpoints/ufo-sample-python3-checkpoint.ipynb
valter-lisboa/ufo-notebooks
gpl-3.0
We can see the number of sightings becomes more representative after around 1900, so we will filter the dataframe for all years above this threshold.
ufo = ufo[ufo['Year'] > 1900]
Python3/.ipynb_checkpoints/ufo-sample-python3-checkpoint.ipynb
valter-lisboa/ufo-notebooks
gpl-3.0
For the purpose of cleaning up the data I determined that the Name and Ticket columns were not necessary for my future analysis.
## Name and Ticket are not needed for what I am going to be tackling
titanic_ds = titanic_ds.drop(["Name", "Ticket"], axis=1)
Project 2 Titanic Data Final.ipynb
dasnah/TitanicDataSet
unlicense
Using .info and .describe, I am able to get a quick overview of what the data set has to offer and whether anything stands out. In this instance we can see the embarked count is less than the number of passengers, but this will not be an issue for my analysis.
## overview of data
titanic_ds.info()
titanic_ds.describe()
Project 2 Titanic Data Final.ipynb
dasnah/TitanicDataSet
unlicense
For my next few blocks of code and graphs I will be looking at two groups of individuals on the boat, male and female. To make the analysis a little easier I created two variables that select all males and all females on the boat.
## defining men and women from data
men_ds = titanic_ds[titanic_ds.Sex == 'male']
women_ds = titanic_ds[titanic_ds.Sex == 'female']
Project 2 Titanic Data Final.ipynb
dasnah/TitanicDataSet
unlicense
Using those two data sets that I created in the previous block, I printed the counts to understand how many of each sex were on the boat.
# idea of the spread between men and women
print("Males: ")
print(men_ds.count()['Sex'])
print("Females: ")
print(women_ds.count()['Sex'])
Project 2 Titanic Data Final.ipynb
dasnah/TitanicDataSet
unlicense
For this section I utilized Seaborn's factorplot function to graph the count of males and females in each class.
## Gender distribution by class
gender_class = sea.factorplot('Pclass', order=[1, 2, 3], data=titanic_ds, hue='Sex', kind='count')
gender_class.set_ylabels("count of passengers")
Project 2 Titanic Data Final.ipynb
dasnah/TitanicDataSet
unlicense
To begin answering my first question of who has a higher probability of surviving, I created two variables, men_prob and women_prob. From there I grouped by sex and survival, took the mean, and then printed out each statement.
## Probability of Survival by Gender
men_prob = men_ds.groupby('Sex').Survived.mean()
women_prob = women_ds.groupby('Sex').Survived.mean()
print("Male ability to survive: ")
print(men_prob[0])
print("Women ability to survive: ")
print(women_prob[0])
Project 2 Titanic Data Final.ipynb
dasnah/TitanicDataSet
unlicense
To visually answer the question of which sex had a higher probability of surviving, I utilized Seaborn's factorplot function to map sex and survival in the form of a bar graph. I also included a y-axis label for presentation.
sbg = sea.factorplot("Sex", "Survived", data=titanic_ds, kind="bar", ci=None, size=5)
sbg.set_ylabels("survival probability")
Project 2 Titanic Data Final.ipynb
dasnah/TitanicDataSet
unlicense
To answer my second question of what the age range was of survivors vs. non-survivors, I first wanted to see the distribution of age across the board. To do this I used the histogram function and printed the median age. To validate the finding that females do have a higher probability of surviving over ma...
print("Total Count of Males and Females on ship: ")
print(titanic_ds.count()['Sex'])
print("Total Males:")
print(men_ds.count()['Sex'])
print("Males (Survived, Deceased): ")
print(men_ds[men_ds.Survived == 1].count()['Sex'], men_ds[men_ds.Survived == 0].count()['Sex'])
print("Total Women:")
print(women_ds.count()['S...
Project 2 Titanic Data Final.ipynb
dasnah/TitanicDataSet
unlicense
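The chi-square figure quoted below can be reproduced without scipy. Note the observed counts here are an assumption on my part, reconstructed from the row and column totals implied by the expected-frequency table (rows = male/female, columns = survived/deceased); they are not taken from the notebook itself.

```python
# Yates-corrected chi-square for a 2x2 table, computed by hand.
# Observed counts are reconstructed (assumed) from the marginal totals.
obs = [[109, 468], [233, 81]]                 # (survived, deceased) for males, females
row_tot = [sum(r) for r in obs]               # [577, 314]
col_tot = [sum(c) for c in zip(*obs)]         # [342, 549]
n = sum(row_tot)                              # 891
exp = [[r * c / n for c in col_tot] for r in row_tot]
chi2 = sum((abs(o - e) - 0.5) ** 2 / e        # continuity correction for dof = 1
           for orow, erow in zip(obs, exp)
           for o, e in zip(orow, erow))
print(round(chi2, 3))
```

With these counts the expected frequencies match the table below (221.47, 355.53, 120.53, 193.47) and the statistic lands near the reported 260.717.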
Chi-square value: 260.71702016732104; p-value: 1.1973570627755645e-58; degrees of freedom: 1; expected frequencies table: 221.47474747, 355.52525253 / 120.52525253, 193.47474747. Given that the p-value of 1.1973570627755645e-58 is less than the significance level of .05, there is an indication t...
## Distribution of age; median age 28.0
titanic_ds['Age'].hist(bins=100)
print("Median Age: ")
print(titanic_ds['Age'].median())
Project 2 Titanic Data Final.ipynb
dasnah/TitanicDataSet
unlicense
To answer my second question, I showed survival data against age in a box plot to display the average age as well as its distribution for both deceased and surviving passengers.
## Age box plot, survived and did not survive
## fewer people survived as compared to deceased
age_box = sea.boxplot(x="Survived", y="Age", data=titanic_ds)
age_box.set(xlabel='Survived', ylabel='Age', xticklabels=['Deceased', 'Survived'])
Project 2 Titanic Data Final.ipynb
dasnah/TitanicDataSet
unlicense
To tackle the third question of who has a higher probability of surviving, being alone or in a family, I first created a function that returns True if the number of relatives reported is above 0 (Family) and False if it is not (Alone). Below you can see the function...
titanic_ds['Family'] = (titanic_ds.SibSp + titanic_ds.Parch > 0)
print(titanic_ds.head())
Project 2 Titanic Data Final.ipynb
dasnah/TitanicDataSet
unlicense
To now show the probability visually as well as its output, I created a factorplot and printed out the probabilities of the two groups (Alone = False and Family = True). To get the probabilities I divided the sum of survivors by family type by the count of each family type.
fanda = sea.factorplot('Family', "Survived", data=titanic_ds, kind='bar', ci=None, size=5)
fanda.set(xticklabels=['Alone', 'Family'])
print((titanic_ds.groupby('Family')['Survived'].sum() / titanic_ds.groupby('Family')['Survived'].count()))
Project 2 Titanic Data Final.ipynb
dasnah/TitanicDataSet
unlicense
Finally, to answer my last question of whether being in a higher class affected the probability of surviving, I used the same Seaborn factorplot, but to create this graph I had to take the sum of survivors and divide it by the number of passengers in each class.
sea.factorplot('Pclass', "Survived", data=titanic_ds, kind='bar', ci=None, size=5)
PS = (titanic_ds.groupby('Pclass')['Survived'].sum())
PC = (titanic_ds.groupby('Pclass')['Survived'].count())
print("Class Survivability: ")
print(PS / PC)
Project 2 Titanic Data Final.ipynb
dasnah/TitanicDataSet
unlicense
Model Inputs First we need to create the inputs for our graph. We need two inputs, one for the discriminator and one for the generator. Here we'll call the discriminator input inputs_real and the generator input inputs_z. We'll assign them the appropriate sizes for each of the networks. Exercise: Finish the model_inpu...
def model_inputs(real_dim, z_dim):
    inputs_real = tf.placeholder(tf.float32, shape=[None, real_dim], name="inputs_real")
    inputs_z = tf.placeholder(tf.float32, shape=[None, z_dim], name="inputs_z")
    return inputs_real, inputs_z
Generative-Adversarial-Networks/Intro_to_GANs_Exercises.ipynb
nehal96/Deep-Learning-ND-Exercises
mit
Discriminator and Generator Losses Now we need to calculate the losses, which is a little tricky. For the discriminator, the total loss is the sum of the losses for real and fake images, d_loss = d_loss_real + d_loss_fake. The losses will be sigmoid cross-entropies, which we can get with tf.nn.sigmoid_cross_entropy_wit...
# Calculate losses
d_loss_real = tf.reduce_mean(
    tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_real,
                                            labels=(tf.ones_like(d_logits_real) * (1 - smooth))))
d_loss_fake = tf.reduce_mean(
    tf.nn.sigmoid_cross_entropy_with_lo...
Generative-Adversarial-Networks/Intro_to_GANs_Exercises.ipynb
nehal96/Deep-Learning-ND-Exercises
mit
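The label-smoothing trick above (targets of 1 - smooth instead of 1) can be checked with a small numpy sketch of the numerically stable sigmoid cross-entropy formula that TensorFlow documents, max(x, 0) - x*z + log(1 + exp(-|x|)); the sample logits are made up for illustration.

```python
import numpy as np

def sigmoid_cross_entropy(logits, labels):
    # numerically stable form: max(x, 0) - x*z + log(1 + exp(-|x|))
    x = np.asarray(logits, dtype=float)
    z = np.asarray(labels, dtype=float)
    return np.maximum(x, 0) - x * z + np.log1p(np.exp(-np.abs(x)))

logits = np.array([2.0, -1.0, 0.5])   # made-up discriminator outputs
smooth = 0.1
loss_hard = sigmoid_cross_entropy(logits, np.ones_like(logits)).mean()
loss_smooth = sigmoid_cross_entropy(logits, np.ones_like(logits) * (1 - smooth)).mean()
print(loss_hard, loss_smooth)
```

Smoothed labels keep the discriminator from pushing its logits to extremes, which in practice helps GAN training stay balanced.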
Discussion: The square wave is not maintained because the first-order backward differencing scheme in space creates false diffusion (numerical diffusion). If we reduce the spatial step (i.e., increase the spatial resolution nx), the error decreases. The wave shifts to the right by $c\Delta t$ each time step. Near the wall ...
def nonlin_convection_1d(nx=41, nt=50, dt=.01, u_init=[2, 2, 2, 2], init_offset=5, keep_all=False):
    '''
    nx = 41   # Number of horizontal location points (x axis on graph)
    nt = 100  # Number of time steps (iterations)
    dt = .01  # Resolution of the time step
    c = 1     # Constant, speed ...
lec_samples/lec2_navier_stokes.ipynb
gear/HPSC
gpl-3.0
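The numerical diffusion discussed above can be demonstrated on the linear convection equation with a minimal numpy sketch; the grid size and CFL number below are illustrative choices, not the notebook's parameters. With CFL < 1 the upwind scheme is stable but visibly smears the square wave's jumps.

```python
import numpy as np

# linear 1-D convection of a square wave with the first-order
# backward (upwind) scheme: u[i] -= c*dt/dx * (u[i] - u[i-1])
nx, nt, c = 81, 25, 1.0
dx = 2.0 / (nx - 1)
dt = 0.5 * dx / c                            # CFL number = 0.5
u = np.ones(nx)
u[int(0.5 / dx):int(1.0 / dx) + 1] = 2.0     # square wave on x in [0.5, 1.0]
jump_before = np.abs(np.diff(u)).max()       # 1.0 at each discontinuity

for _ in range(nt):
    u[1:] = u[1:] - c * dt / dx * (u[1:] - u[:-1])

jump_after = np.abs(np.diff(u)).max()
print(jump_before, jump_after)               # the jump has been smeared out
```

Exactly at CFL = 1 the upwind scheme advects the wave without smearing; any CFL below 1 introduces the false diffusion, which shrinks as dx is refined.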