One-hot encode Just like the previous code cell, you'll be implementing a function for preprocessing. This time, you'll implement the one_hot_encode function. The input, x, is a list of labels. Implement the function to return the list of labels as a one-hot encoded NumPy array. The possible values for labels are 0 to 9. The one-hot encoding function should return the same encoding for each value between calls to one_hot_encode. Make sure to save the map of encodings outside the function. Hint: Don't reinvent the wheel.
def one_hot_encode(x):
    """
    One hot encode a list of sample labels. Return a one-hot encoded vector for each label.
    : x: List of sample labels
    : return: Numpy array of one-hot encoded labels
    """
    a = np.zeros(shape=(len(x), 10))
    for i in range(len(x)):
        a[i][x[i]] = 1
    return a


"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_one_hot_encode(one_hot_encode)
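The hint about not reinventing the wheel points at NumPy's identity matrix: indexing the rows of `np.eye(10)` with the label array yields the same encoding in one line. A minimal NumPy-only sketch (an alternative, not the notebook's graded answer):

```python
import numpy as np

def one_hot_encode(x):
    # Row i of the 10x10 identity matrix is the one-hot vector for label i,
    # so fancy-indexing with the label array encodes the whole batch at once.
    return np.eye(10)[np.asarray(x)]

enc = one_hot_encode([0, 3, 9])
print(enc.shape)  # (3, 10)
```

Because `np.eye` is deterministic, every call maps a given label to the same vector, satisfying the consistency requirement without an explicit saved map.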
image-classification/dlnd_image_classification.ipynb
mikelseverson/Udacity-Deep_Learning-Nanodegree
mit
Build the network For the neural network, you'll build each layer into a function. Most of the code you've seen has been outside of functions. To test your code more thoroughly, we require that you put each layer in a function. This allows us to give you better feedback and test for simple mistakes using our unittests before you submit your project. Note: If you're finding it hard to dedicate enough time for this course each week, we've provided a small shortcut to this part of the project. In the next couple of problems, you'll have the option to use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages to build each layer, except the layers you build in the "Convolutional and Max Pooling Layer" section. TF Layers is similar to Keras's and TFLearn's abstractions to layers, so it's easy to pick up. However, if you would like to get the most out of this course, try to solve all the problems without using anything from the TF Layers packages. You can still use classes from other packages that happen to have the same name as ones you find in TF Layers! For example, instead of using the TF Layers version of the conv2d class, tf.layers.conv2d, you would want to use the TF Neural Network version of conv2d, tf.nn.conv2d. Let's begin! Input The neural network needs to read the image data, one-hot encoded labels, and dropout keep probability. Implement the following functions: * Implement neural_net_image_input * Return a TF Placeholder * Set the shape using image_shape with batch size set to None. * Name the TensorFlow placeholder "x" using the TensorFlow name parameter in the TF Placeholder. * Implement neural_net_label_input * Return a TF Placeholder * Set the shape using n_classes with batch size set to None. * Name the TensorFlow placeholder "y" using the TensorFlow name parameter in the TF Placeholder. * Implement neural_net_keep_prob_input * Return a TF Placeholder for dropout keep probability. 
* Name the TensorFlow placeholder "keep_prob" using the TensorFlow name parameter in the TF Placeholder. These names will be used at the end of the project to load your saved model. Note: None for shapes in TensorFlow allow for a dynamic size.
import tensorflow as tf

def neural_net_image_input(image_shape):
    """
    Return a Tensor for a batch of image input
    : image_shape: Shape of the images
    : return: Tensor for image input.
    """
    return tf.placeholder(tf.float32, shape=(None, image_shape[0], image_shape[1], image_shape[2]), name="x")

def neural_net_label_input(n_classes):
    """
    Return a Tensor for a batch of label input
    : n_classes: Number of classes
    : return: Tensor for label input.
    """
    return tf.placeholder(tf.float32, shape=(None, n_classes), name="y")

def neural_net_keep_prob_input():
    """
    Return a Tensor for keep probability
    : return: Tensor for keep probability.
    """
    return tf.placeholder(tf.float32, name="keep_prob")


"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tf.reset_default_graph()
tests.test_nn_image_inputs(neural_net_image_input)
tests.test_nn_label_inputs(neural_net_label_input)
tests.test_nn_keep_prob_inputs(neural_net_keep_prob_input)
Convolution and Max Pooling Layer Convolution layers have a lot of success with images. For this code cell, you should implement the function conv2d_maxpool to apply convolution then max pooling: * Create the weight and bias using conv_ksize, conv_num_outputs and the shape of x_tensor. * Apply a convolution to x_tensor using weight and conv_strides. * We recommend you use same padding, but you're welcome to use any padding. * Add bias * Add a nonlinear activation to the convolution. * Apply Max Pooling using pool_ksize and pool_strides. * We recommend you use same padding, but you're welcome to use any padding. Note: You can't use TensorFlow Layers or TensorFlow Layers (contrib) for this layer, but you can still use TensorFlow's Neural Network package. You may still use the shortcut option for all the other layers.
def conv2d_maxpool(x_tensor, conv_num_outputs, conv_ksize, conv_strides, pool_ksize, pool_strides):
    """
    Apply convolution then max pooling to x_tensor
    :param x_tensor: TensorFlow Tensor
    :param conv_num_outputs: Number of outputs for the convolutional layer
    :param conv_ksize: kernel size 2-D Tuple for the convolutional layer
    :param conv_strides: Stride 2-D Tuple for convolution
    :param pool_ksize: kernel size 2-D Tuple for pool
    :param pool_strides: Stride 2-D Tuple for pool
    : return: A tensor that represents convolution and max pooling of x_tensor
    """
    # Weight shape: (filter_height, filter_width, input_depth, output_depth)
    input_depth = x_tensor.get_shape().as_list()[3]
    filter_weights = tf.Variable(tf.truncated_normal((conv_ksize[0], conv_ksize[1], input_depth, conv_num_outputs)))
    filter_bias = tf.Variable(tf.zeros(conv_num_outputs))

    conv_strides = [1, conv_strides[0], conv_strides[1], 1]
    conv_layer = tf.nn.conv2d(x_tensor, filter_weights, conv_strides, "SAME")
    conv_layer = tf.nn.bias_add(conv_layer, filter_bias)
    conv_layer = tf.nn.relu(conv_layer)

    # tf.nn.max_pool takes (value, ksize, strides, padding)
    pool_ksize = [1, pool_ksize[0], pool_ksize[1], 1]
    pool_strides = [1, pool_strides[0], pool_strides[1], 1]
    conv_layer = tf.nn.max_pool(conv_layer, pool_ksize, pool_strides, "SAME")
    return conv_layer


"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_con_pool(conv2d_maxpool)
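With "SAME" padding, the spatial output size depends only on the stride: out = ceil(in / stride), regardless of kernel size. A quick pure-Python sketch of that shape arithmetic (the helper name here is ours, not part of the project):

```python
import math

def same_padding_out_shape(in_h, in_w, strides):
    # "SAME" padding: each spatial dim becomes ceil(dim / stride).
    return math.ceil(in_h / strides[0]), math.ceil(in_w / strides[1])

# A CIFAR-10 image through conv (stride 1) then 2x2 max pool (stride 2):
h, w = same_padding_out_shape(32, 32, (1, 1))  # conv keeps 32x32
h, w = same_padding_out_shape(h, w, (2, 2))    # pool halves to 16x16
print(h, w)  # 16 16
```

This is why stacking two conv/pool blocks with pool stride 2 on a 32x32 input leaves an 8x8 feature map before flattening.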
Output Layer Implement the output function to apply a fully connected layer to x_tensor with the shape (Batch Size, num_outputs). Shortcut option: you can use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages for this layer. For more of a challenge, only use other TensorFlow packages. Note: Activation, softmax, or cross entropy should not be applied to this.
def output(x_tensor, num_outputs):
    """
    Apply an output layer to x_tensor using weight and bias
    : x_tensor: A 2-D tensor where the first dimension is batch size.
    : num_outputs: The number of outputs that the new tensor should have.
    : return: A 2-D tensor where the second dimension is num_outputs.
    """
    return tf.contrib.layers.linear(x_tensor, num_outputs)


"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_output(output)
Create Convolutional Model Implement the function conv_net to create a convolutional neural network model. The function takes in a batch of images, x, and outputs logits. Use the layers you created above to create this model: * Apply 1, 2, or 3 Convolution and Max Pool layers * Apply a Flatten Layer * Apply 1, 2, or 3 Fully Connected Layers * Apply an Output Layer * Return the output * Apply TensorFlow's Dropout to one or more layers in the model using keep_prob.
def conv_net(x_tensor, keep_prob):
    """
    Create a convolutional neural network model
    : x_tensor: Placeholder tensor that holds image data.
    : keep_prob: Placeholder tensor that holds dropout keep probability.
    : return: Tensor that represents logits
    """
    # Apply Convolution and Max Pool layers
    conv_ksize = (3, 3)
    conv_strides = (1, 1)
    pool_ksize = (2, 2)
    pool_strides = (2, 2)
    x_tensor = conv2d_maxpool(x_tensor, 12, conv_ksize, conv_strides, pool_ksize, pool_strides)
    x_tensor = conv2d_maxpool(x_tensor, 4, conv_ksize, conv_strides, pool_ksize, pool_strides)
    x_tensor = tf.nn.dropout(x_tensor, keep_prob)

    # Apply a Flatten Layer
    x_tensor = flatten(x_tensor)

    # Apply Fully Connected Layers
    x_tensor = fully_conn(x_tensor, 64)
    x_tensor = tf.nn.dropout(x_tensor, keep_prob)
    x_tensor = fully_conn(x_tensor, 15)

    # Apply an Output Layer (one output per class)
    return output(x_tensor, 10)


"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
##############################
## Build the Neural Network ##
##############################

# Remove previous weights, bias, inputs, etc.
tf.reset_default_graph()

# Inputs
x = neural_net_image_input((32, 32, 3))
y = neural_net_label_input(10)
keep_prob = neural_net_keep_prob_input()

# Model
logits = conv_net(x, keep_prob)

# Name logits Tensor, so that it can be loaded from disk after training
logits = tf.identity(logits, name='logits')

# Loss and Optimizer
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=y))
optimizer = tf.train.AdamOptimizer().minimize(cost)

# Accuracy
correct_pred = tf.equal(tf.argmax(logits, 1), tf.argmax(y, 1))
accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32), name='accuracy')

tests.test_conv_net(conv_net)
Show Stats Implement the function print_stats to print loss and validation accuracy. Use the global variables valid_features and valid_labels to calculate validation accuracy. Use a keep probability of 1.0 to calculate the loss and validation accuracy.
def print_stats(session, feature_batch, label_batch, cost, accuracy):
    """
    Print information about loss and validation accuracy
    : session: Current TensorFlow session
    : feature_batch: Batch of Numpy image data
    : label_batch: Batch of Numpy label data
    : cost: TensorFlow cost function
    : accuracy: TensorFlow accuracy function
    """
    # Use the session that was passed in, and a keep probability of 1.0
    loss = session.run(cost, feed_dict={
        x: feature_batch, y: label_batch, keep_prob: 1.})
    valid_acc = session.run(accuracy, feed_dict={
        x: valid_features, y: valid_labels, keep_prob: 1.})
    print('Loss: {} Validation Accuracy: {}'.format(loss, valid_acc))
Hyperparameters Tune the following parameters: * Set epochs to the number of iterations until the network stops learning or starts overfitting * Set batch_size to the highest number that your machine has memory for. Most people set them to common sizes of memory: * 64 * 128 * 256 * ... * Set keep_probability to the probability of keeping a node using dropout
# TODO: Tune Parameters
epochs = 30
batch_size = 128
keep_probability = .8
Writing to hdf5 using the Microdata objects
# Code source: Chris Smith -- cq6@ornl.gov
# License: MIT
import numpy as np
import pycroscopy as px
docs/auto_examples/microdata_example.ipynb
anugrah-saxena/pycroscopy
mit
Create some MicroDatasets and MicroDataGroups that will be written to the file. With h5py, groups and datasets must be created from the top down, but the Microdata objects allow us to build them in any order and link them later.
# First create some data
data1 = np.random.rand(5, 7)
Now use the array to build the dataset. This dataset will live directly under the root of the file. The MicroDataset class also implements the compression and chunking parameters from h5py.Dataset.
ds_main = px.MicroDataset('Main_Data', data=data1, parent='/')
We can also create an empty dataset and write the values in later. With this method, it is necessary to specify the dtype and maxshape kwarg parameters.
ds_empty = px.MicroDataset('Empty_Data', data=[], dtype=np.float32, maxshape=[7, 5, 3])
We can also create groups and add other MicroData objects as children. If the group's parent is not given, it will be set to root.
data_group = px.MicroDataGroup('Data_Group', parent='/')
root_group = px.MicroDataGroup('/')

# After creating the group, we then add an existing object as its child.
data_group.addChildren([ds_empty])
root_group.addChildren([ds_main, data_group])
The showTree method allows us to view the data structure before the hdf5 file is created.
root_group.showTree()
Now that we have created the objects, we can write them to an hdf5 file
# First we specify the path to the file
h5_path = 'microdata_test.h5'

# Then we use the ioHDF5 class to build the file from our objects.
hdf = px.ioHDF5(h5_path)
The writeData method builds the hdf5 file using the structure defined by the MicroData objects. It returns a list of references to all h5py objects in the new file.
h5_refs = hdf.writeData(root_group, print_log=True)

# We can use these references to get the h5py dataset and group objects
h5_main = px.io.hdf_utils.getH5DsetRefs(['Main_Data'], h5_refs)[0]
h5_empty = px.io.hdf_utils.getH5DsetRefs(['Empty_Data'], h5_refs)[0]
Compare the data in our dataset to the original
print(np.allclose(h5_main[()], data1))
As mentioned above, we can now write to the Empty_Data object
data2 = np.random.rand(*h5_empty.shape)
h5_empty[:] = data2[:]
Now that we are using h5py objects, we must use flush to write the data to file after it has been altered. We need the file object to do this. It can be accessed as an attribute of the hdf object.
h5_file = hdf.file
h5_file.flush()
Now that we are done, we should close the file so that it can be accessed elsewhere.
h5_file.close()
Making training mini-batches Here is where we'll make our mini-batches for training. Remember that we want our batches to be multiple sequences of some desired number of sequence steps. Considering a simple example, our batches would look like this: <img src="assets/sequence_batching@1x.png" width=500px> <br> We have our text encoded as integers as one long array in encoded. Let's create a function that will give us an iterator for our batches. I like using generator functions to do this. Then we can pass encoded into this function and get our batch generator. The first thing we need to do is discard some of the text so we only have completely full batches. Each batch contains $N \times M$ characters, where $N$ is the batch size (the number of sequences) and $M$ is the number of steps. Then, to get the number of batches we can make from some array arr, you divide the length of arr by the batch size. Once you know the number of batches and the batch size, you can get the total number of characters to keep. After that, we need to split arr into $N$ sequences. You can do this using arr.reshape(size) where size is a tuple containing the dimensions sizes of the reshaped array. We know we want $N$ sequences (n_seqs below), let's make that the size of the first dimension. For the second dimension, you can use -1 as a placeholder in the size, it'll fill up the array with the appropriate data for you. After this, you should have an array that is $N \times (M * K)$ where $K$ is the number of batches. Now that we have this array, we can iterate through it to get our batches. The idea is each batch is a $N \times M$ window on the array. For each subsequent batch, the window moves over by n_steps. We also want to create both the input and target arrays. Remember that the targets are the inputs shifted over one character. 
You'll usually see the first input character used as the last target character, so something like this: y[:, :-1], y[:, -1] = x[:, 1:], x[:, 0], where x is the input batch and y is the target batch. The way I like to do this window is to use range to take steps of size n_steps from $0$ to arr.shape[1], the total number of steps in each sequence. That way, the integers you get from range always point to the start of a batch, and each window is n_steps wide. Exercise: Write the code for creating batches in the function below. The exercises in this notebook will not be easy. I've provided a notebook with solutions alongside this notebook. If you get stuck, check out the solutions. The most important thing is that you don't copy and paste the code in here; type out the solution code yourself.
def get_batches(arr, n_seqs, n_steps):
    '''Create a generator that returns batches of size
       n_seqs x n_steps from arr.

       Arguments
       ---------
       arr: Array you want to make batches from
       n_seqs: Number of sequences per batch
       n_steps: Number of sequence steps per batch
    '''
    # Get the number of characters per batch and number of batches we can make
    batch_size = n_seqs * n_steps
    n_batches = len(arr) // batch_size

    # Keep only enough characters to make full batches
    arr = arr[:n_batches * batch_size]

    # Reshape into n_seqs rows
    arr = arr.reshape((n_seqs, -1))

    for n in range(0, arr.shape[1], n_steps):
        # The features
        x = arr[:, n:n + n_steps]
        # The targets, shifted by one
        y = np.zeros((x.shape[0], x.shape[1]))
        y[:, :-1], y[:, -1] = x[:, 1:], x[:, 0]
        yield x, y
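A quick sanity check on a small integer array makes the windowing behavior concrete. This sketch restates the function so it runs on its own; the toy input is ours, not from the notebook:

```python
import numpy as np

def get_batches(arr, n_seqs, n_steps):
    # Trim to full batches, reshape into n_seqs rows, then slide an
    # n_steps-wide window across the columns.
    batch_size = n_seqs * n_steps
    n_batches = len(arr) // batch_size
    arr = arr[:n_batches * batch_size].reshape((n_seqs, -1))
    for n in range(0, arr.shape[1], n_steps):
        x = arr[:, n:n + n_steps]
        y = np.zeros_like(x)
        y[:, :-1], y[:, -1] = x[:, 1:], x[:, 0]
        yield x, y

x, y = next(get_batches(np.arange(20), n_seqs=2, n_steps=5))
print(x)  # [[ 0  1  2  3  4]  [10 11 12 13 14]]
print(y)  # [[ 1  2  3  4  0]  [11 12 13 14 10]]
```

Each target row is the input row shifted left by one, with the row's first element wrapped into the last target slot, exactly as the markdown above describes.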
intro-to-rnns/Anna_KaRNNa_Exercises.ipynb
hparik11/Deep-Learning-Nanodegree-Foundation-Repository
mit
Training loss Next up is the training loss. We get the logits and targets and calculate the softmax cross-entropy loss. First we need to one-hot encode the targets, we're getting them as encoded characters. Then, reshape the one-hot targets so it's a 2D tensor with size $(MN) \times C$ where $C$ is the number of classes/characters we have. Remember that we reshaped the LSTM outputs and ran them through a fully connected layer with $C$ units. So our logits will also have size $(MN) \times C$. Then we run the logits and targets through tf.nn.softmax_cross_entropy_with_logits and find the mean to get the loss. Exercise: Implement the loss calculation in the function below.
def build_loss(logits, targets, lstm_size, num_classes):
    ''' Calculate the loss from the logits and the targets.

        Arguments
        ---------
        logits: Logits from final fully connected layer
        targets: Targets for supervised learning
        lstm_size: Number of LSTM hidden units
        num_classes: Number of classes in targets
    '''
    # One-hot encode targets and reshape to match logits, one row per sequence per step
    y_one_hot = tf.one_hot(targets, num_classes)
    y_reshaped = tf.reshape(y_one_hot, logits.get_shape())

    # Softmax cross entropy loss, averaged over all sequence steps
    loss = tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=y_reshaped)
    loss = tf.reduce_mean(loss)
    return loss
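The loss itself is ordinary softmax cross-entropy over $(MN)$ rows of $C$ logits. A small NumPy sketch of the same computation (illustrative only, not the TF graph code; the logits and targets here are made up):

```python
import numpy as np

def softmax_cross_entropy(logits, y_one_hot):
    # Row-wise softmax, shifted by the row max for numerical stability,
    # then the mean negative log-probability of the true class.
    shifted = logits - logits.max(axis=1, keepdims=True)
    probs = np.exp(shifted) / np.exp(shifted).sum(axis=1, keepdims=True)
    return -(y_one_hot * np.log(probs)).sum(axis=1).mean()

# (M*N) = 2 example rows over C = 3 classes:
logits = np.array([[2.0, 0.5, 0.1], [0.1, 0.2, 3.0]])
targets = np.eye(3)[[0, 2]]  # one-hot rows for classes 0 and 2
print(softmax_cross_entropy(logits, targets))
```

Since both example rows put most of their mass on the correct class, the loss comes out small (roughly 0.21 here).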
Build the network Now we can put all the pieces together and build a class for the network. To actually run data through the LSTM cells, we will use tf.nn.dynamic_rnn. This function will pass the hidden and cell states across LSTM cells appropriately for us. It returns the outputs for each LSTM cell at each step for each sequence in the mini-batch. It also gives us the final LSTM state. We want to save this state as final_state so we can pass it to the first LSTM cell in the next mini-batch run. For tf.nn.dynamic_rnn, we pass in the cell and initial state we get from build_lstm, as well as our input sequences. Also, we need to one-hot encode the inputs before going into the RNN. Exercise: Use the functions you've implemented previously and tf.nn.dynamic_rnn to build the network.
class CharRNN:
    def __init__(self, num_classes, batch_size=64, num_steps=50,
                 lstm_size=128, num_layers=2, learning_rate=0.001,
                 grad_clip=5, sampling=False):

        # When we're using this network for sampling later, we'll be passing in
        # one character at a time, so providing an option for that
        if sampling:
            batch_size, num_steps = 1, 1

        tf.reset_default_graph()

        # Build the input placeholder tensors
        self.inputs, self.targets, self.keep_prob = build_inputs(batch_size, num_steps)

        # Build the LSTM cell
        cell, self.initial_state = build_lstm(lstm_size, num_layers, batch_size, self.keep_prob)

        ### Run the data through the RNN layers
        # First, one-hot encode the input tokens
        x_one_hot = tf.one_hot(self.inputs, num_classes)

        # Run each sequence step through the RNN with tf.nn.dynamic_rnn
        outputs, state = tf.nn.dynamic_rnn(cell, x_one_hot, initial_state=self.initial_state)
        self.final_state = state

        # Get softmax predictions and logits
        self.prediction, self.logits = build_output(outputs, lstm_size, num_classes)

        # Loss and optimizer (with gradient clipping)
        self.loss = build_loss(self.logits, self.targets, lstm_size, num_classes)
        self.optimizer = build_optimizer(self.loss, learning_rate, grad_clip)
Steps 4 & 5: Sample data from a setting similar to the data and record classification accuracy
np.random.seed(12345678)  # for reproducibility, set random seed

r = 20    # define number of ROIs
N = 100   # number of samples at each iteration
p0 = 0.10
p1 = 0.15

# define number of subjects per class
S = np.array((200, 300))

names = ["Linear Regression", "Support Vector Regression",
         "Nearest Neighbors", "Random Forest"]
regressors = [
    LinearRegression(),
    SVR(kernel="linear", C=0.5, epsilon=0.01),
    KNeighborsRegressor(6, weights="distance"),
    RandomForestRegressor(max_depth=5, n_estimators=10, max_features=1)
]

errors = np.zeros((len(S), len(regressors), 2), dtype=np.float64)

# sample data accordingly for each number of subjects
for idx1, s in enumerate(S):
    X, y, coef = datasets.make_regression(n_samples=s, n_features=1, n_informative=1,
                                          noise=10, coef=True, random_state=0)
    # reshape array to make it work with the regressors
    X = X.reshape(-1, 1)

    for idx2, regr in enumerate(regressors):
        X_train, X_test, y_train, y_test = cross_validation.train_test_split(
            X, y, test_size=0.4, random_state=0)

        # Train the model using the training sets
        reg = regr.fit(X_train, y_train)

        # 10-fold cross-validation
        loo = cross_validation.KFold(n=len(X), n_folds=10, shuffle=False, random_state=None)

        # compute scores for this regressor
        scores = cross_validation.cross_val_score(reg, X, y,
                                                  scoring='mean_squared_error', cv=loo)
        errors[idx1, idx2] = [scores.mean(), scores.std()]
        print("MSE of %s: %f (+/- %0.5f)" % (names[idx2], scores.mean(), scores.std() * 2))

# print accuracy
print(errors)
print(errors.shape)
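Note that the `cross_validation` module used above was deprecated and later removed from scikit-learn. The same 10-fold MSE scoring in the modern `sklearn.model_selection` API looks roughly like this (a sketch on synthetic data, not the study's actual pipeline):

```python
from sklearn.datasets import make_regression
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import KFold, cross_val_score

# Synthetic 1-feature regression problem, mirroring the setup above.
X, y = make_regression(n_samples=200, n_features=1, n_informative=1,
                       noise=10, random_state=0)

# Modern replacements: KFold(n_splits=...) and the "neg_mean_squared_error"
# scorer (sklearn scorers follow a higher-is-better convention).
kf = KFold(n_splits=10, shuffle=False)
scores = cross_val_score(LinearRegression(), X, y,
                         scoring="neg_mean_squared_error", cv=kf)
print("MSE: %f (+/- %f)" % (-scores.mean(), scores.std() * 2))
```

The sign flip (`-scores.mean()`) recovers the positive MSE that the legacy `'mean_squared_error'` scorer reported as a negative number.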
Code/classificationANDregression_simulation_AL.ipynb
Upward-Spiral-Science/the-vat
apache-2.0
STEP 6: PLOTTING ACCURACY VS. N FOR EACH REGRESSOR
plt.errorbar(S, errors[:, 0, 0], yerr=errors[:, 0, 1], label=names[0])
plt.errorbar(S, errors[:, 1, 0], yerr=errors[:, 1, 1], color='green', label=names[1])
plt.errorbar(S, errors[:, 2, 0], yerr=errors[:, 2, 1], color='red', label=names[2])
plt.errorbar(S, errors[:, 3, 0], yerr=errors[:, 3, 1], color='black', label=names[3])
plt.xscale('log')
plt.xlabel('number of samples')
plt.ylabel('MSE')
plt.title('MSE of Regressions under simulated data')
plt.axhline(1, color='red', linestyle='--')
plt.legend(loc='center left', bbox_to_anchor=(1, 0.5))
plt.show()

# null regression: y = a + b*x + epsilon
x = np.random.normal(0, 1, s)
epsilon = np.random.normal(0, 0.05, s)
a = np.random.rand(s,)
b = np.random.rand(s,)
y = a + b * x + epsilon
y = np.reshape(y, (s, 1))
X = np.reshape(x, (s, 1))

X_train, X_test, y_train, y_test = cross_validation.train_test_split(
    X, y, test_size=0.4, random_state=0)

# Train the model using the training sets
reg = KNeighborsRegressor(5).fit(X_train, y_train)
print(reg)
print(X.shape)
print(epsilon.shape)
print(y.shape)
STEP 7: APPLYING REGRESSIONS TO COLUMNS OF FEATURES
#### RUN AT BEGINNING AND TRY NOT TO RUN AGAIN - TAKES WAY TOO LONG ####
csvfile = "data_normalized/shortenedFeatures_normalized.txt"

# load in the feature data
list_of_features = []
with open(csvfile) as file:
    for line in file:
        inner_list = [float(elt.strip()) for elt in line.split(',')]
        # create list of features
        list_of_features.append(inner_list)

# convert to a numpy matrix
list_of_features = np.array(list_of_features)
print(list_of_features.shape)

sub_features = list_of_features[:, 1:]
print(sub_features.shape)

# randomly select rows in list_of_features
num_rows = len(list_of_features)
X = list_of_features[np.random.choice(range(list_of_features.shape[0]),
                                      size=10000, replace=False), :]
print(X.shape)

## Run regressions on one column of the data at a time
errors = np.zeros((len(regressors), 2))  # one (mean, std) row per regressor
num_cols = list_of_features.shape[1]
errors_cols = {}

for i in range(0, num_cols):
    y = X[:, i]
    indices = [p for p in range(0, 96) if p != i]
    sub_features = X[:, indices]

    for idx, regr in enumerate(regressors):
        X_train, X_test, y_train, y_test = cross_validation.train_test_split(
            sub_features, y, test_size=0.4, random_state=0)

        # create regression and fit
        reg = regr.fit(X_train, y_train)

        # compute 10-fold cross-validation scores with MSE
        loo = cross_validation.KFold(n=len(sub_features), n_folds=10,
                                     shuffle=False, random_state=None)
        scores = cross_validation.cross_val_score(reg, sub_features, y,
                                                  scoring='mean_squared_error', cv=loo)

        # get error scores and print
        errors[idx, ] = [scores.mean(), scores.std()]
        print("MSE's of %s: %f (+/- %0.2f)" % (names[idx], scores.mean(), scores.std()))

    # copy so later iterations don't overwrite the stored results
    errors_cols[str(i)] = errors.copy()
STEP 8: DISCUSSION:
X, y, coef = datasets.make_regression(n_samples=300, n_features=1, n_informative=1,
                                      noise=1, coef=True, random_state=0)
X_train, X_test, y_train, y_test = cross_validation.train_test_split(
    X, y, test_size=0.4, random_state=0)

# create regression and fit
reg = regr.fit(X_train, y_train)

# compute 10-fold cross-validation scores with MSE
loo = cross_validation.KFold(n=len(X), n_folds=10, shuffle=False, random_state=None)
scores = cross_validation.cross_val_score(reg, X, y,
                                          scoring='mean_squared_error', cv=loo)

# get error scores and print
errors[0, ] = [scores.mean(), scores.std()]
print("MSE's of %s: %0.2f (+/- %0.2f)" % ('linear reg', scores.mean(), scores.std()))

plt.plot(y)

errors_mse = {}
for key in errors_cols.keys():
    errors = errors_cols[key]
    errors_mse[key] = np.mean(errors, axis=0)

# write the MSE results to a new txt file
csvfile = "data_normalized/mse_regressions_features.txt"
with open(csvfile, "w") as output:
    # write the data to the new file
    writer = csv.writer(output, lineterminator='\n')
    for key in errors_mse.keys():
        try:
            writer.writerow(errors_mse[key])
        except:
            print(key)

errors_mse['24'] = None
Logistic Regression Hyperparameter tuning: For the Logistic Regression classifier, we can seek to optimize the following classifier parameters: penalty (l1 or l2), C (inverse of regularization strength), and solver ('newton-cg', 'lbfgs', 'liblinear', or 'sag'). Model calibration: see above. LR with L1-Penalty Hyperparameter Tuning
cValsL1 = [7.5, 10.0, 12.5, 20.0]
methods = ['sigmoid', 'isotonic']
cv = 2
tol = 0.01

for c in cValsL1:
    for m in methods:
        ccvL1 = CalibratedClassifierCV(LogisticRegression(penalty='l1', C=c, tol=tol),
                                       method=m, cv=cv)
        ccvL1.fit(mini_train_data, mini_train_labels)
        print(ccvL1.get_params)
        ccvL1_prediction_probabilities = ccvL1.predict_proba(mini_dev_data)
        ccvL1_predictions = ccvL1.predict(mini_dev_data)
        print("L1 Multi-class Log Loss:",
              log_loss(y_true=mini_dev_labels,
                       y_pred=ccvL1_prediction_probabilities,
                       labels=crime_labels_mini_dev), "\n\n")
    print()

# Narrow in on larger C values with sigmoid calibration only.
cValsL1 = [15.0, 20.0, 25.0, 50.0]
method = 'sigmoid'

for c in cValsL1:
    ccvL1 = CalibratedClassifierCV(LogisticRegression(penalty='l1', C=c, tol=tol),
                                   method=method, cv=cv)
    ccvL1.fit(mini_train_data, mini_train_labels)
    print(ccvL1.get_params)
    ccvL1_prediction_probabilities = ccvL1.predict_proba(mini_dev_data)
    ccvL1_predictions = ccvL1.predict(mini_dev_data)
    print("L1 Multi-class Log Loss:",
          log_loss(y_true=mini_dev_labels,
                   y_pred=ccvL1_prediction_probabilities,
                   labels=crime_labels_mini_dev), "\n\n")
    print()
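The manual sweep above can also be expressed with scikit-learn's `GridSearchCV`, which cross-validates every C value and keeps the best by log loss. A hedged sketch on synthetic data (the dataset and grid here are illustrative, not the notebook's crime data):

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV

# Small synthetic classification problem standing in for the real features.
X, y = make_classification(n_samples=200, n_features=10, random_state=0)

# Sweep C; scoring="neg_log_loss" matches the multi-class log loss metric above.
param_grid = {"C": [0.1, 1.0, 10.0]}
search = GridSearchCV(LogisticRegression(penalty="l2", solver="lbfgs", max_iter=1000),
                      param_grid, scoring="neg_log_loss", cv=3)
search.fit(X, y)
print(search.best_params_)
```

`CalibratedClassifierCV` can be nested inside the grid search as well, at the cost of a longer fit; the hand-rolled loops above trade that convenience for explicit per-configuration printouts.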
iterations/KK_scripts/W207_Final_Project_logisticRegressionOptimization_updated_08_20_1911.ipynb
samgoodgame/sf_crime
mit
LR with L2-Penalty Hyperparameter Tuning
cValsL2 = [75.0, 100.0, 150.0, 250.0]
methods = ['sigmoid', 'isotonic']
cv = 2
tol = 0.01

for c in cValsL2:
    for m in methods:
        ccvL2 = CalibratedClassifierCV(LogisticRegression(penalty='l2', solver='newton-cg', C=c, tol=tol),
                                       method=m, cv=cv)
        ccvL2.fit(mini_train_data, mini_train_labels)
        print(ccvL2.get_params)
        ccvL2_prediction_probabilities = ccvL2.predict_proba(mini_dev_data)
        ccvL2_predictions = ccvL2.predict(mini_dev_data)
        print("L2 Multi-class Log Loss:",
              log_loss(y_true=mini_dev_labels,
                       y_pred=ccvL2_prediction_probabilities,
                       labels=crime_labels_mini_dev), "\n\n")
    print()

# Narrow in on larger C values with isotonic calibration only.
cValsL2 = [200.0, 250.0, 300.0, 500.0]
method = 'isotonic'

for c in cValsL2:
    ccvL2 = CalibratedClassifierCV(LogisticRegression(penalty='l2', solver='newton-cg', C=c, tol=tol),
                                   method=method, cv=cv)
    ccvL2.fit(mini_train_data, mini_train_labels)
    print(ccvL2.get_params)
    ccvL2_prediction_probabilities = ccvL2.predict_proba(mini_dev_data)
    ccvL2_predictions = ccvL2.predict(mini_dev_data)
    print("L2 Multi-class Log Loss:",
          log_loss(y_true=mini_dev_labels,
                   y_pred=ccvL2_prediction_probabilities,
                   labels=crime_labels_mini_dev), "\n\n")
    print()

cValsL2 = [400.0, 500.0, 750.0, 1000.0]

for c in cValsL2:
    ccvL2 = CalibratedClassifierCV(LogisticRegression(penalty='l2', solver='newton-cg', C=c, tol=tol),
                                   method=method, cv=cv)
    ccvL2.fit(mini_train_data, mini_train_labels)
    print(ccvL2.get_params)
    ccvL2_prediction_probabilities = ccvL2.predict_proba(mini_dev_data)
    ccvL2_predictions = ccvL2.predict(mini_dev_data)
    print("L2 Multi-class Log Loss:",
          log_loss(y_true=mini_dev_labels,
                   y_pred=ccvL2_prediction_probabilities,
                   labels=crime_labels_mini_dev), "\n\n")
    print()
iterations/KK_scripts/W207_Final_Project_logisticRegressionOptimization_updated_08_20_1911.ipynb
samgoodgame/sf_crime
mit
Since the coefficients in the triangle on the rhs are part of the Pascal triangle, namely A104712, the following is a generalization: $$ f_{2n+1} - 1 = \sum_{k=1}^{n}{{{n+1}\choose{k+1}}f_{k}} $$
gen_odd_fibs = Eq(f[2*n+1]-1, Sum(binomial(n+1, k+1)*f[k], (k,1,n))) Eq(gen_odd_fibs, Sum(binomial(n+1, n-k)*f[k], (k,1,n))) expand_sum_in_eq(gen_odd_fibs.subs(n, 8)) eq_sym.subs(fibs) eq_17 = Eq(f[17],f[-1] + rhs_sym[-1]) eq_18_shift = Eq(f[n], f[n-18]+8*f[n-17]+36*f[n-16]+84*f[n-15]+126*f[n-14]+126*f[n-13]+84*f[n-12]+36*f[n-11]+9*f[n-10]+f[n-9]) eq_17, eq_18_shift [eq_18_shift.subs(n,i).lhs.subs(fibs) - eq_18_shift.subs(n,i).rhs.subs(fibs) for i in range(18,32)]
notebooks/binomial-transform-applied-to-fibonacci-numbers.ipynb
massimo-nocentini/PhD
apache-2.0
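The identity above can be checked numerically without sympy. A quick sketch in plain Python (assuming `f(1) = f(2) = 1` and using `math.comb`, available from Python 3.8):

```python
from math import comb

def fib(n):
    # iterative Fibonacci with fib(1) = fib(2) = 1
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

# check f(2n+1) - 1 == sum_{k=1}^{n} C(n+1, k+1) * f(k) for small n
for n in range(1, 15):
    rhs = sum(comb(n + 1, k + 1) * fib(k) for k in range(1, n + 1))
    assert fib(2 * n + 1) - 1 == rhs
print("identity holds for n = 1..14")
```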
Again, Fibonacci numbers, A000045.
from itertools import accumulate to_accumulate = rhs_sym + ones(9,1)*f[-1] even_rhs = Matrix(list(accumulate(to_accumulate, lambda folded, current_row: Add(folded, current_row) ))) even_lhs = Matrix([f[i] for i in range(2,19,2)]) even_fibs_matrix_eq = Eq(even_lhs, even_rhs) even_fibs_matrix_eq even_transformed_matrix = Matrix([ [1,0,0,0,0,0,0,0,0], [2,1,0,0,0,0,0,0,0], [4,4,1,0,0,0,0,0,0], [7,10,5,1,0,0,0,0,0], [11,20,15,6,1,0,0,0,0], [16,35,35,21,7,1,0,0,0], [22,56,70,56,28,8,1,0,0], [29,84,126,126,84,36,9,1,0], [37,120,210,252,210,120,45,10,1]]) even_transformed_matrix even_transforming_matrix = (pascal_matrix**(-1))*even_transformed_matrix even_transforming_matrix (catalan_matrix**(-1) )*even_transformed_matrix catalan_inverse_matrix * even_transformed_matrix even_transforming_matrix * fib_matrix_sym even_vector_eq_sym = Eq(even_lhs - Matrix(list(range(1,10))), pascal_matrix * even_transforming_matrix * fib_matrix_sym) even_vector_eq_sym even_vector_eq_sym.subs(fib0_term, fibs[fib0_term])
notebooks/binomial-transform-applied-to-fibonacci-numbers.ipynb
massimo-nocentini/PhD
apache-2.0
Since the coefficients in the triangle on the rhs are part of the Pascal triangle, namely A104713, the following is a generalization: $$ f_{2n} - n = \sum_{k=1}^{n-1}{{{n+1}\choose{k+2}}f_{k}} $$
gen_even_fibs = Eq(f[2*n]-n, Sum(binomial(n+1, k+2)*f[k], (k,1,n-1))) Eq(gen_even_fibs, Sum(binomial(n+1, n-k-1)*f[k], (k,1,n-1))) expand_sum_in_eq(gen_even_fibs.subs(n, 9)) even_fibs_matrix_eq_minus1_appear = even_fibs_matrix_eq.subs(fibs) Eq(even_fibs_matrix_eq.lhs, even_fibs_matrix_eq_minus1_appear, evaluate=False)
notebooks/binomial-transform-applied-to-fibonacci-numbers.ipynb
massimo-nocentini/PhD
apache-2.0
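As with the odd-index identity, this one can be checked directly in plain Python (same `fib` convention, `f(1) = f(2) = 1`):

```python
from math import comb

def fib(n):
    # iterative Fibonacci with fib(1) = fib(2) = 1
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

# check f(2n) - n == sum_{k=1}^{n-1} C(n+1, k+2) * f(k) for small n
for n in range(2, 15):
    rhs = sum(comb(n + 1, k + 2) * fib(k) for k in range(1, n))
    assert fib(2 * n) - n == rhs
print("identity holds for n = 2..14")
```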
The summands on the rhs form the known sequence A054452.
list(accumulate([fibonacci(2*i+1)-1 for i in range(21)])) def n_gf(t): return t/(1-t)**2 n_gf(t).series(n=20) def odd_fib_gf(t): return t**2/((1-t)**2*(1-3*t+t**2)) odd_fib_gf(t).series(n=20) composite_odd_fibs_gf = n_gf(t)+odd_fib_gf(t) composite_odd_fibs_gf.factor(), composite_odd_fibs_gf.series(n=20) def odd_integers_gf(t): return ((n_gf(t)+n_gf(-t))/2).simplify() odd_integers_gf(t).series(n=20) # here is the error: we should use the generating function of F(2n+1) instead of F(n) as done here! def fib_gf(t): return t/(1-t-t**2) fib_gf(odd_integers_gf(t)).series(n=20) def even_fibs_gf(t): return n_gf(t) + fib_gf(t)/(1-t) even_fibs_gf(t).series(n=10) even_fibs_matrix_eq_minus1_appear.subs(f[-1],1) eq_17 = Eq(f[n], 8*f[n-17]+29*f[n-16]+84*f[n-15]+126*f[n-14]+126*f[n-13]+84*f[n-12]+36*f[n-11]+9*f[n-10]+f[n-9]) eq_17 [eq_17.subs(n,i).lhs.subs(fibs) - eq_17.subs(n,i).rhs.subs(fibs) for i in range(17,31)]
notebooks/binomial-transform-applied-to-fibonacci-numbers.ipynb
massimo-nocentini/PhD
apache-2.0
Generating a Spiral Training Dataset We'll be using this 2D dataset because it's easy to visually see the classifier performance, and because it's impossible to linearly separate the classes nicely.
N = 100 # points per class D = 2 # dimensionality at 2 so we can eyeball it K = 3 # number of classes X = np.zeros((N*K, D)) # generate an empty matrix to hold X features y = np.zeros(N*K, dtype='uint8') # generate an empty vector to hold y labels # for 3 classes, evenly generates spiral arms for j in xrange(K): ix = range(N*j, N*(j+1)) r = np.linspace(0.0,1,N) #radius t = np.linspace(j*4, (j+1)*4, N) + np.random.randn(N)*0.2 # theta X[ix] = np.c_[r*np.sin(t), r*np.cos(t)] y[ix] = j
codelab_1_NN_Numpy.ipynb
thinkingmachines/deeplearningworkshop
mit
Quick question: what are the dimensions of X and y? Let's visualize this. Setting s=20 (size of points) so that the color/label differences are more visible.
plt.scatter(X[:,0], X[:,1], c=y, s=20, cmap=plt.cm.Spectral) plt.show()
codelab_1_NN_Numpy.ipynb
thinkingmachines/deeplearningworkshop
mit
Training a Linear Classifier Let's start by training a simple y = WX + b linear classifier on this dataset. We need to compute some Weights (W) and a bias vector (b) for all classes.
# random initialization of starting params. recall that it's best to randomly initialize at a small value. # how many parameters should this linear classifier have? remember there are K output classes, and 2 features per observation. W = 0.01 * np.random.randn(D,K) b = np.zeros((1,K)) print "W shape", W.shape print "W values", W # Here are some hyperparameters that we're not going to worry about too much right now learning_rate = 1e-0 # the step size in the descent reg = 1e-3 scores = np.dot(X, W) + b print scores.shape
codelab_1_NN_Numpy.ipynb
thinkingmachines/deeplearningworkshop
mit
We're going to compute the normalized softmax of these scores...
num_examples = X.shape[0] exp_scores = np.exp(scores) probs = exp_scores / np.sum(exp_scores, axis=1, keepdims=True) # Let's look at one example to verify the softmax transform print "Score: ", scores[50] print "Class Probabilities: ", probs[50]
codelab_1_NN_Numpy.ipynb
thinkingmachines/deeplearningworkshop
mit
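A practical caveat not covered in the original cell: np.exp overflows for large scores. A common, mathematically equivalent trick is to subtract the row-wise maximum before exponentiating; the resulting probabilities are unchanged. A minimal sketch (assuming only NumPy):

```python
import numpy as np

def stable_softmax(scores):
    # subtracting the row max leaves the softmax unchanged but avoids overflow
    shifted = scores - np.max(scores, axis=1, keepdims=True)
    exp_scores = np.exp(shifted)
    return exp_scores / np.sum(exp_scores, axis=1, keepdims=True)

s = np.array([[1000.0, 1001.0, 1002.0]])  # naive np.exp(s) would overflow here
p = stable_softmax(s)
assert np.isclose(p.sum(), 1.0)
```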
The array correct_logprobs is a 1D array holding, for each example, the negative log probability assigned to the correct class (the per-example cross-entropy loss).
correct_logprobs = -np.log(probs[range(num_examples),y]) # data loss is the average cross-entropy loss; total loss adds the regularization loss data_loss = np.sum(correct_logprobs)/num_examples reg_loss = 0.5*reg*np.sum(W*W) loss = data_loss + reg_loss # this gets the gradient of the scores: # class probabilities, minus 1 at the correct class, divided by num_examples dscores = probs dscores[range(num_examples),y] -= 1 dscores /= num_examples # this backpropagates the gradient into W and b dW = np.dot(X.T, dscores) # don't forget to transpose! otherwise, you'll be forwarding the gradient dW += reg*W # regularization gradient db = np.sum(dscores, axis=0, keepdims=True)
codelab_1_NN_Numpy.ipynb
thinkingmachines/deeplearningworkshop
mit
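The gradient step above (probs with 1 subtracted at the correct class) is the analytic derivative of the cross-entropy loss with respect to the scores. A small finite-difference check on one example confirms it (a sketch assuming only NumPy):

```python
import numpy as np

def loss_fn(scores, y):
    # cross-entropy loss of the softmax output at the correct class y
    e = np.exp(scores - scores.max())
    probs = e / e.sum()
    return -np.log(probs[y])

scores = np.array([1.0, 2.0, 0.5])
y = 1

e = np.exp(scores - scores.max())
probs = e / e.sum()
analytic = probs.copy()
analytic[y] -= 1  # probs - one_hot(y)

# centered finite-difference approximation of the same gradient
h = 1e-5
numeric = np.zeros_like(scores)
for i in range(len(scores)):
    sp, sm = scores.copy(), scores.copy()
    sp[i] += h
    sm[i] -= h
    numeric[i] = (loss_fn(sp, y) - loss_fn(sm, y)) / (2 * h)

assert np.allclose(analytic, numeric, atol=1e-6)
```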
Updating the Parameters We update the parameters W and b in the direction of the negative gradient in order to decrease the loss.
# this updates the W and b parameters W += -learning_rate * dW b += -learning_rate * db
codelab_1_NN_Numpy.ipynb
thinkingmachines/deeplearningworkshop
mit
Full Code for Training the Linear Softmax Classifier Using gradient descent for optimization and cross-entropy for the loss function. This ought to converge to a loss of around 0.78 after 150 iterations
# initialize parameters randomly W = 0.01 * np.random.randn(D,K) b = np.zeros((1,K)) # some hyperparameters step_size = 1e-0 reg = 1e-3 # regularization strength # gradient descent loop num_examples = X.shape[0] # evaluated for 200 steps for i in xrange(200): # evaluate class scores, [N x K] scores = np.dot(X, W) + b # compute the class probabilities exp_scores = np.exp(scores) probs = exp_scores / np.sum(exp_scores, axis=1, keepdims=True) # [N x K] # compute the loss: average cross-entropy loss and regularization correct_logprobs = -np.log(probs[range(num_examples),y]) data_loss = np.sum(correct_logprobs)/num_examples reg_loss = 0.5*reg*np.sum(W*W) loss = data_loss + reg_loss # for every 10 iterations print the loss if i % 10 == 0: print "iteration %d: loss %f" % (i, loss) # compute the gradient on scores dscores = probs dscores[range(num_examples),y] -= 1 dscores /= num_examples # backpropagate the gradient to the parameters (W,b) dW = np.dot(X.T, dscores) db = np.sum(dscores, axis=0, keepdims=True) dW += reg*W # regularization gradient # perform a parameter update W += -step_size * dW b += -step_size * db
codelab_1_NN_Numpy.ipynb
thinkingmachines/deeplearningworkshop
mit
Evaluating the Training Accuracy The training accuracy here ought to be at around 0.5 This is better than chance for 3 classes, where the expected accuracy of randomly selecting one out of 3 labels is 0.33. But not that much better.
scores = np.dot(X, W) + b predicted_class = np.argmax(scores, axis=1) print 'training accuracy: %.2f' % (np.mean(predicted_class == y))
codelab_1_NN_Numpy.ipynb
thinkingmachines/deeplearningworkshop
mit
Let's eyeball the decision boundaries to get a better feel for the split.
# plot the resulting classifier h = 0.02 x_min, x_max = X[:, 0].min() - 1, X[:, 0].max() + 1 y_min, y_max = X[:, 1].min() - 1, X[:, 1].max() + 1 xx, yy = np.meshgrid(np.arange(x_min, x_max, h), np.arange(y_min, y_max, h)) Z = np.dot(np.c_[xx.ravel(), yy.ravel()], W) + b Z = np.argmax(Z, axis=1) Z = Z.reshape(xx.shape) fig = plt.figure() plt.contourf(xx, yy, Z, cmap=plt.cm.Spectral, alpha=0.8) plt.scatter(X[:, 0], X[:, 1], c=y, s=40, cmap=plt.cm.Spectral) plt.xlim(xx.min(), xx.max()) plt.ylim(yy.min(), yy.max())
codelab_1_NN_Numpy.ipynb
thinkingmachines/deeplearningworkshop
mit
Training a 2 Layer Neural Network Let's see what kind of improvement we'll get with adding a single hidden layer.
# init parameters np.random.seed(100) # so we all have the same numbers h = 100 # size of hidden layer. a hyperparam in itself. W = 0.01 * np.random.randn(D,h) b = np.zeros((1,h)) W2 = 0.01 * np.random.randn(h,K) b2 = np.zeros((1,K))
codelab_1_NN_Numpy.ipynb
thinkingmachines/deeplearningworkshop
mit
Let's use a ReLU activation function. See how the output of the hidden layer is passed into the second layer to compute the scores.
hidden_layer = np.maximum(0, np.dot(X, W) + b) scores = np.dot(hidden_layer, W2) + b2
codelab_1_NN_Numpy.ipynb
thinkingmachines/deeplearningworkshop
mit
The loss computation and the dscores gradient computation remain the same. The major difference lies in the chained backpropagation of dscores all the way back up to the parameters W and b.
# backpropagate the gradient to the parameters of the hidden layer dW2 = np.dot(hidden_layer.T, dscores) db2 = np.sum(dscores, axis=0, keepdims=True) # gradient of the outputs of the hidden layer (the local gradient) dhidden = np.dot(dscores, W2.T) # backprop through the ReLU function dhidden[hidden_layer <= 0] = 0 # and finally into the parameters W and b dW = np.dot(X.T, dhidden) db = np.sum(dhidden, axis=0, keepdims=True)
codelab_1_NN_Numpy.ipynb
thinkingmachines/deeplearningworkshop
mit
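The line dhidden[hidden_layer <= 0] = 0 implements the ReLU derivative: the upstream gradient flows through unchanged where the activation was positive and is zeroed elsewhere. A tiny illustration (NumPy only, values chosen for the example):

```python
import numpy as np

x = np.array([[-2.0, 0.0, 3.0]])
relu_out = np.maximum(0, x)          # forward pass: [[0, 0, 3]]

upstream = np.array([[10.0, 10.0, 10.0]])  # gradient arriving from the layer above
grad = upstream.copy()
grad[relu_out <= 0] = 0              # zero the gradient where ReLU was inactive
assert (grad == np.array([[0.0, 0.0, 10.0]])).all()
```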
Full Code for Training the 2 Layer NN with ReLU activation Very similar to the linear classifier!
# initialize parameters randomly np.random.seed(100) # so we all have the same numbers h = 100 # size of hidden layer W = 0.01 * np.random.randn(D,h) b = np.zeros((1,h)) W2 = 0.01 * np.random.randn(h,K) b2 = np.zeros((1,K)) # some hyperparameters step_size = 1e-0 reg = 1e-3 # regularization strength # optimization: gradient descent loop num_examples = X.shape[0] for i in xrange(10000): # feed forward # evaluate class scores, [N x K] hidden_layer = np.maximum(0, np.dot(X, W) + b) # note, ReLU activation scores = np.dot(hidden_layer, W2) + b2 # compute the class probabilities exp_scores = np.exp(scores) probs = exp_scores / np.sum(exp_scores, axis=1, keepdims=True) # [N x K] # compute the loss: average cross-entropy loss and regularization correct_logprobs = -np.log(probs[range(num_examples),y]) data_loss = np.sum(correct_logprobs)/num_examples reg_loss = 0.5*reg*np.sum(W*W) + 0.5*reg*np.sum(W2*W2) loss = data_loss + reg_loss if i % 1000 == 0: print "iteration %d: loss %f" % (i, loss) # backprop # compute the gradient on scores dscores = probs dscores[range(num_examples),y] -= 1 dscores /= num_examples # backpropagate the gradient to the parameters # first backprop into parameters W2 and b2 dW2 = np.dot(hidden_layer.T, dscores) db2 = np.sum(dscores, axis=0, keepdims=True) # next backprop into hidden layer dhidden = np.dot(dscores, W2.T) # backprop the ReLU non-linearity dhidden[hidden_layer <= 0] = 0 # finally into W,b dW = np.dot(X.T, dhidden) db = np.sum(dhidden, axis=0, keepdims=True) # add regularization gradient contribution dW2 += reg * W2 dW += reg * W # perform a parameter update W += -step_size * dW b += -step_size * db W2 += -step_size * dW2 b2 += -step_size * db2
codelab_1_NN_Numpy.ipynb
thinkingmachines/deeplearningworkshop
mit
Evaluating the Training Set Accuracy This should be around 0.98, which is hugely better than the 0.50 we were getting from the linear classifier!
hidden_layer = np.maximum(0, np.dot(X, W) + b) scores = np.dot(hidden_layer, W2) + b2 predicted_class = np.argmax(scores, axis=1) print 'training accuracy: %.2f' % (np.mean(predicted_class == y))
codelab_1_NN_Numpy.ipynb
thinkingmachines/deeplearningworkshop
mit
Let's visualize this to get a more dramatic sense of just how good the split is.
# plot the resulting classifier h = 0.02 x_min, x_max = X[:, 0].min() - 1, X[:, 0].max() + 1 y_min, y_max = X[:, 1].min() - 1, X[:, 1].max() + 1 xx, yy = np.meshgrid(np.arange(x_min, x_max, h), np.arange(y_min, y_max, h)) Z = np.dot(np.maximum(0, np.dot(np.c_[xx.ravel(), yy.ravel()], W) + b), W2) + b2 Z = np.argmax(Z, axis=1) Z = Z.reshape(xx.shape) fig = plt.figure() plt.contourf(xx, yy, Z, cmap=plt.cm.Spectral, alpha=0.8) plt.scatter(X[:, 0], X[:, 1], c=y, s=40, cmap=plt.cm.Spectral) plt.xlim(xx.min(), xx.max()) plt.ylim(yy.min(), yy.max()) #fig.savefig('spiral_net.png')
codelab_1_NN_Numpy.ipynb
thinkingmachines/deeplearningworkshop
mit
Tokens The basic atomic parts of each text are the tokens. A token is the NLP name for a sequence of characters that we want to treat as a group. We have seen how we can extract tokens by splitting the text at the blank spaces. NLTK has a function word_tokenize() for it:
import nltk s1Tokens = nltk.word_tokenize(sampleText1) s1Tokens len(s1Tokens)
03-NLP/introNLTK.ipynb
Mashimo/datascience
apache-2.0
21 tokens extracted, which include words and punctuation. Note that the tokens are different from what a split by blank spaces would have obtained, e.g. "can't" is considered by NLTK as TWO tokens: "can" and "n't" (= "not") while a tokeniser that splits text by spaces would consider it a single token: "can't". Let's see another example:
s2Tokens = nltk.word_tokenize(sampleText2) s2Tokens
03-NLP/introNLTK.ipynb
Mashimo/datascience
apache-2.0
And we can apply it to an entire book, "The Prince" by Machiavelli that we used last time:
# If you would like to work with the raw text you can use 'bookRaw' with open('../datasets/ThePrince.txt', 'r') as f: bookRaw = f.read() bookTokens = nltk.word_tokenize(bookRaw) bookText = nltk.Text(bookTokens) # special format nBookTokens= len(bookTokens) # or alternatively len(bookText) print ("*** Analysing book ***") print ("The book is {} chars long".format (len(bookRaw))) print ("The book has {} tokens".format (nBookTokens))
03-NLP/introNLTK.ipynb
Mashimo/datascience
apache-2.0
As mentioned above, the NLTK tokeniser works in a more sophisticated way than just splitting by spaces, therefore this time we got more tokens. Sentences NLTK has a function to tokenise a text not into words but into sentences.
text1 = "This is the first sentence. A liter of milk in the U.S. costs $0.99. Is this the third sentence? Yes, it is!" sentences = nltk.sent_tokenize(text1) len(sentences) sentences
03-NLP/introNLTK.ipynb
Mashimo/datascience
apache-2.0
As you see, it does not split just after each full stop but checks whether the stop is part of an acronym (U.S.) or a number (0.99). It also correctly splits sentences after question or exclamation marks, but not after commas.
sentences = nltk.sent_tokenize(bookRaw) # extract sentences nSent = len(sentences) print ("The book has {} sentences".format (nSent)) print ("and each sentence has in average {} tokens".format (nBookTokens / nSent))
03-NLP/introNLTK.ipynb
Mashimo/datascience
apache-2.0
Most common tokens What are the 20 most frequently occurring (unique) tokens in the text? What is their frequency? The NLTK FreqDist class is used to encode “frequency distributions”, which count the number of times that something occurs, for example a token. Its most_common() method then returns a list of tuples where each tuple is of the form (token, frequency). The list is sorted in descending order of frequency.
def get_top_words(tokens): # Calculate frequency distribution fdist = nltk.FreqDist(tokens) return fdist.most_common() topBook = get_top_words(bookTokens) # Output top 20 words topBook[:20]
03-NLP/introNLTK.ipynb
Mashimo/datascience
apache-2.0
The comma is the most common token: we need to remove the punctuation. Most common alphanumeric tokens We can use isalpha() to check if the token is a word and not punctuation.
topWords = [(freq, word) for (word,freq) in topBook if word.isalpha() and freq > 400] topWords
03-NLP/introNLTK.ipynb
Mashimo/datascience
apache-2.0
We can also remove any capital letters before tokenising:
def preprocessText(text, lowercase=True): if lowercase: tokens = nltk.word_tokenize(text.lower()) else: tokens = nltk.word_tokenize(text) return [word for word in tokens if word.isalpha()] bookWords = preprocessText(bookRaw) topBook = get_top_words(bookWords) # Output top 20 words topBook[:20] print ("*** Analysing book ***") print ("The text has now {} words (tokens)".format (len(bookWords)))
03-NLP/introNLTK.ipynb
Mashimo/datascience
apache-2.0
Now we removed the punctuation and the capital letters but the most common token is "the", not a significant word ... As we have seen last time, these are so-called stop words that are very common and are normally stripped from a text when doing this kind of analysis. Meaningful most common tokens A simple approach could be to filter the tokens that have a length greater than 5 and a frequency of more than 80.
meaningfulWords = [word for (word,freq) in topBook if len(word) > 5 and freq > 80] sorted(meaningfulWords)
03-NLP/introNLTK.ipynb
Mashimo/datascience
apache-2.0
This would work but would also leave out tokens such as I and you which are actually significant. The better approach, which we have seen earlier, is to remove stop words using external files containing them. NLTK has a corpus of stop words in several languages:
from nltk.corpus import stopwords stopwordsEN = set(stopwords.words('english')) # english language betterWords = [w for w in bookWords if w not in stopwordsEN] topBook = get_top_words(betterWords) # Output top 20 words topBook[:20]
03-NLP/introNLTK.ipynb
Mashimo/datascience
apache-2.0
Now we excluded words such as the but we can further improve the list by looking at semantically similar words, such as plural and singular versions.
'princes' in betterWords betterWords.count("prince") + betterWords.count("princes")
03-NLP/introNLTK.ipynb
Mashimo/datascience
apache-2.0
Stemming Above, in the list of words we have both prince and princes which are respectively the singular and plural version of the same word (the stem). The same would happen with verb conjugation (love and loving are considered different words but are actually inflections of the same verb). A stemmer is the tool that reduces such inflectional forms into their stem, base or root form, and NLTK has several of them (each with a different heuristic algorithm).
input1 = "List listed lists listing listings" words1 = input1.lower().split(' ') words1
03-NLP/introNLTK.ipynb
Mashimo/datascience
apache-2.0
And now we apply one of the NLTK stemmers, the Porter stemmer:
porter = nltk.PorterStemmer() [porter.stem(t) for t in words1]
03-NLP/introNLTK.ipynb
Mashimo/datascience
apache-2.0
As you see, all 5 different words have been reduced to the same stem and would now be the same lexical token.
stemmedWords = [porter.stem(w) for w in betterWords] topBook = get_top_words(stemmedWords) topBook[:20] # Output top 20 words
03-NLP/introNLTK.ipynb
Mashimo/datascience
apache-2.0
Now the word princ is counted 281 times, exactly like the sum of prince and princes. A note here: Stemming usually refers to a crude heuristic process that chops off the ends of words in the hope of achieving this goal correctly most of the time, and often includes the removal of derivational affixes. Prince and princes become princ. A different flavour is lemmatisation, which we will see in one second, but first a note about stemming in languages other than English. Stemming in other languages Snowball is an improvement created by Porter: a language for creating stemmers, with rules for many more languages than English. For example Italian:
from nltk.stem.snowball import SnowballStemmer stemmerIT = SnowballStemmer("italian") inputIT = "Io ho tre mele gialle, tu hai una mela gialla e due pere verdi" wordsIT = inputIT.split(' ') [stemmerIT.stem(w) for w in wordsIT]
03-NLP/introNLTK.ipynb
Mashimo/datascience
apache-2.0
Lemma Lemmatization usually refers to doing things properly with the use of a vocabulary and morphological analysis of words, normally aiming to remove inflectional endings only and to return the base or dictionary form of a word, which is known as the lemma. While a stemmer operates on a single word without knowledge of the context, a lemmatiser can take the context into consideration. NLTK also has a built-in lemmatiser, so let's see it in action:
from nltk.stem import WordNetLemmatizer lemmatizer = WordNetLemmatizer() words1 [lemmatizer.lemmatize(w, 'n') for w in words1] # n = nouns
03-NLP/introNLTK.ipynb
Mashimo/datascience
apache-2.0
We tell the lemmatiser that the words are nouns. In this case it maps to the same lemma words such as list (singular noun) and lists (plural noun), but leaves the other words as they are.
[lemmatizer.lemmatize(w, 'v') for w in words1] # v = verbs
03-NLP/introNLTK.ipynb
Mashimo/datascience
apache-2.0
We get a different result if we say that the words are verbs. They have all the same lemma, in fact they could be all different inflections or conjugation of a verb. The type of words that can be used are: 'n' = noun, 'v'=verb, 'a'=adjective, 'r'=adverb
words2 = ['good', 'better'] [porter.stem(w) for w in words2] [lemmatizer.lemmatize(w, 'a') for w in words2]
03-NLP/introNLTK.ipynb
Mashimo/datascience
apache-2.0
It works with different adjectives; it doesn't look only at prefixes and suffixes. You might wonder why stemmers are used instead of always using lemmatisers: stemmers are much simpler, smaller and faster, and for many applications good enough. Now we lemmatise the book:
lemmatisedWords = [lemmatizer.lemmatize(w, 'n') for w in betterWords] topBook = get_top_words(lemmatisedWords) topBook[:20] # Output top 20 words
03-NLP/introNLTK.ipynb
Mashimo/datascience
apache-2.0
Yes, the lemma now is prince. But note that we consider all words in the book as nouns, while the proper way would be to apply the correct type to each single word. Part of speech (PoS) In traditional grammar, a part of speech (abbreviated form: PoS or POS) is a category of words which have similar grammatical properties. For example, an adjective (red, big, quiet, ...) describes properties while a verb (throw, walk, have) describes actions or states. Commonly listed parts of speech are noun, verb, adjective, adverb, pronoun, preposition, conjunction, interjection.
text1 = "Children shouldn't drink a sugary drink before bed." tokensT1 = nltk.word_tokenize(text1) nltk.pos_tag(tokensT1)
03-NLP/introNLTK.ipynb
Mashimo/datascience
apache-2.0
The NLTK function pos_tag() will tag each token with the estimated PoS. NLTK has 13 categories of PoS. You can check the acronym using the NLTK help function:
nltk.help.upenn_tagset('RB')
03-NLP/introNLTK.ipynb
Mashimo/datascience
apache-2.0
Which are the most common PoS in The Prince book?
tokensAndPos = nltk.pos_tag(bookTokens) posList = [thePOS for (word, thePOS) in tokensAndPos] fdistPos = nltk.FreqDist(posList) fdistPos.most_common(5) nltk.help.upenn_tagset('IN')
03-NLP/introNLTK.ipynb
Mashimo/datascience
apache-2.0
It's not nouns (NN) but prepositions and subordinating conjunctions (IN). Extra note: Parsing the grammar structure Words can be ambiguous and sometimes it is not easy to understand which PoS a word is; for example, in the sentence "visiting aunts can be a nuisance", is visiting a verb or an adjective? Tagging a PoS depends on the context, which can be ambiguous. Making sense of a sentence is easier if it follows a well-defined grammatical structure, such as: subject + verb + object NLTK allows us to define a formal grammar which can then be used to parse a text. The NLTK ChartParser is a procedure for finding one or more trees (sentences have an internal organisation that can be represented using a tree) corresponding to a grammatically well-formed sentence.
# Parsing sentence structure text2 = nltk.word_tokenize("Alice loves Bob") grammar = nltk.CFG.fromstring(""" S -> NP VP VP -> V NP NP -> 'Alice' | 'Bob' V -> 'loves' """) parser = nltk.ChartParser(grammar) trees = parser.parse_all(text2) for tree in trees: print(tree)
03-NLP/introNLTK.ipynb
Mashimo/datascience
apache-2.0
QCM (multiple-choice questions) The correct answers are in bold. What does the following program do? It sorts. **It checks that a list is sorted.** Nothing, because the loop does not start at 0.
l = [0,1,2,3,4,6,5,8,9,10] res = True for i in range(1,len (l)) : if l[i-1] > l[i]: # a list is not sorted if two consecutive elements res = False # are not in the right order print(res)
_doc/notebooks/examen/solution_2016.ipynb
sdpython/actuariat_python
mit
The following function does not work on ... The number 0. The constant "123". **Strictly negative numbers.**
def somme(n): return sum ( [ int(c) for c in str(n) ] ) # a minus sign would lead to computing int('-'), which is invalid somme(0), somme("123")
_doc/notebooks/examen/solution_2016.ipynb
sdpython/actuariat_python
mit
The following program raises an error. Which exception will it produce? SyntaxError TypeError **IndexError**
# raises an exception li = list(range(0,10)) sup = [0,9] for i in sup : del li [i] # we delete the first element # at this point the last element has index 8, no longer 9
_doc/notebooks/examen/solution_2016.ipynb
sdpython/actuariat_python
mit
Circle what is true about the following function. **It is recursive.** It is missing a stopping condition. **fibo(4) recursively calls fibo 8 times: fibo(3) once, fibo(2) twice, fibo(1) three times and fibo(0) twice**
def fibo (n) : print("fibo", n) if n < 1 : return 0 elif n == 1 : return 1 else : return fibo (n-1) + fibo (n-2) fibo(4)
_doc/notebooks/examen/solution_2016.ipynb
sdpython/actuariat_python
mit
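The call count claimed above can be checked by instrumenting the function with a counter (8 recursive calls, 9 invocations including the initial fibo(4)):

```python
calls = 0

def fibo(n):
    # same recursion as in the quiz, with a global invocation counter
    global calls
    calls += 1
    if n < 1:
        return 0
    elif n == 1:
        return 1
    return fibo(n - 1) + fibo(n - 2)

fibo(4)
assert calls - 1 == 8  # 8 recursive calls beyond the initial fibo(4)
```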
The function is obviously recursive since it calls itself; it even makes two recursive calls within the same function, which explains the many calls. How many rows does the dataframe df2 have? **3** 4 5 6 7 8 9 None, the code raises an error.
import pandas df = pandas.DataFrame([dict(x=1, t="e"), dict(x=3, t="f"), dict(x=4, t="e")]) df2 = df.merge(df, left_on="x", right_on="x") df2
_doc/notebooks/examen/solution_2016.ipynb
sdpython/actuariat_python
mit
The initial dataframe has 3 rows. We merge it with itself on a column that contains only distinct values. Each row merges with exactly one row. The result contains 3 rows. How many rows does the dataframe df3 have? 3 4 **5** 6 7 8 9 None, the code raises an error.
import pandas df = pandas.DataFrame([dict(x=1, t="e"), dict(x=3, t="f"), dict(x=4, t="e")]) df3 = df.merge(df, left_on="t", right_on="t") df3.head()
_doc/notebooks/examen/solution_2016.ipynb
sdpython/actuariat_python
mit
The initial dataframe has 3 rows. We merge it with itself on a column that contains non-distinct values. There are 2 'e' and 1 'f'. The unique key 'f' merges with itself, and the 'e' keys merge with one another, giving 2x2 = 4 rows. Result: 1 + 4 = 5. Dataframes We assume we have a data file too large to be loaded into memory. We want to produce simple statistics. To test your code, you can use the file data.txt built as follows:
import pandas from urllib.error import URLError url_ = "https://archive.ics.uci.edu/ml/machine-learning-databases/00350/" name = "default%20of%20credit%20card%20clients.xls" url = url_ + name try: df = pandas.read_excel(url, skiprows=1) except URLError: # backup plan url_ = "http://www.xavierdupre.fr/enseignement/complements/" url = url_ + name df = pandas.read_excel(url, skiprows=1) df.to_csv("data.txt", encoding="utf-8", sep="\t", index=False)
_doc/notebooks/examen/solution_2016.ipynb
sdpython/actuariat_python
mit
Q1 Write a function that aggregates a dataset by AGE and computes in a single pass the minimum, maximum and mean of the variables LIMIT_BAL and default payment next month, and counts the number of observations sharing the same AGE.
import numpy res = df.groupby("AGE").agg({"LIMIT_BAL": (min, max, numpy.mean), "ID":len, "default payment next month": (min, max, numpy.mean)}) res.head()
_doc/notebooks/examen/solution_2016.ipynb
sdpython/actuariat_python
mit
Q2 Read the documentation of read_csv. We want to load a file in several chunks and, for each chunk, compute the aggregation above. The column names are present only on the first line of the file.
aggs = [] step = 10000 columns = None for i in range(0, df.shape[0], step): part = pandas.read_csv("data.txt", encoding="utf-8", sep="\t", skiprows=i, nrows=step, header=0 if columns is None else None, names=columns) agg = part.groupby("AGE").agg({"LIMIT_BAL": (min, max, numpy.mean), "ID":len, "default payment next month": (min, max, numpy.mean)}) aggs.append(agg) if columns is None: columns = part.columns tout = pandas.concat(aggs) tout.head()
_doc/notebooks/examen/solution_2016.ipynb
sdpython/actuariat_python
mit
The important points: we use the read_csv function to read the file in chunks with skiprows and nrows; we compute the statistics on each chunk; the column names appear only on the first line, so we must keep them in order to add them when loading the second chunk of the file (and the following ones). The third point is handled more elegantly with the iterator parameter. This second solution is better because the loop does not use the information df.shape[0]: using it amounts to reading the file twice, once to get the number of rows and once to read the content. The second solution reads the file only once.
def agg_exo(df): gr = df.groupby("AGE").agg({ #'LIMIT_BAL': {'LB_min': 'min','LB_max': 'max', 'LB_avg': 'mean'}, 'LIMIT_BAL': ['min', 'max', 'mean'], 'default payment next month': ['min', 'max', 'mean'], #'ID': {'len': 'count'} 'ID': ['count'], }) gr.columns = ['LB_min', 'LB_max', 'LB_avg', 'dpnm_min', 'dpmn_max', 'dpmn_avg', 'len'] return gr params = {'filepath_or_buffer': "data.txt", 'encoding': "utf-8", 'sep':"\t" , 'iterator': True, 'chunksize':10001} tout2 = pandas.concat([agg_exo(part) for part in pandas.read_csv(**params)], axis=0) tout2.head()
_doc/notebooks/examen/solution_2016.ipynb
sdpython/actuariat_python
mit
Q3 The dataframe tout is the concatenation of the per-chunk dataframes containing aggregated information. We now want to obtain the same aggregated information for the whole dataset, using only the dataframe tout. Write the code that performs this aggregation.
tout[("LIMIT_BAL","w")] = tout[("LIMIT_BAL", "mean")] * tout[("ID", "len")] # be careful with the weights here tout[("default payment next month","w")] = tout[("default payment next month", "mean")] * tout[("ID", "len")] toutm = tout.reset_index() tout_agg = toutm.groupby("AGE").agg({ ("LIMIT_BAL", "min"): min, ("LIMIT_BAL", "max"): max, ("LIMIT_BAL", "w"): sum, ("default payment next month", "min"): min, ("default payment next month", "max"): max, ("default payment next month", "w"): sum, ("ID", "len"):sum, }) # and here tout_agg[("LIMIT_BAL", "mean")] = tout_agg[("LIMIT_BAL", "w")] / tout_agg[("ID", "len")] tout_agg[("default payment next month", "mean")] = tout_agg[("default payment next month", "w")] / tout_agg[("ID", "len")] tout_agg = tout_agg [ sorted(tout_agg.columns)] tout_agg.head()
_doc/notebooks/examen/solution_2016.ipynb
sdpython/actuariat_python
mit
Computing a mean over observations is fairly easy, but things get more complicated when taking a mean of means. You must keep track of the number of observations each mean represents, otherwise the final mean will be wrong. That is what line 3 does. Q4 Plot a histogram of the mean value of the LIMIT_BAL variable, adding two lines for the min and max values. A few hints: How to align the bar and line in matplotlib two y-axes chart?.
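Before moving on to Q4, a minimal numeric illustration (not part of the exam data) of why a mean of means must be weighted by chunk size:

```python
# Two chunks of different sizes: the naive mean of means is wrong.
chunk_a = [10, 10, 10]  # 3 observations, mean 10
chunk_b = [40]          # 1 observation, mean 40

naive = (10 + 40) / 2                    # 25.0 -- ignores chunk sizes
weighted = (10 * 3 + 40 * 1) / (3 + 1)   # 17.5 -- weighted by observation counts
global_mean = sum(chunk_a + chunk_b) / 4 # 17.5 -- the true mean over all data

print(naive, weighted, global_mean)
```

Only the weighted version matches the mean computed over the full dataset, which is exactly why the code above carries the ("ID", "len") counts along with each chunk's mean.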
tout_agg.head()
data = tout_agg
f, ax = plt.subplots(figsize=(10, 4))
data.plot.bar(y=("LIMIT_BAL", "mean"), label="avg LIMIT_BAL", ax=ax, color="pink")
data.reset_index(drop=True).plot(y=("LIMIT_BAL", "min"), label="min", kind="line", ax=ax, color="green")
data.reset_index(drop=True).plot(y=("LIMIT_BAL", "max"), label="max", kind="line", ax=ax)
data.reset_index(drop=True).plot(y=("LIMIT_BAL", "mean"), label="avg LIMIT_BAL", kind="line", ax=ax)
x = ax.get_xticks()
ax.xaxis.set_ticks(x[::5])
ax.xaxis.set_ticklabels(x[::5] + min(data.index))
ax.set_title("average LIMIT_BAL per age");
_doc/notebooks/examen/solution_2016.ipynb
sdpython/actuariat_python
mit
A curious thing about Year is that the oldest sighting dates from 1762! This dataset includes sightings from throughout history. How significant can these old records be? Well, to figure it out it's time to plot some charts. Humans are visual beings, and a picture is really worth much more than a bunch of numbers and words. To do so we will use the default matplotlib library from Python to build our graphs. Analysing the years Before starting, let's count the sightings by year. The commands below are equivalent to the following SQL code:

```sql
SELECT Year, count(*) AS Sightings
FROM ufo
GROUP BY Year
```
sightings_by_year = ufo.groupby('Year').size().reset_index()
sightings_by_year.columns = ['Year', 'Sightings']

import matplotlib as mpl
import matplotlib.pyplot as plt
import matplotlib.ticker as ticker
import seaborn as sns

plt.style.use('seaborn-white')
%matplotlib inline

plt.xticks(rotation=90)
sns.barplot(data=sightings_by_year, x='Year', y='Sightings', color='blue')
ax = plt.gca()
ax.xaxis.set_major_locator(ticker.MultipleLocator(base=5))
Python3/.ipynb_checkpoints/ufo-sample-python3-checkpoint.ipynb
valter-lisboa/ufo-notebooks
gpl-3.0
We can see the number of sightings is more representative after around 1900, so we will filter the dataframe for all years above this threshold.
ufo = ufo[ufo['Year'] > 1900]
Python3/.ipynb_checkpoints/ufo-sample-python3-checkpoint.ipynb
valter-lisboa/ufo-notebooks
gpl-3.0
For the purpose of cleaning up the data I determined that the Name and Ticket columns were not necessary for my future analysis.
## don't need Name and Ticket for what I am going to be tackling
titanic_ds = titanic_ds.drop(["Name", "Ticket"], axis=1)
Project 2 Titanic Data Final.ipynb
dasnah/TitanicDataSet
unlicense
Using .info and .describe, I am able to get a quick overview of what the data set has to offer and whether anything stands out. In this instance we can see the Embarked count is less than the number of passengers, but this will not be an issue for my analysis.
## overview of data
titanic_ds.info()
titanic_ds.describe()
Project 2 Titanic Data Final.ipynb
dasnah/TitanicDataSet
unlicense
For my next few blocks of code and graphs I will be looking at two groups of individuals on the boat, male and female. To make the analysis a little easier I created two variables that hold all males and all females on the boat.
## defining men and women from data
men_ds = titanic_ds[titanic_ds.Sex == 'male']
women_ds = titanic_ds[titanic_ds.Sex == 'female']
Project 2 Titanic Data Final.ipynb
dasnah/TitanicDataSet
unlicense
Using those two data sets that I created in the previous block, I printed the counts to understand how many of each sex were on the boat.
# idea of the spread between men and women
print("Males: ")
print(men_ds.count()['Sex'])
print("Females: ")
print(women_ds.count()['Sex'])
Project 2 Titanic Data Final.ipynb
dasnah/TitanicDataSet
unlicense
For this section I utilized Seaborn's factorplot function to graph the count of males and females in each class.
## Gender distribution by class
gender_class = sea.factorplot('Pclass', order=[1, 2, 3], data=titanic_ds, hue='Sex', kind='count')
gender_class.set_ylabels("count of passengers")
Project 2 Titanic Data Final.ipynb
dasnah/TitanicDataSet
unlicense
To begin answering my first question of who has a higher probability of surviving, I created two variables, men_prob and women_prob. For each I grouped by sex, took the mean of the Survived column, and then printed out each result.
## Probability of survival by gender
men_prob = men_ds.groupby('Sex').Survived.mean()
women_prob = women_ds.groupby('Sex').Survived.mean()
print("Male probability of survival: ")
print(men_prob[0])
print("Female probability of survival: ")
print(women_prob[0])
Project 2 Titanic Data Final.ipynb
dasnah/TitanicDataSet
unlicense
To visually answer the question of which sex had a higher probability of surviving, I utilized the factorplot function from seaborn to map sex against survival in the form of a bar graph. I also included a y-axis label for presentation.
sbg = sea.factorplot("Sex", "Survived", data=titanic_ds, kind="bar", ci=None, size=5)
sbg.set_ylabels("survival probability")
Project 2 Titanic Data Final.ipynb
dasnah/TitanicDataSet
unlicense
To answer my second question about the age range of survivors vs. non-survivors, I will look at the distribution of age across the board using the histogram function, and also print the median age. First, though, to validate the finding that females do have a higher probability of surviving than males, I applied statistical analysis, a chi-squared test, to gain the necessary understanding. My findings and code are below.
print("Total Count of Males and Females on ship: ") print(titanic_ds.count()['Sex']) print("Total Males:") print(men_ds.count()['Sex']) print("Males (Survived, Deseased): ") print(men_ds[men_ds.Survived == 1].count()['Sex'], men_ds[men_ds.Survived == 0].count()['Sex']) print("Total Women:") print(women_ds.count()['Sex']) print("Females (Survived, Deseased): ") print(women_ds[women_ds.Survived == 1].count()['Sex'], women_ds[women_ds.Survived == 0].count()['Sex']) men_women_survival = np.array([[men_ds[men_ds.Survived == 1].count()['Sex'], men_ds[men_ds.Survived == 0].count()['Sex']],[women_ds[women_ds.Survived == 1].count()['Sex'], women_ds[women_ds.Survived == 0].count()['Sex']]]) print(men_women_survival) # Chi-square calculations sp.stats.chi2_contingency(men_women_survival)
Project 2 Titanic Data Final.ipynb
dasnah/TitanicDataSet
unlicense
Chi-square value: 260.71702016732104
p-value: 1.1973570627755645e-58
degrees of freedom: 1
expected frequencies table:
221.47474747, 355.52525253
120.52525253, 193.47474747

Given that the p-value of 1.1973570627755645e-58 is less than the significance level of .05, there is an indication that there is a relationship between gender and survivability. That means we accept the alternative hypothesis that gender and survivability are dependent on each other.
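As a sanity check, the Yates-corrected chi-square statistic above can be reproduced with plain Python arithmetic. The observed counts below are reconstructed from the survival counts printed by the cell above (scipy applies the continuity correction by default on 2x2 tables, which is why the hand computation matches its output):

```python
# Rows: male, female; columns: survived, deceased.
observed = [[109, 468],
            [233,  81]]

row_tot = [sum(r) for r in observed]        # 577 males, 314 females
col_tot = [sum(c) for c in zip(*observed)]  # 342 survived, 549 deceased
n = sum(row_tot)                            # 891 passengers

# Expected count under independence, then Yates-corrected chi-square:
# sum over cells of (|O - E| - 0.5)^2 / E
chi2 = 0.0
for i in range(2):
    for j in range(2):
        e = row_tot[i] * col_tot[j] / n
        chi2 += (abs(observed[i][j] - e) - 0.5) ** 2 / e

print(round(chi2, 3))  # 260.717, matching scipy.stats.chi2_contingency
```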
## Distribution of age; median age 28.0
titanic_ds['Age'].hist(bins=100)
print("Median Age: ")
print(titanic_ds['Age'].median())
Project 2 Titanic Data Final.ipynb
dasnah/TitanicDataSet
unlicense
To answer my second question, I plotted survival against age in a box plot to show the age distribution for both deceased and surviving passengers.
## Age box plot, survived and did not survive
## fewer people survived compared to deceased
age_box = sea.boxplot(x="Survived", y="Age", data=titanic_ds)
age_box.set(xlabel='Survived', ylabel='Age', xticklabels=['Deceased', 'Survived'])
Project 2 Titanic Data Final.ipynb
dasnah/TitanicDataSet
unlicense
To tackle the third question, who has a higher probability of surviving, passengers who are alone or in a family, I first created a boolean column that is True if the number of relatives reported (SibSp + Parch) is above 0 (Family) and False if it is not (Alone). Below you can see the code as well as the new column created with the True and False values.
titanic_ds['Family'] = (titanic_ds.SibSp + titanic_ds.Parch > 0)
print(titanic_ds.head())
Project 2 Titanic Data Final.ipynb
dasnah/TitanicDataSet
unlicense
To show the probability visually as well as its numeric output, I created a factorplot and printed out the probabilities for the two groups (Alone = False and Family = True). To get the probabilities I divided the sum of survivors in each family type by the count of passengers in that family type.
fanda = sea.factorplot('Family', "Survived", data=titanic_ds, kind='bar', ci=None, size=5)
fanda.set(xticklabels=['Alone', 'Family'])
print(titanic_ds.groupby('Family')['Survived'].sum() / titanic_ds.groupby('Family')['Survived'].count())
Project 2 Titanic Data Final.ipynb
dasnah/TitanicDataSet
unlicense
Finally, to answer my last question of whether being in a higher class affected the probability of surviving, I used the same seaborn factorplot; to create this graph I took the sum of survivors and divided it by the count of passengers in each class.
sea.factorplot('Pclass', "Survived", data=titanic_ds, kind='bar', ci=None, size=5)
PS = titanic_ds.groupby('Pclass')['Survived'].sum()
PC = titanic_ds.groupby('Pclass')['Survived'].count()
print("Class Survivability: ")
print(PS / PC)
Project 2 Titanic Data Final.ipynb
dasnah/TitanicDataSet
unlicense
Model Inputs First we need to create the inputs for our graph. We need two inputs, one for the discriminator and one for the generator. Here we'll call the discriminator input inputs_real and the generator input inputs_z. We'll assign them the appropriate sizes for each of the networks. Exercise: Finish the model_inputs function below. Create the placeholders for inputs_real and inputs_z using the input sizes real_dim and z_dim respectively.
def model_inputs(real_dim, z_dim):
    inputs_real = tf.placeholder(tf.float32, shape=[None, real_dim], name="inputs_real")
    inputs_z = tf.placeholder(tf.float32, shape=[None, z_dim], name="inputs_z")
    return inputs_real, inputs_z
Generative-Adversarial-Networks/Intro_to_GANs_Exercises.ipynb
nehal96/Deep-Learning-ND-Exercises
mit
Discriminator and Generator Losses Now we need to calculate the losses, which is a little tricky. For the discriminator, the total loss is the sum of the losses for real and fake images, d_loss = d_loss_real + d_loss_fake. The losses will be sigmoid cross-entropies, which we can get with tf.nn.sigmoid_cross_entropy_with_logits. We'll also wrap that in tf.reduce_mean to get the mean for all the images in the batch. So the losses will look something like

```python
tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=logits, labels=labels))
```

For the real image logits, we'll use d_logits_real, which we got from the discriminator in the cell above. For the labels, we want them to be all ones, since these are all real images. To help the discriminator generalize better, the labels are reduced a bit from 1.0 to 0.9, for example, using the parameter smooth. This is known as label smoothing, typically used with classifiers to improve performance. In TensorFlow, it looks something like

```python
labels = tf.ones_like(tensor) * (1 - smooth)
```

The discriminator loss for the fake data is similar. The logits are d_logits_fake, which we got from passing the generator output to the discriminator. These fake logits are used with labels of all zeros. Remember that we want the discriminator to output 1 for real images and 0 for fake images, so we need to set up the losses to reflect that. Finally, the generator losses use d_logits_fake, the fake image logits. But now the labels are all ones. The generator is trying to fool the discriminator, so it wants the discriminator to output ones for fake images. Exercise: Calculate the losses for the discriminator and the generator. There are two discriminator losses, one for real images and one for fake images. For the real image loss, use the real logits and (smoothed) labels of ones. For the fake image loss, use the fake logits with labels of all zeros. The total discriminator loss is the sum of those two losses.
Finally, the generator loss again uses the fake logits from the discriminator, but this time the labels are all ones because the generator wants to fool the discriminator.
# Calculate losses
d_loss_real = tf.reduce_mean(
    tf.nn.sigmoid_cross_entropy_with_logits(
        logits=d_logits_real,
        labels=tf.ones_like(d_logits_real) * (1 - smooth)))
# Fake labels are plain zeros: multiplying zeros by (1 - smooth) is a no-op,
# and the labels should be shaped like d_logits_fake, not d_logits_real.
d_loss_fake = tf.reduce_mean(
    tf.nn.sigmoid_cross_entropy_with_logits(
        logits=d_logits_fake,
        labels=tf.zeros_like(d_logits_fake)))
d_loss = d_loss_real + d_loss_fake

g_loss = tf.reduce_mean(
    tf.nn.sigmoid_cross_entropy_with_logits(
        logits=d_logits_fake,
        labels=tf.ones_like(d_logits_fake)))
Generative-Adversarial-Networks/Intro_to_GANs_Exercises.ipynb
nehal96/Deep-Learning-ND-Exercises
mit
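To build intuition for the smoothed labels used above, here is a minimal pure-Python sketch (not TensorFlow) of the numerically stable formula that tf.nn.sigmoid_cross_entropy_with_logits implements, applied to some hypothetical discriminator logits:

```python
import math

def sigmoid_cross_entropy(logit, label):
    # Numerically stable form used by tf.nn.sigmoid_cross_entropy_with_logits:
    # max(x, 0) - x*z + log(1 + exp(-|x|))
    return max(logit, 0) - logit * label + math.log(1 + math.exp(-abs(logit)))

smooth = 0.1
real_logits = [2.0, 3.0, 1.5]  # hypothetical discriminator outputs on real images

# Smoothed labels of 0.9, as in the exercise above.
d_loss_real = sum(sigmoid_cross_entropy(x, 1 - smooth) for x in real_logits) / len(real_logits)

# With hard labels of 1.0 the loss is smaller for confident positive logits:
# the smoothed target 0.9 penalizes over-confidence.
d_loss_hard = sum(sigmoid_cross_entropy(x, 1.0) for x in real_logits) / len(real_logits)
print(d_loss_real > d_loss_hard)  # True
```

This is why smoothing keeps the discriminator from saturating: it can never drive the loss on real images all the way to zero by becoming arbitrarily confident.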
Discussion: The square wave is not maintained because the first-order backward differencing scheme in space creates false diffusion (numerical diffusion). If we reduce the spatial step (i.e., increase the spatial resolution nx), the error decreases. The wave shifts to the right at constant speed $c\Delta t$. Near the wall (the end of the x array), the line starts to become linear (a straight line) because no viscosity is defined, hence the non-physical behavior near the wall. Step 2: 1-D Non-linear Convection Starting from the 1-D linear convection equation, instead of keeping the convection speed constant, we now use $u$ itself as the convection speed: $$ \frac{\partial u}{\partial t} + u\frac{\partial u}{\partial x} = 0 $$ Applying the same discretization: $$ \frac{u_i^{n+1} - u_i^n}{\Delta t} + u^n_i \frac{u^n_i - u^n_{i-1}}{\Delta x} = 0 $$ Transposing: $$ u^{n+1}_i = u^n_i - u^n_i \frac{\Delta t}{\Delta x}(u^n_i - u^n_{i-1}) $$
import numpy as np
from matplotlib import pyplot

def nonlin_convection_1d(nx=41, nt=50, dt=.01, u_init=[2, 2, 2, 2], init_offset=5, keep_all=False):
    '''
    nx = 41  # number of horizontal location points (x axis on graph)
    nt = 50  # number of time steps (iterations)
    dt = .01 # resolution of the time step
    '''
    dx = 2. / (nx - 1)
    x = np.linspace(0, 2, nx)
    u = np.ones(nx)
    u[init_offset:(init_offset + len(u_init))] = u_init[:]
    for n in range(nt):
        un = u.copy()
        for i in range(1, nx - 1):
            u[i] = un[i] - un[i] * dt / dx * (un[i] - un[i - 1])
        pyplot.plot(x, u)
        pyplot.axis([0, 2, .5, 2.5])
        pyplot.pause(0.05)
        if not keep_all:
            if not n == nt - 1:
                pyplot.cla()
    pyplot.show()

%matplotlib inline
nonlin_convection_1d(keep_all=True)
nonlin_convection_1d(nx=101, nt=101, dt=0.005, u_init=2*np.ones(20), init_offset=10, keep_all=True)
nonlin_convection_1d()
lec_samples/lec2_navier_stokes.ipynb
gear/HPSC
gpl-3.0
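The inner `for i in range(1, nx - 1)` loop above can be replaced by NumPy slicing. A sketch (not from the original notebook) of one vectorized time step of $u^{n+1}_i = u^n_i - u^n_i \frac{\Delta t}{\Delta x}(u^n_i - u^n_{i-1})$:

```python
import numpy as np

nx, dt = 41, 0.01
dx = 2.0 / (nx - 1)
u = np.ones(nx)
u[5:10] = 2.0  # a square-wave initial condition

un = u.copy()
# Vectorized update over the interior points, mirroring range(1, nx - 1):
# element k of the slice corresponds to i = k + 1, so un[1:-1] - un[:-2]
# is exactly un[i] - un[i-1].
u[1:-1] = un[1:-1] - un[1:-1] * dt / dx * (un[1:-1] - un[:-2])
print(u.min(), u.max())
```

The vectorized form avoids the Python-level loop, which matters once nx grows: the per-step cost becomes a handful of array operations instead of nx function-call-free but interpreted iterations.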