Here I'm looking through a sample of values in each column, choosing the columns to explore based on their headings, a bit of contextual info from colleagues, and the `df.describe()` output above.
df["Subject"]
MIT
notebooks/archive_exploration/notebooks/01 - subject coocurrence.ipynb
wellcomecollection/data-science
After much trial and error, subjects look like an interesting avenue to explore further. Where subjects have _actually_ been filled in and the entry is not `None`, a list of subjects is returned. We can explore some of these subjects' subtleties by creating an adjacency matrix: we'll count the number of times each subject appears alongside every other subject and return a big $n \times n$ matrix, where $n$ is the total number of unique subjects. We can use this adjacency matrix for all sorts of stuff, but we have to build it first. To start, let's get a unique list of all subjects. This involves unpacking each sub-list and flattening them out into one long list, before finding the unique elements. We'll also use the `clean` function defined above to get rid of any irregularities which might become annoying later on.
subjects = flatten(df["Subject"].dropna().tolist())
print(len(subjects))

subjects = list(set(map(clean, subjects)))
print(len(subjects))
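The `flatten` and `clean` helpers used here are defined earlier in the notebook and aren't shown in this excerpt. A minimal sketch of what they might look like (the exact cleaning rules are an assumption):

```python
from itertools import chain

def flatten(list_of_lists):
    # unpack each sub-list into one long list
    return list(chain.from_iterable(list_of_lists))

def clean(subject):
    # a guess at the cleaning: strip surrounding whitespace and
    # trailing punctuation, then normalise case
    return subject.strip().strip(".,;").lower()

rows = [["Heart", "heart."], ["Cardiology "]]
unique_subjects = sorted(set(map(clean, flatten(rows))))
print(unique_subjects)  # ['cardiology', 'heart']
```

With cleaning like this, `"Heart"`, `"heart."` and `" heart "` all collapse to the same entry, which is what keeps the adjacency matrix from double-counting near-duplicate subjects.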
At this point it's often helpful to index our data, i.e. transform words into numbers. We'll create two dictionaries which map back and forth between the subjects and their corresponding indices:
index_to_subject = {index: subject for index, subject in enumerate(subjects)}
subject_to_index = {subject: index for index, subject in enumerate(subjects)}
Let's instantiate an empty numpy array which we'll then fill with our co-occurrence data. Each column and each row will represent a subject, so each cell (the intersection of a column and row) will represent the 'strength' of the interaction between those subjects. As we haven't seen any interactions yet, we'll set every array element to 0.
adjacency = np.zeros((len(subjects), len(subjects)), dtype=np.uint16)
To populate the matrix, we want to find every possible combination of subjects in each sub-list from our original column. For example, if we had the subjects `[Disease, Heart, Heart Diseases, Cardiology]`, we would want to return `[['Disease', 'Disease'], ['Heart', 'Disease'], ['Heart Diseases', 'Disease'], ['Cardiology', 'Disease'], ['Disease', 'Heart'], ['Heart', 'Heart'], ['Heart Diseases', 'Heart'], ['Cardiology', 'Heart'], ['Disease', 'Heart Diseases'], ['Heart', 'Heart Diseases'], ['Heart Diseases', 'Heart Diseases'], ['Cardiology', 'Heart Diseases'], ['Disease', 'Cardiology'], ['Heart', 'Cardiology'], ['Heart Diseases', 'Cardiology'], ['Cardiology', 'Cardiology']]`. The `cartesian()` function which I've defined above will do that for us. We then find the appropriate intersection in the matrix and add another unit of 'strength' to it. We'll do this for every row of subjects in the `Subject` column.
for row_of_subjects in tqdm(df["Subject"].dropna()):
    for subject_pair in cartesian(row_of_subjects, row_of_subjects):
        subject_index_1 = subject_to_index[clean(subject_pair[0])]
        subject_index_2 = subject_to_index[clean(subject_pair[1])]
        adjacency[subject_index_1, subject_index_2] += 1
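The `cartesian` helper is also defined earlier in the notebook; `itertools.product` does the same job. A small self-contained sketch of the counting step (the pair ordering differs from the listing above, which doesn't affect the counts):

```python
from itertools import product

def cartesian(a, b):
    # every ordered pair drawn from the two lists
    return [list(pair) for pair in product(a, b)]

subjects = ["Disease", "Heart"]
subject_to_index = {"Disease": 0, "Heart": 1}

# 2x2 toy adjacency matrix, plain lists standing in for the numpy array
adjacency = [[0, 0], [0, 0]]
for s1, s2 in cartesian(subjects, subjects):
    adjacency[subject_to_index[s1]][subject_to_index[s2]] += 1

print(adjacency)  # [[1, 1], [1, 1]]
```

Because each subject is paired with itself as well, the diagonal of the full matrix ends up holding the total number of records each subject appears in.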
We can do all sorts of fun stuff now - adjacency matrices are the foundation on which all of graph theory is built. However, because it's a bit more interesting, I'm going to start with some dimensionality reduction. We'll get to the graphy stuff later. Using [UMAP](https://github.com/lmcinnes/umap), we can squash the $n \times n$ dimensional matrix down into an $n \times m$ dimensional one, where $m$ is some arbitrary integer. Setting $m$ to 2 will then allow us to plot each subject as a point on a two dimensional plane. UMAP will try to preserve the 'distances' between subjects - in this case, that means that related or topically similar subjects will end up clustered together, and different subjects will move apart.
embedding_2d = pd.DataFrame(UMAP(n_components=2).fit_transform(adjacency))
embedding_2d.plot.scatter(x=0, y=1);
We can isolate the clusters we've found above using a number of different methods - `scikit-learn` provides easy access to some very powerful algorithms. Here I'll use a technique called _agglomerative clustering_, and make a guess that 15 is an appropriate number of clusters to look for.
n_clusters = 15

embedding_2d["labels"] = AgglomerativeClustering(n_clusters).fit_predict(
    embedding_2d.values
)
embedding_2d.plot.scatter(x=0, y=1, c="labels", cmap="Paired");
We can now use the `index_to_subject` mapping that we created earlier to examine which subjects have been grouped together into clusters.
for i in range(n_clusters):
    print(str(i) + " " + "-" * 80 + "\n")
    print(
        np.sort(
            [
                index_to_subject[index]
                for index in embedding_2d[embedding_2d["labels"] == i].index.values
            ]
        )
    )
    print("\n")
# TV Script Generation

In this project, you'll generate your own [Seinfeld](https://en.wikipedia.org/wiki/Seinfeld) TV scripts using RNNs. You'll be using part of the [Seinfeld dataset](https://www.kaggle.com/thec03u5/seinfeld-chronicles#scripts.csv) of scripts from 9 seasons. The neural network you'll build will generate a new, "fake" TV script based on patterns it recognizes in this training data.

## Get the Data

The data is already provided for you in `./data/Seinfeld_Scripts.txt` and you're encouraged to open that file and look at the text.
> * As a first step, we'll load in this data and look at some samples.
> * Then, you'll be tasked with defining and training an RNN to generate a new script!
""" DON'T MODIFY ANYTHING IN THIS CELL """ # load in data import helper data_dir = './data/Seinfeld_Scripts.txt' text = helper.load_data(data_dir)
MIT
project-tv-script-generation/dlnd_tv_script_generation.ipynb
NorahAlshaya/TV-Script-Generation-Project
## Explore the Data

Play around with `view_line_range` to view different parts of the data. This will give you a sense of the data you'll be working with. You can see, for example, that it is all lowercase text, and each new line of dialogue is separated by a newline character `\n`.
view_line_range = (0, 10)

"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
import numpy as np

print('Dataset Stats')
print('Roughly the number of unique words: {}'.format(len({word: None for word in text.split()})))

lines = text.split('\n')
print('Number of lines: {}'.format(len(lines)))
word_count_line = [len(line.split()) for line in lines]
print('Average number of words in each line: {}'.format(np.average(word_count_line)))

print()
print('The lines {} to {}:'.format(*view_line_range))
print('\n'.join(text.split('\n')[view_line_range[0]:view_line_range[1]]))
Dataset Stats
Roughly the number of unique words: 46367
Number of lines: 109233
Average number of words in each line: 5.544240293684143

The lines 0 to 10:
jerry: do you know what this is all about? do you know, why were here? to be out, this is out...and out is one of the single most enjoyable experiences of life. people...did you ever hear people talking about we should go out? this is what theyre talking about...this whole thing, were all out now, no one is home. not one person here is home, were all out! there are people trying to find us, they dont know where we are. (on an imaginary phone) did you ring?, i cant find him. where did he go? he didnt tell me where he was going. he must have gone out. you wanna go out you get ready, you pick out the clothes, right? you take the shower, you get all ready, get the cash, get your friends, the car, the spot, the reservation...then youre standing around, what do you do? you go we gotta be getting back. once youre out, you wanna get back! you wanna go to sleep, you wanna get up, you wanna go out again tomorrow, right? where ever you are in life, its my feeling, youve gotta go.
jerry: (pointing at georges shirt) see, to me, that button is in the worst possible spot. the second button literally makes or breaks the shirt, look at it. its too high! its in no-mans-land. you look like you live with your mother.
george: are you through?
jerry: you do of course try on, when you buy?
george: yes, it was purple, i liked it, i dont actually recall considering the buttons.
---
## Implement Pre-processing Functions

The first thing to do to any dataset is pre-processing. Implement the following pre-processing functions below:
- Lookup Table
- Tokenize Punctuation

### Lookup Table

To create a word embedding, you first need to transform the words to ids. In this function, create two dictionaries:
- Dictionary to go from the words to an id, we'll call `vocab_to_int`
- Dictionary to go from the id to word, we'll call `int_to_vocab`

Return these dictionaries in the following **tuple** `(vocab_to_int, int_to_vocab)`
import problem_unittests as tests
from collections import Counter

def create_lookup_tables(text):
    """
    Create lookup tables for vocabulary
    :param text: The text of tv scripts split into words
    :return: A tuple of dicts (vocab_to_int, int_to_vocab)
    """
    # count how often each word appears
    word_counts = Counter(text)
    # sort the vocabulary from most to least frequent
    vocab = sorted(word_counts, key=word_counts.get, reverse=True)
    # map each integer id to its word...
    int_to_vocab = {ii: word for ii, word in enumerate(vocab)}
    # ...and each word back to its id
    vocab_to_int = {word: ii for ii, word in int_to_vocab.items()}
    # return tuple
    return (vocab_to_int, int_to_vocab)

"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_create_lookup_tables(create_lookup_tables)
Tests Passed
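Beyond the unit test, a quick sanity check on a toy corpus shows what the two dictionaries look like. This snippet restates the same logic so it runs standalone (outside the graded cell):

```python
from collections import Counter

def create_lookup_tables(text):
    # same logic as the cell above
    word_counts = Counter(text)
    vocab = sorted(word_counts, key=word_counts.get, reverse=True)
    int_to_vocab = {ii: word for ii, word in enumerate(vocab)}
    vocab_to_int = {word: ii for ii, word in int_to_vocab.items()}
    return vocab_to_int, int_to_vocab

words = "to be or not to be".split()
vocab_to_int, int_to_vocab = create_lookup_tables(words)

# the two dicts are exact inverses of each other
assert all(int_to_vocab[ii] == word for word, ii in vocab_to_int.items())
```

Sorting by frequency means the most common words ("to", "be") receive the lowest ids, which is a common convention for vocabulary tables.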
### Tokenize Punctuation

We'll be splitting the script into a word array using spaces as delimiters. However, punctuation like periods and exclamation marks can create multiple ids for the same word. For example, "bye" and "bye!" would generate two different word ids.

Implement the function `token_lookup` to return a dict that will be used to tokenize symbols like "!" into "||Exclamation_Mark||". Create a dictionary for the following symbols where the symbol is the key and value is the token:
- Period ( **.** )
- Comma ( **,** )
- Quotation Mark ( **"** )
- Semicolon ( **;** )
- Exclamation mark ( **!** )
- Question mark ( **?** )
- Left Parentheses ( **(** )
- Right Parentheses ( **)** )
- Dash ( **-** )
- Return ( **\n** )

This dictionary will be used to tokenize the symbols and add the delimiter (space) around it. This separates each symbol as its own word, making it easier for the neural network to predict the next word. Make sure you don't use a value that could be confused as a word; for example, instead of using the value "dash", try using something like "||dash||".
def token_lookup():
    """
    Generate a dict to turn punctuation into a token.
    :return: Tokenized dictionary where the key is the punctuation and the value is the token
    """
    tokens = dict()
    tokens['.'] = '||period||'
    tokens[','] = '||comma||'
    tokens['"'] = '||quotation_mark||'
    tokens[';'] = '||semicolon||'
    tokens['!'] = '||exclam_mark||'
    tokens['?'] = '||question_mark||'
    tokens['('] = '||left_par||'
    tokens[')'] = '||right_par||'
    tokens['-'] = '||dash||'
    tokens['\n'] = '||return||'
    return tokens

"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_tokenize(token_lookup)
Tests Passed
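To see how the returned dict gets used, here's a sketch of the substitution step (mirroring, under assumption, what `preprocess_and_save_data` does with it — a two-entry dict stands in for the full lookup):

```python
tokens = {'.': '||period||', '!': '||exclamation_mark||'}

text = "bye! bye."
for symbol, token in tokens.items():
    # pad the token with spaces so it later splits out as its own "word"
    text = text.replace(symbol, ' {} '.format(token))

words = text.lower().split()
print(words)  # ['bye', '||exclamation_mark||', 'bye', '||period||']
```

After this pass, "bye" maps to a single id regardless of the punctuation that followed it, which is exactly the problem the lookup table solves.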
### Pre-process all the data and save it

Running the code cell below will pre-process all the data and save it to file. You're encouraged to look at the code for `preprocess_and_save_data` in the `helper.py` file to see what it's doing in detail, but you do not need to change this code.
""" DON'T MODIFY ANYTHING IN THIS CELL """ # pre-process training data helper.preprocess_and_save_data(data_dir, token_lookup, create_lookup_tables)
# Check Point

This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
""" DON'T MODIFY ANYTHING IN THIS CELL """ import helper import problem_unittests as tests int_text, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
## Build the Neural Network

In this section, you'll build the components necessary to build an RNN by implementing the RNN Module and forward and backpropagation functions.

### Check Access to GPU
""" DON'T MODIFY ANYTHING IN THIS CELL """ import torch # Check for a GPU train_on_gpu = torch.cuda.is_available() if not train_on_gpu: print('No GPU found. Please use a GPU to train your neural network.')
### Input

Let's start with the preprocessed input data. We'll use [TensorDataset](http://pytorch.org/docs/master/data.html#torch.utils.data.TensorDataset) to provide a known format to our dataset; in combination with [DataLoader](http://pytorch.org/docs/master/data.html#torch.utils.data.DataLoader), it will handle batching, shuffling, and other dataset iteration functions.

You can create data with TensorDataset by passing in feature and target tensors. Then create a DataLoader as usual.
```
data = TensorDataset(feature_tensors, target_tensors)
data_loader = torch.utils.data.DataLoader(data, batch_size=batch_size)
```

### Batching

Implement the `batch_data` function to batch `words` data into chunks of size `batch_size` using the `TensorDataset` and `DataLoader` classes.

> You can batch words using the DataLoader, but it will be up to you to create `feature_tensors` and `target_tensors` of the correct size and content for a given `sequence_length`.

For example, say we have these as input:
```
words = [1, 2, 3, 4, 5, 6, 7]
sequence_length = 4
```
Your first `feature_tensor` should contain the values:
```
[1, 2, 3, 4]
```
And the corresponding `target_tensor` should just be the next "word"/tokenized word value:
```
5
```
This should continue with the second `feature_tensor`, `target_tensor` being:
```
[2, 3, 4, 5]  # features
6             # target
```
from torch.utils.data import TensorDataset, DataLoader

def batch_data(words, sequence_length, batch_size):
    """
    Batch the neural network data using DataLoader
    :param words: The word ids of the TV scripts
    :param sequence_length: The sequence length of each batch
    :param batch_size: The size of each batch; the number of sequences in a batch
    :return: DataLoader with batched data
    """
    # keep only enough words to fill complete batches
    n_batches = len(words) // batch_size
    words = words[:n_batches * batch_size]

    # build sliding windows of features and their next-word targets
    x, y = [], []
    for idx in range(0, len(words) - sequence_length):
        x.append(words[idx:idx + sequence_length])
        y.append(words[idx + sequence_length])

    feature_tensors = torch.from_numpy(np.asarray(x))
    target_tensors = torch.from_numpy(np.asarray(y))
    dataset = TensorDataset(feature_tensors, target_tensors)
    dataloader = DataLoader(dataset, batch_size=batch_size)

    # return a dataloader
    return dataloader

# there is no test for this function, but you are encouraged to create
# print statements and tests of your own
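The windowing logic at the heart of `batch_data` can be checked in plain Python before wiring it into `TensorDataset`, using the example from the prompt:

```python
words = [1, 2, 3, 4, 5, 6, 7]
sequence_length = 4

features, targets = [], []
for idx in range(len(words) - sequence_length):
    features.append(words[idx:idx + sequence_length])  # a window of words...
    targets.append(words[idx + sequence_length])       # ...and the word that follows it

print(features)  # [[1, 2, 3, 4], [2, 3, 4, 5], [3, 4, 5, 6]]
print(targets)   # [5, 6, 7]
```

Each feature window is paired with the single word that immediately follows it, which matches the `[1, 2, 3, 4] -> 5` example above.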
### Test your dataloader

You'll have to modify this code to test a batching function, but it should look fairly similar.

Below, we're generating some test text data and defining a dataloader using the function you defined above. Then, we are getting a sample batch of inputs `sample_x` and targets `sample_y` from our dataloader.

Your code should return something like the following (likely in a different order, if you shuffled your data):
```
torch.Size([10, 5])
tensor([[ 28, 29, 30, 31, 32],
        [ 21, 22, 23, 24, 25],
        [ 17, 18, 19, 20, 21],
        [ 34, 35, 36, 37, 38],
        [ 11, 12, 13, 14, 15],
        [ 23, 24, 25, 26, 27],
        [  6,  7,  8,  9, 10],
        [ 38, 39, 40, 41, 42],
        [ 25, 26, 27, 28, 29],
        [  7,  8,  9, 10, 11]])
torch.Size([10])
tensor([ 33, 26, 22, 39, 16, 28, 11, 43, 30, 12])
```

### Sizes

Your sample_x should be of size `(batch_size, sequence_length)` or (10, 5) in this case and sample_y should just have one dimension: batch_size (10).

### Values

You should also notice that the targets, sample_y, are the *next* value in the ordered test_text data. So, for an input sequence `[ 28, 29, 30, 31, 32]` that ends with the value `32`, the corresponding output should be `33`.
# test dataloader
test_text = range(50)
t_loader = batch_data(test_text, sequence_length=5, batch_size=10)

data_iter = iter(t_loader)
sample_x, sample_y = next(data_iter)

print(sample_x.shape)
print(sample_x)
print()
print(sample_y.shape)
print(sample_y)
torch.Size([10, 5])
tensor([[ 0,  1,  2,  3,  4],
        [ 1,  2,  3,  4,  5],
        [ 2,  3,  4,  5,  6],
        [ 3,  4,  5,  6,  7],
        [ 4,  5,  6,  7,  8],
        [ 5,  6,  7,  8,  9],
        [ 6,  7,  8,  9, 10],
        [ 7,  8,  9, 10, 11],
        [ 8,  9, 10, 11, 12],
        [ 9, 10, 11, 12, 13]])

torch.Size([10])
tensor([ 5,  6,  7,  8,  9, 10, 11, 12, 13, 14])
---
## Build the Neural Network

Implement an RNN using PyTorch's [Module class](http://pytorch.org/docs/master/nn.html#torch.nn.Module). You may choose to use a GRU or an LSTM. To complete the RNN, you'll have to implement the following functions for the class:
- `__init__` - The initialize function.
- `init_hidden` - The initialization function for an LSTM/GRU hidden state
- `forward` - Forward propagation function.

The initialize function should create the layers of the neural network and save them to the class. The forward propagation function will use these layers to run forward propagation and generate an output and a hidden state.

**The output of this model should be the *last* batch of word scores** after a complete sequence has been processed. That is, for each input sequence of words, we only want to output the word scores for a single, most likely, next word.

### Hints

1. Make sure to stack the outputs of the lstm to pass to your fully-connected layer; you can do this with `lstm_output = lstm_output.contiguous().view(-1, self.hidden_dim)`
2. You can get the last batch of word scores by shaping the output of the final, fully-connected layer like so:
```
# reshape into (batch_size, seq_length, output_size)
output = output.view(batch_size, -1, self.output_size)
# get last batch
out = output[:, -1]
```
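The reshaping in hint 2 can be sanity-checked with numpy standing in for torch tensors (`reshape` plays the role of `view`; the numbers here are made up for illustration):

```python
import numpy as np

batch_size, seq_length, output_size = 2, 3, 4

# stand-in for the flat (batch_size*seq_length, output_size) matrix from the fc layer
flat = np.arange(batch_size * seq_length * output_size).reshape(-1, output_size)

# torch equivalent: output = output.view(batch_size, -1, self.output_size)
out = flat.reshape(batch_size, seq_length, output_size)

# torch equivalent: out = output[:, -1]
# keeps only the scores for the final word of each sequence
last = out[:, -1]
print(last.shape)  # (2, 4)
```

Each of the two sequences in the batch contributes exactly one row of word scores, which is the shape the cross-entropy loss expects.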
import torch.nn as nn

class RNN(nn.Module):

    def __init__(self, vocab_size, output_size, embedding_dim, hidden_dim, n_layers, dropout=0.5):
        """
        Initialize the PyTorch RNN Module
        :param vocab_size: The number of input dimensions of the neural network (the size of the vocabulary)
        :param output_size: The number of output dimensions of the neural network
        :param embedding_dim: The size of embeddings, should you choose to use them
        :param hidden_dim: The size of the hidden layer outputs
        :param dropout: dropout to add in between LSTM/GRU layers
        """
        super(RNN, self).__init__()
        # set class variables
        self.output_size = output_size
        self.n_layers = n_layers
        self.hidden_dim = hidden_dim

        # define model layers
        self.embedding = nn.Embedding(vocab_size, embedding_dim)
        self.lstm = nn.LSTM(embedding_dim, hidden_dim, n_layers, dropout=dropout, batch_first=True)
        self.fc = nn.Linear(hidden_dim, output_size)

    def forward(self, nn_input, hidden):
        """
        Forward propagation of the neural network
        :param nn_input: The input to the neural network
        :param hidden: The hidden state
        :return: Two Tensors, the output of the neural network and the latest hidden state
        """
        batch_size = nn_input.size(0)

        embeds = self.embedding(nn_input)
        lstm_out, hidden = self.lstm(embeds, hidden)
        # stack the lstm outputs for the fully-connected layer
        lstm_out = lstm_out.contiguous().view(-1, self.hidden_dim)

        out = self.fc(lstm_out)
        # reshape into (batch_size, seq_length, output_size)
        out = out.view(batch_size, -1, self.output_size)
        # keep only the scores for the last word of each sequence
        output = out[:, -1]

        # return one batch of output word scores and the hidden state
        return output, hidden

    def init_hidden(self, batch_size):
        '''
        Initialize the hidden state of an LSTM/GRU
        :param batch_size: The batch_size of the hidden state
        :return: hidden state of dims (n_layers, batch_size, hidden_dim)
        '''
        # initialize hidden state with zero weights, and move to GPU if available
        weight = next(self.parameters()).data

        if train_on_gpu:
            hidden = (weight.new(self.n_layers, batch_size, self.hidden_dim).zero_().cuda(),
                      weight.new(self.n_layers, batch_size, self.hidden_dim).zero_().cuda())
        else:
            hidden = (weight.new(self.n_layers, batch_size, self.hidden_dim).zero_(),
                      weight.new(self.n_layers, batch_size, self.hidden_dim).zero_())

        return hidden

"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_rnn(RNN, train_on_gpu)
Tests Passed
### Define forward and backpropagation

Use the RNN class you implemented to apply forward and back propagation. This function will be called, iteratively, in the training loop as follows:
```
loss = forward_back_prop(decoder, decoder_optimizer, criterion, inp, target)
```
And it should return the average loss over a batch and the hidden state returned by a call to `RNN(inp, hidden)`. Recall that you can get this loss by computing it, as usual, and calling `loss.item()`.

**If a GPU is available, you should move your data to that GPU device, here.**
def forward_back_prop(rnn, optimizer, criterion, inp, target, hidden):
    """
    Forward and backward propagation on the neural network
    :param rnn: The PyTorch Module that holds the neural network
    :param optimizer: The PyTorch optimizer for the neural network
    :param criterion: The PyTorch loss function
    :param inp: A batch of input to the neural network
    :param target: The target output for the batch of input
    :return: The loss and the latest hidden state Tensor
    """
    # move data to GPU, if available
    if train_on_gpu:
        inp = inp.cuda()
        target = target.cuda()

    # detach the hidden state from its history so we don't backprop through the whole dataset
    hidden = tuple([each.data for each in hidden])

    # perform backpropagation and optimization
    rnn.zero_grad()
    output, hidden = rnn(inp, hidden)
    loss = criterion(output, target)
    loss.backward()
    # clip gradients to prevent them from exploding
    nn.utils.clip_grad_norm_(rnn.parameters(), 5)
    optimizer.step()

    # return the loss over a batch and the hidden state produced by our model
    return loss.item(), hidden

# Note that these tests aren't completely extensive.
# they are here to act as general checks on the expected outputs of your functions
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_forward_back_prop(RNN, forward_back_prop, train_on_gpu)
Tests Passed
## Neural Network Training

With the structure of the network complete and data ready to be fed into the neural network, it's time to train it.

### Train Loop

The training loop is implemented for you in the `train_rnn` function below. This function will train the network over all the batches for the number of epochs given. Model progress is printed every `show_every_n_batches` batches; you'll set this parameter along with other parameters in the next section.
""" DON'T MODIFY ANYTHING IN THIS CELL """ def train_rnn(rnn, batch_size, optimizer, criterion, n_epochs, show_every_n_batches=100): batch_losses = [] rnn.train() print("Training for %d epoch(s)..." % n_epochs) for epoch_i in range(1, n_epochs + 1): # initialize hidden state hidden = rnn.init_hidden(batch_size) for batch_i, (inputs, labels) in enumerate(train_loader, 1): # make sure you iterate over completely full batches, only n_batches = len(train_loader.dataset)//batch_size if(batch_i > n_batches): break # forward, back prop loss, hidden = forward_back_prop(rnn, optimizer, criterion, inputs, labels, hidden) # record loss batch_losses.append(loss) # printing loss stats if batch_i % show_every_n_batches == 0: print('Epoch: {:>4}/{:<4} Loss: {}\n'.format( epoch_i, n_epochs, np.average(batch_losses))) batch_losses = [] # returns a trained rnn return rnn
### Hyperparameters

Set and train the neural network with the following parameters:
- Set `sequence_length` to the length of a sequence.
- Set `batch_size` to the batch size.
- Set `num_epochs` to the number of epochs to train for.
- Set `learning_rate` to the learning rate for an Adam optimizer.
- Set `vocab_size` to the number of unique tokens in our vocabulary.
- Set `output_size` to the desired size of the output.
- Set `embedding_dim` to the embedding dimension; smaller than the vocab_size.
- Set `hidden_dim` to the hidden dimension of your RNN.
- Set `n_layers` to the number of layers/cells in your RNN.
- Set `show_every_n_batches` to the number of batches at which the neural network should print progress.

If the network isn't getting the desired results, tweak these parameters and/or the layers in the `RNN` class.
# Data params
# Sequence Length
sequence_length = 12  # of words in a sequence
# Batch Size
batch_size = 120

# data loader - do not change
train_loader = batch_data(int_text, sequence_length, batch_size)

# Training parameters
# Number of Epochs
num_epochs = 10
# Learning Rate
learning_rate = 0.001

# Model parameters
# Vocab size
vocab_size = len(vocab_to_int)
# Output size
output_size = len(vocab_to_int)
# Embedding Dimension
embedding_dim = 300
# Hidden Dimension
hidden_dim = int(300 * 1.25)
# Number of RNN Layers
n_layers = 2

# Show stats for every n number of batches
show_every_n_batches = 500
### Train

In the next cell, you'll train the neural network on the pre-processed data. If you have a hard time getting a good loss, you may consider changing your hyperparameters. In general, you may get better results with larger hidden and n_layer dimensions, but larger models take a longer time to train.

> **You should aim for a loss less than 3.5.**

You should also experiment with different sequence lengths, which determine the size of the long range dependencies that a model can learn.
""" DON'T MODIFY ANYTHING IN THIS CELL """ # create model and move to gpu if available rnn = RNN(vocab_size, output_size, embedding_dim, hidden_dim, n_layers, dropout=0.5) if train_on_gpu: rnn.cuda() # defining loss and optimization functions for training optimizer = torch.optim.Adam(rnn.parameters(), lr=learning_rate) criterion = nn.CrossEntropyLoss() # training the model trained_rnn = train_rnn(rnn, batch_size, optimizer, criterion, num_epochs, show_every_n_batches) # saving the trained model helper.save_model('./save/trained_rnn', trained_rnn) print('Model Trained and Saved')
Training for 10 epoch(s)...
Epoch: 1/10 Loss: 5.531735227108002
Epoch: 1/10 Loss: 4.930219256401062
Epoch: 1/10 Loss: 4.637746600151062
Epoch: 1/10 Loss: 4.442910384654999
Epoch: 1/10 Loss: 4.507025374412537
Epoch: 1/10 Loss: 4.437873532295227
Epoch: 1/10 Loss: 4.499480187416077
Epoch: 1/10 Loss: 4.360066707611084
Epoch: 1/10 Loss: 4.224316485881806
Epoch: 1/10 Loss: 4.262131739616394
Epoch: 1/10 Loss: 4.250785901069641
Epoch: 1/10 Loss: 4.350858811378479
Epoch: 1/10 Loss: 4.325847270488739
Epoch: 1/10 Loss: 4.353909639358521
Epoch: 2/10 Loss: 4.122715645664375
Epoch: 2/10 Loss: 3.9956280279159544
Epoch: 2/10 Loss: 3.8856767058372497
Epoch: 2/10 Loss: 3.764372691631317
Epoch: 2/10 Loss: 3.86486045217514
Epoch: 2/10 Loss: 3.8547475366592407
Epoch: 2/10 Loss: 3.9458832178115846
Epoch: 2/10 Loss: 3.8158533701896666
Epoch: 2/10 Loss: 3.730561679840088
Epoch: 2/10 Loss: 3.7631667466163634
Epoch: 2/10 Loss: 3.792806112766266
Epoch: 2/10 Loss: 3.901373929977417
Epoch: 2/10 Loss: 3.8545562386512757
Epoch: 2/10 Loss: 3.8934123978614807
Epoch: 3/10 Loss: 3.757648332837055
Epoch: 3/10 Loss: 3.715178225517273
Epoch: 3/10 Loss: 3.630136202812195
Epoch: 3/10 Loss: 3.5242831320762633
Epoch: 3/10 Loss: 3.625628924369812
Epoch: 3/10 Loss: 3.6309379606246948
Epoch: 3/10 Loss: 3.7150929403305053
Epoch: 3/10 Loss: 3.5617436628341674
Epoch: 3/10 Loss: 3.5180994987487795
Epoch: 3/10 Loss: 3.524529664516449
Epoch: 3/10 Loss: 3.5592564339637756
Epoch: 3/10 Loss: 3.682766806125641
Epoch: 3/10 Loss: 3.63344953250885
Epoch: 3/10 Loss: 3.6614061670303344
Epoch: 4/10 Loss: 3.569586784254444
Epoch: 4/10 Loss: 3.5327431926727293
Epoch: 4/10 Loss: 3.4778979787826536
Epoch: 4/10 Loss: 3.3810313334465025
Epoch: 4/10 Loss: 3.4490697503089907
Epoch: 4/10 Loss: 3.4713255314826967
Epoch: 4/10 Loss: 3.540807016849518
Epoch: 4/10 Loss: 3.395219274520874
Epoch: 4/10 Loss: 3.3618284215927123
Epoch: 4/10 Loss: 3.380989155292511
Epoch: 4/10 Loss: 3.3962616963386534
Epoch: 4/10 Loss: 3.5119336886405943
Epoch: 4/10 Loss: 3.5053564672470094
Epoch: 4/10 Loss: 3.52675777053833
Epoch: 5/10 Loss: 3.444213536519074
Epoch: 5/10 Loss: 3.4076444568634034
Epoch: 5/10 Loss: 3.3569597172737122
Epoch: 5/10 Loss: 3.264707137107849
Epoch: 5/10 Loss: 3.325695327758789
Epoch: 5/10 Loss: 3.3434394330978394
Epoch: 5/10 Loss: 3.4178655323982237
Epoch: 5/10 Loss: 3.292145290374756
Epoch: 5/10 Loss: 3.25900110912323
Epoch: 5/10 Loss: 3.282059187412262
Epoch: 5/10 Loss: 3.286310025691986
Epoch: 5/10 Loss: 3.371444211959839
Epoch: 5/10 Loss: 3.3803050670623778
Epoch: 5/10 Loss: 3.4059303545951845
Epoch: 6/10 Loss: 3.3510205145178626
Epoch: 6/10 Loss: 3.3233260822296145
Epoch: 6/10 Loss: 3.262583809375763
Epoch: 6/10 Loss: 3.1777939085960387
Epoch: 6/10 Loss: 3.2358165702819823
Epoch: 6/10 Loss: 3.2441793150901796
Epoch: 6/10 Loss: 3.323215190887451
Epoch: 6/10 Loss: 3.2096805644035338
Epoch: 6/10 Loss: 3.1801719818115233
Epoch: 6/10 Loss: 3.198467743396759
Epoch: 6/10 Loss: 3.1996535511016844
Epoch: 6/10 Loss: 3.2810081453323363
Epoch: 6/10 Loss: 3.292669029712677
Epoch: 6/10 Loss: 3.3275886268615724
Epoch: 7/10 Loss: 3.2740323561436364
Epoch: 7/10 Loss: 3.2473365926742552
Epoch: 7/10 Loss: 3.189321361064911
Epoch: 7/10 Loss: 3.1124250736236574
Epoch: 7/10 Loss: 3.1598252415657044
Epoch: 7/10 Loss: 3.174737638950348
Epoch: 7/10 Loss: 3.2507713837623595
Epoch: 7/10 Loss: 3.133176600456238
Epoch: 7/10 Loss: 3.1098085503578186
Epoch: 7/10 Loss: 3.1263022136688234
Epoch: 7/10 Loss: 3.1329917140007018
Epoch: 7/10 Loss: 3.2054256014823914
Epoch: 7/10 Loss: 3.2255016083717347
Epoch: 7/10 Loss: 3.249888722896576
Epoch: 8/10 Loss: 3.2023202670221034
Epoch: 8/10 Loss: 3.1807839002609253
Epoch: 8/10 Loss: 3.132005618095398
Epoch: 8/10 Loss: 3.0564675722122194
Epoch: 8/10 Loss: 3.101025879383087
Epoch: 8/10 Loss: 3.1125088901519775
Epoch: 8/10 Loss: 3.191727280139923
Epoch: 8/10 Loss: 3.073734776496887
Epoch: 8/10 Loss: 3.0565507707595825
Epoch: 8/10 Loss: 3.068301407337189
Epoch: 8/10 Loss: 3.0812396683692933
Epoch: 8/10 Loss: 3.148022204875946
Epoch: 8/10 Loss: 3.1773056478500368
Epoch: 8/10 Loss: 3.1913066611289977
Epoch: 9/10 Loss: 3.1471167175460604
Epoch: 9/10 Loss: 3.133128029823303
Epoch: 9/10 Loss: 3.085259078979492
Epoch: 9/10 Loss: 3.009995768547058
Epoch: 9/10 Loss: 3.0498082242012026
Epoch: 9/10 Loss: 3.0640956010818483
Epoch: 9/10 Loss: 3.144371497631073
Epoch: 9/10 Loss: 3.022254427909851
Epoch: 9/10 Loss: 3.0071170454025267
Epoch: 9/10 Loss: 3.0216007103919984
Epoch: 9/10 Loss: 3.0384934406280517
Epoch: 9/10 Loss: 3.1074284529685974
Epoch: 9/10 Loss: 3.1239990234375
Epoch: 9/10 Loss: 3.136796194553375
Epoch: 10/10 Loss: 3.0981430233099836
Epoch: 10/10 Loss: 3.0887152523994446
Epoch: 10/10 Loss: 3.0389931325912474
Epoch: 10/10 Loss: 2.9724698853492737
Epoch: 10/10 Loss: 3.003671471595764
Epoch: 10/10 Loss: 3.021963978290558
Epoch: 10/10 Loss: 3.099330397605896
Epoch: 10/10 Loss: 2.9838244442939756
Epoch: 10/10 Loss: 2.9682252025604248
Epoch: 10/10 Loss: 2.9791079874038697
Epoch: 10/10 Loss: 2.999862591743469
Epoch: 10/10 Loss: 3.0582768750190734
Epoch: 10/10 Loss: 3.0778299646377563
Epoch: 10/10 Loss: 3.0915690803527833
MIT
project-tv-script-generation/dlnd_tv_script_generation.ipynb
NorahAlshaya/TV-Script-Generation-Project
Question: How did you decide on your model hyperparameters? For example, did you try different sequence_lengths and find that one size made the model converge faster? What about your hidden_dim and n_layers; how did you decide on those? **Answer:** I trained the model with the following parameters: 10 epochs, learning rate = 0.001, embedding dim = 300, hidden dim = 375, number of layers = 2, show_every_n_batches = 2500, and it gave a good loss: 2.96. --- Checkpoint After running the above training cell, your model will be saved by name, `trained_rnn`, and if you save your notebook progress, **you can pause here and come back to this code at another time**. You can resume your progress by running the next cell, which will load in our word:id dictionaries _and_ load in your saved model by name!
""" DON'T MODIFY ANYTHING IN THIS CELL """ import torch import helper import problem_unittests as tests _, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess() trained_rnn = helper.load_model('./save/trained_rnn')
_____no_output_____
MIT
project-tv-script-generation/dlnd_tv_script_generation.ipynb
NorahAlshaya/TV-Script-Generation-Project
Generate TV Script With the network trained and saved, you'll use it to generate a new, "fake" Seinfeld TV script in this section. Generate Text To generate the text, the network needs to start with a single word and repeat its predictions until it reaches a set length. You'll be using the `generate` function to do this. It takes a word id to start with, `prime_id`, and generates a set length of text, `predict_len`. Also note that it uses top-k sampling to introduce some randomness in choosing the most likely next word, given an output set of word scores!
""" DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ import torch.nn.functional as F def generate(rnn, prime_id, int_to_vocab, token_dict, pad_value, predict_len=100): """ Generate text using the neural network :param decoder: The PyTorch Module that holds the trained neural network :param prime_id: The word id to start the first prediction :param int_to_vocab: Dict of word id keys to word values :param token_dict: Dict of puncuation tokens keys to puncuation values :param pad_value: The value used to pad a sequence :param predict_len: The length of text to generate :return: The generated text """ rnn.eval() # create a sequence (batch_size=1) with the prime_id current_seq = np.full((1, sequence_length), pad_value) current_seq[-1][-1] = prime_id predicted = [int_to_vocab[prime_id]] for _ in range(predict_len): if train_on_gpu: current_seq = torch.LongTensor(current_seq).cuda() else: current_seq = torch.LongTensor(current_seq) # initialize the hidden state hidden = rnn.init_hidden(current_seq.size(0)) # get the output of the rnn output, _ = rnn(current_seq, hidden) # get the next word probabilities p = F.softmax(output, dim=1).data if(train_on_gpu): p = p.cpu() # move to cpu # use top_k sampling to get the index of the next word top_k = 5 p, top_i = p.topk(top_k) top_i = top_i.numpy().squeeze() # select the likely next word index with some element of randomness p = p.numpy().squeeze() word_i = np.random.choice(top_i, p=p/p.sum()) # retrieve that word from the dictionary word = int_to_vocab[word_i] predicted.append(word) # the generated word becomes the next "current sequence" and the cycle can continue current_seq = np.roll(current_seq, -1, 1) current_seq[-1][-1] = word_i gen_sentences = ' '.join(predicted) # Replace punctuation tokens for key, token in token_dict.items(): ending = ' ' if key in ['\n', '(', '"'] else '' gen_sentences = gen_sentences.replace(' ' + token.lower(), key) gen_sentences = gen_sentences.replace('\n ', '\n') gen_sentences = 
gen_sentences.replace('( ', '(') # return all the sentences return gen_sentences
_____no_output_____
MIT
project-tv-script-generation/dlnd_tv_script_generation.ipynb
NorahAlshaya/TV-Script-Generation-Project
Generate a New Script It's time to generate the text. Set `gen_length` to the length of TV script you want to generate and set `prime_word` to one of the following to start the prediction:- "jerry"- "elaine"- "george"- "kramer" You can set the prime word to _any word_ in our dictionary, but it's best to start with a name for generating a TV script. (You can also start with any other names you find in the original text file!)
# run the cell multiple times to get different results! gen_length = 400 # modify the length to your preference prime_word = 'jerry' # name for starting the script """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ pad_word = helper.SPECIAL_WORDS['PADDING'] generated_script = generate(trained_rnn, vocab_to_int[prime_word + ':'], int_to_vocab, token_dict, vocab_to_int[pad_word], gen_length) print(generated_script)
/opt/conda/lib/python3.6/site-packages/ipykernel_launcher.py:40: UserWarning: RNN module weights are not part of single contiguous chunk of memory. This means they need to be compacted at every call, possibly greatly increasing memory usage. To compact weights again call flatten_parameters().
MIT
project-tv-script-generation/dlnd_tv_script_generation.ipynb
NorahAlshaya/TV-Script-Generation-Project
Save your favorite scripts Once you have a script that you like (or find interesting), save it to a text file!
# save script to a text file with open("generated_script_1.txt", "w") as f: f.write(generated_script)
_____no_output_____
MIT
project-tv-script-generation/dlnd_tv_script_generation.ipynb
NorahAlshaya/TV-Script-Generation-Project
Problem 3 Use this notebook to write your code for problem 3 by filling in the sections marked `TODO` and running all cells.
import numpy as np import matplotlib.pyplot as plt import itertools import urllib.request urllib.request.urlretrieve('https://raw.githubusercontent.com/lakigigar/Caltech-CS155-2021/main/psets/set1/perceptron_helper.py', 'perceptron_helper.py') from perceptron_helper import ( predict, plot_data, boundary, plot_perceptron, ) %matplotlib inline
_____no_output_____
BSD-2-Clause
psets/set1/set1_prob3.ipynb
ichakraborty/CS155-iniproject
Implementation of Perceptron First, we will implement the perceptron algorithm. Fill in the `update_perceptron()` function so that it finds a single misclassified point and updates the weights and bias accordingly. If no point exists, the weights and bias should not change.Hint: You can use the `predict()` helper method, which labels a point 1 or -1 depending on the weights and bias.
def update_perceptron(X, Y, w, b): """ This method updates a perceptron model. Takes in the previous weights and returns weights after an update, which could be nothing. Inputs: X: A (N, D) shaped numpy array containing N D-dimensional points. Y: A (N, ) shaped numpy array containing the labels for the points. w: A (D, ) shaped numpy array containing the weight vector. b: A float containing the bias term. Output: next_w: A (D, ) shaped numpy array containing the next weight vector after updating on a single misclassified point, if one exists. next_b: The next float bias term after updating on a single misclassified point, if one exists. """ next_w, next_b = np.copy(w), np.copy(b) #============================================== # TODO: Implement update rule for perceptron. #=============================================== return next_w, next_b
_____no_output_____
BSD-2-Clause
psets/set1/set1_prob3.ipynb
ichakraborty/CS155-iniproject
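One hedged sketch of what `update_perceptron()` could look like — it assumes the classic perceptron rule $w \leftarrow w + yx$, $b \leftarrow b + y$ applied to the first misclassified point found, and reimplements the `predict()` helper locally (the real helper in `perceptron_helper.py` may differ slightly) so the snippet stands alone:

```python
import numpy as np

def predict_point(x, w, b):
    # stand-in for the predict() helper: label a single point +1 or -1
    return 1 if np.dot(w, x) + b > 0 else -1

def update_perceptron(X, Y, w, b):
    # copy so the caller's weights stay untouched when no point is misclassified
    next_w, next_b = np.copy(w).astype(float), float(b)
    for x, y in zip(X, Y):
        if predict_point(x, next_w, next_b) != y:
            # classic perceptron rule: nudge the boundary toward the point
            next_w = next_w + y * np.asarray(x, dtype=float)
            next_b = next_b + y
            break  # a single misclassified-point update per call
    return next_w, next_b
```

On the toy dataset from Problem 3A with `w = [0, 1]`, `b = 0`, the first misclassified point is `(1, -2)`, so one call returns `w = [1, -1]`, `b = 1`.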
Next you will fill in the `run_perceptron()` method. The method performs single updates on a misclassified point until convergence, or max_iter updates are made. The function will return the final weights and bias. You should use the `update_perceptron()` method you implemented above.
def run_perceptron(X, Y, w, b, max_iter): """ This method runs the perceptron learning algorithm. Takes in initial weights and runs max_iter update iterations. Returns final weights and bias. Inputs: X: A (N, D) shaped numpy array containing N D-dimensional points. Y: A (N, ) shaped numpy array containing the labels for the points. w: A (D, ) shaped numpy array containing the initial weight vector. b: A float containing the initial bias term. max_iter: An int for the maximum number of updates evaluated. Output: w: A (D, ) shaped numpy array containing the final weight vector. b: The final float bias term. """ #============================================ # TODO: Implement perceptron update loop. #============================================= return w, b
_____no_output_____
BSD-2-Clause
psets/set1/set1_prob3.ipynb
ichakraborty/CS155-iniproject
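A self-contained sketch of the convergence loop this cell asks for, assuming the classic rule $w \leftarrow w + yx$, $b \leftarrow b + y$ on one misclassified point per iteration (a sketch under that assumption, not the official solution; the loop stops early once a full pass finds nothing misclassified):

```python
import numpy as np

def run_perceptron(X, Y, w, b, max_iter):
    # assumed rule: w <- w + y*x, b <- b + y on one misclassified point per iteration
    w, b = np.copy(w).astype(float), float(b)
    for _ in range(max_iter):
        updated = False
        for x, y in zip(X, Y):
            pred = 1 if np.dot(w, x) + b > 0 else -1
            if pred != y:
                w += y * np.asarray(x, dtype=float)
                b += y
                updated = True
                break  # single update per iteration
        if not updated:
            break  # no misclassified point left: converged
    return w, b
```

On the Problem 3A data this converges after three updates to `w = [2, 0]`, `b = 3`.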
Problem 3A Visualizing a Toy Dataset We will begin by training our perceptron on a toy dataset of 3 points. The green points are labelled +1 and the red points are labelled -1. We use the helper function `plot_data()` to do so.
X = np.array([[ -3, -1], [0, 3], [1, -2]]) Y = np.array([ -1, 1, 1]) fig = plt.figure(figsize=(5,4)) ax = fig.gca(); ax.set_xlim(-4.1, 3.1); ax.set_ylim(-3.1, 4.1) plot_data(X, Y, ax)
_____no_output_____
BSD-2-Clause
psets/set1/set1_prob3.ipynb
ichakraborty/CS155-iniproject
Running the Perceptron Next, we will run the perceptron learning algorithm on this dataset. Update the code to show the weights and bias at each timestep and the misclassified point used in each update. Run the below code, and fill in the corresponding table in the set.
# Initialize weights and bias. weights = np.array([0.0, 1.0]) bias = 0.0 weights, bias = run_perceptron(X, Y, weights, bias, 16) print() print ("final w = %s, final b = %.1f" % (weights, bias))
_____no_output_____
BSD-2-Clause
psets/set1/set1_prob3.ipynb
ichakraborty/CS155-iniproject
Visualizing the Perceptron Getting all that information in table form isn't very informative. Let us visualize what the decision boundaries are at each timestep instead. The helper functions `boundary()` and `plot_perceptron()` plot a decision boundary given a perceptron's weights and bias. Note that the equation for the decision boundary is given by:$$w_1x_1 + w_2x_2 + b = 0.$$ Using some algebra, we can obtain $x_2$ from $x_1$ to plot the boundary as a line: $$x_2 = \frac{-w_1x_1 - b}{w_2}. $$ Below is a redefinition of the `run_perceptron()` method to visualize the points and decision boundaries at each timestep instead of printing. Fill in the method using your previous `run_perceptron()` method, and the above helper methods. Hint: The axs element is a list of Axes, which are used as subplots for each timestep. You can do the following:```ax = axs[i]```to get the plot corresponding to $t = i$. You can then use ax.set_title() to title each subplot. You will want to use the `plot_data()` and `plot_perceptron()` helper methods.
def run_perceptron(X, Y, w, b, axs, max_iter): """ This method runs the perceptron learning algorithm. Takes in initial weights and runs max_iter update iterations. Returns final weights and bias. Inputs: X: A (N, D) shaped numpy array containing N D-dimensional points. Y: A (N, ) shaped numpy array containing the labels for the points. w: A (D, ) shaped numpy array containing the initial weight vector. b: A float containing the initial bias term. axs: A list of Axes that contain suplots for each timestep. max_iter: An int for the maximum number of updates evaluated. Output: The final weight and bias vectors. """ #============================================ # TODO: Implement perceptron update loop. #============================================= return w, b
_____no_output_____
BSD-2-Clause
psets/set1/set1_prob3.ipynb
ichakraborty/CS155-iniproject
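If the helper signatures aren't to hand, the per-timestep plotting can also be sketched directly from the boundary equation $x_2 = (-w_1x_1 - b)/w_2$. Everything below is an assumption for a self-contained illustration — the Agg backend, the inline scatter coloring, and the inlined update rule stand in for the course's `plot_data()`/`plot_perceptron()` helpers:

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # headless backend for this sketch; the notebook uses %matplotlib inline
import matplotlib.pyplot as plt

def boundary_x2(x1, w, b):
    # rearrange w1*x1 + w2*x2 + b = 0 into x2 = (-w1*x1 - b) / w2
    return (-w[0] * x1 - b) / w[1]

def run_perceptron_plots(X, Y, w, b, axs, max_iter):
    w, b = np.copy(w).astype(float), float(b)
    xs = np.linspace(-4.1, 3.1, 100)
    for t in range(max_iter):
        ax = axs[t]
        ax.set_title(f"t = {t}")
        # scatter the labelled points: green for +1, red for -1
        ax.scatter(X[:, 0], X[:, 1],
                   c=["green" if y == 1 else "red" for y in Y])
        if w[1] != 0:  # a vertical boundary can't be drawn as x2(x1)
            ax.plot(xs, boundary_x2(xs, w, b))
        # inlined single misclassified-point update, assuming the classic rule
        for x, y in zip(X, Y):
            if (1 if np.dot(w, x) + b > 0 else -1) != y:
                w += y * np.asarray(x, dtype=float)
                b += y
                break
    return w, b
```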
Run the below code to get a visualization of the perceptron algorithm. The red regions are areas the perceptron classifies as negative examples.
# Initialize weights and bias. weights = np.array([0.0, 1.0]) bias = 0.0 f, ax_arr = plt.subplots(2, 2, sharex=True, sharey=True, figsize=(9,8)) axs = list(itertools.chain.from_iterable(ax_arr)) for ax in axs: ax.set_xlim(-4.1, 3.1); ax.set_ylim(-3.1, 4.1) run_perceptron(X, Y, weights, bias, axs, 4) f.tight_layout()
_____no_output_____
BSD-2-Clause
psets/set1/set1_prob3.ipynb
ichakraborty/CS155-iniproject
Problem 3C Visualize a Non-linearly Separable Dataset. We will now work on a dataset that cannot be linearly separated, namely one that is generated by the XOR function.
X = np.array([[0, 1], [1, 0], [0, 0], [1, 1]]) Y = np.array([1, 1, -1, -1]) fig = plt.figure(figsize=(5,4)) ax = fig.gca(); ax.set_xlim(-0.1, 1.1); ax.set_ylim(-0.1, 1.1) plot_data(X, Y, ax)
_____no_output_____
BSD-2-Clause
psets/set1/set1_prob3.ipynb
ichakraborty/CS155-iniproject
We will now run the perceptron algorithm on this dataset. We will limit the total timesteps this time, but you should see a pattern in the updates. Run the below code.
# Initialize weights and bias. weights = np.array([0.0, 1.0]) bias = 0.0 f, ax_arr = plt.subplots(4, 4, sharex=True, sharey=True, figsize=(9,8)) axs = list(itertools.chain.from_iterable(ax_arr)) for ax in axs: ax.set_xlim(-0.1, 1.1); ax.set_ylim(-0.1, 1.1) run_perceptron(X, Y, weights, bias, axs, 16) f.tight_layout()
_____no_output_____
BSD-2-Clause
psets/set1/set1_prob3.ipynb
ichakraborty/CS155-iniproject
!apt-get -qq install tree import os import numpy as np import pandas as pd from getpass import getpass from sklearn.model_selection import train_test_split from sklearn.ensemble import RandomForestRegressor from sklearn.metrics import mean_absolute_error from sklearn.impute import SimpleImputer from sklearn.preprocessing import OneHotEncoder from sklearn.pipeline import Pipeline from sklearn.compose import ColumnTransformer def access_kaggle(): """ Access Kaggle from Google Colab. If the /root/.kaggle does not exist then prompt for the username and for the Kaggle API key. Creates the kaggle.json access file in the /root/.kaggle/ folder. """ KAGGLE_ROOT = os.path.join('/root', '.kaggle') KAGGLE_PATH = os.path.join(KAGGLE_ROOT, 'kaggle.json') if '.kaggle' not in os.listdir(path='/root'): user = getpass(prompt='Kaggle username: ') key = getpass(prompt='Kaggle API key: ') !mkdir $KAGGLE_ROOT !touch $KAGGLE_PATH !chmod 666 $KAGGLE_PATH with open(KAGGLE_PATH, mode='w') as f: f.write('{"username":"%s", "key":"%s"}' %(user, key)) f.close() !chmod 600 $KAGGLE_PATH del user del key success_msg = "Kaggle is successfully set up. Good to go." 
print(f'{success_msg}') access_kaggle() !kaggle competitions download -c home-data-for-ml-course -p datasets/ml-course !tree -sh ./ !cat -n datasets/ml-course/train.csv|head -2 df = pd.read_csv('datasets/ml-course/train.csv', sep=',', index_col=0) df.columns = df.columns.map(lambda c: c.lower()) df.columns df.info() df.saleprice.isnull().sum() y = df.saleprice X = df.drop(['saleprice'], axis='columns') train_x_full, valid_x_full, train_y, valid_y = train_test_split(X, y, test_size=0.2, random_state=42) numerical_columns = [col for col in train_x_full.columns if train_x_full[col].dtype in ['float64', 'int64']] categorical_columns = [col for col in train_x_full.columns if train_x_full[col].dtype == 'object' and train_x_full[col].nunique() < 10] selected_columns = categorical_columns + numerical_columns train_x = train_x_full[selected_columns].copy() valid_x = valid_x_full[selected_columns].copy() train_x.shape, valid_x.shape train_x.head() imputers = [ ('imputer', SimpleImputer()), ('imputer_median', SimpleImputer(strategy='median')), ('imputer_most_frequent', SimpleImputer(strategy='most_frequent')), ] trees_in_the_forest = [5, 10, 20, 50] models = [RandomForestRegressor(n_estimators=N, random_state=42) for N in trees_in_the_forest] for imputer_name, imputer in imputers: numerical_transformer = imputer categorical_transformer = Pipeline(steps=[ ('imputer', SimpleImputer(strategy='most_frequent')), ('one_hot_encoder', OneHotEncoder(sparse=False, handle_unknown='ignore')), ]) preprocessor = ColumnTransformer( transformers=[ # (name , transformer , columns) ('num', numerical_transformer, numerical_columns), ('cat', categorical_transformer, categorical_columns), ] ) print(f'{imputer_name} imputer:') print('-'*20) for model in models: pipe = Pipeline( steps=[ ('preprocessor', preprocessor), ('model', model), ] ) pipe.fit(train_x, train_y) preds = pipe.predict(valid_x) mae = mean_absolute_error(y_true=valid_y, y_pred=preds) print(f'{model}') print(f'---> MAE: {mae}') 
print()
imputer imputer: -------------------- RandomForestRegressor(bootstrap=True, ccp_alpha=0.0, criterion='mse', max_depth=None, max_features='auto', max_leaf_nodes=None, max_samples=None, min_impurity_decrease=0.0, min_impurity_split=None, min_samples_leaf=1, min_samples_split=2, min_weight_fraction_leaf=0.0, n_estimators=5, n_jobs=None, oob_score=False, random_state=42, verbose=0, warm_start=False) ---> MAE: 21061.74246575343 RandomForestRegressor(bootstrap=True, ccp_alpha=0.0, criterion='mse', max_depth=None, max_features='auto', max_leaf_nodes=None, max_samples=None, min_impurity_decrease=0.0, min_impurity_split=None, min_samples_leaf=1, min_samples_split=2, min_weight_fraction_leaf=0.0, n_estimators=10, n_jobs=None, oob_score=False, random_state=42, verbose=0, warm_start=False) ---> MAE: 19763.0897260274 RandomForestRegressor(bootstrap=True, ccp_alpha=0.0, criterion='mse', max_depth=None, max_features='auto', max_leaf_nodes=None, max_samples=None, min_impurity_decrease=0.0, min_impurity_split=None, min_samples_leaf=1, min_samples_split=2, min_weight_fraction_leaf=0.0, n_estimators=20, n_jobs=None, oob_score=False, random_state=42, verbose=0, warm_start=False) ---> MAE: 18705.513356164385 RandomForestRegressor(bootstrap=True, ccp_alpha=0.0, criterion='mse', max_depth=None, max_features='auto', max_leaf_nodes=None, max_samples=None, min_impurity_decrease=0.0, min_impurity_split=None, min_samples_leaf=1, min_samples_split=2, min_weight_fraction_leaf=0.0, n_estimators=50, n_jobs=None, oob_score=False, random_state=42, verbose=0, warm_start=False) ---> MAE: 17976.44712328767 imputer_median imputer: -------------------- RandomForestRegressor(bootstrap=True, ccp_alpha=0.0, criterion='mse', max_depth=None, max_features='auto', max_leaf_nodes=None, max_samples=None, min_impurity_decrease=0.0, min_impurity_split=None, min_samples_leaf=1, min_samples_split=2, min_weight_fraction_leaf=0.0, n_estimators=5, n_jobs=None, oob_score=False, random_state=42, verbose=0, 
warm_start=False) ---> MAE: 20431.88219178082 RandomForestRegressor(bootstrap=True, ccp_alpha=0.0, criterion='mse', max_depth=None, max_features='auto', max_leaf_nodes=None, max_samples=None, min_impurity_decrease=0.0, min_impurity_split=None, min_samples_leaf=1, min_samples_split=2, min_weight_fraction_leaf=0.0, n_estimators=10, n_jobs=None, oob_score=False, random_state=42, verbose=0, warm_start=False) ---> MAE: 19872.100342465754 RandomForestRegressor(bootstrap=True, ccp_alpha=0.0, criterion='mse', max_depth=None, max_features='auto', max_leaf_nodes=None, max_samples=None, min_impurity_decrease=0.0, min_impurity_split=None, min_samples_leaf=1, min_samples_split=2, min_weight_fraction_leaf=0.0, n_estimators=20, n_jobs=None, oob_score=False, random_state=42, verbose=0, warm_start=False) ---> MAE: 18947.181335616435 RandomForestRegressor(bootstrap=True, ccp_alpha=0.0, criterion='mse', max_depth=None, max_features='auto', max_leaf_nodes=None, max_samples=None, min_impurity_decrease=0.0, min_impurity_split=None, min_samples_leaf=1, min_samples_split=2, min_weight_fraction_leaf=0.0, n_estimators=50, n_jobs=None, oob_score=False, random_state=42, verbose=0, warm_start=False) ---> MAE: 18198.937123287673 imputer_most_frequent imputer: -------------------- RandomForestRegressor(bootstrap=True, ccp_alpha=0.0, criterion='mse', max_depth=None, max_features='auto', max_leaf_nodes=None, max_samples=None, min_impurity_decrease=0.0, min_impurity_split=None, min_samples_leaf=1, min_samples_split=2, min_weight_fraction_leaf=0.0, n_estimators=5, n_jobs=None, oob_score=False, random_state=42, verbose=0, warm_start=False) ---> MAE: 20626.355479452053 RandomForestRegressor(bootstrap=True, ccp_alpha=0.0, criterion='mse', max_depth=None, max_features='auto', max_leaf_nodes=None, max_samples=None, min_impurity_decrease=0.0, min_impurity_split=None, min_samples_leaf=1, min_samples_split=2, min_weight_fraction_leaf=0.0, n_estimators=10, n_jobs=None, oob_score=False, random_state=42, 
verbose=0, warm_start=False) ---> MAE: 19683.9698630137 RandomForestRegressor(bootstrap=True, ccp_alpha=0.0, criterion='mse', max_depth=None, max_features='auto', max_leaf_nodes=None, max_samples=None, min_impurity_decrease=0.0, min_impurity_split=None, min_samples_leaf=1, min_samples_split=2, min_weight_fraction_leaf=0.0, n_estimators=20, n_jobs=None, oob_score=False, random_state=42, verbose=0, warm_start=False) ---> MAE: 18550.526198630138 RandomForestRegressor(bootstrap=True, ccp_alpha=0.0, criterion='mse', max_depth=None, max_features='auto', max_leaf_nodes=None, max_samples=None, min_impurity_decrease=0.0, min_impurity_split=None, min_samples_leaf=1, min_samples_split=2, min_weight_fraction_leaf=0.0, n_estimators=50, n_jobs=None, oob_score=False, random_state=42, verbose=0, warm_start=False) ---> MAE: 18138.273904109592
MIT
Pipeline_multiple_imputers_and_models.ipynb
chrismarkella/Kaggle-access-from-Google-Colab
In this tutorial you'll learn all about **histograms** and **density plots**. Set up the notebook As always, we begin by setting up the coding environment. (_This code is hidden, but you can un-hide it by clicking on the "Code" button immediately below this text, on the right._)
#$HIDE$ import pandas as pd import matplotlib.pyplot as plt %matplotlib inline import seaborn as sns print("Setup Complete")
_____no_output_____
Apache-2.0
notebooks/data_viz_to_coder/raw/tut5.ipynb
stephenramthun/learntools
Select a dataset We'll work with a dataset of 150 different flowers, or 50 each from three different species of iris (*Iris setosa*, *Iris versicolor*, and *Iris virginica*).![tut4_iris](https://i.imgur.com/RcxYYBA.png) Load and examine the data Each row in the dataset corresponds to a different flower. There are four measurements: the sepal length and width, along with the petal length and width. We also keep track of the corresponding species.
# Path of the file to read iris_filepath = "../input/iris.csv" # Read the file into a variable iris_data iris_data = pd.read_csv(iris_filepath, index_col="Id") # Print the first 5 rows of the data iris_data.head()
_____no_output_____
Apache-2.0
notebooks/data_viz_to_coder/raw/tut5.ipynb
stephenramthun/learntools
Histograms Say we would like to create a **histogram** to see how petal length varies in iris flowers. We can do this with the `sns.distplot` command.
# Histogram sns.distplot(a=iris_data['Petal Length (cm)'], kde=False)
_____no_output_____
Apache-2.0
notebooks/data_viz_to_coder/raw/tut5.ipynb
stephenramthun/learntools
We customize the behavior of the command with two additional pieces of information:- `a=` chooses the column we'd like to plot (_in this case, we chose `'Petal Length (cm)'`_).- `kde=False` is something we'll always provide when creating a histogram, as leaving it out will create a slightly different plot. Density plots The next type of plot is a **kernel density estimate (KDE)** plot. In case you're not familiar with KDE plots, you can think of it as a smoothed histogram. To make a KDE plot, we use the `sns.kdeplot` command. Setting `shade=True` colors the area below the curve (_and `data=` has identical functionality as when we made the histogram above_).
# KDE plot sns.kdeplot(data=iris_data['Petal Length (cm)'], shade=True)
_____no_output_____
Apache-2.0
notebooks/data_viz_to_coder/raw/tut5.ipynb
stephenramthun/learntools
2D KDE plots We're not restricted to a single column when creating a KDE plot. We can create a **two-dimensional (2D) KDE plot** with the `sns.jointplot` command. In the plot below, the color-coding shows us how likely we are to see different combinations of sepal width and petal length, where darker parts of the figure are more likely.
# 2D KDE plot sns.jointplot(x=iris_data['Petal Length (cm)'], y=iris_data['Sepal Width (cm)'], kind="kde")
_____no_output_____
Apache-2.0
notebooks/data_viz_to_coder/raw/tut5.ipynb
stephenramthun/learntools
Note that in addition to the 2D KDE plot in the center,- the curve at the top of the figure is a KDE plot for the data on the x-axis (in this case, `iris_data['Petal Length (cm)']`), and- the curve on the right of the figure is a KDE plot for the data on the y-axis (in this case, `iris_data['Sepal Width (cm)']`). Color-coded plots For the next part of the tutorial, we'll create plots to understand differences between the species. To accomplish this, we begin by breaking the dataset into three separate files, with one for each species.
# Paths of the files to read iris_set_filepath = "../input/iris_setosa.csv" iris_ver_filepath = "../input/iris_versicolor.csv" iris_vir_filepath = "../input/iris_virginica.csv" # Read the files into variables iris_set_data = pd.read_csv(iris_set_filepath, index_col="Id") iris_ver_data = pd.read_csv(iris_ver_filepath, index_col="Id") iris_vir_data = pd.read_csv(iris_vir_filepath, index_col="Id") # Print the first 5 rows of the Iris versicolor data iris_ver_data.head()
_____no_output_____
Apache-2.0
notebooks/data_viz_to_coder/raw/tut5.ipynb
stephenramthun/learntools
In the code cell below, we create a different histogram for each species by using the `sns.distplot` command (_as above_) three times. We use `label=` to set how each histogram will appear in the legend.
# Histograms for each species sns.distplot(a=iris_set_data['Petal Length (cm)'], label="Iris-setosa", kde=False) sns.distplot(a=iris_ver_data['Petal Length (cm)'], label="Iris-versicolor", kde=False) sns.distplot(a=iris_vir_data['Petal Length (cm)'], label="Iris-virginica", kde=False) # Add title plt.title("Histogram of Petal Lengths, by Species") # Force legend to appear plt.legend()
_____no_output_____
Apache-2.0
notebooks/data_viz_to_coder/raw/tut5.ipynb
stephenramthun/learntools
In this case, the legend does not automatically appear on the plot. To force it to show (for any plot type), we can always use `plt.legend()`. We can also create a KDE plot for each species by using `sns.kdeplot` (_as above_). Again, `label=` is used to set the values in the legend.
# KDE plots for each species sns.kdeplot(data=iris_set_data['Petal Length (cm)'], label="Iris-setosa", shade=True) sns.kdeplot(data=iris_ver_data['Petal Length (cm)'], label="Iris-versicolor", shade=True) sns.kdeplot(data=iris_vir_data['Petal Length (cm)'], label="Iris-virginica", shade=True) # Add title plt.title("Distribution of Petal Lengths, by Species")
_____no_output_____
Apache-2.0
notebooks/data_viz_to_coder/raw/tut5.ipynb
stephenramthun/learntools
📝 Exercise M6.02 The aim of this exercise is to explore some attributes available in scikit-learn's random forest. First, we will fit the penguins regression dataset.
import pandas as pd from sklearn.model_selection import train_test_split penguins = pd.read_csv("../datasets/penguins_regression.csv") feature_names = ["Flipper Length (mm)"] target_name = "Body Mass (g)" data, target = penguins[feature_names], penguins[target_name] data_train, data_test, target_train, target_test = train_test_split( data, target, random_state=0)
_____no_output_____
CC-BY-4.0
notebooks/ensemble_ex_02.ipynb
lesteve/scikit-learn-mooc
Note: If you want a deeper overview regarding this dataset, you can refer to the Appendix - Datasets description section at the end of this MOOC. Create a random forest containing three trees. Train the forest and check the generalization performance on the testing set in terms of mean absolute error.
# Write your code here.
_____no_output_____
CC-BY-4.0
notebooks/ensemble_ex_02.ipynb
lesteve/scikit-learn-mooc
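One possible shape for the solution, sketched on synthetic stand-in data (the real exercise uses `data_train`/`target_train` from the penguins split above; the linear generator below is purely illustrative, not the penguins dataset):

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split

# synthetic stand-in: body mass roughly linear in flipper length, plus noise
rng = np.random.RandomState(0)
flipper = rng.uniform(170.0, 230.0, size=(200, 1))
body_mass = 50.0 * flipper[:, 0] - 5000.0 + rng.normal(0.0, 300.0, size=200)

X_train, X_test, y_train, y_test = train_test_split(
    flipper, body_mass, random_state=0)

# a forest of three trees, as the exercise asks
forest = RandomForestRegressor(n_estimators=3, random_state=0)
forest.fit(X_train, y_train)
mae = mean_absolute_error(y_test, forest.predict(X_test))
print(f"MAE: {mae:.1f} g")
```

With the real penguins data the call pattern is identical: fit on `data_train`/`target_train`, then score predictions on `data_test` against `target_test` with `mean_absolute_error`.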
The next steps of this exercise are to:- create a new dataset containing the penguins with a flipper length between 170 mm and 230 mm;- plot the training data using a scatter plot;- plot the decision of each individual tree by predicting on the newly created dataset;- plot the decision of the random forest using this newly created dataset. Tip: The trees contained in the forest that you created can be accessed with the attribute `estimators_`.
# Write your code here.
_____no_output_____
CC-BY-4.0
notebooks/ensemble_ex_02.ipynb
lesteve/scikit-learn-mooc
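A sketch of the plotting steps, again on synthetic stand-in data rather than the penguins file (an illustration, not the official solution; `estimators_` holds the fitted `DecisionTreeRegressor` objects, each with its own `predict`):

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # headless backend for this sketch
import matplotlib.pyplot as plt
from sklearn.ensemble import RandomForestRegressor

# synthetic stand-in for the penguins data
rng = np.random.RandomState(0)
flipper = rng.uniform(170.0, 230.0, size=(200, 1))
body_mass = 50.0 * flipper[:, 0] - 5000.0 + rng.normal(0.0, 300.0, size=200)

forest = RandomForestRegressor(n_estimators=3, random_state=0)
forest.fit(flipper, body_mass)

# newly created dataset covering flipper lengths from 170 mm to 230 mm
grid = np.linspace(170.0, 230.0, 300).reshape(-1, 1)

fig, ax = plt.subplots()
ax.scatter(flipper, body_mass, alpha=0.3, label="training data")
for idx, tree in enumerate(forest.estimators_):
    # each individual tree overfits differently; plot its decision separately
    ax.plot(grid.ravel(), tree.predict(grid), linestyle="--", alpha=0.8,
            label=f"tree #{idx}")
ax.plot(grid.ravel(), forest.predict(grid), color="black", label="forest (average)")
ax.set_xlabel("Flipper Length (mm)")
ax.set_ylabel("Body Mass (g)")
ax.legend()
```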
IMPORTING THE LIBRARIES
import os import pandas as pd import pickle import numpy as np import seaborn as sns from sklearn.datasets import load_files from keras.utils import np_utils import matplotlib.pyplot as plt from keras.layers import Conv2D, MaxPooling2D, GlobalAveragePooling2D from keras.layers import Dropout, Flatten, Dense from keras.models import Sequential from keras.utils.vis_utils import plot_model from keras.callbacks import ModelCheckpoint from keras.utils import to_categorical from sklearn.metrics import confusion_matrix from keras.preprocessing import image from tqdm import tqdm import seaborn as sns from sklearn.metrics import accuracy_score,precision_score,recall_score,f1_score # Pretty display for notebooks %matplotlib inline !ls
csv_files 'Distracted Driver Detection CNN Scratch.ipynb' 'Distracted Driver Detection VGG16.ipynb' imgs model pickle_files 'Self Trained Model Evaluation.ipynb' temp_dir
Apache-2.0
Distracted Driver Detection CNN Scratch.ipynb
vk16309/Distracted-Driver-Detection
Defining the train, test and model directories We will create the directories for the train, test and model paths if they are not already present.
TEST_DIR = os.path.join(os.getcwd(),"imgs","test") TRAIN_DIR = os.path.join(os.getcwd(),"imgs","train") MODEL_PATH = os.path.join(os.getcwd(),"model","self_trained") PICKLE_DIR = os.path.join(os.getcwd(),"pickle_files") if not os.path.exists(TEST_DIR): print("Testing data does not exists") if not os.path.exists(TRAIN_DIR): print("Training data does not exists") if not os.path.exists(MODEL_PATH): print("Model path does not exists") os.makedirs(MODEL_PATH) print("Model path created") if not os.path.exists(PICKLE_DIR): os.makedirs(PICKLE_DIR)
_____no_output_____
Apache-2.0
Distracted Driver Detection CNN Scratch.ipynb
vk16309/Distracted-Driver-Detection
Data Preparation We will create a CSV file listing the location of each training and test image, along with its associated class when available, so that the data is easily traceable.
def create_csv(DATA_DIR,filename): class_names = os.listdir(DATA_DIR) data = list() if(os.path.isdir(os.path.join(DATA_DIR,class_names[0]))): for class_name in class_names: file_names = os.listdir(os.path.join(DATA_DIR,class_name)) for file in file_names: data.append({ "Filename":os.path.join(DATA_DIR,class_name,file), "ClassName":class_name }) else: class_name = "test" file_names = os.listdir(DATA_DIR) for file in file_names: data.append(({ "FileName":os.path.join(DATA_DIR,file), "ClassName":class_name })) data = pd.DataFrame(data) data.to_csv(os.path.join(os.getcwd(),"csv_files",filename),index=False) create_csv(TRAIN_DIR,"train.csv") create_csv(TEST_DIR,"test.csv") data_train = pd.read_csv(os.path.join(os.getcwd(),"csv_files","train.csv")) data_test = pd.read_csv(os.path.join(os.getcwd(),"csv_files","test.csv")) data_train.info() data_train['ClassName'].value_counts() data_train.describe() nf = data_train['ClassName'].value_counts(sort=False) labels = data_train['ClassName'].value_counts(sort=False).index.tolist() y = np.array(nf) width = 1/1.5 N = len(y) x = range(N) fig = plt.figure(figsize=(20,15)) ay = fig.add_subplot(211) plt.xticks(x, labels, size=15) plt.yticks(size=15) ay.bar(x, y, width, color="blue") plt.title('Bar Chart',size=25) plt.xlabel('classname',size=15) plt.ylabel('Count',size=15) plt.show() data_test.head() data_test.shape
_____no_output_____
Apache-2.0
Distracted Driver Detection CNN Scratch.ipynb
vk16309/Distracted-Driver-Detection
Observation: 1. There are in total 22424 training samples 2. There are in total 79726 test samples 3. The training dataset is quite evenly balanced across classes, so we do not need to downsample the data Converting into numerical values
labels_list = list(set(data_train['ClassName'].values.tolist())) labels_id = {label_name:id for id,label_name in enumerate(labels_list)} print(labels_id) data_train['ClassName'].replace(labels_id,inplace=True) with open(os.path.join(os.getcwd(),"pickle_files","labels_list.pkl"),"wb") as handle: pickle.dump(labels_id,handle) labels = to_categorical(data_train['ClassName']) print(labels.shape)
(22424, 10)
Apache-2.0
Distracted Driver Detection CNN Scratch.ipynb
vk16309/Distracted-Driver-Detection
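The `to_categorical` call above turns each integer class id into a one-hot row. As a minimal illustrative stand-in (the name `to_one_hot` is ours, not a Keras API), the encoding can be sketched in plain Python:

```python
def to_one_hot(labels, num_classes):
    # each integer label becomes a vector with a single 1 at its index
    return [[1 if i == y else 0 for i in range(num_classes)] for y in labels]

print(to_one_hot([0, 2], 3))  # [[1, 0, 0], [0, 0, 1]]
```

For the 10 driver-distraction classes, a label of `3` becomes a length-10 row with a 1 in position 3, matching the `(22424, 10)` shape printed above.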
Splitting into Train and Test sets
from sklearn.model_selection import train_test_split xtrain,xtest,ytrain,ytest = train_test_split(data_train.iloc[:,0],labels,test_size = 0.2,random_state=42)
_____no_output_____
Apache-2.0
Distracted Driver Detection CNN Scratch.ipynb
vk16309/Distracted-Driver-Detection
Converting into 64×64 images You can substitute (64, 64) with (224, 224) for better results, but only if you have more than 32 GB of RAM.
def path_to_tensor(img_path): # loads RGB image as PIL.Image.Image type img = image.load_img(img_path, target_size=(64, 64)) # convert PIL.Image.Image type to 3D tensor with shape (64, 64, 3) x = image.img_to_array(img) # convert 3D tensor to 4D tensor with shape (1, 64, 64, 3) and return 4D tensor return np.expand_dims(x, axis=0) def paths_to_tensor(img_paths): list_of_tensors = [path_to_tensor(img_path) for img_path in tqdm(img_paths)] return np.vstack(list_of_tensors) from PIL import ImageFile ImageFile.LOAD_TRUNCATED_IMAGES = True # pre-process the data for Keras train_tensors = paths_to_tensor(xtrain).astype('float32')/255 - 0.5 valid_tensors = paths_to_tensor(xtest).astype('float32')/255 - 0.5 ##takes too much ram ## run this if your ram is greater than 16gb # test_tensors = paths_to_tensor(data_test.iloc[:,0]).astype('float32')/255 - 0.5
_____no_output_____
Apache-2.0
Distracted Driver Detection CNN Scratch.ipynb
vk16309/Distracted-Driver-Detection
Defining the Model
model = Sequential() model.add(Conv2D(filters=64, kernel_size=2, padding='same', activation='relu', input_shape=(64,64,3), kernel_initializer='glorot_normal')) model.add(MaxPooling2D(pool_size=2)) model.add(Conv2D(filters=128, kernel_size=2, padding='same', activation='relu', kernel_initializer='glorot_normal')) model.add(MaxPooling2D(pool_size=2)) model.add(Conv2D(filters=256, kernel_size=2, padding='same', activation='relu', kernel_initializer='glorot_normal')) model.add(MaxPooling2D(pool_size=2)) model.add(Conv2D(filters=512, kernel_size=2, padding='same', activation='relu', kernel_initializer='glorot_normal')) model.add(MaxPooling2D(pool_size=2)) model.add(Dropout(0.5)) model.add(Flatten()) model.add(Dense(500, activation='relu', kernel_initializer='glorot_normal')) model.add(Dropout(0.5)) model.add(Dense(10, activation='softmax', kernel_initializer='glorot_normal')) model.summary() plot_model(model,to_file=os.path.join(MODEL_PATH,"model_distracted_driver.png"),show_shapes=True,show_layer_names=True) model.compile(optimizer='rmsprop', loss='categorical_crossentropy', metrics=['accuracy']) filepath = os.path.join(MODEL_PATH,"distracted-{epoch:02d}-{val_accuracy:.2f}.hdf5") checkpoint = ModelCheckpoint(filepath, monitor='val_accuracy', verbose=1, save_best_only=True, mode='max',period=1) callbacks_list = [checkpoint] model_history = model.fit(train_tensors,ytrain,validation_data = (valid_tensors, ytest),epochs=25, batch_size=40, shuffle=True,callbacks=callbacks_list) fig, (ax1, ax2) = plt.subplots(2, 1, figsize=(12, 12)) ax1.plot(model_history.history['loss'], color='b', label="Training loss") ax1.plot(model_history.history['val_loss'], color='r', label="validation loss") ax1.set_xticks(np.arange(1, 25, 1)) ax1.set_yticks(np.arange(0, 1, 0.1)) ax2.plot(model_history.history['accuracy'], color='b', label="Training accuracy") ax2.plot(model_history.history['val_accuracy'], color='r',label="Validation accuracy") ax2.set_xticks(np.arange(1, 25, 1)) legend = 
plt.legend(loc='best', shadow=True) plt.tight_layout() plt.show()
_____no_output_____
Apache-2.0
Distracted Driver Detection CNN Scratch.ipynb
vk16309/Distracted-Driver-Detection
Model Analysis Finding the confusion matrix, precision, recall and F1 score to analyse the model thus created
def print_confusion_matrix(confusion_matrix, class_names, figsize = (10,7), fontsize=14): df_cm = pd.DataFrame( confusion_matrix, index=class_names, columns=class_names, ) fig = plt.figure(figsize=figsize) try: heatmap = sns.heatmap(df_cm, annot=True, fmt="d") except ValueError: raise ValueError("Confusion matrix values must be integers.") heatmap.yaxis.set_ticklabels(heatmap.yaxis.get_ticklabels(), rotation=0, ha='right', fontsize=fontsize) heatmap.xaxis.set_ticklabels(heatmap.xaxis.get_ticklabels(), rotation=45, ha='right', fontsize=fontsize) plt.ylabel('True label') plt.xlabel('Predicted label') fig.savefig(os.path.join(MODEL_PATH,"confusion_matrix.png")) return fig def print_heatmap(n_labels, n_predictions, class_names): labels = n_labels #sess.run(tf.argmax(n_labels, 1)) predictions = n_predictions #sess.run(tf.argmax(n_predictions, 1)) # confusion_matrix = sess.run(tf.contrib.metrics.confusion_matrix(labels, predictions)) matrix = confusion_matrix(labels.argmax(axis=1),predictions.argmax(axis=1)) row_sum = np.sum(matrix, axis = 1) w, h = matrix.shape c_m = np.zeros((w, h)) for i in range(h): c_m[i] = matrix[i] * 100 / row_sum[i] c = c_m.astype(dtype = np.uint8) heatmap = print_confusion_matrix(c, class_names, figsize=(18,10), fontsize=20) class_names = list() for name,idx in labels_id.items(): class_names.append(name) # print(class_names) ypred = model.predict(valid_tensors) print_heatmap(ytest,ypred,class_names)
_____no_output_____
Apache-2.0
Distracted Driver Detection CNN Scratch.ipynb
vk16309/Distracted-Driver-Detection
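Inside `print_heatmap`, the raw confusion counts are rescaled row-by-row into per-class percentages. A minimal sketch of that step using plain Python lists instead of the notebook's NumPy arrays (`normalize_rows` is an illustrative name, and it rounds where the notebook truncates via `uint8`):

```python
def normalize_rows(matrix):
    # scale each row of a confusion matrix so its entries become integer percentages
    out = []
    for row in matrix:
        total = sum(row)
        out.append([round(100 * v / total) if total else 0 for v in row])
    return out

print(normalize_rows([[8, 2], [1, 9]]))  # [[80, 20], [10, 90]]
```

Row-normalising makes classes with different support directly comparable in the heatmap.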
Precision Recall F1 Score
ypred_class = np.argmax(ypred,axis=1) # print(ypred_class[:10]) ytest = np.argmax(ytest,axis=1) accuracy = accuracy_score(ytest,ypred_class) print('Accuracy: %f' % accuracy) # precision tp / (tp + fp) precision = precision_score(ytest, ypred_class,average='weighted') print('Precision: %f' % precision) # recall: tp / (tp + fn) recall = recall_score(ytest,ypred_class,average='weighted') print('Recall: %f' % recall) # f1: 2 tp / (2 tp + fp + fn) f1 = f1_score(ytest,ypred_class,average='weighted') print('F1 score: %f' % f1)
Accuracy: 0.992865 Precision: 0.992910 Recall: 0.992865 F1 score: 0.992871
Apache-2.0
Distracted Driver Detection CNN Scratch.ipynb
vk16309/Distracted-Driver-Detection
WeatherPy ---- Note: Instructions have been included for each segment. You do not have to follow them exactly, but they are included to help you think through the steps.
# Dependencies and Setup import matplotlib.pyplot as plt import pandas as pd import numpy as np import requests import time import json import random import csv as csv # Import API key from api_keys import api_key # Incorporated citipy to determine city based on latitude and longitude from citipy import citipy # Output File (CSV) city_file = "output_data/cities.csv" # Range of latitudes and longitudes lat_range = (-90, 90) lng_range = (-180, 180)
_____no_output_____
MIT
Instructions/starter_code/weatherpy1.ipynb
bshub6/API-Challenge
Generate Cities List
# List for holding lat_lngs and cities counter = 0 randlat = [] randlngs = [] cities = [] # Create a set of random lat and lng combinations while len(randlat)< 500: lats = np.random.uniform(low=-90, high=90) lngs = np.random.uniform(low=-180, high=180) randlat.append(lats) randlngs.append(lngs) counter += 1 coord_df = pd.DataFrame({"lats":randlat, "lngs": randlngs}) coord_df.head() # Create a set of random lat and lng combinations lats = np.random.uniform(low=-90.000, high=90.000, size=1500) lngs = np.random.uniform(low=-180.000, high=180.000, size=1500) lat_lngs = zip(lats, lngs) # Identify nearest city for each lat, lng combination for lat_lng in lat_lngs: city = citipy.nearest_city(lat_lng[0], lat_lng[1]).city_name # If the city is unique, then add it to a our cities list if city not in cities: cities.append(city) # Print the city count to confirm sufficient count print(len(cities)) #print(cities)
634
MIT
Instructions/starter_code/weatherpy1.ipynb
bshub6/API-Challenge
Perform API Calls * Perform a weather check on each city using a series of successive API calls. * Include a print log of each city as it's being processed (with the city number and city name).
url = "http://api.openweathermap.org/data/2.5/weather?" units = "metric" # Build query URL to begin call url = "http://api.openweathermap.org/data/2.5/weather?units=metric&appid=" + api_key #Set up list for responses date = [] country = [] lat = [] lon = [] temp_max = [] humidity = [] cloud = [] wind = [] _cities = [] print("Beginning Data Retrieval") for city in cities: url_city = url + "&q=" + str(city) #print(url_city) #convert to json try: city_data = requests.get(url_city).json() country.append(city_data['sys']['country']) date.append(city_data['dt']) lat.append(city_data['coord']['lat']) lon.append(city_data['coord']['lon']) temp_max.append(city_data['main']['temp_max']) humidity.append(city_data['main']['humidity']) cloud.append(city_data['clouds']['all']) wind.append(city_data['wind']['speed']) _cities.append(city) print(f"retrieving data | {city}") except: print("City not found, skipping") print("Retrieval is complete!") data_dict = {'city': _cities, 'country': country, 'latitude': lat, 'longitude': lon, 'max temp': temp_max, 'humidity': humidity, 'cloudiness': cloud, 'windspeed': wind} #print(data_dict) df = pd.DataFrame.from_dict(data_dict) df.head()
_____no_output_____
MIT
Instructions/starter_code/weatherpy1.ipynb
bshub6/API-Challenge
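The request URL assembled above follows the standard OpenWeatherMap query pattern. A minimal sketch of just the string assembly, with a placeholder key (`YOUR_API_KEY` is not a real key, and no request is sent here):

```python
base = "http://api.openweathermap.org/data/2.5/weather?"
api_key = "YOUR_API_KEY"  # placeholder, substitute your own key
city = "london"
# units=metric makes the API return Celsius temperatures and m/s wind speeds
url = base + "units=metric&appid=" + api_key + "&q=" + city
print(url)
```

Appending `&q=<city>` per loop iteration, as the notebook does, reuses the same base URL for every city.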
Convert Raw Data to DataFrame* Export the city data into a .csv.* Display the DataFrame
df.count() #Convert file to csv and save df.to_csv("weather_data.csv", encoding="utf-8", index=False)
_____no_output_____
MIT
Instructions/starter_code/weatherpy1.ipynb
bshub6/API-Challenge
Plotting the Data* Use proper labeling of the plots using plot titles (including date of analysis) and axes labels.* Save the plotted figures as .pngs. Latitude vs. Temperature Plot
# Build a scatter plot for each data type plt.scatter(df["latitude"], df["max temp"], marker="o") # Incorporate the other graph properties (units are Celsius since the API call used units=metric) plt.title("City Latitude vs. Temperature (C)") plt.ylabel("Temperature (C)") plt.xlabel("Latitude") plt.grid(True) # Save the figure plt.savefig("Temperature (C).png") # Show plot plt.show()
_____no_output_____
MIT
Instructions/starter_code/weatherpy1.ipynb
bshub6/API-Challenge
Latitude vs. Humidity Plot
# Build a scatter plot for each data type plt.scatter(df["latitude"], df["humidity"], marker="o") # Incorporate the other graph properties plt.title("City Latitude vs. Humidity %") plt.ylabel("Humidity %") plt.xlabel("Latitude") plt.grid(True) # Save the figure plt.savefig("Humidity%.png") # Show plot plt.show()
_____no_output_____
MIT
Instructions/starter_code/weatherpy1.ipynb
bshub6/API-Challenge
Latitude vs. Cloudiness Plot
# Build a scatter plot for each data type plt.scatter(df["latitude"], df["cloudiness"], marker="o") # Incorporate the other graph properties plt.title("City Latitude vs. Cloudiness %") plt.ylabel("Cloudiness %") plt.xlabel("Latitude") plt.grid(True) # Save the figure plt.savefig("Clouds%.png") # Show plot plt.show()
_____no_output_____
MIT
Instructions/starter_code/weatherpy1.ipynb
bshub6/API-Challenge
Latitude vs. Wind Speed Plot
# Build a scatter plot for each data type plt.scatter(df["latitude"], df["windspeed"], marker="o") # Incorporate the other graph properties (units are m/s since the API call used units=metric) plt.title("City Latitude vs. Windspeed (m/s)") plt.ylabel("Windspeed (m/s)") plt.xlabel("Latitude") plt.grid(True) # Save the figure plt.savefig("Windspeed.png") # Show plot plt.show()
_____no_output_____
MIT
Instructions/starter_code/weatherpy1.ipynb
bshub6/API-Challenge
Here you have a collection of guided exercises for the first class on Python. The exercises are divided by topic, following the topics reviewed during the theory session, and for each topic you have some mandatory exercises, and other optional exercises, which you are invited to do if you still have time after the mandatory exercises. Remember that you have 5 hours to solve these exercises, after which we will review the most interesting exercises together. If you don't finish all the exercises, you can work on them tonight or tomorrow. At the end of the class, we will upload the code with the solutions of the exercises so that you can review them again if needed. If you still have not finished some exercises, try to do them first by yourself, before taking a look at the solutions: you are doing these exercises for yourself, so it is always best to do them your way first, as it is the fastest way to learn! **Exercise 1.1:** The cover price of a book is 24.95 EUR, but bookstores get a 40 percent discount. Shipping costs 3 EUR for the first copy and 75 cents for each additional copy. **Calculate the total wholesale costs for 60 copies**.
#Your Code Here
_____no_output_____
Apache-2.0
M1_Python/d3_Intro_to_Python/02_exercises/Intro to Python - Easy.ipynb
succeedme/Strive_Main
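One possible solution sketch for Exercise 1.1 (variable names are our own):

```python
cover_price = 24.95                       # EUR
discounted = cover_price * (1 - 0.40)     # bookstores pay 60% of the cover price
copies = 60
shipping = 3.00 + 0.75 * (copies - 1)     # 3 EUR for the first copy, 0.75 EUR each extra
total = discounted * copies + shipping
print(round(total, 2))                    # 945.45
```

The per-copy wholesale price is 14.97 EUR, shipping adds 47.25 EUR, giving 945.45 EUR in total.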
**Exercise 1.2:** When something is wrong with your code, Python will raise errors. Often these will be "syntax errors" that signal that something is wrong with the form of your code (e.g., the code in the previous exercise raised a `SyntaxError`). There are also "runtime errors", which signal that your code was in itself formally correct, but that something went wrong during the code's execution. A good example is the `ZeroDivisionError`, which indicates that you tried to divide a number by zero (which, as you may know, is not allowed). Try to make Python **raise such a `ZeroDivisionError`.**
#Your Code Here
_____no_output_____
Apache-2.0
M1_Python/d3_Intro_to_Python/02_exercises/Intro to Python - Easy.ipynb
succeedme/Strive_Main
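A minimal way to trigger the `ZeroDivisionError` described above, and to catch it so the program keeps running:

```python
try:
    1 / 0                           # dividing by zero raises ZeroDivisionError
    caught = False
except ZeroDivisionError as err:
    caught = True
    print("caught:", err)           # caught: division by zero
```

The same error is raised by `//` and `%` with a zero right-hand side.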
**Exercise 5.1**: Create a countdown function that starts at a certain count, and counts down to zero. Instead of zero, print "Blast off!". Use a `for` loop.
# Countdown def countdown(): """ 20 19 18 17 16 15 14 13 12 11 10 9 8 7 6 5 4 3 2 1 Blast off! """ return
_____no_output_____
Apache-2.0
M1_Python/d3_Intro_to_Python/02_exercises/Intro to Python - Easy.ipynb
succeedme/Strive_Main
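One possible countdown sketch using a `for` loop; returning the printed line as well is our own addition, made to keep the function easy to test:

```python
def countdown(start=20):
    # count from `start` down to 1, then blast off
    parts = [str(n) for n in range(start, 0, -1)]
    line = " ".join(parts) + " Blast off!"
    print(line)
    return line

countdown(20)  # prints 20 19 18 ... 2 1 Blast off!
```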
**Exercise 5.2:** Write and test three functions that return the largest, the smallest, and the number of values divisible by 3 in a given collection of numbers. Use the algorithm described earlier in the Part 5 lecture :)
# Your functions def main(): """ a = [2, 4, 6, 12, 15, 99, 100] 100 2 4 """ return
_____no_output_____
Apache-2.0
M1_Python/d3_Intro_to_Python/02_exercises/Intro to Python - Easy.ipynb
succeedme/Strive_Main
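A possible sketch of the three functions, using a scan-and-update loop rather than the built-ins `max`/`min` (we are assuming this matches the spirit of the lecture algorithm, which is not quoted here):

```python
def largest(numbers):
    best = numbers[0]
    for n in numbers[1:]:          # keep the biggest value seen so far
        if n > best:
            best = n
    return best

def smallest(numbers):
    best = numbers[0]
    for n in numbers[1:]:          # keep the smallest value seen so far
        if n < best:
            best = n
    return best

def count_divisible_by_3(numbers):
    count = 0
    for n in numbers:
        if n % 3 == 0:
            count += 1
    return count

a = [2, 4, 6, 12, 15, 99, 100]
print(largest(a), smallest(a), count_divisible_by_3(a))  # 100 2 4
```

This reproduces the expected output from the docstring in the starter code.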
Mauna-Loa CO2 concentration example An experiment with CO2 measurements in Mauna Loa, Hawaii, using a single-output Gaussian process with a spectral mixture kernel. The data set contains daily measurements of CO2 in the air from 1958 to 2001. We will resample the data to obtain 10 averaged samples per year. That is, any yearly pattern will be at $\frac{1}{10} = 0.1$.
import mogptk import numpy as np from sklearn.datasets import fetch_openml def load_mauna_loa_atmospheric_co2(): ml_data = fetch_openml(data_id=41187) months = [] ppmv_sums = [] counts = [] y = ml_data.data['year'] m = ml_data.data['month'] month_float = y + (m - 1) / 12 ppmvs = ml_data.target for month, ppmv in zip(month_float, ppmvs): if not months or month != months[-1]: months.append(month) ppmv_sums.append(ppmv) counts.append(1) else: # aggregate monthly sum to produce average ppmv_sums[-1] += ppmv counts[-1] += 1 months = np.asarray(months).reshape(-1) avg_ppmvs = np.asarray(ppmv_sums) / counts return months, avg_ppmvs
_____no_output_____
MIT
examples/example_mauna_loa.ipynb
vishalbelsare/mogptk
First we load the dataset, and define a variable `stop` with the index that separates train from test
# load dataset x, y = load_mauna_loa_atmospheric_co2() # stop index to separate train from test stop = 200 data = mogptk.Data(x, y, name='Mauna Loa') data.remove_range(start=x[stop]) data.transform(mogptk.TransformDetrend(3)) data.plot();
_____no_output_____
MIT
examples/example_mauna_loa.ipynb
vishalbelsare/mogptk
We initialize the model with random parameters and show the spectral density of the kernel. As we are only taking random values, there is no relation with the data.
# create model model = mogptk.SM(data, Q=10) model.plot_spectrum(title='SD with random parameters');
_____no_output_____
MIT
examples/example_mauna_loa.ipynb
vishalbelsare/mogptk
Then we initialize the parameters before training, using Bayesian Nonparametric spectral estimation (BNSE) (Tobar 2017), and use the estimated Power spectral density (PSD) to define initial spectral mean and magnitudes.
method = 'BNSE' model.init_parameters(method) model.plot_spectrum(title='PSD with {} initialization'.format(method));
_____no_output_____
MIT
examples/example_mauna_loa.ipynb
vishalbelsare/mogptk
Then we train the model and show the power spectral density of the trained model.
model.log_marginal_likelihood() model.train(method='Adam', iters=1000, lr=0.1, plot=True, error='MAE') model.plot_spectrum(title='PSD with model trained');
Start Adam: 0/1000 loss= 322.484 error= 1.88645 10/1000 loss= 313.976 error= 1.88665 20/1000 loss= 307.505 error= 1.88656 30/1000 loss= 301.477 error= 1.88645 40/1000 loss= 295.067 error= 1.88634 50/1000 loss= 288.609 error= 1.88603 60/1000 loss= 282.216 error= 1.88578 70/1000 loss= 275.716 error= 1.88539 80/1000 loss= 269.194 error= 1.88506 90/1000 loss= 262.687 error= 1.88457 100/1000 loss= 256.224 error= 1.88398 110/1000 loss= 249.696 error= 1.88341 120/1000 loss= 243.088 error= 1.88278 130/1000 loss= 236.521 error= 1.88209 140/1000 loss= 230.029 error= 1.88124 150/1000 loss= 223.463 error= 1.88004 160/1000 loss= 216.782 error= 1.87871 170/1000 loss= 209.974 error= 1.87675 180/1000 loss= 203.112 error= 1.87421 190/1000 loss= 196.303 error= 1.87109 200/1000 loss= 189.686 error= 1.86688 210/1000 loss= 183.307 error= 1.86204 220/1000 loss= 177.049 error= 1.85648 230/1000 loss= 170.922 error= 1.851 240/1000 loss= 164.705 error= 1.84523 250/1000 loss= 158.509 error= 1.84018 260/1000 loss= 152.239 error= 1.83473 270/1000 loss= 145.8 error= 1.82696 280/1000 loss= 138.706 error= 1.81481 290/1000 loss= 131.393 error= 1.80203 300/1000 loss= 124.501 error= 1.78861 310/1000 loss= 118.807 error= 1.77402 320/1000 loss= 113.895 error= 1.75941 330/1000 loss= 109.753 error= 1.74695 340/1000 loss= 106.533 error= 1.72741 350/1000 loss= 103.003 error= 1.71038 360/1000 loss= 100.341 error= 1.69305 370/1000 loss= 97.8219 error= 1.67336 380/1000 loss= 95.7121 error= 1.65591 390/1000 loss= 93.7487 error= 1.63805 400/1000 loss= 92.0012 error= 1.61455 410/1000 loss= 90.3272 error= 1.60285 420/1000 loss= 88.9001 error= 1.57394 430/1000 loss= 87.4867 error= 1.57177 440/1000 loss= 85.8493 error= 1.52869 450/1000 loss= 84.51 error= 1.54105 460/1000 loss= 83.0117 error= 1.48245 470/1000 loss= 189.814 error= 1.60444 480/1000 loss= 139.043 error= 1.48253 490/1000 loss= 111.307 error= 1.4628 500/1000 loss= 103.139 error= 1.44905 510/1000 loss= 97.2063 error= 1.42853 520/1000 loss= 93.0574 error= 
1.41599 530/1000 loss= 90.3223 error= 1.44419 540/1000 loss= 1379.97 error= 2.31593 550/1000 loss= 918.444 error= 2.16682 560/1000 loss= 404.317 error= 1.95854 570/1000 loss= 277.134 error= 1.89074 580/1000 loss= 221.99 error= 1.89774 590/1000 loss= 201.582 error= 1.89448 600/1000 loss= 185.868 error= 1.88753 610/1000 loss= 175.018 error= 1.88729 620/1000 loss= 166.55 error= 1.88229 630/1000 loss= 159.52 error= 1.88005 640/1000 loss= 153.676 error= 1.87677 650/1000 loss= 148.587 error= 1.87344 660/1000 loss= 144.101 error= 1.87071 670/1000 loss= 140.099 error= 1.86777 680/1000 loss= 136.5 error= 1.86457 690/1000 loss= 133.236 error= 1.86129 700/1000 loss= 130.252 error= 1.85791 710/1000 loss= 127.493 error= 1.85434 720/1000 loss= 124.845 error= 1.85024 730/1000 loss= 122.235 error= 1.84494 740/1000 loss= 119.882 error= 1.84175 750/1000 loss= 117.706 error= 1.83885 760/1000 loss= 115.665 error= 1.83557 770/1000 loss= 113.758 error= 1.83218 780/1000 loss= 111.966 error= 1.82898 790/1000 loss= 115.227 error= 1.82089 800/1000 loss= 264.396 error= 1.86019 810/1000 loss= 171.81 error= 1.8587 820/1000 loss= 144.88 error= 1.85282 830/1000 loss= 134.86 error= 1.85346 840/1000 loss= 129.927 error= 1.78689 850/1000 loss= 127.451 error= 1.78702 860/1000 loss= 124.895 error= 1.78763 870/1000 loss= 122.718 error= 1.77601 880/1000 loss= 120.73 error= 1.76855 890/1000 loss= 118.862 error= 1.76299 900/1000 loss= 117.112 error= 1.75689 910/1000 loss= 115.454 error= 1.75193 920/1000 loss= 113.885 error= 1.74654 930/1000 loss= 112.396 error= 1.74071 940/1000 loss= 110.978 error= 1.73493 950/1000 loss= 109.627 error= 1.7289 960/1000 loss= 108.336 error= 1.72269 970/1000 loss= 107.099 error= 1.71631 980/1000 loss= 105.912 error= 1.70976 990/1000 loss= 104.77 error= 1.70318 1000/1000 loss= 103.67 error= 1.69641 Finished
MIT
examples/example_mauna_loa.ipynb
vishalbelsare/mogptk
Lastly, we predict on the test set.
model.predict() data.plot();
_____no_output_____
MIT
examples/example_mauna_loa.ipynb
vishalbelsare/mogptk
TensorFlow Neural Machine Translation on Cloud TPUs This tutorial demonstrates how to translate text using an LSTM network from one language to another (from English to German in this case). We will work with a dataset that contains pairs of English-German phrases. Given a sequence of words in English, we train a model to predict the German equivalent in the sequence. Note: Enable TPU acceleration to execute this notebook faster. In Colab: Runtime > Change runtime type > Hardware accelerator > **TPU**. If running locally, make sure TensorFlow version >= 1.11. This tutorial includes runnable code implemented using [tf.keras](https://www.tensorflow.org/programmers_guide/keras). By Rishabh Anand (GitHub: @rish-16)
!ls !wget http://www.manythings.org/anki/deu-eng.zip !unzip deu-eng.zip !head deu.txt
Hi. Hallo! Hi. Grüß Gott! Run! Lauf! Wow! Potzdonner! Wow! Donnerwetter! Fire! Feuer! Help! Hilfe! Help! Zu Hülf! Stop! Stopp! Wait! Warte!
MIT
labs/Lab 5A - Neural Machine Translation on TPUs.ipynb
rish-16/machine-learning-workshop
Importing TensorFlow and other libraries
import string import numpy as np from numpy import array import pandas as pd import tensorflow as tf from tensorflow.keras.models import Sequential, load_model from tensorflow.keras.layers import Dense, Embedding, RepeatVector, LSTM from tensorflow.keras.optimizers import RMSprop from tensorflow.keras.preprocessing.text import Tokenizer from tensorflow.keras.preprocessing.sequence import pad_sequences from sklearn.model_selection import train_test_split import matplotlib.pyplot as plt
_____no_output_____
MIT
labs/Lab 5A - Neural Machine Translation on TPUs.ipynb
rish-16/machine-learning-workshop
Extracting lines from the dataset into an array Here, we can examine how the dataset is structured. The English-German dataset consists of pairs of English and German phrases separated by a tab `\t`
deu_eng = open('./deu.txt', mode='rt', encoding='utf-8') deu_eng = deu_eng.read() deu_eng = deu_eng.strip().split('\n') deu_eng = [i.split('\t') for i in deu_eng] deu_eng = array(deu_eng) deu_eng = deu_eng[:50000, :] print (deu_eng[:5])
[['Hi.' 'Hallo!'] ['Hi.' 'Grüß Gott!'] ['Run!' 'Lauf!'] ['Wow!' 'Potzdonner!'] ['Wow!' 'Donnerwetter!']]
MIT
labs/Lab 5A - Neural Machine Translation on TPUs.ipynb
rish-16/machine-learning-workshop
Removing punctuation We will be removing punctuation from the phrases and converting them to lowercase. We will not be creating embeddings for punctuation or uppercase characters, as they add to the complexity of the NMT model.
deu_eng[:, 0] = [s.translate((str.maketrans('', '', string.punctuation))) for s in deu_eng[:, 0]] deu_eng[:, 1] = [s.translate((str.maketrans('', '', string.punctuation))) for s in deu_eng[:, 1]] for i in range(len(deu_eng)): deu_eng[i, 0] = deu_eng[i, 0].lower() deu_eng[i, 1] = deu_eng[i, 1].lower() print (deu_eng[:5])
[['hi' 'hallo'] ['hi' 'grüß gott'] ['run' 'lauf'] ['wow' 'potzdonner'] ['wow' 'donnerwetter']]
MIT
labs/Lab 5A - Neural Machine Translation on TPUs.ipynb
rish-16/machine-learning-workshop
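On a small hypothetical phrase pair (not taken from the dataset), the cleaning step above behaves like this:

```python
import string

pair = ["Wow! It's great.", "Donnerwetter!"]
# build a translation table that deletes all ASCII punctuation
table = str.maketrans('', '', string.punctuation)
cleaned = [s.translate(table).lower() for s in pair]
print(cleaned)  # ['wow its great', 'donnerwetter']
```

Note that `str.maketrans` is applied once per string in the notebook; building the table a single time, as here, is equivalent and slightly cheaper.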
Tokenising the phrases Tokenisation is the process of taking a sequence and chopping it up into smaller pieces called `tokens`. For example, suppose we have the sentence `"Bob returned home after the party"`. The tokenised sentence will return an array with the tokens: `["Bob", "returned", "home", "after", "the", "party"]` In this section, we will be breaking up the phrases into tokenised sequences consisting of numbers, one for each unique word. For instance, the word "good" may have the value 32 while the word "boy" may have the value 46. Supposing the phrase is "good boy", the tokenised sequence is `[32, 46]`.
def tokenize(lines): tokenizer = Tokenizer() tokenizer.fit_on_texts(lines) return tokenizer eng_tokenizer = tokenize(deu_eng[:, 0]) eng_vocab_size = len(eng_tokenizer.word_index) + 1 eng_sequence_length = 8 print ('English vocabulary size: {}'.format(eng_vocab_size)) deu_tokenizer = tokenize(deu_eng[:, 1]) deu_vocab_size = len(deu_tokenizer.word_index) + 1 deu_sequence_length = 8 print ('German vocabulary size: {}'.format(deu_vocab_size))
English vocabulary size: 6352 German vocabulary size: 10678
MIT
labs/Lab 5A - Neural Machine Translation on TPUs.ipynb
rish-16/machine-learning-workshop
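The word-to-id mapping that `Tokenizer.fit_on_texts` builds can be illustrated in plain Python: frequency-ordered, 1-based ids (`build_word_index` is an illustrative helper, not part of Keras, and it ignores Keras details like filters and OOV tokens):

```python
from collections import Counter

def build_word_index(lines):
    # the most frequent word gets id 1, the next id 2, and so on;
    # id 0 is left unused because it is reserved for padding
    counts = Counter(word for line in lines for word in line.split())
    return {word: i + 1 for i, (word, _) in enumerate(counts.most_common())}

print(build_word_index(["good boy", "good day"]))  # {'good': 1, 'boy': 2, 'day': 3}
```

This is why `eng_vocab_size` above is `len(word_index) + 1`: the extra slot accounts for the reserved padding id 0.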
Convert lines into sequences as input for the NMT model We will now be using our Tokeniser to create tokenised sequences of the original English and German phrases from our dataset.
def encode_sequences(tokenizer, sequence_length, lines): sequence = tokenizer.texts_to_sequences(lines) sequence = pad_sequences(sequence, sequence_length, padding="post") # 0s after the actual sequence return sequence
_____no_output_____
MIT
labs/Lab 5A - Neural Machine Translation on TPUs.ipynb
rish-16/machine-learning-workshop
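The `padding="post"` behaviour can be sketched for a single sequence. Note this is a simplified stand-in: it truncates overly long sequences from the end, whereas the real `pad_sequences` truncates from the front by default:

```python
def pad_post(seq, length, value=0):
    # append padding values after the tokens until the sequence reaches `length`
    return (seq + [value] * length)[:length]

print(pad_post([32, 46], 8))  # [32, 46, 0, 0, 0, 0, 0, 0]
```

Padding after the real tokens (rather than before) is what lets the model, combined with `mask_zero=True` in the `Embedding` layer, ignore the trailing zeros.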
Splitting the dataset into training and testing sets
train, test = train_test_split(deu_eng, test_size=.2, random_state=12) x_train = encode_sequences(deu_tokenizer, deu_sequence_length, train[:, 1]) y_train = encode_sequences(eng_tokenizer, eng_sequence_length, train[:, 0]) x_test = encode_sequences(deu_tokenizer, deu_sequence_length, test[:, 1]) y_test = encode_sequences(eng_tokenizer, eng_sequence_length, test[:, 0]) print (x_train.shape, y_train.shape) print (x_test.shape, x_test.shape)
(40000, 8) (40000, 8) (10000, 8) (10000, 8)
MIT
labs/Lab 5A - Neural Machine Translation on TPUs.ipynb
rish-16/machine-learning-workshop
Training on a TPU In order to connect to a TPU, we can follow 4 easy steps: 1. Connect to a TPU instance 2. Initialise a parallelly-distributed training `strategy` 3. Build our NMT model under the `strategy` 4. Train the model on a TPU For more details on training on TPUs for free, feel free to check out [this](https://medium.com/@mail.rishabh.anand/tpu-training-made-easy-with-colab-3b73b920878f) article that covers the process in great detail. Connecting to available TPU instances Here, we search for available instances of version 2 TPUs (the ones Google publicly allocates)
tpu = tf.distribute.cluster_resolver.TPUClusterResolver() # TPU detection # Initialising a parallelly-distributed training strategy tf.tpu.experimental.initialize_tpu_system(tpu) strategy = tf.distribute.experimental.TPUStrategy(tpu, steps_per_run=128) print('Running on TPU ', tpu.cluster_spec().as_dict()['worker']) print("Number of accelerators: ", strategy.num_replicas_in_sync) # Building our model under that strategy in_vocab = deu_vocab_size out_vocab = eng_vocab_size units = 512 in_timesteps = deu_sequence_length out_timesteps = eng_sequence_length with strategy.scope(): model = Sequential() model.add(Embedding(in_vocab, units, input_length=in_timesteps, mask_zero=True)) model.add(LSTM(units)) model.add(RepeatVector(out_timesteps)) model.add(LSTM(units, return_sequences=True)) model.add(Dense(out_vocab, activation='softmax')) rms = RMSprop(lr=0.001) model.compile(loss='sparse_categorical_crossentropy', optimizer=rms) model.summary() tf.keras.utils.plot_model( model, show_shapes=True, show_layer_names=True, rankdir="TB" ) history = model.fit(x_train, y_train.reshape(y_train.shape[0], y_train.shape[1], 1), epochs=30, steps_per_epoch=500)
Epoch 1/30 500/500 [==============================] - 12s 23ms/step - loss: 2.2086 Epoch 2/30 500/500 [==============================] - 7s 15ms/step - loss: 1.7406 Epoch 3/30 500/500 [==============================] - 8s 15ms/step - loss: 1.5482 Epoch 4/30 500/500 [==============================] - 8s 16ms/step - loss: 1.2944 Epoch 5/30 500/500 [==============================] - 8s 16ms/step - loss: 1.1328 Epoch 6/30 500/500 [==============================] - 8s 17ms/step - loss: 0.9843 Epoch 7/30 500/500 [==============================] - 9s 17ms/step - loss: 0.8681 Epoch 8/30 500/500 [==============================] - 9s 17ms/step - loss: 0.7995 Epoch 9/30 500/500 [==============================] - 9s 18ms/step - loss: 0.6747 Epoch 10/30 500/500 [==============================] - 9s 19ms/step - loss: 0.6119 Epoch 11/30 500/500 [==============================] - 10s 19ms/step - loss: 0.5652 Epoch 12/30 500/500 [==============================] - 10s 19ms/step - loss: 0.4462 Epoch 13/30 500/500 [==============================] - 10s 20ms/step - loss: 0.4422 Epoch 14/30 500/500 [==============================] - 10s 20ms/step - loss: 0.3768 Epoch 15/30 500/500 [==============================] - 10s 21ms/step - loss: 0.3660 Epoch 16/30 500/500 [==============================] - 10s 21ms/step - loss: 0.3685 Epoch 17/30 500/500 [==============================] - 11s 22ms/step - loss: 0.2964 Epoch 18/30 500/500 [==============================] - 11s 22ms/step - loss: 0.2531 Epoch 19/30 500/500 [==============================] - 11s 22ms/step - loss: 0.2844 Epoch 20/30 500/500 [==============================] - 12s 23ms/step - loss: 0.2386 Epoch 21/30 500/500 [==============================] - 11s 23ms/step - loss: 0.2292 Epoch 22/30 500/500 [==============================] - 12s 23ms/step - loss: 0.2215 Epoch 23/30 500/500 [==============================] - 12s 24ms/step - loss: 0.1749 Epoch 24/30 500/500 [==============================] - 12s 24ms/step - loss: 0.1745 
Epoch 25/30 500/500 [==============================] - 12s 25ms/step - loss: 0.1726 Epoch 26/30 500/500 [==============================] - 13s 25ms/step - loss: 0.1428 Epoch 27/30 500/500 [==============================] - 13s 26ms/step - loss: 0.1585 Epoch 28/30 500/500 [==============================] - 13s 26ms/step - loss: 0.1811 Epoch 29/30 500/500 [==============================] - 14s 27ms/step - loss: 0.1676 Epoch 30/30 500/500 [==============================] - 14s 29ms/step - loss: 0.1636
MIT
labs/Lab 5A - Neural Machine Translation on TPUs.ipynb
rish-16/machine-learning-workshop
Checking the loss values
plt.plot(history.history['loss']) plt.xlabel('Epochs') plt.ylabel('Sparse Categorical Loss') plt.legend(['train']) plt.show()
_____no_output_____
MIT
labs/Lab 5A - Neural Machine Translation on TPUs.ipynb
rish-16/machine-learning-workshop
Running our model on testing dataset
# Getting the predictions from the testing dataset preds = model.predict_classes(x_test.reshape(x_test.shape[0], x_test.shape[1])[:10]) # only predicting over 10 instances print (preds) # A function to convert a sequence back into words def convert_words(n, tokenizer): for word, idx in tokenizer.word_index.items(): if idx == n: return word return None # Running our model on the testing dataset pred_texts = [] for i in preds: temp = [] for j in range(len(i)): word = convert_words(i[j], eng_tokenizer) if j > 0: if (word == convert_words(i[j-1], eng_tokenizer)) or (word == None): temp.append('') else: temp.append(word) else: if (word == None): temp.append('') else: temp.append(word) pred_texts.append(' '.join(temp))
_____no_output_____
MIT
labs/Lab 5A - Neural Machine Translation on TPUs.ipynb
rish-16/machine-learning-workshop
Translating the text from German to English We can see that our model does a reasonably good job of translating the German text into English. However, some instances are partially mistranslated or outright incorrect. Nonetheless, for a basic NMT model trained for only 30 epochs, the model generalises well.
pred_df = pd.DataFrame({'actual': test[:10, 0], 'prediction': pred_texts}) pred_df
_____no_output_____
MIT
labs/Lab 5A - Neural Machine Translation on TPUs.ipynb
rish-16/machine-learning-workshop
Daymet
from pynhd import NLDI import pydaymet as daymet import matplotlib.pyplot as plt import warnings warnings.filterwarnings("ignore")
_____no_output_____
MIT
docs/examples/daymet.ipynb
DavidChoi76/hydrodata
The Daymet database provides climatology data at 1-km resolution. First, we use [PyNHD](https://github.com/cheginit/pynhd) to get the contributing watershed geometry of a NWIS station with the ID of `USGS-01031500`:
geometry = NLDI().getfeature_byid("nwissite", "USGS-01031500", basin=True).geometry[0]
_____no_output_____
MIT
docs/examples/daymet.ipynb
DavidChoi76/hydrodata
[PyDaymet](https://github.com/cheginit/pynhd) allows us to get the data either for a single pixel or for a region as gridded data. The function for a single pixel is called `pydaymet.get_byloc` and the one for gridded data is called `pydaymet.get_bygeom`. Both have identical arguments: the first positional argument is a coordinate for the single-pixel case or a geometry for the gridded case, and the second positional argument is the dates. The dates can be either a tuple of length two like `("2000-01-01", "2000-01-31")` or a list of years like `[2000, 2010]`. We can also specify a subset of variables to be downloaded via the ``variables`` argument. The available variables in the Daymet database are ``tmin``, ``tmax``, ``prcp``, ``srad``, ``vp``, ``swe``, and ``dayl``. There's also a flag for computing Potential EvapoTranspiration (PET) based on the Daymet data. Let's get the precipitation, minimum temperature, and PET.
variables = ["prcp", "tmin"] clm_g = daymet.get_bygeom(geometry, ("2000-01-01", "2000-01-31"), variables=variables, pet=True)
_____no_output_____
MIT
docs/examples/daymet.ipynb
DavidChoi76/hydrodata
Note that the default CRS is EPSG:4326. If the input geometry (or coordinate) is in a different CRS, we can pass it to the function. The gridded data are automatically masked to the input geometry. Now, let's get the data for a coordinate in the EPSG:3542 CRS.
coords = (-1431147.7928, 318483.4618) crs = "epsg:3542" clm_p = daymet.get_byloc(coords, 2001, crs=crs, variables=variables, pet=True)
_____no_output_____
MIT
docs/examples/daymet.ipynb
DavidChoi76/hydrodata
Now, let's plot the data.
fig = plt.figure(figsize=(20, 8), dpi=300) gs = fig.add_gridspec(2, 2) ax = fig.add_subplot(gs[:, 0]) clm_g.prcp.isel(time=10).plot(ax=ax) ax.set_aspect("auto") axes = gs[:,1].subgridspec(2, 1, hspace=0).subplots(sharex=True) clm_p["tmin (deg c)"].plot(ax=axes[0], color="r") axes[0].set_ylabel("$T_{min}$ (deg C)") axes[0].xaxis.set_ticks_position('none') clm_p["prcp (mm/day)"].plot(ax=axes[1]) axes[1].set_ylabel("$P$ (mm/day)") plt.tight_layout()
_____no_output_____
MIT
docs/examples/daymet.ipynb
DavidChoi76/hydrodata
*Practical Data Science 19/20* Programming Assignment In this programming assignment you need to apply your new `numpy`, `pandas` and `matplotlib` knowledge. You will need to do several [`groupby`](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.groupby.html)s and [`join`](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.join.html)`s to solve the task. Load required packages
import pandas as pd %matplotlib inline
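As a warm-up, here is the groupby-then-join pattern the assignment relies on, sketched on a toy frame (the column names and values below are made up, not the actual sales tables):

```python
import pandas as pd

sales = pd.DataFrame({
    "shop_id": [1, 1, 2],
    "item_id": [10, 11, 10],
    "revenue": [5.0, 3.0, 7.0],
})
items = pd.DataFrame({"item_name": ["pen", "book"]}, index=[10, 11])

# Aggregate revenue per item, then join the item names on the shared index
per_item = sales.groupby("item_id")["revenue"].sum().to_frame()
per_item = per_item.join(items)
print(per_item)
```

`join` matches on the index, which is why the aggregated frame (indexed by `item_id`) lines up with `items` here; use `merge` when you need to match on columns instead.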
_____no_output_____
MIT
Assignment_1.ipynb
NikoStein/_a2-template
Load Data
DATA_URL = 'https://raw.githubusercontent.com/pds1920/_a1-template/master/data/' transactions = pd.read_csv(DATA_URL + '/sales_train.csv.gz') items = pd.read_csv(DATA_URL + '/items.csv') item_categories = pd.read_csv(DATA_URL + '/item_categories.csv')
_____no_output_____
MIT
Assignment_1.ipynb
NikoStein/_a2-template
Get to know the dataPrint the **shape** of the loaded dataframes.- You can use a list comprehension here
# Write your code here
_____no_output_____
MIT
Assignment_1.ipynb
NikoStein/_a2-template
Use [`df.head`](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.head.html) function to print several rows of each data frame. Examine the features you are given.
# Write your code here # Write your code here # Write your code here
_____no_output_____
MIT
Assignment_1.ipynb
NikoStein/_a2-template
Now use your `pandas` skills to get answers for the following questions. What was the maximum total revenue among all the shops in June, 2014?* Revenue refers to total sales minus the value of goods returned.* Sometimes items are returned; find such examples in the dataset. * It is handy to split the `date` field into [`day`, `month`, `year`] components and use `df.year == 14` and `df.month == 6` in order to select the target subset of dates.* You may work with the `date` feature as with strings, or you may first convert it to `pd.datetime` type with the `pd.to_datetime` function, but do not forget to set the correct `format` argument.
# Write your code here max_revenue = # Write your code here max_revenue
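The `pd.to_datetime` hint can be sketched as follows, assuming the dates look like `'02.01.2013'` (day.month.year; check the actual file before relying on this format):

```python
import pandas as pd

dates = pd.Series(["02.01.2013", "15.06.2014"])
parsed = pd.to_datetime(dates, format="%d.%m.%Y")

# Boolean mask selecting June 2014, analogous to the df.year/df.month hint
mask = (parsed.dt.year == 2014) & (parsed.dt.month == 6)
print(mask.tolist())  # -> [False, True]
```

Getting the `format` string wrong (e.g. `%m.%d.%Y`) silently swaps days and months for ambiguous dates, so it is worth spot-checking a few parsed values.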
_____no_output_____
MIT
Assignment_1.ipynb
NikoStein/_a2-template
How many items are there with a constant price?* Let's assume that the items are returned for the same price as they had been sold
num_items_constant_price = # Write your code here num_items_constant_price
_____no_output_____
MIT
Assignment_1.ipynb
NikoStein/_a2-template
What was the variance of the number of items sold per day for the shop with `shop_id = 25` in December, 2014?* Do not count the items that were sold but returned back later.* Fill `total_num_items_sold`: an (ordered) array that contains the total number of items sold on each day * Fill `days`: an (ordered) array that contains all relevant days* Then compute the variance of `total_num_items_sold`* If there were no sales on a given day, ***do not*** impute the missing value with zero, just ignore that day
shop_id = 25 # Write your code here total_num_items_sold = # Write your code here days = # Write your code here total_num_items_sold_var = # Write your code here total_num_items_sold_var
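One subtlety worth knowing before computing the variance: pandas' `Series.var()` uses the unbiased estimator (`ddof=1`), while `numpy.var` defaults to the population variance (`ddof=0`). A toy sketch with made-up daily counts (note the absent day is simply missing, not imputed as zero):

```python
import numpy as np
import pandas as pd

# Made-up counts; Dec 3 had no sales and is simply absent from the index
daily = pd.Series([3, 5, 4], index=["2014-12-01", "2014-12-02", "2014-12-04"])

print(daily.var())           # sample variance, ddof=1 -> 1.0
print(np.var(daily.values))  # population variance, ddof=0 -> 0.666...
```

Check which convention the grader expects; the two differ by a factor of `n / (n - 1)`.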
_____no_output_____
MIT
Assignment_1.ipynb
NikoStein/_a2-template
Visualization of the daily items sold Use the `total_num_items_sold` and `days` arrays to plot the number of items sold each day by `shop_id = 25` in December, 2014.* plot-title: 'Daily items sold for shop_id = 25'
# Write your code here
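A minimal matplotlib sketch of the requested plot, with made-up placeholder data standing in for the arrays you computed above:

```python
import matplotlib
matplotlib.use("Agg")  # non-interactive backend; in the notebook %matplotlib inline handles this
import matplotlib.pyplot as plt

days = [1, 2, 4]                  # placeholder for your `days` array
total_num_items_sold = [3, 5, 4]  # placeholder for your counts

fig, ax = plt.subplots()
ax.plot(days, total_num_items_sold)
ax.set_title("Daily items sold for shop_id = 25")
ax.set_xlabel("Day")
ax.set_ylabel("Items sold")
fig.savefig("daily_items.png")
```

Using the object-oriented `fig, ax` interface keeps the title and labels attached to the right axes when a notebook contains several plots.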
_____no_output_____
MIT
Assignment_1.ipynb
NikoStein/_a2-template
Which item category generated the highest revenue in spring 2014? Spring is the period from March to May.
# Write your code here category_id_with_max_revenue = # Write your code here category_id_with_max_revenue
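The filter-then-aggregate pattern this question calls for can be sketched on toy data (the column names and values below are made up, not the real schema):

```python
import pandas as pd

df = pd.DataFrame({
    "month": [2, 3, 4, 5, 5],
    "category_id": [1, 1, 2, 2, 2],
    "revenue": [100.0, 10.0, 20.0, 5.0, 30.0],
})

# Keep only spring months, then find the category with the largest total revenue
spring = df[df["month"].isin([3, 4, 5])]
best = spring.groupby("category_id")["revenue"].sum().idxmax()
print(best)  # -> 2
```

Note that February's large sale is excluded by the `isin` filter, which is why category 2 wins here despite category 1 having the single biggest transaction.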
_____no_output_____
MIT
Assignment_1.ipynb
NikoStein/_a2-template
Demo Notebook for CPW Kappa Calculation Let's start by importing Qiskit Metal:
import qiskit_metal as metal from qiskit_metal import designs, draw from qiskit_metal import MetalGUI, Dict, open_docs
_____no_output_____
Apache-2.0
tutorials/6 Analysis/CPW_kappa_calculation_demo.ipynb
mtreinish/qiskit-metal