Train Train the neural network on the preprocessed data. If you have a hard time getting a good loss, check the forums to see if anyone is having the same problem.
""" DON'T MODIFY ANYTHING IN THIS CELL """ import time, sys def get_accuracy(target, logits): """ Calculate accuracy """ max_seq = max(target.shape[1], logits.shape[1]) if max_seq - target.shape[1]: target = np.pad( target, [(0,0),(0,max_seq - target.shape[1])], 'constant') if max_seq - logits.shape[1]: logits = np.pad( logits, [(0,0),(0,max_seq - logits.shape[1]), (0,0)], 'constant') return np.mean(np.equal(target, np.argmax(logits, 2))) train_source = source_int_text[batch_size:] train_target = target_int_text[batch_size:] valid_source = helper.pad_sentence_batch(source_int_text[:batch_size]) valid_target = helper.pad_sentence_batch(target_int_text[:batch_size]) with tf.Session(graph=train_graph) as sess: sess.run(tf.global_variables_initializer()) for epoch_i in range(epochs): for batch_i, (source_batch, target_batch) in enumerate( helper.batch_data(train_source, train_target, batch_size)): start_time = time.time() _, loss = sess.run( [train_op, cost], {input_data: source_batch, targets: target_batch, lr: learning_rate, sequence_length: target_batch.shape[1], keep_prob: keep_probability}) batch_train_logits = sess.run( inference_logits, {input_data: source_batch, keep_prob: 1.0}) batch_valid_logits = sess.run( inference_logits, {input_data: valid_source, keep_prob: 1.0}) train_acc = get_accuracy(target_batch, batch_train_logits) valid_acc = get_accuracy(np.array(valid_target), batch_valid_logits) end_time = time.time() sys.stdout.write('\rEpoch {:>3} Batch {:>4}/{} - Train Accuracy: {:>6.3f}, Validation Accuracy: {:>6.3f}, Loss: {:>6.3f}' .format(epoch_i, batch_i, len(source_int_text) // batch_size, train_acc, valid_acc, loss)) print("") # Save Model saver = tf.train.Saver() saver.save(sess, save_path) print('Model Trained and Saved')
language-translation/dlnd_language_translation.ipynb
retnuh/deep-learning
mit
Sentence to Sequence To feed a sentence into the model for translation, you first need to preprocess it. Implement the function sentence_to_seq() to preprocess new sentences: - Convert the sentence to lowercase - Convert words into ids using vocab_to_int - Convert words not in the vocabulary to the <UNK> word id.
def sentence_to_seq(sentence, vocab_to_int):
    """ Convert a sentence to a sequence of ids
    :param sentence: String
    :param vocab_to_int: Dictionary to go from the words to an id
    :return: List of word ids
    """
    unk = vocab_to_int['<UNK>']
    return [vocab_to_int.get(word, unk) for word in sentence.lower().split()]


""" DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """
tests.test_sentence_to_seq(sentence_to_seq)
Translate This will translate translate_sentence from English to French.
translate_sentence = 'Paris is cold and wet in the winter .'

""" DON'T MODIFY ANYTHING IN THIS CELL """
translate_sentence = sentence_to_seq(translate_sentence, source_vocab_to_int)

loaded_graph = tf.Graph()
with tf.Session(graph=loaded_graph) as sess:
    # Load saved model
    loader = tf.train.import_meta_graph(load_path + '.meta')
    loader.restore(sess, load_path)

    input_data = loaded_graph.get_tensor_by_name('input:0')
    logits = loaded_graph.get_tensor_by_name('logits:0')
    keep_prob = loaded_graph.get_tensor_by_name('keep_prob:0')

    translate_logits = sess.run(logits, {input_data: [translate_sentence],
                                         keep_prob: 1.0})[0]

print('Input')
print('  Word Ids:      {}'.format([i for i in translate_sentence]))
print('  English Words: {}'.format([source_int_to_vocab[i] for i in translate_sentence]))

print('\nPrediction')
print('  Word Ids:      {}'.format([i for i in np.argmax(translate_logits, 1)]))
print('  French Words:  {}'.format([target_int_to_vocab[i] for i in np.argmax(translate_logits, 1)]))
Change the cron schedule (line 1 of the script) to the desired frequency. The five cron fields are minute, hour, day of month, month, and day of week; `*/2 * * * *` runs every two minutes.
%%writefile modem/bqml/pipeline/shell_scheduler.sh
cron_schedule="*/2 * * * *"
echo "$cron_schedule cd ~/modem/bqml/pipeline && python main.py >> logs.csv" | crontab -
echo "Your workflow has been scheduled with the cron schedule of $cron_schedule. Enjoy!"
bqml/utils/BQML_Deployment_Template_Compute_Engine.ipynb
google/modem
apache-2.0
Step 3: Deploy the code Run the cell below.
!sh enable_firewall.sh
!gcloud compute scp --recurse modem $SSH_DESTINATION:~/
!echo "Code deployed."
Step 4: Test and schedule the code When you run the cell below, an interactive shell opens up. Use the following commands:

1. Test the code. If successful, you should see a SUCCESS message along with a timestamp; proceed to step 2. If there are any errors, you can either work through the colab from Step 1, fixing the parameters, or log into the Compute Engine instance to fix the errors.

```
cd modem/bqml/pipeline
sh shell_deploy.sh
```

2. Once step 1 was successful, schedule the workflow:

```
sh shell_scheduler.sh
```
!gcloud compute ssh $SSH_DESTINATION
Implement Preprocessing Functions The first thing to do to any dataset is preprocessing. Implement the following preprocessing functions below: - Lookup Table - Tokenize Punctuation Lookup Table To create a word embedding, you first need to transform the words to ids. In this function, create two dictionaries: - Dictionary to go from the words to an id, we'll call vocab_to_int - Dictionary to go from the id to word, we'll call int_to_vocab Return these dictionaries in the following tuple (vocab_to_int, int_to_vocab)
import numpy as np
from collections import Counter
import problem_unittests as tests


def create_lookup_tables(text, min_count=1):
    """ Create lookup tables for vocabulary
    :param text: The text of tv scripts split into words
    :return: A tuple of dicts (vocab_to_int, int_to_vocab)
    """
    word_counts = Counter(text)
    # Drop rare words below the minimum count
    for k in list(word_counts):
        if word_counts[k] < min_count:
            del word_counts[k]
    sorted_vocab = sorted(word_counts, key=word_counts.get, reverse=True)
    int_to_vocab = {ii: word for ii, word in enumerate(sorted_vocab)}
    vocab_to_int = {word: ii for ii, word in int_to_vocab.items()}
    return vocab_to_int, int_to_vocab


""" DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """
tests.test_create_lookup_tables(create_lookup_tables)
tv-script-generation/dlnd_tv_script_generation.ipynb
seinberg/deep-learning
mit
Tokenize Punctuation We'll be splitting the script into a word array using spaces as delimiters. However, punctuation like periods and exclamation marks makes it hard for the neural network to distinguish between the word "bye" and "bye!". Implement the function token_lookup to return a dict that will be used to tokenize symbols like "!" into "||Exclamation_Mark||". Create a dictionary for the following symbols where the symbol is the key and value is the token: - Period ( . ) - Comma ( , ) - Quotation Mark ( " ) - Semicolon ( ; ) - Exclamation mark ( ! ) - Question mark ( ? ) - Left Parentheses ( ( ) - Right Parentheses ( ) ) - Dash ( -- ) - Return ( \n ) This dictionary will be used to tokenize the symbols and add the delimiter (space) around each one. This separates each symbol as its own word, making it easier for the neural network to predict the next word. Make sure you don't use a token that could be confused as a word. Instead of using the token "dash", try using something like "||dash||".
def token_lookup():
    """ Generate a dict to turn punctuation into a token.
    :return: Tokenize dictionary where the key is the punctuation and the value is the token
    """
    return {
        '.': '__period__',
        ',': '__comma__',
        '"': '__double_quote__',
        ';': '__semi-colon__',
        '!': '__exclamation__',
        '?': '__question__',
        '(': '__open_paren__',
        ')': '__close_paren__',
        '--': '__dash__',
        '\n': '__endline__'
    }


""" DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """
tests.test_tokenize(token_lookup)
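A quick self-contained sketch of how such a dictionary is used downstream: each symbol is replaced by its token padded with spaces, so punctuation splits off as its own "word" when the text is split on whitespace. The token strings here mirror the ones above.

```python
# Minimal demo: replace each punctuation symbol with a delimited token
# surrounded by spaces, so punctuation becomes its own word after splitting.
token_dict = {
    '.': '__period__',
    ',': '__comma__',
    '!': '__exclamation__',
    '\n': '__endline__',
}

def tokenize_text(text, token_dict):
    for symbol, token in token_dict.items():
        text = text.replace(symbol, ' {} '.format(token))
    return text.split()

words = tokenize_text("bye! bye.", token_dict)
print(words)  # → ['bye', '__exclamation__', 'bye', '__period__']
```

With the punctuation separated this way, "bye" and "bye!" both contribute the same word token to the vocabulary.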
Input Implement the get_inputs() function to create TF Placeholders for the Neural Network. It should create the following placeholders: - Input text placeholder named "input" using the TF Placeholder name parameter. - Targets placeholder - Learning Rate placeholder Return the placeholders in the following tuple (Input, Targets, LearningRate)
def get_inputs():
    """ Create TF Placeholders for input, targets, and learning rate.
    :return: Tuple (input, targets, learning rate)
    """
    input_text = tf.placeholder(tf.int32, [None, None], name='input')
    targets = tf.placeholder(tf.int32, [None, None], name='targets')
    learning_rate = tf.placeholder(tf.float32, name='learning_rate')
    return input_text, targets, learning_rate


""" DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """
tests.test_get_inputs(get_inputs)
Build RNN Cell and Initialize Stack one or more BasicLSTMCells in a MultiRNNCell. - The RNN size should be set using rnn_size - Initialize Cell State using the MultiRNNCell's zero_state() function - Apply the name "initial_state" to the initial state using tf.identity() Return the cell and initial state in the following tuple (Cell, InitialState)
def get_init_cell(batch_size, rnn_size, lstm_layers=1, keep_prob=1.0):
    """ Create an RNN Cell and initialize it.
    :param batch_size: Size of batches
    :param rnn_size: Size of RNNs
    :param lstm_layers: Number of LSTM layers to stack
    :param keep_prob: Dropout keep probability for each cell
    :return: Tuple (cell, initial state)
    """
    def make_cell():
        # Build a fresh LSTM cell (with dropout) for each layer; reusing
        # one cell object across layers would make the layers share state.
        lstm = tf.contrib.rnn.BasicLSTMCell(rnn_size)
        return tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=keep_prob)

    # Stack up multiple LSTM layers, for deep learning
    cell = tf.contrib.rnn.MultiRNNCell([make_cell() for _ in range(lstm_layers)])

    # Getting an initial state of all zeros
    initial_state = tf.identity(cell.zero_state(batch_size, tf.float32), 'initial_state')
    return cell, initial_state


""" DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """
tests.test_get_init_cell(get_init_cell)
Word Embedding Apply embedding to input_data using TensorFlow. Return the embedded sequence.
def get_embed(input_data, vocab_size, embed_dim):
    """ Create embedding for <input_data>.
    :param input_data: TF placeholder for text input.
    :param vocab_size: Number of words in vocabulary.
    :param embed_dim: Number of embedding dimensions
    :return: Embedded input.
    """
    # Equivalent to creating an embedding matrix variable and calling
    # tf.nn.embedding_lookup on it.
    return tf.contrib.layers.embed_sequence(
        input_data, vocab_size=vocab_size, embed_dim=embed_dim)


""" DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """
tests.test_get_embed(get_embed)
Build RNN You created an RNN Cell in the get_init_cell() function. Time to use the cell to create an RNN. - Build the RNN using tf.nn.dynamic_rnn() - Apply the name "final_state" to the final state using tf.identity() Return the outputs and final state in the following tuple (Outputs, FinalState)
def build_rnn(cell, inputs):
    """ Create a RNN using a RNN Cell
    :param cell: RNN Cell
    :param inputs: Input text data
    :return: Tuple (Outputs, Final State)
    """
    outputs, final_state = tf.nn.dynamic_rnn(cell=cell, inputs=inputs, dtype=tf.float32)
    final_state = tf.identity(final_state, 'final_state')
    return outputs, final_state


""" DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """
tests.test_build_rnn(build_rnn)
Build the Neural Network Apply the functions you implemented above to: - Apply embedding to input_data using your get_embed(input_data, vocab_size, embed_dim) function. - Build RNN using cell and your build_rnn(cell, inputs) function. - Apply a fully connected layer with a linear activation and vocab_size as the number of outputs. Return the logits and final state in the following tuple (Logits, FinalState)
def build_nn(cell, rnn_size, input_data, vocab_size, embed_dim):
    """ Build part of the neural network
    :param cell: RNN cell
    :param rnn_size: Size of RNNs
    :param input_data: Input data
    :param vocab_size: Vocabulary size
    :param embed_dim: Number of embedding dimensions
    :return: Tuple (Logits, FinalState)
    """
    embedding = get_embed(input_data, vocab_size, embed_dim)
    lstm_outputs, final_state = build_rnn(cell, embedding)
    logits = tf.contrib.layers.fully_connected(
        lstm_outputs, vocab_size,
        weights_initializer=tf.truncated_normal_initializer(stddev=0.1),
        biases_initializer=tf.zeros_initializer(),
        activation_fn=None)
    return logits, final_state


""" DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """
tests.test_build_nn(build_nn)
Batches Implement get_batches to create batches of input and targets using int_text. The batches should be a Numpy array with the shape (number of batches, 2, batch size, sequence length). Each batch contains two elements: - The first element is a single batch of input with the shape [batch size, sequence length] - The second element is a single batch of targets with the shape [batch size, sequence length] If you can't fill the last batch with enough data, drop the last batch. For example, get_batches([1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20], 3, 2) would return a Numpy array of the following:
```
[
  # First Batch
  [
    # Batch of Input
    [[ 1  2], [ 7  8], [13 14]]
    # Batch of targets
    [[ 2  3], [ 8  9], [14 15]]
  ]

  # Second Batch
  [
    # Batch of Input
    [[ 3  4], [ 9 10], [15 16]]
    # Batch of targets
    [[ 4  5], [10 11], [16 17]]
  ]

  # Third Batch
  [
    # Batch of Input
    [[ 5  6], [11 12], [17 18]]
    # Batch of targets
    [[ 6  7], [12 13], [18  1]]
  ]
]
```
Notice that the last target value in the last batch is the first input value of the first batch. In this case, 1. This is a common technique used when creating sequence batches, although it is rather unintuitive.
def get_batches(int_text, batch_size, seq_length):
    """ Return batches of input and target
    :param int_text: Text with the words replaced by their ids
    :param batch_size: The size of batch
    :param seq_length: The length of sequence
    :return: Batches as a Numpy array
    """
    n_batches = (len(int_text) - 1) // (batch_size * seq_length)
    int_text = int_text[:n_batches * batch_size * seq_length + 1]
    input_seqs = [int_text[i * seq_length:(i + 1) * seq_length]
                  for i in range(n_batches * batch_size)]
    # Targets are the inputs shifted one step to the right
    shifted = int_text[1:]
    target_seqs = [shifted[i * seq_length:(i + 1) * seq_length]
                   for i in range(n_batches * batch_size)]

    all_data = []
    for row in range(n_batches):
        input_cols = []
        target_cols = []
        for col in range(batch_size):
            input_cols.append(input_seqs[col * n_batches + row])
            target_cols.append(target_seqs[col * n_batches + row])
        all_data.append([input_cols, target_cols])
    return np.array(all_data)


""" DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """
tests.test_get_batches(get_batches)
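The batching scheme can be sanity-checked with a standalone sketch. Note that this reshaping version does not wrap the final target value back around to the first input, so only the shape and the first batch are checked against the example in the text.

```python
import numpy as np

def get_batches_demo(int_text, batch_size, seq_length):
    # Same slicing scheme as described above: drop the remainder, then pair
    # each input sequence with the sequence shifted one step to the right.
    n_batches = (len(int_text) - 1) // (batch_size * seq_length)
    keep = n_batches * batch_size * seq_length
    xs = np.array(int_text[:keep]).reshape(batch_size, -1)
    ys = np.array(int_text[1:keep + 1]).reshape(batch_size, -1)
    batches = []
    for i in range(n_batches):
        x = xs[:, i * seq_length:(i + 1) * seq_length]
        y = ys[:, i * seq_length:(i + 1) * seq_length]
        batches.append([x, y])
    return np.array(batches)

b = get_batches_demo(list(range(1, 21)), 3, 2)
print(b.shape)  # → (3, 2, 3, 2)
print(b[0][0])  # first batch of inputs: [[1 2], [7 8], [13 14]]
```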
Neural Network Training Hyperparameters Tune the following parameters: Set num_epochs to the number of epochs. Set batch_size to the batch size. Set rnn_size to the size of the RNNs. Set embed_dim to the size of the text word embeddings. Set seq_length to the length of sequence. Set learning_rate to the learning rate. Set show_every_n_batches to the number of batches the neural network should print progress.
# Number of Epochs
num_epochs = 50
# Batch Size
batch_size = 128
# RNN Size
rnn_size = 512
# Number of embedding dimensions
embed_dim = 300
# Sequence Length
seq_length = 16
# Learning Rate
learning_rate = 0.01
# Show stats for every n number of batches
show_every_n_batches = 25
# Dropout keep probability and number of LSTM layers
keep_prob = 1.0
lstm_layers = 2

""" DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """
save_dir = './save'
Build the Graph Build the graph using the neural network you implemented.
""" DON'T MODIFY ANYTHING IN THIS CELL """ from tensorflow.contrib import seq2seq train_graph = tf.Graph() with train_graph.as_default(): vocab_size = len(int_to_vocab) input_text, targets, lr = get_inputs() input_data_shape = tf.shape(input_text) cell, initial_state = get_init_cell(input_data_shape[0], rnn_size, lstm_layers=lstm_layers, keep_prob=keep_prob) logits, final_state = build_nn(cell, embed_dim, input_text, vocab_size) # Probabilities for generating words probs = tf.nn.softmax(logits, name='probs') # Loss function cost = seq2seq.sequence_loss( logits, targets, tf.ones([input_data_shape[0], input_data_shape[1]])) # Optimizer optimizer = tf.train.AdamOptimizer(lr) # Gradient Clipping gradients = optimizer.compute_gradients(cost) capped_gradients = [(tf.clip_by_value(grad, -1., 1.), var) for grad, var in gradients if grad is not None] train_op = optimizer.apply_gradients(capped_gradients)
Choose Word Implement the pick_word() function to select the next word using probabilities.
import random


def weighted_choice(choices):
    """ Cribbed from http://stackoverflow.com/questions/3679694/a-weighted-version-of-random-choice """
    total = sum(w for c, w in choices)
    r = random.uniform(0, total)
    upto = 0
    for c, w in choices:
        if upto + w >= r:
            return c
        upto += w
    assert False, "Shouldn't get here"


def pick_word(probabilities, int_to_vocab, top_n=5):
    """ Pick the next word in the generated text
    :param probabilities: Probabilities of the next word
    :param int_to_vocab: Dictionary of word ids as the keys and words as the values
    :return: String of the predicted word
    """
    # Take the top_n most probable words and sample among them,
    # weighted by probability
    top_n_choices = []
    for _ in range(min(len(probabilities), top_n)):
        max_idx = np.argmax(probabilities)
        top_n_choices.append((max_idx, probabilities[max_idx]))
        probabilities.itemset(max_idx, 0)
    word_idx = weighted_choice(top_n_choices)
    return int_to_vocab[word_idx]


""" DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """
tests.test_pick_word(pick_word)
Generate TV Script This will generate the TV script for you. Set gen_length to the length of TV script you want to generate.
gen_length = 400
# homer_simpson, moe_szyslak, or Barney_Gumble
prime_word = 'bart_simpson'

""" DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """
loaded_graph = tf.Graph()
with tf.Session(graph=loaded_graph) as sess:
    # Load saved model
    loader = tf.train.import_meta_graph(load_dir + '.meta')
    loader.restore(sess, load_dir)

    # Get Tensors from loaded model
    input_text, initial_state, final_state, probs = get_tensors(loaded_graph)

    # Sentences generation setup
    gen_sentences = [prime_word + ':']
    prev_state = sess.run(initial_state, {input_text: np.array([[1]])})

    # Generate sentences
    for n in range(gen_length):
        # Dynamic Input
        dyn_input = [[vocab_to_int[word] for word in gen_sentences[-seq_length:]]]
        dyn_seq_length = len(dyn_input[0])

        # Get Prediction
        probabilities, prev_state = sess.run(
            [probs, final_state],
            {input_text: dyn_input, initial_state: prev_state})

        pred_word = pick_word(probabilities[dyn_seq_length - 1], int_to_vocab)
        gen_sentences.append(pred_word)

    # Remove tokens
    tv_script = ' '.join(gen_sentences)
    for key, token in token_dict.items():
        ending = ' ' if key in ['\n', '(', '"'] else ''
        tv_script = tv_script.replace(' ' + token.lower(), key)
    tv_script = tv_script.replace('\n ', '\n')
    tv_script = tv_script.replace('( ', '(')

    print(tv_script)
Cost of running a business (difficulty: 5) You are running a small business for n months. Each month i, you either run it out of your office in New York and incur an operating cost Ni, or out of your office in Seattle and incur a cost Si. If you move to a different city in month i + 1, you pay a fixed cost M. <br> A plan is a sequence of n locations such that the i-th location indicates the city you will be based in during month i. The cost of a plan is the sum of all the operating costs plus any moving costs. Give an algorithm that computes the plan of minimum cost, and its cost. <br> Example: n = 4, M = 10, Ni, Si as below <br>
def min_cost(NY, SE, M):
    n = len(NY)
    # dp_ny[i] / dp_se[i]: minimum cost of months 0..i, ending in that city
    dp_ny, dp_se = [0] * n, [0] * n
    dp_ny[0], dp_se[0] = NY[0], SE[0]
    for i in range(1, n):
        dp_ny[i] = NY[i] + min(dp_ny[i - 1], dp_se[i - 1] + M)
        dp_se[i] = SE[i] + min(dp_se[i - 1], dp_ny[i - 1] + M)
    return min(dp_ny[-1], dp_se[-1])

M = 10
print(min_cost([1, 3, 20, 30], [50, 20, 2, 4], M))  # example above: 20 (NY, NY, SE, SE)
print(min_cost([100, 1] * 5, [1, 100] * 5, M))      # alternating costs: 100
dynamic_programming.ipynb
pablovicente/python-tutorials
mit
Travel Planning (difficulty: 5) You are going on a long trip. You start on the road at mile post 0. Along the way there are n hotels, at mile posts a1 < a2 < · · · < an, where each ai is measured from the starting point. The only places you are allowed to stop are at these hotels, but you can choose which of the hotels you stop at. Your destination is the final hotel (at distance an) and you must stop there. <br> You'd ideally like to travel 200 miles a day, but this may not be possible, depending on the spacing of the hotels. If you travel x miles during a day, the penalty for that day is $(200 - x)^2$. Your task is to give an efficient algorithm that minimizes the total penalty, that is, the sum over all travel days of the daily penalties.
import math

def min_penalty(a):
    # dp[i]: minimum total penalty to reach the hotel at mile post a[i]
    n = len(a)
    dp = [0] * n
    for i in range(1, n):
        dp[i] = math.inf
        for j in range(i):
            dp[i] = min(dp[i], dp[j] + (200 - (a[i] - a[j])) ** 2)
    return dp

print(min_penalty([0, 200, 400, 600, 800, 1000]))
# → [0, 0, 0, 0, 0, 0]
print(min_penalty([0, 100, 150, 300, 350, 550, 750, 800, 950]))
# → [0, 10000, 2500, 5000, 2500, 2500, 2500, 5000, 2500]
A game with stones (difficulty: 5) You're playing a game with stones. At the beginning of the game, there are n piles of stones, each of size pi ≥ 1, in a line. Your goal is to merge the stones into one big pile subject to the following rules of the game: <br> 1. At each step, you can merge two adjacent piles of sizes x, y to obtain a new pile of size x + y.<br> 2. The cost of merging a pile of size x with a pile of size y is x + y.<br> Your goal is to merge all the stones into one big pile so that the total cost is minimized. Design an algorithm for this task.<br> Longest monotonically increasing subsequence (difficulty: 5) Give an efficient algorithm to compute the length of the longest increasing subsequence of a sequence of numbers a1, . . . , an. <br> A subsequence is any subset of these numbers taken in order, of the form $a_{i_1}, a_{i_2}, \ldots, a_{i_k}$, where $1 \le i_1 < i_2 < \cdots < i_k \le n$. An increasing subsequence is one in which the numbers are getting strictly larger.<br> Example<br> Input: 5, 2, 8, 6, 3, 6, 7<br> Output: 4 (corresponding to subsequence 2, 3, 6, 7)<br>
a = [5, 2, 8, 6, 3, 6, 7]
n = len(a)

# dp[i]: length of the longest increasing subsequence ending at a[i]
dp = [1] * n
for i in range(1, n):
    for j in range(i):
        if a[i] > a[j] and dp[i] < dp[j] + 1:
            dp[i] = dp[j] + 1
print(max(dp))  # 4
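The stone-merging problem above is stated without an implementation. A hedged sketch using classic interval DP (the same shape as matrix-chain multiplication), with prefix sums giving the cost of the final merge over each interval:

```python
def min_merge_cost(piles):
    # dp[i][j]: minimum cost of merging piles i..j into one pile.
    # The last merge over i..j always costs sum(piles[i..j]), regardless of
    # the split point k, so prefix sums make that term O(1).
    n = len(piles)
    prefix = [0] * (n + 1)
    for i, p in enumerate(piles):
        prefix[i + 1] = prefix[i] + p
    dp = [[0] * n for _ in range(n)]
    for length in range(2, n + 1):            # interval length
        for i in range(n - length + 1):
            j = i + length - 1
            dp[i][j] = min(dp[i][k] + dp[k + 1][j] for k in range(i, j)) \
                       + (prefix[j + 1] - prefix[i])
    return dp[0][n - 1]

print(min_merge_cost([4, 1, 1, 4]))  # → 18 (merge the middle 1+1 first)
```

This runs in O(n^3); the example inputs are illustrative, not from the notebook.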
A card game (difficulty: 8) Consider the following game. To begin, n cards are laid out in order from left to right, where n is even. The i-th card from the left has value vi. Two players take turns, each taking one card, with the restriction that on a player's turn, they can only take either the leftmost or the rightmost card remaining. The goal is to collect cards of the largest total value. <br> Give an $O(n^2)$ dynamic programming algorithm that precomputes the optimal strategy.
def card_game(a):
    """ dp[i][j]: the maximum total value the player to move can guarantee
    from the remaining cards a[i..j]. """
    n = len(a)
    dp = [[0] * n for _ in range(n)]
    for gap in range(n):
        for i, j in zip(range(n), range(gap, n)):
            x = dp[i + 2][j] if (i + 2) <= j else 0
            y = dp[i + 1][j - 1] if (i + 1) <= (j - 1) else 0
            z = dp[i][j - 2] if i <= (j - 2) else 0
            # The opponent plays optimally, so after our move we get the
            # worse of the two states they can leave us in.
            dp[i][j] = max(a[i] + min(x, y), a[j] + min(y, z))
    return dp[0][n - 1]

a = [8, 15, 3, 7]
print(card_game(a))  # 22
But if our goal is to partition the space of $k$-mers, couldn't we use a hash function instead? Say $k$ is 10 and $l$ is 4. A 10,4-minimizing scheme is a way of dividing the space of $4^{10}$ 10-mers (a million or so) into $4^4 = 256$ partitions. We can accomplish this with a hash function that maps $k$-mers to integers in $[0, 255]$. Why would we prefer minimizers over hash functions? The answer is that two strings that share long substrings tend to have the same minimizer, but not the same hash value. For example, the strings abracadabr and bracadabra have the substring bracadabr in common, and they have the same minimal 4-mer:
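The cell below calls a minimizer() helper defined earlier in the notebook (not shown in this excerpt). A minimal sketch consistent with its use in this section, assuming the minimizer is simply the lexicographically smallest l-mer in the string:

```python
def minimizer(s, l):
    # Lexicographically smallest substring of length l
    return min(s[i:i + l] for i in range(len(s) - l + 1))

print(minimizer('abracadabr', 4), minimizer('bracadabra', 4))  # → abra abra
```

Both strings yield the same minimal 4-mer, matching the claim in the text.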
minimizer('abracadabr', 4), minimizer('bracadabra', 4)
notebooks/Minimizers.ipynb
BenLangmead/comp-genomics-class
gpl-2.0
But their hash values (modulo 256) are not the same:
# you might need to 'pip install mmh3' first
import mmh3
mmh3.hash('abracadabr') % 256, mmh3.hash('bracadabra') % 256
Partition size distribution A feature of hash functions is that they divide the 10-mers quite uniformly (evenly) among the 256 buckets. 10,4-minimizers divide them much less uniformly. This becomes clear when you consider that, given a random 10-mer, the 4-mer TTTT is very unlikely to be its minimizer, whereas the 4-mer AAAA is much more likely. We can also show this empirically by partitioning a collection of random 10-mers:
import random
random.seed(629)

def random_kmer(k):
    return ''.join([random.choice('ACGT') for _ in range(k)])

%matplotlib inline
import matplotlib.pyplot as plt

def plot_counts(counter, title=None):
    idx = range(256)
    cnts = list(map(lambda x: counter.get(x, 0), idx))
    plt.bar(idx, cnts, ec='none')
    plt.xlim(0, 256)
    plt.ylim(0, 35)
    if title is not None:
        plt.title(title)
    plt.show()

from collections import Counter

# hash 1000 random 10-mers
cnt = Counter([mmh3.hash(s) % 256 for s in [random_kmer(10) for _ in range(1000)]])
plot_counts(cnt, 'Frequency of partitions using hash mod 256')

def lmer_to_int(mer):
    """ Maps AAAA to 0, AAAC to 1, etc.  Works for any length argument. """
    cum = 0
    charmap = {'A': 0, 'C': 1, 'G': 2, 'T': 3}
    for c in mer:
        cum *= 4
        cum += charmap[c]
    return cum

# get minimal 4-mers from 1000 random 10-mers
cnt = Counter([lmer_to_int(minimizer(s, 4)) for s in [random_kmer(10) for _ in range(1000)]])
plot_counts(cnt, 'Frequency of partitions using minimal 4-mer; AAAA at left, TTTT at right')
Data retrieval The data was downloaded and zipped beforehand.
from ensae_teaching_cs.data import load_irep
load_irep()

import pandas
df = pandas.read_csv("2017/Prod_dechets_dangereux.csv")
df.head()

[_ for _ in set(df.Nom_Etablissement) if "lubrizol" in _.lower()]

df[df.Nom_Etablissement.str.contains("LUBRI")].head().T

ets = pandas.read_csv("2017/etablissements.csv")
ets.head()

ets[ets.Nom_Etablissement.str.contains("LUBRI")]
_doc/notebooks/data/data_irep.ipynb
sdpython/ensae_teaching_cs
mit
Coordinate conversion The coordinates used by the French administration often differ from longitudes and latitudes; they may be in the Lambert 93 system. They need to be converted.
from pyproj import Proj, transform

p1 = Proj(init='epsg:4326')   # longitude / latitude
p2 = Proj(init='epsg:2154')   # Lambert 93
exp = (509411.0, 2494315.0)   # Rouen in the received database
transform(p1, p2, 1.059011, 49.434654), exp
That does not look right. Let's try several projections.
from tqdm import tqdm

summary = []
for i in tqdm(range(2000, 10000)):
    try:
        p2 = Proj(init='epsg:%d' % i)
    except RuntimeError:
        # does not exist
        continue
    try:
        res = transform(p1, p2, 1.059011, 49.434654)
    except RuntimeError:
        # impossible
        continue
    d = abs(res[0] - exp[0]) + abs(res[1] - exp[1])
    summary.append((d, i, res))

summary.sort()
summary[:10]
So we land on epsg:2192 or epsg:7401.
p2 = Proj(init='epsg:2192')
long, lat = transform(p2, p1, ets.Coordonnees_X.values, ets.Coordonnees_Y.values)
ets['LLX'] = long
ets['LLY'] = lat
ets[ets.Nom_Etablissement.str.contains("LUBRI")]
That looks right.
ets16 = pandas.read_csv("2016/etablissements.csv")
ets17 = pandas.read_csv("2017/etablissements.csv")
p16 = pandas.read_csv("2016/Prod_dechets_dangereux.csv")
p17 = pandas.read_csv("2017/Prod_dechets_dangereux.csv")

long, lat = transform(p2, p1, ets16.Coordonnees_X.values, ets16.Coordonnees_Y.values)
ets16['LLX'] = long
ets16['LLY'] = lat

long, lat = transform(p2, p1, ets17.Coordonnees_X.values, ets17.Coordonnees_Y.values)
ets17['LLX'] = long
ets17['LLY'] = lat

ets16_2 = p16.merge(ets16, on="Identifiant")
ets16_2.shape, p16.shape

ets17_2 = p17.merge(ets17, on="Identifiant")
ets17_2.shape, p17.shape
The merges went well. Let's check that the units are comparable.
set(p16.Unite), set(p17.Unite)
Dynamic view
from ipywidgets import interact

@interact(annee=[2016, 2017], column=list(sorted(p16.columns)), x=(0, 100000, 100))
def show_rows(annee=2017, column='Quantite', x=100000):
    if annee == 2017:
        return ets17_2[ets17_2[column] >= x].sort_values(column).T
    else:
        return ets16_2[ets16_2[column] >= x].sort_values(column).T
Maps Let's look at the facilities in metropolitan France in 2017.
lim_metropole = [-5, 10, 41, 52]
ets17_2_metro = ets17_2[((ets17_2.LLX >= lim_metropole[0]) & (ets17_2.LLX <= lim_metropole[1]) &
                         (ets17_2.LLY >= lim_metropole[2]) & (ets17_2.LLY <= lim_metropole[3]))]
ets17_2_metro.shape, ets17_2.shape

import cartopy.crs as ccrs
import cartopy.feature as cfeature
import matplotlib.pyplot as plt

fig = plt.figure(figsize=(7, 7))
ax = fig.add_subplot(1, 1, 1, projection=ccrs.PlateCarree())
ax.set_extent(lim_metropole)
ax.add_feature(cfeature.OCEAN.with_scale('50m'))
ax.add_feature(cfeature.COASTLINE.with_scale('50m'))
ax.add_feature(cfeature.RIVERS.with_scale('50m'))
ax.add_feature(cfeature.BORDERS.with_scale('50m'), linestyle=':')
ax.scatter(ets17_2_metro.LLX, ets17_2_metro.LLY,
           s=ets17_2_metro.Quantite ** 0.5 / 5, alpha=0.5)
ax.set_title('France 2017\nproduction de produits dangereux');
Q: Determine the parameters ${\bf w}$ fit by the model. It might be helpful to consult the documentation for the classifier on the <a href="http://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html">sklearn website</a>. Hint: The classifier stores the coefficients and bias term separately. Q: In general, what does the Logistic Regression decision boundary look like for data with two features? Q: Modify the code below to plot the decision boundary along with the data.
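For reference, the shape of the boundary in the two-feature case can be derived directly: it is the set of points where the model assigns probability $1/2$ to each class, i.e. where the linear score is zero.

```latex
\sigma(w_0 + w_1 x_1 + w_2 x_2) = \tfrac{1}{2}
\quad\Longleftrightarrow\quad
w_0 + w_1 x_1 + w_2 x_2 = 0
\quad\Longleftrightarrow\quad
x_2 = -\frac{w_0 + w_1 x_1}{w_2}
```

so the decision boundary is a straight line in the $(x_1, x_2)$ plane.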
import numpy as np
import math

fig = plt.figure(figsize=(8, 8))
plt.scatter(X_train[:, 0], X_train[:, 1], s=100,
            c=[mycolors["red"] if yi == 1 else mycolors["blue"] for yi in y_train])
plt.xlabel('Sepal length')
plt.ylabel('Sepal width')

x_min, x_max = np.min(X_train[:, 0]) - 0.1, np.max(X_train[:, 0]) + 0.1
y_min, y_max = np.min(X_train[:, 1]) - 0.1, np.max(X_train[:, 1]) + 0.1
plt.xlim(x_min, x_max)
plt.ylim(y_min, y_max)

x1 = np.linspace(x_min, x_max, 100)
w0 = logreg.intercept_
w1 = logreg.coef_[0][0]
w2 = logreg.coef_[0][1]
x2 = ...  # TODO
plt.plot(x1, x2, color="gray");
notebooks/ML_morning_JTN/02_Logistic_Regression_and_Text_Models.ipynb
jamesfolberth/NGC_STEM_camp_AWS
bsd-3-clause
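For a two-feature logistic regression, the decision boundary is the line where $w_0 + w_1 x_1 + w_2 x_2 = 0$, i.e. $x_2 = -(w_0 + w_1 x_1)/w_2$. A minimal sketch with made-up coefficients (standing in for `logreg.intercept_` and `logreg.coef_`, which are not shown here):

```python
import numpy as np

def decision_boundary_x2(x1, w0, w1, w2):
    """Solve w0 + w1*x1 + w2*x2 = 0 for x2 (the logistic regression boundary)."""
    return -(w0 + w1 * x1) / w2

# Hypothetical coefficients standing in for logreg.intercept_ / logreg.coef_
w0, w1, w2 = -1.0, 2.0, 4.0
x1 = np.linspace(0.0, 1.0, 5)
x2 = decision_boundary_x2(x1, w0, w1, w2)

# On the boundary the model is maximally uncertain: sigmoid(z) = 0.5
z = w0 + w1 * x1 + w2 * x2
assert np.allclose(1.0 / (1.0 + np.exp(-z)), 0.5)
```

Plugging this `x2` into the `plt.plot(x1, x2, ...)` call above draws the boundary as a straight line over the scatter plot.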
To be consistent with sklearn conventions, we'll encode the documents as row vectors stored in a matrix. In this case, each row of the matrix corresponds to a document, and each column corresponds to a term in the vocabulary. For our example this gives us a matrix $M$ of shape $3 \times 6$. The $(d,t)$-entry in $M$ is then the number of times the term $t$ appears in document $d$. Q: Your first task is to write some simple Python code to construct the term-frequency matrix $M$.
M = np.zeros((len(D), len(V)))
for ii, doc in enumerate(D):
    for term in doc.split():
        if term in V:  # only count the term if it is in our vocabulary
            ...  # TODO
print(M)
notebooks/ML_morning_JTN/02_Logistic_Regression_and_Text_Models.ipynb
jamesfolberth/NGC_STEM_camp_AWS
bsd-3-clause
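One way to fill in the TODO, sketched on a stand-in corpus (the notebook's actual `D` and `V` are defined in an earlier cell; here `V` is assumed to be a dict mapping each term to its column index):

```python
import numpy as np

# Tiny stand-in corpus and vocabulary (assumptions; the notebook's D and V may differ)
D = ["new york times", "new york post", "los angeles times"]
V = {"new": 0, "york": 1, "times": 2, "post": 3, "los": 4, "angeles": 5}

M = np.zeros((len(D), len(V)))
for ii, doc in enumerate(D):
    for term in doc.split():
        if term in V:                # only count terms that are in our vocabulary
            M[ii, V[term]] += 1      # increment the (document, term) entry

print(M)
```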
Notice that the query document included the word $\texttt{new}$ twice, which corresponds to the entry in the $(0,2)$-position. Q: What's missing from $x4$ that we might expect to see from the query document? <br> Problem 3: Term Frequency - Inverse Document Frequency The Bag-of-Words model for text classification is very popular, but let's see if we can do better. Currently we're weighting every word in the corpus by its frequency. It turns out that in text classification there are often features that are not particularly useful predictors for the document class, either because they are too common or too uncommon. Stop-words are extremely common, low-information words like "a", "the", "as", etc. Removing these from documents is typically the first thing done in preparing data for document classification. Q: Can you think of a situation where it might be useful to keep stop words in the corpus? Other words that tend to be uninformative predictors are words that appear very rarely. In particular, if they do not appear frequently enough in the training data then it is difficult for a classification algorithm to weight them heavily in the classification process. In general, the words that tend to be useful predictors are the words that appear frequently, but not too frequently. Consider the following frequency graph for a corpus. <img src="figs/feat_freq.png"> The features in column A appear too frequently to be very useful, and the features in column C appear too rarely. One first-pass method of feature selection in text classification would be to discard the words from columns A and C, and build a classifier with only features from column B. Another common model for identifying the useful terms in a document is the Term Frequency - Inverse Document Frequency (tf-idf) model. Here we won't throw away any terms, but we'll replace their Bag-of-Words frequency counts with tf-idf scores, which we describe below.
The tf-idf score is the product of two statistics, term frequency and inverse document frequency $$\texttt{tfidf(d,t)} = \texttt{tf(d,t)} \times \texttt{idf(t)}$$ The term frequency $\texttt{tf(d,t)}$ is a measure of the frequency with which term $t$ appears in document $d$. The inverse document frequency $\texttt{idf(t)}$ is a measure of how much information the word provides, that is, whether the term is common or rare across all documents. By multiplying the two quantities together, we obtain a representation of term $t$ in document $d$ that weighs how common the term is in the document against how common the word is in the entire corpus. You can imagine that the words that get the highest associated values are terms that appear many times in a small number of documents. There are many ways to compute the composite terms $\texttt{tf}$ and $\texttt{idf}$. For simplicity, we'll define $\texttt{tf(d,t)}$ to be the number of times term $t$ appears in document $d$ (i.e., Bag-of-Words). We will define the inverse document frequency as follows: $$ \texttt{idf(t)} = \ln ~ \frac{\textrm{total # documents}}{\textrm{# documents with term }t} = \ln ~ \frac{|D|}{|d: ~ t \in d |} $$ Note that we could have a potential problem if a term comes up that is not in any of the training documents, resulting in a divide by zero. This might happen if you use a canned vocabulary instead of constructing one from the training documents. To guard against this, many implementations will use add-one smoothing in the denominator (this is what sklearn does): $$ \texttt{idf(t)} = \ln ~ \frac{\textrm{total # documents}}{\textrm{1 + # documents with term }t} = \ln ~ \frac{|D|}{1 + |d: ~ t \in d |} $$ Q: Compute $\texttt{idf(t)}$ (without smoothing) for each of the terms in the training documents from the previous problem Q: Compute the tf-idf matrix for the training set
idf = np.array([np.log(3), np.log(3), np.log(3./2), np.log(3), np.log(3./2), np.log(3./2)]) Xtfidf = np.dot(X.todense(), np.diag(idf))
notebooks/ML_morning_JTN/02_Logistic_Regression_and_Text_Models.ipynb
jamesfolberth/NGC_STEM_camp_AWS
bsd-3-clause
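The idf vector above can also be derived directly from a term-frequency matrix rather than hard-coded. A sketch on a stand-in corpus (the notebook's real vocabulary ordering may differ, so the entries come out in a different order than the hand-computed `idf` above):

```python
import numpy as np

# Stand-in corpus and vocabulary (assumptions, not the notebook's exact objects)
D = ["new york times", "new york post", "los angeles times"]
V = {"new": 0, "york": 1, "times": 2, "post": 3, "los": 4, "angeles": 5}

M = np.zeros((len(D), len(V)))
for ii, doc in enumerate(D):
    for term in doc.split():
        M[ii, V[term]] += 1

# Document frequency: in how many documents does each term appear?
df = (M > 0).sum(axis=0)

idf = np.log(len(D) / df)               # without smoothing
idf_smooth = np.log(len(D) / (1 + df))  # add-one smoothing guards against df == 0

Xtfidf = M @ np.diag(idf)               # tf-idf: scale each column by its idf
```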
Explore the Data The dataset is broken into batches to prevent your machine from running out of memory. The CIFAR-10 dataset consists of 5 batches, named data_batch_1, data_batch_2, etc. Each batch contains the labels and images that are one of the following: * airplane * automobile * bird * cat * deer * dog * frog * horse * ship * truck Understanding a dataset is part of making predictions on the data. Play around with the code cell below by changing the batch_id and sample_id. The batch_id is the id for a batch (1-5). The sample_id is the id for an image and label pair in the batch. Ask yourself "What are all the possible labels?", "What is the range of values for the image data?", "Are the labels in order or random?". Answers to questions like these will help you preprocess the data and end up with better predictions.
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import helper
import numpy as np
from sklearn.preprocessing import LabelBinarizer

# Explore the dataset
batch_id = 1
sample_id = 6
helper.display_stats(cifar10_dataset_folder_path, batch_id, sample_id)

features, labels = helper.load_cfar10_batch(cifar10_dataset_folder_path, batch_id)
unique, counts = np.unique(labels, return_counts=True)
buckets = dict(zip(unique, counts))

label_text2id = {"airplane": 0, "automobile": 1, "bird": 2, "cat": 3, "deer": 4,
                 "dog": 5, "frog": 6, "horse": 7, "ship": 8, "truck": 9}
label_id2text = {v: k for k, v in label_text2id.items()}
#print(label_id2text)
image-classification/dlnd_image_classification.ipynb
joeandrewkey/deep-learning
mit
Implement Preprocess Functions Normalize In the cell below, implement the normalize function to take in image data, x, and return it as a normalized Numpy array. The values should be in the range of 0 to 1, inclusive. The return object should be the same shape as x.
def normalize(x):
    """
    Normalize a list of sample image data in the range of 0 to 1
    : x: List of image data. The image shape is (32, 32, 3)
    : return: Numpy array of normalized data
    """
    # Apply normalization over the entire numpy array (fast)
    possible_values = 256
    return x / possible_values

    # Using loops (slow)
    #for i in range(b.size):
    #    b[i] = b[i] / (max_val + 1)
    #return x


"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_normalize(normalize)
image-classification/dlnd_image_classification.ipynb
joeandrewkey/deep-learning
mit
One-hot encode Just like the previous code cell, you'll be implementing a function for preprocessing. This time, you'll implement the one_hot_encode function. The input, x, is a list of labels. Implement the function to return the list of labels as a one-hot encoded Numpy array. The possible values for labels are 0 to 9. The one-hot encoding function should return the same encoding for each value between each call to one_hot_encode. Make sure to save the map of encodings outside the function. Hint: Don't reinvent the wheel.
from sklearn import preprocessing

def one_hot_encode(x):
    """
    One hot encode a list of sample labels. Return a one-hot encoded vector for each label.
    : x: List of sample Labels
    : return: Numpy array of one-hot encoded labels
    """
    labels = np.array(x)
    lb = preprocessing.LabelBinarizer()
    lb.fit(labels)
    lb.classes_ = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
    return lb.transform(labels)

    #y = np.zeros((len(x), 10))
    #for i in range(len(x)):
    #    y[i][x[i]] = 1
    #return y


"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_one_hot_encode(one_hot_encode)
image-classification/dlnd_image_classification.ipynb
joeandrewkey/deep-learning
mit
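As an aside, the same one-hot encoding can be done without sklearn by indexing into an identity matrix — a small self-contained sketch:

```python
import numpy as np

def one_hot_np(x, n_classes=10):
    """One-hot encode labels 0..n_classes-1 by indexing rows of an identity matrix."""
    return np.eye(n_classes)[np.array(x)]

encoded = one_hot_np([0, 9, 3])  # shape (3, 10), one 1.0 per row
```

Because `np.eye(n_classes)` is fixed, this trivially satisfies the requirement that each label value maps to the same encoding on every call.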
Build the network For the neural network, you'll build each layer into a function. Most of the code you've seen has been outside of functions. To test your code more thoroughly, we require that you put each layer in a function. This allows us to give you better feedback and test for simple mistakes using our unittests before you submit your project. Note: If you're finding it hard to dedicate enough time for this course each week, we've provided a small shortcut to this part of the project. In the next couple of problems, you'll have the option to use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages to build each layer, except the layers you build in the "Convolutional and Max Pooling Layer" section. TF Layers is similar to Keras's and TFLearn's abstraction to layers, so it's easy to pick up. However, if you would like to get the most out of this course, try to solve all the problems without using anything from the TF Layers packages. You can still use classes from other packages that happen to have the same name as ones you find in TF Layers! For example, instead of using the TF Layers version of the conv2d class, tf.layers.conv2d, you would want to use the TF Neural Network version of conv2d, tf.nn.conv2d. Let's begin! Input The neural network needs to read the image data, one-hot encoded labels, and dropout keep probability. Implement the following functions * Implement neural_net_image_input * Return a TF Placeholder * Set the shape using image_shape with batch size set to None. * Name the TensorFlow placeholder "x" using the TensorFlow name parameter in the TF Placeholder. * Implement neural_net_label_input * Return a TF Placeholder * Set the shape using n_classes with batch size set to None. * Name the TensorFlow placeholder "y" using the TensorFlow name parameter in the TF Placeholder. * Implement neural_net_keep_prob_input * Return a TF Placeholder for dropout keep probability. 
* Name the TensorFlow placeholder "keep_prob" using the TensorFlow name parameter in the TF Placeholder. These names will be used at the end of the project to load your saved model. Note: None for shapes in TensorFlow allows for a dynamic size.
import tensorflow as tf

# Also seen in 11.30
def neural_net_image_input(image_shape):
    """
    Return a Tensor for a batch of image input
    : image_shape: Shape of the images
    : return: Tensor for image input.
    """
    # Tensor of floats for an unbounded batch of tensors (32, 32, 3), named "x"
    return tf.placeholder(tf.float32,
                          (None, image_shape[0], image_shape[1], image_shape[2]),
                          name="x")

def neural_net_label_input(n_classes):
    """
    Return a Tensor for a batch of label input
    : n_classes: Number of classes
    : return: Tensor for label input.
    """
    # Tensor of floats for an unbounded batch of tensors (n_classes), named "y"
    return tf.placeholder(tf.float32, (None, n_classes), name="y")

def neural_net_keep_prob_input():
    """
    Return a Tensor for keep probability
    : return: Tensor for keep probability.
    """
    # A simple float placeholder to hold our variable "keep_prob"
    return tf.placeholder(tf.float32, name="keep_prob")


"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tf.reset_default_graph()
tests.test_nn_image_inputs(neural_net_image_input)
tests.test_nn_label_inputs(neural_net_label_input)
tests.test_nn_keep_prob_inputs(neural_net_keep_prob_input)
image-classification/dlnd_image_classification.ipynb
joeandrewkey/deep-learning
mit
Convolution and Max Pooling Layer Convolution layers have a lot of success with images. For this code cell, you should implement the function conv2d_maxpool to apply convolution then max pooling: * Create the weight and bias using conv_ksize, conv_num_outputs and the shape of x_tensor. * Apply a convolution to x_tensor using weight and conv_strides. * We recommend you use same padding, but you're welcome to use any padding. * Add bias * Add a nonlinear activation to the convolution. * Apply Max Pooling using pool_ksize and pool_strides. * We recommend you use same padding, but you're welcome to use any padding. Note: You can't use TensorFlow Layers or TensorFlow Layers (contrib) for this layer, but you can still use TensorFlow's Neural Network package. You may still use the shortcut option for all the other layers.
def conv2d_maxpool(x_tensor, conv_num_outputs, conv_ksize, conv_strides, pool_ksize, pool_strides):
    """
    Apply convolution then max pooling to x_tensor
    :param x_tensor: TensorFlow Tensor
    :param conv_num_outputs: Number of outputs for the convolutional layer
    :param conv_ksize: kernel size 2-D Tuple for the convolutional layer
    :param conv_strides: Stride 2-D Tuple for convolution
    :param pool_ksize: kernel size 2-D Tuple for pool
    :param pool_strides: Stride 2-D Tuple for pool
    : return: A tensor that represents convolution and max pooling of x_tensor
    """
    # Weights: one (k_h, k_w, in_channels) filter per convolution output
    W = tf.Variable(tf.truncated_normal([conv_ksize[0], conv_ksize[1],
                                         x_tensor.shape[3].value, conv_num_outputs],
                                        stddev=0.01))
    # The biases are just the number of convolution outputs
    b = tf.Variable(tf.random_normal([conv_num_outputs]))

    # 2-D convolution, with weights and strides as 4-element lists and 'SAME' padding
    x = tf.nn.conv2d(x_tensor, W, [1, conv_strides[0], conv_strides[1], 1], padding='SAME')
    # Add the biases
    x = tf.nn.bias_add(x, b)
    # Apply a Rectified Linear Unit (nonlinear activation)
    x = tf.nn.relu(x)
    # Apply max pooling using pool_ksize and pool_strides
    x = tf.nn.max_pool(x, ksize=[1, pool_ksize[0], pool_ksize[1], 1],
                       strides=[1, pool_strides[0], pool_strides[1], 1], padding='SAME')
    return x


"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_con_pool(conv2d_maxpool)
image-classification/dlnd_image_classification.ipynb
joeandrewkey/deep-learning
mit
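With `'SAME'` padding, each spatial dimension of the output is `ceil(input_size / stride)`, so the shapes produced by this layer can be checked by hand. A quick sketch for a 32x32 CIFAR-10 image:

```python
import math

def same_out(size, stride):
    """Output length of one dimension under 'SAME' padding: ceil(size / stride)."""
    return math.ceil(size / stride)

# e.g. a convolution with stride 2 followed by a 2x2 max pool with stride 2
h = same_out(32, 2)  # after convolution
h = same_out(h, 2)   # after max pooling
```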
Flatten Layer Implement the flatten function to change the dimension of x_tensor from a 4-D tensor to a 2-D tensor. The output should be the shape (Batch Size, Flattened Image Size). Shortcut option: you can use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages for this layer. For more of a challenge, only use other TensorFlow packages.
from tensorflow.contrib.layers import flatten as contrib_flatten

def flatten(x_tensor):
    """
    Flatten x_tensor to (Batch Size, Flattened Image Size)
    : x_tensor: A tensor of size (Batch Size, ...), where ... are the image dimensions.
    : return: A tensor of size (Batch Size, Flattened Image Size).
    """
    return contrib_flatten(x_tensor)


"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_flatten(flatten)
image-classification/dlnd_image_classification.ipynb
joeandrewkey/deep-learning
mit
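For intuition, the flatten operation is just a reshape that keeps the batch dimension; a NumPy sketch of what the layer does:

```python
import numpy as np

def flatten_np(x):
    """Flatten (batch, h, w, c) to (batch, h*w*c), mirroring a Flatten layer."""
    return x.reshape(x.shape[0], -1)

batch = np.zeros((4, 8, 8, 64))
flat = flatten_np(batch)  # (4, 4096)
```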
Fully-Connected Layer Implement the fully_conn function to apply a fully connected layer to x_tensor with the shape (Batch Size, num_outputs). Shortcut option: you can use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages for this layer. For more of a challenge, only use other TensorFlow packages.
from tensorflow.contrib.layers import fully_connected

def fully_conn(x_tensor, num_outputs):
    """
    Apply a fully connected layer to x_tensor using weight and bias
    : x_tensor: A 2-D tensor where the first dimension is batch size.
    : num_outputs: The number of outputs that the new tensor should have.
    : return: A 2-D tensor where the second dimension is num_outputs.
    """
    return fully_connected(x_tensor, num_outputs, activation_fn=tf.nn.relu)


"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_fully_conn(fully_conn)
image-classification/dlnd_image_classification.ipynb
joeandrewkey/deep-learning
mit
Output Layer Implement the output function to apply a fully connected layer to x_tensor with the shape (Batch Size, num_outputs). Shortcut option: you can use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages for this layer. For more of a challenge, only use other TensorFlow packages. Note: Activation, softmax, or cross entropy should not be applied to this.
def output(x_tensor, num_outputs):
    """
    Apply an output layer to x_tensor using weight and bias
    : x_tensor: A 2-D tensor where the first dimension is batch size.
    : num_outputs: The number of outputs that the new tensor should have.
    : return: A 2-D tensor where the second dimension is num_outputs.
    """
    # Lesson 11 part 30
    dims = [i.value for i in x_tensor.shape]
    W = tf.Variable(tf.random_normal([dims[-1], num_outputs]))
    b = tf.Variable(tf.random_normal([num_outputs]))
    # Linear combination of inputs and weights, then add the bias
    return tf.add(tf.matmul(x_tensor, W), b)


"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_output(output)
image-classification/dlnd_image_classification.ipynb
joeandrewkey/deep-learning
mit
Create Convolutional Model Implement the function conv_net to create a convolutional neural network model. The function takes in a batch of images, x, and outputs logits. Use the layers you created above to create this model: * Apply 1, 2, or 3 Convolution and Max Pool layers * Apply a Flatten Layer * Apply 1, 2, or 3 Fully Connected Layers * Apply an Output Layer * Return the output * Apply TensorFlow's Dropout to one or more layers in the model using keep_prob.
def conv_net(x, keep_prob):
    """
    Create a convolutional neural network model
    : x: Placeholder tensor that holds image data.
    : keep_prob: Placeholder tensor that holds dropout keep probability.
    : return: Tensor that represents logits
    """
    # local variables
    conv_ksize = (5, 5)
    conv_strides = (2, 2)
    pool_ksize = (2, 2)
    pool_strides = (2, 2)

    # Convolution and Max Pool layers
    # conv2d_maxpool(x_tensor, conv_num_outputs, conv_ksize, conv_strides, pool_ksize, pool_strides)
    x = conv2d_maxpool(x, 32, conv_ksize, conv_strides, pool_ksize, pool_strides)
    x = conv2d_maxpool(x, 64, conv_ksize, conv_strides, pool_ksize, pool_strides)

    # Flatten Layer
    x = flatten(x)

    # Fully Connected Layers
    x = fully_conn(x, 128)
    x = fully_conn(x, 256)

    # dropout layer
    x = tf.nn.dropout(x, keep_prob)

    # Output Layer, sized to the number of classes
    x = output(x, 10)

    return x


"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
##############################
## Build the Neural Network ##
##############################

# Remove previous weights, bias, inputs, etc..
tf.reset_default_graph()

# Inputs
x = neural_net_image_input((32, 32, 3))
y = neural_net_label_input(10)
keep_prob = neural_net_keep_prob_input()

# Model
logits = conv_net(x, keep_prob)

# Name logits Tensor, so that it can be loaded from disk after training
logits = tf.identity(logits, name='logits')

# Loss and Optimizer
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=y))
optimizer = tf.train.AdamOptimizer().minimize(cost)

# Accuracy
correct_pred = tf.equal(tf.argmax(logits, 1), tf.argmax(y, 1))
accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32), name='accuracy')

tests.test_conv_net(conv_net)
image-classification/dlnd_image_classification.ipynb
joeandrewkey/deep-learning
mit
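The accuracy node above counts the fraction of samples whose predicted class (argmax of the logits) matches the argmax of the one-hot label. The same computation in NumPy, for intuition:

```python
import numpy as np

def accuracy_np(logits, one_hot_labels):
    """Fraction of rows where argmax(logits) matches argmax(labels)."""
    return np.mean(np.argmax(logits, axis=1) == np.argmax(one_hot_labels, axis=1))

logits = np.array([[0.1, 0.9], [0.8, 0.2], [0.3, 0.7]])
labels = np.array([[0, 1], [1, 0], [1, 0]])
acc = accuracy_np(logits, labels)  # 2 of 3 predictions are correct
```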
Train the Neural Network Single Optimization Implement the function train_neural_network to do a single optimization. The optimization should use optimizer to optimize in session with a feed_dict of the following: * x for image input * y for labels * keep_prob for keep probability for dropout This function will be called for each batch, so tf.global_variables_initializer() has already been called. Note: Nothing needs to be returned. This function is only optimizing the neural network.
def train_neural_network(session, optimizer, keep_probability, feature_batch, label_batch):
    """
    Optimize the session on a batch of images and labels
    : session: Current TensorFlow session
    : optimizer: TensorFlow optimizer function
    : keep_probability: keep probability
    : feature_batch: Batch of Numpy image data
    : label_batch: Batch of Numpy label data
    """
    session.run(optimizer, feed_dict={
        x: feature_batch,
        y: label_batch,
        keep_prob: keep_probability
    })


"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_train_nn(train_neural_network)
image-classification/dlnd_image_classification.ipynb
joeandrewkey/deep-learning
mit
Show Stats Implement the function print_stats to print loss and validation accuracy. Use the global variables valid_features and valid_labels to calculate validation accuracy. Use a keep probability of 1.0 to calculate the loss and validation accuracy.
def print_stats(session, feature_batch, label_batch, cost, accuracy):
    """
    Print information about loss and validation accuracy
    : session: Current TensorFlow session
    : feature_batch: Batch of Numpy image data
    : label_batch: Batch of Numpy label data
    : cost: TensorFlow cost function
    : accuracy: TensorFlow accuracy function
    """
    # Calculate loss on the training batch (no dropout at evaluation time)
    loss = session.run(cost, feed_dict={x: feature_batch, y: label_batch, keep_prob: 1.})
    # Get validation accuracy
    valid_acc = session.run(accuracy,
                            feed_dict={x: valid_features, y: valid_labels, keep_prob: 1.})
    print("Loss: {:>10.4f} Validation Accuracy: {:.6f}".format(loss, valid_acc))
image-classification/dlnd_image_classification.ipynb
joeandrewkey/deep-learning
mit
Create forward bond future PV (Exposure) time profile Setting up parameters
t_step = 1.0 / 365.0 simNumber = 10 trim_start = date(2005,3,10) trim_end = date(2010,12,31) # Last Date of the Portfolio start = date(2005, 3, 10) referenceDate = date(2005, 3, 10)
GroupHW_1_Exposure_ForwardBond.ipynb
sytjyjj/Group-Homeproject1
apache-2.0
Data input for the CouponBond portfolio The word portfolio is used to describe just a dict of CouponBonds.

This line creates a referenceDateList:

myScheduler = Scheduler()
ReferenceDateList = myScheduler.getSchedule(start=referenceDate, end=trim_end, freq="1M", referencedate=referenceDate)

Create Simulator This section creates Monte Carlo Trajectories in a wide range. Notice that the BondCoupon maturities have to be inside the Monte Carlo simulation range [trim_start, trim_end]. Sigma has been artificially increased (OIS has smaller sigma) to allow for visualization of distinct trajectories.

# SDE parameters - Vasicek SDE
# dr(t) = k(θ − r(t))dt + σdW(t)
self.kappa = x[0]
self.theta = x[1]
self.sigma = x[2]
self.r0 = x[3]

myVasicek = MC_Vasicek_Sim()
xOIS = [3.0, 0.07536509, -0.208477, 0.07536509]
myVasicek.setVasicek(x=xOIS, minDay=trim_start, maxDay=trim_end, simNumber=simNumber, t_step=1/365.0)
myVasicek.getLibor()

Create Coupon Bond with several startDates.

SixMonthDelay = myScheduler.extractDelay("6M")
TwoYearsDelay = myScheduler.extractDelay("2Y")
startDates = [referenceDate + nprnd.randint(0, 3)*SixMonthDelay for r in range(10)]

For debugging uncomment this to choose a single date for the forward bond:

print(startDates)
startDates = [date(2005,3,10)]  # or startDates = [date(2005,3,10) + SixMonthDelay]
maturities = [(x+TwoYearsDelay) for x in startDates]

You can change the coupon and see its effect on the Exposure Profile. The breakevenRate is calculated, for simplicity, always at referenceDate=self.start, that is, at the first day of the CouponBond life. Below is a way to create a random long/short bond portfolio of any size. The notional only affects the product class at the last stage of calculation. In my case, the only parameters affected are Exposure (PV on referenceDate) and pvAvg (average PV on referenceDate).

myPortfolio = {}
coupon = 0.07536509
for i in range(len(startDates)):
    notional = (-1.0)**i
    myPortfolio[i] = CouponBond(fee=1.0, start=startDates[i], coupon=coupon, notional=notional, maturity=maturities[i], freq="3M", referencedate=referenceDate)
myScheduler = Scheduler()
ReferenceDateList = myScheduler.getSchedule(start=referenceDate, end=trim_end, freq="1M",
                                            referencedate=referenceDate)

# Create Simulator
xOIS = [3.0, 0.07536509, -0.208477, 0.07536509]
myVasicek = MC_Vasicek_Sim(ReferenceDateList, xOIS, simNumber, 1/365.0)
myVasicek.setVasicek(x=xOIS, minDay=trim_start, maxDay=trim_end, simNumber=simNumber, t_step=1/365.0)
myVasicek.getLibor()

# Create Coupon Bond with several startDates.
SixMonthDelay = myScheduler.extractDelay("6M")
TwoYearsDelay = myScheduler.extractDelay("2Y")
startDates = [referenceDate + nprnd.randint(0, 3)*SixMonthDelay for r in range(10)]
# For debugging uncomment this to choose a single date for the forward bond
# print(startDates)
startDates = [date(2005,3,10)+SixMonthDelay, date(2005,3,10)+TwoYearsDelay]
maturities = [(x+TwoYearsDelay) for x in startDates]

myPortfolio = {}
coupon = 0.07536509
for i in range(len(startDates)):
    notional = (-1.0)**i
    myPortfolio[i] = CouponBond(fee=1.0, start=startDates[i], coupon=coupon, notional=notional,
                                maturity=maturities[i], freq="3M", referencedate=referenceDate,
                                observationdate=trim_start)
GroupHW_1_Exposure_ForwardBond.ipynb
sytjyjj/Group-Homeproject1
apache-2.0
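The Vasicek trajectories behind `MC_Vasicek_Sim` can be sketched with a plain Euler-Maruyama discretization of $dr(t) = \kappa(\theta - r(t))dt + \sigma dW(t)$. The parameters below are illustrative only, not the notebook's calibrated `xOIS` values:

```python
import numpy as np

def vasicek_paths(kappa, theta, sigma, r0, t_step, n_steps, n_paths, seed=0):
    """Euler-Maruyama simulation of the Vasicek SDE dr = kappa*(theta - r)dt + sigma*dW."""
    rng = np.random.default_rng(seed)
    r = np.full(n_paths, r0, dtype=float)
    paths = [r.copy()]
    for _ in range(n_steps):
        dW = rng.normal(0.0, np.sqrt(t_step), size=n_paths)  # Brownian increments
        r = r + kappa * (theta - r) * t_step + sigma * dW    # mean-reverting drift + noise
        paths.append(r.copy())
    return np.array(paths)

# Illustrative parameters: one year of daily steps, 500 trajectories
paths = vasicek_paths(kappa=3.0, theta=0.05, sigma=0.01, r0=0.02,
                      t_step=1/365, n_steps=365, n_paths=500)
```

With `kappa=3.0`, the cross-sectional mean after one year should sit close to `theta`, illustrating the mean reversion that keeps the simulated short rate from wandering off.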
Create Libor and portfolioScheduleOfCF. This datelist contains all dates to be used in any calculation of the portfolio positions. The BondCoupon class has to have a method getScheduleComplete, which returns fullset on [0] and datelist on [1], calculated by BondCoupon as:

def getScheduleComplete(self):
    self.datelist = self.myScheduler.getSchedule(start=self.start, end=self.maturity, freq=self.freq, referencedate=self.referencedate)
    self.ntimes = len(self.datelist)
    fullset = sorted(set(self.datelist)
                     .union([self.referencedate])
                     .union([self.start])
                     .union([self.maturity]))
    return fullset, self.datelist

portfolioScheduleOfCF is the concatenation of all fullsets. It defines the set of all dates for which Libor should be known.
# Create FullDateList
portfolioScheduleOfCF = set(ReferenceDateList)
for i in range(len(myPortfolio)):
    portfolioScheduleOfCF = portfolioScheduleOfCF.union(myPortfolio[i].getScheduleComplete()[0])
portfolioScheduleOfCF = sorted(portfolioScheduleOfCF.union(ReferenceDateList))
OIS = myVasicek.getSmallLibor(datelist=portfolioScheduleOfCF)
#print(OIS)

# At this point OIS contains all dates for which the discount curve should be known.
# If OIS didn't contain a date, we would not be able to discount the cashflows
# and the calculation would fail.
pvs = {}
for t in portfolioScheduleOfCF:
    pvs[t] = np.zeros([1, simNumber])
    for i in range(len(myPortfolio)):
        myPortfolio[i].setLibor(OIS)
        pvs[t] = pvs[t] + myPortfolio[i].getExposure(referencedate=t).values

pvsPlot = pd.DataFrame.from_dict(list(pvs.items()))
pvsPlot.index = list(pvs.keys())
pvs1 = {}
for i, t in zip(pvsPlot.values, pvsPlot.index):
    pvs1[t] = i[1][0]
pvs = pd.DataFrame.from_dict(data=pvs1, orient="index")
ax = pvs.plot(legend=False)
ax.set_xlabel("Year")
ax.set_ylabel("Coupon Bond Exposure")
GroupHW_1_Exposure_ForwardBond.ipynb
sytjyjj/Group-Homeproject1
apache-2.0
Hide/Show tip labels The argument tip_labels can be entered as True, False, or as a list of names, with the default being True in which case the tip labels are shown and are taken from the tree object (parsed from the newick string).
## A tree with edge lengths
newick = "((apple:2,orange:4):2,(((tomato:2,eggplant:1):2,pepper:3):1,tomatillo:2):1);"
tre = toytree.tree(newick)

## show tip labels
tre.draw();

## hide tip labels
tre.draw(tip_labels=False);
docs/TipLabels.ipynb
eaton-lab/toytree
bsd-3-clause
Modify tip labels You can enter a list to tip_labels, in which case the list of names will replace the labels at the tips of the tree. The list must be the same length as the number of tips. You can use the function .get_tip_labels() to return a list of names that are currently on the tree. Names are ordered from top to bottom in position on a right-facing tree. The .get_tip_labels() function is useful for returning a list of names that you can modify and then enter back into tip_labels, like in the example below.
## enter a new list of names
tipnames = ["a", "b", "c", "d", "e", "f"]
tre.draw(tip_labels=tipnames);

## get list of existing names and modify it
modnames = ["tip - " + i for i in tre.get_tip_labels()]
tre.draw(tip_labels=modnames);

## you can use HTML tags to further style the text
modnames = ["<b>{}</b>".format(i) if 'tom' in i else i for i in tre.get_tip_labels()]
tre.draw(tip_labels=modnames);
docs/TipLabels.ipynb
eaton-lab/toytree
bsd-3-clause
Color tip labels You can color all tip labels to be the same color or you can enter a different color for each tip label by entering a list of colors. Colors can be entered as HTML names, as HEX strings, or as rgba codes. See the excellent Toyplot color guide for more color options.
## set a single tip label color
tre.draw(tip_labels_colors="darkcyan");

## use a list of colors to assign different values to tips
colorlist = ["darkcyan" if "tom" in t else "darkorange" for t in tre.get_tip_labels()]
tre.draw(tip_labels_colors=colorlist);
docs/TipLabels.ipynb
eaton-lab/toytree
bsd-3-clause
Aligning tip labels By default Toytree plots trees without edge length information so that the tips are aligned. If you add the argument use_edge_lengths=True then the tips of the tree will no longer necessarily be aligned, and thus tip labels may not be aligned. To use edge lengths but still enforce alignment of the tip labels you can add the additional argument tip_labels_align=True, which will add dashed lines to complete the edge lengths.
## default tree
tre.draw();

## with edge lengths
tre.draw(use_edge_lengths=True);

## with edge lengths and aligned tips
tre.draw(use_edge_lengths=True, tip_labels_align=True);
docs/TipLabels.ipynb
eaton-lab/toytree
bsd-3-clause
Styling edges on aligned tip trees You can of course modify the styling of the dashed edge lengths (such as making them not dashed or changing their color) by using the edge_align_style dictionary.
## style the edges and alignment-edges tre.draw( use_edge_lengths=True, tip_labels_align=True, edge_style={"stroke": "darkcyan"}, edge_align_style={"stroke": "darkorange"}, );
docs/TipLabels.ipynb
eaton-lab/toytree
bsd-3-clause
Price History Retrieve Google's stock price history.
%%with_globals
%%bigquery --project {PROJECT}
SELECT * FROM `stock_src.price_history` LIMIT 10

def query_stock(symbol):
    return bq.query('''
        SELECT *
        FROM `stock_src.price_history`
        WHERE symbol="{0}"
        ORDER BY Date
        '''.format(symbol)).to_dataframe()

df_stock = query_stock('GOOG')
df_stock.Date = pd.to_datetime(df_stock.Date)
ax = df_stock.plot(x='Date', y='Close', title='Google stock')

# Add smoothed plot.
df_stock['Close_smoothed'] = df_stock.Close.rolling(100, center=True).mean()
df_stock.plot(x='Date', y='Close_smoothed', ax=ax);
courses/machine_learning/deepdive2/time_series_prediction/solutions/optional_1_data_exploration.ipynb
GoogleCloudPlatform/training-data-analyst
apache-2.0
Stock splits can also impact our data - causing a stock price to rapidly drop. In practice, we would need to clean all of our stock data to account for this. This would be a major effort! Fortunately, in the case of IBM, for example, all stock splits occurred before the year 2000. Learning objective 2
stock_symbol = 'IBM'

%%with_globals
%%bigquery df --project {PROJECT}
SELECT date, close
FROM `stock_src.price_history`
WHERE symbol='{stock_symbol}'
ORDER BY date

IBM_STOCK_SPLIT_DATE = '1979-05-10'
ax = df.plot(x='date', y='close')
ax.vlines(pd.to_datetime(IBM_STOCK_SPLIT_DATE), 0, 500,
          linestyle='dashed', color='grey', alpha=0.7);
courses/machine_learning/deepdive2/time_series_prediction/solutions/optional_1_data_exploration.ipynb
GoogleCloudPlatform/training-data-analyst
apache-2.0
Linear systems

<img src="https://i.ytimg.com/vi/7ujEpq7MWfE/maxresdefault.jpg" width="400" />

Given a square matrix $A_{n \times n}$ and a vector $\mathbf{b}$, find a vector $\mathbf{x}$ such that $A \cdot \mathbf{x} = \mathbf{b}$.

Example 1
$$
\begin{bmatrix}
10 & -1 & 2 & 0 \\
-1 & 11 & -1 & 3 \\
2 & -1 & 10 & -1 \\
0 & 3 & -1 & 8
\end{bmatrix}
\cdot
\begin{bmatrix} x_1 \\ x_2 \\ x_3 \\ x_4 \end{bmatrix}
=
\begin{bmatrix} 6 \\ 25 \\ -11 \\ 15 \end{bmatrix}
$$

The equation above can be rewritten as
$$
\left\{
\begin{aligned}
10x_1 - x_2 + 2x_3 + 0x_4 &= 6 \\
-x_1 + 11x_2 - x_3 + 3x_4 &= 25 \\
2x_1 - x_2 + 10x_3 - x_4 &= -11 \\
0x_1 + 3x_2 - x_3 + 8x_4 &= 15
\end{aligned}
\right.
$$

We can easily check that $\mathbf{x} = [1, 2, -1, 1]$ is a solution. This particular example has exactly one solution, but generally speaking a matrix equation can have no solutions, exactly one solution, or infinitely many. Some methods can find them all (e.g. Gauss), while others can only converge to a single solution (e.g. Seidel, Jacobi).

To start, let's define some helper functions
def is_square(a):
    return a.shape[0] == a.shape[1]

def has_solutions(a, b):
    return np.linalg.matrix_rank(a) == \
           np.linalg.matrix_rank(np.append(a, b[np.newaxis].T, axis=1))
num_methods/first/lab2.ipynb
lionell/laboratories
mit
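As a quick sanity check on Example 1, a minimal NumPy sketch can verify that $\mathbf{x} = [1, 2, -1, 1]$ really solves the system:

```python
import numpy as np

# The system from Example 1.
A = np.array([[10., -1., 2., 0.],
              [-1., 11., -1., 3.],
              [2., -1., 10., -1.],
              [0., 3., -1., 8.]])
b = np.array([6., 25., -11., 15.])
x = np.array([1., 2., -1., 1.])

# A @ x should reproduce b.
assert np.allclose(A.dot(x), b)
```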
Diagonal dominance

We say that matrix $A_{n \times n}$ is diagonally dominant iff
$$ |a_{ii}| \geq \sum_{j \neq i} |a_{ij}|, \quad \forall i = 1, \dots, n, $$
or equivalently
$$ 2 \cdot |a_{ii}| \geq \sum_{j=1}^{n} |a_{ij}|, \quad \forall i = 1, \dots, n. $$
Also we say that matrix $A_{n \times n}$ is strictly diagonally dominant iff it's diagonally dominant and
$$ \exists i : |a_{ii}| > \sum_{j \neq i} |a_{ij}|. $$
def is_dominant(a):
    return np.all(np.abs(a).sum(axis=1) <= 2 * np.abs(a).diagonal()) and \
           np.any(np.abs(a).sum(axis=1) < 2 * np.abs(a).diagonal())

def make_dominant(a):
    for i in range(a.shape[0]):
        a[i][i] = max(abs(a[i][i]), np.abs(a[i]).sum() - abs(a[i][i]) + 1)
    return a
num_methods/first/lab2.ipynb
lionell/laboratories
mit
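To make the definition concrete, here is a small standalone check of row-wise dominance (an independent sketch of the same condition, not the helpers above):

```python
import numpy as np

def row_dominant(a):
    # |a_ii| >= sum_{j != i} |a_ij| for every row i,
    # with strict inequality in at least one row.
    diag = np.abs(a.diagonal())
    off = np.abs(a).sum(axis=1) - diag
    return np.all(diag >= off) and np.any(diag > off)

A = np.array([[10., -1., 2.],
              [-1., 11., -1.],
              [2., -1., 1.]])   # last row: |1| < |2| + |-1|
assert not row_dominant(A)
A[2, 2] = 10.                   # now |10| > |2| + |-1|
assert row_dominant(A)
```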
NOTE! All generate functions return matrices that are already strictly diagonally dominant.
def generate_random(n):
    return make_dominant(np.random.rand(n, n) * n), np.random.rand(n) * n

def generate_hilbert(n):
    return make_dominant(sp.linalg.hilbert(n)), np.arange(1, n + 1, dtype=np.float)

def linalg(a, b, debug=False):
    return np.linalg.solve(a, b),
num_methods/first/lab2.ipynb
lionell/laboratories
mit
Gauss method

To perform row reduction on a matrix, one uses a sequence of elementary row operations to modify the matrix until the lower left-hand corner of the matrix is filled with zeros, as much as possible. There are three types of elementary row operations:

- Swapping two rows
- Multiplying a row by a non-zero number
- Adding a multiple of one row to another row

Using these operations, a matrix can always be transformed into an upper triangular matrix, and in fact one that is in row echelon form. Once all of the leading coefficients (the left-most non-zero entry in each row) are 1, and every column containing a leading coefficient has zeros elsewhere, the matrix is said to be in reduced row echelon form. This final form is unique; in other words, it is independent of the sequence of row operations used.

For example, in the following sequence of row operations (where multiple elementary operations might be done at each step), the third and fourth matrices are the ones in row echelon form, and the final matrix is the unique reduced row echelon form.

$$\left[\begin{array}{rrr|r} 1 & 3 & 1 & 9 \\ 1 & 1 & -1 & 1 \\ 3 & 11 & 5 & 35 \end{array}\right] \to
\left[\begin{array}{rrr|r} 1 & 3 & 1 & 9 \\ 0 & -2 & -2 & -8 \\ 0 & 2 & 2 & 8 \end{array}\right] \to
\left[\begin{array}{rrr|r} 1 & 3 & 1 & 9 \\ 0 & -2 & -2 & -8 \\ 0 & 0 & 0 & 0 \end{array}\right] \to
\left[\begin{array}{rrr|r} 1 & 0 & -2 & -3 \\ 0 & 1 & 1 & 4 \\ 0 & 0 & 0 & 0 \end{array}\right]$$

Using row operations to convert a matrix into reduced row echelon form is sometimes called Gauss–Jordan elimination. Some authors use the term Gaussian elimination to refer to the process until it has reached its upper triangular, or (non-reduced) row echelon form. For computational reasons, when solving systems of linear equations, it is sometimes preferable to stop row operations before the matrix is completely reduced.
def gauss(a, b, debug=False):
    assert is_square(a) and has_solutions(a, b)
    a = np.append(a.copy(), b[np.newaxis].T, axis=1)
    i = 0
    while i < a.shape[0]:
        r = np.argmax(a[i:, i]) + i
        a[[i, r]] = a[[r, i]]
        if a[i][i] == 0:
            break
        for j in range(a.shape[0]):
            if j == i:
                continue
            a[j] -= (a[j][i] / a[i][i]) * a[i]
        a[i] = a[i] / a[i][i]
        i += 1
    assert np.count_nonzero(a[i:]) == 0
    return a[:, -1],
num_methods/first/lab2.ipynb
lionell/laboratories
mit
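The row-operation sequence shown in the worked example above can be replayed directly with NumPy; a minimal sketch that reproduces the reduced row echelon form:

```python
import numpy as np

# Augmented matrix [A | b] from the worked example above.
M = np.array([[1., 3., 1., 9.],
              [1., 1., -1., 1.],
              [3., 11., 5., 35.]])

M[1] -= M[0]        # R2 <- R2 - R1
M[2] -= 3 * M[0]    # R3 <- R3 - 3*R1
M[2] += M[1]        # R3 <- R3 + R2 -> zero row (row echelon form)
M[1] /= -2.         # make the leading coefficient of R2 equal to 1
M[0] -= 3 * M[1]    # R1 <- R1 - 3*R2 -> reduced row echelon form

expected = np.array([[1., 0., -2., -3.],
                     [0., 1., 1., 4.],
                     [0., 0., 0., 0.]])
assert np.allclose(M, expected)
```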
Gauss-Seidel method

The Gauss–Seidel method is an iterative technique for solving a square system of $n$ linear equations with unknown $\mathbf{x}$:
$$ A \mathbf{x} = \mathbf{b}. $$

It is defined by the iteration
$$ L_* \mathbf{x}^{(k+1)} = \mathbf{b} - U \mathbf{x}^{(k)}, $$
where $\mathbf{x}^{(k)}$ is the $k$-th approximation or iteration of $\mathbf{x}$, $\mathbf{x}^{(k+1)}$ is the next or $(k+1)$-th iteration of $\mathbf{x}$, and the matrix $A$ is decomposed into a lower triangular component $L_*$ and a strictly upper triangular component $U$: $A = L_* + U$.

In more detail, write out $A$, $\mathbf{x}$ and $\mathbf{b}$ in their components:
$$A=\begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ a_{n1} & a_{n2} & \cdots & a_{nn} \end{bmatrix}, \qquad \mathbf{x} = \begin{bmatrix} x_{1} \\ x_2 \\ \vdots \\ x_n \end{bmatrix}, \qquad \mathbf{b} = \begin{bmatrix} b_{1} \\ b_2 \\ \vdots \\ b_n \end{bmatrix}.$$

Then the decomposition of $A$ into its lower triangular component and its strictly upper triangular component is given by:
$$A = L_* + U \qquad \text{where} \qquad L_* = \begin{bmatrix} a_{11} & 0 & \cdots & 0 \\ a_{21} & a_{22} & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ a_{n1} & a_{n2} & \cdots & a_{nn} \end{bmatrix}, \quad U = \begin{bmatrix} 0 & a_{12} & \cdots & a_{1n} \\ 0 & 0 & \cdots & a_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & 0 \end{bmatrix}.$$

The system of linear equations may be rewritten as:
$$L_* \mathbf{x} = \mathbf{b} - U \mathbf{x}.$$

The Gauss–Seidel method now solves the left hand side of this expression for $\mathbf{x}$, using the previous value of $\mathbf{x}$ on the right hand side. Analytically, this may be written as:
$$ \mathbf{x}^{(k+1)} = L_*^{-1} (\mathbf{b} - U \mathbf{x}^{(k)}). $$

However, by taking advantage of the triangular form of $L_*$, the elements of $\mathbf{x}^{(k+1)}$ can be computed sequentially using forward substitution:
$$ x^{(k+1)}_i = \frac{1}{a_{ii}} \left(b_i - \sum_{j=1}^{i-1} a_{ij} x^{(k+1)}_j - \sum_{j=i+1}^{n} a_{ij} x^{(k)}_j \right), \quad i=1,2,\dots,n. $$

The procedure is generally continued until the changes made by an iteration are below some tolerance.
def seidel(a, b, x0=None, limit=20000, debug=False):
    assert is_square(a) and is_dominant(a) and has_solutions(a, b)
    if x0 is None:
        x0 = np.zeros_like(b, dtype=np.float)
    x = x0.copy()
    while limit > 0:
        tx = x.copy()
        for i in range(a.shape[0]):
            x[i] = (b[i] - a[i, :].dot(x)) / a[i][i] + x[i]
        if debug:
            print(x)
        if np.allclose(x, tx, atol=ATOL, rtol=RTOL):
            return x, limit
        limit -= 1
    return x, limit
num_methods/first/lab2.ipynb
lionell/laboratories
mit
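To see the forward-substitution formula in action, here is a standalone sketch (independent of the seidel() function above) that performs a single Gauss–Seidel sweep on Example 1 from $\mathbf{x}^{(0)} = \mathbf{0}$; note how each row already uses the components updated earlier in the same sweep:

```python
import numpy as np

A = np.array([[10., -1., 2., 0.],
              [-1., 11., -1., 3.],
              [2., -1., 10., -1.],
              [0., 3., -1., 8.]])
b = np.array([6., 25., -11., 15.])

# One Gauss-Seidel sweep from x^(0) = 0 using the element-wise formula:
# components with j < i are already the new values.
x = np.zeros(4)
for i in range(4):
    s = A[i].dot(x) - A[i, i] * x[i]   # sum over j != i with the newest values
    x[i] = (b[i] - s) / A[i, i]

assert np.allclose(x, [0.6, 2.32727273, -0.98727273, 0.87886364])
```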
Jacobi method

The Jacobi method is an iterative technique for solving a square system of $n$ linear equations with unknown $\mathbf{x}$:
$$ A \mathbf{x} = \mathbf{b}, $$
where
$$A=\begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ a_{n1} & a_{n2} & \cdots & a_{nn} \end{bmatrix}, \qquad \mathbf{x} = \begin{bmatrix} x_{1} \\ x_2 \\ \vdots \\ x_n \end{bmatrix}, \qquad \mathbf{b} = \begin{bmatrix} b_{1} \\ b_2 \\ \vdots \\ b_n \end{bmatrix}.$$

Then $A$ can be decomposed into a diagonal component $D$ and the remainder $R$:
$$A = D + R \qquad \text{where} \qquad D = \begin{bmatrix} a_{11} & 0 & \cdots & 0 \\ 0 & a_{22} & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & a_{nn} \end{bmatrix} \text{ and } R = \begin{bmatrix} 0 & a_{12} & \cdots & a_{1n} \\ a_{21} & 0 & \cdots & a_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ a_{n1} & a_{n2} & \cdots & 0 \end{bmatrix}.$$

The solution is then obtained iteratively via
$$ \mathbf{x}^{(k+1)} = D^{-1} (\mathbf{b} - R \mathbf{x}^{(k)}), $$
where $\mathbf{x}^{(k)}$ is the $k$-th approximation or iteration of $\mathbf{x}$ and $\mathbf{x}^{(k+1)}$ is the next or $(k+1)$-th iteration of $\mathbf{x}$.

The element-based formula is thus:
$$ x^{(k+1)}_i = \frac{1}{a_{ii}} \left(b_i - \sum_{j \ne i} a_{ij} x^{(k)}_j \right), \quad i=1,2,\ldots,n. $$

The computation of $x_{i}^{(k+1)}$ requires each element in $\mathbf{x}^{(k)}$ except itself. Unlike the Gauss–Seidel method, we can't overwrite $x_i^{(k)}$ with $x_i^{(k+1)}$, as that value will be needed by the rest of the computation. The minimum amount of storage is two vectors of size $n$.
def jacobi(a, b, x0=None, limit=20000, debug=False):
    assert is_square(a) and is_dominant(a) and has_solutions(a, b)
    if x0 is None:
        x0 = np.zeros_like(b, dtype=np.float)
    x = x0.copy()
    while limit > 0:
        tx = x.copy()
        for i in range(a.shape[0]):
            x[i] = (b[i] - a[i, :].dot(tx)) / a[i][i] + tx[i]
        if debug:
            print(x)
        if np.allclose(x, tx, atol=ATOL, rtol=RTOL):
            return x, limit
        limit -= 1
    return x, limit
num_methods/first/lab2.ipynb
lionell/laboratories
mit
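The matrix form $\mathbf{x}^{(k+1)} = D^{-1}(\mathbf{b} - R\,\mathbf{x}^{(k)})$ can be checked with one step on Example 1: starting from $\mathbf{x}^{(0)} = \mathbf{0}$, the first Jacobi iterate is simply $\mathbf{b}$ divided elementwise by the diagonal. A minimal sketch:

```python
import numpy as np

A = np.array([[10., -1., 2., 0.],
              [-1., 11., -1., 3.],
              [2., -1., 10., -1.],
              [0., 3., -1., 8.]])
b = np.array([6., 25., -11., 15.])

# One Jacobi step from x^(0) = 0: x^(1) = D^{-1} (b - R x^(0)) = b / diag(A).
D = np.diag(A)
x0 = np.zeros(4)
x1 = (b - (A.dot(x0) - D * x0)) / D

assert np.allclose(x1, [0.6, 25 / 11, -1.1, 1.875])
```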
Evaluation

To evaluate an algorithm we are going to calculate
$$ score = \| A \cdot \mathbf{x}^* - \mathbf{b} \|, $$
where $\mathbf{x}^*$ is the result of the algorithm.
def norm(a, b, res):
    return np.linalg.norm(a.dot(res) - b)

def run(method, a, b, verbose=False, **kwargs):
    if not verbose:
        print("-" * 100)
        print(method.__name__.upper())
    res = method(a, b, **kwargs)
    score = norm(a, b, res[0])
    if not verbose:
        print("res =", res)
        print("score =", score)
    return score
num_methods/first/lab2.ipynb
lionell/laboratories
mit
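The score is just the norm of the residual $A\mathbf{x}^* - \mathbf{b}$, so an exact solution scores (numerically) zero and any error shows up as a positive value. A quick sketch on Example 1:

```python
import numpy as np

A = np.array([[10., -1., 2., 0.],
              [-1., 11., -1., 3.],
              [2., -1., 10., -1.],
              [0., 3., -1., 8.]])
b = np.array([6., 25., -11., 15.])

exact = np.array([1., 2., -1., 1.])
rough = np.array([1.01, 2., -1., 1.])   # perturb one component

assert np.linalg.norm(A.dot(exact) - b) < 1e-12   # perfect score
assert np.linalg.norm(A.dot(rough) - b) > 0.0     # the error shows up in the residual
```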
Back to examples

Example 1

Do you remember the example from the problem statement? Now we are going to see how the iterative algorithms converge on it.
a4 = np.array([[10., -1., 2., 0.],
               [-1., 11., -1., 3.],
               [2., -1., 10., -1.],
               [0., 3., -1., 8.]])
print(a4)
b = np.array([6., 25., -11., 15.])
print("b =", b)

_ = run(linalg, a4, b)
_ = run(gauss, a4, b)
_ = run(seidel, a4, b)
_ = run(jacobi, a4, b)
num_methods/first/lab2.ipynb
lionell/laboratories
mit
Seidel vs Jacobi

Here is a tricky example of a linear system where the Jacobi method converges faster than Gauss-Seidel. As you will see, Jacobi needs only 1 iteration to find the answer, while Gauss-Seidel needs over 10 iterations to find an approximate solution.
a4 = np.array([[1., -1/8, 1/32, 1/64],
               [-1/2, 2., 1/16, 1/32],
               [-1., 1/4, 4., 1/16],
               [-1., 1/4, 1/8, 8.]])
print(a4)
b = np.array([1., 4., 16., 64.])
print("b =", b)

_ = run(linalg, a4, b)
_ = run(gauss, a4, b)
_ = run(seidel, a4, b)
_ = run(jacobi, a4, b)
num_methods/first/lab2.ipynb
lionell/laboratories
mit
Hilbert matrices

Now, let's have a look at Hilbert matrices. It's said that non-iterative methods should do badly here. Proof?
a, b = generate_hilbert(1000)

print("LINALG =", run(linalg, a, b, verbose=True))
print("GAUSS =", run(gauss, a, b, verbose=True))
print("SEIDEL =", run(seidel, a, b, x0=np.zeros_like(b, dtype=np.float), verbose=True))
print("JACOBI =", run(jacobi, a, b, x0=np.zeros_like(b, dtype=np.float), verbose=True))
num_methods/first/lab2.ipynb
lionell/laboratories
mit
As we can see, it's true for huge $n > 1000$ that direct methods work badly on Hilbert matrices. But how does this depend on the size of the matrix? Is $n = 500$ enough?
def plot_hilbert_score_by_matrix_size(method, sizes):
    scores = np.zeros_like(sizes, dtype=np.float)
    for i in range(len(sizes)):
        a, b = generate_hilbert(sizes[i])
        scores[i] = run(method, a, b, verbose=True)
    plt.plot(sizes, scores, label=method.__name__)

sizes = np.linspace(1, 600, num=50, dtype=np.int)

plt.figure(figsize=(15, 10))
plot_hilbert_score_by_matrix_size(linalg, sizes)
plot_hilbert_score_by_matrix_size(gauss, sizes)
plot_hilbert_score_by_matrix_size(seidel, sizes)
plot_hilbert_score_by_matrix_size(jacobi, sizes)
plt.title("Scores of different methods for Hilbert matrices") \
   .set_fontsize("xx-large")
plt.xlabel("n").set_fontsize("xx-large")
plt.ylabel("score").set_fontsize("xx-large")
legend = plt.legend(loc="upper right")
for label in legend.get_texts():
    label.set_fontsize("xx-large")
plt.show()
num_methods/first/lab2.ipynb
lionell/laboratories
mit
Random matrices

It's time to test our methods on randomly generated matrices. To be deterministic, we set the seed to $17$ every time we call generate_random(...).
a, b = generate_random(20)

_ = run(linalg, a, b)
_ = run(gauss, a, b)
_ = run(seidel, a, b)
_ = run(jacobi, a, b)
num_methods/first/lab2.ipynb
lionell/laboratories
mit
Runtime

Now, let's compare our methods by actual running time. To get more accurate results, we need to run them on a large matrix (e.g. a random $200 \times 200$ one).
a, b = generate_random(200)

%timeit run(linalg, a, b, verbose=True)
%timeit run(gauss, a, b, verbose=True)
%timeit run(seidel, a, b, verbose=True)
%timeit run(jacobi, a, b, verbose=True)
num_methods/first/lab2.ipynb
lionell/laboratories
mit
Convergence speed

We have already compared the methods by accuracy and time, but what about convergence speed? Now we are going to show the error of each method after a given number of iterations. To make the plot clearer we use a logarithmic x-scale.
def plot_convergence(method, a, b, limits):
    scores = np.zeros_like(limits, dtype=np.float)
    for i in range(len(limits)):
        scores[i] = run(method, a, b, x0=np.zeros_like(b, dtype=np.float),
                        limit=limits[i], verbose=True)
    plt.plot(limits, scores, label=method.__name__)

a, b = generate_random(15)
limits = np.arange(0, 350)

plt.figure(figsize=(15, 10))
plot_convergence(seidel, a, b, limits)
plot_convergence(jacobi, a, b, limits)
plt.title("Convergence of Seidel/Jacobi methods for random matrix").set_fontsize("xx-large")
plt.xlabel("n_iters").set_fontsize("xx-large")
plt.ylabel("score").set_fontsize("xx-large")
plt.xscale("log")
legend = plt.legend(loc="upper right")
for label in legend.get_texts():
    label.set_fontsize("xx-large")
plt.show()
num_methods/first/lab2.ipynb
lionell/laboratories
mit
&larr; Back to Index Neural Networks Neural networks are a category of machine learning models which have seen a resurgence since 2006. Deep learning is the recent area of machine learning which combines many neuron layers (e.g. 20, 50, or more) to form a "deep" neural network. In doing so, a deep neural network can accomplish sophisticated classification tasks that classical machine learning models would find difficult. Keras Keras is a Python package for deep learning which provides an easy-to-use layer of abstraction on top of Theano and Tensorflow. Import Keras objects:
from keras.models import Sequential
from keras.layers.core import Dense
import keras.optimizers
neural_networks.ipynb
stevetjoa/stanford-mir
mit
Create a neural network architecture by layering neurons. Define the number of neurons in each layer and their activation functions:
model = Sequential()
model.add(Dense(4, activation='relu', input_dim=2))
model.add(Dense(4, activation='relu'))
model.add(Dense(2, activation='softmax'))
neural_networks.ipynb
stevetjoa/stanford-mir
mit
Choose the optimizer, i.e. the update rule that the neural network will use to train:
optimizer = keras.optimizers.SGD(decay=0.001, momentum=0.99)
neural_networks.ipynb
stevetjoa/stanford-mir
mit
Compile the model, i.e. create the low-level code that the CPU or GPU will actually use for its calculations during training and testing:
model.compile(loss='binary_crossentropy', optimizer=optimizer)
neural_networks.ipynb
stevetjoa/stanford-mir
mit
Example: XOR The operation XOR is defined as: XOR(x, y) = 1 if x != y else 0 Synthesize training data for the XOR problem.
X_train = numpy.random.randn(10000, 2)
print(X_train.shape)
print(X_train[:5])
neural_networks.ipynb
stevetjoa/stanford-mir
mit
Create target labels for the training data.
y_train = numpy.array([
    [float(x[0]*x[1] > 0), float(x[0]*x[1] <= 0)]
    for x in X_train
])
print(y_train.shape)
y_train[:5]
neural_networks.ipynb
stevetjoa/stanford-mir
mit
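As a sanity check on the label construction above: the one-hot encoding marks whether the two coordinates have the same sign, so the second component is exactly XOR applied to the signs. A tiny sketch:

```python
# The second label component is XOR of the coordinate signs:
# it is 1 exactly when x and y lie in different half-planes.
def label(x, y):
    return [float(x * y > 0), float(x * y <= 0)]

assert label(1.0, 1.0) == [1.0, 0.0]      # same sign      -> class 0
assert label(-1.0, -1.0) == [1.0, 0.0]
assert label(1.0, -1.0) == [0.0, 1.0]     # different sign -> class 1 (XOR)
assert label(-1.0, 1.0) == [0.0, 1.0]
```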
Plot the training data:
plt.figure(figsize=(9, 5))
plt.scatter(X_train[y_train[:,0]>0.5,0], X_train[y_train[:,0]>0.5,1], c='r', s=1)
plt.scatter(X_train[y_train[:,1]>0.5,0], X_train[y_train[:,1]>0.5,1], c='b', s=1)
neural_networks.ipynb
stevetjoa/stanford-mir
mit
Finally, train the model!
results = model.fit(X_train, y_train, epochs=200, batch_size=100)
neural_networks.ipynb
stevetjoa/stanford-mir
mit
Plot the loss function as a function of the training iteration number:
plt.plot(results.history['loss'])
neural_networks.ipynb
stevetjoa/stanford-mir
mit
Create test data:
X_test = numpy.random.randn(5000, 2)
neural_networks.ipynb
stevetjoa/stanford-mir
mit
Use the trained neural network to make predictions from the test data:
y_test = model.predict(X_test)
y_test.shape
neural_networks.ipynb
stevetjoa/stanford-mir
mit
Let's see if it worked:
plt.figure(figsize=(9, 5))
plt.scatter(X_test[y_test[:, 0] > 0.5,0], X_test[y_test[:, 0] > 0.5,1], c='r', s=1)
plt.scatter(X_test[y_test[:, 1] > 0.5,0], X_test[y_test[:, 1] > 0.5,1], c='b', s=1)
neural_networks.ipynb
stevetjoa/stanford-mir
mit
Binary vector generator

Version 2 - via itertools
# def binary_vector_2(rows_n = [2,4,6,8,10], columns_n = 10):
#     rows = how_many(rows_n, 10)
#     index = 0
#     locations = np.zeros((rows, columns_n))
#     for i in rows_n:
#         for bin_string in kbits(10, i):
#             bin_array = np.fromstring(bin_string, 'u1') - ord('0')
#             locations[index, :] = bin_array
#             index = index + 1
#     return locations

# inputs = binary_vector_2()
# labels = find_labels(inputs, one_hot=True)
# #dataset_ver = Dataset(inputs, labels)
# #pickle_test(dataset_ver)
# inputs.shape

import numpy as np
import itertools
from scipy.special import comb

def kbits(n, k):
    """
    Generate a list of ordered binary strings representing all the
    possible ways n chooses k.

    Args:
        n (int): set cardinality
        k (int): subset cardinality

    Returns:
        result (string): list of binary strings
    """
    result = []
    for bits in itertools.combinations(range(n), k):
        s = ['0'] * n
        for bit in bits:
            s[bit] = '1'
        result.append(''.join(s))
    return result

def binary_vector_2(rows_n=[2,4,6,8,10], distribution=[45], columns_n=10):
    """
    Matrix of binary vectors from distribution.

    Args:
        rows_n (int, ndarray): nx1
        distribution (int, ndarray): nx1

    Returns:
        ndarray of dimension rows_n * distribution, columns_n

    TODO:
        check inputs, here given as list, but should it be a ndarray?
        remove index accumulator and rewrite via len(kbit)

    Examples:
        Should be written in doctest format and should illustrate how to
        use the function. distribution=comb(columns_n, row) returns all
        possible combinations: in reality not, should remove randomness:
        or better set flag replacement = False
    """
    rows_n = np.array(rows_n)
    distribution = np.array(distribution)
    assert np.all(rows_n > 0)
    assert np.all(distribution > 0), "Distribution values must be positive. \n{} provided".format(distribution)
    if len(distribution) == 1:
        distribution = np.repeat(distribution, len(rows_n))
    assert len(distribution) == len(rows_n)
    rows = np.sum(distribution)
    index = 0
    locations = np.zeros((rows, columns_n))
    cluster_size = comb(columns_n, rows_n)
    for i in range(len(rows_n)):
        kbit = kbits(10, rows_n[i])
        take_this = np.random.randint(cluster_size[i], size=distribution[i])
        lista = []
        for indices in take_this:
            lista.append(kbit[indices])
        kbit = lista
        for bin_string in kbit:
            bin_array = np.fromstring(bin_string, 'u1') - ord('0')
            locations[index, :] = bin_array
            index = index + 1
    return locations
Input_utils.ipynb
bramacchino/numberSense
mit
Accumulator Inputs
import numpy as np

class accumulatorMatrix(object):
    """
    Generate a matrix whose row vectors correspond to accumulated numerosity,
    where each number is coded by repeating 1 `times` times.
    If zero = True, the zero vector is included.

    Args:
        max_number (int): the greatest number to be represented
        length (int): vector length; if not provided it is computed as the
            minimum compatible length
        times (int): length of the unity representation
        zero (bool): whether the zero vector is included or excluded

    Returns:
        outputs (int, ndarray): max_number x length ndarray
    """
    def __init__(self, max_number, length=None, times=2, zero=False):
        self.max_number = max_number
        self.length = length
        self.times = times
        self.zero = zero
        if not length:
            self.length = self.times * self.max_number
        assert self.max_number == self.length / times
        if self.zero:
            self.max_number = self.max_number + 1
            add = 0
        else:
            add = 1
        self.outputs = np.zeros((self.max_number, self.length), dtype=int)
        for i in range(0, self.max_number):
            self.outputs[i, :self.times * (i + add)].fill(1)

    def shuffle_(self):
        np.random.shuffle(self.outputs)

    #def unshuffle(self):
    """We want to access the random shuffle in order to have the list
    http://stackoverflow.com/questions/19306976/python-shuffling-with-a-parameter-to-get-the-same-result"""

    def replicate(self, times=1):
        self.outputs = np.tile(self.outputs, [times, 1])


import warnings

def accumulator_matrix(max_number, length=None, times=2, zero=False):
    """
    Generate a matrix whose row vectors correspond to accumulated numerosity,
    where each number is coded by repeating 1 `times` times.
    If zero = True, the zero vector is included.

    Args:
        max_number (int): the greatest number to be represented
        length (int): vector length; if not provided it is computed as the
            minimum compatible length
        times (int): length of the unity representation
        zero (bool): whether the zero vector is included or excluded

    Returns:
        outputs (int, ndarray): max_number x length ndarray
    """
    warnings.warn("shouldn't use this function anymore! \nNow use the class accumulatorMatrix.",
                  DeprecationWarning)
    if not length:
        length = times * max_number
    assert max_number == length / times
    if zero:
        max_number = max_number + 1
        add = 0
    else:
        add = 1
    outputs = np.zeros((max_number, length), dtype=int)
    for i in range(0, max_number):
        outputs[i, :times * (i + add)].fill(1)
    return outputs

# np.random.seed(105)
# Weights = np.random.rand(5,10)
Input_utils.ipynb
bramacchino/numberSense
mit
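A quick standalone sketch of what the accumulator encoding produces (this mirrors accumulatorMatrix(3, times=2) without depending on the class above):

```python
import numpy as np

max_number, times = 3, 2
length = times * max_number
outputs = np.zeros((max_number, length), dtype=int)
for i in range(max_number):
    outputs[i, :times * (i + 1)] = 1   # number i+1 -> (i+1)*times leading ones

expected = np.array([[1, 1, 0, 0, 0, 0],
                     [1, 1, 1, 1, 0, 0],
                     [1, 1, 1, 1, 1, 1]])
assert np.array_equal(outputs, expected)
```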
Label the data
def find_labels(inputs, multiple=1, one_hot=False):
    """
    Generate the labels corresponding to binary vectors.
    If one_hot = True, the labels are one-hot encoded, otherwise integers.

    Args:
        inputs (int, ndarray): ndarray of row samples
        multiple (int): length of the unity representation
        one_hot (bool): False for integer labels, True for one-hot encoded labels

    Returns:
        labels (int): integer or one-hot encoded labels
    """
    labels = (np.sum(inputs, axis=1) / multiple).astype(int)
    if one_hot:
        size = np.max(labels)
        label_matrix = np.zeros((labels.shape[0], size + 1))
        label_matrix[np.arange(labels.shape[0]), labels] = 1
        labels = label_matrix
    return labels
Input_utils.ipynb
bramacchino/numberSense
mit
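The labeling logic can be illustrated with a small standalone sketch (re-implementing the same row-sum and one-hot steps, independently of find_labels above):

```python
import numpy as np

# Row sums divided by the unit length give the integer numerosity;
# one-hot encoding places a single 1 at that index.
inputs = np.array([[1, 1, 0, 0],
                   [1, 1, 1, 1]])
labels = (inputs.sum(axis=1) // 2).astype(int)   # multiple = 2
assert list(labels) == [1, 2]

one_hot = np.zeros((len(labels), labels.max() + 1))
one_hot[np.arange(len(labels)), labels] = 1
expected = np.array([[0., 1., 0.],
                     [0., 0., 1.]])
assert np.array_equal(one_hot, expected)
```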
Create dataset Namedtuple
from collections import namedtuple

def Dataset(inputs, labels):
    """Creates dataset

    Args:
        inputs (array):
        labels (array): corresponding labels

    Returns:
        Datasets: named tuple
    """
    Dataset = namedtuple('Dataset', ['data', 'labels'])
    Datasets = Dataset(inputs, labels)
    return Datasets
Input_utils.ipynb
bramacchino/numberSense
mit
Pickling
from collections import namedtuple
import pickle

Dataset = namedtuple('Dataset', ['data', 'labels'])
#data_verguts = Dataset(inputs, labels)

def pickle_test(Data, name):
    f = open(name + '.pickle', 'ab')
    pickle.dump(Data, f)
    f.close()

#pickle_test(data_verguts, "verguts")

# # Test opening the pickle
# pickle_in = open("Data.pickle", "rb")
# ex = pickle.load(pickle_in)
# ex.labels[25]
Input_utils.ipynb
bramacchino/numberSense
mit
We now pickle the named tuple, cfr. When to pickle. See http://localhost:8888/notebooks/Dropbox/Programming/Jupyter/Competitive-Unsupervised/NNTf.ipynb for creating a pandas dataframe out of the namedtuple, and also http://stackoverflow.com/questions/16377215/how-to-pickle-a-namedtuple-instance-correctly and https://blog.hartleybrody.com/python-serialize/

Simon and Peterson 2000, Input Dataset

The dataset consists of vectors of length 16, with vectors of length 6 as labels, one-hot encoded. 50,000 input patterns are generated: a numerosity in range(6) is picked randomly, then locations are randomly selected.

Verguts and Fias: Inputs

Uniformly distributed input

The outlier 5 is represented only 10 times; this allows the net to see it a reasonable number of times, but not too often, considering that it can only have one shape.
rows_n = [2,4,6,8,10]
#comb(10, rows_n)
inputs = binary_vector_2(distribution=comb(10, rows_n))
labels = find_labels(inputs, multiple=2, one_hot=True)
count = 0
for i in inputs:
    print(count, i, int(np.sum(i)/2), labels[count])
    count += 1
Input_utils.ipynb
bramacchino/numberSense
mit
Accumulator inputs - Verguts & Fias

Numerosity from 1 to 5, where each unit is represented by `times` repeated ones (e.g. with times=3, the number 2 is represented as [1,1,1,1,1,1,0,0,0,0,0,0,0,0,0]). No zero vector.
inputs = accumulatorMatrix(5, times=2).outputs
labels = find_labels(inputs, multiple=2, one_hot=True)
Dataset = namedtuple('Dataset', ['data', 'labels'])
verguts2004 = Dataset(inputs, labels)
pickle_test(verguts2004, "verguts_accumulator")
verguts2004.labels
Input_utils.ipynb
bramacchino/numberSense
mit
Import libraries
# Import Python libraries
import logging
import os
import sys
import gc
import pandas as pd
from IPython.display import display, HTML
from collections import OrderedDict
import numpy as np
import statistics
from scipy.stats import stats

# Import local Python modules
from Configs.CONSTANTS import CONSTANTS
from Configs.Logger import Logger
from Features.Variables import Variables
from ReadersWriters.ReadersWriters import ReadersWriters
from Stats.PreProcess import PreProcess
from Stats.FeatureSelection import FeatureSelection
from Stats.TrainingMethod import TrainingMethod
from Stats.Plots import Plots

# Check the interpreter
print("\nMake sure the correct Python interpreter is used!")
print(sys.version)
print("\nMake sure sys.path of the Python interpreter is correct!")
print(os.getcwd())
TCARER_Basic.ipynb
mesgarpour/T-CARER
apache-2.0
<br/><br/> 1.1. Initialise General Settings

<font style="font-weight:bold;color:red">Main configuration settings:</font>
- Specify the full path of the configuration file → config_path
- Specify the full path of the output folder → io_path
- Specify the application name (the suffix of the output file names) → app_name
- Specify the sub-model name, to locate the related feature configuration, based on the "Table_Reference_Name" column in the configuration file → submodel_name
- Specify the sub-model's input file name (excluding the CSV extension) → submodel_input_name

<font style="font-weight:bold;color:red">External Configuration Files:</font>
- The MySQL database configuration settings & other configuration metadata → <i>Inputs/CONFIGURATIONS_1.ini</i>
- The input features' configuration file (Note: only the CSV export of the XLSX will be used by this Notebook) → <i>Inputs/config_features_path.xlsx</i>, <i>Inputs/config_features_path.csv</i>
config_path = os.path.abspath("ConfigInputs/CONFIGURATIONS.ini")
io_path = os.path.abspath("../../tmp/TCARER/Basic_prototype")
app_name = "T-CARER"
submodel_name = "hesIp"
submodel_input_name = "tcarer_model_features_ip"

print("\n The full path of the configuration file: \n\t", config_path,
      "\n The full path of the output folder: \n\t", io_path,
      "\n The application name (the suffix of the outputs file name): \n\t", app_name,
      "\n The sub-model name, to locate the related feature configuration: \n\t", submodel_name,
      "\n The sub-model's input file name: \n\t", submodel_input_name)
TCARER_Basic.ipynb
mesgarpour/T-CARER
apache-2.0
<br/><br/> Initialise logs
if not os.path.exists(io_path):
    os.makedirs(io_path, exist_ok=True)

logger = Logger(path=io_path, app_name=app_name, ext="log")
logger = logging.getLogger(app_name)
TCARER_Basic.ipynb
mesgarpour/T-CARER
apache-2.0
Initialise constants and some of classes
# Initialise constants
CONSTANTS.set(io_path, app_name)

# Initialise other classes
readers_writers = ReadersWriters()
preprocess = PreProcess(io_path)
feature_selection = FeatureSelection()
plts = Plots()

# Set print settings
pd.set_option('display.width', 1600, 'display.max_colwidth', 800)
TCARER_Basic.ipynb
mesgarpour/T-CARER
apache-2.0
1.2. Initialise Features Metadata

Read the input features' configuration file & store the features metadata
# variables settings
features_metadata = dict()

features_metadata_all = readers_writers.load_csv(
    path=CONSTANTS.io_path, title=CONSTANTS.config_features_path, dataframing=True)
features_metadata = features_metadata_all.loc[
    (features_metadata_all["Selected"] == 1) &
    (features_metadata_all["Table_Reference_Name"] == submodel_name)]
features_metadata.reset_index()

# print
display(features_metadata)
TCARER_Basic.ipynb
mesgarpour/T-CARER
apache-2.0
Set input features' metadata dictionaries
# Dictionary of features types, dtypes, & max-states
features_types = dict()
features_dtypes = dict()
features_states_values = dict()
features_names_group = dict()

for _, row in features_metadata.iterrows():
    if not pd.isnull(row["Variable_Max_States"]):
        states_values = str(row["Variable_Max_States"]).split(',')
        states_values = list(map(int, states_values))
    else:
        states_values = None

    if not pd.isnull(row["Variable_Aggregation"]):
        postfixes = row["Variable_Aggregation"].replace(' ', '').split(',')
        f_types = row["Variable_Type"].replace(' ', '').split(',')
        f_dtypes = row["Variable_dType"].replace(' ', '').split(',')
        for p in range(len(postfixes)):
            features_types[row["Variable_Name"] + "_" + postfixes[p]] = f_types[p]
            features_dtypes[row["Variable_Name"] + "_" + postfixes[p]] = pd.Series(dtype=f_dtypes[p])
            features_states_values[row["Variable_Name"] + "_" + postfixes[p]] = states_values
            features_names_group[row["Variable_Name"] + "_" + postfixes[p]] = row["Variable_Name"] + "_" + postfixes[p]
    else:
        features_types[row["Variable_Name"]] = row["Variable_Type"]
        features_dtypes[row["Variable_Name"]] = row["Variable_dType"]
        features_states_values[row["Variable_Name"]] = states_values
        features_names_group[row["Variable_Name"]] = row["Variable_Name"]
        if states_values is not None:
            for postfix in states_values:
                features_names_group[row["Variable_Name"] + "_" + str(postfix)] = row["Variable_Name"]

features_dtypes = pd.DataFrame(features_dtypes).dtypes

# Dictionary of features groups
features_types_group = OrderedDict()
f_types = set([f_type for f_type in features_types.values()])
features_types_group = OrderedDict(zip(list(f_types), [set() for _ in range(len(f_types))]))
for f_name, f_type in features_types.items():
    features_types_group[f_type].add(f_name)

print("Available features types: " + ','.join(f_types))
<br/><br/> <font style="font-weight:bold;color:red">2. Generate Features</font>

<font style="font-weight:bold;color:red">Notes:</font>
- It generates the final spell-wise & temporal features from the MySQL table(s), & converts them into CSV(s);
- It generates the CSV(s) based on the configuration file of the features (Note: only the CSV export of the XLSX will be used by this Notebook)
  → <i>Inputs/config_features_path.xlsx</i>
  → <i>Inputs/config_features_path.csv</i>
skip = True

# settings
csv_schema = ["my_db_schema"]
csv_input_tables = ["tcarer_features"]
csv_history_tables = ["hesIp"]
csv_column_index = "localID"
csv_output_table = "tcarer_model_features_ip"
csv_query_batch_size = 100000

if skip is False:
    # generate the csv file
    variables = Variables(submodel_name, CONSTANTS.io_path, CONSTANTS.io_path,
                          CONSTANTS.config_features_path, csv_output_table)
    variables.set(csv_schema, csv_input_tables, csv_history_tables,
                  csv_column_index, csv_query_batch_size)
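The `Variables` class's MySQL-to-CSV export is not shown in this Notebook; presumably it pages through the source table in `csv_query_batch_size`-row batches rather than loading it whole. A standalone sketch of that batching pattern, using an in-memory sqlite3 table as a stand-in for the MySQL source (the table name is borrowed from the settings above; the data are hypothetical):

```python
import io
import sqlite3
import pandas as pd

# In-memory stand-in for the MySQL source table
conn = sqlite3.connect(":memory:")
pd.DataFrame({"localID": range(10), "feature": range(10, 20)}).to_sql(
    "tcarer_features", conn, index=False)

batch_size = 4  # stands in for csv_query_batch_size
out = io.StringIO()
for i, chunk in enumerate(pd.read_sql_query(
        "SELECT * FROM tcarer_features", conn, chunksize=batch_size)):
    # write the header only with the first batch
    chunk.to_csv(out, index=False, header=(i == 0))

lines = out.getvalue().strip().splitlines()  # 1 header line + 10 data rows
```

Batching keeps peak memory proportional to `batch_size` instead of the full table size, which matters for the 100,000-row batches configured above.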
<br/><br/> 3. Read Data

Read the input features from the CSV input file.
features_input = readers_writers.load_csv(path=CONSTANTS.io_path, title=submodel_input_name, dataframing=True)
# astype returns a converted copy, so the result must be assigned back
features_input = features_input.astype(dtype=features_dtypes)
print("Number of columns: ", len(features_input.columns), "; Total records: ", len(features_input.index))
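Note that pandas `DataFrame.astype` returns a converted copy rather than modifying the frame in place, so its result must be reassigned. A minimal illustration with toy data:

```python
import pandas as pd

df = pd.DataFrame({"a": ["1", "2"], "b": ["0.5", "1.5"]})
dtypes = {"a": "int64", "b": "float64"}

df.astype(dtype=dtypes)       # returns a converted copy; df itself is unchanged
df = df.astype(dtype=dtypes)  # reassign to keep the converted frame
print(df.dtypes["a"], df.dtypes["b"])  # int64 float64
```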
Verify features visually
display(features_input.head())
<br/><br/> 4. Filter Features

4.1. Descriptive Statistics

Produce a descriptive statistics report of the 'CATEGORICAL', 'CONTINUOUS', & 'TARGET' features.
file_name = "Step_04_Data_ColumnNames"
readers_writers.save_csv(path=CONSTANTS.io_path, title=file_name,
                         data=list(features_input.columns.values), append=False)

file_name = "Step_04_Stats_Categorical"
o_stats = preprocess.stats_discrete_df(df=features_input, includes=features_types_group["CATEGORICAL"],
                                       file_name=file_name)

file_name = "Step_04_Stats_Continuous"
o_stats = preprocess.stats_continuous_df(df=features_input, includes=features_types_group["CONTINUOUS"],
                                         file_name=file_name)

file_name = "Step_04_Stats_Target"
o_stats = preprocess.stats_discrete_df(df=features_input, includes=features_types_group["TARGET"],
                                       file_name=file_name)
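The internals of `stats_discrete_df` and `stats_continuous_df` are not shown here, but the split they make is the usual one: frequency tables for discrete features and summary statistics for continuous ones. A minimal sketch of that split with hypothetical data (the actual T-CARER implementation may differ):

```python
import pandas as pd

df = pd.DataFrame({"gender": ["M", "F", "F", "M", "F"],
                   "age": [70, 82, 65, 90, 74],
                   "TARGET": [0, 1, 0, 0, 1]})

# Discrete features: frequency table per category
freq = df["gender"].value_counts().to_dict()

# Continuous features: count/mean/std/min/quartiles/max summary
stats = df["age"].describe()

print(freq)                      # {'F': 3, 'M': 2}
print(stats["min"], stats["max"])  # 65.0 90.0
```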