Columns: repo_name (string, 6–77 chars) · path (string, 8–215 chars) · license (string, 15 classes) · cells (list) · types (list)
Kulbear/deep-learning-nano-foundation
DLND-language-translation/dlnd_language_translation.ipynb
mit
[ "Language Translation\nIn this project, you’re going to take a peek into the realm of neural network machine translation. You’ll be training a sequence to sequence model on a dataset of English and French sentences that can translate new sentences from English to French.\nGet the Data\nSince translating the whole language of English to French will take lots of time to train, we have provided you with a small portion of the English corpus.", "\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\nimport helper\nimport problem_unittests as tests\n\nsource_path = 'data/small_vocab_en'\ntarget_path = 'data/small_vocab_fr'\nsource_text = helper.load_data(source_path)\ntarget_text = helper.load_data(target_path)", "Explore the Data\nPlay around with view_sentence_range to view different parts of the data.", "view_sentence_range = (0, 10)\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\nimport numpy as np\n\nprint('Dataset Stats')\nprint('Roughly the number of unique words: {}'.format(len({word: None for word in source_text.split()})))\n\nsentences = source_text.split('\\n')\nword_counts = [len(sentence.split()) for sentence in sentences]\nprint('Number of sentences: {}'.format(len(sentences)))\nprint('Average number of words in a sentence: {}'.format(np.average(word_counts)))\n\nprint()\nprint('English sentences {} to {}:'.format(*view_sentence_range))\nprint('\\n'.join(source_text.split('\\n')[view_sentence_range[0]:view_sentence_range[1]]))\nprint()\nprint('French sentences {} to {}:'.format(*view_sentence_range))\nprint('\\n'.join(target_text.split('\\n')[view_sentence_range[0]:view_sentence_range[1]]))", "Implement Preprocessing Function\nText to Word Ids\nAs you did with other RNNs, you must turn the text into numbers so the computer can understand it. In the function text_to_ids(), you'll turn source_text and target_text from words to ids. However, you need to add the <EOS> word id at the end of each sentence from target_text. 
This will help the neural network predict when the sentence should end.\nYou can get the <EOS> word id by doing:\npython\ntarget_vocab_to_int['<EOS>']\nYou can get other word ids using source_vocab_to_int and target_vocab_to_int.", "def text_to_ids(source_text, target_text, source_vocab_to_int, target_vocab_to_int):\n \"\"\"\n Convert source and target text to proper word ids\n :param source_text: String that contains all the source text.\n :param target_text: String that contains all the target text.\n :param source_vocab_to_int: Dictionary to go from the source words to an id\n :param target_vocab_to_int: Dictionary to go from the target words to an id\n :return: A tuple of lists (source_id_text, target_id_text)\n \"\"\"\n source_id_text = [[source_vocab_to_int[word] for word in sentence.split()] for sentence in source_text.split('\\n')]\n target_id_text = [[target_vocab_to_int[word] for word in (sentence.split() + ['<EOS>'])] for sentence in target_text.split('\\n')]\n return source_id_text, target_id_text\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_text_to_ids(text_to_ids)", "Preprocess all the data and save it\nRunning the code cell below will preprocess all the data and save it to file.", "\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\nhelper.preprocess_and_save_data(source_path, target_path, text_to_ids)", "Check Point\nThis is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. 
The preprocessed data has been saved to disk.", "\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\nimport numpy as np\nimport helper\n\n(source_int_text, target_int_text), (source_vocab_to_int, target_vocab_to_int), _ = helper.load_preprocess()", "Check the Version of TensorFlow and Access to GPU\nThis will check to make sure you have the correct version of TensorFlow and access to a GPU.", "\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\nfrom distutils.version import LooseVersion\nimport warnings\nimport tensorflow as tf\n\n# Check TensorFlow Version\nassert LooseVersion(tf.__version__) in [LooseVersion('1.0.0'), LooseVersion('1.0.1')], 'This project requires TensorFlow version 1.0. You are using {}'.format(tf.__version__)\nprint('TensorFlow Version: {}'.format(tf.__version__))\n\n# Check for a GPU\nif not tf.test.gpu_device_name():\n warnings.warn('No GPU found. Please use a GPU to train your neural network.')\nelse:\n print('Default GPU Device: {}'.format(tf.test.gpu_device_name()))", "Build the Neural Network\nYou'll build the components necessary for a Sequence-to-Sequence model by implementing the following functions:\n- model_inputs\n- process_decoding_input\n- encoding_layer\n- decoding_layer_train\n- decoding_layer_infer\n- decoding_layer\n- seq2seq_model\nInput\nImplement the model_inputs() function to create TF Placeholders for the Neural Network. 
It should create the following placeholders:\n\nInput text placeholder named \"input\" using the TF Placeholder name parameter with rank 2.\nTargets placeholder with rank 2.\nLearning rate placeholder with rank 0.\nKeep probability placeholder named \"keep_prob\" using the TF Placeholder name parameter with rank 0.\n\nReturn the placeholders in the following tuple: (Input, Targets, Learning Rate, Keep Probability)", "def model_inputs():\n \"\"\"\n Create TF Placeholders for input, targets, and learning rate.\n :return: Tuple (input, targets, learning rate, keep probability)\n \"\"\"\n inputs = tf.placeholder(tf.int32, [None, None], name='input')\n targets = tf.placeholder(tf.int32, [None, None], name='targets')\n learning_rate = tf.placeholder(tf.float32, None, name='learning_rate')\n keep_prob = tf.placeholder(tf.float32, None, name='keep_prob')\n \n return inputs, targets, learning_rate, keep_prob\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_model_inputs(model_inputs)", "Process Decoding Input\nImplement process_decoding_input using TensorFlow to remove the last word id from each batch in target_data and concat the GO ID to the beginning of each batch.", "def process_decoding_input(target_data, target_vocab_to_int, batch_size):\n \"\"\"\n Preprocess target data for decoding\n :param target_data: Target Placeholder\n :param target_vocab_to_int: Dictionary to go from the target words to an id\n :param batch_size: Batch Size\n :return: Preprocessed target data\n \"\"\"\n ending = tf.strided_slice(target_data, [0, 0], [batch_size, -1], [1, 1])\n preprocessed_target_data = tf.concat([tf.fill([batch_size, 1], target_vocab_to_int['<GO>']), ending], 1)\n\n return preprocessed_target_data\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_process_decoding_input(process_decoding_input)", "Encoding\nImplement encoding_layer() to create an Encoder RNN layer using tf.nn.dynamic_rnn().", "def 
encoding_layer(rnn_inputs, rnn_size, num_layers, keep_prob):\n \"\"\"\n Create encoding layer\n :param rnn_inputs: Inputs for the RNN\n :param rnn_size: RNN Size\n :param num_layers: Number of layers\n :param keep_prob: Dropout keep probability\n :return: RNN state\n \"\"\"\n # Build a fresh cell per layer; reusing one cell object via [cell] * num_layers\n # shares a single cell across layers and breaks on later TF versions\n multi_cells = [tf.contrib.rnn.BasicLSTMCell(rnn_size) for _ in range(num_layers)]\n cell = tf.contrib.rnn.MultiRNNCell(multi_cells)\n output, rnn_state = tf.nn.dynamic_rnn(cell, tf.nn.dropout(rnn_inputs, keep_prob), dtype=tf.float32)\n \n return rnn_state\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_encoding_layer(encoding_layer)", "Decoding - Training\nCreate training logits using tf.contrib.seq2seq.simple_decoder_fn_train() and tf.contrib.seq2seq.dynamic_rnn_decoder(). Apply the output_fn to the tf.contrib.seq2seq.dynamic_rnn_decoder() outputs.", "def decoding_layer_train(encoder_state, dec_cell, dec_embed_input, sequence_length, decoding_scope,\n output_fn, keep_prob):\n \"\"\"\n Create a decoding layer for training\n :param encoder_state: Encoder State\n :param dec_cell: Decoder RNN Cell\n :param dec_embed_input: Decoder embedded input\n :param sequence_length: Sequence Length\n :param decoding_scope: TensorFlow Variable Scope for decoding\n :param output_fn: Function to apply the output layer\n :param keep_prob: Dropout keep probability\n :return: Train Logits\n \"\"\"\n train_decoder_fn = tf.contrib.seq2seq.simple_decoder_fn_train(encoder_state)\n train_pred, _, _ = tf.contrib.seq2seq.dynamic_rnn_decoder(dec_cell, train_decoder_fn, dec_embed_input, sequence_length, scope=decoding_scope)\n train_logits = output_fn(tf.nn.dropout(train_pred, keep_prob))\n \n return train_logits\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_decoding_layer_train(decoding_layer_train)", "Decoding - Inference\nCreate inference logits using tf.contrib.seq2seq.simple_decoder_fn_inference() and tf.contrib.seq2seq.dynamic_rnn_decoder().", 
"def decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id, end_of_sequence_id,\n maximum_length, vocab_size, decoding_scope, output_fn, keep_prob):\n \"\"\"\n Create a decoding layer for inference\n :param encoder_state: Encoder state\n :param dec_cell: Decoder RNN Cell\n :param dec_embeddings: Decoder embeddings\n :param start_of_sequence_id: GO ID\n :param end_of_sequence_id: EOS Id\n :param maximum_length: The maximum allowed time steps to decode\n :param vocab_size: Size of vocabulary\n :param decoding_scope: TensorFlow Variable Scope for decoding\n :param output_fn: Function to apply the output layer\n :param keep_prob: Dropout keep probability\n :return: Inference Logits\n \"\"\"\n inference_decoder_fn = tf.contrib.seq2seq.simple_decoder_fn_inference(output_fn, encoder_state, dec_embeddings, start_of_sequence_id, end_of_sequence_id, maximum_length - 1, vocab_size)\n inference_pred, _, _ = tf.contrib.seq2seq.dynamic_rnn_decoder(dec_cell, inference_decoder_fn, scope=decoding_scope)\n inference_logits = tf.nn.dropout(inference_pred, keep_prob)\n \n return inference_logits\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_decoding_layer_infer(decoding_layer_infer)", "Build the Decoding Layer\nImplement decoding_layer() to create a Decoder RNN layer.\n\nCreate RNN cell for decoding using rnn_size and num_layers.\nCreate the output function using a lambda to transform its input, logits, to class logits.\nUse your decoding_layer_train(encoder_state, dec_cell, dec_embed_input, sequence_length, decoding_scope, output_fn, keep_prob) function to get the training logits.\nUse your decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id, end_of_sequence_id, maximum_length, vocab_size, decoding_scope, output_fn, keep_prob) function to get the inference logits.\n\nNote: You'll need to use tf.variable_scope to share variables between training and inference.", "def 
decoding_layer(dec_embed_input, dec_embeddings, encoder_state, vocab_size, sequence_length, rnn_size,\n num_layers, target_vocab_to_int, keep_prob):\n \"\"\"\n Create decoding layer\n :param dec_embed_input: Decoder embedded input\n :param dec_embeddings: Decoder embeddings\n :param encoder_state: The encoded state\n :param vocab_size: Size of vocabulary\n :param sequence_length: Sequence Length\n :param rnn_size: RNN Size\n :param num_layers: Number of layers\n :param target_vocab_to_int: Dictionary to go from the target words to an id\n :param keep_prob: Dropout keep probability\n :return: Tuple of (Training Logits, Inference Logits)\n \"\"\"\n # refer to chc170's code\n with tf.variable_scope(\"decoding\") as decoding_scope:\n # one cell object per layer, as in the encoder\n dec_cell = tf.contrib.rnn.MultiRNNCell([tf.contrib.rnn.BasicLSTMCell(rnn_size) for _ in range(num_layers)])\n output_fn = lambda x: tf.contrib.layers.fully_connected(x, vocab_size, None, scope=decoding_scope)\n train_logits = decoding_layer_train(encoder_state, dec_cell, dec_embed_input, sequence_length, decoding_scope, output_fn, keep_prob)\n decoding_scope.reuse_variables()\n infer_logits = decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, target_vocab_to_int['<GO>'], target_vocab_to_int['<EOS>'], sequence_length-1, vocab_size, decoding_scope, output_fn, keep_prob)\n \n return train_logits, infer_logits\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_decoding_layer(decoding_layer)", "Build the Neural Network\nApply the functions you implemented above to:\n\nApply embedding to the input data for the encoder.\nEncode the input using your encoding_layer(rnn_inputs, rnn_size, num_layers, keep_prob).\nProcess target data using your process_decoding_input(target_data, target_vocab_to_int, batch_size) function.\nApply embedding to the target data for the decoder.\nDecode the encoded input using your decoding_layer(dec_embed_input, dec_embeddings, encoder_state, vocab_size, sequence_length, rnn_size, 
num_layers, target_vocab_to_int, keep_prob).", "def seq2seq_model(input_data, target_data, keep_prob, batch_size, sequence_length, source_vocab_size, target_vocab_size,\n enc_embedding_size, dec_embedding_size, rnn_size, num_layers, target_vocab_to_int):\n \"\"\"\n Build the Sequence-to-Sequence part of the neural network\n :param input_data: Input placeholder\n :param target_data: Target placeholder\n :param keep_prob: Dropout keep probability placeholder\n :param batch_size: Batch Size\n :param sequence_length: Sequence Length\n :param source_vocab_size: Source vocabulary size\n :param target_vocab_size: Target vocabulary size\n :param enc_embedding_size: Encoder embedding size\n :param dec_embedding_size: Decoder embedding size\n :param rnn_size: RNN Size\n :param num_layers: Number of layers\n :param target_vocab_to_int: Dictionary to go from the target words to an id\n :return: Tuple of (Training Logits, Inference Logits)\n \"\"\"\n enc_embed_input = tf.contrib.layers.embed_sequence(input_data, source_vocab_size, enc_embedding_size)\n enc_state = encoding_layer(enc_embed_input, rnn_size, num_layers, keep_prob)\n process_target_data = process_decoding_input(target_data, target_vocab_to_int, batch_size)\n dec_embeddings = tf.Variable(tf.random_uniform([target_vocab_size, dec_embedding_size]))\n dec_embed_input = tf.nn.embedding_lookup(dec_embeddings, process_target_data)\n \n train_logits, infer_logits = decoding_layer(\n dec_embed_input, dec_embeddings, enc_state, target_vocab_size, \n sequence_length, rnn_size, num_layers, target_vocab_to_int, keep_prob)\n \n return train_logits, infer_logits\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_seq2seq_model(seq2seq_model)", "Neural Network Training\nHyperparameters\nTune the following parameters:\n\nSet epochs to the number of epochs.\nSet batch_size to the batch size.\nSet rnn_size to the size of the RNNs.\nSet num_layers to the number of layers.\nSet 
encoding_embedding_size to the size of the embedding for the encoder.\nSet decoding_embedding_size to the size of the embedding for the decoder.\nSet learning_rate to the learning rate.\nSet keep_probability to the Dropout keep probability", "# Number of Epochs\nepochs = 10\n# Batch Size\nbatch_size = 256\n# RNN Size\nrnn_size = 512\n# Number of Layers\nnum_layers = 2\n# Embedding Size (# of words)\nencoding_embedding_size = 256\ndecoding_embedding_size = 256\n# Learning Rate\nlearning_rate = 0.001\n# Dropout Keep Probability\nkeep_probability = 0.75", "Build the Graph\nBuild the graph using the neural network you implemented.", "\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\nsave_path = 'checkpoints/dev'\n(source_int_text, target_int_text), (source_vocab_to_int, target_vocab_to_int), _ = helper.load_preprocess()\nmax_source_sentence_length = max([len(sentence) for sentence in source_int_text])\n\ntrain_graph = tf.Graph()\nwith train_graph.as_default():\n input_data, targets, lr, keep_prob = model_inputs()\n sequence_length = tf.placeholder_with_default(max_source_sentence_length, None, name='sequence_length')\n input_shape = tf.shape(input_data)\n \n train_logits, inference_logits = seq2seq_model(\n tf.reverse(input_data, [-1]), targets, keep_prob, batch_size, sequence_length, len(source_vocab_to_int), len(target_vocab_to_int),\n encoding_embedding_size, decoding_embedding_size, rnn_size, num_layers, target_vocab_to_int)\n\n tf.identity(inference_logits, 'logits')\n with tf.name_scope(\"optimization\"):\n # Loss function\n cost = tf.contrib.seq2seq.sequence_loss(\n train_logits,\n targets,\n tf.ones([input_shape[0], sequence_length]))\n\n # Optimizer\n optimizer = tf.train.AdamOptimizer(lr)\n\n # Gradient Clipping\n gradients = optimizer.compute_gradients(cost)\n capped_gradients = [(tf.clip_by_value(grad, -1., 1.), var) for grad, var in gradients if grad is not None]\n train_op = optimizer.apply_gradients(capped_gradients)", "Train\nTrain the neural network 
on the preprocessed data. If you have a hard time getting a good loss, check the forums to see if anyone is having the same problem.", "\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\nimport time\n\ndef get_accuracy(target, logits):\n \"\"\"\n Calculate accuracy\n \"\"\"\n max_seq = max(target.shape[1], logits.shape[1])\n if max_seq - target.shape[1]:\n target = np.pad(\n target,\n [(0,0),(0,max_seq - target.shape[1])],\n 'constant')\n if max_seq - logits.shape[1]:\n logits = np.pad(\n logits,\n [(0,0),(0,max_seq - logits.shape[1]), (0,0)],\n 'constant')\n\n return np.mean(np.equal(target, np.argmax(logits, 2)))\n\ntrain_source = source_int_text[batch_size:]\ntrain_target = target_int_text[batch_size:]\n\nvalid_source = helper.pad_sentence_batch(source_int_text[:batch_size])\nvalid_target = helper.pad_sentence_batch(target_int_text[:batch_size])\n\nwith tf.Session(graph=train_graph) as sess:\n sess.run(tf.global_variables_initializer())\n\n for epoch_i in range(epochs):\n for batch_i, (source_batch, target_batch) in enumerate(\n helper.batch_data(train_source, train_target, batch_size)):\n start_time = time.time()\n \n _, loss = sess.run(\n [train_op, cost],\n {input_data: source_batch,\n targets: target_batch,\n lr: learning_rate,\n sequence_length: target_batch.shape[1],\n keep_prob: keep_probability})\n \n batch_train_logits = sess.run(\n inference_logits,\n {input_data: source_batch, keep_prob: 1.0})\n batch_valid_logits = sess.run(\n inference_logits,\n {input_data: valid_source, keep_prob: 1.0})\n \n train_acc = get_accuracy(target_batch, batch_train_logits)\n valid_acc = get_accuracy(np.array(valid_target), batch_valid_logits)\n end_time = time.time()\n print('Epoch {:>3} Batch {:>4}/{} - Train Accuracy: {:>6.3f}, Validation Accuracy: {:>6.3f}, Loss: {:>6.3f}'\n .format(epoch_i, batch_i, len(source_int_text) // batch_size, train_acc, valid_acc, loss))\n\n # Save Model\n saver = tf.train.Saver()\n saver.save(sess, save_path)\n print('Model Trained and 
Saved')", "Save Parameters\nSave the batch_size and save_path parameters for inference.", "\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\n# Save parameters for checkpoint\nhelper.save_params(save_path)", "Checkpoint", "\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\nimport tensorflow as tf\nimport numpy as np\nimport helper\nimport problem_unittests as tests\n\n_, (source_vocab_to_int, target_vocab_to_int), (source_int_to_vocab, target_int_to_vocab) = helper.load_preprocess()\nload_path = helper.load_params()", "Sentence to Sequence\nTo feed a sentence into the model for translation, you first need to preprocess it. Implement the function sentence_to_seq() to preprocess new sentences.\n\nConvert the sentence to lowercase\nConvert words into ids using vocab_to_int\nConvert words not in the vocabulary to the <UNK> word id.", "def sentence_to_seq(sentence, vocab_to_int):\n \"\"\"\n Convert a sentence to a sequence of ids\n :param sentence: String\n :param vocab_to_int: Dictionary to go from the words to an id\n :return: List of word ids\n \"\"\"\n # use dict.get with a default: `.get(word) or default` would misfire for word id 0\n sentence_int = [vocab_to_int.get(word, vocab_to_int['<UNK>']) for word in sentence.lower().split()]\n return sentence_int\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_sentence_to_seq(sentence_to_seq)", "Translate\nThis will translate translate_sentence from English to French.", "translate_sentence = 'he saw a old yellow truck .'\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\ntranslate_sentence = sentence_to_seq(translate_sentence, source_vocab_to_int)\n\nloaded_graph = tf.Graph()\nwith tf.Session(graph=loaded_graph) as sess:\n # Load saved model\n loader = tf.train.import_meta_graph(load_path + '.meta')\n loader.restore(sess, load_path)\n\n input_data = loaded_graph.get_tensor_by_name('input:0')\n logits = loaded_graph.get_tensor_by_name('logits:0')\n keep_prob = loaded_graph.get_tensor_by_name('keep_prob:0')\n\n translate_logits = sess.run(logits, 
{input_data: [translate_sentence], keep_prob: 1.0})[0]\n\nprint('Input')\nprint(' Word Ids: {}'.format([i for i in translate_sentence]))\nprint(' English Words: {}'.format([source_int_to_vocab[i] for i in translate_sentence]))\n\nprint('\\nPrediction')\nprint(' Word Ids: {}'.format([i for i in np.argmax(translate_logits, 1)]))\nprint(' French Words: {}'.format([target_int_to_vocab[i] for i in np.argmax(translate_logits, 1)]))", "Imperfect Translation\nYou might notice that some sentences translate better than others. Since the dataset you're using only has a vocabulary of 227 English words out of the thousands that you use, you're only going to see good results when using these words. For this project, you don't need a perfect translation. However, if you want to create a better translation model, you'll need better data.\nYou can train on the WMT10 French-English corpus. This dataset has a larger vocabulary and is richer in the topics discussed. However, it will take you days to train, so make sure you have a GPU and that the neural network is performing well on the dataset we provided. Just make sure you play with the WMT10 corpus after you've submitted this project.\nSubmitting This Project\nWhen submitting this project, make sure to run all the cells before saving the notebook. Save the notebook file as \"dlnd_language_translation.ipynb\" and save it as an HTML file under \"File\" -> \"Download as\". Include the \"helper.py\" and \"problem_unittests.py\" files in your submission." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
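The decoder-input trick implemented in process_decoding_input above (drop the last word id of each target row, then prepend the <GO> id) can be sketched without TensorFlow. The following is an illustrative pure-Python analogue, not the graph op from the notebook, and the vocabulary ids are hypothetical:

```python
# Hypothetical special-token ids for illustration only.
GO, EOS = 1, 2

def process_decoding_input(target_batch, go_id):
    """Drop the final id of each row and prepend the <GO> id,
    mirroring the tf.strided_slice + tf.concat ops in the notebook."""
    return [[go_id] + row[:-1] for row in target_batch]

batch = [[4, 5, 6, EOS],
         [7, 8, 9, EOS]]
print(process_decoding_input(batch, GO))  # [[1, 4, 5, 6], [1, 7, 8, 9]]
```

Note that the decoder never sees the final <EOS> of a row; it is fed <GO> plus everything up to, but not including, the last id, which is exactly the shifted input that teacher forcing requires.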
ML4DS/ML4all
A1.Optimization/Optimization_professor.ipynb
mit
[ "Optimization\nAuthor: Jesús Cid-Sueiro\n Jerónimo Arenas-García\n\nVersion: 0.1 (2019/09/13)\n 0.2 (2019/10/02): Solutions added\n\nExercise: compute the minimum of a real-valued function\nThe goal of this exercise is to implement and test optimization algorithms for the minimization of a given function. Gradient descent and Newton's method will be explored.\nOur goal is to find the minimizer of the real-valued function\n$$\nf(w) = - w \\exp(-w)\n$$\nbut the whole code can easily be modified to try other alternative functions.\nYou will need to import some libraries (at least, numpy and matplotlib). Insert below all the imports needed along the whole notebook. Remember that numpy is usually abbreviated as np and matplotlib.pyplot is usually abbreviated as plt.", "# <SOL>\nimport numpy as np\nimport matplotlib.pyplot as plt\n# </SOL>", "Part 1: The function and its derivatives.\nQuestion 1.1: Implement the following three methods:\n\nMethod f: given $w$, it returns the value of function $f(w)$.\nMethod df: given $w$, it returns the derivative of $f$ at $w$\nMethod d2f: given $w$, it returns the second derivative of $f$ at $w$", "# Function f\n# <SOL>\ndef f(w):\n y = - w * np.exp(-w)\n return y\n# </SOL>\n\n# First derivative\n# <SOL>\ndef df(w):\n y = (w - 1) * np.exp(-w)\n return y\n# </SOL>\n\n# Second derivative\n# <SOL>\ndef d2f(w):\n y = (2 - w) * np.exp(-w)\n return y\n# </SOL>", "Part 2: Gradient descent.\nQuestion 2.1: Implement a method gd that, given w and a learning rate parameter rho, applies a single iteration of the gradient descent algorithm", "# <SOL>\ndef gd(w0, rho):\n y = w0 - rho * df(w0)\n return y\n# </SOL>", "Question 2.2: Apply gradient descent to optimize the given function. To do so, start with an initial value $w=0$ and iterate $20$ times. 
Save two lists:\n\nA list of successive values of $w_n$\nA list of successive values of the function $f(w_n)$.", "# <SOL>\nwn = []\nfwn = []\nniter = 20\nrho = 0.2\nw = 0\nwn.append(w)\nfwn.append(f(w))\n\nfor k in range(niter):\n w = gd(w,rho)\n wn.append(w)\n fwn.append(f(w))\n# </SOL>", "Question 2.3: Plot, in a single figure:\n\nThe given function, for values ranging from 0 to 20.\nThe sequence of points $(w_n, f(w_n))$.", "# <SOL>\nnpoints = 1000\nw_grid = np.linspace(0,20,npoints)\nplt.plot(w_grid, f(w_grid))\nplt.plot(wn,fwn,'r.')\nplt.show()\n# </SOL>", "You can check the effect of modifying the value of the learning rate.\nPart 3: Newton's method.\nQuestion 3.1: Implement a method newton that, given w and a learning rate parameter rho, applies a single iteration of Newton's method", "# <SOL>\ndef newton(w0, rho):\n y = w0 - rho * df(w0) / d2f(w0)\n return y\n# </SOL>", "Question 3.2: Apply Newton's method to optimize the given function. To do so, start with an initial value $w=0$ and iterate $20$ times. Save two lists:\n\nA list of successive values of $w_n$\nA list of successive values of the function $f(w_n)$.", "# <SOL>\nwn = []\nfwn = []\nniter = 20\nrho = 0.5\nw = 0\nwn.append(w)\nfwn.append(f(w))\n\nfor k in range(niter):\n w = newton(w,rho)\n wn.append(w)\n fwn.append(f(w))\n# </SOL>", "Question 3.3: Plot, in a single figure:\n\nThe given function, for values ranging from 0 to 20.\nThe sequence of points $(w_n, f(w_n))$.", "# <SOL>\nnpoints = 1000\nw_grid = np.linspace(0,20,npoints)\nplt.plot(w_grid, f(w_grid))\nplt.plot(wn,fwn,'r.')\nplt.show()\n# </SOL>", "You can check the effect of modifying the value of the learning rate.\nPart 4: Optimize other cost functions\nNow you are ready to explore these optimization algorithms with other, more sophisticated functions. Try them out." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
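The two update rules solved in this notebook can be checked end-to-end without plotting. This sketch mirrors the notebook's f, df, d2f, gd and newton (using math.exp instead of numpy, and rho = 1.0 for the Newton run, a choice made here for illustration): since f(w) = -w exp(-w) has its minimizer at w = 1, we can watch both methods approach it.

```python
import math

def f(w):   return -w * math.exp(-w)        # objective
def df(w):  return (w - 1) * math.exp(-w)   # first derivative
def d2f(w): return (2 - w) * math.exp(-w)   # second derivative

def gd(w0, rho):
    # one gradient-descent step
    return w0 - rho * df(w0)

def newton(w0, rho):
    # one (damped) Newton step
    return w0 - rho * df(w0) / d2f(w0)

w_gd = 0.0
for _ in range(20):
    w_gd = gd(w_gd, 0.2)

w_nt = 0.0
for _ in range(10):
    w_nt = newton(w_nt, 1.0)

# Newton is at the minimizer to high precision; gradient descent is still approaching it
print(w_gd, w_nt)
```

Near the minimum the Newton error roughly squares at each step, here $w_{n+1} - 1 = -(w_n - 1)^2/(2 - w_n)$, which is why ten iterations are more than enough, while gradient descent with rho = 0.2 only shrinks the error by a constant factor per step.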
psychemedia/parlihacks
notebooks/Apache Drill - JSON Written Questions.ipynb
mit
[ "Using Apache Drill to Query Parliament Written Questions Data\nA bit of a play to try to get to grips with Apache Drill, querying over JSON and CSV for Parliament data that doesn't quite link up as it should do...", "import pandas as pd\nfrom pydrill.client import PyDrill\n\n%matplotlib inline\n\n#Get a connection to the Apache Drill server\ndrill = PyDrill(host='localhost', port=8047)", "Download Written Questions Data for a Session\nThis is a faff, because there is no bulk download...\nIt also makes more sense to use requests-cache along the way in case things break midway through the download, so you don't then have to reload everything again...", "#Get Written questions data - may take some time!\nstub='http://lda.data.parliament.uk'.strip('/')\n\n#We're going to have to call the API somehow\nimport requests\n\n\n##To make things more efficient if we do this again, cache requests\n#!pip3 install requests_cache\n#import requests_cache\n#requests_cache.install_cache('parlidata_cache', backend='sqlite')\n\n\n#Get data from URL\ndef getURL(url):\n print(url)\n r=requests.get(url)\n print(r.status_code)\n return r\n\n#Download data - if there is more, get it\ndef loader(url):\n items=[]\n done=False\n r=getURL(url)\n while not done:\n items=items+r.json()['result']['items']\n if 'next' in r.json()['result']:\n r=getURL(r.json()['result']['next']+'&_pageSize=500')\n else: done=True\n return items\n\n\nurl='{}/{}.json?session={}'.format(stub,'commonswrittenquestions','2015/16')\nitems=loader(url)\n\n\n#Save the data\nimport json\nwith open('writtenQuestions.json', 'w') as outfile:\n json.dump(items, outfile)", "We should now have all the data in a single JSON file (writtenQuestions.json).\n(Actually, if we had downloaded the data into the same directory as separately and uniquely named JSON files, Apache Drill should be able to query over them...) 
\nLet's see if we can query it...", "#What does the whole table look like?\nq=''' SELECT * from dfs.`/Users/ajh59/Dropbox/parlidata/notebooks/writtenQuestions.json` LIMIT 3'''\ndrill.query(q).to_dataframe()\n\n#Try to select a column\nq='''\nSELECT j.tablingMember._about AS memberURL \nFROM dfs.`/Users/ajh59/Dropbox/parlidata/notebooks/writtenQuestions.json` j LIMIT 3\n'''\ndrill.query(q).to_dataframe()\n\n\n#Try to select an item from a list in a column\nq='''\nSELECT tablingMemberPrinted[0]._value AS Name \nFROM dfs.`/Users/ajh59/Dropbox/parlidata/notebooks/writtenQuestions.json` LIMIT 3\n'''\ndrill.query(q).to_dataframe()\n\n\n#Get a dataframe of all the member URLs - so we can get the data for each member from the Parliament data API\nq='''\nSELECT DISTINCT j.tablingMember._about AS memberURL \nFROM dfs.`/Users/ajh59/Dropbox/parlidata/notebooks/writtenQuestions.json` j\n'''\nmemberIds = drill.query(q).to_dataframe()\nmemberIds.head()\n\n\n#The URLs in the written question data don't actually resolve - we need to tweak them\n#Generate a set of members who have tabled questions that have been answered\n#Note that the identifier Linked Data URL doesn't link... 
so patch it...\nmembers= ['{}.json'.format(i.replace('http://','http://lda.')) for i in memberIds['memberURL']]\n\n#Preview the links\nmembers[:3]\n\n#Download the data files into a data directory\n!mkdir -p data/members\nfor member in members:\n !wget -q -P data/members {member}\n\n!ls data/members\n\n#Preview one of the files\n!head data/members/1474.json", "Apache Drill can query over multiple files in the same directory, so let's try that...\nQuery over all the downloaded member JSON files to create a dataframe to pull out the gender for each member ID URL.", "q=''' SELECT j.`result`.primaryTopic.gender._value AS gender,\nj.`result`._about AS url\nFROM dfs.`/Users/ajh59/Dropbox/parlidata/notebooks/data/members` j'''\nmembersdf=drill.query(q).to_dataframe()\nmembersdf.head()", "Now we need to remap those URLs onto URLs of the form used in the Written Questions data.", "#Let's reverse the URL to the same form as in the written questions - then we can use this for a JOIN\nmembersdf['fixedurl']=membersdf['url'].str.replace('http://lda.','http://').str.replace('.json','')\n#Save the data as a CSV file\nmembersdf.to_csv('data/members.csv',index=False)\n!head data/members.csv", "Querying Over JOINed JSON and CSV Files\nLet's see if we can now run a query over the joined monolithic written questions JSON data file and the members CSV data file we created.", "#Now find the gender of a question asker - join a query over the monolithic JSON file with the CSV file\nq=''' SELECT DISTINCT j.tablingMember._about AS memberURL, m.gender\nFROM dfs.`/Users/ajh59/Dropbox/parlidata/notebooks/writtenQuestions.json` j \nJOIN dfs.`/Users/ajh59/Dropbox/parlidata/notebooks/data/members.csv` m\nON j.tablingMember._about = m.fixedurl\nLIMIT 3'''\ndrill.query(q).to_dataframe()", "JOINing Across A Monolithic JSON file and a Directory of Files with Regularly Mismatched Keys\nThat's a clunky route round though... 
Can we actually do a JOIN between the monolithic written questions JSON file and the separate members JSON files, hacking the member ID URL into the correct form as part of the ON condition?", "#Let's see if we can modify the URL in the separate JSON files so we can join with the monolithic file\nq=''' SELECT DISTINCT j.tablingMember._about AS memberURL,\nm.`result`.primaryTopic.gender._value AS gender,\nm.`result`._about AS url\nFROM dfs.`{path}/writtenQuestions.json` j \nJOIN dfs.`{path}/data/members` m\nON j.tablingMember._about = REGEXP_REPLACE(REGEXP_REPLACE(m.`result`._about,'http://lda.','http://'),'\.json','')\nLIMIT 3'''.format(path='/Users/ajh59/Dropbox/parlidata/notebooks')\ndrill.query(q).to_dataframe()\n", "Now let's do some counting... in the session for which we downloaded the data, how many written questions were tabled by gender, in total?", "q=''' SELECT COUNT(*) AS Number,\nm.`result`.primaryTopic.gender._value AS gender\nFROM dfs.`{path}/writtenQuestions.json` j \nJOIN dfs.`{path}/data/members` m\nON j.tablingMember._about = REGEXP_REPLACE(REGEXP_REPLACE(m.`result`._about,'http://lda.','http://'),'\.json','')\nGROUP BY m.`result`.primaryTopic.gender._value'''.format(path='/Users/ajh59/Dropbox/parlidata/notebooks')\ndrill.query(q).to_dataframe()\n", "How many per person, by gender?", "q=''' SELECT COUNT(*) AS Number, j.tablingMemberPrinted[0]._value AS Name,\nm.`result`.primaryTopic.gender._value AS gender\nFROM dfs.`{path}/writtenQuestions.json` j \nJOIN dfs.`{path}/data/members` m\nON j.tablingMember._about = REGEXP_REPLACE(REGEXP_REPLACE(m.`result`._about,'http://lda.','http://'),'\.json','')\nGROUP BY m.`result`.primaryTopic.gender._value, j.tablingMemberPrinted[0]._value \n'''.format(path='/Users/ajh59/Dropbox/parlidata/notebooks')\n\ndrill.query(q).to_dataframe().head()", "Can we do the average too?", "q='''\nSELECT AVG(Number) AS average, gender\nFROM (SELECT COUNT(*) AS Number, j.tablingMemberPrinted[0]._value AS Name,\n 
m.`result`.primaryTopic.gender._value AS gender\n FROM dfs.`{path}/writtenQuestions.json` j \n JOIN dfs.`{path}/data/members` m\n ON j.tablingMember._about = REGEXP_REPLACE(REGEXP_REPLACE(m.`result`._about,'http://lda.','http://'),'\\.json','')\n GROUP BY m.`result`.primaryTopic.gender._value, j.tablingMemberPrinted[0]._value )\nGROUP BY gender\n'''.format(path='/Users/ajh59/Dropbox/parlidata/notebooks')\n\ndrill.query(q).to_dataframe()", "How about by party?", "q='''\nSELECT AVG(Number) AS average, party\nFROM (SELECT COUNT(*) AS Number, j.tablingMemberPrinted[0]._value AS Name,\n m.`result`.primaryTopic.party._value AS party\n FROM dfs.`{path}/writtenQuestions.json` j \n JOIN dfs.`{path}/data/members` m\n ON j.tablingMember._about = REGEXP_REPLACE(REGEXP_REPLACE(m.`result`._about,'http://lda.','http://'),'\\.json','')\n GROUP BY m.`result`.primaryTopic.party._value, j.tablingMemberPrinted[0]._value )\nGROUP BY party\n'''.format(path='/Users/ajh59/Dropbox/parlidata/notebooks')\n\ndq=drill.query(q).to_dataframe()\ndq['average']=dq['average'].astype(float)\ndq\n\ndq.set_index('party').sort_values(by='average').plot(kind=\"barh\");", "Party and gender?", "q='''\nSELECT AVG(Number) AS average, party, gender\nFROM (SELECT COUNT(*) AS Number, j.tablingMemberPrinted[0]._value AS Name,\n m.`result`.primaryTopic.party._value AS party,\n m.`result`.primaryTopic.gender._value AS gender\n FROM dfs.`{path}/writtenQuestions.json` j \n JOIN dfs.`{path}/data/members` m\n ON j.tablingMember._about = REGEXP_REPLACE(REGEXP_REPLACE(m.`result`._about,'http://lda.','http://'),'\\.json','')\n GROUP BY m.`result`.primaryTopic.party._value, m.`result`.primaryTopic.gender._value, j.tablingMemberPrinted[0]._value )\nGROUP BY party, 
gender\n'''.format(path='/Users/ajh59/Dropbox/parlidata/notebooks')\n\ndq=drill.query(q).to_dataframe()\ndq['average']=dq['average'].astype(float)\ndq\n\ndq.set_index(['party','gender']).sort_values(by='average').plot(kind=\"barh\");\n\ndq.sort_values(by=['gender','average']).set_index(['party','gender']).plot(kind=\"barh\");\n\ndqp=dq.pivot(index='party',columns='gender')\ndqp.columns = dqp.columns.get_level_values(1)\ndqp\n\ndqp.plot(kind='barh');" ]
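The nested AVG-over-COUNT pattern in the queries above can be cross-checked in pandas once the per-member counts are in a dataframe. A minimal sketch with made-up counts — the column names mirror the SQL aliases, and the numbers are illustrative, not real data:

```python
import pandas as pd

# Hypothetical per-member question counts, standing in for the output
# of the inner SELECT COUNT(*) ... GROUP BY gender, Name query
counts = pd.DataFrame({
    "Name": ["A", "B", "C", "D"],
    "gender": ["Female", "Female", "Male", "Male"],
    "Number": [10, 30, 5, 15],
})

# Equivalent of the outer SELECT AVG(Number) ... GROUP BY gender
avg_by_gender = counts.groupby("gender")["Number"].mean()
print(avg_by_gender)
```

Running the same aggregation both in Drill and in pandas on the joined data is a quick way to catch JOIN-key mistakes like the URL mismatch handled above.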
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
thiagoqd/queirozdias-deep-learning
sentiment-rnn/Sentiment RNN Solution.ipynb
mit
[ "Sentiment Analysis with an RNN\nIn this notebook, you'll implement a recurrent neural network that performs sentiment analysis. Using an RNN rather than a feedforward network is more accurate since we can include information about the sequence of words. Here we'll use a dataset of movie reviews, accompanied by labels.\nThe architecture for this network is shown below.\n<img src=\"assets/network_diagram.png\" width=400px>\nHere, we'll pass in words to an embedding layer. We need an embedding layer because we have tens of thousands of words, so we'll need a more efficient representation for our input data than one-hot encoded vectors. You should have seen this before from the word2vec lesson. You can actually train up an embedding with word2vec and use it here. But it's good enough to just have an embedding layer and let the network learn the embedding table on its own.\nFrom the embedding layer, the new representations will be passed to LSTM cells. These will add recurrent connections to the network so we can include information about the sequence of words in the data. Finally, the LSTM cells will go to a sigmoid output layer here. We're using the sigmoid because we're trying to predict if this text has positive or negative sentiment. The output layer will just be a single unit then, with a sigmoid activation function.\nWe don't care about the sigmoid outputs except for the very last one; we can ignore the rest. We'll calculate the cost from the output of the last step and the training label.", "import numpy as np\nimport tensorflow as tf\n\nwith open('../sentiment_network/reviews.txt', 'r') as f:\n reviews = f.read()\nwith open('../sentiment_network/labels.txt', 'r') as f:\n labels = f.read()\n\nreviews[:2000]", "Data preprocessing\nThe first step when building a neural network model is getting your data into the proper form to feed into the network. Since we're using embedding layers, we'll need to encode each word with an integer. 
We'll also want to clean it up a bit.\nYou can see an example of the reviews data above. We'll want to get rid of those periods. Also, you might notice that the reviews are delimited with newlines \\n. To deal with those, I'm going to split the text into each review using \\n as the delimiter. Then I can combine all the reviews back together into one big string.\nFirst, let's remove all punctuation. Then get all the text without the newlines and split it into individual words.", "from string import punctuation\nall_text = ''.join([c for c in reviews if c not in punctuation])\nreviews = all_text.split('\\n')\n\nall_text = ' '.join(reviews)\nwords = all_text.split()\n\nall_text[:2000]\n\nwords[:100]", "Encoding the words\nThe embedding lookup requires that we pass in integers to our network. The easiest way to do this is to create dictionaries that map the words in the vocabulary to integers. Then we can convert each of our reviews into integers so they can be passed into the network.\n\nExercise: Now you're going to encode the words with integers. Build a dictionary that maps words to integers. Later we're going to pad our input vectors with zeros, so make sure the integers start at 1, not 0.\nAlso, convert the reviews to integers and store the reviews in a new list called reviews_ints.", "from collections import Counter\ncounts = Counter(words)\nvocab = sorted(counts, key=counts.get, reverse=True)\nvocab_to_int = {word: ii for ii, word in enumerate(vocab, 1)}\n\nreviews_ints = []\nfor each in reviews:\n reviews_ints.append([vocab_to_int[word] for word in each.split()])", "Encoding the labels\nOur labels are \"positive\" or \"negative\". 
To use these labels in our network, we need to convert them to 0 and 1.\n\nExercise: Convert labels from positive and negative to 1 and 0, respectively.", "labels = labels.split('\\n')\nlabels = np.array([1 if each == 'positive' else 0 for each in labels])\n\nreview_lens = Counter([len(x) for x in reviews_ints])\nprint(\"Zero-length reviews: {}\".format(review_lens[0]))\nprint(\"Maximum review length: {}\".format(max(review_lens)))", "Okay, a couple issues here. We seem to have one review with zero length. And, the maximum review length is way too many steps for our RNN. Let's truncate to 200 steps. For reviews shorter than 200, we'll pad with 0s. For reviews longer than 200, we can truncate them to the first 200 words.\n\nExercise: First, remove the review with zero length from the reviews_ints list.", "non_zero_idx = [ii for ii, review in enumerate(reviews_ints) if len(review) != 0]\nlen(non_zero_idx)\n\nreviews_ints[-1]", "Turns out it's the final review that has zero length. But that might not always be the case, so let's make it more general.", "reviews_ints = [reviews_ints[ii] for ii in non_zero_idx]\nlabels = np.array([labels[ii] for ii in non_zero_idx])", "Exercise: Now, create an array features that contains the data we'll pass to the network. The data should come from reviews_ints, since we want to feed integers to the network. Each row should be 200 elements long. For reviews shorter than 200 words, left pad with 0s. That is, if the review is ['best', 'movie', 'ever'], [117, 18, 128] as integers, the row will look like [0, 0, 0, ..., 0, 117, 18, 128]. For reviews longer than 200, use only the first 200 words as the feature vector.\n\nThis isn't trivial and there are a bunch of ways to do this. 
But, if you're going to be building your own deep learning networks, you're going to have to get used to preparing your data.", "seq_len = 200\nfeatures = np.zeros((len(reviews_ints), seq_len), dtype=int)\nfor i, row in enumerate(reviews_ints):\n features[i, -len(row):] = np.array(row)[:seq_len]\n\nfeatures[:10,:100]", "Training, Validation, Test\nWith our data in nice shape, we'll split it into training, validation, and test sets.\n\nExercise: Create the training, validation, and test sets here. You'll need to create sets for the features and the labels, train_x and train_y for example. Define a split fraction, split_frac as the fraction of data to keep in the training set. Usually this is set to 0.8 or 0.9. The rest of the data will be split in half to create the validation and testing data.", "split_frac = 0.8\nsplit_idx = int(len(features)*0.8)\ntrain_x, val_x = features[:split_idx], features[split_idx:]\ntrain_y, val_y = labels[:split_idx], labels[split_idx:]\n\ntest_idx = int(len(val_x)*0.5)\nval_x, test_x = val_x[:test_idx], val_x[test_idx:]\nval_y, test_y = val_y[:test_idx], val_y[test_idx:]\n\nprint(\"\\t\\t\\tFeature Shapes:\")\nprint(\"Train set: \\t\\t{}\".format(train_x.shape), \n \"\\nValidation set: \\t{}\".format(val_x.shape),\n \"\\nTest set: \\t\\t{}\".format(test_x.shape))", "With train, validation, and text fractions of 0.8, 0.1, 0.1, the final shapes should look like:\nFeature Shapes:\nTrain set: (20000, 200) \nValidation set: (2500, 200) \nTest set: (2500, 200)\nBuild the graph\nHere, we'll build the graph. First up, defining the hyperparameters.\n\nlstm_size: Number of units in the hidden layers in the LSTM cells. Usually larger is better performance wise. Common values are 128, 256, 512, etc.\nlstm_layers: Number of LSTM layers in the network. I'd start with 1, then add more if I'm underfitting.\nbatch_size: The number of reviews to feed the network in one training pass. 
Typically this should be set as high as you can go without running out of memory.\nlearning_rate: Learning rate", "lstm_size = 256\nlstm_layers = 1\nbatch_size = 500\nlearning_rate = 0.001", "For the network itself, we'll be passing in our 200 element long review vectors. Each batch will be batch_size vectors. We'll also be using dropout on the LSTM layer, so we'll make a placeholder for the keep probability.\n\nExercise: Create the inputs_, labels_, and drop out keep_prob placeholders using tf.placeholder. labels_ needs to be two-dimensional to work with some functions later. Since keep_prob is a scalar (a 0-dimensional tensor), you shouldn't provide a size to tf.placeholder.", "n_words = len(vocab_to_int)\n\n# Create the graph object\ngraph = tf.Graph()\n# Add nodes to the graph\nwith graph.as_default():\n inputs_ = tf.placeholder(tf.int32, [None, None], name='inputs')\n labels_ = tf.placeholder(tf.int32, [None, None], name='labels')\n keep_prob = tf.placeholder(tf.float32, name='keep_prob')", "Embedding\nNow we'll add an embedding layer. We need to do this because there are 74000 words in our vocabulary. It is massively inefficient to one-hot encode our classes here. You should remember dealing with this problem from the word2vec lesson. Instead of one-hot encoding, we can have an embedding layer and use that layer as a lookup table. You could train an embedding layer using word2vec, then load it here. But, it's fine to just make a new layer and let the network learn the weights.\n\nExercise: Create the embedding lookup matrix as a tf.Variable. Use that embedding matrix to get the embedded vectors to pass to the LSTM cell with tf.nn.embedding_lookup. This function takes the embedding matrix and an input tensor, such as the review vectors. Then, it'll return another tensor with the embedded vectors. 
So, if the embedding layer has 200 units, the function will return a tensor with size [batch_size, 200].", "# Size of the embedding vectors (number of units in the embedding layer)\nembed_size = 300 \n\nwith graph.as_default():\n embedding = tf.Variable(tf.random_uniform((n_words, embed_size), -1, 1))\n embed = tf.nn.embedding_lookup(embedding, inputs_)", "LSTM cell\n<img src=\"assets/network_diagram.png\" width=400px>\nNext, we'll create our LSTM cells to use in the recurrent network (TensorFlow documentation). Here we are just defining what the cells look like. This isn't actually building the graph, just defining the type of cells we want in our graph.\nTo create a basic LSTM cell for the graph, you'll want to use tf.contrib.rnn.BasicLSTMCell. Looking at the function documentation:\ntf.contrib.rnn.BasicLSTMCell(num_units, forget_bias=1.0, input_size=None, state_is_tuple=True, activation=&lt;function tanh at 0x109f1ef28&gt;)\nyou can see it takes a parameter called num_units, the number of units in the cell, called lstm_size in this code. So then, you can write something like \nlstm = tf.contrib.rnn.BasicLSTMCell(num_units)\nto create an LSTM cell with num_units. Next, you can add dropout to the cell with tf.contrib.rnn.DropoutWrapper. This just wraps the cell in another cell, but with dropout added to the inputs and/or outputs. It's a really convenient way to make your network better with almost no effort! So you'd do something like\ndrop = tf.contrib.rnn.DropoutWrapper(cell, output_keep_prob=keep_prob)\nMost of the time, your network will have better performance with more layers. That's sort of the magic of deep learning, adding more layers allows the network to learn really complex relationships. Again, there is a simple way to create multiple layers of LSTM cells with tf.contrib.rnn.MultiRNNCell:\ncell = tf.contrib.rnn.MultiRNNCell([drop] * lstm_layers)\nHere, [drop] * lstm_layers creates a list of cells (drop) that is lstm_layers long. 
The MultiRNNCell wrapper builds this into multiple layers of RNN cells, one for each cell in the list.\nSo the final cell you're using in the network is actually multiple (or just one) LSTM cells with dropout. But it all works the same from an architectural viewpoint, just a more complicated graph in the cell.\n\nExercise: Below, use tf.contrib.rnn.BasicLSTMCell to create an LSTM cell. Then, add dropout to it with tf.contrib.rnn.DropoutWrapper. Finally, create multiple LSTM layers with tf.contrib.rnn.MultiRNNCell.\n\nHere is a tutorial on building RNNs that will help you out.", "with graph.as_default():\n # Your basic LSTM cell\n lstm = tf.contrib.rnn.BasicLSTMCell(lstm_size)\n \n # Add dropout to the cell\n drop = tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=keep_prob)\n \n # Stack up multiple LSTM layers, for deep learning\n cell = tf.contrib.rnn.MultiRNNCell([drop] * lstm_layers)\n \n # Getting an initial state of all zeros\n initial_state = cell.zero_state(batch_size, tf.float32)", "RNN forward pass\n<img src=\"assets/network_diagram.png\" width=400px>\nNow we need to actually run the data through the RNN nodes. You can use tf.nn.dynamic_rnn to do this. You'd pass in the RNN cell you created (our multiple layered LSTM cell for instance), and the inputs to the network.\noutputs, final_state = tf.nn.dynamic_rnn(cell, inputs, initial_state=initial_state)\nAbove I created an initial state, initial_state, to pass to the RNN. This is the cell state that is passed between the hidden layers in successive time steps. tf.nn.dynamic_rnn takes care of most of the work for us. We pass in our cell and the input to the cell, then it does the unrolling and everything else for us. It returns outputs for each time step and the final_state of the hidden layer.\n\nExercise: Use tf.nn.dynamic_rnn to add the forward pass through the RNN. 
Remember that we're actually passing in vectors from the embedding layer, embed.", "with graph.as_default():\n outputs, final_state = tf.nn.dynamic_rnn(cell, embed,\n initial_state=initial_state)", "Output\nWe only care about the final output, we'll be using that as our sentiment prediction. So we need to grab the last output with outputs[:, -1], then calculate the cost from that and labels_.", "with graph.as_default():\n predictions = tf.contrib.layers.fully_connected(outputs[:, -1], 1, activation_fn=tf.sigmoid)\n cost = tf.losses.mean_squared_error(labels_, predictions)\n \n optimizer = tf.train.AdamOptimizer(learning_rate).minimize(cost)", "Validation accuracy\nHere we can add a few nodes to calculate the accuracy which we'll use in the validation pass.", "with graph.as_default():\n correct_pred = tf.equal(tf.cast(tf.round(predictions), tf.int32), labels_)\n accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32))", "Batching\nThis is a simple function for returning batches from our data. First it removes data such that we only have full batches. Then it iterates through the x and y arrays and returns slices out of those arrays with size [batch_size].", "def get_batches(x, y, batch_size=100):\n \n n_batches = len(x)//batch_size\n x, y = x[:n_batches*batch_size], y[:n_batches*batch_size]\n for ii in range(0, len(x), batch_size):\n yield x[ii:ii+batch_size], y[ii:ii+batch_size]", "Training\nBelow is the typical training code. If you want to do this yourself, feel free to delete all this code and implement it yourself. 
Before you run this, make sure the checkpoints directory exists.", "epochs = 10\n\nwith graph.as_default():\n saver = tf.train.Saver()\n\nwith tf.Session(graph=graph) as sess:\n sess.run(tf.global_variables_initializer())\n iteration = 1\n for e in range(epochs):\n state = sess.run(initial_state)\n \n for ii, (x, y) in enumerate(get_batches(train_x, train_y, batch_size), 1):\n feed = {inputs_: x,\n labels_: y[:, None],\n keep_prob: 0.5,\n initial_state: state}\n loss, state, _ = sess.run([cost, final_state, optimizer], feed_dict=feed)\n \n if iteration%5==0:\n print(\"Epoch: {}/{}\".format(e, epochs),\n \"Iteration: {}\".format(iteration),\n \"Train loss: {:.3f}\".format(loss))\n\n if iteration%25==0:\n val_acc = []\n val_state = sess.run(cell.zero_state(batch_size, tf.float32))\n for x, y in get_batches(val_x, val_y, batch_size):\n feed = {inputs_: x,\n labels_: y[:, None],\n keep_prob: 1,\n initial_state: val_state}\n batch_acc, val_state = sess.run([accuracy, final_state], feed_dict=feed)\n val_acc.append(batch_acc)\n print(\"Val acc: {:.3f}\".format(np.mean(val_acc)))\n iteration +=1\n saver.save(sess, \"checkpoints/sentiment.ckpt\")", "Testing", "test_acc = []\nwith tf.Session(graph=graph) as sess:\n saver.restore(sess, tf.train.latest_checkpoint('checkpoints'))\n test_state = sess.run(cell.zero_state(batch_size, tf.float32))\n for ii, (x, y) in enumerate(get_batches(test_x, test_y, batch_size), 1):\n feed = {inputs_: x,\n labels_: y[:, None],\n keep_prob: 1,\n initial_state: test_state}\n batch_acc, test_state = sess.run([accuracy, final_state], feed_dict=feed)\n test_acc.append(batch_acc)\n print(\"Test accuracy: {:.3f}\".format(np.mean(test_acc)))" ]
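The left-zero-padding scheme used for the review features in this notebook can be isolated into a small helper. A sketch mirroring the feature-building loop above (the function name is mine, not from the notebook); it assumes zero-length sequences have already been removed, as done earlier:

```python
import numpy as np

def left_pad(sequences, seq_len=200):
    """Left-pad integer sequences with zeros and truncate to seq_len.

    Mirrors the feature-building loop above; assumes every sequence
    is non-empty (zero-length reviews were dropped earlier).
    """
    features = np.zeros((len(sequences), seq_len), dtype=int)
    for i, row in enumerate(sequences):
        # The negative index starts len(row) from the end, so short rows
        # keep their zeros on the left; long rows are truncated to seq_len.
        features[i, -len(row):] = np.array(row)[:seq_len]
    return features

print(left_pad([[117, 18, 128]], seq_len=5).tolist())         # [[0, 0, 117, 18, 128]]
print(left_pad([[1, 2, 3, 4, 5, 6, 7]], seq_len=5).tolist())  # [[1, 2, 3, 4, 5]]
```

Note the second case: for a row longer than seq_len, the out-of-range negative start index clamps to 0, so the assignment fills the whole row with the first seq_len ids.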
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
ledeprogram/algorithms
class7/donow/radhikapc_Class7_DoNow.ipynb
gpl-3.0
[ "Apply logistic regression to categorize whether a county had high mortality rate due to contamination\n1. Import the necessary packages to read in the data, plot, and create a logistic regression model", "import pandas as pd\n%matplotlib inline\nimport numpy as np\nfrom sklearn.linear_model import LogisticRegression", "2. Read in the hanford.csv file in the data/ folder", "df = pd.read_csv(\"hanford.csv\")\ndf.head()", "<img src=\"../../images/hanford_variables.png\"></img>\n3. Calculate the basic descriptive statistics on the data", "df.describe()\n\ndf.median()\n\nrang= df['Mortality'].max() - df['Mortality'].min()\nrang\n\niqr_m = df['Mortality'].quantile(q=0.75)- df['Mortality'].quantile(q=0.25)\niqr_m\n\niqr_e = df['Exposure'].quantile(q=0.75)- df['Exposure'].quantile(q=0.25)\niqr_e\n\nUAL_m= (iqr_m*1.5) + df['Mortality'].quantile(q=0.75)\nUAL_m\n\nUAL_e= (iqr_e*1.5) + df['Exposure'].quantile(q=0.75)\nUAL_e\n\nLAL_m= df['Mortality'].quantile(q=0.25) - (iqr_m*1.5) \nLAL_m\n\nLAL_e= df['Exposure'].quantile(q=0.25) - (iqr_e*1.5) \nLAL_e\n\nlen(df[df['Mortality']> UAL_m]) \n\nlen(df[df['Exposure']> UAL_e]) \n\nlen(df[df['Mortality']< LAL_m]) \n\nlen(df[df['Mortality'] > UAL_m])", "4. Find a reasonable threshold to say exposure is high and recode the data\n5. Create a logistic regression model", "lm = LogisticRegression()\n\ndata = np.asarray(df[['Mortality','Exposure']])\nx = data[:,1:]\ny = data[:,0]\n\ndata\n\nx\n\n\ny\n\nlm.fit(x,y)\n\nlm.coef_\n\nlm.score(x,y)\n\nslope = lm.coef_[0]\n\nintercept = lm.intercept_", "6. Predict whether the mortality rate (Cancer per 100,000 man years) will be high at an exposure level of 50", "lm.predict([[50]])" ]
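Step 4 of the notebook asks for a threshold recode before fitting, but the model above is fit on the raw Mortality values. A sketch of the intended recode-then-classify flow, using toy exposure/mortality pairs and the median as a stand-in threshold — both the numbers and the threshold choice are assumptions, not values from hanford.csv:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy stand-ins for the Exposure and Mortality columns (not real data)
exposure = np.array([[1.0], [2.5], [3.0], [6.4], [8.3], [11.6]])
mortality = np.array([110, 130, 150, 178, 210, 250])

# Recode mortality into a binary high/low label, using the median
# as a hypothetical "high mortality" threshold
high = (mortality > np.median(mortality)).astype(int)

# Fit the classifier on the recoded label, not the raw rate
lm = LogisticRegression()
lm.fit(exposure, high)

# sklearn expects a 2-D array at prediction time, hence [[50.0]]
print(lm.predict([[50.0]]))
```

With a binary target the fitted coefficients are interpretable as log-odds of the "high mortality" class per unit of exposure, which is what the prediction step in the notebook is after.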
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
gururajl/deep-learning
seq2seq/sequence_to_sequence_implementation.ipynb
mit
[ "Character Sequence to Sequence\nIn this notebook, we'll build a model that takes in a sequence of letters, and outputs a sorted version of that sequence. We'll do that using what we've learned so far about Sequence to Sequence models. This notebook was updated to work with TensorFlow 1.1 and builds on the work of Dave Currie. Check out Dave's post Text Summarization with Amazon Reviews.\n<img src=\"images/sequence-to-sequence.jpg\"/>\nDataset\nThe dataset lives in the /data/ folder. At the moment, it is made up of the following files:\n * letters_source.txt: The list of input letter sequences. Each sequence is its own line. \n * letters_target.txt: The list of target sequences we'll use in the training process. Each sequence here is a response to the input sequence in letters_source.txt with the same line number.", "import numpy as np\nimport time\n\nimport helper\n\nsource_path = 'data/letters_source.txt'\ntarget_path = 'data/letters_target.txt'\n\nsource_sentences = helper.load_data(source_path)\ntarget_sentences = helper.load_data(target_path)", "Let's start by examining the current state of the dataset. source_sentences contains the entire input sequence file as text delimited by newline symbols.", "source_sentences[:50].split('\\n')\n\nsource_sentences[0]", "target_sentences contains the entire output sequence file as text delimited by newline symbols. Each line corresponds to the line from source_sentences. 
target_sentences contains the sorted characters of the line.", "target_sentences[-1]\n\ntarget_sentences[:50].split('\\n')", "Preprocess\nTo do anything useful with it, we'll need to turn each string into a list of characters: \n<img src=\"images/source_and_target_arrays.png\"/>\nThen convert the characters to their int values as declared in our vocabulary:", "def extract_character_vocab(data):\n special_words = ['<PAD>', '<UNK>', '<GO>', '<EOS>']\n\n set_words = set([character for line in data.split('\\n') for character in line])\n int_to_vocab = {word_i: word for word_i, word in enumerate(special_words + list(set_words))}\n vocab_to_int = {word: word_i for word_i, word in int_to_vocab.items()}\n\n return int_to_vocab, vocab_to_int\n\n# Build int2letter and letter2int dicts\nsource_int_to_letter, source_letter_to_int = extract_character_vocab(source_sentences)\ntarget_int_to_letter, target_letter_to_int = extract_character_vocab(target_sentences)\n\n# Convert characters to ids\nsource_letter_ids = [[source_letter_to_int.get(letter, source_letter_to_int['<UNK>']) for letter in line] for line in source_sentences.split('\\n')]\ntarget_letter_ids = [[target_letter_to_int.get(letter, target_letter_to_int['<UNK>']) for letter in line] + [target_letter_to_int['<EOS>']] for line in target_sentences.split('\\n')] \n\nprint(\"Example source sequence\")\nprint(source_letter_ids[:3])\nprint(\"\\n\")\nprint(\"Example target sequence\")\nprint(target_letter_ids[:3])", "This is the final shape we need them to be in. 
We can now proceed to building the model.\nModel\nCheck the Version of TensorFlow\nThis will check to make sure you have the correct version of TensorFlow", "from distutils.version import LooseVersion\nimport tensorflow as tf\nfrom tensorflow.python.layers.core import Dense\n\n\n# Check TensorFlow Version\nassert LooseVersion(tf.__version__) >= LooseVersion('1.1'), 'Please use TensorFlow version 1.1 or newer'\nprint('TensorFlow Version: {}'.format(tf.__version__))", "Hyperparameters", "# Number of Epochs\nepochs = 60\n# Batch Size\nbatch_size = 128\n# RNN Size\nrnn_size = 50\n# Number of Layers\nnum_layers = 2\n# Embedding Size\nencoding_embedding_size = 15\ndecoding_embedding_size = 15\n# Learning Rate\nlearning_rate = 0.001", "Input", "def get_model_inputs():\n input_data = tf.placeholder(tf.int32, [None, None], name='input')\n targets = tf.placeholder(tf.int32, [None, None], name='targets')\n lr = tf.placeholder(tf.float32, name='learning_rate')\n\n target_sequence_length = tf.placeholder(tf.int32, (None,), name='target_sequence_length')\n max_target_sequence_length = tf.reduce_max(target_sequence_length, name='max_target_len')\n source_sequence_length = tf.placeholder(tf.int32, (None,), name='source_sequence_length')\n \n return input_data, targets, lr, target_sequence_length, max_target_sequence_length, source_sequence_length\n", "Sequence to Sequence Model\nWe can now start defining the functions that will build the seq2seq model. We are building it from the bottom up with the following components:\n2.1 Encoder\n - Embedding\n - Encoder cell\n2.2 Decoder\n 1- Process decoder inputs\n 2- Set up the decoder\n - Embedding\n - Decoder cell\n - Dense output layer\n - Training decoder\n - Inference decoder\n2.3 Seq2seq model connecting the encoder and decoder\n2.4 Build the training graph hooking up the model with the \n optimizer\n\n2.1 Encoder\nThe first bit of the model we'll build is the encoder. 
Here, we'll embed the input data, construct our encoder, then pass the embedded data to the encoder.\n\n\nEmbed the input data using tf.contrib.layers.embed_sequence\n<img src=\"images/embed_sequence.png\" />\n\n\nPass the embedded input into a stack of RNNs. Save the RNN state and ignore the output.\n<img src=\"images/encoder.png\" />", "def encoding_layer(input_data, rnn_size, num_layers,\n source_sequence_length, source_vocab_size, \n encoding_embedding_size):\n\n\n # Encoder embedding\n enc_embed_input = tf.contrib.layers.embed_sequence(input_data, source_vocab_size, encoding_embedding_size)\n\n # RNN cell\n def make_cell(rnn_size):\n enc_cell = tf.contrib.rnn.LSTMCell(rnn_size,\n initializer=tf.random_uniform_initializer(-0.1, 0.1, seed=2))\n return enc_cell\n\n enc_cell = tf.contrib.rnn.MultiRNNCell([make_cell(rnn_size) for _ in range(num_layers)])\n \n enc_output, enc_state = tf.nn.dynamic_rnn(enc_cell, enc_embed_input, sequence_length=source_sequence_length, dtype=tf.float32)\n \n return enc_output, enc_state", "2.2 Decoder\nThe decoder is probably the most involved part of this model. The following steps are needed to create it:\n1- Process decoder inputs\n2- Set up the decoder components\n - Embedding\n - Decoder cell\n - Dense output layer\n - Training decoder\n - Inference decoder\n\nProcess Decoder Input\nIn the training process, the target sequences will be used in two different places:\n\nUsing them to calculate the loss\nFeeding them to the decoder during training to make the model more robust.\n\nNow we need to address the second point. Let's assume our targets look like this in their letter/word form (we're doing this for readability. At this point in the code, these sequences would be in int form):\n<img src=\"images/targets_1.png\"/>\nWe need to do a simple transformation on the tensor before feeding it to the decoder:\n1- We will feed an item of the sequence to the decoder at each time step. 
Think about the last timestep -- where the decoder outputs the final word in its output. The input to that step is the item before last from the target sequence. The decoder has no use for the last item in the target sequence in this scenario. So we'll need to remove the last item. \nWe do that using tensorflow's tf.strided_slice() method. We hand it the tensor, and the index of where to start and where to end the cutting.\n<img src=\"images/strided_slice_1.png\"/>\n2- The first item in each sequence we feed to the decoder has to be the GO symbol. So we'll add that to the beginning.\n<img src=\"images/targets_add_go.png\"/>\nNow the tensor is ready to be fed to the decoder. It looks like this (if we convert from ints to letters/symbols):\n<img src=\"images/targets_after_processing_1.png\"/>", "# Process the input we'll feed to the decoder\ndef process_decoder_input(target_data, vocab_to_int, batch_size):\n '''Remove the last word id from each batch and concat the <GO> to the beginning of each batch'''\n ending = tf.strided_slice(target_data, [0, 0], [batch_size, -1], [1, 1])\n dec_input = tf.concat([tf.fill([batch_size, 1], vocab_to_int['<GO>']), ending], 1)\n\n return dec_input", "Set up the decoder components\n - Embedding\n - Decoder cell\n - Dense output layer\n - Training decoder\n - Inference decoder\n\n1- Embedding\nNow that we have prepared the inputs to the training decoder, we need to embed them so they can be ready to be passed to the decoder. \nWe'll create an embedding matrix like the following then have tf.nn.embedding_lookup convert our input to its embedded equivalent:\n<img src=\"images/embeddings.png\" />\n2- Decoder Cell\nThen we declare our decoder cell. Just like the encoder, we'll use a tf.contrib.rnn.LSTMCell here as well.\nWe need to declare a decoder for the training process, and a decoder for the inference/prediction process. 
These two decoders will share their parameters (so that all the weights and biases that are set during the training phase can be used when we deploy the model).\nFirst, we'll need to define the type of cell we'll be using for our decoder RNNs. We opted for LSTM.\n3- Dense output layer\nBefore we move to declaring our decoders, we'll need to create the output layer, which will be a tensorflow.python.layers.core.Dense layer that translates the outputs of the decoder to logits that tell us which element of the decoder vocabulary the decoder is choosing to output at each time step.\n4- Training decoder\nEssentially, we'll be creating two decoders which share their parameters. One for training and one for inference. The two are similar in that both are created using tf.contrib.seq2seq.BasicDecoder and tf.contrib.seq2seq.dynamic_decode. They differ, however, in that we feed the target sequences as inputs to the training decoder at each time step to make it more robust.\nWe can think of the training decoder as looking like this (except that it works with sequences in batches):\n<img src=\"images/sequence-to-sequence-training-decoder.png\"/>\nThe training decoder does not feed the output of each time step to the next. Rather, the inputs to the decoder time steps are the target sequence from the training dataset (the orange letters).\n5- Inference decoder\nThe inference decoder is the one we'll use when we deploy our model to the wild.\n<img src=\"images/sequence-to-sequence-inference-decoder.png\"/>\nWe'll hand our encoder hidden state to both the training and inference decoders and have it process its output. TensorFlow handles most of the logic for us. We just have to use the appropriate methods from tf.contrib.seq2seq and supply them with the appropriate inputs.", "def decoding_layer(target_letter_to_int, decoding_embedding_size, num_layers, rnn_size,\n target_sequence_length, max_target_sequence_length, enc_state, dec_input):\n # 1. 
Decoder Embedding\n target_vocab_size = len(target_letter_to_int)\n dec_embeddings = tf.Variable(tf.random_uniform([target_vocab_size, decoding_embedding_size]))\n dec_embed_input = tf.nn.embedding_lookup(dec_embeddings, dec_input)\n\n # 2. Construct the decoder cell\n def make_cell(rnn_size):\n dec_cell = tf.contrib.rnn.LSTMCell(rnn_size,\n initializer=tf.random_uniform_initializer(-0.1, 0.1, seed=2))\n return dec_cell\n\n dec_cell = tf.contrib.rnn.MultiRNNCell([make_cell(rnn_size) for _ in range(num_layers)])\n \n # 3. Dense layer to translate the decoder's output at each time \n # step into a choice from the target vocabulary\n output_layer = Dense(target_vocab_size,\n kernel_initializer = tf.truncated_normal_initializer(mean = 0.0, stddev=0.1))\n\n\n # 4. Set up a training decoder and an inference decoder\n # Training Decoder\n with tf.variable_scope(\"decode\"):\n\n # Helper for the training process. Used by BasicDecoder to read inputs.\n training_helper = tf.contrib.seq2seq.TrainingHelper(inputs=dec_embed_input,\n sequence_length=target_sequence_length,\n time_major=False)\n \n \n # Basic decoder\n training_decoder = tf.contrib.seq2seq.BasicDecoder(dec_cell,\n training_helper,\n enc_state,\n output_layer) \n \n # Perform dynamic decoding using the decoder\n training_decoder_output = tf.contrib.seq2seq.dynamic_decode(training_decoder,\n impute_finished=True,\n maximum_iterations=max_target_sequence_length)[0]\n # 5. 
Inference Decoder\n # Reuses the same parameters trained by the training process\n with tf.variable_scope(\"decode\", reuse=True):\n start_tokens = tf.tile(tf.constant([target_letter_to_int['<GO>']], dtype=tf.int32), [batch_size], name='start_tokens')\n\n # Helper for the inference process.\n inference_helper = tf.contrib.seq2seq.GreedyEmbeddingHelper(dec_embeddings,\n start_tokens,\n target_letter_to_int['<EOS>'])\n\n # Basic decoder\n inference_decoder = tf.contrib.seq2seq.BasicDecoder(dec_cell,\n inference_helper,\n enc_state,\n output_layer)\n \n # Perform dynamic decoding using the decoder\n inference_decoder_output = tf.contrib.seq2seq.dynamic_decode(inference_decoder,\n impute_finished=True,\n maximum_iterations=max_target_sequence_length)[0]\n\n return training_decoder_output, inference_decoder_output", "2.3 Seq2seq model\nLet's now go a step above, and hook up the encoder and decoder using the methods we just declared", "\ndef seq2seq_model(input_data, targets, lr, target_sequence_length, \n max_target_sequence_length, source_sequence_length,\n source_vocab_size, target_vocab_size,\n enc_embedding_size, dec_embedding_size, \n rnn_size, num_layers):\n \n # Pass the input data through the encoder. 
We'll ignore the encoder output, but use the state\n _, enc_state = encoding_layer(input_data, \n rnn_size, \n num_layers, \n source_sequence_length,\n source_vocab_size, \n encoding_embedding_size)\n \n \n # Prepare the target sequences we'll feed to the decoder in training mode\n dec_input = process_decoder_input(targets, target_letter_to_int, batch_size)\n \n # Pass encoder state and decoder inputs to the decoders\n training_decoder_output, inference_decoder_output = decoding_layer(target_letter_to_int, \n decoding_embedding_size, \n num_layers, \n rnn_size,\n target_sequence_length,\n max_target_sequence_length,\n enc_state, \n dec_input) \n \n return training_decoder_output, inference_decoder_output\n \n\n", "Model outputs training_decoder_output and inference_decoder_output both contain a 'rnn_output' logits tensor that looks like this:\n<img src=\"images/logits.png\"/>\nThe logits we get from the training tensor we'll pass to tf.contrib.seq2seq.sequence_loss() to calculate the loss and ultimately the gradient.", "# Build the graph\ntrain_graph = tf.Graph()\n# Set the graph to default to ensure that it is ready for training\nwith train_graph.as_default():\n \n # Load the model inputs \n input_data, targets, lr, target_sequence_length, max_target_sequence_length, source_sequence_length = get_model_inputs()\n \n # Create the training and inference logits\n training_decoder_output, inference_decoder_output = seq2seq_model(input_data, \n targets, \n lr, \n target_sequence_length, \n max_target_sequence_length, \n source_sequence_length,\n len(source_letter_to_int),\n len(target_letter_to_int),\n encoding_embedding_size, \n decoding_embedding_size, \n rnn_size, \n num_layers) \n \n # Create tensors for the training logits and inference logits\n training_logits = tf.identity(training_decoder_output.rnn_output, 'logits')\n inference_logits = tf.identity(inference_decoder_output.sample_id, name='predictions')\n \n # Create the weights for sequence_loss\n masks = 
tf.sequence_mask(target_sequence_length, max_target_sequence_length, dtype=tf.float32, name='masks')\n\n with tf.name_scope(\"optimization\"):\n \n # Loss function\n cost = tf.contrib.seq2seq.sequence_loss(\n training_logits,\n targets,\n masks)\n\n # Optimizer\n optimizer = tf.train.AdamOptimizer(lr)\n\n # Gradient Clipping\n gradients = optimizer.compute_gradients(cost)\n capped_gradients = [(tf.clip_by_value(grad, -5., 5.), var) for grad, var in gradients if grad is not None]\n train_op = optimizer.apply_gradients(capped_gradients)\n", "Get Batches\nThere's little processing involved when we retrieve the batches. This is a simple example assuming batch_size = 2\nSource sequences (it's actually in int form, we're showing the characters for clarity):\n<img src=\"images/source_batch.png\" />\nTarget sequences (also in int, but showing letters for clarity):\n<img src=\"images/target_batch.png\" />", " def pad_sentence_batch(sentence_batch, pad_int):\n \"\"\"Pad sentences with <PAD> so that each sentence of a batch has the same length\"\"\"\n max_sentence = max([len(sentence) for sentence in sentence_batch])\n return [sentence + [pad_int] * (max_sentence - len(sentence)) for sentence in sentence_batch]\n\ndef get_batches(targets, sources, batch_size, source_pad_int, target_pad_int):\n \"\"\"Batch targets, sources, and the lengths of their sentences together\"\"\"\n for batch_i in range(0, len(sources)//batch_size):\n start_i = batch_i * batch_size\n sources_batch = sources[start_i:start_i + batch_size]\n targets_batch = targets[start_i:start_i + batch_size]\n pad_sources_batch = np.array(pad_sentence_batch(sources_batch, source_pad_int))\n pad_targets_batch = np.array(pad_sentence_batch(targets_batch, target_pad_int))\n \n # Need the lengths for the _lengths parameters\n pad_targets_lengths = []\n for target in pad_targets_batch:\n pad_targets_lengths.append(len(target))\n \n pad_source_lengths = []\n for source in pad_sources_batch:\n 
pad_source_lengths.append(len(source))\n \n yield pad_targets_batch, pad_sources_batch, pad_targets_lengths, pad_source_lengths", "Train\nWe're now ready to train our model. If you run into OOM (out of memory) issues during training, try to decrease the batch_size.", "# Split data to training and validation sets\ntrain_source = source_letter_ids[batch_size:]\ntrain_target = target_letter_ids[batch_size:]\nvalid_source = source_letter_ids[:batch_size]\nvalid_target = target_letter_ids[:batch_size]\n(valid_targets_batch, valid_sources_batch, valid_targets_lengths, valid_sources_lengths) = next(get_batches(valid_target, valid_source, batch_size,\n source_letter_to_int['<PAD>'],\n target_letter_to_int['<PAD>']))\n\ndisplay_step = 20 # Check training loss after every 20 batches\n\ncheckpoint = \"best_model.ckpt\" \nwith tf.Session(graph=train_graph) as sess:\n sess.run(tf.global_variables_initializer())\n \n for epoch_i in range(1, epochs+1):\n for batch_i, (targets_batch, sources_batch, targets_lengths, sources_lengths) in enumerate(\n get_batches(train_target, train_source, batch_size,\n source_letter_to_int['<PAD>'],\n target_letter_to_int['<PAD>'])):\n \n # Training step\n _, loss = sess.run(\n [train_op, cost],\n {input_data: sources_batch,\n targets: targets_batch,\n lr: learning_rate,\n target_sequence_length: targets_lengths,\n source_sequence_length: sources_lengths})\n\n # Debug message updating us on the status of the training\n if batch_i % display_step == 0 and batch_i > 0:\n \n # Calculate validation cost\n validation_loss = sess.run(\n [cost],\n {input_data: valid_sources_batch,\n targets: valid_targets_batch,\n lr: learning_rate,\n target_sequence_length: valid_targets_lengths,\n source_sequence_length: valid_sources_lengths})\n \n print('Epoch {:>3}/{} Batch {:>4}/{} - Loss: {:>6.3f} - Validation loss: {:>6.3f}'\n .format(epoch_i,\n epochs, \n batch_i, \n len(train_source) // batch_size, \n loss, \n validation_loss[0]))\n\n \n \n # Save Model\n saver = 
tf.train.Saver()\n saver.save(sess, checkpoint)\n print('Model Trained and Saved')", "Prediction", "def source_to_seq(text):\n '''Prepare the text for the model'''\n sequence_length = 7\n return [source_letter_to_int.get(word, source_letter_to_int['<UNK>']) for word in text]+ [source_letter_to_int['<PAD>']]*(sequence_length-len(text))\n\n\n\n\ninput_sentence = 'wassup'\ntext = source_to_seq(input_sentence)\n\ncheckpoint = \"./best_model.ckpt\"\n\nloaded_graph = tf.Graph()\nwith tf.Session(graph=loaded_graph) as sess:\n # Load saved model\n loader = tf.train.import_meta_graph(checkpoint + '.meta')\n loader.restore(sess, checkpoint)\n\n input_data = loaded_graph.get_tensor_by_name('input:0')\n logits = loaded_graph.get_tensor_by_name('predictions:0')\n source_sequence_length = loaded_graph.get_tensor_by_name('source_sequence_length:0')\n target_sequence_length = loaded_graph.get_tensor_by_name('target_sequence_length:0')\n \n #Multiply by batch_size to match the model's input parameters\n answer_logits = sess.run(logits, {input_data: [text]*batch_size, \n target_sequence_length: [len(text)]*batch_size, \n source_sequence_length: [len(text)]*batch_size})[0] \n\n\npad = source_letter_to_int[\"<PAD>\"] \n\nprint('Original Text:', input_sentence)\n\nprint('\\nSource')\nprint(' Word Ids: {}'.format([i for i in text]))\nprint(' Input Words: {}'.format(\" \".join([source_int_to_letter[i] for i in text])))\n\nprint('\\nTarget')\nprint(' Word Ids: {}'.format([i for i in answer_logits if i != pad]))\nprint(' Response Words: {}'.format(\" \".join([target_int_to_letter[i] for i in answer_logits if i != pad])))" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
Jhanelle/Jhanelle_New_Version_of_final_project
bin/.ipynb_checkpoints/Compiled_Codes_for_Final_Project-checkpoint.ipynb
mit
[ "Loading Data\nStatistis for my data", "# Identitfy version of software used\npd.__version__\n\n#Identify version of software used\nnp.__version__\n\n# import libraries\n\nimport pandas as pd\nimport matplotlib.pyplot as plt\nimport numpy as np\n\n#stats library\n\nimport statsmodels.api as sm\nimport scipy\n\n#T-test is imported to complete the statistical analysis\n\nfrom scipy.stats import ttest_ind\n\nfrom scipy import stats\n\n#The function below is used to show the plots within the notebook\n\n%matplotlib inline", "Loading Data using Pandas", "data=pd.read_csv('../data/Testdata.dat', delimiter=' ')\n\n# Print the first 8 rows of the dataset \ndata.head(8)\n\n#Print the last 8 rows of the dataset \ndata.tail(8)\n\n# Commands used to check the title names in each column as some of the titles were omitted\ndata.dtypes.head()\n", "Hypothesis and Questions\n..........\nThings I need to do:\nFormat the date properly. I need help do use regular expression to fix the format of my date", "# Here we extract only two columns from the data as these are the main variables for the statistcal analysis\n\nstrain_df=data[['strain','speed']]\nstrain_df.head()\n\n# Eliminate NaN from the dataset \n\nstrain_df=strain_df.dropna()\nstrain_df.head()\n\n#Resample the data to group by strain\n\nstrain_resampled=strain_df.groupby('strain')\nstrain_resampled.head()\n\n#Created a histogram to check the normal distribution the data\n\nstrain_resampled.hist(column='speed', bins=50)\n\n# I need help adding titles to these histograms", "Interpretation of Histograms\nBased on the histograms of the respective strain it is clear that the data does not follow a normal distribution.\nTherefore t-tests and linear regression cannot be applied to this data set as planned.", "# I know I should applu Apply Kruskal. wallis statistics to data, however I am not sure how to deal with the array\n\nscipy.stats.mstats.kruskalwallis( , )" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
capn-freako/ibisami
example/ibisami_Example_Models_Tester.ipynb
bsd-3-clause
[ "ibisami Example Models Tester <a name=\"top\"/>\nOriginal author: David Banas\nOriginal date: May 21, 2015\nCopyright (c) 2015 David Banas; all rights reserved World wide.\nThis iPython notebook tests the example Tx and Rx models provided with the ibisami package.\nContents\n\n<a href=\"#tx_model\">Tx Model</a></li>\n<a href=\"#tx_basic_sanity_check\">Basic Sanity Check</a></li>\n<a href=\"#tx_impulse_and_frequency_response_check\">Impulse and Frequency Response Check</a></li>\n<a href=\"#tx_getwave_vs_init\">GetWave() vs. Init()</a></li>\n\n\n<a href=\"#rx_model\">Rx Model</a></li>\n<a href=\"#rx_basic_sanity_check\">Basic Sanity Check</a></li>\n<a href=\"#rx_impulse_and_frequency_response_check\">Impulse and Frequency Response Check</a></li>\n<a href=\"#dynamic_dfe_check\">Dynamic DFE Check</a></li>\n<a href=\"#static_dfe_check\">Static DFE Check</a></li>\n<a href=\"#rx_getwave_vs_init_static\">GetWave() vs. Init() - Static</a></li>\n<a href=\"#rx_getwave_vs_init_dynamic\">GetWave() vs. Init() - Dynamic</a></li>\n\n\n\nTx Model <a name=\"tx_model\"/>\nBasic Sanity Check <a name=\"tx_basic_sanity_check\"/>", "%matplotlib inline\nfrom matplotlib import pyplot as plt\nfrom numpy import array\nfrom pyibisami import amimodel as ami\n\n# gTxDLLName = \"example_tx_x86_amd64.so\"\ngTxDLLName = \"example_tx_x86.dll\"\ngBitRate = 10.e9\ngSampsPerBit = 32\ngNumBits = 100\n\nbit_time = 1. 
/ gBitRate\nsample_interval = bit_time / gSampsPerBit\nrow_size = gNumBits * gSampsPerBit\nchannel_response = array([1.0 / sample_interval, 0.0])\nchannel_response.resize(row_size)\n\nmy_tx = ami.AMIModel(gTxDLLName)\ntx_init = ami.AMIModelInitializer({'root_name' : \"example_tx\",\n 'tx_tap_nm1' : \"10\",\n })\ntx_init.bit_time = bit_time\ntx_init.sample_interval = sample_interval\ntx_init.channel_response = channel_response\nmy_tx.initialize(tx_init)\n\nprint \"Message from model:\"\nprint \"\\t\", my_tx.msg\nprint \"Parameter string from model:\"\nprint \"\\t\", my_tx.ami_params_out\n", "<a href=\"#top\">Back to Top</a>\nImpulse and Frequency Response Check <a name=\"tx_impulse_and_frequency_response_check\"/>", "import numpy.fft as fft\nfrom numpy import log10, array, cumsum\n\ngTxZoomBits = 5\n\ncase = 1\nfor tx_tap_nm1 in [0, 5, 10]:\n for samps_per_bit in [16, 32, 64]:\n sample_interval = bit_time / samps_per_bit\n row_size = gNumBits * samps_per_bit\n channel_response = array([1.0 / sample_interval, 0.0])\n channel_response.resize(row_size)\n tx_init.ami_params['tx_tap_nm1'] = tx_tap_nm1\n tx_init.sample_interval = sample_interval\n tx_init.channel_response = channel_response\n my_tx.initialize(tx_init)\n imp_resp = array(my_tx.initOut)\n step_resp = cumsum(imp_resp) * sample_interval\n t = array([i * my_tx.sample_interval for i in range(len(imp_resp))])\n\n plt.figure(1)\n plt.plot(t * 1.e9, imp_resp / 1.e9, label=\"case_%d\"%case)\n\n plt.figure(2)\n plt.plot(t * 1.e9, imp_resp / 1.e9, label=\"case_%d\"%case)\n\n freq_resp = fft.fft(imp_resp)\n freq_resp *= step_resp[-1] / abs(freq_resp[0]) # Normalize to proper d.c. value.\n f0 = 1. / (my_tx.sample_interval * len(imp_resp))\n f = array([i * f0 for i in range(len(freq_resp) // 2)])\n\n plt.figure(3)\n plt.semilogx(f / 1.e9, 20. 
* log10(abs(freq_resp[:len(freq_resp) // 2])), label=\"case_%d\"%case)\n\n plt.figure(4)\n plt.plot(t * 1.e9, step_resp, label=\"case_%d\"%case)\n\n case += 1\n \nplt.figure(1)\nplt.title(\"Model Impulse Response Output - Full Scale\")\nplt.xlabel(\"time (ns)\")\nplt.ylabel(\"h(t) (V/ns)\")\nplt.legend()\n\nplt.figure(2)\nplt.title(\"Model Impulse Response Output - Zoomed In\")\nplt.xlabel(\"time (ns)\")\nplt.ylabel(\"h(t) (V/ns)\")\nplt.axis(xmax = gTxZoomBits * bit_time * 1.e9)\nplt.legend()\n\nplt.figure(3)\nplt.title(\"Model Output Frequency Response\")\nplt.xlabel(\"freq. (GHz)\")\nplt.ylabel(\"|H(f)| (dB)\")\nplt.legend()\n\nplt.figure(4)\nplt.title(\"Model Step Response Output - Zoomed In\")\nplt.xlabel(\"time (ns)\")\nplt.ylabel(\"s(t) (V)\")\nplt.axis(xmax = gTxZoomBits * bit_time * 1.e9)\nplt.legend()\n", "<a href=\"#top\">Back to Top</a>\nGetWave() vs. Init() <a name=\"tx_getwave_vs_init\"/>\nMake sure the new default GetWave() behavior matches Init().", "from numpy import shape, resize\nfrom pybert.pybert_util import import_qucs_csv\n\ngChannelFile = \"Channel_Impulse.csv\"\n\nchannel_response = import_qucs_csv(gChannelFile, sample_interval)\n# channel_response.resize(row_size)\nchannel_step = cumsum(channel_response) * sample_interval\nt = array([i * my_tx.sample_interval for i in range(len(channel_step))])\n\ntx_init.channel_response = channel_response\ntx_init.ami_params['tx_tap_nm1'] = 8\ntx_init.ami_params['tx_tap_np1'] = 2\nmy_tx.initialize(tx_init)\n\nimp_resp = array(my_tx.initOut)\nstep_resp_init = cumsum(imp_resp) * sample_interval\nstep_resp_getwave = my_tx.getWave(channel_step)[0]\n\nt.resize(len(step_resp_getwave))\n\nprint \"After GetWave() call:\"\nprint \"\\tMessage:\"\nprint \"\\t\\t{}\".format(my_tx.msg)\nprint \"\\tAMI output parameters:\"\nprint \"\\t\\t{}\".format(my_tx.ami_params_out)\n\nstep_resp_init.resize(len(t))\nchannel_step.resize(len(t))\n\nplt.plot(t * 1.e9, channel_step, label=\"Channel\")\nplt.plot(t * 1.e9, 
step_resp_init, label=\"Init\")\nplt.plot(t * 1.e9, step_resp_getwave, label=\"GetWave\")\nplt.legend()\nplt.axis(xmax=5)\n", "<a href=\"#top\">Back to Top</a>\nRx Model <a name=\"rx_model\"/>\nBasic Sanity Check <a name=\"rx_basic_sanity_check\"/>", "# gRxDLLName = \"example_rx_x86_amd64.so\"\ngRxDLLName = \"example_rx_x86.dll\"\n\nbit_time = 1. / gBitRate\nsample_interval = bit_time / gSampsPerBit\nrow_size = gNumBits * gSampsPerBit\nchannel_response = array([0.0, 1.0,])\nchannel_response.resize(row_size)\n\nmy_rx = ami.AMIModel(gRxDLLName)\nrx_init = ami.AMIModelInitializer({'root_name' : \"example_rx\",\n 'ctle_mode' : \"1\",\n 'ctle_freq' : \"5.0e9\",\n 'ctle_mag' : \"10\"})\nrx_init.bit_time = bit_time\nrx_init.sample_interval = sample_interval\nrx_init.channel_response = channel_response\nmy_rx.initialize(rx_init)\n\nprint \"Message from model:\"\nprint my_rx.msg\nprint \"Parameter string from model:\"\nprint my_rx.ami_params_out\n", "<a href=\"#top\">Back to Top</a>\nImpulse and Frequency Response Check <a name=\"rx_impulse_and_frequency_response_check\"/>", "gRxZoomBits = 1.0\n\ncase = 1\nfor ctle_mag in [0, 6, 12]:\n for samps_per_bit in [16, 32, 64]:\n sample_interval = bit_time / samps_per_bit\n row_size = gNumBits * samps_per_bit\n channel_response = array([0.0, 1.0 / sample_interval,])\n channel_response.resize(row_size)\n rx_init.ami_params['ctle_mag'] = ctle_mag\n rx_init.sample_interval = sample_interval\n rx_init.channel_response = channel_response\n my_rx.initialize(rx_init)\n imp_resp = array(my_rx.initOut)\n step_resp = cumsum(imp_resp) * sample_interval\n t = array([i * my_rx.sample_interval for i in range(len(imp_resp))])\n\n plt.figure(1)\n plt.plot(t * 1.e9, imp_resp / 1.e9, label=\"case_%d\"%case)\n\n plt.figure(2)\n plt.plot(t * 1.e9, imp_resp / 1.e9, label=\"case_%d\"%case)\n\n freq_resp = fft.fft(imp_resp)\n freq_resp *= step_resp[-1] / abs(freq_resp[0]) # Normalize to proper d.c. value.\n f0 = 1. 
/ (my_rx.sample_interval * len(imp_resp))\n f = array([i * f0 for i in range(len(freq_resp) // 2)])\n\n plt.figure(3)\n plt.semilogx(f / 1.e9, 20. * log10(abs(freq_resp[:len(freq_resp) // 2])), label=\"case_%d\"%case)\n\n plt.figure(4)\n plt.plot(t * 1.e9, step_resp, label=\"case_%d\"%case)\n\n case += 1\n \nplt.figure(1)\nplt.title(\"Model Impulse Response Output - Full Scale\")\nplt.xlabel(\"time (ns)\")\nplt.ylabel(\"h(t) (V/ns)\")\nplt.legend()\n\nplt.figure(2)\nplt.title(\"Model Impulse Response Output - Zoomed In\")\nplt.xlabel(\"time (ns)\")\nplt.ylabel(\"h(t) (V/ns)\")\nplt.axis(xmax = gRxZoomBits * bit_time * 1.e9)\nplt.legend()\n\nplt.figure(3)\nplt.title(\"Model Output Frequency Response\")\nplt.xlabel(\"freq. (GHz)\")\nplt.ylabel(\"|H(f)| (dB)\")\nplt.legend()\n\nplt.figure(4)\nplt.title(\"Model Step Response Output - Zoomed In\")\nplt.xlabel(\"time (ns)\")\nplt.ylabel(\"s(t) (V)\")\nplt.axis(xmax = gRxZoomBits * bit_time * 1.e9)\nplt.legend()\n", "<a href=\"#top\">Back to Top</a>\nDynamic DFE Check <a name=\"dynamic_dfe_check\"/>", "from numpy import concatenate\nfrom pybert.pybert_util import import_qucs_csv\n\ngChannelFile = \"Channel_Impulse.csv\"\ngRxZoomBits = 20.0\ngNumBits = 1000\n\nrx_init.ami_params['ctle_mode'] = 1\nrx_init.ami_params['ctle_mag'] = 12\nrx_init.ami_params['dfe_mode'] = 2\nrx_init.ami_params['dfe_vout'] = 0.3\nrx_init.ami_params['dfe_gain'] = 0.02\nrx_init.ami_params['dfe_tap1'] = 0.0\nrx_init.ami_params['dfe_tap2'] = 0.0\nrx_init.ami_params['dfe_tap3'] = 0.0\nrx_init.ami_params['dfe_tap4'] = 0.0\nrx_init.ami_params['dfe_tap5'] = 0.0\n# rx_init.ami_params['dfe_ntaps'] = 6\nrx_init.ami_params['debug'] = \"(enable 0) (sig_tap 0) (init_adapt_tap 4)\"\n\ndef delay(x, n):\n return concatenate([x[:n], x[:-n]])\n\ncase = 1\nfor samps_per_bit in [16, 32, 64]:\n sample_interval = bit_time / samps_per_bit\n row_size = gNumBits * samps_per_bit\n channel_response = import_qucs_csv(gChannelFile, sample_interval)\n 
channel_response.resize(row_size)\n channel_step = cumsum(channel_response) * sample_interval\n channel_pulse = channel_step - delay(channel_step, samps_per_bit)\n\n rx_init.sample_interval = sample_interval\n rx_init.channel_response = channel_response\n my_rx.initialize(rx_init)\n imp_resp = array(my_rx.initOut)\n step_resp = cumsum(imp_resp) * sample_interval\n pulse_resp = step_resp - delay(step_resp, samps_per_bit)\n t = array([i * my_rx.sample_interval for i in range(len(imp_resp))])\n\n plt.figure(1)\n plt.plot(t * 1.e9, imp_resp / 1.e9, label=\"Rx out: case_%d\"%case)\n plt.plot(t * 1.e9, channel_response / 1.e9, label=\"Channel: case_%d\"%case)\n\n plt.figure(2)\n plt.plot(t * 1.e9, imp_resp / 1.e9, label=\"Rx out: case_%d\"%case)\n plt.plot(t * 1.e9, channel_response / 1.e9, label=\"Channel: case_%d\"%case)\n\n plt.figure(3)\n plt.plot(t * 1.e9, step_resp, label=\"Rx out: case_%d\"%case)\n plt.plot(t * 1.e9, channel_step, label=\"Channel: case_%d\"%case)\n\n plt.figure(4)\n plt.plot(t * 1.e9, step_resp, label=\"Rx out: case_%d\"%case)\n plt.plot(t * 1.e9, channel_step, label=\"Channel: case_%d\"%case)\n\n plt.figure(5)\n plt.plot(t * 1.e9, pulse_resp, label=\"Rx out: case_%d\"%case)\n plt.plot(t * 1.e9, channel_pulse, label=\"Channel: case_%d\"%case)\n\n plt.figure(6)\n plt.plot(t * 1.e9, pulse_resp, label=\"Rx out: case_%d\"%case)\n plt.plot(t * 1.e9, channel_pulse, label=\"Channel: case_%d\"%case)\n\n case += 1\n\nprint \"Message from model:\"\nprint my_rx.msg\nprint \"Parameter string from model:\"\nprint my_rx.ami_params_out\n\nplt.figure(1)\nplt.title(\"Model Impulse Response Output - Full Scale\")\nplt.xlabel(\"time (ns)\")\nplt.ylabel(\"h(t) (V/ns)\")\nplt.legend()\nplt.grid()\n\nplt.figure(2)\nplt.title(\"Model Impulse Response Output - Zoomed In\")\nplt.xlabel(\"time (ns)\")\nplt.ylabel(\"h(t) (V/ns)\")\nplt.axis(xmax = gRxZoomBits * bit_time * 1.e9)\nplt.legend()\nplt.grid()\n\nplt.figure(3)\nplt.title(\"Model Step Response Output - Full 
Scale\")\nplt.xlabel(\"time (ns)\")\nplt.ylabel(\"s(t) (V)\")\nplt.legend()\n\nplt.figure(4)\nplt.title(\"Model Step Response Output - Zoomed In\")\nplt.xlabel(\"time (ns)\")\nplt.ylabel(\"s(t) (V)\")\nplt.axis(xmax = gRxZoomBits * bit_time * 1.e9)\nplt.legend()\n\nplt.figure(5)\nplt.title(\"Model Pulse Response Output - Full Scale\")\nplt.xlabel(\"time (ns)\")\nplt.ylabel(\"p(t) (V)\")\nplt.legend()\n\nplt.figure(6)\nplt.title(\"Model Pulse Response Output - Zoomed In\")\nplt.xlabel(\"time (ns)\")\nplt.ylabel(\"p(t) (V)\")\nplt.axis(xmax = gRxZoomBits * bit_time * 1.e9)\nplt.legend()\n", "Eye Check <a name=\"eye_check\"/>", "from numpy import convolve\n\ndef lfsr_bits(taps, seed):\n val = int(seed)\n num_taps = max(taps)\n mask = (1 << num_taps) - 1\n\n while(True):\n xor_res = reduce(lambda x, b: x ^ b, [bool(val & (1 << (tap - 1))) for tap in taps])\n val = (val << 1) & mask # Just to keep 'val' from growing without bound.\n if(xor_res):\n val += 1\n yield(val & 1)\n\ngEyeBits = 1000\n\nbit_gen = lfsr_bits([7, 6], 1)\nbits = []\nfor i in range(gEyeBits):\n bits.append(bit_gen.next())\nbits = 2 * array(bits) - 1\nbits = bits.repeat(samps_per_bit)\nsig = convolve(bits, imp_resp) * sample_interval\n\nfor i in range(10, gEyeBits - 2):\n plt.plot(sig[i * samps_per_bit : (i + 2) * samps_per_bit], \"b\")\n", "<a href=\"#top\">Back to Top</a>\nStatic DFE Check <a name=\"static_dfe_check\"/>\nCheck to ensure that if we program the DFE statically with the results of the adaptation, above, we get the same results.", "rx_init.ami_params['dfe_mode'] = 1\nrx_init.ami_params['dfe_tap1'] = 0.486954\nrx_init.ami_params['dfe_tap2'] = 0.18674\nrx_init.ami_params['dfe_tap3'] = 0.116029\nrx_init.ami_params['dfe_tap4'] = 0.091125\nrx_init.ami_params['dfe_tap5'] = 0.0783757\n\ncase = 1\nfor samps_per_bit in [16, 32, 64]:\n sample_interval = bit_time / samps_per_bit\n row_size = gNumBits * samps_per_bit\n channel_response = import_qucs_csv(gChannelFile, sample_interval)\n 
channel_response.resize(row_size)\n rx_init.sample_interval = sample_interval\n rx_init.channel_response = channel_response\n my_rx.initialize(rx_init)\n imp_resp = array(my_rx.initOut)\n step_resp = cumsum(imp_resp) * sample_interval\n t = array([i * my_rx.sample_interval for i in range(len(imp_resp))])\n\n plt.figure(1)\n plt.plot(t * 1.e9, imp_resp / 1.e9, label=\"case_%d\"%case)\n\n plt.figure(2)\n plt.plot(t * 1.e9, imp_resp / 1.e9, label=\"case_%d\"%case)\n\n plt.figure(3)\n plt.plot(t * 1.e9, step_resp, label=\"case_%d\"%case)\n\n case += 1\n\nprint \"Message from model:\"\nprint \"\\t\", my_rx.msg\nprint \"Parameter string from model:\"\nprint \"\\t\", my_rx.ami_params_out\n\nplt.figure(1)\nplt.title(\"Model Impulse Response Output - Full Scale\")\nplt.xlabel(\"time (ns)\")\nplt.ylabel(\"h(t) (V/ns)\")\nplt.legend()\n\nplt.figure(2)\nplt.title(\"Model Impulse Response Output - Zoomed In\")\nplt.xlabel(\"time (ns)\")\nplt.ylabel(\"h(t) (V/ns)\")\nplt.axis(xmax = gRxZoomBits * bit_time * 1.e9)\nplt.legend()\n\nplt.figure(3)\nplt.title(\"Model Step Response Output - Zoomed In\")\nplt.xlabel(\"time (ns)\")\nplt.ylabel(\"s(t) (V)\")\nplt.axis(xmax = gRxZoomBits * bit_time * 1.e9)\nplt.legend()\n", "Eye Check <a name=\"eye_check\"/>", "sig = convolve(bits, imp_resp) * sample_interval\n\nfor i in range(10, gEyeBits - 2):\n plt.plot(sig[i * samps_per_bit : (i + 2) * samps_per_bit], \"b\")\n", "<a href=\"#top\">Back to Top</a>\nGetWave() vs. 
Init() - Static <a name=\"rx_getwave_vs_init_static\"/>\nMake sure GetWave() behavior matches Init(), when DFE is statically programmed.", "rx_in = convolve(bits, channel_response)[:len(bits)] * sample_interval\nsig_td = my_rx.getWave(rx_in, len(rx_in))[0]\n\nprint \"After GetWave() call:\"\nprint \"\\tMessage:\"\nprint \"\\t\\t{}\".format(my_rx.msg)\nprint \"\\tAMI output parameters:\"\nprint \"\\t\\t{}\".format(my_rx.ami_params_out)\n\nfor i in range(10, gEyeBits - 2):\n plt.plot(sig_td[i * samps_per_bit : (i + 2) * samps_per_bit], \"b\")\n", "<a href=\"#top\">Back to Top</a>\nGetWave() vs. Init() - Dynamic <a name=\"rx_getwave_vs_init_dynamic\"/>\nMake sure GetWave() behavior matches Init(), when DFE adapts.", "rx_init.ami_params['dfe_mode'] = 2\nrx_init.ami_params['dfe_tap1'] = 0.0\nrx_init.ami_params['dfe_tap2'] = 0.0\nrx_init.ami_params['dfe_tap3'] = 0.0\nrx_init.ami_params['dfe_tap4'] = 0.0\nrx_init.ami_params['dfe_tap5'] = 0.0\nmy_rx.initialize(rx_init)\nsig_td_adapt = my_rx.getWave(rx_in, len(rx_in))[0]\n\nprint \"After GetWave() call:\"\nprint \"\\tMessage:\"\nprint \"\\t\\t{}\".format(my_rx.msg)\nprint \"\\tAMI output parameters:\"\nprint \"\\t\\t{}\".format(my_rx.ami_params_out)\n\nfor i in range(10, gEyeBits - 2):\n plt.plot(sig_td_adapt[i * samps_per_bit : (i + 2) * samps_per_bit], \"b\")\n", "<a href=\"#top\">Back to Top</a>" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
ES-DOC/esdoc-jupyterhub
notebooks/cccma/cmip6/models/sandbox-2/atmoschem.ipynb
gpl-3.0
[ "ES-DOC CMIP6 Model Properties - Atmoschem\nMIP Era: CMIP6\nInstitute: CCCMA\nSource ID: SANDBOX-2\nTopic: Atmoschem\nSub-Topics: Transport, Emissions Concentrations, Gas Phase Chemistry, Stratospheric Heterogeneous Chemistry, Tropospheric Heterogeneous Chemistry, Photo Chemistry. \nProperties: 84 (39 required)\nModel descriptions: Model description details\nInitialized From: -- \nNotebook Help: Goto notebook help page\nNotebook Initialised: 2018-02-15 16:53:46\nDocument Setup\nIMPORTANT: to be executed each time you run the notebook", "# DO NOT EDIT ! \nfrom pyesdoc.ipython.model_topic import NotebookOutput \n\n# DO NOT EDIT ! \nDOC = NotebookOutput('cmip6', 'cccma', 'sandbox-2', 'atmoschem')", "Document Authors\nSet document authors", "# Set as follows: DOC.set_author(\"name\", \"email\") \n# TODO - please enter value(s)", "Document Contributors\nSpecify document contributors", "# Set as follows: DOC.set_contributor(\"name\", \"email\") \n# TODO - please enter value(s)", "Document Publication\nSpecify document publication status", "# Set publication status: \n# 0=do not publish, 1=publish. \nDOC.set_publication_status(0)", "Document Table of Contents\n1. Key Properties\n2. Key Properties --&gt; Software Properties\n3. Key Properties --&gt; Timestep Framework\n4. Key Properties --&gt; Timestep Framework --&gt; Split Operator Order\n5. Key Properties --&gt; Tuning Applied\n6. Grid\n7. Grid --&gt; Resolution\n8. Transport\n9. Emissions Concentrations\n10. Emissions Concentrations --&gt; Surface Emissions\n11. Emissions Concentrations --&gt; Atmospheric Emissions\n12. Emissions Concentrations --&gt; Concentrations\n13. Gas Phase Chemistry\n14. Stratospheric Heterogeneous Chemistry\n15. Tropospheric Heterogeneous Chemistry\n16. Photo Chemistry\n17. Photo Chemistry --&gt; Photolysis \n1. Key Properties\nKey properties of the atmospheric chemistry\n1.1. 
Model Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of atmospheric chemistry model.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.model_overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "1.2. Model Name\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nName of atmospheric chemistry model code.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.model_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "1.3. Chemistry Scheme Scope\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nAtmospheric domains covered by the atmospheric chemistry model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.chemistry_scheme_scope') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"troposhere\" \n# \"stratosphere\" \n# \"mesosphere\" \n# \"mesosphere\" \n# \"whole atmosphere\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "1.4. Basic Approximations\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nBasic approximations made in the atmospheric chemistry model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.basic_approximations') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "1.5. Prognostic Variables Form\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nForm of prognostic variables in the atmospheric chemistry component.", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.atmoschem.key_properties.prognostic_variables_form') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"3D mass/mixing ratio for gas\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "1.6. Number Of Tracers\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nNumber of advected tracers in the atmospheric chemistry model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.number_of_tracers') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "1.7. Family Approach\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nAtmospheric chemistry calculations (not advection) generalized into families of species?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.family_approach') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "1.8. Coupling With Chemical Reactivity\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nAtmospheric chemistry transport scheme turbulence is couple with chemical reactivity?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.coupling_with_chemical_reactivity') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "2. Key Properties --&gt; Software Properties\nSoftware properties of aerosol code\n2.1. Repository\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nLocation of code for this component.", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.atmoschem.key_properties.software_properties.repository') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "2.2. Code Version\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nCode version identifier.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.software_properties.code_version') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "2.3. Code Languages\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nCode language(s).", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.software_properties.code_languages') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "3. Key Properties --&gt; Timestep Framework\nTimestepping in the atmospheric chemistry model\n3.1. Method\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nMathematical method deployed to solve the evolution of a given variable", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Operator splitting\" \n# \"Integrated\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "3.2. Split Operator Advection Timestep\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nTimestep for chemical species advection (in seconds)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_advection_timestep') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "3.3. 
Split Operator Physical Timestep\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nTimestep for physics (in seconds).", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_physical_timestep') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "3.4. Split Operator Chemistry Timestep\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nTimestep for chemistry (in seconds).", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_chemistry_timestep') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "3.5. Split Operator Alternate Order\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\n?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_alternate_order') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "3.6. Integrated Timestep\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTimestep for the atmospheric chemistry model (in seconds)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.integrated_timestep') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "3.7. Integrated Scheme Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nSpecify the type of timestep scheme", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.integrated_scheme_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Explicit\" \n# \"Implicit\" \n# \"Semi-implicit\" \n# \"Semi-analytic\" \n# \"Impact solver\" \n# \"Back Euler\" \n# \"Newton Raphson\" \n# \"Rosenbrock\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "4. Key Properties --&gt; Timestep Framework --&gt; Split Operator Order\n**\n4.1. Turbulence\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nCall order for turbulence scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.turbulence') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "4.2. Convection\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nCall order for convection scheme This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.convection') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "4.3. Precipitation\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nCall order for precipitation scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.precipitation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "4.4. Emissions\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nCall order for emissions scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.emissions') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "4.5. Deposition\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nCall order for deposition scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.deposition') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "4.6. Gas Phase Chemistry\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nCall order for gas phase chemistry scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.gas_phase_chemistry') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "4.7. Tropospheric Heterogeneous Phase Chemistry\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nCall order for tropospheric heterogeneous phase chemistry scheme. 
This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.tropospheric_heterogeneous_phase_chemistry') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "4.8. Stratospheric Heterogeneous Phase Chemistry\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nCall order for stratospheric heterogeneous phase chemistry scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.stratospheric_heterogeneous_phase_chemistry') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "4.9. Photo Chemistry\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nCall order for photo chemistry scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.photo_chemistry') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "4.10. Aerosols\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nCall order for aerosols scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.aerosols') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "5. Key Properties --&gt; Tuning Applied\nTuning methodology for atmospheric chemistry component\n5.1. Description\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nGeneral overview description of tuning: explain and motivate the main targets and metrics retained. Document the relative weight given to climate performance metrics versus process oriented metrics, and comment on the possible conflicts with parameterization level tuning. In particular describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "5.2. Global Mean Metrics Used\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nList set of metrics of the global mean state used in tuning model/component", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.global_mean_metrics_used') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "5.3. Regional Metrics Used\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nList of regional metrics of mean state used in tuning model/component", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.regional_metrics_used') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "5.4. 
Trend Metrics Used\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nList observed trend metrics used in tuning model/component", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.trend_metrics_used') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "6. Grid\nAtmospheric chemistry grid\n6.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe the general structure of the atmospheric chemistry grid", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.grid.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "6.2. Matches Atmosphere Grid\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\n*Does the atmospheric chemistry grid match the atmosphere grid?*", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.grid.matches_atmosphere_grid') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "7. Grid --&gt; Resolution\nResolution in the atmospheric chemistry grid\n7.1. Name\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nThis is a string usually used by the modelling group to describe the resolution of this grid, e.g. ORCA025, N512L180, T512L70 etc.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.grid.resolution.name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "7.2. Canonical Horizontal Resolution\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nExpression quoted for gross comparisons of resolution, e.g. 50km or 0.1 degrees etc.", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.atmoschem.grid.resolution.canonical_horizontal_resolution') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "7.3. Number Of Horizontal Gridpoints\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nTotal number of horizontal (XY) points (or degrees of freedom) on computational grid.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.grid.resolution.number_of_horizontal_gridpoints') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "7.4. Number Of Vertical Levels\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nNumber of vertical levels resolved on computational grid.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.grid.resolution.number_of_vertical_levels') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "7.5. Is Adaptive Grid\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDefault is False. Set true if grid resolution changes during execution.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.grid.resolution.is_adaptive_grid') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "8. Transport\nAtmospheric chemistry transport\n8.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nGeneral overview of transport implementation", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.transport.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "8.2. 
Use Atmospheric Transport\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs transport handled by the atmosphere, rather than within atmospheric chemistry?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.transport.use_atmospheric_transport') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "8.3. Transport Details\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nIf transport is handled within the atmospheric chemistry scheme, describe it.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.transport.transport_details') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "9. Emissions Concentrations\nAtmospheric chemistry emissions\n9.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of atmospheric chemistry emissions", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.emissions_concentrations.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "10. Emissions Concentrations --&gt; Surface Emissions\n**\n10.1. Sources\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nSources of the chemical species emitted at the surface that are taken into account in the emissions scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.sources') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Vegetation\" \n# \"Soil\" \n# \"Sea surface\" \n# \"Anthropogenic\" \n# \"Biomass burning\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "10.2. 
Method\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nMethods used to define chemical species emitted directly into model layers above the surface (several methods allowed because the different species may not use the same method).", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.method') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Climatology\" \n# \"Spatially uniform mixing ratio\" \n# \"Spatially uniform concentration\" \n# \"Interactive\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "10.3. Prescribed Climatology Emitted Species\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList of chemical species emitted at the surface and prescribed via a climatology, and the nature of the climatology (E.g. CO (monthly), C2H6 (constant))", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.prescribed_climatology_emitted_species') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "10.4. Prescribed Spatially Uniform Emitted Species\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList of chemical species emitted at the surface and prescribed as spatially uniform", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.prescribed_spatially_uniform_emitted_species') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "10.5. Interactive Emitted Species\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList of chemical species emitted at the surface and specified via an interactive method", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.interactive_emitted_species') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "10.6. Other Emitted Species\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList of chemical species emitted at the surface and specified via any other method", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.other_emitted_species') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "11. Emissions Concentrations --&gt; Atmospheric Emissions\nTO DO\n11.1. Sources\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nSources of chemical species emitted in the atmosphere that are taken into account in the emissions scheme.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.sources') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Aircraft\" \n# \"Biomass burning\" \n# \"Lightning\" \n# \"Volcanos\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "11.2. Method\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nMethods used to define the chemical species emitted in the atmosphere (several methods allowed because the different species may not use the same method).", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.method') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Climatology\" \n# \"Spatially uniform mixing ratio\" \n# \"Spatially uniform concentration\" \n# \"Interactive\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "11.3. 
Prescribed Climatology Emitted Species\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList of chemical species emitted in the atmosphere and prescribed via a climatology (E.g. CO (monthly), C2H6 (constant))", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.prescribed_climatology_emitted_species') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "11.4. Prescribed Spatially Uniform Emitted Species\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList of chemical species emitted in the atmosphere and prescribed as spatially uniform", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.prescribed_spatially_uniform_emitted_species') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "11.5. Interactive Emitted Species\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList of chemical species emitted in the atmosphere and specified via an interactive method", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.interactive_emitted_species') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "11.6. Other Emitted Species\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList of chemical species emitted in the atmosphere and specified via an &quot;other method&quot;", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.other_emitted_species') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "12. 
Emissions Concentrations --&gt; Concentrations\nTO DO\n12.1. Prescribed Lower Boundary\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList of species prescribed at the lower boundary.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.emissions_concentrations.concentrations.prescribed_lower_boundary') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "12.2. Prescribed Upper Boundary\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList of species prescribed at the upper boundary.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.emissions_concentrations.concentrations.prescribed_upper_boundary') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "13. Gas Phase Chemistry\nAtmospheric chemistry transport\n13.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview gas phase atmospheric chemistry", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.gas_phase_chemistry.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "13.2. Species\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nSpecies included in the gas phase chemistry scheme.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.gas_phase_chemistry.species') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"HOx\" \n# \"NOy\" \n# \"Ox\" \n# \"Cly\" \n# \"HSOx\" \n# \"Bry\" \n# \"VOCs\" \n# \"isoprene\" \n# \"H2O\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "13.3. 
Number Of Bimolecular Reactions\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nThe number of bi-molecular reactions in the gas phase chemistry scheme.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_bimolecular_reactions') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "13.4. Number Of Termolecular Reactions\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nThe number of ter-molecular reactions in the gas phase chemistry scheme.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_termolecular_reactions') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "13.5. Number Of Tropospheric Heterogenous Reactions\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nThe number of reactions in the tropospheric heterogeneous chemistry scheme.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_tropospheric_heterogenous_reactions') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "13.6. Number Of Stratospheric Heterogenous Reactions\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nThe number of reactions in the stratospheric heterogeneous chemistry scheme.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_stratospheric_heterogenous_reactions') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "13.7. Number Of Advected Species\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nThe number of advected species in the gas phase chemistry scheme.", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_advected_species') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "13.8. Number Of Steady State Species\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nThe number of gas phase species for which the concentration is updated in the chemical solver assuming photochemical steady state", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_steady_state_species') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "13.9. Interactive Dry Deposition\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs dry deposition interactive (as opposed to prescribed)? Dry deposition describes the dry processes by which gaseous species deposit themselves on solid surfaces thus decreasing their concentration in the air.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.gas_phase_chemistry.interactive_dry_deposition') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "13.10. Wet Deposition\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs wet deposition included? Wet deposition describes the moist processes by which gaseous species deposit themselves on solid surfaces thus decreasing their concentration in the air.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.gas_phase_chemistry.wet_deposition') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "13.11. Wet Oxidation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs wet oxidation included? 
Oxidation describes the loss of electrons or an increase in oxidation state by a molecule", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.gas_phase_chemistry.wet_oxidation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "14. Stratospheric Heterogeneous Chemistry\nAtmospheric chemistry stratospheric heterogeneous chemistry\n14.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of stratospheric heterogeneous atmospheric chemistry", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "14.2. Gas Phase Species\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nGas phase species included in the stratospheric heterogeneous chemistry scheme.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.gas_phase_species') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Cly\" \n# \"Bry\" \n# \"NOy\" \n# TODO - please enter value(s)\n", "14.3. Aerosol Species\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nAerosol species included in the stratospheric heterogeneous chemistry scheme.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.aerosol_species') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Sulphate\" \n# \"Polar stratospheric ice\" \n# \"NAT (Nitric acid trihydrate)\" \n# \"NAD (Nitric acid dihydrate)\" \n# \"STS (supercooled ternary solution aerosol particule))\" \n# TODO - please enter value(s)\n", "14.4. 
Number Of Steady State Species\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nThe number of steady state species in the stratospheric heterogeneous chemistry scheme.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.number_of_steady_state_species') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "14.5. Sedimentation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs sedimentation included in the stratospheric heterogeneous chemistry scheme?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.sedimentation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "14.6. Coagulation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs coagulation included in the stratospheric heterogeneous chemistry scheme?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.coagulation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "15. Tropospheric Heterogeneous Chemistry\nAtmospheric chemistry tropospheric heterogeneous chemistry\n15.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of tropospheric heterogeneous atmospheric chemistry", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "15.2. 
Gas Phase Species\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList of gas phase species included in the tropospheric heterogeneous chemistry scheme.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.gas_phase_species') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "15.3. Aerosol Species\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nAerosol species included in the tropospheric heterogeneous chemistry scheme.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.aerosol_species') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Sulphate\" \n# \"Nitrate\" \n# \"Sea salt\" \n# \"Dust\" \n# \"Ice\" \n# \"Organic\" \n# \"Black carbon/soot\" \n# \"Polar stratospheric ice\" \n# \"Secondary organic aerosols\" \n# \"Particulate organic matter\" \n# TODO - please enter value(s)\n", "15.4. Number Of Steady State Species\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nThe number of steady state species in the tropospheric heterogeneous chemistry scheme.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.number_of_steady_state_species') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "15.5. Interactive Dry Deposition\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs dry deposition interactive (as opposed to prescribed)? Dry deposition describes the dry processes by which gaseous species deposit themselves on solid surfaces thus decreasing their concentration in the air.", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.interactive_dry_deposition') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "15.6. Coagulation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs coagulation included in the tropospheric heterogeneous chemistry scheme or not?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.coagulation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "16. Photo Chemistry\nAtmospheric chemistry photo chemistry\n16.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of atmospheric photo chemistry", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.photo_chemistry.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "16.2. Number Of Reactions\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nThe number of reactions in the photo-chemistry scheme.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.photo_chemistry.number_of_reactions') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "17. Photo Chemistry --&gt; Photolysis\nPhotolysis scheme\n17.1. Method\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nPhotolysis scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.photo_chemistry.photolysis.method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Offline (clear sky)\" \n# \"Offline (with clouds)\" \n# \"Online\" \n# TODO - please enter value(s)\n", "17.2. 
Environmental Conditions\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe any environmental conditions taken into account by the photolysis scheme (e.g. whether pressure- and temperature-sensitive cross-sections and quantum yields in the photolysis calculations are modified to reflect the modelled conditions.)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.photo_chemistry.photolysis.environmental_conditions') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "©2017 ES-DOC" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
giacomov/astromodels
examples/Priors_for_Bayesian_analysis.ipynb
bsd-3-clause
[ "Priors for Bayesian analysis\nAstromodels supports the definition of priors for all parameters in your model. You can use as prior any function (although of course not all functions should be used this way, but the choice is up to you).\nFirst let's define a simple model containing one point source (see the \"Model tutorial\" for more info):", "from astromodels import *\n\n# Create a point source named \"pts1\"\npts1 = PointSource('pts1',ra=125.23, dec=17.98, spectral_shape=powerlaw())\n\n# Create the model\nmy_model = Model(pts1)", "Now let's assign uniform priors to the parameters of the powerlaw function. The function uniform_prior is defined like this:", "uniform_prior.info()", "We can use it as such:", "# Set 'lower_bound' to -10, 'upper bound' to 10, and leave the 'value' parameter \n# to the default value\npts1.spectrum.main.powerlaw.K.prior = log_uniform_prior(lower_bound = 1e-15, upper_bound=1e-7)\n\n# Display it\npts1.spectrum.main.powerlaw.K.display()\n\n# Set 'lower_bound' to -10, 'upper bound' to 0, and leave the 'value' parameter \n# to the default value\npts1.spectrum.main.powerlaw.index.prior = uniform_prior(lower_bound = -10, upper_bound=0)\n\npts1.spectrum.main.powerlaw.index.display()", "Now we can evaluate the prior simply as:", "# Create a short cut to avoid writing too much\npo = pts1.spectrum.main.powerlaw\n\n# Evaluate the prior in 2.3e-5\npoint = 2.3e-21\nprior_value1 = po.K.prior(point * po.K.unit)\n\n# Equivalently we can use the fast call with no units\nprior_value2 = po.K.prior.fast_call(point)\n\nassert prior_value1 == prior_value2\n\nprint(\"The prior for logK evaluate to %s in %s\" % (prior_value1, point))", "Let's plot the value of the prior at some random locations:", "# You need matplotlib installed for this\nimport matplotlib.pyplot as plt\n\n# This is for the IPython notebook\n%matplotlib inline\n\n# Let's get 500 points uniformly distributed between -20 and 20\n\nrandom_points = 
np.logspace(-30,2,50)\n\nplt.loglog(random_points,pts1.spectrum.main.powerlaw.K.prior.fast_call(random_points), '.' )\n#plt.xscale(\"log\")\n#plt.ylim([-0.1,1.2])\nplt.xlabel(\"value of K\")\nplt.ylabel(\"Prior\")" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
VVard0g/ThreatHunter-Playbook
docs/notebooks/windows/08_lateral_movement/WIN-201009183000.ipynb
mit
[ "Remote DCOM IErtUtil DLL Hijack\nMetadata\n| Metadata | Value |\n|:------------------|:---|\n| collaborators | ['@Cyb3rWard0g', '@Cyb3rPandaH'] |\n| creation date | 2020/10/09 |\n| modification date | 2020/10/09 |\n| playbook related | ['WIN-201012004336'] |\nHypothesis\nThreat actors might be copying files remotely to abuse a DLL hijack opportunity found on the DCOM InternetExplorer.Application Class.\nTechnical Context\nOffensive Tradecraft\nA threat actor could use a known DLL hijack vulnerability on the DCOM InternetExplorer.Application Class while instantiating the object remotely.\nWhen the object is instantiated, it looks for iertutil.dll in the c:\\Program Files\\Internet Explorer\\ directory. That DLL does not exist in that folder. Therefore, a threat actor could easily copy their own DLL into that folder and execute it by instantiating an object via the DCOM InternetExplorer.Application Class remotely.\nWhen the malicious DLL is loaded, there are various approaches to hijacking execution, but most likely a threat actor would want the DLL to act as a proxy to the real DLL to minimize the chances of interrupting normal operations.\nOne way to do this is by cloning the export table from one DLL to another one. One known tool that can help with this is Koppeling. 
\nSecurity Datasets\n| Metadata | Value |\n|:----------|:----------|\n| docs | https://securitydatasets.com/notebooks/atomic/windows/lateral_movement/SDWIN-201009183000.html |\n| link | https://raw.githubusercontent.com/OTRF/Security-Datasets/master/datasets/atomic/windows/lateral_movement/host/covenant_dcom_iertutil_dll_hijack.zip |\nAnalytics\nInitialize Analytics Engine", "from openhunt.mordorutils import *\nspark = get_spark()", "Download & Process Security Dataset", "sd_file = \"https://raw.githubusercontent.com/OTRF/Security-Datasets/master/datasets/atomic/windows/lateral_movement/host/covenant_dcom_iertutil_dll_hijack.zip\"\nregisterMordorSQLTable(spark, sd_file, \"sdTable\")", "Analytic I\nLook for non-system accounts SMB accessing a C:\\Program Files\\Internet Explorer\\iertutil.dll with write (0x2) access mask via an administrative share (i.e C$).\n| Data source | Event Provider | Relationship | Event |\n|:------------|:---------------|--------------|-------|\n| File | Microsoft-Windows-Security-Auditing | User accessed File | 5145 |", "df = spark.sql(\n'''\nSELECT `@timestamp`, Hostname, ShareName, SubjectUserName, SubjectLogonId, IpAddress, IpPort, RelativeTargetName\nFROM sdTable\nWHERE LOWER(Channel) = \"security\"\n AND EventID = 5145\n AND RelativeTargetName LIKE '%Internet Explorer\\\\\\iertutil.dll'\n AND NOT SubjectUserName LIKE '%$'\n AND AccessMask = '0x2'\n'''\n)\ndf.show(10,False)", "Analytic II\nLook for C:\\Program Files\\Internet Explorer\\iertutil.dll being accessed over the network with write (0x2) access mask via an administrative share (i.e C$) and created by the System process on the target system.\n| Data source | Event Provider | Relationship | Event |\n|:------------|:---------------|--------------|-------|\n| File | Microsoft-Windows-Security-Auditing | User accessed File | 5145 |\n| File | Microsoft-Windows-Sysmon/Operational | Process created File | 11 |", "df = spark.sql(\n'''\nSELECT `@timestamp`, Hostname, ShareName, 
SubjectUserName, SubjectLogonId, IpAddress, IpPort, RelativeTargetName\nFROM sdTable b\nINNER JOIN (\n SELECT LOWER(REVERSE(SPLIT(TargetFilename, '\\'))[0]) as TargetFilename\n FROM sdTable\n WHERE Channel = 'Microsoft-Windows-Sysmon/Operational'\n AND Image = 'System'\n AND EventID = 11\n AND TargetFilename LIKE '%Internet Explorer\\\\\\iertutil.dll'\n) a\nON LOWER(REVERSE(SPLIT(RelativeTargetName, '\\'))[0]) = a.TargetFilename\nWHERE LOWER(b.Channel) = 'security'\n AND b.EventID = 5145\n AND b.AccessMask = '0x2'\n'''\n)\ndf.show(10,False)", "Analytic III\nLook for C:\\Program Files\\Internet Explorer\\iertutil.dll being accessed over the network with write (0x2) access mask via an administrative share (i.e C$) and created by the System process on the target system.\n| Data source | Event Provider | Relationship | Event |\n|:------------|:---------------|--------------|-------|\n| File | Microsoft-Windows-Security-Auditing | User accessed File | 5145 |\n| File | Microsoft-Windows-Sysmon/Operational | Process created File | 11 |", "df = spark.sql(\n'''\nSELECT `@timestamp`, Hostname, ShareName, SubjectUserName, SubjectLogonId, IpAddress, IpPort, RelativeTargetName\nFROM sdTable b\nINNER JOIN (\n SELECT LOWER(REVERSE(SPLIT(TargetFilename, '\\'))[0]) as TargetFilename\n FROM sdTable\n WHERE Channel = 'Microsoft-Windows-Sysmon/Operational'\n AND Image = 'System'\n AND EventID = 11\n AND TargetFilename LIKE '%Internet Explorer\\\\\\iertutil.dll'\n) a\nON LOWER(REVERSE(SPLIT(RelativeTargetName, '\\'))[0]) = a.TargetFilename\nWHERE LOWER(b.Channel) = 'security'\n AND b.EventID = 5145\n AND b.AccessMask = '0x2'\n'''\n)\ndf.show(10,False)", "Analytic IV\nLook for C:\\Program Files\\Internet Explorer\\iertutil.dll being accessed over the network with write (0x2) access mask via an administrative share (i.e C$), created by the System process and loaded by the WMI provider host (wmiprvse.exe). 
All happening on the target system.\n| Data source | Event Provider | Relationship | Event |\n|:------------|:---------------|--------------|-------|\n| File | Microsoft-Windows-Security-Auditing | User accessed File | 5145 |\n| File | Microsoft-Windows-Sysmon/Operational | Process created File | 11 |\n| File | Microsoft-Windows-Sysmon/Operational | Process loaded Dll | 7 |", "df = spark.sql(\n'''\nSELECT `@timestamp`, Hostname, ShareName, SubjectUserName, SubjectLogonId, IpAddress, IpPort, RelativeTargetName\nFROM sdTable d\nINNER JOIN (\n SELECT LOWER(REVERSE(SPLIT(TargetFilename, '\\'))[0]) as TargetFilename\n FROM sdTable b\n INNER JOIN (\n SELECT ImageLoaded\n FROM sdTable\n WHERE Channel = 'Microsoft-Windows-Sysmon/Operational'\n AND EventID = 7\n AND LOWER(Image) LIKE '%iexplore.exe'\n AND ImageLoaded LIKE '%Internet Explorer\\\\\\iertutil.dll'\n ) a\n ON b.TargetFilename = a.ImageLoaded\n WHERE b.Channel = 'Microsoft-Windows-Sysmon/Operational'\n AND b.Image = 'System'\n AND b.EventID = 11\n) c\nON LOWER(REVERSE(SPLIT(RelativeTargetName, '\\'))[0]) = c.TargetFilename\nWHERE LOWER(d.Channel) = 'security'\n AND d.EventID = 5145\n AND d.AccessMask = '0x2'\n'''\n)\ndf.show(10,False)", "Known Bypasses\nFalse Positives\nNone\nHunter Notes\n\nBaseline your environment to identify normal activity. Document all accounts creating files over the network via administrative shares.\nBaseline iexplore.exe execution and modules loaded (i.e signed and un-signed)\n\nHunt Output\n| Type | Link |\n| :----| :----|\n| Sigma Rule | https://github.com/SigmaHQ/sigma/blob/master/rules/windows/builtin/security/win_dcom_iertutil_dll_hijack.yml |\n| Sigma Rule | https://github.com/SigmaHQ/sigma/blob/master/rules/windows/sysmon/sysmon_dcom_iertutil_dll_hijack.yml |\nReferences\n\nhttps://docs.microsoft.com/en-us/openspecs/windows_protocols/ms-dcom/64af4c57-5466-4fdf-9761-753ea926a494" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
slundberg/shap
notebooks/api_examples/explainers/GPUTree.ipynb
mit
[ "GPUTree explainer\nThis notebook demonstrates how to use the GPUTree explainer on some simple datasets. Like the Tree explainer, the GPUTree explainer is specifically designed for tree-based machine learning models, but it is designed to accelerate the computations using NVIDIA GPUs.\nNote that in order to use the GPUTree explainer you need to have an NVIDIA GPU, and SHAP needs to have been compiled to support the current GPU libraries on your system. On a recent Ubuntu server the steps to make this happen would be:\n\nCheck to make sure you have the NVIDIA CUDA Toolkit installed by running the nvcc command (the CUDA compiler) from the terminal. If this command is not found then you need to install it with something like sudo apt install nvidia-cuda-toolkit.\nOnce the NVIDIA CUDA Toolkit is installed you need to set the CUDA_PATH environment variable. If which nvcc produces /usr/bin/nvcc then you can run export CUDA_PATH=/usr.\nBuild SHAP with CUDA support by cloning the shap repo using git clone https://github.com/slundberg/shap.git then running python setup.py install --user.\n\nIf you run into issues with the above instructions, make sure you don't still have an old version of SHAP around by ensuring import shap fails before you start the new install.\nBelow we demonstrate how to use the GPUTree explainer on a simple adult income classification dataset and model.", "import shap\nimport xgboost\n\n# get a dataset on income prediction\nX,y = shap.datasets.adult()\n\n# train an XGBoost model (but any other model type would also work)\nmodel = xgboost.XGBClassifier()\nmodel.fit(X, y)", "Tabular data with independent (Shapley value) masking", "# build a GPUTree explainer and explain the model predictions on the given dataset\nexplainer = shap.explainers.GPUTree(model, X)\nshap_values = explainer(X)\n\n# get just the explanations for the positive class\nshap_values = shap_values", "Plot a global summary", "shap.plots.bar(shap_values)", "Plot a single instance", 
"shap.plots.waterfall(shap_values[0])", "Interaction values\nGPUTree supports Shapley-Taylor interaction values (an improvement over what the Tree explainer originally provided).", "explainer2 = shap.explainers.GPUTree(model, feature_perturbation=\"tree_path_dependent\")\ninteraction_shap_values = explainer2(X[:100], interactions=True)\n\nshap.plots.scatter(interaction_shap_values[:,:,0])", "<hr>\nHave an idea for more helpful examples? Pull requests that add to this documentation notebook are encouraged!" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
ES-DOC/esdoc-jupyterhub
notebooks/cams/cmip6/models/sandbox-1/atmos.ipynb
gpl-3.0
[ "ES-DOC CMIP6 Model Properties - Atmos\nMIP Era: CMIP6\nInstitute: CAMS\nSource ID: SANDBOX-1\nTopic: Atmos\nSub-Topics: Dynamical Core, Radiation, Turbulence Convection, Microphysics Precipitation, Cloud Scheme, Observation Simulation, Gravity Waves, Solar, Volcanos. \nProperties: 156 (127 required)\nModel descriptions: Model description details\nInitialized From: -- \nNotebook Help: Goto notebook help page\nNotebook Initialised: 2018-02-15 16:53:43\nDocument Setup\nIMPORTANT: to be executed each time you run the notebook", "# DO NOT EDIT ! \nfrom pyesdoc.ipython.model_topic import NotebookOutput \n\n# DO NOT EDIT ! \nDOC = NotebookOutput('cmip6', 'cams', 'sandbox-1', 'atmos')", "Document Authors\nSet document authors", "# Set as follows: DOC.set_author(\"name\", \"email\") \n# TODO - please enter value(s)", "Document Contributors\nSpecify document contributors", "# Set as follows: DOC.set_contributor(\"name\", \"email\") \n# TODO - please enter value(s)", "Document Publication\nSpecify document publication status", "# Set publication status: \n# 0=do not publish, 1=publish. \nDOC.set_publication_status(0)", "Document Table of Contents\n1. Key Properties --&gt; Overview\n2. Key Properties --&gt; Resolution\n3. Key Properties --&gt; Timestepping\n4. Key Properties --&gt; Orography\n5. Grid --&gt; Discretisation\n6. Grid --&gt; Discretisation --&gt; Horizontal\n7. Grid --&gt; Discretisation --&gt; Vertical\n8. Dynamical Core\n9. Dynamical Core --&gt; Top Boundary\n10. Dynamical Core --&gt; Lateral Boundary\n11. Dynamical Core --&gt; Diffusion Horizontal\n12. Dynamical Core --&gt; Advection Tracers\n13. Dynamical Core --&gt; Advection Momentum\n14. Radiation\n15. Radiation --&gt; Shortwave Radiation\n16. Radiation --&gt; Shortwave GHG\n17. Radiation --&gt; Shortwave Cloud Ice\n18. Radiation --&gt; Shortwave Cloud Liquid\n19. Radiation --&gt; Shortwave Cloud Inhomogeneity\n20. Radiation --&gt; Shortwave Aerosols\n21. Radiation --&gt; Shortwave Gases\n22. 
Radiation --&gt; Longwave Radiation\n23. Radiation --&gt; Longwave GHG\n24. Radiation --&gt; Longwave Cloud Ice\n25. Radiation --&gt; Longwave Cloud Liquid\n26. Radiation --&gt; Longwave Cloud Inhomogeneity\n27. Radiation --&gt; Longwave Aerosols\n28. Radiation --&gt; Longwave Gases\n29. Turbulence Convection\n30. Turbulence Convection --&gt; Boundary Layer Turbulence\n31. Turbulence Convection --&gt; Deep Convection\n32. Turbulence Convection --&gt; Shallow Convection\n33. Microphysics Precipitation\n34. Microphysics Precipitation --&gt; Large Scale Precipitation\n35. Microphysics Precipitation --&gt; Large Scale Cloud Microphysics\n36. Cloud Scheme\n37. Cloud Scheme --&gt; Optical Cloud Properties\n38. Cloud Scheme --&gt; Sub Grid Scale Water Distribution\n39. Cloud Scheme --&gt; Sub Grid Scale Ice Distribution\n40. Observation Simulation\n41. Observation Simulation --&gt; Isscp Attributes\n42. Observation Simulation --&gt; Cosp Attributes\n43. Observation Simulation --&gt; Radar Inputs\n44. Observation Simulation --&gt; Lidar Inputs\n45. Gravity Waves\n46. Gravity Waves --&gt; Orographic Gravity Waves\n47. Gravity Waves --&gt; Non Orographic Gravity Waves\n48. Solar\n49. Solar --&gt; Solar Pathways\n50. Solar --&gt; Solar Constant\n51. Solar --&gt; Orbital Parameters\n52. Solar --&gt; Insolation Ozone\n53. Volcanos\n54. Volcanos --&gt; Volcanoes Treatment \n1. Key Properties --&gt; Overview\nTop level key properties\n1.1. Model Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of atmosphere model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.key_properties.overview.model_overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "1.2. Model Name\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nName of atmosphere model code (CAM 4.0, ARPEGE 3.2,...)", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.atmos.key_properties.overview.model_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "1.3. Model Family\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nType of atmospheric model.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.key_properties.overview.model_family') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"AGCM\" \n# \"ARCM\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "1.4. Basic Approximations\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nBasic approximations made in the atmosphere.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.key_properties.overview.basic_approximations') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"primitive equations\" \n# \"non-hydrostatic\" \n# \"anelastic\" \n# \"Boussinesq\" \n# \"hydrostatic\" \n# \"quasi-hydrostatic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "2. Key Properties --&gt; Resolution\nCharacteristics of the model resolution\n2.1. Horizontal Resolution Name\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nThis is a string usually used by the modelling group to describe the resolution of the model grid, e.g. T42, N48.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.key_properties.resolution.horizontal_resolution_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "2.2. Canonical Horizontal Resolution\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nExpression quoted for gross comparisons of resolution, e.g. 2.5 x 3.75 degrees lat-lon.", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.atmos.key_properties.resolution.canonical_horizontal_resolution') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "2.3. Range Horizontal Resolution\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nRange of horizontal resolution with spatial details, eg. 1 deg (Equator) - 0.5 deg", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.key_properties.resolution.range_horizontal_resolution') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "2.4. Number Of Vertical Levels\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nNumber of vertical levels resolved on the computational grid.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.key_properties.resolution.number_of_vertical_levels') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "2.5. High Top\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDoes the atmosphere have a high-top? High-Top atmospheres have a fully resolved stratosphere with a model top above the stratopause.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.key_properties.resolution.high_top') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "3. Key Properties --&gt; Timestepping\nCharacteristics of the atmosphere model time stepping\n3.1. Timestep Dynamics\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTimestep for the dynamics, e.g. 30 min.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_dynamics') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "3.2. 
Timestep Shortwave Radiative Transfer\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nTimestep for the shortwave radiative transfer, e.g. 1.5 hours.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_shortwave_radiative_transfer') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "3.3. Timestep Longwave Radiative Transfer\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nTimestep for the longwave radiative transfer, e.g. 3 hours.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_longwave_radiative_transfer') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "4. Key Properties --&gt; Orography\nCharacteristics of the model orography\n4.1. Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTime adaptation of the orography.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.key_properties.orography.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"present day\" \n# \"modified\" \n# TODO - please enter value(s)\n", "4.2. Changes\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nIf the orography type is modified describe the time adaptation changes.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.key_properties.orography.changes') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"related to ice sheets\" \n# \"related to tectonics\" \n# \"modified mean\" \n# \"modified variance if taken into account in model (cf gravity waves)\" \n# TODO - please enter value(s)\n", "5. Grid --&gt; Discretisation\nAtmosphere grid discretisation\n5.1. 
Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview description of grid discretisation in the atmosphere", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.grid.discretisation.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "6. Grid --&gt; Discretisation --&gt; Horizontal\nAtmosphere discretisation in the horizontal\n6.1. Scheme Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nHorizontal discretisation type", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"spectral\" \n# \"fixed grid\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "6.2. Scheme Method\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nHorizontal discretisation method", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"finite elements\" \n# \"finite volumes\" \n# \"finite difference\" \n# \"centered finite difference\" \n# TODO - please enter value(s)\n", "6.3. Scheme Order\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nHorizontal discretisation function order", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_order') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"second\" \n# \"third\" \n# \"fourth\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "6.4. 
Horizontal Pole\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nHorizontal discretisation pole singularity treatment", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.grid.discretisation.horizontal.horizontal_pole') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"filter\" \n# \"pole rotation\" \n# \"artificial island\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "6.5. Grid Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nHorizontal grid type", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.grid.discretisation.horizontal.grid_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Gaussian\" \n# \"Latitude-Longitude\" \n# \"Cubed-Sphere\" \n# \"Icosahedral\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "7. Grid --&gt; Discretisation --&gt; Vertical\nAtmosphere discretisation in the vertical\n7.1. Coordinate Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nType of vertical coordinate system", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.grid.discretisation.vertical.coordinate_type') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"isobaric\" \n# \"sigma\" \n# \"hybrid sigma-pressure\" \n# \"hybrid pressure\" \n# \"vertically lagrangian\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "8. Dynamical Core\nCharacteristics of the dynamical core\n8.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview description of atmosphere dynamical core", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "8.2. 
Name\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nCommonly used name for the dynamical core of the model.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "8.3. Timestepping Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTimestepping framework type", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.timestepping_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Adams-Bashforth\" \n# \"explicit\" \n# \"implicit\" \n# \"semi-implicit\" \n# \"leap frog\" \n# \"multi-step\" \n# \"Runge Kutta fifth order\" \n# \"Runge Kutta second order\" \n# \"Runge Kutta third order\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "8.4. Prognostic Variables\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nList of the model prognostic variables", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.prognostic_variables') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"surface pressure\" \n# \"wind components\" \n# \"divergence/curl\" \n# \"temperature\" \n# \"potential temperature\" \n# \"total water\" \n# \"water vapour\" \n# \"water liquid\" \n# \"water ice\" \n# \"total water moments\" \n# \"clouds\" \n# \"radiation\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "9. Dynamical Core --&gt; Top Boundary\nType of boundary layer at the top of the model\n9.1. Top Boundary Condition\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTop boundary condition", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_boundary_condition') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"sponge layer\" \n# \"radiation boundary condition\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "9.2. Top Heat\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTop boundary heat treatment", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_heat') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "9.3. Top Wind\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTop boundary wind treatment", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_wind') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "10. Dynamical Core --&gt; Lateral Boundary\nType of lateral boundary condition (if the model is a regional model)\n10.1. Condition\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nType of lateral boundary condition", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.lateral_boundary.condition') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"sponge layer\" \n# \"radiation boundary condition\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "11. Dynamical Core --&gt; Diffusion Horizontal\nHorizontal diffusion scheme\n11.1. Scheme Name\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nHorizontal diffusion scheme name", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.diffusion_horizontal.scheme_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "11.2. 
Scheme Method\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nHorizontal diffusion scheme method", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.diffusion_horizontal.scheme_method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"iterated Laplacian\" \n# \"bi-harmonic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "12. Dynamical Core --&gt; Advection Tracers\nTracer advection scheme\n12.1. Scheme Name\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nTracer advection scheme name", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.scheme_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Heun\" \n# \"Roe and VanLeer\" \n# \"Roe and Superbee\" \n# \"Prather\" \n# \"UTOPIA\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "12.2. Scheme Characteristics\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nTracer advection scheme characteristics", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.scheme_characteristics') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Eulerian\" \n# \"modified Euler\" \n# \"Lagrangian\" \n# \"semi-Lagrangian\" \n# \"cubic semi-Lagrangian\" \n# \"quintic semi-Lagrangian\" \n# \"mass-conserving\" \n# \"finite volume\" \n# \"flux-corrected\" \n# \"linear\" \n# \"quadratic\" \n# \"quartic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "12.3. Conserved Quantities\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nTracer advection scheme conserved quantities", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.conserved_quantities') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"dry mass\" \n# \"tracer mass\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "12.4. Conservation Method\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTracer advection scheme conservation method", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.conservation_method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"conservation fixer\" \n# \"Priestley algorithm\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "13. Dynamical Core --&gt; Advection Momentum\nMomentum advection scheme\n13.1. Scheme Name\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nMomentum advection scheme name", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"VanLeer\" \n# \"Janjic\" \n# \"SUPG (Streamline Upwind Petrov-Galerkin)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "13.2. Scheme Characteristics\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nMomentum advection scheme characteristics", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_characteristics') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"2nd order\" \n# \"4th order\" \n# \"cell-centred\" \n# \"staggered grid\" \n# \"semi-staggered grid\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "13.3. 
Scheme Staggering Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nMomentum advection scheme staggering type", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_staggering_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Arakawa B-grid\" \n# \"Arakawa C-grid\" \n# \"Arakawa D-grid\" \n# \"Arakawa E-grid\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "13.4. Conserved Quantities\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nMomentum advection scheme conserved quantities", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.conserved_quantities') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Angular momentum\" \n# \"Horizontal momentum\" \n# \"Enstrophy\" \n# \"Mass\" \n# \"Total energy\" \n# \"Vorticity\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "13.5. Conservation Method\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nMomentum advection scheme conservation method", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.conservation_method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"conservation fixer\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "14. Radiation\nCharacteristics of the atmosphere radiation process\n14.1. Aerosols\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nAerosols whose radiative effect is taken into account in the atmosphere model", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.atmos.radiation.aerosols') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"sulphate\" \n# \"nitrate\" \n# \"sea salt\" \n# \"dust\" \n# \"ice\" \n# \"organic\" \n# \"BC (black carbon / soot)\" \n# \"SOA (secondary organic aerosols)\" \n# \"POM (particulate organic matter)\" \n# \"polar stratospheric ice\" \n# \"NAT (nitric acid trihydrate)\" \n# \"NAD (nitric acid dihydrate)\" \n# \"STS (supercooled ternary solution aerosol particle)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "15. Radiation --&gt; Shortwave Radiation\nProperties of the shortwave radiation scheme\n15.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview description of shortwave radiation in the atmosphere", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_radiation.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "15.2. Name\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nCommonly used name for the shortwave radiation scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_radiation.name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "15.3. Spectral Integration\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nShortwave radiation scheme spectral integration", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_radiation.spectral_integration') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"wide-band model\" \n# \"correlated-k\" \n# \"exponential sum fitting\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "15.4. 
Transport Calculation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nShortwave radiation transport calculation methods", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_radiation.transport_calculation') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"two-stream\" \n# \"layer interaction\" \n# \"bulk\" \n# \"adaptive\" \n# \"multi-stream\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "15.5. Spectral Intervals\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nShortwave radiation scheme number of spectral intervals", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_radiation.spectral_intervals') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "16. Radiation --&gt; Shortwave GHG\nRepresentation of greenhouse gases in the shortwave radiation scheme\n16.1. Greenhouse Gas Complexity\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nComplexity of greenhouse gases whose shortwave radiative effects are taken into account in the atmosphere model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_GHG.greenhouse_gas_complexity') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"CO2\" \n# \"CH4\" \n# \"N2O\" \n# \"CFC-11 eq\" \n# \"CFC-12 eq\" \n# \"HFC-134a eq\" \n# \"Explicit ODSs\" \n# \"Explicit other fluorinated gases\" \n# \"O3\" \n# \"H2O\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "16.2. ODS\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nOzone depleting substances whose shortwave radiative effects are explicitly taken into account in the atmosphere model", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.atmos.radiation.shortwave_GHG.ODS') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"CFC-12\" \n# \"CFC-11\" \n# \"CFC-113\" \n# \"CFC-114\" \n# \"CFC-115\" \n# \"HCFC-22\" \n# \"HCFC-141b\" \n# \"HCFC-142b\" \n# \"Halon-1211\" \n# \"Halon-1301\" \n# \"Halon-2402\" \n# \"methyl chloroform\" \n# \"carbon tetrachloride\" \n# \"methyl chloride\" \n# \"methylene chloride\" \n# \"chloroform\" \n# \"methyl bromide\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "16.3. Other Fluorinated Gases\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nOther fluorinated gases whose shortwave radiative effects are explicitly taken into account in the atmosphere model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_GHG.other_flourinated_gases') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"HFC-134a\" \n# \"HFC-23\" \n# \"HFC-32\" \n# \"HFC-125\" \n# \"HFC-143a\" \n# \"HFC-152a\" \n# \"HFC-227ea\" \n# \"HFC-236fa\" \n# \"HFC-245fa\" \n# \"HFC-365mfc\" \n# \"HFC-43-10mee\" \n# \"CF4\" \n# \"C2F6\" \n# \"C3F8\" \n# \"C4F10\" \n# \"C5F12\" \n# \"C6F14\" \n# \"C7F16\" \n# \"C8F18\" \n# \"c-C4F8\" \n# \"NF3\" \n# \"SF6\" \n# \"SO2F2\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "17. Radiation --&gt; Shortwave Cloud Ice\nShortwave radiative properties of ice crystals in clouds\n17.1. General Interactions\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nGeneral shortwave radiative interactions with cloud ice crystals", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.general_interactions') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"scattering\" \n# \"emission/absorption\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "17.2. Physical Representation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nPhysical representation of cloud ice crystals in the shortwave radiation scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.physical_representation') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"bi-modal size distribution\" \n# \"ensemble of ice crystals\" \n# \"mean projected area\" \n# \"ice water path\" \n# \"crystal asymmetry\" \n# \"crystal aspect ratio\" \n# \"effective crystal radius\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "17.3. Optical Methods\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nOptical methods applicable to cloud ice crystals in the shortwave radiation scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.optical_methods') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"T-matrix\" \n# \"geometric optics\" \n# \"finite difference time domain (FDTD)\" \n# \"Mie theory\" \n# \"anomalous diffraction approximation\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "18. Radiation --&gt; Shortwave Cloud Liquid\nShortwave radiative properties of liquid droplets in clouds\n18.1. General Interactions\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nGeneral shortwave radiative interactions with cloud liquid droplets", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.general_interactions') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"scattering\" \n# \"emission/absorption\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "18.2. Physical Representation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nPhysical representation of cloud liquid droplets in the shortwave radiation scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.physical_representation') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"cloud droplet number concentration\" \n# \"effective cloud droplet radii\" \n# \"droplet size distribution\" \n# \"liquid water path\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "18.3. Optical Methods\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nOptical methods applicable to cloud liquid droplets in the shortwave radiation scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.optical_methods') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"geometric optics\" \n# \"Mie theory\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "19. Radiation --&gt; Shortwave Cloud Inhomogeneity\nCloud inhomogeneity in the shortwave radiation scheme\n19.1. Cloud Inhomogeneity\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nMethod for taking into account horizontal cloud inhomogeneity", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.atmos.radiation.shortwave_cloud_inhomogeneity.cloud_inhomogeneity') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Monte Carlo Independent Column Approximation\" \n# \"Triplecloud\" \n# \"analytic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "20. Radiation --&gt; Shortwave Aerosols\nShortwave radiative properties of aerosols\n20.1. General Interactions\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nGeneral shortwave radiative interactions with aerosols", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.general_interactions') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"scattering\" \n# \"emission/absorption\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "20.2. Physical Representation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nPhysical representation of aerosols in the shortwave radiation scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.physical_representation') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"number concentration\" \n# \"effective radii\" \n# \"size distribution\" \n# \"asymmetry\" \n# \"aspect ratio\" \n# \"mixing state\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "20.3. Optical Methods\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nOptical methods applicable to aerosols in the shortwave radiation scheme", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.optical_methods') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"T-matrix\" \n# \"geometric optics\" \n# \"finite difference time domain (FDTD)\" \n# \"Mie theory\" \n# \"anomalous diffraction approximation\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "21. Radiation --&gt; Shortwave Gases\nShortwave radiative properties of gases\n21.1. General Interactions\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nGeneral shortwave radiative interactions with gases", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_gases.general_interactions') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"scattering\" \n# \"emission/absorption\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "22. Radiation --&gt; Longwave Radiation\nProperties of the longwave radiation scheme\n22.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview description of longwave radiation in the atmosphere", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_radiation.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "22.2. Name\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nCommonly used name for the longwave radiation scheme.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_radiation.name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "22.3. Spectral Integration\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nLongwave radiation scheme spectral integration", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.atmos.radiation.longwave_radiation.spectral_integration') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"wide-band model\" \n# \"correlated-k\" \n# \"exponential sum fitting\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "22.4. Transport Calculation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nLongwave radiation transport calculation methods", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_radiation.transport_calculation') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"two-stream\" \n# \"layer interaction\" \n# \"bulk\" \n# \"adaptive\" \n# \"multi-stream\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "22.5. Spectral Intervals\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nLongwave radiation scheme number of spectral intervals", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_radiation.spectral_intervals') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "23. Radiation --&gt; Longwave GHG\nRepresentation of greenhouse gases in the longwave radiation scheme\n23.1. Greenhouse Gas Complexity\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nComplexity of greenhouse gases whose longwave radiative effects are taken into account in the atmosphere model", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.atmos.radiation.longwave_GHG.greenhouse_gas_complexity') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"CO2\" \n# \"CH4\" \n# \"N2O\" \n# \"CFC-11 eq\" \n# \"CFC-12 eq\" \n# \"HFC-134a eq\" \n# \"Explicit ODSs\" \n# \"Explicit other fluorinated gases\" \n# \"O3\" \n# \"H2O\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "23.2. ODS\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nOzone depleting substances whose longwave radiative effects are explicitly taken into account in the atmosphere model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_GHG.ODS') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"CFC-12\" \n# \"CFC-11\" \n# \"CFC-113\" \n# \"CFC-114\" \n# \"CFC-115\" \n# \"HCFC-22\" \n# \"HCFC-141b\" \n# \"HCFC-142b\" \n# \"Halon-1211\" \n# \"Halon-1301\" \n# \"Halon-2402\" \n# \"methyl chloroform\" \n# \"carbon tetrachloride\" \n# \"methyl chloride\" \n# \"methylene chloride\" \n# \"chloroform\" \n# \"methyl bromide\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "23.3. Other Fluorinated Gases\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nOther fluorinated gases whose longwave radiative effects are explicitly taken into account in the atmosphere model", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.atmos.radiation.longwave_GHG.other_flourinated_gases') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"HFC-134a\" \n# \"HFC-23\" \n# \"HFC-32\" \n# \"HFC-125\" \n# \"HFC-143a\" \n# \"HFC-152a\" \n# \"HFC-227ea\" \n# \"HFC-236fa\" \n# \"HFC-245fa\" \n# \"HFC-365mfc\" \n# \"HFC-43-10mee\" \n# \"CF4\" \n# \"C2F6\" \n# \"C3F8\" \n# \"C4F10\" \n# \"C5F12\" \n# \"C6F14\" \n# \"C7F16\" \n# \"C8F18\" \n# \"c-C4F8\" \n# \"NF3\" \n# \"SF6\" \n# \"SO2F2\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "24. Radiation --&gt; Longwave Cloud Ice\nLongwave radiative properties of ice crystals in clouds\n24.1. General Interactions\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nGeneral longwave radiative interactions with cloud ice crystals", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.general_interactions') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"scattering\" \n# \"emission/absorption\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "24.2. Physical Representation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nPhysical representation of cloud ice crystals in the longwave radiation scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.physical_reprenstation') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"bi-modal size distribution\" \n# \"ensemble of ice crystals\" \n# \"mean projected area\" \n# \"ice water path\" \n# \"crystal asymmetry\" \n# \"crystal aspect ratio\" \n# \"effective crystal radius\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "24.3. 
Optical Methods\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nOptical methods applicable to cloud ice crystals in the longwave radiation scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.optical_methods') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"T-matrix\" \n# \"geometric optics\" \n# \"finite difference time domain (FDTD)\" \n# \"Mie theory\" \n# \"anomalous diffraction approximation\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "25. Radiation --&gt; Longwave Cloud Liquid\nLongwave radiative properties of liquid droplets in clouds\n25.1. General Interactions\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nGeneral longwave radiative interactions with cloud liquid droplets", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.general_interactions') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"scattering\" \n# \"emission/absorption\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "25.2. Physical Representation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nPhysical representation of cloud liquid droplets in the longwave radiation scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.physical_representation') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"cloud droplet number concentration\" \n# \"effective cloud droplet radii\" \n# \"droplet size distribution\" \n# \"liquid water path\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "25.3. 
Optical Methods\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nOptical methods applicable to cloud liquid droplets in the longwave radiation scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.optical_methods') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"geometric optics\" \n# \"Mie theory\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "26. Radiation --&gt; Longwave Cloud Inhomogeneity\nCloud inhomogeneity in the longwave radiation scheme\n26.1. Cloud Inhomogeneity\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nMethod for taking into account horizontal cloud inhomogeneity", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_cloud_inhomogeneity.cloud_inhomogeneity') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Monte Carlo Independent Column Approximation\" \n# \"Triplecloud\" \n# \"analytic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "27. Radiation --&gt; Longwave Aerosols\nLongwave radiative properties of aerosols\n27.1. General Interactions\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nGeneral longwave radiative interactions with aerosols", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_aerosols.general_interactions') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"scattering\" \n# \"emission/absorption\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "27.2. Physical Representation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nPhysical representation of aerosols in the longwave radiation scheme", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.atmos.radiation.longwave_aerosols.physical_representation') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"number concentration\" \n# \"effective radii\" \n# \"size distribution\" \n# \"asymmetry\" \n# \"aspect ratio\" \n# \"mixing state\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "27.3. Optical Methods\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nOptical methods applicable to aerosols in the longwave radiation scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_aerosols.optical_methods') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"T-matrix\" \n# \"geometric optics\" \n# \"finite difference time domain (FDTD)\" \n# \"Mie theory\" \n# \"anomalous diffraction approximation\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "28. Radiation --&gt; Longwave Gases\nLongwave radiative properties of gases\n28.1. General Interactions\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nGeneral longwave radiative interactions with gases", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_gases.general_interactions') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"scattering\" \n# \"emission/absorption\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "29. Turbulence Convection\nAtmosphere Convective Turbulence and Clouds\n29.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview description of atmosphere convection and turbulence", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.turbulence_convection.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "30. 
Turbulence Convection --&gt; Boundary Layer Turbulence\nProperties of the boundary layer turbulence scheme\n30.1. Scheme Name\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nBoundary layer turbulence scheme name", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.scheme_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Mellor-Yamada\" \n# \"Holtslag-Boville\" \n# \"EDMF\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "30.2. Scheme Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nBoundary layer turbulence scheme type", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.scheme_type') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"TKE prognostic\" \n# \"TKE diagnostic\" \n# \"TKE coupled with water\" \n# \"vertical profile of Kz\" \n# \"non-local diffusion\" \n# \"Monin-Obukhov similarity\" \n# \"Coastal Buddy Scheme\" \n# \"Coupled with convection\" \n# \"Coupled with gravity waves\" \n# \"Depth capped at cloud base\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "30.3. Closure Order\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nBoundary layer turbulence scheme closure order", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.closure_order') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "30.4. Counter Gradient\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nUses boundary layer turbulence scheme counter gradient", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.counter_gradient') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "31. Turbulence Convection --&gt; Deep Convection\nProperties of the deep convection scheme\n31.1. Scheme Name\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDeep convection scheme name", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "31.2. Scheme Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nDeep convection scheme type", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_type') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"mass-flux\" \n# \"adjustment\" \n# \"plume ensemble\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "31.3. Scheme Method\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nDeep convection scheme method", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_method') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"CAPE\" \n# \"bulk\" \n# \"ensemble\" \n# \"CAPE/WFN based\" \n# \"TKE/CIN based\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "31.4. Processes\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nPhysical processes taken into account in the parameterisation of deep convection", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.processes') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"vertical momentum transport\" \n# \"convective momentum transport\" \n# \"entrainment\" \n# \"detrainment\" \n# \"penetrative convection\" \n# \"updrafts\" \n# \"downdrafts\" \n# \"radiative effect of anvils\" \n# \"re-evaporation of convective precipitation\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "31.5. Microphysics\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nMicrophysics scheme for deep convection. Microphysical processes directly control the amount of detrainment of cloud hydrometeors and water vapor from updrafts", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.microphysics') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"tuning parameter based\" \n# \"single moment\" \n# \"two moment\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "32. Turbulence Convection --&gt; Shallow Convection\nProperties of the shallow convection scheme\n32.1. Scheme Name\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nShallow convection scheme name", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "32.2. Scheme Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nShallow convection scheme type", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_type') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"mass-flux\" \n# \"cumulus-capped boundary layer\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "32.3. Scheme Method\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nShallow convection scheme method", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"same as deep (unified)\" \n# \"included in boundary layer turbulence\" \n# \"separate diagnosis\" \n# TODO - please enter value(s)\n", "32.4. Processes\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nPhysical processes taken into account in the parameterisation of shallow convection", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.processes') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"convective momentum transport\" \n# \"entrainment\" \n# \"detrainment\" \n# \"penetrative convection\" \n# \"re-evaporation of convective precipitation\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "32.5. Microphysics\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nMicrophysics scheme for shallow convection", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.microphysics') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"tuning parameter based\" \n# \"single moment\" \n# \"two moment\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "33. 
Microphysics Precipitation\nLarge Scale Cloud Microphysics and Precipitation\n33.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview description of large scale cloud microphysics and precipitation", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.microphysics_precipitation.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "34. Microphysics Precipitation --&gt; Large Scale Precipitation\nProperties of the large scale precipitation scheme\n34.1. Scheme Name\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nCommonly used name of the large scale precipitation parameterisation scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_precipitation.scheme_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "34.2. Hydrometeors\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nPrecipitating hydrometeors taken into account in the large scale precipitation scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_precipitation.hydrometeors') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"liquid rain\" \n# \"snow\" \n# \"hail\" \n# \"graupel\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "35. Microphysics Precipitation --&gt; Large Scale Cloud Microphysics\nProperties of the large scale cloud microphysics scheme\n35.1. Scheme Name\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nCommonly used name of the microphysics parameterisation scheme used for large scale clouds.", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_cloud_microphysics.scheme_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "35.2. Processes\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nLarge scale cloud microphysics processes", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_cloud_microphysics.processes') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"mixed phase\" \n# \"cloud droplets\" \n# \"cloud ice\" \n# \"ice nucleation\" \n# \"water vapour deposition\" \n# \"effect of raindrops\" \n# \"effect of snow\" \n# \"effect of graupel\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "36. Cloud Scheme\nCharacteristics of the cloud scheme\n36.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview description of the atmosphere cloud scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.cloud_scheme.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "36.2. Name\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nCommonly used name for the cloud scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.cloud_scheme.name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "36.3. Atmos Coupling\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nAtmosphere components that are linked to the cloud scheme", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.atmos.cloud_scheme.atmos_coupling') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"atmosphere_radiation\" \n# \"atmosphere_microphysics_precipitation\" \n# \"atmosphere_turbulence_convection\" \n# \"atmosphere_gravity_waves\" \n# \"atmosphere_solar\" \n# \"atmosphere_volcano\" \n# \"atmosphere_cloud_simulator\" \n# TODO - please enter value(s)\n", "36.4. Uses Separate Treatment\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDifferent cloud schemes for the different types of clouds (convective, stratiform and boundary layer)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.cloud_scheme.uses_separate_treatment') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "36.5. Processes\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nProcesses included in the cloud scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.cloud_scheme.processes') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"entrainment\" \n# \"detrainment\" \n# \"bulk cloud\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "36.6. Prognostic Scheme\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs the cloud scheme a prognostic scheme?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.cloud_scheme.prognostic_scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "36.7. Diagnostic Scheme\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs the cloud scheme a diagnostic scheme?", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.atmos.cloud_scheme.diagnostic_scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "36.8. Prognostic Variables\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nList the prognostic variables used by the cloud scheme, if applicable.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.cloud_scheme.prognostic_variables') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"cloud amount\" \n# \"liquid\" \n# \"ice\" \n# \"rain\" \n# \"snow\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "37. Cloud Scheme --&gt; Optical Cloud Properties\nOptical cloud properties\n37.1. Cloud Overlap Method\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nMethod for taking into account overlapping of cloud layers", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.cloud_scheme.optical_cloud_properties.cloud_overlap_method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"random\" \n# \"maximum\" \n# \"maximum-random\" \n# \"exponential\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "37.2. Cloud Inhomogeneity\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nMethod for taking into account cloud inhomogeneity", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.cloud_scheme.optical_cloud_properties.cloud_inhomogeneity') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "38. Cloud Scheme --&gt; Sub Grid Scale Water Distribution\nSub-grid scale water distribution\n38.1. Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nSub-grid scale water distribution type", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prognostic\" \n# \"diagnostic\" \n# TODO - please enter value(s)\n", "38.2. Function Name\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nSub-grid scale water distribution function name", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.function_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "38.3. Function Order\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nSub-grid scale water distribution function type", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.function_order') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "38.4. Convection Coupling\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nSub-grid scale water distribution coupling with convection", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.convection_coupling') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"coupled with deep\" \n# \"coupled with shallow\" \n# \"not coupled with convection\" \n# TODO - please enter value(s)\n", "39. Cloud Scheme --&gt; Sub Grid Scale Ice Distribution\nSub-grid scale ice distribution\n39.1. Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nSub-grid scale ice distribution type", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prognostic\" \n# \"diagnostic\" \n# TODO - please enter value(s)\n", "39.2. Function Name\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nSub-grid scale ice distribution function name", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.function_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "39.3. Function Order\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nSub-grid scale ice distribution function type", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.function_order') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "39.4. Convection Coupling\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nSub-grid scale ice distribution coupling with convection", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.convection_coupling') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"coupled with deep\" \n# \"coupled with shallow\" \n# \"not coupled with convection\" \n# TODO - please enter value(s)\n", "40. Observation Simulation\nCharacteristics of observation simulation\n40.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview description of observation simulator characteristics", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.observation_simulation.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "41. 
Observation Simulation --&gt; Isscp Attributes\nISSCP Characteristics\n41.1. Top Height Estimation Method\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nCloud simulator ISSCP top height estimation method", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.observation_simulation.isscp_attributes.top_height_estimation_method') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"no adjustment\" \n# \"IR brightness\" \n# \"visible optical depth\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "41.2. Top Height Direction\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nCloud simulator ISSCP top height direction", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.observation_simulation.isscp_attributes.top_height_direction') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"lowest altitude level\" \n# \"highest altitude level\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "42. Observation Simulation --&gt; Cosp Attributes\nCFMIP Observational Simulator Package attributes\n42.1. Run Configuration\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nCloud simulator COSP run configuration", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.run_configuration') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Inline\" \n# \"Offline\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "42.2. Number Of Grid Points\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nCloud simulator COSP number of grid points", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_grid_points') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "42.3. Number Of Sub Columns\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nCloud simulator COSP number of sub-columns used to simulate sub-grid variability", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_sub_columns') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "42.4. Number Of Levels\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nCloud simulator COSP number of levels", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_levels') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "43. Observation Simulation --&gt; Radar Inputs\nCharacteristics of the cloud radar simulator\n43.1. Frequency\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nCloud simulator radar frequency (Hz)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.frequency') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "43.2. Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nCloud simulator radar type", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"surface\" \n# \"space borne\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "43.3. 
Gas Absorption\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nCloud simulator radar uses gas absorption", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.gas_absorption') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "43.4. Effective Radius\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nCloud simulator radar uses effective radius", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.effective_radius') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "44. Observation Simulation --&gt; Lidar Inputs\nCharacteristics of the cloud lidar simulator\n44.1. Ice Types\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nCloud simulator lidar ice type", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.observation_simulation.lidar_inputs.ice_types') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"ice spheres\" \n# \"ice non-spherical\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "44.2. Overlap\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nCloud simulator lidar overlap", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.observation_simulation.lidar_inputs.overlap') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"max\" \n# \"random\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "45. Gravity Waves\nCharacteristics of the parameterised gravity waves in the atmosphere, whether from orography or other sources.\n45.1. 
Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview description of gravity wave parameterisation in the atmosphere", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.gravity_waves.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "45.2. Sponge Layer\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nSponge layer in the upper levels in order to avoid gravity wave reflection at the top.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.gravity_waves.sponge_layer') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Rayleigh friction\" \n# \"Diffusive sponge layer\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "45.3. Background\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nBackground wave distribution", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.gravity_waves.background') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"continuous spectrum\" \n# \"discrete spectrum\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "45.4. Subgrid Scale Orography\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nSubgrid scale orography effects taken into account.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.gravity_waves.subgrid_scale_orography') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"effect on drag\" \n# \"effect on lifting\" \n# \"enhanced topography\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "46. Gravity Waves --&gt; Orographic Gravity Waves\nGravity waves generated due to the presence of orography\n46.1. 
Name\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nCommonly used name for the orographic gravity wave scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "46.2. Source Mechanisms\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nOrographic gravity wave source mechanisms", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.source_mechanisms') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"linear mountain waves\" \n# \"hydraulic jump\" \n# \"envelope orography\" \n# \"low level flow blocking\" \n# \"statistical sub-grid scale variance\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "46.3. Calculation Method\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nOrographic gravity wave calculation method", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.calculation_method') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"non-linear calculation\" \n# \"more than two cardinal directions\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "46.4. Propagation Scheme\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOrographic gravity wave propagation scheme", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.propagation_scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"linear theory\" \n# \"non-linear theory\" \n# \"includes boundary layer ducting\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "46.5. Dissipation Scheme\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOrographic gravity wave dissipation scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.dissipation_scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"total wave\" \n# \"single wave\" \n# \"spectral\" \n# \"linear\" \n# \"wave saturation vs Richardson number\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "47. Gravity Waves --&gt; Non Orographic Gravity Waves\nGravity waves generated by non-orographic processes.\n47.1. Name\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nCommonly used name for the non-orographic gravity wave scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "47.2. Source Mechanisms\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nNon-orographic gravity wave source mechanisms", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.source_mechanisms') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"convection\" \n# \"precipitation\" \n# \"background spectrum\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "47.3. 
Calculation Method\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nNon-orographic gravity wave calculation method", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.calculation_method') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"spatially dependent\" \n# \"temporally dependent\" \n# TODO - please enter value(s)\n", "47.4. Propagation Scheme\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nNon-orographic gravity wave propagation scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.propagation_scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"linear theory\" \n# \"non-linear theory\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "47.5. Dissipation Scheme\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nNon-orographic gravity wave dissipation scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.dissipation_scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"total wave\" \n# \"single wave\" \n# \"spectral\" \n# \"linear\" \n# \"wave saturation vs Richardson number\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "48. Solar\nTop of atmosphere solar insolation characteristics\n48.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview description of solar insolation of the atmosphere", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.solar.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "49. 
Solar --&gt; Solar Pathways\nPathways for solar forcing of the atmosphere\n49.1. Pathways\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nPathways for the solar forcing of the atmosphere model domain", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.solar.solar_pathways.pathways') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"SW radiation\" \n# \"precipitating energetic particles\" \n# \"cosmic rays\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "50. Solar --&gt; Solar Constant\nSolar constant and top of atmosphere insolation characteristics\n50.1. Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTime adaptation of the solar constant.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.solar.solar_constant.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"fixed\" \n# \"transient\" \n# TODO - please enter value(s)\n", "50.2. Fixed Value\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nIf the solar constant is fixed, enter the value of the solar constant (W m-2).", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.solar.solar_constant.fixed_value') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "50.3. Transient Characteristics\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nsolar constant transient characteristics (W m-2)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.solar.solar_constant.transient_characteristics') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "51. Solar --&gt; Orbital Parameters\nOrbital parameters and top of atmosphere insolation characteristics\n51.1. 
Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTime adaptation of orbital parameters", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.solar.orbital_parameters.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"fixed\" \n# \"transient\" \n# TODO - please enter value(s)\n", "51.2. Fixed Reference Date\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nReference date for fixed orbital parameters (yyyy)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.solar.orbital_parameters.fixed_reference_date') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "51.3. Transient Method\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescription of transient orbital parameters", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.solar.orbital_parameters.transient_method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "51.4. Computation Method\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nMethod used for computing orbital parameters.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.solar.orbital_parameters.computation_method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Berger 1978\" \n# \"Laskar 2004\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "52. Solar --&gt; Insolation Ozone\nImpact of solar insolation on stratospheric ozone\n52.1. Solar Ozone Impact\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDoes top of atmosphere insolation impact on stratospheric ozone?", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.atmos.solar.insolation_ozone.solar_ozone_impact') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "53. Volcanos\nCharacteristics of the implementation of volcanoes\n53.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview description of the implementation of volcanic effects in the atmosphere", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.volcanos.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "54. Volcanos --&gt; Volcanoes Treatment\nTreatment of volcanoes in the atmosphere\n54.1. Volcanoes Implementation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nHow volcanic effects are modeled in the atmosphere.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.volcanos.volcanoes_treatment.volcanoes_implementation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"high frequency solar constant anomaly\" \n# \"stratospheric aerosols optical thickness\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "©2017 ES-DOC" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", 
"code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
maxis42/ML-DA-Coursera-Yandex-MIPT
4 Stats for data analysis/Lectures notebooks/11 non-parametric tests rel ind/stat.non_parametric_tests_ind.ipynb
mit
[ "Non-parametric tests\nTest | One-sample | Two-sample | Two-sample (paired samples)\n ------------- | ------------- | ------------- | -------------\n Sign | $\\times$ | | $\\times$ \n Rank | $\\times$ | $\\times$ | $\\times$\nPermutation | $\\times$ | $\\times$ | $\\times$ \nReal estate in Seattle\nWe have data on the sale prices of real estate in Seattle for 50 transactions in 2001 and 50 in 2002. Did prices change on average?", "import numpy as np\nimport pandas as pd\nimport itertools\n\nfrom scipy import stats\nfrom statsmodels.stats.descriptivestats import sign_test\nfrom statsmodels.stats.weightstats import zconfint\nfrom statsmodels.stats.weightstats import *\n\n%pylab inline", "Loading the data", "seattle_data = pd.read_csv('seattle.txt', sep = '\\t', header = 0)\n\nseattle_data.shape\n\nseattle_data.head()\n\nprice2001 = seattle_data[seattle_data['Year'] == 2001].Price\nprice2002 = seattle_data[seattle_data['Year'] == 2002].Price\n\npylab.figure(figsize=(12,4))\n\npylab.subplot(1,2,1)\npylab.grid()\npylab.hist(price2001, color = 'r')\npylab.xlabel('2001')\n\npylab.subplot(1,2,2)\npylab.grid()\npylab.hist(price2002, color = 'b')\npylab.xlabel('2002')\n\npylab.show()", "Two-sample tests for independent samples", "print '95%% confidence interval for the mean: [%f, %f]' % zconfint(price2001)\n\nprint '95%% confidence interval for the mean: [%f, %f]' % zconfint(price2002)", "Mann-Whitney rank test\n$H_0\\colon F_{X_1}(x) = F_{X_2}(x)$\n$H_1\\colon F_{X_1}(x) = F_{X_2}(x + \\Delta), \\Delta\\neq 0$", "stats.mannwhitneyu(price2001, price2002)", "Permutation test\n$H_0\\colon F_{X_1}(x) = F_{X_2}(x)$\n$H_1\\colon F_{X_1}(x) = F_{X_2}(x + \\Delta), \\Delta\\neq 0$", "def permutation_t_stat_ind(sample1, sample2):\n return np.mean(sample1) - np.mean(sample2)\n\ndef get_random_combinations(n1, n2, max_combinations):\n index = range(n1 + n2)\n indices = set([tuple(index)])\n for i in range(max_combinations - 1):\n np.random.shuffle(index)\n
indices.add(tuple(index))\n return [(index[:n1], index[n1:]) for index in indices]\n\ndef permutation_zero_dist_ind(sample1, sample2, max_combinations = None):\n joined_sample = np.hstack((sample1, sample2))\n n1 = len(sample1)\n n = len(joined_sample)\n \n if max_combinations:\n indices = get_random_combinations(n1, len(sample2), max_combinations)\n else:\n indices = [(list(index), filter(lambda i: i not in index, range(n))) \\\n for index in itertools.combinations(range(n), n1)]\n \n distr = [joined_sample[list(i[0])].mean() - joined_sample[list(i[1])].mean() \\\n for i in indices]\n return distr\n\npylab.hist(permutation_zero_dist_ind(price2001, price2002, max_combinations = 1000))\npylab.show()\n\ndef permutation_test(sample, mean, max_permutations = None, alternative = 'two-sided'):\n if alternative not in ('two-sided', 'less', 'greater'):\n raise ValueError(\"alternative not recognized\\n\"\n \"should be 'two-sided', 'less' or 'greater'\")\n \n t_stat = permutation_t_stat_ind(sample, mean)\n \n zero_distr = permutation_zero_dist_ind(sample, mean, max_permutations)\n \n if alternative == 'two-sided':\n return sum([1. if abs(x) >= abs(t_stat) else 0. for x in zero_distr]) / len(zero_distr)\n \n if alternative == 'less':\n return sum([1. if x <= t_stat else 0. for x in zero_distr]) / len(zero_distr)\n\n if alternative == 'greater':\n return sum([1. if x >= t_stat else 0. for x in zero_distr]) / len(zero_distr)\n\nprint \"p-value: %f\" % permutation_test(price2001, price2002, max_permutations = 10000)\n\nprint \"p-value: %f\" % permutation_test(price2001, price2002, max_permutations = 50000)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
ImAlexisSaez/deep-learning-specialization-coursera
course_1/week_3/assignment_1/planar_data_classification_with_one_hidden_layer_v3.ipynb
mit
[ "Planar data classification with one hidden layer\nWelcome to your week 3 programming assignment. It's time to build your first neural network, which will have a hidden layer. You will see a big difference between this model and the one you implemented using logistic regression. \nYou will learn how to:\n- Implement a 2-class classification neural network with a single hidden layer\n- Use units with a non-linear activation function, such as tanh \n- Compute the cross entropy loss \n- Implement forward and backward propagation\n1 - Packages\nLet's first import all the packages that you will need during this assignment.\n- numpy is the fundamental package for scientific computing with Python.\n- sklearn provides simple and efficient tools for data mining and data analysis. \n- matplotlib is a library for plotting graphs in Python.\n- testCases provides some test examples to assess the correctness of your functions\n- planar_utils provide various useful functions used in this assignment", "# Package imports\nimport numpy as np\nimport matplotlib.pyplot as plt\nfrom testCases import *\nimport sklearn\nimport sklearn.datasets\nimport sklearn.linear_model\nfrom planar_utils import plot_decision_boundary, sigmoid, load_planar_dataset, load_extra_datasets\n\n%matplotlib inline\n\nnp.random.seed(1) # set a seed so that the results are consistent", "2 - Dataset\nFirst, let's get the dataset you will work on. The following code will load a \"flower\" 2-class dataset into variables X and Y.", "X, Y = load_planar_dataset()", "Visualize the dataset using matplotlib. The data looks like a \"flower\" with some red (label y=0) and some blue (y=1) points. 
Your goal is to build a model to fit this data.", "# Visualize the data:\nplt.scatter(X[0, :], X[1, :], c=Y, s=40, cmap=plt.cm.Spectral);", "You have:\n - a numpy-array (matrix) X that contains your features (x1, x2)\n - a numpy-array (vector) Y that contains your labels (red:0, blue:1).\nLet's first get a better sense of what our data is like. \nExercise: How many training examples do you have? In addition, what is the shape of the variables X and Y? \nHint: How do you get the shape of a numpy array? (help)", "### START CODE HERE ### (≈ 3 lines of code)\nshape_X = X.shape\nshape_Y = Y.shape\nm = Y.shape[1] # training set size\n### END CODE HERE ###\n\nprint ('The shape of X is: ' + str(shape_X))\nprint ('The shape of Y is: ' + str(shape_Y))\nprint ('I have m = %d training examples!' % (m))", "Expected Output:\n<table style=\"width:20%\">\n\n <tr>\n <td>**shape of X**</td>\n <td> (2, 400) </td> \n </tr>\n\n <tr>\n <td>**shape of Y**</td>\n <td>(1, 400) </td> \n </tr>\n\n <tr>\n <td>**m**</td>\n <td> 400 </td> \n </tr>\n\n</table>\n\n3 - Simple Logistic Regression\nBefore building a full neural network, let's first see how logistic regression performs on this problem. You can use sklearn's built-in functions to do that. Run the code below to train a logistic regression classifier on the dataset.", "# Train the logistic regression classifier\nclf = sklearn.linear_model.LogisticRegressionCV();\nclf.fit(X.T, Y.T);", "You can now plot the decision boundary of these models. 
Run the code below.", "# Plot the decision boundary for logistic regression\nplot_decision_boundary(lambda x: clf.predict(x), X, Y)\nplt.title(\"Logistic Regression\")\n\n# Print accuracy\nLR_predictions = clf.predict(X.T)\nprint ('Accuracy of logistic regression: %d ' % float((np.dot(Y,LR_predictions) + np.dot(1-Y,1-LR_predictions))/float(Y.size)*100) +\n '% ' + \"(percentage of correctly labelled datapoints)\")", "Expected Output:\n<table style=\"width:20%\">\n <tr>\n <td>**Accuracy**</td>\n <td> 47% </td> \n </tr>\n\n</table>\n\nInterpretation: The dataset is not linearly separable, so logistic regression doesn't perform well. Hopefully a neural network will do better. Let's try this now! \n4 - Neural Network model\nLogistic regression did not work well on the \"flower dataset\". You are going to train a Neural Network with a single hidden layer.\nHere is our model:\n<img src=\"images/classification_kiank.png\" style=\"width:600px;height:300px;\">\nMathematically:\nFor one example $x^{(i)}$:\n$$z^{[1] (i)} = W^{[1]} x^{(i)} + b^{[1] (i)}\\tag{1}$$ \n$$a^{[1] (i)} = \\tanh(z^{[1] (i)})\\tag{2}$$\n$$z^{[2] (i)} = W^{[2]} a^{[1] (i)} + b^{[2] (i)}\\tag{3}$$\n$$\\hat{y}^{(i)} = a^{[2] (i)} = \\sigma(z^{ [2] (i)})\\tag{4}$$\n$$y^{(i)}_{prediction} = \\begin{cases} 1 & \\mbox{if } a^{2} > 0.5 \\ 0 & \\mbox{otherwise } \\end{cases}\\tag{5}$$\nGiven the predictions on all the examples, you can also compute the cost $J$ as follows: \n$$J = - \\frac{1}{m} \\sum\\limits_{i = 0}^{m} \\large\\left(\\small y^{(i)}\\log\\left(a^{[2] (i)}\\right) + (1-y^{(i)})\\log\\left(1- a^{[2] (i)}\\right) \\large \\right) \\small \\tag{6}$$\nReminder: The general methodology to build a Neural Network is to:\n 1. Define the neural network structure ( # of input units, # of hidden units, etc). \n 2. Initialize the model's parameters\n 3. 
Loop:\n - Implement forward propagation\n - Compute loss\n - Implement backward propagation to get the gradients\n - Update parameters (gradient descent)\nYou often build helper functions to compute steps 1-3 and then merge them into one function we call nn_model(). Once you've built nn_model() and learnt the right parameters, you can make predictions on new data.\n4.1 - Defining the neural network structure\nExercise: Define three variables:\n - n_x: the size of the input layer\n - n_h: the size of the hidden layer (set this to 4) \n - n_y: the size of the output layer\nHint: Use shapes of X and Y to find n_x and n_y. Also, hard code the hidden layer size to be 4.", "# GRADED FUNCTION: layer_sizes\n\ndef layer_sizes(X, Y):\n \"\"\"\n Arguments:\n X -- input dataset of shape (input size, number of examples)\n Y -- labels of shape (output size, number of examples)\n \n Returns:\n n_x -- the size of the input layer\n n_h -- the size of the hidden layer\n n_y -- the size of the output layer\n \"\"\"\n ### START CODE HERE ### (≈ 3 lines of code)\n n_x = X.shape[0] # size of input layer\n n_h = 4\n n_y = Y.shape[0] # size of output layer\n ### END CODE HERE ###\n return (n_x, n_h, n_y)\n\nX_assess, Y_assess = layer_sizes_test_case()\n(n_x, n_h, n_y) = layer_sizes(X_assess, Y_assess)\nprint(\"The size of the input layer is: n_x = \" + str(n_x))\nprint(\"The size of the hidden layer is: n_h = \" + str(n_h))\nprint(\"The size of the output layer is: n_y = \" + str(n_y))", "Expected Output (these are not the sizes you will use for your network, they are just used to assess the function you've just coded).\n<table style=\"width:20%\">\n <tr>\n <td>**n_x**</td>\n <td> 5 </td> \n </tr>\n\n <tr>\n <td>**n_h**</td>\n <td> 4 </td> \n </tr>\n\n <tr>\n <td>**n_y**</td>\n <td> 2 </td> \n </tr>\n\n</table>\n\n4.2 - Initialize the model's parameters\nExercise: Implement the function initialize_parameters().\nInstructions:\n- Make sure your parameters' sizes are right. 
Refer to the neural network figure above if needed.\n- You will initialize the weights matrices with random values. \n - Use: np.random.randn(a,b) * 0.01 to randomly initialize a matrix of shape (a,b).\n- You will initialize the bias vectors as zeros. \n - Use: np.zeros((a,b)) to initialize a matrix of shape (a,b) with zeros.", "# GRADED FUNCTION: initialize_parameters\n\ndef initialize_parameters(n_x, n_h, n_y):\n \"\"\"\n Argument:\n n_x -- size of the input layer\n n_h -- size of the hidden layer\n n_y -- size of the output layer\n \n Returns:\n params -- python dictionary containing your parameters:\n W1 -- weight matrix of shape (n_h, n_x)\n b1 -- bias vector of shape (n_h, 1)\n W2 -- weight matrix of shape (n_y, n_h)\n b2 -- bias vector of shape (n_y, 1)\n \"\"\"\n \n np.random.seed(2) # we set up a seed so that your output matches ours although the initialization is random.\n \n ### START CODE HERE ### (≈ 4 lines of code)\n W1 = np.random.randn(n_h, n_x) * 0.01\n b1 = np.zeros((n_h, 1))\n W2 = np.random.randn(n_y, n_h) * 0.01\n b2 = np.zeros((n_y, 1))\n ### END CODE HERE ###\n \n assert (W1.shape == (n_h, n_x))\n assert (b1.shape == (n_h, 1))\n assert (W2.shape == (n_y, n_h))\n assert (b2.shape == (n_y, 1))\n \n parameters = {\"W1\": W1,\n \"b1\": b1,\n \"W2\": W2,\n \"b2\": b2}\n \n return parameters\n\nn_x, n_h, n_y = initialize_parameters_test_case()\n\nparameters = initialize_parameters(n_x, n_h, n_y)\nprint(\"W1 = \" + str(parameters[\"W1\"]))\nprint(\"b1 = \" + str(parameters[\"b1\"]))\nprint(\"W2 = \" + str(parameters[\"W2\"]))\nprint(\"b2 = \" + str(parameters[\"b2\"]))", "Expected Output:\n<table style=\"width:90%\">\n <tr>\n <td>**W1**</td>\n <td> [[-0.00416758 -0.00056267]\n [-0.02136196 0.01640271]\n [-0.01793436 -0.00841747]\n [ 0.00502881 -0.01245288]] </td> \n </tr>\n\n <tr>\n <td>**b1**</td>\n <td> [[ 0.]\n [ 0.]\n [ 0.]\n [ 0.]] </td> \n </tr>\n\n <tr>\n <td>**W2**</td>\n <td> [[-0.01057952 -0.00909008 0.00551454 0.02292208]]</td> \n 
</tr>\n\n\n <tr>\n <td>**b2**</td>\n <td> [[ 0.]] </td> \n </tr>\n\n</table>\n\n4.3 - The Loop\nQuestion: Implement forward_propagation().\nInstructions:\n- Look above at the mathematical representation of your classifier.\n- You can use the function sigmoid(). It is built-in (imported) in the notebook.\n- You can use the function np.tanh(). It is part of the numpy library.\n- The steps you have to implement are:\n 1. Retrieve each parameter from the dictionary \"parameters\" (which is the output of initialize_parameters()) by using parameters[\"..\"].\n 2. Implement Forward Propagation. Compute $Z^{[1]}, A^{[1]}, Z^{[2]}$ and $A^{[2]}$ (the vector of all your predictions on all the examples in the training set).\n- Values needed in the backpropagation are stored in \"cache\". The cache will be given as an input to the backpropagation function.", "# GRADED FUNCTION: forward_propagation\n\ndef forward_propagation(X, parameters):\n \"\"\"\n Argument:\n X -- input data of size (n_x, m)\n parameters -- python dictionary containing your parameters (output of initialization function)\n \n Returns:\n A2 -- The sigmoid output of the second activation\n cache -- a dictionary containing \"Z1\", \"A1\", \"Z2\" and \"A2\"\n \"\"\"\n # Retrieve each parameter from the dictionary \"parameters\"\n ### START CODE HERE ### (≈ 4 lines of code)\n W1 = parameters[\"W1\"]\n b1 = parameters[\"b1\"]\n W2 = parameters[\"W2\"]\n b2 = parameters[\"b2\"]\n ### END CODE HERE ###\n \n # Implement Forward Propagation to calculate A2 (probabilities)\n ### START CODE HERE ### (≈ 4 lines of code)\n Z1 = np.dot(W1, X) + b1\n A1 = np.tanh(Z1)\n Z2 = np.dot(W2, A1) + b2\n A2 = sigmoid(Z2)\n ### END CODE HERE ###\n \n assert(A2.shape == (1, X.shape[1]))\n \n cache = {\"Z1\": Z1,\n \"A1\": A1,\n \"Z2\": Z2,\n \"A2\": A2}\n \n return A2, cache\n\nX_assess, parameters = forward_propagation_test_case()\n\nA2, cache = forward_propagation(X_assess, parameters)\n\n# Note: we use the mean here just to make 
sure that your output matches ours. \nprint(np.mean(cache['Z1']) ,np.mean(cache['A1']),np.mean(cache['Z2']),np.mean(cache['A2']))", "Expected Output:\n<table style=\"width:55%\">\n <tr>\n <td> -0.000499755777742 -0.000496963353232 0.000438187450959 0.500109546852 </td> \n </tr>\n</table>\n\nNow that you have computed $A^{[2]}$ (in the Python variable \"A2\"), which contains $a^{2}$ for every example, you can compute the cost function as follows:\n$$J = - \\frac{1}{m} \\sum\\limits_{i = 0}^{m} \\large{(} \\small y^{(i)}\\log\\left(a^{[2] (i)}\\right) + (1-y^{(i)})\\log\\left(1- a^{[2] (i)}\\right) \\large{)} \\small\\tag{13}$$\nExercise: Implement compute_cost() to compute the value of the cost $J$.\nInstructions:\n- There are many ways to implement the cross-entropy loss. To help you, we give you how we would have implemented\n$- \\sum\\limits_{i=0}^{m} y^{(i)}\\log(a^{2})$:\npython\nlogprobs = np.multiply(np.log(A2),Y)\ncost = - np.sum(logprobs) # no need to use a for loop!\n(you can use either np.multiply() and then np.sum() or directly np.dot()).", "# GRADED FUNCTION: compute_cost\n\ndef compute_cost(A2, Y, parameters):\n \"\"\"\n Computes the cross-entropy cost given in equation (13)\n \n Arguments:\n A2 -- The sigmoid output of the second activation, of shape (1, number of examples)\n Y -- \"true\" labels vector of shape (1, number of examples)\n parameters -- python dictionary containing your parameters W1, b1, W2 and b2\n \n Returns:\n cost -- cross-entropy cost given equation (13)\n \"\"\"\n \n m = Y.shape[1] # number of example\n\n # Compute the cross-entropy cost\n ### START CODE HERE ### (≈ 2 lines of code)\n logprobs = np.multiply(Y, np.log(A2)) + np.multiply(np.log(1 - A2), 1 - Y)\n cost = - 1 / m * np.sum(logprobs)\n ### END CODE HERE ###\n \n cost = np.squeeze(cost) # makes sure cost is the dimension we expect. 
\n # E.g., turns [[17]] into 17 \n assert(isinstance(cost, float))\n \n return cost\n\nA2, Y_assess, parameters = compute_cost_test_case()\n\nprint(\"cost = \" + str(compute_cost(A2, Y_assess, parameters)))", "Expected Output:\n<table style=\"width:20%\">\n <tr>\n <td>**cost**</td>\n <td> 0.692919893776 </td> \n </tr>\n\n</table>\n\nUsing the cache computed during forward propagation, you can now implement backward propagation.\nQuestion: Implement the function backward_propagation().\nInstructions:\nBackpropagation is usually the hardest (most mathematical) part in deep learning. To help you, here again is the slide from the lecture on backpropagation. You'll want to use the six equations on the right of this slide, since you are building a vectorized implementation. \n<img src=\"images/grad_summary.png\" style=\"width:600px;height:300px;\">\n<!--\n$\\frac{\\partial \\mathcal{J} }{ \\partial z_{2}^{(i)} } = \\frac{1}{m} (a^{[2](i)} - y^{(i)})$\n\n$\\frac{\\partial \\mathcal{J} }{ \\partial W_2 } = \\frac{\\partial \\mathcal{J} }{ \\partial z_{2}^{(i)} } a^{[1] (i) T} $\n\n$\\frac{\\partial \\mathcal{J} }{ \\partial b_2 } = \\sum_i{\\frac{\\partial \\mathcal{J} }{ \\partial z_{2}^{(i)}}}$\n\n$\\frac{\\partial \\mathcal{J} }{ \\partial z_{1}^{(i)} } = W_2^T \\frac{\\partial \\mathcal{J} }{ \\partial z_{2}^{(i)} } * ( 1 - a^{[1] (i) 2}) $\n\n$\\frac{\\partial \\mathcal{J} }{ \\partial W_1 } = \\frac{\\partial \\mathcal{J} }{ \\partial z_{1}^{(i)} } X^T $\n\n$\\frac{\\partial \\mathcal{J} _i }{ \\partial b_1 } = \\sum_i{\\frac{\\partial \\mathcal{J} }{ \\partial z_{1}^{(i)}}}$\n\n- Note that $*$ denotes elementwise multiplication.\n- The notation you will use is common in deep learning coding:\n - dW1 = $\\frac{\\partial \\mathcal{J} }{ \\partial W_1 }$\n - db1 = $\\frac{\\partial \\mathcal{J} }{ \\partial b_1 }$\n - dW2 = $\\frac{\\partial \\mathcal{J} }{ \\partial W_2 }$\n - db2 = $\\frac{\\partial \\mathcal{J} }{ \\partial b_2 }$\n\n!-->\n\n\nTips:\nTo compute dZ1 
you'll need to compute $g^{[1]'}(Z^{[1]})$. Since $g^{[1]}(.)$ is the tanh activation function, if $a = g^{[1]}(z)$ then $g^{[1]'}(z) = 1-a^2$. So you can compute \n$g^{[1]'}(Z^{[1]})$ using (1 - np.power(A1, 2)).", "# GRADED FUNCTION: backward_propagation\n\ndef backward_propagation(parameters, cache, X, Y):\n \"\"\"\n Implement the backward propagation using the instructions above.\n \n Arguments:\n parameters -- python dictionary containing our parameters \n cache -- a dictionary containing \"Z1\", \"A1\", \"Z2\" and \"A2\".\n X -- input data of shape (2, number of examples)\n Y -- \"true\" labels vector of shape (1, number of examples)\n \n Returns:\n grads -- python dictionary containing your gradients with respect to different parameters\n \"\"\"\n m = X.shape[1]\n \n # First, retrieve W1 and W2 from the dictionary \"parameters\".\n ### START CODE HERE ### (≈ 2 lines of code)\n W1 = parameters[\"W1\"]\n W2 = parameters[\"W2\"]\n ### END CODE HERE ###\n \n # Retrieve also A1 and A2 from dictionary \"cache\".\n ### START CODE HERE ### (≈ 2 lines of code)\n A1 = cache[\"A1\"]\n A2 = cache[\"A2\"]\n ### END CODE HERE ###\n \n # Backward propagation: calculate dW1, db1, dW2, db2. 
\n ### START CODE HERE ### (≈ 6 lines of code, corresponding to 6 equations on slide above)\n dZ2 = A2 - Y\n dW2 = 1 / m * np.dot(dZ2, A1.T)\n db2 = 1 / m * np.sum(dZ2, axis=1, keepdims=True)\n dZ1 = np.dot(W2.T, dZ2) * (1 - np.power(A1, 2))\n dW1 = 1 / m * np.dot(dZ1, X.T)\n db1 = 1 / m * np.sum(dZ1, axis=1, keepdims=True)\n ### END CODE HERE ###\n \n grads = {\"dW1\": dW1,\n \"db1\": db1,\n \"dW2\": dW2,\n \"db2\": db2}\n \n return grads\n\nparameters, cache, X_assess, Y_assess = backward_propagation_test_case()\n\ngrads = backward_propagation(parameters, cache, X_assess, Y_assess)\nprint (\"dW1 = \"+ str(grads[\"dW1\"]))\nprint (\"db1 = \"+ str(grads[\"db1\"]))\nprint (\"dW2 = \"+ str(grads[\"dW2\"]))\nprint (\"db2 = \"+ str(grads[\"db2\"]))", "Expected output:\n<table style=\"width:80%\">\n <tr>\n <td>**dW1**</td>\n <td> [[ 0.01018708 -0.00708701]\n [ 0.00873447 -0.0060768 ]\n [-0.00530847 0.00369379]\n [-0.02206365 0.01535126]] </td> \n </tr>\n\n <tr>\n <td>**db1**</td>\n <td> [[-0.00069728]\n [-0.00060606]\n [ 0.000364 ]\n [ 0.00151207]] </td> \n </tr>\n\n <tr>\n <td>**dW2**</td>\n <td> [[ 0.00363613 0.03153604 0.01162914 -0.01318316]] </td> \n </tr>\n\n\n <tr>\n <td>**db2**</td>\n <td> [[ 0.06589489]] </td> \n </tr>\n\n</table>\n\nQuestion: Implement the update rule. Use gradient descent. You have to use (dW1, db1, dW2, db2) in order to update (W1, b1, W2, b2).\nGeneral gradient descent rule: $ \\theta = \\theta - \\alpha \\frac{\\partial J }{ \\partial \\theta }$ where $\\alpha$ is the learning rate and $\\theta$ represents a parameter.\nIllustration: The gradient descent algorithm with a good learning rate (converging) and a bad learning rate (diverging). 
Images courtesy of Adam Harley.\n<img src=\"images/sgd.gif\" style=\"width:400;height:400;\"> <img src=\"images/sgd_bad.gif\" style=\"width:400;height:400;\">", "# GRADED FUNCTION: update_parameters\n\ndef update_parameters(parameters, grads, learning_rate = 1.2):\n \"\"\"\n Updates parameters using the gradient descent update rule given above\n \n Arguments:\n parameters -- python dictionary containing your parameters \n grads -- python dictionary containing your gradients \n \n Returns:\n parameters -- python dictionary containing your updated parameters \n \"\"\"\n # Retrieve each parameter from the dictionary \"parameters\"\n ### START CODE HERE ### (≈ 4 lines of code)\n W1 = parameters[\"W1\"]\n b1 = parameters[\"b1\"]\n W2 = parameters[\"W2\"]\n b2 = parameters[\"b2\"]\n ### END CODE HERE ###\n \n # Retrieve each gradient from the dictionary \"grads\"\n ### START CODE HERE ### (≈ 4 lines of code)\n dW1 = grads[\"dW1\"]\n db1 = grads[\"db1\"]\n dW2 = grads[\"dW2\"]\n db2 = grads[\"db2\"]\n ## END CODE HERE ###\n \n # Update rule for each parameter\n ### START CODE HERE ### (≈ 4 lines of code)\n W1 = W1 - learning_rate * dW1\n b1 = b1 - learning_rate * db1\n W2 = W2 - learning_rate * dW2\n b2 = b2 - learning_rate * db2\n ### END CODE HERE ###\n \n parameters = {\"W1\": W1,\n \"b1\": b1,\n \"W2\": W2,\n \"b2\": b2}\n \n return parameters\n\nparameters, grads = update_parameters_test_case()\nparameters = update_parameters(parameters, grads)\n\nprint(\"W1 = \" + str(parameters[\"W1\"]))\nprint(\"b1 = \" + str(parameters[\"b1\"]))\nprint(\"W2 = \" + str(parameters[\"W2\"]))\nprint(\"b2 = \" + str(parameters[\"b2\"]))", "Expected Output:\n<table style=\"width:80%\">\n <tr>\n <td>**W1**</td>\n <td> [[-0.00643025 0.01936718]\n [-0.02410458 0.03978052]\n [-0.01653973 -0.02096177]\n [ 0.01046864 -0.05990141]]</td> \n </tr>\n\n <tr>\n <td>**b1**</td>\n <td> [[ -1.02420756e-06]\n [ 1.27373948e-05]\n [ 8.32996807e-07]\n [ -3.20136836e-06]]</td> \n </tr>\n\n <tr>\n 
<td>**W2**</td>\n <td> [[-0.01041081 -0.04463285 0.01758031 0.04747113]] </td> \n </tr>\n\n\n <tr>\n <td>**b2**</td>\n <td> [[ 0.00010457]] </td> \n </tr>\n\n</table>\n\n4.4 - Integrate parts 4.1, 4.2 and 4.3 in nn_model()\nQuestion: Build your neural network model in nn_model().\nInstructions: The neural network model has to use the previous functions in the right order.", "# GRADED FUNCTION: nn_model\n\ndef nn_model(X, Y, n_h, num_iterations = 10000, print_cost=False):\n \"\"\"\n Arguments:\n X -- dataset of shape (2, number of examples)\n Y -- labels of shape (1, number of examples)\n n_h -- size of the hidden layer\n num_iterations -- Number of iterations in gradient descent loop\n print_cost -- if True, print the cost every 1000 iterations\n \n Returns:\n parameters -- parameters learnt by the model. They can then be used to predict.\n \"\"\"\n \n np.random.seed(3)\n n_x = layer_sizes(X, Y)[0]\n n_y = layer_sizes(X, Y)[2]\n \n # Initialize parameters, then retrieve W1, b1, W2, b2. Inputs: \"n_x, n_h, n_y\". Outputs = \"W1, b1, W2, b2, parameters\".\n ### START CODE HERE ### (≈ 5 lines of code)\n parameters = initialize_parameters(n_x, n_h, n_y)\n W1 = parameters[\"W1\"]\n b1 = parameters[\"b1\"]\n W2 = parameters[\"W2\"]\n b2 = parameters[\"b2\"]\n ### END CODE HERE ###\n \n # Loop (gradient descent)\n\n for i in range(0, num_iterations):\n \n ### START CODE HERE ### (≈ 4 lines of code)\n # Forward propagation. Inputs: \"X, parameters\". Outputs: \"A2, cache\".\n A2, cache = forward_propagation(X, parameters)\n \n # Cost function. Inputs: \"A2, Y, parameters\". Outputs: \"cost\".\n cost = compute_cost(A2, Y, parameters)\n \n # Backpropagation. Inputs: \"parameters, cache, X, Y\". Outputs: \"grads\".\n grads = backward_propagation(parameters, cache, X, Y)\n \n # Gradient descent parameter update. Inputs: \"parameters, grads\". 
Outputs: \"parameters\".\n parameters = update_parameters(parameters, grads)\n \n ### END CODE HERE ###\n \n # Print the cost every 1000 iterations\n if print_cost and i % 1000 == 0:\n print (\"Cost after iteration %i: %f\" %(i, cost))\n\n return parameters\n\nX_assess, Y_assess = nn_model_test_case()\n\nparameters = nn_model(X_assess, Y_assess, 4, num_iterations=10000, print_cost=False)\nprint(\"W1 = \" + str(parameters[\"W1\"]))\nprint(\"b1 = \" + str(parameters[\"b1\"]))\nprint(\"W2 = \" + str(parameters[\"W2\"]))\nprint(\"b2 = \" + str(parameters[\"b2\"]))", "Expected Output:\n<table style=\"width:90%\">\n <tr>\n <td>**W1**</td>\n <td> [[-4.18494056 5.33220609]\n [-7.52989382 1.24306181]\n [-4.1929459 5.32632331]\n [ 7.52983719 -1.24309422]]</td> \n </tr>\n\n <tr>\n <td>**b1**</td>\n <td> [[ 2.32926819]\n [ 3.79458998]\n [ 2.33002577]\n [-3.79468846]]</td> \n </tr>\n\n <tr>\n <td>**W2**</td>\n <td> [[-6033.83672146 -6008.12980822 -6033.10095287 6008.06637269]] </td> \n </tr>\n\n\n <tr>\n <td>**b2**</td>\n <td> [[-52.66607724]] </td> \n </tr>\n\n</table>\n\n4.5 Predictions\nQuestion: Use your model to predict by building predict().\nUse forward propagation to predict results.\nReminder: predictions = $y_{prediction} = \\mathbb 1 \\text{{activation > 0.5}} = \\begin{cases}\n 1 & \\text{if}\\ activation > 0.5 \\\n 0 & \\text{otherwise}\n \\end{cases}$ \nAs an example, if you would like to set the entries of a matrix X to 0 and 1 based on a threshold you would do: X_new = (X &gt; threshold)", "# GRADED FUNCTION: predict\n\ndef predict(parameters, X):\n \"\"\"\n Using the learned parameters, predicts a class for each example in X\n \n Arguments:\n parameters -- python dictionary containing your parameters \n X -- input data of size (n_x, m)\n \n Returns\n predictions -- vector of predictions of our model (red: 0 / blue: 1)\n \"\"\"\n \n # Computes probabilities using forward propagation, and classifies to 0/1 using 0.5 as the threshold.\n ### START CODE HERE ### (≈ 
2 lines of code)\n A2, cache = forward_propagation(X, parameters)\n predictions = (A2 > 0.5)\n ### END CODE HERE ###\n \n return predictions\n\nparameters, X_assess = predict_test_case()\n\npredictions = predict(parameters, X_assess)\nprint(\"predictions mean = \" + str(np.mean(predictions)))", "Expected Output: \n<table style=\"width:40%\">\n <tr>\n <td>**predictions mean**</td>\n <td> 0.666666666667 </td> \n </tr>\n\n</table>\n\nIt is time to run the model and see how it performs on a planar dataset. Run the following code to test your model with a single hidden layer of $n_h$ hidden units.", "# Build a model with a n_h-dimensional hidden layer\nparameters = nn_model(X, Y, n_h = 4, num_iterations = 10000, print_cost=True)\n\n# Plot the decision boundary\nplot_decision_boundary(lambda x: predict(parameters, x.T), X, Y)\nplt.title(\"Decision Boundary for hidden layer size \" + str(4))", "Expected Output:\n<table style=\"width:40%\">\n <tr>\n <td>**Cost after iteration 9000**</td>\n <td> 0.218607 </td> \n </tr>\n\n</table>", "# Print accuracy\npredictions = predict(parameters, X)\nprint ('Accuracy: %d' % float((np.dot(Y,predictions.T) + np.dot(1-Y,1-predictions.T))/float(Y.size)*100) + '%')", "Expected Output: \n<table style=\"width:15%\">\n <tr>\n <td>**Accuracy**</td>\n <td> 90% </td> \n </tr>\n</table>\n\nAccuracy is really high compared to Logistic Regression. The model has learnt the leaf patterns of the flower! Neural networks are able to learn even highly non-linear decision boundaries, unlike logistic regression. \nNow, let's try out several hidden layer sizes.\n4.6 - Tuning hidden layer size (optional/ungraded exercise)\nRun the following code. It may take 1-2 minutes. 
You will observe different behaviors of the model for various hidden layer sizes.", "# This may take about 2 minutes to run\n\nplt.figure(figsize=(16, 32))\nhidden_layer_sizes = [1, 2, 3, 4, 5, 20, 50]\nfor i, n_h in enumerate(hidden_layer_sizes):\n plt.subplot(5, 2, i+1)\n plt.title('Hidden Layer of size %d' % n_h)\n parameters = nn_model(X, Y, n_h, num_iterations = 5000)\n plot_decision_boundary(lambda x: predict(parameters, x.T), X, Y)\n predictions = predict(parameters, X)\n accuracy = float((np.dot(Y,predictions.T) + np.dot(1-Y,1-predictions.T))/float(Y.size)*100)\n print (\"Accuracy for {} hidden units: {} %\".format(n_h, accuracy))", "Interpretation:\n- The larger models (with more hidden units) are able to fit the training set better, until eventually the largest models overfit the data.\n- The best hidden layer size seems to be around n_h = 5. Indeed, a value around here seems to fit the data well without incurring noticeable overfitting.\n- You will also learn later about regularization, which lets you use very large models (such as n_h = 50) without much overfitting.\nOptional questions:\nNote: Remember to submit the assignment by clicking the blue \"Submit Assignment\" button at the upper-right.\nSome optional/ungraded questions that you can explore if you wish:\n- What happens when you replace the tanh activation with a sigmoid activation or a ReLU activation?\n- Play with the learning_rate. What happens?\n- What if we change the dataset? (See part 5 below!)\n<font color='blue'>\nYou've learnt to:\n- Build a complete neural network with a hidden layer\n- Make good use of a non-linear unit\n- Implement forward propagation and backpropagation, and train a neural network\n- See the impact of varying the hidden layer size, including overfitting.\nNice work!
\n5) Performance on other datasets\nIf you want, you can rerun the whole notebook (minus the dataset part) for each of the following datasets.", "# Datasets\nnoisy_circles, noisy_moons, blobs, gaussian_quantiles, no_structure = load_extra_datasets()\n\ndatasets = {\"noisy_circles\": noisy_circles,\n \"noisy_moons\": noisy_moons,\n \"blobs\": blobs,\n \"gaussian_quantiles\": gaussian_quantiles}\n\n### START CODE HERE ### (choose your dataset)\ndataset = \"noisy_moons\"\n### END CODE HERE ###\n\nX, Y = datasets[dataset]\nX, Y = X.T, Y.reshape(1, Y.shape[0])\n\n# make blobs binary\nif dataset == \"blobs\":\n Y = Y%2\n\n# Visualize the data\nplt.scatter(X[0, :], X[1, :], c=Y, s=40, cmap=plt.cm.Spectral);", "Congrats on finishing this Programming Assignment!\nReference:\n- http://scs.ryerson.ca/~aharley/neural-networks/\n- http://cs231n.github.io/neural-networks-case-study/" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
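The backward_propagation and update_parameters cells in the record above implement the standard two-layer-network gradient step. Below is a condensed, self-contained NumPy sketch of the same six backprop equations plus the update rule; the toy shapes, labels and iteration count are invented for illustration, and this is not the graded solution:

```python
import numpy as np

rng = np.random.default_rng(0)
n_x, n_h, n_y, m = 2, 4, 1, 200                 # toy layer sizes / sample count (assumption)
X = rng.standard_normal((n_x, m))
Y = (X[0:1, :] * X[1:2, :] > 0).astype(float)   # XOR-like toy labels

W1 = rng.standard_normal((n_h, n_x)) * 0.01
b1 = np.zeros((n_h, 1))
W2 = rng.standard_normal((n_y, n_h)) * 0.01
b2 = np.zeros((n_y, 1))

eps = 1e-12  # guards the logs against saturated sigmoid outputs
costs = []
for _ in range(500):
    # Forward pass: tanh hidden layer, sigmoid output.
    A1 = np.tanh(W1 @ X + b1)
    A2 = 1.0 / (1.0 + np.exp(-(W2 @ A1 + b2)))
    costs.append(float(-np.mean(Y * np.log(A2 + eps) + (1 - Y) * np.log(1 - A2 + eps))))
    # Backward pass: the six equations from the notebook.
    dZ2 = A2 - Y
    dW2 = dZ2 @ A1.T / m
    db2 = dZ2.sum(axis=1, keepdims=True) / m
    dZ1 = (W2.T @ dZ2) * (1 - A1 ** 2)
    dW1 = dZ1 @ X.T / m
    db1 = dZ1.sum(axis=1, keepdims=True) / m
    # Gradient descent: theta <- theta - alpha * dJ/dtheta.
    lr = 1.2
    W1 -= lr * dW1
    b1 -= lr * db1
    W2 -= lr * dW2
    b2 -= lr * db2

print('cost: {:.3f} -> {:.3f}'.format(costs[0], costs[-1]))
```

The learning rate of 1.2 mirrors the notebook's default; on this toy problem the cross-entropy should drop from roughly log(2) as the hidden units break symmetry.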
arnoldlu/lisa
ipynb/tests/Frequency_Invariance_Test.ipynb
apache-2.0
[ "Frequency Invariant Load Tracking Test\nFreqInvarianceTest is a LisaTest class for automated testing of frequency invariant load tracking. This notebook uses the methods it provides to perform the same analysis as the automated test and plot some results.\nThe test class runs the same workload at a selection of frequencies; each entry in t.experiments represents a run at a different frequency.\nSetup", "%matplotlib inline\n\nimport pandas as pd\nimport json\n\nfrom trace import Trace\nfrom trappy.plotter import plot_trace\nfrom trappy.stats.grammar import Parser\nfrom trappy import ILinePlot\n\nimport logging\nfrom conf import LisaLogging\nLisaLogging.setup()\nlogging.getLogger('Analysis').setLevel(logging.ERROR)\nlogging.getLogger('Trace').setLevel(logging.ERROR)", "Run test workload\nThere's currently no way to pass a TestEnv or configuration to automated test classes. Instead the target information comes from the target.config file (in the root of the LISA source tree), so you'll need to edit that to configure LISA to connect to your target.", "from tests.eas.load_tracking import FreqInvarianceTest\n\nt = FreqInvarianceTest()\nprint t.__doc__", "To run automated tests from within a notebook we instantiate the test class and call runExperiments on it.", "t.runExperiments()", "Show variance in util_avg and load_avg\nWe want to see the same util_avg and load_avg values regardless of frequency: the bar charts below should all have bars of roughly the same height.", "# Get the frequency an experiment was run at\ndef experiment_freq(exp):\n [cpu] = exp.wload.cpus\n freq = exp.conf['cpufreq']['freqs'][cpu]\n return freq\nfreqs = [experiment_freq(e) for e in t.experiments]\n\ndef plot_signal_against_freq(signal):\n means = [t.get_signal_mean(e, signal) for e in t.experiments]\n limits = (0, max(means) * 1.15)\n pd.DataFrame(means, index=freqs, columns=['Mean ' + signal]).plot(kind='bar', ylim=limits)", "Plot of variation of util_avg value with frequency:",
"plot_signal_against_freq('util_avg')", "And the same thing for load_avg:", "plot_signal_against_freq('load_avg')", "Examine trace from workload execution\nPlot task residency, sched_util and sched_load for the workload task, along with the expected mean value for util_avg. Note that, assuming the system was under so little load that the task was RUNNING whenever it was RUNNABLE, load_avg and util_avg should be the same.\nCall examine_experiment with different experiment indexes to get plots for runs at different frequencies.", "signals = ['util_avg', 'load_avg']\ndef examine_experiment(idx):\n experiment = t.experiments[idx]\n \n [freq] = experiment.conf['cpufreq']['freqs'].values()\n print \"Experiment ran at frequency {}\".format(freq)\n events = t.te.test_conf[\"ftrace\"][\"events\"]\n \n print 'Trace plot:'\n plot_trace(t.get_trace(experiment).ftrace)\n \n # Get observed signal\n signal_df = t.get_sched_task_signals(experiment, signals)\n # Get expected average value for util_avg signal\n expected_util_avg_mean = t.get_expected_util_avg(experiment)\n \n # Plot task util_avg signal with expected mean value\n util_avg_mean = pd.Series([expected_util_avg_mean], name='expected_util_avg', index=[signal_df.index[0]])\n df = pd.concat([signal_df, util_avg_mean], axis=1).ffill()\n ILinePlot(df, column=signals + ['expected_util_avg'], drawstyle='steps-post',\n title='Scheduler task signals').view()\n\nfor i, f in enumerate(freqs):\n print \"Experiment {}:{:10d}Hz\".format(i, f)\n\nexamine_experiment(0)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
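The record above compares per-frequency means of the PELT util_avg and load_avg signals, expecting them to be independent of the frequency the workload ran at. That pass criterion can be sketched without the LISA/trappy harness; the synthetic samples and the 5% tolerance below are made-up stand-ins for the real ftrace-derived data:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(42)
freqs = [500000, 1000000, 1500000]   # hypothetical OPP frequencies (kHz)

# Synthetic per-run util_avg samples: with frequency-invariant load
# tracking, the signal's mean should be (roughly) the same for every run.
runs = {f: 350.0 + rng.normal(0.0, 5.0, size=1000) for f in freqs}
means = pd.Series({f: samples.mean() for f, samples in runs.items()},
                  name='Mean util_avg')
print(means)

# Relative spread of the means across frequencies; a real test would fail
# here if scale-invariance were broken (tolerance is an assumption).
spread = float((means.max() - means.min()) / means.mean())
print('relative spread: {:.4f}'.format(spread))
```

In the actual notebook the per-run means come from `t.get_signal_mean` over traced experiments rather than from synthetic arrays.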
vinhqdang/my_mooc
coursera/advanced_machine_learning_spec/4_nlp/natural-language-processing-master/project/week5-project.ipynb
mit
[ "Final project: StackOverflow assistant bot\nCongratulations on coming this far and solving the programming assignments! In this final project, we will combine everything we have learned about Natural Language Processing to construct a dialogue chat bot, which will be able to:\n* answer programming-related questions (using the StackOverflow dataset);\n* chit-chat and simulate dialogue on all non-programming-related questions.\nFor the chit-chat mode we will use a pre-trained neural network engine available from ChatterBot.\nThose who aim at honor certificates for our course, or are just curious, will train their own models for chit-chat.\n\n©xkcd\nData description\nTo detect the intent of users' questions we will need two text collections:\n- tagged_posts.tsv — StackOverflow posts, tagged with one programming language (positive samples).\n- dialogues.tsv — dialogue phrases from movie subtitles (negative samples).", "import sys\nsys.path.append(\"..\")\nfrom common.download_utils import download_project_resources\n\ndownload_project_resources()", "For those questions that have programming-related intent, we will proceed as follows: predict the programming language (only one tag per question is allowed here) and rank candidates within the tag using embeddings.\nFor the ranking part, you will need:\n- word_embeddings.tsv — word embeddings that you trained with StarSpace in the 3rd assignment. It's not a problem if you didn't do it, because we can offer an alternative solution for you.\nAs a result of this notebook, you should obtain the following new objects that you will then use in the running bot:\n\nintent_recognizer.pkl — intent recognition model;\ntag_classifier.pkl — programming language classification model;\ntfidf_vectorizer.pkl — vectorizer used during training;\nthread_embeddings_by_tags — folder with thread embeddings, arranged by tags.\n\nSome functions will be reused by this notebook and the scripts, so we put them into the utils.py file.
Don't forget to open it and fill in the gaps!", "from utils import *", "Part I. Intent and language recognition\nWe want to write a bot which will answer programming-related questions, but will also be able to maintain a dialogue. We would also like to detect the intent of the user from the question (we could have had a 'Question answering mode' check-box in the bot, but it wouldn't be fun at all, would it?). So the first thing we need to do is to distinguish programming-related questions from general ones.\nIt would also be good to predict which programming language a particular question refers to. By doing so, we will speed up the question search by a factor of the number of languages (10 here), and exercise our text classification skill a bit. :)", "import numpy as np\nimport pandas as pd\nimport pickle\nimport re\n\nfrom sklearn.feature_extraction.text import TfidfVectorizer", "Data preparation\nIn the first assignment (Predict tags on StackOverflow with linear models), you have already learnt how to preprocess texts and do TF-IDF transformations. Reuse your code here. In addition, you will also need to dump the TF-IDF vectorizer with pickle to use it later in the running bot.", "def tfidf_features(X_train, X_test, vectorizer_path):\n \"\"\"Performs TF-IDF transformation and dumps the model.\"\"\"\n \n # Train a vectorizer on X_train data.\n # Transform X_train and X_test data.\n \n # Pickle the trained vectorizer to 'vectorizer_path'\n # Don't forget to open the file in writing bytes mode.\n \n ######################################\n ######### YOUR CODE HERE #############\n ######################################\n \n return X_train, X_test", "Now, load examples of two classes. Use a subsample of the stackoverflow data to balance the classes.
You will need the full data later.", "sample_size = 200000\n\ndialogue_df = pd.read_csv('data/dialogues.tsv', sep='\\t').sample(sample_size, random_state=0)\nstackoverflow_df = pd.read_csv('data/tagged_posts.tsv', sep='\\t').sample(sample_size, random_state=0)", "Check what the data looks like:", "dialogue_df.head()\n\nstackoverflow_df.head()", "Apply the text_prepare function to preprocess the data:", "dialogue_df['text'] = ######### YOUR CODE HERE #############\nstackoverflow_df['title'] = ######### YOUR CODE HERE #############", "Intent recognition\nWe will do a binary classification on TF-IDF representations of texts. Labels will be either dialogue for general questions or stackoverflow for programming-related questions. First, prepare the data for this task:\n- concatenate dialogue and stackoverflow examples into one sample\n- split it into train and test in proportion 9:1, using random_state=0 for reproducibility\n- transform it into TF-IDF features", "from sklearn.model_selection import train_test_split\n\nX = np.concatenate([dialogue_df['text'].values, stackoverflow_df['title'].values])\ny = ['dialogue'] * dialogue_df.shape[0] + ['stackoverflow'] * stackoverflow_df.shape[0]\n\nX_train, X_test, y_train, y_test = ######### YOUR CODE HERE ##########\nprint('Train size = {}, test size = {}'.format(len(X_train), len(X_test)))\n\nX_train_tfidf, X_test_tfidf = ######### YOUR CODE HERE ###########", "Train the intent recognizer using LogisticRegression on the train set with the following parameters: penalty='l2', C=10, random_state=0.
Print out the accuracy on the test set to check whether everything looks good.", "from sklearn.linear_model import LogisticRegression\nfrom sklearn.metrics import accuracy_score\n\n######################################\n######### YOUR CODE HERE #############\n######################################\n\n# Check test accuracy.\ny_test_pred = intent_recognizer.predict(X_test_tfidf)\ntest_accuracy = accuracy_score(y_test, y_test_pred)\nprint('Test accuracy = {}'.format(test_accuracy))", "Dump the classifier to use it in the running bot.", "pickle.dump(intent_recognizer, open(RESOURCE_PATH['INTENT_RECOGNIZER'], 'wb'))", "Programming language classification\nWe will train one more classifier for the programming-related questions. It will predict exactly one tag (= programming language) and will also be based on Logistic Regression with TF-IDF features.\nFirst, let us prepare the data for this task.", "X = stackoverflow_df['title'].values\ny = stackoverflow_df['tag'].values\n\nX_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)\nprint('Train size = {}, test size = {}'.format(len(X_train), len(X_test)))", "Let us reuse the TF-IDF vectorizer that we have already created above. It should not make a huge difference which data was used to train it.", "vectorizer = pickle.load(open(RESOURCE_PATH['TFIDF_VECTORIZER'], 'rb'))\n\nX_train_tfidf, X_test_tfidf = vectorizer.transform(X_train), vectorizer.transform(X_test)", "Train the tag classifier using the OneVsRestClassifier wrapper over LogisticRegression.
Use the following parameters: penalty='l2', C=5, random_state=0.", "from sklearn.multiclass import OneVsRestClassifier\n\n######################################\n######### YOUR CODE HERE #############\n######################################\n\n# Check test accuracy.\ny_test_pred = tag_classifier.predict(X_test_tfidf)\ntest_accuracy = accuracy_score(y_test, y_test_pred)\nprint('Test accuracy = {}'.format(test_accuracy))", "Dump the classifier to use it in the running bot.", "pickle.dump(tag_classifier, open(RESOURCE_PATH['TAG_CLASSIFIER'], 'wb'))", "Part II. Ranking questions with embeddings\nTo find a relevant answer (a thread from StackOverflow) to a question, you will use vector representations to calculate the similarity between the question and existing threads. We already have the question_to_vec function from assignment 3, which can create such a representation based on word vectors.\nHowever, it would be costly to compute such a representation for all possible answers in the online mode of the bot (e.g. when the bot is running and answering questions from many users). This is the reason why you will create a database with pre-computed representations. These representations will be arranged by non-overlapping tags (programming languages), so that the search for an answer can be performed within only one tag each time. This will make our bot even more efficient and allow us not to store the whole database in RAM.\nLoad the StarSpace embeddings which were trained on Stack Overflow posts. These embeddings were trained in supervised mode for duplicate detection on the same corpus that is used in search. We can count on these representations to allow us to find closely related answers to a question.\nIf for some reason you didn't train StarSpace embeddings in assignment 3, you can use pre-trained word vectors from Google.
All instructions on how to work with these vectors were provided in the same assignment.", "starspace_embeddings, embeddings_dim = load_embeddings('data/word_embeddings.tsv')", "Since we want to precompute representations for all possible answers, we need to load the whole posts dataset, unlike what we did for the intent classifier:", "posts_df = pd.read_csv('data/tagged_posts.tsv', sep='\\t')", "Look at the distribution of posts over programming languages (tags) and find the most common ones.\nYou might want to use the pandas groupby and count methods:", "counts_by_tag = ######### YOUR CODE HERE #############\ncounts_by_tag", "Now for each tag you need to create two data structures, which will serve as an online search index:\n* tag_post_ids — a list of post_ids with shape (counts_by_tag[tag],). It will be needed to show the title and link to the thread;\n* tag_vectors — a matrix with shape (counts_by_tag[tag], embeddings_dim) where the embeddings for each answer are stored.\nImplement the code which will calculate the mentioned structures and dump them to files. It should take several minutes to compute.", "import os\nos.makedirs(RESOURCE_PATH['THREAD_EMBEDDINGS_FOLDER'], exist_ok=True)\n\nfor tag, count in counts_by_tag.items():\n tag_posts = posts_df[posts_df['tag'] == tag]\n \n tag_post_ids = ######### YOUR CODE HERE #############\n \n tag_vectors = np.zeros((count, embeddings_dim), dtype=np.float32)\n for i, title in enumerate(tag_posts['title']):\n tag_vectors[i, :] = ######### YOUR CODE HERE #############\n\n # Dump post ids and vectors to a file.\n filename = os.path.join(RESOURCE_PATH['THREAD_EMBEDDINGS_FOLDER'], '%s.pkl' % tag)\n pickle.dump((tag_post_ids, tag_vectors), open(filename, 'wb'))" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
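The thread-embedding loop at the end of the record above fills each row of tag_vectors with a vector representation of a question title via the question_to_vec helper from the course's assignment 3. A common implementation simply averages the embeddings of the words it knows; here is a minimal sketch with a made-up three-dimensional embedding table (the real code loads data/word_embeddings.tsv):

```python
import numpy as np

# Toy embedding table (values invented for illustration).
embeddings = {
    'python': np.array([0.9, 0.1, 0.0]),
    'list':   np.array([0.2, 0.8, 0.1]),
    'sort':   np.array([0.1, 0.7, 0.3]),
}
dim = 3

def question_to_vec(question, embeddings, dim):
    """Mean of the embeddings of the question's known words (zeros if none)."""
    vectors = [embeddings[w] for w in question.split() if w in embeddings]
    if not vectors:
        return np.zeros(dim)
    return np.mean(vectors, axis=0)

vec = question_to_vec('how to sort a list in python', embeddings, dim)
print(vec)
```

Ranking a question against a tag's precomputed tag_vectors is then a nearest-neighbor search (e.g. by cosine similarity) over these averaged vectors.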
tensorflow/docs-l10n
site/ja/probability/examples/Bayesian_Gaussian_Mixture_Model.ipynb
apache-2.0
[ "Copyright 2018 The TensorFlow Probability Authors.\nLicensed under the Apache License, Version 2.0 (the \"License\");", "#@title Licensed under the Apache License, Version 2.0 (the \"License\"); { display-mode: \"form\" }\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.", "ベイジアンガウス混合モデルとハミルトニアン MCMC\n<table class=\"tfo-notebook-buttons\" align=\"left\">\n <td><a target=\"_blank\" href=\"https://www.tensorflow.org/probability/examples/Bayesian_Gaussian_Mixture_Model\"><img src=\"https://www.tensorflow.org/images/tf_logo_32px.png\">TensorFlow.org で表示</a></td>\n <td><a target=\"_blank\" href=\"https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/ja/probability/examples/Bayesian_Gaussian_Mixture_Model.ipynb\"><img src=\"https://www.tensorflow.org/images/colab_logo_32px.png\">Google Colab で実行</a></td>\n <td><a target=\"_blank\" href=\"https://github.com/tensorflow/docs-l10n/blob/master/site/ja/probability/examples/Bayesian_Gaussian_Mixture_Model.ipynb\"><img src=\"https://www.tensorflow.org/images/GitHub-Mark-32px.png\"> GitHub でソースを表示</a></td>\n <td><a href=\"https://storage.googleapis.com/tensorflow_docs/docs-l10n/site/ja/probability/examples/Bayesian_Gaussian_Mixture_Model.ipynb\"><img src=\"https://www.tensorflow.org/images/download_logo_32px.png\">ノートブックをダウンロード</a></td>\n</table>\n\nこの Colab では、TensorFlow Prabability のプリミティブのみを使用してベイジアンガウス混合モデル(BGMM)の事後確率からサンプルを取得する方法を説明します。\nモデル\n次元 $D$ ごとの $k\\in{1,\\ldots, K}$ 混合コンポーネントでは、次のベイジアンガウス混合モデルを使用して $i\\in{1,\\ldots,N}$ iid サンプルをモデリングします。\n$$\\begin{align} 
\\theta &amp;\\sim \\text{Dirichlet}(\\text{concentration}=\\alpha_0)\\ \\mu_k &amp;\\sim \\text{Normal}(\\text{loc}=\\mu_{0k}, \\text{scale}=I_D)\\ T_k &amp;\\sim \\text{Wishart}(\\text{df}=5, \\text{scale}=I_D)\\ Z_i &amp;\\sim \\text{Categorical}(\\text{probs}=\\theta)\\ Y_i &amp;\\sim \\text{Normal}(\\text{loc}=\\mu_{z_i}, \\text{scale}=T_{z_i}^{-1/2})\\ \\end{align}$$\nscale 引数にはすべて、cholesky セマンティクスがあります。これは TF Distributions の方法であるため、それを使用します(計算上のメリットにより、一部にこの方法が使用されています)。\nここでも目標は、事後確率からサンプルを生成することです。\n$$p\\left(\\theta, {\\mu_k, T_k}{k=1}^K \\Big| {y_i}{i=1}^N, \\alpha_0, {\\mu_{ok}}_{k=1}^K\\right)$$\n${Z_i}_{i=1}^N$ が存在しないところに注目してください。関心があるのは、$N$ でスケーリングしない確率変数のみです。(幸いにも、$Z_i$ の除外を処理する TF 分布があります。)\n計算が困難な正規化項のため、この分布から直接サンプリングすることはできません。\n正規化が困難な分布からサンプリングするには、メトロポリスヘイスティングスアルゴリズムを使用することができます。\nTensorFlow Probability では、メトロポリスヘイスティングス法に基づくいくつかの手法を含む多数の MCMC オプションが用意されています。このノートブックでは、ハミルトニアンモンテカルロ法(tfp.mcmc.HamiltonianMonteCarlo)を使用します。HMC は急速に収束することが可能で、(座標的であるのに対し)同時に状態空間をサンプリングし、TF の美徳の 1 つである自動微分を利用するため、ほとんどにおいて適切なオプションです。それを踏まえると、BGMM 事後確率からのサンプリングは実際、ギブスサンプリングなどの他のアプローチよりも優れている場合があります。", "%matplotlib inline\n\n\nimport functools\n\nimport matplotlib.pyplot as plt; plt.style.use('ggplot')\nimport numpy as np\nimport seaborn as sns; sns.set_context('notebook')\n\nimport tensorflow.compat.v2 as tf\ntf.enable_v2_behavior()\nimport tensorflow_probability as tfp\n\ntfd = tfp.distributions\ntfb = tfp.bijectors\n\nphysical_devices = tf.config.experimental.list_physical_devices('GPU')\nif len(physical_devices) > 0:\n tf.config.experimental.set_memory_growth(physical_devices[0], True)", "実際にモデルを構築する前に、新しいタイプの分布を定義する必要があります。上記のモデルの仕様から、逆共分散行列、つまり「精度行列」(https://en.wikipedia.org/wiki/Precision_(statistics%29) を使って MVN をパラメータ化することは明らかです。これを TF で行うには、Bijector を使用する必要があります。この Bijector は前方変換を使用します。\n\nY = tf.linalg.triangular_solve((tf.linalg.matrix_transpose(chol_precision_tril), X, adjoint=True) + loc\n\nまた、log_prob 計算は、次のように単に逆になります。\n\nX = 
tf.linalg.matmul(chol_precision_tril, X - loc, adjoint_a=True)\n\nHMC に必要なのは log_prob だけであるため、つまり、tf.linalg.triangular_solve を絶対に呼び出さないようにします(tfd.MultivariateNormalTriL の場合と同じように)。tf.linalg.matmul はキャッシュロイヤリティに優れていることから、通常は高速で行われるため、これは有利です。", "class MVNCholPrecisionTriL(tfd.TransformedDistribution):\n \"\"\"MVN from loc and (Cholesky) precision matrix.\"\"\"\n\n def __init__(self, loc, chol_precision_tril, name=None):\n super(MVNCholPrecisionTriL, self).__init__(\n distribution=tfd.Independent(tfd.Normal(tf.zeros_like(loc),\n scale=tf.ones_like(loc)),\n reinterpreted_batch_ndims=1),\n bijector=tfb.Chain([\n tfb.Shift(shift=loc),\n tfb.Invert(tfb.ScaleMatvecTriL(scale_tril=chol_precision_tril,\n adjoint=True)),\n ]),\n name=name)", "tfd.Independent 分布はある分布の独立したドローを統計的に独立した座標を使った多変量分布に変換します。log_prob の計算の観点では、この「メタ分布」はイベント次元の単純な和として表されます。\nまた、スケール行列の adjoint(「転置」)を取っていることにも注目してください。これは、精度が共分散である場合、すなわち $P=C^{-1}$ であり、$C=AA^\\top$ である場合に $P=BB^{\\top}$ where $B=A^{-\\top}$ となるためです。\nこの分布にはそれとなくコツがいるため、思った通りに MVNCholPrecisionTriL が動作することをさっと検証しましょう。", "def compute_sample_stats(d, seed=42, n=int(1e6)):\n x = d.sample(n, seed=seed)\n sample_mean = tf.reduce_mean(x, axis=0, keepdims=True)\n s = x - sample_mean\n sample_cov = tf.linalg.matmul(s, s, adjoint_a=True) / tf.cast(n, s.dtype)\n sample_scale = tf.linalg.cholesky(sample_cov)\n sample_mean = sample_mean[0]\n return [\n sample_mean,\n sample_cov,\n sample_scale,\n ]\n\ndtype = np.float32\ntrue_loc = np.array([1., -1.], dtype=dtype)\ntrue_chol_precision = np.array([[1., 0.],\n [2., 8.]],\n dtype=dtype)\ntrue_precision = np.matmul(true_chol_precision, true_chol_precision.T)\ntrue_cov = np.linalg.inv(true_precision)\n\nd = MVNCholPrecisionTriL(\n loc=true_loc,\n chol_precision_tril=true_chol_precision)\n\n[sample_mean, sample_cov, sample_scale] = [\n t.numpy() for t in compute_sample_stats(d)]\n\nprint('true mean:', true_loc)\nprint('sample mean:', sample_mean)\nprint('true cov:\\n', true_cov)\nprint('sample cov:\\n', 
sample_cov)", "サンプルの平均と共分散が真の平均と共分散に近いため、分布は適切に実装されているようです。次に、MVNCholPrecisionTriL tfp.distributions.JointDistributionNamed を使用して、BGMM モデルを指定します。観測モデルでは、tfd.MixtureSameFamily を使用して、自動的に ${Z_i}_{i=1}^N$ ドローを統合します。", "dtype = np.float64\ndims = 2\ncomponents = 3\nnum_samples = 1000\n\nbgmm = tfd.JointDistributionNamed(dict(\n mix_probs=tfd.Dirichlet(\n concentration=np.ones(components, dtype) / 10.),\n loc=tfd.Independent(\n tfd.Normal(\n loc=np.stack([\n -np.ones(dims, dtype),\n np.zeros(dims, dtype),\n np.ones(dims, dtype),\n ]),\n scale=tf.ones([components, dims], dtype)),\n reinterpreted_batch_ndims=2),\n precision=tfd.Independent(\n tfd.WishartTriL(\n df=5,\n scale_tril=np.stack([np.eye(dims, dtype=dtype)]*components),\n input_output_cholesky=True),\n reinterpreted_batch_ndims=1),\n s=lambda mix_probs, loc, precision: tfd.Sample(tfd.MixtureSameFamily(\n mixture_distribution=tfd.Categorical(probs=mix_probs),\n components_distribution=MVNCholPrecisionTriL(\n loc=loc,\n chol_precision_tril=precision)),\n sample_shape=num_samples)\n))\n\ndef joint_log_prob(observations, mix_probs, loc, chol_precision):\n \"\"\"BGMM with priors: loc=Normal, precision=Inverse-Wishart, mix=Dirichlet.\n\n Args:\n observations: `[n, d]`-shaped `Tensor` representing Bayesian Gaussian\n Mixture model draws. 
Each sample is a length-`d` vector.\n mix_probs: `[K]`-shaped `Tensor` representing random draw from\n `Dirichlet` prior.\n loc: `[K, d]`-shaped `Tensor` representing the location parameter of the\n `K` components.\n chol_precision: `[K, d, d]`-shaped `Tensor` representing `K` lower\n triangular `cholesky(Precision)` matrices, each being sampled from\n a Wishart distribution.\n\n Returns:\n log_prob: `Tensor` representing joint log-density over all inputs.\n \"\"\"\n return bgmm.log_prob(\n mix_probs=mix_probs, loc=loc, precision=chol_precision, s=observations)", "「トレーニング」データを生成する\nこのデモでは、いくつかのランダムデータをサンプリングします。", "true_loc = np.array([[-2., -2],\n [0, 0],\n [2, 2]], dtype)\nrandom = np.random.RandomState(seed=43)\n\ntrue_hidden_component = random.randint(0, components, num_samples)\nobservations = (true_loc[true_hidden_component] +\n random.randn(num_samples, dims).astype(dtype))", "HMC を使用したベイジアン推論\nTFD を使用してモデルを指定し、観測データを取得したので、HMC を実行するために必要な材料がすべて揃いました。\nこれを行うには、サンプリングしないものを「突き止める」ために部分適用を使用します。この場合は、observations のみを突き止める必要があります。(ハイパーパラメータは joint_log_prob 関数シグネチャの一部ではなく、前の分布にすでにベイクされています。)", "unnormalized_posterior_log_prob = functools.partial(joint_log_prob, observations)\n\ninitial_state = [\n tf.fill([components],\n value=np.array(1. 
/ components, dtype),\n name='mix_probs'),\n tf.constant(np.array([[-2., -2],\n [0, 0],\n [2, 2]], dtype),\n name='loc'),\n tf.linalg.eye(dims, batch_shape=[components], dtype=dtype, name='chol_precision'),\n]", "Unconstrained representation\nHamiltonian Monte Carlo (HMC) requires the target log-probability function to be differentiable with respect to its arguments. Moreover, HMC can be dramatically more effective when the state space is unconstrained.\nThis means that when sampling from the BGMM posterior, there are two main issues to deal with:\n\n$\theta$ represents a discrete probability vector, so it must satisfy $\sum_{k=1}^K \theta_k = 1$ and $\theta_k&gt;0$.\n$T_k$ represents an inverse covariance matrix, so it must satisfy $T_k \succ 0$, i.e., be positive definite.\n\nTo address these requirements we need to:\n\ntransform the constrained variables to an unconstrained space\nrun the MCMC in the unconstrained space\ntransform the unconstrained variables back to the constrained space\n\nAs with MVNCholPrecisionTriL, Bijectors are used to transform the random variables to the unconstrained space.\n\n\nThe Dirichlet is transformed to the unconstrained space via the softmax function.\n\n\nThe precision random variable is a distribution over positive semidefinite matrices. To unconstrain these, we use the FillTriangular and TransformDiagonal bijectors, which transform a vector into a lower-triangular matrix and make the diagonal positive. The former is useful because it allows sampling only $d(d+1)/2$ floats rather than $d^2$.", "unconstraining_bijectors = [\n tfb.SoftmaxCentered(),\n tfb.Identity(),\n tfb.Chain([\n tfb.TransformDiagonal(tfb.Softplus()),\n tfb.FillTriangular(),\n ])]\n\n@tf.function(autograph=False)\ndef sample():\n return tfp.mcmc.sample_chain(\n num_results=2000,\n num_burnin_steps=500,\n current_state=initial_state,\n kernel=tfp.mcmc.SimpleStepSizeAdaptation(\n tfp.mcmc.TransformedTransitionKernel(\n inner_kernel=tfp.mcmc.HamiltonianMonteCarlo(\n target_log_prob_fn=unnormalized_posterior_log_prob,\n step_size=0.065,\n num_leapfrog_steps=5),\n bijector=unconstraining_bijectors),\n num_adaptation_steps=400),\n trace_fn=lambda _, pkr: pkr.inner_results.inner_results.is_accepted)\n\n[mix_probs, loc, chol_precision], is_accepted = sample()", "Now we run the chain and print the posterior means.", "acceptance_rate = tf.reduce_mean(tf.cast(is_accepted, dtype=tf.float32)).numpy()\nmean_mix_probs = tf.reduce_mean(mix_probs, axis=0).numpy()\nmean_loc = tf.reduce_mean(loc, axis=0).numpy()\nmean_chol_precision = tf.reduce_mean(chol_precision, axis=0).numpy()\nprecision = tf.linalg.matmul(chol_precision, chol_precision, transpose_b=True)\n\n\nprint('acceptance_rate:', acceptance_rate)\nprint('avg mix probs:', mean_mix_probs)\nprint('avg loc:\\n', mean_loc)\nprint('avg chol(precision):\\n', mean_chol_precision)\n\nloc_ = loc.numpy()\nax = sns.kdeplot(loc_[:,0,0], loc_[:,0,1], shade=True, shade_lowest=False)\nax = sns.kdeplot(loc_[:,1,0], loc_[:,1,1], shade=True, shade_lowest=False)\nax = sns.kdeplot(loc_[:,2,0], loc_[:,2,1], shade=True, shade_lowest=False)\nplt.title('KDE of loc draws');", "Conclusion\nThis simple Colab showed how to use TensorFlow Probability primitives to build hierarchical Bayesian mixture models." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
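The entry above unconstrains the Dirichlet-distributed mixture weights with TFP's `SoftmaxCentered` bijector before running HMC. As a rough, dependency-free sketch of the underlying idea (the plain softmax map, without the dimension-reducing centering that `SoftmaxCentered` adds), any real vector can be pushed onto the probability simplex:

```python
import math

def softmax(z):
    """Map an unconstrained real vector onto the probability simplex."""
    m = max(z)                              # subtract the max for numerical stability
    exps = [math.exp(v - m) for v in z]
    total = sum(exps)
    return [e / total for e in exps]

probs = softmax([-2.0, 0.0, 2.0])
# each component is strictly positive and the components sum to 1
```

Because every output is positive and sums to one, HMC can propose arbitrary real vectors and still land on a valid set of mixture weights after the transform.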
azogue/esiosdata
notebooks/factura electricidad PVPC - uso de perfiles de consumo para estimar datos horarios.ipynb
mit
[ "PVPC consumption profiles for customers without hourly metering\nDownloading the monthly CSVs with the final consumption profiles\nThe final consumption profiles are available for download on the REE website as compressed CSV files for each month, published within five days of the end of the consumption month they refer to. \nCorrections to the demand data made after the publication of the consumption profiles are taken into account for informational purposes only, and in no case affect the calculation of these profiles.\nlink: http://www.ree.es/es/actividades/operacion-del-sistema-electrico/medidas-electricas\nFor 2017, the coefficients and the reference demand for computing the initial profiles are supplied. \nPDF with the latest resolution containing the reference demand: Resolución de 28 de diciembre de 2016 de la Dirección General de Política Energética y Minas (PDF, 2.37 MB)\nExcel file with the coefficients for the 2017 initial profiles:\nEstablished in the resolution indicated above, available for consultation in Excel format in Demanda de referencia y perfiles iniciales para el año 2017 (XLSX, 705 KB)", "%matplotlib inline\n%config InlineBackend.figure_format = 'retina'\nimport datetime as dt\nimport matplotlib.pyplot as plt\nimport pandas as pd\nimport pytz\nimport requests\nfrom urllib.error import HTTPError\n# output color\nfrom prettyprinting import *\n\n\ntz = pytz.timezone('Europe/Madrid')\nurl_perfiles_2017 = 'http://www.ree.es/sites/default/files/01_ACTIVIDADES/Documentos/Documentacion-Simel/perfiles_iniciales_2017.xlsx'\n\n\ndef _gen_ts(mes, dia, hora, año):\n \"\"\"Generate a timestamp from its date components.\n \n Watch out for DST and the clock changes.\"\"\"\n try:\n return dt.datetime(año, mes, dia, hora - 1) #, tzinfo=tz)\n except ValueError as e:\n print_err('Cambio de hora (día con 25h): \"{}\"; el 2017-{}-{}, hora={}'.format(e, mes, dia, hora))\n return dt.datetime(2017, mes, dia, hora - 2) #, tzinfo=tz)\n\n\ndef get_data_coeficientes_perfilado_2017(url_perfiles_2017):\n \"\"\"Extract the information from the two sheets of the Excel file provided by REE \n with the initial profiles for 2017.\"\"\"\n # Profiling coefficients and reference demand (1st sheet)\n cols_sheet1 = ['Mes', 'Día', 'Hora', \n 'Pa,0m,d,h', 'Pb,0m,d,h', 'Pc,0m,d,h', 'Pd,0m,d,h',\n 'Demanda de Referencia 2017 (MW)']\n perfiles_2017 = pd.read_excel(url_perfiles_2017, header=None, \n skiprows=[0, 1], names=cols_sheet1)\n perfiles_2017['ts'] = [_gen_ts(mes, dia, hora, 2017) \n for mes, dia, hora in zip(perfiles_2017.Mes,\n perfiles_2017.Día, \n perfiles_2017.Hora)]\n # Alpha, Beta, Gamma coefficients (2nd sheet):\n coefs_alpha_beta_gamma = pd.read_excel(url_perfiles_2017, sheetname=1)\n return perfiles_2017.set_index('ts'), coefs_alpha_beta_gamma\n\n\n# Extraction:\nperf_demref_2017, coefs_abg = get_data_coeficientes_perfilado_2017(url_perfiles_2017)\nprint_info(coefs_abg)\nperf_demref_2017.head()\n\n# Convert the 2017 profiles dataframe to the final-profiles format (for uniformity):\ncols_usar = ['Pa,0m,d,h', 'Pb,0m,d,h', 'Pc,0m,d,h', 'Pd,0m,d,h']\nperfiles_2017 = perf_demref_2017[cols_usar].copy()\nperfiles_2017.columns = ['COEF. PERFIL {}'.format(p) for p in 'ABCD']\nperfiles_2017.head()",
"Downloading the monthly CSVs with the final consumption profiles\nCSV files: 'http://www.ree.es/sites/default/files/simel/perff/PERFF_{año}{mes}.gz'", "def get_data_perfiles_finales_mes(año, mes=None):\n \"\"\"Read the compressed CSV file with the final electricity consumption profiles for\n the given month from the REE website. Drops the date columns and the DST information.\n \n Takes (:int: año, :int: mes) or (:datetime_obj: ts)\n \"\"\"\n mask_ts = 'http://www.ree.es/sites/default/files/simel/perff/PERFF_{:%Y%m}.gz'\n if (type(año) is int) and (mes is not None):\n ts = dt.datetime(año, mes, 1, 0, 0)\n else:\n ts = año\n url_perfiles_finales = mask_ts.format(ts)\n print_info('Descargando perfiles finales del mes de {:%b de %Y} en {}'\n .format(ts, url_perfiles_finales))\n # Try to download the final profiles; if that fails, fall back to the 2017 estimates:\n try:\n perfiles_finales = pd.read_csv(url_perfiles_finales, sep=';',\n encoding='latin_1', compression='gzip'\n ).dropna(how='all', axis=1)\n except HTTPError as e:\n print_warn('HTTPError: {}. Se utilizan perfiles estimados de 2017.'.format(e))\n return perfiles_2017[(perfiles_2017.index.year == ts.year) \n & (perfiles_2017.index.month == ts.month)]\n \n cols_date = ['MES', 'DIA', 'HORA', 'AÑO']\n zip_date = zip(*[perfiles_finales[c] for c in cols_date])\n perfiles_finales['ts'] = [_gen_ts(*args) for args in zip_date]\n cols_date.append('VERANO(1)/INVIERNO(0)')\n # perfiles_finales['dst'] = perfiles_finales['VERANO(1)/INVIERNO(0)'].astype(bool)\n return perfiles_finales.set_index('ts').drop(cols_date, axis=1)\n\n\nperfiles_finales_2016_11 = get_data_perfiles_finales_mes(2016, 11)\nprint_ok(perfiles_finales_2016_11.head())\n\nperfiles_2017_02 = get_data_perfiles_finales_mes(2017, 2)\nperfiles_2017_02.head()",
"Downloading the hourly profiles for a given interval\nIn order to compute the weighted average price applicable over that interval.", "def extract_perfiles_intervalo(t0, tf):\n t_ini = pd.Timestamp(t0)\n t_fin = pd.Timestamp(tf)\n assert(t_fin > t_ini)\n marca_fin = '{:%Y%m}'.format(t_fin)\n marca_ini = '{:%Y%m}'.format(t_ini)\n if marca_ini == marca_fin:\n perfiles = get_data_perfiles_finales_mes(t_ini)\n else:\n dates = pd.DatetimeIndex(start=t_ini.replace(day=1), \n end=t_fin.replace(day=1), freq='MS')\n perfiles = pd.concat([get_data_perfiles_finales_mes(t) for t in dates])\n return perfiles.loc[t_ini:t_fin].iloc[:-1]",
"Estimating hourly consumption from the total consumption over an interval", "# Example of generating hourly consumption values from a total consumption and the usage profiles:\nt0, tf = '2016-10-31', '2017-01-24'\nconsumo_total_interv_kWh = 836.916\nprint_secc('Consumo horario estimado para el intervalo {} -> {}, con E={:.3f} kWh'\n .format(t0, tf, consumo_total_interv_kWh))\n\n# final profiles:\nperfs_interv = extract_perfiles_intervalo(t0, tf)\nprint_ok(perfs_interv.head())\nprint_ok(perfs_interv.tail())\n\n# Estimate using profile A:\nsuma_perfiles_interv = perfs_interv['COEF. PERFIL A'].sum()\nmch_pa = perfs_interv['COEF. PERFIL A'] * consumo_total_interv_kWh / suma_perfiles_interv\nconsumo_diario_est = mch_pa.groupby(pd.TimeGrouper('D')).sum()\nprint_red('CHECK CONSUMO TOTAL: {:.3f} == {:.3f} == {:.3f} kWh'\n .format(consumo_total_interv_kWh, consumo_diario_est.sum(), mch_pa.sum()))\n\n# Plot of the estimated daily consumption over the interval:\nprint_secc('Consumo horario diario estimado:')\nax = consumo_diario_est.plot(figsize=(16, 9), color='blue', lw=2)\nparams_lines = dict(lw=1, linestyle=':', alpha=.6)\nxlim = consumo_diario_est.index[0], consumo_diario_est.index[-1]\nax.hlines([consumo_diario_est.mean()], *xlim, color='orange', **params_lines)\nax.hlines([consumo_diario_est.max()], *xlim, color='red', **params_lines)\nax.hlines([consumo_diario_est.min()], *xlim, color='green', **params_lines)\nax.set_title('Consumo diario estimado (Total={:.1f} kWh)'.format(consumo_total_interv_kWh))\nax.set_ylabel('kWh/día')\nax.set_xlabel('')\nax.set_ylim((0, consumo_diario_est.max() * 1.1))\nax.grid('on', axis='x');\n\n# Copy to another notebook:\npd.DataFrame(mch_pa.rename('kWh')).to_clipboard()\n\n# Average consumption per day of the week (weekly consumption pattern):\nmedia_diaria = mch_pa.groupby(pd.TimeGrouper('D')).sum()\nmedia_semanal = media_diaria.groupby(lambda x: x.weekday).mean().round(1)\ndías_semana = ['Lunes', 'Martes', 'Miércoles', 'Jueves', 'Viernes', 'Sábado', 'Domingo']\nmedia_semanal.index = días_semana\nprint_ok(media_semanal)\n\nax = media_semanal.T.plot(kind='bar', figsize=(16, 9), color='orange', legend=False)\nax.set_xticklabels(días_semana, rotation=0)\nax.set_title('Patrón semanal de consumo')\nax.set_ylabel('kWh/día')\nax.grid('on', axis='y')\nax.hlines([media_diaria.mean()], -1, 7, lw=3, color='blue', linestyle=':');\n\n# Check the profiles over a full year:\nperfiles_2016 = extract_perfiles_intervalo('2016-01-01', '2016-12-31')\nprint_ok(perfiles_2016.sum())\n\nperfiles_2017.sum()" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
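The entry above spreads a metered energy total over hours in proportion to the REE profile coefficients (the `mch_pa = coef * E_total / sum(coef)` step). A minimal pure-Python sketch of that proportional split, with illustrative names that are not taken from the original code:

```python
def estimate_hourly_kwh(total_kwh, coefficients):
    """Split a metered total across hours in proportion to profile coefficients."""
    coef_sum = sum(coefficients)
    return [c * total_kwh / coef_sum for c in coefficients]

hourly = estimate_hourly_kwh(100.0, [0.5, 1.0, 1.5, 1.0])
# the hourly estimates sum back to the metered total of 100.0 kWh,
# and the hour with the largest coefficient gets the largest share
```

The check printed by `print_red('CHECK CONSUMO TOTAL: ...')` in the notebook relies on exactly this property: scaling by the coefficient sum guarantees the estimates reproduce the billed total.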
manoharan-lab/structural-color
structure_factor_data.ipynb
gpl-3.0
[ "Tutorial for using structure factor data as the structure factor used in the structural-color package\nThis tutorial describes how to add your own structure factor data to Monte Carlo calculations\nCopyright 2016, Vinothan N. Manoharan, Victoria Hwang, Annie Stephenson\nThis file is part of the structural-color python package.\nThis package is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, either version 3 of the License, or (at your option) any later version.\nThis package is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details.\nYou should have received a copy of the GNU General Public License along with this package. If not, see http://www.gnu.org/licenses/.", "import numpy as np\nimport matplotlib.pyplot as plt\nimport structcol as sc\nimport structcol.refractive_index as ri\nfrom structcol import montecarlo as mc\nfrom structcol import detector as det\nfrom structcol import model\nfrom structcol import structure\n%matplotlib inline", "For the single scattering model\nset parameters", "wavelengths = sc.Quantity(np.arange(400, 800, 20), 'nm') # wavelengths\nradius = sc.Quantity('0.5 um') # particle radius\nvolume_fraction = sc.Quantity(0.5, '') # volume fraction of particles\nn_particle = ri.n('fused silica', wavelengths)\nn_matrix = ri.n('vacuum', wavelengths) # called from the refractive_index module. n_matrix is the \nn_medium = ri.n('vacuum', wavelengths) # space within sample. n_medium is outside the sample. \n # n_particle and n_matrix can have complex indices if absorption is desired\nthickness = sc.Quantity('50 um') # thickness of the sample film", "Construct the structure factor data\nHere, we use discrete points from the percus-yevick approximation for structure factor, as an example. 
In practice, you will most likely use actual structure factor data imported from your own file", "qd_data = np.arange(0,75, 0.1)\ns_data = structure.factor_py(qd_data, volume_fraction.magnitude)", "plot the structure factor data and interpolated function", "qd = np.arange(0,70, 0.1)# works up to qd = 72\ns = structure.factor_data(qd, s_data, qd_data)\n\nplt.figure()\nplt.plot(qd, s, label = 'interpolated')\nplt.plot(qd_data, s_data,'.', label = 'data')\nplt.legend()\nplt.xlabel('qd')\nplt.ylabel('structure factor')", "Calculate reflectance", "reflectance=np.zeros(len(wavelengths))\nfor i in range(len(wavelengths)):\n reflectance[i],_,_,_,_ = sc.model.reflection(n_particle[i], n_matrix[i], n_medium[i], wavelengths[i], \n radius, volume_fraction, \n thickness=thickness,\n structure_type='data',\n structure_s_data=s_data,\n structure_qd_data=qd_data)", "plot", "plt.figure()\nplt.plot(wavelengths, reflectance)\nplt.ylim([0,0.1])\nplt.ylabel('Reflectance')\nplt.xlabel('wavelength (nm)')", "For the Monte Carlo model\nset parameters", "ntrajectories = 500 # number of trajectories\nnevents = 500 # number of scattering events in each trajectory\nwavelengths = sc.Quantity(np.arange(400, 800, 20), 'nm') # wavelengths\nradius = sc.Quantity('0.5 um') # particle radius\nvolume_fraction = sc.Quantity(0.5, '') # volume fraction of particles\nn_particle = ri.n('fused silica', wavelengths)\nn_matrix = ri.n('vacuum', wavelengths) # called from the refractive_index module. n_matrix is the \nn_medium = ri.n('vacuum', wavelengths) # space within sample. n_medium is outside the sample. \n # n_particle and n_matrix can have complex indices if absorption is desired\nboundary = 'film' # geometry of sample, can be 'film' or 'sphere', see below for tutorial \n # on sphere case\nthickness = sc.Quantity('50 um') # thickness of the sample film", "Construct the structure factor data\nHere, we use discrete points from the percus-yevick approximation for structure factor, as an example. 
In practice, you will most likely use actual structure factor data imported from your own file", "qd_data = np.arange(0,75, 0.1)\ns_data = structure.factor_py(qd_data, volume_fraction.magnitude)", "plot the structure factor data and interpolated function", "qd = np.arange(0,70, 0.1)# works up to qd = 72\ns = structure.factor_data(qd, s_data, qd_data)\n\nplt.figure()\nplt.plot(qd, s, label = 'interpolated')\nplt.plot(qd_data, s_data,'.', label = 'data')\nplt.legend()\nplt.xlabel('qd')\nplt.ylabel('structure factor')", "Calculate reflectance", "reflectance = np.zeros(wavelengths.size)\nfor i in range(wavelengths.size):\n \n # calculate n_sample\n n_sample = ri.n_eff(n_particle[i], n_matrix[i], volume_fraction)\n \n # Calculate the phase function and scattering and absorption coefficients from the single scattering model\n p, mu_scat, mu_abs = mc.calc_scat(radius, n_particle[i], n_sample, volume_fraction, wavelengths[i], \n structure_type = 'data',\n structure_s_data = s_data,\n structure_qd_data = qd_data)\n \n # Initialize the trajectories\n r0, k0, W0 = mc.initialize(nevents, ntrajectories, n_medium[i], n_sample, boundary)\n r0 = sc.Quantity(r0, 'um')\n k0 = sc.Quantity(k0, '')\n W0 = sc.Quantity(W0, '')\n \n # Generate a matrix of all the randomly sampled angles first \n sintheta, costheta, sinphi, cosphi, _, _ = mc.sample_angles(nevents, ntrajectories, p)\n \n # Create step size distribution\n step = mc.sample_step(nevents, ntrajectories, mu_scat)\n \n # Create trajectories object\n trajectories = mc.Trajectory(r0, k0, W0)\n \n # Run photons\n trajectories.absorb(mu_abs, step) \n trajectories.scatter(sintheta, costheta, sinphi, cosphi) \n trajectories.move(step)\n \n reflectance[i], transmittance = det.calc_refl_trans(trajectories, thickness, n_medium[i], n_sample, boundary)", "plot", "plt.figure()\nplt.plot(wavelengths, reflectance)\nplt.ylim([0,1])\nplt.ylabel('Reflectance')\nplt.xlabel('wavelength (nm)')" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
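Both sections of the notebook above call `structure.factor_data(qd, s_data, qd_data)` to interpolate tabulated structure-factor points onto the qd grid the model needs. As a rough sketch of what such an interpolation does (plain linear interpolation over an ascending table; the package itself may use a different scheme):

```python
def interp_linear(x, xs, ys):
    """Linearly interpolate y(x) from ascending sample points (xs, ys).

    Values outside the table are clamped to the endpoint values.
    """
    if x <= xs[0]:
        return ys[0]
    if x >= xs[-1]:
        return ys[-1]
    for i in range(1, len(xs)):
        if x <= xs[i]:
            # fractional position of x inside the bracketing segment
            t = (x - xs[i - 1]) / (xs[i] - xs[i - 1])
            return ys[i - 1] + t * (ys[i] - ys[i - 1])

qd_data = [0.0, 1.0, 2.0, 3.0]   # toy table standing in for the real S(qd) data
s_data = [1.0, 0.5, 2.0, 1.2]
s_mid = interp_linear(1.5, qd_data, s_data)   # halfway between 0.5 and 2.0
```

This also makes clear why the notebook restricts `qd` to the tabulated range (`# works up to qd = 72`): outside the table there is nothing to interpolate between.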
malminhas/udacity-deep-learning
1_notmnist.ipynb
apache-2.0
[ "Deep Learning\nAssignment 1\nThe objective of this assignment is to learn about simple data curation practices, and familiarize you with some of the data we'll be reusing later.\nThis notebook uses the notMNIST dataset to be used with python experiments. This dataset is designed to look like the classic MNIST dataset, while looking a little more like real data: it's a harder task, and the data is a lot less 'clean' than MNIST.", "%matplotlib inline\n\n# These are all the modules we'll be using later. Make sure you can import them\n# before proceeding further.\nfrom __future__ import print_function\nimport matplotlib.pyplot as plt\nimport numpy as np\nimport os\nimport sys\nimport tarfile\nfrom IPython.display import display, Image\nfrom scipy import ndimage\nfrom sklearn.linear_model import LogisticRegression\nfrom six.moves.urllib.request import urlretrieve\nfrom six.moves import cPickle as pickle", "First, we'll download the dataset to our local machine. The data consists of characters rendered in a variety of fonts on a 28x28 image. The labels are limited to 'A' through 'J' (10 classes). The training set has about 500k and the testset 19000 labelled examples. Given these sizes, it should be possible to train models quickly on any machine.", "url = 'http://yaroslavvb.com/upload/notMNIST/'\n\ndef maybe_download(filename, expected_bytes, force=False):\n \"\"\"Download a file if not present, and make sure it's the right size.\"\"\"\n if force or not os.path.exists(filename):\n filename, _ = urlretrieve(url + filename, filename)\n statinfo = os.stat(filename)\n if statinfo.st_size == expected_bytes:\n print('Found and verified', filename)\n else:\n raise Exception(\n 'Failed to verify' + filename + '. 
Can you get to it with a browser?')\n return filename\n\ntrain_filename = maybe_download('notMNIST_large.tar.gz', 247336696)\ntest_filename = maybe_download('notMNIST_small.tar.gz', 8458043)", "Extract the dataset from the compressed .tar.gz file.\nThis should give you a set of directories, labelled A through J.", "num_classes = 10\nnp.random.seed(133)\n\ndef maybe_extract(filename, force=False):\n root = os.path.splitext(os.path.splitext(filename)[0])[0] # remove .tar.gz\n if os.path.isdir(root) and not force:\n # You may override by setting force=True.\n print('%s already present - Skipping extraction of %s.' % (root, filename))\n else:\n print('Extracting data for %s. This may take a while. Please wait.' % root)\n tar = tarfile.open(filename)\n sys.stdout.flush()\n tar.extractall()\n tar.close()\n data_folders = [\n os.path.join(root, d) for d in sorted(os.listdir(root))\n if os.path.isdir(os.path.join(root, d))]\n if len(data_folders) != num_classes:\n raise Exception(\n 'Expected %d folders, one per class. Found %d instead.' % (\n num_classes, len(data_folders)))\n print(data_folders)\n return data_folders\n \ntrain_folders = maybe_extract(train_filename)\ntest_folders = maybe_extract(test_filename)", "Problem 1\nLet's take a peek at some of the data to make sure it looks sensible. Each exemplar should be an image of a character A through J rendered in a different font. Display a sample of the images that we just downloaded. Hint: you can use the package IPython.display.", "# Let's display the 100th image from each subdirectory \nimages = []\nfor sdir in train_folders:\n files = os.listdir(sdir)\n im = os.path.join(sdir,files[100])\n images.append(im)\n\nprint(images)\nImage(filename = images[0])\n\nImage(filename = images[1])\n\nImage(filename = images[2])", "Now let's load the data in a more manageable format. 
Since, depending on your computer setup you might not be able to fit it all in memory, we'll load each class into a separate dataset, store them on disk and curate them independently. Later we'll merge them into a single dataset of manageable size.\nWe'll convert the entire dataset into a 3D array (image index, x, y) of floating point values, normalized to have approximately zero mean and standard deviation ~0.5 to make training easier down the road. \nA few images might not be readable, we'll just skip them.", "image_size = 28 # Pixel width and height.\npixel_depth = 255.0 # Number of levels per pixel.\n\ndef load_letter(folder, min_num_images):\n \"\"\"Load the data for a single letter label.\"\"\"\n image_files = os.listdir(folder)\n dataset = np.ndarray(shape=(len(image_files), image_size, image_size),\n dtype=np.float32)\n image_index = 0\n print(folder)\n for image in os.listdir(folder):\n image_file = os.path.join(folder, image)\n try:\n image_data = (ndimage.imread(image_file).astype(float) - \n pixel_depth / 2) / pixel_depth\n if image_data.shape != (image_size, image_size):\n raise Exception('Unexpected image shape: %s' % str(image_data.shape))\n dataset[image_index, :, :] = image_data\n image_index += 1\n except IOError as e:\n print('Could not read:', image_file, ':', e, '- it\\'s ok, skipping.')\n \n num_images = image_index\n dataset = dataset[0:num_images, :, :]\n if num_images < min_num_images:\n raise Exception('Many fewer images than expected: %d < %d' %\n (num_images, min_num_images))\n \n print('Full dataset tensor:', dataset.shape)\n print('Mean:', np.mean(dataset))\n print('Standard deviation:', np.std(dataset))\n return dataset\n \ndef maybe_pickle(data_folders, min_num_images_per_class, force=False):\n dataset_names = []\n for folder in data_folders:\n set_filename = folder + '.pickle'\n dataset_names.append(set_filename)\n if os.path.exists(set_filename) and not force:\n # You may override by setting force=True.\n print('%s already present 
- Skipping pickling.' % set_filename)\n else:\n print('Pickling %s.' % set_filename)\n dataset = load_letter(folder, min_num_images_per_class)\n try:\n with open(set_filename, 'wb') as f:\n pickle.dump(dataset, f, pickle.HIGHEST_PROTOCOL)\n except Exception as e:\n print('Unable to save data to', set_filename, ':', e)\n \n return dataset_names\n\ntrain_datasets = maybe_pickle(train_folders, 45000)\ntest_datasets = maybe_pickle(test_folders, 1800)", "Problem 2\nLet's verify that the data still looks good. Displaying a sample of the labels and images from the ndarray. Hint: you can use matplotlib.pyplot.", "def getImage(fname,N):\n try:\n print(\"Trying to open '%s'\" % fname)\n with open(fname, 'rb') as f:\n dataset = pickle.load(f)\n print(type(dataset))\n im = dataset[N,:,:]\n except Exception as e:\n print('Unable to open', fname, ':', e)\n plt.imshow(im, cmap=plt.cm.CMRmap)\n return im\n\nim = getImage(test_datasets[0],100)\n\nim = getImage(test_datasets[1],100)\n\nim = getImage(test_datasets[2],100)", "Problem 3\nAnother check: we expect the data to be balanced across classes. Verify that.", "print(test_datasets)\nfor cl in test_datasets:\n with open(cl,'rb') as f:\n dataset = pickle.load(f)\n print(\"class %s has %d samples\" % (cl,len(dataset)))", "Merge and prune the training data as needed. Depending on your computer setup, you might not be able to fit it all in memory, and you can tune train_size as needed. 
The labels will be stored into a separate array of integers 0 through 9.\nAlso create a validation dataset for hyperparameter tuning.", "def make_arrays(nb_rows, img_size):\n if nb_rows:\n dataset = np.ndarray((nb_rows, img_size, img_size), dtype=np.float32)\n labels = np.ndarray(nb_rows, dtype=np.int32)\n else:\n dataset, labels = None, None\n return dataset, labels\n\ndef merge_datasets(pickle_files, train_size, valid_size=0):\n num_classes = len(pickle_files)\n valid_dataset, valid_labels = make_arrays(valid_size, image_size)\n train_dataset, train_labels = make_arrays(train_size, image_size)\n vsize_per_class = valid_size // num_classes\n tsize_per_class = train_size // num_classes\n \n start_v, start_t = 0, 0\n end_v, end_t = vsize_per_class, tsize_per_class\n end_l = vsize_per_class+tsize_per_class\n for label, pickle_file in enumerate(pickle_files): \n try:\n with open(pickle_file, 'rb') as f:\n letter_set = pickle.load(f)\n # let's shuffle the letters to have random validation and training set\n np.random.shuffle(letter_set)\n if valid_dataset is not None:\n valid_letter = letter_set[:vsize_per_class, :, :]\n valid_dataset[start_v:end_v, :, :] = valid_letter\n valid_labels[start_v:end_v] = label\n start_v += vsize_per_class\n end_v += vsize_per_class\n \n train_letter = letter_set[vsize_per_class:end_l, :, :]\n train_dataset[start_t:end_t, :, :] = train_letter\n train_labels[start_t:end_t] = label\n start_t += tsize_per_class\n end_t += tsize_per_class\n except Exception as e:\n print('Unable to process data from', pickle_file, ':', e)\n raise\n \n return valid_dataset, valid_labels, train_dataset, train_labels\n \n \ntrain_size = 200000\nvalid_size = 10000\ntest_size = 10000\n\nvalid_dataset, valid_labels, train_dataset, train_labels = merge_datasets(\n train_datasets, train_size, valid_size)\n_, _, test_dataset, test_labels = merge_datasets(test_datasets, test_size)\n\nprint('Training:', train_dataset.shape, train_labels.shape)\nprint('Validation:', 
valid_dataset.shape, valid_labels.shape)\nprint('Testing:', test_dataset.shape, test_labels.shape)", "Next, we'll randomize the data. It's important to have the labels well shuffled for the training and test distributions to match.", "def randomize(dataset, labels):\n permutation = np.random.permutation(labels.shape[0])\n shuffled_dataset = dataset[permutation,:,:]\n shuffled_labels = labels[permutation]\n return shuffled_dataset, shuffled_labels\ntrain_dataset, train_labels = randomize(train_dataset, train_labels)\ntest_dataset, test_labels = randomize(test_dataset, test_labels)\nvalid_dataset, valid_labels = randomize(valid_dataset, valid_labels)", "Problem 4\nConvince yourself that the data is still good after shuffling!", "im = test_dataset[0,:,:]\nplt.imshow(im, cmap=plt.cm.CMRmap)\n\nim = test_dataset[1,:,:]\nplt.imshow(im, cmap=plt.cm.CMRmap)\n\nim = test_dataset[2,:,:]\nplt.imshow(im, cmap=plt.cm.CMRmap)", "Finally, let's save the data for later reuse:", "pickle_file = 'notMNIST.pickle'\n\ntry:\n f = open(pickle_file, 'wb')\n save = {\n 'train_dataset': train_dataset,\n 'train_labels': train_labels,\n 'valid_dataset': valid_dataset,\n 'valid_labels': valid_labels,\n 'test_dataset': test_dataset,\n 'test_labels': test_labels,\n }\n pickle.dump(save, f, pickle.HIGHEST_PROTOCOL)\n f.close()\nexcept Exception as e:\n print('Unable to save data to', pickle_file, ':', e)\n raise\nprint(\"Finished\")\n\nstatinfo = os.stat(pickle_file)\nprint('Compressed pickle size:', statinfo.st_size)", "Problem 5\nBy construction, this dataset might contain a lot of overlapping samples, including training data that's also contained in the validation and test set! 
Overlap between training and test can skew the results if you expect to use your model in an environment where there is never an overlap, but are actually ok if you expect to see training samples recur when you use it.\nMeasure how much overlap there is between training, validation and test samples.\nOptional questions:\n- What about near duplicates between datasets? (images that are almost identical)\n- Create a sanitized validation and test set, and compare your accuracy on those in subsequent assignments.", "### WARNING, WILL TAKE AN INCREDIBLY LONG TIME TO RUN ON FULL TRAINING DATASET ###\n\n# Cycle through train dataset\n# Collect the indices of dupe matrices in the validation and testing sets,\n# then delete them all at once, so each deletion does not invalidate the\n# indices of the remaining duplicates\n\nvalid_dupes = set()\ntest_dupes = set()\n#for i in range(len(train_dataset)):\nfor i in range(80,90):\n\n # Validation set\n for j in range(len(valid_dataset)):\n # Check for duplicates\n if np.alltrue(train_dataset[i] == valid_dataset[j]):\n valid_dupes.add(j)\n\n # Test set\n for j in range(len(test_dataset)):\n # Check for duplicates\n if np.alltrue(train_dataset[i] == test_dataset[j]):\n test_dupes.add(j)\n\nsanitized_valid_dataset = np.delete(valid_dataset, sorted(valid_dupes), axis=0)\nsanitized_valid_labels = np.delete(valid_labels, sorted(valid_dupes))\nsanitized_test_dataset = np.delete(test_dataset, sorted(test_dupes), axis=0)\nsanitized_test_labels = np.delete(test_labels, sorted(test_dupes))\nvalid_count = len(valid_dupes)\ntest_count = len(test_dupes)\n \nprint('Deleted', valid_count, 'images from validation set across', len(valid_dataset)*10)\nprint('Deleted', test_count, 'images from test set across', 
len(test_dataset)*10)\n\nprint(train_dataset.shape)\nprint(train_labels.shape)\nprint(valid_dataset.shape)\nprint(valid_labels.shape)\nprint(test_dataset.shape)\nprint(test_labels.shape)\n\nprint(sanitized_valid_dataset.shape)\nprint(sanitized_valid_labels.shape)\nprint(sanitized_test_dataset.shape)\nprint(sanitized_test_labels.shape)", "Problem 6\nLet's get an idea of what an off-the-shelf classifier can give you on this data. It's always good to check that there is something to learn, and that it's a problem that is not so trivial that a canned solution solves it.\nTrain a simple model on this data using 50, 100, 1000 and 5000 training samples. Hint: you can use the LogisticRegression model from sklearn.linear_model.\nOptional question: train an off-the-shelf model on all the data!", "def lrclassify(total_training_samples):\n train_dataset[:total_training_samples].shape\n\n # Reshape tensors into matrices for training and validation\n X = np.reshape(train_dataset[:total_training_samples], (total_training_samples, 28*28))\n X_valid = np.reshape(valid_dataset, (len(valid_dataset), 28*28))\n\n # Train model\n clf = LogisticRegression()\n clf.fit(X, train_labels[:total_training_samples])\n\n # Accuracy\n return sum(clf.predict(X_valid) == valid_labels) / float(len(X_valid))\n \nprint(lrclassify(50))\nprint(lrclassify(100))\nprint(lrclassify(1000))\nprint(lrclassify(5000))" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
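The `load_letter` routine in the entry above normalizes pixels with `(image_data - pixel_depth / 2) / pixel_depth`, which maps the 0..255 range into [-0.5, 0.5] so the dataset ends up with approximately zero mean and standard deviation around 0.5. The arithmetic can be checked at the extremes:

```python
pixel_depth = 255.0  # number of levels per pixel, as in the notebook

def normalize(pixel):
    """Map a 0..255 pixel value into [-0.5, 0.5], approximately zero-mean."""
    return (pixel - pixel_depth / 2) / pixel_depth

lo, mid, hi = normalize(0), normalize(127.5), normalize(255)
# lo == -0.5, mid == 0.0, hi == 0.5
```

Centering and rescaling like this keeps the input magnitudes small and symmetric around zero, which the notebook notes makes training easier down the road.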
infilect/ml-course1
keras-notebooks/Frameworks/2.3 Introduction to Keras.ipynb
mit
[ "<img src=\"../imgs/keras-logo-small.jpg\" width=\"20%\" />\nKeras: Deep Learning library for Theano and TensorFlow\n\nKeras is a minimalist, highly modular neural networks library, written in Python and capable of running on top of either TensorFlow or Theano. \nIt was developed with a focus on enabling fast experimentation. Being able to go from idea to result with the least possible delay is key to doing good research.\nref: https://keras.io/\n\n<a name=\"kaggle\"></a>\nKaggle Challenge Data\n\nThe Otto Group is one of the world’s biggest e-commerce companies, A consistent analysis of the performance of products is crucial. However, due to diverse global infrastructure, many identical products get classified differently.\nFor this competition, we have provided a dataset with 93 features for more than 200,000 products. The objective is to build a predictive model which is able to distinguish between our main product categories. \nEach row corresponds to a single product. There are a total of 93 numerical features, which represent counts of different events. All features have been obfuscated and will not be defined any further.\n\nhttps://www.kaggle.com/c/otto-group-product-classification-challenge/data\nFor this section we will use the Kaggle Otto Group Challenge Data. You will find these data in\n../data/kaggle_ottogroup/ folder.\nLogistic Regression\nThis algorithm has nothing to do with the canonical linear regression, but it is an algorithm that allows us to solve problems of classification (supervised learning). \nIn fact, to estimate the dependent variable, now we make use of the so-called logistic function or sigmoid. 
\nIt is precisely because of this feature we call this algorithm logistic regression.\n\nData Preparation", "from kaggle_data import load_data, preprocess_data, preprocess_labels\nimport numpy as np\nimport matplotlib.pyplot as plt\n\nX_train, labels = load_data('../data/kaggle_ottogroup/train.csv', train=True)\nX_train, scaler = preprocess_data(X_train)\nY_train, encoder = preprocess_labels(labels)\n\nX_test, ids = load_data('../data/kaggle_ottogroup/test.csv', train=False)\nX_test, _ = preprocess_data(X_test, scaler)\n\nnb_classes = Y_train.shape[1]\nprint(nb_classes, 'classes')\n\ndims = X_train.shape[1]\nprint(dims, 'dims')\n\nnp.unique(labels)\n\nY_train # one-hot encoding", "Using Theano", "import theano as th\nimport theano.tensor as T\n\n#Based on example from DeepLearning.net\nrng = np.random\nN = 400\nfeats = 93\ntraining_steps = 10\n\n# Declare Theano symbolic variables\nx = T.matrix(\"x\")\ny = T.vector(\"y\")\nw = th.shared(rng.randn(feats), name=\"w\")\nb = th.shared(0., name=\"b\")\n\n# Construct Theano expression graph\np_1 = 1 / (1 + T.exp(-T.dot(x, w) - b)) # Probability that target = 1\nprediction = p_1 > 0.5 # The prediction thresholded\nxent = -y * T.log(p_1) - (1-y) * T.log(1-p_1) # Cross-entropy loss function\ncost = xent.mean() + 0.01 * (w ** 2).sum() # The cost to minimize\ngw, gb = T.grad(cost, [w, b]) # Compute the gradient of the cost\n \n\n# Compile\ntrain = th.function(\n inputs=[x,y],\n outputs=[prediction, xent],\n updates=((w, w - 0.1 * gw), (b, b - 0.1 * gb)),\n allow_input_downcast=True)\npredict = th.function(inputs=[x], outputs=prediction, allow_input_downcast=True)\n\n#Transform for class1\ny_class1 = []\nfor i in Y_train:\n y_class1.append(i[0])\ny_class1 = np.array(y_class1)\n\n# Train\nfor i in range(training_steps):\n print('Epoch %s' % (i+1,))\n pred, err = train(X_train, y_class1)\n\nprint(\"target values for Data:\")\nprint(y_class1)\nprint(\"prediction on training set:\")\nprint(predict(X_train))", "Using Tensorflow", 
"import tensorflow as tf\n\n# Parameters\nlearning_rate = 0.01\ntraining_epochs = 25\ndisplay_step = 1\n\n# tf Graph Input\nx = tf.placeholder(\"float\", [None, dims]) \ny = tf.placeholder(\"float\", [None, nb_classes])\n\nx", "Model (Introducing Tensorboard)", "# Construct (linear) model\nwith tf.name_scope(\"model\") as scope:\n # Set model weights\n W = tf.Variable(tf.zeros([dims, nb_classes]))\n b = tf.Variable(tf.zeros([nb_classes]))\n activation = tf.nn.softmax(tf.matmul(x, W) + b) # Softmax\n\n # Add summary ops to collect data\n w_h = tf.summary.histogram(\"weights_histogram\", W)\n b_h = tf.summary.histogram(\"biases_histograms\", b)\n tf.summary.scalar('mean_weights', tf.reduce_mean(W))\n tf.summary.scalar('mean_bias', tf.reduce_mean(b))\n\n# Minimize error using cross entropy\n# Note: More name scopes will clean up graph representation\nwith tf.name_scope(\"cost_function\") as scope:\n cross_entropy = y*tf.log(activation)\n cost = tf.reduce_mean(-tf.reduce_sum(cross_entropy,reduction_indices=1))\n # Create a summary to monitor the cost function\n tf.summary.scalar(\"cost_function\", cost)\n tf.summary.histogram(\"cost_histogram\", cost)\n\nwith tf.name_scope(\"train\") as scope:\n # Set the Optimizer\n optimizer = tf.train.GradientDescentOptimizer(learning_rate).minimize(cost)", "Accuracy", "with tf.name_scope('Accuracy') as scope:\n correct_prediction = tf.equal(tf.argmax(activation, 1), tf.argmax(y, 1))\n # Calculate accuracy\n accuracy = tf.reduce_mean(tf.cast(correct_prediction, \"float\"))\n # Create a summary to monitor the cost function\n tf.summary.scalar(\"accuracy\", accuracy)", "Learning in a TF Session", "LOGDIR = \"/tmp/logistic_logs\"\nimport os, shutil\nif os.path.isdir(LOGDIR):\n shutil.rmtree(LOGDIR)\nos.mkdir(LOGDIR)\n\n# Plug TensorBoard Visualisation \nwriter = tf.summary.FileWriter(LOGDIR, graph=tf.get_default_graph())\n\nfor var in tf.get_collection(tf.GraphKeys.SUMMARIES):\n print(var.name)\n \nsummary_op = 
tf.summary.merge_all()\nprint('Summary Op:', summary_op)\n\n# Launch the graph\nwith tf.Session() as session:\n # Initializing the variables\n session.run(tf.global_variables_initializer())\n \n cost_epochs = []\n # Training cycle\n for epoch in range(training_epochs):\n _, summary, c = session.run(fetches=[optimizer, summary_op, cost], \n feed_dict={x: X_train, y: Y_train})\n cost_epochs.append(c)\n writer.add_summary(summary=summary, global_step=epoch)\n print(\"accuracy epoch {}:{}\".format(epoch, accuracy.eval({x: X_train, y: Y_train})))\n \n print(\"Training phase finished\")\n \n #plotting\n plt.plot(range(len(cost_epochs)), cost_epochs, 'o', label='Logistic Regression Training phase')\n plt.ylabel('cost')\n plt.xlabel('epoch')\n plt.legend()\n plt.show()\n \n prediction = tf.argmax(activation, 1)\n print(prediction.eval({x: X_test}))\n\n%%bash\npython -m tensorflow.tensorboard --logdir=/tmp/logistic_logs", "Using Keras", "from keras.models import Sequential\nfrom keras.layers import Dense, Activation\n\ndims = X_train.shape[1]\nprint(dims, 'dims')\nprint(\"Building model...\")\n\nnb_classes = Y_train.shape[1]\nprint(nb_classes, 'classes')\n\nmodel = Sequential()\nmodel.add(Dense(nb_classes, input_shape=(dims,), activation='sigmoid'))\nmodel.add(Activation('softmax'))\n\nmodel.compile(optimizer='sgd', loss='categorical_crossentropy')\nmodel.fit(X_train, Y_train)", "Simplicity is pretty impressive, right? :)\nTheano:\nshape = (channels, rows, cols)\nTensorflow:\nshape = (rows, cols, channels)\nimage_data_format : channels_last | channels_first", "!cat ~/.keras/keras.json", "Now let's understand:\n<pre>The core data structure of Keras is a <b>model</b>, a way to organize layers. 
The main type of model is the <b>Sequential</b> model, a linear stack of layers.</pre>\n\nWhat we did here is stacking a Fully Connected (<b>Dense</b>) layer of trainable weights from the input to the output and an <b>Activation</b> layer on top of the weights layer.\nDense\n```python\nfrom keras.layers.core import Dense\nDense(units, activation=None, use_bias=True, \n kernel_initializer='glorot_uniform', bias_initializer='zeros', \n kernel_regularizer=None, bias_regularizer=None, \n activity_regularizer=None, kernel_constraint=None, bias_constraint=None)\n```\n\n\nunits: int > 0.\n\n\ninit: name of initialization function for the weights of the layer (see initializations), or alternatively, Theano function to use for weights initialization. This parameter is only relevant if you don't pass a weights argument.\n\n\nactivation: name of activation function to use (see activations), or alternatively, elementwise Theano function. If you don't specify anything, no activation is applied (ie. \"linear\" activation: a(x) = x).\n\n\nweights: list of Numpy arrays to set as initial weights. The list should have 2 elements, of shape (input_dim, output_dim) and (output_dim,) for weights and biases respectively.\n\n\nkernel_regularizer: instance of WeightRegularizer (eg. L1 or L2 regularization), applied to the main weights matrix.\n\n\nbias_regularizer: instance of WeightRegularizer, applied to the bias.\n\n\nactivity_regularizer: instance of ActivityRegularizer, applied to the network output.\n\n\nkernel_constraint: instance of the constraints module (eg. maxnorm, nonneg), applied to the main weights matrix.\n\n\nbias_constraint: instance of the constraints module, applied to the bias.\n\n\nuse_bias: whether to include a bias (i.e. 
make the layer affine rather than linear).\n\n\n(some) other keras.core.layers\n\nkeras.layers.core.Flatten()\nkeras.layers.core.Reshape(target_shape)\nkeras.layers.core.Permute(dims)\n\n```python\nmodel = Sequential()\nmodel.add(Permute((2, 1), input_shape=(10, 64)))\n# now: model.output_shape == (None, 64, 10)\n# note: None is the batch dimension\n```\n\nkeras.layers.core.Lambda(function, output_shape=None, arguments=None)\nkeras.layers.core.ActivityRegularization(l1=0.0, l2=0.0)\n\n<img src=\"../imgs/dl_overview.png\" >\nCredits: Yam Peleg (@Yampeleg)\nActivation\n```python\nfrom keras.layers.core import Activation\nActivation(activation)\n```\nSupported Activations : [https://keras.io/activations/]\nAdvanced Activations: [https://keras.io/layers/advanced-activations/]\nOptimizer\nIf you need to, you can further configure your optimizer. A core principle of Keras is to make things reasonably simple, while allowing the user to be fully in control when they need to (the ultimate control being the easy extensibility of the source code).\nHere we used <b>SGD</b> (stochastic gradient descent) as an optimization algorithm for our trainable weights. \n<img src=\"http://sebastianruder.com/content/images/2016/09/saddle_point_evaluation_optimizers.gif\" width=\"40%\">\nSource & Reference: http://sebastianruder.com/content/images/2016/09/saddle_point_evaluation_optimizers.gif\n\"Data Sciencing\" this example a little bit more\nWhat we did here is nice; however, in the real world it is not usable because of overfitting.\nLet's try to solve it with cross-validation.\nOverfitting\nIn overfitting, a statistical model describes random error or noise instead of the underlying relationship. Overfitting occurs when a model is excessively complex, such as having too many parameters relative to the number of observations. 
\nA model that has been overfit has poor predictive performance, as it overreacts to minor fluctuations in the training data.\n<img src=\"../imgs/overfitting.png\">\n<pre>To avoid overfitting, we will first split our data into a training set and a test set and test our model on the test set.\nNext: we will use two of keras's callbacks <b>EarlyStopping</b> and <b>ModelCheckpoint</b></pre>\n\n\nLet's first see the model we implemented", "model.summary()\n\nfrom sklearn.model_selection import train_test_split\nfrom keras.callbacks import EarlyStopping, ModelCheckpoint\n\nX_train, X_val, Y_train, Y_val = train_test_split(X_train, Y_train, test_size=0.15, random_state=42)\n\nfBestModel = 'best_model.h5' \nearly_stop = EarlyStopping(monitor='val_loss', patience=2, verbose=1) \nbest_model = ModelCheckpoint(fBestModel, verbose=0, save_best_only=True)\n\nmodel.fit(X_train, Y_train, validation_data = (X_val, Y_val), epochs=50, \n batch_size=128, verbose=True, callbacks=[best_model, early_stop]) ", "Multi-Layer Fully Connected Networks\n<img src=\"../imgs/MLP.png\" width=\"45%\">\nForward and Backward Propagation\n<img src=\"../imgs/backprop.png\" width=\"50%\">\nQ: How hard can it be to build a Multi-Layer Fully-Connected Network with keras?\nA: It is basically the same, just add more layers!", "model = Sequential()\nmodel.add(Dense(100, input_shape=(dims,)))\nmodel.add(Dense(nb_classes))\nmodel.add(Activation('softmax'))\nmodel.compile(optimizer='sgd', loss='categorical_crossentropy')\nmodel.summary()\n\nmodel.fit(X_train, Y_train, validation_data = (X_val, Y_val), epochs=20, \n batch_size=128, verbose=True)", "Your Turn!\nHands On - Keras Fully Connected\nTake a couple of minutes and try to play with the number of layers and the number of parameters in the layers to get the best results.", "model = Sequential()\nmodel.add(Dense(100, input_shape=(dims,)))\n\n# ...\n# ...\n# Play with it! add as many layers as you want! 
try to get better results.\n\nmodel.add(Dense(nb_classes))\nmodel.add(Activation('softmax'))\nmodel.compile(optimizer='sgd', loss='categorical_crossentropy')\n\nmodel.summary()\n\nmodel.fit(X_train, Y_train, validation_data = (X_val, Y_val), epochs=20, \n batch_size=128, verbose=True)", "Building a question answering system, an image classification model, a Neural Turing Machine, a word2vec embedder or any other model is just as fast. The ideas behind deep learning are simple, so why should their implementation be painful?\nTheoretical Motivations for depth\n\nMuch has been studied about the depth of neural nets. It has been proven mathematically[1] and empirically that convolutional neural networks benefit from depth! \n\n[1] - On the Expressive Power of Deep Learning: A Tensor Analysis - Cohen, et al 2015\nTheoretical Motivations for depth\nOne much-quoted theorem about neural networks states that:\n\nThe universal approximation theorem states[1] that a feed-forward network with a single hidden layer containing a finite number of neurons (i.e., a multilayer perceptron) can approximate continuous functions on compact subsets of $\\mathbb{R}^n$, under mild assumptions on the activation function. The theorem thus states that simple neural networks can represent a wide variety of interesting functions when given appropriate parameters; however, it does not touch upon the algorithmic learnability of those parameters.\n\n[1] - Approximation Capabilities of Multilayer Feedforward Networks - Kurt Hornik 1991\nAddendum\n2.3.1 Keras Backend" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
google-research/google-research
kws_streaming/colab/02_inference.ipynb
apache-2.0
[ "Copyright 2019 Google LLC\nLicensed under the Apache License, Version 2.0 (the \"License\");\nyou may not use this file except in compliance with the License.\nYou may obtain a copy of the License at\nhttps://www.apache.org/licenses/LICENSE-2.0\n\nUnless required by applicable law or agreed to in writing, software\ndistributed under the License is distributed on an \"AS IS\" BASIS,\nWITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\nSee the License for the specific language governing permissions and\nlimitations under the License.", "!git clone https://github.com/google-research/google-research.git\n\nimport sys\nimport os\nimport tarfile\nimport urllib\nimport zipfile\nsys.path.append('./google-research')", "Examples of streaming and non streaming inference with TF/TFlite\nImports", "# TF streaming\nfrom kws_streaming.models import models\nfrom kws_streaming.models import utils\nfrom kws_streaming.models import model_utils\nfrom kws_streaming.layers.modes import Modes\n\n\nimport tensorflow as tf\nimport numpy as np\nimport tensorflow.compat.v1 as tf1\nimport logging\nfrom kws_streaming.models import model_flags\nfrom kws_streaming.models import model_params\nfrom kws_streaming.train import inference\nfrom kws_streaming.train import test\nfrom kws_streaming.data import input_data\nfrom kws_streaming.data import input_data_utils as du\ntf1.disable_eager_execution()\n\nconfig = tf1.ConfigProto()\nconfig.gpu_options.allow_growth = True\nsess = tf1.Session(config=config)\n\n# general imports\nimport matplotlib.pyplot as plt\nimport os\nimport json\nimport numpy as np\nimport scipy as scipy\nimport scipy.io.wavfile as wav\nimport scipy.signal\n\ntf.__version__\n\ntf1.reset_default_graph()\nsess = tf1.Session()\ntf1.keras.backend.set_session(sess)\ntf1.keras.backend.set_learning_phase(0)", "Load wav file", "def waveread_as_pcm16(filename):\n \"\"\"Read in audio data from a wav file. 
Return d, sr.\"\"\"\n samplerate, wave_data = wav.read(filename)\n # Read in wav file.\n return wave_data, samplerate\n\ndef wavread_as_float(filename, target_sample_rate=16000):\n \"\"\"Read in audio data from a wav file. Return d, sr.\"\"\"\n wave_data, samplerate = waveread_as_pcm16(filename)\n desired_length = int(\n round(float(len(wave_data)) / samplerate * target_sample_rate))\n wave_data = scipy.signal.resample(wave_data, desired_length)\n\n # Normalize short ints to floats in range [-1..1).\n data = np.array(wave_data, np.float32) / 32768.0\n return data, target_sample_rate\n\n# set PATH to data sets (for example to speech commands V2):\n# it can be downloaded from\n# https://storage.googleapis.com/download.tensorflow.org/data/speech_commands_v0.02.tar.gz\n# if you run 00_check-data.ipynb then data2 should be located in the current folder\ncurrent_dir = os.getcwd()\nDATA_PATH = os.path.join(current_dir, \"data2/\")\n\n# Set path to wav file for testing.\nwav_file = os.path.join(DATA_PATH, \"left/012187a4_nohash_0.wav\")\n\n# read audio file\nwav_data, samplerate = wavread_as_float(wav_file)\n\nassert samplerate == 16000\n\nplt.plot(wav_data)", "Prepare batched model", "# This notebook is configured to work with 'ds_tc_resnet' and 'svdf'.\nMODEL_NAME = 'ds_tc_resnet'\n# MODEL_NAME = 'svdf'\nMODELS_PATH = os.path.join(current_dir, \"models\")\nMODEL_PATH = os.path.join(MODELS_PATH, MODEL_NAME + \"/\")\nMODEL_PATH\n\ntrain_dir = os.path.join(MODELS_PATH, MODEL_NAME)\n\n# below is another way of reading flags - through json\nwith tf.compat.v1.gfile.Open(os.path.join(train_dir, 'flags.json'), 'r') as fd:\n flags_json = json.load(fd)\n\nclass DictStruct(object):\n def __init__(self, **entries):\n self.__dict__.update(entries)\n\nflags = DictStruct(**flags_json)\n\nflags.data_dir = DATA_PATH\n\n# get total stride of the model\r\n\r\ntotal_stride = 1\r\nif MODEL_NAME == 'ds_tc_resnet':\r\n # it can be automated by scanning layers of the model, but for now just use 
the parameters of this specific model\r\n pools = model_utils.parse(flags.ds_pool)\r\n strides = model_utils.parse(flags.ds_stride)\r\n time_stride = [1]\r\n for pool in pools:\r\n if pool > 1:\r\n time_stride.append(pool)\r\n for stride in strides:\r\n if stride > 1:\r\n time_stride.append(stride)\r\n total_stride = np.prod(time_stride)\r\n\r\n# override input data shape for streaming model with stride/pool\r\nflags.data_stride = total_stride\r\nflags.data_shape = (total_stride * flags.window_stride_samples,)\n\n# prepare mapping of index to word\naudio_processor = input_data.AudioProcessor(flags)\nindex_to_label = {}\n# labels used for training\nfor word in audio_processor.word_to_index.keys():\n if audio_processor.word_to_index[word] == du.SILENCE_INDEX:\n index_to_label[audio_processor.word_to_index[word]] = du.SILENCE_LABEL\n elif audio_processor.word_to_index[word] == du.UNKNOWN_WORD_INDEX:\n index_to_label[audio_processor.word_to_index[word]] = du.UNKNOWN_WORD_LABEL\n else:\n index_to_label[audio_processor.word_to_index[word]] = word\n\n# training labels\nindex_to_label\n\n# pad input audio with zeros, so that audio len = flags.desired_samples\npadded_wav = np.pad(wav_data, (0, flags.desired_samples-len(wav_data)), 'constant')\n\ninput_data = np.expand_dims(padded_wav, 0)\ninput_data.shape\n\n# create model with flag's parameters\nmodel_non_stream_batch = models.MODELS[flags.model_name](flags)\n\n# load model's weights\nweights_name = 'best_weights'\nmodel_non_stream_batch.load_weights(os.path.join(train_dir, weights_name))\n\ntf.keras.utils.plot_model(\n model_non_stream_batch,\n show_shapes=True,\n show_layer_names=True,\n expand_nested=True)", "Run inference with TF\nTF Run non streaming inference", "# convert model to inference mode with batch one\ninference_batch_size = 1\ntf.keras.backend.set_learning_phase(0)\nflags.batch_size = inference_batch_size # set batch size\n\nmodel_non_stream = utils.to_streaming_inference(model_non_stream_batch, flags, 
Modes.NON_STREAM_INFERENCE)\n#model_non_stream.summary()\n\ntf.keras.utils.plot_model(\n model_non_stream,\n show_shapes=True,\n show_layer_names=True,\n expand_nested=True)\n\npredictions = model_non_stream.predict(input_data)\npredicted_labels = np.argmax(predictions, axis=1)\n\npredicted_labels\n\nindex_to_label[predicted_labels[0]]", "TF Run streaming inference with internal state", "# convert model to streaming mode\nflags.batch_size = inference_batch_size # set batch size\n\nmodel_stream = utils.to_streaming_inference(model_non_stream_batch, flags, Modes.STREAM_INTERNAL_STATE_INFERENCE)\n#model_stream.summary()\n\ntf.keras.utils.plot_model(\n model_stream,\n show_shapes=True,\n show_layer_names=True,\n expand_nested=True)\n\nstream_output_prediction = inference.run_stream_inference_classification(flags, model_stream, input_data)\n\nstream_output_arg = np.argmax(stream_output_prediction)\r\nstream_output_arg\n\nindex_to_label[stream_output_arg]", "TF Run streaming inference with external state", "# convert model to streaming mode\nflags.batch_size = inference_batch_size # set batch size\n\nmodel_stream_external = utils.to_streaming_inference(model_non_stream_batch, flags, Modes.STREAM_EXTERNAL_STATE_INFERENCE)\n#model_stream.summary()\n\ntf.keras.utils.plot_model(\n model_stream_external,\n show_shapes=True,\n show_layer_names=True,\n expand_nested=True)\n\ninputs = []\nfor s in range(len(model_stream_external.inputs)):\n inputs.append(np.zeros(model_stream_external.inputs[s].shape, dtype=np.float32))\n\nwindow_stride = flags.data_shape[0]\n\nstart = 0\nend = window_stride\nwhile end <= input_data.shape[1]:\n # get new frame from stream of data\n stream_update = input_data[:, start:end]\n\n # update indexes of streamed updates\n start = end\n end = start + window_stride\n\n # set input audio data (by default input data at index 0)\n inputs[0] = stream_update\n\n # run inference\n outputs = model_stream_external.predict(inputs)\n\n # get output states and set 
it back to input states\n # which will be fed in the next inference cycle\n for s in range(1, len(model_stream_external.inputs)):\n inputs[s] = outputs[s]\n\n stream_output_arg = np.argmax(outputs[0])\nstream_output_arg\n\nindex_to_label[stream_output_arg]", "Run inference with TFlite\nRun non streaming inference with TFLite", "tflite_non_streaming_model = utils.model_to_tflite(sess, model_non_stream_batch, flags, Modes.NON_STREAM_INFERENCE)\ntflite_non_stream_fname = 'tflite_non_stream.tflite'\nwith open(os.path.join(MODEL_PATH, tflite_non_stream_fname), 'wb') as fd:\n fd.write(tflite_non_streaming_model)\n\ninterpreter = tf.lite.Interpreter(model_content=tflite_non_streaming_model)\ninterpreter.allocate_tensors()\n\ninput_details = interpreter.get_input_details()\noutput_details = interpreter.get_output_details()\n\n# set input audio data (by default input data at index 0)\ninterpreter.set_tensor(input_details[0]['index'], input_data.astype(np.float32))\n\n# run inference\ninterpreter.invoke()\n\n# get output: classification\nout_tflite = interpreter.get_tensor(output_details[0]['index'])\n\nout_tflite_argmax = np.argmax(out_tflite)\n\nout_tflite_argmax\n\nindex_to_label[out_tflite_argmax]", "Run streaming inference with TFLite", "tflite_streaming_model = utils.model_to_tflite(sess, model_non_stream_batch, flags, Modes.STREAM_EXTERNAL_STATE_INFERENCE)\ntflite_stream_fname = 'tflite_stream.tflite'\n\nwith open(os.path.join(MODEL_PATH, tflite_stream_fname), 'wb') as fd:\n fd.write(tflite_streaming_model)\n\ninterpreter = tf.lite.Interpreter(model_content=tflite_streaming_model)\r\ninterpreter.allocate_tensors()\r\n\r\ninput_details = interpreter.get_input_details()\r\noutput_details = interpreter.get_output_details()\r\n\r\ninput_states = []\r\nfor s in range(len(input_details)):\r\n input_states.append(np.zeros(input_details[s]['shape'], dtype=np.float32))\n\nout_tflite = inference.run_stream_inference_classification_tflite(flags, interpreter, input_data, 
input_states)\n\nout_tflite_argmax = np.argmax(out_tflite[0])\n\nindex_to_label[out_tflite_argmax]", "Run evaluation on all testing data", "test.tflite_non_stream_model_accuracy(\r\n flags,\r\n MODEL_PATH,\r\n tflite_model_name=tflite_non_stream_fname,\r\n accuracy_name='tflite_non_stream_model_accuracy.txt')\n\ntest.tflite_stream_state_external_model_accuracy(\r\n flags,\r\n MODEL_PATH,\r\n tflite_model_name=tflite_stream_fname,\r\n accuracy_name='tflite_stream_state_external_model_accuracy.txt',\r\n reset_state=True)\n\ntest.tflite_stream_state_external_model_accuracy(\r\n flags,\r\n MODEL_PATH,\r\n tflite_model_name=tflite_stream_fname,\r\n accuracy_name='tflite_stream_state_external_model_accuracy.txt',\r\n reset_state=False)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
spohnan/geowave
python/src/examples/pygw-showcase.ipynb
apache-2.0
[ "PyGw Showcase\nThis notebook demonstrates the some of the utility provided by the pygw python package.\nIn this guide, we will show how you can use pygw to easily:\n- Define a data schema for Geotools SimpleFeature/Vector data (aka create a new data type)\n- Create instances for the new type\n- Create a GeoWave Data Store\n- Register a DataType Adapter & Index to the data store for your new data type\n- Ingest user-created data into the GeoWave Data Store\n- Query data out of the data store\nTo make this guide more interesting, we will be playing with this toy-data set from Kaggle on Boston Public School buildings\nInstallation\nWe can use pip to install pygw!", "# Install pygw\n!pip install ../main/python/", "Importing pygw", "import pygw\n\n# --- Importing Relevant Modules ---\n# Data Stores module\nimport pygw.stores\n# Index module\nimport pygw.indices\n# Geotools support\nimport pygw.geotools\n# Query module\nimport pygw.query", "Loading the Boston Public Schools Data Set", "import csv\n\nwith open(\"public_schools.csv\", encoding='utf-8-sig') as f:\n reader = csv.DictReader(f)\n raw_data = [row for row in reader]\n\n# Let's take a look at what the data looks like\nraw_data[0]", "For the purposes of this exercise, let's just look at the ADDRESS, X, Y, and BLDG_NAME properties of each datapoint.\nCreating a new SimpleFeature data type for the Boston Public Schools Data Set\nWe can define a data schema for our needs & create an appropriate SimpleFeatureType. 
The SimpleFeatureType constructor takes in varargs for the kinds of attributes we want our type to have.\nWe can easily create these with data-type specific convenience methods for constructing Attributes like SimpleFeatureTypeAttribute.string", "from pygw.geotools import SimpleFeatureType as SFT\nfrom pygw.geotools import SimpleFeatureTypeAttribute as SFTAttr\n\n# Creating the Data Type for Public Schools data\npub_school_dt = SFT(\"public_schools\",\n SFTAttr.string(\"building_name\"),\n SFTAttr.string(\"address\"),\n SFTAttr.geometry(\"coordinates\")) # Let's group X and Y as a coordinate", "Creating features for each data point using our new SimpleFeatureType\nPyGw allows you to create SimpleFeature instances straight from a SimpleFeatureType. We can use the SimpleFeatureType.create_feature method to do so easily!\nSimpleFeatureType.create_feature takes in an id and kwargs corresponding to the attribute descriptions associated with the type when we first created it.", "features = []\nfor bldg in raw_data:\n \n data_id = int(bldg[\"BLDG_ID\"])\n addr = bldg[\"ADDRESS\"]\n name = bldg[\"BLDG_NAME\"]\n coords = (float(bldg[\"X\"]), float(bldg[\"Y\"]))\n \n ft = pub_school_dt.create_feature(data_id, building_name=name, address=addr, coordinates=coords)\n \n features.append(ft)", "Creating a Data Store\nLet's now create a Data Store to ingest our data. A simple one we can use for this example is RocksDbDs.", "store = pygw.stores.RocksDbDs(gw_namespace=\"pygw.boston_schools.example\", dir=\"./schools\")", "An aside: help()\nMuch of pygw is well-documented, and the help method in python can be useful for figuring out what a pygw instance can do. 
Let's try it out on our store.", "help(store)", "Registering our Data Type to the data store\nTo store data into our data store, we first have to register a DataTypeAdapter and designate an Index to put our data into.", "# We provide a convenience method to get the type adapter straight from the SimpleFeatureType!\npub_school_adapter = pub_school_dt.get_type_adapter()\n\n# We want to index by coordinates so we want a spatial index\nindex = pygw.indices.SpatialIndex()\n\n# Add our type to our data store\nstore.add_type(pub_school_adapter, index)\n\n# Check that we've successfully registered an index and type\nstore.get_types()\n\nstore.get_indices()", "Writing data to our store", "# Create a writer for our data\nwriter = store.create_writer(pub_school_dt.get_name())\n\n# Writing data to the data store\nfor ft in features:\n writer.write(ft)\n\nwriter.close()", "Querying our store to make sure the data was ingested properly", "from pygw.query import Query\n\n# `Query.everything` is a convenience method for creating an `Everything` query\nresults = store.query(Query.everything())\n\n# The results returned above were an iterator, so let's convert to a list\nresults = [r for r in results]\n\n# Do we have anything?\nlen(results)", "Unfortunately, pretty pygw wrapping of returned results from a query is not yet supported. However, we can use the pygw.debug.print_obj method to see what things look like:", "from pygw.debug import print_obj\n\nprint_obj(results[0])", "Something more interesting...\nWoo-hoo! We've successfully ingested our custom data into our data store. That's cool, but now what? ... Can pygw do more?\nLet's say we wanted to retrieve only the public school buildings in East Boston -- How would we go about doing that? 
For the purposes of this, let's just say we want schools to the East of Franklin Park Zoo, which has coordinates:\n42.3055° N, 71.0900° W --> (-71.0900, 42.3055)", "# A CQL query for things east of the zoo\ncql_query_string = \"BBOX(coordinates,-71.0900,-180,180,180)\"\n\n# Getting the results iterable\nresults = store.query(Query.cql(cql_query_string))\n\n# list of results\nresults = [r for r in results]\n\n# Less than before!\nlen(results)\n\nprint_obj(results[0])", "Let's say we still want to query for buildings to the East of the zoo, but also we only want to find buildings that exist on \"Avenue\"s. We can do that!", "cql_query_string = \"BBOX(coordinates,-71.0900,-180,180,180) and address like '%Avenue'\"\n\n# Getting the results iterable\nresults = store.query(Query.cql(cql_query_string))\nresults = [r for r in results]\nlen(results)\n\nprint_obj(results[0])\n\n# DELETE EVERYTHING\nstore.delete_all()" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
arasdar/DL
uri-dl/uri-dl-hw-2/assignment2/ConvolutionalNetworks.ipynb
unlicense
[ "Convolutional Networks\nSo far we have worked with deep fully-connected networks, using them to explore different optimization strategies and network architectures. Fully-connected networks are a good testbed for experimentation because they are very computationally efficient, but in practice all state-of-the-art results use convolutional networks instead.\nFirst you will implement several layer types that are used in convolutional networks. You will then use these layers to train a convolutional network on the CIFAR-10 dataset.", "# As usual, a bit of setup\nfrom __future__ import print_function\nimport numpy as np\nimport matplotlib.pyplot as plt\nfrom cs231n.classifiers.cnn import *\nfrom cs231n.data_utils import get_CIFAR10_data\nfrom cs231n.gradient_check import eval_numerical_gradient_array, eval_numerical_gradient\nfrom cs231n.layers import *\nfrom cs231n.fast_layers import *\nfrom cs231n.solver import Solver\n\n%matplotlib inline\nplt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots\nplt.rcParams['image.interpolation'] = 'nearest'\nplt.rcParams['image.cmap'] = 'gray'\n\n# for auto-reloading external modules\n# see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython\n%load_ext autoreload\n%autoreload 2\n\ndef rel_error(x, y):\n \"\"\" returns relative error \"\"\"\n return np.max(np.abs(x - y) / (np.maximum(1e-8, np.abs(x) + np.abs(y))))\n\n# Load the (preprocessed) CIFAR10 data.\n\ndata = get_CIFAR10_data()\nfor k, v in data.items():\n print('%s: ' % k, v.shape)", "Convolution: Naive forward pass\nThe core of a convolutional network is the convolution operation. In the file cs231n/layers.py, implement the forward pass for the convolution layer in the function conv_forward_naive. 
\nYou don't have to worry too much about efficiency at this point; just write the code in whatever way you find most clear.\nYou can test your implementation by running the following:", "x_shape = (2, 3, 4, 4)\nw_shape = (3, 3, 4, 4)\nx = np.linspace(-0.1, 0.5, num=np.prod(x_shape)).reshape(x_shape)\nw = np.linspace(-0.2, 0.3, num=np.prod(w_shape)).reshape(w_shape)\nb = np.linspace(-0.1, 0.2, num=3)\n\nconv_param = {'stride': 2, 'pad': 1}\nout, _ = conv_forward_naive(x, w, b, conv_param)\ncorrect_out = np.array([[[[-0.08759809, -0.10987781],\n [-0.18387192, -0.2109216 ]],\n [[ 0.21027089, 0.21661097],\n [ 0.22847626, 0.23004637]],\n [[ 0.50813986, 0.54309974],\n [ 0.64082444, 0.67101435]]],\n [[[-0.98053589, -1.03143541],\n [-1.19128892, -1.24695841]],\n [[ 0.69108355, 0.66880383],\n [ 0.59480972, 0.56776003]],\n [[ 2.36270298, 2.36904306],\n [ 2.38090835, 2.38247847]]]])\n\n# Compare your output to ours; difference should be around 2e-8\nprint('Testing conv_forward_naive')\nprint('difference: ', rel_error(out, correct_out))", "Aside: Image processing via convolutions\nAs a fun way to both check your implementation and gain a better understanding of the type of operation that convolutional layers can perform, we will set up an input containing two images and manually set up filters that perform common image processing operations (grayscale conversion and edge detection). The convolution forward pass will apply these operations to each of the input images. 
We can then visualize the results as a sanity check.", "from scipy.misc import imread, imresize\n\nkitten, puppy = imread('kitten.jpg'), imread('puppy.jpg')\n# kitten is wide, and puppy is already square\nd = kitten.shape[1] - kitten.shape[0]\nkitten_cropped = kitten[:, d//2:-d//2, :]\n\nimg_size = 200 # Make this smaller if it runs too slow\nx = np.zeros((2, 3, img_size, img_size))\nx[0, :, :, :] = imresize(puppy, (img_size, img_size)).transpose((2, 0, 1))\nx[1, :, :, :] = imresize(kitten_cropped, (img_size, img_size)).transpose((2, 0, 1))\n\n# Set up a convolutional weights holding 2 filters, each 3x3\nw = np.zeros((2, 3, 3, 3))\n\n# The first filter converts the image to grayscale.\n# Set up the red, green, and blue channels of the filter.\nw[0, 0, :, :] = [[0, 0, 0], [0, 0.3, 0], [0, 0, 0]]\nw[0, 1, :, :] = [[0, 0, 0], [0, 0.6, 0], [0, 0, 0]]\nw[0, 2, :, :] = [[0, 0, 0], [0, 0.1, 0], [0, 0, 0]]\n\n# Second filter detects horizontal edges in the blue channel.\nw[1, 2, :, :] = [[1, 2, 1], [0, 0, 0], [-1, -2, -1]]\n\n# Vector of biases. 
We don't need any bias for the grayscale\n# filter, but for the edge detection filter we want to add 128\n# to each output so that nothing is negative.\nb = np.array([0, 128])\n\n# Compute the result of convolving each input in x with each filter in w,\n# offsetting by b, and storing the results in out.\nout, _ = conv_forward_naive(x, w, b, {'stride': 1, 'pad': 1})\n\ndef imshow_noax(img, normalize=True):\n \"\"\" Tiny helper to show images as uint8 and remove axis labels \"\"\"\n if normalize:\n img_max, img_min = np.max(img), np.min(img)\n img = 255.0 * (img - img_min) / (img_max - img_min)\n plt.imshow(img.astype('uint8'))\n plt.gca().axis('off')\n\n# Show the original images and the results of the conv operation\nplt.subplot(2, 3, 1)\nimshow_noax(puppy, normalize=False)\nplt.title('Original image')\nplt.subplot(2, 3, 2)\nimshow_noax(out[0, 0])\nplt.title('Grayscale')\nplt.subplot(2, 3, 3)\nimshow_noax(out[0, 1])\nplt.title('Edges')\nplt.subplot(2, 3, 4)\nimshow_noax(kitten_cropped, normalize=False)\nplt.subplot(2, 3, 5)\nimshow_noax(out[1, 0])\nplt.subplot(2, 3, 6)\nimshow_noax(out[1, 1])\nplt.show()", "Convolution: Naive backward pass\nImplement the backward pass for the convolution operation in the function conv_backward_naive in the file cs231n/layers.py. 
Again, you don't need to worry too much about computational efficiency.\nWhen you are done, run the following to check your backward pass with a numeric gradient check.", "np.random.seed(231)\nx = np.random.randn(4, 3, 5, 5)\nw = np.random.randn(2, 3, 3, 3)\nb = np.random.randn(2,)\ndout = np.random.randn(4, 2, 5, 5)\nconv_param = {'stride': 1, 'pad': 1}\n\ndx_num = eval_numerical_gradient_array(lambda x: conv_forward_naive(x, w, b, conv_param)[0], x, dout)\ndw_num = eval_numerical_gradient_array(lambda w: conv_forward_naive(x, w, b, conv_param)[0], w, dout)\ndb_num = eval_numerical_gradient_array(lambda b: conv_forward_naive(x, w, b, conv_param)[0], b, dout)\n\nout, cache = conv_forward_naive(x, w, b, conv_param)\ndx, dw, db = conv_backward_naive(dout, cache)\n\n# Your errors should be around 1e-8'\nprint('Testing conv_backward_naive function')\nprint('dx error: ', rel_error(dx, dx_num))\nprint('dw error: ', rel_error(dw, dw_num))\nprint('db error: ', rel_error(db, db_num))", "Max pooling: Naive forward\nImplement the forward pass for the max-pooling operation in the function max_pool_forward_naive in the file cs231n/layers.py. Again, don't worry too much about computational efficiency.\nCheck your implementation by running the following:", "x_shape = (2, 3, 4, 4)\nx = np.linspace(-0.3, 0.4, num=np.prod(x_shape)).reshape(x_shape)\npool_param = {'pool_width': 2, 'pool_height': 2, 'stride': 2}\n\nout, _ = max_pool_forward_naive(x, pool_param)\n\ncorrect_out = np.array([[[[-0.26315789, -0.24842105],\n [-0.20421053, -0.18947368]],\n [[-0.14526316, -0.13052632],\n [-0.08631579, -0.07157895]],\n [[-0.02736842, -0.01263158],\n [ 0.03157895, 0.04631579]]],\n [[[ 0.09052632, 0.10526316],\n [ 0.14947368, 0.16421053]],\n [[ 0.20842105, 0.22315789],\n [ 0.26736842, 0.28210526]],\n [[ 0.32631579, 0.34105263],\n [ 0.38526316, 0.4 ]]]])\n\n# Compare your output with ours. 
Difference should be around 1e-8.\nprint('Testing max_pool_forward_naive function:')\nprint('difference: ', rel_error(out, correct_out))", "Max pooling: Naive backward\nImplement the backward pass for the max-pooling operation in the function max_pool_backward_naive in the file cs231n/layers.py. You don't need to worry about computational efficiency.\nCheck your implementation with numeric gradient checking by running the following:", "np.random.seed(231)\nx = np.random.randn(3, 2, 8, 8)\ndout = np.random.randn(3, 2, 4, 4)\npool_param = {'pool_height': 2, 'pool_width': 2, 'stride': 2}\n\ndx_num = eval_numerical_gradient_array(lambda x: max_pool_forward_naive(x, pool_param)[0], x, dout)\n\nout, cache = max_pool_forward_naive(x, pool_param)\ndx = max_pool_backward_naive(dout, cache)\n\n# Your error should be around 1e-12\nprint('Testing max_pool_backward_naive function:')\nprint('dx error: ', rel_error(dx, dx_num))", "Fast layers\nMaking convolution and pooling layers fast can be challenging. To spare you the pain, we've provided fast implementations of the forward and backward passes for convolution and pooling layers in the file cs231n/fast_layers.py.\nThe fast convolution implementation depends on a Cython extension; to compile it you need to run the following from the cs231n directory:\nbash\npython setup.py build_ext --inplace\nThe API for the fast versions of the convolution and pooling layers is exactly the same as the naive versions that you implemented above: the forward pass receives data, weights, and parameters and produces outputs and a cache object; the backward pass receives upstream derivatives and the cache object and produces gradients with respect to the data and weights.\nNOTE: The fast implementation for pooling will only perform optimally if the pooling regions are non-overlapping and tile the input. 
If these conditions are not met then the fast pooling implementation will not be much faster than the naive implementation.\nYou can compare the performance of the naive and fast versions of these layers by running the following:", "from cs231n.fast_layers import conv_forward_fast, conv_backward_fast\nfrom time import time\nnp.random.seed(231)\nx = np.random.randn(100, 3, 31, 31)\nw = np.random.randn(25, 3, 3, 3)\nb = np.random.randn(25,)\ndout = np.random.randn(100, 25, 16, 16)\nconv_param = {'stride': 2, 'pad': 1}\n\nt0 = time()\nout_naive, cache_naive = conv_forward_naive(x, w, b, conv_param)\nt1 = time()\nout_fast, cache_fast = conv_forward_fast(x, w, b, conv_param)\nt2 = time()\n\nprint('Testing conv_forward_fast:')\nprint('Naive: %fs' % (t1 - t0))\nprint('Fast: %fs' % (t2 - t1))\nprint('Speedup: %fx' % ((t1 - t0) / (t2 - t1)))\nprint('Difference: ', rel_error(out_naive, out_fast))\n\nt0 = time()\ndx_naive, dw_naive, db_naive = conv_backward_naive(dout, cache_naive)\nt1 = time()\ndx_fast, dw_fast, db_fast = conv_backward_fast(dout, cache_fast)\nt2 = time()\n\nprint('\\nTesting conv_backward_fast:')\nprint('Naive: %fs' % (t1 - t0))\nprint('Fast: %fs' % (t2 - t1))\nprint('Speedup: %fx' % ((t1 - t0) / (t2 - t1)))\nprint('dx difference: ', rel_error(dx_naive, dx_fast))\nprint('dw difference: ', rel_error(dw_naive, dw_fast))\nprint('db difference: ', rel_error(db_naive, db_fast))\n\nfrom cs231n.fast_layers import max_pool_forward_fast, max_pool_backward_fast\nnp.random.seed(231)\nx = np.random.randn(100, 3, 32, 32)\ndout = np.random.randn(100, 3, 16, 16)\npool_param = {'pool_height': 2, 'pool_width': 2, 'stride': 2}\n\nt0 = time()\nout_naive, cache_naive = max_pool_forward_naive(x, pool_param)\nt1 = time()\nout_fast, cache_fast = max_pool_forward_fast(x, pool_param)\nt2 = time()\n\nprint('Testing pool_forward_fast:')\nprint('Naive: %fs' % (t1 - t0))\nprint('fast: %fs' % (t2 - t1))\nprint('speedup: %fx' % ((t1 - t0) / (t2 - t1)))\nprint('difference: ', 
rel_error(out_naive, out_fast))\n\nt0 = time()\ndx_naive = max_pool_backward_naive(dout, cache_naive)\nt1 = time()\ndx_fast = max_pool_backward_fast(dout, cache_fast)\nt2 = time()\n\nprint('\\nTesting pool_backward_fast:')\nprint('Naive: %fs' % (t1 - t0))\nprint('fast: %fs' % (t2 - t1))\nprint('speedup: %fx' % ((t1 - t0) / (t2 - t1)))\nprint('dx difference: ', rel_error(dx_naive, dx_fast))", "Convolutional \"sandwich\" layers\nPreviously we introduced the concept of \"sandwich\" layers that combine multiple operations into commonly used patterns. In the file cs231n/layer_utils.py you will find sandwich layers that implement a few commonly used patterns for convolutional networks.", "from cs231n.layer_utils import conv_relu_pool_forward, conv_relu_pool_backward\nnp.random.seed(231)\nx = np.random.randn(2, 3, 16, 16)\nw = np.random.randn(3, 3, 3, 3)\nb = np.random.randn(3,)\ndout = np.random.randn(2, 3, 8, 8)\nconv_param = {'stride': 1, 'pad': 1}\npool_param = {'pool_height': 2, 'pool_width': 2, 'stride': 2}\n\nout, cache = conv_relu_pool_forward(x, w, b, conv_param, pool_param)\ndx, dw, db = conv_relu_pool_backward(dout, cache)\n\ndx_num = eval_numerical_gradient_array(lambda x: conv_relu_pool_forward(x, w, b, conv_param, pool_param)[0], x, dout)\ndw_num = eval_numerical_gradient_array(lambda w: conv_relu_pool_forward(x, w, b, conv_param, pool_param)[0], w, dout)\ndb_num = eval_numerical_gradient_array(lambda b: conv_relu_pool_forward(x, w, b, conv_param, pool_param)[0], b, dout)\n\nprint('Testing conv_relu_pool')\nprint('dx error: ', rel_error(dx_num, dx))\nprint('dw error: ', rel_error(dw_num, dw))\nprint('db error: ', rel_error(db_num, db))", "from cs231n.layer_utils import conv_relu_forward, conv_relu_backward\nnp.random.seed(231)\nx = np.random.randn(2, 3, 8, 8)\nw = np.random.randn(3, 3, 3, 3)\nb = np.random.randn(3,)\ndout = np.random.randn(2, 3, 8, 8)\nconv_param = {'stride': 1, 'pad': 1}\n\nout, cache = conv_relu_forward(x, w, b, conv_param)\ndx, dw, db 
= conv_relu_backward(dout, cache)\n\ndx_num = eval_numerical_gradient_array(lambda x: conv_relu_forward(x, w, b, conv_param)[0], x, dout)\ndw_num = eval_numerical_gradient_array(lambda w: conv_relu_forward(x, w, b, conv_param)[0], w, dout)\ndb_num = eval_numerical_gradient_array(lambda b: conv_relu_forward(x, w, b, conv_param)[0], b, dout)\n\nprint('Testing conv_relu:')\nprint('dx error: ', rel_error(dx_num, dx))\nprint('dw error: ', rel_error(dw_num, dw))\nprint('db error: ', rel_error(db_num, db))", "Three-layer ConvNet\nNow that you have implemented all the necessary layers, we can put them together into a simple convolutional network.\nOpen the file cs231n/classifiers/cnn.py and complete the implementation of the ThreeLayerConvNet class. Run the following cells to help you debug:\nSanity check loss\nAfter you build a new network, one of the first things you should do is sanity check the loss. When we use the softmax loss, we expect the loss for random weights (and no regularization) to be about log(C) for C classes. 
When we add regularization this should go up.", "class ThreeLayerConvNet(object):\n \"\"\"\n A three-layer convolutional network with the following architecture:\n\n conv - relu - 2x2 max pool - affine - relu - affine - softmax\n\n The network operates on minibatches of data that have shape (N, C, H, W)\n consisting of N images, each with height H and width W and with C input\n channels.\n \"\"\"\n\n def __init__(self, input_dim=(3, 32, 32), num_filters=32, filter_size=7,\n hidden_dim=100, num_classes=10, weight_scale=1e-3, reg=0.0,\n dtype=np.float32):\n \"\"\"\n Initialize a new network.\n\n Inputs:\n - input_dim: Tuple (C, H, W) giving size of input data\n - num_filters: Number of filters to use in the convolutional layer\n - filter_size: Size of filters to use in the convolutional layer\n - hidden_dim: Number of units to use in the fully-connected hidden layer\n - num_classes: Number of scores to produce from the final affine layer.\n - weight_scale: Scalar giving standard deviation for random initialization\n of weights.\n - reg: Scalar giving L2 regularization strength\n - dtype: numpy datatype to use for computation.\n \"\"\"\n self.params = {}\n self.reg = reg\n self.dtype = dtype\n\n ############################################################################\n # TODO: Initialize weights and biases for the three-layer convolutional #\n # network. Weights should be initialized from a Gaussian with standard #\n # deviation equal to weight_scale; biases should be initialized to zero. #\n # All weights and biases should be stored in the dictionary self.params. #\n # Store weights and biases for the convolutional layer using the keys 'W1' #\n # and 'b1'; use keys 'W2' and 'b2' for the weights and biases of the #\n # hidden affine layer, and keys 'W3' and 'b3' for the weights and biases #\n # of the output affine layer. 
#\n ############################################################################\n # def _init_model(self, D, C, H):\n # D, H, C = input_dim, hidden_dim, num_classes\n # The output size of convolution\n # 32 - 7 + 1 with pad=0 and stride=1\n # (32 + 2*p - 7)/ stride + 1\n # 32-7+1 * 32-7+1 * 32\n # 32-6 * 32-6 * 32\n # 26 * 26 * 32\n # CxHxW\n in_C, in_H, in_W = input_dim\n stride, pad = 1, (filter_size - 1) // 2\n H = ((in_H + (2*pad) - filter_size) / stride) + 1\n W = ((in_W + (2*pad) - filter_size) / stride) + 1\n pool_H, pool_W, pool_stride, pool_pad = 2, 2, 2, 0\n H = ((H + (2*pool_pad) - pool_H) / pool_stride) + 1\n W = ((W + (2*pool_pad) - pool_W) / pool_stride) + 1 \n # print('H, W', H, W)\n # print(int(H * W * num_filters))\n # print(16 * 16 * 32)\n\n self.params = dict(\n W1=np.random.randn(num_filters, 3, filter_size, filter_size) * weight_scale,\n W2=np.random.randn(int(H * W * num_filters), hidden_dim) * weight_scale,\n W3=np.random.randn(hidden_dim, num_classes) * weight_scale,\n b1=np.zeros((num_filters, 1)),\n b2=np.zeros((1, hidden_dim)),\n b3=np.zeros((1, num_classes))\n )\n\n pass\n ############################################################################\n # END OF YOUR CODE #\n ############################################################################\n\n for k, v in self.params.items():\n self.params[k] = v.astype(dtype)\n\n def loss(self, X, y=None):\n \"\"\"\n Evaluate loss and gradient for the three-layer convolutional network.\n\n Input / output: Same API as TwoLayerNet in fc_net.py.\n \"\"\"\n W1, b1 = self.params['W1'], self.params['b1']\n W2, b2 = self.params['W2'], self.params['b2']\n W3, b3 = self.params['W3'], self.params['b3']\n\n # pass conv_param to the forward pass for the convolutional layer\n filter_size = W1.shape[2]\n conv_param = {'stride': 1, 'pad': (filter_size - 1) // 2}\n \n # # resulting output size\n # # conv stride = 1\n # # conv pad = (7-1)//2== 6//2==6/2==3\n # # (32 + 2*3 - 7)/ 1 + 1 = \n # # 32 + 6 -7 +1 = \n 
# # 32 -1 +1 = 32\n # stride, pad = 1, (filter_size - 1) // 2\n\n # # CxHxW\n # in_C, in_H, in_W = input_dim\n # print('input_dim', input_dim)\n # H = ((in_H + (2*pad) - filter_size) / stride) + 1\n # W = ((in_W + (2*pad) - filter_size) / stride) + 1\n # print('H, W', H, W)\n \n # pass pool_param to the forward pass for the max-pooling layer\n pool_param = {'pool_height': 2, 'pool_width': 2, 'stride': 2}\n \n # # output size\n # # pad=0, stride=2\n # # (32 + 2*0 - 2)/2 +1\n # # (32 -2)/2 +1\n # # 30/2 +1\n # # 15 +1\n # # 16\n # pool_H, pool_W, pool_stride, pool_pad = 2, 2, 2, 0\n # H = ((H + (2*pool_pad) - pool_H) / pool_stride) + 1\n # W = ((W + (2*pool_pad) - pool_W) / pool_stride) + 1 \n # print('H, W', H, W)\n\n scores = None\n ############################################################################\n # TODO: Implement the forward pass for the three-layer convolutional net, #\n # computing the class scores for X and storing them in the scores #\n # variable. #\n ############################################################################\n # Input layer\n h1, conv_cache = conv_forward_naive(b=b1, conv_param=conv_param, w=W1, x=X)\n# print('h1.shape', h1.shape)\n h1, nl_cache1 = relu_forward(x=h1)\n h1, pool_cache = max_pool_forward_naive(pool_param=pool_param, x=h1)\n# print('h1.shape', h1.shape)\n\n # Hidden layer\n # h1 = h1.reshape(X.shape[0], -1) # WHY not this one?\n h2 = h1.ravel().reshape(X.shape[0], -1)\n# print('h1.shape', h1.shape)\n h2, affine_cache = affine_forward(b=b2, w=W2, x=h2)\n h2, nl_cache2 = relu_forward(x=h2)\n# print('h2.shape', h2.shape)\n\n # Output layer\n scores, scores_cache = affine_forward(b=b3, w=W3, x=h2)\n# print('scores.shape', scores.shape)\n\n pass\n ############################################################################\n # END OF YOUR CODE #\n ############################################################################\n\n if y is None:\n return scores\n\n loss, grads = 0, {}\n 
############################################################################\n # TODO: Implement the backward pass for the three-layer convolutional net, #\n # storing the loss and gradients in the loss and grads variables. Compute #\n # data loss using softmax, and make sure that grads[k] holds the gradients #\n # for self.params[k]. Don't forget to add L2 regularization! #\n ############################################################################\n reg_loss = regularization(lam=self.reg, model=self.params, reg_type='l2')\n \n loss, dy = softmax_loss(x=scores, y=y)\n loss += reg_loss\n\n # Output layer\n dh2, dW3, db3 = affine_backward(cache=scores_cache, dout=dy)\n# print('dh2.shape', dh2.shape)\n \n # Hidden layer\n dh2 = relu_backward(cache=nl_cache2, dout=dh2)\n dh2, dW2, db2 = affine_backward(cache=affine_cache, dout=dh2)\n# print('dh1.shape', dh1.shape)\n dh1 = dh2.reshape(h1.shape)\n# print('dh1.shape', dh1.shape)\n \n # Input layer\n dh1 = max_pool_backward_naive(cache=pool_cache, dout=dh1)\n dh1 = relu_backward(cache=nl_cache1, dout=dh1)\n _, dW1, db1 = conv_backward_naive(cache=conv_cache, dout=dh1)\n \n # Gradients\n grads = dict(W1 = dW1, b1 = db1, \n W2 = dW2, b2 = db2, \n W3 = dW3, b3 = db3)\n\n pass\n ############################################################################\n # END OF YOUR CODE #\n ############################################################################\n\n return loss, grads\n\nmodel = ThreeLayerConvNet()\n\nN = 50\nX = np.random.randn(N, 3, 32, 32)\ny = np.random.randint(10, size=N)\n\nloss, grads = model.loss(X, y)\nprint('Initial loss (no regularization): ', loss)\n\nmodel.reg = 0.5\nloss, grads = model.loss(X, y)\nprint('Initial loss (with regularization): ', loss)", "Gradient check\nAfter the loss looks reasonable, use numeric gradient checking to make sure that your backward pass is correct. 
When you use numeric gradient checking you should use a small amount of artificial data and a small number of neurons at each layer. Note: correct implementations may still have relative errors up to 1e-2.", "num_inputs = 2\ninput_dim = (3, 16, 16)\nreg = 0.0\nnum_classes = 10\nnp.random.seed(231)\nX = np.random.randn(num_inputs, *input_dim)\ny = np.random.randint(num_classes, size=num_inputs)\n\nmodel = ThreeLayerConvNet(num_filters=3, filter_size=3,\n input_dim=input_dim, hidden_dim=7,\n dtype=np.float64)\nloss, grads = model.loss(X, y)\nfor param_name in sorted(grads):\n f = lambda _: model.loss(X, y)[0]\n param_grad_num = eval_numerical_gradient(f, model.params[param_name], verbose=False, h=1e-6)\n e = rel_error(param_grad_num, grads[param_name])\n print('%s max relative error: %e' % (param_name, rel_error(param_grad_num, grads[param_name])))", "Overfit small data\nA nice trick is to train your model with just a few training samples. You should be able to overfit small datasets, which will result in very high training accuracy and comparatively low validation accuracy.", "np.random.seed(231)\n\nnum_train = 100\nsmall_data = {\n 'X_train': data['X_train'][:num_train],\n 'y_train': data['y_train'][:num_train],\n 'X_val': data['X_val'],\n 'y_val': data['y_val'],\n}\n\nmodel = ThreeLayerConvNet(weight_scale=1e-2)\n\nsolver = Solver(model, small_data,\n num_epochs=15, batch_size=50,\n update_rule='adam',\n optim_config={\n 'learning_rate': 1e-3,\n },\n verbose=True, print_every=1)\nsolver.train()", "Plotting the loss, training accuracy, and validation accuracy should show clear overfitting:", "plt.subplot(2, 1, 1)\nplt.plot(solver.loss_history, 'o')\nplt.xlabel('iteration')\nplt.ylabel('loss')\n\nplt.subplot(2, 1, 2)\nplt.plot(solver.train_acc_history, '-o')\nplt.plot(solver.val_acc_history, '-o')\nplt.legend(['train', 'val'], loc='upper left')\nplt.xlabel('epoch')\nplt.ylabel('accuracy')\nplt.show()", "Train the net\nBy training the three-layer convolutional network 
for one epoch, you should achieve greater than 40% accuracy on the training set:", "model = ThreeLayerConvNet(weight_scale=0.001, hidden_dim=500, reg=0.001)\n\nsolver = Solver(model, data,\n num_epochs=1, batch_size=50,\n update_rule='adam',\n optim_config={\n 'learning_rate': 1e-3,\n },\n verbose=True, print_every=20)\nsolver.train()", "Visualize Filters\nYou can visualize the first-layer convolutional filters from the trained network by running the following:", "from cs231n.vis_utils import visualize_grid\n\ngrid = visualize_grid(model.params['W1'].transpose(0, 2, 3, 1))\nplt.imshow(grid.astype('uint8'))\nplt.axis('off')\nplt.gcf().set_size_inches(5, 5)\nplt.show()", "Spatial Batch Normalization\nWe already saw that batch normalization is a very useful technique for training deep fully-connected networks. Batch normalization can also be used for convolutional networks, but we need to tweak it a bit; the modification will be called \"spatial batch normalization.\"\nNormally batch-normalization accepts inputs of shape (N, D) and produces outputs of shape (N, D), where we normalize across the minibatch dimension N. For data coming from convolutional layers, batch normalization needs to accept inputs of shape (N, C, H, W) and produce outputs of shape (N, C, H, W) where the N dimension gives the minibatch size and the (H, W) dimensions give the spatial size of the feature map.\nIf the feature map was produced using convolutions, then we expect the statistics of each feature channel to be relatively consistent both between different images and different locations within the same image. Therefore spatial batch normalization computes a mean and variance for each of the C feature channels by computing statistics over both the minibatch dimension N and the spatial dimensions H and W.\nSpatial batch normalization: forward\nIn the file cs231n/layers.py, implement the forward pass for spatial batch normalization in the function spatial_batchnorm_forward. 
Check your implementation by running the following:", "np.random.seed(231)\n# Check the training-time forward pass by checking means and variances\n# of features both before and after spatial batch normalization\n\nN, C, H, W = 2, 3, 4, 5\nx = 4 * np.random.randn(N, C, H, W) + 10\n\nprint('Before spatial batch normalization:')\nprint(' Shape: ', x.shape)\nprint(' Means: ', x.mean(axis=(0, 2, 3)))\nprint(' Stds: ', x.std(axis=(0, 2, 3)))\n\n# Means should be close to zero and stds close to one\ngamma, beta = np.ones(C), np.zeros(C)\nbn_param = {'mode': 'train'}\nout, _ = spatial_batchnorm_forward(x, gamma, beta, bn_param)\nprint('After spatial batch normalization:')\nprint(' Shape: ', out.shape)\nprint(' Means: ', out.mean(axis=(0, 2, 3)))\nprint(' Stds: ', out.std(axis=(0, 2, 3)))\n\n# Means should be close to beta and stds close to gamma\ngamma, beta = np.asarray([3, 4, 5]), np.asarray([6, 7, 8])\nout, _ = spatial_batchnorm_forward(x, gamma, beta, bn_param)\nprint('After spatial batch normalization (nontrivial gamma, beta):')\nprint(' Shape: ', out.shape)\nprint(' Means: ', out.mean(axis=(0, 2, 3)))\nprint(' Stds: ', out.std(axis=(0, 2, 3)))\n\nnp.random.seed(231)\n# Check the test-time forward pass by running the training-time\n# forward pass many times to warm up the running averages, and then\n# checking the means and variances of activations after a test-time\n# forward pass.\nN, C, H, W = 10, 4, 11, 12\n\nbn_param = {'mode': 'train'}\ngamma = np.ones(C)\nbeta = np.zeros(C)\nfor t in range(50):\n x = 2.3 * np.random.randn(N, C, H, W) + 13\n spatial_batchnorm_forward(x, gamma, beta, bn_param)\nbn_param['mode'] = 'test'\nx = 2.3 * np.random.randn(N, C, H, W) + 13\na_norm, _ = spatial_batchnorm_forward(x, gamma, beta, bn_param)\n\n# Means should be close to zero and stds close to one, but will be\n# noisier than training-time forward passes.\nprint('After spatial batch normalization (test-time):')\nprint(' means: ', a_norm.mean(axis=(0, 2, 3)))\nprint(' stds: ', 
a_norm.std(axis=(0, 2, 3)))", "Spatial batch normalization: backward\nIn the file cs231n/layers.py, implement the backward pass for spatial batch normalization in the function spatial_batchnorm_backward. Run the following to check your implementation using a numeric gradient check:", "np.random.seed(231)\nN, C, H, W = 2, 3, 4, 5\nx = 5 * np.random.randn(N, C, H, W) + 12\ngamma = np.random.randn(C)\nbeta = np.random.randn(C)\ndout = np.random.randn(N, C, H, W)\n\nbn_param = {'mode': 'train'}\nfx = lambda x: spatial_batchnorm_forward(x, gamma, beta, bn_param)[0]\nfg = lambda a: spatial_batchnorm_forward(x, gamma, beta, bn_param)[0]\nfb = lambda b: spatial_batchnorm_forward(x, gamma, beta, bn_param)[0]\n\ndx_num = eval_numerical_gradient_array(fx, x, dout)\nda_num = eval_numerical_gradient_array(fg, gamma, dout)\ndb_num = eval_numerical_gradient_array(fb, beta, dout)\n\n_, cache = spatial_batchnorm_forward(x, gamma, beta, bn_param)\ndx, dgamma, dbeta = spatial_batchnorm_backward(dout, cache)\nprint('dx error: ', rel_error(dx_num, dx))\nprint('dgamma error: ', rel_error(da_num, dgamma))\nprint('dbeta error: ', rel_error(db_num, dbeta))", "Extra Credit Description\nIf you implement any additional features for extra credit, clearly describe them here with pointers to any code in this or other files if applicable." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
whitead/numerical_stats
unit_7/hw_2019/Homework_7_Key.ipynb
gpl-3.0
[ "1. Conceptual Questions (7 Points)\nAnswer these in Markdown\n\n[1 point] What is the difference between a probability mass function and a probability density function?\n[1 point] What is the difference between a cumulative distribution function and a prediction interval?\n[1 point] Is the exponential distribution a continuous or discrete distribution? Is it valid to compute the probability of a single element in the sample space?\n[2 points] What is the probability of $t > 8$ in an exponential distribution with $\\lambda = \\frac{1}{4}$? Leave your answer as an unevaluated exponential. \n[1 point] This slice must have how many elements: a[5:2]? How can you tell without counting?\n\n1.1\nPMF is for discrete sample space, PDF is for continuous\n1.2\nCDF is probability of an interval, prediction interval is interval given a probability\n1.3\nContinuous, no\n1.4\n$$\n\\int_8^{\\infty} \\frac{1}{4} e^{-t / 4} \\, dt = \\left. -e^{-t / 4}\\right]_8^{\\infty} = 0 - \\left(-e^{-2}\\right) = e^{-2}\n$$\n1.5\n$0$, because with a positive step the start index ($5$) is already past the stop index ($2$), so the slice is empty\n2. Car Stopping Distance (10 Points)\n\n\n[4 points] Load the cars dataset and create a scatter plot. It contains measurements of cars' stopping distance in feet as a function of speed in mph. 
If you get an error when loading pydataset that says No Module named 'pydataset', then execute this code in a new cell once: !pip install --user pydataset\n\n\n[4 points] Compute the sample correlation coefficient between stopping distance and speed in python and report your answer by writing a complete sentence in Markdown.\n\n\n[2 points] Why might there be multiple stopping distances for a single speed?", "#2.1\nimport pydataset\nimport numpy as np\nimport matplotlib.pyplot as plt\n%matplotlib inline\ncars = pydataset.data('cars').values\nplt.plot(cars[:,0], cars[:,1], 'o')\nplt.xlabel('Speed [mph]')\nplt.ylabel('Stopping Distance [ft]')\nplt.show()\n\n#2.2\nnp.corrcoef(cars[:,0].astype(float), cars[:,1].astype(float))", "2.2\nThe correlation coefficient is 0.81.\n2.3\nMultiple cars were tested or one car was tested multiple times or both.\n3. Housing Prices (24 Points)\n\n[8 points] Load the 'House' dataset and use pydataset.data('Housing', show_doc=True) to see information about the dataset. Use the snippet below to format your ticks with dollar signs and commas for thousands. Note that this data is from the 1970s. Assess the correlation between lotsize and price. Use plots and sample correlation coefficient as evidence to support a written answer.\n\npython\nimport matplotlib.ticker\nfmt = '${x:,.0f}'\ntick = matplotlib.ticker.StrMethodFormatter(fmt)\nplt.gca().yaxis.set_major_formatter(tick)\n\n\n[8 points] Use a violin plot to show if being in a preferred neighborhood affects price. You may use any other calculations (e.g., sample standard deviation) to support your conclusions. Write out your conclusion.\n\n\n[8 points] Use a boxplot to determine if bedroom number affects price. 
What is your conclusion?\n\n\n3.1", "import matplotlib.ticker\nfmt = '${x:,.0f}'\ntick = matplotlib.ticker.StrMethodFormatter(fmt)\n\nfmt = '{x:,.0f}'\nxtick = matplotlib.ticker.StrMethodFormatter(fmt)\n\nhouse = pydataset.data('Housing').values\n\nplt.gca().yaxis.set_major_formatter(tick) \nplt.gca().xaxis.set_major_formatter(xtick) \nplt.plot(house[:, 0], house[:,1], 'o')\nplt.ylabel('House Price')\nplt.xlabel('Lot Size [sq ft]')\nnp.corrcoef(house[:,0].astype(float), house[:,1].astype(float))", "There is a weak correlation. The correlation coefficient is low at 0.53, but there is so much data that we can see a weak correlation especially at small lot sizes. \n3.2", "import seaborn as sns\np = house[:,-1] == 'yes'\n\nsns.violinplot(data=[house[p,0], house[~p,0]])\nplt.gca().yaxis.set_major_formatter(tick)\nprint(np.median(house[p,0]))\nprint(np.median(house[~p,0]))\nplt.xticks(range(2), ['Preferred Neighborhood', 'Normal Neighborhood'])\nplt.ylabel('House Price')\nplt.show()", "The preferred neighborhood has a \\$20,000 higher median price and has a longer tail at high prices, indicating many expensive homes. \n3.3", "labels = np.unique(house[:,2])\nldata = []\n#slice out each set of rows that matches label\n#and add to list\nfor l in labels:\n ldata.append(house[house[:,2] == l, 0].astype(float))\n\nsns.boxplot(data=ldata)\nplt.xticks(range(len(labels)), labels)\nplt.xlabel('Number of Bedrooms')\nplt.gca().yaxis.set_major_formatter(tick)\nplt.ylabel('House Price')\nplt.show()", "The number of bedrooms is important up until 4, after which it seems to have less effect. Having 1 bedroom has a very narrow distribution. There appears to be a correlation overall with bedroom number." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
mne-tools/mne-tools.github.io
0.17/_downloads/8e91c4d84fe688d78859cf6274554a8b/plot_compute_csd.ipynb
bsd-3-clause
[ "%matplotlib inline", "==================================================\nCompute a cross-spectral density (CSD) matrix\n==================================================\nA cross-spectral density (CSD) matrix is similar to a covariance matrix, but in\nthe time-frequency domain. It is the first step towards computing\nsensor-to-sensor coherence or a DICS beamformer.\nThis script demonstrates the three methods that MNE-Python provides to compute\nthe CSD:\n\nUsing short-term Fourier transform: :func:mne.time_frequency.csd_fourier\nUsing a multitaper approach: :func:mne.time_frequency.csd_multitaper\nUsing Morlet wavelets: :func:mne.time_frequency.csd_morlet", "# Author: Marijn van Vliet <w.m.vanvliet@gmail.com>\n# License: BSD (3-clause)\nfrom matplotlib import pyplot as plt\n\nimport mne\nfrom mne.datasets import sample\nfrom mne.time_frequency import csd_fourier, csd_multitaper, csd_morlet\n\nprint(__doc__)", "In the following example, the computation of the CSD matrices can be\nperformed using multiple cores. Set n_jobs to a value >1 to select the\nnumber of cores to use.", "n_jobs = 1", "Loading the sample dataset.", "data_path = sample.data_path()\nfname_raw = data_path + '/MEG/sample/sample_audvis_raw.fif'\nfname_event = data_path + '/MEG/sample/sample_audvis_raw-eve.fif'\nraw = mne.io.read_raw_fif(fname_raw)\nevents = mne.read_events(fname_event)", "By default, CSD matrices are computed using all MEG/EEG channels. When\ninterpreting a CSD matrix with mixed sensor types, be aware that the\nmeasurement units, and thus the scalings, differ across sensors. 
In this\nexample, for speed and clarity, we select a single channel type:\ngradiometers.", "picks = mne.pick_types(raw.info, meg='grad')\n\n# Make some epochs, based on events with trigger code 1\nepochs = mne.Epochs(raw, events, event_id=1, tmin=0, tmax=1,\n picks=picks, baseline=(None, 0),\n reject=dict(grad=4000e-13), preload=True)", "Computing CSD matrices using short-term Fourier transform and (adaptive)\nmultitapers is straightforward:", "csd_fft = csd_fourier(epochs, fmin=15, fmax=20, n_jobs=n_jobs)\ncsd_mt = csd_multitaper(epochs, fmin=15, fmax=20, adaptive=True, n_jobs=n_jobs)", "When computing the CSD with Morlet wavelets, you specify the exact\nfrequencies at which to compute it. For each frequency, a corresponding\nwavelet will be constructed and convolved with the signal, resulting in a\ntime-frequency decomposition.\nThe CSD is constructed by computing the correlation between the\ntime-frequency representations between all sensor-to-sensor pairs. The\ntime-frequency decomposition originally has the same sampling rate as the\nsignal, in our case ~600Hz. This means the decomposition is over-specified in\ntime and we may not need to use all samples during our CSD computation, just\nenough to get a reliable correlation statistic. By specifying decim=10,\nwe use every 10th sample, which will greatly speed up the computation and\nwill have a minimal effect on the CSD.", "frequencies = [16, 17, 18, 19, 20]\ncsd_wav = csd_morlet(epochs, frequencies, decim=10, n_jobs=n_jobs)", "The resulting :class:mne.time_frequency.CrossSpectralDensity objects have a\nplotting function we can use to compare the results of the different methods.\nWe're plotting the mean CSD across frequencies.", "csd_fft.mean().plot()\nplt.suptitle('short-term Fourier transform')\n\ncsd_mt.mean().plot()\nplt.suptitle('adaptive multitapers')\n\ncsd_wav.mean().plot()\nplt.suptitle('Morlet wavelet transform')" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
binh-vu/python-tutorial
4.0_Modules.ipynb
mit
[ "Modules\n\nReason: Your program gets bigger => need to split it into several files for easier maintenance\nA module looks like:\n\n<img src=\"/files/assets/module_example.png\" width=\"75%\" />\n\nimport a module using the <code>import</code> command\n\n<img src=\"/files/assets/4_import_order.png\" width=\"75%\" />", "import module_a\n\nprint module_a.double(5)", "a module is also initialized only once", "import module_a\nimport module_b", "use the <code>from</code> statement to import names from a module directly", "from module_b import triple\n\nprint triple(5)\n\nfrom module_a import *\n\nprint 'single', single(5)\nprint 'double', double(5)", "importing all names will not import those beginning with an underscore (_)", "print _test_func\n\nprint module_a._test_func\n\nfrom module_a import _test_func\n\nprint _test_func", "the global variable <code>__name__</code> contains the module name.", "print module_a.__name__\n\nprint __name__", "Module search paths\nWhen a module is imported\n 1. search for a built-in module with that name\n 2.
search in the list of directories given by variable sys.path\nsys.path is initialized from:\n\nthe directory where the python program starts, i.e. the \"current directory\"\nenvironment variable PYTHONPATH (syntax like PATH env)\nsome default locations\n\nPackages\n\ncollection of modules in a hierarchical structure.\nis a folder, which has an <code>__init__.py</code> \n\n<pre>\n.\nexamples/\n mathlib/\n __init__.py\n linalg/\n __init__.py\n dot.py\n not_a_package/\n test.py\n ndarray.py\n random_variable.py\n</pre>", "import examples\n\nimport mathlib\n\nimport sys\nsys.path.append('./examples')\n\nimport mathlib", "<code>__init__.py</code>", "mathlib.random_variable\n\nmathlib.self_introduction()", "import submodule", "import mathlib.ndarray\n\nprint mathlib.ndarray.sum1d([1,2,3])\n\nmathlib.ndarray.sum1d\n\nimport mathlib.random_variable as random_variable\n\nprint random_variable.draw_uniform(0, 100)\n\ns = random_variable\n\ns.draw_uniform(0, 100)\n\nfrom mathlib.linalg import dot\n\nprint dot.dot1d([1,2,3], [1,2,3])" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
erinspace/share_tutorials
3_SHARE_Data_in_the_Wide_World_py3.ipynb
apache-2.0
[ "SHARE Data in the Wide World\nThis notebook will focus on how to export SHARE data into different formats, and how to query SHARE for specific information from your institution, say from a list of names or from a list of emails or ORCIDs that act as researcher identifiers.\nExporting a DataFrame to csv and Excel\nWhen doing an aggregation on SHARE data, it might be beneficial to export the data to a format that is easier to widely distribute, such as a csv file or an Excel file.\nFirst, we'll do a SHARE aggregation query for documents from each source that have a description, turn it into a pandas DataFrame, and export the data into both csv and Excel formats.", "# Pandas is a python library that is used for data manipulation and analysis -- good for numbers + time series.\n # Pandas gives us some extra data structures (arrays are data structures, for example) which is nice\n # We are calling Pandas pd by using the \"as\" -- locally, we know Pandas as pd\n # Helpful Links:\n # https://en.wikipedia.org/wiki/Pandas_(software)\n # http://pandas.pydata.org/ \nimport pandas as pd\n\n# Sharepa is a python client for browsing and analyzing SHARE data specifically using elasticsearch querying.\n # We can use this to aggregate, graph, and analyze the data. \n # Helpful Links:\n # https://github.com/CenterForOpenScience/sharepa\n # https://pypi.python.org/pypi/sharepa\nfrom sharepa import ShareSearch\n\n# When we say from X import Y, we are saying \"of all the things in this python library, import only this\"\nfrom sharepa.helpers import pretty_print\n\ndescription_search = ShareSearch()\n\n# exists -- a type of query, will accept a lucene query string\n # Lucene supports fielded data. When performing a search you can either specify a field, or use the default field.
\n # The field names and default field is implementation specific.\n# field = description -- This lucene query string will find all documents that have a description\ndescription_search = description_search.query(\n 'exists', \n field='description',\n)\n\n# here we are aggregating all the entries by source\ndescription_search.aggs.bucket(\n 'sources', # Every aggregation needs a name\n 'significant_terms', # There are many kinds of aggregations\n field='sources', # We store the source of a document in its type, so this will aggregate by source\n min_doc_count=0,\n percentage={}, # Will make the score value the percentage of all results (doc_count/bg_count)\n size=0\n)\n\ndescription_results = description_search.execute()\n\n# Creates a dataframe using Pandas (what we call pd) that aggregates the results\ndescription_dataframe = pd.DataFrame(description_results.aggregations.sources.to_dict()['buckets'])\n\n# We will add our own \"percent\" column to make things clearer\ndescription_dataframe['percent'] = (description_dataframe['score'] * 100)\n\n# Let's set the source name as the index, and then drop the old column\ndescription_dataframe = description_dataframe.set_index(description_dataframe['key'])\ndescription_dataframe = description_dataframe.drop('key', 1)\n\n# Finally, we'll show the results!\ndescription_dataframe", "Let's export this pandas dataframe to a csv file, and to an excel file.\nThe next cell will work when running locally!", "# Note: Uncomment the following lines if running locally:\n\ndescription_dataframe.to_csv('SHARE_Counts_with_Descriptions.csv')\ndescription_dataframe.to_excel('SHARE_Counts_with_Descriptions.xlsx')", "Working with outside data\nLet's say we had a list of names of researchers that were from a particular University.
We're interested in seeing if their full names appear in any sources across the SHARE data set.", "# this is a simple list\nnames = [\"Susan Jones\", \"Ravi Patel\"]\n\n# this is a new search object\nname_search = ShareSearch()\n\n# We are searching the entire SHARE dataset for each item in the list we called names, i.e. Susan Jones and Ravi Patel\nfor name in names:\n name_search = name_search.query(\n {\n \"bool\": {\n \"should\": [\n {\n \"match\": {\n \"contributors.full_name\": {\n \"query\": name, \n \"operator\": \"and\",\n \"type\" : \"phrase\"\n }\n }\n }\n ]\n }\n }\n )\n\n# We are putting all the results into a new list called name_results\n# name_search is our original search object, and .execute() is a built-in function (one that the library provides, and we\n # don't have to write) that puts the results of the loop above into a new list\nname_results = name_search.execute()\n\n# Prints out the number of documents that have those names\nprint('There are {} documents with contributors who have any of those names.'.format(name_search.count()))\n\n# Just visual cues for us to make it more readable\nprint('Here are the first 10:')\nprint('---------')\n\n# Loops over the list called \"name_results\" and prints out 10\nfor result in name_results:\n print(\n '{} -- with contributors {}'.format(\n result.title,\n [contributor.full_name for contributor in result.contributors]\n )\n )\n", "If we were interested in seeing an analysis of what sources these names came from, we can add an aggregation.", "name_search.aggs.bucket(\n 'sources', # Every aggregation needs a name\n 'terms', # There are many kinds of aggregations, terms is a pretty useful one though\n field='sources', # We store the source of a document in its type, so this will aggregate by source\n size=0, # These are just to make sure we get numbers for all the sources, to make it easier to combine graphs\n min_doc_count=1\n)\n\n# We are putting all the results into a new list called name_results\n# name_search is our original search object, and
.execute() is a built-in function (one that the library provides, and we\n # don't have to write) that puts the results of the loop above into a new list\nname_results = name_search.execute()\n\n# We are aggregating these into a DataFrame from Pandas (which we called pd)\npd.DataFrame(name_results.aggregations.sources.to_dict()['buckets'])" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
nansencenter/nansat-lectures
notebooks/06 scipy.ipynb
gpl-3.0
[ "SciPy - Scientific Python\nSciPy is a collection of mathematical algorithms and convenience functions built on the Numpy extension of Python. It adds significant power to the interactive Python session by providing the user with high-level commands and classes for manipulating and visualizing data. With SciPy an interactive Python session becomes a data-processing and system-prototyping environment rivaling systems such as MATLAB, IDL, Octave, R-Lab, and SciLab.\nSciPy Organization\n\ncluster Clustering algorithms\nconstants Physical and mathematical constants\nfftpack Fast Fourier Transform routines\nintegrate Integration and ordinary differential equation solvers\ninterpolate Interpolation and smoothing splines\nio Input and Output\nlinalg Linear algebra\nndimage N-dimensional image processing\nodr Orthogonal distance regression\noptimize Optimization and root-finding routines\nsignal Signal processing\nsparse Sparse matrices and associated routines\nspatial Spatial data structures and algorithms\nspecial Special functions\nstats Statistical distributions and functions\nweave C/C++ integration", "from scipy import linalg, optimize\n\nimport scipy as sp\nsp.info(optimize.fmin)", "Linear algebra", "import numpy as np\nfrom scipy import linalg\narr = np.array([[1, 2], # square matrix\n [3, 4]])\nprint (linalg.det(arr)) # determinant\nprint (linalg.inv(arr)) # inversion\n\narr = np.array([[3, 2], # singular matrix\n [6, 4]])\nprint (linalg.det(arr))\n\nprint (linalg.inv(arr)) # inversion of singular matrix", "Optimization", "import numpy as np\nfrom scipy import optimize\n%matplotlib inline\nimport matplotlib.pyplot as plt", "Define function to analyze", "def f(x):\n return x**2 + 10*np.sin(x)", "Find roots and minima", "grid = (-10, 10, 0.1)\nxmin_global = optimize.brute(f, (grid,))\nxmin_local = optimize.fminbound(f, 0, 10)\nroot = optimize.fsolve(f, 1) # our initial guess is 1\nroot2 = optimize.fsolve(f, -2.5)\nprint (xmin_global, xmin_local, root, root2)",
"Create data and add random noise", "xdata = np.linspace(-10, 10, num=50)\nnp.random.seed(1234)\nydata = f(xdata) + np.random.randn(xdata.size) * 1", "Define function for fitting the synthetic data", "def f2(x, a, b):\n return a*x**2 + b*np.sin(x)", "Curve fit", "guess = [.1, .1]\n[a, b], params_covariance = optimize.curve_fit(f2, xdata, ydata, guess)\nprint (a, b)", "Plot", "x = np.arange(-10, 10, 0.1)\nplt.plot(x, f(x), 'b-', label=\"f(x)\")\nplt.plot(xdata, ydata, '.', label=\"Synthetic data\")\nplt.plot(x, f2(x, a, b), 'r--', label=\"Curve fit result\")\nxmins = np.array([xmin_global[0], xmin_local])\nplt.plot(xmins, f(xmins), 'go', label=\"Minima\")\nroots = np.array([root, root2])\nplt.plot(roots, f(roots), 'kv', label=\"Roots\", ms=15)\nplt.legend()\nplt.xlabel('x')\nplt.ylabel('f(x)')", "Interpolation\nCreate synthetic data close to a sine function with random noise", "measured_time = np.linspace(0, 1, 10)\nnoise = (np.random.random(10)*2 - 1) * .3\nmeasures = np.sin(2 * np.pi * measured_time) + noise", "Perform interpolation", "from scipy.interpolate import interp1d\ncomputed_time = np.linspace(0, 1, 50) # X - values\n\nlinear_interp = interp1d(measured_time,\n measures) # create interpolator\nlinear_results = linear_interp(computed_time) # use interpolator\n\ncubic_interp = interp1d(measured_time,\n measures,\n kind='cubic') # create interpolator\ncubic_results = cubic_interp(computed_time) # use interpolator", "Plot", "plt.plot(measured_time, measures, 'o', ms=6, label='measures')\nplt.plot(computed_time, linear_results, '.-', label='linear interp')\nplt.plot(computed_time, cubic_results, '.-', label='cubic interp')\nplt.legend()\nplt.show()", "Image processing", "from scipy import ndimage\nfrom scipy import misc", "Load sample image", "face = misc.face()[:,:,0]\nplt.imshow(face, cmap='gray')\nplt.show()", "Perform distortion of the sample image", "noisy_face = face + face.std()*0.5*np.random.standard_normal(face.shape)\nplt.imshow(noisy_face, 
cmap='gray')\n\nblurred_face = ndimage.gaussian_filter(noisy_face, sigma=3)\nplt.imshow(blurred_face, cmap='gray')\n\nmedian_face = ndimage.median_filter(noisy_face, size=5)\nplt.imshow(median_face, cmap='gray')", "Improve the blurred image and plot", "from scipy import signal\nwiener_face = signal.wiener(noisy_face, (5,5))\nplt.imshow(wiener_face, cmap='gray')\nplt.axis('off')", "Measurements on images\nLet us first generate a nice synthetic binary image.", "x, y = np.indices((100, 100))\nsig = np.sin(2*np.pi*x/50.)*np.sin(2*np.pi*y/50.)*(1+x*y/50.**2)**2\nplt.imshow(sig);plt.colorbar()\n\nmask = sig > 1\nplt.imshow(mask.astype(int));plt.colorbar()\n\nlabels, nb = ndimage.label(mask)\nplt.imshow(labels); plt.colorbar()", "Find areas of the identified zones", "label_ids = np.unique(labels)\nprint (label_ids)\nprint (ndimage.sum(mask, labels, label_ids))", "Find maximum value for each object", "print (ndimage.maximum(sig, labels, range(1, labels.max()+1)))", "Extract the identified object from the original matrix", "sl = ndimage.find_objects(labels==4)\nplt.imshow(sig[sl[0]])", "Exercises\n1. Analyze the function:\n\\begin{equation} y = 0.2 x^2 + 10 \\cos(x) \\end{equation}\n\nFind roots of the function\nFind local minimum of the function\n\n2. Fit 2nd order polynomial to noisy data:\nx = np.linspace(-10, 10, 20)\ny = 2 * x ** 2 + np.random.randn(x.size) * 10\n3. Find edges on the 'ascent' image (use ndimage.sobel)\nfrom scipy import misc\nascent = misc.ascent()" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
tensorflow/docs-l10n
site/zh-cn/tutorials/text/word_embeddings.ipynb
apache-2.0
[ "Copyright 2019 The TensorFlow Authors.", "#@title Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.", "单词嵌入向量\n<table class=\"tfo-notebook-buttons\" align=\"left\">\n <td><a target=\"_blank\" href=\"https://tensorflow.google.cn/tutorials/text/word_embeddings\" class=\"\"> <img src=\"https://tensorflow.google.cn/images/tf_logo_32px.png\" class=\"\"> 在 TensorFlow.org 上查看</a></td>\n <td><a target=\"_blank\" href=\"https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/zh-cn/tutorials/text/word_embeddings.ipynb\" class=\"\"> <img src=\"https://tensorflow.google.cn/images/colab_logo_32px.png\" class=\"\"> 在 Google Colab 中运行</a></td>\n <td><a target=\"_blank\" href=\"https://github.com/tensorflow/docs-l10n/blob/master/site/zh-cn/tutorials/text/word_embeddings.ipynb\" class=\"\"> <img src=\"https://tensorflow.google.cn/images/GitHub-Mark-32px.png\" class=\"\"> 在 GitHub 上查看源代码</a></td>\n <td><a href=\"https://storage.googleapis.com/tensorflow_docs/docs-l10n/site/zh-cn/tutorials/text/word_embeddings.ipynb\" class=\"\"><img src=\"https://tensorflow.google.cn/images/download_logo_32px.png\" class=\"\">下载笔记本</a></td>\n</table>\n\n本教程将介绍单词嵌入向量。包含完整的代码,可在小型数据集上从头开始训练单词嵌入向量,并使用 Embedding Projector(如下图所示)可视化这些嵌入向量。\n<img alt=\"Screenshot of the embedding projector\" width=\"400\" 
src=\"https://github.com/tensorflow/docs/blob/master/site/en/tutorials/text/images/embedding.jpg?raw=1\">\n用数字表示文本\n机器学习模型将向量(数字数组)作为输入。在处理文本时,我们必须先想出一种策略,将字符串转换为数字(或将文本“向量化”),然后再其馈入模型。在本部分中,我们将探究实现这一目标的三种策略。\n独热编码\n作为第一个想法,我们可以对词汇表中的每个单词进行“独热”编码。考虑这样一句话:“The cat sat on the mat”。这句话中的词汇(或唯一单词)是(cat、mat、on、sat、the)。为了表示每个单词,我们将创建一个长度等于词汇量的零向量,然后在与该单词对应的索引中放置一个 1。下图显示了这种方法。\n<img alt=\"Diagram of one-hot encodings\" width=\"400\" src=\"https://github.com/tensorflow/docs/blob/master/site/en/tutorials/text/images/one-hot.png?raw=1\">\n为了创建一个包含句子编码的向量,我们可以将每个单词的独热向量连接起来。\n要点:这种方法效率低下。一个独热编码向量十分稀疏(这意味着大多数索引为零)。假设我们的词汇表中有 10,000 个单词。为了对每个单词进行独热编码,我们将创建一个其中 99.99% 的元素都为零的向量。\n用一个唯一的数字编码每个单词\n我们可以尝试的第二种方法是使用唯一的数字来编码每个单词。继续上面的示例,我们可以将 1 分配给“cat”,将 2 分配给“mat”,依此类推。然后,我们可以将句子“The cat sat on the mat”编码为一个密集向量,例如 [5, 1, 4, 3, 5, 2]。这种方法是高效的。现在,我们有了一个密集向量(所有元素均已满),而不是稀疏向量。\n但是,这种方法有两个缺点:\n\n\n整数编码是任意的(它不会捕获单词之间的任何关系)。\n\n\n对于要解释的模型而言,整数编码颇具挑战。例如,线性分类器针对每个特征学习一个权重。由于任何两个单词的相似性与其编码的相似性之间都没有关系,因此这种特征权重组合没有意义。\n\n\n单词嵌入向量\n单词嵌入向量为我们提供了一种使用高效、密集表示的方法,其中相似的单词具有相似的编码。重要的是,我们不必手动指定此编码。嵌入向量是浮点值的密集向量(向量的长度是您指定的参数)。它们是可以训练的参数(模型在训练过程中学习的权重,与模型学习密集层权重的方法相同),无需手动为嵌入向量指定值。8 维的单词嵌入向量(对于小型数据集)比较常见,而在处理大型数据集时最多可达 1024 维。维度更高的嵌入向量可以捕获单词之间的细粒度关系,但需要更多的数据来学习。\n<img alt=\"Diagram of an embedding\" width=\"400\" src=\"https://github.com/tensorflow/docs/blob/master/site/en/tutorials/text/images/embedding2.png?raw=1\">\n上面是一个单词嵌入向量的示意图。每个单词都表示为浮点值的 4 维向量。还可以将嵌入向量视为“查找表”。学习完这些权重后,我们可以通过在表中查找对应的密集向量来编码每个单词。\n设置", "import tensorflow as tf\n\nfrom tensorflow import keras\nfrom tensorflow.keras import layers\n\nimport tensorflow_datasets as tfds\ntfds.disable_progress_bar()", "使用嵌入向量层\nKeras 让使用单词嵌入向量变得轻而易举。我们来看一下嵌入向量层。\n可以将嵌入向量层理解为一个从整数索引(代表特定单词)映射到密集向量(其嵌入向量)的查找表。嵌入向量的维数(或宽度)是一个参数,您可以试验它的数值,以了解多少维度适合您的问题,这与您试验密集层中神经元数量的方式非常相似。", "embedding_layer = layers.Embedding(1000, 5)", 
"创建嵌入向量层时,嵌入向量的权重会随机初始化(就像其他任何层一样)。在训练过程中,通过反向传播来逐渐调整这些权重。训练后,学习到的单词嵌入向量将粗略地编码单词之间的相似性(因为它们是针对训练模型的特定问题而学习的)。\n如果将整数传递给嵌入向量层,结果会将每个整数替换为嵌入向量表中的向量:", "result = embedding_layer(tf.constant([1,2,3]))\nresult.numpy()", "对于文本或序列问题,嵌入向量层采用整数组成的 2D 张量,其形状为 (samples, sequence_length),其中每个条目都是一个整数序列。它可以嵌入可变长度的序列。您可以在形状为 (32, 10)(32 个长度为 10 的序列组成的批次)或 (64, 15)(64 个长度为 15 的序列组成的批次)的批次上方馈入嵌入向量层。\n返回的张量比输入多一个轴,嵌入向量沿新的最后一个轴对齐。向其传递 (2, 3) 输入批次,输出为 (2, 3, N)", "result = embedding_layer(tf.constant([[0,1,2],[3,4,5]]))\nresult.shape", "当给定一个序列批次作为输入时,嵌入向量层将返回形状为 (samples, sequence_length, embedding_dimensionality) 的 3D 浮点张量。为了从可变长度的序列转换为固定表示,有多种标准方法。您可以先使用 RNN、注意力或池化层,然后再将其传递给密集层。本教程使用池化,因为它最简单。接下来,学习使用 RNN 进行文本分类教程是一个不错的选择。\n从头开始学习嵌入向量\n在本教程中,您将基于 IMDB 电影评论来训练情感分类器。在此过程中,模型将从头开始学习嵌入向量。我们将使用经过预处理的数据集。\n要从头开始加载文本数据集,请参阅加载文本教程。", "(train_data, test_data), info = tfds.load(\n 'imdb_reviews/subwords8k', \n split = (tfds.Split.TRAIN, tfds.Split.TEST), \n with_info=True, as_supervised=True)", "获取编码器 (tfds.features.text.SubwordTextEncoder),并快速浏览词汇表。\n词汇表中的“”代表空格。请注意词汇表如何包含完整单词(以“”结尾)以及可用于构建更大单词的部分单词:", "encoder = info.features['text'].encoder\nencoder.subwords[:20]", "电影评论的长度可以不同。我们将使用 padded_batch 方法来标准化评论的长度。", "train_batches = train_data.shuffle(1000).padded_batch(10)\ntest_batches = test_data.shuffle(1000).padded_batch(10)", "导入时,评论的文本是整数编码的(每个整数代表词汇表中的特定单词或单词部分)。\n请注意尾随零,因为批次会填充为最长的示例。", "train_batch, train_labels = next(iter(train_batches))\ntrain_batch.numpy()", "创建一个简单模型\n我们将使用 Keras 序列式 API 定义模型。在这种情况下,它是一个“连续词袋”样式的模型。\n\n\n接下来,嵌入向量层将采用整数编码的词汇表,并查找每个单词索引的嵌入向量。在模型训练时会学习这些向量。向量会向输出数组添加维度。得到的维度为:(batch, sequence, embedding)。\n\n\n接下来,通过对序列维度求平均值,GlobalAveragePooling1D 层会返回每个样本的固定长度输出向量。这让模型能够以最简单的方式处理可变长度的输入。\n\n\n此固定长度输出向量通过一个包含 16 个隐藏单元的完全连接(密集)层进行流水线传输。\n\n\n最后一层与单个输出节点密集连接。利用 Sigmoid 激活函数,得出此值是 0 到 1 之间的浮点数,表示评论为正面的概率(或置信度)。\n\n\n小心:此模型不使用遮盖,而是使用零填充作为输入的一部分,因此填充长度可能会影响输出。要解决此问题,请参阅遮盖和填充指南。", "embedding_dim=16\n\nmodel = keras.Sequential([\n layers.Embedding(encoder.vocab_size, 
embedding_dim),\n layers.GlobalAveragePooling1D(),\n layers.Dense(16, activation='relu'),\n layers.Dense(1)\n])\n\nmodel.summary()", "编译和训练模型", "model.compile(optimizer='adam',\n loss=tf.keras.losses.BinaryCrossentropy(from_logits=True),\n metrics=['accuracy'])\n\nhistory = model.fit(\n train_batches,\n epochs=10,\n validation_data=test_batches, validation_steps=20)", "通过这种方法,我们的模型可以达到约 88% 的验证准确率(请注意,该模型过度拟合,因此训练准确率要高得多)。", "import matplotlib.pyplot as plt\n\nhistory_dict = history.history\n\nacc = history_dict['accuracy']\nval_acc = history_dict['val_accuracy']\nloss=history_dict['loss']\nval_loss=history_dict['val_loss']\n\nepochs = range(1, len(acc) + 1)\n\nplt.figure(figsize=(12,9))\nplt.plot(epochs, loss, 'bo', label='Training loss')\nplt.plot(epochs, val_loss, 'b', label='Validation loss')\nplt.title('Training and validation loss')\nplt.xlabel('Epochs')\nplt.ylabel('Loss')\nplt.legend()\nplt.show()\n\nplt.figure(figsize=(12,9))\nplt.plot(epochs, acc, 'bo', label='Training acc')\nplt.plot(epochs, val_acc, 'b', label='Validation acc')\nplt.title('Training and validation accuracy')\nplt.xlabel('Epochs')\nplt.ylabel('Accuracy')\nplt.legend(loc='lower right')\nplt.ylim((0.5,1))\nplt.show()", "检索学习的嵌入向量\n接下来,我们检索在训练期间学习的单词嵌入向量。这将是一个形状为 (vocab_size, embedding-dimension) 的矩阵。", "e = model.layers[0]\nweights = e.get_weights()[0]\nprint(weights.shape) # shape: (vocab_size, embedding_dim)", "现在,我们将权重写入磁盘。要使用 Embedding Projector,我们将以制表符分隔的格式上传两个文件:一个向量文件(包含嵌入向量)和一个元数据文件(包含单词)。", "import io\n\nencoder = info.features['text'].encoder\n\nout_v = io.open('vecs.tsv', 'w', encoding='utf-8')\nout_m = io.open('meta.tsv', 'w', encoding='utf-8')\n\nfor num, word in enumerate(encoder.subwords):\n vec = weights[num+1] # skip 0, it's padding.\n out_m.write(word + \"\\n\")\n out_v.write('\\t'.join([str(x) for x in vec]) + \"\\n\")\nout_v.close()\nout_m.close()", "如果您正在 Colaboratory 中运行本教程,则可以使用以下代码段将这些文件下载到本地计算机上(或使用文件浏览器,View -> Table of contents -> File browser)。", "try:\n from 
google.colab import files\nexcept ImportError:\n pass\nelse:\n files.download('vecs.tsv')\n files.download('meta.tsv')", "可视化嵌入向量\n为了可视化嵌入向量,我们将它们上传到 Embedding Projector。\n打开 Embedding Projector(也可以在本地 TensorBoard 实例中运行)。\n\n\n点击“Load data”。\n\n\n上传我们在上面创建的两个文件:vecs.tsv 和 meta.tsv。\n\n\n现在将显示您已训练的嵌入向量。您可以搜索单词以查找其最邻近。例如,尝试搜索“beautiful”,您可能会看到“wonderful”等相邻单词。\n注:您的结果可能会略有不同,具体取决于训练嵌入向量层之前如何随机初始化权重。\n注:您可以试验性地使用更简单的模型来生成更多可解释的嵌入向量。尝试删除 Dense(16) 层,重新训练模型,然后再次可视化嵌入向量。\n<img alt=\"Screenshot of the embedding projector\" width=\"400\" src=\"https://github.com/tensorflow/docs/blob/master/site/en/tutorials/text/images/embedding.jpg?raw=1\">\n后续步骤\n本教程向您展示了如何在小数据集上从头开始训练和可视化单词嵌入向量。\n\n\n要了解循环网络,请参阅 Keras RNN 指南。\n\n\n要详细了解文本分类(包括整个工作流,以及如果您对何时使用嵌入向量还是独热编码感到好奇),我们建议您阅读这篇实用的文本分类指南。" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
hglanz/phys202-2015-work
assignments/assignment08/InterpolationEx01.ipynb
mit
[ "Interpolation Exercise 1", "%matplotlib inline\nimport matplotlib.pyplot as plt\nimport seaborn as sns\nimport numpy as np\n\nfrom scipy.interpolate import interp1d", "2D trajectory interpolation\nThe file trajectory.npz contains 3 Numpy arrays that describe a 2d trajectory of a particle as a function of time:\n\nt which has discrete values of time t[i].\nx which has values of the x position at those times: x[i] = x(t[i]).\ny which has values of the y position at those times: y[i] = y(t[i]).\n\nLoad those arrays into this notebook and save them as variables x, y and t:", "data = np.load(\"trajectory.npz\")\nt = np.array(data['t'])\nx = np.array(data['x'])\ny = np.array(data['y'])\n\n#raise NotImplementedError()\n\nassert isinstance(x, np.ndarray) and len(x)==40\nassert isinstance(y, np.ndarray) and len(y)==40\nassert isinstance(t, np.ndarray) and len(t)==40", "Use these arrays to create interpolated functions $x(t)$ and $y(t)$. Then use those functions to create the following arrays:\n\nnewt which has 200 points between ${t_{min},t_{max}}$.\nnewx which has the interpolated values of $x(t)$ at those times.\nnewy which has the interpolated values of $y(t)$ at those times.", "xt = interp1d(t, x, kind='cubic')\nyt = interp1d(t, y, kind='cubic')\n\nnewt = np.linspace(min(t), max(t), 200)\nnewx = xt(newt)\nnewy = yt(newt)\n#raise NotImplementedError()\n\nassert newt[0]==t.min()\nassert newt[-1]==t.max()\nassert len(newt)==200\nassert len(newx)==200\nassert len(newy)==200", "Make a parametric plot of ${x(t),y(t)}$ that shows the interpolated values and the original points:\n\nFor the interpolated points, use a solid line.\nFor the original points, use circles of a different color and no line.\nCustomize your plot to make it effective and beautiful.", "plt.plot(x, y, marker='o', linestyle='', label='original data')\nplt.plot(newx, newy, marker='.', label='interpolated');\nplt.legend();\nplt.xlabel('x(t)')\nplt.ylabel('y(t)');\n#raise NotImplementedError()\n\nassert True
# leave this to grade the trajectory plot" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
WikiWatershed/model-my-watershed
doc/MMW_API_landproperties_demo.ipynb
apache-2.0
[ "Model My Watershed (MMW) API Demo: Analyze land properties\nEmilio Mayorga, University of Washington, Seattle. 2018-5-17 (minor updates to documentation on 2018-8-19). Demo put together using as a starting point instructions from Azavea from October 2017. See also the related, previous notebook, https://github.com/WikiWatershed/model-my-watershed/blob/develop/doc/MMW_API_watershed_demo.ipynb\nIntroduction\nThe Model My Watershed API allows you to delineate watersheds and analyze geo-data for watersheds and arbitrary areas. You can read more about the work at WikiWatershed or use the web app.\nMMW users can discover their API keys through the user interface, and test the MMW geoprocessing API on either the live or staging apps. An Account page with the API key is available from either app (live or staging). To see it, go to the app, log in, and click on \"Account\" in the dropdown that appears when you click on your username in the top right. Your key is different between staging and production. For testing with the live (production) API and key, go to https://modelmywatershed.org/api/docs/\nThe API can be tested from the command line using curl. This example uses the production API to test the watershed endpoint:\nbash\ncurl -H \"Content-Type: application/json\" -H \"Authorization: Token YOUR_API_KEY\" -X POST \n -d '{ \"location\": [39.67185,-75.76743] }' https://modelmywatershed.org/api/watershed/\nMMW API: Obtain land properties based on \"analyze\" geoprocessing on AOI (small box around a point)\n1. 
Set up", "import json\nimport requests\nfrom requests.adapters import HTTPAdapter\nfrom requests.packages.urllib3.util.retry import Retry\n\ndef requests_retry_session(\n retries=3,\n backoff_factor=0.3,\n status_forcelist=(500, 502, 504),\n session=None,\n):\n session = session or requests.Session()\n retry = Retry(\n total=retries,\n read=retries,\n connect=retries,\n backoff_factor=backoff_factor,\n status_forcelist=status_forcelist,\n )\n adapter = HTTPAdapter(max_retries=retry)\n session.mount('http://', adapter)\n session.mount('https://', adapter)\n return session", "MMW production API endpoint base url.", "api_url = \"https://modelmywatershed.org/api/\"", "The job is not completed instantly and the results are not returned directly by the API request that initiated the job. The user must first issue an API request to confirm that the job is complete, then fetch the results. The demo presented here performs automated retries (checks) until the server confirms the job is completed, then requests the JSON results and converts (deserializes) them into a Python dictionary.", "def get_job_result(api_url, s, jobrequest):\n url_tmplt = api_url + \"jobs/{job}/\"\n get_url = url_tmplt.format\n \n result = ''\n while not result:\n get_req = requests_retry_session(session=s).get(get_url(job=jobrequest['job']))\n result = json.loads(get_req.content)['result']\n \n return result\n\ns = requests.Session()\n\nAPIToken = '<YOUR API TOKEN STRING>' # ENTER YOUR OWN API TOKEN \n\ns.headers.update({\n 'Authorization': APIToken,\n 'Content-Type': 'application/json'\n})", "2. Construct AOI GeoJSON for job request\nParameters passed to the \"analyze\" API requests.", "from shapely.geometry import box, MultiPolygon\n\nwidth = 0.0004 # Looks like using a width smaller than 0.0002 causes a problem with the API?\n\n# GOOS: (-88.5552, 40.4374) elev 240.93. Agriculture Site—Goose Creek (Corn field) Site (GOOS) at IML CZO\n# SJER: (-119.7314, 37.1088) elev 403.86. 
San Joaquin Experimental Reserve Site (SJER) at South Sierra CZO\nlon, lat = -119.7314, 37.1088\n\nbbox = box(lon-0.5*width, lat-0.5*width, lon+0.5*width, lat+0.5*width)\n\npayload = MultiPolygon([bbox]).__geo_interface__\n\njson_payload = json.dumps(payload)\n\npayload", "3. Issue job requests, fetch job results when done, then examine results. Repeat for each request type", "# convenience function, to simplify the request calls, below\ndef analyze_api_request(api_name, s, api_url, json_payload):\n post_url = \"{}analyze/{}/\".format(api_url, api_name)\n post_req = requests_retry_session(session=s).post(post_url, data=json_payload)\n jobrequest_json = json.loads(post_req.content)\n # Fetch and examine job result\n result = get_job_result(api_url, s, jobrequest_json)\n return result", "Issue job request: analyze/land/2011_2011/", "result = analyze_api_request('land/2011_2011', s, api_url, json_payload)", "Everything below is just exploration of the results. Examine the content of the results (as JSON, and Python dictionaries)", "type(result), result.keys()", "result is a dictionary with one item, survey. This item in turn is a dictionary with 3 items: displayName, name, categories. The first two are just labels. The data are in the categories item.", "result['survey'].keys()\n\ncategories = result['survey']['categories']\n\nlen(categories), categories[1]\n\nland_categories_nonzero = [d for d in categories if d['coverage'] > 0]\n\nland_categories_nonzero", "Issue job request: analyze/terrain/", "result = analyze_api_request('terrain', s, api_url, json_payload)", "result is a dictionary with one item, survey. This item in turn is a dictionary with 3 items: displayName, name, categories. The first two are just labels. 
The data are in the categories item.", "categories = result['survey']['categories']\n\nlen(categories), categories\n\n[d for d in categories if d['type'] == 'average']", "Issue job request: analyze/climate/", "result = analyze_api_request('climate', s, api_url, json_payload)", "result is a dictionary with one item, survey. This item in turn is a dictionary with 3 items: displayName, name, categories. The first two are just labels. The data are in the categories item.", "categories = result['survey']['categories']\n\nlen(categories), categories[:2]\n\nppt = [d['ppt'] for d in categories]\ntmean = [d['tmean'] for d in categories]\n\n# ppt is in cm, right?\nsum(ppt)\n\nimport calendar\nimport numpy as np\n\ncalendar.mdays\n\n# Annual tmean needs to be weighted by the number of days per month\nsum(np.asarray(tmean) * np.asarray(calendar.mdays[1:]))/365" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
tensorflow/recommenders
docs/examples/basic_retrieval.ipynb
apache-2.0
[ "Copyright 2020 The TensorFlow Authors.", "#@title Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.", "Recommending movies: retrieval\n<table class=\"tfo-notebook-buttons\" align=\"left\">\n <td>\n <a target=\"_blank\" href=\"https://www.tensorflow.org/recommenders/examples/basic_retrieval\"><img src=\"https://www.tensorflow.org/images/tf_logo_32px.png\" />View on TensorFlow.org</a>\n </td>\n <td>\n <a target=\"_blank\" href=\"https://colab.research.google.com/github/tensorflow/recommenders/blob/main/docs/examples/basic_retrieval.ipynb\"><img src=\"https://www.tensorflow.org/images/colab_logo_32px.png\" />Run in Google Colab</a>\n </td>\n <td>\n <a target=\"_blank\" href=\"https://github.com/tensorflow/recommenders/blob/main/docs/examples/basic_retrieval.ipynb\"><img src=\"https://www.tensorflow.org/images/GitHub-Mark-32px.png\" />View source on GitHub</a>\n </td>\n <td>\n <a href=\"https://storage.googleapis.com/tensorflow_docs/recommenders/docs/examples/basic_retrieval.ipynb\"><img src=\"https://www.tensorflow.org/images/download_logo_32px.png\" />Download notebook</a>\n </td>\n</table>\n\nReal-world recommender systems are often composed of two stages:\n\nThe retrieval stage is responsible for selecting an initial set of hundreds of candidates from all possible candidates. The main objective of this model is to efficiently weed out all candidates that the user is not interested in. 
Because the retrieval model may be dealing with millions of candidates, it has to be computationally efficient.\nThe ranking stage takes the outputs of the retrieval model and fine-tunes them to select the best possible handful of recommendations. Its task is to narrow down the set of items the user may be interested in to a shortlist of likely candidates.\n\nIn this tutorial, we're going to focus on the first stage, retrieval. If you are interested in the ranking stage, have a look at our ranking tutorial.\nRetrieval models are often composed of two sub-models:\n\nA query model computing the query representation (normally a fixed-dimensionality embedding vector) using query features.\nA candidate model computing the candidate representation (an equally-sized vector) using the candidate features.\n\nThe outputs of the two models are then multiplied together to give a query-candidate affinity score, with higher scores expressing a better match between the candidate and the query.\nIn this tutorial, we're going to build and train such a two-tower model using the Movielens dataset.\nWe're going to:\n\nGet our data and split it into a training and test set.\nImplement a retrieval model.\nFit and evaluate it.\nExport it for efficient serving by building an approximate nearest neighbours (ANN) index.\n\nThe dataset\nThe Movielens dataset is a classic dataset from the GroupLens research group at the University of Minnesota. It contains a set of ratings given to movies by a set of users, and is a workhorse of recommender system research.\nThe data can be treated in two ways:\n\nIt can be interpreted as expressing which movies the users watched (and rated), and which they did not. This is a form of implicit feedback, where users' watches tell us which things they prefer to see and which they'd rather not see.\nIt can also be seen as expressing how much the users liked the movies they did watch. 
This is a form of explicit feedback: given that a user watched a movie, we can tell roughly how much they liked it by looking at the rating they have given.\n\nIn this tutorial, we are focusing on a retrieval system: a model that predicts a set of movies from the catalogue that the user is likely to watch. Often, implicit data is more useful here, and so we are going to treat Movielens as an implicit system. This means that every movie a user watched is a positive example, and every movie they have not seen is an implicit negative example.\nImports\nLet's first get our imports out of the way.", "!pip install -q tensorflow-recommenders\n!pip install -q --upgrade tensorflow-datasets\n!pip install -q scann\n\nimport os\nimport pprint\nimport tempfile\n\nfrom typing import Dict, Text\n\nimport numpy as np\nimport tensorflow as tf\nimport tensorflow_datasets as tfds\n\nimport tensorflow_recommenders as tfrs", "Preparing the dataset\nLet's first have a look at the data.\nWe use the MovieLens dataset from Tensorflow Datasets. Loading movielens/100k_ratings yields a tf.data.Dataset object containing the ratings data and loading movielens/100k_movies yields a tf.data.Dataset object containing only the movies data.\nNote that since the MovieLens dataset does not have predefined splits, all data are under the train split.", "# Ratings data.\nratings = tfds.load(\"movielens/100k-ratings\", split=\"train\")\n# Features of all the available movies.\nmovies = tfds.load(\"movielens/100k-movies\", split=\"train\")", "The ratings dataset returns a dictionary of movie id, user id, the assigned rating, timestamp, movie information, and user information:", "for x in ratings.take(1).as_numpy_iterator():\n pprint.pprint(x)", "The movies dataset contains the movie id, movie title, and data on what genres it belongs to. 
Note that the genres are encoded with integer labels.", "for x in movies.take(1).as_numpy_iterator():\n pprint.pprint(x)", "In this example, we're going to focus on the ratings data. Other tutorials explore how to use the movie information data as well to improve the model quality.\nWe keep only the user_id, and movie_title fields in the dataset.", "ratings = ratings.map(lambda x: {\n \"movie_title\": x[\"movie_title\"],\n \"user_id\": x[\"user_id\"],\n})\nmovies = movies.map(lambda x: x[\"movie_title\"])", "To fit and evaluate the model, we need to split it into a training and evaluation set. In an industrial recommender system, this would most likely be done by time: the data up to time $T$ would be used to predict interactions after $T$.\nIn this simple example, however, let's use a random split, putting 80% of the ratings in the train set, and 20% in the test set.", "tf.random.set_seed(42)\nshuffled = ratings.shuffle(100_000, seed=42, reshuffle_each_iteration=False)\n\ntrain = shuffled.take(80_000)\ntest = shuffled.skip(80_000).take(20_000)", "Let's also figure out unique user ids and movie titles present in the data. \nThis is important because we need to be able to map the raw values of our categorical features to embedding vectors in our models. 
To do that, we need a vocabulary that maps a raw feature value to an integer in a contiguous range: this allows us to look up the corresponding embeddings in our embedding tables.", "movie_titles = movies.batch(1_000)\nuser_ids = ratings.batch(1_000_000).map(lambda x: x[\"user_id\"])\n\nunique_movie_titles = np.unique(np.concatenate(list(movie_titles)))\nunique_user_ids = np.unique(np.concatenate(list(user_ids)))\n\nunique_movie_titles[:10]", "Implementing a model\nChoosing the architecture of our model is a key part of modelling.\nBecause we are building a two-tower retrieval model, we can build each tower separately and then combine them in the final model.\nThe query tower\nLet's start with the query tower.\nThe first step is to decide on the dimensionality of the query and candidate representations:", "embedding_dimension = 32", "Higher values will correspond to models that may be more accurate, but will also be slower to fit and more prone to overfitting.\nThe second is to define the model itself. Here, we're going to use Keras preprocessing layers to first convert user ids to integers, and then convert those to user embeddings via an Embedding layer. Note that we use the list of unique user ids we computed earlier as a vocabulary:", "user_model = tf.keras.Sequential([\n tf.keras.layers.StringLookup(\n vocabulary=unique_user_ids, mask_token=None),\n # We add an additional embedding to account for unknown tokens.\n tf.keras.layers.Embedding(len(unique_user_ids) + 1, embedding_dimension)\n])", "A simple model like this corresponds exactly to a classic matrix factorization approach. 
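The vocabulary-plus-embedding lookup that the StringLookup and Embedding layers perform can be sketched in plain NumPy (hypothetical ids and a tiny embedding dimension — not the TFRS implementation):

```python
import numpy as np

# Hypothetical raw user ids; in the tutorial these come from the ratings data.
raw_ids = ["u13", "u7", "u42", "u42"]
vocab = sorted(set(raw_ids))                   # contiguous-range vocabulary
to_index = {v: i for i, v in enumerate(vocab)}

embedding_dimension = 4
rng = np.random.default_rng(0)
# One extra row serves as the out-of-vocabulary bucket, mirroring the
# "+ 1" in the Embedding layer above.
table = rng.normal(size=(len(vocab) + 1, embedding_dimension))

def embed(ids):
    # Unknown ids map to the last (OOV) row of the table.
    rows = [to_index.get(i, len(vocab)) for i in ids]
    return table[rows]

vectors = embed(["u42", "never-seen"])
```

The contiguous integer range is what makes the embedding table a simple row lookup.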
While defining a subclass of tf.keras.Model for this simple model might be overkill, we can easily extend it to an arbitrarily complex model using standard Keras components, as long as we return an embedding_dimension-wide output at the end.\nThe candidate tower\nWe can do the same with the candidate tower.", "movie_model = tf.keras.Sequential([\n tf.keras.layers.StringLookup(\n vocabulary=unique_movie_titles, mask_token=None),\n tf.keras.layers.Embedding(len(unique_movie_titles) + 1, embedding_dimension)\n])", "Metrics\nIn our training data we have positive (user, movie) pairs. To figure out how good our model is, we need to compare the affinity score that the model calculates for this pair to the scores of all the other possible candidates: if the score for the positive pair is higher than for all other candidates, our model is highly accurate.\nTo do this, we can use the tfrs.metrics.FactorizedTopK metric. The metric has one required argument: the dataset of candidates that are used as implicit negatives for evaluation.\nIn our case, that's the movies dataset, converted into embeddings via our movie model:", "metrics = tfrs.metrics.FactorizedTopK(\n candidates=movies.batch(128).map(movie_model)\n)", "Loss\nThe next component is the loss used to train our model. TFRS has several loss layers and tasks to make this easy.\nIn this instance, we'll make use of the Retrieval task object: a convenience wrapper that bundles together the loss function and metric computation:", "task = tfrs.tasks.Retrieval(\n metrics=metrics\n)", "The task itself is a Keras layer that takes the query and candidate embeddings as arguments, and returns the computed loss: we'll use that to implement the model's training loop.\nThe full model\nWe can now put it all together into a model. 
TFRS exposes a base model class (tfrs.models.Model) which streamlines building models: all we need to do is to set up the components in the __init__ method, and implement the compute_loss method, taking in the raw features and returning a loss value.\nThe base model will then take care of creating the appropriate training loop to fit our model.", "class MovielensModel(tfrs.Model):\n\n def __init__(self, user_model, movie_model):\n super().__init__()\n self.movie_model: tf.keras.Model = movie_model\n self.user_model: tf.keras.Model = user_model\n self.task: tf.keras.layers.Layer = task\n\n def compute_loss(self, features: Dict[Text, tf.Tensor], training=False) -> tf.Tensor:\n # We pick out the user features and pass them into the user model.\n user_embeddings = self.user_model(features[\"user_id\"])\n # And pick out the movie features and pass them into the movie model,\n # getting embeddings back.\n positive_movie_embeddings = self.movie_model(features[\"movie_title\"])\n\n # The task computes the loss and the metrics.\n return self.task(user_embeddings, positive_movie_embeddings)", "The tfrs.Model base class is simply a convenience class: it allows us to compute both training and test losses using the same method.\nUnder the hood, it's still a plain Keras model. 
You could achieve the same functionality by inheriting from tf.keras.Model and overriding the train_step and test_step functions (see the guide for details):", "class NoBaseClassMovielensModel(tf.keras.Model):\n\n def __init__(self, user_model, movie_model):\n super().__init__()\n self.movie_model: tf.keras.Model = movie_model\n self.user_model: tf.keras.Model = user_model\n self.task: tf.keras.layers.Layer = task\n\n def train_step(self, features: Dict[Text, tf.Tensor]) -> tf.Tensor:\n\n # Set up a gradient tape to record gradients.\n with tf.GradientTape() as tape:\n\n # Loss computation.\n user_embeddings = self.user_model(features[\"user_id\"])\n positive_movie_embeddings = self.movie_model(features[\"movie_title\"])\n loss = self.task(user_embeddings, positive_movie_embeddings)\n\n # Handle regularization losses as well.\n regularization_loss = sum(self.losses)\n\n total_loss = loss + regularization_loss\n\n gradients = tape.gradient(total_loss, self.trainable_variables)\n self.optimizer.apply_gradients(zip(gradients, self.trainable_variables))\n\n metrics = {metric.name: metric.result() for metric in self.metrics}\n metrics[\"loss\"] = loss\n metrics[\"regularization_loss\"] = regularization_loss\n metrics[\"total_loss\"] = total_loss\n\n return metrics\n\n def test_step(self, features: Dict[Text, tf.Tensor]) -> tf.Tensor:\n\n # Loss computation.\n user_embeddings = self.user_model(features[\"user_id\"])\n positive_movie_embeddings = self.movie_model(features[\"movie_title\"])\n loss = self.task(user_embeddings, positive_movie_embeddings)\n\n # Handle regularization losses as well.\n regularization_loss = sum(self.losses)\n\n total_loss = loss + regularization_loss\n\n metrics = {metric.name: metric.result() for metric in self.metrics}\n metrics[\"loss\"] = loss\n metrics[\"regularization_loss\"] = regularization_loss\n metrics[\"total_loss\"] = total_loss\n\n return metrics", "In these tutorials, however, we stick to using the tfrs.Model base class to keep 
our focus on modelling and abstract away some of the boilerplate.\nFitting and evaluating\nAfter defining the model, we can use standard Keras fitting and evaluation routines to fit and evaluate the model.\nLet's first instantiate the model.", "model = MovielensModel(user_model, movie_model)\nmodel.compile(optimizer=tf.keras.optimizers.Adagrad(learning_rate=0.1))", "Then shuffle, batch, and cache the training and evaluation data.", "cached_train = train.shuffle(100_000).batch(8192).cache()\ncached_test = test.batch(4096).cache()", "Then train the model:", "model.fit(cached_train, epochs=3)", "If you want to monitor the training process with TensorBoard, you can add a TensorBoard callback to fit() function and then start TensorBoard using %tensorboard --logdir logs/fit. Please refer to TensorBoard documentation for more details.\nAs the model trains, the loss is falling and a set of top-k retrieval metrics is updated. These tell us whether the true positive is in the top-k retrieved items from the entire candidate set. For example, a top-5 categorical accuracy metric of 0.2 would tell us that, on average, the true positive is in the top 5 retrieved items 20% of the time.\nNote that, in this example, we evaluate the metrics during training as well as evaluation. Because this can be quite slow with large candidate sets, it may be prudent to turn metric calculation off in training, and only run it in evaluation.\nFinally, we can evaluate our model on the test set:", "model.evaluate(cached_test, return_dict=True)", "Test set performance is much worse than training performance. This is due to two factors:\n\nOur model is likely to perform better on the data that it has seen, simply because it can memorize it. This overfitting phenomenon is especially strong when models have many parameters. 
It can be mitigated by model regularization and use of user and movie features that help the model generalize better to unseen data.\nThe model is re-recommending some of users' already watched movies. These known-positive watches can crowd test movies out of the top-K recommendations.\n\nThe second phenomenon can be tackled by excluding previously seen movies from test recommendations. This approach is relatively common in the recommender systems literature, but we don't follow it in these tutorials. If not recommending past watches is important, we should expect appropriately specified models to learn this behaviour automatically from past user history and contextual information. Additionally, it is often appropriate to recommend the same item multiple times (say, an evergreen TV series or a regularly purchased item).\nMaking predictions\nNow that we have a model, we would like to be able to make predictions. We can use the tfrs.layers.factorized_top_k.BruteForce layer to do this.", "# Create a model that takes in raw query features, and\nindex = tfrs.layers.factorized_top_k.BruteForce(model.user_model)\n# recommends movies out of the entire movies dataset.\nindex.index_from_dataset(\n tf.data.Dataset.zip((movies.batch(100), movies.batch(100).map(model.movie_model)))\n)\n\n# Get recommendations.\n_, titles = index(tf.constant([\"42\"]))\nprint(f\"Recommendations for user 42: {titles[0, :3]}\")", "Of course, the BruteForce layer is going to be too slow to serve a model with many possible candidates. The following section shows how to speed this up by using an approximate retrieval index.\nModel serving\nAfter the model is trained, we need a way to deploy it.\nIn a two-tower retrieval model, serving has two components:\n\na serving query model, taking in features of the query and transforming them into a query embedding, and\na serving candidate model. 
This most often takes the form of an approximate nearest neighbours (ANN) index which allows fast approximate lookup of candidates in response to a query produced by the query model.\n\nIn TFRS, both components can be packaged into a single exportable model, giving us a model that takes the raw user id and returns the titles of top movies for that user. This is done via exporting the model to a SavedModel format, which makes it possible to serve using TensorFlow Serving.\nTo deploy a model like this, we simply export the BruteForce layer we created above:", "# Export the query model.\nwith tempfile.TemporaryDirectory() as tmp:\n path = os.path.join(tmp, \"model\")\n\n # Save the index.\n tf.saved_model.save(index, path)\n\n # Load it back; can also be done in TensorFlow Serving.\n loaded = tf.saved_model.load(path)\n\n # Pass a user id in, get top predicted movie titles back.\n scores, titles = loaded([\"42\"])\n\n print(f\"Recommendations: {titles[0][:3]}\")", "We can also export an approximate retrieval index to speed up predictions. This will make it possible to efficiently surface recommendations from sets of tens of millions of candidates.\nTo do so, we can use the scann package. 
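At serving time, both the BruteForce layer and an ANN index answer the same question: which candidate embeddings score highest against a query embedding. A brute-force NumPy sketch (random embeddings and illustrative sizes, not the TFRS internals):

```python
import numpy as np

rng = np.random.default_rng(42)
num_candidates, dim = 1000, 32
candidate_embeddings = rng.normal(size=(num_candidates, dim))
query_embedding = rng.normal(size=(dim,))

# Affinity score is the dot product between query and candidate towers.
scores = candidate_embeddings @ query_embedding

# Exact top-10 by score; an ANN index approximates this step so it does
# not have to score every candidate.
top_k = np.argsort(-scores)[:10]
```

This is O(num_candidates × dim) per query, which is why approximate indices matter at scale.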
This is an optional dependency of TFRS, and we installed it separately at the beginning of this tutorial by calling !pip install -q scann.\nOnce installed we can use the TFRS ScaNN layer:", "scann_index = tfrs.layers.factorized_top_k.ScaNN(model.user_model)\nscann_index.index_from_dataset(\n tf.data.Dataset.zip((movies.batch(100), movies.batch(100).map(model.movie_model)))\n)", "This layer will perform approximate lookups: this makes retrieval slightly less accurate, but orders of magnitude faster on large candidate sets.", "# Get recommendations.\n_, titles = scann_index(tf.constant([\"42\"]))\nprint(f\"Recommendations for user 42: {titles[0, :3]}\")", "Exporting it for serving is as easy as exporting the BruteForce layer:", "# Export the query model.\nwith tempfile.TemporaryDirectory() as tmp:\n path = os.path.join(tmp, \"model\")\n\n # Save the index.\n tf.saved_model.save(\n index,\n path,\n options=tf.saved_model.SaveOptions(namespace_whitelist=[\"Scann\"])\n )\n\n # Load it back; can also be done in TensorFlow Serving.\n loaded = tf.saved_model.load(path)\n\n # Pass a user id in, get top predicted movie titles back.\n scores, titles = loaded([\"42\"])\n\n print(f\"Recommendations: {titles[0][:3]}\")", "To learn more about using and tuning fast approximate retrieval models, have a look at our efficient serving tutorial.\nItem-to-item recommendation\nIn this model, we created a user-movie model. However, for some applications (for example, product detail pages) it's common to perform item-to-item (for example, movie-to-movie or product-to-product) recommendations.\nTraining models like this would follow the same pattern as shown in this tutorial, but with different training data. Here, we had a user and a movie tower, and used (user, movie) pairs to train them. In an item-to-item model, we would have two item towers (for the query and candidate item), and train the model using (query item, candidate item) pairs. 
These could be constructed from clicks on product detail pages.\nNext steps\nThis concludes the retrieval tutorial.\nTo expand on what is presented here, have a look at:\n\nLearning multi-task models: jointly optimizing for ratings and clicks.\nUsing movie metadata: building a more complex movie model to alleviate cold-start." ]
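The Retrieval task used throughout this tutorial computes, roughly, an in-batch sampled-softmax loss: every other candidate in the batch acts as a negative for a given query. A NumPy sketch of that idea (random embeddings; a simplification, not the actual tfrs.tasks.Retrieval internals):

```python
import numpy as np

rng = np.random.default_rng(1)
batch_size, dim = 8, 32
query_emb = rng.normal(size=(batch_size, dim))
cand_emb = rng.normal(size=(batch_size, dim))

# (batch, batch) score matrix; entry [i, j] scores query i vs candidate j.
logits = query_emb @ cand_emb.T

# Numerically stable log-softmax over each row.
m = logits.max(axis=1, keepdims=True)
log_probs = logits - (m + np.log(np.exp(logits - m).sum(axis=1, keepdims=True)))

# The matching candidate sits on the diagonal; minimize its negative log-prob.
loss = -np.mean(np.diag(log_probs))
```

Reusing in-batch candidates as negatives is what makes training tractable without scoring the whole catalogue.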
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
ling7334/tensorflow-get-started
mnist/Deep_MNIST_for_Experts.ipynb
apache-2.0
[ "Deep MNIST for Experts\nTensorFlow is a very powerful library for doing large-scale numerical computation. One of the tasks it excels at is implementing and training deep neural networks.\nIn this tutorial we will learn the basic steps of building a TensorFlow model, and use those steps to construct a deep convolutional neural network for MNIST.\nThis tutorial assumes that you are already familiar with neural networks and the MNIST dataset. If you are not, please see the beginner's guide.\nAbout this tutorial\nThis tutorial first explains the code in mnist_softmax.py — a simple application of a TensorFlow model — and then shows some ways to improve its accuracy.\nYou can run the code in this tutorial, or simply read through it.\nThis tutorial will:\nCreate a softmax regression model that takes MNIST images as input and recognizes digits, and train it with TensorFlow by having it look at thousands of examples (running our first TensorFlow session)\nCheck the model's accuracy using the test data\nBuild, train, and test a multilayer convolutional neural network to improve the accuracy \nSetup\nBefore we create our model, we will first load the MNIST dataset, then start a TensorFlow session.\nLoad MNIST Data\nFor your convenience, we have prepared a script that automatically downloads and imports the MNIST dataset. It will create a directory called MNIST_data in which to store the data.", "import input_data\nmnist = input_data.read_data_sets('MNIST_data', one_hot=True)", "Here, mnist is a lightweight class which stores the training, validation, and testing sets as NumPy arrays. It also provides a function for iterating through data minibatches, which we will use below.\nStart a TensorFlow InteractiveSession\nTensorFlow relies on a highly efficient C++ backend to do its computation. The connection to this backend is called a session. The common usage for TensorFlow programs is to first create a graph and then launch it in a session.\nHere we instead use the convenient InteractiveSession class, which makes TensorFlow more flexible about how you structure your code. It allows you to interleave operations which build a computation graph with ones that run the graph. This is particularly convenient when working in interactive contexts like IPython. If you are not using an InteractiveSession, then you should build the entire computation graph before starting a session, and then launch the graph.", "import tensorflow as tf\nsess = tf.InteractiveSession()", "Computation Graph\nTo do efficient numerical computing in Python, we typically use libraries like NumPy that do expensive operations such as matrix multiplication outside Python, using highly efficient code implemented in another language.\nUnfortunately, there can still be a lot of overhead from switching back to Python on every operation. This overhead is especially bad if you want to run computations on GPUs or in a distributed manner, where the cost is likely dominated by data transfer.\nTensorFlow also does its heavy lifting outside Python, but it takes things a step further to avoid this overhead. Instead of running a single expensive operation independently from Python, TensorFlow lets us describe a graph of interacting operations that run entirely outside Python. This approach is similar to that used in Theano or Torch.\nThe role of the Python code is therefore to build this external computation graph, and to dictate which parts of the graph should be run. See the Computation Graph section of Basic Usage for details.\nBuild a Softmax Regression Model\nIn this section we will build a softmax regression model with a single linear layer. In the next section, we will extend this to a softmax regression model with a multilayer convolutional network.\nPlaceholders\nWe start building the computation graph by creating nodes for the input images and target output classes.", "x = tf.placeholder(\"float\", shape=[None, 784])\ny_ = tf.placeholder(\"float\", shape=[None, 10])", "Here x and y_ aren't specific values. Rather, they are each a placeholder — a value that we'll input when we ask TensorFlow to run a computation.\nThe input images x will consist of a 2d tensor of floating point numbers. Here we assign it a shape of [None, 784], where 784 is the dimensionality of a single flattened MNIST image, and None indicates that the first dimension, corresponding to the batch size, can be of any size. The target output classes y_ will also consist of a 2d tensor, where each row is a one-hot 10-dimensional vector indicating which digit class the corresponding MNIST image belongs to.\nThe shape argument to placeholder is optional, but it allows TensorFlow to automatically catch bugs stemming from inconsistent tensor shapes.\nVariables\nWe now define the weights W and biases b for our model. We could imagine treating these like additional inputs, but TensorFlow has an even better way to handle them: Variable. A Variable is a value that lives in TensorFlow's computation graph. It can be used and even modified by the computation. In machine learning applications, one generally has the model parameters be Variables.", "W = tf.Variable(tf.zeros([784,10]))\nb = tf.Variable(tf.zeros([10]))", "We pass the initial value for each parameter in the call to tf.Variable. In this case, we initialize both W and b as tensors full of zeros. W is a 784x10 matrix (because we have 784 input features and 10 outputs) and b is a 10-dimensional vector (because we have 10 classes).\nBefore Variables can be used within a session, they must be initialized using that session. This step takes the initial values (in this case tensors full of zeros) and assigns them to each Variable; it can be done for all Variables at once.", "sess.run(tf.global_variables_initializer())", "Predicted Class and Loss Function\nWe can now implement our regression model. It only takes one line! We multiply the vectorized input images x by the weight matrix W and add the bias b.", "y = tf.matmul(x,W) + b", "We can specify a loss function that indicates how badly the model predicts an instance; we want to minimize it over the whole training run. Here, our loss function is the cross-entropy between the target class and the predicted class. As in the beginner tutorial, we use the stable formulation:", "cross_entropy = tf.reduce_mean(\n tf.nn.softmax_cross_entropy_with_logits(labels=y_, logits=y))", "Note that tf.nn.softmax_cross_entropy_with_logits internally applies the softmax on the model's unnormalized prediction and sums across all classes, and tf.reduce_mean takes the average over these sums.\nTrain the Model\nNow that we have defined our model and training loss function, it is straightforward to train using TensorFlow. Because TensorFlow knows the entire computation graph, it can use automatic differentiation to find the gradients of the loss with respect to each of the variables. TensorFlow has a variety of built-in optimization algorithms. For this example, we will use steepest gradient descent, with a step length of 0.5, to descend the cross entropy.", "train_step = tf.train.GradientDescentOptimizer(0.5).minimize(cross_entropy)", "What this one line actually does is add new operations to the computation graph, including ones to compute gradients, compute parameter update steps, and apply update steps to the parameters.\nThe returned operation train_step, when run, will apply the gradient descent updates to the parameters. Training the model can therefore be accomplished by repeatedly running train_step.", "for _ in range(1000):\n batch = mnist.train.next_batch(100)\n train_step.run(feed_dict={x: batch[0], y_: batch[1]})", "Each training iteration we load 100 training examples. We then run the train_step operation, using feed_dict to replace the placeholder tensors x and y_ with the training examples.\nNote that you can replace any tensor in your computation graph using feed_dict — it's not restricted to just placeholders.\nEvaluate the Model\nHow well did our model do?\nFirst we'll figure out where we predicted the correct label. tf.argmax is an extremely useful function which gives you the index of the highest entry in a tensor along some axis. Since the label vectors consist of 0s and a single 1, the index of the 1 is the class label. For example, tf.argmax(y,1) is the label our model thinks is most likely for each input, while tf.argmax(y_,1) is the true label. We can use tf.equal to check whether our prediction matches the truth (matching index positions mean a match).", "correct_prediction = tf.equal(tf.argmax(y,1), tf.argmax(y_,1))", "That gives us a list of booleans. To determine what fraction are correct, we cast the booleans to floating point numbers representing right/wrong and then take the mean. For example, [True, False, True, True] would become [1,0,1,1], whose mean is 0.75.", "accuracy = tf.reduce_mean(tf.cast(correct_prediction, \"float\"))", "Finally, we can evaluate our accuracy on the test data. This should be about 92%.", "print(accuracy.eval(feed_dict={x: mnist.test.images, y_: mnist.test.labels}))", "Build a Multilayer Convolutional Network\nGetting only about 92% accuracy on MNIST is quite bad. In this section we'll fix that with a slightly more complicated model: a convolutional neural network. This will take us to around 99.2% accuracy — not state of the art, but respectable.\nWeight Initialization\nTo create this model, we're going to need to create a lot of weights and biases. The weights in this model should be initialized with a small amount of noise for symmetry breaking, and to prevent 0 gradients. Since we're using [ReLU](https://en.wikipedia.org/wiki/Rectifier_(neural_networks) neurons, it is also good practice to initialize them with a slightly positive initial bias to avoid neurons whose output is permanently 0 (dead neurons). Instead of doing this repeatedly while we build the model, let's define two handy functions to do it for us.", "def weight_variable(shape):\n initial = tf.truncated_normal(shape, stddev=0.1)\n return tf.Variable(initial)\n\ndef bias_variable(shape):\n initial = tf.constant(0.1, shape=shape)\n return tf.Variable(initial)", "Convolution and Pooling\nTensorFlow gives us a lot of flexibility in convolution and pooling operations. How do we handle the boundaries? What is our stride size? In this example, we're always going to choose the vanilla version. Our convolutions use a stride of one and are zero padded so that the output is the same size as the input. Our pooling is plain old max pooling over 2x2 blocks. To keep our code cleaner, let's abstract those operations into functions.", "def conv2d(x, W):\n return tf.nn.conv2d(x, W, strides=[1, 1, 1, 1], padding='SAME')\n\ndef max_pool_2x2(x):\n return tf.nn.max_pool(x, ksize=[1, 2, 2, 1],\n strides=[1, 2, 2, 1], padding='SAME')", "First Convolutional Layer\nWe can now implement our first layer. It will consist of convolution, followed by max pooling. The convolution will compute 32 features for each 5x5 patch. Its weight tensor has a shape of [5, 5, 1, 32]: the first two dimensions are the patch size, the next is the number of input channels, and the last is the number of output channels. There is also a bias vector with a component for each output channel.", "W_conv1 = weight_variable([5, 5, 1, 32])\nb_conv1 = bias_variable([32])", "To apply the layer, we first reshape x to a 4d tensor, with the second and third dimensions corresponding to image width and height, and the final dimension corresponding to the number of color channels (1 here because these are grayscale images; for RGB color images it would be 3).", "x_image = tf.reshape(x, [-1,28,28,1])", "We then convolve x_image with the weight tensor, add the bias, apply the ReLU function, and finally max pool. The max_pool_2x2 method reduces the image to 14x14.", "h_conv1 = tf.nn.relu(conv2d(x_image, W_conv1) + b_conv1)\nh_pool1 = max_pool_2x2(h_conv1)", "Second Convolutional Layer\nIn order to build a deep network, we stack several layers of this type. The second layer will have 64 features for each 5x5 patch.", "W_conv2 = weight_variable([5, 5, 32, 64])\nb_conv2 = bias_variable([64])\n\nh_conv2 = tf.nn.relu(conv2d(h_pool1, W_conv2) + b_conv2)\nh_pool2 = max_pool_2x2(h_conv2)", "Densely Connected Layer\nNow that the image size has been reduced to 7x7, we add a fully-connected layer with 1024 neurons to allow processing on the entire image. We reshape the tensor from the pooling layer into a batch of vectors, multiply by a weight matrix, add a bias, and apply a ReLU.", "W_fc1 = weight_variable([7 * 7 * 64, 1024])\nb_fc1 = bias_variable([1024])\n\nh_pool2_flat = tf.reshape(h_pool2, [-1, 7*7*64])\nh_fc1 = tf.nn.relu(tf.matmul(h_pool2_flat, W_fc1) + b_fc1)", "Dropout\nTo reduce overfitting, we apply dropout before the output layer. We create a placeholder for the probability that a neuron's output is kept during dropout, which allows us to turn dropout on during training and turn it off during testing. TensorFlow's tf.nn.dropout op automatically handles scaling neuron outputs in addition to masking them, so dropout just works without any additional scaling.<sup>1</sup>\n1: In fact, for this small convolutional network, performance is almost identical with and without dropout. Dropout is often very effective at reducing overfitting, but it is most useful when training very large neural networks.", "keep_prob = tf.placeholder(\"float\")\nh_fc1_drop = tf.nn.dropout(h_fc1, keep_prob)", "Output Layer\nFinally, we add a softmax layer, just like for the one-layer softmax regression above.", "W_fc2 = weight_variable([1024, 10])\nb_fc2 = bias_variable([10])\n\ny_conv=tf.nn.softmax(tf.matmul(h_fc1_drop, W_fc2) + b_fc2)", "Train and Evaluate the Model\nHow well does this model do? To train and evaluate it, we use code that is nearly identical to that for the simple one-layer SoftMax network above.\nThe differences are:\nWe replace the steepest gradient descent optimizer with the more sophisticated ADAM (adaptive moment estimation) optimizer\nWe include the additional parameter keep_prob in feed_dict to control the dropout rate\nWe add logging to every 100th iteration of the training process \nFeel free to run this code, but be aware that it does 20,000 training iterations and may take a while (possibly more than half an hour).", "cross_entropy = -tf.reduce_sum(y_*tf.log(y_conv))\ntrain_step = tf.train.AdamOptimizer(1e-4).minimize(cross_entropy)\ncorrect_prediction = tf.equal(tf.argmax(y_conv,1), tf.argmax(y_,1))\naccuracy = tf.reduce_mean(tf.cast(correct_prediction, \"float\"))\nsess.run(tf.global_variables_initializer())\nfor i in range(20000):\n batch = mnist.train.next_batch(50)\n if i%100 == 0:\n train_accuracy = accuracy.eval(feed_dict={x:batch[0], y_: batch[1], keep_prob: 1.0})\n print(\"step %d, training accuracy %g\" % (i, train_accuracy))\n train_step.run(feed_dict={x: batch[0], y_: batch[1], keep_prob: 0.5})\n# print (\"test accuracy %g\" % accuracy.eval(feed_dict={x: mnist.test.images, y_: mnist.test.labels, keep_prob: 1.0}))\n# TensorFlow throws an OOM error if accuracy is evaluated on the whole test set\n# at once and memory is insufficient, so we average over 100 batches instead.\ncross_accuracy = 0\nfor i in range(100):\n testSet = mnist.test.next_batch(50)\n each_accuracy = accuracy.eval(feed_dict={ x: testSet[0], y_: testSet[1], keep_prob: 1.0})\n cross_accuracy += each_accuracy\n print(\"test %d accuracy %g\" % (i,each_accuracy))\nprint(\"test average accuracy %g\" % (cross_accuracy/100,))", "The final test set accuracy after running this code should be approximately 99.2%.\nSo far, we have learned how to quickly and easily build, train, and evaluate a moderately sophisticated deep learning model using TensorFlow." ]
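The tf.argmax / tf.equal / tf.cast accuracy computation above can be mirrored one-for-one in NumPy (toy logits and one-hot labels for illustration):

```python
import numpy as np

logits = np.array([[0.1, 0.9], [0.8, 0.2], [0.3, 0.7]])
labels = np.array([[0, 1], [1, 0], [1, 0]])   # one-hot targets

# Index of the largest entry along axis 1 is the predicted / true class.
correct_prediction = np.argmax(logits, axis=1) == np.argmax(labels, axis=1)

# Cast booleans to floats and average, as tf.reduce_mean(tf.cast(...)) does.
accuracy = correct_prediction.astype(float).mean()
```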
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
mauriciogtec/PropedeuticoDataScience2017
Alumnos/FranciscoBahena/Tarea2_Técnica.ipynb
mit
[ "SVD of images\n\nSVD of a grayscale image.", "import matplotlib.pyplot as plt\n%matplotlib inline\nfrom skimage import io\nfrom skimage.viewer import ImageViewer\nfrom numpy import linalg\nimport numpy as np\n\n\nmatrix = io.imread('face.png', as_grey=True)\nprint(matrix.shape)\n\n#### The ImageViewer class lets you view images\n#viewer = ImageViewer(img)\n#viewer.show()\n\nplt.imshow(matrix, cmap='gray')\nplt.show()", "Now we compute its SVD.", "matrix.shape\n\nu, s, vt = linalg.svd(matrix, full_matrices=False)\nprint(u.shape)\nS = np.diag(s)\nprint(S.shape)\nprint(vt.shape)\n\n\nS=np.diag(s)\nm_check = np.matmul(np.matmul(u,S),vt)\nprint(m_check.shape)", "Let m be the grayscale matrix corresponding to an image.\n\nThe columns of u are the eigenvectors of $m \\, m^{T}$\nThe rows of vt are the eigenvectors of $m^{T} m$\n$S$ is the diagonal matrix whose diagonal contains the singular values of $m$\n\nBelow we plot the reconstruction as a check.", "plt.imshow(m_check, cmap='gray')\nplt.show()", "Now we use the SVD to build a lower-rank approximation of m. Here k will be 10.", "k=10\n\n\nu_r = u[:,:k]\nS_r = S[:k,:k]\nvt_r = vt[:k,:]\n\nmatrix_k = np.matmul(u_r, np.matmul(S_r,vt_r))\nplt.imshow(matrix_k, cmap='gray')\nplt.show()", "This decomposition is fundamental when it comes to compressing images. The rank-k approximation matrix shown above is a compression of the original matrix. \n\nNow we turn to the application to the pseudoinverse and to systems of equations.", "# A is an arbitrary matrix.\nA = np.array([[1,2,4],[1,4,6],[2,1,6]])\nAu, As, Avt = linalg.svd(A, full_matrices=True)\nAS = np.diag(As)\n\n\nAinv = np.linalg.inv(A)\nAu_inv = np.linalg.inv(Au)\nAs_inv = np.linalg.inv(AS)\nAvt_inv = np.linalg.inv(Avt)\nprint(Au_inv,'\\n', As_inv,'\\n', Avt_inv)", "For Au and Avt, the transposes are the inverses. As for S, since it is diagonal, the reciprocals of its diagonal entries form the main diagonal of the inverse of S, as long as they are nonzero.", "AS = np.diag(np.array([1/x if x != 0 else 0 for x in As]))\nAS\n\n# Verification\nprint(np.dot(Au, Au_inv))\nprint(np.dot(As, As_inv))\nprint(np.dot(Avt, Avt_inv))", "This follows directly from the definition. If the SVD of A is $A = U\\Sigma V^{T}$,\nthe pseudoinverse of A, call it $A^{+}$, is $A^{+} = V\\Sigma^{-1}U^{T} $\nMultiplying in this order would be a mistake: $U^{T}\\Sigma^{-1}V$", "print(np.dot(Avt_inv,np.dot( As_inv,Au_inv)) )\n# Now everything will agree\nprint(np.linalg.pinv(A))\nprint(Ainv)", "Write a function that, given any matrix, returns its pseudoinverse using the SVD decomposition.", "def get_pseudoinverse(matrix):\n \n \"\"\"\n Get pseudo inverse of a matrix\n \"\"\"\n \n Au, As, Avt = linalg.svd(matrix, full_matrices=False)\n Au_inv = Au.transpose()\n As_inv = np.diag(np.array([1/x if x != 0 else 0 for x in As]))\n Avt_inv = Avt.transpose()\n \n pseudo_matrix = np.dot(Avt_inv,np.dot( As_inv, Au_inv))\n \n return pseudo_matrix\n\n# Verify\nA = np.array([[1,2,4],[1,4,6],[2,1,6]])\nAinv = get_pseudoinverse(A)\niden = np.dot(A,Ainv)\niden\n\ndef solve_system(coefficient_matrix, image_vector):\n \n pseudoinverse = get_pseudoinverse(coefficient_matrix)\n domain_vector = np.dot(pseudoinverse,image_vector)\n \n return domain_vector\n \n\n### Quality check \nB = np.array([[-10,9],[10,5]])\nb = np.array([-9,-5])\nprint(B,'\\n',b)\nx = solve_system(B,b)\nprint(x)\nnp.dot(B,x)\n\n###### It works!!!!!!!!", "Exercise 3", "A = np.array([[1,1],[0,0]])\nA", "The image of A is the set of vectors $ b \\left(\\begin{array}{c} 1 \\\\ 0 \\end{array} \\right)$ with $b \\in \\Bbb R$", "b = np.array([1,1])", "Clearly b is not in the image.\nThe nontrivial solution of the homogeneous system is $ x = \\left(\\begin{array}{c} a \\\\ -a \\end{array} \\right)$ with $a \\in \\Bbb R$", "solve_system(A,b)", "Through the pseudoinverse we find a solution, which is nothing other than the vector $b$ projected onto the space spanned by the matrix A.", "np.linalg.solve(A,b)", "The matrix A is singular, so by definition it has no inverse. Nevertheless, with\nthe pseudoinverse method we can still approximate a solution even when $b$ is not in the image of A or, equivalently, in the space spanned by $A$. The method we designed obtains the pseudoinverse and approximates a solution by projecting the vector $b$ onto the space spanned by the matrix A. \n\n\nIf we used the traditional Python or R routine to obtain the inverse, it would raise an exception of the form\nError: the matrix is singular.\n\n\nNow $ A = \\left( \\begin{array}{cc}\n 1 & 1 \\\\\n 0 & 1e-32 \\end{array} \\right)$", "A = np.array([[1,1], [0,1e-32]])\nA\n\nnp.linalg.det(A)", "The determinant is nonzero, so the inverse exists.", "b = np.array([2,0])\nc = np.array([1,1])\n\nsolution_pseudo = solve_system(A,b)\nsolution_inv = np.linalg.solve(A,b)\nprint(np.dot(A,solution_pseudo))\nprint(np.dot(A,solution_inv))\n\n\nsolution_pseudo_c = solve_system(A,c)\nsolution_inv_c = np.linalg.solve(A,c)\nprint(solution_pseudo_c)\nprint(solution_inv_c)\n#print(np.dot(A,solution_pseudo_c))\nprint(np.dot(A,solution_inv_c))", "The inverse method did find a solution; however, it is not the vector (1,1).", "print(solution_pseudo_c)\nprint(np.dot(A,solution_pseudo_c))", "The gradient descent method", "import pandas as pd\n\nbase =pd.read_csv('study_vs_sat.csv')\nbase.head(5)", "Statement of the minimization problem\nLet $ y $ be sat_score, which we want to explain through a linear relationship with $ x $, study_hours. \nWe then want to find $ \\alpha , \\beta $ such that for each observation we approximate $ y_i$ with $ \\hat{y}_i = \\alpha + \\beta x_i $\nThe error function we wish to minimize is: $ E = \\sum\\limits_{i=1}^n (y_i - \\hat{y}_i )^2 = \\sum\\limits_{i=1}^n (y_i - \\alpha - \\beta x_i )^2 $\nFind $ \\alpha $ and $ \\beta $ that minimize E. \nThe first-order conditions require taking the partial derivatives in the gradient vector and setting them equal to 0.\nGradient vector $$ \\nabla E(\\alpha,\\beta) = (\\frac{\\partial E}{\\partial \\alpha},\\frac{\\partial E}{\\partial \\beta}) $$\nFirst-Order Conditions\n$$ \\frac{\\partial E}{\\partial \\alpha} = -2 \\sum\\limits_{i=1}^n (y_i - \\alpha - \\beta x_i ) = 0 $$\n$$ \\frac{\\partial E}{\\partial \\beta} = -2 \\sum\\limits_{i=1}^n (y_i - \\alpha - \\beta x_i ) x_i = 0 $$\nThe following function takes the parameters alpha, beta, and a one-dimensional numpy array with the study hours, and returns a numpy array of predictions of the form $ \\hat{y}_i = \\alpha + \\beta x_i $", "def predict_score( alpha, beta, study_hours):\n \n predictions = np.array([alpha + beta * study_time for study_time in study_hours])\n \n return predictions\n ", "Define a two-column numpy array X, with ones in every entry of the first column and the study_hours variable in the second. 
Observen que X[alpha,beta] nos devuelve alpha + betastudy_hours_i en cada entrada y que entonces el problema se vuelve sat_score ~ X*[alpha,beta]", "base =pd.read_csv('study_vs_sat.csv')\nbase.head(5)\nstudy_hours = base.iloc[:,0]\nstudy_hours = study_hours.values\n\nalpha, beta = 2,3\n\nX_array = np.empty( shape=[20,2], dtype=int)\n\nfor i in range(20):\n \n X_array[i] = [1, study_hours[i]]\n\nX_array * [alpha, beta]", "Calculen la pseudoinversa X^+ de X y computen (X^+)*sat_score para obtener alpha y beta soluciones", "#La pseudoinversa de X es la siguiente \nX_pseudoinv = get_pseudoinverse(X_array)\nX_pseudoinv\n\nsat_score = base['sat_score'].values\nsolution_vector = np.dot(X_pseudoinv, sat_score)\n\nprint('Alfa óptima con pseudoinversa es :',solution_vector[0])\nprint('\\n'*1)\nprint('Beta óptima con pseudoinversa es :', solution_vector[1])", "Comparen la solución anterior con la de la fórmula directa de solución exacta (alpha,beta)=(X^tX)^(-1)X^t*sat_score\nResolvamos por partes, queremos \n$((X^{t}X))^{-1}X^{t}$ * study_hours\nSea $ J = ((X^{t}X))^{-1}$", "\nJ = np.linalg.inv(np.dot(X_array.transpose(), X_array))\nK = np.dot(J, X_array.transpose())\nsol_vector = np.dot(K, sat_score)\nprint('Alfa óptima con inversa tradicional es :',sol_vector[0])\nprint('\\n'*1)\nprint('Beta óptima con inversa tradicional es :', sol_vector[1])\nprint('La solución es la misma que con la pseudoinversa')" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
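The SVD manipulations in the notebook above (the rank-k approximation and the pseudoinverse assembled from $U$, $S$, $V^T$) can be sketched compactly in plain NumPy. This is a minimal sketch: a random test matrix stands in for the notebook's `face.png`, which is not available here.

```python
import numpy as np


def rank_k_approx(a, k):
    """Best rank-k approximation of a via the truncated SVD (Eckart-Young)."""
    u, s, vt = np.linalg.svd(a, full_matrices=False)
    return u[:, :k] @ np.diag(s[:k]) @ vt[:k, :]


def pseudoinverse_svd(a, tol=1e-12):
    """Moore-Penrose pseudoinverse A+ = V S+ U^T, inverting only nonzero singular values."""
    u, s, vt = np.linalg.svd(a, full_matrices=False)
    s_inv = np.array([1.0 / x if x > tol else 0.0 for x in s])
    return vt.T @ np.diag(s_inv) @ u.T


rng = np.random.default_rng(0)
m = rng.standard_normal((8, 6))

# Keeping all singular values reconstructs the matrix exactly.
assert np.allclose(rank_k_approx(m, 6), m)

# The SVD-based pseudoinverse matches NumPy's built-in pinv.
assert np.allclose(pseudoinverse_svd(m), np.linalg.pinv(m))
```

Truncating to fewer singular values is what the notebook exploits for image compression: the reconstruction error grows as k shrinks, but the storage cost drops.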
google/applied-machine-learning-intensive
content/05_deep_learning/04_transfer_learning/colab.ipynb
apache-2.0
[ "<a href=\"https://colab.research.google.com/github/google/applied-machine-learning-intensive/blob/master/content/05_deep_learning/04_transfer_learning/colab.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>\nCopyright 2020 Google LLC.", "# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.", "Transfer Learning\nIn the field of deep learning, transfer learning is defined as the conveyance of knowledge from one pretrained model to a new model. This simply means that transfer learning uses a pretrained model to train a new model. Typically the new model will have a more specific application than the pre-trained model.\nNote that this lab is largely based on an excellent transfer learning lab from TensorFlow.\nExploratory Data Analysis\nThe tensorflow_datasets package has a catalog of datasets that are easy to load into your TensorFlow environment for experimentation.\nIn this lab we'll work with the cats_vs_dogs dataset. This dataset contains thousands of images of cats and dogs. Looking at the documentation for the dataset, we can see there are 23,262 examples in the 'train' split of data. There are no test and validation splits.\nWe could just load this one split directly and then split the data once we download it. Another option is to tell tfds.load() to split the data for us. 
To do that we must specify the splits.\nThere is a specific notation we can use that tells the function how much of the data we want in each split. For instance 'train[:80%]' indicates that we want the first 80% of the train split in one tranche. 'train[80%:90%]' indicates that we want the next 10% of the data in another tranche, and so on. You can see this at work in our split example below.", "import tensorflow_datasets as tfds\n\n(raw_train, raw_validation, raw_test), metadata = tfds.load(\n 'cats_vs_dogs',\n split=['train[:80%]', 'train[80%:90%]', 'train[90%:]'],\n with_info=True,\n as_supervised=True,\n)", "The metadata returned from our dataset contains useful information about the data. For instance, it includes the number of classes:", "metadata.features['label'].num_classes", "And the class names:", "metadata.features['label'].names", "It even comes with some handy functions for converting between class names and numbers:", "print(metadata.features['label'].int2str(1))\nprint(metadata.features['label'].str2int('cat'))", "Let's store the int2str into a more conveniently named function for later use.", "get_class_name = metadata.features['label'].int2str\nget_class_name(0), get_class_name(1)", "Let's take a quick look at our dataset. First we'll peek at the shape of the data.", "raw_train", "(None, None, 3) lets us know that we have three channel images, but we aren't sure of the lengths and widths. They are likely different depending on the image. We also don't know how many images we have.\nLet's do some deeper analysis.\nIt turns out that you can iterate over a DatasetV1Adapter with a standard for loop. 
The items returned at each iteration are the image and the label.\nWe'll create a helper function to analyze a split of our data.", "import collections\n\n\ndef split_details(split):\n counts = collections.defaultdict(int)\n for image, label in split:\n counts[label.numpy()]+=1\n\n total = 0\n for cls, cnt in counts.items():\n print(f\"Class {get_class_name(cls)}: {cnt}\")\n total += cnt\n \n print(f\"Total: {total}\")\n\nfor s in (\n (\"Train\", raw_train),\n (\"Validation\", raw_validation),\n (\"Test\", raw_test)):\n print(s[0])\n split_details(s[1])\n print()", "We'll train on 18,610 examples, validating on 2,326, and performing our final testing on 2,326. Our classes look pretty evenly spread across all of the splits. The classes also seem to have a similar number of total examples.\nLet's now see what our images look like. We'll display one dog and one cat.", "import matplotlib.pyplot as plt\n\n\nfor cls in (0, 1):\n for image, label in raw_train:\n if label == cls:\n plt.figure()\n plt.imshow(image)\n break", "These are color images with noisy backgrounds. 
Also, the images aren't the same size, so we'll need to eventually resize them to feed our model.\nLet's find the range of color values and image sizes.", "import sys \n\nglobal_min = sys.maxsize \nglobal_max = -sys.maxsize-1\nsizes = collections.defaultdict(int)\n\nfor split in (raw_train, raw_validation, raw_test):\n for image, _ in split:\n local_max = image.numpy().max()\n local_min = image.numpy().min()\n sizes[image.numpy().shape] += 1\n\n if local_max > global_max:\n global_max = local_max\n \n if local_min < global_min:\n global_min = local_min\n\nprint(f\"Color values range from {global_min} to {global_max}\")\nresolutions = [x[0] for x in sorted(sizes.items(), key=lambda r: r[0])]\nprint(f\"There are {len(resolutions)} resolutions ranging from \",\n f\"{resolutions[0]} to {resolutions[-1]}\")\n", "It looks like we are dealing with color values from 0 through 255, which is pretty standard.\nWe have a huge number of different resolutions. There are over 6,000 different image sizes in this dataset, some as small as 4x4x3! It is difficult to imagine that an image that small would be meaningful. Let's see how many tiny images we are dealing with.", "for resolution in sorted(sizes.items(), key=lambda r: r[0])[:10]:\n print(resolution[0], ': ', resolution[1])", "There is only one truly tiny image. Let's take a look at it.", "shown = False\nfor split in (raw_train, raw_validation, raw_test):\n if shown:\n break\n for image, _ in split:\n if image.numpy().shape == (4, 4, 3):\n plt.figure()\n plt.imshow(image)\n shown = True\n break", "That's definitely bad data. 
Let's go ahead and sample some of the other small images.", "for split in (raw_train, raw_validation, raw_test):\n for image, _ in split:\n if image.numpy().shape[0] < 50 and image.numpy().shape[0] > 4:\n plt.figure()\n plt.imshow(image) ", "Though some are difficult to interpret, you can probably tell that each image contains either cats or dogs.\nIn order to not process the tiny image, we can write a filter function. We know the shape is (4, 4, 3), so we can filter for that exact shape. To make the filter a little more generic, we'll instead filter out any image that is shorter or narrower than 6 pixels.", "import tensorflow as tf\n\ndef filter_out_small(image, _):\n return tf.math.reduce_any(tf.shape(image)[0] > 5 and tf.shape(image)[1] > 5)\n\nfor s in (\n (\"Train\", raw_train.filter(filter_out_small)),\n (\"Validation\", raw_validation.filter(filter_out_small)),\n (\"Test\", raw_test.filter(filter_out_small))):\n print(s[0])\n split_details(s[1])\n print()", "It looks like our problematic image was a cat in the holdout test set.\nThe Pretrained Model\nTo build our cat/dog classifier, we'll use the learnings of a pre-trained model. Specifically MobileNetV2. We'll use tf.keras.applications.MobileNetV2 to load the data.\nModel-Specific Preprocessing\nResearching MobileNetV2, you'll find that the neural network by default takes an input of image of size (224, 224, 3). Though the model can be configured to take other inputs, all of our images are different sizes. So we might as well resize them to fit the default.", "IMG_SIZE = 224 \n\ndef resize_images(image, label):\n image = tf.image.resize(image, (IMG_SIZE, IMG_SIZE))\n return image, label\n\ntrain_resized = raw_train.map(resize_images)\nvalidation_resized = raw_validation.map(resize_images)\ntest_resized = raw_test.map(resize_images)", "We also need to normalize our data, but what should our input values be scaled to? Ideally our input data should look like the input data that the MobileNetV2 was trained on. 
Unfortunately, this isn't published.\nMobileNetV2 internally uses relu6, which limits activation outputs to the range of 0 through 6. This hints that we might want to normalize our values between [0, 1] or even [0, 6].\nIt also performs batch normalization throughout the network. This is the process of subtracting the mean and dividing by the standard deviation of each batch of data processed. So \"batch normalization\" is really \"batch standardization\".\nStandardizing our data by batch is possible. We could also calculate the mean and standard deviation of all of the data and standardize the entire dataset in one pass. Or we could approximate standardization and simply divide our input data by 127.5 (the midpoint of our [0, 255] range) and then subtract 1, which maps pixel values into the range [-1, 1].", "def standardize_images(image, label):\n image = tf.cast(image, tf.float32)\n image = (image/127.5) - 1\n image = tf.image.resize(image, (IMG_SIZE, IMG_SIZE))\n return image, label\n\ntrain_standardized = train_resized.map(standardize_images)\nvalidation_standardized = validation_resized.map(standardize_images)\ntest_standardized = test_resized.map(standardize_images)", "Did it work? Let's check it out.", "import sys \n\nglobal_min = sys.maxsize \nglobal_max = -sys.maxsize-1\nsizes = collections.defaultdict(int)\n\nfor split in (train_standardized, validation_standardized, test_standardized):\n for image, _ in split:\n local_max = image.numpy().max()\n local_min = image.numpy().min()\n sizes[image.numpy().shape] += 1\n\n if local_max > global_max:\n global_max = local_max\n \n if local_min < global_min:\n global_min = local_min\n\nprint(f\"Color values range from {global_min} to {global_max}\")\nresolutions = [x[0] for x in sorted(sizes.items(), key=lambda r: r[0])]\nprint(f\"There are {len(resolutions)} resolutions ranging from \",\n f\"{resolutions[0]} to {resolutions[-1]}\")\n", "Looks great! 
Now it is time to load our pretrained model.\nLoading MobileNetV2\nLoading MobileNetV2 is pretty straight-forward.\nWe need to pass in the input shape, which is (224, 224, 3) for each image.\nWe also include pre-trained weights based on ImageNet. This is where the transfer learning comes in. ImageNet has over a million images that map to a thousand labels. MobileNetV2 has been trained on ImageNet. We'll use those learnings and then add a few more layers of our own model to build a cat/dog classifier.\nThe final argument is include_top. Typically when building a classification model, toward the end of the model, high-dimensional layers are flattened down into two-dimensional tensors. This is considered the top of the model since diagrams often show the final layers at the top. For transfer learning we'll leave this dimensionality reduction off.\nIf you do include the top of the model, the following extra layers will be shown:\n```text \n\nglobal_average_pooling2d_1 (Glo (None, 1280) 0 out_relu[0][0] \n\npredictions (Dense) (None, 1000) 1281000 global_average_pooling2d_1[0][0] \n```", "IMG_SHAPE = (IMG_SIZE, IMG_SIZE, 3)\n\nmnv2 = tf.keras.applications.MobileNetV2(input_shape=IMG_SHAPE,\n weights='imagenet',\n include_top=False)\n\nmnv2.summary()", "It is often a good idea to \"freeze\" the trained model. This prevents its weights from being updated when we train our new model.\nIt is really only recommended to update the weights of the pretrained model when you are about to train on a large and similar dataset, as compared to the one that was originally trained on. This is not the case in our example. ImageNet has a thousand classes and over a million images. We have two classes and a few thousand images.", "mnv2.trainable = False", "Batching\nWe will want to train our model in batches. In our case we'll use a batch size of 32. 
You might want to experiment with other sizes.", "BATCH_SIZE = 32\nSHUFFLE_BUFFER_SIZE = 1000\n\ntrain_batches = train_standardized.shuffle(SHUFFLE_BUFFER_SIZE).batch(BATCH_SIZE)\nvalidation_batches = validation_standardized.batch(BATCH_SIZE)\ntest_batches = test_standardized.filter(filter_out_small).batch(BATCH_SIZE)", "You can see that we now have a well-defined input shape for each training batch.", "image_batch, label_batch = next(iter(train_batches.take(1)))\n\nimage_batch.shape", "If we apply our model to our first batch, you can see that we get a (32, 7, 7, 1280) block of features. These will be the input to our cat/dog model.", "feature_batch = mnv2(image_batch)\nprint(feature_batch.shape)", "Extending the Model\nNow we can perform the actual transfer learning. We'll build a new model that classifies images as containing dogs or cats. In order to do that, we can use a Sequential model with the pretrained model as the first layer.\nNote that the output layer of our pretrained model is:\ntext\nout_relu (ReLU) (None, 7, 7, 1280) 0 Conv_1_bn[0][0]\nSince the activation function is relu6, we know that the data that we'll be receiving is in the range of [0, 6]. We apply a pooling layer to reduce our inputs. In our output layer, we distill the inputs down to a single number that indicates if the image is of a cat or dog. We chose the sigmoid function, which will cause the output to be in a range of [0, 1]. This represents the confidence in an image being a dog, since dog is encoded as 1.", "model = tf.keras.Sequential([\n mnv2,\n tf.keras.layers.GlobalAveragePooling2D(),\n tf.keras.layers.Dense(1, activation='sigmoid'),\n])\n\nmodel.summary()", "We now compile the model, training for accuracy with binary cross entropy used to calculate loss.", "model.compile(\n loss=tf.keras.losses.BinaryCrossentropy(),\n metrics=['accuracy']\n)\n\nmodel.summary()", "Training will take a few minutes. 
Be sure to use GPU or it will take a really long time.", "history = model.fit(\n train_batches,\n epochs=10,\n validation_data=validation_batches\n)", "We got a training accuracy of over 99% and a validation accuracy close to 99%! Let's graph the accuracy and loss per epoch.", "acc = history.history['accuracy']\nval_acc = history.history['val_accuracy']\n\nloss = history.history['loss']\nval_loss = history.history['val_loss']\n\nplt.figure(figsize=(8, 8))\nplt.subplot(2, 1, 1)\nplt.plot(acc, label='Training Accuracy')\nplt.plot(val_acc, label='Validation Accuracy')\nplt.legend(loc='lower right')\nplt.ylabel('Accuracy')\nplt.ylim([min(plt.ylim()),1])\nplt.title('Training and Validation Accuracy')\n\nplt.subplot(2, 1, 2)\nplt.plot(loss, label='Training Loss')\nplt.plot(val_loss, label='Validation Loss')\nplt.legend(loc='upper right')\nplt.ylabel('Cross Entropy')\nplt.ylim([0,1.0])\nplt.title('Training and Validation Loss')\nplt.xlabel('epoch')", "The graph makes it look like we might be overfitting, but if you look at the range on the y-axis, we actually aren't doing too badly. We should, however, perform a final test to see if we can generalize well.", "model.evaluate(test_batches)", "We got an accuracy of just over 99%, which can give us some confidence that this model generalizes well.\nMaking Predictions\nWe can use the model to make predictions by using the predict() function.", "predictions = model.predict(test_batches)\n\npredictions.min(), predictions.max()", "Remember the predictions can range from 0.0 to 1.0. 
We can round them and cast them to integers to get class mappings.", "import numpy as np\n\npredictions = np.round(predictions.flatten(), 0).astype(int)\npredictions", "And we can now print the predicted class alongside the original image.", "print(get_class_name(predictions[0]))\n_ = plt.imshow(next(iter(raw_test.take(1)))[0].numpy())", "You can also make predictions by calling the model directly and passing it a single batch.", "predictions = model(next(iter(test_batches)))\npredictions = np.round(predictions, 0).astype(int).flatten()\n\nprint(get_class_name(predictions[0]))\n_ = plt.imshow(next(iter(raw_test.take(1)))[0].numpy())", "Exercises\nExercise 1: Food 101\nIn this exercise you'll build a classifier for the Food 101 dataset. The classifier will transfer learnings from DenseNet201.\nIn order to complete the exercise, you will need to:\n* Load the Food 101 dataset. Be sure to pay attention to the splits!\n* Perform exploratory data analysis on the dataset.\n* Ensure every class is represented in your train, test, and validation splits of the dataset.\n* Normalize or standardize your data in the way that the model was trained. You can find this information in the paper introducing the model.\n* Extend DenseNet201 with a new model, and have it classify the 101 food types. Note that one_hot and Dataset.map can help you manipulate the targets to make the model train faster.\n* Graph training accuracy and loss.\n* Calculate accuracy and loss for your holdout test set.*\n* Make predictions and print out one predicted label and original image.\n\n*Don't sweat too much about your model's performance. We were only able to get about 75% training accuracy (with obvious overfitting) in our naive model after 10 training epochs. This model is trying to classify 101 different things with messy images. Don't expect it to perform anywhere close to our binary model above.\n\nUse as many code and text cells as you need to complete this task. 
Explain your work.\nStudent Solution", "# Your Solution Goes Here", "" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
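The pixel standardization used for MobileNetV2 inputs in the notebook above (divide by 127.5, subtract 1) maps uint8 values into [-1, 1]. The same mapping can be checked in a NumPy-only sketch, with a small made-up image in place of the dataset:

```python
import numpy as np


def standardize(image_uint8):
    """Scale [0, 255] pixel values into [-1, 1], mirroring the notebook's standardize_images."""
    return image_uint8.astype(np.float32) / 127.5 - 1.0


img = np.array([[0, 64, 127], [128, 192, 255]], dtype=np.uint8)
out = standardize(img)

# The endpoints 0 and 255 map exactly to -1.0 and 1.0.
assert out.min() == -1.0 and out.max() == 1.0
assert out.shape == img.shape
```

The shift and scale are exact at the endpoints because 127.5 is the midpoint of the [0, 255] range.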
desihub/desisim
doc/nb/transient_simulation_reader.ipynb
bsd-3-clause
[ "Time Domain Spectral Simulations\nDemonstrate how to inspect simulated spectra produced using quicktransients.", "import numpy as np\n\nfrom astropy.io import fits\nfrom astropy.table import Table, Column\n\nfrom desispec.io.spectra import read_spectra\n\nimport matplotlib as mpl\nimport matplotlib.pyplot as plt\n\nmpl.rc('font', size=14)", "Input Files\nThe quicktransients program in the desisim transients branch will produce two FITS outputs:\n1. A truth file with information about the templates used for each object.\n2. A spect file with the templates \"observed\" under conditions specified by the user.\nThe spectra can then be coadded using the desi_coadd_spectra program available in desispec.", "truth_file = '../../bgs_2020-03-08_0300s_001_truth.fits'\nspect_file = '../../bgs_2020-03-08_0300s_001_spect.fits'\ncoadd_file = '../../bgs_2020-03-08_0300s_001_coadd.fits'", "Contents of the Truth File\nThe truth file has the following tables:\n1. A wavelength table called WAVE.\n1. A flux table called FLUX.\n1. A TARGETS table simulating a target list available in data.\n1. A simulation TRUTH table with information about the object (redshift, flux, etc.).\n1. A simulation OBJTRUTH table with line fluxes and other data generated for each object.", "hdus = fits.open(truth_file)\nhdus.info()\n\nwave = hdus['WAVE'].data\nflux = hdus['FLUX'].data\ntargets = Table.read(truth_file, 'TARGETS')\ntruth = Table.read(truth_file, 'TRUTH')\nobjtr = Table.read(truth_file, 'OBJTRUTH')\n\ntruth\n\nr = 22.5 - 2.5*np.log10(truth['FLUX_R'])\nz = truth['TRUEZ']\n\nfig, ax = plt.subplots(1,1, figsize=(6,4))\nax.scatter(z, r)\nax.set(xlabel='$z$', ylabel='$r$')\nfig.tight_layout();\n\ntargets\n\nobjtr", "Object Truth Data\nPlot transient data from the objtruth table:\n1. Distribution of \"epoch,\" time in days w.r.t. $t_0$.\n1. 
Distribution of transient/host flux ratio in $r$ band.", "fig, axes = plt.subplots(1,2, figsize=(9,4), sharey=True)\nax = axes[0]\nn, bins, patches = ax.hist(objtr['TRANSIENT_EPOCH'], bins=10, color='k', alpha=0.3)\nx = 0.5*(bins[1:] + bins[:-1])\ndx = 0.5*np.diff(bins)\nax.errorbar(x, n, xerr=dx, yerr=np.sqrt(n), fmt=',', color='k')\nax.set(xlabel='epoch [day]', ylabel='count')\n\nax = axes[1]\nn, bins, patches = ax.hist(objtr['TRANSIENT_RFLUXRATIO'], bins=10, color='k', alpha=0.3)\nx = 0.5*(bins[1:] + bins[:-1])\ndx = 0.5*np.diff(bins)\nax.errorbar(x, n, xerr=dx, yerr=np.sqrt(n), fmt=',', color='k')\nax.set(xlabel=r'$r_\\mathrm{trans}/r_\\mathrm{host}$')\n\nfig.tight_layout();", "Plot Templates\nDraw the first 10 templates generated from the mock catalog.", "for i in range(10):\n plt.plot(wave, flux[i])", "Observed Spectra\nThe spectra generated under observing conditions are stored in a single file. A FIBERMAP table provides a summary of target data for each object, and individual wavelength, flux, variance, resolution, and mask tables are present for each camera.\nThe data are best accessed using the read_spectra function from desispec, which packs everything into a single object.", "spectra = read_spectra(spect_file)\nhdus = fits.open(spect_file)\nhdus.info()\n\nspectra.fibermap", "Coadd Files\nThe coadds are generated using the desi_coadd_spectra program available in desispec. 
For example, to add data across the cameras run\ndesi_coadd_spectra -i bgs_2020-03-08_0150s_001_spect.fits -o bgs_2020-03-08_0150s_001_coadd.fits --coadd-cameras\nThe data can then be accessed using the read_spectra function.", "coadds = read_spectra(coadd_file)\nhdus = fits.open(coadd_file)\nhdus.info()\n\ncoadds.fibermap", "Spectral Scores\nPer-camera median coadded fluxes and SNRs are available in a scores table.\nThe code below computes a total SNR and adds it to the table.", "if 'MEDIAN_COADD_SNR' not in coadds.scores.columns:\n totsnr = None\n for cam in 'BRZ':\n camsnr = coadds.scores['MEDIAN_COADD_SNR_{}'.format(cam)]\n if totsnr is None:\n totsnr = camsnr**2\n else:\n totsnr += camsnr**2\n totsnr = np.sqrt(totsnr)\n coadds.scores.add_column(Column(totsnr, name='MEDIAN_COADD_SNR'))\n\ncoadds.scores", "Plot Results\nTo visualize the results, the templates, camera data, and coadds for the first 10 spectra in the input files listed at the top of this notebook are plotted.", "from scipy.signal import medfilt\n\nfig, axes = plt.subplots(10,3, figsize=(14,30), sharex=True)\nfor i in range(10):\n axes[i,0].plot(wave, flux[i], alpha=0.7, label='template')\n axes[i,0].legend()\n \n for _filt, _col in zip(spectra.bands, ['b', 'r', 'saddlebrown']):\n wl = spectra.wave[_filt]\n fl = spectra.flux[_filt][i]\n axes[i,1].plot(wl, fl, color=_col, alpha=0.7, label=_filt)\n axes[i,1].legend()\n \n axes[i,2].plot(coadds.wave['brz'], coadds.flux['brz'][i], alpha=0.7, label='coadd')\n axes[i,2].plot(coadds.wave['brz'], medfilt(coadds.flux['brz'][i], 149), color='yellow')\n axes[i,2].legend()\n\nfig.tight_layout();" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
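The total coadd SNR added to the scores table in the notebook above is a quadrature combination of the per-camera median SNRs. A standalone sketch of that computation, with made-up B/R/Z values for illustration:

```python
import numpy as np


def total_snr(per_camera_snrs):
    """Combine per-camera median SNRs in quadrature: sqrt(sum of squares)."""
    stacked = np.asarray(per_camera_snrs, dtype=float)
    return np.sqrt((stacked ** 2).sum(axis=0))


# Three cameras (B, R, Z), two spectra each.
snr_b = np.array([3.0, 0.0])
snr_r = np.array([4.0, 5.0])
snr_z = np.array([0.0, 12.0])

print(total_snr([snr_b, snr_r, snr_z]))  # [ 5. 13.]
```

Summing in quadrature is appropriate when the per-camera noise contributions are independent, which is the assumption behind the notebook's loop over the `MEDIAN_COADD_SNR_{B,R,Z}` columns.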
aparafita/news-similarity
notebooks/sgd.ipynb
gpl-3.0
[ "import os.path\nimport json\nfrom collections import defaultdict\n\nimport newsbreaker\nfrom newsbreaker.data import load_entries\n\nfrom pymongo import MongoClient\n\nimport autograd.numpy as np\nfrom autograd import grad\n\nimport pandas as pd\n%matplotlib inline\n\nfrom sklearn import cross_validation", "Initialization", "folder = os.path.join('..', 'data')\n\nnewsbreaker.init(os.path.join(folder, 'topic_model'), 'topic_model.pkl', 'vocab.txt')\n\nentries = load_entries(folder)\n\nentries_dict = defaultdict(list)\n\nfor entry in entries:\n entries_dict[entry.feed].append(entry)\n\nclient = MongoClient()\ndb = client.newstagger", "Algorithm", "def get_entry(s):\n feedname, index = s.split('|')\n try:\n index = int(index)\n except ValueError:\n raise KeyError('Malformed entry %s' % s)\n \n for feed, l in entries_dict.items():\n if feed.name == feedname:\n for entry in l:\n if entry.index == index:\n return entry\n else:\n break\n \n raise KeyError('Entry %s not found' % s)\n\ncoefs = []\nY = []\ntests = list(db.pairs.find())\n\nfor test in tests:\n base = get_entry(test['base'])\n e1 = get_entry(test['e1'])\n e2 = get_entry(test['e2'])\n \n coefs.append(\n [\n [\n base.what_distance(e1),\n base.who_distance(e1),\n base.where_distance(e1)\n ],\n [\n base.what_distance(e2),\n base.who_distance(e2),\n base.where_distance(e2)\n ]\n ]\n )\n \n Y.append(float(test['res']))", "Save coefs (X) and Y, along with tests (to know what each row refers to) to work with it later", "with open('X.txt', 'w') as f:\n f.write('\\n'.join(str(x) for x in coefs))\n\nwith open('Y.txt', 'w') as f:\n f.write('\\n'.join(str(x) for x in Y))\n\nimport json\n\nwith open('tests.json', 'w') as f:\n f.write(\n json.dumps(\n [\n { k: v for k, v in d.items() if k != '_id' }\n for d in tests\n ], \n indent=2\n )\n )", "What without NEs", "from collections import Counter\n\ndef what_without_ne(entry):\n entry.doc(tag=True, parse=False, entity=True)\n \n avoid_ent_cats = set(entry.who_ne_cats)\n 
avoid_ent_cats.update(entry.where_ne_cats)\n \n avoid_ents = [\n (ent.start, ent.end)\n for ent in entry.doc.ents\n if ent.label_ in avoid_ent_cats\n ]\n \n words = []\n doc_words = list(entry.doc)\n \n while doc_words and avoid_ents:\n i = doc_words[0].i\n \n low, high = avoid_ents[0]\n \n if i < low:\n words.append(doc_words.pop(0))\n elif low <= i and i < high:\n doc_words.pop(0) # but don't save it, since is part of NE\n else: # low < high <= i\n avoid_ents.pop(0) # delete ent, since we overpassed it\n \n words += doc_words # no more ents to filter with\n \n counter = Counter(\n word.lower_\n for word in words\n )\n \n entry._what = entry.topic_model.model.transform(\n np.array([ counter[word] for word in entry.topic_model.vocab ])\n )\n\nnot_ne_what_coefs = []\n\nfor test in tests:\n base = get_entry(test['base'])\n what_without_ne(base)\n \n e1 = get_entry(test['e1'])\n what_without_ne(e1)\n \n e2 = get_entry(test['e2'])\n what_without_ne(e2)\n \n not_ne_what_coefs.append(\n [\n base.what_distance(e1),\n base.what_distance(e2)\n ]\n )", "SGD", "with open('X.txt') as f:\n coefs = [eval(x) for x in f.read().split('\\n')]\n\nwith open('Y.txt') as f:\n Y = [float(x) for x in f.read().split('\\n')]\n\nwith open('tests.json') as f:\n tests = json.loads(f.read())\n\nX_copy = list(coefs); Y_copy = list(Y)\n\nX = np.array(\n [\n [\n v1[i] - v2[i]\n for i in range(3)\n ]\n \n for v1, v2 in coefs\n ]\n)\n\nY = np.array(Y)\n\nX_not_ne_what = np.array(\n [\n [\n not_ne_what_coefs[n][0] - not_ne_what_coefs[n][1],\n row[1], row[2]\n ]\n \n for n, row in enumerate(X)\n ]\n)\n\ndef sigmoid(x, gamma=1.):\n return 1.0 / (1.0 + np.exp(-gamma * x))\n\ndef cost(theta, X=None, Y=None): # theta is np.array \n return np.sum(\n (sigmoid(np.dot(X, np.abs(theta))) - Y) ** 2\n ) / len(X)\n\ngrad_cost = grad(cost)\n\nclass SGD:\n \n def __init__(self, learning=0.5, max_iters=10**5, prec=10**-3):\n self.learning = learning\n self.max_iters = max_iters\n self.prec = prec\n \n self.theta = 
None\n \n self._iters = None\n self._costs = None\n \n \n def get_params(self, deep=True):\n return {\n 'learning': self.learning,\n 'max_iters': self.max_iters,\n 'prec': self.prec\n }\n \n \n @property\n def iters(self):\n if self._iters is None:\n raise Exception('SGD must be fitted to access iters')\n \n return self._iters\n \n @iters.setter\n def iters(self, value): self._iters = value\n \n \n @property\n def costs(self):\n if self._costs is None:\n raise Exception('SGD must be fitted to access costs')\n \n return self._costs\n \n @costs.setter\n def costs(self, value): self._costs = value\n \n \n def fit(self, X, Y):\n self.iters = 0\n self.costs = []\n theta = np.random.random(3)\n \n while self.iters < self.max_iters: \n self.iters += 1\n self.costs.append(cost(theta, X=X, Y=Y))\n \n prev_theta = theta.copy()\n theta -= self.learning * grad_cost(theta, X=X, Y=Y)\n\n if np.linalg.norm(theta - prev_theta) < self.prec:\n break\n \n self.costs.append(cost(theta, X=X, Y=Y))\n self.theta = theta\n return self\n \n \n def score(self, X, Y):\n return sum(\n (not ((pred > 0.) ^ (cls > 0.))) if pred != 0. 
else 0.\n for pred, cls in zip(np.dot(X, self.theta), Y)\n ) / len(Y)\n\nclass WhatSGD(SGD):\n \n def fit(self, X, Y):\n self.theta = np.array([1., 0., 0.])\n return self", "Simple trial", "threshold = int(len(X) * 0.9)\nX_train, X_test = X[:threshold], X[threshold:]\nY_train, Y_test = Y[:threshold], Y[threshold:]\n\ntrained_sgd = SGD()\ntrained_sgd.fit(X_train, Y_train)\n\npd.Series(trained_sgd.costs).plot() # error on each iteration", "Run cross validation for each model", "X_not_what = X.copy()\n\nfor row in X_not_what:\n row[0] = 0\n\nsgd = SGD()\nwhat_sgd = WhatSGD()\nsgd_not_what = SGD()\n\nrows = []\n\nfor i in range(2, 20 + 1):\n rows.append(\n [\n cross_validation.cross_val_score(\n sgd, X, Y, cv=i\n ), \n cross_validation.cross_val_score(\n sgd, X_not_ne_what, Y, cv=i\n ), \n cross_validation.cross_val_score(\n what_sgd, X, Y, cv=i\n ),\n cross_validation.cross_val_score(\n what_sgd, X_not_ne_what, Y, cv=i\n )\n ]\n )\n \nfor n, i in enumerate(range(2, 20 + 1)):\n rows[n].append(\n cross_validation.cross_val_score(\n sgd_not_what, X_not_what, Y, cv=i\n )\n )\n\ndf = pd.DataFrame(\n [[s.mean() for s in row] for row in rows],\n columns=['sgd, what with NE', 'sgd, what without NE', 'what with NE', 'what without NE', 'sgd without what'],\n index=[100 - 100 // i for i in range(2, 20 + 1)]\n)\n\ndf.plot(ylim=(0, 1))\ndf.plot()\n\ndf.mean()\n\ndf[df.index > 75].mean()\n\ndf[df.index > 90].mean()", "Scores of each model", "scores = cross_validation.cross_val_score(\n sgd, X, Y, cv=10\n)\n\nscores, scores.mean()\n\nscores = cross_validation.cross_val_score(\n sgd, X_not_ne_what, Y, cv=10\n)\n\nscores, scores.mean()\n\nscores = cross_validation.cross_val_score(\n what_sgd, X, Y, cv=10\n)\n\nscores, scores.mean()\n\nscores = cross_validation.cross_val_score(\n what_sgd, X_not_ne_what, Y, cv=10\n)\n\nscores, scores.mean()\n\nsgd_not_what = SGD()\nscores = cross_validation.cross_val_score(\n sgd_not_what, X_not_what, Y, cv=10\n)\n\nscores, scores.mean()", "Final 
results", "sgd = SGD()\nsgd.fit(X, Y)\n\ncost(sgd.theta, X, Y), cost(sgd.theta / sgd.theta.sum(), X, Y)", "The one not normalised gets less error due to the fact that bigger absolute values make sigmoid closer to 1 or 0, thus reducing the error. Anyhow, the desired value is the one normalised (values must sum 1)", "sgd.theta, sgd.theta / sgd.theta.sum()\n\nsgd = SGD()\nsgd.fit(X_not_ne_what, Y)\n\nsgd.theta, sgd.theta / sgd.theta.sum()", "Notice how with not-ne-what What becomes slightly more important (maybe because it doesn't lose accuracy due to NE mistreatment?). Even if the resulting accuracy is the same but more stable, the other system makes What be more easily computed in terms of performance, so it would still be the selected approach, because the algorithm is slow enough already. In next steps, when the algorithm is optimised, working with not-ne-what would be a good idea.\nFinally, by checking what would be the values with just Who and Where:", "sgd_not_what = SGD()\nsgd_not_what.fit(X_not_what, Y)\n\n(sgd_not_what.theta - np.array([sgd_not_what.theta[0], 0., 0.])) / sgd_not_what.theta[1:].sum()", "Who is really the most important of the two, almost twice" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
Danghor/Formal-Languages
Exercises/Grammar2HTML-Antlr/Grammar-2-HTML.ipynb
gpl-2.0
[ "from IPython.core.display import HTML\nwith open (\"../../style.css\", \"r\") as file:\n css = file.read()\nHTML(css)", "Converting a Grammar into <span style=\"font-variant:small-caps;\">Html</span>\nYou should store the stored in the file Grammar.g4. This grammar should describe the lexical structure of the grammar for the language \nC that is contained in the file \n<a href=\"https://github.com/karlstroetmann/Formal-Languages/blob/master/Exercises/Grammar2HTML-Antlr/c-grammar.g\"><tt>c-grammar.g</tt></a>.\nYour grammar <b style=\"color:red\">must not</b> use the string rule as a variable name. The reason is that rule is a variable that is already used in the parser generated by \n<span style=\"font-variant:small-caps;\">Antlr</span>.\nYou grammar should generate an abstract syntax tree that conforms to the following type specification:\nGrammar: List&lt;Rule&gt;\nRule: Pair&lt;String, List&lt;Body&gt;&gt;\nBody: List&lt;Item&gt;\nItem: Pair&lt;'var', String&gt; + Pair&lt;'token', String&gt; + Pair&lt;'literal', String&gt;", "!cat Grammar.g4", "The file c-grammar.g contains a context-free grammar for the language C.", "!cat c-grammar.g", "Our goal is to convert this grammar into an <span style=\"font-variant:small-caps;\">Html</span> <a href=\"c-grammar.html\">file</a>.\nWe start by generating both scanner and parser.", "!antlr4 -Dlanguage=Python3 Grammar.g4\n\nfrom GrammarLexer import GrammarLexer\nfrom GrammarParser import GrammarParser\nimport antlr4", "The function grammar_2_string takes a list of grammar rules as its input and renders these rules as an <span style=\"font-variant:small-caps;\">Html</span> file.", "def grammar_2_string(grammar):\n result = ''\n result += '<html>\\n'\n result += '<head>\\n'\n result += '<title>Grammar</title>\\n'\n result += '</head>\\n'\n result += '<body>\\n'\n result += '<table>\\n'\n for rule in grammar:\n result += rule_2_string(rule)\n result += '</table>\\n'\n result += '</body>\\n'\n result += '</html>\\n' \n return 
result", "The function rule_2_string takes a grammar rule $r$ as its input and transforms this rule into an <span style=\"font-variant:small-caps;\">Html</span> \nstring. Here the grammar rule $r$ has the form\n$$ r = (V, L) $$\nwhere $V$ is the name of the variable defined by $r$ and $L$ is a list of <em style=\"color:blue\">grammar rule bodies</em>. A single grammar rule\nbody is a list of <em style=\"color:blue\">grammar items</em>. A grammar item is either a non-terminal, a token or a literal.", "def rule_2_string(rule):\n head, body = rule\n result = ''\n result += '<tr>\\n'\n result += '<td style=\"text-align:right\"><a name=\"' + head + '\"><em>' + head + '<em></a></td>\\n'\n result += '<td><code>:</code></td>\\n'\n result += '<td>' + body_2_string(body[0]) + '</td>'\n result += '</tr>\\n'\n for i in range(1, len(body)):\n result += '<tr><td></td><td><code>|</code></td><td>'\n result += body_2_string(body[i])\n result += '</td></tr>\\n'\n result += '<tr><td></td><td><code>;</code></td><tr>\\n\\n'\n return result", "The function body_2_string takes a list of grammar items as its inputs and turns them into an <span style=\"font-variant:small-caps;\">Html</span> string.", "def body_2_string(body):\n result = ''\n if len(body) > 0:\n for item in body:\n result += item_2_string(item) + ' '\n else:\n result += '<code>/* empty */</code>'\n return result", "The function item_2_string takes a grammar item as its inputs and turns the item into an <span style=\"font-variant:small-caps;\">Html</span> string.\nAn item represents either a non-terminal or a terminal. If it represents a non-terminal it has the form\n$$(\\texttt{'var'}, \\textrm{name}) $$\nwhere $\\textrm{name}$ is the name of the variable. 
Otherwise it has the form\n$$(\\textrm{kind}, \\textrm{name}), $$\nwhere $\\textrm{kind}$ is either token or literal.", "def item_2_string(item):\n kind, contend = item\n if kind == 'var':\n return '<a href=\"#' + contend + '\"><em>' + contend + '</em></a>'\n else:\n return '<code>' + contend + '</code>'\n\ndef main():\n input_stream = antlr4.FileStream('c-grammar.g')\n lexer = GrammarLexer(input_stream)\n token_stream = antlr4.CommonTokenStream(lexer)\n parser = GrammarParser(token_stream)\n grammar = parser.start()\n result = grammar_2_string(grammar.g)\n file = open('c-grammar.html', 'w')\n file.write(result)\n\nmain()\n\n!open c-grammar.html", "The command below cleans the directory. If you are running windows, you have to replace rmwith del.", "!rm *.py *.tokens *.interp" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
PrACiDa/intro_ciencia_de_datos
02_distribuciones_de_probabilidad.ipynb
gpl-3.0
[ "%matplotlib inline\nimport numpy as np\nfrom scipy import stats\nimport matplotlib.pyplot as plt\nimport seaborn as sns\nplt.style.use('seaborn-darkgrid')", "Probabilidades\nLa matemática es la lógica de la certeza mientras que la probabilidad es la lógica de la incerteza, dice Joseph K. Blitzstein condensando el pensamiento de cientos de personas antes que el. Entender como pensar en presencia de incertezas es central en Ciencia de Datos. Esta incerteza proviene de diversas fuentes, incluyendo datos incompletos, errores de medición, límites de los diseños experimentales, dificultad de observar ciertos eventos, aproximaciones, etc.\nEn este capítulo veremos una introducción breve a algunos conceptos centrales en probabilidad que nos dará el lenguaje para comprender mejor los fundamentos de varios métodos y procedimientos que veremos más adelante, para quienes tengan interés en profundizar en el tema recomiendo leer el libro Introduction to Probability de Joseph K. Blitzstein y Jessica Hwang.\nEmpecemos por el concepto de probabilidad, existen al menos tres grandes definiciones de probabilidad:\n\n\nDecimos que una moneda tiene probabilidad 0,5 (o 50%) de caer cara, por que asumimos que ninguno de los dos eventos, {cara, ceca}, tiene preferencia sobre el otro. Es decir, pensamos que ambos eventos son equi-probables. Esto se conoce como definición clásica o naíf. Es la misma que usamos para decir que la probabilidad de obtener 3 al arrojar un dado es de $\\frac{1}{6}$, o que la probabilidad de tener una hija es de 0,5. Esta definición se lleva a las patadas con preguntas como ¿Cuál es la probabilidad de existencia de vida en Marte?, claramente 0,5 es una sobreestimación, ya que el evento vida y el evento no-vida no son igualmente probables.\n\n\nOtra forma de ver a una probabilidad es bajo el prisma frecuentista. 
En esta concepción de probabilidad, en vez de asumir que los eventos son igualmente probables, diseñamos un experimento (en el sentido amplio de la palabra) y contamos cuantas veces observamos el evento que nos interesa $x$ respecto del total de intentos $n$. Entonces podemos aproximar la probabilidad mediante la frecuencia relativa $\\frac{n_x}{n}$, según este procedimiento la probabilidad de obtener 3 al arrojar un dado no es necesariamente de $\\frac{1}{6}$ si no que bien podría ser $\\frac{1}{3}$. Esta noción de probabilidad se suele asociar con la idea de la existencia de un número correcto al que nos aproximamos a medida que aumentan los intentos $n$. Por lo tanto, podemos definir formalmente probabilidad como:\n\n\n$$p(x) = \\lim_{n \\rightarrow \\infty} \\frac{n_x}{n}$$\nLa definición frecuentista de probabilidad tiene el inconveniente de no ser muy útil para pensar en problemas que ocurren una sola vez. Por ejemplo, ¿Cuál es la probabilidad que mañana llueva? Estrictamente solo hay un mañana y o bien lloverá o bien no. Los frecuentistas suelen evadir este problema recurriendo a experimentos imaginarios. En ese caso podríamos intentar estimar la probabilidad de lluvia para mañana imaginando que hay una cantidad muy grande de mañanas y luego contando en cuantos de esos mañanas llueve y en cuantos no. Esta ficción científica es perfectamente válida y muy útil.\n\nLa tercer forma de pensar una probabilidad se refiere a cuantificar la incertidumbre que tenemos sobre la posibilidad que un evento suceda. Si el evento es imposible entonces la probabilidad de ese evento será exactamente 0, si en cambio el evento sucede siempre entonces la probabilidad de ese evento será de 1. Todos los valores intermedios reflejan grados de certeza/incerteza. Desde este punto de vista es natural preguntarse cual es la probabilidad que la masa de Saturno sea $x$ kg, o hablar sobre la probabilidad de lluvia durante el 25 de Mayo de 1810, o la probabilidad de que mañana amanezca. 
Esta tercera interpretación del concepto de probabilidad es llamada Bayesiana y se puede pensar como una versión que incluye, como casos especiales, a las definiciones frecuentista y clásica.\n\nIndependientemente de la interpretación del concepto de probabilidad la teoría de probabilidades nos ofrece un marco único, coherente y riguroso para trabajar con probabilidades.\nProbabilidades y conjuntos\nEl marco matemático para trabajar con las probabilidades se construye alrededor de los conjuntos matemáticos. \nEl espacio muestral $\mathcal{X}$ es el conjunto de todos los posibles resultados de un experimento. Un evento $A$ es un subconjunto de $\mathcal{X}$. Decimos que $A$ ha ocurrido si al realizar un experimento obtenemos como resultado $A$. Si tuviéramos un típico dado de 6 caras tendríamos que:\n$$\mathcal{X} = \{1, 2, 3, 4, 5, 6\}$$\nPodemos definir al evento $A$ como:\n$$A = \{2\}$$\nSi queremos indicar la probabilidad de que $A$ ocurra escribimos $P(A=2)$, es común usar una forma abreviada, simplemente $P(A)$. Recordemos que esta probabilidad no tiene por qué ser $\frac{1}{6}$. Es importante notar que podríamos haber definido al evento $A$ usando más de un elemento de $\mathcal{X}$, por ejemplo cualquier número impar (siempre dentro de $\mathcal{X}$) $A = \{1, 3, 5\}$, o tal vez $A = \{4,5,6\}$, todo dependerá del problema que tengamos interés en resolver.\nEntonces, tenemos que los eventos son subconjuntos de un espacio muestral definido y las probabilidades son números asociados a la posibilidad que esos eventos ocurran, ya sea que esa \"posibilidad\" la definamos:\n\na partir de asumir todos los eventos equiprobables\ncomo la fracción de eventos favorables respecto del total de eventos\ncomo el grado de certeza de obtener tal evento\n\nAxiomas de Kolmogorov\nUna aproximación a la formalización del concepto de probabilidad son los axiomas de Kolmogorov. Esta no es la única vía, una alternativa es el teorema de Cox que suele ser preferido por quienes suscriben a la definición Bayesiana de probabilidad. Nosotros veremos los axiomas de Kolmogorov por ser los más comúnmente empleados, pero es importante aclarar que ambas aproximaciones conducen, esencialmente, al mismo marco probabilístico.\n\n\nLa probabilidad de un evento es un número real mayor o igual a cero\n $$P(A)\in \mathbb {R} ,P(A)\geq 0\qquad \forall A \in \mathcal{X}$$\n\n\nLa probabilidad de que algo ocurra es 1, queda implícito que todo lo que puede suceder está contenido en $\mathcal{X}$\n $$P(\mathcal{X}) = 1$$\n\n\nSi los eventos $A_1, A_2, ..., A_j$ son mutuamente excluyentes entonces\n\n\n$$P(A_1 \cup A_2 \cup \cdots \cup A_j) = P\left(\bigcup_{i=1}^{j} A_{i}\right) = \sum_{i=1}^{j}P(A_{i})$$\nSi obtengo un 1 en un dado no puedo obtener simultáneamente otro número, por lo tanto la probabilidad de obtener, por ej., 1, 3 o 6, es igual a $P(1) + P(3) + P(6)$\nDe estos tres axiomas se desprende que las probabilidades están restringidas al intervalo [0, 1], es decir números que van entre 0 y 1 (incluyendo ambos extremos).\nProbabilidad condicional\nDados dos eventos $A$ y $B$ siendo $P(B) > 0$, la probabilidad de $A$ dado $B$, que se simboliza como $P(A \mid B)$, es definida como:\n$$P(A \mid B) = \frac{P(A, B)}{P(B)}$$ \n$P(A, B)$ es la probabilidad que ocurran los eventos $A$ y $B$, también se suele escribir como $P(A \cap B)$ (el símbolo $\cap$ indica intersección de conjuntos), la probabilidad de la intersección de los eventos $A$ y $B$.\n$P(A \mid B)$ es lo que se conoce como probabilidad condicional, y es la probabilidad de que ocurra el evento $A$ condicionada por el hecho de que sabemos que $B$ ha ocurrido. Por ejemplo la probabilidad que una vereda esté mojada es diferente de la probabilidad que tal vereda esté mojada dado que está lloviendo. \nUna probabilidad condicional se puede visualizar como la reducción del espacio muestral. Para ver esto de forma más clara vamos a usar una figura adaptada del libro Introduction to Probability de Joseph K. Blitzstein & Jessica Hwang. En ella se puede ver cómo pasamos de tener los eventos $A$ y $B$ en el espacio muestral $\mathcal{X}$, en el primer cuadro, a tener $P(A \mid B)$ en el último cuadro donde el espacio muestral se redujo de $\mathcal{X}$ a $B$. \n<center>\n<img src='imagenes/cond.png' width=500 >\n</center>\nEl concepto de probabilidad condicional está en el corazón de la estadística y es central para pensar en cómo debemos actualizar el conocimiento que tenemos de un evento a la luz de nuevos datos, veremos más sobre esto en el curso \"Análisis Bayesiano de datos\" y en \"Aprendizaje automático y minería de datos\". Por ahora dejamos este tema con la siguiente aclaración. Todas las probabilidades son condicionales (respecto de algún supuesto o modelo) aún cuando no lo expresemos explícitamente, no existen probabilidades sin contexto.\nVariables aleatorias discretas y distribuciones de probabilidad\nUna variable aleatoria es una función que asocia números reales $\mathbb{R}$ con un espacio muestral. Podríamos definir entonces una variable aleatoria $C$ cuyo espacio muestral es $\{rojo, verde, azul\}$. Si los eventos de interés fuesen rojo, verde, azul, entonces podríamos codificarlos de la siguiente forma:\nC(rojo) = 0, C(verde)=1, C(azul)=2\nEsta codificación es útil ya que en general es más fácil operar con números que con strings, ya sea que las operaciones las hagamos manualmente o con una computadora.\nUna variable es aleatoria en el sentido de que en cada experimento es posible obtener un evento distinto sin que la sucesión de eventos siga un patrón determinista. Por ejemplo si preguntamos cuál es el valor de $C$ tres veces seguidas podríamos obtener, rojo, rojo, azul o quizá azul, verde, azul, etc. Es importante destacar que la variable NO puede tomar cualquier valor posible, en nuestro ejemplo solo son posibles 3 valores.\nOtra confusión muy común es creer que aleatorio implica que todos los eventos tienen igual probabilidad. Pero esto no es cierto, bien podría darse el siguiente ejemplo:\n$$P(C=rojo) = \frac{1}{2}, P(C=verde) = \frac{1}{4}, P(C=azul) = \frac{1}{4}$$\nLa equiprobabilidad de los eventos es solo un caso especial.\nPrácticamente la totalidad de los problemas de interés requiere lidiar con solo dos tipos de variables aleatorias: \n\nDiscretas\nContinuas \n\nUna variable aleatoria discreta es una variable que puede tomar valores discretos, los cuales forman un conjunto finito (o infinito numerable). En nuestro ejemplo $C$ es discreta ya que solo puede tomar 3 valores, sin posibilidad de valores intermedios entre ellos, ¡no es posible obtener el valor verde-rojizo!\nSi en vez de \"rótulos\" hubiéramos usado el espectro continuo de longitudes de onda visibles otro sería el caso, ya que podríamos haber definido a $C=\{400 \text{ nm} ... 750\text{ nm}\}$ y en este caso no hay dudas que sería posible obtener un valor a mitad de camino entre rojo ($\approx 700 \text{ nm}$) y verde ($\approx 530 \text{ nm}$), de hecho podemos encontrar infinitos valores entre ellos. Este sería el ejemplo de una variable aleatoria continua.\nUna variable aleatoria tiene una lista asociada con la probabilidad de cada evento. El nombre formal de esta lista es distribución de probabilidad, en el caso particular de variables aleatorias discretas se le suele llamar también función de masa de probabilidad (o pmf por su sigla en inglés). Es importante destacar que la $pmf$ es una función que devuelve probabilidades, por lo tanto siempre obtendremos valores comprendidos entre [0, 1] y cuya suma total (sobre todos los eventos) dará 1.\nEn principio nada impide que uno defina su propia distribución de probabilidad. 
Pero existen algunas distribuciones de probabilidad tan comúnmente usadas que tienen nombre \"propio\" por lo que conviene saber que existen. El siguiente listado no es exhaustivo ni tiene como propósito que memoricen las distribuciones y sus propiedades, solo que ganen cierta familiaridad con las mismas. Si en el futuro necesitan utilizar alguna $pmf$ pueden volver a esta notebook (o pueden revisar Wikipedia!!!)\nEn las siguientes gráficas la altura de las barras azules indican la probabilidad de cada valor de $x$. Se indican, además, la media ($\\mu$) y desviación estándar ($\\sigma$) de las distribuciones, es importante destacar que estos valores NO son calculados a partir de datos si no que son los valores exactos (calculados analíticamente) que le corresponden a cada distribución.\nDistribución uniforme discreta\nEs una distribución que asigna igual probabilidad a un conjunto finitos de valores, su $pmf$ es:\n$$p(k \\mid a, b)={\\frac {1}{b - a + 1}}$$\nPara valores de $k$ en el intervalo [a, b], fuera de este intervalo $p(k) = 0$\nPodemos usar esta distribución para modelar, por ejemplo un dado no cargado.", "distri = stats.randint(1, 7) # límite inferior, límite superior + 1\nx = np.arange(0, 8)\nx_pmf = distri.pmf(x) # la pmf evaluada para todos los \"x\"\nmedia, varianza = distri.stats(moments='mv')\nplt.vlines(x, 0, x_pmf, colors='C0', lw=5,\n label='$\\mu$ = {:3.1f}\\n$\\sigma$ = {:3.1f}'.format(float(media),\n float(varianza)**0.5))\nplt.xlabel('x')\nplt.ylabel('$p(x)$')\nplt.legend(frameon=True);", "Distribución binomial\nEs la distribución de probabilidad discreta que cuenta el número de éxitos en una secuencia de $n$ ensayos de Bernoulli (experimentos si/no) independientes entre sí, con una probabilidad fija $p$ de ocurrencia del éxito entre los ensayos.\nCuando $n=1$ esta distribución se reduce a la distribución de Bernoulli.\n$$p(x \\mid n,p) = \\frac{n!}{x!(n-x)!}p^x(1-p)^{n-x}$$\nEl término $p^x(1-p)^{n-x}$ indica la probabilidad de obtener $x$ 
éxitos en $n$ intentos. Este término solo tiene en cuenta el número total de éxitos obtenidos pero no la secuencia en la que aparecieron. El primer término, conocido como coeficiente binomial, calcula todas las posibles combinaciones de $n$ en $x$, es decir el número de subconjuntos de $x$ elementos escogidos de un conjunto con $n$ elementos.", "n = 4 # número de intentos\np = 0.5 # probabilidad de \"éxitos\"\ndistri = stats.binom(n, p) \nx = np.arange(0, n + 1)\nx_pmf = distri.pmf(x) # la pmf evaluada para todos los x\nmedia, varianza = distri.stats(moments='mv')\nplt.vlines(x, 0, x_pmf, colors='C0', lw=5,\n label='$\mu$ = {:3.1f}\n$\sigma$ = {:3.1f}'.format(float(media),\n float(varianza**0.5)))\nplt.xlabel('x')\nplt.ylabel('$p(x)$')\nplt.legend(frameon=True);", "Distribución de Poisson\nEs una distribución de probabilidad discreta que expresa la probabilidad que $x$ eventos sucedan en un intervalo fijo de tiempo (o espacio o volumen) cuando estos eventos suceden con una tasa promedio $\mu$ y de forma independiente entre sí. Se la utiliza para modelar eventos con probabilidades pequeñas (sucesos raros) como accidentes de tráfico o decaimiento radiactivo.\n$$\np(x \mid \mu) = \frac{\mu^{x} e^{-\mu}}{x!}\n$$\nTanto la media como la varianza de esta distribución están dadas por $\mu$. \nA medida que $\mu$ aumenta la distribución de Poisson se aproxima a una distribución Gaussiana (aunque sigue siendo discreta). La distribución de Poisson tiene estrecha relación con otra distribución de probabilidad, la binomial. Una distribución binomial puede ser aproximada con una distribución de Poisson cuando $n \gg p$, es decir, cuando la cantidad de \"éxitos\" ($p$) es baja respecto de la cantidad de \"intentos\" ($n$), entonces $Poisson(np) \approx Binom(n, p)$. Por esta razón la distribución de Poisson también se conoce como \"ley de los pequeños números\" o \"ley de los eventos raros\". Ojo que esto no implica que $\mu$ deba ser pequeña, quien es pequeño/raro es $p$ respecto de $n$.", "distri = stats.poisson(2.3) # ocurrencia media del evento\nx = np.arange(0, 10)\nx_pmf = distri.pmf(x) # la pmf evaluada para todos los x\nmedia, varianza = distri.stats(moments='mv')\nplt.vlines(x, 0, x_pmf, colors='C0', lw=5,\n label='$\mu$ = {:3.1f}\n$\sigma$ = {:3.1f}'.format(float(media),\n float(varianza**0.5)))\nplt.xlabel('x')\nplt.ylabel('$p(x)$')\nplt.legend(frameon=True);", "Variables aleatorias y distribuciones de probabilidad continuas\nHasta ahora hemos visto variables aleatorias discretas y distribuciones de masa de probabilidad. Existe otro tipo de variable aleatoria que es muy usado y son las llamadas variables aleatorias continuas, ya que toman valores en $\mathbb{R}$.\nLa diferencia más importante entre variables aleatorias discretas y continuas es que para las continuas $P(X=x) = 0$, es decir, la probabilidad de cualquier valor es exactamente 0.\nEn las gráficas anteriores, para variables discretas, es la altura de las líneas lo que define la probabilidad de cada evento. Si sumamos las alturas siempre obtenemos 1, es decir la suma total de las probabilidades. En una distribución continua no tenemos líneas sino que tenemos una curva continua, la altura de esa curva es la densidad de probabilidad. Si queremos averiguar cuánto más probable es el valor $x_1$ respecto de $x_2$ basta calcular:\n$$\frac{pdf(x_1)}{pdf(x_2)}$$\nDonde $pdf$ es la función de densidad de probabilidad (por su sigla en inglés). Y es análoga a la $pmf$ que vimos para variables discretas. Una diferencia importante es que la $pdf(x)$ puede ser mayor a 1. 
Para obtener una probabilidad a partir de una pdf debemos integrar en un intervalo dado, ya que es el área bajo la curva y no la altura lo que nos da la probabilidad, es decir es esta integral la que debe dar 1.\n$$P(a \\lt X \\lt b) = \\int_a^b pdf(x) dx$$\nEn muchos textos es común usar $p$ para referirse a la probabilidad de un evento en particular o a la $pmf$ o a la $pdf$, esperando que la diferencia se entienda por contexto.\nA continuación veremos varias distribuciones continuas. La curva azul representa la $pdf$, mientras que el histograma (en naranja) representan muestras tomadas a partir de cada distribución. Al igual que con los ejemplos anteriores de distribuciones discretas. Se indican la media ($\\mu$) y desviación estándar ($\\sigma$) de las distribuciones, también en este caso recalcamos que estos valores NO son calculados a partir de datos si no que son los valores exactos (calculados analíticamente) que le corresponden a cada distribución.\nDistribución uniforme\nAún siendo simple, la distribución uniforme es muy usada en estadística, por ej para representar nuestra ignorancia sobre el valor que pueda tomar un parámetro. La distribución uniforme tiene entropía cero (todos los estados son igualmente probables).\n$$\np(x \\mid a,b)=\\begin{cases} \\frac{1}{b-a} & para\\ a \\le x \\le b \\ 0 & \\text{para el resto} \\end{cases}\n$$", "distri = stats.uniform(0, 1) # distribución uniforme entre a=0 y b=1\nx = np.linspace(-0.5, 1.5, 200)\nx_rvs = distri.rvs(500) # muestrear 500 valores de la distribución\nx_pdf = distri.pdf(x) # la pdf evaluada para todos los x\nmedia, varianza = distri.stats(moments='mv')\nplt.plot (x, x_pdf, lw=5,\n label='$\\mu$ = {:3.1f}\\n$\\sigma$ = {:3.1f}'.format(float(media),\n float(varianza)**0.5))\nplt.hist(x_rvs, density=True)\nplt.xlabel('x')\nplt.ylabel('$p(x)$')\nplt.legend(frameon=True);", "Distribución Gaussiana (o normal)\nEs quizá la distribución más conocida. 
Por un lado por que muchos fenómenos pueden ser descriptos (aproximadamente) usando esta distribución. Por otro lado por que posee ciertas propiedades matemáticas que facilitan trabajar con ella de forma analítica. Es por ello que muchos de los resultados de la estadística frecuentista se basan en asumir una distribución Gaussiana.\nLa distribución Gaussiana queda definida por dos parámetros, la media $\\mu$ y la desviación estándar $\\sigma$. Una distribución Gaussiana con $\\mu = 0$ y $\\sigma = 1$ es conocida como la distribución Gaussiana estándar.\n$$\np(x \\mid \\mu,\\sigma) = \\frac{1}{\\sigma \\sqrt{ 2 \\pi}} e^{ - \\frac{ (x - \\mu)^2 } {2 \\sigma^2}}\n$$", "distri = stats.norm(loc=0, scale=1) # media cero y desviación standard 1\nx = np.linspace(-4, 4, 100)\nx_rvs = distri.rvs(500) # muestrear 500 valores de la distribución\nx_pdf = distri.pdf(x) # la pdf evaluada para todos los x\nmedia, varianza = distri.stats(moments='mv')\nplt.plot (x, x_pdf, lw=5,\n label='$\\mu$ = {:3.1f}\\n$\\sigma$ = {:3.1f}'.format(float(media),\n float(varianza)**0.5))\nplt.hist(x_rvs, density=True)\nplt.xlabel('x')\nplt.ylabel('$p(x)$')\nplt.legend(frameon=True);", "Distribución t de Student\nHistóricamente esta distribución surgió para estimar la media de una población normalmente distribuida cuando el tamaño de la muestra es pequeño. En estadística Bayesiana su uso más frecuente es el de generar modelos robustos a datos aberrantes.\n$$p(x \\mid \\nu,\\mu,\\sigma) = \\frac{\\Gamma(\\frac{\\nu + 1}{2})}{\\Gamma(\\frac{\\nu}{2})\\sqrt{\\pi\\nu}\\sigma} \\left(1+\\frac{1}{\\nu}\\left(\\frac{x-\\mu}{\\sigma}\\right)^2\\right)^{-\\frac{\\nu+1}{2}}\n$$\ndonde $\\Gamma$ es la función gamma y donde $\\nu$ es un parámetro llamado grados de libertad en la mayoría de los textos aunque también se le dice grado de normalidad, ya que a medida que $\\nu$ aumenta la distribución se aproxima a una Gaussiana. 
En el caso extremo de $\\lim_{\\nu\\to\\infty}$ la distribución es exactamente igual a una Gaussiana.\nEn el otro extremo, cuando $\\nu=1$, (aunque en realidad $\\nu$ puede tomar valores por debajo de 1) estamos frente a una distribución de Cauchy. Es similar a una Gaussiana pero las colas decrecen muy lentamente, eso provoca que en teoría esta distribución no poseen una media o varianza definidas. Es decir, es posible calcular a partir de un conjunto de datos una media, pero si los datos provienen de una distribución de Cauchy, la dispersión alrededor de la media será alta y esta dispersión no disminuirá a medida que aumente el tamaño de la muestra. La razón de este comportamiento extraño es que en distribuciones como la Cauchy están dominadas por lo que sucede en las colas de la distribución, contrario a lo que sucede por ejemplo con la distribución Gaussiana.\nPara esta distribución $\\sigma$ no es la desviación estándar, que como ya se dijo podría estar indefinida, $\\sigma$ es la escala. A medida que $\\nu$ aumenta la escala converge a la desviación estándar de una distribución Gaussiana.", "distri = stats.t(loc=0, scale=2, df=4) # media 0, escala 2, grados de libertad 4\nx = np.linspace(-10, 10, 100)\nx_rvs = distri.rvs(500) # muestrear 500 valores de la distribución\nx_pdf = distri.pdf(x) # la pdf evaluada para todos los x\nmedia, varianza = distri.stats(moments='mv')\nplt.plot (x, x_pdf, lw=5,\n label='$\\mu$ = {:3.1f}\\n$\\sigma$ = {:3.1f}'.format(float(media),\n float(varianza)**0.5))\nplt.hist(x_rvs, density=True)\nplt.xlabel('x')\nplt.ylabel('$p(x)$')\nplt.legend(frameon=True);", "Distribución exponencial\nLa distribución exponencial se define solo para $x > 0$. Esta distribución se suele usar para describir el tiempo que transcurre entre dos eventos que ocurren de forma continua e independiente a una taza fija. 
The number of such events in a fixed amount of time is given by the Poisson distribution.\n$$\np(x \\mid \\lambda) = \\lambda e^{-\\lambda x}\n$$\nBoth the mean and the standard deviation of this distribution are given by $\\frac{1}{\\lambda}$ \nSciPy uses a different parametrization where the scale is equal to $\\frac{1}{\\lambda}$", "distri = stats.expon(scale=3) # scale 3, lambda = 1/3\nx = np.linspace(0, 25, 100)\nx_rvs = distri.rvs(500) # draw 500 samples from the distribution\nx_pdf = distri.pdf(x) # the pdf evaluated at every x\nmedia, varianza = distri.stats(moments='mv')\nplt.plot (x, x_pdf, lw=5,\n label='$\\mu$ = {:3.1f}\\n$\\sigma$ = {:3.1f}'.format(float(media),\n float(varianza)**0.5))\nplt.hist(x_rvs, density=True)\n\nplt.xlabel('x')\nplt.ylabel('$p(x)$')\nplt.legend(frameon=True);", "Laplace distribution\nAlso called the double exponential distribution, since it can be thought of as an exponential distribution \"plus its mirror image\". This distribution arises when measuring the difference between two (identically distributed) exponential variables. \n$$p(x \\mid \\mu, b) = \\frac{1}{2b} \\exp \\left( - \\frac{|x - \\mu|}{b} \\right)$$", "distri = stats.laplace(0, 0.7) # loc 0, scale 0.7\nx = np.linspace(-5, 5, 500)\nx_rvs = distri.rvs(500) # draw 500 samples from the distribution\nx_pdf = distri.pdf(x) # the pdf evaluated at every x\nmedia, varianza = distri.stats(moments='mv')\nplt.plot (x, x_pdf, lw=5,\n label='$\\mu$ = {:3.1f}\\n$\\sigma$ = {:3.1f}'.format(float(media),\n float(varianza)**0.5))\nplt.hist(x_rvs, density=True)\n\nplt.xlabel('x')\nplt.ylabel('$p(x)$')\nplt.legend(frameon=True);", "Beta distribution\nThis is a distribution defined on the interval [0, 1]. It is used to model the behavior of random variables limited to a finite interval, and is useful for modeling proportions or percentages. 
\n$$\np(x \\mid \\alpha, \\beta)= \\frac{\\Gamma(\\alpha+\\beta)}{\\Gamma(\\alpha)\\Gamma(\\beta)}\\, x^{\\alpha-1}(1-x)^{\\beta-1}\n$$\nThe first term is simply a normalization constant that ensures that the integral of the $pdf$ is 1. $\\Gamma$ is the gamma function. When $\\alpha=1$ and $\\beta=1$ the beta distribution reduces to the uniform distribution.\nIf we want to express the beta distribution in terms of the mean and the dispersion around the mean, we can do so as follows.\n$$\\alpha = \\mu \\kappa$$\n$$\\beta = (1 - \\mu) \\kappa$$\nwhere $\\mu$ is the mean and $\\kappa$ is a parameter called the concentration; as $\\kappa$ increases the dispersion decreases. Note, moreover, that $\\kappa = \\alpha + \\beta$.", "distri = stats.beta(5, 2) # alpha=5, beta=2\nx = np.linspace(0, 1, 100)\nx_rvs = distri.rvs(500) # draw 500 samples from the distribution\nx_pdf = distri.pdf(x) # the pdf evaluated at every x\nmedia, varianza = distri.stats(moments='mv')\nplt.plot (x, x_pdf, lw=5,\n label='$\\mu$ = {:3.1f}\\n$\\sigma$ = {:3.1f}'.format(float(media), \n float(varianza)**0.5))\nplt.hist(x_rvs, density=True)\n\nplt.xlabel('x')\nplt.ylabel('$p(x)$')\nplt.legend(frameon=True);", "Gamma distribution\nSciPy parametrizes the gamma distribution using a parameter $\\alpha$ and a parameter $\\theta$; with these parameters the $pdf$ is:\n$$\np(x \\mid \\alpha, \\theta) = \\frac{1}{\\Gamma(\\alpha) \\theta^\\alpha} x^{\\alpha \\,-\\, 1} e^{-\\frac{x}{\\theta}}\n$$\nA more common parametrization in Bayesian statistics uses the parameters $\\alpha$ and $\\beta$, with $\\beta = \\frac{1}{\\theta}$. 
In this case the pdf becomes:\n$$\np(x \\mid \\alpha, \\beta) = \\frac{\\beta^{\\alpha}x^{\\alpha-1}e^{-\\beta x}}{\\Gamma(\\alpha)}\n$$\nThe gamma distribution reduces to the exponential when $\\alpha=1$.", "distri = stats.gamma(a=3, scale=0.5) # alpha 3, theta 0.5\nx = np.linspace(0, 8, 100)\nx_rvs = distri.rvs(500) # draw 500 samples from the distribution\nx_pdf = distri.pdf(x) # the pdf evaluated at every x\nmedia, varianza = distri.stats(moments='mv')\nplt.plot (x, x_pdf, lw=5,\n label='$\\mu$ = {:3.1f}\\n$\\sigma$ = {:3.1f}'.format(float(media), \n float(varianza)**0.5))\nplt.hist(x_rvs, density=True)\n\nplt.xlabel('x')\nplt.ylabel('$p(x)$')\nplt.legend(frameon=True);", "Cumulative distribution\nThe $pdf$ (or the $pmf$) is a common way to represent and work with random variables, but it is not the only possible one. There are other, equivalent representations. One example is the cumulative distribution function ($cdf$). Integrating a $pdf$ yields the corresponding $cdf$, and differentiating the $cdf$ yields the $pdf$.\nThe integral of the $pdf$ is called the cumulative distribution function ($cdf$):\n\\begin{equation}\ncdf(x) = \\int_{-\\infty}^{x} pdf(t) \\, dt\n\\end{equation}\nIn some situations one prefers to speak of the survival function:\n\\begin{equation}\nS(x) = 1 - cdf(x) \n\\end{equation}\nBelow is an example of the $pdf$ and $cdf$ for 4 distributions from the Gaussian family.", "_, ax = plt.subplots(2,1, figsize=(6, 8), sharex=True)\nx_values = np.linspace(-4, 4, 200)\nvalues = [(0., .2), (0., 1.), (0., 2.), (-2., .5)]\ncolor = ['C0', 'C1', 'C2', 'C3']\nfor val, c in zip(values, color):\n pdf = stats.norm(*val).pdf(x_values)\n cdf = stats.norm(*val).cdf(x_values)\n ax[0].plot(x_values, pdf, lw=3, color=c,\n label='$\\mu$ = {}, $\\sigma$ = {}'.format(*val))\n ax[1].plot(x_values, cdf, lw=3, color=c)\nax[0].set_ylabel('$pdf$', fontsize=14, rotation=0, labelpad=20)\nax[0].legend()\nax[1].set_ylabel('$cdf$', fontsize=14, 
rotation=0, labelpad=20)\nax[1].set_xlabel('$x$');", "The following figure, taken from the book Think Stats, summarizes the relationships between the $cdf$, $pdf$ and $pmf$.\n<img src='imagenes/cmf_pdf_pmf.png' width=600 >\nEmpirical versus theoretical distributions\nA graphical method for checking whether a data set fits a theoretical distribution is to plot the expected values of the theoretical distribution on the $x$ axis and the data values, sorted from smallest to largest, on the $y$ axis. If the empirical distribution were exactly equal to the theoretical one, the points would fall on the straight line at $45^{\\circ}$, that is, the line where $y = x$.", "muestra = np.random.normal(0, 1, 100)\ndist = stats.norm(0, 1), stats.laplace(scale=0.7)\nx = np.linspace(-4, 4, 100)\ndist_pdf = dist[0].pdf(x), dist[1].pdf(x)\n\n\n_, ax = plt.subplots(2, 2, figsize=(8, 8))\nfor i in range(2):\n osm, osr = stats.probplot(muestra, fit=False, dist=dist[i])\n ax[0,i].plot(osm, osm)\n ax[0,i].plot(osm, osr, 'o')\n ax[0,i].set_xlabel('Expected values')\n ax[0,i].set_ylabel('Observed values')\n ax[1, i].plot(x, dist_pdf[i], lw=3)\n ax[1, i].hist(muestra, density=True)\n ax[1, i].set_ylim(0, np.max(dist_pdf) * 1.1)", "Limits\nThe two best-known and most-used theorems in probability are the law of large numbers and the central limit theorem. Both tell us what happens to the sample mean as the sample size increases.\nThe law of large numbers\nThe average computed from a sample converges to the expected value (mean) of the distribution being sampled. This is not true for some distributions, such as the Cauchy distribution (which has neither a finite mean nor a finite variance).\nThe law of large numbers is often misinterpreted, giving rise to the gambler's fallacy. An example of this fallacy is the belief that it pays to bet in the lottery on an overdue number, that is, a number that has not come up for a long time. 
The (erroneous) reasoning is that, since all numbers have the same probability in the long run, if a number is overdue then there must be some kind of force that increases the probability of that number in the next draws, so as to restore the equiprobability of the numbers.", "tamaño_muestra = 200\nmuestras = range(1, tamaño_muestra)\ndist = stats.uniform(0, 1)\nmedia_verdadera = dist.stats(moments='m')\n\nfor _ in range(3):\n muestra = dist.rvs(tamaño_muestra)\n media_estimada = [muestra[:i].mean() for i in muestras]\n plt.plot(muestras, media_estimada, lw=1.5)\n\nplt.hlines(media_verdadera, 0, tamaño_muestra, linestyle='--', color='k')\nplt.ylabel(\"mean\", fontsize=14)\nplt.xlabel(\"# of samples\", fontsize=14);", "The central limit theorem\nThe central limit theorem states that if we take $n$ values (independently) from an arbitrary distribution, the mean $\\bar X$ of those values will be distributed approximately as a Gaussian as ${n \\rightarrow \\infty}$:\n$$\\bar X_n \\dot\\sim \\mathcal{N} \\left(\\mu, \\frac{\\sigma^2}{n}\\right)$$\nwhere $\\mu$ and $\\sigma^2$ are the population mean and variance.\nFor the central limit theorem to hold, the following assumptions must be satisfied:\n\nThe variables are sampled independently\nThe variables come from the same distribution\nThe mean and the standard deviation of the distribution must be finite\n\nCriteria 1 and 2 can be relaxed quite a bit and we will still obtain approximately a Gaussian, but there is no way to escape criterion 3. For distributions such as the Cauchy distribution, which have no defined mean or variance, this theorem does not apply. The average of $N$ values drawn from a Cauchy distribution does not follow a Gaussian but a Cauchy distribution.\nThe central limit theorem explains the prevalence of the Gaussian distribution in nature. 
Many of the phenomena we study can be explained as fluctuations around a mean, or as the result of the sum of many different factors. Moreover, Gaussians are very common in probability, statistics and machine learning, since this family of distributions is simpler to manipulate mathematically than many other distributions.\nBelow we see a simulation that shows the central limit theorem in action.", "np.random.seed(4)\nplt.figure(figsize=(9,6))\niters = 2000\ndistri = stats.expon(scale=1)\nmu, var = distri.stats(moments='mv')\n\nfor i, n in enumerate([1, 5, 100]):\n sample = np.mean(distri.rvs((n, iters)), axis=0)\n plt.subplot(2, 3, i+1)\n sd = (var/n)**0.5 \n x = np.linspace(mu - 4 * sd, mu + 4 * sd, 200)\n plt.plot(x, stats.norm(mu, sd).pdf(x))\n plt.hist(sample, density=True, bins=20)\n plt.title('n = {}'.format(n))\n plt.subplot(2, 3, i+4)\n osm, osr = stats.probplot(sample, dist=stats.norm(mu, (var/n)**0.5), fit=False)\n plt.plot(osm, osm)\n plt.plot(osm, osr, 'o')\n plt.xlabel('Expected values')\n plt.ylabel('Observed values')\n\n\nplt.tight_layout()", "Exercises\n\nFollowing Kolmogorov's axioms\nWhy can probabilities not be greater than 1? \n\n\nAccording to the definition of conditional probability\nWhat is the value of $P(A \\mid A)$?\nWhat is the probability $P(A, B)$?\nWhat is the probability $P(A, B)$ in the special case where $A$ and $B$ are independent? \nWhen does $P(A \\mid B) = P(A)$ hold?\nIs it possible that $P(A \\mid B) > P(A)$, and when?\nIs it possible that $P(A \\mid B) < P(A)$, and when?\n\n\n\nThe following exercises should be done using Python (and NumPy, SciPy, Matplotlib)\n1. 
Illustrate that the Poisson distribution approximates a binomial when, for the binomial, $n$ is large and $p$ is small.\n\n\nFor one of the discrete distributions presented in this notebook, verify that the total probability is 1.\n\n\nFor one of the continuous distributions presented in this notebook, verify that the area under the curve is 1.\n\n\nObtain the cdf from the pdf (use the pdf method provided by SciPy). The function np.cumsum may be useful.\n\n\nObtain the pdf from the cdf (use the cdf method provided by SciPy). The function np.diff may be useful.\n\n\nRepeat the simulation for the law of large numbers for at least 3 probability distributions. For each distribution try more than one set of parameters.\n\n\nRepeat the simulation for the central limit theorem for at least 3 probability distributions. For each distribution try more than one set of parameters.\n\n\nShow in a plot that the mean $\\bar X$ converges to $\\mu$ and the variance converges to $\\frac{\\sigma^2}{n}$ as the sample size increases." ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
SnShine/aima-python
nlp.ipynb
mit
[ "NATURAL LANGUAGE PROCESSING\nThis notebook covers chapters 22 and 23 from the book Artificial Intelligence: A Modern Approach, 3rd Edition. The implementations of the algorithms can be found in nlp.py.\nRun the cell below to import the code from the module and get started!", "import nlp\nfrom nlp import Page, HITS\nfrom nlp import Lexicon, Rules, Grammar, ProbLexicon, ProbRules, ProbGrammar\nfrom nlp import CYK_parse, Chart\n\nfrom notebook import psource", "CONTENTS\n\nOverview\nLanguages\nHITS\nQuestion Answering\nCYK Parse\nChart Parsing\n\nOVERVIEW\nNatural Language Processing (NLP) is a field of AI concerned with understanding, analyzing and using natural languages. It is considered a difficult yet intriguing field of study, since it is connected to how humans and their languages work.\nApplications of the field include translation, speech recognition, topic segmentation, information extraction and retrieval, and a lot more.\nBelow we take a look at some algorithms in the field. Before we get right into it though, we will first look at a very useful form of language, context-free languages. Even though they are a bit restrictive, they have been used a lot in research in natural language processing.\nLANGUAGES\nLanguages can be represented by a set of grammar rules over a lexicon of words. Different languages can be represented by different types of grammar, but in Natural Language Processing we are mainly interested in context-free grammars.\nContext-Free Grammars\nA lot of natural and programming languages can be represented by a Context-Free Grammar (CFG). A CFG is a grammar in which each rule has a single non-terminal symbol on the left-hand side. That means a non-terminal can be replaced by the right-hand side of the rule regardless of context. An example of a CFG:\nS -&gt; aSb | ε\nThat means S can be replaced by either aSb or ε (with ε we denote the empty string). 
The lexicon of the language comprises the terminals a and b, while with S we denote the non-terminal symbol. In general, non-terminals are capitalized while terminals are not, and we usually name the starting non-terminal S. The language generated by the above grammar is the language a<sup>n</sup>b<sup>n</sup> for n greater than or equal to 0.\nProbabilistic Context-Free Grammar\nWhile a simple CFG can be very useful, we might want to know the chance of each rule occurring. Above, we do not know if S is more likely to be replaced by aSb or ε. Probabilistic Context-Free Grammars (PCFG) are built to fill exactly that need. Each rule has a probability, given in brackets, and the probabilities of a rule sum up to 1:\nS -&gt; aSb [0.7] | ε [0.3]\nNow we know it is more likely for S to be replaced by aSb than by ε.\nAn issue with PCFGs is how we will assign the various probabilities to the rules. We could use our knowledge as humans to assign the probabilities, but that is a laborious and error-prone task. Instead, we can learn the probabilities from data. Data is categorized as labeled (with correctly parsed sentences, usually called a treebank) or unlabeled (given only lexical and syntactic category names).\nWith labeled data, we can simply count the occurrences. For the above grammar, if we have 100 S rules and 30 of them are of the form S -&gt; ε, we assign a probability of 0.3 to the transformation.\nWith unlabeled data we have to learn both the grammar rules and the probability of each rule. We can go with many approaches, one of them the inside-outside algorithm. 
It uses a dynamic programming approach that first finds the probability of a substring being generated by each rule, and then estimates the probability of each rule.\nChomsky Normal Form\nA grammar is in Chomsky Normal Form (or CNF, not to be confused with Conjunctive Normal Form) if its rules are one of the three:\n\nX -&gt; Y Z\nA -&gt; a\nS -&gt; ε\n\nWhere X, Y, Z, A are non-terminals, a is a terminal, ε is the empty string and S is the start symbol (the start symbol should not appear on the right-hand side of rules). Note that there can be multiple rules for each left-hand side non-terminal, as long as they follow the above. For example, a rule for X might be: X -&gt; Y Z | A B | a | b.\nOf course, we can also have a CNF with probabilities.\nThis type of grammar may seem restrictive, but it can be proven that any context-free grammar can be converted to CNF.\nLexicon\nThe lexicon of a language is defined as a list of allowable words. These words are grouped into the usual classes: verbs, nouns, adjectives, adverbs, pronouns, names, articles, prepositions and conjunctions. For the first five classes it is impossible to list all words, since words are continuously being added to them. Recently \"google\" was added to the list of verbs, and words like that will continue to pop up and get added to the lists. For that reason, these first five categories are called open classes. The remaining categories have far fewer words and change far less. Words like \"thou\" were commonly used in the past but have declined almost completely in usage; still, most such changes take many decades or centuries to manifest, so we can safely assume the categories will remain static for the foreseeable future. 
Thus, these categories are called closed classes.\nAn example lexicon for a PCFG (note that other classes can also be used according to the language, like digits, or RelPro for relative pronoun):\nVerb -&gt; is [0.3] | say [0.1] | are [0.1] | ...\nNoun -&gt; robot [0.1] | sheep [0.05] | fence [0.05] | ...\nAdjective -&gt; good [0.1] | new [0.1] | sad [0.05] | ...\nAdverb -&gt; here [0.1] | lightly [0.05] | now [0.05] | ...\nPronoun -&gt; me [0.1] | you [0.1] | he [0.05] | ...\nRelPro -&gt; that [0.4] | who [0.2] | which [0.2] | ...\nName -&gt; john [0.05] | mary [0.05] | peter [0.01] | ...\nArticle -&gt; the [0.35] | a [0.25] | an [0.025] | ...\nPreposition -&gt; to [0.25] | in [0.2] | at [0.1] | ...\nConjuction -&gt; and [0.5] | or [0.2] | but [0.2] | ...\nDigit -&gt; 1 [0.3] | 2 [0.2] | 0 [0.2] | ...\nGrammar\nWith grammars we combine words from the lexicon into valid phrases. A grammar is comprised of grammar rules. Each rule transforms the left-hand side of the rule into the right-hand side. For example, A -&gt; B means that A transforms into B. Let's build a grammar for the language we started building with the lexicon. We will use a PCFG.\n```\nS -> NP VP [0.9] | S Conjuction S [0.1]\nNP -> Pronoun [0.3] | Name [0.1] | Noun [0.1] | Article Noun [0.25] |\n Article Adjs Noun [0.05] | Digit [0.05] | NP PP [0.1] |\n NP RelClause [0.05]\nVP -> Verb [0.4] | VP NP [0.35] | VP Adjective [0.05] | VP PP [0.1] |\n VP Adverb [0.1]\nAdjs -> Adjective [0.8] | Adjective Adjs [0.2]\nPP -> Preposition NP [1.0]\nRelClause -> RelPro VP [1.0]\n```\nSome valid phrases the grammar produces: \"mary is sad\", \"you are a robot\" and \"she likes mary and a good fence\".\nWhat if we wanted to check if the phrase \"mary is sad\" is actually a valid sentence? 
We can use a parse tree to constructively prove that a string of words is a valid phrase in the given language and even calculate the probability of the generation of the sentence.\n\nThe probability of the whole tree can be calculated by multiplying the probabilities of each individual rule transformation: 0.9 * 0.1 * 0.05 * 0.05 * 0.4 * 0.05 * 0.3 = 0.00000135.\nTo conserve space, we can also write the tree in linear form:\n[S [NP [Name mary]] [VP [VP [Verb is]] [Adjective sad]]]\nUnfortunately, the current grammar overgenerates, that is, it creates sentences that are not grammatically correct (according to the English language), like \"the fence are john which say\". It also undergenerates, which means there are valid sentences it does not generate, like \"he believes mary is sad\".\nImplementation\nIn the module we have implementations for both probabilistic and non-probabilistic grammars. Both implementations follow the same format. There are functions for the lexicon and the rules which can be combined to create a grammar object.\nNon-Probabilistic\nExecute the cell below to view the implementations:", "psource(Lexicon, Rules, Grammar)", "Let's build a lexicon and a grammar for the above language:", "lexicon = Lexicon(\n Verb = \"is | say | are\",\n Noun = \"robot | sheep | fence\",\n Adjective = \"good | new | sad\",\n Adverb = \"here | lightly | now\",\n Pronoun = \"me | you | he\",\n RelPro = \"that | who | which\",\n Name = \"john | mary | peter\",\n Article = \"the | a | an\",\n Preposition = \"to | in | at\",\n Conjuction = \"and | or | but\",\n Digit = \"1 | 2 | 0\"\n)\n\nprint(\"Lexicon\", lexicon)\n\nrules = Rules(\n S = \"NP VP | S Conjuction S\",\n NP = \"Pronoun | Name | Noun | Article Noun \\\n | Article Adjs Noun | Digit | NP PP | NP RelClause\",\n VP = \"Verb | VP NP | VP Adjective | VP PP | VP Adverb\",\n Adjs = \"Adjective | Adjective Adjs\",\n PP = \"Preposition NP\",\n RelClause = \"RelPro VP\"\n)\n\nprint(\"\\nRules:\", rules)", "Both 
the functions return a dictionary whose keys are the left-hand sides of the rules. For the lexicon, the values are the terminals for each left-hand side non-terminal, while for the rules the values are the right-hand sides as lists.\nWe can now use the variables lexicon and rules to build a grammar. After we've done so, we can find the transformations of a non-terminal (the Noun, Verb and the other basic classes do not count as proper non-terminals in the implementation). We can also check if a word is in a particular class.", "grammar = Grammar(\"A Simple Grammar\", rules, lexicon)\n\nprint(\"How can we rewrite 'VP'?\", grammar.rewrites_for('VP'))\nprint(\"Is 'the' an article?\", grammar.isa('the', 'Article'))\nprint(\"Is 'here' a noun?\", grammar.isa('here', 'Noun'))", "If the grammar is in Chomsky Normal Form, we can call the class function cnf_rules to get all the rules in the form of (X, Y, Z) for each X -&gt; Y Z rule. Since the above grammar is not in CNF though, we have to create a new one.", "E_Chomsky = Grammar(\"E_Prob_Chomsky\", # A Grammar in Chomsky Normal Form\n Rules(\n S = \"NP VP\",\n NP = \"Article Noun | Adjective Noun\",\n VP = \"Verb NP | Verb Adjective\",\n ),\n Lexicon(\n Article = \"the | a | an\",\n Noun = \"robot | sheep | fence\",\n Adjective = \"good | new | sad\",\n Verb = \"is | say | are\"\n ))\n\nprint(E_Chomsky.cnf_rules())", "Finally, we can generate random phrases using our grammar. Most of them will be complete gibberish, falling under the overgenerated phrases of the grammar. That goes to show that the valid phrases of the grammar are far fewer than the overgenerated ones.", "grammar.generate_random('S')", "Probabilistic\nThe probabilistic grammars follow the same approach. They take as input a string, are assembled from a grammar and a lexicon and can generate random sentences (giving the probability of the sentence). 
The main difference is that in the lexicon we have tuples (terminal, probability) instead of strings and for the rules we have a list of tuples (list of non-terminals, probability) instead of list of lists of non-terminals.\nExecute the cells to read the code:", "psource(ProbLexicon, ProbRules, ProbGrammar)", "Let's build a lexicon and rules for the probabilistic grammar:", "lexicon = ProbLexicon(\n Verb = \"is [0.5] | say [0.3] | are [0.2]\",\n Noun = \"robot [0.4] | sheep [0.4] | fence [0.2]\",\n Adjective = \"good [0.5] | new [0.2] | sad [0.3]\",\n Adverb = \"here [0.6] | lightly [0.1] | now [0.3]\",\n Pronoun = \"me [0.3] | you [0.4] | he [0.3]\",\n RelPro = \"that [0.5] | who [0.3] | which [0.2]\",\n Name = \"john [0.4] | mary [0.4] | peter [0.2]\",\n Article = \"the [0.5] | a [0.25] | an [0.25]\",\n Preposition = \"to [0.4] | in [0.3] | at [0.3]\",\n Conjuction = \"and [0.5] | or [0.2] | but [0.3]\",\n Digit = \"0 [0.35] | 1 [0.35] | 2 [0.3]\"\n)\n\nprint(\"Lexicon\", lexicon)\n\nrules = ProbRules(\n S = \"NP VP [0.6] | S Conjuction S [0.4]\",\n NP = \"Pronoun [0.2] | Name [0.05] | Noun [0.2] | Article Noun [0.15] \\\n | Article Adjs Noun [0.1] | Digit [0.05] | NP PP [0.15] | NP RelClause [0.1]\",\n VP = \"Verb [0.3] | VP NP [0.2] | VP Adjective [0.25] | VP PP [0.15] | VP Adverb [0.1]\",\n Adjs = \"Adjective [0.5] | Adjective Adjs [0.5]\",\n PP = \"Preposition NP [1]\",\n RelClause = \"RelPro VP [1]\"\n)\n\nprint(\"\\nRules:\", rules)", "Let's use the above to assemble our probabilistic grammar and run some simple queries:", "grammar = ProbGrammar(\"A Simple Probabilistic Grammar\", rules, lexicon)\n\nprint(\"How can we rewrite 'VP'?\", grammar.rewrites_for('VP'))\nprint(\"Is 'the' an article?\", grammar.isa('the', 'Article'))\nprint(\"Is 'here' a noun?\", grammar.isa('here', 'Noun'))", "If we have a grammar in CNF, we can get a list of all the rules. 
Let's create a grammar in that form and print its CNF rules:", "E_Prob_Chomsky = ProbGrammar(\"E_Prob_Chomsky\", # A Probabilistic Grammar in CNF\n ProbRules(\n S = \"NP VP [1]\",\n NP = \"Article Noun [0.6] | Adjective Noun [0.4]\",\n VP = \"Verb NP [0.5] | Verb Adjective [0.5]\",\n ),\n ProbLexicon(\n Article = \"the [0.5] | a [0.25] | an [0.25]\",\n Noun = \"robot [0.4] | sheep [0.4] | fence [0.2]\",\n Adjective = \"good [0.5] | new [0.2] | sad [0.3]\",\n Verb = \"is [0.5] | say [0.3] | are [0.2]\"\n ))\n\nprint(E_Prob_Chomsky.cnf_rules())", "Lastly, we can generate random sentences from this grammar. The function generate_random returns a tuple (sentence, probability).", "sentence, prob = grammar.generate_random('S')\nprint(sentence)\nprint(prob)", "As with the non-probabilistic grammars, this one mostly overgenerates. You can also see that the probability is very, very low, which means there are a ton of generable sentences (in this case infinite, since we have recursion; notice how VP can produce another VP, for example).\nHITS\nOverview\nHyperlink-Induced Topic Search (or HITS for short) is an algorithm for information retrieval and page ranking. You can read more on information retrieval in the text notebook. Essentially, given a collection of documents and a user's query, such systems return to the user the documents most relevant to what the user needs. The HITS algorithm differs from a lot of other similar ranking algorithms (like Google's Pagerank) as the page ratings in this algorithm are dependent on the given query. This means that for each new query the result pages must be computed anew. This cost might be prohibitive for many modern search engines, so a lot steer away from this approach.\nHITS first finds a list of relevant pages to the query and then adds pages that link to or are linked from these pages. Once the set is built, we define two values for each page. 
Authority on the query, the degree to which pages from the relevant set link to it, and hub of the query, the degree to which it points to authoritative pages in the set. Since we do not want to simply count the number of links from a page to other pages, but we also want to take into account the quality of the linked pages, we update the hub and authority values of a page in the following manner, until convergence:\n\n\nHub score = The sum of the authority scores of the pages it links to.\n\n\nAuthority score = The sum of hub scores of the pages it is linked from.\n\n\nSo the higher the quality of the pages a page is linked to and from, the higher its scores.\nWe then normalize the scores by dividing each score by the sum of the squares of the respective scores of all pages. When the values converge, we return the top-valued pages. Note that because we normalize the values, the algorithm is guaranteed to converge.\nImplementation\nThe source code for the algorithm is given below:", "psource(HITS)", "First we compile the collection of pages as mentioned above. Then, we initialize the authority and hub scores for each page and finally we update and normalize the values until convergence.\nA quick overview of the helper functions we use:\n\n\nrelevant_pages: Returns relevant pages from pagesIndex given a query.\n\n\nexpand_pages: Adds to the collection pages linked to and from the given pages.\n\n\nnormalize: Normalizes authority and hub scores.\n\n\nConvergenceDetector: A class that checks for convergence, by keeping a history of the pages' scores and checking if they change or not.\n\n\nPage: The template for pages. Stores the address, authority/hub scores and in-links/out-links.\n\n\nExample\nBefore we begin we need to define a list of sample pages to work on. The pages are pA, pB and so on and their text is given by testHTML and testHTML2. The Page class takes as arguments the in-links and out-links as lists. 
For page \"A\", the in-links are \"B\", \"C\" and \"E\" while the sole out-link is \"D\".\nWe also need to set the nlp global variables pageDict, pagesIndex and pagesContent.", "testHTML = \"\"\"Like most other male mammals, a man inherits an\n X from his mom and a Y from his dad.\"\"\"\ntestHTML2 = \"a mom and a dad\"\n\npA = Page('A', ['B', 'C', 'E'], ['D'])\npB = Page('B', ['E'], ['A', 'C', 'D'])\npC = Page('C', ['B', 'E'], ['A', 'D'])\npD = Page('D', ['A', 'B', 'C', 'E'], [])\npE = Page('E', [], ['A', 'B', 'C', 'D', 'F'])\npF = Page('F', ['E'], [])\n\nnlp.pageDict = {pA.address: pA, pB.address: pB, pC.address: pC,\n pD.address: pD, pE.address: pE, pF.address: pF}\n\nnlp.pagesIndex = nlp.pageDict\n\nnlp.pagesContent ={pA.address: testHTML, pB.address: testHTML2,\n pC.address: testHTML, pD.address: testHTML2,\n pE.address: testHTML, pF.address: testHTML2}", "We can now run the HITS algorithm. Our query will be 'mammals' (note that while the content of the HTML doesn't matter, it should include the query words or else no page will be picked at the first step).", "HITS('mammals')\npage_list = ['A', 'B', 'C', 'D', 'E', 'F']\nauth_list = [pA.authority, pB.authority, pC.authority, pD.authority, pE.authority, pF.authority]\nhub_list = [pA.hub, pB.hub, pC.hub, pD.hub, pE.hub, pF.hub]", "Let's see how the pages were scored:", "for i in range(6):\n p = page_list[i]\n a = auth_list[i]\n h = hub_list[i]\n \n print(\"{}: total={}, auth={}, hub={}\".format(p, a + h, a, h))", "The top score is 0.82 by \"C\". This is the most relevant page according to the algorithm. You can see that the pages it links to, \"A\" and \"D\", have the two highest authority scores (therefore \"C\" has a high hub score) and the pages it is linked from, \"B\" and \"E\", have the highest hub scores (so \"C\" has a high authority score). By combining these two facts, we get that \"C\" is the most relevant page. 
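The two update equations can also be sketched in isolation on the same toy link graph (pages A to F as defined above). This is a simplified sketch, not the module's implementation: it skips the query/relevance step and normalizes by the Euclidean norm, so its absolute numbers differ from the scores printed above, but it shows the same structural outcome the analysis relies on: D accumulates the most authority (almost everyone links to it) and E is the strongest hub (it links to almost everything).

```python
# Standalone sketch of the HITS update rules, independent of the nlp module.
# out_links maps each page to the pages it points to (same graph as above).
out_links = {
    'A': ['D'],
    'B': ['A', 'C', 'D'],
    'C': ['A', 'D'],
    'D': [],
    'E': ['A', 'B', 'C', 'D', 'F'],
    'F': [],
}
pages = list(out_links)
hub = {p: 1.0 for p in pages}
auth = {p: 1.0 for p in pages}

for _ in range(50):  # iterate until (practically) convergent
    # authority score = sum of hub scores of the pages linking in
    auth = {p: sum(hub[q] for q in pages if p in out_links[q]) for p in pages}
    # hub score = sum of authority scores of the pages linked to
    hub = {p: sum(auth[q] for q in out_links[p]) for p in pages}
    # normalize each vector by its Euclidean norm
    a_norm = sum(v ** 2 for v in auth.values()) ** 0.5
    h_norm = sum(v ** 2 for v in hub.values()) ** 0.5
    auth = {p: v / a_norm for p, v in auth.items()}
    hub = {p: v / h_norm for p, v in hub.items()}

print(max(pages, key=auth.get), max(pages, key=hub.get))  # prints: D E
```

Note how C ranks high on both counts at once, which is why the module's combined score singles it out.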
It is worth noting that it does not matter if the given page contains the query words, just that it links to and is linked from high-quality pages.\nQUESTION ANSWERING\nQuestion Answering is a type of Information Retrieval system, where we have a question instead of a query and instead of relevant documents we want the computer to return a short sentence, phrase or word that answers our question. To better understand the concept of question answering systems, you can first read the \"Text Models\" and \"Information Retrieval\" section from the text notebook.\nA typical example of such a system is AskMSR (Banko et al., 2002), a system for question answering that performed admirably against more sophisticated algorithms. The basic idea behind it is that a lot of questions have already been answered on the web numerous times. The system doesn't know a lot about verbs, or concepts or even what a noun is. It knows about 15 different types of questions and how they can be written as queries. It can rewrite [Who was George Washington's second in command?] as the query [* was George Washington's second in command] or [George Washington's second in command was *].\nAfter rewriting the questions, it issues these queries and retrieves the short text around the query terms. It then breaks the result into 1, 2 or 3-grams. Filters are also applied to increase the chances of a correct answer. If the query starts with \"who\", we filter for names, if it starts with \"how many\" we filter for numbers and so on. We can also filter out the words appearing in the query. For the above query, the answer \"George Washington\" is wrong, even though it is quite possible the 2-gram would appear a lot around the query terms.\nFinally, the different results are weighted by the generality of the queries. The result from the general boolean query [George Washington OR second in command] weighs less than the more specific query [George Washington's second in command was *]. 
As an answer we return the most highly-ranked n-gram.\nCYK PARSE\nOverview\nSyntactic analysis (or parsing) of a sentence is the process of uncovering the phrase structure of the sentence according to the rules of a grammar. There are two main approaches to parsing: top-down, where we start with the starting symbol and build a parse tree with the given words as its leaves, and bottom-up, where we start from the given words and build a tree that has the starting symbol as its root. Both approaches involve \"guessing\" ahead, so parsing a sentence can take a long time (wrong guesses mean a lot of backtracking). Thankfully, much of that work involves substrings that have already been analyzed, so we can follow a dynamic programming approach and store and reuse these parses instead of recomputing them. The CYK Parsing Algorithm (named after its inventors, Cocke, Younger and Kasami) utilizes this technique to parse sentences of a grammar in Chomsky Normal Form.\nThe CYK algorithm returns an M x N x N array (named P), where N is the number of words in the sentence and M the number of non-terminal symbols in the grammar. Each element in this array shows the probability of a substring being derived from a particular non-terminal. To find the most probable parse of the sentence, a search in the resulting array is required. Search heuristic algorithms work well in this space, and we can derive the heuristics from the properties of the grammar.\nIn short, the algorithm works like this: an outer loop determines the length of the substring. Then the algorithm loops through the words in the sentence. For each word, it again loops through all the words to its right up to the first-loop length. The substring it will work on in this iteration is the words from the second-loop word with first-loop length. 
Finally, it loops through all the rules in the grammar and updates the substring's probability for each right-hand side non-terminal.\nImplementation\nThe implementation takes as input a list of words and a probabilistic grammar (from the ProbGrammar class detailed above) in CNF and returns the table/dictionary P. An item's key in P is a tuple in the form (Non-terminal, start of substring, length of substring), and the value is a probability. For example, for the sentence \"the monkey is dancing\" and the substring \"the monkey\" an item can be ('NP', 0, 2): 0.5, which means the first two words (the substring from index 0 and length 2) have a 0.5 probability of coming from the NP non-terminal.\nBefore we continue, you can take a look at the source code by running the cell below:", "psource(CYK_parse)", "When updating the probability of a substring, we pick the max of its current one and the probability of the substring broken into two parts: one from the second-loop word with third-loop length, and the other from the first part's end to the remainder of the first-loop length.\nExample\nLet's build a probabilistic grammar in CNF:", "E_Prob_Chomsky = ProbGrammar(\"E_Prob_Chomsky\", # A Probabilistic Grammar in CNF\n ProbRules(\n S = \"NP VP [1]\",\n NP = \"Article Noun [0.6] | Adjective Noun [0.4]\",\n VP = \"Verb NP [0.5] | Verb Adjective [0.5]\",\n ),\n ProbLexicon(\n Article = \"the [0.5] | a [0.25] | an [0.25]\",\n Noun = \"robot [0.4] | sheep [0.4] | fence [0.2]\",\n Adjective = \"good [0.5] | new [0.2] | sad [0.3]\",\n Verb = \"is [0.5] | say [0.3] | are [0.2]\"\n ))", "Now let's see the probabilities table for the sentence \"the robot is good\":", "words = ['the', 'robot', 'is', 'good']\ngrammar = E_Prob_Chomsky\n\nP = CYK_parse(words, grammar)\nprint(P)", "A defaultdict object is returned (defaultdict is basically a dictionary but with a default value/type). Keys are tuples in the form mentioned above and the values are the corresponding probabilities. 
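The nested loops described in the overview can also be written out as a stand-alone sketch. The dict-based rule and lexicon formats below are assumptions made for this toy, not the ProbGrammar API:

```python
from collections import defaultdict

def cyk_sketch(words, rules, lexicon):
    """Minimal CYK for a grammar in CNF.
      rules:   {(B, C): [(A, p), ...]}  for rules A -> B C [p]
      lexicon: {word: [(A, p), ...]}    for rules A -> word [p]
    Returns a defaultdict P keyed by (non-terminal, start, length);
    missing entries read as probability 0.0."""
    N = len(words)
    P = defaultdict(float)
    # Length-1 substrings come straight from the lexicon.
    for i, word in enumerate(words):
        for A, p in lexicon.get(word, []):
            P[A, i, 1] = p
    # Longer substrings: try every split point and every matching rule,
    # keeping the max probability seen for each (A, start, length).
    for length in range(2, N + 1):
        for start in range(N - length + 1):
            for left_len in range(1, length):
                for (B, C), expansions in rules.items():
                    prob = (P[B, start, left_len]
                            * P[C, start + left_len, length - left_len])
                    for A, p in expansions:
                        P[A, start, length] = max(P[A, start, length], p * prob)
    return P
```

For the grammar above and the sentence "the robot is good", this sketch gives P['NP', 0, 2] = 0.6 · 0.5 · 0.4 = 0.12 and P['S', 0, 4] = 0.015, the probability of the most probable parse of the whole sentence.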
Most of the items/parses have a probability of 0. Let's filter those out to take a better look at the parses that matter.", "parses = {k: p for k, p in P.items() if p >0}\n\nprint(parses)", "The item ('Article', 0, 1): 0.5 means that the first item came from the Article non-terminal with a chance of 0.5. A more complicated item, one with two words, is ('NP', 0, 2): 0.12 which covers the first two words. The probability of the substring \"the robot\" coming from the NP non-terminal is 0.12. Let's try and follow the transformations from NP to the given words (top-down) to make sure this is indeed the case:\n\n\nThe probability of NP transforming to Article Noun is 0.6.\n\n\nThe probability of Article transforming to \"the\" is 0.5 (total probability = 0.6*0.5 = 0.3).\n\n\nThe probability of Noun transforming to \"robot\" is 0.4 (total = 0.3*0.4 = 0.12).\n\n\nThus, the total probability of the transformation is 0.12.\nNotice how the probability for the whole string (given by the key ('S', 0, 4)) is 0.015. This means the most probable parsing of the sentence has a probability of 0.015.\nCHART PARSING\nOverview\nLet's now take a look at a more general chart parsing algorithm. Given a non-probabilistic grammar and a sentence, this algorithm builds a parse tree in a top-down manner, with the words of the sentence as the leaves. It works with a dynamic programming approach, building a chart to store parses for substrings so that it doesn't have to analyze them again (just like the CYK algorithm). Each non-terminal, starting from S, gets replaced by its right-hand side rules in the chart, until we end up with the correct parses.\nImplementation\nA parse is in the form [start, end, non-terminal, sub-tree, expected-transformation], where sub-tree is a tree with the corresponding non-terminal as its root and expected-transformation is a right-hand side rule of the non-terminal.\nThe chart parsing is implemented in a class, Chart. 
It is initialized with a grammar and can return the list of all the parses of a sentence with the parses function.\nThe chart is a list of lists. The lists correspond to the lengths of substrings (including the empty string), from start to finish. When we say 'a point in the chart', we refer to a list of a certain length.\nA quick rundown of the class functions:\n\n\nparses: Returns a list of parses for a given sentence. If the sentence can't be parsed, it will return an empty list. Initializes the process by calling parse from the starting symbol.\n\n\nparse: Parses the list of words and builds the chart.\n\n\nadd_edge: Adds another edge to the chart at a given point. It also examines whether the edge extends or predicts another edge. If the edge itself is not expecting a transformation, it will extend other edges; otherwise, it will predict edges.\n\n\nscanner: Given a word and a point in the chart, it extends edges that were expecting a transformation that can result in the given word. For example, if the word 'the' is an 'Article' and we are examining two edges at a chart's point, with one expecting an 'Article' and the other a 'Verb', the first one will be extended while the second one will not.\n\n\npredictor: If an edge can't extend other edges (because it is expecting a transformation itself), we will add to the chart rules/transformations that can help extend the edge. The new edges come from the right-hand side of the expected transformation's rules. For example, if an edge is expecting the transformation 'Adjective Noun', we will add to the chart an edge for each right-hand side rule of the non-terminal 'Adjective'.\n\n\nextender: Extends edges given an edge (called E). If E's non-terminal is the same as the expected transformation of another edge (let's call it A), add to the chart a new edge with the non-terminal of A and the transformations of A minus the non-terminal that matched with E's non-terminal. 
For example, if an edge E has 'Article' as its non-terminal and is expecting no transformation, we need to see what edges it can extend. Let's examine the edge N. This expects a transformation of 'Noun Verb'. 'Noun' does not match with 'Article', so we move on. Another edge, A, expects a transformation of 'Article Noun' and has a non-terminal of 'NP'. We have a match! A new edge will be added with 'NP' as its non-terminal (the non-terminal of A) and 'Noun' as the expected transformation (the rest of the expected transformation of A).\n\n\nYou can view the source code by running the cell below:", "psource(Chart)", "Example\nWe will use the grammar E0 to parse the sentence \"the stench is in 2 2\".\nFirst we need to build a Chart object:", "chart = Chart(nlp.E0)", "And then we simply call the parses function:", "print(chart.parses('the stench is in 2 2'))", "You can see which edges get added by setting the optional initialization argument trace to true.", "chart_trace = Chart(nlp.E0, trace=True)\nchart_trace.parses('the stench is in 2 2')", "Let's try and parse a sentence that is not recognized by the grammar:", "print(chart.parses('the stench 2 2'))", "An empty list was returned." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
slundberg/shap
notebooks/tabular_examples/tree_based_models/tree_shap_paper/Figure 1 - Simple Inconsistency Example.ipynb
mit
[ "Figure 1 - Simple Inconsistency Example\nHere we create synthetic data to make XGBoost build the Model A and Model B from Figure 1 of the Tree SHAP paper. We then compute both individualized feature importances using Tree SHAP and Saabas, and global feature importances using Tree SHAP, gain, split count, and permutation.", "import matplotlib.pyplot as pl\nimport numpy as np\nimport shap\nimport xgboost as xgb", "Create Model A\nThis is just a simple AND function with a small amount of noise to force the creation of the left child split. Feature 0 is Fever and feature 1 is Cough.", "N = 2000\nX = np.zeros((N,2))\nX[:1000,0] = 1\nX[:500,1] = 1\nX[1000:1500,1] = 1\nyA = 80 * (X[:,0] * X[:,1]) + 1e-4 * ((X[:,0] == 0) * (X[:,1] == 0)) # last term forces the creation of left split\nXd = xgb.DMatrix(X)\n\n# train a model with single tree\nXdA = xgb.DMatrix(X, label=yA)\nmodelA = xgb.train({\n 'eta': 1, 'max_depth': 3, 'base_score': 0, \"lambda\": 0\n}, XdA, 1)\nprint(modelA.get_dump(with_stats=True)[0])", "Create Model B\nThis is identical to Model A, except Cough is more important because it has its own marginal effect in addition to the original AND function in Model A.", "yB = yA + X[:,1] * 10\n\n# train a model with single tree\nXdB = xgb.DMatrix(X, label=yB)\nmodelB = xgb.train({\n 'eta': 1, 'max_depth': 3, 'base_score': 0, \"lambda\": 0\n}, XdB, 1)\nprint(modelB.get_dump(with_stats=True)[0])", "SHAP Values", "shap_valuesA = modelA.predict(Xd, pred_contribs=True)\nshap_valuesA[0]\n\nshap_valuesB = modelB.predict(Xd, pred_contribs=True)\nshap_valuesB[0]", "Saabas Values", "saabas_valuesA = modelA.predict(Xd, pred_contribs=True, approx_contribs=True)\nsaabas_valuesA[0]\n\nsaabas_valuesB = modelB.predict(Xd, pred_contribs=True, approx_contribs=True)\nsaabas_valuesB[0]", "mean(abs(SHAP Values))", "np.abs(shap_valuesA).mean(0)\n\nnp.abs(shap_valuesB).mean(0)", "mean(abs(Saabas Values))\nNote that the mean absolute Saabas values happen to be identical to the mean 
absolute SHAP values in this simple example, but in general this is not true.", "np.abs(saabas_valuesA).mean(0)\n\nnp.abs(saabas_valuesB).mean(0)", "Split count", "tmp = modelA.get_score(importance_type=\"weight\")\nsplitsA_fever = tmp[\"f0\"]\nsplitsA_cough = tmp[\"f1\"]\nsplitsA_fever,splitsA_cough\n\ntmp = modelB.get_score(importance_type=\"weight\")\nsplitsB_fever = tmp[\"f0\"]\nsplitsB_cough = tmp[\"f1\"]\nsplitsB_fever,splitsB_cough", "Gain\nFor some reason XGBoost averages the gain instead of summing, as is classically proposed by Breiman, Friedman and others. So we undo the average by multiplying by the split count. (The averaged version of the gain is also inconsistent, but just not with this example.)", "tmp = modelA.get_score(importance_type=\"gain\")\ngainA_fever = tmp[\"f0\"]*splitsA_fever\ngainA_cough = tmp[\"f1\"]*splitsA_cough \ntotal = gainA_fever+gainA_cough\ngainA_fever /= total / 100\ngainA_cough /= total / 100\ngainA_fever,gainA_cough\n\ntmp[\"f0\"]\n\ntmp[\"f1\"]\n\ntmp = modelB.get_score(importance_type=\"gain\")\ngainB_fever = tmp[\"f0\"] * splitsB_fever\ngainB_cough = tmp[\"f1\"] * splitsB_cough \ntotal = gainB_fever + gainB_cough\ngainB_fever /= total / 100\ngainB_cough /= total / 100\ngainB_fever, gainB_cough\n\ntmp[\"f0\"]*splitsB_fever/2000\n\ntmp[\"f1\"]*splitsB_cough\n\n1250000.0/2000\n\n(90+10+0+0)/4\n\n((90-25)**2 + (10-25)**2 + (0-25)**2 + (0-25)**2)/4\n\n((90-25)**2 + (10-25)**2 + (0-25)**2 + (0-25)**2)/4\n\n((90-50)**2 + (10-50)**2 + (0-0)**2 + (0-0)**2)/4", "Permutation\nXGBoost does not implement permutation importance so we compute it ourselves.", "def permute_importance(model, y):\n vals_fever = []\n Xtmp = X.copy()\n inds = list(range(Xtmp.shape[0]))\n for i in range(1000):\n np.random.shuffle(inds)\n Xtmp[:,0] = Xtmp[inds,0]\n err = y - model.predict(xgb.DMatrix(Xtmp))\n vals_fever.append(np.mean(np.sqrt(err*err)))\n \n vals_cough = []\n Xtmp = X.copy()\n inds = list(range(Xtmp.shape[0]))\n for i in range(1000):\n 
np.random.shuffle(inds)\n Xtmp[:,1] = Xtmp[inds,1]\n err = y - model.predict(xgb.DMatrix(Xtmp))\n vals_cough.append(np.mean(np.sqrt(err*err)))\n return np.mean(vals_fever),np.mean(vals_cough)\n\npermuteA_fever,permuteA_cough = permute_importance(modelA, yA)\npermuteA_fever,permuteA_cough\n\npermuteB_fever,permuteB_cough = permute_importance(modelB, yB)\npermuteB_fever,permuteB_cough", "Weighted Split Count\nThe weighted split count is another option in XGBoost, it is not inconsistent in this example, but is for other scenarios.", "modelA.get_score(importance_type=\"cover\")\n\nmodelB.get_score(importance_type=\"cover\")", "Make plot\nHere we make the core bar plot for Figure 1 of the paper.", "# fever\nf = pl.figure(figsize=(7,6))\npl.subplot(1,2,1)\nd = 2\nvalues_A = [\n permuteA_fever,\n splitsA_fever,\n gainA_fever,\n np.abs(shap_valuesA).mean(0)[0],\n saabas_valuesA[0,0],\n shap_valuesA[0,0]\n]\ndisplay_A = [str(int(round(v))) for v in values_A]\ndisplay_A[2] = str(int(display_A[2]))+\"%\"\npositions_A = [\n 1,\n 4,\n 7,\n 10,\n 13+d,\n 16+d\n]\nvalues_B = [\n permuteA_cough,\n splitsA_cough,\n gainA_cough,\n np.abs(shap_valuesA).mean(0)[1],\n saabas_valuesA[0,1],\n shap_valuesA[0,1]\n]\ndisplay_B = [str(int(round(v))) for v in values_B]\ndisplay_B[2] = str(int(display_B[2]))+\"%\"\npositions_B = [\n 0,\n 3,\n 6,\n 9,\n 12+d,\n 15+d\n]\npl.barh(positions_A, values_A, color=\"#008BE0\")\npl.barh(positions_B, values_B, color=\"#008BE0\")\npl.yticks([])\npl.axis('off')\nfor i, v in enumerate(values_A):\n pl.text(v + 3, positions_A[i]-0.25, str(display_A[i]), color='#008BE0', fontweight='bold')\nfor i, v in enumerate(values_B):\n pl.text(v + 3, positions_B[i]-0.25, str(display_B[i]), color='#008BE0', fontweight='bold')\n\n# cough\npl.subplot(1,2,2)\nd = 2\nvalues_A = [\n permuteB_fever,\n splitsB_fever,\n gainB_fever,\n np.abs(shap_valuesB).mean(0)[0],\n saabas_valuesB[0,0],\n shap_valuesB[0,0]\n]\ndisplay_A = [str(int(round(v))) for v in values_A]\ndisplay_A[2] = 
display_A[2]+\"%\"\npositions_A = [\n 1,\n 4,\n 7,\n 10,\n 13+d,\n 16+d\n]\nvalues_B = [\n permuteB_cough,\n splitsB_cough,\n gainB_cough,\n np.abs(shap_valuesB).mean(0)[1],\n saabas_valuesB[0,1],\n shap_valuesB[0,1]\n]\ndisplay_B = [str(int(round(v))) for v in values_B]\ndisplay_B[2] = str(int(display_B[2]))+\"%\"\npositions_B = [\n 0,\n 3,\n 6,\n 9,\n 12+d,\n 15+d\n]\npl.barh(positions_A, values_A, color=\"#FF165A\")\npl.barh(positions_B, values_B, color=\"#FF165A\")\npl.yticks([])\npl.axis('off')\nfor i, v in enumerate(values_A):\n pl.text(v + 3, positions_A[i]-0.25, str(display_A[i]), color='#FF165A', fontweight='bold')\nfor i, v in enumerate(values_B):\n pl.text(v + 3, positions_B[i]-0.25, str(display_B[i]), color='#FF165A', fontweight='bold')\n \npl.show()\n#pl.savefig(\"data/bar.pdf\")" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
deehzee/cs231n
assignment2/FullyConnectedNets.ipynb
mit
[ "Fully-Connected Neural Nets\nIn the previous homework you implemented a fully-connected two-layer neural network on CIFAR-10. The implementation was simple but not very modular since the loss and gradient were computed in a single monolithic function. This is manageable for a simple two-layer network, but would become impractical as we move to bigger models. Ideally we want to build networks using a more modular design so that we can implement different layer types in isolation and then snap them together into models with different architectures.\nIn this exercise we will implement fully-connected networks using a more modular approach. For each layer we will implement a forward and a backward function. The forward function will receive inputs, weights, and other parameters and will return both an output and a cache object storing data needed for the backward pass, like this:\n```python\ndef layer_forward(x, w):\n \"\"\" Receive inputs x and weights w \"\"\"\n # Do some computations ...\n z = # ... 
some intermediate value\n # Do some more computations ...\n out = # the output\ncache = (x, w, z, out) # Values we need to compute gradients\nreturn out, cache\n```\nThe backward pass will receive upstream derivatives and the cache object, and will return gradients with respect to the inputs and weights, like this:\n```python\ndef layer_backward(dout, cache):\n \"\"\"\n Receive derivative of loss with respect to outputs and cache,\n and compute derivative with respect to inputs.\n \"\"\"\n # Unpack cache values\n x, w, z, out = cache\n# Use values in cache to compute derivatives\n dx = # Derivative of loss with respect to x\n dw = # Derivative of loss with respect to w\nreturn dx, dw\n```\nAfter implementing a bunch of layers this way, we will be able to easily combine them to build classifiers with different architectures.\nIn addition to implementing fully-connected networks of arbitrary depth, we will also explore different update rules for optimization, and introduce Dropout as a regularizer and Batch Normalization as a tool to more efficiently optimize deep networks.", "# As usual, a bit of setup\nfrom __future__ import absolute_import, division, print_function\n\nimport time\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport seaborn\n\nfrom cs231n.classifiers.fc_net import *\nfrom cs231n.data_utils import get_CIFAR10_data\nfrom cs231n.gradient_check import eval_numerical_gradient\nfrom cs231n.gradient_check import eval_numerical_gradient_array\nfrom cs231n.solver import Solver\n\n%matplotlib inline\nplt.rcParams['figure.figsize'] = (10.0, 8.0)\nplt.rcParams['image.interpolation'] = 'nearest'\nplt.rcParams['image.cmap'] = 'gray'\n\n# for auto-reloading external modules\n# see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython\n%load_ext autoreload\n%autoreload 2\n\ndef rel_error(x, y):\n \"\"\" returns relative error \"\"\"\n return np.max(np.abs(x - y) \\\n / (np.maximum(1e-8, np.abs(x) + np.abs(y))))\n\n# Load the 
(preprocessed) CIFAR10 data.\n\ndata = get_CIFAR10_data()\nfor k, v in data.iteritems():\n print('{}: {}'.format(k, v.shape))", "Affine layer: forward\nOpen the file cs231n/layers.py and implement the affine_forward function.\nOnce you are done you can test your implementation by running the following:", "# Test the affine_forward function\n\nnum_inputs = 2\ninput_shape = (4, 5, 6)\noutput_dim = 3\n\ninput_size = num_inputs * np.prod(input_shape)\nweight_size = output_dim * np.prod(input_shape)\n\nx = np.linspace(-0.1, 0.5, num=input_size).reshape(\n num_inputs, *input_shape)\nw = np.linspace(-0.2, 0.3, num=weight_size).reshape(\n np.prod(input_shape), output_dim)\nb = np.linspace(-0.3, 0.1, num=output_dim)\n\nout, _ = affine_forward(x, w, b)\ncorrect_out = np.array([[ 1.49834967, 1.70660132, 1.91485297],\n [ 3.25553199, 3.5141327, 3.77273342]])\n\n# Compare your output with ours. The error should be around 1e-9.\nprint('Testing affine_forward function:')\nprint('difference:', rel_error(out, correct_out))", "Affine layer: backward\nNow implement the affine_backward function and test your implementation using numeric gradient checking.", "# Test the affine_backward function\n\nx = np.random.randn(10, 2, 3)\nw = np.random.randn(6, 5)\nb = np.random.randn(5)\ndout = np.random.randn(10, 5)\n\ndx_num = eval_numerical_gradient_array(\n lambda x: affine_forward(x, w, b)[0], x, dout)\ndw_num = eval_numerical_gradient_array(\n lambda w: affine_forward(x, w, b)[0], w, dout)\ndb_num = eval_numerical_gradient_array(\n lambda b: affine_forward(x, w, b)[0], b, dout)\n\n_, cache = affine_forward(x, w, b)\ndx, dw, db = affine_backward(dout, cache)\n\n# The error should be around 1e-10\nprint('Testing affine_backward function:')\nprint('dx error:', rel_error(dx_num, dx))\nprint('dw error:', rel_error(dw_num, dw))\nprint('db error:', rel_error(db_num, db))", "ReLU layer: forward\nImplement the forward pass for the ReLU activation function in the relu_forward function and test your 
implementation using the following:", "# Test the relu_forward function\n\nx = np.linspace(-0.5, 0.5, num=12).reshape(3, 4)\n\nout, _ = relu_forward(x)\ncorrect_out = np.array(\n [[ 0., 0., 0., 0., ],\n [ 0., 0., 0.04545455, 0.13636364,],\n [ 0.22727273, 0.31818182, 0.40909091, 0.5, ]])\n\n# Compare your output with ours. The error should be around 1e-8\nprint('Testing relu_forward function:')\nprint('difference:', rel_error(out, correct_out))", "ReLU layer: backward\nNow implement the backward pass for the ReLU activation function in the relu_backward function and test your implementation using numeric gradient checking:", "x = np.random.randn(10, 10)\ndout = np.random.randn(*x.shape)\n\ndx_num = eval_numerical_gradient_array(\n lambda x: relu_forward(x)[0], x, dout)\n\n_, cache = relu_forward(x)\ndx = relu_backward(dout, cache)\n\n# The error should be around 1e-12\nprint('Testing relu_backward function:')\nprint('dx error:', rel_error(dx_num, dx))", "\"Sandwich\" layers\nThere are some common patterns of layers that are frequently used in neural nets. For example, affine layers are frequently followed by a ReLU nonlinearity. 
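Chaining two layers amounts to composing their forward passes and running their backward passes in reverse order, threading the caches through. The functions below are a NumPy sketch of that pattern (the `_sketch` names are hypothetical, not the `cs231n/layer_utils.py` API):

```python
import numpy as np

def affine_relu_forward_sketch(x, w, b):
    """Affine layer followed by ReLU; cache what each backward pass needs."""
    a = x.reshape(x.shape[0], -1).dot(w) + b   # affine forward
    out = np.maximum(0, a)                     # ReLU forward
    cache = (x, w, a)
    return out, cache

def affine_relu_backward_sketch(dout, cache):
    """Run the two backward passes in reverse order of the forward passes."""
    x, w, a = cache
    da = dout * (a > 0)                        # ReLU backward: gate the gradient
    x_flat = x.reshape(x.shape[0], -1)
    dw = x_flat.T.dot(da)                      # affine backward
    db = da.sum(axis=0)
    dx = da.dot(w.T).reshape(x.shape)
    return dx, dw, db
```

Any number of such "sandwich" layers can be stacked the same way: forward passes left to right, backward passes right to left over the saved caches.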
To make these common patterns easy, we define several convenience layers in the file cs231n/layer_utils.py.\nFor now take a look at the affine_relu_forward and affine_relu_backward functions, and run the following to numerically gradient check the backward pass:", "from cs231n.layer_utils import affine_relu_forward, \\\n affine_relu_backward\n\nx = np.random.randn(2, 3, 4)\nw = np.random.randn(12, 10)\nb = np.random.randn(10)\ndout = np.random.randn(2, 10)\n\nout, cache = affine_relu_forward(x, w, b)\ndx, dw, db = affine_relu_backward(dout, cache)\n\ndx_num = eval_numerical_gradient_array(\n lambda x: affine_relu_forward(x, w, b)[0], x, dout)\ndw_num = eval_numerical_gradient_array(\n lambda w: affine_relu_forward(x, w, b)[0], w, dout)\ndb_num = eval_numerical_gradient_array(\n lambda b: affine_relu_forward(x, w, b)[0], b, dout)\n\nprint('Testing affine_relu_forward:')\nprint('dx error:', rel_error(dx_num, dx))\nprint('dw error:', rel_error(dw_num, dw))\nprint('db error:', rel_error(db_num, db))", "Loss layers: Softmax and SVM\nYou implemented these loss functions in the last assignment, so we'll give them to you for free here. You should still make sure you understand how they work by looking at the implementations in cs231n/layers.py.\nYou can make sure that the implementations are correct by running the following:", "num_classes, num_inputs = 10, 50\nx = 0.001 * np.random.randn(num_inputs, num_classes)\ny = np.random.randint(num_classes, size=num_inputs)\n\ndx_num = eval_numerical_gradient(\n lambda x: svm_loss(x, y)[0], x, verbose=False)\nloss, dx = svm_loss(x, y)\n\n# Test svm_loss function. Loss should be around 9 and dx error\n# should be 1e-9\nprint('Testing svm_loss...')\nprint('loss:', loss)\nprint('dx error:', rel_error(dx_num, dx))\n\ndx_num = eval_numerical_gradient(\n lambda x: softmax_loss(x, y)[0], x, verbose=False)\nloss, dx = softmax_loss(x, y)\n\n# Test softmax_loss function. 
Loss should be 2.3 and dx error\n# should be 1e-8\nprint('\\nTesting softmax_loss:')\nprint('loss:', loss)\nprint('dx error:', rel_error(dx_num, dx))", "Two-layer network\nIn the previous assignment you implemented a two-layer neural network in a single monolithic class. Now that you have implemented modular versions of the necessary layers, you will reimplement the two layer network using these modular implementations.\nOpen the file cs231n/classifiers/fc_net.py and complete the implementation of the TwoLayerNet class. This class will serve as a model for the other networks you will implement in this assignment, so read through it to make sure you understand the API. You can run the cell below to test your implementation.", "N, D, H, C = 3, 5, 50, 7\nX = np.random.randn(N, D)\ny = np.random.randint(C, size=N)\n\nstd = 1e-2\nmodel = TwoLayerNet(input_dim=D, hidden_dim=H, num_classes=C,\n weight_scale=std)\n\nprint('Testing initialization ... ')\nW1_std = abs(model.params['W1'].std() - std)\nb1 = model.params['b1']\nW2_std = abs(model.params['W2'].std() - std)\nb2 = model.params['b2']\nassert W1_std < std / 10, 'First layer weights do not seem right'\nassert np.all(b1 == 0), 'First layer biases do not seem right'\nassert W2_std < std / 10, 'Second layer weights do not seem right'\nassert np.all(b2 == 0), 'Second layer biases do not seem right'\n\nprint('Testing test-time forward pass ... 
')\nmodel.params['W1'] = np.linspace(-0.7, 0.3, num=D*H).reshape(D, H)\nmodel.params['b1'] = np.linspace(-0.1, 0.9, num=H)\nmodel.params['W2'] = np.linspace(-0.3, 0.4, num=H*C).reshape(H, C)\nmodel.params['b2'] = np.linspace(-0.9, 0.1, num=C)\nX = np.linspace(-5.5, 4.5, num=N*D).reshape(D, N).T\nscores = model.loss(X)\ncorrect_scores = np.asarray(\n [[11.53165108, 12.2917344, 13.05181771, 13.81190102,\n 14.57198434, 15.33206765, 16.09215096],\n [12.05769098, 12.74614105, 13.43459113, 14.1230412,\n 14.81149128, 15.49994135, 16.18839143],\n [12.58373087, 13.20054771, 13.81736455, 14.43418138,\n 15.05099822, 15.66781506, 16.2846319 ]])\nscores_diff = np.abs(scores - correct_scores).sum()\nassert scores_diff < 1e-6, 'Problem with test-time forward pass'\n\nprint('Testing training loss (no regularization)')\ny = np.asarray([0, 5, 1])\nloss, grads = model.loss(X, y)\ncorrect_loss = 3.4702243556\nassert abs(loss - correct_loss) < 1e-10, \\\n 'Problem with training-time loss'\n\nmodel.reg = 1.0\nloss, grads = model.loss(X, y)\ncorrect_loss = 26.5948426952\nassert abs(loss - correct_loss) < 1e-10, \\\n 'Problem with regularization loss'\n\nfor reg in [0.0, 0.7]:\n print('Running numeric gradient check with reg =', reg)\n model.reg = reg\n loss, grads = model.loss(X, y)\n\n for name in sorted(grads):\n f = lambda _: model.loss(X, y)[0]\n grad_num = eval_numerical_gradient(\n f, model.params[name], verbose=False)\n print('{} relative error: {:.2e}'.format(\n name, rel_error(grad_num, grads[name])))", "Solver\nIn the previous assignment, the logic for training models was coupled to the models themselves. Following a more modular design, for this assignment we have split the logic for training models into a separate class.\nOpen the file cs231n/solver.py and read through it to familiarize yourself with the API. 
After doing so, use a Solver instance to train a TwoLayerNet that achieves at least 50% accuracy on the validation set.", "model = TwoLayerNet()\nsolver = None\n\n###################################################################\n# TODO: Use a Solver instance to train a TwoLayerNet that #\n# achieves at least 50% accuracy on the validation set. #\n###################################################################\ninput_dim = 3 * 32 * 32\nnum_classes = 10\n\nhidden_dim = 200\nweight_scale = np.sqrt(2.0 / data['X_train'].shape[0])\nreg = 1\nlearning_rate = 2e-3\nlr_decay = 0.7\nbatch_size = 250\nnum_epochs = 5\nprint_every = 100\nverbose = True\n\nmodel = TwoLayerNet(\n input_dim=input_dim,\n hidden_dim=hidden_dim,\n num_classes=num_classes,\n weight_scale=weight_scale,\n reg=reg,\n)\nsolver = Solver(\n model, data,\n update_rule='sgd',\n optim_config={'learning_rate': learning_rate},\n lr_decay=lr_decay,\n batch_size=batch_size,\n num_epochs=num_epochs,\n print_every=print_every,\n verbose=verbose,\n)\nsolver.train()\n###################################################################\n# END OF YOUR CODE #\n###################################################################\n\n# Run this cell to visualize training loss and train / val accuracy\n\nplt.subplot(2, 1, 1)\nplt.title('Training loss')\nplt.plot(solver.loss_history, 'o')\nplt.xlabel('Iteration')\n\nplt.subplot(2, 1, 2)\nplt.title('Accuracy')\nplt.plot(solver.train_acc_history, '-o', label='train')\nplt.plot(solver.val_acc_history, '-o', label='val')\nplt.plot([0.5] * len(solver.val_acc_history), 'k--')\nplt.xlabel('Epoch')\nplt.legend(loc='lower right')\nplt.gcf().set_size_inches(15, 12)\nplt.show()", "Multilayer network\nNext you will implement a fully-connected network with an arbitrary number of hidden layers.\nRead through the FullyConnectedNet class in the file cs231n/classifiers/fc_net.py.\nImplement the initialization, the forward pass, and the backward pass. 
For the moment don't worry about implementing dropout or batch normalization; we will add those features soon.\nInitial loss and gradient check\nAs a sanity check, run the following to check the initial loss and to gradient check the network both with and without regularization. Do the initial losses seem reasonable?\nFor gradient checking, you should expect to see errors around 1e-6 or less.", "N, D, H1, H2, C = 2, 15, 20, 30, 10\nX = np.random.randn(N, D)\ny = np.random.randint(C, size=(N,))\n\n### [djn] ===>\n# model = FullyConnectedNet([H1, H2], input_dim=D, num_classes=C,\n# reg=reg, weight_scale=5e-2,\n# dtype=np.float64)\n\n# scores = model.loss(X)\n# print('scores:', scores)\n\n### [djn] <===\n\nfor reg in [0, 3.14, 30000]:\n print('Running check with reg =', reg)\n model = FullyConnectedNet([H1, H2], input_dim=D, num_classes=C,\n #loss_func='softmax',\n reg=reg, weight_scale=5e-2,\n dtype=np.float64)\n\n loss, grads = model.loss(X, y)\n print('Initial loss:', loss)\n\n for name in sorted(grads):\n f = lambda _: model.loss(X, y)[0]\n grad_num = eval_numerical_gradient(\n f, model.params[name], verbose=False, h=1e-5)\n print('{} relative error: {:.2e}'.format(\n name, rel_error(grad_num, grads[name])))", "As another sanity check, make sure you can overfit a small dataset of 50 images. First we will try a three-layer network with 100 units in each hidden layer. 
You will need to tweak the learning rate and initialization scale, but you should be able to overfit and achieve 100% training accuracy within 20 epochs.", "# TODO: Use a three-layer Net to overfit 50 training examples.\n\nnum_train = 50\nsmall_data = {\n 'X_train': data['X_train'][:num_train],\n 'y_train': data['y_train'][:num_train],\n 'X_val': data['X_val'],\n 'y_val': data['y_val'],\n}\n\nweight_scale = np.sqrt(2.0 / num_train) #1e-2\nlearning_rate = 1e-3 #1e-4\nmodel = FullyConnectedNet([100, 100],\n #loss_func='svm',\n weight_scale=weight_scale,\n dtype=np.float64)\nsolver = Solver(model, small_data,\n print_every=10, num_epochs=20, batch_size=25,\n update_rule='sgd',\n optim_config={\n 'learning_rate': learning_rate,\n }\n )\nsolver.train()\n\nplt.plot(solver.loss_history, 'o')\nplt.title('Training loss history')\nplt.xlabel('Iteration')\nplt.ylabel('Training loss')\nplt.show()", "Now try to use a five-layer network with 100 units on each layer to overfit 50 training examples. Again you will have to adjust the learning rate and weight initialization, but you should be able to achieve 100% training accuracy within 20 epochs.", "# TODO: Use a five-layer Net to overfit 50 training examples.\n\nnum_train = 50\nsmall_data = {\n 'X_train': data['X_train'][:num_train],\n 'y_train': data['y_train'][:num_train],\n 'X_val': data['X_val'],\n 'y_val': data['y_val'],\n}\n\nlearning_rate = 1e-4\nweight_scale = np.sqrt(2.0 / num_train) # 1e-5\nmodel = FullyConnectedNet([100, 100, 100, 100],\n #loss_func='svm',\n weight_scale=weight_scale,\n dtype=np.float64)\nsolver = Solver(model, small_data,\n print_every=10, num_epochs=20, batch_size=25,\n update_rule='sgd',\n optim_config={\n 'learning_rate': learning_rate,\n }\n )\nsolver.train()\n\nplt.plot(solver.loss_history, 'o')\nplt.title('Training loss history')\nplt.xlabel('Iteration')\nplt.ylabel('Training loss')\nplt.show()", "Inline question:\nDid you notice anything about the comparative difficulty of training the 
three-layer net vs. training the five-layer net?\nAnswer:\nComparatively, the five-layer net was easier to train.\nUpdate rules\nSo far we have used vanilla stochastic gradient descent (SGD) as our update rule. More sophisticated update rules can make it easier to train deep networks. We will implement a few of the most commonly used update rules and compare them to vanilla SGD.\nSGD+Momentum\nStochastic gradient descent with momentum is a widely used update rule that tends to make deep networks converge faster than vanilla stochastic gradient descent.\nOpen the file cs231n/optim.py and read the documentation at the top of the file to make sure you understand the API. Implement the SGD+momentum update rule in the function sgd_momentum and run the following to check your implementation. You should see errors less than 1e-8.", "from cs231n.optim import sgd_momentum\n\nN, D = 4, 5\nw = np.linspace(-0.4, 0.6, num=N*D).reshape(N, D)\ndw = np.linspace(-0.6, 0.4, num=N*D).reshape(N, D)\nv = np.linspace(0.6, 0.9, num=N*D).reshape(N, D)\n\nconfig = {'learning_rate': 1e-3, 'velocity': v}\nnext_w, _ = sgd_momentum(w, dw, config=config)\n\nexpected_next_w = np.asarray([\n [ 0.1406, 0.20738947, 0.27417895, 0.34096842, 0.40775789],\n [ 0.47454737, 0.54133684, 0.60812632, 0.67491579, 0.74170526],\n [ 0.80849474, 0.87528421, 0.94207368, 1.00886316, 1.07565263],\n [ 1.14244211, 1.20923158, 1.27602105, 1.34281053, 1.4096 ]])\nexpected_velocity = np.asarray([\n [ 0.5406, 0.55475789, 0.56891579, 0.58307368, 0.59723158],\n [ 0.61138947, 0.62554737, 0.63970526, 0.65386316, 0.66802105],\n [ 0.68217895, 0.69633684, 0.71049474, 0.72465263, 0.73881053],\n [ 0.75296842, 0.76712632, 0.78128421, 0.79544211, 0.8096 ]])\n\nprint('next_w error:', rel_error(next_w, expected_next_w))\nprint('velocity error: ',\n rel_error(expected_velocity, config['velocity']))", "Once you have done so, run the following to train a six-layer network with both SGD and SGD+momentum.
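For reference, the classic (non-Nesterov) momentum update that the check above expects can be sketched as follows, assuming the optim.py convention of taking (w, dw, config) and returning (next_w, config), with a default momentum of 0.9:

```python
import numpy as np

def sgd_momentum(w, dw, config=None):
    """Sketch of SGD with momentum: keep a velocity v, set v = mu * v - lr * dw, then w += v."""
    if config is None:
        config = {}
    config.setdefault('learning_rate', 1e-2)
    config.setdefault('momentum', 0.9)
    v = config.get('velocity', np.zeros_like(w))

    v = config['momentum'] * v - config['learning_rate'] * dw
    next_w = w + v
    config['velocity'] = v
    return next_w, config
```

With the inputs from the check above (learning rate 1e-3 and the given velocity), this gives next_w[0, 0] = -0.4 + (0.9 * 0.6 - 1e-3 * (-0.6)) = 0.1406, matching expected_next_w.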
You should see the SGD+momentum update rule converge faster.", "num_train = 4000\nsmall_data = {\n 'X_train': data['X_train'][:num_train],\n 'y_train': data['y_train'][:num_train],\n 'X_val': data['X_val'],\n 'y_val': data['y_val'],\n}\n\nsolvers = {}\n\nfor update_rule in ['sgd', 'sgd_momentum']:\n print('running with ', update_rule)\n model = FullyConnectedNet([100, 100, 100, 100, 100],\n weight_scale=5e-2)\n\n solver = Solver(model, small_data,\n num_epochs=5, batch_size=100,\n update_rule=update_rule,\n optim_config={\n 'learning_rate': 1e-2,\n },\n verbose=True)\n solvers[update_rule] = solver\n solver.train()\n print('')\n\nplt.subplot(3, 1, 1)\nplt.title('Training loss')\nplt.xlabel('Iteration')\n\nplt.subplot(3, 1, 2)\nplt.title('Training accuracy')\nplt.xlabel('Epoch')\n\nplt.subplot(3, 1, 3)\nplt.title('Validation accuracy')\nplt.xlabel('Epoch')\n\nfor update_rule, solver in solvers.items():\n plt.subplot(3, 1, 1)\n plt.plot(solver.loss_history, 'o', label=update_rule)\n\n plt.subplot(3, 1, 2)\n plt.plot(solver.train_acc_history, '-o', label=update_rule)\n\n plt.subplot(3, 1, 3)\n plt.plot(solver.val_acc_history, '-o', label=update_rule)\n\nfor i in [1, 2, 3]:\n plt.subplot(3, 1, i)\n plt.legend(loc='upper center', ncol=4)\nplt.gcf().set_size_inches(15, 15)\nplt.show()", "RMSProp and Adam\nRMSProp [1] and Adam [2] are update rules that set per-parameter learning rates by using a running average of the second moments of gradients.\nIn the file cs231n/optim.py, implement the RMSProp update rule in the rmsprop function and implement the Adam update rule in the adam function, and check your implementations using the tests below.\n[1] Tijmen Tieleman and Geoffrey Hinton.
\"Lecture 6.5-rmsprop: Divide the gradient by a running average of its recent magnitude.\" COURSERA: Neural Networks for Machine Learning 4 (2012).\n[2] Diederik Kingma and Jimmy Ba, \"Adam: A Method for Stochastic Optimization\", ICLR 2015.", "# Test RMSProp implementation; you should see errors less than 1e-7\nfrom cs231n.optim import rmsprop\n\nN, D = 4, 5\nw = np.linspace(-0.4, 0.6, num=N*D).reshape(N, D)\ndw = np.linspace(-0.6, 0.4, num=N*D).reshape(N, D)\ncache = np.linspace(0.6, 0.9, num=N*D).reshape(N, D)\n\nconfig = {'learning_rate': 1e-2, 'cache': cache}\nnext_w, _ = rmsprop(w, dw, config=config)\n\nexpected_next_w = np.asarray([\n [-0.39223849, -0.34037513, -0.28849239, -0.23659121, -0.18467247],\n [-0.132737, -0.08078555, -0.02881884, 0.02316247, 0.07515774],\n [ 0.12716641, 0.17918792, 0.23122175, 0.28326742, 0.33532447],\n [ 0.38739248, 0.43947102, 0.49155973, 0.54365823, 0.59576619]])\nexpected_cache = np.asarray([\n [ 0.5976, 0.6126277, 0.6277108, 0.64284931, 0.65804321],\n [ 0.67329252, 0.68859723, 0.70395734, 0.71937285, 0.73484377],\n [ 0.75037008, 0.7659518, 0.78158892, 0.79728144, 0.81302936],\n [ 0.82883269, 0.84469141, 0.86060554, 0.87657507, 0.8926 ]])\n\nprint('next_w error:', rel_error(expected_next_w, next_w))\nprint('cache error:', rel_error(expected_cache, config['cache']))\n\n# Test Adam implementation; you should see errors around 1e-7 or less\nfrom cs231n.optim import adam\n\nN, D = 4, 5\nw = np.linspace(-0.4, 0.6, num=N*D).reshape(N, D)\ndw = np.linspace(-0.6, 0.4, num=N*D).reshape(N, D)\nm = np.linspace(0.6, 0.9, num=N*D).reshape(N, D)\nv = np.linspace(0.7, 0.5, num=N*D).reshape(N, D)\n\nconfig = {'learning_rate': 1e-2, 'm': m, 'v': v, 't': 5}\nnext_w, _ = adam(w, dw, config=config)\n\nexpected_next_w = np.asarray([\n [-0.40094747, -0.34836187, -0.29577703, -0.24319299, -0.19060977],\n [-0.1380274, -0.08544591, -0.03286534, 0.01971428, 0.0722929],\n [ 0.1248705, 0.17744702, 0.23002243, 0.28259667, 0.33516969],\n [ 0.38774145, 
0.44031188, 0.49288093, 0.54544852, 0.59801459]])\nexpected_v = np.asarray([\n [ 0.69966, 0.68908382, 0.67851319, 0.66794809, 0.65738853,],\n [ 0.64683452, 0.63628604, 0.6257431, 0.61520571, 0.60467385,],\n [ 0.59414753, 0.58362676, 0.57311152, 0.56260183, 0.55209767,],\n [ 0.54159906, 0.53110598, 0.52061845, 0.51013645, 0.49966, ]])\nexpected_m = np.asarray([\n [ 0.48, 0.49947368, 0.51894737, 0.53842105, 0.55789474],\n [ 0.57736842, 0.59684211, 0.61631579, 0.63578947, 0.65526316],\n [ 0.67473684, 0.69421053, 0.71368421, 0.73315789, 0.75263158],\n [ 0.77210526, 0.79157895, 0.81105263, 0.83052632, 0.85 ]])\n\nprint('next_w error: ', rel_error(expected_next_w, next_w))\nprint('v error: ', rel_error(expected_v, config['v']))\nprint('m error: ', rel_error(expected_m, config['m']))", "Once you have debugged your RMSProp and Adam implementations, run the following to train a pair of deep networks using these new update rules:", "learning_rates = {'rmsprop': 1e-4, 'adam': 1e-3}\nfor update_rule in ['adam', 'rmsprop']:\n print('running with ', update_rule)\n model = FullyConnectedNet([100, 100, 100, 100, 100],\n weight_scale=5e-2)\n\n solver = Solver(model, small_data,\n num_epochs=5, batch_size=100,\n update_rule=update_rule,\n optim_config={\n 'learning_rate': learning_rates[update_rule]\n },\n verbose=True)\n solvers[update_rule] = solver\n solver.train()\n print('')\n\nplt.subplot(3, 1, 1)\nplt.title('Training loss')\nplt.xlabel('Iteration')\n\nplt.subplot(3, 1, 2)\nplt.title('Training accuracy')\nplt.xlabel('Epoch')\n\nplt.subplot(3, 1, 3)\nplt.title('Validation accuracy')\nplt.xlabel('Epoch')\n\nfor update_rule, solver in solvers.items():\n plt.subplot(3, 1, 1)\n plt.plot(solver.loss_history, 'o', label=update_rule)\n\n plt.subplot(3, 1, 2)\n plt.plot(solver.train_acc_history, '-o', label=update_rule)\n\n plt.subplot(3, 1, 3)\n plt.plot(solver.val_acc_history, '-o', label=update_rule)\n \nfor i in [1, 2, 3]:\n plt.subplot(3, 1, i)\n plt.legend(loc='upper center',
ncol=4)\nplt.gcf().set_size_inches(15, 15)\nplt.show()", "Train a good model!\nTrain the best fully-connected model that you can on CIFAR-10, storing your best model in the best_model variable. We require you to get at least 50% accuracy on the validation set using a fully-connected net.\nIf you are careful it should be possible to get accuracies above 55%, but we don't require it for this part and won't assign extra credit for doing so. Later in the assignment we will ask you to train the best convolutional network that you can on CIFAR-10, and we would prefer that you spend your effort working on convolutional nets rather than fully-connected nets.\nYou might find it useful to complete the BatchNormalization.ipynb and Dropout.ipynb notebooks before completing this part, since those techniques can help you train powerful models.", "best_model = None\n###################################################################\n# TODO: Train the best FullyConnectedNet that you can on #\n# CIFAR-10. You might find batch normalization and dropout useful. #\n# Store your best model in the best_model variable. 
#\n###################################################################\ninput_dim = 3 * 32 * 32\nnum_classes = 10\n\n#hidden_dims = [100, 200, 50, 50, 50]\nweight_scale = np.sqrt(2.0 / data['X_train'].shape[0])\n#reg = 1\n\n#learning_rate = 2e-3\n#lr_decay = 0.7\n\nmodel = FullyConnectedNet(\n input_dim=3 * 32 * 32,\n num_classes=10,\n #\n hidden_dims=[450, 400, 300, 200], \n #hidden_dims=[200, 400, 100, 100],\n loss_func='softmax',\n dropout=0.95, #0.95,\n use_batchnorm=True,\n weight_scale=weight_scale,\n reg=1e-4, #1e-4,\n)\nsolver = Solver(\n model, data,\n update_rule='adam',\n optim_config={'learning_rate': 2e-3},\n lr_decay=0.7,\n batch_size=250,\n num_epochs=5,\n print_every=100,\n verbose=True,\n)\nsolver.train()\n\n# visualize training loss and train / val accuracy\n\nplt.subplot(2, 1, 1)\nplt.title('Training loss')\nplt.plot(solver.loss_history, 'o')\nplt.xlabel('Iteration')\n\nplt.subplot(2, 1, 2)\nplt.title('Accuracy')\nplt.plot(solver.train_acc_history, '-o', label='train')\nplt.plot(solver.val_acc_history, '-o', label='val')\nplt.plot([0.5] * len(solver.val_acc_history), 'k--')\nplt.xlabel('Epoch')\nplt.legend(loc='lower right')\nplt.gcf().set_size_inches(15, 12)\nplt.show()\n###################################################################\n# END OF YOUR CODE #\n###################################################################", "Test your model\nRun your best model on the validation and test sets. You should achieve above 50% accuracy on the validation set.", "best_model = model\nX_test = data['X_test']\nX_val = data['X_val']\ny_test = data['y_test']\ny_val = data['y_val']\n\ny_test_pred = np.argmax(best_model.loss(X_test), axis=1)\ny_val_pred = np.argmax(best_model.loss(X_val), axis=1)\nprint('Validation set accuracy:', (y_val_pred == y_val).mean())\nprint('Test set accuracy:', (y_test_pred == y_test).mean())" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
UltronAI/Deep-Learning
CS231n/assignment2/PyTorch.ipynb
mit
[ "Training a ConvNet in PyTorch\nIn this notebook, you'll learn how to use the powerful PyTorch framework to specify a conv net architecture and train it on the CIFAR-10 dataset.", "import torch\nimport torch.nn as nn\nimport torch.optim as optim\nfrom torch.autograd import Variable\nfrom torch.utils.data import DataLoader\nfrom torch.utils.data import sampler\n\nimport torchvision.datasets as dset\nimport torchvision.transforms as T\n\nimport numpy as np\n\nimport timeit", "What's this PyTorch business?\nYou've written a lot of code in this assignment to provide a whole host of neural network functionality. Dropout, Batch Norm, and 2D convolutions are some of the workhorses of deep learning in computer vision. You've also worked hard to make your code efficient and vectorized.\nFor the last part of this assignment, though, we're going to leave behind your beautiful codebase and instead migrate to one of two popular deep learning frameworks: in this instance, PyTorch (or TensorFlow, if you switch over to that notebook). \nWhy?\n\nOur code will now run on GPUs! Much faster training. When using a framework like PyTorch or TensorFlow you can harness the power of the GPU for your own custom neural network architectures without having to write CUDA code directly (which is beyond the scope of this class).\nWe want you to be ready to use one of these frameworks for your project so you can experiment more efficiently than if you were writing every feature you want to use by hand. \nWe want you to stand on the shoulders of giants! TensorFlow and PyTorch are both excellent frameworks that will make your lives a lot easier, and now that you understand their guts, you are free to use them :) \nWe want you to be exposed to the sort of deep learning code you might run into in academia or industry.
\n\nHow will I learn PyTorch?\nIf you've used Torch before, but are new to PyTorch, this tutorial might be of use: http://pytorch.org/tutorials/beginner/former_torchies_tutorial.html\nOtherwise, this notebook will walk you through much of what you need to do to train models in Torch. See the end of the notebook for some links to helpful tutorials if you want to learn more or need further clarification on topics that aren't fully explained here.\nLoad Datasets\nWe load the CIFAR-10 dataset. This might take a couple minutes the first time you do it, but the files should stay cached after that.", "class ChunkSampler(sampler.Sampler):\n \"\"\"Samples elements sequentially from some offset. \n Arguments:\n num_samples: # of desired datapoints\n start: offset where we should start selecting from\n \"\"\"\n def __init__(self, num_samples, start = 0):\n self.num_samples = num_samples\n self.start = start\n\n def __iter__(self):\n return iter(range(self.start, self.start + self.num_samples))\n\n def __len__(self):\n return self.num_samples\n\nNUM_TRAIN = 49000\nNUM_VAL = 1000\n\ncifar10_train = dset.CIFAR10('./cs231n/datasets', train=True, download=True,\n transform=T.ToTensor())\nloader_train = DataLoader(cifar10_train, batch_size=64, sampler=ChunkSampler(NUM_TRAIN, 0))\n\ncifar10_val = dset.CIFAR10('./cs231n/datasets', train=True, download=True,\n transform=T.ToTensor())\nloader_val = DataLoader(cifar10_val, batch_size=64, sampler=ChunkSampler(NUM_VAL, NUM_TRAIN))\n\ncifar10_test = dset.CIFAR10('./cs231n/datasets', train=False, download=True,\n transform=T.ToTensor())\nloader_test = DataLoader(cifar10_test, batch_size=64)\n", "For now, we're going to use a CPU-friendly datatype. 
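Stepping back to the data loading for a moment: the ChunkSampler above does nothing more than yield one contiguous block of indices, which is how the 49,000-example training split and the 1,000-example validation split are carved out of the same underlying dataset. A torch-free sketch of that iteration logic (the class name here is hypothetical, and it omits the torch.utils.data.sampler.Sampler base class):

```python
class ChunkSamplerSketch:
    """Yields indices start, start+1, ..., start+num_samples-1, like ChunkSampler."""
    def __init__(self, num_samples, start=0):
        self.num_samples = num_samples
        self.start = start

    def __iter__(self):
        return iter(range(self.start, self.start + self.num_samples))

    def __len__(self):
        return self.num_samples

# The validation sampler starts exactly where the training sampler ends.
print(list(ChunkSamplerSketch(5, 0)))  # [0, 1, 2, 3, 4]
print(list(ChunkSamplerSketch(3, 5)))  # [5, 6, 7]
```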
Later, we'll switch to a datatype that will move all our computations to the GPU and measure the speedup.", "dtype = torch.FloatTensor # the CPU datatype\n\n# Constant to control how frequently we print train loss\nprint_every = 100\n\n# This is a little utility that we'll use to reset the model\n# if we want to re-initialize all our parameters\ndef reset(m):\n if hasattr(m, 'reset_parameters'):\n m.reset_parameters()", "Example Model\nSome assorted tidbits\nLet's start by looking at a simple model. First, note that PyTorch operates on Tensors, which are n-dimensional arrays functionally analogous to numpy's ndarrays, with the additional feature that they can be used for computations on GPUs.\nWe'll provide you with a Flatten function, which we explain here. Remember that our image data (and more relevantly, our intermediate feature maps) are initially N x C x H x W, where:\n* N is the number of datapoints\n* C is the number of channels\n* H is the height of the intermediate feature map in pixels\n* W is the width of the intermediate feature map in pixels\nThis is the right way to represent the data when we are doing something like a 2D convolution, that needs spatial understanding of where the intermediate features are relative to each other. When we input data into fully connected affine layers, however, we want each datapoint to be represented by a single vector -- it's no longer useful to segregate the different channels, rows, and columns of the data. So, we use a \"Flatten\" operation to collapse the C x H x W values per representation into a single long vector. The Flatten function below first reads in the N, C, H, and W values from a given batch of data, and then returns a \"view\" of that data. \"View\" is analogous to numpy's \"reshape\" method: it reshapes x's dimensions to be N x ??, where ??
is allowed to be anything (in this case, it will be C x H x W, but we don't need to specify that explicitly).", "class Flatten(nn.Module):\n def forward(self, x):\n N, C, H, W = x.size() # read in N, C, H, W\n return x.view(N, -1) # \"flatten\" the C * H * W values into a single vector per image", "The example model itself\nThe first step to training your own model is defining its architecture.\nHere's an example of a convolutional neural network defined in PyTorch -- try to understand what each line is doing, remembering that each layer is composed upon the previous layer. We haven't trained anything yet - that'll come next - for now, we want you to understand how everything gets set up. nn.Sequential is a container which applies each layer\none after the other.\nIn that example, you see 2D convolutional layers (Conv2d), ReLU activations, and fully-connected layers (Linear). You also see the Cross-Entropy loss function, and the Adam optimizer being used. \nMake sure you understand why the parameters of the Linear layer are 5408 and 10.", "# Here's where we define the architecture of the model... \nsimple_model = nn.Sequential(\n nn.Conv2d(3, 32, kernel_size=7, stride=2),\n nn.ReLU(inplace=True),\n Flatten(), # see above for explanation\n nn.Linear(5408, 10), # affine layer\n )\n\n# Set the type of all data in this model to be FloatTensor \nsimple_model.type(dtype)\n\nloss_fn = nn.CrossEntropyLoss().type(dtype)\noptimizer = optim.Adam(simple_model.parameters(), lr=1e-2) # lr sets the learning rate of the optimizer", "PyTorch supports many other layer types, loss functions, and optimizers - you will experiment with these next. Here's the official API documentation for these (if any of the parameters used above were unclear, this resource will also be helpful). 
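As for why the Linear layer in simple_model takes 5408 inputs: with a 7x7 kernel, stride 2, and no padding on a 32x32 input, the usual convolution output-size formula gives floor((32 - 7) / 2) + 1 = 13, and the 32 resulting 13x13 feature maps are flattened:

```python
# Conv output size with no padding: floor((input_size - kernel_size) / stride) + 1.
h_in, kernel, stride, n_filters = 32, 7, 2, 32
h_out = (h_in - kernel) // stride + 1  # (32 - 7) // 2 + 1 = 13
flat_dim = n_filters * h_out * h_out   # 32 * 13 * 13 = 5408
print(h_out, flat_dim)                 # 13 5408
```

The 10 outputs simply correspond to the 10 CIFAR-10 classes.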
One note: what we call in the class \"spatial batch norm\" is called \"BatchNorm2D\" in PyTorch.\n\nLayers: http://pytorch.org/docs/nn.html\nActivations: http://pytorch.org/docs/nn.html#non-linear-activations\nLoss functions: http://pytorch.org/docs/nn.html#loss-functions\nOptimizers: http://pytorch.org/docs/optim.html#algorithms\n\nTraining a specific model\nIn this section, we're going to specify a model for you to construct. The goal here isn't to get good performance (that'll be next), but instead to get comfortable with understanding the PyTorch documentation and configuring your own model. \nUsing the code provided above as guidance, and using the following PyTorch documentation, specify a model with the following architecture:\n\n7x7 Convolutional Layer with 32 filters and stride of 1\nReLU Activation Layer\nSpatial Batch Normalization Layer\n2x2 Max Pooling layer with a stride of 2\nAffine layer with 1024 output units\nReLU Activation Layer\nAffine layer from 1024 input units to 10 outputs\n\nAnd finally, set up a cross-entropy loss function and the RMSprop learning rule.", "fixed_model_base = nn.Sequential( # You fill this in!\n )\n\nfixed_model = fixed_model_base.type(dtype)", "To make sure you're doing the right thing, use the following tool to check the dimensionality of your output (it should be 64 x 10, since our batches have size 64 and the output of the final affine layer should be 10, corresponding to our 10 classes):", "## Now we're going to feed a random batch into the model you defined and make sure the output is the right size\nx = torch.randn(64, 3, 32, 32).type(dtype)\nx_var = Variable(x.type(dtype)) # Construct a PyTorch Variable out of your input data\nans = fixed_model(x_var) # Feed it through the model! \n\n# Check to make sure what comes out of your model\n# is the right dimensionality... 
this should be True\n# if you've done everything correctly\nnp.array_equal(np.array(ans.size()), np.array([64, 10])) ", "GPU!\nNow, we're going to switch the dtype of the model and our data to the GPU-friendly tensors, and see what happens... everything is the same, except we are casting our model and input tensors as this new dtype instead of the old one.\nIf this returns false, or otherwise fails in a not-graceful way (i.e., with some error message), you may not have an NVIDIA GPU available on your machine. If you're running locally, we recommend you switch to Google Cloud and follow the instructions to set up a GPU there. If you're already on Google Cloud, something is wrong -- make sure you followed the instructions on how to request and use a GPU on your instance. If you did, post on Piazza or come to Office Hours so we can help you debug.", "# Verify that CUDA is properly configured and you have a GPU available\n\ntorch.cuda.is_available()\n\nimport copy\ngpu_dtype = torch.cuda.FloatTensor\n\nfixed_model_gpu = copy.deepcopy(fixed_model_base).type(gpu_dtype)\n\nx_gpu = torch.randn(64, 3, 32, 32).type(gpu_dtype)\nx_var_gpu = Variable(x.type(gpu_dtype)) # Construct a PyTorch Variable out of your input data\nans = fixed_model_gpu(x_var_gpu) # Feed it through the model! \n\n# Check to make sure what comes out of your model\n# is the right dimensionality... this should be True\n# if you've done everything correctly\nnp.array_equal(np.array(ans.size()), np.array([64, 10]))", "Run the following cell to evaluate the performance of the forward pass running on the CPU:", "%%timeit \nans = fixed_model(x_var)", "... and now the GPU:", "%%timeit \ntorch.cuda.synchronize() # Make sure there are no pending GPU computations\nans = fixed_model_gpu(x_var_gpu) # Feed it through the model! \ntorch.cuda.synchronize() # Make sure there are no pending GPU computations", "You should observe that even a simple forward pass like this is significantly faster on the GPU. 
So for the rest of the assignment (and when you go train your models in assignment 3 and your project!), you should use the GPU datatype for your model and your tensors: as a reminder that is torch.cuda.FloatTensor (in our notebook here as gpu_dtype)\nTrain the model.\nNow that you've seen how to define a model and do a single forward pass of some data through it, let's walk through how you'd actually train one whole epoch over your training data (using the simple_model we provided above).\nMake sure you understand how each PyTorch function used below corresponds to what you implemented in your custom neural network implementation.\nNote that because we are not resetting the weights anywhere below, if you run the cell multiple times, you are effectively training multiple epochs (so your performance should improve).\nFirst, set up an RMSprop optimizer (using a 1e-3 learning rate) and a cross-entropy loss function:", "loss_fn = None\noptimizer = None\npass\n\n\n# This sets the model in \"training\" mode. This is relevant for some layers that may have different behavior\n# in training mode vs testing mode, such as Dropout and BatchNorm. 
\nfixed_model_gpu.train()\n\n# Load one batch at a time.\nfor t, (x, y) in enumerate(loader_train):\n x_var = Variable(x.type(gpu_dtype))\n y_var = Variable(y.type(gpu_dtype).long())\n\n # This is the forward pass: predict the scores for each class, for each x in the batch.\n scores = fixed_model_gpu(x_var)\n \n # Use the correct y values and the predicted y values to compute the loss.\n loss = loss_fn(scores, y_var)\n \n if (t + 1) % print_every == 0:\n print('t = %d, loss = %.4f' % (t + 1, loss.data[0]))\n\n # Zero out all of the gradients for the variables which the optimizer will update.\n optimizer.zero_grad()\n \n # This is the backwards pass: compute the gradient of the loss with respect to each \n # parameter of the model.\n loss.backward()\n \n # Actually update the parameters of the model using the gradients computed by the backwards pass.\n optimizer.step()", "Now you've seen how the training process works in PyTorch. To save you writing boilerplate code, we're providing the following helper functions to help you train for multiple epochs and check the accuracy of your model:", "def train(model, loss_fn, optimizer, num_epochs = 1):\n for epoch in range(num_epochs):\n print('Starting epoch %d / %d' % (epoch + 1, num_epochs))\n model.train()\n for t, (x, y) in enumerate(loader_train):\n x_var = Variable(x.type(gpu_dtype))\n y_var = Variable(y.type(gpu_dtype).long())\n\n scores = model(x_var)\n \n loss = loss_fn(scores, y_var)\n if (t + 1) % print_every == 0:\n print('t = %d, loss = %.4f' % (t + 1, loss.data[0]))\n\n optimizer.zero_grad()\n loss.backward()\n optimizer.step()\n\ndef check_accuracy(model, loader):\n if loader.dataset.train:\n print('Checking accuracy on validation set')\n else:\n print('Checking accuracy on test set') \n num_correct = 0\n num_samples = 0\n model.eval() # Put the model in test mode (the opposite of model.train(), essentially)\n for x, y in loader:\n x_var = Variable(x.type(gpu_dtype), volatile=True)\n\n scores = model(x_var)\n 
_, preds = scores.data.cpu().max(1)\n num_correct += (preds == y).sum()\n num_samples += preds.size(0)\n acc = float(num_correct) / num_samples\n print('Got %d / %d correct (%.2f)' % (num_correct, num_samples, 100 * acc))", "Check the accuracy of the model.\nLet's see the train and check_accuracy code in action -- feel free to use these methods when evaluating the models you develop below.\nYou should get a training loss of around 1.2-1.4, and a validation accuracy of around 50-60%. As mentioned above, if you re-run the cells, you'll be training more epochs, so your performance will improve past these numbers.\nBut don't worry about getting these numbers better -- this was just practice before you tackle designing your own model.", "torch.cuda.random.manual_seed(12345)\nfixed_model_gpu.apply(reset)\ntrain(fixed_model_gpu, loss_fn, optimizer, num_epochs=1)\ncheck_accuracy(fixed_model_gpu, loader_val)", "Don't forget the validation set!\nAnd note that you can use the check_accuracy function to evaluate on either the test set or the validation set, by passing either loader_test or loader_val as the second argument to check_accuracy. You should not touch the test set until you have finished your architecture and hyperparameter tuning, and only run the test set once at the end to report a final value. \nTrain a great model on CIFAR-10!\nNow it's your job to experiment with architectures, hyperparameters, loss functions, and optimizers to train a model that achieves >=70% accuracy on the CIFAR-10 validation set. You can use the check_accuracy and train functions from above.\nThings you should try:\n\nFilter size: Above we used 7x7; this makes pretty pictures but smaller filters may be more efficient\nNumber of filters: Above we used 32 filters. 
Do more or fewer do better?\nPooling vs Strided Convolution: Do you use max pooling or just strided convolutions?\nBatch normalization: Try adding spatial batch normalization after convolution layers and vanilla batch normalization after affine layers. Do your networks train faster?\nNetwork architecture: The network above has two layers of trainable parameters. Can you do better with a deep network? Good architectures to try include:\n[conv-relu-pool]xN -> [affine]xM -> [softmax or SVM]\n[conv-relu-conv-relu-pool]xN -> [affine]xM -> [softmax or SVM]\n[batchnorm-relu-conv]xN -> [affine]xM -> [softmax or SVM]\n\n\nGlobal Average Pooling: Instead of flattening and then having multiple affine layers, perform convolutions until your image gets small (7x7 or so) and then perform an average pooling operation to get to a 1x1 image (1, 1, Filter#), which is then reshaped into a (Filter#) vector. This is used in Google's Inception Network (see Table 1 for their architecture).\nRegularization: Add L2 weight regularization, or perhaps use Dropout.\n\nTips for training\nFor each network architecture that you try, you should tune the learning rate and regularization strength. When doing this there are a couple of important things to keep in mind:\n\nIf the parameters are working well, you should see improvement within a few hundred iterations.\nRemember the coarse-to-fine approach for hyperparameter tuning: start by testing a large range of hyperparameters for just a few training iterations to find the combinations of parameters that are working at all.\nOnce you have found some sets of parameters that seem to work, search more finely around these parameters.
You may need to train for more epochs.\nYou should use the validation set for hyperparameter search, and save your test set for evaluating your architecture on the best parameters as selected by the validation set.\n\nGoing above and beyond\nIf you are feeling adventurous there are many other features you can implement to try and improve your performance. You are not required to implement any of these; however they would be good things to try for extra credit.\n\nAlternative update steps: For the assignment we implemented SGD+momentum, RMSprop, and Adam; you could try alternatives like AdaGrad or AdaDelta.\nAlternative activation functions such as leaky ReLU, parametric ReLU, ELU, or MaxOut.\nModel ensembles\nData augmentation\nNew Architectures\nResNets where the input from the previous layer is added to the output.\nDenseNets where inputs into previous layers are concatenated together.\nThis blog has an in-depth overview\n\nIf you do decide to implement something extra, clearly describe it in the \"Extra Credit Description\" cell below.\nWhat we expect\nAt the very least, you should be able to train a ConvNet that gets at least 70% accuracy on the validation set. This is just a lower bound - if you are careful it should be possible to get accuracies much higher than that! Extra credit points will be awarded for particularly high-scoring models or unique approaches.\nYou should use the space below to experiment and train your network. \nHave fun and happy training!", "# Train your model here, and make sure the output of this cell is the accuracy of your best model on the \n# train, val, and test sets. Here's some code to get you started. 
The output of this cell should be the training\n# and validation accuracy on your best model (measured by validation accuracy).\n\nmodel = None\nloss_fn = None\noptimizer = None\n\ntrain(model, loss_fn, optimizer, num_epochs=1)\ncheck_accuracy(model, loader_val)", "Describe what you did\nIn the cell below you should write an explanation of what you did, any additional features that you implemented, and any visualizations or graphs that you make in the process of training and evaluating your network.\nTell us here!\nTest set -- run this only once\nNow that we've gotten a result we're happy with, we test our final model on the test set (which you should store in best_model). This would be the score we would achieve on a competition. Think about how this compares to your validation set accuracy.", "best_model = None\ncheck_accuracy(best_model, loader_test)", "Going further with PyTorch\nThe next assignment will make heavy use of PyTorch. You might also find it useful for your projects. \nHere's a nice tutorial by Justin Johnson that shows off some of PyTorch's features, like dynamic graphs and custom NN modules: http://pytorch.org/tutorials/beginner/pytorch_with_examples.html\nIf you're interested in reinforcement learning for your final project, this is a good (more advanced) DQN tutorial in PyTorch: http://pytorch.org/tutorials/intermediate/reinforcement_q_learning.html" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
bjedwards/NetworkXTutorial
I. The Graph Data Structures.ipynb
bsd-3-clause
[ "The NetworkX Module\nNetworkX is a Python module. To start exploring NetworkX we simply need to start a Python session (like the IPython session you are in now!), and type", "import networkx", "All of NetworkX's data structures and functions can then be accessed using the syntax networkx.[Object], where [Object] is the function or data structure you need. Of course you would replace [Object] with the function you wanted. For example, to make a graph, we'd write:", "G = networkx.Graph()", "Usually to save ourselves some keystrokes, we'll import NetworkX using a shorter variable name", "import networkx as nx", "Basic Graph Data Structures\nOne of the main strengths of NetworkX is its flexible graph data structures. There are four data structures:\n - Graph: Undirected Graphs\n - DiGraph: Directed Graphs\n - MultiGraph: Undirected multigraphs, i.e. graphs which allow for multiple edges between nodes\n - MultiDiGraph: Directed Multigraphs\nEach of these has the same basic structure, attributes and features, with a few minor differences.\nCreating Graphs\nCreating Graphs is as simple as calling the appropriate constructor.", "G = nx.Graph()\nD = nx.DiGraph()\nM = nx.MultiGraph()\nMD = nx.MultiDiGraph()", "You can also add attributes to a graph during creation, either by providing a dictionary, or simply using keyword arguments", "G = nx.Graph(DateCreated='2015-01-10',name=\"Terry\")\n\nG.graph", "The graph attribute is just a dictionary and can be treated as one, so you can add and delete more information from it.", "G.graph['Current']=False\ndel G.graph['name']\n\nG.graph", "Nodes\nNext we'll cover how to add and remove nodes, as well as check for their existence in a graph and add attributes to both!\nAdding Nodes\nThere are two main functions for adding nodes: add_node and add_nodes_from. The former takes single values, and the latter takes any iterable (list, set, iterator, generator). Nodes can be of any immutable type. 
This means numbers (ints, floats, and complex numbers), strings, bytes, tuples or frozen sets. They cannot be mutable, such as lists, dictionaries or sets. Nodes in the same graph do not have to be of the same type.", "# Adding single nodes of various types\nG.add_node(0)\nG.add_node('A')\nG.add_node(('x',1.2))\n# Adding collections of nodes\nG.add_nodes_from([2,4,6,8,10])\nG.add_nodes_from(set([10+(3*i)%5 for i in range(10,50)]))", "Listing Nodes\nAccessing nodes is done using the nodes function, which is a member of the Graph object.", "G.nodes()", "Sometimes to save memory we might only want to access nodes one at a time, so we can use an iterator. These are especially useful in long-running loops to save memory.", "for n in G.nodes_iter():\n if type(n)== str:\n print(n + ' is a string!')\n else:\n print(str(n) + \" is not a string!\")", "In the future more functions of NetworkX will exclusively use iterators to save memory and be more Python 3-like...\nChecking whether nodes are in a Graph\nWe can also check to see if a graph has a node in several different ways. The easiest is just using the in keyword in Python, but there is also the has_node function.", "13 in G\n\n9 in G\n\nG.has_node(13)\n\nG.has_node(9)", "Node attributes\nYou can also add attributes to nodes. This can be handy for storing information about nodes within the graph object. This can be done when you create new nodes using keyword arguments to the add_node and add_nodes_from functions", "G.add_node('Spam',company='Hormel',food='meat')", "When using add_nodes_from you provide a tuple with the first element being the node, and the second being a dictionary of attributes for that node. 
You can also add attributes which will be applied to all added nodes using keyword arguments", "G.add_nodes_from([('Bologna',{'company':'Oscar Meyer'}),\n ('Bacon',{'company':'Wright'}),\n ('Sausage',{'company':'Jimmy Dean'})],food='meat')", "To list node attributes you need to provide the data=True keyword to the nodes and nodes_iter functions", "G.nodes(data=True)", "Attributes are stored in a special dictionary within the graph called node; you can access, edit and remove attributes there", "G.node['Spam']\n\nG.node['Spam']['Delicious'] = True\nG.node[6]['integer'] = True\n\nG.nodes(data=True)\n\ndel G.node[6]['integer']\n\nG.nodes(data=True)", "Similarly, you can remove nodes with the remove_node and remove_nodes_from functions", "G.remove_node(14)\nG.remove_nodes_from([10,11,12,13])\n\nG.nodes()", "Exercises\nRepeated Nodes\n\nWhat happens when you add nodes to a graph that already exist?\nWhat happens when you add nodes to the graph that already exist but have new attributes?\nWhat happens when you add nodes to a graph with attributes different from existing nodes?\nTry removing a node that doesn't exist; what happens?\n\nThe FizzBuzz Graph\nUsing the spaces provided below make a new graph, FizzBuzz. Add nodes labeled 0 to 100 to the graph. Each node should have an attribute 'fizz' and 'buzz'. If the node's label is divisible by 3, fizz=True; if it is divisible by 5, buzz=True; otherwise both are False.\nEdges\nAdding edges is similar to adding nodes. They can be added using either add_edge or add_edges_from. They can also have attributes in the same way nodes can. If you add an edge that includes a node that doesn't exist, it will create it for you", "G.add_edge('Bacon','Sausage',breakfast=True)\nG.add_edge('Ham','Bacon',breakfast=True)\nG.add_edge('Spam','Eggs',breakfast=True)", "Here we are using a list comprehension. This is an easy way to construct lists using a single line. 
Learn more about list comprehensions here.", "G.add_edges_from([(i,i+2) for i in range(2,8,2)])\n\nG.edges()\n\nG.edges(data=True)", "Removing edges is accomplished by using the remove_edge or remove_edges_from function. Removing edge attributes can be done by indexing into the graph", "G['Spam']['Eggs']\n\ndel G['Spam']['Eggs']['breakfast']\n\nG.remove_edge(2,4)\n\nG.edges(data=True)", "You can check for the existence of edges with has_edge", "G.has_edge(2,4)\n\nG.has_edge('Ham','Bacon')", "For directed graphs, ordering matters. add_edge(u,v) will add an edge from u to v", "D.add_nodes_from(range(10))\n\nD.add_edges_from([(i,(i+1) % 10) for i in range(0,10)])\n\nD.edges()\n\nD.has_edge(0,1)\n\nD.has_edge(1,0)", "You can also access edges for only a subset of nodes by passing edges a collection of nodes", "D.edges([3,4,5])", "Exercises\nFor the FizzBuzz graph above, add edges between two nodes u and v if they are both divisible by 2 or by 7. Each edge should include attributes div2 and div7 which are true if u and v are divisible by 2 and 7 respectively. Exclude self loops.\nMultigraphs\nMultigraphs can have multiple edges between any two nodes. They are referenced by a key.", "M.add_edge(0,1)\nM.add_edge(0,1)\n\nM.edges()", "The keys of the edges can be accessed by using the keyword keys=True. This will give a tuple of (u,v,k), with the edge being u and v and the key being k.", "M.edges(keys=True)", "MultiGraphs and MultiDiGraphs are similar to Graphs and DiGraphs in most respects.\nAdding Graph Motifs\nIn addition to adding nodes and edges one at a time, networkx has some convenient functions for adding complete subgraphs. But beware, these may be removed, or the API changed in the future.", "G.add_cycle(range(100,110))\n\nG.edges()", "Basic Graph Properties\nBasic graph properties are functions which are members of the Graph class itself. 
We'll explore different metrics in part III.\nNode and Edge Counts\nThe order of a graph is the number of nodes; it can be accessed by calling G.order() or using the builtin length function: len(G).", "G.order()\n\nlen(G)", "The number of edges is usually referred to as the size of the graph, and can be accessed by G.size(). You could also find out by calling len(G.edges()), but this is much slower.", "G.size()", "For multigraphs it counts the number of edges including multiplicity", "M.size()", "Node Neighbors\nNode neighbors can be accessed via the neighbors function", "G.neighbors('Bacon')", "In the case of directed graphs, neighbors are only those originating at the node.", "D.add_edges_from([(0,i) for i in range(5,10)])\nD.neighbors(0)", "For multigraphs, neighbors are only reported once.", "M.neighbors(0)", "Degree\nThe degree of nodes can be found using the degree function for undirected graphs, and in_degree and out_degree for directed graphs. These return a dictionary with the nodes as the keys and the degrees as the values", "G.degree()\n\nD.in_degree()\n\nD.out_degree()", "Both of these can be called on a single node or a subset of nodes if not all degrees are needed", "D.in_degree(5)\n\nD.out_degree([0,1,2])", "You can also calculate weighted degree. To do this each edge has to have a specific attribute to be used as a weight.", "WG = nx.Graph()\nWG.add_star(range(5))\nWG.add_star(range(5,10))\nWG.add_edges_from([(i,2*i %10) for i in range(10)])\nfor (u,v) in WG.edges_iter():\n WG[u][v]['product'] = (u+1)*(v+1)\n\nWG.degree(weight='product')", "Exercises\nCreate A Classroom Graph\nLet's make a network of the people in this room. First, create a graph called C. Everyone state their name (one at a time) and where they are from. Add nodes to the graph representing each individual, with an attribute denoting where they are from. Add edges to the graph between an individual and their closest three classmates. 
Have each edge have an attribute that indicates whether there was a previous relationship between the two. If none existed have relationship=None, if it does exist have the relationship stated, e.g. relationship='Cousin-in-law'\nHow many nodes are in the Graph? How many Edges? What is the degree of the graph?\nQuickly Saving a Graph\nIn the next section we'll learn more about saving and loading graphs, as well as operations on graphs, but for now just run the code below.", "nx.write_gpickle(C,'./data/Classroom.pickle')" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
alorenzo175/pvlib-python
docs/tutorials/tmy.ipynb
bsd-3-clause
[ "TMY tutorial\nThis tutorial shows how to use the pvlib.tmy module to read data from TMY2 and TMY3 files.\nThis tutorial has been tested against the following package versions:\n* pvlib 0.3.0\n* Python 3.5.1\n* IPython 4.1\n* pandas 0.18.0\nAuthors:\n* Will Holmgren (@wholmgren), University of Arizona. July 2014, July 2015, March 2016.\nImport modules", "# built in python modules\nimport datetime\nimport os\nimport inspect\n\n# python add-ons\nimport numpy as np\nimport pandas as pd\n\n# plotting libraries\n%matplotlib inline\nimport matplotlib.pyplot as plt\ntry:\n import seaborn as sns\nexcept ImportError:\n pass\n\nimport pvlib", "pvlib comes packaged with a TMY2 and a TMY3 data file.", "# Find the absolute file path to your pvlib installation\npvlib_abspath = os.path.dirname(os.path.abspath(inspect.getfile(pvlib)))", "Import the TMY data using the functions in the pvlib.tmy module.", "tmy3_data, tmy3_metadata = pvlib.tmy.readtmy3(os.path.join(pvlib_abspath, 'data', '703165TY.csv'))\ntmy2_data, tmy2_metadata = pvlib.tmy.readtmy2(os.path.join(pvlib_abspath, 'data', '12839.tm2'))", "Print the TMY3 metadata and the first 5 lines of the data.", "print(tmy3_metadata)\ntmy3_data.head(5)\n\ntmy3_data['GHI'].plot()", "The TMY readers have an optional argument to coerce the year to a single value.", "tmy3_data, tmy3_metadata = pvlib.tmy.readtmy3(os.path.join(pvlib_abspath, 'data', '703165TY.csv'), coerce_year=1987)\n\ntmy3_data['GHI'].plot()", "Here's the TMY2 data.", "print(tmy2_metadata)\nprint(tmy2_data.head())", "Finally, the TMY readers can access TMY files directly from the NREL website.", "tmy3_data, tmy3_metadata = pvlib.tmy.readtmy3('http://rredc.nrel.gov/solar/old_data/nsrdb/1991-2005/data/tmy3/722740TYA.CSV', coerce_year=2015)\n\ntmy3_data['GHI'].plot(figsize=(12,6))\nplt.title('Tucson TMY GHI')" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
cgpotts/cs224u
sst_01_overview.ipynb
apache-2.0
[ "Supervised sentiment: overview of the Stanford Sentiment Treebank", "__author__ = \"Christopher Potts\"\n__version__ = \"CS224u, Stanford, Spring 2022\"", "Contents\n\nOverview of this unit\nSet-up\nData readers\nTrain split\nDev and test splits\n\n\nTokenization\n\nOverview of this unit\nWe have a few inter-related goals for this unit:\n\n\nProvide a basic introduction to supervised learning in the context of a problem that has long been central to academic research and industry applications: sentiment analysis.\n\n\nExplore and evaluate a diverse array of methods for modeling sentiment:\n\nHand-built feature functions with (mostly linear) classifiers\nDense feature representations derived from VSMs as we built them in the previous unit\n\nRecurrent neural networks (RNNs)\n\n\nBegin discussing and implementing responsible methods for hyperparameter optimization and classifier assessment and comparison.\n\n\nThe unit is built around the Stanford Sentiment Treebank (SST), a widely-used resource for evaluating supervised NLU models, and one that provides rich linguistic representations.\nSet-up\n\n\nMake sure your environment includes all the requirements for the cs224u repository.\n\n\nIf you haven't already, download the course data, unpack it, and place it in the directory containing the course repository – the same directory as this notebook. (If you want to put it somewhere else, change SST_HOME below.)", "from nltk.tokenize.treebank import TreebankWordDetokenizer\nfrom nltk.tokenize.treebank import TreebankWordTokenizer\nimport os\nimport pandas as pd\n\nimport sst\n\nSST_HOME = os.path.join('data', 'sentiment')", "Data readers\nOur SST distribution is the ternary version of the problem (SST-3). It consists of train/dev/test files with the following columns:\n\nexample_id: a string with the format 'N-S' where N is the example number and S is the index for the subtree in example N. 
Both N and S are five-digit numbers with 0-padding.\nsentence: a string giving the example sentence.\nlabel: a string giving the label: 'positive', 'negative', or 'neutral'. This value is derived from the original SST by mapping labels 0 and 1 to 'negative', label 2 to 'neutral', and labels 3 and 4 to 'positive'.\nis_subtree: the integer 1 if the example is a (proper) subtree, else 0. This affects only the train file. Our dev and test splits contain no subtrees – full examples only – and hence is_subtree is always 0 for them.\n\nTrain split\nWhen reading in the train split, you have a few options. \nRoot-only formulation\nThe default will include only full examples and retain duplicate examples:", "train_df = sst.train_reader(SST_HOME)\n\ntrain_df.sample(3, random_state=1).to_dict(orient=\"records\")\n\ntrain_df.shape[0]", "This yields the following label distribution:", "train_df.label.value_counts()", "You might want to remove the duplicate examples:", "dup_train_df = sst.train_reader(SST_HOME, dedup=True)\n\ndup_train_df.shape[0]", "This removes only ten examples for this setting so it is unlikely to be a significant choice.\nOur CSV-based distribution should make it easy to do basic analysis of the dataset to inform system development. \nHere's a look at the distribution of examples by length in characters:", "_ = train_df.sentence.str.len().hist().set_ylabel(\"Length in characters\")", "And by word count, assuming a very simple tokenization strategy:", "train_df['word_count'] = train_df.sentence.str.split().apply(len)\n\n_ = train_df['word_count'].hist().set_ylabel(\"Length in words\")\n\n_ = train_df.boxplot(\"word_count\", by=\"label\")", "Including subtrees\nMuch of the special interest of the SST is that it includes labels, not just for full examples, but also for all the constituent words and phrases in those examples. You might also want to try training on this expanded dataset. 
It's much larger and so experiments will be more costly in terms of time and compute resources, but it could be worth it.", "subtree_train_df = sst.train_reader(SST_HOME, include_subtrees=True)\n\nsubtree_train_df.shape[0]\n\nsubtree_train_df.head()\n\nsubtree_train_df['word_count'] = subtree_train_df.sentence.str.split().apply(len)\n\n_ = subtree_train_df['word_count'].hist().set_ylabel(\"Length in words\")", "In this setting, removing duplicates has a large effect, since many subtrees are repeated:", "subtree_dedup_train_df = sst.train_reader(SST_HOME, include_subtrees=True, dedup=True)\n\nsubtree_dedup_train_df.shape", "Label distribution:", "subtree_dedup_train_df.label.value_counts()", "Dev and test splits\nFor the dev and test splits, we include only the root-level examples, and we do not deduplicate to remain aligned with the original paper. (The dev set has one repeated example, and the test set has none.)", "dev_df = sst.dev_reader(SST_HOME)\n\ndev_df.shape", "Label distribution:", "dev_df.label.value_counts()", "There is an associated sst.test_reader(SST_HOME) with 2,210 (root-only) examples and no duplicates. As always in our field, you should use the test set only at the very end of your system development, and you should never, ever develop a system on the basis of test-set scores. \nIn a similar vein, you should use the dev set only very sparingly. This will give you a clearer picture of how you will ultimately do on test; over-use of a dev set can lead to over-fitting on that particular dataset with a resulting loss of performance at test time.\nIn the homework and associated bake-off for this course, we will introduce a second dev/test pair involving sentences about restaurants. 
The goal there is to have a fresh test set, and to push you to develop a system that works both for the SST movie domain and this new domain.", "_ = dev_df.sentence.str.len().hist().set_ylabel(\"Length in characters\")\n\ndev_df['word_count'] = dev_df.sentence.str.split().apply(len)\n\n_ = dev_df['word_count'].hist().set_ylabel(\"Length in words\")\n\n_ = dev_df.boxplot(\"word_count\", by=\"label\")", "Tokenization\nThe SST began as a collection of sentences from Rotten Tomatoes that were released as a corpus by Pang and Lee 2004. The data were parsed as part of the SST project, and we are now releasing them in a flat format similar to what one sees in benchmarks like GLUE. Along this journey, the sentences have acquired a tokenization scheme that is reminiscent of what one sees in standard Penn Treebank formats, with some additional quirks. This makes the tokens different in significant respects from what one sees in most standard English texts:", "ex = train_df.iloc[0].sentence\n\nex", "One can address some of this using the NLTK TreebankWordDetokenizer:", "detokenizer = TreebankWordDetokenizer()\n\ndef detokenize(s):\n return detokenizer.detokenize(s.split())\n\ndetokenize(ex)", "As you can see, there is additional clean-up one could do, but this is a start.\nAnother option would be to go in the reverse direction – for outside data, one could try to bring it into the SST format:", "tokenizer = TreebankWordTokenizer()\n\ndef treebank_tokenize(s):\n return tokenizer.tokenize(s)\n\ntreebank_tokenize(\"The Rock isn't the new ``Conan'' – he's this generation's Olivier!\")" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
arsenovic/clifford
docs/tutorials/PerformanceCliffordTutorial.ipynb
bsd-3-clause
[ "This notebook is part of the clifford documentation: https://clifford.readthedocs.io/.\nWriting high(ish) performance code with Clifford and Numba via Numpy\nThis document describes how to take algorithms developed in the clifford package with notation that is close to the maths and convert it into numerically efficient and fast running code. To do this we will expose the underlying representation of a multivector as a numpy array of canonical basis vector coefficients and operate directly on these arrays in a manner that is conducive to JIT compilation with numba.\nFirst import the Clifford library as well as numpy and numba", "import clifford as cf\nimport numpy as np\nimport numba", "Choose a specific space\nFor this document we will use 3D Euclidean space embedded in the conformal framework giving a Cl(4,1) algebra.\nWe will also rename some of the variables to match the notation used by Lasenby et al. in \"A Covariant Approach to Geometry using Geometric Algebra\"", "from clifford import g3c\n# Get the layout in our local namespace etc etc\nlayout = g3c.layout\nlocals().update(g3c.blades)\n\nep, en, up, down, homo, E0, ninf, no = (g3c.stuff[\"ep\"], g3c.stuff[\"en\"], \n g3c.stuff[\"up\"], g3c.stuff[\"down\"], g3c.stuff[\"homo\"], \n g3c.stuff[\"E0\"], g3c.stuff[\"einf\"], -g3c.stuff[\"eo\"])\n# Define a few useful terms\nE = ninf^(no)\nI5 = e12345\nI3 = e123", "Performance of mathematically idiomatic Clifford algorithms\nBy default the Clifford library sacrifices performance for syntactic convenience.\nConsider a function that applies a rotor to a multivector:", "def apply_rotor(R,mv):\n return R*mv*~R", "We will define a rotor that takes one line to another:", "line_one = (up(0)^up(e1)^ninf).normal()\nline_two = (up(0)^up(e2)^ninf).normal()\nR = 1 + line_two*line_one", "Check that this works", "print(line_two)\nprint(apply_rotor(R,line_one).normal())", "We would like to improve the speed of our algorithm; first we will profile it and see where it 
spends its time", "#%%prun -s cumtime\n#for i in range(1000000):\n# apply_rotor(R,line_one)", "An example profile output from running this notebook on the author's laptop is as follows:\nncalls tottime percall cumtime percall filename:lineno(function)\n 1 0.000 0.000 66.290 66.290 {built-in method builtins.exec}\n 1 0.757 0.757 66.290 66.290 &lt;string&gt;:2(&lt;module&gt;)\n 1000000 3.818 0.000 65.534 0.000 &lt;ipython-input-13-70a01003bf51&gt;:1(apply_rotor)\n 2000000 9.269 0.000 55.641 0.000 __init__.py:751(__mul__)\n 2000000 3.167 0.000 29.900 0.000 __init__.py:717(_checkOther)\n 2000000 1.371 0.000 19.906 0.000 __init__.py:420(__ne__)\n 2000000 6.000 0.000 18.535 0.000 numeric.py:2565(array_equal)\n 2000000 10.505 0.000 10.505 0.000 __init__.py:260(mv_mult)\nWe can see that the function spends almost all of its time in __mul__ and within __mul__ it spends most of its time in _checkOther. In fact it only spends a small fraction of its time in mv_mult which does the numerical multivector multiplication. To write more performant code we need to strip away the high level abstractions and deal with the underlying representations of the blade component data.\nCanonical blade coefficient representation in Clifford\nIn Clifford a multivector is internally represented as a numpy array of the coefficients of the canonical basis vectors, they are arranged in order of grade. 
So for our 4,1 algebra the first element is the scalar part, the next 5 are the vector coefficients, the next 10 are the bivectors, the next 10 the trivectors, the next 5 the quadvectors, and the final value is the pseudoscalar coefficient.", "(5.0*e1 - e2 + e12 + e135 + np.pi*e1234).value", "Exploiting blade representation to write a fast function\nWe can rewrite our rotor application function using the functions that the layout exposes for operations on the numpy arrays themselves.", "def apply_rotor_faster(R,mv):\n return layout.MultiVector(layout.gmt_func(R.value,layout.gmt_func(mv.value,layout.adjoint_func(R.value))) )\n\n#%%prun -s cumtime\n#for i in range(1000000):\n# apply_rotor_faster(R,line_one)", "This gives a much faster function:\nncalls tottime percall cumtime percall filename:lineno(function)\n 1 0.000 0.000 19.567 19.567 {built-in method builtins.exec}\n 1 0.631 0.631 19.567 19.567 &lt;string&gt;:2(&lt;module&gt;)\n 1000000 7.373 0.000 18.936 0.000 &lt;ipython-input-35-6a5344d83bdb&gt;:1(apply_rotor_faster)\n 2000000 9.125 0.000 9.125 0.000 __init__.py:260(mv_mult)\n 1000000 1.021 0.000 1.619 0.000 __init__.py:677(__init__)\n 1000000 0.819 0.000 0.819 0.000 __init__.py:244(adjoint_func)\nWe have successfully skipped past the higher level checks on the multivectors while maintaining exactly the same function signature.\nIt is important to check that we still have the correct answer:", "print(line_two)\nprint(apply_rotor_faster(R,line_one).normal())", "The performance improvements gained by rewriting our function are significant, but they come at the cost of readability.\nBy loading the layout's gmt_func and adjoint_func into the global namespace before the function is defined and \nseparating the value operations from the multivector wrapper we can make our code more concise.", "gmt_func = layout.gmt_func\nadjoint_func = layout.adjoint_func\n\ndef apply_rotor_val(R_val,mv_val):\n return gmt_func(R_val,gmt_func(mv_val,adjoint_func(R_val)))\n\ndef 
apply_rotor_wrapped(R,mv):\n return cf.MultiVector(layout,apply_rotor_val(R.value,mv.value))", "#%%prun -s cumtime\n#for i in range(1000000):\n# apply_rotor_wrapped(R,line_one)", "The time cost is essentially the same; there is probably some minor overhead from the function call itself:\nncalls tottime percall cumtime percall filename:lineno(function)\n 1 0.000 0.000 19.621 19.621 {built-in method builtins.exec}\n 1 0.557 0.557 19.621 19.621 &lt;string&gt;:2(&lt;module&gt;)\n 1000000 1.421 0.000 19.064 0.000 &lt;ipython-input-38-a1e0b5c53cdc&gt;:7(apply_rotor_wrapped)\n 1000000 6.079 0.000 16.033 0.000 &lt;ipython-input-38-a1e0b5c53cdc&gt;:4(apply_rotor_val)\n 2000000 9.154 0.000 9.154 0.000 __init__.py:260(mv_mult)\n 1000000 1.017 0.000 1.610 0.000 __init__.py:677(__init__)\n 1000000 0.800 0.000 0.800 0.000 __init__.py:244(adjoint_func)", "print(line_two)\nprint(apply_rotor_wrapped(R,line_one).normal())", "The additional advantage of splitting the function like this is that the numba JIT compiler can reason about the memory layout of numpy arrays in nopython mode as long as no pure Python objects are operated upon within the function. 
This means we can JIT our function that operates on the value directly.", "@numba.njit\ndef apply_rotor_val_numba(R_val,mv_val):\n return gmt_func(R_val,gmt_func(mv_val,adjoint_func(R_val)))\n\ndef apply_rotor_wrapped_numba(R,mv):\n return cf.MultiVector(layout,apply_rotor_val_numba(R.value,mv.value))", "#%%prun -s cumtime\n#for i in range(1000000):\n# apply_rotor_wrapped_numba(R,line_one)", "This gives a small improvement in performance, but more importantly it allows us to write larger functions that also use the jitted apply_rotor_val_numba and are themselves jitted.\nncalls tottime percall cumtime percall filename:lineno(function)\n 1 0.000 0.000 16.033 16.033 {built-in method builtins.exec}\n 1 0.605 0.605 16.033 16.033 &lt;string&gt;:2(&lt;module&gt;)\n 1000000 2.585 0.000 15.428 0.000 &lt;ipython-input-42-1142126d93ca&gt;:5(apply_rotor_wrapped_numba)\n 1000000 8.606 0.000 8.606 0.000 &lt;ipython-input-42-1142126d93ca&gt;:1(apply_rotor_val_numba)\n 1 0.000 0.000 2.716 2.716 dispatcher.py:294(_compile_for_args)\n 7/1 0.000 0.000 2.716 2.716 dispatcher.py:554(compile)\nComposing larger functions\nBy chaining together functions that operate on the value arrays of multivectors, it is easy to construct fast and readable code", "I5_val = I5.value\nomt_func = layout.omt_func\n\ndef dual_mv(mv):\n return -I5*mv\n\ndef meet_unwrapped(mv_a,mv_b):\n return -dual_mv(dual_mv(mv_a)^dual_mv(mv_b))\n\n@numba.njit\ndef dual_val(mv_val):\n return -gmt_func(I5_val,mv_val)\n\n@numba.njit\ndef meet_val(mv_a_val,mv_b_val):\n return -dual_val( omt_func( dual_val(mv_a_val) , dual_val(mv_b_val)) )\n\ndef meet_wrapped(mv_a,mv_b):\n return cf.MultiVector(layout, meet_val(mv_a.value, mv_b.value))\n\nsphere = (up(0)^up(e1)^up(e2)^up(e3)).normal()\nprint(sphere.meet(line_one).normal().normal())\nprint(meet_unwrapped(sphere,line_one).normal())\nprint(meet_wrapped(line_one,sphere).normal())\n\n#%%prun -s cumtime\n#for i in range(100000):\n# meet_unwrapped(sphere,line_one)", "ncalls tottime 
percall cumtime percall filename:lineno(function)\n 1 0.000 0.000 13.216 13.216 {built-in method builtins.exec}\n 1 0.085 0.085 13.216 13.216 &lt;string&gt;:2(&lt;module&gt;)\n 100000 0.418 0.000 13.131 0.000 &lt;ipython-input-98-f91457c8741a&gt;:7(meet_unwrapped)\n 300000 0.681 0.000 9.893 0.000 &lt;ipython-input-98-f91457c8741a&gt;:4(dual_mv)\n 300000 1.383 0.000 8.127 0.000 __init__.py:751(__mul__)\n 400000 0.626 0.000 5.762 0.000 __init__.py:717(_checkOther)\n 400000 0.270 0.000 3.815 0.000 __init__.py:420(__ne__)\n 400000 1.106 0.000 3.544 0.000 numeric.py:2565(array_equal)\n 100000 0.460 0.000 2.439 0.000 __init__.py:783(__xor__)\n 800000 0.662 0.000 2.053 0.000 __init__.py:740(_newMV)\n 400000 1.815 0.000 1.815 0.000 __init__.py:260(mv_mult)", "#%%prun -s cumtime\n#for i in range(100000):\n# meet_wrapped(sphere,line_one)", "ncalls tottime percall cumtime percall filename:lineno(function)\n 1 0.000 0.000 1.951 1.951 {built-in method builtins.exec}\n 1 0.063 0.063 1.951 1.951 &lt;string&gt;:2(&lt;module&gt;)\n 100000 0.274 0.000 1.888 0.000 &lt;ipython-input-98-f91457c8741a&gt;:18(meet_wrapped)\n 100000 1.448 0.000 1.448 0.000 &lt;ipython-input-98-f91457c8741a&gt;:14(meet_val)\n 100000 0.096 0.000 0.166 0.000 __init__.py:677(__init__)\nAlgorithms exploiting known sparseness of MultiVector value array\nThe standard multiplication generator function for two general multivectors is as follows:", "def get_mult_function(mult_table,n_dims):\n ''' \n Returns a function that implements the mult_table on two input multivectors\n '''\n non_zero_indices = mult_table.nonzero()\n k_list = non_zero_indices[0]\n l_list = non_zero_indices[1]\n m_list = non_zero_indices[2]\n mult_table_vals = np.array([mult_table[k,l,m] for k,l,m in np.transpose(non_zero_indices)],dtype=int)\n\n @numba.njit\n def mv_mult(value,other_value):\n output = np.zeros(n_dims)\n for ind,k in enumerate(k_list):\n l = l_list[ind]\n m = m_list[ind]\n output[l] += 
value[k]*mult_table_vals[ind]*other_value[m]\n return output\n return mv_mult", "There are however instances in which we might be able to use the known sparseness of the input data value representation to speed up the operations. For example, in Cl(4,1) rotors only contain even grade blades and we can therefore remove all the operations accessing odd grade objects.", "def get_grade_from_index(index_in):\n if index_in == 0:\n return 0\n elif index_in < 6:\n return 1\n elif index_in < 16:\n return 2\n elif index_in < 26:\n return 3\n elif index_in < 31:\n return 4\n elif index_in == 31:\n return 5\n else:\n raise ValueError('Index is out of multivector bounds')\n\ndef get_sparse_mult_function(mult_table,n_dims,grades_a,grades_b):\n ''' \n Returns a function that implements the mult_table on two input multivectors\n '''\n non_zero_indices = mult_table.nonzero()\n k_list = non_zero_indices[0]\n l_list = non_zero_indices[1]\n m_list = non_zero_indices[2]\n mult_table_vals = np.array([mult_table[k,l,m] for k,l,m in np.transpose(non_zero_indices)],dtype=int)\n \n # Now filter out the sparseness\n filter_mask = np.zeros(len(k_list), dtype=bool)\n for i in range(len(filter_mask)):\n if get_grade_from_index(k_list[i]) in grades_a:\n if get_grade_from_index(m_list[i]) in grades_b:\n filter_mask[i] = 1\n \n k_list = k_list[filter_mask]\n l_list = l_list[filter_mask]\n m_list = m_list[filter_mask]\n mult_table_vals = mult_table_vals[filter_mask]\n\n @numba.njit\n def mv_mult(value,other_value):\n output = np.zeros(n_dims)\n for ind,k in enumerate(k_list):\n l = l_list[ind]\n m = m_list[ind]\n output[l] += value[k]*mult_table_vals[ind]*other_value[m]\n return output\n return mv_mult\n\nleft_rotor_mult = get_sparse_mult_function(layout.gmt,layout.gaDims,[0,2,4],[0,1,2,3,4,5])\nright_rotor_mult = get_sparse_mult_function(layout.gmt,layout.gaDims,[0,1,2,3,4,5],[0,2,4])\n\n@numba.njit\ndef sparse_apply_rotor_val(R_val,mv_val):\n return 
left_rotor_mult(R_val,right_rotor_mult(mv_val,adjoint_func(R_val)))\n\ndef sparse_apply_rotor(R,mv):\n return cf.MultiVector(layout,sparse_apply_rotor_val(R.value,mv.value))\n\n#%%prun -s cumtime\n#for i in range(1000000):\n# sparse_apply_rotor(R,line_one)", "ncalls tottime percall cumtime percall filename:lineno(function)\n 1 0.000 0.000 9.490 9.490 {built-in method builtins.exec}\n 1 0.624 0.624 9.489 9.489 &lt;string&gt;:2(&lt;module&gt;)\n 1000000 2.684 0.000 8.865 0.000 &lt;ipython-input-146-f75aae3ce595&gt;:8(sparse_apply_rotor)\n 1000000 4.651 0.000 4.651 0.000 &lt;ipython-input-146-f75aae3ce595&gt;:4(sparse_apply_rotor_val)\n 1000000 0.934 0.000 1.530 0.000 __init__.py:677(__init__)\n 1000000 0.596 0.000 0.596 0.000 {built-in method numpy.core.multiarray.array}", "print(line_two)\nprint(sparse_apply_rotor(R,line_one).normal())", "We can do the same with the meet operation that we defined earlier if we know what grade objects we are meeting", "left_pseudo_mult = get_sparse_mult_function(layout.gmt,layout.gaDims,[5],[0,1,2,3,4,5])\nsparse_omt_2_1 = get_sparse_mult_function(layout.omt,layout.gaDims,[2],[1])\n\n@numba.njit\ndef dual_sparse_val(mv_val):\n return -left_pseudo_mult(I5_val,mv_val)\n\n@numba.njit\ndef meet_sparse_3_4_val(mv_a_val,mv_b_val):\n return -dual_sparse_val( sparse_omt_2_1( dual_sparse_val(mv_a_val) , dual_sparse_val(mv_b_val)) )\n\ndef meet_sparse_3_4(mv_a,mv_b):\n return cf.layout.MultiVector(meet_sparse_3_4_val(mv_a.value, mv_b.value))\n\nprint(sphere.meet(line_one).normal().normal())\nprint(meet_sparse_3_4(line_one,sphere).normal())\n\n#%%prun -s cumtime\n#for i in range(100000):\n# meet_sparse_3_4(line_one,sphere)", "ncalls tottime percall cumtime percall filename:lineno(function)\n 1 0.000 0.000 0.725 0.725 {built-in method builtins.exec}\n 1 0.058 0.058 0.725 0.725 &lt;string&gt;:2(&lt;module&gt;)\n 100000 0.252 0.000 0.667 0.000 &lt;ipython-input-156-f346d0563682&gt;:12(meet_sparse_3_4)\n 100000 0.267 0.000 0.267 0.000 
&lt;ipython-input-156-f346d0563682&gt;:8(meet_sparse_3_4_val)\n 100000 0.088 0.000 0.148 0.000 __init__.py:677(__init__)\nFuture work on performance\nInvestigate efficient operations on containers of large numbers of multivectors.\nPossibly investigate http://numba.pydata.org/numba-doc/0.13/CUDAJit.html for larger algebras/other areas in which GPU memory latency will not be such a large factor, ie, lots of bulk parallel numerical operations." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
JohnGriffiths/ConWhAt
docs/examples/downloading_conwhat_atlases.ipynb
bsd-3-clause
[ "Downloading ConWhAt Atlases\nImport the fetcher function", "from conwhat.utils.fetchers import fetch_conwhat_atlas\nimport glob", "Define the output directory", "atlas_dir = '/scratch/hpc3230/Data/conwhat_atlases'", "Define which atlas to grab", "atlas_name = 'CWL2k8Sc33Vol3d100s_v01'", "Break a leg", "fetch_conwhat_atlas(atlas_name,atlas_dir,remove_existing=True);", "The zipped and unzipped atlas folders are now present in the top-level atlas directory:", "glob.glob(atlas_dir + '/*')", "The .zip file can be optionally removed automatically if desired. \nVolumetric atlas folders contain a small number of fairly small .txt files", "glob.glob('%s/%s/*.txt' %(atlas_dir,atlas_name))", "...and a larger number of NIfTI images, one for each atlas structure", "glob.glob('%s/%s/*.nii.gz' %(atlas_dir,atlas_name))[:5]\n\nlen(glob.glob('%s/%s/*.nii.gz' %(atlas_dir,atlas_name)))" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
seg/2016-ml-contest
StoDIG/Facies_classification_StoDIG.ipynb
apache-2.0
[ "Facies classification using Convolutional Neural Networks\nTeam StoDIG - Statoil Deep-learning Interest Group\nDavid Wade, John Thurmond & Eskil Kulseth Dahl\nIn this Python notebook we propose a facies classification model, building on the simple Neural Network solution proposed by LA_Team in order to outperform the prediction model in the predicting facies from well logs challenge. \nGiven the limited size of the training data set, Deep Learning is not likely to exceed the accuracy of results from refined Machine Learning techniques (such as Gradient Boosted Trees). However, we chose to use the opportunity to advance our understanding of Deep Learning network design, and have enjoyed participating in the contest. With a substantially larger training set and perhaps more facies ambiguity, Deep Learning could be a preferred approach to this sort of problem.\nWe use three key innovations:\n - Augmenting the input data with 1st and 2nd order derivatives\n - Inserting a convolutional layer as the first layer in the Neural Network\n - Adding Dropout regularization to prevent overfitting\nProblem Modeling\n\nThe dataset we will use comes from a class exercise from The University of Kansas on Neural Networks and Fuzzy Systems. This exercise is based on a consortium project to use machine learning techniques to create a reservoir model of the largest gas fields in North America, the Hugoton and Panoma Fields. For more info on the origin of the data, see Bohling and Dubois (2003) and Dubois et al. (2007). \nThe dataset we will use is log data from nine wells that have been labeled with a facies type based on observation of core. We will use this log data to train a classifier to predict facies types. \nThis data is from the Council Grove gas reservoir in Southwest Kansas. The Panoma Council Grove Field is predominantly a carbonate gas reservoir encompassing 2700 square miles in Southwestern Kansas. 
This dataset is from nine wells (with 4149 examples), consisting of a set of seven predictor variables and a rock facies (class) for each example vector and validation (test) data (830 examples from two wells) having the same seven predictor variables in the feature vector. Facies are based on examination of cores from nine wells taken vertically at half-foot intervals. Predictor variables include five from wireline log measurements and two geologic constraining variables that are derived from geologic knowledge. These are essentially continuous variables sampled at a half-foot sample rate. \nThe seven predictor variables are:\n* Five wire line log curves include gamma ray (GR), resistivity logging (ILD_log10),\nphotoelectric effect (PE), neutron-density porosity difference and average neutron-density porosity (DeltaPHI and PHIND). Note, some wells do not have PE.\n* Two geologic constraining variables: nonmarine-marine indicator (NM_M) and relative position (RELPOS)\nThe nine discrete facies (classes of rocks) are: \n1. Nonmarine sandstone\n2. Nonmarine coarse siltstone \n3. Nonmarine fine siltstone \n4. Marine siltstone and shale \n5. Mudstone (limestone)\n6. Wackestone (limestone)\n7. Dolomite\n8. Packstone-grainstone (limestone)\n9. Phylloid-algal bafflestone (limestone)\nThese facies aren't discrete, and gradually blend into one another. Some have neighboring facies that are rather close. Mislabeling within these neighboring facies can be expected to occur. The following table lists the facies, their abbreviated labels and their approximate neighbors.\nFacies |Label| Adjacent Facies\n:---: | :---: |:--:\n1 |SS| 2\n2 |CSiS| 1,3\n3 |FSiS| 2\n4 |SiSh| 5\n5 |MS| 4,6\n6 |WS| 5,7\n7 |D| 6,8\n8 |PS| 6,7,9\n9 |BS| 7,8\nSetup\n\nCheck we have all the libraries we need, and import the modules we require. 
Note that we have used the Theano backend for Keras, and to achieve a reasonable training time we have used an NVidia K20 GPU.", "%%sh\npip install pandas\npip install scikit-learn\npip install keras\npip install sklearn\n\nfrom __future__ import print_function\nimport time\nimport numpy as np\n%matplotlib inline\nimport pandas as pd\nimport matplotlib.pyplot as plt\nimport matplotlib.colors as colors\nfrom mpl_toolkits.axes_grid1 import make_axes_locatable\nfrom keras.preprocessing import sequence\nfrom keras.models import Sequential\nfrom keras.constraints import maxnorm\nfrom keras.optimizers import SGD\nfrom keras.optimizers import Adam\nfrom keras.optimizers import Adamax\nfrom keras.optimizers import Nadam\nfrom keras.layers import Dense, Dropout, Activation, Convolution1D, Flatten, Reshape, MaxPooling1D, GaussianNoise\nfrom keras.wrappers.scikit_learn import KerasClassifier\nfrom keras.utils import np_utils\nfrom sklearn.model_selection import cross_val_score\nfrom sklearn.model_selection import KFold , StratifiedKFold\nfrom classification_utilities import display_cm, display_adj_cm\nfrom sklearn.metrics import confusion_matrix, f1_score\nfrom sklearn import preprocessing\nfrom sklearn.model_selection import GridSearchCV", "Data ingest\n\nWe load the training and testing data to preprocess it for further analysis, filling the missing data values in the PE field with zero and proceeding to normalize the data that will be fed into our model.", "filename = 'train_test_data.csv'\ndata = pd.read_csv(filename)\ndata.head(12)\n\n# Set 'Well Name' and 'Formation' fields as categories\ndata['Well Name'] = data['Well Name'].astype('category')\ndata['Formation'] = data['Formation'].astype('category')\n\n# Fill missing values and normalize for 'PE' field\ndata['PE'] = data['PE'].fillna(value=0)\nmean_pe = data['PE'].mean()\nstd_pe = data['PE'].std()\ndata['PE'] = (data['PE']-mean_pe)/std_pe\n\n# Normalize the rest of fields (GR, ILD_log10, DelthaPHI, 
PHIND,NM_M,RELPOS)\ncorrect_facies_labels = data['Facies'].values\nfeature_vectors = data.drop(['Formation'], axis=1)\nwell_labels = data[['Well Name', 'Facies']].values\ndata_vectors = feature_vectors.drop(['Well Name', 'Facies'], axis=1).values\nscaler = preprocessing.StandardScaler().fit(data_vectors)\nscaled_features = scaler.transform(data_vectors)\ndata_out = np.hstack([well_labels, scaled_features])", "Split data into training data and blind data, and output as Numpy arrays", "def preprocess(data_out):\n \n data = data_out\n well_data = {}\n well_names = ['SHRIMPLIN', 'ALEXANDER D', 'SHANKLE', 'LUKE G U', 'KIMZEY A', 'CROSS H CATTLE', \n 'NOLAN', 'Recruit F9', 'NEWBY', 'CHURCHMAN BIBLE', 'STUART', 'CRAWFORD']\n for name in well_names:\n well_data[name] = [[], []]\n\n for row in data:\n well_data[row[0]][1].append(row[1])\n well_data[row[0]][0].append(list(row[2::]))\n\n chunks = []\n chunks_test = []\n chunk_length = 1 \n chunks_facies = []\n wellID=0.0\n for name in well_names:\n \n if name not in ['STUART', 'CRAWFORD']:\n test_well_data = well_data[name]\n log_values = np.array(test_well_data[0])\n facies_values = np.array(test_well_data[1])\n for i in range(log_values.shape[0]):\n toAppend = np.concatenate((log_values[i:i+1, :], np.asarray(wellID).reshape(1,1)), axis=1)\n chunks.append(toAppend)\n chunks_facies.append(facies_values[i])\n else:\n test_well_data = well_data[name]\n log_values = np.array(test_well_data[0])\n for i in range(log_values.shape[0]):\n toAppend = np.concatenate((log_values[i:i+1, :], np.asarray(wellID).reshape(1,1)), axis=1)\n chunks_test.append(toAppend)\n \n wellID = wellID + 1.0\n \n chunks_facies = np.array(chunks_facies, dtype=np.int32)-1\n X_ = np.array(chunks)\n X = np.zeros((len(X_),len(X_[0][0]) * len(X_[0])))\n for i in range(len(X_)):\n X[i,:] = X_[i].flatten()\n \n X_test = np.array(chunks_test)\n X_test_out = np.zeros((len(X_test),len(X_test[0][0]) * len(X_test[0])))\n for i in range(len(X_test)):\n X_test_out[i,:] = 
X_test[i].flatten()\n y = np_utils.to_categorical(chunks_facies)\n return X, y, X_test_out\n\nX_train_in, y_train, X_test_in = preprocess(data_out)", "Data Augmentation\n\nIt is physically reasonable to expect 1st and 2nd order derivatives of logs to play an important role in determining facies. To save the CNN the effort of learning convolution kernels to represent these features to the rest of the Neural Network we compute them here (for training and validation data). Further, we expand the input data to be acted on by the convolutional layer.", "conv_length = 7\n\n# Reproducibility\nnp.random.seed(7) \n# Load data\n\ndef addGradients(input):\n output = input\n for i in range(8):\n grad = np.gradient(output[:,i])\n gradT = np.reshape(grad,(grad.size,1))\n output = np.concatenate((output, gradT), axis=1)\n\n grad2 = np.gradient(grad)\n grad2T = np.reshape(grad2,(grad2.size,1))\n output = np.concatenate((output, grad2T), axis=1)\n\n return output\n\n\ndef expand_dims(input):\n r = int((conv_length-1)/2)\n l = input.shape[0]\n n_input_vars = input.shape[1]\n output = np.zeros((l, conv_length, n_input_vars))\n for i in range(l):\n for j in range(conv_length):\n for k in range(n_input_vars):\n output[i,j,k] = input[min(i+j-r,l-1),k]\n return output\n\nX_train = np.empty((0,conv_length,24), dtype=float)\nX_test = np.empty((0,conv_length,24), dtype=float)\n\nwellId = 0.0\nfor i in range(10):\n X_train_subset = X_train_in[X_train_in[:, 8] == wellId][:,0:8]\n X_train_subset = addGradients(X_train_subset)\n X_train_subset = expand_dims(X_train_subset)\n X_train = np.concatenate((X_train,X_train_subset),axis=0)\n wellId = wellId + 1.0\n \nfor i in range(2):\n X_test_subset = X_test_in[X_test_in[:, 8] == wellId][:,0:8]\n X_test_subset = addGradients(X_test_subset)\n X_test_subset = expand_dims(X_test_subset)\n X_test = np.concatenate((X_test,X_test_subset),axis=0)\n wellId = wellId + 1.0\n \nprint(X_train.shape)\nprint(X_test.shape)\n\n# Obtain labels\ny_labels = 
np.zeros((len(y_train),1))\nfor i in range(len(y_train)):\n y_labels[i] = np.argmax(y_train[i])\ny_labels = y_labels.astype(int)", "Convolutional Neural Network\nWe build a CNN with the following layers:\n\nDropout layer on input\nOne 1D convolutional layer, with MaxPooling\nSeries of Dropout & Fully-Connected layers, of parameterizable length", "# Set parameters\ninput_dim = 24\noutput_dim = 9\nn_per_batch = 128\nepochs = 200\n\ndef dnn_model(init_dropout_rate=0.5, main_dropout_rate=0.45, hidden_dim_1=192, hidden_dim_2=96, max_norm=10, n_dense=3, sigma=0.0, nb_conv=32):\n # Define the model\n model = Sequential()\n model.add(Dropout(init_dropout_rate, input_shape=(conv_length,input_dim,)))\n model.add(Convolution1D(nb_conv, conv_length, border_mode='same', activation='relu', input_shape=(conv_length,input_dim), input_length=conv_length))\n model.add(MaxPooling1D(pool_length=2, stride=None, border_mode='same'))\n model.add(Flatten())\n model.add(Dropout(main_dropout_rate, input_shape=(nb_conv*conv_length,)))\n model.add(Dense(hidden_dim_1, input_dim=nb_conv*conv_length, init='uniform', activation='relu', W_constraint=maxnorm(max_norm)))\n for i in range(n_dense):\n if (i==1): \n model.add(Dropout(main_dropout_rate, input_shape=(hidden_dim_1,)))\n model.add(Dense(hidden_dim_2, input_dim=hidden_dim_1, init='uniform', activation='relu', W_constraint=maxnorm(max_norm)))\n else:\n model.add(Dropout(main_dropout_rate, input_shape=(hidden_dim_2,)))\n model.add(Dense(hidden_dim_2, input_dim=hidden_dim_2, init='uniform', activation='relu', W_constraint=maxnorm(max_norm)))\n model.add(Dense(output_dim, init='normal', activation='softmax'))\n \n optimizerNadam = Nadam(lr=0.002, beta_1=0.9, beta_2=0.999, epsilon=1e-08, schedule_decay=0.004)\n model.compile(loss='categorical_crossentropy', optimizer=optimizerNadam, metrics=['accuracy'])\n return model", "We train the CNN and evaluate it on precision/recall.", "# Load the model\nt0 = time.time()\nmodel_dnn = 
dnn_model()\nmodel_dnn.summary()\nt1 = time.time()\nprint(\"Load time = %d\" % (t1-t0) )\n\n#Train model\nt0 = time.time()\nmodel_dnn.fit(X_train, y_train, batch_size=n_per_batch, nb_epoch=epochs, verbose=0)\nt1 = time.time()\nprint(\"Train time = %d seconds\" % (t1-t0) )\n\n\n# Predict Values on Training set\nt0 = time.time()\ny_predicted = model_dnn.predict( X_train , batch_size=n_per_batch, verbose=2)\nt1 = time.time()\nprint(\"Test time = %d seconds\" % (t1-t0) )\n\n# Print Report\n\n# Format output [0 - 8 ]\ny_ = np.zeros((len(y_train),1))\nfor i in range(len(y_train)):\n y_[i] = np.argmax(y_train[i])\n\ny_predicted_ = np.zeros((len(y_predicted), 1))\nfor i in range(len(y_predicted)):\n y_predicted_[i] = np.argmax( y_predicted[i] )\n \n# Confusion Matrix\nconf = confusion_matrix(y_, y_predicted_)\n\ndef accuracy(conf):\n total_correct = 0.\n nb_classes = conf.shape[0]\n for i in np.arange(0,nb_classes):\n total_correct += conf[i][i]\n acc = total_correct/sum(sum(conf))\n return acc\n\nadjacent_facies = np.array([[1], [0,2], [1], [4], [3,5], [4,6,7], [5,7], [5,6,8], [6,7]])\nfacies_labels = ['SS', 'CSiS', 'FSiS', 'SiSh', 'MS','WS', 'D','PS', 'BS']\n\ndef accuracy_adjacent(conf, adjacent_facies):\n nb_classes = conf.shape[0]\n total_correct = 0.\n for i in np.arange(0,nb_classes):\n total_correct += conf[i][i]\n for j in adjacent_facies[i]:\n total_correct += conf[i][j]\n return total_correct / sum(sum(conf))\n\n# Print Results\nprint (\"\\nModel Report\")\nprint (\"-Accuracy: %.6f\" % ( accuracy(conf) ))\nprint (\"-Adjacent Accuracy: %.6f\" % ( accuracy_adjacent(conf, adjacent_facies) ))\nprint (\"\\nConfusion Matrix\")\ndisplay_cm(conf, facies_labels, display_metrics=True, hide_zeros=True)", "We display the learned 1D convolution kernels", "print(model_dnn.layers[1].get_weights()[0].shape)\n\nfig, ax = plt.subplots(figsize=(12,6))\n\nplt.subplot(421)\nplt.imshow(model_dnn.layers[1].get_weights()[0][:,0,0,:], 
interpolation='none')\nplt.subplot(422)\nplt.imshow(model_dnn.layers[1].get_weights()[0][:,0,1,:], interpolation='none')\nplt.subplot(423)\nplt.imshow(model_dnn.layers[1].get_weights()[0][:,0,2,:], interpolation='none')\nplt.subplot(424)\nplt.imshow(model_dnn.layers[1].get_weights()[0][:,0,3,:], interpolation='none')\nplt.subplot(425)\nplt.imshow(model_dnn.layers[1].get_weights()[0][:,0,4,:], interpolation='none')\nplt.subplot(426)\nplt.imshow(model_dnn.layers[1].get_weights()[0][:,0,5,:], interpolation='none')\nplt.subplot(427)\nplt.imshow(model_dnn.layers[1].get_weights()[0][:,0,6,:], interpolation='none')\nplt.subplot(428)\nplt.imshow(model_dnn.layers[1].get_weights()[0][:,0,7,:], interpolation='none')\n\nplt.show()", "In order to avoid overfitting, we evaluate our model by running a 5-fold stratified cross-validation routine.", "# Cross Validation\ndef cross_validate():\n t0 = time.time()\n estimator = KerasClassifier(build_fn=dnn_model, nb_epoch=epochs, batch_size=n_per_batch, verbose=0)\n skf = StratifiedKFold(n_splits=5, shuffle=True)\n results_dnn = cross_val_score(estimator, X_train, y_train, cv= skf.get_n_splits(X_train, y_train))\n t1 = time.time()\n print(\"Cross Validation time = %d\" % (t1-t0) )\n print(' Cross Validation Results')\n print( results_dnn )\n print(np.mean(results_dnn))\n\ncross_validate()", "Prediction\n\nTo predict the STUART and CRAWFORD blind wells we do the following:\nSet up a plotting function to display the logs & facies.", "# 1=sandstone 2=c_siltstone 3=f_siltstone \n# 4=marine_silt_shale 5=mudstone 6=wackestone 7=dolomite\n# 8=packstone 9=bafflestone\nfacies_colors = ['#F4D03F', '#F5B041','#DC7633','#6E2C00', '#1B4F72','#2E86C1', '#AED6F1', '#A569BD', '#196F3D']\n\n#facies_color_map is a dictionary that maps facies labels\n#to their respective colors\nfacies_color_map = {}\nfor ind, label in enumerate(facies_labels):\n facies_color_map[label] = facies_colors[ind]\n\ndef label_facies(row, labels):\n return labels[ row['Facies'] 
-1]\n\ndef make_facies_log_plot(logs, facies_colors):\n #make sure logs are sorted by depth\n logs = logs.sort_values(by='Depth')\n cmap_facies = colors.ListedColormap(\n facies_colors[0:len(facies_colors)], 'indexed')\n \n ztop=logs.Depth.min(); zbot=logs.Depth.max()\n \n cluster=np.repeat(np.expand_dims(logs['Facies'].values,1), 100, 1)\n \n f, ax = plt.subplots(nrows=1, ncols=6, figsize=(8, 12))\n ax[0].plot(logs.GR, logs.Depth, '-g')\n ax[1].plot(logs.ILD_log10, logs.Depth, '-')\n ax[2].plot(logs.DeltaPHI, logs.Depth, '-', color='0.5')\n ax[3].plot(logs.PHIND, logs.Depth, '-', color='r')\n ax[4].plot(logs.PE, logs.Depth, '-', color='black')\n im=ax[5].imshow(cluster, interpolation='none', aspect='auto',\n cmap=cmap_facies,vmin=1,vmax=9)\n \n divider = make_axes_locatable(ax[5])\n cax = divider.append_axes(\"right\", size=\"20%\", pad=0.05)\n cbar=plt.colorbar(im, cax=cax)\n cbar.set_label((17*' ').join([' SS ', 'CSiS', 'FSiS', \n 'SiSh', ' MS ', ' WS ', ' D ', \n ' PS ', ' BS ']))\n cbar.set_ticks(range(0,1)); cbar.set_ticklabels('')\n \n for i in range(len(ax)-1):\n ax[i].set_ylim(ztop,zbot)\n ax[i].invert_yaxis()\n ax[i].grid()\n ax[i].locator_params(axis='x', nbins=3)\n \n ax[0].set_xlabel(\"GR\")\n ax[0].set_xlim(logs.GR.min(),logs.GR.max())\n ax[1].set_xlabel(\"ILD_log10\")\n ax[1].set_xlim(logs.ILD_log10.min(),logs.ILD_log10.max())\n ax[2].set_xlabel(\"DeltaPHI\")\n ax[2].set_xlim(logs.DeltaPHI.min(),logs.DeltaPHI.max())\n ax[3].set_xlabel(\"PHIND\")\n ax[3].set_xlim(logs.PHIND.min(),logs.PHIND.max())\n ax[4].set_xlabel(\"PE\")\n ax[4].set_xlim(logs.PE.min(),logs.PE.max())\n ax[5].set_xlabel('Facies')\n \n ax[1].set_yticklabels([]); ax[2].set_yticklabels([]); ax[3].set_yticklabels([])\n ax[4].set_yticklabels([]); ax[5].set_yticklabels([])\n ax[5].set_xticklabels([])\n f.suptitle('Well: %s'%logs.iloc[0]['Well Name'], fontsize=14,y=0.94)", "Run the model on the blind data\n\nOutput a CSV\nPlot the wells in the notebook", "# DNN model Prediction\ny_test = 
model_dnn.predict( X_test , batch_size=n_per_batch, verbose=0)\npredictions_dnn = np.zeros((len(y_test),1))\nfor i in range(len(y_test)):\n predictions_dnn[i] = np.argmax(y_test[i]) + 1 \npredictions_dnn = predictions_dnn.astype(int)\n# Store results\ntest_data = pd.read_csv('../validation_data_nofacies.csv')\ntest_data['Facies'] = predictions_dnn\ntest_data.to_csv('Prediction_StoDIG.csv')\n\nmake_facies_log_plot(\n test_data[test_data['Well Name'] == 'STUART'],\n facies_colors=facies_colors)\n\nmake_facies_log_plot(\n test_data[test_data['Well Name'] == 'CRAWFORD'],\n facies_colors=facies_colors)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
michael-isaev/cse6040_qna
PythonQnA_10_more_comprehensions.ipynb
apache-2.0
[ "For Loops vs. List Comprehension Examples\nIn this notebook, we will look at many of the exercises that you saw in Notebook 1. The exercises could be answered with list comprehensions or for loops.", "grades = [\n # First line is descriptive header. Subsequent lines hold data\n ['Student', 'Exam 1', 'Exam 2', 'Exam 3'],\n ['Thorny', '100', '90', '80'],\n ['Mac', '88', '99', '111'],\n ['Farva', '45', '56', '67'],\n ['Rabbit', '59', '61', '67'],\n ['Ursula', '73', '79', '83'],\n ['Foster', '89', '97', '101']\n]", "Exercise 0 (students_test: 1 point). Write some code that computes a new list named students[:], which holds the names of the students as they from \"top to bottom\" in the table.", "### BEGIN SOLUTION (LIST COMPREHENSION)\nstudents = [L[0] for L in grades[1:]]\n### END SOLUTION\n\n### BEGIN SOLUTION (FOR LOOP)\nstudents = []\nfor x in range (1, len(grades)):\n students.append(grades[x][0])\n### END SOLUTION\n\n# `students_test`: Test cell\nprint(students)\nassert type(students) is list\nassert students == ['Thorny', 'Mac', 'Farva', 'Rabbit', 'Ursula', 'Foster']\nprint(\"\\n(Passed!)\")", "Exercise 1 (assignments_test: 1 point). Write some code to compute a new list named assignments[:], to hold the names of the class assignments. (These appear in the descriptive header element of grades.)", "### BEGIN SOLUTION\nassignments = grades[0][1:]\n### END SOLUTION", "Exercise 2 (grade_lists_test: 1 point). Write some code to compute a new dictionary, named grade_lists, that maps names of students to lists of their exam grades. The grades should be converted from strings to integers. 
For instance, grade_lists['Thorny'] == [100, 90, 80].", "# Create a dict mapping names to lists of grades.\n### BEGIN SOLUTION (LIST COMPREHENSIONS)\ngrade_lists = {L[0]: [int(g) for g in L[1:]] for L in grades[1:]}\n### END SOLUTION\n\n### BEGIN SOLUTION (FOR LOOP)\ngrade_lists = {}\nfor x in range (1, len(grades)):\n stu_grades = []\n for y in range(1, len(grades[x])):\n stu_grades.append(int(grades[x][y]))\n grade_lists[grades[x][0]] = stu_grades\n### END SOLUTION\n\n# `grade_lists_test`: Test cell\nprint(grade_lists)\nassert type(grade_lists) is dict, \"Did not create a dictionary.\"\nassert len(grade_lists) == len(grades)-1, \"Dictionary has the wrong number of entries.\"\nassert {'Thorny', 'Mac', 'Farva', 'Rabbit', 'Ursula', 'Foster'} == set(grade_lists.keys()), \"Dictionary has the wrong keys.\"\nassert grade_lists['Thorny'] == [100, 90, 80], 'Wrong grades for: Thorny'\nassert grade_lists['Mac'] == [88, 99, 111], 'Wrong grades for: Mac'\nassert grade_lists['Farva'] == [45, 56, 67], 'Wrong grades for: Farva'\nassert grade_lists['Rabbit'] == [59, 61, 67], 'Wrong grades for: Rabbit'\nassert grade_lists['Ursula'] == [73, 79, 83], 'Wrong grades for: Ursula'\nassert grade_lists['Foster'] == [89, 97, 101], 'Wrong grades for: Foster'\nprint(\"\\n(Passed!)\")", "Exercise 3 (grade_dicts_test: 2 points). Write some code to compute a new dictionary, grade_dicts, that maps names of students to dictionaries containing their scores. Each entry of this scores dictionary should be keyed on assignment name and hold the corresponding grade as an integer. 
For instance, grade_dicts['Thorny']['Exam 1'] == 100.", "# Create a dict mapping names to dictionaries of grades.\n### BEGIN SOLUTION (LIST COMPREHENSION)\ngrade_dicts = {L[0]: dict(zip(assignments, [int(g) for g in L[1:]])) for L in grades[1:]}\n### END SOLUTION\n\n# Create a dict mapping names to dictionaries of grades.\n### BEGIN SOLUTION (FOR LOOP)\ngrade_dicts = {}\nfor x in range(1, len(grades)):\n stu_grades = []\n for y in range(1, len(grades[x])):\n stu_grades.append(int(grades[x][y]))\n grade_dicts[grades[x][0]] = dict(zip(assignments, stu_grades))\n### END SOLUTION\n\n# `grade_dicts_test`: Test cell\nprint(grade_dicts)\nassert type(grade_dicts) is dict, \"Did not create a dictionary.\"\nassert len(grade_dicts) == len(grades)-1, \"Dictionary has the wrong number of entries.\"\nassert {'Thorny', 'Mac', 'Farva', 'Rabbit', 'Ursula', 'Foster'} == set(grade_dicts.keys()), \"Dictionary has the wrong keys.\"\nassert grade_dicts['Foster']['Exam 1'] == 89, 'Wrong score'\nassert grade_dicts['Foster']['Exam 3'] == 101, 'Wrong score'\nassert grade_dicts['Foster']['Exam 2'] == 97, 'Wrong score'\nassert grade_dicts['Ursula']['Exam 1'] == 73, 'Wrong score'\nassert grade_dicts['Ursula']['Exam 3'] == 83, 'Wrong score'\nassert grade_dicts['Ursula']['Exam 2'] == 79, 'Wrong score'\nassert grade_dicts['Rabbit']['Exam 1'] == 59, 'Wrong score'\nassert grade_dicts['Rabbit']['Exam 3'] == 67, 'Wrong score'\nassert grade_dicts['Rabbit']['Exam 2'] == 61, 'Wrong score'\nassert grade_dicts['Mac']['Exam 1'] == 88, 'Wrong score'\nassert grade_dicts['Mac']['Exam 3'] == 111, 'Wrong score'\nassert grade_dicts['Mac']['Exam 2'] == 99, 'Wrong score'\nassert grade_dicts['Farva']['Exam 1'] == 45, 'Wrong score'\nassert grade_dicts['Farva']['Exam 3'] == 67, 'Wrong score'\nassert grade_dicts['Farva']['Exam 2'] == 56, 'Wrong score'\nassert grade_dicts['Thorny']['Exam 1'] == 100, 'Wrong score'\nassert grade_dicts['Thorny']['Exam 3'] == 80, 'Wrong score'\nassert grade_dicts['Thorny']['Exam 2'] == 90, 
'Wrong score'\nprint(\"\\n(Passed!)\")", "Exercise 5 (grades_by_assignment_test: 2 points). Write some code to compute a dictionary named grades_by_assignment, whose keys are assignment (exam) names and whose values are lists of scores over all students on that assignment. For instance, grades_by_assignment['Exam 1'] == [100, 88, 45, 59, 73, 89].", "### BEGIN SOLUTION (LIST COMPREHENSION)\ngrades_by_assignment = {a: [int(L[k]) for L in grades[1:]] for k, a in zip(range(1, 4), assignments)}\n### END SOLUTION\n\n### BEGIN SOLUTION (FOR LOOP)\ngrades_by_assignment = {}\nfor k in range(0, len(assignments)): #1,2,3\n stu_assignment = []\n for m in range(1,len(grades)):\n stu_assignment.append(int(grades[m][k+1]))\n grades_by_assignment[assignments[k]] = stu_assignment\n### END SOLUTION\n\n# `grades_by_assignment_test`: Test cell\nprint(grades_by_assignment)\nassert type(grades_by_assignment) is dict, \"Output is not a dictionary.\"\nassert len(grades_by_assignment) == 3, \"Wrong number of assignments.\"\nassert grades_by_assignment['Exam 1'] == [100, 88, 45, 59, 73, 89], 'Wrong grades list'\nassert grades_by_assignment['Exam 3'] == [80, 111, 67, 67, 83, 101], 'Wrong grades list'\nassert grades_by_assignment['Exam 2'] == [90, 99, 56, 61, 79, 97], 'Wrong grades list'\nprint(\"\\n(Passed!)\")" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
aitatanit/Probabilistic-Programming-and-Bayesian-Methods-for-Hackers
Chapter7_BayesianMachineLearning/DontOverfit.ipynb
mit
[ "Implementation of Salimans' Don't Overfit submission\nFrom Kaggle\n\nIn order to achieve this we have created a simulated data set with 200 variables and 20,000 cases. An ‘equation’ based on this data was created in order to generate a Target to be predicted. Given the all 20,000 cases, the problem is very easy to solve – but you only get given the Target value of 250 cases – the task is to build a model that gives the best predictions on the remaining 19,750 cases.", "import gzip\nimport requests\nimport zipfile\n\nurl = \"https://dl.dropbox.com/s/lnly9gw8pb1xhir/overfitting.zip\"\n\n\nresults = requests.get(url)\n\nimport StringIO\nz = zipfile.ZipFile(StringIO.StringIO(results.content))\n# z.extractall()\n\nz.extractall()\n\nz.namelist()\n\nd = z.open('overfitting.csv')\nd.readline()\n\nimport numpy as np\n\nM = np.fromstring(d.read(), sep=\",\")\n\nlen(d.read())\n\nnp.fromstring?\n\ndata = np.loadtxt(\"overfitting.csv\", delimiter=\",\", skiprows=1)\n\nprint \"\"\"\nThere are also 5 other fields,\n\ncase_id - 1 to 20,000, a unique identifier for each row\n\ntrain - 1/0, this is a flag for the first 250 rows which are the training dataset\n\nTarget_Practice - we have provided all 20,000 Targets for this model, so you can develop your method completely off line.\n\nTarget_Leaderboard - only 250 Targets are provided. You submit your predictions for the remaining 19,750 to the Kaggle leaderboard. \n\nTarget_Evaluate - again only 250 Targets are provided. 
Those competitors who beat the 'benchmark' on the Leaderboard will be asked to make one further submission for the Evaluation model.\n\n\"\"\"\n\ndata.shape\n\nix_training = data[:, 1] == 1\nix_testing = data[:, 1] == 0\n\ntraining_data = data[ix_training, 5:]\ntesting_data = data[ix_testing, 5:]\n\ntraining_labels = data[ix_training, 2]\ntesting_labels = data[ix_testing, 2]\n\nprint \"training:\", training_data.shape, training_labels.shape\nprint \"testing: \", testing_data.shape, testing_labels.shape", "Develop Tim's model\nHe mentions that the X variables are from a Uniform distribution. Let's investigate this:", "figsize(12, 4)\n\nhist(training_data.flatten())\nprint training_data.shape[0] * training_data.shape[1]", "looks pretty right", "import pymc as pm\n\nto_include = pm.Bernoulli(\"to_include\", 0.5, size=200)\n\ncoef = pm.Uniform(\"coefs\", 0, 1, size=200)\n\n@pm.deterministic\ndef Z(coef=coef, to_include=to_include, data=training_data):\n ym = np.dot(to_include * training_data, coef)\n return ym - ym.mean()\n\n@pm.deterministic\ndef T(z=Z):\n return 0.45 * (np.sign(z) + 1.1)\n\nobs = pm.Bernoulli(\"obs\", T, value=training_labels, observed=True)\n\nmodel = pm.Model([to_include, coef, Z, T, obs])\nmap_ = pm.MAP(model)\nmap_.fit()\n\nmcmc = pm.MCMC(model)\n\nmcmc.sample(100000, 90000, 1)\n\n(np.round(T.value) == training_labels).mean()\n\nt_trace = mcmc.trace(\"T\")[:]\n(np.round(t_trace[-500:-400, :]).mean(axis=0) == training_labels).mean()\n\nt_mean = np.round(t_trace).mean(axis=1)\n\nimshow(t_trace[-10000:, :], aspect=\"auto\")\ncolorbar()\n\nfigsize(23, 8)\ncoef_trace = mcmc.trace(\"coefs\")[:]\nimshow(coef_trace[-10000:, :], aspect=\"auto\", cmap=pyplot.cm.RdBu, interpolation=\"none\")\n\ninclude_trace = mcmc.trace(\"to_include\")[:]\n\nfigsize(23, 8)\nimshow(include_trace[-10000:, :], aspect=\"auto\", interpolation=\"none\")" ]
[ "markdown", "code", "markdown", "code", "markdown", "code" ]
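The simulated setup described in the notebook above (200 uniform predictors, a target generated from a sparse linear rule, and only 250 labeled training rows out of 20,000) can be sketched in modern Python/NumPy. This is an illustrative assumption, not the competition's actual generator: the function name, the sparsity choice, and the thresholding rule are all hypothetical.

```python
# Hedged sketch of a "Don't Overfit"-style simulated dataset: 200 uniform
# predictors and a 0/1 target driven by a sparse linear combination.
# All names and the n_active sparsity choice are illustrative assumptions.
import numpy as np

def simulate_overfitting_data(n_cases=20000, n_vars=200, n_active=30, seed=0):
    """Return (X, y): uniform predictors and a 0/1 target from a sparse rule."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(0.0, 1.0, size=(n_cases, n_vars))
    # Only a subset of the variables actually drives the target.
    active = rng.choice(n_vars, size=n_active, replace=False)
    coefs = rng.uniform(0.0, 1.0, size=n_active)
    z = X[:, active] @ coefs
    y = (z > z.mean()).astype(int)  # roughly balanced 0/1 target
    return X, y

X, y = simulate_overfitting_data()
# The competition split: train on the first 250 rows, predict the rest.
X_train, y_train = X[:250], y[:250]
X_test, y_test = X[250:], y[250:]
```

Any model fit on the 250-row slice can then be scored against the 19,750 held-out rows, mirroring the leaderboard setup the notebook describes.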
PyLCARS/PythonUberHDL
myHDL_DigLogicFundamentals/myHDL_BitOperations/BitwiseBehavior_in_myHDL.ipynb
bsd-3-clause
[ "\\title{Bitwise Behavior in myHDL: Selecting, Shifting, Concatenation, Slicing}\n\\author{Steven K Armour}\n\\maketitle\n<h1>Table of Contents<span class=\"tocSkip\"></span></h1>\n<div class=\"toc\" style=\"margin-top: 1em;\"><ul class=\"toc-item\"><li><span><a href=\"#References\" data-toc-modified-id=\"References-1\"><span class=\"toc-item-num\">1&nbsp;&nbsp;</span>References</a></span></li><li><span><a href=\"#Libraries-and-Helper-functions\" data-toc-modified-id=\"Libraries-and-Helper-functions-2\"><span class=\"toc-item-num\">2&nbsp;&nbsp;</span>Libraries and Helper functions</a></span></li><li><span><a href=\"#myHDL-Bit-Indexing\" data-toc-modified-id=\"myHDL-Bit-Indexing-3\"><span class=\"toc-item-num\">3&nbsp;&nbsp;</span>myHDL Bit Indexing</a></span><ul class=\"toc-item\"><li><span><a href=\"#Expected-Indexing-Selection-Behavior\" data-toc-modified-id=\"Expected-Indexing-Selection-Behavior-3.1\"><span class=\"toc-item-num\">3.1&nbsp;&nbsp;</span>Expected Indexing Selection Behavior</a></span></li><li><span><a href=\"#Attempted-Selection-with-Python-Negative-Warping\" data-toc-modified-id=\"Attempted-Selection-with-Python-Negative-Warping-3.2\"><span class=\"toc-item-num\">3.2&nbsp;&nbsp;</span>Attempted Selection with Python Negative Warping</a></span></li><li><span><a href=\"#Selecting-above-the-MSB\" data-toc-modified-id=\"Selecting-above-the-MSB-3.3\"><span class=\"toc-item-num\">3.3&nbsp;&nbsp;</span>Selecting above the MSB</a></span></li><li><span><a href=\"#Bit-Selection-of-Signal\" data-toc-modified-id=\"Bit-Selection-of-Signal-3.4\"><span class=\"toc-item-num\">3.4&nbsp;&nbsp;</span>Bit Selection of <code>Signal</code></a></span></li><li><span><a href=\"#myHDL-Bit-Selection-Demo\" data-toc-modified-id=\"myHDL-Bit-Selection-Demo-3.5\"><span class=\"toc-item-num\">3.5&nbsp;&nbsp;</span>myHDL Bit Selection Demo</a></span><ul class=\"toc-item\"><li><span><a href=\"#Bit-Assignment\" data-toc-modified-id=\"Bit-Assignment-3.5.1\"><span 
class=\"toc-item-num\">3.5.1&nbsp;&nbsp;</span>Bit Assignment</a></span></li></ul></li><li><span><a href=\"#myHDL-Testing\" data-toc-modified-id=\"myHDL-Testing-3.6\"><span class=\"toc-item-num\">3.6&nbsp;&nbsp;</span>myHDL Testing</a></span></li><li><span><a href=\"#Verilog-Conversion\" data-toc-modified-id=\"Verilog-Conversion-3.7\"><span class=\"toc-item-num\">3.7&nbsp;&nbsp;</span>Verilog Conversion</a></span><ul class=\"toc-item\"><li><span><a href=\"#Verilog-Conversion-Error\" data-toc-modified-id=\"Verilog-Conversion-Error-3.7.1\"><span class=\"toc-item-num\">3.7.1&nbsp;&nbsp;</span>Verilog Conversion Error</a></span></li></ul></li><li><span><a href=\"#VHDL-Conversion\" data-toc-modified-id=\"VHDL-Conversion-3.8\"><span class=\"toc-item-num\">3.8&nbsp;&nbsp;</span>VHDL Conversion</a></span><ul class=\"toc-item\"><li><span><a href=\"#VHDL-Conversion-Issue\" data-toc-modified-id=\"VHDL-Conversion-Issue-3.8.1\"><span class=\"toc-item-num\">3.8.1&nbsp;&nbsp;</span>VHDL Conversion Issue</a></span></li></ul></li><li><span><a href=\"#myHDL-to-Verilog/VHDL-Testbench\" data-toc-modified-id=\"myHDL-to-Verilog/VHDL-Testbench-3.9\"><span class=\"toc-item-num\">3.9&nbsp;&nbsp;</span>myHDL to Verilog/VHDL Testbench</a></span><ul class=\"toc-item\"><li><span><a href=\"#Verilog-Testbench\" data-toc-modified-id=\"Verilog-Testbench-3.9.1\"><span class=\"toc-item-num\">3.9.1&nbsp;&nbsp;</span>Verilog Testbench</a></span><ul class=\"toc-item\"><li><span><a href=\"#Verilog-Testbench-Conversion-Issue\" data-toc-modified-id=\"Verilog-Testbench-Conversion-Issue-3.9.1.1\"><span class=\"toc-item-num\">3.9.1.1&nbsp;&nbsp;</span>Verilog Testbench Conversion Issue</a></span></li></ul></li><li><span><a href=\"#VHDL-Testbench\" data-toc-modified-id=\"VHDL-Testbench-3.9.2\"><span class=\"toc-item-num\">3.9.2&nbsp;&nbsp;</span>VHDL Testbench</a></span><ul class=\"toc-item\"><li><span><a href=\"#VHDL-Testbench-Conversion-Issue\" 
data-toc-modified-id=\"VHDL-Testbench-Conversion-Issue-3.9.2.1\"><span class=\"toc-item-num\">3.9.2.1&nbsp;&nbsp;</span>VHDL Testbench Conversion Issue</a></span></li></ul></li></ul></li></ul></li><li><span><a href=\"#myHDL-shift-(&lt;&lt;/&gt;&gt;)-behavior\" data-toc-modified-id=\"myHDL-shift-(<</>>)-behavior-4\"><span class=\"toc-item-num\">4&nbsp;&nbsp;</span>myHDL shift (<code>&lt;&lt;</code>/<code>&gt;&gt;</code>) behavior</a></span><ul class=\"toc-item\"><li><span><a href=\"#Left-Shift-(&lt;&lt;)\" data-toc-modified-id=\"Left-Shift-(<<)-4.1\"><span class=\"toc-item-num\">4.1&nbsp;&nbsp;</span>Left Shift (&lt;&lt;)</a></span><ul class=\"toc-item\"><li><span><a href=\"#Left-Shifting-with-intbv\" data-toc-modified-id=\"Left-Shifting-with-intbv-4.1.1\"><span class=\"toc-item-num\">4.1.1&nbsp;&nbsp;</span>Left Shifting with <code>intbv</code></a></span></li><li><span><a href=\"#Left-Shifting-with-signed-intbv\" data-toc-modified-id=\"Left-Shifting-with-signed-intbv-4.1.2\"><span class=\"toc-item-num\">4.1.2&nbsp;&nbsp;</span>Left Shifting with signed <code>intbv</code></a></span></li><li><span><a href=\"#Left-Shifting-with-modbv\" data-toc-modified-id=\"Left-Shifting-with-modbv-4.1.3\"><span class=\"toc-item-num\">4.1.3&nbsp;&nbsp;</span>Left Shifting with <code>modbv</code></a></span></li></ul></li><li><span><a href=\"#Right-Shift-(&gt;&gt;)\" data-toc-modified-id=\"Right-Shift-(>>)-4.2\"><span class=\"toc-item-num\">4.2&nbsp;&nbsp;</span>Right Shift (<code>&gt;&gt;</code>)</a></span><ul class=\"toc-item\"><li><span><a href=\"#Right-Shifting-with-intbv\" data-toc-modified-id=\"Right-Shifting-with-intbv-4.2.1\"><span class=\"toc-item-num\">4.2.1&nbsp;&nbsp;</span>Right Shifting with <code>intbv</code></a></span></li><li><span><a href=\"#Right-Shifting-with-signed-intbv\" data-toc-modified-id=\"Right-Shifting-with-signed-intbv-4.2.2\"><span class=\"toc-item-num\">4.2.2&nbsp;&nbsp;</span>Right Shifting with signed <code>intbv</code></a></span></li><li><span><a 
href=\"#Right-Shifting-with-modbv\" data-toc-modified-id=\"Right-Shifting-with-modbv-4.2.3\"><span class=\"toc-item-num\">4.2.3&nbsp;&nbsp;</span>Right Shifting with <code>modbv</code></a></span></li></ul></li><li><span><a href=\"#myHDL-Shifting-Demo-Module\" data-toc-modified-id=\"myHDL-Shifting-Demo-Module-4.3\"><span class=\"toc-item-num\">4.3&nbsp;&nbsp;</span>myHDL Shifting Demo Module</a></span></li><li><span><a href=\"#myHDL-Testing\" data-toc-modified-id=\"myHDL-Testing-4.4\"><span class=\"toc-item-num\">4.4&nbsp;&nbsp;</span>myHDL Testing</a></span></li><li><span><a href=\"#Verilog-Conversion\" data-toc-modified-id=\"Verilog-Conversion-4.5\"><span class=\"toc-item-num\">4.5&nbsp;&nbsp;</span>Verilog Conversion</a></span></li><li><span><a href=\"#VHDL-Conversion\" data-toc-modified-id=\"VHDL-Conversion-4.6\"><span class=\"toc-item-num\">4.6&nbsp;&nbsp;</span>VHDL Conversion</a></span></li><li><span><a href=\"#myHDL-to-Verilog/VHDL-Testbench\" data-toc-modified-id=\"myHDL-to-Verilog/VHDL-Testbench-4.7\"><span class=\"toc-item-num\">4.7&nbsp;&nbsp;</span>myHDL to Verilog/VHDL Testbench</a></span><ul class=\"toc-item\"><li><span><a href=\"#Verilog-Testbench\" data-toc-modified-id=\"Verilog-Testbench-4.7.1\"><span class=\"toc-item-num\">4.7.1&nbsp;&nbsp;</span>Verilog Testbench</a></span><ul class=\"toc-item\"><li><span><a href=\"#Verilog-Testbench-Conversion-Issue\" data-toc-modified-id=\"Verilog-Testbench-Conversion-Issue-4.7.1.1\"><span class=\"toc-item-num\">4.7.1.1&nbsp;&nbsp;</span>Verilog Testbench Conversion Issue</a></span></li></ul></li><li><span><a href=\"#VHDL-Testbench\" data-toc-modified-id=\"VHDL-Testbench-4.7.2\"><span class=\"toc-item-num\">4.7.2&nbsp;&nbsp;</span>VHDL Testbench</a></span><ul class=\"toc-item\"><li><span><a href=\"#VHDL-Testbench-Conversion-Issue\" data-toc-modified-id=\"VHDL-Testbench-Conversion-Issue-4.7.2.1\"><span class=\"toc-item-num\">4.7.2.1&nbsp;&nbsp;</span>VHDL Testbench Conversion 
Issue</a></span></li></ul></li></ul></li></ul></li><li><span><a href=\"#myHDL-concat--behavior\" data-toc-modified-id=\"myHDL-concat--behavior-5\"><span class=\"toc-item-num\">5&nbsp;&nbsp;</span>myHDL <code>concat</code> behavior</a></span><ul class=\"toc-item\"><li><span><a href=\"#myHDL-concat-Demo\" data-toc-modified-id=\"myHDL-concat-Demo-5.1\"><span class=\"toc-item-num\">5.1&nbsp;&nbsp;</span>myHDL <code>concat</code> Demo</a></span></li><li><span><a href=\"#myHDL-Testing\" data-toc-modified-id=\"myHDL-Testing-5.2\"><span class=\"toc-item-num\">5.2&nbsp;&nbsp;</span>myHDL Testing</a></span></li><li><span><a href=\"#Verilog-Conversion\" data-toc-modified-id=\"Verilog-Conversion-5.3\"><span class=\"toc-item-num\">5.3&nbsp;&nbsp;</span>Verilog Conversion</a></span></li><li><span><a href=\"#VHDL-Conversion\" data-toc-modified-id=\"VHDL-Conversion-5.4\"><span class=\"toc-item-num\">5.4&nbsp;&nbsp;</span>VHDL Conversion</a></span></li><li><span><a href=\"#myHDL-to-Verilog/VHDL-Testbench\" data-toc-modified-id=\"myHDL-to-Verilog/VHDL-Testbench-5.5\"><span class=\"toc-item-num\">5.5&nbsp;&nbsp;</span>myHDL to Verilog/VHDL Testbench</a></span><ul class=\"toc-item\"><li><span><a href=\"#Verilog-Testbench\" data-toc-modified-id=\"Verilog-Testbench-5.5.1\"><span class=\"toc-item-num\">5.5.1&nbsp;&nbsp;</span>Verilog Testbench</a></span></li><li><span><a href=\"#VHDL-Testbench\" data-toc-modified-id=\"VHDL-Testbench-5.5.2\"><span class=\"toc-item-num\">5.5.2&nbsp;&nbsp;</span>VHDL Testbench</a></span><ul class=\"toc-item\"><li><span><a href=\"#VHDL-Testbench-Conversion-Issue\" data-toc-modified-id=\"VHDL-Testbench-Conversion-Issue-5.5.2.1\"><span class=\"toc-item-num\">5.5.2.1&nbsp;&nbsp;</span>VHDL Testbench Conversion Issue</a></span></li></ul></li></ul></li></ul></li><li><span><a href=\"#myHDL-Bitslicing-Behavior\" data-toc-modified-id=\"myHDL-Bitslicing-Behavior-6\"><span class=\"toc-item-num\">6&nbsp;&nbsp;</span>myHDL Bitslicing Behavior</a></span><ul 
class=\"toc-item\"><li><span><a href=\"#Slicing-intbv\" data-toc-modified-id=\"Slicing-intbv-6.1\"><span class=\"toc-item-num\">6.1&nbsp;&nbsp;</span>Slicing <code>intbv</code></a></span></li><li><span><a href=\"#Slicing-Signed-intbv\" data-toc-modified-id=\"Slicing-Signed-intbv-6.2\"><span class=\"toc-item-num\">6.2&nbsp;&nbsp;</span>Slicing Signed <code>intbv</code></a></span></li><li><span><a href=\"#Slicing-modbv\" data-toc-modified-id=\"Slicing-modbv-6.3\"><span class=\"toc-item-num\">6.3&nbsp;&nbsp;</span>Slicing <code>modbv</code></a></span></li><li><span><a href=\"#myHDL-BitSlicing-Demo-Module\" data-toc-modified-id=\"myHDL-BitSlicing-Demo-Module-6.4\"><span class=\"toc-item-num\">6.4&nbsp;&nbsp;</span>myHDL BitSlicing Demo Module</a></span></li><li><span><a href=\"#myHDL-Testing\" data-toc-modified-id=\"myHDL-Testing-6.5\"><span class=\"toc-item-num\">6.5&nbsp;&nbsp;</span>myHDL Testing</a></span></li><li><span><a href=\"#Verilog-Conversion\" data-toc-modified-id=\"Verilog-Conversion-6.6\"><span class=\"toc-item-num\">6.6&nbsp;&nbsp;</span>Verilog Conversion</a></span><ul class=\"toc-item\"><li><span><a href=\"#Verilog-Conversion-Issue\" data-toc-modified-id=\"Verilog-Conversion-Issue-6.6.1\"><span class=\"toc-item-num\">6.6.1&nbsp;&nbsp;</span>Verilog Conversion Issue</a></span></li></ul></li><li><span><a href=\"#VHDL-Conversion\" data-toc-modified-id=\"VHDL-Conversion-6.7\"><span class=\"toc-item-num\">6.7&nbsp;&nbsp;</span>VHDL Conversion</a></span><ul class=\"toc-item\"><li><span><a href=\"#VHDL-Conversion-Issue\" data-toc-modified-id=\"VHDL-Conversion-Issue-6.7.1\"><span class=\"toc-item-num\">6.7.1&nbsp;&nbsp;</span>VHDL Conversion Issue</a></span></li></ul></li><li><span><a href=\"#myHDL-to-Verilog/VHDL-Testbench\" data-toc-modified-id=\"myHDL-to-Verilog/VHDL-Testbench-6.8\"><span class=\"toc-item-num\">6.8&nbsp;&nbsp;</span>myHDL to Verilog/VHDL Testbench</a></span><ul class=\"toc-item\"><li><span><a href=\"#Verilog-Testbench\" 
data-toc-modified-id=\"Verilog-Testbench-6.8.1\"><span class=\"toc-item-num\">6.8.1&nbsp;&nbsp;</span>Verilog Testbench</a></span></li><li><span><a href=\"#VHDL-Testbench\" data-toc-modified-id=\"VHDL-Testbench-6.8.2\"><span class=\"toc-item-num\">6.8.2&nbsp;&nbsp;</span>VHDL Testbench</a></span></li></ul></li></ul></li></ul></div>\n\nReferences\n@misc{myhdl_2018,\ntitle={Hardware-oriented types — MyHDL 0.10 documentation},\nurl={http://docs.myhdl.org/en/stable/manual/hwtypes.html},\njournal={Docs.myhdl.org},\nauthor={myHDL},\nyear={2018}\n},\n@misc{vandenbout_2018,\ntitle={pygmyhdl 0.0.3 documentation},\nurl={https://xesscorp.github.io/pygmyhdl/docs/_build/singlehtml/index.html},\njournal={Xesscorp.github.io},\nauthor={Vandenbout, Dave},\nyear={2018}\n}\nLibraries and Helper functions", "#This notebook also uses the `(some) LaTeX environments for Jupyter`\n#https://github.com/ProfFan/latex_envs which is part of the\n#jupyter_contrib_nbextensions package\n\nfrom myhdl import *\nfrom myhdlpeek import Peeker\nimport numpy as np\nimport pandas as pd\nimport matplotlib.pyplot as plt\n%matplotlib inline\n\nfrom sympy import *\ninit_printing()\n\nimport random\n\n#https://github.com/jrjohansson/version_information\n%load_ext version_information\n%version_information myhdl, myhdlpeek, numpy, pandas, matplotlib, sympy, random\n\n#helper functions to read the generated .v and .vhd files into python\ndef VerilogTextReader(loc, printresult=True):\n with open(f'{loc}.v', 'r') as vText:\n VerilogText=vText.read()\n if printresult:\n print(f'***Verilog module from {loc}.v***\\n\\n', VerilogText)\n return VerilogText\n\ndef VHDLTextReader(loc, printresult=True):\n with open(f'{loc}.vhd', 'r') as vText:\n VHDLText=vText.read()\n if printresult:\n print(f'***VHDL module from {loc}.vhd***\\n\\n', VHDLText)\n return VHDLText\n\nCountVal=17\nBitSize=int(np.log2(CountVal))+1; BitSize", "myHDL Bit Indexing\nBit Indexing is the act of selecting or assigning one of the bits in 
a Bit Vector\nExpected Indexing Selection Behavior", "TV=intbv(-93)[8:].signed()\nprint(f'Value:{int(TV)}, Binary {bin(TV)}')\n\nfor i in range(len(TV)):\n print(f'Bit from LSB: {i}, Selected Bit: {int(TV[i])}')", "which shows that when selecting a single bit from a BitVector, index [0] selects the Least Significant Bit (LSB, inclusive behavior), while the Most Significant Bit (MSB) sits at index BitVector length - 1, since the upper bound is noninclusive\nAttempted Selection with Python Negative Wrapping", "try:\n TV[-1]\nexcept ValueError:\n print(\"ValueError: negative shift count\")", "This means that negative indexing using Python's list-selection wrap-around is NOT implemented in a myHDL intbv", "TV=modbv(-93)[8:].signed()\nprint(f'Value:{int(TV)}, Binary {bin(TV)}')\ntry:\n TV[-1]\nexcept ValueError:\n print(\"ValueError: negative shift count\")", "nor is negative wrapping supported by the modbv\nSelecting above the MSB", "TV=intbv(93)[8:]\nTV_S=intbv(-93)[8:].signed()\nTV_M=modbv(-93)[8:].signed()\nprint(f'`intbv`:Value:{int(TV)}, Binary {bin(TV)}, [8]:{int(TV[8])}, [9]:{int(TV[9])}')\nprint(f'`intbv signed`:Value:{int(TV_S)}, Binary {bin(TV_S)}, [8]:{int(TV_S[8])}, [9]:{int(TV_S[9])}')\nprint(f'`modbv`:Value:{int(TV_M)}, Binary {bin(TV_M)}, [8]:{int(TV_M[8])}, [9]:{int(TV_M[9])}')", "Thus selecting above the MSB will generate a 0 if the Bit Vector is not signed, whereas selecting above the MSB of a signed (here negative) Bit Vector reproduces the sign bit, a 1.\nBit Selection of Signal", "TV=Signal(intbv(93)[8:])\n\nTV[0], TV(0), TV[9], TV(9)", "The difference is that, outside of a generator, bit selection on a Signal using [] returns only a value; a Signal is returned only by using (). 
This is important to know since only a Signal can be converted to registers/wires in the conversion from myHDL to Verilog/VHDL \nmyHDL Bit Selection Demo", "@block\ndef BitSelectDemo(Index, Res, SignRes):\n \"\"\"\n Bit Selection Demo\n \n Input:\n Index(4BitVec): value for selection from internal references \n Output:\n Res(8BitVec): BitVector with Bit Location set from `Index` from \n reference internal 8Bit `intbv` with value 93\n SignRes(8BitVec Signed): signed BitVector with Bit Location set from `Index` from \n reference internal signed 8Bit `intbv` with value -93\n \"\"\"\n Ref=Signal(intbv(93)[8:])\n RefS=Signal(intbv(-93)[8:].signed())\n @always_comb\n def logic():\n Res.next[Index]=Ref[Index]\n SignRes.next[Index]=RefS[Index]\n\n return instances()", "Bit Assignment\nNote that the above module also shows how to perform bit selection assignment: the bit at position Index of the reference (Ref or RefS) is read and then written to position Index of the output signal (Res or SignRes). 
Notice that the syntax is\nVariable.next[index]=\nThe same structure is also used in setting bit slices, so that a bit slice assignment is\nVariable.next[MSB:LSB]=\nmyHDL Testing", "Peeker.clear()\nIndex=Signal(intbv(0)[4:]); Peeker(Index, 'Index')\nRes=Signal(intbv(0)[8:]); Peeker(Res, 'Res')\nSignRes=Signal(intbv(0)[8:].signed()); Peeker(SignRes, 'SignRes')\n\nDUT=BitSelectDemo(Index, Res, SignRes)\n\ndef BitSelectDemo_TB():\n \"\"\"\n myHDL only Testbench\n \"\"\"\n @instance\n def stimules():\n for i in range(7):\n Index.next=i\n yield delay(1)\n \n raise StopSimulation()\n \n return instances()\nsim=Simulation(DUT, BitSelectDemo_TB(), *Peeker.instances()).run() ", "Note that if the for loop range were increased beyond 7, an error would be triggered.", "Peeker.to_wavedrom('Index', 'Res', 'SignRes')\n\nBitSelectDemoData=Peeker.to_dataframe() \nBitSelectDemoData['Res Bin']=BitSelectDemoData['Res'].apply(lambda Row: bin(Row, 8), 1)\nBitSelectDemoData['SignRes Bin']=BitSelectDemoData['SignRes'].apply(lambda Row: bin(Row, 8), 1)\nBitSelectDemoData=BitSelectDemoData[['Index', 'Res', 'Res Bin', 'SignRes', 'SignRes Bin']]\nBitSelectDemoData", "Verilog Conversion\nVerilog Conversion Error\nLine 24 in the conversion of BitSelectDemo to BitSelectDemo.v is incorrect. 
The myHDL source line is\nRefS=Signal(intbv(-93)[8:].signed())\nbut the converted line becomes \nassign RefS = 8'd-93;\nwhich instead needs to become \nassign RefS = -8'd93;\nin BitSelectDemo.v", "DUT.convert()\nVerilogTextReader('BitSelectDemo');", "\\begin{figure}\n\\centerline{\\includegraphics[width=10cm]{BitSelectDemo_v_RTL.png}}\n\\caption{\\label{fig:BSDVRTL} BitSelectDemo Verilog RTL schematic with corrected errors; Xilinx Vivado 2017.4}\n\\end{figure}\n\\begin{figure}\n\\centerline{\\includegraphics[width=10cm]{BitSelectDemo_v_SYN.png}}\n\\caption{\\label{fig:BSDVSYN} BitSelectDemo Verilog Synthesized Schematic with corrected errors; Xilinx Vivado 2017.4}\n\\end{figure}\nVHDL Conversion\nVHDL Conversion Issue\nThe resulting BitSelectDemo.vhd from BitSelectDemo contains a line that uses a library, work.pck_myhdl_010.all, that is created when the conversion is run. Make sure to import this file along with BitSelectDemo.vhd.", "DUT.convert('VHDL')\nVHDLTextReader('BitSelectDemo');", "\\begin{figure}\n\\centerline{\\includegraphics[width=10cm]{BitSelectDemo_vhd_RTL.png}}\n\\caption{\\label{fig:BSDVHDRTL} BitSelectDemo VHDL RTL schematic with corrected errors; Xilinx Vivado 2017.4}\n\\end{figure}\n\\begin{figure}\n\\centerline{\\includegraphics[width=10cm]{BitSelectDemo_vhd_SYN.png}}\n\\caption{\\label{fig:BSDVHDSYN} BitSelectDemo VHDL Synthesized Schematic with corrected errors; Xilinx Vivado 2017.4}\n\\end{figure}\nmyHDL to Verilog/VHDL Testbench", "@block\ndef BitSelectDemo_TB_V_VHDL():\n \"\"\"\n myHDL -> Verilog/VHDL Testbench for `BitSelectDemo`\n \"\"\"\n\n Index=Signal(intbv(0)[4:])\n Res=Signal(intbv(0)[8:])\n SignRes=Signal(intbv(0)[8:].signed())\n\n @always_comb\n def print_data():\n print(Index, Res, SignRes)\n\n DUT=BitSelectDemo(Index, Res, SignRes)\n\n @instance\n def stimules():\n for i in range(7):\n Index.next=i\n yield delay(1)\n \n raise StopSimulation()\n \n return instances()\nTB=BitSelectDemo_TB_V_VHDL()", "Verilog 
Testbench\nVerilog Testbench Conversion Issue\nThis testbench will work after\nassign RefS = 8'd-93;\nis changed to\nassign RefS = -8'd93;", "TB.convert(hdl=\"Verilog\", initial_values=True)\nVerilogTextReader('BitSelectDemo_TB_V_VHDL');", "VHDL Testbench\nVHDL Testbench Conversion Issue\nThis testbench does not work in Vivado", "TB.convert(hdl=\"VHDL\", initial_values=True)\nVHDLTextReader('BitSelectDemo_TB_V_VHDL');", "myHDL shift (&lt;&lt;/&gt;&gt;) behavior\nLeft Shift (<<)\nLeft Shifting with intbv", "#Left Shift test with intbv\n#initialize \nTV=intbv(52)[8:]\nprint(TV, bin(TV, 8))\n#demonstrate left shifting with intbv\nfor i in range(8):\n LSRes=TV<<i\n print(f'Left Shift<<{i}; Binary: {bin(LSRes)}, BitLen: {len(bin(LSRes))}, Value:{LSRes}')", "Left Shifting with signed intbv", "#Left Shift test with intbv signed\n#initialize \nTV=intbv(-52)[8:].signed()\nprint(TV, bin(TV, 8))\n#demonstrate left shifting with intbv signed\nfor i in range(8):\n LSRes=(TV<<i).signed()\n print(f'Left Shift<<{i}; Binary: {bin(LSRes)}, BitLen: {len(bin(LSRes))}, Value:{LSRes}')", "Left Shifting with modbv", "#Left Shift test with modbv\n#initialize \nTV=modbv(52)[8:]\nprint(TV, bin(TV, 8))\n#demonstrate left shifting with modbv\nfor i in range(8):\n LSRes=(TV<<i).signed()\n print(f'Left Shift<<{i}; Binary: {bin(LSRes)}, BitLen: {len(bin(LSRes))}, Value:{LSRes}')", "As can be seen, left shifting appends a number of zeros equal to the shift amount to the end of the binary representation of the value. 
This then increases, with each left shift, the size of the register needed to hold the result unless the result is deliberately truncated\nRight Shift (&gt;&gt;)\nRight Shifting with intbv", "#Right Shift test with intbv\n#initialize \nTV=intbv(52)[8:]\nprint(TV, bin(TV, 8))\n#demonstrate right shifting with intbv\nfor i in range(8):\n RSRes=TV>>i\n print(f'Right Shift>>{i}; Binary: {bin(RSRes)}, BitLen: {len(bin(RSRes))}, Value:{RSRes}')", "Right Shifting with signed intbv", "#Right Shift test with intbv signed\n#initialize \nTV=intbv(-52)[8:].signed()\nprint(TV, bin(TV, 8))\n#demonstrate right shifting with intbv signed\nfor i in range(8):\n RSRes=(TV>>i)\n print(f'Right Shift>>{i}; Binary: {bin(RSRes)}, BitLen: {len(bin(RSRes))}, Value:{RSRes}')", "Right Shifting with modbv", "#Right Shift test with modbv\n#initialize \nTV=modbv(52)[8:]\nprint(TV, bin(TV, 8))\n#demonstrate right shifting with modbv\nfor i in range(8):\n RSRes=(TV>>i)\n print(f'Right Shift>>{i}; Binary: {bin(RSRes)}, BitLen: {len(bin(RSRes))}, Value:{RSRes}')", "As can be seen, the right shift moves values to the right by the shift amount while preserving the length of the register that is being shifted. This means that overflow will not be encountered. 
Right shifting trades that vulnerability for information loss, as any information carried in the rightmost bits is lost once it is shifted beyond the LSB of the register\nmyHDL Shifting Demo Module", "@block\ndef ShiftingDemo(ShiftVal, RSRes, LSRes):\n \"\"\"\n Module to Demo Shifting Behavior in myHDL, reference value\n -55 8Bit\n \n Input:\n ShiftVal(4BitVec): shift amount, for this demo do not\n use values greater than 7\n Output:\n RSRes(8BitVec Signed): output of Right Shifting \n LSRes (15BitVec Signed): output of Left Shifting\n \"\"\"\n RefVal=Signal(intbv(-55)[8:].signed())\n @always_comb\n def logic():\n RSRes.next=RefVal>>ShiftVal\n LSRes.next=RefVal<<ShiftVal\n return instances()", "myHDL Testing", "Peeker.clear()\nShiftVal=Signal(intbv()[4:]); Peeker(ShiftVal, 'ShiftVal')\nRSRes=Signal(intbv()[8:].signed()); Peeker(RSRes, 'RSRes')\nLSRes=Signal(intbv()[15:].signed()); Peeker(LSRes, 'LSRes')\n\nDUT=ShiftingDemo(ShiftVal, RSRes, LSRes)\n\ndef ShiftingDemo_TB():\n \"\"\"\n myHDL only Testbench for `ShiftingDemo`\n \"\"\"\n \n @instance\n def stimules():\n for i in range(8):\n ShiftVal.next=i\n yield delay(1)\n \n raise StopSimulation()\n \n return instances()\n\nsim=Simulation(DUT, ShiftingDemo_TB(), *Peeker.instances()).run() \n\nPeeker.to_wavedrom('ShiftVal', 'LSRes', 'RSRes');\n\nPeeker.to_dataframe()[['ShiftVal', 'LSRes', 'RSRes']]", "Verilog Conversion\nUnfortunately this is an unsynthesizable module as is, due to \nassign RefVal = 8'd-55; \nneeding to be changed to \nassign RefVal = -8'd55; \nafter which the module is synthesizable", "DUT.convert()\nVerilogTextReader('ShiftingDemo');", "\\begin{figure}\n\\centerline{\\includegraphics[width=10cm]{ShiftingDemo_v_RTL.png}}\n\\caption{\\label{fig:SDVRTL} ShiftingDemo Verilog RTL schematic with corrected errors; Xilinx Vivado 2017.4}\n\\end{figure}\n\\begin{figure}\n\\centerline{\\includegraphics[width=10cm]{ShiftingDemo_v_SYN.png}}\n\\caption{\\label{fig:SDVSYN} ShiftingDemo Verilog Synthesized 
Schematic with corrected errors; Xilinx Vivado 2017.4}\n\\end{figure}\nVHDL Conversion", "DUT.convert(hdl='VHDL')\nVHDLTextReader('ShiftingDemo');", "\\begin{figure}\n\\centerline{\\includegraphics[width=10cm]{ShiftingDemo_vhd_RTL.png}}\n\\caption{\\label{fig:SDVHDRTL} ShiftingDemo VHDL RTL schematic with corrected errors; Xilinx Vivado 2017.4}\n\\end{figure}\n\\begin{figure}\n\\centerline{\\includegraphics[width=10cm]{ShiftingDemo_vhd_SYN.png}}\n\\caption{\\label{fig:SDVHDSYN} ShiftingDemo VHDL Synthesized Schematic with corrected errors; Xilinx Vivado 2017.4}\n\\end{figure}\nmyHDL to Verilog/VHDL Testbench", "@block\ndef ShiftingDemo_TB_V_VHDL():\n \"\"\"\n myHDL -> Verilog/VHDL testbench for `ShiftingDemo`\n \"\"\"\n \n ShiftVal=Signal(intbv()[4:])\n RSRes=Signal(intbv()[8:].signed())\n LSRes=Signal(intbv()[15:].signed())\n \n @always_comb\n def print_data():\n print(ShiftVal, RSRes, LSRes)\n\n DUT=ShiftingDemo(ShiftVal, RSRes, LSRes)\n\n \n @instance\n def stimules():\n for i in range(8):\n ShiftVal.next=i\n yield delay(1)\n \n raise StopSimulation()\n \n return instances()\n\nTB=ShiftingDemo_TB_V_VHDL()", "Verilog Testbench\nVerilog Testbench Conversion Issue\nThis testbench will work after \nassign RefVal = 8'd-55; \nis changed to \nassign RefVal = -8'd55;", "TB.convert(hdl=\"Verilog\", initial_values=True)\nVerilogTextReader('ShiftingDemo_TB_V_VHDL');", "VHDL Testbench\nVHDL Testbench Conversion Issue\nThis testbench does not work in Vivado", "TB.convert(hdl=\"VHDL\", initial_values=True)\nVHDLTextReader('ShiftingDemo_TB_V_VHDL');", "myHDL concat behavior\nThe concat function takes its name from concatenation, which is exactly what this operator does: it joins the bits of all the signals passed to it as arguments into a single new binary word", "RefVal=intbv(25)[6:]; RefVal, bin(RefVal, 6)\n\nResult=concat(True, RefVal); Result, bin(Result)\n\nResultSigned=concat(True, RefVal).signed(); ResultSigned, 
bin(ResultSigned)", "myHDL concat Demo", "@block\ndef ConcatDemo(Res, ResS):\n \"\"\"\n `concat` demo \n Input:\n None\n Output:\n Res(7BitVec): concat result\n ResS(7BitVec Signed): concat result that is signed\n \"\"\"\n RefVal=Signal(intbv(25)[6:])\n @always_comb\n def logic():\n Res.next=concat(True, RefVal)\n ResS.next=concat(True, RefVal).signed()\n return instances()", "myHDL Testing", "Peeker.clear()\nRes=Signal(intbv(0)[7:]); Peeker(Res, 'Res')\nResS=Signal(intbv(0)[7:].signed()); Peeker(ResS, 'ResS')\n\nDUT=ConcatDemo(Res, ResS)\n\ndef ConcatDemo_TB():\n \"\"\"\n myHDL only Testbench for `ConcatDemo`\n \"\"\"\n @instance\n def stimules():\n for i in range(2):\n yield delay(1)\n \n raise StopSimulation()\n \n return instances()\nsim=Simulation(DUT, ConcatDemo_TB(), *Peeker.instances()).run() \n\nPeeker.to_wavedrom()", "Verilog Conversion", "DUT.convert()\nVerilogTextReader('ConcatDemo');", "\\begin{figure}\n\\centerline{\\includegraphics[width=10cm]{ConcatDemo_v_RTL.png}}\n\\caption{\\label{fig:CDVRTL} ConcatDemo Verilog RTL schematic; Xilinx Vivado 2017.4}\n\\end{figure}\n\\begin{figure}\n\\centerline{\\includegraphics[width=10cm]{ConcatDemo_v_SYN.png}}\n\\caption{\\label{fig:CDVSYN} ConcatDemo Verilog Synthesized Schematic; Xilinx Vivado 2017.4}\n\\end{figure}\nVHDL Conversion", "DUT.convert('VHDL')\nVHDLTextReader('ConcatDemo');", "\\begin{figure}\n\\centerline{\\includegraphics[width=10cm]{ConcatDemo_vhd_RTL.png}}\n\\caption{\\label{fig:CDVHDRTL} ConcatDemo VHDL RTL schematic; Xilinx Vivado 2017.4}\n\\end{figure}\n\\begin{figure}\n\\centerline{\\includegraphics[width=10cm]{ConcatDemo_vhd_SYN.png}}\n\\caption{\\label{fig:CDVHDSYN} ConcatDemo VHDL Synthesized Schematic; Xilinx Vivado 2017.4}\n\\end{figure}\nmyHDL to Verilog/VHDL Testbench", "@block\ndef ConcatDemo_TB_V_VHDL():\n \"\"\"\n myHDL -> Verilog/VHDL Testbench\n \"\"\"\n\n Res=Signal(intbv(0)[7:])\n ResS=Signal(intbv(0)[7:].signed())\n\n @always_comb\n def print_data():\n print(Res, ResS)\n\n\n 
DUT=ConcatDemo(Res, ResS)\n\n @instance\n def stimules():\n for i in range(2):\n yield delay(1)\n \n raise StopSimulation()\n \n return instances()\n\n\nTB=ConcatDemo_TB_V_VHDL()", "Verilog Testbench", "TB.convert(hdl=\"Verilog\", initial_values=True)\nVerilogTextReader('ConcatDemo_TB_V_VHDL');", "VHDL Testbench\nVHDL Testbench Conversion Issue\nThis testbench does not work in Vivado", "TB.convert(hdl=\"VHDL\", initial_values=True)\nVHDLTextReader('ConcatDemo_TB_V_VHDL');", "myHDL Bitslicing Behavior\nThese example values come from later work on floating point implemented in a fixed-point architecture, which is incredibly important for Digital Signal Processing, as will be shown. For now, just understand that the examples are based on multiplying two Q4.4 (8-bit fixed-point) numbers, resulting in a Q8.8 (16-bit fixed-point) product\nSlicing intbv\nThe following is an example of rounding by truncation from 16 bits to 8 bits that shows how bit slicing works in myHDL. Truncation bit slicing keeps values from the far left (the Most Significant Bit (MSB)) down to the rightmost specified bit (the Least Significant Bit (LSB))", "TV=intbv(1749)[16:]\nprint(f'int 1749 in bit is {bin(TV, len(TV))}')\nfor j in range(16):\n try:\n Trunc=TV[16:j]\n print(f'Binary: {bin(Trunc, len(Trunc))}, Bits: {len(Trunc)}, rep: {int(Trunc)}, MSB:15, LSB: {j}')\n except ValueError:\n print(f'MSB 15 is <= LSB {j}')\n\nTV=intbv(1749)[16:]\nprint(f'int 1749 in bit is {bin(TV, len(TV))}')\nfor i in reversed(range(16+1)):\n try:\n Trunc=TV[i:0]\n print(f'Binary: {bin(Trunc, len(Trunc))}, Bits: {len(Trunc)}, rep: {int(Trunc)}, MSB:{i}, LSB: {0}')\n except ValueError:\n print('MSB is <= LSB index')", "Slicing Signed intbv", "TV=intbv(-1749)[16:].signed()\nprint(f'int -1749 in bit is {bin(TV, len(TV))}')\nfor j in range(16):\n try:\n Trunc=TV[16:j].signed()\n print(f'Binary: {bin(Trunc, len(Trunc))}, Bits: {len(Trunc)}, rep: {int(Trunc)}, MSB:15, LSB: {j}')\n except ValueError:\n print(f'MSB 15 is <= LSB 
{j}')\n\nTV=intbv(-1749)[16:].signed()\nprint(f'int -1749 in bit is {bin(TV, len(TV))}')\nfor i in reversed(range(16+1)):\n try:\n Trunc=TV[i:0].signed()\n print(f'Binary: {bin(Trunc, len(Trunc))}, Bits: {len(Trunc)}, rep: {int(Trunc)}, MSB:{i}, LSB: {0}')\n except ValueError:\n print('MSB is <= LSB index')", "Slicing modbv", "TV=modbv(1749)[16:]\nprint(f'int 1749 in bit is {bin(TV, len(TV))}')\nfor j in range(16):\n try:\n Trunc=TV[16:j]\n print(f'Binary: {bin(Trunc, len(Trunc))}, Bits: {len(Trunc)}, rep: {int(Trunc)}, MSB:15, LSB: {j}')\n except ValueError:\n print(f'MSB 15 is <= LSB {j}')\n\nTV=modbv(1749)[16:]\nprint(f'int 1749 in bit is {bin(TV, len(TV))}')\n\nfor i in reversed(range(16+1)):\n try:\n Trunc=TV[i:0]\n print(f'Binary: {bin(Trunc, len(Trunc))}, Bits: {len(Trunc)}, rep: {int(Trunc)}, MSB:{i}, LSB: {0}')\n except ValueError:\n print('MSB is <= LSB index')", "myHDL BitSlicing Demo Module", "@block\ndef BitSlicingDemo(MSB, LSB, Res):\n \"\"\"\n Demonstration Module for Bit Slicing in myHDL\n \n Inputs:\n MSB (5BitVec): Most Significant Bit index; must be > LSB, \n ex: if LSB==0 MSB must range between 1 and 15\n LSB (5BitVec): Least Significant Bit index; must be < MSB\n ex: if MSB==15 LSB must range between 0 and 15\n \n Outputs:\n Res(16BitVec Signed): Result of the slicing operation on the\n Reference Value (hard-coded in module) -1749 (16BitVec Signed)\n \n \n \"\"\"\n RefVal=Signal(intbv(-1749)[16:].signed())\n @always_comb\n def logic():\n Res.next=RefVal[MSB:LSB].signed()\n return instances()\n ", "myHDL Testing", "Peeker.clear()\nMSB=Signal(intbv(16)[5:]); Peeker(MSB, 'MSB')\nLSB=Signal(intbv(0)[5:]); Peeker(LSB, 'LSB')\nRes=Signal(intbv(0)[16:].signed()); Peeker(Res, 'Res')\n\nDUT=BitSlicingDemo(MSB, LSB, Res)\n\ndef BitslicingDemo_TB():\n \"\"\"\n myHDL only Testbench for `BitSlicingDemo`\n \"\"\"\n @instance\n def stimules():\n \n for j in range(15):\n MSB.next=16\n LSB.next=j\n yield delay(1)\n \n for i in reversed(range(1, 16)):\n MSB.next=i\n 
LSB.next=0\n yield delay(1)\n \n raise StopSimulation()\n return instances()\n \n \nsim=Simulation(DUT, BitslicingDemo_TB(), *Peeker.instances()).run()\n\nPeeker.to_wavedrom('MSB', 'LSB', 'Res')\n\nPeeker.to_dataframe()[['MSB', 'LSB', 'Res']]", "Verilog Conversion\nVerilog Conversion Issue\nThe following is unsynthesizable since Verilog requires that the indexes in bit slicing (aka Part-selects) be constant values; there is also an error in the generated statement assign RefVal = 16'd-1749;\nHowever, the generated Verilog code from BitSlicingDemo does hold merit in showing how the index values are mapped from myHDL to Verilog", "DUT.convert()\nVerilogTextReader('BitSlicingDemo');", "VHDL Conversion\nVHDL Conversion Issue\nThe following is unsynthesizable since VHDL requires that the indexes in bit slicing (aka Part-selects) be constant values. \nHowever, the generated VHDL code from BitSlicingDemo does hold merit in showing how the index values are mapped from myHDL to VHDL", "DUT.convert(hdl='VHDL')\nVHDLTextReader('BitSlicingDemo');", "myHDL to Verilog/VHDL Testbench", "@block\ndef BitslicingDemo_TB_V_VHDL():\n \"\"\"\n myHDL -> Verilog/VHDL Testbench for `BitSlicingDemo`\n \"\"\"\n\n MSB=Signal(intbv(16)[5:])\n LSB=Signal(intbv(0)[5:])\n Res=Signal(intbv(0)[16:].signed())\n \n @always_comb\n def print_data():\n print(MSB, LSB, Res)\n\n DUT=BitSlicingDemo(MSB, LSB, Res)\n\n @instance\n def stimules():\n \n for j in range(15):\n MSB.next=16\n LSB.next=j\n yield delay(1)\n \n #!!! reversed is not being converted\n #for i in reversed(range(1, 16)):\n # MSB.next=i\n # LSB.next=0\n # yield delay(1)\n \n raise StopSimulation()\n return instances()\n \n \nTB=BitslicingDemo_TB_V_VHDL()", "Verilog Testbench", "TB.convert(hdl=\"Verilog\", initial_values=True)\nVerilogTextReader('BitslicingDemo_TB_V_VHDL');", "VHDL Testbench", "TB.convert(hdl=\"VHDL\", initial_values=True)\nVHDLTextReader('BitslicingDemo_TB_V_VHDL');" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
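The two's-complement slicing behaviour that the notebook above demonstrates with `intbv` can be mimicked in plain Python. This is a sketch, not myHDL: `bit_slice` is a hypothetical helper that masks `value[msb:lsb]` out of a 16-bit word and, like `.signed()` in the cells above, optionally sign-extends the result.

```python
def bit_slice(value, msb, lsb, width=16, signed=True):
    """Emulate intbv-style slicing value[msb:lsb] on a two's-complement word.

    Returns the (msb - lsb)-bit field; when `signed` is True the top bit of
    the field is treated as a sign bit, mirroring .signed() in the notebook.
    """
    if msb <= lsb:
        raise ValueError('MSB is <= LSB index')
    nbits = msb - lsb
    raw = value & ((1 << width) - 1)              # two's-complement bit pattern
    field = (raw >> lsb) & ((1 << nbits) - 1)
    if signed and field >= (1 << (nbits - 1)):
        field -= 1 << nbits                       # sign-extend the slice
    return field

# -1749 is 0b1111100100101011 in 16 bits; the full slice recovers -1749
print(bit_slice(-1749, 16, 0))                    # -1749
print(bit_slice(-1749, 8, 0))                     # low byte 0b00101011 = 43
```

Since -1749 fits in 12 signed bits, the signed 12-bit slice `[12:0]` also yields -1749, while narrower slices truncate, matching the pattern printed by the notebook's loops.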
lmoresi/UoM-VIEPS-Intro-to-Python
Notebooks/Numpy+Scipy/2 - Application - The Game of Life.ipynb
mit
[ "The game of life (John Conway)\nThe universe of the Game of Life is an infinite two-dimensional orthogonal grid of square cells, each of which is in one of two possible states, live or dead. Every cell interacts with its eight neighbours, which are the cells that are directly horizontally, vertically, or diagonally adjacent. At each step in time, the following transitions occur:\n\nAny live cell with fewer than two live neighbours dies, as if caused by underpopulation.\nAny live cell with more than three live neighbours dies, as if by overcrowding.\nAny live cell with two or three live neighbours lives, unchanged, to the next generation.\nAny dead cell with exactly three live neighbours becomes a live cell.\n\nSee this page for some more information. And, note, thanks to Dan Sandiford for setting up this example.", "%pylab inline\nimport numpy as np\nimport matplotlib.pyplot as plt\n\n\nstart = np.array([[1,0,0,0,0,0],\n [0,0,0,1,0,0],\n [0,1,0,1,0,0],\n [0,0,1,1,0,0],\n [0,0,0,0,0,0],\n [0,0,0,0,0,1]])", "We will talk more about plotting later, but for now we can use this without digging deeper:", "plt.imshow(start, interpolation='nearest', cmap=\"gray\") \n\nprint start[4:8,4:8] # neighbours of start[5,5]\nprint start[1:4,1:4] # neighbours of start[2,2]\n#print start[?:?] # neighbours of start[1,1]\n#print start[?:?] # neighbours of start[0,0]\n\nlive_neighbours = np.empty(start.shape)\nfor index, value in np.ndenumerate(start):\n #Need to add 2, because the slicing works like 'up to but not including'\n x0 = max(0,(index[0]-1))\n x1 = max(0,(index[0]+2))\n y0 = max(0,(index[1]-1))\n y1 = max(0,(index[1]+2))\n subarray = start[x0:x1, y0:y1]\n live_neighbours[index] = subarray.sum() - value # need to subtract actual value at that cell...\n\nlive_neighbours", "Exercise: Your task is to write a function that \"runs\" the game of life. This should be possible by filling out the two function templates below. 
\n\nconditions: boundaries are always dead", "def get_neighbours(start):\n    \"\"\"\n    This function gets the number of live neighbours in the binary array start\n    start : np.ndarray\n    \"\"\" \n\ndef game_of_life(start, n):\n \"\"\"\n this function runs the game of life for n steps...\n start : np.ndarray (0s and 1s)\n n: int number of steps \n \"\"\"\n assert (1>= start.min() >= 0) and (1>= start.max() >= 0), \"array must be ones and zeros\"\n \n current = np.copy(start)\n \n while n:\n neighbours = get_neighbours(current)\n \n for index, value in np.ndenumerate(current):\n print(index, value)\n # Apply the rules to current\n if ...\n \n else ...\n \n n -= 1 \n \n \n return current" ]
[ "markdown", "code", "markdown", "code", "markdown", "code" ]
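One way to fill in the templates above is to pad the grid with a ring of dead cells (the "boundaries are always dead" condition) and sum the eight shifted copies. This is only a possible solution sketch, written in Python 3; `count_neighbours` and `step` are stand-ins for the `get_neighbours` and `game_of_life` templates, not the intended answer key.

```python
import numpy as np

def count_neighbours(grid):
    """Count live neighbours of every cell; cells beyond the edge count as dead."""
    padded = np.pad(grid, 1, mode='constant', constant_values=0)
    counts = np.zeros_like(grid)
    # sum the eight shifted copies of the padded grid
    for dx in (-1, 0, 1):
        for dy in (-1, 0, 1):
            if dx == 0 and dy == 0:
                continue
            counts += padded[1 + dx : 1 + dx + grid.shape[0],
                             1 + dy : 1 + dy + grid.shape[1]]
    return counts

def step(grid):
    """One Game of Life generation under the four rules above."""
    n = count_neighbours(grid)
    # born with exactly 3 neighbours, or survive with 2 (n == 3 covers live-with-3)
    return ((n == 3) | ((grid == 1) & (n == 2))).astype(grid.dtype)

# a "blinker" (three cells in a row) oscillates with period 2
blinker = np.zeros((5, 5), dtype=int)
blinker[2, 1:4] = 1
print(np.array_equal(step(step(blinker)), blinker))  # True
```

Iterating `step` n times inside a loop then completes the `game_of_life` template.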
jstac/quantecon_nyu_2016
homework_assignments/hw_set6/ols_via_projection/OLS_and_projection.ipynb
bsd-3-clause
[ "OLS Through StatsModels vs Projection\nIn this exercise we're going to run a regression using some trade data. (The regression model is a gravity model, although the details don't really matter for this exercise.) The idea is to compute the OLS coefficients and other related quantities using \n\nA regression package, and\nThe expressions given in the lecture on orthogonal projection.\n\nNote that you need to download the data set \"trade_data.csv\" as well as this notebook.\nYour task is to complete the notebook, as discussed below.\nFirst let's try a standard approach, using StatsModels.", "%matplotlib inline\n\nimport pandas as pd\nimport numpy as np\nfrom numpy import log\nimport statsmodels.formula.api as smf", "First we read in the data.", "data = pd.read_csv(\"trade_data.csv\")", "Let's see what it looks like.", "data.head()", "Let's get a full list of columns.", "data.columns", "Let's regress 'value' on 'egdp', 'igdp' and 'dist', all in logs. To do this we make a formula object.", "formula = \"log(value) ~ log(egdp) + log(igdp) + log(dist)\"\nmodel = smf.ols(formula, data)\n\nresult = model.fit(cov_type='HC1')\nprint(result.summary())", "Replication using Projection\nNow let's reproduce the same values using the formulas from the lecture on projection. I'm going to be nice and build $\\mathbf X$ and $\\mathbf y$ for you.", "data2 = data[['value', 'egdp', 'igdp', 'dist']]\ndata2 = data2.dropna()\n\ny = np.asarray(np.log(data2.value))\nX = np.ones((len(y), 4))\nX[:, 1] = log(data2.egdp)\nX[:, 2] = log(data2.igdp)\nX[:, 3] = log(data2.dist)", "Now reproduce the coefficients by computing $\\hat \\beta$, using the matrix expression given in the lectures.", "# Derive betahat using the expression from the lectures\nprint(betahat)", "Next replicate the value for $R^2$ produced in the table above using the formula given in the lecture slides.", "# Derive R^2 using y, Py, etc. as defined in the lecture\nprint(Rsq)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
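For reference, both requested quantities follow directly from the projection formulas: beta_hat = (X'X)^(-1) X'y and R^2 = ||Py||^2 / ||y||^2, where Py = X beta_hat is the orthogonal projection of y onto the column space of X. Since trade_data.csv is not bundled here, this sketch uses synthetic data; note that this is the uncentered R^2 from the projection lecture, which differs from the centered R^2 that StatsModels reports.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 200
X = np.column_stack([np.ones(n), rng.normal(size=(n, 2))])
beta_true = np.array([1.0, 2.0, -0.5])
y = X @ beta_true + rng.normal(scale=0.1, size=n)

# betahat = (X'X)^{-1} X'y  (solve is preferred over forming an explicit inverse)
betahat = np.linalg.solve(X.T @ X, X.T @ y)

# Py = X betahat is the fitted value; R^2 = ||Py||^2 / ||y||^2 (uncentered)
Py = X @ betahat
Rsq = (Py @ Py) / (y @ y)

print(np.allclose(betahat, np.linalg.lstsq(X, y, rcond=None)[0]))  # True
```

With the real trade data, `X` and `y` from the notebook's last code cell drop straight into the two formula lines.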
FractalFlows/DAOResearch
notebooks/LS-LMSR.ipynb
mit
[ "import matplotlib.pylab as pl\nimport numpy as np\nfrom scipy.optimize import fmin_cobyla\nimport pandas as pd\n%matplotlib inline", "Creating a scientific task", "result1_task1 = {\n 'description': 'Attempt to reproduce the result 1 of this article',\n 'doi': '10.1051/itmconf/20140201004',\n 'reference': 'result 1, p. 4',\n 'type': 'scientific task',\n 'possible_outcomes': [\n 'the result is reproducible',\n 'the result is not reproducible'\n ]\n}", "The LS-LMSR model from Augur\nHeavily inspired from this blog post: Augur’s Automated Market Maker: The LS-LMSR, By Dr. Abe Othman.\nThe cost function for the LMSR is given by:\n$$\nC(\\textbf{q}) = b \\log \\left(\\sum_{i=1}^n e^{\\frac{q_i}{b}} \\right)\n$$\nand the marginal prices on each event are given by the partial derivatives of the cost function:\n$$\np_j(\\textbf{q}) = \\frac{e^{\\frac{q_j}{b}}}{\\sum_{i=1}^n e^{\\frac{q_i}{b}}}\n$$\nwhere $b$, which is defined as a constant in the original LMSR model of Hanson, is here defined as a variable of q\n$$\nb(\\textbf{q})=\\alpha \\sum_{i=1}^n q_i\n$$\nwith $\\alpha$ defined as\n$$\n\\alpha = \\frac{0.1}{n \\log{n}}\n$$\nwith $n$ being the number of dimensions of $\\textbf{q}$", "class LS_LMSRMarket(object):\n def __init__(self, task, vig=0.1, init=1.0, market='LS_LMSR', b=None):\n \"\"\"\n Parameters\n ----------\n task dict\n A dictionary describing the task for which the predictive market is created.\n Keys:\n -----\n type: str \n (e.g. 'scientific task')\n description: str\n description of the task to be performed\n reference: str\n Internal reference (e.g. 'result 1, p. 
4')\n doi: str\n DOI of the related publication\n possible_outcomes: list\n List of strings describing the possible outcomes of the task\n \n vig float\n parameter of the `alpha` variable used to calculate the `b` variable.\n Corresponds to the market \"vig\" value - typically between 5 and 30 percent in real-world markets\n \n init float\n The initial subsidies of the market, spread equally in this algorithm on all the outcomes.\n \n market str, 'LS_LMSR' | 'LMSR'\n The market type. If 'LMSR' is selected, then a b value should be given.\n \"\"\"\n self.market = market\n if self.market == 'LMSR':\n if b is None:\n raise Exception('b value is needed for LMSR markets')\n self._b = b\n\n for k, v in task.items():\n setattr(self, k, v)\n self.init = init\n self.n = len(self.possible_outcomes)\n self._x = [np.ones([self.n])*init/self.n]\n self._book = []\n self.market_value = init\n self._history = []\n self.alpha = vig*self.n/np.log(self.n)\n \n @property\n def b(self):\n if self.market == 'LMSR':\n return self._b\n elif self.market == 'LS_LMSR':\n return self._b_func(self.x)\n else:\n raise Exception('market must be set to either \"LMSR\" or \"LS_LMSR\"')\n \n def _b_func(self, x):\n \"\"\"Calculate the `b` equation: b=\alpha \Sigma x\"\"\"\n return self.alpha * x.sum()\n \n @property\n def book(self):\n return pd.DataFrame(self._book)\n \n @property\n def x(self):\n return self._x[-1].copy()\n \n def cost(self, x):\n return self.b*np.log(np.exp(x/self.b).sum())\n \n def _new_x(self, shares, outcome):\n new_x = self.x\n new_x[outcome] += shares \n return new_x\n \n def price(self, shares, outcome):\n return self._price(self._new_x(shares, outcome))\n \n def _price(self, x):\n return self.cost(x)-self.cost(self.x)\n \n def register_x(self, x):\n self._x.append(x)\n \n def calculate_shares(self, paid, outcome):\n obj_func = lambda s: np.abs(self.price(s, outcome) - paid)\n return fmin_cobyla(obj_func, paid/self.p[outcome], [])\n \n def buy_shares(self, name, paid, 
outcome):\n shares = self.calculate_shares(paid, outcome)\n self.register_x(self._new_x(shares, outcome))\n self._book.append({'name':name, \n 'shares':shares, \n 'outcome':outcome, \n 'paid':paid})\n self._history.append(self.p)\n self.market_value += paid\n print(\"%s paid %2.2f EUR, for %2.2f shares of outcome %d, which will give him %2.2f EUR if he wins\"%(\n name, paid, shares, outcome, shares/self.x[outcome]*self.market_value))\n return shares\n \n def sell_shares(self, name, shares, outcome):\n price = self.price(-shares, outcome)\n self._book.append({'name':name, \n 'shares':-shares, \n 'outcome':outcome, \n 'paid':-price}) \n self.market_value -= price \n self._history.append(self.p) \n return price\n\n def outcome_probability(self):\n K = np.exp(self.x/self.b)\n return K/K.sum()\n \n @property\n def p(self):\n return self.outcome_probability()\n \n def history(self):\n return np.array(self._history)", "pm = LS_LMSRMarket(result1_task1, init=10., vig=0.1)", "pm.buy_shares('Mark', 1., 0)", "pm.buy_shares('Erik', 300., 1)\npm.buy_shares('Soeren', 1., 0)\npm.buy_shares('Albert', 3., 1)", "pm.market_value", "pm.book", "total_shares = pm.book.groupby('outcome').shares.sum()\nbook = pm.book\nbook['possible_payout'] = pm.market_value * pm.book.shares / total_shares.values[pm.book.outcome.values]\nbook['ownership_ratio'] = pm.book.shares / total_shares.values[pm.book.outcome.values]\ngrouped = book.groupby('name')\ndf = grouped.paid.sum().to_frame(name='paid')\ndf['possible_payout'] = grouped.possible_payout.sum()\ndf", "book", "pm.x", "pm.market_value", "pl.plot(pm.history())\npl.ylim([0.,1.])\npl.legend(['outcome 0', 'outcome 1'])", "The Augur example\nInspired by the Augur white paper\nJoe is creating a new event", "new_event = {\n \"type\": \"CreateEvent\", \n \"vin\": [{\n \"n\": 0,\n \"value\": 0.01000000,\n \"units\": \"bitcoin\", \n \"scriptSig\": \"\"\"\n <Joe’s signature>\n <Joe’s public key >\"\"\"\n }], \n \"vout\": [{\n \"n\": 0,\n \"value\" : 
0.01000000, \n \"units\": \"bitcoin\", \n \"event\": {\n \"id\": \"<event hash >\", \n \"description\": \"\"\"Hillary Clinton \n wins the 2016 U.S. \n Presidential Election.\"\"\",\n \"branch\": \"politics\", \n \"is_binary\": True, \n \"valid_range\": [0, 1], \n \"expiration\": 1478329200, \n \"creator\": \"<Joe’s address>\"\n },\n \"address\": \"<base-58 event ID>\", \n \"script\": \"\"\"\n OP_DUP \n OP_HASH160 \n <event hash > \n OP_EQUALVERIFY \n OP_MARKETCHECK\"\"\"\n }]\n }", "Joe is creating a Market of events", "new_market = {\n \"type\": \"CreateMarket\", \n \"loss_limit\": 1.2, \n \"vin\": [{\n \"n\": 0,\n \"value\": 27.72588722,\n \"units\": \"bitcoin\", \n \"tradingFee\": 0.005, \n \"scriptSig\": \"\"\"<Joe’s signature>\n <Joe ’s public key >\"\"\"\n }], \n \"vout\": [{\n \"n\": 0,\n \"value\": 27.72588722, \n \"units\": \"bitcoin\", \n \"script\": \"\"\"\n OP_DUP\n OP_HASH160 \n OP_EVENTLOOKUP \n OP_ISSHARES \n OP_MARKETCHECK\"\"\"\n },\n {\n \"n\": 1,\n \"value\": 10**9,\n \"units\": \"shares\", \n \"event\": \"<event -1 hash >\", \n \"branch\": \"politics\", \n \"script\": \"\"\"\n OP_DUP\n OP_HASH160 \n OP_EVENTLOOKUP \n OP_ISBITCOIN \n OP_MARKETCHECK\"\"\"\n },\n {\n \"n\": 2,\n \"value\": 10**9,\n \"units\": \"shares\",\n \"event\": \"<event-2 hash>\",\n \"branch\": \"politics\", \n \"script\": \"\"\"\n OP_DUP\n OP_HASH160 \n OP_EVENTLOOKUP \n OP_ISBITCOIN\n OP_MARKETCHECK\"\"\"\n }],\n \"id\": \"<market hash>\",\n \"creator\": \"<Joe’s address>\"\n}", "100 Traders example", "n = 100\noutcome = 0.001\n# The amount is assumed to increase linearly with time, as the market increases its liquidity\namount = np.random.random([n]) * 100. 
#* (1+np.arange(n))/(1.*n)\noutcomes = np.zeros([n])\noutcomes[np.random.random([n])<outcome] = 1.0", "Creating the new task prediction market", "pm = LS_LMSRMarket(result1_task1, init=10., vig=0.1)", "One company comes along and bets a sh*t ton of money", "pm.buy_shares('EvilMegaCorp', 1000, 1)", "Performing the bets", "for i, a, o in zip(range(n),amount, outcomes):\n pm.buy_shares('Trader-%d'%(i), a, int(o))\n\npm.buy_shares('EvilMegaCorp', 1000, 1)", "The total to pay for each outcome", "pm.market_value\n\ntotal_shares = pm.book.groupby('outcome').shares.sum()\nbook = pm.book\nbook['possible_payout'] = pm.market_value * pm.book.shares / total_shares.values[pm.book.outcome.values]\ngrouped = book.groupby('name')\ndf = grouped.paid.sum().to_frame(name='paid')\ndf['possible_payout'] = grouped.possible_payout.sum()\n\ndf\n\ntotal = pm.book.groupby('outcome').shares.sum()\ntotal\n\npm.book.groupby('outcome').paid.sum()\n\npm.book.groupby('outcome').shares.sum()\n\npm.p\n\npm.book.groupby('outcome').sum().values/pm.book.groupby('outcome').sum().values.sum()", "Plot the market prediction history", "pl.plot(pm.history())\npl.ylim([0.,1.])\npl.legend(['outcome 0', 'outcome 1'])\npl.title('%d Trades, total market value=%2.2f EUR'%(n, pm.market_value))", "The book of trades", "book = pm.book\ntotal_shares = book.groupby('outcome').shares.sum()\nbook['possible_payout'] = pm.market_value * book.shares / total_shares.values[book.outcome.values]\nbook['possible_win'] = book.possible_payout - book.paid\nbook['p0'] = pm.history()[:,0]\nbook['p1'] = pm.history()[:,1]\nbook\n\npm.market_value\n\npm.p" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
martinjrobins/hobo
examples/sampling/first-example.ipynb
bsd-3-clause
[ "Sampling: First example\nThis example shows you how to perform Bayesian inference on a time series, using Adaptive Covariance MCMC.\nIt follows on from Optimisation: First example\nLike in the optimisation example, we start by importing pints:", "import pints", "Next, we create a model class. \nInstead of using a real model, in this example we use the \"Logistic\" toy model included in pints:", "import pints.toy as toy\nmodel = toy.LogisticModel()", "In order to generate some test data, we choose an arbitrary set of \"true\" parameters:", "true_parameters = [0.015, 500]", "And a number of time points at which to sample:", "import numpy as np\ntimes = np.linspace(0, 1000, 400)", "Using these parameters and time points, we can now generate some toy data:", "org_values = model.simulate(true_parameters, times)", "And make it more realistic by adding gaussian noise:", "noise = 25\nvalues = org_values + np.random.normal(0, noise, org_values.shape)", "We can use matplotlib (or any other plotting package) to look at the data we've created:", "import matplotlib.pyplot as plt\n\nplt.figure(figsize=(12,4.5))\nplt.xlabel('Time')\nplt.ylabel('Values')\nplt.plot(times, values, label='Noisy data')\nplt.plot(times, org_values, lw=2, label='Noise-free data')\nplt.legend()\nplt.show()", "Now we have enough data (a model, a list of times, and a list of data) to formulate a problem:", "problem = pints.SingleOutputProblem(model, times, values)", "We now have some toy data, and a model that can be used for forward simulations. To make it into a probabilistic problem, we need to add a noise model. One way to do this is using the GaussianLogLikelihood function, which assumes independently distributed Gaussian noise over the data, and can calculate log-likelihoods:", "log_likelihood = pints.GaussianLogLikelihood(problem)", "This noise has mean zero, and an unknown standard deviation. How can we find out the standard deviation? By inferring it along with the other parameters. 
This means we have added one parameter to our problem!", "print('Original problem dimension: ' + str(problem.n_parameters()))\n\nprint('New dimension: ' + str(log_likelihood.n_parameters()))", "(This means we also have to update our vector of true parameters)", "true_parameters += [noise]\nprint(true_parameters)", "This log_likelihood represents the conditional probability $p(y|\\theta)$, given a set of parameters $\\theta$ and a series of $y=$ values, it can calculate the probability of finding those values if the real parameters are $\\theta$.\nWe can use this in a Bayesian inference scheme to find the quantity we're interested in:\n$p(\\theta|y) = \\frac{p(\\theta)p(y|\\theta)}{p(y)} \\propto p(\\theta)p(y|\\theta)$\nTo solve this, we now define a prior, indicating our initial ideas about what the parameters should be. \nJust as we're using a log-likelihood (the natural logarithm of a likelihood), we'll define this using a log-prior. This simplifies the above equation to:\n$\\log p(\\theta|y) \\propto \\log p(\\theta) + \\log p(y|\\theta)$\nIn this example we'll assume we don't know too much about the prior except lower and upper bounds for each variable: We assume the first model parameter is somewhere on the interval $[0.01, 0.02]$, the second model parameter on $[400, 600]$, and the standard deviation of the noise is somewhere on $[1, 100]$.", "log_prior = pints.UniformLogPrior(\n [0.01, 400, 1],\n [0.02, 600, 100]\n )", "With this prior, we can now define the numerator of Bayes' rule -- the unnormalised log posterior, $\\log \\left[ p(y|\\theta) p(\\theta) \\right]$:", "# Create a posterior log-likelihood (log(likelihood * prior))\nlog_posterior = pints.LogPosterior(log_likelihood, log_prior)", "Finally we create a list of guesses to use as initial positions. 
We'll run three MCMC chains so we create three initial positions:", "xs = [\n np.array(true_parameters) * 0.9,\n np.array(true_parameters) * 1.05,\n np.array(true_parameters) * 1.15,\n]", "And this gives us everything we need to run an MCMC routine:", "chains = pints.mcmc_sample(log_posterior, 3, xs)", "Using Pints' diagnostic plots to inspect the results\nWe can take a further look at the obtained results using Pints's diagnostic plots.\nFirst, we use the trace method to see if the three chains converged to the same solution.", "import pints.plot\npints.plot.trace(chains)\nplt.show()", "Based on this plot, it looks like the three chains become very similar after about 1000 iterations.\nTo be safe, we throw away the first 2000 samples and continue our analysis with the first chain.", "chain = chains[0]\nchain = chain[2000:]", "We can also look for autocorrelation in the chains, using the autocorrelation() method. If everything went well, the samples in the chain should be relatively independent, so the autocorrelation should get quite low when the lag on the x-axis increases.", "pints.plot.autocorrelation(chain)\nplt.show()", "Now we can inspect the inferred distribution by plotting histograms:", "fig, axes = pints.plot.histogram([chain], ref_parameters=true_parameters)\n\n# Show where the sample standard deviation of the generated noise is:\nnoise_sample_std = np.std(values - org_values)\naxes[-1].axvline(noise_sample_std, color='orange', label='Sample standard deviation of noise')\naxes[-1].legend()\n\nfig.set_size_inches(14, 9)\nplt.show()", "Here we've analysed each parameter in isolation, but we can also look at correlations between parameters we found using the pairwise() plot.\nTo speed things up, we'll first apply some thinning to the chain:", "chain = chain[::10]\n\npints.plot.pairwise(chain, kde=True, ref_parameters=true_parameters)\nplt.show()", "As these plots show, we came pretty close to the original \"true\" values (represented by the blue line). 
\nBut not exactly... Worse, the method seems to suggest a normal distribution but around the wrong point.\nTo find out what's going on, we can plot the log-posterior function near the true parameters:", "# Plot log-posterior function\nfig, axes = pints.plot.function(log_posterior, true_parameters)\n\n# Add a line showing the sample standard deviation of the generated noise\naxes[-1].axvline(noise_sample_std, color='orange', label='Sample standard deviation of noise')\naxes[-1].legend()\n\n# Customise the figure size\nfig.set_size_inches(14, 9)\nplt.show()", "As this plot (created entirely without MCMC!) shows, the MCMC method did well, but our estimate of the true parameters has become biased by the stochastic noise! You can test this by increasing the number of sample points, which increases the size of the noise sample, and reduces the bias.\nFinally, we can look at the bit that really matters: The model predictions made from models with the parameters we found (a posterior predictive check). These can be plotted using the series() method.", "fig, axes = pints.plot.series(chain, problem)\n\n# Customise the plot, and add the original, noise-free data\nfig.set_size_inches(12,4.5)\nplt.plot(times, org_values, c='orange', label='Noise-free data')\nplt.legend()\nplt.show()" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
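The Gaussian noise model used above has a simple closed form, so it can be sanity-checked outside of PINTS. The sketch below hand-rolls the iid Gaussian log-likelihood and confirms that, as a function of sigma, it peaks near the true noise standard deviation (25 in the notebook); `gaussian_log_likelihood` is a stand-in written for illustration, not the PINTS API.

```python
import numpy as np

def gaussian_log_likelihood(residuals, sigma):
    """log p(y | theta, sigma) for iid zero-mean Gaussian noise of std sigma."""
    n = residuals.size
    return (-0.5 * n * np.log(2 * np.pi)
            - n * np.log(sigma)
            - 0.5 * (residuals ** 2).sum() / sigma ** 2)

rng = np.random.default_rng(1)
res = rng.normal(scale=25.0, size=400)   # residuals, like values - org_values

# the log-likelihood, viewed as a function of sigma, peaks near the true std
sigmas = np.linspace(5.0, 60.0, 500)
lls = [gaussian_log_likelihood(res, s) for s in sigmas]
best = sigmas[int(np.argmax(lls))]
print(best)   # close to 25
```

This is exactly why inferring sigma as an extra parameter works: the likelihood itself carries the information about the noise level.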
sbenthall/bigbang
examples/experimental_notebooks/Testing Power Law Response Time Hypothesis.ipynb
agpl-3.0
[ "An early result in the study of human dynamic systems is the claim that response times to email follow a power law distribution (http://cds.cern.ch/record/613536/). This result has been built on by others (http://www.uvm.edu/~pdodds/files/papers/others/2004/johansen2004.pdf, http://dx.doi.org/10.1103/physreve.83.056101). However, Clauset, Shalizi, and Newman (citation needed) have challenged the pervasive discovery of power laws, claiming that these studies often depend on unsound statistics.\nHere we apply the method of power law distribution fitting and testing to the email response times of several public mailing lists.", "from bigbang.archive import Archive\nimport pandas as pd\n\narx = Archive(\"ipython-dev\",archive_dir=\"../archives\")\nprint arx.data.shape\n\narx.data.drop_duplicates(subset=('From','Date'),inplace=True)", "We will look at messages in our archive that are responses to other messages and how long after the original email the response was made.", "response_times = []\nfor x in list(arx.data.iterrows()):\n if x[1]['In-Reply-To'] is not None:\n try:\n d1 = arx.data.loc[x[1]['In-Reply-To']]['Date']\n \n if isinstance(d1,pd.Series):\n d1 = d1[0]\n \n d2 = x[1]['Date']\n \n rt = (d2 - d1)\n \n response_times.append(rt.total_seconds())\n \n except AttributeError as e:\n print e\n except TypeError as e:\n print e\n except KeyError as e:\n # print e -- suppress error\n pass\n\nlen(response_times)\n\nimport matplotlib.pyplot as plt\n%matplotlib inline\n\nplt.semilogy(sorted(response_times,reverse=True))\n\nimport powerlaw\n\nf = powerlaw.Fit(response_times)\nprint f.power_law.alpha\nprint f.xmin\nprint f.D\nR, p = f.distribution_compare('power_law', 'lognormal')\nprint R,p", "We conclude that there is no reason to maintain that there is a power law distribution of email response times." ]
[ "markdown", "code", "markdown", "code", "markdown" ]
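The fit performed by the powerlaw package above rests on the Clauset-Shalizi-Newman maximum-likelihood estimator for a continuous power law, alpha_hat = 1 + n / sum(log(x_i / xmin)). Here is a self-contained sketch on synthetic data; the xmin of 60 seconds is an arbitrary illustration, not a value taken from the mailing-list data.

```python
import numpy as np

def fit_powerlaw_alpha(x, xmin):
    """Continuous power-law MLE: alpha_hat = 1 + n / sum(ln(x / xmin))."""
    x = np.asarray(x, dtype=float)
    x = x[x >= xmin]                      # only the tail above xmin is fitted
    return 1.0 + x.size / np.log(x / xmin).sum()

rng = np.random.default_rng(0)
alpha_true, xmin = 2.5, 60.0              # e.g. a one-minute cutoff in seconds

# inverse-CDF sampling: F(x) = 1 - (x/xmin)^(-(alpha-1)) for x >= xmin
u = rng.random(50_000)
samples = xmin * (1.0 - u) ** (-1.0 / (alpha_true - 1.0))

print(fit_powerlaw_alpha(samples, xmin))  # recovers a value near alpha_true
```

The package additionally estimates xmin itself and runs the distribution comparison; this sketch only covers the alpha step.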
yhilpisch/dx
05_dx_portfolio_multi_risk.ipynb
agpl-3.0
[ "<img src=\"http://hilpisch.com/tpq_logo.png\" alt=\"The Python Quants\" width=\"45%\" align=\"right\" border=\"4\">\nMulti-Risk Derivatives Portfolios\nThe step from multi-risk derivatives instruments to multi-risk derivatives instrument portfolios is not too large a one. This part of the tutorial shows how to model an economy with three risk factors.", "from dx import *\nfrom pylab import plt\nplt.style.use('seaborn')", "Risk Factors\nThis sub-section models the single risk factors. We start with the definition of the risk-neutral discounting object.", "# constant short rate\nr = constant_short_rate('r', 0.02)", "Three risk factors are modeled:\n\ngeometric Brownian motion\njump diffusion\nstochastic volatility process", "# market environments\nme_gbm = market_environment('gbm', dt.datetime(2015, 1, 1))\nme_jd = market_environment('jd', dt.datetime(2015, 1, 1))\nme_sv = market_environment('sv', dt.datetime(2015, 1, 1))", "Assumptions for the geometric_brownian_motion object.", "# geometric Brownian motion\nme_gbm.add_constant('initial_value', 36.)\nme_gbm.add_constant('volatility', 0.2) \nme_gbm.add_constant('currency', 'EUR')\nme_gbm.add_constant('model', 'gbm')", "Assumptions for the jump_diffusion object.", "# jump diffusion\nme_jd.add_constant('initial_value', 36.)\nme_jd.add_constant('volatility', 0.2)\nme_jd.add_constant('lambda', 0.5)\n # probability for jump p.a.\nme_jd.add_constant('mu', -0.75)\n # expected jump size [%]\nme_jd.add_constant('delta', 0.1)\n # volatility of jump\nme_jd.add_constant('currency', 'EUR')\nme_jd.add_constant('model', 'jd')", "Assumptions for the stochastic_volatility object.", "# stochastic volatility model\nme_sv.add_constant('initial_value', 36.)\nme_sv.add_constant('volatility', 0.2)\nme_sv.add_constant('vol_vol', 0.1)\nme_sv.add_constant('kappa', 2.5)\nme_sv.add_constant('theta', 0.4)\nme_sv.add_constant('rho', -0.5)\nme_sv.add_constant('currency', 'EUR')\nme_sv.add_constant('model', 'sv')", "Finally, the unifying valuation 
assumption for the valuation environment.", "# valuation environment\nval_env = market_environment('val_env', dt.datetime(2015, 1, 1))\nval_env.add_constant('paths', 10000)\nval_env.add_constant('frequency', 'W')\nval_env.add_curve('discount_curve', r)\nval_env.add_constant('starting_date', dt.datetime(2015, 1, 1))\nval_env.add_constant('final_date', dt.datetime(2015, 12, 31))", "These are added to the single market_environment objects of the risk factors.", "# add valuation environment to market environments\nme_gbm.add_environment(val_env)\nme_jd.add_environment(val_env)\nme_sv.add_environment(val_env)", "Finally, the market model with the risk factors and the correlations between them.", "risk_factors = {'gbm' : me_gbm, 'jd' : me_jd, 'sv' : me_sv}\ncorrelations = [['gbm', 'jd', 0.66], ['jd', 'sv', -0.75]]", "Derivatives\nIn this sub-section, we model the single derivatives instruments.\nAmerican Put Option\nThe first derivative instrument is an American put option.", "gbm = geometric_brownian_motion('gbm_obj', me_gbm)\n\nme_put = market_environment('put', dt.datetime(2015, 1, 1))\nme_put.add_constant('maturity', dt.datetime(2015, 12, 31))\nme_put.add_constant('strike', 40.)\nme_put.add_constant('currency', 'EUR')\nme_put.add_environment(val_env)\n\nam_put = valuation_mcs_american_single('am_put', mar_env=me_put, underlying=gbm,\n payoff_func='np.maximum(strike - instrument_values, 0)')\n\nam_put.present_value(fixed_seed=True, bf=5)", "European Maximum Call on 2 Assets\nThe second derivative instrument is a European maximum call option on two risk factors.", "jd = jump_diffusion('jd_obj', me_jd)\n\nme_max_call = market_environment('put', dt.datetime(2015, 1, 1))\nme_max_call.add_constant('maturity', dt.datetime(2015, 9, 15))\nme_max_call.add_constant('currency', 'EUR')\nme_max_call.add_environment(val_env)\n\npayoff_call = \"np.maximum(np.maximum(maturity_value['gbm'], maturity_value['jd']) - 34., 0)\"\n\nassets = {'gbm' : me_gbm, 'jd' : me_jd}\nasset_corr = 
[correlations[0]]\n\nasset_corr\n\nmax_call = valuation_mcs_european_multi('max_call', me_max_call, assets, asset_corr,\n payoff_func=payoff_call)\n\nmax_call.present_value(fixed_seed=False)\n\nmax_call.delta('jd')\n\nmax_call.delta('gbm')", "American Minimum Put on 2 Assets\nThe third derivative instrument is an American minimum put on two risk factors.", "sv = stochastic_volatility('sv_obj', me_sv)\n\nme_min_put = market_environment('min_put', dt.datetime(2015, 1, 1))\nme_min_put.add_constant('maturity', dt.datetime(2015, 6, 17))\nme_min_put.add_constant('currency', 'EUR')\nme_min_put.add_environment(val_env)\n\npayoff_put = \"np.maximum(32. - np.minimum(instrument_values['jd'], instrument_values['sv']), 0)\"\n\nassets = {'jd' : me_jd, 'sv' : me_sv}\nasset_corr = [correlations[1]]\nasset_corr\n\nmin_put = valuation_mcs_american_multi(\n 'min_put', val_env=me_min_put, risk_factors=assets,\n correlations=asset_corr, payoff_func=payoff_put)\n\nmin_put.present_value(fixed_seed=True)\n\nmin_put.delta('jd')\n\nmin_put.delta('sv')", "Portfolio\nTo compose a derivatives portfolio, derivatives_position objects are needed.", "am_put_pos = derivatives_position(\n name='am_put_pos',\n quantity=2,\n underlyings=['gbm'],\n mar_env=me_put,\n otype='American single',\n payoff_func='np.maximum(instrument_values - 36., 0)')\n\nmax_call_pos = derivatives_position(\n 'max_call_pos', 3, ['gbm', 'jd'],\n me_max_call, 'European multi',\n payoff_call)\n\nmin_put_pos = derivatives_position(\n 'min_put_pos', 5, ['sv', 'jd'],\n me_min_put, 'American multi',\n payoff_put)", "These objects are to be collected in dictionary objects.", "positions = {'am_put_pos' : am_put_pos, 'max_call_pos' : max_call_pos,\n 'min_put_pos' : min_put_pos}", "All is together to instantiate the derivatives_portfolio class.", "port = derivatives_portfolio(name='portfolio',\n positions=positions,\n val_env=val_env,\n risk_factors=risk_factors,\n correlations=correlations)", "Let us have a look at the major portfolio 
statistics.", "%time stats = port.get_statistics()\nstats\n\nstats['pos_value'].sum()", "Finally, a graphical look at two selected, simulated paths of the stochastic volatility risk factor and the jump diffusion risk factor, respectively.", "path_no = 1\npaths1 = port.underlying_objects['sv'].get_instrument_values()[:, path_no]\npaths2 = port.underlying_objects['jd'].get_instrument_values()[:, path_no]\n\npaths1\n\npaths2", "The resulting plot illustrates the strong negative correlation.", "import matplotlib.pyplot as plt\n%matplotlib inline\nplt.figure(figsize=(10, 6))\nplt.plot(port.time_grid, paths1, 'r', label='sv')\nplt.plot(port.time_grid, paths2, 'b', label='jd')\nplt.gcf().autofmt_xdate()\nplt.legend(loc=0); plt.grid(True)\n# negatively correlated underlyings", "Copyright, License & Disclaimer\n© Dr. Yves J. Hilpisch | The Python Quants GmbH\nDX Analytics (the \"dx library\" or \"dx package\") is licensed under the GNU Affero General\nPublic License version 3 or later (see http://www.gnu.org/licenses/).\nDX Analytics comes with no representations or warranties, to the extent\npermitted by applicable law.\nhttp://tpq.io | dx@tpq.io |\nhttp://twitter.com/dyjh\n<img src=\"http://hilpisch.com/tpq_logo.png\" alt=\"The Python Quants\" width=\"35%\" align=\"right\" border=\"0\"><br>\nQuant Platform | http://pqp.io\nPython for Finance Training | http://training.tpq.io\nCertificate in Computational Finance | http://compfinance.tpq.io\nDerivatives Analytics with Python (Wiley Finance) |\nhttp://dawp.tpq.io\nPython for Finance (2nd ed., O'Reilly) |\nhttp://py4fi.tpq.io" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
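The "negatively correlated underlyings" plot above is driven entirely by the correlation matrix handed to the simulation. As a minimal, self-contained sketch of the underlying technique (plain NumPy rather than the dx API, with all parameter values chosen purely for illustration), correlated path increments can be generated from a Cholesky factor of the correlation matrix:

```python
import numpy as np

# Sketch: simulate two negatively correlated GBM paths via a Cholesky factor.
# rho, drift, volatility and the initial value are illustrative assumptions.
rng = np.random.default_rng(42)
rho = -0.9
corr = np.array([[1.0, rho], [rho, 1.0]])
chol = np.linalg.cholesky(corr)      # lower-triangular L with L @ L.T == corr

steps, dt, sigma, mu, s0 = 250, 1.0 / 250, 0.2, 0.05, 36.0
z = rng.standard_normal((steps, 2)) @ chol.T         # correlated normal increments
log_ret = (mu - 0.5 * sigma ** 2) * dt + sigma * np.sqrt(dt) * z
paths = s0 * np.exp(np.cumsum(log_ret, axis=0))      # two correlated GBM paths

emp_rho = np.corrcoef(log_ret[:, 0], log_ret[:, 1])[0, 1]
```

The empirical correlation of the log-returns lands close to the target rho, which is what produces mirror-image paths like the sv/jd pair plotted above.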
dwhswenson/openpathsampling
examples/toy_model_mstis/toy_mstis_A3_new_analysis.ipynb
mit
[ "TIS Analysis Framework Examples\nThis notebook provides an overview of the TIS analysis framework in OpenPathSampling. We start with the StandardTISAnalysis object, which will probably meet the needs of most users. Then we go into details of how to set up custom objects for analysis, and how to assemble them into a generic TISAnalysis object.\nFor an overview of TIS and this analysis framework, see http://openpathsampling.org/latest/topics/tis_analysis.html", "# if our large test file is available, use it. Otherwise, use file generated from toy_mstis_2_run.ipynb\nfrom __future__ import print_function\nimport os\ntest_file = \"../toy_mstis_1k_OPS1.nc\"\nfilename = \"mstis.nc\" # this requires newer functionality than in the standard tests\n#filename = test_file if os.path.isfile(test_file) else \"mstis.nc\"\nprint('Using file ' + filename + ' for analysis')\n\n%matplotlib inline\nimport matplotlib.pyplot as plt\nimport numpy as np\nimport openpathsampling as paths\nimport pandas as pd\n\n%%time\nstorage = paths.AnalysisStorage(filename)\n\nnetwork = storage.networks[0]\nscheme = storage.schemes[0]\n\nstateA = storage.volumes['A']\nstateB = storage.volumes['B']\nstateC = storage.volumes['C']\nall_states = [stateA, stateB, stateC] # all_states gives the ordering", "Simplified Combined Analysis\nThe StandardTISAnalysis object makes it very easy to perform the main TIS rate analysis. Furthermore, it caches all the intermediate results, so they can also be analyzed.", "from openpathsampling.analysis.tis import StandardTISAnalysis\n\n# the scheme is only required if using the minus move for the flux\ntis_analysis = StandardTISAnalysis(\n    network=network,\n    scheme=scheme,\n    max_lambda_calcs={t: {'bin_width': 0.05, 'bin_range': (0.0, 0.5)}\n                      for t in network.sampling_transitions}\n)\n\n#tis_analysis.progress = 'silent'\n\n%%time\nrate_matrix = tis_analysis.rate_matrix(steps=storage.steps).to_pandas(order=all_states)\n\nrate_matrix", "The rate matrix is a pandas.DataFrame. 
pandas has conveniences to easily convert that into a LaTeX table:", "print(rate_matrix.to_latex(float_format=\"{:.2e}\".format))", "Note that there are many options for setting up the StandardTISAnalysis object. Most customizations to the analysis can be performed by changing the initialization parameters of that object; see its documentation for details.\nLooking at the parts of the calculation\nOnce you run the rate calculation (or if you run tis_analysis.calculate(steps)), you have already cached a large number of subcalculations. All of those are available in the results dictionary, although the analysis object has a number of conveniences to access some of them.\nLooking at the keys of the results dictionary, we can see what has been cached:", "tis_analysis.results.keys()", "In practice, however, we won't go directly to the results dictionary. We'd rather use the convenience methods that make it easier to get to the interesting results.\nWe'll start by looking at the flux:", "tis_analysis.flux_matrix\n\ns = paths.analysis.tis.flux_matrix_pd(tis_analysis.flux_matrix)\npd.DataFrame(s)", "The flux matrix can likewise be converted to a LaTeX table:", "# if you don't like the \"Flux\" label in a separate line, fix it by hand\nprint(s.to_latex(float_format='{:.4f}'.format))", "Next we look at the total crossing probability (i.e., the crossing probability, joined by WHAM) for each sampled transition. 
We could also look at this per physical transition, but of course $A\to B$ and $A\to C$ are identical in MSTIS -- only the initial state matters.", "for transition in network.sampling_transitions:\n    label = transition.name\n    tcp = tis_analysis.total_crossing_probability[transition]\n    plt.plot(tcp.x, np.log(tcp), label=label)\nplt.title(\"Total Crossing Probability\")\nplt.xlabel(\"$\\lambda$\")\nplt.ylabel(\"$\\ln(P(\\lambda | X_0))$\")\nplt.legend();", "We may want to look in more detail at one of these, by checking the per-ensemble crossing probability (as well as the total crossing probability). Here we select based on the $A\to B$ transition; we would get the same results if we selected the transition using either trans = network.from_state[stateA] or trans = network.transitions[(stateA, stateC)].", "state_pair = (stateA, stateB)\ntrans = network.transitions[state_pair]\nfor ens in trans.ensembles:\n    crossing = tis_analysis.crossing_probability(ens)\n    label = ens.name\n    plt.plot(crossing.x, np.log(crossing), label=label)\ntcp = tis_analysis.total_crossing_probability[state_pair]\nplt.plot(tcp.x, np.log(tcp), '-k', label=\"total crossing probability\")\nplt.title(\"Crossing Probabilities, \" + stateA.name + \" -> \" + stateB.name)\nplt.xlabel(\"$\\lambda$\")\nplt.ylabel(\"$\\ln(P_A(\\lambda | \\lambda_i))$\")\nplt.legend();", "Finally, we look at the last part of the rate calculation: the conditional transition probability. This is calculated for the outermost interface in each interface set.", "tis_analysis.conditional_transition_probability", "Individual components of the analysis\nThe combined analysis is the easiest way to perform analysis, but if you need to customize things (or if you want to compare different calculation methods) you might want to create objects for components of the analysis individually. 
Note that unlike the StandardTISAnalysis object, these do not cache their intermediate results.\nFlux from the minus move", "from openpathsampling.analysis.tis import MinusMoveFlux\n\nflux_calc = MinusMoveFlux(scheme)", "To calculate the fluxes, we use the .calculate method of the MinusMoveFlux object:", "%%time\nfluxes = flux_calc.calculate(storage.steps)\n\nfluxes", "This is in the same format as the flux_matrix given above, and we can convert it to a pandas.DataFrame in the same way. This can then be converted to a LaTeX table to copy-paste into an article in the same way as was done earlier.", "pd.DataFrame(paths.analysis.tis.flux_matrix_pd(fluxes))", "The minus move flux calculates some intermediate information along the way, which can be of use for further analysis. This is cached when using the StandardTISAnalysis, but can always be recalculated. The intermediate maps each (state, interface) pair to a dictionary. For details on the structure of that dictionary, see the documentation of TrajectoryTransitionAnalysis.analyze_flux.", "%%time\nflux_dicts = flux_calc.intermediates(storage.steps)[0]\n\nflux_dicts", "Flux from existing dictionary\nThe DictFlux class (which is required for MISTIS, and often provides better statistics than the minus move flux in other cases) takes a pre-calculated flux dictionary for initialization, and always returns that dictionary. 
The dictionary is in the same format as the fluxes returned by the MinusMoveFlux.calculate method; here, we'll just use the results we calculated above:", "from openpathsampling.analysis.tis import DictFlux\n\ndict_flux = DictFlux(fluxes)\n\ndict_flux.calculate(storage.steps)", "Note that DictFlux.calculate just echoes back the dictionary we gave it, so it doesn't actually care if we give it the steps argument or not:", "dict_flux.calculate(None)", "This object can be used to provide the flux part of the TIS calculation, in exactly the same way a MinusMoveFlux object does.\nTotal crossing probability function\nTo calculate the total crossing probability, we must first calculate the individual ensemble crossing probabilities. This is done by creating a histogram of the maximum values of the order parameter. The class to do that is FullHistogramMaxLambdas. Then we'll create the TotalCrossingProbability.", "transition = network.sampling_transitions[0]\n\nprint(transition)\n\nfrom openpathsampling.analysis.tis import FullHistogramMaxLambdas, TotalCrossingProbability\nfrom openpathsampling.numerics import WHAM\n\nmax_lambda_calc = FullHistogramMaxLambdas(\n transition=transition,\n hist_parameters={'bin_width': 0.05, 'bin_range': (0.0, 0.5)}\n)", "We can also change the function used to calculate the maximum value of the order parameter with the max_lambda_func parameter. This can be useful to calculate the crossing probabilities along some other order parameter.\nTo calculate the total crossing probability function, we also need a method for combining the ensemble crossing probability functions. 
We'll use the default WHAM here; see its documentation for details on how it can be customized.", "combiner = WHAM(interfaces=transition.interfaces.lambdas)", "Now we can put these together into the total crossing probability function:", "total_crossing = TotalCrossingProbability(\n max_lambda_calc=max_lambda_calc,\n combiner=combiner\n)\n\ntcp = total_crossing.calculate(storage.steps)\n\nplt.plot(tcp.x, np.log(tcp))\n\nplt.title(\"Total Crossing Probability, exiting \" + transition.stateA.name)\nplt.xlabel(\"$\\lambda$\")\nplt.ylabel(\"$\\ln(P_A(\\lambda | \\lambda_i))$\");", "Conditional transition probability\nThe last part of the standard calculation is the conditional transition probability. We'll make a version of this that works for all ensembles:", "from openpathsampling.analysis.tis import ConditionalTransitionProbability\n\noutermost_ensembles = [trans.ensembles[-1] for trans in network.sampling_transitions]\n\ncond_transition = ConditionalTransitionProbability(\n ensembles=outermost_ensembles,\n states=network.states\n)\n\nctp = cond_transition.calculate(storage.steps)\n\nctp", "StandardTISAnalysis.conditional_transition_probability converts this into a pandas.DataFrame, which gives prettier printing. However, the same data in included in this dict-of-dict structure.\nAssembling a TIS analysis from scratch\nIf you're using the \"standard\" TIS approach, then the StandardTISAnalysis object is the most efficient way to do it. However, if you want to use another analysis approach, it can be useful to see how the \"standard\" approach can be assembled.\nThis won't have all the shortcuts or saved intermediates that the specialized object does, but it will use the same algorithms to get the same results.", "from openpathsampling.analysis.tis import StandardTransitionProbability, TISAnalysis", "Some of the objects that we created in previous sections can be reused here. 
In particular, there is only one flux calculation and only one conditional transition probability per reaction network. However, the total crossing probability method is dependent on the transition (different order parameters might have different histogram parameters). So we need to associate each transition with a different TotalCrossingProbability object. In this example, we take the default behavior of WHAM (instead of specifying it explicitly, as above).", "tcp_methods = {\n    trans: TotalCrossingProbability(\n        max_lambda_calc=FullHistogramMaxLambdas(\n            transition=trans,\n            hist_parameters={'bin_width': 0.05, 'bin_range': (0.0, 0.5)}\n        )\n    )\n    for trans in network.transitions.values()\n}", "The general TISAnalysis object makes the simplest splitting: flux and transition probability. A single flux calculation is used for all transitions, but each transition has a different transition probability (since each transition can have a different total crossing probability). We make those objects here.", "transition_probability_methods = {\n    trans: StandardTransitionProbability(\n        transition=trans,\n        tcp_method=tcp_methods[trans],\n        ctp_method=cond_transition\n    )\n    for trans in network.transitions.values()\n}", "Finally we put this all together into a TISAnalysis object, and calculate the rate matrix.", "analysis = TISAnalysis(\n    network=network,\n    flux_method=dict_flux,\n    transition_probability_methods=transition_probability_methods\n)\n\nanalysis.rate_matrix(storage.steps)", "This is the same rate matrix as we obtained with the StandardTISAnalysis. It is a little faster because we used the precalculated DictFlux object here instead of the MinusMoveFlux; otherwise it would be slower." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
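At its heart, the per-ensemble crossing probability discussed in the notebook above is a reversed cumulative histogram of the maximum order-parameter value reached by each path. The following self-contained sketch illustrates that idea on synthetic data; it is not the OpenPathSampling API, and the sample distribution and bin settings are arbitrary assumptions.

```python
import numpy as np

# Sketch of the crossing-probability idea: histogram each path's maximum
# order-parameter value, then take the reversed cumulative fraction, i.e.
# P(lambda) = fraction of paths whose maximum reaches at least lambda.
# The "path maxima" below are synthetic stand-in data.
rng = np.random.default_rng(0)
max_lambdas = 0.05 + rng.exponential(scale=0.08, size=1000)

edges = np.arange(0.0, 0.5 + 1e-9, 0.05)   # same bin settings as in the notebook
counts, _ = np.histogram(max_lambdas, bins=edges)
reaching = np.cumsum(counts[::-1])[::-1]   # paths reaching at least each bin edge
crossing_prob = reaching / reaching[0]     # normalized to 1 at the first edge
```

By construction, crossing_prob starts at 1 and is monotonically non-increasing, which matches the qualitative shape of the per-ensemble curves plotted in the notebook.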
jrossyra/adaptivemd
examples/tutorial/1_example_setup_project.ipynb
lgpl-2.1
[ "Tutorial 1 - AdaptiveMD basics\nFirst we cover some basics demonstrating the use of AdaptiveMD objects to get you going.\nWe will briefly talk about\n\nresources\nfiles\ngenerators\nhow to run a simple trajectory\n\nAll of these objects from the adaptivemd package are used to organize a workflow by associating them with the adaptivemd.Project class. We will create a new project called \"tutorial\" that is used in this and subsequent tutorial notebooks, so as with any project, be careful about deleting work if revisiting this notebook. All python packages specified in the README must be installed on your local machine, along with MongoDB. The first tutorials can easily be modified to test the use of other resources, so make any necessary adjustments. A MongoDB port must be visible both to this notebook session and to the resource defined at the time of execution.\nThe Project\nAlright, let's load the package and import the Project class since we want to start a project.", "from adaptivemd import Project", "Let's open a project with a UNIQUE name by instantiating Project with a name. This will be the name used in the DB, so make sure it is new and not too short. Calling adaptivemd.Project with a name to construct an instance will always create a non-existing project, or reopen an existing one. You cannot choose between opening types as you would with a file. This is a precaution so you do not accidentally delete your project.\nFirst, let's see what projects exist already by listing them from the database. Careful not to delete something you want to keep.", "Project.list()\n\n# Use this to completely remove the tutorial project from the database.\nProject.delete('tutorial')\nProject.list()", "Note that if you have trajectories or models saved in a pre-existing folder for a project named tutorial, they are not deleted. They will be overwritten as new data is produced. 
The new project will iterate through pre-existing trajectory names and overwrite the data as each next tutorial is run. The data must be manually moved if desired, or deleted. Only the MongoDB storage associated with the project has been affected by Project.delete('tutorial').", "project = Project('tutorial')\n\nproject.list()", "Now we have a handle for our project. First thing is to set it up to work on a resource.\nThe Resource Configuration\nWhat is a configuration?\nA Configuration specifies a shared filesystem and the clusters attached to it. This can be your local machine, a regular cluster, or even a group of clusters that can access the same FS (like Titan, Eos and Rhea do). At least one is created for a project, and more may be specified via a configurations.cfg file or object creation in a session. Once created, these should remain static for a project since AdaptiveMD doesn't yet move data around on a filesystem (or between multiple filesystems).\nLet us use a local resource configuration; your laptop or desktop machine for now. No cluster / HPC involved. There are a few possibilities. shared_path is a required field/attribute to tell adaptivemd where to store project data such as simulations and models:\n - no configuration specified when initializing project --> local config with shared_path '$HOME/adaptivemd'\n - give configuration dict to initialize method --> {'shared_path': '$HOME/admd'}\n - give configuration object to initialize\nSince this object defines the path where all files will be placed, let's get the path to the shared folder. This path must be accessible from all workers on the resource. When using a local resource make sure you have the default folder created on your machine.\nData files will be placed in $HOME/ by default. 
You can change this using an option when creating the Configuration object (manually), or during initialization which creates a configuration in the ways just described.\npython\nfrom adaptivemd import Configuration\nConfiguration(shared_path='$HOME/my/adaptive/folder/')", "from adaptivemd import Configuration\ncfg = Configuration('local', shared_path='$HOME/admd')\n\ncfg.shared_path", "Last, we save our resource configuration and initialize our empty project with it. initialize is only run once for a project. When this command is executed, the project is entered in the MongoDB under the name 'tutorial'. The directory for the project is not yet created, as workers manage the files and folders associated with a project.", "#TODO#property of current config\n#FIXME#project.initialize(cfg)\nproject.initialize({'shared_path': '$HOME/admd'})\n#FIXME#noresourcename#project.initialize()\n#project.initialize('../configuration.cfg', 'local')\n\nproject.configurations.one.shared_path", "File Objects", "from adaptivemd import File", "First we define a File object. Instead of just a string, these are used to represent files anywhere, on the cluster or your local application. There are some subclasses or extensions of File that have additional meta information like Trajectory or Frame. The underlying base object of a File is called a Location.\nWe start with a first PDB file that is located on this machine at a relative path", "pdb_file = File('file://../files/alanine/alanine.pdb')", "File, like any complex object in adaptivemd, can have a .name attribute that makes them easier to find later. You can either set the .name property after creation, or use a little helper method .named() to get a one-liner. This function will set .name and return itself.\nFor more information about the possibilities to specify filelocation consult the documentation for File", "pdb_file.name = 'initial_pdb'", "The .load() at the end is important. 
It causes the File object to load the content of the file, and if you save the File object in the database, the actual file is stored with it. This way it can simply be rewritten on the cluster or anywhere else.", "pdb_file.load()", "Generator Objects\nTaskGenerators are instances whose purpose is to create tasks for execution. This is similar to the\nway Kernels work. A TaskGenerator will generate Task objects for you which will be translated into a TaskDescription and executed. In simple terms:\nThe task generator creates the bash scripts for you that run a simulation or run pyemma.\nA task generator will be initialized with all parameters needed to make it work, and it will know what files need to be staged for the task to be used. adaptivemd relies primarily on two types of generators we will call:\n\nengine: for producing simulation data\nmodeller: for analyzing simulation data\n\nThe Engine is used to run trajectories", "from adaptivemd.engine.openmm import OpenMMEngine", "A task generator will create tasks that workers use to run simulations. Currently, this means a little python script is created that will execute OpenMM. It requires conda to be added to the PATH variable, or at least openmm to be included in the python installation used by the resource. If you set up your resource correctly, then the task should execute automatically via a worker.\nSo let's do an example for the OpenMM engine. A small python script is created that makes OpenMM look like an executable. 
It runs a simulation by providing an initial frame, OpenMM specific system.xml and integrator.xml files, and some additional parameters like the platform name, how often to store simulation frames, etc.", "engine = OpenMMEngine(\n    pdb_file=pdb_file,\n    system_file=File('file://../files/alanine/system.xml').load(),\n    integrator_file=File('file://../files/alanine/integrator.xml').load(),\n    args='-r --report-interval 1 -p CPU'\n).named('openmm')", "We now have an OpenMMEngine which uses the previously made pdb File object in the location defined by its shared_path. The same for the OpenMM XML files, along with some args to run using the CPU kernel, etc.\nLast we name the engine openmm to find it later, when we reopen the project.", "engine.name", "Next, we need to set the output types we want the engine to generate. We chose a stride of 10 for the master trajectory without selection, and save a second trajectory selecting only protein atoms and native stride.\nNote that the stride and frame number ALWAYS refer to the native steps used in the engine. In our example the engine uses 2fs time steps. So master stores every 20fs and protein every 2fs.", "engine.add_output_type('master', 'master.dcd', stride=10)\nengine.add_output_type('protein', 'protein.dcd', stride=1, selection='protein')", "The selection must be an mdtraj formatted atom selection string.\nThe PyEMMAAnalysis modeller", "from adaptivemd.analysis.pyemma import PyEMMAAnalysis", "The object that computes an MSM model from existing trajectories that you pass it. It is initialized with a .pdb file that is used to create features between the $c_\alpha$ atoms. This implementation requires a PDB but in general this is not necessary. 
It is specific to my PyEMMAAnalysis show case.", "modeller = PyEMMAAnalysis(\n engine=engine,\n outtype='protein',\n features={'add_inverse_distances': {'select_Backbone': None}}\n).named('pyemma')", "Again we name it pyemma for later reference.\nWe specified which output type from the engine we want to analyse. We chose the protein trajectories since these are faster to load and have better time resolution.\nThe features dict expresses which features to use in the analysis. In this case we will use all inverse distances between backbone c_alpha atoms.\nAdd generators to project\nNext step is to add the generators to the project for later usage. We pick the .generators store and just add it. Consider a store that works like a set() in python, where additionally when an object is added it is stored in the database. It contains objects only once and is not ordered. Therefore we need a name to find the objects later. Of course you can always iterate over all objects, but the order is not given.", "#project.generators.add(engine)\n#project.generators.add(modeller)\nproject.generators.add([engine, modeller])\nlen(project.generators)", "Note, that you cannot add the same engine instance twice (or any stored object to its store). If you create a new but equivalent engine, it will be considered different and hence you can store it again.", "project.generators.add(engine)\nlen(project.generators)", "Create one initial trajectory\nFinally we are ready to run a first trajectory that we will store as a point of reference in the project. Also it is nice to see how it works in general.\nWe are using a Worker approach. This means simply that someone (in our case the user from inside a script or a notebook) creates a list of tasks to be done and some other instance (the worker) will actually do the work.\nCreate a Trajectory object\nFirst we create the parameters for the engine to run the simulation. 
We use a Trajectory object (a special File with initial frame and length) as the input. You could of course pass these things separately, but this way, we can actually reference the not yet existing trajectory and do stuff with it.\nA Trajectory should have a unique name, and so there is a project function to do this automatically. It uses numbers and makes sure that this number has not been used yet in the project. The data will be stored in the \"$HOME/admd/projects/project.name/trajs/traj.name\" directory we set in the configuration, i.e.:\n\n$HOME/admd/project/tutorial/trajs/00000000/", "trajectory = project.new_trajectory(engine['pdb_file'], 100, engine)\n#trajectory = project.new_trajectory(pdb_file, 100, engine)\ntrajectory", "This says, initial is alanine.pdb run for 100 frames and is named xxxxxxxx. This is the name of a folder in the data directory, where trajectory files will be stored. Multiple atom selections, e.g. protein and all atoms, may be written to create multiple files in this folder. We will refer to these distinct trajectories as the outtypes later.\nWhy do we need a trajectory object?\nYou might wonder why a Trajectory object is necessary. You could just build a function that will take these parameters and run a simulation. At the end it will return the trajectory object. The same object we created just now.\nOne main reason is to use it as a so-called Promise in AdaptiveMD's asynchronous execution framework. The trajectory object we built acts as a Promise, so what is that exactly?\nA Promise is a value (or an object) that represents the result of a function at some point in the future. In our case it represents a trajectory at some point in the future. Normal promises have specific functions to deal with the unknown result; for us this is a little different, but the general concept stands. 
We create an object that represents the specifications of a Trajectory and so, regardless of existence (the corresponding data file), we can use the trajectory as if it already existed to build operations on it.\nWe see the second reason by considering the object after the promise is fulfilled. We now have an object that can offer a lightweight view on the trajectory data it represents for inspection and sampling. Later we will use it as a convenient way to view analysis results.\nTrajectory objects are list-like, get the length:", "print(trajectory.length)", "and since the length is fixed, we know how many frames there are and can access them", "print(trajectory[20].exists)\nprint(trajectory[20])\nprint(trajectory[19].exists)\nprint(trajectory[19])", "extend method to elongate the trajectory in an additional task", "print(trajectory.extend(100))", "run method gives us a task that will do an MD simulation and create the trajectory", "print(trajectory.run())", "We can ask to extend it, we can save it. We can reference specific frames in it before running a simulation. You could even build a whole set of related simulations this way without running a single frame. This is pretty powerful especially in the context of running asynchronous simulations.\nCreate a Task object\nNow we want this trajectory to actually exist, so we have to make it. This requires a Task object that knows how to describe the execution of a simulation. Since Task objects are very flexible and can be complex, there are helper functions (i.e. factories) to create these in an easy manner like the ones we created before.\nUse the trajectory (which uses its engine) to call .run() and save the returned task object to directly work with it.", "task = trajectory.run()", "That's it, just take a trajectory description and turn it into a task that contains the shell commands and needed files, etc. 
Use the property trajectory.exists to see whether the trajectory object is associated with any data.", "trajectory.exists", "Submit the task to the queue\nFinally we need to add this task to the things we want to be done. This is easy and only requires saving the task to the project. This is done to the project.tasks bundle and once it has been stored it can be picked up by any worker to execute it.\nNote that you should be able to submit a trajectory like this, however in practical situations it is likely some additional operations are required in the pre- and post- tasks (outside scope of this tutorial), so they will usually be converted to tasks prior to queueing in project.", "#FIXME#project.queue(trajectory)\n# shortcut for project.tasks.add(task)\nproject.queue(task)\n\nlen(project.tasks)", "That is all we can do from here. To execute the tasks you need to create a worker using the adaptivemdworker command from the shell:\nbash\nadaptivemdworker tutorial --verbose\nFor the simple setup in this tutorial, you can just navigate to the directory as follows if it's not in your PATH already.\nbash\ncd home/of/adaptivemd/scripts/\nThe worker is responsible for managing the project filestructure, so when both the worker is running and project entries and structure are changed in the database, the directory and subdirectories of \"$shared_path/project/tutorial\" are created and modified. The project subfolders \"trajs\" and \"models\" are populated with trajectories and models of the corresponding names as workers complete tasks that are entered into the database.", "task.state\nproject.tasks.all.state\n\n# now there are data files & folders associated with the trajectory\nproject.wait_until(task.is_done)\n\ntask.state", "If you are done for now, it's also good practice to relieve your workers (and save yourself some compute time charges on HPC resources!). You don't have to, even if you're closing the project's database connection. 
They are associated with the project and will accept tasks that are entered in at any point.", "# use the 'one' method inherited from bundle to see available methods\n# for the worker type, such as 'execute'\n#project.workers.one.execute('shutdown')\n\n# but use 'all' method in practice to apply across all members of\n# the workers bundle in the typical case, where you have many workers\nproject.workers.all.execute('shutdown')", "The final project.close() will close the DB connection. The daemon outside the notebook would be closed separately.", "project.close()" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
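The "promise" behavior described in the tutorial above, referencing a trajectory and its frames before any data file exists, can be illustrated with a toy class. This is deliberately not the adaptivemd API; every name below is hypothetical.

```python
import os

# Toy promise-like trajectory: it can be sliced, extended and composed
# before the underlying data file exists on disk.
class TrajectoryPromise:
    def __init__(self, path, length):
        self.path = path
        self.length = length

    @property
    def exists(self):
        # fulfilled only once a worker has actually written the file
        return os.path.isfile(self.path)

    def __getitem__(self, frame):
        if not 0 <= frame < self.length:
            raise IndexError(frame)
        return (self.path, frame)      # a frame *reference*, not frame data

    def extend(self, n):
        # extending returns a new promise; no simulation is run here
        return TrajectoryPromise(self.path, self.length + n)

t = TrajectoryPromise("trajs/00000000/master.dcd", 100)
longer = t.extend(100)
```

Here t[99] is a valid frame reference even while t.exists is still False, and longer.length is 200. This mirrors how the real Trajectory objects let you build follow-up tasks before any MD has run.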
jegibbs/phys202-2015-work
assignments/assignment03/NumpyEx02.ipynb
mit
[ "Numpy Exercise 2\nImports", "import numpy as np\n%matplotlib inline\nimport matplotlib.pyplot as plt\nimport seaborn as sns", "Factorial\nWrite a function that computes the factorial of small numbers using np.arange and np.cumprod.", "def np_fact(n):\n \"\"\"Compute n! = n*(n-1)*...*1 using Numpy.\"\"\"\n if n == 0:\n return 1\n else:\n a = np.arange(1,n+1,1)\n b = a.cumprod(0)\n return b[n-1]\n\nassert np_fact(0)==1\nassert np_fact(1)==1\nassert np_fact(10)==3628800\nassert [np_fact(i) for i in range(0,11)]==[1,1,2,6,24,120,720,5040,40320,362880,3628800]", "Write a function that computes the factorial of small numbers using a Python loop.", "def loop_fact(n):\n \"\"\"Compute n! using a Python for loop.\"\"\"\n if n == 0:\n return 1\n else:\n factorial = 1\n for i in range(1,n+1):\n factorial *= i\n return factorial\n\nassert loop_fact(0)==1\nassert loop_fact(1)==1\nassert loop_fact(10)==3628800\nassert [loop_fact(i) for i in range(0,11)]==[1,1,2,6,24,120,720,5040,40320,362880,3628800]", "Use the %timeit magic to time both versions of this function for an argument of 50. The syntax for %timeit is:\npython\n%timeit -n1 -r1 function_to_time()", "%timeit -n1 -r1 np_fact(100)\n\n%timeit -n1 -r1 loop_fact(100)", "In the cell below, summarize your timing tests. Which version is faster? Why do you think that version is faster?\nI would have guessed that np_fact was going to be faster, but it turned out loop_fact was faster. I think np_fact was slower because it creates a whole array and then finds the product of all of the values in that array. loop_fact is faster because it iterates over a range and multiplies those numbers together as one step." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
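The two implementations compared in this exercise can be condensed into a self-contained sketch (mirroring the notebook's `np_fact` and `loop_fact`); as the exercise's conclusion notes, the vectorised version pays for allocating an intermediate array on small inputs:

```python
import numpy as np

def np_fact(n):
    """n! via a cumulative product over np.arange."""
    if n == 0:
        return 1
    # cumprod()[-1] is the running product of 1..n, i.e. n!
    return np.arange(1, n + 1).cumprod()[-1]

def loop_fact(n):
    """n! via a plain Python loop."""
    result = 1
    for i in range(1, n + 1):
        result *= i
    return result

# both versions agree on small arguments
assert np_fact(10) == loop_fact(10) == 3628800
```

Note that the numpy version uses fixed-width integers, so it silently overflows for larger arguments (well before n = 100), whereas the Python loop uses arbitrary-precision integers.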
PaulSalden/notebooks
Non-Uniform Rational B-Splines (NURBS).ipynb
mit
[ "%matplotlib inline\nfrom mpl_toolkits.mplot3d import Axes3D\nimport numpy as np\nimport matplotlib.pyplot as plt", "Non-Uniform Rational B-Splines (NURBS)\nIn an exploration of the possibilities of Jupyter Notebooks, I like to consider NURBS. These curves (or surfaces or solids) play a basic but important role for my graduation thesis. My goal is to make extensive use of numpy and its vectorization capabilities.\nNURBS are formed as a combination of basis functions, each defined by the Cox-De Boor recursion formula[1][2].\n$$ N_{i,n}\left( u\right) =\frac{u-k_i}{k_{i+n}-k_i}N_{i,n-1}\left( u\right) +\frac{k_{i+n+1}- u}{k_{i+n+1}-k_{i+1}}N_{i+1,n-1}\left( u\right) $$\nThis formula yields the $i$th basis function of polynomial order $n$, given a knot vector $k$. An implementation for arrays of $u$ values is suitable for plotting purposes.", "# Cox-De Boor recursion formula, result evaluated for array u\n# i = 0, 1, 2, ..., n\ndef cox_de_boor(knots, i, n, u):\n if n == 0:\n isbetween = np.logical_and(u >= knots[i], u < knots[i + 1])\n return np.where(isbetween, 1, 0)\n \n result = np.zeros_like(u)\n \n # determine if denominators are zero first\n Ad = knots[i + n] - knots[i]\n if Ad != 0:\n # parentheses matter: the whole numerator (u - knots[i]) is divided by Ad\n A = (u - knots[i]) / Ad\n result += A * cox_de_boor(knots, i, n - 1, u)\n\n Bd = knots[i + n + 1] - knots[i + 1]\n if Bd != 0:\n B = (knots[i + n + 1] - u) / Bd\n result += B * cox_de_boor(knots, i + 1, n - 1, u)\n \n return result
It is therefore often convenient to collect values of all basis functions for all $u$ positions in a 2D matrix.", "def basis_matrix(knots, u):\n # determine polynomial order from assumed open knot vector\n n = np.sum(knots == knots[0]) - 1\n \n # number of basis functions\n nbas = knots.size - n - 1\n \n result = np.empty((nbas, u.size))\n for i in range(nbas):\n result[i, :] = cox_de_boor(knots, i, n, u)\n \n return result", "Plotting a NURBS basis then becomes straightforward.", "knots = np.array([0., 0., 0., 0., 1., 1., 1., 1.])\nu = np.linspace(knots[0], 0.99999 * knots[-1], 100)\n\nbasis = basis_matrix(knots, u)\n\nplt.plot(np.tile(u, (basis.shape[0], 1)).T, basis.T)\nplt.show()", "The actual NURBS is formed as a linear combination of the basis. In a way, each basis value is 'weighted' and combined with a control point.[1]\n$$ C\left(u\right) =\sum\limits^{n+1}_{i=1}\frac{N_{i,n}\left(u\right) w_i}{\sum^{n+1}_{j=1}N_{j,n}\left(u\right) w_j}\boldsymbol{P}_i $$", "class NURBS(object):\n def __init__(self, knots, cpoints, weights):\n self.knots = knots\n self.cpoints = cpoints\n self.weights = weights\n \n def eval(self, u):\n basis = basis_matrix(self.knots, u)\n \n wbasis = np.einsum('i, ik -> ik', self.weights, basis)\n W = wbasis.sum(axis=0)\n \n return np.einsum('ij, ik -> jk', self.cpoints, wbasis / W)", "Note the use of the einsum() function.[3] While this method may require some work to understand, the author believes it provides the computation with more transparency than conventional matrix manipulation. Compared to the formula, however, one needs extra indices because many $u$ values are handled at once.\nThis NURBS object may then be used to plot a curve in space. 
A spiral, for instance, is readily constructed.", "knots = np.array([0., 0., 0., 1., 1., 2., 2., 3., 3., 4., 4., 5., 5., 6., 6., 6.])\ncpoints = np.array([[3, 0, 0], [3, 3, 1], [0, 3, 2], [-3, 3, 3],\n [-3, 0, 4], [-3, -3, 5], [0, -3, 6], [3, -3, 7],\n [3, 0, 8], [3, 3, 9], [0, 3, 10], [-3, 3, 11],\n [-3, 0, 12]])\nw = 1 / np.sqrt(2)\nweights = np.array([1, w, 1, w, 1, w, 1, w, 1, w, 1, w, 1])\n\ncurve = NURBS(knots, cpoints, weights)\n\nu = np.linspace(knots[0], 0.99999 * knots[-1], 100)\nx, y, z = curve.eval(u)\n\nfig = plt.figure()\nax = fig.gca(projection='3d')\nax.plot(x, y, z)\nplt.show()", "NURBS may be expanded to higher dimensional objects[1]. The basis is then a 'tensor product' of one dimensional basis functions. For this reason, NURBS surfaces (and solids) are said to have a 'tensor product structure'. As an example, a NURBS surface is computed.\n$$ S\left( u,v\right) =\sum\limits^{n+1}_{i=1}\sum\limits^{m+1}_{j=1}\frac{N_{i,n}\left( u\right) N_{j,m}\left( v\right) w_{i,j}}{\sum^{n+1}_{p=1}\sum^{m+1}_{q=1}N_{p,n}\left( u\right) N_{q,m}\left( v\right) w_{p,q}}\boldsymbol{P}_{i,j} $$", "class Surface(object):\n def __init__(self, knots_u, knots_v, cpoints, weights):\n self.knots_u = knots_u\n self.knots_v = knots_v\n self.cpoints = cpoints\n self.weights = weights\n \n def eval(self, u, v):\n basis_u = basis_matrix(self.knots_u, u)\n basis_v = basis_matrix(self.knots_v, v)\n \n wbasis = np.einsum('ij, ik, jl -> ijkl', self.weights, basis_u, basis_v)\n W = wbasis.sum(axis=(0, 1))\n \n return np.einsum('ijk, ijlm -> klm', self.cpoints, wbasis / W)\n\nknots_u = np.array([0., 0., 0., 1., 1., 2., 2., 2.])\nknots_v = knots_u\n\ncpoints = np.array([\n [[0, 0, 0] , [0, 0, -1], [1, 0, -1], [2, 0, -1], [2, 0, 0]],\n [[0, 1, -0.5], [0, 1, -1], [1, 1, -1], [2, 1, -1], [2, 1, -0.5]],\n [[0, 2, 0] , [0, 2, 0] , [1, 2, 0] , [2, 2, 0] , [2, 2, 0]],\n [[0, 3, 0.5] , [0, 3, 1] , [1, 3, 1] , [2, 3, 1] , [2, 3, 0.5]],\n [[0, 4, 0] , [0, 4, 1] , [1, 4, 1] , [2, 4, 1] , [2,
4, 0]],\n ])\nw = 1 / np.sqrt(2)\nweights = np.array([\n [1, w, 1, w, 1],\n [1, w, 1, w, 1],\n [1, w, 1, w, 1],\n [1, w, 1, w, 1],\n [1, w, 1, w, 1],\n ])\n\nsurface = Surface(knots_u, knots_v, cpoints, weights)\n\nu = np.linspace(knots_u[0], 0.99999 * knots_u[-1], 100)\nv = np.linspace(knots_v[0], 0.99999 * knots_v[-1], 100)\nx, y, z = surface.eval(u, v)\n\nfig = plt.figure()\nax = fig.gca(projection='3d')\nax.plot_surface(x, y, z)\nplt.show()", "References\n\n[1] Wikipedia, Non-uniform rational B-spline, Technical specifications\n[2] Wikipedia, De Boor's algorithm\n[3] Numpy's einsum()", "from IPython.core.display import HTML\nHTML(open(\"style.css\", \"r\").read())" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
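A quick property check on the Cox-De Boor recursion used in this notebook: a clamped B-spline basis should sum to one everywhere inside the knot span (partition of unity). A self-contained sketch, restating the recursion with the numerators parenthesised explicitly:

```python
import numpy as np

def cox_de_boor(knots, i, n, u):
    """Evaluate the i-th B-spline basis function of order n at points u."""
    if n == 0:
        # indicator of the half-open knot interval [k_i, k_{i+1})
        return np.where((u >= knots[i]) & (u < knots[i + 1]), 1.0, 0.0)
    result = np.zeros_like(u)
    denom_a = knots[i + n] - knots[i]
    if denom_a != 0:
        result += (u - knots[i]) / denom_a * cox_de_boor(knots, i, n - 1, u)
    denom_b = knots[i + n + 1] - knots[i + 1]
    if denom_b != 0:
        result += (knots[i + n + 1] - u) / denom_b * cox_de_boor(knots, i + 1, n - 1, u)
    return result

knots = np.array([0., 0., 0., 0., 1., 1., 1., 1.])  # clamped cubic, single span
u = np.linspace(0.0, 0.999, 50)
basis = np.array([cox_de_boor(knots, i, 3, u) for i in range(4)])
assert np.allclose(basis.sum(axis=0), 1.0)  # partition of unity
```

On this knot vector the four basis functions reduce to the cubic Bernstein polynomials, so the partition-of-unity check also validates the zero-denominator guards for repeated knots.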
AllenDowney/ProbablyOverthinkingIt
fever.ipynb
mit
[ "Copyright 2016 Allen Downey\nMIT License: http://opensource.org/licenses/MIT", "from __future__ import print_function, division\n\nimport thinkbayes2\nimport thinkplot\n\nimport numpy as np\nfrom scipy import stats\n\n%matplotlib inline", "Here's an update that takes the prior probability of being sick, prior=$x$, and the likelihoods of the data, prob_given_sick=$P(fever|sick)$ and prob_given_not =$P(fever|not sick)$.", "def update(prior, prob_given_sick, prob_given_not):\n suite = thinkbayes2.Suite()\n suite['sick'] = prior * prob_given_sick\n suite['not sick'] = (1-prior) * prob_given_not\n suite.Normalize()\n return suite['sick']", "If we start with $x=0.1$ and update with the assumption that fever is more likely if you're sick, the posterior goes up to $x\prime = 0.25$", "prior = 0.1\nprob_given_sick = 0.9\nprob_given_not = 0.3\n\npost = update(prior, prob_given_sick, prob_given_not)\npost", "Now suppose we don't know $s =$ prob_given_sick=$P(fever|sick)$ and $t = $ prob_given_not =$P(fever|not sick)$, but we think they are uniformly distributed and independent.", "dist_s = thinkbayes2.Beta(1, 1)\ndist_t = thinkbayes2.Beta(1, 1)\ndist_s.Mean(), dist_t.Mean()", "We can compute the distribution of $x\prime$ by drawing samples from the distributions of $s$ and $t$ and computing the posterior for each.", "n = 1000\nss = dist_s.Sample(n)\nts = dist_t.Sample(n)", "Just checking that the samples have the right distributions:", "thinkplot.Cdf(thinkbayes2.Cdf(ss))\nthinkplot.Cdf(thinkbayes2.Cdf(ts))", "Now computing the posteriors:", "posts = [update(prior, s, t) for s, t in zip(ss, ts)]", "Here's what the distribution of values for $x\prime$ looks like:", "cdf = thinkbayes2.Cdf(posts)\nthinkplot.Cdf(cdf)", "And here's the mean:", "cdf.Mean()", "This result implies that if our prior probability for $x$ is 0.1, and then we learn that the patient has a fever, we should be uncertain about $x\prime$, and this distribution describes that uncertainty. 
It says that the fever probably has little predictive power, but might have quite a lot.\nThe mean of this distribution is a little higher than the prior, which suggests that our priors for $s$ and $t$ are not neutral with respect to updating $x$. It's surprising that the effect is not symmetric, because our beliefs about $s$ and $t$ are symmetric. But then again, we just computed an arithmetic mean on a set of probabilities, which is a bogus kind of thing to do. So maybe we deserve what we got.\nJust for fun, what would we have to believe about $s$ and $t$ to make them neutral with respect to the posterior mean of $x$?", "dist_s = thinkbayes2.Beta(1, 1)\ndist_t = thinkbayes2.Beta(1.75, 1)\nn = 10000\nss = dist_s.Sample(n)\nts = dist_t.Sample(n)\nthinkplot.Cdf(thinkbayes2.Cdf(ss))\nthinkplot.Cdf(thinkbayes2.Cdf(ts))\n\nposts = [update(prior, s, t) for s, t in zip(ss, ts)]\nnp.array(posts).mean()", "Now here's a version that simulates worlds where $x$ is known and $s$ and $t$ are drawn from uniform distributions. 
For each $s$-$t$ pair, we generate one patient with a fever and compute the probability that they are sick.", "def prob_sick(x, s, t):\n return x * s / (x * s + (1-x) * t)\n\ndist_s = thinkbayes2.Beta(1, 1)\ndist_t = thinkbayes2.Beta(1, 1)\nn = 10000\nss = dist_s.Sample(n)\nts = dist_t.Sample(n)\n\nx = 0.1\nprobs = [prob_sick(x, s, t) for s, t in zip(ss, ts)]\n\nnp.array(probs).mean()\n\ncohort = np.random.random(len(probs)) < probs\n\ncohort.mean()", "April 6, 2016\nSuppose \n* s is known to be 0.5\n* x is known to be 0.1\n* t is equally likely to be 0.2 or 0.8, but we don't know which\nIf we take the average value of t and compute p = p(sick|fever), we would consider p to be the known quantity 0.1", "s = 0.5\nt = 0.5\nx = 0.1\np = x * s / (x * s + (1-x) * t)\np", "If we propagate the uncertainty about t through the calculation, we consider p to be either p1 or p2, but we don't know which.", "t = 0.8\np1 = x * s / (x * s + (1-x) * t)\np1, 5/77\n\nt = 0.2\np2 = x * s / (x * s + (1-x) * t)\np2, 5/23", "If we were asked to make a prediction about a single patient, we would average the two possible values of p.", "(p1 + p2) / 2\n\ndef logodds(p):\n return np.log(p / (1-p))\n\nlogodds(p1) - logodds(0.1), logodds(p2) - logodds(0.1)\n\np_mix = (p1 + p2) / 2\nlogodds(p_mix) - logodds(0.1)", "So let's simulate a series of patients by drawing a random value of t, computing p, and then tossing coins with probability p.", "import random\n\ndef simulate_patient():\n s = 0.5\n t = random.choice([0.2, 0.8])\n x = 0.1\n p = x * s / (x * s + (1-x) * t)\n return random.random() < p\n\nsimulate_patient()", "In this simulation, the fraction of patients with fever who turn out to be sick is close to 0.141", "patients = [simulate_patient() for _ in range(10000)]\nsum(patients) / len(patients)\n\nx = 0.1\ns = 0.5\nt1 = 0.2\nt2 = 0.8\n\nimport pandas as pd\nd1 = dict(feverbad=0.5, fevergood=0.5)\nd2 = dict(sick=x, notsick=1-x)\nd3 = dict(fever='t', notfever='1-t')\n\niterables = 
[d1.keys(), d2.keys(), d3.keys()]\n\nindex = pd.MultiIndex.from_product(iterables, names=['first', 'second', 'third'])\ndf = pd.DataFrame(np.zeros(8), index=index, columns=['prob'])\n\nt_map = dict(fevergood=t2, feverbad=t1)\n\nfor label1, p1 in d1.items():\n t = t_map[label1]\n for label2, p2 in d2.items():\n for label3, p3 in d3.items():\n if label2 == 'sick':\n p = p1 * p2 * 0.5\n else:\n p = p1 * p2 * eval(p3)\n\n df.prob[label1, label2, label3] = p\n \ndf ", "If there are two kinds of people, some more fever prone than others, and we don't know which kind of patient we're dealing with, P(sick | fever)", "df.prob[:, 'sick', 'fever'].sum() / df.prob[:, :, 'fever'].sum()", "If we know fevergood, P(sick|fever) is", "p_sick_fevergood = df.prob['fevergood', 'sick', 'fever'].sum() / df.prob['fevergood', :, 'fever'].sum()\np_sick_fevergood", "If we know feverbad, P(sick|fever) is", "p_sick_feverbad = df.prob['feverbad', 'sick', 'fever'].sum() / df.prob['feverbad', :, 'fever'].sum()\np_sick_feverbad", "If we think there's a 50-50 chance of feverbad", "(p_sick_fevergood + p_sick_feverbad) / 2", "If fevergood, here's the fraction of all patients with fever:", "p_fever_fevergood = df.prob['fevergood', :, 'fever'].sum()\np_fever_fevergood", "If fever bad, here's the fraction of all patients with fever", "p_fever_feverbad = df.prob['feverbad', :, 'fever'].sum()\np_fever_feverbad", "So if we started out thinking there's a 50-50 chance of feverbad, we should now think feverbad is less likely", "p_feverbad_fever = p_fever_feverbad / (p_fever_feverbad + p_fever_fevergood)\np_feverbad_fever", "And if we compute the weighted sum of the two possible worlds", "p_feverbad_fever * p_sick_feverbad + (1-p_feverbad_fever) * p_sick_fevergood" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
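The two-hypothesis update at the heart of this notebook needs no library at all; a dependency-free sketch of the same arithmetic reproduces the numbers derived above:

```python
def update(prior, prob_given_sick, prob_given_not):
    """P(sick | fever) for a two-hypothesis Bayes update."""
    joint_sick = prior * prob_given_sick
    joint_not = (1 - prior) * prob_given_not
    # normalize over the two hypotheses
    return joint_sick / (joint_sick + joint_not)

# prior 0.1 with s=0.9, t=0.3 gives the posterior 0.25 from the notebook
assert abs(update(0.1, 0.9, 0.3) - 0.25) < 1e-12

# mixing the two possible worlds t=0.8 and t=0.2 (s fixed at 0.5)
p1 = update(0.1, 0.5, 0.8)   # 5/77
p2 = update(0.1, 0.5, 0.2)   # 5/23
assert abs((p1 + p2) / 2 - 0.141) < 1e-3
```

The last assertion matches the simulated fraction of sick patients among those with fever (close to 0.141), since averaging the posteriors over equally likely worlds is exactly what the simulation estimates.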
mne-tools/mne-tools.github.io
0.19/_downloads/285b673bc4afbda20046e6f8ff40b632/plot_sensor_noise_level.ipynb
bsd-3-clause
[ "%matplotlib inline", "Show noise levels from empty room data\nThis shows how to use :meth:mne.io.Raw.plot_psd to examine noise levels\nof systems. See [1]_ for an example.\nReferences\n.. [1] Khan S, Cohen D (2013). Note: Magnetic noise from the inner wall of\n a magnetically shielded room. Review of Scientific Instruments 84:56101.\n https://doi.org/10.1063/1.4802845", "# Author: Eric Larson <larson.eric.d@gmail.com>\n#\n# License: BSD (3-clause)\n\nimport os.path as op\nimport mne\n\ndata_path = mne.datasets.sample.data_path()\n\nraw_erm = mne.io.read_raw_fif(op.join(data_path, 'MEG', 'sample',\n 'ernoise_raw.fif'), preload=True)", "We can plot the absolute noise levels:", "raw_erm.plot_psd(tmax=10., average=True, spatial_colors=False,\n dB=False, xscale='log')" ]
[ "code", "markdown", "code", "markdown", "code" ]
ellisztamas/faps
docs/.ipynb_checkpoints/08 Data cleaning in A. majus-checkpoint.ipynb
mit
[ "Data cleaning for Antirrhinum majus data set from 2012\nTom Ellis, June 2017\nIn this notebook we will examine an empirical dataset using the snapdragon Antirrhinum majus.\nIn 2012 we collected open-pollinated seed capsules from wild mothers and genotyped samples of the offspring. A single seed capsule contains up to several hundred seeds from between one and many pollen donors. We also collected tissue and GPS positions for as many of the adult reproductive plants as we could find.\nThese data are those described and analysed by Ellis et al. (2018), and are available from the IST Austria data repository (DOI:10.15479/AT:ISTA:95). \nBelow, we will do an initial data inspection to weed out dubious loci and individuals. It can be argued that this process was overly conservative, and we threw out a lot of useful data, so you need not necessarily be so critical of your own data.", "import numpy as np\nfrom faps import *\nimport matplotlib.pyplot as plt\n%pylab inline", "Data inspection\nImport genotype data for the reproductive adults and offspring. The latter includes information on the ID of the mother.", "progeny = read_genotypes('../manuscript_faps/data_files/offspring_SNPs_2012.csv', mothers_col=1, genotype_col=2)\nadults = read_genotypes('../manuscript_faps/data_files/parents_SNPs_2012.csv')\n\niix = [i in adults.names for i in progeny.mothers.tolist()]\nprogeny = progeny.subset(iix)", "GPS data\nWe will also import GPS data for 2219 individuals tagged as alive in 2012. 
Since not all of these have been genotyped, we reformat the data to match the genotype data.", "gps_pos = np.genfromtxt('../manuscript_faps/data_files/amajus_GPS_2012.csv', delimiter=',', skip_header=1, usecols=[3,4]) # import CSV file\ngps_lab = np.genfromtxt('../manuscript_faps/data_files/amajus_GPS_2012.csv', delimiter=',', skip_header=1, usecols=0, dtype='str') # import CSV file\n# subset GPS data to match the genotype data.\nix = [i for i in range(len(gps_lab)) if gps_lab[i] in adults.names]\ngps_pos, gps_lab = gps_pos[ix], gps_lab[ix]", "17 individuals are actually from about 15km to the East. It is hard to imagine that these individuals could contribute to the pollen pool of the mothers, so let's remove these from the sample.", "plt.scatter(gps_pos[:,0], gps_pos[:,1])\n\nix = [i for i in range(len(gps_lab)) if gps_pos[i,0] > -5000]\ngps_pos, gps_lab = gps_pos[ix], gps_lab[ix]", "Genotype information\nAs a sanity check, confirm that the marker names really do match.", "all([progeny.markers[i] == adults.markers[i] for i in range(progeny.nloci)])", "Tissue from the adults and progeny was dried in different ways. For the progeny, I didn't use enough silica gel to dry the tissue rapidly, and the DNA became degraded. 
Reflecting this, the genotype dropout rate (the rate at which genotype information at a single locus fails to amplify) is respectable for the adults, but dire for the offspring.", "print adults.missing_data().max()\nprint progeny.missing_data().max()\n", "Luckily a lot of this is driven by a small number of loci/individuals with very high dropout rates.", "fig = plt.figure(figsize=(10,10))\nfig.subplots_adjust(wspace=0.2, hspace=0.2)\n\nmdo = fig.add_subplot(2,2,1)\nmdo.hist(progeny.missing_data('marker'), bins=np.arange(0, 1, 0.05))\nmdo.set_xlabel(\"Missing data\")\nmdo.set_ylabel(\"Number of loci\")\nmdo.set_title('Per locus: offspring')\n\nindo = fig.add_subplot(2,2,2)\nindo.hist(progeny.missing_data(by='individual'), bins=np.arange(0, 1, 0.05))\nindo.set_xlabel(\"Missing data\")\nindo.set_ylabel(\"Number of individuals\")\nindo.set_title('Per individual: offspring')\n\nmda = fig.add_subplot(2,2,3)\nmda.hist(adults.missing_data('marker'), bins=np.arange(0, 1, 0.05))\nmda.set_xlabel(\"Missing data\")\nmda.set_ylabel(\"Number of loci\")\nmda.set_title('Per locus: adults')\n\ninda = fig.add_subplot(2,2,4)\ninda.hist(adults.missing_data(by='individual'), bins=np.arange(0, 1, 0.05))\ninda.set_xlabel(\"Missing data\")\ninda.set_ylabel(\"Number of individuals\")\ninda.set_title('Per individual: adults')", "Although overall per locus drop-out rates are low for the adults, there are some individuals with alarmingly high amounts of missing data. 
Candidates with very few loci typed can come out as being highly compatible with many offspring, just because there is insufficient information to exclude them.", "print adults.missing_data(by='individual').max()\nprint progeny.missing_data('individual').max()", "Count, then remove individuals with >5% missing data.", "print \"Offspring:\", len(np.array(progeny.names)[progeny.missing_data(1) > 0.05])\nprint \"Parents:\", len(np.array(adults.names)[adults.missing_data(1) > 0.05])\n\nprogeny = progeny.subset( individuals= progeny.missing_data(1) < 0.05)\nadults = adults.subset(individuals= adults.missing_data(1) < 0.05)", "Histograms look much better. It would still be worth removing some of the dubious loci with high drop-out rates though.", "fig = plt.figure(figsize=(10,10))\nfig.subplots_adjust(wspace=0.2, hspace=0.2)\n\nmdo = fig.add_subplot(2,2,1)\nmdo.hist(progeny.missing_data('marker'), bins=np.arange(0, 0.7, 0.05))\nmdo.set_xlabel(\"Missing data\")\nmdo.set_ylabel(\"Number of loci\")\nmdo.set_title('Per locus: offspring')\n\nindo = fig.add_subplot(2,2,2)\nindo.hist(progeny.missing_data(by='individual'), bins=np.arange(0, 0.7, 0.05))\nindo.set_xlabel(\"Missing data\")\nindo.set_ylabel(\"Number of individuals\")\nindo.set_title('Per individual: offspring')\n\nmda = fig.add_subplot(2,2,3)\nmda.hist(adults.missing_data('marker'), bins=np.arange(0, 0.7, 0.05))\nmda.set_xlabel(\"Missing data\")\nmda.set_ylabel(\"Number of loci\")\nmda.set_title('Per locus: adults')\n\ninda = fig.add_subplot(2,2,4)\ninda.hist(adults.missing_data(by='individual'), bins=np.arange(0, 0.7, 0.05))\ninda.set_xlabel(\"Missing data\")\ninda.set_ylabel(\"Number of individuals\")\ninda.set_title('Per individual: adults')", "Remove the loci with dropouts >10% from both the offspring and adult datasets.", "print np.array(progeny.markers)[progeny.missing_data(0) >= 0.1]\n\n# compute the mask once, before subsetting, so the same loci are\n# dropped from both datasets\nkeep_loci = progeny.missing_data(0) < 0.1\nprogeny = progeny.subset(loci=keep_loci)\nadults = adults.subset(loci=keep_loci)", "Allele 
frequency and heterozygosity generally show the convex pattern one would expect. An exception is the locus with allele frequency at around 0.4, but heterozygosity >0.7, which is suspect, and indicative of a possible outlier.", "plt.scatter(adults.allele_freqs(), adults.heterozygosity(0))\nplt.xlabel('Allele frequency')\nplt.ylabel('Heterozygosity')\nplt.show()", "Loci with low heterozygosity are not dangerous in themselves; they might contribute some information, albeit little. To be on the safe side, let's remove loci with less than 0.2 heterozygosity, and the errant locus with high heterozygosity.", "print \"Heterozygosity > 0.7:\", adults.markers[adults.heterozygosity(0) >0.7]\nprint \"Heterozygosity < 0.2:\", progeny.markers[adults.heterozygosity(0) < 0.2]\n\nprogeny = progeny.subset(loci= (adults.heterozygosity(0) > 0.2) * (adults.heterozygosity(0) < 0.7))\nadults = adults.subset( loci= (adults.heterozygosity(0) > 0.2) * (adults.heterozygosity(0) < 0.7))", "Summary of genotype data\nThis leaves us with a dataset of 61 loci for which allele frequency and heterozygosity are highest around 0.5, which is what we would like. In particular, heterozygosity (and hence homozygosity) among the adults is humped around 0.5, which is a good sign that parents should be readily distinguishable. 
There is nevertheless substantial spread towards zero and one for the progeny data however, which is less than ideal.", "fig = plt.figure(figsize=(10,10))\nfig.subplots_adjust(wspace=0.1, hspace=0.2)\n\nafp = fig.add_subplot(2,2,1)\nafp.hist(adults.allele_freqs())\nafp.set_title('Adults')\nafp.set_xlabel(\"Allele frequency\")\n\nafo = fig.add_subplot(2,2,2)\nafo.hist(progeny.allele_freqs())\nafo.set_title('Offspring')\nafo.set_xlabel(\"Allele frequency\")\n\nhetp = fig.add_subplot(2,2,3)\nhetp.hist(adults.heterozygosity(0))\nhetp.set_xlabel(\"Heterozygosity\")\n\nheto = fig.add_subplot(2,2,4)\nheto.hist(progeny.heterozygosity(0))\nheto.set_xlabel(\"Heterozygosity\")", "The effective number of loci can be seen as the number of loci at which one can make compare the offspring, maternal and candidate paternal genotype (i.e. those loci with no missing data). Given how high dropouts are in the offspring, it is worthwhile to check the effective number of loci for this dataset.\nIn fact, effective number of loci is good. The minimum number of valid loci to compare is 46, and in 99% of cases there are 57 or more loci.", "np.unique([progeny.mothers[i] for i in range(progeny.size) if progeny.mothers[i] not in adults.names])\nix = [i for i in range(progeny.size) if progeny.mothers[i] in adults.names]\nprogeny = progeny.subset(ix)\n\n\nmothers = adults.subset(progeny.parent_index('m', adults.names))\nneloci = effective_nloci(progeny, mothers, adults)\nplt.hist(neloci.flatten(), bins=np.arange(45.5,63.5,1))\nplt.show()", "Finally, print some summary statistics about the quality of the genotype information in the data set.", "print(adults.nloci)\nprint progeny.missing_data(0).mean()\nprint adults.missing_data(0).mean()\nprint adults.heterozygosity(0).min(), adults.heterozygosity(0).max()\nprint adults.allele_freqs().min(), adults.allele_freqs().max()", "Example family: L1872\nThe progeny dataset consists of offspring from multiple families that were genotyped at the same time. 
It was convenient to consider them as one dataset so far, to tidy up the genotype data, but for subsequent analysis we need to split them up into their constituent full-sib families. This is easy to do with split, which returns a list of genotypeArray objects.", "prlist = progeny.split(progeny.mothers)\nlen(prlist)", "By way of a sanity check we will examine one of the largest families in detail. After the data filtering above, there are 20 offspring from mother L1872. Distributions of missing data, heterozygosity and allele frequency at each locus suggest no reason for alarm.", "ex_progeny = prlist[2]\nex_mother = adults.subset(ex_progeny.parent_index('m', adults.names))\n\nex_progeny.size", "Family structure\nCluster the family into sibships. I have set the proportion of missing parents to 0.1; we have removed 140 of the 2219 (6%) candidates logged as alive in 2012, and I allow for 10% of candidates having been missed. In fact the results do not depend on this parameter unless it is unrealistically high.", "allele_freqs = adults.allele_freqs() # population allele frequencies\nex_patlik = paternity_array(ex_progeny, ex_mother, adults, 0.0015, missing_parents=0.1)\nex_sc = sibship_clustering(ex_patlik, 1000)", "We can first look at the dendrogram of relatedness between individuals derived from the array of paternity likelihoods.", "from scipy.cluster.hierarchy import dendrogram\ndendrogram(ex_sc.linkage_matrix, orientation='left', color_threshold=0,\n above_threshold_color='black')\nplt.show()", "We can compare this to the most-likely partition structure to get a rough idea of what is going on. This partition groups offspring into 7 full sibships and has a posterior probability of 0.8. The partition structure simply labels individuals 0 to 20 with a unique, arbitrary identifier. 
For example, individuals 2 and 3 are grouped into an especially large family labelled '1'.", "print \"most-likely partition:\", ex_sc.mlpartition\nprint \"unique families:\", np.unique(ex_sc.mlpartition)\nprint \"partition probability:\", np.exp(ex_sc.prob_partitions.max())", "We can recover posterior probabilities of paternity for each candidate on each offspring using posterior_paternity_matrix. For most offspring, there is a single candidate with a probability of paternity close to one.", "postpat = ex_sc.posterior_paternity_matrix()\n\n# names of most probable candidates\nmx = np.array([np.where(postpat[i].max() == postpat[i])[0][0] for i in range(ex_progeny.size)])\n\nfrom pandas import DataFrame as df\ndf([adults.names[mx], np.exp(postpat.max(1))]).T", "Family sizes\nConsistent with the results for many families (shown below), the posterior distributions for family size suggest many small families and a smaller number of larger families.", "fig = plt.figure(figsize=(15,6))\n\nnf = fig.add_subplot(1,2,1)\nnf.plot(range(1,ex_progeny.size+1), ex_sc.nfamilies())\nnf.set_xlabel('Number of families')\nnf.set_ylabel('Probability')\n\nfs = fig.add_subplot(1,2,2)\nfs.plot(range(1,ex_progeny.size+1), ex_sc.family_size())\nfs.set_xlabel('Family size')\nplt.show()", "Geographic positions\nIntuitively, one would expect most pollen donors to be fairly close to the mother. Since the most probable partition had fairly strong support and identified a set of candidates with posterior probabilities close to one, it is reasonable to use these individuals to get an idea of where the pollen donors are to be found.", "ix =[i for i in range(len(gps_lab)) if gps_lab[i] in adults.names[mx]]\ngps_cands = gps_pos[ix]\ngps_ex = gps_pos[gps_lab == \"L1872\"].squeeze()", "The map below shows the spatial positions of all individuals in the sample in green. Overlaid are the mother in red, and top candidates in blue. 
The likely candidates are indeed found close to the mother along the lower (southern-most) road, with two individuals on the upper (northern) road. This gives us no cause to doubt the validity of the paternity results.", "second = np.sort(postpat, 1)[:, 1]\nsx = np.array([np.where(second[i] == postpat[i])[0][0] for i in range(ex_progeny.size)])\ngps_sec = gps_pos[np.unique(sx)]\n\nfig = plt.figure(figsize=(16.9/2.54,6.75/2.54))\n#plt.figure(figsize=(12.5,5)\n\nplt.xlabel('East-West position (m)')\nplt.ylabel('North-South position (m)')\nplt.xlim(-2500,2000)\nplt.scatter(gps_pos[:,0], gps_pos[:,1], s=5, color='green', alpha=0.5)\nplt.scatter(gps_sec[:,0], gps_sec[:,1], color='gold')\nplt.scatter(gps_cands[:,0],gps_cands[:,1], color='blue')\nplt.scatter(gps_ex[0], gps_ex[1], color='red', s=40, edgecolors='black')\nplt.show()", "We can use these data to get a very rough dispersal kernel. Most pollen comes from within 50m of the maternal plant.", "dists = np.sqrt((gps_ex[0] - gps_cands[:,0])**2 + (gps_ex[1] - gps_cands[:,1])**2)\nprint \"Mean dispersal =\",mean(dists)\n\nplt.hist(dists, bins=np.arange(0,650,50))\nplt.show()", "In contrast, the second-most-likely candidates are on average more than 800m from the maternal plant.", "dists2 = np.sqrt((gps_ex[0] - gps_sec[:,0])**2 + (gps_ex[1] - gps_sec[:,1])**2)\nprint \"Mean dispersal =\",mean(dists2)\n", "Missing data in the candidates\nWhen candidate fathers have substantial missing data, they can have apparently high likelihoods of paternity just because there are fewer opportunities to show an incompatibility.\nI previously found that candidates with ~10% missing data tended to be assigned as the true father alarmingly frequently. Here, we have already excluded candidates with more than 5% missing data. 
As a sanity check, to ensure missing data is not a substantial problem here, we can check how often the most likely candidate has 0, 1, 2, 3 or 4 loci with no genotype information.\nThe histograms below show these distributions based on probabilities before and after sibship clustering. Top candidates have either one or zero missing data points. This is probably really a random draw from the pool of candidates, because these are the most common categories.", "md = ex_sc.posterior_paternity_matrix()\npx = [np.where(md[i] == md[i].max())[0][0] for i in range(ex_progeny.size)]\nlx = [np.where(ex_patlik.lik_array[i] == ex_patlik.lik_array[i][i].max())[0][0] for i in range(ex_progeny.size)]\n\nfig= plt.figure(figsize=(16.9/2.54, 6/2.54))\nfig.subplots_adjust(wspace=0.3)\n\n\nlh= fig.add_subplot(1,2,1)\nlh.bar(range(4), np.unique(adults.missing_data(1), return_counts=True)[1])\nlh.set_title('All adults')\nlh.set_xlabel('Number failed loci')\nlh.set_ylabel('Number of individuals')\nlh.set_xticks(np.arange(4)+0.4)\nlh.set_xticklabels(np.arange(4))\n\nph= fig.add_subplot(1,2,2)\nph.bar(np.arange(2), np.unique(adults.missing_data(1)[px], return_counts=True)[1])\nph.set_title('Top candidates')\nph.set_xlabel('Number failed loci')\nph.set_xticks(np.arange(4)+0.4)\nph.set_xticklabels(np.arange(4))\nplt.show()", "Relatedness\nAnother explanation for splitting a full sibship into multiple smaller sibships would be that there is relatedness among candidates, and a relative of the true sire can sometimes have a higher likelihood of paternity than the true sire just by chance. 
If this is the case, we would expect the most likely candidates to be more related to one another than we would expect if they were a random draw from the population.\nFirst, calculate a matrix of pairwise relatedness between all individuals in the sample of candidates:", "matches = [(adults.geno[:,:,i][np.newaxis] == adults.geno[:,:,j][:, np.newaxis]).mean(2) for i in [0,1] for j in [0,1]]\nmatches = np.array(matches)\nmatches = matches.mean(0)", "These histograms show pairwise relatedness for all pairs of candidates in blue, and the most probable father of each individual after clustering in orange (I have excluded duplicate candidates). There is no reason to suppose the top candidates are anything other than a random draw.", "ux = np.unique(px)\ntop_r = np.array([matches[ux[i],ux] for i in range(len(ux))])\n\n# weight bars to ensure the histograms sum to one.\nn_adults = matches.shape[0]  # avoids hard-coding the number of candidates\nw1 = np.ones_like(matches[np.triu_indices(n_adults, 1)]) / float(len(matches[np.triu_indices(n_adults, 1)]))\nw2 = np.ones_like(top_r[np.triu_indices(len(ux), 1)]) / float(len(top_r[np.triu_indices(len(ux), 1)]))\n\nfig = plt.figure(figsize=(9/2.54, 8/2.54))\nplt.hist(matches[np.triu_indices(n_adults, 1)], histtype='step', bins=np.arange(0.3,0.7, 0.025), weights=w1)\nplt.hist(top_r[np.triu_indices(len(ux), 1)], histtype='step', bins=np.arange(0.3,0.7, 0.025), weights=w2)\nplt.xlabel('Relatedness')\nplt.ylabel('Density')\nplt.show()", "Multiple families\nThe code becomes more challenging because we will need to perform operations on every element in this list. Luckily this is straightforward in Python if we use list comprehensions. For example, we can pull out and plot the number of offspring in each half-sibling array:", "plt.hist([prlist[i].size for i in range(len(prlist))], bins=np.arange(0,25))\nplt.show()", "All of these families are samples from much larger half-sib arrays, so comparing full-sibship sizes and numbers is even more difficult if there are different numbers of offspring. 
For this reason, we can pick out only those families with 17 or more offspring.\nThis cell splits the genotype data into maternal families of 17 or more offspring, then picks 17 offspring at random (there is no meaning in the order of individuals in the genotypeArray object, so taking the first 17 is tantamount to choosing at random). This leaves us with 18 families of 17 offspring.", "# split into maternal families\nmlist = mothers.split(progeny.mothers)\nprlist = progeny.split(progeny.mothers)\n# keep families with 17 or more offspring\nprog17 = [prlist[i] for i in range(len(prlist)) if prlist[i].size >= 17]\nmlist = [mlist[i] for i in range(len(prlist)) if prlist[i].size >= 17]\n# take the first 17 offspring\nprog17 = [x.subset(range(17)) for x in prog17]\nmlist = [x.subset(range(17)) for x in mlist]", "Calculate likelihoods of paternity for each family. This took 3 seconds on a 2010 MacBook Pro; your mileage may vary. In order to do so, we also need population allele frequencies.", "allele_freqs = adults.allele_freqs() # population allele frequencies\n\nfrom time import time\nt0 = time()\npatlik = paternity_array(prog17, mlist, adults, mu=0.0013)\nprint(\"Completed in {} seconds.\".format(time() - t0))", "The next step is clustering each family into full sibships.", "t1 = time()\nsc = sibship_clustering(patlik)\nprint(\"Completed in {} seconds.\".format(time() - t1))", "Calculate probability distributions for family size and number of families for each array.", "nfamilies = [x.nfamilies() for x in sc]\nnfamilies = np.array(nfamilies)\n\nfamsize = [x.family_size() for x in sc]\nfamsize = np.array(famsize)", "Plots below show the probability distributions for the number and sizes of families. Grey bars show 95% credible intervals (see CDF plots below). Samples of 17 offspring are divided into between four and sixteen full-sibling families consisting of between one and eight individuals. 
Most families seem to be small, with a smaller number of large families.", "fig = plt.figure(figsize=(16.9/2.54, 6/2.54))\nfig.subplots_adjust(wspace=0.3, hspace=0.1)\n\nnf = fig.add_subplot(1,2,1)\nnf.set_ylabel('Probability density')\nnf.set_xlabel('Number of families')\nnf.set_ylim(-0.005,0.2)\nnf.set_xlim(0,18)\nnf.bar(np.arange(0.5,17.5), nfamilies.sum(0)/nfamilies.sum(), color='1', width=1)\nnf.bar(np.arange(3.5,16.5), (nfamilies.sum(0)/nfamilies.sum())[3:16], color='0.75', width=1)\n\nfs = fig.add_subplot(1,2,2)\nfs.set_xlabel('Family size')\n#fs.set_ylabel('Probability density')\nfs.set_ylim(-0.05,0.8)\nfs.set_xlim(0,17)\nfs.bar(np.arange(0.5,17.5), famsize.sum(0)/famsize.sum(), color='1', width=1)\nfs.bar(np.arange(0.5,6.5), (famsize.sum(0)/famsize.sum())[:6], color='0.75', width=1)\n\nplt.show()", "Cumulative probability density plots demonstrate the credible intervals for family size and number.", "fig = plt.figure(figsize=(15, 6))\nfig.subplots_adjust(wspace=0.3, hspace=0.1)\n\nnf = fig.add_subplot(1,2,1)\nnf.set_ylabel('Cumulative density')\nnf.set_xlabel('Number of families')\nnf.set_xlim(0,20)\nnf.set_ylim(0,1.05)\nnf.plot(np.arange(1,18), np.cumsum(nfamilies.sum(0)/nfamilies.sum()))\nnf.axhline(0.975, 0.05, 0.95, linestyle='dashed')\nnf.axhline(0.025, 0.05, 0.95, linestyle='dashed')\nnf.grid()\n\nfs = fig.add_subplot(1,2,2)\nfs.set_ylabel('Cumulative density')\nfs.set_xlabel('Family size')\nfs.set_xlim(0,21)\nfs.set_ylim(0,1.05)\nfs.plot(np.arange(1,18), np.cumsum(famsize.sum(0)/famsize.sum()))\nfs.axhline(0.975, 0.05, 0.95, linestyle='dashed')\nfs.axhline(0.025, 0.05, 0.95, linestyle='dashed')\nfs.grid()" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
probml/pyprobml
notebooks/misc/CmdStanPy_Example_Notebook.ipynb
mit
[ "<a href=\"https://colab.research.google.com/github/probml/probml-notebooks/blob/main/notebooks/CmdStanPy_Example_Notebook.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>\nRunning the Stan (CmdStanPy) MCMC library in Colab: Example\nTaken from\nhttps://mc-stan.org/users/documentation/case-studies/jupyter_colab_notebooks_2020.html\nThis notebook demonstrates how to install the CmdStanPy toolchain on a Google Colab instance and verify the installation by running the Stan NUTS-HMC sampler on the example model and data which are included with CmdStan. Each code block in this notebook updates the Python environment; therefore, you must step through this notebook cell by cell.", "# Load packages used in this notebook\nimport os\nimport json\nimport shutil\nimport urllib.request\nimport pandas as pd", "Step 1: install CmdStanPy", "# Install package CmdStanPy\n!pip install --upgrade cmdstanpy", "Step 2: download and untar the CmdStan binary for Google Colab instances.", "# Install pre-built CmdStan binary\n# (faster than compiling from source via the install_cmdstan() function)\ntgz_file = \"colab-cmdstan-2.23.0.tar.gz\"\ntgz_url = \"https://github.com/stan-dev/cmdstan/releases/download/v2.23.0/colab-cmdstan-2.23.0.tar.gz\"\nif not os.path.exists(tgz_file):\n    urllib.request.urlretrieve(tgz_url, tgz_file)\n    shutil.unpack_archive(tgz_file)\n\n!ls", "Step 3: Register the CmdStan install location.", "# Specify CmdStan location via environment variable\nos.environ[\"CMDSTAN\"] = \"./cmdstan-2.23.0\"\n# Check CmdStan path\nfrom cmdstanpy import CmdStanModel, cmdstan_path\n\ncmdstan_path()", "The CmdStan installation includes a simple example program bernoulli.stan and test data bernoulli.data.json. 
These are in the CmdStan installation directory examples/bernoulli.\nThe program bernoulli.stan takes a vector y of length N containing binary outcomes and uses a Bernoulli distribution to estimate theta, the chance of success.", "bernoulli_stan = os.path.join(cmdstan_path(), \"examples\", \"bernoulli\", \"bernoulli.stan\")\nwith open(bernoulli_stan, \"r\") as fd:\n    print(\"\\n\".join(fd.read().splitlines()))", "The data file bernoulli.data.json contains 10 observations, split between 2 successes (1) and 8 failures (0).", "bernoulli_data = os.path.join(cmdstan_path(), \"examples\", \"bernoulli\", \"bernoulli.data.json\")\nwith open(bernoulli_data, \"r\") as fd:\n    print(\"\\n\".join(fd.read().splitlines()))", "The following code tests that the CmdStanPy toolchain is properly installed by compiling the example model, fitting it to the data, and obtaining a summary of estimates of the posterior distribution of all parameters and quantities of interest.", "# Run CmdStanPy Hello, World! example\nfrom cmdstanpy import cmdstan_path, CmdStanModel\n\n# Compile example model bernoulli.stan\nbernoulli_model = CmdStanModel(stan_file=bernoulli_stan)\n\n# Condition on example data bernoulli.data.json\nbern_fit = bernoulli_model.sample(data=bernoulli_data, seed=123)\n\n# Print a summary of the posterior sample\nbern_fit.summary()" ]
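As an aside, bernoulli.stan places a uniform Beta(1, 1) prior on theta, so for the 2-successes-in-10 example data the posterior is available in closed form. A short sketch (plain Python, no CmdStanPy calls) of what the sampler's estimate of theta should be close to:

```python
# Conjugate check: a Beta(1, 1) prior combined with a Bernoulli
# likelihood for 2 successes and 8 failures gives a Beta(3, 9)
# posterior for theta in closed form.
successes, failures = 2, 8
a_post = 1 + successes   # posterior alpha
b_post = 1 + failures    # posterior beta
posterior_mean = a_post / (a_post + b_post)
print(posterior_mean)  # 0.25 -- the NUTS-HMC estimate of theta should be near this
```

If the summary above reports a mean for theta far from 0.25, something is wrong with the toolchain rather than the model.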
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
atlury/deep-opencl
DL0110EN/5.1.2.dropoutAssignment.ipynb
lgpl-3.0
[ "<div class=\"alert alert-block alert-info\" style=\"margin-top: 20px\">\n <a href=\"http://cocl.us/pytorch_link_top\"><img src = \"http://cocl.us/Pytorch_top\" width = 950, align = \"center\"></a>\n\n<img src = \"https://ibm.box.com/shared/static/ugcqz6ohbvff804xp84y4kqnvvk3bq1g.png\" width = 200, align = \"center\">\n\n<h1 align=center><font size = 5>Using Dropout for Classification Assignment </font></h1> \n\n# Table of Contents\nIn this lab, you will see how adding dropout to your model will decrease overfitting by using <code>nn.Sequential</code> and Cross Entropy Loss.\n\n<div class=\"alert alert-block alert-info\" style=\"margin-top: 20px\">\n<li><a href=\"#ref0\">Get Some Data</a></li>\n<li><a href=\"#ref1\">Create the Model and Cost Function the PyTorch way</a></li>\n<li><a href=\"#ref2\">Batch Gradient Descent</a></li>\n<br>\n<p></p>\nEstimated Time Needed: <strong>20 min</strong>\n</div>\n\n<hr>\n\nImport all the libraries that you need for the lab:", "import torch\nimport matplotlib.pyplot as plt\nimport torch.nn as nn\nimport torch.nn.functional as F\nimport numpy as np\nfrom matplotlib.colors import ListedColormap", "Use this function only for plotting:", "def plot_decision_regions_3class(data_set,model=None):\n\n    cmap_light = ListedColormap(['#FFAAAA', '#AAFFAA','#00AAFF'])\n    cmap_bold = ListedColormap(['#FF0000', '#00FF00','#00AAFF'])\n    X=data_set.x.numpy()\n    y=data_set.y.numpy()\n    h = .02\n    x_min, x_max = X[:, 0].min()-0.1 , X[:, 0].max()+0.1 \n    y_min, y_max = X[:, 1].min()-0.1 , X[:, 1].max() +0.1 \n    xx, yy = np.meshgrid(np.arange(x_min, x_max, h),np.arange(y_min, y_max, h))\n    newdata=np.c_[xx.ravel(), yy.ravel()]\n\n    Z=data_set.fun(newdata).flatten()\n    f=np.zeros(Z.shape)\n    f[Z>0]=1\n    f=f.reshape(xx.shape)\n    if model is not None:\n        model.eval()\n        XX=torch.Tensor(newdata)\n        _,yhat=torch.max(model(XX),1)\n        
yhat=yhat.numpy().reshape(xx.shape)\n        plt.pcolormesh(xx, yy, yhat, cmap=cmap_light)\n        plt.contour(xx, yy, f, cmap=plt.cm.Paired)\n    else:\n        plt.contour(xx, yy, f, cmap=plt.cm.Paired)\n        plt.pcolormesh(xx, yy, f, cmap=cmap_light)\n\n    plt.title(\"decision region vs True decision boundary\")\n", "Use this function to calculate accuracy:", "def accuracy(model,data_set):\n    _,yhat=torch.max(model(data_set.x),1)\n    return (yhat==data_set.y).numpy().mean()", "<a id=\"ref0\"></a>\n<h2 align=center>Get Some Data </h2>\n\nCreate a nonlinearly separable dataset:", "from torch.utils.data import Dataset, DataLoader\n\nclass Data(Dataset):\n    def __init__(self,N_SAMPLES = 1000,noise_std=0.1,train=True):\n\n        a=np.matrix([-1,1,2,1,1,-3,1]).T\n\n        self.x = np.matrix(np.random.rand(N_SAMPLES,2))\n\n        self.f=np.array(a[0]+(self.x)*a[1:3]+np.multiply(self.x[:,0], self.x[:,1])*a[4]+np.multiply(self.x, self.x)*a[5:7]).flatten()\n        self.a=a\n\n        self.y=np.zeros(N_SAMPLES)\n        self.y[self.f> 0]=1\n        self.y=torch.from_numpy(self.y).type(torch.LongTensor)\n        self.x=torch.from_numpy(self.x).type(torch.FloatTensor)\n        self.x = self.x+noise_std*torch.randn(self.x.size())\n        self.f=torch.from_numpy(self.f)\n        self.a=a\n        self.len = N_SAMPLES  # needed by __len__ below\n        if train==True:\n            torch.manual_seed(1)\n\n            self.x = self.x+noise_std*torch.randn(self.x.size())\n            torch.manual_seed(0)\n\n    def __getitem__(self,index): \n        return self.x[index],self.y[index]\n    def __len__(self):\n        return self.len\n    def plot(self):\n        X=self.x.numpy()\n        y=self.y.numpy()\n        h = .02\n        x_min, x_max = X[:, 0].min() , X[:, 0].max()\n        y_min, y_max = X[:, 1].min(), X[:, 1].max() \n        xx, yy = np.meshgrid(np.arange(x_min, x_max, h),np.arange(y_min, y_max, h))\n        Z=self.fun(np.c_[xx.ravel(), yy.ravel()]).flatten()\n        f=np.zeros(Z.shape)\n        f[Z>0]=1\n        f=f.reshape(xx.shape)\n        plt.title('True decision boundary and sample points with noise ')\n\n        plt.plot(self.x[self.y==0,0].numpy(),self.x[self.y==0,1].numpy(),'bo',label='y=0' ) \n        
plt.plot(self.x[self.y==1,0].numpy(), self.x[self.y==1,1].numpy(),'ro',label='y=1' )\n        plt.contour(xx, yy, f, cmap=plt.cm.Paired)\n        plt.xlim(0,1)\n        plt.ylim(0,1)\n        plt.legend()\n    def fun(self,x):\n\n        x=np.matrix(x)\n\n        out=np.array(self.a[0]+(x)*self.a[1:3]+np.multiply(x[:,0], x[:,1])*self.a[4]+np.multiply(x, x)*self.a[5:7])\n        out=np.array(out)\n        return out\n        ", "Create a dataset object:", "data_set=Data(noise_std=0.2)\ndata_set.plot()\n", "Get some validation data:", "torch.manual_seed(0) \nvalidation_set=Data(train=False)", "<a id=\"ref1\"></a>\n<h2 align=center>Create the Model, Optimizer, and Total Loss Function (cost)</h2>\n\nCreate a three-layer neural network <code>model</code> with a ReLU() activation function for classification. All the appropriate layers should be 300 units.\nDouble-click here for the solution.\n<!-- Your answer is below:\n n_hidden = 300\n model= torch.nn.Sequential(\n torch.nn.Linear(2, n_hidden), \n torch.nn.ReLU(),\n torch.nn.Linear(n_hidden, n_hidden),\n torch.nn.ReLU(),\n torch.nn.Linear(n_hidden, 2)\n)\n -->\n\nCreate a three-layer neural network <code>model_drop</code> with a ReLU() activation function for classification. All the appropriate layers should be 300 units. Apply dropout to all but the last layer and set the dropout probability to 50%.\nDouble-click here for the solution.\n<!-- Your answer is below:\nn_hidden = 300\nmodel_drop= torch.nn.Sequential(\n torch.nn.Linear(2, n_hidden),\n torch.nn.Dropout(0.5), \n torch.nn.ReLU(),\n torch.nn.Linear(n_hidden, n_hidden),\n torch.nn.Dropout(0.5), \n torch.nn.ReLU(),\n torch.nn.Linear(n_hidden, 2)\n)\n-->\n\n<a id=\"ref2\"></a>\n<h2 align=center>Train the Model via Mini-Batch Gradient Descent </h2>\n\nSet the model that uses dropout to training mode; this is the default mode, but it's a good practice.", "model_drop.train()", "Train the model by using the Adam optimizer. See the unit on other optimizers. 
Use the Cross Entropy Loss:", "optimizer_ofit = torch.optim.Adam(model.parameters(), lr=0.01)\noptimizer_drop = torch.optim.Adam(model_drop.parameters(), lr=0.01)\ncriterion = torch.nn.CrossEntropyLoss()", "Create the appropriate loss function.\n<!-- Your answer is below:\ncriterion = torch.nn.CrossEntropyLoss()\n-->\n\nInitialize a dictionary that stores the training and validation loss for each model:", "LOSS={}\nLOSS['training data no dropout']=[]\nLOSS['validation data no dropout']=[]\nLOSS['training data dropout']=[]\nLOSS['validation data dropout']=[]", "Run 500 iterations of batch gradient descent:", "epochs=500\n\nfor epoch in range(epochs):\n    #make a prediction for both models \n    yhat = model(data_set.x)\n    yhat_drop = model_drop(data_set.x)\n    #calculate the loss for both models \n    loss = criterion(yhat, data_set.y)\n    loss_drop = criterion(yhat_drop, data_set.y)\n\n    #store the loss for both the training and validation data for both models \n    LOSS['training data no dropout'].append(loss.item())\n    LOSS['validation data no dropout'].append(criterion(model(validation_set.x), validation_set.y).item())\n    LOSS['training data dropout'].append(loss_drop.item())\n    model_drop.eval()\n    LOSS['validation data dropout'].append(criterion(model_drop(validation_set.x), validation_set.y).item())\n    model_drop.train()\n\n    #clear gradients \n    optimizer_ofit.zero_grad()\n    optimizer_drop.zero_grad()\n    #Backward pass: compute gradient of the loss with respect to all the learnable parameters\n    loss.backward()\n    loss_drop.backward()\n    #the step function on an Optimizer makes an update to its parameters\n    optimizer_ofit.step()\n    optimizer_drop.step()", "Set the model with dropout to evaluation mode:", "model_drop.eval()", "Test the accuracy of the model without dropout on the validation data.\nDouble-click here for the solution.\n<!-- Your answer is below:\n_,yhat=torch.max(model(validation_set.x),1)\n(yhat==validation_set.y).numpy().mean()\n-->\n\nTest the accuracy of the model 
with dropout on the validation data.\nDouble-click here for the solution.\n<!-- Your answer is below:\n_,yhat=torch.max(model_drop(validation_set.x),1)\n(yhat==validation_set.y).numpy().mean()\n-->\n\nYou see that the model with dropout performs better on the validation data. \nPlot the decision boundary and the prediction of the networks in different colors:\ntrue function", "plot_decision_regions_3class(data_set)", "model without dropout", "plot_decision_regions_3class(data_set,model)", "model with dropout", "plot_decision_regions_3class(data_set,model_drop)", "You can see that the model using dropout does better at tracking the function that generated the data. \nPlot out the loss for the training and validation data on both models:", "plt.figure(figsize=(6.1, 10))\nfor key, value in LOSS.items():\n    plt.plot(np.log(np.array(value)), label=key)\nplt.legend()\nplt.xlabel(\"iterations\")\nplt.ylabel(\"Log of cost or total loss\")", "You see that the model without dropout performs better on the training data but worse on the validation data, which suggests overfitting. The model using dropout, by contrast, performs better on the validation data but worse on the training data. \n<div class=\"alert alert-block alert-info\" style=\"margin-top: 20px\">\n <a href=\"http://cocl.us/pytorch_link_bottom\"><img src = \"http://cocl.us/pytorch_image_bottom\" width = 950, align = \"center\"></a>\n\n### About the Authors: \n\n [Joseph Santarcangelo]( https://www.linkedin.com/in/joseph-s-50398b136/) has a PhD in Electrical Engineering. His research focused on using machine learning, signal processing, and computer vision to determine how videos impact human cognition. 
\n\nOther contributors: [Michelle Carey]( https://www.linkedin.com/in/michelleccarey/), [Morvan Youtube channel]( https://www.youtube.com/channel/UCdyjiB5H8Pu7aDTNVXTTpcg), [Mavis Zhou]( https://www.linkedin.com/in/jiahui-mavis-zhou-a4537814a/)\n\n<hr>\n\nCopyright &copy; 2018 [cognitiveclass.ai](cognitiveclass.ai?utm_source=bducopyrightlink&utm_medium=dswb&utm_campaign=bdu). This notebook and its source code are released under the terms of the [MIT License](https://bigdatauniversity.com/mit-license/)." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
DistrictDataLabs/yellowbrick
examples/iguk1987/Yellowbrick_Tour.ipynb
apache-2.0
[ "Unlocking the Black Box: How to Visualize the Data Science Project Pipeline with the Yellowbrick Library\nNo matter whether you are a novice data scientist or a well-seasoned professional who has worked in the field for a long time, you have most likely faced the challenge of interpreting results generated at some stage of the data science pipeline, be it data ingestion or wrangling, feature selection or model evaluation. This issue becomes even more prominent when the need arises to present interim findings to a group of stakeholders, clients, etc. How do you deal in that case with the long arrays of numbers, scientific notation and formulas which tell the story of your data set? That's when a visualization library like Yellowbrick becomes an essential tool in the arsenal of any data scientist and helps to undertake that endeavour by providing interpretable and comprehensive visualization means for any stage of a project pipeline.\nIntroduction\nIn this post we will explain how to integrate a visualization step into each stage of your project without the need to create customized and time-consuming charts, while still drawing the necessary insights from the data you are working with. Because, let's agree on that: unlike computers, the human eye perceives a graphical representation of information far better than it does bits and digits. The Yellowbrick machine learning visualization library serves just that purpose - to \"create publication-ready figures and interactive data explorations while still allowing developers fine-grain control of figures. 
For users, Yellowbrick can help evaluate the performance, stability, and predictive value of machine learning models and assist in diagnosing problems throughout the machine learning workflow\" ( http://www.scikit-yb.org/en/latest/about.html ).\nFor the purpose of this exercise we will be using a dataset from the UCI Machine Learning Repository on Absenteeism at Work ( https://archive.ics.uci.edu/ml/machine-learning-databases/00445/ ). This data set contains a mix of continuous, binary and hierarchical features, along with a continuous target representing the number of work hours an employee has been absent from work. Such a variety in the data makes for an interesting wrangling, feature selection and model evaluation task, the results of which we will make sure to visualize along the way.\nTo begin, we will need to pip install and import the Yellowbrick Python library. To do that, simply run the following command from your command line: \n$ pip install yellowbrick\nOnce that's done, let's import Yellowbrick along with other essential packages and libraries, and set up user preferences in the Jupyter Notebook.", "import numpy as np\nimport pandas as pd\n%matplotlib inline\nfrom cycler import cycler\nimport matplotlib.style\nimport matplotlib as mpl\nmpl.style.use('seaborn-white')\nimport matplotlib.pyplot as plt\nfrom matplotlib.pyplot import figure\nfrom sklearn.cluster import KMeans\nfrom sklearn.linear_model import RidgeCV\nfrom sklearn.model_selection import KFold\nfrom sklearn.naive_bayes import MultinomialNB\nfrom sklearn.svm import LinearSVC, NuSVC, SVC\nfrom sklearn.tree import DecisionTreeRegressor\nfrom sklearn.neighbors import KNeighborsClassifier\nfrom sklearn.model_selection import StratifiedKFold\nfrom sklearn.feature_selection import SelectFromModel\nfrom sklearn.linear_model import Ridge, Lasso, ElasticNet\nfrom sklearn.linear_model import LogisticRegressionCV, LogisticRegression, SGDClassifier\nfrom sklearn.ensemble import BaggingClassifier, ExtraTreesClassifier, 
RandomForestClassifier, RandomTreesEmbedding, GradientBoostingClassifier\nimport warnings\nwarnings.filterwarnings(\"ignore\")\nfrom sklearn.pipeline import make_pipeline\nfrom sklearn.model_selection import train_test_split as tts\nfrom sklearn.metrics import roc_curve\nfrom sklearn.metrics import f1_score\nfrom sklearn.metrics import recall_score\nfrom sklearn.metrics import accuracy_score\nfrom sklearn.metrics import precision_score\nfrom sklearn.metrics import confusion_matrix\nfrom sklearn.metrics import classification_report\nfrom yellowbrick.features import Rank1D\nfrom yellowbrick.features import Rank2D\nfrom yellowbrick.classifier import ClassBalance\nfrom yellowbrick.model_selection import LearningCurve\nfrom yellowbrick.model_selection import ValidationCurve\nfrom yellowbrick.classifier import ClassPredictionError\nfrom yellowbrick.classifier import ClassificationReport\nfrom yellowbrick.features.importances import FeatureImportances", "Data Ingestion and Wrangling\nNow we are ready to proceed with downloading a zipped archive containing the dataset directly from the UCI Machine Learning Repository and extracting the data file. To perform this step, we will be using the urllib.request module, which helps with opening URLs (mostly HTTP).", "import urllib.request\n\nprint('Beginning file download...')\n\nurl = 'https://archive.ics.uci.edu/ml/machine-learning-databases/00445/Absenteeism_at_work_AAA.zip'\n\n# Download the archive to the working directory; to store it elsewhere,\n# pass a full path as the second argument,\n# e.g. '/Users/Yara/Downloads/Absenteeism_at_work_AAA.zip'\nurllib.request.urlretrieve(url, 'Absenteeism_at_work_AAA.zip')", "Unzip the archive and extract a CSV data file which we will be using. 
The zipfile module does that flawlessly.", "import zipfile\n\nfantasy_zip = zipfile.ZipFile('C:\\\\Users\\\\Yara\\\\Downloads\\\\Absenteeism_at_work_AAA.zip')\nfantasy_zip.extract('Absenteeism_at_work.csv', 'C:\\\\Users\\\\Yara\\\\Downloads')\n\nfantasy_zip.close()", "Load the data, adjusting the path to wherever you extracted the CSV file.", "dataset = pd.read_csv('C:\\\\Users\\\\Yara\\\\Downloads\\\\Absenteeism_at_work.csv', delimiter=';')", "Let's take a look at a couple of randomly selected rows from the loaded data set.", "dataset.sample(10)\n\ndataset.ID.count()", "As we can see, the selected dataset contains 740 instances, each representing an employed individual. The features provided in the dataset are those considered to be related to the number of hours an employee was absent from work (the target). For the purpose of this exercise, we will subjectively group all instances into 3 categories, thus converting the continuous target into a categorical one. To identify appropriate bins for the target, let's look at the min, max and mean values.", "# Getting basic statistical information for the target\nprint(dataset.loc[:, 'Absenteeism time in hours'].mean())\nprint(dataset.loc[:, 'Absenteeism time in hours'].min())\nprint(dataset.loc[:, 'Absenteeism time in hours'].max())", "Since approximately 7 hours of absence is the average value across our dataset, it makes sense to group records in the following manner:\n1) Low rate of absence (Low), if 'Absenteeism time in hours' value is < 6;\n2) Medium rate of absence (Medium), if 'Absenteeism time in hours' value is between 6 and 30;\n3) High rate of absence (High), if 'Absenteeism time in hours' value is > 30.\nUpon grouping, we will further explore the data and select relevant features from the dataset in order to predict an absentee category for the instances in the test portion of the data.", "dataset['Absenteeism time in hours'] = np.where(dataset['Absenteeism time in hours'] < 6, 1, 
dataset['Absenteeism time in hours'])\ndataset['Absenteeism time in hours'] = np.where(dataset['Absenteeism time in hours'].between(6, 30), 2, dataset['Absenteeism time in hours'])\ndataset['Absenteeism time in hours'] = np.where(dataset['Absenteeism time in hours'] > 30, 3, dataset['Absenteeism time in hours'])\n\n# Let's look at the data now!\ndataset.head()", "Once the target is taken care of, it is time to look at the features. Those storing unique identifiers and/or data which might 'leak' information to the model should be dropped from the data set. For instance, the 'Reason for absence' feature stores information 'from the future', since it will not be available in a real-world business scenario when running the model on a new set of data. Therefore, it is highly correlated with the target.", "dataset = dataset.drop(['ID', 'Reason for absence'], axis=1)\n\ndataset.columns", "We are now left with the set of features and a target to use in a machine learning model of our choice. 
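As an aside, the chained np.where calls used for the binning above can also be expressed with pd.cut; below is a small self-contained sketch on hypothetical hour values (the bin edges mirror the <6 / 6-30 / >30 rule, they are not taken from the dataset itself):

```python
import pandas as pd

# Hypothetical absence hours; the bins reproduce the grouping above:
# <6 hours -> 1 (Low), 6-30 -> 2 (Medium), >30 -> 3 (High).
hours = pd.Series([2, 8, 45, 0, 30, 31])
categories = pd.cut(hours, bins=[-1, 5, 30, float("inf")], labels=[1, 2, 3])
print(categories.tolist())  # [1, 2, 3, 1, 2, 3]
```

pd.cut handles all three conditions in one call and guarantees the bins are mutually exclusive, which is harder to verify by eye with chained np.where statements.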
So, let's separate features from the target, and split our dataset into a matrix of features (X) and an array of target values (y).", "features = ['Month of absence', 'Day of the week', 'Seasons',\n 'Transportation expense', 'Distance from Residence to Work',\n 'Service time', 'Age', 'Work load Average/day ', 'Hit target',\n 'Disciplinary failure', 'Education', 'Son', 'Social drinker',\n 'Social smoker', 'Pet', 'Weight', 'Height', 'Body mass index']\ntarget = ['Absenteeism time in hours']\n\nX = dataset.drop(['Absenteeism time in hours'], axis=1)\ny = dataset.loc[:, 'Absenteeism time in hours']\n\n# Setting up some visual preferences prior to visualizing data\nclass color:\n PURPLE = '\\033[95m'\n CYAN = '\\033[96m'\n DARKCYAN = '\\033[36m'\n BLUE = '\\033[94m'\n GREEN = '\\033[92m'\n YELLOW = '\\033[93m'\n RED = '\\033[91m'\n BOLD = '\\033[1m'\n UNDERLINE = '\\033[4m'\n END = '\\033[0m'", "Exploratory Analysis and Feature Selection\nWhenever one deals with a categorical target, it is important to remember to test the data set for class imbalance issue. Machine learning models struggle with performing well on imbalanced data where one class is overrepresented, while the other one is underrepresented. While such data sets are representative of the real life, e.g. 
no company will have a majority or even half of its employees missing work on a massive scale, they need to be adjusted for machine learning purposes to improve the algorithms' ability to pick up the patterns present in the data.\nTo check for potential class imbalance in our data, we will use the Class Balance visualizer from Yellowbrick.", "# Calculating population breakdown by target category\nTarget = y.value_counts()\nprint(color.BOLD, 'Low:', color.END, Target[1])\nprint(color.BOLD, 'Medium:', color.END, Target[2])\nprint(color.BOLD, 'High:', color.END, Target[3])\n\n# Creating class labels\nclasses = [\"Low\", \"Medium\", \"High\"]\n\n# Instantiate the classification model and visualizer\nmpl.rcParams['axes.prop_cycle'] = cycler('color', ['red', 'limegreen', 'yellow'])\nforest = RandomForestClassifier()\nfig, ax = plt.subplots(figsize=(10, 7))\nvisualizer = ClassBalance(forest, classes=classes, ax=ax)\nax.spines['right'].set_visible(False)\nax.spines['top'].set_visible(False)\nax.grid(axis='x')\n\nvisualizer.fit(X, y) # Fit the training data to the visualizer\nvisualizer.score(X, y) # Evaluate the model on the test data\ng = visualizer.show()", "
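One common remedy worth noting here, sketched on toy labels rather than the actual dataset: the minority classes can be randomly oversampled before fitting, for example with sklearn.utils.resample, and the resampled row indices then used to duplicate the corresponding rows of X. The class counts below are illustrative only.

```python
import numpy as np
from sklearn.utils import resample

# Toy labels mimicking a heavy skew toward class 1.
y_toy = np.array([1] * 90 + [2] * 8 + [3] * 2)
minority_idx = np.where(y_toy == 3)[0]

# Draw 30 bootstrap copies of the minority-class row indices.
boosted_idx = resample(minority_idx, replace=True, n_samples=30, random_state=0)
print(len(boosted_idx))  # 30
```

Oversampling should be applied only to the training split, never to the validation or test data, or the evaluation will be optimistically biased.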
Yellowbrick library provides a number of convenient vizualizers to perform feature analysis, and we will use a couple of them for demonstration purposes, as well as to make sure that consistent results are returned when different methods are applied.\nRank 1D visualizer utilizes Shapiro-Wilk algorithm that takes into account only a single feature at a time and assesses the normality of the distribution of instances with respect to the feature. Let's see how it works!", "# Creating 1D visualizer with the Sharpiro feature ranking algorithm\nfig, ax = plt.subplots(figsize=(10, 7))\nvisualizer = Rank1D(features=features, ax=ax, algorithm='shapiro')\nax.spines['right'].set_visible(False)\nax.spines['top'].set_visible(False)\nax.spines['left'].set_visible(False)\nax.spines['bottom'].set_visible(False)\n\nvisualizer.fit(X, y)\nvisualizer.transform(X)\nvisualizer.show()", "Rank 2D Visualizer, in its turn, utilizes a ranking algorithm that takes into account pairs of features at a time. It provides an option for a user to select ranking algorithm of their choice. 
We are going to experiment with covariance and Pearson, and compare the results.", "# Instantiate visualizer using covariance ranking algorithm\nfigsize=(10, 7)\nfig, ax = plt.subplots(figsize=figsize)\nvisualizer = Rank2D(features=features, ax=ax, algorithm='covariance', colormap='summer')\nax.spines['right'].set_visible(False)\nax.spines['top'].set_visible(False)\nax.spines['left'].set_visible(False)\nax.spines['bottom'].set_visible(False)\n\nvisualizer.fit(X, y)\nvisualizer.transform(X)\nvisualizer.show()\n\n# Instantiate visualizer using Pearson ranking algorithm\nfigsize=(10, 7)\nfig, ax = plt.subplots(figsize=figsize)\nvisualizer = Rank2D(features=features, ax=ax, algorithm='pearson', colormap='winter')\nax.spines['right'].set_visible(False)\nax.spines['top'].set_visible(False)\nax.spines['left'].set_visible(False)\nax.spines['bottom'].set_visible(False)\n\nvisualizer.fit(X, y)\nvisualizer.transform(X)\nvisualizer.show()", "Visual representation of feature correlation makes it much easier to spot pairs of features that have high or low correlation coefficients. For instance, lighter colours on both plots indicate strong correlation between such pairs of features as 'Body mass index' and 'Weight'; 'Seasons' and 'Month of absence', etc.\nAnother way of estimating feature importance relative to the model is to rank features by the feature_importances_ attribute when the data is fitted to the model. The Yellowbrick Feature Importances visualizer utilizes this attribute to rank and plot features' relative importances. 
Let's look at how this approach works with Ridge, Lasso and ElasticNet models.", "# Visualizing Ridge, Lasso and ElasticNet feature selection models side by side for comparison\n\n# Ridge\n# Create a new figure\nmpl.rcParams['axes.prop_cycle'] = cycler('color', ['red'])\nfig = plt.gcf()\nfig.set_size_inches(10,10)\nax = plt.subplot(311)\nlabels = features\nviz = FeatureImportances(Ridge(alpha=0.1), ax=ax, labels=labels, relative=False)\nax.spines['right'].set_visible(False)\nax.spines['top'].set_visible(False)\nax.grid(False)\n\n# Fit and display\nviz.fit(X, y)\nviz.show()\n\n# ElasticNet\n# Create a new figure\nmpl.rcParams['axes.prop_cycle'] = cycler('color', ['salmon'])\nfig = plt.gcf()\nfig.set_size_inches(10,10)\nax = plt.subplot(312)\nlabels = features\nviz = FeatureImportances(ElasticNet(alpha=0.01), ax=ax, labels=labels, relative=False)\nax.spines['right'].set_visible(False)\nax.spines['top'].set_visible(False)\nax.grid(False)\n\n# Fit and display\nviz.fit(X, y)\nviz.show()\n\n# Lasso\n# Create a new figure\nmpl.rcParams['axes.prop_cycle'] = cycler('color', ['purple'])\nfig = plt.gcf()\nfig.set_size_inches(10,10)\nax = plt.subplot(313)\nlabels = features\nviz = FeatureImportances(Lasso(alpha=0.01), ax=ax, labels=labels, relative=False)\nax.spines['right'].set_visible(False)\nax.spines['top'].set_visible(False)\nax.grid(False)\n\n# Fit and display\nviz.fit(X, y)\nviz.show()", "Having analyzed the output of all utilized visualizations (Shapiro algorithm, Pearson Correlation Ranking, Covariance Ranking, Lasso, Ridge and ElasticNet), we can now select a set of features which have meaningful coefficient values (positive or negative). 
These are the features to be kept in the model:\n\nDisciplinary failure\nDay of the week\nSeasons\nDistance from Residence to Work\nNumber of children (Son)\nSocial drinker\nSocial smoker\nHeight\nWeight\nBMI\nPet\nMonth of absence\n\nGraphic visualization of the feature coefficients calculated in a number of different ways significantly simplifies the feature selection process, making it more obvious, as it provides an easy way to visually compare multiple values and consider only those which are statistically significant to the model.\nNow let's drop the features which didn't make it and proceed with creating models.", "# Dropping features from X based on the visual feature importance assessment\nX = X.drop(['Transportation expense', 'Age', 'Service time', 'Hit target', 'Education', 'Work load Average/day '], axis=1)
In order to support easier interpretation and problem detection, the report integrates numerical scores with a color-coded heatmap. All heatmaps are normalized, i.e. in the range from 0 to 1, to facilitate easy comparison of classification models across different classification reports.", "# Creating a function to visualize estimators\ndef visual_model_selection(X, y, estimator):\n visualizer = ClassificationReport(estimator, classes=['Low', 'Medium', 'High'], cmap='PRGn')\n visualizer.fit(X, y) \n visualizer.score(X, y)\n visualizer.show() \n\nvisual_model_selection(X, y, BaggingClassifier())\n\nvisual_model_selection(X, y, LogisticRegression(class_weight='balanced'))\n\nvisual_model_selection(X, y, KNeighborsClassifier())\n\nvisual_model_selection(X, y, RandomForestClassifier(class_weight='balanced'))\n\nvisual_model_selection(X, y, ExtraTreesClassifier(class_weight='balanced'))", "For the purposes of this exercise we will consider the F1 score when estimating models' performance and making a selection. All of the above models visualized through Yellowbrick's Classification Report Visualizer make it clear that the ensemble classifier algorithms performed the best. We need to pay special attention to the F1 score for the underrepresented classes, such as \"High\" and \"Medium\", as they contained significantly fewer instances than the \"Low\" class. 
Therefore, high F1 scores for all three classes indicate a very strong performance of the following models: Bagging Classifier, Random Forest Classifier, Extra Trees Classifier.\nWe will also use the Class Prediction Error visualizer for these models to confirm their strong performance.", "# Visualizing class prediction error for the Bagging Classifier model\nclasses = ['Low', 'Medium', 'High']\n\nmpl.rcParams['axes.prop_cycle'] = cycler('color', ['turquoise', 'cyan', 'teal', 'coral', 'blue', 'lime', 'lavender', 'lightblue', 'darkgreen', 'tan', 'salmon', 'gold', 'darkred', 'darkblue'])\n\nfig = plt.gcf()\nfig.set_size_inches(10,10)\nax = plt.subplot(311)\nvisualizer = ClassPredictionError(BaggingClassifier(), classes=classes, ax=ax)\nax.spines['right'].set_visible(False)\nax.spines['top'].set_visible(False)\nax.grid(False)\n\nvisualizer.fit(X_train, y_train)\nvisualizer.score(X_test, y_test)\ng = visualizer.show()\n\n# Visualizing class prediction error for the Random Forest Classifier model\nclasses = ['Low', 'Medium', 'High']\n\nmpl.rcParams['axes.prop_cycle'] = cycler('color', ['coral', 'tan', 'darkred'])\n\nfig = plt.gcf()\nfig.set_size_inches(10,10)\nax = plt.subplot(312)\nvisualizer = ClassPredictionError(RandomForestClassifier(class_weight='balanced'), classes=classes, ax=ax)\nax.spines['right'].set_visible(False)\nax.spines['top'].set_visible(False)\nax.grid(False)\n\nvisualizer.fit(X_train, y_train)\nvisualizer.score(X_test, y_test)\ng = visualizer.show()\n\n# Visualizing class prediction error for the Extra Trees Classifier model\nclasses = ['Low', 'Medium', 'High']\n\nmpl.rcParams['axes.prop_cycle'] = cycler('color', ['limegreen', 'yellow', 'orange'])\n\nfig = plt.gcf()\nfig.set_size_inches(10,10)\nax = plt.subplot(313)\nvisualizer = ClassPredictionError(ExtraTreesClassifier(class_weight='balanced'), classes=classes, ax=ax)\nax.spines['right'].set_visible(False)\nax.spines['top'].set_visible(False)\nax.grid(False)\n\nvisualizer.fit(X_train, 
y_train)\nvisualizer.score(X_test, y_test)\ng = visualizer.show()", "Model Optimization\nNow we can conclude that the ExtraTreesClassifier seems to perform better, as it had no instances from the \"High\" class reported under the \"Low\" class.\nHowever, decision trees become more overfitted the deeper they are, because at each level of the tree the partitions are dealing with a smaller subset of data. One way to avoid overfitting is by adjusting the depth of the tree. Yellowbrick's Validation Curve visualizer explores the relationship of the \"max_depth\" parameter to the accuracy score with 3-fold cross-validation.\nSo let's proceed with hyperparameter tuning for our selected ExtraTreesClassifier model using the Validation Curve visualizer!", "# Performing hyperparameter tuning \n# Validation Curve\nmpl.rcParams['axes.prop_cycle'] = cycler('color', ['purple', 'darkblue'])\n\nfig = plt.gcf()\nfig.set_size_inches(10,10)\nax = plt.subplot(411)\nviz = ValidationCurve(ExtraTreesClassifier(class_weight='balanced'), ax=ax, param_name=\"max_depth\", param_range=np.arange(1, 11), cv=3, scoring=\"accuracy\")\n\n# Fit and show the visualizer\nviz.fit(X, y)\nviz.show()", "We can observe on the above chart that even though the training score keeps rising continuously, the cross-validation score drops at max_depth=7. Therefore, we will choose that parameter for our selected model to optimize its performance.", "visual_model_selection(X, y, ExtraTreesClassifier(class_weight='balanced', max_depth=7))", "Conclusions\nAs we demonstrated in this article, visualization techniques prove to be a useful tool in the machine learning toolkit, and Yellowbrick provides a wide selection of visualizers to meet the needs at every stage of the data science project pipeline. 
Ranging from feature analysis and selection to model selection and optimization, Yellowbrick visualizers make it easy to decide which features to keep in the model, which model performs best, and how to tune a model's hyperparameters to achieve optimal performance. Moreover, visualizing algorithmic output also makes it easier to present insights to audiences and stakeholders, and contributes to the interpretability of machine learning results." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
KECB/learn
BAMM.101x/pandas.ipynb
mit
[ "<h1>Pandas</h1>", "#installing pandas libraries\n!pip install pandas-datareader\n!pip install --upgrade html5lib==1.0b8\n\n#There is a bug in the latest version of html5lib so install an earlier version\n#Restart kernel after installing html5lib", "<h2>Imports</h2>", "import pandas as pd #pandas library\nfrom pandas_datareader import data #data readers (google, html, etc.)\n#The following line ensures that graphs are rendered in the notebook\n%matplotlib inline \nimport numpy as np\nimport matplotlib.pyplot as plt #Plotting library\nimport datetime as dt #datetime for timeseries support", "<h2>The structure of a dataframe</h2>", "pd.DataFrame([[1,2,3],[1,2,3]],columns=['A','B','C'])", "<h3>Accessing columns and rows</h3>", "df = pd.DataFrame([['r1','00','01','02'],['r2','10','11','12'],['r3','20','21','22']],columns=['row_label','A','B','C'])\nprint(id(df))\ndf.set_index('row_label',inplace=True)\nprint(id(df))\ndf\n\ndata = {'nationality': ['UK', 'China', 'US', 'UK', 'Japan', 'China', 'UK', 'UK', 'Japan', 'US'],\n 'age': [25, 30, 15, np.nan, 25, 22, np.nan,45 ,18, 33],\n 'type': [1, 3, 2, 3, 2, 3, 1, 1, 2, 1],\n 'diabetes': ['yes', 'yes', 'no', 'yes', 'no', 'no', 'no', 'yes', 'no', 'no']}\n\nlabels = ['a', 'b', 'c', 'd', 'e', 'f', 'g', 'h', 'i', 'j']\ndf=pd.DataFrame(data=data,index=labels)\n#print(df[df['age'].between(20,30)])\n#print(df.groupby('nationality').mean()['age'])\n#print(df.sort_values(by=['age','type'],ascending=[False,True]))\ndf['nationality'] = df['nationality'].replace('US','United States')\nprint(df)\n\ndf.ix[1]", "<h3>Getting column data</h3>", "df['B']", "<h3>Getting row data</h3>", "df.loc['r1']", "<h3>Getting a row by row number</h3>", "df.iloc[0]", "<h3>Getting multiple columns<h3>", "df[['B','A']] #Note that the column identifiers are in a list", "<h3>Getting a specific cell</h3>", "df.loc['r2','B']\n\ndf.loc['r2']['A']", "<h3>Slicing</h3>", "print(df)\nprint(df.loc['r1':'r2'])\n\ndf.loc['r1':'r2','B':'C']", "<h2>Pandas 
datareader</h2>\n<li>Access data from html tables on any web page</li>\n<li>Get data from google finance</li>\n<li>Get data from the federal reserve</li>\n\n<h3>HTML Tables</h3>\n<li>Pandas datareader can read a table in an html page into a dataframe\n<li>the read_html function returns a list of all dataframes with one dataframe for each html table on the page\n\n<h4>Example: Read the tables on the google finance page</h4>", "#df_list = pd.read_html('http://www.bloomberg.com/markets/currencies/major')\ndf_list = pd.read_html('http://www.waihuipaijia.cn/'\n , encoding='utf-8')\nprint(len(df_list))", "<h4>The page contains only one table so the read_html function returns a list of one element</h4>", "df = df_list[0]\nprint(df)", "<h4>Note that the read_html function has automatically detected the header columns</h4>\n<h4>If an index is necessary, we need to explicitly specify it</h4>", "df.set_index('Currency',inplace=True)\nprint(df)", "<h4>Now we can use .loc to extract specific currency rates</h4>", "df.loc['EUR-CHF','Value']", "<h3>Working with views and copies</h3>\n\n<h4>Chained indexing creates a copy and changes to the copy won't be reflected in the original dataframe</h4>", "eur_usd = df.loc['EUR-USD']['Change'] #This is chained indexing\ndf.loc['EUR-USD']['Change'] = 1.0 #Here we are changing a value in a copy of the dataframe\nprint(eur_usd)\nprint(df.loc['EUR-USD']['Change']) #Neither eur_usd, nor the dataframe are changed\n\neur_usd = df.loc['EUR-USD','Change'] #eur_usd points to the value inside the dataframe\ndf.loc['EUR-USD','Change'] = 1.0 #Change the value in the view \nprint(eur_usd) #eur_usd is changed (because it points to the view)\nprint(df.loc['EUR-USD']['Change']) #The dataframe has been correctly updated", "<h2>Getting historical stock prices from Google financs</h2>\nUsage: DataReader(ticker,source,startdate,enddate)<br>\nUnfortunately, the Yahoo finance datareader has stopped working because of a change to Yahoo's website", "from 
pandas_datareader import data\nimport datetime as dt\nstart=dt.datetime(2017, 1, 1)\nend=dt.datetime.today()\n\n\nprint(start,end)\n\n\ndf = data.DataReader('IBM', 'google', start, end)\n\n\ndf", "<h2>Datareader documentation</h2>\nhttp://pandas-datareader.readthedocs.io/en/latest/</h2>\n<h3>Working with a timeseries data frame</h3>\n<li>The data is organized by time with the index serving as the timeline\n\n\n<h4>Creating new columns</h4>\n<li>Add a column to a dataframe\n<li>Base the elements of the column on some combination of data in the existing columns\n<h4>Example: Number of Days that the stock closed higher than it opened\n<li>We'll create a new column with the header \"UP\"\n<li>And use np.where to decide what to put in the column", "df['UP']=np.where(df['Close']>df['Open'],1,0)\ndf", "<h3>Get summary statistics</h3>\n<li>The \"describe\" function returns a dataframe containing summary stats for all numerical columns\n<li>Columns containing non-numerical data are ignored", "df.describe()", "<h4>Calculate the percentage of days that the stock has closed higher than its open</h4>", "df['UP'].sum()/df['UP'].count()", "<h4>Calculate percent changes</h4>\n<li>The function pct_change computes a percent change between successive rows (times in timeseries data)\n<li>Defaults to a single time delta\n<li>With an argument, the time delta can be changed", "df['Close'].pct_change() #One timeperiod percent change\n\nn=2\ndf['Close'].pct_change(n) #n timeperiods percent change", "<h3>NaN support</h3>\nPandas functions can ignore NaNs", "n=13\ndf['Close'].pct_change(n).mean()", "<h3>Rolling windows</h3>\n<li>\"rolling\" function extracts rolling windows\n<li>For example, the 21 period rolling window of the 13 period percent change", "df['Close'].pct_change(n).rolling(21)", "<h4>Calculate something on the rolling windows</h4>\n\n<h4>Example: mean (the 21 day moving average of the 13 day percent change)", "n=13\ndf['Close'].pct_change(n).rolling(21).mean()", "<h4>Calculate 
several moving averages and graph them</h4>", "ma_8 = df['Close'].pct_change(n).rolling(window=8).mean()\nma_13= df['Close'].pct_change(n).rolling(window=13).mean()\nma_21= df['Close'].pct_change(n).rolling(window=21).mean()\nma_34= df['Close'].pct_change(n).rolling(window=34).mean()\nma_55= df['Close'].pct_change(n).rolling(window=55).mean()\n\nma_8.plot()\nma_34.plot()", "<h2>Linear regression with pandas</h2>\n<h4>Example: TAN is the ticker for a solar ETF. FSLR, RGSE, and SCTY are tickers of companies that build or lease solar panels. Each has a different business model. We'll use pandas to study the risk reward tradeoff between the 4 investments and also see how correlated they are</h4>", "import datetime\nimport pandas_datareader as data\nstart = datetime.datetime(2015,7,1)\nend = datetime.datetime(2016,6,1)\nsolar_df = data.DataReader(['FSLR', 'TAN','RGSE','SCTY'],'google', start=start,end=end)['Close']\n\nsolar_df", "<h4>Let's calculate returns (the 1 day percent change)</h4>", "rets = solar_df.pct_change()\nprint(rets)", "<h4>Let's visualize the relationship between each stock and the ETF</h4>", "import matplotlib.pyplot as plt\nplt.scatter(rets.FSLR,rets.TAN)\n\nplt.scatter(rets.RGSE,rets.TAN)\n\nplt.scatter(rets.SCTY,rets.TAN)", "<h4>The correlation matrix</h4>", "solar_corr = rets.corr()\nprint(solar_corr)", "<h3>Basic risk analysis</h3>\n<h4>We'll plot the mean and std or returns for each ticker to get a sense of the risk return profile</h4>", "plt.scatter(rets.mean(), rets.std())\nplt.xlabel('Expected returns')\nplt.ylabel('Standard deviations')\nfor label, x, y in zip(rets.columns, rets.mean(), rets.std()):\n plt.annotate(\n label, \n xy = (x, y), xytext = (20, -20),\n textcoords = 'offset points', ha = 'right', va = 'bottom',\n bbox = dict(boxstyle = 'round,pad=0.5', fc = 'yellow', alpha = 0.5),\n arrowprops = dict(arrowstyle = '->', connectionstyle = 'arc3,rad=0'))\nplt.show()\n", 
"<h2>Regressions</h2>\nhttp://statsmodels.sourceforge.net/\n<h3>Steps for regression</h3>\n<li>Construct y (dependent variable series)\n<li>Construct matrix (dataframe) of X (independent variable series)\n<li>Add intercept\n<li>Model the regression\n<li>Get the results\n<h3>The statsmodels library contains various regression packages. We'll use the OLS (Ordinary Least Squares) model", "import numpy as np\nimport statsmodels.api as sm\nX=solar_df[['FSLR','RGSE','SCTY']]\nX = sm.add_constant(X)\ny=solar_df['TAN']\nmodel = sm.OLS(y,X,missing='drop')\nresult = model.fit()\nprint(result.summary())", "<h4>Finally plot the fitted line with the actual y values", "fig, ax = plt.subplots(figsize=(8,6))\nax.plot(y)\nax.plot(result.fittedvalues)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
LukasMosser/Jupetro
notebooks/Casing Seat Depth Determination - Part 1.ipynb
gpl-3.0
[ "Simple Bottom Up Casing Seat Design - Part 1\nCreated by Lukas Mosser, 2016\nIntroduction\nWe will look at how we can use python and related libraries to estimate casing setting depths.\nWe will derive a simplified methodology to determine casing setting depths from first principles.\nAssume here that our pore pressure curves are simple monotonic increasing or decreasing functions. In simple terms,\nwe don't encounter any abnormal pressure zones or \"bumps\" in our pore pressure or frac gradient. Of course, in reality this is not always the case and we may want to consider these zones in the future.\n\nLearning Outcomes\nWhat you will learn from this:\n\n1D-Interpolation using numpy.interp library\nUsing scalar-vector multiplication in numpy\nSimple \"bottom-up\" casing setting depth determination\n\nBottom up casing seats from first principles\nWe will consider the following first principles:\n\nDon't fracture the formation: Stay below the fracture gradient\nAvoid an influx of fluids (kick): Stay above the pore pressure\nOur data is uncertain: Add a safety margin.\nConsider HSE\n\nNow of course we have to translate these into a computational framework!\nComputational approach (Part 1: simplified)\nWe can find the setting depths of our casings in four simplified steps (remember, no abnormal pressures)\n\nWe start at the highest value of pore pressure at our target depth.\nExtend a vertical line. Where we intersect the fracture gradient, this depth is where a casing is placed.\nFind the pore pressure at the casing seat depth.\nHave you reached the surface? No, go back to 2. Yes, great we're done.\n\nAs you may have noticed we will move between 2. and 4. 
until we have determined all casing seats and arrive at the surface.\nThis method will be similar for when we account for abnormal pressures, but we will have to deal with the \"bumps\" later.\nLet's load some data and have a look at it:", "import numpy as np\nimport matplotlib.pyplot as plt\n%matplotlib inline", "Optional: Use your own data\nIf you want to use this notebook on your own, all you have to do is change the target directory and filename \ndefined as fracture_pressure_data_location and pore_pressure_data_location. In a future notebook, we will cover how you can import Microsoft Excel based data to any Jupyter Notebook!", "# Change these file names to your own datas locations.\nfracture_pressure_data_location = \"data/fracture_pressure.csv\"\npore_pressure_data_location = \"data/pore_pressure.csv\"\n\nfracture_pressure_data = np.loadtxt(\"data/fracture_pressure.csv\", delimiter=\",\")\nfracture_pressure, TVD_frac = fracture_pressure_data.T\n\npore_pressure_data = np.loadtxt(\"data/pore_pressure.csv\", delimiter=\",\")\npore_pressure, TVD_pore = pore_pressure_data.T", "Safety Factor\nLet's add a safety factor. Since our data is in numpy arrays, multiplying a list of values by a constant is performed as simple multiplication of the two:", "percent_safety = 2. # 2% Safety margin\nsafety = percent_safety/100. 
\nfracture_pressure_safety = (1.-safety)*fracture_pressure\npore_pressure_safety = (1.+safety)*pore_pressure", "Plotting our data", "fig, ax = plt.subplots(1, figsize=(13, 13))\n\n\n# We plot our initial data\nax.plot(fracture_pressure, TVD_frac, color=\"red\", linewidth=3, label=\"Fracture Pressure\")\nax.plot(pore_pressure, TVD_pore, color=\"blue\", linewidth=3, label=\"Pore Pressure\")\n\n# And now our data plus/minus a safety margin of 2%\nax.plot(fracture_pressure_safety, TVD_frac, color=\"red\", linewidth=3, linestyle=\"--\")\nax.plot(pore_pressure_safety, TVD_pore, color=\"blue\", linewidth=3, linestyle=\"--\")\n\n# -------------- Formatting ------------- Ignore for now\n\nax.set_title(\"Casing Setting Depths\", fontsize=30, y=1.08)\nlabel_size = 12\n\nax.set_ylabel(\"Total Vertical Depth [ft]\", fontsize=25)\nax.set_ylim(ax.get_ylim()[::-1])\n\nax.xaxis.tick_top()\nax.xaxis.set_label_position(\"top\")\n\nyed = [tick.label.set_fontsize(label_size) for tick in ax.yaxis.get_major_ticks()]\nxed = [tick.label.set_fontsize(label_size) for tick in ax.xaxis.get_major_ticks()]\nax.set_xlabel(\"Equivalent Mud Density [ppg]\", fontsize=25)\nax.ticklabel_format(fontsize=25)\nax.grid()\nax.legend(fontsize=25)", "Turning theory into code\nWe now want to turn our list of steps into a piece of working code. Let's start with a naive approach.\nWe will use numpy.interp to compute linear interpolated values from our datasets.\nThis function takes a list of x values where we want to interpolate (We can compute many at once! Neat!), as well as the x and y values of the data we want to interpolate from. In our case the pore and fracture pressure data.\nSince we use a bottom up approach, we know that at the bottom casing setting depth is governed by the pore pressure at the target depth. This is easily found in python by performing list_name[-1] on any list! 
(Ignore significant digits, you know the drill!)", "bottom_up_pore_pressure = pore_pressure_safety[-1]\nprint \"Equivalent pore pressure at depth: \", bottom_up_pore_pressure, \" [ppg]\"", "Step two now involves extending a line up until we hit the fracture pressure. This means we have to interpolate our known pore pressure at target to find the corresponding depth in our fracture pressure data.\nNumpy lets us perform this task in one simple line of code.", "second_section_tvd = np.interp(bottom_up_pore_pressure, fracture_pressure_safety, TVD_frac)\nprint \"Depth at which pore pressure and fracture pressure are equal: \", second_section_tvd, \" [ft]\"", "This depth will be equivalent to our second casing seat (the first was at the target).\nNow let's get the pore pressure at this depth. \nTo do so we have to switch x and y axis. We know the depth (y-axis) and want to know the pore pressure value (x-axis)\nIn python this is just as simple as the task before:", "second_section_pore_pressure = np.interp(second_section_tvd, TVD_pore, pore_pressure_safety)\nprint \"Equivalent pore pressure for second section: \", second_section_pore_pressure, \" [ppg]\"", "Great! Now we know the drill (pun not intended :) ) let's see where we hit the frac gradient this time!", "third_section_tvd = np.interp(second_section_pore_pressure, fracture_pressure_safety, TVD_frac)\nprint \"Depth at which pore pressure and fracture pressure are equal: \", third_section_tvd, \" [ft]\"", "We've hit the surface! 
Therefore, no more casing seats and we are done.\nLet's plot the results!", "casing_seats_tvd = [TVD_pore[-1], second_section_tvd, second_section_tvd, third_section_tvd]\ncasing_seats_ppg = [pore_pressure_safety[-1], pore_pressure_safety[-1], second_section_pore_pressure, second_section_pore_pressure]", "We will use matplotlib to perform any plotting tasks.", "fig, ax = plt.subplots(1, figsize=(13, 13))\n\n# We plot our initial data\nax.plot(fracture_pressure, TVD_frac, color=\"red\", linewidth=3, label=\"Fracture Pressure\")\nax.plot(pore_pressure, TVD_pore, color=\"blue\", linewidth=3, label=\"Pore Pressure\")\n\n# And now our data plus/minus a safety margin of 2%\nax.plot(fracture_pressure_safety, TVD_frac, color=\"red\", linewidth=3, linestyle=\"--\")\nax.plot(pore_pressure_safety, TVD_pore, color=\"blue\", linewidth=3, linestyle=\"--\")\n\n# Finally our casing design\nax.plot(casing_seats_ppg, casing_seats_tvd, color=\"black\", linestyle=\"--\", linewidth=3, label=\"Casing Seats\")\n\n# -------------- Formatting ------------- Ignore for now\n\n\nax.set_title(\"Casing Setting Depths\", fontsize=30, y=1.08)\nlabel_size = 12\n\nax.set_ylabel(\"Total Vertical Depth [ft]\", fontsize=25)\nax.set_ylim(ax.get_ylim()[::-1])\n\nax.xaxis.tick_top()\nax.xaxis.set_label_position(\"top\")\n\nyed = [tick.label.set_fontsize(label_size) for tick in ax.yaxis.get_major_ticks()]\nxed = [tick.label.set_fontsize(label_size) for tick in ax.xaxis.get_major_ticks()]\nax.set_xlabel(\"Equivalent Mud Density [ppg]\", fontsize=25)\nax.ticklabel_format(fontsize=25)\nax.grid()\nax.legend(fontsize=25)", "This concludes part 1 of our casing setting depth determination notebook. Next time we will automate the design process using a simple class. Our third part will deal with abnormal pressure zones and finally we will finish by integrating Microsoft Excel data sources. Until next time!" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
datactive/bigbang
bigbang/datasets/domains/Create Domain-Category Data.ipynb
mit
[ "import itertools\nimport pandas as pd", "The following dictionary contains hand-curated labeled domains.", "domain_categories = {\n \"generic\" : [\n \"gmail.com\",\n \"hotmail.com\",\n \"gmx.de\",\n \"gmx.net\",\n \"gmx.at\",\n \"earthlink.net\",\n \"comcast.net\",\n \"yahoo.com\",\n \"email.com\"\n ],\n \"personal\" : [\n \"mnot.net\",\n \"henriknordstrom.net\",\n \"adambarth.com\",\n \"brianrosen.net\",\n \"taugh.com\",\n \"csperkins.org\",\n \"sandelman.ca\",\n \"lowentropy.net\",\n \"gbiv.com\"\n ],\n \"company\" : [\n \"apple.com\",\n \"cisco.com\",\n \"chromium.org\",\n \"microsoft.com\",\n \"oracle.com\",\n \"google.com\",\n \"facebook.com\",\n \"intel.com\",\n \"verizon.com\",\n \"verizon.net\",\n \"salesforce.com\",\n \"cloudflare.com\",\n \"broadcom.com\",\n \"juniper.net\",\n \"netflix.com\",\n \"akamai.com\",\n \"us.ibm.com\",\n \"qualcomm.com\",\n \"siemens.com\",\n \"boeing.com\",\n \"sandvine.com\",\n \"marconi.com\",\n \"trilliant.com\",\n \"huawei.com\", # chinese\n \"zte.com.cn\" # chinese\n \"chinamobile.com\",\n \"chinaunicom.cn\",\n \"chinatelecom.cn\",\n \"cnnic.cn\" # registry\n ],\n # from R.N.\n \"academic\" : [\n \"caict.ac.cn\", # chinese\n \"scu.edu.cn\", # chinese\n \"tongji.edu.cn\", # chinese\n \"mit.edu\",\n \"ieee.org\",\n \"acm.org\",\n \"berkeley.edu\",\n \"harvard.edu\",\n \"lbl.gov\"\n ],\n \"sdo\" : [\n \"isoc.org\",\n \"icann.org\",\n \"amsl.com\",\n \"iana.org\",\n \"tools.ietf.org\",\n \"w3.org\"\n ]\n}\n\ndf = pd.DataFrame.from_records(itertools.chain(\n *[[{'domain' : dom, 'category' : cat} \n for dom in domain_categories[cat]] \n for cat in domain_categories]\n))", "The following scripts gathers generic email hosts from a list provided on a public Gist.", "!wget https://gist.githubusercontent.com/ammarshah/f5c2624d767f91a7cbdc4e54db8dd0bf/raw/660fd949eba09c0b86574d9d3aa0f2137161fc7c/all_email_provider_domains.txt\n\naepd = open(\"all_email_provider_domains.txt\").read().split(\"\\n\")\n\ndf = df.append([{'domain' : d, 
'category' : 'generic'} for d in aepd]).drop_duplicates(subset=['domain'])\n\ndf.to_csv(\"domain_categories.csv\", index=False)", "<hr>", "pd.read_csv(\"domain_categories.csv\", index_col='domain')" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
ES-DOC/esdoc-jupyterhub
notebooks/cas/cmip6/models/sandbox-2/aerosol.ipynb
gpl-3.0
[ "ES-DOC CMIP6 Model Properties - Aerosol\nMIP Era: CMIP6\nInstitute: CAS\nSource ID: SANDBOX-2\nTopic: Aerosol\nSub-Topics: Transport, Emissions, Concentrations, Optical Radiative Properties, Model. \nProperties: 69 (37 required)\nModel descriptions: Model description details\nInitialized From: -- \nNotebook Help: Goto notebook help page\nNotebook Initialised: 2018-02-15 16:53:45\nDocument Setup\nIMPORTANT: to be executed each time you run the notebook", "# DO NOT EDIT ! \nfrom pyesdoc.ipython.model_topic import NotebookOutput \n\n# DO NOT EDIT ! \nDOC = NotebookOutput('cmip6', 'cas', 'sandbox-2', 'aerosol')", "Document Authors\nSet document authors", "# Set as follows: DOC.set_author(\"name\", \"email\") \n# TODO - please enter value(s)", "Document Contributors\nSpecify document contributors", "# Set as follows: DOC.set_contributor(\"name\", \"email\") \n# TODO - please enter value(s)", "Document Publication\nSpecify document publication status", "# Set publication status: \n# 0=do not publish, 1=publish. \nDOC.set_publication_status(0)", "Document Table of Contents\n1. Key Properties\n2. Key Properties --&gt; Software Properties\n3. Key Properties --&gt; Timestep Framework\n4. Key Properties --&gt; Meteorological Forcings\n5. Key Properties --&gt; Resolution\n6. Key Properties --&gt; Tuning Applied\n7. Transport\n8. Emissions\n9. Concentrations\n10. Optical Radiative Properties\n11. Optical Radiative Properties --&gt; Absorption\n12. Optical Radiative Properties --&gt; Mixtures\n13. Optical Radiative Properties --&gt; Impact Of H2o\n14. Optical Radiative Properties --&gt; Radiative Scheme\n15. Optical Radiative Properties --&gt; Cloud Interactions\n16. Model \n1. Key Properties\nKey properties of the aerosol model\n1.1. Model Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of aerosol model.", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.aerosol.key_properties.model_overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "1.2. Model Name\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nName of aerosol model code", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.model_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "1.3. Scheme Scope\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nAtmospheric domains covered by the aerosol model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.scheme_scope') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"troposhere\" \n# \"stratosphere\" \n# \"mesosphere\" \n# \"mesosphere\" \n# \"whole atmosphere\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "1.4. Basic Approximations\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nBasic approximations made in the aerosol model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.basic_approximations') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "1.5. Prognostic Variables Form\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nPrognostic variables in the aerosol model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.prognostic_variables_form') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"3D mass/volume ratio for aerosols\" \n# \"3D number concenttration for aerosols\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "1.6. 
Number Of Tracers\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nNumber of tracers in the aerosol model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.number_of_tracers') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "1.7. Family Approach\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nAre aerosol calculations generalized into families of species?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.family_approach') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "2. Key Properties --&gt; Software Properties\nSoftware properties of aerosol code\n2.1. Repository\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nLocation of code for this component.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.software_properties.repository') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "2.2. Code Version\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nCode version identifier.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.software_properties.code_version') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "2.3. Code Languages\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nCode language(s).", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.software_properties.code_languages') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "3. 
Key Properties --&gt; Timestep Framework\nTimestep framework of the aerosol model\n3.1. Method\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nMathematical method deployed to solve the time evolution of the prognostic variables", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.timestep_framework.method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Uses atmospheric chemistry time stepping\" \n# \"Specific timestepping (operator splitting)\" \n# \"Specific timestepping (integrated)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "3.2. Split Operator Advection Timestep\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nTimestep for aerosol advection (in seconds)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.timestep_framework.split_operator_advection_timestep') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "3.3. Split Operator Physical Timestep\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nTimestep for aerosol physics (in seconds).", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.timestep_framework.split_operator_physical_timestep') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "3.4. Integrated Timestep\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTimestep for the aerosol model (in seconds)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.timestep_framework.integrated_timestep') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "3.5. 
Integrated Scheme Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nSpecify the type of timestep scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.timestep_framework.integrated_scheme_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Explicit\" \n# \"Implicit\" \n# \"Semi-implicit\" \n# \"Semi-analytic\" \n# \"Impact solver\" \n# \"Back Euler\" \n# \"Newton Raphson\" \n# \"Rosenbrock\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "4. Key Properties --&gt; Meteorological Forcings\n**\n4.1. Variables 3D\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nThree dimensional forcing variables, e.g. U, V, W, T, Q, P, convective mass flux", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.variables_3D') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "4.2. Variables 2D\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nTwo dimensional forcing variables, e.g. land-sea mask definition", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.variables_2D') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "4.3. Frequency\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nFrequency with which meteorological forcings are applied (in seconds).", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.frequency') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "5. Key Properties --&gt; Resolution\nResolution in the aerosol model grid\n5.1. 
Name\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nThis is a string usually used by the modelling group to describe the resolution of this grid, e.g. ORCA025, N512L180, T512L70 etc.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.resolution.name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "5.2. Canonical Horizontal Resolution\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nExpression quoted for gross comparisons of resolution, eg. 50km or 0.1 degrees etc.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.resolution.canonical_horizontal_resolution') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "5.3. Number Of Horizontal Gridpoints\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nTotal number of horizontal (XY) points (or degrees of freedom) on computational grid.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.resolution.number_of_horizontal_gridpoints') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "5.4. Number Of Vertical Levels\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nNumber of vertical levels resolved on computational grid.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.resolution.number_of_vertical_levels') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "5.5. Is Adaptive Grid\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDefault is False. Set true if grid resolution changes during execution.", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.aerosol.key_properties.resolution.is_adaptive_grid') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "6. Key Properties --&gt; Tuning Applied\nTuning methodology for aerosol model\n6.1. Description\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nGeneral overview description of tuning: explain and motivate the main targets and metrics retained. Document the relative weight given to climate performance metrics versus process oriented metrics, and on the possible conflicts with parameterization level tuning. In particular describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.tuning_applied.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "6.2. Global Mean Metrics Used\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nList set of metrics of the global mean state used in tuning model/component", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.tuning_applied.global_mean_metrics_used') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "6.3. Regional Metrics Used\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nList of regional metrics of mean state used in tuning model/component", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.tuning_applied.regional_metrics_used') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "6.4. 
Trend Metrics Used\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nList observed trend metrics used in tuning model/component", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.tuning_applied.trend_metrics_used') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "7. Transport\nAerosol transport\n7.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of transport in atmospheric aerosol model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.transport.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "7.2. Scheme\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nMethod for aerosol transport modeling", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.transport.scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Uses Atmospheric chemistry transport scheme\" \n# \"Specific transport scheme (eulerian)\" \n# \"Specific transport scheme (semi-lagrangian)\" \n# \"Specific transport scheme (eulerian and semi-lagrangian)\" \n# \"Specific transport scheme (lagrangian)\" \n# TODO - please enter value(s)\n", "7.3. Mass Conservation Scheme\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nMethod used to ensure mass conservation.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.transport.mass_conservation_scheme') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Uses Atmospheric chemistry transport scheme\" \n# \"Mass adjustment\" \n# \"Concentrations positivity\" \n# \"Gradients monotonicity\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "7.4. 
Convention\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nTransport by convection", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.transport.convention') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Uses Atmospheric chemistry transport scheme\" \n# \"Convective fluxes connected to tracers\" \n# \"Vertical velocities connected to tracers\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "8. Emissions\nAtmospheric aerosol emissions\n8.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of emissions in atmospheric aerosol model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.emissions.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "8.2. Method\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nMethod used to define aerosol species (several methods allowed because the different species may not use the same method).", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.emissions.method') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"None\" \n# \"Prescribed (climatology)\" \n# \"Prescribed CMIP6\" \n# \"Prescribed above surface\" \n# \"Interactive\" \n# \"Interactive above surface\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "8.3. Sources\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nSources of the aerosol species taken into account in the emissions scheme", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.aerosol.emissions.sources') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Vegetation\" \n# \"Volcanos\" \n# \"Bare ground\" \n# \"Sea surface\" \n# \"Lightning\" \n# \"Fires\" \n# \"Aircraft\" \n# \"Anthropogenic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "8.4. Prescribed Climatology\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nSpecify the climatology type for aerosol emissions", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.emissions.prescribed_climatology') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Constant\" \n# \"Interannual\" \n# \"Annual\" \n# \"Monthly\" \n# \"Daily\" \n# TODO - please enter value(s)\n", "8.5. Prescribed Climatology Emitted Species\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList of aerosol species emitted and prescribed via a climatology", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.emissions.prescribed_climatology_emitted_species') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "8.6. Prescribed Spatially Uniform Emitted Species\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList of aerosol species emitted and prescribed as spatially uniform", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.emissions.prescribed_spatially_uniform_emitted_species') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "8.7. Interactive Emitted Species\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList of aerosol species emitted and specified via an interactive method", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.aerosol.emissions.interactive_emitted_species') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "8.8. Other Emitted Species\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList of aerosol species emitted and specified via an &quot;other method&quot;", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.emissions.other_emitted_species') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "8.9. Other Method Characteristics\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nCharacteristics of the &quot;other method&quot; used for aerosol emissions", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.emissions.other_method_characteristics') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "9. Concentrations\nAtmospheric aerosol concentrations\n9.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of concentrations in atmospheric aerosol model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.concentrations.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "9.2. Prescribed Lower Boundary\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList of species prescribed at the lower boundary.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.concentrations.prescribed_lower_boundary') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "9.3. Prescribed Upper Boundary\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList of species prescribed at the upper boundary.", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.aerosol.concentrations.prescribed_upper_boundary') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "9.4. Prescribed Fields Mmr\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList of species prescribed as mass mixing ratios.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.concentrations.prescribed_fields_mmr') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "9.5. Prescribed Fields Aod\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList of species prescribed as AOD plus CCNs.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.concentrations.prescribed_fields_aod') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "10. Optical Radiative Properties\nAerosol optical and radiative properties\n10.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of optical and radiative properties", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.optical_radiative_properties.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "11. Optical Radiative Properties --&gt; Absorption\nAbsorption properties in aerosol scheme\n11.1. Black Carbon\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nAbsorption mass coefficient of black carbon at 550nm (if non-absorbing enter 0)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.black_carbon') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "11.2. 
Dust\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nAbsorption mass coefficient of dust at 550nm (if non-absorbing enter 0)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.dust') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "11.3. Organics\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nAbsorption mass coefficient of organics at 550nm (if non-absorbing enter 0)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.organics') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "12. Optical Radiative Properties --&gt; Mixtures\n**\n12.1. External\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs there external mixing with respect to chemical composition?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.external') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "12.2. Internal\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs there internal mixing with respect to chemical composition?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.internal') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "12.3. Mixing Rule\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nIf there is internal mixing with respect to chemical composition then indicate the mixing rule", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.mixing_rule') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "13. Optical Radiative Properties --&gt; Impact Of H2o\n**\n13.1. Size\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDoes H2O impact size?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.optical_radiative_properties.impact_of_h2o.size') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "13.2. Internal Mixture\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDoes H2O impact internal mixture?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.optical_radiative_properties.impact_of_h2o.internal_mixture') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "14. Optical Radiative Properties --&gt; Radiative Scheme\nRadiative scheme for aerosol\n14.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of radiative scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "14.2. Shortwave Bands\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nNumber of shortwave bands", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.shortwave_bands') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "14.3. 
Longwave Bands\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nNumber of longwave bands", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.longwave_bands') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "15. Optical Radiative Properties --&gt; Cloud Interactions\nAerosol-cloud interactions\n15.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of aerosol-cloud interactions", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "15.2. Twomey\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs the Twomey effect included?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.twomey') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "15.3. Twomey Minimum Ccn\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nIf the Twomey effect is included, then what is the minimum CCN number?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.twomey_minimum_ccn') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "15.4. Drizzle\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDoes the scheme affect drizzle?", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.drizzle') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "15.5. Cloud Lifetime\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDoes the scheme affect cloud lifetime?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.cloud_lifetime') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "15.6. Longwave Bands\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nNumber of longwave bands", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.longwave_bands') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "16. Model\nAerosol model\n16.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of atmospheric aerosol model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.model.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "16.2. Processes\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nProcesses included in the Aerosol model.", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.aerosol.model.processes') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Dry deposition\" \n# \"Sedimentation\" \n# \"Wet deposition (impaction scavenging)\" \n# \"Wet deposition (nucleation scavenging)\" \n# \"Coagulation\" \n# \"Oxidation (gas phase)\" \n# \"Oxidation (in cloud)\" \n# \"Condensation\" \n# \"Ageing\" \n# \"Advection (horizontal)\" \n# \"Advection (vertical)\" \n# \"Heterogeneous chemistry\" \n# \"Nucleation\" \n# TODO - please enter value(s)\n", "16.3. Coupling\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nOther model components coupled to the Aerosol model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.model.coupling') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Radiation\" \n# \"Land surface\" \n# \"Heterogeneous chemistry\" \n# \"Clouds\" \n# \"Ocean\" \n# \"Cryosphere\" \n# \"Gas phase chemistry\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "16.4. Gas Phase Precursors\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nList of gas phase aerosol precursors.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.model.gas_phase_precursors') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"DMS\" \n# \"SO2\" \n# \"Ammonia\" \n# \"Iodine\" \n# \"Terpene\" \n# \"Isoprene\" \n# \"VOC\" \n# \"NOx\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "16.5. Scheme Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nType(s) of aerosol scheme used by the aerosols model (potentially multiple: some species may be covered by one type of aerosol scheme and other species covered by another type).", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.aerosol.model.scheme_type') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Bulk\" \n# \"Modal\" \n# \"Bin\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "16.6. Bulk Scheme Species\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nList of species covered by the bulk scheme.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.model.bulk_scheme_species') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Sulphate\" \n# \"Nitrate\" \n# \"Sea salt\" \n# \"Dust\" \n# \"Ice\" \n# \"Organic\" \n# \"Black carbon / soot\" \n# \"SOA (secondary organic aerosols)\" \n# \"POM (particulate organic matter)\" \n# \"Polar stratospheric ice\" \n# \"NAT (Nitric acid trihydrate)\" \n# \"NAD (Nitric acid dihydrate)\" \n# \"STS (supercooled ternary solution aerosol particule)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "©2017 ES-DOC" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
davidweichiang/tock
docs/source/tutorial/PDAs.ipynb
mit
[ "Pushdown automata", "from tock import *", "Creating PDAs\nA pushdown automaton (PDA) can be created with PushdownAutomaton() or by reading from a file.", "m = read_csv(\"examples/sipser-2-14.csv\")\nm.is_finite(), m.is_pushdown(), m.is_deterministic()\n\nto_table(m)", "The first column lists the states, just as in a FA. The first row lists input symbols and the second row lists popped stack symbols.\nThe cells contain pairs of new states and pushed stack symbols. So, if the machine is in state q2, and the next input symbol is 0, then the machine stays in state q2 and pushes a 0 onto the stack. If a cell has multiple tuples, then each tuple must be enclosed in parentheses (and curly braces are optional).\nHere's the state transition diagram:", "to_graph(m)", "Now it's easier to see that it accepts strings of the form $\\texttt{0}^n\\texttt{1}^n$. If you draw the graph in a graph editor, use -&gt; for a right arrow.\nRunning PDAs", "run(m, \"0 0 0 1 1 1\")", "The nodes of the run now show the contents of the stack as well. The top symbol of the stack is marked with square brackets. If the stack gets too deep, then only the top few elements are shown, with an ellipsis instead of the rest of the stack. You can change how many stack elements are displayed using the show_stack option, which defaults to 3. (More about ellipses below.)\nThis is a deterministic PDA. That's not very exciting, so let's try a nondeterministic PDA. This one (Example 2.18) accepts strings of the form ${ww^R}$.", "m = read_csv(\"examples/sipser-2-18.csv\")\nm.is_finite(), m.is_pushdown(), m.is_deterministic()\n\nto_table(m)\n\nto_graph(m)\n\nrun(m, \"0 0 1 1 0 0\")", "Notice the nondeterminism: at every step, the automaton considers whether it might be at the midpoint of the string.\nWe saw that in a nondeterministic FA, there could be infinitely many runs for an input string, and the run graph indicated this using a cycle. 
With nondeterministic PDAs, we have a new problem: they can push/pop as many times as they want without reading in any input, so the infinitely many runs also go through infinitely many configurations! Consider the following PDA, which (for some reason) pushes an arbitrary number of # signs, reads in a single 0, then pops all the # signs again:", "m = read_csv(\"examples/pdaloop.csv\")\nto_graph(m)\n\nrun(m, \"0\")", "The graph shows us the first few pushes, but after the stack gets deep enough, only the top few items are shown, and the pushes stop creating new nodes. What isn't apparent from the run graph is that the simulator does make sure that the same number of # signs are pushed and popped." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
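The 0^n 1^n language accepted by the PDA in the Tock notebook above can be cross-checked without Tock by simulating the stack directly. The helper below is a hypothetical standalone sketch, not part of the Tock API:

```python
def accepts_0n1n(string):
    """Simulate the deterministic PDA for { 0^n 1^n : n >= 0 }.

    Each 0 pushes a marker; each 1 pops one. The string is accepted
    iff no 0 follows a 1 and the stack is empty at the end.
    """
    stack = []
    seen_one = False
    for symbol in string:
        if symbol == "0":
            if seen_one:          # a 0 after a 1: reject
                return False
            stack.append("0")     # push
        elif symbol == "1":
            seen_one = True
            if not stack:         # more 1s than 0s so far: reject
                return False
            stack.pop()
        else:                     # symbol outside the input alphabet
            return False
    return not stack              # accept iff every 0 was matched

print(accepts_0n1n("000111"))  # True
print(accepts_0n1n("0101"))    # False
```

Because this particular PDA is deterministic, a single pass with one stack suffices; the nondeterministic ww^R machine would need the search over configurations that Tock's `run` performs.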
d00d/quantNotebooks
Notebooks/07092017-PRA-Home-and-Ciriculum-Notebook-001.ipynb
unlicense
[ "Matplotlib\nworking with matplotlib for 2D graphing", "import matplotlib.pyplot as plt\nfrom matplotlib.offsetbox import AnchoredText\n#import matplotlib.animation as animation\n\n%matplotlib inline\n\nweight =[258.1,257.1,256.6,257.7,257.6,254.3,252.5,252.6,251.7,251.2,250.1,247.8] \n\n#plot(weight, 'm', label='line1', linewidth=4)\nplt.title('Q2 2017 - Progress on Weight Loss Program')\nplt.grid(True)\nplt.xlabel('Weigh in #')\nplt.ylabel('Weight in Lbs.')\nax = plt.gca()\nat = AnchoredText(\n \"Rob's Weekly Weight Loss progress\",\n loc=3, prop=dict(size=10), frameon=True,\n )\nat.patch.set_boxstyle(\"round,pad=0.,rounding_size=0.2\")\nax.add_artist(at)\n\nplt.plot(weight,'m', linewidth=4, linestyle='dashed', marker='o', markerfacecolor='blue', markersize=10)", "Dataframes\nWorking with Pandas and DataFrames. Importing necessary packages.", "import pandas as pd\nimport numpy as np\n\ndf = pd.DataFrame(data=np.array([[1,2,3,4], [5,6,7,8]], dtype=int), columns=['Pacific','Mountain','Central','Eastern'])\nplt.plot(df)\ndf", "LaTeX usage for math representations.\nAny LaTeX math should be inside $$:\n$$c = \\sqrt{a^2 + b^2}$$", "from IPython.display import display, Math, Latex\ndisplay(Math(r'\\sqrt{a^2 + b^2}'))\n\n#%lsmagic\n\n#%quickref\n\n#%debug" ]
[ "markdown", "code", "markdown", "code", "markdown", "code" ]
harishkrao/DSE200x
Week-7-MachineLearning/Weather Data Clustering using k-Means.ipynb
mit
[ "<p style=\"font-family: Arial; font-size:2.75em;color:purple; font-style:bold\"><br>\n\nClustering with scikit-learn\n\n<br><br></p>\n\nIn this notebook, we will learn how to perform k-means clustering using scikit-learn in Python. \nWe will use cluster analysis to generate a big picture model of the weather at a local station using minute-granularity data. In this dataset, we have on the order of millions of records. How do we create 12 clusters out of them?\nNOTE: The dataset we will use is in a large CSV file called minute_weather.csv. Please download it into the weather directory in your Week-7-MachineLearning folder. The download link is: https://drive.google.com/open?id=0B8iiZ7pSaSFZb3ItQ1l4LWRMTjg \n<p style=\"font-family: Arial; font-size:1.75em;color:purple; font-style:bold\"><br>\n\nImporting the Necessary Libraries<br></p>", "from sklearn.preprocessing import StandardScaler\nfrom sklearn.cluster import KMeans\n#import utils\nimport pandas as pd\nimport numpy as np\nfrom itertools import cycle, islice\nimport matplotlib.pyplot as plt\nfrom pandas.tools.plotting import parallel_coordinates\n\n%matplotlib inline", "<p style=\"font-family: Arial; font-size:1.75em;color:purple; font-style:bold\"><br>\n\nCreating a Pandas DataFrame from a CSV file<br><br></p>", "data = pd.read_csv('./weather/minute_weather.csv')", "<p style=\"font-family: Arial; font-size:1.75em;color:purple; font-style:bold\">Minute Weather Data Description</p>\n<br>\nThe minute weather dataset comes from the same source as the daily weather dataset that we used in the decision tree based classifier notebook. The main difference between these two datasets is that the minute weather dataset contains raw sensor measurements captured at one-minute intervals. The daily weather dataset instead contained processed and well-curated data. 
The data is in the file minute_weather.csv, which is a comma-separated file.\nAs with the daily weather data, this data comes from a weather station located in San Diego, California. The weather station is equipped with sensors that capture weather-related measurements such as air temperature, air pressure, and relative humidity. Data was collected for a period of three years, from September 2011 to September 2014, to ensure that sufficient data for different seasons and weather conditions is captured.\nEach row in minute_weather.csv contains weather data captured for a one-minute interval. Each row, or sample, consists of the following variables:\n\nrowID: unique number for each row (Unit: NA)\nhpwren_timestamp: timestamp of measure (Unit: year-month-day hour:minute:second)\nair_pressure: air pressure measured at the timestamp (Unit: hectopascals)\nair_temp: air temperature measured at the timestamp (Unit: degrees Fahrenheit)\navg_wind_direction: wind direction averaged over the minute before the timestamp (Unit: degrees, with 0 meaning coming from the North, and increasing clockwise)\navg_wind_speed: wind speed averaged over the minute before the timestamp (Unit: meters per second)\nmax_wind_direction: highest wind direction in the minute before the timestamp (Unit: degrees, with 0 being North and increasing clockwise)\nmax_wind_speed: highest wind speed in the minute before the timestamp (Unit: meters per second)\nmin_wind_direction: smallest wind direction in the minute before the timestamp (Unit: degrees, with 0 being North and increasing clockwise)\nmin_wind_speed: smallest wind speed in the minute before the timestamp (Unit: meters per second)\nrain_accumulation: amount of accumulated rain measured at the timestamp (Unit: millimeters)\nrain_duration: length of time rain has fallen as measured at the timestamp (Unit: seconds)\nrelative_humidity: relative humidity measured at the timestamp (Unit: percent)", "data.shape\n\ndata.head()", "<p style=\"font-family: 
Arial; font-size:1.75em;color:purple; font-style:bold\"><br>\n\nData Sampling<br></p>\n\nLots of rows, so let us sample down by taking every 10th row. <br>", "sampled_df = data[(data['rowID'] % 10) == 0]\nsampled_df.shape", "<p style=\"font-family: Arial; font-size:1.75em;color:purple; font-style:bold\"><br>\n\nStatistics\n<br><br></p>", "sampled_df.describe().transpose()\n\nsampled_df[sampled_df['rain_accumulation'] == 0].shape\n\nsampled_df[sampled_df['rain_duration'] == 0].shape", "<p style=\"font-family: Arial; font-size:1.75em;color:purple; font-style:bold\"><br>\n\nDrop all the Rows with Empty rain_duration and rain_accumulation\n<br><br></p>", "del sampled_df['rain_accumulation']\ndel sampled_df['rain_duration']\n\nrows_before = sampled_df.shape[0]\nsampled_df = sampled_df.dropna()\nrows_after = sampled_df.shape[0]", "<p style=\"font-family: Arial; font-size:1.75em;color:purple; font-style:bold\"><br>\n\nHow many rows did we drop ?\n<br><br></p>", "rows_before - rows_after\n\nsampled_df.columns", "<p style=\"font-family: Arial; font-size:1.75em;color:purple; font-style:bold\"><br>\n\nSelect Features of Interest for Clustering\n<br><br></p>", "features = ['air_pressure', 'air_temp', 'avg_wind_direction', 'avg_wind_speed', 'max_wind_direction', \n 'max_wind_speed','relative_humidity']\n\nselect_df = sampled_df[features]\n\nselect_df.columns\n\nselect_df", "<p style=\"font-family: Arial; font-size:1.75em;color:purple; font-style:bold\"><br>\n\nScale the Features using StandardScaler\n<br><br></p>", "X = StandardScaler().fit_transform(select_df)\nX", "<p style=\"font-family: Arial; font-size:1.75em;color:purple; font-style:bold\"><br>\n\nUse k-Means Clustering\n<br><br></p>", "kmeans = KMeans(n_clusters=12)\nmodel = kmeans.fit(X)\nprint(\"model\\n\", model)", "<p style=\"font-family: Arial; font-size:1.75em;color:purple; font-style:bold\"><br>\n\nWhat are the centers of 12 clusters we formed ?\n<br><br></p>", "centers = model.cluster_centers_\ncenters", "<p 
style=\"font-family: Arial; font-size:2.75em;color:purple; font-style:bold\"><br>\n\nPlots\n<br><br></p>\n\nLet us first create some utility functions which will help us in plotting graphs:", "# Function that creates a DataFrame with a column for Cluster Number\n\ndef pd_centers(featuresUsed, centers):\n\tcolNames = list(featuresUsed)\n\tcolNames.append('prediction')\n\n\t# Zip with a column called 'prediction' (index)\n\tZ = [np.append(A, index) for index, A in enumerate(centers)]\n\n\t# Convert to pandas data frame for plotting\n\tP = pd.DataFrame(Z, columns=colNames)\n\tP['prediction'] = P['prediction'].astype(int)\n\treturn P\n\n# Function that creates Parallel Plots\n\ndef parallel_plot(data):\n my_colors = list(islice(cycle(['b', 'r', 'g', 'y', 'k']), None, len(data)))\n #print(my_colors)\n plt.figure(figsize=(15,8)).gca().axes.set_ylim([-3,+3])\n parallel_coordinates(data, 'prediction', color = my_colors, marker='o')\n\nP = pd_centers(features, centers)\nP", "Dry Days", "parallel_plot(P[P['relative_humidity'] < -0.5])", "Warm Days", "parallel_plot(P[P['air_temp'] > 0.5])", "Cool Days", "parallel_plot(P[(P['relative_humidity'] > 0.5) & (P['air_temp'] < 0.5)])" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
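In the clustering notebook above, `StandardScaler().fit_transform(select_df)` z-scores each feature before k-means so that no feature dominates the Euclidean distances. A minimal NumPy-only sketch of that transformation (illustrative, not the scikit-learn implementation itself):

```python
import numpy as np

def standardize(X):
    """Column-wise z-scoring: subtract each column's mean and divide by
    its (population) standard deviation, matching StandardScaler's
    default behaviour."""
    X = np.asarray(X, dtype=float)
    mu = X.mean(axis=0)
    sigma = X.std(axis=0)
    sigma = np.where(sigma == 0, 1.0, sigma)  # guard constant columns
    return (X - mu) / sigma

X = np.array([[1.0, 10.0],
              [2.0, 20.0],
              [3.0, 30.0]])
Z = standardize(X)
print(Z.mean(axis=0))  # ~[0. 0.]
print(Z.std(axis=0))   # ~[1. 1.]
```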
piyueh/SEM-Toolbox
solutions/chapter02/exercise05.ipynb
mit
[ "Exercise 5", "import numpy\nfrom matplotlib import pyplot\n% matplotlib inline\n\nimport os, sys\nsys.path.append(os.path.split(os.path.split(os.getcwd())[0])[0])\n\nimport utils.grids.one_d as assembly\nimport utils.quadrature as quad\n\nimport warnings\nwarnings.filterwarnings('error')", "Consider the following problem:\n$$\n\\frac{d^2 u}{d x^2} - \\lambda u = f\n$$\nwhere $x$ is defined as $x \\in [-1, 1]$. The weak form with test function $\\nu$ before lifting is\n$$\n\\int_{-1}^{1}\\frac{d\\nu}{dx}\\frac{du}{dx} dx + \n\\lambda\\int_{-1}^{1}\\nu u dx =\n\\nu(1)\\frac{du}{dx}(1) - \n\\nu(-1)\\frac{du}{dx}(-1) -\n\\int_{-1}^{1} \\nu f dx\n$$\nFor the continuous Galerkin method, the matrix form of the problem is:\n$$\n\\left(\\mathbb{L} + \\lambda\\mathbb{M}\\right)\\mathbf{u} = - \\mathbf{f} + \\mathbf{BC_N}\n$$\n$\\mathbf{BC_N}$ represents Neumann type boundary conditions, and \n$$\n\\mathbf{BC_N} =\n\\left\\{\n\\begin{array}{ll}\n\\left.-\\frac{du}{dx}\\right|_{x=-1} & \\text{, if i represents the node }x=-1 \\\\\n\\left.\\frac{du}{dx}\\right|_{x=1} & \\text{, if i represents the node }x=1 \\\\\n0 & \\text{, else}\n\\end{array}\n\\right.\n$$\nIf any Dirichlet BC is given at any boundary point,\nthen $\\frac{du}{dx}$ is normally unknown at that point.\nHowever, after lifting the problem,\nthe unknown $\\frac{du}{dx}$ are eliminated.\nSo those unknown $\\frac{du}{dx}$ will not show up in the matrix system.\nHere we define some wrapper functions first for our convenience.", "def solver(N, P, f, DBCs, ends):\n \"\"\"solve a 1D Helmholtz PDE\"\"\"\n \n g = assembly.DecomposeAssembly(N, ends, \"CommonJacobi\", P)\n \n rhs = - g.weak_rhs(f)\n A = g.wL + g.M\n\n rhs -= A[:, g.l2g[0][0]].A.flatten() * DBCs[0]\n rhs -= A[:, g.l2g[-1][-1]].A.flatten() * DBCs[1]\n \n A = numpy.delete(A, [g.l2g[0][0], g.l2g[-1][-1]], 0)\n A = numpy.delete(A, [g.l2g[0][0], g.l2g[-1][-1]], 1)\n rhs = numpy.delete(rhs, [g.l2g[0][0], g.l2g[-1][-1]])\n \n ui = numpy.linalg.solve(A, rhs)\n ui = 
numpy.insert(ui, g.l2g[0][0], DBCs[0])\n ui = numpy.insert(ui, g.l2g[-1][-1], DBCs[1])\n g.set_coeffs(ui)\n \n return g\n\ndef H1(g, exact, d_exact, l):\n \"\"\"calculate energy (H1) norm\"\"\"\n \n qd = quad.GaussLobattoJacobi(15)\n ans = 0\n for e in g.elems:\n f = lambda x: (e.derivative(x) - d_exact(x))**2 + (e(x) - exact(x))**2\n ans += numpy.sqrt(qd(f, xmin=e.ends[0], xMax=e.ends[1]))\n return ans / numpy.sqrt(l)\n\ndef ex_wrapper(N, P, f, exact, d_exact, ends):\n \"\"\"exercise wrapper\"\"\"\n \n Ndof = {\"h\": numpy.zeros_like(N, dtype=numpy.int),\n \"p\": numpy.zeros_like(P, dtype=numpy.int)}\n err = {\"h\": numpy.zeros_like(N, dtype=numpy.float64),\n \"p\": numpy.zeros_like(P, dtype=numpy.float64)}\n \n DBCs = [exact(ends[0]), exact(ends[1])]\n L = ends[1] - ends[0]\n \n for i, Ni in enumerate(N):\n g = solver(Ni, 1, f, DBCs, ends)\n Ndof[\"h\"][i] = g.nModes - 2 # values on the two boundary nodes are known\n err[\"h\"][i] = H1(g, exact, d_exact, L)\n \n for i, Pi in enumerate(P):\n g = solver(2, Pi, f, DBCs, ends)\n Ndof[\"p\"][i] = g.nModes - 2 # values on the two boundary nodes are known\n err[\"p\"][i] = H1(g, exact, d_exact, L)\n \n return Ndof, err", "Case 1\nNow we consider an all-Dirichlet BCs case (with $\\lambda=1$):\n$$\nf(x) = -(\\pi^2 + \\lambda)\\sin(\\pi x)\n$$\nwhere the exact solution is $u(x) = \\sin(\\pi x)$.\nPlot the convergence order for both h- and p-type expansions.", "N = numpy.array([2 + 4 * i for i in range(25)])\nP = numpy.array([1, 3, 5, 7, 9, 11, 13, 15])\n\nf = lambda x: - (1 + numpy.pi * numpy.pi) * numpy.sin(numpy.pi * x)\nexact = lambda x: numpy.sin(numpy.pi * x)\nd_exact = lambda x: numpy.pi * numpy.cos(numpy.pi * x)\n\nNdof, err = ex_wrapper(N, P, f, exact, d_exact, [-1, 1])\n\npyplot.loglog(Ndof[\"h\"], err[\"h\"], 'ko-', lw=1.5, label=\"h-type expansion\")\npyplot.loglog(Ndof[\"p\"], err[\"p\"], 'k^-', lw=1.5, label=\"p-type expansion\")\npyplot.title(\"Convergence of h- and p-type expansion, \\nlog-log plot, and \" + \n 
r\"$u(x)=\\sin(\\pi x)$\")\npyplot.xlabel(r\"$N_{dof}$\")\npyplot.ylabel(r\"Energy norm, $\\left|\\left|error\\right|\\right|_E$\")\npyplot.legend(loc=0);\n\npyplot.semilogy(Ndof[\"h\"], err[\"h\"], 'ko-', lw=1.5, label=\"h-type expansion\")\npyplot.semilogy(Ndof[\"p\"], err[\"p\"], 'k^-', lw=1.5, label=\"p-type expansion\")\npyplot.title(\"Convergence of h- and p-type expansion, \\n semi-log plot, and \" + \n r\"$u(x)=\\sin(\\pi x)$\")\npyplot.xlabel(r\"$N_{dof}$\")\npyplot.ylabel(r\"Energy norm, $\\left|\\left|error\\right|\\right|_E$\")\npyplot.legend(loc=0);", "Case 2\nNow we consider cases with non-smooth exact solutions: $u(x) = x^{\\alpha}$, where $\\alpha=0.6,\\ 0.9,\\ 1.2$. The right-hand side is then $f(x) = \\alpha(\\alpha-1)x^{\\alpha-2}-x^{\\alpha}$. We still use all-Dirichlet BCs here.\nPlot the convergence order for both h- and p-type expansion.\nBecause the singularity of the RHS is at $x=0$, the integration of the weak-form RHS on the first element will use Gauss-Radau-Jacobi quadrature (with $x=1$ included). 
Hence we have to modify the code.", "def solver(N, P, f, DBCs, ends):\n \"\"\"solve a 1D Helmholtz PDE\"\"\"\n\n g = assembly.DecomposeAssembly(N, ends, \"CommonJacobi\", P)\n \n rhs = numpy.zeros(g.nModes, dtype=numpy.float64)\n rhs[g.l2g[0]] -= g.elems[0].weak_rhs(f, 25, \"GaussRadauJacobi\", end=1)\n for i, e in enumerate(g.elems[1:]):\n rhs[g.l2g[i+1]] -= e.weak_rhs(f, 20)\n \n A = g.wL + g.M\n\n rhs -= A[:, g.l2g[0][0]].A.flatten() * DBCs[0]\n rhs -= A[:, g.l2g[-1][-1]].A.flatten() * DBCs[1]\n \n A = numpy.delete(A, [g.l2g[0][0], g.l2g[-1][-1]], 0)\n A = numpy.delete(A, [g.l2g[0][0], g.l2g[-1][-1]], 1)\n rhs = numpy.delete(rhs, [g.l2g[0][0], g.l2g[-1][-1]])\n \n ui = numpy.linalg.solve(A, rhs)\n ui = numpy.insert(ui, g.l2g[0][0], DBCs[0])\n ui = numpy.insert(ui, g.l2g[-1][-1], DBCs[1])\n g.set_coeffs(ui)\n \n return g\n\ndef H1(g, exact, d_exact, l):\n \"\"\"calculate energy (H1) norm\"\"\"\n \n qd = quad.GaussRadauJacobi(25, end=1)\n f = lambda x: ((g.elems[0].derivative(x) - d_exact(x))**2 +\n (g.elems[0](x) - exact(x))**2)\n ans = numpy.sqrt(qd(f, xmin=g.elems[0].ends[0], xMax=g.elems[0].ends[1]))\n\n qd = quad.GaussLobattoJacobi(20)\n for e in g.elems[1:]:\n f = lambda x: (e.derivative(x) - d_exact(x))**2 + (e(x) - exact(x))**2\n ans += numpy.sqrt(qd(f, xmin=e.ends[0], xMax=e.ends[1]))\n return ans / numpy.sqrt(l)\n\nN = numpy.array([2 + 4 * i for i in range(25)])\nP = numpy.array([1 + 2 * i for i in range(11)])\n\nAlpha = [0.6, 0.9, 1.2]\nNdof = numpy.empty(3, dtype=dict)\nerr = numpy.empty(3, dtype=dict)\n\nfor i, alpha in enumerate(Alpha):\n exact = lambda x: x**alpha\n d_exact = lambda x: alpha * (x**(alpha-1))\n f = lambda x: alpha * (alpha - 1) * (x**(alpha-2)) - x**alpha\n Ndof[i], err[i] = ex_wrapper(N, P, f, exact, d_exact, [0, 1])\n\npyplot.loglog(Ndof[0][\"h\"], err[0][\"h\"], 'ko-', lw=1.5, label=r\"$\\alpha=0.6$\")\npyplot.loglog(Ndof[1][\"h\"], err[1][\"h\"], 'k^-', lw=1.5, label=r\"$\\alpha=0.9$\")\npyplot.loglog(Ndof[2][\"h\"], 
err[2][\"h\"], 'ks-', lw=1.5, label=r\"$\\alpha=1.2$\")\npyplot.title(\"Convergence of h-expansion, \\nlog-log plot, and \" + \n r\"$u(x)=x^{\\alpha}$\")\npyplot.xlabel(r\"$N_{dof}$\")\npyplot.ylabel(r\"Energy norm, $\\left|\\left|error\\right|\\right|_E$\")\npyplot.legend(loc=0);\n\npyplot.loglog(Ndof[0][\"p\"], err[0][\"p\"], 'ko-', lw=1.5, label=r\"$\\alpha=0.6$\")\npyplot.loglog(Ndof[1][\"p\"], err[1][\"p\"], 'k^-', lw=1.5, label=r\"$\\alpha=0.9$\")\npyplot.loglog(Ndof[2][\"p\"], err[2][\"p\"], 'ks-', lw=1.5, label=r\"$\\alpha=1.2$\")\npyplot.title(\"Convergence of p-expansion, \\nlog-log plot, and \" + \n r\"$u(x)=x^{\\alpha}$\")\npyplot.xlabel(r\"$N_{dof}$\")\npyplot.ylabel(r\"Energy norm, $\\left|\\left|error\\right|\\right|_E$\")\npyplot.legend(loc=0);" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
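The H1 (energy) norm used in the exercise above combines the L2 errors of the solution and its derivative. As a sanity check independent of the `utils.quadrature` classes, it can be approximated with a composite trapezoidal rule; `h1_error` here is a hypothetical helper, not part of the repository:

```python
import numpy as np

def h1_error(u, du, exact, d_exact, a, b, n=4001):
    """Approximate ||u - exact||_H1 on [a, b]:
    sqrt( integral of (u - exact)^2 + (u' - exact')^2 dx ),
    using a composite trapezoidal rule on n points."""
    x = np.linspace(a, b, n)
    g = (u(x) - exact(x)) ** 2 + (du(x) - d_exact(x)) ** 2
    integral = np.sum((g[:-1] + g[1:]) * np.diff(x)) / 2.0
    return np.sqrt(integral)

exact = lambda x: np.sin(np.pi * x)
d_exact = lambda x: np.pi * np.cos(np.pi * x)

# The exact solution itself has zero H1 error; the zero function does not.
print(h1_error(exact, d_exact, exact, d_exact, -1.0, 1.0))
```

For the zero function on [-1, 1] the integrand is sin^2(pi x) + pi^2 cos^2(pi x), whose integral is 1 + pi^2, so the norm should be close to sqrt(1 + pi^2).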
google-research/google-research
rllim/main_local_dynamics_recovery.ipynb
apache-2.0
[ "Copyright 2019 The Google Research Authors.\nLicensed under the Apache License, Version 2.0 (the \"License\");\nyou may not use this file except in compliance with the License.\nYou may obtain a copy of the License at\n http://www.apache.org/licenses/LICENSE-2.0\n\nUnless required by applicable law or agreed to in writing, software\ndistributed under the License is distributed on an \"AS IS\" BASIS,\nWITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\nSee the License for the specific language governing permissions and\nlimitations under the License.\nRecovering Local Dynamics using RL-LIM with Synthetic Datasets\n\nJinsung Yoon, Sercan O Arik, Tomas Pfister, \"RL-LIM: Reinforcement Learning-based Locally Interpretable Modeling\", arXiv preprint arXiv:1909.12367 (2019) - https://arxiv.org/abs/1909.12367.\n\nThis notebook describes the user-guide of recovering local dynamics with synthetic datasets using \"Reinforcement Learning based Locally Interpretable Modeling (RL-LIM)\".\nRL-LIM efficiently utilizes the small representational capacity of locally interpretable models by training with a small number of training samples that are determined to have the highest value contribution to the fitting of a locally interpretable model. In order to select these highest-value instances, we train instance-wise weight estimators (modeled with deep neural networks) using a reinforcement signal that quantifies the fidelity metric (i.e. how well does the model approximate the black-box model predictions). RL-LIM constructs locally interpretable models with the selected highest-value instances and outputs instance-wise explanations and interpretable predictions.\nOn real-world datasets it is challenging to directly evaluate the explanation quality of the locally interpretable models due to the absence of ground-truth explanations. 
Thus we initially focus on synthetic datasets (with known ground-truth explanations (local dynamics)) to directly evaluate how well the locally interpretable models can recover the underlying local dynamics.\nPrerequisite\n\nClone https://github.com/google-research/google-research.git to the current directory.", "import os\nfrom git import Repo\n\n# Current working directory\nrepo_dir = os.getcwd() + '/repo'\n\nif not os.path.exists(repo_dir):\n os.makedirs(repo_dir)\n\n# Clones github repository\nif not os.listdir(repo_dir):\n git_url = \"https://github.com/google-research/google-research.git\"\n Repo.clone_from(git_url, repo_dir)", "Necessary packages and function calls\n\nridge: Ridge regression model used as an interpretable model.\nload_synthetic_data: Data loader for synthetic datasets (Syn1, Syn2, Syn3).\nrllim: RL-LIM class for training instance-wise weight estimator.\nrllim_metrics: Evaluation metrics for the locally interpretable models in various metrics (fidelity and absolute weight difference (AWD)).", "import numpy as np\nfrom sklearn.linear_model import Ridge\n\n# Sets current directory\nos.chdir(repo_dir)\n\nfrom rllim.data_loading import load_synthetic_data\nfrom rllim import rllim\nfrom rllim.rllim_metrics import awd_metric, fidelity_metrics, plot_result", "Data loading\n\nLoad training, probe and testing datasets and ground-truth explanations (local dynamics).\nIn this notebook, we generate synthetic training, probe, and testing datasets.\n\noutputs:\n * x_train, y_train: Training features and labels.\n * x_probe, y_probe: Probe features and labels.\n * x_test, y_test: Testing features and labels.\n * c_test: Ground truth local dynamics of testing samples.", "# Data name: 'Syn1' or 'Syn2' or 'Syn3' in this notebook\n\n# X ~ N(0, I)\n# Syn1: Y = X_0 + 2 X_1 if X_10 < 0 & Y = X_2 + 2 X_3 if X_10 >= 0\n# Syn2: Y = X_0 + 2 X_1 if X_10 + e^{X_11} < 1 & Y = X_2 + 2 X_3 if X_10 + e^{X_11} >= 1\n# Syn3: Y = X_0 + 2 X_1 if X_10 + (X_11)^3 < 0 & Y = X_2 + 
2 X_3 if X_10 + (X_11)^3 >= 0 \n\ndata_name = 'Syn1'\n\n# The number of training, probe and testing samples\ndict_no = dict()\ndict_no['train'] = 1000\ndict_no['probe'] = 100\ndict_no['test'] = 1000\n\n# The number of dimensions\ndict_no['dim'] = 11\n\n# Random seed\nseed = 0\n\n# Loads data\nx_train, y_train, x_probe, y_probe, x_test, y_test, c_test = load_synthetic_data(data_name, dict_no, seed)\n\nprint('Finished data loading.')", "Step 0 & Step 1:\nIn this notebook, we skip Step 0 and Step 1 (of RL-LIM) because we treat y_train, y_probe, and y_test as the predictions of the pre-trained black-box model. We directly use the ground truth function as the black-box model and focus on how well locally interpretable modeling can capture the local dynamics.\n\nStep 0: Black-box model training.\nStep 1: Auxiliary dataset construction.\n\nStep 2: Interpretable baseline training\nTo improve the stability of the instance-wise weight estimator training, an interpretable baseline model is observed to be beneficial. We use a globally interpretable model (in this notebook, we use Ridge regression) optimized to replicate the predictions of the black-box model.\n\nInput: \n\nlocally interpretable model: Ridge regression (we can switch this to linear regression). 
The model must have fit, predict (for regression) or predict_proba (for classification), intercept_ and coef_ as the methods.\n\n\nOutput:\n\nTrained interpretable baseline model: Function that tries to replicate the predictions of the black-box model using globally interpretable model.", "# Defines baseline\nbaseline = Ridge(alpha=1)\n\n# Trains interpretable baseline model\nbaseline.fit(x_train, y_train)\n\nprint('Finished interpretable baseline training.')", "Step 3: Trains instance-wise weight estimator\nWe train an instance-wise weight estimator using training dataset (x_train, y_train) and probe dataset (x_probe, y_probe) using reinforcement learning.\n\nInput: \nNetwork parameters: Set network parameters of instance-wise weight estimator.\n\nLocally interpretable model: Ridge regression (we can switch this to linear regression). The model must have fit, predict (for regression) or predict_proba (for classification), intercept_ and coef_ as the methods.\n\n\nOutput:\n\nInstancewise weight estimator: Function that uses training set and a testing sample as inputs to estimate weights for each training sample to construct locally interpretable model for the testing sample.", "# Instance-wise weight estimator network parameters\nparameters = dict()\nparameters['hidden_dim'] = 100\nparameters['iterations'] = 2000\nparameters['num_layers'] = 5\nparameters['batch_size'] = 900\nparameters['batch_size_inner'] = 10\nparameters['lambda'] = 1.0\n\n# Checkpoint file name\ncheckpoint_file_name = './tmp/model.ckpt'\n\n# Defines locally interpretable model\ninterp_model = Ridge(alpha=1)\n\n# Initializes RL-LIM\nrllim_class = rllim.Rllim(x_train, y_train, x_probe, y_probe, parameters, interp_model, baseline, checkpoint_file_name)\n\n# Trains RL-LIM\nrllim_class.rllim_train()\n\nprint('Finished instance-wise weight estimator training.')\n\n## Output functions\n# Instance-wise weight estimator for x_test[0, :]\ndve_out = rllim_class.instancewise_weight_estimator(x_train, 
y_train, x_test[0, :])\n\n# Interpretable predictions (test_y_fit) and instance-wise explanations (test_coef) for x_test[0, :]\ntest_y_fit, test_coef = rllim_class.rllim_interpreter(x_train, y_train, x_test[0, :], interp_model)\n\nprint('Finished instance-wise weight estimations, interpretable predictions, and local explanations.')", "Step 4: Interpretable inference\nUnlike Step 3 (training instance-wise weight estimator), we use a fixed instance-wise weight estimator (without the sampler and interpretable baseline) and merely fit the locally interpretable model at inference. Given the test instance, we obtain the selection probabilities from the instance-wise weight estimator, and using these as the weights, we fit the locally interpretable model via weighted optimization. \n\nInput: \n\nLocally interpretable model: Ridge regression (we can switch this to linear regression). The model must have fit, predict (for regression) or predict_proba (for classification), intercept_ and coef_ as the methods.\n\n\nOutput:\n\nInstance-wise explanations (test_coef): Estimated local dynamics for testing samples using trained locally interpretable model.\nInterpretable predictions (test_y_fit): Local predictions for testing samples using trained locally interpretable model.", "# Trains locally interpretable models and output instance-wise explanations (test_coef) and\n# interpretable predictions (test_y_fit) \ntest_y_fit, test_coef = rllim_class.rllim_interpreter(x_train, y_train, x_test, interp_model)\n\nprint('Finished interpretable predictions and local explanations.')", "Evaluation\nWe use two metrics (fidelity and average weight difference (AWD)) to evaluate the locally interpretable models.\n\nFidelity: Difference between black-box model predictions (y_test_hat) and interpretable predictions (test_y_fit). 
In this notebook, we use Mean Absolute Error (MAE) as the metric.\nAverage Weight Difference (AWD): Mean absolute difference between ground truth local dynamics (test_c) and instance-wise explanations (test_coef).", "# Fidelity\nmae = fidelity_metrics (y_test, test_y_fit, metric='mae')\nprint('fidelity of RL-LIM in terms of MAE: ' + str(np.round(mae, 4)))\n\n# AWD\nawd = awd_metric (c_test, test_coef)\nprint('AWD of RL-LIM: ' + str(np.round(awd, 4)))", "1. Fidelity plot\nWe visualize the fidelity (in terms of MAE) of the testing samples based on the distance from the boundary where the local dynamics change (in percentile).\n\nx-axis: Distance from the boundary where the local dynamics change (in percentile).\ny-axis: Fidelity (in terms of MAE).", "# Reports fidelity plot\nplot_result(x_test, data_name, y_test, test_y_fit, c_test, test_coef, metric='mae', criteria='Fidelity')", "2. AWD plot\nWe visualize the AWD of the testing samples based on the distance from the boundary where the local dynamics change (in percentile).\n\nx-axis: Distance from the boundary where the local dynamics change (in percentile).\ny-axis: AWD.", "# Reports AWD plot\nplot_result(x_test, data_name, y_test, test_y_fit, c_test, test_coef, metric='mae', criteria='AWD')" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
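At inference, RL-LIM fits the locally interpretable model by weighted optimization with the estimated instance-wise weights. The core fit has a closed form as weighted ridge regression; the sketch below uses made-up uniform weights and synthetic data, and is illustrative rather than the `rllim` implementation:

```python
import numpy as np

def weighted_ridge(X, y, w, alpha=1.0):
    """Solve argmin_beta sum_i w_i (y_i - x_i . beta)^2 + alpha ||beta||^2
    via the weighted normal equations (no intercept, for simplicity)."""
    Xw = X * w[:, None]                        # row-scale by the weights
    A = X.T @ Xw + alpha * np.eye(X.shape[1])  # X^T W X + alpha I
    b = Xw.T @ y                               # X^T W y
    return np.linalg.solve(A, b)

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
beta_true = np.array([1.0, 2.0, 0.0])
y = X @ beta_true                              # noiseless targets
beta = weighted_ridge(X, y, np.ones(200), alpha=1e-6)
print(np.round(beta, 3))
```

With highly non-uniform weights, the fit concentrates on the high-weight instances, which is exactly what lets a simple linear model track the black-box model locally.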
hektor-monteiro/python-notebooks
aula-1_programacao_basica.ipynb
gpl-2.0
[ "Programação básica\nprogramas são um conjunto de instruções para o computador\nvariáveis\nquantidades são representadas por variáveis no mesmo sentido em que são aplicadas em álgebra\npodemos alocar valores para variáveis:", "x = 1.0 # isso é uma alocação\n\nx, y = 3.0, 4.5 # python permite fazer alocações múltiplas\n\nu, v = 2*x+1, (x+y)/3 # podemos usar expressões matemáticas nas alocações", "vamos usar variáveis extensivamente para representar grandezas físicas\nem python podemos usar nomes com várias letras ou mesmo números e letras\nvariáveis não podem começar com um número\nalguns caracteres como & não podem ser usados em nomes de variáveis pois são reservados\npython distingue entre maiúsculas e minúsculas, logo, x é diferente de X e podem ter valores distintos\ndê a suas variáveis nomes que tenham significado relacionado ao que elas são. Por exemplo, se você vai usar uma variável para energia, não a denomine de E, use energia ou ener\nTipos básicos\nem física usamos três tipos básicos de variáveis: inteiro, ponto flutuante e complexo\nInteiro", "print (1 + 1)\na = 4\ntype(a)", "Floats", "c = 2.1\nprint('the value in c is a ',type(c))", "Complexos\nnote que em python a unidade de complexo é o j e não i como usual", "a = 1.5 + 0.5j\nprint (a.real)\nprint (a.imag)\ntype(1. + 0j)", "Booleanos", "print (3 > 4)\ntest = (3 > 4)\nprint (test)\ntype(test)", "strings", "x = 'isso é uma string'\nprint (x)\nprint ('a variável x é ',type(x))\n\ny = '1.23'\nprint ('a variável y é ',type(y))\n", "é importante tentar sempre usar o tipo de variável adequado pois um integer ocupa menos memória que um float e este ocupa menos que um complexo. Em programas que operam com milhões ou bilhões de números essa diferença pode ficar relevante\ninteiros não sofrem com problemas de precisão\nem python variáveis não precisam ser declaradas com tipo definido antes de serem usadas. 
As variáveis definem o tipo baseado no input que recebem \noperações básicas\nem python as operações aritiméticas funcionam como esperado em algebra, seguindo as regras usuais", "print (3 + 5) # adição\nprint (10 - 5.) # subtração\nprint (7 * 3.) # multiplicação\nprint (2**10) # potencia\n\nprint (7./2)\n\nprint (14 % 3) # retorna o resto da divisão\nprint (14.3 % 3)\n\nprint (14//3) # retorna a parte inteira da divisão\nprint (14/3.)", "python também permite algumas construções que podem ter utilidade em programas mais complexos:", "x = 0\n\nx += 1 # adicione 1 a x\nprint (x)\n\nx -= 4 # subtraia 4 de x\nprint (x)\n\nx *= -2.6 # multiplique x por -2.6\nprint (x)\n\ny = 2\n\nx /= 5*y # divide por 5 vezes y\nprint (x)\n\nx //= 0.4 # divide x por 3.4 e arredonda para inteiro\nprint (x)", "Entrada e saída\njá vimos acima uma das principais formas de saída: print\nexitem diferenças do print em python 2.7 e 3: fiquem atentos!", "# um exemplo de uso do print\n\nx=1\ny=2\n\nprint(\"o valor de x é\",x,'e o valor de y é',y)", "input\nfazer input é um pouco mais elaborado\no formato básico é:", "x = input('Entre com o valor de x:')\n\nprint ('o valor de x é', x, 'e o seu tipo é', type(x))", "ATENÇÃO: nessa função também existe diferença entre python 2.7 e 3.\nEm python 3 input sempre recebe a variável como string e a definição de tipo de variável é feita depois. 
Em 2.7 a variável é definida com base no que foi entrado.", "x = input('Entre com o valor de x:')\nprint ('o valor de x é', x, 'e o seu tipo é', type(x))\n\n# quando pre-definimos o tipo de entrada desejada\nx = float(input('Entre com o valor de x:'))\nprint ('ovalor de x é', x, 'e o seu tipo é', type(x))", "módulos, pacotes e funções\nmuitas operações que vamos precisar fazer são mais complicadas que pura aritmética\npython tem pacotes e funções prontas para lidar com boa parte dessas complicações\nnormalmente as coisas são organizadas em pacotes com um dado nome, que contém coisas relacionadas.\npor exemplo o pacote math contém funções matemáticas:", "from math import log, log10 # esse é o formato para importar pacotes\n\nprint (log(100.0), log10(100.0))\n\n# se tentarmos usar uma função não importada teremos um erro\n\nprint (sqrt(4.0))\n\n# uma solução é importar todo o pacote\nfrom math import *\nprint (sqrt(4.0))", "importar todo o pacote deve ser evitado por economia de memória. Além disso pode levar a conflitos inesperados de nomes de variáveis.\nExercícios:\n1 - Suponha que a posição de um ponto no espaço bidimensional é dado em coordenadas polares $r$, $\\theta$ e queremos convertê-lo em coordenadas cartesianas x , y. Escreva um programa para fazer essa conversão. Para isso faça:\n\nObter do usuário os valores de $r$ e $\\theta$.\nconverter esses valores em coordenadas cartesianas usando as fórmulas: $x = rcos(\\theta)$ e $y = rsin(\\theta)$\n\n2 - Escreva um programa que compute os principais parametros de um lançamento obliquo dado o módulo da velocidade inicial e o angulo de lançamento. Imprima a saída de maneira que os resultados sejam descritos claramente.\n3 - Escreva um programa que cheque se um numero dado é par ou impar.\n4 - Exercício 1.8: Atualizar variável no prompt de comando: Chame Python interativamente e execute as etapas a seguir.\n\nInicialize uma variável x para 2\nAdicione 3 a x. 
Imprima o resultado.\nQue tipo de variável é x?\nImprima o resultado de x+12 e de (x+1.0)2. (Observe como os parenteses e a notação fazem a diferença).\nQue tipo de variável é x em cada caso?" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
timothyb0912/pylogit
examples/.ipynb_checkpoints/mlogit_Benchmark--Heating-checkpoint.ipynb
bsd-3-clause
[ "Mlogit Benchmark 2: Kenneth Train's Heating Data\nThe purpose of this notebook is to:\n<ol>\n <li> Demonstrate the use of the pyLogit to estimate conditional logit models.</li>\n <li> Benchmark the results reported pyLogit against those reported by the mlogit package.</li>\n</ol>\n\nThe models estimated in this notebook will be the same models detailed in \"Kenneth Train’s exercises using the mlogit package for R.\" In particular, the following models will be estimated:\n<ol>\n <li> The model with installation cost and operating cost, without intercepts (p.2). \n <pre> mlogit(depvar~ic+oc|0, H) </pre>\n </li>\n <li> The model that imposes the constraint that r = 0.12 (such that wtp = 8.33) (p. 4).\n <pre> H$lcc=H$ic+H$oc/0.12\n mlcc <- mlogit(depvar~lcc|0, H)\n </pre>\n </li>\n <li> The model with installation cost, operating cost, and all intercepts except that of the \"hp\" alternative (p.5).\n <pre> mc <- mlogit(depvar~ic+oc, H, reflevel = 'hp')\n </pre>\n </li>\n <li> The model with installation cost divided by income, operating cost, and all intercepts except that of the \"hp\" alternative (p. 7).\n <pre> mi <- mlogit(depvar~oc+I(ic/income), H, reflevel = 'hp')\n </pre>\n </li>\n <li> The model with intallation costs, operating costs, alternative specific coefficients for income, and all intercepts except that of the \"hp\" alternative (p.7).\n <pre> mi2 <- mlogit(depvar~oc+ic|income, H, reflevel=\"hp\")\n </pre>\n </li>\n</ol>\n\n1. Import Needed libraries", "from collections import OrderedDict # For recording the model specification \n\nimport pandas as pd # For file input/output\nimport numpy as np # For vectorized math operations\n\nimport pylogit as pl # For MNL model estimation and\n # conversion from wide to long format\n ", "2. 
Load and look at the required datasets", "# Load the Heating data, noting that the data is in wide data format\nwide_heating_df = pd.read_csv(\"../data/heating_data_r.csv\")\n\n\n# Look at the raw Heating data\nwide_heating_df.head().T", "3. Convert the wide format dataframes to long format\n3a. Perform needed data cleaning\nNoting that the column denoting the choice (depvar) contains string objects, we need to convert the choice column into an integer based column.", "# Convert the choice column for the Train data into integers\n# Note that we will use a 1 to denote 'choice1' and a 2 to \n# represent 'choice2'\nwide_heating_df[\"choice\"] = wide_heating_df[\"depvar\"].map(dict(zip(['gc', 'gr',\n 'ec', 'er',\n 'hp'],\n range(1,6))))\n", "For the Heating data, all of the alternatives are available in all choice situations. Note that, in general, this is not the case for choice data. As such we need to have columns that denote the availability of each alternative for each individual.\nThese columns will all be filled with ones for each row in the wide format dataframes because all of the alternatives are always available for each individual.", "# Create the needed availability columns for the Heating data\nfor i in range(1, 6):\n wide_heating_df[\"availability_{}\".format(i)] = 1\n", "3b. Convert the Heating dataset to long format", "# Look at the columns that we need to account for when converting from\n# the wide data format to the long data format.\nwide_heating_df.columns\n\n##########\n# Define lists of the variables pertaining to each variable type\n# that we need to account for in the data format transformation\n##########\n# Determine the name for the alternative ids in the long format \n# data frame\nheating_alt_id = \"alt_id\"\n# Determine the column that denotes the id of what we're treating\n# as individual observations, i.e. 
the choice situations.\nheating_obs_id_col = \"idcase\"\n# Determine what column denotes the choice that was made\nheating_choice_column = \"choice\"\n\n\n# Create the list of observation specific variables\nheating_ind_variables = [\"depvar\", \"income\", \"agehed\", \"rooms\", \"region\"]\n\n# Specify the variables that vary across individuals and some or all alternatives\n# Note that each \"main\" key should be the desired name of the column in the long\n# data format. The inner keys shoud be the alternative ids that that have some\n# value for the \"main\" key variable.\nheating_alt_varying_variables = {\"installation_costs\": {1: \"ic.gc\",\n 2: \"ic.gr\",\n 3: \"ic.ec\",\n 4: \"ic.er\",\n 5: \"ic.hp\"},\n \"operating_costs\": {1: \"oc.gc\",\n 2: \"oc.gr\",\n 3: \"oc.ec\",\n 4: \"oc.er\",\n 5: \"oc.hp\"},\n }\n\n# Specify the availability variables\nheating_availability_variables = OrderedDict()\nfor alt_id, var in zip(range(1, 6),\n [\"availability_{}\".format(i) for i in range(1, 6)]):\n heating_availability_variables[alt_id] = var\n\n\n##########\n# Actually perform the conversion to long format\n##########\nlong_heating_df = pl.convert_wide_to_long(wide_data=wide_heating_df,\n ind_vars=heating_ind_variables,\n alt_specific_vars=heating_alt_varying_variables,\n availability_vars=heating_availability_variables,\n obs_id_col=heating_obs_id_col,\n choice_col=heating_choice_column,\n new_alt_id_name=heating_alt_id)\n\n# Look at the long format Heating data\nlong_heating_df.head()", "4. 
Create desired variables", "# Create the life-cycle cost variable needed for model 2 where\n# we assume the discount rate, r, is 0.12.\nlong_heating_df[\"life_cycle_cost\"] = (long_heating_df[\"installation_costs\"] + \n long_heating_df[\"operating_costs\"] / 0.12)\n\n# Create the installation cost divided by income variable\nlong_heating_df[\"installation_cost_burden\"] = (long_heating_df[\"installation_costs\"] /\n long_heating_df[\"income\"])\n", "For numeric stability reasons, it is advised that one scale one's variables so that the estimated coefficients are similar in absolute magnitude, and if possible so that the estimated coefficients are close to 1 in absolute value (in other words, not terribly tiny or extremely large). This is done for the heating data below.\n5. Specify and estimate the desired models needed for benchmarking\n5a. The model with installation cost and operating cost, without intercepts", "# Create the model specification\nmodel_1_spec = OrderedDict()\nmodel_1_names = OrderedDict()\n\n# Note that for the specification dictionary, the\n# keys should be the column names from the long format\n# dataframe and the values should be a list with a combination\n# of alternative id's and/or lists of alternative id's. There \n# should be one element for each beta that will be estimated \n# in relation to the given column. 
Lists of alternative id's\n# mean that all of the alternatives in the list will get a\n# single beta for them, for the given variable.\n# The names dictionary should contain one name for each\n# element (that is each alternative id or list of alternative \n# ids) in the specification dictionary value for the same \n# variable\n\nmodel_1_spec[\"installation_costs\"] = [range(1, 6)]\nmodel_1_names[\"installation_costs\"] = [\"installation_costs\"]\n\nmodel_1_spec[\"operating_costs\"] = [range(1, 6)]\nmodel_1_names[\"operating_costs\"] = [\"operating_costs\"]\n\n\n# Create an instance of the MNL model class\nmodel_1 = pl.create_choice_model(data=long_heating_df,\n alt_id_col=heating_alt_id,\n obs_id_col=heating_obs_id_col,\n choice_col=heating_choice_column,\n specification=model_1_spec,\n model_type=\"MNL\",\n names=model_1_names)\n\n# Estimate the given model, starting from a point of all zeros\n# as the initial values.\nmodel_1.fit_mle(np.zeros(2), method='newton-cg')\n\n# Look at the estimation summaries\nmodel_1.get_statsmodels_summary()\n\n# Look at the 'standard summary' since it includes robust p-values\nmodel_1.print_summaries()", "Compare with mlogit\nThe call from mlogit was as follows:\n<pre>\nCall:\nmlogit(formula = depvar ~ ic + oc | 0, data = H, method = \"nr\", \n print.level = 0)\n\nFrequencies of alternatives:\n ec er gc gr hp \n0.071111 0.093333 0.636667 0.143333 0.055556 \n\nnr method\n4 iterations, 0h:0m:0s \ng'(-H)^-1g = 1.56E-07 \ngradient close to zero \n\nCoefficients :\n Estimate Std. Error t-value Pr(>|t|) \nic -0.00623187 0.00035277 -17.665 < 2.2e-16 \\*\\*\\*\noc -0.00458008 0.00032216 -14.217 < 2.2e-16 \\*\\*\\*\n\\---\nSignif. codes: 0 ‘\\*\\*\\*’ 0.001 ‘\\*\\*’ 0.01 ‘\\*’ 0.05 ‘.’ 0.1 ‘ ’ 1\n\nLog-Likelihood: -1095.2\n</pre>\nAs can be seen, the estimates, standard errors, t-values, and log-likelihood agree. 
The p-values differ but this is because mlogit calculates its p-values based on a t-distribution whereas pyLogit uses an asymptotic normal distribution.\n5b. The model that imposes the constraint that the discount rate, r = 0.12, still without intercepts.", "# Create the model specification\nmodel_2_spec = OrderedDict()\nmodel_2_names = OrderedDict()\n\nmodel_2_spec[\"life_cycle_cost\"] = [range(1, 6)]\nmodel_2_names[\"life_cycle_cost\"] = [\"installation_costs\"]\n\n# Create an instance of the MNL model class\nmodel_2 = pl.create_choice_model(data=long_heating_df,\n alt_id_col=heating_alt_id,\n obs_id_col=heating_obs_id_col,\n choice_col=heating_choice_column,\n specification=model_2_spec,\n model_type=\"MNL\",\n names=model_2_names)\n\n# Estimate the given model, starting from a point of all zeros\n# as the initial values.\nmodel_2.fit_mle(np.zeros(1), method='newton-cg')\n\n# Look at the estimation summaries\nmodel_2.get_statsmodels_summary()\n", "Compare with mlogit\nLook at the corresponding results from mlogit:\n<pre>\nCall:\nmlogit(formula = depvar ~ lcc | 0, data = H, method = \"nr\", print.level = 0)\n\nFrequencies of alternatives:\n ec er gc gr hp \n0.071111 0.093333 0.636667 0.143333 0.055556 \n\nnr method\n5 iterations, 0h:0m:0s \ng'(-H)^-1g = 9.32E-05 \nsuccessive function values within tolerance limits \n\nCoefficients :\n Estimate Std. Error t-value Pr(>|t|) \nlcc -7.1585e-04 4.2761e-05 -16.741 < 2.2e-16 ***\n---\nSignif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1\n\nLog-Likelihood: -1248.7\n</pre>\n\nAs before, all computed values agree except for the p-values, which we already know to be different due to the distribution being used to compute the p-values (t-distribution vs normal distribution).\n5c. 
The model with installation cost, operating cost, and all intercepts except that of the \"hp\" alternative", "# Create the model specification\nmodel_3_spec = OrderedDict()\nmodel_3_names = OrderedDict()\n\nmodel_3_spec[\"intercept\"] = range(1, 5)\nmodel_3_names[\"intercept\"] = [\"ASC: {}\".format(x) \n for x in [\"gc\", \"gr\", \"ec\", \"er\"]]\n\nmodel_3_spec[\"installation_costs\"] = [range(1, 6)]\nmodel_3_names[\"installation_costs\"] = [\"installation_costs\"]\n\nmodel_3_spec[\"operating_costs\"] = [range(1, 6)]\nmodel_3_names[\"operating_costs\"] = [\"operating_costs\"]\n\n# Create an instance of the MNL model class\nmodel_3 = pl.create_choice_model(data=long_heating_df,\n alt_id_col=heating_alt_id,\n obs_id_col=heating_obs_id_col,\n choice_col=heating_choice_column,\n specification=model_3_spec,\n model_type=\"MNL\",\n names=model_3_names)\n\n# Estimate the given model, starting from a point of all zeros\n# as the initial values.\nmodel_3.fit_mle(np.zeros(6))\n\n# Look at the estimation summaries\nmodel_3.get_statsmodels_summary()\n", "Compare with mlogit\nLook at the corresponding results from mlogit:\n<pre>\nCall:\nmlogit(formula = depvar ~ ic + oc, data = H, reflevel = \"hp\", \n method = \"nr\", print.level = 0)\n\nFrequencies of alternatives:\n hp ec er gc gr \n0.055556 0.071111 0.093333 0.636667 0.143333 \n\nnr method\n6 iterations, 0h:0m:0s \ng'(-H)^-1g = 9.58E-06 \nsuccessive function values within tolerance limits \n\nCoefficients :\n Estimate Std. Error t-value Pr(>|t|) \nec:(intercept) 1.65884594 0.44841936 3.6993 0.0002162 ***\ner:(intercept) 1.85343697 0.36195509 5.1206 3.045e-07 ***\ngc:(intercept) 1.71097930 0.22674214 7.5459 4.485e-14 ***\ngr:(intercept) 0.30826328 0.20659222 1.4921 0.1356640 \nic -0.00153315 0.00062086 -2.4694 0.0135333 * \noc -0.00699637 0.00155408 -4.5019 6.734e-06 ***\n---\nSignif. 
codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1\n\nLog-Likelihood: -1008.2\nMcFadden R^2: 0.013691 \nLikelihood ratio test : chisq = 27.99 (p.value = 8.3572e-07)\n</pre>\n\nAgain, all calculated values except for the p-values and McFadden's $R^2$ seem to agree.\nAs noted in the mlogit benchmark # 1 notebook, the mlogit values for McFadden's $R^2$ seem to be incorrect. Based on the formula: $$\begin{aligned} \textrm{McFadden's }R^2 &= 1 - \frac{\mathscr{L}_M}{\mathscr{L}_0} \\\n\textrm{where } \mathscr{L}_M &= \textrm{the fitted log-likelihood} \\\n\mathscr{L}_0 &= \textrm{the null log-likelihood}\end{aligned}$$\nfrom \"Coefficients of Determination for Multiple Logistic Regression Analysis\" by Scott Menard (2000), The American Statistician, 54:1, 17-24, the calculated value of McFadden's R^2 should be 0.303947 as reported by pyLogit.\nNote that the initial log-likelihood and McFadden's $R^2$ are recomputed below for verification of their correctness.", "# Note that every observation in the Heating dataset\n# has 5 available alternatives, therefore the null \n# probability is 0.20\nnull_prob = 0.20\n\n# Calculate how many observations are in the Heating \n# dataset\nnum_heating_obs = wide_heating_df.shape[0]\n\n# Calculate the Heating dataset's null log-likelihood \nnull_heating_log_likelihood = (num_heating_obs * \n np.log(null_prob))\n\n# Determine whether pyLogit's null log-likelihood is correct\ncorrect_null_ll = np.allclose(null_heating_log_likelihood,\n model_3.null_log_likelihood)\nprint \"pyLogit's null log-likelihood is correct:\", correct_null_ll\n\n# Calculate McFadden's R^2\nmcfaddens_r2 = 1 - (model_3.log_likelihood / model_3.null_log_likelihood)\nprint \"McFadden's R^2 is {:.5f}\".format(mcfaddens_r2)\n", "5d. 
The model with installation cost divided by income, operating cost, and all intercepts except that for \"hp\"", "# Create the model specification\nmodel_4_spec = OrderedDict()\nmodel_4_names = OrderedDict()\n\nmodel_4_spec[\"intercept\"] = range(1, 5)\nmodel_4_names[\"intercept\"] = [\"ASC: {}\".format(x) \n for x in [\"gc\", \"gr\", \"ec\", \"er\"]]\n\nmodel_4_spec[\"installation_cost_burden\"] = [range(1, 6)]\nmodel_4_names[\"installation_cost_burden\"] = [\"installation_cost_burden\"]\n\nmodel_4_spec[\"operating_costs\"] = [range(1, 6)]\nmodel_4_names[\"operating_costs\"] = [\"operating_costs\"]\n\n# Create an instance of the MNL model class\nmodel_4 = pl.create_choice_model(data=long_heating_df,\n alt_id_col=heating_alt_id,\n obs_id_col=heating_obs_id_col,\n choice_col=heating_choice_column,\n specification=model_4_spec,\n model_type=\"MNL\",\n names=model_4_names)\n\n# Estimate the given model, starting from a point of all zeros\n# as the initial values.\nmodel_4.fit_mle(np.zeros(6))\n\n# Look at the estimation summaries\nmodel_4.get_statsmodels_summary()", "Compare with mlogit\nLook at the results from mlogit:\n<pre>\nCall:\nmlogit(formula = depvar ~ oc + I(ic/income), data = H, reflevel = \"hp\", \n method = \"nr\", print.level = 0)\n\nFrequencies of alternatives:\n hp ec er gc gr \n0.055556 0.071111 0.093333 0.636667 0.143333 \n\nnr method\n6 iterations, 0h:0m:0s \ng'(-H)^-1g = 1.03E-05 \nsuccessive function values within tolerance limits \n\nCoefficients :\n Estimate Std. Error t-value Pr(>|t|) \nec:(intercept) 1.8700773 0.4364248 4.2850 1.827e-05 ***\ner:(intercept) 1.9340707 0.3599991 5.3724 7.768e-08 ***\ngc:(intercept) 1.9264254 0.2034031 9.4710 < 2.2e-16 ***\ngr:(intercept) 0.4047710 0.2011694 2.0121 0.04421 * \noc -0.0071066 0.0015518 -4.5797 4.657e-06 ***\nI(ic/income) -0.0027658 0.0018944 -1.4600 0.14430 \n---\nSignif. 
codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1\n\nLog-Likelihood: -1010.2\nMcFadden R^2: 0.011765 \nLikelihood ratio test : chisq = 24.052 (p.value = 5.9854e-06)\n</pre>\n\nAgain, all calculated values except for the p-values and McFadden's $R^2$ seem to agree.\n5e. The model with installation costs, operating costs, alternative-specific income coefficients, and all intercepts except that of \"hp\"", "# Create the model specification\nmodel_5_spec = OrderedDict()\nmodel_5_names = OrderedDict()\n\nmodel_5_spec[\"intercept\"] = range(1, 5)\nmodel_5_names[\"intercept\"] = [\"ASC: {}\".format(x) \n for x in [\"gc\", \"gr\", \"ec\", \"er\"]]\n\nmodel_5_spec[\"installation_costs\"] = [range(1, 6)]\nmodel_5_names[\"installation_costs\"] = [\"installation_costs\"]\n\nmodel_5_spec[\"operating_costs\"] = [range(1, 6)]\nmodel_5_names[\"operating_costs\"] = [\"operating_costs\"]\n\nmodel_5_spec[\"income\"] = range(1, 5)\nmodel_5_names[\"income\"] = [\"income_{}\".format(x)\n for x in [\"gc\", \"gr\", \"ec\", \"er\"]]\n\n# Create an instance of the MNL model class\nmodel_5 = pl.create_choice_model(data=long_heating_df,\n alt_id_col=heating_alt_id,\n obs_id_col=heating_obs_id_col,\n choice_col=heating_choice_column,\n specification=model_5_spec,\n model_type=\"MNL\",\n names=model_5_names)\n\n# Estimate the given model, starting from a point of all zeros\n# as the initial values.\nmodel_5.fit_mle(np.zeros(10))\n\n# Look at the estimation summaries\nmodel_5.get_statsmodels_summary()\n", "Compare with mlogit\nLook at the output from mlogit:\n<pre>\nCall:\nmlogit(formula = depvar ~ oc + ic | income, data = H, reflevel = \"hp\", \n method = \"nr\", print.level = 0)\n\nFrequencies of alternatives:\n hp ec er gc gr \n0.055556 0.071111 0.093333 0.636667 0.143333 \n\nnr method\n6 iterations, 0h:0m:0s \ng'(-H)^-1g = 6.27E-06 \nsuccessive function values within tolerance limits \n\nCoefficients :\n Estimate Std. 
Error t-value Pr(>|t|) \nec:(intercept) 1.95445797 0.70353833 2.7780 0.0054688 ** \ner:(intercept) 2.30560852 0.62390478 3.6954 0.0002195 ***\ngc:(intercept) 2.05517018 0.48639682 4.2253 2.386e-05 ***\ngr:(intercept) 1.14158139 0.51828845 2.2026 0.0276231 * \noc -0.00696000 0.00155383 -4.4792 7.491e-06 ***\nic -0.00153534 0.00062251 -2.4664 0.0136486 * \nec:income -0.06362917 0.11329865 -0.5616 0.5743846 \ner:income -0.09685787 0.10755423 -0.9005 0.3678281 \ngc:income -0.07178917 0.08878777 -0.8085 0.4187752 \ngr:income -0.17981159 0.10012691 -1.7958 0.0725205 . \n---\nSignif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1\n\nLog-Likelihood: -1005.9\nMcFadden R^2: 0.01598 \nLikelihood ratio test : chisq = 32.67 (p.value = 1.2134e-05)\n</pre>\n\nAs before, pyLogit and mlogit agree on all calculated values except for the p-values and McFadden's $R^2$." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
spacedrabbit/PythonBootcamp
Errors and Exceptions Homework.ipynb
mit
[ "Errors and Exceptions Homework -\nProblem 1\nHandle the exception thrown by the code below by using try and except blocks.", "for i in ['a','b','c']:\n try: \n result = i**2\n except TypeError:\n print(\"Type error attempting to run on {i}\".format(i=i))\n else:\n print result", "Problem 2\nHandle the exception thrown by the code below by using try and except blocks. Then use a finally block to print 'All Done.'", "x = 5\ny = 0\n\ntry:\n z = x/y\nexcept ZeroDivisionError:\n print(\"Cannot divide by zero\")\nfinally:\n print 'all done'", "Problem 3\nWrite a function that asks for an integer and prints the square of it. Use a while loop with a try,except, else block to account for incorrect inputs.", "def ask():\n while True:\n try:\n input = int(raw_input(\"Enter an integer: \"))\n except:\n print 'Could not make conversion. Try again'\n else:\n print input**2\n\nask()", "Great Job!" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
tcstewar/testing_notebooks
sgbc/Simple LSTM example.ipynb
gpl-2.0
[ "%matplotlib inline\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport tensorflow as tf\nfrom tensorflow.contrib import rnn\n", "First, we create some data. In a real example, this would be loaded up out of the file.\nIn this case, input_data is two values, and output_data is one value (the thing we're trying to predict given the input_data). For the particular data I've generated here, you can't do it given only the current input_data; you can only make an accurate prediction given the previous input_data as well.", "t = np.arange(50)*0.05\ninput_data = np.sign(np.array([np.sin(2*np.pi*t),np.sin(2*np.pi*t)]).T).astype(float)\ninput_data += np.random.normal(size=input_data.shape)*0.1\noutput_data = (np.sign(np.sin(2*np.pi*t*2+np.pi)).astype(float)+1)/2\n\nprint('Input Data', input_data)\nprint('Output Data', output_data)", "Let's plot that data, just to make it clearer", "plt.subplot(2,1,1)\nplt.plot(input_data)\nplt.title('input data')\nplt.subplot(2,1,2)\nplt.plot(output_data)\nplt.title('output data')\nplt.tight_layout()\nplt.show()", "Now we need to make our network and train it.", "n_epochs = 4000 # number of times to run the training\nn_units = 200 # size of the neural network\nn_classes = 1 # number of values in the output\nn_features = 2 # number of values in the input", "Now we create our network. 
I don't quite understand exactly what's happening here, but I copied it from an LSTM tutorial.", "X = tf.placeholder('float',[None,n_features])\nY = tf.placeholder('float')\n\nweights = tf.Variable(tf.random_normal([n_units, n_classes]))\nbias = tf.Variable(tf.random_normal([n_classes]))\n\nx = tf.split(X, n_features, 1)\nlstm_cell = rnn.BasicLSTMCell(n_units) \noutputs, states = rnn.static_rnn(lstm_cell, x, dtype=tf.float32) \noutput = tf.matmul(outputs[-1], weights) + bias\noutput = tf.reshape(output, [-1])\n\ncost = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=output, labels=Y))\noptimizer = tf.train.AdamOptimizer().minimize(cost)\n", "Now we train it.", "with tf.Session() as session:\n\n # initialize the network\n tf.global_variables_initializer().run()\n tf.local_variables_initializer().run()\n\n # now do the training\n for epoch in range(n_epochs):\n # this does one pass through the traiing\n _, error = session.run([optimizer, cost], feed_dict={X: input_data, Y: output_data})\n\n # print a message every 100 epochs\n if epoch % 100 == 0:\n print('Epoch', epoch, 'completed out of', n_epochs, 'error:', error)\n\n # now compute the output after training\n pred = tf.round(tf.nn.sigmoid(output)).eval({X: input_data})\n\nplt.subplot(2, 1, 1)\nplt.title('ideal output')\nplt.plot(output_data)\nplt.subplot(2, 1, 2)\nplt.title('predicted output')\nplt.plot(pred)\nplt.tight_layout()\nplt.show()" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
JarnoRFB/qtpyvis
notebooks/pytorch/introspection.ipynb
mit
[ "import torch\nimport torch.nn as nn\nimport torch.nn.functional as F\nfrom torchvision import datasets, transforms\nfrom torch.autograd import Variable\nfrom collections import OrderedDict\nimport numpy as np\nimport matplotlib.pyplot as plt\nplt.rcParams['image.cmap'] = 'gray'\n%matplotlib inline\n\n# input batch size for training (default: 64)\nbatch_size = 64\n\n# input batch size for testing (default: 1000)\ntest_batch_size = 1000\n\n# number of epochs to train (default: 10)\nepochs = 10\n\n# learning rate (default: 0.01)\nlr = 0.01\n\n# SGD momentum (default: 0.5)\nmomentum = 0.5\n\n# disables CUDA training\nno_cuda = True\n\n# random seed (default: 1)\nseed = 1\n\n# how many batches to wait before logging training status\nlog_interval = 10\n\n# Setting seed for reproducibility.\ntorch.manual_seed(seed)\n\ncuda = not no_cuda and torch.cuda.is_available()\nprint(\"CUDA: {}\".format(cuda))\n\nif cuda:\n torch.cuda.manual_seed(seed)\ncudakwargs = {'num_workers': 1, 'pin_memory': True} if cuda else {}\n\nmnist_transform = transforms.Compose([\n transforms.ToTensor(),\n transforms.Normalize((0.1307,), (0.3081,)) # Precalcualted values.\n])\n\ntrain_set = datasets.MNIST(\n root='data',\n train=True,\n transform=mnist_transform,\n download=True,\n)\n\ntest_set = datasets.MNIST(\n root='data',\n train=False,\n transform=mnist_transform,\n download=True,\n)\n\ntrain_loader = torch.utils.data.DataLoader(\n dataset=train_set,\n batch_size=batch_size,\n shuffle=True,\n **cudakwargs\n)\n\ntest_loader = torch.utils.data.DataLoader(\n dataset=test_set,\n batch_size=test_batch_size,\n shuffle=True,\n **cudakwargs\n)\n", "Loading the model.\nHere we will focus only on nn.Sequential model types as they are easier to deal with. 
Generalizing the methods described here to nn.Module will require more work.", "class Flatten(nn.Module):\n def forward(self, x):\n return x.view(x.size(0), -1)\n \n def __str__(self):\n return 'Flatten()'\n\nmodel = nn.Sequential(OrderedDict([\n ('conv2d_1', nn.Conv2d(in_channels=1, out_channels=32, kernel_size=3)),\n ('relu_1', nn.ReLU()),\n ('max_pooling2d_1', nn.MaxPool2d(kernel_size=2)),\n ('conv2d_2', nn.Conv2d(in_channels=32, out_channels=32, kernel_size=3)),\n ('relu_2', nn.ReLU()),\n ('dropout_1', nn.Dropout(p=0.25)),\n ('flatten_1', Flatten()),\n ('dense_1', nn.Linear(3872, 64)),\n ('relu_3', nn.ReLU()),\n ('dropout_2', nn.Dropout(p=0.5)),\n ('dense_2', nn.Linear(64, 10)),\n ('readout', nn.LogSoftmax())\n]))\n\nmodel.load_state_dict(torch.load('example_torch_mnist_model.pth'))", "Accessing the layers\nA torch.nn.Sequential module is itself an iterable and subscriptable container for all its child modules.", "for i, layer in enumerate(model):\n print('{}\\t{}'.format(i, layer))", "Moreover, .modules and .children provide generators for accessing layers.", "for m in model.modules():\n print(m)\n\nfor c in model.children():\n print(c)", "Getting the weights.", "conv2d_1_weight = model[0].weight.data.numpy()\nconv2d_1_weight.shape\n\nfor i in range(32):\n plt.imshow(conv2d_1_weight[i, 0])\n plt.show()", "Getting layer properties\nThe layer objects themselves expose most properties as attributes.", "conv2d_1 = model[0]\n\nconv2d_1.kernel_size\n\nconv2d_1.stride\n\nconv2d_1.dilation\n\nconv2d_1.in_channels, conv2d_1.out_channels\n\nconv2d_1.padding\n\nconv2d_1.output_padding\n\ndropout_1 = model[5]\n\ndropout_1.p\n\ndense_1 = model[7]\n\ndense_1.in_features, dense_1.out_features" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
ML4DS/ML4all
TM2.Topic_Models/TM_py2_wikitools/notebooks/TM2_Topic_Modeling_professor.ipynb
mit
[ "Topic Modelling\nAuthor: Jesús Cid Sueiro\nDate: 2016/11/27\nIn this notebook we will explore some tools for text analysis in python. To do so, first we will import the requested python libraries.", "%matplotlib inline\n\n# Required imports\nfrom wikitools import wiki\nfrom wikitools import category\n\nimport nltk\nfrom nltk.tokenize import word_tokenize\nfrom nltk.corpus import stopwords\nfrom nltk.stem import WordNetLemmatizer\n\nimport gensim\n\nimport numpy as np\nimport lda\nimport lda.datasets\n\nfrom time import time\nfrom sklearn.feature_extraction.text import TfidfVectorizer, CountVectorizer\nfrom sklearn.decomposition import LatentDirichletAllocation\n\nimport matplotlib.pyplot as plt\nimport pylab\n\nfrom test_helper import Test", "1. Corpus acquisition.\nIn this notebook we will explore some tools for text processing and analysis and two topic modeling algorithms available from Python toolboxes.\nTo do so, we will explore and analyze collections of Wikipedia articles from a given category, using wikitools, that makes easy the capture of content from wikimedia sites.\n(As a side note, there are many other available text collections to test topic modelling algorithm. In particular, the NLTK library has many examples, that can explore them using the nltk.download() tool.\nimport nltk\nnltk.download()\n\nfor instance, you can take the gutemberg dataset\nMycorpus = nltk.corpus.gutenberg\ntext_name = Mycorpus.fileids()[0]\nraw = Mycorpus.raw(text_name)\nWords = Mycorpus.words(text_name)\n\nAlso, tools like Gensim or Sci-kit learn include text databases to work with).\nIn order to use Wikipedia data, we will select a single category of articles:", "site = wiki.Wiki(\"https://en.wikipedia.org/w/api.php\")\n# Select a category with a reasonable number of articles (>100)\n# cat = \"Economics\"\ncat = \"Pseudoscience\"\nprint cat", "You can try with any other categories. 
Take into account that the behavior of topic modelling algorithms may depend on the number of documents available for the analysis. Select a category with at least 100 articles. You can browse the wikipedia category tree here, https://en.wikipedia.org/wiki/Category:Contents, for instance.\nWe start by downloading the text collection.", "# Loading category data. This may take a while\nprint \"Loading category data. This may take a while...\"\ncat_data = category.Category(site, cat)\n\ncorpus_titles = []\ncorpus_text = []\n\nfor n, page in enumerate(cat_data.getAllMembersGen()):\n print \"\\r Loading article {0}\".format(n + 1),\n corpus_titles.append(page.title)\n corpus_text.append(page.getWikiText())\n\nn_art = len(corpus_titles)\nprint \"\\nLoaded \" + str(n_art) + \" articles from category \" + cat", "Now, we have stored the whole text collection in two lists:\n\ncorpus_titles, which contains the titles of the selected articles\ncorpus_text, with the text content of the selected wikipedia articles\n\nYou can browse the content of the wikipedia articles to get some intuition about the kind of documents that will be processed.", "# n = 5\n# print corpus_titles[n]\n# print corpus_text[n]", "2. Corpus Processing\nTopic modelling algorithms process vectorized data. In order to apply them, we need to transform the raw text input data into a vector representation. To do so, we will remove irrelevant information from the text data and preserve as much relevant information as possible to capture the semantic content in the document collection.\nThus, we will proceed with the following steps:\n\nTokenization, filtering and cleaning\nHomogenization (stemming or lemmatization)\nVectorization\n\n2.1. Tokenization, filtering and cleaning.\nThe first step consists of the following:\n\nTokenization: convert text strings into lists of tokens.\nFiltering:\nRemoving capitalization: capital alphabetic characters will be transformed to their corresponding lowercase characters. 
\nRemoving non-alphanumeric tokens (e.g. punctuation signs)\nCleaning: Removing stopwords, i.e., those words that are very common in language and do not carry useful semantic content (articles, pronouns, etc.).\n\nTo do so, we will need some packages from the Natural Language Toolkit.", "# You can comment this if the package is already available.\n# Select option \"d) Download\", and identifier \"punkt\"\n# Select option \"d) Download\", and identifier \"stopwords\"\n# nltk.download()\n\nstopwords_en = stopwords.words('english')\ncorpus_clean = []\n\nfor n, art in enumerate(corpus_text): \n print \"\\rProcessing article {0} out of {1}\".format(n + 1, n_art),\n # This is to make sure that all characters have the appropriate encoding.\n art = art.decode('utf-8') \n \n # Tokenize each text entry. \n # scode: tokens = <FILL IN>\n token_list = word_tokenize(art)\n \n # Convert all tokens in token_list to lowercase and remove non-alphanumeric tokens.\n # Store the result in a new token list, filtered_tokens.\n # scode: filtered_tokens = <FILL IN>\n filtered_tokens = [token.lower() for token in token_list if token.isalnum()]\n\n # Remove all tokens in the stopwords list and append the result to corpus_clean\n # scode: clean_tokens = <FILL IN>\n clean_tokens = [token for token in filtered_tokens if token not in stopwords_en] \n\n # scode: <FILL IN>\n corpus_clean.append(clean_tokens)\nprint \"\\nLet's check the first tokens from document 0 after processing:\"\nprint corpus_clean[0][0:30]\n\nTest.assertTrue(len(corpus_clean) == n_art, 'List corpus_clean does not contain the expected number of articles')\nTest.assertTrue(len([c for c in corpus_clean[0] if c in stopwords_en])==0, 'Stopwords have not been removed')", "2.2. Stemming vs Lemmatization\nAt this point, we can choose between applying simple stemming or using lemmatization. 
We will try both to test their differences.\nTask: Apply the .stem() method, from the stemmer object created in the first line, to corpus_clean.", "# Select stemmer.\nstemmer = nltk.stem.SnowballStemmer('english')\ncorpus_stemmed = []\n\nfor n, token_list in enumerate(corpus_clean):\n print \"\\rStemming article {0} out of {1}\".format(n + 1, n_art),\n \n # Apply the stemmer to all tokens in token_list.\n # Store the result in a new token list, stemmed_tokens.\n # scode: stemmed_tokens = <FILL IN>\n stemmed_tokens = [stemmer.stem(token) for token in token_list]\n \n # Add the stemmed tokens to the stemmed corpus\n # scode: <FILL IN>\n corpus_stemmed.append(stemmed_tokens)\n\nprint \"\\nLet's check the first tokens from document 0 after stemming:\"\nprint corpus_stemmed[0][0:30]\n\nTest.assertTrue((len([c for c in corpus_stemmed[0] if c!=stemmer.stem(c)]) < 0.1*len(corpus_stemmed[0])), \n 'It seems that stemming has not been applied properly')", "Alternatively, we can apply lemmatization. For English texts, we can use the lemmatizer from NLTK, which is based on WordNet. 
If you have not used wordnet before, you will likely need to download it from nltk", "# You can comment this if the package is already available.\n# Select option \"d) Download\", and identifier \"wordnet\"\n# nltk.download()", "Task: Apply the .lemmatize() method, from the WordNetLemmatizer object created in the first line, to corpus_clean.", "wnl = WordNetLemmatizer()\n\n# Initialize the lemmatized corpus.\ncorpus_lemmat = []\n\nfor n, token_list in enumerate(corpus_clean):\n print \"\\rLemmatizing article {0} out of {1}\".format(n + 1, n_art),\n \n # scode: lemmat_tokens = <FILL IN>\n lemmat_tokens = [wnl.lemmatize(token) for token in token_list]\n\n # Add the lemmatized tokens to the corpus\n # scode: <FILL IN>\n corpus_lemmat.append(lemmat_tokens)\n\nprint \"\\nLet's check the first tokens from document 0 after lemmatization:\"\nprint corpus_lemmat[0][0:30]", "One of the advantages of the lemmatizer method is that the result of lemmatization is still a true word, which is more suitable for presenting text processing results.\nHowever, without using contextual information, lemmatize() does not remove grammatical differences. This is the reason why \"is\" or \"are\" are preserved and not replaced by the infinitive \"be\".\nAs an alternative, we can apply .lemmatize(word, pos), where 'pos' is a string code specifying the part-of-speech (pos), i.e. the grammatical role of the word in its sentence. For instance, you can check the difference between wnl.lemmatize('is') and wnl.lemmatize('is', pos='v').\n2.3. Vectorization\nUp to this point, we have transformed the raw text collection of articles into a list of articles, where each article is a collection of the word roots that are most relevant for semantic analysis. Now, we need to convert these data (a list of token lists) into a numerical representation (a list of vectors, or a matrix). To do so, we will start using the tools provided by the gensim library. 
\nAs a first step, we create a dictionary containing all tokens in our text corpus, and assign an integer identifier to each one of them.", "# Create dictionary of tokens\nD = gensim.corpora.Dictionary(corpus_clean)\nn_tokens = len(D)\n\nprint \"The dictionary contains {0} tokens\".format(n_tokens)\nprint \"First tokens in the dictionary: \"\nfor n in range(10):\n print str(n) + \": \" + D[n]", "In the second step, let us create a numerical version of our corpus using the doc2bow method. In general, D.doc2bow(token_list) transforms any list of tokens into a list of tuples (token_id, n), one for each token in token_list, where token_id is the token identifier (according to dictionary D) and n is the number of occurrences of such token in token_list. \n Task: Apply the doc2bow method from gensim dictionary D, to all tokens in every article in corpus_clean. The result must be a new list named corpus_bow where each element is a list of tuples (token_id, number_of_occurrences).", "# Transform token lists into sparse vectors on the D-space\ncorpus_bow = [D.doc2bow(doc) for doc in corpus_clean]\n\nTest.assertTrue(len(corpus_bow)==n_art, 'corpus_bow does not have the appropriate size') ", "At this point, it is good to make sure we understand what has happened. In corpus_clean we had a list of token lists. With it, we have constructed a Dictionary, D, which assigns an integer identifier to each token in the corpus.\nAfter that, we have transformed each article (in corpus_clean) into a list of tuples (id, n).", "print \"Original article (after cleaning): \"\nprint corpus_clean[0][0:30]\nprint \"Sparse vector representation (first 30 components):\"\nprint corpus_bow[0][0:30]\nprint \"The first component, {0} from document 0, states that token 0 ({1}) appears {2} times\".format(\n corpus_bow[0][0], D[0], corpus_bow[0][0][1])", "Note that we can interpret each element of corpus_bow as a sparse_vector. 
For example, a list of tuples \n[(0, 1), (3, 3), (5,2)]\n\nfor a dictionary of 10 elements can be represented as a vector, where any tuple (id, n) states that position id must take value n. The rest of the positions must be zero.\n[1, 0, 0, 3, 0, 2, 0, 0, 0, 0]\n\nThese sparse vectors will be the inputs to the topic modeling algorithms.\nNote that, at this point, we have built a Dictionary containing", "print \"{0} tokens\".format(len(D))", "and a bow representation of a corpus with", "print \"{0} Wikipedia articles\".format(len(corpus_bow))", "Before starting with the semantic analysis, it is interesting to observe the token distribution for the given corpus.", "# SORTED TOKEN FREQUENCIES (I):\n# Create a \"flat\" corpus with all tuples in a single list\ncorpus_bow_flat = [item for sublist in corpus_bow for item in sublist]\n\n# Initialize a numpy array that we will use to count tokens.\n# token_count[n] should store the number of occurrences of the n-th token, D[n]\ntoken_count = np.zeros(n_tokens)\n\n# Count the number of occurrences of each token.\nfor x in corpus_bow_flat:\n # Update the proper element in token_count\n # scode: <FILL IN>\n token_count[x[0]] += x[1]\n\n# Sort by decreasing number of occurrences\nids_sorted = np.argsort(- token_count)\ntf_sorted = token_count[ids_sorted]", "ids_sorted is a list of all token ids, sorted by decreasing number of occurrences in the whole corpus. 
For instance, the most frequent term is", "print D[ids_sorted[0]]", "which appears", "print \"{0} times in the whole corpus\".format(tf_sorted[0])", "In the following we plot the most frequent terms in the corpus.", "# SORTED TOKEN FREQUENCIES (II):\nplt.rcdefaults()\n\n# Example data\nn_bins = 25\nhot_tokens = [D[i] for i in ids_sorted[n_bins-1::-1]]\ny_pos = np.arange(len(hot_tokens))\nz = tf_sorted[n_bins-1::-1]/n_art\n\nplt.barh(y_pos, z, align='center', alpha=0.4)\nplt.yticks(y_pos, hot_tokens)\nplt.xlabel('Average number of occurrences per article')\nplt.title('Token distribution')\nplt.show()\n\n# SORTED TOKEN FREQUENCIES:\n\n# Example data\nplt.semilogy(tf_sorted)\nplt.xlabel('Token rank')\nplt.ylabel('Total number of occurrences')\nplt.title('Token distribution')\nplt.show()", "Exercise: There are usually many tokens that appear with very low frequency in the corpus. Count the number of tokens appearing only once, and compute what proportion of the token list they represent.", "# scode: <WRITE YOUR CODE HERE>\n# Note: tf_sorted[i] is the count of token ids_sorted[i], so both must be\n# indexed by rank position, not by token id.\ncold_tokens = [D[ids_sorted[i]] for i in range(n_tokens) if tf_sorted[i] == 1]\n\nprint \"There are {0} cold tokens, which represent {1}% of the total number of tokens in the dictionary\".format(\n len(cold_tokens), float(len(cold_tokens))/n_tokens*100)", "Exercise: Represent graphically those 25 tokens that appear in the highest number of articles. 
Note that you can use the code above (headed by # SORTED TOKEN FREQUENCIES) with a very minor modification.", "# scode: <WRITE YOUR CODE HERE>\n\n# SORTED TOKEN FREQUENCIES (I):\n# Count the number of articles containing each token.\ntoken_count2 = np.zeros(n_tokens)\nfor x in corpus_bow_flat:\n token_count2[x[0]] += (x[1]>0)\n\n# Sort by decreasing number of occurrences\nids_sorted2 = np.argsort(- token_count2)\ntf_sorted2 = token_count2[ids_sorted2]\n\n# SORTED TOKEN FREQUENCIES (II):\n# Example data\nn_bins = 25\nhot_tokens2 = [D[i] for i in ids_sorted2[n_bins-1::-1]]\ny_pos2 = np.arange(len(hot_tokens2))\nz2 = tf_sorted2[n_bins-1::-1]/n_art\n\nplt.barh(y_pos2, z2, align='center', alpha=0.4)\nplt.yticks(y_pos2, hot_tokens2)\nplt.xlabel('Fraction of articles containing the token')\nplt.title('Token distribution')\nplt.show()", "3. Semantic Analysis\nThe dictionary D and the Bag of Words in corpus_bow are the key inputs to the topic model algorithms. In this section we will explore two algorithms:\n\nLatent Semantic Indexing (LSI)\nLatent Dirichlet Allocation (LDA)\n\nThe topic model algorithms in gensim assume that input documents are parameterized using the tf-idf model. This can be done using", "tfidf = gensim.models.TfidfModel(corpus_bow)", "From now on, tfidf can be used to convert any vector from the old representation (bow integer counts) to the new one (TfIdf real-valued weights):", "doc_bow = [(0, 1), (1, 1)]\ntfidf[doc_bow]", "Or to apply a transformation to a whole corpus", "corpus_tfidf = tfidf[corpus_bow]\nprint corpus_tfidf[0][0:5]", "3.1. Latent Semantic Indexing (LSI)\nNow we are ready to apply a topic modeling algorithm. Latent Semantic Indexing is provided by LsiModel.\nTask: Generate an LSI model with 5 topics for corpus_tfidf and dictionary D. 
You can check the syntax of gensim.models.LsiModel.", "# Initialize an LSI transformation\nn_topics = 5\n\n# scode: lsi = <FILL IN>\nlsi = gensim.models.LsiModel(corpus_tfidf, id2word=D, num_topics=n_topics) ", "From LSI, we can check both the topic-tokens matrix and the document-topics matrix.\nNow we can check the topics generated by LSI. An intuitive visualization is provided by the show_topics method.", "lsi.show_topics(num_topics=-1, num_words=10, log=False, formatted=True)", "However, a more useful representation of topics is as a list of tuples (token, value). This is provided by the show_topic method.\nTask: Represent the columns of the topic-token matrix as a series of bar diagrams (one per topic) with the top 25 tokens of each topic.", "# SORTED TOKEN FREQUENCIES (II):\nplt.rcdefaults()\n\nn_bins = 25\n \n# Example data\ny_pos = range(n_bins-1, -1, -1)\n\npylab.rcParams['figure.figsize'] = 16, 8 # Set figure size\nfor i in range(n_topics):\n \n ### Plot top 25 tokens for topic i\n # Read the i-th topic\n # scode: <FILL IN>\n topic_i = lsi.show_topic(i, topn=n_bins)\n tokens = [t[0] for t in topic_i]\n weights = [t[1] for t in topic_i]\n\n # Plot\n # scode: <FILL IN>\n plt.subplot(1, n_topics, i+1)\n plt.barh(y_pos, weights, align='center', alpha=0.4)\n plt.yticks(y_pos, tokens)\n plt.xlabel('Top {0} topic weights'.format(n_bins))\n plt.title('Topic {0}'.format(i))\n \nplt.show()", "LSI approximates any document as a linear combination of the topic vectors. We can compute the topic weights for any corpus entered as input to the lsi model.", "# On real corpora, a target dimensionality of\n# 200–500 is recommended as a “golden standard”\n# Create a double wrapper over the original \n# corpus bow tfidf fold-in-lsi\ncorpus_lsi = lsi[corpus_tfidf]\nprint corpus_lsi[0]", "Task: Find the document with the largest positive weight for topic 0. 
Compare the document and the topic.", "# Extract weights from corpus_lsi\n# scode weight0 = <FILL IN>\nweight0 = [doc[0][1] if doc != [] else -np.inf for doc in corpus_lsi]\n\n# Locate the maximum positive weight\nnmax = np.argmax(weight0)\nprint nmax\nprint weight0[nmax]\nprint corpus_lsi[nmax]\n\n# Get topic 0\n# scode: topic_0 = <FILL IN>\ntopic_0 = lsi.show_topic(0, topn=n_bins)\n\n# Compute a list of tuples (token, wordcount) for all tokens in topic_0, where wordcount is the number of \n# occurences of the token in the article.\n# scode: token_counts = <FILL IN>\ntoken_counts = [(t[0], corpus_clean[nmax].count(t[0])) for t in topic_0]\n\nprint \"Topic 0 is:\"\nprint topic_0\nprint \"Token counts:\"\nprint token_counts", "3.2. Latent Dirichlet Allocation (LDA)\nThere are several implementations of the LDA topic model in python:\n\nPython library lda.\nGensim module: gensim.models.ldamodel.LdaModel\nSci-kit Learn module: sklearn.decomposition\n\n3.2.1. LDA using Gensim\nThe use of the LDA module in gensim is similar to LSI. Furthermore, it assumes that a tf-idf parametrization is used as an input, which is not in complete agreement with the theoretical model, which assumes documents represented as vectors of token-counts.\nTo use LDA in gensim, we must first create a lda model object.", "ldag = gensim.models.ldamodel.LdaModel(\n corpus=corpus_tfidf, id2word=D, num_topics=10, update_every=1, passes=10)\n\nldag.print_topics()", "3.2.2. LDA using python lda library\nAn alternative to gensim for LDA is the lda library from python. 
It requires a doc-frequency matrix as input", "# For testing LDA, you can use the reuters dataset\n# X = lda.datasets.load_reuters()\n# vocab = lda.datasets.load_reuters_vocab()\n# titles = lda.datasets.load_reuters_titles()\nX = np.int32(np.zeros((n_art, n_tokens)))\nfor n, art in enumerate(corpus_bow):\n for t in art:\n X[n, t[0]] = t[1]\nprint X.shape\nprint X.sum()\n\nvocab = D.values()\ntitles = corpus_titles\n\n\n# Default parameters:\n# model = lda.LDA(n_topics, n_iter=2000, alpha=0.1, eta=0.01, random_state=None, refresh=10)\nmodel = lda.LDA(n_topics=10, n_iter=1500, random_state=1)\nmodel.fit(X) # model.fit_transform(X) is also available\ntopic_word = model.topic_word_ # model.components_ also works\n\n# Show topics...\nn_top_words = 8\nfor i, topic_dist in enumerate(topic_word):\n topic_words = np.array(vocab)[np.argsort(topic_dist)][:-(n_top_words+1):-1]\n print('Topic {}: {}'.format(i, ' '.join(topic_words)))", "Document-topic distribution", "doc_topic = model.doc_topic_\nfor i in range(10):\n print(\"{} (top topic: {})\".format(titles[i], doc_topic[i].argmax()))\n\n# This is how to apply the model to new doc(s)\n# doc_topic_test = model.transform(X_test)\n# for title, topics in zip(titles_test, doc_topic_test):\n# print(\"{} (top topic: {})\".format(title, topics.argmax()))", "It also allows incremental updates.\n3.2.3. LDA using Sci-kit Learn\nThe input matrix to the sklearn implementation of LDA contains the token-counts for all documents in the corpus.\nsklearn contains a powerful CountVectorizer method that can be used to construct the input matrix from the corpus_bow. 
\nFirst, we will define an auxiliary function to print the top tokens in the model, that has been taken from the sklearn documentation.", "# Adapted from an example in sklearn site \n# http://scikit-learn.org/dev/auto_examples/applications/topics_extraction_with_nmf_lda.html\n\n# You can try also with the dataset provided by sklearn in \n# from sklearn.datasets import fetch_20newsgroups\n# dataset = fetch_20newsgroups(shuffle=True, random_state=1,\n#  remove=('headers', 'footers', 'quotes'))\n\ndef print_top_words(model, feature_names, n_top_words):\n for topic_idx, topic in enumerate(model.components_):\n print(\"Topic #%d:\" % topic_idx)\n print(\" \".join([feature_names[i]\n for i in topic.argsort()[:-n_top_words - 1:-1]]))\n print()", "Now, we need a dataset to feed the Count_Vectorizer object, by joining all tokens in corpus_clean in a single string, using a space ' ' as separator.", "print(\"Loading dataset...\")\n# scode: data_samples = <FILL IN>\nprint \"*\".join(['Esto', 'es', 'un', 'ejemplo'])\ndata_samples = [\" \".join(c) for c in corpus_clean]\nprint 'Document 0:'\nprint data_samples[0][0:200], '...'", "Now we are ready to compute the token counts.", "# Use tf (raw term count) features for LDA.\nprint(\"Extracting tf features for LDA...\")\nn_features = 1000\nn_samples = 2000\ntf_vectorizer = CountVectorizer(max_df=0.95, min_df=2,\n max_features=n_features,\n stop_words='english')\n\nt0 = time()\ntf = tf_vectorizer.fit_transform(data_samples)\nprint(\"done in %0.3fs.\" % (time() - t0))\nprint tf[0][0][0]", "Now we can apply the LDA algorithm. 
\nTask: Create an LDA object with the following parameters: \n n_topics=n_topics, max_iter=10,\n learning_method='online',\n learning_offset=50.,\n random_state=0", "print(\"Fitting LDA models with tf features, \"\n \"n_samples=%d and n_features=%d...\"\n % (n_samples, n_features))\n# scode: lda = <FILL IN>\nlda = LatentDirichletAllocation(n_topics=n_topics, max_iter=10,\n learning_method='online', learning_offset=50., random_state=0) \n# doc_topic_prior= 1.0/n_topics, topic_word_prior= 1.0/n_topics)", "Task: Fit model lda with the token frequencies computed by tf_vectorizer.", "t0 = time()\ncorpus_lda = lda.fit_transform(tf)\nprint corpus_lda[10]/np.sum(corpus_lda[10])\nprint(\"done in %0.3fs.\" % (time() - t0))\nprint corpus_titles[10]\n# print corpus_text[10]\n\nprint(\"\\nTopics in LDA model:\")\ntf_feature_names = tf_vectorizer.get_feature_names()\nprint_top_words(lda, tf_feature_names, 20)\n\ntopics = lda.components_\ntopic_probs = [t/np.sum(t) for t in topics]\n# print topic_probs[0]\nprint -np.sort(-topic_probs[0])\n", "Exercise: Represent graphically the topic distributions for the top tokens with the highest probability for each topic.", "# SORTED TOKEN FREQUENCIES (II):\nplt.rcdefaults()\n\nn_bins = 50\n \n# Example data\ny_pos = range(n_bins-1, -1, -1)\n\npylab.rcParams['figure.figsize'] = 16, 8 # Set figure size\n\nfor i in range(n_topics):\n \n ### Plot top n_bins tokens for topic i\n # Read the i-th topic\n # scode: <FILL IN>\n topic_i = topic_probs[i]\n rank = np.argsort(- topic_i)[0:n_bins]\n \n tokens = [tf_feature_names[r] for r in rank]\n weights = [topic_i[r] for r in rank]\n\n # Plot\n # scode: <FILL IN>\n plt.subplot(1, n_topics, i+1)\n plt.barh(y_pos, weights, align='center', alpha=0.4)\n plt.yticks(y_pos, tokens)\n plt.xlabel('Top {0} topic weights'.format(n_bins))\n plt.title('Topic {0}'.format(i))\n \nplt.show()", "Exercise: Explore the influence of the concentration parameters, $alpha$ (doc_topic_prior in sklearn) and $eta$ (topic_word_prior). 
In particular, observe how the topic and document distributions change as these parameters increase.\n Exercise: The token dictionary and the token distribution have shown that:\n\n\nSome tokens, despite being very frequent in the corpus, have no semantic relevance for topic modeling. Unfortunately, they were not present in the stopword list, and have not been eliminated before the analysis.\n\n\nA large portion of tokens appear only once and, thus, they are not statistically relevant for the inference engine of the topic models.\n\n\nRevise the entire analysis by removing all these sets of terms from the corpus.\n Exercise: Note that we have not used the terms in the article titles, though they can be expected to contain words relevant for the topic modeling. Include the title words in the analysis. In order to give them a special relevance, insert them in the corpus several times, so as to make their words more significant.\n Exercise: The topic modelling algorithms we have tested in this notebook are non-supervised. This makes them difficult to evaluate objectively. In order to test if LDA captures real topics, construct a dataset as the mixture of wikipedia articles from 4 different categories, and test if LDA with 4 topics identifies topics closely related to the original categories.\nExercise:\nRepresent, in a scatter plot, the distribution of components 0 and 1 of the document-topic matrix (corpus_lda) for the whole corpus." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
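The notebook above represents each document as a gensim-style bag-of-words: a sparse list of (token_id, count) tuples that stands for a dense count vector. As a standalone illustration of that sparse-to-dense correspondence (plain Python 3, independent of the notebook's variables, using the same example tuples as the notebook text):

```python
# Sketch: expand a gensim-style sparse BoW vector into a dense count vector.
def bow_to_dense(bow, n_tokens):
    """Expand [(token_id, count), ...] into a dense list of length n_tokens."""
    dense = [0] * n_tokens
    for token_id, count in bow:
        dense[token_id] = count  # positions not listed stay at zero
    return dense

sparse = [(0, 1), (3, 3), (5, 2)]
print(bow_to_dense(sparse, 10))  # [1, 0, 0, 3, 0, 2, 0, 0, 0, 0]
```

This is the same mapping `D.doc2bow` implies in the notebook, written out explicitly; gensim keeps the sparse form because real vocabularies make the dense vector mostly zeros.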
WNoxchi/Kaukasos
misc/ridge-regression-tutorial_analytics-vidhya_20181020.ipynb
mit
[ "WNIXALO | 20181020 | https://www.analyticsvidhya.com/blog/2016/01/complete-tutorial-ridge-lasso-regression-python/#three", "%matplotlib inline\n\nimport matplotlib.pyplot as plt\nimport pandas as pd\nimport numpy as np\n\nfrom sklearn.linear_model import Ridge\n\nfrom matplotlib.pylab import rcParams\nrcParams['figure.figsize'] = 12, 10", "Data simulating a sine-curve between 60°-300° with random noise:", "# Define input array with angles from 60° to 300° in radians\nx = np.array([i*np.pi/180 for i in range(60,300,4)])\nnp.random.seed(0) # setting rand seed for reproducibility\ny = np.sin(x) + np.random.normal(0,0.15,len(x))\ndata = pd.DataFrame(np.column_stack([x,y]), columns=['x','y'])\n\nplt.plot(data['x'], data['y'], '.');", "Adding a column for each power up to 15:", "for i in range(2,16): # power of 1 is already there\n colname = f'x_{i}' # new var will be the x_power\n data[colname] = data['x']**i\n\ndata.head()\n# print(data.head())", "Generic function for ridge regression, similar to that defined for simple linear regression:", "# from: https://www.analyticsvidhya.com/blog/2016/01/complete-tutorial-ridge-lasso-regression-python/#three\ndef ridge_regression(data, predictors, alpha, models_to_plot={}):\n # Fit the model\n ridgereg = Ridge(alpha=alpha, normalize=True)\n ridgereg.fit(data[predictors], data['y'])\n y_pred = ridgereg.predict(data[predictors])\n \n # Check if a plot is to be made for the entered alpha\n if alpha in models_to_plot:\n plt.subplot(models_to_plot[alpha])\n plt.tight_layout()\n plt.plot(data['x'], y_pred)\n plt.plot(data['x'], data['y'], '.')\n plt.title(f'Plot for alpha: {alpha:.3g}')\n \n # Return result in pre-defined format\n rss = sum((y_pred - data['y'])**2)\n ret = [rss]\n ret.extend([ridgereg.intercept_])\n ret.extend(ridgereg.coef_)\n return ret", "Analyze result of ridge (L2) regression for 10 values of α:", "# Initialize predictors to be set of 15 powers of x\npredictors = ['x']\npredictors.extend([f'x_{i}' for i in 
range(2,16)])\n\n# Set different values of alpha to be tested\nalpha_ridge = [1e-15, 1e-10, 1e-8, 1e-4, 1e-3,1e-2, 1, 5, 10, 20]\n\n# Initialize dataframe for storing coefficients\ncol = ['rss','intercept'] + [f'coef_x_{i}' for i in range(1,16)]\nind = [f'alpha_{alpha_ridge[i]:.2g}' for i in range(0,10)]\ncoef_matrix_ridge = pd.DataFrame(index=ind, columns=col)\n\nmodels_to_plot = {1e-15:231, 1e-10:232, 1e-4:233, 1e-3:234, 1e-2:235, 5:236}\n\nfor i in range(10):\n coef_matrix_ridge.iloc[i,] = ridge_regression(data, predictors, \n alpha_ridge[i], models_to_plot)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
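The ridge notebook above sweeps alpha and records how the fitted coefficients change. The shrinkage effect it plots can be seen in closed form: for a single feature with no intercept, the ridge solution is w = sum(x*y) / (sum(x*x) + alpha), so larger alpha pushes w toward zero. A minimal pure-Python sketch (not from the notebook; the toy data is made up):

```python
# Sketch: one-feature ridge regression in closed form, showing how the
# coefficient shrinks monotonically as the regularization strength grows.
def ridge_1d(x, y, alpha):
    sxy = sum(xi * yi for xi, yi in zip(x, y))
    sxx = sum(xi * xi for xi in x)
    return sxy / (sxx + alpha)

x = [1.0, 2.0, 3.0, 4.0]
y = [2.1, 3.9, 6.2, 7.8]  # roughly y = 2x with noise

for alpha in [0.0, 1.0, 10.0, 100.0]:
    print(alpha, ridge_1d(x, y, alpha))
```

With alpha = 0 this reduces to ordinary least squares; the same monotone shrinkage (though not element-wise in general) is what the notebook's `coef_matrix_ridge` table shows for the polynomial features.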
zakandrewking/cobrapy
documentation_builder/deletions.ipynb
lgpl-2.1
[ "Simulating Deletions", "import pandas\nfrom time import time\n\nimport cobra.test\nfrom cobra.flux_analysis import (\n single_gene_deletion, single_reaction_deletion, double_gene_deletion,\n double_reaction_deletion)\n\ncobra_model = cobra.test.create_test_model(\"textbook\")\necoli_model = cobra.test.create_test_model(\"ecoli\")", "Knocking out single genes and reactions\nA commonly asked question when analyzing metabolic models is what would happen if a certain reaction were not allowed to have any flux at all. This can be tested using cobrapy by", "print('complete model: ', cobra_model.optimize())\nwith cobra_model:\n cobra_model.reactions.PFK.knock_out()\n print('pfk knocked out: ', cobra_model.optimize())", "For evaluating genetic manipulation strategies, it is more interesting to examine what happens if given genes are knocked out, as doing so can affect no reactions at all (in the case of redundancy) or several reactions (when a gene participates in more than one reaction).", "print('complete model: ', cobra_model.optimize())\nwith cobra_model:\n cobra_model.genes.b1723.knock_out()\n print('pfkA knocked out: ', cobra_model.optimize())\n cobra_model.genes.b3916.knock_out()\n print('pfkB knocked out: ', cobra_model.optimize())", "Single Deletions\nPerform all single gene deletions on a model", "deletion_results = single_gene_deletion(cobra_model)", "These can also be done for only a subset of genes", "single_gene_deletion(cobra_model, cobra_model.genes[:20])", "This can also be done for reactions", "single_reaction_deletion(cobra_model, cobra_model.reactions[:20])", "Double Deletions\nDouble deletions run in a similar way. Passing in return_frame=True will cause them to format the results as a pandas.DataFrame.", "double_gene_deletion(\n cobra_model, cobra_model.genes[-5:], return_frame=True).round(4)", "By default, the double deletion function will automatically use multiprocessing, splitting the task over up to 4 cores if they are available. 
The number of cores can be manually specified as well. Specifying a single core will disable use of the multiprocessing library, which often aids debugging.", "start = time() # start timer\ndouble_gene_deletion(\n ecoli_model, ecoli_model.genes[:300], number_of_processes=2)\nt1 = time() - start\nprint(\"Double gene deletions for 300 genes completed in \"\n \"%.2f sec with 2 cores\" % t1)\n\nstart = time() # start timer\ndouble_gene_deletion(\n ecoli_model, ecoli_model.genes[:300], number_of_processes=1)\nt2 = time() - start\nprint(\"Double gene deletions for 300 genes completed in \"\n \"%.2f sec with 1 core\" % t2)\n\nprint(\"Speedup of %.2fx\" % (t2 / t1))", "Double deletions can also be run for reactions.", "double_reaction_deletion(\n cobra_model, cobra_model.reactions[2:7], return_frame=True).round(4)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
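The cobrapy notebook above shows that knocking out one of two redundant genes (b1723 and b3916, the two phosphofructokinase genes) leaves the model growing, while knocking out both removes the reaction's flux. A toy sketch of that gene-to-reaction logic (a hypothetical helper, not the cobrapy API; cobrapy evaluates full boolean gene-protein-reaction rules, this only models the simple "isozyme" OR case):

```python
# Toy sketch: a reaction backed by several genes (isozymes) stays active
# as long as at least one of its genes has not been knocked out.
def reaction_active(genes_of_reaction, knocked_out):
    return any(g not in knocked_out for g in genes_of_reaction)

pfk_genes = ["b1723", "b3916"]  # redundant genes, as in the notebook example

print(reaction_active(pfk_genes, set()))               # True
print(reaction_active(pfk_genes, {"b1723"}))           # True (the other gene remains)
print(reaction_active(pfk_genes, {"b1723", "b3916"}))  # False
```

This mirrors why the single knockouts in the notebook barely change the optimum while the double knockout does: the deletion screens are essentially enumerating which gene sets disable which reactions.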
ngageoint/geowave
examples/data/notebooks/jupyter/pygw-showcase.ipynb
apache-2.0
[ "PyGw Showcase\nThis notebook demonstrates some of the utility provided by the pygw python package.\nIn this guide, we will show how you can use pygw to easily:\n- Define a data schema for GeoTools SimpleFeature/Vector data (aka create a new data type)\n- Create instances of the new type\n- Create a RocksDB GeoWave Data Store\n- Register a DataType Adapter & Index to the data store for your new data type\n- Write user-created data into the GeoWave Data Store\n- Query data out of the data store", "%pip install ../../../../python/src/main/python", "Loading state capitals test data set\nLoad state capitals from CSV", "import csv\n\nwith open(\"../../../java-api/src/main/resources/stateCapitals.csv\", encoding=\"utf-8-sig\") as f:\n reader = csv.reader(f)\n raw_data = [row for row in reader]\n\n\n# Let's take a look at what the data looks like\nraw_data[0]", "For the purposes of this exercise, we will use the state name ([0]), capital name ([1]), longitude ([2]), latitude ([3]), and the year that the capital was established ([4]).\nCreating a new SimpleFeatureType for the state capitals data set\nWe can define a data schema for our data by using a SimpleFeatureTypeBuilder to build a SimpleFeatureType.\nWe can use the convenience methods defined in AttributeDescriptor to define each field of the feature type.", "from pygw.geotools import SimpleFeatureTypeBuilder\nfrom pygw.geotools import AttributeDescriptor\n\n# Create the feature type builder\ntype_builder = SimpleFeatureTypeBuilder()\n# Set the name of the feature type\ntype_builder.set_name(\"StateCapitals\")\n# Add the attributes\ntype_builder.add(AttributeDescriptor.point(\"location\"))\ntype_builder.add(AttributeDescriptor.string(\"state_name\"))\ntype_builder.add(AttributeDescriptor.string(\"capital_name\"))\ntype_builder.add(AttributeDescriptor.date(\"established\"))\n# Build the feature type\nstate_capitals_type = type_builder.build_feature_type()\n", "Creating features for each data point using our new 
SimpleFeatureType\npygw allows you to create SimpleFeature instances for SimpleFeatureType using a SimpleFeatureBuilder.\nThe SimpleFeatureBuilder allows us to specify all of the attributes of a feature, and then build it by providing a feature ID. For this exercise, we will use the index of the data as the unique feature id. We will use shapely to create the geometries for each feature.", "from pygw.geotools import SimpleFeatureBuilder\nfrom shapely.geometry import Point\nfrom datetime import datetime\n\nfeature_builder = SimpleFeatureBuilder(state_capitals_type)\n\nfeatures = []\nfor idx, capital in enumerate(raw_data):\n state_name = capital[0]\n capital_name = capital[1]\n longitude = float(capital[2])\n latitude = float(capital[3])\n established = datetime(int(capital[4]), 1, 1)\n \n feature_builder.set_attr(\"location\", Point(longitude, latitude))\n feature_builder.set_attr(\"state_name\", state_name)\n feature_builder.set_attr(\"capital_name\", capital_name)\n feature_builder.set_attr(\"established\", established)\n \n feature = feature_builder.build(str(idx))\n \n features.append(feature)", "Creating a data store\nNow that we have a set of SimpleFeatures, let's create a data store to write the features into. pygw supports all of the data store types that GeoWave supports. All that is needed is to first construct the appropriate DataStoreOptions variant that defines the parameters of the data store, then to pass those options to a DataStoreFactory to construct the DataStore. 
In this example we will create a new RocksDB data store.", "from pygw.store import DataStoreFactory\nfrom pygw.store.rocksdb import RocksDBOptions\n\n# Specify the options for the data store\noptions = RocksDBOptions()\noptions.set_geowave_namespace(\"geowave.example\")\n# NOTE: Directory is relative to the JVM working directory.\noptions.set_directory(\"./datastore\")\n# Create the data store\ndatastore = DataStoreFactory.create_data_store(options)", "An aside: help()\nMuch of pygw is well-documented, and the help method in python can be useful for figuring out what a pygw instance can do. Let's try it out on our data store.", "help(datastore)", "Adding our data to the data store\nTo store data into our data store, we first have to register a DataTypeAdapter for our simple feature data and create an index that defines how the data is queried. GeoWave supports simple feature data through the use of a FeatureDataAdapter. All that is needed for a FeatureDataAdapter is a SimpleFeatureType. 
We will also add both spatial and spatial/temporal indices.", "from pygw.geotools import FeatureDataAdapter\n\n# Create an adapter for feature type\nstate_capitals_adapter = FeatureDataAdapter(state_capitals_type)\n\nfrom pygw.index import SpatialIndexBuilder\nfrom pygw.index import SpatialTemporalIndexBuilder\n\n# Add a spatial index\nspatial_idx = SpatialIndexBuilder().set_name(\"spatial_idx\").create_index()\n\n# Add a spatial/temporal index\nspatial_temporal_idx = SpatialTemporalIndexBuilder().set_name(\"spatial_temporal_idx\").create_index()\n\n# Now we can add our type to the data store with our spatial index\ndatastore.add_type(state_capitals_adapter, spatial_idx, spatial_temporal_idx)\n\n# Check that we've successfully registered an index and type\nregistered_types = datastore.get_types()\n\nfor t in registered_types:\n print(t.get_type_name())\n\nregistered_indices = datastore.get_indices(state_capitals_adapter.get_type_name())\n\nfor i in registered_indices:\n print(i.get_name())", "Writing data to our store\nNow our data store is ready to receive our feature data. To do this, we must create a Writer for our data type.", "# Create a writer for our data\nwriter = datastore.create_writer(state_capitals_adapter.get_type_name())\n\n# Writing data to the data store\nfor ft in features:\n writer.write(ft)\n\n# Close the writer when we are done with it\nwriter.close()", "Querying our store to make sure the data was ingested properly\npygw supports querying data in the same fashion as the Java API. You can use a VectorQueryBuilder to create queries on simple feature data sets. 
We will use one now to query all of the state capitals in the data store.", "from pygw.query import VectorQueryBuilder\n\n# Create the query builder\nquery_builder = VectorQueryBuilder()\n\n# When you don't supply any constraints to the query builder, everything will be queried\nquery = query_builder.build()\n\n# Execute the query\nresults = datastore.query(query)", "The results object returned above is a closeable iterator of SimpleFeature objects. Let's define a function that we can use to print out some information about these features and then close the iterator when we are finished with it.", "def print_results(results):\n for result in results:\n capital_name = result.get_attribute(\"capital_name\")\n state_name = result.get_attribute(\"state_name\")\n established = result.get_attribute(\"established\")\n print(\"{}, {} was established in {}\".format(capital_name, state_name, established.year))\n \n # Close the iterator\n results.close()\n\n# Print the results\nprint_results(results)", "Constraining the results\nQuerying all of the data can be useful occasionally, but most of the time we will want to filter the data to only return results that we are interested in. 
pygw supports several types of constraints to make querying data as flexible as possible.\nCQL Constraints\nOne way you might want to query the data is using a simple CQL query.", "# A CQL expression for capitals that are in the northeastern part of the US\ncql_expression = \"BBOX(location, -87.83,36.64,-66.74,48.44)\"\n\n# Create the query builder\nquery_builder = VectorQueryBuilder()\nquery_builder.add_type_name(state_capitals_adapter.get_type_name())\n\n# If we want, we can tell the query builder to use the spatial index, since we aren't using time\nquery_builder.index_name(spatial_idx.get_name())\n\n# Get the constraints factory\nconstraints_factory = query_builder.constraints_factory()\n# Create the cql constraints\nconstraints = constraints_factory.cql_constraints(cql_expression)\n\n# Set the constraints and build the query\nquery = query_builder.constraints(constraints).build()\n# Execute the query\nresults = datastore.query(query)\n\n# Display the results\nprint_results(results)", "Spatial/Temporal Constraints\nYou may also want to constrain the data by both spatial and temporal constraints using the SpatialTemporalConstraintsBuilder. 
For this example, we will query all capitals that were established after 1800 within 10 degrees of Washington DC.", "# Create the query builder\nquery_builder = VectorQueryBuilder()\nquery_builder.add_type_name(state_capitals_adapter.get_type_name())\n\n# We can tell the builder to use the spatial/temporal index\nquery_builder.index_name(spatial_temporal_idx.get_name())\n\n# Get the constraints factory\nconstraints_factory = query_builder.constraints_factory()\n# Create the spatial/temporal constraints builder\nconstraints_builder = constraints_factory.spatial_temporal_constraints()\n# Create the spatial constraint geometry.\nwashington_dc_buffer = Point(-77.035, 38.894).buffer(10.0)\n# Set the spatial constraint\nconstraints_builder.spatial_constraints(washington_dc_buffer)\n# Set the temporal constraint\nconstraints_builder.add_time_range(datetime(1800,1,1), datetime.now())\n# Build the constraints\nconstraints = constraints_builder.build()\n\n# Set the constraints and build the query\nquery = query_builder.constraints(constraints).build()\n# Execute the query\nresults = datastore.query(query)\n\n# Display the results\nprint_results(results)", "Filter Factory Constraints\nWe can also use the FilterFactory to create more complicated filters. 
For example, suppose we want to find all of the capitals within 500 miles of Washington DC that contain the letter L and were established after 1830.", "from pygw.query import FilterFactory\n\n# Create the filter factory\nfilter_factory = FilterFactory()\n\n# Create a filter that passes when the capital location is within 500 miles of the\n# literal location of Washington DC\nlocation_prop = filter_factory.property(\"location\")\nwashington_dc_lit = filter_factory.literal(Point(-77.035, 38.894))\ndistance_km = 500 * 1.609344 # Convert miles to kilometers\ndistance_filter = filter_factory.dwithin(location_prop, washington_dc_lit, distance_km, \"kilometers\")\n\n# Create a filter that passes when the capital name contains the letter L.\ncapital_name_prop = filter_factory.property(\"capital_name\")\nname_filter = filter_factory.like(capital_name_prop, \"*l*\")\n\n# Create a filter that passes when the established date is after 1830\nestablished_prop = filter_factory.property(\"established\")\ndate_lit = filter_factory.literal(datetime(1830, 1, 1))\ndate_filter = filter_factory.after(established_prop, date_lit)\n\n# Combine the name, distance, and date filters\ncombined_filter = filter_factory.and_([distance_filter, name_filter, date_filter])\n\n# Create the query builder\nquery_builder = VectorQueryBuilder()\nquery_builder.add_type_name(state_capitals_adapter.get_type_name())\n\n# Get the constraints factory\nconstraints_factory = query_builder.constraints_factory()\n# Create the filter constraints\nconstraints = constraints_factory.filter_constraints(combined_filter)\n\n# Set the constraints and build the query\nquery = query_builder.constraints(constraints).build()\n# Execute the query\nresults = datastore.query(query)\n\n# Display the results\nprint_results(results)", "Using Pandas with GeoWave query results\nIt's fairly easy to load vector features from GeoWave queries into a Pandas DataFrame. 
To do this, make sure pandas is installed.", "%pip install pandas", "Next we will import pandas and issue a query to the datastore to load into a dataframe.", "from pandas import DataFrame\n\n# Query everything\nquery = VectorQueryBuilder().build()\nresults = datastore.query(query)\n\n# Load the results into a pandas dataframe\ndataframe = DataFrame.from_records([feature.to_dict() for feature in results])\n\n# Display the dataframe\ndataframe" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
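The raw-row-to-typed-attribute step in the pygw notebook above (CSV strings turned into a point geometry, strings, and a date) can be sketched with the Python standard library alone; the sample row and the `parse_capital` helper below are hypothetical illustrations of that conversion, not part of the pygw API.

```python
import csv
import io
from datetime import datetime

# Hypothetical sample row in the same column order as stateCapitals.csv:
# state name, capital name, longitude, latitude, year established.
sample = "Washington,Olympia,-122.9007,47.0379,1853\n"

def parse_capital(row):
    """Convert one raw CSV row (all strings) into typed attribute values."""
    return {
        "state_name": row[0],
        "capital_name": row[1],
        # A (lon, lat) tuple stands in for the shapely Point used in the notebook.
        "location": (float(row[2]), float(row[3])),
        "established": datetime(int(row[4]), 1, 1),
    }

rows = list(csv.reader(io.StringIO(sample)))
capital = parse_capital(rows[0])
print(capital["capital_name"], capital["established"].year)  # Olympia 1853
```

The same per-column conversion is what the loop over `raw_data` performs before handing values to the `SimpleFeatureBuilder`.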
ioos/notebooks_demos
notebooks/2016-12-19-exploring_csw.ipynb
mit
[ "How to search the IOOS CSW catalog with Python tools\nThis notebook demonstrates how to query the IOOS Catalog's Catalog Service for the Web (CSW), parse resulting records to obtain web data service endpoints, and retrieve data from these service endpoints.\nLet's start by creating the search filters.\nThe filter used here constrains the search to a certain geographical region (bounding box), a time span (last week), and some CF variable standard names that represent sea surface temperature.", "from datetime import datetime\n\n# Region: Northwest coast.\nmin_lon, max_lon = -127, -123.75\nmin_lat, max_lat = 43, 48\n\nbbox = [min_lon, min_lat, max_lon, max_lat]\ncrs = \"urn:ogc:def:crs:OGC:1.3:CRS84\"\n\n# Temporal range of 1 week.\nstart = datetime(2017, 4, 14, 0, 0, 0)\nstop = datetime(2017, 4, 21, 0, 0, 0)\n\n# Sea surface temperature CF names.\ncf_names = [\n \"sea_water_temperature\",\n \"sea_surface_temperature\",\n \"sea_water_potential_temperature\",\n \"equivalent_potential_temperature\",\n \"sea_water_conservative_temperature\",\n \"pseudo_equivalent_potential_temperature\",\n]", "With these 3 elements it is possible to assemble an OGC Filter Encoding (FE) using the owslib.fes* module.\n* OWSLib is a Python package for client programming with Open Geospatial Consortium (OGC) web service (hence OWS) interface standards, and their related content models.\nAlthough CSW has a built-in feature to find datasets within a specified bounding box, it doesn't have a feature to find datasets within a specified time interval. We therefore create the function fes_date_filter below that finds all datasets that have at least part of their data within the specified interval. 
So we find all datasets that start before the end of the interval and stop after the beginning of the interval.", "from owslib import fes\n\n\ndef fes_date_filter(start, stop, constraint=\"overlaps\"):\n \"\"\"\n Take datetime-like objects and returns a fes filter for date range\n (begin and end inclusive).\n NOTE: Truncates the minutes!!!\n\n Examples\n --------\n >>> from datetime import datetime, timedelta\n >>> stop = datetime(2010, 1, 1, 12, 30, 59).replace(tzinfo=pytz.utc)\n >>> start = stop - timedelta(days=7)\n >>> begin, end = fes_date_filter(start, stop, constraint='overlaps')\n >>> begin.literal, end.literal\n ('2010-01-01 12:00', '2009-12-25 12:00')\n >>> begin.propertyoperator, end.propertyoperator\n ('ogc:PropertyIsLessThanOrEqualTo', 'ogc:PropertyIsGreaterThanOrEqualTo')\n >>> begin, end = fes_date_filter(start, stop, constraint='within')\n >>> begin.literal, end.literal\n ('2009-12-25 12:00', '2010-01-01 12:00')\n >>> begin.propertyoperator, end.propertyoperator\n ('ogc:PropertyIsGreaterThanOrEqualTo', 'ogc:PropertyIsLessThanOrEqualTo')\n\n \"\"\"\n start = start.strftime(\"%Y-%m-%d %H:00\")\n stop = stop.strftime(\"%Y-%m-%d %H:00\")\n if constraint == \"overlaps\":\n propertyname = \"apiso:TempExtent_begin\"\n begin = fes.PropertyIsLessThanOrEqualTo(propertyname=propertyname, literal=stop)\n propertyname = \"apiso:TempExtent_end\"\n end = fes.PropertyIsGreaterThanOrEqualTo(\n propertyname=propertyname, literal=start\n )\n elif constraint == \"within\":\n propertyname = \"apiso:TempExtent_begin\"\n begin = fes.PropertyIsGreaterThanOrEqualTo(\n propertyname=propertyname, literal=start\n )\n propertyname = \"apiso:TempExtent_end\"\n end = fes.PropertyIsLessThanOrEqualTo(propertyname=propertyname, literal=stop)\n else:\n raise NameError(\"Unrecognized constraint {}\".format(constraint))\n return begin, end\n\nkw = dict(wildCard=\"*\", escapeChar=\"\\\\\", singleChar=\"?\", propertyname=\"apiso:AnyText\")\n\nor_filt = 
fes.Or([fes.PropertyIsLike(literal=(\"*%s*\" % val), **kw) for val in cf_names])\n\nbegin, end = fes_date_filter(start, stop)\nbbox_crs = fes.BBox(bbox, crs=crs)\n\nfilter_list = [\n fes.And(\n [\n bbox_crs, # bounding box\n begin,\n end, # start and end date\n or_filt, # or conditions (CF variable names)\n ]\n )\n]\n\nfrom owslib.csw import CatalogueServiceWeb\n\nendpoint = \"https://data.ioos.us/csw\"\n\ncsw = CatalogueServiceWeb(endpoint, timeout=60)", "We have created a csw object, but nothing has been searched yet.\nBelow we create a get_csw_records function that calls the OWSLib method getrecords2 iteratively to retrieve all the records matching the search criteria specified by the filter_list.", "def get_csw_records(csw, filter_list, pagesize=10, maxrecords=1000):\n \"\"\"Iterate `maxrecords`/`pagesize` times until the requested value in\n `maxrecords` is reached.\n \"\"\"\n from owslib.fes import SortBy, SortProperty\n\n # Iterate over sorted results.\n sortby = SortBy([SortProperty(\"dc:title\", \"ASC\")])\n csw_records = {}\n startposition = 0\n nextrecord = getattr(csw, \"results\", 1)\n while nextrecord != 0:\n csw.getrecords2(\n constraints=filter_list,\n startposition=startposition,\n maxrecords=pagesize,\n sortby=sortby,\n )\n csw_records.update(csw.records)\n if csw.results[\"nextrecord\"] == 0:\n break\n startposition += pagesize + 1 # Last one is included.\n if startposition >= maxrecords:\n break\n csw.records.update(csw_records)\n\nget_csw_records(csw, filter_list, pagesize=10, maxrecords=1000)\n\nrecords = \"\\n\".join(csw.records.keys())\nprint(\"Found {} records.\\n\".format(len(csw.records.keys())))\nfor key, value in list(csw.records.items()):\n print(u\"[{}]\\n{}\\n\".format(value.title, key))", "That search returned a lot of records!\nWhat if we are not interested in model results or global datasets?\nThose can be excluded from the search with a fes.Not filter.", "kw = dict(wildCard=\"*\", escapeChar=\"\\\\\\\\", 
singleChar=\"?\", propertyname=\"apiso:AnyText\")\n\n\nfilter_list = [\n fes.And(\n [\n bbox_crs, # Bounding box\n begin,\n end, # start and end date\n or_filt, # or conditions (CF variable names).\n fes.Not([fes.PropertyIsLike(literal=\"*NAM*\", **kw)]), # no NAM results\n fes.Not([fes.PropertyIsLike(literal=\"*CONUS*\", **kw)]), # no CONUS results\n fes.Not([fes.PropertyIsLike(literal=\"*GLOBAL*\", **kw)]), # no GLOBAL results\n fes.Not([fes.PropertyIsLike(literal=\"*ROMS*\", **kw)]), # no ROMS results\n ]\n )\n]\n\nget_csw_records(csw, filter_list, pagesize=10, maxrecords=1000)\n\nrecords = \"\\n\".join(csw.records.keys())\nprint(\"Found {} records.\\n\".format(len(csw.records.keys())))\nfor key, value in list(csw.records.items()):\n print(u\"[{}]\\n{}\\n\".format(value.title, key))", "Now we have fewer records to deal with. That's better. But if the user is interested in only some specific service, it is better to filter by a string, like CO-OPS.", "filter_list = [\n fes.And(\n [\n bbox_crs, # Bounding box\n begin,\n end, # start and end date\n or_filt, # or conditions (CF variable names).\n fes.PropertyIsLike(literal=\"*CO-OPS*\", **kw), # must have CO-OPS\n ]\n )\n]\n\nget_csw_records(csw, filter_list, pagesize=10, maxrecords=1000)\n\nrecords = \"\\n\".join(csw.records.keys())\nprint(\"Found {} records.\\n\".format(len(csw.records.keys())))\nfor key, value in list(csw.records.items()):\n print(\"[{}]\\n{}\\n\".format(value.title, key))", "The easiest way to get more information is to explore the individual records.\nHere are the abstract and subjects from the last station in the list.", "import textwrap\n\nvalue = csw.records[key]\n\nprint(\"\\n\".join(textwrap.wrap(value.abstract)))\n\nprint(\"\\n\".join(value.subjects))", "The next step is to inspect the types of services/schemes available for downloading the data. 
The easiest way to accomplish that is by \"sniffing\" the URLs with geolinks.", "from geolinks import sniff_link\n\nmsg = \"geolink: {geolink}\\nscheme: {scheme}\\nURL: {url}\\n\".format\nfor ref in value.references:\n print(msg(geolink=sniff_link(ref[\"url\"]), **ref))", "There are many direct links to Comma Separated Value (CSV) and\neXtensible Markup Language (XML) responses to the various variables available in that station. \nIn addition to those links, there are three very interesting links for more information: 1.) the QC document, 2.) the station photo, 3.) the station home page.\nFor a detailed description of what those geolink results mean, check the lookup table.\nThe original search was focused on sea water temperature,\nso we need to extract only the endpoint for that variable.\nPS: see also the pyoos example for fetching data from CO-OPS.", "start, stop\n\nfor ref in value.references:\n url = ref[\"url\"]\n if \"csv\" in url and \"sea\" in url and \"temperature\" in url:\n print(msg(geolink=sniff_link(url), **ref))\n break", "Note that the URL returned by the service has some hard-coded start/stop dates.\nIt is easy to overwrite those with the same dates from the filter.", "fmt = (\n \"https://opendap.co-ops.nos.noaa.gov/ioos-dif-sos/SOS?\"\n \"service=SOS&\"\n \"eventTime={0:%Y-%m-%dT00:00:00}/{1:%Y-%m-%dT00:00:00}&\"\n \"observedProperty=http://mmisw.org/ont/cf/parameter/sea_water_temperature&\"\n \"version=1.0.0&\"\n \"request=GetObservation&offering=urn:ioos:station:NOAA.NOS.CO-OPS:9439040&\"\n \"responseFormat=text/csv\"\n)\n\nurl = fmt.format(start, stop)", "Finally, it is possible to download the data directly into a pandas data frame and plot it.", "import io\n\nimport pandas as pd\nimport requests\n\nr = requests.get(url)\n\ndf = pd.read_csv(\n io.StringIO(r.content.decode(\"utf-8\")), index_col=\"date_time\", parse_dates=True\n)\n\n%matplotlib inline\nimport matplotlib.pyplot as plt\n\nfig, ax = plt.subplots(figsize=(11, 
2.75))\nax = df[\"sea_water_temperature (C)\"].plot(ax=ax)\nax.set_xlabel(\"\")\nax.set_ylabel(r\"Sea water temperature ($^\\circ$C)\")\nax.set_title(value.title)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
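A detail worth noting in the fes_date_filter function of the CSW notebook above is that it truncates timestamps to the hour before building the fes date literals (its docstring even warns "Truncates the minutes!!!"). A minimal stand-alone sketch of just that formatting step, requiring no owslib (the helper name `truncate_to_hour` is a hypothetical illustration):

```python
from datetime import datetime

def truncate_to_hour(dt):
    """Format a datetime the way fes_date_filter does, dropping minutes/seconds."""
    return dt.strftime("%Y-%m-%d %H:00")

start = datetime(2017, 4, 14, 9, 45, 30)
stop = datetime(2017, 4, 21, 23, 59, 59)
print(truncate_to_hour(start))  # 2017-04-14 09:00
print(truncate_to_hour(stop))   # 2017-04-21 23:00
```

Any sub-hour precision in the query window is therefore lost before the filter reaches the CSW endpoint, which is usually harmless for day-scale searches like the one in the notebook.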
ES-DOC/esdoc-jupyterhub
notebooks/dwd/cmip6/models/sandbox-1/ocnbgchem.ipynb
gpl-3.0
[ "ES-DOC CMIP6 Model Properties - Ocnbgchem\nMIP Era: CMIP6\nInstitute: DWD\nSource ID: SANDBOX-1\nTopic: Ocnbgchem\nSub-Topics: Tracers. \nProperties: 65 (37 required)\nModel descriptions: Model description details\nInitialized From: -- \nNotebook Help: Goto notebook help page\nNotebook Initialised: 2018-02-15 16:53:57\nDocument Setup\nIMPORTANT: to be executed each time you run the notebook", "# DO NOT EDIT ! \nfrom pyesdoc.ipython.model_topic import NotebookOutput \n\n# DO NOT EDIT ! \nDOC = NotebookOutput('cmip6', 'dwd', 'sandbox-1', 'ocnbgchem')", "Document Authors\nSet document authors", "# Set as follows: DOC.set_author(\"name\", \"email\") \n# TODO - please enter value(s)", "Document Contributors\nSpecify document contributors", "# Set as follows: DOC.set_contributor(\"name\", \"email\") \n# TODO - please enter value(s)", "Document Publication\nSpecify document publication status", "# Set publication status: \n# 0=do not publish, 1=publish. \nDOC.set_publication_status(0)", "Document Table of Contents\n1. Key Properties\n2. Key Properties --&gt; Time Stepping Framework --&gt; Passive Tracers Transport\n3. Key Properties --&gt; Time Stepping Framework --&gt; Biology Sources Sinks\n4. Key Properties --&gt; Transport Scheme\n5. Key Properties --&gt; Boundary Forcing\n6. Key Properties --&gt; Gas Exchange\n7. Key Properties --&gt; Carbon Chemistry\n8. Tracers\n9. Tracers --&gt; Ecosystem\n10. Tracers --&gt; Ecosystem --&gt; Phytoplankton\n11. Tracers --&gt; Ecosystem --&gt; Zooplankton\n12. Tracers --&gt; Disolved Organic Matter\n13. Tracers --&gt; Particules\n14. Tracers --&gt; Dic Alkalinity \n1. Key Properties\nOcean Biogeochemistry key properties\n1.1. Model Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of ocean biogeochemistry model", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.ocnbgchem.key_properties.model_overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "1.2. Model Name\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nName of ocean biogeochemistry model code (PISCES 2.0,...)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.model_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "1.3. Model Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nType of ocean biogeochemistry model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.model_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Geochemical\" \n# \"NPZD\" \n# \"PFT\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "1.4. Elemental Stoichiometry\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe elemental stoichiometry (fixed, variable, mix of the two)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.elemental_stoichiometry') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Fixed\" \n# \"Variable\" \n# \"Mix of both\" \n# TODO - please enter value(s)\n", "1.5. Elemental Stoichiometry Details\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe which elements have fixed/variable stoichiometry", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.elemental_stoichiometry_details') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "1.6. 
Prognostic Variables\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nList of all prognostic tracer variables in the ocean biogeochemistry component", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.prognostic_variables') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "1.7. Diagnostic Variables\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nList of all diagnostic tracer variables in the ocean biogeochemistry component", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.diagnostic_variables') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "1.8. Damping\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe any tracer damping used (such as artificial correction or relaxation to climatology,...)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.damping') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "2. Key Properties --&gt; Time Stepping Framework --&gt; Passive Tracers Transport\nTime stepping method for passive tracers transport in ocean biogeochemistry\n2.1. Method\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTime stepping framework for passive tracers", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.passive_tracers_transport.method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"use ocean model transport time step\" \n# \"use specific time step\" \n# TODO - please enter value(s)\n", "2.2. 
Timestep If Not From Ocean\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nTime step for passive tracers (if different from ocean)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.passive_tracers_transport.timestep_if_not_from_ocean') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "3. Key Properties --&gt; Time Stepping Framework --&gt; Biology Sources Sinks\nTime stepping framework for biology sources and sinks in ocean biogeochemistry\n3.1. Method\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTime stepping framework for biology sources and sinks", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.biology_sources_sinks.method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"use ocean model transport time step\" \n# \"use specific time step\" \n# TODO - please enter value(s)\n", "3.2. Timestep If Not From Ocean\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nTime step for biology sources and sinks (if different from ocean)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.biology_sources_sinks.timestep_if_not_from_ocean') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "4. Key Properties --&gt; Transport Scheme\nTransport scheme in ocean biogeochemistry\n4.1. Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nType of transport scheme", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.ocnbgchem.key_properties.transport_scheme.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Offline\" \n# \"Online\" \n# TODO - please enter value(s)\n", "4.2. Scheme\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTransport scheme used", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.transport_scheme.scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Use that of ocean model\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "4.3. Use Different Scheme\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe transport scheme if different from that of ocean model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.transport_scheme.use_different_scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "5. Key Properties --&gt; Boundary Forcing\nProperties of biogeochemistry boundary forcing\n5.1. Atmospheric Deposition\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe how atmospheric deposition is modeled", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.atmospheric_deposition') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"from file (climatology)\" \n# \"from file (interannual variations)\" \n# \"from Atmospheric Chemistry model\" \n# TODO - please enter value(s)\n", "5.2. River Input\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe how river input is modeled", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.river_input') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"from file (climatology)\" \n# \"from file (interannual variations)\" \n# \"from Land Surface model\" \n# TODO - please enter value(s)\n", "5.3. Sediments From Boundary Conditions\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList which sediments are specified from boundary condition", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.sediments_from_boundary_conditions') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "5.4. Sediments From Explicit Model\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList which sediments are specified from explicit sediment model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.sediments_from_explicit_model') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "6. Key Properties --&gt; Gas Exchange\n*Properties of gas exchange in ocean biogeochemistry *\n6.1. CO2 Exchange Present\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs CO2 gas exchange modeled ?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CO2_exchange_present') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "6.2. CO2 Exchange Type\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe CO2 gas exchange", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CO2_exchange_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"OMIP protocol\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "6.3. O2 Exchange Present\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs O2 gas exchange modeled ?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.O2_exchange_present') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "6.4. O2 Exchange Type\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe O2 gas exchange", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.O2_exchange_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"OMIP protocol\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "6.5. DMS Exchange Present\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs DMS gas exchange modeled ?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.DMS_exchange_present') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "6.6. DMS Exchange Type\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nSpecify DMS gas exchange scheme type", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.DMS_exchange_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "6.7. 
N2 Exchange Present\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs N2 gas exchange modeled ?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2_exchange_present') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "6.8. N2 Exchange Type\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nSpecify N2 gas exchange scheme type", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2_exchange_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "6.9. N2O Exchange Present\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs N2O gas exchange modeled ?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2O_exchange_present') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "6.10. N2O Exchange Type\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nSpecify N2O gas exchange scheme type", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2O_exchange_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "6.11. CFC11 Exchange Present\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs CFC11 gas exchange modeled ?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC11_exchange_present') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "6.12. 
CFC11 Exchange Type\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nSpecify CFC11 gas exchange scheme type", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC11_exchange_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "6.13. CFC12 Exchange Present\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs CFC12 gas exchange modeled ?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC12_exchange_present') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "6.14. CFC12 Exchange Type\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nSpecify CFC12 gas exchange scheme type", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC12_exchange_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "6.15. SF6 Exchange Present\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs SF6 gas exchange modeled ?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.SF6_exchange_present') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "6.16. SF6 Exchange Type\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nSpecify SF6 gas exchange scheme type", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.SF6_exchange_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "6.17. 
13CO2 Exchange Present\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs 13CO2 gas exchange modeled ?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.13CO2_exchange_present') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "6.18. 13CO2 Exchange Type\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nSpecify 13CO2 gas exchange scheme type", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.13CO2_exchange_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "6.19. 14CO2 Exchange Present\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs 14CO2 gas exchange modeled ?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.14CO2_exchange_present') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "6.20. 14CO2 Exchange Type\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nSpecify 14CO2 gas exchange scheme type", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.14CO2_exchange_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "6.21. Other Gases\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nSpecify any other gas exchange", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.other_gases') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "7. 
Key Properties --&gt; Carbon Chemistry\nProperties of carbon chemistry biogeochemistry\n7.1. Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe how carbon chemistry is modeled", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.carbon_chemistry.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"OMIP protocol\" \n# \"Other protocol\" \n# TODO - please enter value(s)\n", "7.2. PH Scale\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nIf NOT OMIP protocol, describe pH scale.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.carbon_chemistry.pH_scale') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Sea water\" \n# \"Free\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "7.3. Constants If Not OMIP\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nIf NOT OMIP protocol, list carbon chemistry constants.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.carbon_chemistry.constants_if_not_OMIP') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "8. Tracers\nOcean biogeochemistry tracers\n8.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of tracers in ocean biogeochemistry", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "8.2. Sulfur Cycle Present\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs sulfur cycle modeled ?", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.ocnbgchem.tracers.sulfur_cycle_present') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "8.3. Nutrients Present\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nList nutrient species present in ocean biogeochemistry model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.nutrients_present') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Nitrogen (N)\" \n# \"Phosphorous (P)\" \n# \"Silicium (S)\" \n# \"Iron (Fe)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "8.4. Nitrous Species If N\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nIf nitrogen present, list nitrous species.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.nitrous_species_if_N') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Nitrates (NO3)\" \n# \"Amonium (NH4)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "8.5. Nitrous Processes If N\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nIf nitrogen present, list nitrous processes.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.nitrous_processes_if_N') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Dentrification\" \n# \"N fixation\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "9. Tracers --&gt; Ecosystem\nEcosystem properties in ocean biogeochemistry\n9.1. Upper Trophic Levels Definition\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDefinition of upper trophic level (e.g. based on size) ?", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.upper_trophic_levels_definition') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "9.2. Upper Trophic Levels Treatment\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDefine how upper trophic levels are treated", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.upper_trophic_levels_treatment') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "10. Tracers --&gt; Ecosystem --&gt; Phytoplankton\nPhytoplankton properties in ocean biogeochemistry\n10.1. Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nType of phytoplankton", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.phytoplankton.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"None\" \n# \"Generic\" \n# \"PFT including size based (specify both below)\" \n# \"Size based only (specify below)\" \n# \"PFT only (specify below)\" \n# TODO - please enter value(s)\n", "10.2. Pft\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nPhytoplankton functional types (PFT) (if applicable)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.phytoplankton.pft') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Diatoms\" \n# \"Nfixers\" \n# \"Calcifiers\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "10.3. Size Classes\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nPhytoplankton size classes (if applicable)", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.phytoplankton.size_classes') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Microphytoplankton\" \n# \"Nanophytoplankton\" \n# \"Picophytoplankton\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "11. Tracers --&gt; Ecosystem --&gt; Zooplankton\nZooplankton properties in ocean biogeochemistry\n11.1. Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nType of zooplankton", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.zooplankton.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"None\" \n# \"Generic\" \n# \"Size based (specify below)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "11.2. Size Classes\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nZooplankton size classes (if applicable)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.zooplankton.size_classes') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Microzooplankton\" \n# \"Mesozooplankton\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "12. Tracers --&gt; Disolved Organic Matter\nDisolved organic matter properties in ocean biogeochemistry\n12.1. Bacteria Present\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs there bacteria representation ?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.disolved_organic_matter.bacteria_present') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "12.2. 
Lability\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe treatment of lability in dissolved organic matter", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.disolved_organic_matter.lability') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"None\" \n# \"Labile\" \n# \"Semi-labile\" \n# \"Refractory\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "13. Tracers --&gt; Particules\nParticulate carbon properties in ocean biogeochemistry\n13.1. Method\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nHow is particulate carbon represented in ocean biogeochemistry?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.particules.method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Diagnostic\" \n# \"Diagnostic (Martin profile)\" \n# \"Diagnostic (Balast)\" \n# \"Prognostic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "13.2. Types If Prognostic\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nIf prognostic, type(s) of particulate matter taken into account", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.particules.types_if_prognostic') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"POC\" \n# \"PIC (calcite)\" \n# \"PIC (aragonite\" \n# \"BSi\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "13.3. Size If Prognostic\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nIf prognostic, describe if a particule size spectrum is used to represent distribution of particules in water volume", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.ocnbgchem.tracers.particules.size_if_prognostic') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"No size spectrum used\" \n# \"Full size spectrum\" \n# \"Discrete size classes (specify which below)\" \n# TODO - please enter value(s)\n", "13.4. Size If Discrete\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nIf prognostic and discrete size, describe which size classes are used", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.particules.size_if_discrete') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "13.5. Sinking Speed If Prognostic\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nIf prognostic, method for calculation of sinking speed of particules", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.particules.sinking_speed_if_prognostic') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Constant\" \n# \"Function of particule size\" \n# \"Function of particule type (balast)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "14. Tracers --&gt; Dic Alkalinity\nDIC and alkalinity properties in ocean biogeochemistry\n14.1. Carbon Isotopes\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nWhich carbon isotopes are modelled (C13, C14)?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.dic_alkalinity.carbon_isotopes') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"C13\" \n# \"C14)\" \n# TODO - please enter value(s)\n", "14.2. Abiotic Carbon\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs abiotic carbon modelled ?", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.ocnbgchem.tracers.dic_alkalinity.abiotic_carbon') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "14.3. Alkalinity\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nHow is alkalinity modelled ?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.dic_alkalinity.alkalinity') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Prognostic\" \n# \"Diagnostic)\" \n# TODO - please enter value(s)\n", "©2017 ES-DOC" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
joshspeagle/dynesty
demos/Examples -- Exponential Wave.ipynb
mit
[ "Exponential Wave\nA simple toy problem first suggested by Johannes Buchner.\nSetup\nFirst, let's set up some environmental dependencies. These just make the numerics easier and adjust some of the plotting defaults to make things more legible.", "# system functions that are always useful to have\nimport time, sys, os\n\n# basic numeric setup\nimport numpy as np\n\n# inline plotting\n%matplotlib inline\n\n# plotting\nimport matplotlib\nfrom matplotlib import pyplot as plt\n\n# seed the random number generator\nrstate = np.random.default_rng(916301)\n\n# re-defining plotting defaults\nfrom matplotlib import rcParams\nrcParams.update({'xtick.major.pad': '7.0'})\nrcParams.update({'xtick.major.size': '7.5'})\nrcParams.update({'xtick.major.width': '1.5'})\nrcParams.update({'xtick.minor.pad': '7.0'})\nrcParams.update({'xtick.minor.size': '3.5'})\nrcParams.update({'xtick.minor.width': '1.0'})\nrcParams.update({'ytick.major.pad': '7.0'})\nrcParams.update({'ytick.major.size': '7.5'})\nrcParams.update({'ytick.major.width': '1.5'})\nrcParams.update({'ytick.minor.pad': '7.0'})\nrcParams.update({'ytick.minor.size': '3.5'})\nrcParams.update({'ytick.minor.width': '1.0'})\nrcParams.update({'font.size': 30})\n\nimport dynesty", "We first generate a simple transformed periodic signal from $0$ to $2\\pi$ based on the relation:\n$$ y(x) = \\exp\\left[ n_a \\sin(f_a x + p_a) + n_b \\sin(f_b x + p_b) \\right] $$\nThis has six free parameters controlling the relevant amplitude, period, and phase of each component. 
We also have a seventh, $\\sigma$, corresponding to the amount of scatter.", "# x values sampled uniformly\nx = rstate.uniform(0, 2 * np.pi, size=100)\nx.sort()\n\n# define model\nfa = 4.2 / 4\nfb = 42 / 10\nna = 0.8\nnb = 0.3\npa = 0.1\npb = 2.4\nsigma = 0.2\n\n# generate noisy observations\nypred = np.exp(na * np.sin(x * fa + pa) + nb * np.sin(x * fb + pb))\ny = rstate.normal(ypred, sigma)\n\n# plot results\nplt.figure(figsize=(12, 5))\nplt.plot(x, y, color='black', marker='x', \n ls='none', alpha=0.9, markersize=10)\nplt.plot(x, ypred, marker='o', color='red', ls='none', alpha=0.7)\nplt.xlim([-0.1, 2 * np.pi + 0.1])\nplt.xlabel('X')\nplt.ylabel('Y')\nplt.tight_layout()", "Our priors will be uniform in all dimensions, with the phases having periodic boundary conditions.", "def prior_transform(u):\n v = u * 100\n v[0] = u[0] * 4 - 2\n v[1] = u[1] * 4 - 2\n v[2] = (u[2] % 1.) * 2 * np.pi\n v[3] = u[3] * 4 - 2\n v[4] = u[4] * 4 - 2\n v[5] = (u[5] % 1.) * 2 * np.pi\n v[6] = u[6] * 2 - 2\n return v\n\ndef loglike(v):\n logna, logfa, pa, lognb, logfb, pb, logsigma = v\n na, fa, pa, nb, fb, pb, sigma = (10**logna, 10**logfa, pa, \n 10**lognb, 10**logfb, pb, 10**logsigma)\n ypred = np.exp(na * np.sin(x * fa + pa) + nb * np.sin(x * fb + pb))\n residsq = (ypred - y)**2 / sigma**2\n loglike = -0.5 * np.sum(residsq + np.log(2 * np.pi * sigma**2))\n \n if not np.isfinite(loglike):\n loglike = -1e300\n \n return loglike", "Let's sample from this distribution using dynesty's 'rslice' sampling method.", "# sample\nsampler = dynesty.DynamicNestedSampler(loglike, prior_transform, 7, \n periodic=[2, 5],\n sample='rslice', nlive=2000,\n rstate=rstate)\nsampler.run_nested()\nres = sampler.results", "Let's see how we did.", "from dynesty import plotting as dyplot\n\ndyplot.runplot(res)\nplt.tight_layout()\n\nlabels = [r'$\\log n_a$', r'$\\log f_a$', r'$p_a$', \n r'$\\log n_b$', r'$\\log f_b$', r'$p_b$', \n r'$\\log \\sigma$']\ntruths = [np.log10(na), np.log10(fa), pa,\n 
np.log10(nb), np.log10(fb), pb,\n np.log10(sigma)]\nfig, axes = dyplot.traceplot(res, labels=labels, truths=truths,\n fig=plt.subplots(7, 2, figsize=(16, 25)))\nfig.tight_layout()\n\nfig, axes = dyplot.cornerplot(res, truths=truths, show_titles=True, \n title_kwargs={'y': 1.04}, labels=labels,\n fig=plt.subplots(7, 7, figsize=(35, 35)))" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
sinkap/bart
notebooks/sched/SchedDeadline.ipynb
apache-2.0
[ "Setup", "from trappy.stats.Topology import Topology\nfrom bart.sched.SchedMultiAssert import SchedMultiAssert\nfrom bart.sched.SchedAssert import SchedAssert\nimport trappy\nimport os\nimport operator\nimport json\n\n#Define a CPU Topology (for multi-cluster systems)\nBIG = [1, 2]\nLITTLE = [0, 3, 4, 5]\nCLUSTERS = [BIG, LITTLE]\ntopology = Topology(clusters=CLUSTERS)\n\nBASE_PATH = \"/Users/kapileshwarsingh/AnalysisRawData/LPC/sched_deadline/\"\n\nTHRESHOLD = 10.0\ndef between_threshold(a, b):\n return abs(((a - b) * 100.0) / b) < THRESHOLD", "Periodic Yield\nThe thread periodic_yield is woken up at 30ms intervals where it calls sched_yield and relinquishes its time-slice.\nThe expectation is that the task will have a duty cycle < 1% and a period of 30ms.\nThere are two threads, and the rank=1 conveys that the condition is true for one of the threads with the name \"periodic_yield\"", "TRACE_FILE = os.path.join(BASE_PATH, \"yield\")\nrun = trappy.Run(TRACE_FILE, \"cpuhog\")\n\n# Assert Period\ns = SchedMultiAssert(run, topology, execnames=\"periodic_yield\")\nif s.assertPeriod(30, between_threshold, rank=1):\n print \"PASS: Period\"\n print json.dumps(s.getPeriod(), indent=3)\n\nprint \"\"\n \n# Assert DutyCycle \nif s.assertDutyCycle(1, operator.lt, window=(0,4), rank=2):\n print \"PASS: DutyCycle\"\n print json.dumps(s.getDutyCycle(window=(0,4)), indent=3)", "CPU Hog\nThe reservation of a CPU hogging task is set to 10ms for every 100ms. 
The assertion ensures a duty cycle of 10%", "TRACE_FILE = os.path.join(BASE_PATH, \"cpuhog\")\nrun = trappy.Run(TRACE_FILE, \"cpuhog\")\ns = SchedMultiAssert(run, topology, execnames=\"cpuhog\")\ns.plot().view()\n\n# Assert DutyCycle\nif s.assertDutyCycle(10, between_threshold, window=(0, 5), rank=1):\n print \"PASS: DutyCycle\"\n print json.dumps(s.getDutyCycle(window=(0, 5)), indent=3)", "Changing Reservations\nA CPU hogging task has reservations set in the increasing order starting from 10% followed by a 2s period of normal execution", "TRACE_FILE = os.path.join(BASE_PATH, \"cancel_dl_timer\")\nrun = trappy.Run(TRACE_FILE, \"cpuhog\")\ns = SchedAssert(run, topology, execname=\"cpuhog\")\ns.plot().view()\n\nNUM_PHASES = 10\nPHASE_DURATION = 2\nstart = s.getStartTime()\nDUTY_CYCLE_FACTOR = 10\n\n\nfor phase in range(NUM_PHASES + 1):\n window = (start + (phase * PHASE_DURATION),\n start + ((phase + 1) * PHASE_DURATION))\n \n if phase % 2 == 0:\n DUTY_CYCLE = (phase + 2) * DUTY_CYCLE_FACTOR / 2\n else:\n DUTY_CYCLE = 100\n\n\n print \"WINDOW -> [{:.2f}, {:.2f}]\".format(window[0],\n window[1])\n \n \n \n if s.assertDutyCycle(DUTY_CYCLE, between_threshold, window=window):\n print \"PASS: Expected={} Actual={:.2f} THRESHOLD={}\".format(DUTY_CYCLE,\n s.getDutyCycle(window=window),\n THRESHOLD)\n else:\n print \"FAIL: Expected={} Actual={:.2f} THRESHOLD={}\".format(DUTY_CYCLE,\n s.getDutyCycle(window=window),\n THRESHOLD)\n \n print \"\"" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
ProfessorKazarinoff/staticsite
content/code/mechanics/mechanics_problem.ipynb
gpl-3.0
[ "import sys\nimport platform\nfrom math import sqrt, pi\n\nprint('Python version: ', sys.version)\nprint('Running on: ', sys.platform)", "Given: A steel bar with a cross-section of 130 mm<sup>2</sup> and a length of 50.8 mm is axially loaded in tension with a force of 57.0 kN. The modulus of elasticity of the steel is 210 GPa.\nFind: What is the axial stress in the bar?\nSolution: \nF = 57.0 kN\nF = 57000 N\nA<sub>0</sub> = 130 mm<sup>2</sup>\n&sigma; = ?\n$$ \\sigma = \\frac{F}{A_0} $$", "F = 57000\nA0 = 130\nstress = F/A0\nprint(stress)", "Find: What is the resulting strain?\nSolution: \n&sigma; = 438.46 MPa\nE = 210 GPa\nE = 210000 MPa\n&epsilon; = ?\n$$ E = \\frac{\\Delta\\sigma}{\\Delta\\epsilon} $$", "E = 210000\n# E = stress/strain\nstrain = stress/E\nprint(strain)", "Find: What is the change in length of the bar?\nSolution:\n&epsilon; = 0.0020879\nL<sub>0</sub> = 50.8 mm\n&Delta;L = ?\n$$ \\epsilon = \\frac{\\Delta L}{L_0} $$\n$$ \\Delta L = \\epsilon L_0 $$", "L0=50.8\ndeltaL =strain*L0\nprint(deltaL)", "Find: What is the final length of the bar?\nSolution:\nL<sub>0</sub> = 50.8 mm\n&Delta;L = 0.10607 mm\nL<sub>f</sub> = ?\n$$ L_f = L_0 +\\Delta L $$", "Lf = L0 + deltaL\nprint(Lf)", "2. Given: An aluminum rod will be loaded in tension with a force of 350 kip.\nFind: What is the minimum diameter required to have a factor of safety of 1.9 against yield, if the yield stress is 40 ksi?\nSolution:\nSF = 1.9\n&sigma;<sub>y</sub> = 40 ksi\nd = ?", "SF =1.9\nstress_y = 40\nstress_app = stress_y/SF\nprint('Stress: ',stress_app)\nF = 350\nA0 = F/stress_app\nprint('Cross-sectional Area: ', A0)\nd = sqrt(4*A0/pi)\nprint('Diameter, d = ', d)", "3. A composite rod is axially loaded. 
It is rigidly mounted at the wall and a single load <b>P</b> of 50 kN is applied at the end.\n| Material | Area (mm<sup>2</sup>) | Length (mm) | Elastic Modulus, E (GPa) | Yield Strength, &sigma;<sub>y</sub> (MPa) |\n| --- | --- | --- | --- | --- |\n|Aluminum | 400 | 600 | 70.0| 240 |\n|Brass | 300 | 800 | 105| 410 |\na. What is the stress in the Aluminum?", "A_Al=400\nF = 50000\nstress_Al= F/A_Al\nprint(stress_Al)", "b. What is the stress in the Brass?", "A_Br = 300\nstress_Br = F/A_Br\nprint(stress_Br)", "c. What is the final length of the entire composite rod once the load <b>P</b> is applied?", "E_Al = 70000\nstrain_Al = stress_Al/E_Al\nprint(strain_Al)\n\nL0_Al = 600\ndL_Al = strain_Al*L0_Al\nprint(dL_Al)\n\nE_Br = 105000\nstrain_Br = stress_Br/E_Br\nprint(strain_Br)\n\nL0_Br = 800\ndL_Br = strain_Br*L0_Br\nprint(dL_Br)\n\nLf = L0_Al + dL_Al + L0_Br + dL_Br\nprint(Lf)", "d. What is the factor of safety against yield for the brass?", "stress_y_Br = 410\nSF_Br = stress_y_Br / stress_Br\nprint(SF_Br)" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
Merinorus/adaisawesome
Homework/01 - Pandas and Data Wrangling/Data Wrangling with Pandas.ipynb
gpl-3.0
[ "Table of Contents\n<p><div class=\"lev1\"><a href=\"#Data-Wrangling-with-Pandas\"><span class=\"toc-item-num\">1&nbsp;&nbsp;</span>Data Wrangling with Pandas</a></div><div class=\"lev2\"><a href=\"#Date/Time-data-handling\"><span class=\"toc-item-num\">1.1&nbsp;&nbsp;</span>Date/Time data handling</a></div><div class=\"lev2\"><a href=\"#Merging-and-joining-DataFrame-objects\"><span class=\"toc-item-num\">1.2&nbsp;&nbsp;</span>Merging and joining DataFrame objects</a></div><div class=\"lev2\"><a href=\"#Concatenation\"><span class=\"toc-item-num\">1.3&nbsp;&nbsp;</span>Concatenation</a></div><div class=\"lev2\"><a href=\"#Exercise-1\"><span class=\"toc-item-num\">1.4&nbsp;&nbsp;</span>Exercise 1</a></div><div class=\"lev2\"><a href=\"#Reshaping-DataFrame-objects\"><span class=\"toc-item-num\">1.5&nbsp;&nbsp;</span>Reshaping DataFrame objects</a></div><div class=\"lev2\"><a href=\"#Pivoting\"><span class=\"toc-item-num\">1.6&nbsp;&nbsp;</span>Pivoting</a></div><div class=\"lev2\"><a href=\"#Data-transformation\"><span class=\"toc-item-num\">1.7&nbsp;&nbsp;</span>Data transformation</a></div><div class=\"lev3\"><a href=\"#Dealing-with-duplicates\"><span class=\"toc-item-num\">1.7.1&nbsp;&nbsp;</span>Dealing with duplicates</a></div><div class=\"lev3\"><a href=\"#Value-replacement\"><span class=\"toc-item-num\">1.7.2&nbsp;&nbsp;</span>Value replacement</a></div><div class=\"lev3\"><a href=\"#Inidcator-variables\"><span class=\"toc-item-num\">1.7.3&nbsp;&nbsp;</span>Inidcator variables</a></div><div class=\"lev2\"><a href=\"#Categorical-Data\"><span class=\"toc-item-num\">1.8&nbsp;&nbsp;</span>Categorical Data</a></div><div class=\"lev3\"><a href=\"#Discretization\"><span class=\"toc-item-num\">1.8.1&nbsp;&nbsp;</span>Discretization</a></div><div class=\"lev3\"><a href=\"#Permutation-and-sampling\"><span class=\"toc-item-num\">1.8.2&nbsp;&nbsp;</span>Permutation and sampling</a></div><div class=\"lev2\"><a href=\"#Data-aggregation-and-GroupBy-operations\"><span 
class=\"toc-item-num\">1.9&nbsp;&nbsp;</span>Data aggregation and GroupBy operations</a></div><div class=\"lev3\"><a href=\"#Apply\"><span class=\"toc-item-num\">1.9.1&nbsp;&nbsp;</span>Apply</a></div><div class=\"lev2\"><a href=\"#Exercise-2\"><span class=\"toc-item-num\">1.10&nbsp;&nbsp;</span>Exercise 2</a></div><div class=\"lev2\"><a href=\"#References\"><span class=\"toc-item-num\">1.11&nbsp;&nbsp;</span>References</a></div>\n\n# Data Wrangling with Pandas\n\nNow that we have been exposed to the basic functionality of Pandas, lets explore some more advanced features that will be useful when addressing more complex data management tasks.\n\nAs most statisticians/data analysts will admit, often the lion's share of the time spent implementing an analysis is devoted to preparing the data itself, rather than to coding or running a particular model that uses the data. This is where Pandas and Python's standard library are beneficial, providing high-level, flexible, and efficient tools for manipulating your data as needed.", "%matplotlib inline\nimport pandas as pd\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport seaborn as sns\nsns.set_context('notebook')", "Date/Time data handling\nDate and time data are inherently problematic. There are an unequal number of days in every month, an unequal number of days in a year (due to leap years), and time zones that vary over space. Yet information about time is essential in many analyses, particularly in the case of time series analysis.\nThe datetime built-in library handles temporal information down to the nanosecond.", "from datetime import datetime\n\nnow = datetime.now()\nnow\n\nnow.day\n\nnow.weekday()", "In addition to datetime there are simpler objects for date and time information only, respectively.", "from datetime import date, time\n\ntime(3, 24)\n\ndate(1970, 9, 3)", "Having a custom data type for dates and times is convenient because we can perform operations on them easily. 
For example, we may want to calculate the difference between two times:", "my_age = now - datetime(1970, 1, 1)\nmy_age\n\nprint(type(my_age))\nmy_age.days/365", "In this section, we will manipulate data collected from ocean-going vessels on the eastern seaboard. Vessel operations are monitored using the Automatic Identification System (AIS), a safety at sea navigation technology which vessels are required to maintain and that uses transponders to transmit very high frequency (VHF) radio signals containing static information including ship name, call sign, and country of origin, as well as dynamic information unique to a particular voyage such as vessel location, heading, and speed. \nThe International Maritime Organization’s (IMO) International Convention for the Safety of Life at Sea requires functioning AIS capabilities on all vessels 300 gross tons or greater and the US Coast Guard requires AIS on nearly all vessels sailing in U.S. waters. The Coast Guard has established a national network of AIS receivers that provides coverage of nearly all U.S. waters. AIS signals are transmitted several times each minute and the network is capable of handling thousands of reports per minute and updates as often as every two seconds. Therefore, a typical voyage in our study might include the transmission of hundreds or thousands of AIS encoded signals. This provides a rich source of spatial data that includes both spatial and temporal information.\nFor our purposes, we will use summarized data that describes the transit of a given vessel through a particular administrative area. 
The data includes the start and end time of the transit segment, as well as information about the speed of the vessel, how far it travelled, etc.", "segments = pd.read_csv(\"Data/AIS/transit_segments.csv\")\nsegments.head()", "For example, we might be interested in the distribution of transit lengths, so we can plot them as a histogram:", "segments.seg_length.hist(bins=500)", "Though most of the transits appear to be short, there are a few longer distances that make the plot difficult to read. This is where a transformation is useful:", "segments.seg_length.apply(np.log).hist(bins=500)", "We can see that although there are date/time fields in the dataset, they are not in any specialized format, such as datetime.", "segments.st_time.dtype", "Our first order of business will be to convert these data to datetime. The strptime method parses a string representation of a date and/or time field, according to the expected format of this information.", "datetime.strptime(segments.st_time.iloc[0], '%m/%d/%y %H:%M')", "The dateutil package includes a parser that attempts to detect the format of the date strings, and convert them automatically.", "from dateutil.parser import parse\n\nparse(segments.st_time.iloc[0])", "We can convert all the dates in a particular column by using the apply method.", "segments.st_time.apply(lambda d: datetime.strptime(d, '%m/%d/%y %H:%M'))", "As a convenience, Pandas has a to_datetime method that will parse and convert an entire Series of formatted strings into datetime objects.", "pd.to_datetime(segments.st_time[:10])", "Pandas also has a custom NA value for missing datetime objects, NaT.", "pd.to_datetime([None])", "Also, if to_datetime() has problems parsing any particular date/time format, you can pass the spec in using the format= argument.\nThe read_* functions now have an optional parse_dates argument that tries to convert any columns passed to it into datetime format upon import:", "segments = pd.read_csv(\"Data/AIS/transit_segments.csv\", 
parse_dates=['st_time', 'end_time'])\n\nsegments.dtypes", "Columns of the datetime type have an accessor to easily extract properties of the data type. This will return a Series, with the same row index as the DataFrame. For example:", "segments.st_time.dt.month.head()\n\nsegments.st_time.dt.hour.head()", "This can be used to easily filter rows by particular temporal attributes:", "segments[segments.st_time.dt.month==2].head()", "In addition, time zone information can be applied:", "segments.st_time.dt.tz_localize('UTC').head()\n\nsegments.st_time.dt.tz_localize('UTC').dt.tz_convert('US/Eastern').head()", "Merging and joining DataFrame objects\nNow that we have the vessel transit information as we need it, we may want a little more information regarding the vessels themselves. In the data/AIS folder there is a second table that contains information about each of the ships that traveled the segments in the segments table.", "vessels = pd.read_csv(\"Data/AIS/vessel_information.csv\", index_col='mmsi')\nvessels.head()\n\n[v for v in vessels.type.unique() if v.find('/')==-1]\n\nvessels.type.value_counts()", "The challenge, however, is that several ships have travelled multiple segments, so there is not a one-to-one relationship between the rows of the two tables. The table of vessel information has a one-to-many relationship with the segments.\nIn Pandas, we can combine tables according to the value of one or more keys that are used to identify rows, much like an index. Using a trivial example:", "df1 = pd.DataFrame(dict(id=range(4), age=np.random.randint(18, 31, size=4)))\ndf2 = pd.DataFrame(dict(id=list(range(3))+list(range(3)), \n score=np.random.random(size=6)))\n\ndf1\n\ndf2\n\npd.merge(df1, df2)", "Notice that without any information about which column to use as a key, Pandas did the right thing and used the id column in both tables. Unless specified otherwise, merge will use any common column names as keys for merging the tables. 
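To make the behavior explicit, you can name the key column and the join type yourself. A minimal sketch with toy tables (the `left` and `right` frames here are hypothetical, not the AIS data):

```python
import pandas as pd

# Toy tables that share an 'id' key column
left = pd.DataFrame({'id': [0, 1, 2, 3], 'age': [21, 25, 30, 19]})
right = pd.DataFrame({'id': [0, 1, 2, 0, 1, 2],
                      'score': [0.1, 0.2, 0.3, 0.4, 0.5, 0.6]})

# Name the key and the join type explicitly: a left join keeps
# every row of `left`, filling unmatched scores with NaN
merged = pd.merge(left, right, on='id', how='left')
print(merged.shape)  # (7, 3): id=3 survives, with a NaN score
```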
\nNotice also that id=3 from df1 was omitted from the merged table. This is because, by default, merge performs an inner join on the tables, meaning that the merged table represents an intersection of the two tables.", "pd.merge(df1, df2, how='outer')", "The outer join above yields the union of the two tables, so all rows are represented, with missing values inserted as appropriate. One can also perform right and left joins to include all rows of the right or left table (i.e. first or second argument to merge), but not necessarily the other.\nLooking at the two datasets that we wish to merge:", "segments.head(1)\n\nvessels.head(1)", "we see that there is a mmsi value (a vessel identifier) in each table, but it is used as an index for the vessels table. In this case, we have to specify to join on the index for this table, and on the mmsi column for the other.", "segments_merged = pd.merge(vessels, segments, left_index=True, right_on='mmsi')\n\nsegments_merged.head()", "In this case, the default inner join is suitable; we are not interested in observations from either table that do not have corresponding entries in the other. \nNotice that the mmsi field that was an index on the vessels table is no longer an index on the merged table.\nHere, we used the merge function to perform the merge; we could also have used the merge method for either of the tables:", "vessels.merge(segments, left_index=True, right_on='mmsi').head()", "Occasionally, there will be fields with the same name in both tables that we do not wish to use to join the tables; they may contain different information, despite having the same name. 
In this case, Pandas will by default append suffixes _x and _y to the columns to uniquely identify them.", "segments['type'] = 'foo'\npd.merge(vessels, segments, left_index=True, right_on='mmsi').head()", "This behavior can be overridden by specifying a suffixes argument, containing a list of the suffixes to be used for the columns of the left and right columns, respectively.\nConcatenation\nA common data manipulation is appending rows or columns to a dataset that already conform to the dimensions of the existing rows or columns, respectively. In NumPy, this is done either with concatenate or the convenience \"functions\" c_ and r_:", "np.concatenate([np.random.random(5), np.random.random(5)])\n\nnp.r_[np.random.random(5), np.random.random(5)]\n\nnp.c_[np.random.random(5), np.random.random(5)]", "Notice that c_ and r_ are not really functions at all, since they perform some sort of indexing operation, rather than being called. They are actually class instances, but they are here behaving mostly like functions. Don't think about this too hard; just know that they are there.\n\nThis operation is also called binding or stacking.\nWith Pandas' indexed data structures, there are additional considerations as the overlap in index values between two data structures affects how they are concatenated.\nLet's import two microbiome datasets, each consisting of counts of microorganisms from a particular patient. 
We will use the first column of each dataset as the index.", "mb1 = pd.read_excel('Data/microbiome/MID1.xls', 'Sheet 1', index_col=0, header=None)\nmb2 = pd.read_excel('Data/microbiome/MID2.xls', 'Sheet 1', index_col=0, header=None)\nmb1.shape, mb2.shape\n\nmb1.head()", "Let's give the index and columns meaningful labels:", "mb1.columns = mb2.columns = ['Count']\n\nmb1.index.name = mb2.index.name = 'Taxon'\n\nmb1.head()", "The index of these data is the unique biological classification of each organism, beginning with domain, phylum, class, and for some organisms, going all the way down to the genus level.", "mb1.index[:3]\n\nmb1.index.is_unique", "If we concatenate along axis=0 (the default), we will obtain another data frame with the rows concatenated:", "pd.concat([mb1, mb2], axis=0).shape", "However, the index is no longer unique, due to overlap between the two DataFrames.", "pd.concat([mb1, mb2], axis=0).index.is_unique", "Concatenating along axis=1 will concatenate column-wise, but respecting the indices of the two DataFrames.", "pd.concat([mb1, mb2], axis=1).shape\n\npd.concat([mb1, mb2], axis=1).head()", "If we are only interested in taxa that are included in both DataFrames, we can specify a join=inner argument.", "pd.concat([mb1, mb2], axis=1, join='inner').head()", "If we wanted to use the second table to fill values absent from the first table, we could use combine_first.", "mb1.combine_first(mb2).head()", "We can also create a hierarchical index based on keys identifying the original tables.", "pd.concat([mb1, mb2], keys=['patient1', 'patient2']).head()\n\npd.concat([mb1, mb2], keys=['patient1', 'patient2']).index.is_unique", "Alternatively, you can pass keys to the concatenation by supplying the DataFrames (or Series) as a dict, resulting in a \"wide\" format table.", "pd.concat(dict(patient1=mb1, patient2=mb2), axis=1).head()", "If you want concat to work like numpy.concatenate, you may provide the ignore_index=True argument.\nExercise 1\nIn the 
data/microbiome subdirectory, there are 9 spreadsheets of microbiome data that was acquired from high-throughput RNA sequencing procedures, along with a 10th file that describes the content of each. Write code that imports each of the data spreadsheets and combines them into a single DataFrame, adding the identifying information from the metadata spreadsheet as columns in the combined DataFrame.", "# Load the nine spreadsheets in a loop; each file has two columns,\n# which we label 'Taxon' (the index) and 'Count'.\nframes = []\nfor i in range(1, 10):\n    mb = pd.read_excel('Data/microbiome/MID{}.xls'.format(i), 'Sheet 1', index_col=0, header=None)\n    mb.columns = ['Count']\n    mb.index.name = 'Taxon'\n    frames.append(mb)\n\n# Attach the identifying information from the metadata spreadsheet\n# (barcode, group and sample type) as columns on each table.\ngroups = ['Extraction Control', 'NEC 1', 'Control 1', 'NEC 2', 'Control 2',\n          'NEC 1', 'Control 1', 'NEC 2', 'Control 2']\nsamples = ['NA', 'tissue', 'tissue', 'tissue', 'tissue',\n           'stool', 'stool', 'stool', 'stool']\nfor i, mb in enumerate(frames):\n    mb['Barcode'] = 'MID{}'.format(i + 1)\n    mb['Group'] = groups[i]\n    mb['Sample'] = samples[i]\n\n# Stack the nine tables into a single DataFrame\ndataframe = pd.concat(frames, axis=0)\ndataframe.tail()\n\ntype(dataframe)", "Reshaping DataFrame objects\nIn the context of a single DataFrame, we are often interested in re-arranging the layout of our data. \nThis dataset is from Table 6.9 of Statistical Methods for the Analysis of Repeated Measurements by Charles S. Davis, pp. 161-163 (Springer, 2002). These data are from a multicenter, randomized controlled trial of botulinum toxin type B (BotB) in patients with cervical dystonia from nine U.S. 
sites.\n\nRandomized to placebo (N=36), 5000 units of BotB (N=36), 10,000 units of BotB (N=37)\nResponse variable: total score on Toronto Western Spasmodic Torticollis Rating Scale (TWSTRS), measuring severity, pain, and disability of cervical dystonia (high scores mean more impairment)\nTWSTRS measured at baseline (week 0) and weeks 2, 4, 8, 12, 16 after treatment began", "cdystonia = pd.read_csv(\"Data/cdystonia.csv\", index_col=None)\ncdystonia.head()", "This dataset includes repeated measurements of the same individuals (longitudinal data). It's possible to present such information in (at least) two ways: showing each repeated measurement in their own row, or in multiple columns representing multiple measurements.\nThe stack method rotates the data frame so that columns are represented in rows:", "stacked = cdystonia.stack()\nstacked", "To complement this, unstack pivots from rows back to columns.", "stacked.unstack().head()", "For this dataset, it makes sense to create a hierarchical index based on the patient and observation:", "cdystonia2 = cdystonia.set_index(['patient','obs'])\ncdystonia2.head()\n\ncdystonia2.index.is_unique", "If we want to transform this data so that repeated measurements are in columns, we can unstack the twstrs measurements according to obs.", "twstrs_wide = cdystonia2['twstrs'].unstack('obs')\ntwstrs_wide.head()\n\ncdystonia_wide = (cdystonia[['patient','site','id','treat','age','sex']]\n .drop_duplicates()\n .merge(twstrs_wide, right_index=True, left_on='patient', how='inner'))\ncdystonia_wide.head()", "A slightly cleaner way of doing this is to set the patient-level information as an index before unstacking:", "(cdystonia.set_index(['patient','site','id','treat','age','sex','week'])['twstrs']\n .unstack('week').head())", "To convert our \"wide\" format back to long, we can use the melt function, appropriately parameterized. 
This function is useful for DataFrames where one\nor more columns are identifier variables (id_vars), with the remaining columns being measured variables (value_vars). The measured variables are \"unpivoted\" to\nthe row axis, leaving just two non-identifier columns, a variable and its corresponding value, which can both be renamed using optional arguments.", "pd.melt(cdystonia_wide, id_vars=['patient','site','id','treat','age','sex'], \n var_name='obs', value_name='twstrs').head()", "This illustrates the two formats for longitudinal data: long and wide formats. It's typically better to store data in long format because additional data can be included as additional rows in the database, while wide format requires that the entire database schema be altered by adding columns to every row as data are collected.\nThe preferable format for analysis depends entirely on what is planned for the data, so it is important to be able to move easily between them.\nPivoting\nThe pivot method allows a DataFrame to be transformed easily between long and wide formats in the same way as a pivot table is created in a spreadsheet. 
It takes three arguments: index, columns and values, corresponding to the DataFrame index (the row headers), columns and cell values, respectively.\nFor example, we may want the twstrs variable (the response variable) in wide format according to patient, as we saw with the unstacking method above:", "cdystonia.pivot(index='patient', columns='obs', values='twstrs').head()", "If we omit the values argument, we get a DataFrame with hierarchical columns, just as when we applied unstack to the hierarchically-indexed table:", "cdystonia.pivot('patient', 'obs')", "A related method, pivot_table, creates a spreadsheet-like table with a hierarchical index, and allows the values of the table to be populated using an arbitrary aggregation function.", "cdystonia.pivot_table(index=['site', 'treat'], columns='week', values='twstrs', \n aggfunc=max).head(20)", "For a simple cross-tabulation of group frequencies, the crosstab function (not a method) aggregates counts of data according to factors in rows and columns. The factors may be hierarchical if desired.", "pd.crosstab(cdystonia.sex, cdystonia.site)", "Data transformation\nThere are a slew of additional operations for DataFrames that we would collectively refer to as \"transformations\" which include tasks such as removing duplicate values, replacing values, and grouping values.\nDealing with duplicates\nWe can easily identify and remove duplicate values from DataFrame objects. For example, say we want to remove ships from our vessels dataset that have the same name:", "vessels.duplicated(subset='names')\n\nvessels.drop_duplicates(['names'])", "Value replacement\nFrequently, we get data columns that are encoded as strings that we wish to represent numerically for the purposes of including it in a quantitative analysis. 
For example, consider the treatment variable in the cervical dystonia dataset:", "cdystonia.treat.value_counts()", "A logical way to specify these numerically is to change them to integer values, perhaps using \"Placebo\" as a baseline value. If we create a dict with the original values as keys and the replacements as values, we can pass it to the map method to implement the changes.", "treatment_map = {'Placebo': 0, '5000U': 1, '10000U': 2}\n\ncdystonia['treatment'] = cdystonia.treat.map(treatment_map)\ncdystonia.treatment", "Alternately, if we simply want to replace particular values in a Series or DataFrame, we can use the replace method. \nAn example where replacement is useful is dealing with zeros in certain transformations. For example, if we try to take the log of a set of values:", "vals = pd.Series([float(i)**10 for i in range(10)])\nvals\n\nnp.log(vals)", "In such situations, we can replace the zero with a value so small that it makes no difference to the ensuing analysis. We can do this with replace.", "vals = vals.replace(0, 1e-6)\nnp.log(vals)", "We can also perform the same replacement that we used map for with replace:", "cdystonia2.treat.replace({'Placebo': 0, '5000U': 1, '10000U': 2})", "Inidcator variables\nFor some statistical analyses (e.g. regression models or analyses of variance), categorical or group variables need to be converted into columns of indicators--zeros and ones--to create a so-called design matrix. The Pandas function get_dummies (indicator variables are also known as dummy variables) makes this transformation straightforward.\nLet's consider the DataFrame containing the ships corresponding to the transit segments on the eastern seaboard. The type variable denotes the class of vessel; we can create a matrix of indicators for this. 
For simplicity, let's filter out the 5 most common types of ships:", "top5 = vessels.type.isin(vessels.type.value_counts().index[:5])\ntop5.head(10)\n\nvessels5 = vessels[top5]\n\npd.get_dummies(vessels5.type).head(10)", "Categorical Data\nPandas provides a convenient dtype for representing categorical (factor) data, called category. \nFor example, the treat column in the cervical dystonia dataset represents three treatment levels in a clinical trial, and is imported by default as an object type, since it is a mixture of string characters.", "cdystonia.treat.head()", "We can convert this to a category type either by the Categorical constructor, or casting the column using astype:", "pd.Categorical(cdystonia.treat)\n\ncdystonia['treat'] = cdystonia.treat.astype('category')\n\ncdystonia.treat.describe()", "By default the Categorical type represents an unordered categorical.", "cdystonia.treat.cat.categories", "However, an ordering can be imposed. The order is lexical by default, but will assume the order of the listed categories to be the desired order.", "cdystonia.treat.cat.categories = ['Placebo', '5000U', '10000U']\n\ncdystonia.treat.cat.as_ordered().head()", "The important difference between the category type and the object type is that category is represented by an underlying array of integers, which is then mapped to character labels.", "cdystonia.treat.cat.codes", "Notice that these are 8-bit integers, which are essentially single bytes of data, making memory usage lower.\nThere is also a performance benefit. 
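The memory claim is easy to check directly with memory_usage(deep=True) on a toy Series (hypothetical names, not the segments data):

```python
import pandas as pd

# Repeated string labels: the worst case for object dtype,
# the best case for category dtype
names = pd.Series(['Alpha', 'Beta', 'Gamma'] * 10000)

# Object dtype stores a full Python string per row; category stores
# one small integer code per row plus a single copy of each label
as_object = names.memory_usage(deep=True)
as_category = names.astype('category').memory_usage(deep=True)
print(as_object > as_category)  # True
```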
Consider an operation such as calculating the total segment lengths for each ship in the segments table (this is also a preview of pandas' groupby operation!):", "%time segments.groupby(segments.name).seg_length.sum().sort_values(ascending=False, inplace=False).head()\n\nsegments['name'] = segments.name.astype('category')\n\n%time segments.groupby(segments.name).seg_length.sum().sort_values(ascending=False, inplace=False).head()", "Hence, we get a considerable speedup simply by using the appropriate dtype for our data.\nDiscretization\nPandas' cut function can be used to group continuous or countable data into bins. Discretization is generally a very bad idea for statistical analysis, so use this function responsibly!\nLet's say we want to bin the ages of the cervical dystonia patients into a smaller number of groups:", "cdystonia.age.describe()", "Let's transform these data into decades, beginning with individuals in their 20's and ending with those in their 80's:", "pd.cut(cdystonia.age, [20,30,40,50,60,70,80,90])[:30]", "The parentheses indicate an open interval, meaning that the interval includes values up to but not including the endpoint, whereas the square bracket is a closed interval, where the endpoint is included in the interval. We can switch the closure to the left side by setting the right flag to False:", "pd.cut(cdystonia.age, [20,30,40,50,60,70,80,90], right=False)[:30]", "Since the data are now ordinal, rather than numeric, we can give them labels:", "pd.cut(cdystonia.age, [20,40,60,80,90], labels=['young','middle-aged','old','really old'])[:30]", "A related function qcut uses empirical quantiles to divide the data. 
If, for example, we want the quartiles -- (0-25%], (25-50%], (50-75%], (75-100%] -- we can just specify 4 intervals, which will be equally-spaced by default:", "pd.qcut(cdystonia.age, 4)[:30]", "Alternatively, one can specify custom quantiles to act as cut points:", "quantiles = pd.qcut(segments.seg_length, [0, 0.01, 0.05, 0.95, 0.99, 1])\nquantiles[:30]", "Note that you can easily combine discretization with the generation of indicator variables shown above:", "pd.get_dummies(quantiles).head(10)", "Permutation and sampling\nFor some data analysis tasks, such as simulation, we need to be able to randomly reorder our data, or draw random values from it. Calling NumPy's permutation function with the length of the sequence you want to permute generates an array with a permuted sequence of integers, which can be used to re-order the sequence.", "new_order = np.random.permutation(len(segments))\nnew_order[:30]", "Using this sequence as an argument to the take method results in a reordered DataFrame:", "segments.take(new_order).head()", "Compare this ordering with the original:", "segments.head()", "For random sampling, DataFrame and Series objects have a sample method that can be used to draw samples, with or without replacement:", "vessels.sample(n=10)\n\nvessels.sample(n=10, replace=True)", "Data aggregation and GroupBy operations\nOne of the most powerful features of Pandas is its GroupBy functionality. On occasion we may want to perform operations on groups of observations within a dataset. For example:\n\naggregation, such as computing the sum or mean of each group, which involves applying a function to each group and returning the aggregated results\nslicing the DataFrame into groups and then doing something with the resulting slices (e.g. 
plotting)\ngroup-wise transformation, such as standardization/normalization", "cdystonia_grouped = cdystonia.groupby(cdystonia.patient)", "This grouped dataset is hard to visualize", "cdystonia_grouped", "However, the grouping is only an intermediate step; for example, we may want to iterate over each of the patient groups:", "for patient, group in cdystonia_grouped:\n print('patient', patient)\n print('group', group)", "A common data analysis procedure is the split-apply-combine operation, which groups subsets of data together, applies a function to each of the groups, then recombines them into a new data table.\nFor example, we may want to aggregate our data with some function.\n\n<div align=\"right\">*(figure taken from \"Python for Data Analysis\", p.251)*</div>\n\nWe can aggregate in Pandas using the aggregate (or agg, for short) method:", "cdystonia_grouped.agg(np.mean).head()", "Notice that the treat and sex variables are not included in the aggregation. Since it does not make sense to aggregate string variables, these columns are simply ignored by the method.\nSome aggregation functions are so common that Pandas has a convenience method for them, such as mean:", "cdystonia_grouped.mean().head()", "The add_prefix and add_suffix methods can be used to give the columns of the resulting table labels that reflect the transformation:", "cdystonia_grouped.mean().add_suffix('_mean').head()\n\n# The median of the `twstrs` variable\ncdystonia_grouped['twstrs'].quantile(0.5)", "If we wish, we can easily aggregate according to multiple keys:", "cdystonia.groupby(['week','site']).mean().head()", "Alternately, we can transform the data, using a function of our choice with the transform method:", "normalize = lambda x: (x - x.mean())/x.std()\n\ncdystonia_grouped.transform(normalize).head()", "It is easy to do column selection within groupby operations, if we are only interested in split-apply-combine operations on a subset of columns:", 
"cdystonia_grouped['twstrs'].mean().head()\n\n# This gives the same result as a DataFrame\ncdystonia_grouped[['twstrs']].mean().head()", "If you simply want to divide your DataFrame into chunks for later use, it's easy to convert them into a dict so that they can be easily indexed out as needed:", "chunks = dict(list(cdystonia_grouped))\n\nchunks[4]", "By default, groupby groups by row, but we can specify the axis argument to change this. For example, we can group our columns by dtype this way:", "grouped_by_type = cdystonia.groupby(cdystonia.dtypes, axis=1)\n{g:grouped_by_type.get_group(g) for g in grouped_by_type.groups}", "It's also possible to group by one or more levels of a hierarchical index. Recall cdystonia2, which we created with a hierarchical index:", "cdystonia2.head(10)\n\ncdystonia2.groupby(level='obs', axis=0)['twstrs'].mean()", "Apply\nWe can generalize the split-apply-combine methodology by using the apply function. This allows us to invoke any function we wish on a grouped dataset and recombine them into a DataFrame.\nThe function below takes a DataFrame and a column name, sorts by the column, and takes the n largest values of that column. We can use this with apply to return the largest values from every group in a DataFrame in a single call.", "def top(df, column, n=5):\n return df.sort_values(by=column, ascending=False)[:n]", "To see this in action, consider the vessel transit segments dataset (which we merged with the vessel information to yield segments_merged). Say we wanted to return the 3 longest segments travelled by each ship:", "top3segments = segments_merged.groupby('mmsi').apply(top, column='seg_length', n=3)[['names', 'seg_length']]\ntop3segments.head(15)", "Notice that additional arguments for the applied function can be passed via apply after the function name. It assumes that the DataFrame is the first argument.\nRecall the microbiome data sets that we used previously for the concatenation example. 
Suppose that we wish to aggregate the data at a higher biological classification than genus. For example, we can identify samples down to class, which is the 3rd level of organization in each index.", "mb1.index[:3]", "Using the string methods split and join we can create an index that just uses the first three classifications: domain, phylum and class.", "class_index = mb1.index.map(lambda x: ' '.join(x.split(' ')[:3]))\n\nmb_class = mb1.copy()\nmb_class.index = class_index", "However, since there are multiple taxonomic units with the same class, our index is no longer unique:", "mb_class.head()", "We can re-establish a unique index by summing all rows with the same class, using groupby:", "mb_class.groupby(level=0).sum().head(10)", "Exercise 2\nLoad the dataset in titanic.xls. It contains data on all the passengers that travelled on the Titanic.", "from IPython.core.display import HTML\nHTML(filename='Data/titanic.html')\n\n#import titanic data file\ntitanic = pd.read_excel(\"Data/titanic.xls\", index_col=None)\ntitanic.head()\n\n# turn \"sex\" attribute into numerical attribute\n# 0 = male ; 1 = female\n\nsex_map = {'male': 0, 'female': 1}\ntitanic['sex'] = titanic.sex.map(sex_map)\ntitanic.head()\n\n\n# clean duplicate values\ntitanic_2 = titanic.drop_duplicates(['name'])\n\n# inspect attributes as categorical data\n# (note: pd.Categorical returns a new object, so these calls are exploratory only)\npd.Categorical(titanic_2.pclass)\npd.Categorical(titanic_2.survived)\npd.Categorical(titanic_2.sex)\npd.Categorical(titanic_2.age)\npd.Categorical(titanic_2.sibsp)\npd.Categorical(titanic_2.parch)\npd.Categorical(titanic_2.ticket)\npd.Categorical(titanic_2.fare)\npd.Categorical(titanic_2.cabin)\npd.Categorical(titanic_2.embarked)\npd.Categorical(titanic_2.boat)\npd.Categorical(titanic_2.body)\n\n\ntitanic_2\n\n# describe passenger class\npclasses = titanic_2.pclass.value_counts()\nclass1 = (pclasses[1]/1307)*100\nclass2 = (pclasses[2]/1307)*100\nclass3 = (pclasses[3]/1307)*100\nd = {'1st Class' : class1, '2nd Class' : class2, '3rd Class' : 
class3}\npd.Series(d)\n# 24% of passengers travelled 1st class, 21% travelled in 2nd class and 54% travelled in 3rd class\n\n# plot classes 1 = 1st 2 = 2nd and 3 = 3rd\npclasses.plot.pie() \n\n# describe passenger survival\nsurvivals = titanic_2.survived.value_counts()\nsurvived = (survivals[1]/1307)*100\nsurvived\n# 38.25% of passengers survived\n\n# plot survivals 0 = death & 1 = survival\nsurvivals.plot.pie() \n\n\n# describe passenger sex\nsex = titanic_2.sex.value_counts()\nsex\nmale_ratio = (sex[0]/1307)*100\nmale_ratio\n# results show that about 64% of passengers are male and 36% are female\n\n# plot gender distribution 0 = male & 1 = female\nsex.plot.pie() \n\n# calculate proportions of port of embarkation S = Southampton & C = Cherbourg & Q = Queenstown\nport = titanic_2.embarked.value_counts()\nS = (port['S']/1307)*100\nC = (port['C']/1307)*100\nQ = (port['Q']/1307)*100\nd = {'S' : S, 'C' : C, 'Q' : Q}\npd.Series(d)\n# 20.6% of passengers boarded in C, 9.4% boarded in Q and 69.7% boarded in S.\n\n# plot port of embarkation distribution\nport.plot.pie() \n\n# describe passenger age\n# assumption - dropping all NaN values and including values of estimated ages\ntitanic_2age = titanic_2.age.dropna()\ntitanic_2age.describe()\n# results show that mean age was 29.86 y.o. \n# min age was 0.16 y.o. and max was 80 y.o. 
\n# 25% of passengers under 21, 50% under 28, 75% under 39 y.o.\n\n# show distribution of ages on board\ntitanic_2age.plot.hist(bins=50) \n\n# describe passenger fare\n# assumption - dropping all NaN values \ntitanic_2fare = titanic_2.fare.dropna()\ntitanic_2fare.describe()\n# results show that mean fare was 33 \n# min fare was 0 and max was 512 \n# 25% of passengers paid under 7.9, 50% under 14.5, 75% under 31.27\n\n# show distribution of fares on board\ntitanic_2fare.plot.hist(bins=50) \n# majority of fares under 100 with few outliers \n\n# description of statistics on # of siblings and spouses on board\n# assumption - dropping all NaN values and include values which are 0\ntitanic_2sibsp = titanic_2.sibsp.dropna()\ntitanic_2sibsp.describe()\n# results show that mean # of sibsp was 0.49 siblings or spouses aboard \n# min number of siblings or spouses was 0 and max was 8 \n# 75% of passengers had less than 1 sibling or spouse aboard, indicating outliers above 1\n\n# show distribution of # of siblings and spouses on board\ntitanic_2sibsp.plot.hist(bins=50) \n\n\n# description of statistics on # of parents and children on board\n# assumption - dropping all NaN values and include values which are 0\ntitanic_2parch = titanic_2.parch.dropna()\ntitanic_2parch.describe()\n# results show that mean # of parch was 0.38 parents or children aboard \n# min number of parents or children was 0 and max was 9\n# 75% of passengers had 0 parents or children aboard, indicating many outliers in the data\n\n# show distribution of # of parents and children on board\ntitanic_2parch.plot.hist(bins=50) \n", "Women and children first?\n\nDescribe each attribute, both with basic statistics and plots. 
State clearly your assumptions and discuss your findings.\nUse the groupby method to calculate the proportion of passengers that survived by sex.\nCalculate the same proportion, but by class and sex.\nCreate age categories: children (under 14 years), adolescents (14-20), adult (21-64), and senior (65+), and calculate survival proportions by age category, class and sex.", "# Part 2\n# Using Groupby to find ratio of survival by sex\nsex_survival = titanic.groupby(titanic.survived).sex.value_counts()\nsex_survival\n\n\n# survivors gender profile calculation\nsurv_tot = sex_survival[1].sum() # calculate total number of survivors\nfem_surv = (sex_survival[1,1]/surv_tot)*100 # calculate proportion of female survivors\nmale_surv = (sex_survival[1,0]/surv_tot)*100 # calculate proportion of male survivors\nout2 = {'Male Survivors' : male_surv , 'Female Survivors' : fem_surv,} # display outputs simultaneously\npd.Series(out2)\n# 67.8% of survivors were female and 32.2% were male\n\n# Part 3\n# Using Groupby to find ratio of survival by sex and class\n# table outputs raw numbers, but not proportions\nsex_class = titanic_2.groupby(['survived','sex']).pclass.value_counts()\nsex_class\n\n\n# survivors gender + class profile calculation\ndata = pd.DataFrame(sex_class) # turn into data set\nsurv_tot = sex_class[1].sum() # calculate total number of survivors\ndata['proportion of survivors'] = (data/surv_tot)*100 # add column of proportion of survivors\n# this column refers to the percentage of people that survived / did not survive that belong to each category (e.g. 
percentage of non-survivors that were females in second class)\ndata.loc[1]\n# the table below only shows proportions of different categories of people among survivors\n\n# Part 4\n# Create Age Categories\n# Assumption: Dropped all NaNs\nage_group = pd.cut(titanic_2.age, [0,14,20,64,100], labels=['children','adolescents','adult','seniors']) # create age categories\ntitanic_2['age_group'] = age_group # add column of age group to main dataframe\nsex_class_age = titanic_2.groupby(['survived','sex', 'pclass']).age_group.value_counts() # find counts for different combinations of age group, sex and class\nsex_class_age\n\n# survivors gender + class + age group profile calculation\ndata = pd.DataFrame(sex_class_age) # turn into data set\nsurv_tot = sex_class_age[1].sum() # calculate total number of survivors\ndata['proportion of survivors'] = (data/surv_tot)*100 # add column of proportion\n# this column refers to the percentage of people that survived / did not survive that belong to each category (e.g. percentage of survivors that were senior males in first class)\ndata.loc[1]\n# the table below shows proportions of survivors belonging to different categories", "References\nPython for Data Analysis Wes McKinney" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
uber/pyro
tutorial/source/intro_part_i.ipynb
apache-2.0
[ "An Introduction to Models in Pyro\nThe basic unit of probabilistic programs is the stochastic function. \nThis is an arbitrary Python callable that combines two ingredients:\n\ndeterministic Python code; and\nprimitive stochastic functions that call a random number generator\n\nConcretely, a stochastic function can be any Python object with a __call__() method, like a function, a method, or a PyTorch nn.Module.\nThroughout the tutorials and documentation, we will often call stochastic functions models, since stochastic functions can be used to represent simplified or abstract descriptions of a process by which data are generated. Expressing models as stochastic functions means that models can be composed, reused, imported, and serialized just like regular Python callables.", "import torch\nimport pyro\n\npyro.set_rng_seed(101)", "Primitive Stochastic Functions\nPrimitive stochastic functions, or distributions, are an important class of stochastic functions for which we can explicitly compute the probability of the outputs given the inputs. As of PyTorch 0.4 and Pyro 0.2, Pyro uses PyTorch's distribution library. You can also create custom distributions using transforms.\nUsing primitive stochastic functions is easy. For example, to draw a sample x from the unit normal distribution $\\mathcal{N}(0,1)$ we do the following:", "loc = 0. # mean zero\nscale = 1. # unit variance\nnormal = torch.distributions.Normal(loc, scale) # create a normal distribution object\nx = normal.rsample() # draw a sample from N(0,1)\nprint(\"sample\", x)\nprint(\"log prob\", normal.log_prob(x)) # score the sample from N(0,1)", "Here, torch.distributions.Normal is an instance of the Distribution class that takes parameters and provides sample and score methods. 
Pyro's distribution library pyro.distributions is a thin wrapper around torch.distributions because we want to make use of PyTorch's fast tensor math and autograd capabilities during inference.\nA Simple Model\nAll probabilistic programs are built up by composing primitive stochastic functions and deterministic computation. Since we're ultimately interested in probabilistic programming because we want to model things in the real world, let's start with a model of something concrete. \nLet's suppose we have a bunch of data with daily mean temperatures and cloud cover. We want to reason about how temperature interacts with whether it was sunny or cloudy. A simple stochastic function that describes how that data might have been generated is given by:", "def weather():\n cloudy = torch.distributions.Bernoulli(0.3).sample()\n cloudy = 'cloudy' if cloudy.item() == 1.0 else 'sunny'\n mean_temp = {'cloudy': 55.0, 'sunny': 75.0}[cloudy]\n scale_temp = {'cloudy': 10.0, 'sunny': 15.0}[cloudy]\n temp = torch.distributions.Normal(mean_temp, scale_temp).rsample()\n return cloudy, temp.item()", "Let's go through this line-by-line. First, in line 2 we define a binary random variable 'cloudy', which is given by a draw from the Bernoulli distribution with a parameter of 0.3. Since the Bernoulli distribution returns 0s or 1s, in line 3 we convert the value cloudy to a string so that return values of weather are easier to parse. So according to this model 30% of the time it's cloudy and 70% of the time it's sunny.\nIn lines 4-5 we define the parameters we're going to use to sample the temperature in line 6. These parameters depend on the particular value of cloudy we sampled in line 2. For example, the mean temperature is 55 degrees (Fahrenheit) on cloudy days and 75 degrees on sunny days. Finally we return the two values cloudy and temp in line 7.\nHowever, weather is entirely independent of Pyro - it only calls PyTorch. 
We need to turn it into a Pyro program if we want to use this model for anything other than sampling fake data.\nThe pyro.sample Primitive\nTo turn weather into a Pyro program, we'll replace the torch.distributions with pyro.distributions and the .sample() and .rsample() calls with calls to pyro.sample, one of the core language primitives in Pyro. Using pyro.sample is as simple as calling a primitive stochastic function with one important difference:", "x = pyro.sample(\"my_sample\", pyro.distributions.Normal(loc, scale))\nprint(x)", "Just like a direct call to torch.distributions.Normal().rsample(), this returns a sample from the unit normal distribution. The crucial difference is that this sample is named. Pyro's backend uses these names to uniquely identify sample statements and change their behavior at runtime depending on how the enclosing stochastic function is being used. As we will see, this is how Pyro can implement the various manipulations that underlie inference algorithms.\nNow that we've introduced pyro.sample and pyro.distributions we can rewrite our simple model as a Pyro program:", "def weather():\n cloudy = pyro.sample('cloudy', pyro.distributions.Bernoulli(0.3))\n cloudy = 'cloudy' if cloudy.item() == 1.0 else 'sunny'\n mean_temp = {'cloudy': 55.0, 'sunny': 75.0}[cloudy]\n scale_temp = {'cloudy': 10.0, 'sunny': 15.0}[cloudy]\n temp = pyro.sample('temp', pyro.distributions.Normal(mean_temp, scale_temp))\n return cloudy, temp.item()\n\nfor _ in range(3):\n print(weather())", "Procedurally, weather() is still a non-deterministic Python callable that returns two random samples. Because the randomness is now invoked with pyro.sample, however, it is much more than that. In particular weather() specifies a joint probability distribution over two named random variables: cloudy and temp. As such, it defines a probabilistic model that we can reason about using the techniques of probability theory. 
For example we might ask: if I observe a temperature of 70 degrees, how likely is it to be cloudy? How to formulate and answer these kinds of questions will be the subject of the next tutorial.\nUniversality: Stochastic Recursion, Higher-order Stochastic Functions, and Random Control Flow\nWe've now seen how to define a simple model. Building off of it is easy. For example:", "def ice_cream_sales():\n cloudy, temp = weather()\n expected_sales = 200. if cloudy == 'sunny' and temp > 80.0 else 50.\n ice_cream = pyro.sample('ice_cream', pyro.distributions.Normal(expected_sales, 10.0))\n return ice_cream", "This kind of modularity, familiar to any programmer, is obviously very powerful. But is it powerful enough to encompass all the different kinds of models we'd like to express?\nIt turns out that because Pyro is embedded in Python, stochastic functions can contain arbitrarily complex deterministic Python and randomness can freely affect control flow. For example, we can construct recursive functions that terminate their recursion nondeterministically, provided we take care to pass pyro.sample unique sample names whenever it's called. For example we can define a geometric distribution that counts the number of failures until the first success like so:", "def geometric(p, t=None):\n if t is None:\n t = 0\n x = pyro.sample(\"x_{}\".format(t), pyro.distributions.Bernoulli(p))\n if x.item() == 1:\n return 0\n else:\n return 1 + geometric(p, t + 1)\n \nprint(geometric(0.5))", "Note that the names x_0, x_1, etc., in geometric() are generated dynamically and that different executions can have different numbers of named random variables. 
\nWe are also free to define stochastic functions that accept as input or produce as output other stochastic functions:", "def normal_product(loc, scale):\n z1 = pyro.sample(\"z1\", pyro.distributions.Normal(loc, scale))\n z2 = pyro.sample(\"z2\", pyro.distributions.Normal(loc, scale))\n y = z1 * z2\n return y\n\ndef make_normal_normal():\n mu_latent = pyro.sample(\"mu_latent\", pyro.distributions.Normal(0, 1))\n fn = lambda scale: normal_product(mu_latent, scale)\n return fn\n\nprint(make_normal_normal()(1.))", "Here make_normal_normal() is a stochastic function that takes one argument and which, upon execution, generates three named random variables.\nThe fact that Pyro supports arbitrary Python code like this&mdash;iteration, recursion, higher-order functions, etc.&mdash;in conjunction with random control flow means that Pyro stochastic functions are universal, i.e. they can be used to represent any computable probability distribution. As we will see in subsequent tutorials, this is incredibly powerful. \nIt is worth emphasizing that this is one reason why Pyro is built on top of PyTorch: dynamic computational graphs are an important ingredient in allowing for universal models that can benefit from GPU-accelerated tensor math.\nNext Steps\nWe've shown how we can use stochastic functions and primitive distributions to represent models in Pyro. In order to learn models from data and reason about them we need to be able to do inference. This is the subject of the next tutorial." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
KiranArun/A-Level_Maths
Trigonometry/Trigonometry.ipynb
mit
[ "Trigonometry", "import numpy as np\nimport matplotlib.pyplot as plt", "Contents\n\nSine, cosine and tangent\nMeasurements\nSmall angle approximation\nTrigonometric functions\nMore trigonometric functions\nIdentities\nCompound angles\n\n<a id='Sine_cosine_and_tangent'></a>\nSine, cosine and tangent\nSine:\n- $\\sin\\theta = \\frac{opp}{hyp}$\n- with triangle with angles A, B and C, and lines a, b and c opposite their respective angles\n- $\\frac{a}{\\sin A} = \\frac{b}{\\sin B} = \\frac{c}{\\sin C}$\nCosine:\n- $\\cos\\theta = \\frac{adj}{hyp}$\n- with triangle with angles A, B and C, and lines a, b and c opposite their respective angles\n- $a^2 = b^2 + c^2 - 2bc \\cos A$\n- $b^2 = a^2 + c^2 - 2ac \\cos B$\n- $c^2 = a^2 + b^2 - 2ab \\cos C$\n- $\\cos A = \\frac{b^2 + c^2 - a^2}{2bc}$\nTangent:\n- $\\tan\\theta = \\frac{opp}{adj}$\nArea of triangle:\n- $\\frac{1}{2}ab\\sin C$\n<a id='Measurements'></a>\nMeasurements\nRadians:\n- 1 radian = angle when the arc opposite the angle = r (the 2 points on the circumference)\n- since $c = 2\\pi r$, 1 circumference = 2 pi radians\nArc Length:\n- length, s, of arc on circumference with angle $\\theta$ in radians\n- $s = r\\theta$\nArea of Sector:\n- area, a, of sector with angle $\\theta$ in radians\n- $\\frac{1}{2} r^2\\theta$\n<a id='Small_angle_approximation'></a>\nSmall angle approximation\nwhen $\\theta \\approx 0$ (in radians)\nor $\\theta = \\lim_{\\theta\\to0}$\n$\\sin \\theta \\approx \\theta$\n$\\cos \\theta \\approx 1 - \\frac{\\theta^2}{2} \\approx 1$\n$\\tan \\theta \\approx \\theta$\n<a id='Trigonometric_functions'></a>\nTrigonometric functions\narcsin, arcos and arctan are the inverse (from length to angle in circle)\nDomains and ranges:\nsin:\n- $\\theta = \\mathbb{R}$\n- $-1 \\le \\sin\\theta \\le 1$\n- $-\\frac{\\pi}{2} \\le \\arcsin x \\le \\frac{\\pi}{2}$\ncos:\n- $\\theta = \\mathbb{R}$\n- $-1 \\le \\cos\\theta \\le 1$\n- $0 \\le \\arccos x \\le \\pi$\ntan:\n- $\\theta \\not= \\frac{\\pi}{2}, 
\\frac{3\\pi}{2} \\dots$\n- $\\tan$ range is undefined\n- $-\\pi \\le \\arctan x \\le \\pi$\nGraphing:", "fig, ax = plt.subplots(1, 3, figsize=(13,4))\n\nx = np.linspace(0, 2*np.pi, 30*np.pi).astype(np.float32)\n\nax[0].plot(x, np.sin(x), label='sin')\nax[1].plot(x, np.cos(x), label='cos')\nax[2].plot(x, np.tan(x), label='tan')\n\nax[0].plot(x, np.arcsin(np.sin(x)), label='arcsin')\nax[1].plot(x, np.arccos(np.cos(x)), label='arccos')\nax[2].plot(x, np.arctan(np.tan(x)), label='arctan')\n\nfor axes in ax:\n axes.grid(True)\n axes.legend()\n\nplt.show()", "<a id='More_trigonometric_functions'></a>\nMore trigonometric functions\nSecant:\n- $\\sec \\theta = \\frac{1}{\\cos \\theta}$\nCosecant:\n- $\\mathrm{cosec} \\theta = \\frac{1}{\\sin \\theta}$\nCotangent:\n- $\\cot \\theta = \\frac{1}{\\tan\\theta} = \\frac{\\cos\\theta}{\\sin\\theta}$\nGraphing:", "fig, ax = plt.subplots(1, 3, figsize=(13,4))\n\nx = np.linspace(0, 2*np.pi, 20*np.pi)\n\nax[0].plot(x, 1/np.cos(x), label='$sec$')\nax[1].plot(x, 1/np.sin(x), label='cosec')\nax[2].plot(x, np.cos(x)/np.sin(x), label='cot')\n\nfor axes in ax:\n axes.grid(True)\n axes.set_ylim([-20,20])\n \n axes.legend()\n\nplt.show()", "<a id='Identities'></a>\nIdentities\n$\\tan\\theta = \\frac{\\sin\\theta}{\\cos\\theta}$\n$\\sin^2\\theta + \\cos^2\\theta = 1$\n$\\sec^2\\theta = 1 + \\tan^2\\theta$\n$\\mathrm{cosec} \\theta = 1 + \\cot^2\\theta$\n<a id='Compound_angles'></a>\nCompound angles\nSin:\n- $\\sin(A+B) = \\sin A\\cos B + \\cos A\\sin B$\n- $\\sin(2A) = 2\\sin A\\cos A$\nCos:\n- $\\cos(A+B) = \\cos A\\cos B - \\sin A\\sin B$\n- $\\cos(2A) = \\cos^2A - 2\\sin^2B$\n$= 2\\cos^2x - 1$\n$= 1 - 2\\sin^2x$\nTan:\n- $\\tan(A+B) = \\frac{\\tan A + \\tan B}{1 - \\tan A\\tan B}$\n- $\\tan(2A) = \\frac{2\\tan A}{1 - \\tan^2A}$\n$r\\cos(\\theta+a)$\nuseful to reformat:\n$a\\cos \\theta + b\\sin \\theta = r\\cos(\\theta+a)$\n$r\\cos(\\theta + a) = r \\cos a \\cos \\theta - r \\sin a \\sin \\theta$\nso:\n$r \\cos a \\cos \\theta = a\\cos 
\\theta$\n$\therefore$ $r \cos a = a$\n$- r \sin a \sin \theta = b\sin \theta$\n$\therefore$ $r \sin a = -b$\nThen solve as simultaneous equations:\nsolving for a:\n$\frac{\sin a}{\cos a} = \frac{-b}{a}$\n$\tan a = -\frac{b}{a}$\nsolving for r:\n$r^2\cos^2a + r^2\sin^2a = a^2+b^2$\n$r^2 = a^2+b^2$" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
ThomasProctor/Slide-Rule-Data-Intensive
DataStory/When do people take cabs?.ipynb
mit
[ "Cars are for parties\nIf we look at the days of the week that people hire cabs - both yellow cabs and uber cabs - we find something a little bit unexpected. While rush hour traffic on weekdays might lead one to believe that weekdays would be the most common time to get a cab. However, we see below that the more cabs are hailed on Friday and Saturday - the days with weekend nights - then the other days of the week.\nWhile there is plenty of weekday usage of cabs, the spike on Friday and Saturday indicates that people like to hail cabs to go out on weekends the most.", "from wand.image import Image as WImage\nimg = WImage(filename='uberweekday.pdf')\nimg\n\nfrom wand.image import Image as WImage\nimg = WImage(filename='yellowcabweek.pdf')\nimg", "Uber is growing\nUnsurprisingly as it has just started up, uber is growing in NYC. I suspect that doing a regression on this data will show that it isn't quite growing exponentially, at least over the period studied.", "from wand.image import Image as WImage\nimg = WImage(filename='ubergrowth.pdf')\nimg", "Taxi rides, meanwhile, fluctuate surprisingly much throughout the year.", "from wand.image import Image as WImage\nimg = WImage(filename='yellowcabyear.pdf')\nimg" ]
[ "markdown", "code", "markdown", "code", "markdown", "code" ]
wmvanvliet/neuroscience_tutorials
mne-intro/beginner.ipynb
bsd-2-clause
[ "<img src=\"images/charmander.png\" alt=\"Beginner\" width=\"200\">\n<img src=\"images/coffee_machine.jpeg\" width=\"300\" style=\"float: right; margin-top: 30px\">\nBeginner level\nWelcome!\nOur goal in this module will be to let you have a taste of what programming is like.\nI'm going to assume that until now, all your interactions with a computer have been through a graphical user interface (GUI).\nThese interfaces completely revolutionized how we interact with computers by simplifying what is an enormously complex machine into an appliance that has only a few functions.\nEach function can be executed by pressing a button.\nLike it's a coffee machine.\nBut the computer you are talking to right now is not a coffee machine.\nOnce you drop the GUI mask and start interacting with the computer itself, its full potential becomes available to you!\nWe're going to have to start from scratch. So settle in. You've got a lot to learn about programming before we can get to data visualization.\nYour first ever program\nProgramming code is written in a programming language.\nYou'll have to learn this language in order to talk with the computer.\nDon't worry though, programming languages are designed to be very easy to learn.\nFor the purposes of this hands-on session, you only need to know a little of the vocabulary and grammar of a programming language called Python.\nThe text you are reading now is inside a \"cell\". This notebook contains both text cells like this one, and \"code cells\" in which you can write programming code and have the computer \"execute\" it.\nThe first thing I'm going to teach you is how to use the computer as a calculator. In Python, arithmetic works just as you would expect it to work. 
Try typing 1 + 1 in the cell below and press Ctrl + Enter (both keys on your keyboard at the same time) to execute the code.\n<div style=\"border: 3px solid #aaccff; margin: 10px 100px; padding: 10px\">\n <b>Note about colors</b>\n\nAs you type in programming code in the cell below, you will notice the computer will automatically color parts of the text. These colors are purely a visual aid for you, the programmer, and have no meaning in the programming language.\n</div>\n\nDid it work? Hopefully the computer answered back with \"2\". Phenomenal computer power at work ladies and gentlemen! In the Python language, +, -, *, / all do what you would expect them to do. For example, to compute the average of 5, 3, 7 and 10, we would write:\npython\n(5 + 3 + 7 + 10) / 4\njust like you would write it in math. Try it in the code cell above if you like.\nVariables\nThe computer ran your little program, reported the answer back to you, and by now has forgotten all about it.\nIf we wish for the computer to remember the result of a computation, so we may use it in another computation, we need to give it a name by using the = character. Try running the program below and you'll see what I mean:", "x = 1\ny = 2\nz = x + y\nz * 3", "In good mathematical tradition, I've named various things x, y and z and re-used them in various lines of the program. You are free to choose whatever names you like, but there are some rules. For example, you cannot name a result +, since that would be very confusing. Another important restriction is that names can not have spaces. For example, the answer is not a valid name, but the_answer is (notice I used an underscore \"_\" character instead of a space, which is a common thing that programmers do).\nNaming things is terribly important in programming, for it bestows meaning. When you get down to it, a computer is just a pile of carefully arranged sand with a rectangle of little lights. 
It takes a human to bestow meaning on the patterns of light and currents of electricity flowing through the machine. Do the numbers mean sheep to barter? Nuclear missiles to fire? Thoughts in a person's brain?\nYour turn, mighty human. Write an elegant program to solve the following math problem:\n\nJan has 5 coins and Mary has 3 coins.\nHow many coins do Jan and Mary have together?\n\nFunctions and modules\nFor our next program, let's do something more difficult and compute the sine of 2 (it should be around 0.9). We can't easily compute that with just +, -, *, and /.\nProgramming languages are inspired a lot by the language of mathematics. In math, we would write the sine of 2 as:\n$$\sin 2$$\nWhere \"$\sin$\" is a placeholder for \"use whatever method you want to compute the sine\". More formally, $\sin$ is called a function and 2 is an argument to this function.\nFunctions are central to programming. They are the verbs of the programming language. Once you've learned a few verbs you can say lots of things!\nIf numbers are your raw material, functions are the tools you use to manipulate them.\nThere are tons of functions available to you.\nSo many, that to keep track of them all, they are organized in different \"modules\".\nYou can think of a module as a toolbox that has a nice label on it so we know what is inside.\nThe sin function we want to use is kept inside the math module.\n<img src=\"images/toolbox.png\" alt=\"Functions are like tools. Modules are like toolboxes\">\nIn order for us to use the sin function, we must first \"import\" it from the math module.\nIn our metaphor, importing a function is like taking a tool from a toolbox and placing it on our workbench.\nIn the grammar of our programming language, you say:\npython\nfrom math import sin\nFrom that point on, the sin function is available for you to use. 
Computing the sine of two is then sin(2).\n<img src=\"images/function.png\" alt=\"sin(2)\" width=\"400\">\nI've filled in the program for you below. Try running it.", "from math import sin\nsin(2)", "Your turn. Write a program that computes the cosine of 5. The function to compute the cosine is called cos and can also be found inside the math module (remember to import it first!). You can use the cell below to write your program in:\nWe can combine functions with variables. Unfortunately, the answer to the cosine of 5 you have so cleverly computed above is already gone with the wind. You can \"assign\" the result of a function to a variable, just as we did above with numbers.\nFor example: here is how to assign the sine of 2 to a variable called my_result:", "my_result = sin(2)", "Try running the above code cell. It doesn't seem to work??!! Nothing happened!\nActually, the program did work, but the computer is keeping silent. As we have seen, the computer will tell you the result that was produced by the last line in your program, but only if you did not assign that result to a variable. It was a choice made by programmers that were of the opinion that their computers were being too chatty.\n<img src=\"images/printer.jpeg\" width=\"300\" style=\"float: right; margin-top: 30px\">\nExplicitly telling the computer to display something\nThere is a function that is not part of any module and is always available to you. This function will make the computer display things. It is called print and it writes text to the screen. Its name is a leftover from the times when computer monitors did not exist yet and the computer could only print things on paper. Funny how etymology applies to programming languages too.\nAnyway, here is an example of the print function in action:", "my_result = sin(2)\nprint(my_result)", "Ok, your turn. Write a program that assigns the cosine of 2 to the variable my_result. 
Use the print function to display the variable to check that it has changed.\nWorking with text\nHere is a little program that makes the computer say hello:", "print('hello')", "In the Python programming language, literal text needs to always be surrounded by ' quotation marks. Without quotation marks, the program above would try to display the contents of a variable named hello. Try running this example and you'll see what I mean:", "hello = 'Hello, world!'\nprint(hello)", "The print function can write multiple things at the same time by giving it more than one argument. To give multiple arguments, put a comma between them, like this:", "print('The man said:', hello, 'How are you?')", "As a child, I would write little programs like this:", "name = input('What is your name?')\nage = input('What is your age?')\ncolor = input('What is your favorite color?')\npet = input('What kind of pet would you like?')\njob = input('What do you want to be when you grow up?')\n\nprint('')\nprint('There once was a', pet, 'named', name)\nprint('Every day,', name, 'bought', age, 'bottles of lemonade.')\nprint('Until one day it turned', color, 'from drinking too much.')\nprint('It was rushed to the hospital by a', job)", "To familiarize yourself with manipulating text in a programming language, try modifying the program above to tell a story of your own.\nLoading some MEG data\nFinally! We got all the ingredients we need to use MNE-Python. Using functions, saving results to variables and re-using them. 
These concepts allow us to do pretty powerful things.\nWe're going to instruct the computer to \"load\" some MEG data.\nThe MEG data is currently stored as a file on the hard drive, called:\ndata/sample-raw.fif\nThe above \"file path\" may look weird to you if you are used to Windows.\nThe computer you are talking to is a Linux machine.\nPaths look a little different.\nTo \"load\" a file means to make it available as a variable so we can use it in our programming code.\nMNE-Python has a function that we can use to do this for us.\nIt's called read_raw_fif and it lives in the mne.io module.\nExecuting the cell below will import the function for you, so you can use it.\nImporting a function will make it available \"from now on\", also in future cells.", "from mne.io import read_raw_fif", "Now, everything you've learned so far needs to come together. I'm going to leave it up to you to use the function correctly.\nMake the computer load the MEG data and put it inside a variable called raw, by writing a single line of code.\nHere are some pointers to help you along:\n 1. The function you want to call is named: read_raw_fif. It has already been imported.\n 2. Call it with a single argument: the name of the file to load. Remember the example sin(2) \n 3. The name of the file to load is data/sample-raw.fif\n 4. The name of the file is text. Remember what you know about text and quotation ' marks!\n 5. Assign the result to a variable with the name raw using the = symbol.\nWrite your single line of code in the cell below:\nIf your code above was correct, executing the cell below will display some information about the data you've just loaded:", "print(raw)", "Visualizing the MEG data\nNow that the MEG data has been loaded into memory, we can look at it. The data is in its \"raw\" form, meaning it has come straight out of the scanner and nothing has been done to it yet. MNE-Python has a visualization tool for data in its raw form, called plot_raw. 
In programmer lingo, visualizing data points is referred to as \"plotting\".\nAgain, I'll write the code to import the plot_raw function for you. This time, I'm also going to add some housekeeping code that will instruct the computer to send the graphics to your browser.", "%matplotlib notebook\nprint('From now on, all graphics will be sent to your browser.')\n\nfrom mne.viz import plot_raw", "It's up to you to write the code to call the function and give the raw variable as an argument to this function. Ready? Go!\nIf you wrote the code correctly, you should be looking at a little interface that shows the data collected on all the MEG sensors. Try using the arrow keys and clicking in the figure to explore the data.\nContinue onward to the adept level!\nGreat job making it this far. You've taken your first steps along the path towards becoming a data analyst. Take a look at the clock. If we still have some time, I'd like to invite you to move on to the next level.\n<center>\n<a href=\"adept.ipynb\" alt=\"To adept level\"><img src=\"images/charmeleon.png\" width=\"200\"></a>\n<a href=\"adept.ipynb\">Move on to the adept level</a>\n</center>" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
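The mad-libs program in the notebook above reads its answers with input(), which makes it hard to run unattended. A hypothetical variant (not part of the original notebook) passes the answers in as function arguments instead, showing the same text-and-variables ideas in a testable form:

```python
# A non-interactive variant of the notebook's mad-libs program: the answers
# are function arguments instead of input() calls, so the story can be built
# without a human at the keyboard.
def tell_story(name, age, color, pet, job):
    # ' '.join mirrors how print() separates multiple arguments with spaces
    lines = [
        ' '.join(['There once was a', pet, 'named', name]),
        ' '.join(['Every day,', name, 'bought', age, 'bottles of lemonade.']),
        ' '.join(['Until one day it turned', color, 'from drinking too much.']),
        ' '.join(['It was rushed to the hospital by a', job]),
    ]
    return '\n'.join(lines)

print(tell_story('Rex', '3', 'yellow', 'dog', 'doctor'))
```

Because the story is returned as a string rather than printed piecemeal, it can be reused, saved to a file, or checked automatically.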
tjwei/HackNTU_Data_2017
Week03/00-Download-M06A.ipynb
mit
[ "Download ETC M06A data\n<a href=\"http://www.freeway.gov.tw/UserFiles/File/TIMCCC/TDCS%E4%BD%BF%E7%94%A8%E6%89%8B%E5%86%8A(tanfb)v3.0-1.pdf\">National Freeway Electronic Toll Collection Traffic Data Collection System (TDCS) User Manual</a>", "from urllib.request import urlopen, urlretrieve\nimport tqdm", "Basic data", "# URL of the historical data\ndata_baseurl=\"http://tisvcloud.freeway.gov.tw/history/TDCS/M06A/\"\n# Filename format of the compressed archives\nfilename_format=\"M06A_{year:04d}{month:02d}{day:02d}.tar.gz\".format\n# Path format of the csv files\ncsv_format = \"M06A/{year:04d}{month:02d}{day:02d}/{hour:02d}/TDCS_M06A_{year:04d}{month:02d}{day:02d}_{hour:02d}0000.csv\".format\n\n# Code for downloading files\n# If you have ipywidgets, you can replace tqdm.tqdm with tqdm.tqdm_notebook for a more notebook-friendly interface\n\n# Download req to a file\ndef download_req(req, filename):\n # Get the file length\n total = int(req.getheader(\"Content-Length\"))\n # tqdm settings\n tqdm_conf = dict(total=total, desc=filename, unit='B', unit_scale=True)\n # Open the tqdm progress bar and write the file\n with tqdm.tqdm(**tqdm_conf) as pbar:\n with open(filename,'wb') as f:\n # Read 8192 bytes at a time from req\n for data in iter(lambda: req.read(8192), b\"\"): \n # Write to the file and update the progress bar\n pbar.update(f.write(data))\n \ndef download_M06A(year, month, day):\n # Build the filename from the year, month and day\n filename = filename_format(year=year, month=month, day=day)\n # Open the URL with urlopen\n with urlopen(data_baseurl + filename) as req:\n download_req(req, filename)\n\n\ndownload_M06A(2016,12,18)\n\n# Alternatively, urlretrieve also works\n# The code below is adapted from a tqdm example\n# (note: use global, not nonlocal, since last_b lives at module scope)\nfilename = filename_format(year=2015, month=6, day=26)\nwith tqdm.tqdm(desc=filename, unit='B', unit_scale=True) as pbar:\n last_b = 0\n def tqdmhook(b, bsize, tsize):\n global last_b\n if tsize != -1:\n pbar.total = tsize\n pbar.update((b-last_b)*bsize)\n last_b = b\n urlretrieve(data_baseurl+filename, filename=filename, reporthook=tqdmhook)", "Going the other way, extracting the date from a filename, can be done with a regexp or with datetime", "import re\nm=re.match(\"M06A_(\\d{4})(\\d\\d)(\\d\\d).tar.gz\" ,\"M06A_20170103.tar.gz\")\nm.groups()\n\nimport datetime\ndatetime.datetime.strptime(\"M06A_20170103.tar.gz\", \"M06A_%Y%m%d.tar.gz\")", "Download all the compressed archives", "# Parse with BeautifulSoup4\nfrom bs4 import BeautifulSoup\n# Fetch the index page\nwith urlopen(data_baseurl) as req:\n data = req.read()\n# Parse the index page with BeautifulSoup\nsoup = BeautifulSoup(data, \"html.parser\")\n# Find all <a href=... tags\nfiles = set(x.attrs['href'] for x in soup.find_all('a') if 'href' in x.attrs)\n\n#files = set(x for x in files if x and x.endswith(\".tar.gz\") and x.startswith(\"M06A_\"))\n# Keep only hrefs that start with M06A_ and end with .tar.gz, and extract the year, month and day\nre_M06A_tgz=re.compile(\"M06A_(\\d{4})(\\d\\d)(\\d\\d).tar.gz\")\nfiles = (re_M06A_tgz.match(x) for x in files)\nfiles = [x.groups() for x in files if x]\nfiles[:10]\n\n# Combine the above to download all the data\nfor y,m,d in files:\n download_M06A(int(y), int(m), int(d))", "Repack the .tar.gz files as .tar.xz", "import glob\nimport lzma\nimport gzip\nimport os\nimport os.path\n\n# Create the output directory xz\nos.makedirs(\"xz\", exist_ok=True)\n\ndef repack(filename):\n # The original filename must end in gz\n assert filename.endswith(\"gz\")\n # File size, used for the progress bar\n length = os.path.getsize(filename)\n # Output filename (use the filename parameter, not the loop variable f)\n xzfn = os.path.join(\"xz/\", os.path.split(filename)[-1][:-2]+\"xz\")\n # Don't overwrite files that already exist\n if os.path.isfile(xzfn):\n print(\"skip\", filename)\n return\n # Open the files and the progress bar; the lzma preset can be set from 0 to 9\n with gzip.open(filename, 'r') as gzfile, \\\n lzma.open(xzfn, \"w\", preset=1) as xzfile, \\\n tqdm.tqdm(total=length, desc=filename, unit='B', unit_scale=True) as pbar:\n # Decompress data from the .gz\n for data in iter(lambda: gzfile.read(1024*1024), b\"\"):\n # Write data to the .xz\n xzfile.write(data)\n # Update pbar\n pbar.update(gzfile.fileobj.tell() - pbar.n)\n\n# Find the files and recompress them one by one\nfor f in glob.glob(\"M06A_201612*.gz\"):\n repack(f)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
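The M06A notebook above parses archive filenames with a regexp and builds hourly csv paths with a format string. A small helper can combine the two steps; this is an illustrative sketch, and the helper name `csv_paths_for` is not part of the original notebook (the format string is restated here so the sketch is self-contained):

```python
import datetime
import re

# Hourly csv path format used inside each daily M06A archive
csv_format = ("M06A/{year:04d}{month:02d}{day:02d}/{hour:02d}/"
              "TDCS_M06A_{year:04d}{month:02d}{day:02d}_{hour:02d}0000.csv").format

def csv_paths_for(archive_name):
    # Extract the date from a filename such as "M06A_20170103.tar.gz"
    m = re.match(r"M06A_(\d{4})(\d\d)(\d\d)\.tar\.gz", archive_name)
    if m is None:
        raise ValueError("not an M06A archive: " + archive_name)
    year, month, day = map(int, m.groups())
    # Round-trip through datetime.date to reject impossible dates early
    datetime.date(year, month, day)
    # One csv per hour of the day
    return [csv_format(year=year, month=month, day=day, hour=h)
            for h in range(24)]

paths = csv_paths_for("M06A_20170103.tar.gz")
print(paths[0])  # M06A/20170103/00/TDCS_M06A_20170103_000000.csv
```

Validating the date with `datetime.date` catches filenames like `M06A_20170230.tar.gz` that the regexp alone would accept.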
DJCordhose/ai
notebooks/tf2/rnn-add-example.ipynb
mit
[ "<a href=\"https://colab.research.google.com/github/DJCordhose/ai/blob/master/notebooks/tf2/rnn-add-example.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>\nAddition as a Sequence to Sequence Translation\nAdapted from https://github.com/keras-team/keras/blob/master/examples/addition_rnn.py", "!pip install -q tf-nightly-gpu-2.0-preview\n\nimport tensorflow as tf\nprint(tf.__version__)", "Step 1: Generate sample equations", "class CharacterTable(object):\n \"\"\"Given a set of characters:\n + Encode them to a one hot integer representation\n + Decode the one hot integer representation to their character output\n + Decode a vector of probabilities to their character output\n \"\"\"\n def __init__(self, chars):\n \"\"\"Initialize character table.\n\n # Arguments\n chars: Characters that can appear in the input.\n \"\"\"\n self.chars = sorted(set(chars))\n self.char_indices = dict((c, i) for i, c in enumerate(self.chars))\n self.indices_char = dict((i, c) for i, c in enumerate(self.chars))\n\n def encode(self, C, num_rows):\n \"\"\"One hot encode given string C.\n\n # Arguments\n num_rows: Number of rows in the returned one hot encoding. This is\n used to keep the # of rows for each data the same.\n \"\"\"\n x = np.zeros((num_rows, len(self.chars)))\n for i, c in enumerate(C):\n x[i, self.char_indices[c]] = 1\n return x\n\n def decode(self, x, calc_argmax=True):\n if calc_argmax:\n x = x.argmax(axis=-1)\n return ''.join(self.indices_char[x] for x in x)\n\n\nclass colors:\n ok = '\\033[92m'\n fail = '\\033[91m'\n close = '\\033[0m'\n\nimport numpy as np\n\n# Parameters for the model and dataset.\nTRAINING_SIZE = 50000\nDIGITS = 3\n# REVERSE = True\nREVERSE = False\n\n# Maximum length of input is 'int + int' (e.g., '345+678'). 
Maximum length of\n# int is DIGITS.\nMAXLEN = DIGITS + 1 + DIGITS\n\n# All the numbers, plus sign and space for padding.\nchars = '0123456789+ '\nctable = CharacterTable(chars)\n\nquestions = []\nexpected = []\nseen = set()\nprint('Generating data...')\nwhile len(questions) < TRAINING_SIZE:\n f = lambda: int(''.join(np.random.choice(list('0123456789'))\n for i in range(np.random.randint(1, DIGITS + 1))))\n a, b = f(), f()\n # Skip any addition questions we've already seen\n # Also skip any such that x+Y == Y+x (hence the sorting).\n key = tuple(sorted((a, b)))\n if key in seen:\n continue\n seen.add(key)\n # Pad the data with spaces such that it is always MAXLEN.\n q = '{}+{}'.format(a, b)\n query = q + ' ' * (MAXLEN - len(q))\n ans = str(a + b)\n # Answers can be of maximum size DIGITS + 1.\n ans += ' ' * (DIGITS + 1 - len(ans))\n if REVERSE:\n # Reverse the query, e.g., '12+345 ' becomes ' 543+21'. (Note the\n # space used for padding.)\n query = query[::-1]\n questions.append(query)\n expected.append(ans)\nprint('Total addition questions:', len(questions))\n\nquestions[0]\n\nexpected[0]\n\nprint('Vectorization...')\nx = np.zeros((len(questions), MAXLEN, len(chars)), dtype=np.bool)\ny = np.zeros((len(questions), DIGITS + 1, len(chars)), dtype=np.bool)\nfor i, sentence in enumerate(questions):\n x[i] = ctable.encode(sentence, MAXLEN)\nfor i, sentence in enumerate(expected):\n y[i] = ctable.encode(sentence, DIGITS + 1)\n\n\nlen(x[0])\n\nlen(questions[0])", "Input is encoded as one-hot, 7 digits times 12 possibilities", "x[0]", "Same for output, but at most 4 digits", "y[0]\n\n# Shuffle (x, y) in unison as the later parts of x will almost all be larger\n# digits.\nindices = np.arange(len(y))\nnp.random.shuffle(indices)\nx = x[indices]\ny = y[indices]", "Step 2: Training/Validation Split", "# Explicitly set apart 10% for validation data that we never train over.\nsplit_at = len(x) - len(x) // 10\n(x_train, x_val) = x[:split_at], x[split_at:]\n(y_train, y_val) = 
y[:split_at], y[split_at:]\n\nprint('Training Data:')\nprint(x_train.shape)\nprint(y_train.shape)\n\nprint('Validation Data:')\nprint(x_val.shape)\nprint(y_val.shape)", "Step 3: Create Model", "# input shape: 7 digits, each being 0-9, + or space (12 possibilities)\nMAXLEN, len(chars)\n\nfrom tensorflow.keras.models import Sequential\nfrom tensorflow.keras.layers import LSTM, GRU, SimpleRNN, Dense, RepeatVector\n\n# Try replacing LSTM, GRU, or SimpleRNN.\n# RNN = LSTM\nRNN = SimpleRNN # should be enough since we do not have long sequences and only local dependencies\n# RNN = GRU\nHIDDEN_SIZE = 128\nBATCH_SIZE = 128\n\nmodel = Sequential()\n# encoder \nmodel.add(RNN(units=HIDDEN_SIZE, input_shape=(MAXLEN, len(chars))))\n\n# latent space\nencoding_dim = 32\nmodel.add(Dense(units=encoding_dim, activation='relu', name=\"encoder\"))\n\n# decoder: have 4 temporal outputs one for each of the digits of the results\nmodel.add(RepeatVector(DIGITS + 1))\n\n# return_sequences=True tells it to keep all 4 temporal outputs, not only the final one (we need all four digits for the results)\nmodel.add(RNN(units=HIDDEN_SIZE, return_sequences=True))\n\nmodel.add(Dense(name='classifier', units=len(chars), activation='softmax'))\n\nmodel.compile(loss='categorical_crossentropy',\n optimizer='adam',\n metrics=['accuracy'])\nmodel.summary()", "Step 4: Train", "%%time\n\n# Train the model each generation and show predictions against the validation\n# dataset.\n\nmerged_losses = {\n \"loss\": [],\n \"val_loss\": [],\n \"accuracy\": [],\n \"val_accuracy\": [],\n \n}\n\nfor iteration in range(1, 50):\n print()\n print('-' * 50)\n print('Iteration', iteration)\n iteration_history = model.fit(x_train, y_train,\n batch_size=BATCH_SIZE,\n epochs=1,\n validation_data=(x_val, y_val))\n \n merged_losses[\"loss\"].append(iteration_history.history[\"loss\"])\n merged_losses[\"val_loss\"].append(iteration_history.history[\"val_loss\"])\n 
merged_losses[\"accuracy\"].append(iteration_history.history[\"accuracy\"])\n merged_losses[\"val_accuracy\"].append(iteration_history.history[\"val_accuracy\"])\n\n # Select 10 samples from the validation set at random so we can visualize\n # errors.\n for i in range(10):\n ind = np.random.randint(0, len(x_val))\n rowx, rowy = x_val[np.array([ind])], y_val[np.array([ind])]\n preds = model.predict_classes(rowx, verbose=0)\n q = ctable.decode(rowx[0])\n correct = ctable.decode(rowy[0])\n guess = ctable.decode(preds[0], calc_argmax=False)\n print('Q', q[::-1] if REVERSE else q, end=' ')\n print('T', correct, end=' ')\n if correct == guess:\n print(colors.ok + '☑' + colors.close, end=' ')\n else:\n print(colors.fail + '☒' + colors.close, end=' ')\n print(guess)\n\nimport matplotlib.pyplot as plt\n\nplt.ylabel('loss')\nplt.xlabel('epoch')\nplt.yscale('log')\n\nplt.plot(merged_losses['loss'])\nplt.plot(merged_losses['val_loss'])\n\nplt.legend(['loss', 'validation loss'])\n\nplt.ylabel('accuracy')\nplt.xlabel('epoch')\n# plt.yscale('log')\n\nplt.plot(merged_losses['accuracy'])\nplt.plot(merged_losses['val_accuracy'])\n\nplt.legend(['accuracy', 'validation accuracy'])" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
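The CharacterTable in the addition notebook one-hot encodes each equation character and decodes predictions with an argmax. For reference, the same scheme can be sketched without NumPy; this is an illustrative re-implementation of the idea, not the notebook's own code:

```python
# One-hot encode/decode over the addition notebook's vocabulary: digits,
# '+', and space (used for padding). Space sorts first, so an all-zero row
# decodes to padding via argmax.
chars = sorted(set('0123456789+ '))
char_indices = {c: i for i, c in enumerate(chars)}
indices_char = {i: c for i, c in enumerate(chars)}

def encode(text, num_rows):
    # num_rows pads every example to the same length, as in the notebook
    rows = [[0] * len(chars) for _ in range(num_rows)]
    for i, c in enumerate(text):
        rows[i][char_indices[c]] = 1
    return rows

def decode(rows):
    # Take the argmax of each row, mirroring x.argmax(axis=-1)
    return ''.join(indices_char[max(range(len(r)), key=r.__getitem__)]
                   for r in rows)

print(decode(encode('12+345 ', 7)))  # 12+345 
```

Each encoded example is a num_rows x 12 grid of 0s and 1s, which is exactly the `(MAXLEN, len(chars))` input shape the model's first RNN layer declares.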