code
stringlengths
2.5k
150k
kind
stringclasses
1 value
<h1> Preprocessing using Dataflow </h1> This notebook illustrates: <ol> <li> Creating datasets for Machine Learning using Dataflow </ol> <p> While Pandas is fine for experimenting, for operationalization of your workflow, it is better to do preprocessing in Apache Beam. This will also help if you need to preprocess data in flight, since Apache Beam also allows for streaming. ``` !sudo chown -R jupyter:jupyter /home/jupyter/training-data-analyst pip install --user apache-beam[gcp] ``` Run the command again if you are getting oauth2client error. Note: You may ignore the following responses in the cell output above: ERROR (in Red text) related to: witwidget-gpu, fairing WARNING (in Yellow text) related to: hdfscli, hdfscli-avro, pbr, fastavro, gen_client <b>Restart</b> the kernel before proceeding further. Make sure the Dataflow API is enabled by going to this [link](https://console.developers.google.com/apis/api/dataflow.googleapis.com). Ensure that you've installed Beam by importing it and printing the version number. ``` # Ensure the right version of Tensorflow is installed. !pip freeze | grep tensorflow==2.1 import apache_beam as beam print(beam.__version__) ``` You may receive a `UserWarning` about the Apache Beam SDK for Python 3 as not being yet fully supported. Don't worry about this. ``` # change these to try this notebook out BUCKET = 'cloud-training-demos-ml' PROJECT = 'cloud-training-demos' REGION = 'us-central1' import os os.environ['BUCKET'] = BUCKET os.environ['PROJECT'] = PROJECT os.environ['REGION'] = REGION %%bash if ! gsutil ls | grep -q gs://${BUCKET}/; then gsutil mb -l ${REGION} gs://${BUCKET} fi ``` <h2> Create ML dataset using Dataflow </h2> Let's use Cloud Dataflow to read in the BigQuery data, do some preprocessing, and write it out as CSV files. In this case, I want to do some preprocessing, modifying data so that we can simulate what is known if no ultrasound has been performed. Note that after you launch this, the actual processing is happening on the cloud. Go to the GCP webconsole to the Dataflow section and monitor the running job. It took about 20 minutes for me. <p> If you wish to continue without doing this step, you can copy my preprocessed output: <pre> gsutil -m cp -r gs://cloud-training-demos/babyweight/preproc gs://your-bucket/ </pre> But if you do this, you also have to use my TensorFlow model since yours might expect the fields in a different order ``` import datetime, os def to_csv(rowdict): import hashlib import copy # TODO #1: # Pull columns from BQ and create line(s) of CSV input CSV_COLUMNS = None # Create synthetic data where we assume that no ultrasound has been performed # and so we don't know sex of the baby. Let's assume that we can tell the difference # between single and multiple, but that the errors rates in determining exact number # is difficult in the absence of an ultrasound. no_ultrasound = copy.deepcopy(rowdict) w_ultrasound = copy.deepcopy(rowdict) no_ultrasound['is_male'] = 'Unknown' if rowdict['plurality'] > 1: no_ultrasound['plurality'] = 'Multiple(2+)' else: no_ultrasound['plurality'] = 'Single(1)' # Change the plurality column to strings w_ultrasound['plurality'] = ['Single(1)', 'Twins(2)', 'Triplets(3)', 'Quadruplets(4)', 'Quintuplets(5)'][rowdict['plurality'] - 1] # Write out two rows for each input row, one with ultrasound and one without for result in [no_ultrasound, w_ultrasound]: data = ','.join([str(result[k]) if k in result else 'None' for k in CSV_COLUMNS]) key = hashlib.sha224(data.encode('utf-8')).hexdigest() # hash the columns to form a key yield str('{},{}'.format(data, key)) def preprocess(in_test_mode): import shutil, os, subprocess job_name = 'preprocess-babyweight-features' + '-' + datetime.datetime.now().strftime('%y%m%d-%H%M%S') if in_test_mode: print('Launching local job ... hang on') OUTPUT_DIR = './preproc' shutil.rmtree(OUTPUT_DIR, ignore_errors=True) os.makedirs(OUTPUT_DIR) else: print('Launching Dataflow job {} ... hang on'.format(job_name)) OUTPUT_DIR = 'gs://{0}/babyweight/preproc/'.format(BUCKET) try: subprocess.check_call('gsutil -m rm -r {}'.format(OUTPUT_DIR).split()) except: pass options = { 'staging_location': os.path.join(OUTPUT_DIR, 'tmp', 'staging'), 'temp_location': os.path.join(OUTPUT_DIR, 'tmp'), 'job_name': job_name, 'project': PROJECT, 'region': REGION, 'teardown_policy': 'TEARDOWN_ALWAYS', 'no_save_main_session': True, 'num_workers': 4, 'max_num_workers': 5 } opts = beam.pipeline.PipelineOptions(flags = [], **options) if in_test_mode: RUNNER = 'DirectRunner' else: RUNNER = 'DataflowRunner' p = beam.Pipeline(RUNNER, options = opts) query = """ SELECT weight_pounds, is_male, mother_age, plurality, gestation_weeks, FARM_FINGERPRINT(CONCAT(CAST(YEAR AS STRING), CAST(month AS STRING))) AS hashmonth FROM publicdata.samples.natality WHERE year > 2000 AND weight_pounds > 0 AND mother_age > 0 AND plurality > 0 AND gestation_weeks > 0 AND month > 0 """ if in_test_mode: query = query + ' LIMIT 100' for step in ['train', 'eval']: if step == 'train': selquery = 'SELECT * FROM ({}) WHERE ABS(MOD(hashmonth, 4)) < 3'.format(query) else: selquery = 'SELECT * FROM ({}) WHERE ABS(MOD(hashmonth, 4)) = 3'.format(query) (p ## TODO Task #2: Modify the Apache Beam pipeline such that the first part of the pipe reads the data from BigQuery | '{}_read'.format(step) >> None | '{}_csv'.format(step) >> beam.FlatMap(to_csv) | '{}_out'.format(step) >> beam.io.Write(beam.io.WriteToText(os.path.join(OUTPUT_DIR, '{}.csv'.format(step)))) ) job = p.run() if in_test_mode: job.wait_until_finish() print("Done!") # TODO Task #3: Once you have verified that the files produced locally are correct, change in_test_mode to False # to execute this in Cloud Dataflow preprocess(in_test_mode = False) ``` The above step will take 20+ minutes. Go to the GCP web console, navigate to the Dataflow section and <b>wait for the job to finish</b> before you run the following step. ``` %%bash gsutil ls gs://${BUCKET}/babyweight/preproc/*-00000* ``` Copyright 2020 Google Inc. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License
github_jupyter
# Arcane Python syntax Python has a syntax with a few features that are quite unique. **General advice:** don't use any of this unless you feel comfortable with it, since mistakes can lead to bugs that are hard to track down. ## Interval checking In Python an expression such as `a < x <= b` is legal and well-defined. ``` a, b = 1, 3 for x in range(-2, 5): if a < x <= b: print(f'{a} < {x} <= {b}: True') else: print(f'{a} < {x} <= {b}: False') ``` The code above can be simplified to: ``` a, b = 1, 3 for x in range(-2, 5): print(f'{a} < {x:2d} <= {b}: {a < x <= b}') ``` ## Multiple equality, inequality Along the same lines, `a == b == x` is also legal and well-defined. ``` for a in range(3): for b in range(3): print(f'{a} == {b} == 1: {a == b == 1}') ``` Although `a != b != x` is legal as well, it may, at least at first sight, not behave as expected. ``` for a in range(3): for b in range(3): print(f'{a} != {b} != 1: {a != b != 1}') ``` From the above, it is clear that `a != b != x` translates to `a != b and b != c`, which is true when `a == c and a != b`. From a mathematical point of view, bear in mind that `==` is transitive, while `!=` is not. ## Iteration with `else` Iteration statements in Python, i.e., `for` and `while` can have an `else` block. The latter is executed when the iteration statement terminates normally, i.e., not by a `break` statement. ``` def illustrate_for_else(x): for i in range(10): if i == x: print('break') break else: print('no break') illustrate_for_else(12) illustrate_for_else(5) ``` Although naming this syntactic construct `else` feels awkward, it is quite useful, since it is a syntactic shortcut for the following reasonably common construct. ``` def illustrate_for_bool(x): completed_succesfully = True for i in range(10): if i == x: print('break') completed_succesfully = False break if completed_succesfully: print('no break') illustrate_for_bool(12) illustrate_for_bool(5) ``` The execution of `continue` has no influence on this. ``` for i in range(5): if i > -1: continue print(f'did something for {i}') else: print('completed normally') ``` The `while` statement can have an `else` with the same semantics. ## Logical shortcircuits Boolean operators can be used directly as control statements. This is familiar to experience Bash programmers, but leads to code that is somewhat hard to understand. ``` import sys output_file = None fh = output_file or sys.stdout print('hello', file=fh) ``` If the first operand to `or` is `False`, the value of the expression will be that of the second operand. The value `None` converts to Boolean `False`, hence the behavior above. However, if the first operand can be converted to `True`, the value of the expression is that of the first operand, a file handle in the example below. ``` output_file = open('remove_me.txt', 'w') fh = output_file or sys.stdout print('hello', file=fh) !cat remove_me.txt ``` The semantics of `and` expressions is similar. If the first operand converts to `True`, the expression will have the value of the second operand. If the first operand converts to `False`, that operand will be the value of the expression. ``` a_list = [] b_list = [3, 5, 7] my_list = a_list and b_list print(my_list) a_list = [3, 5, 7] b_list = [3, 5, 7, 9] my_list = a_list and b_list print(my_list) ```
github_jupyter
``` import os os.environ['CUDA_VISIBLE_DEVICES'] = '' import malaya_speech.train.model.conformer as conformer import malaya_speech.train.model.transducer as transducer import malaya_speech import tensorflow as tf import numpy as np import json from glob import glob subwords = malaya_speech.subword.load('transducer-mixed.subword') featurizer = malaya_speech.tf_featurization.STTFeaturizer( normalize_per_feature = True ) n_mels = 80 sr = 16000 maxlen = 18 minlen_text = 1 def mp3_to_wav(file, sr = sr): audio = AudioSegment.from_file(file) audio = audio.set_frame_rate(sr).set_channels(1) sample = np.array(audio.get_array_of_samples()) return malaya_speech.astype.int_to_float(sample), sr def generate(file): with open(file) as fopen: dataset = json.load(fopen) audios, cleaned_texts = dataset['X'], dataset['Y'] for i in range(len(audios)): try: if audios[i].endswith('.mp3'): # print('found mp3', audios[i]) wav_data, _ = mp3_to_wav(audios[i]) else: wav_data, _ = malaya_speech.load(audios[i], sr = sr) if (len(wav_data) / sr) > maxlen: # print(f'skipped audio too long {audios[i]}') continue if len(cleaned_texts[i]) < minlen_text: # print(f'skipped text too short {audios[i]}') continue t = malaya_speech.subword.encode( subwords, cleaned_texts[i], add_blank = False ) yield { 'waveforms': wav_data, 'targets': t, 'targets_length': [len(t)], } except Exception as e: print(e) def preprocess_inputs(example): s = featurizer.vectorize(example['waveforms']) mel_fbanks = tf.reshape(s, (-1, n_mels)) length = tf.cast(tf.shape(mel_fbanks)[0], tf.int32) length = tf.expand_dims(length, 0) example['inputs'] = mel_fbanks example['inputs_length'] = length example.pop('waveforms', None) return example def get_dataset( file, batch_size = 6, shuffle_size = 20, thread_count = 24, maxlen_feature = 1800, ): def get(): dataset = tf.data.Dataset.from_generator( generate, { 'waveforms': tf.float32, 'targets': tf.int32, 'targets_length': tf.int32, }, output_shapes = { 'waveforms': tf.TensorShape([None]), 'targets': tf.TensorShape([None]), 'targets_length': tf.TensorShape([None]), }, args = (file,), ) dataset = dataset.shuffle(shuffle_size) dataset = dataset.prefetch(tf.contrib.data.AUTOTUNE) dataset = dataset.map( preprocess_inputs, num_parallel_calls = thread_count ) dataset = dataset.padded_batch( batch_size, padded_shapes = { 'inputs': tf.TensorShape([None, n_mels]), 'inputs_length': tf.TensorShape([None]), 'targets': tf.TensorShape([None]), 'targets_length': tf.TensorShape([None]), }, padding_values = { 'inputs': tf.constant(0, dtype = tf.float32), 'inputs_length': tf.constant(0, dtype = tf.int32), 'targets': tf.constant(0, dtype = tf.int32), 'targets_length': tf.constant(0, dtype = tf.int32), }, ) return dataset return get dev_dataset = get_dataset('mixed-asr-test.json')() features = dev_dataset.make_one_shot_iterator().get_next() features training = True config = malaya_speech.config.conformer_base_encoder_config conformer_model = conformer.Model( kernel_regularizer = None, bias_regularizer = None, **config ) decoder_config = malaya_speech.config.conformer_base_decoder_config transducer_model = transducer.rnn.Model( conformer_model, vocabulary_size = subwords.vocab_size, **decoder_config ) targets_length = features['targets_length'][:, 0] v = tf.expand_dims(features['inputs'], -1) z = tf.zeros((tf.shape(features['targets'])[0], 1), dtype = tf.int32) c = tf.concat([z, features['targets']], axis = 1) logits = transducer_model([v, c, targets_length + 1], training = training) decoded = transducer_model.greedy_decoder(v, features['inputs_length'][:, 0], training = training) sess = tf.Session() sess.run(tf.global_variables_initializer()) var_list = tf.get_collection(tf.GraphKeys.GLOBAL_VARIABLES) saver = tf.train.Saver(var_list = var_list) saver.restore(sess, 'asr-base-conformer-transducer-mixed/model.ckpt-1000000') wer, cer = [], [] index = 0 while True: try: r = sess.run([decoded, features['targets']]) for no, row in enumerate(r[0]): d = malaya_speech.subword.decode(subwords, row[row > 0]) t = malaya_speech.subword.decode(subwords, r[1][no]) wer.append(malaya_speech.metrics.calculate_wer(t, d)) cer.append(malaya_speech.metrics.calculate_cer(t, d)) print(index) index += 1 except: break np.mean(wer), np.mean(cer) for no, row in enumerate(r[0]): d = malaya_speech.subword.decode(subwords, row[row > 0]) t = malaya_speech.subword.decode(subwords, r[1][no]) print(no, d) print(t) print() ```
github_jupyter
<a href="https://colab.research.google.com/github/lmoroney/dlaicourse/blob/master/Course%201%20-%20Part%202%20-%20Lesson%202%20-%20Notebook.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> ##### Copyright 2019 The TensorFlow Authors. ``` #@title Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. ``` # The Hello World of Deep Learning with Neural Networks Like every first app you should start with something super simple that shows the overall scaffolding for how your code works. In the case of creating neural networks, the sample I like to use is one where it learns the relationship between two numbers. So, for example, if you were writing code for a function like this, you already know the 'rules' — ``` float hw_function(float x){ float y = (2 * x) - 1; return y; } ``` So how would you train a neural network to do the equivalent task? Using data! By feeding it with a set of Xs, and a set of Ys, it should be able to figure out the relationship between them. This is obviously a very different paradigm than what you might be used to, so let's step through it piece by piece. ## Imports Let's start with our imports. Here we are importing TensorFlow and calling it tf for ease of use. We then import a library called numpy, which helps us to represent our data as lists easily and quickly. The framework for defining a neural network as a set of Sequential layers is called keras, so we import that too. ``` import tensorflow as tf import numpy as np from tensorflow import keras ``` ## Define and Compile the Neural Network Next we will create the simplest possible neural network. It has 1 layer, and that layer has 1 neuron, and the input shape to it is just 1 value. ``` xs = np.array([i**1 for i in range(50000,600000,50000)]) print(xs) model = tf.keras.Sequential([keras.layers.Dense(units=1, input_shape=[1])]) ``` Now we compile our Neural Network. When we do so, we have to specify 2 functions, a loss and an optimizer. If you've seen lots of math for machine learning, here's where it's usually used, but in this case it's nicely encapsulated in functions for you. But what happens here — let's explain... We know that in our function, the relationship between the numbers is y=2x-1. When the computer is trying to 'learn' that, it makes a guess...maybe y=10x+10. The LOSS function measures the guessed answers against the known correct answers and measures how well or how badly it did. It then uses the OPTIMIZER function to make another guess. Based on how the loss function went, it will try to minimize the loss. At that point maybe it will come up with somehting like y=5x+5, which, while still pretty bad, is closer to the correct result (i.e. the loss is lower) It will repeat this for the number of EPOCHS which you will see shortly. But first, here's how we tell it to use 'MEAN SQUARED ERROR' for the loss and 'STOCHASTIC GRADIENT DESCENT' for the optimizer. You don't need to understand the math for these yet, but you can see that they work! :) Over time you will learn the different and appropriate loss and optimizer functions for different scenarios. ``` model.compile(optimizer='sgd', loss='mean_squared_error') ``` ## Providing the Data Next up we'll feed in some data. In this case we are taking 6 xs and 6ys. You can see that the relationship between these is that y=2x-1, so where x = -1, y=-3 etc. etc. A python library called 'Numpy' provides lots of array type data structures that are a defacto standard way of doing it. We declare that we want to use these by specifying the values as an np.array[] ``` xs = np.array([-1.0, 0.0, 1.0, 2.0, 3.0, 4.0], dtype=float) ys = np.array([-3.0, -1.0, 1.0, 3.0, 5.0, 7.0], dtype=float) ``` # Training the Neural Network The process of training the neural network, where it 'learns' the relationship between the Xs and Ys is in the **model.fit** call. This is where it will go through the loop we spoke about above, making a guess, measuring how good or bad it is (aka the loss), using the opimizer to make another guess etc. It will do it for the number of epochs you specify. When you run this code, you'll see the loss on the right hand side. ``` model.fit(xs, ys, epochs=500) ``` Ok, now you have a model that has been trained to learn the relationship between X and Y. You can use the **model.predict** method to have it figure out the Y for a previously unknown X. So, for example, if X = 10, what do you think Y will be? Take a guess before you run this code: ``` print(model.predict([10.0])) ``` You might have thought 19, right? But it ended up being a little under. Why do you think that is? Remember that neural networks deal with probabilities, so given the data that we fed the NN with, it calculated that there is a very high probability that the relationship between X and Y is Y=2X-1, but with only 6 data points we can't know for sure. As a result, the result for 10 is very close to 19, but not necessarily 19. As you work with neural networks, you'll see this pattern recurring. You will almost always deal with probabilities, not certainties, and will do a little bit of coding to figure out what the result is based on the probabilities, particularly when it comes to classification.
github_jupyter
``` queries = [ """ INSERT INTO page_lookup_nonredirect SELECT page.page_id as redircet_id, page.page_title as redirect_title, page.page_title true_title, page.page_id, page.page_latest FROM page LEFT OUTER JOIN redirect ON page.page_id = redirect.rd_from WHERE redirect.rd_from IS NULL """, """ insert into page_lookup_redirect select original_page.page_id redirect_id, original_page.page_title redirect_title, final_page.page_title as true_title, final_page.page_id, final_page.page_latest from page final_page join redirect on (redirect.page_title = final_page.page_title) join page original_page on (redirect.rd_from = original_page.page_id) """, """ INSERT INTO page_lookup SELECT redirect_id, redirect_title, true_title, page_id, page_version FROM ( SELECT redirect_id, redirect_title, true_title, page_id, page_version FROM page_lookup_nonredirect UNION ALL SELECT redirect_id, redirect_title, true_title, page_id, page_version FROM page_lookup_redirect) u """, """ INSERT INTO filtered_pagecounts SELECT regexp_replace (reflect ('java.net.URLDecoder','decode', reflect ('java.net.URLDecoder','decode',pvs.page_title)),'^\s*([a-zA-Z0-9]+).*','$1') page_title ,SUM (pvs.views) AS total_views, SUM (pvs.bytes_sent) AS total_bytes_sent FROM pagecounts as pvs WHERE not pvs.page_title LIKE '(MEDIA|SPECIAL||Talk|User|User_talk|Project|Project_talk|File|File_talk|MediaWiki|MediaWiki_talk|Template|Template_talk|Help|Help_talk|Category|Category_talk|Portal|Wikipedia|Wikipedia_talk|upload|Special)\:(.*)' and pvs.page_title LIKE '^([A-Z])(.*)' and not pvs.page_title LIKE '(.*).(jpg|gif|png|JPG|GIF|PNG|txt|ico)$' and pvs.page_title <> '404_error/' and pvs.page_title <> 'Main_Page' and pvs.page_title <> 'Hypertext_Transfer_Protocol' and pvs.page_title <> 'Favicon.ico' and pvs.page_title <> 'Search' and pvs.dt = '2020-01-01' GROUP BY regexp_replace (reflect ('java.net.URLDecoder','decode', reflect ('java.net.URLDecoder','decode',pvs.page_title)),'^\s*([a-zA-Z0-9]+).*','$1') """, """ INSERT INTO normalized_pagecounts SELECT pl.page_id page_id, REGEXP_REPLACE(pl.true_title, '_', ' ') page_title, pl.true_title page_url, views, bytes_sent FROM page_lookup pl JOIN filtered_pagecounts fp ON fp.page_title = pl.redirect_title where fp.dt='2020-01-01' """ ] from data_lineage.catalog.query import Query from data_lineage.data_lineage import parse, get_dml_queries, create_graph graph = create_graph(get_dml_queries(parse([Query(q) for q in queries]))) fig = graph.fig() import plotly plotly.offline.iplot(fig) sub_graph = graph.sub_graph((None, 'page_lookup')) sub_fig = sub_graph.fig() plotly.offline.iplot(sub_fig) ```
github_jupyter
STAT 453: Deep Learning (Spring 2021) Instructor: Sebastian Raschka (sraschka@wisc.edu) Course website: http://pages.stat.wisc.edu/~sraschka/teaching/stat453-ss2021/ GitHub repository: https://github.com/rasbt/stat453-deep-learning-ss21 ``` %load_ext watermark %watermark -a 'Sebastian Raschka' -v -p torch ``` - Runs on CPU or GPU (if available) # Softmax Regression on MNIST Implementation of softmax regression (multinomial logistic regression). ## Imports ``` import time from torchvision import datasets from torchvision import transforms from torch.utils.data import DataLoader import torch.nn.functional as F import torch ``` ## Settings and Dataset ``` ########################## ### SETTINGS ########################## # Device device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu") # Hyperparameters random_seed = 123 learning_rate = 0.1 num_epochs = 25 batch_size = 256 # Architecture num_features = 784 num_classes = 10 ########################## ### MNIST DATASET ########################## train_dataset = datasets.MNIST(root='data', train=True, transform=transforms.ToTensor(), download=True) test_dataset = datasets.MNIST(root='data', train=False, transform=transforms.ToTensor()) train_loader = DataLoader(dataset=train_dataset, batch_size=batch_size, shuffle=True) test_loader = DataLoader(dataset=test_dataset, batch_size=batch_size, shuffle=False) # Checking the dataset for images, labels in train_loader: print('Image batch dimensions:', images.shape) #NCHW print('Image label dimensions:', labels.shape) break ########################## ### MODEL ########################## class SoftmaxRegression(torch.nn.Module): def __init__(self, num_features, num_classes): super(SoftmaxRegression, self).__init__() self.linear = torch.nn.Linear(num_features, num_classes) # print(self.linear.weight.shape) # print(self.linear.bias.shape) self.linear.weight.detach().zero_() self.linear.bias.detach().zero_() def forward(self, x): logits = self.linear(x) probas = F.softmax(logits, dim=1) return logits, probas model = SoftmaxRegression(num_features=num_features, num_classes=num_classes) model.to(device) ########################## ### COST AND OPTIMIZER ########################## optimizer = torch.optim.SGD(model.parameters(), lr=learning_rate) # Manual seed for deterministic data loader torch.manual_seed(random_seed) def compute_accuracy(model, data_loader): correct_pred, num_examples = 0, 0 for features, targets in data_loader: features = features.view(-1, 28*28).to(device) targets = targets.to(device) logits, probas = model(features) _, predicted_labels = torch.max(probas, 1) num_examples += targets.size(0) correct_pred += (predicted_labels == targets).sum() return correct_pred.float() / num_examples * 100 start_time = time.time() epoch_costs = [] for epoch in range(num_epochs): avg_cost = 0. for batch_idx, (features, targets) in enumerate(train_loader): features = features.view(-1, 28*28).to(device) # print("features.shape: ", features.shape) targets = targets.to(device) ### FORWARD AND BACK PROP logits, probas = model(features) # note that the PyTorch implementation of # CrossEntropyLoss works with logits, not # probabilities cost = F.cross_entropy(logits, targets) optimizer.zero_grad() cost.backward() avg_cost += cost ### UPDATE MODEL PARAMETERS optimizer.step() ### LOGGING if not batch_idx % 50: print ('Epoch: %03d/%03d | Batch %03d/%03d | Cost: %.4f' %(epoch+1, num_epochs, batch_idx, len(train_dataset)//batch_size, cost)) with torch.set_grad_enabled(False): avg_cost = avg_cost/len(train_dataset) epoch_costs.append(avg_cost) print('Epoch: %03d/%03d training accuracy: %.2f%%' % ( epoch+1, num_epochs, compute_accuracy(model, train_loader))) print('Time elapsed: %.2f min' % ((time.time() - start_time)/60)) %matplotlib inline import matplotlib import matplotlib.pyplot as plt plt.plot(epoch_costs) plt.ylabel('Avg Cross Entropy Loss\n(approximated by averaging over minibatches)') plt.xlabel('Epoch') plt.show() print('Test accuracy: %.2f%%' % (compute_accuracy(model, test_loader))) for features, targets in test_loader: break fig, ax = plt.subplots(1, 4) for i in range(4): ax[i].imshow(features[i].view(28, 28), cmap=matplotlib.cm.binary) plt.show() _, predictions = model.forward(features[:4].view(-1, 28*28).to(device)) predictions = torch.argmax(predictions, dim=1) print('Predicted labels', predictions) ```
github_jupyter
<center><img alt="" src="images/Cover_EDA.jpg"/></center> ## <center><font color="blue">EDA-04: Unsupervised Learning - Clustering Bagian ke-02</font></center> <h2 style="text-align: center;">(C) Taufik Sutanto - 2020</h2> <h2 style="text-align: center;">tau-data Indonesia ~ <a href="https://tau-data.id/eda-04/" target="_blank"><span style="color: #0009ff;">https://tau-data.id/eda-04/</span></a></h2> ``` # Run this cell ONLY if this notebook run from Google Colab # Kalau dijalankan lokal (Anaconda/WinPython) maka silahkan install di terminal/command prompt # Lalu unduh secara manual file yang dibutuhkan dan letakkan di folder Python anda. !pip install --upgrade umap-learn !wget https://raw.githubusercontent.com/taudata-indonesia/eLearning/master/tau_unsup.py # Importing Modules untuk Notebook ini import warnings; warnings.simplefilter('ignore') import time, umap, numpy as np, tau_unsup as tau, matplotlib.pyplot as plt, pandas as pd, seaborn as sns from matplotlib.colors import ListedColormap from sklearn import cluster, datasets from sklearn.metrics.pairwise import pairwise_distances_argmin from sklearn.preprocessing import StandardScaler from itertools import cycle, islice from sklearn.metrics import silhouette_score as siluet from sklearn.metrics.cluster import homogeneity_score as purity from sklearn.metrics import normalized_mutual_info_score as NMI sns.set(style="ticks", color_codes=True) random_state = 99 ``` # Review ## EDA-03 * Pendahuluan Unsupervised Learning * k-Means, k-Means++, MiniBatch k-Means * internal & External Evaluation * Parameter Tunning ## EDA-04 * Hierarchical Clustering * Spectral Clustering * DBScan * Clustering Evaluation Revisited ## Linkages Comparisons * single linkage is fast, and can perform well on non-globular data, but it performs poorly in the presence of noise. * average and complete linkage perform well on cleanly separated globular clusters, but have mixed results otherwise. * Ward is the most effective method for noisy data. * http://scikit-learn.org/stable/auto_examples/cluster/plot_linkage_comparison.html#sphx-glr-auto-examples-cluster-plot-linkage-comparison-py ``` tau.compare_linkages() ``` ### Pros * No assumption of a particular number of clusters (i.e. k-means) * May correspond to meaningful taxonomies ### Cons * Once a decision is made to combine two clusters, it can’t be undone * Too slow for large data sets, O(𝑛2 log(𝑛)) ``` # Kita akan menggunakan data yang sama dengan EDA-03 df = sns.load_dataset("iris") X = df[['sepal_length','sepal_width','petal_length','petal_width']].values C = df['species'].values print(X.shape) df.head() # Hierarchical http://scikit-learn.org/stable/modules/generated/sklearn.cluster.AgglomerativeClustering.html#sklearn.cluster.AgglomerativeClustering hierarchical = cluster.AgglomerativeClustering(n_clusters=3, linkage='average', affinity = 'euclidean') hierarchical.fit(X) # Lambat .... dan menggunakan banyak memori O(N^2 log(N)) C_h = hierarchical.labels_.astype(np.int) C_h[:10] # Dendogram Example # http://seaborn.pydata.org/generated/seaborn.clustermap.html g = sns.clustermap(X, method="single", metric="euclidean") # Scatter Plot of the hierarchical clustering results X2D = umap.UMAP(n_neighbors=5, min_dist=0.3, random_state=random_state).fit_transform(X) fig, ax = plt.subplots() ax.scatter(X2D[:,0], X2D[:,1], c=C_h) plt.show() ``` # Evaluasi Hierarchical Clustering? * Silhoutte Coefficient, Dunn index, or Davies–Bouldin index * Domain knowledge - interpretability * External Evaluation ### Read more here: https://www.ims.uni-stuttgart.de/document/team/schulte/theses/phd/algorithm.pdf ``` # Spectral : http://scikit-learn.org/stable/modules/generated/sklearn.cluster.SpectralClustering.html spectral = cluster.SpectralClustering(n_clusters=3) spectral.fit(X) C_spec = spectral.labels_.astype(np.int) sns.countplot(C_spec) C_spec[:10] fig, ax = plt.subplots() ax.scatter(X2D[:,0], X2D[:,1], c=C_spec) plt.show() # DBSCAN http://scikit-learn.org/stable/modules/generated/sklearn.cluster.DBSCAN.html # tidak membutuhkan input parameter k!!!... sangat bermanfaat untuk clustering data yang besar dbscan = cluster.DBSCAN(eps=0.8, min_samples=5, metric='euclidean') dbscan.fit(X) C_db = dbscan.labels_.astype(np.int) sns.countplot(C_db) C_db[:10] # apa makna cluster label -1? sum([1 for i in C_db if i==-1]) fig, ax = plt.subplots() ax.scatter(X2D[:,0], X2D[:,1], c=C_db) plt.show() try: # Should work in Google Colab !wget https://raw.githubusercontent.com/christopherjenness/DBCV/master/DBCV/DBCV.py except: pass # Download manually on windows import dbcv dbcv.DBCV(X, C_db) ``` ## Pelajari Studi Kasus Berikut (Customer Segmentation): ## http://www.data-mania.com/blog/customer-profiling-and-segmentation-in-python/
github_jupyter
<h1 align="center">TensorFlow Deep Neural Network Lab</h1> <img src="image/notmnist.png"> In this lab, you'll use all the tools you learned from the *Deep Neural Networks* lesson to label images of English letters! The data you are using, <a href="http://yaroslavvb.blogspot.com/2011/09/notmnist-dataset.html">notMNIST</a>, consists of images of a letter from A to J in differents font. The above images are a few examples of the data you'll be training on. After training the network, you will compare your prediction model against test data. While there is no predefined goal for this lab, we would like you to experiment and discuss with fellow students on what can improve such models to achieve the highest possible accuracy values. To start this lab, you first need to import all the necessary modules. Run the code below. If it runs successfully, it will print "`All modules imported`". ``` import hashlib import os import pickle from urllib.request import urlretrieve import numpy as np from PIL import Image from sklearn.model_selection import train_test_split from sklearn.preprocessing import LabelBinarizer from sklearn.utils import resample from tqdm import tqdm from zipfile import ZipFile print('All modules imported.') ``` The notMNIST dataset is too large for many computers to handle. It contains 500,000 images for just training. You'll be using a subset of this data, 15,000 images for each label (A-J). ``` def download(url, file): """ Download file from <url> :param url: URL to file :param file: Local file path """ if not os.path.isfile(file): print('Downloading ' + file + '...') urlretrieve(url, file) print('Download Finished') # Download the training and test dataset. download('https://s3.amazonaws.com/udacity-sdc/notMNIST_train.zip', 'notMNIST_train.zip') download('https://s3.amazonaws.com/udacity-sdc/notMNIST_test.zip', 'notMNIST_test.zip') # Make sure the files aren't corrupted assert hashlib.md5(open('notMNIST_train.zip', 'rb').read()).hexdigest() == 'c8673b3f28f489e9cdf3a3d74e2ac8fa',\ 'notMNIST_train.zip file is corrupted. Remove the file and try again.' assert hashlib.md5(open('notMNIST_test.zip', 'rb').read()).hexdigest() == '5d3c7e653e63471c88df796156a9dfa9',\ 'notMNIST_test.zip file is corrupted. Remove the file and try again.' # Wait until you see that all files have been downloaded. print('All files downloaded.') def uncompress_features_labels(file): """ Uncompress features and labels from a zip file :param file: The zip file to extract the data from """ features = [] labels = [] with ZipFile(file) as zipf: # Progress Bar filenames_pbar = tqdm(zipf.namelist(), unit='files') # Get features and labels from all files for filename in filenames_pbar: # Check if the file is a directory if not filename.endswith('/'): with zipf.open(filename) as image_file: image = Image.open(image_file) image.load() # Load image data as 1 dimensional array # We're using float32 to save on memory space feature = np.array(image, dtype=np.float32).flatten() # Get the the letter from the filename. This is the letter of the image. label = os.path.split(filename)[1][0] features.append(feature) labels.append(label) return np.array(features), np.array(labels) # Get the features and labels from the zip files train_features, train_labels = uncompress_features_labels('notMNIST_train.zip') test_features, test_labels = uncompress_features_labels('notMNIST_test.zip') # Limit the amount of data to work with size_limit = 150000 train_features, train_labels = resample(train_features, train_labels, n_samples=size_limit) # Set flags for feature engineering. This will prevent you from skipping an important step. is_features_normal = False is_labels_encod = False # Wait until you see that all features and labels have been uncompressed. print('All features and labels uncompressed.') ``` <img src="image/mean_variance.png" style="height: 75%;width: 75%; position: relative; right: 5%"> ## Problem 1 The first problem involves normalizing the features for your training and test data. Implement Min-Max scaling in the `normalize()` function to a range of `a=0.1` and `b=0.9`. After scaling, the values of the pixels in the input data should range from 0.1 to 0.9. Since the raw notMNIST image data is in [grayscale](https://en.wikipedia.org/wiki/Grayscale), the current values range from a min of 0 to a max of 255. Min-Max Scaling: $ X'=a+{\frac {\left(X-X_{\min }\right)\left(b-a\right)}{X_{\max }-X_{\min }}} $ ``` # Problem 1 - Implement Min-Max scaling for grayscale image data def normalize_grayscale(image_data): """ Normalize the image data with Min-Max scaling to a range of [0.1, 0.9] :param image_data: The image data to be normalized :return: Normalized image data """ # TODO: Implement Min-Max scaling for grayscale image data gray_min = 0 gray_max = 255 a = 0.1 b = 0.9 return a + ((image_data - gray_min) * (b - a)) / (gray_max - gray_min) ### DON'T MODIFY ANYTHING BELOW ### # Test Cases np.testing.assert_array_almost_equal( normalize_grayscale(np.array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 255])), [0.1, 0.103137254902, 0.106274509804, 0.109411764706, 0.112549019608, 0.11568627451, 0.118823529412, 0.121960784314, 0.125098039216, 0.128235294118, 0.13137254902, 0.9], decimal=3) np.testing.assert_array_almost_equal( normalize_grayscale(np.array([0, 1, 10, 20, 30, 40, 233, 244, 254,255])), [0.1, 0.103137254902, 0.13137254902, 0.162745098039, 0.194117647059, 0.225490196078, 0.830980392157, 0.865490196078, 0.896862745098, 0.9]) if not is_features_normal: train_features = normalize_grayscale(train_features) test_features = normalize_grayscale(test_features) is_features_normal = True print('Tests Passed!') if not is_labels_encod: # Turn labels into numbers and apply One-Hot Encoding encoder = LabelBinarizer() encoder.fit(train_labels) train_labels = encoder.transform(train_labels) test_labels = encoder.transform(test_labels) # Change to float32, so it can be multiplied against the features in TensorFlow, which are float32 train_labels = train_labels.astype(np.float32) test_labels = test_labels.astype(np.float32) is_labels_encod = True print('Labels One-Hot Encoded') assert is_features_normal, 'You skipped the step to normalize the features' assert is_labels_encod, 'You skipped the step to One-Hot Encode the labels' # Get randomized datasets for training and validation train_features, valid_features, train_labels, valid_labels = train_test_split( train_features, train_labels, test_size=0.05, random_state=832289) print('Training features and labels randomized and split.') # Save the data for easy access pickle_file = 'notMNIST.pickle' if not os.path.isfile(pickle_file): print('Saving data to pickle file...') try: with open('notMNIST.pickle', 'wb') as pfile: pickle.dump( { 'train_dataset': train_features, 'train_labels': train_labels, 'valid_dataset': valid_features, 'valid_labels': valid_labels, 'test_dataset': test_features, 'test_labels': test_labels, }, pfile, pickle.HIGHEST_PROTOCOL) except Exception as e: print('Unable to save data to', pickle_file, ':', e) raise print('Data cached in pickle file.') ``` # Checkpoint All your progress is now saved to the pickle file. If you need to leave and comeback to this lab, you no longer have to start from the beginning. Just run the code block below and it will load all the data and modules required to proceed. ``` %matplotlib inline # Load the modules import pickle import math import numpy as np import tensorflow as tf from tqdm import tqdm import matplotlib.pyplot as plt # Reload the data pickle_file = 'notMNIST.pickle' with open(pickle_file, 'rb') as f: pickle_data = pickle.load(f) train_features = pickle_data['train_dataset'] train_labels = pickle_data['train_labels'] valid_features = pickle_data['valid_dataset'] valid_labels = pickle_data['valid_labels'] test_features = pickle_data['test_dataset'] test_labels = pickle_data['test_labels'] del pickle_data # Free up memory print('Data and modules loaded.') ``` <img src="image/weight_biases.png" style="height: 60%;width: 60%; position: relative; right: 10%"> ## Problem 2 For the neural network to train on your data, you need the following <a href="https://www.tensorflow.org/resources/dims_types.html#data-types">float32</a> tensors: - `features` - Placeholder tensor for feature data (`train_features`/`valid_features`/`test_features`) - `labels` - Placeholder tensor for label data (`train_labels`/`valid_labels`/`test_labels`) - `keep_prob` - Placeholder tensor for dropout's keep probability value - `weights` - List of Variable Tensors with random numbers from a truncated normal distribution for each list index. - See <a href="https://www.tensorflow.org/api_docs/python/constant_op.html#truncated_normal">`tf.truncated_normal()` documentation</a> for help. - `biases` - List of Variable Tensors with all zeros for each list index. - See <a href="https://www.tensorflow.org/api_docs/python/constant_op.html#zeros"> `tf.zeros()` documentation</a> for help. ``` features_count = 784 labels_count = 10 # TODO: Set the hidden layer width. You can try different widths for different layers and experiment. hidden_layer_width = 64 # TODO: Set the features, labels, and keep_prob tensors features = tf.placeholder(tf.float32) labels = tf.placeholder(tf.float32) keep_prob = tf.placeholder(tf.float32) # TODO: Set the list of weights and biases tensors based on number of layers weights = [ tf.Variable(tf.random_normal([features_count, hidden_layer_width])), tf.Variable(tf.random_normal([hidden_layer_width, labels_count]))] biases = [ tf.Variable(tf.random_normal([hidden_layer_width])), tf.Variable(tf.random_normal([labels_count]))] ### DON'T MODIFY ANYTHING BELOW ### from tensorflow.python.ops.variables import Variable assert features._op.name.startswith('Placeholder'), 'features must be a placeholder' assert labels._op.name.startswith('Placeholder'), 'labels must be a placeholder' assert all(isinstance(weight, Variable) for weight in weights), 'weights must be a TensorFlow variable' assert all(isinstance(bias, Variable) for bias in biases), 'biases must be a TensorFlow variable' assert features._shape == None or (\ features._shape.dims[0].value is None and\ features._shape.dims[1].value in [None, 784]), 'The shape of features is incorrect' assert labels._shape == None or (\ labels._shape.dims[0].value is None and\ labels._shape.dims[1].value in [None, 10]), 'The shape of labels is incorrect' assert features._dtype == tf.float32, 'features must be type float32' assert labels._dtype == tf.float32, 'labels must be type float32' ``` ## Problem 3 This problem would help you implement the hidden and output layers of your model. As it was covered in the classroom, you will need the following: - [tf.add](https://www.tensorflow.org/api_docs/python/tf/add) and [tf.matmul](https://www.tensorflow.org/api_docs/python/tf/matmul) to create your hidden and output(logits) layers. - [tf.nn.relu](https://www.tensorflow.org/api_docs/python/tf/nn/relu) for your ReLU activation function. - [tf.nn.dropout](https://www.tensorflow.org/api_docs/python/tf/nn/dropout) for your dropout layer. ``` # TODO: Hidden Layers with ReLU Activation and dropouts. "features" would be the input to the first layer. hidden_layer_1 = tf.add(tf.matmul(features, weights[0]), biases[0]) hidden_layer_1 = tf.nn.relu(hidden_layer_1) hidden_layer_1 = tf.nn.dropout(hidden_layer_1, keep_prob) # hidden_layer_2 = tf.add(tf.matmul(hidden_layer_1, weights[0]), biases[0]) # hidden_layer_2 = tf.nn.relu(hidden_layer_2) # hidden_layer_2 = tf.nn.dropout(hidden_layer_2, keep_prob) # TODO: Output layer logits = tf.add(tf.matmul(hidden_layer_1, weights[1]), biases[1]) ### DON'T MODIFY ANYTHING BELOW ### prediction = tf.nn.softmax(logits) # Training loss loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=labels)) # Create an operation that initializes all variables init = tf.global_variables_initializer() # Determine if the predictions are correct is_correct_prediction = tf.equal(tf.argmax(prediction, 1), tf.argmax(labels, 1)) # Calculate the accuracy of the predictions accuracy = tf.reduce_mean(tf.cast(is_correct_prediction, tf.float32)) print('Accuracy function created.') ``` <img src="image/learn_rate_tune.png" style="height: 60%;width: 60%"> ## Problem 4 In the previous lab for a single Neural Network, you attempted several different configurations for the hyperparameters given below. Try to first use the same parameters as the previous lab, and then adjust and finetune those values based on your new model if required. You have another hyperparameter to tune now, however. Set the value for keep_probability and observe how it affects your results. ``` # TODO: Find the best parameters for each configuration epochs = 10 batch_size = 64 learning_rate = 0.01 keep_probability = 0.5 ### DON'T MODIFY ANYTHING BELOW ### # Gradient Descent optimizer = tf.train.GradientDescentOptimizer(learning_rate).minimize(loss) # The accuracy measured against the validation set validation_accuracy = 0.0 # Measurements use for graphing loss and accuracy log_batch_step = 50 batches = [] loss_batch = [] train_acc_batch = [] valid_acc_batch = [] with tf.Session() as session: session.run(init) batch_count = int(math.ceil(len(train_features)/batch_size)) for epoch_i in range(epochs): # Progress bar batches_pbar = tqdm(range(batch_count), desc='Epoch {:>2}/{}'.format(epoch_i+1, epochs), unit='batches') # The training cycle for batch_i in batches_pbar: # Get a batch of training features and labels batch_start = batch_i*batch_size batch_features = train_features[batch_start:batch_start + batch_size] batch_labels = train_labels[batch_start:batch_start + batch_size] # Run optimizer and get loss _, l = session.run( [optimizer, loss], feed_dict={features: batch_features, labels: batch_labels, keep_prob: keep_probability}) # Log every 50 batches if not batch_i % log_batch_step: # Calculate Training and Validation accuracy training_accuracy = session.run(accuracy, feed_dict={features: train_features, labels: train_labels, keep_prob: keep_probability}) validation_accuracy = session.run(accuracy, feed_dict={features: valid_features, labels: valid_labels, keep_prob: 1.0}) # Log batches previous_batch = batches[-1] if batches else 0 batches.append(log_batch_step + previous_batch) loss_batch.append(l) train_acc_batch.append(training_accuracy) valid_acc_batch.append(validation_accuracy) # Check accuracy against Validation data validation_accuracy = session.run(accuracy, feed_dict={features: valid_features, labels: valid_labels, keep_prob: 1.0}) loss_plot = plt.subplot(211) loss_plot.set_title('Loss') loss_plot.plot(batches, loss_batch, 'g') loss_plot.set_xlim([batches[0], batches[-1]]) acc_plot = plt.subplot(212) acc_plot.set_title('Accuracy') acc_plot.plot(batches, train_acc_batch, 'r', label='Training Accuracy') acc_plot.plot(batches, valid_acc_batch, 'x', label='Validation Accuracy') acc_plot.set_ylim([0, 1.0]) acc_plot.set_xlim([batches[0], batches[-1]]) acc_plot.legend(loc=4) plt.tight_layout() plt.show() print('Validation accuracy at {}'.format(validation_accuracy)) ``` ## Test Set the epochs, batch_size, and learning_rate with the best learning parameters you discovered in problem 4. You're going to test your model against your hold out dataset/testing data. This will give you a good indicator of how well the model will do in the real world. ``` # TODO: Set the epochs, batch_size, and learning_rate with the best parameters from problem 4 epochs = 10 batch_size = 64 learning_rate = .01 ### DON'T MODIFY ANYTHING BELOW ### # The accuracy measured against the test set test_accuracy = 0.0 with tf.Session() as session: session.run(init) batch_count = int(math.ceil(len(train_features)/batch_size)) for epoch_i in range(epochs): # Progress bar batches_pbar = tqdm(range(batch_count), desc='Epoch {:>2}/{}'.format(epoch_i+1, epochs), unit='batches') # The training cycle for batch_i in batches_pbar: # Get a batch of training features and labels batch_start = batch_i*batch_size batch_features = train_features[batch_start:batch_start + batch_size] batch_labels = train_labels[batch_start:batch_start + batch_size] # Run optimizer _ = session.run(optimizer, feed_dict={features: batch_features, labels: batch_labels, keep_prob: 1.0}) # Check accuracy against Test data test_accuracy = session.run(accuracy, feed_dict={features: test_features, labels: test_labels, keep_prob: 1.0}) print('Nice Job! Test Accuracy is {}'.format(test_accuracy)) ```
github_jupyter
##### Copyright 2019 The TensorFlow Authors. ``` #@title Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. ``` # tf.distribute.Strategy with Training Loops <table class="tfo-notebook-buttons" align="left"> <td> <a target="_blank" href="https://www.tensorflow.org/alpha/tutorials/distribute/training_loops"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a> </td> <td> <a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs/blob/master/site/en/r2/tutorials/distribute/training_loops.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a> </td> <td> <a target="_blank" href="https://github.com/tensorflow/docs/blob/master/site/en/r2/tutorials/distribute/training_loops.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a> </td> </table> This tutorial demonstrates how to use [`tf.distribute.Strategy`](https://www.tensorflow.org/guide/distribute_strategy) with custom training loops. We will train a simple CNN model on the fashion MNIST dataset. The fashion MNIST dataset contains 60000 train images of size 28 x 28 and 10000 test images of size 28 x 28. We are using custom training loops to train our model because they give us flexibility and a greater control on training. Moreover, it is easier to debug the model and the training loop. ``` from __future__ import absolute_import, division, print_function, unicode_literals # Import TensorFlow import tensorflow as tf # Helper libraries import numpy as np import os print(tf.__version__) ``` ## Download the fashion mnist dataset ``` fashion_mnist = tf.keras.datasets.fashion_mnist (train_images, train_labels), (test_images, test_labels) = fashion_mnist.load_data() # Adding a dimension to the array -> new shape == (28, 28, 1) # We are doing this because the first layer in our model is a convolutional # layer and it requires a 4D input (batch_size, height, width, channels). # batch_size dimension will be added later on. train_images = train_images[..., None] test_images = test_images[..., None] # Getting the images in [0, 1] range. train_images = train_images / np.float32(255) test_images = test_images / np.float32(255) train_labels = train_labels.astype('int64') test_labels = test_labels.astype('int64') ``` ## Create a strategy to distribute the variables and the graph How does `tf.distribute.MirroredStrategy` strategy work? * All the variables and the model graph is replicated on the replicas. * Input is evenly distributed across the replicas. * Each replica calculates the loss and gradients for the input it received. * The gradients are synced across all the replicas by summing them. * After the sync, the same update is made to the copies of the variables on each replica. Note: You can put all the code below inside a single scope. We are dividing it into several code cells for illustration purposes. ``` # If the list of devices is not specified in the # `tf.distribute.MirroredStrategy` constructor, it will be auto-detected. strategy = tf.distribute.MirroredStrategy() print ('Number of devices: {}'.format(strategy.num_replicas_in_sync)) ``` ## Setup input pipeline If a model is trained on multiple GPUs, the batch size should be increased accordingly so as to make effective use of the extra computing power. Moreover, the learning rate should be tuned accordingly. ``` BUFFER_SIZE = len(train_images) BATCH_SIZE_PER_REPLICA = 64 BATCH_SIZE = BATCH_SIZE_PER_REPLICA * strategy.num_replicas_in_sync EPOCHS = 10 ``` `strategy.make_dataset_iterator` creates an iterator that evenly distributes the data across all the replicas. Note: This API is expected to change in the near future. ``` with strategy.scope(): train_dataset = tf.data.Dataset.from_tensor_slices( (train_images, train_labels)).shuffle(BUFFER_SIZE).batch(BATCH_SIZE) train_iterator = strategy.make_dataset_iterator(train_dataset) test_dataset = tf.data.Dataset.from_tensor_slices( (test_images, test_labels)).batch(BATCH_SIZE) test_iterator = strategy.make_dataset_iterator(test_dataset) ``` ## Model Creation Create a model using `tf.keras.Sequential`. You can also use the Model Subclassing API to do this. ``` with strategy.scope(): model = tf.keras.Sequential([ tf.keras.layers.Conv2D(32, 3, activation='relu', input_shape=(28, 28, 1)), tf.keras.layers.MaxPooling2D(), tf.keras.layers.Conv2D(64, 3, activation='relu'), tf.keras.layers.MaxPooling2D(), tf.keras.layers.Flatten(), tf.keras.layers.Dense(64, activation='relu'), tf.keras.layers.Dense(10, activation='softmax') ]) optimizer = tf.train.GradientDescentOptimizer(0.001) ``` ## Define the loss function Normally, on a single machine with 1 GPU/CPU, loss is divided by the number of examples in the batch of input. *So, how is the loss calculated when using a `tf.distribute.Strategy`?* > For an example, let's say you have 4 GPU's and a batch size of 64. One batch of input is distributed across the replicas (4 GPUs), each replica getting an input of size 16. > The model on each replica does a forward pass with its respective input and calculates the loss. Now, instead of dividing the loss by the number of examples in its respective input (16), the loss is divided by the global input size (64). *Why is this done?* > This is done because after the gradients are calculated on each replica, they are synced across the replicas by **summing** them. *How to handle this in TensorFlow?* > `tf.keras.losses` handles this automatically. > If you distribute a custom loss function, don't implement it using `tf.reduce_mean` (which divides by the local batch size), divide the sum by the global batch size: ```python scale_loss = tf.reduce_sum(loss) * (1. / global_batch_size) ``` ## Training loop ``` with strategy.scope(): def train_step(): def step_fn(inputs): images, labels = inputs logits = model(images) cross_entropy = tf.nn.sparse_softmax_cross_entropy_with_logits( logits=logits, labels=labels) loss = tf.reduce_sum(cross_entropy) * (1.0 / BATCH_SIZE) train_op = optimizer.minimize(loss) with tf.control_dependencies([train_op]): return tf.identity(loss) per_replica_losses = strategy.experimental_run( step_fn, train_iterator) mean_loss = strategy.reduce( tf.distribute.ReduceOp.SUM, per_replica_losses, axis=None) return mean_loss with strategy.scope(): iterator_init = train_iterator.initialize() var_init = tf.global_variables_initializer() loss = train_step() with tf.Session() as sess: sess.run([var_init]) for epoch in range(EPOCHS): sess.run([iterator_init]) for step in range(10000): if step % 1000 == 0: print('Epoch {} Step {} Loss {:.4f}'.format(epoch+1, step, sess.run(loss))) ``` ## Next Steps Try out the new `tf.distribute.Strategy` API on your models. ``` ```
github_jupyter
# Day and Night Image Classifier --- The day/night image dataset consists of 200 RGB color images in two categories: day and night. There are equal numbers of each example: 100 day images and 100 night images. We'd like to build a classifier that can accurately label these images as day or night, and that relies on finding distinguishing features between the two types of images! *Note: All images come from the [AMOS dataset](http://cs.uky.edu/~jacobs/datasets/amos/) (Archive of Many Outdoor Scenes).* ### Import resources Before you get started on the project code, import the libraries and resources that you'll need. ``` import cv2 # computer vision library import helpers import numpy as np import matplotlib.pyplot as plt import matplotlib.image as mpimg %matplotlib inline ``` ## Training and Testing Data The 200 day/night images are separated into training and testing datasets. * 60% of these images are training images, for you to use as you create a classifier. * 40% are test images, which will be used to test the accuracy of your classifier. First, we set some variables to keep track of some where our images are stored: image_dir_training: the directory where our training image data is stored image_dir_test: the directory where our test image data is stored ``` # Image data directories image_dir_training = "day_night_images/training/" image_dir_test = "day_night_images/test/" ``` ## Load the datasets These first few lines of code will load the training day/night images and store all of them in a variable, `IMAGE_LIST`. This list contains the images and their associated label ("day" or "night"). For example, the first image-label pair in `IMAGE_LIST` can be accessed by index: ``` IMAGE_LIST[0][:]```. ``` # Using the load_dataset function in helpers.py # Load training data IMAGE_LIST = helpers.load_dataset(image_dir_training) ``` ## Construct a `STANDARDIZED_LIST` of input images and output labels. This function takes in a list of image-label pairs and outputs a **standardized** list of resized images and numerical labels. ``` # Standardize all training images STANDARDIZED_LIST = helpers.standardize(IMAGE_LIST) ``` ## Visualize the standardized data Display a standardized image from STANDARDIZED_LIST. ``` # Display a standardized image and its label # Select an image by index image_num = 0 selected_image = STANDARDIZED_LIST[image_num][0] selected_label = STANDARDIZED_LIST[image_num][1] # Display image and data about it plt.imshow(selected_image) print("Shape: "+str(selected_image.shape)) print("Label [1 = day, 0 = night]: " + str(selected_label)) ``` # Feature Extraction Create a feature that represents the brightness in an image. We'll be extracting the **average brightness** using HSV colorspace. Specifically, we'll use the V channel (a measure of brightness), add up the pixel values in the V channel, then divide that sum by the area of the image to get the average Value of the image. --- ### Find the average brightness using the V channel This function takes in a **standardized** RGB image and returns a feature (a single value) that represent the average level of brightness in the image. We'll use this value to classify the image as day or night. ``` # Find the average Value or brightness of an image def avg_brightness(rgb_image): # Convert image to HSV hsv = cv2.cvtColor(rgb_image, cv2.COLOR_RGB2HSV) # Add up all the pixel values in the V channel sum_brightness = np.sum(hsv[:,:,2]) area = 600*1100.0 # pixels # find the avg avg = sum_brightness/area return avg # Testing average brightness levels # Look at a number of different day and night images and think about # what average brightness value separates the two types of images # As an example, a "night" image is loaded in and its avg brightness is displayed image_num = 190 test_im = STANDARDIZED_LIST[image_num][0] avg = avg_brightness(test_im) print('Avg brightness: ' + str(avg)) plt.imshow(test_im) ``` # Classification and Visualizing Error In this section, we'll turn our average brightness feature into a classifier that takes in a standardized image and returns a `predicted_label` for that image. This `estimate_label` function should return a value: 0 or 1 (night or day, respectively). --- ### TODO: Build a complete classifier Complete this code so that it returns an estimated class label given an input RGB image. ``` # This function should take in RGB image input def estimate_label(rgb_image): # TO-DO: Extract average brightness feature from an RGB image avg = avg_brightness(rgb_image) # Use the avg brightness feature to predict a label (0, 1) predicted_label = 0 # TO-DO: Try out different threshold values to see what works best! threshold = 98.999999 if(avg > threshold): # if the average brightness is above the threshold value, we classify it as "day" predicted_label = 1 # else, the predicted_label can stay 0 (it is predicted to be "night") return predicted_label ``` ## Testing the classifier Here is where we test your classification algorithm using our test set of data that we set aside at the beginning of the notebook! Since we are using a pretty simple brightess feature, we may not expect this classifier to be 100% accurate. We'll aim for around 75-85% accuracy usin this one feature. ### Test dataset Below, we load in the test dataset, standardize it using the `standardize` function you defined above, and then **shuffle** it; this ensures that order will not play a role in testing accuracy. ``` import random # Using the load_dataset function in helpers.py # Load test data TEST_IMAGE_LIST = helpers.load_dataset(image_dir_test) # Standardize the test data STANDARDIZED_TEST_LIST = helpers.standardize(TEST_IMAGE_LIST) # Shuffle the standardized test data random.shuffle(STANDARDIZED_TEST_LIST) ``` ## Determine the Accuracy Compare the output of your classification algorithm (a.k.a. your "model") with the true labels and determine the accuracy. This code stores all the misclassified images, their predicted labels, and their true labels, in a list called `misclassified`. ``` # Constructs a list of misclassified images given a list of test images and their labels def get_misclassified_images(test_images): # Track misclassified images by placing them into a list misclassified_images_labels = [] # Iterate through all the test images # Classify each image and compare to the true label for image in test_images: # Get true data im = image[0] true_label = image[1] # Get predicted label from your classifier predicted_label = estimate_label(im) # Compare true and predicted labels if(predicted_label != true_label): # If these labels are not equal, the image has been misclassified misclassified_images_labels.append((im, predicted_label, true_label)) # Return the list of misclassified [image, predicted_label, true_label] values return misclassified_images_labels # Find all misclassified images in a given test set MISCLASSIFIED = get_misclassified_images(STANDARDIZED_TEST_LIST) # Accuracy calculations total = len(STANDARDIZED_TEST_LIST) num_correct = total - len(MISCLASSIFIED) accuracy = num_correct/total print('Accuracy: ' + str(accuracy)) print("Number of misclassified images = " + str(len(MISCLASSIFIED)) +' out of '+ str(total)) ``` --- <a id='task9'></a> ### TO-DO: Visualize the misclassified images Visualize some of the images you classified wrong (in the `MISCLASSIFIED` list) and note any qualities that make them difficult to classify. This will help you identify any weaknesses in your classification algorithm. ``` # Visualize misclassified example(s) num = 0 test_mis_im = MISCLASSIFIED[num][0] ## TODO: Display an image in the `MISCLASSIFIED` list ## TODO: Print out its predicted label - ## to see what the image *was* incorrectly classified as ``` --- <a id='question2'></a> ## (Question): After visualizing these misclassifications, what weaknesses do you think your classification algorithm has? **Answer:** Write your answer, here. # 5. Improve your algorithm! * (Optional) Tweak your threshold so that accuracy is better. * (Optional) Add another feature that tackles a weakness you identified! ---
github_jupyter
# Components of StyleGAN ### Goals In this notebook, you're going to implement various components of StyleGAN, including the truncation trick, the mapping layer, noise injection, adaptive instance normalization (AdaIN), and progressive growing. ### Learning Objectives 1. Understand the components of StyleGAN that differ from the traditional GAN. 2. Implement the components of StyleGAN. ## Getting Started You will begin by importing some packages from PyTorch and defining a visualization function which will be useful later. ``` import torch import torch.nn as nn import torch.nn.functional as F def show_tensor_images(image_tensor, num_images=16, size=(3, 64, 64), nrow=3): ''' Function for visualizing images: Given a tensor of images, number of images, size per image, and images per row, plots and prints the images in an uniform grid. ''' image_tensor = (image_tensor + 1) / 2 image_unflat = image_tensor.detach().cpu().clamp_(0, 1) image_grid = make_grid(image_unflat[:num_images], nrow=nrow, padding=0) plt.imshow(image_grid.permute(1, 2, 0).squeeze()) plt.axis('off') plt.show() ``` ## Truncation Trick The first component you will implement is the truncation trick. Remember that this is done after the model is trained and when you are sampling beautiful outputs. The truncation trick resamples the noise vector $z$ from a truncated normal distribution which allows you to tune the generator's fidelity/diversity. The truncation value is at least 0, where 1 means there is little truncation (high diversity) and 0 means the distribution is all truncated except for the mean (high quality/fidelity). This trick is not exclusive to StyleGAN. In fact, you may recall playing with it in an earlier GAN notebook. ``` # UNQ_C1 (UNIQUE CELL IDENTIFIER, DO NOT EDIT) # GRADED CELL: get_truncated_noise from scipy.stats import truncnorm def get_truncated_noise(n_samples, z_dim, truncation): ''' Function for creating truncated noise vectors: Given the dimensions (n_samples, z_dim) and truncation value, creates a tensor of that shape filled with random numbers from the truncated normal distribution. Parameters: n_samples: the number of samples to generate, a scalar z_dim: the dimension of the noise vector, a scalar truncation: the truncation value, a non-negative scalar ''' #### START CODE HERE #### truncated_noise = truncnorm.rvs(-truncation, truncation, size=(n_samples, z_dim)) #### END CODE HERE #### return torch.Tensor(truncated_noise) # Test the truncation sample assert tuple(get_truncated_noise(n_samples=10, z_dim=5, truncation=0.7).shape) == (10, 5) simple_noise = get_truncated_noise(n_samples=1000, z_dim=10, truncation=0.2) assert simple_noise.max() > 0.199 and simple_noise.max() < 2 assert simple_noise.min() < -0.199 and simple_noise.min() > -0.2 assert simple_noise.std() > 0.113 and simple_noise.std() < 0.117 print("Success!") ``` ## Mapping $z$ → $w$ The next component you need to implement is the mapping network. It takes the noise vector, $z$, and maps it to an intermediate noise vector, $w$. This makes it so $z$ can be represented in a more disentangled space which makes the features easier to control later. The mapping network in StyleGAN is composed of 8 layers, but for your implementation, you will use a neural network with 3 layers. This is to save time training later. <details> <summary> <font size="3" color="green"> <b>Optional hints for <code><font size="4">MappingLayers</font></code></b> </font> </summary> 1. This code should be five lines. 2. You need 3 linear layers and should use ReLU activations. 3. Your linear layers should be input -> hidden_dim -> hidden_dim -> output. </details> ``` # UNQ_C2 (UNIQUE CELL IDENTIFIER, DO NOT EDIT) # GRADED CELL: MappingLayers class MappingLayers(nn.Module): ''' Mapping Layers Class Values: z_dim: the dimension of the noise vector, a scalar hidden_dim: the inner dimension, a scalar w_dim: the dimension of the intermediate noise vector, a scalar ''' def __init__(self, z_dim, hidden_dim, w_dim): super().__init__() self.mapping = nn.Sequential( # Please write a neural network which takes in tensors of # shape (n_samples, z_dim) and outputs (n_samples, w_dim) # with a hidden layer with hidden_dim neurons #### START CODE HERE #### nn.Linear(z_dim, hidden_dim), nn.ReLU(), nn.Linear(hidden_dim, hidden_dim), nn.ReLU(), nn.Linear(hidden_dim, w_dim) #### END CODE HERE #### ) def forward(self, noise): ''' Function for completing a forward pass of MappingLayers: Given an initial noise tensor, returns the intermediate noise tensor. Parameters: noise: a noise tensor with dimensions (n_samples, z_dim) ''' return self.mapping(noise) #UNIT TEST COMMENT: Required for grading def get_mapping(self): return self.mapping # Test the mapping function map_fn = MappingLayers(10,20,30) assert tuple(map_fn(torch.randn(2, 10)).shape) == (2, 30) assert len(map_fn.mapping) > 4 outputs = map_fn(torch.randn(1000, 10)) assert outputs.std() > 0.05 and outputs.std() < 0.3 assert outputs.min() > -2 and outputs.min() < 0 assert outputs.max() < 2 and outputs.max() > 0 layers = [str(x).replace(' ', '').replace('inplace=True', '') for x in map_fn.get_mapping()] assert layers == ['Linear(in_features=10,out_features=20,bias=True)', 'ReLU()', 'Linear(in_features=20,out_features=20,bias=True)', 'ReLU()', 'Linear(in_features=20,out_features=30,bias=True)'] print("Success!") ``` ## Random Noise Injection Next, you will implement the random noise injection that occurs before every AdaIN block. To do this, you need to create a noise tensor that is the same size as the current feature map (image). The noise tensor is not entirely random; it is initialized as one random channel that is then multiplied by learned weights for each channel in the image. For example, imagine an image has 512 channels and its height and width are (4 x 4). You would first create a random (4 x 4) noise matrix with one channel. Then, your model would create 512 values—one for each channel. Next, you multiply the (4 x 4) matrix by each one of these values. This creates a "random" tensor of 512 channels and (4 x 4) pixels, the same dimensions as the image. Finally, you add this noise tensor to the image. This introduces uncorrelated noise and is meant to increase the diversity in the image. New starting weights are generated for every new layer, or generator, where this class is used. Within a layer, every following time the noise injection is called, you take another step with the optimizer and the weights that you use for each channel are optimized (i.e. learned). <details> <summary> <font size="3" color="green"> <b>Optional hint for <code><font size="4">InjectNoise</font></code></b> </font> </summary> 1. The weight should have the shape (1, channels, 1, 1). </details> <!-- <details> <summary> <font size="3" color="green"> <b>Optional hint for <code><font size="4">InjectNoise</font></code></b> </font> </summary> 1. Remember that you only make the noise for one channel (it is then multiplied by random values to create ones for the other channels). </details> --> <!-- (not sure how??) You'll find the get_noise function from before helpful here --> ``` # UNQ_C3 (UNIQUE CELL IDENTIFIER, DO NOT EDIT) # GRADED CELL: InjectNoise class InjectNoise(nn.Module): ''' Inject Noise Class Values: channels: the number of channels the image has, a scalar ''' def __init__(self, channels): super().__init__() self.weight = nn.Parameter( # You use nn.Parameter so that these weights can be optimized # Initiate the weights for the channels from a random normal distribution #### START CODE HERE #### torch.randn(1, channels, 1, 1) #### END CODE HERE #### ) def forward(self, image): ''' Function for completing a forward pass of InjectNoise: Given an image, returns the image with random noise added. Parameters: image: the feature map of shape (n_samples, channels, width, height) ''' # Set the appropriate shape for the noise! #### START CODE HERE #### noise_shape = (image.shape[0], 1, image.shape[2], image.shape[3]) #### END CODE HERE #### noise = torch.randn(noise_shape, device=image.device) # Creates the random noise return image + self.weight * noise # Applies to image after multiplying by the weight for each channel #UNIT TEST COMMENT: Required for grading def get_weight(self): return self.weight #UNIT TEST COMMENT: Required for grading def get_self(self): return self # UNIT TEST test_noise_channels = 3000 test_noise_samples = 20 fake_images = torch.randn(test_noise_samples, test_noise_channels, 10, 10) inject_noise = InjectNoise(test_noise_channels) assert torch.abs(inject_noise.weight.std() - 1) < 0.1 assert torch.abs(inject_noise.weight.mean()) < 0.1 assert type(inject_noise.get_weight()) == torch.nn.parameter.Parameter assert tuple(inject_noise.weight.shape) == (1, test_noise_channels, 1, 1) inject_noise.weight = nn.Parameter(torch.ones_like(inject_noise.weight)) # Check that something changed assert torch.abs((inject_noise(fake_images) - fake_images)).mean() > 0.1 # Check that the change is per-channel assert torch.abs((inject_noise(fake_images) - fake_images).std(0)).mean() > 1e-4 assert torch.abs((inject_noise(fake_images) - fake_images).std(1)).mean() < 1e-4 assert torch.abs((inject_noise(fake_images) - fake_images).std(2)).mean() > 1e-4 assert torch.abs((inject_noise(fake_images) - fake_images).std(3)).mean() > 1e-4 # Check that the per-channel change is roughly normal per_channel_change = (inject_noise(fake_images) - fake_images).mean(1).std() assert per_channel_change > 0.9 and per_channel_change < 1.1 # Make sure that the weights are being used at all inject_noise.weight = nn.Parameter(torch.zeros_like(inject_noise.weight)) assert torch.abs((inject_noise(fake_images) - fake_images)).mean() < 1e-4 assert len(inject_noise.weight.shape) == 4 print("Success!") ``` ## Adaptive Instance Normalization (AdaIN) The next component you will implement is AdaIN. To increase control over the image, you inject $w$ — the intermediate noise vector — multiple times throughout StyleGAN. This is done by transforming it into a set of style parameters and introducing the style to the image through AdaIN. Given an image ($x_i$) and the intermediate vector ($w$), AdaIN takes the instance normalization of the image and multiplies it by the style scale ($y_s$) and adds the style bias ($y_b$). You need to calculate the learnable style scale and bias by using linear mappings from $w$. # $ \text{AdaIN}(\boldsymbol{\mathrm{x}}_i, \boldsymbol{\mathrm{y}}) = \boldsymbol{\mathrm{y}}_{s,i} \frac{\boldsymbol{\mathrm{x}}_i - \mu(\boldsymbol{\mathrm{x}}_i)}{\sigma(\boldsymbol{\mathrm{x}}_i)} + \boldsymbol{\mathrm{y}}_{b,i} $ <details> <summary> <font size="3" color="green"> <b>Optional hints for <code><font size="4">forward</font></code></b> </font> </summary> 1. Remember the equation for AdaIN. 2. The instance normalized image, style scale, and style shift have already been calculated for you. </details> ``` # UNQ_C4 (UNIQUE CELL IDENTIFIER, DO NOT EDIT) # GRADED CELL: AdaIN class AdaIN(nn.Module): ''' AdaIN Class Values: channels: the number of channels the image has, a scalar w_dim: the dimension of the intermediate noise vector, a scalar ''' def __init__(self, channels, w_dim): super().__init__() # Normalize the input per-dimension self.instance_norm = nn.InstanceNorm2d(channels) # You want to map w to a set of style weights per channel. # Replace the Nones with the correct dimensions - keep in mind that # both linear maps transform a w vector into style weights # corresponding to the number of image channels. #### START CODE HERE #### self.style_scale_transform = nn.Linear(w_dim, channels) self.style_shift_transform = nn.Linear(w_dim, channels) #### END CODE HERE #### def forward(self, image, w): ''' Function for completing a forward pass of AdaIN: Given an image and intermediate noise vector w, returns the normalized image that has been scaled and shifted by the style. Parameters: image: the feature map of shape (n_samples, channels, width, height) w: the intermediate noise vector ''' normalized_image = self.instance_norm(image) style_scale = self.style_scale_transform(w)[:, :, None, None] style_shift = self.style_shift_transform(w)[:, :, None, None] # Calculate the transformed image #### START CODE HERE #### transformed_image = style_scale * normalized_image + style_shift #### END CODE HERE #### return transformed_image #UNIT TEST COMMENT: Required for grading def get_style_scale_transform(self): return self.style_scale_transform #UNIT TEST COMMENT: Required for grading def get_style_shift_transform(self): return self.style_shift_transform #UNIT TEST COMMENT: Required for grading def get_self(self): return self w_channels = 50 image_channels = 20 image_size = 30 n_test = 10 adain = AdaIN(image_channels, w_channels) test_w = torch.randn(n_test, w_channels) assert adain.style_scale_transform(test_w).shape == adain.style_shift_transform(test_w).shape assert adain.style_scale_transform(test_w).shape[-1] == image_channels assert tuple(adain(torch.randn(n_test, image_channels, image_size, image_size), test_w).shape) == (n_test, image_channels, image_size, image_size) w_channels = 3 image_channels = 2 image_size = 3 n_test = 1 adain = AdaIN(image_channels, w_channels) adain.style_scale_transform.weight.data = torch.ones_like(adain.style_scale_transform.weight.data) / 4 adain.style_scale_transform.bias.data = torch.zeros_like(adain.style_scale_transform.bias.data) adain.style_shift_transform.weight.data = torch.ones_like(adain.style_shift_transform.weight.data) / 5 adain.style_shift_transform.bias.data = torch.zeros_like(adain.style_shift_transform.bias.data) test_input = torch.ones(n_test, image_channels, image_size, image_size) test_input[:, :, 0] = 0 test_w = torch.ones(n_test, w_channels) test_output = adain(test_input, test_w) assert(torch.abs(test_output[0, 0, 0, 0] - 3 / 5 + torch.sqrt(torch.tensor(9 / 8))) < 1e-4) assert(torch.abs(test_output[0, 0, 1, 0] - 3 / 5 - torch.sqrt(torch.tensor(9 / 32))) < 1e-4) print("Success!") ``` ## Progressive Growing in StyleGAN The final StyleGAN component that you will create is progressive growing. This helps StyleGAN to create high resolution images by gradually doubling the image's size until the desired size. You will start by creating a block for the StyleGAN generator. This is comprised of an upsampling layer, a convolutional layer, random noise injection, an AdaIN layer, and an activation. ``` # UNQ_C5 (UNIQUE CELL IDENTIFIER, DO NOT EDIT) # GRADED CELL: MicroStyleGANGeneratorBlock class MicroStyleGANGeneratorBlock(nn.Module): ''' Micro StyleGAN Generator Block Class Values: in_chan: the number of channels in the input, a scalar out_chan: the number of channels wanted in the output, a scalar w_dim: the dimension of the intermediate noise vector, a scalar kernel_size: the size of the convolving kernel starting_size: the size of the starting image ''' def __init__(self, in_chan, out_chan, w_dim, kernel_size, starting_size, use_upsample=True): super().__init__() self.use_upsample = use_upsample # Replace the Nones in order to: # 1. Upsample to the starting_size, bilinearly (https://pytorch.org/docs/master/generated/torch.nn.Upsample.html) # 2. Create a kernel_size convolution which takes in # an image with in_chan and outputs one with out_chan (https://pytorch.org/docs/stable/generated/torch.nn.Conv2d.html) # 3. Create an object to inject noise # 4. Create an AdaIN object # 5. Create a LeakyReLU activation with slope 0.2 #### START CODE HERE #### if self.use_upsample: self.upsample = nn.Upsample((starting_size), mode='bilinear') self.conv = nn.Conv2d(in_chan, out_chan, kernel_size, padding=1) # Padding is used to maintain the image size self.inject_noise = InjectNoise(out_chan) self.adain = AdaIN(out_chan, w_dim) self.activation = nn.LeakyReLU(0.2) #### END CODE HERE #### def forward(self, x, w): ''' Function for completing a forward pass of MicroStyleGANGeneratorBlock: Given an x and w, computes a StyleGAN generator block. Parameters: x: the input into the generator, feature map of shape (n_samples, channels, width, height) w: the intermediate noise vector ''' if self.use_upsample: x = self.upsample(x) x = self.conv(x) x = self.inject_noise(x) x = self.activation(x) x = self.adain(x, w) return x #UNIT TEST COMMENT: Required for grading def get_self(self): return self; test_stylegan_block = MicroStyleGANGeneratorBlock(in_chan=128, out_chan=64, w_dim=256, kernel_size=3, starting_size=8) test_x = torch.ones(1, 128, 4, 4) test_x[:, :, 1:3, 1:3] = 0 test_w = torch.ones(1, 256) test_x = test_stylegan_block.upsample(test_x) assert tuple(test_x.shape) == (1, 128, 8, 8) assert torch.abs(test_x.mean() - 0.75) < 1e-4 test_x = test_stylegan_block.conv(test_x) assert tuple(test_x.shape) == (1, 64, 8, 8) test_x = test_stylegan_block.inject_noise(test_x) test_x = test_stylegan_block.activation(test_x) assert test_x.min() < 0 assert -test_x.min() / test_x.max() < 0.4 test_x = test_stylegan_block.adain(test_x, test_w) foo = test_stylegan_block(torch.ones(10, 128, 4, 4), torch.ones(10, 256)) print("Success!") ``` Now, you can implement progressive growing. StyleGAN starts with a constant 4 x 4 (x 512 channel) tensor which is put through an iteration of the generator without upsampling. The output is some noise that can then be transformed into a blurry 4 x 4 image. This is where the progressive growing process begins. The 4 x 4 noise can be further passed through a generator block with upsampling to produce an 8 x 8 output. However, this will be done gradually. You will simulate progressive growing from an 8 x 8 image to a 16 x 16 image. Instead of simply passing it to the generator block with upsampling, StyleGAN gradually trains the generator to the new size by mixing in an image that was only upsampled. By mixing an upsampled 8 x 8 image (which is 16 x 16) with increasingly more of the 16 x 16 generator output, the generator is more stable as it progressively trains. As such, you will do two separate operations with the 8 x 8 noise: 1. Pass it into the next generator block to create an output noise, that you will then transform to an image. 2. Transform it into an image and then upsample it to be 16 x 16. You will now have two images that are both double the resolution of the 8 x 8 noise. Then, using an alpha ($\alpha$) term, you combine the higher resolution images obtained from (1) and (2). You would then pass this into the discriminator and use the feedback to update the weights of your generator. The key here is that the $\alpha$ term is gradually increased until eventually, only the image from (1), the generator, is used. That is your final image or you could continue this process to make a 32 x 32 image or 64 x 64, 128 x 128, etc. This micro model you will implement will visualize what the model outputs at a particular stage of training, for a specific value of $\alpha$. However to reiterate, in practice, StyleGAN will slowly phase out the upsampled image by increasing the $\alpha$ parameter over many training steps, doing this process repeatedly with larger and larger alpha values until it is 1—at this point, the combined image is solely comprised of the image from the generator block. This method of gradually training the generator increases the stability and fidelity of the model. <!-- by passing a random noise vector in $z$ through the mapping function you wrote to get $w$. $w$ is then passed through the first block of the generator to create your first output noise. --> <details> <summary> <font size="3" color="green"> <b>Optional hint for <code><font size="4">forward</font></code></b> </font> </summary> 1. You may find [torch.lerp](https://pytorch.org/docs/stable/generated/torch.lerp.html) helpful. </details> ``` # UNQ_C6 (UNIQUE CELL IDENTIFIER, DO NOT EDIT) # GRADED CELL: MicroStyleGANGenerator class MicroStyleGANGenerator(nn.Module): ''' Micro StyleGAN Generator Class Values: z_dim: the dimension of the noise vector, a scalar map_hidden_dim: the mapping inner dimension, a scalar w_dim: the dimension of the intermediate noise vector, a scalar in_chan: the dimension of the constant input, usually w_dim, a scalar out_chan: the number of channels wanted in the output, a scalar kernel_size: the size of the convolving kernel hidden_chan: the inner dimension, a scalar ''' def __init__(self, z_dim, map_hidden_dim, w_dim, in_chan, out_chan, kernel_size, hidden_chan): super().__init__() self.map = MappingLayers(z_dim, map_hidden_dim, w_dim) # Typically this constant is initiated to all ones, but you will initiate to a # Gaussian to better visualize the network's effect self.starting_constant = nn.Parameter(torch.randn(1, in_chan, 4, 4)) self.block0 = MicroStyleGANGeneratorBlock(in_chan, hidden_chan, w_dim, kernel_size, 4, use_upsample=False) self.block1 = MicroStyleGANGeneratorBlock(hidden_chan, hidden_chan, w_dim, kernel_size, 8) self.block2 = MicroStyleGANGeneratorBlock(hidden_chan, hidden_chan, w_dim, kernel_size, 16) # You need to have a way of mapping from the output noise to an image, # so you learn a 1x1 convolution to transform the e.g. 512 channels into 3 channels # (Note that this is simplified, with clipping used in the real StyleGAN) self.block1_to_image = nn.Conv2d(hidden_chan, out_chan, kernel_size=1) self.block2_to_image = nn.Conv2d(hidden_chan, out_chan, kernel_size=1) self.alpha = 0.2 def upsample_to_match_size(self, smaller_image, bigger_image): ''' Function for upsampling an image to the size of another: Given a two images (smaller and bigger), upsamples the first to have the same dimensions as the second. Parameters: smaller_image: the smaller image to upsample bigger_image: the bigger image whose dimensions will be upsampled to ''' return F.interpolate(smaller_image, size=bigger_image.shape[-2:], mode='bilinear') def forward(self, noise, return_intermediate=False): ''' Function for completing a forward pass of MicroStyleGANGenerator: Given noise, computes a StyleGAN iteration. Parameters: noise: a noise tensor with dimensions (n_samples, z_dim) return_intermediate: a boolean, true to return the images as well (for testing) and false otherwise ''' x = self.starting_constant w = self.map(noise) x = self.block0(x, w) x_small = self.block1(x, w) # First generator run output x_small_image = self.block1_to_image(x_small) x_big = self.block2(x_small, w) # Second generator run output x_big_image = self.block2_to_image(x_big) x_small_upsample = self.upsample_to_match_size(x_small_image, x_big_image) # Upsample first generator run output to be same size as second generator run output # Interpolate between the upsampled image and the image from the generator using alpha #### START CODE HERE #### interpolation = ((1 - self.alpha) * x_small_upsample) + (self.alpha * x_big_image) #### END CODE HERE #### if return_intermediate: return interpolation, x_small_upsample, x_big_image return interpolation #UNIT TEST COMMENT: Required for grading def get_self(self): return self; z_dim = 128 out_chan = 3 truncation = 0.7 mu_stylegan = MicroStyleGANGenerator( z_dim=z_dim, map_hidden_dim=1024, w_dim=496, in_chan=512, out_chan=out_chan, kernel_size=3, hidden_chan=256 ) test_samples = 10 test_result = mu_stylegan(get_truncated_noise(test_samples, z_dim, truncation)) # Check if the block works assert tuple(test_result.shape) == (test_samples, out_chan, 16, 16) # Check that the interpolation is correct mu_stylegan.alpha = 1. test_result, _, test_big = mu_stylegan( get_truncated_noise(test_samples, z_dim, truncation), return_intermediate=True) assert torch.abs(test_result - test_big).mean() < 0.001 mu_stylegan.alpha = 0. test_result, test_small, _ = mu_stylegan( get_truncated_noise(test_samples, z_dim, truncation), return_intermediate=True) assert torch.abs(test_result - test_small).mean() < 0.001 print("Success!") ``` ## Running StyleGAN Finally, you can put all the components together to run an iteration of your micro StyleGAN! You can also visualize what this randomly initiated generator can produce. The code will automatically interpolate between different values of alpha so that you can intuitively see what it means to mix the low-resolution and high-resolution images using different values of alpha. In the generated image, the samples start from low alpha values and go to high alpha values. ``` import numpy as np from torchvision.utils import make_grid import matplotlib.pyplot as plt plt.rcParams['figure.figsize'] = [15, 15] viz_samples = 10 # The noise is exaggerated for visual effect viz_noise = get_truncated_noise(viz_samples, z_dim, truncation) * 10 mu_stylegan.eval() images = [] for alpha in np.linspace(0, 1, num=5): mu_stylegan.alpha = alpha viz_result, _, _ = mu_stylegan( viz_noise, return_intermediate=True) images += [tensor for tensor in viz_result] show_tensor_images(torch.stack(images), nrow=viz_samples, num_images=len(images)) mu_stylegan = mu_stylegan.train() ```
github_jupyter
Центр непрерывного образования # Программа «Python для автоматизации и анализа данных» Неделя 1 - 1 *Татьяна Рогович, НИУ ВШЭ* ## Введение в Python. Целые и вещественные числа. Логические переменные. # Функция print() С помощью Python можно решать огромное количество задач. Мы начнем с очень простых и постепенно будем их усложнять, закончив наш курс небольшим проектом. Если вы уже сталкивались с программированием, то вы помните, что обычно самой первой программой становится вывод "Hello, world". Попробуем сделать это в Python. ``` a = 1 a = 2 a + 2 a = a + 2 a += 2 # a = a + 2 a a -= 2 a *= 2 b = a a c print(a) print('Hello, world!') print(1) ``` Обратите внимание, что "Hello, world!" мы написали в кавычках, а единицу - нет. Это связанно с тем, что в программировании мы имеем дело с разными типами данных. И Python будет воспринимать текст как текст (строковую переменную), только в том случае, если мы его возьмем в кавычки (неважно, одинарные или двойные). А при выводе эти кавычки отображаться уже не будут (они служат знаком для Python, что внутри них - текст). print() - это функция, которая выводит то, что мы ей передадим. В других IDE это был бы вывод в терминал, а в Jupyter вывод напечатается под запускаемой ячейкой. Распознать функцию в питоне можно по скобкам после слова, внутри которых мы передаем аргумент, к которому эту функцию нужно применить (текст "Hello, world" или 1 в нашем случае). ``` print(Hello, world) ``` Написали без кавычек - получили ошибку. Кстати, обратите внимание, что очень часто в тексте ошибки есть указание на то, что именно произошло, и можно попробовать догадаться, что же нужно исправить. Текст без кавычек Python воспринимает как название переменной, которую еще не задали. Кстати, если забыть закрыть или открыть кавычку (или поставить разные), то тоже поймаем ошибку. Иногда мы хотим комментировать наш код, чтобы я-будущий или наши коллеги поменьше задавались вопросами, а что вообще имелось ввиду. Комментарии можно писать прямо в коде, они не повлияют на работу программы, если их правильно оформить. ``` # Обратите внимание: так выглядит комментарий - часть скрипта, которая не будет исполнена # при запуске программы. # Каждую строку комментария мы начинаем со знака хэштега. ''' Это тоже комментарий - обычно выделение тремя апострофами мы используем для тех случаев, когда хотим написать длинный, развернутый текст. ''' print('Hello, world') ``` Обратите внимание, что в отличие от других IDE (например, PyCharm) в Jupyter Notebook не всегда обязательно использовать print(), чтобы что-то вывести. Но не относитесь к этому как к стандарту для любых IDE. ``` "Hello, world" ``` Выше рядом с выводом появилась подпись Out[]. В данном случае Jupyter показывает нам последнее значение, лежащее в буфере ячейки. Например, в PyCharm такой вывод всегда будет скрыт, пока мы его не "напечатаем" с помощью print(). Но эта особенность Jupyter помогает нам быстро проверить, что, например, находится внутри переменной, когда мы отлаживаем наш код. Следующая вещь, которую нужно знать про язык программирования - как в нем задаются переменные. Переменные - это контейнеры, которые хранят в себе информацию (текстовую, числовую, какие-то более сложные виды данных). В Python знаком присвоения является знак =. ``` x = 'Hello, world!' print(x) # Обратите внимание, что результат вызова этой функции такой же, как выше, # только текст теперь хранится внутри переменной. ``` Python - язык чувствительный к регистру. Поэтому, когда создаете/вызываете переменные или функции, обязательно используйте правильный регистр. Так, следующая строка выдаст ошибку. ``` print(X) # мы создали переменную x, а X не существует ``` Еще раз обратим внимание на ошибку. *NameError: name 'X' is not defined* означает, что переменная с таким названием не была создана в этой программе. Кстати, обратите внимание, что переменные в Jupyter хранятся на протяжении всей сессии (пока вы работаете с блокнотом и не закрыли его), и могут быть созданы в одной ячейке, а вызваны в другой. Давайте опять попробуем обратиться к x. ``` print(x) # работает! ``` # Типы данных: целочисленные переменные (integer) Знакомство с типа данных мы начнем с целых чисел. Если вы вдруг знакомы с другими языками программирования, то стоит отметить, что типизация в Python - динамическая. Это значит, что вам не нужно говорить какой тип данных вы хотите положить в переменную - Python сам все определит. Проверить тип данных можно с помощью функции type(), передав ей в качестве аргумента сами данные или переменную. **ЦЕЛЫЕ ЧИСЛА (INT, INTEGER):** 1, 2, 592, 1030523235 - любое целое число без дробной части. ``` 1 10 100 type(100) 5 / 2 type(2.5) type(1.0) int(1.0) type(int(1.0)) float(2) y = 2 print(type(2)) print(type(y)) ``` Обратите внимание - выше мы вызвали функцию внутри функции. type(2) возвращает скрытое значение типа переменной (int для integer). Чтобы вывести это скрытое значение - мы должны его "напечатать". Самое элементарное, что можно делать в Python - использовать его как калькулятор. Давайте посмотрим, как он вычитает, складывает и умножает. ``` print(2 + 2) print(18 - 9) print(4 * 3) ``` С делением нужно быть немного осторожней. Существует два типа деления - привычное нам, которое даст в ответе дробь при делении 5 на 2, и целочисленное деление, в результате которого мы получим только целую часть частного. ``` print(5 / 2) # в результате такого деления получается другой тип данных (float), подробнее о нем поговорим позже. print(5 // 2) ``` А если нам надо как раз найти остаток от деления, то мы можем воспользоваться оператором модуло % ``` print(5 % 2) ``` Еще одна математическая операция, которую мы можем выполнять без загрузки специальных математических библиотек - это возведение в степень. ``` print(5**2) ``` Все то же самое работает и с переменными, содержащими числа. ``` a = 2 b = 3 print(a ** b) # изменится ли результат, если мы перезапишем переменную a? a = 5 print(a ** b) ``` Часто возникает ситуация, что мы считали какие-то данные в формате текста, и у нас не работают арифметические операции. Тогда мы можем с помощью функции int() преобразовать строковую переменную (о них ниже) в число, если эта строка может быть переведена в число в десятичной системе. ``` print(2 + '5') # ошибка, не сможем сложить целое число и строку print(2 + int('5')) # преобразовали строку в число и все заработало int('текст') # здесь тоже поймаем ошибку, потому что строка не представляет собой число ``` ## (∩`-´)⊃━☆゚.*・。゚ Задача ### Сумма цифр трехзначного числа Дано трехзначное число 179. Найдите сумму его цифр. **Формат вывода** Выведите ответ на задачу. **Ответ** Вывод программы: 17 ``` # (∩`-´)⊃━☆゚.*・。゚ x = int(input()) x_1 = x // 100 x_2 = x // 10 % 10 x_3 = x % 10 # print(x_1, x_2, x_3) # тестовый вывод, проверяем, что правильно "достали" цифры из числа print(x_1 + x_2 + x_3) # ответ на задачу x = int(input()) int(12.5) type(x) 179 // 100 179 // 10 17 % 10 ``` ## (∩`-´)⊃━☆゚.*・。゚ Задача ### Электронные часы Дано число N. С начала суток прошло N минут. Определите, сколько часов и минут будут показывать электронные часы в этот момент. **Формат ввода** Вводится число N — целое, положительное, не превышает 10⁷. **Формат вывода** Программа должна вывести два числа: количество часов (от 0 до 23) и количество минут (от 0 до 59). Учтите, что число N может быть больше, чем количество минут в сутках. #### Примеры Тест 1 **Входные данные:** 150 **Вывод программы:** 2 30 ``` # (∩`-´)⊃━☆゚.*・。゚ minutes = int(input()) print(minutes // 60 % 24, minutes % 60) ``` # Типы данных: логические или булевы переменные (boolean) Следующий тип данных - это логические переменные. Логические переменные могут принимать всего два значения - **истина (True)** или **ложь (False)**. В Python тип обознчается **bool**. ``` print(type(True), type(False)) type(True) ``` Логические переменные чаще всего используется в условных операторах if-else и в цикле с остановкой по условию while. В части по анализу данных еще увидим одно частое применение - создание булевых масок для фильтрации данных (например, вывести только те данные, где возраст больше 20). Обратите внимание, что True и False обязательно пишутся с большой буквы и без кавычек, иначе можно получить ошибку. ``` print(type('True')) # тип str - строковая переменная print(true) # ошибка, Python думает, что это название переменной ``` Как и в случае чисел и строк, с логическими переменными работает преобразование типов. Превратить что-либо в логическую переменную можно с помощью функции bool(). Преобразование чисел работает следующим образом - 0 превращается в False, все остальное в True. ``` int(2.45) float(3) float('2.5') bool(0) bool(1) print(bool(0)) print(bool(23)) print(bool(-10)) ``` Пустая строка преобразуется в False, все остальные строки в True. ``` print(bool('')) print(bool('Hello')) print(bool(' ')) # даже строка из одного пробела - это True print(bool('False')) # и даже строка 'False' - это True ``` И при необходимости булеву переменную можно конвертировать в int. Тут все без сюрпризов - ноль и единица. ``` print(int(True)) print(int(False)) ``` ## Логические выражения Давайте посмотрим, где используется новый тип данных. Мы поработаем с логическими выражениями, результат которых - булевы переменные. Логические выражения выполняют проверку на истинность, то есть выражение равно True, если оно истинно, и False, если ложно. В логических выражениях используются операторы сравнения: * == (равно) * != (не равно) * \> (больше) * < (меньше) * \>= (больше или равно) * <= (меньше или равно) ``` 5 == 5 a = 5 a == 5 5 != 5 1 < 3 < 5 == 7 1 < 3 < 5 < 7 print(1 == 1) print(1 != '1') c = 1 > 3 print(c) print(type(c)) x = 5 print(1 < x <= 5) # можно писать связки цепочкой ``` Логические выражения можно комбинировать с помощью следующих логических операторов: * логическое И (and) - выражение истинно, только когда обе части истинны, иначе оно ложно * логическое ИЛИ (or) - выражение ложно, только когда обе части ложны, иначе оно истинно * логическое отрицание (not) - превращает True в False и наоборот ``` print((1 == 1) and ('1' == 1)) print((1 == 1) or ('1' == 1)) print(not(1 == 1)) print(((1 == 1) or ('1' == 1)) and (2 == 2)) ``` ## (∩`-´)⊃━☆゚.*・。゚ Задача ## Вася в Италии Вася уехал учиться по обмену на один семестр в Италию. Единственный магазин в городе открыт с 6 до 8 утра и с 16 до 17 вечера (включительно). Вася не мог попасть в магазин уже несколько дней и страдает от голода. Он может прийти в магазин в X часов. Если магазин открыт в X часов, то выведите True, а если закрыт - выведите False. В единственной строке входных данных вводится целое число X, число находится в пределах от 0 до 23 **Формат ввода** Целое число X, число находится в пределах от 0 до 23 **Формат вывода** True или False #### Примеры Тест 1 **Входные данные:** 16 **Вывод программы:** True ``` ## (∩`-´)⊃━☆゚.*・。゚ time = 16 can_visit = 6 <= time <= 8 can_visit2 = 16 <= time <= 17 print(can_visit or can_visit2) ``` # Типы данных: вещественные числа (float) По сути, вещественные числа это десятичные дроби, записанные через точку. Вещественные числа в питоне обозначаются словом float (от "плавающей" точки в них). Также могут быть представлены в виде инженерной записи: 1/10000 = 1e-05 **ВЕЩЕСТВЕННЫЕ ЧИСЛА (FLOAT):** 3.42, 2.323212, 3.0, 1e-05 - число с дробной частью (в том числе целое с дробной частью равной 0) ``` 4.5 + 5 ``` Если у нас было действие с участие целого и вещественного числа, то ответ всегда будет в виде вещественного числа (см. выше). Также давайте вспомним про "обычное" деление, в результате которого получается вещественное число. ``` print(11 / 2) print(type(11 / 2)) print(11 // 2) print(type(11 // 2)) ``` С вещественными числами нужно быть осторожными со сравнениями. В связи с особенностями устройства памяти компьютера дробные числа хранятся там весьма хитро и не всегда условные 0.2 то, чем кажутся. Это связано с проблемой точности представления чисел. Подробнее можно прочитать [здесь](https://pythoner.name/documentation/tutorial/floatingpoint). ``` 0.2 + 0.1 == 0.3 0.4 - 0.3 == 0.1 x = 0.4 - 0.3 x x == 0.10000000000000003 0 <= x - 0.1 < 0.0001 ``` Наверняка, от такого равенства мы ожидали результат True, но нет. Поэтому будьте аккуратны и старайтесь не "завязывать" работу вашей программы на условия, связанные с вычислением вещественных чисел. Давайте посмотрим, как на самом деле выглядят эти числа в памяти компьютера. ``` print(0.2 + 0.1) print(0.3) ``` Числа с плавающей точкой представлены в компьютерном железе как дроби с основанием 2 (двоичная система счисления). Например, десятичная дробь 0.125 имеет значение 1/10 + 2/100 + 5/1000, и таким же образом двоичная дробь 0.001 имеет значение 0/2 + 0/4 + 1/8. Эти две дроби имеют одинаковые значения, отличаются только тем, что первая записана в дробной нотации по основанию 10, а вторая по основанию 2. К сожалению, большинство десятичных дробей не могут быть точно представлены в двоичной записи. Следствием этого является то, что в основном десятичные дробные числа вы вводите только приближенными к двоичным, которые и сохраняются в компьютере. Если вам совсем не обойтись без такого сравнения, то можно сделать так: сравнивать не результат сложения и числа, а разность эти двух чисел с каким-то очень маленьким числом (с таким, размер которого будет точно не критичен для нашего вычисления). Например, порог это числа будет разным для каких-то физических вычислений, где важна высокая точность, и сравнения доходов граждан. ``` 0.2 + 0.1 - 0.3 < 0.000001 0.2 + 0.1 - 0.3 < 1e-16 ``` Следующая проблема, с которой можно столкнуться - вместо результата вычисления получить ошибку 'Result too large'. Cвязано это с ограничением выделяемой памяти на хранение вещественного числа. ``` 1.5 ** 100000 1.5 ** 1000 ``` А если все получилось, то ответ еще может выглядеть вот так. Такая запись числа называется научной и экономит место - она хранит целую часть числа (мантисса) и степень десятки на которую это число нужно умножить (экспонента). Здесь результатом возведения 1.5 в степень 1000 будет число 1.2338405969061735, умноженное на 10 в степень 176. Понятно, что это число очень большое. Если бы вместо знакак + стоял -, то и степень десятки была бы отрицательная (10 в -176 степени), и такое число было бы очень, очень маленьким. Как и в случае с целыми числами, вы можете перевести строку в вещественное число, если это возможно. Сделать это можно фукнцией float() ``` print(2.5 + float('2.4')) ``` ## Округление вещественных чисел У нас часто возникает необходимость превратить вещественное число в целое ("округлить"). В питоне есть несколько способов это сделать, но, к сожалению, ни один из них не работает как наше привычное округление и про это всегда нужно помнить. Большинство этих функций не реализованы в базовом наборе команд питона и для того, чтобы ими пользоваться, нам придется загрузить дополнительную библиотеку math, которая содержит всякие специальные функции для математических вычислений. ``` import math # команда import загружает модуль под названием math math.ceil(2.3) import math as mh mh.ceil(2.1) from math import ceil ceil(3.1) math.floor(3.8) # !pip install math ``` Модуль math устанавливается в рамках дистрибутива Anaconda, который мы использовали, чтобы установить Jupyter Notebook, поэтому его не нужно отдельно скачивать, а можно просто импортировать (загрузить в оперативную память текущей сессии). Иногда нужную библиотеку придется сначала установить на компьютер с помощью команды !pip install -название модуля- и только потом импортировать. Самый простой способ округлить число - применить к нему функцию int. ``` int(2.6) int(2.1) ``` Обратите внимание, что такой метод просто обрубает дробную часть (значения выше 0.5 не округляются в сторону большего числа). ``` print(int(2.6)) print(int(-2.6)) ``` Округление "в пол" из модуля math округляет до ближайшего меньшего целого числа. ``` print(math.floor(2.6)) # чтобы использовать функцю из дополнительного модуля - # нужно сначала написать название этого модуля и через точку название функции print(math.floor(-2.6)) ``` Округление "в потолок" работает ровно наоброт - округляет до ближайшего большего числа, независимо от значения дробной части. ``` print(math.ceil(2.6)) print(math.ceil(-2.6)) ``` В самом питоне есть еще функция round(). Вот она работает почти привычно нам, если бы не одно "но"... ``` print(round(2.2)) print(round(2.7)) print(round(2.5)) # внимание на эту строку print(round(3.5)) ``` Неожиданно? Тут дело в том, что в питоне реализованы не совсем привычные нам правила округления чисел с вещественной частью 0.5 - такое число округляется до ближайшего четного числа: 2 для 2.5 и 4 для 3.5. ## Замечание по импорту функций Иногда нам не нужна вся библиотека, а только одна функция из-за нее. Скажите, странно же хранить в опреативной памяти всю "Войну и мир", если нас интересует только пятое предложение на восьмой странице. Для этого можно воспользоваться импортом функции из библиотеки и тогда не нужно будет писать ее название через точку. Подводный камень здесь только тот, что если среди базовых команд питона есть функция с таким же именем, то она перезапишется и придется перезапускать свой блокнот, чтобы вернуть все как было. ``` from math import ceil ceil(2.6) # теперь работает без math. ``` ## (∩`-´)⊃━☆゚.*・。゚ Задача ## Дробная часть Дано вещественное число. Выведите его дробную часть. **Формат ввода** Вещественное число **Формат вывода** Вещественное число (ответ на задачу) #### Примеры Тест 1 **Входные данные:** 4.0 **Вывод программы:** 0.0 ``` # (∩`-´)⊃━☆゚.*・。゚ x = 4.0 print(x - int(x)) x = 5.2 print(x - int(x)) !pip install wikipedia import wikipedia wikipedia.search("Barack") a = input() ```
github_jupyter
``` from mxnet import nd def pure_batch_norm(X, gamma, beta, eps=1e-5): assert len(X.shape) in (2, 4) # 全连接: batch_size x feature if len(X.shape) == 2: # 每个输入维度在样本上的平均和方差 mean = X.mean(axis=0) variance = ((X - mean)**2).mean(axis=0) # 2D卷积: batch_size x channel x height x width else: # 对每个通道算均值和方差,需要保持4D形状使得可以正确地广播 mean = X.mean(axis=(0,2,3), keepdims=True) variance = ((X - mean)**2).mean(axis=(0,2,3), keepdims=True) # 均一化 X_hat = (X - mean) / nd.sqrt(variance + eps) # 拉升和偏移 print(X_hat.shape) print(mean.shape) return gamma.reshape(mean.shape) * X_hat + beta.reshape(mean.shape) A = nd.arange(6).reshape((3,2)) pure_batch_norm(A, gamma=nd.array([1, 1]), beta=nd.array([0, 0])) B = nd.arange(18).reshape((1,2,3,3)) pure_batch_norm(B, gamma=nd.array([1,1]), beta=nd.array([0,0])) # 在训练时,计算整个 batch 上的均值和方差 # 在测试时,计算整个数据集的均值方差,当训练数据量极大时,使用 moving average 近似计算 def batch_norm(X, gamma, beta, is_training, moving_mean, moving_variance, eps = 1e-5, moving_momentum = 0.9): assert len(X.shape) in (2, 4) # 全连接: batch_size x feature if len(X.shape) == 2: # 每个输入维度在样本上的平均和方差 mean = X.mean(axis=0) variance = ((X - mean)**2).mean(axis=0) # 2D卷积: batch_size x channel x height x width else: # 对每个通道算均值和方差,需要保持4D形状使得可以正确的广播 mean = X.mean(axis=(0,2,3), keepdims=True) variance = ((X - mean)**2).mean(axis=(0,2,3), keepdims=True) # 变形使得可以正确的广播 moving_mean = moving_mean.reshape(mean.shape) moving_variance = moving_variance.reshape(mean.shape) # 均一化 if is_training: X_hat = (X - mean) / nd.sqrt(variance + eps) #!!! 更新全局的均值和方差 moving_mean[:] = moving_momentum * moving_mean + ( 1.0 - moving_momentum) * mean moving_variance[:] = moving_momentum * moving_variance + ( 1.0 - moving_momentum) * variance else: #!!! 测试阶段使用全局的均值和方差 X_hat = (X - moving_mean) / nd.sqrt(moving_variance + eps) # 拉升和偏移 return gamma.reshape(mean.shape) * X_hat + beta.reshape(mean.shape) ``` # 定义模型 ``` import sys sys.path.append('..') import utils ctx = utils.try_gpu() ctx weight_scale = 0.01 # 输出通道 = 20, 卷积核 = (5,5) c1 = 20 W1 = nd.random.normal(shape=(c1,1,5,5), scale=weight_scale, ctx=ctx) b1 = nd.zeros(c1, ctx=ctx) # 第1层批量归一化 gamma1 = nd.random.normal(shape=c1, scale=weight_scale, ctx=ctx) beta1 = nd.random.normal(shape=c1, scale=weight_scale, ctx=ctx) moving_mean1 = nd.zeros(c1, ctx=ctx) moving_variance1 = nd.zeros(c1, ctx=ctx) # 输出通道 = 50, 卷积核 = (3,3) c2 = 50 W2 = nd.random_normal(shape=(c2,c1,3,3), scale=weight_scale, ctx=ctx) b2 = nd.zeros(c2, ctx=ctx) # 第2层批量归一化 gamma2 = nd.random.normal(shape=c2, scale=weight_scale, ctx=ctx) beta2 = nd.random.normal(shape=c2, scale=weight_scale, ctx=ctx) moving_mean2 = nd.zeros(c2, ctx=ctx) moving_variance2 = nd.zeros(c2, ctx=ctx) # 输出维度 = 128 o3 = 128 W3 = nd.random.normal(shape=(1250, o3), scale=weight_scale, ctx=ctx) b3 = nd.zeros(o3, ctx=ctx) # 输出维度 = 10 W4 = nd.random_normal(shape=(W3.shape[1], 10), scale=weight_scale, ctx=ctx) b4 = nd.zeros(W4.shape[1], ctx=ctx) # 注意这里moving_*是不需要更新的 params = [W1, b1, gamma1, beta1, W2, b2, gamma2, beta2, W3, b3, W4, b4] for param in params: param.attach_grad() # BatchNorm 添加的位置: 卷积层之后,激活函数之前 def net(X, is_training=False, verbose=False): X = X.as_in_context(W1.context) # 第一层卷积 h1_conv = nd.Convolution( data=X, weight=W1, bias=b1, kernel=W1.shape[2:], num_filter=c1) ### 添加了批量归一化层 h1_bn = batch_norm(h1_conv, gamma1, beta1, is_training, moving_mean1, moving_variance1) h1_activation = nd.relu(h1_bn) h1 = nd.Pooling( data=h1_activation, pool_type="max", kernel=(2,2), stride=(2,2)) # 第二层卷积 h2_conv = nd.Convolution( data=h1, weight=W2, bias=b2, kernel=W2.shape[2:], num_filter=c2) ### 添加了批量归一化层 h2_bn = batch_norm(h2_conv, gamma2, beta2, is_training, moving_mean2, moving_variance2) h2_activation = nd.relu(h2_bn) h2 = nd.Pooling(data=h2_activation, pool_type="max", kernel=(2,2), stride=(2,2)) h2 = nd.flatten(h2) # 第一层全连接 h3_linear = nd.dot(h2, W3) + b3 h3 = nd.relu(h3_linear) # 第二层全连接 h4_linear = nd.dot(h3, W4) + b4 if verbose: print('1st conv block:', h1.shape) print('2nd conv block:', h2.shape) print('1st dense:', h3.shape) print('2nd dense:', h4_linear.shape) print('output:', h4_linear) return h4_linear from mxnet import autograd from mxnet import gluon batch_size = 256 train_data, test_data = utils.load_data_fashion_mnist(batch_size) softmax_cross_entropy = gluon.loss.SoftmaxCrossEntropyLoss() learning_rate = 0.2 for epoch in range(5): train_loss = 0. train_acc = 0. for data, label in train_data: label = label.as_in_context(ctx) with autograd.record(): output = net(data, is_training=True) loss = softmax_cross_entropy(output, label) loss.backward() utils.SGD(params, learning_rate/batch_size) train_loss += nd.mean(loss).asscalar() train_acc += utils.accuracy(output, label) test_acc = utils.evaluate_accuracy(test_data, net, ctx) print("Epoch %d. Loss: %f, Train acc %f, Test acc %f" % ( epoch, train_loss/len(train_data), train_acc/len(train_data), test_acc)) ```
github_jupyter
<a href="https://colab.research.google.com/github/wguesdon/BrainPost_google_analytics/blob/master/Report_v01_02.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> # Project Presentation ## About BrainPost Kasey Hemington runs BrainPost with a fellow PhD friend, Leigh Christopher as a way to keep in touch with her scientific roots while working as a data scientist! Every Tuesday since we started in early 2018, we publish our e-newsletter which is three short summaries of new neuroscience studies that have just come out. After publishing on our website each Tuesday, we typically post on Twitter (@brainpostco) and Facebook (@brainpostco) once to announce the release of the e-newsletter and three times (once for each of the three summaries) to highlight each summary. There are a few exceptions to our publishing schedule. Sometimes we post extra articles here: https://www.brainpost.co/brainpost-life-hacks, and also a few weeks we've only been able to publish two summaries instead of three. At around the same time as we publish the e-newsletter on the website each Tuesday, we also send it to our ~1700 email subscribers directly (via mailchimp). ## About the Challenge We're always wondering if we should change what type of content we're publishing, and how people are finding us. From some small surveys we've done for example, we find people would be interested in more casual/applicable to daily life content (like we publish on this tab https://www.brainpost.co/brainpost-life-hacks) than more technical summaries of complex articles, but we also aren't really sure if that's just a subgroup of people who get the e-newsletter to their email inbox filling out the survey. We also might have two audiences - academics and non-academics (?) who like different things. ## About the data In the remaining tabs of this workbook there is weekly pageview data for each page on the website (I think..according to our google analytics). Each tab represents pageview data for two one week periods, split up by the page name/URL and the source/medium (first two columns). My general idea was people can look at the data at a weekly cadence and figure out stats about different pages/content, BUT with google analytics a huge problem is that it doesn't really take into account that different content is published on different days (for example, a stat about 'only 2 pageviews' to a page is meaningless to me if it is because the page was only published an hour ago). Our content is published weekly so it should approximately match that I extracted the data weekly. My apologies for the formatting... Google analytics was a nightmare to extract from - a very manual process. But, I guess data cleaning is a part of the process! So, we've been publishing ~3 new pages a week since 2018, but I've only included data starting in July 2020 because the data extraction process is so manual. The date of publication can be extracted from the URL. We've noticed some pages seem really popular possibly for strange reasons (maybe they come up on the first page of google because people are searching for something really similar?) and those anomalies might not reflect what people like overall about the site. There is also a tab with a page (URL) to page title lookup ## The questions we'd like to ask What content (or types of content) is most popular (what are patterns we see in popular content) and is different content popular amongst different subgroups (e.g. by source/medium)? Any question that will help us to take action to better tailor our content to our audience(s) or understand how traffic comes to the site. Where are people visiting from (source-wise)? ## How this challenge works: Just like the last challenge, you can submit an entry by posting a github link to your analysis on the signup page (second tab). Use any combination of reports/visuals/code/presentation you think is best - just make sure your code is accessible! Let's have a few days for people to review the data and ask any questions and then we can discuss what everyone thinks is a reasonable deadline/timeline and set the timeline from there. If you have any further data requests you think would help answer the questions I might be able to get it (google analytics or mailchimp). After the deadline I'll choose the first and second place submission. The criteria will be whatever submission provides the most compelling evidence that gives me a clear idea of what actions we could take next to improve the site. # Initialize ``` # Load libraries # https://stackoverflow.com/questions/58667299/google-colab-why-matplotlib-has-a-behaviour-different-then-default-after-import # Panda profilling alter seaborn plotting style from google.colab import drive # to load data from google drive import numpy as np # linear algebra import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv) import matplotlib.pyplot as plt # ploting the data import seaborn as sns # ploting the data %matplotlib inline import math # calculation drive.mount("/content/drive") ``` # Data Cleaning ``` # Load the Pages Titles pages_tiltes = pd.read_excel(r'/content/drive/MyDrive/DSS/BrainPost_blog/Data/DSS BrainPost Web Analytics Challenge!.xlsx', sheet_name='Page to Page Title lookup', skiprows=6) # Load the content-pages oct 4-17 # Skip the first 6 rows with headers # Skip the last 18 rows with total view # See https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.read_excel.html # See https://stackoverflow.com/questions/49876077/pandas-reading-excel-file-starting-from-the-row-below-that-with-a-specific-valu #content-pages july 12-25 july12_25 = pd.read_excel(r'/content/drive/MyDrive/DSS/BrainPost_blog/Data/DSS BrainPost Web Analytics Challenge!.xlsx', sheet_name='content-pages july 12-25', skiprows=6, skipfooter=20) #july12_25 #content-pages jul 26 - aug 8 jul26_aug8 = pd.read_excel(r'/content/drive/MyDrive/DSS/BrainPost_blog/Data/DSS BrainPost Web Analytics Challenge!.xlsx', sheet_name='content-pages jul 26 - aug 8', skiprows=6, skipfooter=20) #jul26_aug8 #content-pages aug 9-22 aug9_22 = pd.read_excel(r'/content/drive/MyDrive/DSS/BrainPost_blog/Data/DSS BrainPost Web Analytics Challenge!.xlsx', sheet_name='content-pages aug 9-22', skiprows=6, skipfooter=20) #aug9_22 #content-pages aug23-sept5 aug23_sept5 = pd.read_excel(r'/content/drive/MyDrive/DSS/BrainPost_blog/Data/DSS BrainPost Web Analytics Challenge!.xlsx', sheet_name='content-pages aug23-sept5', skiprows=6, skipfooter=20) #aug23_sept5 #content-pages sept 6-19 sept6_19 = pd.read_excel(r'/content/drive/MyDrive/DSS/BrainPost_blog/Data/DSS BrainPost Web Analytics Challenge!.xlsx', sheet_name='content-pages sept 6-19', skiprows=6, skipfooter=20) #sept6_19 #content-pages sept 20-oct3 sept20_oct3 = pd.read_excel(r'/content/drive/MyDrive/DSS/BrainPost_blog/Data/DSS BrainPost Web Analytics Challenge!.xlsx', sheet_name='content-pages sept 20-oct3', skiprows=6, skipfooter=20) #sept20_oct3 #content-pages oct 4-17 Oct4_17 = pd.read_excel(r'/content/drive/MyDrive/DSS/BrainPost_blog/Data/DSS BrainPost Web Analytics Challenge!.xlsx', sheet_name='content-pages oct 4-17', skiprows=6, skipfooter=20) #Oct4_17 #content-pages oct 18-31 Oct18_31 = pd.read_excel(r'/content/drive/MyDrive/DSS/BrainPost_blog/Data/DSS BrainPost Web Analytics Challenge!.xlsx', sheet_name='content-pages oct 18-31', skiprows=6, skipfooter=20) #Oct18_31 # Combine data frame # https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.combine.html # https://pandas.pydata.org/pandas-docs/stable/user_guide/merging.html frames = [july12_25, jul26_aug8, aug9_22, aug23_sept5, sept6_19, sept20_oct3, Oct4_17, Oct18_31] df = pd.concat(frames) df df_outer = pd.merge(df, pages_tiltes, how='outer', on='Page') df = df_outer.copy() # Determine the number of missing values for every column df.isnull().sum() # Filter entries with Pageviews >= 0 # https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.filter.html # https://cmdlinetips.com/2018/02/how-to-subset-pandas-dataframe-based-on-values-of-a-column/ df_Pageviews_filtered = df[df['Pageviews'] > 0] df_Pageviews_filtered.isnull().sum() # Create a value count column # https://stackoverflow.com/questions/29791785/python-pandas-add-a-column-to-my-dataframe-that-counts-a-variable df_Pageviews_filtered['Page_count'] = df_Pageviews_filtered.groupby('Page')['Page'].transform('count') df_Pageviews_filtered['Source_count'] = df_Pageviews_filtered.groupby('Source / Medium')['Source / Medium'].transform('count') df_Pageviews_filtered['Page_Title_count'] = df_Pageviews_filtered.groupby('Page Title')['Page Title'].transform('count') df = df_Pageviews_filtered.copy() # Merge all facebook # https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.replace.html # I chose to combine all facebook referal # m.facebook indicate mobile trafic. # There is no equivalent for the other Source so I will ignnore this for this part of the analysis. df = df.replace(to_replace = ["l.facebook.com / referral", "m.facebook.com / referral", "facebook.com / referral"], value = "facebook") df = df.replace( to_replace = "google / organic", value = "google") df = df.replace( to_replace = "(direct) / (none)", value = "direct") # t.co is twitter # https://analypedia.carloseo.com/t-co-referral/ df = df.replace( to_replace = "t.co / referral", value = "twitter") df = df.replace( to_replace = "bing / organic", value = "bing") # Deal with time data df['time'] = pd.to_datetime(df['Avg. Time on Page'], format='%H:%M:%S') df['time'] = pd.to_datetime(df['time'], unit='s') df[['time0']] = pd.to_datetime('1900-01-01 00:00:00') df['time_diff'] = df['time'] - df['time0'] df['time_second'] = df['time_diff'].dt.total_seconds().astype(int) ``` # Data Visualization ``` # https://stackoverflow.com/questions/42528921/how-to-prevent-overlapping-x-axis-labels-in-sns-countplot # https://stackoverflow.com/questions/46623583/seaborn-countplot-order-categories-by-count # https://stackoverflow.com/questions/25328003/how-can-i-change-the-font-size-using-seaborn-facetgrid data = df.copy() data = data[data['Page_count'] >= 60] plt.figure(figsize=(10, 5)) sns.set(font_scale=1.25) sns.set_style("white") title = 'Top 5 Pages visited' sns.countplot(y = data['Page'], order = data['Page'].value_counts().index) plt.title(title) plt.ioff() ``` **Figure: Top 5 pages visited all time** ``` data = df.copy() data = data[data['Page_count'] >= 25] plt.figure(figsize=(10, 10)) sns.set(font_scale=1.25) sns.set_style("white") title = 'Top Pages visited more than 25 times' sns.countplot(y = data['Page'], order = data['Page'].value_counts().index, color='#2b7bba') plt.title(title) plt.ioff() ``` **Figure: Top pages visited at least 25 times** ``` # https://www.statology.org/pandas-filter-multiple-conditions/ data = df.copy() data = data[(data['Source_count'] >= 270)] sns.displot(data, x="time_second", kind="kde", hue='Source / Medium') plt.ioff() ``` **Figure: Average time spent on page for the top sources** The 0 second trafic is not restricted to google. See this [discussion](https://support.google.com/google-ads/thread/1455669?hl=en) related to 0 second visits on Google Analytics. The issue seems related to bounce rate preventing Google Analytic to accuralely measure the time spent on the page. ``` # Time spend on page # https://www.statology.org/pandas-filter-multiple-conditions/ data = df.copy() data = data[(data['Source_count'] >= 270)] sns.set(font_scale=1.25) sns.set_style("white") sns.displot(data, x="Bounce Rate", kind="kde", hue='Source / Medium') plt.ioff() ``` **Figure: Average Bounce Rate on page for the top sources** The bounce rate correspond to users that take no other action after landing on the site such as visiting another page. From the figure above this seems to be frequent on the blog which make sense if the user are interested in a particular article. It also potentially means that for most traffic on the site the average time spend on a page can't be calculated. There seems to be way to work around this as seen in this [discussion](https://support.google.com/google-ads/thread/1455669?hl=en). ``` data = df.copy() data = data[data['Source_count'] >= 100] plt.figure(figsize=(10, 5)) sns.set(font_scale=1.25) sns.set_style("white") title = 'Top 5 Source for Pages visited more than 100 time' sns.countplot(y = data['Source / Medium'], order = data['Source / Medium'].value_counts().index) plt.title(title) plt.ioff() ``` **Figure: Top 5 Source for Pages visited more than 100 time** ``` # https://www.statology.org/pandas-filter-multiple-conditions/ data = df.copy() data = data[data['Source_count'] >= 100] data = data[data['time_second'] > 5] y="Source / Medium" x="time_second" title = 'Top 5 Source for Pages visited more than 100 time and viewed more than 5 seconds' plt.figure(figsize=(10, 10)) sns.set(font_scale=1.25) sns.set_style("white") sns.boxplot(x=x, y=y, data=data, notch=True, showmeans=False, meanprops={"marker":"s","markerfacecolor":"white", "markeredgecolor":"black"}) plt.title(title) plt.ioff() # https://www.statology.org/pandas-filter-multiple-conditions/ data = df.copy() data = data[(data['Page_count'] >= 60) & (data['Source_count'] >= 100)] data = data[data['time_second'] > 5] title = "" y="Page" x="time_second" plt.figure(figsize=(10, 8)) sns.set(font_scale=1.25) sns.set_style("white") sns.boxplot(x=x, y=y, data=data, notch=False, showmeans=False, meanprops={"marker":"s","markerfacecolor":"white", "markeredgecolor":"black"}, hue='Source / Medium') plt.title(title) plt.ioff() # https://www.statology.org/pandas-filter-multiple-conditions/ data = df.copy() data = data[(data['Page_count'] >= 61) & (data['Source_count'] >= 100)] data = data[data['time_second'] > 5] data = data[data['time_second'] < 600] title = "" y="Page" x="time_second" plt.figure(figsize=(10, 8)) sns.set(font_scale=1.25) sns.set_style("white") sns.boxplot(x=x, y=y, data=data, notch=False, showmeans=False, meanprops={"marker":"s","markerfacecolor":"white", "markeredgecolor":"black"}, hue='Source / Medium') plt.title(title) plt.ioff() # https://stackoverflow.com/questions/42528921/how-to-prevent-overlapping-x-axis-labels-in-sns-countplot # https://stackoverflow.com/questions/46623583/seaborn-countplot-order-categories-by-count # https://stackoverflow.com/questions/25328003/how-can-i-change-the-font-size-using-seaborn-facetgrid data = df.copy() data = data[data['Page_count'] >= 50] data = data[data['Source_count'] >= 100] #data = data[data['time_second'] > 5] plt.figure(figsize=(15, 10)) sns.set(font_scale=1.25) sns.set_style("white") title = 'Average View for page visited more than 50 times from top source' sns.countplot(data = data, y = 'Page', hue='Source / Medium') # Put the legend out of the figure # https://stackoverflow.com/questions/30490740/move-legend-outside-figure-in-seaborn-tsplot plt.legend(bbox_to_anchor=(1.05, 1), loc=2, borderaxespad=0.) plt.title(title) plt.ioff() ``` # Conclusion * **What content (or types of content) is most popular (what are patterns we see in popular content) and is different content popular amongst different subgroups (e.g. by source/medium)?** The homepage, weekly-brainpost, and archive are the most popular pages on the website. We would need more information to analyze trends in page popularity by sources. The user with direct access, possibly via a mailing list, tend to visit the weekly brain posts pages more. * **Any question that will help us to take action to better tailor our content to our audience(s) or understand how traffic comes to the site.** What is causing the high bounce rate in the analytics? A large number of visitors are registered as spending 0 seconds on the page. This, combined with the high bounce rate, could indicate an issue in measuring the site traffic. This Google support [page](https://www.brainpost.co/about-brainpost) offers suggestions to solve this problem. * **Where are people visiting from (source-wise)?** Google, Facebook and direct access are the three most common traffic sources. # Final remarks I put this together as a quick and modest analysis. I appreciate real-life projects, so I plan to investigate this further. Any feedback is welcome. * E-mail: wguesdon@gmail.com * Twitter: williamguesdon * LinkedIn: william-guesdon
github_jupyter
# Merry Christmas and Happy Holidays ! ## Using Python Author: Sumudu Tennakoon\ Created Date: 2020-12-24 ## Goal To create a Greeting Card in the console output with  * An illustration of a Christmas tree using characters. * Randomly distributed Ornaments and Decorations. * a start in the top of of the tree * A border at the bottom of the leaves * A base attached to the tree * Greeting message "MERRY CHRISTMAS AND HAPPY HOLIDAYS" * Colors: red star green tree, a blue border at the bottom, magenta base, and the message in red. ## Task 1: Setup Tree Parameters (width, body height, and full hight with the base) ``` width = 25 # Tree width body_height = 25 # Height without stand full_height = 31 # Total height ``` ## Task 2: Print Basic Tree Body ``` tree = '' for x in range(1, full_height, 2): s = '' if x == 1 : s='*' print(s.center(width)) elif x < body_height: for y in range(0,x): s = s + '^' print(s.center(width)) ``` ## Task 3: Add Bottom Border Ribbon, Base, and Ground ``` tree = '' for x in range(1, full_height, 2): s = '' if x == 1 : s='*' print(s.center(width)) elif x < body_height: for y in range(0,x): s = s + '^' print(s.center(width)) elif x == body_height: s = s + '#'*width print(s) elif x > body_height and x < full_height: s = s + 'II' print(s.center(width)) print(('~'*width).center(width)) ``` ## Task 4: Add Ornaments ``` from random import randint tree = '' for x in range(1, full_height, 2): s = '' if x == 1 : s='*' print(s.center(width)) elif x < body_height: for y in range(0,x): b = randint(0, width) # Location to add random decoration 1 a = randint(0, width) # Add random decoration 2 c = randint(0, width) # Add random decoration 3 if y==b: s = s + 'o' elif y==a: s = s + '@' elif y==c: s = s + '+' else: s = s + '^' print(s.center(width)) elif x == body_height: s = s + '#'*width print(s) elif x > body_height and x < full_height: s = s + 'II' print(s.center(width)) print(('~'*width).center(width)) ``` ## Task 5: Add Greeting Message ``` from random import randint tree = '' for x in range(1, full_height, 2): s = '' if x == 1 : s='*' print(s.center(width)) elif x < body_height: for y in range(0,x): b = randint(0, width) # Location to add random decoration 1 a = randint(0, width) # Add random decoration 2 c = randint(0, width) # Add random decoration 3 if y==b: s = s + 'o' elif y==a: s = s + '@' elif y==c: s = s + '+' else: s = s + '^' print(s.center(width)) elif x == body_height: s = s + '#'*width print(s) elif x > body_height and x < full_height: s = s + 'II' print(s.center(width)) print(('~'*width).center(width)) print('MERRY CHRISTMAS'.center(width)) print('AND'.center(width)) print('HAPPY HOLIDAYS !'.center(width)) print(('~'*width).center(width)) ``` ## Task 6: Add colors ``` from colorama import * # To change font colors (https://github.com/tartley/colorama) from random import randint width = 25 # Tree width body_height = 25 # Height without stand full_height = 31 # Total height tree = '' for x in range(1, full_height, 2): s = '' if x == 1 : s='*' print(Fore.RED + s.center(width)) elif x < body_height: for y in range(0,x): b = randint(0, width) # Location to add random decoration 1 a = randint(0, width) # Add random decoration 2 c = randint(0, width) # Add random decoration 3 if y==b: s = s + 'o' elif y==a: s = s + '@' elif y==c: s = s + '+' else: s = s + '^' print(Fore.GREEN + s.center(width)) elif x == body_height: s = s + '#'*width print(Fore.BLUE + s.center(width)) elif x > body_height and x < full_height: s = s + 'II' print(Fore.MAGENTA + s.center(width)) print(('~'*width).center(width)) print(Fore.RED + 'MERRY CHRISTMAS'.center(width)) print(Fore.RED + 'AND'.center(width)) print(Fore.RED + 'HAPPY HOLIDAYS !'.center(width)) print(('~'*width).center(width)) ``` ## Bonus * Add new Ornament * Add Colors to Ornaments (printing characters individually) ``` from colorama import * # To change font colors (https://github.com/tartley/colorama) from random import randint width = 25 # Tree width body_height = 25 # Height without stand full_height = 31 # Total height tree = '' for x in range(1, full_height, 2): s = '' center = int((width-1)/2) padding = center-int((x-1)/2) print('\n', end='') if x == 1 : print(' '*padding, end='') print(Fore.RED + '*', end='') elif x < body_height: print(' '*padding, end='') for y in range(0,x): b = randint(0, width) # Location to add random decoration 1 a = randint(0, width) # Add random decoration 2 c = randint(0, width) # Add random decoration 3 if y==b: print(Fore.YELLOW + 'o', end='') elif y==a: print(Fore.CYAN + '@', end='') elif y==c: print(Fore.RED + '+', end='') else: print(Fore.GREEN + '^', end='') elif x == body_height: print(' '*padding, end='') print(Fore.BLUE + '#'*width, end='') elif x > body_height and x < full_height: print(' '*center, end='') print(Fore.MAGENTA + 'II', end='') print('\n', end='') print(('~'*width).center(width)) print(Fore.RED + 'MERRY CHRISTMAS'.center(width)) print(Fore.RED + 'AND'.center(width)) print(Fore.RED + 'HAPPY HOLIDAYS !'.center(width)) print(('~'*width).center(width)) ``` <hr> ©2020 Sumudu Tennakoon
github_jupyter
# Decision tree for regression In this notebook, we present how decision trees are working in regression problems. We show differences with the decision trees previously presented in a classification setting. First, we load the penguins dataset specifically for solving a regression problem. <div class="admonition note alert alert-info"> <p class="first admonition-title" style="font-weight: bold;">Note</p> <p class="last">If you want a deeper overview regarding this dataset, you can refer to the Appendix - Datasets description section at the end of this MOOC.</p> </div> ``` import pandas as pd penguins = pd.read_csv("../datasets/penguins_regression.csv") feature_name = "Flipper Length (mm)" target_name = "Body Mass (g)" data_train, target_train = penguins[[feature_name]], penguins[target_name] ``` To illustrate how decision trees are predicting in a regression setting, we will create a synthetic dataset containing all possible flipper length from the minimum to the maximum of the original data. ``` import numpy as np data_test = pd.DataFrame(np.arange(data_train[feature_name].min(), data_train[feature_name].max()), columns=[feature_name]) ``` Using the term "test" here refers to data that was not used for training. It should not be confused with data coming from a train-test split, as it was generated in equally-spaced intervals for the visual evaluation of the predictions. Note that this is methodologically valid here because our objective is to get some intuitive understanding on the shape of the decision function of the learned decision trees. However computing an evaluation metric on such a synthetic test set would be meaningless since the synthetic dataset does not follow the same distribution as the real world data on which the model will be deployed. ``` import matplotlib.pyplot as plt import seaborn as sns sns.scatterplot(data=penguins, x=feature_name, y=target_name, color="black", alpha=0.5) _ = plt.title("Illustration of the regression dataset used") ``` We will first illustrate the difference between a linear model and a decision tree. ``` from sklearn.linear_model import LinearRegression linear_model = LinearRegression() linear_model.fit(data_train, target_train) target_predicted = linear_model.predict(data_test) sns.scatterplot(data=penguins, x=feature_name, y=target_name, color="black", alpha=0.5) plt.plot(data_test[feature_name], target_predicted, label="Linear regression") plt.legend() _ = plt.title("Prediction function using a LinearRegression") ``` On the plot above, we see that a non-regularized `LinearRegression` is able to fit the data. A feature of this model is that all new predictions will be on the line. ``` ax = sns.scatterplot(data=penguins, x=feature_name, y=target_name, color="black", alpha=0.5) plt.plot(data_test[feature_name], target_predicted, label="Linear regression", linestyle="--") plt.scatter(data_test[::3], target_predicted[::3], label="Predictions", color="tab:orange") plt.legend() _ = plt.title("Prediction function using a LinearRegression") ``` Contrary to linear models, decision trees are non-parametric models: they do not make assumptions about the way data is distributed. This will affect the prediction scheme. Repeating the above experiment will highlight the differences. ``` from sklearn.tree import DecisionTreeRegressor tree = DecisionTreeRegressor(max_depth=1) tree.fit(data_train, target_train) target_predicted = tree.predict(data_test) sns.scatterplot(data=penguins, x=feature_name, y=target_name, color="black", alpha=0.5) plt.plot(data_test[feature_name], target_predicted, label="Decision tree") plt.legend() _ = plt.title("Prediction function using a DecisionTreeRegressor") ``` We see that the decision tree model does not have an *a priori* distribution for the data and we do not end-up with a straight line to regress flipper length and body mass. Instead, we observe that the predictions of the tree are piecewise constant. Indeed, our feature space was split into two partitions. Let's check the tree structure to see what was the threshold found during the training. ``` from sklearn.tree import plot_tree _, ax = plt.subplots(figsize=(8, 6)) _ = plot_tree(tree, feature_names=feature_name, ax=ax) ``` The threshold for our feature (flipper length) is 206.5 mm. The predicted values on each side of the split are two constants: 3683.50 g and 5023.62 g. These values corresponds to the mean values of the training samples in each partition. In classification, we saw that increasing the depth of the tree allowed us to get more complex decision boundaries. Let's check the effect of increasing the depth in a regression setting: ``` tree = DecisionTreeRegressor(max_depth=3) tree.fit(data_train, target_train) target_predicted = tree.predict(data_test) sns.scatterplot(data=penguins, x=feature_name, y=target_name, color="black", alpha=0.5) plt.plot(data_test[feature_name], target_predicted, label="Decision tree") plt.legend() _ = plt.title("Prediction function using a DecisionTreeRegressor") ``` Increasing the depth of the tree will increase the number of partition and thus the number of constant values that the tree is capable of predicting. In this notebook, we highlighted the differences in behavior of a decision tree used in a classification problem in contrast to a regression problem.
github_jupyter
# The Correlation Coefficient By Evgenia "Jenny" Nitishinskaya and Delaney Granizo-Mackenzie with example algorithms by David Edwards Part of the Quantopian Lecture Series: * [www.quantopian.com/lectures](https://www.quantopian.com/lectures) * [github.com/quantopian/research_public](https://github.com/quantopian/research_public) Notebook released under the Creative Commons Attribution 4.0 License. Please do not remove this attribution. --- The correlation coefficient measures the extent to which the relationship between two variables is linear. Its value is always between -1 and 1. A positive coefficient indicates that the variables are directly related, i.e. when one increases the other one also increases. A negative coefficient indicates that the variables are inversely related, so that when one increases the other decreases. The closer to 0 the correlation coefficient is, the weaker the relationship between the variables. The correlation coefficient of two series $X$ and $Y$ is defined as $$r = \frac{Cov(X,Y)}{std(X)std(Y)}$$ where $Cov$ is the covariance and $std$ is the standard deviation. Two random sets of data will have a correlation coefficient close to 0: ##Correlation vs. Covariance Correlation is simply a normalized form of covariance. They are otherwise the same and are often used semi-interchangeably in everyday conversation. It is obviously important to be precise with language when discussing the two, but conceptually they are almost identical. ###Covariance isn't that meaningful by itself Let's say we have two variables $X$ and $Y$ and we take the covariance of the two. ``` import numpy as np import pandas as pd import matplotlib.pyplot as plt X = np.random.rand(50) Y = 2 * X + np.random.normal(0, 0.1, 50) np.cov(X, Y)[0, 1] ``` So now what? What does this mean? Correlation uses information about the variance of X and Y to normalize this metric. Once we've normalized the metric to the -1 to 1 scale, we can make meaningful statements and compare correlations. To see how this is done consider the formula. $$\frac{Cov(X, Y)}{std(X)std(Y)}$$ $$= \frac{Cov(X, Y)}{\sqrt{var(X)}\sqrt{var(Y)}}$$ $$= \frac{Cov(X, Y)}{\sqrt{Cov(X, X)}\sqrt{Cov(Y, Y)}}$$ To demonstrate this let's compare the correlation and covariance of two series. ``` X = np.random.rand(50) Y = 2 * X + 4 print 'Covariance of X and Y: \n' + str(np.cov(X, Y)) print 'Correlation of X and Y: \n' + str(np.corrcoef(X, Y)) ``` ## Why do both `np.cov` and `np.corrcoef` return matricies? The covariance matrix is an important concept in statistics. Often people will refer to the covariance of two variables $X$ and $Y$, but in reality that is just one entry in the covariance matrix of $X$ and $Y$. For each input variable we have one row and one column. The diagonal is just the variance of that variable, or $Cov(X, X)$, entries off the diagonal are covariances between different variables. The matrix is symmetric across the diagonal. Let's check that this is true. ``` cov_matrix = np.cov(X, Y) # We need to manually set the degrees of freedom on X to 1, as numpy defaults to 0 for variance # This is usually fine, but will result in a slight mismatch as np.cov defaults to 1 error = cov_matrix[0, 0] - X.var(ddof=1) print 'error: ' + str(error) X = np.random.rand(50) Y = np.random.rand(50) plt.scatter(X,Y) plt.xlabel('X Value') plt.ylabel('Y Value') # taking the relevant value from the matrix returned by np.cov print 'Correlation: ' + str(np.cov(X,Y)[0,1]/(np.std(X)*np.std(Y))) # Let's also use the builtin correlation function print 'Built-in Correlation: ' + str(np.corrcoef(X, Y)[0, 1]) ``` Now let's see what two correlated sets of data look like. ``` X = np.random.rand(50) Y = X + np.random.normal(0, 0.1, 50) plt.scatter(X,Y) plt.xlabel('X Value') plt.ylabel('Y Value') print 'Correlation: ' + str(np.corrcoef(X, Y)[0, 1]) ``` Let's dial down the relationship by introducing more noise. ``` X = np.random.rand(50) Y = X + np.random.normal(0, .2, 50) plt.scatter(X,Y) plt.xlabel('X Value') plt.ylabel('Y Value') print 'Correlation: ' + str(np.corrcoef(X, Y)[0, 1]) ``` Finally, let's see what an inverse relationship looks like. ``` X = np.random.rand(50) Y = -X + np.random.normal(0, .1, 50) plt.scatter(X,Y) plt.xlabel('X Value') plt.ylabel('Y Value') print 'Correlation: ' + str(np.corrcoef(X, Y)[0, 1]) ``` We see a little bit of rounding error, but they are clearly the same value. ## How is this useful in finance? ### Determining related assets Once we've established that two series are probably related, we can use that in an effort to predict future values of the series. For example. let's loook at the price of Apple and a semiconductor equipment manufacturer, Lam Research Corporation. ``` # Pull the pricing data for our two stocks and S&P 500 start = '2013-01-01' end = '2015-01-01' bench = get_pricing('SPY', fields='price', start_date=start, end_date=end) a1 = get_pricing('LRCX', fields='price', start_date=start, end_date=end) a2 = get_pricing('AAPL', fields='price', start_date=start, end_date=end) plt.scatter(a1,a2) plt.xlabel('LRCX') plt.ylabel('AAPL') plt.title('Stock prices from ' + start + ' to ' + end) print "Correlation coefficients" print "LRCX and AAPL: ", np.corrcoef(a1,a2)[0,1] print "LRCX and SPY: ", np.corrcoef(a1,bench)[0,1] print "AAPL and SPY: ", np.corrcoef(bench,a2)[0,1] ``` ### Constructing a portfolio of uncorrelated assets Another reason that correlation is useful in finance is that uncorrelated assets produce the best portfolios. The intuition for this is that if the assets are uncorrelated, a drawdown in one will not correspond with a drawdown in another. This leads to a very stable return stream when many uncorrelated assets are combined. # Limitations ## Significance It's hard to rigorously determine whether or not a correlation is significant, especially when, as here, the variables are not normally distributed. Their correlation coefficient is close to 1, so it's pretty safe to say that the two stock prices are correlated over the time period we use, but is this indicative of future correlation? If we examine the correlation of each of them with the S&P 500, we see that it is also quite high. So, AAPL and LRCX are slightly more correlated with each other than with the average stock. One fundamental problem is that it is easy to datamine correlations by picking the right time period. To avoid this, one should compute the correlation of two quantities over many historical time periods and examine the distibution of the correlation coefficient. More details on why single point estimates are bad will be covered in future notebooks. As an example, remember that the correlation of AAPL and LRCX from 2013-1-1 to 2015-1-1 was 0.95. Let's take the rolling 60 day correlation between the two to see how that varies. ``` rolling_correlation = pd.rolling_corr(a1, a2, 60) plt.plot(rolling_correlation) plt.xlabel('Day') plt.ylabel('60-day Rolling Correlation') ``` ## Non-Linear Relationships The correlation coefficient can be useful for examining the strength of the relationship between two variables. However, it's important to remember that two variables may be associated in different, predictable ways which this analysis would not pick up. For instance, one variable might precisely follow the behavior of a second, but with a delay. There are techniques for dealing with this lagged correlation. Alternatively, a variable may be related to the rate of change of another. Neither of these relationships are linear, but can be very useful if detected. Additionally, the correlation coefficient can be very sensitive to outliers. This means that including or excluding even a couple of data points can alter your result, and it is not always clear whether these points contain information or are simply noise. As an example, let's make the noise distribution poisson rather than normal and see what happens. ``` X = np.random.rand(100) Y = X + np.random.poisson(size=100) plt.scatter(X, Y) np.corrcoef(X, Y)[0, 1] ``` In conclusion, correlation is a powerful technique, but as always in statistics, one should be careful not to interpret results where there are none.
github_jupyter
# Data Preparation with SageMaker Data Wrangler (Part 2) > A detailed guide on AWS SageMaker Data Wrangler to prepare data for machine learning models. This is a five parts series where we will prepare, import, explore, process, and export data using AWS Data Wrangler. You are reading **Part 2:Import data from multiple sources using Data Wrangler**. - toc: true - badges: true - comments: true - categories: [aws, ml, sagemaker] - keyword: [aws, ml, sagemaker, wrangler] - image: images/copied_from_nb/images/2022-05-23-aws-sagemaker-wrangler-p2.jpeg ![](images/2022-05-23-aws-sagemaker-wrangler-p2.jpeg) # Enviornment This notebook is prepared with Amazon SageMaker Studio using `Python 3 (Data Science)` Kernel and `ml.t3.medium` instance. # About This is a detailed guide on using **AWS SageMaker Data Wrangler** service to prepare data for machine learning models. SageMaker Data Wrangler is a multipurpose tool with which you can * import data from multiple sources * explore data with visualizations * apply transformations * export data for ml training This guide is divided into five parts * [Part 1: Prepare synthetic data and place it on multiple sources](https://hassaanbinaslam.github.io/myblog/aws/ml/sagemaker/2022/05/17/aws-sagemaker-wrangler-p1.html) * **Part 2: Import data from multiple sources using Data Wrangler (You are here)** * [Part 3: Explore data with Data Wrangler visualizations](https://hassaanbinaslam.github.io/myblog/aws/ml/sagemaker/2022/05/24/aws-sagemaker-wrangler-p3.html) * [Part 4: Preprocess data using Data Wrangler](https://hassaanbinaslam.github.io/myblog/aws/ml/sagemaker/2022/05/25/aws-sagemaker-wrangler-p4.html) * [Part 5: Export data for ML training](https://hassaanbinaslam.github.io/myblog/aws/ml/sagemaker/2022/05/26/aws-sagemaker-wrangler-p5.html) # Part 2: Import data from multiple sources using Data Wrangler In this post, we will create SageMaker Data Wrangler Flow pipeline to import data from multiple sources. Once data is imported, we will then add a step to join the data into a single dataset that can be used for training ML models. ## Launch SageMaker Data Wrangler Flow Create a new Data Wrangler flow by clicking on the main menu tabs `File > New > Data Wrangler Flow`. ![data-wrangler-new-flow](images/2022-05-23-aws-sagemaker-wrangler-p2/data-wrangler-new-flow.png) Once launched SageMaker may take a minute to initialize a new flow. The reason for this is SageMaker will launch a separate machine in the background `ml.m5.4xlarge` with 16vCPU and 64 GiB memory for processing flow files. A flow file is a JSON file that just captures all the steps performed from the Flow UI console. When you execute the flow, the Flow engine parses this file and performs all the steps. Once a new flow file is available, rename it to `customer-churn.flow`. ![data-wrangler-flow-ready](images/2022-05-23-aws-sagemaker-wrangler-p2/data-wrangler-flow-ready.png) ## Import data from sources First, we will create a flow to import data (created in the part-1 post) from S3 bucket. For this from the flow UI click on **Amazon S3 bucket**. From the next window select the bucket name **S3://sagemaker-us-east-1-801598032724**. In your case, it could be different where you have stored the data. From the UI select the filename "telco_churn_customer_info.csv" and click **Import** ![customer-churn-s3](images/2022-05-23-aws-sagemaker-wrangler-p2/customer-churn-s3.png) Once the data is imported repeat the steps for the filename "telco_churn_account_info.csv". If you are not seeing the "import from S3 bucket" option on the UI then check the flow UI and click on the 'Import' tab option. Once both files are imported, your **Data Flow** tab will look similar to this ![data-flow-customer-account.png](images/2022-05-23-aws-sagemaker-wrangler-p2/data-flow-customer-account.png) Now that we have imported data from S3, we can now work on importing data from the Athena database. For this from the Flow UI Import tab click on **Amazon Athena** option. From the next UI select `AwsDataCatalog` Data catalog option. For Databases drop down select `telco_db` and in the query pane write the below query. ``` select * from telco_churn_utility ``` You can also preview the data by clicking on the table preview option. Once satisfied with the results click 'Import'. When asked about the database name write `telco_churn_utility` ![import-athena-table.png](images/2022-05-23-aws-sagemaker-wrangler-p2/import-athena-table.png) At this point, you will find all three tables imported in Data Flow UI. Against each table, a plus sign (+) will appear that you can use to add any transformations you want to apply on each table. ![all-tables-imported.png](images/2022-05-23-aws-sagemaker-wrangler-p2/all-tables-imported.png) for `telco_churn_customer_info` click on the plus sign and then select 'Edit' to change data types. ![edit_customer_info.png](images/2022-05-23-aws-sagemaker-wrangler-p2/edit_customer_info.png) We will add the following transformations * Change **Area Code** from Long to String * Click **Preview** * Then click **Apply** ![telco_churn_customer_info_edit.png](images/2022-05-23-aws-sagemaker-wrangler-p2/telco_churn_customer_info_edit.png) Similarly for `telco_churn_account_info.csv` edit data types as * Change **Account Length** to Long * Change **Int'l Plan** and **VMail Plan** to Bool * Click **Preview** and then click **Apply** For `telco_churn_utility.csv` edit data types as * Change **custserv_calls** to Long * Click **Preview** and then click **Apply** At this point, we have imported the data from all three sources and have also properly transformed their column types. ## Joining Tables Now we will join all three tables to get a full dataset. For this from the Flow UI Data flow click on the plus sign next to **customer_info** data type and this time select 'Join'. From the new window select **account_info** as the right dataset and click **Configure** ![join-configure.png](images/2022-05-23-aws-sagemaker-wrangler-p2/join-configure.png) From the next screen select * Join Type = Full Outer * Columns Left = CustomerID * Columns Right = CustomerID * Click Preview and then Add ![join-preview.png](images/2022-05-23-aws-sagemaker-wrangler-p2/join-preview.png) A new join step will appear on the Data Flow UI. Click on the plus sign next to it and repeat the steps for **utility** table ![first-join.png](images/2022-05-23-aws-sagemaker-wrangler-p2/first-join.png) * Join Type = Full Outer * Columns Left = CustomerID_0 * Columns Right = CustomerID * Click Preview and then Add ![join-second.png](images/2022-05-23-aws-sagemaker-wrangler-p2/join-second.png) # Summary At this point, we have all the tables joined together. The `customer-churn.flow` created is available on the GitHub [here](https://github.com/hassaanbinaslam/myblog/blob/master/_notebooks/datasets/2022-05-23-aws-sagemaker-wrangler-p2/customer-churn.flow). In the next post, we will clean duplicate columns and create some visualizations to analyze the data.
github_jupyter
``` import pandas as pd ``` # Load and Read In Dataframes ``` from google.colab import files uploaded = files.upload() df1 = pd.read_csv('base_api_df (7).csv') df2 = pd.read_csv('data-2.csv') df3 = pd.read_csv('featuresdf.csv') df4 = pd.read_csv('SpotifyFeatures.csv') ``` # Cut Dataframes Down to Track Ids and Music Features ``` df1.head(1) df1 = df1.drop(columns=['albums', 'artists', 'tracks']) df1.head(1) df2.head(1) df2 = df2.drop(columns=['explicit', 'key', 'popularity', 'release_date', 'year', 'artists', 'duration_ms', 'name', 'mode']) df2.head(1) df3.head(1) df3 = df3.drop(columns=['key', 'name', 'artists', 'key', 'mode', 'duration_ms', 'time_signature']) df3.head(1) df4.head(1) df4 = df4.drop(columns=['genre', 'popularity', 'key', 'artist_name', 'track_name', 'duration_ms', 'mode', 'time_signature']) df4.head(1) ``` # Merge Dataframes ``` # Rename id columns so that columns from every dataframe are the same df4 = df4.rename(columns={'track_id':'id'}) df1 = df1.rename(columns={'tracks_id':'id'}) spotify_df = df1.merge(df2, how='outer') spotify_df = spotify_df.merge(df3, how='outer') spotify_df = spotify_df.merge(df4, how='outer') spotify_df.shape spotify_df = spotify_df.drop_duplicates() spotify_df['id'] = spotify_df['id'].drop_duplicates() spotify_df = spotify_df.dropna() spotify_df.shape spotify_df = spotify_df.rename(columns={'track_key':'id'}) spotify_df.sample(10) track_key = [] for i in range(spotify_df.shape[0]): track_key.append(i) len(track_key) spotify_df['track_key'] = track_key spotify_df.head() spotify_df.to_csv('spotify_song_data.csv') ``` # Let's Create a Baseline Predictive Model Now ``` from sklearn.model_selection import train_test_split train, test = train_test_split(spotify_df) train, test X_train, X_test = train.drop(columns='id'), test.drop(columns='id') from sklearn.preprocessing import StandardScaler, OneHotEncoder # Instantiate Standard Scaler scaler = StandardScaler() # Fit_transform train data X_train_scaled = scaler.fit_transform(X_train) # Transform test data X_test_scaled = scaler.transform(X_test) X_train_scaled len(X_train_scaled), len(X_test_scaled) # Reshape data import numpy as np X_train = X_train_scaled.reshape((len(X_train), np.prod(X_train_scaled.shape[1:]))) X_test = X_test_scaled.reshape((len(X_test), np.prod(X_test_scaled.shape[1:]))) len(X_train),len(X_test) # Create our baseline model from tensorflow import keras from tensorflow.keras.models import Sequential from tensorflow.keras.layers import Dense, Flatten # Important hyperparameters inputs = X_train.shape[1] epochs = 75 batch_size = 10 # Create model model = Sequential([ Dense(64, activation='relu', input_dim=inputs), Dense(64, activation='relu'), Dense(10, activation='softmax') ]) # Compile model model.compile(optimizer='adam', loss='categorical_crossentropy') # Fit model model.fit(X_train, X_train, validation_data=(X_test, X_test), epochs=epochs, batch_size=batch_size ) ```
github_jupyter
# Functions of DataFrame 7 **이산화** ``` import numpy as np import pandas as pd from pandas import DataFrame np.random.seed(777) df=DataFrame({'c1':np.random.randn(20), 'c2':['a','a','a','a','a','a','a','a','a','a', 'b','b','b','b','b','b','b','b','b','b']}) print(df) bins=np.linspace(df.c1.min(),df.c1.max(),10) df['c1_bin']=np.digitize(df['c1'],bins) print(df) print(df.groupby('c1_bin')['c1'].size()) print(df.groupby('c1_bin')['c1'].mean()) print(df.groupby('c1_bin')['c1'].std()) print(df.groupby('c1_bin')['c2'].value_counts()) print(df[df['c1_bin']==2]) print(pd.get_dummies(df['c1_bin'],prefix='c1')) print(df.c1.mean()) df['high_low']=np.where(df['c1']>=df.c1.mean(),'high','low') print(df) print(df.groupby('high_low')['c1'].size()) print(df.groupby('high_low')['c1'].mean()) print(df.groupby('high_low')['c1'].std()) Q1=np.percentile(df['c1'],25) Q3=np.percentile(df['c1'],75) df['h_m_l']=np.where(df['c1']>=Q3, '01_high',np.where(df['c1']>=Q1, '02_median', '03_low')) print(df) ``` **재구조화 : reshaping** ``` data=DataFrame({'cust_id': ['c1', 'c1', 'c1', 'c2', 'c2', 'c2', 'c3', 'c3', 'c3'], 'prod_cd': ['p1', 'p2', 'p3', 'p1', 'p2', 'p3', 'p1', 'p2', 'p3'], 'grade' : ['A', 'A', 'A', 'A', 'A', 'A', 'B', 'B', 'B'], 'pch_amt': [30, 10, 0, 40, 15, 30, 0, 0, 10]}) print(data) dp=data.pivot(index='cust_id',columns='prod_cd',values='pch_amt') print(dp) dp=pd.pivot_table(data,index='cust_id',columns='prod_cd',values='pch_amt') print(dp) dp=pd.pivot_table(data,index=['cust_id','grade'],columns='prod_cd', values='pch_amt') print(dp) ``` ======================================================= ``` mul_index=pd.MultiIndex.from_tuples([('cust_1', '2018'), ('cust_1', '2019'), ('cust_2', '2018'), ('cust_2', '2019'),]) print(mul_index) data=DataFrame(data=np.arange(16).reshape(4,4),index=mul_index, columns=['prd_1','prd_2','prd_3','prd_4'],dtype='int') print(data) data_stacked=data.stack() print(data_stacked) print(data_stacked.index) print(data_stacked['cust_2']['2018'][['prd_1','prd_2']]) data.loc['cust_2','prd_4']=np.nan print(data) print(data.stack()) print(data.stack(dropna=False)) print(data_stacked) print(data_stacked.unstack(level=-1)) print(data_stacked.unstack(level=0)) print(data_stacked.unstack(level=1)) data_stacked_unstacked=data_stacked.unstack(level=-1) print(data_stacked_unstacked) print(type(data_stacked_unstacked)) dsu_df=data_stacked_unstacked.reset_index() print(dsu_df) dsu_df=dsu_df.rename(columns={'level_0':'custID', 'level_1':'year'}) print(dsu_df) ``` ======================================================= ``` data=DataFrame({'cust_id': ['c1', 'c1', 'c2', 'c2'], 'prod_cd': ['p1', 'p2', 'p1', 'p2'], 'pch_cnt' : [1,2,3,4], 'pch_amt': [100,200,300,400]}) print(data) print(pd.melt(data)) print(pd.melt(data,id_vars=['cust_id','prod_cd'])) data_melt=pd.melt(data,id_vars=['cust_id','prod_cd'], var_name='pch_cd', value_name='pch_value') print(data_melt) print(data_melt.index) print(data_melt.columns) ``` ======================================================= ``` data_melt_pivot=pd.pivot_table(data_melt,index=['cust_id','prod_cd'], columns='pch_cd',values='pch_value') print(data_melt_pivot) print(data_melt_pivot.index) print(data_melt_pivot.columns) ```
github_jupyter
``` import pandas as pd import statsmodels.formula.api as smf from statsmodels.iolib.summary2 import summary_col import matplotlib.pyplot as plt %matplotlib inline #%config InlineBackend.figure_format = 'retina' # Uncomment if using a retina display plt.rc('pdf', fonttype=42) plt.rcParams['ps.useafm'] = True plt.rcParams['pdf.use14corefonts'] = True #plt.rcParams['text.usetex'] = True # Uncomment if LaTeX installed to render plots in LaTeX plt.rcParams['font.serif'] = 'Times' plt.rcParams['font.family'] = 'serif' df = pd.read_csv('../output/model_params_and_results.csv') df.head() ``` Score is defined as the absolute value of the mean of the negative MSE on the test sets across all k-folds used in training. Other variables are renamed. ``` df['score'] = abs(df['mean_test_score']) df['num_hidden_layers'] = df['param_num_hidden_layers'] df['hidden_layer_size'] = df['param_hidden_layer_size'] df['activation_function'] = df['param_activation_function'] ``` We can now look at some descriptive statistics for different parameters. ``` pd.DataFrame.hist(data=df, column='score',bins=50) plt.scatter(x=df.num_hidden_layers, y=df.score) plt.scatter(x=df.hidden_layer_size, y=df.score) df.head() activation_dummies = pd.get_dummies(df.activation_function) df['layers_and_size_interaction'] = df.num_hidden_layers + df.hidden_layer_size df = pd.concat([df,activation_dummies],axis=1) df.columns ``` Basic OLS model: ``` model1 = smf.ols(formula='score ~ num_hidden_layers + hidden_layer_size\ + relu + sigmoid + tanh', data=df) results1 = model1.fit() print(results1.summary()) ``` OLS model with interaction between layer number and size: ``` model2 = smf.ols(formula='score ~ num_hidden_layers + hidden_layer_size + layers_and_size_interaction \ + relu + sigmoid + tanh', data=df) results2 = model2.fit() print(results2.summary()) ``` None of the coefficients are significant, making it difficult to interpret the results of the model. Overall it appears that all of the activation functions have a stronger negative effect on mean squared error (thus improving the predictions) than the reference category, the linear activation function. The number of hidden layers and the interaction between this and the hidden layer size (the number of neurons in the layer) are both also negative, which hidden layer size alone is positive. An issue with the current data is that the outliers with extremely high MSE, where the model was never able to converge towards sensible predictions, may be biasing the results. To assess whether this is the case I can re-run the models excluding these outliers. ``` df_ = df[df['score'] <= 1.0] df_.shape ``` Basic model without outliers: ``` model3 = smf.ols(formula='score ~ num_hidden_layers + hidden_layer_size \ + relu + sigmoid + tanh', data=df_) results3 = model3.fit() print(results3.summary()) ``` Including interactions: ``` model4 = smf.ols(formula='score ~ num_hidden_layers + hidden_layer_size + layers_and_size_interaction \ + relu + sigmoid + tanh', data=df_) results4 = model4.fit() print(results4.summary()) ``` Here we see that the overall model fit has improved, with the R-squared value increasing from 0.079 to 0.374. The coefficient for the ReLU activation is also positive now, consistent with the observation that its performance tends to fluctuate more than the other functions. Overall these results show that there are no clear, statistically significant, effects of basic architecture on model performance. Nonetheless the observed differences in the results elsewhere provide some evidence that models with more layers and node and non-linear activation are better able to generalize to out-of-sample data. The small number of observations may also be preventing us from seeing statistically significant relationships. Future work should assess the performance of models over a greater range of architectures to better assess how these choices affect predictive performance. ``` info = {'N': lambda x: str(int(x.nobs)), 'R2': lambda x: '%.3f' % x.rsquared, 'R2-adj': lambda x: '%.3f' % x.rsquared_adj, 'F': lambda x: '%.3f' % x.fvalue} tbl = summary_col([results1, results2, results3, results4], model_names=['Model 1', 'Model 2', 'Model 3', 'Model 4'], info_dict = info, stars=True, float_format='%0.4f', regressor_order=['num_hidden_layers', 'hidden_layer_size', 'layer_and_size_interaction', 'relu', 'sigmoid', 'tanh', 'Intercept']) tbl # Order is incorrect here even though specified above. Manually changed for paper. ```
github_jupyter
# Storing Multiple Values in Lists Just as a `for` loop is a way to do operations many times, a list is a way to store many values. Unlike NumPy arrays, lists are built into the language (so we don't have to load a library to use them). We create a list by putting values inside square brackets and separating the values with commas: ``` odds = [1,3,5,7] odds ``` We select individual elements from lists by indexing them: ``` weird = ["bob",5,5.8,[1,2,3],sum] odds ``` and if we loop over a list, the loop variable is assigned elements one at a time: ``` odds = [1,3,5,7] odds[2] ``` There is one important difference between lists and strings: we can change the values in a list, but we cannot change individual characters in a string. For example: ``` for o in weird: print(o) ``` works, but: ``` odds[2] = 17 ``` does not. ``` odds ``` ## Ch-Ch-Ch-Ch-Changes Data which can be modified in place is called mutable, while data which cannot be modified is called immutable. Strings and numbers are immutable. This does not mean that variables with string or number values are constants, but when we want to change the value of a string or number variable, we can only replace the old value with a completely new value. Lists and arrays, on the other hand, are mutable: we can modify them after they have been created. We can change individual elements, append new elements, or reorder the whole list. For some operations, like sorting, we can choose whether to use a function that modifies the data in-place or a function that returns a modified copy and leaves the original unchanged. Be careful when modifying data in-place. If two variables refer to the same list, and you modify the list value, it will change for both variables! ``` evens = odds ``` If you want variables with mutable values to be independent, you must make a copy of the value when you assign it. ``` evens[1] = 200 ``` Because of pitfalls like this, code which modifies data in place can be more difficult to understand. However, it is often far more efficient to modify a large data structure in place than to create a modified copy for every small change. You should consider both of these aspects when writing your code. ``` evens odds ``` ## Nested Lists Since lists can contain any Python variable, it can even contain other lists. For example, we could represent the products in the shelves of a small grocery shop: Here is a visual example of how indexing a list of lists `x` works: [![The first element of a list. Adapted from @hadleywickham.](https://pbs.twimg.com/media/CO2_qPVWsAAErbv.png:large)](https://twitter.com/hadleywickham/status/643381054758363136) Using the previously declared list `x`, these would be the results of the index operations shown in the image: *Thanks to [Hadley Wickham](https://twitter.com/hadleywickham/status/643381054758363136) for the image above.* ## Heterogeneous Lists Lists in Python can contain elements of different types. Example: ```python sample_ages = [10, 12.5, 'Unknown'] ``` There are many ways to change the contents of lists besides assigning new values to individual elements: While modifying in place, it is useful to remember that Python treats lists in a slightly counter-intuitive way. If we make a list and (attempt to) copy it then modify in place, we can cause all sorts of trouble: This is because Python stores a list in memory, and then can use multiple names to refer to the same list. If all we want to do is copy a (simple) list, we can use the `list` function, so we do not modify a list we did not mean to: This is different from how variables worked in lesson 1, and more similar to how a spreadsheet works. <section class="challenge panel panel-success"> <div class="panel-heading"> <h2><span class="fa fa-pencil"></span> Challenge: Turn a String Into a List</h2> </div> <div class="panel-body"> <p>Use a for-loop to convert the string "hello" into a list of letters:</p> <div class="codehilite"><pre><span></span><span class="p">[</span><span class="s2">&quot;h&quot;</span><span class="p">,</span> <span class="s2">&quot;e&quot;</span><span class="p">,</span> <span class="s2">&quot;l&quot;</span><span class="p">,</span> <span class="s2">&quot;l&quot;</span><span class="p">,</span> <span class="s2">&quot;o&quot;</span><span class="p">]</span> </pre></div> <p>Hint: You can create an empty list like this:</p> <div class="codehilite"><pre><span></span><span class="n">my_list</span> <span class="o">=</span> <span class="p">[]</span> </pre></div> </div> </section> <section class="solution panel panel-primary"> <div class="panel-heading"> <h2><span class="fa fa-eye"></span> Solution</h2> </div> </section> Subsets of lists and strings can be accessed by specifying ranges of values in brackets, similar to how we accessed ranges of positions in a NumPy array. This is commonly referred to as "slicing" the list/string. <section class="challenge panel panel-success"> <div class="panel-heading"> <h2><span class="fa fa-pencil"></span> Challenge: Slicing From the End</h2> </div> <div class="panel-body"> <p>Use slicing to access only the last four characters of the following string and the last four entries of the following list.</p> </div> </section> Would your solution work regardless of whether you knew beforehand the length of the string or list (e.g. if you wanted to apply the solution to a set of lists of different lengths)? If not, try to change your approach to make it more robust. <section class="solution panel panel-primary"> <div class="panel-heading"> <h2><span class="fa fa-eye"></span> Solution</h2> </div> <div class="panel-body"> <p>Use negative indices to count elements from the end of a container (such as list or string):</p> </div> </section> ## Non-Continuous Slices So far we've seen how to use slicing to take single blocks of successive entries from a sequence. But what if we want to take a subset of entries that aren't next to each other in the sequence? You can achieve this by providing a third argument to the range within the brackets, called the _step size_. The example below shows how you can take every third entry in a list: Notice that the slice taken begins with the first entry in the range, followed by entries taken at equally-spaced intervals (the steps) thereafter. If you wanted to begin the subset with the third entry, you would need to specify that as the starting point of the sliced range: <section class="challenge panel panel-success"> <div class="panel-heading"> <h2><span class="fa fa-pencil"></span> Challenge:</h2> </div> <div class="panel-body"> <p>Use the step size argument to create a new string that contains only every other character in the string "In an octopus's garden in the shade"</p> </div> </section> <section class="solution panel panel-primary"> <div class="panel-heading"> <h2><span class="fa fa-eye"></span> Solution</h2> </div> <div class="panel-body"> <p>To obtain every other character you need to provide a slice with the step size of 2:</p> </div> </section> You can also leave out the beginning and end of the slice to take the whole string and provide only the step argument to go every second element: If you want to take a slice from the beginning of a sequence, you can omit the first index in the range: And similarly, you can omit the ending index in the range to take a slice to the very end of the sequence: ## Overloading `+` usually means addition, but when used on strings or lists, it means "concatenate". Given that, what do you think the multiplication operator `*` does on lists? In particular, what will be the output of the following code? ```python counts = [2, 4, 6, 8, 10] repeats = counts * 2 print(repeats) ``` 1. `[2, 4, 6, 8, 10, 2, 4, 6, 8, 10]` 2. `[4, 8, 12, 16, 20]` 3. `[[2, 4, 6, 8, 10],[2, 4, 6, 8, 10]]` 4. `[2, 4, 6, 8, 10, 4, 8, 12, 16, 20]` The technical term for this is *operator overloading*: a single operator, like `+` or `*`, can do different things depending on what it's applied to. ## Solution The multiplication operator `*` used on a list replicates elements of the list and concatenates them together: ``` counts = [2,4,6,8,10] repeats = counts*2 repeats ``` It's equivalent to: ``` repeats = counts + counts repeats ``` --- The material in this notebook is derived from the Software Carpentry lessons &copy; [Software Carpentry](http://software-carpentry.org/) under the terms of the [CC-BY 4.0](https://creativecommons.org/licenses/by/4.0/) license. ``` col = ["c0","c1","c3"] for colour in col*2: print(colour) ```
github_jupyter
Exercise 4 - Polynomial Regression ======== Sometimes our data doesn't have a linear relationship, but we still want to predict an outcome. Suppose we want to predict how satisfied people might be with a piece of fruit, we would expect satisfaction would be low if the fruit was under ripened or over ripened. Satisfaction would be high in between underripened and overripened. This is not something linear regression will help us with, so we can turn to polynomial regression to help us make predictions for these more complex non-linear relationships! Step 1 ------ In this exercise we will look at a dataset analysing internet traffic over the course of the day. Observations were made every hour over the course of several days. Suppose we want to predict the level of traffic we might see at any time during the day, how might we do this? Let's start by opening up our data and having a look at it. #### In the cell below replace the text `<printDataHere>` with `print(dataset.head())`, and __run the code__ to see the data. ``` # This sets up the graphing configuration import warnings warnings.filterwarnings("ignore") import matplotlib.pyplot as graph %matplotlib inline graph.rcParams['figure.figsize'] = (15,5) graph.rcParams["font.family"] = "DejaVu Sans" graph.rcParams["font.size"] = "12" graph.rcParams['image.cmap'] = 'rainbow' graph.rcParams['axes.facecolor'] = 'white' graph.rcParams['figure.facecolor'] = 'white' import numpy as np import pandas as pd dataset = pd.read_csv('Data/traffic_by_hour.csv') ### # BELOW, REPLACE <printDataHere> WITH print(dataset.head()) TO PREVIEW THE DATASET ---### ### print(dataset.head()) ### ``` Step 2 ----- Next we're going to flip the data with the transpose method - our rows will become columns and our columns will become rows. Transpose is commonly used to reshape data so we can use it. Let's try it out. #### In the cell below find the text `<addCallToTranspose>` and replace it with `transpose` ``` ### # REPLACE THE <addCallToTranspose> BELOW WITH transpose ### dataset_T = np.transpose(dataset) ### print(dataset_T.shape) print(dataset_T) ``` Now lets visualize the data. #### Replace the text `<addSampleHere>` with `sample` and then __run the code__. ``` # Let's visualise the data! ### # REPLACE <addSampleHere> BELOW WITH sample ### for sample in range(0, dataset_T.shape[1]): graph.plot(dataset.columns.values, dataset_T[sample]) ### graph.xlabel('Time of day') graph.ylabel('Internet traffic (Gbps)') graph.show() ``` Step 3 ----- This all looks a bit busy, let's see if we can draw out a clearer pattern by taking the __average values__ for each hour. #### In the cell below find all occurances of `<replaceWithHour>` and replace them with `hour` and then __run the code__. ``` # We want to look at the mean values for each hour. hours = dataset.columns.values ### # REPLACE THE <replaceWithHour>'s BELOW WITH hour ### train_Y = [dataset[hour].mean() for hour in hours] # This will be our outcome we measure (label) - amount of internet traffic train_X = np.transpose([int(hour) for hour in hours]) # This is our feature - time of day ### # This makes our graph, don't edit! graph.scatter(train_X, train_Y) for sample in range(0,dataset_T.shape[1]): graph.plot(hours, dataset_T[sample], alpha=0.25) graph.xlabel('Time of day') graph.ylabel('Internet traffic (Gbps)') graph.show() ``` This alone could help us make a prediction if we wanted to know the expected traffic exactly on the hour. But, we'll need to be a bit more clever if we want to make a __good__ prediction of times in between. Step 4 ------ Let's use the midpoints in between the hours to analyse the relationship between the __time of day__ and the __amount of internet traffic__. Numpy's `polyfit(x,y,d)` function allows us to do polynomial regression, or more precisely least squares polynomial fit. We specify a __feature $x$ (time of day)__, our __label $y$ (the amount of traffic)__, and the __degree $d$ of the polynomial (how curvy the line is)__. #### In the cell below find the text `<replaceWithDegree>`, replace it with the value `1` then __run the code__. ``` # Polynomials of degree 1 are linear! # Lets include this one just for comparison ### # REPLACE THE <replaceWithDegree> BELOW WITH 1 ### poly_1 = np.polyfit(train_X, train_Y, 1) ### ``` Let's also compare a few higher-degree polynomials. #### Replace the `<replaceWithDegree>`'s below with numbers, as directed in the comments. ``` ### # REPLACE THE <replaceWithDegree>'s BELOW WITH 2, 3, AND THEN 4 ### poly_2 = np.polyfit(train_X, train_Y, 2) poly_3 = np.polyfit(train_X, train_Y, 3) poly_4 = np.polyfit(train_X, train_Y, 4) ### # Let's plot it! graph.scatter(train_X, train_Y) xp = np.linspace(0, 24, 100) # black dashed linear degree 1 graph.plot(xp, np.polyval(poly_1, xp), 'k--') # red degree 2 graph.plot(xp, np.polyval(poly_2, xp), 'r-') # blue degree 3 graph.plot(xp, np.polyval(poly_3, xp), 'b-') # yellow degree 4 graph.plot(xp, np.polyval(poly_4, xp), 'y-') graph.xticks(train_X, dataset.columns.values) graph.xlabel('Time of day') graph.ylabel('Internet traffic (Gbps)') graph.show() ``` None of these polynomials do a great job of generalising the data. Let's try a few more. #### Follow the instructions in the comments to replace the `<replaceWithDegree>`'s and then __run the code__. ``` ### # REPLACE THE <replaceWithDegree>'s 5, 6, AND 7 ### poly_5 = np.polyfit(train_X, train_Y, 5) poly_6 = np.polyfit(train_X, train_Y, 6) poly_7 = np.polyfit(train_X, train_Y, 7) ### # Let's plot it! graph.scatter(train_X, train_Y) xp = np.linspace(0, 24, 100) # black dashed linear degree 1 graph.plot(xp, np.polyval(poly_1, xp), 'k--') # red degree 5 graph.plot(xp, np.polyval(poly_5, xp), 'r-') # blue degree 6 graph.plot(xp, np.polyval(poly_6, xp), 'b-') # yellow degree 7 graph.plot(xp, np.polyval(poly_7, xp), 'y-') graph.xticks(train_X, dataset.columns.values) graph.xlabel('Time of day') graph.ylabel('Internet traffic (Gbps)') graph.show() ``` It looks like the 5th and 6th degree polynomials have an identical curve. This looks like a good curve to use. We could perhaps use an even higher degree polynomial to fit it even more tightly, but we don't want to overfit the curve, since we want just a generalisation of the relationship. Let's see how our degree 6 polynomial compares to the real data. #### Replace the text `<replaceWithPoly6>` with `poly_6` and __run the code__. ``` for row in range(0,dataset_T.shape[1]): graph.plot(dataset.columns.values, dataset_T[row], alpha = 0.5) ### # REPLACE <replaceWithPoly6> BELOW WITH poly_6 - THE POLYNOMIAL WE WISH TO VISUALIZE ### graph.plot(xp, np.polyval(poly_6, xp), 'k-') ### graph.xlabel('Time of day') graph.ylabel('Internet traffic (Gbps)') graph.show() ``` Step 5 ------ Now let's try using this model to make a prediction for a time between 00 and 24. #### In the cell below follow the instructions in the code to replace `<replaceWithTime>` and `<replaceWithPoly6>` then __run the code__. ``` ### # REPLACE <replaceWithTime> BELOW WITH 12.5 (this represents the time 12:30) ### time = 12.5 ### ### # REPLACE <replaceWithPoly6> BELOW WITH poly_6 SO WE CAN VISUALIZE THE 6TH DEGREE POLYNOMIAL MODEL ### pred = np.polyval(poly_6, time) ### print("at t=%s, predicted internet traffic is %s Gbps"%(time,pred)) # Now let's visualise it graph.plot(xp, np.polyval(poly_6, xp), 'y-') graph.plot(time, pred, 'ko') # result point graph.plot(np.linspace(0, time, 2), np.full([2], pred), dashes=[6, 3], color='black') # dashed lines (to y-axis) graph.plot(np.full([2], time), np.linspace(0, pred, 2), dashes=[6, 3], color='black') # dashed lines (to x-axis) graph.xticks(train_X, dataset.columns.values) graph.ylim(0, 60) graph.title('expected traffic throughout the day') graph.xlabel('time of day') graph.ylabel('internet traffic (Gbps)') graph.show() ``` Conclusion ----- And there we have it! You have made a polynomial regression model and used it for analysis! This models gives us a prediction for the level of internet traffic we should expect to see at any given time of day. You can go back to the course and either click __'Next Step'__ to start an optional step with tips on how to better work with AI models, or you can go to the next module where instead of predicting numbers we predict categories.
github_jupyter
# Building a Classifier from Lending Club Data **An end-to-end machine learning example using Pandas and Scikit-Learn** ## Data Ingestion ``` %matplotlib inline import os import pandas as pd import numpy as np import matplotlib.pyplot as plt import seaborn as sns from pandas.tools.plotting import scatter_matrix names = [ #Lending Club features "funded_amnt", "term", "int_rate", "emp_length", "home_ownership", "annual_inc", "verification_status", "purpose", "dti", "delinq_2yrs", "inq_last_6mths", "open_acc", "pub_rec", "revol_bal", "revol_util", # Macroeconomical data "ilc_mean", "ilc_LSFT", "gdp_mean", "gdp_LSFT", "Tbill_mean", "Tbill_LSFT", "cc_rate", "unemp", "unemp_LSFT", "spread", # Label "loan_status" ] Fnames = names[:-1] label = names[-1] # Open up the earlier CSV to determine how many different types ofentries there are in the column 'loan_status' data_with_all_csv_features = pd.read_csv("./data/dfaWR4F.csv") full_data = data_with_all_csv_features[names]; data = full_data.copy()[names] data.head(3) ``` # Data Exploration The very first thing to do is to explore the dataset and see what's inside. ``` # Shape of the full dataset print data.shape import matplotlib.pyplot as plt %matplotlib inline data.boxplot(column="annual_inc",by="loan_status") from pandas.tools.plotting import radviz import matplotlib.pyplot as plt fig = plt.figure() radviz(data, 'loan_status') plt.show() areas = full_data[['funded_amnt','term','int_rate', 'loan_status']] scatter_matrix(areas, alpha=0.2, figsize=(18,18), diagonal='kde') sns.set_context("poster") sns.countplot(x='home_ownership', hue='loan_status', data=full_data,) sns.set_context("poster") sns.countplot(x='emp_length', hue='loan_status', data=full_data,) sns.set_context("poster") sns.countplot(x='term', hue='loan_status', data=full_data,) sns.set_context("poster") sns.countplot(y='purpose', hue='loan_status', data=full_data,) sns.set_context("poster", font_scale=0.8) plt.figure(figsize=(15, 15)) plt.ylabel('Loan Originating State') sns.countplot(y='verification_status', hue='loan_status', data=full_data) pd.crosstab(data["term"],data["loan_status"],margins=True) def percConvert(ser): return ser/float(ser[-1]) pd.crosstab(data["term"],data["loan_status"],margins=True).apply(percConvert, axis=1) data.hist(column="annual_inc",by="loan_status",bins=30) # Balancing the data so that we have 50/50 class balancing (underbalancing reducing one class) paid_data = data.loc[(data['loan_status'] == "Paid")] default_data = data.loc[(data['loan_status'] == "Default")] # Reduce the Fully Paid data to the same number as Defaulted num_of_paid = default_data.shape[0] reduce_paid_data = paid_data.sample(num_of_paid) # This is the smaller sample data with 50-50 Defaulted and Fully aod loan balanced_data = reduce_paid_data.append(default_data,ignore_index = True ) #Now shuffle several times data = balanced_data.sample(balanced_data.shape[0]) data = data.sample(balanced_data.shape[0]) print "Fully Paid data size was {}".format(paid_data.shape[0]) print "Default data size was {}".format(default_data.shape[0]) print "Updated new Data size is {}".format(data.shape[0]) pd.crosstab(data["term"],data["loan_status"],margins=True).apply(percConvert, axis=1) fig = plt.figure() ax = fig.add_subplot(111) ax.hist(paid_data['int_rate'], bins = 50, alpha = 0.4, label='Fully_Paid', color = 'blue', range = (paid_data['int_rate'].min(),reduce_paid_data['int_rate'].max())) ax.hist(default_data['int_rate'], bins = 50, alpha = 0.4, label='Default', color = 'red', range = (default_data['int_rate'].min(),default_data['int_rate'].max())) plt.title('Interest Rate vs Number of Loans') plt.legend(loc='upper right') plt.xlabel('Interest Rate') plt.axis([0, 25, 0, 8000]) plt.ylabel('Number of Loans') plt.show() ``` The countplot function accepts either an x or a y argument to specify if this is a bar plot or a column plot. I chose to use the y argument so that the labels were readable. The hue argument specifies a column for comparison; in this case we're concerned with the relationship of our categorical variables to the target income. ## Data Management In order to organize our data on disk, we'll need to add the following files: - `README.md`: a markdown file containing information about the dataset and attribution. Will be exposed by the `DESCR` attribute. - `meta.json`: a helper file that contains machine readable information about the dataset like `target_names` and `feature_names`. ``` import json meta = { 'target_names': list(full_data.loan_status.unique()), 'feature_names': list(full_data.columns), 'categorical_features': { column: list(full_data[column].unique()) for column in full_data.columns if full_data[column].dtype == 'object' }, } with open('data/ls_meta.json', 'wb') as f: json.dump(meta, f, indent=2) ``` This code creates a `meta.json` file by inspecting the data frame that we have constructued. The `target_names` column, is just the two unique values in the `data.loan_status` series; by using the `pd.Series.unique` method - we're guarenteed to spot data errors if there are more or less than two values. The `feature_names` is simply the names of all the columns. Then we get tricky &mdash; we want to store the possible values of each categorical field for lookup later, but how do we know which columns are categorical and which are not? Luckily, Pandas has already done an analysis for us, and has stored the column data type, `data[column].dtype`, as either `int64` or `object`. Here I am using a dictionary comprehension to create a dictionary whose keys are the categorical columns, determined by checking the object type and comparing with `object`, and whose values are a list of unique values for that field. Now that we have everything we need stored on disk, we can create a `load_data` function, which will allow us to load the training and test datasets appropriately from disk and store them in a `Bunch`: ``` from sklearn import cross_validation from sklearn.cross_validation import train_test_split from sklearn.datasets.base import Bunch def load_data(root='data'): # Load the meta data from the file with open(os.path.join(root, 'meta.json'), 'r') as f: meta = json.load(f) names = meta['feature_names'] # Load the readme information with open(os.path.join(root, 'README.md'), 'r') as f: readme = f.read() X = data[Fnames] # Remove the target from the categorical features meta['categorical_features'].pop(label) y = data[label] X_train, X_test, y_train, y_test = cross_validation.train_test_split(X,y,test_size = 0.2,random_state=10) # Return the bunch with the appropriate data chunked apart return Bunch( #data = train[names[:-1]], data = X_train, #target = train[names[-1]], target = y_train, #data_test = test[names[:-1]], data_test = X_test, #target_test = test[names[-1]], target_test = y_test, target_names = meta['target_names'], feature_names = meta['feature_names'], categorical_features = meta['categorical_features'], DESCR = readme, ) dataset = load_data() print meta['target_names'] dataset.data_test.head() ``` The primary work of the `load_data` function is to locate the appropriate files on disk, given a root directory that's passed in as an argument (if you saved your data in a different directory, you can modify the root to have it look in the right place). The meta data is included with the bunch, and is also used split the train and test datasets into `data` and `target` variables appropriately, such that we can pass them correctly to the Scikit-Learn `fit` and `predict` estimator methods. ## Feature Extraction Now that our data management workflow is structured a bit more like Scikit-Learn, we can start to use our data to fit models. Unfortunately, the categorical values themselves are not useful for machine learning; we need a single instance table that contains _numeric values_. In order to extract this from the dataset, we'll have to use Scikit-Learn transformers to transform our input dataset into something that can be fit to a model. In particular, we'll have to do the following: - encode the categorical labels as numeric data - impute missing values with data (or remove) We will explore how to apply these transformations to our dataset, then we will create a feature extraction pipeline that we can use to build a model from the raw input data. This pipeline will apply both the imputer and the label encoders directly in front of our classifier, so that we can ensure that features are extracted appropriately in both the training and test datasets. ### Label Encoding Our first step is to get our data out of the object data type land and into a numeric type, since nearly all operations we'd like to apply to our data are going to rely on numeric types. Luckily, Sckit-Learn does provide a transformer for converting categorical labels into numeric integers: [`sklearn.preprocessing.LabelEncoder`](http://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.LabelEncoder.html). Unfortunately it can only transform a single vector at a time, so we'll have to adapt it in order to apply it to multiple columns. Like all Scikit-Learn transformers, the `LabelEncoder` has `fit` and `transform` methods (as well as a special all-in-one, `fit_transform` method) that can be used for stateful transformation of a dataset. In the case of the `LabelEncoder`, the `fit` method discovers all unique elements in the given vector, orders them lexicographically, and assigns them an integer value. These values are actually the indices of the elements inside the `LabelEncoder.classes_` attribute, which can also be used to do a reverse lookup of the class name from the integer value. For example, if we were to encode the `home_ownership` column of our dataset as follows: ``` from sklearn.preprocessing import LabelEncoder ownership = LabelEncoder() ownership.fit(dataset.data.home_ownership) print(ownership.classes_) from sklearn.preprocessing import LabelEncoder purpose = LabelEncoder() purpose.fit(dataset.data.purpose) print(purpose.classes_) ``` Obviously this is very useful for a single column, and in fact the `LabelEncoder` really was intended to encode the target variable, not necessarily categorical data expected by the classifiers. In order to create a multicolumn LabelEncoder, we'll have to extend the `TransformerMixin` in Scikit-Learn to create a transformer class of our own, then provide `fit` and `transform` methods that wrap individual `LabelEncoders` for our columns. ``` from sklearn.base import BaseEstimator, TransformerMixin class EncodeCategorical(BaseEstimator, TransformerMixin): """ Encodes a specified list of columns or all columns if None. """ def __init__(self, columns=None): self.columns = columns self.encoders = None def fit(self, data, target=None): """ Expects a data frame with named columns to encode. """ # Encode all columns if columns is None if self.columns is None: self.columns = data.columns # Fit a label encoder for each column in the data frame self.encoders = { column: LabelEncoder().fit(data[column]) for column in self.columns } return self def transform(self, data): """ Uses the encoders to transform a data frame. """ output = data.copy() for column, encoder in self.encoders.items(): output[column] = encoder.transform(data[column]) return output encoder = EncodeCategorical(dataset.categorical_features.keys()) #data = encoder.fit_transform(dataset.data) ``` This specialized transformer now has the ability to label encode multiple columns in a data frame, saving information about the state of the encoders. It would be trivial to add an `inverse_transform` method that accepts numeric data and converts it to labels, using the `inverse_transform` method of each individual `LabelEncoder` on a per-column basis. ### Imputation Scikit-Learn provides a transformer for dealing with missing values at either the column level or at the row level in the `sklearn.preprocessing` library called the [Imputer](http://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.Imputer.html). The `Imputer` requires information about what missing values are, either an integer or the string, `Nan` for `np.nan` data types, it then requires a strategy for dealing with it. For example, the `Imputer` can fill in the missing values with the mean, median, or most frequent values for each column. If provided an axis argument of 0 then columns that contain only missing data are discarded; if provided an axis argument of 1, then rows which contain only missing values raise an exception. Basic usage of the `Imputer` is as follows: ```python imputer = Imputer(missing_values='Nan', strategy='most_frequent') imputer.fit(dataset.data) ``` ``` from sklearn.preprocessing import Imputer class ImputeCategorical(BaseEstimator, TransformerMixin): """ Encodes a specified list of columns or all columns if None. """ def __init__(self, columns=None): self.columns = columns self.imputer = None def fit(self, data, target=None): """ Expects a data frame with named columns to impute. """ # Encode all columns if columns is None if self.columns is None: self.columns = data.columns # Fit an imputer for each column in the data frame #self.imputer = Imputer(strategy='most_frequent') self.imputer = Imputer(strategy='mean') self.imputer.fit(data[self.columns]) return self def transform(self, data): """ Uses the encoders to transform a data frame. """ output = data.copy() output[self.columns] = self.imputer.transform(output[self.columns]) return output imputer = ImputeCategorical(Fnames) #data = imputer.fit_transform(data) data.head(5) ``` Our custom imputer, like the `EncodeCategorical` transformer takes a set of columns to perform imputation on. In this case we only wrap a single `Imputer` as the `Imputer` is multicolumn &mdash; all that's required is to ensure that the correct columns are transformed. I had chosen to do the label encoding first, assuming that because the `Imputer` required numeric values, I'd be able to do the parsing in advance. However, after requiring a custom imputer, I'd say that it's probably best to deal with the missing values early, when they're still a specific value, rather than take a chance. ## Model Build To create classifier, we're going to create a [`Pipeline`](http://scikit-learn.org/stable/modules/generated/sklearn.pipeline.Pipeline.html) that uses our feature transformers and ends in an estimator that can do classification. We can then write the entire pipeline object to disk with the `pickle`, allowing us to load it up and use it to make predictions in the future. A pipeline is a step-by-step set of transformers that takes input data and transforms it, until finally passing it to an estimator at the end. Pipelines can be constructed using a named declarative syntax so that they're easy to modify and develop. # PCA ``` from sklearn.pipeline import Pipeline from sklearn.linear_model import LogisticRegression from sklearn.preprocessing import LabelEncoder from sklearn.preprocessing import StandardScaler from sklearn.decomposition import PCA import pandas as pd import matplotlib.pyplot as plt from sklearn.preprocessing import scale %matplotlib inline # we need to encode our target data as well. yencode = LabelEncoder().fit(dataset.target) #print yencode # construct the pipeline pca = Pipeline([ ('encoder', EncodeCategorical(dataset.categorical_features.keys())), ('imputer', ImputeCategorical(Fnames)), ('scalar', StandardScaler()), #('classifier', PCA(n_components=20)) ('classifier', PCA()) ]) # fit the pipeline pca.fit(dataset.data, yencode.transform(dataset.target)) #print dataset.target import numpy as np #The amount of variance that each PC explains var= pca.named_steps['classifier'].explained_variance_ratio_ #Cumulative Variance explains var1=np.cumsum(np.round(pca.named_steps['classifier'].explained_variance_ratio_, decimals=4)*100) print var1 plt.plot(var1) ``` # LDA ``` from sklearn.pipeline import Pipeline from sklearn.linear_model import LogisticRegression from sklearn.preprocessing import LabelEncoder from sklearn.preprocessing import StandardScaler from sklearn.decomposition import PCA from sklearn.lda import LDA import pandas as pd import matplotlib.pyplot as plt from sklearn.preprocessing import scale %matplotlib inline # we need to encode our target data as well. yencode = LabelEncoder().fit(dataset.target) #print yencode # construct the pipeline lda = Pipeline([ ('encoder', EncodeCategorical(dataset.categorical_features.keys())), ('imputer', ImputeCategorical(Fnames)), ('scalar', StandardScaler()), ('classifier', LDA()) ]) # fit the pipeline lda.fit(dataset.data, yencode.transform(dataset.target)) #print dataset.target import numpy as np #The amount of variance that each PC explains var= lda.named_steps['classifier'] print var #Cumulative Variance explains #var1=np.cumsum(np.round(lda.named_steps['classifier'], decimals=4)*100) print var1 plt.plot(var1) ``` # Logistic Regression Fits a logistic model to data and makes predictions about the probability of a categorical event (between 0 and 1). Logistic regressions make predictions between 0 and 1, so in order to classify multiple classes a one-vs-all scheme is used (one model per class, winner-takes-all). ``` from sklearn.pipeline import Pipeline from sklearn.linear_model import LogisticRegression from sklearn.preprocessing import LabelEncoder from sklearn.preprocessing import StandardScaler from sklearn.preprocessing import Normalizer from sklearn.metrics import accuracy_score, f1_score, precision_score, recall_score, roc_auc_score # we need to encode our target data as well. yencode = LabelEncoder().fit(dataset.target) #normalizer = Normalizer(copy=False) # construct the pipeline lr = Pipeline([ ('encoder', EncodeCategorical(dataset.categorical_features.keys())), ('imputer', ImputeCategorical(Fnames)), ('scalar', StandardScaler()), #('normalizer', Normalizer(copy=False)), #('classifier', LogisticRegression(class_weight='{0:.5, 1:.3}')) ('classifier', LogisticRegression()) ]) # fit the pipeline lr.fit(dataset.data, yencode.transform(dataset.target)) from sklearn.metrics import classification_report from sklearn.metrics import accuracy_score, f1_score, precision_score, recall_score, roc_auc_score import collections print collections.Counter(dataset.target_test) print collections.Counter(dataset.target) print collections.Counter(full_data[label]) print "Test under TEST DATASET" # encode test targets y_true = yencode.transform(dataset.target_test) # use the model to get the predicted value y_pred = lr.predict(dataset.data_test) # execute classification report print classification_report(y_true, y_pred, target_names=dataset.target_names) print "Test under TRAIN DATASET" # encode test targets y_true = yencode.transform(dataset.target) # use the model to get the predicted value y_pred = lr.predict(dataset.data) # execute classification report print classification_report(y_true, y_pred, target_names=dataset.target_names) print "Test under FULL IMBALANCED DATASET without new fit call" #lr.fit(full_data[Fnames], yencode.transform(full_data[label])) # encode test targets y_true = yencode.transform(full_data[label]) # use the model to get the predicted value y_pred = lr.predict(full_data[Fnames]) # execute classification report print classification_report(y_true, y_pred, target_names=dataset.target_names) ``` ## Chaining PCA and Logistic Regression The PCA does an unsupervised dimensionality reduction, while the logistic regression does the prediction. Here we are using default values for all component of the pipeline. ``` from sklearn.cross_validation import train_test_split from sklearn.grid_search import GridSearchCV from sklearn.metrics import classification_report from sklearn import linear_model, decomposition yencode = LabelEncoder().fit(dataset.target) logistic = linear_model.LogisticRegression() pca = decomposition.PCA() pipe = Pipeline(steps=[ ('encoder', EncodeCategorical(dataset.categorical_features.keys())), ('imputer', ImputeCategorical(Fnames)), ('scalar', StandardScaler()), ('pca', pca), ('logistic', logistic) ]) # we need to encode our target data as well. yencode = LabelEncoder().fit(dataset.target) #print yencode # construct the pipeline lda = Pipeline([ ('encoder', EncodeCategorical(dataset.categorical_features.keys())), ('imputer', ImputeCategorical(Fnames)), ('scalar', StandardScaler()), ('classifier', LDA()) ]) # fit the pipeline lda.fit(dataset.data, yencode.transform(dataset.target)) # Running the test from sklearn.metrics import classification_report from sklearn.metrics import accuracy_score, f1_score, precision_score, recall_score, roc_auc_score import collections print collections.Counter(dataset.target_test) print collections.Counter(dataset.target) print collections.Counter(full_data[label]) print "Test under TEST DATASET" # encode test targets y_true = yencode.transform(dataset.target_test) # use the model to get the predicted value y_pred = lda.predict(dataset.data_test) # execute classification report print classification_report(y_true, y_pred, target_names=dataset.target_names) print "Test under TRAIN DATASET" # encode test targets y_true = yencode.transform(dataset.target) # use the model to get the predicted value y_pred = lda.predict(dataset.data) # execute classification report print classification_report(y_true, y_pred, target_names=dataset.target_names) print "Test under FULL IMBALANCED DATASET without new fit call" #lda.fit(full_data[Fnames], yencode.transform(full_data[label])) # encode test targets y_true = yencode.transform(full_data[label]) # use the model to get the predicted value y_pred = lda.predict(full_data[Fnames]) # execute classification report print classification_report(y_true, y_pred, target_names=dataset.target_names) ``` ## Random Forest ``` from sklearn.ensemble import RandomForestClassifier from sklearn.ensemble import RandomForestRegressor from sklearn.pipeline import Pipeline from sklearn.metrics import mean_squared_error as mse from sklearn.metrics import r2_score # we need to encode our target data as well. yencode = LabelEncoder().fit(dataset.target) # construct the pipeline rf = Pipeline([ ('encoder', EncodeCategorical(dataset.categorical_features.keys())), ('imputer', ImputeCategorical(Fnames)), ('scalar', StandardScaler()), ('classifier', RandomForestClassifier(n_estimators=20, oob_score=True, max_depth=7)) ]) # ...and then run the 'fit' method to build a forest of trees rf.fit(dataset.data, yencode.transform(dataset.target)) # Running the test from sklearn.metrics import classification_report from sklearn.metrics import accuracy_score, f1_score, precision_score, recall_score, roc_auc_score import collections print collections.Counter(dataset.target_test) print collections.Counter(dataset.target) print collections.Counter(full_data[label]) print "Test under TEST DATASET" # encode test targets y_true = yencode.transform(dataset.target_test) # use the model to get the predicted value y_pred = rf.predict(dataset.data_test) # execute classification report print classification_report(y_true, y_pred, target_names=dataset.target_names) print "Test under TRAIN DATASET" # encode test targets y_true = yencode.transform(dataset.target) # use the model to get the predicted value y_pred = rf.predict(dataset.data) # execute classification report print classification_report(y_true, y_pred, target_names=dataset.target_names) print "Test under FULL IMBALANCED DATASET without new fit call" #rf.fit(full_data[Fnames], yencode.transform(full_data[label])) # encode test targets y_true = yencode.transform(full_data[label]) # use the model to get the predicted value y_pred = rf.predict(full_data[Fnames]) # execute classification report print classification_report(y_true, y_pred, target_names=dataset.target_names) ``` ## ElasticNet ``` from sklearn.pipeline import Pipeline from sklearn.linear_model import ElasticNet from sklearn.preprocessing import LabelEncoder from sklearn.preprocessing import StandardScaler # we need to encode our target data as well. yencode = LabelEncoder().fit(dataset.target) # construct the pipeline lelastic = Pipeline([ ('encoder', EncodeCategorical(dataset.categorical_features.keys())), ('imputer', ImputeCategorical(Fnames)), ('scalar', StandardScaler()), ('classifier', ElasticNet(alpha=0.01, l1_ratio =0.1)) ]) # fit the pipeline lelastic.fit(dataset.data, yencode.transform(dataset.target)) #A helper method for pretty-printing linear models def pretty_print_linear(coefs, names = None, sort = False): if names == None: names = ["X%s" % x for x in range(len(coefs))] lst = zip(coefs[0], names) if sort: lst = sorted(lst, key = lambda x:-np.abs(x[0])) return " + ".join("%s * %s" % (round(coef, 3), name) for coef, name in lst) coefs = lelastic.named_steps['classifier'].coef_ print coefs #print "Linear model:", pretty_print_linear(coefs, Fnames) #Naive Bayes from sklearn.naive_bayes import GaussianNB from sklearn.naive_bayes import MultinomialNB from sklearn.naive_bayes import BernoulliNB from sklearn.pipeline import Pipeline from sklearn.metrics import accuracy_score, f1_score, precision_score, recall_score, roc_auc_score # we need to encode our target data as well. yencode = LabelEncoder().fit(dataset.target) # construct the pipeline nb = Pipeline([ ('encoder', EncodeCategorical(dataset.categorical_features.keys())), ('imputer', ImputeCategorical(Fnames)), ('scalar', StandardScaler()), # ('classifier', GaussianNB()) # ('classifier', MultinomialNB(alpha=0.7, class_prior=[0.5, 0.5], fit_prior=True)) ('classifier', BernoulliNB(alpha=1.0, binarize=0.0, fit_prior=False)) ]) # Next split up the data with the 'train test split' method in the Cross Validation module #X_train, X_test, y_train, y_test = train_test_split(X,y, test_size=0.2) # ...and then run the 'fit' method to build a model nb.fit(dataset.data, yencode.transform(dataset.target)) # Running the test from sklearn.metrics import classification_report from sklearn.metrics import accuracy_score, f1_score, precision_score, recall_score, roc_auc_score import collections print collections.Counter(dataset.target_test) print collections.Counter(dataset.target) print collections.Counter(full_data[label]) print "Test under TEST DATASET" # encode test targets y_true = yencode.transform(dataset.target_test) # use the model to get the predicted value y_pred = nb.predict(dataset.data_test) # execute classification report print classification_report(y_true, y_pred, target_names=dataset.target_names) print "Test under TRAIN DATASET" # encode test targets y_true = yencode.transform(dataset.target) # use the model to get the predicted value y_pred = nb.predict(dataset.data) # execute classification report print classification_report(y_true, y_pred, target_names=dataset.target_names) print "Test under FULL IMBALANCED DATASET without new fit call" #rf.fit(full_data[Fnames], yencode.transform(full_data[label])) # encode test targets y_true = yencode.transform(full_data[label]) # use the model to get the predicted value y_pred = nb.predict(full_data[Fnames]) # execute classification report print classification_report(y_true, y_pred, target_names=dataset.target_names) ``` ## Gradient Boosting Classifier ``` from sklearn.ensemble import GradientBoostingClassifier clf = GradientBoostingClassifier(n_estimators=100, learning_rate=1.0, max_depth=1, random_state=0).fit(dataset.data, dataset.target) # encode test targets y_true = yencode.transform(dataset.target_test) # use the model to get the predicted value y_pred = clf.predict(dataset.data_test) # execute classification report clf.score(dataset.data_test, y_true) ``` ## Voting Classifier 1xLogistic, 4xRandom Forest, 1xgNB, 1xDecisionTree, 2xkNeighbors ``` from sklearn.pipeline import Pipeline from sklearn.linear_model import LogisticRegression from sklearn.naive_bayes import GaussianNB from sklearn.ensemble import RandomForestClassifier from sklearn.ensemble import VotingClassifier from sklearn.preprocessing import LabelEncoder from sklearn.preprocessing import StandardScaler from sklearn.metrics import accuracy_score, f1_score, precision_score, recall_score, roc_auc_score from sklearn import linear_model, decomposition from sklearn.tree import DecisionTreeClassifier from sklearn.neighbors import KNeighborsClassifier from sklearn.svm import SVC # we need to encode our target data as well. yencode = LabelEncoder().fit(dataset.target) clf1 = LogisticRegression(random_state=12) clf2 = RandomForestClassifier(max_features=5, min_samples_leaf=4, min_samples_split=9, bootstrap=False, criterion='entropy', max_depth=None, n_estimators=24, random_state=12) clf3 = GaussianNB() clf4 = DecisionTreeClassifier(max_depth=4) clf5 = KNeighborsClassifier(n_neighbors=7) #clf6 = SVC(kernel='rbf', probability=True) pca = decomposition.PCA(n_components=24) # construct the pipeline pipe = Pipeline([ ('encoder', EncodeCategorical(dataset.categorical_features.keys())), ('imputer', ImputeCategorical(Fnames)), ('scalar', StandardScaler()), ('pca', pca), ('eclf_classifier', VotingClassifier(estimators=[('lr', clf1), ('rf', clf2), ('gnb', clf3), ('dtc', clf4),('knc', clf5)], voting='soft', weights=[1, 4, 1, 1, 2])), ]) # fit the pipeline pipe.fit(dataset.data, yencode.transform(dataset.target)) from sklearn.metrics import classification_report from sklearn.metrics import accuracy_score, f1_score, precision_score, recall_score, roc_auc_score import collections print collections.Counter(dataset.target_test) print collections.Counter(dataset.target) print collections.Counter(full_data[label]) print "Test under TEST DATASET" y_true, y_pred = yencode.transform(dataset.target_test), pipe.predict(dataset.data_test) print(classification_report(y_true, y_pred)) print "Test under TRAIN DATASET" y_true, y_pred = yencode.transform(dataset.target), pipe.predict(dataset.data) print(classification_report(y_true, y_pred)) print "Test under FULL IMBALANCED DATASET without new fit call" y_true, y_pred = yencode.transform(full_data[label]), pipe.predict(full_data[Fnames]) print(classification_report(y_true, y_pred)) ``` ## Parameter Tuning for Logistic regression inside pipeline A grid search or feature analysis may lead to a higher scoring model than the one we quickly put together. ``` from sklearn.cross_validation import train_test_split from sklearn.grid_search import GridSearchCV from sklearn.metrics import classification_report from sklearn import linear_model, decomposition yencode = LabelEncoder().fit(dataset.target) logistic = LogisticRegression(penalty='l2', dual=False, solver='newton-cg') clf2 = RandomForestClassifier(max_features=5, min_samples_leaf=4, min_samples_split=9, bootstrap=False, criterion='entropy', max_depth=None, n_estimators=24, random_state=12) clf3 = GaussianNB() clf4 = DecisionTreeClassifier(max_depth=4) clf5 = KNeighborsClassifier(n_neighbors=7) #clf6 = SVC(kernel='rbf', probability=True) pca = decomposition.PCA(n_components=24) # construct the pipeline pipe = Pipeline([ ('encoder', EncodeCategorical(dataset.categorical_features.keys())), ('imputer', ImputeCategorical(Fnames)), ('scalar', StandardScaler()), ('pca', pca), ('logistic', logistic), ]) tuned_parameters = { #'pca__n_components':[5, 7, 13, 24], 'logistic__fit_intercept':(False, True), #'logistic__C':(0.1, 1, 10), 'logistic__class_weight':({0:.5, 1:.5},{0:.7, 1:.3},{0:.6, 1:.4},{0:.55, 1:.45},None), } scores = ['precision', 'recall', 'f1'] for score in scores: print("# Tuning hyper-parameters for %s" % score) print() clf = GridSearchCV(pipe, tuned_parameters, scoring='%s_weighted' % score) clf.fit(dataset.data, yencode.transform(dataset.target)) print("Best parameters set found on development set:") print() print(clf.best_params_) print() print("Grid scores on development set:") print() for params, mean_score, scores in clf.grid_scores_: print("%0.3f (+/-%0.03f) for %r" % (mean_score, scores.std() * 2, params)) print() print("Detailed classification report:") print() print("The model is trained on the full development set.") print("The scores are computed on the full evaluation set.") print() print "Test under TEST DATASET" y_true, y_pred = yencode.transform(dataset.target_test), clf.predict(dataset.data_test) print(classification_report(y_true, y_pred)) print "Test under TRAIN DATASET" y_true, y_pred = yencode.transform(dataset.target), clf.predict(dataset.data) print(classification_report(y_true, y_pred)) print "Test under FULL IMBALANCED DATASET without new fit call" y_true, y_pred = yencode.transform(full_data[label]), clf.predict(full_data[Fnames]) print(classification_report(y_true, y_pred)) ``` ## Parameter Tuning for classifiers inside VotingClassifier A grid search or feature analysis may lead to a higher scoring model than the one we quickly put together. ``` from sklearn.cross_validation import train_test_split from sklearn.grid_search import GridSearchCV from sklearn.metrics import classification_report from sklearn import linear_model, decomposition yencode = LabelEncoder().fit(dataset.target) logistic = LogisticRegression(penalty='l2', dual=False, solver='newton-cg') clf2 = RandomForestClassifier(max_features=5, min_samples_leaf=4, min_samples_split=9, bootstrap=False, criterion='entropy', max_depth=None, n_estimators=24, random_state=12) clf3 = GaussianNB() clf4 = DecisionTreeClassifier(max_depth=4) clf5 = KNeighborsClassifier(n_neighbors=7) #clf6 = SVC(kernel='rbf', probability=True) pca = decomposition.PCA(n_components=24) # construct the pipeline pipe = Pipeline([ ('encoder', EncodeCategorical(dataset.categorical_features.keys())), ('imputer', ImputeCategorical(Fnames)), ('scalar', StandardScaler()), ('pca', pca), ('eclf_classifier', VotingClassifier(estimators=[('logistic', logistic), ('randomf', clf2), ('nb', clf3), ('decisiontree', clf4),('kn', clf5)], voting='soft', weights=[1, 4, 1, 1, 2])), ]) tuned_parameters = { #'pca__n_components':[5, 7, 13, 20, 24], #'eclf_classifier__logistic__fit_intercept':(False, True), #'logistic__C':(0.1, 1, 10), 'eclf_classifier__logistic__class_weight':({0:.5, 1:.5},{0:.7, 1:.3},{0:.6, 1:.4},{0:.55, 1:.45},None), #'randomf__max_depth': [3, None], #'randomf__max_features': sp_randint(1, 11), #'randomf__min_samples_split': sp_randint(1, 11), #'randomf__min_samples_leaf': sp_randint(1, 11), #'randomf__bootstrap': [True, False], #'randomf__criterion': ['gini', 'entropy'] } scores = ['precision', 'recall', 'f1'] for score in scores: print("# Tuning hyper-parameters for %s" % score) print() clf = GridSearchCV(pipe, tuned_parameters, scoring='%s_weighted' % score) clf.fit(dataset.data, yencode.transform(dataset.target)) print("Best parameters set found on development set:") print() print(clf.best_params_) print() print("Grid scores on development set:") print() for params, mean_score, scores in clf.grid_scores_: print("%0.3f (+/-%0.03f) for %r" % (mean_score, scores.std() * 2, params)) print() print("Detailed classification report:") print() print("The model is trained on the full development set.") print("The scores are computed on the full evaluation set.") print() print "Test under TEST DATASET" y_true, y_pred = yencode.transform(dataset.target_test), clf.predict(dataset.data_test) print(classification_report(y_true, y_pred)) print "Test under TRAIN DATASET" y_true, y_pred = yencode.transform(dataset.target), clf.predict(dataset.data) print(classification_report(y_true, y_pred)) print "Test under FULL IMBALANCED DATASET without new fit call" y_true, y_pred = yencode.transform(full_data[label]), clf.predict(full_data[Fnames]) print(classification_report(y_true, y_pred)) ``` ## Tuning the weights in the VotingClassifier ``` from sklearn.base import BaseEstimator from sklearn.base import ClassifierMixin import numpy as np import operator class EnsembleClassifier(BaseEstimator, ClassifierMixin): """ Ensemble classifier for scikit-learn estimators. Parameters ---------- clf : `iterable` A list of scikit-learn classifier objects. weights : `list` (default: `None`) If `None`, the majority rule voting will be applied to the predicted class labels. If a list of weights (`float` or `int`) is provided, the averaged raw probabilities (via `predict_proba`) will be used to determine the most confident class label. """ def __init__(self, clfs, weights=None): self.clfs = clfs self.weights = weights def fit(self, X, y): """ Fit the scikit-learn estimators. Parameters ---------- X : numpy array, shape = [n_samples, n_features] Training data y : list or numpy array, shape = [n_samples] Class labels """ for clf in self.clfs: clf.fit(X, y) def predict(self, X): """ Parameters ---------- X : numpy array, shape = [n_samples, n_features] Returns ---------- maj : list or numpy array, shape = [n_samples] Predicted class labels by majority rule """ self.classes_ = np.asarray([clf.predict(X) for clf in self.clfs]) if self.weights: avg = self.predict_proba(X) maj = np.apply_along_axis(lambda x: max(enumerate(x), key=operator.itemgetter(1))[0], axis=1, arr=avg) else: maj = np.asarray([np.argmax(np.bincount(self.classes_[:,c])) for c in range(self.classes_.shape[1])]) return maj def predict_proba(self, X): """ Parameters ---------- X : numpy array, shape = [n_samples, n_features] Returns ---------- avg : list or numpy array, shape = [n_samples, n_probabilities] Weighted average probability for each class per sample. """ self.probas_ = [clf.predict_proba(X) for clf in self.clfs] avg = np.average(self.probas_, axis=0, weights=self.weights) return avg y_true = yencode.transform(full_data[label]) df = pd.DataFrame(columns=('w1', 'w2', 'w3','w4','w5', 'mean', 'std')) i = 0 for w1 in range(0,2): for w2 in range(0,2): for w3 in range(0,2): for w4 in range(0,2): for w5 in range(0,2): if len(set((w1,w2,w3,w4,w5))) == 1: # skip if all weights are equal continue eclf = EnsembleClassifier(clfs=[clf1, clf2, clf3, clf4, clf5], weights=[w1,w2,w3,w4,w5]) eclf.fit(dataset.data, yencode.transform(dataset.target)) print "w1" print w1 print "w2" print w2 print "w3" print w3 print "w4" print w4 print "w5" print w5 print "Test under TEST DATASET" y_true, y_pred = yencode.transform(dataset.target_test), eclf.predict(dataset.data_test) print(classification_report(y_true, y_pred)) print "Test under TRAIN DATASET" y_true, y_pred = yencode.transform(dataset.target), eclf.predict(dataset.data) print(classification_report(y_true, y_pred)) print "Test under FULL IMBALANCED DATASET without new fit call" y_true, y_pred = yencode.transform(full_data[label]), eclf.predict(full_data[Fnames]) print(classification_report(y_true, y_pred)) #scores = cross_validation.cross_val_score( # estimator=eclf, # X=full_data[Fnames], # y=y_true, # cv=5, # scoring='f1', # n_jobs=1) #df.loc[i] = [w1, w2, w3, w4, w5, scores.mean(), scores.std()] i += 1 #print i #print scores.mean() #df.sort(columns=['mean', 'std'], ascending=False) ``` The pipeline first passes data through our encoder, then to the imputer, and finally to our classifier. In this case, I have chosen a `LogisticRegression`, a regularized linear model that is used to estimate a categorical dependent variable, much like the binary target we have in this case. We can then evaluate the model on the test data set using the same exact pipeline. The last step is to save our model to disk for reuse later, with the `pickle` module: # Model Pickle ``` import pickle def dump_model(model, path='data', name='classifier.pickle'): with open(os.path.join(path, name), 'wb') as f: pickle.dump(model, f) dump_model(lr) import pickle def dump_model(model, path='data', name='encodert.pickle'): with open(os.path.join(path, name), 'wb') as f: pickle.dump(model, f) dump_model(yencode) ``` # SVMs Support Vector Machines (SVM) uses points in transformed problem space that separates the classes into groups. ``` from sklearn.pipeline import Pipeline from sklearn.metrics import mean_squared_error as mse from sklearn.metrics import r2_score from sklearn.svm import SVC # we need to encode our target data as well. yencode = LabelEncoder().fit(dataset.target) # construct the pipeline svm = Pipeline([ ('encoder', EncodeCategorical(dataset.categorical_features.keys())), ('imputer', ImputeCategorical(Fnames)), ('scalar', StandardScaler()), ('classifier', SVC(kernel='linear')) ]) svm.fit(dataset.data, yencode.transform(dataset.target)) print "Test under TEST DATASET" y_true, y_pred = yencode.transform(dataset.target_test), svm.predict(dataset.data_test) print(classification_report(y_true, y_pred)) print "Test under TRAIN DATASET" y_true, y_pred = yencode.transform(dataset.target), svm.predict(dataset.data) print(classification_report(y_true, y_pred)) print "Test under FULL IMBALANCED DATASET without new fit call" y_true, y_pred = yencode.transform(full_data[label]), svm.predict(full_data[Fnames]) print(classification_report(y_true, y_pred)) #kernels = ['linear', 'poly', 'rbf'] #for kernel in kernels: # if kernel != 'poly': # model = SVC(kernel=kernel) # else: # model = SVC(kernel=kernel, degree=3) ``` We can also dump meta information about the date and time your model was built, who built the model, etc. But we'll skip that step here. ## Model Operation Now it's time to explore how to use the model. To do this, we'll create a simple function that gathers input from the user on the command line, and returns a prediction with the classifier model. Moreover, this function will load the pickled model into memory to ensure the latest and greatest saved model is what's being used. ``` def load_model(path='data/classifier.pickle'): with open(path, 'rb') as f: return pickle.load(f) def predict(model, meta=meta): data = {} # Store the input from the user for column in meta['feature_names'][:-1]: # Get the valid responses valid = meta['categorical_features'].get(column) # Prompt the user for an answer until good while True: val = "" + raw_input("enter {} >".format(column)) print val # if valid and val not in valid: # print "Not valid, choose one of {}".format(valid) # else: data[column] = val break # Create prediction and label # yhat = model.predict(pd.DataFrame([data])) yhat = model.predict_proba(pd.DataFrame([data])) print yhat return yencode.inverse_transform(yhat) # Execute the interface #model = load_model() #predict(model) #print data #yhat = model.predict_proba(pd.DataFrame([data])) ``` ## Conclusion
github_jupyter
# Regularization Welcome to the second assignment of this week. Deep Learning models have so much flexibility and capacity that **overfitting can be a serious problem**, if the training dataset is not big enough. Sure it does well on the training set, but the learned network **doesn't generalize to new examples** that it has never seen! **You will learn to:** Use regularization in your deep learning models. Let's first import the packages you are going to use. ``` # import packages import numpy as np import matplotlib.pyplot as plt from reg_utils import sigmoid, relu, plot_decision_boundary, initialize_parameters, load_2D_dataset, predict_dec from reg_utils import compute_cost, predict, forward_propagation, backward_propagation, update_parameters import sklearn import sklearn.datasets import scipy.io from testCases import * %matplotlib inline plt.rcParams['figure.figsize'] = (7.0, 4.0) # set default size of plots plt.rcParams['image.interpolation'] = 'nearest' plt.rcParams['image.cmap'] = 'gray' ``` **Problem Statement**: You have just been hired as an AI expert by the French Football Corporation. They would like you to recommend positions where France's goal keeper should kick the ball so that the French team's players can then hit it with their head. <img src="images/field_kiank.png" style="width:600px;height:350px;"> <caption><center> <u> **Figure 1** </u>: **Football field**<br> The goal keeper kicks the ball in the air, the players of each team are fighting to hit the ball with their head </center></caption> They give you the following 2D dataset from France's past 10 games. ``` train_X, train_Y, test_X, test_Y = load_2D_dataset() ``` Each dot corresponds to a position on the football field where a football player has hit the ball with his/her head after the French goal keeper has shot the ball from the left side of the football field. - If the dot is blue, it means the French player managed to hit the ball with his/her head - If the dot is red, it means the other team's player hit the ball with their head **Your goal**: Use a deep learning model to find the positions on the field where the goalkeeper should kick the ball. **Analysis of the dataset**: This dataset is a little noisy, but it looks like a diagonal line separating the upper left half (blue) from the lower right half (red) would work well. You will first try a non-regularized model. Then you'll learn how to regularize it and decide which model you will choose to solve the French Football Corporation's problem. ## 1 - Non-regularized model You will use the following neural network (already implemented for you below). This model can be used: - in *regularization mode* -- by setting the `lambd` input to a non-zero value. We use "`lambd`" instead of "`lambda`" because "`lambda`" is a reserved keyword in Python. - in *dropout mode* -- by setting the `keep_prob` to a value less than one You will first try the model without any regularization. Then, you will implement: - *L2 regularization* -- functions: "`compute_cost_with_regularization()`" and "`backward_propagation_with_regularization()`" - *Dropout* -- functions: "`forward_propagation_with_dropout()`" and "`backward_propagation_with_dropout()`" In each part, you will run this model with the correct inputs so that it calls the functions you've implemented. Take a look at the code below to familiarize yourself with the model. ``` def model(X, Y, learning_rate = 0.3, num_iterations = 30000, print_cost = True, lambd = 0, keep_prob = 1): """ Implements a three-layer neural network: LINEAR->RELU->LINEAR->RELU->LINEAR->SIGMOID. Arguments: X -- input data, of shape (input size, number of examples) Y -- true "label" vector (1 for blue dot / 0 for red dot), of shape (output size, number of examples) learning_rate -- learning rate of the optimization num_iterations -- number of iterations of the optimization loop print_cost -- If True, print the cost every 10000 iterations lambd -- regularization hyperparameter, scalar keep_prob - probability of keeping a neuron active during drop-out, scalar. Returns: parameters -- parameters learned by the model. They can then be used to predict. """ grads = {} costs = [] # to keep track of the cost m = X.shape[1] # number of examples layers_dims = [X.shape[0], 20, 3, 1] # Initialize parameters dictionary. parameters = initialize_parameters(layers_dims) # Loop (gradient descent) for i in range(0, num_iterations): # Forward propagation: LINEAR -> RELU -> LINEAR -> RELU -> LINEAR -> SIGMOID. if keep_prob == 1: a3, cache = forward_propagation(X, parameters) elif keep_prob < 1: a3, cache = forward_propagation_with_dropout(X, parameters, keep_prob) # Cost function if lambd == 0: cost = compute_cost(a3, Y) else: cost = compute_cost_with_regularization(a3, Y, parameters, lambd) # Backward propagation. assert(lambd==0 or keep_prob==1) # it is possible to use both L2 regularization and dropout, # but this assignment will only explore one at a time if lambd == 0 and keep_prob == 1: grads = backward_propagation(X, Y, cache) elif lambd != 0: grads = backward_propagation_with_regularization(X, Y, cache, lambd) elif keep_prob < 1: grads = backward_propagation_with_dropout(X, Y, cache, keep_prob) # Update parameters. parameters = update_parameters(parameters, grads, learning_rate) # Print the loss every 10000 iterations if print_cost and i % 10000 == 0: print("Cost after iteration {}: {}".format(i, cost)) if print_cost and i % 1000 == 0: costs.append(cost) # plot the cost plt.plot(costs) plt.ylabel('cost') plt.xlabel('iterations (x1,000)') plt.title("Learning rate =" + str(learning_rate)) plt.show() return parameters ``` Let's train the model without any regularization, and observe the accuracy on the train/test sets. ``` parameters = model(train_X, train_Y) print ("On the training set:") predictions_train = predict(train_X, train_Y, parameters) print ("On the test set:") predictions_test = predict(test_X, test_Y, parameters) ``` The train accuracy is 94.8% while the test accuracy is 91.5%. This is the **baseline model** (you will observe the impact of regularization on this model). Run the following code to plot the decision boundary of your model. ``` plt.title("Model without regularization") axes = plt.gca() axes.set_xlim([-0.75,0.40]) axes.set_ylim([-0.75,0.65]) plot_decision_boundary(lambda x: predict_dec(parameters, x.T), train_X, train_Y) ``` The non-regularized model is obviously overfitting the training set. It is fitting the noisy points! Lets now look at two techniques to reduce overfitting. ## 2 - L2 Regularization The standard way to avoid overfitting is called **L2 regularization**. It consists of appropriately modifying your cost function, from: $$J = -\frac{1}{m} \sum\limits_{i = 1}^{m} \large{(}\small y^{(i)}\log\left(a^{[L](i)}\right) + (1-y^{(i)})\log\left(1- a^{[L](i)}\right) \large{)} \tag{1}$$ To: $$J_{regularized} = \small \underbrace{-\frac{1}{m} \sum\limits_{i = 1}^{m} \large{(}\small y^{(i)}\log\left(a^{[L](i)}\right) + (1-y^{(i)})\log\left(1- a^{[L](i)}\right) \large{)} }_\text{cross-entropy cost} + \underbrace{\frac{1}{m} \frac{\lambda}{2} \sum\limits_l\sum\limits_k\sum\limits_j W_{k,j}^{[l]2} }_\text{L2 regularization cost} \tag{2}$$ Let's modify your cost and observe the consequences. **Exercise**: Implement `compute_cost_with_regularization()` which computes the cost given by formula (2). To calculate $\sum\limits_k\sum\limits_j W_{k,j}^{[l]2}$ , use : ```python np.sum(np.square(Wl)) ``` Note that you have to do this for $W^{[1]}$, $W^{[2]}$ and $W^{[3]}$, then sum the three terms and multiply by $ \frac{1}{m} \frac{\lambda}{2} $. ``` # GRADED FUNCTION: compute_cost_with_regularization def compute_cost_with_regularization(A3, Y, parameters, lambd): """ Implement the cost function with L2 regularization. See formula (2) above. Arguments: A3 -- post-activation, output of forward propagation, of shape (output size, number of examples) Y -- "true" labels vector, of shape (output size, number of examples) parameters -- python dictionary containing parameters of the model Returns: cost - value of the regularized loss function (formula (2)) """ m = Y.shape[1] W1 = parameters["W1"] W2 = parameters["W2"] W3 = parameters["W3"] cross_entropy_cost = compute_cost(A3, Y) # This gives you the cross-entropy part of the cost ### START CODE HERE ### (approx. 1 line) L2_regularization_cost = (lambd / (2 * m)) * (np.sum(np.square(W1)) + np.sum(np.square(W2)) + np.sum(np.square(W3))) ### END CODER HERE ### cost = cross_entropy_cost + L2_regularization_cost return cost A3, Y_assess, parameters = compute_cost_with_regularization_test_case() print("cost = " + str(compute_cost_with_regularization(A3, Y_assess, parameters, lambd = 0.1))) ``` **Expected Output**: <table> <tr> <td> **cost** </td> <td> 1.78648594516 </td> </tr> </table> Of course, because you changed the cost, you have to change backward propagation as well! All the gradients have to be computed with respect to this new cost. **Exercise**: Implement the changes needed in backward propagation to take into account regularization. The changes only concern dW1, dW2 and dW3. For each, you have to add the regularization term's gradient ($\frac{d}{dW} ( \frac{1}{2}\frac{\lambda}{m} W^2) = \frac{\lambda}{m} W$). ``` # GRADED FUNCTION: backward_propagation_with_regularization def backward_propagation_with_regularization(X, Y, cache, lambd): """ Implements the backward propagation of our baseline model to which we added an L2 regularization. Arguments: X -- input dataset, of shape (input size, number of examples) Y -- "true" labels vector, of shape (output size, number of examples) cache -- cache output from forward_propagation() lambd -- regularization hyperparameter, scalar Returns: gradients -- A dictionary with the gradients with respect to each parameter, activation and pre-activation variables """ m = X.shape[1] (Z1, A1, W1, b1, Z2, A2, W2, b2, Z3, A3, W3, b3) = cache dZ3 = A3 - Y ### START CODE HERE ### (approx. 1 line) dW3 = 1./m * np.dot(dZ3, A2.T) + (lambd/m) * W3 ### END CODE HERE ### db3 = 1./m * np.sum(dZ3, axis=1, keepdims = True) dA2 = np.dot(W3.T, dZ3) dZ2 = np.multiply(dA2, np.int64(A2 > 0)) ### START CODE HERE ### (approx. 1 line) dW2 = 1./m * np.dot(dZ2, A1.T) + (lambd/m) * W2 ### END CODE HERE ### db2 = 1./m * np.sum(dZ2, axis=1, keepdims = True) dA1 = np.dot(W2.T, dZ2) dZ1 = np.multiply(dA1, np.int64(A1 > 0)) ### START CODE HERE ### (approx. 1 line) dW1 = 1./m * np.dot(dZ1, X.T) + (lambd/m) * W1 ### END CODE HERE ### db1 = 1./m * np.sum(dZ1, axis=1, keepdims = True) gradients = {"dZ3": dZ3, "dW3": dW3, "db3": db3,"dA2": dA2, "dZ2": dZ2, "dW2": dW2, "db2": db2, "dA1": dA1, "dZ1": dZ1, "dW1": dW1, "db1": db1} return gradients X_assess, Y_assess, cache = backward_propagation_with_regularization_test_case() grads = backward_propagation_with_regularization(X_assess, Y_assess, cache, lambd = 0.7) print ("dW1 = "+ str(grads["dW1"])) print ("dW2 = "+ str(grads["dW2"])) print ("dW3 = "+ str(grads["dW3"])) ``` **Expected Output**: <table> <tr> <td> **dW1** </td> <td> [[-0.25604646 0.12298827 -0.28297129] [-0.17706303 0.34536094 -0.4410571 ]] </td> </tr> <tr> <td> **dW2** </td> <td> [[ 0.79276486 0.85133918] [-0.0957219 -0.01720463] [-0.13100772 -0.03750433]] </td> </tr> <tr> <td> **dW3** </td> <td> [[-1.77691347 -0.11832879 -0.09397446]] </td> </tr> </table> Let's now run the model with L2 regularization $(\lambda = 0.7)$. The `model()` function will call: - `compute_cost_with_regularization` instead of `compute_cost` - `backward_propagation_with_regularization` instead of `backward_propagation` ``` parameters = model(train_X, train_Y, lambd = 0.7) print ("On the train set:") predictions_train = predict(train_X, train_Y, parameters) print ("On the test set:") predictions_test = predict(test_X, test_Y, parameters) ``` Congrats, the test set accuracy increased to 93%. You have saved the French football team! You are not overfitting the training data anymore. Let's plot the decision boundary. ``` plt.title("Model with L2-regularization") axes = plt.gca() axes.set_xlim([-0.75,0.40]) axes.set_ylim([-0.75,0.65]) plot_decision_boundary(lambda x: predict_dec(parameters, x.T), train_X, train_Y) ``` **Observations**: - The value of $\lambda$ is a hyperparameter that you can tune using a dev set. - L2 regularization makes your decision boundary smoother. If $\lambda$ is too large, it is also possible to "oversmooth", resulting in a model with high bias. **What is L2-regularization actually doing?**: L2-regularization relies on the assumption that a model with small weights is simpler than a model with large weights. Thus, by penalizing the square values of the weights in the cost function you drive all the weights to smaller values. It becomes too costly for the cost to have large weights! This leads to a smoother model in which the output changes more slowly as the input changes. <font color='blue'> **What you should remember** -- the implications of L2-regularization on: - The cost computation: - A regularization term is added to the cost - The backpropagation function: - There are extra terms in the gradients with respect to weight matrices - Weights end up smaller ("weight decay"): - Weights are pushed to smaller values. ## 3 - Dropout Finally, **dropout** is a widely used regularization technique that is specific to deep learning. **It randomly shuts down some neurons in each iteration.** Watch these two videos to see what this means! <!-- To understand drop-out, consider this conversation with a friend: - Friend: "Why do you need all these neurons to train your network and classify images?". - You: "Because each neuron contains a weight and can learn specific features/details/shape of an image. The more neurons I have, the more featurse my model learns!" - Friend: "I see, but are you sure that your neurons are learning different features and not all the same features?" - You: "Good point... Neurons in the same layer actually don't talk to each other. It should be definitly possible that they learn the same image features/shapes/forms/details... which would be redundant. There should be a solution." !--> <center> <video width="620" height="440" src="images/dropout1_kiank.mp4" type="video/mp4" controls> </video> </center> <br> <caption><center> <u> Figure 2 </u>: Drop-out on the second hidden layer. <br> At each iteration, you shut down (= set to zero) each neuron of a layer with probability $1 - keep\_prob$ or keep it with probability $keep\_prob$ (50% here). The dropped neurons don't contribute to the training in both the forward and backward propagations of the iteration. </center></caption> <center> <video width="620" height="440" src="images/dropout2_kiank.mp4" type="video/mp4" controls> </video> </center> <caption><center> <u> Figure 3 </u>: Drop-out on the first and third hidden layers. <br> $1^{st}$ layer: we shut down on average 40% of the neurons. $3^{rd}$ layer: we shut down on average 20% of the neurons. </center></caption> When you shut some neurons down, you actually modify your model. The idea behind drop-out is that at each iteration, you train a different model that uses only a subset of your neurons. With dropout, your neurons thus become less sensitive to the activation of one other specific neuron, because that other neuron might be shut down at any time. ### 3.1 - Forward propagation with dropout **Exercise**: Implement the forward propagation with dropout. You are using a 3 layer neural network, and will add dropout to the first and second hidden layers. We will not apply dropout to the input layer or output layer. **Instructions**: You would like to shut down some neurons in the first and second layers. To do that, you are going to carry out 4 Steps: 1. In lecture, we dicussed creating a variable $d^{[1]}$ with the same shape as $a^{[1]}$ using `np.random.rand()` to randomly get numbers between 0 and 1. Here, you will use a vectorized implementation, so create a random matrix $D^{[1]} = [d^{[1](1)} d^{[1](2)} ... d^{[1](m)}] $ of the same dimension as $A^{[1]}$. 2. Set each entry of $D^{[1]}$ to be 0 with probability (`1-keep_prob`) or 1 with probability (`keep_prob`), by thresholding values in $D^{[1]}$ appropriately. Hint: to set all the entries of a matrix X to 0 (if entry is less than 0.5) or 1 (if entry is more than 0.5) you would do: `X = (X < 0.5)`. Note that 0 and 1 are respectively equivalent to False and True. 3. Set $A^{[1]}$ to $A^{[1]} * D^{[1]}$. (You are shutting down some neurons). You can think of $D^{[1]}$ as a mask, so that when it is multiplied with another matrix, it shuts down some of the values. 4. Divide $A^{[1]}$ by `keep_prob`. By doing this you are assuring that the result of the cost will still have the same expected value as without drop-out. (This technique is also called inverted dropout.) ``` # GRADED FUNCTION: forward_propagation_with_dropout def forward_propagation_with_dropout(X, parameters, keep_prob = 0.5): """ Implements the forward propagation: LINEAR -> RELU + DROPOUT -> LINEAR -> RELU + DROPOUT -> LINEAR -> SIGMOID. Arguments: X -- input dataset, of shape (2, number of examples) parameters -- python dictionary containing your parameters "W1", "b1", "W2", "b2", "W3", "b3": W1 -- weight matrix of shape (20, 2) b1 -- bias vector of shape (20, 1) W2 -- weight matrix of shape (3, 20) b2 -- bias vector of shape (3, 1) W3 -- weight matrix of shape (1, 3) b3 -- bias vector of shape (1, 1) keep_prob - probability of keeping a neuron active during drop-out, scalar Returns: A3 -- last activation value, output of the forward propagation, of shape (1,1) cache -- tuple, information stored for computing the backward propagation """ np.random.seed(1) # retrieve parameters W1 = parameters["W1"] b1 = parameters["b1"] W2 = parameters["W2"] b2 = parameters["b2"] W3 = parameters["W3"] b3 = parameters["b3"] # LINEAR -> RELU -> LINEAR -> RELU -> LINEAR -> SIGMOID Z1 = np.dot(W1, X) + b1 A1 = relu(Z1) ### START CODE HERE ### (approx. 4 lines) # Steps 1-4 below correspond to the Steps 1-4 described above. D1 = np.random.rand(A1.shape[0], A1.shape[1]) # Step 1: initialize matrix D1 = np.random.rand(..., ...) D1 = D1 < keep_prob # Step 2: convert entries of D1 to 0 or 1 (using keep_prob as the threshold) A1 = A1 * D1 # Step 3: shut down some neurons of A1 A1 = A1 / keep_prob # Step 4: scale the value of neurons that haven't been shut down ### END CODE HERE ### Z2 = np.dot(W2, A1) + b2 A2 = relu(Z2) ### START CODE HERE ### (approx. 4 lines) D2 = np.random.rand(A2.shape[0], A2.shape[1]) # Step 1: initialize matrix D2 = np.random.rand(..., ...) D2 = D2 < keep_prob # Step 2: convert entries of D2 to 0 or 1 (using keep_prob as the threshold) A2 = A2 * D2 # Step 3: shut down some neurons of A2 A2 = A2 / keep_prob # Step 4: scale the value of neurons that haven't been shut down ### END CODE HERE ### Z3 = np.dot(W3, A2) + b3 A3 = sigmoid(Z3) cache = (Z1, D1, A1, W1, b1, Z2, D2, A2, W2, b2, Z3, A3, W3, b3) return A3, cache X_assess, parameters = forward_propagation_with_dropout_test_case() A3, cache = forward_propagation_with_dropout(X_assess, parameters, keep_prob = 0.7) print ("A3 = " + str(A3)) ``` **Expected Output**: <table> <tr> <td> **A3** </td> <td> [[ 0.36974721 0.00305176 0.04565099 0.49683389 0.36974721]] </td> </tr> </table> ### 3.2 - Backward propagation with dropout **Exercise**: Implement the backward propagation with dropout. As before, you are training a 3 layer network. Add dropout to the first and second hidden layers, using the masks $D^{[1]}$ and $D^{[2]}$ stored in the cache. **Instruction**: Backpropagation with dropout is actually quite easy. You will have to carry out 2 Steps: 1. You had previously shut down some neurons during forward propagation, by applying a mask $D^{[1]}$ to `A1`. In backpropagation, you will have to shut down the same neurons, by reapplying the same mask $D^{[1]}$ to `dA1`. 2. During forward propagation, you had divided `A1` by `keep_prob`. In backpropagation, you'll therefore have to divide `dA1` by `keep_prob` again (the calculus interpretation is that if $A^{[1]}$ is scaled by `keep_prob`, then its derivative $dA^{[1]}$ is also scaled by the same `keep_prob`). ``` # GRADED FUNCTION: backward_propagation_with_dropout def backward_propagation_with_dropout(X, Y, cache, keep_prob): """ Implements the backward propagation of our baseline model to which we added dropout. Arguments: X -- input dataset, of shape (2, number of examples) Y -- "true" labels vector, of shape (output size, number of examples) cache -- cache output from forward_propagation_with_dropout() keep_prob - probability of keeping a neuron active during drop-out, scalar Returns: gradients -- A dictionary with the gradients with respect to each parameter, activation and pre-activation variables """ m = X.shape[1] (Z1, D1, A1, W1, b1, Z2, D2, A2, W2, b2, Z3, A3, W3, b3) = cache dZ3 = A3 - Y dW3 = 1./m * np.dot(dZ3, A2.T) db3 = 1./m * np.sum(dZ3, axis=1, keepdims = True) dA2 = np.dot(W3.T, dZ3) ### START CODE HERE ### (≈ 2 lines of code) dA2 = dA2 * D2 # Step 1: Apply mask D2 to shut down the same neurons as during the forward propagation dA2 = dA2 / keep_prob # Step 2: Scale the value of neurons that haven't been shut down ### END CODE HERE ### dZ2 = np.multiply(dA2, np.int64(A2 > 0)) dW2 = 1./m * np.dot(dZ2, A1.T) db2 = 1./m * np.sum(dZ2, axis=1, keepdims = True) dA1 = np.dot(W2.T, dZ2) ### START CODE HERE ### (≈ 2 lines of code) dA1 = dA1 * D1 # Step 1: Apply mask D1 to shut down the same neurons as during the forward propagation dA1 = dA1 / keep_prob # Step 2: Scale the value of neurons that haven't been shut down ### END CODE HERE ### dZ1 = np.multiply(dA1, np.int64(A1 > 0)) dW1 = 1./m * np.dot(dZ1, X.T) db1 = 1./m * np.sum(dZ1, axis=1, keepdims = True) gradients = {"dZ3": dZ3, "dW3": dW3, "db3": db3,"dA2": dA2, "dZ2": dZ2, "dW2": dW2, "db2": db2, "dA1": dA1, "dZ1": dZ1, "dW1": dW1, "db1": db1} return gradients X_assess, Y_assess, cache = backward_propagation_with_dropout_test_case() gradients = backward_propagation_with_dropout(X_assess, Y_assess, cache, keep_prob = 0.8) print ("dA1 = " + str(gradients["dA1"])) print ("dA2 = " + str(gradients["dA2"])) ``` **Expected Output**: <table> <tr> <td> **dA1** </td> <td> [[ 0.36544439 0. -0.00188233 0. -0.17408748] [ 0.65515713 0. -0.00337459 0. -0. ]] </td> </tr> <tr> <td> **dA2** </td> <td> [[ 0.58180856 0. -0.00299679 0. -0.27715731] [ 0. 0.53159854 -0. 0.53159854 -0.34089673] [ 0. 0. -0.00292733 0. -0. ]] </td> </tr> </table> Let's now run the model with dropout (`keep_prob = 0.86`). It means at every iteration you shut down each neurons of layer 1 and 2 with 14% probability. The function `model()` will now call: - `forward_propagation_with_dropout` instead of `forward_propagation`. - `backward_propagation_with_dropout` instead of `backward_propagation`. ``` parameters = model(train_X, train_Y, keep_prob = 0.86, learning_rate = 0.3) print ("On the train set:") predictions_train = predict(train_X, train_Y, parameters) print ("On the test set:") predictions_test = predict(test_X, test_Y, parameters) ``` Dropout works great! The test accuracy has increased again (to 95%)! Your model is not overfitting the training set and does a great job on the test set. The French football team will be forever grateful to you! Run the code below to plot the decision boundary. ``` plt.title("Model with dropout") axes = plt.gca() axes.set_xlim([-0.75,0.40]) axes.set_ylim([-0.75,0.65]) plot_decision_boundary(lambda x: predict_dec(parameters, x.T), train_X, train_Y) ``` **Note**: - A **common mistake** when using dropout is to use it both in training and testing. You should use dropout (randomly eliminate nodes) only in training. - Deep learning frameworks like [tensorflow](https://www.tensorflow.org/api_docs/python/tf/nn/dropout), [PaddlePaddle](http://doc.paddlepaddle.org/release_doc/0.9.0/doc/ui/api/trainer_config_helpers/attrs.html), [keras](https://keras.io/layers/core/#dropout) or [caffe](http://caffe.berkeleyvision.org/tutorial/layers/dropout.html) come with a dropout layer implementation. Don't stress - you will soon learn some of these frameworks. <font color='blue'> **What you should remember about dropout:** - Dropout is a regularization technique. - You only use dropout during training. Don't use dropout (randomly eliminate nodes) during test time. - Apply dropout both during forward and backward propagation. - During training time, divide each dropout layer by keep_prob to keep the same expected value for the activations. For example, if keep_prob is 0.5, then we will on average shut down half the nodes, so the output will be scaled by 0.5 since only the remaining half are contributing to the solution. Dividing by 0.5 is equivalent to multiplying by 2. Hence, the output now has the same expected value. You can check that this works even when keep_prob is other values than 0.5. ## 4 - Conclusions **Here are the results of our three models**: <table> <tr> <td> **model** </td> <td> **train accuracy** </td> <td> **test accuracy** </td> </tr> <td> 3-layer NN without regularization </td> <td> 95% </td> <td> 91.5% </td> <tr> <td> 3-layer NN with L2-regularization </td> <td> 94% </td> <td> 93% </td> </tr> <tr> <td> 3-layer NN with dropout </td> <td> 93% </td> <td> 95% </td> </tr> </table> Note that regularization hurts training set performance! This is because it limits the ability of the network to overfit to the training set. But since it ultimately gives better test accuracy, it is helping your system. Congratulations for finishing this assignment! And also for revolutionizing French football. :-) <font color='blue'> **What we want you to remember from this notebook**: - Regularization will help you reduce overfitting. - Regularization will drive your weights to lower values. - L2 regularization and Dropout are two very effective regularization techniques.
github_jupyter
## Yield Data ``` import pandas as pd import numpy as np import altair as alt import os pwd vegetables = pd.read_csv('MichiganVegetableData.csv') commodity_list1 = vegetables['Commodity'].unique().tolist() for commodity in commodity_list1: commoditydf = vegetables[vegetables['Commodity'] == commodity] mi_commodity_YIELD = commoditydf[commoditydf['Data Item'].str.contains("YIELD")] year_length = len(mi_commodity_YIELD.Year.unique().tolist()) if year_length > 15: print(commodity) ``` ## Cucumbers ``` mi_cucumbers = vegetables[vegetables['Commodity'] == 'CUCUMBERS'] mi_cucumbers['Data Item'].unique() mi_cucumbers_yield = mi_cucumbers[mi_cucumbers['Data Item'] == 'CUCUMBERS, PROCESSING, PICKLES - YIELD, MEASURED IN TONS / ACRE'] mi_cucumbers_yield.Year.unique() #mi_cucumbers_yield #cucumbers_ordered cucumbers_yield_stripped_data = mi_cucumbers_yield[['Year', 'Value']] cucumbers_yield_stripped_data['Value'] = pd.to_numeric( cucumbers_yield_stripped_data['Value'].str.replace(',', ''), errors='coerce') cucumbers_yield_stripped_data ``` ## Pumpkin ``` mi_pumpkins = vegetables[vegetables['Commodity'] == 'PUMPKINS'] mi_pumpkins_yield = mi_pumpkins[mi_pumpkins['Data Item'] == 'PUMPKINS - YIELD, MEASURED IN CWT / ACRE'] mi_pumpkins_yield.Year.unique() pumpkins_yield_stripped_data = mi_pumpkins_yield[['Year', 'Value']] pumpkins_yield_stripped_data ``` ## Cabbage ``` mi_cabbage = vegetables[vegetables['Commodity'] == 'CABBAGE'] #mi_cabbage['Data Item'].unique() mi_cabbage_yield = mi_cabbage[mi_cabbage['Data Item'] == 'CABBAGE, FRESH MARKET - YIELD, MEASURED IN CWT / ACRE'] #mi_cabbage_yield mi_cabbage_yield.Year.unique() cabbage_yield_stripped_data = mi_cabbage_yield[['Year', 'Value']] cabbage_yield_stripped_data ``` ## Potatoes ``` mi_potatoes = vegetables[vegetables['Commodity'] == 'POTATOES'] #mi_potatoes['Data Item'].unique() potatoes_yield = mi_potatoes[mi_potatoes['Data Item'] == 'POTATOES - YIELD, MEASURED IN CWT / ACRE'] potatoes_yield.Year.unique() potatoes_year_only = potatoes_yield[potatoes_yield['Period'] == 'YEAR'] potatoes_yield_stripped_data = potatoes_year_only[['Year', 'Value']] potatoes_yield_stripped_data ``` ## Squash ``` mi_squash = vegetables[vegetables['Commodity'] == 'SQUASH'] squash_yield = mi_squash[mi_squash['Data Item'] == "SQUASH - YIELD, MEASURED IN CWT / ACRE"] squash_yield.Year.unique() squash_yield_stripped_data = squash_yield[['Year', 'Value']] squash_yield_stripped_data ``` ## Carrots ``` mi_carrots = vegetables[vegetables['Commodity'] == 'CARROTS'] #mi_carrots['Data Item'].unique() carrots_yield = mi_carrots[mi_carrots['Data Item'] == "CARROTS, FRESH MARKET - YIELD, MEASURED IN CWT / ACRE"] carrots_yield.Year.unique() carrots_yield_stripped_data = carrots_yield[['Year', 'Value']] carrots_yield_stripped_data ``` ## Celery ``` mi_celery = vegetables[vegetables['Commodity'] == 'CELERY'] #mi_celery['Data Item'].unique() celery_yield = mi_celery[mi_celery['Data Item'] == "CELERY - YIELD, MEASURED IN CWT / ACRE"] celery_yield.Year.unique() celery_yield_stripped_data = celery_yield[['Year', 'Value']] celery_yield_stripped_data ``` ## Onions ``` mi_onions = vegetables[vegetables['Commodity'] == 'ONIONS'] #mi_onions['Data Item'].unique() onion_yield = mi_onions[mi_onions['Data Item'] == "ONIONS, DRY, SUMMER, STORAGE - YIELD, MEASURED IN CWT / ACRE"] onion_yield.Year.nunique() onion_yield_stripped_data = onion_yield[['Year', 'Value']] onion_yield_stripped_data onion_yield_stripped_data.to_csv('onions.csv', index=False) ``` ## Peppers ``` mi_peppers = vegetables[vegetables['Commodity'] == 'PEPPERS'] #mi_peppers['Data Item'].unique() pepper_yield = mi_peppers[mi_peppers['Data Item'] == "PEPPERS, BELL - YIELD, MEASURED IN CWT / ACRE"] pepper_yield.Year.unique() peppers_yield_stripped_data = pepper_yield[['Year', 'Value']] peppers_yield_stripped_data peppers_yield_stripped_data.to_csv('peppers.csv', index=False) ``` ## Corn ``` mi_corn = vegetables[vegetables['Commodity'] == 'SWEET CORN'] #mi_corn['Data Item'].unique() corn_yield = mi_corn[mi_corn['Data Item'] == "SWEET CORN, FRESH MARKET - YIELD, MEASURED IN CWT / ACRE"] corn_yield.Year.unique() corn_yield_stripped_data = corn_yield[['Year', 'Value']] corn_yield_stripped_data corn_yield_stripped_data.to_csv('corn.csv', index=False) ``` ## Tomatoes ``` mi_tomatoes = vegetables[vegetables['Commodity'] == 'TOMATOES'] #mi_tomatoes['Data Item'].unique() tomatoes_yield = mi_tomatoes[mi_tomatoes['Data Item'] == "TOMATOES, IN THE OPEN, FRESH MARKET - YIELD, MEASURED IN CWT / ACRE"] tomatoes_yield.Year.unique() tomatoes_yield_stripped_data = tomatoes_yield[['Year', 'Value']] tomatoes_yield_stripped_data tomatoes_yield_stripped_data.to_csv('tomatoes.csv', index=False) ``` ## Asparagus ``` mi_asparagus = vegetables[vegetables['Commodity'] == 'ASPARAGUS'] #mi_asparagus['Data Item'].unique() asparagus_yield = mi_asparagus[mi_asparagus['Data Item'] == "ASPARAGUS - YIELD, MEASURED IN CWT / ACRE"] asparagus_yield.Year.unique() asparagus_yield_stripped_data = asparagus_yield[['Year', 'Value']] asparagus_yield_stripped_data asparagus_yield_stripped_data.to_csv('asparagus.csv', index=False) ``` ## Merging DataFrames ``` #cucumbers carrots onions corn tomatoes asparagus cucumbers_yield_stripped_data['Crop'] = 'Cucumbers' cucumbers_yield_stripped_data carrots_yield_stripped_data ['Crop'] = 'Carrots' cucumbers_and_carrots = cucumbers_yield_stripped_data.merge(carrots_yield_stripped_data, on='Year') carrots_cucumbers = pd.concat([carrots_yield_stripped_data, cucumbers_yield_stripped_data]) onion_yield_stripped_data['Crop'] = 'Onions' corn_yield_stripped_data['Crop'] = 'Corn' asparagus_yield_stripped_data['Crop'] = 'Asparagus' tomatoes_yield_stripped_data['Crop'] = 'Tomatoes' squash_yield_stripped_data['Crop'] = 'Squash' celery_yield_stripped_data['Crop'] = 'Celery' peppers_yield_stripped_data['Crop'] = 'Peppers' pumpkins_yield_stripped_data['Crop'] = 'Pumpkins' cabbage_yield_stripped_data['Crop'] = 'Cabbage' carrots_cucumbers_asparagus = pd.concat([carrots_cucumbers, asparagus_yield_stripped_data]) carrots_cucumbers_asparagus carrots_cucumbers_asparagus_tomatoes = pd.concat([carrots_cucumbers_asparagus, tomatoes_yield_stripped_data]) carrots_cucumbers_asparagus_tomatoes_onions = pd.concat([carrots_cucumbers_asparagus_tomatoes, onion_yield_stripped_data]) carrots_cucumbers_asparagus_tomatoes_onions.Crop.unique() carrots_cucumbers_asparagus_tomatoes_onions_cabbage = pd.concat([carrots_cucumbers_asparagus_tomatoes_onions, cabbage_yield_stripped_data]) carrots_cucumbers_asparagus_tomatoes_onions_cabbage_pumpkins = pd.concat([carrots_cucumbers_asparagus_tomatoes_onions_cabbage, pumpkins_yield_stripped_data]) carrots_cucumbers_asparagus_tomatoes_onions_cabbage_pumpkins_squash = pd.concat([carrots_cucumbers_asparagus_tomatoes_onions_cabbage_pumpkins, squash_yield_stripped_data]) carrots_cucumbers_asparagus_tomatoes_onions_cabbage_pumpkins_squash_peppers = pd.concat([carrots_cucumbers_asparagus_tomatoes_onions_cabbage_pumpkins_squash, peppers_yield_stripped_data]) vegetable_yield = pd.concat([carrots_cucumbers_asparagus_tomatoes_onions_cabbage_pumpkins_squash_peppers, celery_yield_stripped_data]) from vega_datasets import data source = data.stocks() vegetable_yield all_crops = alt.Chart(vegetable_yield).mark_line().encode( x='Year:N', y=alt.Y('Value:Q', title = 'Yield (CWT/ACRE)'), color='Crop', strokeDash='Crop', ) asparagus_and_cucumbers = pd.concat([asparagus_yield_stripped_data, cucumbers_yield_stripped_data]) alt.Chart(asparagus_and_cucumbers).mark_line().encode( x='Year:N', y=alt.Y('Value:Q', title = 'Yield (CWT/ACRE)'), color='Crop', strokeDash='Crop', ) crops=list(vegetable_yield['Crop'].unique()) crops.sort() crops selectCrops =alt.selection_multi( fields=['Crop'], init={"Crop":crops[0]}, # notice the binding_radio bind=alt.binding_radio(options=crops, name="Crop"),#edit this line name="Crop" ) radio_chart = all_crops.encode( opacity=alt.condition(selectCrops, alt.value(1.0), alt.value(0.0)) ).add_selection(selectCrops) radio_chart hover = alt.selection_single( fields=["Year"], nearest=True, on="mouseover", empty="none", clear="mouseout", ) selectors = all_crops.mark_point(filled = True, color = 'grey', size = 100).encode( x=alt.X( 'Year'), opacity=alt.condition(hover, alt.value(1), alt.value(0))) selectors tooltips = alt.Chart(vegetable_yield).mark_rule(strokeWidth=2, color="grey").encode( x='Year:T', opacity=alt.condition(hover, alt.value(1), alt.value(0)), tooltip=['Value:Q','Year:N', 'Crop'] ).add_selection(hover) tooltips alt.layer(radio_chart, tooltips, selectors) base = ( alt.Chart(vegetable_yield) .encode( x=alt.X( "Year:T", axis=alt.Axis(title=None, format=("%b %Y"), labelAngle=0, tickCount=6), ), y=alt.Y( "Value:Q", axis=alt.Axis(title='Yield (CWT/ACRE)') ), ) .properties(width=500, height=400) ) radio_select = alt.selection_multi( fields=["Crop"], name="Crop", ) crop_color_condition = alt.condition( radio_select, alt.Color("Crop:N", legend=None), alt.value("lightgrey") ) make_selector = ( alt.Chart(vegetable_yield) .mark_circle(size=200) .encode( y=alt.Y("Crop:N", axis=alt.Axis(title="Pick Crop", titleFontSize=15)), color=crop_color_condition, ) .add_selection(radio_select) ) highlight_crops = ( base.mark_line(strokeWidth=2) .add_selection(radio_select) .encode(color=crop_color_condition) ).properties(title="Crop Yield by Year") # nearest = alt.selection( # type="single", nearest=True, on="mouseover", fields=["Year"], empty="none" # ) # # Transparent selectors across the chart. This is what tells us # # the x-value of the cursor # selectors = ( # alt.Chart(vegetable_yield) # .mark_point() # .encode( # x="Year:T", # opacity=alt.value(0), # ) # .add_selection(nearest) # ) # points = base.mark_point(size=5, dy=-10).encode( # opacity=alt.condition(nearest, alt.value(1), alt.value(0)) # ).transform_filter(radio_select) # tooltip_text = base.mark_text( # align="left", # dx=-60, # dy=-15, # fontSize=10, # fontWeight="bold", # lineBreak = "\n", # ).encode( # text=alt.condition( # nearest, # alt.Text("Value:Q", format=".2f"), # alt.value(" "), # ), # ).transform_filter(radio_select) # # Draw a rule at the location of the selection # rules = ( # alt.Chart(vegetable_yield) # .mark_rule(color="black", strokeWidth=2) # .encode( # x="Year:T", # ) # .transform_filter(nearest) # ) hover = alt.selection_single( fields=["Year"], nearest=True, on="mouseover", empty="none", clear="mouseout", ) tooltips2 = alt.Chart(vegetable_yield).transform_pivot( "Crop", "Value", groupby=["Year"] ).mark_rule(strokeWidth=2, color="red").encode( x='Year:T', opacity=alt.condition(hover, alt.value(1), alt.value(0)), tooltip=["Year", "Asparagus:Q", "Cabbage:Q", "Carrots:Q", "Celery:Q", "Cucumbers:Q", "Onions:Q", "Peppers:Q", "Pumpkins:Q", "Squash:Q", "Tomatoes:Q"] ).add_selection(hover) (make_selector | alt.layer(highlight_crops, tooltips2 )) ```
github_jupyter
# DiscreteDP Example: Water Management **Daisuke Oyama** *Faculty of Economics, University of Tokyo* From Miranda and Fackler, <i>Applied Computational Economics and Finance</i>, 2002, Section 7.6.5 ``` %matplotlib inline import itertools import numpy as np from scipy import sparse import matplotlib.pyplot as plt from quantecon.markov import DiscreteDP maxcap = 30 n = maxcap + 1 # Number of states m = n # Number of actions a1, b1 = 14, 0.8 a2, b2 = 10, 0.4 F = lambda x: a1 * x**b1 # Benefit from irrigation U = lambda c: a2 * c**b2 # Benefit from recreational consumption c = s - x probs = [0.1, 0.2, 0.4, 0.2, 0.1] supp_size = len(probs) beta = 0.9 ``` ## Product formulation ``` # Reward array R = np.empty((n, m)) for s, x in itertools.product(range(n), range(m)): R[s, x] = F(x) + U(s-x) if x <= s else -np.inf # Transition probability array Q = np.zeros((n, m, n)) for s, x in itertools.product(range(n), range(m)): if x <= s: for j in range(supp_size): Q[s, x, np.minimum(s-x+j, n-1)] += probs[j] # Create a DiscreteDP ddp = DiscreteDP(R, Q, beta) # Solve the dynamic optimization problem (by policy iteration) res = ddp.solve() # Number of iterations res.num_iter # Optimal policy res.sigma # Optimal value function res.v # Simulate the controlled Markov chain for num_rep times # and compute the average init = 0 nyrs = 50 ts_length = nyrs + 1 num_rep = 10**4 ave_path = np.zeros(ts_length) for i in range(num_rep): path = res.mc.simulate(ts_length, init=init) ave_path = (i/(i+1)) * ave_path + (1/(i+1)) * path ave_path # Stationary distribution of the Markov chain stationary_dist = res.mc.stationary_distributions[0] stationary_dist # Plot sigma, v, ave_path, stationary_dist hspace = 0.3 fig, axes = plt.subplots(2, 2, figsize=(12, 8+hspace)) fig.subplots_adjust(hspace=hspace) axes[0, 0].plot(res.sigma, '*') axes[0, 0].set_xlim(-1, 31) axes[0, 0].set_ylim(-0.5, 5.5) axes[0, 0].set_xlabel('Water Level') axes[0, 0].set_ylabel('Irrigation') axes[0, 0].set_title('Optimal Irrigation Policy') axes[0, 1].plot(res.v) axes[0, 1].set_xlim(0, 30) y_lb, y_ub = 300, 700 axes[0, 1].set_ylim(y_lb, y_ub) axes[0, 1].set_yticks(np.linspace(y_lb, y_ub, 5, endpoint=True)) axes[0, 1].set_xlabel('Water Level') axes[0, 1].set_ylabel('Value') axes[0, 1].set_title('Optimal Value Function') axes[1, 0].plot(ave_path) axes[1, 0].set_xlim(0, nyrs) y_lb, y_ub = 0, 15 axes[1, 0].set_ylim(y_lb, y_ub) axes[1, 0].set_yticks(np.linspace(y_lb, y_ub, 4, endpoint=True)) axes[1, 0].set_xlabel('Year') axes[1, 0].set_ylabel('Water Level') axes[1, 0].set_title('Average Optimal State Path') axes[1, 1].bar(range(n), stationary_dist, align='center') axes[1, 1].set_xlim(-1, n) y_lb, y_ub = 0, 0.15 axes[1, 1].set_ylim(y_lb, y_ub+0.01) axes[1, 1].set_yticks(np.linspace(y_lb, y_ub, 4, endpoint=True)) axes[1, 1].set_xlabel('Water Level') axes[1, 1].set_ylabel('Probability') axes[1, 1].set_title('Stationary Distribution') plt.show() ``` ## State-action pairs formulation ``` # Arrays of state and action indices S = np.arange(n) X = np.arange(m) S_left = S.reshape(n, 1) - X.reshape(1, n) s_indices, a_indices = np.where(S_left >= 0) # Reward vector S_left = S_left[s_indices, a_indices] R = F(X[a_indices]) + U(S_left) # Transition probability array L = len(S_left) Q = sparse.lil_matrix((L, n)) for i, s_left in enumerate(S_left): for j in range(supp_size): Q[i, np.minimum(s_left+j, n-1)] += probs[j] # Create a DiscreteDP ddp = DiscreteDP(R, Q, beta, s_indices, a_indices) # Solve the dynamic optimization problem (by policy iteration) res = ddp.solve() # Number of iterations res.num_iter # Simulate the controlled Markov chain for num_rep times # and compute the average init = 0 nyrs = 50 ts_length = nyrs + 1 num_rep = 10**4 ave_path = np.zeros(ts_length) for i in range(num_rep): path = res.mc.simulate(ts_length, init=init) ave_path = (i/(i+1)) * ave_path + (1/(i+1)) * path # Stationary distribution of the Markov chain stationary_dist = res.mc.stationary_distributions[0] # Plot sigma, v, ave_path, stationary_dist hspace = 0.3 fig, axes = plt.subplots(2, 2, figsize=(12, 8+hspace)) fig.subplots_adjust(hspace=hspace) axes[0, 0].plot(res.sigma, '*') axes[0, 0].set_xlim(-1, 31) axes[0, 0].set_ylim(-0.5, 5.5) axes[0, 0].set_xlabel('Water Level') axes[0, 0].set_ylabel('Irrigation') axes[0, 0].set_title('Optimal Irrigation Policy') axes[0, 1].plot(res.v) axes[0, 1].set_xlim(0, 30) y_lb, y_ub = 300, 700 axes[0, 1].set_ylim(y_lb, y_ub) axes[0, 1].set_yticks(np.linspace(y_lb, y_ub, 5, endpoint=True)) axes[0, 1].set_xlabel('Water Level') axes[0, 1].set_ylabel('Value') axes[0, 1].set_title('Optimal Value Function') axes[1, 0].plot(ave_path) axes[1, 0].set_xlim(0, nyrs) y_lb, y_ub = 0, 15 axes[1, 0].set_ylim(y_lb, y_ub) axes[1, 0].set_yticks(np.linspace(y_lb, y_ub, 4, endpoint=True)) axes[1, 0].set_xlabel('Year') axes[1, 0].set_ylabel('Water Level') axes[1, 0].set_title('Average Optimal State Path') axes[1, 1].bar(range(n), stationary_dist, align='center') axes[1, 1].set_xlim(-1, n) y_lb, y_ub = 0, 0.15 axes[1, 1].set_ylim(y_lb, y_ub+0.01) axes[1, 1].set_yticks(np.linspace(y_lb, y_ub, 4, endpoint=True)) axes[1, 1].set_xlabel('Water Level') axes[1, 1].set_ylabel('Probability') axes[1, 1].set_title('Stationary Distribution') plt.show() ```
github_jupyter
# 第3部 Pythonによるデータ分析|Pythonで学ぶ統計学入門 ## 5章 標本の統計量の性質 ### ライブラリのインポート ``` # 数値計算に使うライブラリ import numpy as np import pandas as pd import scipy as sp from scipy import stats # グラフを描画するライブラリ from matplotlib import pyplot as plt import seaborn as sns sns.set() # 表示桁数の指定 %precision 3 # グラフをjupyter Notebook内に表示させるための指定 %matplotlib inline # 平均4、標準偏差0.8の正規分布を使いまわす population = stats.norm(loc = 4, scale = 0.8) ``` ### 標本平均を何度も計算してみる ``` # 平均値を格納する入れ物 sample_mean_array = np.zeros(10000) # 「データを10個選んで平均値を求める」試行を10000回繰り返す np.random.seed(1) for i in range(0, 10000): sample = population.rvs(size = 10) sample_mean_array[i] = sp.mean(sample) sample_mean_array ``` ### 標本平均の平均値は、母平均に近い ``` # 標本平均の平均値 sp.mean(sample_mean_array) # 標本平均の標準偏差 sp.std(sample_mean_array, ddof = 1) # 標本平均の分布 sns.distplot(sample_mean_array, color = 'black') ``` ### サンプルサイズ大なら、標本平均は母平均に近い ``` # サンプルサイズを10~100010までの範囲で100区切りで変化させる size_array = np.arange( start = 10, stop = 100100, step = 100) size_array # 「標本平均」を格納する入れ物 sample_mean_array_size = np.zeros(len(size_array)) # 「標本平均を求める」試行を、サンプルサイズを変えながら何度も実行 np.random.seed(1) for i in range(0, len(size_array)): sample = population.rvs(size = size_array[i]) sample_mean_array_size[i] = sp.mean(sample) plt.plot(size_array, sample_mean_array_size, color = 'black') plt.xlabel("sample size") plt.ylabel("sample mean") ``` ### 標本平均を何度も計算する関数を作る ``` # 標本平均を何度も計算する関数 def calc_sample_mean(size, n_trial): sample_mean_array = np.zeros(n_trial) for i in range(0, n_trial): sample = population.rvs(size = size) sample_mean_array[i] = sp.mean(sample) return(sample_mean_array) # 動作確認。 # 「データを10個選んで平均値を求める」試行を10000回繰り返した結果をさらに平均する np.random.seed(1) sp.mean(calc_sample_mean(size = 10, n_trial = 10000)) ``` ### サンプルサイズを変えた時の標本平均の分布 ``` np.random.seed(1) # サンプルサイズ10 size_10 = calc_sample_mean(size = 10, n_trial = 10000) size_10_df = pd.DataFrame({ "sample_mean":size_10, "size" :np.tile("size 10", 10000) }) # サンプルサイズ20 size_20 = calc_sample_mean(size = 20, n_trial = 10000) size_20_df = pd.DataFrame({ "sample_mean":size_20, "size" :np.tile("size 20", 10000) }) # サンプルサイズ30 size_30 = calc_sample_mean(size = 30, n_trial = 10000) size_30_df = pd.DataFrame({ "sample_mean":size_30, "size" :np.tile("size 30", 10000) }) # 結合 sim_result = pd.concat( [size_10_df, size_20_df, size_30_df]) # 結果の表示 print(sim_result.head()) sns.violinplot(x = "size", y = "sample_mean", data = sim_result, color = 'gray') ``` ### 標本平均の標準偏差は母標準偏差よりも小さくなる ``` # サンプルサイズを2~100までの範囲で2区切りで変化させる size_array = np.arange( start = 2, stop = 102, step = 2) size_array # 「標本平均の標準偏差」を格納する入れ物 sample_mean_std_array = np.zeros(len(size_array)) # 「標本平均の標準偏差を計算する」試行を、サンプルサイズを変えながら何度も実行 np.random.seed(1) for i in range(0, len(size_array)): sample_mean = calc_sample_mean(size =size_array[i], n_trial = 100) sample_mean_std_array[i] = sp.std(sample_mean, ddof = 1) plt.plot(size_array, sample_mean_std_array, color = 'black') plt.xlabel("sample size") plt.ylabel("mean_std value") ``` ### 標準誤差 ``` # 標本平均の理論上の値:標準誤差 standard_error = 0.8 / np.sqrt(size_array) standard_error plt.plot(size_array, sample_mean_std_array, color = 'black') plt.plot(size_array, standard_error, color = 'black', linestyle = 'dotted') plt.xlabel("sample size") plt.ylabel("mean_std value") ``` ### 標本分散の平均値は、母分散からずれている ``` # 「標本分散」を格納する入れ物 sample_var_array = np.zeros(10000) # 「データを10個選んで標本分散を求める」試行を10000回繰り返す np.random.seed(1) for i in range(0, 10000): sample = population.rvs(size = 10) sample_var_array[i] = sp.var(sample, ddof = 0) # 標本分散の平均値 sp.mean(sample_var_array) ``` ### 不偏分散を使うと、バイアスがなくなる ``` # 「不偏分散」を格納する入れ物 unbias_var_array = np.zeros(10000) # 「データを10個選んで不偏分散を求める」試行を # 10000回繰り返す np.random.seed(1) for i in range(0, 10000): sample = population.rvs(size = 10) unbias_var_array[i] = sp.var(sample, ddof = 1) # 不偏分散の平均値 sp.mean(unbias_var_array) ``` ### サンプルサイズを増やすと、不偏分散は母分散に近づく ``` # サンプルサイズを10~100010までの範囲で100区切りで変化させる size_array = np.arange( start = 10, stop = 100100, step = 100) size_array # 「不偏分散」を格納する入れ物 unbias_var_array_size = np.zeros(len(size_array)) # 「不偏分散を求める」試行を、サンプルサイズを変えながら何度も実行 np.random.seed(1) for i in range(0, len(size_array)): sample = population.rvs(size = size_array[i]) unbias_var_array_size[i] = sp.var(sample, ddof = 1) plt.plot(size_array, unbias_var_array_size, color = 'black') plt.xlabel("sample size") plt.ylabel("unbias var") ``` ### 補足:中心極限定理 ``` # サンプルサイズと試行回数 n_size = 10000 n_trial = 50000 # 表ならば1、裏ならば0を表す coin = np.array([0,1]) # 表が出た回数 count_coin = np.zeros(n_trial) # コインをn_size回投げる試行をn_trial回行う np.random.seed(1) for i in range(0, n_trial): count_coin[i] = sp.sum( np.random.choice(coin, size = n_size, replace = True)) # ヒストグラムを描く sns.distplot(count_coin, color = 'black') ```
github_jupyter
# Setup IAM for Kinesis ``` import boto3 import sagemaker import pandas as pd sess = sagemaker.Session() bucket = sess.default_bucket() role = sagemaker.get_execution_role() region = boto3.Session().region_name sts = boto3.Session().client(service_name="sts", region_name=region) iam = boto3.Session().client(service_name="iam", region_name=region) ``` # Create Kinesis Role ``` iam_kinesis_role_name = "DSOAWS_Kinesis" iam_kinesis_role_passed = False assume_role_policy_doc = { "Version": "2012-10-17", "Statement": [ {"Effect": "Allow", "Principal": {"Service": "kinesis.amazonaws.com"}, "Action": "sts:AssumeRole"}, {"Effect": "Allow", "Principal": {"Service": "firehose.amazonaws.com"}, "Action": "sts:AssumeRole"}, {"Effect": "Allow", "Principal": {"Service": "kinesisanalytics.amazonaws.com"}, "Action": "sts:AssumeRole"}, ], } import json import time from botocore.exceptions import ClientError try: iam_role_kinesis = iam.create_role( RoleName=iam_kinesis_role_name, AssumeRolePolicyDocument=json.dumps(assume_role_policy_doc), Description="DSOAWS Kinesis Role", ) print("Role succesfully created.") iam_kinesis_role_passed = True except ClientError as e: if e.response["Error"]["Code"] == "EntityAlreadyExists": iam_role_kinesis = iam.get_role(RoleName=iam_kinesis_role_name) print("Role already exists. That is OK.") iam_kinesis_role_passed = True else: print("Unexpected error: %s" % e) time.sleep(30) iam_role_kinesis_name = iam_role_kinesis["Role"]["RoleName"] print("Role Name: {}".format(iam_role_kinesis_name)) iam_role_kinesis_arn = iam_role_kinesis["Role"]["Arn"] print("Role ARN: {}".format(iam_role_kinesis_arn)) account_id = sts.get_caller_identity()["Account"] ``` # Specify Stream Name ``` stream_name = "dsoaws-kinesis-data-stream" ``` # Specify Firehose Name ``` firehose_name = "dsoaws-kinesis-data-firehose" ``` # Specify Lambda Function Name ``` lambda_fn_name = "DeliverKinesisAnalyticsToCloudWatch" ``` # Create Policy ``` kinesis_policy_doc = { "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Action": [ "s3:AbortMultipartUpload", "s3:GetBucketLocation", "s3:GetObject", "s3:ListBucket", "s3:ListBucketMultipartUploads", "s3:PutObject", ], "Resource": [ "arn:aws:s3:::{}/kinesis-data-firehose".format(bucket), "arn:aws:s3:::{}/kinesis-data-firehose/*".format(bucket), ], }, { "Effect": "Allow", "Action": ["logs:PutLogEvents"], "Resource": ["arn:aws:logs:{}:{}:log-group:/*".format(region, account_id)], }, { "Effect": "Allow", "Action": [ "kinesis:Get*", "kinesis:DescribeStream", "kinesis:Put*", "kinesis:List*", ], "Resource": ["arn:aws:kinesis:{}:{}:stream/{}".format(region, account_id, stream_name)], }, { "Effect": "Allow", "Action": [ "firehose:*", ], "Resource": ["arn:aws:firehose:{}:{}:deliverystream/{}".format(region, account_id, firehose_name)], }, { "Effect": "Allow", "Action": [ "kinesisanalytics:*", ], "Resource": ["*"], }, { "Sid": "UseLambdaFunction", "Effect": "Allow", "Action": ["lambda:InvokeFunction", "lambda:GetFunctionConfiguration"], "Resource": "arn:aws:lambda:{}:{}:function:{}:$LATEST".format(region, account_id, lambda_fn_name), }, { "Effect": "Allow", "Action": "iam:PassRole", "Resource": "arn:aws:iam::*:role/service-role/kinesis-analytics*", }, ], } print(json.dumps(kinesis_policy_doc, indent=4, sort_keys=True, default=str)) ``` # Update Policy ``` import time response = iam.put_role_policy( RoleName=iam_role_kinesis_name, PolicyName="DSOAWS_KinesisPolicy", PolicyDocument=json.dumps(kinesis_policy_doc) ) time.sleep(30) print(json.dumps(response, indent=4, sort_keys=True, default=str)) ``` # Create AWS Lambda IAM Role ``` iam_lambda_role_name = "DSOAWS_Lambda" iam_lambda_role_passed = False assume_role_policy_doc = { "Version": "2012-10-17", "Statement": [ {"Effect": "Allow", "Principal": {"Service": "lambda.amazonaws.com"}, "Action": "sts:AssumeRole"}, {"Effect": "Allow", "Principal": {"Service": "kinesisanalytics.amazonaws.com"}, "Action": "sts:AssumeRole"}, ], } import time from botocore.exceptions import ClientError try: iam_role_lambda = iam.create_role( RoleName=iam_lambda_role_name, AssumeRolePolicyDocument=json.dumps(assume_role_policy_doc), Description="DSOAWS Lambda Role", ) print("Role succesfully created.") iam_lambda_role_passed = True except ClientError as e: if e.response["Error"]["Code"] == "EntityAlreadyExists": iam_role_lambda = iam.get_role(RoleName=iam_lambda_role_name) print("Role already exists. This is OK.") iam_lambda_role_passed = True else: print("Unexpected error: %s" % e) time.sleep(30) iam_role_lambda_name = iam_role_lambda["Role"]["RoleName"] print("Role Name: {}".format(iam_role_lambda_name)) iam_role_lambda_arn = iam_role_lambda["Role"]["Arn"] print("Role ARN: {}".format(iam_role_lambda_arn)) ``` # Create AWS Lambda IAM Policy ``` lambda_policy_doc = { "Version": "2012-10-17", "Statement": [ { "Sid": "UseLambdaFunction", "Effect": "Allow", "Action": ["lambda:InvokeFunction", "lambda:GetFunctionConfiguration"], "Resource": "arn:aws:lambda:{}:{}:function:*".format(region, account_id), }, {"Effect": "Allow", "Action": "cloudwatch:*", "Resource": "*"}, { "Effect": "Allow", "Action": "logs:CreateLogGroup", "Resource": "arn:aws:logs:{}:{}:*".format(region, account_id), }, { "Effect": "Allow", "Action": ["logs:CreateLogStream", "logs:PutLogEvents"], "Resource": "arn:aws:logs:{}:{}:log-group:/aws/lambda/*".format(region, account_id), }, ], } print(json.dumps(lambda_policy_doc, indent=4, sort_keys=True, default=str)) import time response = iam.put_role_policy( RoleName=iam_role_lambda_name, PolicyName="DSOAWS_LambdaPolicy", PolicyDocument=json.dumps(lambda_policy_doc) ) time.sleep(30) print(json.dumps(response, indent=4, sort_keys=True, default=str)) ``` # Store Variables for Next Notebooks ``` %store stream_name %store firehose_name %store iam_kinesis_role_name %store iam_role_kinesis_arn %store iam_lambda_role_name %store iam_role_lambda_arn %store lambda_fn_name %store iam_kinesis_role_passed %store iam_lambda_role_passed %store %%javascript Jupyter.notebook.save_checkpoint() Jupyter.notebook.session.delete(); ```
github_jupyter
# Plot unit conversions This notebook demonstrates some examples of different kinds of units, and the circumstances under which they are converted and displayed. ``` %matplotlib inline import sys import atomica as at import matplotlib.pyplot as plt import numpy as np import sciris as sc from IPython.display import display, HTML testdir = at.parent_dir() P = at.Project(framework='unit_demo_framework.xlsx',databook='unit_demo_databook.xlsx') P.load_progbook('unit_demo_progbook.xlsx') res = P.run_sim('default','default',at.ProgramInstructions(start_year=2018)) ``` This test example has examples of parameters with different timescales, and different types of programs. ##### Parameters - `recrate` - Duration in months - `infdeath` - Weekly probability - `susdeath` - Daily probability - `foi` - Annual probability ``` d = at.PlotData(res,outputs=['recrate','infdeath','susdeath','foi','sus:inf','susdeath:flow','dead'],pops='adults') at.plot_series(d,axis='pops'); ``` Notice that parameters are plotted in their native units. For example, a probability per day is shown as probability per day, matching the numbers that were entered in the databook. Aggregating these units without specifying the aggregation method will result in either integration or averaging as most appropriate for the units of the underlying quantity: ``` for output in ['recrate','infdeath','susdeath','foi','sus:inf','susdeath:flow','dead']: d = at.PlotData(res,outputs=output,pops='adults',t_bins=10) at.plot_bars(d); ``` Accumulation will result in the units and output name being updated appropriately: ``` d = at.PlotData(res,outputs='sus:inf',pops='adults',accumulate='integrate',project=P) at.plot_series(d); d = at.PlotData(res,outputs='sus',pops='adults',accumulate='integrate',project=P) at.plot_series(d); ``` ##### Programs - `Risk avoidance` - Continuous - `Harm reduction 1` - Continuous - `Harm reduction 2` - Continuous - `Treatment 1` - One-off - `Treatment 2` - One-off Programs with continuous coverage cover a certain number of people every year: ``` d = at.PlotData.programs(res,outputs='Risk avoidance',quantity='coverage_number') at.plot_series(d); ``` Programs with one-off coverage cover a number of people at each time step. This is the number that gets returned by `Result.get_coverage()` but it is automatically annualized for plotting: ``` annual_coverage = res.model.progset.programs['Treatment 1'].spend_data.vals[0]/res.model.progset.programs['Treatment 1'].unit_cost.vals[0] timestep_coverage = res.get_coverage('number')['Treatment 1'][0] print('Annual coverage = %g, Timestep coverage = %g' % (annual_coverage, timestep_coverage)) d = at.PlotData.programs(res,outputs='Treatment 1',quantity='coverage_number') at.plot_series(d) ``` These units are handled automatically when aggregating. For example, consider computing the number of people covered over a period of time: ``` d = at.PlotData.programs(res,outputs='Treatment 1',quantity='coverage_number',t_bins=[2000,2000.5]) at.plot_bars(d); d = at.PlotData.programs(res,outputs='Treatment 1',quantity='coverage_number',t_bins=[2000,2002]) at.plot_bars(d); d = at.PlotData.programs(res,outputs='Treatment 1',quantity='coverage_eligible',t_bins=[2000,2000.5]) at.plot_bars(d); d = at.PlotData.programs(res,outputs='Treatment 1',quantity='coverage_number',t_bins=[2000,2002]) at.plot_bars(d); ```
github_jupyter
# Using the model and best-fit parameters from CenQue, we measure the following values: The "true" SF fraction $$f_{True SF}(\mathcal{M}_*)$$ The "true" SF SMF $$\Phi_{True SF}(\mathcal{M}_*)$$ ``` import numpy as np import pickle import util as UT import observables as Obvs from scipy.interpolate import interp1d # plotting import matplotlib.pyplot as plt %matplotlib inline from ChangTools.plotting import prettyplot from ChangTools.plotting import prettycolors prettyplot() pretty_colors = prettycolors() ``` ## import output from CenQue model with best-fit parameters $$ F_{cenque} ({\bf \theta_{best-fit}}) $$ ``` cenque = pickle.load(open(''.join([UT.dat_dir(), 'Descendant.ABC_posterior.RHOssfrfq_TinkerFq_Std.updated_prior.p']), 'rb')) print cenque.keys() for k in cenque.keys(): if cenque[k] is not None: print k print cenque[k][:10] print cenque['sfr_class'][np.where(cenque['quenched'] != 0)] print cenque['t_quench'][np.where(cenque['quenched'] != 0)] print cenque['t_quench'][np.where((cenque['quenched'] != 0) & (cenque['sfr_class'] == 'star-forming'))] # Star-forming only isSF = np.where((cenque['sfr_class'] == 'star-forming') & (cenque['quenched'] == 0)) # quenching #isQing = np.where((cenque['quenched'] == 0) & (cenque['t_quench'] != 999)) isQing = np.where((cenque['quenched'] == 0) & (cenque['sfr_class'] == 'quiescent')) # quiescent isQ = np.where(cenque['quenched'] != 0) assert len(cenque['sfr_class']) == len(isSF[0]) + len(isQing[0]) + len(isQ[0]) ``` # Lets examine SSFRs of each galaxy class ``` esef = Obvs.Ssfr() bin_pssfr_tot, pssfr_tot = esef.Calculate(cenque['mass'], cenque['ssfr']) bin_pssfr_sf, pssfr_sf = esef.Calculate(cenque['mass'][isSF], cenque['ssfr'][isSF]) bin_pssfr_qing, pssfr_qing = esef.Calculate(cenque['mass'][isQing], cenque['ssfr'][isQing]) bin_pssfr_q, pssfr_q = esef.Calculate(cenque['mass'][isQ], cenque['ssfr'][isQ]) fig = plt.figure(figsize=(20, 5)) bkgd = fig.add_subplot(111, frameon=False) for i_m, mass_bin in enumerate(esef.mass_bins): sub = fig.add_subplot(1, 4, i_m+1) in_mbin = (cenque['mass'] >= mass_bin[0]) & (cenque['mass'] < mass_bin[1]) also_sf = (cenque['sfr_class'] == 'star-forming') & (cenque['quenched'] == 0) also_q = cenque['quenched'] != 0 also_qing = (cenque['quenched'] == 0) & (cenque['sfr_class'] == 'quiescent') N_tot = np.float(len(np.where(in_mbin)[0])) f_sf = np.float(len(np.where(in_mbin & also_sf)[0])) / N_tot f_q = np.float(len(np.where(in_mbin & also_q)[0])) / N_tot f_qing = np.float(len(np.where(in_mbin & also_qing)[0])) / N_tot assert f_sf + f_q + f_qing == 1. # Star-forming sub.fill_between(bin_pssfr_sf[i_m], f_sf * pssfr_sf[i_m], np.repeat(0., len(bin_pssfr_sf[i_m])), color='b', edgecolor=None) # Quiescent sub.fill_between(bin_pssfr_q[i_m], f_q * pssfr_q[i_m], np.repeat(0., len(bin_pssfr_q[i_m])), color='r', edgecolor=None) # quienching sub.fill_between(bin_pssfr_qing[i_m], f_qing * pssfr_qing[i_m] + f_q * pssfr_q[i_m] + f_sf * pssfr_sf[i_m], f_q * pssfr_q[i_m] + f_sf * pssfr_sf[i_m], color='g', edgecolor=None) sub.plot(bin_pssfr_tot[i_m], pssfr_tot[i_m], color='k', lw=3, ls='--') massbin_str = ''.join([r'$\mathtt{log \; M_{*} = [', str(mass_bin[0]), ',\;', str(mass_bin[1]), ']}$']) sub.text(-12., 1.4, massbin_str, fontsize=20) # x-axis sub.set_xlim([-13., -9.]) # y-axis sub.set_ylim([0.0, 1.7]) sub.set_yticks([0.0, 0.5, 1.0, 1.5]) if i_m == 0: sub.set_ylabel(r'$\mathtt{P(log \; SSFR)}$', fontsize=25) else: sub.set_yticklabels([]) bkgd.tick_params(labelcolor='none', top='off', bottom='off', left='off', right='off') bkgd.set_xlabel(r'$\mathtt{log \; SSFR \;[yr^{-1}]}$', fontsize=25) plt.show() fig = plt.figure(figsize=(20, 5)) bkgd = fig.add_subplot(111, frameon=False) for i_m, mass_bin in enumerate(esef.mass_bins): sub = fig.add_subplot(1, 4, i_m+1) in_mbin = (cenque['mass'] >= mass_bin[0]) & (cenque['mass'] < mass_bin[1]) also_sf = (cenque['sfr_class'] == 'star-forming') & (cenque['quenched'] == 0) also_q = cenque['quenched'] != 0 also_qing = (cenque['quenched'] == 0) & (cenque['sfr_class'] == 'quiescent') N_tot = np.float(len(np.where(in_mbin)[0])) f_sf = np.float(len(np.where(in_mbin & also_sf)[0])) / N_tot f_q = np.float(len(np.where(in_mbin & also_q)[0])) / N_tot f_qing = np.float(len(np.where(in_mbin & also_qing)[0])) / N_tot assert f_sf + f_q + f_qing == 1. # quienching sub.fill_between(bin_pssfr_qing[i_m], f_qing * pssfr_qing[i_m], np.zeros(len(bin_pssfr_qing[i_m])), color='g', edgecolor=None) # Star-forming sub.fill_between(bin_pssfr_sf[i_m], f_sf * pssfr_sf[i_m] + f_qing * pssfr_qing[i_m], f_qing * pssfr_qing[i_m], color='b', edgecolor=None) # Quiescent sub.fill_between(bin_pssfr_q[i_m], f_q * pssfr_q[i_m] + f_sf * pssfr_sf[i_m] + f_qing * pssfr_qing[i_m], f_sf * pssfr_sf[i_m] + f_qing * pssfr_qing[i_m], color='r', edgecolor=None) sub.plot(bin_pssfr_tot[i_m], pssfr_tot[i_m], color='k', lw=3, ls='--') massbin_str = ''.join([r'$\mathtt{log \; M_{*} = [', str(mass_bin[0]), ',\;', str(mass_bin[1]), ']}$']) sub.text(-12., 1.4, massbin_str, fontsize=20) # x-axis sub.set_xlim([-13., -9.]) # y-axis sub.set_ylim([0.0, 1.7]) sub.set_yticks([0.0, 0.5, 1.0, 1.5]) if i_m == 0: sub.set_ylabel(r'$\mathtt{P(log \; SSFR)}$', fontsize=25) else: sub.set_yticklabels([]) bkgd.tick_params(labelcolor='none', top='off', bottom='off', left='off', right='off') bkgd.set_xlabel(r'$\mathtt{log \; SSFR \;[yr^{-1}]}$', fontsize=25) plt.show() ``` ## Calculate $f_{True SF}$ ``` effq = Obvs.Fq() theta_sfms = {'name': 'linear', 'zslope': 1.14} qf = effq.Calculate(mass=cenque['mass'], sfr=cenque['sfr'], z=UT.z_nsnap(1), theta_SFMS=theta_sfms) # calculate true SF fraction m_low = np.arange(8.0, 12.0, 0.1) m_high = m_low + 0.1 m_mid, f_truesf = np.zeros(len(m_low)), np.zeros(len(m_low)) also_sf = (cenque['sfr_class'] == 'star-forming') & (cenque['quenched'] == 0) for i_m in range(len(m_low)): in_mbin = (cenque['mass'] >= m_low[i_m]) & (cenque['mass'] < m_high[i_m]) N_tot = np.float(len(np.where(in_mbin)[0])) N_sf = np.float(len(np.where(in_mbin & also_sf)[0])) m_mid[i_m] = 0.5 * (m_low[i_m] + m_high[i_m]) f_truesf[i_m] = N_sf/N_tot ``` ### Comparison of $f_{SF} = 1 - f_Q$ versus $f_{True SF}$ ``` fig = plt.figure(figsize=(7,7)) sub = fig.add_subplot(111) sub.plot(qf[0], 1. - qf[1], c='k', ls='--', lw=2, label='$f_{SF} = 1 - f_Q$') sub.plot(m_mid, f_truesf, c='b', ls='-', lw=2, label='$f_{True\;SF}$') f_truesf_interp = interp1d(m_mid, f_truesf) sub.fill_between(qf[0], (1. - qf[1]) - f_truesf_interp(qf[0]), np.zeros(len(qf[0])), color='k', edgecolor=None, label='$\Delta$') # x-axis sub.set_xlim([9., 12.]) sub.set_xlabel('Stellar Mass $(\mathcal{M}_*)$', fontsize=25) sub.set_ylim([0., 1.]) sub.set_ylabel('Star-forming Fraction', fontsize=25) sub.legend(loc = 'upper right', prop={'size': 25}) ``` ## Calculate SMF of (only) star-forming galaxies ``` # total SMF smf_tot = Obvs.getMF(cenque['mass']) # SMF of true SF smf_truesf = Obvs.getMF(cenque['mass'][isSF]) # SMF of galaxies *classified* as SF gal_class = effq.Classify(cenque['mass'], cenque['sfr'], UT.z_nsnap(1), theta_SFMS=theta_sfms) smf_sfclass = Obvs.getMF(cenque['mass'][np.where(gal_class == 'star-forming')]) fig = plt.figure(figsize=(7,7)) sub = fig.add_subplot(111) sub.plot(smf_tot[0], smf_tot[1], c='k', lw=3, label='Total') sub.plot(smf_truesf[0], smf_truesf[1], c='b', lw=3, label='True SF') sub.plot(smf_sfclass[0], smf_sfclass[1], c='k', ls='--') sub.set_xlim([9., 12.]) sub.set_xlabel('Stellar Masses $(\mathcal{M}_*)$', fontsize=25) sub.set_ylim([1e-5, 10**-1.5]) sub.set_yscale('log') sub.set_ylabel('$\Phi$', fontsize=25) ```
github_jupyter
# Extremal linkage networks This notebook contains code accompanying the paper [extremal linkage networks](https://arxiv.org/abs/1904.01817). We first implement the network dynamics and then rely on [TikZ](https://github.com/pgf-tikz/pgf) for visualization. ## The Model We define a random network on an infinite set of layers, each consisting of $N \ge 1$ nodes. The node $i \in \{0, \dots, N - 1\}$ in layer $h \in \mathbb Z$ has a fitness $F_{i, h}$, where we assume the family $\{F_{i, h}\}_{i \in \{0, \dots, N - 1\}, h \in \mathbb Z}$ to be independent and identically distributed (i.i.d.). Then, the number of nodes on layer $h+1$ that are visible for the $i$th node in layer $h$, which we call the *scope* of $(i,h)$, is given by $\varphi(F_{i, h}) \wedge N$, where \begin{equation}\label{eqPhiDef} \varphi(f) = 1 + 2 \lceil f \rceil. \end{equation} Now, $(i, h)$ connects to precisely one visible node $(j, h+1)$ in layer $h+1$, namely the one of maximum fitness. In other words, $$F_{j, h+1} = \max_{j':\, d_N(i, j') \le \lceil F_{i, h}\rceil}F_{j', h+1}.$$ Henceforth, we assume the fitnesses to follow a Fréchet distribution with tail index 1. That is, $$\mathbb P(F \le s) = \exp(-s^{-1}).$$ ## Simulation of Network Dynamics ``` def simulate_network(hrange = 250, layers = 6): """Simulation of the network model # Arguments hrange: horizontal range of the network layers: number of layers # Result fitnesses and selected edge """ #generate fréchet distribution fits = np.array([1/np.log(1/np.random.rand(hrange)) for _ in range(layers)]) fits_int = 1+ np.array(fits, dtype = np.int) #determine possible neighbors neighbs = [[(idx + np.arange(-fit, fit + 1)) % hrange for idx, fit in enumerate(layer) ] for layer in fits_int] #determine selected neighbor sel_edge = [[neighb[np.argmax(fit[neighb])] for neighb in nb] for (fit, nb) in zip(np.roll(fits, -1, 0), neighbs)] return fits, sel_edge ``` Now, we simulate the random network model as described above. ``` import numpy as np #seed seed = 56 np.random.seed(seed) fits, edges = simulate_network() ``` ## Visualization Now, plot the network in tikz. ``` def plot_synapses(edges, idxs = np.arange(102, 131), layers = 4, x_scale = .15, node_scale = .5): """Plot relevant synapses # Arguments idxs: indexes of layer-0 node edges: edges in the linkage graph layers: number of layers to be plotted x_scale: scaling in x-direction node_scale: scaling of nodes # Result tikz representation of graph """ result = [] #horizontal range hrange = len(edges[0]) #plot layer by layer for layer in range(layers): #plot points result +=["\\fill ({0:1.2f}, {1:1.2f}) circle ({2:1.1f}pt);\n".format((idx % hrange) * x_scale, layer, node_scale * np.log(fits)[layer, idx]) for idx in idxs] #plot edges string = "\\draw[line width = .5pt]" string += " ({0:1.2f}, {1:1.2f})--({2:1.2f}, {3:1.2f});\n" path_unordered = [string.format(idx * x_scale, layer, edges[layer][idx] * x_scale, layer + 1) for idx in idxs] result += path_unordered #update indexes idxs = np.unique([edges[layer][idx] for idx in idxs]) #plot points result +=["\\fill ({0:1.2f}, {1:1.2f}) circle ({2:1.1f}pt);\n".format((idx % hrange) * x_scale, layers, node_scale * np.log(fits)[layer + 1, idx]) for idx in idxs] tikz = ''.join(result) return '\\begin{tikzpicture}\n' + tikz + '\\end{tikzpicture}\n' ``` Finally, we write to a file. ``` fname = 'coalesc.tex' f = open(fname, "w") f.write(plot_synapses(idxs, edges)) f.close() !pdflatex evolFig.tex ```
github_jupyter
``` import os import random import math import time import numpy as np import tensorflow as tf import matplotlib.pyplot as plt from tensorflow.keras.optimizers import Adam from tensorflow.keras import Sequential from tensorflow.keras.layers import Dense, Conv1D, MaxPooling1D, Flatten, concatenate, Conv2D, MaxPooling2D from libs.utils import * from libs.generate_boxes import * os.environ['TF_CPP_MIN_LOG_LEVEL'] = '0' tf.get_logger().setLevel('INFO') tf.keras.backend.floatx() plt.style.use('fivethirtyeight') plt.rcParams['figure.figsize'] = (20,10) class StateCNN(tf.keras.Model): def __init__(self, state_size, selected_size, remain_size): super(StateCNN, self).__init__() self.case_cnn1 = Conv2D(filters=16, kernel_size=3, activation='relu', padding='valid', input_shape = selected_size) self.case_cnn2 = Conv2D(filters=16, kernel_size=3, activation='relu', padding='valid') self.select_cnn1 = Conv2D(filters=16, kernel_size=3, activation='relu', padding='valid', input_shape=selected_size) self.select_cnn2 = Conv2D(filters=16, kernel_size=3, activation='relu', padding='valid') self.remain_cnn1 = Conv1D(filters=32, kernel_size=2, activation='relu', padding='same', input_shape = remain_size) self.remain_cnn2 = Conv1D(filters=32, kernel_size=2, activation='relu', padding='same') def call(self, cb_list): c,s,r = cb_list[0], cb_list[1], cb_list[2] c = self.case_cnn1(c) c = MaxPooling2D(pool_size=(2,2))(c) c = self.case_cnn2(c) c = MaxPooling2D(pool_size=(2,2))(c) c = Flatten()(c) s = self.select_cnn1(s) s = MaxPooling2D(pool_size=(2,2))(s) s = self.select_cnn2(s) s = MaxPooling2D(pool_size=(2,2))(s) s = Flatten()(s) r = self.remain_cnn1(r) r = self.remain_cnn2(r) r = MaxPooling1D(pool_size=1)(r) r = Flatten()(r) x = concatenate([c,s,r]) return x class Actor(tf.keras.Model): def __init__(self, output_size): super(Actor, self).__init__() self.d1 = Dense(2048, activation='relu') self.d2 = Dense(1024, activation='relu') self.actor = Dense(output_size) def call(self, inputs): x = self.d1(inputs) x = self.d2(x) actor = self.actor(x) return actor class Critic(tf.keras.Model): def __init__(self, output_size): super(Critic, self).__init__() self.d1 = Dense(2048, activation='relu') self.d2 = Dense(1024, activation='relu') self.critic = Dense(output_size) def call(self, inputs): x = self.d1(inputs) x = self.d2(x) critic = self.critic(x) return critic class ActorCriticAgent: def __init__(self, L=20, B=20, H=20, n_remains=5, lr=1e-8, gamma=0.99): self.state_size = (L,B,1) self.selected_size = (L,B,H) self.remain_size = (n_remains, 3) self.output_size = 1 self.lr = lr self.gamma = gamma self.state_cnn = StateCNN(self.state_size, self.selected_size, self.remain_size) self.actor = Actor(self.output_size) self.critic = Critic(self.output_size) self.actor_optimizer = Adam(learning_rate=self.lr) self.critic_optimizer = Adam(learning_rate=self.lr) self.avg_actor_loss = 0 self.avg_critic_loss = 0 def get_action(self, state, s_locs, r_boxes): sc = self.state_cnn([state, s_locs, r_boxes]) actor = self.actor(sc) argmax_idx = np.where(actor == tf.math.reduce_max(actor)) action_idx = argmax_idx[0][0] return action_idx def actor_loss(): pass def critic_loss(): pass def train(): state_params = self. actor_params = self. critic_params = self. for with tf.GradientTape() as tape: state_cnn = self.state_cnn([state, s_boxes, remains]) actor = self.actor() values = self.critic() next_values = self.critic() expect_reward = actor_loss = critic_loss = self.avg_actor_loss += self.avg_critic_loss += actor_grads = actor_tape.gradient(actor_loss, actor_params) critic_grads = critic_tape.gradient(critic_loss, critic_params) max_episode = 2000 N_MDD = 5 K = 3 N_Candidates = 4 boxes, gt_pos = generation_3dbox_random(case_size=[[20,20,20,]],min_s = 1, N_mdd = N_MDD) boxes = boxes[0] gt_pos = gt_pos[0] num_max_boxes = len(boxes) num_max_remain = num_max_boxes - K num_max_boxes, num_max_remain env = Bpp3DEnv() agent = ActorCriticAgent(L=20, B=20, H=20, n_remains=num_max_remain, lr=1e-6, gamma=0.99) frac_l, avg_actor_loss, avg_critic_loss = [],[],[] for episode in range(max_episode): st = time.time() env.reset() done = False step = 0 used_boxes, pred_pos = [], [] r_boxes = np.array(np.array(boxes).copy()) while not done: state = env.container.copy() k = min(K, len(r_boxes)) step += 1 selected = cbn_select_boxes(r_boxes[:N_Candidates], k) s_order = get_selected_order(selected, k) state_h = env.update_h().copy() in_state, in_r_boxes = raw_to_input(state_h, s_order, r_boxes, num_max_remain) s_loc_c, pred_pos_c, used_boxes_c, next_state_c , num_loaded_box_c, next_cube_c = get_selected_location(s_order, pred_pos, used_boxes, state) action_idx = agent.get_action(in_state, s_loc_c, in_r_boxes) num_loaded_box = num_loaded_box_c[action_idx] if num_loaded_box != 0: new_used_boxes = get_remain(used_boxes, used_boxes_c[action_idx]) r_boxes = get_remain(new_used_boxes, r_boxes) used_boxes = used_boxes_c[action_idx] pred_pos = pred_pos_c[action_idx] env.convert_state(next_cube_c[action_idx]) if len(r_boxes) == 0: done = True else: r_boxes = get_remain(s_order[action_idx], r_boxes) if len(r_boxes) == 0: done = True if done: avg_frac = 0 if len(frac_l) == 0 else np.mean(frac_l) frac_l.append(env.terminal_reward()) agent.train_model() avg_actor_loss.append(agent.avg_actor_loss / float(step)) avg_critic_loss.append(agent.avg_critic_loss / float(step)) log = "=====episode: {:5d} | ".format(e) log += "env.terminal_reward(): {:.3f} | ".format(env.terminal_reward()) log += "actor avg loss : {:6f} ".format(agent.avg_actor_loss / float(step)) log += "critic avg loss : {:6f} ".format(agent.avg_critic_loss / float(step)) log += "time: {:.3f}".format(time.time()-st) print(log) agent.avg_actor_loss, agent.avg_critic_loss = 0, 0 env.reset() done = False step = 0 r_boxes = np.array(np.array(boxes).copy()) state = env.container.copy() k = min(K, len(r_boxes)) step += 1 vis_box(boxes, gt_pos) selected = cbn_select_boxes(r_boxes[:N_Candidates], k) selected s_order = get_selected_order(selected, k) s_order state_h = env.update_h().copy() in_state, in_r_boxes = raw_to_input(state_h, s_order, r_boxes, num_max_remain) pred_pos, used_boxes = [], [] s_loc_c, pred_pos_c, used_boxes_c, next_state_c, num_loaded_box_c, next_cube_c = get_selected_location(s_order, pred_pos, used_boxes, state) action_idx = agent.get_action(in_state, s_loc_c, in_r_boxes) action_idx num_loaded_box_c num_loaded_box = num_loaded_box_c[action_idx] num_loaded_box new_used_boxes = get_remain(used_boxes, used_boxes_c[action_idx]) new_used_boxes r_boxes r_boxes = get_remain(new_used_boxes, r_boxes) r_boxes used_boxes = used_boxes_c[action_idx] pred_pos = pred_pos_c[action_idx] env.convert_state(next_cube_c[action_idx]) t_state = env.container.copy() t_state_h = env.container_h.copy() k = min ```
github_jupyter
``` import numpy as np import matplotlib.pyplot as plt import math import random from math import sqrt def sciPrintR(val, relErr, name=None): if name != None: print name, val, "+-", val * relErr, "(", relErr * 100., "%)" else: print val, "+-", val * relErr, "(", relErr * 100., "%)" def sciPrintD(val, dErr, name=None): if name != None: print name, val, "+-", dErr, "(", (dErr/val) * 100., "%)" else: print val, "+-", dErr, "(", (dErr/val) * 100., "%)" def prodErrorR(errors): errors = np.array(errors) return np.sqrt((errors**2).sum()) def stdPlt(X, Y, title=None): fig = plt.figure(figsize=(8, 16)) if title != None: plt.title(title) ax = fig.add_subplot(111) k_off = 1.05 x_minor_ticks = np.linspace(0, X.max() * k_off + 0.0001, 125) # 104 x_major_ticks = np.array([x_minor_ticks[i] for i in range(0, x_minor_ticks.size, 20)]) y_minor_ticks = np.linspace(0, Y.max() * k_off + 0.0001, 248) # 4822 y_major_ticks = np.array([y_minor_ticks[i] for i in range(0, y_minor_ticks.size, 20)]) ax.set_xticks(x_major_ticks) ax.set_xticks(x_minor_ticks, minor=True) ax.set_yticks(y_major_ticks) ax.set_yticks(y_minor_ticks, minor=True) ax.grid(which='minor', alpha=0.4, linestyle='-') ax.grid(which='major', alpha=0.7, linestyle='-') plt.xlim((0, X.max() * k_off)) plt.ylim((0, Y.max() * k_off)) k = Y.max() / Y.mean() plt.plot([0, X.mean() * k], [0, Y.mean()* k]) #plt.plot(X, Y) plt.scatter(X, Y, s=5, color="black") plt.show() print(math.sqrt(0.1*0.1 + 0.6*0.6 + 0.4*0.4)) prodErrorR([0.1,0.6,0.4]) L = 501 * 1e-3 DL = 1 * 1e-3 am = (50.695 + 2.573 + 2.571 + 0.740) * 1e-3 m1 = 492.7 * 1e-3 m2 = 494.6 * 1e-3 m3 = 491.5 * 1e-3 # 491.5 * 1e-3 Dm = 0.5 * 1e-3 g = 9.815 print(am) freqs = np.array([0, 134.4, 403.0, 675.7, 949.4, 1229.6, 1513.5, 1790.9]) # ! corrr ns = np.array([0, 1, 3, 5, 7, 9, 11, 13]) def pl_works(freqs, ns, m_sum_1): print "freq :" sciPrintD((freqs[1:] / ns[1:]).mean(), (freqs[1:] / ns[1:]).std(ddof=1) / np.size(freqs.size)) # tables wave_lengths = (2. * L) / ns[1:] us = freqs[1:] * wave_lengths # u = lambda * freq print "us :" otd_freq_err = (freqs[1:] / ns[1:]).std(ddof=1) for i in range(us.size): sciPrintR(us[i], prodErrorR([otd_freq_err / (freqs[i + 1] / ns[i + 1]), DL / L])) us_mean = us.mean() Dus_mean = us.std(ddof=1)/np.sqrt(us.size) sciPrintD(us_mean, Dus_mean, "us_mean = ") T = g * m_sum_1 pl = T / (us * us ) # u = sqrt(T / pl) pls = pl print "pls :" for i in range(pl.size): sciPrintR(pl[i], pl.std(ddof=1) / pl[i], "pl[%d]" % (i)) print "pls mean after =" sciPrintR(pl.mean(), prodErrorR([(pl.std(ddof=1) / math.sqrt(pl.size) / pl.mean()), 1 / 501]), "pl_mean_after") # mean freq_base = freqs[1:] / ns[1:] freq_mean = freq_base.mean() Dfreq_mean = freq_base.std(ddof=1) / np.sqrt(freq_base.size) Rfreq_mean = Dfreq_mean / freq_mean sciPrintR(freq_mean, Rfreq_mean, "lambda_1 = ") wave_length = 2. * L / ns[1] Rwave_length = DL / wave_length u = freq_mean * wave_length Ru = prodErrorR([Rwave_length, Rfreq_mean]) Rm_sum_1 = (Dm * 2) / m_sum_1 T = g * m_sum_1 RT = Rm_sum_1 pl = T / (u * u) Rpl = prodErrorR([RT, Ru, Ru]) sciPrintR(pl, Rpl, "pl_mean = ") return pls, us_mean, Dus_mean great_pl_1, us_mean_1, Dus_mean1 = pl_works(freqs, ns, am + m1 + m2) stdPlt(ns, freqs, title=r"$\nu(n) [m = %f]$" % (am + m1 + m2)) print("New start freq = ", freqs[1] * sqrt((m1 + m2 + m3)/(m1 + m2))) freq2 = np.array([0, 161.1, 484.6, 806.7, 1130.5, 1457.55, 1790.7, 2128.0]) ns2 = np.array([0, 1, 3, 5, 7, 9, 11, 13]) great_pl_2, us_mean_2, Dus_mean2 = pl_works(freq2, ns2, am + m1 + m2 + m3 ) stdPlt(ns2, freq2, title=r"$\nu(n) [m = %f]$" % (am + m1 + m2 + m3)) all_pl = np.append(great_pl_1, great_pl_2) # print(all_pl) PL_VAL = all_pl.mean() R_PL_VAL = all_pl.std(ddof=1)/np.sqrt(all_pl.size) / PL_VAL R_PL_VAL = prodErrorR([R_PL_VAL, Dm / (am + m1+ m2+m3), 1. / 501.]) sciPrintR(PL_VAL * 1e6, R_PL_VAL) ```
github_jupyter
``` # libraries from pandas_datareader import data from datetime import datetime import pandas as pd import numpy as np import matplotlib.pyplot as plt import seaborn as sns import os # configure %matplotlib inline sns.set(style = 'whitegrid') # path os.getcwd() # import india rain data india_rain = pd.read_csv('C:\\Users\\Joseph\\Desktop\\AlphaLearn\\Data\\rainfall-india.csv') # summary print(india_rain.head()) print(india_rain.info()) print(india_rain.columns) ## data preprocessing # get time month = list(pd.Series(list(india_rain[' Month'])).apply(lambda t: t.split(' ')[1])) year = [str(time) for time in list(india_rain[' Year'])] zipped_dates = zip(month, year) # combine and cast to datetime object string_dates = [str(t[0]) + "-" + str(t[1]) for t in list(zipped_dates)] obj_dates = pd.Series([datetime.strptime(sd, '%b-%Y') for sd in string_dates], name='Date') # rename rainfall, create dataframe df_india_rainfall = pd.DataFrame(india_rain['Rainfall - (MM)']) start_date = '2013-01-01' end_date = '2016-01-01' # set index to date time df_india_rainfall = df_india_rainfall.set_index(obj_dates) df_india_rainfall = df_india_rainfall.loc[start_date:end_date] df_india_rainfall = df_india_rainfall[0:36] df_india_rainfall.head() # plot data over time df_india_rainfall.plot(figsize = (30, 10), linewidth = 3, fontsize = 20) plt.legend(labels = ['rainfall (mm)'], loc = 'upper left', fontsize = 20) plt.xlabel('Year', fontsize = 20, labelpad = 20); # get gold data start_date = '2013-01-01' end_date = '2016-01-01' gld_price = data.DataReader('GLD', 'yahoo', start_date, end_date) gld_price.head() gld_price.max() gld_price.info() gld_monthly = gld_price.resample('MS').mean() gld_monthly.head() df_gld_price = pd.DataFrame(gld_monthly['Adj Close']) df_gld_price.head() # plot gld price over time df_gld_price.plot(figsize = (30, 10), linewidth = 3, fontsize = 20) plt.legend(labels = ['price'], loc = 'upper left', fontsize = 20) plt.xlabel('Year', fontsize = 20, labelpad = 20); plt.figure(figsize = (10, 8)) sns.jointplot(x = df_india_rainfall['Rainfall - (MM)'], y = df_gld_price['Adj Close'], s = 60) rain = df_india_rainfall['Rainfall - (MM)'] gld = df_gld_price['Adj Close'] # check overall correlation np.corrcoef(rain, gld) # check rolling df_rolling_corr = pd.DataFrame(rain.rolling(3).corr(gld)) # plot 6 month rolling correlation df_rolling_corr.plot(figsize = (20, 10), linewidth = 3, fontsize = 20) plt.legend(labels = ['correlation'], loc = 'upper left', fontsize = 20) plt.xlabel('Year', fontsize = 20, labelpad = 20); plt.figure(figsize = (10, 8)) sns.distplot(df_rolling_corr.dropna(), bins = len(df_rolling_corr)) ``` ### Summary: The point estimate showed a negative correlation of ~ 0.1 between the price of gold and rainfall in India. Further exploration of a 3 month rolling correlation revealed a non-normal distribution of correlations with widely varying oscillations in the correlation values over time. There is no signal found in the relationship between GLD and India's rainfall.
github_jupyter
# MT5 Small Model In this notebook, we will be fine tuning the MT5 Sequence-to-Sequence Transformer model to take a Natural Language structured card specification to Java code. ### Check for Cuda Compatibility. ``` import torch import torch.nn as nn torch.cuda.is_available() using_google_drive = True if using_google_drive: from google.colab import drive drive.mount('/content/gdrive') mahmoud_path = '/content/gdrive/MyDrive/Final Project/' tommy_path = '/content/gdrive/MyDrive/Colab Notebooks/Final Project/' path = mahmoud_path PATH = path %%bash pip -q install transformers pip -q install tqdm pip -q install sentencepiece pip -q install sacrebleu ``` # Tokenizer for the MT5 Model ``` # Tokenizers import transformers #from transformers import MT5ForConditionalGeneration, T5Tokenizer pretrained_model_name = 'google/mt5-small' tokenizer = transformers.AutoTokenizer.from_pretrained(pretrained_model_name) context_ids = tokenizer.encode("When this creature enters the battle field, target creature loses 2 life.") print(tokenizer.convert_ids_to_tokens(context_ids)) ctx_list = [ "You can protect yourself by wearing an N95 mask.", "wearing an N95 mask" ] tokenizer_output = tokenizer.batch_encode_plus( ctx_list, max_length = 12, truncation=True, padding='longest', return_attention_mask=True, return_tensors='pt' ) for key, value in tokenizer_output.items(): print(key) print(value) print('-------------------------') ``` # Dataset Collection and Processing Load the dataset. The framework for making changes to individual points in the dataset is set in the `preprocess_datapoint` method, which at the moment does nothing to our dataset. ``` with open(PATH + 'datasets/train_magic.in') as f: train_x = f.readlines() with open(PATH + 'datasets/train_magic.out') as f: train_y = f.readlines() with open(PATH + 'datasets/test_magic.in') as f: test_x = f.readlines() with open(PATH + 'datasets/test_magic.out') as f: test_y = f.readlines() # Structure the dataset somewhat similarly to the dataset objects training_dataset = [{ 'card': x, 'code': y } for x, y in zip(train_x, train_y)] testing_dataset = [{ 'card': x, 'code': y } for x, y in zip(test_x, test_y )] dataset = { "train": training_dataset, "test": testing_dataset } import json import random from multiprocessing import Pool from tqdm import tqdm, trange def preproc_init(tokenizer_for_model): """ Use this to assign global variables within a new thread Parameters ---------- tokenizer_for_mode: fn The tokenizer for the pretrained transformer model """ global tokenizer tokenizer = tokenizer_for_model def preprocess_datapoint (datapoint): """ Is effectively an identity function, but is here if we do preprocessing later This method will preprocess a single datapoint loaded above. This can involve replacing characters, removing parts of the input or output, etc. The current implementation applies no change to the dict. It can return None if we want to remove this datapoint as well. Parameters ---------- datapoint: dict The dict containing the initial value of each data in the dataset. Each datapoint has the following two datapoints "card": the string for the card description and meta data "code": the string for the card implementation in Java Returns ------- dict A new representation for this individual datapoint. """ # We have access to global vars defined in preproc_init return datapoint def preprocess_dataset(dataset_list, threads, tokenizer): """ Preprocesses the entire dataset in `threads` threads This method opens `threads` new threads in Parameters ---------- dataset_list: dict[] A list of datapoints, where each datapoint is in the shape: "card": the string for the card description and meta data "code": the string for the card implementation in Java threads: int The number of threads to run the preprocessing on tokenizer: fn The tokenizer for the particular pretrained model Returns ------- dict A new representation for every datapoint in the dataset_list """ # Open new threads and map tasks between them with Pool(threads, initializer=preprocess_datapoint, initargs=(tokenizer,)) as p: processed_dataset = list(tqdm(p.imap(preprocess_datapoint, dataset_list), total=len(dataset_list))) # Remove None values in the list processed_dataset = [x for x in processed_dataset if x] json.dump(processed_dataset, open(PATH + "/processed_dataset.json", 'w')) return processed_dataset processed_dataset = preprocess_dataset(dataset['train'], 16, tokenizer) ``` # Building the Model ``` class ModelOutputs: def __init__(self, output_logits=None, loss=None): """ An object containing the output of the CardTranslationModel Parameters ---------- output_logits : torch.tensor shape (batch_size, ans_len) loss : torch.tensor shape (1) The loss of the output """ self.output_logits = output_logits self.loss = loss class CardTranslationModel(nn.Module): def __init__(self, lm=None): """ Initializes the CardTranslationModel with the provided learning mdoel Parameters ---------- lm : pretrained transformer Description of arg1 """ super(CardTranslationModel, self).__init__() self.lm = lm def forward(self, input_ids=None, attention_mask=None, label_ids=None): """ Summary line. Extended description of function. Parameters ---------- input_ids : torch.tensor shape (batch_size, seq_len) ids of the concatenated input tokens attention_mask : torch.tensor shape (batch_size, seq_len) concatenated attention masks label_ids: torch.tensor shape (batch_size, ans_len) the expected code output Returns ------- ModelOutputs Description of return value """ # Feed our input ids into the pretrained transformer lm_output = self.lm( input_ids=input_ids, attention_mask=attention_mask, labels=label_ids, use_cache=False ) # Compute Loss from the output of the learning model total_loss = lm_output['loss'] if False and label_ids is not None: loss_fct = nn.CrossEntropyLoss() total_loss = loss_fct(label_ids, lm_output['logits']) return ModelOutputs( output_logits=lm_output['logits'], loss=total_loss) from transformers import MT5ForConditionalGeneration, T5Tokenizer tokenizer = T5Tokenizer.from_pretrained(pretrained_model_name) # Create the CardTranslationModel using the MT5 Conditional Generation model lm_pretrained = MT5ForConditionalGeneration.from_pretrained(pretrained_model_name) model = CardTranslationModel(lm_pretrained).cuda() ``` ## Up Next Training: ``` import torch # Hyper-parameters: you could try playing with different settings num_epochs = 5 learning_rate = 3e-5 weight_decay = 1e-5 eps = 1e-6 batch_size = 2 warmup_rate = 0.05 card_max_length = 448 code_max_length = 448 # Calculating the number of warmup steps num_training_cases = len(processed_dataset) t_total = (num_training_cases // batch_size + 1) * num_epochs ext_warmup_steps = int(warmup_rate * t_total) # Initializing an AdamW optimizer ext_optim = torch.optim.AdamW(model.parameters(), lr=learning_rate, eps=eps, weight_decay=weight_decay) # Initializing the learning rate scheduler [details are in the BERT paper] # ext_sche = transformers.get_linear_schedule_with_warmup( # ext_optim, num_warmup_steps=ext_warmup_steps, num_training_steps=t_total # ) print("***** Training Info *****") print(" Num examples = %d" % t_total) print(" Num Epochs = %d" % num_epochs) print(" Batch size = %d" % batch_size) print(" Total optimization steps = %d" % t_total) def vectorize_batch(batch, tokenizer): """ Converts the batch of processed datapoints into separate tensors of token ids Converts the batch of processed datapoints into separate tensors of token ids hosted on the GPU. Parameters ---------- batch: str[] shape (batch_size, 1) tokenizer: fn Converts the batch to a tensor of input and output ids Returns ------- input_ids: torch.tensor shape (batch_size, max_input_len) input_attn_mask: torch.tensor shape (batch_size, max_input_len) label_ids: torch.tensor shape (batch_size, max_output_len) """ # Separate the batch into input and output card_batch = [card_data['card'] for card_data in batch] code_batch = [code_data['code'] for code_data in batch] # Encode the card's natural language representation card_encode = tokenizer.batch_encode_plus( card_batch, max_length = card_max_length, truncation = True, padding = 'longest', return_attention_mask = True, return_tensors = 'pt' ) # Encode the card's java code representation code_encode = tokenizer.batch_encode_plus( code_batch, max_length = code_max_length, truncation = True, padding = 'longest', return_attention_mask = True, return_tensors = 'pt' ) # Move the training batch to GPU card_ids = card_encode['input_ids'].cuda() card_attn_mask = card_encode['attention_mask'].cuda() code_ids = code_encode['input_ids'].cuda() return card_ids, card_attn_mask, code_ids import gc model.train() max_grad_norm = 1 training_dataset = processed_dataset[:6000] num_training_cases = len(training_dataset) step_id = 0 for _ in range(num_epochs): random.shuffle(training_dataset) for i in range(0, num_training_cases, batch_size): gc.collect() torch.cuda.empty_cache() batch = training_dataset[i: i + batch_size] input_ids, input_attn_mask, label_ids = vectorize_batch(batch, tokenizer) gc.collect() torch.cuda.empty_cache() model.zero_grad() # Does the same as ext_optim.zero_grad() # Get the model outputs, including (start, end) logits and losses # stored as a ModelOutput object outputs = model( input_ids=input_ids, attention_mask=input_attn_mask, label_ids=label_ids ) gc.collect() torch.cuda.empty_cache() # Back-propagate the loss signal and clip the gradients loss = outputs.loss.mean() loss.backward() torch.nn.utils.clip_grad_norm_(model.parameters(), max_grad_norm) # Update neural network parameters and the learning rate ext_optim.step() # ext_sche.step() # Update learning rate for better convergence if step_id % 100 == 0: print(f'At step {step_id}, the extraction loss = {loss}') step_id += 1 input_ids.detach() input_attn_mask.detach() label_ids.detach() outputs.loss.detach() del input_ids del input_attn_mask del label_ids del outputs torch.cuda.empty_cache() print('Finished Training') torch.save(model.state_dict(), '/content/gdrive/MyDrive/MT5_checkpoint/MT5.pt') import torch import gc gc.collect() torch.cuda.empty_cache() print(torch.cuda.memory_summary(device=None, abbreviated=False)) #model.load_state_dict(torch.load(PATH + 'checkpoint.pt')) #---Sandbox code---# import random batch_size = 2 i = random.randint(0, num_training_cases-1) datapoint = training_dataset[i:i + batch_size] #print(datapoint) card_ids, card_attn_masks, code_ids = vectorize_batch(datapoint, tokenizer) output = model(card_ids, card_attn_masks, code_ids) #---Sandbox code---# import sacrebleu #print(output.output_logits) softmax = torch.nn.Softmax(dim=1)(output.output_logits) input_id = tokenizer([datapoint[_]['card'] for _ in range(2)], padding=True, return_tensors='pt').input_ids.cuda() print('Card: ' + datapoint[0]['card']) #print(input_id) outputs = model.lm.generate(input_id, max_length=1000) #print(outputs.squeeze(0)) #print(outputs[0]) outputs = outputs[0][1:] # remove first pad #print(outputs) #print('wat:',(outputs == 0).nonzero(as_tuple=True)) #print((outputs == 0).nonzero(as_tuple=True)[0][0]) outputs = outputs[:int((outputs == 1).nonzero(as_tuple=True)[0][0])] code = tokenizer.decode(outputs)#[6:] print('Generated code:') print(code) #print(end) print('Ground-truth code: ') print(datapoint[0]['code']) print(type(datapoint[0]['code'])) print(len(code)) # how to sacrebleu: first arg is list of outputs, second arg is list of lists of allowable translations print('BLEU:', sacrebleu.raw_corpus_bleu([code], [[datapoint[0]['code']]]).score) #print(type(tokenizer)) #print(type(code)) def autoregressive_generate(model, tokenizer, card_desc): ''' Applies autoregressive generation on a card description to generate its corresponding code Parameters ---------- model: CardTranslationModel Model used for autoregressive generation tokenizer: transformers.models.tokenizer.PreTrainedTokenizer Tokenizer for encoding the card description card_desc: str or list[str] (batch_size length) card description Returns --------- torch.Tensor containing the sequence of code ids generated from the card description shape: (batch_size, max_seq_len) ''' card_inputs = tokenizer(card_desc, padding=True, return_tensors='pt').input_ids.cuda() return model.lm.generate(card_inputs) def decode(tokenizer, code_ids): ''' Translates code ids into tokens Parameters ---------- tokenizer: transformers.models.tokenizer.PreTrainedTokenizer Tokenizer for decoding code_ids: torch.Tensor shape: (batch_size, max_seq_len) Returns --------- list containing the token seq for each id sequence in the batch shape (batch_size, max_seq_len) ''' decode_output = tokenizer.batch_decode(code_ids) return decode_output test_data = testing_dataset[:200] bleu = 0 for pt in test_data: input_id = tokenizer([pt['card']], padding=True, return_tensors='pt').input_ids.cuda() outputs = model.lm.generate(input_id, max_length=2000) outputs = outputs[0][1:] # remove first pad try: outputs = outputs[:int((outputs == 1).nonzero(as_tuple=True)[0][0])] # truncate at end token except: print('huh', pt['card']) code = tokenizer.decode(outputs) #print('Generated code:') #print(code) #print('Ground-truth code: ') #print(pt['code']) bleu += sacrebleu.raw_corpus_bleu([code], [[pt['code']]]).score #print('baby bleu:', bleu) print("BLEU:", bleu/len(test_data)) ```
github_jupyter
# Retail Demo Store Experimentation Workshop - Interleaving Recommendation Exercise In this exercise we will define, launch, and evaluate the results of an experiment using recommendation interleaving using the experimentation framework implemented in the Retail Demo Store project. If you have not already stepped through the **[3.1-Overview](./3.1-Overview.ipynb)** workshop notebook, please do so now as it provides the foundation built upon in this exercise. It is also recommended, but not required, to complete the **[3.2-AB-Experiment](./3.2-AB-Experiment.ipynb)** workshop notebook. Recommended Time: 30 minutes ## Prerequisites Since this module uses the Retail Demo Store's Recommendation microservice to run experiments across variations that depend on the personalization features of the Retail Demo Store, it is assumed that you have either completed the [Personalization](../1-Personalization/Lab-1-Introduction-and-data-preparation.ipynb) workshop or those resources have been pre-provisioned in your AWS environment. If you are unsure and attending an AWS managed event such as a workshop, check with your event lead. ## Exercise 2: Interleaving Recommendations Experiment For the first exercise, **[3.2-AB-Experiment](./3.2-AB-Experiment.ipynb)**, we demonstrated how to create and run an A/B experiment using two different variations for making product recommendations. We calculated the sample sizes of users needed to reach a statistically significant result comparing the two variations. Then we ran the experiment using a simulation until the sample sizes were reached for both variations. In real-life, depending on the baseline and minimum detectable effect rate combined with your site's user traffic, the amount of time necessary to complete an experiment can take several days to a few weeks. This can be expensive from both an opportunity cost perspective as well as negatively impacting the pace at which experiments and changes can be rolled out to your site. In this exercise we will look at an alternative approach to evaluating product recommendation variations that requires a smaller sample size and shorter experiment durations. This technique is often used as a preliminary step before formal A/B testing to reduce a larger number of variations to just the top performers. Traditional A/B testing is then done against the best performing variations, significantly reducing the overall time necessary for experimentation. We will use the same two variations as the last exercise. The first variation will represent our current implementation using the [**Default Product Resolver**](https://github.com/aws-samples/retail-demo-store/blob/master/src/recommendations/src/recommendations-service/experimentation/resolvers.py) and the second variation will use the [**Personalize Recommendation Resolver**](https://github.com/aws-samples/retail-demo-store/blob/master/src/recommendations/src/recommendations-service/experimentation/resolvers.py). The scenario we are simulating is adding product recommendations powered by Amazon Personalize to the home page and measuring the impact/uplift in click-throughs for products as a result of deploying a personalization strategy. We will use the same hypothesis from our A/B test where the conversion rate of our existing approach is 15% and we expect a 25% lift in this rate by adding personalized recommendations. ### What is Interleaving Recommendation Testing? The approach of interleaving recommendations is to take the recommendations from two or more variations and interleave, or blend, them into a single set of recommendations for *every user in the experiment*. Because each user in the sample is exposed to recommendations from all variations, we gain some key benefits. First, the sample size can be smaller since we don't need separate groups of users for each variation. This also results in a shorter experiment duration. Additionally, this approach is less susceptible to variances in user type and behavior that could throw off the results of an experiment. For example, it's not uncommon to have power users who shop/watch/listen/read much more than a typical user. With multiple sample groups, the behavior of these users can throw off results for their group, particularly with smaller sample sizes. Care must be taken in how recommendations are interleaved, though, to account for position bias in the recommendations and to track variation attribution. There are two common methods to interleaving recommendations. First is a balanced approach where recommendations are taken from each variation in an alternating style where the starting variation is selected randomly. The other approach follows the team-draft analogy where team captains select their "best player" (recommendation) from the variations in random selection order. Both methods can result in different interleaving outputs. Interleaving recommendations as an approach to experimenation got its start with information retrieval systems and search engines (Yahoo! & Bing) where different approaches to ranking results could be measured concurrently. More recently, [Netflix has adopted the interleaving technique](https://medium.com/netflix-techblog/interleaving-in-online-experiments-at-netflix-a04ee392ec55) to rapidly evaluate different approaches to making movie recommendations to its users. The image below depicts the recommendations from two different recommenders/variations (Ranker A and Ranker B) and examples of how they are interleaved. ![Interleaving at Netflix](./images/netflix-interleaving.png) ### InterleavingExperiment Class Before stepping through creating and executing our interleaving test, let's look at the relevant source code for the [**InterleavingExperiment**](https://github.com/aws-samples/retail-demo-store/blob/master/src/recommendations/src/recommendations-service/experimentation/experiment_interleaving.py) class that implements this experiment type in the Retail Demo Store project. As noted in the **[3.1-Overview](./3.1-Overview.ipynb)** notebook, all experiment types are subclasses of the abstract **Experiment** class. See **[3.1-Overview](./3.1-Overview.ipynb)** for more details on the experimentation framework. The `InterleavingExperiment.get_items()` method is where item recommendations are retrieved for the experiment. This method will retrieve recommendations from the resolvers for all variations and then use the configured interleaving method (balanced or team-draft) to interleave the recommendations to produce the final result. Exposure tracking is also implemented to facilitate measuring the outcome of an experiment. The implementations for the balanced and team-draft interleaving methods are not included below but are available in the source code for the Recommendations service. ```python # from src/recommendations/src/recommendations-service/experimentation/experiment_interleaving.py class InterleavingExperiment(Experiment): """ Implements interleaving technique described in research paper by Chapelle et al http://olivier.chapelle.cc/pub/interleaving.pdf """ METHOD_BALANCED = 'balanced' METHOD_TEAM_DRAFT = 'team-draft' def __init__(self, table, **data): super(InterleavingExperiment, self).__init__(table, **data) self.method = data.get('method', InterleavingExperiment.METHOD_BALANCED) def get_items(self, user_id, current_item_id = None, item_list = None, num_results = 10, tracker = None): ... # Initialize array structure to hold item recommendations for each variation variations_data = [[] for x in range(len(self.variations))] # Get recomended items for each variation for i in range(len(self.variations)): resolve_params = { 'user_id': user_id, 'product_id': current_item_id, 'product_list': item_list, 'num_results': num_results * 3 # account for overlaps } variation = self.variations[i] items = variation.resolver.get_items(**resolve_params) variations_data[i] = items # Interleave items to produce result interleaved = [] if self.method == InterleavingExperiment.METHOD_TEAM_DRAFT: interleaved = self._interleave_team_draft(user_id, variations_data, num_results) else: interleaved = self._interleave_balanced(user_id, variations_data, num_results) # Increment exposure for each variation (can be optimized) for i in range(len(self.variations)): self._increment_exposure_count(i) ... return interleaved ``` ### Setup - Import Dependencies Througout this workshop we will need access to some common libraries and clients for connecting to AWS services. Let's set those up now. ``` import boto3 import json import uuid import numpy as np import requests import pandas as pd import random import scipy.stats as scs import time import decimal import matplotlib.pyplot as plt from boto3.dynamodb.conditions import Key from random import randint # import custom scripts for plotting results from src.plot import * from src.stats import * %matplotlib inline plt.style.use('ggplot') # We will be using a DynamoDB table to store configuration info for our experiments. dynamodb = boto3.resource('dynamodb') # Service discovery will allow us to dynamically discover Retail Demo Store resources servicediscovery = boto3.client('servicediscovery') # Retail Demo Store config parameters are stored in SSM ssm = boto3.client('ssm') # Utility class to convert types for printing as JSON. class CompatEncoder(json.JSONEncoder): def default(self, obj): if isinstance(obj, decimal.Decimal): if obj % 1 > 0: return float(obj) else: return int(obj) else: return super(CompatEncoder, self).default(obj) ``` ### Experiment Strategy Datastore Let's create an experiment using the interleaving technique. A DynamoDB table was created by the Retail Demo Store CloudFormation template that we will use to store the configuration information for our experiments. The table name can be found in a system parameter. ``` response = ssm.get_parameter(Name='retaildemostore-experiment-strategy-table-name') table_name = response['Parameter']['Value'] # Do Not Change print('Experiments DDB table: ' + table_name) table = dynamodb.Table(table_name) ``` Next we need to lookup the Amazon Personalize campaign ARN for product recommendations. This is the campaign that was created in the Personalization workshop. ``` response = ssm.get_parameter(Name = '/retaildemostore/personalize/recommended-for-you-arn') campaign_arn = response['Parameter']['Value'] # Do Not Change print('Personalize product recommendations ARN: ' + campaign_arn) ``` ### Create Interleaving Experiment The Retail Demo Store supports running multiple experiments concurrently. For this workshop we will create a single interleaving test/experiment that will expose users of a single group to recommendations from the default behavior and recommendations from Amazon Personalize. The [Recommendations](https://github.com/aws-samples/retail-demo-store/tree/master/src/recommendations) microservice already has logic that supports interleaving experiments when an active experiment is detected. Experiment configurations are stored in a DynamoDB table where each item in the table represents an experiment and has the following fields. - **id** - Uniquely identified this experience (UUID). - **feature** - Identifies the Retail Demo Store feature where the experiment should be applied. The name for the home page product recommendations feature is `home_product_recs`. - **name** - The name of the experiment. Keep the name short but descriptive. It will be used in the UI for demo purposes and when logging events for experiment result tracking. - **status** - The status of the experiment (`ACTIVE`, `EXPIRED`, or `PENDING`). - **type** - The type of test (`ab` for an A/B test, `interleaving` for interleaved recommendations, or `mab` for multi-armed bandit test) - **method** - The interleaving method (`balanced` or `team-draft`) - **variations** - List of configurations representing variations for the experiment. For example, for interleaving tests of the `home_product_recs` feature, the `variations` can be two Amazon Personalize campaign ARNs (variation type `personalize-recommendations`) or a single Personalize campaign ARN and the default product behavior. ``` feature = 'home_product_recs' experiment_name = 'home_personalize_interleaving' # First, make sure there are no other active experiments so we can isolate # this experiment for the exercise. response = table.scan( ProjectionExpression='#k', ExpressionAttributeNames={'#k' : 'id'}, FilterExpression=Key('status').eq('ACTIVE') ) for item in response['Items']: response = table.update_item( Key=item, UpdateExpression='SET #s = :inactive', ExpressionAttributeNames={ '#s' : 'status' }, ExpressionAttributeValues={ ':inactive' : 'INACTIVE' } ) # Query the experiment strategy table to see if our experiment already exists response = table.query( IndexName='feature-name-index', KeyConditionExpression=Key('feature').eq(feature) & Key('name').eq(experiment_name), FilterExpression=Key('status').eq('ACTIVE') ) if response.get('Items') and len(response.get('Items')) > 0: print('Experiment already exists') home_page_experiment = response['Items'][0] else: print('Creating experiment') # Default product resolver variation_0 = { 'type': 'product' } # Amazon Personalize resolver variation_1 = { 'type': 'personalize-recommendations', 'campaign_arn': campaign_arn } home_page_experiment = { 'id': uuid.uuid4().hex, 'feature': feature, 'name': experiment_name, 'status': 'ACTIVE', 'type': 'interleaving', 'method': 'team-draft', 'analytics': {}, 'variations': [ variation_0, variation_1 ] } response = table.put_item( Item=home_page_experiment ) print(json.dumps(response, indent=4)) print(json.dumps(home_page_experiment, indent=4, cls=CompatEncoder)) ``` ## Load Users For our experiment simulation, we will load all Retail Demo Store users and run the experiment until the sample size has been met. First, let's discover the IP address for the Retail Demo Store's [Users](https://github.com/aws-samples/retail-demo-store/tree/master/src/users) service. ``` response = servicediscovery.discover_instances( NamespaceName='retaildemostore.local', ServiceName='users', MaxResults=1, HealthStatus='HEALTHY' ) users_service_instance = response['Instances'][0]['Attributes']['AWS_INSTANCE_IPV4'] print('Users Service Instance IP: {}'.format(users_service_instance)) ``` Next, let's load all users into a local data frame. ``` # Load all users so we have enough to satisfy our sample size requirements. response = requests.get('http://{}/users/all?count=10000'.format(users_service_instance)) users = response.json() users_df = pd.DataFrame(users) pd.set_option('display.max_rows', 5) users_df ``` ## Discover Recommendations Service Next, let's discover the IP address for the Retail Demo Store's [Recommendations](https://github.com/aws-samples/retail-demo-store/tree/master/src/recommendations) service. ``` response = servicediscovery.discover_instances( NamespaceName='retaildemostore.local', ServiceName='recommendations', MaxResults=1, HealthStatus='HEALTHY' ) recommendations_service_instance = response['Instances'][0]['Attributes']['AWS_INSTANCE_IPV4'] print('Recommendation Service Instance IP: {}'.format(recommendations_service_instance)) ``` ## Simulate Experiment Next we will simulate our interleaving recommendation experiment by making calls to the [Recommendations](https://github.com/aws-samples/retail-demo-store/tree/master/src/recommendations) service across the users we just loaded. ### Simulation Function The following `simulate_experiment` function is supplied with the number of trials we want to run and the probability of conversion for each variation for our simulation. It runs the simulation long enough to satisfy the number of trials and calls the Recommendations service for each trial in the experiment. ``` def simulate_experiment(n_trials, probs): """Simulates experiment based on pre-determined probabilities Example: Parameters: n_trials (int): number of trials to run for experiment probs (array float): array of floats containing probability/conversion rate for each variation Returns: df (df) - data frame of simulation data/results """ # will hold exposure/outcome data data = [] print('Simulating experiment for {} users... this may take a few minutes'.format(n_trials)) for idx in range(n_trials): if idx > 0 and idx % 500 == 0: print('Simulated experiment for {} users so far'.format(idx)) row = {} # Get random user user = users[randint(0, len(users)-1)] # Call Recommendations web service to get recommendations for the user response = requests.get('http://{}/recommendations?userID={}&feature={}'.format(recommendations_service_instance, user['id'], feature)) recommendations = response.json() recommendation = recommendations[randint(0, len(recommendations)-1)] variation = recommendation['experiment']['variationIndex'] row['variation'] = variation # Conversion based on probability of variation row['converted'] = np.random.binomial(1, p=probs[variation]) if row['converted'] == 1: # Update experiment with outcome/conversion correlation_id = recommendation['experiment']['correlationId'] requests.post('http://{}/experiment/outcome'.format(recommendations_service_instance), data={'correlationId':correlation_id}) data.append(row) # convert data into pandas dataframe df = pd.DataFrame(data) print('Done') return df ``` ### Run Simulation Next we run the simulation by defining our simulation parameters for the number of trials and probabilities and then call `simulate_experiment`. This will take a few minutes to run. ``` %%time # Number of trials to run N = 2000 # bcr: baseline conversion rate p_A = 0.15 # d_hat: difference in a metric between the two groups, sometimes referred to as minimal detectable effect or lift depending on the context p_B = 0.1875 ab_data = simulate_experiment(N, [p_A, p_B]) ab_data ``` ### Inspect Experiment Summary Statistics Since the **Experiment** class updates statistics on the experiment in the experiment strategy table when a user is exposed to an experiment ("exposure") and when a user converts ("outcome"), we should see updated counts on our experiment. Let's reload our experiment and inspect the exposure and conversion counts for our simulation. ``` response = table.get_item(Key={'id': home_page_experiment['id']}) print(json.dumps(response['Item'], indent=4, cls=CompatEncoder)) ``` Note the `conversions` and `exposures` counts for each variation above. These counts were incremented by the experiment class each time a trial was run (exposure) and a user converted in the `simulate_experiment` function above. ### Analyze Simulation Results To wrap up, let's analyze some of the results from our simulated interleaving experiment by inspecting the actual conversion rate and verifying our target confidence interval and power. First, let's take a closer look at the results of our simulation. We'll start by calculating some summary statistics. ``` ab_summary = ab_data.pivot_table(values='converted', index='variation', aggfunc=np.sum) # add additional columns to the pivot table ab_summary['total'] = ab_data.pivot_table(values='converted', index='variation', aggfunc=lambda x: len(x)) ab_summary['rate'] = ab_data.pivot_table(values='converted', index='variation') ab_summary ``` Next let's isolate data for each variation. ``` A_group = ab_data[ab_data['variation'] == 0] B_group = ab_data[ab_data['variation'] == 1] A_converted, B_converted = A_group['converted'].sum(), B_group['converted'].sum() A_converted, B_converted ``` Determine the actual sample size for each variation. ``` A_total, B_total = len(A_group), len(B_group) A_total, B_total ``` Calculate the actual conversion rates and uplift from our simulation. ``` p_A, p_B = A_converted / A_total, B_converted / B_total p_A, p_B p_B - p_A ``` ### Determining Statistical Significance For simplicity we will use the same approach as our A/B test to determine statistical significance. Let's plot the data from both groups as binomial distributions. ``` fig, ax = plt.subplots(figsize=(12,6)) xA = np.linspace(A_converted-49, A_converted+50, 100) yA = scs.binom(A_total, p_A).pmf(xA) ax.scatter(xA, yA, s=10) xB = np.linspace(B_converted-49, B_converted+50, 100) yB = scs.binom(B_total, p_B).pmf(xB) ax.scatter(xB, yB, s=10) plt.xlabel('converted') plt.ylabel('probability') ``` Based the probabilities from our hypothesis, we should see that the test group in blue (B) converted more users than the control group in red (A). However, the plot above is not a plot of the null and alternate hypothesis. The null hypothesis is a plot of the difference between the probability of the two groups. > Given the randomness of our user selection, group hashing, and probabilities, your simulation results should be different for each simulation run and therefore may or may not be statistically significant. In order to calculate the difference between the two groups, we need to standardize the data. Because the number of samples can be different between the two groups, we should compare the probability of successes, p. According to the central limit theorem, by calculating many sample means we can approximate the true mean of the population from which the data for the control group was taken. The distribution of the sample means will be normally distributed around the true mean with a standard deviation equal to the standard error of the mean. ``` SE_A = np.sqrt(p_A * (1-p_A)) / np.sqrt(A_total) SE_B = np.sqrt(p_B * (1-p_B)) / np.sqrt(B_total) SE_A, SE_B fig, ax = plt.subplots(figsize=(12,6)) xA = np.linspace(0, .3, A_total) yA = scs.norm(p_A, SE_A).pdf(xA) ax.plot(xA, yA) ax.axvline(x=p_A, c='red', alpha=0.5, linestyle='--') xB = np.linspace(0, .3, B_total) yB = scs.norm(p_B, SE_B).pdf(xB) ax.plot(xB, yB) ax.axvline(x=p_B, c='blue', alpha=0.5, linestyle='--') plt.xlabel('Converted Proportion') plt.ylabel('PDF') ``` ## Next Steps You have completed the exercise for implementing an A/B test using the experimentation framework in the Retail Demo Store. Close this notebook and open the notebook for the next exercise, **[3.4-Multi-Armed-Bandit-Experiment](./3.4-Multi-Armed-Bandit-Experiment.ipynb)**. ### References and Further Reading - [Large Scale Validation and Analysis of Interleaved Search Evaluation](http://olivier.chapelle.cc/pub/interleaving.pdf), Chapelle et al - [Innovating Faster on Personalization Algorithms at Netflix Using Interleaving](https://medium.com/netflix-techblog/interleaving-in-online-experiments-at-netflix-a04ee392ec55), Netflix Technology Blog
github_jupyter
``` # Dependencies and Setup %matplotlib inline import matplotlib.pyplot as plt import pandas as pd import numpy as np # Hide warning messages in notebook import warnings warnings.filterwarnings('ignore') # File to Load (Remember to Change These) mouse_drug_data_to_load = "data/mouse_drug_data.csv" clinical_trial_data_to_load = "data/clinicaltrial_data.csv" # Read the Mouse and Drug Data and the Clinical Trial Data mouse_data = pd.read_csv(mouse_drug_data_to_load) clinical_data = pd.read_csv(clinical_trial_data_to_load) # Combine the data into a single dataset datamerge = mouse_data.merge(clinical_data, on="Mouse ID", how="left") # Display the data table for preview datamerge.head() #summary stats datamerge.describe() ``` ## Tumor Response to Treatment ``` # Store the Mean Tumor Volume Data Grouped by Drug and Timepoint tumor_response = datamerge.groupby(['Drug','Timepoint']).mean()['Tumor Volume (mm3)'] # Convert to DataFrame df_tumor_response = (pd.DataFrame(data = tumor_response)).reset_index() # Preview DataFrame df_tumor_response.head() # Store the Standard Error of Tumor Volumes Grouped by Drug and Timepoint standard_error = datamerge.groupby(['Drug','Timepoint']).sem()['Tumor Volume (mm3)'] # Convert to DataFrame df_standard_error = (pd.DataFrame(standard_error)).reset_index() # Preview DataFrame df_standard_error.head() # Minor Data Munging to Re-Format the Data Frames df_tumor_pvt = df_tumor_response.pivot(index='Timepoint', columns='Drug')['Tumor Volume (mm3)'] df_stderr_pvt = df_standard_error.pivot(index='Timepoint', columns='Drug')['Tumor Volume (mm3)'] # Preview that Reformatting worked df_tumor_pvt #Create list of drugs to chart drug_list = [('Capomulin','o','red', 'Capomulin'),('Infubinol','^','blue', 'Infubinol'), ('Ketapril','s','green', 'Ketapril'),('Placebo','d','black', 'Placebo')] #loop through drug list to plot for drug,marker,colors, label in drug_list: stderr = df_stderr_pvt[drug] tumor_plt = plt.errorbar(df_tumor_pvt.index,df_tumor_pvt[drug],stderr, fmt=marker,ls='--',color=colors,linewidth=0.5, label = label) plt.legend(loc='best') plt.title('Tumor Response to Treatment') plt.xlabel('Time (Days)') plt.ylabel('Tumor Volume (mm3)') plt.grid() plt.show() # Save the Figure plt.savefig('Response_to_Treatment.png') ``` ## Metastatic Response to Treatment ``` # Store the Mean Met. Site Data Grouped by Drug and Timepoint metastic_site_mean = datamerge.groupby(['Drug','Timepoint']).mean()['Metastatic Sites'] # Convert to DataFrame df_metastic_site_mean = (pd.DataFrame(data = metastic_site_mean)).reset_index() # Preview DataFrame df_metastic_site_mean.head() # Store the Standard Error associated with Met. Sites Grouped by Drug and Timepoint metastic_site_sem = datamerge.groupby(['Drug','Timepoint']).sem()['Metastatic Sites'] # Convert to DataFrame df_metastic_site_sem = (pd.DataFrame(data = metastic_site_sem)).reset_index() # Preview DataFrame df_metastic_site_sem.head() # Minor Data Munging to Re-Format the Data Frames df_metastic_mean_pvt = df_metastic_site_mean.pivot(index='Timepoint', columns='Drug')['Metastatic Sites'] df_metastic_sem_pvt = df_metastic_site_sem.pivot(index='Timepoint', columns='Drug')['Metastatic Sites'] # Preview that Reformatting worked df_metastic_mean_pvt.head() #df_metastic_sem_pvt.head() for drug,marker,colors, label in drug_list: metaerr = df_metastic_sem_pvt[drug] metastatic_plt = plt.errorbar(df_metastic_mean_pvt.index,df_metastic_mean_pvt[drug],metaerr, fmt=marker,ls='--',color=colors,linewidth=0.5, label = label) plt.legend(loc = 'best') plt.title('Metastatic Spread During Treatment') plt.xlabel('Treatment Duration(Days)') plt.ylabel('Metastatic Sites') x_lim = len(df_metastic_mean_pvt.index) plt.grid() plt.show() ``` ## Survival Rates ``` # Store the Count of Mice Grouped by Drug and Timepoint (W can pass any metric) mice_count_grp = datamerge.groupby(['Drug', 'Timepoint'])['Mouse ID'] mice_count = mice_count_grp.nunique() # Convert to DataFrame df_mice_count = (pd.DataFrame(data = mice_count)).reset_index() df_mice_count = df_mice_count.rename(columns={'Mouse ID':'Mouse Count'}) # Preview DataFrame df_mice_count.head() # Minor Data Munging to Re-Format the Data Frames df_mice_count_pvt = df_mice_count.pivot(index='Timepoint', columns='Drug')['Mouse Count'] # Preview the Data Frame df_mice_count_pvt.head() # Generate the Plot (Accounting for percentages) for drug,marker,colors, label in drug_list: micetotal = df_mice_count_pvt[drug][0] Survival_plt = plt.plot(df_mice_count_pvt.index,df_mice_count_pvt[drug]/micetotal * 100, color = colors, marker=marker, markersize = 5, ls='--',linewidth=0.5, label = label) plt.legend(loc = 'best') plt.title('Survival During Treatment') plt.xlabel('Treatment Duration(Days)') plt.ylabel('Survival Rate(%)') plt.grid() plt.show() # Save the Figure plt.savefig('Survival During Treatment.png') ``` ## Summary Bar Graph ``` # Calculate the percent changes for each drug percent_change = (df_tumor_pvt.iloc[-1]/(df_tumor_pvt.iloc[0])-1)*100 # Display the data to confirm percent_change # Store all Relevant Percent Changes into a Tuple drug_list = ['Capomulin','Infubinol','Ketapril','Placebo'] # Splice the data between passing and failing drugs passing = percent_change < 0 # Orient widths. Add labels, tick marks, etc. change_list = [(percent_change[drug]) for drug in drug_list] change_plt = plt.bar(drug_list, change_list, width = -1, align = 'edge', color = passing.map({True:'g', False:'r'})) plt.grid() plt.ylim(-30,70) plt.ylabel('% Tumor Volume Change') plt.title('Tumor Change over 45 Day Treatment') # Use functions to label the percentages of changes def label (changes): for change in changes: height = change.get_height() if height > 0: label_position = 2 else: label_position = -8 plt.text(change.get_x() + change.get_width()/2., label_position, '%d' % int(height)+'%',color='white', ha='center', va='bottom') # Call functions to implement the function calls label(change_plt) # Save the Figure plt.savefig('Tumor Change over Treatment.png') # Show the Figure plt.show() ``` # Observations Capomulin seems to be most effective 1. Tumor Volume decreased over treatment period whereas there was an increase with the other drugs. 2. Metastatic sites had a lower increase rate from the other drugs. 3. Tumor change over a 45 day treatment was positive (decrease).
github_jupyter
# What _projects_ am I a member of? ### Overview There are a number of API calls related to projects. Here we focus on listing projects. As with any **list**-type call, we will get minimal information about each project. There are two versions of this call: 1. (default) **paginated** call that will return 50 projects 2. **all-records** call that will page through and return all projects ### Prerequisites 1. You need to be a member (or owner) of _at least one_ project. 2. You need your _authentication token_ and the API needs to know about it. See <a href="Setup_API_environment.ipynb">**Setup_API_environment.ipynb**</a> for details. ## Imports We import the _Api_ class from the official sevenbridges-python bindings below. ``` import sevenbridges as sbg ``` ## Initialize the object The `Api` object needs to know your **auth\_token** and the correct path. Here we assume you are using the credentials file in your home directory. For other options see <a href="Setup_API_environment.ipynb">Setup_API_environment.ipynb</a> ``` # [USER INPUT] specify credentials file profile {cgc, sbg, default} prof = 'default' config_file = sbg.Config(profile=prof) api = sbg.Api(config=config_file) ``` ## Get _some_ projects We will start with the basic list call. A **list**-call for projects returns the following *attributes*: * **id** _Unique_ identifier for the project, generated based on Project Name * **name** Name of project specified by the user, maybe _non-unique_ * **href** Address<sup>1</sup> of the project. A **detail**-call for projects returns the following *attributes*: * **description** The user specified project description * **id** _Unique_ identifier for the project, generated based on Project Name * **name** Name of project specified by the user, maybe _non-unique_ * **href** Address<sup>1</sup> of the project. * **tags** List of tags * **created_on** Project creation time * **modified_on** Project modification time * **created_by** User that created the project * **root_folder** ID of the root folder for that project * **billing_group** ID of the billing group for the project * **settings** Dictionary with project settings for storage and task execution All list API calls will feature pagination, by _default_ 50 items will be returned. We will also show how to specify a different limit and page forward and backwards. <sup>1</sup> This is the address where, by using API you can get this resource ``` # list (up to) 50 (this is the default for 'limit') projects my_projects = api.projects.query() print(' List of project ids and names:') for project in my_projects: print('{} \t {}'.format(project.id, project.name)) # use a short query to highlight pagination my_projects = api.projects.query(limit=3) print(' List of first 3 project ids and names:') for project in my_projects: print('{} \t {}'.format(project.id, project.name)) # method to retrieve the next page of results next_page_of_projects = my_projects.next_page() print('\n List of next 3 project ids and names:') for project in next_page_of_projects: print('{} \t {}'.format(project.id, project.name)) ``` #### Note For the pagination above, we used the **.next_page()** and could have also used the **.prior_page()** methods. These will return another list with an limit equal to the prior call and a offset based on the prior call ## Get _all_ projects It's probably most useful to know all of your projects. Regardless of the query limit, the project object knows the actual total number of projects. We only need to use the **.all** attribute to get all projects. ``` existing_projects = my_projects.all() print(' List of all project ids and names:') for project in existing_projects: print('{} \t {}'.format(project.id, project.name)) ``` ### Note Each time you do **anything** with this _generator object_, it will become exhausted. The next call will be an empty list ``` # NOTE, after each time you operate on the existing_projects generator object, # it will become an empty list existing_projects = my_projects.all() print(existing_projects) print('\n For the first list() operation, there are %i projects in the generator' \ % (len(list(existing_projects)))) print(' For the next list() operation, there are %i projects in the generator' % \ (len(list(existing_projects)))) ``` ## Additional Information Detailed documentation of this particular REST architectural style request is available [here](http://docs.sevenbridges.com/docs/list-all-your-projects)
github_jupyter
``` import pandas as pd import numpy as np import pickle pd.set_option('display.max_columns', None) pd.set_option('display.max_rows', None) pd.set_option('display.max_colwidth', -1) %matplotlib inline # Remove unrelated columns form data and get their name folder_path = '../../../datalcdem/data/optima/dementia_18July/' patient_df = pd.read_csv(folder_path + 'optima_patients.csv') display(patient_df.head(5)) #patient_df[['MMS1', 'MMS2']].hist() patient_com_df = pd.read_csv(folder_path + 'optima_patients_comorbidities.csv').groupby(by=['patient_id', 'EPISODE_DATE'], as_index=False).agg(lambda x: x.tolist())[['patient_id', 'EPISODE_DATE', 'Comorbidity_cui']] display(patient_com_df.head(10)) patient_filt_df = pd.read_csv(folder_path + 'optima_patients_filtered.csv') #display(patient_filt_df.head(5)) patient_treat_df = pd.read_csv(folder_path + 'optima_patients_treatments.csv').groupby(by=['patient_id', 'EPISODE_DATE'], as_index=False).agg(lambda x: x.tolist())[['patient_id', 'EPISODE_DATE', 'Medication_cui']] display(patient_treat_df.head(5)) len(set(patient_com_df['patient_id'].tolist())), len(set(patient_treat_df['patient_id'].tolist())) patient_treat_df['EPISODE_DATE'] = pd.to_datetime(patient_treat_df['EPISODE_DATE']) patient_com_df['EPISODE_DATE'] = pd.to_datetime(patient_com_df['EPISODE_DATE']) patient_com_treat_df = pd.merge(patient_com_df, patient_treat_df,on=['patient_id', 'EPISODE_DATE'], how='outer') #pd.concat([patient_com_df, patient_treat_df], keys=['patient_id', 'EPISODE_DATE'], ignore_index=True, sort=False) #patient_com_df.append(patient_treat_df, sort=False) #pd.concat([patient_com_df, patient_treat_df], axis=0, sort=False) #patient_treat_com_df = patient_treat_df.join(patient_com_df, on=['patient_id', 'EPISODE_DATE'], how='outer') print (patient_com_treat_df.shape) print (len(set(patient_com_treat_df['patient_id'].tolist()))) patient_com_treat_df.sort_values(by=['patient_id', 'EPISODE_DATE'],axis=0, inplace=True, ascending=True) patient_com_treat_df.reset_index(drop=True, inplace=True) patient_com_treat_df.head(10) patient_com_treat_df.to_csv('../../../datalcdem/data/optima/optima_ahmad/patient_com_treat_df.csv') folder_path = '../../../datalcdem/data/optima/dementia_18July/' df_datarequest = pd.read_excel(folder_path+'Data_Request_Jan_2019_final.xlsx') df_datarequest.head(5) df_datarequest_mmse = df_datarequest[['GLOBAL_PATIENT_DB_ID', 'Age At Episode', 'EPISODE_DATE', 'CAMDEX SCORES: MINI MENTAL SCORE']] df_datarequest_mmse_1 = df_datarequest_mmse.rename(columns={'GLOBAL_PATIENT_DB_ID':'patient_id'}) df_datarequest_mmse_1.head(10) #patient_com_treat_df.astype('datetime') patient_com_treat_df['EPISODE_DATE'] = pd.to_datetime(patient_com_treat_df['EPISODE_DATE']) print (df_datarequest_mmse_1.dtypes, patient_com_treat_df.dtypes) patient_com_treat_df = pd.merge(patient_com_treat_df,df_datarequest_mmse_1,on=['patient_id', 'EPISODE_DATE'], how='left') patient_com_treat_df.shape, patient_com_treat_df.head(10) len(set (patient_com_treat_df['patient_id'].tolist())) patient_com_treat_df.sort_values(by=['patient_id', 'EPISODE_DATE'],axis=0, inplace=True, ascending=True) patient_com_treat_df.head(20) patient_com_treat_df.reset_index(inplace=True, drop=True) patient_com_treat_df.head(5) def setLineNumber(lst): lst_dict = {ide:0 for ide in lst} lineNumber_list = [] for idx in lst: if idx in lst_dict: lst_dict[idx] = lst_dict[idx] + 1 lineNumber_list.append(lst_dict[idx]) return lineNumber_list patient_com_treat_df['lineNumber'] = setLineNumber(patient_com_treat_df['patient_id'].tolist()) patient_com_treat_df.tail(20) df = patient_com_treat_df id_dict = {i:0 for i in df['patient_id'].tolist()} for x in df['patient_id'].tolist(): if x in id_dict: id_dict[x]=id_dict[x]+1 line_updated = [int(j) for i in id_dict.values() for j in range(1,i+1)] print (line_updated[0:10]) df.update(pd.Series(line_updated, name='lineNumber'),errors='ignore') display(df.head(20)) #patients merging based on id and creating new columns r = df['lineNumber'].max() print ('Max line:',r) l = [df[df['lineNumber']==i] for i in range(1, int(r+1))] print('Number of Dfs to merge: ',len(l)) df_new = pd.DataFrame() tmp_id = [] for i, df_l in enumerate(l): df_l = df_l[~df_l['patient_id'].isin(tmp_id)] for j, df_ll in enumerate(l[i+1:]): #df_l = df_l.merge(df_ll, on='id', how='left', suffix=(str(j), str(j+1))) #suffixe is not working #print (j) df_l = df_l.join(df_ll.set_index('patient_id'), on='patient_id', rsuffix='_'+str(j+1)) tmp_id = tmp_id + df_l['patient_id'].tolist() #display(df_l) df_new = df_new.append(df_l, ignore_index=True, sort=False) display(df_new.head(20)) display(df_new[['patient_id']+[col for col in df_new.columns if 'line' in col or 'DATE' in col]].head(10)) fltr_linnum = ['_'+str(i) for i in range(10, 27)] print (fltr_linnum) df_new.drop(columns=[col for col in df_new.columns for i in fltr_linnum if i in col],inplace=True) df_new.to_csv(folder_path+'dementialTreatmentLine_preData_line_episode.csv', index=False) df_new = df_new.drop([col for col in df_new.columns if 'lineNumber' in col], axis=1).reset_index(drop=True) df_new.to_csv(folder_path+'dementialTreatmentLine_preData.csv', index=False) # Calculate matching intial episode in the data ''' df_episode= pd.read_csv('../../../datalcdem/data/optima/dementialTreatmentLine_preData_line_episode.csv') df_patients = pd.read_csv('../../../datalcdem/data/optima/patients.csv') display(df_episode.columns, df_patients.columns) df_pat_ep = pd.merge(df_episode[['patient_id', 'EPISODE_DATE']], df_patients[['patient_id', 'epDateInicial', 'mmseInicial']]) df_episode.shape, df_patients.shape, df_pat_ep.shape df_pat_ep['dateEqual']=df_pat_ep['EPISODE_DATE']==df_pat_ep['epDateInicial'] display(sum(df_pat_ep['dateEqual'].tolist())) df_pat_ep.head(10) display(sum(df_pat_ep['mmseInicial']<24))''' df_new.head(10) #### Calculate differnce beween patient API & Ahmad data df_diff = df_new[['patient_id']] df_patient_api = pd.read_csv(folder_path+'patients.csv') df_diff['patient_id_1'] = df_patient_api['patient_id'] df_diff.head(10) set_ahmad = set(df_new['patient_id'].tolist()) df_patient_api = pd.read_csv(folder_path+'patients_df.csv') set_api = set(df_patient_api['patient_id'].tolist()) print (set_ahmad.difference(set_api)) print (sorted(set_api.difference(set_ahmad))) list_val= sorted(set_api.difference(set_ahmad)) df_patient_api[df_patient_api['patient_id'].isin(list_val)].head(10) # Take Some other features from API df_patient_api = pd.read_csv(folder_path+'patients.csv') display(df_patient_api.head(10)) df_patient_api = df_patient_api[['patient_id', 'gender', 'dementia', 'smoker', 'alcohol', 'education', 'bmi', 'weight', 'apoe']] display(df_patient_api.head(10)) display(df_new.head(10)) df_patient_new = df_patient_api.merge(df_new, on=['patient_id'], how='inner') df_patient_new.head(10) df_patient_new.to_csv(folder_path+'patients_new.csv', index=False) def removeNANvalues(lst): return lst[~pd.isnull(lst)] comorbidity_cui_lst = list(set([y for x in removeNANvalues(df_patient_new[[col for col in df_patient_new.columns if 'Comorbidity_cui' in col]].values.flatten()) for y in x])) medication_cui_lst = list(set([y for x in removeNANvalues(df_patient_new[[col for col in df_patient_new.columns if 'Medication_cui' in col]].values.flatten()) for y in x])) with open(folder_path+'comorbidity_cui_lst.pkl', 'wb') as f: pickle.dump(comorbidity_cui_lst, f) with open(folder_path+'medication_cui_lst.pkl', 'wb') as f: pickle.dump(medication_cui_lst, f) ```
github_jupyter
## Product Review Aspect Detection: Laptop ### This is a Natural Language Processing based solution which can detect up to 8 aspects from online product reviews for laptops. This sample notebook shows you how to deploy Product Review Aspect Detection: Laptop using Amazon SageMaker. > **Note**: This is a reference notebook and it cannot run unless you make changes suggested in the notebook. #### Pre-requisites: 1. **Note**: This notebook contains elements which render correctly in Jupyter interface. Open this notebook from an Amazon SageMaker Notebook Instance or Amazon SageMaker Studio. 1. Ensure that IAM role used has **AmazonSageMakerFullAccess** 1. To deploy this ML model successfully, ensure that: 1. Either your IAM role has these three permissions and you have authority to make AWS Marketplace subscriptions in the AWS account used: 1. **aws-marketplace:ViewSubscriptions** 1. **aws-marketplace:Unsubscribe** 1. **aws-marketplace:Subscribe** 2. or your AWS account has a subscription to Product Review Aspect Detection: Laptop. If so, skip step: [Subscribe to the model package](#1.-Subscribe-to-the-model-package) #### Contents: 1. [Subscribe to the model package](#1.-Subscribe-to-the-model-package) 2. [Create an endpoint and perform real-time inference](#2.-Create-an-endpoint-and-perform-real-time-inference) 1. [Create an endpoint](#A.-Create-an-endpoint) 2. [Create input payload](#B.-Create-input-payload) 3. [Perform real-time inference](#C.-Perform-real-time-inference) 4. [Visualize output](#D.-Visualize-output) 5. [Delete the endpoint](#E.-Delete-the-endpoint) 3. [Perform batch inference](#3.-Perform-batch-inference) 4. [Clean-up](#4.-Clean-up) 1. [Delete the model](#A.-Delete-the-model) 2. [Unsubscribe to the listing (optional)](#B.-Unsubscribe-to-the-listing-(optional)) #### Usage instructions You can run this notebook one cell at a time (By using Shift+Enter for running a cell). ### 1. Subscribe to the model package To subscribe to the model package: 1. Open the model package listing page Product Review Aspect Detection: Laptop. 1. On the AWS Marketplace listing, click on the **Continue to subscribe** button. 1. On the **Subscribe to this software** page, review and click on **"Accept Offer"** if you and your organization agrees with EULA, pricing, and support terms. 1. Once you click on **Continue to configuration button** and then choose a **region**, you will see a **Product Arn** displayed. This is the model package ARN that you need to specify while creating a deployable model using Boto3. Copy the ARN corresponding to your region and specify the same in the following cell. ``` model_package_arn='arn:aws:sagemaker:us-east-2:786796469737:model-package/laptop-aspect-extraction' import base64 import json import uuid from sagemaker import ModelPackage import sagemaker as sage from sagemaker import get_execution_role from sagemaker import ModelPackage from urllib.parse import urlparse import boto3 from IPython.display import Image from PIL import Image as ImageEdit import urllib.request import numpy as np role = get_execution_role() sagemaker_session = sage.Session() bucket=sagemaker_session.default_bucket() bucket ``` ### 2. Create an endpoint and perform real-time inference If you want to understand how real-time inference with Amazon SageMaker works, see [Documentation](https://docs.aws.amazon.com/sagemaker/latest/dg/how-it-works-hosting.html). ``` model_name='laptop-aspect-extraction' content_type='text/plain' real_time_inference_instance_type='ml.m5.large' batch_transform_inference_instance_type='ml.m5.large' ``` #### A. Create an endpoint ``` def predict_wrapper(endpoint, session): return sage.predictor.Predictor(endpoint, session,content_type) #create a deployable model from the model package. model = ModelPackage(role=role, model_package_arn=model_package_arn, sagemaker_session=sagemaker_session, predictor_cls=predict_wrapper) #Deploy the model predictor = model.deploy(1, real_time_inference_instance_type, endpoint_name=model_name) ``` Once endpoint has been created, you would be able to perform real-time inference. #### B. Create input payload ``` file_name = 'sample.txt' ``` <Add code snippet that shows the payload contents> #### C. Perform real-time inference ``` !aws sagemaker-runtime invoke-endpoint \ --endpoint-name $model_name \ --body fileb://$file_name \ --content-type $content_type \ --region $sagemaker_session.boto_region_name \ output.txt ``` #### D. Visualize output ``` import json with open('output.txt', 'r') as f: output = json.load(f) print(output) ``` #### E. Delete the endpoint Now that you have successfully performed a real-time inference, you do not need the endpoint any more. You can terminate the endpoint to avoid being charged. ``` predictor=sage.predictor.Predictor(model_name, sagemaker_session,content_type) predictor.delete_endpoint(delete_endpoint_config=True) ``` ### 3. Perform batch inference In this section, you will perform batch inference using multiple input payloads together. If you are not familiar with batch transform, and want to learn more, see these links: 1. [How it works](https://docs.aws.amazon.com/sagemaker/latest/dg/ex1-batch-transform.html) 2. [How to run a batch transform job](https://docs.aws.amazon.com/sagemaker/latest/dg/how-it-works-batch.html) ``` #upload the batch-transform job input files to S3 transform_input_folder = "input" transform_input = sagemaker_session.upload_data(transform_input_folder, key_prefix=model_name) print("Transform input uploaded to " + transform_input) #Run the batch-transform job transformer = model.transformer(1, batch_transform_inference_instance_type) transformer.transform(transform_input, content_type=content_type) transformer.wait() import os s3_conn = boto3.client("s3") with open('output.txt', 'wb') as f: s3_conn.download_fileobj(bucket, os.path.basename(transformer.output_path)+'/sample.txt.out', f) print("Output file loaded from bucket") with open('output.txt', 'r') as f: output = json.load(f) print(output) ``` ### 4. Clean-up #### A. Delete the model ``` model.delete_model() ``` #### B. Unsubscribe to the listing (optional) If you would like to unsubscribe to the model package, follow these steps. Before you cancel the subscription, ensure that you do not have any [deployable model](https://console.aws.amazon.com/sagemaker/home#/models) created from the model package or using the algorithm. Note - You can find this information by looking at the container name associated with the model. **Steps to unsubscribe to product from AWS Marketplace**: 1. Navigate to __Machine Learning__ tab on [__Your Software subscriptions page__](https://aws.amazon.com/marketplace/ai/library?productType=ml&ref_=mlmp_gitdemo_indust) 2. Locate the listing that you want to cancel the subscription for, and then choose __Cancel Subscription__ to cancel the subscription.
github_jupyter
``` import numpy as np import pprint import sys if "../" not in sys.path: sys.path.append("../") from lib.envs.gridworld import GridworldEnv pp = pprint.PrettyPrinter(indent=2) env = GridworldEnv() def value_iteration(env, theta=0.0001, discount_factor=1.0): """ Value Iteration Algorithm. Args: env: OpenAI env. env.P represents the transition probabilities of the environment. env.P[s][a] is a list of transition tuples (prob, next_state, reward, done). env.nS is a number of states in the environment. env.nA is a number of actions in the environment. theta: We stop evaluation once our value function change is less than theta for all states. discount_factor: Gamma discount factor. Returns: A tuple (policy, V) of the optimal policy and the optimal value function. """ V = np.zeros(env.nS) policy = np.zeros([env.nS, env.nA]) # Implement! # policy evaluation using Bellman optimality equation while True: threshold = 0 for s in range(env.nS): action_value_list = [] old_value = V[s] for action, action_prob in enumerate(policy[s]): state_action_value = 0 for prob, next_state, reward, done in env.P[s][action]: state_action_value += prob * (reward + discount_factor * V[next_state]) action_value_list.append(state_action_value) # Bellman optimality equation V[s] = np.max(action_value_list) threshold = max(threshold, np.abs(old_value-V[s])) if threshold < theta: break # Policy improvement for s in range(env.nS): action_value_list = [] for action, action_prob in enumerate(policy[s]): state_action_value = 0 for prob, next_state, reward, done in env.P[s][action]: state_action_value += prob * (reward + discount_factor * V[next_state]) action_value_list.append(state_action_value) greedy_action = np.argmax(action_value_list) policy[s] = np.eye(env.nA)[greedy_action] return policy, V policy, v = value_iteration(env) print("Policy Probability Distribution:") print(policy) print("") print("Reshaped Grid Policy (0=up, 1=right, 2=down, 3=left):") print(np.reshape(np.argmax(policy, axis=1), env.shape)) print("") print("Value Function:") print(v) print("") print("Reshaped Grid Value Function:") print(v.reshape(env.shape)) print("") # Test the value function expected_v = np.array([ 0, -1, -2, -3, -1, -2, -3, -2, -2, -3, -2, -1, -3, -2, -1, 0]) np.testing.assert_array_almost_equal(v, expected_v, decimal=2) ```
github_jupyter
#### New to Plotly? Plotly's Python library is free and open source! [Get started](https://plot.ly/python/getting-started/) by downloading the client and [reading the primer](https://plot.ly/python/getting-started/). <br>You can set up Plotly to work in [online](https://plot.ly/python/getting-started/#initialization-for-online-plotting) or [offline](https://plot.ly/python/getting-started/#initialization-for-offline-plotting) mode, or in [jupyter notebooks](https://plot.ly/python/getting-started/#start-plotting-online). <br>We also have a quick-reference [cheatsheet](https://images.plot.ly/plotly-documentation/images/python_cheat_sheet.pdf) (new!) to help you get started! #### Version Check Plotly's python package is updated frequently. Run `pip install plotly --upgrade` to use the latest version. ``` import plotly plotly.__version__ ``` #### What is BigQuery? It's a service by Google, which enables analysis of massive datasets. You can use the traditional SQL-like language to query the data. You can host your own data on BigQuery to use the super fast performance at scale. #### Google BigQuery Public Datasets There are [a few datasets](https://cloud.google.com/bigquery/public-data/) stored in BigQuery, available for general public to use. Some of the publicly available datasets are: - Hacker News (stories and comments) - USA Baby Names - GitHub activity data - USA disease surveillance We will use the [Hacker News](https://cloud.google.com/bigquery/public-data/hacker-news) dataset for our analysis. #### Imports ``` import plotly.plotly as py import plotly.graph_objs as go import plotly.figure_factory as ff import pandas as pd from pandas.io import gbq # to communicate with Google BigQuery ``` #### Prerequisites You need to have the following libraries: * [python-gflags](http://code.google.com/p/python-gflags/) * httplib2 * google-api-python-client #### Create Project A project can be created on the [Google Developer Console](https://console.developers.google.com/iam-admin/projects). #### Enable BigQuery API You need to activate the BigQuery API for the project. ![Enable BigQuery](https://raw.githubusercontent.com/pravj/gitpool/master/bigquery-tutorial/enable-bq.png) You will have find the `Project ID` for your project to get the queries working. ![Project ID Credentials](https://raw.githubusercontent.com/pravj/gitpool/master/bigquery-tutorial/creds.png) project_id = 'bigquery-plotly' ### Top 10 Most Active Users on Hacker News (by total stories submitted) We will select the top 10 high scoring `author`s and their respective `score` values. ``` top10_active_users_query = """ SELECT author AS User, count(author) as Stories FROM [fh-bigquery:hackernews.stories] GROUP BY User ORDER BY Stories DESC LIMIT 10 """ ``` The `pandas.gbq` module provides a method `read_gbq` to query the BigQuery stored dataset and stores the result as a `DataFrame`. ``` try: top10_active_users_df = gbq.read_gbq(top10_active_users_query, project_id=project_id) except: print 'Error reading the dataset' ``` Using the `create_table` method from the `FigureFactory` module, we can generate a table from the resulting `DataFrame`. ``` top_10_users_table = ff.create_table(top10_active_users_df) py.iplot(top_10_users_table, filename='top-10-active-users') ``` ### Top 10 Hacker News Submissions (by score) We will select the `title` and `score` columns in the descending order of their `score`, keeping only top 10 stories among all. ``` top10_story_query = """ SELECT title, score, time_ts AS timestamp FROM [fh-bigquery:hackernews.stories] ORDER BY score DESC LIMIT 10 """ try: top10_story_df = gbq.read_gbq(top10_story_query, project_id=project_id) except: print 'Error reading the dataset' # Create a table figure from the DataFrame top10_story_figure = FF.create_table(top10_story_df) # Scatter trace for the bubble chart timeseries story_timeseries_trace = go.Scatter( x=top10_story_df['timestamp'], y=top10_story_df['score'], xaxis='x2', yaxis='y2', mode='markers', text=top10_story_df['title'], marker=dict( color=[80 + i*5 for i in range(10)], size=top10_story_df['score']/50, showscale=False ) ) # Add the trace data to the figure top10_story_figure['data'].extend(go.Data([story_timeseries_trace])) # Subplot layout top10_story_figure.layout.yaxis.update({'domain': [0, .45]}) top10_story_figure.layout.yaxis2.update({'domain': [.6, 1]}) # Y-axis of the graph should be anchored with X-axis top10_story_figure.layout.yaxis2.update({'anchor': 'x2'}) top10_story_figure.layout.xaxis2.update({'anchor': 'y2'}) # Add the height and title attribute top10_story_figure.layout.update({'height':900}) top10_story_figure.layout.update({'title': 'Highest Scoring Submissions on Hacker News'}) # Update the background color for plot and paper top10_story_figure.layout.update({'paper_bgcolor': 'rgb(243, 243, 243)'}) top10_story_figure.layout.update({'plot_bgcolor': 'rgb(243, 243, 243)'}) # Add the margin to make subplot titles visible top10_story_figure.layout.margin.update({'t':75, 'l':50}) top10_story_figure.layout.yaxis2.update({'title': 'Upvote Score'}) top10_story_figure.layout.xaxis2.update({'title': 'Post Time'}) py.image.save_as(top10_story_figure, filename='top10-posts.png') py.iplot(top10_story_figure, filename='highest-scoring-submissions') ``` You can see that the lists consist of the stories involving some big names. * "Death of Steve Jobs and Aaron Swartz" * "Announcements of the Hyperloop and the game 2048". * "Microsoft open sourcing the .NET" The story title is visible when you `hover` over the bubbles. #### From which Top-level domain (TLD) most of the stories come? Here we have used the url-function [TLD](https://cloud.google.com/bigquery/query-reference#tld) from BigQuery's query syntax. We collect the domain for all URLs with their respective count, and group them by it. ``` tld_share_query = """ SELECT TLD(url) AS domain, count(score) AS stories FROM [fh-bigquery:hackernews.stories] GROUP BY domain ORDER BY stories DESC LIMIT 10 """ try: tld_share_df = gbq.read_gbq(tld_share_query, project_id=project_id) except: print 'Error reading the dataset' labels = tld_share_df['domain'] values = tld_share_df['stories'] tld_share_trace = go.Pie(labels=labels, values=values) data = [tld_share_trace] layout = go.Layout( title='Submissions shared by Top-level domains' ) fig = go.Figure(data=data, layout=layout) py.iplot(fig) ``` We can notice that the **.com** top-level domain contributes to most of the stories on Hacker News. #### Public response to the "Who Is Hiring?" posts There is an account on Hacker News by the name [whoishiring](https://news.ycombinator.com/user?id=whoishiring). This account automatically submits a 'Who is Hiring?' post at 11 AM Eastern time on the first weekday of every month. ``` wih_query = """ SELECT id, title, score, time_ts FROM [fh-bigquery:hackernews.stories] WHERE author == 'whoishiring' AND LOWER(title) contains 'who is hiring?' ORDER BY time """ try: wih_df = gbq.read_gbq(wih_query, project_id=project_id) except: print 'Error reading the dataset' trace = go.Scatter( x=wih_df['time_ts'], y=wih_df['score'], mode='markers+lines', text=wih_df['title'], marker=dict( size=wih_df['score']/50 ) ) layout = go.Layout( title='Public response to the "Who Is Hiring?" posts', xaxis=dict( title="Post Time" ), yaxis=dict( title="Upvote Score" ) ) data = [trace] fig = go.Figure(data=data, layout=layout) py.iplot(fig, filename='whoishiring-public-response') ``` ### Submission Traffic Volume in a Week ``` week_traffic_query = """ SELECT DAYOFWEEK(time_ts) as Weekday, count(DAYOFWEEK(time_ts)) as story_counts FROM [fh-bigquery:hackernews.stories] GROUP BY Weekday ORDER BY Weekday """ try: week_traffic_df = gbq.read_gbq(week_traffic_query, project_id=project_id) except: print 'Error reading the dataset' week_traffic_df['Day'] = ['NULL', 'Sunday', 'Monday', 'Tuesday', 'Wednesday', 'Thursday', 'Friday', 'Saturday'] week_traffic_df = week_traffic_df.drop(week_traffic_df.index[0]) trace = go.Scatter( x=week_traffic_df['Day'], y=week_traffic_df['story_counts'], mode='lines', text=week_traffic_df['Day'] ) layout = go.Layout( title='Submission Traffic Volume (Week Days)', xaxis=dict( title="Day of the Week" ), yaxis=dict( title="Total Submissions" ) ) data = [trace] fig = go.Figure(data=data, layout=layout) py.iplot(fig, filename='submission-traffic-volume') ``` We can observe that the Hacker News faces fewer submissions during the weekends. #### Programming Language Trend on HackerNews We will compare the trends for the Python and PHP programming languages, using the Hacker News post titles. ``` python_query = """ SELECT YEAR(time_ts) as years, COUNT(YEAR(time_ts )) as trends FROM [fh-bigquery:hackernews.stories] WHERE LOWER(title) contains 'python' GROUP BY years ORDER BY years """ php_query = """ SELECT YEAR(time_ts) as years, COUNT(YEAR(time_ts )) as trends FROM [fh-bigquery:hackernews.stories] WHERE LOWER(title) contains 'php' GROUP BY years ORDER BY years """ try: python_df = gbq.read_gbq(python_query, project_id=project_id) except: print 'Error reading the dataset' try: php_df = gbq.read_gbq(php_query, project_id=project_id) except: print 'Error reading the dataset' trace1 = go.Scatter( x=python_df['years'], y=python_df['trends'], mode='lines', line=dict(color='rgba(115,115,115,1)', width=4), connectgaps=True, ) trace2 = go.Scatter( x=[python_df['years'][0], python_df['years'][8]], y=[python_df['trends'][0], python_df['trends'][8]], mode='markers', marker=dict(color='rgba(115,115,115,1)', size=8) ) trace3 = go.Scatter( x=php_df['years'], y=php_df['trends'], mode='lines', line=dict(color='rgba(189,189,189,1)', width=4), connectgaps=True, ) trace4 = go.Scatter( x=[php_df['years'][0], php_df['years'][8]], y=[php_df['trends'][0], php_df['trends'][8]], mode='markers', marker=dict(color='rgba(189,189,189,1)', size=8) ) traces = [trace1, trace2, trace3, trace4] layout = go.Layout( xaxis=dict( showline=True, showgrid=False, showticklabels=True, linecolor='rgb(204, 204, 204)', linewidth=2, autotick=False, ticks='outside', tickcolor='rgb(204, 204, 204)', tickwidth=2, ticklen=5, tickfont=dict( family='Arial', size=12, color='rgb(82, 82, 82)', ), ), yaxis=dict( showgrid=False, zeroline=False, showline=False, showticklabels=False, ), autosize=False, margin=dict( autoexpand=False, l=100, r=20, t=110, ), showlegend=False, ) annotations = [] annotations.append( dict(xref='paper', x=0.95, y=python_df['trends'][8], xanchor='left', yanchor='middle', text='Python', font=dict( family='Arial', size=14, color='rgba(49,130,189, 1)' ), showarrow=False) ) annotations.append( dict(xref='paper', x=0.95, y=php_df['trends'][8], xanchor='left', yanchor='middle', text='PHP', font=dict( family='Arial', size=14, color='rgba(49,130,189, 1)' ), showarrow=False) ) annotations.append( dict(xref='paper', yref='paper', x=0.5, y=-0.1, xanchor='center', yanchor='top', text='Source: Hacker News submissions with the title containing Python/PHP', font=dict( family='Arial', size=12, color='rgb(150,150,150)' ), showarrow=False) ) layout['annotations'] = annotations fig = go.Figure(data=traces, layout=layout) py.iplot(fig, filename='programming-language-trends') ``` As we already know about this trend, Python is dominating PHP throughout the timespan. #### Reference See https://plot.ly/python/getting-started/ for more information about Plotly's Python Open Source Graphing Library! ``` from IPython.display import display, HTML display(HTML('<link href="//fonts.googleapis.com/css?family=Open+Sans:600,400,300,200|Inconsolata|Ubuntu+Mono:400,700" rel="stylesheet" type="text/css" />')) display(HTML('<link rel="stylesheet" type="text/css" href="http://help.plot.ly/documentation/all_static/css/ipython-notebook-custom.css">')) ! pip install git+https://github.com/plotly/publisher.git --upgrade import publisher publisher.publish( 'BigQuery-Plotly.ipynb', 'python/google_big_query/', 'Google Big-Query', 'How to make your-tutorial-chart plots in Python with Plotly.', title = 'Google Big Query | plotly', has_thumbnail='true', thumbnail='thumbnail/bigquery2.jpg', language='python', page_type='example_index', display_as='databases', order=7) ```
github_jupyter
# Asymmetric Loss This documentation is based on the paper "[Asymmetric Loss For Multi-Label Classification](https://arxiv.org/abs/2009.14119)". ## Asymetric Single-Label Loss ``` import timm import torch import torch.nn.functional as F from timm.loss import AsymmetricLossMultiLabel, AsymmetricLossSingleLabel import matplotlib.pyplot as plt from PIL import Image from pathlib import Path ``` Let's create a example of the `output` of a model, and our `labels`. ``` output = F.one_hot(torch.tensor([0,9,0])).float() labels=torch.tensor([0,0,0]) labels, output ``` If we set all the parameters to 0, the loss becomes `F.cross_entropy` loss. ``` asl = AsymmetricLossSingleLabel(gamma_pos=0,gamma_neg=0,eps=0.0) asl(output,labels) F.cross_entropy(output,labels) ``` Now lets look at the asymetric part. ASL is Asymetric in how it handles positive and negative examples. Positive examples being the labels that are present in the image, and negative examples being labels that are not present in the image. The idea being that an image has a lot of easy negative examples, few hard negative examples and very few positive examples. Getting rid of the influence of easy negative examples, should help emphasize the gradients of the positive examples. ``` Image.open(Path()/'images/cat.jpg') ``` Notice this image contains a cat, that would be a positive label. This images does not contain a dog, elephant bear, giraffe, zebra, banana or many other of the labels found in the coco dataset, those would be negative examples. It is very easy to see that a giraffe is not in this image. ``` output = (2*F.one_hot(torch.tensor([0,9,0]))-1).float() labels=torch.tensor([0,9,0]) losses=[AsymmetricLossSingleLabel(gamma_neg=i*0.04+1,eps=0.1,reduction='mean')(output,labels) for i in range(int(80))] plt.plot([ i*0.04+1 for i,l in enumerate(losses)],[loss for loss in losses]) plt.ylabel('Loss') plt.xlabel('Change in gamma_neg') plt.show() ``` $$L_- = (p)^{\gamma-}\log(1-p) $$ The contibution of small negative examples quickly decreases as gamma_neg is increased as $\gamma-$ is an exponent and $p$ should be a small number close to 0. Below we set `eps=0`, this has the effect of completely flattening out the above graph, we are no longer applying label smoothing, so negative examples end up not contributing to the loss. ``` losses=[AsymmetricLossSingleLabel(gamma_neg=0+i*0.02,eps=0.0,reduction='mean')(output,labels) for i in range(100)] plt.plot([ i*0.04 for i in range(len(losses))],[loss for loss in losses]) plt.ylabel('Loss') plt.xlabel('Change in gamma_neg') plt.show() ``` ## AsymmetricLossMultiLabel `AsymmetricLossMultiLabel` allows for working on multi-label problems. ``` labels=F.one_hot(torch.LongTensor([0,0,0]),num_classes=10)+F.one_hot(torch.LongTensor([1,9,1]),num_classes=10) labels AsymmetricLossMultiLabel()(output,labels) ``` For `AsymmetricLossMultiLabel` another parameter exists called `clip`. This clamps smaller inputs to 0 for negative examples. This is called Asymmetric Probability Shifting. ``` losses=[AsymmetricLossMultiLabel(clip=i/100)(output,labels) for i in range(100)] plt.plot([ i/100 for i in range(len(losses))],[loss for loss in losses]) plt.ylabel('Loss') plt.xlabel('Clip') plt.show() ```
github_jupyter
# 0. Dependências ``` import numpy as np import pandas as pd import matplotlib.pyplot as plt from sklearn.datasets import load_iris from sklearn.preprocessing import StandardScaler from sklearn.decomposition import PCA %matplotlib inline pd.options.display.max_rows = 10 ``` # 1. Introdução **O objetivo principal do PCA é analisar os dados para identificar padrões visando reduzir a dimensionalidade dos dados com uma perda mínima de informação**. Uma possível aplicação seria o reconhecimento de padrões, onde queremos reduzir os custos computacionais e o erro da estimação de parâmetros pela redução da dimensionalidade do nosso espaço de atributos extraindo um subespaço que descreve os nosso dados "de forma satisfatória". **A redução de dimensionalidade se torna importante quando temos um número de atributos significativamente maior que o número de amostras de treinamento**. Nós aplicamos o PCA para projetar todos os nossos dados (sem rótulos de classe) em um subespaço diferente, procurando encontrar os eixos com a máxima variância onde os dados são mais distribuídos. A questão principal é: **"Qual o subespaço que representa *bem* os nossos dados?"**. **Primeiramente, calculamos os autovetores (componentes principais) dos nossos dados e organizamos em uma matriz de projeção. Cada autovetor (*eigenvector*) é associada a um autovalor (*eigenvalue*) que pode ser interpretado como o "tamanho" ou "magnitude" do autovetor correspondente**. Em geral, consideramos apenas o autovalores que tem uma magnitude significativamente maior que os outros e desconsideramos os autopares (autovetores-autovalores) que consideramos *menos informativos*. Se observamos que todos os autovalores tem uma magnitude similar, isso pode ser um bom indicador que nossos dados já estão em um *bom* subespaço. Por outro lado, **se alguns autovalores tem a magnitude muito maior que a dos outros, devemos escolher seus autovetores já que eles contém mais informação sobre a distribuição dos nossos dados**. Da mesma forma, autovalores próximos a zero são menos informativos e devemos desconsiderá-los na construção do nosso subespaço. Em geral, a aplicação do PCA envolve os seguintes passos: 1. Padronização dos dados 2. Obtenção dos autovetores e autovalores através da: - Matriz de Covariância; ou - Matriz de Correlação; ou - *Singular Vector Decomposition* 3. Construção da matriz de projeção a partir dos autovetores selecionados 4. Transformação dos dados originais X via a matriz de projeção para obtenção do subespaço Y ## 1.1 PCA vs LDA Ambos PCA e LDA (*Linear Discrimant Analysis*) são métodos de transformação linear. Por uma lado, o PCA fornece as direções (autovetores ou componentes principais) que maximiza a variância dos dados, enquanto o LDA visa as direções que maximizam a separação (ou discriminação) entre classes diferentes. Maximizar a variância, no caso do PCA, também significa reduzir a perda de informação, que é representada pela soma das distâncias de projeção dos dados nos eixos dos componentes principais. Enquanto o PCA projeta os dados em um subespaço diferente, o LDA tenta determinar um subespaço adequado para distinguir os padrões pertencentes as classes diferentes. <img src="images/PCAvsLDA.png" width=600> ## 1.2 Autovetores e autovalores Os autovetores e autovalores de uma matriz de covariância (ou correlação) representam a base do PCA: os autovetores (componentes principais) determinam a direção do novo espaço de atributos, e os autovalores determinam sua magnitude. Em outras palavras, os autovalores explicam a variância dos dados ao longo dos novos eixos de atributos. ### 1.2.1 Matriz de Covariância A abordagem clássica do PCA calcula a matriz de covariância, onde cada elemento representa a covariância entre dois atributos. A covariância entre dois atributos é calculada da seguinte forma: $$\sigma_{jk} = \frac{1}{n-1}\sum_{i=1}^N(x_{ij}-\overline{x}_j)(x_{ik}-\overline{x}_k)$$ Que podemos simplificar na forma vetorial através da fórmula: $$S=\frac{1}{n-1}((x-\overline{x})^T(x-\overline{x}))$$ onde $\overline{x}$ é um vetor d-dimensional onde cada valor representa a média de cada atributo, e $n$ representa o número de atributos por amostra. Vale ressaltar ainda que x é um vetor onde cada amostra está organizada em linhas e cada coluna representa um atributo. Caso se tenha um vetor onde as amostras estão organizadas em colunas e cada linha representa um atributo, a transposta passa para o segundo elemento da multiplicação. Na prática, o resultado da matriz de covariância representa basicamente a seguinte estrutura: $$\begin{bmatrix}var(1) & cov(1,2) & cov(1,3) & cov(1,4) \\ cov(1,2) & var(2) & cov(2,3) & cov(2,4) \\ cov(1,3) & cov(2,3) & var(3) & cov(3,4) \\ cov(1,4) & cov(2,4) & cov(3,4) & var(4) \end{bmatrix}$$ Onde a diagonal principal representa a variância em cada dimensão e os demais elementos são a covariância entre cada par de dimensões. Para se calcular os autovalores e autovetores, só precisamos chamar a função *np.linalg.eig*, onde cada autovetor estará representado por uma coluna. > Uma propriedade interessante da matriz de covariância é que **a soma da diagonal principal da matriz (variância para cada dimensão) é igual a soma dos autovalores**. ### 1.2.2 Matriz de Correlação Outra maneira de calcular os autovalores e autovetores é utilizando a matriz de correlação. Apesar das matrizes serem diferentes, elas vão resultar nos mesmos autovalores e autovetores (mostrado mais a frente) já que a matriz de correlação é dada pela normalização da matriz de covariância. $$corr(x,y) = \frac{cov(x,y)}{\sigma_x \sigma_y}$$ ### 1.2.3 Singular Vector Decomposition Apesar da autodecomposição (cálculo dos autovetores e autovalores) efetuada pelas matriz de covariância ou correlação ser mais intuitiva, a maior parte das implementações do PCA executam a *Singular Vector Decomposition* (SVD) para melhorar o desempenho computacional. Para calcular a SVD, podemos utilizar a biblioteca numpy, através do método *np.linalg.svd*. Note que a autodecomposição resulta nos mesmos autovalores e autovetores utilizando qualquer uma das matrizes abaixo: - Matriz de covariânca após a padronização dos dados - Matriz de correlação - Matriz de correlação após a padronização dos dados Mas qual a relação entre a SVD e o PCA? Dado que a matriz de covariância $C = \frac{X^TX}{n-1}$ é uma matriz simétrica, ela pode ser diagonalizada da seguinte forma: $$C = VLV^T$$ onde $V$ é a matriz de autovetores (cada coluna é um autovetor) e $L$ é a matriz diagonal com os autovalores $\lambda_i$ na ordem decrescente na diagonal. Se executarmos o SVD em X, nós obtemos a seguinte decomposição: $$X = USV^T$$ onde $U$ é a matriz unitária e $S$ é a matriz diagonal de *singular values* $s_i$. A partir disso, pode-se calcular que: $$C = VSU^TUSV^T = V\frac{S^2}{n-1}V^T$$ Isso significa que os *right singular vectors* V são as *principal directions* e que os *singular values* estão relacionados aos autovalores da matriz de covariância via $\lambda_i = \frac{s_i^2}{n-1}$. Os componentes principais são dados por $XV = USV^TV = US$ Resumindo: 1. Se $X = USV^T$, então as colunas de V são as direções/eixos principais; 2. As colunas de $US$ são os componentes principais; 3. *Singular values* estão relacionados aos autovalores da matriz de covariância via $\lambda_i = \frac{s_i^2}{n-1}$; 4. Scores padronizados (*standardized*) são dados pelas colunas de $\sqrt{n-1}U$ e *loadings* são dados pelas colunas de $\frac{VS}{\sqrt{n-1}}$. Veja [este link](https://stats.stackexchange.com/questions/125684) e [este](https://stats.stackexchange.com/questions/143905) para entender as diferenças entre *loadings* e *principal directions*; 5. As fórmulas acima só são válidas se $X$ está centralizado, ou seja, somente quando a matriz de covariância é igual a $\frac{X^TX}{n-1}$; 6. As proposições acima estão corretas somente quando $X$ for representado por uma matriz onde as linhas são amostras e as colunas são atributos. Caso contrário, $U$ e $V$ tem interpretações contrárias. Isto é, $U, V = V, U$; 7. Para reduzir a dimensionalidade com o PCA baseado no SVD, selecione as *k*-ésimas primeiras colunas de U, e $k\times k$ parte superior de S. O produto $U_kS_k$ é a matriz $n \times k$ necessária para conter os primeiros $k$ PCs. 8. Para reconstruir os dados originais a partir dos primeiros $k$ PCs, multiplicá-los pelo eixos principais correspondentes $V_k^T$ produz a matriz $X_k = U_kS_kV_k^T$ que tem o tamanho original $n \times p$. Essa forma gera a matriz reconstruída com o menor erro de reconstrução possível dos dados originais. [Veja esse link](https://stats.stackexchange.com/questions/130721); 9. Estritamente falando, $U$ é de tamanho $n \times n$ e $V$ é de tamanho $p \times p$. Entretanto, se $n > p$ então as últimas $n-p$ colunas de $U$ são arbitrárias (e as linhas correspondentes de $S$ são constantes e iguais a zero). ### 1.2.4 Verificação dos autovetores e autovalores Para verificar se os autovetores e autovalores calculados na autodecomposição estão corretos, devemos verificar se eles satisfazem a equação para cada autovetor e autovalor correspondente: $$\Sigma \overrightarrow{v} = \lambda \overrightarrow{v}$$ onde: $$\Sigma = Matriz\,de\,Covariância$$ $$\overrightarrow{v} = autovetor$$ $$\lambda = autovalor$$ ### 1.2.5 Escolha dos autovetores e autovalores Como dito, o objetivo típico do PCA é reduzir a dimensionalidade dos dados pela projeção em um subespaço menor, onde os autovetores formam os eixos. Entretando, os autovetores definem somente as direções dos novos eixos, já que todos eles tem tamanho 1. Logo, para decidir qual(is) autovetor(es) podemos descartar sem perder muita informação na construção do nosso subespaço, precisamos verificar os autovalores correspondentes. **Os autovetores com os maiores valores são os que contém mais informação sobre a distribuição dos nossos dados**. Esse são os autovetores que queremos. Para fazer isso, devemos ordenar os autovalores em ordem decrescente para escolher o top k autovetores. ### 1.2.6 Cálculo da Informação Após ordenar os autovalores, o próximo passo é **definir quantos componentes principais serão escolhidos para o nosso novo subespaço**. Para fazer isso, podemos utilizar o método da *variância explicada*, que calcula quanto de informação (variância) é atribuida a cada componente principal. ## 1. 3 Matriz de Projeção Na prática, a matriz de projeção nada mais é que os top k autovetores concatenados. Portanto, se queremos reduzir o nosso espaço 4-dimensional para um espaço 2-dimensional, devemos escolher os 2 autovetores com os 2 maiores autovalores para construir nossa matriz W (d$\times$k). ## 1.4 Projeção no novo subespaço O último passo do PCA é utilizar a nossa matriz de projeção dimensional W (4x2, onde cada coluna representa um autovetor) para transformar nossas amostras em um novo subespaço. Para isso, basta aplicar a seguinte equação: $$S = (X-\mu_X) \times W$$ Onde cada linha em S contém os pesos para cada atributo (coluna da matriz) no novo subespaço. A título de curiosidade, repare que se W representasse todos os autovetores - e não somente os escolhidos -, poderíamos recompor cada instância em X pela seguinte fórmula: $$X = (S \times W^{-1}) + \mu_X$$ Novamente, cada linha em S representa os pesos para cada atributo, só que dessa vez seria possível representar X pela soma de cada autovetor multiplicado por um peso. ## 1.5 Recomendações - Sempre normalize os atributos antes de aplicar o PCA (StandarScaler); - Lembre-se de armazenar a média para efetuar a ida e volta; - Não aplique o PCA após outros algoritmos de seleção de atributos ([fonte](https://www.quora.com/Should-I-apply-PCA-before-or-after-feature-selection)); - O número de componentes principais que você quer manter deve ser escolhido através da análise entre o número de componentes e a precisão do sistema. Nem sempre mais componentes principais ocasionam em melhor precisão! # 2. Dados ``` iris = load_iris() df = pd.DataFrame(data=iris.data, columns=iris.feature_names) df['class'] = iris.target df df.describe() x = df.drop(labels='class', axis=1).values y = df['class'].values print(x.shape, y.shape) ``` # 3. Implementação ``` class MyPCACov(): def __init__(self, n_components=None): self.n_components = n_components self.eigen_values = None self.eigen_vectors = None def fit(self, x): self.n_components = x.shape[1] if self.n_components is None else self.n_components self.mean_ = np.mean(x, axis=0) cov_matrix = np.cov(x - self.mean_, rowvar=False) self.eigen_values, self.eigen_vectors = np.linalg.eig(cov_matrix) self.eigen_vectors = self.eigen_vectors.T self.sorted_components_ = np.argsort(self.eigen_values)[::-1] self.projection_matrix_ = self.eigen_vectors[self.sorted_components_[:self.n_components]] self.explained_variance_ = self.eigen_values[self.sorted_components_] self.explained_variance_ratio_ = self.explained_variance_ / self.eigen_values.sum() def transform(self, x): return np.dot(x - self.mean_, self.projection_matrix_.T) def inverse_transform(self, x): return np.dot(x, self.projection_matrix_) + self.mean_ class MyPCASVD(): def __init__(self, n_components=None): self.n_components = n_components self.eigen_values = None self.eigen_vectors = None def fit(self, x): self.n_components = x.shape[1] if self.n_components is None else self.n_components self.mean_ = np.mean(x, axis=0) U, s, Vt = np.linalg.svd(x - self.mean_, full_matrices=False) # a matriz s já retorna ordenada # S = np.diag(s) self.eigen_vectors = Vt self.eigen_values = s self.projection_matrix = self.eigen_vectors[:self.n_components] self.explained_variance_ = (self.eigen_values ** 2) / (x.shape[0] - 1) self.explained_variance_ratio_ = self.explained_variance_ / self.explained_variance_.sum() def transform(self, x): return np.dot(x - self.mean_, self.projection_matrix.T) def inverse_transform(self, x): return np.dot(x, self.projection_matrix) + self.mean_ ``` # 4. Teste ``` std = StandardScaler() x_std = StandardScaler().fit_transform(x) ``` ### PCA implementado pela matriz de covariância ``` pca_cov = MyPCACov(n_components=2) pca_cov.fit(x_std) print('Autovetores: \n', pca_cov.eigen_vectors) print('Autovalores: \n', pca_cov.eigen_values) print('Variância explicada: \n', pca_cov.explained_variance_) print('Variância explicada (ratio): \n', pca_cov.explained_variance_ratio_) print('Componentes ordenados: \n', pca_cov.sorted_components_) x_std_proj = pca_cov.transform(x_std) plt.figure() plt.scatter(x_std_proj[:, 0], x_std_proj[:, 1], c=y) x_std_back = pca_cov.inverse_transform(x_std_proj) print(x_std[:5]) print(x_std_back[:5]) ``` ### PCA implementado pela SVD ``` pca_svd = MyPCASVD(n_components=2) pca_svd.fit(x_std) print('Autovetores: \n', pca_svd.eigen_vectors) print('Autovalores: \n', pca_svd.eigen_values) print('Variância explicada: \n', pca_svd.explained_variance_) print('Variância explicada (ratio): \n', pca_svd.explained_variance_ratio_) x_std_proj = pca_svd.transform(x_std) plt.figure() plt.scatter(x_std_proj[:, 0], x_std_proj[:, 1], c=y) x_std_back = pca_svd.inverse_transform(x_std_proj) print(x_std[:5]) print(x_std_back[:5]) ``` ## Comparação com o Scikit-learn ``` pca_sk = PCA(n_components=2) pca_sk.fit(x_std) print('Autovetores: \n', pca_sk.components_) print('Autovalores: \n', pca_sk.singular_values_) print('Variância explicada: \n', pca_sk.explained_variance_) print('Variância explicada (ratio): \n', pca_sk.explained_variance_ratio_) x_std_proj_sk = pca_sk.transform(x_std) plt.figure() plt.scatter(x_std_proj_sk[:, 0], x_std_proj_sk[:, 1], c=y) x_std_back_sk = pca_sk.inverse_transform(x_std_proj_sk) print(x_std[:5]) print(x_std_back_sk[:5]) ``` ### Observação sobre a implementação do Scikit-learn Por algum motivo (que não sei qual), a [implementação do scikit-learn](https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/decomposition/pca.py) inverte os sinais de alguns valores na matriz de autovetores. Na implementação, as matrizes $U$ e $V$ são passada para um método ```svd_flip``` (implementada [nesse arquivo](https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/utils/extmath.py)): ```py U, V = svd_flip(U[:, ::-1], V[::-1]) ``` Repare que isso muda apenas os dados projetados. No gráfico, isso inverte os eixos correspondentes apenas. No entanto, os **autovalores**, a ```explained_variance```, ```explained_variance_ratio``` e os dados reprojetados ao espaço original são exatamente iguais. ## 5. Referências - [Antigo notebook do PCA com explicações passo-a-passo](https://github.com/arnaldog12/Machine_Learning/blob/62b628bd3c37ec2fa52e349f38da24751ef67313/PCA.ipynb) - [Principal Component Analysis in Python](https://plot.ly/ipython-notebooks/principal-component-analysis/) - [Implementing a Principal Component Analysis (PCA)](https://sebastianraschka.com/Articles/2014_pca_step_by_step.html) - [Relationship between SVD and PCA. How to use SVD to perform PCA?](https://stats.stackexchange.com/questions/134282/relationship-between-svd-and-pca-how-to-use-svd-to-perform-pca) - [How to reverse PCA and reconstruct original variables from several principal components?](https://stats.stackexchange.com/questions/229092/how-to-reverse-pca-and-reconstruct-original-variables-from-several-principal-com) - [Everything you did and didn't know about PCA](http://alexhwilliams.info/itsneuronalblog/2016/03/27/pca/) - [Unpacking (** PCA )](https://towardsdatascience.com/unpacking-pca-b5ea8bec6aa5)
github_jupyter
## Given two binary strings, return their sum (also a binary string). The input strings are both non-empty and contains only characters 1 or 0. ### Example 1: Input: a = "11", b = "1" Output: "100" ### Example 2: Input: a = "1010", b = "1011" Output: "10101" ``` def add_binary(a,b): return '{0:b}'.format(int(a, 2) + int(b, 2)) print(add_binary('11','11')) ``` This method has quite low performance in the case of large input numbers. One could use Bit Manipulation approach to speed up the solution. ### Logic ![title](image/add_binary.png) ![title](image/op_addbinary.jpg) #### 1.Find the maxlength #### 2.Based on the length fill with zero #### 3.Create a variable for carry and result #### 4.Loop through tail, check if value is "1" .If yes then increment carry #### 5.if Carry % 2 == 1 then append 1 else zero #### 6. Divide carry by 2 (Keep doing this for all the elements in the array) #### 7. Outside the loop if carry is 1 then append 1 #### 8. Reverse the string and join all the values ``` # Program with comments def add_binary(a,b): n = max(len(a), len(b)) print('\nlength',n) a, b = a.zfill(n), b.zfill(n) print('zfill a,b',a,b) carry = 0 result = [] for i in range(n - 1, -1, -1): print('\ni',i) print('carry on top',carry) if a[i] == '1': carry += 1 if b[i] == '1': carry += 1 if carry % 2 == 1: result.append('1') else: result.append('0') print('\a[i]',a[i]) print('\b[i]',b[i]) print('carry after increement',carry) print('carry % 2',carry % 2) print('answer',result) #carry =cary//2 print("Carry=carry// 2", carry) carry //= 2 print("After division carry", carry) if carry == 1: result.append('1') print('result',result) result.reverse() return ''.join(result) print(add_binary('01','11')) # Program without comments def add_binary(a,b): n = max(len(a), len(b)) a, b = a.zfill(n), b.zfill(n) carry = 0 result = [] for i in range(n - 1, -1, -1): if a[i] == '1': carry += 1 if b[i] == '1': carry += 1 if carry % 2 == 1: result.append('1') else: result.append('0') carry //= 2 if carry == 1: result.append('1') result.reverse() return ''.join(result) print(add_binary('10','110')) ``` ![title](image/bit_man.jpg) ``` def addBinary(a, b) -> str: x, y = int(a, 2), int(b, 2) while y: answer = x ^ y carry = (x & y) << 1 x, y = answer, carry return bin(x)[2:] print(addBinary('001','101')) ``` def addBinary( a, b): if len(a)==0: return b if len(b)==0: return a if a[-1] == '1' and b[-1] == '1': return addBinary(addBinary(a[0:-1],b[0:-1]),'1')+'0' if a[-1] == '0' and b[-1] == '0': return addBinary(a[0:-1],b[0:-1])+'0' else: return addBinary(a[0:-1],b[0:-1])+'1' ``` # addBinary(addBinary(1,[]),1)+'0' # addBinary(addBinary(1,[]),1) + '0' ==> addBinary(1,[]) return a =1 B # addBinary(1,1)+'0' # return {addBinary(addBinary(a[0:-1],b[0:-1]),'1')+'0' } +'0' ==> addBinary(a[0:-1],b[0:-1]) return empty A # return {addBinary(empty,'1')+'0' } +'0' ===> addBinary(empty,'1') return 1 A #1 +'0' +'0' addBinary("11","1") def add_binary(a,b): print("len(a) {}".format(len(a))) print("len(b) {}".format(len(b))) print("a[-1] {}".format(a[-1])) print("b[-1] {}".format(b[-1])) print("a[0:-1]) {}".format(a[0:-1])) print("b[0:-1]) {}".format(b[0:-1])) if len(a)==0: print("len a==0") return b if len(b)==0: print("len b==0") return a if a[-1] == '1' and b[-1] == '1': print("First if condition 1,1") return addBinary(addBinary(a[0:-1],b[0:-1]),'1')+'0' if a[-1] == '0' and b[-1] == '0': print("Second if condition 0,0") return add_binary(a[0:-1],b[0:-1])+'0' else: print("Else") return add_binary(a[0:-1],b[0:-1])+'1' add_binary("1010","1011") def add_binary_nums(x, y): print((len(x))) print((len(x))) max_len = max(len(x), len(y)) print("max_len {}".format(max_len)) print() #Fill it with zeros x = x.zfill(max_len) print("x {}".format(x)) y = y.zfill(max_len) print("y {}".format(y)) print(y) # initialize the result result = '' # initialize the carry carry = 0 # Traverse the string for i in range(max_len - 1, -1, -1): r = carry r += 1 if x[i] == '1' else 0 r += 1 if y[i] == '1' else 0 result = ('1' if r % 2 == 1 else '0') + result carry = 0 if r < 2 else 1 # Compute the carry. if carry !=0 : result = '1' + result return result.zfill(max_len) add_binary_nums('100','10') """ This is the same solution with print Note: carry //=2 This step is needed to carry the current unit """ def add_binary(a,b): n = max(len(a), len(b)) a, b = a.zfill(n), b.zfill(n) carry = 0 result = [] for i in range(n - 1, -1, -1): if a[i] == '1': carry += 1 if b[i] == '1': carry += 1 print('\na',a[i]) print('b',b[i]) if carry % 2 == 1: result.append('1') else: result.append('0') print('carry',carry % 2 == 1) print('result',result) print('\nb4 carry',carry) carry //= 2 print('out carry',carry) if carry == 1: result.append('1') print('result',result) result.reverse() return ''.join(result) print(add_binary('10','111')) print(0//2) print(1//2) ```
github_jupyter
# Chapter 3: Dynamic Programming ## 1. Exercise 4.1 $\pi$ is equiprobable random policy, so all actions equally likely. - $q_\pi(11, down)$ With current state $s=11$ and action $a=down$, we have next is the terminal state $s'=end$, which have reward $R'=0$ $$ \begin{aligned} q_\pi(11, down) &= \sum_{s',r}p(s',r | s,a)\big[r+\gamma v_\pi(s')\big] \cr &= 1 * (-1 + 0) \cr &= -1 \end{aligned} $$ - $q_\pi(7, down)$ With current state $s=7$ and action $a=down$, we have next is the terminal state $s'=11$, which have state-value function $v_\pi(s)$ $$ \begin{aligned} q_\pi(7, down) &= \sum_{s',r}p(s',r | s,a)\big[r+\gamma v_\pi(s')\big] \cr &= 1 * \big[-1 + \gamma v_\pi(s')\big] \cr &= -1 + \gamma v_\pi(s') \end{aligned} $$ ## 2. Exercise 4.2 - Transitions from the original states are unchanged $$ \begin{aligned} v_\pi(15) &= \sum_a \pi(a|s=15)\sum_{s',r}p(s',r|s,a)\big[r+\gamma v_\pi(s')\big] \cr &= 0.25\big[1*\big(-1+\gamma v_\pi(12)\big)+1*\big(-1+\gamma v_\pi(13)\big)+1*\big(-1+\gamma v_\pi(14)\big)+1*\big(-1+\gamma v_\pi(15)\big)\big] \cr &= -1 + 0.25\gamma\sum_{s=12}^{15}v_\pi(s) \end{aligned} $$ In which, $\displaystyle v_\pi(13)=-1 + 0.25\gamma\sum_{s\in\{9,12,13,14\}}v_\pi(s)$ - Add action **down** to state 13, to go to state 15 Compute Fomular is similar to above: $$v_\pi(15)=-1 + 0.25\gamma\sum_{s=12}^{15}v_\pi(s)$$ But, $\displaystyle v_\pi(13)=-1 + 0.25\gamma\sum_{s\in\{9,12,14,15\}}v_\pi(s)$ ## 3. Exercise 4.3 - $q_\pi$ evaluation $$ \begin{aligned} q_\pi(s, a) &= E[G_t | S_t=s, A_t=a] \cr &= E[R_{t+1}+\gamma G_{t+1} | S_t=s, A_t=a] \cr &= E[R_{t+1}+\gamma V_\pi(S_{t+1}) | S_t=s, A_t=a] \cr &= \sum_{s',r}p(s',r | s,a)\big[r+\gamma v_\pi(s')\big] \end{aligned} $$ - Update rule for $q_\pi$ $$ \begin{aligned} q_{k+1}(s, a) &= E_\pi[R_{t+1} + \gamma v_k(S_{t+1}) | S_t=s, A_t=a] \cr &= \sum_{s',r}p(s',r | s,a)\big[r+\gamma v_k(s')\big] \cr &= \sum_{s',r}p(s',r | s,a)\Big[r+\gamma \sum_{a'\in\mathcal A(s')}\pi(a' | s')q_k(s', a')\Big] \end{aligned} $$ ## 4. Exercise 4.4 When, the policy continually switches between two or more policies that are equally good, the difference between switches is small, so policy evaluation loop will be breaked before convergence. $$\Delta = \max\big(\Delta, | v-V(s) |\big)$$ So, in this case, it maybe useful if we talk the sum of all differences $$\Delta = \Delta + | v-V(s) |$$ ## 5. Exercise 4.5 Policy Iteration algorithm for action values ### 1. Initialization $\quad \pi(s)\in\mathcal A(s)$ and $Q(s,a)\in\mathbb R$ arbitrarily for all $s\in\mathcal S$ and $a\in\mathcal A(s)$ ### 2. Policy Evaluation $\quad$Loop: $\quad\quad \Delta\gets0$ $\quad\quad$ Loop for each $s\in\mathcal S$ $\quad\quad\quad$ Loop for each $a\in\mathcal A(s)$ $\quad\quad\quad\quad q\gets Q(s,a)$ $\quad\quad\quad\quad \displaystyle Q(s,a)\gets \sum_{s',r}p(s',r | s,a)\Big[r+\gamma \sum_{a'\in\mathcal A(s')}\pi(a' | s')Q(s', a')\Big]$ $\quad\quad\quad\quad \Delta\gets \Delta+\big| q- Q(s,a)\big|$ $\quad\quad \text{until }\Delta<\theta$ a small positive number determining the accuracy of estimation ### 3. Policy Improvement $\quad\textit{policy-stable}\gets\textit{true}$ $\quad$For each $s\in\mathcal S$ $\quad\quad \textit{old-aciton}\gets\pi(s)$ $\quad\quad \pi(s)\gets\arg\max_a Q(s,a)$ $\quad\quad$If $\textit{old-aciton}\neq\pi(s)$, then $\textit{policy-stable}\gets\textit{false}$ $\quad$If $\textit{policy-stable}$, then stop and return $Q\approx q_*$ and $\pi\approx\pi_*$; else go to $2$ ## 6. Exercise 4.6 ## 7. Exercise 4.7 ## 8. Exercise 4.8 ## 9. Exercise 4.9 ## 10. Exercise 4.10 Value iteration update for action values, $q_{k+1}(s,a)$ $$ \begin{aligned} q_{k+1}(s,a) &= \max E\big[R_{t+1}+\gamma v_k(S_{t+1}) | S_t=s,A_t=a\big] \cr &= \max\sum_{s',r}p(s',r | s,a)\big[r+\gamma v_k(s')\big] \cr &= \max\sum_{s',r}p(s',r | s,a)\big[r+\gamma\sum_{a'\in\mathcal A(s')}\pi(a' | s')q_k(s', a')\big] \end{aligned} $$
github_jupyter
# Predictions with Pyro + GPyTorch (High-Level Interface) ## Overview In this example, we will give an overview of the high-level Pyro-GPyTorch integration - designed for predictive models. This will introduce you to the key GPyTorch objects that play with Pyro. Here are the key benefits of the integration: **Pyro provides:** - The engines for performing approximate inference or sampling - The ability to define additional latent variables **GPyTorch provides:** - A library of kernels/means/likelihoods - Mechanisms for efficient GP computations ``` import math import torch import pyro import tqdm import gpytorch from matplotlib import pyplot as plt %matplotlib inline ``` In this example, we will be doing simple variational regression to learn a monotonic function. This example is doing the exact same thing as [GPyTorch's native approximate inference](../04_Variational_and_Approximate_GPs/SVGP_Regression_CUDA.ipynb), except we're now using Pyro's variational inference engine. In general - if this was your dataset, you'd be better off using GPyTorch's native exact or approximate GPs. (We're just using a simple example to introduce you to the GPyTorch/Pyro integration). ``` train_x = torch.linspace(0., 1., 21) train_y = torch.pow(train_x, 2).mul_(3.7) train_y = train_y.div_(train_y.max()) train_y += torch.randn_like(train_y).mul_(0.02) fig, ax = plt.subplots(1, 1, figsize=(3, 2)) ax.plot(train_x.numpy(), train_y.numpy(), 'bo') ax.set_xlabel('x') ax.set_ylabel('y') ax.legend(['Training data']) ``` ## The PyroGP model In order to use Pyro with GPyTorch, your model must inherit from `gpytorch.models.PyroGP` (rather than `gpytorch.modelks.ApproximateGP`). The `PyroGP` extends the `ApproximateGP` class and differs in a few key ways: - It adds the `model` and `guide` functions which are used by Pyro's inference engine. - It's constructor requires two additional arguments beyond the variational strategy: - `likelihood` - the model's likelihood - `num_data` - the total amount of training data (required for minibatch SVI training) - `name_prefix` - a unique identifier for the model ``` class PVGPRegressionModel(gpytorch.models.PyroGP): def __init__(self, train_x, train_y, likelihood): # Define all the variational stuff variational_distribution = gpytorch.variational.CholeskyVariationalDistribution( num_inducing_points=train_y.numel(), ) variational_strategy = gpytorch.variational.VariationalStrategy( self, train_x, variational_distribution ) # Standard initializtation super(PVGPRegressionModel, self).__init__( variational_strategy, likelihood, num_data=train_y.numel(), name_prefix="simple_regression_model" ) self.likelihood = likelihood # Mean, covar self.mean_module = gpytorch.means.ConstantMean() self.covar_module = gpytorch.kernels.ScaleKernel( gpytorch.kernels.MaternKernel(nu=1.5) ) def forward(self, x): mean = self.mean_module(x) # Returns an n_data vec covar = self.covar_module(x) return gpytorch.distributions.MultivariateNormal(mean, covar) model = PVGPRegressionModel(train_x, train_y, gpytorch.likelihoods.GaussianLikelihood()) ``` ## Performing inference with Pyro Unlike all the other examples in this library, `PyroGP` models use Pyro's inference and optimization classes (rather than the classes provided by PyTorch). If you are unfamiliar with Pyro's inference tools, we recommend checking out the [Pyro SVI tutorial](http://pyro.ai/examples/svi_part_i.html). ``` # this is for running the notebook in our testing framework import os smoke_test = ('CI' in os.environ) num_iter = 2 if smoke_test else 200 num_particles = 1 if smoke_test else 256 def train(lr=0.01): optimizer = pyro.optim.Adam({"lr": 0.1}) elbo = pyro.infer.Trace_ELBO(num_particles=num_particles, vectorize_particles=True, retain_graph=True) svi = pyro.infer.SVI(model.model, model.guide, optimizer, elbo) model.train() iterator = tqdm.tqdm_notebook(range(num_iter)) for i in iterator: model.zero_grad() loss = svi.step(train_x, train_y) iterator.set_postfix(loss=loss) %time train() ``` In this example, we are only performing inference over the GP latent function (and its associated hyperparameters). In later examples, we will see that this basic loop also performs inference over any additional latent variables that we define. ## Making predictions For some problems, we simply want to use Pyro to perform inference over latent variables. However, we can also use the models' (approximate) predictive posterior distribution. Making predictions with a PyroGP model is exactly the same as for standard GPyTorch models. ``` fig, ax = plt.subplots(1, 1, figsize=(4, 3)) train_data, = ax.plot(train_x.cpu().numpy(), train_y.cpu().numpy(), 'bo') model.eval() with torch.no_grad(): output = model.likelihood(model(train_x)) mean = output.mean lower, upper = output.confidence_region() line, = ax.plot(train_x.cpu().numpy(), mean.detach().cpu().numpy()) ax.fill_between(train_x.cpu().numpy(), lower.detach().cpu().numpy(), upper.detach().cpu().numpy(), color=line.get_color(), alpha=0.5) ax.set_xlabel('x') ax.set_ylabel('y') ax.legend([train_data, line], ['Train data', 'Prediction']) ``` ## Next steps This was a pretty boring example, and it wasn't really all that different from GPyTorch's native SVGP implementation! The real power of the Pyro integration comes when we have additional latent variables to infer over. We will see an example of this in the [next example](./Clustered_Multitask_GP_Regression.ipynb), which learns a clustering over multiple time series using multitask GPs and Pyro.
github_jupyter
# Exercise 1: Schema on Read ``` from pyspark.sql import SparkSession import pandas as pd import matplotlib spark = SparkSession.builder.getOrCreate() dfLog = spark.read.text("data/NASA_access_log_Jul95.gz") ``` # Load the dataset ``` #Data Source: http://ita.ee.lbl.gov/traces/NASA_access_log_Jul95.gz dfLog = spark.read.text("data/NASA_access_log_Jul95.gz") ``` # Quick inspection of the data set ``` # see the schema dfLog.printSchema() # number of lines dfLog.count() #what's in there? dfLog.show(5) #a better show? dfLog.show(5, truncate=False) #pandas to the rescue pd.set_option('max_colwidth', 200) dfLog.limit(5).toPandas() ``` # Let' try simple parsing with split ``` from pyspark.sql.functions import split # TODO dfArrays = dfLog.withColumn("tokenized", split("value"," ")) dfArrays.limit(10).toPandas() ``` # Second attempt, let's build a custom parsing UDF ``` from pyspark.sql.functions import udf # TODO @udf def parseUDF(line): import re PATTERN = '^(\S+) (\S+) (\S+) \[([\w:/]+\s[+\-]\d{4})\] "(\S+) (\S+)\s*(\S*)" (\d{3}) (\S+)' match = re.search(PATTERN, line) if match is None: return (line, 0) size_field = match.group(9) if size_field == '-': size = 0 else: size = match.group(9) return { "host" : match.group(1), "client_identd" : match.group(2), "user_id" : match.group(3), "date_time" : match.group(4), "method" : match.group(5), "endpoint" : match.group(6), "protocol" : match.group(7), "response_code" : int(match.group(8)), "content_size" : size } # TODO dfParsed= dfLog.withColumn("parsed", parseUDF("value")) dfParsed.limit(10).toPandas() dfParsed.printSchema() ``` # Third attempt, let's fix our UDF ``` #from pyspark.sql.functions import udf # already imported from pyspark.sql.types import MapType, StringType # TODO @udf(MapType(StringType(),StringType())) def parseUDFbetter(line): import re PATTERN = '^(\S+) (\S+) (\S+) \[([\w:/]+\s[+\-]\d{4})\] "(\S+) (\S+)\s*(\S*)" (\d{3}) (\S+)' match = re.search(PATTERN, line) if match is None: return (line, 0) size_field = match.group(9) if size_field == '-': size = 0 else: size = match.group(9) return { "host" : match.group(1), "client_identd" : match.group(2), "user_id" : match.group(3), "date_time" : match.group(4), "method" : match.group(5), "endpoint" : match.group(6), "protocol" : match.group(7), "response_code" : int(match.group(8)), "content_size" : size } # TODO dfParsed= dfLog.withColumn("parsed", parseUDFbetter("value")) dfParsed.limit(10).toPandas() # TODO dfParsed= dfLog.withColumn("parsed", parseUDFbetter("value")) dfParsed.limit(10).toPandas() #Bingo!! we'got a column of type map with the fields parsed dfParsed.printSchema() dfParsed.select("parsed").limit(10).toPandas() ``` # Let's build separate columns ``` dfParsed.selectExpr("parsed['host'] as host").limit(5).show(5) dfParsed.selectExpr(["parsed['host']", "parsed['date_time']"]).show(5) fields = ["host", "client_identd","user_id", "date_time", "method", "endpoint", "protocol", "response_code", "content_size"] exprs = [ "parsed['{}'] as {}".format(field,field) for field in fields] exprs dfClean = dfParsed.selectExpr(*exprs) dfClean.limit(5).toPandas() ``` ## Popular hosts ``` from pyspark.sql.functions import desc dfClean.groupBy("host").count().orderBy(desc("count")).limit(10).toPandas() ``` ## Popular content ``` from pyspark.sql.functions import desc dfClean.groupBy("endpoint").count().orderBy(desc("count")).limit(10).toPandas() ``` ## Large Files ``` dfClean.createOrReplaceTempView("cleanlog") spark.sql(""" select endpoint, content_size from cleanlog order by content_size desc """).limit(10).toPandas() from pyspark.sql.functions import expr dfCleanTyped = dfClean.withColumn("content_size_bytes", expr("cast(content_size as int)")) dfCleanTyped.limit(5).toPandas() dfCleanTyped.createOrReplaceTempView("cleantypedlog") spark.sql(""" select endpoint, content_size from cleantypedlog order by content_size_bytes desc """).limit(10).toPandas() from pyspark.sql.functions import col, unix_timestamp parsedDateDf = dfCleanTyped.withColumn( 'parsed_date_time', unix_timestamp(col('date_time'), "dd/MMM/yyyy:HH:mm:ss Z").cast("timestamp") ) parsedDateDf.limit(20).toPandas() ```
github_jupyter
``` %%sh pip -q install sagemaker stepfunctions --upgrade # Enter your role ARN workflow_execution_role = '' import boto3 import sagemaker import stepfunctions from stepfunctions import steps from stepfunctions.steps import TrainingStep, ModelStep, EndpointConfigStep, EndpointStep, TransformStep, Chain from stepfunctions.inputs import ExecutionInput from stepfunctions.workflow import Workflow sess = sagemaker.Session() bucket = sess.default_bucket() role = sagemaker.get_execution_role() prefix = 'sklearn-boston-housing-stepfunc' training_data = sess.upload_data(path='housing.csv', key_prefix=prefix + "/training") output = 's3://{}/{}/output/'.format(bucket,prefix) print(training_data) print(output) import pandas as pd data = pd.read_csv('housing.csv') data.head() data.drop(['medv'], axis=1, inplace=True) data.to_csv('test.csv', index=False, header=False) batch_data = sess.upload_data(path='test.csv', key_prefix=prefix + "/batch") from sagemaker.sklearn import SKLearn sk = SKLearn(entry_point='sklearn-boston-housing.py', role=role, framework_version='0.23-1', train_instance_count=1, train_instance_type='ml.m5.large', output_path=output, hyperparameters={ 'normalize': True, 'test-size': 0.1, } ) execution_input = ExecutionInput(schema={ 'JobName': str, 'ModelName': str, 'EndpointName': str }) training_step = TrainingStep( 'Train a Scikit-Learn script on the Boston Housing dataset', estimator=sk, data={'training': sagemaker.inputs.TrainingInput(training_data, content_type='text/csv')}, job_name=execution_input['JobName'] ) model_step = ModelStep( 'Create the model in SageMaker', model=training_step.get_expected_model(), model_name=execution_input['ModelName'] ) transform_step = TransformStep( 'Transform the dataset in batch mode', transformer=sk.transformer(instance_count=1, instance_type='ml.m5.large'), job_name=execution_input['JobName'], model_name=execution_input['ModelName'], data=batch_data, content_type='text/csv' ) endpoint_config_step = EndpointConfigStep( "Create an endpoint configuration for the model", endpoint_config_name=execution_input['ModelName'], model_name=execution_input['ModelName'], initial_instance_count=1, instance_type='ml.m5.large' ) endpoint_step = EndpointStep( "Create an endpoint hosting the model", endpoint_name=execution_input['EndpointName'], endpoint_config_name=execution_input['ModelName'] ) workflow_definition = Chain([ training_step, model_step, transform_step, endpoint_config_step, endpoint_step ]) import time timestamp = time.strftime("%Y-%m-%d-%H-%M-%S", time.gmtime()) workflow = Workflow( name='sklearn-boston-housing-workflow1-{}'.format(timestamp), definition=workflow_definition, role=workflow_execution_role, execution_input=execution_input ) # Not available in JupyterLab # see https://github.com/aws/aws-step-functions-data-science-sdk-python/issues/127 # workflow.render_graph(portrait=True) workflow.create() execution = workflow.execute( inputs={ 'JobName': 'sklearn-boston-housing-{}'.format(timestamp), 'ModelName': 'sklearn-boston-housing-{}'.format(timestamp), 'EndpointName': 'sklearn-boston-housing-{}'.format(timestamp) } ) # Not available in JupyterLab # see https://github.com/aws/aws-step-functions-data-science-sdk-python/issues/127 # execution.render_progress() execution.list_events() workflow.list_executions(html=True) Workflow.list_workflows(html=True) ``` ---
github_jupyter
### Math 157: Intro to Mathematical Software ### UC San Diego, Winter 2021 ### Homework 1: due Thursday, Jan 14 at 8PM Pacific In general, each homework will be presented as a single Jupyter notebook like this one. A problem will typically consist of multiple components; however, each overall problem within a homework will count the same towards that homework grade. In addition, homework sets may contain different numbers of problems, but the maximum score on each homework will count the same towards the overall course grade. (Remember, we only count your top 6 of 8 homeworks.) Each component of each problem will also briefly indicate the criteria on which it is being judged. - For free-response problems, answers should be in complete sentences. "Conceptual correctness" means we are looking for a specific answer but not a specific wording; "thoroughness" means you gave enough of a response (e.g., if we want three "essentially different" examples of some phenomenon, your examples should be really different). - For problems involving mathematical calculations, answers should be presented in standard mathematical notation using TeX in a Markdown cell. "Mathematical correctness" means what it would in a normal math course. - For problems requiring code, answers should appear as executable code. "Code correctness" means that the code executes without errors and does what was asked; we may assess this using automated testing software. While you are free to create additional notebooks in which to do scratch work, please compose your answers by typing them directly into this notebook copying/pasting. ### Kernel: All computations in this notebook should use the Python 3 (systemwide) kernel. ### Collaborators/resources used: To start, please list all students you worked with in the box below. Additionally, include basic citations to resources you used along the way (it can be as simple as Title: hyperlink_to_the_webpage). You do not need to add citations to hyperlinks of resources that I have already added. Remember! Collaboration is *encouraged*, but *you must write up answers in your own words*. Answer Box: ### Problem 1: Markdown and Jupyter Notebooks Grading criterion: correctness. The following hyperlink may be useful for the questions on Jupyter Notebooks: http://maxmelnick.com/2016/04/19/python-beginner-tips-and-tricks.html 1a.) Write some text in Markdown that illustrates at least three formatting features not used anywhere else in this homework. Your text should also describe in words what these features are. Answer Box: 1b.) What is the difference between "Command Mode" and "Edit Mode" in Jupyter Notebook? Answer Box: 1c.) Write at least 3 keyboard shortcuts that can be used with Jupyter Notebook (and include what they do!) Example: In command mode, the shortcut "A" inserts a cell above a selected cell. Answer Box: Note: You should keep in mind your answers to 1c.) as the quarter progresses! Keyboard shortcuts can make your workflow much more efficient! ### Problem 2: Python Basics Grading criterion: correctness. 2a. In the cell below there is a list of numbers, `L`. Using *basic (one-line/one-command) Python List operations*, print out -The length of the list -The last number in the list in *two* separate ways -The list in reverse order -Every third number in the list -The sum of the numbers in the list ``` L = [x**2 % 491 for x in range(0,1000,27)] #Your code goes here (There should be 6 separate print statements in this block, corresponding to the 6 questions listed above) ``` 2b. Run the following code. In this specific instance, is Python operating most closely to "pass by reference" or "pass by value"? (In reality Python operates under "pass by object reference" but I mainly want to stress what happens on this example). A useful resource could be: https://robertheaton.com/2014/02/09/pythons-pass-by-object-reference-as-explained-by-philip-k-dick/ . This is an important example to keep in mind as you write programs in python; even though a list can be initialized globally, things that happen to it inside a function call can *persist*. ``` numbers = [1,2,7] print('The list of numbers is:', numbers) def append5(L): L.append(5) return() append5(numbers) append5(numbers) append5(numbers) print('The new list of numbers is:', numbers) def valueAppend5(L): print(L) L[0] = 100 print(L) numbers = [1,2,7] valueAppend5(numbers) print(numbers) ``` Answer Box: 2c. Build a dictionary in Python that has keys of 5 *distinct types*. The values can be arbitrary. Remember: to get the type of an object X in Python, call type(X). In the markdown cell below the dictionary, write out a reasonable key type you could use if you were using the key to index a location in a grid (i.e. the key has an x-coordinate and a y-coordinate). This link may be useful: https://www.tutorialsteacher.com/python/python-data-types ``` typeDict = ###Your code here ``` Answer Box: ### Problem 3: Truth values Grading criterion: correctness and thoroughness. An expression `e` will be called *truthy* if `bool(e)` is `True`. Otherwise `e` is *falsy*. In other words, the following conditional statement determines whether or not an object is truthy: 3a. Create a list `l` consisting of 10 different Python objects that are falsy. For correctness, your list must have the property that `a is b` evaluates to `False` whenever `a` and `b` are entries of the list in different positions. For thoroughness, the entries should look as different as possible. (Hint: an empty list `[]` is an example.) ``` l = [] # insert ten objects here # Use this code to test correctness of your answer. Each print statement should output True if you've done this correctly. print(len(l) == 10) # Checks that your list has exactly 10 elements print(all(not l[i] for i in range(10))) # Checks that your list consists of falsy elements print(all(not (l[i] is l[j]) for i in range(10) for j in range(i+1, 10))) # Checks that different list elements correspond to different ``` 3b. In Python, "is" means "identical objects", whereas "==" can be much more subtle. Create a list `l` consisting of 5 tuples `(a, b)` for each of which `a==b` evaluates to `True` but `a is b` evaluates to `False`. (Hint: the tuple `([], [])` is an example) ``` l = [] # insert five objects here ``` 3c: By analogy with the code snippet given to test your answer in 3a, write a code snippet to verify correctness of your answer to 3b. That is, the code snippet should print one or more True/False values, all of which are True if and only if the answer is correct. ``` # Your code snippet goes here ``` ### Problem 4: Flow control Grading criterion: correctness of output. 4b. Write a function named `fizzBuzz` that accepts an integer `N` and for each integer `m` from `1` to `N`, prints `'Fizz'` if `m` is divisible by 2 but not 3, prints `'Buzz'` if `m` is divisible by 3 but not 2, prints `'FizzBuzz'` if `m` is divisible by 2 and 3, and prints `'Moot'` if none of the above are true. ``` def fizzBuzz(N): # Your code goes here # To test your answer, run the following function call. I have displayed the output you should get in the raw cell below. fizzBuzz(7) ``` ### Problem 5: Better and worse Grading criterion: correctness of code and thoroughness of explanation. 5a. The Fibonacci numbers are defined by $f_0 = f_1 = 1$ and, for $n\geq 2$, $$f_n = f_{n-1}+f_{n-2}.$$ Write two functions which take as input a non-negative integer $n$ and output the $n$th Fibonacci number. The first function, `fib1`, *should use recursion*. The second function, `fib2`, *should not use recursion*. If you have not seen recursion before, I have written a function which uses recursion to compute the $n$th factorial, $$n! = n\cdot(n-1)\dots 2\cdot 1.$$ You may want to model `fib1` after this. ``` def recursiveFactorial(N): if N == 0: #Every recursive function needs a base case, to tell the program where to start return(1) #In this case, the base case is 0! = 1 else: #If we are not in the base case, this means N >= 1 return(N*recursiveFactorial(N-1)) #In this case, we reduce the problem to finding recursiveFactorial(N-1), and then modify it by multiplying by N to get the final answer. def fib1(N): #Your code goes here (use recursion this time!) def fib2(N): #Your code goes here (do not use recursion this time!) #To verify that your functions agree, run this line: print(all([fib1(n) == fib2(n) for n in range(0,15)])) ``` 5b.) What are the two code cells below this question computing? Based on the evaluation of the code cells, which method of computing Fibonacci numbers is preferable? (You may want to read about Python's time module here: https://docs.python.org/3/library/time.html, although you can probably guess what is happening with the timing anyways. For those of you interested in coding as a potential career, you may want to read about the differences between recursion and dynamic programming, which this problem highlights) ``` import time a = time.time() for i in range(10): g = fib1(30) b = time.time() print((b-a)/10) import time a = time.time() for i in range(10): g = fib2(30) b = time.time() print((b-a)/10) ``` Answer Box: ### Problem 6: List comprehensions Grading criterion: correctness. Translate each of the following mathematical definitions of sets into a Python list comprehension. WARNING: Remember how the range function in Python works, and remember that exponentiation in Python *does not* use the carat symbol. - $\{x:0\leq x\leq 100\}$ - $\{x: 0 < x < 100, x \not\equiv 0 \pmod{3} \}$ - $\{x: 10 < x < 50, x^2 \equiv 1 \pmod{5}\}$ - $\{(x,y): 0 < x < 1000, 0 < y < 1000, x^2 - y^3 = 1\}$ ``` l1 = #Replace this comment with your one line list comprehension l2 = #Replace this comment with your one line list comprehension l3 = #Replace this comment with your one line list comprehension l4 = #Replace this comment with your one line list comprehension print(l1) print(l2) print(l3) print(l4) ```
github_jupyter
``` #@title Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. ``` **Note:** This notebook can run using TensorFlow 2.5.0 ``` #!pip install tensorflow==2.5.0 import json import tensorflow as tf import csv import random import numpy as np from tensorflow.keras.preprocessing.text import Tokenizer from tensorflow.keras.preprocessing.sequence import pad_sequences from tensorflow.keras.utils import to_categorical from tensorflow.keras import regularizers embedding_dim = 100 max_length = 16 trunc_type='post' padding_type='post' oov_tok = "<OOV>" training_size=160000 test_portion=.1 corpus = [] # Note that I cleaned the Stanford dataset to remove LATIN1 encoding to make it easier for Python CSV reader # You can do that yourself with: # iconv -f LATIN1 -t UTF8 training.1600000.processed.noemoticon.csv -o training_cleaned.csv # training_cleaned.csv !gdown --id 1wd8KaeCSHxt-nEpMeuHFSNWrDp8joUXJ num_sentences = 0 with open("./training_cleaned.csv") as csvfile: reader = csv.reader(csvfile, delimiter=',') for row in reader: list_item=[] ### START CODE HERE list_item.append(row[5]) this_label=row[0] if this_label=='0': list_item.append(0) else: list_item.append(1) ### END CODE HERE num_sentences = num_sentences + 1 corpus.append(list_item) print(num_sentences) print(len(corpus)) print(corpus[1]) # Expected Output: # 1600000 # 1600000 # ["is upset that he can't update his Facebook by texting it... and might cry as a result School today also. Blah!", 0] sentences=[] labels=[] random.shuffle(corpus) for x in range(training_size): sentences.append(corpus[x][0])# YOUR CODE HERE) labels.append(corpus[x][1])# YOUR CODE HERE) tokenizer = Tokenizer() tokenizer.fit_on_texts(sentences)# YOUR CODE HERE) word_index = tokenizer.word_index vocab_size=len(word_index)# YOUR CODE HERE) sequences = tokenizer.texts_to_sequences(sentences)# YOUR CODE HERE) padded = pad_sequences(sequences, maxlen=max_length, padding=padding_type, truncating=trunc_type)# YOUR CODE HERE) split = int(test_portion * training_size) test_sequences = padded[0:split]# YOUR CODE HERE) training_sequences = padded[split:training_size]# YOUR CODE HERE) test_labels = labels[0:split]# YOUR CODE HERE) training_labels = labels[split:training_size]# YOUR CODE HERE) print(vocab_size) print(word_index['i']) # Expected Output # 138856 # 1 # Note this is the 100 dimension version of GloVe from Stanford # glove.6B.100d.txt !gdown --id 1W5vZy2etitAblLdFn8_DxnsQKzfFJ98g embeddings_index = {}; with open('./glove.6B.100d.txt') as f: for line in f: values = line.split(); word = values[0]; coefs = np.asarray(values[1:], dtype='float32'); embeddings_index[word] = coefs; embeddings_matrix = np.zeros((vocab_size+1, embedding_dim)); for word, i in word_index.items(): embedding_vector = embeddings_index.get(word); if embedding_vector is not None: embeddings_matrix[i] = embedding_vector; print(len(embeddings_matrix)) # Expected Output # 138857 model = tf.keras.Sequential([ # YOUR CODE HERE tf.keras.layers.Embedding(vocab_size+1, embedding_dim, input_length=max_length, weights=[embeddings_matrix], trainable=False), tf.keras.layers.Dropout(0.2), tf.keras.layers.Conv1D(64, 5, activation='relu'), tf.keras.layers.MaxPooling1D(pool_size=4), tf.keras.layers.LSTM(64), tf.keras.layers.Dense(1, activation='sigmoid') ]) model.compile(loss='binary_crossentropy',optimizer='adam',metrics=['accuracy'])# YOUR CODE HERE) model.summary() num_epochs = 50 training_padded = np.array(training_sequences) training_labels = np.array(training_labels) testing_padded = np.array(test_sequences) testing_labels = np.array(test_labels) history = model.fit(training_padded, training_labels, epochs=num_epochs, validation_data=(testing_padded, testing_labels), verbose=2) print("Training Complete") import matplotlib.image as mpimg import matplotlib.pyplot as plt #----------------------------------------------------------- # Retrieve a list of list results on training and test data # sets for each training epoch #----------------------------------------------------------- acc=history.history['accuracy'] val_acc=history.history['val_accuracy'] loss=history.history['loss'] val_loss=history.history['val_loss'] epochs=range(len(acc)) # Get number of epochs #------------------------------------------------ # Plot training and validation accuracy per epoch #------------------------------------------------ plt.plot(epochs, acc, 'r') plt.plot(epochs, val_acc, 'b') plt.title('Training and validation accuracy') plt.xlabel("Epochs") plt.ylabel("Accuracy") plt.legend(["Accuracy", "Validation Accuracy"]) plt.figure() #------------------------------------------------ # Plot training and validation loss per epoch #------------------------------------------------ plt.plot(epochs, loss, 'r') plt.plot(epochs, val_loss, 'b') plt.title('Training and validation loss') plt.xlabel("Epochs") plt.ylabel("Loss") plt.legend(["Loss", "Validation Loss"]) plt.figure() # Expected Output # A chart where the validation loss does not increase sharply! ```
github_jupyter
# Module 10 - Regression Algorithms - Linear Regression Welcome to Machine Learning (ML) in Python! We're going to use a dataset about vehicles and their respective miles per gallon (mpg) to explore the relationships between variables. The first thing to be familiar with is the data preprocessing workflow. Data needs to be prepared in order for us to successfully use it in ML. This is where a lot of the actual work is going to take place! I'm going to use this dataset for each of the regression algorithms, so we can see how each one differs. The next notebooks with the dataset will be: - Linear Regression w/ Transformed Target (Logarithmic) - Ridge Regression with Standardized Inputs - Ridge and LASSO Regression with Polynomial Features These four notebooks are designed to be a part of a series, with this one being the first. We're going to start by importing our usual packages and then some IPython settings to get more output: ``` import numpy as np import pandas as pd import matplotlib.pyplot as plt import seaborn as sns from IPython.core.interactiveshell import InteractiveShell InteractiveShell.ast_node_interactivity = "all" ``` ## Part A: Data Exploration The first thing to do is import and explore our mpg dataset! There's a few things to note in the dataset description: na values are denoted by `?` and column names are in a separate doc. I added the column names so we don't have to worry about them: ``` loc = "https://raw.githubusercontent.com/mhall-simon/python/main/data/car-mpg/auto-mpg.data" df = pd.read_csv(loc, sep="\s+", header=None, na_values="?") cols = {0:"mpg", 1:"cylinders", 2:"displacement", 3:"horsepower", 4:"weight", 5:"accel", 6:"year", 7:"origin", 8:"model"} df = df.rename(columns=cols) df.head(15) ``` When starting, it's always good to have a look at how complete our data set is. Let's see just how many na values were brought into the dataset per column: ``` df.isna().sum() ``` We have 6 missing values for horsepower! A safe assumption for imputing missing values is to insert the column mean, let's do that! (Feature engineering is somewhere that we can go into this more in depth.) *Note:* Imputing values is something that's not always objective, as it introduces some biases. We could also drop those 6 rows out of our dataset, however, I think imputing average hp isn't too serious of an issue. ``` df = df.replace(np.nan, df.horsepower.mean()) df.isna().sum() ``` Now, there's no more missing values! Let's get some descriptive statistics running for our numerical columns (non-numerical are automatically dropped): ``` df.describe() ``` Another thing we can look at is the number of unique car models in the dataset: ``` df.nunique(axis=0) ``` For the ML analysis, there's too many models to worry about, so we're going to have them drop off the dataset! We're trying to predict mpg, and with our data the model name will have practically no predictive power! One Hot Encoding the makes/models would make the dataset have almost more columns than rows! ``` df = df.drop("model", axis=1) df.head() ``` ### Train-Test Split We're getting closer to starting our analysis! The first major consideration is the train/test split, where we reserve a chunk of our dataset to validate the model. Remember, no peeking into the results with testing to train our model! That'll introduce a bias! Let's separate our data into X and y, and then run the split: ``` X = df.iloc[:,1:] y = df.iloc[:,0] from sklearn.model_selection import train_test_split X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=97) ``` Another important thing to look at is the distributions of continuous variables and their pairwise relationships. Seaborn has a really cool pairplot function that allows us to easily visualize this automatically! We just need to pass in columns of continuous variables. Note: This is a marginal dependence, and does not keep all other variables fixed! We should only analyze this after our split! ``` train_dataset = X_train.copy() train_dataset.insert(0, "mpg", y_train) sns.pairplot(train_dataset[['mpg','displacement','horsepower','weight','accel']], kind='reg', diag_kind='kde') ``` When looking at this, there's two things to takeaway: 1. `mpg` is close to being normal, but there's a long tail. This means we may be better taking the log of mpg when running our analysis - something to explore in the next notebook. 2. Some relationships are not quite linear! We will work on this more in the following notebooks! Let's now get into the ML aspects! ## Part B: Data Preprocessing & Pipeline There's a lot of online tutorials that show the SKLearn models and how to call them in one line, and not much else. A really powerful tool is to leverage the pipelines, as you can adjsut easily on the fly and not rewrite too much code! Pipelines also reduce the potential for errors, as we only define preprocessing steps, and don't actually need to manipulate our tables. When we transform the target with a log later, we also don't need to worry about switching between log and normal values! It'll be handled for us. It's also not as bad as it seems! The first main step is to separate our data into: - categorical columns that need to be one-hot encoded - continuous columns (no changes - for now) - other processing subsets (none in these examples, but binary columns would be handled a bit differently.) - label encoding the response (y) variable when we get into classification models Let's get right to it! We can split apart the explanatory column names into the two categories with basic lists: ``` categorical_columns = ['cylinders','origin','year'] numerical_columns = ['displacement','horsepower','weight','accel'] ``` *Discussion:* Why is Year Categorical, even though it's a numerical year? In Linear Regression, the year 70 (1970) would appear to be a factor of 80 (1980) by about 9/10ths, and it would be scaled that way. This would not make sense, as we expect only marginal increases in mpg year-over-year. To prevent a relationship like this, we're going to one-hot encode the years into categories. Now, let's put together our preprocessing pipeline. We'll need to: 1. OneHot Encode Categorical 2. Leave Continuous Alone Let's build our preprocessor: ``` from sklearn.preprocessing import OneHotEncoder from sklearn.compose import make_column_transformer preprocessor = make_column_transformer((OneHotEncoder(drop="first"), categorical_columns), remainder="passthrough") ``` Why are we dropping the first category in each categorical column? Our regression can imply the first one with zeros for all the encoded variables, and by not including it we are preventing colinearity from being introduced! A potential issue that can arise is when you encounter new labels in the test/validation sets that are not one-hot encoded. Right now, this would toss an error if it happens! Later notebooks will go into how to handle these errors. Now, let's build the pipeline: ``` from sklearn.pipeline import make_pipeline from sklearn.linear_model import LinearRegression model = make_pipeline(preprocessor, LinearRegression()) ``` And now we can easily train our model and preprocess our data all in one step: ``` model.fit(X_train, y_train) ``` Before we start evaluating the model, I'll show you some useful features with the pipeline: 1. View Named Steps ``` model.named_steps ``` 2. View Coefficients and Intercept (Expanded Later) ``` model.named_steps['linearregression'].coef_ model.named_steps['linearregression'].intercept_ ``` 3. Generate Predictions *Viewing First 10* ``` model.predict(X_train)[:10] ``` ## Part C: Evaluating Machine Learning Model So, now we have an ML model, but how do we know if it's good? Also, what's our criteria for good? This changes depending upon what you're doing! Let's bring in some metrics, and look at our "in sample" performance. This is the performance valuation in sample, without looking at any test data yet! - $r^2$: coefficient of determination - mean absolute error - mean squared error Let's generate our in-sample predictions based upon the model: ``` y_pred_in = model.predict(X_train) ``` And now let's generate some metrics: This compares the training (truth) values, to the ones predicted by the line of best fit. ``` from sklearn.metrics import r2_score, mean_squared_error, mean_absolute_error r2_score(y_train, y_pred_in) mean_squared_error(y_train, y_pred_in) mean_absolute_error(y_train, y_pred_in) ``` We're explaining about 87.5% of the variation in our in-sample dataset! That's pretty good, but will it hold when analyzing out of sample? Also, we now know that our average absolute error is 2.09 mpg! That's not too bad, considering the range of the dataset and STD from the data: ``` y_train.std() y_train.max() - y_train.min() ``` Let's now visualize our predictions! As a note, we want all of our datapoints to be along the line! *Tip:* If you're reproducing this graph, ensure that the diagonal goes through the origin of the plot. The red line is setup to draw from corner to corner, and if you move your axes this may not work out! ``` fig, ax = plt.subplots(figsize=(5,5)) plt.scatter(y_train, y_pred_in) ax.plot([0,1],[0,1], transform=ax.transAxes, ls="--", c="red") plt.xlim([0,50]) plt.ylim([0,50]) plt.ylabel("Model Predictions") plt.xlabel("Truth Values") plt.title("In Sample Performance") plt.show(); ``` Our predictions are pretty good! A few things to note: - It's a really good fit, but it appears that there's a slight curve to this dataset. - This is still in sample (we trained the model on this data) - If we're making predictions, what regions are we confident in? I think average mpg we'll be accurate, however, at the edges we're missing some of the trend. Let's plot our residual error to see the shape: ``` plt.scatter(y_train, y_train-y_pred_in) plt.xlabel("Truth Values - In Sample") plt.ylabel("Residual Error") plt.xlim([5,50]) plt.plot([5,50],[0,0], color='black', alpha=0.6) plt.show(); ``` Our errors definitely have curvature in them! We'll improve upon this in the next module! For now... Let's start looking at the coefficients in our model while it's simple. We can grab coefficients out of the preprocessor to ensure that the coefficients line up with labels. It'll always be in order of the preprocessor, so we can first fetch the feature names from the one hot encoded, and then just concatenate our numerical columns as there were no changes! ``` feature_names = (model.named_steps['columntransformer'] .named_transformers_['onehotencoder'] .get_feature_names(input_features=categorical_columns)) feature_names = np.concatenate([feature_names, numerical_columns]) coefs = pd.DataFrame( model.named_steps['linearregression'].coef_, columns=['Coefficients'], index=feature_names ) coefs ``` Let's plot the coefficients to see if there's anything we can learn out of it! ``` coefs.Coefficients.plot(kind='barh', figsize=(9,7)) plt.title("Unscaled Linear Regression Coefficients") plt.show(); ``` Woah, it looks like weight in unimportant at first glance, even though it would probably impact mpg quite a bit! A word of caution! We just can't compare the coefficients, as they're in a different scale! If we scale them with their standard deviation, then we can compare them. However, some meaning is lost! Currently, the coefficient `-0.034440` for `horsepower` means that while holding all else equal, increasing the horsepower by 1 unit decreases mpg by about 0.034 mpg! So, if we add 100 hp to the car, mileage decreases by about 3.4 mpg if we hold all else equal! Let's scale these coefficients to compare them better! Just keep in mind that the 1hp:-0.34mpg relationship will no longer be interpretable from the scaled coefficients. But, we will be able to compare between coefficients. Using the model pipeline, we can easily transform our data using the built in transformer, and then take the std: `model.named_steps['columntransformer'].transform(DATASET)` is how we can use the transformer we built above. When training the model, this dataset transformation happened all behind the scenes!! However, we can reproduce it with our training sample to work with it manually: **NOTE:** The pipeline transformation is better than manual, because we know for certain the order of the columns that are being outputted. We fetched them above! The preprocessor in this instance returned a SciPy sparse matrix, which we can import with a new DataFrame constructor: ``` X_train_preprocessed = pd.DataFrame.sparse.from_spmatrix( model.named_steps['columntransformer'].transform(X_train), columns=feature_names ) X_train_preprocessed.head(10) ``` By plotting the standard deviations, we can see for certain that the coeffs are definitely in a different scale! Weight varies in the thousands, while acceleration is usually around 10-20 seconds!! ``` X_train_preprocessed.std(axis=0).plot(kind='barh', figsize=(9,7)) plt.title("Features Std Dev") plt.show(); ``` As you can probably see, the standard deviation of weight is far higher than any other variable! This makes it impossible to compare. Now, let's scale everything. This scale works because very large continuous variables have a large standard deviation, but very small coefficients, which brings them down. The opposite is true for very small continuous variables for standard deviations, their coefficient is usually much larger. By multiplying the two together, we're bringing everything in towrads the middle, and with the same units of measurement. ``` coefs['coefScaled'] = coefs.Coefficients * X_train_preprocessed.std(axis=0) coefs ``` Now, let's plot the scaled coefs: ``` coefs.coefScaled.plot(kind="barh", figsize=(9,7)) plt.title("Scaled Linear Coefficients") plt.show(); ``` Earlier, weight had almost no impact on the model at first glance! Now, we can see that it's the most important explanatory variable for mpg. Let's now do our final validations for the model by bringing in the test data!! The first is going to be done using the test (reserved) dataset, which we can make predictions with easily: ``` y_pred_out = model.predict(X_test) ``` And now let's generate a small DataFrame to compare metrics from in sample and out of sample! Out of sample performance is usually worse, it's usually a question of how much! ``` metrics = pd.DataFrame(index=['r2','mse','mae'],columns=['in','out']) metrics['in'] = (r2_score(y_train, y_pred_in), mean_squared_error(y_train, y_pred_in), mean_absolute_error(y_train, y_pred_in)) metrics['out'] = (r2_score(y_test, y_pred_out), mean_squared_error(y_test, y_pred_out), mean_absolute_error(y_test, y_pred_out)) metrics ``` When looking at the data, we see that the $r^2$ value decreased slightly from 0.875 to 0.854! This is still fairly significant! And let's do a similar graph for out of sample performance: ``` fig, ax = plt.subplots(figsize=(5,5)) plt.scatter(y_test, y_pred_out) ax.plot([0,1],[0,1], transform=ax.transAxes, ls="--", c="red") plt.xlim([0,50]) plt.ylim([0,50]) plt.ylabel("Model Predictions") plt.xlabel("Truth Values") plt.title("Out of Sample Performance") plt.show(); ``` We're doing pretty good! There's stil some curvature that we'll work on fixing in the next notebooks. Let's plot our residuals one more time: ``` plt.scatter(y_test, y_test-y_pred_out) plt.xlabel("Truth Values - Out of Sample") plt.ylabel("Residual Error") plt.xlim([5,50]) plt.plot([5,50],[0,0], color='black', alpha=0.6) plt.show(); ``` Our model is pretty good, except for when we go above 32-ish mpg. Our model is predicting values far too high. We'll solve this in a later notebook. Another key question for ML is... How do we know if the performance is due to just our sample selected? How much would our model change depending upon the sample selected? We can solve for this using cross validation! Cross validation takes different samples from our dataset, runs the regression, and then outputs the results! We can easily cut the dataset into chunks and see how it behaves. We're going to plot the distributions of coefficients throughout the folds to see how stable the model is: ``` from sklearn.model_selection import cross_validate from sklearn.model_selection import RepeatedKFold # Part 1: Defining Cross Validation Model cv_model = cross_validate( model, X, y, cv=RepeatedKFold(n_splits=5, n_repeats=5), return_estimator=True, n_jobs=-1 ) # Part 2: Analyzing Each Model's Coefficients, and Setting Them In DataFrame: cv_coefs = pd.DataFrame( [est.named_steps['linearregression'].coef_ * X_train_preprocessed.std(axis=0) for est in cv_model['estimator']], columns=feature_names ) # Part 3: Plotting the Distribution of Coefficients plt.figure(figsize=(9,7)) sns.stripplot(data=cv_coefs, orient='h', color='k', alpha=0.5) sns.boxplot(data=cv_coefs, orient='h', color='cyan', saturation=0.5) plt.axvline(x=0, color='.5') plt.xlabel('Coefficient importance') plt.title('Coefficient importance and its variability') plt.subplots_adjust(left=.3) plt.show(); ``` What are the takeaways from this plot? Our model doesn't appear to be too sensitive to the splits in training and testing! This is a signal that our model is robust, and we should have confidence that our findings weren't due to choosing a "good" sample! If we saw a variable changing from -6 to +2, that would be a sign it is not stable! Now, we're ready to start exploring the second notebook! Which starts working towards a fix in the curvature! ## Bonus Box: Easily Checking for Variable Colinearity If we suspect two variables are colinear, we can easily check for it with the following code: ``` plt.scatter(cv_coefs['weight'], cv_coefs['displacement']) plt.ylabel('Displacement coefficient') plt.xlabel('Weight coefficient') plt.grid(True) plt.title('Co-variations of variables across folds'); ``` These are not colinear across folds, which is good for the model! If they *were* colinear across folds, it would look something like this: <div> <img src=https://github.com/mhall-simon/python/blob/main/data/screenshots/Screen%20Shot%202021-03-22%20at%206.38.12%20PM.png?raw=True width="400"/> </div> If you notice strong colinearlity, then one should be removed and you can run the model again!
github_jupyter
# Losing your loops ## Python is slow! * dynamically typed -- Python interpreter needs to compare and convert(if needed) in runtime everytime a variable is written, modified or referenced * interpreted -- Vanilla Python comes with no compiler optimization * Uses buffers inefficiently because Python lists aren't homogenous, thus making it super slow compared to languages like C, C++ or Julia. More info [here]("http://jakevdp.github.io/blog/2014/05/09/why-python-is-slow/"). ### Timing a silly function in Python ``` def silly(N): d = 0.0 for i in range(N): d += (i % 3 -1) * i %timeit silly(10000) #1000 loops, best of 5: 1.43 ms per loop ``` ### Timing the same silly function in C ``` %%writefile checktime.c #include <time.h> #include <stdio.h> void silly(int N){ int d = 0.0; for(int i=0; i <= N; i++){ d = d + (i % 3 -1) * i; } } long double mean(long double arr[1000]){ int i; long double sum = 0.0; long double average = 0.0; for(i = 0; i < 1000; i++){ sum = sum + arr[i]; } average = sum/1000; return average; } int main(){ long double time_elapsed = 0.0; long double mean_time = 0.0; long double min_time = 99.0; for(int j=0; j < 5; j++){ long double timearr[1000]; for(int i=0; i < 1000; i++){ clock_t tic = clock(); silly(10000); clock_t toc = clock(); time_elapsed = (long double)(toc - tic) / CLOCKS_PER_SEC; timearr[i] = time_elapsed; } mean_time = mean(timearr); if(mean_time < min_time){ min_time = mean_time; } } printf("1000 loops, best of 5: %Lf s per loop\n", min_time); return 0; } %%shell gcc checktime.c -o output ./output #1000 loops, best of 5: 0.000028 s per loop ``` As you can see, the same code timed in C is ~100x faster than in vanilla Python <mark>"What makes Python fast (for development), is what makes it slow (in code execution)"</mark> -- Jake Vanderplas ## So, what's the remedy? Numpy! or is it? Let's check how Numpy compares with vanilla Python w.r.t. basic scalar Math operations ``` import math import numpy as np %timeit math.log(10) #10000000 loops, best of 5: 165 ns per loop %timeit np.log(10) #1000000 loops, best of 5: 1.21 µs per loop %timeit math.exp(3) #10000000 loops, best of 5: 131 ns per loop %timeit np.exp(3) #1000000 loops, best of 5: 1.19 µs per loop # Sampling from a normal distribution import random %timeit random.gauss(0, 1) #1000000 loops, best of 5: 776 ns per loop %timeit np.random.normal() #100000 loops, best of 5: 2.98 µs per loop ``` Matrix multiplication in vanilla Python <p align="center"> <img src="https://www.mscroggs.co.uk/img/full/multiply_matrices.gif" width=500 height=200></img> </p> ``` def matmul_1(mat1, mat2): mat1_rows, mat1_cols = len(mat1), len(mat1[0]) mat2_rows, mat2_cols = len(mat2), len(mat2[0]) # assert mat1_cols == mat2_rows, "Check matrix dimensions" answer = [[0]*mat2_cols] * mat1_rows for i in range(mat1_rows): for j in range(mat2_cols): agg = 0 for k in range(mat2_rows): agg += (mat1[i][k]*mat2[k][j]) answer[i][j] = agg return answer # matmul_1([[1,1],[1,1]], [[2,2],[2,2]]) ``` <p align="center"> <img src="https://boydjohnson.dev/blog/concurrency-matrix-multiplication/matrix-multiplication-good.gif" width=400 height=300></img> </p> ``` %%timeit -n 10 matmul_1([[1]*50]*50, [[2]*50]*50) #10 loops, best of 5: 21.1 ms per loop ``` #### Exercise: Matrix multiplication in Numpy loops Write the same code as above, using Numpy arrays ``` def matmul_2(mat1, mat2): ############################################################################ ### TODO: Complete this function to perform matmul on two ndarrays ### ############################################################################ mat1_rows, mat1_cols = mat1.shape mat2_rows, mat2_cols = mat2.shape # assert mat1_cols == mat2_rows, "Check matrix dimensions" answer = np.zeros((mat1_rows, mat2_cols)) for i in np.arange(mat1_rows): for j in np.arange(mat2_cols): agg = 0 for k in np.arange(mat2_rows): agg += (mat1[i,k]*mat2[k,j]) answer[i,j] = agg return answer # matmul_2(np.array([[1,1],[1,1]]), np.array([[2,2],[2,2]])) %%timeit -n 10 matmul_2(np.full((50,50), 1), np.full((50,50), 2)) # 10 loops, best of 5: 152 ms per loop ``` Numpy is again slower than vanilla Python. _So, why are we discussing this? Numpy seems to be slower than vanilla Python right?_ Time to unleash Numpy's inner strength!! ## Vectorization, a.k.a. Array Programming Definitions: * This practice of replacing explicit loops with array expressions is commonly referred to as vectorization. In general, vectorized array operations will often be one or two (or more) orders of magnitude faster than their pure Python equivalents, with the biggest impact in any kind of numerical computations. [[Source](https://www.oreilly.com/library/view/python-for-data/9781449323592/ch04.html)] * Generalizing operations we do on scalars (ie., single numbers) to apply transparently to vectors, matrices, and higher-dimensional arrays, which may be executed on a vector processor parallelly (either on a SIMD enabled CPU, or GPU). The matrix multiplication example we looked at is one of the vector-equivalents to scalar multiplication. <p align="center"> <img src="https://media.springernature.com/lw685/springer-static/image/art%3A10.1038%2Fs41586-020-2649-2/MediaObjects/41586_2020_2649_Fig2_HTML.png?as=webp" width=500 height=500></img> </p> ``` %%timeit -n 10 np.matmul(np.ones((50,50)), np.ones((50,50))*2) #10 loops, best of 5: 42.1 µs per loop ``` In this case it is evident that, vectorized Numpy is * ~500x faster than vanilla Python * ~3600x faster than loopy Numpy So, we should somehow re-formulate the task-at-hand to a vectorized operation. This allows us to use Numpy's inbuilt vectorized functions. Fortunately, Numpy provides us many tricks to help us do this ## Strategies to speed up vanilla Python using Numpy Along with using these direct vectorized Numpy operations, we also have other tricks to speed-up things. ### **1. Universal functions** ([ufunc](https://numpy.org/doc/stable/reference/ufuncs.html)) A ufunc operates on ndarrays in an element-by-element fashion. The idea is to push the loop into the compiled layer that underlies NumPy, thus avoiding the slow loops in Python. Just perform an operation on an ndarray like you would on a scalar value. Numpy would do it for every element in the array using its optimized C/Fortran routines beneath (check list of available ufuncs [here](https://numpy.org/doc/stable/reference/ufuncs.html#available-ufuncs)). #### Exercise: Computing reciprocals ``` def compute_reciprocals(values): ############################################################################ ### TODO: Compute element wise reciprocals ### ############################################################################ output = np.empty(len(values)) for i in range(len(values)): output[i] = 1.0 / values[i] return output # values = np.random.randint(1, 10, size=5) # compute_reciprocals(values) ``` Now, let's compute element-wise reciprocal using Numpy ``` ################################################################################ ##### TODO: Compute element wise reciprocals without loops ##### ################################################################################ (1.0 / big_array) ``` Let's time the two functions now ``` big_array = np.random.randint(1, 100, size=1000000) %timeit compute_reciprocals(big_array) %timeit (1.0 / big_array) np.allclose((1.0/big_array), compute_reciprocals(big_array)) ``` #### Arithmetic with arrays Standard arithmetic operators are overloaded in Numpy to enable vectorization ``` x = np.arange(10) print("x =", x) print("x + 5 =", x + 5) print("x - 5 =", x - 5) print("x * 2 =", x * 2) print("x / 2 =", x / 2) print("x // 2 =", x // 2) # floor division print("-x = ", -x) # negation print("x ** 2 = ", x ** 2) # ?? print("x % 2 = ", x % 2) # ?? # Chaining ufuncs -(0.5*x + 1) ** 0.5 ``` ``` theta = np.linspace(0, np.pi, 5) print("theta = ", theta) print("sin(theta) = ", np.around(np.sin(theta), decimals=3)) print("cos(theta) = ", np.around(np.cos(theta), decimals=3)) print("tan(theta) = ", np.tan(theta)) ``` #### Exercise: Count transitions Given an ndarray consisting of booleans, find the count of `False` to `True` transitions ``` np.random.seed(42) bool_arr = np.random.choice([False, True], size=100000) bool_arr ``` Let's do it in vanilla Python ``` def count_transitions(arr): ############################################################################ ### TODO: count False to True transitions ### ############################################################################ count = 0 for i, j in zip(arr[:-1], arr[1:]): if j and not i: count += 1 return count # count_transitions(bool_arr) ``` Now, try doing the same in Numpy using vectorization ``` ################################################################################ ### TODO: count False to True transitions using vectorization ### ################################################################################ (np.logical_and((~bool_arr[:-1]), (bool_arr[1:]) )).sum() # Alternate solution (bool_arr[:-1] < bool_arr[1:]).sum() %%timeit count_transitions(bool_arr) # 100 loops, best of 5: 9.02 ms per loop %%timeit (bool_arr[:-1] < bool_arr[1:]).sum() # 1000 loops, best of 5: 230 µs per loop ``` ### **2. Aggregations** Computing summary statistics over a data, like central tendencies, deviations, min, max, quantiles etc. can be vectorized using Numpy Let's compare Python's aggregation functions with those of Numpy ``` big_array = np.random.rand(1000000) %timeit sum(big_array) %timeit np.sum(big_array) %timeit min(big_array) %timeit np.min(big_array) ``` For min, max, sum, and several other NumPy aggregates, a shorter syntax is to use methods of the array object itself ``` print(big_array.min(), big_array.max(), big_array.sum()) ``` Multi-dimensional aggregates ``` M = np.random.random((3, 4)) M ``` By default, each NumPy aggregation function will return the aggregate over the entire array ``` np.sum(M) ``` Aggregate across columns ``` M.min(axis=0) ``` Aggregate across rows ``` M.max(axis=1) ``` The following table provides a list of useful aggregation functions available in Numpy |Function Name | NaN-safe Version | Description | |-------------------|---------------------|-----------------------------------------------| | ``np.sum`` | ``np.nansum`` | Compute sum of elements | | ``np.prod`` | ``np.nanprod`` | Compute product of elements | | ``np.mean`` | ``np.nanmean`` | Compute mean of elements | | ``np.std`` | ``np.nanstd`` | Compute standard deviation | | ``np.var`` | ``np.nanvar`` | Compute variance | | ``np.min`` | ``np.nanmin`` | Find minimum value | | ``np.max`` | ``np.nanmax`` | Find maximum value | | ``np.argmin`` | ``np.nanargmin`` | Find index of minimum value | | ``np.argmax`` | ``np.nanargmax`` | Find index of maximum value | | ``np.median`` | ``np.nanmedian`` | Compute median of elements | | ``np.percentile`` | ``np.nanpercentile``| Compute rank-based statistics of elements | | ``np.any`` | N/A | Evaluate whether any elements are true | | ``np.all`` | N/A | Evaluate whether all elements are true | ### Exercise: Mean centering Subtract the mean of the list `rand_list` from every element in the same ``` import random rand_list = [random.randint(10,20) for i in range(10000)] def mean_center(data): ############################################################################ ##### TODO: Complete this function in vanilla Python to perform ##### ##### mean centering ##### ############################################################################ data = data.copy() sum = 0.0 for i in range(len(data)): sum += data[i] mean = sum/len(data) for i in range(len(data)): data[i] -= mean return data ``` Now let's do it in Numpy using ufuncs ``` def mean_center_with_numpy(data): ############################################################################ ##### TODO: Now do the same without using any for-loops ##### ############################################################################ rand_arr = np.array(data) # Convert to numpy array mean = np.mean(rand_arr) return rand_arr - mean np.allclose(mean_center(rand_list), mean_center_with_numpy(rand_list)) %%timeit mean_center(rand_list) # 100 loops, best of 5: 2.2 ms per loop %%timeit mean_center_with_numpy(rand_list) # 1000 loops, best of 5: 742 µs per loop ``` ### Exercise: Max profit over stock price data You are given the stock closing price history as a sequence. Assume that you can make one purchase and one sale. What is the max profit that can be obtained? ``` # Generating the stock data np.random.seed(42) prices = np.full(200, fill_value=np.nan) prices[[10, 25, 60, -5, 90 ,120, 150, 190]] = [120., 30., 75., 45., 60., 90., 90., 95.] # array indexing x = np.arange(len(prices)) is_valid = ~np.isnan(prices) prices = np.interp(x=x, xp=x[is_valid], fp=prices[is_valid]) prices += np.random.randn(len(prices)) * 2 # Gaussian noise fig, ax = plt.subplots() ax.plot(prices) ax.set_title('Stock Price History') ax.set_xlabel('Time') ax.set_ylabel('Price') def profit(prices): ############################################################################ ##### TODO: Compute the max profit. Have two accumulators. ##### ##### one to keep track of minima, one to record max profit ##### ############################################################################ max_px = 0 min_px = prices[0] for px in prices[1:]: min_px = min(min_px, px) max_px = max(px - min_px, max_px) return max_px def profit_with_numpy(prices): ############################################################################ ##### TODO: Compute the max profit in Numpy without any for-loops ##### ##### check out <ufunc>.accumulate ##### ############################################################################ prices = np.asarray(prices) accumulated_mins = np.minimum.accumulate(prices) # 1 pass through the data return np.max(prices - accumulated_mins) # 2 passes through the data print(profit(prices), profit_with_numpy(prices)) np.allclose(profit_with_numpy(prices), profit(prices)) %%timeit profit(prices) # 10000 loops, best of 5: 59.8 µs per loop %%timeit profit_with_numpy(prices) # 100000 loops, best of 5: 8.34 µs per loop ``` ### **3. Broadcasting** Numpy provides a set of rules which allows us to use ufuncs on arrays of different sizes and/or dimensions. Pseudocode of broadcasting: ``` if the arrays have different number of dims: left-pad the smaller shape array with 1s to match the number of dims if any particular dim doesn't match: if one of the those dims is a 1: broadcast this dim else: throw error ``` <p align="center"> <img src="https://github.com/jakevdp/PythonDataScienceHandbook/raw/8a34a4f653bdbdc01415a94dc20d4e9b97438965/notebooks/figures/02.05-broadcasting.png" width=500 height=350></img> </p> Let's understand these 3 cases Case 1 ``` arr1 = np.arange(3) arr2 = np.array(5) # scalar value print(arr1.shape, arr2.shape) print(arr1 + arr2) ``` Case 2 ``` arr1 = np.ones((3,3)) arr2 = np.arange(3) print(arr1.shape, arr2.shape) print(arr1 + arr2) ``` Case 3 ``` arr1 = np.arange(3).reshape(3,1) arr2 = np.arange(3) print(arr1.shape, arr2.shape) print(arr1 + arr2) ``` #### Exercise: Verify if broadcast succeeds. and guess the shape of the resulting array ``` arr1 = np.random.rand(3,4,6,2) # random array of shape (3,4,6,2) arr2 = np.random.rand(3,4,1,2) arr1 + arr2 ``` ``` arr1 = np.random.rand(3,6,4,2) # random array of shape (3,6,4,2) arr2 = np.random.rand(1,2) arr1 + arr2 ``` ``` arr1 = np.random.rand(3,6,4,2) # random array of shape (3,6,4,2) arr2 = np.random.rand(1,4,6,2) arr1 + arr2 ``` #### Exercise: Find the point closest to `<x,y>` among a set of points Refresher: _Eucledian distance between two points:_ <p align="center"> <img src="https://upload.wikimedia.org/wikipedia/commons/thumb/5/55/Euclidean_distance_2d.svg/1200px-Euclidean_distance_2d.svg.png" width=400 height=300></img> </p> Hint: check out [`np.argmin`](https://numpy.org/doc/stable/reference/generated/numpy.argmin.html) Generating our 2D points data, [`np.random.random`](https://docs.scipy.org/doc/numpy-1.15.0/reference/generated/numpy.random.random.html) returns a number from the uniform distribution of interval `[0.0, 1.0)`. If we want our samples to be uniformly distributed in interval `[a, b)` instead, we can do `(a - b) * np.random.random(...) * a` ``` import numpy as np np.random.seed(42) twod_points = (500) * np.random.random((10000, 2)) + 0 test_point = np.array([3,2]) def find_closest(data, point): ############################################################################ ##### TODO: Find the closest point in vanilla Python ##### ############################################################################ min_dist = np.inf min_dist_index = 0 for i in np.arange(data.shape[0]): dist = np.sqrt( ((data[i][0] - point[0])**2) + ((data[i][1] - point[1])**2) ) if dist < min_dist: min_dist = dist min_dist_index = i return data[min_dist_index] find_closest(twod_points, test_point) def find_closest_vect(data, point): ############################################################################ ##### TODO: Find the closest point without for-loops ##### ############################################################################ index = np.argmin(np.sqrt(np.sum((data - test_point) ** 2, axis=1 ))) return data[index] find_closest_vect(twod_points, test_point) %%timeit find_closest(twod_points, test_point) # 10 loops, best of 5: 45.3 ms per loop %%timeit find_closest_vect(twod_points, test_point) # 1000 loops, best of 5: 297 µs per loop ``` Let's visualize the `find_closest_vect` function ``` import matplotlib.pyplot as plt np.random.seed(42) twod_points_2 = (500) * np.random.random((100, 2)) + 0 test_point_2 = np.array([300,100]) nn = find_closest_vect(twod_points_2, test_point_2) nn fig, (ax1) = plt.subplots(1,1, figsize=(8,8), dpi=100) ax1.scatter(twod_points_2[:,0], twod_points_2[:,1], s=3, c="grey"t) ax1.scatter(*test_point, c="red", s=6, label="test point") draw_circle = plt.Circle(nn, 6, fill=False) ax1.add_artist(draw_circle) ax1.legend() ``` #### Exercise: kNN Now that we have seen how to compute the point closest to a test-point `<x, y>` among a set of training points, we are one step closer to kNN Given a set of 2D co-ordinates, find the distance of each point with every other point. This step is crucial in finding out the 'k' in kNN. Let's consider **'k'= 3**, for this try ``` ################################################################################ ### TODO: Find the pairwise distance. It's just one line using broadcasting #### ### You should have a (10000, 10000, 3) matrix after this #### ################################################################################ diff = twod_points.reshape(10000, 1, 2) - twod_points ################################################################################ ### TODO: Using pairwise distance matrix we computed, let's compute #### ### Euledian distance between the points. Again, just a line of code #### ### The output of this step should give a (10000,10000) shape array #### ### indicating the Eucledian distance of one point with every other #### ################################################################################ pairwise_diff = np.sqrt((diff ** 2).sum(axis=2)) ################################################################################ ### TODO: Now sort the (10000,10000) array on the desired axis. #### ### Check out `np.argsort` #### ################################################################################ # pairwise_diff[np.arange(10000), np.arange(10000)] = np.inf ans = np.argsort(pairwise_diff, axis = 1) ################################################################################ ### TODO: Choose 'k' columns from the sorted (10000,10000) array to get #### ### the k-nearest neighbors of each point with others in the dataset #### ################################################################################ ans[:10,1:3] ``` Alternate way of broadcasting to get pairwise difference matrix (we did this in class) ``` diff = twod_points.reshape(10000, 2, 1) - np.swapaxes(twod_points, 0, 1).reshape(1, 2, 10000) pairwise_diff = np.sqrt((diff ** 2).sum(axis=1)) ans = np.argsort(pairwise_diff, axis = 1) ans[:10,1:3] ``` Verifying if what we did is indeed correct using sklearn's inbuilt kNN function ``` from sklearn.neighbors import NearestNeighbors d, i = NearestNeighbors().fit(twod_points).kneighbors(twod_points, 4) i[:10, 1:3] ``` #### Exercise: Converting a color image to grayscale Note: A png image has 4 channels: R, G, B and alpha, whereas a grayscale has only one channel. So we need a smart way to combine these 3 channels, which renders the image grayscale. _Multiply R channel by `0.2126`, G channel by `0.7152`, B channel by `0.0722`, and ignore the alpha channel. Now add them up. This is the recipe to get a grayscale image [[source]](https://en.wikipedia.org/wiki/Grayscale)._ Code this using Numpy! ``` # Reading the image from urllib.request import urlopen f = urlopen("https://hsasf.hsa.washington.edu/wp-content/uploads/2018/09/UW-Logo.png") img = plt.imread(f) ################################################################################ ##### TODO: Convert a color png image to grayscale using broadcasting ##### ##### `img` has shape (height, width, 4) ################################################################################ channel_weights = [0.2126, 0.7152, 0.0722, 0] grayscale_image = np.sum(img*channel_weights, axis=2) plt.gca().set_axis_off() plt.margins(0, 0) plt.imshow(grayscale_image, cmap='gray') plt.savefig("output_image_rotate.jpg",bbox_inches='tight', pad_inches=0) plt.show() ``` ### **4. Slicing, masking, fancy indexing** # Exercise: Rotating an image Any vector(point, `<x, y>`) in a 2D co-ordinate space can be rotated by angle $\theta$ by doing this <p align="center"> <img src="https://wikimedia.org/api/rest_v1/media/math/render/svg/76cd56d49699c53e95cee42a40b340e0a167e078" width=400 height=100></img> </p> So, we have to rotate every pixel in our image by $\theta$ to rotate the whole image. Let's look at a way to do this using Python loops ``` # Reading the image from urllib.request import urlopen f = urlopen("https://hsasf.hsa.washington.edu/wp-content/uploads/2018/09/UW-Logo.png") img = plt.imread(f) import numpy as np import matplotlib.pyplot as plt from pdb import set_trace as bp def rotate_image(img, rot_deg=45): rot_rad = rot_deg * np.pi / 180.0 height, width, num_channels = img.shape print("Height, Width, Channels before padding: ", height, width, num_channels) # Pad the input image with white space so that it won't get cropped # when we rotate it diagonal = int(np.sqrt(height * height + width * width)) # Pythagoras theorm img_padded = np.zeros((diagonal, diagonal, num_channels)) center_h = int((diagonal - height) // 2) center_w = int((diagonal - width) // 2) img_padded[center_h:-center_h-1, center_w:-center_w-1, :] = img rotated_image = np.zeros((diagonal, diagonal, num_channels)) height, width, num_channels = img_padded.shape print("Height, Width, Channels after padding: ", height, width, num_channels) rotated_height, rotated_width, _ = rotated_image.shape mid_row = int((rotated_height+1) / 2) mid_col = int((rotated_width+1) / 2) # for each pixel in output image, find which pixel # it corresponds to in the input image for r in range(rotated_height): # iterating over rows for c in range(rotated_width): # iterating over cols # bp() x = -(r-mid_row)*np.sin(rot_rad) + (c-mid_col)*np.cos(rot_rad) y = (r-mid_row)*np.cos(rot_rad) + (c-mid_col)*np.sin(rot_rad) # add offset x += mid_col y += mid_row x = round(x) y = round(y) # print(r, " ", c, " corresponds to-> " , y, " ", x) # boundary check: if x/y corresponds to a valid pixel in input image if (x >= 0 and y >= 0 and x < rotated_height and y < rotated_width): rotated_image[r][c][:] = img_padded[y][x][:] return rotated_image def rotate_image_vect(img, rot_deg=45): rot_rad = rot_deg * np.pi / 180.0 height, width, num_channels = img.shape # print("Height, Width, Channels : ", height, width, num_channels) diagonal = int(np.sqrt(height * height + width * width)) # Pythagoras theorm img_padded = np.zeros((diagonal, diagonal, num_channels)) center_h = int((diagonal - height) // 2) center_w = int((diagonal - width) // 2) img_padded[center_h:-center_h-1, center_w:-center_w-1, :] = img rotated_image = np.zeros((diagonal, diagonal, num_channels)) height, width, num_channels = img_padded.shape rotated_height, rotated_width, _ = rotated_image.shape mid_row = int( (rotated_height+1)/2 ) mid_col = int( (rotated_width+1)/2 ) ############################################################################ ##### TODO: Remove the nested-for-loops using vectorized operations ##### ############################################################################ # CREATE THE ROTATION MATRIX as a (2,2) ndarray rotate_m = np.array([[np.cos(rot_rad), np.sin(rot_rad)], [-np.sin(rot_rad), np.cos(rot_rad)]]) # CREATE A GRID/MATRIX OF INDICES, where each element of this matrix will be # one of the combinations of the nested-for-loop-indices. In other words, # write the index space of the nested-for-loops as a matrix. # HINT: check out `np.meshgrid` grid = np.meshgrid(np.arange(rotated_height), np.arange(rotated_width)) # CONVERT this grid into an ndarray, and reshape it to (2,-1) # Make a copy of the grid ndarray you just created. # Remember that the input and output image have same shape. So we need two # copies of the grid ndarray we created. i_org = np.array(grid).reshape(2, -1) i_new = i_org.copy() # SUBTRACT `mid_row` and `mid_col` from `i_new` i_new[0] = i_new[0] - mid_row i_new[1] = i_new[1] - mid_col # PERFORM ROTATION on `i_new` i_new = (rotate_m @ i_new).astype(int) # @ is short hand for dot-product/matmul # RECENTER (ie. add back) `mid_row` and `mid_col` to the output matrix of prev. step i_new[0] = i_new[0] + mid_row i_new[1] = i_new[1] + mid_col # CREATE the boolean mask to perform the boundary check mask = np.logical_and.reduce(((i_new[0] >= 0), (i_new[1] >= 0), (i_new[0] < rotated_height), (i_new[1] < rotated_width))) # ASSIGN PIXELS FROM INPUT IMAGE TO THE ROTATED IMAGE USING THE mask created # in the previous step. Remember, the mask created gives us the # valid/in-boundary pixel indices/location in the input image rotated_image[i_org[0][mask], i_org[1][mask], :] = img_padded[i_new[0][mask], i_new[1][mask], :] return rotated_image # printing the padded image # plt.gca().set_axis_off() # plt.margins(0, 0) # plt.imshow(img_padded) # plt.show() plt.gca().set_axis_off() plt.margins(0, 0) rotated_image = rotate_image(img) plt.imshow(rotated_image) # plt.savefig("output_image_rotate.jpg",bbox_inches='tight', pad_inches=0) plt.show() plt.gca().set_axis_off() plt.margins(0, 0) rotated_image = rotate_image_vect(img) plt.imshow(rotated_image) # plt.savefig("output_image_rotate.jpg",bbox_inches='tight', pad_inches=0) plt.show() %%timeit rotate_image(img) # 1 loop, best of 5: 51.7 s per loop %%timeit rotate_image_vect(img) # 1 loop, best of 5: 577 ms per loop ``` References: 1. [Jake Vanderplas' book](https://github.com/jakevdp/PythonDataScienceHandbook/tree/master/notebooks) 2. [Array Programming, Wikipedia](https://en.wikipedia.org/wiki/Array_programming) 3. [Nature article on Array programming in Numpy](https://www.nature.com/articles/s41586-020-2649-2) 4. [Numpy Array Programming blog](https://realpython.com/numpy-array-programming/) 3. [Rotating image without cv2, StackOverflow](https://stackoverflow.com/questions/57648391/how-do-i-rotate-an-image-manually-without-using-cv2-getrotationmatrix2d)
github_jupyter
<a href="https://colab.research.google.com/github/isaacmg/task-vt/blob/biobert_finetune/drug_treatment_extraction/notebooks/BioBERT_RE.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> # Finetuning BioBERT for RE This is a fine-tuning notebook that we used to finetune BioBERT for relation classification (on our own data, GAD and Euadr) and then convert the resulting model checkpoint to PyTorch HuggingFace library for model inference. This was done for the vaccine and therapeutics task in order to identify drug treatment relations. ``` !git clone https://github.com/dmis-lab/biobert from google.colab import auth from datetime import datetime auth.authenticate_user() !pip install tensorflow==1.15 import os os.chdir('biobert') ``` ### Downloading data ``` !./download.sh !fileid="1GJpGjQj6aZPV-EfbiQELpBkvlGtoKiyA" !wget --load-cookies /tmp/cookies.txt "https://docs.google.com/uc?export=download&confirm=$(wget --quiet --save-cookies /tmp/cookies.txt --keep-session-cookies --no-check-certificate 'https://docs.google.com/uc?export=download&id=1GJpGjQj6aZPV-EfbiQELpBkvlGtoKiyA' -O- | sed -rn 's/.*confirm=([0-9A-Za-z_]+).*/\1\n/p')&id=1GJpGjQj6aZPV-EfbiQELpBkvlGtoKiyA" -O biobert_w.tar.gz && rm -rf /tmp/cookies.txt !tar -xvf biobert_w.tar.gz %set_env RE_DIR datasets/RE/GAD/1 %set_env TASK_NAME=gad %set_env OUTPUT_DIR=./re_outputs_1 %set_env BIOBERT_DIR=biobert_large !python run_re.py --task_name=$TASK_NAME --do_train=true --do_eval=true --do_predict=true --vocab_file=$BIOBERT_DIR/vocab_cased_pubmed_pmc_30k.txt --bert_config_file=$BIOBERT_DIR/bert_config_bio_58k_large.json --init_checkpoint=$BIOBERT_DIR/bio_bert_large_1000k.ckpt.index --max_seq_length=128 --train_batch_size=32 --learning_rate=2e-5 --num_train_epochs=3.0 --do_lower_case=false --data_dir=$RE_DIR --output_dir=$OUTPUT_DIR #Uncomment this if you want to temporarily stash weights on GCS also collect garbage #!gsutil -m cp -r ./re_outputs_1/model.ckpt-0.data-00000-of-00001 gs://coronaviruspublicdata/new_data . #import gc #gc.collect() ``` ### Converting the model to HuggingFace ``` !pip install transformers import logging import torch logger = logging.getLogger('spam_application') def load_tf_weights_in_bert(model, config, tf_checkpoint_path): """ Load tf checkpoints in a pytorch model. """ try: import re import numpy as np import tensorflow as tf except ImportError: logger.error( "Loading a TensorFlow model in PyTorch, requires TensorFlow to be installed. Please see " "https://www.tensorflow.org/install/ for installation instructions." ) raise tf_path = os.path.abspath(tf_checkpoint_path) logger.info("Converting TensorFlow checkpoint from {}".format(tf_path)) # Load weights from TF model init_vars = tf.train.list_variables(tf_path) excluded = ['BERTAdam','_power','global_step'] init_vars = list(filter(lambda x:all([True if e not in x[0] else False for e in excluded]),init_vars)) names = [] arrays = [] for name, shape in init_vars: logger.info("Loading TF weight {} with shape {}".format(name, shape)) array = tf.train.load_variable(tf_path, name) names.append(name) arrays.append(array) print("A name",names) for name, array in zip(names, arrays): if name in ['output_weights', 'output_bias']: name = 'classifier/' + name name = name.split("/") # if name in ['output_weights', 'output_bias']: # name = 'classifier/' + name # adam_v and adam_m are variables used in AdamWeightDecayOptimizer to calculated m and v # which are not required for using pretrained model if any( n in ["adam_v", "adam_m", "AdamWeightDecayOptimizer", "AdamWeightDecayOptimizer_1", "global_step"] for n in name ): logger.info("Skipping {}".format("/".join(name))) continue pointer = model # if name in ['output_weights' , 'output_bias']: # name = 'classifier/' + name for m_name in name: print("model",m_name) #print(scope_names) if re.fullmatch(r"[A-Za-z]+_\d+", m_name): scope_names = re.split(r"_(\d+)", m_name) else: scope_names = [m_name] if scope_names[0] == "kernel" or scope_names[0] == "gamma": print(scope_names) pointer = getattr(pointer, "weight") elif scope_names[0] == "output_bias" or scope_names[0] == "beta": # elif scope_names[0] == "beta": # print(scope_names) pointer = getattr(pointer, "bias") # elif scope_names[0] == "output_bias": # print(scope_names) # pointer = getattr(pointer, "cls") elif scope_names[0] == "output_weights": print(scope_names) pointer = getattr(pointer, "weight") elif scope_names[0] == "squad": print(scope_names) pointer = getattr(pointer, "classifier") else: try: pointer = getattr(pointer, scope_names[0]) except AttributeError: logger.info("Skipping {}".format("/".join(name))) continue if len(scope_names) >= 2: num = int(scope_names[1]) pointer = pointer[num] if m_name[-11:] == "_embeddings": pointer = getattr(pointer, "weight") elif m_name == "kernel": array = np.transpose(array) try: assert pointer.shape == array.shape except AssertionError as e: e.args += (pointer.shape, array.shape) raise logger.info("Initialize PyTorch weight {}".format(name)) pointer.data = torch.from_numpy(array) return model from transformers import BertConfig, BertForSequenceClassification, BertForPreTraining def convert_tf_checkpoint_to_pytorch(tf_checkpoint_path, bert_config_file, pytorch_dump_path): # Initialise PyTorch model config = BertConfig.from_json_file(bert_config_file) print("Building PyTorch model from configuration: {}".format(str(config))) config.num_labels = 2 model = BertForSequenceClassification(config) #model = BertForSequenceClassification(config) # Load "weights from tf checkpoint load_tf_weights_in_bert(model, config, tf_checkpoint_path) # Save pytorch-model print("Save PyTorch model to {}".format(pytorch_dump_path)) model.save_pretrained(pytorch_dump_path) return model # Alternatevely you can download existing stashed data #!gsutil cp -r gs://coronaviruspublicdata/re_outputs_1 . import os !mkdir pytorch_output_temp model2 = convert_tf_checkpoint_to_pytorch("re_outputs_1", "biobert_large/bert_config_bio_58k_large.json", "pytorch_output_temp") ``` ### Upload converted checkpoint and test inference If everything goes smoothly we should be able to upload weights and use the converted model. ``` from transformers import BertTokenizer tokenizer = BertTokenizer.from_pretrained('biobert_large/vocab_cased_pubmed_pmc_30k.txt') model2.eval() input_ids = torch.tensor(tokenizer.encode("Hello, my dog is cute", add_special_tokens=True)).unsqueeze(0) # Batch size 1 labels = torch.tensor([1]).unsqueeze(0) # Batch size 1 outputs = model2(input_ids) outputs = model2(input_ids) outputs input_ids = torch.tensor(tokenizer.encode("All our results indicate that the presence of the @GENE$ genotype (++) in patients with structural @DISEASE$, severe left ventricular dysfunction and malignant ventricular arrhythmias increases the risk for these patients of hemodynamic collapse during these arrhythmias")) outputs = model2(input_ids.unsqueeze(0)) outputs values, indices = torch.max(outputs[0], 1, keepdim=False) indices ``` **Lets refactor this into something nicer** ``` from transformers import BertConfig, BertForSequenceClassification, BertForPreTraining from transformers import BertTokenizer class InferSequenceClassifier(object): def __init__(self, pytorch_model_path, token_path, add_special_tokens=False): self.tokenizer = BertTokenizer.from_pretrained(token_path) self.model = BertForSequenceClassification.from_pretrained(pytorch_model_path) self.add_special_tokens = add_special_tokens def make_prediction(self, text): input_ids = torch.tensor(self.tokenizer.encode(text, add_special_tokens=self.add_special_tokens)) outputs = self.model(input_ids.unsqueeze(0)) print(outputs) values, indices = torch.max(outputs[0], 1, keepdim=False) return indices !cp biobert_large/vocab_cased_pubmed_pmc_30k.txt pytorch_output_temp/vocab.txt !cp biobert_large/bert_config_bio_58k_large.json pytorch_output_temp/config.json seq_infer = InferSequenceClassifier("pytorch_output_temp", "pytorch_output_temp", True) seq_infer.make_prediction("@GENE$ influences brain beta-@DISEASE$ load, cerebrospinal fluid levels of beta-amyloid peptides and phosphorylated tau, and the genetic risk of late-onset sporadic AD.") seq_infer.make_prediction("All our results indicate that the presence of the @GENE$ genotype (++) in patients with structural @DISEASE$, severe left ventricular dysfunction and malignant ventricular arrhythmias increases the risk for these patients of hemodynamic collapse during these arrhythmias") seq_infer.make_prediction("Functional studies to unravel the biological significance of this region in regulating @GENE$ production is clearly indicated, which may lead to new strategies to modify the disease course of severe @DISEASE$.") !gsutil cp -r pytorch_output_temp gs://coronavirusqa/re_convert ```
github_jupyter
``` !pip install eli5 !pip install xgboost ``` ## Import of Libraries needed ``` import pandas as pd import numpy as np from sklearn.model_selection import train_test_split from sklearn.pipeline import make_pipeline from sklearn.ensemble import RandomForestClassifier from sklearn.impute import SimpleImputer from category_encoders import OrdinalEncoder from xgboost import XGBClassifier from sklearn.inspection import permutation_importance from sklearn.model_selection import RandomizedSearchCV, GridSearchCV from sklearn.metrics import classification_report, plot_confusion_matrix, plot_roc_curve import matplotlib.pyplot as plt from skopt import BayesSearchCV from sklearn.model_selection import GridSearchCV from sklearn.model_selection import cross_val_score ``` ## Import Datasets ``` train = pd.read_csv('train.csv') test = pd.read_csv('test.csv') census = pd.read_csv('census.csv') print(train.shape, test.shape, census.shape) ``` ## Begin EDA ``` #checking for null values and column types, interesting to see no 'missing' values I'll dive a little further. census.info() #Aha missing values are disguised as '?'. Lets fix that. census['workclass'].value_counts() #Found 3 Object Columns with '?' for missing values. We will fill these with the top value of each row. census.isin(['?']).sum() #Time to make the 'missing' values into NaN so we can work with them census.replace({'?': np.NaN}, inplace=True) #No more '?' census.workclass.value_counts() # They are now registered as NaN. These will be replaced with the top value_counts in each column census.isnull().sum() census.head() #Printing Top Values to Fill NaNs print('Top Value:',census['native-country'].describe()) print('Top Value:',census['occupation'].describe()) print('Top Value:',census['workclass'].describe()) #filling NaN values census['workclass'].replace({np.NaN : 'Private'},inplace=True) census['occupation'].replace({np.NaN : 'Prof-specialty'}, inplace=True) census['native-country'].replace({np.NaN : 'United-States'},inplace=True) #Sanity check to assure NaNs have been fixed with working values. census.isnull().sum() #checking for high cardinality in the dataset as well as seeing what to do with the features. Looks like 'fnlwgt' has a very high cardinality and isnt useful for the model census.astype(object).nunique() ``` #Working on the wrangle function. Not sure how to get these three def/if/else functions wrapped into one working or multi working function inside of a wranglefunction🤔 ``` #Create a New Feature that changes the income column into a 1 if they make more than 50K a year and 0 if they make 50K or less. New Feature called 'makes-50K+'. def over50K(census): if census['income'] == '>50K': val = 1 else: val = 0 return val census['makes-50K+'] = census.apply(over50K, axis=1) #Create a New Feature that changes the hours worked per week column into a 1 if they worked more than 40 hrs a week and 0 if they worked 40 or less. New Feature called 'over40hrs'. def over40(census): if census['hours-per-week'] >40: val = 1 else: val = 0 return val census['over40hrs+'] = census.apply(over40, axis=1) #Create a New Feature that changes the sex column into a 1 if they were Female and 0 if they were Male. New Feature called 'gender-F/1-M/0'. This is new Target column. def gender(census): if census['sex'] == 'Female': val = 1 else: val = 0 return val census['gender-F/1-M/0'] = census.apply(gender, axis=1) #checking to see new features were successful. They are all there. census.head() # Time to drop columns we don't need anylonger. Feature'fnlwgt' is high card and Unnecessary while 'sex' would now become a leaky feature and income and hours per week are now redundant census = census.drop(columns=['fnlwgt','income','hours-per-week','sex','capital-gain','capital-loss']) census ``` # Splitting the Data ``` #Split data randomly with a 60/20/20 split train, val, test = np.split(census.sample(frac=1), [int(.6*len(census)), int(.8*len(census))]) print('Training Set:',train.head(1)) print('Validation Set:',val.head(1)) print('Test Set',test.head(1)) #Split the data into X and y for training the model and making predictions y_train = train[target] X_train = train.drop(target,axis=1) y_val = val[target] X_val = val.drop(target,axis=1) y_test = test[target] X_test = test.drop(target,axis=1) ``` # Establishing the Baseline ``` #First I will check that the target feature is between 50-70%. Its almost to far off but still within the parameters to continue. y_train.value_counts(normalize=True) y_train.value_counts() print('Baseline Accuracy:', y_train.value_counts(normalize=True).max()) ``` # Building the Model ``` #Starting with a pipeline. Using OrdinalEncoder for the object columns, we do not need and Imputer since they were all filled with top values and I am working with XGBClassifier. modelxgb = make_pipeline( OrdinalEncoder(), XGBClassifier(n_jobs=-1) ) modelxgb.fit(X_train,y_train) print('Training accuracy:', modelxgb.score(X_train, y_train)) print('Validation accuracy:', modelxgb.score(X_val, y_val)) scores = cross_val_score(modelxgb, X_train, y_train, cv=20) scores pipeline = make_pipeline( OrdinalEncoder(), RandomForestClassifier(random_state=42) ) params = { 'randomforestclassifier__n_estimators': range(50,500,50), 'randomforestclassifier__max_depth': range(5,101,5), 'randomforestclassifier__max_samples': np.arange(0.2, 0.7, 0.2) } model = RandomizedSearchCV( pipeline, param_distributions=params, cv=5, verbose=1, n_iter=5 ) model.fit(X_train,y_train) scores = cross_val_score(model, X_train, y_train, cv=10) scores print('Training accuracy:', model.score(X_train, y_train)) print('Validation accuracy:', model.score(X_val, y_val)) # make predictions for test data y_pred = model.predict(X_test) # evaluate predictions accuracy = accuracy_score(y_test, y_pred) print("Accuracy: %.2f%%" % (accuracy * 100.0)) # k-fold cross validation evaluation of xgboost model from numpy import loadtxt import xgboost from sklearn.model_selection import KFold from sklearn.model_selection import cross_val_score # load data # split data into X and y # CV model model_w_kf = modelxgb kfold = KFold(n_splits=3, random_state=7) results = cross_val_score(modelxgb, X_train, y_train, cv=kfold) print("Accuracy: %.2f%% (%.2f%%)" % (results.mean()*100, results.std()*100)) from sklearn.linear_model import Ridge, LinearRegression, LogisticRegression log_model = make_pipeline( OrdinalEncoder(), LogisticRegression(max_iter=5) ) log_model.fit(X_train, y_train) print('Training accuracy:', log_model.score(X_train, y_train)) print('Validation accuracy:', log_model.score(X_val, y_val)) from sklearn.svm import SVC svc_model = make_pipeline( OrdinalEncoder(), SVC() ) svc_model.fit(X_train, y_train) print('Training accuracy:', svc_model.score(X_train, y_train)) print('Validation accuracy:', svc_model.score(X_val, y_val)) lin_model = make_pipeline( OrdinalEncoder(), LinearRegression() ) lin_model.fit(X_train, y_train) print('Training accuracy:', lin_model.score(X_train, y_train)) print('Validation accuracy:', lin_model.score(X_val, y_val)) modelxgb.fit(X_train, y_train) # make predictions for test data y_pred = modelxgb.predict(X_test) # evaluate predictions accuracy = accuracy_score(y_test, y_pred) print("Accuracy: %.2f%%" % (accuracy * 100.0)) from sklearn.ensemble import GradientBoostingClassifier model_skgb = make_pipeline( OrdinalEncoder(), GradientBoostingClassifier(random_state=42) ) model_skgb.fit(X_train, y_train); print('Training accuracy:', model_skgb.score(X_train, y_train)) print('Validation accuracy:', model_skgb.score(X_val, y_val)) # make predictions for test data y_pred = model_skgb.predict(X_test) # evaluate predictions accuracy = accuracy_score(y_test, y_pred) print("Accuracy: %.2f%%" % (accuracy * 100.0)) X.relationship.value_counts() import matplotlib.pyplot as plt importances = modelxgb.named_steps['xgbclassifier'].feature_importances_ feat_imp = pd.Series(importances, index=X_train.columns).sort_values() feat_imp.tail(10).plot(kind='barh') plt.xlabel('Gini importance') plt.ylabel('Feature') plt.title('Feature importance for model_skgb'); # Using sklearn from sklearn.inspection import permutation_importance perm_imp = permutation_importance(modelxgb, X_val, y_val, n_jobs=10, random_state=42) # Put results into DataFrame data = {'importances_mean' : perm_imp['importances_mean'], 'importances_std' : perm_imp['importances_std']} df = pd.DataFrame(data, index=X_val.columns) df.sort_values('importances_mean', ascending=True, inplace=True) # Make plot df['importances_mean'].tail(10).plot(kind='barh') plt.xlabel('Importance (change in accuracy)') plt.ylabel('Feature') plt.title('Permutation importance for model_xgb'); perm_imp = permutation_importance(modelxgb, X_test, y_test, n_jobs=10, random_state=42) data = {'importances_mean' : perm_imp['importances_mean'], 'importances_std' : perm_imp['importances_std']} permutation_importances = pd.DataFrame(data, index=X_test.columns) permutation_importances.sort_values('importances_mean', ascending=True, inplace=True) permutation_importances fig, (ax1, ax2) = plt.subplots(ncols=2, nrows=1, figsize=(12,5)) plot_roc_curve(model, X_test, y_test, ax=ax1) plot_roc_curve(modelxgb, X_test, y_test, ax=ax2) ax1.plot([(0,0), (1,1)], color='grey', linestyle='--') ax2.plot([(0,0), (1,1)], color='grey', linestyle='--') ax1.set_title('Random Forest') ax2.set_title('XG Boost') plt.show() %matplotlib inline import seaborn as sns sns.distplot(y_train); #XGBoost model made without pipeline so shap graphing would not be an issue. import category_encoders as ce ore = ce.OrdinalEncoder() XTO_train = ore.fit_transform(X_train) XTO_val = ore.transform(X_val) modelxgb2 = XGBClassifier() modelxgb2.fit(XTO_train,y_train) print('Training accuracy:', modelxgb2.score(XTO_train, y_train)) print('Validation accuracy:', modelxgb2.score(XTO_val, y_val)) import shap row2 = X_test shap_values = shap.TreeExplainer(modelxgb2).shap_values(XTO_train) shap.summary_plot(shap_values, XTO_train, plot_type="bar") import shap row = XTO_val.iloc[[795]] explainer = shap.TreeExplainer(modelxgb2) shap_values = explainer.shap_values(row) shap.initjs() shap.force_plot( base_value=explainer.expected_value, shap_values=shap_values, features=row) row model.predict(row) row_check = y_val.iloc[[795]] row_check import pdpbox.pdp as pdp from pdpbox.pdp import pdp_isolate, pdp_plot feature = 'makes-50K+' isolate = pdp_isolate( model=model, dataset=XTO_test, # <-- use validation data model_features=XTO_test.columns, feature=feature ) pdp_plot(isolate, feature_name=feature); from pdpbox.pdp import pdp_interact, pdp_interact_plot features = ['makes-50K+', 'over40hrs+'] interact = pdp_interact( model=model, dataset=XTO_test, # <-- use validation data model_features=XTO_test.columns, features=features ) pdp_interact_plot(interact, plot_type='contour', feature_names=features); X_enc = ore.fit_transform(X.drop(columns=['capital-gain','capital-loss'])) model.fit(X_enc,y) %matplotlib inline import matplotlib.pyplot as plt from pdpbox import pdp feature = 'race' pdp_dist = pdp.pdp_isolate(model=modelxgb2, dataset=X_enc, model_features=X_enc.columns, feature=feature) pdp.pdp_plot(pdp_dist, feature); features = ['occupation', 'makes-50K+'] interaction = pdp_interact( model=modelxgb2, dataset=X_enc, model_features=X_enc.columns, features=features ) pdp_interact_plot(interaction, plot_type='grid', feature_names=features); ```
github_jupyter
``` %matplotlib inline from matplotlib import style style.use('fivethirtyeight') import matplotlib.pyplot as plt import numpy as np import pandas as pd import datetime as dt from sqlalchemy import inspect ``` # Reflect Tables into SQLAlchemy ORM ``` import sqlalchemy from sqlalchemy.ext.automap import automap_base from sqlalchemy.orm import Session from sqlalchemy import create_engine, func from pandas.plotting import register_matplotlib_converters register_matplotlib_converters() engine = create_engine("sqlite:///Resources/hawaii.sqlite") # reflecting an existing database into a new model Base = automap_base() # reflecting the tables Base.prepare(engine, reflect=True) # Displaying classes Base.classes.keys() # Saving data bases to variables Measurement = Base.classes.measurement Station = Base.classes.station # Starting session from Python to the DB session = Session(engine) ``` # Exploratory Climate Analysis ``` #Getting the last date in Measurment DB max_date = session.query(func.max(Measurement.date)).first() max_date # Calculating the date 1 year ago from the last data point in the database begin_date = dt.date(2017, 8, 23) - dt.timedelta(days=365) begin_date # Querying the Base tables returns results in a list data = session.query(Measurement.date, Measurement.prcp).filter(Measurement.date >= begin_date).order_by(Measurement.date).all() data # Getting names and types of columns in "measurement" data set inspector = inspect(engine) columns = inspector.get_columns("measurement") for column in columns: print(column["name"], column["type"]) # Getting names and types of columns in "station" data set inspector = inspect(engine) columns = inspector.get_columns("station") for column in columns: print(column["name"], column["type"]) # Save the query results as a Pandas DataFrame and setting the index to the date column precip_df = pd.DataFrame(data, columns=["Date", "Precipitation"]) precip_df["Date"] = pd.to_datetime(precip_df["Date"]) #Resettinng index to Date column precip_df = precip_df.set_index("Date") #Dropping all N/As precip_df = precip_df.dropna(how = "any") #Sorting by Date colummn - ascending precip_df = precip_df.sort_values(by="Date", ascending=True) precip_df # Use Pandas Plotting with Matplotlib to plot the data plt.figure(figsize=(10,5)) plt.plot(precip_df, label="Precipitation by Date") plt.xlabel("Date") plt.ylabel("Precipitation(in)") plt.xticks(rotation="45") plt.legend(loc="upper center") plt.savefig("Output/Precipitation_plot.png") plt.show() ``` ![precipitation](Images/precipitation.png) ``` #calcualting the summary statistics for the precipitation data precip_df.describe() ``` ![describe](Images/describe.png) ``` # Query to count the number of stations in "Stations" data session.query(func.count(Station.id)).all() # What are the most active stations? (i.e. what stations have the most rows)? # List the stations and the counts in descending order. stations = session.query(Measurement.station, func.count(Measurement.station)).group_by(Measurement.station).order_by(func.count(Measurement.station).desc()).all() stations # Using the station id from the previous query, calculate the lowest temperature recorded, # highest temperature recorded, and average temperature of the most active station? session.query(Measurement.station, func.min(Measurement.tobs), func.max(Measurement.tobs), func.avg(Measurement.tobs)). filter(Measurement.station == "USC00519281").\ group_by(Measurement.station).all() # Choose the station with the highest number of temperature observations. # Query the last 12 months of temperature observation data for this station and plot the results as a histogram #Filtering data by date and by station data_2 = session.query(Measurement.date, Measurement.tobs).filter(Measurement.station == "USC00519281").\ filter(func.strftime( Measurement.date) >= begin_date).all() data_2 # Cleaning temp.data and setting index to date temp_df = pd.DataFrame(data_2, columns=["Date", "Temperature"]) temp_df = temp_df.sort_values(by="Date", ascending=True) temp_df.set_index("Date", inplace=True) temp_df.head() plt.figure(figsize=[8,5]) #Ploting the results as a histogram with 12 bins plt.hist(x=temp_df["Temperature"], bins=12, label="tobs") # Labeling figure plt.grid plt.xlabel("Temperature (F)") plt.ylabel("Frequency") plt.title("Temperature Frequency Histogram") plt.legend() # Saving Plot plt.savefig("Output/Temp Frequency Histogram"); plt.show() ``` ![precipitation](Images/station-histogram.png) ``` # This function called `calc_temps` will accept start date and end date in the format '%Y-%m-%d' # and return the minimum, average, and maximum temperatures for that range of dates def calc_temps(start_date, end_date): """TMIN, TAVG, and TMAX for a list of dates. Args: start_date (string): A date string in the format %Y-%m-%d end_date (string): A date string in the format %Y-%m-%d Returns: TMIN, TAVE, and TMAX """ return session.query(func.min(Measurement.tobs), func.avg(Measurement.tobs), func.max(Measurement.tobs)).\ filter(Measurement.date >= start_date).filter(Measurement.date <= end_date).all() # function usage example print(calc_temps('2011-02-28', '2011-03-05')) # using the example to calculate min, max and average tempreture for my vacation date # Vacation Dates start_date = "2020-04-01" end_date = "2020-04-11" # Previous Year Dates hst_start_date = "2017-04-01" hst_end_date = "2017-04-11" # Min,average and max temp calculation temp_min = calc_temps(hst_start_date, hst_end_date)[0][0] temp_avg = calc_temps(hst_start_date, hst_end_date)[0][1] temp_max = calc_temps(hst_start_date, hst_end_date)[0][2] print(temp_min, temp_avg, temp_max) # Ploting the results from your previous query as a bar chart. # Use "Trip Avg Temp" as your Title # Use the average temperature for the y value # Use the peak-to-peak (tmax-tmin) value as the y error bar (yerr) x_axis = 1 y_axis = temp_avg error = temp_max-temp_min # Defining Bar and Error paramaters plt.bar(x_axis, y_axis, yerr=error, align='center', color = "r") plt.tick_params(bottom=False,labelbottom=False) # Labeling, tickers and grids plt.ylabel("Temperature (F)") plt.title("Trip Avg Temperature") plt.grid(b=None, which="major", axis="x") plt.margins(1.5, 1.5) plt.ylim(0, 90) plt.savefig("Output/Trip Average Temperature") #Show the Plot plt.show(); ``` ## Optional Challenge Assignment ``` # Create a query that will calculate the daily normals # (i.e. the averages for tmin, tmax, and tavg for all historic data matching a specific month and day) def daily_normals(date): """Daily Normals. Args: date (str): A date string in the format '%m-%d' Returns: A list of tuples containing the daily normals, tmin, tavg, and tmax """ sel = [func.min(Measurement.tobs), func.avg(Measurement.tobs), func.max(Measurement.tobs)] return session.query(*sel).filter(func.strftime("%m-%d", Measurement.date) == date).all() daily_normals("04-01") # calculate the daily normals for your trip # push each tuple of calculations into a list called `normals` # Seting the start and end date of the trip from historic dates hst_start_date # defined above hst_end_date # Useing the start and end date to create a range of dates dates = session.query(Measurement.date).filter(Measurement.date >= hst_start_date).filter(Measurement.date <= hst_end_date).group_by(Measurement.date).all() #saving trip dates into array arr_dates = [x[0] for x in dates] # Reformating dates to mm-dd format and getting data ion a list arr_dates_mm_dd= [x[5:] for x in arr_dates] start_mmdd = arr_dates_mm_dd[0] end_mmdd = arr_dates_mm_dd[10] # Looping through the list of mm-dd and getting max,ave, min temp averages temps_by_dates = [session.query(func.min(Measurement.tobs), func.avg(Measurement.tobs), func.max(Measurement.tobs)).filter(func.strftime("%m-%d", Measurement.date) >= start_mmdd).filter(func.strftime("%m-%d", Measurement.date) <= end_mmdd).group_by(func.strftime("%m-%d", Measurement.date)).all()] temps_by_dates = temps_by_dates[0] #displaying averages for each date of the trip temps_by_dates # reformating list of temp into Pandas DataFrame temps_by_dates_df= pd.DataFrame(temps_by_dates,columns=["min_t","avg_t","max_t"]) #Adding date column temps_by_dates_df["date"]= arr_dates_mm_dd # Seting index to date temps_by_dates_df.set_index("date",inplace=True) temps_by_dates_df # Ploting the daily normals as an area plot with `stacked=False` temps_by_dates_df.plot(kind='area', stacked=False, x_compat=True, title="Daily Normals for Trip Dates") plt.xticks(rotation="45") plt.savefig(("Output/Temp Frequency")) plt.show() ```
github_jupyter
# Ensemble Learning * The basic idea of ensemble learning is to have multiple learning algorithms for the same problem and combine their results to make a final prediction * There are multiple types on ensemble learning. Common approaches include: * Boosting * Bagging/Bootstrapping * Random Forests * Mixture of Experts ## Boosting and Bagging * When you have one data set, usually you may train an algorithm and learn a single set of parameters. However, when we do this, we have no idea how stable/variable those parameters that we estimated are. * Bootstrapping can show us the variation in estimated parameter values given a particular data set. Sometimes, it can also help to improve our predictions. * Essentially, to perform bootstrapping, you sample from your data set *with replacement* and train your algorithm to estimate the parameters with each sampled subset. You can then look at how much the parameters vary with each sampled subset and you can also combine your estimates from each trained method by averaging over all of the results for regression: \begin{equation} y_{com}(\mathbf{x}) = \frac{1}{M} \sum_{m=1}^M y_m(\mathbf{x}) \end{equation} * You can aggregate results over all your bootstrap samples using majority vote for classification. ``` import numpy as np import matplotlib.pyplot as plt import math import textwrap %matplotlib inline def generateRandData(N, l, u, gVar): '''generateRandData(N, l, u, gVar): Generate N uniformly random data points in the range [l,u) with zero-mean Gaussian random noise with variance gVar''' x = np.random.uniform(l,u,N) e = np.random.normal(0,gVar,N) t = np.sin(2*math.pi*x) + e return x,t def fitdataReg(x,t,M,la): '''fitdata(x,t,M): Fit a polynomial of order M to the data (x,t)''' X = np.array([x**m for m in range(M+1)]).T w = np.linalg.inv(X.T@X+(la*np.identity(M+1)))@X.T@t return w def plotPoly(x,t,xrange, y, esty, subplotloc,la=0): #plot everything plt.subplot(*subplotloc) #identify the subplot to use # plt.tight_layout() plt.ylim([-2,2]) p1 = plt.plot(xrange, y, 'g') #plot true value p2 = plt.plot(x, t, 'bo') #plot training data p3 = plt.plot(xrange, esty, 'r') #plot estimated value #add title, legend and axes labels plt.ylabel('t') #label x and y axes plt.xlabel('x') def bootstrapRegression(M, numData,percentSample,numSamples): #generate data x,t = generateRandData(numData,0,1,1) numDataSamples = round(percentSample*numData) subplotloc = [2, round(numSamples/2), 1] fig = plt.figure() xrange = np.arange(0.05,.95,0.001) #get equally spaced points in the xrange esty = np.empty([numSamples, xrange.shape[0]]) for iter in range(numSamples): #select a random subset of the data rp = np.random.permutation(numData) x_sub = x[rp[0:numDataSamples-1]] t_sub = t[rp[0:numDataSamples-1]] #fit the random subset w = fitdataReg(x_sub,t_sub,M,0) #plot results subplotloc[2] = iter+1 y = np.sin(2*math.pi*xrange) #compute the true function value X = np.array([xrange**m for m in range(w.shape[0])]).T esty[iter,:] = X@w #compute the predicted value plotPoly(x_sub,t_sub,xrange,y,esty[iter,:],subplotloc) #combine the bootstrapped results comy = esty.mean(0) yerr = esty.var(0) # compare to full data set fig = plt.figure() plotPoly(x,t,xrange,y,comy,[1, 1, 1]) plt.errorbar(xrange, comy, yerr=yerr, fmt='r.',ms=10,errorevery=10) fig = plt.figure() w = fitdataReg(x,t,M,0) y = np.sin(2*math.pi*xrange) #compute the true function value X = np.array([xrange**m for m in range(w.shape[0])]).T yy = X@w #compute the predicted value plotPoly(x,t,xrange,y,yy, [1, 1, 1]) #Figure 1.7 from text bootstrapRegression(5, 50,.75,20) ``` # Boosting: AdaBoost * Goal: Combine base (``weak'') classifiers to form a committee whose performance is better than any of the single base classifiers. * The base classifiers are trained in sequence (not in parallel like in bootstrapping) * Each base classifier is trained using a weighted data set (different weights for each base classifier) * Points that are misclassified by a base classifier are weighted more heavily while training the next base classifier * Consider a two class classification problem with $\mathbf{X} = \left\{ \mathbf{x}_1, \mathbf{x}_2, \ldots, \mathbf{x}_n\right\}$ with corresponding labels $y_i \in \left\{ -1,1\right\}$. * The goal is to construct a classifier of the form: \begin{equation} f(\mathbf{x}) = sign(F(\mathbf{x})) \end{equation} where \begin{equation} F(\mathbf{x}) = \sum_{k=1}^K \frac{1}{2}\alpha_k \phi(\mathbf{x}; \theta_k) \end{equation} where $\phi(\mathbf{x}; \theta_k)$ is the base classifier. * We need to determine the parameter values for each base classifier: \begin{eqnarray} \arg \min_{\alpha_k, \theta_k} \sum_{i=1}^N \exp\left(-y_i F(\mathbf{x}_i) \right) \end{eqnarray} * This cost function penalizes the samples that are incorrectly classified ($y_iF(\mathbf{x}_i) < 0$) heavily * Direct optimization of all $\alpha$s and $\theta$s is difficult. So, we iteratively optimize (which is sub-optimal). At each stage, we train one base classifier holding fixed all those that have already been trained. * Let: \begin{eqnarray} F_m(\mathbf{x}) &=& \sum_{k=1}^m \frac{1}{2}\alpha_k \phi(\mathbf{x}; \theta_k)\\ &=& F_{m-1}(\mathbf{x}) + \frac{1}{2}\alpha_m \phi(\mathbf{x}; \theta_m) \end{eqnarray} * At step $m$, we optimize for $\alpha_m$ and $\theta_m$ where $F_{m-1}(\mathbf{x})$ is fixed: \begin{eqnarray} (\alpha_m, \theta_m) &=& \arg \min_{\alpha, \theta} J(\alpha, \theta)\\ &=& \arg \min_{\alpha, \theta} \sum_{i=1}^N \exp\left( -y_i\left( F_{m-1}(\mathbf{x}_i) +\frac{1}{2} \alpha\phi(\mathbf{x}_i; \theta)\right)\right) \end{eqnarray} * So, let's optimize this in two steps: first $\theta_m$ and then $\alpha_m$ \begin{eqnarray} \theta_m &=& \arg \min_{\theta} \sum_{i=1}^N \exp\left( -y_i\left( F_{m-1}(\mathbf{x}_i) + \frac{1}{2}\alpha\phi(\mathbf{x}_i; \theta)\right)\right)\\ &=& \arg \min_{\theta} \sum_{i=1}^N w_i^{(m)} \exp\left( -\frac{1}{2}y_i\alpha\phi(\mathbf{x}_i; \theta)\right) \end{eqnarray} where \begin{equation} w_i^{(m)} = \exp\left(-y_iF_{m-1}(\mathbf{x}_i)\right) \end{equation} * This can be re-written as: \begin{eqnarray} \theta_m &=& \arg \min_{\theta} \exp\left(-\alpha_m/2\right)\sum_{n \in T_m}w_n^{(m)} + \exp\left(\alpha_m/2\right)\sum_{n \in M_m}w_n^{(m)} \nonumber \\ &=& \left( \exp\left(\alpha_m/2\right) - \exp\left(-\alpha_m/2\right)\right)\sum_{i=1}^Nw_i^{(m)} I(\phi_m(\mathbf{x}_i;\theta) \ne y_i) + \exp\left(-\alpha_m/2\right)\sum_{i=1}^Nw_i^{(m)} \end{eqnarray} * This is equivalent to minimizing \begin{equation} \arg \min_{\theta} \sum_{i=1}^N w_i^{(m)} I(\phi_m(\mathbf{x}_i;\theta) \ne y_i) \end{equation} * Once we have the optimal classifier at step $m$ (i.e., $\theta_m$), then we determine the $\alpha_m$ values \begin{eqnarray} \sum_{y_i\phi(\mathbf{x}_i;\theta_m)<0}w_i^{(m)} = P_m\\ \sum_{y_i\phi(\mathbf{x}_i;\theta_m)>0}w_i^{(m)} = 1 - P_m \end{eqnarray} * Plugging this into J, we get: \begin{eqnarray} \alpha_m = \arg\min_{\alpha} \left\{ \exp(-\alpha)(1-P_m) + \exp(\alpha)P_m\right\} \end{eqnarray} * Take the derivative with respect to $\alpha$, set to zero, we get: \begin{equation} \alpha_m = \frac{1}{2}\ln\frac{1-P_m}{P_m} \end{equation} * Once you get $\theta_m$ and $\alpha_m$, you compute the weights for the next step: \begin{equation} w_i^{(m+1)} = \frac{\exp(-y_iF_m(\mathbf{x}_i))}{Z_m} = \frac{\exp(-y_i\alpha_m\phi(\mathbf{x}_i;\theta_m))}{Z_m} \end{equation} where \begin{equation} Z_m = \sum_{i=1}^N w_i^{(m)}\exp\left(-y_i\alpha_m\phi(\mathbf{x}_i;\phi_m)\right) \end{equation} * Notice that the weight corresponding to a sample is increased (or decreased) with respect to its value in the previous iteration * Notice that the amount of increase or decrease depends on $\alpha_m$ which controls the relative importance of the $m^{th}$ term in building up the final classifier ## Random Forests * A forest is made up of many trees... * For classification/regression, put an input vector down each of the trees in the forest. For classification, classify the data point using majority vote. For regression, average the values * Each tree is grown using: * Sample $N$ data points (with replacement, i.e., a bootstrap sample) from the full training data set * Specify a number $d << D$. $d$ variables are selected at random out of all $D$ features to determine the split on the node. Select the best of the $d$ features to split at that node * Grow each tree as much as possible (i.e., no pruning or stopping early) * Error relates to correlation between the trees. Greater correlation leads to greater error. *Does this make sense?* * Error also relates to the strength of each individual tree. Better individual trees lead to lower error * https://www.stat.berkeley.edu/~breiman/RandomForests/cc_home.htm # Dropout * This is a method to help prevent overfitting and regularize a network. * The approach attempts to minimize co-dependencies between neurons and enhance robustness of network * Dropout has one parameter $p$. In each iteration, you randomly exclude each neuron with probability $1-p$ during the training pass (in both forward and backward propagation). Each iteration, you resample which neurons to keep and which to dropout. * Dropout is related to the concept of ensemble learning with the unique case that the various models in the ensemble share parameters and these models are "combined" into a single model/network at test as opposed to training a fusion model or doing a simple average between outputs. * During test, you use all neurons all the time. * Please see and read: https://www.cs.toronto.edu/~hinton/absps/JMLRdropout.pdf
github_jupyter
# MixMod Tutorial Welcome to the MixMod tutorial! Here we'll go over the basic functionality of MixMod. It's a small package, so the explanation of the MixtureModel class will be brief and will largely focus on formatting the inputs correctly. (Mixture models are relatively parameter-rich, so the syntax for properly specifying all the components can be a little verbose!) The first portion of this tutorial is a brief introduction to mixture models, their use cases, and why parameter inference is a hard problem, so feel free to skip to the MixMod Class section if you're already familiar with mixture model theory. ## Mixture Model Theory ### What are mixture models and what are they good for? Unlike in introductory statistics courses where the data are typically clean examples of a single distribution, real data are messy. They contain outliers, missing values, and may represent the result of multiple overlapping random processes. One common example is a "bimodal" distribution of exam scores where there are two groups of students, those who understood the material and those who didn't. As an instructor, we'll likely want to calculate the means within groups and give students different instruction depending on whether we think they understood the previous material. In other words, we want to 1) understand the statistical properties of each group and 2) assign observations to these groups. More formally, these two goals are parameter estimation and class inference, and they are the major applications of mixture models. If the observations were labeled with their classes, these calculations would be trivial. The challenge is class labels are typically hidden in real-world data, so the observations from different classes are jumbled together. In most cases, class labels don't even exist since the mixture model is a statistical formalism rather than an accurate representation of the underlying data generation process. (See also "All models are wrong, but some are useful.") ### A formal definition Let's now give a more formal definition of mixture models (which is adapted from [Wikipedia](https://en.wikipedia.org/wiki/Mixture_model)). A mixture model consists of the following components: - A set of *K* mixture components, each of which is a probability distribution. - A set of *K* parameters, each specifying the parameters of its corresponding mixture component. In many cases, each "parameter" is actually a set of parameters. For example, if the mixture components are normal distributions, each component will have a mean and variance. - A set of *K* mixture weights, which are probabilities that sum to 1. The probability density function for a mixture model evaluated at $x_i$ is given by: $$ f(x_i) = \sum_{k=1}^K \phi_k f_k(x_i; \theta_k) $$ where $K$ is number of components, $\phi_k$ is the weight, $f_k$ is the pdf, and $\theta_k$ is the parameter set of each component. The above equation applies to a mixture model for an observation with an unknown class label. If the class label, $z_i$, is known, then the density function is given by: $$ f(x_i) = \sum_{k=1}^K \delta_{kz_i} \phi_k f_k(x_i; \theta_k) $$ where $\delta_{ij}$ is the Kronecker delta function. Since $\delta_{ij} = 0$ when $i \ne j$, this equation reduces to the distribution corresponding to the class of the observation. ### Fitting mixture models If the class labels are known, then some algebra using the above equation will show the overall likelihood for the data is maximized when the component likelihoods are maximized for the data corresponding to that component. This checks out intuitively. If we knew the class labels, then we could treat the components separately and choose the best parameters for each using only the observations from that component. When the class labels are not known, parameter inference is a different beast entirely. The problem is a little like a chicken or egg question. If we knew the class labels, then we could easily infer the component parameters. If we knew the component parameters, then we could infer the class labels (and in turn use those labels to infer the component parameters). This is very nearly estimation-maximization (EM), the algorithm that yields parameter estimates for statistical models with unobserved variables (like the class labels in mixture models). The basic idea is by alternating between assigning class labels to observations using the current parameter estimates and then using those class assignments to update the parameters, the parameters will eventually converge to a local maximum of the likelihood function. The actual procedure is a little more subtle than making hard class assignments for each observation, but the basic idea is very similar. The EM algorithm is highly flexible, so it is possible to implement the procedure for a generic mixture model. However, such an implementation would necessarily rely on general purpose numerical optimization routines, which can be somewhat finicky to use in practice. Thus, for both efficiency and robustness, this package limits the distributions to those where the EM equations are explicitly solved. More details are available in the section "Creating mixtures of other distributions." ## The MixtureModel Class ### Importing the package and generating data With all that out of the way, let's introduce the MixMod package! First we need to import it and some supporting libraries. ``` import matplotlib.pyplot as plt import mixmod import numpy as np import scipy.stats as stats ``` Now let's generate some data. We'll start with a simple mixture of two normal distributions. In the SciPy stats implementation, the mean and standard deviation are specified with the `loc` and `scale` parameters, respectively. This is standard practice within this module as well as in statistics more broadly. Distributions are often characterized by different, but related, parameters depending on the context. However, most of these can be expressed in a standard form as either a location or scale parameter. Location parameters shift the position of the distribution whereas scale parameters control the spread. Both of these have formal mathematical definitions which define these ideas precisely. The practical take-away, however, is the SciPy implementations of distributions express all location and scale parameters in their standard forms. These forms may differ from the conventional parametrizations, so be sure to read the documentation for each distribution thoroughly. ``` rvs0 = stats.norm.rvs(loc=1, scale=1.25, size=400) rvs1 = stats.norm.rvs(loc=5, scale=0.75, size=100) rvs = np.concatenate([rvs0, rvs1]) ``` We can visualize these distributions separately. We'll manually set the bins, so the two histograms are drawn on the same intervals. ``` bins = np.linspace(rvs.min(), rvs.max(), num=50) plt.hist(rvs0, bins=bins, color='C0') plt.hist(rvs1, bins=bins, color='C1'); ``` Usually, however, the observations from the two components are mixed together. ``` plt.hist(rvs, bins=bins, facecolor='white', edgecolor='black'); ``` Clearly the overall distribution is bimodal, but the division between the two components isn't immediately obvious, even in a simple case like this. Let's now use a MixtureModel to try to extract the parameters. ### Instantiating a MixtureModel and plotting its pdf The easiest way of instantiating a MixtureModel is by simply passing a list of SciPy stats distributions. ``` mixture = mixmod.MixtureModel([stats.norm, stats.norm]) mixture ``` This is the minimal amount of information needed, so most of the attributes of the instance are currently empty. Notice, however, the weights were set uniformly across components by default. Let's now make this mixture model more interesting by giving it some better initial parameters. It's not necessary to specify all the parameters for each component. Any parameters not defined in the `params` or `params_fix` dicts will use the default values specified by the distribution. ``` mixture = mixmod.MixtureModel([stats.norm, stats.norm], params=[{'loc': 1}, {'loc': 5}], weights=[0.6, 0.4]) mixture ``` Let's look at how well the density function matches the histogram. ``` x = np.linspace(rvs.min(), rvs.max(), 100) y = mixture.pdf(x) plt.hist(rvs, bins=bins, density=True, facecolor='white', edgecolor='black') plt.plot(x, y, color='black'); ``` We can also extract the pdfs of the individual components and plot them separately. ``` x = np.linspace(rvs.min(), rvs.max(), 100) y = mixture.pdf(x, component='all') plt.hist(rvs, bins=bins, density=True, facecolor='white', edgecolor='black') plt.plot(x, y[0], label='component 0', color='C0') plt.plot(x, y[1], label='component 1', color='C1') plt.legend(frameon=False); ``` ### Fitting a MixtureModel Our initial parameters aren't bad, but let's see if we can do a little better. Let's call `fit` on our data to optimize the parameters. ``` mixture.fit(rvs) mixture ``` These new parameters look closer to their true values. You can also see each component has a `scale` parameter in its `params` dict now since they are now estimated from the data and not using the default values. Let's see if the pdfs match the histograms better. ``` x = np.linspace(rvs.min(), rvs.max(), 100) y = mixture.pdf(x, component='all') plt.hist(rvs, bins=bins, density=True, facecolor='white', edgecolor='black') plt.plot(x, y[0], label='component 0', color='C0') plt.plot(x, y[1], label='component 1', color='C1') plt.legend(frameon=False); ``` ### Fitting a MixtureModel with fixed parameters One downside of this approach is all the parameters associated with each component are fit to the data. In some cases, we might have existing estimates for certain parameters that we want to stay constant. We can communicate this information to a `MixtureModel` by passing these parameters in the `params_fix` dicts. For example, let's say we're confident the `loc` parameter of the second component is 5, but we're unsure about the remaining parameters. ``` mixture = mixmod.MixtureModel([stats.norm, stats.norm], params_fix=[{}, {'loc': 5}]) mixture ``` Notice that an empty dict is supplied for the first component, so the correspondence between components and dicts is unambiguous. When we plot the pdfs of the components, we can see they use their default parameters (`loc=1`, `scale=1`) for any parameters not given in `params` or `params_fix`. ``` x = np.linspace(rvs.min(), rvs.max(), 100) y = mixture.pdf(x, component='all') plt.hist(rvs, bins=bins, density=True, facecolor='white', edgecolor='black') plt.plot(x, y[0], label='component 0', color='C0') plt.plot(x, y[1], label='component 1', color='C1') plt.legend(frameon=False); ``` Now let's fit the free parameters. ``` mixture.fit(rvs) mixture ``` As expected, the `loc` parameter of the second component has remained fixed at 5. ### Predicting class labels Let's now address the second major task of mixture models: inference of class labels. The `posterior` method returns a distribution across components for each observation. ``` posterior = mixture.posterior(rvs) posterior.shape ``` Let's look at an individual observation and its posterior distribution. ``` print(rvs[0]) print(posterior[:, 0]) ``` This isn't the most intuitive way of visualizing the output, so let's try to plot it a few different ways. We can first plot the posterior probability of a class by its position along the x-axis as a line graph. ``` x = np.linspace(rvs.min(), rvs.max(), 100) y = mixture.posterior(x) fig, ax1 = plt.subplots() ax2 = ax1.twinx() ax1.hist(rvs, bins=bins, density=True, facecolor='white', edgecolor='black') ax2.plot(x, y[0], color='C0', label='component 0') ax2.plot(x, y[1], color='C1', label='component 1') ax1.set_ylabel('Density') ax2.set_ylabel('Posterior probability') ax2.legend(ncol=2, loc='upper center', bbox_to_anchor=(0.5, -0.1), frameon=False); ``` We can plot the same information as a heatmap. ``` aspect = 0.2 # Ratio of y-axis to x-axis in display units plt.imshow(y, vmin=0, vmax=1, aspect=aspect*(x.max() - x.min()) / y.shape[0], extent=[x.min(), x.max(), 0, y.shape[0]]) plt.yticks([y + 0.5 for y in range(y.shape[0])], [f'component {y}' for y in range(y.shape[0])]) plt.colorbar(location='bottom', orientation='horizontal'); ``` ### Creating mixtures of other distributions Obviously this package wouldn't be very useful if it was limited to fitting mixture models with only two normal components. Fortunately, it can fit an arbitrary number of components. Unfortunately, these components are limited to a relatively small subset of the distributions defined in SciPy stats, as the EM equations are explicitly solved for these distributions. This makes fitting the parameters more efficient and robust than if general purpose numerical optimization algorithms were used. The cost, however, is the types of distributions available are somewhat limited. We can view the supported distributions by examining the `mles` variable in `mixmod.estimators`. It stores the maximum-likelihood estimators for each distribution in a dictionary. ``` mixmod.estimators.mles.keys() ``` Let's now simulate a mixture of exponential, gamma, and normal components and fit a mixture model! ``` rvs0 = stats.expon.rvs(scale=0.5, size=100) rvs1 = stats.gamma.rvs(a=4, scale=2, size=300) rvs2 = stats.norm.rvs(loc=15, scale=0.75, size=200) rvs = np.concatenate([rvs0, rvs1, rvs2]) bins = np.linspace(rvs.min(), rvs.max(), num=50) plt.hist(rvs, bins=bins, density=True, facecolor='white', edgecolor='black'); mixture = mixmod.MixtureModel([stats.expon, stats.gamma, stats.norm]) mixture mixture.fit(rvs) mixture x = np.linspace(rvs.min(), rvs.max(), 100) y = mixture.pdf(x, component='all') plt.hist(rvs, bins=bins, density=True, facecolor='white', edgecolor='black') plt.plot(x, y[0], label='component 0', color='C0') plt.plot(x, y[1], label='component 1', color='C1') plt.plot(x, y[2], label='component 2', color='C2') plt.legend(frameon=False); ``` ## Conclusion This brings us to the conclusion of the tutorial. We've covered the major parts of the MixtureModel class. There are a few optional arguments and methods we haven't touched on here, but they are straightforward and explained fully in the formal documentation. If you have any questions, please don't hesitate to reach out!
github_jupyter
# Automated setup of mixtures We've been working on streamlining setup of simulations of arbitrary mixtures in AMBER/GROMACS/OpenMM and others for some of our own research. I thought I'd demo this really quick so you can get a feel for it and see if you're interested in contributing. It also allows quick setup and analysis of nontrivial liquid simulations, which can be a good opportunity to try out MDTraj and other analysis tools. *Before running the below*, you will need to have followed the [getting started instructions](https://github.com/MobleyLab/drug-computing/blob/master/uci-pharmsci/getting-started.md) for this course. ``` from solvationtoolkit.solvated_mixtures import * #In this particular instance I'll just look at six solutes/solvent mixtures (not an all-by-all combination) which are pre-specified #solute names solutes = ['phenol', 'toluene', 'benzene', 'methane', 'ethanol', 'naphthalene'] #Solvent names solvents = ['cyclohexane', 'cyclohexane', 'cyclohexane', 'octanol', 'octanol', 'octanol'] #Number of solute/solvent molecules Nsolu = 3 Nsolv = 100 #Construct systems for idx in range( len( solutes) ): # Define new mixture mixture = MixtureSystem() # Add solute and solvent mixture.addComponent(name=solutes[idx], number=Nsolu) mixture.addComponent(name=solvents[idx], number=Nsolv) # Note you can optionally specify mole fraction instead, or a mix of numbers/mole fractions, etc. # Build system, including AMBER input files (but not GROMACS) mixture.build(amber=True, gromacs=True) ``` ## Let's try and see if we can do a quick visualization of one of the systems via mdtraj just to make sure it looks right ``` #Import MDTraj import mdtraj as md #Load "trajectory" (structures) #You can load from either format (SolvationToolkit generates both) #traj = md.load( 'data/amber/phenol_cyclohexane_3_100.inpcrd', top = 'data/amber/phenol_cyclohexane_3_100.prmtop' ) traj = md.load( 'data/gromacs/phenol_cyclohexane_3_100.gro') #Input viewer import nglview #Set up view of structure view = nglview.show_mdtraj(traj) #Try some of the following to modify representations view.clear_representations() view.add_licorice('all') view.add_licorice('1-3', color = "blue") #For selection info, see http://arose.github.io/ngl/doc/#User_manual/Usage/Selection_language view.add_surface('1', opacity=0.3) view.add_surface('2, 3', color = 'red', opacity=0.3) #Show the view. Note that this needs to be the last command used to manipulate the view, i.e. if you modify the #representation after this, your view will be empty. view #VIEWER USAGE: # - Use your typical zoom command/gesture (i.e. pinch) to zoom in and out # - Click and drag to reorient # - Click on specific atoms/residues to find out details of what they are (and how they could be selected) ``` ## Other possibly interesting things to try: * Find the average distance from phenol to phenol * Calculate the density or volume of the system * etc. (Drawing on MDTraj - see docs online) ``` # Use this box to try additional things ``` # Let's use a SMIRNOFF forcefield to parameterize the system, minimize, and run dynamics (This requires `openforcefield`, which you will have conda-installed if you've followed the getting started info.) First we handle imports ``` # Import the SMIRNOFF forcefield engine and some useful tools from openforcefield.typing.engines import smirnoff from openforcefield.typing.engines.smirnoff import ForceField from openforcefield.utils import get_data_filename, extractPositionsFromOEMol, generateTopologyFromOEMol # At this point SMIRNOFF requires oechem, though an RDKit version is in the works from openeye import oechem # We use PDBFile to get OpenMM topologies from PDB files from simtk.openmm.app import PDBFile # We'll use OpenMM for simulations/minimization from simtk import openmm, unit from simtk.openmm import app # MDTraj for working with trajectories; time for timing import time import mdtraj ``` ## Now we handle assignment of force field parameters and generation of an OpenMM System ``` # Specify names of molecules that are components of the system mol_filenames = ['phenol', 'cyclohexane'] # Load OEMols of components of system - SMIRNOFF requires OEMols of the components # and an OpenMM topology as input oemols = [] flavor = oechem.OEIFlavor_Generic_Default | oechem.OEIFlavor_MOL2_Default | oechem.OEIFlavor_MOL2_Forcefield #input flavor to use for reading mol2 files (so that it can understand GAFF atom names) # Loop over molecule files and load oemols for name in mol_filenames: mol = oechem.OEGraphMol() filename = 'data/monomers/'+name+'.mol2' ifs = oechem.oemolistream(filename) ifs.SetFlavor( oechem.OEFormat_MOL2, flavor) oechem.OEReadMolecule(ifs, mol ) oechem.OETriposAtomNames(mol) #Right now we have GAFF atom names, which OE doesn't like; reassign oemols.append(mol) ifs.close() # Load SMIRNOFF99Frosst force field (AMBER-family force field created by Christopher Bayly) forcefield = ForceField(get_data_filename('forcefield/smirnoff99Frosst.ffxml')) # Get OpenMM topology for mixture of phenol and cyclohexane from where SolvationToolkit created # it on disk pdbfile = PDBFile('data/packmol_boxes/phenol_cyclohexane_3_100.pdb') # Assign SMIRNOFF parameters and create system; here we'll use PME with a 1.1 nm LJ cutoff. system = forcefield.createSystem( pdbfile.topology, oemols, nonbondedMethod = smirnoff.PME, nonbondedCutoff=1.1*unit.nanometer ) ``` ## Finally we energy minimize and run dynamics ``` # Set how many steps we'll run and other run parameters num_steps=10000 trj_freq = 100 #Trajectory output frequency data_freq = 100 #Energy/data output frequency temperature = 300*unit.kelvin #Temperature time_step = 2.*unit.femtoseconds friction = 1./unit.picosecond #Langevin friction constant # Bookkeeping -- if you run this more than once and perhaps encountered an exception, we need to make sure the reporter is closed try: reporter.close() except: pass # Set up integrator, platform for running simulation integrator = openmm.LangevinIntegrator(temperature, friction, time_step) platform = openmm.Platform.getPlatformByName('Reference') simulation = app.Simulation(pdbfile.topology, system, integrator) # Set positions, velocities simulation.context.setPositions(pdbfile.positions) simulation.context.setVelocitiesToTemperature(temperature) # Before doing dynamics, energy minimize (initial geometry will be strained) simulation.minimizeEnergy() # Set up reporter for output reporter = mdtraj.reporters.HDF5Reporter('mixture.h5', trj_freq) simulation.reporters=[] simulation.reporters.append(reporter) simulation.reporters.append(app.StateDataReporter('data.csv', data_freq, step=True, potentialEnergy=True, temperature=True, density=True)) # Run the dynamics print("Starting simulation") start = time.clock() simulation.step(num_steps) end = time.clock() print("Elapsed time %.2f seconds" % (end-start)) #netcdf_reporter.close() reporter.close() print("Done!") ``` ## Let's make a movie of our simulation ``` import nglview traj=mdtraj.load('mixture.h5') view = nglview.show_mdtraj(traj) #Try some of the following to modify representations view.clear_representations() view.add_licorice('all') view.add_licorice('1-3', color = "blue") #For selection info, see http://arose.github.io/ngl/doc/#User_manual/Usage/Selection_language view.add_surface('1', opacity=0.3) view.add_surface('2, 3', color = 'red', opacity=0.3) view #Note that if you view a movie and keep it playing, your notebook will run a hair slow... ```
github_jupyter
# Mislabel detection using influence function with all of layers on Cifar-10, ResNet ### Author [Neosapience, Inc.](http://www.neosapience.com) ### Pre-train model conditions --- - made mis-label from 1 percentage dog class to horse class - augumentation: on - iteration: 80000 - batch size: 128 #### cifar-10 train dataset | | horse | dog | airplane | automobile | bird | cat | deer | frog | ship | truck | |----------:|:-----:|:----:|:--------:|:----------:|:----:|:----:|:----:|:----:|:----:|:-----:| | label | 5000 | **4950** | 5000 | 5000 | 5000 | 5000 | 5000 | 5000 | 5000 | 5000 | | mis-label | **50** | | | | | | | | | | | total | **5050** | 4950 | 5000 | 5000 | 5000 | 5000 | 5000 | 5000 | 5000 | 5000 | ### License --- Apache License 2.0 ### References --- - Darkon Documentation: <http://darkon.io> - Darkon Github: <https://github.com/darkonhub/darkon> - Resnet code: <https://github.com/wenxinxu/resnet-in-tensorflow> - More examples: <https://github.com/darkonhub/darkon-examples> ### Index - [Load results and analysis](#Load-results-and-analysis) - [How to use upweight influence function for mis-label](#How-to-use-upweight-influence-function-for-mis-label) ## Load results and analysis ``` import numpy as np import matplotlib.pyplot as plt %matplotlib inline scores = np.load('mislabel-result-all.npy') print('num tests: {}'.format(len(scores))) begin_mislabel_idx = 5000 sorted_indices = np.argsort(scores) print('dogs in helpful: {} / 100'.format(np.sum(sorted_indices[-100:] >= begin_mislabel_idx))) print('mean for all: {}'.format(np.mean(scores))) print('mean for horse: {}'.format(np.mean(scores[:begin_mislabel_idx]))) print('mean for dogs: {}'.format(np.mean(scores[begin_mislabel_idx:]))) mis_label_ranking = np.where(sorted_indices >= begin_mislabel_idx)[0] print('all of mis-labels: {}'.format(mis_label_ranking)) total = scores.size total_pos = mis_label_ranking.size total_neg = total - total_pos tpr = np.zeros([total_pos]) fpr = np.zeros([total_pos]) for idx in range(total_pos): tpr[idx] = float(total_pos - idx) fpr[idx] = float(total - mis_label_ranking[idx] - tpr[idx]) tpr /= total_pos fpr /= total_neg histogram = sorted_indices >= begin_mislabel_idx histogram = histogram.reshape([10, -1]) histogram = np.sum(histogram, axis=1) acc = np.cumsum(histogram[::-1]) fig, ax = plt.subplots(1, 2, figsize=(20, 10)) ax[0].set_ylabel('true positive rate') ax[0].set_xlabel('false positive rate') ax[0].set_ylim(0.0, 1.0) ax[0].set_xlim(0.0, 1.0) ax[0].grid(True) ax[0].plot(fpr, tpr) ax[1].set_ylabel('num of mis-label') ax[1].set_xlabel('threshold') ax[1].grid(True) ax[1].bar(range(10), acc) plt.sca(ax[1]) plt.xticks(range(10), ['{}~{}%'.format(p, p + 10) for p in range(0, 100, 10)]) fig, ax = plt.subplots(figsize=(20, 5)) ax.grid(True) ax.plot(scores) ``` <br><br><br><br> ## How to use upweight influence function for mis-label ### Import packages ``` # resnet: implemented by wenxinxu from cifar10_input import * from cifar10_train import Train import tensorflow as tf import numpy as np import matplotlib.pyplot as plt import darkon # to enable specific GPU %set_env CUDA_VISIBLE_DEVICES=0 # cifar-10 classes _classes = ( 'airplane', 'automobile', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck' ) ``` ### Download/Extract cifar10 dataset ``` maybe_download_and_extract() ``` ### Implement dataset feeder ``` class MyFeeder(darkon.InfluenceFeeder): def __init__(self): # load train data # for ihvp data, label = prepare_train_data(padding_size=0) # update some label label = self.make_mislabel(label) self.train_origin_data = data / 256. self.train_label = label self.train_data = whitening_image(data) self.train_batch_offset = 0 def make_mislabel(self, label): target_class_idx = 7 correct_indices = np.where(label == target_class_idx)[0] self.correct_indices = correct_indices[:] # 1% dogs to horses. # In the mis-label model training, I used this script to choose random dogs. labeled_dogs = np.where(label == 5)[0] np.random.shuffle(labeled_dogs) mislabel_indices = labeled_dogs[:int(labeled_dogs.shape[0] * 0.01)] label[mislabel_indices] = 7.0 self.mislabel_indices = mislabel_indices print('target class: {}'.format(_classes[target_class_idx])) print(self.mislabel_indices) return label def test_indices(self, indices): return self.train_data[indices], self.train_label[indices] def train_batch(self, batch_size): # for recursion part # calculate offset start = self.train_batch_offset end = start + batch_size self.train_batch_offset += batch_size return self.train_data[start:end, ...], self.train_label[start:end, ...] def train_one(self, idx): return self.train_data[idx, ...], self.train_label[idx, ...] def reset(self): self.train_batch_offset = 0 # to fix shuffled data np.random.seed(75) feeder = MyFeeder() ``` ### Restore pre-trained model ``` # tf model checkpoint check_point = 'pre-trained-mislabel/model.ckpt-79999' net = Train() net.build_train_validation_graph() saver = tf.train.Saver(tf.global_variables()) sess = tf.InteractiveSession() saver.restore(sess, check_point) ``` ### Upweight influence options ``` approx_params = { 'scale': 200, 'num_repeats': 3, 'recursion_depth': 50, 'recursion_batch_size': 100 } # targets test_indices = list(feeder.correct_indices) + list(feeder.mislabel_indices) print('num test targets: {}'.format(len(test_indices))) ``` ### Run upweight influence function ``` # choose all of trainable layers trainable_variables = tf.get_collection(tf.GraphKeys.TRAINABLE_VARIABLES) # initialize Influence function inspector = darkon.Influence( workspace='./influence-workspace', feeder=feeder, loss_op_train=net.full_loss, loss_op_test=net.loss_op, x_placeholder=net.image_placeholder, y_placeholder=net.label_placeholder, trainable_variables=trainable_variables) scores = list() for i, target in enumerate(test_indices): score = inspector.upweighting_influence( sess, [target], 1, approx_params, [target], 10000000, force_refresh=True ) scores += list(score) print('done: [{}] - {}'.format(i, score)) print(scores) np.save('mislabel-result-all.npy', scores) ``` ### License --- <pre> Copyright 2017 Neosapience, Inc. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. </pre> ---
github_jupyter
``` import os os.chdir('..') os.chdir('..') print(os.getcwd()) import rsnapsim as rss import numpy as np os.chdir('rsnapsim') os.chdir('interactive_notebooks') import numpy as np import matplotlib.pyplot as plt import time poi_strs, poi_objs, tagged_pois,raw_seq = rss.seqmanip.open_seq_file('../gene_files/H2B_withTags.txt') poi = tagged_pois['1'][0] #protein object poi.tag_epitopes['T_Flag'] = [10,20,30,40,50,60,70] poi.tag_epitopes['T_Hemagglutinin'] = [300,330,340,350] plt.style.use('dark_background') plt.rcParams['figure.dpi'] = 120 plt.rcParams['lines.linewidth'] = 1 plt.rcParams['axes.linewidth'] = 1.5 plt.rcParams['font.size'] = 15 plt.rcParams['axes.grid'] = False colors = ['#00ff51', '#00f7ff'] rss.solver.protein=poi t = np.linspace(0,500,501) poi.visualize_probe(colors=['#00ff51', '#00f7ff']) sttime = time.time() ssa_soln = rss.solver.solve_ssa(poi.kelong,t,ki=.033,n_traj=20) solvetime = time.time()-sttime print(ssa_soln.intensity_vec.shape) plt.plot(np.mean(ssa_soln.intensity_vec[0],axis=1),color='#00ff51',alpha=.8) plt.plot(np.mean(ssa_soln.intensity_vec[1],axis=1),color='#00f7ff',alpha=.8) plt.xlabel('time') plt.ylabel('intensity') print("Low memory, no recording: solved in %f seconds" % solvetime) ``` ## Autocovariances with individual means ``` acov,err_acov = rss.inta.get_autocov(ssa_soln.intensity_vec,norm='ind') plt.plot(np.mean(acov[0],axis=1),color=colors[0]);plt.plot(np.mean(acov[1],axis=1),color=colors[1]) plt.plot(np.mean(acov[0],axis=1) - err_acov[0],'--',color=colors[0]);plt.plot(np.mean(acov[1],axis=1)- err_acov[1],'--',color=colors[1]) plt.plot(np.mean(acov[0],axis=1)+ err_acov[0],'--',color=colors[0]);plt.plot(np.mean(acov[1],axis=1)+ err_acov[1],'--',color=colors[1]) plt.plot([0,500],[0,0],'r--') plt.xlim([0,100]) plt.xlabel('tau') plt.ylabel('G(tau)') #normalized by G0 acc,acc_err = rss.inta.get_autocorr(acov) n_traj = acc.shape[-1] err_acov = 1.0/np.sqrt(n_traj)*np.std(acc,ddof=1,axis=2) plt.plot(np.mean(acc[0],axis=1),color=colors[0]);plt.plot(np.mean(acc[1],axis=1),color=colors[1]) plt.plot(np.mean(acc[0],axis=1) - err_acov[0],'--',color=colors[0]);plt.plot(np.mean(acc[1],axis=1)- err_acov[1],'--',color=colors[1]) plt.plot(np.mean(acc[0],axis=1)+ err_acov[0],'--',color=colors[0]);plt.plot(np.mean(acc[1],axis=1)+ err_acov[1],'--',color=colors[1]) plt.plot([0,500],[0,0],'r--') plt.xlim([0,100]) plt.xlabel('tau') plt.ylabel('G(tau)') ``` ## Global means ``` acov,err_acov = rss.inta.get_autocov(ssa_soln.intensity_vec,norm='global') plt.plot(np.mean(acov[0],axis=1),color='seagreen');plt.plot(np.mean(acov[1],axis=1),color='violet') plt.plot(np.mean(acov[0],axis=1) - err_acov[0],'--',color='seagreen');plt.plot(np.mean(acov[1],axis=1)- err_acov[1],'--',color='violet') plt.plot(np.mean(acov[0],axis=1)+ err_acov[0],'--',color='seagreen');plt.plot(np.mean(acov[1],axis=1)+ err_acov[1],'--',color='violet') plt.plot([0,500],[0,0],'r--') plt.xlim([0,100]) #normalized by G0 acc,acc_error = rss.inta.get_autocorr(acov,g0='G1') mean_acc = np.mean(acc,axis=2) plt.plot(mean_acc[0],color='seagreen');plt.plot(mean_acc[1],color='violet') plt.plot(np.mean(acc[0],axis=1) - acc_error[0],'--',color='seagreen');plt.plot(np.mean(acc[1],axis=1)- acc_error[1],'--',color='violet') plt.plot(np.mean(acc[0],axis=1)+ acc_error[0],'--',color='seagreen');plt.plot(np.mean(acc[1],axis=1)+ acc_error[1],'--',color='violet') plt.plot([0,500],[0,0],'r--') plt.xlim([0,100]) ``` ## Cross correlations ``` cross_corr,err_cc,inds = rss.inta.get_crosscorr(ssa_soln.intensity_vec,norm='indiv') plt.figure() s11_cc = np.mean(cross_corr[0],axis=1) s12_cc = np.mean(cross_corr[1],axis=1) s21_cc = np.mean(cross_corr[2],axis=1) s22_cc = np.mean(cross_corr[3],axis=1) plt.plot(s11_cc/s11_cc[500],color=colors[0] ); plt.plot(s21_cc/s21_cc[500],color='#ff00ee'); plt.plot(s22_cc/s22_cc[500],color=colors[1]); plt.plot(s11_cc/s11_cc[500] - err_cc[0]/s11_cc[500],'--',color=colors[0] ); plt.plot(s11_cc/s11_cc[500] + err_cc[0]/s11_cc[500],'--',color=colors[0] ); plt.plot(s21_cc/s21_cc[500] - err_cc[2]/s21_cc[500] ,'--',color='#ff00ee' ); plt.plot(s21_cc/s21_cc[500] + err_cc[2]/s21_cc[500] ,'--',color='#ff00ee'); plt.plot(s22_cc/s22_cc[500] - s22_cc[3]/s22_cc[500],'--',color=colors[1] ); plt.plot(s22_cc/s22_cc[500] + s22_cc[3]/s22_cc[500],'--',color=colors[1] ); plt.plot([500,500],[0,1.1],'r--') plt.plot([400,600],[0,0],'r--') plt.legend(['00','10','11' ]) plt.xlim([400,600]) plt.xlabel('tau') plt.ylabel('G(tau)') ``` ## normalization modes | norm | effect | | :- | :-: | | global | subtract all intensities by the global mean intensity before correlation | | individual | subtract all intensities by the trajectory mean intensity before correlation | | raw | do nothing, correlate the intensities as they are | ## G0 | norm | effect | | :- | :-: | | global_max | divide correlations by the global maximum point | | individual_max | divide correlations by the individual trajectory maximum point | | global_center | divide correlations by the global average of the center point of the correlation | | individual_center | divide all correlations by the trajectory center point value | | None | do nothing, do not normalize the correlations by anything| ``` cross_corr,err_cc,inds = rss.inta.get_crosscorr(ssa_soln.intensity_vec,norm='indiv',g0='indiv_max') plt.figure() plt.plot(cross_corr[0], color = colors[0],alpha=.5) plt.plot(cross_corr[2],color = '#ff00ee',alpha=.5) plt.plot(cross_corr[3], color = colors[1],alpha=.5) s11_cc = np.mean(cross_corr[0],axis=1) s12_cc = np.mean(cross_corr[1],axis=1) s21_cc = np.mean(cross_corr[2],axis=1) s22_cc = np.mean(cross_corr[3],axis=1) plt.plot([500,500],[0,1.1],'r--') plt.plot([400,600],[0,0],'r--') plt.legend(['00','10','11' ]) plt.xlim([400,600]) plt.xlabel('tau') plt.ylabel('G(tau)') cross_corr,err_cc,inds = rss.inta.get_crosscorr(ssa_soln.intensity_vec,norm='global',g0='indiv_max') plt.figure() plt.plot(cross_corr[0], color = colors[0],alpha=.5) plt.plot(cross_corr[2],color = '#ff00ee',alpha=.5) plt.plot(cross_corr[3], color = colors[1],alpha=.5) s11_cc = np.mean(cross_corr[0],axis=1) s12_cc = np.mean(cross_corr[1],axis=1) s21_cc = np.mean(cross_corr[2],axis=1) s22_cc = np.mean(cross_corr[3],axis=1) plt.plot([500,500],[0,1.1],'r--') plt.plot([400,600],[0,0],'r--') plt.legend(['00','10','11' ]) plt.xlim([400,600]) plt.xlabel('tau') plt.ylabel('G(tau)') ```
github_jupyter
##### Copyright 2020 The TensorFlow Authors. ``` #@title Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. ``` # Question Answer with TensorFlow Lite Model Maker <table class="tfo-notebook-buttons" align="left"> <td> <a target="_blank" href="https://www.tensorflow.org/lite/tutorials/model_maker_question_answer"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a> </td> <td> <a target="_blank" href="https://colab.research.google.com/github/tensorflow/tensorflow/blob/master/tensorflow/lite/g3doc/tutorials/model_maker_question_answer.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a> </td> <td> <a target="_blank" href="https://github.com/tensorflow/tensorflow/blob/master/tensorflow/lite/g3doc/tutorials/model_maker_question_answer.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a> </td> <td> <a href="https://storage.googleapis.com/tensorflow_docs/tensorflow/tensorflow/lite/g3doc/tutorials/model_maker_question_answer.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a> </td> </table> The TensorFlow Lite Model Maker library simplifies the process of adapting and converting a TensorFlow model to particular input data when deploying this model for on-device ML applications. This notebook shows an end-to-end example that utilizes the Model Maker library to illustrate the adaptation and conversion of a commonly-used question answer model for question answer task. # Introduction to Question Answer Task The supported task in this library is extractive question answer task, which means given a passage and a question, the answer is the span in the passage. The image below shows an example for question answer. <p align="center"><img src="https://storage.googleapis.com/download.tensorflow.org/models/tflite/screenshots/model_maker_squad_showcase.png" width="500"></p> <p align="center"> <em>Answers are spans in the passage (image credit: <a href="https://rajpurkar.github.io/mlx/qa-and-squad/">SQuAD blog</a>) </em> </p> As for the model of question answer task, the inputs should be the passage and question pair that are already preprocessed, the outputs should be the start logits and end logits for each token in the passage. The size of input could be set and adjusted according to the length of passage and question. ## End-to-End Overview The following code snippet demonstrates how to get the model within a few lines of code. The overall process includes 5 steps: (1) choose a model, (2) load data, (3) retrain the model, (4) evaluate, and (5) export it to TensorFlow Lite format. ```python # Chooses a model specification that represents the model. spec = model_spec.get('mobilebert_qa') # Gets the training data and validation data. train_data = QuestionAnswerDataLoader.from_squad(train_data_path, spec, is_training=True) validation_data = QuestionAnswerDataLoader.from_squad(validation_data_path, spec, is_training=False) # Fine-tunes the model. model = question_answer.create(train_data, model_spec=spec) # Gets the evaluation result. metric = model.evaluate(validation_data) # Exports the model to the TensorFlow Lite format in the export directory. model.export(export_dir) ``` The following sections explain the code in more detail. ## Prerequisites To run this example, install the required packages, including the Model Maker package from the [GitHub repo](https://github.com/tensorflow/examples/tree/master/tensorflow_examples/lite/model_maker). ``` !pip install tflite-model-maker ``` Import the required packages. ``` import numpy as np import os import tensorflow as tf assert tf.__version__.startswith('2') from tflite_model_maker import configs from tflite_model_maker import model_spec from tflite_model_maker import question_answer from tflite_model_maker import QuestionAnswerDataLoader ``` The "End-to-End Overview" demonstrates a simple end-to-end example. The following sections walk through the example step by step to show more detail. ## Choose a model_spec that represents a model for question answer Each `model_spec` object represents a specific model for question answer. The Model Maker currently supports MobileBERT and BERT-Base models. Supported Model | Name of model_spec | Model Description --- | --- | --- [MobileBERT](https://arxiv.org/pdf/2004.02984.pdf) | 'mobilebert_qa' | 4.3x smaller and 5.5x faster than BERT-Base while achieving competitive results, suitable for on-device scenario. [MobileBERT-SQuAD](https://arxiv.org/pdf/2004.02984.pdf) | 'mobilebert_qa_squad' | Same model architecture as MobileBERT model and the initial model is already retrained on [SQuAD1.1](https://rajpurkar.github.io/SQuAD-explorer/). [BERT-Base](https://arxiv.org/pdf/1810.04805.pdf) | 'bert_qa' | Standard BERT model that widely used in NLP tasks. In this tutorial, [MobileBERT-SQuAD](https://arxiv.org/pdf/2004.02984.pdf) is used as an example. Since the model is already retrained on [SQuAD1.1](https://rajpurkar.github.io/SQuAD-explorer/), it could coverage faster for question answer task. ``` spec = model_spec.get('mobilebert_qa_squad') ``` ## Load Input Data Specific to an On-device ML App and Preprocess the Data The [TriviaQA](https://nlp.cs.washington.edu/triviaqa/) is a reading comprehension dataset containing over 650K question-answer-evidence triples. In this tutorial, you will use a subset of this dataset to learn how to use the Model Maker library. To load the data, convert the TriviaQA dataset to the [SQuAD1.1](https://rajpurkar.github.io/SQuAD-explorer/) format by running the [converter Python script](https://github.com/mandarjoshi90/triviaqa#miscellaneous) with `--sample_size=8000` and a set of `web` data. Modify the conversion code a little bit by: * Skipping the samples that couldn't find any answer in the context document; * Getting the original answer in the context without uppercase or lowercase. Download the archived version of the already converted dataset. ``` train_data_path = tf.keras.utils.get_file( fname='triviaqa-web-train-8000.json', origin='https://storage.googleapis.com/download.tensorflow.org/models/tflite/dataset/triviaqa-web-train-8000.json') validation_data_path = tf.keras.utils.get_file( fname='triviaqa-verified-web-dev.json', origin='https://storage.googleapis.com/download.tensorflow.org/models/tflite/dataset/triviaqa-verified-web-dev.json') ``` You can also train the MobileBERT model with your own dataset. If you are running this notebook on Colab, upload your data by using the left sidebar. <img src="https://storage.googleapis.com/download.tensorflow.org/models/tflite/screenshots/model_maker_question_answer.png" alt="Upload File" width="800" hspace="100"> If you prefer not to upload your data to the cloud, you can also run the library offline by following the [guide](https://github.com/tensorflow/examples/tree/master/tensorflow_examples/lite/model_maker). Use the `QuestionAnswerDataLoader.from_squad` method to load and preprocess the [SQuAD format](https://rajpurkar.github.io/SQuAD-explorer/) data according to a specific `model_spec`. You can use either SQuAD2.0 or SQuAD1.1 formats. Setting parameter `version_2_with_negative` as `True` means the formats is SQuAD2.0. Otherwise, the format is SQuAD1.1. By default, `version_2_with_negative` is `False`. ``` train_data = QuestionAnswerDataLoader.from_squad(train_data_path, spec, is_training=True) validation_data = QuestionAnswerDataLoader.from_squad(validation_data_path, spec, is_training=False) ``` ## Customize the TensorFlow Model Create a custom question answer model based on the loaded data. The `create` function comprises the following steps: 1. Creates the model for question answer according to `model_spec`. 2. Train the question answer model. The default epochs and the default batch size are set according to two variables `default_training_epochs` and `default_batch_size` in the `model_spec` object. ``` model = question_answer.create(train_data, model_spec=spec) ``` Have a look at the detailed model structure. ``` model.summary() ``` ## Evaluate the Customized Model Evaluate the model on the validation data and get a dict of metrics including `f1` score and `exact match` etc. Note that metrics are different for SQuAD1.1 and SQuAD2.0. ``` model.evaluate(validation_data) ``` ## Export to TensorFlow Lite Model Convert the existing model to TensorFlow Lite model format that you can later use in an on-device ML application. Since MobileBERT is too big for on-device applications, use dynamic range quantization on the model to compress MobileBERT by 4x with the minimal loss of performance. First, define the quantization configuration: ``` config = configs.QuantizationConfig.create_dynamic_range_quantization(optimizations=[tf.lite.Optimize.OPTIMIZE_FOR_LATENCY]) config._experimental_new_quantizer = True ``` Export the quantized TFLite model according to the quantization config and save the vocabulary to a vocab file. The default TFLite model filename is `model.tflite`, and the default vocab filename is `vocab`. ``` model.export(export_dir='.', quantization_config=config) ``` You can use the TensorFlow Lite model file and vocab file in the [bert_qa](https://github.com/tensorflow/examples/tree/master/lite/examples/bert_qa/android) reference app by downloading it from the left sidebar on Colab. You can also evalute the tflite model with the `evaluate_tflite` method. This step is expected to take a long time. ``` model.evaluate_tflite('model.tflite', validation_data) ``` ## Advanced Usage The `create` function is the critical part of this library in which the `model_spec` parameter defines the model specification. The `BertQAModelSpec` class is currently supported. There are 2 models: MobileBERT model, BERT-Base model. The `create` function comprises the following steps: 1. Creates the model for question answer according to `model_spec`. 2. Train the question answer model. This section describes several advanced topics, including adjusting the model, tuning the training hyperparameters etc. ### Adjust the model You can adjust the model infrastructure like parameters `seq_len` and `query_len` in the `BertQAModelSpec` class. Adjustable parameters for model: * `seq_len`: Length of the passage to feed into the model. * `query_len`: Length of the question to feed into the model. * `doc_stride`: The stride when doing a sliding window approach to take chunks of the documents. * `initializer_range`: The stdev of the truncated_normal_initializer for initializing all weight matrices. * `trainable`: Boolean, whether pre-trained layer is trainable. Adjustable parameters for training pipeline: * `model_dir`: The location of the model checkpoint files. If not set, temporary directory will be used. * `dropout_rate`: The rate for dropout. * `learning_rate`: The initial learning rate for Adam. * `predict_batch_size`: Batch size for prediction. * `tpu`: TPU address to connect to. Only used if using tpu. For example, you can train the model with a longer sequence length. If you change the model, you must first construct a new `model_spec`. ``` new_spec = model_spec.get('mobilebert_qa') new_spec.seq_len = 512 ``` The remaining steps are the same. Note that you must rerun both the `dataloader` and `create` parts as different model specs may have different preprocessing steps. ### Tune training hyperparameters You can also tune the training hyperparameters like `epochs` and `batch_size` to impact the model performance. For instance, * `epochs`: more epochs could achieve better performance, but may lead to overfitting. * `batch_size`: number of samples to use in one training step. For example, you can train with more epochs and with a bigger batch size like: ```python model = question_answer.create(train_data, model_spec=spec, epochs=5, batch_size=64) ``` ### Change the Model Architecture You can change the base model your data trains on by changing the `model_spec`. For example, to change to the BERT-Base model, run: ```python spec = model_spec.get('bert_qa') ``` The remaining steps are the same.
github_jupyter
##### Copyright 2018 Google LLC. Licensed under the Apache License, Version 2.0 (the "License"); Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at https://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. # Training a Simple Neural Network, with tensorflow/datasets Data Loading _Forked from_ `neural_network_and_data_loading.ipynb` _Dougal Maclaurin, Peter Hawkins, Matthew Johnson, Roy Frostig, Alex Wiltschko, Chris Leary_ ![JAX](https://raw.githubusercontent.com/google/jax/master/images/jax_logo_250px.png) Let's combine everything we showed in the [quickstart notebook](https://colab.research.google.com/github/google/jax/blob/master/notebooks/quickstart.ipynb) to train a simple neural network. We will first specify and train a simple MLP on MNIST using JAX for the computation. We will use `tensorflow/datasets` data loading API to load images and labels (because it's pretty great, and the world doesn't need yet another data loading library :P). Of course, you can use JAX with any API that is compatible with NumPy to make specifying the model a bit more plug-and-play. Here, just for explanatory purposes, we won't use any neural network libraries or special APIs for builidng our model. ``` !pip install --upgrade -q https://storage.googleapis.com/jax-wheels/cuda$(echo $CUDA_VERSION | sed -e 's/\.//' -e 's/\..*//')/jaxlib-0.1.15-cp36-none-linux_x86_64.whl !pip install --upgrade -q jax from __future__ import print_function, division, absolute_import import jax.numpy as np from jax import grad, jit, vmap from jax import random ``` ### Hyperparameters Let's get a few bookkeeping items out of the way. ``` # A helper function to randomly initialize weights and biases # for a dense neural network layer def random_layer_params(m, n, key, scale=1e-2): w_key, b_key = random.split(key) return scale * random.normal(w_key, (n, m)), scale * random.normal(b_key, (n,)) # Initialize all layers for a fully-connected neural network with sizes "sizes" def init_network_params(sizes, key): keys = random.split(key, len(sizes)) return [random_layer_params(m, n, k) for m, n, k in zip(sizes[:-1], sizes[1:], keys)] layer_sizes = [784, 512, 512, 10] param_scale = 0.1 step_size = 0.0001 num_epochs = 10 batch_size = 128 n_targets = 10 params = init_network_params(layer_sizes, random.PRNGKey(0)) ``` ### Auto-batching predictions Let us first define our prediction function. Note that we're defining this for a _single_ image example. We're going to use JAX's `vmap` function to automatically handle mini-batches, with no performance penalty. ``` from jax.scipy.misc import logsumexp def relu(x): return np.maximum(0, x) def predict(params, image): # per-example predictions activations = image for w, b in params[:-1]: outputs = np.dot(w, activations) + b activations = relu(outputs) final_w, final_b = params[-1] logits = np.dot(final_w, activations) + final_b return logits - logsumexp(logits) ``` Let's check that our prediction function only works on single images. ``` # This works on single examples random_flattened_image = random.normal(random.PRNGKey(1), (28 * 28,)) preds = predict(params, random_flattened_image) print(preds.shape) # Doesn't work with a batch random_flattened_images = random.normal(random.PRNGKey(1), (10, 28 * 28)) try: preds = predict(params, random_flattened_images) except TypeError: print('Invalid shapes!') # Let's upgrade it to handle batches using `vmap` # Make a batched version of the `predict` function batched_predict = vmap(predict, in_axes=(None, 0)) # `batched_predict` has the same call signature as `predict` batched_preds = batched_predict(params, random_flattened_images) print(batched_preds.shape) ``` At this point, we have all the ingredients we need to define our neural network and train it. We've built an auto-batched version of `predict`, which we should be able to use in a loss function. We should be able to use `grad` to take the derivative of the loss with respect to the neural network parameters. Last, we should be able to use `jit` to speed up everything. ### Utility and loss functions ``` def one_hot(x, k, dtype=np.float32): """Create a one-hot encoding of x of size k.""" return np.array(x[:, None] == np.arange(k), dtype) def accuracy(params, images, targets): target_class = np.argmax(targets, axis=1) predicted_class = np.argmax(batched_predict(params, images), axis=1) return np.mean(predicted_class == target_class) def loss(params, images, targets): preds = batched_predict(params, images) return -np.sum(preds * targets) @jit def update(params, x, y): grads = grad(loss)(params, x, y) return [(w - step_size * dw, b - step_size * db) for (w, b), (dw, db) in zip(params, grads)] ``` ### Data Loading with `tensorflow/datasets` JAX is laser-focused on program transformations and accelerator-backed NumPy, so we don't include data loading or munging in the JAX library. There are already a lot of great data loaders out there, so let's just use them instead of reinventing anything. We'll use the `tensorflow/datasets` data loader. ``` # Install tensorflow-datasets # TODO(rsepassi): Switch to stable version on release !pip install -q --upgrade tfds-nightly tf-nightly import tensorflow_datasets as tfds data_dir = '/tmp/tfds' # Fetch full datasets for evaluation # tfds.load returns tf.Tensors (or tf.data.Datasets if batch_size != -1) # You can convert them to NumPy arrays (or iterables of NumPy arrays) with tfds.dataset_as_numpy mnist_data, info = tfds.load(name="mnist", batch_size=-1, data_dir=data_dir, with_info=True) mnist_data = tfds.as_numpy(mnist_data) train_data, test_data = mnist_data['train'], mnist_data['test'] num_labels = info.features['label'].num_classes h, w, c = info.features['image'].shape num_pixels = h * w * c # Full train set train_images, train_labels = train_data['image'], train_data['label'] train_images = np.reshape(train_images, (len(train_images), num_pixels)) train_labels = one_hot(train_labels, num_labels) # Full test set test_images, test_labels = test_data['image'], test_data['label'] test_images = np.reshape(test_images, (len(test_images), num_pixels)) test_labels = one_hot(test_labels, num_labels) print('Train:', train_images.shape, train_labels.shape) print('Test:', test_images.shape, test_labels.shape) ``` ### Training Loop ``` import time def get_train_batches(): # as_supervised=True gives us the (image, label) as a tuple instead of a dict ds = tfds.load(name='mnist', split='train', as_supervised=True, data_dir=data_dir) # You can build up an arbitrary tf.data input pipeline ds = ds.batch(128).prefetch(1) # tfds.dataset_as_numpy converts the tf.data.Dataset into an iterable of NumPy arrays return tfds.as_numpy(ds) for epoch in range(num_epochs): start_time = time.time() for x, y in get_train_batches(): x = np.reshape(x, (len(x), num_pixels)) y = one_hot(y, num_labels) params = update(params, x, y) epoch_time = time.time() - start_time train_acc = accuracy(params, train_images, train_labels) test_acc = accuracy(params, test_images, test_labels) print("Epoch {} in {:0.2f} sec".format(epoch, epoch_time)) print("Training set accuracy {}".format(train_acc)) print("Test set accuracy {}".format(test_acc)) ``` We've now used the whole of the JAX API: `grad` for derivatives, `jit` for speedups and `vmap` for auto-vectorization. We used NumPy to specify all of our computation, and borrowed the great data loaders from `tensorflow/datasets`, and ran the whole thing on the GPU.
github_jupyter
# K Nearest Neighbor (KNN) Model ``` # Update sklearn to prevent version mismatches !pip install sklearn --upgrade # Update sklearn to prevent version mismatches !pip install tensorflow==2.2 --upgrade !pip install keras --upgrade # Install joblib. This will be used to save your model. # Restart your kernel after installing !pip install joblib import pandas as pd ``` # Read the CSV and Perform Basic Data Cleaning ``` df = pd.read_csv("exoplanet_data.csv") # Drop the null columns where all values are null df = df.dropna(axis='columns', how='all') # Drop the null rows df = df.dropna() df.head() ``` # Select your features (columns) ``` # Set features. This will also be used as your x values. selected_features = df[['koi_fpflag_nt','koi_fpflag_ss','koi_fpflag_co','koi_fpflag_ec', 'koi_period','koi_period_err1','koi_period_err2', 'koi_time0bk','koi_time0bk_err1','koi_time0bk_err2', 'koi_impact','koi_impact_err1','koi_impact_err2', 'koi_duration','koi_duration_err1','koi_duration_err2', 'koi_depth','koi_depth_err1','koi_depth_err2', 'koi_prad','koi_prad_err1','koi_prad_err2', 'koi_teq','koi_insol','koi_insol_err1','koi_insol_err2', 'koi_model_snr','koi_steff','koi_steff_err1','koi_steff_err2', 'koi_slogg','koi_slogg_err1','koi_slogg_err2', 'koi_srad','koi_srad_err1','koi_srad_err2', 'ra','dec','koi_kepmag']] selected_features.head() ``` # Create a Train Test Split Use `koi_disposition` for the y values ``` # Define target dataframe, target_names array, and X and y variables target = df["koi_disposition"] target_names = ["Confirmed", "False Positive", "Candidate"] X = selected_features y = target # Derive X and y training and testing variables from sklearn.model_selection import train_test_split X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42, stratify=y) X_train.head() ``` # Pre-processing Scale the data using the MinMaxScaler and perform some feature selection ``` # Scale your data from sklearn.preprocessing import LabelEncoder, MinMaxScaler from tensorflow.keras.utils import to_categorical X_scaler = MinMaxScaler().fit(X_train) X_train_scaled = X_scaler.transform(X_train) X_test_scaled = X_scaler.transform(X_test) # Label-encode data set and print the encoded_y_test label_encoder = LabelEncoder() label_encoder.fit(y_train) encoded_y_train = label_encoder.transform(y_train) encoded_y_test = label_encoder.transform(y_test) y_train_categorical = to_categorical(encoded_y_train) y_test_categorical = to_categorical(encoded_y_test) print(y_test_categorical) ``` # Train the Model ``` from sklearn.neighbors import KNeighborsClassifier train_scores = [] test_scores = [] for k in range(1, 20, 2): knn = KNeighborsClassifier(n_neighbors=k) knn.fit(X_train_scaled, encoded_y_train) train_score = knn.score(X_train_scaled, encoded_y_train) test_score = knn.score(X_test_scaled, encoded_y_test) train_scores.append(train_score) test_scores.append(test_score) print(f"k: {k}, Train/Test Score: {train_score:.3f}/{test_score:.3f}") model3 = KNeighborsClassifier(n_neighbors=13) model3 # Fit the data and print Training Data Scores model3.fit(X_train_scaled, encoded_y_train) print(f"Training Data Score: {knn.score(X_train_scaled, encoded_y_train)}") print(f"Testing Data Score: {knn.score(X_test_scaled, encoded_y_test)}") ``` # Test KNN Model ``` # Make predictions predictions = model3.predict(X_test_scaled) predictions # Calculate classification report from sklearn.metrics import classification_report print(classification_report(encoded_y_test, predictions, target_names=target_names)) ``` # Save the Model ``` # save your model by updating "your_name" with your name # and "your_model" with your model variable # be sure to turn this in to BCS # if joblib fails to import, try running the command to install in terminal/git-bash import joblib filename = 'knn.sav' joblib.dump(model3, filename) ```
github_jupyter
``` import pandas as pd import os import glob raw_data_path = os.path.join('data', 'raw') clean_filename = os.path.join('data', 'clean', 'data.csv') ``` # Read data ``` all_files = glob.glob(raw_data_path + "/top_songs_with_lyrics.csv") raw_data = pd.concat(pd.read_csv(f) for f in all_files) raw_data.head() ``` # Pre processing ``` import re import string import nltk from nltk.corpus import stopwords from nltk.tokenize import word_tokenize nltk.download('punkt') nltk.download('stopwords') #Puntuaction Removing [!”#$%&’()*+,-./:;<=>?@[\]^_`{|}~]: def clean_puntuaction(input_df): result=input_df for idx in range(result.shape[0]): result[idx]=result[idx].replace("'","") result[idx]=result[idx].replace("\r"," ") result[idx]=result[idx].replace("\n","") result[idx]= re.sub("[\(\[].*?[\)\]]", "", result[idx]) result[idx]= re.sub(r'[^\w\s]', '', result[idx]) return result def remove_accents(input_str): """ remueve acentos, aunque al ser un texto en inglés no deberían existir acentos """ nfkd_form = unicodedata.normalize('NFKD', input_str ) return u"".join([c for c in nfkd_form if not unicodedata.combining(c)]) #Puntuaction Removing [!”#$%&’()*+,-./:;<=>?@[\]^_`{|}~]: def clean_puntuaction(input_df): result=input_df print(result.shape) #for idx in range(result.shape[0]): # for idx in result: # idx=idx+1 # if(result[idx]==""): # continue # result[idx]=result[idx].replace("'","") # result[idx]=result[idx].replace("\r"," ") # result[idx]=result[idx].replace("\n","") # result[idx]= re.sub("[\(\[].*?[\)\]]", "", result[idx]) # result[idx]= re.sub(r'[^\w\s]', '', result[idx]) # result[idx]= remove_accents(result[idx]) cont=1 for idx in result.values: #idx=idx+1 #if(result[idx]==""): # continue idx=idx.replace("'","") idx=idx.replace("\r"," ") idx=idx.replace("\n"," ") idx= re.sub("[\(\[].*?[\)\]]", "", idx) idx= re.sub(r'[^\w\s]', '', idx) idx= remove_accents(idx) print(cont) print (idx) result[cont]=idx cont=cont+1 return result def clean_str_puntuaction(input_df): input_df=input_df.replace("'","") input_df=input_df.replace("\r"," ") input_df=input_df.replace("\n"," ") input_df=input_df.replace("-"," ") input_df= re.sub("[\(\[].*?[\)\]]", "", input_df) input_df= re.sub(r'[^\w\s]', '', input_df) input_df= remove_accents(input_df) return input_df def remove_stopwords(input_df): result=input_df for idx in range(result.shape[0]): tokens = word_tokenize(result[idx]) stop_words = stopwords.words('spanish') more_stopwords = ['si', 'pa', 'sé', 'solo', 'yeah', 'yeh', 'oh', 'i', 'to', 'va', 'the', 'aunque', 'you', 'eh', 'cómo','ma'] total_stopwords = stop_words + more_stopwords result[idx] = [i for i in tokens if not i in total_stopwords] return result # TODO: Perform cleaning data_filename = 'data/raw/top_songs_with_lyrics.csv' dataset = pd.read_csv(data_filename) dataset.columns.tolist() #dataset.iloc[:,0] #df=dataset[['artists ','title','lyric # lowercase ']] df=dataset['lyric '].str.lower() df=clean_puntuaction(df) df=remove_stopwords(df) #df[1] #freq = nltk.FreqDist(tokens) clean_data = pd.DataFrame(data={'dummy': [1, 2]}) clean_data.to_csv(clean_filename, index=False) ```
github_jupyter
# Linked List vs. Array ## Node and Linked list linked list is based on node structure. We can imagine a node to be an indivdual pod. In each pod, we store different types of data - numbers, strings A linked list also store pointers in each pod on top of the data. If the linked list only has 1 pointer, which points forward, then it is a singly linked list. On the other hand, if it has 2 pointers, pointing forward and backward, then this is a double linked list. ## What's the difference between a singly linked list and array? Singly linked list and array sounds similar because both of them indicate a sequence of data that can only move forward. The difference is, each element in the array is not an individual pod. We can use family and houses as an anology. Linked list is like an apartment, from the 1st floor to the 10th floor. Each floor has a different household. Array is like one of the households in the aparment of a linked list. An array is the family, all elements must be from the same family (same data type). Just like in a family, you have people from the youngest to the oldest. ## When implementing Stack, should we use linked-list or array? Stack can be implemented by using either method. The main concern is about the memory space. - Space: - array-based data structure is a lot more space efficient. The whole stack only requires 1 reference-sized array cell, while in a linked-list, each node has 2 references. - However, a linked-list can grow and shrink dynamically. Therefore, if the Stack is empty, linked-list can free up the space. Also, dynamic array can run into the memory allocation issues. In Python, Stack is normally being implemented by using dynamic arrays, which the built-in list type. "The list Built-in Python’s built-in list type makes a decent stack data structure as it supports push and pop operations in amortized O(1) time. Python’s lists are implemented as dynamic arrays internally which means they occasionally need to resize the storage space for elements stored in them whenever they are added or removed. The storage space is allocated more than required, so that not every push or pop requires resizing and you get an amortized O(1) time complexity for these operations. Although this makes their performance less consistent than the stable O(1) inserts and deletes provided by a linked list-based implementation. On the other hand, lists provide fast O(1) time random access to elements on the stack which can be an added benefit." -- From [Eruka.co](https://www.edureka.co/blog/stack-in-python/) ``` #array based myStack = [] myStack.append('This') myStack.append('is') myStack.append('array') print(myStack) #return ['This', 'is', 'array'] myStack.pop() #'array' #use node/double linked-list based from collections import deque myStack = deque() myStack.append('This') myStack.append('is') myStack.append('linked-list') print(myStack) #return ['This', 'is', 'linked-list'] myStack.pop() #'linked-list' ``` As we can see both methods have the exact same interface. The time cost for push() and pop() operation are also the same. However, the data strucure behind both methods are different. According to [Real Python](https://realpython.com/how-to-implement-python-stack/), it would better to use deque when we don' need to consider threading. When we need to consider threading, we should use LifoQueue, in which all methods (not just push and pop) are desinged to be threaded safe. This design is in the expense of extra running time. ``` from queue import LifoQueue myStack = LifoQueue() myStack.put('This') myStack.put('is') myStack.put('list') print(myStack) #return ['This', 'is', 'list'] myStack.get() #'linked-list' ```
github_jupyter
``` import matplotlib.pyplot as plt from sklearn.neighbors import LocalOutlierFactor import pandas as pd import os import numpy as np # file_dir = os.getcwd() # raw_data_dir = os.path.join(file_dir, '/raw_data') file_list = [] for root, dirs, files in os.walk('./raw_data'): for file in files: if os.path.splitext(file)[1] == '.csv': # 排除掉readme.md等非csv文件 file_list.append(file) # print(file_list) df = pd.DataFrame() for index, csv in enumerate(file_list): df_temp = pd.read_csv('./raw_data/'+csv) if int(csv[-5]) == 0: file_list[index] = csv[:-5] + '1' +csv[-4:] print('changed csv: ', csv) target_column = pd.DataFrame(np.array([int(file_list[index][-5])]*df_temp.shape[0])) # 构造target列,注意要使用二维的array [[1],[1]]这样是列 [[1,1]]这样是行 df_temp = pd.concat([df_temp, target_column], axis=1, ignore_index=True) # 连接样本和target列 df = pd.concat([df, df_temp], ignore_index=True) # 连接所有样本 print(file_list) pd.set_option('display.max_rows', 10) print(df) target_column = df.iloc[:, -1] df = df.iloc[:, :-1] # 让脸部温度单独保存,环境温度设计为统一值 ta = df.min(axis=1) df_face = pd.DataFrame() # df_face 脸部温度+其他区域温度置换为环境温度 df_onlyface = pd.DataFrame() # df_onlyface 只有脸部温度点 for i, minTa in zip(df.values, ta): face = [] onlyface = [] for j in i: try: # 因为检查到有一些数字不是float是str,像21.42346.1,不知是什么原因导致的, if j - minTa > 7: face.append(j) # onlyface.append(j) else: face.append(minTa) ave_ta .... except: j = float(j[:6]) if j - minTa > 7: face.append(j) # onlyface.append(j) else: face.append(minTa) face_todf = pd.DataFrame(face).T # onlyface_todf = pd.DataFrame(onlyface).T df_face = pd.concat([df_face, face_todf], axis = 0, ignore_index=True) # df_onlyface = pd.concat([df_onlyface, onlyface_todf], axis = 0, ignore_index=True) # fit the model for outlier detection (default) clf = LocalOutlierFactor(n_neighbors=40, contamination=0.05) # use fit_predict to compute the predicted labels of the training samples # (when LOF is used for outlier detection, the estimator has no predict, # decision_function and score_samples methods). y_pred = clf.fit_predict(df_face) # n_errors = (y_pred != ground_truth).sum() X_scores = clf.negative_outlier_factor_ min_index = np.argpartition(X_scores, int(df_face.shape[0]*0.05))[:int(df_face.shape[0]*0.05)] # 去除掉5%的异常样本 df = df.drop(min_index) df ta = ta.iloc[df.index] ta.index = np.arange(ta.shape[0]) for i, minTa in zip(df.values, ta): # face = [] onlyface = [] for j in i: try: # 因为检查到有一些数字不是float是str,像21.42346.1,不知是什么原因导致的, if j - minTa > 7: # face.append(j) onlyface.append(j) # else: # face.append(minTa) except: j = float(j[:6]) if j - minTa > 7: # face.append(j) onlyface.append(j) # else: # face.append(minTa) # face_todf = pd.DataFrame(face).T onlyface_todf = pd.DataFrame(onlyface).T # df_face = pd.concat([df_face, face_todf], axis = 0, ignore_index=True) df_onlyface = pd.concat([df_onlyface, onlyface_todf], axis = 0, ignore_index=True) skewness = pd.DataFrame(df_onlyface.skew(axis=1)) maxTemp = pd.DataFrame(df_onlyface.max(axis=1)) minTemp = pd.DataFrame(df_onlyface.min(axis=1)) meanTemp = pd.DataFrame(df_onlyface.mean(axis=1)) # 指定划分bin的点 bins = [28.3, 28.6, 28.9, 29.2, 29.5, 29.8, 30.1, 30.4, 30.7, 31.0, 31.3, 31.6, 31.9, 32.2, 32.5, 32.8, 33.1, 33.4, 33.7, 34.0, 34.3, 34.6, 34.9, 35.2, 35.5, 35.8, 36.1, 36.4, 36.7] highest_bin_list = [] for i in df_onlyface.values: i = [j for j in i if not np.isnan(j)] N, _ = np.histogram(np.clip(i,28.3,36.7), bins=bins) highest_bin = (bins[N.argmax()]+bins[N.argmax()+1])/2 # 返回各区域频数N highest_bin_list.append(highest_bin) modeTemp = pd.DataFrame(highest_bin_list, index=df_onlyface.index) features = pd.concat([skewness, maxTemp, minTemp, meanTemp, modeTemp, ta], axis=1) features.columns = ['skewness', 'maxTemp', 'minTemp', 'meanTemp', 'modeTemp', 'ta'] features['max_minus_min'] = features['maxTemp'] - features['minTemp'] features['mode_minus_ta'] = features['modeTemp'] - features['ta'] # features['mean_minus_ta'] = features['meanTemp'] - features['ta'] # features['max_minus_ta'] = features['maxTemp'] - features['ta'] features['min_minus_ta'] = features['minTemp'] - features['ta'] # features['mode_minus_min'] = features['modeTemp'] - features['minTemp'] # features['mean_minus_min'] = features['meanTemp'] - features['minTemp'] # features['max_minus_mean'] = features['maxTemp'] - features['meanTemp'] # features['mode_squa'] = features['modeTemp'] ** 2 # features['mean_squa'] = features['meanTemp'] ** 2 # features['max_squa'] = features['maxTemp'] ** 2 # features['mode_cub'] = features['modeTemp'] ** 3 # features['mean_cub'] = features['meanTemp'] ** 3 # features['max_cub'] = features['maxTemp'] ** 3 features = features.drop(["minTemp"], axis=1) import seaborn as sns import matplotlib.pyplot as plt featuresCorr = features.corr('spearman') fig = plt.figure(figsize=(20, 20)) # plt.subplots((1,1,1)) # 设置画面大小 sns.heatmap(featuresCorr, annot=True, vmax=1, square=True) plt.show() from sklearn.preprocessing import StandardScaler std = StandardScaler() features_scaler = std.fit_transform(features) target_column = target_column.iloc[df.index] features_scaler.shape target_column.shape from sklearn.model_selection import train_test_split train = features_scaler target = target_column.values train_X,test_X, train_y, test_y = train_test_split(train, target, test_size = 0.1, random_state = 0) from sklearn.feature_selection import SelectFromModel from sklearn.ensemble import ExtraTreesClassifier clf = ExtraTreesClassifier() clf = clf.fit(train_X, train_y) print(clf.feature_importances_ ) model = SelectFromModel(clf, prefit=True) print(features.columns[model.get_support(indices=True)]) # print(model.get_support(indices=True)) X_new = model.transform(train_X) from sklearn.gaussian_process.kernels import RBF from sklearn.gaussian_process import GaussianProcessClassifier gpc = GaussianProcessClassifier(1.0 * RBF(1.0), max_iter_predict=500, n_restarts_optimizer=5, warm_start=True, random_state=1, multi_class='one_vs_rest', n_jobs=-1) gpc.fit(X_new, train_y) test_X_com = np.concatenate((test_X[:, 0].reshape(-1,1), test_X[:, 2].reshape(-1,1), test_X[:, 4:6]), axis=1) gpc_score = round(gpc.score(test_X_com, test_y) * 100, 2) print(gpc_score) from sklearn.model_selection import cross_val_score cross_val_score(gpc, test_X_com, test_y) gpc.log_marginal_likelihood() gpc_notrain = GaussianProcessClassifier(1.0 * RBF(1.0)) gpc_notrain.predict(X_new, return_std=True) ```
github_jupyter
# Working with Samples and Features From a combined dataset of cancer and normal samples, extract the normal samples. Within the normal samples, find the genes coexpressed with LRPPRC (Affymetrix probe M92439_at), a gene with mitochondrial function. ## Before you begin * Sign in to GenePattern by entering your username and password into the form below. If you are seeing a block of code instead of the login form, go to the menu above and select Cell > Run All. * The data will will use in this exercise is from the Global Cancer Map, published along with the paper *[Multi-Class Cancer Diagnosis Using Tumor Gene Expression Signatures](http://www.broadinstitute.org/cgi-bin/cancer/publications/pub_paper.cgi?mode=view&paper_id=61)*. * Links to the data files used in this exercise are below: * RES file: [GCM_Total.res](https://software.broadinstitute.org/cancer/software/genepattern/data/gcm/GCM_Total.res) * CLS file: [GCM_Total.cls](https://software.broadinstitute.org/cancer/software/genepattern/data/gcm/GCM_Total.cls) * CHIP file: [HU6800.chip](https://software.broadinstitute.org/cancer/software/genepattern/data/gcm/HU6800.chip) ``` # Requires GenePattern Notebook: pip install genepattern-notebook import gp import genepattern # Username and password removed for security reasons. genepattern.GPAuthWidget(genepattern.register_session("https://genepattern.broadinstitute.org/gp", "", "")) ``` ## Step 1: Selecting a Subset of an Expression File 1. Insert an analysis cell for the *SelectFeaturesColumns* module and move it below this set of instructions. 2. Set the following parameters: 1. Drag-and-drop the *[GCM_Total.res](https://software.broadinstitute.org/cancer/software/genepattern/data/gcm/GCM_Total.res)* file linked above into the *input.filename* parameter. 2. Set the columns parameter to *190-279*. 3. Set the *output.file* paremeter to *GCM_Normals.res*. 3. Click the *Run* button. ## Step 2: Finding Coexpressed Genes 1. Insert an analysis cell for the *GeneNeighbors* module and move it below this set of instructions. 2. Set the following parameters: 1. Send the *GCM_Normals.res* file produced by the *SelectFeaturesColumns* job above to the *data.filename* parameter. 2. Set the *gene.accession* parameter to *M92439_at*. 3. Click the *Run* button. ## Step 3: Viewing Coexpressed Genes 1. Look for the *GCM_Normals.markerdata.gct* file produced by the GeneNeighbors job above. 2. Click it and look for *Send to New GenePattern Cell* in the menu, then select *HeatMapViewer*. 3. Move the new *HeatMapViewer* cell below these instructions. 4. Click the *Run* button. ## Step 4: Collapse the Expression File 1. Insert an analysis cell for the *CollapseDataset* module and move it below this set of instructions. 2. Set the following parameters: 1. Send the *GCM_Normals.markerdata.gct* file produced by the *GeneNeighbors* job above to the *dataset.file* parameter. 2. Drag-and-drop *[HU6800.chip](https://software.broadinstitute.org/cancer/software/genepattern/data/gcm/HU6800.chip)* to the *chip.platform* parameter. 3. Click the *Run* button. ## Step 5: Converting an Affy Expression File to a List of Genes 1. Look for the *GCM_Normals.markerdata.collapsed.gct* file produced by the *CollapseDataset* job above. 2. Click it and look for *Send to New GenePattern Cell* in the menu, then select *ExtractRowNames*. 3. Move the new *ExtractRowNames* cell below these instructions. 4. Click the *Run* button. 5. View the resulting gene list by clicking *GCM_Normals.markerdata.collapsed.row.names.txt* and selecting *Open in New Tab*. ## Find pathways associated with gene list The following code will search the [mygene.info](http://mygene.info) gene database service and query each result gene to determine which Reactome pathways are associated with it. <div class="alert alert-info"> <p>Executing the cells below will read in a list of genes, similar to the list created earlier in the main Samples and Features exercise. Each gene in this list will then be sent to [mygene.info](http://mygene.info), a gene database service.</p> </div> <div class="alert alert-info"> - Click on the i icon next to the `GCM_Normals.markerdata.collapsed.row.names.txt` file in the last step - Select "Send to Code" - Select and copy the reference to the output file, for example `job1306740.get_output_files()[1]` (do NOT include the "this file = " part) - Paste the result into the code below to replace **INSERT PASTED CODE HERE** - The resulting line should look like `gene_list_filename = job1306740.get_output_files()[1]` - Execute the cell below ``` gene_list_filename = **INSERT PASTED CODE HERE** gene_list_file = gene_list_filename.open() gene_list = gene_list_file.readlines() import requests import json for gene in gene_list: gene = gene.decode("utf-8") if " " in gene: gene=gene[0:gene.find(" ")] gene_results = requests.get("https://mygene.info/v2/query?q="+gene+"&fields=pathway.reactome").content gene_results_json = json.loads(gene_results) print(gene) pathways = list() for h in range(len(gene_results_json["hits"])): for k in gene_results_json["hits"][h].keys(): if u'pathway' == k: for i in range(len(gene_results_json["hits"][h]["pathway"]["reactome"])): pathways.append(gene_results_json["hits"][h]["pathway"]["reactome"][i]["name"]) if (len(pathways) == 0): print("\tNo pathways found") else: for p in sorted(set(pathways)): print("\t" + p) ```
github_jupyter
# Import Use this resource with the export resource to migrate objects from one organization to another. # Prerequisite - Need to get the source and target object ids When we are performing an import operation, we need to map the source connection with the target connection, similartly map the source runtime environment with the target runtime env. As a first step, we need to get the object ids of the source dependant objects and the target dependant objects. ``` # First fetching source depenant object ids import infapy infapy.setFileLogger(name="DEV Logger",level="DEBUG") devInfaHandler = infapy.connect(profile="DEV") v3 = devInfaHandler.v3() # Get the connection object id lookupObj = v3.lookup(path="__ff",objectType="connection") print(lookupObj) srcConnectionID = lookupObj["objects"][0]["id"] print("srcConnection ID: " + srcConnectionID) # Get the agent group object details lookupObj = v3.lookup(path="prashanth-sbx",objectType="agentgroup") print(lookupObj) srcRuntimeID = lookupObj["objects"][0]["id"] print("srcRuntime ID: " + srcRuntimeID) # First fetching target depenant object ids import infapy infapy.setFileLogger(name="QA Logger",level="DEBUG") qaInfaHandler = infapy.connect(profile="QA") v3 = qaInfaHandler.v3() # Get the connection object id lookupObj = v3.lookup(path="FF",objectType="connection") print(lookupObj) tgtConnectionID = lookupObj["objects"][0]["id"] print("tgtConnection ID: " + tgtConnectionID) # Get the agent group object details lookupObj = v3.lookup(path="prashanth-redhat-sbx",objectType="agentgroup") print(lookupObj) tgtRuntimeID = lookupObj["objects"][0]["id"] print("tgtRuntime ID: " + tgtRuntimeID) ``` ## Function: uploadZipToGetJobID() > Use this function to import the zip file to fetch the job id > This function initiates the process > > Args: > filePath (str, optional): Defaults to os.getcwd(). > fileName (str, optional): Defaults to "infapyExportDownloaded.zip". > > Raises: > InvalidArgumentsError: if invalid arguments are provided > > Returns: > json: response after the upload zip has been initiated ``` v3 = qaInfaHandler.v3() importObj = v3.importObject() response = importObj.uploadZipToGetJobID() print(response) print() importJobID = response["jobId"] print("Import Job ID: " + importJobID) ``` ## Function: startImportByJobID() > This function initiates the job once the > zip is uploaded > > Args: > jobID (str): From response of uploadZipToGetJobID > importBody (dict): Read the docs for understanding the import body > > Raises: > InvalidArgumentsError: if invalid body sent > > Returns: > json: import job success response ``` jsonObject = { "name" : "ImportNameFromScript", "importSpecification" : { "defaultConflictResolution" : "OVERWRITE", "objectSpecification" : [ { "sourceObjectId" : "848Au1yuOzAcdxJMgPkdqy", "targetObjectId" : "9YGTW8zLVaAb6O15bcjbyk" }, { "sourceObjectId" : "95OeUg6sjYVhH6zxQUB76k", "targetObjectId" : "iwvniZZPdG6cltC3Uzcf2i" }] } } # using importObj created above response = importObj.startImportByJobID(jobID="8bSlx81r2GSlX417G9XNsq",importBody=jsonObject) print(response) ``` ## Function: getStatusOfImportByImportID() > use this method to get the status of the import > if it is a success or a failure > > Args: > importID (importID): provide the import id you recieved > from uploadZipToGetJobID Method used before this > > Returns: > json: import operation status ``` response = importObj.getStatusOfImportByImportID(importID="8bSlx81r2GSlX417G9XNsq") print(response) ``` ## Function: getImportLogsByImportID() > use this method to get the import > logs > > Args: > importID (importID): provide the import id you recieved > from uploadZipToGetJobID Method used before this > > Returns: > string text: import logs in text ``` response = importObj.getImportLogsByImportID(importID="8bSlx81r2GSlX417G9XNsq") print(response) ```
github_jupyter
``` %%html <link href="http://mathbook.pugetsound.edu/beta/mathbook-content.css" rel="stylesheet" type="text/css" /> <link href="https://aimath.org/mathbook/mathbook-add-on.css" rel="stylesheet" type="text/css" /> <style>.subtitle {font-size:medium; display:block}</style> <link href="https://fonts.googleapis.com/css?family=Open+Sans:400,400italic,600,600italic" rel="stylesheet" type="text/css" /> <link href="https://fonts.googleapis.com/css?family=Inconsolata:400,700&subset=latin,latin-ext" rel="stylesheet" type="text/css" /><!-- Hide this cell. --> <script> var cell = $(".container .cell").eq(0), ia = cell.find(".input_area") if (cell.find(".toggle-button").length == 0) { ia.after( $('<button class="toggle-button">Toggle hidden code</button>').click( function (){ ia.toggle() } ) ) ia.hide() } </script> ``` **Important:** to view this notebook properly you will need to execute the cell above, which assumes you have an Internet connection. It should already be selected, or place your cursor anywhere above to select. Then press the "Run" button in the menu bar above (the right-pointing arrowhead), or press Shift-Enter on your keyboard. $\newcommand{\identity}{\mathrm{id}} \newcommand{\notdivide}{\nmid} \newcommand{\notsubset}{\not\subset} \newcommand{\lcm}{\operatorname{lcm}} \newcommand{\gf}{\operatorname{GF}} \newcommand{\inn}{\operatorname{Inn}} \newcommand{\aut}{\operatorname{Aut}} \newcommand{\Hom}{\operatorname{Hom}} \newcommand{\cis}{\operatorname{cis}} \newcommand{\chr}{\operatorname{char}} \newcommand{\Null}{\operatorname{Null}} \newcommand{\lt}{<} \newcommand{\gt}{>} \newcommand{\amp}{&} $ <div class="mathbook-content"><h2 class="heading hide-type" alt="Section 19.7 Sage"><span class="type">Section</span><span class="codenumber">19.7</span><span class="title">Sage</span></h2><a href="boolean-sage.ipynb" class="permalink">¶</a></div> <div class="mathbook-content"></div> <div class="mathbook-content"><p id="p-3037">Sage has support for both partially ordered sets (“posets”) and lattices, and does an excellent job of providing visual depictions of both.</p></div> <div class="mathbook-content"><h3 class="heading hide-type" alt="Subsection Creating Partially Ordered Sets"><span class="type">Subsection</span><span class="codenumber" /><span class="title">Creating Partially Ordered Sets</span></h3></div> <div class="mathbook-content"><p id="p-3038">Example <a href="section-boolean-lattices.ipynb#example-boolean-poset-divisors-24" class="xref" alt="Example 19.6 " title="Example 19.6 ">19.6</a> in the text is a good example to replicate as a demonstration of Sage commands. We first define the elements of the set $X\text{.}$</p></div> ``` X = (24).divisors() X ``` <div class="mathbook-content"><p id="p-3039">One approach to creating the relation is to specify <em class="emphasis">every</em> instance where one element is comparable to the another. So we build a list of pairs, where each pair contains comparable elements, with the lesser one first. This is the set of relations.</p></div> ``` R = [(a,b) for a in X for b in X if a.divides(b)]; R ``` <div class="mathbook-content"><p id="p-3040">We construct the poset by giving the the <code class="code-inline tex2jax_ignore">Poset</code> constructor a list containing the elements and the relations. We can then easily get a “plot” of the poset. Notice the plot just shows the “cover relations” — a minimal set of comparisons which the assumption of transitivity would expand into the set of all the relations.</p></div> ``` D = Poset([X, R]) D.plot() ``` <div class="mathbook-content"><p id="p-3041">Another approach to creating a <code class="code-inline tex2jax_ignore">Poset</code> is to let the poset constructor run over all the pairs of elements, and all we do is give the constructor a way to test if two elements are comparable. Our comparison function should expect two elements and then return <code class="code-inline tex2jax_ignore">True</code> or <code class="code-inline tex2jax_ignore">False</code>. A “lambda” function is one way to quickly build such a function. This may be a new idea for you, but mastering lambda functions can be a great convenience. Notice that “lambda” is a word reserved for just this purpose (so, for example, <code class="code-inline tex2jax_ignore">lambda</code> is a bad choice for the name of an eigenvalue of a matrix). There are other ways to make functions in Sage, but a lambda function is quickest when the function is simple.</p></div> ``` divisible = lambda x, y: x.divides(y) L = Poset([X, divisible]) L == D L.plot() ``` <div class="mathbook-content"><p id="p-3042">Sage also has a collection of stock posets. Some are one-shot constructions, while others are members of parameterized families. Use tab-completion on <code class="code-inline tex2jax_ignore">Posets.</code> to see the full list. Here are some examples.</p></div> <div class="mathbook-content"><p id="p-3043">A one-shot construction. Perhaps what you would expect, though there might be other, equally plausible, alternatives.</p></div> ``` Q = Posets.PentagonPoset() Q.plot() ``` <div class="mathbook-content"><p id="p-3044">A parameterized family. This is the classic example where the elements are subsets of a set with $n$ elements and the relation is “subset of.”</p></div> ``` S = Posets.BooleanLattice(4) S.plot() ``` <div class="mathbook-content"><p id="p-3045">And random posets. These can be useful for testing and experimenting, but are unlikely to exhibit special cases that may be important. You might run the following command many times and vary the second argument, which is a rough upper bound on the probability any two elements are comparable. Remember that the plot only shows the cover relations. The more elements that are comparable, the more “vertically stretched” the plot will be.</p></div> ``` T = Posets.RandomPoset(20,0.05) T.plot() ``` <div class="mathbook-content"><h3 class="heading hide-type" alt="Subsection Properties of a Poset"><span class="type">Subsection</span><span class="codenumber" /><span class="title">Properties of a Poset</span></h3></div> <div class="mathbook-content"><p id="p-3046">Once you have a poset, what can you do with it? Let's return to our first example, <code class="code-inline tex2jax_ignore">D</code>. We can of course determine if one element is less than another, which is the fundamental structure of a poset.</p></div> ``` D.is_lequal(4, 8) D.is_lequal(4, 4) D.is_less_than(4, 8) D.is_less_than(4, 4) D.is_lequal(6, 8) D.is_lequal(8, 6) ``` <div class="mathbook-content"><p id="p-3047">Notice that <code class="code-inline tex2jax_ignore">6</code> and <code class="code-inline tex2jax_ignore">8</code> are not comparable in this poset (it is a <em class="emphasis">partial</em> order). The methods <code class="code-inline tex2jax_ignore">.is_gequal()</code> and <code class="code-inline tex2jax_ignore">.is_greater_than()</code> work similarly, but returns <code class="code-inline tex2jax_ignore">True</code> if the first element is greater (or equal).</p></div> ``` D.is_gequal(8, 4) D.is_greater_than(4, 8) ``` <div class="mathbook-content"><p id="p-3048">We can find the largest and smallest elements of a poset. This is a random poset built with a 10%probability, but copied here to be repeatable.</p></div> ``` X = range(20) C = [[18, 7], [9, 11], [9, 10], [11, 8], [6, 10], [10, 2], [0, 2], [2, 1], [1, 8], [8, 12], [8, 3], [3, 15], [15, 7], [7, 16], [7, 4], [16, 17], [16, 13], [4, 19], [4, 14], [14, 5]] P = Poset([X, C]) P.plot() P.minimal_elements() P.maximal_elements() ``` <div class="mathbook-content"><p id="p-3049">Elements of a poset can be partioned into level sets. In plots of posets, elements at the same level are plotted vertically at the same height. Each level set is obtained by removing all of the previous level sets and then taking the minimal elements of the result.</p></div> ``` P.level_sets() ``` <div class="mathbook-content"><p id="p-3050">If we make two elements in <code class="code-inline tex2jax_ignore">R</code> comparable when they had not previously been, this is an extension of <code class="code-inline tex2jax_ignore">R</code>. Consider all possible extensions of one poset — we can make a poset from all of these, where set inclusion is the relation. A linear extension is a maximal element in this poset of posets. Informally, we are adding as many new relations as possible, consistent with the original poset and so that the result is a total order. In other words, there is an ordering of the elements that is consistent with the order in the poset. We can build such a thing, but the output is just a list of the elements in the linear order. A computer scientist would be inclined to call this a “topological sort.”</p></div> ``` linear = P.linear_extension(); linear ``` <div class="mathbook-content"><p id="p-3051">We can construct subposets by giving a set of elements to induce the new poset. Here we take roughly the “bottom half” of the random poset <code class="code-inline tex2jax_ignore">P</code> by inducing the subposet on a union of some of the level sets.</p></div> ``` level = P.level_sets() bottomhalf = sum([level[i] for i in range(5)], []) B = P.subposet(bottomhalf) B.plot() ``` <div class="mathbook-content"><p id="p-3052">The dual of a poset retains the same set of elements, but reverses any comparisons.</p></div> ``` Pdual = P.dual() Pdual.plot() ``` <div class="mathbook-content"><p id="p-3053">Taking the dual of the divisibility poset from Example <a href="section-boolean-lattices.ipynb#example-boolean-poset-divisors-24" class="xref" alt="Example 19.6 " title="Example 19.6 ">19.6</a> would be like changing the relation to “is a multiple of.”</p></div> ``` Ddual = D.dual() Ddual.plot() ``` <div class="mathbook-content"><h3 class="heading hide-type" alt="Subsection Lattices"><span class="type">Subsection</span><span class="codenumber" /><span class="title">Lattices</span></h3></div> <div class="mathbook-content"><p id="p-3054">Every lattice is a poset, so all the commands above will perform equally well for a lattice. But how do you create a lattice? Simple — first create a poset and then feed it into the <code class="code-inline tex2jax_ignore">LatticePoset()</code> constructor. But realize that just because you give this constructor a poset, it does not mean a lattice will always come back out. Only if the poset is <em class="emphasis">already</em> a lattice will it get upgraded from a poset to a lattice for Sage's purposes, and you will get a <code class="code-inline tex2jax_ignore">ValueError</code> if the upgrade is not possible. Finally, notice that some of the posets Sage constructs are already recognized as lattices, such as the prototypical <code class="code-inline tex2jax_ignore">BooleanLattice</code>.</p></div> ``` P = Posets.AntichainPoset(8) P.is_lattice() LatticePoset(P) ``` <div class="mathbook-content"><p id="p-3055">An integer composition of $n$ is a list of positive integers that sum to $n\text{.}$ A composition $C_1$ covers a composition $C_2$ if $C_2$ can be formed from $C_1$ by adding consecutive parts. For example, $C_1 = [2, 1, 2] \succeq [3, 2] = C_2\text{.}$ With this relation, the set of all integer compositions of a fixed integer $n$ is a poset that is also a lattice.</p></div> ``` CP = Posets.IntegerCompositions(5) C = LatticePoset(CP) C.plot() ``` <div class="mathbook-content"><p id="p-3056">A meet or a join is a fundamental operation in a lattice.</p></div> ``` par = C.an_element().parent() a = par([1, 1, 1, 2]) b = par([2, 1, 1, 1]) a, b C.meet(a, b) c = par([1, 4]) d = par([2, 3]) c, d C.join(c, d) ``` <div class="mathbook-content"><p id="p-3057">Once a poset is upgraded to lattice status, then additional commands become available, or the character of their results changes.</p></div> <div class="mathbook-content"><p id="p-3058">An example of the former is the <code class="code-inline tex2jax_ignore">.is_distributive()</code> method.</p></div> ``` C.is_distributive() ``` <div class="mathbook-content"><p id="p-3059">An example of the latter is the <code class="code-inline tex2jax_ignore">.top()</code> method. What your text calls a largest element and a smallest element of a lattice, Sage calls a top and a bottom. For a poset, <code class="code-inline tex2jax_ignore">.top()</code> and <code class="code-inline tex2jax_ignore">.bottom()</code> may return an element or may not (returning <code class="code-inline tex2jax_ignore">None</code>), but for a lattice it is guaranteed to return exactly one element.</p></div> ``` C.top() C.bottom() ``` <div class="mathbook-content"><p id="p-3060">Notice that the returned values are all elements of the lattice, in this case ordered lists of integers summing to $5\text{.}$</p></div> <div class="mathbook-content"><p id="p-3061">Complements now make sense in a lattice. The result of the <code class="code-inline tex2jax_ignore">.complements()</code> method is a dictionary that uses elements of the lattice as the keys. We say the dictionary is “indexed” by the elements of the lattice. The result is a list of the complements of the element. We call this the “value” of the key-value pair. (You may know dictionaries as “associative arrays”, but they are really just fancy functions.)</p></div> ``` comp = C.complements() comp[par([1, 1, 1, 2])] ``` <div class="mathbook-content"><p id="p-3062">The lattice of integer compositions is a complemented lattice, as we can see by the result that each element has a single (unique) complement, evidenced by the lists of length $1$ in the values of the dictionary. Or we can just ask Sage via <code class="code-inline tex2jax_ignore">.is_complemented()</code>. Dictionaries have no inherent order, so you may get different output each time you inspect the dictionary.</p></div> ``` comp [len(e[1]) for e in comp.items()] C.is_complemented() ``` <div class="mathbook-content"><p id="p-3063">There are many more commands which apply to posets and lattices, so build a few and use tab-completion liberally to explore. There is more to discover than we can cover in just a single chapter, but you now have the basic tools to profitably study posets and lattices in Sage.</p></div>
github_jupyter
<span style="color:red; font-family:Helvetica Neue, Helvetica, Arial, sans-serif; font-size:2em;">An Exception was encountered at '<a href="#papermill-error-cell">In [40]</a>'.</span> # PA005: High Value Customer Identification # 0.0 Imports ``` import os import joblib import s3fs import pickle import re import numpy as np import pandas as pd import seaborn as sns import umap.umap_ as umap from matplotlib import pyplot as plt from sklearn import cluster as c from sklearn import metrics as m from sklearn import ensemble as en from sklearn import preprocessing as pp from sklearn import decomposition as dd from sklearn import manifold as mn from sklearn import mixture as mx from plotly import express as px from scipy.cluster import hierarchy as hc from sqlalchemy import create_engine AWS_ACCESS_KEY_ID=os.environ.get( 'AWS_ACCESS_KEY_ID') AWS_SECRET_ACCESS_KEY=os.environ.get( 'AWS_SECRET_ACCESS_KEY') ``` ## 0.2. Load Dataset ``` # load data #'path_local' = '/home/leandro/repos/insiders_clustering/' path_s3 = 's3://insiders-datasett/' df_raw = pd.read_csv(path_s3 + 'Ecommerce.csv' , encoding = 'iso-8859-1') df_raw.head() df_raw.shape ``` # 1.0. Descrição dos dados ``` df1 = df_raw.copy() df1.head() ``` ## 1.1. Rename Columns ``` # Rename Columns cols_new = ['invoice_no','stock_code','description','quantity','invoice_date','unit_price','customer_id','country'] df1.columns = cols_new df1.sample() df_raw.sample() ``` ## 1.2. Data Dimensions ``` print( 'Number of rows: {}'.format ( df1.shape[0] ) ) print( 'Number of cols: {}'.format ( df1.shape[1] ) ) ``` ## 1.3. Data Types ``` df1.dtypes ``` ## 1.4. Check NA ``` df1.isna().sum() ``` ## 1.5. Replace NA ``` df_missing = df1.loc[ df1['customer_id'].isna(), : ] df_not_missing = df1.loc[~df1['customer_id'].isna(), : ] # Create Reference df_backup = pd.DataFrame( df_missing['invoice_no'].drop_duplicates()) df_backup['customer_id'] = np.arange( 19000, 19000 +len( df_backup),1) # Merge original with reference dataframe df1 = pd.merge( df1, df_backup, on= 'invoice_no', how= 'left' ) # Coalesce df1[ 'customer_id'] = df1['customer_id_x'].combine_first( df1[ 'customer_id_y' ] ) # Drop extra columns df1 = df1.drop( columns = ['customer_id_x', 'customer_id_y'], axis = 1) df1.isna().sum() ``` ## 1.6. Change Dtypes ``` # Invoice Date df1['invoice_date'] = pd.to_datetime( df1['invoice_date'], format = '%d-%b-%y') # Customer Id df1['customer_id'] = df1['customer_id'].astype(int) df1.head() df1.dtypes ``` ## 1.7. Descriptive Statistics ``` num_attributes = df1.select_dtypes( include = [ 'int64', 'float64'] ) cat_attributes = df1.select_dtypes( exclude = [ 'int64', 'float64','datetime64[ns]']) ``` ### 1.7.1 Numerical Attributes ``` # Central tendency - mean, median ct1 = pd.DataFrame(num_attributes.apply( np.mean )).T ct2 = pd.DataFrame(num_attributes.apply( np.median )).T # Dispersion - desvio padrão, mínimo, máximo, range, skew, kurtosis d1 = pd.DataFrame( num_attributes.apply( np.std ) ).T d2 = pd.DataFrame( num_attributes.apply( np.min ) ).T d3 = pd.DataFrame( num_attributes.apply( np.max ) ).T d4 = pd.DataFrame( num_attributes.apply( lambda x: x.max( ) - x.min() ) ).T d5 = pd.DataFrame( num_attributes.apply( lambda x: x.skew( ) ) ).T d6 = pd.DataFrame( num_attributes.apply( lambda x: x.kurtosis() ) ).T # Concatenate m1 = pd.concat( [d2, d3, d4, ct1, ct2, d1, d5, d6] ).T.reset_index() m1.columns = ['attributes', 'min', 'max', 'range', 'mean', 'mediana', 'std', 'skew', 'kurtosis'] m1 ``` ### 1.7.2 Categorical Attributes ``` cat_attributes.head() ``` ### Invoice_No ``` # Problema: Temos invoice com letras e números # Identificação > df_letter_invoices = df1.loc[df1['invoice_no'].apply( lambda x : bool( re.search( '[^0-9]+', x ) ) ), :] print('Total number of invoices: {}'.format( len( df_letter_invoices ))) print('Total number of negative quantity: {}'.format( len(df_letter_invoices[ df_letter_invoices['quantity'] < 0]))) ``` ### Stock Code ``` # Check stock codes only characters df1.loc[df1['stock_code'].apply( lambda x : bool( re.search( '^[a-zA-Z]+$', x ) ) ) ,'stock_code'].unique() # Ação: ## 1. Remove stock_code in ['POST', 'D', 'M', 'PADS', 'DOT', 'CRUK'] ``` ### Description ``` df1.head() # Ação: Delete Description ``` ### Country ``` df1['country'].unique() df1['country'].value_counts( normalize = True).head() df1[['customer_id','country']].drop_duplicates().groupby( 'country').count().reset_index().sort_values( 'customer_id', ascending = False).head() ``` # 2.0. Filtragem de Variáveis ``` df2 = df1.copy() df2.dtypes # === Numerical attributes ==== df2 = df2.loc[df2['unit_price'] >= 0.04, :] # === Categorical attributes ==== df2 = df2[~df2['stock_code'].isin( ['POST', 'D', 'DOT', 'M', 'S', 'AMAZONFEE', 'm', 'DCGSSBOY', 'DCGSSGIRL', 'PADS', 'B', 'CRUK'], )] # description df2 = df2.drop( columns='description', axis=1 ) # map df2 = df2[~df2['country'].isin( ['European Community', 'Unspecified' ] ) ] # bad users df2 = df2[~df2['customer_id'].isin( [16446] )] # quantity df2_returns = df2.loc[df1['quantity'] < 0, :] df2_purchases = df2.loc[df1['quantity'] >= 0, :] ``` # 3.0. Feature Engineering ``` df3 = df2.copy() ``` ## 3.1. Feature Creation ``` # Data Reference df_ref = df3.drop( ['invoice_no', 'stock_code', 'quantity', 'invoice_date', 'unit_price', 'country'], axis=1 ).drop_duplicates( ignore_index=True ) ``` ### 3.1.1 Gross Revenue ``` # Gross Revenue ( Faturamento ) quantity * price df2_purchases.loc[:, 'gross_revenue'] = df2_purchases.loc[:, 'quantity'] * df2_purchases.loc[:, 'unit_price'] # Monetary df_monetary = df2_purchases.loc[:, ['customer_id', 'gross_revenue']].groupby( 'customer_id' ).sum().reset_index() df_ref = pd.merge( df_ref, df_monetary, on='customer_id', how='left' ) df_ref.isna().sum() ``` ### 3.1.2 Recency - Day from last purchase ``` # Recency - Last day purchase df_recency = df2_purchases.loc[:, ['customer_id', 'invoice_date']].groupby( 'customer_id' ).max().reset_index() df_recency['recency_days'] = ( df2['invoice_date'].max() - df_recency['invoice_date'] ).dt.days df_recency = df_recency[['customer_id', 'recency_days']].copy() df_ref = pd.merge( df_ref, df_recency, on='customer_id', how='left' ) df_ref.isna().sum() ``` ### 3.1.4.1 Quantity of products purchased ``` # Numero de produtos df_freq = (df2_purchases.loc[:, ['customer_id', 'stock_code']].groupby( 'customer_id' ).count() .reset_index() .rename( columns={'stock_code': 'qtde_products'} ) ) df_ref = pd.merge( df_ref, df_freq, on='customer_id', how='left' ) df_ref.isna().sum() ``` ### 3.1.7 Number of returns ``` # Number of Returns df_returns = df2_returns [[ 'customer_id', 'quantity']].groupby( 'customer_id').sum().reset_index().rename( columns ={'quantity': 'qtde_returns'} ) df_returns['qtde_returns'] = df_returns['qtde_returns'] * -1 df_ref = pd.merge( df_ref, df_returns, how = 'left', on= 'customer_id') df_ref.loc[ df_ref['qtde_returns'].isna(), 'qtde_returns'] = 0 df_ref.isna().sum() # Number of Returns df2_returns [[ 'customer_id', 'quantity']].groupby( 'customer_id').sum().reset_index().rename( columns ={'quantity': 'qtde_returns'} ) ``` ### 3.1.10 Frequency Purchase ``` df_aux = (df2_purchases[['customer_id', 'invoice_no', 'invoice_date']].drop_duplicates() .groupby( 'customer_id') .agg( max_ = ( 'invoice_date', 'max' ), min_ = ( 'invoice_date', 'min'), days_ = ('invoice_date', lambda x : ( ( x.max()- x.min() ).days) + 1 ) , buy_ = ( 'invoice_no', 'count') ) ).reset_index() # Frequency df_aux['frequency'] = df_aux[['buy_', 'days_']].apply ( lambda x: x['buy_'] / x['days_'] if x['days_'] != 0 else 0, axis = 1) # Merge df_ref = pd.merge( df_ref, df_aux[['customer_id', 'frequency']], on = 'customer_id', how = 'left') df_ref.isna().sum() df_ref.head() ``` # 4.0. Exploratory Data Analysis ``` df4 = df_ref.dropna() ``` ## 4.3 Estudo do Espaço ``` # Selected dataset cols_selected = ['customer_id', 'gross_revenue', 'recency_days', 'qtde_products', 'frequency', 'qtde_returns'] df43 = df4[cols_selected].drop( columns = 'customer_id', axis = 1) ``` <span id="papermill-error-cell" style="color:red; font-family:Helvetica Neue, Helvetica, Arial, sans-serif; font-size:2em;">Execution using papermill encountered an exception here and stopped:</span> ``` mm = pp.MinMaxScaler() fs = s3fs.S3FileSystem( anon=False, key=AWS_ACCESS_KEY_ID , secret=AWS_SECRET_ACCESS_KEY ) gross_revenue_scaler = pickle.load( fs.open( 's3://insiders-datasett/gross_revenue_scaler.pkl', 'rb') ) df43['gross_revenue'] = gross_revenue_scaler.transform( df43[['gross_revenue']] ) recency_days_scaler = pickle.load( fs.open( 's3://insiders-datasett/recency_days_scaler.pkl', 'rb' ) ) df43['recency_days'] = recency_days_scaler.transform( df43[['recency_days']] ) qtde_products_scaler = pickle.load( fs.open( 's3://insiders-datasett/qtde_products_scaler.pkl', 'rb' ) ) df43['qtde_products'] = qtde_products_scaler.transform( df43[['qtde_products']]) frequency_scaler = pickle.load( fs.open( 's3://insiders-datasett/frequency_scaler.pkl', 'rb' ) ) df43['frequency'] = frequency_scaler.transform( df43[['frequency']]) qtde_returns_scaler = pickle.load( fs.open( 's3://insiders-datasett/qtde_returns_scaler.pkl', 'rb' ) ) df43['qtde_returns'] = qtde_returns_scaler.transform( df43[['qtde_returns']]) X = df43.copy() X.shape ``` ### 4.3.4 Tree-Based embedding ``` # Training dataset X = df43.drop( columns = [ 'gross_revenue'], axis = 1 ) y = df43['gross_revenue'] # # Model definittion # rf_model = en.RandomForestRegressor ( n_estimators = 100, random_state= 42) # # Model trainning # rf_model.fit( X,y) # Carregando modelo #rf_model = pickle.load( open('../models/rf_model.pkl', 'rb')) rf_model = pickle.load( fs.open('s3://insiders-datasett/rf_model.pkl', 'rb')) #Leaf df_leaf = pd.DataFrame( rf_model.apply( X ) ) # DataFrame Leaf # Reduzer dimensionality # reducer = umap.UMAP( random_state=42 ) # embedding = reducer.fit_transform( df_leaf ) #reducer = pickle.load( open( '../features/umap_reducer.pkl', 'rb')) reducer = pickle.load( fs.open( 's3://insiders-datasett/umap_reducer.pkl', 'rb')) embedding = reducer.transform( df_leaf) # embedding df_tree = pd.DataFrame() df_tree['embedding_x'] = embedding[:, 0] df_tree['embedding_y'] = embedding[:, 1] ``` # 5.0 Data Preparation ``` df5 = df_tree.copy( ) #df5.to_csv(path_s3+'src/data/tree_based_embedding.csv') ``` # 7.0. Hyperparameter Fine-tuning ``` X = df_tree.copy() X.head() ``` # 8.0. Model Training ## 8.1. Final Model ``` # Model Definition k = 8 gmm_model = mx.GaussianMixture ( n_components = k, n_init = 300, random_state = 32) # Model Training gmm_model.fit(X) # Clustering labels = gmm_model.predict( X ) ``` ## 8.2. Cluster Validation ``` ## WSS ( Within-cluster sum of square) #print( 'WSS value: {}'.format( kmeans.inertia_ ) ) ## SS ( Silhouette Score ) print( 'SS value: {}'.format( m.silhouette_score( X, labels, metric='euclidean' ) ) ) ``` # 9.0. Cluster Analysis ``` df92 = df4[cols_selected].copy() df92['cluster'] = labels # change dtypes df92['recency_days'] = df92['recency_days'].astype( int ) df92['qtde_products'] = df92['qtde_products'].astype( int ) df92['qtde_returns'] = df92['qtde_returns'].astype( int ) from datetime import datetime #df92['last_training_timestamp'] = datetime.now().strftime( '%Y-%m-%d %H:%M:%S') # Number of customer df_cluster = df92[['customer_id','cluster']].groupby( 'cluster' ).count().reset_index() df_cluster['perc_customer'] = 100*(df_cluster['customer_id']/df_cluster['customer_id'].sum()) # Average gross revenue df_avg_gross_revenue = df92[['gross_revenue', 'cluster']].groupby('cluster').mean().reset_index() df_cluster = pd.merge( df_cluster, df_avg_gross_revenue, how = 'inner', on = 'cluster') # Average recency days df_avg_recency_days = df92[['recency_days', 'cluster']].groupby('cluster').mean().reset_index() df_cluster = pd.merge( df_cluster, df_avg_recency_days, how = 'inner', on = 'cluster') # Quantidade de produtos df_qtde_products = df92[['qtde_products', 'cluster']].groupby('cluster').mean().reset_index() df_cluster = pd.merge( df_cluster, df_qtde_products, how = 'inner', on = 'cluster') # Frequency df_frequency = df92[['frequency', 'cluster']].groupby('cluster').mean().reset_index() df_cluster = pd.merge( df_cluster, df_frequency, how = 'inner', on = 'cluster') # returns df_qtde_returns = df92[['qtde_returns', 'cluster']].groupby('cluster').mean().reset_index() df_cluster = pd.merge( df_cluster, df_qtde_returns, how = 'inner', on = 'cluster') df_cluster.sort_values( 'gross_revenue', ascending = False) ``` 02 Cluster Insiders 06 Cluster More Products 01 Cluster Spend Money 03 Cluster Even More Products 00 Cluster Less Days 05 Cluster 1K 07 Cluster Stop Returnres 04 Cluster More Buy ### Cluster 01: ( Candidato a Insider ) - Número de customers: 468 (16% do customers ) - Faturamento médio: 8836 - Recência média: 21 dias - Média de Produtos comprados: 424 produtos - Frequência de Produtos comprados: 0.09 produtos/dia - Receita em média: $8836.13,00 dólares ### Cluster 02: - Número de customer: 31 (0.7% dos customers) - Recência em média: 14 dias - Compras em média: 53 compras - Receita em média: $ 40.543,00. ### Cluster 03: - Número de customer: 4.335 (99% dos customers) - Recência em média: 92 dias - Compras em média: 05 compras - Receita em média: $ 1.372,57. # 11.0. Deploy to Production ``` import sqlite3 from sqlalchemy import create_engine df92.head() host='database-insidersv.cvrkgzmlnj5s.us-east-1.rds.amazonaws.com' port='5432' database='postgres' user='leandro' pwd='comunidadeds!' endpoint='postgresql://leandro:comunidadeds!@database-insidersv.cvrkgzmlnj5s.us-east-1.rds.amazonaws.com/postgres' conn = create_engine( endpoint ) # # create table # query_create_insiders = """ # CREATE TABLE insiders ( # customer_id INTEGER, # gross_revenue REAL, # recency_days INTEGER, # qtde_products INTEGER, # frequency REAL, # qtde_returns INTEGER, # cluster INTEGER # ) # """ # conn.execute( query_create_insiders ) # insert data into df92.to_sql( 'insiders', con=conn, if_exists='append', index=False ) df92.head() ```
github_jupyter
``` import pandas as pd import numpy as np from matplotlib import pyplot as plt from sklearn.ensemble import RandomForestClassifier from sklearn.model_selection import train_test_split from sklearn.preprocessing import LabelEncoder from sklearn.metrics import roc_curve from sklearn.metrics import auc from sklearn.metrics import precision_recall_curve from sklearn.metrics import precision_score from sklearn.metrics import recall_score from sklearn.metrics import f1_score from sklearn.metrics import average_precision_score from inspect import signature from sklearn.model_selection import train_test_split from sklearn import preprocessing from matplotlib import pyplot from sklearn import metrics print("Setup Complete") df = pd.read_csv("../input/fitness-watch-dataset/dataset_halfSecondWindow.csv") #dataset_halfSecondWindows #df.info # first doing label encodingon User # sorting based on the user label # plotting given_user distribution # making split df.isna().sum().sum() #5893 * 70 cleanup_target = {"target": {"Car":1,"Still":2,"Train":3,"Bus":4,"Walking":5}} df = df.replace(cleanup_target) cleanup_nums = {"user": {"andrea": 1, "Luca": 2, "Damiano": 3,"michelangelo": 4, "Pierpaolo": 5, "Vincenzo": 6,"IvanHeibi":7,"AndreaCarpineti":8, "Federica":9,"Serena":10,"Claudio":11,"Elena":12, "Riccardo":13}} df = df.replace(cleanup_nums) #df = df.fillna(0) df = df.fillna(df.median()) df1 = df.sort_values(by=['user']) list_users=df.user.unique() ax = df['user'].value_counts().plot(kind='bar') df['user'].value_counts() ax.set_xlabel("Users") ax.set_ylabel("Number of Responses") ax.figure.savefig('user_distribution.png') grouped = df.groupby(df.user) user_dict = {} sample_df = df[:0] for i in range(1,10): user_dict[i] = grouped.get_group(i) user_dict[i] = user_dict[i].sample(n=2225) sample_df = sample_df.append(user_dict[i]) list_users=sample_df.user.unique() ax = sample_df['user'].value_counts().plot(kind='bar') #sample_df['user'].value_counts() ax.set_xlabel("Users") ax.set_ylabel("Number of Responses") ax.figure.savefig('user_distribution_sampled.png') df1 = sample_df df1 df1 = df1.replace([' ','NULL'],np.nan) df1 = df1.dropna(thresh=df1.shape[0]*0.6,how='all',axis=1) df1.isna().sum().sum() #5893 * 52 df1 # commmon #df = df.dropna(axis=1, how='all') df2 = df1 train_pct_index1 = int(0.2 * len(df2)) train_pct_index2 = int(0.4 * len(df2)) train_pct_index3 = int(0.6 * len(df2)) train_pct_index4 = int(0.8 * len(df2)) print(0,train_pct_index1,train_pct_index2,train_pct_index3,train_pct_index4,len(df2)) # first fold: train1, test1 = df2[train_pct_index1:], df2[:train_pct_index1] # 20 to 100 # 2 fold: train2, test2 = df2.head(train_pct_index2).append(df2.tail(train_pct_index2)), df2[train_pct_index1:train_pct_index2] # 40 to 100 + 0 to 20 train3, test3 = df2.head(-train_pct_index3).append(df2.head(train_pct_index2)), df2[train_pct_index2:train_pct_index3] # 60 to 100 + 0 to 40 train4, test4 = df2.head(-train_pct_index4).append(df2.head(train_pct_index3)), df2[train_pct_index3:train_pct_index4] # 80 to 100 + 0 to 60 train5, test5 = df2[:train_pct_index4], df2[train_pct_index4:] # 0 to 80 # first fold: train1, test1 # train separate # train1 = train1.dropna(axis = 1, how='all') #train1 = train1.fillna(train1.mean()) #df2 = train1 train1 = train1.drop(['user'], axis=1) train1 = train1.drop(['id'], axis =1) # test separate #test1 = test1.dropna(axis=1, how='all') #df2 = df1 test1 =test1.drop(['user'], axis=1) test1 = test1.drop(['id'], axis =1) test1 = test1.dropna(axis=0) y = train1.target x = train1.loc[:, train1.columns != 'target'] y1 = test1.target x1 = test1.loc[:, test1.columns != 'target'] X_train, X_test, y_train, y_test = train_test_split(x, y, test_size=0.2) model=RandomForestClassifier(n_estimators=100) #Train the model using the training sets y_pred=clf.predict(X_test) model.fit(X_train,y_train) y_pred=model.predict(X_test) print("internal accuracy:", metrics.accuracy_score(y_test, y_pred)) y_pred=model.predict(x1) fold1= metrics.accuracy_score(y1, y_pred) print("Accuracy:",fold1) # second fold: train2, test2 # train separate train2 = train2.drop(['user'], axis=1) train2 = train2.drop(['id'], axis =1) # test separate test2 = test2.drop(['user'], axis=1) test2 = test2.drop(['id'], axis =1) test2 = test2.dropna(axis=0) y = train2.target x = train2.loc[:, train2.columns != 'target'] y1 = test2.target x1 = test2.loc[:, test2.columns != 'target'] X_train, X_test, y_train, y_test = train_test_split(x, y, test_size=0.2) model=RandomForestClassifier(n_estimators=100) #Train the model using the training sets y_pred=clf.predict(X_test) model.fit(X_train,y_train) y_pred=model.predict(X_test) print("internal accuracy:", metrics.accuracy_score(y_test, y_pred)) y_pred=model.predict(x1) fold2= metrics.accuracy_score(y1, y_pred) print("Accuracy:",fold2) # third fold: train3, test3 # train separate train3 = train3.drop(['user'], axis=1) train3 = train3.drop(['id'], axis =1) # test separate test3 = test3.drop(['user'], axis=1) test3 = test3.drop(['id'], axis =1) test3 = test3.dropna(axis=0) y = train3.target x = train3.loc[:, train3.columns != 'target'] y1 = test3.target x1 = test3.loc[:, test3.columns != 'target'] X_train, X_test, y_train, y_test = train_test_split(x, y, test_size=0.2) model=RandomForestClassifier(n_estimators=100) #Train the model using the training sets y_pred=clf.predict(X_test) model.fit(X_train,y_train) y_pred=model.predict(X_test) print("internal accuracy:", metrics.accuracy_score(y_test, y_pred)) y_pred=model.predict(x1) fold3= metrics.accuracy_score(y1, y_pred) print("Accuracy:",fold3) # forth fold: train4, test4 # train separate train4 = train4.drop(['user'], axis=1) train4 = train4.drop(['id'], axis =1) # test separate test4 = test4.drop(['user'], axis=1) test4 = test4.drop(['id'], axis =1) test4 = test4.dropna(axis=0) y = train4.target x = train4.loc[:, train4.columns != 'target'] y1 = test4.target x1 = test4.loc[:, test4.columns != 'target'] X_train, X_test, y_train, y_test = train_test_split(x, y, test_size=0.2) model=RandomForestClassifier(n_estimators=100) #Train the model using the training sets y_pred=clf.predict(X_test) model.fit(X_train,y_train) y_pred=model.predict(X_test) print("internal accuracy:", metrics.accuracy_score(y_test, y_pred)) y_pred=model.predict(x1) fold4= metrics.accuracy_score(y1, y_pred) print("Accuracy:",fold4) # fifth fold: train5, test5 # train separate train5 = train5.drop(['user'], axis=1) train5 = train5.drop(['id'], axis =1) # test separate test5 = test5.drop(['user'], axis=1) test5 = test5.drop(['id'], axis =1) #test5 = test5.dropna(axis=0) y = train5.target x = train5.loc[:, train5.columns != 'target'] y1 = test5.target x1 = test5.loc[:, test5.columns != 'target'] X_train, X_test, y_train, y_test = train_test_split(x, y, test_size=0.2) model=RandomForestClassifier(n_estimators=100) #Train the model using the training sets y_pred=clf.predict(X_test) model.fit(X_train,y_train) y_pred=model.predict(X_test) print("internal accuracy:", metrics.accuracy_score(y_test, y_pred)) y_pred=model.predict(x1) fold5= metrics.accuracy_score(y1, y_pred) print("Accuracy:",fold5) print("average fold:", (fold1+fold2+fold3+fold4+fold5)/5) import pickle filename = 'model.sav' pickle.dump(model, open(filename, 'wb')) ``` Feature engineering ``` print("F1:", f1_score(y1, y_pred, average='macro')) to give to dilan df3= df1.loc[df1['user'] == 3] df4= df1.loc[df1['user'] == 4] df3 = df3.drop(['user'], axis=1) df3 = df3.drop(['id'], axis =1) df4 = df4.drop(['user'], axis=1) df4 = df4.drop(['id'], axis =1) df3.to_csv("userdata_3.csv") df4.to_csv("userdata_4.csv") ```
github_jupyter
<p float="center"> <img src="https://github.com/carlosalvarezh/Analisis_Numerico/blob/master/images/C00_Img00_logo.png?raw=true" width="350" /> </p> <h1 align="center">ST0256 - Análisis Numérico</h1> <h1 align="center">Presentación del Curso</h1> <h1 align="center">2021/01</h1> <h1 align="center">MEDELLÍN - COLOMBIA </h1> <table> <tr align=left><td><img align=left src="./images/CC-BY.png"> <td>Text provided under a Creative Commons Attribution license, CC-BY. All code is made available under the FSF-approved MIT license.(c) Carlos Alberto Alvarez Henao</td> </table> *** ***Docente:*** Carlos Alberto Álvarez Henao, I.C. D.Sc. ***e-mail:*** calvar52@eafit.edu.co ***skype:*** carlos.alberto.alvarez.henao ***Herramienta:*** [Jupyter](http://jupyter.org/) ***Kernel:*** Python 3.8 *** <a id='TOC'></a> <h1>Table of Contents<span class="tocSkip"></span></h1> <div class="toc"><ul class="toc-item"><li><span><a href="#Motivación" data-toc-modified-id="Motivación-1"><span class="toc-item-num">1&nbsp;&nbsp;</span>Motivación</a></span></li><li><span><a href="#Aspectos-generales-del-curso" data-toc-modified-id="Aspectos-generales-del-curso-2"><span class="toc-item-num">2&nbsp;&nbsp;</span>Aspectos generales del curso</a></span><ul class="toc-item"><li><span><a href="#Programa-clase-a-clase" data-toc-modified-id="Programa-clase-a-clase-2.1"><span class="toc-item-num">2.1&nbsp;&nbsp;</span>Programa clase-a-clase</a></span></li><li><span><a href="#Evaluación" data-toc-modified-id="Evaluación-2.2"><span class="toc-item-num">2.2&nbsp;&nbsp;</span>Evaluación</a></span></li><li><span><a href="#Bibliográfia" data-toc-modified-id="Bibliográfia-2.3"><span class="toc-item-num">2.3&nbsp;&nbsp;</span>Bibliográfia</a></span></li><li><span><a href="#Asesorías-y-Monitorias-académicas" data-toc-modified-id="Asesorías-y-Monitorias-académicas-2.4"><span class="toc-item-num">2.4&nbsp;&nbsp;</span>Asesorías y Monitorias académicas</a></span></li></ul></li></ul></div> <p float="center"> <img src="https://github.com/carlosalvarezh/Analisis_Numerico/blob/master/images/C00_Img01_Intro.PNG?raw=true" width="500" /> </p> ## Motivación Deseamos desarrollar las siguientes operaciones aritméticas: - $2+2$ - $4 \times 4$ - $\left(\sqrt{3} \right )^2$ desde un punto de vista analítico, las soluciones exactas (a mano y en papel?) son - $2+2 = 4$ - $4 \times 4 = 16$ - $\left(\sqrt{3} \right )^2 = 3$ pero veamos qué sucede cuando realizamos las mismas operaciones empleando un dispositivo electrónico (calculadora, computador, etc) ``` a = 2 + 2 b = 4 * 4 c = (3**(1/2))**2 ``` preguntemos al computador si los resultados obtenidos en los cálculos son los esperados ``` a == 4 b == 16 c == 3 ``` `False`? Qué sucedió? por qué el resultado de comparar el valor que entendemos como verdadero y el obtenido empleando un dispositivo electrónico (calculadora) es falso? Veamos entonces cuál es el resultado que arrojó el cálculo: ``` print(c) ``` Efectivamente, se oberva que el valor calculado no es el esperado. Puede ser que, para muchas de las situaciones cotidianas, este valor no sea percibido como una diferencia apreciable ("error") y simplemente se asuma que ambos valores son iguales ("redondeo"). Pero, y si esta operación la tuviera qué repetir muchas veces? qué sucede con ese pequeño errror? Será que se puede simplemente despreciar? qué sucede para cálculos más complejos? se podría determinar la cantidad de error en los cálculos numéricos realizados a través de un computador? este error aumenta sin control? hasta cuándo se podrá decir que dos cantidades son "iguales"? El errror es debido a qué? una mala implementación de la operación aritmética? el lenguaje empleado para realizar el cálculo? La máquina? la formulación matemática? humano? Estas, y muchas otras, preguntas son las que se pretenden resolver en el curso de Análisis Numérico. [Volver a la Tabla de Contenido](#TOC) ## Aspectos generales del curso ### Programa clase-a-clase |**Clase** |**Fecha**|**Capítulo**|**Contenido** |**Actividad Evaluativa**| |--------:|:-------:|:-------|--------------|------------------------| |1 |26/01/2021||Descripción del curso y de los contenidos a ser tratados| | |2 |28/01/2021|Teoría de Errores|Fuentes de error, Notación Big-O, Error de punto flotante| | |3 |02/02/2021||Error de truncamiento|| |4 |04/02/2021||Error de punto flotante (cont), Combinación de error, Operaciones de conteo|| |5 |09/02/2021||Por qué debería importarnos esto?|| |6 |11/02/2021||Número de condición de una función. Forma anidad de Hörner y aritmética de $n$ decimales. El error y su relación con la estabilidad de los algoritmos|| |7 |16/02/2021|Raíces de ecuaciones|Método gráfico. Búsquedas incrementales|| |8 |18/02/2021||Método de Bisección, Teorema sobre convergencia en bisección|| |9 |23/02/2021||Punto fijo: teorema, corolarios y condiciones para hallar $G(x)$|| |10 |25/02/2021||Punto fijo: Método de Newton-Raphson (teoremas y conclusión)|| |11|02/03/2021||Punto fijo: Método de la Secante (teoremas y conclusión)|| |12|04/03/2021||Método de Raíces múltiples (teoremas y conclusión)|| |13|09/03/2021|Sistema de Ecuaciones Lineales|Introducción a la solución numérica de Sistemas De ecuaciones. criterios de Existencia y Unicidad ||| |14|11/03/2021||Eliminación Gaussiana. Número de operaciones en el proceso de Eliminación Gaussiana|Primer parcial (15%)| |15|16/03/2021||Problemas en la solución. Estrategias de pivoteo|| |16|18/03/2021||Factorización $LU$: Doolittle, Crout y Cholesky|Asamblea estudiantíl| |17|23/03/2021||Aplicaciones de la Factorización $LU$|| |18|25/03/2021||Introducción a métodos iterativos. Normas Vectoriales y Matriciales. Número de Condición de una Matriz|| ||29/03-03/04/2021|||Semana Santa|| |19|06/04/2021||Métodos de Jacobi. Gauss-Seidel y SOR. Formas Matriciales|| |20|08/04/2021||Teoremas de Convergencias para métodos iterativos. Aspectos generales del refinamiento iterativo|| |21|13/04/2021|Interpolación Numérica|Introducción. Método de diferencias dividas de Newton|Segundo Parcial (15%)| |22|15/04/2021||Método de Interpolación de Lagrange|| |23|20/04/2021||Trazadores lineales, cuadráticos y cúbicos|| |24|22/04/2021|Diferenciación e Integración Numérica|Métodos diferenciación numérica. Integración numérica Trapecio|| |25|27/04/2021||Integración numérica Simpson 1/3 simple y compuesto|| |26|29/04/2021||Integración numérica Simpson 3/8 simple y compuesto|| |27|04/05/2021|Ecuaciones Diferenciales Ordinarias|Introducción, Método de Euler|| |28|06/05/2021||Método de RK-2 y RK-4|Tercer Parcial (15%)| |29|11/05/2021|||| |30|13/05/2021|||16/05/2021 - 70%| |31|18/05/2021|||| |32|20/05/2021|||| ||24-29/05/2021|||Semana de Colchón| ||31-04/06/2021||Semana 1 finales || ||08-11/06/2021||Semana 2 finales || [Volver a la Tabla de Contenido](#TOC) ### Evaluación |Tema |Porcentaje|Fecha | |:-----------------------------|:--------:|:--------:| |Error y Raíces de ecuaciones |15% |11/03/2021| |Sistema de Ecuaciones Lineales|15% |13/04/2021| |Interpolación, Diferenciación e Integración Numérica y EDO|15%|06/05/2021| |Seguimiento | 25%|13/05/2021| |Práctica Final | 30%|08/062021| - ***Parciales:*** Los parciales se realizarán 8 días después de finalizado el tema correspondiente - ***Seguimiento:*** El seguimiento constará de una serie de actividades extraclase a ser entregados en el mismo día o en días siguientes, dependiendo de la actividad. - ***Práctica:*** La práctica consistirá en el desarrollo de un problema de Simulación numérica. Los temas serán indicados en las primeras semanas del semestre y se podrá realizar en equipos de a tres (3) estudiantes. [Volver a la Tabla de Contenido](#TOC) ### Bibliográfia - Burden, Richard L. & Faires, Duglas. Análisis Numérico. Editorial Thomson. 9° Edición 2011. - Chapra, Steven & Canale, Raymond. Métodos Numéricos para ingenieros, McgrawHill, 1987. - Heath, Michael T. Scientific Computing: An Introductory Survey, SIAM, 2006. - Python. Recuperado de https://www.python.org/ - Jupyter Notebook. Recuperado de https://jupyter.org/ - Anaconda. Recuperado de https://www.anaconda.com/ - Google Colab. Recuperado de https://colab.research.google.com/ [Volver a la Tabla de Contenido](#TOC) ### Asesorías y Monitorias académicas Próximamente... [Volver a la Tabla de Contenido](#TOC)
github_jupyter
## CIFAR 10 ``` %matplotlib inline %reload_ext autoreload %autoreload 2 ``` You can get the data via: wget http://pjreddie.com/media/files/cifar.tgz **Important:** Before proceeding, the student must reorganize the downloaded dataset files to match the expected directory structure, so that there is a dedicated folder for each class under 'test' and 'train', e.g.: ``` * test/airplane/airplane-1001.png * test/bird/bird-1043.png * train/bird/bird-10018.png * train/automobile/automobile-10000.png ``` The filename of the image doesn't have to include its class. ``` from fastai.conv_learner import * PATH = "data/cifar10/" os.makedirs(PATH,exist_ok=True) !ls {PATH} if not os.path.exists(f"{PATH}/train/bird"): raise Exception("expecting class subdirs under 'train/' and 'test/'") !ls {PATH}/train classes = ('plane', 'car', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck') stats = (np.array([ 0.4914 , 0.48216, 0.44653]), np.array([ 0.24703, 0.24349, 0.26159])) def get_data(sz,bs): tfms = tfms_from_stats(stats, sz, aug_tfms=[RandomFlip()], pad=sz//8) return ImageClassifierData.from_paths(PATH, val_name='test', tfms=tfms, bs=bs) bs=256 ``` ### Look at data ``` data = get_data(32,4) x,y=next(iter(data.trn_dl)) plt.imshow(data.trn_ds.denorm(x)[0]); plt.imshow(data.trn_ds.denorm(x)[1]); ``` ## Fully connected model ``` data = get_data(32,bs) lr=1e-2 ``` From [this notebook](https://github.com/KeremTurgutlu/deeplearning/blob/master/Exploring%20Optimizers.ipynb) by our student Kerem Turgutlu: ``` class SimpleNet(nn.Module): def __init__(self, layers): super().__init__() self.layers = nn.ModuleList([ nn.Linear(layers[i], layers[i + 1]) for i in range(len(layers) - 1)]) def forward(self, x): x = x.view(x.size(0), -1) for l in self.layers: l_x = l(x) x = F.relu(l_x) return F.log_softmax(l_x, dim=-1) learn = ConvLearner.from_model_data(SimpleNet([32*32*3, 40,10]), data) learn, [o.numel() for o in learn.model.parameters()] learn.summary() learn.lr_find() learn.sched.plot() %time learn.fit(lr, 2) %time learn.fit(lr, 2, cycle_len=1) ``` ## CNN ``` class ConvNet(nn.Module): def __init__(self, layers, c): super().__init__() self.layers = nn.ModuleList([ nn.Conv2d(layers[i], layers[i + 1], kernel_size=3, stride=2) for i in range(len(layers) - 1)]) self.pool = nn.AdaptiveMaxPool2d(1) self.out = nn.Linear(layers[-1], c) def forward(self, x): for l in self.layers: x = F.relu(l(x)) x = self.pool(x) x = x.view(x.size(0), -1) return F.log_softmax(self.out(x), dim=-1) learn = ConvLearner.from_model_data(ConvNet([3, 20, 40, 80], 10), data) learn.summary() learn.lr_find(end_lr=100) learn.sched.plot() %time learn.fit(1e-1, 2) %time learn.fit(1e-1, 4, cycle_len=1) ``` ## Refactored ``` class ConvLayer(nn.Module): def __init__(self, ni, nf): super().__init__() self.conv = nn.Conv2d(ni, nf, kernel_size=3, stride=2, padding=1) def forward(self, x): return F.relu(self.conv(x)) class ConvNet2(nn.Module): def __init__(self, layers, c): super().__init__() self.layers = nn.ModuleList([ConvLayer(layers[i], layers[i + 1]) for i in range(len(layers) - 1)]) self.out = nn.Linear(layers[-1], c) def forward(self, x): for l in self.layers: x = l(x) x = F.adaptive_max_pool2d(x, 1) x = x.view(x.size(0), -1) return F.log_softmax(self.out(x), dim=-1) learn = ConvLearner.from_model_data(ConvNet2([3, 20, 40, 80], 10), data) learn.summary() %time learn.fit(1e-1, 2) %time learn.fit(1e-1, 2, cycle_len=1) ``` ## BatchNorm ``` class BnLayer(nn.Module): def __init__(self, ni, nf, stride=2, kernel_size=3): super().__init__() self.conv = nn.Conv2d(ni, nf, kernel_size=kernel_size, stride=stride, bias=False, padding=1) self.a = nn.Parameter(torch.zeros(nf,1,1)) self.m = nn.Parameter(torch.ones(nf,1,1)) def forward(self, x): x = F.relu(self.conv(x)) x_chan = x.transpose(0,1).contiguous().view(x.size(1), -1) if self.training: self.means = x_chan.mean(1)[:,None,None] self.stds = x_chan.std (1)[:,None,None] return (x-self.means) / self.stds *self.m + self.a class ConvBnNet(nn.Module): def __init__(self, layers, c): super().__init__() self.conv1 = nn.Conv2d(3, 10, kernel_size=5, stride=1, padding=2) self.layers = nn.ModuleList([BnLayer(layers[i], layers[i + 1]) for i in range(len(layers) - 1)]) self.out = nn.Linear(layers[-1], c) def forward(self, x): x = self.conv1(x) for l in self.layers: x = l(x) x = F.adaptive_max_pool2d(x, 1) x = x.view(x.size(0), -1) return F.log_softmax(self.out(x), dim=-1) learn = ConvLearner.from_model_data(ConvBnNet([10, 20, 40, 80, 160], 10), data) learn.summary() %time learn.fit(3e-2, 2) %time learn.fit(1e-1, 4, cycle_len=1) ``` ## Deep BatchNorm ``` class ConvBnNet2(nn.Module): def __init__(self, layers, c): super().__init__() self.conv1 = nn.Conv2d(3, 10, kernel_size=5, stride=1, padding=2) self.layers = nn.ModuleList([BnLayer(layers[i], layers[i+1]) for i in range(len(layers) - 1)]) self.layers2 = nn.ModuleList([BnLayer(layers[i+1], layers[i + 1], 1) for i in range(len(layers) - 1)]) self.out = nn.Linear(layers[-1], c) def forward(self, x): x = self.conv1(x) for l,l2 in zip(self.layers, self.layers2): x = l(x) x = l2(x) x = F.adaptive_max_pool2d(x, 1) x = x.view(x.size(0), -1) return F.log_softmax(self.out(x), dim=-1) learn = ConvLearner.from_model_data(ConvBnNet2([10, 20, 40, 80, 160], 10), data) %time learn.fit(1e-2, 2) %time learn.fit(1e-2, 2, cycle_len=1) ``` ## Resnet ``` class ResnetLayer(BnLayer): def forward(self, x): return x + super().forward(x) class Resnet(nn.Module): def __init__(self, layers, c): super().__init__() self.conv1 = nn.Conv2d(3, 10, kernel_size=5, stride=1, padding=2) self.layers = nn.ModuleList([BnLayer(layers[i], layers[i+1]) for i in range(len(layers) - 1)]) self.layers2 = nn.ModuleList([ResnetLayer(layers[i+1], layers[i + 1], 1) for i in range(len(layers) - 1)]) self.layers3 = nn.ModuleList([ResnetLayer(layers[i+1], layers[i + 1], 1) for i in range(len(layers) - 1)]) self.out = nn.Linear(layers[-1], c) def forward(self, x): x = self.conv1(x) for l,l2,l3 in zip(self.layers, self.layers2, self.layers3): x = l3(l2(l(x))) x = F.adaptive_max_pool2d(x, 1) x = x.view(x.size(0), -1) return F.log_softmax(self.out(x), dim=-1) learn = ConvLearner.from_model_data(Resnet([10, 20, 40, 80, 160], 10), data) wd=1e-5 %time learn.fit(1e-2, 2, wds=wd) %time learn.fit(1e-2, 3, cycle_len=1, cycle_mult=2, wds=wd) %time learn.fit(1e-2, 8, cycle_len=4, wds=wd) ``` ## Resnet 2 ``` class Resnet2(nn.Module): def __init__(self, layers, c, p=0.5): super().__init__() self.conv1 = BnLayer(3, 16, stride=1, kernel_size=7) self.layers = nn.ModuleList([BnLayer(layers[i], layers[i+1]) for i in range(len(layers) - 1)]) self.layers2 = nn.ModuleList([ResnetLayer(layers[i+1], layers[i + 1], 1) for i in range(len(layers) - 1)]) self.layers3 = nn.ModuleList([ResnetLayer(layers[i+1], layers[i + 1], 1) for i in range(len(layers) - 1)]) self.out = nn.Linear(layers[-1], c) self.drop = nn.Dropout(p) def forward(self, x): x = self.conv1(x) for l,l2,l3 in zip(self.layers, self.layers2, self.layers3): x = l3(l2(l(x))) x = F.adaptive_max_pool2d(x, 1) x = x.view(x.size(0), -1) x = self.drop(x) return F.log_softmax(self.out(x), dim=-1) learn = ConvLearner.from_model_data(Resnet2([16, 32, 64, 128, 256], 10, 0.2), data) wd=1e-6 %time learn.fit(1e-2, 2, wds=wd) %time learn.fit(1e-2, 3, cycle_len=1, cycle_mult=2, wds=wd) %time learn.fit(1e-2, 8, cycle_len=4, wds=wd) learn.save('tmp3') log_preds,y = learn.TTA() preds = np.mean(np.exp(log_preds),0) metrics.log_loss(y,preds), accuracy_np(preds,y) ``` ### End
github_jupyter
# 1. Very simple 'programs' ## 1.1 Running Python from the command line In order to test pieces of code we can run Python from the command line. In this Jupyter Notebook we are going to simulate this. You can type the commands in the fields and execute them.<br> In the field type:<br> `print('Hello, World')`<br> Then press `<shift> + <return>` to execute the command. What happened?<br>You just created a program, that prints the words 'Hello, World'. The Python environment that you are in immediately compiles whatever you have typed in. This is useful for testing things, e.g. define a few variables, and then test to see if a certain line will work. That will come in a later lesson, though. ## 1.2 Math in Python Type<br> `1 + 1` Type<br> `20 + 80` These are additions. We can of course use other mathematical operators.<br> Try this subtraction:<br> `6 - 5` and this multiplication:<br> `2 * 5` Try:<br> `5 ** 2` `**` is the exponential operator, so we executed 5 squared. Type:<br> `print('1 + 2 is an addition')` You see that the `print` statement writes something on the screen.<br> Try this:<br> `print('one kilobyte is 2^10 bytes, or', 2 ** 10, 'bytes')` This demonstrates that you can print text and calculations in a sentence.<br> The commas separating each section are a way of separating strings (text) from calculations or variable. Now try this:<br> `23 / 3` And this:<br> `23%3` `%` returns the remainder of the division. ## 1.3 Order of Operations Remember that thing called order of operation that they taught in maths? Well, it applies in Python, too. Here it is, if you need reminding:<br> 1. Parenthesis `()` 2. Exponents `**` 3. Multiplication `*`, division `/` and remainder `%` 4. Addition `+` and subtraction `-` Here are some examples that you might want to try, if you're rusty on this:<br> `1 + 2 * 3`<br> `(1 + 2) * 3` ## 1.4 Comments, Please The final thing you'll need to know to move on to multi-line programs is the comment. Type the following (and yes, the output is shown):<br> `#I am a comment. Fear my wrath!` A comment is a piece of code that is not run. In Python, you make something a comment by putting a hash in front of it. A hash comments everything after it in the line, and nothing before it. So you could type this:<br> `print("food is very nice") #eat me` This results in a normal output, without the smutty comment, thank you very much.<br> Now try this:<br> `# print("food is very nice")` Nothing happens, because the code was after a comment. Comments are important for adding necessary information for another programmer to read, but not the computer. For example, an explanation of a section of code, saying what it does, or what is wrong with it. You can also comment bits of code by putting a `#` in front of it - if you don't want it to compile, but can't delete it because you might need it later. __[Home](PythonIntro.ipynb)__<br> __[Lesson 2: Programs in a file, and variables](PythonIntroCh2.ipynb)__
github_jupyter
## Install packages and connect to Oracle ``` sc.install_pypi_package("sqlalchemy") sc.install_pypi_package("pandas") sc.install_pypi_package("s3fs") sc.install_pypi_package("cx_Oracle") sc.install_pypi_package("fsspec") from sqlalchemy import create_engine engine = create_engine('oracle://CMSDASHADMIN:4#X9#Veut#KSsU#l@oracle-prod-cms-dash.ccwgq0kcp9fq.us-east-1.rds.amazonaws.com:1521/', echo=False) # Import necessary libraries import cx_Oracle import pandas as pd import numpy as np dsn_tns = cx_Oracle.makedsn('oracle-prod-cms-dash.ccwgq0kcp9fq.us-east-1.rds.amazonaws.com', '1521', service_name='ORCL') conn = cx_Oracle.connect(user=r'VILASM', password='Z#5iC$Ld4sE', dsn=dsn_tns) con = cx_Oracle.connect('VILASM/Z#5iC$Ld4sE@oracle-prod-cms-dash.ccwgq0kcp9fq.us-east-1.rds.amazonaws.com/ORCL') print (con.version) cur =con.cursor() ``` # Insert into DASH_BENEFICIARY table ``` import pandas as pd # Create datatype dictionary for reading in the files dtype_dic= {'BENE_BIRTH_DT':str, 'BENE_DEATH_DT':str} # Read in all three files from 2008, 2009, and 2010 bene08 = pd.read_csv("s3://cms-dash-datasets/Data/DE1.0 Sample 20/DE1_0_2008_Beneficiary_Summary_File_Sample_20.csv", dtype = dtype_dic) bene09 = pd.read_csv("s3://cms-dash-datasets/Data/DE1.0 Sample 20/DE1_0_2009_Beneficiary_Summary_File_Sample_20.csv", dtype = dtype_dic) bene10 = pd.read_csv("s3://cms-dash-datasets/Data/DE1.0 Sample 20/DE1_0_2010_Beneficiary_Summary_File_Sample_20.csv", dtype = dtype_dic) # Add the FILE_YEAR column and insert it to index 0 bene08['FILE_YEAR']='2008' first_col = bene08.pop('FILE_YEAR') bene08.insert(0,'FILE_YEAR',first_col) bene09['FILE_YEAR']='2009' first_col = bene09.pop('FILE_YEAR') bene09.insert(0,'FILE_YEAR',first_col) bene10['FILE_YEAR']='2010' first_col = bene10.pop('FILE_YEAR') bene10.insert(0,'FILE_YEAR',first_col) # Add leading zeros to SP_STATE_CODE and BENE_COUNTY_CD bene08['SP_STATE_CODE'] = bene08['SP_STATE_CODE'].astype(str).apply(lambda x: x.zfill(2)) bene08['BENE_COUNTY_CD'] = bene08['BENE_COUNTY_CD'].astype(str).apply(lambda x: x.zfill(3)) bene09['SP_STATE_CODE'] = bene09['SP_STATE_CODE'].astype(str).apply(lambda x: x.zfill(2)) bene09['BENE_COUNTY_CD'] = bene09['BENE_COUNTY_CD'].astype(str).apply(lambda x: x.zfill(3)) bene10['SP_STATE_CODE'] = bene10['SP_STATE_CODE'].astype(str).apply(lambda x: x.zfill(2)) bene10['BENE_COUNTY_CD'] = bene10['BENE_COUNTY_CD'].astype(str).apply(lambda x: x.zfill(3)) # Converty BENE_BIRTH_DT and BENE_DEATH_DT to datetimes A = pd.to_datetime(bene08.BENE_BIRTH_DT) bene08['BENE_BIRTH_DT'] = A.dt.date B = pd.to_datetime(bene08.BENE_DEATH_DT) bene08['BENE_DEATH_DT'] = B.dt.date A = pd.to_datetime(bene09.BENE_BIRTH_DT) bene09['BENE_BIRTH_DT'] = A.dt.date B = pd.to_datetime(bene09.BENE_DEATH_DT) bene09['BENE_DEATH_DT'] = B.dt.date A = pd.to_datetime(bene10.BENE_BIRTH_DT) bene10['BENE_BIRTH_DT'] = A.dt.date B = pd.to_datetime(bene10.BENE_DEATH_DT) bene10['BENE_DEATH_DT'] = B.dt.date # Insert into table DASH_BENEFICIARY for 2008 sql= """ INSERT INTO DASH_BENEFICIARY (FILE_YEAR, DESYNPUF_ID,BENE_BIRTH_DT, BENE_DEATH_DT, BENE_SEX_IDENT_CD,BENE_RACE_CD, BENE_ESRD_IND, SP_STATE_CODE, BENE_COUNTY_CD, BENE_HI_CVRAGE_TOT_MONS,BENE_SMI_CVRAGE_TOT_MONS, BENE_HMO_CVRAGE_TOT_MONS, PLAN_CVRG_MOS_NUM, SP_ALZHDMTA, SP_CHF,SP_CHRNKIDN, SP_CNCR, SP_COPD, SP_DEPRESSN,SP_DIABETES, SP_ISCHMCHT, SP_OSTEOPRS, SP_RA_OA, SP_STRKETIA, MEDREIMB_IP, BENRES_IP, PPPYMT_IP, MEDREIMB_OP, BENRES_OP, PPPYMT_OP, MEDREIMB_CAR, BENRES_CAR, PPPYMT_CAR) values(:1,:2,:3,:4,:5,:6,:7,:8,:9,:10,:11,:12,:13,:14,:15,:16,:17,:18,:19,:20,:21,:22,:23,:24,:25,:26,:27,:28,:29,:30,:31,:32,:33)""" df_list = bene08.values.tolist() n = 0 for i in bene08.iterrows(): cur.execute(sql,df_list[n]) n += 1 con.commit() # Insert into table DASH_BENEFICIARY for 2009 sql= """ INSERT INTO DASH_BENEFICIARY (FILE_YEAR, DESYNPUF_ID,BENE_BIRTH_DT, BENE_DEATH_DT, BENE_SEX_IDENT_CD,BENE_RACE_CD, BENE_ESRD_IND, SP_STATE_CODE, BENE_COUNTY_CD, BENE_HI_CVRAGE_TOT_MONS,BENE_SMI_CVRAGE_TOT_MONS, BENE_HMO_CVRAGE_TOT_MONS, PLAN_CVRG_MOS_NUM, SP_ALZHDMTA, SP_CHF,SP_CHRNKIDN, SP_CNCR, SP_COPD, SP_DEPRESSN,SP_DIABETES, SP_ISCHMCHT, SP_OSTEOPRS, SP_RA_OA, SP_STRKETIA, MEDREIMB_IP, BENRES_IP, PPPYMT_IP, MEDREIMB_OP, BENRES_OP, PPPYMT_OP, MEDREIMB_CAR, BENRES_CAR, PPPYMT_CAR) values(:1,:2,:3,:4,:5,:6,:7,:8,:9,:10,:11,:12,:13,:14,:15,:16,:17,:18,:19,:20,:21,:22,:23,:24,:25,:26,:27,:28,:29,:30,:31,:32,:33)""" df_list = bene09.values.tolist() n = 0 for i in bene09.iterrows(): cur.execute(sql,df_list[n]) n += 1 con.commit() # Insert into table DASH_BENEFICIARY for 2010 sql= """ INSERT INTO DASH_BENEFICIARY (FILE_YEAR, DESYNPUF_ID,BENE_BIRTH_DT, BENE_DEATH_DT, BENE_SEX_IDENT_CD,BENE_RACE_CD, BENE_ESRD_IND, SP_STATE_CODE, BENE_COUNTY_CD, BENE_HI_CVRAGE_TOT_MONS,BENE_SMI_CVRAGE_TOT_MONS, BENE_HMO_CVRAGE_TOT_MONS, PLAN_CVRG_MOS_NUM, SP_ALZHDMTA, SP_CHF,SP_CHRNKIDN, SP_CNCR, SP_COPD, SP_DEPRESSN,SP_DIABETES, SP_ISCHMCHT, SP_OSTEOPRS, SP_RA_OA, SP_STRKETIA, MEDREIMB_IP, BENRES_IP, PPPYMT_IP, MEDREIMB_OP, BENRES_OP, PPPYMT_OP, MEDREIMB_CAR, BENRES_CAR, PPPYMT_CAR) values(:1,:2,:3,:4,:5,:6,:7,:8,:9,:10,:11,:12,:13,:14,:15,:16,:17,:18,:19,:20,:21,:22,:23,:24,:25,:26,:27,:28,:29,:30,:31,:32,:33)""" df_list = bene10.values.tolist() n = 0 for i in bene10.iterrows(): cur.execute(sql,df_list[n]) n += 1 con.commit() # con.close() ``` ## Insert into DASH_CLAIM_CARRIER table ### DASH_CLAIM_CARRIER A ``` claimsA = pd.read_csv("s3://cms-dash-datasets/Data/DE1.0 Sample 3/DE1_0_2008_to_2010_Carrier_Claims_Sample_3A.csv") # claimsB = pd.read_csv("s3n://cms-dash-datasets/Data/DE1.0 Sample 20/DE1_0_2008_to_2010_Carrier_Claims_Sample_1B.csv") # Take first 51 cols, move CLM_ID to front, convert two datetime columns claims_A_toload = claimsA.iloc[: , :51] first_col = claims_A_toload.pop('CLM_ID') claims_A_toload.insert(0,'CLM_ID',first_col) claims_A_toload.columns A = pd.to_datetime(claims_A_toload['CLM_FROM_DT']) claims_A_toload['CLM_FROM_DT'] = A.dt.date B = pd.to_datetime(claims_A_toload.CLM_THRU_DT) claims_A_toload['CLM_THRU_DT'] = B.dt.date # Convert necessary columns to string nonstr = claims_A_toload[['CLM_ID','CLM_FROM_DT','CLM_THRU_DT']] claims_str = claims_A_toload.astype(str) claims_str['CLM_ID']=nonstr['CLM_ID'] claims_str['CLM_FROM_DT']=nonstr['CLM_FROM_DT'] claims_str['CLM_THRU_DT']=nonstr['CLM_THRU_DT'] # Insert into DASH_CLAIM_CARRIER table sql=""" INSERT INTO DASH_CLAIM_CARRIER ( CLM_ID, DESYNPUF_ID, CLM_FROM_DT, CLM_THRU_DT, ICD9_DGNS_CD_1, ICD9_DGNS_CD_2, ICD9_DGNS_CD_3, ICD9_DGNS_CD_4, ICD9_DGNS_CD_5, ICD9_DGNS_CD_6, ICD9_DGNS_CD_7, ICD9_DGNS_CD_8, PRF_PHYSN_NPI_1, PRF_PHYSN_NPI_2, PRF_PHYSN_NPI_3, PRF_PHYSN_NPI_4, PRF_PHYSN_NPI_5, PRF_PHYSN_NPI_6, PRF_PHYSN_NPI_7, PRF_PHYSN_NPI_8, PRF_PHYSN_NPI_9, PRF_PHYSN_NPI_10, PRF_PHYSN_NPI_11, PRF_PHYSN_NPI_12, PRF_PHYSN_NPI_13, TAX_NUM_1, TAX_NUM_2, TAX_NUM_3, TAX_NUM_4, TAX_NUM_5, TAX_NUM_6, TAX_NUM_7, TAX_NUM_8, TAX_NUM_9, TAX_NUM_10, TAX_NUM_11, TAX_NUM_12, TAX_NUM_13, HCPCS_CD_1, HCPCS_CD_2, HCPCS_CD_3, HCPCS_CD_4, HCPCS_CD_5, HCPCS_CD_6,HCPCS_CD_7,HCPCS_CD_8,HCPCS_CD_9,HCPCS_CD_10,HCPCS_CD_11,HCPCS_CD_12, HCPCS_CD_13 ) values(:1,:2,:3,:4,:5,:6,:7,:8,:9,:10,:11,:12,:13,:14,:15,:16,:17,:18,:19,:20,:21,:22,:23,:24,:25,:26,:27,:28,:29,:30,:31,:32,:33 ,:34,:35,:36,:37,:38,:39,:40,:41,:42,:43,:44,:45,:46,:47,:48,:49,:50,:51 )""" df_list = claims_str.values.tolist() n = 0 for i in claims_str.iterrows(): cur.execute(sql,df_list[n]) n += 1 con.commit() ``` ## DASH CLAIMS CARRIER B ``` claimsB = pd.read_csv("s3://cms-dash-datasets/Data/DE1.0 Sample 2/DE1_0_2008_to_2010_Carrier_Claims_Sample_2B.csv") # Take first 51 cols, move CLM ID to front, convert two datetime columns claims_B_toload = claimsB.iloc[: , :51] first_col = claims_B_toload.pop('CLM_ID') claims_B_toload.insert(0,'CLM_ID',first_col) claims_B_toload.columns A = pd.to_datetime(claims_B_toload['CLM_FROM_DT']) claims_B_toload['CLM_FROM_DT'] = A.dt.date B = pd.to_datetime(claims_B_toload.CLM_THRU_DT) claims_B_toload['CLM_THRU_DT'] = B.dt.date nonstr = claims_B_toload[['CLM_ID','CLM_FROM_DT','CLM_THRU_DT']] claims_str = claims_B_toload.astype(str) claims_str['CLM_ID']=nonstr['CLM_ID'] claims_str['CLM_FROM_DT']=nonstr['CLM_FROM_DT'] claims_str['CLM_THRU_DT']=nonstr['CLM_THRU_DT'] sql=""" INSERT INTO DASH_CLAIM_CARRIER ( CLM_ID, DESYNPUF_ID, CLM_FROM_DT, CLM_THRU_DT, ICD9_DGNS_CD_1, ICD9_DGNS_CD_2, ICD9_DGNS_CD_3, ICD9_DGNS_CD_4, ICD9_DGNS_CD_5, ICD9_DGNS_CD_6, ICD9_DGNS_CD_7, ICD9_DGNS_CD_8, PRF_PHYSN_NPI_1, PRF_PHYSN_NPI_2, PRF_PHYSN_NPI_3, PRF_PHYSN_NPI_4, PRF_PHYSN_NPI_5, PRF_PHYSN_NPI_6, PRF_PHYSN_NPI_7, PRF_PHYSN_NPI_8, PRF_PHYSN_NPI_9, PRF_PHYSN_NPI_10, PRF_PHYSN_NPI_11, PRF_PHYSN_NPI_12, PRF_PHYSN_NPI_13, TAX_NUM_1, TAX_NUM_2, TAX_NUM_3, TAX_NUM_4, TAX_NUM_5, TAX_NUM_6, TAX_NUM_7, TAX_NUM_8, TAX_NUM_9, TAX_NUM_10, TAX_NUM_11, TAX_NUM_12, TAX_NUM_13, HCPCS_CD_1, HCPCS_CD_2, HCPCS_CD_3, HCPCS_CD_4, HCPCS_CD_5, HCPCS_CD_6,HCPCS_CD_7,HCPCS_CD_8,HCPCS_CD_9,HCPCS_CD_10,HCPCS_CD_11,HCPCS_CD_12, HCPCS_CD_13 ) values(:1,:2,:3,:4,:5,:6,:7,:8,:9,:10,:11,:12,:13,:14,:15,:16,:17,:18,:19,:20,:21,:22,:23,:24,:25,:26,:27,:28,:29,:30,:31,:32,:33 ,:34,:35,:36,:37,:38,:39,:40,:41,:42,:43,:44,:45,:46,:47,:48,:49,:50,:51 )""" df_list = claims_str.values.tolist() n = 0 for i in claims_str.iterrows(): cur.execute(sql,df_list[n]) n += 1 con.commit() ``` ## Insert into DASH_CLAIM_INPATIENT ``` dtype_dic= {'AT_PHYSN_NPI':str, 'OP_PHYSN_NPI':str,'OT_PHYSN_NPI': str, 'CLM_FROM_DT':str, 'CLM_THRU_DT':str, 'CLM_ADMSN_DT':str, 'NCH_BENE_DSCHRG_DT':str} inpatient = pd.read_csv("s3://cms-dash-datasets/Data/DE1.0 Sample 1/DE1_0_2008_to_2010_Inpatient_Claims_Sample_1.csv" , dtype = dtype_dic) # Take only Segment 1 seg1 = inpatient.loc[inpatient['SEGMENT'] == 1] # Take first 36 columns and rearrange SEGMENT and CLM_ID inpatient_toload = seg1.iloc[: , :36] first_col = inpatient_toload.pop('SEGMENT') inpatient_toload.insert(0,'SEGMENT',first_col) sec = inpatient_toload.pop('CLM_ID') inpatient_toload.insert(0,'CLM_ID',sec) # Convert necessary columns A = pd.to_datetime(inpatient_toload.CLM_FROM_DT) inpatient_toload['CLM_FROM_DT'] = A.dt.date B = pd.to_datetime(inpatient_toload.CLM_THRU_DT) inpatient_toload['CLM_THRU_DT'] = B.dt.date C = pd.to_datetime(inpatient_toload.CLM_ADMSN_DT) inpatient_toload['CLM_ADMSN_DT'] = C.dt.date D = pd.to_datetime(inpatient_toload.NCH_BENE_DSCHRG_DT) inpatient_toload['NCH_BENE_DSCHRG_DT'] = D.dt.date # Fill NaN's with zeros inpatient_toload[['NCH_BENE_IP_DDCTBL_AMT']]=inpatient_toload[['NCH_BENE_IP_DDCTBL_AMT']].fillna(0.0) inpatient_toload['CLM_UTLZTN_DAY_CNT'] = (inpatient_toload['CLM_UTLZTN_DAY_CNT'].fillna(0)).astype(int) # inpatient_toload[['AT_PHYSN_NPI', 'OP_PHYSN_NPI', 'OT_PHYSN_NPI']] = inpatient_toload[['AT_PHYSN_NPI', 'OP_PHYSN_NPI', 'OT_PHYSN_NPI']].fillna(0)astype(int) all_columns = ['DESYNPUF_ID', 'PRVDR_NUM', 'AT_PHYSN_NPI','OP_PHYSN_NPI','OT_PHYSN_NPI','ADMTNG_ICD9_DGNS_CD', 'CLM_DRG_CD', 'ICD9_DGNS_CD_1','ICD9_DGNS_CD_2','ICD9_DGNS_CD_3','ICD9_DGNS_CD_4','ICD9_DGNS_CD_5','ICD9_DGNS_CD_6','ICD9_DGNS_CD_7','ICD9_DGNS_CD_8','ICD9_DGNS_CD_9','ICD9_DGNS_CD_10', 'ICD9_PRCDR_CD_1','ICD9_PRCDR_CD_2','ICD9_PRCDR_CD_3','ICD9_PRCDR_CD_4','ICD9_PRCDR_CD_5','ICD9_PRCDR_CD_6'] inpatient_toload[all_columns] = inpatient_toload[all_columns].astype(str) inpatient_toload = inpatient_toload.reset_index().drop(columns=['index']) # Insert into DASH_CLAIM_INPATIENT table sql="""INSERT INTO DASH_CLAIM_INPATIENT (CLM_ID,SEGMENT,DESYNPUF_ID,CLM_FROM_DT,CLM_THRU_DT,PRVDR_NUM,CLM_PMT_AMT,NCH_PRMRY_PYR_CLM_PD_AMT, AT_PHYSN_NPI,OP_PHYSN_NPI,OT_PHYSN_NPI,CLM_ADMSN_DT,ADMTNG_ICD9_DGNS_CD,CLM_PASS_THRU_PER_DIEM_AMT,NCH_BENE_IP_DDCTBL_AMT,NCH_BENE_PTA_COINSRNC_LBLTY_AM, NCH_BENE_BLOOD_DDCTBL_LBLTY_AM,CLM_UTLZTN_DAY_CNT,NCH_BENE_DSCHRG_DT,CLM_DRG_CD,ICD9_DGNS_CD_1,ICD9_DGNS_CD_2,ICD9_DGNS_CD_3,ICD9_DGNS_CD_4, ICD9_DGNS_CD_5,ICD9_DGNS_CD_6,ICD9_DGNS_CD_7,ICD9_DGNS_CD_8,ICD9_DGNS_CD_9,ICD9_DGNS_CD_10,ICD9_PRCDR_CD_1,ICD9_PRCDR_CD_2,ICD9_PRCDR_CD_3, ICD9_PRCDR_CD_4,ICD9_PRCDR_CD_5,ICD9_PRCDR_CD_6) values(:1,:2,:3,:4,:5,:6,:7,:8,:9,:10,:11,:12,:13,:14,:15,:16,:17,:18,:19,:20,:21,:22,:23,:24,:25,:26,:27,:28,:29,:30,:31,:32,:33 ,:34,:35,:36)""" df_list = inpatient_toload.values.tolist() n = 0 for i in inpatient_toload.iterrows(): cur.execute(sql,df_list[n]) n += 1 con.commit() ``` ## DASH_CLAIM_OUTPATIENT ``` outpatient = pd.read_csv("s3://cms-dash-datasets/Data/DE1.0 Sample 2/DE1_0_2008_to_2010_Outpatient_Claims_Sample_2.csv") # Take first 36 columns and rearrange #SEGMENT and CLM_ID outpatient_toload = outpatient.iloc[: , :31] first_col = outpatient_toload.pop('SEGMENT') outpatient_toload.insert(0,'SEGMENT',first_col) sec = outpatient_toload.pop('CLM_ID') outpatient_toload.insert(0,'CLM_ID',sec) # Convert necessary columns A = pd.to_datetime(outpatient_toload['CLM_FROM_DT']) outpatient_toload['CLM_FROM_DT'] = A.dt.date B = pd.to_datetime(outpatient_toload.CLM_THRU_DT) outpatient_toload['CLM_THRU_DT'] = B.dt.date all_columns = ['PRVDR_NUM', 'AT_PHYSN_NPI','OP_PHYSN_NPI','OT_PHYSN_NPI','ICD9_DGNS_CD_1','ICD9_DGNS_CD_2','ICD9_DGNS_CD_3','ICD9_DGNS_CD_4','ICD9_DGNS_CD_5','ICD9_DGNS_CD_6', 'ICD9_DGNS_CD_7','ICD9_DGNS_CD_8','ICD9_DGNS_CD_9','ICD9_DGNS_CD_10','ICD9_PRCDR_CD_1','ICD9_PRCDR_CD_2','ICD9_PRCDR_CD_3','ICD9_PRCDR_CD_4','ICD9_PRCDR_CD_5','ICD9_PRCDR_CD_6','ADMTNG_ICD9_DGNS_CD'] outpatient_toload[all_columns] = outpatient_toload[all_columns].astype(str) # Insert into DASH_CLAIM_OUTPATIENT table sql="""INSERT INTO DASH_CLAIM_OUTPATIENT (CLM_ID,SEGMENT,DESYNPUF_ID,CLM_FROM_DT,CLM_THRU_DT,PRVDR_NUM,CLM_PMT_AMT,NCH_PRMRY_PYR_CLM_PD_AMT, AT_PHYSN_NPI,OP_PHYSN_NPI,OT_PHYSN_NPI,NCH_BENE_BLOOD_DDCTBL_LBLTY_AM,ICD9_DGNS_CD_1,ICD9_DGNS_CD_2,ICD9_DGNS_CD_3,ICD9_DGNS_CD_4, ICD9_DGNS_CD_5,ICD9_DGNS_CD_6,ICD9_DGNS_CD_7,ICD9_DGNS_CD_8,ICD9_DGNS_CD_9,ICD9_DGNS_CD_10,ICD9_PRCDR_CD_1,ICD9_PRCDR_CD_2,ICD9_PRCDR_CD_3, ICD9_PRCDR_CD_4,ICD9_PRCDR_CD_5,ICD9_PRCDR_CD_6,NCH_BENE_PTB_DDCTBL_AMT,NCH_BENE_PTB_COINSRNC_AMT,ADMTNG_ICD9_DGNS_CD) values(:1,:2,:3,:4,:5,:6,:7,:8,:9,:10,:11,:12,:13,:14,:15,:16,:17,:18,:19,:20,:21,:22,:23,:24,:25,:26,:27,:28,:29,:30,:31)""" df_list = outpatient_toload.values.tolist() n = 0 for i in outpatient_toload.iterrows(): cur.execute(sql,df_list[n]) n += 1 con.commit() ``` # INSERT INTO CENSUS ``` census = pd.read_csv("s3://cms-dash-datasets/Data/Census/census_acs20195yr_county (1).csv", encoding='unicode_escape') census['FIPS'] = census['FIPS'].astype(str).apply(lambda x: x.zfill(5)) census['FIPS'] # Insert into DASH_CENSUS table sql="""INSERT INTO DASH_CENSUS ( FIPS,COUNTY_NAME,EST_HBT_TH_1,EST_HBT_TH_2,EST_ANC_TP_1,EST_ANC_TP_2,EST_ANC_TP_3,EST_ANC_TP_4,EST_ANC_TP_5,EST_ANC_TP_6,EST_ANC_TP_7,EST_ANC_TP_8,EST_ANC_TP_9,EST_ANC_TP_10,EST_ANC_TP_11,EST_ANC_TP_12,EST_ANC_TP_13,EST_ANC_TP_14,EST_ANC_TP_15,EST_ANC_TP_16,EST_ANC_TP_17,EST_ANC_TP_18,EST_ANC_TP_19,EST_ANC_TP_20,EST_ANC_TP_21,EST_ANC_TP_22,EST_ANC_TP_23,EST_ANC_TP_24,EST_ANC_TP_25,EST_ANC_TP_26,EST_ANC_TP_27,EST_BR_THU_1,EST_BR_THU_2,EST_BR_THU_3,EST_BR_THU_4, EST_BR_THU_5,EST_BR_THU_6,EST_BR_THU_7,EST_CTZN_VP_1,EST_CTZN_VP_2,EST_CTZN_VP_3,EST_COW_1,EST_COW_2,EST_COW_3,EST_COW_4,EST_COW_5,EST_CTW_1,EST_CTW_2,EST_CTW_3,EST_CTW_4,EST_CTW_5,EST_CTW_6,EST_CTW_7,EST_CTW_8,EST_CNI_TH_1,EST_CNI_TH_2,EST_CNI_TH_3,EST_DIS_1,EST_DIS_2,EST_DIS_3,EST_DIS_4,EST_DIS_5,EST_DIS_6,EST_DIS_7,EST_DIS_8,EST_EA_1,EST_EA_2,EST_EA_3,EST_EA_4,EST_EA_5,EST_EA_6,EST_EA_7,EST_EA_8,EST_EA_9,EST_EA_10,EST_EMP_1,EST_EMP_2,EST_EMP_3,EST_EMP_4,EST_EMP_5,EST_EMP_6, EST_EMP_7,EST_EMP_8,EST_EMP_9,EST_EMP_10,EST_EMP_11,EST_EMP_12,EST_EMP_13,EST_EMP_14,EST_EMP_15,EST_EMP_16,EST_EMP_17,EST_FERT_1,EST_FERT_2,EST_FERT_3,EST_FERT_4,EST_FERT_5,EST_FERT_6,EST_FERT_7,EST_GP_1,EST_GP_2,EST_GP_3,EST_GP_4,EST_GP_5,EST_GP_6,EST_GP_7,EST_GP_8,EST_GP_9,EST_GRAPI_1,EST_GRAPI_2,EST_GRAPI_3,EST_GRAPI_4,EST_GRAPI_5,EST_GRAPI_6,EST_GRAPI_7,EST_GRAPI_8,EST_GR_1,EST_GR_2,EST_GR_3,EST_GR_4,EST_GR_5,EST_GR_6,EST_GR_7,EST_GR_8,EST_GR_9,EST_GR_10,EST_HIC_1,EST_HIC_2, EST_HIC_3,EST_HIC_4,EST_HIC_5,EST_HIC_6,EST_HIC_7,EST_HIC_8,EST_HIC_9,EST_HIC_10,EST_HIC_11,EST_HIC_12,EST_HIC_13,EST_HIC_14,EST_HIC_15,EST_HIC_16,EST_HIC_17,EST_HIC_18,EST_HIC_19,EST_HIC_20,EST_HIC_21,EST_HIC_22,EST_HIC_23,EST_HIC_24,EST_HISP_1,EST_HISP_2,EST_HISP_3,EST_HISP_4,EST_HISP_5,EST_HISP_6,EST_HISP_7,EST_HISP_8,EST_HISP_9,EST_HISP_10,EST_HISP_11,EST_HISP_12,EST_HISP_13,EST_HISP_14,EST_HISP_15,EST_HISP_16,EST_HEAT_1,EST_HEAT_2,EST_HEAT_3,EST_HEAT_4,EST_HEAT_5,EST_HEAT_6,EST_HEAT_7, EST_HEAT_8,EST_HEAT_9,EST_HEAT_10,EST_HHT_1,EST_HHT_2,EST_HHT_3,EST_HHT_4,EST_HHT_5,EST_HHT_6,EST_HHT_7,EST_HHT_8,EST_HHT_9,EST_HHT_10,EST_HHT_11,EST_HHT_12,EST_HHT_13,EST_HHT_14,EST_HHT_15,EST_HOCC_1,EST_HOCC_2,EST_HOCC_3,EST_HOCC_4,EST_HOCC_5,EST_HT_1,EST_HT_2,EST_HT_3,EST_HT_4,EST_HT_5,EST_INB_1,EST_INB_2,EST_INB_3,EST_INB_4,EST_INB_5,EST_INB_6,EST_INB_7,EST_INB_8,EST_INB_9,EST_INB_10,EST_INB_11,EST_INB_12,EST_INB_13,EST_INB_14,EST_INB_15,EST_INB_16,EST_INB_17,EST_INB_18,EST_INB_19,EST_INB_20,EST_INB_21, EST_INB_22,EST_INB_23,EST_INB_24,EST_INB_25,EST_INB_26,EST_INB_27,EST_INB_28,EST_INB_29,EST_INB_30,EST_INB_31,EST_INB_32,EST_INB_33,EST_INB_34,EST_INB_35,EST_INB_36,EST_INB_37,EST_INB_38,EST_INB_39,EST_INB_40,EST_INB_41,EST_INB_42,EST_INB_43,EST_INB_44,EST_IND_1,EST_IND_2,EST_IND_3,EST_IND_4,EST_IND_5,EST_IND_6,EST_IND_7,EST_IND_8,EST_IND_9,EST_IND_10,EST_IND_11,EST_IND_12,EST_IND_13,EST_IND_14,EST_LANG_1,EST_LANG_2,EST_LANG_3,EST_LANG_4,EST_LANG_5,EST_LANG_6,EST_LANG_7,EST_LANG_8,EST_LANG_9,EST_LANG_10,EST_LANG_11,EST_LANG_12, EST_MRTL_1,EST_MRTL_2,EST_MRTL_3,EST_MRTL_4,EST_MRTL_5,EST_MRTL_6,EST_MRTL_7,EST_MRTL_8,EST_MRTL_9,EST_MRTL_10,EST_MRTL_11,EST_MRTL_12,EST_MRTG_1,EST_MRTG_2,EST_MRTG_3,EST_OPR_1,EST_OPR_2,EST_OPR_3,EST_OPR_4,EST_OCC_1,EST_OCC_2,EST_OCC_3,EST_OCC_4,EST_OCC_5,EST_OCC_6,EST_BPL_1,EST_BPL_2,EST_BPL_3,EST_BPL_4,EST_BPL_5,EST_BPL_6,EST_BPL_7,EST_BPL_8,EST_BPL_9,EST_BPL_10,EST_BPL_11,EST_BPL_12,EST_BPL_13,EST_BPL_14,EST_BPL_15,EST_BPL_16,EST_BPL_17,EST_BPL_18,EST_BPL_19,EST_POB_1,EST_POB_2,EST_POB_3,EST_POB_4,EST_POB_5,EST_POB_6,EST_POB_7,EST_RACE_1,EST_RACE_2, EST_RACE_3,EST_RACE_4,EST_RACE_5,EST_RACE_6,EST_RACE_7,EST_RACE_8,EST_RACE_9,EST_RACE_10,EST_RACE_11,EST_RACE_12,EST_RACE_13,EST_RACE_14,EST_RACE_15,EST_RACE_16,EST_RACE_17,EST_RACE_18,EST_RACE_19,EST_RACE_20,EST_RACE_21,EST_RACE_22,EST_RACE_23,EST_RACE_24,EST_RACE_25,EST_RACE_26,EST_RACE_27,EST_RACE_28,EST_RACE_29,EST_RACE_30,EST_RACE_31,EST_RACE_32,EST_RACE_33,EST_RACE_34,EST_RACE_35,EST_RACE_36,EST_RACE_37,EST_RLTNSHP_1,EST_RLTNSHP_2,EST_RLTNSHP_3,EST_RLTNSHP_4,EST_RLTNSHP_5,EST_RLTNSHP_6,EST_RLTNSHP_7,EST_RSDNC_1,EST_RSDNC_2,EST_RSDNC_3,EST_RSDNC_4,EST_RSDNC_5, EST_RSDNC_6,EST_RSDNC_7,EST_RSDNC_8,EST_ROOM_1,EST_ROOM_2,EST_ROOM_3,EST_ROOM_4,EST_ROOM_5,EST_ROOM_6,EST_ROOM_7,EST_ROOM_8,EST_ROOM_9,EST_ROOM_10,EST_ROOM_11,EST_SCHOOL_1,EST_SCHOOL_2,EST_SCHOOL_3,EST_SCHOOL_4,EST_SCHOOL_5,EST_SCHOOL_6,EST_SEL_CHAR_1,EST_SEL_CHAR_2,EST_SEL_CHAR_3,EST_SEL_CHAR_4,EST_SMOC_1,EST_SMOC_2,EST_SMOC_3,EST_SMOC_4,EST_SMOC_5,EST_SMOC_6,EST_SMOC_7,EST_SMOC_8,EST_SMOC_9,EST_SMOC_10,EST_SMOC_11,EST_SMOC_12,EST_SMOC_13,EST_SMOC_14,EST_SMOC_15,EST_SMOC_16,EST_SMOC_17,EST_SMOCAPI_1,EST_SMOCAPI_2,EST_SMOCAPI_3,EST_SMOCAPI_4,EST_SMOCAPI_5,EST_SMOCAPI_6, EST_SMOCAPI_7,EST_SMOCAPI_8,EST_SMOCAPI_9,EST_SMOCAPI_10,EST_SMOCAPI_11,EST_SMOCAPI_12,EST_SMOCAPI_13,EST_SMOCAPI_14,EST_SMOCAPI_15,EST_SMOCAPI_16,EST_SEX_AGE_1,EST_SEX_AGE_2,EST_SEX_AGE_3,EST_SEX_AGE_4,EST_SEX_AGE_5,EST_SEX_AGE_6,EST_SEX_AGE_7,EST_SEX_AGE_8,EST_SEX_AGE_9,EST_SEX_AGE_10,EST_SEX_AGE_11,EST_SEX_AGE_12,EST_SEX_AGE_13,EST_SEX_AGE_14,EST_SEX_AGE_15,EST_SEX_AGE_16,EST_SEX_AGE_17,EST_SEX_AGE_18,EST_SEX_AGE_19,EST_SEX_AGE_20,EST_SEX_AGE_21,EST_SEX_AGE_22,EST_SEX_AGE_23,EST_SEX_AGE_24,EST_SEX_AGE_25,EST_SEX_AGE_26,EST_SEX_AGE_27,EST_SEX_AGE_28,EST_SEX_AGE_29,EST_SEX_AGE_30,EST_SEX_AGE_31, EST_SEX_AGE_32,EST_THU,EST_CTZNSHP_1,EST_CTZNSHP_2,EST_CTZNSHP_3,EST_UNIT_1,EST_UNIT_2,EST_UNIT_3,EST_UNIT_4,EST_UNIT_5,EST_UNIT_6,EST_UNIT_7,EST_UNIT_8,EST_UNIT_9,EST_UNIT_10,EST_OWNER_UNIT_1,EST_OWNER_UNIT_2,EST_OWNER_UNIT_3,EST_OWNER_UNIT_4,EST_OWNER_UNIT_5,EST_OWNER_UNIT_6,EST_OWNER_UNIT_7,EST_OWNER_UNIT_8,EST_OWNER_UNIT_9,EST_OWNER_UNIT_10,EST_VEH_1,EST_VEH_2,EST_VEH_3,EST_VEH_4,EST_VEH_5,EST_VET_1,EST_VET_2,EST_FOREIGN_1,EST_FOREIGN_2,EST_FOREIGN_3,EST_FOREIGN_4,EST_FOREIGN_5,EST_FOREIGN_6,EST_FOREIGN_7,EST_MOVE_YEAR_1,EST_MOVE_YEAR_2,EST_MOVE_YEAR_3,EST_MOVE_YEAR_4, EST_MOVE_YEAR_5,EST_MOVE_YEAR_6,EST_MOVE_YEAR_7,EST_US_ENTRY_1,EST_US_ENTRY_2,EST_US_ENTRY_3,EST_US_ENTRY_4,EST_US_ENTRY_5,EST_US_ENTRY_6,EST_US_ENTRY_7,EST_BUILT_YEAR_1,EST_BUILT_YEAR_2,EST_BUILT_YEAR_3,EST_BUILT_YEAR_4,EST_BUILT_YEAR_5,EST_BUILT_YEAR_6,EST_BUILT_YEAR_7,EST_BUILT_YEAR_8,EST_BUILT_YEAR_9,EST_BUILT_YEAR_10,EST_BUILT_YEAR_11 ) values(:1,:2,:3,:4,:5,:6,:7,:8,:9,:10,:11,:12,:13,:14,:15,:16,:17,:18,:19,:20,:21,:22,:23,:24,:25,:26,:27,:28,:29,:30,:31,:32,:33,:34,:35,:36,:37,:38,:39,:40,:41,:42,:43,:44,:45 ,:46,:47,:48,:49,:50,:51,:52,:53,:54,:55,:56,:57,:58,:59,:60,:61,:62,:63,:64,:65,:66,:67,:68,:69,:70,:71,:72,:73,:74,:75,:76,:77,:78,:79,:80,:81,:82,:83,:84,:85,:86,:87,:88,:89,:90 ,:91,:92,:93,:94,:95,:96,:97,:98,:99,:100,:101,:102,:103,:104,:105,:106,:107,:108,:109,:110,:111,:112,:113,:114,:115,:116,:117,:118,:119,:120,:121,:122,:123,:124,:125,:126,:127,:128,:129 ,:130,:131,:132,:133,:134,:135,:136,:137,:138,:139,:140,:141,:142,:143,:144,:145,:146,:147,:148,:149,:150,:151,:152,:153,:154,:155,:156,:157,:158,:159,:160,:161,:162,:163,:164,:165,:166,:167,:168 ,:169,:170,:171,:172,:173,:174,:175,:176,:177,:178,:179,:180,:181,:182,:183,:184,:185,:186,:187,:188,:189,:190,:191,:192,:193,:194,:195,:196,:197,:198,:199,:200,:201,:202,:203,:204,:205,:206,:207 ,:208,:209,:210,:211,:212,:213,:214,:215,:216,:217,:218,:219,:220,:221,:222,:223,:224,:225,:226,:227,:228,:229,:230,:231,:232,:233,:234,:235,:236,:237,:238,:239,:240,:241,:242,:243,:244,:245,:246,:247,:248,:249 ,:250,:251,:252,:253,:254,:255,:256,:257,:258,:259,:260,:261,:262,:263,:264,:265,:266,:267,:268,:269,:270,:271,:272,:273,:274,:275,:276,:277,:278,:279,:280,:281,:282,:283,:284,:285,:286,:287,:288,:289,:290 ,:291,:292,:293,:294,:295,:296,:297,:298,:299,:300,:301,:302,:303,:304,:305,:306,:307,:308,:309,:310,:311,:312,:313,:314,:315,:316,:317,:318,:319,:320,:321,:322,:323,:324,:325,:326,:327,:328,:329,:330,:331,:332,:333 ,:334,:335,:336,:337,:338,:339,:340,:341,:342,:343,:344,:345,:346,:347,:348,:349,:350,:351,:352,:353,:354,:355,:356,:357,:358,:359,:360,:361,:362,:363,:364,:365,:366,:367,:368,:369,:370,:371,:372,:373,:374,:375,:376,:377,:378,:379,:380 ,:381,:382,:383,:384,:385,:386,:387,:388,:389,:390,:391,:392,:393,:394,:395,:396,:397,:398,:399,:400,:401,:402,:403,:404,:405,:406,:407,:408,:409,:410,:411,:412,:413,:414,:415,:416,:417,:418,:419,:420,:421,:422,:423,:424,:425,:426,:427,:428,:429,:430 ,:431,:432,:433,:434,:435,:436,:437,:438,:439,:440,:441,:442,:443,:444,:445,:446,:447,:448,:449,:450,:451,:452,:453,:454,:455,:456,:457,:458,:459,:460,:461,:462,:463,:464,:465,:466,:467,:468 ,:469,:470,:471,:472,:473,:474,:475,:476,:477,:478,:479,:480,:481,:482,:483,:484,:485,:486,:487,:488,:489,:490,:491,:492,:493,:494,:495,:496,:497,:498,:499,:500,:501,:502,:503,:504,:505,:506,:507,:508,:509,:510,:511,:512 ,:513,:514,:515,:516,:517,:518,:519,:520,:521,:522, :523)""" df_list = census.values.tolist() n = 0 for i in census.iterrows(): cur.execute(sql,df_list[n]) n += 1 con.commit() ``` # INSERT INTO PERCRIPTIONS DRUGS ``` drugs = pd.read_csv("s3://cms-dash-datasets/Data/DE1.0 Sample 2/DE1_0_2008_to_2010_Prescription_Drug_Events_Sample_2.csv") # Rearrange columns and convert necessary columns first_col = drugs.pop('PDE_ID') drugs.insert(0,'PDE_ID',first_col) all_columns = ['PDE_ID', 'DESYNPUF_ID','PROD_SRVC_ID',] drugs[all_columns] = drugs[all_columns].astype(str) # Convert date columns A = pd.to_datetime(drugs['SRVC_DT']) drugs['SRVC_DT'] = A.dt.date # Insert into DASH_DRUG_PRESCRIPTION table sql="""INSERT INTO DASH_DRUG_PRESCRIPTION (PDE_ID, DESYNPUF_ID, SRVC_DT,PROD_SRVC_ID,QTY_DSPNSD_NUM,DAYS_SUPLY_NUM,PTNT_PAY_AMT,TOT_RX_CST_AMT) values(:1,:2,:3,:4,:5,:6,:7,:8)""" df_list = drugs.values.tolist() n = 0 for i in drugs.iterrows(): cur.execute(sql,df_list[n]) n += 1 con.commit() ``` ## Insert NLP data ``` # Import the two NLP files nlp = pd.read_csv("s3://cms-dash-datasets/Jason-NLP/output.csv") nlp_xlsx = pd.read_csv("s3://cms-dash-datasets/Jason-NLP/Review Data (Hospital Review Data for NLP processing)_Review Data.csv") # Rename columns nlp_xlsx = nlp_xlsx.rename(columns={'At Physn Npi':'AT_PHYSN_API'}) nlp2 = nlp_xlsx[['AT_PHYSN_API','Review Comment']] # Join the two NLP files nlp_load = pd.merge(left=nlp, right=nlp2, left_on ='AT_PHYSN_NPI', right_on = 'AT_PHYSN_API') nlp_load2 = nlp_load[['AT_PHYSN_NPI','Review Comment','label']].rename(columns={'Review Comment':'REVIEW_TEXT','label':'RATING_VALUE'}) # Create index column nlp_load2['REVIEW_SID']=nlp_load2.index # Rearrange columns nlp_load3 = nlp_load2[['REVIEW_SID','AT_PHYSN_NPI','REVIEW_TEXT','RATING_VALUE']] # Insert into DASH_PROVIDER_REVIEW Table sql="""INSERT INTO DASH_PROVIDER_REVIEW (REVIEW_SID, AT_PHYSN_NPI, REVIEW_TEXT, RATING_VALUE ) values(:1,:2,:3, :4)""" df_list = nlp_load3.values.tolist() n = 0 for i in nlp_load3.iterrows(): cur.execute(sql,df_list[n]) n += 1 con.commit() con.close ```
github_jupyter
# Publications markdown generator for academicpages Takes a TSV of publications with metadata and converts them for use with [academicpages.github.io](academicpages.github.io). This is an interactive Jupyter notebook ([see more info here](http://jupyter-notebook-beginner-guide.readthedocs.io/en/latest/what_is_jupyter.html)). The core python code is also in `publications.py`. Run either from the `markdown_generator` folder after replacing `publications.tsv` with one containing your data. TODO: Make this work with BibTex and other databases of citations, rather than Stuart's non-standard TSV format and citation style. ## Data format The TSV needs to have the following columns: pub_date, title, venue, excerpt, citation, site_url, and paper_url, with a header at the top. - `excerpt` and `paper_url` can be blank, but the others must have values. - `pub_date` must be formatted as YYYY-MM-DD. - `url_slug` will be the descriptive part of the .md file and the permalink URL for the page about the paper. The .md file will be `YYYY-MM-DD-[url_slug].md` and the permalink will be `https://[yourdomain]/publications/YYYY-MM-DD-[url_slug]` This is how the raw file looks (it doesn't look pretty, use a spreadsheet or other program to edit and create). ``` !cat publications.tsv ``` ## Import pandas We are using the very handy pandas library for dataframes. ``` import pandas as pd ``` ## Import TSV Pandas makes this easy with the read_csv function. We are using a TSV, so we specify the separator as a tab, or `\t`. I found it important to put this data in a tab-separated values format, because there are a lot of commas in this kind of data and comma-separated values can get messed up. However, you can modify the import statement, as pandas also has read_excel(), read_json(), and others. ``` publications = pd.read_csv("publications.tsv", sep="\t", header=0) publications ``` ## Escape special characters YAML is very picky about how it takes a valid string, so we are replacing single and double quotes (and ampersands) with their HTML encoded equivilents. This makes them look not so readable in raw format, but they are parsed and rendered nicely. ``` html_escape_table = { "&": "&amp;", '"': "&quot;", "'": "&apos;" } def html_escape(text): """Produce entities within text.""" return "".join(html_escape_table.get(c,c) for c in text) ``` ## Creating the markdown files This is where the heavy lifting is done. This loops through all the rows in the TSV dataframe, then starts to concatentate a big string (```md```) that contains the markdown for each type. It does the YAML metadata first, then does the description for the individual page. ``` import os for row, item in publications.iterrows(): md_filename = str(item.pub_date) + "-" + item.url_slug + ".md" html_filename = str(item.pub_date) + "-" + item.url_slug year = item.pub_date[:4] ## YAML variables md = "---\ntitle: \"" + item.title + '"\n' md += """collection: publications""" md += """\npermalink: /publication/""" + html_filename if len(str(item.excerpt)) > 5: md += "\nexcerpt: '" + html_escape(item.excerpt) + "'" md += "\ndate: " + str(item.pub_date) md += "\nvenue: '" + html_escape(item.venue) + "'" if len(str(item.paper_url)) > 5: md += "\npaperurl: '" + item.paper_url + "'" md += "\ncitation: '" + html_escape(item.citation) + "'" md += "\n---" ## Markdown description for individual page if len(str(item.excerpt)) > 5: md += "\n" + html_escape(item.excerpt) + "\n" if len(str(item.paper_url)) > 5: md += "\n[Download paper here](" + item.paper_url + ")\n" md += "\nRecommended citation: " + item.citation md_filename = os.path.basename(md_filename) with open("../_publications/" + md_filename, 'w') as f: f.write(md) ``` These files are in the publications directory, one directory below where we're working from. ``` !ls ../_publications/ !cat ../_publications/2009-10-01-paper-title-number-1.md ```
github_jupyter
``` #!/usr/bin/python # coding: UTF-8 import json import pandas as pd import logging import codecs SimulationName="nii_videodata_jsonl_parse" #ログ設定 log_fmt = '%(asctime)s- %(name)s - %(levelname)s - %(message)s' logger_name = "LOGGER" logging.basicConfig(filename="./Log/" + SimulationName + ".log",format=log_fmt, level=logging.DEBUG) logger = logging.getLogger(logger_name) #処理開始 logger.info("----処理開始----") #カラム名を設定 video_info_columns = ['video_id', 'watch_num', 'comment_num','mylist_num','title','category','upload_time','file_type','length','size_high','size_low'] #データフレームを作成 video_info_data = pd.DataFrame([],columns=video_info_columns) tag_info_data = pd.DataFrame([]) num = 0 #ファイルを指定して1行ずつ処理 with codecs.open('./Input/jsonl_merge.jsonl', 'rb', 'utf-8') as jsonl_file: #with codecs.open('./Input/0000.jsonl', 'rb', 'utf-8') as jsonl_file: #1行読み込み jsonl_file_readline = jsonl_file.readline() #1行jsonパース json_object = json.loads(jsonl_file_readline) while True: try: #リスト作成 video_info_list = [] tag_info_list = [] ##################ここからファイルの処理################## ##カラム名 #['video_id', 'watch_num', 'comment_num','mylist_num','title','category','upload_time','file_type','length','size_high','size_low'] #リストにパースした結果を追加 video_info_list.append(json_object["video_id"].encode('utf-8')) video_info_list.append(str(json_object["watch_num"])) video_info_list.append(str(json_object["comment_num"])) video_info_list.append(str(json_object["mylist_num"])) video_info_list.append(json_object["title"].encode('utf-8')) if json_object["category"] is not None: video_info_list.append(json_object["category"].encode('utf-8')) else: video_info_list.append("") video_info_list.append(str(json_object["upload_time"])) video_info_list.append(json_object["file_type"].encode('utf-8')) video_info_list.append(str(json_object["length"])) video_info_list.append(str(json_object["size_high"])) video_info_list.append(str(json_object["size_low"])) #データフレームに変換 video_info_data_list = pd.DataFrame(video_info_list) video_info_data_list = video_info_data_list.T #出力用のデータフレームに追加 video_info_data = pd.concat([video_info_data, video_info_data_list], axis=0) if (num % 100 == 0): logger.info("ファイル出力の文書番号:" + str(num)) #ファイル出力 video_info_data.to_csv("./Output/video_tsv/video_info_data_" + str(num) + ".tsv",encoding='utf-8',header=False, index=False,sep="\t") #初期化 video_info_data = pd.DataFrame([],columns=video_info_columns) ##################ここからタグの処理################## ##video_id+タグ #タグの配列を取得 tags = json_object["tags"] #タグリストにIDを設定 tag_info_list.append(json_object["video_id"].encode('utf-8')) for tag in tags: tag_info_list.append(tag.encode('utf-8')) #タグのデータフレームに変換 tag_info_data_list = pd.DataFrame(tag_info_list) tag_info_data_list = tag_info_data_list.T #タグのデータフレームに追加 tag_info_data = pd.concat([tag_info_data, tag_info_data_list], axis=0) if (num % 100 == 0): logger.info("タグファイル出力の文書番号:" + str(num)) #タグのファイル出力 tag_info_data.to_csv("./Output/tag_tsv/tag_info_data_" + str(num) + ".tsv",encoding='utf-8',header=False, index=False,sep="\t") #初期化 tag_info_data = pd.DataFrame([]) #文書番号 num = num + 1 #100000件ごとに文書番号を出力 if (num % 100000 == 0): logger.info("処理中の文書番号:" + str(num)) print num #1行読み込み jsonl_file_readline = jsonl_file.readline() #最終行の場合は処理完了 if not jsonl_file_readline: break #1行jsonパース json_object = json.loads(jsonl_file_readline) except : print('Error:' + str(num)) logger.info('Error:' + str(num)) logging.info("文書番号:" + str(num-1)) logging.info("----出力データ作成完了----") logging.shutdown() ```
github_jupyter
# Generative Spaces (ABM) In this workshop we will lwarn how to construct a ABM (Agent Based Model) with spatial behaviours, that is capable of configuring the space. This file is a simplified version of Generative Spatial Agent Based Models. For further information, you can find more advanced versions here: * [Object Oriented version](https://github.com/shervinazadi/spatial_computing_workshops/blob/master/notebooks/w3_generative_spaces.ipynb) * [Vectorized version](https://topogenesis.readthedocs.io/notebooks/random_walker) ## 0. Initialization ### 0.1. Load required libraries ``` # !pip install pyvista==0.28.1 ipyvtklink import os import topogenesis as tg import pyvista as pv import trimesh as tm import pandas as pd import numpy as np np.random.seed(0) ``` ### 0.2. Define the Neighborhood (Stencil) ``` # creating neighborhood definition stencil = tg.create_stencil("von_neumann", 1, 1) # setting the center to zero stencil.set_index([0,0,0], 0) print(stencil) ``` ### 0.3 Visualize the Stencil ``` # initiating the plotter p = pv.Plotter(notebook=True) # Create the spatial reference grid = pv.UniformGrid() # Set the grid dimensions: shape because we want to inject our values grid.dimensions = np.array(stencil.shape) + 1 # The bottom left corner of the data set grid.origin = [0,0,0] # These are the cell sizes along each axis grid.spacing = [1,1,1] # Add the data values to the cell data grid.cell_arrays["values"] = stencil.flatten(order="F") # Flatten the stencil threshed = grid.threshold([0.9, 1.1]) # adding the voxels: light red p.add_mesh(threshed, show_edges=True, color="#ff8fa3", opacity=0.3) # plotting # p.show(use_ipyvtk=True) ``` ## 1. Setup the Environment ### 1.1. Load the envelope lattice as the avialbility lattice ``` # loading the lattice from csv lattice_path = os.path.relpath('../data/voxelized_envelope.csv') avail_lattice = tg.lattice_from_csv(lattice_path) init_avail_lattice = tg.to_lattice(np.copy(avail_lattice), avail_lattice) ``` ### 1.2 Load Program ``` program_complete = pd.read_csv("../data/program_small.csv") program_complete program_prefs = program_complete.drop(["space_name","space_id"], 1) program_prefs ``` ### 1.2 Load the value fields ``` # loading the lattice from csv fields = {} for f in program_prefs.columns: lattice_path = os.path.relpath('../data/' + f + '.csv') fields[f] = tg.lattice_from_csv(lattice_path) ``` ### 1.3. Initialize the Agents ``` # initialize the occupation lattice occ_lattice = avail_lattice * 0 - 1 # Finding the index of the available voxels in avail_lattice avail_flat = avail_lattice.flatten() avail_index = np.array(np.where(avail_lattice == 1)).T # Randomly choosing three available voxels agn_num = len(program_complete) select_id = np.random.choice(len(avail_index), agn_num) agn_origins = avail_index[select_id] # adding the origins to the agents locations agn_locs = [] # for each agent origin ... for a_id, a_origin in enumerate(agn_origins): # add the origin to the list of agent locations agn_locs.append([a_origin]) # set the origin in availablity lattice as 0 (UNavailable) avail_lattice[tuple(a_origin)] = 0 # set the origin in occupation lattice as the agent id (a_id) occ_lattice[tuple(a_origin)] = a_id ``` ### 1.4. Visualize the environment ``` p = pv.Plotter(notebook=True) # Set the grid dimensions: shape + 1 because we want to inject our values on the CELL data grid = pv.UniformGrid() grid.dimensions = np.array(occ_lattice.shape) + 1 # The bottom left corner of the data set grid.origin = occ_lattice.minbound - occ_lattice.unit * 0.5 # These are the cell sizes along each axis grid.spacing = occ_lattice.unit # adding the boundingbox wireframe p.add_mesh(grid.outline(), color="grey", label="Domain") # adding axes p.add_axes() p.show_bounds(grid="back", location="back", color="#777777") # Add the data values to the cell data grid.cell_arrays["Agents"] = occ_lattice.flatten(order="F").astype(int) # Flatten the array! # filtering the voxels threshed = grid.threshold([-0.1, agn_num - 0.9]) # adding the voxels p.add_mesh(threshed, show_edges=True, opacity=1.0, show_scalar_bar=False) # adding the availability lattice init_avail_lattice.fast_vis(p) # p.show(use_ipyvtk=True) ``` ## 2. ABM Simulation (Agent Based Space Occupation) ### 2.1. Running the simulation ``` # make a deep copy of occupation lattice cur_occ_lattice = tg.to_lattice(np.copy(occ_lattice), occ_lattice) # initialzing the list of frames frames = [cur_occ_lattice] # setting the time variable to 0 t = 0 n_frames = 30 # main feedback loop of the simulation (for each time step ...) while t<n_frames: # for each agent ... for a_id, a_prefs in program_complete.iterrows(): # retrieve the list of the locations of the current agent a_locs = agn_locs[a_id] # initialize the list of free neighbours free_neighs = [] # for each location of the agent for loc in a_locs: # retrieve the list of neighbours of the agent based on the stencil neighs = avail_lattice.find_neighbours_masked(stencil, loc = loc) # for each neighbour ... for n in neighs: # compute 3D index of neighbour neigh_3d_id = np.unravel_index(n, avail_lattice.shape) # if the neighbour is available... if avail_lattice[neigh_3d_id]: # add the neighbour to the list of free neighbours free_neighs.append(neigh_3d_id) # check if found any free neighbour if len(free_neighs)>0: # convert free neighbours to a numpy array fns = np.array(free_neighs) # find the value of neighbours # init the agent value array a_eval = np.ones(len(fns)) # for each field... for f in program_prefs.columns: # find the raw value of free neighbours... vals = fields[f][fns[:,0], fns[:,1], fns[:,2]] # raise the the raw value to the power of preference weight of the agent a_weighted_vals = vals ** a_prefs[f] # multiply them to the previous weighted values a_eval *= a_weighted_vals # select the neighbour with highest evaluation selected_int = np.argmax(a_eval) # find 3D integer index of selected neighbour selected_neigh_3d_id = free_neighs[selected_int] # find the location of the newly selected neighbour selected_neigh_loc = np.array(selected_neigh_3d_id).flatten() # add the newly selected neighbour location to agent locations agn_locs[a_id].append(selected_neigh_loc) # set the newly selected neighbour as UNavailable (0) in the availability lattice avail_lattice[selected_neigh_3d_id] = 0 # set the newly selected neighbour as OCCUPIED by current agent # (-1 means not-occupied so a_id) occ_lattice[selected_neigh_3d_id] = a_id # constructing the new lattice new_occ_lattice = tg.to_lattice(np.copy(occ_lattice), occ_lattice) # adding the new lattice to the list of frames frames.append(new_occ_lattice) # adding one to the time counter t += 1 ``` ### 2.2. Visualizing the simulation ``` p = pv.Plotter(notebook=True) base_lattice = frames[0] # Set the grid dimensions: shape + 1 because we want to inject our values on the CELL data grid = pv.UniformGrid() grid.dimensions = np.array(base_lattice.shape) + 1 # The bottom left corner of the data set grid.origin = base_lattice.minbound - base_lattice.unit * 0.5 # These are the cell sizes along each axis grid.spacing = base_lattice.unit # adding the boundingbox wireframe p.add_mesh(grid.outline(), color="grey", label="Domain") # adding the availability lattice init_avail_lattice.fast_vis(p) # adding axes p.add_axes() p.show_bounds(grid="back", location="back", color="#aaaaaa") def create_mesh(value): f = int(value) lattice = frames[f] # Add the data values to the cell data grid.cell_arrays["Agents"] = lattice.flatten(order="F").astype(int) # Flatten the array! # filtering the voxels threshed = grid.threshold([-0.1, agn_num - 0.9]) # adding the voxels p.add_mesh(threshed, name='sphere', show_edges=True, opacity=1.0, show_scalar_bar=False) return p.add_slider_widget(create_mesh, [0, n_frames], title='Time', value=0, event_type="always", style="classic") p.show(use_ipyvtk=True) ``` ### 2.3. Saving lattice frames in CSV ``` for i, lattice in enumerate(frames): csv_path = os.path.relpath('../data/abm_animation/abm_f_'+ f'{i:03}' + '.csv') lattice.to_csv(csv_path) ``` ### Credits ``` __author__ = "Shervin Azadi " __license__ = "MIT" __version__ = "1.0" __url__ = "https://github.com/shervinazadi/spatial_computing_workshops" __summary__ = "Spatial Computing Design Studio Workshop on Agent Based Models for Generative Spaces" ```
github_jupyter
``` # from google.colab import drive # drive.mount('/content/drive') import torch.nn as nn import torch.nn.functional as F import pandas as pd import numpy as np import matplotlib.pyplot as plt import torch import torchvision import torchvision.transforms as transforms from torch.utils.data import Dataset, DataLoader from torchvision import transforms, utils from matplotlib import pyplot as plt import copy # Ignore warnings import warnings warnings.filterwarnings("ignore") transform = transforms.Compose( [transforms.ToTensor(), transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))]) trainset = torchvision.datasets.CIFAR10(root='./data', train=True, download=True, transform=transform) testset = torchvision.datasets.CIFAR10(root='./data', train=False, download=True, transform=transform) trainloader = torch.utils.data.DataLoader(trainset, batch_size=10, shuffle=True) testloader = torch.utils.data.DataLoader(testset, batch_size=10, shuffle=False) classes = ('plane', 'car', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck') foreground_classes = {'plane', 'car', 'bird'} background_classes = {'cat', 'deer', 'dog', 'frog', 'horse','ship', 'truck'} fg1,fg2,fg3 = 0,1,2 dataiter = iter(trainloader) background_data=[] background_label=[] foreground_data=[] foreground_label=[] batch_size=10 for i in range(5000): images, labels = dataiter.next() for j in range(batch_size): if(classes[labels[j]] in background_classes): img = images[j].tolist() background_data.append(img) background_label.append(labels[j]) else: img = images[j].tolist() foreground_data.append(img) foreground_label.append(labels[j]) foreground_data = torch.tensor(foreground_data) foreground_label = torch.tensor(foreground_label) background_data = torch.tensor(background_data) background_label = torch.tensor(background_label) def create_mosaic_img(bg_idx,fg_idx,fg): """ bg_idx : list of indexes of background_data[] to be used as background images in mosaic fg_idx : index of image to be used as foreground image from foreground data fg : at what position/index foreground image has to be stored out of 0-8 """ image_list=[] j=0 for i in range(9): if i != fg: image_list.append(background_data[bg_idx[j]].type("torch.DoubleTensor")) j+=1 else: image_list.append(foreground_data[fg_idx].type("torch.DoubleTensor")) label = foreground_label[fg_idx]- fg1 # minus 7 because our fore ground classes are 7,8,9 but we have to store it as 0,1,2 #image_list = np.concatenate(image_list ,axis=0) image_list = torch.stack(image_list) return image_list,label desired_num = 30000 mosaic_list_of_images =[] # list of mosaic images, each mosaic image is saved as list of 9 images fore_idx =[] # list of indexes at which foreground image is present in a mosaic image i.e from 0 to 9 mosaic_label=[] # label of mosaic image = foreground class present in that mosaic for i in range(desired_num): bg_idx = np.random.randint(0,35000,8) fg_idx = np.random.randint(0,15000) fg = np.random.randint(0,9) fore_idx.append(fg) image_list,label = create_mosaic_img(bg_idx,fg_idx,fg) mosaic_list_of_images.append(image_list) mosaic_label.append(label) class MosaicDataset(Dataset): """MosaicDataset dataset.""" def __init__(self, mosaic_list_of_images, mosaic_label, fore_idx): """ Args: csv_file (string): Path to the csv file with annotations. root_dir (string): Directory with all the images. transform (callable, optional): Optional transform to be applied on a sample. """ self.mosaic = mosaic_list_of_images self.label = mosaic_label self.fore_idx = fore_idx def __len__(self): return len(self.label) def __getitem__(self, idx): return self.mosaic[idx] , self.label[idx], self.fore_idx[idx] batch = 250 msd = MosaicDataset(mosaic_list_of_images, mosaic_label , fore_idx) train_loader = DataLoader( msd,batch_size= batch ,shuffle=True) class Focus(nn.Module): def __init__(self): super(Focus, self).__init__() self.conv1 = nn.Conv2d(in_channels=3, out_channels=6, kernel_size=3, padding=0) self.pool = nn.MaxPool2d(2, 2) self.conv2 = nn.Conv2d(in_channels=6, out_channels=6, kernel_size=3, padding=0) # self.conv3 = nn.Conv2d(in_channels=12, out_channels=32, kernel_size=3, padding=0) self.fc1 = nn.Linear(1014, 512) self.fc2 = nn.Linear(512, 64) # self.fc3 = nn.Linear(512, 64) # self.fc4 = nn.Linear(64, 10) self.fc3 = nn.Linear(64,1) def forward(self,z): #y is avg image #z batch of list of 9 images y = torch.zeros([batch,3, 32,32], dtype=torch.float64) x = torch.zeros([batch,9],dtype=torch.float64) y = y.to("cuda") x = x.to("cuda") for i in range(9): x[:,i] = self.helper(z[:,i])[:,0] x = F.softmax(x,dim=1) x1 = x[:,0] torch.mul(x1[:,None,None,None],z[:,0]) for i in range(9): x1 = x[:,i] y = y + torch.mul(x1[:,None,None,None],z[:,i]) return x, y def helper(self, x): x = self.pool(F.relu(self.conv1(x))) x = (F.relu(self.conv2(x))) # print(x.shape) # x = (F.relu(self.conv3(x))) x = x.view(x.size(0), -1) # print(x.shape) x = F.relu(self.fc1(x)) x = F.relu(self.fc2(x)) # x = F.relu(self.fc3(x)) # x = F.relu(self.fc4(x)) x = self.fc3(x) return x focus_net = Focus().double() focus_net = focus_net.to("cuda") class Classification(nn.Module): def __init__(self): super(Classification, self).__init__() self.conv1 = nn.Conv2d(in_channels=3, out_channels=6, kernel_size=3, padding=0) self.pool = nn.MaxPool2d(2, 2) self.conv2 = nn.Conv2d(in_channels=6, out_channels=18, kernel_size=3, padding=0) # self.conv3 = nn.Conv2d(in_channels=12, out_channels=20, kernel_size=3, padding=0) self.fc1 = nn.Linear(3042, 1024) self.fc2 = nn.Linear(1024, 64) # self.fc3 = nn.Linear(512, 64) # self.fc4 = nn.Linear(64, 10) self.fc3 = nn.Linear(64,3) def forward(self, x): x = self.pool(F.relu(self.conv1(x))) x = (F.relu(self.conv2(x))) # print(x.shape) # x = (F.relu(self.conv3(x))) x = x.view(x.size(0), -1) # print(x.shape) x = F.relu(self.fc1(x)) x = F.relu(self.fc2(x)) # x = F.relu(self.fc3(x)) # x = F.relu(self.fc4(x)) x = self.fc3(x) return x classify = Classification().double() classify = classify.to("cuda") test_images =[] #list of mosaic images, each mosaic image is saved as laist of 9 images fore_idx_test =[] #list of indexes at which foreground image is present in a mosaic image test_label=[] # label of mosaic image = foreground class present in that mosaic for i in range(10000): bg_idx = np.random.randint(0,35000,8) fg_idx = np.random.randint(0,15000) fg = np.random.randint(0,9) fore_idx_test.append(fg) image_list,label = create_mosaic_img(bg_idx,fg_idx,fg) test_images.append(image_list) test_label.append(label) test_data = MosaicDataset(test_images,test_label,fore_idx_test) test_loader = DataLoader( test_data,batch_size= batch ,shuffle=False) import torch.optim as optim criterion = nn.CrossEntropyLoss() optimizer_classify = optim.Adam(classify.parameters(), lr=0.001)#, betas=(0.9, 0.999), eps=1e-08, weight_decay=0, amsgrad=False) optimizer_focus = optim.Adam(focus_net.parameters(), lr=0.001)#, betas=(0.9, 0.999), eps=1e-08, weight_decay=0, amsgrad=False) col1=[] col2=[] col3=[] col4=[] col5=[] col6=[] col7=[] col8=[] col9=[] col10=[] col11=[] col12=[] col13=[] correct = 0 total = 0 count = 0 flag = 1 focus_true_pred_true =0 focus_false_pred_true =0 focus_true_pred_false =0 focus_false_pred_false =0 argmax_more_than_half = 0 argmax_less_than_half =0 with torch.no_grad(): for data in train_loader: inputs, labels , fore_idx = data inputs, labels , fore_idx = inputs.to("cuda"),labels.to("cuda"), fore_idx.to("cuda") alphas, avg_images = focus_net(inputs) outputs = classify(avg_images) _, predicted = torch.max(outputs.data, 1) for j in range(labels.size(0)): count += 1 focus = torch.argmax(alphas[j]) if alphas[j][focus] >= 0.5 : argmax_more_than_half += 1 else: argmax_less_than_half += 1 if(focus == fore_idx[j] and predicted[j] == labels[j]): focus_true_pred_true += 1 elif(focus != fore_idx[j] and predicted[j] == labels[j]): focus_false_pred_true += 1 elif(focus == fore_idx[j] and predicted[j] != labels[j]): focus_true_pred_false += 1 elif(focus != fore_idx[j] and predicted[j] != labels[j]): focus_false_pred_false += 1 total += labels.size(0) correct += (predicted == labels).sum().item() print('Accuracy of the network on the 30000 train images: %d %%' % ( 100 * correct / total)) print("total correct", correct) print("total train set images", total) print("focus_true_pred_true %d =============> FTPT : %d %%" % (focus_true_pred_true , (100 * focus_true_pred_true / total) ) ) print("focus_false_pred_true %d =============> FFPT : %d %%" % (focus_false_pred_true, (100 * focus_false_pred_true / total) ) ) print("focus_true_pred_false %d =============> FTPF : %d %%" %( focus_true_pred_false , ( 100 * focus_true_pred_false / total) ) ) print("focus_false_pred_false %d =============> FFPF : %d %%" % (focus_false_pred_false, ( 100 * focus_false_pred_false / total) ) ) print("argmax_more_than_half ==================> ",argmax_more_than_half) print("argmax_less_than_half ==================> ",argmax_less_than_half) print(count) print("="*100) col1.append(0) col2.append(argmax_more_than_half) col3.append(argmax_less_than_half) col4.append(focus_true_pred_true) col5.append(focus_false_pred_true) col6.append(focus_true_pred_false) col7.append(focus_false_pred_false) correct = 0 total = 0 count = 0 flag = 1 focus_true_pred_true =0 focus_false_pred_true =0 focus_true_pred_false =0 focus_false_pred_false =0 argmax_more_than_half = 0 argmax_less_than_half =0 with torch.no_grad(): for data in test_loader: inputs, labels , fore_idx = data inputs, labels , fore_idx = inputs.to("cuda"),labels.to("cuda"), fore_idx.to("cuda") alphas, avg_images = focus_net(inputs) outputs = classify(avg_images) _, predicted = torch.max(outputs.data, 1) for j in range(labels.size(0)): focus = torch.argmax(alphas[j]) if alphas[j][focus] >= 0.5 : argmax_more_than_half += 1 else: argmax_less_than_half += 1 if(focus == fore_idx[j] and predicted[j] == labels[j]): focus_true_pred_true += 1 elif(focus != fore_idx[j] and predicted[j] == labels[j]): focus_false_pred_true += 1 elif(focus == fore_idx[j] and predicted[j] != labels[j]): focus_true_pred_false += 1 elif(focus != fore_idx[j] and predicted[j] != labels[j]): focus_false_pred_false += 1 total += labels.size(0) correct += (predicted == labels).sum().item() print('Accuracy of the network on the 10000 test images: %d %%' % ( 100 * correct / total)) print("total correct", correct) print("total train set images", total) print("focus_true_pred_true %d =============> FTPT : %d %%" % (focus_true_pred_true , (100 * focus_true_pred_true / total) ) ) print("focus_false_pred_true %d =============> FFPT : %d %%" % (focus_false_pred_true, (100 * focus_false_pred_true / total) ) ) print("focus_true_pred_false %d =============> FTPF : %d %%" %( focus_true_pred_false , ( 100 * focus_true_pred_false / total) ) ) print("focus_false_pred_false %d =============> FFPF : %d %%" % (focus_false_pred_false, ( 100 * focus_false_pred_false / total) ) ) print("argmax_more_than_half ==================> ",argmax_more_than_half) print("argmax_less_than_half ==================> ",argmax_less_than_half) col8.append(argmax_more_than_half) col9.append(argmax_less_than_half) col10.append(focus_true_pred_true) col11.append(focus_false_pred_true) col12.append(focus_true_pred_false) col13.append(focus_false_pred_false) nos_epochs = 200 focus_true_pred_true =0 focus_false_pred_true =0 focus_true_pred_false =0 focus_false_pred_false =0 argmax_more_than_half = 0 argmax_less_than_half =0 for epoch in range(nos_epochs): # loop over the dataset multiple times focus_true_pred_true =0 focus_false_pred_true =0 focus_true_pred_false =0 focus_false_pred_false =0 argmax_more_than_half = 0 argmax_less_than_half =0 running_loss = 0.0 epoch_loss = [] cnt=0 iteration = desired_num // batch #training data set for i, data in enumerate(train_loader): inputs , labels , fore_idx = data inputs, labels = inputs.to("cuda"), labels.to("cuda") # zero the parameter gradients optimizer_focus.zero_grad() optimizer_classify.zero_grad() alphas, avg_images = focus_net(inputs) outputs = classify(avg_images) _, predicted = torch.max(outputs.data, 1) # print(outputs) # print(outputs.shape,labels.shape , torch.argmax(outputs, dim=1)) loss = criterion(outputs, labels) loss.backward() optimizer_focus.step() optimizer_classify.step() running_loss += loss.item() mini = 60 if cnt % mini == mini-1: # print every 40 mini-batches print('[%d, %5d] loss: %.3f' %(epoch + 1, cnt + 1, running_loss / mini)) epoch_loss.append(running_loss/mini) running_loss = 0.0 cnt=cnt+1 if epoch % 5 == 0: for j in range (batch): focus = torch.argmax(alphas[j]) if(alphas[j][focus] >= 0.5): argmax_more_than_half +=1 else: argmax_less_than_half +=1 if(focus == fore_idx[j] and predicted[j] == labels[j]): focus_true_pred_true += 1 elif(focus != fore_idx[j] and predicted[j] == labels[j]): focus_false_pred_true +=1 elif(focus == fore_idx[j] and predicted[j] != labels[j]): focus_true_pred_false +=1 elif(focus != fore_idx[j] and predicted[j] != labels[j]): focus_false_pred_false +=1 if(np.mean(epoch_loss) <= 0.005): break; if epoch % 5 == 0: # focus_net.eval() # classify.eval() col1.append(epoch+1) col2.append(argmax_more_than_half) col3.append(argmax_less_than_half) col4.append(focus_true_pred_true) col5.append(focus_false_pred_true) col6.append(focus_true_pred_false) col7.append(focus_false_pred_false) #************************************************************************ #testing data set with torch.no_grad(): focus_true_pred_true =0 focus_false_pred_true =0 focus_true_pred_false =0 focus_false_pred_false =0 argmax_more_than_half = 0 argmax_less_than_half =0 for data in test_loader: inputs, labels , fore_idx = data inputs, labels = inputs.to("cuda"), labels.to("cuda") alphas, avg_images = focus_net(inputs) outputs = classify(avg_images) _, predicted = torch.max(outputs.data, 1) for j in range (batch): focus = torch.argmax(alphas[j]) if(alphas[j][focus] >= 0.5): argmax_more_than_half +=1 else: argmax_less_than_half +=1 if(focus == fore_idx[j] and predicted[j] == labels[j]): focus_true_pred_true += 1 elif(focus != fore_idx[j] and predicted[j] == labels[j]): focus_false_pred_true +=1 elif(focus == fore_idx[j] and predicted[j] != labels[j]): focus_true_pred_false +=1 elif(focus != fore_idx[j] and predicted[j] != labels[j]): focus_false_pred_false +=1 col8.append(argmax_more_than_half) col9.append(argmax_less_than_half) col10.append(focus_true_pred_true) col11.append(focus_false_pred_true) col12.append(focus_true_pred_false) col13.append(focus_false_pred_false) print('Finished Training') # torch.save(focus_net.state_dict(),"/content/drive/My Drive/Research/Cheating_data/16_experiments_on_cnn_3layers/"+name+"_focus_net.pt") # torch.save(classify.state_dict(),"/content/drive/My Drive/Research/Cheating_data/16_experiments_on_cnn_3layers/"+name+"_classify.pt") columns = ["epochs", "argmax > 0.5" ,"argmax < 0.5", "focus_true_pred_true", "focus_false_pred_true", "focus_true_pred_false", "focus_false_pred_false" ] df_train = pd.DataFrame() df_test = pd.DataFrame() df_train[columns[0]] = col1 df_train[columns[1]] = col2 df_train[columns[2]] = col3 df_train[columns[3]] = col4 df_train[columns[4]] = col5 df_train[columns[5]] = col6 df_train[columns[6]] = col7 df_test[columns[0]] = col1 df_test[columns[1]] = col8 df_test[columns[2]] = col9 df_test[columns[3]] = col10 df_test[columns[4]] = col11 df_test[columns[5]] = col12 df_test[columns[6]] = col13 df_train # plt.figure(12,12) plt.plot(col1,col2, label='argmax > 0.5') plt.plot(col1,col3, label='argmax < 0.5') plt.legend(loc='center left', bbox_to_anchor=(1, 0.5)) plt.xlabel("epochs") plt.ylabel("training data") plt.title("On Training set") plt.show() plt.plot(col1,col4, label ="focus_true_pred_true ") plt.plot(col1,col5, label ="focus_false_pred_true ") plt.plot(col1,col6, label ="focus_true_pred_false ") plt.plot(col1,col7, label ="focus_false_pred_false ") plt.title("On Training set") plt.legend(loc='center left', bbox_to_anchor=(1, 0.5)) plt.xlabel("epochs") plt.ylabel("training data") plt.savefig("train_ftpt.pdf", bbox_inches='tight') plt.show() df_test # plt.figure(12,12) plt.plot(col1,col8, label='argmax > 0.5') plt.plot(col1,col9, label='argmax < 0.5') plt.legend(loc='center left', bbox_to_anchor=(1, 0.5)) plt.xlabel("epochs") plt.ylabel("Testing data") plt.title("On Testing set") plt.show() plt.plot(col1,col10, label ="focus_true_pred_true ") plt.plot(col1,col11, label ="focus_false_pred_true ") plt.plot(col1,col12, label ="focus_true_pred_false ") plt.plot(col1,col13, label ="focus_false_pred_false ") plt.title("On Testing set") plt.legend(loc='center left', bbox_to_anchor=(1, 0.5)) plt.xlabel("epochs") plt.ylabel("Testing data") plt.savefig("test_ftpt.pdf", bbox_inches='tight') plt.show() correct = 0 total = 0 count = 0 flag = 1 focus_true_pred_true =0 focus_false_pred_true =0 focus_true_pred_false =0 focus_false_pred_false =0 argmax_more_than_half = 0 argmax_less_than_half =0 with torch.no_grad(): for data in train_loader: inputs, labels , fore_idx = data inputs, labels , fore_idx = inputs.to("cuda"),labels.to("cuda"), fore_idx.to("cuda") alphas, avg_images = focus_net(inputs) outputs = classify(avg_images) _, predicted = torch.max(outputs.data, 1) for j in range(labels.size(0)): focus = torch.argmax(alphas[j]) if alphas[j][focus] >= 0.5 : argmax_more_than_half += 1 else: argmax_less_than_half += 1 if(focus == fore_idx[j] and predicted[j] == labels[j]): focus_true_pred_true += 1 elif(focus != fore_idx[j] and predicted[j] == labels[j]): focus_false_pred_true += 1 elif(focus == fore_idx[j] and predicted[j] != labels[j]): focus_true_pred_false += 1 elif(focus != fore_idx[j] and predicted[j] != labels[j]): focus_false_pred_false += 1 total += labels.size(0) correct += (predicted == labels).sum().item() print('Accuracy of the network on the 30000 train images: %d %%' % ( 100 * correct / total)) print("total correct", correct) print("total train set images", total) print("focus_true_pred_true %d =============> FTPT : %d %%" % (focus_true_pred_true , (100 * focus_true_pred_true / total) ) ) print("focus_false_pred_true %d =============> FFPT : %d %%" % (focus_false_pred_true, (100 * focus_false_pred_true / total) ) ) print("focus_true_pred_false %d =============> FTPF : %d %%" %( focus_true_pred_false , ( 100 * focus_true_pred_false / total) ) ) print("focus_false_pred_false %d =============> FFPF : %d %%" % (focus_false_pred_false, ( 100 * focus_false_pred_false / total) ) ) print("argmax_more_than_half ==================> ",argmax_more_than_half) print("argmax_less_than_half ==================> ",argmax_less_than_half) correct = 0 total = 0 count = 0 flag = 1 focus_true_pred_true =0 focus_false_pred_true =0 focus_true_pred_false =0 focus_false_pred_false =0 argmax_more_than_half = 0 argmax_less_than_half =0 with torch.no_grad(): for data in test_loader: inputs, labels , fore_idx = data inputs, labels , fore_idx = inputs.to("cuda"),labels.to("cuda"), fore_idx.to("cuda") alphas, avg_images = focus_net(inputs) outputs = classify(avg_images) _, predicted = torch.max(outputs.data, 1) for j in range(labels.size(0)): focus = torch.argmax(alphas[j]) if alphas[j][focus] >= 0.5 : argmax_more_than_half += 1 else: argmax_less_than_half += 1 if(focus == fore_idx[j] and predicted[j] == labels[j]): focus_true_pred_true += 1 elif(focus != fore_idx[j] and predicted[j] == labels[j]): focus_false_pred_true += 1 elif(focus == fore_idx[j] and predicted[j] != labels[j]): focus_true_pred_false += 1 elif(focus != fore_idx[j] and predicted[j] != labels[j]): focus_false_pred_false += 1 total += labels.size(0) correct += (predicted == labels).sum().item() print('Accuracy of the network on the 10000 test images: %d %%' % ( 100 * correct / total)) print("total correct", correct) print("total train set images", total) print("focus_true_pred_true %d =============> FTPT : %d %%" % (focus_true_pred_true , (100 * focus_true_pred_true / total) ) ) print("focus_false_pred_true %d =============> FFPT : %d %%" % (focus_false_pred_true, (100 * focus_false_pred_true / total) ) ) print("focus_true_pred_false %d =============> FTPF : %d %%" %( focus_true_pred_false , ( 100 * focus_true_pred_false / total) ) ) print("focus_false_pred_false %d =============> FFPF : %d %%" % (focus_false_pred_false, ( 100 * focus_false_pred_false / total) ) ) print("argmax_more_than_half ==================> ",argmax_more_than_half) print("argmax_less_than_half ==================> ",argmax_less_than_half) correct = 0 total = 0 with torch.no_grad(): for data in train_loader: inputs, labels , fore_idx = data inputs, labels = inputs.to("cuda"), labels.to("cuda") alphas, avg_images = focus_net(inputs) outputs = classify(avg_images) _, predicted = torch.max(outputs.data, 1) total += labels.size(0) correct += (predicted == labels).sum().item() print('Accuracy of the network on the 30000 train images: %d %%' % ( 100 * correct / total)) print("total correct", correct) print("total train set images", total) correct = 0 total = 0 with torch.no_grad(): for data in test_loader: inputs, labels , fore_idx = data inputs, labels = inputs.to("cuda"), labels.to("cuda") alphas, avg_images = focus_net(inputs) outputs = classify(avg_images) _, predicted = torch.max(outputs.data, 1) total += labels.size(0) correct += (predicted == labels).sum().item() print('Accuracy of the network on the 10000 test images: %d %%' % ( 100 * correct / total)) print("total correct", correct) print("total train set images", total) max_alpha =[] alpha_ftpt=[] argmax_more_than_half=0 argmax_less_than_half=0 for i, data in enumerate(test_loader): inputs, labels,_ = data inputs = inputs.double() inputs, labels = inputs.to("cuda"),labels.to("cuda") alphas, avg = focus_net(inputs) outputs = classify(avg) mx,_ = torch.max(alphas,1) max_alpha.append(mx.cpu().detach().numpy()) for j in range(labels.size(0)): focus = torch.argmax(alphas[j]) if alphas[j][focus] >= 0.5 : argmax_more_than_half += 1 else: argmax_less_than_half += 1 if (focus == fore_idx[j] and predicted[j] == labels[j]): alpha_ftpt.append(alphas[j][focus].item()) max_alpha = np.concatenate(max_alpha,axis=0) print(max_alpha.shape) plt.figure(figsize=(6,6)) _,bins,_ = plt.hist(max_alpha,bins=50,color ="c") plt.title("alpha values histogram") plt.savefig("alpha_hist.pdf") plt.figure(figsize=(6,6)) _,bins,_ = plt.hist(np.array(alpha_ftpt),bins=50,color ="c") plt.title("alpha values in ftpt") plt.savefig("alpha_hist_ftpt.pdf") ```
github_jupyter
# 1. Terrestrial vs solar origins of radiation in Earth's atmosphere ``` import math import numpy as np import matplotlib.pyplot as plt # Define Constants Ts = 5785 # K Te = 255 # K des = 150e9 # m re = 6.371e6 # m rs = 6.96e8 # m h = 6.62e-34 # m^2 kg/s c = 299792458 # m/s k = 1.38e-23 # J/K (kg m2 s-2 K-1) ``` ### (a) ``` def I_lambda(T, lam): intensity = 2 * h * c**2 * (lam**-5)/(np.expm1((h*c)/(lam*k*T))) return intensity lambda_vals_s = np.linspace(0, 5e-6, 201)[1:] lambda_vals_e = np.linspace(0, 5e-5, 201)[1:] Is = I_lambda(Ts, lambda_vals_s) Ie = I_lambda(Te, lambda_vals_e) plt.plot(lambda_vals_s, Is) plt.title('Blackbody Intensity of the Sun') plt.show() plt.plot(lambda_vals_e, Ie) plt.title('Blackbody Intensity of the Earth') plt.show() max_s = lambda_vals_s[np.argmax(Is)]*10**9 max_e = lambda_vals_e[np.argmax(Ie)]*10**6 print(f"The peak wavelength of the Sun's radiation is at {max_s:.0f} nm.") print() print(f"The peak wavelength of the Earth's radiation is at {max_e:.0f} \u03BCm.") ``` ### (b) ``` Is_smax = I_lambda(Ts, max_s) Ie_smax = I_lambda(Te, max_s) Is_emax = I_lambda(Ts, max_e) Ie_emax = I_lambda(Te, max_e) ratio_smax = Is_smax / Ie_smax ratio_emax = Is_emax / Ie_emax print(f"The ratio of the spectra at {max_s:.0f} nm is {ratio_smax}.") print(f"The ratio of the spectra at {max_e:.0f} \u03BCm is {ratio_emax}.") ``` ### (c) ``` s_emit_area = 4 * np.pi * des**2 # emits radiation as a shell with radius des e_absorb_area = np.pi * re**2 # absorbs radiation as a disk with radius re frac_earth = e_absorb_area / s_emit_area Is_earth = Is * frac_earth plt.plot(lambda_vals_s, Is_earth) plt.title('Intensity at Earth') plt.show() Is_smax_earth = Is_smax * frac_earth Is_emax_earth = Is_emax * frac_earth ratio_smax_earth = Is_smax_earth / Ie_smax ratio_emax_earth = Is_emax_earth / Ie_emax print(f"The ratio of the spectra at Earth's atmosphere at {max_s:.0f} nm is {ratio_smax_earth}.") print(f"The ratio of the spectra at Earth's atmosphere at {max_e:.0f} \u03BCm is {ratio_emax_earth}.") ``` ### (d) ``` Is_earth_full = I_lambda(Ts, lambda_vals_e) * frac_earth plt.plot(lambda_vals_e, Is_earth_full, lambda_vals_e, Ie) plt.xlim([0, 0.4e-5]) plt.ylim([0, 7e3]) plt.title('Intensity ') plt.show() ``` The spectra overlap at a wavelength of about 2.5 micrometers. ``` import scipy.integrate as integrate def intens_ratio(lam): ratio = (I_lambda(Ts, lam)*frac_earth) / I_lambda(Te, lam) return ratio rad = integrate.quad(intens_ratio, 2.5e-6, 100e-6) print(rad[0]) ``` The ratio from lambda_overlap to 100 um tells us the relative fraction of the radiation at the top of the atmosphere between 2.5 and 100 um that is coming from the Sun. The amount of longwave radiation at the top of the atmosphere that originates from the sun is a tiny amount when compared to the amount of longwave radiation that comes from the Earth. ### (e) The 4th power in the Stefan-Boltzmann equation is a result of the energy spectrum of photons. The photon spectrum, which is the energy density per unit photon energy, depends on the third power of the photon energy (1 for each spatial dimension) that is proportional to T. To find the total energy density, we integrate over all the photon energies, which gives us another factor of T so that we end up with a 4th power. The 4 comes from integrating the 3 spatial dimensions. # 2. Climate of Flatland I did most of Question 2 on paper. ### (b) ``` import scipy.optimize as so def eq(x): return (x*np.exp(x)) / (np.expm1(x)) - 4 x_init = 4 # initial guess based on 3d version x = so.fsolve(eq, x_init)[0] wein_const = (h*c) / (x*k) # m K print(f"Wein's Law in 2D: \u03BBT = {(wein_const*10**6):.0f} \u03BCm K") T = 5785 # K l_max = wein_const / T * 10**9 print(f"The solar intensity peaks at \u03BB = {l_max:.0f} nm") ``` ### (c) ``` A = 2.404 sig_2d = (k**3 * A) / (h**2 * c) print(f"\u03C3 = {sig_2d:.2e} W/m/K^3") print(f"The 2D Stefan-Boltzmann equation is \u03C3T^3") ``` ### (d) ``` S0 = sig_2d * T**3 rad_earth = S0 * re / 2 print(f"The radiation that reaches Earth averaged over its 1D surface is {rad_earth:.2e} W/m") alpha = 0.3 T_earth = (((1-alpha)*S0*re) / (2*sig_2d)) ** (1/3) print(f"The temperature of the 2D Earth is {T_earth:.2f} K.") ``` # 3. Radiative forcing and global warming in a two-layer atmosphere model ``` sig = 5.67e-8 # W/m^2 K^4 so = 1370 # W/m^2 alpha = 0.3 ``` ### (a) ``` eps1 = 0.65 eps2 = 0.25 Tsurf4 = ((1-alpha)*(so/4)*(4-eps1*eps2)) / (sig*(2-eps1)*(2-eps2)) T14 = Tsurf4 * ((2+eps2-eps1*eps2) / (4-eps1*eps2)) T24 = Tsurf4 * ((2-eps1) / (4-eps1*eps2)) Tsurf = Tsurf4**(1/4) T1 = T14**(1/4) T2 = T24**(1/4) print(f'Ts: {Tsurf:.2f} K') print(f'T1: {T1:.2f} K') print(f'T2: {T2:.2f} K') ``` ### (b) ``` eps2_prime = 0.29 def TOA(e1, e2): return (1-e1)*(1-e2)*sig*Tsurf4 + (1-e2)*sig*T14 + e2*sig*T24 delta_TOA = TOA(eps1, eps2_prime) - TOA(eps1, eps2) print(f'The change in net TOA radiation flux is {delta_TOA:0.2f} W/m^2.') ``` This is roughly double the amount that we calculated in class for a doubling of CO2 (-3.9 W/m^2). ### (c) ``` def surf_flux(e1, e2): return (1-alpha)*so/4 + e1*sig*T14 + (1-e1)*e2*sig*T24 - sig*Tsurf4 delta_surf_flux = surf_flux(eps1, eps2_prime) - surf_flux(eps1, eps2) print(f'The change in net surface radiation flux is {delta_surf_flux:0.2f} W/m^2.') ``` Because the TOA radiation flux decreases and the surface radiation flux increases, I expect Ts, T1, and T2 to increase once they are allowed to adjust. ### (d) ``` T14_new = Tsurf4 * ((2+eps2_prime-eps1*eps2_prime) / (4-eps1*eps2_prime)) T24_new = Tsurf4 * ((2-eps1) / (4-eps1*eps2_prime)) T1_new = T14_new**(1/4) T2_new = T24_new**(1/4) print(f'Adjusted T1: {T1_new:.2f} K') print(f'Adjusted T2: {T2_new:.2f} K') def TOA_new(e1, e2): return (1-e1)*(1-e2)*sig*Tsurf4 + (1-e2)*sig*T14_new + e2*sig*T24_new def surf_flux_new(e1, e2): return (1-alpha)*so/4 + e1*sig*T14_new + (1-e1)*e2*sig*T24_new - sig*Tsurf4 delta_TOA_new = TOA_new(eps1, eps2_prime) - TOA_new(eps1, eps2) print(f'The adjusted change in net TOA radiation flux is {delta_TOA_new:0.2f} W/m^2.') delta_surf_flux_new = surf_flux_new(eps1, eps2_prime) - surf_flux_new(eps1, eps2) print(f'The adjusted change in net surface radiation flux is {delta_surf_flux_new:0.2f} W/m^2.') ``` The effective radiative forcing is larger than the instantaneous radiative forcing. ### (e) ``` Tsurf_new = (((1-alpha)*(so/4)*(4-eps1*eps2_prime)) / (sig*(2-eps1)*(2-eps2_prime)))**(1/4) print(f'The Equilibrium Climate Sensitivity is {(Tsurf_new-Tsurf):.2f} K.') ``` This ECS value is below the canonical ECS range of 2-5 K. Possible climate processes not in this model that could explain this difference include changes in surface albedo, changes in cloud cover, and ocean dynamics. These are all sensitive to changes in radiative forcing and could influence the ECS.
github_jupyter
# Introduction This notebook illustrates how to access your database instance using Python by following the steps below: 1. Import the `ibm_db` Python library 1. Identify and enter the database connection credentials 1. Create the database connection 1. Create a table 1. Insert data into the table 1. Query data from the table 1. Retrieve the result set into a pandas dataframe 1. Close the database connection __Notice:__ Please follow the instructions given in the first Lab of this course to Create a database service instance of Db2 on Cloud. ## Task 1: Import the `ibm_db` Python library The `ibm_db` [API ](https://pypi.python.org/pypi/ibm_db/) provides a variety of useful Python functions for accessing and manipulating data in an IBM® data server database, including functions for connecting to a database, preparing and issuing SQL statements, fetching rows from result sets, calling stored procedures, committing and rolling back transactions, handling errors, and retrieving metadata. We import the ibm_db library into our Python Application ``` import ibm_db ``` When the command above completes, the `ibm_db` library is loaded in your notebook. ## Task 2: Identify the database connection credentials Connecting to dashDB or DB2 database requires the following information: * Driver Name * Database name * Host DNS name or IP address * Host port * Connection protocol * User ID * User Password __Notice:__ To obtain credentials please refer to the instructions given in the first Lab of this course Now enter your database credentials below Replace the placeholder values in angular brackets <> below with your actual database credentials e.g. replace "database" with "BLUDB" ``` #Replace the placeholder values with the actuals for your Db2 Service Credentials dsn_driver = "{IBM DB2 ODBC DRIVER}" dsn_database = "database" # e.g. "BLUDB" dsn_hostname = "hostname" # e.g.: "dashdb-txn-sbox-yp-dal09-04.services.dal.bluemix.net" dsn_port = "port" # e.g. "50000" dsn_protocol = "protocol" # i.e. "TCPIP" dsn_uid = "username" # e.g. "abc12345" dsn_pwd = "password" # e.g. "7dBZ3wWt9XN6$o0J" # @hidden_cell dsn_driver = "DATABASE=BLUDB;HOSTNAME=dashdb-txn-sbox-yp-dal09-03.services.dal.bluemix.net;PORT=50000;PROTOCOL=TCPIP;UID=wvb91528;PWD=tm^1nlbn4dj3j04b;" dsn_database = "BLUDB" dsn_hostname = "dashdb-txn-sbox-yp-dal09-03.services.dal.bluemix.net" dsn_port = "50000" dsn_protocol = "TCPIP" dsn_uid = "wvb91528" dsn_pwd = "tm^1nlbn4dj3j04b" ``` ## Task 3: Create the database connection Ibm_db API uses the IBM Data Server Driver for ODBC and CLI APIs to connect to IBM DB2 and Informix. Create the database connection ``` #Create database connection #DO NOT MODIFY THIS CELL. Just RUN it with Shift + Enter dsn = ( "DRIVER={0};" "DATABASE={1};" "HOSTNAME={2};" "PORT={3};" "PROTOCOL={4};" "UID={5};" "PWD={6};").format(dsn_driver, dsn_database, dsn_hostname, dsn_port, dsn_protocol, dsn_uid, dsn_pwd) try: conn = ibm_db.connect(dsn, "", "") print ("Connected to database: ", dsn_database) #print ("Connected to database: ", dsn_database, "as user: ", dsn_uid, "on host: ", dsn_hostname) except: print ("Unable to connect: ", ibm_db.conn_errormsg() ) ``` ## Task 4: Create a table in the database In this step we will create a table in the database with following details: <img src = "https://ibm.box.com/shared/static/ztd2cn4xkdoj5erlk4hhng39kbp63s1h.jpg" align="center"> ``` #Lets first drop the table INSTRUCTOR in case it exists from a previous attempt dropQuery = "drop table INSTRUCTOR" #Now execute the drop statment dropStmt = ibm_db.exec_immediate(conn, dropQuery) ``` ## Dont worry if you get this error: If you see an exception/error similar to the following, indicating that INSTRUCTOR is an undefined name, that's okay. It just implies that the INSTRUCTOR table does not exist in the table - which would be the case if you had not created it previously. Exception: [IBM][CLI Driver][DB2/LINUXX8664] SQL0204N "ABC12345.INSTRUCTOR" is an undefined name. SQLSTATE=42704 SQLCODE=-204 ``` #Construct the Create Table DDL statement - replace the ... with rest of the statement createQuery = "create table INSTRUCTOR(id INTEGER PRIMARY KEY NOT NULL, fname varchar(15), lname varchar(15), city varchar(15), ccode char(3))" #Now fill in the name of the method and execute the statement createStmt = ibm_db.exec_immediate(conn, createQuery) ``` ## Task 5: Insert data into the table In this step we will insert some rows of data into the table. The INSTRUCTOR table we created in the previous step contains 3 rows of data: <img src="https://ibm.box.com/shared/static/j5yjassxefrjknivfpekj7698dqe4d8i.jpg" align="center"> We will start by inserting just the first row of data, i.e. for instructor Rav Ahuja ``` #Construct the query - replace ... with the insert statement insertQuery = "insert into INSTRUCTOR (id, fname, lname, city, ccode) values (1, 'Rav', 'Ahuja', 'Toronto', 'CA');" #execute the insert statement insertStmt = ibm_db.exec_immediate(conn, insertQuery) ``` Now use a single query to insert the remaining two rows of data ``` #replace ... with the insert statement that inerts the remaining two rows of data insertQuery2 = "insert into INSTRUCTOR values (2, 'Raul', 'Chong', 'Markham', 'CA'), \ (3, 'Hima', 'Vasudevan', 'Chicago', 'US')" #execute the statement insertStmt2 = ibm_db.exec_immediate(conn, insertQuery2) ``` ## Task 6: Query data in the table In this step we will retrieve data we inserted into the INSTRUCTOR table. ``` #Construct the query that retrieves all rows from the INSTRUCTOR table selectQuery = "select * from INSTRUCTOR" #Execute the statement selectStmt = ibm_db.exec_immediate(conn, selectQuery) #Fetch the Dictionary (for the first row only) - replace ... with your code ibm_db.fetch_both(selectStmt) #Fetch the rest of the rows and print the ID and FNAME for those rows while ibm_db.fetch_row(selectStmt) != False: print (" ID:", ibm_db.result(selectStmt, 0), " FNAME:", ibm_db.result(selectStmt, "FNAME")) ``` Double-click __here__ for the solution. <!-- Hint: #Fetch the rest of the rows and print the ID and FNAME for those rows while ibm_db.fetch_row(selectStmt) != False: print (" ID:", ibm_db.result(selectStmt, 0), " FNAME:", ibm_db.result(selectStmt, "FNAME")) --> Bonus: now write and execute an update statement that changes the Rav's CITY to MOOSETOWN ``` #Enter your code below updateQuery = "update INSTRUCTOR set city='Moosetown' where fname='Rav' " updateStm = ibm_db.exec_immediate(conn, updateQuery) ``` ## Task 7: Retrieve data into Pandas In this step we will retrieve the contents of the INSTRUCTOR table into a Pandas dataframe ``` import pandas import ibm_db_dbi #connection for pandas pconn = ibm_db_dbi.Connection(conn) #query statement to retrieve all rows in INSTRUCTOR table selectQuery = "select * from INSTRUCTOR" #retrieve the query results into a pandas dataframe pdf = pandas.read_sql(selectQuery, pconn) #print just the LNAME for first row in the pandas data frame pdf.LNAME[0] #print the entire data frame pdf ``` Once the data is in a Pandas dataframe, you can do the typical pandas operations on it. For example you can use the shape method to see how many rows and columns are in the dataframe ``` pdf.shape ``` ## Task 8: Close the Connection We free all resources by closing the connection. Remember that it is always important to close connections so that we can avoid unused connections taking up resources. ``` ibm_db.close(conn) ``` ## Summary In this tutorial you established a connection to a database instance of DB2 Warehouse on Cloud from a Python notebook using ibm_db API. Then created a table and insert a few rows of data into it. Then queried the data. You also retrieved the data into a pandas dataframe. Copyright &copy; 2017-2018 [cognitiveclass.ai](cognitiveclass.ai?utm_source=bducopyrightlink&utm_medium=dswb&utm_campaign=bdu). This notebook and its source code are released under the terms of the [MIT License](https://bigdatauniversity.com/mit-license/).
github_jupyter
# W3 Lab: Perception In this lab, we will learn basic usage of `pandas` library and then perform a small experiment to test the perception of length and area. ``` import pandas as pd import math import matplotlib.pyplot as plt %matplotlib inline ``` ## Vega datasets Before going into the perception experiment, let's first talk about some handy datasets that you can play with. It's nice to have clean datasets handy to practice data visualization. There is a nice small package called [`vega-datasets`](https://github.com/altair-viz/vega_datasets), from the [altair project](https://github.com/altair-viz). You can install the package by running $ pip install vega-datasets or $ pip3 install vega-datasets Once you install the package, you can import and see the list of datasets: ``` from vega_datasets import data data.list_datasets() ``` or you can work with only smaller, local datasets. ``` from vega_datasets import local_data local_data.list_datasets() ``` Ah, we have the `anscombe` data here! Let's see the description of the dataset. ``` local_data.anscombe.description ``` ## Anscombe's quartet dataset How does the actual data look like? Very conveniently, calling the dataset returns a Pandas dataframe for you. ``` df = local_data.anscombe() df.head() ``` **Q1: can you draw a scatterplot of the dataset "I"?** You can filter the dataframe based on the `Series` column and use `plot` function that you used for the Snow's map. ``` # TODO: put your code here ``` ## Some histograms with pandas Let's look at a slightly more complicated dataset. ``` car_df = local_data.cars() car_df.head() ``` Pandas provides useful summary functions. It identifies numerical data columns and provides you with a table of summary statistics. ``` car_df.describe() ``` If you ask to draw a histogram, you get all of them. :) ``` car_df.hist() ``` Well this is too small. You can check out [the documentation](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.hist.html) and change the size of the figure. **Q2: by consulting the documentation, can you make the figure larger so that we can see all the labels clearly? And then make the layout 2 x 3 not 3 x 2, then change the number of bins to 20?** ``` # TODO: put your code here ``` ## Your own psychophysics experiment! Let's do an experiment! The procedure is as follows: 1. Generate a random number between \[1, 10\]; 1. Use a horizontal bar to represent the number, i.e., the length of the bar is equal to the number; 1. Guess the length of the bar by comparing it to two other bars with length 1 and 10 respectively; 1. Store your guess (perceived length) and actual length to two separate lists; 1. Repeat the above steps many times; 1. How does the perception of length differ from that of area?. First, let's define the length of a short and a long bar. We also create two empty lists to store perceived and actual length. ``` import random import time import numpy as np l_short_bar = 1 l_long_bar = 10 perceived_length_list = [] actual_length_list = [] ``` ### Perception of length Let's run the experiment. The [**`random`**](https://docs.python.org/3.6/library/random.html) module in Python provides various random number generators, and the [**`random.uniform(a,b)`**](https://docs.python.org/3.6/library/random.html#random.uniform) function returns a floating point number in \[a,b\]. We can plot horizontal bars using the [**`pyplot.barh()`**](http://matplotlib.org/api/pyplot_api.html#matplotlib.pyplot.barh) function. Using this function, we can produce a bar graph that looks like this: ``` mystery_length = random.uniform(1, 10) # generate a number between 1 and 10. this is the *actual* length. plt.barh(np.arange(3), [l_short_bar, mystery_length, l_long_bar], align='center') plt.yticks(np.arange(3), ('1', '?', '10')) plt.xticks([]) # no hint! ``` Btw, `np.arange` is used to create a simple integer list `[0, 1, 2]`. ``` np.arange(3) ``` Now let's define a function to perform the experiment once. When you run this function, it picks a random number between 1.0 and 10.0 and show the bar chart. Then it asks you to input your estimate of the length of the middle bar. It then saves that number to the `perceived_length_list` and the actual answer to the `actual_length_list`. ``` def run_exp_once(): mystery_length = random.uniform(1, 10) # generate a number between 1 and 10. plt.barh(np.arange(3), [l_short_bar, mystery_length, l_long_bar], height=0.5, align='center') plt.yticks(np.arange(3), ('1', '?', '10')) plt.xticks([]) # no hint! plt.show() try: perceived_length_list.append( float(input()) ) except: print("This should only fail in workflow. If you are running this in browser, this won't fail.") pass actual_length_list.append(mystery_length) run_exp_once() ``` Now, run the experiment many times to gather your data. Check the two lists to make sure that you have the proper dataset. The length of the two lists should be the same. ``` # TODO: Run your experiment many times here ``` ### Plotting the result Now we can draw the scatter plot of perceived and actual length. The `matplotlib`'s [**`scatter()`**](http://matplotlib.org/api/pyplot_api.html#matplotlib.pyplot.scatter) function will do this. This is the backend of the pandas' scatterplot. Here is an example of how to use `scatter`: ``` plt.scatter(x=[1,5,10], y=[1,10, 5]) ``` **Q3: Now plot your result using the `scatter()` function. You should also use `plt.title()`, `plt.xlabel()`, and `plt.ylabel()` to label your axes and the plot itself.** ``` # TODO: put your code here ``` After plotting, let's fit the relation between actual and perceived lengths using a polynomial function. We can easily do it using [**`curve_fit(f, x, y)`**](http://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.curve_fit.html) in Scipy, which is to fit $x$ and $y$ using the function `f`. In our case, $f = a*x^b +c$. For instance, we can check whether this works by creating a fake dataset that follows the exact form: ``` from scipy.optimize import curve_fit def func(x, a, b, c): return a * np.power(x, b) + c x = np.arange(20) # [0,1,2,3, ..., 19] y = np.power(x, 2) # [0,1,4,9, ... ] popt, pcov = curve_fit(func, x, y) print('{:.2f} x^{:.2f} + {:.2f}'.format(*popt)) ``` In order to plot the function to check the relationship between the actual and perceived lenghts, you can use two variables `x` and `y` to plot the relationship where `x` equals to a series of continuous numbers. For example, if your x axis ranges from 1 to 9 then the variable `x` could be equal to `np.linspace(1, 10, 50)`. The variable `y` will contain the equation that you get from `popt`. For example, if you get equation `1.00 x^2.00 + 0.00` then the variable `y` would be equal to `1.0 * x**2.0 + 0`. After assigning `x` and `y` variables you will plot them in combination with the scatter plot of actual and perceived values to check if you get a linear relationship or not. **Q4: Now fit your data!** Do you see roughly linear relationship between the actual and the perceived lengths? It's ok if you don't! ``` # TODO: your code here ``` ### Perception of area Similar to the above experiment, we now represent a random number as a circle, and the area of the circle is equal to the number. First, calculate the radius of a circle from its area and then plot using the **`Circle()`** function. `plt.Circle((0,0), r)` will plot a circle centered at (0,0) with radius `r`. ``` n1 = 0.005 n2 = 0.05 radius1 = np.sqrt(n1/np.pi) # area = pi * r * r radius2 = np.sqrt(n2/np.pi) random_radius = np.sqrt(n1*random.uniform(1,10)/np.pi) plt.axis('equal') plt.axis('off') circ1 = plt.Circle( (0,0), radius1, clip_on=False ) circ2 = plt.Circle( (4*radius2,0), radius2, clip_on=False ) rand_circ = plt.Circle((2*radius2,0), random_radius, clip_on=False ) plt.gca().add_artist(circ1) plt.gca().add_artist(circ2) plt.gca().add_artist(rand_circ) ``` Let's have two lists for this experiment. ``` perceived_area_list = [] actual_area_list = [] ``` And define a function for the experiment. ``` def run_area_exp_once(n1=0.005, n2=0.05): radius1 = np.sqrt(n1/np.pi) # area = pi * r * r radius2 = np.sqrt(n2/np.pi) mystery_number = random.uniform(1,10) random_radius = np.sqrt(n1*mystery_number/math.pi) plt.axis('equal') plt.axis('off') circ1 = plt.Circle( (0,0), radius1, clip_on=False ) circ2 = plt.Circle( (4*radius2,0), radius2, clip_on=False ) rand_circ = plt.Circle((2*radius2,0), random_radius, clip_on=False ) plt.gca().add_artist(circ1) plt.gca().add_artist(circ2) plt.gca().add_artist(rand_circ) plt.show() perceived_area_list.append( float(input()) ) actual_area_list.append(mystery_number) ``` **Q5: Now you can run the experiment many times, plot the result, and fit a power-law curve!** ``` # TODO: put your code here. You can use multiple cells. ``` What is your result? How are the exponents different from each other? ``` ```
github_jupyter
##### Copyright 2018 The TensorFlow Authors. Licensed under the Apache License, Version 2.0 (the "License"); ``` #@title Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. ``` # Custom Training Walkthrough <table align="left"><td> <a target="_blank" href="https://colab.sandbox.google.com/github/tensorflow/models/blob/master/samples/core/get_started/eager.ipynb"> <img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a> </td><td> <a target="_blank" href="https://github.com/tensorflow/models/blob/master/samples/core/get_started/eager.ipynb"><img width=32px src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on Github</a></td></table> This guide uses machine learning to *categorize* Iris flowers by species. It uses [TensorFlow](https://www.tensorflow.org)'s eager execution to: 1. Build a model, 2. Train this model on example data, and 3. Use the model to make predictions about unknown data. Machine learning experience isn't required, but you'll need to read some Python code. For more eager execution guides and examples, see [these notebooks](https://github.com/tensorflow/tensorflow/tree/master/tensorflow/contrib/eager/python/examples/notebooks). ## TensorFlow programming There are many [TensorFlow APIs](https://www.tensorflow.org/api_docs/python/) available, but start with these high-level TensorFlow concepts: * Enable an [eager execution](https://www.tensorflow.org/programmers_guide/eager) development environment, * Import data with the [Datasets API](https://www.tensorflow.org/programmers_guide/datasets), * Build models and layers with TensorFlow's [Keras API](https://keras.io/getting-started/sequential-model-guide/). This tutorial is structured like many TensorFlow programs: 1. Import and parse the data sets. 2. Select the type of model. 3. Train the model. 4. Evaluate the model's effectiveness. 5. Use the trained model to make predictions. For more TensorFlow examples, see the [Get Started](https://www.tensorflow.org/get_started/) and [Tutorials](https://www.tensorflow.org/tutorials/) sections. To learn machine learning basics, consider taking the [Machine Learning Crash Course](https://developers.google.com/machine-learning/crash-course/). ## Run the notebook This tutorial is available as an interactive [Colab notebook](https://colab.research.google.com) that can execute and modify Python code directly in the browser. The notebook handles setup and dependencies while you "play" cells to run the code blocks. This is a fun way to explore the program and test ideas. If you are unfamiliar with Python notebook environments, there are a couple of things to keep in mind: 1. Executing code requires connecting to a runtime environment. In the Colab notebook menu, select *Runtime > Connect to runtime...* 2. Notebook cells are arranged sequentially to gradually build the program. Typically, later code cells depend on prior code cells, though you can always rerun a code block. To execute the entire notebook in order, select *Runtime > Run all*. To rerun a code cell, select the cell and click the *play icon* on the left. ## Setup program ### Install the latest version of TensorFlow This tutorial uses eager execution, which is available in [TensorFlow 1.8](https://www.tensorflow.org/install/). (You may need to restart the runtime after upgrading.) ``` !pip install --upgrade tensorflow ``` ### Configure imports and eager execution Import the required Python modules—including TensorFlow—and enable eager execution for this program. Eager execution makes TensorFlow evaluate operations immediately, returning concrete values instead of creating a [computational graph](https://www.tensorflow.org/programmers_guide/graphs) that is executed later. If you are used to a REPL or the `python` interactive console, this feels familiar. Once eager execution is enabled, it *cannot* be disabled within the same program. See the [eager execution guide](https://www.tensorflow.org/programmers_guide/eager) for more details. ``` from __future__ import absolute_import, division, print_function import os import matplotlib.pyplot as plt import tensorflow as tf import tensorflow.contrib.eager as tfe tf.enable_eager_execution() print("TensorFlow version: {}".format(tf.VERSION)) print("Eager execution: {}".format(tf.executing_eagerly())) ``` ## The Iris classification problem Imagine you are a botanist seeking an automated way to categorize each Iris flower you find. Machine learning provides many algorithms to statistically classify flowers. For instance, a sophisticated machine learning program could classify flowers based on photographs. Our ambitions are more modest—we're going to classify Iris flowers based on the length and width measurements of their [sepals](https://en.wikipedia.org/wiki/Sepal) and [petals](https://en.wikipedia.org/wiki/Petal). The Iris genus entails about 300 species, but our program will only classify the following three: * Iris setosa * Iris virginica * Iris versicolor <table> <tr><td> <img src="https://www.tensorflow.org/images/iris_three_species.jpg" alt="Petal geometry compared for three iris species: Iris setosa, Iris virginica, and Iris versicolor"> </td></tr> <tr><td align="center"> <b>Figure 1.</b> <a href="https://commons.wikimedia.org/w/index.php?curid=170298">Iris setosa</a> (by <a href="https://commons.wikimedia.org/wiki/User:Radomil">Radomil</a>, CC BY-SA 3.0), <a href="https://commons.wikimedia.org/w/index.php?curid=248095">Iris versicolor</a>, (by <a href="https://commons.wikimedia.org/wiki/User:Dlanglois">Dlanglois</a>, CC BY-SA 3.0), and <a href="https://www.flickr.com/photos/33397993@N05/3352169862">Iris virginica</a> (by <a href="https://www.flickr.com/photos/33397993@N05">Frank Mayfield</a>, CC BY-SA 2.0).<br/>&nbsp; </td></tr> </table> Fortunately, someone has already created a [data set of 120 Iris flowers](https://en.wikipedia.org/wiki/Iris_flower_data_set) with the sepal and petal measurements. This is a classic dataset that is popular for beginner machine learning classification problems. ## Import and parse the training dataset Download the dataset file and convert it to a structure that can be used by this Python program. ### Download the dataset Download the training dataset file using the [tf.keras.utils.get_file](https://www.tensorflow.org/api_docs/python/tf/keras/utils/get_file) function. This returns the file path of the downloaded file. ``` train_dataset_url = "http://download.tensorflow.org/data/iris_training.csv" train_dataset_fp = tf.keras.utils.get_file(fname=os.path.basename(train_dataset_url), origin=train_dataset_url) print("Local copy of the dataset file: {}".format(train_dataset_fp)) ``` ### Inspect the data This dataset, `iris_training.csv`, is a plain text file that stores tabular data formatted as comma-separated values (CSV). Use the `head -n5` command to take a peak at the first five entries: ``` !head -n5 {train_dataset_fp} ``` From this view of the dataset, notice the following: 1. The first line is a header containing information about the dataset: * There are 120 total examples. Each example has four features and one of three possible label names. 2. Subsequent rows are data records, one *[example](https://developers.google.com/machine-learning/glossary/#example)* per line, where: * The first four fields are *[features](https://developers.google.com/machine-learning/glossary/#feature)*: these are characteristics of an example. Here, the fields hold float numbers representing flower measurements. * The last column is the *[label](https://developers.google.com/machine-learning/glossary/#label)*: this is the value we want to predict. For this dataset, it's an integer value of 0, 1, or 2 that corresponds to a flower name. Let's write that out in code: ``` # column order in CSV file column_names = ['sepal_length', 'sepal_width', 'petal_length', 'petal_width', 'species'] feature_names = column_names[:-1] label_name = column_names[-1] print("Features: {}".format(feature_names)) print("Label: {}".format(label_name)) ``` Each label is associated with string name (for example, "setosa"), but machine learning typically relies on numeric values. The label numbers are mapped to a named representation, such as: * `0`: Iris setosa * `1`: Iris versicolor * `2`: Iris virginica For more information about features and labels, see the [ML Terminology section of the Machine Learning Crash Course](https://developers.google.com/machine-learning/crash-course/framing/ml-terminology). ``` class_names = ['Iris setosa', 'Iris versicolor', 'Iris virginica'] ``` ### Create a `tf.data.Dataset` TensorFlow's [Dataset API](https://www.tensorflow.org/programmers_guide/datasets) handles many common cases for loading data into a model. This is a high-level API for reading data and transforming it into a form used for training. See the [Datasets Quick Start guide](https://www.tensorflow.org/get_started/datasets_quickstart) for more information. Since the dataset is a CSV-formatted text file, use the the [make_csv_dataset](https://www.tensorflow.org/api_docs/python/tf/contrib/data/make_csv_dataset) function to parse the data into a suitable format. Since this function generates data for training models, the default behavior is to shuffle the data (`shuffle=True, shuffle_buffer_size=10000`), and repeat the dataset forever (`num_epochs=None`). We also set the [batch_size](https://developers.google.com/machine-learning/glossary/#batch_size) parameter. ``` batch_size = 32 train_dataset = tf.contrib.data.make_csv_dataset( train_dataset_fp, batch_size, column_names=column_names, label_name=label_name, num_epochs=1) ``` The `make_csv_dataset` function returns a `tf.data.Dataset` of `(features, label)` pairs, where `features` is a dictionary: `{'feature_name': value}` With eager execution enabled, these `Dataset` objects are iterable. Let's look at a batch of features: ``` features, labels = next(iter(train_dataset)) features ``` Notice that like-features are grouped together, or *batched*. Each example row's fields are appended to the corresponding feature array. Change the `batch_size` to set the number of examples stored in these feature arrays. You can start to see some clusters by plotting a few features from the batch: ``` plt.scatter(features['petal_length'], features['sepal_length'], c=labels, cmap='viridis') plt.xlabel("Petal length") plt.ylabel("Sepal length"); ``` To simplify the model building step, create a function to repackage the features dictionary into a single array with shape: `(batch_size, num_features)`. This function uses the [tf.stack](https://www.tensorflow.org/api_docs/python/tf/stack) method which takes values from a list of tensors and creates a combined tensor at the specified dimension. ``` def pack_features_vector(features, labels): """Pack the features into a single array.""" features = tf.stack(list(features.values()), axis=1) return features, labels ``` Then use the [tf.data.Dataset.map](https://www.tensorflow.org/api_docs/python/tf/data/dataset/map) method to pack the `features` of each `(features,label)` pair into the training dataset: ``` train_dataset = train_dataset.map(pack_features_vector) ``` The features element of the `Dataset` are now arrays with shape `(batch_size, num_features)`. Let's look at the first few examples: ``` features, labels = next(iter(train_dataset)) print(features[:5]) ``` ## Select the type of model ### Why model? A *[model](https://developers.google.com/machine-learning/crash-course/glossary#model)* is the relationship between features and the label. For the Iris classification problem, the model defines the relationship between the sepal and petal measurements and the predicted Iris species. Some simple models can be described with a few lines of algebra, but complex machine learning models have a large number of parameters that are difficult to summarize. Could you determine the relationship between the four features and the Iris species *without* using machine learning? That is, could you use traditional programming techniques (for example, a lot of conditional statements) to create a model? Perhaps—if you analyzed the dataset long enough to determine the relationships between petal and sepal measurements to a particular species. And this becomes difficult—maybe impossible—on more complicated datasets. A good machine learning approach *determines the model for you*. If you feed enough representative examples into the right machine learning model type, the program will figure out the relationships for you. ### Select the model We need to select the kind of model to train. There are many types of models and picking a good one takes experience. This tutorial uses a neural network to solve the Iris classification problem. *[Neural networks](https://developers.google.com/machine-learning/glossary/#neural_network)* can find complex relationships between features and the label. It is a highly-structured graph, organized into one or more *[hidden layers](https://developers.google.com/machine-learning/glossary/#hidden_layer)*. Each hidden layer consists of one or more *[neurons](https://developers.google.com/machine-learning/glossary/#neuron)*. There are several categories of neural networks and this program uses a dense, or *[fully-connected neural network](https://developers.google.com/machine-learning/glossary/#fully_connected_layer)*: the neurons in one layer receive input connections from *every* neuron in the previous layer. For example, Figure 2 illustrates a dense neural network consisting of an input layer, two hidden layers, and an output layer: <table> <tr><td> <img src="https://www.tensorflow.org/images/custom_estimators/full_network.png" alt="A diagram of the network architecture: Inputs, 2 hidden layers, and outputs"> </td></tr> <tr><td align="center"> <b>Figure 2.</b> A neural network with features, hidden layers, and predictions.<br/>&nbsp; </td></tr> </table> When the model from Figure 2 is trained and fed an unlabeled example, it yields three predictions: the likelihood that this flower is the given Iris species. This prediction is called *[inference](https://developers.google.com/machine-learning/crash-course/glossary#inference)*. For this example, the sum of the output predictions is 1.0. In Figure 2, this prediction breaks down as: `0.03` for *Iris setosa*, `0.95` for *Iris versicolor*, and `0.02` for *Iris virginica*. This means that the model predicts—with 95% probability—that an unlabeled example flower is an *Iris versicolor*. ### Create a model using Keras The TensorFlow [tf.keras](https://www.tensorflow.org/api_docs/python/tf/keras) API is the preferred way to create models and layers. This makes it easy to build models and experiment while Keras handles the complexity of connecting everything together. The [tf.keras.Sequential](https://www.tensorflow.org/api_docs/python/tf/keras/Sequential) model is a linear stack of layers. Its constructor takes a list of layer instances, in this case, two [Dense](https://www.tensorflow.org/api_docs/python/tf/keras/layers/Dense) layers with 10 nodes each, and an output layer with 3 nodes representing our label predictions. The first layer's `input_shape` parameter corresponds to the number of features from the dataset, and is required. ``` model = tf.keras.Sequential([ tf.keras.layers.Dense(10, activation=tf.nn.relu, input_shape=(4,)), # input shape required tf.keras.layers.Dense(10, activation=tf.nn.relu), tf.keras.layers.Dense(3) ]) ``` The *[activation function](https://developers.google.com/machine-learning/crash-course/glossary#activation_function)* determines the output shape of each node in the layer. These non-linearities are important—without them the model would be equivalent to a single layer. There are many [available activations](https://www.tensorflow.org/api_docs/python/tf/keras/activations), but [ReLU](https://developers.google.com/machine-learning/crash-course/glossary#ReLU) is common for hidden layers. The ideal number of hidden layers and neurons depends on the problem and the dataset. Like many aspects of machine learning, picking the best shape of the neural network requires a mixture of knowledge and experimentation. As a rule of thumb, increasing the number of hidden layers and neurons typically creates a more powerful model, which requires more data to train effectively. ### Using the model Let's have a quick look at what this model does to a batch of features: ``` predictions = model(features) predictions[:5] ``` Here, each example returns a [logit](https://developers.google.com/machine-learning/crash-course/glossary#logit) for each class. To convert these logits to a probability for each class, use the [softmax](https://developers.google.com/machine-learning/crash-course/glossary#softmax) function: ``` tf.nn.softmax(predictions[:5]) ``` Taking the `tf.argmax` across classes gives us the predicted class index. But, the model hasn't been trained yet, so these aren't good predictions. ``` print("Prediction: {}".format(tf.argmax(predictions, axis=1))) print(" Labels: {}".format(labels)) ``` ## Train the model *[Training](https://developers.google.com/machine-learning/crash-course/glossary#training)* is the stage of machine learning when the model is gradually optimized, or the model *learns* the dataset. The goal is to learn enough about the structure of the training dataset to make predictions about unseen data. If you learn *too much* about the training dataset, then the predictions only work for the data it has seen and will not be generalizable. This problem is called *[overfitting](https://developers.google.com/machine-learning/crash-course/glossary#overfitting)*—it's like memorizing the answers instead of understanding how to solve a problem. The Iris classification problem is an example of *[supervised machine learning](https://developers.google.com/machine-learning/glossary/#supervised_machine_learning)*: the model is trained from examples that contain labels. In *[unsupervised machine learning](https://developers.google.com/machine-learning/glossary/#unsupervised_machine_learning)*, the examples don't contain labels. Instead, the model typically finds patterns among the features. ### Define the loss and gradient function Both training and evaluation stages need to calculate the model's *[loss](https://developers.google.com/machine-learning/crash-course/glossary#loss)*. This measures how off a model's predictions are from the desired label, in other words, how bad the model is performing. We want to minimize, or optimize, this value. Our model will calculate its loss using the [tf.keras.losses.categorical_crossentropy](https://www.tensorflow.org/api_docs/python/tf/losses/sparse_softmax_cross_entropy) function which takes the model's class probability predictions and the desired label, and returns the average loss across the examples. ``` def loss(model, x, y): y_ = model(x) return tf.losses.sparse_softmax_cross_entropy(labels=y, logits=y_) l = loss(model, features, labels) print("Loss test: {}".format(l)) ``` Use the [tf.GradientTape](https://www.tensorflow.org/api_docs/python/tf/GradientTape) context to calculate the *[gradients](https://developers.google.com/machine-learning/crash-course/glossary#gradient)* used to optimize our model. For more examples of this, see the [eager execution guide](https://www.tensorflow.org/programmers_guide/eager). ``` def grad(model, inputs, targets): with tf.GradientTape() as tape: loss_value = loss(model, inputs, targets) return loss_value, tape.gradient(loss_value, model.trainable_variables) ``` ### Create an optimizer An *[optimizer](https://developers.google.com/machine-learning/crash-course/glossary#optimizer)* applies the computed gradients to the model's variables to minimize the `loss` function. You can think of the loss function as a curved surface (see Figure 3) and we want to find its lowest point by walking around. The gradients point in the direction of steepest ascent—so we'll travel the opposite way and move down the hill. By iteratively calculating the loss and gradient for each batch, we'll adjust the model during training. Gradually, the model will find the best combination of weights and bias to minimize loss. And the lower the loss, the better the model's predictions. <table> <tr><td> <img src="https://cs231n.github.io/assets/nn3/opt1.gif" width="70%" alt="Optimization algorthims visualized over time in 3D space."> </td></tr> <tr><td align="center"> <b>Figure 3.</b> Optimization algorithms visualized over time in 3D space. (Source: <a href="http://cs231n.github.io/neural-networks-3/">Stanford class CS231n</a>, MIT License)<br/>&nbsp; </td></tr> </table> TensorFlow has many [optimization algorithms](https://www.tensorflow.org/api_guides/python/train) available for training. This model uses the [tf.train.GradientDescentOptimizer](https://www.tensorflow.org/api_docs/python/tf/train/GradientDescentOptimizer) that implements the *[stochastic gradient descent](https://developers.google.com/machine-learning/crash-course/glossary#gradient_descent)* (SGD) algorithm. The `learning_rate` sets the step size to take for each iteration down the hill. This is a *hyperparameter* that you'll commonly adjust to achieve better results. Let's setup the optimizer and the `global_step` counter: ``` optimizer = tf.train.GradientDescentOptimizer(learning_rate=0.01) global_step = tf.train.get_or_create_global_step() ``` We'll use this to calculate a single optimization step: ``` loss_value, grads = grad(model, features, labels) print("Step: {}, Initial Loss: {}".format(global_step.numpy(), loss_value.numpy())) optimizer.apply_gradients(zip(grads, model.variables), global_step) print("Step: {}, Loss: {}".format(global_step.numpy(), loss(model, features, labels).numpy())) ``` ### Training loop With all the pieces in place, the model is ready for training! A training loop feeds the dataset examples into the model to help it make better predictions. The following code block sets up these training steps: 1. Iterate each *epoch*. An epoch is one pass through the dataset. 2. Within an epoch, iterate over each example in the training `Dataset` grabbing its *features* (`x`) and *label* (`y`). 3. Using the example's features, make a prediction and compare it with the label. Measure the inaccuracy of the prediction and use that to calculate the model's loss and gradients. 4. Use an `optimizer` to update the model's variables. 5. Keep track of some stats for visualization. 6. Repeat for each epoch. The `num_epochs` variable is the amount of times to loop over the dataset collection. Counter-intuitively, training a model longer does not guarantee a better model. `num_epochs` is a *[hyperparameter](https://developers.google.com/machine-learning/glossary/#hyperparameter)* that you can tune. Choosing the right number usually requires both experience and experimentation. ``` ## Note: Rerunning this cell uses the same model variables # keep results for plotting train_loss_results = [] train_accuracy_results = [] num_epochs = 201 for epoch in range(num_epochs): epoch_loss_avg = tfe.metrics.Mean() epoch_accuracy = tfe.metrics.Accuracy() # Training loop - using batches of 32 for x, y in train_dataset: # Optimize the model loss_value, grads = grad(model, x, y) optimizer.apply_gradients(zip(grads, model.variables), global_step) # Track progress epoch_loss_avg(loss_value) # add current batch loss # compare predicted label to actual label epoch_accuracy(tf.argmax(model(x), axis=1, output_type=tf.int32), y) # end epoch train_loss_results.append(epoch_loss_avg.result()) train_accuracy_results.append(epoch_accuracy.result()) if epoch % 50 == 0: print("Epoch {:03d}: Loss: {:.3f}, Accuracy: {:.3%}".format(epoch, epoch_loss_avg.result(), epoch_accuracy.result())) ``` ### Visualize the loss function over time While it's helpful to print out the model's training progress, it's often *more* helpful to see this progress. [TensorBoard](https://www.tensorflow.org/programmers_guide/summaries_and_tensorboard) is a nice visualization tool that is packaged with TensorFlow, but we can create basic charts using the `matplotlib` module. Interpreting these charts takes some experience, but you really want to see the *loss* go down and the *accuracy* go up. ``` fig, axes = plt.subplots(2, sharex=True, figsize=(12, 8)) fig.suptitle('Training Metrics') axes[0].set_ylabel("Loss", fontsize=14) axes[0].plot(train_loss_results) axes[1].set_ylabel("Accuracy", fontsize=14) axes[1].set_xlabel("Epoch", fontsize=14) axes[1].plot(train_accuracy_results); ``` ## Evaluate the model's effectiveness Now that the model is trained, we can get some statistics on its performance. *Evaluating* means determining how effectively the model makes predictions. To determine the model's effectiveness at Iris classification, pass some sepal and petal measurements to the model and ask the model to predict what Iris species they represent. Then compare the model's prediction against the actual label. For example, a model that picked the correct species on half the input examples has an *[accuracy](https://developers.google.com/machine-learning/glossary/#accuracy)* of `0.5`. Figure 4 shows a slightly more effective model, getting 4 out of 5 predictions correct at 80% accuracy: <table cellpadding="8" border="0"> <colgroup> <col span="4" > <col span="1" bgcolor="lightblue"> <col span="1" bgcolor="lightgreen"> </colgroup> <tr bgcolor="lightgray"> <th colspan="4">Example features</th> <th colspan="1">Label</th> <th colspan="1" >Model prediction</th> </tr> <tr> <td>5.9</td><td>3.0</td><td>4.3</td><td>1.5</td><td align="center">1</td><td align="center">1</td> </tr> <tr> <td>6.9</td><td>3.1</td><td>5.4</td><td>2.1</td><td align="center">2</td><td align="center">2</td> </tr> <tr> <td>5.1</td><td>3.3</td><td>1.7</td><td>0.5</td><td align="center">0</td><td align="center">0</td> </tr> <tr> <td>6.0</td> <td>3.4</td> <td>4.5</td> <td>1.6</td> <td align="center">1</td><td align="center" bgcolor="red">2</td> </tr> <tr> <td>5.5</td><td>2.5</td><td>4.0</td><td>1.3</td><td align="center">1</td><td align="center">1</td> </tr> <tr><td align="center" colspan="6"> <b>Figure 4.</b> An Iris classifier that is 80% accurate.<br/>&nbsp; </td></tr> </table> ### Setup the test dataset Evaluating the model is similar to training the model. The biggest difference is the examples come from a separate *[test set](https://developers.google.com/machine-learning/crash-course/glossary#test_set)* rather than the training set. To fairly assess a model's effectiveness, the examples used to evaluate a model must be different from the examples used to train the model. The setup for the test `Dataset` is similar to the setup for training `Dataset`. Download the CSV text file and parse that values, then give it a little shuffle: ``` test_url = "http://download.tensorflow.org/data/iris_test.csv" test_fp = tf.keras.utils.get_file(fname=os.path.basename(test_url), origin=test_url) test_dataset = tf.contrib.data.make_csv_dataset( train_dataset_fp, batch_size, column_names=column_names, label_name='species', num_epochs=1, shuffle=False) test_dataset = test_dataset.map(pack_features_vector) ``` ### Evaluate the model on the test dataset Unlike the training stage, the model only evaluates a single [epoch](https://developers.google.com/machine-learning/glossary/#epoch) of the test data. In the following code cell, we iterate over each example in the test set and compare the model's prediction against the actual label. This is used to measure the model's accuracy across the entire test set. ``` test_accuracy = tfe.metrics.Accuracy() for (x, y) in test_dataset: logits = model(x) prediction = tf.argmax(logits, axis=1, output_type=tf.int32) test_accuracy(prediction, y) print("Test set accuracy: {:.3%}".format(test_accuracy.result())) ``` We can see on the last batch, for example, the model is usually correct: ``` tf.stack([y,prediction],axis=1) ``` ## Use the trained model to make predictions We've trained a model and "proven" that it's good—but not perfect—at classifying Iris species. Now let's use the trained model to make some predictions on [unlabeled examples](https://developers.google.com/machine-learning/glossary/#unlabeled_example); that is, on examples that contain features but not a label. In real-life, the unlabeled examples could come from lots of different sources including apps, CSV files, and data feeds. For now, we're going to manually provide three unlabeled examples to predict their labels. Recall, the label numbers are mapped to a named representation as: * `0`: Iris setosa * `1`: Iris versicolor * `2`: Iris virginica ``` predict_dataset = tf.convert_to_tensor([ [5.1, 3.3, 1.7, 0.5,], [5.9, 3.0, 4.2, 1.5,], [6.9, 3.1, 5.4, 2.1] ]) predictions = model(predict_dataset) for i, logits in enumerate(predictions): class_idx = tf.argmax(logits).numpy() p = tf.nn.softmax(logits)[class_idx] name = class_names[class_idx] print("Example {} prediction: {} ({:4.1f}%)".format(i, name, 100*p)) ``` These predictions look good! To dig deeper into machine learning models, take a look at the TensorFlow [Programmer's Guide](https://www.tensorflow.org/programmers_guide/) and check out the [community](https://www.tensorflow.org/community/). ## Next steps For more eager execution guides and examples, see [these notebooks](https://github.com/tensorflow/tensorflow/tree/master/tensorflow/contrib/eager/python/examples/notebooks).
github_jupyter
``` import pandas as pd import numpy as np name = 'Revere' #name of the town town = pd.read_csv(name+'-google.csv') #file name, in this case they all followed format "town-google.csv" #strip all white space and split the types into a list for easier searching town['type_list'] = town['types'].str.replace(' ','').str.split(',') #code_dict maps tags to 4 digit NAICS codes #sometimes >4 digits if the more specigic naics code is trivially easy to find #these are basically all assgined by hand code_dict = {} code_dict['bar'] = 7224 code_dict['liquor_store'] = 4248 code_dict['grocery_or_supermarket'] = 4244 code_dict['secondary_school'] = 6111 code_dict['school'] = 6111 code_dict['lodging'] = 7211 code_dict['car_dealer'] = 4411 code_dict['bakery'] = 4452 code_dict['car_repair'] = 8111 code_dict['jewelry_store'] = 4239 code_dict['bank'] = 5221 code_dict['department_store'] = 4521 code_dict['gym'] = 7139 code_dict['dentist'] = 6212 code_dict['hardware_store'] = 4237 code_dict['furniture_store'] = 4232 code_dict['pharmacy'] = 4461 code_dict['drugstore'] = 4461 code_dict['clothing_store'] = 4481 code_dict['pet_store'] = 4539 code_dict['electronics_store'] = 4431 code_dict['local_government_office'] = 9211 code_dict['city_hall'] = 9211 code_dict['place_of_worship'] = 8131 code_dict['electrician'] = 2382 code_dict['restaurant'] = 7225 code_dict['convenience_store'] = 44512 code_dict['shoe_store'] = 4482 code_dict['hair_care'] = 81211 code_dict['doctor'] = 6211 code_dict['insurance_agency'] = 5242 code_dict['lawyer'] = 5411 code_dict['veterinary_care'] = 54194 code_dict['book_store'] = 451211 code_dict['university'] = 6113 code_dict['funeral_home'] = 8122 code_dict['post_office'] = 4911 code_dict['library'] = 51912 code_dict['roofing_contractor'] = 2381 code_dict['storage'] = 4931 code_dict['atm'] = 5221 #used for credit union code_dict['movie_theater'] = 5121 code_dict['florist'] = 4531 code_dict['beauty_salon'] = 8121 code_dict['spa'] = 8121 code_dict['real_estate_agency'] = 5312 code_dict['home_goods_store'] = 4422 code_dict['movie_rental'] = 5322 code_dict['hospital'] = 6221 code_dict['moving_company'] = 4842 code_dict['police'] = 9221 #iterate through, assigning NAICS codes based on the dictionary above #the remaining set will keep track of tags for any business that still does not yet have a code asssigned #use the remaining set to add more keys to the dictionary remaining = set() for row in range(len(town)): types = town['type_list'][row] for elem in types: if elem in code_dict: town.at[row, 'naics'] = code_dict[elem] break else: #if the loop finishes, then we didn't find any match remaining |= set(town['type_list'][row]) print(remaining) #show what tags are still remaining #show all of the rows that were not assigned a NAICS code #these typically will be tags like "point_of_interest" or "establishment" town.loc[town['naics'] != town['naics']] #delete the extra column del town['type_list'] #save the file, name formatting can be changed to liking town.to_csv(name+'-naics.csv', index = False) ```
github_jupyter
# Pre-procesamiento de datos ![image.png](attachment:a70264d0-d460-4c9e-bee9-fd86c37a94b5.png) ## Candidaturas elegidas Principales transformaciones: - Selección de atributos - Tratamiento de valores faltantes ``` import glob import nltk import re import pandas as pd from string import punctuation df_deputadas_1934_2023 = pd.read_csv('dados/deputadas_1934_2023.csv') df_deputadas_1934_2023.shape df_deputadas_1934_2023.head(5) ``` <div class="alert-warning"> Candidaturas elegidas: Selección de atributos para análisis </div> ``` df_deputadas = df_deputadas_1934_2023[['id', 'siglaPartido', 'siglaUf', 'idLegislatura', 'sexo']] df_deputadas.head(5) ``` <div class="alert-warning"> Candidaturas elegidas: Ajuste de los valores faltantes </div> ``` df_deputadas.isnull().sum(axis = 0) df_deputadas['siglaPartido'].fillna('sem partido', inplace=True) df_deputadas.isnull().sum(axis = 0) df_deputadas.to_csv('dados/candidaturas_eleitas(1).csv', index=False) ``` ## Legislaturas Principales tranformaciones: - Convertir fecha completa en año ``` tipo_data = ['dataInicio', 'dataFim'] df_legislaturas = pd.read_csv('dados/legislaturas_1934_2023.csv', parse_dates=tipo_data) df_legislaturas.info() df_legislaturas.head() ``` <div class="alert-warning"> Legislaturas: extracción de año </div> ``` df_legislaturas['dataInicio'] = df_legislaturas['dataInicio'].dt.year df_legislaturas['dataFim'] = df_legislaturas['dataFim'].dt.year df_legislaturas.head() df_legislaturas.to_csv('dados/legislaturas_1934_2023_limpas(1).csv', index=False) ``` ## Proposiciones legislativas Principales transformaciones: - Selección de los tipos de propuestas legislativas deseadas - Selección de atributos - Ajustes de valores faltantes - Extracción de palabras claves de las ementas - Remoción de stopwords, meses, puntuación, números - Remoción de palabras con menos de 3 caracteres y semanticamente irrelevantes - Remoción de bigramas semanticamente irrelevantes ``` lista_proposicoes = glob.glob('dados/proposicoes/propo*') tipos_dados = { 'id': object, 'uri': object, 'siglaTipo': object, 'numero': object, 'ano': int, 'codTipo': object, 'descricaoTipo': object, 'ementa': object, 'ementaDetalhada': object, 'keywords': object, 'uriOrgaoNumerador': object, 'uriPropAnterior': object, 'uriPropPrincipal': object, 'uriPropPosterior': object, 'urlInteiroTeor': object, 'urnFinal': object, 'ultimoStatus_sequencia': object, 'ultimoStatus_uriRelator': object, 'ultimoStatus_idOrgao': object, 'ultimoStatus_siglaOrgao': object, 'ultimoStatus_uriOrgao': object, 'ultimoStatus_regime': object, 'ultimoStatus_descricaoTramitacao': object, 'ultimoStatus_idTipoTramitacao': object, 'ultimoStatus_descricaoSituacao': object, 'ultimoStatus_idSituacao': object, 'ultimoStatus_despacho': object, 'ultimoStatus_url': object } tipo_data = ['dataApresentacao', 'ultimoStatus_dataHora'] lista_df = [] for proposicao in lista_proposicoes: df_proposicao = pd.read_csv(proposicao, sep=';', dtype=tipos_dados, parse_dates=tipo_data) lista_df.append(df_proposicao) df_proposicao_1934_2021 = pd.concat(lista_df, axis=0, ignore_index=True) df_proposicao_1934_2021.shape ``` <div class="alert-warning"> Proposiciones legislativas: Selección de los tipos de propuestas legislativas </div> - Projeto de Decreto Legislativo [SF] (PDL) - Projeto de Decreto Legislativo [CD] (PDC) - Projeto de Decreto Legislativo [CN] (PDN) - Projeto de Decreto Legislativo [SF] (PDS) - Proposta de Emenda à Constituição (PEC) - Projeto de Lei (PL) - Projeto de Lei da Câmara (PLC) - Projeto de Lei Complementar (PLP) - Projeto de Lei de Conversão (PLV) - Projeto de Resolução da Câmara dos Deputados (PRC) ``` tipos_proposicoes = ['PDS', 'PDC', 'PDN', 'PEC', 'PL', 'PLC', 'PLP', 'PLV', 'PRC'] df_proposicoes_tipos_desejados = df_proposicao_1934_2021[df_proposicao_1934_2021['siglaTipo'].isin(tipos_proposicoes)].copy() df_proposicoes_tipos_desejados.shape ``` <div class="alert-warning"> Proposiciones legislativas: Selección de atributos para análisis </div> ``` df_proposicoes = df_proposicoes_tipos_desejados[['id','siglaTipo','ano', 'codTipo', 'descricaoTipo', 'ementa', 'ementaDetalhada', 'keywords']].copy() df_proposicoes.shape ``` <div class="alert-warning"> Proposiciones legislativas: Ajuste de valores faltantes </div> ``` df_proposicoes.isnull().sum(axis = 0) df_proposicoes[ (df_proposicoes['ementa'].isnull()) & (df_proposicoes['ementaDetalhada'].isnull()) & (df_proposicoes['keywords'].isnull())].head() df_proposicoes.dropna(axis=0, subset=['ementa'], inplace=True) df_proposicoes.shape ``` <div class="alert-warning"> Proposiciones legislativas: Normalización de las keywords existentes </div> ``` df_proposicoes_com_keywords = df_proposicoes[df_proposicoes['keywords'].notna()].copy() df_proposicoes[df_proposicoes['keywords'].notna()] nltk.download('punkt') nltk.download('stopwords') ``` <div class="alert-warning"> Proposiciones legislativas: Funcciones para borrar la puntuación, preposiciones, números y artículos</div> ``` meses = ['janeiro', 'fevereiro', 'março', 'abril', 'maio', 'junho', 'julho','agosto', 'setembro', 'outubro', 'novembro', 'dezembro'] def define_stopwords_punctuation(): stopwords = nltk.corpus.stopwords.words('portuguese') + meses pontuacao = list(punctuation) stopwords.extend(pontuacao) return stopwords def remove_stopwords_punctuation_da_sentenca(texto): padrao_digitos = r'[0-9]' texto = re.sub(padrao_digitos, '', texto) palavras = nltk.tokenize.word_tokenize(texto.lower()) stopwords = define_stopwords_punctuation() keywords = [palavra for palavra in palavras if palavra not in stopwords] return keywords df_proposicoes_com_keywords['keywords'] = df_proposicoes_com_keywords['keywords'].apply(remove_stopwords_punctuation_da_sentenca) def converte_lista_string(lista): return ','.join([palavra for palavra in lista]) df_proposicoes_com_keywords['keywords'] = df_proposicoes_com_keywords['keywords'].apply(converte_lista_string) ``` <div class="alert-warning"> Proposiciones legislativas: Borra las proposiciones que quedaron sin keywords despues de la limpieza</div> ``` df_proposicoes_com_keywords = df_proposicoes_com_keywords[df_proposicoes_com_keywords['keywords'] != ''] df_proposicoes_com_keywords.head() ``` <div class="alert-warning"> Proposiciones legislativas: Saca `keywords` de la columna `ementa` </div> ``` df_proposicoes_sem_keywords = df_proposicoes[df_proposicoes['keywords'].isna()].copy() df_proposicoes_sem_keywords['keywords'] = df_proposicoes_sem_keywords['ementa'].apply(remove_stopwords_punctuation_da_sentenca) lista_keywords = [] lista_keywords_temp = df_proposicoes_sem_keywords['keywords'].tolist() _ = [lista_keywords.extend(item) for item in lista_keywords_temp] palavras_para_descarte = [item for item in set(lista_keywords) if len(item) <= 3] substantivos_nao_descartaveis = ['cão', 'mãe', 'oab', 'boa', 'pré', 'voz', 'rui', 'uva', 'gás', 'glp', 'apa'] palavras_para_descarte_refinada = [palavra for palavra in palavras_para_descarte if palavra not in substantivos_nao_descartaveis] def remove_palavras_para_descarte_da_sentenca(texto): keywords = [] for palavra in texto: if palavra not in palavras_para_descarte_refinada: keywords.append(palavra) return keywords df_proposicoes_sem_keywords['keywords'] = df_proposicoes_sem_keywords['keywords'].apply(remove_palavras_para_descarte_da_sentenca) ``` <div class="alert-warning"> Proposiciones legislativas: Tratamiento de bigramas </div> ``` def gera_n_grams(texto, ngram=2): temporario = zip(*[texto[indice:] for indice in range(0,ngram)]) resultado = [' '.join(ngram) for ngram in temporario] return resultado df_proposicoes_sem_keywords['bigrams'] = df_proposicoes_sem_keywords['keywords'].apply(gera_n_grams) lista_ngrams = [] lista_ngrams_temp = df_proposicoes_sem_keywords['bigrams'].tolist() _ = [lista_ngrams.extend(item) for item in lista_ngrams_temp] bigrams_comuns = nltk.FreqDist(lista_ngrams).most_common(50) lista_bigramas_comuns = [bigrama for bigrama, frequencia in bigrams_comuns] ``` <div class="alert-warning"> Proposiciones legislativas: Selección de los bigramas semanticamente irrelevantes </div> ``` lista_bigramas_comuns_limpa = ['dispõe sobre', 'outras providências', 'nova redação', 'poder executivo', 'distrito federal', 'autoriza poder', 'federal outras','redação constituição', 'dispõe sôbre', 'código penal', 'artigo constituição', 'disposições constitucionais', 'altera dispõe', 'decreto-lei código', 'constitucionais transitórias', 'altera redação', 'abre ministério', 'executivo abrir', 'redação artigo', 'sobre criação', 'acrescenta parágrafo', 'parágrafo único', 'concede isenção', 'altera dispositivos', 'altera complementar', 'dispondo sobre', 'código processo', 'outras providências.', 'providências. historico', 'ministério fazenda', 'altera leis', 'programa nacional', 'quadro permanente', 'outras providencias', 'inciso constituição', 'abrir ministério', 'estabelece normas', 'ministério justiça', 'tempo serviço', 'instituto nacional', 'institui sistema', 'operações crédito', 'altera institui', 'dispõe sôbre'] palavras_para_descarte_origem_bigramas = [] _ = [palavras_para_descarte_origem_bigramas.extend(bigrama.split(' ')) for bigrama in lista_bigramas_comuns_limpa] palavras_para_descarte_origem_bigramas_unicas = set(palavras_para_descarte_origem_bigramas) def remove_palavras_origem_bigramas_da_sentenca(texto): keywords = [] for palavra in texto: if palavra not in palavras_para_descarte_origem_bigramas_unicas: keywords.append(palavra) return keywords df_proposicoes_sem_keywords['keywords'] = df_proposicoes_sem_keywords['keywords'].apply(remove_palavras_origem_bigramas_da_sentenca) df_proposicoes_sem_keywords['keywords'] = df_proposicoes_sem_keywords['keywords'].apply(converte_lista_string) df_proposicoes_sem_keywords = df_proposicoes_sem_keywords.drop(columns=['bigrams']) ``` <div class="alert-warning"> Proposiciones legislativas: Borra las proposiciones que quedaron sin keywords despues de la limpieza</div> ``` df_proposicoes_sem_keywords = df_proposicoes_sem_keywords[df_proposicoes_sem_keywords['keywords'] != ''] df_proposicoes_sem_keywords[df_proposicoes_sem_keywords['keywords']== ''] ``` <div class="alert-warning"> Proposiciones legislativas: Reuni los dataframes</div> ``` df_proposicoes_v_final = pd.concat([df_proposicoes_com_keywords, df_proposicoes_sem_keywords]) df_proposicoes_v_final.shape df_proposicoes_v_final.info() df_proposicoes_v_final.to_csv('dados/proposicoes_legislativas_limpas(1).csv', index=False) ``` # Creación de vocabulario ![image.png](attachment:039e803b-dbba-4fcd-839c-aa74b2e8469c.png) Antes de hacer el análisis de los temas de las proposiciones hacía falta clasificarlas con un vocabulario controlado. Así que, usando el conjunto de datos "temas de proposições" clasifiqué algunas proposiciones relativas a protección de derechos de grupos históricamente marginados, a saber: campesinos, mujeres, población LGTQIA+, negros, ancianos, discapacitados, artistas, poblaciones económicamente vulnerables y pueblos indígenas. Principales etapas: - Reunir todas las palabras claves - Atribuir manualmente palabras a temas - Atribuir tema a proposiciones que contenía la palabra clave ``` proposicoes = pd.read_csv('dados/proposicoes_legislativas_limpas(1).csv') proposicoes.info() ``` Reunião de palavras chaves para classificação ``` keywords = proposicoes['keywords'] vocabulario = [] for proposicao in keywords: lista = proposicao.split(',') vocabulario.extend(lista) vocabulario_unico = set(vocabulario) with open('dados/vocabulario.txt', 'w') as palavras: for termo in vocabulario_unico: palavras.write(termo + '\n') ``` <div class="alert-warning">Relacioné manualmente palabras claves a uno de los temas del conjunto de datos "Temas"</div> ``` vocabulario_temp = pd.read_csv('dados/temas_vocabulario.csv') vocabulario_temp.head() ``` <div class="alert-warning"> Crié el vocabuario</div> ``` vocabulario = pd.DataFrame(columns=['cod', 'tema', 'palavra_chave']) indices = vocabulario_temp.index for indice in indices: descricao = vocabulario_temp['descricao'].iloc[indice] if type(descricao) == str: for palavra in descricao.split(' '): df = pd.DataFrame(data={'cod':vocabulario_temp['cod'].iloc[indice], 'tema':vocabulario_temp['nome'].iloc[indice], 'palavra_chave':[palavra]}) vocabulario = pd.concat([vocabulario, df], ignore_index=True) vocabulario.sample(5) vocabulario.shape vocabulario = vocabulario[vocabulario['palavra_chave']!= ''].copy() vocabulario.shape ``` <div class="alert-warning">Atribuí el tema a las proposiciones que contenía la palabra en la columna `keyword`</div> ``` def atribui_tema(proposicao): for tema, palavra_chave in zip(vocabulario['tema'], vocabulario['palavra_chave']): if palavra_chave in proposicao: return tema proposicoes['temas'] = proposicoes['keywords'].apply(atribui_tema) proposicoes.to_csv('dados/proposicoes_legislativas_limpas_vocabulario(1).csv', index=False) ``` # Modelo de aprendizaje de máquina ![image.png](attachment:81058c1f-1ed1-412e-ab2c-b3408cf044c2.png) Hay que clasificar todas las proposiciones antes del análisis. Principales etapas: - Establece variable predictora: “ementa” y la de respuesta:"temas" - Encode da variable de respuesta utilizando preprocessing.LabelEncoder - Divide conjunto de datos para teste y entrenamiento - Convierte las ementas en vectores con HashingVectorizer - Crea el modelo de clasificación con RandomForestClassifier - Entrena el modelo - Evalua cualitativamente a partir de la comparación entre las clasificaciones de los conjuntos de prueba y entrenamiento Al final tenemos clasificadas solamente las proposiciones referentes a temática estudiada ``` from sklearn import preprocessing from sklearn.model_selection import train_test_split from sklearn.feature_extraction.text import HashingVectorizer from sklearn.feature_extraction.text import CountVectorizer from sklearn.ensemble import RandomForestClassifier from keras.utils import np_utils import nltk from nltk.corpus import stopwords import pandas as pd import numpy as np ``` <div class="alert-warning">Classifica proposições legislativas</div> ``` df_proposicoes = pd.read_csv("dados/proposicoes_legislativas_limpas_vocabulario(1).csv") df_proposicoes_classificado = df_proposicoes.dropna(subset=["temas"]) df_proposicoes_classificado = df_proposicoes_classificado[["ementa","temas"]] df_proposicoes_classificado.shape df_proposicoes_classificado.head() ``` <div class="alert-warning">Establece variable predictora: “ementa” y la de respuesta:"temas"</div> ``` sentences = df_proposicoes_classificado['ementa'].values ``` <div class="alert-warning">Encode da variable de respuesta</div> ``` le = preprocessing.LabelEncoder() le.fit(df_proposicoes_classificado['temas'].unique()) y = le.transform(df_proposicoes_classificado['temas']) ``` <div class="alert-warning">Divide el conjunto de teste y entrenamiento</div> ``` sentences_train, sentences_test, y_train, y_test = train_test_split( sentences, y, test_size=0.25, random_state=1000) ``` <div class="alert-warning">Convierte las ementas en vectores con HashingVectorizer</div> ``` vectorizer = CountVectorizer() vectorizer.fit(sentences_train) X_train = vectorizer.transform(sentences_train) X_test = vectorizer.transform(sentences_test) X_train hasher = HashingVectorizer( n_features=10000, stop_words=stopwords.words('portuguese'), alternate_sign=False, norm=None, ) hasher.fit(sentences_train) X_train_hasher = hasher.transform(sentences_train) X_test_hasher = hasher.transform(sentences_test) X_train_hasher.shape ``` <div class="alert-warning">Cría y entreina clasificador</div> ``` clf = RandomForestClassifier(n_estimators=200,random_state=0) clf.fit(X_train_hasher, y_train) ``` <div class="alert-warning">Verifica el coeficiente de determinación (R²)</div> ``` score = clf.score(X_test_hasher, y_test) print("Acurácia:", score) ``` <div class="alert-warning">Avalia modelo cualitativamente</div> ``` df_random_forest_results = pd.DataFrame([sentences_test,le.inverse_transform(clf.predict(X_test_hasher))]).transpose().rename(columns={0:"ementa",1:"tema"}) df_random_forest_results.head() ``` <div class="alert-warning">Cría listado con probabilidades de clasificación de la proposición en cada tema</div> ``` predicted_probabilities = clf.predict_proba(X_test_hasher) ``` <div class="alert-warning">Selecciona el tema con mayor probabilidad para cada proposición</div> ``` df_random_forest_results["probabilidade_predicao"] = np.amax(predicted_probabilities,axis=1) df_random_forest_results.head() ``` <div class="alert-warning">Cría dataframe comparativo entre los temas preestablecidos y los clasificados por el clasificador</div> ``` df_ementas_test = pd.DataFrame([sentences_test,le.inverse_transform(y_test)]).transpose().rename(columns={0:"ementa",1:"tema"}) df_ementas_test.head() df_avaliacao = df_random_forest_results.merge(df_ementas_test,left_on="ementa",right_on="ementa",suffixes=["_resposta_modelo","_correto"]) df_avaliacao["modelo_acertou"] = df_avaliacao["tema_resposta_modelo"] == df_avaliacao["tema_correto"] df_avaliacao["modelo_acertou"] = df_avaliacao["modelo_acertou"].replace({True: "Sim", False: "Não"}) df_avaliacao["modelo_acertou"].value_counts() ``` <div class="alert-warning">Resumen de la validación</div> ``` df_avaliacao[df_avaliacao["probabilidade_predicao"] >= 0.85]["modelo_acertou"].value_counts() df_avaliacao.head() df_ementas_test.tema.value_counts() df_avaliacao.to_csv('dados/avaliacao-qualitativa-modelo-classificacao(1).csv') ``` <div class="alert-warning">Aplicación del modelo</div> ``` df_proposicoes_total = df_proposicoes[["ementa","temas"]] ementas = df_proposicoes_total['ementa'].values ementas_hasher = hasher.transform(ementas) df_proposicoes_total_classificadas = pd.DataFrame([ementas,le.inverse_transform(clf.predict(ementas_hasher))]).transpose().rename( columns={0:"ementa",1:"temas"}) df_proposicoes_total_classificadas.head() df_proposicoes_total_classificadas.info() ``` Informar a probabilidade de acerto de cada tema ``` temas_probabilities = clf.predict_proba(ementas_hasher) df_proposicoes_total_classificadas["probabilidade_predicao"] = np.amax(temas_probabilities, axis=1) df_proposicoes_total_classificadas.head() df_proposicoes_total_classificadas.info() ``` Limpa temas cuja a probabilidade de acerto é menor do que 85% ``` def retira_tema_com_baixa_probabilidade_acerto(proposicoes): if proposicoes['probabilidade_predicao'] >= 0.85: return proposicoes['temas'] else: return np.nan df_proposicoes_total_classificadas['temas'] = df_proposicoes_total_classificadas.apply(retira_tema_com_baixa_probabilidade_acerto, axis=1) ``` Reunir conjunto de dados de proposições legislativas com classificação realizada ``` df_proposicoes_classificador = df_proposicoes.join(df_proposicoes_total_classificadas, rsuffix='_classificador') df_proposicoes_classificador.shape df_proposicoes_classificador.head() df_proposicoes_classificador.drop(columns=['temas', 'ementa_classificador', 'probabilidade_predicao'], inplace=True) df_proposicoes_classificador.to_csv('dados/proposicoes_legislativas_limpas_classificadas(1).csv', index=False) ``` # Análisis exploratorio de datos ![image.png](attachment:246ed932-9452-427b-967d-14d28e223d7a.png) ``` import matplotlib.pyplot as plt ``` ## 1. ¿Hubo impacto positivo en la cantidad de mujeres elegidas para la Cámara en las 3 legislaciones subsecuentes a aprobación de la Ley 9.504/1997? **Hipótesis:** No huvo impacto positivo en el percentual de mujeres elegidas para la Cámara en las 3 legislaciones subsecuentes a aprobación de la Ley 9.504/1997. ``` df_legislaturas = pd.read_csv('dados/legislaturas_1934_2023_limpas(1).csv') df_legislaturas.head() ``` <div class="alert-warning">Determinar el período de los datos para el análisis (1995 a 2007)</div> ``` legislaturas_h1 = df_legislaturas[(df_legislaturas['id'] >= 50) & (df_legislaturas['id'] <= 53)]['id'].unique().tolist() df_candidaturas_eleitas = pd.read_csv('dados/candidaturas_eleitas(1).csv') df_candidaturas_eleitas_h1 = df_candidaturas_eleitas[df_candidaturas_eleitas['idLegislatura'].isin(legislaturas_h1)].copy() df_candidaturas_eleitas_h1['idLegislatura'].unique() ``` <div class="alert-warning">Agrupar por género</div> ``` agrupa_sexo = df_candidaturas_eleitas_h1.groupby(['idLegislatura', 'sexo']).size().to_frame('valorAbsoluto') ``` <div class="alert-warning">Estabelece el porcentaje de cada grupo en relación al total de diputados</div> ``` agrupa_sexo['porcentagem'] = round(agrupa_sexo['valorAbsoluto'].div( agrupa_sexo.groupby('idLegislatura')['valorAbsoluto'].transform('sum')).mul(100), 2) agrupa_sexo_df = agrupa_sexo.reset_index() agrupa_sexo_df ``` <div class="alert-warning">Prepara los datos para visualización</div> ``` mulher_h1 = agrupa_sexo_df[agrupa_sexo_df['sexo'] == 'F']['porcentagem'].tolist() homem_h1 = agrupa_sexo_df[agrupa_sexo_df['sexo'] == 'M']['porcentagem'].tolist() legislaturas_lista_h1 = agrupa_sexo_df['idLegislatura'].unique() legislaturas_lista_h1 = df_legislaturas[(df_legislaturas['id'] >= 50) & (df_legislaturas['id'] <= 53)]['dataInicio'].unique().tolist() legislaturas_lista_h1.sort() legislaturas_lista_h1 = list(map(str, legislaturas_lista_h1)) legislaturas_lista_h1 agrupa_sexo_df2 = pd.DataFrame({'mulher': mulher_h1, 'homem': homem_h1 }, index=legislaturas_lista_h1, ) agrupa_sexo_df2.plot.line() agrupa_sexo_df2.to_csv('dados/analise_genero_1995_2007(1).csv') ``` <div class="alert-warning">Visualización por género</div> ``` agrupa_sexo_df2.plot.line(subplots=True) diferenca_percentual_mulher_h1_total = mulher_h1[-1] - mulher_h1[0] print(f''' Hubo impacto positivo en la cantidad de mujeres elegidas para la Cámara en las 3 legislaciones subsecuentes a aprobación de la Ley 9.504/1997? \n Hipótesis comprobada? Sí. \n Hubo aumento de {round(diferenca_percentual_mulher_h1_total, 2)}% en el total de mujeres elegidas, sin embargo es un porcentaje muy bajo para justificar como impacto positivo. ''') ``` ## 2. ¿Hubo impacto positivo en la cantidad de mujeres elegidas para la Cámara en las 3 legislaciones subsecuentes a aprobación de la Ley 12.034/2009? **Hipótesis:** Huvo impacto positivo en el percentual de mujeres elegidas para la Cámara en las 3 legislaciones subsecuentes a aprobación de la Ley 12.034/2009. <div class="alert-warning">Determinar el período de los datos para el análisis (2007 a 2019)</div> ``` legislaturas_h2 = df_legislaturas[(df_legislaturas['id'] >= 53) & (df_legislaturas['id'] <= 56)]['id'].unique().tolist() df_candidaturas_eleitas_h2 = df_candidaturas_eleitas[df_candidaturas_eleitas['idLegislatura'].isin(legislaturas_h2)].copy() df_candidaturas_eleitas_h2['idLegislatura'].unique() ``` <div class="alert-warning">Agrupar por género</div> ``` agrupa_sexo_h2 = df_candidaturas_eleitas_h2.groupby(['idLegislatura', 'sexo']).size().to_frame('valorAbsoluto') ``` <div class="alert-warning">Estabelece el porcentaje de cada grupo en relación al total de diputados</div> ``` agrupa_sexo_h2['porcentagem'] = round(agrupa_sexo_h2['valorAbsoluto'].div(agrupa_sexo_h2.groupby( 'idLegislatura')['valorAbsoluto'].transform('sum')).mul(100), 2) agrupa_sexo_h2_df = agrupa_sexo_h2.reset_index() agrupa_sexo_h2 ``` <div class="alert-warning">Prepara los datos para visualización</div> ``` mulher_h2 = agrupa_sexo_h2_df[agrupa_sexo_h2_df['sexo'] == 'F']['porcentagem'].tolist() homem_h2 = agrupa_sexo_h2_df[agrupa_sexo_h2_df['sexo'] == 'M']['porcentagem'].tolist() legislaturas_lista_h2 = agrupa_sexo_h2_df['idLegislatura'].unique() legislaturas_lista_h2 = df_legislaturas[(df_legislaturas['id'] >= 53) & (df_legislaturas['id'] <= 56) ]['dataInicio'].unique().tolist() legislaturas_lista_h2.sort() legislaturas_lista_h2 = list(map(str, legislaturas_lista_h2)) legislaturas_lista_h2 agrupa_sexo_h2_df2 = pd.DataFrame({'mulher': mulher_h2, 'homem': homem_h2 }, index=legislaturas_lista_h2, ) agrupa_sexo_h2_df2.plot.line() agrupa_sexo_h2_df2.to_csv('dados/analise_genero_2007_2019(1).csv') ``` <div class="alert-warning">Visualización por género</div> ``` agrupa_sexo_h2_df2.plot.line(subplots=True) diferenca_percentual_mulher_h2_total = mulher_h2[-1] - mulher_h2[0] print(f''' Hubo impacto positivo en la cantidad de mujeres elegidas para la Cámara en las 3 legislaciones subsecuentes a aprobación de la Ley 12.034/2009? \n Hipótesis comprobada? Sí. \n Hubo aumento de {round(diferenca_percentual_mulher_h2_total, 2)}% en el total de mujeres elegidas. ''') ``` ## Evolução geral ``` legislaturas_todas = df_candidaturas_eleitas['idLegislatura'].unique() legislaturas_todas ``` <div class="alert-warning">Agrupar por género</div> ``` agrupa_sexo_todas = df_candidaturas_eleitas.groupby(['idLegislatura', 'sexo']).size().to_frame('valorAbsoluto') ``` <div class="alert-warning">Estabelece el porcentaje de cada grupo en relación al total de diputados</div> ``` agrupa_sexo_todas['porcentagem'] = round(agrupa_sexo_todas['valorAbsoluto'].div(agrupa_sexo_todas.groupby( 'idLegislatura')['valorAbsoluto'].transform('sum')).mul(100), 2) agrupa_sexo_todas_df = agrupa_sexo_todas.reset_index() agrupa_sexo_todas_df ``` <div class="alert-warning">Prepara los datos para visualización</div> ``` mulher_todas = agrupa_sexo_todas_df[agrupa_sexo_todas_df['sexo'] == 'F']['porcentagem'].tolist() homem_todas = agrupa_sexo_todas_df[agrupa_sexo_todas_df['sexo'] == 'M']['porcentagem'].tolist() len(mulher_todas), len(homem_todas) mulher_todas[:5] mulher_todas.insert(2, 0) len(mulher_todas), len(homem_todas) mulher_todas[:5] legislaturas_lista_todas = agrupa_sexo_todas_df['idLegislatura'].unique() legislaturas_lista_todas = df_legislaturas['dataInicio'].unique().tolist() legislaturas_lista_todas.sort() legislaturas_lista_todas = list(map(str, legislaturas_lista_todas)) len(legislaturas_lista_todas), len(mulher_todas), len(homem_todas) agrupa_sexo_todas_df2 = pd.DataFrame({'mulher': mulher_todas, 'homem': homem_todas }, index=legislaturas_lista_todas, ) agrupa_sexo_todas_df2.plot.line() agrupa_sexo_todas_df2.to_csv('dados/analise_genero_1934_2023.csv') ``` <div class="alert-warning">Visualización por género</div> ``` agrupa_sexo_h2_df2.plot.line(subplots=True) ``` ## 3. ¿Teniendo en cuenta el tema de las proposiciones legislativas, hubo aumento de los que beneficia grupos históricamente marginados en el periodo entre 1934 y 2021? **Hipótesis:** Sí, hubo aumento en la cantidade anual de propuestas legislativas que beneficia los grupos historicamente marginados. ``` proposicoes = pd.read_csv('dados/proposicoes_legislativas_limpas_classificadas(1).csv') proposicoes.head() ``` <div class="alert-warning">Agrupa por año y cantidad de propuestas de los temas</div> ``` proposicoes_anuais = proposicoes[['ano', 'temas_classificador']].groupby(by=['ano']).count() proposicoes_anuais.tail(10) ``` <div class="alert-warning">Visualización</div> ``` proposicoes_anuais.plot.line() proposicoes_anuais = proposicoes_anuais.reset_index() proposicoes_anuais.to_csv('dados/proposicoes_anuais(1).csv', index=False) print(f''' Teniendo en cuenta el tema de las proposiciones legislativas, hubo aumento de los que beneficia grupos historicamente marginalinados en el periodo entre 1934 e 2021? \n Hipótesis comprobada? Sí. Apesar de las oscilaciones hay una tendencia de crecimiento positivo en la cantidad de propuestas que benefician los grupos historicamente marginados. ''') ``` ## 4. ¿Cuál es el coeficiente de correlación entre la cantidad anual de las propuestas legislativas que benefician los grupos historicamente marginados y el porcentaje de mujeres elegidas para la Cámara de Diputados entre 1995 y 2019? **Hipótesis:** Bajo <div class="alert-warning">Une los dataframes de los análisis anteriores</div> ``` analise_genero_1995_2007 = pd.read_csv('dados/analise_genero_1995_2007(1).csv') analise_genero_2007_2019 = pd.read_csv('dados/analise_genero_2007_2019(1).csv') analise_genero_1995_2007.columns == analise_genero_2007_2019.columns analise_genero_1995_2019 = pd.concat([analise_genero_1995_2007, analise_genero_2007_2019], ignore_index=True) analise_genero_1995_2019 analise_genero_1995_2019.rename(columns={'Unnamed: 0': 'ano'}, inplace=True) analise_genero_1995_2019.drop(index=3, inplace=True) analise_genero_1995_2019 anos = analise_genero_1995_2019['ano'].tolist() anos.append(2021) anos ``` <div class="alert-warning">Inserta el período completo de cada legislatura, teniendo en cuenta que la proporcionalidade de género se mantiene durante los 4 años de legislatura</div> ``` for ano in anos: mulher_percentual = analise_genero_1995_2019['mulher'][analise_genero_1995_2019['ano'] == ano].item() homem_percentual = analise_genero_1995_2019['homem'][analise_genero_1995_2019['ano'] == ano].item() if ano < 2021: dados = pd.DataFrame(data={ 'ano': [ano+1, ano+2, ano+3], 'mulher': [mulher_percentual, mulher_percentual, mulher_percentual], 'homem': [homem_percentual, homem_percentual, homem_percentual]} ) analise_genero_1995_2019 = pd.concat([analise_genero_1995_2019, dados]) analise_genero_1995_2019.sort_values(by=['ano'], inplace=True) analise_genero_1995_2019.reset_index(drop=True, inplace=True) analise_genero_1995_2019.tail() analise_genero_1995_2019.drop(index=27, inplace=True) ``` <div class="alert-warning">Inserta el total anual de las propuestas en favor a los grupos historicamente marginados</div> ``` def insere_qnt_propostas(ano_candidaturas_eleitas): for ano, qnt_tema in zip(proposicoes_anuais['ano'], proposicoes_anuais['temas_classificador']): if ano == ano_candidaturas_eleitas: return qnt_tema analise_genero_1995_2019['qnt_proposicoes'] = analise_genero_1995_2019['ano'].apply(insere_qnt_propostas) analise_genero_1995_2019.head(10) ``` <div class="alert-warning">Cría la matriz de correlación</div> ``` correlacao = analise_genero_1995_2019[['mulher', 'homem', 'qnt_proposicoes']].corr(method='pearson') coeficiente_correlacao_mulher_qnt_temas = round(correlacao['mulher']['qnt_proposicoes'],2) correlacao_matriz_triangular = np.triu(np.ones_like(correlacao)) sns.heatmap(correlacao, annot=True, mask=correlacao_matriz_triangular) correlacao.to_csv('dados/coeficiente_correlacao_mulher_qnt_temas(1).csv') print(f'''¿Cuál es el coeficiente de correlación entre la cantidad anual de las propuestas legislativas que benefician los grupos historicamente marginados y el porcentaje de mujeres elegidas para la Cámara de Diputados entre 1995 y 2019?\n Hipótesis comprobada? Sí. \n - El coeficiente de correlación de Pearson es {coeficiente_correlacao_mulher_qnt_temas}, por lo tanto no se puede afirmar que hay correlación. ''') ```
github_jupyter
``` import torch from dataset import load_dataset from basic_unet import UNet import matplotlib.pyplot as plt from rise import RISE from pathlib import Path from plot_utils import plot_image_row from skimage.feature import canny batch_size = 1 device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu") train_loader, test_loader = load_dataset(batch_size) model = UNet(in_channels=4, out_channels=1) state_dict = torch.load('models/3_basic_unet_flat_criterion_279_0.00000.pth') model.load_state_dict(state_dict) model = model.to(device) sample = next(iter(test_loader)) segment = sample['segment'] segment = segment.squeeze() image = sample['input'].to(device) output = model(image) output = output.detach().cpu().squeeze().numpy() output = (output > output.mean()) class SegmentationRISE(RISE): def __init__(self, *args, **kwargs): super().__init__(*args, **kwargs) def forward(self, x): mask_count = self.N _, _, H, W = x.size() # generate new images by putting mask on top of original image stack = torch.mul(self.masks, x.data) output = model(x).squeeze() output = (output > output.mean()) pixels = [] for x in range(output.shape[0]): for y in range(output.shape[1]): if output[x][y]: pixels.append((x, y)) pixels_per_batch = 1000 saliencies = [] for i in range(0, len(pixels), pixels_per_batch): current_pixels = pixels[i:i+pixels_per_batch] # run generated images through the model p = [] for i in range(0, mask_count, self.gpu_batch): output_mask = self.model(stack[i:min(i + self.gpu_batch, mask_count)]) pixel_classes = [] for x, y in current_pixels: pixel_classes.append(output_mask[0][x][y]) p.append(torch.tensor([pixel_classes])) p = torch.cat(p) p = p.to(device) # Number of classes CL = p.size(1) sal = torch.matmul(p.data.transpose(0, 1), self.masks.view(mask_count, H * W)) sal = sal.view((CL, H, W)) sal /= mask_count saliencies.append(sal) return saliencies masks_path = Path('rise_masks.npy') explainer = SegmentationRISE(model, (240, 240), batch_size) if not masks_path.exists(): explainer.generate_masks(N=3000, s=8, p1=0.1, savepath=masks_path) else: explainer.load_masks(masks_path) saliencies = None with torch.set_grad_enabled(False): saliencies = explainer(image) plot_image_row([segment, output], labels=['Ground truth', 'Binarized network output']) print('Saliency map, Saliency map overlayed on binarized network output (max)') merged = torch.cat(saliencies) maxed = torch.max(merged, dim=0)[0] plt.imshow(output, cmap='gray_r') edges = canny(image.cpu().numpy()[0][1], sigma=0.01) plt.imshow(edges, alpha=0.5, cmap='gray_r') plt.imshow(maxed.cpu(), cmap='jet', alpha=0.6) plt.show() plt.imshow(output, cmap='gray_r') plt.imshow(maxed.cpu(), cmap='jet', alpha=0.6) plt.show() print('Saliency map, Saliency map overlayed on binarized network output (mean)') mean = torch.mean(merged, dim=0) plt.imshow(output, cmap='gray_r') plt.imshow(edges, alpha=0.5, cmap='gray_r') plt.imshow(mean.cpu(), cmap='jet', alpha=0.6) plt.show() plt.imshow(output, cmap='gray_r') plt.imshow(mean.cpu(), cmap='jet', alpha=0.6) plt.show() ```
github_jupyter
# Testing the Head **Warning:** Before running this notebook, first make sure you understand the command you run and make sure that the robot can freely move. **Note:** Also stop all other running Python script or notebook connected to the robot as only one connection can run at the same time. ``` %matplotlib notebook import time import cv2 as cv import numpy as np from matplotlib import pyplot as plt from collections import OrderedDict from reachy import parts def patch_head_config(head_cls): # if it's 'armv7l', assume that it's the raspberry pi 4 on reachy head_cls.dxl_motors = OrderedDict([ ('left_antenna', { 'id': 30, 'offset': 26.0, 'orientation': 'direct', 'angle-limits': [-150, 150], }), ('right_antenna', { 'id': 31, 'offset': 90.0, 'orientation': 'direct', 'angle-limits': [-150, 150], }), ]) return head_cls def patch_head(head_cls): def __init__(self, io, default_camera='right'): """Create new Head part.""" parts.part.ReachyPart.__init__(self, name='head', io=io) #self.neck = self.create_orbita_actuator('neck', Head.orbita_config) self.attach_dxl_motors(parts.Head.dxl_motors) #self.camera = self.io.find_dual_camera(default_camera) head_cls.__init__ = __init__ return head_cls ``` ## Connect to the head ``` from reachy import Reachy, parts parts.Head = patch_head_config(parts.Head) parts.Head = patch_head(parts.Head) reachy = Reachy( head=parts.Head(io='/dev/ttyUSB*') #head=parts.Head(io='ws'), ) ``` You can now connect your robot in Unity. ## Move the neck Check that all 3 disks are present and ok. ``` for d in reachy.head.neck.disks: print(d, d.temperature) ``` Turn compliant/stiff and check that the head is free or fixed. ``` reachy.head.compliant = True reachy.head.compliant = False ``` Go to the base position. ``` reachy.head.compliant = False reachy.head.look_at(1, 0, 0, duration=1, wait=True) ``` Play some random moves. ``` x = 0.5 y = (2 * np.random.rand() - 1) * 0.25 z = (2 * np.random.rand() - 1) * 0.25 duration = 1 reachy.head.look_at(x, y, z, duration=duration, wait=False) real = [] t0 = time.time() while time.time() - t0 < duration: real.append([d.rot_position for d in reachy.head.neck.disks]) time.sleep(0.01) plt.figure() plt.plot(real) ``` ## Move the antennas Check that we have both antennas. Turn them stiff. ``` for m in reachy.head.motors: m.compliant = False ``` Make them go to 0 ``` for m in reachy.head.motors: m.goal_position = 0 ``` Make them go to 45 ``` for m in reachy.head.motors: m.goal_position = 45 ``` (check that they both moved) Make them go to 0 again ``` for m in reachy.head.motors: m.goal_position = 0 ``` Make them follow a sinus for a few seconds. ``` t = np.linspace(0, 10, 1000) pos = 30 * np.sin(2 * np.pi * 0.5 * t) for p in pos: for m in reachy.head.motors: m.goal_position = p time.sleep(0.01) ``` ## Access the cameras *Note: the cameras don't seem to be working in the simulator for reachy v1.2.3. - PC* Check the right camera. ``` success, img = reachy.head.right_camera.read() if success: plt.figure() plt.imshow(cv.cvtColor(img, cv.COLOR_BGR2RGB)) ``` Check the left camera. ``` success, img = reachy.head.left_camera.read() if success: plt.figure() plt.imshow(cv.cvtColor(img, cv.COLOR_BGR2RGB)) ```
github_jupyter
<a href="https://colab.research.google.com/github/NielsRogge/Transformers-Tutorials/blob/master/TrOCR/Evaluating_TrOCR_base_handwritten_on_the_IAM_test_set.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> ## Set-up environment ``` !pip install -q git+https://github.com/huggingface/transformers.git !pip install -q datasets jiwer ``` ## Load IAM test set ``` import pandas as pd df = pd.read_fwf('/content/drive/MyDrive/TrOCR/Tutorial notebooks/IAM/gt_test.txt', header=None) df.rename(columns={0: "file_name", 1: "text"}, inplace=True) del df[2] df.head() import torch from torch.utils.data import Dataset from PIL import Image class IAMDataset(Dataset): def __init__(self, root_dir, df, processor, max_target_length=128): self.root_dir = root_dir self.df = df self.processor = processor self.max_target_length = max_target_length def __len__(self): return len(self.df) def __getitem__(self, idx): # get file name + text file_name = self.df['file_name'][idx] text = self.df['text'][idx] # some file names end with jp instead of jpg, the two lines below fix this if file_name.endswith('jp'): file_name = file_name + 'g' # prepare image (i.e. resize + normalize) image = Image.open(self.root_dir + file_name).convert("RGB") pixel_values = self.processor(image, return_tensors="pt").pixel_values # add labels (input_ids) by encoding the text labels = self.processor.tokenizer(text, padding="max_length", max_length=self.max_target_length).input_ids # important: make sure that PAD tokens are ignored by the loss function labels = [label if label != self.processor.tokenizer.pad_token_id else -100 for label in labels] encoding = {"pixel_values": pixel_values.squeeze(), "labels": torch.tensor(labels)} return encoding from transformers import TrOCRProcessor processor = TrOCRProcessor.from_pretrained("microsoft/trocr-base-handwritten") test_dataset = IAMDataset(root_dir='/content/drive/MyDrive/TrOCR/Tutorial notebooks/IAM/image/', df=df, processor=processor) from torch.utils.data import DataLoader test_dataloader = DataLoader(test_dataset, batch_size=8) batch = next(iter(test_dataloader)) for k,v in batch.items(): print(k, v.shape) from transformers import TrOCRProcessor processor = TrOCRProcessor.from_pretrained("microsoft/trocr-base-handwritten") labels = batch["labels"] labels[labels == -100] = processor.tokenizer.pad_token_id label_str = processor.batch_decode(labels, skip_special_tokens=True) label_str ``` ## Run evaluation ``` from transformers import VisionEncoderDecoderModel import torch device = torch.device("cuda" if torch.cuda.is_available() else "cpu") model = VisionEncoderDecoderModel.from_pretrained("microsoft/trocr-base-handwritten") model.to(device) from datasets import load_metric cer = load_metric("cer") from tqdm.notebook import tqdm print("Running evaluation...") for batch in tqdm(test_dataloader): # predict using generate pixel_values = batch["pixel_values"].to(device) outputs = model.generate(pixel_values) # decode pred_str = processor.batch_decode(outputs, skip_special_tokens=True) labels = batch["labels"] labels[labels == -100] = processor.tokenizer.pad_token_id label_str = processor.batch_decode(labels, skip_special_tokens=True) # add batch to metric cer.add_batch(predictions=pred_str, references=label_str) final_score = cer.compute() print("Character error rate on test set:", final_score) ```
github_jupyter
## Time to do some data science Before creating a tome, we must decide on how to transform our data before concatenating. Therefore, we will explore the data for a single match. We will investigate the number of footsteps players make as a function of rank, wins, and friendly commends. After we developed the code that does our data processing, we moved them to functions and put them in `pureskillgg_makenew_pyskill\tutorial_datascience\footsteps_example.py` so that we can import them in the next notebook. This avoids code duplication and will let the PureSkill.gg Coach import these functions in the future! _**Run this notebook as-is.**_ ``` from pureskillgg_makenew_pyskill.notebook import setup_notebook setup_notebook(silent=True) # %load ../usual_suspects.py # pylint: disable=unused-import import time import os import pandas as pd import numpy as np import matplotlib.pyplot as plt from pureskillgg_dsdk.tome import create_tome_curator pd.set_option("display.max_columns", 150) pd.set_option("display.max_rows", 150) pd.set_option("display.min_rows", 150) # pd.set_option('display.float_format', '{:.4f}'.format) curator = create_tome_curator() ``` ## Read in one match worth of data The tome curator also provides a convienent way to grab a random match to do some exploration on. The `get_single_match` method will return the DS Loader for that particular match. ``` # Just grab the first match match_loader = curator.get_match_by_index(0) # Get the manifest for these data. manifest = match_loader.manifest # Read in all channels (you can read in a subset if you pass in reading_instructions). data=match_loader.get_channels() ``` ## Explore the CSDS The CSDS files are rich in data. Feel free to explore them in depth. Here we use the manifest file to see the available channels and how many columns they contain. ``` for channel in manifest['channels']: print(channel['channel'], '-', len(channel['columns']), 'columns') ``` ## Explore the relevant data and develop the engineering ``` # Inspect player_footstep dataframe data['player_footstep'].head() # Count up footsteps per player df_footsteps_total = ( data['player_footstep'] .groupby('player_id_fixed', as_index=False) .size() .rename(columns={'size':'steps'}) ) df_footsteps_total # Inspect player_info dataframe pi = data['player_info'] pi.head() # Inspect player_info dataframe pi_simple = pi[['player_id_fixed', 'commends_friendly', 'wins', 'rank']].groupby('player_id_fixed',as_index=False).max() pi_simple # Get the map name map_name = data['header']['map_name'].iat[0] print(map_name) # Combine the data into a final dataframe df_final = pd.merge(df_footsteps_total, pi_simple, how='left', on='player_id_fixed') df_final['map_name'] = map_name df_final ```
github_jupyter
``` import argparse import os import sys import torch import torch.nn as nn import datasets import models.resnet as ResNet import models.senet as SENet from liveview import LiveView import utils configurations = { 1: dict( max_iteration=1000000, lr=1.0e-1, momentum=0.9, weight_decay=0.0, gamma=0.1, # "lr_policy: step" step_size=1000000, # "lr_policy: step" interval_validate=1000, ), } def get_parameters(model, bias=False): for k, m in model._modules.items(): if k == "fc" and isinstance(m, nn.Linear): if bias: yield m.bias else: yield m.weight N_IDENTITY = 8631 # the number of identities in VGGFace2 for which ResNet and SENet are trained parser = argparse.ArgumentParser("PyTorch Face Recognizer") parser.add_argument('--arch_type', type=str, default='resnet50_ft', help='model type', choices=['resnet50_ft', 'senet50_ft', 'resnet50_scratch', 'senet50_scratch']) parser.add_argument('--log_file', type=str, default='/path/to/log_file', help='log file') parser.add_argument('--checkpoint_dir', type=str, default='/path/to/checkpoint_directory', help='checkpoints directory') parser.add_argument('--feature_dir', type=str, default='/path/to/feature_directory', help='directory where extracted features are saved') parser.add_argument('-c', '--config', type=int, default=1, choices=configurations.keys(), help='the number of settings and hyperparameters used in training') parser.add_argument('--batch_size', type=int, default=32, help='batch size') parser.add_argument('--resume', type=str, default='', help='checkpoint file') parser.add_argument('--weight_file', type=str, default='./resnet50_ft_weight.pkl', help='weight file') parser.add_argument('--gpu', type=int, default=0) parser.add_argument('-j', '--workers', default=4, type=int, metavar='N', help='number of data loading workers (default: 4)') parser.add_argument('--horizontal_flip', action='store_true', help='horizontally flip images specified in test_img_list_file') os.environ['CUDA_VISIBLE_DEVICES'] = str(0) cuda = torch.cuda.is_available() if cuda: print("torch.backends.cudnn.version: {}".format(torch.backends.cudnn.version())) torch.manual_seed(1337) if cuda: torch.cuda.manual_seed(1337) # 2. model include_top = True model = ResNet.resnet50(num_classes=N_IDENTITY, include_top=include_top) # print(model) start_epoch = 0 start_iteration = 0 resume = False if resume: checkpoint = torch.load(resume) model.load_state_dict(checkpoint['model_state_dict']) start_epoch = checkpoint['epoch'] start_iteration = checkpoint['iteration'] assert checkpoint['arch'] == 'resnet50_ft' print("Resume from epoch: {}, iteration: {}".format(start_epoch, start_iteration)) else: utils.load_state_dict(model, './resnet50_ft_weight.pkl') if cuda: model = model.cuda() print("MODEL LOADED!") import torch as t import torch.nn as nn import torch.nn.functional as F from torch.autograd import Variable from torchvision import models, transforms normalize = transforms.Normalize( mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225] ) preprocess = transforms.Compose([ transforms.ToTensor(), normalize ]) def FeatureGeneration(model,targets): target_features = [] for target in targets: batch = [preprocess(perspective).cuda() for perspective in target ] batch = t.stack(batch) target_features.append(model(batch)) return target_features from PIL import Image import matplotlib from matplotlib import pyplot as plt import numpy as np %matplotlib inline shah = np.array(Image.open("Data/Dr_Shah.jpg")) shah = shah[:175,25:200] plt.imshow(shah) print(shah.shape) shah = Image.fromarray(shah) shah = shah.resize((244,244),Image.BILINEAR) shah = np.array(shah) plt.imshow(shah) print(shah.shape) target = Image.open("Data/cvpr2019.jpg") target = np.array(target) print(target.shape) plt.imshow(target) ex = target[90:170,730:810] plt.imshow(ex) print(ex.shape) # Resizing up, factor of 3. 2100,4800. target = Image.open("Data/cvpr2019.jpg") target = target.resize((4800,2100),Image.BILINEAR) target = np.array(target) target = target[200:1250,1000:3000] print(target.shape) plt.imshow(target) feat_gen = nn.Sequential(*list(model.children())[:-4]) torch.cuda.empty_cache() # torch.cuda.empty_cache() face_features = FeatureGeneration(feat_gen,[[shah]])[0] search_features = FeatureGeneration(feat_gen,[[target]])[0] output = (F.conv2d(search_features, face_features).squeeze(0)).squeeze(0).detach().cpu().numpy() print(output.shape) h,w = output.shape plt.imshow(output) print(np.amax(output)) pos = np.argmax(output) idx = (pos // 220), (pos%220) py,px = (pos // 220)/h, (pos%220)/w h_0,w_0,c_0 = target.shape tr_y,tr_x = int(h_0 * py), int(w_0 * px) print(idx,py,px) print(tr_y,tr_x) for w in range(5): for x in range(244): target[tr_y-w ,tr_x-x ] = [255,0,0] target[tr_y+244+w,tr_x-x ] = [255,0,0] for y in range(244): target[tr_y+y ,tr_x+w ] = [255,0,0] target[tr_y+y ,tr_x-244-w] = [255,0,0] plt.imshow(target) ex = target[tr_y:tr_y+244,tr_x-244:tr_x] plt.imshow(ex) ```
github_jupyter
``` import os, sys import torch from transformers import BertModel, BertConfig from greenformer import auto_fact from itertools import chain from os import path import sys def count_param(module, trainable=False): if trainable: return sum(p.numel() for p in module.parameters() if p.requires_grad) else: return sum(p.numel() for p in module.parameters()) ``` # Init Model ``` config = BertConfig.from_pretrained('bert-base-uncased') model = BertModel(config=config) model = BertModel.from_pretrained('bert-base-uncased') count_param(model) ``` # Factorize Model ### Apply absolute rank ``` %%time fact_model = auto_fact(model, rank=256, deepcopy=True, solver='random', num_iter=20) count_param(fact_model) %%time fact_model = auto_fact(model, rank=256, deepcopy=True, solver='svd', num_iter=20) count_param(fact_model) %%time fact_model = auto_fact(model, rank=256, deepcopy=True, solver='snmf', num_iter=20) count_param(fact_model) ``` ### Apply percentage rank ``` %%time fact_model = auto_fact(model, rank=0.4, deepcopy=True, solver='random', num_iter=20) count_param(fact_model) %%time fact_model = auto_fact(model, rank=0.4, deepcopy=True, solver='svd', num_iter=20) count_param(fact_model) %%time fact_model = auto_fact(model, rank=0.4, deepcopy=True, solver='snmf', num_iter=20) count_param(fact_model) ``` ### Apply factorization only on specific modules ``` # Only factorize last 6 transformer layers and the pooler layer of the model factorizable_submodules = list(model.encoder.layer[6:]) + [model.pooler] %%time fact_model = auto_fact(model, rank=0.2, deepcopy=True, solver='random', num_iter=20, submodules=factorizable_submodules) count_param(fact_model) %%time fact_model = auto_fact(model, rank=0.2, deepcopy=True, solver='svd', num_iter=20, submodules=factorizable_submodules) count_param(fact_model) %%time fact_model = auto_fact(model, rank=0.2, deepcopy=True, solver='snmf', num_iter=20, submodules=factorizable_submodules) count_param(fact_model) ``` # Speed test on CPU ### Test Inference CPU ``` %%timeit with torch.no_grad(): y = model(torch.zeros(32,256, dtype=torch.long)) %%timeit with torch.no_grad(): y = fact_model(torch.zeros(32,256, dtype=torch.long)) ``` ### Test Forward-Backward CPU ``` %%timeit y = model(torch.zeros(8,256, dtype=torch.long)) y.last_hidden_state.sum().backward() %%timeit y = fact_model(torch.zeros(8,256, dtype=torch.long)) y.last_hidden_state.sum().backward() ``` # Speed test on GPU ### Move models to GPU ``` model = model.cuda() fact_model = fact_model.cuda() ``` ### Test Inference GPU ``` x = torch.zeros(64,256, dtype=torch.long).cuda() %%timeit with torch.no_grad(): y = model(x) %%timeit with torch.no_grad(): y = fact_model(x) ``` ### Test Forward-Backward GPU ``` x = torch.zeros(16,256, dtype=torch.long).cuda() %%timeit y = model(x) y.last_hidden_state.sum().backward() %%timeit y = fact_model(x) y.last_hidden_state.sum().backward() ```
github_jupyter
<img src='./img/EU-Copernicus-EUM_3Logos.png' alt='Logo EU Copernicus EUMETSAT' align='right' width='40%'></img> <br> <a href="./00_index.ipynb"><< Index </a><br> <a href="./04_sentinel3_NRT_SLSTR_FRP_load_browse.ipynb"><< 04 - Sentinel-3 NRT SLSTR FRP - Load and browse </a><span style="float:right;"><a href="./06_IASI_L2_load_browse.ipynb">06 - IASI Level 2 - Load and browse >></a></span> <br> <div class="alert alert-block alert-warning"> <b>LOAD, BROWSE AND VISUALIZE</b></div> # Sentinel-3 Near Real Time SLSTR Aerosol Optical Depth (AOD) The [Copernicus Sentinel-3 Near Real Time Aerosol Optical Depth (AOD)](https://www.eumetsat.int/website/home/News/DAT_5150095.html) product quantifies the abundance of all aerosol particles suspended in the air and monitors their global distribution and long-range transport, at the scale of 9.5 x 9.5 km2. It is only applicable during daytime. The current version of the NRT S3 AOD product is considered as 'preliminary operational' over ocean surfaces, and 'demonstrational' over land surfaces. It is only applicable during daytime All these observations are made available in less than three hours from the SLSTR observation sensing time. The following workflow is based on an example of `Sentinel-3 Near Real Time SLSTR AOD` data on 1 October 2020. As a comparison, you see below the Sentinel-3 OLCI Red Green Blue composites for the same day, which clearly shows the smoke plumes along the Californian coast resulting from the fires. <br> <div style='text-align:center;'> <figure><img src='./img/s3_olci_1203.png' width='80%'/> <figcaption><i>RGB composites of Sentinel-OLCI Level 1 data on 3 December 2019</i></figcaption> </figure> </div> <hr> ### Outline * [Example: Australian Fires - December 2019](#australian_fires) * [1 - Load Sentinel-3 SLSTR AOD data](#load_cal) * [2 - Extract AOD variables](#extract_cal) * [3 - Visualize AOD Ocean and AOD land information](#visualize_cal) <hr> #### Load required libraries ``` import xarray as xr import numpy as np import matplotlib.pyplot as plt import matplotlib as mpl import matplotlib.pyplot as pltfacebook import matplotlib.colors as colors import matplotlib.cm as cm import matplotlib.pyplot as plt import cartopy.crs as ccrs import cartopy.feature as cfeature from cartopy.mpl.gridliner import LONGITUDE_FORMATTER, LATITUDE_FORMATTER ``` <hr> # <a id='australian_fires'></a>Example: Australian fires in December 2019 ## <a id='load'></a>Load Sentinel-3 SLSTR AOD data The Near-Real-Time Sentinel-3 Aerosol Optical Depth data are disseminated in `netCDF`. `netCDF` data can be loaded with the Python library [xarray](http://xarray.pydata.org/en/stable/) and its function `xr.open_dataset()`. You see that the data file contains two `dimensions`: * `columns` and * `rows`. It further contains an long list of `data variables`, including: * `AOD_550`, * `AOD_550_uncertainty`, * `AOD_550_Ocean_NonFiltered`, * `AOD_550_Land_Experimental_PostFiltered`, ... A data file also contains a set of `attributes`, which give you more information about the data file and the data it contains, e.g the `start_time` and `stop_time` or the `product_name`. ``` file = xr.open_dataset('../eodata/sentinel3/slstr/2019/12/03/AOD_Australia_20191203.nc') file ``` <br> ### <a id='extract'></a>Extract Aerosol Optical Depth variables The next step is to extract the variables of interest. Let us select the following two variables: * `AOD_550`: it is the Aerosol Optical Depth at 550nm. (*Note: it only covers ocean surfaces.*) * `AOD_550_Land_Experimental_PostFiltered`: it is the Aerosol Optical Depth at 550nm. (*Note: it only covers land surfaces.*) Both `DataArrays` have two dimensions (`rows` and `columns`) and the following attributes, which provide additional information about the variables: * `long_name` * `standard_name` * `valid_min` * `valid_max` * `coordinates` ``` aod_ocean = file.AOD_550 aod_land = file.AOD_550_Land_Experimental_PostFiltered print(aod_ocean) print(' ') print(aod_land) ``` <br> You can also load `latitude` and `longitude` information, which can be used later for visualizing the variables. ``` lat_nc = file.latitude lon_nc = file.longitude lat_nc, lon_nc ``` <br> ### <a id='visualize'></a> Visualize AOD Ocean and AOD Land variables The final step is to visualize both variables, Aerosol Optical Depth over ocean and land together in one plot. You can use matplotlib's function `pcolormesh` for it. Let us define a visualisation function called [visualize_pcolormesh_aod](./functions.ipynb#visualize_pcolormesh_aod) which visualizes both AOD variables together onto a map. The function takes the following keyword arguments (kwargs): * `aod_ocean`: DataArray with AOD values over ocean * `aod_land`: DataArray with AOD values over land * `latitude`: DataArray with latitude information * `longitude`: DataArray with longitude information * `title`: Title of the plot * `unit`: Unit of AOD * `vmin` and `vmax`: Minimum and maximum values to be displayed on the map * `color_scale`: Color scale the data shall be represented * `projection`: Projection of the map ``` def visualize_pcolormesh_aod(aod_ocean, aod_land, latitude, longitude, title, unit, vmin, vmax, color_scale, projection): fig=plt.figure(figsize=(12, 12)) ax=plt.axes(projection=projection) ax.coastlines(linewidth=1.5, linestyle='solid', color='k', zorder=10) gl = ax.gridlines(draw_labels=True, linestyle='--') gl.top_lables=False gl.right_labels=False gl.xformatter=LONGITUDE_FORMATTER gl.yformatter=LATITUDE_FORMATTER gl.xlabel_style={'size':12} gl.ylabel_style={'size':12} img1 = plt.pcolormesh(longitude, latitude, aod_ocean, transform=ccrs.PlateCarree(), vmin=vmin, vmax=vmax, cmap=color_scale) img2 = plt.pcolormesh(longitude, latitude, aod_land, transform=ccrs.PlateCarree(), vmin=vmin, vmax=vmax, cmap=color_scale) ax.set_title(title, fontsize=20, pad=20.0) cbar = fig.colorbar(img1, ax=ax, orientation='vertical', fraction=0.04, pad=0.05) cbar.set_label(unit, fontsize=16) cbar.ax.tick_params(labelsize=14) plt.show() ``` <br> Now, let us apply the function [visualize_pcolormesh](./functions.ipynb#visualize_pcolormesh) to visualize both variables, AOD Ocean and AOD Land. ``` visualize_pcolormesh_aod(aod_ocean, aod_land, lat_nc, lon_nc, 'Aerosol Optical Thickness at 550 nm', '~', 0., 1.0, cm.RdYlBu_r, ccrs.Mercator()) ``` <br> <br> <a href="./00_index.ipynb"><< Index </a><br> <a href="./04_sentinel3_NRT_SLSTR_FRP_load_browse.ipynb"><< 04 - Sentinel-3 NRT SLSTR FRP - Load and browse </a><span style="float:right;"><a href="./06_IASI_L2_load_browse.ipynb">06 - IASI Level 2 - Load and browse >></a></span> <hr> <img src='./img/copernicus_logo.png' alt='Logo EU Copernicus' align='right' width='20%'><br><br><br><br> <p style="text-align:right;">This project is licensed under the <a href="./LICENSE">MIT License</a> and is developed under a Copernicus contract.
github_jupyter
# Keane and Wolpin (1997) **Parameter Estimation via the Method of Simulated Moments (MSM)** In their seminal paper on the career decisions of young men, Keane and Wolpin (1997) estimate a life-cycle model for occupational choice based on NLSY data for young white men. The paper contains a basic and an extended specification of the model. Both models allow for five choice alternatives in each period: white collar sector work, blue collar sector work, military work, school, and staying home. Choice options come with pecuniary and/or non-pecuniary rewards. Agents are assumed to be forward-looking and act under uncertainty because of the occurrence of alternative-specific shocks that affect the current reward of alternatives and only become known to individuals in the period they occur in. Individuals thus form expectations about future shocks and in each period choose the option that maximizes the expected present value of current and future lifetime rewards. The extended model compared to the base specification expands the model by introducing more complex skill technology functions that for example allow for skill depreciation and age effects, job mobility and search costs, non-pecuniary rewards for work, re-entry costs for school, and some common returns for school. `respy` is able to solve, simulate, and estimate both model specifications. Within `respy`, they are referred to as `kw_97_basic` and `kw_97_extended`. However, using the parameters from the paper, `respy` returns life-cycle patterns that differ from the ones presented in the paper, prompting us to re-estimate them using the Method of Simulated Moments (MSM). The model specification can be loaded using the function `get_example_model` as demonstrated below. The returned parameter vector contains the estimated parameters from the paper and the returned DataFrame contains the 'observed' NLSY data. ``` import numpy as np import pandas as pd import respy as rp import matplotlib.pyplot as plt params_basic_kw, options, data_obs = rp.get_example_model("kw_97_basic") ``` ## Choice patterns and Rewards for Parameters in `kw_97_basic` To investigate the parameter specification presented for the basic model in Keane and Wolpin (1997), we will look at the choice frequencies in each period and compare them to the observed data. While the NLSY data is only observed for the first 11 years, the models can be used to predict choices over the entire work life of agents. The standard time horizon in `kw_97_basic` is 50 periods since Keane and Wolpin (1997) fix the terminal age to 65 with individuals entering the sample at age 15. We will thus inspect how well the model generated by the parameters can fit the observed data, as well as the predictions it makes for the rest of the life-cycle. ### Choice Patterns As a first step, we will look at the choices of agents over time. To do this, we can simulate data based on the parameters from `kw_97_basic` and compute the choice frequencies in each period. We then plot them against the observed choices. ``` simulate = rp.get_simulate_func(params_basic_kw, options) data_sim_kw = simulate(params_basic_kw) def calc_choice_frequencies(df): """Compute choice frequencies.""" return df.groupby("Period").Choice.value_counts(normalize=True).unstack() choices_obs = calc_choice_frequencies(data_obs) choices_obs = calc_choice_frequencies(data_obs) choices_kw = calc_choice_frequencies(data_sim_kw) def plot_moments(moments_obs, moments_sim, labels, colors): """Plot moments.""" plt.figure(figsize=(14, 4)) for i, (label, color) in enumerate(zip(labels, colors)): plt.subplot(1, 5, i + 1) plt.tight_layout() plt.title(label.capitalize()) plt.xlabel("Period") plt.plot(moments_sim[label], color=color) plt.plot(moments_obs[label], color="black", linestyle="dashed") plt.ylim(0, 1) plt.xlim(0, 50) choices = ["blue_collar", "white_collar", "school", "home", "military"] colors = ["tab:blue", "tab:orange", "tab:green", "tab:red", "tab:purple"] ``` The plots below show the choice frequencies of individuals for the five different choice alternatives. The colored lines represent the simulated dataset while the black dotted lines show the choices observed in the NLSY data. The simulated data does not seem to fit the observed data very well. The percentage of individuals choosing the white collar occupation is too high while all other choices are very underrepresented in the simulated data. ``` plot_moments( moments_obs=choices_obs, moments_sim=choices_kw, labels=choices, colors=colors, ) ``` ### Experience-Wage Profiles over the Life-Cycle As a next step, we will inspect the experience-wage profiles suggested by the model. The function below computes the wages of a skill **type 0** individual (skill endowment types are a source of heterogeneity between individuals in the model) with **10 years of schooling** for the given wage parameters if they enter an occupation in period 0 and stay in that occupation for their entire life-cycle. ``` def get_experience_profile(params, options, occupation): # To fix ideas we look at a Type 0 individual with 10 years of schooling # who immediately starts to work in the labor market. covars = [1, 10, 0, 0, 0, 0, 0, 0, 0] wages = list() for period in range(options["n_periods"]): if occupation == "blue_collar": covars[3] = period covars[4] = period ** 2 / 100 elif occupation == "white_collar": covars[2] = period covars[3] = period ** 2 / 100 wage = np.exp(np.dot(covars, params.loc[f"wage_{occupation}", "value"])) wages.append(wage) return wages def plot_experience_profiles(params, options): colors = ["tab:blue", "tab:orange"] occupations = ["blue_collar", "white_collar"] fig, ax = plt.subplots(1, 2, figsize=(12, 4)) for i, (label, color) in enumerate(zip(occupations, colors)): wage_profile = get_experience_profile(params, options, label) ax[i].plot(range(options["n_periods"]), wage_profile, color=color) ax[i].set_xlabel("Experience in Periods") ax[i].set_ylabel("Wages") ax[i].set_title(label) plt.tight_layout() ``` We can then plot the wage-experience profiles of the three occupations. The wage profiles do not seem very realistic, as especially the white collar occupation sees unlimited wage growth into the millions. The curve is missing the characteristic flattening (slowed and sometimes even negative wage growth) in later stages of life that is well documented in the life-cycle wage literature (Heckman et al., 2006). The military option is purposely left out in this plot since the low number of observations in the NLSY data for this occupation does not allow for the construction of an appropriate experience-wage profile. ``` plot_experience_profiles(params_basic_kw, options) ``` Since this flattening characteristic is controlled by the exponential term in the wage equation, we can see if adjusting this parameter improves the wage profile for the white collar occupation. The parameter specification in the paper gives this parameter a value of -0.0461. We will choose a smaller value in an attempt to flatten the curve in later periods. ``` params_new = params_basic_kw.copy() params_new.loc[("wage_white_collar", "exp_white_collar_square"), "value"] = -0.15 ``` As the plots below show, the new value for `(wage_white_collar, exp_white_collar_square)` produces a more realistic wage profile and wage for the white collar occupation: ``` plot_experience_profiles(params_new, options) ``` ## Estimation of the Basic Model Since there seems to be a possibility of improvements, we attempt to estimate the parameters via MSM to improve the fit. The estimation setup for MSM follows the pattern already established in other articles of this documentation. For the estimation we use moments that capture the choice frequencies for each period and mean wages as well as their standard deviation. The weighting matrix used is a diagonal inverse variance weighting matrix. Interested readers can refer to the guides below for more information on MSM estimation with `respy`. ### Choice Patterns We will first investigate the choice patterns of individuals over the 11 observed periods and the predicted choices in later periods. The plot below shows the choice frequencies for the observed data and simulated data for the specification in Keane and Wolpin (1997) and our estimates respectively in a stacked area plot. The newly estimated parameters are named with the suffix `_respy`. ``` params_basic_respy, _, _ = rp.get_example_model("kw_97_basic_respy") data_sim_new = simulate(params_basic_respy) fig, axes = plt.subplots(nrows=1, ncols=3, figsize=(15, 5)) calc_choice_frequencies(data_sim_kw)[choices].plot( kind="area", stacked=True, ax=axes[0], xlim=[0, 11], title="Simulated Choices KW 97", linewidth=0.1, ) calc_choice_frequencies(data_obs)[choices].plot( kind="area", stacked=True, ax=axes[1], title="Observed Choices", linewidth=0.1 ) calc_choice_frequencies(data_sim_new)[choices].plot( kind="area", stacked=True, ax=axes[2], xlim=[0, 11], title="Simulated Choices New Estimates", linewidth=0.1, ) ``` Plotting the choices separately against their observed counterpart also reveals a much better fit. ``` choices_new = calc_choice_frequencies(data_sim_new) plot_moments( moments_obs=choices_obs, moments_sim=choices_new, labels=choices, colors=colors, ) ``` ### Experience-Wage Profiles The wage profiles have attained a more realistic shape although the earned wages in later periods are still unreasonably high. These problems in wage growth are similar to the ones shown by Keane and Wolpin (1997) for the basic specification and give way to the expanded model, which promises a more reasonable development of life-cycle wages. ``` plot_experience_profiles(params_new, options) ``` ## Estimation of the Extended Model In addition to the basic model parameters, we also re-estimate the extended model specified in Keane and Wolpin (1997). Since the parameter space for this model is much larger than the basic specification, we expand the number of moments used for estimation. The new sets of moments used are conditional on the period and initial level of schooling of individuals. Specifically, we compute the choice frequencies and wage statistics for two initial schooling groups: those with up to 9 years of schooling at age 16 and those with 10 years or more. Furthermore, the moments for the wage distribution are expanded to include the median and 25% as well as 75% percentile for each initial schooling group in each period. The plots below show the choice frequencies for the parameters from the paper and the newly estimated parameters. While the fit seems much better for the newly estimated parameters, the choice patterns they suggest over the life-cycle in some cases are a bit more extreme than the ones presented in Keane and Wolpin (1997), especially for the blue and white collar occupations. This is not necessarily surprising as the model is fit on only 11 years of data, while the extrapolation is applied to 50 periods. Multiple other parameter estimates (not shown here) with a similar within-sample fit predict very different choice patterns over the life-cycle. **The parameters presented here should thus be used and interpreted with caution.** ``` params_extended_kw, options_extended, _ = rp.get_example_model("kw_97_extended") simulate_extended = rp.get_simulate_func(params_extended_kw, options_extended) data_sim_extended_kw = simulate_extended(params_extended_kw) ``` ### Choice Frequencies for Extended Parametrization from Keane and Wolpin (1997) ``` choices_extended_kw = calc_choice_frequencies(data_sim_extended_kw) plot_moments( moments_obs=choices_obs, moments_sim=choices_extended_kw, labels=choices, colors=colors, ) ``` ### Choice Frequencies for Estimated Extended Parameters ``` params_extended_respy, _, _ = rp.get_example_model("kw_97_extended_respy") data_sim_extended_respy = simulate_extended(params_extended_respy) choices_extended_respy = calc_choice_frequencies(data_sim_extended_respy) plot_moments( moments_obs=choices_obs, moments_sim=choices_extended_respy, labels=choices, colors=colors, ) ``` ## References - Heckman, J. J., Lochner, L. J., & Todd, P. E. (2006). [Earnings functions, rates of return and treatment effects: The Mincer equation and beyond](https://www.sciencedirect.com/science/article/pii/S1574069206010075). In Hanushek, E. & Welch, F., editors, *Handbook of the Economics of Education*, volume 1, pages 307–458. Elsevier Science, Amsterdam, Netherlands. - Keane, M. P. and Wolpin, K. I. (1997). [The Career Decisions of Young Men](https://www.jstor.org/stable/10.1086/262080?seq=1). *Journal of Political Economy*, 105(3): 473-522.
github_jupyter
``` from selenium import webdriver from selenium.webdriver.chrome.options import Options from selenium.webdriver.common.by import By from selenium.webdriver.support.ui import WebDriverWait from selenium.common.exceptions import * import pandas as pd from time import sleep import os nome_hoteis = [] preco_hoteis = [] class ScrappyCvc: def iniciar(self): self.raspagem_de_dados() def raspagem_de_dados(self): checkin = "2021-11-15" checkout = "2021-11-16" destinoId = "6162" occ = "2" chrome_options = Options() chrome_options.add_experimental_option( 'excludeSwitches', ['enable-logging']) chrome_options.add_argument('--lang=pt-BR') chrome_options.add_argument('--disable-notifications') self.driver = webdriver.Chrome(options=chrome_options) self.driver.set_window_size(800, 700) self.link = f'https://www.cvc.com.br/hotel/search?CheckIn={checkin}&CheckOut={checkout}&Location=%20-%20%20,%20Brasil&ZoneId={destinoId}&Rooms=1&Adults={occ}&Children=0&ChildAges=;&City=&State=&Country=Brasil&Name=' self.lista_nome_hoteis = [] self.lista_preco_hoteis = [] self.driver.get(self.link) sleep(2) last_height = self.driver.execute_script("return document.body.scrollHeight") while True: self.driver.execute_script("window.scrollTo(0, document.body.scrollHeight);") sleep(2) new_height = self.driver.execute_script("return document.body.scrollHeight") if new_height == last_height: break last_height = new_height lista_length = self.driver.find_elements(By.CLASS_NAME, 'buttonDetailPayments') lista_max = len(lista_length) for p in range(1): item = 1 for i in range(lista_max): c = 1 while c < lista_max: try: lista_nomes = self.driver.find_elements(By.XPATH, f'/html/body/div[1]/div[2]/div/div/div[2]/div/div[{item}]/div/div[2]/div/div[1]/h2') self.lista_nome_hoteis.append(lista_nomes[0].text) sleep(1) lista_precos = self.driver.find_elements(By.XPATH, f'/html/body/div[1]/div[2]/div/div/div[2]/div/div[{item}]/div/div[3]/div/div[1]/div[1]/div[2]') self.lista_preco_hoteis.append(lista_precos[0].text) sleep(1) item += 1 except: c += 1 item += 1 print(f'\u001b[32m{"Resultados:"}\u001b[0m') print(self.lista_nome_hoteis) print(self.lista_preco_hoteis) for nome in self.lista_nome_hoteis: nome_hoteis.append(nome) for preco in self.lista_preco_hoteis: preco_hoteis.append(preco) start = ScrappyCvc() start.iniciar() print(nome_hoteis) print(preco_hoteis) data_hoteis = [] for el in zip(nome_hoteis, preco_hoteis): data_hoteis.append(el) data_hoteis_dict = dict(data_hoteis) print(data_hoteis_dict) df_data_hoteis = pd.DataFrame(list(data_hoteis_dict.items()), columns=['Nome', 'Preço']) display(df_data_hoteis) df_data_hoteis.to_csv('report.csv', index=False) ```
github_jupyter
# DS108 Databases : Lesson Ten Companion Notebook ### Table of Contents <a class="anchor" id="DS108L10_toc"></a> * [Table of Contents](#DS108L10_toc) * [Page 1 - Overview](#DS108L10_page_1) * [Page 2 - Sharding](#DS108L10_page_2) * [Page 3 - More Methods](#DS108L10_page_3) * [Page 4 - Key Terms](#DS108L10_page_4) * [Page 5 - Lesson 5 Hands On](#DS108L10_page_5) <hr style="height:10px;border-width:0;color:gray;background-color:gray"> # Page 1 - Overview of this Module<a class="anchor" id="DS108L10_page_1"></a> [Back to Top](#DS108L10_toc) <hr style="height:10px;border-width:0;color:gray;background-color:gray"> ``` from IPython.display import VimeoVideo # Tutorial Video Name: Sharding, More Methods and Project VimeoVideo('245797657', width=720, height=480) ``` # Overview During this last lesson, you will be learning about a few more in-depth NoSQL terms and methods. You will also be working on an in-depth Lesson 5 HandsOn for NoSQL. It is time to dive right into Sharding. <hr style="height:10px;border-width:0;color:gray;background-color:gray"> # Page 2 - Sharding<a class="anchor" id="DS108L10_page_2"></a> [Back to Top](#DS108L10_toc) <hr style="height:10px;border-width:0;color:gray;background-color:gray"> # Sharding **Sharding** is a way to spread data across multiple machines and servers. MongoDB uses Sharding to support deployments and applications that contain huge data sets. This is because when database systems have large data sets, a single server may have trouble keeping up with all the data. There are _two_ ways to deal with a situation like this: *Vertical* or *Horizontal* Scaling. --- ## Vertical Scaling **Vertical Scaling** involves ways to increase the capacity of a server, such as using a much more powerful CPU, adding more RAM, or increasing the amount of storage space. There are limitations when using _Vertical Scaling_ because there may be restrictions on how much storage one machine can handle. Also, cloud-based providers have a maximum for how much storage they have. --- ## Horizontal Scaling **Horizontal Scaling** is the process of spreading out the dataset between multiple servers and increasing the storage to those servers as needed. Even if a single machine out of the many handling the data may not be super high-speed, overall, it may increase the efficiency of the application having many machines. If the dataset expands, all that is needed is to add servers to handle that data as needed. MongoDB supports _Horizontal Scaling_ through _Sharding_. --- ## Enable Sharding **Sharding** is something that is done at a very high level in your database, usually on the admin side of the database. The following command is used when you would like to create Sharding in your database: ```js db.runCommand({ shardCollection: "<database>.<collection>", key: <shardkey>, unique: <boolean>, numInitialChunks: <integer>, collation: { locale: "simple" } }) ``` As you can see, there are several options available to you when running this command; however, only the last is optional. Now it's time to explore these parts: * **shardCollection:** How do you name which collection in which database you would like to shard. It will always be a string. * **key:** The index specification document to use as the shard key. The shard key determines how MongoDB distributes the documents among the shards. * **unique:** When true, the unique option ensures that the underlying index enforces a unique constraint. Hashed shard keys do not support unique constraints. Defaults to false. * **numInitialChunks:** Specifies the number of chunks to initially create when sharding a collection that is empty with a hashed shard key. Then, MongoDB will create and balance chunks across the cluster. The `numInitialChunks` must be less than 8192 per shard. * MongoDB divides sharded data into chunks. Each chunk has an inclusive lower and exclusive upper range based on the shard key. * **collation:** _Optional._ If the collection specified to shardCollection has a default collation, you must include a collation document with `{ locale : "simple" }`, or the shardCollection command fails. At least one of the indexes whose fields support the shard key pattern must have a simple collation. * Collation allows users to specify language-specific string comparison rules, such as letter case and accent marks. <div class="panel panel-success"> <div class="panel-heading"> <h3 class="panel-title">Additional Info!</h3> </div> <div class="panel-body"> <p><b>Sharding</b> can get quite complicated quickly, but you now have a basic understanding of what sharding is and how you can accomplish it. The documentation on <b>Sharding</b> is extensive, so if you would like to read more about it, you can visit MongoDB's documentation website <a href="https://docs.mongodb.com/manual/sharding/" target="_blank">here</a>.</p> </div> </div> <hr style="height:10px;border-width:0;color:gray;background-color:gray"> # Page 3 - More Methods<a class="anchor" id="DS108L10_page_3"></a> [Back to Top](#DS108L10_toc) <hr style="height:10px;border-width:0;color:gray;background-color:gray"> # More Methods Now that you have made it this far in NoSQL, it is time to look into a few more available methods when working with a collection. Some of these methods can be in-depth, but it is good to know they are available to you. --- ## aggregate() This method calculates the aggregate (total) values for data in a collection. Below is the syntax: ```js db.collectionName.aggregate(pipeline, options); ``` Below is a description of the parameters of the above query: * **pipeline:** An array that is a sequence of data aggregation operations or stages. <div class="panel panel-success"> <div class="panel-heading"> <h3 class="panel-title">Additional Info!</h3> </div> <div class="panel-body"> <p>There are many pipeline stages, which you can read about <a href="https://docs.mongodb.com/v3.0/reference/operator/aggregation-pipeline/" target="_blank">here</a>.</p> </div> </div> * **options:** _Optional_, additional documents that are passed in when using aggregate. <div class="panel panel-success"> <div class="panel-heading"> <h3 class="panel-title">Additional Info!</h3> </div> <div class="panel-body"> <p>There are many options available to the aggregate method, which you can read about <a href="https://docs.mongodb.com/v3.0/reference/method/db.collection.aggregate/#db.collection.aggregate">here</a>.</p> </div> </div> --- ## count() This method will count and return the number of results based on a query. The syntax is below: ```js db.collectionName.count(); ``` For example, if you wanted to count the number of documents in your `inventory` collection, you would run the following: ```js db.inventory.count(); ``` The query above will return 10, or however many documents are currently in the `inventory` collection. You could also run this query with a filter. Check to see how many of your app users in your `appusers` collection have an age greater than 20 by running the below query: ```js db.appusers.count( { age: { $gt : 20 } } ) ``` After running the above query, it should return the number 4 or a number close, depending on your changes in that collection. --- ## totalSize() This method will return the total size in bytes of the data in the collection plus the size of every index on the collection. If you run the query below, a number around 16000 will be returned based on what your collection currently contains: ```js db.appusers.totalSize() ``` <div class="panel panel-success"> <div class="panel-heading"> <h3 class="panel-title">Additional Info!</h3> </div> <div class="panel-body"> <p>There are many more methods available to you. Each method has the possibility of being slightly complex. If you would like to read more about the methods available in NoSQL, visit MongoDB's documentation <a href="https://docs.mongodb.com/v3.0/reference/method/js-collection/" target="_blank">Collection Methods</a>.</p> </div> </div> <hr style="height:10px;border-width:0;color:gray;background-color:gray"> # Page 4 - Key Terms<a class="anchor" id="DS108L10_page_4"></a> [Back to Top](#DS108L10_toc) <hr style="height:10px;border-width:0;color:gray;background-color:gray"> # Key Terms Below is a list of a short description of the important keywords you have learned in this lesson. Please read through and go back and review any concepts you don't fully understand. Great Work! <table class="table table-striped"> <tr> <th>Keyword</th> <th>Description</th> </tr> <tr> <td style="font-weight: bold;" nowrap>Sharding</td> <td>Sharding is a way to spread data across multiple machines and servers. MongoDB uses Sharding to support deployments and applications that contain huge data sets. The reason for this is because when database systems have large data sets, a single server may have trouble keeping up with all the data. There are two ways to deal with a situation like this: <em>Vertical</em> or <em>Horizontal</em> Scaling.</td> </tr> <tr> <td style="font-weight: bold;" nowrap>Vertical Scaling</td> <td>Involves ways to increase the capacity of a server, such as using a much more powerful CPU, adding more RAM, or increasing the amount of storage space.</td> </tr> <tr> <td style="font-weight: bold;" nowrap>Horizontal Scaling</td> <td>The process of spreading out the dataset between multiple servers and increasing the storage to those servers as needed.</td> </tr> <tr> <td style="font-weight: bold;" nowrap>aggregate()</td> <td>This method calculates the aggregate (total) values for data in a collection.</td> </tr> <tr> <td style="font-weight: bold;" nowrap>count()</td> <td>This method will count and return the number of results based on a query.</td> </tr> <tr> <td style="font-weight: bold;" nowrap>totalSize()</td> <td>This method will return the total size in bytes of the data in the collection plus the size of every indexes on the collection. </td> </tr> </table> <hr style="height:10px;border-width:0;color:gray;background-color:gray"> # Page 5 - Lesson 5 Hands On<a class="anchor" id="DS108L10_page_5"></a> [Back to Top](#DS108L10_toc) <hr style="height:10px;border-width:0;color:gray;background-color:gray"> Welcome to the last project for the NoSQL course! Great job making it this far! This hands on will be different from the hands on projects you have previously seen in a couple of different ways. You will be putting together the numerous topics you have learned into one large project. It is designed to mimic real problems which you may face in your career, so it may be a challenge for you and will also take several hours. <div class="panel panel-success"> <div class="panel-heading"> <h3 class="panel-title">Additional Info!</h3> </div> <div class="panel-body"> <p>Before beginning this hands-on, you may want to watch this <a href="https://vimeo.com/428206689"> recorded live workshop, "Winnie the Pooh and Databases Too," </a> that goes over a similar example. </p> </div> </div> Take this project step-by-step and be aware that the project description below is written to be a bit less specific than previous Hands-Ons. The hands on is supposed to challenge you to do some problem solving to figure out how to accomplish a task. You can always review past lessons or use a Google search if needed. Good luck! <div class="panel panel-danger"> <div class="panel-heading"> <h3 class="panel-title">Caution!</h3> </div> <div class="panel-body"> <p>Do not submit your project until you have completed all requirements! You will not be able to resubmit.</p> </div> </div> --- ## Requirements For this hands on, you will be working through several real-life scenarios within new collections. This Hands-On is structured into _two_ parts, and each part will ask you to run multiple queries. After each query, please take a screenshot and add it to a text document (or an equivalent) and name this file `Lesson5handson`. This way, you will be able to submit your answers to each part all at once. --- ## Part 1 You have just been hired at a startup company. They currently only have ten employees, but they need to be included in the database. So far, they have only been tracked within an excel sheet. Your boss would like you to create a new collection in Atlas named `employees`. Take a look at the following data and the notes listed below before inserting any data: <table class="table table-striped"> <tr> <th>Name</th> <th>Birthday</th> <th>Address</th> <th>City</th> <th>State</th> <th>Position Name</th> <th>Remote</th> <th>Full Time</th> <tr> <tr> <td>Alison Davidson</td> <td>04/05/75</td> <td>874 W. Oak Place</td> <td>Gary</td> <td>Indiana</td> <td>Customer Support</td> <td>Yes</td> <td>Yes</td> <tr> <tr> <td>Henry Chapelton</td> <td>09/29/80</td> <td>9324 E. Vista Way</td> <td>Tempe</td> <td>Arizona</td> <td>Customer Support</td> <td>No</td> <td>Yes</td> <tr> <tr> <td>Alex Miller</td> <td>11/22/83</td> <td>244 Price Road</td> <td>Mesa</td> <td>Arizona</td> <td>Customer Support</td> <td>No</td> <td>No</td> <tr> <tr> <td>Carly Nielson</td> <td>08/04/87</td> <td>678 W. Westward Road</td> <td>Phoenix</td> <td>Arizona</td> <td>Office Manager</td> <td>No</td> <td>Yes</td> <tr> <tr> <td>Tom Talbot</td> <td>12/30/89</td> <td>12 Oakland Way</td> <td>Chandler</td> <td>Arizona</td> <td>Inventory Manager</td> <td>No</td> <td>Yes</td> <tr> <tr> <td>Mary Crawley</td> <td>07/06/80</td> <td>1010 Granite Way</td> <td>Charlotte</td> <td>North Carolina</td> <td>Human Resources</td> <td>Yes</td> <td>Yes</td> <tr> <tr> <td>Daisy Baxter</td> <td>09/09/87</td> <td>990 E. 84th St.</td> <td>Tempe</td> <td>Arizona</td> <td>CEO</td> <td>No</td> <td>Yes</td> <tr> <tr> <td>William Coyle</td> <td>10/11/91</td> <td>944 W. 16th St.</td> <td>Phoenix</td> <td>Arizona</td> <td>Intern</td> <td>No</td> <td>No</td> <tr> <tr> <td>Edith Bates</td> <td>07/28/90</td> <td>7 E. 20th Pl.</td> <td>Chandler</td> <td>Arizona</td> <td>Customer Support</td> <td>No</td> <td>Yes</td> <tr> <tr> <td>Gwen Harding</td> <td>10/11/86</td> <td>234 W. 48th. St.</td> <td>Phoenix</td> <td>Arizona</td> <td>Office Assistent</td> <td>No</td> <td>Yes</td> <tr> </table> **Notes:** * The `Birthday` field should have a data type of Date. * The `Position Name`, `Remote`, and `Full Time` fields should be within an embedded document called `position`. * `Remote` and `Full Time` fields should have boolean values. It's been about a month since you have inserted all employees into the database. There have been a couple of changes to the company. The CEO decided that he no longer wants remote employees, so they have transferred the remote employees and they are now living in Arizona. Alison Davidson now lives at 777 E. 1st St. # 120 Tempe, AZ and Mary Crawley now lives at 8322 W. Vista Pl. Scottsdale, AZ. Since all employees now all live in Arizona, there is no need to have a field named "state" within this collection, so please remove it. Lastly, they would like very efficient searching using the "position" field (remember that field includes a document with three other fields). --- ## Part 2 You are currently working for a company who wants to build an app similar to Spotify. Below is a list of data for different songs. Please insert this data into a new collection named `songs`. <table class="table table-striped"> <tr> <th>SongId</th> <th align="left">Title</th> <th align="left">Artist</th> <th align="left">Album</th> <th>ReleaseYear</th> <tr> <tr> <td>1</td> <td>Girls Just Want To Have Fun</td> <td>Cyndi Lauper</td> <td>She's So Unusual</td> <td>1983</td> </tr> <tr> <td>2</td> <td>Hips Don't Lie</td> <td>Shakira feat. Wyclef Jean</td> <td>Oral Fixation Vol. 2</td> <td>2006</td> </tr> <tr> <td>3</td> <td>Poker Face</td> <td>Lady Gaga</td> <td>The Fame</td> <td>2008</td> </tr> <tr> <td>4</td> <td>Wannabe</td> <td>Spice Girls</td> <td>Spice</td> <td>1996</td> </tr> <tr> <td>5</td> <td>California Gurls</td> <td>Katy Perry feat. Snoop Dogg</td> <td>Teenage Dream</td> <td>2010</td> </tr> <tr> <td>6</td> <td>Bye, Bye, Bye</td> <td>NSYNC</td> <td>No Strings Attached</td> <td>2000</td> </tr> <tr> <td>7</td> <td>I Will Always Love You</td> <td>Whitney Houston</td> <td>I Will Always Love You: The Best of Whitney Houston</td> <td>2012</td> </tr> <tr> <td>8</td> <td>Baby One More Time</td> <td>Britney Spears</td> <td>Baby One More Time</td> <td>1999</td> </tr> <tr> <td>9</td> <td>Vogue</td> <td>Madonna</td> <td>I'm Breathless</td> <td>1990</td> </tr> <tr> <td>10</td> <td>Rolling in the Deep</td> <td>Adele</td> <td>21</td> <td>2011</td> </tr> <tr> <td>11</td> <td>1234</td> <td>Feist</td> <td>The Reminder</td> <td>2007</td> </tr> <tr> <td>12</td> <td>Elastic Heart</td> <td>Sia</td> <td>The Hunger Games: Catching Fire Soundtrack</td> <td>2015</td> </tr> <tr> <td>13</td> <td>Oops! I Did It Again</td> <td>Britney Spears</td> <td>Oops! I Did It Again</td> <td>2000</td> </tr> <tr> <td>14</td> <td>Bad Romance</td> <td>Lady Gaga</td> <td>The Fame Monster</td> <td>2009</td> </tr> <tr> <td>15</td> <td>Lose Control</td> <td>Missy Elliot</td> <td>The Cookbook</td> <td>2005</td> </tr> <tr> <td>16</td> <td>U Can't Touch This</td> <td>MC Hammer</td> <td>Please Hammer, Don't Hurt 'Em</td> <td>1990</td> </tr> <tr> <td>17</td> <td>Thriller</td> <td>Michael Jackson</td> <td>Thriller</td> <td>1982</td> </tr> <tr> <td>18</td> <td>Single Ladies</td> <td>Beyonce</td> <td>I am... Sasha Fierce</td> <td>2008</td> </tr> <tr> <td>19</td> <td>Rhythm Nation</td> <td>Janet Jackson</td> <td>Janet Jackson's Rhythm Nation 1814</td> <td>1989</td> </tr> </table> **Notes:** * The `artist`, `album`, and `releaseYear` fields should be an embedded document named `details`. * Be sure that the `songId` and `releaseYear` fields have a type of number. Next, your company has run into some things they would like to be changed within the database: * The `title` field needs to be renamed to `songTitle`, so it is clearer to the developers working with the data. * They would like to have the `artist` field to be outside the `details` document but the `album` and `releaseYear` should stay within that document. <div class="panel panel-danger"> <div class="panel-heading"> <h3 class="panel-title">Caution!</h3> </div> <div class="panel-body"> <p>Be sure to zip and submit your <code>Lesson5handson</code> text document when finished! You will not be able to re-submit, so be sure the screenshots to each part are located within this document.</p> </div> </div>
github_jupyter
# Implementing a one-layer Neural Network We will illustrate how to create a one hidden layer NN We will use the iris data for this exercise We will build a one-hidden layer neural network to predict the fourth attribute, Petal Width from the other three (Sepal length, Sepal width, Petal length). ``` import matplotlib.pyplot as plt import numpy as np import tensorflow as tf from sklearn import datasets from tensorflow.python.framework import ops ops.reset_default_graph() iris = datasets.load_iris() x_vals = np.array([x[0:3] for x in iris.data]) y_vals = np.array([x[3] for x in iris.data]) # Create graph session sess = tf.Session() # make results reproducible seed = 2 tf.set_random_seed(seed) np.random.seed(seed) # Split data into train/test = 80%/20% train_indices = np.random.choice(len(x_vals), round(len(x_vals)*0.8), replace=False) test_indices = np.array(list(set(range(len(x_vals))) - set(train_indices))) x_vals_train = x_vals[train_indices] x_vals_test = x_vals[test_indices] y_vals_train = y_vals[train_indices] y_vals_test = y_vals[test_indices] # Normalize by column (min-max norm) def normalize_cols(m): col_max = m.max(axis=0) col_min = m.min(axis=0) return (m-col_min) / (col_max - col_min) x_vals_train = np.nan_to_num(normalize_cols(x_vals_train)) x_vals_test = np.nan_to_num(normalize_cols(x_vals_test)) # Declare batch size batch_size = 50 # Initialize placeholders x_data = tf.placeholder(shape=[None, 3], dtype=tf.float32) y_target = tf.placeholder(shape=[None, 1], dtype=tf.float32) # Create variables for both NN layers hidden_layer_nodes = 10 A1 = tf.Variable(tf.random_normal(shape=[3,hidden_layer_nodes])) # inputs -> hidden nodes b1 = tf.Variable(tf.random_normal(shape=[hidden_layer_nodes])) # one biases for each hidden node A2 = tf.Variable(tf.random_normal(shape=[hidden_layer_nodes,1])) # hidden inputs -> 1 output b2 = tf.Variable(tf.random_normal(shape=[1])) # 1 bias for the output # Declare model operations hidden_output = tf.nn.relu(tf.add(tf.matmul(x_data, A1), b1)) final_output = tf.nn.relu(tf.add(tf.matmul(hidden_output, A2), b2)) # Declare loss function (MSE) loss = tf.reduce_mean(tf.square(y_target - final_output)) # Declare optimizer my_opt = tf.train.GradientDescentOptimizer(0.005) train_step = my_opt.minimize(loss) # Initialize variables init = tf.global_variables_initializer() sess.run(init) # Training loop loss_vec = [] test_loss = [] for i in range(500): rand_index = np.random.choice(len(x_vals_train), size=batch_size) rand_x = x_vals_train[rand_index] rand_y = np.transpose([y_vals_train[rand_index]]) sess.run(train_step, feed_dict={x_data: rand_x, y_target: rand_y}) temp_loss = sess.run(loss, feed_dict={x_data: rand_x, y_target: rand_y}) loss_vec.append(np.sqrt(temp_loss)) test_temp_loss = sess.run(loss, feed_dict={x_data: x_vals_test, y_target: np.transpose([y_vals_test])}) test_loss.append(np.sqrt(test_temp_loss)) if (i+1)%50==0: print('Generation: ' + str(i+1) + '. Loss = ' + str(temp_loss)) %matplotlib inline # Plot loss (MSE) over time plt.plot(loss_vec, 'k-', label='Train Loss') plt.plot(test_loss, 'r--', label='Test Loss') plt.title('Loss (MSE) per Generation') plt.legend(loc='upper right') plt.xlabel('Generation') plt.ylabel('Loss') plt.show() ```
github_jupyter
# From batch to online ## A quick overview of batch learning If you've already delved into machine learning, then you shouldn't have any difficulty in getting to use incremental learning. If you are somewhat new to machine learning, then do not worry! The point of this notebook in particular is to introduce simple notions. We'll also start to show how `creme` fits in and explain how to use it. The whole point of machine learning is to *learn from data*. In *supervised learning* you want to learn how to predict a target $y$ given a set of features $X$. Meanwhile in an unsupervised learning there is no target, and the goal is rather to identify patterns and trends in the features $X$. At this point most people tend to imagine $X$ as a somewhat big table where each row is an observation and each column is a feature, and they would be quite right. Learning from tabular data is part of what's called *batch learning*, which basically that all of the data is available to our learning algorithm at once. A lot of libraries have been created to handle the batch learning regime, with one of the most prominent being Python's [scikit-learn](https://scikit-learn.org/stable/). As a simple example of batch learning let's say we want to learn to predict if a women has breast cancer or not. We'll use the [breast cancer dataset available with scikit-learn](https://scikit-learn.org/stable/modules/generated/sklearn.datasets.load_breast_cancer().html). We'll learn to map a set of features to a binary decision using a [logistic regression](https://www.wikiwand.com/en/Logistic_regression). Like many other models based on numerical weights, logisitc regression is sensitive to the scale of the features. Rescaling the data so that each feature has mean 0 and variance 1 is generally considered good practice. We can apply the rescaling and fit the logistic regression sequentially in an elegant manner using a [Pipeline](https://scikit-learn.org/stable/modules/generated/sklearn.pipeline.Pipeline.html). To measure the performance of the model we'll evaluate the average [ROC AUC score](https://www.wikiwand.com/en/Receiver_operating_characteristic) using a 5 fold [cross-validation](https://www.wikiwand.com/en/Cross-validation_(statistics)). ``` from sklearn import datasets from sklearn import linear_model from sklearn import metrics from sklearn import model_selection from sklearn import pipeline from sklearn import preprocessing # Load the data dataset = datasets.load_breast_cancer() X, y = dataset.data, dataset.target # Define the steps of the model model = pipeline.Pipeline([ ('scale', preprocessing.StandardScaler()), ('lin_reg', linear_model.LogisticRegression(solver='lbfgs')) ]) # Define a determistic cross-validation procedure cv = model_selection.KFold(n_splits=5, shuffle=True, random_state=42) # Compute the MSE values scorer = metrics.make_scorer(metrics.roc_auc_score) scores = model_selection.cross_val_score(model, X, y, scoring=scorer, cv=cv) # Display the average score and it's standard deviation print(f'ROC AUC: {scores.mean():.3f} (± {scores.std():.3f})') ``` This might be a lot to take in if you're not accustomed to scikit-learn, but it probably isn't if you are. Batch learning basically boils down to: 1. Loading the data 2. Fitting a model to the data 3. Computing the performance of the model on unseen data This is pretty standard and is maybe how most people imagine a machine learning pipeline. However this way of proceding has certain downsides. First of all your laptop would crash if the `load_boston` function returned a dataset who's size exceeds your available amount of RAM. Sometimes you can use some tricks to get around this. For example by optimizing the data types and by using sparse representations when applicable you can potentially save precious gigabytes of RAM. However like many tricks this only goes so far. If your dataset weighs hundreds of gigabytes then you won't go far without some special hardware. One solution is to do out-of-core learning; that is, algorithms that can learning by being presented the data in chunks. If you want to go down this road then take a look at [Dask](https://examples.dask.org/machine-learning.html) and [Spark's MLlib](https://spark.apache.org/mllib/). Another issue with the batch learning regime is that can't elegantly learn from new data. Indeed if new data is made available, then the model has to learn from scratch with a new dataset composed of the old data and the new data. This is particularly annoying in a real situation where you might have new incoming data every week, day, hour, minute, or even setting. For example if you're building a recommendation engine for an e-commerce app, then you're probably training your model from 0 every week or so. As your app grows in popularity, so does the dataset you're training on. This will lead to longer and longer training times and might require a hardware upgrade. A final downside that isn't very easy to grasp concerns the manner in which features are extracted. Everytime you want to train your model you first have to extract features. The trick is that some features might not be accessible at the particular point in time you are at. For example maybe that some attributes in your data warehouse get overwritten with time. In other words maybe that all the features pertaining to a particular observations are not available, whereas they were a week ago. This happens more often than not in real scenarios, and apart if you have a sophisticated data engineering pipeline then you will encounter these issues at some point. ## A hands-on introduction to incremental learning Incremental learning is also often called *online learning*, but if you [google online learning](https://www.google.com/search?q=online+learning) a lot of the results will point to educational websites. Hence we prefer the name "incremental learning", from which `creme` derives it's name. The point of incremental learning is to fit a model to a stream of data. In other words, the data isn't available in it's entirety, but rather the observations are provided one by one. As an example let's stream through the dataset used previously. ``` for xi, yi in zip(X, y): # This where the model learns pass ``` In this case we're iterating over a dataset that is already in memory, but we could just as well stream from a CSV file, a Kafka stream, an SQL query, etc. If we look at `x` we can notice that it is a `numpy.ndarray`. ``` xi ``` `creme` on the other hand works with `dict`s. We believe that `dict`s are more enjoyable to program with than `numpy.ndarray`s, at least for when single observations are concerned. `dict`'s bring the added benefit that each feature can be accessed by name rather than by position. ``` for xi, yi in zip(X, y): xi = dict(zip(dataset.feature_names, xi)) pass xi ``` `creme`'s `stream` module has an `iter_sklearn_dataset` convenience function that we can use instead. ``` from creme import stream for xi, yi in stream.iter_sklearn_dataset(datasets.load_breast_cancer()): pass ``` The simple fact that we are getting the data in a stream means that we can't do a lot of things the same way as in a batch setting. For example let's say we want to scale the data so that it has mean 0 and variance 1, as we did earlier. To do so we simply have to subtract the mean of each feature to each value and then divide the result by the standard deviation of the feature. The problem is that we can't possible known the values of the mean and the standard deviation before actually going through all the data! One way to procede would be to do a first pass over the data to compute the necessary values and then scale the values during a second pass. The problem is that defeats our purpose, which is to learn by only looking at the data once. Although this might seem rather restrictive, it reaps sizable benefits down the road. The way we do feature scaling in `creme` involves computing *running statistics*. The idea is that we use a data structure that estimates the mean and updates itself when it is provided with a value. The same goes for the variance (and thus the standard deviation). For example, if we denote $\mu_t$ the mean and $n_t$ the count at any moment $t$, then updating the mean can be done as so: $$ \begin{cases} n_{t+1} = n_t + 1 \\ \mu_{t+1} = \mu_t + \frac{x - \mu_t}{n_{t+1}} \end{cases} $$ Likewhise a running variance can be computed as so: $$ \begin{cases} n_{t+1} = n_t + 1 \\ \mu_{t+1} = \mu_t + \frac{x - \mu_t}{n_{t+1}} \\ s_{t+1} = s_t + (x - \mu_t) \times (x - \mu_{t+1}) \\ \sigma_{t+1} = \frac{s_{t+1}}{n_{t+1}} \end{cases} $$ where $s_t$ is a running sum of squares and $\sigma_t$ is the running variance at time $t$. This might seem a tad more involved than the batch algorithms you learn in school, but it is rather elegant. Implementing this in Python is not too difficult. For example let's compute the running mean and variance of the `'mean area'` variable. ``` n, mean, sum_of_squares, variance = 0, 0, 0, 0 for xi, yi in stream.iter_sklearn_dataset(datasets.load_breast_cancer()): n += 1 old_mean = mean mean += (xi['mean area'] - mean) / n sum_of_squares += (xi['mean area'] - old_mean) * (xi['mean area'] - mean) variance = sum_of_squares / n print(f'Running mean: {mean:.3f}') print(f'Running variance: {variance:.3f}') ``` Let's compare this with `numpy`. ``` import numpy as np i = list(dataset.feature_names).index('mean area') print(f'True mean: {np.mean(X[:, i]):.3f}') print(f'True variance: {np.var(X[:, i]):.3f}') ``` The results seem to be exactly the same! The twist is that the running statistics won't be very accurate for the first few observations. In general though this doesn't matter too much. Some would even go as far as to say that this descrepancy is beneficial and acts as some sort of regularization... Now the idea is that we can compute the running statistics of each feature and scale them as they come along. The way to do this with `creme` is to use the `StandardScaler` class from the `preprocessing` module, as so: ``` from creme import preprocessing scaler = preprocessing.StandardScaler() for xi, yi in stream.iter_sklearn_dataset(datasets.load_breast_cancer()): xi = scaler.fit_one(xi) ``` This is quite terse but let's break it down nonetheless. Every class in `creme` has a `fit_one(x, y)` method where all the magic happens. Now the important thing to notice is that the `fit_one` actually returns the output for the given input. This is one of the nice properties of online learning: inference can be done immediatly. In `creme` each call to a `Transformer`'s `fit_one` will return the transformed output. Meanwhile calling `fit_one` with a `Classifier` or a `Regressor` will return the predicted target for the given set of features. The twist is that the prediction is made *before* looking at the true target `y`. This means that we get a free hold-out prediction every time we call `fit_one`. This can be used to monitor the performance of the model as it trains, which is obviously nice to have. Now that we are scaling the data, we can start doing some actual machine learning. We're going to implement an online linear regression. Because all the data isn't available at once, we are obliged to do what is called *stochastic gradient descent*, which is a popular research topic and has a lot of variants. SGD is commonly used to train neural networks. The idea is that at each step we compute the loss between the target prediction and the truth. We then calculate the gradient, which is simply a set of derivatives with respect to each weight from the linear regression. Once we have obtained the gradient, we can update the weights by moving them in the opposite direction of the gradient. The amount by which the weights are moved typically depends on a *learning rate*, which is typically set by the user. Different optimizers have different ways of managing the weight update, and some handle the learning rate implicitely. Online linear regression can be done in `creme` with the `LinearRegression` class from the `linear_model` module. We'll be using plain and simple SGD using the `SGD` optimizer from the `optim` module. During training we'll measure the squared error between the truth and the predictions. ``` from creme import linear_model from creme import optim scaler = preprocessing.StandardScaler() optimizer = optim.SGD(lr=0.01) log_reg = linear_model.LogisticRegression(optimizer) y_true = [] y_pred = [] for xi, yi in stream.iter_sklearn_dataset(datasets.load_breast_cancer(), shuffle=True, seed=42): # Scale the features xi_scaled = scaler.fit_one(xi).transform_one(xi) # Fit the linear regression yi_pred = log_reg.predict_proba_one(xi_scaled) log_reg.fit_one(xi_scaled, yi) # Store the truth and the prediction y_true.append(yi) y_pred.append(yi_pred[True]) print(f'ROC AUC: {metrics.roc_auc_score(y_true, y_pred):.3f}') ``` The ROC AUC is significantly better than the one obtained from the cross-validation of scikit-learn's logisitic regression. However to make things really comparable it would be nice to compare with the same cross-validation procedure. `creme` has a `compat` module that contains utilities for making `creme` compatible with other Python libraries. Because we're doing regression we'll be using the `SKLRegressorWrapper`. We'll also be using `Pipeline` to encapsulate the logic of the `StandardScaler` and the `LogisticRegression` in one single object. ``` from creme import compat from creme import compose # We define a Pipeline, exactly like we did earlier for sklearn model = compose.Pipeline( ('scale', preprocessing.StandardScaler()), ('log_reg', linear_model.LogisticRegression()) ) # We make the Pipeline compatible with sklearn model = compat.convert_creme_to_sklearn(model) # We compute the CV scores using the same CV scheme and the same scoring scores = model_selection.cross_val_score(model, X, y, scoring=scorer, cv=cv) # Display the average score and it's standard deviation print(f'ROC AUC: {scores.mean():.3f} (± {scores.std():.3f})') ``` This time the ROC AUC score is lower, which is what we would expect. Indeed online learning isn't as accurate as batch learning. However it all depends in what you're interested in. If you're only interested in predicting the next observation then the online learning regime would be better. That's why it's a bit hard to compare both approaches: they're both suited to different scenarios. ## Going further There's a lot more to learn, and it all depends on what kind on your use case. Feel free to have a look at the [documentation](https://creme-ml.github.io/) to know what `creme` has available, and have a look the [example notebook](https://github.com/creme-ml/notebooks). Here a few resources if you want to do some reading: - [Online learning -- Wikipedia](https://www.wikiwand.com/en/Online_machine_learning) - [What is online machine learning? -- Max Pagels](https://medium.com/value-stream-design/online-machine-learning-515556ff72c5) - [Introduction to Online Learning -- USC course](http://www-bcf.usc.edu/~haipengl/courses/CSCI699/) - [Online Methods in Machine Learning -- MIT course](http://www.mit.edu/~rakhlin/6.883/) - [Online Learning: A Comprehensive Survey](https://arxiv.org/pdf/1802.02871.pdf) - [Streaming 101: The world beyond batch](https://www.oreilly.com/ideas/the-world-beyond-batch-streaming-101) - [Machine learning for data streams](https://www.cms.waikato.ac.nz/~abifet/book/contents.html) - [Data Stream Mining: A Practical Approach](https://www.cs.waikato.ac.nz/~abifet/MOA/StreamMining.pdf)
github_jupyter
#### Copyright 2017 Google LLC. ``` # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. ``` # Creating and Manipulating Tensors **Learning Objectives:** * Initialize and assign TensorFlow `Variable`s * Create and manipulate tensors * Refresh your memory about addition and multiplication in linear algebra (consult an introduction to matrix [addition](https://en.wikipedia.org/wiki/Matrix_addition) and [multiplication](https://en.wikipedia.org/wiki/Matrix_multiplication) if these topics are new to you) * Familiarize yourself with basic TensorFlow math and array operations ``` from __future__ import print_function import tensorflow as tf try: tf.contrib.eager.enable_eager_execution() print("TF imported with eager execution!") except ValueError: print("TF already imported with eager execution!") ``` ## Vector Addition You can perform many typical mathematical operations on tensors ([TF API](https://www.tensorflow.org/api_guides/python/math_ops)). The code below creates the following vectors (1-D tensors), all having exactly six elements: * A `primes` vector containing prime numbers. * A `ones` vector containing all `1` values. * A vector created by performing element-wise addition over the first two vectors. * A vector created by doubling the elements in the `primes` vector. ``` primes = tf.constant([2, 3, 5, 7, 11, 13], dtype=tf.int32) print("primes:", primes) ones = tf.ones([6], dtype=tf.int32) print("ones:", ones) just_beyond_primes = tf.add(primes, ones) print("just_beyond_primes:", just_beyond_primes) twos = tf.constant([2, 2, 2, 2, 2, 2], dtype=tf.int32) primes_doubled = primes * twos print("primes_doubled:", primes_doubled) ``` Printing a tensor returns not only its **value**, but also its **shape** (discussed in the next section) and the **type of value stored** in the tensor. Calling the `numpy` method of a tensor returns the value of the tensor as a numpy array: ``` some_matrix = tf.constant([[1, 2, 3], [4, 5, 6]], dtype=tf.int32) print(some_matrix) print("\nvalue of some_matrix is:\n", some_matrix.numpy()) ``` ### Tensor Shapes Shapes are used to characterize the size and number of dimensions of a tensor. The shape of a tensor is expressed as `list`, with the `i`th element representing the size along dimension `i`. The length of the list then indicates the rank of the tensor (i.e., the number of dimensions). For more information, see the [TensorFlow documentation](https://www.tensorflow.org/programmers_guide/tensors#shape). A few basic examples: ``` # A scalar (0-D tensor). scalar = tf.zeros([]) # A vector with 3 elements. vector = tf.zeros([3]) # A matrix with 2 rows and 3 columns. matrix = tf.zeros([2, 3]) print('scalar has shape', scalar.get_shape(), 'and value:\n', scalar.numpy()) print('vector has shape', vector.get_shape(), 'and value:\n', vector.numpy()) print('matrix has shape', matrix.get_shape(), 'and value:\n', matrix.numpy()) ``` ### Broadcasting In mathematics, you can only perform element-wise operations (e.g. *add* and *equals*) on tensors of the same shape. In TensorFlow, however, you may perform operations on tensors that would traditionally have been incompatible. TensorFlow supports **broadcasting** (a concept borrowed from numpy), where the smaller array in an element-wise operation is enlarged to have the same shape as the larger array. For example, via broadcasting: * If an operand requires a size `[6]` tensor, a size `[1]` or a size `[]` tensor can serve as an operand. * If an operation requires a size `[4, 6]` tensor, any of the following sizes can serve as an operand: * `[1, 6]` * `[6]` * `[]` * If an operation requires a size `[3, 5, 6]` tensor, any of the following sizes can serve as an operand: * `[1, 5, 6]` * `[3, 1, 6]` * `[3, 5, 1]` * `[1, 1, 1]` * `[5, 6]` * `[1, 6]` * `[6]` * `[1]` * `[]` **NOTE:** When a tensor is broadcast, its entries are conceptually **copied**. (They are not actually copied for performance reasons. Broadcasting was invented as a performance optimization.) The full broadcasting ruleset is well described in the easy-to-read [numpy broadcasting documentation](http://docs.scipy.org/doc/numpy-1.10.1/user/basics.broadcasting.html). The following code performs the same tensor arithmetic as before, but instead uses scalar values (instead of vectors containing all `1`s or all `2`s) and broadcasting. ``` primes = tf.constant([2, 3, 5, 7, 11, 13], dtype=tf.int32) print("primes:", primes) one = tf.constant(1, dtype=tf.int32) print("one:", one) just_beyond_primes = tf.add(primes, one) print("just_beyond_primes:", just_beyond_primes) two = tf.constant(2, dtype=tf.int32) primes_doubled = primes * two print("primes_doubled:", primes_doubled) ``` ### Exercise #1: Arithmetic over vectors. Perform vector arithmetic to create a "just_under_primes_squared" vector, where the `i`th element is equal to the `i`th element in `primes` squared, minus 1. For example, the second element would be equal to `3 * 3 - 1 = 8`. Make use of either the `tf.multiply` or `tf.pow` ops to square the value of each element in the `primes` vector. ``` # Write your code for Task 1 here. primes = tf.constant([2, 3, 5, 7, 11, 13], dtype=tf.int32) print("primes:", primes) m_one = tf.constant(-1, dtype=tf.int32) two = tf.constant(2, dtype=tf.int32) square = tf.multiply(primes,primes) # square = tf.pow(primes, two) just_under_primes_squared = tf.add(square, m_one) print("just_under_primes_squared:", just_under_primes_squared) # two = tf.constant(2, dtype=tf.int32) # primes_doubled = primes * two # print("primes_doubled:", primes_doubled) ``` ### Solution Double-click __here__ for the solution. <!-- Your answer is below: # Task: Square each element in the primes vector, then subtract 1. def solution(primes): primes_squared = tf.multiply(primes, primes) neg_one = tf.constant(-1, dtype=tf.int32) just_under_primes_squared = tf.add(primes_squared, neg_one) return just_under_primes_squared def alternative_solution(primes): primes_squared = tf.pow(primes, 2) one = tf.constant(1, dtype=tf.int32) just_under_primes_squared = tf.subtract(primes_squared, one) return just_under_primes_squared primes = tf.constant([2, 3, 5, 7, 11, 13], dtype=tf.int32) just_under_primes_squared = solution(primes) print("just_under_primes_squared:", just_under_primes_squared) --> ## Matrix Multiplication In linear algebra, when multiplying two matrices, the number of *columns* of the first matrix must equal the number of *rows* in the second matrix. - It is **_valid_** to multiply a `3x4` matrix by a `4x2` matrix. This will result in a `3x2` matrix. - It is **_invalid_** to multiply a `4x2` matrix by a `3x4` matrix. ``` # A 3x4 matrix (2-d tensor). x = tf.constant([[5, 2, 4, 3], [5, 1, 6, -2], [-1, 3, -1, -2]], dtype=tf.int32) # A 4x2 matrix (2-d tensor). y = tf.constant([[2, 2], [3, 5], [4, 5], [1, 6]], dtype=tf.int32) # Multiply `x` by `y`; result is 3x2 matrix. matrix_multiply_result = tf.matmul(x, y) print(matrix_multiply_result) ``` ## Tensor Reshaping With tensor addition and matrix multiplication each imposing constraints on operands, TensorFlow programmers must frequently reshape tensors. You can use the `tf.reshape` method to reshape a tensor. For example, you can reshape a 8x2 tensor into a 2x8 tensor or a 4x4 tensor: ``` # Create an 8x2 matrix (2-D tensor). matrix = tf.constant( [[1, 2], [3, 4], [5, 6], [7, 8], [9, 10], [11, 12], [13, 14], [15, 16]], dtype=tf.int32) reshaped_2x8_matrix = tf.reshape(matrix, [2, 8]) reshaped_4x4_matrix = tf.reshape(matrix, [4, 4]) print("Original matrix (8x2):") print(matrix.numpy()) print("Reshaped matrix (2x8):") print(reshaped_2x8_matrix.numpy()) print("Reshaped matrix (4x4):") print(reshaped_4x4_matrix.numpy()) ``` You can also use `tf.reshape` to change the number of dimensions (the "rank") of the tensor. For example, you could reshape that 8x2 tensor into a 3-D 2x2x4 tensor or a 1-D 16-element tensor. ``` # Create an 8x2 matrix (2-D tensor). matrix = tf.constant( [[1, 2], [3, 4], [5, 6], [7, 8], [9, 10], [11, 12], [13, 14], [15, 16]], dtype=tf.int32) reshaped_2x2x4_tensor = tf.reshape(matrix, [2, 2, 4]) one_dimensional_vector = tf.reshape(matrix, [16]) print("Original matrix (8x2):") print(matrix.numpy()) print("Reshaped 3-D tensor (2x2x4):") print(reshaped_2x2x4_tensor.numpy()) print("1-D vector:") print(one_dimensional_vector.numpy()) ``` ### Exercise #2: Reshape two tensors in order to multiply them. The following two vectors are incompatible for matrix multiplication: * `a = tf.constant([5, 3, 2, 7, 1, 4])` * `b = tf.constant([4, 6, 3])` Reshape these vectors into compatible operands for matrix multiplication. Then, invoke a matrix multiplication operation on the reshaped tensors. ``` # Write your code for Task 2 here. a = tf.constant([5, 3, 2, 7, 1, 4]) b = tf.constant([4, 6, 3]) reshaped_a= tf.reshape(a, [2, 3]) reshaped_b= tf.reshape(b, [3, 1]) matrix_multiply_ab = tf.matmul(reshaped_a, reshaped_b) print(matrix_multiply_ab) ``` Remember, when multiplying two matrices, the number of *columns* of the first matrix must equal the number of *rows* in the second matrix. One possible solution is to reshape `a` into a 2x3 matrix and reshape `b` into a a 3x1 matrix, resulting in a 2x1 matrix after multiplication: An alternative solution would be to reshape `a` into a 6x1 matrix and `b` into a 1x3 matrix, resulting in a 6x3 matrix after multiplication. ``` a = tf.constant([5, 3, 2, 7, 1, 4]) b = tf.constant([4, 6, 3]) reshaped_a = tf.reshape(a, [6, 1]) reshaped_b = tf.reshape(b, [1, 3]) c = tf.matmul(reshaped_a, reshaped_b) print("reshaped_a (6x1):") print(reshaped_a.numpy()) print("reshaped_b (1x3):") print(reshaped_b.numpy()) print("reshaped_a x reshaped_b (6x3):") print(c.numpy()) ``` ### Solution Double-click __here__ for the solution. <!-- Your answer is below: # Task: Reshape two tensors in order to multiply them a = tf.constant([5, 3, 2, 7, 1, 4]) b = tf.constant([4, 6, 3]) reshaped_a = tf.reshape(a, [2, 3]) reshaped_b = tf.reshape(b, [3, 1]) c = tf.matmul(reshaped_a, reshaped_b) print("reshaped_a (2x3):") print(reshaped_a.numpy()) print("reshaped_b (3x1):") print(reshaped_b.numpy()) print("reshaped_a x reshaped_b (2x1):") print(c.numpy()) --> ## Variables, Initialization and Assignment So far, all the operations we performed were on static values (`tf.constant`); calling `numpy()` always returned the same result. TensorFlow allows you to define `Variable` objects, whose values can be changed. When creating a variable, you can set an initial value explicitly, or you can use an initializer (like a distribution): ``` # Create a scalar variable with the initial value 3. v = tf.contrib.eager.Variable([3]) # Create a vector variable of shape [1, 4], with random initial values, # sampled from a normal distribution with mean 1 and standard deviation 0.35. w = tf.contrib.eager.Variable(tf.random_normal([1, 4], mean=1.0, stddev=0.35)) print("v:", v.numpy()) print("w:", w.numpy()) ``` To change the value of a variable, use the `assign` op: ``` v = tf.contrib.eager.Variable([3]) print(v.numpy()) tf.assign(v, [7]) print(v.numpy()) v.assign([5]) print(v.numpy()) ``` When assigning a new value to a variable, its shape must be equal to its previous shape: ``` v = tf.contrib.eager.Variable([[1, 2, 3], [4, 5, 6]]) print(v.numpy()) try: print("Assigning [7, 8, 9] to v") v.assign([7, 8, 9]) except ValueError as e: print("Exception:", e) ``` There are many more topics about variables that we didn't cover here, such as loading and storing. To learn more, see the [TensorFlow docs](https://www.tensorflow.org/programmers_guide/variables). ### Exercise #3: Simulate 10 rolls of two dice. Create a dice simulation, which generates a `10x3` 2-D tensor in which: * Columns `1` and `2` each hold one throw of one six-sided die (with values 1–6). * Column `3` holds the sum of Columns `1` and `2` on the same row. For example, the first row might have the following values: * Column `1` holds `4` * Column `2` holds `3` * Column `3` holds `7` You'll need to explore the [TensorFlow documentation](https://www.tensorflow.org/api_guides/python/array_ops) to solve this task. ``` # Write your code for Task 3 here. # Task: Simulate 10 throws of two dice. Store the results in a 10x3 matrix. die1 = tf.contrib.eager.Variable( tf.random_uniform([10, 1], minval=1, maxval=7, dtype=tf.int32)) die2 = tf.contrib.eager.Variable( tf.random_uniform([10, 1], minval=1, maxval=7, dtype=tf.int32)) dice_sum = tf.add(die1, die2) resulting_matrix = tf.concat(values=[die1, die2, dice_sum], axis=1) print(resulting_matrix.numpy()) ``` We're going to place dice throws inside two separate 10x1 matrices, `die1` and `die2`. The summation of the dice rolls will be stored in `dice_sum`, then the resulting 10x3 matrix will be created by *concatenating* the three 10x1 matrices together into a single matrix. Alternatively, we could have placed dice throws inside a single 10x2 matrix, but adding different columns of the same matrix would be more complicated. We also could have placed dice throws inside two 1-D tensors (vectors), but doing so would require transposing the result. ### Solution Double-click __here__ for the solution. <!-- Your answer is below: # Task: Simulate 10 throws of two dice. Store the results in a 10x3 matrix. die1 = tf.contrib.eager.Variable( tf.random_uniform([10, 1], minval=1, maxval=7, dtype=tf.int32)) die2 = tf.contrib.eager.Variable( tf.random_uniform([10, 1], minval=1, maxval=7, dtype=tf.int32)) dice_sum = tf.add(die1, die2) resulting_matrix = tf.concat(values=[die1, die2, dice_sum], axis=1) print(resulting_matrix.numpy()) -->
github_jupyter