# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
from numpy import exp, array, random, dot
training_set_inputs = array([[0, 0, 1], [1, 1, 1], [1, 0, 1], [0, 1, 1]])
training_set_outputs = array([[0, 1, 1, 0]]).T
random.seed(1)
synaptic_weights = 2 * random.random((3, 1)) - 1
for iteration in range(10000):
    output = 1 / (1 + exp(-(dot(training_set_inputs, synaptic_weights))))
    synaptic_weights += dot(training_set_inputs.T, (training_set_outputs - output) * output * (1 - output))
print(1 / (1 + exp(-(dot(array([1, 0, 0]), synaptic_weights)))))
| My first Neural Network.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Character-Level LSTM in PyTorch
#
# In this notebook, I'll construct a character-level LSTM with PyTorch. The network will train character by character on some text, then generate new text character by character. As an example, I will train on the text of *Anna Karenina*. **This model will be able to generate new text based on the text from the book!**
#
# This network is based on Andrej Karpathy's [post on RNNs](http://karpathy.github.io/2015/05/21/rnn-effectiveness/) and [implementation in Torch](https://github.com/karpathy/char-rnn). Below is the general architecture of the character-wise RNN.
#
# <img src="assets/charseq.jpeg" width="500">
# +
import os
import sys
import math
import logging
from pathlib import Path
import numpy as np
import torch.optim as optim
import torch
from torchvision import transforms, models
from torch import nn
import torch.nn.functional as F
# %load_ext autoreload
# %autoreload 2
import matplotlib as mpl
import matplotlib.pyplot as plt
# %matplotlib inline
# %config InlineBackend.figure_format = 'retina'
import seaborn as sns
sns.set_context("poster")
sns.set(rc={'figure.figsize': (16, 9.)})
sns.set_style("whitegrid")
import pandas as pd
pd.set_option("display.max_rows", 120)
pd.set_option("display.max_columns", 120)
logging.basicConfig(level=logging.INFO, stream=sys.stdout)
# -
# First let's load in our required resources for data loading and model creation.
from deep_learning_udacity.Char_RNN import one_hot_encode, get_batches, CharRNN, sample, predict, train
# ## Load in Data
#
# Then, we'll load the Anna Karenina text file and convert it into integers for our network to use.
# open text file and read in data as `text`
with open('../data/raw/Character_rnn/anna.txt', 'r') as f:
    text = f.read()
# Let's check out the first 100 characters, make sure everything is peachy. According to the [American Book Review](http://americanbookreview.org/100bestlines.asp), this is the 6th best first line of a book ever.
text[:100]
# ### Tokenization
#
# In the cells below, I'm creating a couple of **dictionaries** to convert the characters to and from integers. Encoding the characters as integers makes them easier to use as input to the network.
# +
# encode the text and map each character to an integer and vice versa
# we create two dictionaries:
# 1. int2char, which maps integers to characters
# 2. char2int, which maps characters to unique integers
chars = tuple(set(text))
int2char = dict(enumerate(chars))
char2int = {ch: ii for ii, ch in int2char.items()}
# encode the text
encoded = np.array([char2int[ch] for ch in text])
# -
encoded
# And we can see those same characters from above, encoded as integers.
encoded[:100]
# ## Pre-processing the data
#
# As you can see in our char-RNN image above, our LSTM expects an input that is **one-hot encoded**, meaning that each character is converted into an integer (via our created dictionary) and *then* converted into a column vector where only its corresponding integer index has the value 1 and the rest of the vector is filled with 0s. Since we're one-hot encoding the data, let's make a function to do that!
#
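# Since `one_hot_encode` is imported from `deep_learning_udacity.Char_RNN` above, here is a
# minimal sketch of what such a function might look like (the name and signature match the
# import, but this body is an illustrative assumption, not the package's actual code):

```python
import numpy as np

def one_hot_encode(arr, n_labels):
    # start with an all-zero matrix: one row per token, one column per label
    one_hot = np.zeros((arr.size, n_labels), dtype=np.float32)
    # place a 1 at each token's integer index
    one_hot[np.arange(arr.size), arr.flatten()] = 1.0
    # reshape back so the original batch/sequence dimensions are preserved
    return one_hot.reshape((*arr.shape, n_labels))
```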
# +
# check that the function works as expected
test_seq = np.array([[3, 5, 1]])
one_hot = one_hot_encode(test_seq, 8)
print(one_hot)
# -
# ## Making training mini-batches
#
#
# To train on this data, we also want to create mini-batches for training. Remember that we want our batches to be multiple sequences of some desired number of sequence steps. Considering a simple example, our batches would look like this:
#
# <img src="assets/sequence_batching@1x.png" width=500px>
#
#
# <br>
#
# In this example, we'll take the encoded characters (passed in as the `arr` parameter) and split them into multiple sequences, given by `batch_size`. Each of our sequences will be `seq_length` long.
#
# ### Creating Batches
#
# **1. The first thing we need to do is discard some of the text so we only have completely full mini-batches.**
#
# Each batch contains $N \times M$ characters, where $N$ is the batch size (the number of sequences in a batch) and $M$ is `seq_length`, the number of time steps in a sequence. Then, to get the total number of batches, $K$, that we can make from the array `arr`, you divide the length of `arr` by the number of characters per batch. Once you know the number of batches, you can get the total number of characters to keep from `arr`: $N \times M \times K$.
#
# **2. After that, we need to split `arr` into $N$ batches.**
#
# You can do this using `arr.reshape(size)` where `size` is a tuple containing the dimension sizes of the reshaped array. We know we want $N$ sequences in a batch, so let's make that the size of the first dimension. For the second dimension, you can use `-1` as a placeholder; it'll fill in the appropriate size for you. After this, you should have an array that is $N \times (M * K)$.
#
# **3. Now that we have this array, we can iterate through it to get our mini-batches.**
#
# The idea is that each batch is a $N \times M$ window on the $N \times (M * K)$ array. For each subsequent batch, the window moves over by `seq_length`. We also want to create both the input and target arrays. Remember that the targets are just the inputs shifted over by one character. The way I like to do this is to use `range` to take steps of size `seq_length` from $0$ to `arr.shape[1]`, the total number of tokens in each sequence. That way, the integers you get from `range` always point to the start of a batch, and each window is `seq_length` wide.
#
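# As a reference (try writing your own first!), the three steps above might be sketched like
# this — an illustrative version, not necessarily identical to the imported `get_batches`:

```python
import numpy as np

def get_batches(arr, batch_size, seq_length):
    # 1. keep only enough characters for completely full batches
    n_batches = len(arr) // (batch_size * seq_length)
    arr = arr[:n_batches * batch_size * seq_length]
    # 2. reshape into batch_size rows
    arr = arr.reshape((batch_size, -1))
    # 3. slide an N x M window across the columns
    for n in range(0, arr.shape[1], seq_length):
        x = arr[:, n:n + seq_length]
        # targets are the inputs shifted one character, wrapping at the end
        y = np.zeros_like(x)
        y[:, :-1] = x[:, 1:]
        y[:, -1] = arr[:, (n + seq_length) % arr.shape[1]]
        yield x, y
```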
# > **TODO:** Write the code for creating batches in the function below. The exercises in this notebook _will not be easy_. I've provided a notebook with solutions alongside this notebook. If you get stuck, check out the solutions. The most important thing is that you don't copy and paste the code into here, **type out the solution code yourself.**
# ### Test Your Implementation
#
# Now I'll make some data sets and we can check out what's going on as we batch data. Here, as an example, I'm going to use a batch size of 8 and 50 sequence steps.
batches = get_batches(encoded, 8, 50)
x, y = next(batches)
# printing out the first 10 items in a sequence
print('x\n', x[:10, :10])
print('\ny\n', y[:10, :10])
# If you implemented `get_batches` correctly, the above output should look something like
# ```
# x
# [[25 8 60 11 45 27 28 73 1 2]
# [17 7 20 73 45 8 60 45 73 60]
# [27 20 80 73 7 28 73 60 73 65]
# [17 73 45 8 27 73 66 8 46 27]
# [73 17 60 12 73 8 27 28 73 45]
# [66 64 17 17 46 7 20 73 60 20]
# [73 76 20 20 60 73 8 60 80 73]
# [47 35 43 7 20 17 24 50 37 73]]
#
# y
# [[ 8 60 11 45 27 28 73 1 2 2]
# [ 7 20 73 45 8 60 45 73 60 45]
# [20 80 73 7 28 73 60 73 65 7]
# [73 45 8 27 73 66 8 46 27 65]
# [17 60 12 73 8 27 28 73 45 27]
# [64 17 17 46 7 20 73 60 20 80]
# [76 20 20 60 73 8 60 80 73 17]
# [35 43 7 20 17 24 50 37 73 36]]
# ```
# although the exact numbers may be different. Check to make sure the data is shifted over one step for `y`.
# ---
# ## Defining the network with PyTorch
#
# Below is where you'll define the network.
#
# <img src="assets/charRNN.png" width=500px>
#
# Next, you'll use PyTorch to define the architecture of the network. We start by defining the layers and operations we want. Then, define a method for the forward pass. You've also been given a method for predicting characters.
# ### Model Structure
#
# In `__init__` the suggested structure is as follows:
# * Create and store the necessary dictionaries (this has been done for you)
# * Define an LSTM layer that takes as params: an input size (the number of characters), a hidden layer size `n_hidden`, a number of layers `n_layers`, a dropout probability `drop_prob`, and a batch_first boolean (True, since we are batching)
# * Define a dropout layer with `drop_prob`
# * Define a fully-connected layer with params: input size `n_hidden` and output size (the number of characters)
# * Finally, initialize the weights (again, this has been given)
#
# Note that some parameters have been named and given in the `__init__` function, and we use them and store them by doing something like `self.drop_prob = drop_prob`.
# ---
# ### LSTM Inputs/Outputs
#
# You can create a basic [LSTM layer](https://pytorch.org/docs/stable/nn.html#lstm) as follows
#
# ```python
# self.lstm = nn.LSTM(input_size, n_hidden, n_layers,
# dropout=drop_prob, batch_first=True)
# ```
#
# where `input_size` is the number of characters this cell expects to see as sequential input, and `n_hidden` is the number of units in the hidden layers in the cell. We can add dropout by passing a dropout parameter with a specified probability; this automatically adds dropout on the outputs of each LSTM layer except the last. Finally, in the `forward` function, we use `.view` to reshape the LSTM outputs so they can be fed to the fully-connected layer.
#
# We also need to create an initial hidden state of all zeros. This is done like so
#
# ```python
# self.init_hidden()
# ```
# check if GPU is available
train_on_gpu = torch.cuda.is_available()
if train_on_gpu:
    print('Training on GPU!')
else:
    print('No GPU available, training on CPU; consider making n_epochs very small.')
# ## Time to train
#
# The train function gives us the ability to set the number of epochs, the learning rate, and other parameters.
#
# Below we're using an Adam optimizer and cross entropy loss since we are looking at character class scores as output. We calculate the loss and perform backpropagation, as usual!
#
# A couple of details about training:
# >* Within the batch loop, we detach the hidden state from its history; this time setting it equal to a new *tuple* variable because an LSTM has a hidden state that is a tuple of the hidden and cell states.
# * We use [`clip_grad_norm_`](https://pytorch.org/docs/stable/_modules/torch/nn/utils/clip_grad.html) to help prevent exploding gradients.
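# In isolation, those two details look like this — a minimal sketch with made-up shapes
# (a 2-layer LSTM, batch of 4, 8 hidden units), not the actual `train` function:

```python
import torch
from torch import nn

lstm = nn.LSTM(input_size=5, hidden_size=8, num_layers=2, batch_first=True)
h = (torch.zeros(2, 4, 8), torch.zeros(2, 4, 8))  # (hidden state, cell state)

# inside the batch loop: detach by rebuilding the tuple, so backprop
# stops at the batch boundary instead of running through all history
h = tuple(state.detach() for state in h)

x = torch.randn(4, 10, 5)  # batch of 4 sequences, 10 steps, 5 features
out, h = lstm(x, h)
out.sum().backward()

# clip the gradient norm in-place to help prevent exploding gradients
nn.utils.clip_grad_norm_(lstm.parameters(), 5)
```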
# ## Instantiating the model
#
# Now we can actually train the network. First we'll create the network itself, with some given hyperparameters. Then, define the mini-batch sizes, and start training!
# +
## TODO: set your model hyperparameters
# define and print the net
n_hidden = 256
n_layers = 2
net = CharRNN(chars, train_on_gpu, n_hidden, n_layers)
print(net)
# -
# ### Set your training hyperparameters!
# +
batch_size = 128
seq_length = 100
n_epochs = 30
# train the model
train(net, encoded, epochs=n_epochs, batch_size=batch_size, seq_length=seq_length, lr=0.001, print_every=10)
# -
# ## Getting the best model
#
# To set your hyperparameters to get the best performance, you'll want to watch the training and validation losses. If your training loss is much lower than the validation loss, you're overfitting: increase regularization (more dropout) or use a smaller network. If the training and validation losses are close, you're underfitting, so you can increase the size of the network.
# ## Hyperparameters
#
# Here are the hyperparameters for the network.
#
# In defining the model:
# * `n_hidden` - The number of units in the hidden layers.
# * `n_layers` - Number of hidden LSTM layers to use.
#
# We assume that dropout probability and learning rate will be kept at the default, in this example.
#
# And in training:
# * `batch_size` - Number of sequences running through the network in one pass.
# * `seq_length` - Number of characters in the sequences the network is trained on. Larger is typically better; the network will learn more long-range dependencies, but it takes longer to train. 100 is usually a good number here.
# * `lr` - Learning rate for training
#
# Here's some good advice from <NAME> on training the network. I'm going to copy it in here for your benefit, but also link to [where it originally came from](https://github.com/karpathy/char-rnn#tips-and-tricks).
#
# > ## Tips and Tricks
#
# >### Monitoring Validation Loss vs. Training Loss
# >If you're somewhat new to Machine Learning or Neural Networks it can take a bit of expertise to get good models. The most important quantity to keep track of is the difference between your training loss (printed during training) and the validation loss (printed once in a while when the RNN is run on the validation data (by default every 1000 iterations)). In particular:
#
# > - If your training loss is much lower than validation loss then this means the network might be **overfitting**. Solutions to this are to decrease your network size, or to increase dropout. For example you could try dropout of 0.5 and so on.
# > - If your training/validation loss are about equal then your model is **underfitting**. Increase the size of your model (either number of layers or the raw number of neurons per layer)
#
# > ### Approximate number of parameters
#
# > The two most important parameters that control the model are `n_hidden` and `n_layers`. I would advise that you always use `n_layers` of either 2/3. The `n_hidden` can be adjusted based on how much data you have. The two important quantities to keep track of here are:
#
# > - The number of parameters in your model. This is printed when you start training.
# > - The size of your dataset. 1MB file is approximately 1 million characters.
#
# >These two should be about the same order of magnitude. It's a little tricky to tell. Here are some examples:
#
# > - I have a 100MB dataset and I'm using the default parameter settings (which currently print 150K parameters). My data size is significantly larger (100 mil >> 0.15 mil), so I expect to heavily underfit. I am thinking I can comfortably afford to make `n_hidden` larger.
# > - I have a 10MB dataset and running a 10 million parameter model. I'm slightly nervous and I'm carefully monitoring my validation loss. If it's larger than my training loss then I may want to try to increase dropout a bit and see if that helps the validation loss.
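# To make the parameter count concrete, here is a rough calculation for a stacked LSTM plus
# the final fully-connected layer (the standard LSTM parameter formula; the example sizes —
# 83 distinct characters, `n_hidden=256`, `n_layers=2` — are made up for illustration):

```python
def char_rnn_param_count(n_chars, n_hidden, n_layers):
    total = 0
    for layer in range(n_layers):
        in_size = n_chars if layer == 0 else n_hidden
        # each LSTM layer: 4 gates, each with input weights,
        # recurrent weights, and two bias vectors
        total += 4 * (n_hidden * in_size + n_hidden * n_hidden + 2 * n_hidden)
    # final fully-connected layer: weights + bias
    total += n_hidden * n_chars + n_chars
    return total

print(char_rnn_param_count(83, 256, 2))  # → 896851, i.e. roughly 0.9M parameters
```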
#
# > ### Best models strategy
#
# >The winning strategy to obtaining very good models (if you have the compute time) is to always err on making the network larger (as large as you're willing to wait for it to compute) and then try different dropout values (between 0,1). Whatever model has the best validation performance (the loss, written in the checkpoint filename, low is good) is the one you should use in the end.
#
# >It is very common in deep learning to run many different models with many different hyperparameter settings, and in the end take whatever checkpoint gave the best validation performance.
#
# >By the way, the size of your training and validation splits are also parameters. Make sure you have a decent amount of data in your validation set or otherwise the validation performance will be noisy and not very informative.
# ## Checkpoint
#
# After training, we'll save the model so we can load it again later if we need to. Here I'm saving the parameters needed to recreate the same architecture: the hidden layer hyperparameters and the text characters.
# +
# change the name, for saving multiple files
model_name = 'rnn_x_epoch.net'
checkpoint = {'n_hidden': net.n_hidden,
              'n_layers': net.n_layers,
              'state_dict': net.state_dict(),
              'tokens': net.chars}
with open(model_name, 'wb') as f:
    torch.save(checkpoint, f)
# -
# ---
# ## Making Predictions
#
# Now that the model is trained, we'll want to sample from it and make predictions about next characters! To sample, we pass in a character and have the network predict the next character. Then we take that character, pass it back in, and get another predicted character. Just keep doing this and you'll generate a bunch of text!
#
# ### A note on the `predict` function
#
# The output of our RNN is from a fully-connected layer and it outputs a **distribution of next-character scores**.
#
# > To actually get the next character, we apply a softmax function, which gives us a *probability* distribution that we can then sample to predict the next character.
#
# ### Top K sampling
#
# Our predictions come from a categorical probability distribution over all the possible characters. We can make the sampled text more reasonable to handle (and less variable) by only considering the $K$ most probable characters. This prevents the network from giving us completely absurd characters while still allowing some noise and randomness in the sampled text. Read more about [topk, here](https://pytorch.org/docs/stable/torch.html#torch.topk).
#
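# The idea in NumPy terms — an illustrative sketch of softmax plus top-$K$ sampling, not the
# imported `predict` function (`torch.topk` plays the role of `np.argsort(...)[-k:]` here):

```python
import numpy as np

def sample_top_k(scores, k, rng=None):
    rng = np.random.default_rng() if rng is None else rng
    # softmax: turn raw next-character scores into a probability distribution
    exps = np.exp(scores - scores.max())
    p = exps / exps.sum()
    # keep only the k most probable characters and renormalize
    top = np.argsort(p)[-k:]
    p_top = p[top] / p[top].sum()
    # sample one character index from the reduced distribution
    return rng.choice(top, p=p_top)
```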
# ### Priming and generating text
#
# Typically you'll want to prime the network so you can build up a hidden state. Otherwise the network will start out generating characters at random. In general the first bunch of characters will be a little rough since it hasn't built up a long history of characters to predict from.
print(sample(net, 1000, prime='Anna', top_k=5))
# ## Loading a checkpoint
# +
# Here we load back in the checkpoint saved above (`rnn_x_epoch.net`)
with open('rnn_x_epoch.net', 'rb') as f:
    checkpoint = torch.load(f)
loaded = CharRNN(checkpoint['tokens'], train_on_gpu,
                 n_hidden=checkpoint['n_hidden'], n_layers=checkpoint['n_layers'])
loaded.load_state_dict(checkpoint['state_dict'])
# + jupyter={"outputs_hidden": true}
| notebooks/Exercise_Character_Level_RNN.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
from pydataset import data
# When the instructions say to load a dataset, you can pass the name of the dataset as a string to the `data` function to load the dataset. You can also view the documentation for the data set by passing the `show_doc` keyword argument.
#
# ```python
# data('mpg', show_doc=True)  # view the documentation for the dataset
# mpg = data('mpg')           # load the dataset and store it in a variable
# ```
#
# - All the datasets loaded from the `pydataset` library will be pandas dataframes.
# # 1.
# Copy the code from the lesson to create a dataframe full of student grades.
# +
import pandas as pd
import numpy as np
np.random.seed(123)
students = ['Sally', 'Jane', 'Suzie', 'Billy', 'Ada', 'John', 'Thomas',
'Marie', 'Albert', 'Richard', 'Isaac', 'Alan']
# randomly generate scores for each student for each subject
# note that all the values need to have the same length here
math_grades = np.random.randint(low=60, high=100, size=len(students))
english_grades = np.random.randint(low=60, high=100, size=len(students))
reading_grades = np.random.randint(low=60, high=100, size=len(students))
df = pd.DataFrame({'name': students,
'math': math_grades,
'english': english_grades,
'reading': reading_grades})
type(df)
# -
df
# # 1.A:
# Create a column named `passing_english` that indicates whether each student has a passing grade in english.
#
# +
df_eng = df.assign(passing_english = df['english'] > 69)
df_eng
# -
# # 1.B:
# Sort the english grades by the `passing_english` column. How are duplicates handled?
df_eng.sort_values(by='passing_english', ascending=False)
#duplicates are then sorted anti-alphabetically by name
# # 1.C:
# Sort the english grades first by `passing_english` and then by student name.
# - All the students that are failing english should be first,
# - and within the students that are failing english they should be ordered alphabetically.
# - The same should be true for the students passing english.
#
# (Hint: you can pass a list to the .sort_values method)
#
# +
df_eng.sort_values(by=['passing_english', 'name'], ascending=[True, True])
# -
# # 1.D:
# Sort the english grades first by `passing_english`,
#
# and then by the actual english grade,
#
# similar to how we did in the last step.
df_eng.sort_values(by=['passing_english','english', 'name'], ascending=[True, True, True])
# # 1.E:
# - Calculate each student's overall grade and
# - add it as a column on the dataframe.
#
# The overall grade is the average of the math, english, and reading grades.
df_ogrd = df.assign(overall_grade = (df['math'] + df['english'] + df['reading']) / 3)
df_ogrd
# # 2.
# Load the `mpg` dataset. Read the documentation for the dataset and use it for the following questions:
mpg = data('mpg') # load the dataset and store it in a variable
data('mpg', show_doc=True) # view the documentation for the dataset
# # 2.A:
# How many **rows** and **columns** are there?
# +
#type(mpg)
mpg.shape
#234 rows
#11 columns
# -
# # 2.B:
# What are the data types of each column?
mpg.dtypes
# # 2.C:
# Summarize the dataframe with `.info` and `.describe`
# +
mpg.info()
# -
mpg.describe()
# # 2.D:
# Rename the `cty` column to `city`.
# +
mpg.rename(columns={'cty' : 'city'}, inplace=True)
mpg.head(2)
# -
# # 2.E:
# Rename the `hwy` column to `highway`.
# +
mpg.rename(columns={'hwy' : 'highway'}, inplace=True)
mpg.head(2)
# -
# # 2.F:
# Do any cars have better city mileage than highway mileage?
# +
mpg['cty_vs_hwy'] = mpg['city'] > mpg['highway']
mpg.sort_values(by=['cty_vs_hwy', 'city'], ascending=[False, False])
#no
# -
# drop the helper column (assign back, since drop returns a copy)
mpg = mpg.drop(columns=['cty_vs_hwy'])
# # 2.G:
# Create a column named `mileage_difference` this column should contain the difference between `highway` and `city` mileage for each car.
mpg_ac = mpg.assign(mileage_difference = (mpg['highway'] - mpg['city']))
mpg_ac.head(5)
# # 2.H:
# Which car (or cars) has the highest `mileage_difference`?
# +
mpg_ac.nlargest(1, 'mileage_difference', keep='all')
# -
# # 2.I:
# Which compact class car has the lowest `highway` mileage?
# +
mpg_ac[mpg_ac['class'] == 'compact'].nsmallest(1, 'highway', keep='all')
# -
# #### The best?
# `class` is a reserved word in Python, so we use
# bracket indexing, `mpg_ac['class']`, instead of
# attribute access (`mpg_ac.class` would be a syntax error)
mpg_ac[mpg_ac['class'] == 'compact'].nlargest(1, 'highway', keep='all')
# # 2.J:
# Create a column named `average_mileage` that is the mean of the city and highway mileage.
#
mpg_ac = mpg_ac.assign(average_mileage = (mpg_ac['city'] + mpg_ac['highway']) / 2)
mpg_ac
# # 2.K:
# Which dodge car has the **best** `average_mileage`? The **worst**?
mpg_ac[mpg_ac.manufacturer == 'dodge'].nlargest(1, 'average_mileage', keep='all')
#The worst?
mpg_ac[mpg_ac.manufacturer == 'dodge'].nsmallest(1, 'average_mileage', keep='all')
# # 3.
# Load the `Mammals` dataset. Read the documentation for it, and use the data to answer these questions:
Mammals = data('Mammals')
data('Mammals', show_doc=True)
# # 3.A:
# How many rows and columns are there?
# +
Mammals.shape
#107 Rows
#4 Columns
# -
# # 3.B:
# What are the data types?
# +
Mammals.dtypes
# -
# # 3.C:
# Summarize the dataframe with `.info` and `.describe`
# +
Mammals.info()
# -
Mammals.describe()
# +
import matplotlib.pyplot as plt
plt.hist(Mammals.weight, bins=20, color='orange', edgecolor='black')
plt.title('Weight of Mammals in kg')
plt.xlabel('kg')
plt.ylabel('Count')
plt.show()
# +
plt.hist(Mammals.speed, bins=15, color='dodgerblue', edgecolor='black')
plt.title('Full Sprint Speed of Mammals in km/h')
plt.xlabel('km/h')
plt.ylabel('Count')
plt.show()
# -
Mammals.head(3)
# # 3.D:
# What is the weight of the fastest animal?
# +
Mammals.nlargest(1, 'speed', keep='all')[['weight']]
# -
# # 3.E:
# What is the overall **percentage** of specials?
# +
Mammals_spec = Mammals.copy()
# -
Mammals_spec.drop(columns=['weight', 'speed', 'hoppers']).sort_values(by='specials', ascending=False)
# overall percentage
((Mammals_spec.drop(columns=['weight', 'speed', 'hoppers']).sum())/107
)*100
#change 107 hard code to variable: Mammals.shape[0]
((Mammals_spec.drop(columns=['weight', 'speed', 'hoppers']).sum())/Mammals.shape[0]
)*100
# +
#easier, non-column-dumping version:
(Mammals['specials'].sum() / Mammals.shape[0]) * 100
# -
round((Mammals['specials'].sum() / Mammals.shape[0]) * 100, 2)
# # 3.F:
# How many animals are hoppers that are above the median speed?
#
# What percentage is this?
# +
Mammals[(Mammals.speed > Mammals['speed'].median()) & (Mammals['hoppers'] == True)]
# -
Mammals['speed'].median()
#number of hoppers above the median speed
(Mammals[(Mammals.speed > Mammals['speed'].median()) & (Mammals['hoppers'] == True)]).count()['hoppers']
#percentage
(((Mammals[(Mammals.speed > Mammals['speed'].median()) & (Mammals['hoppers'] == True)])
.count()['hoppers'])/ 107)*100
#change 107 hard code to variable: Mammals.shape[0]
(((Mammals[(Mammals.speed > Mammals['speed'].median()) & (Mammals['hoppers'] == True)])
.count()['hoppers'])/ Mammals.shape[0])*100
Mammals.shape[0]
| dataframes.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
import scipy as sp
#import jax functions
import jax.numpy as jnp
from jax import jit
from jax.scipy.stats import norm
from jax.lax import fori_loop
from jax.ops import index_update  # note: jax.ops was removed in newer JAX; use arr.at[i].set(v) there
from tqdm.notebook import tqdm
# -
# # Bayesian Gaussian Demo (Example 1)
# +
@jit  # update posterior mean
def update(i, carry):
    theta_n, sigma2_n, n, theta_samp, z = carry
    y_new = z[i]*jnp.sqrt(sigma2_n) + theta_n
    theta_n = ((n+i+1)*theta_n + y_new)/(n+i+2)
    sigma2_n = 1 + 1/(n+i+2)
    theta_samp = index_update(theta_samp, i, theta_n)
    carry = theta_n, sigma2_n, n, theta_samp, z
    return carry
#run forward loop for predictive resampling
def samp_theta(theta_n, sigma2_n, n, T):
    z = np.random.randn(T)
    theta_samp = jnp.zeros(T)
    carry = theta_n, sigma2_n, n, theta_samp, z
    carry = fori_loop(0, T, update, carry)
    return carry[3]
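# For readers without JAX, the same forward recursion can be written in plain NumPy — a
# sketch mirroring `update`/`samp_theta` above (`samp_theta_np` is a name introduced here):

```python
import numpy as np

def samp_theta_np(theta_n, sigma2_n, n, T, rng=None):
    # predictive resampling: draw y_{n+i+1} from the current predictive,
    # then update the posterior mean and predictive variance
    rng = np.random.default_rng() if rng is None else rng
    theta_samp = np.zeros(T)
    for i in range(T):
        y_new = rng.standard_normal() * np.sqrt(sigma2_n) + theta_n
        theta_n = ((n + i + 1) * theta_n + y_new) / (n + i + 2)
        sigma2_n = 1 + 1 / (n + i + 2)
        theta_samp[i] = theta_n
    return theta_samp
```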
# +
np.random.seed(100)
n = 10
y = np.random.randn(n)+2
theta_n = jnp.sum(y)/(n+1)
sigma2_n = 1 + (1/(n+1))
print(theta_n)
B = 5000
T = 1000
theta_samp = np.zeros((B,T))
for j in tqdm(range(B)):
    theta_samp[j] = samp_theta(theta_n, sigma2_n, n, T)
# -
ylim = (0.7,2.9)
f = plt.figure(figsize = (12,4))
plt.subplot(1,2,1)
for j in range(100):
    plt.plot(theta_samp[100+j], color='k', alpha=0.25)
plt.xlabel('Forward step $i$ \n \n(a)',fontsize=12)
plt.ylabel(r'Posterior Mean $\bar{\theta}_{n+i}$',fontsize=12)
plt.ylim(ylim)
plt.xticks(fontsize=12)
plt.yticks(fontsize = 12)
plt.subplot(1,2,2)
sns.kdeplot(theta_samp[:,-1],vertical=True,color = 'k',alpha = 0.8,label = r'KDE of $\bar{\theta}_N$ samples')
theta_plot = jnp.arange(0.8,2.8,0.01)
plt.plot(sp.stats.norm.pdf(theta_plot,loc = theta_n,scale = jnp.sqrt(sigma2_n-1))\
,theta_plot,color = 'k',linestyle ='--',label = 'Posterior density', alpha = 0.8)
plt.ylim(ylim)
plt.xlabel('Density \n \n(b)',fontsize=12)
plt.xticks(fontsize=12)
plt.yticks(fontsize = 12)
#plt.legend()
f.savefig('plots/Normal_demo_bayes.pdf', bbox_inches='tight')
# # Copula update plot (Fig 2)
# +
#return the conditional cdf H_uv and copula density c_uv for the copula update
def cop_update(u, v, rho):
    pu = sp.stats.norm.ppf(u)
    pv = sp.stats.norm.ppf(v)
    z = (pu - rho*pv)/np.sqrt(1 - rho**2)
    cop_dist = sp.stats.norm.cdf(z)
    cop_dens = np.exp(-0.5*np.log(1 - rho**2) + (0.5/(1 - rho**2))*(-(rho**2)*(pu**2 + pv**2) + 2*rho*pu*pv))
    return cop_dist, cop_dens
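# Sanity check: the copula density should equal the partial derivative of the conditional
# CDF with respect to u, i.e. c(u, v) = ∂H(u|v)/∂u. A quick finite-difference check
# (`cop_update` is repeated here so the check is self-contained):

```python
import numpy as np
import scipy as sp
import scipy.stats

def cop_update(u, v, rho):
    # bivariate Gaussian copula: conditional CDF and density
    pu = sp.stats.norm.ppf(u)
    pv = sp.stats.norm.ppf(v)
    z = (pu - rho*pv)/np.sqrt(1 - rho**2)
    cop_dist = sp.stats.norm.cdf(z)
    cop_dens = np.exp(-0.5*np.log(1 - rho**2)
                      + (0.5/(1 - rho**2))*(-(rho**2)*(pu**2 + pv**2) + 2*rho*pu*pv))
    return cop_dist, cop_dens

u, v, rho, eps = 0.3, 0.7, 0.8, 1e-6
H_plus, _ = cop_update(u + eps, v, rho)
H_minus, _ = cop_update(u - eps, v, rho)
_, c_uv = cop_update(u, v, rho)
fd = (H_plus - H_minus) / (2 * eps)  # central difference in u
```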
# +
y_plot = np.arange(-4,6,0.01)
#Initialize cdf
P0 = sp.stats.norm.cdf(y_plot)
p0 = sp.stats.norm.pdf(y_plot)
#New data point
y1 = 2
v1 = P0[np.argmin(np.abs(y1-y_plot))]
# +
f =plt.figure(figsize=(14,4))
plt.subplot(1,3,1)
plt.scatter([y1],[0], s= 10, color = 'k',label = r'$y_{i+1}$')
rho_range = np.array([0.9,0.8,0.7])
j = 0.99
for rho in rho_range:
    H_uv, c_uv = cop_update(P0, v1, rho)
    plt.plot(y_plot, c_uv, color='k', alpha=j, label=r'$\rho = {}$'.format(rho))
    j = j - 0.33
plt.xticks(fontsize=12)
plt.yticks(fontsize = 12)
plt.xlabel('$y$ \n \n(a)',fontsize = 12)
plt.ylabel(r'Copula Density',fontsize = 12)
#plt.legend(loc = 2,fontsize = 12)
plt.subplot(1,3,2)
alpha = 0.5
#plt.plot(y_plot,p0)
plt.scatter([y1],[0], s= 10, color = 'k')
#plot first to get legend
j = 0.99
H_uv,c_uv = cop_update(P0,v1,rho_range[0])
plt.plot(y_plot,c_uv*p0, color = 'k', alpha = j,label = r'$c_\rho(u_i,v_i) p_{i}$')
j= j- 0.33
for rho in rho_range[1:]:
    H_uv, c_uv = cop_update(P0, v1, rho)
    plt.plot(y_plot, (c_uv)*p0, color='k', alpha=j)
    j = j - 0.33
plt.plot(y_plot,p0,label= r'$p_i$', color = 'k', linestyle = '--',alpha = 0.33)
plt.xticks(fontsize=12)
plt.yticks(fontsize = 12)
plt.xlabel('$y$\n \n(b)',fontsize = 12)
plt.ylabel(r'Density',fontsize = 12)
#plt.legend(loc = 2,fontsize = 12)
ylim = plt.ylim()
xlim = plt.xlim()
plt.subplot(1,3,3)
alpha = 0.5
#plt.plot(y_plot,p0)
plt.scatter([y1],[0], s= 10, color = 'k')
#plot first to get legend
j = 0.99
H_uv,c_uv = cop_update(P0,v1,rho_range[0])
plt.plot(y_plot,(1-alpha+alpha*c_uv)*p0, color = 'k', alpha = j,label = r'$p_{i+1}$')
j= j- 0.33
for rho in rho_range[1:]:
    H_uv, c_uv = cop_update(P0, v1, rho)
    plt.plot(y_plot, (1-alpha+alpha*c_uv)*p0, color='k', alpha=j)
    j = j - 0.33
plt.plot(y_plot,p0,label= r'$p_i$', color = 'k', linestyle = '--',alpha = 0.33)
plt.xticks(fontsize=12)
plt.yticks(fontsize = 12)
plt.xlabel('$y$ \n \n(c)',fontsize = 12)
plt.ylabel(r'Density',fontsize = 12)
#plt.legend(loc = 2,fontsize = 12)
plt.xlim(xlim)
plt.ylim(ylim)
f.savefig('plots/cop_illustration_3.pdf', bbox_inches='tight')
# +
f =plt.figure(figsize=(14,4))
plt.subplot(1,2,1)
alpha = 0.5
#plt.plot(y_plot,p0)
plt.scatter([y1],[0], s= 10, color = 'k')
#plot first to get legend
j = 0.99
H_uv,c_uv = cop_update(P0,v1,rho_range[0])
plt.plot(y_plot,c_uv*p0, color = 'k', alpha = j,label = r'$c_\rho(u_i,v_i) p_{i}$')
j= j- 0.33
for rho in rho_range[1:]:
    H_uv, c_uv = cop_update(P0, v1, rho)
    plt.plot(y_plot, (c_uv)*p0, color='k', alpha=j)
    j = j - 0.33
plt.plot(y_plot,p0,label= r'$p_i$', color = 'k', linestyle = '--',alpha = 0.33)
plt.xticks(fontsize=12)
plt.yticks(fontsize = 12)
plt.xlabel('$y$\n \n(a)',fontsize = 12)
plt.ylabel(r'Density',fontsize = 12)
#plt.legend(loc = 2,fontsize = 12)
ylim = plt.ylim()
xlim = plt.xlim()
plt.subplot(1,2,2)
alpha = 0.5
#plt.plot(y_plot,p0)
plt.scatter([y1],[0], s= 10, color = 'k')
#plot first to get legend
j = 0.99
H_uv,c_uv = cop_update(P0,v1,rho_range[0])
plt.plot(y_plot,(1-alpha+alpha*c_uv)*p0, color = 'k', alpha = j,label = r'$p_{i+1}$')
j= j- 0.33
for rho in rho_range[1:]:
    H_uv, c_uv = cop_update(P0, v1, rho)
    plt.plot(y_plot, (1-alpha+alpha*c_uv)*p0, color='k', alpha=j)
    j = j - 0.33
plt.plot(y_plot,p0,label= r'$p_i$', color = 'k', linestyle = '--',alpha = 0.33)
plt.xticks(fontsize=12)
plt.yticks(fontsize = 12)
plt.xlabel('$y$ \n \n(b)',fontsize = 12)
plt.ylabel(r'Density',fontsize = 12)
#plt.legend(loc = 2,fontsize = 12)
plt.xlim(xlim)
plt.ylim(ylim)
f.savefig('plots/cop_illustration_2.pdf', bbox_inches='tight')
| run_expts/Misc_Plots.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: DESI 21.7e
# language: python
# name: desi-21.7e
# ---
import numpy as np
import astropy.table as atable
cols = ['TARGETID', 'best_z', 'best_quality', 'best_spectype']
# +
aa = atable.Table.read('/global/cfs/cdirs/desi/sv/vi/TruthTables/Andes/BGS/Truth_table_Andes_reinspection_BGS_66003_20200315_v1.csv')
for x in ['z', 'quality', 'spectype']:
    aa.rename_column('best {}'.format(x), 'best_{}'.format(x))
aa = aa[cols]
# -
bb = atable.Table.read('/global/cfs/cdirs/desi/sv/vi/TruthTables/Blanc/BGS/desi-vi_BGS_tile80613_nightdeep_merged_all_210202.csv')[cols]
cc = atable.Table.read('/global/cfs/cdirs/desi/sv/vi/TruthTables/Cascades/BGS/desi-vi_SV_cascades_combination_BGS_all_210521.csv')[cols]
vi = atable.vstack([aa, bb, cc])
vi = vi[vi['best_quality'] >= 2.5]
vi.sort('best_quality')
vi
ids, cnts = np.unique(vi['TARGETID'], return_counts=True)
np.count_nonzero(cnts > 1)
vi = atable.unique(vi, keys='TARGETID', keep='last')
vi.rename_column('best_z', 'VI_Z')
vi.rename_column('best_quality', 'VI_Q')
vi.rename_column('best_spectype', 'VI_SPECTYPE')
vi
# # Done.
| doc/sv_paper/vi.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Lab 1: Free Response of a Second-Order System
#
# Type names here
#
# Monday 1:25pm section
#
# Table #
#
# ## Contents:
#
# 1. [Pre-lab Questions](#prelab)
# 1. [Lab Procedure](#lab)
# 1. [Post-lab Questions](#postlab)
# 1. [Submissions](#sub)
# ### Learning Objectives
#
# 1. Capture displacement data from an encoder and plot a time history
# 2. Apply peak finding algorithms to experimental data
# 3. Estimate natural frequency \\(\omega_n\\) and damping ratio \\(\zeta\\) from free response data
# <a id='prelab'></a>
#
# ### Pre-lab Questions
#
# This lab concerns a simple, yet representative, mechanical system: the mass-spring-damper system shown below. A free-body-diagram (FBD) of this system appears to the right.
#
# <img src="mass spring damper.png" width="600" />
#
# We will see that all second-order linear systems can be written equivalently as a mass-spring-damper system. This means that you can build a mass-spring-damper system that will exhibit the same dynamic behavior as any second-order system.
#
# The equation of motion (EOM) for this system derived using Newton's Second Law is as follows:
#
# \\(m \ddot{x} = \sum F_x = -b \dot{x} - kx + F\\)
#
# which can be rewritten as
#
# \\(m \ddot{x} + b \dot{x} + kx = F\\)
#
# If we divide this equation through by \\(m\\), then we arrive at the following *canonical form*:
#
# \\(\ddot{x} + 2 \zeta \omega_n \dot{x} + \omega_n^2 x = \dfrac{F}{m}\\)
#
# where \\(\zeta\\) is the [damping ratio](https://en.wikipedia.org/wiki/Damping_ratio) and \\(\omega_n\\) is the
# [natural frequency](https://en.wikipedia.org/wiki/Simple_harmonic_motion#Dynamics).
# In this lab, we will focus on the free response of this system. What this means is that the system will be given an initial condition (in this case, an initial deflection), and then released to oscillate freely. Hence, we can consider the external force to be \\(F = 0\\), and write
#
# \\(\ddot{x} + 2 \zeta \omega_n \dot{x} + \omega_n^2 x = 0\\)
#
# Solving this equation of motion (EOM) gives
#
# \\(x(t) = X e^{-\zeta \omega_n t} \cos(\omega_d t + \phi)\\)
#
# where \\(\omega_d = \omega_n \sqrt{1-\zeta^2}\\) is the damped natural frequency. The amplitude \\(X\\) and phase angle \\(\phi\\) are functions of the initial conditions.
#
# Let's plot this function in Python with some different values for the parameters:
import numpy as np
import matplotlib.pyplot as plt
zeta = 0.1 # damping ratio
w_n = 2 # natural frequency (rad/s)
w_d = w_n*np.sqrt(1-zeta**2) # damped natural frequency (rad/s)
X = 2
phi = 0
t = np.linspace(0,2*np.pi/w_d*10,501) # plot for 10 periods
x = X*np.exp(-zeta*w_n*t)*np.cos(w_d*t+phi)
plt.plot(t,x)
plt.grid(True);
# **Exercise:**
#
# Copy the previous code into the next cell and modify it to produce the following curve. Note the effect each parameter has on the response. Try to get as close to matching this plot as you reasonably can (doesn't have to be exact).
#
# <img src="ex free response.png" width="500" />
# Now, let's load a data file like the one you will be acquiring during the lab. Check out the file `test1.txt`. In this lab, we will only use the columns 'Time' and 'Encoder 1 Pos'. 'Time' is a list of time stamps when the data is acquired. The data in the rest of the columns is collected simultaneously at each time stamp. 'Encoder 1 Pos' is the output of the encoder that is used to measure the position of the cart.
#
# The following code loads the text file using [numpy.genfromtxt](https://docs.scipy.org/doc/numpy/reference/generated/numpy.genfromtxt.html), which is a generic function for reading in text files. Because it only handles rows with consistent formatting, we have to skip the header rows and the last one. We also have to ignore the `;` that appears at the end of each row.
# read data from text file, skipping the header rows and the last row, ignoring ';'
data = np.genfromtxt('test1.txt',comments=';',skip_header=3,skip_footer=1)
t = data[:,1] # time is column 1
x = data[:,3] # position is column 3 (recall column numbering starts at 0)
plt.plot(t,x);
plt.grid(True);
# For the last part of the pre-lab, we want to be able to identify the peaks (local maxima) of the response. The x- and y-coordinates of the peaks will be used to calculate the natural frequency \\(\omega_n\\) and the damping ratio \\(\zeta\\). These parameters, along with the initial conditions, fully describe the free response of a second-order linear system.
#
# To find the peaks, we will use the function [scipy.signal.find_peaks](https://docs.scipy.org/doc/scipy/reference/generated/scipy.signal.find_peaks.html). Check out the bottom of that webpage for some examples of usage.
#
# **Exercise:**
#
# Use the `find_peaks` function to identify and plot the peaks of your response. Try to replicate the following figure:
#
# <img src="ex free response w peaks.png" width="500" />
from scipy.signal import find_peaks
# <a id='lab'></a>
#
# ### Lab Procedure
#
# In this experiment, you will displace part of a mass-spring-damper system a certain distance, and then record the position of the cart over time after it is released. There will be 10 trials altogether, according to the following table:
#
#
# <table>
# <tr>
# <th style='text-align: center'>Trial</th>
# <th style='text-align: center'>Added Mass [kg]</th>
# <th style='text-align: center'>Initial Displacement [cm]</th>
# </tr>
# <tr>
# <td style='text-align: center'>1</td>
# <td style='text-align: center'>0.490</td>
# <td style='text-align: center'>1</td>
# </tr>
# <tr>
# <td style='text-align: center'>2</td>
# <td style='text-align: center'>0.490</td>
# <td style='text-align: center'>1.5</td>
# </tr> <tr>
# <td style='text-align: center'>3</td>
# <td style='text-align: center'>0.490</td>
# <td style='text-align: center'>2</td>
# </tr> <tr>
# <td style='text-align: center'>4</td>
# <td style='text-align: center'>0.490</td>
# <td style='text-align: center'>2.5</td>
# </tr> <tr>
# <td style='text-align: center'>5</td>
# <td style='text-align: center'>0.490</td>
# <td style='text-align: center'>3</td>
# </tr> <tr>
# <td style='text-align: center'>6</td>
# <td style='text-align: center'>0.983</td>
# <td style='text-align: center'>1</td>
# </tr> <tr>
# <td style='text-align: center'>7</td>
# <td style='text-align: center'>0.983</td>
# <td style='text-align: center'>1.5</td>
# </tr> <tr>
# <td style='text-align: center'>8</td>
# <td style='text-align: center'>0.983</td>
# <td style='text-align: center'>2</td>
# </tr> <tr>
# <td style='text-align: center'>9</td>
# <td style='text-align: center'>0.983</td>
# <td style='text-align: center'>2.5</td>
# </tr> <tr>
# <td style='text-align: center'>10</td>
# <td style='text-align: center'>0.983</td>
# <td style='text-align: center'>3</td>
# </tr>
# </table>
# Each trial will proceed as follows:
#
# 1. Open the ECP Executive software from the desktop icon. Go to *Command > Trajectory*, select *step*, then click *Setup*. Choose *Open Loop Step* and set *step size* = 0, *dwell time* = 3000, *number of reps* = 1. Click *OK* and close the window, then click *OK* again to close the next window.
#
# 2. Go to *Command > Execute* and choose *Normal Data Sampling*. On the physical setup, displace cart 1 the appropriate distance according to the trial #. Click *Run* and release the cart approximately 1 second later (to make sure you acquire the whole oscillation). Take care not to bump the limit switches, as doing so will abort the data recording.
#
# 3. Go to *Plotting > Setup Plot*. Choose *encoder 1 position* only in the left axis box, then click plot data. The plot will show the damped oscillations of the cart. Take a screenshot and save the image of the plot with an appropriate filename for potential inclusion in your post-lab analysis. The plot is intended to give you a qualitative feel for how the system responds and to compare with the data you read from the text file. The actual data processing will be done after you collect all the data.
#
# 4. Go to *Data > Export raw data*. Pick an appropriate file name, and export the data somewhere you can access it later (portable memory drive/USB stick, Google Drive, etc.). Close the ECP software after all trials are done.
# <a id='postlab'></a>
#
# ### Post-lab Questions
#
# Once you've gathered all the experimental data, the post-processing steps are directed towards estimating the parameters of the system, namely natural frequency \\(\omega_n\\) and damping ratio \\(\zeta\\). From the pre-lab, you can see that the response depends on these parameters; the natural frequency most clearly affects the frequency of oscillation, and the damping ratio most clearly affects the rate of decay of the amplitude.
#
# The following steps will guide you in analyzing your data and using this analysis to estimate \\(\omega_n\\) and \\(\zeta\\). You will then verify these estimates are correct by comparing your experimental responses to simulated ones.
# 1. Load your data file, determine the peaks, and plot the response with labeled peaks, as you did with the example data file in the pre-lab.
# 2. Estimate the period \\(\tau_d\\) of the oscillations. This calculation is more accurate if you average over several periods.
# 3. To estimate the damping ratio \\(\zeta\\), you will first estimate how quickly the oscillations decay.
#
# Let's look at the ratio of the first peak's amplitude \\(x_1\\) and the last peak's amplitude \\(x_n\\). Suppose that the first peak occurs at \\(t = t_1\\) and the last peak occurs at \\(t = t_n\\). Then
#
# \\(\dfrac{x_1}{x_n} = \dfrac{X e^{-\zeta \omega_n t_1} \cos(\omega_d t_1 + \phi)}{X e^{-\zeta \omega_n t_n} \cos(\omega_d t_n + \phi)} = \dfrac{e^{-\zeta \omega_n t_1} \cos(\omega_d t_1 + \phi)}{e^{-\zeta \omega_n (t_1 + (n-1) \tau_d)} \cos(\omega_d (t_1 + (n-1) \tau_d) + \phi)} = \dfrac{e^{-\zeta \omega_n t_1}}{e^{-\zeta \omega_n t_1} e^{-\zeta \omega_n (n-1) \tau_d}} = e^{\zeta \omega_n (n-1) \tau_d}\\)
#
# Let's define the [logarithmic decrement](https://en.wikipedia.org/wiki/Logarithmic_decrement) as
#
# \\(\delta = \dfrac{1}{n-1} \ln{\left(\dfrac{x_1}{x_n}\right)}\\)
#
# Then \\(\delta = \zeta \omega_n \tau_d = \zeta \omega_n \dfrac{2 \pi}{\omega_d} = \zeta \omega_n \dfrac{2 \pi}{\omega_n \sqrt{1-\zeta^2}} = \zeta \dfrac{2 \pi}{\sqrt{1-\zeta^2}}\\)
#
# Now, we can solve for \\(\zeta\\) in terms of \\(\delta\\), which can be measured from the peaks:
#
# \\(\zeta = \dfrac{1}{\sqrt{1 + \left(\frac{2 \pi}{\delta} \right)^2}}\\)
#
# Write a code to calculate \\(\zeta\\) from the peaks of your data.
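# One possible sketch of steps 2 and 3 (assuming `t` and `x` hold the time and position arrays loaded in step 1 — adapt the names to your own code):

```python
import numpy as np
from scipy.signal import find_peaks

def estimate_taud_zeta(t, x):
    """Estimate the damped period tau_d and damping ratio zeta from the response peaks."""
    peaks, _ = find_peaks(x)
    t_p, x_p = t[peaks], x[peaks]
    tau_d = np.diff(t_p).mean()                 # average spacing over several periods
    n = len(x_p)
    delta = np.log(x_p[0] / x_p[-1]) / (n - 1)  # logarithmic decrement
    zeta = 1.0 / np.sqrt(1.0 + (2.0*np.pi/delta)**2)
    return tau_d, zeta
```

# Averaging over all the peak-to-peak intervals makes the period estimate less sensitive to noise in any single interval.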
# 4. From your estimates of \\(\tau_d\\) and \\(\zeta\\), you can now estimate \\(\omega_n\\).
#
# Write the formula you use in this Markdown cell, and put the code for it in the next cell.
#
# \\(\omega_n = \dfrac{\omega_d}{\sqrt{1-\zeta^2}} = \dfrac{2 \pi}{\tau_d \sqrt{1-\zeta^2}}\\)
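# A minimal sketch of this formula (`tau_d` and `zeta` refer to the estimates from the previous steps; the names are illustrative):

```python
import numpy as np

def estimate_wn(tau_d, zeta):
    """Natural frequency from the damped period and the damping ratio."""
    return 2.0*np.pi / (tau_d*np.sqrt(1.0 - zeta**2))
```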
# 5. For each trial (data set), repeat steps 1-4. Gather your calculated values for \\(\zeta\\) and \\(\omega_n\\) into an array to use in the subsequent steps. You don't have to use a for loop; you can manually enter the values into the array.
#
# At the end of this step, you should have 10 values of \\(\zeta\\) and 10 values of \\(\omega_n\\).
# 6. Calculate the average \\(\zeta\\) and \\(\omega_n\\) for each Added Mass (average over trials 1-5 and then average over trials 6-10). You should see that the values for trials 1-5 are similar to each other, and the same for trials 6-10.
# 7. We next want to estimate the physical parameters mass \\(m\\), damping constant \\(b\\), and the spring constant \\(k\\). We will need more data than just \\(\zeta\\) and \\(\omega_n\\) however (2 equations and 3 unknowns!).
#
# The purpose of running trials with two different masses was to give us another equation to work with. The mass \\(m\\) consists of several components in addition to the actual added mass: the cart, armature, and other motor components all contribute to the system mass. We will call the combined mass of the cart, armature, motor, etc, \\(m_c\\), and express \\(m\\) as the sum of this \\(m_c\\) and the added mass (\\(m_w\\)) which we assume to know precisely as 0.490 kg or 0.983 kg:
#
# \\(m = m_c + m_w\\)
#
# From the definition of natural frequency, we have \\(\omega_n = \sqrt{\dfrac{k}{m}}\\)
#
# Thus, \\(\left( \dfrac{\overline{\omega_{n1}}}{\overline{\omega_{n2}}} \right)^2 = \dfrac{\dfrac{k}{m_c + m_{w1}}}{\dfrac{k}{m_c + m_{w2}}} = \dfrac{m_c + m_{w2}}{m_c + m_{w1}}\\)
#
# where \\(\overline{\omega_{n1}}\\) is the average for trials 1-5, and \\(\overline{\omega_{n2}}\\) is the average for trials 6-10.
#
# Use these equations to calculate \\(m\\) in the code cell below:
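# Solving the ratio equation for \\(m_c\\) gives \\(m_c = \dfrac{m_{w2} - r\, m_{w1}}{r - 1}\\) with \\(r = \left(\overline{\omega_{n1}}/\overline{\omega_{n2}}\right)^2\\). One possible sketch (variable names are illustrative):

```python
def estimate_masses(wn1, wn2, mw1=0.490, mw2=0.983):
    """Solve (wn1/wn2)**2 = (mc + mw2)/(mc + mw1) for the cart mass mc."""
    r = (wn1 / wn2)**2
    mc = (mw2 - r*mw1) / (r - 1.0)
    # total mass m = mc + mw for each added-mass configuration
    return mc, mc + mw1, mc + mw2
```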
# 8. We can now estimate the damping constant \\(b\\) from the formula
#
# \\(b = 2 m \overline{\zeta} \overline{\omega_n}\\)
#
# How do you account for the differences in the values for \\(b\\) between trials 1-5 and trials 6-10?
# 9. Finally, we can estimate the spring constant \\(k\\) from the formula
#
# \\(k = m \overline{\omega_n}^2\\)
#
# Again, how do you account for any differences?
# <a id='sub'></a>
#
# ### Submissions
#
# Please submit the following on Canvas (one submission per team):
#
# 1. Your completed Jupyter notebook (this file)
# 2. All data (.txt) files
# 3. All screen captures
#
# Please label your data files and screen captures in a logical manner so that they can be correlated.
| Lab1/.ipynb_checkpoints/Lab 1-checkpoint.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import pandas as pd
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.ensemble import RandomForestClassifier
from sklearn.preprocessing import FunctionTransformer
from sklearn.impute import SimpleImputer as Imputer  # Imputer was moved out of sklearn.preprocessing in modern scikit-learn
from sklearn.model_selection import cross_val_score
from sklearn.feature_extraction import DictVectorizer
# %run ../dstools/ml/transformers.py
# -
df = pd.read_csv('../datasets/titanic.csv')
df = df.replace(r'\s+', np.nan, regex=True)
features = df.drop(['survived', 'alive'], axis=1)
# +
df2dict = FunctionTransformer(
lambda x: x.to_dict(orient='records'), validate=False)
pl = make_pipeline(
df2dict,
DictVectorizer(sparse=False),
Imputer(strategy='most_frequent'),
RandomForestClassifier(n_estimators=50, max_depth=4)
)
cv = cross_val_score(pl, features, df.survived, cv=3, scoring='roc_auc')
cv.mean(), cv.std()
# +
pl = make_pipeline(
empirical_bayes_encoder(),
Imputer(strategy='most_frequent'),
RandomForestClassifier(n_estimators=50, max_depth=4)
)
cv = cross_val_score(pl, features, df.survived, cv=3, scoring='roc_auc')
cv.mean(), cv.std()
# -
| demo/transformers.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# A compact expression for the Legendre polynomials is given by ***Rodrigues'*** formula:
#
# ${\displaystyle P_{n}(x)={\frac {1}{2^{n}n!}}{\frac {d^{n}}{dx^{n}}}(x^{2}-1)^{n}\,.}$
#
# This formula enables derivation of a large number of properties of the ${\displaystyle P_{n}}$'s. Among these are explicit representations such as
#
# ${\displaystyle {\begin{aligned}P_{n}(x)&={\frac {1}{2^{n}}}\sum _{k=0}^{n}{\binom {n}{k}}^{2}(x-1)^{n-k}(x+1)^{k},\\P_{n}(x)&=\sum _{k=0}^{n}{\binom {n}{k}}{\binom {n+k}{k}}\left({\frac {x-1}{2}}\right)^{k},\\P_{n}(x)&={\frac {1}{2^{n}}}\sum _{k=0}^{[{\frac {n}{2}}]}(-1)^{k}{\binom {n}{k}}{\binom {2n-2k}{n}}x^{n-2k},\end{aligned}}}$
#
# The last formula is equivalent to the following:
#
# <img src='assets/1.png'>
#
# We will use the last one for python implementation
#
#
import numpy as np
def factorial(n):
"""
Function to return the factorial of a number
Parameters:
n (int) - the number which factorial we want to calculate
Returns:
int - factorial
"""
    if n < 0:
        print('Factorial does not exist for negative numbers')
        return None
    elif n <= 1:
        return 1
    else:
        return n*factorial(n-1)
def legendre_polynomials(n):
"""
    Function takes n as argument and implements the n-th Legendre polynomial.
    Parameters:
        n (non-negative int) - order of the Legendre polynomial
    Returns:
        list(coefficients of the Legendre polynomial)
    Print:
        Prints the obtained Legendre polynomial
"""
polynom = np.zeros([n+1])
for k in range(int(n/2)+1):
coef = (1/(2**n)) * ((-1)**k) * factorial(2*n - 2*k) / (factorial(k)*factorial(n-k)*factorial(n-2*k))
polynom[n-2*k] = coef
polynom_str = ''
for i in range(len(polynom)):
if polynom[i] != 0 and i == 0:
polynom_str += str(polynom[i])
elif polynom[i] != 0 and i == 1:
if polynom[i] > 0 and polynom_str != "":
polynom_str += (' +' + str(polynom[i]) + ' *X')
else:
polynom_str += (' ' + str(polynom[i]) + ' *X')
elif polynom[i] != 0 and i != 0:
if polynom[i] > 0 and polynom_str != "":
polynom_str += (' +' + str(polynom[i]) + ' *X^' + str(i))
else:
polynom_str += (' ' + str(polynom[i]) + ' *X^' + str(i))
print(polynom_str)
return polynom
legendre_polynomials(0)
legendre_polynomials(1)
legendre_polynomials(2)
legendre_polynomials(3)
legendre_polynomials(4)
legendre_polynomials(5)
legendre_polynomials(50)
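# As a sanity check, the coefficients printed above can be compared against NumPy's Legendre module, which converts a Legendre-series basis vector into ordinary power-basis coefficients (both use ascending powers of x):

```python
import numpy as np

def legendre_coeffs_numpy(n):
    """Power-basis coefficients of P_n via numpy's Legendre-to-polynomial conversion."""
    basis = np.zeros(n + 1)
    basis[n] = 1.0  # the Legendre series consisting of the single term P_n
    return np.polynomial.legendre.leg2poly(basis)
```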
| Homeworks/3_Vector_Spaces/2019.09.19 Legendre Polinomeals.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
from lets_plot import *
from lets_plot.geo_data import *
LetsPlot.setup_html()
# -
import lets_plot
lets_plot.__version__
city = geocode_cities("Boston")
city.get_centroids()
ggplot() + geom_tile(x=-71.08845, y=42.31104, tooltips=layer_tooltips().line("Some info"))
(ggplot() +
geom_tile(x=-71.08845, y=42.31104, tooltips=layer_tooltips().line("Some info")) +
geom_map(map=geocode_states("Massachusetts")))
(ggplot() +
geom_livemap() +
geom_tile(x=-71.08845, y=42.31104, tooltips=layer_tooltips().line("Some info")))
# ### issue: tooltip is not shown.
(ggplot() +
geom_livemap() +
geom_tile(x=-71.08845, y=42.31104, tooltips=layer_tooltips().line("Some info")) +
geom_map(map=geocode_states("Massachusetts")))
| docs/examples/jupyter-notebooks-dev/issues/map_tile_tooltip.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
# + block_hidden=true
# %load_ext rpy2.ipython
# %matplotlib inline
from fbprophet import Prophet
import pandas as pd
import numpy as np
# + block_hidden=true language="R"
# library(prophet)
# -
# There are two main ways that outliers can affect Prophet forecasts. Here we make a forecast on the logged Wikipedia visits to the R page from before, but with a block of bad data:
# + output_hidden=true magic_args="-w 10 -h 6 -u in" language="R"
# df <- read.csv('../examples/example_wp_R_outliers1.csv')
# df$y <- log(df$y)
# m <- prophet(df)
# future <- make_future_dataframe(m, periods = 1096)
# forecast <- predict(m, future)
# plot(m, forecast);
# -
df = pd.read_csv('../examples/example_wp_R_outliers1.csv')
df['y'] = np.log(df['y'])
m = Prophet()
m.fit(df)
future = m.make_future_dataframe(periods=1096)
forecast = m.predict(future)
m.plot(forecast);
# The trend forecast seems reasonable, but the uncertainty intervals seem way too wide. Prophet is able to handle the outliers in the history, but only by fitting them with trend changes. The uncertainty model then expects future trend changes of similar magnitude.
#
# The best way to handle outliers is to remove them - Prophet has no problem with missing data. If you set their values to `NA` in the history but leave the dates in `future`, then Prophet will give you a prediction for their values.
# + output_hidden=true magic_args="-w 10 -h 6 -u in" language="R"
# outliers <- (as.Date(df$ds) > as.Date('2010-01-01')
# & as.Date(df$ds) < as.Date('2011-01-01'))
# df$y[outliers] = NA
# m <- prophet(df)
# forecast <- predict(m, future)
# plot(m, forecast);
# -
df.loc[(df['ds'] > '2010-01-01') & (df['ds'] < '2011-01-01'), 'y'] = None
model = Prophet().fit(df)
model.plot(model.predict(future));
# In the above example the outliers messed up the uncertainty estimation but did not impact the main forecast `yhat`. This isn't always the case, as in this example with added outliers:
# + output_hidden=true magic_args="-w 10 -h 6 -u in" language="R"
# df <- read.csv('../examples/example_wp_R_outliers2.csv')
# df$y = log(df$y)
# m <- prophet(df)
# future <- make_future_dataframe(m, periods = 1096)
# forecast <- predict(m, future)
# plot(m, forecast);
# -
df = pd.read_csv('../examples/example_wp_R_outliers2.csv')
df['y'] = np.log(df['y'])
m = Prophet()
m.fit(df)
future = m.make_future_dataframe(periods=1096)
forecast = m.predict(future)
m.plot(forecast);
# Here a group of extreme outliers in June 2015 mess up the seasonality estimate, so their effect reverberates into the future forever. Again the right approach is to remove them:
# + output_hidden=true magic_args="-w 10 -h 6 -u in" language="R"
# outliers <- (as.Date(df$ds) > as.Date('2015-06-01')
# & as.Date(df$ds) < as.Date('2015-06-30'))
# df$y[outliers] = NA
# m <- prophet(df)
# forecast <- predict(m, future)
# plot(m, forecast);
# -
df.loc[(df['ds'] > '2015-06-01') & (df['ds'] < '2015-06-30'), 'y'] = None
m = Prophet().fit(df)
m.plot(m.predict(future));
| notebooks/outliers.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# %autosave 0
# %matplotlib notebook
import numpy as np
import matplotlib.pyplot as plt
from IPython.display import display
import ipywidgets as widgets
from matplotlib import animation
from functools import partial
slider_layout = widgets.Layout(width='600px', height='20px')
slider_style = {'description_width': 'initial'}
IntSlider_nice = partial(widgets.IntSlider, style=slider_style, layout=slider_layout, continuous_update=False)
FloatSlider_nice = partial(widgets.FloatSlider, style=slider_style, layout=slider_layout, continuous_update=False)
SelSlider_nice = partial(widgets.SelectionSlider, style=slider_style, layout=slider_layout, continuous_update=False)
# # Installing PyTorch
#
# The recommended way is to use the `conda` environment and package manager
#
# If you are not familiar with conda, please check this short INFO147 lesson: https://github.com/magister-informatica-uach/INFO147/blob/master/unidad1/02_ambientes_virtuales.ipynb
#
# Once you have created a `conda` environment, you must choose between installing pytorch with GPU support
#
#     conda install pytorch torchvision cudatoolkit=10.2 ignite -c pytorch
#
# or without GPU support
#
#     conda install pytorch torchvision cpuonly ignite -c pytorch
#
#
# # A brief tutorial on [PyTorch](https://pytorch.org)
#
# PyTorch is a high-level library for Python that provides
# 1. A tensor class for high-performance computing
# 1. A platform to create and train neural networks
#
# ### Torch Tensor
#
# The [`torch.Tensor`](https://pytorch.org/docs/stable/tensors.html) class is very similar in usage to the `ndarray` of [*NumPy*](https://numpy.org/)
#
# A tensor is an n-dimensional matrix or array with a fixed type that supports SIMD-style vectorized operations and broadcasting
#
#
# 
#
# The class documentation with all the operations it supports: https://pytorch.org/docs/stable/tensors.html
#
# Next we review the most fundamental ones
import torch
torch.__version__
# #### Creating tensors
#
# A tensor can be created using torch constructors or from existing data: a Python list or a NumPy *ndarray*
# A tensor of 10 zeros
display(torch.zeros(10))
# A tensor of 10 ones
display(torch.ones(10))
# A tensor of linearly spaced numbers
display(torch.linspace(0, 9, steps=10))
# A tensor of 10 random numbers with N(0, 1) distribution
display(torch.randn(10))
# A tensor created from a list
display(torch.Tensor([0, 1, 2, 3, 4, 5, 6]))
# A tensor created from an ndarray
numpy_array = np.random.randn(10)
display(torch.from_numpy(numpy_array))
# #### Important tensor attributes
#
# A tensor has a specific size (dimensions) and type
#
# A tensor can be stored in system memory ('cpu') or in device memory ('gpu')
a = torch.randn(10, 20, 30)
display(a.shape)
display(a.dtype)
display(a.device)
display(a.requires_grad)
# When creating a tensor, the type and device can be specified
a = torch.zeros(10, dtype=torch.int32, device='cuda')
display(a)
# #### Manipulating tensors
#
# We can manipulate the shape of a tensor using: reshape, flatten, roll, transpose, unsqueeze, among others
a = torch.linspace(0, 9, 10)
display(a)
display(a.reshape(2, 5))
display(a.reshape(2, 5).t())
display(a.reshape(2, 5).flatten())
display(a.roll(2))
display(a.unsqueeze(1))
# #### Computing with tensors
#
# A tensor supports arithmetic and logical operations
#
# If the tensor is in system memory, the operations are performed by the CPU
a = torch.linspace(0, 5, steps=6)
display(a)
# arithmetic and functions
display(a + 5)
display(2*a)
display(a.pow(2))
display(a.log())
# boolean masks
display(a[a>3])
# Operations with other tensors
b = torch.ones(6)
display(a + b)
display(a*b)
# broadcasting
display(a.unsqueeze(1)*b.unsqueeze(0))
# #### Computing on the GPU
#
# Using the `to` attribute we can move a tensor between the GPU ('device') and the CPU ('host')
#
# When all the tensors involved in an operation are in device memory, the computation is performed by the GPU
#
# The following note describes the options PyTorch offers for exchanging data between GPU and CPU: https://pytorch.org/docs/stable/notes/cuda.html
#
# ##### Brief note:
# A *Graphical Processing Unit* (GPU) or video card is hardware for computing over three-dimensional meshes, generating images (rendering) and other graphics tasks. Unlike the CPU, the GPU specializes in parallel computation and has thousands of cores (NVIDIA RTX 2080: 2944 cores)
a = torch.zeros(10)
display(a.device)
a = a.to('cuda')
display(a.device)
# ### Auto-differentiation
#
# Neural networks are trained using **gradient descent**
#
# > We need to compute the derivatives of the cost function with respect to all the network parameters
#
# This can be complex if our network is large and has different types of layers
#
# PyTorch ships with an automatic differentiation system called [`autograd`](https://pytorch.org/docs/stable/autograd.html)
#
# To differentiate a function in pytorch
#
# 1. Its input must be tensors with the attribute `requires_grad=True`
# 1. Then we call the function's `backward()` method
# 1. The result is stored in the `grad` attribute of the input (leaf node)
# +
x = torch.linspace(0, 10, steps=1000, requires_grad=True)
y = 5*x -20
#y = torch.sin(2.0*np.pi*x)
#y = torch.sin(2.0*np.pi*x)*torch.exp(-(x-5).pow(2)/3)
fig, ax = plt.subplots(figsize=(6, 3))
ax.plot(x.detach().numpy(), y.detach().numpy(), label='y')
y.backward(torch.ones_like(x))
ax.plot(x.detach().numpy(), x.grad.detach().numpy(), label='dy/dx')
plt.legend();
# -
# ### Computational graph
#
# When we chain operations, PyTorch internally builds a "computational graph"
#
# $$
# x \to z = f_1(x) \to y = f_2(z)
# $$
#
# The `backward` function computes the gradients and stores them in the leaf nodes that have `requires_grad=True`
#
# For example
#
# y.backward : stores dy/dx in x.grad
#
# z.backward : stores dz/dx in x.grad
#
# Basically, `backward` implements the chain rule for derivatives
#
# `backward` takes one input: the derivative of the upper stage of the chain. By default it uses `torch.ones([1])`, i.e. it assumes we are at the top level and that the output is scalar (one-dimensional)
# +
x = torch.linspace(0, 10, steps=1000, requires_grad=True) # Leaf node
display(x.grad_fn)
z = torch.sin(x)
display(z.grad_fn)
y = z.pow(2)
display(y.grad_fn)
fig, ax = plt.subplots(figsize=(6, 3))
ax.plot(x.detach().numpy(), z.detach().numpy(), label='z')
ax.plot(x.detach().numpy(), y.detach().numpy(), label='y')
# Derivative dy/dx
y.backward(torch.ones_like(x), create_graph=True)
ax.plot(x.detach().numpy(), x.grad.detach().numpy(), label='dy/dx')
# Clear the result stored in x.grad
x.grad = None
# Derivative dz/dx
z.backward(torch.ones_like(x))
ax.plot(x.detach().numpy(), x.grad.detach().numpy(), label='dz/dx')
plt.legend();
#ax.plot(x.detach().numpy(), 2*np.cos(x.detach().numpy())*np.sin(x.detach().numpy()), label='dy/dx')
# -
# # Artificial Neural Networks in PyTorch
#
# PyTorch gives us the tensor class and the autograd functionality
#
# These powerful tools give us everything we need to build and train artificial neural networks
#
# To make these tasks even easier, PyTorch provides high-level modules implementing
#
# 1. A base neural network model: `torch.nn.Module`
# 1. Different types of layers, activation functions and cost functions: [`torch.nn`](https://pytorch.org/docs/stable/nn.html)
# 1. Different optimization algorithms based on gradient descent: [`torch.optim`](https://pytorch.org/docs/stable/optim.html)
#
#
# A neural network in PyTorch is implemented by
# 1. Inheriting from [`torch.nn.Module`](https://pytorch.org/docs/stable/nn.html#module)
# 1. Defining the `__init__` and `forward` functions
#
# Another option is to use [`torch.nn.Sequential`](https://pytorch.org/docs/stable/nn.html#sequential) and specify a list of layers
#
#
# #### An MLP network in pytorch:
#
# We inherit from `Module` and define the constructor and the `forward` function
#
# We create a network with two inputs, one hidden layer and one output neuron
#
# The `torch.nn.Linear` layer with parameters $W$ and $b$ performs the following operation on the input $X$
#
# $$
# Z = WX + b
# $$
#
# which corresponds to the fully-connected layer
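# As a plain-NumPy sketch of what a `Linear` layer followed by a sigmoid computes (the weight and bias values here are arbitrary illustrative numbers; `torch.nn.Linear` stores $W$ with shape `(out_features, in_features)` and computes $XW^T + b$):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def linear_sigmoid(X, W, b):
    """Z = X W^T + b followed by an element-wise sigmoid, mirroring Linear + Sigmoid."""
    return sigmoid(X @ W.T + b)

X = np.array([[0.5, -1.0]])    # one sample with two features
W = np.array([[1.0, 2.0],      # hidden layer: 2 inputs -> 2 units
              [0.5, -0.5]])
b = np.array([0.0, 1.0])
print(linear_sigmoid(X, W, b))
```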
# +
import torch
class MLP(torch.nn.Module):
    # Constructor: creates the network layers
    def __init__(self, input_dim=2, hidden_dim=10, output_dim=1):
        # Mandatory: call the parent constructor:
        super(MLP, self).__init__()
        # Create two fully-connected layers
        self.hidden = torch.nn.Linear(input_dim, hidden_dim, bias=True)
        self.output = torch.nn.Linear(hidden_dim, output_dim, bias=True)
        # Sigmoid activation function
        self.activation = torch.nn.Sigmoid()
    # Forward: connects the input to the output
    def forward(self, x):
        # Pass x through the first layer and then apply the activation function
        z = self.activation(self.hidden(x))
        # Pass the result through the second layer and return it
        return self.activation(self.output(z))
# -
# When a `Linear` layer is created, it internally registers the parameters `weight` and `bias`, both with `requires_grad=True`
#
# Initially the parameters have random values
model = MLP(hidden_dim=2)
display(model.hidden.weight)
display(model.hidden.bias)
# We can evaluate the model on a tensor
# +
X = 10*torch.rand(10000, 2) - 5
Y = model.forward(X)
fig, ax = plt.subplots()
X_numpy = X.detach().numpy()
Y_numpy = Y.detach().numpy()
ax.scatter(X_numpy[:, 0], X_numpy[:, 1], s=10, c=Y_numpy[:, 0], cmap=plt.cm.RdBu_r);
# -
# ### Training
#
# To train the network we must define
#
# - A loss function
# - An optimization algorithm
# Binary cross-entropy loss function
criterion = torch.nn.BCELoss(reduction='sum')
# Stochastic Gradient Descent optimization algorithm
optimizer = torch.optim.SGD(model.parameters(), lr=1e-2)
# Say we have a sample $X$ with label $Y$
#
# We can compute the error of classifying that sample with
# +
X = torch.tensor([[-1.0, 1.0]])
Y = torch.tensor([[0.]])
hatY = model.forward(X)
display(hatY)
loss = criterion(hatY, Y)
display(loss)
# -
# Once we have computed the loss we can compute its gradient
loss.backward()
display(model.output.weight.grad)
display(model.output.bias.grad)
# Finally we update the parameters using the `step` function of our optimizer
# (remember to reset the gradients before the next backward pass)
display(model.output.weight)
display(model.output.bias)
optimizer.step()
display(model.output.weight)
display(model.output.bias)
# We repeat this process over several training "epochs"
for nepoch in range(10):
    # Compute the model output
    hatY = model.forward(X)
    # Reset the gradients from the previous iteration
    optimizer.zero_grad()
    # Compute the loss function
    loss = criterion(hatY, Y)
    # Compute its gradient
    loss.backward()
    # Update the parameters
    optimizer.step()
    print("%d w:%f %f b:%f" %(nepoch, model.output.weight[0, 0], model.output.weight[0, 1], model.output.bias))
# #### Training on a dataset
#
# Consider a training set with two-dimensional data and two classes such as the following
#
# > Note that it is not linearly separable
# +
import sklearn.datasets
data, labels = sklearn.datasets.make_circles(n_samples=1000, noise=0.2, factor=0.25)
#data, labels = sklearn.datasets.make_moons(n_samples=1000, noise=0.2)
#data, labels = sklearn.datasets.make_blobs(n_samples=[250]*4, n_features=2, cluster_std=0.5,
# centers=np.array([[-1, 1], [1, 1], [-1, -1], [1, -1]]))
#labels[labels==2] = 1; labels[labels==3] = 0;
fig, ax = plt.subplots(figsize=(8, 4))
for k, marker in enumerate(['x', 'o']):
    ax.scatter(data[labels==k, 0], data[labels==k, 1], s=20, marker=marker, alpha=0.75)
# For the plots
x_min, x_max = data[:, 0].min() - 0.5, data[:, 0].max() + 0.5
y_min, y_max = data[:, 1].min() - 0.5, data[:, 1].max() + 0.5
xx, yy = np.meshgrid(np.arange(x_min, x_max, 0.05), np.arange(y_min, y_max, 0.05))
# -
# 1. Before starting the training we convert the data to PyTorch tensor format
# 1. Then we feed the data to the neural network in *mini-batches* at every epoch of the training
#
# PyTorch provides the `Dataset` and `DataLoader` classes for these purposes
#
# These classes are part of the data module: https://pytorch.org/docs/stable/data.html
#
# In this case we will create a dataset from tensors using a class that inherits from `Dataset`
#
#     TensorDataset(*tensors)
#
# Then we will create training and validation sets using
#
#     Subset(dataset, indices)
#
# Finally we will create data loaders using
#
#     DataLoader(dataset, batch_size=1, shuffle=False, sampler=None,
#                batch_sampler=None, num_workers=0, collate_fn=None,
#                pin_memory=False, drop_last=False, timeout=0,
#                worker_init_fn=None)
#
#
# +
import sklearn.model_selection
# Split the dataset into training and validation
train_idx, valid_idx = next(sklearn.model_selection.ShuffleSplit(train_size=0.6).split(data, labels))
# Create training and validation sets
from torch.utils.data import DataLoader, TensorDataset, Subset
# Create a dataset in tensor format
torch_set = TensorDataset(torch.from_numpy(data.astype('float32')),
                          torch.from_numpy(labels.astype('float32')))
# Training data loader
torch_train_loader = DataLoader(Subset(torch_set, train_idx), shuffle=True, batch_size=32)
# Validation data loader
torch_valid_loader = DataLoader(Subset(torch_set, valid_idx), shuffle=False, batch_size=256)
# -
# `DataLoader` objects are used as Python iterators
for sample_data, sample_label in torch_train_loader:
    display(sample_data.shape)
    display(sample_label.shape)
    display(sample_label)
    break
# #### Recall
#
# For each training sample:
# - Compute the gradients: `loss.backward`
# - Update the parameters: `optimizer.step`
#
# For each validation sample:
# - Evaluate the *loss* to detect overfitting
#
# > One pass over all the data is called an epoch
#
# #### When do we stop?
#
# Ideally we stop training when the validation loss has not decreased for a certain number of epochs
#
# We can use [`save`](https://pytorch.org/tutorials/beginner/saving_loading_models.html) to keep saving the parameters of the best model on the validation set
#
# We use a fixed number of epochs as a safeguard: if the model has not converged we should increase it
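# This patience-based stopping rule can be sketched with a small helper class (a plain-Python illustration; the name `EarlyStopping` and the `patience` parameter are our own, and the "new best" branch is where `torch.save` would be called):

```python
class EarlyStopping:
    """Stop when the validation loss has not improved for `patience` epochs."""
    def __init__(self, patience=10):
        self.patience = patience
        self.best = float('inf')
        self.stale_epochs = 0

    def step(self, valid_loss):
        """Register this epoch's validation loss; return True when training should stop."""
        if valid_loss < self.best:
            self.best = valid_loss
            self.stale_epochs = 0  # new best: save a checkpoint here (e.g. torch.save)
        else:
            self.stale_epochs += 1
        return self.stale_epochs >= self.patience

# The validation loss improves twice and then stagnates
stopper = EarlyStopping(patience=3)
history = [1.0, 0.8, 0.9, 0.85, 0.9, 0.95]
stop_flags = [stopper.step(loss) for loss in history]
print(stop_flags)  # → [False, False, False, False, True, True]
```

# With `patience=3`, the first `True` appears as soon as three consecutive epochs have passed without a new best validation loss.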
#
# #### How does the number of neurons in the hidden layer affect the result?
# +
model = MLP(hidden_dim=3)
criterion = torch.nn.BCELoss(reduction='sum')
optimizer = torch.optim.Adam(model.parameters(), lr=1e-2)
n_epochs = 200
running_loss = np.zeros(shape=(n_epochs, 2))
best_valid = np.inf
# -
def train_one_epoch(k, model, criterion, optimizer):
    global best_valid
    train_loss, valid_loss = 0.0, 0.0
    # Training loop
    for sample_data, sample_label in torch_train_loader:
        output = model.forward(sample_data)
        optimizer.zero_grad()
        loss = criterion(output, sample_label.unsqueeze(1))
        train_loss += loss.item()
        loss.backward()
        optimizer.step()
    # Validation loop
    for sample_data, sample_label in torch_valid_loader:
        output = model.forward(sample_data)
        loss = criterion(output, sample_label.unsqueeze(1))
        valid_loss += loss.item()
    # Save the model if it is the best one so far
    if k % 10 == 0:
        if valid_loss < best_valid:
            best_valid = valid_loss
            torch.save({'epoca': k,
                        'model_state_dict': model.state_dict(),
                        'optimizer_state_dict': optimizer.state_dict(),
                        'loss': valid_loss}, '/home/phuijse/modelos/best_model.pt')
    return train_loss/len(torch_train_loader.dataset), valid_loss/len(torch_valid_loader.dataset)
# +
def update_plot(k):
    global model, running_loss
    [ax_.cla() for ax_ in ax]
    running_loss[k, 0], running_loss[k, 1] = train_one_epoch(k, model, criterion, optimizer)
    Z = model.forward(torch.from_numpy(np.c_[xx.ravel(), yy.ravel()].astype('float32')))
    Z = Z.detach().numpy().reshape(xx.shape)
    ax[0].contourf(xx, yy, Z, cmap=plt.cm.RdBu_r, alpha=1., vmin=0, vmax=1)
    for i, (marker, name) in enumerate(zip(['o', 'x'], ['Train', 'Valid'])):
        ax[0].scatter(data[labels==i, 0], data[labels==i, 1], color='k', s=10, marker=marker, alpha=0.5)
        ax[1].plot(np.arange(0, k+1, step=1), running_loss[:k+1, i], '-', label=name+" cost")
    plt.legend(); ax[1].grid()
fig, ax = plt.subplots(1, 2, figsize=(8, 3.5), tight_layout=True)
update_plot(0)
anim = animation.FuncAnimation(fig, update_plot, frames=n_epochs,
interval=10, repeat=False, blit=False)
# -
# Hidden layer neurons:
# +
fig, ax = plt.subplots(1, model.hidden.out_features, figsize=(8, 3), tight_layout=True)
Z = model.hidden(torch.from_numpy(np.c_[xx.ravel(), yy.ravel()].astype('float32'))).detach().numpy()
Z = 1/(1+np.exp(-Z))
for i in range(model.hidden.out_features):
    ax[i].contourf(xx, yy, Z[:, i].reshape(xx.shape),
                   cmap=plt.cm.RdBu_r, alpha=1., vmin=np.amin(Z), vmax=np.amax(Z))
# -
# #### Recovering the best model
# +
model = MLP(hidden_dim=3)
print("state_dict of the model:")
for param_tensor in model.state_dict():
    print(param_tensor, "\t", model.state_dict()[param_tensor].size())
    print(param_tensor, "\t", model.state_dict()[param_tensor])
model.load_state_dict(torch.load('/home/phuijse/modelos/best_model.pt')['model_state_dict'])
print(" ")
print("state_dict of the recovered model:")
for param_tensor in model.state_dict():
    print(param_tensor, "\t", model.state_dict()[param_tensor].size())
    print(param_tensor, "\t", model.state_dict()[param_tensor])
# -
# ## Diagnostics from learning curves
#
# We can diagnose the training by watching the evolution of the loss
#
# Always visualize the loss on both sets: training and validation
#
# Some examples
#
# #### Both curves decreasing
#
# - Train for more epochs
epochs = np.arange(1, 500)
loss_train = (epochs)**(-1/10) + 0.01*np.random.randn(len(epochs))
loss_valid = (epochs)**(-1/10) + 0.01*np.random.randn(len(epochs)) + 0.1
fig, ax = plt.subplots(figsize=(6, 3))
ax.plot(epochs, loss_train, lw=2, label='training')
ax.plot(epochs, loss_valid, lw=2, label='validation')
ax.set_ylim([0.5, 1.05])
plt.legend();
# #### Very little difference between training and validation error
#
# - Train for more epochs
# - Use a more complex model
epochs = np.arange(1, 500)
loss_train = (epochs)**(-1/10) + 0.01*np.random.randn(len(epochs))
loss_valid = (epochs)**(-1/10) + 0.01*np.random.randn(len(epochs)) + 0.01
fig, ax = plt.subplots(figsize=(6, 3))
ax.plot(epochs, loss_train, lw=2, label='training')
ax.plot(epochs, loss_valid, lw=2, label='validation')
ax.set_ylim([0.5, 1.05])
plt.legend();
# #### Early overfitting
#
# - Use a simpler model
# - Use more data (augmentation)
# - Use regularization (dropout, L2)
epochs = np.arange(1, 500)
loss_train = (epochs)**(-1/10) + 0.01*np.random.randn(len(epochs))
loss_valid = (epochs)**(-1/10) + 0.00001*(epochs)**2 +0.01*np.random.randn(len(epochs)) + 0.01
fig, ax = plt.subplots(figsize=(6, 3))
ax.plot(epochs, loss_train, lw=2, label='training')
ax.plot(epochs, loss_valid, lw=2, label='validation')
ax.set_ylim([0.5, 1.05])
plt.legend();
# #### Bug in the code or bad starting point
#
# - Check that your code has no bugs
#     - Loss function
#     - Optimizer
# - Bad initialization: restart the training
epochs = np.arange(1, 500)
loss_train = 1.0 + 0.01*np.random.randn(len(epochs))
loss_valid = 1.0 + 0.01*np.random.randn(len(epochs)) + 0.01
fig, ax = plt.subplots(figsize=(6, 3))
ax.plot(epochs, loss_train, lw=2, label='training')
ax.plot(epochs, loss_valid, lw=2, label='validation')
#ax.set_ylim([0.5, 1.05])
plt.legend();
| notebooks/1_pytorch_tutorial.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import numpy as np
from sklearn.model_selection import train_test_split
X, y = np.arange(10).reshape((5, 2)), range(5)
# -
X
list(y)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33, random_state=42)
X_train
y_train
| ExtraLearning/02_Train_Test_Split.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Clustering for SP detection
#
# We will likely need multiple clusterings, that is, multiple alternative clusterings
#
#
# +
import numpy as np
import pandas as pd
import seaborn as sns
import scipy.stats as stats
import scipy.special as scisp
import matplotlib.pyplot as plt
from sklearn.cluster import KMeans
from sklearn import mixture
import sklearn
import wiggum as wg
import fairsim
from sp_data_util import sp_plot,plot_clustermat
import itertools as itert
import string
# +
r_clusters = -.9 # correlation coefficient of clusters
cluster_spread = [.6,.8,.5] # pearson correlation of means
p_sp_clusters = .75 # portion of clusters with SP
k = [3, 2,5] # number of clusters
cluster_size = [7,1]
domain_range = [0, 20, 0, 20]
N = 200 # number of points
p_clusters = [[1.0/k_i]*k_i for k_i in k]
n_views = 3
many_sp_df_diff = spdata.geometric_indep_views_gmm_sp(n_views, r_clusters, cluster_size, cluster_spread, p_sp_clusters,
                                                      domain_range, k, N, p_clusters, numeric_categorical=True)
sp_design_list = [('x1','x2','A'),('x3','x4','B'), ('x5','x6','C')]
many_sp_df_diff.head()
# -
sp_plot(many_sp_df_diff,'x1','x2','A')
sp_plot(many_sp_df_diff,'x3','x4','B')
sp_plot(many_sp_df_diff,'x5','x6','C')
# We can represent the relationship between the categorical and continuous variables with a binary matrix that indicates which categorical variables represent known clusters in the continuous dimensions. For the above data this is known and specified a priori, at least mostly. Since they are drawn fully independently, it is possible that there is a high degree of mutual information between two or more categorical variables, in which case there would be some errors in the matrix below
z = [[1, 0, 0],[1, 0, 0],[0,1,0],[0,1,0],[0,0,1],[0,0,1]]
ax = plot_clustermat(z,'list')
plt.xlabel('categorical variables')
plt.gca().xaxis.set_label_position('top')
plt.xticks([0,1,2],['A','B','C'])
plt.ylabel('continuous variables')
plt.yticks(range(n_views*2),['x'+ str(i) for i in range(n_views*2)]);
# We can try clustering into the total number of clusters across all views and check what that recovers
kmeans = sklearn.cluster.KMeans(n_clusters=sum(k), random_state=0).fit(many_sp_df_diff.T.loc['x1':'x6'].T)
many_sp_df_diff['kmeans'] = kmeans.labels_
# If these clusters relate to the true clusters, each one of them would have a single value for each of the true clusters. For example, `kmeans = 0` might correspond to A=0, B=0, C=0. So we can look at the std of each of the categorical variables when we group the data by our found clusters, to see how well it works to just cluster across all dimensions
many_sp_df_diff.groupby('kmeans')['A','B','C'].std()
# Many are 0, which is good, but not all. We would also hope for high mutual information.
print(sklearn.metrics.mutual_info_score(many_sp_df_diff['kmeans'],many_sp_df_diff['A']))
print(sklearn.metrics.mutual_info_score(many_sp_df_diff['kmeans'],many_sp_df_diff['B']))
print(sklearn.metrics.mutual_info_score(many_sp_df_diff['kmeans'],many_sp_df_diff['C']))
# We can check one view at a time to confirm that clustering in a single view of the data recovers the known structure
dpgmm = mixture.BayesianGaussianMixture(n_components=8,
                                        covariance_type='full').fit(many_sp_df_diff.T.loc['x1':'x2'].T)
many_sp_df_diff['Apred'] = dpgmm.predict(many_sp_df_diff.T.loc['x1':'x2'].T)
dpgmm = mixture.BayesianGaussianMixture(n_components=8,
                                        covariance_type='full').fit(many_sp_df_diff.T.loc['x3':'x4'].T)
many_sp_df_diff['Bpred'] = dpgmm.predict(many_sp_df_diff.T.loc['x3':'x4'].T)
# Now we can look at mutual information as well.
# +
# many_sp_df_diff.apply
sklearn.metrics.mutual_info_score(many_sp_df_diff['A'],many_sp_df_diff['Apred'])
# -
sklearn.metrics.mutual_info_score(many_sp_df_diff['B'],many_sp_df_diff['Bpred'])
many_sp_df_diff.groupby('A')['Apred'].describe()
many_sp_df_diff.groupby('B')['Bpred'].describe()
x1,x2 = many_sp_df_diff.columns[:2]
x1
many_sp_df_diff[[x1,x2]].head()
rho = [.6, .8, .3]
n_rho = len(rho)
off_diag = np.tile(np.asarray([(1-rho_i)/(n_rho-1) for rho_i in rho]),[len(rho),1]).T
off_diag = off_diag - np.diag(np.diag(off_diag)) # zero out diag
p_mat = np.diag(rho) + off_diag
p_mat[1]
# +
z = many_sp_df_diff['A'].values
z_opts =[0,1,2]
rho = [.9, .8, .9]
n_rho = len(rho)
off_diag = np.tile(np.asarray([(1-rho_i)/(n_rho-1) for rho_i in rho]),[len(rho),1]).T
off_diag = off_diag - np.diag(np.diag(off_diag)) # zero out diag
p_mat = np.diag(rho) + off_diag
prob = {z_i:rho_i for z_i,rho_i in zip(set(z),p_mat)}
# sample using the generated probability matrix
zp = [np.random.choice(z_opts, p = prob[z_i]) for z_i in z]
# -
sklearn.metrics.normalized_mutual_info_score(many_sp_df_diff['A'],zp)
sum([z_i== zp_i for z_i,zp_i in zip(z,zp)])/len(zp)
# +
# iterate over pairs of variables among only the continuous vars - improve later for when some are provided
for x1, x2 in itert.combinations(many_sp_df_diff.columns[:6], 2):
    # run clustering
    dpgmm = mixture.BayesianGaussianMixture(n_components=20,
                                            covariance_type='full').fit(many_sp_df_diff[[x1, x2]])
    # check if clusters give good separation or nonsense
    # augment data with clusters
    many_sp_df_diff['clust_' + x1 + '_' + x2] = dpgmm.predict(many_sp_df_diff[[x1, x2]])
many_sp_df_diff.head()
# -
# check that it found the right answers,
many_sp_df_diff[['A','clust_x1_x2']].head()
print('expected unique: ',len(np.unique(many_sp_df_diff['A'])))
print('found unique: ',len(np.unique(many_sp_df_diff['clust_x1_x2'])))
sklearn.metrics.mutual_info_score(many_sp_df_diff['A'],many_sp_df_diff['clust_x1_x2'])
# +
print('expected unique: ',len(np.unique(many_sp_df_diff['B'])))
print('found unique: ',len(np.unique(many_sp_df_diff['clust_x3_x4'])))
sklearn.metrics.mutual_info_score(many_sp_df_diff['B'],many_sp_df_diff['clust_x3_x4'])
# plot with color for true and shape for found
# -
print('expected unique: ',len(np.unique(many_sp_df_diff['C'])))
print('found unique: ',len(np.unique(many_sp_df_diff['clust_x5_x6'])))
sklearn.metrics.mutual_info_score(many_sp_df_diff['C'],many_sp_df_diff['clust_x5_x6'])
| research_notebooks/clustering.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import numpy as np
import matplotlib.pyplot as plt
from scipy.integrate import solve_ivp
import pandas as pd
def rate(t, y):
    k1 = 0.1
    cB_in = 2
    if t <= 15:
        q = 0.1
    else:
        q = 0
    cA = y[0]
    cB = y[1]
    cC = y[2]
    V = y[3]
    dca = - (q / V) * cA - k1 * cA * cB
    dcb = (q / V) * (cB_in - cB) - k1 * cA * cB
    dcc = - (q / V) * cC + k1 * cA * cB
    dV = q
    return [dca, dcb, dcc, dV]
sol = solve_ivp(rate, [0, 60], [2, 0, 0, 1], method = 'BDF')
fig, ax = plt.subplots()
ax.plot(sol.t, sol.y[0,:])
ax.plot(sol.t, sol.y[1,:])
ax.plot(sol.t, sol.y[2,:])
fig, ax = plt.subplots()
ax.plot(sol.t, sol.y[3,:])
df = pd.DataFrame(sol.y.T)
df.columns = ['A', 'B', 'C', 'V']
df.index = sol.t
df
df2 = df.iloc[0:54:4]
df2
n = df2.shape[0]
# DataFrame.assign returns a new frame, so store the result to actually add the noise
df2 = df2.assign(A = df2['A'].values + np.random.normal(0, 0.1, n))
df2 = df2.assign(B = df2['B'].values + np.random.normal(0, 0.1, n))
df2 = df2.assign(C = df2['C'].values + np.random.normal(0, 0.1, n))
fig, ax = plt.subplots()
ax.scatter(df2.index, df2.A)
ax.scatter(df2.index, df2.B)
ax.scatter(df2.index, df2.C)
df2.to_csv('../my_data_sets/ABC_fedbatch.csv')
| kipet_examples_old/get_ABC_fedbatch_data.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import numpy as np
from sklearn.datasets import fetch_openml
mnist = fetch_openml('mnist_784')
X = mnist.data
y_odd = np.int32(np.int32(mnist.target) % 2 == 1)
y_odd[:5]
y_large = np.int32(np.int32(mnist.target) >=7)
y_large[:5]
y_odd_large = np.c_[y_odd, y_large]
from sklearn.neighbors import KNeighborsClassifier
knn = KNeighborsClassifier(n_neighbors = 10)
knn.fit(X, y_odd_large)
knn.predict(X[:5])
mnist.target[:5]
| Multi Label.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Pivoting tables
#
# Let's see how to transform tables from wide to long format and vice versa
import pandas as pd
air = pd.read_csv("dat/airquality.csv")
air.head()
# ## Melt: from wide to long
#
# To go from wide to long format, we can use [`melt`](https://pandas.pydata.org/pandas-docs/version/0.23/generated/pandas.melt.html)
air_long = air.melt(id_vars=['month', 'day'])
air_long.head()
# We see that, for each month and day, we now have two columns: the measured variable and its value.
#
# In the long format, each row has the index (in this case, month and day), a value, and labels for that value.
# ## Pivot: from long to wide
#
# To go from long to wide format, we can use [`pivot_table`](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.pivot_table.html)
air_wide = air_long.pivot_table(index=['month', 'day'], columns='variable', values='value')
air_wide.head()
# Hierarchical indexes are often inconvenient when working with the table. We can remove them with `reset_index()`
air_wide = air_wide.reset_index()
air_wide.head()
# #### Reference
#
# You can check the complete pandas pivoting guide [here](http://pandas.pydata.org/pandas-docs/stable/user_guide/reshaping.html)
| notebooks/10_pivotacion_tablas.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python [conda root]
# language: python
# name: conda-root-py
# ---
# + deletable=true editable=true
# %matplotlib inline
from gensim.models import KeyedVectors
import estnltk
import pandas as pd
import numpy as np
from tqdm import tqdm
import itertools
from sklearn.metrics.pairwise import cosine_similarity, cosine_distances, linear_kernel, euclidean_distances
import operator
from joblib import Parallel, delayed
from sklearn.feature_extraction.text import TfidfVectorizer
import numba
import glob
# import pyemd
import seaborn as sns
import matplotlib.pyplot as plt
# + deletable=true editable=true
model = KeyedVectors.load_word2vec_format('word2vec-models/lemmas.cbow.s100.w2v.bin', binary=True)
# -
model
# + deletable=true editable=true
symmetric = False
window = 3
apple_contexts = open('../datasets/apple_contexts_s_{}_w_{}.txt'.format(symmetric, window)).read().splitlines()
rock_contexts = open('../datasets/rock_contexts_s_{}_w_{}.txt'.format(symmetric, window)).read().splitlines()
pear_contexts = open('../datasets/pear_contexts_s_{}_w_{}.txt'.format(symmetric, window)).read().splitlines()
contexts = apple_contexts + rock_contexts + pear_contexts
labels = [0]*len(apple_contexts) + [1]*len(rock_contexts) + [2]*len(pear_contexts)
# n = len(contexts)
s1, s2 = contexts[0].split(), contexts[2].split()
if len(s2) < len(s1):
    s1, s2 = s2, s1
m = len(s1)
n = len(s2)
sum_init=np.inf
comparison=operator.lt
metric=cosine_distances
# n = len(contexts)
# + deletable=true editable=true
s1, s2
# + deletable=true editable=true
similarity_matrix = metric(model[s1], model[s2])
# + deletable=true editable=true
# # %%timeit
# best_sum = sum_init
# for perm in itertools.permutations(list(range(n)), m):
# perm_sum = 0
# for i, j in enumerate(perm):
# perm_sum += similarity_matrix[i,j]
# if comparison(perm_sum, best_sum):
# best_sum = perm_sum
# asd(similarity_matrix, m, n, sum_init)
# + deletable=true editable=true
# @numba.jit(nopython=True)
# def asd(similarity_matrix, m, n, sum_init):
# best_sum = sum_init
# for perm in itertools.permutations(list(range(n)), m):
# perm_sum = 0
# for i, j in enumerate(perm):
# perm_sum += similarity_matrix[i,j]
# if comparison(perm_sum, best_sum):
# best_sum = perm_sum
# return best_sum
def alignement_cos_metric_contexts(s1, s2, sum_init, comparison, metric):
    if type(s1) == str:
        s1 = s1.split()
        s2 = s2.split()
    if len(s2) < len(s1):
        s1, s2 = s2, s1
    m = len(s1)
    n = len(s2)
    # similarity_matrix = np.empty((m,n))
    # similarity_matrix[:] = np.NAN
    # for i, w1 in enumerate(s1):
    #     for j, w2 in enumerate(s2):
    #         similarity_matrix[i,j] = metric([model[w1]], [model[w2]])[0]
    similarity_matrix = metric(model[s1], model[s2])
    best_sum = sum_init
    for perm in itertools.permutations(list(range(n)), m):
        perm_sum = 0
        for i, j in enumerate(perm):
            perm_sum += similarity_matrix[i,j]
        if comparison(perm_sum, best_sum):
            best_sum = perm_sum
    return best_sum/m

def alignement_cos_sim_contexts(s1, s2):
    return alignement_cos_metric_contexts(s1, s2,
                                          sum_init=-np.inf,
                                          comparison=operator.gt,
                                          metric=cosine_similarity)

def alignement_cos_dist_contexts(s1, s2):
    return alignement_cos_metric_contexts(s1, s2,
                                          sum_init=np.inf,
                                          comparison=operator.lt,
                                          metric=cosine_distances)

def alignement_matrix_row_dist(s1, s1_index, contexts, row_length):
    row = np.zeros(row_length)
    for j in range(s1_index+1):
        s2 = contexts[j]
        row[j] = alignement_cos_dist_contexts(s1, s2)
    return row

def alignement_matrix_row_sim(s1, s1_index, contexts, row_length):
    row = np.zeros(row_length)
    for j in range(s1_index+1):
        s2 = contexts[j]
        row[j] = alignement_cos_sim_contexts(s1, s2)
    return row
# + deletable=true editable=true
symmetric = False
window = 3
print(window, symmetric)
apple_contexts = open('../datasets/apple_contexts_s_{}_w_{}.txt'.format(symmetric, window)).read().splitlines()
rock_contexts = open('../datasets/rock_contexts_s_{}_w_{}.txt'.format(symmetric, window)).read().splitlines()
pear_contexts = open('../datasets/pear_contexts_s_{}_w_{}.txt'.format(symmetric, window)).read().splitlines()
contexts = apple_contexts + rock_contexts + pear_contexts
labels = [0]*len(apple_contexts) + [1]*len(rock_contexts) + [2]*len(pear_contexts)
n = len(contexts)
# distance_matrix_rows = Parallel(n_jobs=1)(delayed(alignement_matrix_row_sim)(s1, s1_index, contexts, n)
# for s1_index, s1 in tqdm(enumerate(contexts)))
# distance_matrix_partial = np.array(distance_matrix_rows)
# distance_matrix = distance_matrix_partial + distance_matrix_partial.T
# if func.__name__ == alignement_matrix_row_sim.__name__:
# np.fill_diagonal(distance_matrix, 1)
# if func.__name__ == alignement_matrix_row_dist.__name__.:
# np.fill_diagonal(distance_matrix, 0)
# + deletable=true editable=true
len(contexts)
# + deletable=true editable=true
asd = distance_matrix2 + distance_matrix2.T
np.fill_diagonal(asd, 1)
np.allclose(distance_matrix1, asd)
for func, name in [(alignement_matrix_row_dist, 'dist'), (alignement_matrix_row_sim, 'sim')]:
    print(1)
# + deletable=true editable=true
# started 6:40
for window in [3,2,4]:
    for symmetric in [True, False]:
        for func, name in [(alignement_matrix_row_dist, 'dist'), (alignement_matrix_row_sim, 'sim')]:
            # symmetric = True
            # window = 4
            print(window, symmetric, func, name)
            apple_contexts = open('../datasets/apple_contexts_s_{}_w_{}.txt'.format(symmetric, window)).read().splitlines()
            rock_contexts = open('../datasets/rock_contexts_s_{}_w_{}.txt'.format(symmetric, window)).read().splitlines()
            pear_contexts = open('../datasets/pear_contexts_s_{}_w_{}.txt'.format(symmetric, window)).read().splitlines()
            contexts = apple_contexts + rock_contexts + pear_contexts
            labels = [0]*len(apple_contexts) + [1]*len(rock_contexts) + [2]*len(pear_contexts)
            n = len(contexts)
            distance_matrix_rows = Parallel(n_jobs=20)(delayed(func)(s1, s1_index, contexts, n)
                                                       for s1_index, s1 in enumerate(contexts))
            distance_matrix_partial = np.array(distance_matrix_rows)
            distance_matrix = distance_matrix_partial + distance_matrix_partial.T
            if func.__name__ == alignement_matrix_row_sim.__name__:
                print('sim diag')
                np.fill_diagonal(distance_matrix, 1)
            if func.__name__ == alignement_matrix_row_dist.__name__:
                print('dist diag')
                np.fill_diagonal(distance_matrix, 0)
            filename = '../datasets/apple-rock-pear/alignement_{}_w_{}_s_{}.npy'.format(name, window, symmetric)
            np.save(filename, distance_matrix)
# + deletable=true editable=true
# check max_distances
# + deletable=true editable=true
for dist_file in glob.glob('../datasets/apple-rock-pear/*dist*'):
    dist = np.load(dist_file)
    print(dist_file)
    if dist[dist>1].any():
        print('ABOVE 1', dist_file)
# + deletable=true editable=true
filename = '../datasets/apple-rock-pear/alignement_dist_w_3_s_False.npy'
dist = np.load(filename)
dist[dist>1].shape
dist.shape[0]*dist.shape[0]
3091738/139263601
# + deletable=true editable=true
filename = '../datasets/apple-rock-pear/alignement_sim_w_3_s_False.npy'
sim = np.load(filename)
sim[sim>1].shape
# + deletable=true editable=true
for sim_filename in glob.glob('../datasets/apple-rock-pear/alignement_sim_*'):
    print(sim_filename, sim_filename.replace('sim', 'dist'))
    dist_filename = sim_filename.replace('sim', 'dist')
    sim = np.load(sim_filename)
    dist = 1-sim
    np.save(dist_filename, dist)
# + deletable=true editable=true
sim = np.load(sim_filename)
dist = np.load(sim_filename.replace('sim', 'dist').replace('/alig', '/backup/alig'))
# + deletable=true editable=true
sim[sim<0].shape
# + deletable=true editable=true
979638/(sim.shape[0]**2)
# + deletable=true editable=true
filename = '../datasets/apple-rock-pear/alignement_sim_w_2_s_True.npy'
sim2 = np.load(filename)
filename = '../datasets/apple-rock-pear/alignement_dist_w_2_s_True.npy'
dist2 = np.load(filename)
# + deletable=true editable=true
np.allclose(sim2, 1-dist2)
# + deletable=true editable=true
sim2.shape, dist2.shape
# + deletable=true editable=true
for window in [3,2]:
    for symmetric in [True, False]:
        filename = '../datasets/apple-rock-pear/alignement_sim_w_{}_s_{}.npy'.format(window, symmetric)
        sim2 = np.load(filename)
        print(filename, sim2.shape)
        filename = '../datasets/apple-rock-pear/alignement_dist_w_{}_s_{}.npy'.format(window, symmetric)
        dist2 = np.load(filename)
        print(filename, dist2.shape)
        print(np.allclose(sim2, 1-dist2))
# + deletable=true editable=true
sim2[sim2<0].shape[0]/sim2.shape[0]**2
# + deletable=true editable=true
for file in glob.glob('../datasets/apple-rock-pear/*'):
    print(file)
    arr = np.load(file)
    min_f = arr[arr<0].shape[0]/arr.shape[0]**2
    max_f = arr[arr>1].shape[0]/arr.shape[0]**2
    print('max {} ({}), min {} ({})'.format(np.max(arr), max_f, np.min(arr), min_f))
# + deletable=true editable=true
for file in glob.glob('../datasets/apple-rock-pear/alig*sim*') + glob.glob('../datasets/apple-rock-pear/cos*sim*'):
    print(file)
    sim = np.load(file)
    sim[sim<0] = 0
    np.save(file, sim)
# -
s1 = 'kes see on'
s2 = 'mis see on'
model.wmdistance(s1, s2)
alignement_cos_sim_contexts(s1, s2), alignement_cos_dist_contexts(s1, s2)
alignement_cos_metric_contexts(s1, s2,
                               sum_init=np.inf,
                               comparison=operator.lt,
                               metric=euclidean_distances)
contexts1 = open('../datasets/tee_sõidu_contexts_s_True_w_3.txt').read().splitlines()
contexts2 = open('../datasets/tee_jook_contexts_s_True_w_3.txt').read().splitlines()
contexts = contexts1 + contexts2
custom_wmd = []
gensim_wmd = []
c1 = contexts[0]
for c in tqdm(contexts):
    # custom_wmd.append(alignement_cos_metric_contexts(c1, c,
    #                                                  sum_init=np.inf,
    #                                                  comparison=operator.lt,
    #                                                  metric=euclidean_distances))
    # gensim_wmd.append(model.wmdistance(c1.split(), c.split()))
    print(alignement_cos_metric_contexts(c1, c,
                                         sum_init=np.inf,
                                         comparison=operator.lt,
                                         metric=euclidean_distances), model.wmdistance(c1.split(), c.split()))
plt.hist(custom_wmd, bins=200, alpha=0.2)
plt.hist(gensim_wmd, bins=200, alpha=0.2)
plt.show()
plt.scatter(custom_wmd, gensim_wmd, alpha=0.2)
plt.xlim([15,30])
plt.ylim([15,30])
index = np.hstack((np.where(gensim_wmd<12.5)[0], np.where(gensim_wmd>1)[0]))
index
gensim_wmd.shape
for c1 in tqdm(contexts):
    for c2 in contexts:
        alignement_cos_metric_contexts(c1.split(), c2.split(),
                                       sum_init=np.inf,
                                       comparison=operator.lt,
                                       metric=euclidean_distances)
# %time
for c in contexts:
    # print(c)
    alignement_cos_metric_contexts(c1, c,
                                   sum_init=np.inf,
                                   comparison=operator.lt,
                                   metric=euclidean_distances)
p1 = " ".join(contexts[:100])
p2 = " ".join(contexts[100:200])
alignement_cos_metric_contexts(p1, p2,
                               sum_init=np.inf,
                               comparison=operator.lt,
                               metric=euclidean_distances)
model.wmdistance(p1.split(), p2.split())
for c1 in tqdm(contexts):
    for c2 in contexts:
        model.wmdistance(c1.split(), c2.split())
euclidean_distances(model['mis'], model['see'])
gensim_wmd[index].shape
gensim_wmd
| generate_aligned_dist.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .r
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Spark | R
# name: sparkrkernel
# ---
# + [markdown] azdata_cell_guid="f4f5a3ce-b06b-4bcd-be5d-ae7e2fcd6108"
# # Using the MSSQL Spark Connector from SparkR
#
# The following notebook demonstrates usage of the MSSQL Spark connector in SparkR. For the full set of capabilities supported by the MSSQL Spark Connector, refer to **mssql_spark_connector_non_ad_pyspark.ipynb**
# + [markdown] azdata_cell_guid="820bc224-56e1-4867-9df8-ddd9c3aa3a93"
# ## PreReq
# -------
# - Download [AdultCensusIncome.csv]( https://amldockerdatasets.azureedge.net/AdultCensusIncome.csv ) to your local machine. Upload this file to hdfs folder named *spark_data*.
# - The sample uses a SQL database *connector_test_db*, user *connector_user* with password *<PASSWORD>!#* and datasource *connector_ds*. The database, user/password and datasource need to be created before running the full sample. Refer to **data-virtualization/mssql_spark_connector_user_creation.ipynb** for steps to create this user.
# + [markdown] azdata_cell_guid="29e3c7f0-b2d0-4456-949d-b80dc6857486"
# ## Load a CSV file to a dataframe
# + azdata_cell_guid="6be47cc0-0a41-4fbe-b43d-bf363e3c4c0a"
people <- read.df("/spark_data/AdultCensusIncome.csv", "csv")
head(people)
# + [markdown] azdata_cell_guid="f8304a0e-e3ef-4666-8f15-c1abddac9b25"
# ## Use MSSQL Spark connector to save the dataframe as a table in SQL Server
# + azdata_cell_guid="7216f3a5-0439-44a8-a726-f393db2d612b"
#Using MSSQLSpark connector
dbname = "connector_test_db"
url = paste("jdbc:sqlserver://master-0.master-svc;databaseName=", dbname, sep="")
print(url)
dbtable = "AdultCensus_test_sparkr"
user = "connector_user"
password = "<PASSWORD>!#" # Please specify password here
saveDF(people,
dbtable,
source = "com.microsoft.sqlserver.jdbc.spark",
url = url,
dbtable=dbtable,
mode = "overwrite",
user=user,
password=password)
| bdc-samples/data-virtualization/mssql_spark_connector_sparkr.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import pandas as pd
from datetime import datetime, date
from perspective import PerspectiveWidget
# +
# schema
df = pd.DataFrame([
{
'int': 1,
'float': 1.5,
'string': '20150505',
'date': date.today(),
'datetime': datetime.now(),
'object': datetime,
},
{
'int': 1,
'float': 1.5,
'string': '20150506',
'date': None,
'datetime': None,
'object': None,
},
])
custom_schema = {'index': 'int', 'int': 'int', 'float': 'float', 'string': 'date', 'date': 'date', 'datetime': 'date', 'object': 'string'}
# -
df
psp = PerspectiveWidget(df)
psp.schema
psp2 = PerspectiveWidget(df, schema=custom_schema)
psp2.schema == custom_schema
psp2.schema
custom_schema
| python/perspective/examples/pandas.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## Lesson: Toy Differential Privacy - Simple Database Queries
# In this section we're going to play around with Differential Privacy in the context of a database query. The database is going to be a VERY simple database with only one boolean column. Each row corresponds to a person. Each value corresponds to whether or not that person has a certain private attribute (such as whether they have a certain disease, or whether they are above/below a certain age). We are then going to learn how to determine whether a database query over such a small database is differentially private or not - and more importantly - what techniques are at our disposal to ensure various levels of privacy.
#
#
# ### First We Create a Simple Database
#
# Step one is to create our database - we're going to do this by initializing a random list of 1s and 0s (which are the entries in our database). Note - the number of entries directly corresponds to the number of people in our database.
# +
import torch
# the number of entries in our database
num_entries = 5000
db = torch.rand(num_entries) > 0.5
db
# -
# ## Project: Generate Parallel Databases
#
# Key to the definition of differential privacy is the ability to ask the question "When querying a database, if I removed someone from the database, would the output of the query be any different?". Thus, in order to check this, we must construct what we term "parallel databases" which are simply databases with one entry removed.
#
# In this first project, I want you to create a list of every parallel database to the one currently contained in the "db" variable. Then, I want you to create a function which both:
#
# - creates the initial database (db)
# - creates all parallel databases
# +
# try project here!
# -
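A possible solution sketch for this project (the helper names are my own; `create_db_and_parallels` matches how later cells in this notebook call it):

```python
import torch

def get_parallel_db(db, remove_index):
    # database with the entry at remove_index taken out
    return torch.cat((db[:remove_index], db[remove_index + 1:]))

def get_parallel_dbs(db):
    # one parallel database per person in db
    return [get_parallel_db(db, i) for i in range(len(db))]

def create_db_and_parallels(num_entries):
    # random boolean database plus every one-entry-removed copy
    db = torch.rand(num_entries) > 0.5
    return db, get_parallel_dbs(db)

db, pdbs = create_db_and_parallels(100)
```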
# # Lesson: Towards Evaluating The Differential Privacy of a Function
#
# Intuitively, we want to be able to query our database and evaluate whether or not the result of the query is leaking "private" information. As mentioned previously, this is about evaluating whether the output of a query changes when we remove someone from the database. Specifically, we want to evaluate the *maximum* amount the query changes when someone is removed (maximum over all possible people who could be removed). So, in order to evaluate how much privacy is leaked, we're going to iterate over each person in the database and measure the difference in the output of the query relative to when we query the entire database.
#
# Just for the sake of argument, let's make our first "database query" a simple sum. Aka, we're going to count the number of 1s in the database.
db, pdbs = create_db_and_parallels(5000)
def query(db):
return db.sum()
full_db_result = query(db)
sensitivity = 0
for pdb in pdbs:
pdb_result = query(pdb)
db_distance = torch.abs(pdb_result - full_db_result)
if(db_distance > sensitivity):
sensitivity = db_distance
sensitivity
# # Project - Evaluating the Privacy of a Function
#
# In the last section, we measured the difference between each parallel db's query result and the query result for the entire database and then calculated the max value (which was 1). This value is called "sensitivity", and it corresponds to the function we chose for the query. Namely, the "sum" query will always have a sensitivity of exactly 1. However, we can also calculate sensitivity for other functions as well.
#
# Let's try to calculate sensitivity for the "mean" function.
# +
# try this project here!
# -
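One way this project can be sketched (the database is built inline here so the cell is self-contained; with 1,000 boolean entries the sensitivity comes out near 1/1000 rather than 1):

```python
import torch

torch.manual_seed(0)
db = (torch.rand(1000) > 0.5).float()

def mean_query(db):
    return db.mean()

full_result = mean_query(db)
sensitivity = 0
for i in range(len(db)):
    pdb = torch.cat((db[:i], db[i + 1:]))  # parallel db with entry i removed
    dist = torch.abs(mean_query(pdb) - full_result)
    if dist > sensitivity:
        sensitivity = dist
print(float(sensitivity))
```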
# Wow! That sensitivity is WAY lower. Note the intuition here. "Sensitivity" is measuring how sensitive the output of the query is to a person being removed from the database. For a simple sum, this is always 1, but for the mean, removing a person is going to change the result of the query by roughly 1 divided by the size of the database (which is much smaller). Thus, "mean" is a VASTLY less "sensitive" function (query) than SUM.
# # Project: Calculate L1 Sensitivity For Threshold
#
# In this first project, I want you to calculate the sensitivity for the "threshold" function.
#
# - First compute the sum over the database (i.e. sum(db)) and return whether that sum is greater than a certain threshold.
# - Then, I want you to create databases of size 10 and threshold of 5 and calculate the sensitivity of the function.
# - Finally, re-initialize the database 10 times and calculate the sensitivity each time.
# +
# try this project here!
# -
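A sketch of the threshold-sensitivity experiment (10 re-initializations of a size-10 database with threshold 5; function and variable names are my own):

```python
import torch

torch.manual_seed(3)

def threshold_query(db, threshold=5):
    # 1.0 if the database sum exceeds the threshold, else 0.0
    return (db.sum() > threshold).float()

sensitivities = []
for trial in range(10):
    db = (torch.rand(10) > 0.5).float()  # fresh size-10 database each time
    full = threshold_query(db)
    sens = 0.0
    for i in range(len(db)):
        pdb = torch.cat((db[:i], db[i + 1:]))  # parallel db with entry i removed
        sens = max(sens, float(torch.abs(threshold_query(pdb) - full)))
    sensitivities.append(sens)
print(sensitivities)  # mostly 0, with 1 showing up when the sum sits right at the threshold
```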
# # Lesson: A Basic Differencing Attack
#
# Sadly none of the functions we've looked at so far are differentially private (despite them having varying levels of sensitivity). The most basic type of attack can be done as follows.
#
# Let's say we wanted to figure out a specific person's value in the database. All we would have to do is query for the sum of the entire database and then the sum of the entire database without that person!
#
# # Project: Perform a Differencing Attack on Row 10
#
# In this project, I want you to construct a database and then demonstrate how you can use two different sum queries to expose the value of the person represented by row 10 in the database (note, you'll need to use a database with at least 10 rows).
# +
# try this project here!
# -
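A minimal sketch of the attack (the database here is random; "row 10" is taken to mean index 10):

```python
import torch

db = (torch.rand(100) > 0.5).float()

sum_with = db.sum()
sum_without = torch.cat((db[:10], db[11:])).sum()  # sum over everyone except row 10

# the difference between the two sums is exactly row 10's private value
print(float(sum_with - sum_without), float(db[10]))
```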
# # Project: Local Differential Privacy
#
# As you can see, the basic sum query is not differentially private at all! In truth, differential privacy always requires a form of randomness added to the query. Let me show you what I mean.
#
# ### Randomized Response (Local Differential Privacy)
#
# Let's say I have a group of people I wish to survey about a very taboo behavior which I think they will lie about (say, I want to know if they have ever committed a certain kind of crime). I'm not a policeman, I'm just trying to collect statistics to understand the higher level trend in society. So, how do we do this? One technique is to add randomness to each person's response by giving each person the following instructions (assuming I'm asking a simple yes/no question):
#
# - Flip a coin 2 times.
# - If the first coin flip is heads, answer honestly
# - If the first coin flip is tails, answer according to the second coin flip (heads for yes, tails for no)!
#
# Thus, each person is now protected with "plausible deniability". If they answer "Yes" to the question "have you committed X crime?", then it might be because they actually did, or it might be because they are answering according to a random coin flip. Each person has a high degree of protection. Furthermore, we can recover the underlying statistics with some accuracy, as the "true statistics" are simply averaged with a 50% probability. Thus, if we collect a bunch of samples and it turns out that 60% of people answer yes, then we know that the TRUE distribution is actually centered around 70%, because 70% averaged with 50% (a coin flip) is 60% which is the result we obtained.
#
# However, it should be noted that, especially when we only have a few samples, this comes at the cost of accuracy. This tradeoff exists across all of Differential Privacy. The greater the privacy protection (plausible deniability) the less accurate the results.
#
# Let's implement this local DP for our database before!
# +
# try this project here!
# -
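One way to sketch randomized response over such a database (variable names are my own; with 10,000 entries the de-skewed estimate lands close to the true mean):

```python
import torch

torch.manual_seed(0)
db = (torch.rand(10000) > 0.5).float()  # the true (private) answers

first_flip = (torch.rand(len(db)) > 0.5).float()
second_flip = (torch.rand(len(db)) > 0.5).float()

# honest answer when the first flip is heads, random answer otherwise
noisy_db = db * first_flip + (1 - first_flip) * second_flip

# de-skew: observed mean = 0.5 * true mean + 0.5 * 0.5
estimated_mean = 2 * noisy_db.mean() - 0.5
print(float(db.mean()), float(estimated_mean))
```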
# # Project: Varying Amounts of Noise
#
# In this project, I want you to augment the randomized response query (the one we just wrote) to allow for varying amounts of randomness to be added. Specifically, I want you to bias the coin flip to be higher or lower and then run the same experiment.
#
# Note - this one is a bit trickier than you might expect. You need to both adjust the likelihood of the first coin flip AND the de-skewing at the end (where we create the "augmented_result" variable).
# +
# try this project here!
# -
# # Lesson: The Formal Definition of Differential Privacy
#
# The previous method of adding noise was called "Local Differential Privacy" because we added noise to each datapoint individually. This is necessary for some situations wherein the data is SO sensitive that individuals do not trust noise to be added later. However, it comes at a very high cost in terms of accuracy.
#
# However, alternatively we can add noise AFTER data has been aggregated by a function. This kind of noise can allow for similar levels of protection with a lower effect on accuracy. However, participants must be able to trust that no one looked at their datapoints _before_ the aggregation took place. In some situations this works out well, in others (such as an individual hand-surveying a group of people), this is less realistic.
#
# Nevertheless, global differential privacy is incredibly important because it allows us to perform differential privacy on smaller groups of individuals with lower amounts of noise. Let's revisit our sum functions.
# +
db, pdbs = create_db_and_parallels(100)
def query(db):
return torch.sum(db.float())
def M(db):
    return query(db) + noise  # pseudocode: 'noise' will be defined in the next lesson
query(db)
# -
# So the idea here is that we want to add noise to the output of our function. We actually have two different kinds of noise we can add - Laplacian Noise or Gaussian Noise. However, before we do so at this point we need to dive into the formal definition of Differential Privacy.
#
# 
# _Image From: "The Algorithmic Foundations of Differential Privacy" - <NAME> and <NAME> - https://www.cis.upenn.edu/~aaroth/Papers/privacybook.pdf_
# This definition does not _create_ differential privacy, instead it is a measure of how much privacy is afforded by a query M. Specifically, it's a comparison between running the query M on a database (x) and a parallel database (y). As you remember, parallel databases are defined to be the same as a full database (x) with one entry/person removed.
#
# Thus, this definition says that FOR ALL parallel databases, the maximum distance between a query on database (x) and the same query on database (y) will be e^epsilon, but that occasionally this constraint won't hold with probability delta. Thus, this theorem is called "epsilon delta" differential privacy.
#
# # Epsilon
#
# Let's unpack the intuition of this for a moment.
#
# Epsilon Zero: If a query satisfied this inequality where epsilon was set to 0, then that would mean that the query for all parallel databases output the exact same value as the full database. As you may remember, when we calculated the "threshold" function, often the Sensitivity was 0. In that case, the epsilon also happened to be zero.
#
# Epsilon One: If a query satisfied this inequality with epsilon 1, then the maximum distance between all queries would be 1 - or more precisely - the maximum distance between the two random distributions M(x) and M(y) is 1 (because all these queries have some amount of randomness in them, just like we observed in the last section).
#
# # Delta
#
# Delta is basically the probability that epsilon breaks. Namely, sometimes the epsilon is different for some queries than it is for others. For example, you may remember when we were calculating the sensitivity of threshold, most of the time sensitivity was 0 but sometimes it was 1. Thus, we could calculate this as "epsilon zero but non-zero delta" which would say that epsilon is perfect except for some probability of the time when it's arbitrarily higher. Note that this expression doesn't represent the full tradeoff between epsilon and delta.
# # Lesson: How To Add Noise for Global Differential Privacy
#
# In this lesson, we're going to learn about how to take a query and add varying amounts of noise so that it satisfies a certain degree of differential privacy. In particular, we're going to leave behind the Local Differential privacy previously discussed and instead opt to focus on Global differential privacy.
#
# So, to sum up, this lesson is about adding noise to the output of our query so that it satisfies a certain epsilon-delta differential privacy threshold.
#
# There are two kinds of noise we can add - Gaussian Noise or Laplacian Noise. Generally speaking Laplacian is better, but both are still valid. Now to the hard question...
#
# ### How much noise should we add?
#
# The amount of noise necessary to add to the output of a query is a function of four things:
#
# - the type of noise (Gaussian/Laplacian)
# - the sensitivity of the query/function
# - the desired epsilon (ε)
# - the desired delta (δ)
#
# Thus, for each type of noise we're adding, we have different way of calculating how much to add as a function of sensitivity, epsilon, and delta. We're going to focus on Laplacian noise. Laplacian noise is increased/decreased according to a "scale" parameter b. We choose "b" based on the following formula.
#
# b = sensitivity(query) / epsilon
#
# In other words, if we set b to be this value, then we know that we will have a privacy leakage of <= epsilon. Furthermore, the nice thing about Laplace is that it guarantees this with delta == 0. There are some tunings where we can have very low epsilon where delta is non-zero, but we'll ignore them for now.
#
# ### Querying Repeatedly
#
# - if we query the database multiple times - we can simply add the epsilons (Even if we change the amount of noise and their epsilons are not the same).
# # Project: Create a Differentially Private Query
#
# In this project, I want you to take what you learned in the previous lesson and create a query function which sums over the database and adds just the right amount of noise such that it satisfies an epsilon constraint. Write a query for both "sum" and for "mean". Ensure that you use the correct sensitivity measures for both.
# +
# try this project here!
# -
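A sketch of such a query, using a Laplacian mechanism with scale b = sensitivity(query) / epsilon as described above (numpy is used here for the Laplace draw; function names are my own):

```python
import numpy as np

np.random.seed(0)
db = (np.random.rand(1000) > 0.5).astype(float)
epsilon = 0.5

def laplacian_mechanism(db, query, sensitivity):
    b = sensitivity / epsilon  # scale from b = sensitivity(query) / epsilon, with delta == 0
    return query(db) + np.random.laplace(0, b)

dp_sum = laplacian_mechanism(db, np.sum, sensitivity=1)               # sum has sensitivity 1
dp_mean = laplacian_mechanism(db, np.mean, sensitivity=1 / len(db))   # mean has sensitivity 1/n
print(dp_sum, dp_mean)
```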
# # Lesson: Differential Privacy for Deep Learning
#
# So in the last lessons you may have been wondering - what does all of this have to do with Deep Learning? Well, these same techniques we were just studying form the core primitives for how Differential Privacy provides guarantees in the context of Deep Learning.
#
# Previously, we defined perfect privacy as "a query to a database returns the same value even if we remove any person from the database", and used this intuition in the description of epsilon/delta. In the context of deep learning we have a similar standard.
#
# Training a model on a dataset should return the same model even if we remove any person from the dataset.
#
# Thus, we've replaced "querying a database" with "training a model on a dataset". In essence, the training process is a kind of query. However, one should note that this adds two points of complexity which database queries did not have:
#
# 1. do we always know where "people" are referenced in the dataset?
# 2. neural models rarely train to the same output model, even on identical data
#
# The answer to (1) is to treat each training example as a single, separate person. Strictly speaking, this is often overly zealous as some training examples have no relevance to people and others may have multiple or partial references (consider an image with multiple people contained within it). Thus, localizing exactly where "people" are referenced, and thus how much your model would change if people were removed, is challenging.
#
# The answer to (2) is also an open problem - but several interesting proposals have been made. We're going to focus on one of the most popular proposals, PATE.
#
# ## An Example Scenario: A Health Neural Network
#
# First we're going to consider a scenario - you work for a hospital and you have a large collection of images about your patients. However, you don't know what's in them. You would like to use these images to develop a neural network which can automatically classify them, however since your images aren't labeled, they aren't sufficient to train a classifier.
#
# However, being a cunning strategist, you realize that you can reach out to 10 partner hospitals which DO have annotated data. It is your hope to train your new classifier on their datasets so that you can automatically label your own. While these hospitals are interested in helping, they have privacy concerns regarding information about their patients. Thus, you will use the following technique to train a classifier which protects the privacy of patients in the other hospitals.
#
# - 1) You'll ask each of the 10 hospitals to train a model on their own datasets (All of which have the same kinds of labels)
# - 2) You'll then use each of the 10 partner models to predict on your local dataset, generating 10 labels for each of your datapoints
# - 3) Then, for each local data point (now with 10 labels), you will perform a DP query to generate the final true label. This query is a "max" function, where "max" is the most frequent label across the 10 labels. We will need to add laplacian noise to make this Differentially Private to a certain epsilon/delta constraint.
# - 4) Finally, we will retrain a new model on our local dataset which now has labels. This will be our final "DP" model.
#
# So, let's walk through these steps. I will assume you're already familiar with how to train/predict a deep neural network, so we'll skip steps 1 and 2 and work with example data. We'll focus instead on step 3, namely how to perform the DP query for each example using toy data.
#
# So, let's say we have 10,000 training examples, and we've got 10 labels for each example (from our 10 "teacher models" which were trained directly on private data). Each label is chosen from a set of 10 possible labels (categories) for each image.
import numpy as np
num_teachers = 10 # we're working with 10 partner hospitals
num_examples = 10000 # the size of OUR dataset
num_labels = 10 # number of labels for our classifier
preds = (np.random.rand(num_teachers, num_examples) * num_labels).astype(int).transpose(1,0) # fake predictions
new_labels = list()
for an_image in preds:
    label_counts = np.bincount(an_image, minlength=num_labels).astype(float)  # float, so Laplace noise isn't truncated
    epsilon = 0.1
    beta = 1 / epsilon
    for i in range(len(label_counts)):
        label_counts[i] += np.random.laplace(0, beta)
new_label = np.argmax(label_counts)
new_labels.append(new_label)
# +
# new_labels
# -
# # PATE Analysis
labels = np.array([9, 9, 3, 6, 9, 9, 9, 9, 8, 2])
counts = np.bincount(labels, minlength=10)
query_result = np.argmax(counts)
query_result
from syft.frameworks.torch.differential_privacy import pate
# +
num_teachers, num_examples, num_labels = (100, 100, 10)
preds = (np.random.rand(num_teachers, num_examples) * num_labels).astype(int) #fake preds
indices = (np.random.rand(num_examples) * num_labels).astype(int) # true answers
preds[:,0:10] *= 0
data_dep_eps, data_ind_eps = pate.perform_analysis(teacher_preds=preds, indices=indices, noise_eps=0.1, delta=1e-5)
assert data_dep_eps < data_ind_eps
# -
data_dep_eps, data_ind_eps = pate.perform_analysis(teacher_preds=preds, indices=indices, noise_eps=0.1, delta=1e-5)
print("Data Independent Epsilon:", data_ind_eps)
print("Data Dependent Epsilon:", data_dep_eps)
preds[:,0:50] *= 0
data_dep_eps, data_ind_eps = pate.perform_analysis(teacher_preds=preds, indices=indices, noise_eps=0.1, delta=1e-5, moments=20)
print("Data Independent Epsilon:", data_ind_eps)
print("Data Dependent Epsilon:", data_dep_eps)
# # Where to Go From Here
#
#
# Read:
# - Algorithmic Foundations of Differential Privacy: https://www.cis.upenn.edu/~aaroth/Papers/privacybook.pdf
# - Deep Learning with Differential Privacy: https://arxiv.org/pdf/1607.00133.pdf
# - The Ethical Algorithm: https://www.amazon.com/Ethical-Algorithm-Science-Socially-Design/dp/0190948205
#
# Topics:
# - The Exponential Mechanism
# - The Moment's Accountant
# - Differentially Private Stochastic Gradient Descent
#
# Advice:
# - For deployments - stick with public frameworks!
# - Join the Differential Privacy Community
# - Don't get ahead of yourself - DP is still in the early days
# # Section Project:
#
# For the final project for this section, you're going to train a DP model using this PATE method on the MNIST dataset, provided below.
import torchvision.datasets as datasets
mnist_trainset = datasets.MNIST(root='./data', train=True, download=True, transform=None)
train_data = mnist_trainset.train_data
train_targets = mnist_trainset.train_labels
test_data = mnist_trainset.test_data
test_targets = mnist_trainset.test_labels
| private-ai-master/Section 1 - Differential Privacy.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import pandas as pd
import matplotlib.pyplot as plt
import networkx as nx
import collections
import numpy as np
df = pd.read_csv('results/sim_trace.csv')
df.info()
df["time_latency"] = df["time_reception"] - df["time_emit"]
df["time_wait"] = df["time_in"] - df["time_reception"]
df["time_service"] = df["time_out"] - df["time_in"]
df["time_response"] = df["time_out"] - df["time_reception"]
df["time_total_response"] = df["time_response"] + df["time_latency"]
# convert index to date type in order to use resample and aggregate functions of pandas
df["date"]=df.time_in.astype('datetime64[s]')
df.index = df.date
df.head()
df_resample = df.resample('100s').agg(dict(time_latency='mean'))
df_resample.shape
timeLatency = df_resample.time_latency.values
ticks = range(len(timeLatency))
fig = plt.figure()
ax1 = fig.add_subplot(111)
ax1.plot(ticks, timeLatency, '-')
ax1.set_ylim(timeLatency.min()-0.5,timeLatency.max()+0.5)
ax1.set_xlabel("Simulation time", fontsize=16)
ax1.set_ylabel("Latency time", fontsize=16)
# # Hop Count
# Hop count
cache_distance = {}
G = nx.read_graphml('results/graph_binomial_tree_5')
print(G.nodes())
def compute_distance(k):
return nx.shortest_path_length(G,str(k[0]),str(k[1]))
for row in df[["TOPO.src","TOPO.dst"]].iterrows():
k = (row[1][0],row[1][1])
if not k in cache_distance.keys():
cache_distance[k] = compute_distance(k)
x = cache_distance.values()
counter = collections.Counter(x)
print(counter)
data_a = {}
for k in range(10):
data_a[k] = counter[k]
data_a
data_a = list(data_a.values())  # matplotlib needs a sequence, not a dict_values view
ticks = range(10)
N = len(ticks)
ind = np.array(ticks)
width = 0.45
fig, ax = plt.subplots(figsize=(8.0,4.0))
ax.get_xaxis().set_ticks(range(0, len(ticks) * 2, 2))
r = ax.bar(ind, data_a, width, color='r')
ax.set_xticks(ind+ width/2)
ax.set_xticklabels(ticks, fontsize=12)
#ax.set_title("App")
ax.set_xlim(-width, len(ticks))
ax.plot([], c='#a6bddb', label="No LABEL",linewidth=3)
ax.set_xlabel("Hop count among services", fontsize=18)
ax.set_ylabel("Quantity", fontsize=20)
plt.legend([r],['No label'],loc="upper right",fontsize=14)
plt.tight_layout()
| src/sim/topologyChangesNew/topology_changesNew.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Creating an Interpolated Structured Data Grid
# This script allows you to input an unstructured dataset, e.g. from a CFD velocity data file, and interpolate it into a structured grid of your chosen size.
#
# 
# Structured output velocity file vs unstructured-grid CFD velocity data input. Source:
# *<NAME>, "The leading edge vortex and its secondary structures: a numerical study of the flow topology of pitching and plunging airfoils", MEng Dissertation, University of Glasgow, January 2021*
#
#
#
#
#
# ### Sample header from input file:
#
# "U:0","U:1","U:2","Q-criterion","Points:0","Points:1","Points:2"
#
# 0,0,0,-2.0633e+05,0.076136,-3.4993e-05,0.03
#
# 0,0,0,-2.9188e+07,0.0762,-3.2004e-05,0.03
#
# 0.1312,0,0,-1.7476e+05,0.076137,-4.4772e-05,0.03
#
# 0.1312,0,0,-2.494e+07,0.076207,-3.7501e-05,0.03
#
# 0,0,0,-1.7728e+05,0.076066,-3.8283e-05,0.03
#
# 0.1312,0,0,-49700,0.076066,-4.8514e-05,0.03
#
# 0.1312,0,0,-7.0466e+06,0.076207,3.7501e-05,0.03
#
# 0,0,0,-9.4372e+07,0.0762,3.2004e-05,0.03
#
# 0.1312,0,0,-0,0.076138,-5.5822e-05,0.03
# +
import pandas as pd
import numpy as np
from scipy import interpolate
from IPython.display import clear_output
import os
initialFrame = 1
finalFrame = 2
frameStep = 1
for i in range(initialFrame,finalFrame+frameStep,frameStep):
#input file paths
input_file = os.getcwd()
input_file += '/InputVelocity/velocity_' #sample velocity files for you to try this out
input_file += str(i)
input_file += '.csv'
#output file paths
output_file = os.getcwd()
output_file += '/StructuredVelocityOutput/'
output_file += str(i)
output_file += '.txt'
df = pd.read_csv(input_file)
df = df.drop(["U:2","Q-criterion","Points:2"], axis = 1)
df = df.rename(columns = {'Points:0' : 'X', 'Points:1': 'Y', 'U:0': 'U', 'U:1':'V'})
x = df['X'].to_numpy() #x input coordinates of velocity file
y = df['Y'].to_numpy() #y input coordinates of velocity file
u = df['U'].to_numpy() #u input coordinates of velocity file
v = df['V'].to_numpy() #v input coordinates of velocity file
xgrid = np.linspace(-0.05, 0.05, 100) #output grid (initial x, final x, resolution)
ygrid = np.linspace(-0.05, 0.05, 100) #output grid (initial y, final x, resolution)
xx, yy = np.meshgrid(xgrid, ygrid) #grid is meshed
points = np.transpose(np.vstack((x, y))) #creating a joint (x,y) matrix
u_interp = interpolate.griddata(points, u, (xx, yy), method='cubic') #interpolating u
v_interp = interpolate.griddata(points, v, (xx, yy), method='cubic') #interpolating v
x1 = pd.DataFrame (data=np.hstack(xx), columns=['X'])
y1 = pd.DataFrame (data=np.hstack(yy), columns=['Y'])
u1 = pd.DataFrame (data=np.hstack(u_interp), columns=['U'])
v1 = pd.DataFrame (data= np.hstack(v_interp), columns=['V'])
df = pd.concat([x1,y1,u1,v1], axis=1)
#df = df.round({'X': 4, 'Y': 4})
#df.groupby(['X', 'Y']).mean()
df = df.drop_duplicates(['X', 'Y'])
#df = df.dropna()
df = df.sort_values(by=['X', 'Y'])
print('Processing ',round((i-1)/(finalFrame-initialFrame)*100,2), '%')
clear_output(wait=True)
df.to_csv(output_file, sep=' ', index = False, header = False)
# -
df
| InterpolatingAStructuredGrid/structured_grid.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
dates = pd.date_range('1/1/2018', periods=11)
df = pd.DataFrame(
{
"Column A": ["1","2","3","4","5",np.nan,"7","8"],
"Column B": [10,20,30,40,50,np.nan,70,80]
}
)
df.describe()
df = df.dropna() # Drop all rows with NaNs
df.loc[4, "Column A"] # Selects label 4 of Column A
# Does the same thing, but is considered bad practice
# since it performs two slice operations on the DataFrame.
# It's called "chaining".
df.loc[4]["Column A"]
df["Column B"] = df["Column B"].astype(int)
df["Column A"] = df["Column A"].astype(int)
# Here we convert our types to the proper values
# I think this is a false-positive warning, but I could be wrong
df.describe()
# Now Column A gets a proper description
df["Column B"].plot(style='.-') # Plots our data
df["Column B"].value_counts().plot()
# Creates a flat line, because each value only occurs once
df = pd.read_csv('10000 Sales Records.csv')
df.head()
# Munging
df["Ship Date"] = pd.to_datetime(df["Ship Date"])
df["Order Date"] = pd.to_datetime(df["Order Date"])
df["Order ID"] = pd.to_numeric(df["Order ID"])
df["Units Sold"] = pd.to_numeric(df["Units Sold"])
df["Unit Price"] = pd.to_numeric(df["Unit Price"])
df["Unit Cost"] = pd.to_numeric(df["Unit Cost"])
df["Total Revenue"] = pd.to_numeric(df["Total Revenue"])
df["Total Cost"] = pd.to_numeric(df["Total Cost"])
df["Total Profit"] = pd.to_numeric(df["Total Profit"])
df.head()
df = df.sort_values('Ship Date', ascending=True)
df.plot(x='Order Date', y='Ship Date')
# +
# The ship date increases with the order date, who would have guessed?
# -
grouped = df.groupby("Sales Channel")
grouped.groups
online_vs_offline = grouped.sum()
online_vs_offline["Total Profit"].plot(kind='bar')
# +
# As we can see, profits from online orders are on par with offline orders
# -
online_vs_offline["Units Sold"].plot(kind='bar')
# +
# Same goes for total units sold
# -
len(df["Country"].value_counts())
# Here we can see how many countries we've ordered from
len(df["Item Type"].value_counts())
# Ah and we sell 12 types of items. I wonder which we sell the most of?
units_sold_data = df.groupby("Item Type").sum()["Units Sold"]
units_sold_data.plot(kind='bar')
# Hmm, hard to tell. Lets just get the top-seller.
units_sold_data.idxmax()
# +
# The idxmax method just returns the index of the max value, nothing fancy
| Day 8/to jupyter and back.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] slideshow={"slide_type": "slide"}
# # Lecture 2: Introduction to Python
# + [markdown] slideshow={"slide_type": "fragment"}
# ## Topics covered
# * Boolean operations
# * Conditional logic
# * Introduction to loops
# + [markdown] slideshow={"slide_type": "slide"}
# ### Boolean Operators
#
# * We have seen that Python has a type `bool` that can take two values `True` or `False`
# * Boolean operators return `True` and `False` (or yes/no) answers
#
# + [markdown] slideshow={"slide_type": "subslide"}
# | Operation | Description | Operation | Description |
# | :--- | :--- | :--- | :--- |
# | `a == b` | `a` equal to `b` | `a != b` | `a` not equal to `b` |
# | `a < b` | `a` less than `b` | `a > b` | `a` greater than `b` |
# | `a <= b` | `a` less than or equal to `b` | `a >= b` | `a` greater than or equal to `b` |
#
#
# * **Tip**: Remember that the `=` operator is for **assignment**
# * use `==` when you want to **compare** equality
# + slideshow={"slide_type": "fragment"}
age = 25
print(age < 40)
print(age >= 40)
print(age == 25)
# + [markdown] slideshow={"slide_type": "slide"}
# ## Conditional Logic
# + [markdown] slideshow={"slide_type": "fragment"}
# * Most code makes extensive use of **if then** statements
# * This makes use of the `if`, `elif`, and `else` keywords
# + [markdown] slideshow={"slide_type": "fragment"}
# ```python
# if <Boolean Operation 1>:
# do something
# elif <Boolean Operation 2>:
# do something different
# else:
# take default action
# ```
# + [markdown] slideshow={"slide_type": "subslide"}
# Example 1:
# * Is the number greater than 2?
# + slideshow={"slide_type": "fragment"}
number = 5
if number > 2:
print("Number > 2!")
else:
print("Number <=2")
# + [markdown] slideshow={"slide_type": "subslide"}
# ### Python whitespace
#
# * The indenting in the previous example is mandatory in Python
# * If you do not indent your `if then` statements then Python will throw an `IndentationError` exception
# + slideshow={"slide_type": "fragment"}
number = 1
if number > 2:
print("Number > 2!")
else:
print("Number <=2")
# + [markdown] slideshow={"slide_type": "subslide"}
# * A more complex **if then**
# + slideshow={"slide_type": "fragment"}
lower_limit = 10
upper_limit = 15
number_to_check = 12
if number_to_check >= lower_limit and number_to_check <= upper_limit:
print('number is inside range')
else:
print('number is outside of range')
# + [markdown] slideshow={"slide_type": "subslide"}
# * An example using `elif`
# + slideshow={"slide_type": "subslide"}
age = 32
pensionable_age = 68
if age < 0:
print('Error. Please enter an age greater than zero')
elif age < pensionable_age:
years_to_pension = pensionable_age - age
print('There are {0} years until you can draw your pension!'.format(years_to_pension))
else:
    print('You are eligible to draw your pension')
# + [markdown] slideshow={"slide_type": "slide"}
# ## Functions and if statements
#
# * we could also use a function within an `if` statement
# + slideshow={"slide_type": "subslide"}
def eligible_for_pension(age):
'''
Return boolean True or False to indicate if
person is eligible for their pension
Keyword arguments:
    age -- a person's age in integer years (e.g. 32 or 45)
'''
return age >= 68
# + slideshow={"slide_type": "fragment"}
age = 68
if eligible_for_pension(age):
print('Congratulations you can retire!')
else:
print('You are not yet eligible for your pension')
# + [markdown] slideshow={"slide_type": "subslide"}
# * We can also use the `not` operator in Python
# + slideshow={"slide_type": "fragment"}
age = 68
if not eligible_for_pension(age):
print('You are not yet eligible for your pension')
else:
print('Congratulations you can retire!')
# + [markdown] slideshow={"slide_type": "slide"}
# ## Nested if statements
# + slideshow={"slide_type": "subslide"}
def stamp_duty(house_price, first_time_buyer):
'''
    First time buyers receive more tax relief
than people buying their next home.
Returns float representing the stamp duty owed.
'''
if first_time_buyer:
#this if statement is nested within the first
if house_price <= 300000:
return 0.0
else:
return house_price * 0.05
else:
if house_price < 125000:
return 0.0
else:
return house_price * 0.05
# + slideshow={"slide_type": "fragment"}
my_first_house_price = 310000
owed = stamp_duty(my_first_house_price, True)
print('stamp duty owed = £{0}'.format(owed))
# + [markdown] slideshow={"slide_type": "slide"}
# ## Introduction to iterating over data using **loops**
# + [markdown] slideshow={"slide_type": "subslide"}
# **Algorithms often need to do the same thing again and again**
#
# For example, an algorithm making three servings of toast
#
# <img src="images/toast.jpeg" alt="drawing" width="500"/>
# + [markdown] slideshow={"slide_type": "subslide"}
# **Making Toast Algorithm**:
#
# 1. Put a slice of bread in the toaster
# 2. Push lever down to turn on the toaster
# 3. Remove the toasted bread from the toaster
# 4. Put a slice of bread in the toaster
# 5. Push lever down to turn on the toaster
# 6. Remove the toasted bread from the toaster
# 7. Put a slice of bread in the toaster
# 8. Push lever down to turn on the toaster
# 9. Remove the toasted bread from the toaster
#
# + [markdown] slideshow={"slide_type": "subslide"}
# **A better way to do this in code is to use a LOOP**
#
# Do the following 3 steps, 3 times:
# 1. Put a slice of bread in the toaster
# 2. Push lever down to turn on the toaster
# 3. Remove the toasted bread from the toaster
#
#
# + [markdown] slideshow={"slide_type": "subslide"}
# * There are two types of loop in Python
# * `for` loops and `while` loops
# * We generally use `while` if we do not know the number of iterations in advance
# * We generally use `for` if we know the number of iterations in advance
# + [markdown] slideshow={"slide_type": "subslide"}
# ### While loops
# + slideshow={"slide_type": "fragment"}
age = 15
while age <= 18:
print('you are currently {0} years old'.format(age))
age += 1
# + [markdown] slideshow={"slide_type": "subslide"}
# ### While Loop Structure
#
# * All while loops have the same structure
# * You use the `while` keyword
# * Followed by a boolean operation (e.g. `age <= 18`)
# * Beware of infinite loops!
# * Let's test a more complex while loop using a function
# + slideshow={"slide_type": "subslide"}
def fizzbuzz(n):
"""
For multiples of three print "Fizz" instead of the number
and for the multiples of five print "Buzz".
For numbers which are multiples of both three
and five print "FizzBuzz".
Keyword arguments:
n -- the number to test
"""
if n % 3 == 0 and n % 5 == 0:
print('fizzbuzz')
elif n % 3 == 0:
print('fizz')
elif n % 5 == 0:
print('buzz')
else:
print(n)
# + slideshow={"slide_type": "subslide"}
n = 1
limit = 15
while n <= limit:
fizzbuzz(n)
n += 1
# + [markdown] slideshow={"slide_type": "subslide"}
# * A `while` loop example where we do **not** know the number of iterations in advance.
# + slideshow={"slide_type": "fragment"}
list_to_search = ['we', 'are', 'the', 'knights', 'who', 'say', 'ni']
not_found = True
word_to_find = 'knights'
current_index = 0
index_of_word = -1
while not_found:
if list_to_search[current_index] == word_to_find:
not_found = False
index_of_word = current_index
current_index += 1
print("the word '{0}' is located in index {1}".format(word_to_find, index_of_word))
# + [markdown] slideshow={"slide_type": "subslide"}
# * The previous example can easily lead to an `IndexError`.
# * If `word_to_find` was 'shrubbery' then the loop would exhaust all list elements
# * Although we do not know the number of iterations needed, we can easily modify the `while` loop to take account of the maximum allowable iterations.
# + slideshow={"slide_type": "subslide"}
list_to_search = ['we', 'are', 'the', 'knights', 'who', 'say', 'ni']
not_found = True
word_to_find = 'shrubbery'
current_index = 0
index_of_word = -1
word_count = len(list_to_search)
while not_found and current_index < word_count:
if list_to_search[current_index] == word_to_find:
not_found = False
index_of_word = current_index
current_index += 1
if not_found:
print("'{0}' could not be found".format(word_to_find))
else:
print("'{0}' is located in index {1}".format(word_to_find, index_of_word))
# + [markdown] slideshow={"slide_type": "slide"}
# ### For loops
# + [markdown] slideshow={"slide_type": "fragment"}
# * To create a **for** loop you need the following:
# * `for` keyword
# * a variable
# * the `in` keyword
# * the `range()` function - a built-in that creates a sequence of numbers
#
# + slideshow={"slide_type": "subslide"}
for age in range(5):
print('you are currently {0} years old'.format(age) )
# + [markdown] slideshow={"slide_type": "fragment"}
# * notice that the loop sets age to 0 initially!
# * `range()` takes optional arguments to set the start (inclusive, default = 0), stop (exclusive) and step
# + slideshow={"slide_type": "subslide"}
for age in range(1, 5):
print('you are currently {0} years old'.format(age))
# + slideshow={"slide_type": "subslide"}
for age in range(1, 5, 2):
print('you are currently {0} years old'.format(age))
# + slideshow={"slide_type": "subslide"}
limit = 15
for n in range(1, limit+1):
fizzbuzz(n)
# + [markdown] slideshow={"slide_type": "subslide"}
# ### Watch out for python whitespace rules!
#
# * Remember to indent the next line after `:`
# + slideshow={"slide_type": "fragment"}
limit = 15
for n in range(1, limit+1):
fizzbuzz(n)
# + [markdown] slideshow={"slide_type": "slide"}
# ## Nested Loops
# * `for` and `while` loops can be nested within each other.
# * Think of nested loops as a 'loop of loops'
# * Remember that each iteration of the outer loop runs a complete set of inner-loop iterations
# * **Don't panic** if you do not understand straight away!
# + slideshow={"slide_type": "subslide"}
for outer_index in range(3):
print('Outer loop iteration: {0}'.format(outer_index))
for inner_index in range(5):
print('\tInner loop iteration: {0}'.format(inner_index))
# + [markdown] slideshow={"slide_type": "subslide"}
# * Example 2: The inner loop now iterates **backwards**
# + slideshow={"slide_type": "fragment"}
for outer_index in range(2):
print('Outer loop iteration: {0}'.format(outer_index))
for inner_index in range(5, 0, -1):
print('\tInner loop iteration: {0}'.format(inner_index))
# + [markdown] slideshow={"slide_type": "subslide"}
# Nested Loops Example 3.
# * Let's use a nested `for` loop to create the pattern below
#
#
# ```
# 1
# 12
# 123
# 1234
# 12345
# ```
# + slideshow={"slide_type": "subslide"}
for outer_index in range(1, 6):
    #remember this is a loop of loops:
    #the inner loop below runs all of its
    #iterations each time the outer loop iterates
for inner_index in range(1, outer_index + 1):
#we use the end='' option of print so
#that it prints on the same line as previous
print(inner_index, end='')
#new line
print('')
# + [markdown] slideshow={"slide_type": "slide"}
# ## Lab work
#
# * Lab 2 is now available on blackboard.
# * It explores conditionals and loops.
# * Please take a look at lab 2 before you come along this week.
# * Please ask us questions in the lab if you do not understand something.
| Lectures/Lecture2/Lecture2.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Exclusions and NB Scaling in EEX
# Exclusions and nonbonded scaling (such as 1-2, 1-3, and 1-4 scaling) are handled in EEX using a nonbond scaling list. This list can be passed directly or built by setting 1-2, 1-3, 1-4 scaling.
#
# ## Storing Scaling information
#
# Relevant functions are:
# ### dl.set_pair_scalings(scaling_df)
#
# This function is used to directly set scaling parameters between atoms. `scaling_df` is a dataframe with mandatory columns `atom_index1` and `atom_index2`, plus at least one of the columns `vdw_scale` or `coul_scale`.
#
# #### Example
#
# ```python
# # Build scaling dataframe
# scale_df = pd.DataFrame()
# scale_df["coul_scale"] = [0.0, 0.0, 0.0]
# scale_df["atom_index1"] = [1, 1, 2]
# scale_df["atom_index2"] = [2, 3, 3]
# scale_df["vdw_scale"] = [0.5, 0.5, 0.5]
#
# # Add data to datalayer
# dl.set_pair_scalings(scale_df)
# ```
#
#
# ### dl.set_nb_scaling_factors(nb_scaling_factors)
# This function is used to set scaling factors for all atoms in a bond, angle or dihedral.
#
# Here, `nb_scaling_factors` is a dictionary of the form:
#
# ```python
# nb_scaling_factors = {
#     "coul": {
#         "scale12": "dimensionless",
#         "scale13": "dimensionless",
#         "scale14": "dimensionless",
#     },
#     "vdw": {
#         "scale12": "dimensionless",
#         "scale13": "dimensionless",
#         "scale14": "dimensionless",
#     }
# }
# ```
#
# ### dl.build_scaling_list()
# This function uses the scaling factors set in `set_nb_scaling_factors` to build a full list of the same format as `scaling_df` in `set_pair_scalings` and stores it in the datalayer.
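The expansion performed by `build_scaling_list` can be sketched in plain Python: 1-2 pairs are directly bonded atoms, 1-3 pairs are two atoms sharing a common bonded neighbour, and each pair receives the corresponding factor. This is an illustrative stdlib sketch of the idea only (1-4/dihedral handling omitted), not the EEX implementation:

```python
from itertools import combinations

def build_scaling_list(bonds, scale12, scale13):
    """Illustrative sketch: expand per-topology scale factors
    (scale12 for bonded pairs, scale13 for angle end atoms)
    into explicit per-pair scalings."""
    neighbours = {}
    for a, b in bonds:
        neighbours.setdefault(a, set()).add(b)
        neighbours.setdefault(b, set()).add(a)

    pairs = {}
    # 1-3 pairs: two distinct neighbours of a common central atom
    for centre, nbrs in neighbours.items():
        for a, b in combinations(sorted(nbrs), 2):
            pairs[(a, b)] = scale13
    # 1-2 pairs: directly bonded atoms (override any 1-3 entry)
    for a, b in bonds:
        pairs[tuple(sorted((a, b)))] = scale12
    return pairs

# Water-like topology: atom 1 bonded to atoms 2 and 3
print(build_scaling_list([(1, 2), (1, 3)], scale12=0.0, scale13=0.5))
```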
#
# ## Retrieving Scaling information
#
# ### dl.get_pair_scalings(nb_labels)
# Default for `nb_labels` is `["vdw_scale", "coul_scale"]` if no argument is given. This returns a pandas DataFrame with columns `atom_index1`, `atom_index2`, `coul_scale` and `vdw_scale`.
#
# ### dl.get_nb_scaling_factors()
# Returns the dictionary set by the `dl.set_nb_scaling_factors` function.
#
import eex
# +
# Write examples here
# For now, see test_datalayer.py (test_nb_scaling and test_set_nb_scaling_factors)
| examples/Exclusions and NB Scaling.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + id="_1LTTcQVsacE" colab_type="code" outputId="fa42c3d3-73d0-4293-c63b-224820eca0e9" executionInfo={"status": "ok", "timestamp": 1583615761681, "user_tz": -60, "elapsed": 20264, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Ghu1AZoNys9qmixpBnJlZ_8c8mZuipDj1nRYiMPkg=s64", "userId": "14162817076247878771"}} colab={"base_uri": "https://localhost:8080/", "height": 666}
# !pip install --upgrade tables
# !pip install eli5
# !pip install xgboost
# !pip install hyperopt
# + id="Vmi--a_ps7sr" colab_type="code" outputId="2971ebba-66a3-4947-8bbb-14307984a179" executionInfo={"status": "ok", "timestamp": 1583615856661, "user_tz": -60, "elapsed": 2594, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Ghu1AZoNys9qmixpBnJlZ_8c8mZuipDj1nRYiMPkg=s64", "userId": "14162817076247878771"}} colab={"base_uri": "https://localhost:8080/", "height": 168}
import pandas as pd
import numpy as np
from sklearn.metrics import mean_absolute_error as mae
from sklearn.model_selection import cross_val_score
import eli5
from eli5.sklearn import PermutationImportance
import xgboost as xgb
from hyperopt import hp, fmin, tpe, STATUS_OK
# + id="lEvssXKltpVl" colab_type="code" outputId="4bae82b9-873f-4f90-b0c2-b05e69e03946" executionInfo={"status": "ok", "timestamp": 1583615914322, "user_tz": -60, "elapsed": 644, "user": {"displayName": "<NAME>\u0119dzierski", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Ghu1AZoNys9qmixpBnJlZ_8c8mZuipDj1nRYiMPkg=s64", "userId": "14162817076247878771"}} colab={"base_uri": "https://localhost:8080/", "height": 34}
# cd "/content/drive/My Drive/Colab Notebooks/dataworkshop_matrix/Matrix_2_cars"
# + id="hMQy1aqWtwqk" colab_type="code" outputId="242d892f-3970-4961-c3f7-cc415ad9fbf1" executionInfo={"status": "ok", "timestamp": 1583615919984, "user_tz": -60, "elapsed": 5486, "user": {"displayName": "<NAME>\u0119dzierski", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Ghu1AZoNys9qmixpBnJlZ_8c8mZuipDj1nRYiMPkg=s64", "userId": "14162817076247878771"}} colab={"base_uri": "https://localhost:8080/", "height": 34}
df = pd.read_hdf('data/car.h5')
df.shape
# + id="znwQ_IpPt9DZ" colab_type="code" colab={}
SUFFIX_CAT = '__cat'
for feat in df.columns:
if isinstance(df[feat][0], list): continue
factorized_values = df[feat].factorize()[0]
if SUFFIX_CAT in feat:
df[feat] = factorized_values
else:
df[feat + SUFFIX_CAT] = factorized_values
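`pd.factorize` assigns each distinct value an integer code in order of first appearance (the loop above keeps the first array it returns). A stdlib sketch of that encoding, for intuition only:

```python
def factorize(values):
    """Assign integer codes by order of first appearance,
    mirroring the codes array returned by pandas' factorize."""
    codes, lookup = [], {}
    for v in values:
        if v not in lookup:
            lookup[v] = len(lookup)
        codes.append(lookup[v])
    return codes

print(factorize(["audi", "bmw", "audi", "fiat"]))  # [0, 1, 0, 2]
```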
# + id="35OzgMt4uUDr" colab_type="code" outputId="0421d7c7-a2f8-4aaf-cf25-ec94c997a5e6" executionInfo={"status": "ok", "timestamp": 1583615963700, "user_tz": -60, "elapsed": 603, "user": {"displayName": "<NAME>\u0119dzierski", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Ghu1AZoNys9qmixpBnJlZ_8c8mZuipDj1nRYiMPkg=s64", "userId": "14162817076247878771"}} colab={"base_uri": "https://localhost:8080/", "height": 34}
cat_feats = [x for x in df.columns if SUFFIX_CAT in x]
cat_feats = [x for x in cat_feats if 'price' not in x]
len(cat_feats)
# + id="fTavvgaF3uw_" colab_type="code" colab={}
df['param_rok-produkcji'] = df['param_rok-produkcji'].map(lambda x:-1 if str(x) == 'None' else int(x))
df['param_moc'] = df['param_moc'].map(lambda x:-1 if str(x) == 'None' else int(x.split(' ')[0]))
df['param_pojemność-skokowa'] = df['param_pojemność-skokowa'].map(lambda x:-1 if str(x) == 'None' else int(x.split('cm')[0].replace(' ', '')))
# + colab_type="code" id="4OaSR-yl3kS0" colab={}
def run_model(model, feats):
X = df[feats].values
y = df['price_value'].values
scores = cross_val_score(model, X, y, cv=3, scoring='neg_mean_absolute_error')
return np.mean(scores), np.std(scores)
# + [markdown] id="p-jD-Gm9vtJY" colab_type="text"
# #XGBoost
# + id="c4yghe7kvvXz" colab_type="code" outputId="a17b09d6-7030-4ec0-b1ee-17c3bf3cd43c" executionInfo={"status": "ok", "timestamp": 1583616218622, "user_tz": -60, "elapsed": 12770, "user": {"displayName": "<NAME>0119dzierski", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Ghu1AZoNys9qmixpBnJlZ_8c8mZuipDj1nRYiMPkg=s64", "userId": "14162817076247878771"}} colab={"base_uri": "https://localhost:8080/", "height": 85}
xgb_params = {
'max_depth': 5,
'n_estimators': 50,
    'learning_rate': 0.1,
'seed': 0
}
feats = [
    'param_napęd__cat', 'param_rok-produkcji', 'param_stan__cat',
    'param_skrzynia-biegów__cat', 'param_faktura-vat__cat', 'param_moc',
    'param_marka-pojazdu__cat', 'feature_kamera-cofania__cat', 'param_typ__cat',
    'param_pojemność-skokowa', 'seller_name__cat',
    'feature_wspomaganie-kierownicy__cat', 'param_model-pojazdu__cat',
    'param_wersja__cat', 'param_kod-silnika__cat',
    'feature_system-start-stop__cat', 'feature_asystent-pasa-ruchu__cat',
    'feature_czujniki-parkowania-przednie__cat',
    'feature_łopatki-zmiany-biegów__cat', 'feature_regulowane-zawieszenie__cat'
]
model = xgb.XGBRegressor(**xgb_params)
run_model(model, feats)
# + [markdown] id="Od6Ot5YA6I73" colab_type="text"
# #Hyperopt
# + id="BlNHRubD6EVL" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 921} outputId="686c0393-a830-4bc9-f4b3-75637ea39730" executionInfo={"status": "ok", "timestamp": 1583620627137, "user_tz": -60, "elapsed": 1115369, "user": {"displayName": "<NAME>\u0119dzierski", "photoUrl": "https://<KEY>", "userId": "14162817076247878771"}}
def obj_func(params):
print("Training with params:")
print(params)
mean_mae, score_std = run_model(xgb.XGBRegressor(**params), feats)
return {'loss': np.abs(mean_mae), 'status': STATUS_OK}
#space
xgb_reg_params = {
'learning_rate': hp.choice('learning_rate', np.arange(0.05, 0.31, 0.05)),
'max_depth': hp.choice('max_depth', np.arange(5, 16, 1, dtype=int)),
'subsample': hp.quniform('subsample', 0.5, 1, 0.05),
'colsample_bytree': hp.quniform('colsample_bytree', 0.5, 1, 0.05),
'objective': 'reg:squarederror',
'n_estimators': 100,
'seed': 0,
}
#run
best = fmin(obj_func, xgb_reg_params, algo=tpe.suggest, max_evals=25)
best
# + id="lzxQ6N8D-XKp" colab_type="code" colab={}
| Matrix_2_cars/M2_day5.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Deit base 384_1 with DFDC dataset
# !python predictViT1.py --models "deitb_384_1" --test-dir "../../dfdc_train_all/dfdc_test/" --encoder "deit_base_patch16_384" --range1 36 --range2 50 --size 384 --output "effv2m_in21k.csv"
# # Deit base 384_1 with FaceForensics
# !python predictViT1.py --models "deitb_384_1" --test-dir "../../faceforensics/manipulated_sequences/" --encoder "deit_base_patch16_384" --range1 36 --range2 42 --size 384 --output "DeepFakeDetection"
# # Deit base 384_1 with Celeb-DF
# !python predictViT1.py --models "deitb_384_1" --test-dir "../../celeb_df/TestSet/" --encoder "deit_base_patch16_384" --range1 0 --range2 1 --size 384 --output "DeepFakeDetection"
| examples/ViT_inference.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # confident_prediction_for_time_serial_indoor_temp - double_stage_attention
import os
import sys
import time
import optparse
import configparser
import pyodbc
import pandas as pd
import numpy as np
import datetime
#import holidays
import re
from sklearn.preprocessing import MinMaxScaler
import tensorflow as tf
# +
temp_space = pd.read_excel('./datasets/88688_gap_15min_space_temp_f18_csw.xlsx')
temp_weather = pd.read_excel('./datasets/68364_gap_1h_weather_temp.xlsx')
df_weather=pd.DataFrame(np.array(temp_weather),columns=['Run_DateTime','Date','UTC_Date','TempA','TempM',
'DewPointA','DewPointM' ,'Humidity' ,'WindSpeedA',
'WindSpeedM' ,'WindGustA' ,'WindGustM' ,'WindDir',
'VisibilityA' ,'VisibilityM' ,'PressureA' ,'PressureM',
'WindChillA' ,'WindChillM' ,'HeatIndexA' ,'HeatIndexM',
'PrecipA' ,'PrecipM' ,'Condition' ,'Fog' ,'Rain',
'Snow' ,'Hail' ,'Thunder','Tornado' ,'ID'])
df_weather = df_weather[['Date','TempM','DewPointM','Humidity','WindSpeedM' ,'PressureM']]
df_space = pd.DataFrame(np.array(temp_space),columns=['ID','zone','floor','quadrant','eq_no','Date','temp'])
# +
def time_process(date):
time=datetime.datetime.strptime(date,'%Y-%m-%d %H:%M:%S.0000000')
return datetime.datetime(time.year,time.month,time.day,time.hour)
def weather_data_feature_selected_and_serialize(df_weather):
cols=['TempM','DewPointM','Humidity','WindSpeedM' ,'PressureM']
for col in cols:
df_weather = df_weather.loc[~(df_weather[col] == 'N/A')]
df_weather = df_weather.loc[~(df_weather[col] == -9999)]
date_trans=lambda date:datetime.datetime(date.year, date.month, date.day, date.hour)
df_weather['time']=df_weather['Date'].apply(date_trans)
df_weather = df_weather.dropna()
df_weather.drop('Date',axis=1,inplace=True)
result = df_weather.groupby('time').apply(np.mean)
return result
def weather_data_interpolation(df_weather):
interpolate_sample = df_weather.resample('15Min').asfreq()
df_weather_interpolated = interpolate_sample.interpolate(method='time')
return df_weather_interpolated
def space_temp_data_feature_selected_and_serialize(df_space):
cols=['Date','temp']
df_space = df_space[cols]
df_space = df_space.dropna()
date_trans=lambda date:datetime.datetime(date.year, date.month, date.day, date.hour, date.minute)
df_space['time'] = df_space['Date'].apply(date_trans)
df_space.drop('Date', axis=1, inplace=True)
result = df_space.groupby('time').apply(np.mean)
return result
def space_temp_extreme_data_clean_up(df_space, bad_values, verbose):
df_space_cleaned = df_space.copy()
clean_up_temp = []
for bad_value in bad_values:
for i in df_space.loc[df_space['temp'] == bad_value].index:
clean_up_temp.append((df_space_cleaned.loc[i]))
df_space_cleaned = df_space_cleaned.drop(i)
    if verbose:
show_interpolation_data_range(clean_up_temp)
return df_space_cleaned
def space_temp_data_interpolation(df_space):
interpolate_sample = df_space.resample('15Min').asfreq()
df_space_interpolated = interpolate_sample.interpolate(method='time')
return df_space_interpolated
def weather_data_preprocess(df_weather, bad_values=[0, 162.8, 250]):
df_weather_serial = weather_data_feature_selected_and_serialize(df_weather)
df_weather_serial_interpolated = weather_data_interpolation(df_weather_serial)
return df_weather_serial_interpolated
def space_temp_data_preprocess(df_space, bad_values=[0, 162.8, 250], verbose=False):
df_space_serial = space_temp_data_feature_selected_and_serialize(df_space)
df_space_serial_cleaned = space_temp_extreme_data_clean_up(df_space_serial, bad_values, verbose)
df_space_serial_interpolated = space_temp_data_interpolation(df_space_serial_cleaned)
return df_space_serial_interpolated
# In case people want to check the data after preprocessing
def write_data_to_excel(dataframe, filename):
# Create a Pandas Excel writer using XlsxWriter as the engine.
writer = pd.ExcelWriter(filename, engine='xlsxwriter')
# Convert the dataframe to an XlsxWriter Excel object.
dataframe.to_excel(writer, sheet_name='Sheet1')
# Close the Pandas Excel writer and output the Excel file.
writer.save()
# cleanning data printer
def show_interpolation_data_range(clean_up_list):
bad_data_counter = {}
for i, row in enumerate(clean_up_list):
print( 'Temp: %s '%(row['temp']), 'Date:', row.name)
if(str(row['temp']) not in bad_data_counter):
bad_data_counter[str(row['temp'])] = 1
else:
bad_data_counter[str(row['temp'])] += 1
# Print counter
for key, value in bad_data_counter.items():
print('Clean up %s bad data(%s)' %(value, key))
# -
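The hourly serialisation above (truncate each timestamp to the hour, then average the readings in each bucket) can be sketched with the standard library alone. This is a simplified stand-in for the `groupby('time')` averaging, not the notebook's pandas code; the readings below are made up:

```python
from datetime import datetime
from statistics import mean

def hourly_average(readings):
    """readings: list of (datetime, value) pairs.
    Returns {hour_start: mean of values in that hour}."""
    buckets = {}
    for ts, value in readings:
        hour = ts.replace(minute=0, second=0, microsecond=0)
        buckets.setdefault(hour, []).append(value)
    return {hour: mean(vals) for hour, vals in buckets.items()}

readings = [
    (datetime(2018, 1, 1, 9, 15), 70.0),
    (datetime(2018, 1, 1, 9, 45), 72.0),
    (datetime(2018, 1, 1, 10, 5), 71.0),
]
print(hourly_average(readings))
```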
df_weather_processed = weather_data_preprocess(df_weather)
df_space_processed = space_temp_data_preprocess(df_space, bad_values=[0, 162.8, 250], verbose=True)
#write_data_to_excel(df_space_processed, 'space_data.xlsx')
df = pd.merge(df_weather_processed, df_space_processed, left_index=True, right_index=True)
df.describe()
scaler = MinMaxScaler((0,1))
def get_batch(df, batch_size=128, T=16, input_dim=6, step=0, train=True):
t = step * batch_size
X_batch = np.empty(shape=[batch_size, T, input_dim])
y_batch = np.empty(shape=[batch_size, T])
labels = np.empty(shape=[batch_size])
#time_batch = np.empty(shape=[batch_size, T], dtype='datetime64')
time_batch = np.array([])
time_stamp = df.index.tolist()
for i in range(batch_size):
X_batch[i, :] = scaler.fit_transform(df.iloc[t:t+T].values)
y_batch[i, :] = df["temp"].iloc[t:t+T].values
labels[i] = df["temp"].iloc[t+T]
time_batch = np.append(time_batch, time_stamp[t+T])
#time_batch[i, :] = df.iloc[t:t+T].index[-1]
t += 1
## shuffle in train, not in test
if train:
index = list(range(batch_size))
np.random.shuffle(index)
X_batch = X_batch[index]
y_batch = y_batch[index]
labels = labels[index]
time_batch = time_batch[index]
return X_batch, y_batch, labels, time_batch
x, y, labels, time_train = get_batch(df,batch_size = 256, T = 7*24, step = 50, train=True)
print(x.shape)
print(y.shape)
print(labels.shape)
print(time_train.shape)
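`get_batch` is a sliding-window extractor: batch element `i` uses rows `t+i … t+i+T-1` as inputs and row `t+i+T` as the label, with `t = step * batch_size`. The indexing can be sketched with plain lists (a hypothetical toy series, with the scaling and shuffling omitted):

```python
def sliding_windows(series, T, batch_size, step=0):
    """Return (windows, labels): windows[i] is series[t:t+T],
    labels[i] is series[t+T], with t = step*batch_size + i."""
    t0 = step * batch_size
    windows = [series[t0 + i: t0 + i + T] for i in range(batch_size)]
    labels = [series[t0 + i + T] for i in range(batch_size)]
    return windows, labels

series = list(range(10))  # toy stand-in for the temperature column
windows, labels = sliding_windows(series, T=3, batch_size=2)
print(windows)  # [[0, 1, 2], [1, 2, 3]]
print(labels)   # [3, 4]
```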
class ts_prediction(object):
def __init__(self, input_dim, time_step, n_hidden, d_hidden, batch_size):
self.batch_size = batch_size
self.n_hidden = n_hidden
self.d_hidden = d_hidden
#self.o_hidden = 16
self.input_dim = input_dim
self.time_step = time_step
self.seq_len = tf.placeholder(tf.int32,[None])
self.input_x = tf.placeholder(dtype = tf.float32, shape = [None, None, input_dim]) # b,T,d_in
self.input_y = tf.placeholder(dtype = tf.float32,shape = [None,self.time_step]) # b,T
self.label = tf.placeholder(dtype = tf.float32) #b,1
self.original_loss = tf.placeholder(dtype = tf.float32, shape = [])
## placeholder for loss without adversarial gradient added
self.encode_cell = tf.contrib.rnn.LSTMCell(self.n_hidden, forget_bias=1.0, state_is_tuple=True)
self.decode_cell = tf.contrib.rnn.LSTMCell(self.d_hidden, forget_bias=1.0, state_is_tuple=True)
#self.output_cell = tf.contrib.rnn.LSTMCell(self.o_hidden, forget_bias=1.0, state_is_tuple=True)
self.loss_1 = tf.constant(0.0)
self.loss = tf.constant(0.0)
## =========== build the model =========== ##
## ==== encoder ===== ##
        h_encode, c_state = self.en_RNN(self.input_x)
c_expand = tf.tile(tf.expand_dims(c_state[1],1),[1,self.time_step,1])
fw_lstm = tf.concat([h_encode,c_expand],axis = 2) # b,T,2n
stddev = 1.0/(self.n_hidden*self.time_step)
Ue = tf.get_variable(name= 'Ue',dtype = tf.float32,
initializer = tf.truncated_normal(mean = 0.0, stddev = stddev,shape = [self.time_step,self.time_step]))
## (b,d,T) * (b,T,T) = (b,d,T)
brcast_UX = tf.matmul(tf.transpose(self.input_x,[0,2,1]),tf.tile(tf.expand_dims(Ue,0),[self.batch_size,1,1]))
e_list = []
for k in range(self.input_dim):
feature_k = brcast_UX[:,k,:]
e_k = self.en_attention(fw_lstm,feature_k) # b,T
e_list.append(e_k)
e_mat = tf.concat(e_list,axis = 2)
alpha_mat = tf.nn.softmax(e_mat) #b,T,d_in horizontally
encode_input = tf.multiply(self.input_x,alpha_mat)
self.h_t, self.c_t = self.en_RNN(encode_input, scopes = 'fw_lstm')
## ==== inferrence nn ==== ##
"""
node_hidden_layer1 = 500
node_hidden_layer2 = 500
node_hidden_layer3 = 500
hidden_layer1 = {"weights": tf.Variable(tf.random_normal([10, node_hidden_layer1])),
"biases": tf.Variable(tf.random_normal([node_hidden_layer1]))}
hidden_layer2 = {"weights": tf.Variable(tf.random_normal([node_hidden_layer1, node_hidden_layer2])),
"biases": tf.Variable(tf.random_normal([node_hidden_layer2]))}
hidden_layer3 = {"weights": tf.Variable(tf.random_normal([node_hidden_layer2, node_hidden_layer3])),
"biases": tf.Variable(tf.random_normal([node_hidden_layer3]))}
output_layer = {"weights": tf.Variable(tf.random_normal([node_hidden_layer3, 20])),
"biases": tf.Variable(tf.random_normal([20]))}
# Hidden Layer: weights \dot input_data + biaes
layer1 = tf.add(tf.matmul(self.h_t, hidden_layer1["weights"]), hidden_layer1["biases"])
layer1 = tf.nn.relu(layer1)
layer2 = tf.add(tf.matmul(layer1, hidden_layer2["weights"]), hidden_layer2["biases"])
layer2 = tf.nn.relu(layer2)
layer3 = tf.add(tf.matmul(layer2, hidden_layer3["weights"]), hidden_layer3["biases"])
layer3 = tf.nn.relu(layer3)
output = tf.add(tf.matmul(layer3, output_layer["weights"]), output_layer["biases"])
prediction = neural_network_model(X_data)
        loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits_v2(logits=prediction, labels=y))
optimizer = tf.train.AdamOptimizer(learning_rate).minimize(loss)
"""
## ==== decoder ===== ##
h_decode, d_state = self.de_RNN(tf.expand_dims(self.input_y,-1))
d_expand = tf.tile(tf.expand_dims(d_state[1],1),[1,self.time_step,1])
dec_lstm = tf.concat([h_decode,d_expand],axis = 2) # b,T,2*d_hidden
Ud = tf.get_variable(name = 'Ud', dtype = tf.float32,
initializer = tf.truncated_normal(mean = 0.0, stddev = stddev, shape = [self.n_hidden, self.n_hidden]))
brcast_UDX = tf.matmul(self.h_t,tf.tile(tf.expand_dims(Ud,0),[self.batch_size,1,1])) # b,T,n_hidden
l_list = []
for i in range(self.time_step):
feature_i = brcast_UDX[:,i,:]
l_i = self.dec_attention(dec_lstm,feature_i)
l_list.append(l_i)
l_mat = tf.concat(l_list,axis = 2)
beta_mat = tf.nn.softmax(l_mat, dim = 1)
context_list = []
h_tmp = tf.transpose(self.h_t,[0,2,1])
for t in range(self.time_step):
beta_t = tf.reshape(beta_mat[:,t,:],[self.batch_size,1,self.time_step])
self.c_t = tf.reduce_sum(tf.multiply(h_tmp,beta_t),2) # b,T,T -> b,T,1
context_list.append(self.c_t)
c_context = tf.stack(context_list,axis = 2) # b,n_hidden,T
# b,T,1 b,T,n_hidden -> b,T,n_hidden+1
c_concat = tf.concat([tf.expand_dims(self.input_y,-1),tf.transpose(c_context,[0,2,1])], axis = 2)
W_hat = tf.get_variable(name = 'W_hat', dtype = tf.float32,
initializer = tf.truncated_normal(mean = 0.0, stddev = stddev,shape = [self.n_hidden+1,1]))
y_encode = tf.matmul(c_concat,tf.tile(tf.expand_dims(W_hat,0),[self.batch_size,1,1]))
h_out, d_out = self.de_RNN(y_encode)
last_concat = tf.expand_dims(tf.concat([h_out[:,-1,:],d_out[-1]],axis = 1),1)
Wy = tf.get_variable(name = 'Wy', dtype = tf.float32,initializer = tf.truncated_normal(mean = 0.0, stddev = stddev,shape = [self.n_hidden+self.d_hidden,1]))
W_y = tf.tile(tf.expand_dims(Wy,0),[self.batch_size,1,1])
self.y_predict = tf.squeeze(tf.matmul(last_concat,W_y))
#self.loss += tf.reduce_mean(tf.square(self.label - self.y_predict)) # reduce_mean: avg of batch loss
self.loss_1 += tf.reduce_mean(tf.square(self.label - self.y_predict)) # reduce_mean: avg of batch loss
self.adversarial_gradient = tf.gradients(self.loss_1,self.input_x)
self.loss = self.loss_1 + self.original_loss
self.params = tf.trainable_variables()
#learning rate
self.alpha=tf.Variable(5e-4)
optimizer = tf.train.AdamOptimizer(self.alpha)
#self.train_op = optimizer.minimize(self.loss)
grad_var = optimizer.compute_gradients(loss = self.loss, var_list = self.params, aggregation_method = 2)
self.train_op = optimizer.apply_gradients(grad_var)
def en_RNN(self, input_x, scopes = 'fw_lstm'):
'''
input_x: b, T, d_in
output: h: seqence of output state b,T,n_hidden
state: final state b,n_hidden
'''
        with tf.variable_scope(scopes or 'fw_lstm') as scope:
try:
h,state = tf.nn.dynamic_rnn(
cell = self.encode_cell, inputs = input_x,
sequence_length = self.seq_len,
dtype = tf.float32, scope = 'fw_lstm')
except ValueError:
scope.reuse_variables()
h,state = tf.nn.dynamic_rnn(
cell = self.encode_cell, inputs = input_x,
sequence_length = self.seq_len,
dtype = tf.float32, scope = scopes)
return [h,state]
def de_RNN(self,input_y, scopes = 'de_lstm'):
with tf.variable_scope('dec_lstm') as scope:
try:
h,state = tf.nn.dynamic_rnn(
cell = self.decode_cell, inputs = input_y,
sequence_length = self.seq_len,
dtype = tf.float32, scope = 'de_lstm')
except ValueError:
scope.reuse_variables()
h,state = tf.nn.dynamic_rnn(
cell = self.decode_cell, inputs = input_y,
sequence_length = self.seq_len,
dtype = tf.float32, scope = scopes)
return [h,state]
def en_attention(self,fw_lstm,feature_k):
'''
fw_lstm: b,T,2n
feature_k: row k from brcast_UX, b,T
return: b,T
'''
with tf.variable_scope('encoder') as scope:
try:
mean = 0.0
stddev = 1.0/(self.n_hidden*self.time_step)
We = tf.get_variable(name = 'We', dtype=tf.float32,shape = [self.time_step, 2*self.n_hidden],
initializer=tf.truncated_normal_initializer(mean,stddev))
Ve = tf.get_variable(name = 'Ve',dtype=tf.float32,shape = [self.time_step,1],
initializer=tf.truncated_normal_initializer(mean,stddev))
except ValueError:
scope.reuse_variables()
We = tf.get_variable('We')
Ve = tf.get_variable('Ve')
# (b,T,2n) (b,2n,T)
W_e = tf.transpose(tf.tile(tf.expand_dims(We,0),[self.batch_size,1,1]),[0,2,1]) # b,2n,T
mlp = tf.nn.tanh(tf.matmul(fw_lstm,W_e) + tf.reshape(feature_k,[self.batch_size,1,self.time_step])) #b,T,T + b,1,T = b,T,T
V_e = tf.tile(tf.expand_dims(Ve,0),[self.batch_size,1,1])
return tf.matmul(mlp,V_e)
def dec_attention(self, dec_lstm, feature_t, scopes = None):
'''
dec_lstm: b,T,2*d_hidden
feature_k: row k from brcast_UX, b,T
return: b,T
'''
with tf.variable_scope(scopes or 'decoder') as scope:  # note: 'decoder' or scopes always evaluated to 'decoder'
try:
mean = 0.0
stddev = 1.0/(self.n_hidden*self.time_step)
Wd = tf.get_variable(name = 'Wd', dtype=tf.float32, shape = [self.n_hidden, 2*self.d_hidden],
initializer=tf.truncated_normal_initializer(mean,stddev))
Vd = tf.get_variable(name = 'Vd', dtype=tf.float32, shape = [self.n_hidden,1],
initializer=tf.truncated_normal_initializer(mean,stddev))
except ValueError:
scope.reuse_variables()
Wd = tf.get_variable('Wd')
Vd = tf.get_variable('Vd')
# (b,T,2*d_hidden) (b,2*d_hidden,T)
W_d = tf.transpose(tf.tile(tf.expand_dims(Wd,0),[self.batch_size,1,1]),[0,2,1]) # b,2*d_hidden,n_hidden
# (b,T,2*d_hidden) * (b,2*d_hidden,n_hidden) -> b,T,n_hidden
mlp = tf.nn.tanh(tf.matmul(dec_lstm,W_d) + tf.reshape(feature_t,[self.batch_size,1,self.n_hidden])) #b,T,n_hidden + b,1,n_hidden = b,T,n_hidden
V_d = tf.tile(tf.expand_dims(Vd,0),[self.batch_size,1,1])
return tf.matmul(mlp,V_d) #b,T,1
def predict(self,x_test,y_test,sess):
train_seq_len = np.ones(self.batch_size) * self.time_step
feed = {self.input_x: x_test,
self.seq_len: train_seq_len,
self.input_y: y_test}
y_hat = sess.run(self.y_predict,feed_dict = feed)
return y_hat
# +
w_data=df
batch_size = 7 * 24 * 4 * 2
INPUT_DIM = 6 # input feature
# time_steps = 7*24
time_steps = 24
n_hidden = 64 # encoder dim
d_hidden = 64 # decoder dim
total_epoch = 10
train_batch_num = int(len(w_data)*0.8/batch_size) ## 80% of data for training, 20% for testing
df_test = w_data[(train_batch_num*batch_size):]
df1 = w_data[:(train_batch_num*batch_size)]
steps_train = int((len(df1) - time_steps)/batch_size) - 1
steps_test = int((len(df_test) - time_steps)/batch_size) - 1
print ('Training data %i' % len(df1), "testing data %i" % len(df_test))
# +
# Check everything before training model
print(steps_train)
print(steps_test)
print(w_data.shape)
print(df_test.shape)
print(df1.shape)
x, y, labels, times = get_batch(df1,batch_size = batch_size, T = time_steps, step = steps_train-1)
print(x.shape)
print(y.shape)
print(labels.shape)
print(times.shape)
# +
current_epoch = 0
tf.reset_default_graph()
model = ts_prediction(input_dim=INPUT_DIM, time_step=time_steps, n_hidden=n_hidden, d_hidden=d_hidden, batch_size=batch_size)
init = tf.global_variables_initializer()
#memory policy
config=tf.ConfigProto()
config.gpu_options.allow_growth=True
sess = tf.Session(config=config)
sess.run(init)
while current_epoch <= total_epoch:
cumulative_loss = 0.0
start_time = time.time()
for t in range(steps_train - 1):
x, y, labels, _ = get_batch(df1,batch_size=batch_size, T=time_steps, step=t)
train_seq_len = np.ones(batch_size) * time_steps
feed = {model.input_x: x,
model.seq_len: train_seq_len,
model.input_y: y,
model.label: labels,
}
loss_1, adversarial_gradients, _ = sess.run([model.loss_1,model.adversarial_gradient, model.alpha],feed_dict = feed)
epsilon = np.random.uniform(0,0.5,1)
feed = {model.input_x: x + epsilon*adversarial_gradients[0],
model.seq_len: train_seq_len,
model.input_y: y,
model.label: labels,
model.original_loss: loss_1,
}
loss, _, _ = sess.run([model.loss, model.train_op, model.alpha], feed_dict = feed)
cumulative_loss += loss
end_time = time.time()
print("time:%.1f epoch:%i/%i loss:%f" % (end_time - start_time, current_epoch, total_epoch, cumulative_loss))
if cumulative_loss<3:
if cumulative_loss>2.5:
update=tf.assign(model.alpha,1e-4) #update the learning rate
sess.run(update)
print('Learning rate is updated to 1e-4')
else:
update=tf.assign(model.alpha,1e-5) #update the learning rate
sess.run(update)
print('Learning rate is updated to 1e-5')
current_epoch += 1
# +
test_loss = 0.0
y_hat_arr = np.empty(shape = [0])
y_labels_arr = np.empty(shape = [0])
time_batch = np.empty(shape = [0], dtype="str")
for t in range(steps_test-1):
x_test,y_test,labels_test, time_test = get_batch(df_test, batch_size=batch_size, T=time_steps, step=t, train=False)
y_hat = model.predict(x_test, y_test, sess)
y_hat_arr = np.concatenate([y_hat_arr, np.array(y_hat)])
y_labels_arr = np.concatenate([y_labels_arr, np.array(labels_test)])
time_batch = np.concatenate([time_batch, time_test])
test_loss += np.mean(np.square(y_hat - labels_test))
print ("the mean squared error for test data is %f " % (test_loss * 1.0 / steps_test))
# -
y_hat_arr
print(y_labels_arr.shape)
print(y_hat_arr.shape)
print(time_batch.shape)
"""
# Forecast 24 hours plot
from matplotlib import pyplot as plt
from matplotlib.pyplot import cm
%matplotlib inline
_FUTURE_HOURS = y_hat_arr.shape[1]
compare_data_index = [0, 100, 200, 300, 400, 500, 600, 700]
X_SHOW_FREQUENT = 4
Y_SHOW_FREQUENT = 24 * 7
x_stick = list(range(0, y_labels_arr.shape[1] + 1, X_SHOW_FREQUENT))
y_stick = list(range(0, y_labels_arr.shape[0] + 1, Y_SHOW_FREQUENT))
time_batch[[y_stick_value for y_stick_value in y_stick]]
x_stick_content = np.array([time_batch[0],'Pred 1 h', 'Pred 2 h', 'Pred 3 h', 'Pred 4 h',
'Pred 5 h', 'Pred 6 h', 'Pred 7 h', 'Pred 8 h',
'Pred 9 h', 'Pred 10 h', 'Pred 11 h', 'Pred 12 h',
'Pred 13 h', 'Pred 14 h', 'Pred 15 h', 'Pred 16 h',
'Pred 17 h', 'Pred 18 h', 'Pred 19 h', 'Pred 20 h',
'Pred 21 h', 'Pred 22 h', 'Pred 23 h', 'Pred 24 h'])
for index in compare_data_index:
x_stick_content[0] = time_batch[index]
fig, ax = plt.subplots(figsize=(20, 10))
plt.axis([0.0, _FUTURE_HOURS, 62.0, 82.0])
plt.gca().set_autoscale_on(False)
plt.plot(list(range(24)), y_hat_arr[index, :], 'blue', label="prediction label")
plt.plot(list(range(24)), y_labels_arr[index, :], 'red', label="true label")
plt.legend(bbox_to_anchor=(1.05, 1), loc=2, borderaxespad=0.)
plt.xticks(x_stick, x_stick_content[[x_stick_value for x_stick_value in x_stick]])
plt.show()
"""
# +
from matplotlib import pyplot as plt
# %matplotlib inline
plot_num = 24 * 7 * 4 * 2
SHOW_FREQUENT = 12 * 4
x_index = range(plot_num)
x_stick = list(range(0, plot_num, SHOW_FREQUENT))
for i in range(int(y_labels_arr.shape[0] / plot_num)):
#for i in range(3):
start_idx = i * plot_num
end_idx = (i + 1) * plot_num
plt.figure(figsize=(20, 10))
#plt.ylim([72.0, 82.0])
plt.xticks(x_stick, time_batch[[start_idx + x_stick_value for x_stick_value in x_stick]])
plt.xticks(rotation=30)
plt.plot(range(plot_num), y_hat_arr[start_idx:end_idx], 'b', label="prediction")
plt.plot(range(plot_num), y_labels_arr[start_idx:end_idx], 'r', label="ground truth")
plt.legend(bbox_to_anchor=(1.05, 1), loc=2, borderaxespad=0.)
plt.draw()
# -
| confident_prediction_for_time_serial_indoor_temp-tf_next_step_double_stage_attention.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Lab Session 2
# ## Exercise 1 - Spectrum analysis
#
# ### Sine
# #### 100 Hz
#
# 
#
# #### 500 Hz
#
# 
#
# #### 1 kHz
#
# 
#
# #### 10 kHz
#
# 
#
# #### 15 kHz
#
# 
#
# #### 20 kHz
#
# 
#
#
# #### Conclusion
#
# We can clearly see that the sinusoid results in a spectrum with a single sharp peak at the frequency of the sinusoid
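# The peak position can be checked numerically; below is a minimal NumPy sketch (sample rate and sine frequency are chosen here to match the 1 kHz case above):

```python
import numpy as np

fs = 44100                       # sampling rate in Hz
f0 = 1000                        # sine frequency in Hz
t = np.arange(fs) / fs           # one second of samples
x = np.sin(2 * np.pi * f0 * t)

# magnitude spectrum of the real signal and its frequency axis
spectrum = np.abs(np.fft.rfft(x))
freqs = np.fft.rfftfreq(len(x), d=1 / fs)
peak_freq = freqs[np.argmax(spectrum)]
print(peak_freq)                 # the peak sits at the sine's own frequency
```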
#
#
# ### Square
# #### 100 Hz
#
# 
#
# #### 500 Hz
#
# 
#
# #### 1 kHz
#
# 
#
# #### 10 kHz
#
# 
#
# #### 15 kHz
#
# 
#
# #### 20 kHz
#
# 
#
#
# ### Saw
# #### 100 Hz
#
# 
#
# #### 500 Hz
#
# 
#
# #### 1 kHz
#
# 
#
# #### 10 kHz
#
# 
#
# #### 15 kHz
#
# 
#
# #### 20 kHz
#
# 
#
#
# ### Noise
# #### Pink
#
# 
#
#
# #### White
#
# 
#
# #### Brownian
#
# 
# ## Exercise 2 - Sampling Rate
# ### Spectrum 1 (5 kHz + 10 kHz + 15 kHz @44.1 kHz)
#
# 
#
# ### Spectrum 2 (5 kHz + 10 kHz + 15 kHz @22.05 kHz)
#
# 
#
# ### Spectrum 3 (5 kHz + 10 kHz + 15 kHz @44.1 kHz downsampled to 22.05 kHz)
#
# 
#
#
# ### Conclusion
# Sampling at 44.1 kHz, we notice a considerable separation between the three mixed tracks. This spacing is reduced in the case of 22.05 kHz sampling.
# In the case of downsampling from 44.1 kHz to 22.05 kHz, we can see that one peak goes missing, a sign of information loss.
# The frequency range in fact now goes from 5 Hz to 12 kHz instead of from 11 Hz to 22 kHz
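# One way to see this information loss numerically: any component above the new Nyquist frequency folds back (aliases) under naive downsampling. Below is a minimal NumPy sketch of the 15 kHz component under 2:1 decimation without a low-pass filter (the frequencies are chosen to match the exercise):

```python
import numpy as np

fs = 44100                       # original sampling rate
f0 = 15000                       # the 15 kHz component
t = np.arange(fs) / fs
x = np.sin(2 * np.pi * f0 * t)

# naive downsampling to 22.05 kHz: keep every second sample, no low-pass filter
x_ds = x[::2]
fs_ds = fs // 2

spectrum = np.abs(np.fft.rfft(x_ds))
freqs = np.fft.rfftfreq(len(x_ds), d=1 / fs_ds)
alias_freq = freqs[np.argmax(spectrum)]
print(alias_freq)                # 15 kHz folds back to fs_ds - 15000 = 7050 Hz
```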
# ## Exercise 3
#
# ### MP3
# #### 64k
# The audio track is full of artifacts. A metallic noise due to the reduced bitrate can clearly be heard.
#
# 
#
# #### 128k
# The audio track is cleaner than the previous one, but rather metallic noises still appear where many instruments play together.
#
# 
#
# #### 320k
# The audio quality is good. No particular artifacts due to the reduced bitrate can be heard.
#
# 
#
#
# ### Uncompressed formats
#
# 
# ## Exercise 4
# The mystery word is made up of the digits `93553663`, as noted in the previous lab session.
# Using [T9 Cipher](https://www.dcode.fr/t9-cipher) it is possible to derive the correspondence `93553663 = WELLDONE`
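# The correspondence can also be checked with a few lines of Python. This is an illustrative sketch using the standard phone keypad layout (predictive T9: each letter maps to the single key that carries it):

```python
# standard phone keypad groups
keypad = {
    "2": "ABC", "3": "DEF", "4": "GHI", "5": "JKL",
    "6": "MNO", "7": "PQRS", "8": "TUV", "9": "WXYZ",
}
# invert the mapping: every letter points back to its key
letter_to_digit = {letter: digit
                   for digit, letters in keypad.items()
                   for letter in letters}

def word_to_t9(word):
    # encode a word as the digit sequence a T9 user would type
    return "".join(letter_to_digit[c] for c in word.upper())

print(word_to_t9("WELLDONE"))  # -> 93553663
```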
| 5-sem/multimedia-processing/exercises/esercitazione-2/esercitazione-2.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] id="19ae639982f1495b8165d987f13bce99"
# # WML Federated Learning with MNIST for Party using `ibm-watson-machine-learning`.
#
#
# -
# ### Learning Goals
#
# When you complete this notebook, you should know how to:
#
# - Load the data that you intend to use in the Federated Learning experiment.
# - Install IBM Federated Learning libraries.
# - Define a data handler for your dataset.
# - Configure the party to train data with the aggregator.
# <div class="alert alert-block alert-info">This notebook is intended to be run by the administrator or connecting party of the Federated Learning experiment.
# </div>
# ## Table of Contents
#
# 1. [Setup](#setup)
# 2. [Load the data](#load)
# 3. [Define a Data Handler](#data-handler)
# 4. [Configure the party](#config)
# 5. [Clean up](#clean)
# 6. [Summary and next steps](#summary)
# <a id="setup"></a>
# ## 1. Set up the environment
#
# Before you use the sample code in this notebook, you must perform the following setup tasks:
#
# - Create a <a href="https://console.ng.bluemix.net/catalog/services/ibm-watson-machine-learning/" target="_blank" rel="noopener no referrer">Watson Machine Learning (WML) Service</a> instance (a free plan is offered and information about how to create the instance can be found <a href="https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-service-instance.html?context=analytics" target="_blank" rel="noopener no referrer">here</a>).
# ### Connection to WML
#
# Authenticate the Watson Machine Learning service on IBM Cloud. You need to provide platform `api_key` and instance `location`.
#
# You can use [IBM Cloud CLI](https://cloud.ibm.com/docs/cli/index.html) to retrieve platform API Key and instance location.
#
# API Key can be generated in the following way:
# ```
# ibmcloud login
# ibmcloud iam api-key-create API_KEY_NAME
# ```
#
# From the output, copy the value of `api_key`.
#
#
# Location of your WML instance can be retrieved in the following way:
# ```
# ibmcloud login --apikey API_KEY -a https://cloud.ibm.com
# ibmcloud resource service-instance WML_INSTANCE_NAME
# ```
#
# From the output, copy the value of `location`.
# **Tip**: Your `Cloud API key` can be generated by going to the [**Users** section of the Cloud console](https://cloud.ibm.com/iam#/users). From that page, click your name, scroll down to the **API Keys** section, and click **Create an IBM Cloud API key**. Give your key a name and click **Create**, then copy the created key and paste it below. You can also get a service specific url by going to the [**Endpoint URLs** section of the Watson Machine Learning docs](https://cloud.ibm.com/apidocs/machine-learning). You can check your instance location in your <a href="https://console.ng.bluemix.net/catalog/services/ibm-watson-machine-learning/" target="_blank" rel="noopener no referrer">Watson Machine Learning (WML) Service</a> instance details.
#
# You can also get service specific apikey by going to the [**Service IDs** section of the Cloud Console](https://cloud.ibm.com/iam/serviceids). From that page, click **Create**, then copy the created key and paste it below.
#
# **Action**: Enter your `api_key` and `location` in the following cell.
api_key = 'PASTE YOUR PLATFORM API KEY HERE'
location = 'PASTE YOUR INSTANCE LOCATION HERE'
wml_credentials = {
"apikey": api_key,
"url": 'https://' + location + '.ml.cloud.ibm.com'
}
# ### Install and import the `ibm-watson-machine-learning` package with Federated Learning.
# **Note:** `ibm-watson-machine-learning` documentation can be found <a href="http://ibm-wml-api-pyclient.mybluemix.net/" target="_blank" rel="noopener no referrer">here</a>.
# !pip install --upgrade ibm-watson-machine-learning[fl]
# +
from ibm_watson_machine_learning import APIClient
client = APIClient(wml_credentials)
# -
#
#
# **Action**: Assign project ID below
project_id = 'PASTE YOUR PROJECT ID HERE'
client.set.default_project(project_id)
# <a id="load"></a>
# ## 2. Load the data
# ### Paste Variables From Admin Notebook
# Paste in the IDs you got from the end of the Part I notebook. If you have not run through Part I, open that notebook and run through it first.
RTS_ID = 'PASTE REMOTE SYSTEM ID FROM PART I NOTEBOOK'
TRAINING_ID = 'PASTE TRAINING ID FROM PART I NOTEBOOK'
# ### Download MNIST handwritten digits dataset
# As the party, you must provide the dataset that you will use to train the Federated Learning model. In this tutorial, a dataset is provided by default, the MNIST handwritten digits dataset.
# +
import requests
dataset_resp = requests.get("https://api.dataplatform.cloud.ibm.com/v2/gallery-assets/entries/903188bb984a30f38bb889102a1baae5/data",
allow_redirects=True)
f = open('MNIST-pkl.zip', 'wb')
f.write(dataset_resp.content)
f.close()
# +
import zipfile
import os
with zipfile.ZipFile("MNIST-pkl.zip","r") as file:
file.extractall()
# !ls -lh
# + [markdown] id="cc7a31c3db4b4b56997f7287e49cf8f4"
# <a id="data-handler"></a>
# ## 3. Define a Data Handler
# -
# The party should run a data handler to ensure that their datasets are in compatible format and consistent. In this tutorial, an example data handler for the MNIST dataset is provided.
#
# This data handler is written to the local working directory of this notebook
# + id="7560ec21701e460d844563df08854f73"
import requests
data_handler_content_resp = requests.get("https://github.com/IBMDataScience/sample-notebooks/raw/master/Files/mnist_keras_data_handler.py",
headers={"Content-Type": "application/octet-stream"},
allow_redirects=True)
f = open('mnist_keras_data_handler.py', 'wb')
f.write(data_handler_content_resp.content)
f.close()
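# As a rough illustration of what a data handler does, here is a minimal sketch. This is not the real `MnistTFDataHandler` from the downloaded file; the class name, the `get_data` method, and the preprocessing shown are assumptions for illustration only:

```python
import numpy as np

# Illustrative sketch only, not the real MnistTFDataHandler.
# A data handler loads the party's local dataset and returns it in a uniform format.
class SketchDataHandler:
    def __init__(self, x, y):
        self.x = x.astype("float32") / 255.0   # scale pixel values to [0, 1]
        self.y = np.eye(10)[y]                 # one-hot encode the digit labels

    def get_data(self):
        # (train, test) tuples; the same split is returned twice for brevity
        return (self.x, self.y), (self.x, self.y)

x = np.random.randint(0, 256, size=(4, 28, 28))
y = np.array([0, 1, 2, 3])
(train_x, train_y), _ = SketchDataHandler(x, y).get_data()
print(train_x.shape, train_y.shape)
```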
# + [markdown] id="7e29d269ecd74ca79655c17412f87752"
# ### Verify Data Handler Exists
# + id="4efb89dbfece41608639b7d0ab83f279"
# !ls -lh
# + [markdown] id="7791702cf85e4be28653a6bdb64abb87"
# <a id="config"></a>
# ## 4. Configure the party
# -
# Here you can finally connect to the aggregator to begin training.
# Each party must run their party configuration file to call out to the aggregator. Here is an example of a party configuration.
#
# Because you have already defined the training ID, RTS ID and data handler in the previous sections of this notebook, and the local training and protocol handler are all defined by the SDK, you will only need to define the information for the dataset file under `["data"]["info"]`.
#
# In this tutorial, the data path is already defined, as we loaded the example MNIST dataset in the previous sections.
# + id="b083f5c03e764e5193ae15ecfc24df63"
from pathlib import Path
# working_dir = !pwd
pwd = working_dir[0]
party_metadata = {
client.remote_training_systems.ConfigurationMetaNames.DATA_HANDLER: {
"info": {
"train_file": pwd + "/mnist-keras-train.pkl",
"test_file": pwd + "/mnist-keras-test.pkl"
},
"name": "MnistTFDataHandler",
"path": "./mnist_keras_data_handler.py"
}
}
# + [markdown] id="21bb9c095dff42118cd38067820224ed"
# ### Establish Connection To Aggregator and Start Training
# + id="f462b534-c22c-4498-9042-2ee8b8849241"
party = client.remote_training_systems.create_party(RTS_ID, party_metadata)
party.run(aggregator_id=TRAINING_ID, asynchronous=False)
# -
# <a id="clean"></a>
# # 5. Clean up
# If you want to clean up all created assets:
# - experiments
# - trainings
# - pipelines
# - model definitions
# - models
# - functions
# - deployments
#
# please follow up this sample [notebook](https://github.com/IBM/watson-machine-learning-samples/blob/master/cloud/notebooks/python_sdk/instance-management/Machine%20Learning%20artifacts%20management.ipynb).
# <a id="summary"></a>
# # 6. Summary and next steps
# You successfully completed this notebook!
# You have learned to:
# 1. Start a Federated Learning experiment
# 2. Load a template model
# 3. Create an RTS and launch the experiment job
# 4. Load a dataset for training
# 5. Define the data handler
# 6. Configure the party
# 7. Connect to the aggregator
# 8. Train your Federated Learning model
#
# Check out our _[Online Documentation](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/welcome-main.html?context=analytics)_ for more samples, tutorials, documentation, how-tos, and blog posts.
# ### Author
#
# **<NAME>**, Software Developer at IBM.
# Copyright © 2020, 2021 IBM. This notebook and its source code are released under the terms of the MIT License.
| cloud/notebooks/python_sdk/experiments/federated_learning/Federated Learning Demo Part II - for Party.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Get dark matter flux for annihilation and decay of dark matter
#
# So, we can now compute the gamma-ray flux for particular targets. This notebook shows how to accomplish that by using the ```dmspectrum``` class
#
# As usual, we start by importing the required packages
import matplotlib.pyplot as plt
import numpy as np
from ctaAnalysis.dmspectrum.dmspectra import dmspectrum
# ## Properties of the dark matter candidate and the target
#
# First, we define some parameters and properties related to our dark matter candidate and the target of interest
# +
mass = 1.e+4
z = 0.017284
channel = 'W'
emin = 30.0
emax = mass
sigmav = 3.e-26
jfactor = 10**(18.48)
lifetime = 1.e+30
dfactor = 1.59e19
# -
# ## Getting the spectrum
#
# We create an instance of the ```dmspectrum``` class. We can change the properties to compute either the annihilation or the decay gamma-ray spectrum. In this case, we will use the full constructor to check all the parameters of the class.
dmspec = dmspectrum(mass, emin, emax, channel, z, process='anna',\
eblmod='franceschini2017', has_EW=True, epoints=100)
# We get the differential spectrum $dN/dE$ for the annihilation case. To compute the gamma-ray flux, we will use:
#
# $\frac{d\Phi}{dE} = \frac{\langle\sigma_{\chi}v\rangle}{8\pi m_{\chi}^2}~\frac{dN}{dE}\times J_\text{factor}$
dnde = dmspec.spectra()
dphide = sigmav * dnde * jfactor / (8 * np.pi * mass**2)
# For decay, we can do exactly the same. The differential gamma-ray flux is obtained by using:
#
# $\frac{d\Phi}{dE} = \frac{1}{4\pi m_{\chi}\tau_{\chi}}~\frac{dN}{dE}\times D_\text{factor}$
#
# We need to change the process of the ```dmspec``` instance
# +
dmspec.process = 'decay'
dmspec.emax = mass/2.
dnde_dec = dmspec.spectra()
dphide_dec = dnde_dec * dfactor / (4 * np.pi * mass * lifetime)
# -
# And, we can plot the spectrum for this particular case:
# +
fig, ax = plt.subplots(figsize=(9, 6))
energies = dmspec.energy
ax.plot(energies, energies**2*dphide, color=(0.82,0.10,0.26), lw=4, label='Annihilation')
ax.plot(energies, energies**2*dphide_dec, color=(0.29,0.33,0.13), lw=4, label='Decay')
ax.set_xlim(1.e+1, 1.e+5)
ax.set_ylim(1.e-17, 1.e-12)
ax.set_xscale('log')
ax.set_yscale('log')
ax.set_xlabel('Energy $(GeV)$')
ax.set_ylabel('$E^{2}\\frac{\\mathrm{d}\\Phi}{\\mathrm{d}E}$ ($cm^{-2} GeV^{-1} s^{-1}$)')
ax.legend(loc='best', prop={'size':12}, title='$M_{\\chi}=10{TeV}$', title_fontsize=14)
pfile = 'DMFlux10TeVW.png'
plt.savefig(pfile)
# -
# ## Comparison with CLUMPY (Decay)
#
# Now, we can compare the spectrum obtained with the ```dmspectrum``` class and **Clumpy**. You can follow the instructions in [Clumpy Documentation](https://clumpy.gitlab.io/CLUMPY/physics_1Dtoflux.html). Shortly:
#
# ```
# $ $CLUMPY/bin/clumpy -z --gPP_BR=0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0 --gPP_FLAG_SPECTRUMMODEL=kCIRELLI11_EW --gPP_DM_DECAY_LIFETIME=1.e+30 --gPP_DM_IS_ANNIHIL_OR_DECAY=0 --gPP_DM_MASS_GEV=1.e+4 --gSIM_FLUX_EMIN_GEV=30 --gSIM_FLUX_EMAX_GEV=0.49e+4 --gSIM_FLUX_FLAG_FINALSTATE=kGAMMA --gSIM_NX=100 --gSIM_XPOWER=2 --gSIM_IS_ASTRO_OR_PP_UNITS=0 --gSIM_JFACTOR=1.59e+19 --gSIM_REDSHIFT=0.017824 --gSIM_EPS=0.01 --gSIM_OUTPUT_DIR=./ -p --gEXTRAGAL_FLAG_ABSORPTIONPROFILE=kDOMINGUEZ11_REF --gSIM_IS_WRITE_ROOTFILES=0
# ```
#
# I edited the output file from **Clumpy** and the header looks like:
#
# # E E2dphide
#
# Please note that the output file from **Clumpy** has *MeV* as the unit for energies
# Load data from clumpy
dphide_clumpy = np.genfromtxt('spectra_CIRELLI11_EW_GAMMA_m10000.txt', names=True)
print(dphide_clumpy['E2dphide'])
# +
fig, ax = plt.subplots(figsize=(9, 6))
energies = dmspec.energy
ax.plot(dphide_clumpy['E']*1.e-3, dphide_clumpy['E2dphide']*1.e-3, color=(0.82,0.10,0.26), lw=4, label='Clumpy')
ax.plot(energies, energies**2*dphide_dec, color=(0.29,0.33,0.13), lw=2, label='dmspectrum class')
ax.set_xlim(1.e+1, 1.e+5)
ax.set_ylim(1.e-17, 1.e-10)
ax.set_xscale('log')
ax.set_yscale('log')
ax.set_xlabel('Energy $(GeV)$')
ax.set_ylabel('$E^{2}\\frac{\\mathrm{d}\\Phi}{\\mathrm{d}E}$ ($cm^{-2} GeV^{-1} s^{-1}$)')
ax.legend(loc='best', prop={'size':12}, title='$M_{\\chi}=10{TeV}$', title_fontsize=14)
pfile = 'DMFlux10TeVWComparisonDecay.png'
plt.savefig(pfile)
# -
| notebooks/dmflux.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
import pandas as pd
import matplotlib.pyplot as plt
df = pd.read_csv("winefinal.csv")
df.tail()
df1 = df.loc[:,["new/old","type","sweet","body","fruits","trees","pure","abundunt","fresh"]]  # .ix is removed in modern pandas
df1.plot.box()
plt.show()
df2 = df.iloc[:,1:]
df2.plot.box()
plt.show()
df3 = df.loc[:,"nuts"]
df3.plot.box()
plt.show()
df4 = df.loc[:,"fruits"]
df4.plot.box()
plt.show()
| winedata_boxplot.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# ---
# + [markdown] application/vnd.databricks.v1+cell={"inputWidgets": {}, "nuid": "18f9acd5-a470-492d-91c8-d233d1a1b68e", "showTitle": false, "title": ""}
# ## Great Expectations
# A simple demonstration of how to use the basic functions of the Great Expectations library with Pyspark
# + application/vnd.databricks.v1+cell={"inputWidgets": {}, "nuid": "842a92c7-0ff8-4f27-83b8-b0bfa68069f0", "showTitle": false, "title": ""}
# if you don't want to install great_expectations from the clusters menu you can install direct like this
dbutils.library.installPyPI("great_expectations")
# + application/vnd.databricks.v1+cell={"inputWidgets": {}, "nuid": "79c7119d-ec25-4b5d-b4c1-df43c9ade8b2", "showTitle": false, "title": ""}
import great_expectations as ge
import pandas as pd
# + application/vnd.databricks.v1+cell={"inputWidgets": {}, "nuid": "dc23346e-4c9b-4219-ac9b-002bbb17f769", "showTitle": false, "title": ""}
# first lets create a simple dataframe
data = {
"String": ["one", "two", "two",],
"Value": [1, 2, 2,],
}
# lets create a pandas dataframe
pd_df = pd.DataFrame.from_dict(data)
# we can use pandas to avoid needing to define schema
df = spark.createDataFrame(
pd_df
)
# + [markdown] application/vnd.databricks.v1+cell={"inputWidgets": {}, "nuid": "0774a948-5af8-4665-8d73-3121d75e2be0", "showTitle": false, "title": ""}
# ## Creating Great Expectations Datasets for Pandas and PySpark
# + application/vnd.databricks.v1+cell={"inputWidgets": {}, "nuid": "d32ee40c-b5c4-474e-a67a-8fb522b35436", "showTitle": false, "title": ""}
# now let us create the appropriate great-expectations Dataset objects
# for pandas we create a great expectations object like this
pd_df_ge = ge.from_pandas(pd_df)
# while for pyspark we can do it like this
df_ge = ge.dataset.SparkDFDataset(df)
# + [markdown] application/vnd.databricks.v1+cell={"inputWidgets": {}, "nuid": "a68d8845-0841-4272-97f1-74157a431ea0", "showTitle": false, "title": ""}
# ## Running Great Expectations tests
#
# Expectations return a dictionary of metadata, including a boolean "success" value
# + application/vnd.databricks.v1+cell={"inputWidgets": {}, "nuid": "1d8d936a-fd50-47de-ba5e-90819cdd88a3", "showTitle": false, "title": ""}
# this works the same for both Pandas and PySpark Great Expectations datasets
print(pd_df_ge.expect_table_row_count_to_be_between(1,10))
print(df_ge.expect_table_row_count_to_be_between(1,10))
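# Since each expectation returns a plain dictionary, a pipeline step can be gated on its `success` flag. A minimal sketch follows; the dict is hand-built in the same shape as the results printed above, so it runs without a cluster:

```python
def require(expectation_result):
    # raise when an already-computed expectation result reports failure
    if not expectation_result["success"]:
        raise ValueError("data quality check failed: %r" % expectation_result)
    return expectation_result

# a hand-built dict with the same shape as the results printed above
ok = {"success": True, "result": {"observed_value": 3}}
print(require(ok)["success"])
```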
# + [markdown] application/vnd.databricks.v1+cell={"inputWidgets": {}, "nuid": "444158b1-ab63-4be2-a298-0ea2b8dc848c", "showTitle": false, "title": ""}
# ### Differences between Great Expectations Pandas and Pyspark Datasets
# + application/vnd.databricks.v1+cell={"inputWidgets": {}, "nuid": "70ad62b9-a33a-43d2-ad17-bd19f3000024", "showTitle": false, "title": ""}
# pandas datasets inherit all the pandas dataframe methods
print(pd_df_ge.count())
# while GE PySpark datasets do not, and the following line raises an error
print(df_ge.count())
# + application/vnd.databricks.v1+cell={"inputWidgets": {}, "nuid": "ca441b96-a2b3-42f2-9b7f-9a733cb7c194", "showTitle": false, "title": ""}
# however you can access the original pyspark dataframe using df_ge.spark_df
df_ge.spark_df.count()
# + [markdown] application/vnd.databricks.v1+cell={"inputWidgets": {}, "nuid": "7b541e5b-c474-46a3-9566-6052b3d145ed", "showTitle": false, "title": ""}
# # Taking Great Expectations further
#
# If you want to make use of Great Expectations data context features you will need to install a data context. details can be found here https://docs.greatexpectations.io/en/latest/guides/how_to_guides/configuring_data_contexts/how_to_instantiate_a_data_context_on_a_databricks_spark_cluster.html
| Great Expectations.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import numpy as np
import pandas as pd
import matplotlib
from matplotlib import pyplot as plt
import matplotlib.patches as mpatches
matplotlib.style.use('ggplot')
# %matplotlib inline
# +
from sklearn.decomposition import PCA
mu = np.zeros(2)
C = np.array([[3,1],[1,2]])
data = np.random.multivariate_normal(mu, C, size=50)
plt.scatter(data[:,0], data[:,1])
plt.show()
# +
v, W_true = np.linalg.eig(C)
plt.scatter(data[:,0], data[:,1])
# plot the true components, along which the variance of the data is maximal
plt.plot(data[:,0], (W_true[0,0]/W_true[0,1])*data[:,0], color="g")
plt.plot(data[:,0], (W_true[1,0]/W_true[1,1])*data[:,0], color="g")
g_patch = mpatches.Patch(color='g', label='True components')
plt.legend(handles=[g_patch])
plt.axis('equal')
limits = [np.minimum(np.amin(data[:,0]), np.amin(data[:,1])),
np.maximum(np.amax(data[:,0]), np.amax(data[:,1]))]
plt.xlim(limits[0],limits[1])
plt.ylim(limits[0],limits[1])
plt.draw()
# -
def plot_principal_components(data, model, scatter=True, legend=True):
W_pca = model.components_
if scatter:
plt.scatter(data[:,0], data[:,1])
plt.plot(data[:,0], -(W_pca[0,0]/W_pca[0,1])*data[:,0], color="c")
plt.plot(data[:,0], -(W_pca[1,0]/W_pca[1,1])*data[:,0], color="c")
if legend:
c_patch = mpatches.Patch(color='c', label='Principal components')
plt.legend(handles=[c_patch], loc='lower right')
# make the plots look nice:
plt.axis('equal')
limits = [np.minimum(np.amin(data[:,0]), np.amin(data[:,1]))-0.5,
np.maximum(np.amax(data[:,0]), np.amax(data[:,1]))+0.5]
plt.xlim(limits[0],limits[1])
plt.ylim(limits[0],limits[1])
plt.draw()
# +
model = PCA(n_components=2)
model.fit(data)
plt.scatter(data[:,0], data[:,1])
# plot the true components, along which the variance of the data is maximal
plt.plot(data[:,0], (W_true[0,0]/W_true[0,1])*data[:,0], color="g")
plt.plot(data[:,0], (W_true[1,0]/W_true[1,1])*data[:,0], color="g")
# plot the components obtained using the PCA method:
plot_principal_components(data, model, scatter=False, legend=False)
c_patch = mpatches.Patch(color='c', label='Principal components')
plt.legend(handles=[g_patch, c_patch])
plt.draw()
# +
data_large = np.random.multivariate_normal(mu, C, size=5000)
model = PCA(n_components=2)
model.fit(data_large)
plt.scatter(data_large[:,0], data_large[:,1], alpha=0.1)
# plot the true components, along which the variance of the data is maximal
plt.plot(data_large[:,0], (W_true[0,0]/W_true[0,1])*data_large[:,0], color="g")
plt.plot(data_large[:,0], (W_true[1,0]/W_true[1,1])*data_large[:,0], color="g")
# plot the components obtained using the PCA method:
plot_principal_components(data_large, model, scatter=False, legend=False)
c_patch = mpatches.Patch(color='c', label='Principal components')
plt.legend(handles=[g_patch, c_patch])
plt.draw()
# -
| Yandex data science/3/Week 2/Untitled.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # 1. Python program to add 2 numbers
a=int(input('Enter number 1 \t' ))
b=int(input('Enter number 2 \t' ))
sum=a+b
print(sum)
# # 2. Python program for Factorial
def factorial(num):
if num==1 or num==0 :
return 1
else:
res=num*factorial(num-1)
return res
num= int(input('Enter a Number'))
ans=factorial(num)
print(ans)
# ### One line execution
def factorial(num):
return 1 if(num==1 or num==0) else num*factorial(num-1)
num= int(input('Enter a Number'))
ans=factorial(num)
print(ans)
# # 3. Simple Interest
p=float(input('Enter amount'))
t=float(input('Enter time'))
r=float(input('Enter rate'))
interest=(p*t*r)/100
print(interest)
# # 4. Armstrong Number
num=(input('Enter a number\t'))
length=len(num)
realnum=int(num)
sum=0
rem=[]
for i in range(length):
rem.append(realnum%10)
realnum=realnum//10
for i in range(length):
sum+=rem[i]**length
if sum==int(num):
print('Number is Armstrong')
else:
print('Not Armstrong')
# # 5. Finding prime numbers in an interval
num1=int(input('Enter a starting number'))
num2=int(input('Enter an ending number'))
for i in range(max(num1, 2), num2+1):  # primes start at 2, so skip 0 and 1
    for j in range(2, i):
        if i%j==0:
            break
    else:
        print(i)
# # 6. Program to check whether a number is prime or not
num=int(input('Enter a Number'))
if num<2:
    print('Not Prime')  # 0 and 1 are not prime
else:
    for i in range(2,num):
        if num%i==0:
            print('Not Prime')
            break
    else:
        print('Prime')
# # 7. Fibonacci Numbers
number=int(input('Enter Number'))
fir=0
sec=1
print(fir)
print(sec)
for i in range(3,number+1):
res=fir+sec
fir=sec
sec=res
print(res)
ch='ab'
l=len(ch)
for i in range(l):
print(ord(ch[i]))
# # LISTS
# # 1. Find sum of array
arr=[1,2,3,4,5]
arr.append(6)
arr
arr.pop()
arr
total=0  # avoid shadowing the built-in sum
for i in arr:
    total=total+i
print(total)
# # 2. Finding the largest element in array
arr=[1,2,3,4,5]
largest=arr[0]  # start from the first element so negative values are handled too
for i in arr:
    if i>largest:
        largest=i
print(largest)
# # 3. Python for array reversing
#
reverse=[]
lent=len(arr)
print(lent)
for i in range(lent):
reverse.append(arr[lent-(i+1)])
print(reverse)
# # 4. Python program to split an array in half and add the first half to the end
arr=[1,2,3,4,5,6]
lent=len(arr)
split=lent//2  # integer division so the result can be used as a slice index
arr=arr[split:]+arr[:split]
print(arr)  # [4, 5, 6, 1, 2, 3]
import numpy as np
arr=np.array(arr)
arr.max()
arr.size
np.exp(arr)
np.info(np.mean)
| PYTHON BASIC PROGRAMS.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="-HliM7bt2nUN"
# # **Data Science Graduate Program**
#
# **Course: Python Programming Language**
#
# **Prof.: <NAME>, DSc**
# + [markdown] id="dTfydFyz2pUM"
# **Lecture 05**
# + [markdown] id="o2MhEwD827c4"
# #**Linear Regression**
# + colab={"base_uri": "https://localhost:8080/", "height": 283} id="DNGIszb72hAC" outputId="fc3c3441-3363-46ee-f085-91f8b5605c1a"
import matplotlib.pyplot as plt
from scipy import stats
def gerar_dados_l01():
x = [5,7,8,7,2,17,2,9,4,11,12,9,6]
y = [99,86,87,88,111,86,103,87,94,78,77,85,86]
return x, y
def f_regressao_linear(x, y): #f(x) = coef_angular * x + coef_linear
coef_angular, coef_linear, r, p, std_err = stats.linregress(x, y)
return coef_angular, coef_linear
def funcao_linear(x, coeficiente_angular, coeficiente_linear):
f = coeficiente_angular * x + coeficiente_linear
return f
x, y = gerar_dados_l01()
coef_angular, coef_linear = f_regressao_linear(x, y)
regressao_linear = []
for w in x:
f = funcao_linear(w, coef_angular, coef_linear)
regressao_linear.append(f)
plt.scatter(x, y, color = 'red')
plt.plot(x, regressao_linear, color = 'blue')
plt.show()
#print(regressao_linear)
f = funcao_linear(9, coef_angular, coef_linear)
print('f(9)={}'.format(f))
# + [markdown] id="Sput-5iF23nS"
# #**Polynomial Regression**
# + colab={"base_uri": "https://localhost:8080/", "height": 282} id="rvisCech24PL" outputId="19d3d4be-e628-4fb7-a766-7032ed045c87"
import numpy as np
import matplotlib.pyplot as plt
from sklearn.metrics import r2_score
def gerar_dados_p01():
x = [1,2,3,5,6,7,8,9,10,12,13,14,15,16,18,19,21,22] #data
y = [100,90,80,60,60,55,60,65,70,70,75,76,78,79,90,99,99,100]
return x, y
def f_regressao_polinomial(x, y):
modelo = np.poly1d(np.polyfit(x, y, 3))
return modelo
x, y = gerar_dados_p01()
modelo = f_regressao_polinomial(x, y)
linha = np.linspace(1, 22, 100)
plt.scatter(x, y, color='red')
plt.plot(x, y, color='green')
plt.plot(linha, modelo(linha), color='blue')
plt.show()
erro=r2_score(y, modelo(x))
print('Erro de aproximação: {}'.format(erro))
# + colab={"base_uri": "https://localhost:8080/"} id="iNpztbmWfYCD" outputId="56305f11-b9aa-43df-cd73-1e90573627ca"
print(modelo(22))
print(modelo(23))
print(modelo(24))
print(modelo(25))
# + [markdown] id="_YfCka77BP-N"
# #**Decision Tree**
# + [markdown] id="IfiV2CU7DWy8"
# **example 01**
# + id="SFQSWD85DIuh"
import graphviz  # needed below to render the exported tree
from sklearn.datasets import load_iris
from sklearn import tree
import collections
# Data
X = [ [180, 15, 0],
[177, 42, 0],
[136, 35, 1],
[174, 65, 0],
[141, 28, 1]]
Y = ['homem',
'mulher',
'mulher',
'homem',
'mulher']
nomes_das_caracteristicas = ['altura',
'comprimento do cabelo',
'tom de voz' ]
# + id="pwgXEqCSGWSs"
# Training
clf = tree.DecisionTreeClassifier()
clf = clf.fit(X,Y)
# + colab={"base_uri": "https://localhost:8080/", "height": 518} id="TdDmoo0AGtSD" outputId="e309bf1b-fbdc-415a-c3d2-15aaa47d922a"
# Visualize data
dot_data = tree.export_graphviz(clf,
feature_names=nomes_das_caracteristicas,
out_file=None,
filled=True,
rounded=True)
grafico = graphviz.Source(dot_data)
grafico
# + colab={"base_uri": "https://localhost:8080/"} id="d9ydDAS0LkMW" outputId="66072c64-b626-427e-b09f-05f8bd39ba49"
# Test the tree
X_test = [[180, 15, 0]]
y_pred = clf.predict(X_test)
y_pred
# + colab={"base_uri": "https://localhost:8080/"} id="9LokRGl9L5yU" outputId="2c83fcff-b12c-480a-a252-583ae43a8f26"
# Test the tree
X_test = [[180, 30, 1]]
y_pred = clf.predict(X_test)
y_pred
| Aula_05_Pos_Ciencia_De_Dados.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Report
# Authors: <NAME>, <NAME>, <NAME>
#
# ## Dataset
# We chose the [cervical cancer](https://datahub.io/machine-learning/cervical-cancer) dataset, which contains medical records of female patients, some of whom were diagnosed with cervical cancer. It consists of 835 records. Columns:
# - Age - patient's age
# - Number of sexual partners
# - First sexual intercourse - age of first sexual intercourse
# - Num of pregnancies - number of pregnancies
# - Smokes - whether she smokes
# - Smokes (years) - for how many years she has smoked
# - Smokes (packs/year) - number of packs smoked per year
# - Hormonal Contraceptives - whether she uses hormonal contraception
# - Hormonal Contraceptives (years) - for how many years she has used hormonal contraception
# - IUD - whether she uses an intrauterine device
# - IUD (years) - for how many years she has used an intrauterine device
# - STDs - whether she has or has had a sexually transmitted disease
# - STDs (number) - how many sexually transmitted diseases she has or has had (the following columns starting with STDs denote specific diseases)
# - STDs:condylomatosis
# - STDs:cervical condylomatosis
# - STDs:vaginal condylomatosis
# - STDs:vulvo-perineal condylomatosis
# - STDs:syphilis
# - STDs:pelvic inflammatory disease
# - STDs:genital herpes
# - STDs:molluscum contagiosum
# - STDs:AIDS
# - STDs:HIV
# - STDs:Hepatitis B
# - STDs:HPV
# - STDs: Number of diagnosis
# - STDs: Time since first diagnosis
# - STDs: Time since last diagnosis
# - Dx:Cancer - whether she was ever diagnosed with cancer
# - Dx:CIN - whether she was diagnosed with cervical intraepithelial neoplasia
# - Dx:HPV - whether she was infected with HPV
# - Dx - whether any of the above was diagnosed
# Target variable columns (cervical cancer diagnosis methods):
# - Hinselmann
# - Schiller
# - Citology
# - Biopsy
#
# Most of these columns contain missing values, withheld for patients' personal reasons.
# ### Target variable
# The goal of the task is to predict cervical cancer from the available data.
#
# The original dataset has four target columns, but we decided to restrict ourselves to one. We created our own target column as the logical OR of those four and named it *cancer*.
#
# ## Dataset Analysis
# +
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
from pandas_profiling import ProfileReport
data = pd.read_csv('cervical-cancer_csv.csv')
# -
from pandas_profiling import ProfileReport
profile = ProfileReport(data, title='Automatyczny Raport Pandas Profiling', html={'style':{'full_width':True}})
profile.to_file(output_file="raportautomatyczny.html")
# +
# Distribution of the categorical variables
categorical = ['Smokes', 'Hormonal Contraceptives', 'IUD', 'STDs', 'STDs (number)', 'STDs:condylomatosis',
'STDs:cervical condylomatosis', 'STDs:vaginal condylomatosis', 'STDs:vulvo-perineal condylomatosis',
'STDs:syphilis', 'STDs:pelvic inflammatory disease', 'STDs:genital herpes', 'STDs:molluscum contagiosum',
'STDs:AIDS', 'STDs:HIV', 'STDs:Hepatitis B', 'STDs:HPV', 'STDs: Number of diagnosis',
'Dx:Cancer', 'Dx:CIN', 'Dx:HPV', 'Dx', 'Hinselmann', 'Schiller', 'Citology', 'Biopsy']
fig, axes = plt.subplots(nrows=13, ncols=2, figsize=(12, 36))
axes = axes.flatten()
total = len(data)
for idx, ax in enumerate(axes):
plt.sca(ax)
ax = sns.countplot(x = categorical[idx], data=data)
for p in ax.patches:
height = p.get_height()
ax.text(p.get_x()+p.get_width()/2.,
height + 3,
'{:3.0f}%'.format(100 * height/total),
ha="center")
sns.despine()
plt.tight_layout()
plt.show()
# -
# ### The missing-data problem
# Because of the numerous missing values we made the following decisions:
# - since they contain (almost) nothing but 0, we drop the following columns:
# * STDs:cervical condylomatosis
# * STDs:vaginal condylomatosis
# * STDs:pelvic inflammatory disease
# * STDs:genital herpes
# * STDs:molluscum contagiosum
# * STDs:AIDS
# * STDs:Hepatitis B
# * STDs:HPV
# * Dx:CIN
# - remaining categorical columns: since we will use one-hot encoding, we add an extra column flagging missing values
# - continuous data: they will be normalized, so setting missing values to 0 (i.e. the mean) is safe
#
# ### Initial imputation and preprocessing template
# +
# default preprocessing with our changes
import pandas as pd
import numpy as np
from sklearn.preprocessing import StandardScaler, MinMaxScaler
dane = pd.read_csv('cervical-cancer_csv.csv')
# drop columns
dane = dane.drop(['STDs:cervical condylomatosis',
'STDs:vaginal condylomatosis',
'STDs:pelvic inflammatory disease',
'STDs:genital herpes',
'STDs:molluscum contagiosum',
'STDs:AIDS',
'STDs:Hepatitis B',
'STDs:HPV', 'Dx:CIN'], axis=1)
# fill missing values and encode categorical variables
def column_nodata(df, column_name):
df[column_name + "_null"] = df[column_name].apply(lambda x: 1 if pd.isnull(x) else 0)
df[column_name] = df[column_name].fillna(0)
def replace_in_column(df, column_name, src, dst):
df[column_name] = df[column_name].replace(to_replace=src, value=dst)
replace_in_column(dane, 'STDs (number)', [3, 4], 2)
replace_in_column(dane, 'STDs: Number of diagnosis', [2,3], 1)
nodata_categories = [
'Smokes',
'Hormonal Contraceptives',
'IUD',
'STDs',
'STDs (number)',
'STDs:condylomatosis',
'STDs:vulvo-perineal condylomatosis',
'STDs:syphilis',
'STDs:HIV'
]
for category in nodata_categories:
column_nodata(dane, category)
dane = pd.concat([dane, pd.get_dummies(dane['STDs (number)'], prefix='STDs_')],axis=1)
dane.drop(['STDs (number)'],axis=1, inplace=True)
# standardization
numerical = ['Age', 'Number of sexual partners', 'First sexual intercourse', 'Num of pregnancies', 'Smokes (years)',
'Smokes (packs/year)', 'Hormonal Contraceptives (years)', 'IUD (years)', 'STDs: Time since first diagnosis',
'STDs: Time since last diagnosis']
scaler = MinMaxScaler() #MinMaxScaler
dane_scaled = scaler.fit_transform(dane[numerical])
d2 = pd.DataFrame(dane_scaled, columns = numerical)
dane[numerical] = d2[numerical]
# create a single target
targets = ['Hinselmann', 'Schiller', 'Citology', 'Biopsy']
def has_cancer(row):
for target in targets:
if row[target] == 1:
return 1
return 0
dane['cancer'] = dane.apply(lambda row: has_cancer(row), axis=1)
dane = dane.drop(targets, axis=1)
# -
# ## Data Split and Metrics
# +
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score
from sklearn.metrics import precision_score
from sklearn.metrics import recall_score
from sklearn.metrics import f1_score
from sklearn.metrics import roc_auc_score
from sklearn.metrics import average_precision_score
# split into training and test sets
def default_split(X, y):
return train_test_split(X, y, test_size=0.2, random_state=2137)
# scoring
def scoring(y_test, y_predicted):
print("ACC = ", accuracy_score(y_test, y_predicted))
print("PREC = ", precision_score(y_test, y_predicted))
print("RECALL = ", recall_score(y_test, y_predicted))
print("F1 = ", f1_score(y_test, y_predicted))
print("AUPRC = ", average_precision_score(y_test, y_predicted))
print("AUC = ", roc_auc_score(y_test, y_predicted))
# extract y
def extract_y(data):
y = data[["cancer"]]
return data.drop(["cancer"], axis=1), y
# -
# We split the dataset into a training set of 80% and a test set of 20% of the original data.
# We created a function that reports six different metrics:
# * accuracy, i.e. $\frac{TP + TN}{FP + FN + TN + TP}$
# * precision, i.e. $\frac{TP}{TP+FP}$
# * recall, i.e. $\frac{TP}{TP+FN}$
# * F1, i.e. $\frac{2TP}{2TP+FP+FN}$
# * AUPRC, i.e. Area Under the Precision-Recall Curve (average precision)
# * AUC, i.e. Area Under the ROC Curve
#
# Where:
# * $TP$ - True Positive
# * $TN$ - True Negative
# * $FP$ - False Positive
# * $FN$ - False Negative
#
# In our assessment recall was the most important metric, because in cancer screening detecting the disease is what matters. Of course recall alone is not sufficient, so we also took the remaining metrics into account.
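# As a quick sanity check of the formulas above, the metrics can be computed by hand on a toy prediction vector (the vectors below are illustrative only, not taken from our data):

```python
# Hypothetical toy vectors chosen so that TP=2, TN=3, FP=1, FN=1.
y_true = [1, 1, 1, 0, 0, 0, 0]
y_pred = [1, 1, 0, 1, 0, 0, 0]

TP = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
TN = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
FP = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
FN = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)

accuracy  = (TP + TN) / len(y_true)      # (TP+TN)/(TP+TN+FP+FN) = 5/7
precision = TP / (TP + FP)               # 2/3
recall    = TP / (TP + FN)               # 2/3
f1        = 2 * TP / (2 * TP + FP + FN)  # 2/3
```

# The sklearn functions used in `scoring` above return exactly these values for the same vectors.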
# ## Models
#
# +
# prepare the data
X, y = extract_y(dane)
X = X.fillna(0)
X_train, X_test, y_train, y_test = default_split(X, y)
# -
# ### Random Forest
# +
from sklearn.ensemble import RandomForestClassifier
model_rf = RandomForestClassifier(n_estimators=50,
max_depth=100,
min_samples_split = 2,
max_features = 7,
random_state=0,
n_jobs = -1)
model_rf.fit(X_train, y_train)
y_predicted_rf = model_rf.predict(X_test)
scoring(y_test, y_predicted_rf)
# -
# #### Tuning
# +
from sklearn.model_selection import GridSearchCV
n_estimators = [100, 300, 500, 800]
max_depth = [10, 25, 50, 75, 100]
min_samples_split = [2, 3, 5, 10, 15]
min_samples_leaf = [1, 2, 5]
forest = RandomForestClassifier(random_state = 0)
hyperFrf = dict(n_estimators = n_estimators, max_depth = max_depth,
min_samples_split = min_samples_split,
min_samples_leaf = min_samples_leaf)
gridRF = GridSearchCV(forest, hyperFrf, cv = 5, verbose = 1,
n_jobs = -1, scoring = 'average_precision')
bestRF = gridRF.fit(X_train, y_train)
# -
y_predicted_best_rf = bestRF.predict(X_test)
scoring(y_test, y_predicted_best_rf)
# As we can see, manual tuning returned better results than tuning with grid search
# ### GBM
# +
from sklearn.ensemble import GradientBoostingClassifier
model_gbm = GradientBoostingClassifier()
model_gbm.fit(X_train, y_train)
y_predicted_gbm = model_gbm.predict(X_test)
scoring(y_test, y_predicted_gbm)
# -
# #### Tuning
# +
from sklearn.model_selection import GridSearchCV
n_estimators = [100, 300, 500, 800]
max_depth = [1, 3, 5, 10]
min_samples_split = [2, 3, 5, 10]
learning_rate = [0.05, 0.1, 0.2]
gbm = GradientBoostingClassifier()
hyperFGBM = dict(n_estimators = n_estimators, max_depth = max_depth,
min_samples_split = min_samples_split,
learning_rate = learning_rate)
gridFGBM = GridSearchCV(gbm, hyperFGBM, cv = 5, verbose = 1,
n_jobs = -1, scoring = 'average_precision')
bestFGBM = gridFGBM.fit(X_train, y_train)
# -
y_predicted_best_gbm = bestFGBM.predict(X_test)
scoring(y_test, y_predicted_best_gbm)
# ### BogoModel
#
# BogoModel is our own model, named after the `Bogosort` sorting algorithm, which is based on randomness. BogoModel learns how often 1 occurs in the training target variable, then for the test samples it draws 1 or 0 with probability equal to that frequency.
import copy
def BogoModelPredict(y_test, random_state):
y_random_predicted = copy.copy(y_test)
    prob = y_train['cancer'].value_counts()[1] / len(y_train)  # frequency of 1s in the training target
np.random.seed(seed=random_state)
y_random_predicted['cancer'] = np.random.choice([0, 1], len(y_test), p=[1-prob, prob])
return y_random_predicted
y_predicted_bogomodel = BogoModelPredict(y_test, 21)
scoring(y_test, y_predicted_bogomodel)
# ### Decision Tree
# +
from sklearn.tree import DecisionTreeClassifier
model_dtree = DecisionTreeClassifier()
model_dtree.fit(X_train, y_train)
y_predicted_dtree = model_dtree.predict(X_test)
scoring(y_test, y_predicted_dtree)
# -
# #### Tuning
# +
from sklearn.model_selection import GridSearchCV
max_depth = [10, 25, 50, 75, 100]
max_leaf_nodes = list(range(2, 100))
min_samples_split = [2, 3, 5, 10]
dtree = DecisionTreeClassifier()
hyperDTree = dict(max_depth = max_depth, max_leaf_nodes = max_leaf_nodes,
min_samples_split = min_samples_split)
gridDTree = GridSearchCV(dtree, hyperDTree, cv = 5, verbose = 1,
n_jobs = -1, scoring = 'average_precision')
bestDTree = gridDTree.fit(X_train, y_train)
# -
y_predicted_best_dtree = bestDTree.predict(X_test)
scoring(y_test, y_predicted_best_dtree)
# ### Naive Bayes
# +
from sklearn.naive_bayes import GaussianNB
model_nb = GaussianNB()
model_nb.fit(X_train, y_train)
y_predicted_nb = model_nb.predict(X_test)
scoring(y_test, y_predicted_nb)
# -
# ### XGBoost
# XGBoost was the first model that started giving slightly better results than the others. Reaching our best score, however, required some changes to the default preprocessing. The difference comes mainly from using MinMaxScaler. We also tried not imputing the missing values, since XGBoost should handle them well, but the results were worse.
# +
X_xgb, y_xgb = extract_y(dane)
X_xgb = X_xgb.fillna(0)
X_xgb_train, X_xgb_test, y_xgb_train, y_xgb_test = default_split(X_xgb, y_xgb)
from xgboost import XGBClassifier
model = XGBClassifier(random_state=2137)
model.fit(X_xgb_train, y_xgb_train.values.ravel())
y_predicted_xgb = model.predict(X_xgb_test)
scoring(y_xgb_test, y_predicted_xgb)
# -
# As we can see, already in this case we obtained a score of about 0.55 (AUC).
# #### Tuning
# +
from sklearn.model_selection import GridSearchCV
params = {
'max_depth': np.arange(5, 50, 5),
'learning_rate': np.arange(0.1, 2, 0.3),
'booster': ['gbtree', 'gblinear', 'dart'],
}
grid = GridSearchCV(model, params, error_score='raise', cv=5, scoring='roc_auc')
grid.fit(X_xgb_train, y_xgb_train.values.ravel())
print(f"Score: {grid.best_score_}")
grid.best_params_
# -
model = XGBClassifier(random_state=2137, booster='gblinear', learning_rate=0.4, max_depth=5)
model.fit(X_xgb_train, y_xgb_train.values.ravel())
y_predicted_best_xgb = model.predict(X_xgb_test)
scoring(y_xgb_test, y_predicted_best_xgb)
# Unfortunately, tuning achieved nothing and only worsened the score. Following the recommendations, we decided to use the AUPRC metric for tuning.
# +
model = XGBClassifier(random_state=2137)
params = {
'max_depth': np.arange(5, 50, 5),
'learning_rate': np.arange(0.1, 2, 0.3),
'booster': ['gbtree', 'gblinear', 'dart'],
}
grid = GridSearchCV(model, params, error_score='raise', cv=5, scoring='average_precision')
grid.fit(X_xgb_train, y_xgb_train.values.ravel())
print(f"Score: {grid.best_score_}")
grid.best_params_
# -
# unfortunately, the depth alone already suggests that the model will likely be overfitted
model = XGBClassifier(random_state=2137, booster='gblinear', learning_rate=0.4, max_depth=40)
model.fit(X_xgb_train, y_xgb_train.values.ravel())
y_predicted_best_xgb2 = model.predict(X_xgb_test)
scoring(y_xgb_test, y_predicted_best_xgb2)
# The score is again lower than before tuning. The best results came from tuning *by intuition*. Here is the resulting model:
model = XGBClassifier(random_state=2137, booster='gbtree', learning_rate=0.5, max_depth=7, gamma=0.001)
model.fit(X_xgb_train, y_xgb_train.values.ravel())
y_predicted_xgb_manual = model.predict(X_xgb_test)
scoring(y_xgb_test, y_predicted_xgb_manual)
# We managed to reach almost 60% AUC. The result may not be stunning, but it is a significant improvement over the previous models. We also asked ourselves whether our model is overfitting.
y_predicted = model.predict(X_xgb_train)
scoring(y_xgb_train, y_predicted)
# However, the parameters used do not suggest that. Above all, we use a tree booster (the gblinear booster clearly improved with large depth), and the depth is only 7, so the problem more likely lies with the dataset.
# ## Model Comparison
# +
# Results
predicted_all = [y_predicted_rf,
y_predicted_best_rf,
y_predicted_gbm,
y_predicted_best_gbm,
y_predicted_bogomodel,
y_predicted_dtree,
y_predicted_best_dtree,
y_predicted_nb,
y_predicted_xgb,
y_predicted_best_xgb,
y_predicted_best_xgb2,
y_predicted_xgb_manual]
labels = ["Las losowy",
"Las losowy po strojeniu",
"GBM",
"GBM po strojeniu",
"Bogomodel",
"Drzewo decyzyjne",
"Drzewo decyzyjne po strojeniu",
"Naive Bayes",
"XGBoost",
"XGBoost strojony automatycznie 1",
"XGBoost strojony automatycznie 2",
"XGBoost strojony ręcznie"]
colors = ["royalblue",
"royalblue",
"firebrick",
"firebrick",
"darkorange",
"darkgoldenrod",
"darkgoldenrod",
"olive",
"darkgreen",
"darkgreen",
"darkgreen",
"darkgreen"]
# -
# #### Accuracy
acc_all = []
for pred in predicted_all:
acc_all.append(accuracy_score(y_test, pred))
df = pd.DataFrame({"Model": labels, "Wynik": acc_all})
plt_acc = df.plot.bar(x='Model', y ='Wynik', rot=45, color = colors, legend = None)
plt_acc.set_xticklabels(labels, ha='right')
plt_acc.grid('on', which='major', axis='y' )
# #### Precision
prec_all = []
for pred in predicted_all:
prec_all.append(precision_score(y_test, pred))
df = pd.DataFrame({"Model": labels, "Wynik": prec_all})
plt_prec = df.plot.bar(x='Model', y ='Wynik', rot=45, color = colors, legend = None)
plt_prec.set_xticklabels(labels, ha='right')
plt_prec.grid('on', which='major', axis='y' )
# #### Recall
recall_all = []
for pred in predicted_all:
recall_all.append(recall_score(y_test, pred))
df = pd.DataFrame({"Model": labels, "Wynik": recall_all})
plt_recall = df.plot.bar(x='Model', y ='Wynik', rot=45, color = colors, legend = None)
plt_recall.set_xticklabels(labels, ha='right')
plt_recall.grid('on', which='major', axis='y' )
# #### F1
f1_all = []
for pred in predicted_all:
f1_all.append(f1_score(y_test, pred))
df = pd.DataFrame({"Model": labels, "Wynik": f1_all})
plt_f1 = df.plot.bar(x='Model', y ='Wynik', rot=45, color = colors, legend = None)
plt_f1.set_xticklabels(labels, ha='right')
plt_f1.grid('on', which='major', axis='y' )
# #### Average Precision Score
auprc_all = []
for pred in predicted_all:
auprc_all.append(average_precision_score(y_test, pred))
df = pd.DataFrame({"Model": labels, "Wynik": auprc_all})
plt_auprc = df.plot.bar(x='Model', y ='Wynik', rot=45, color = colors, legend = None)
plt_auprc.set_xticklabels(labels, ha='right')
plt_auprc.grid('on', which='major', axis='y' )
# #### AUC
auc_all = []
for pred in predicted_all:
auc_all.append(roc_auc_score(y_test, pred))
df = pd.DataFrame({"Model": labels, "Wynik": auc_all})
plt_auc = df.plot.bar(x='Model', y ='Wynik', rot=45, color = colors, legend = None)
plt_auc.set_xticklabels(labels, ha='right')
plt_auc.grid('on', which='major', axis='y' )
# #### Comparison summary
#
# The comparison of the different metrics shows fairly unambiguously (except for the accuracy metric) that XGBoost with manually tuned parameters turned out to be the best classifier for this dataset. Second place went to our own BogoModel (which is not necessarily good news).
# ### Environment Information
#
import IPython
print (IPython.sys_info())
# !pip freeze
# ## Statement
# I confirm that the above work is my own and that I did not use any unauthorized sources
| Projekty/Projekt1/Grupa3/StaronSzypulaUrbala/Raport.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Emukit tutorials
#
# Emukit tutorials can be added and used through the links below. The goal of these tutorials is to explain a particular functionality of Emukit. These tutorials are stand-alone notebooks that don't require any extra files and fully rely on Emukit components (apart from the creation of the model).
#
# Some tutorials have been written with the purpose of explaining some scientific concepts and can be used for learning about different topics in emulation and uncertainty quantification. Other tutorials are a small guide to describe some feature of the library.
#
# Another great resource for learning Emukit is the [examples](../emukit/examples), which are more elaborate modules focused either on the implementation of a new method with Emukit components or on the analysis and solution of some specific problem.
# ## Available Tutorials
# * **Getting Started:**
# * [5 minutes introduction to Emukit](Emukit-tutorial-intro.ipynb)
# * [Philosophy and Basic use of the library](Emukit-tutorial-basic-use-of-the-library.ipynb)
# * **Scientific tutorials:**
# * [Introduction to Bayesian optimization](Emukit-tutorial-Bayesian-optimization-introduction.ipynb)
# * [Introduction to multi-fidelity Gaussian processes](Emukit-tutorial-multi-fidelity.ipynb)
# * [Introduction to sensitivity analysis](Emukit-tutorial-sensitivity-montecarlo.ipynb)
# * [Introduction to Bayesian Quadrature](Emukit-tutorial-Bayesian-quadrature-introduction.ipynb)
# * **Features tutorials:**
# * [Bayesian optimization with external evaluation of the objective](Emukit-tutorial-bayesian-optimization-external-objective-evaluation.ipynb)
# * [Bayesian optimization with context variables](Emukit-tutorial-bayesian-optimization-context-variables.ipynb)
# * [Learn how to combine an acquisition function (entropy search) with a multi-source (fidelity) Gaussian process](Emukit-tutorial-multi-fidelity-bayesian-optimization.ipynb)
# * [How to benchmark several Bayesian optimization methods with Emukit](Emukit-tutorial-bayesian-optimization-benchmark.ipynb)
# * [Categorical variables in Emukit](Emukit-Categorical-with-Tensorflow.ipynb)
# * [Bayesian optimization integrating the hyper-parameters of the model](Emukit-tutorial-bayesian-optimization-integrating-model-hyperparameters.ipynb)
# * **Do it yourself:**
# * [How to use custom model](Emukit-tutorial-custom-model.ipynb)
# * [Template for writing a tutorial notebook for Emukit](Emukit-tutorial-how-to-write-a-notebook.ipynb) (please follow the contribution guide below).
#
# ## Contribution guide
#
# We really appreciate contributions. Tutorials and examples are a great way to spread what you have learned about Emukit across the community and an excellent way to showcase new features. If you want to contribute with a new tutorial please follow [these](https://github.com/amzn/emukit/tree/master/emukit/examples) steps.
#
# We also welcome feedback so if there is any aspect of Emukit that we can improve please raise an issue!
| notebooks/index.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # **Stationarity**
#
# ---------------
#
# The file covers the main features of a stationary time series.
# An accompanying [Rmd file](../R/stationarity.md) is provided.
#
# ## **Conditions**
#
# ---------------
#
# There are three conditions for a time series to be stationary.
#
# * Constant Mean
# * Constant Variance
# * Constant Autocorrelation
#
# ### **Constant Mean**
#
# All subpopulations of $X_t$ have the same mean for each time $t$
# i.e. the mean does not depend on time.
#
# $$
# E[X_t] = \mu
# $$
#
# ### **Constant Variance**
#
# Variance of $X_t$ does not depend on time and the variance is finite.
#
# $$
# Var[X_t] = \sigma^2 < \infty
# $$
#
# ### **Constant Autocorrelation**
#
# The correlation of $X_{t_2}$ and $X_{t_1}$ only depends on $t_2-t_1$.
# That is, the correlation between the data points depends only on how far apart the observations are in time,
# not where the observations are located in time.
#
# $$
# Corr(X_t, X_{t+h}) = \rho_h
# $$
#
# where $h$ is the difference between two points in time.
#
# ### **Checking Assumptions in Practice**
#
# For constant mean and variance, plot the time series and visually assess the validity of the assumption.
# For constant autocorrelation, split the time series into subpopulations,
# plot the ACFs for the subpopulations, then visually assess the validity of the assumption.
#
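# A minimal sketch of the autocorrelation check (assuming a synthetic white-noise series, which is stationary by construction): compute the sample ACF of two subpopulations and compare them.

```python
import numpy as np

def sample_acf(x, max_lag):
    """Sample autocorrelations rho_hat_0 .. rho_hat_max_lag (1/N convention)."""
    x = np.asarray(x, dtype=float)
    n, xbar = len(x), x.mean()
    c0 = np.sum((x - xbar) ** 2) / n
    return np.array([np.sum((x[:n - k] - xbar) * (x[k:] - xbar)) / n / c0
                     for k in range(max_lag + 1)])

rng = np.random.default_rng(0)
x = rng.normal(0.0, 1.0, 400)            # white noise: stationary by construction
first_half, second_half = x[:200], x[200:]
acf1 = sample_acf(first_half, 10)
acf2 = sample_acf(second_half, 10)
# For a stationary series the two ACFs should look similar
# (here both near zero beyond lag 0, since white noise is uncorrelated).
```

# In practice one would plot `acf1` and `acf2` side by side and judge their similarity visually.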
# ## **Parameter Estimation**
#
# ---------------
#
# A single realization can be used to estimate the mean, variance, and
# autocorrelation of a stationary time series when the autocorrelations approach zero as the lag increases.
#
# Parameters can be estimated in the following order due to dependence:
#
# * mean
# * sample autocorrelations
# * variance
# * confidence interval for the mean
#
# Let $X_t$ be a discrete stationary time series for the estimate procedures.
#
# ### **Estimation of the Mean**
#
# The sample mean $\bar{x}$ is an unbiased estimate of the mean of the time series.
# Thus, the mean will be estimated as
#
# $$
# \hat{Mean}(X_t) = \bar{x} = \frac{\sum^N x_i}{N}
# $$
#
# ### **Estimation of the Autocorrelations**
#
# Recall the [definition](./preliminaries.ipynb) of $\rho_k$:
#
#
# $$
# \rho_k = \frac{\gamma_k}{\gamma_0} = \frac{E[(X_t-\mu)(X_{t+k}-\mu)]}{\sigma_x^2}
# $$
#
# The autocorrelations can be estimated with the **sample autocorrelations**:
#
# $$
# \hat{\rho}_k = \frac{\hat{\gamma}_k}{\hat{\gamma}_0} =
# \frac {\frac{1}{N} \sum^{N-k} \left( x_t - \bar{x} \right) \left( x_{t+k} - \bar{x} \right) }
# { \frac{1}{N} \sum^{N} \left( x_t - \bar{x} \right)^2 }
# $$
#
# Note that the amount of data used to estimate the sample autocorrelations decreases as $k \rightarrow N$.
#
# ### **Estimation of the Variance**
#
# The variance of the sample mean can be estimated by
#
# $$
# \hat{Var}(\bar{X}) = \frac{\hat{\sigma}^2}{N} \sum^{N-1}_{k=-(N-1)} \left(1-\frac{\lvert k \rvert}{N} \right) \hat{\rho}_k
# $$
#
# simplifying the summation by noting that $\rho_k = \rho_{-k}$,
#
# $$
# = \frac{\hat{\sigma}^2}{N} \left( 1 + 2
# \sum^{N-1}_{k=1} \left(1-\frac{\lvert k \rvert}{N} \right) \hat{\rho}_k \right)
# $$
#
# where $\hat{\rho}_k$ represents the $k$th autocorrelation estimate and $\sigma^2$ is estimated with all the data as shown below.
#
# $$
# \hat{\sigma}^2 = \frac{1}{N} \sum^N \left( x - \bar{x} \right)^2
# $$
#
# ### **Estimation of a Confidence for the Mean**
#
# The confidence interval for the mean follows the common form of
#
# $$
# CI: (Mean \, Estimate) \pm (Multiplier) (Variance \, Estimate)
# $$
#
# Substituting for the estimation, we have
#
# $$
# CI: \bar{x} \pm t_{1-\frac{\alpha}{2}} \sqrt
# {
# \frac{\hat{\sigma}^2}{N} \left( 1 + 2
# \sum^{N-1}_{k=1} \left(1-\frac{\lvert k \rvert}{N} \right) \hat{\rho}_k \right)
# }
# $$
#
# This confidence interval is **interpreted as**
# "We are $X\%$ confident that the mean is contained in the interval $[CI \, Lower, \, CI \, Upper]$"
#
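# A minimal sketch of the whole estimation procedure above (assumptions: a synthetic series, the lag sum truncated at a small `max_lag` where the ACF is negligible, and the normal quantile used in place of the t multiplier, which is reasonable for large $N$):

```python
import numpy as np
from statistics import NormalDist

def mean_ci(x, max_lag=20, alpha=0.05):
    x = np.asarray(x, dtype=float)
    n = len(x)
    xbar = x.mean()                                    # mean estimate
    sigma2 = np.sum((x - xbar) ** 2) / n               # sigma^2 estimate
    rho = [np.sum((x[:n - k] - xbar) * (x[k:] - xbar)) / n / sigma2
           for k in range(1, max_lag + 1)]             # sample autocorrelations
    # variance of the sample mean; in practice the sum is truncated at max_lag
    var_mean = sigma2 / n * (1 + 2 * sum((1 - k / n) * rho[k - 1]
                                         for k in range(1, max_lag + 1)))
    z = NormalDist().inv_cdf(1 - alpha / 2)            # ~1.96 for alpha = 0.05
    half = z * np.sqrt(max(var_mean, 0.0))
    return xbar - half, xbar + half

rng = np.random.default_rng(1)
lo, hi = mean_ci(rng.normal(10.0, 2.0, 500))           # true mean is 10
```

# Note the truncation: summing the weighted sample autocorrelations all the way to $N-1$ with the $1/N$ estimator collapses the estimate toward zero, so the sum is cut off once $\hat{\rho}_k$ is negligible.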
# ## References
#
# ---------------
#
# \[1\] <NAME> and <NAME>, "Stationary", SMU, 2019
| notes/stationarity.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/a1ip/dl-2022/blob/main/Derivative_and_gradient.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + [markdown] id="adHh_zIAMytT"
# <p style="text-align:center"><img align=center src="https://raw.githubusercontent.com/DLSchool/deep-learning-school/fall_2021_part1/_static/logo_readme.png" style="max-width:1590px;max-height:400px;width:100%"/></p>
# + [markdown] id="aFQw4P-xNpXp"
# # The Derivative of a Function of One Variable
# + [markdown] id="-n7rOih9NpXu"
# To begin with, we will consider derivatives of functions of one variable; later we will generalize this notion to functions of several variables.
# + [markdown] id="LSW3dwznNpXw"
# ## The Meaning of the Derivative
# + [markdown] id="cG6XZXG_NpXz"
# Let us first consider the function <br>
# $F(x) = x^4 + 5x^3 - 10x$ on the interval $x \in[-5, 2]$:
# + id="z8t7qeiNNpX1"
import numpy as np
import matplotlib.pyplot as plt
# + id="JEpj06NjNpX9" outputId="1040a0ba-7e16-4870-fc45-22bc67bd768b"
def F(x):
return x**4 + 5*x**3 - 10*x
x = np.linspace(-5, 2, 100)
y = list(map(F, x))
plt.figure(figsize=(10,7))
plt.plot(x, y)
plt.ylabel("Y")
plt.xlabel("X")
plt.scatter([-3.5518, -0.9439, 0.7457], [F(-3.5518), F(-0.9439), F(0.7457)], lw=5)
plt.show()
# + [markdown] id="o9d8PFfwNpYK"
# As we can see, this function has two local minima, at $x \sim -3.5518$ and $x \sim 0.7457$, and one local maximum, at $x \sim -0.9439$; it decreases on the intervals $[-5, -3.5518]$ and $[-0.9439, 0.7457]$ and increases on the intervals $[-3.5518, -0.9439]$ and $[0.7457, 2]$.
# + [markdown] id="8QUoZHdCNpYL"
# Look at the graph where the function decreases: the intervals $[-5, -3.5518]$ and $[-0.9439, 0.7457]$. Notice that the function decreases at different rates on these two intervals. On $[-5, -3.5518]$ it decreases faster (the "slope" of the function is steeper: when the coordinate $x$ changes by 1, the value $y$ changes quickly), while on $[-0.9439, 0.7457]$ the function decreases more slowly (when $x$ changes by 1, the value $y$ changes by a smaller amount than on the previous interval)
#
# Let us learn to measure the rate of decrease/increase of a function, and introduce rigorous definitions.
# + [markdown] id="q1W6ANfnNpYN"
# <a href="https://ibb.co/hgW7bnD"><img src="https://i.ibb.co/2FtnxLg/2020-01-28-01-00-23.jpg" alt="2020-01-28-01-00-23" border="0"></a>
# + [markdown] id="dpwiK2cbNpYO"
# Consider the points $(x_1, y_1) = (-5, 50)$ and $(x_2, y_2) \sim (-0.8, -5)$ on the graph of the function (see the figure). Step away from each of them along the $x$ coordinate by the same step $\Delta x > 0$; we land on other points of the graph. The coordinates $y_1$ and $y_2$ then change by some amounts $\Delta y_1$ and $\Delta y_2$.
#
# The added quantities $\Delta y_1$ and $\Delta y_2$ are called **increments of the function**, and the quantity $\Delta x$ is the **increment of the argument**
# + [markdown] id="wMfQe4_wNpYP"
# Notice that $\Delta y_2 < \Delta y_1$
#
# Now consider the ratios $\frac{\Delta y_1}{\Delta x}$ and $\frac{\Delta y_2}{\Delta x}$. These ratios can be seen as measures of the rate of decrease of the function on the intervals $[x_1, x_1 + \Delta x]$ and $[x_2, x_2 + \Delta x]$, respectively. <br>
# Indeed, the ratio $\frac{\Delta y_1}{\Delta x}$ shows by how much, on average, the value of the function changes when the value of the argument changes by 1.
#
# **The larger the ratio $\frac{\Delta y}{\Delta x}$ is in absolute value, the higher the rate of decrease/increase of the function**.
# + [markdown] id="66o-XoOVNpYQ"
# An analogy with mountains: think of the graph as a road that goes uphill and downhill. The ratio $\frac{\Delta y}{\Delta x}$ shows how much the altitude of the road changes per meter of horizontal distance, that is, how quickly the height of the mountain rises or falls.
# + [markdown] id="h5rUJMgfNpYR"
# Note that in the example above the increments $\Delta y_1$ and $\Delta y_2$ are negative numbers (the function decreases). Hence the ratio $\frac{\Delta y_1}{\Delta x} < 0$ as well.
#
# Where the function increases, the ratio $\frac{\Delta y_1}{\Delta x}$ will be positive.
# + [markdown] id="NaA-s8rLNpYS"
# **If the ratio $\frac{\Delta y_1}{\Delta x} > 0$, the function increases; if it is < 0, the function decreases**.
# + [markdown] id="Li5HGJe-NpYW"
# So, **by examining the increment of a function under a change of its argument at a point, we can draw conclusions about whether the function increases or decreases, and how fast, in the neighborhood of that point**.
# + [markdown] id="fMP0UxYrViZk"
# ------------------------------
# + [markdown] id="WCVAWa_7NpYY"
# Now a natural question arises: what value of $\Delta x$ should we choose to best assess the behavior of the function in the neighborhood of a point $(x, y)$?
#
# Let us consider an example where we chose a larger $\Delta x$ than in the previous example:
# + [markdown] id="6s3gIAhPNpYc"
# <a href="https://ibb.co/58KxDzx"><img src="https://i.ibb.co/qsJFG4F/2020-01-28-23-32-16.jpg" alt="2020-01-28-23-32-16" border="0"></a>
# + [markdown] id="Rlk04TJ3NpYd"
# We were at the point $(x_2, y_2) \sim (0.8, 5)$ and moved along the $x$ coordinate by $\Delta x' > \Delta x$, arriving at the point $(x_2 + \Delta x', y_2 + \Delta y'_2)$. Note that $\Delta y'_2 > 0$, and the ratio $\frac{\Delta y'_2}{\Delta x'} > 0$. By the reasoning above, this tells us that on the interval $[x_2, x_2 + \Delta x']$ the function increases on average by $\frac{\Delta y'_2}{\Delta x'}$ per unit change of the argument.
#
# But is that really so?
# + [markdown] id="8eOOxeh_NpYe"
# On the graph we clearly see that at the point $(x_2, y_2)$ the function decreases, then increases and arrives at the point $(x_2 + \Delta x', y_2 + \Delta y'_2)$. That is, our conclusion about how the graph behaves in the neighborhood of $(x_2, y_2)$ is somewhat inaccurate. By stepping too far from $(x_2, y_2)$, by $\Delta x'$, we "stepped over" the local minimum of the function and wrongly concluded that the function increases near $(x_2, y_2)$.
# + [markdown] id="eroFcV62NpYf"
# It is as if you stood on the top of a mountain, descended from it, then climbed another mountain of the same height and said: "well, my final altitude changed by 0 meters. So the whole road must have been flat."
# + [markdown] id="6ZMmGnroNpYg"
# So, to draw correct conclusions about how a function behaves near a point, we need to take $\Delta x$ small. But how small?
# + [markdown] id="lYkIEU2nNpYg"
# The answer is: infinitely small. No finite value of $\Delta x$ will do. Indeed, for any value of $\Delta x$ one can find a function whose graph looks like this:
# + [markdown] id="Dz0go1TJNpYh"
# <a href="https://ibb.co/ZYNm4Sm"><img src="https://i.ibb.co/Nm7SDrS/2020-01-29-00-13-31.jpg" alt="2020-01-29-00-13-31" border="0"></a>
# + [markdown] id="xozwB2CBNpYi"
# That is, we would still skip over the local minimum and fail to realize that the function was decreasing at the point rather than increasing.
# + [markdown] id="gvDmTwBgNpYi"
# And finally, even if there are no local minima/maxima on the interval $[x, x + \Delta x]$ and the function is monotone there, it may increase/decrease at different rates. For example, the function $F(x) = x^4$ on the interval $[1, 15]$ grows faster and faster (the graph gets "steeper") as $x$ approaches 15. So treating the average growth rate over $[1, 15]$ as the growth rate at the point 1 is not quite correct.
# + id="u1aBnWrZNpYj" outputId="570846e9-f131-4b58-ed23-ee1c1b18e756"
def F(x):
return x**4
x = np.linspace(1, 15, 100)
y = list(map(F, x))
plt.plot(x, y)
plt.ylabel("Y")
plt.xlabel("X")
# + [markdown] id="zgRHK49nNpYn"
# **So what should we do?**
# + [markdown] id="h5w6pzBmNpYp"
# Here the notion of the limit of a function comes to our aid.
#
# Take some $\Delta x > 0$ and let it tend to zero, i.e., decrease it step by step. <br>
# In other words, build a sequence of numbers that starts at $\Delta x > 0$ and has limit 0.
#
# For example, we can take $\Delta x = 1$ and the sequence <br>
# $$1, \frac{1}{2}, \frac{1}{3}, ... \frac{1}{n}, ...$$
#
# Now take a function $F$ and some point $(x, y)$ on it. Consider the increment $x + \Delta x$ and the corresponding increment of the function, $y + \Delta y = F(x + \Delta x)$. We will gradually shrink $\Delta x$ by substituting the members of the sequence above. This yields a sequence of $\Delta y$:
#
# $$\Delta y_1 = F(x + 1) - F(x),$$ <br>
# $$\Delta y_2 = F(x + \frac{1}{2}) - F(x),$$ <br>
# $$\Delta y_3 = F(x + \frac{1}{3}) - F(x),$$ <br>
# $$...$$
# + [markdown] id="33WVP4gbNpYq"
# And, correspondingly, we get a sequence of ratios $\frac{\Delta y}{\Delta x}$:
#
# $$\frac{\Delta y_1}{1} = \frac{F(x + 1) - F(x)}{1},$$ <br>
# $$\frac{\Delta y_2}{1/2} = \frac{F(x + \frac{1}{2}) - F(x)}{1/2},$$ <br>
# $$\frac{\Delta y_3}{1/3} = \frac{F(x + \frac{1}{3}) - F(x)}{1/3},$$ <br>
# $$...$$
# + [markdown] id="dByMGLFbNpYr"
# As we decrease $\Delta x$, we step away from the point $(x, y)$ by less and less, and **the ratio $\frac{\Delta y}{\Delta x}$ reflects the behavior of the function $F$ near the point $(x, y)$ more and more accurately**
#
# It is easy to guess that the quantity that most accurately captures the behavior of the function $F$ near the point $(x, y)$ is the limit of $\frac{\Delta y}{\Delta x} = \frac{F(x + \Delta x) - F(x)}{\Delta x}$ as $\Delta x$ tends to zero. That is, we describe the behavior of the function near $(x, y)$ best when we step away from $(x, y)$ by an infinitesimally small amount.
#
# This quantity is called the **derivative of the function F at the point x**
#
# It is written like this:
# + [markdown] id="hMHhY3fcNpYr"
# $$F'(x) = \lim_{\Delta x \to 0} \frac{F(x + \Delta x) - F(x)}{\Delta x}$$
# + [markdown] id="rtWxqJq7NpYs"
# (note that the sequence $\Delta x = 1, \frac{1}{2}, \frac{1}{3}, ... \frac{1}{n}, ...$ never reaches 0, so the denominator in the formula for $F'(x)$ is always defined)
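# The limit above can also be watched numerically. Shrinking $\Delta x$ through the sequence $1, \frac{1}{2}, \frac{1}{3}, \dots$ for $F(x) = x^2$ at $x = 3$ (where the derivative is $6$), the difference quotients settle toward $6$ (a numeric illustration, not part of the definition):

```python
def F(x):
    return x ** 2

x0 = 3.0
for n in [1, 2, 10, 100, 10000]:
    dx = 1.0 / n
    # difference quotient (F(x0 + dx) - F(x0)) / dx; for x^2 at 3 it equals 6 + dx
    q = (F(x0 + dx) - F(x0)) / dx
    print(n, q)
```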
# + [markdown] id="_A1yyDtlNpYs"
# This limit may exist or may not. If it exists, the function $F$ is called **differentiable at the point** $(x, y)$. <br>
# If a function is differentiable at every point, it is simply called differentiable.
# + [markdown] id="LKUi7m88NpYt"
# Let us summarize:
# + [markdown] id="D8juT7luS_s6"
# ## Definition of the Derivative
# + [markdown] id="evWoubSnY1n9"
# The **derivative of the function F at the point x**:
# $$F'(x) = \lim_{\Delta x \to 0} \frac{F(x + \Delta x) - F(x)}{\Delta x}$$
#
# One also says "the derivative of the function $F$ with respect to $x$"
# + [markdown] id="CfpWvjEVTChb"
# **The derivative of a function at a point describes how the function changes at that point**, namely:
#
# **The absolute value of the derivative indicates the rate of decrease/increase of the function.** <br>
# If the derivative of a function equals 5 at one point and 3 at another, the function increases faster at the first point.
#
# **The sign of the derivative indicates the direction of change at the point.** <br>
# That is, if the derivative is > 0, the function increases at the point; if the derivative is < 0, it decreases. And if the derivative equals 0, the point is a critical point: a local minimum, a local maximum, or neither (for example, $F(x) = x^3$ at $x = 0$).
#
#
# + [markdown] id="pwfMX1V5tCaB"
# > Example #1 <br>
# Consider the function $F(x) = x^2$ and the points $(-10, 100)$, $(2, 4)$ and $(25, 625)$:
# + colab={"base_uri": "https://localhost:8080/", "height": 296} id="DSjkjkQ9ukA-" outputId="27c7d3d3-1f3b-4e9f-af14-c5d6ae703770"
def F(x):
return x**2
x = np.linspace(-30, 30, 100)
y = list(map(F, x))
plt.figure(figsize=(10,7))
plt.plot(x, y)
plt.ylabel("Y")
plt.xlabel("X")
plt.scatter([-10, 2, 25], [F(-10), F(2), F(25)], lw=5)
plt.show()
# + [markdown] id="c13BsE_9u7AS"
# > The derivative of $F$ at the point $x=-10$ equals $-20$, at the point $x=2$ it equals $4$, and at the point $x=25$ it equals $50$ (we will learn how to compute derivatives later). <br>
# <br>
# Note that the derivatives at the points $x=2$ and $x=25$ are positive, while at $-10$ it is negative. This tells us that the function increases at $x=2$ and $x=25$ and decreases at $-10$. And this is indeed so: look at the graph. <br>
# Also note that the derivative at $x=25$ is larger in absolute value than the derivatives at $x=-10$ and $x=2$. This means the rate of increase of $F$ at $x=25$ is higher than the rate of increase at $x=2$ and the rate of decrease at $x=-10$. This is also true: see the graph.
# + [markdown] id="z2D1UMaVymhO"
# > Example #2 <br>
# Let us compute the derivative of $F(x) = -2x - 1$ at the point $x = x_0$ from the definition. <br>
# <br>
# $$F'(x_0) = \lim_{\Delta x \to 0} \frac{F(x_0 + \Delta x) - F(x_0)}{\Delta x} = \lim_{\Delta x \to 0} \frac{-2(x_0 + \Delta x) - 1 + 2x_0 + 1}{\Delta x} = \lim_{\Delta x \to 0} \frac{-2 \Delta x}{\Delta x} = \lim_{\Delta x \to 0} -2 = -2$$
# <br>
# Note that the resulting value $-2$ does not depend on $x_0$, so the derivative of $F(x) = -2x - 1$ equals $-2$ at every point. <br>
# Also, we did not even need to evaluate a limit here, because $\Delta x$ simply cancelled =)
# + [markdown] id="fPHl5wbZzmDD"
# > Example #3 <br>
# Let us compute the derivative of $F(x) = 2x^2$ at the point $x = x_0$ from the definition. <br>
# <br>
# $$F'(x_0) = \lim_{\Delta x \to 0} \frac{F(x_0 + \Delta x) - F(x_0)}{\Delta x} = \lim_{\Delta x \to 0} \frac{2(x_0 + \Delta x)^2 - 2x_0^2}{\Delta x} = \lim_{\Delta x \to 0} \frac{4x_0 \Delta x + 2\Delta x^2}{\Delta x} =$$
# <br>
# $$= \lim_{\Delta x \to 0} (4x_0 + 2 \Delta x) = \lim_{\Delta x \to 0} 4x_0 + \lim_{\Delta x \to 0} 2 \Delta x = 4x_0$$
# <br>
# (as $\Delta x \to 0$, the limit of the expression $2 \Delta x$ is 0, so only $4x_0$ remains)<br>
# <br>
# So, at the point $x_0$ the derivative of $F(x) = 2x^2$ equals $4x_0$
# <br>
# <br>
# Substituting numbers for $x_0$, we can obtain values of the derivative at points. For example, <br>
# $F'(5) = 4 \cdot 5 = 20$
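# We can sanity-check this result with a small difference quotient (a numeric spot check of $F'(5) = 20$, not part of the derivation):

```python
def F(x):
    return 2 * x ** 2

dx = 1e-6
approx = (F(5.0 + dx) - F(5.0)) / dx
print(approx)  # close to F'(5) = 4 * 5 = 20
```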
# + [markdown] id="uYUUTvrHnPJo"
# **P.S.** You may be asking yourself why we need derivatives to determine where a function increases/decreases and where its minima are, since we can just draw the graph and see everything without computing any derivatives. <br>
# Keep in mind that we are about to introduce derivatives of functions of many variables, whose graphs we cannot look at: they do not fit into three-dimensional space =) So the only way left to study such functions is through their derivatives.
# + [markdown] id="ZVn2OunErh_8"
# ## The Derivative as a Function
# + [markdown] id="2wdZE2rErtuW"
# So far we have talked about the derivative at a specific point $x$ of a function $F$. Now suppose the function $F$ is differentiable on its entire domain, i.e., has a derivative at every point.
#
# Then we can define the derivative of $F$ as a function of the argument $x$:
# $F'(x) = \lim_{\Delta x \to 0} \frac{F(x + \Delta x) - F(x)}{\Delta x}$
#
# That is, **the derivative of a function is itself a function**.
# + [markdown] id="cZH0QmRFwtWW"
# > Example #4 <br>
# Consider the same function $F(x) = x^2$. Its derivative is the function $F'(x) = 2x$ (we will learn how to compute derivatives later). <br>
# <br>
# Let us substitute the points from the previous example into this function: <br>
# $F'(-10) = 2\cdot(-10) = -20$ <br>
# $F'(2) = 2\cdot(2) = 4$ <br>
# $F'(25) = 2\cdot(25) = 50$ <br>
# <br>
# Everything matches =)
# + [markdown] id="4-PXT6RA1434"
# > Example #5 <br>
# In Example #2 we considered the function $F(x) = -2x - 1$ and found that its derivative equals $-2$ at every point. This means the derivative function is:
# $$F'(x) = -2$$
# + [markdown] id="HXjgMlht2T8X"
# > Example #6 <br>
# In Example #3 we considered the function $F(x) = 2x^2$ and found that its derivative at a point $x_0$ equals $4x_0$. Since the point $x_0$ is arbitrary, the derivative function is:
# $$F'(x) = 4x$$
# + [markdown] id="cfbf09-INpYu"
# ## Geometric Meaning of the Derivative
# + [markdown] id="6tc_4Nkw3HIU"
# Consider the function $F = x^2$, a point $(x, y) = (x, F(x))$, and an increment of the argument $\Delta x$. The increment of the function value is $\Delta y = F(x + \Delta x) - F(x)$.
#
# Draw a line through the two points $(x, y)$ and $(x + \Delta x, y+ \Delta y)$. This line is called a **secant**.
# + [markdown] id="kc_f1-234zSJ"
# <a href="https://imgbb.com/"><img src="https://i.ibb.co/wsxHv5V/giphy.gif" alt="giphy" width="350" height="255" border="0"></a>
# + [markdown] id="KcZ00swa5Pf6"
# Note that we obtain a right triangle with legs $\Delta y$ and $\Delta x$. From geometry, the ratio of the increment of the function to the increment of the argument, $\frac{\Delta y}{\Delta x}$, is the ratio of the opposite leg of this triangle to the adjacent one, i.e., the tangent of the angle $\alpha$ between the leg $\Delta x$ and the hypotenuse (the slope angle of the secant):
# + [markdown] id="dgVm9cNB72S9"
# <a href="https://ibb.co/GHbFXG8"><img src="https://i.ibb.co/0sdQk16/2020-01-29-13-53-10.jpg" width="350" height="255" alt="2020-01-29-13-53-10" border="0"></a>
# + [markdown] id="OGbKihLG40_f"
# Now, as before, let us decrease the value of $\Delta x$. The point $(x + \Delta x, y+ \Delta y)$ will move along the graph toward the point $(x, y)$, and the secant will move with it:
# + [markdown] id="8foaTEXoaBIM"
# 
# + [markdown] id="UAqY8CKq5Iw2"
# Note that as the point $(x + \Delta x, y+ \Delta y)$ approaches the point $(x, y)$, the secant gets ever closer to the position of the **tangent line** at $(x, y)$ and eventually coincides with it. This gives us the definition of the tangent line:
#
# **The tangent line to the graph of a function $F$ at a point $(x, y)$ is the limiting position of the secant at that point.**
# + [markdown] id="YbTAjoQA9CTb"
# Recall now that $\frac{\Delta y}{\Delta x}$ is the tangent of the slope angle of the secant. <br>
# Since the value of the derivative at the point $(x, y)$ is the limiting value of $\frac{\Delta y}{\Delta x}$, and the limiting position of the secant is the tangent line at $(x, y)$, it follows that:
#
# **The derivative of a function at a point (x, y) is numerically equal to the tangent of the slope angle of the tangent line at that point** <br>
# (the angle $\alpha$ in the picture)
#
# + [markdown] id="816xKHchAg-f"
# <a href="https://ibb.co/bdt4tyX"><img src="https://i.ibb.co/88kykR9/2020-01-29-14-13-17.jpg" width="350" height="255" alt="2020-01-29-14-13-17" border="0"></a>
# + [markdown] id="oWGJNhL7AteH"
# ## A Derivative Example from Real Life
# + [markdown] id="3XfJLH46AuQG"
# > Example #7 <br>
# An example from physics: <br>
# we all remember the formula for the position of a moving body:
# $$F(t) = x_0 + v_{0}t + \frac{a}{2}t^2$$
# where: <br>
# $x_0$ is the position of the body at the start of the motion <br>
# $v_0$ is the velocity of the body at the start of the motion <br>
# $a$ is the acceleration of the body <br>
# $t$ is the moment in time <br>
# <br>
# The quantities $v$ and $a$ are the important ones here. Velocity and acceleration are related by the formula: <br>
# $v(t) = v_0 + at$ <br>
# Acceleration is the derivative of velocity with respect to time. This can be understood intuitively: acceleration is the rate of change of the body's velocity (compare with "the rate of change of the graph's coordinate")<br>
#
# + [markdown] id="ARkNTFDcd-Gd"
# > Пример #8 <br>
# <br>
# Вообще практически вся физика описывается производными первого и второго порядка (производная первого порядка = производная, производная второго порядка = производная производной). <br>
# <br>
# Скорость есть производная координаты про времени <br>
# Ускорение есть производная скорости по времени <br>
# То есть, ускорение есть вторая производная координаты по времени (производная производной) <br>
# <br>
# Также производная применяется в физике, географии и даже социологии. В целом, производная оисывает скорость изменения некоторой величины.
#
#
# + [markdown] id="bKpVK2UkHVxI"
# # Computing Derivatives
# + [markdown] id="zrKPFnhPT9yZ"
# ## Elementary Functions
# + [markdown] id="iv9tujvKHX6G"
# Above we defined the derivative and found the derivatives of several functions straight from the definition. In practice (in mathematics) there are many situations where one needs the derivatives of fairly complicated functions. Doing this from the definition every time is tedious and hard. <br>
# But there is good news! You don't have to. Derivatives of complicated functions are found by fairly simple rules. Let us go through them:
# + [markdown] id="74Hban0hIsRF"
# We will rely on a **table of derivatives of frequently occurring functions**:
# + [markdown] id="22iRk9mlIhh1"
# 
# + [markdown] id="gd_znTuDIz0k"
# In this table, $C$ is an arbitrary constant. That is, the first formula of the table says that the derivative of any constant function is 0.
# + [markdown] id="yHBPML8aJZac"
# Now it only remains to learn three rules for **differentiating combinations of functions**:
# Let $u$ and $v$ be two functions.
# + [markdown] id="nw5qIMcsJfjw"
# 1. The derivative of the sum of two functions is the sum of their derivatives (and similarly for subtraction)
# $$(u + v)' = u' + v'$$
#
# 2. The derivative of the product of two functions:
# $$(u \times v)' = u' v + u v'$$
#
# 3. The derivative of the quotient of two functions (for $v \neq 0$):
# $$\left(\frac{u}{v}\right)' = \frac{u' v - u v'}{v^2}$$
#
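# These rules can be spot-checked numerically. Below we verify the product rule for $u(x) = \sin x$, $v(x) = x^3$ at a single point using central difference quotients (a numeric check, not a proof):

```python
import math

def num_deriv(f, x, h=1e-6):
    # central-difference approximation of f'(x)
    return (f(x + h) - f(x - h)) / (2 * h)

u = math.sin
v = lambda t: t ** 3

x0 = 1.3
lhs = num_deriv(lambda t: u(t) * v(t), x0)                  # (u * v)'(x0)
rhs = num_deriv(u, x0) * v(x0) + u(x0) * num_deriv(v, x0)   # u'v + uv'
print(lhs, rhs)  # the two agree up to numerical error
```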
# + [markdown] id="G_dUFGpdTe2M"
# > Example #9
# <br>
# $F(x) = 4x^3$ <br>
# <br>
# This function is the product of two functions: $u(x) = 4$ and $v(x) = x^3$ <br>
# <br>
# The derivative of a constant is 0, i.e., $u'(x) = 0$. <br>
# For $x^3$ we look at the table. The formula we need is $(x^n)' = nx^{n-1}$. Applying it with $n=3$, we get: <br>
# $v'(x) = 3x^2$.<br>
# <br>
# By the product rule, $F'(x) = u'v + uv' = 0 \cdot x^3 + 4 \cdot 3x^2 = 12x^2$ <br>
# <br>
# P.S. In general, constants can simply be pulled outside the differentiation, so only the part containing the argument is differentiated. Formally, <br>
# $(CF(x))' = CF'(x)$. <br>
# Try to verify this yourself using the product rule.
#
#
# + [markdown] id="lcSI_mjtOpYk"
# > Example #10 <br>
# $F(x) = \frac{4\sin x}{e^x} + x^2 \cdot \ln x$
# <br>
# <br>
# This function decomposes into the sum of two functions: <br>
# $u = \frac{4\sin x}{e^x}$ <br>
# $v = x^2\cdot \ln x$ <br>
# <br>
# $u$: <br>
# This function is the quotient of the functions $4\sin x$ and $e^x$. Let us use the quotient rule: <br>
# $u' = \frac{(4\sin x)' \cdot e^x - 4\sin x \cdot(e^x)'}{(e^x)^2}$ (looking up the derivatives in the table) $= \frac{4\cos x \cdot e^x - 4\sin x \cdot e^x}{e^{2x}}$ <br>
# <br>
# $v$: <br>
# This function is the product of two table functions. Let us use the product rule: <br>
# $v' = (x^2)' \cdot \ln x + x^2 \cdot (\ln x)' = 2x\cdot \ln x + \frac{x^2}{x} = 2x\cdot \ln x + x$ <br>
# <br>
# Altogether <br>
# $F'(x) = u' + v' = \frac{4\cos x \cdot e^x - 4\sin x \cdot e^x}{e^{2x}} + 2x\cdot \ln x + x$
#
# + [markdown] id="oml5Q9J4UPzd"
# ## Derivative of a Composite Function
# + [markdown] id="UYdhXv7NUT8x"
# Consider the function:
#
# $$F(x) = \sin(x^3)$$
#
# This function is not elementary, and its derivative is not in the table. To compute its derivative, represent the function as a composition of two functions: <br>
# $u(v) = \sin v$ <br>
# $v(x) = x^3$ <br>
# Then $F(x) = u(v(x))$
#
# That is, we view $F(x) = \sin(x^3)$ as a function not of the argument $x$ but of the argument $v = x^3$. Such a function is called a **composite function**. Its derivative is computed with the formula (the chain rule):
#
# $$\left( u(v(x)) \right)' = u'(v) \cdot v'(x)$$
#
# That is, take the derivative of $u$ treating its argument as $v$ (not $x$), and multiply by the derivative of $v$ with respect to $x$.
# + [markdown] id="3fDiDkficLGu"
# > Example #11 <br>
# <br>
# Let us find the derivative of $F(x) = \sin(x^3)$: <br>
# <br>
# $u(v) = \sin v$, $u'(v) = \cos(v)$<br>
# $v(x) = x^3$, $v'(x) = 3x^2$<br>
# <br>
# <br>
# Altogether, $F'(x) = u'(v) \cdot v'(x) = \cos(x^3) \cdot 3x^2$
#
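# A quick numerical check of this chain-rule result (the check is ours, not part of the example):

```python
import math

def F(x):
    return math.sin(x ** 3)

def F_prime(x):
    # chain-rule result: cos(x^3) * 3x^2
    return math.cos(x ** 3) * 3 * x ** 2

x0 = 0.7
h = 1e-6
numeric = (F(x0 + h) - F(x0 - h)) / (2 * h)
print(numeric, F_prime(x0))  # the two agree up to numerical error
```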
# + [markdown] id="VxoJ726iRYY6"
# # Derivatives of Functions of Several Variables
# + [markdown] id="Zr7L6nzabwTh"
# ## Definition
# + [markdown] id="FkpkihYXeMcw"
# Consider a function of $n$ variables:
#
# $F(x_1, x_2, \dots, x_n)$.
#
# In the notebook on functions we said that increase and decrease of functions of several variables are defined coordinate-wise. <br>
# Derivatives of functions of several variables can also be defined coordinate-wise, i.e., by treating all arguments except the one we differentiate with respect to as constants (and in this course this is the only way we will define them). <br>
# The derivative of a function of several variables with respect to a single variable is called a **partial derivative**.
# + [markdown] id="qpZ9a0cIgIw7"
# > Example #12 <br>
# <br>
# $F(x_1, x_2, x_3) = x_1x_2 + x_3$<br>
# <br>
# The partial derivative of $F$ with respect to the variable $x_1$: <br>
# $$F'_{x_1} = \frac{\partial{F}}{\partial x_1} = (x_1x_2 + x_3)'_{x_1} = (x_1x_2)'_{x_1} + (x_3)'_{x_1} = x_2$$
# <br>
# The partial derivative of $F$ with respect to the variable $x_2$: <br>
# $$F'_{x_2} = \frac{\partial{F}}{\partial x_2} =(x_1x_2 + x_3)'_{x_2} = (x_1x_2)'_{x_2} + (x_3)'_{x_2} = x_1$$
# <br>
# The partial derivative of $F$ with respect to the variable $x_3$: <br>
# $$F'_{x_3} =\frac{\partial{F}}{\partial x_3} = (x_1x_2 + x_3)'_{x_3} = (x_1x_2)'_{x_3} + (x_3)'_{x_3} = 1$$
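# These partial derivatives can be checked numerically by varying one argument while holding the others fixed (a numeric check at the arbitrarily chosen point $(2, 3, 4)$):

```python
def F(x1, x2, x3):
    return x1 * x2 + x3

h = 1e-6
# partial derivative with respect to x1 at (2, 3, 4): vary x1 only
dF_dx1 = (F(2.0 + h, 3.0, 4.0) - F(2.0 - h, 3.0, 4.0)) / (2 * h)
print(dF_dx1)  # close to x2 = 3
```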
# + [markdown] id="UsC28Llvhkju"
# Since the derivative is itself a function, we can take the derivative of it too! We can differentiate twice with respect to the same variable, or with respect to different ones.
#
# The second derivative is denoted as follows: <br>
# $F''_{xx}$ is the derivative of $F$ with respect to $x$ and then with respect to $x$ again <br>
# $F''_{xy}$ is the derivative of $F$ with respect to $x$ and then with respect to $y$
# + [markdown] id="f-viD2uWhu0B"
# > Example #13 <br>
# <br>
# $F(x_1, x_2, x_3) = x_1x_2 + x_3$<br>
# <br>
# The partial derivative of $F$ with respect to the variable $x_1$: <br>
# $$F'_{x_1} = (x_1x_2 + x_3)'_{x_1} = (x_1x_2)'_{x_1} + (x_3)'_{x_1} = x_2$$<br>
# <br>
# The second derivative of $F$ with respect to $x_1$ twice: <br>
# $F''_{x_1x_1} = (x_2)'_{x_1} = 0$
# <br>
# The mixed second derivative of $F$ with respect to $x_1$ and then $x_2$: <br>
# $F''_{x_1x_2} = (x_2)'_{x_2} = 1$
#
# + [markdown] id="PKrYQXSZictv"
# There are also third, fourth, ..., $n$-th derivatives.
# + [markdown] id="Wz-gd9oDbyPi"
# ## Gradient
# + [markdown] id="BzVIxHThb7D1"
# The **gradient of a function of several variables** is the vector of the function's partial derivatives.
# More formally: $$grad F=\triangledown F = (F'_{x_1}(x_1, x_2, \dots, x_n), F'_{x_2}(x_1, x_2, \dots, x_n), \dots, F'_{x_n}(x_1, x_2, \dots, x_n))$$
# Or, equivalently:
# $$grad F=\triangledown F = \left(\frac{\partial F}{\partial x_1}(x_1, x_2, \dots, x_n),\frac{\partial F}{\partial x_2}(x_1, x_2, \dots, x_n), \dots, \frac{\partial F}{\partial x_n}(x_1, x_2, \dots, x_n)\right)$$
# + [markdown] id="6tApylr1cHgz"
# > Example #14
# Our familiar function $F(x_1, x_2, x_3) = x_1x_2 + x_3$.<br> In Example #12 we computed its partial derivatives with respect to all three variables. Hence the gradient of this function is: <br>
# $$grad F = \triangledown F = (x_2, x_1, 1)$$
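# The gradient can likewise be approximated numerically, one partial derivative at a time (the helper name `numeric_grad` is ours):

```python
def F(x1, x2, x3):
    return x1 * x2 + x3

def numeric_grad(f, point, h=1e-6):
    # central-difference estimate of each partial derivative
    grad = []
    for i in range(len(point)):
        plus = list(point)
        minus = list(point)
        plus[i] += h
        minus[i] -= h
        grad.append((f(*plus) - f(*minus)) / (2 * h))
    return grad

print(numeric_grad(F, (2.0, 3.0, 4.0)))  # close to (x2, x1, 1) = (3, 2, 1)
```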
# + [markdown] id="yj8K1BXic4-R"
# The gradient of a function shows how the function behaves along each variable. The gradient will come in handy in the notebook on optimization methods.
| Derivative_and_gradient.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # MergeSort
#
# MergeSort is a *divide and conquer* algorithm that divides a list into halves until each sub-list contains a single element, then merges the sub-lists back together, in order, until the entire list has been reassembled.
#
# ### Divide
# Our MergeSort code will focus first on the divide portion of the algorithm. If the list we receive has only a single element in it, the list can be considered sorted and we can return immediately. This is our recursion base case. If we have more than 1 element we need to split the list into equal halves and call MergeSort again for each half.
#
# ### Conquer
# Once you have split the list down to single elements, your mergesort will start merging lists, in order, until you have reassembled the entire list in order.
#
# ## Design
#
# First, let's sketch this out. It's clear we want a recursive function, but how will it govern its recursion?
#
# ```python
# def mergesort(items):
# # Base case, a list of 0 or 1 items is already sorted
# if len(items) <= 1:
# return items
#
# # Otherwise, find the midpoint and split the list
# # TODO
# # left =
# # right =
#
# # Call mergesort recursively with the left and right half
# left = mergesort(left)
# right = mergesort(right)
#
# # Merge our two halves and return
# return merge(left, right)
#
# def merge(left, right):
# # Given two ordered lists, merge them together in order,
# # returning the merged list.
# # TODO
# ```
#
# We have decomposed the problem, now we will address each piece separately.
#
# ### Divide
#
# We can use Python's floor-division operator `//` to find a midpoint. If the length of `items` is even, the two halves are equal; if it is odd, one half will be larger by one element.
# 
# 
# 
#
# +
list1 = [0, 1, 2, 3]
midpoint1 = len(list1) // 2
divide = len(list1) / 2
print('len: {}'.format(len(list1)))
print('List 1 midpoint: {}'.format(midpoint1))
print('divide: {}'.format(divide))
list2 = [4, 5, 6]
midpoint2 = len(list2) // 2
print('List 2 midpoint: {}'.format(midpoint2))
# -
# With our midpoints, we can slice the lists using Python's slice notation. Note that you can improve efficiency by not using slices (each slice copies its elements); instead, keep track of indices.
# +
left1 = list1[:midpoint1]
right1 = list1[midpoint1:]
print('List 1 left side: {}'.format(left1))
print('List 1 right side: {}'.format(right1))
left2 = list2[:midpoint2]
right2 = list2[midpoint2:]
print('List 2 left side: {}'.format(left2))
print('List 2 right side: {}'.format(right2))
# -
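# As noted above, the slicing approach copies sub-lists at every recursion level. A sketch of the index-based alternative, which recurses on `(lo, hi)` bounds and merges through one reusable buffer (our variant, not the version developed below):

```python
def mergesort_indexed(items):
    # Sort in place without per-level slicing: recurse on index bounds
    # and merge through a single auxiliary buffer.
    buffer = list(items)

    def sort(lo, hi):
        if hi - lo <= 1:
            return
        mid = (lo + hi) // 2
        sort(lo, mid)
        sort(mid, hi)
        # merge items[lo:mid] and items[mid:hi] into buffer[lo:hi]
        left, right = lo, mid
        for i in range(lo, hi):
            if left < mid and (right >= hi or items[left] <= items[right]):
                buffer[i] = items[left]
                left += 1
            else:
                buffer[i] = items[right]
                right += 1
        items[lo:hi] = buffer[lo:hi]

    sort(0, len(items))
    return items

print(mergesort_indexed([8, 3, 1, 7, 0, 10, 2]))  # [0, 1, 2, 3, 7, 8, 10]
```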
# That addresses our first TODO, now for the fun one.
#
# The `merge` function needs to take two sorted lists, and interleave them, yielding a _merged_ sorted list. We can accomplish that by tracking an index into both lists, and once one is exhausted, appending the remaining items from the other list.
# +
def merge(left, right):
merged = []
left_index = 0
right_index = 0
# Move through the lists until we have exhausted one
while left_index < len(left) and right_index < len(right):
# If left's item is larger, append right's item
# and increment the index
if left[left_index] > right[right_index]:
merged.append(right[right_index])
right_index += 1
# Otherwise, append left's item and increment
else:
merged.append(left[left_index])
left_index += 1
# Append any leftovers. Because we've broken from our while loop,
# we know at least one is empty, and the remaining:
# a) are already sorted
# b) all sort past our last element in merged
# (If the while loop never ran, all elements are still in left or right and are appended here.)
merged += left[left_index:]
merged += right[right_index:]
# return the ordered, merged list
return merged
# Test this out
merged = merge([1,3,7], [2,5,6])
print(merged)
# +
# Same solution as above, instrumented with print statements to trace the merge
def merge(left, right):
merged = []
left_index = 0
right_index = 0
while left_index < len(left) and right_index < len(right):
print('\nleft_index',left_index)
print('right_index',right_index)
if left[left_index] > right[right_index]:
merged.append(right[right_index])
print("inside if merged",merged)
right_index += 1
else:
merged.append(left[left_index])
print("inside else merged",merged)
left_index += 1
merged += left[left_index:]
merged += right[right_index:]
# return the ordered, merged list
return merged
# Test this out
merged = merge([1,3,7], [2,5,6])
print(merged)
# -
# Now we can combine our TODO resolutions into our sketch from above.
# +
def mergesort(items):
if len(items) <= 1:
return items
mid = len(items) // 2
left = items[:mid]
right = items[mid:]
print('left',left)
print('right',right)
left = mergesort(left)
right = mergesort(right)
print('after left',left)
print('after right',right)
return merge(left, right)
def merge(left, right):
merged = []
left_index = 0
right_index = 0
while left_index < len(left) and right_index < len(right):
if left[left_index] > right[right_index]:
merged.append(right[right_index])
right_index += 1
else:
merged.append(left[left_index])
left_index += 1
merged += left[left_index:]
merged += right[right_index:]
print('\nMERGED',merged)
print('\nleft[left_index:]',left[left_index:])
print('right[right_index:]',right[right_index:])
return merged
test_list_1 = [8, 3, 1, 7, 0, 10, 2]
test_list_2 = [1, 0]
test_list_3 = [97, 98, 99]
print('{} to {}'.format(test_list_1, mergesort(test_list_1)))
# Commented out to keep the trace output short:
# print('{} to {}'.format(test_list_2, mergesort(test_list_2)))
# print('{} to {}'.format(test_list_3, mergesort(test_list_3)))
# +
def mergesort(items):
# Base condition
if len(items) <= 1:
return items
    # Find the midpoint
    mid = len(items) // 2
    # Split the items into left and right halves
    left = items[:mid]
    right = items[mid:]
left = mergesort(left)
right = mergesort(right)
return merge(left, right)
def merge(left, right):
merged = []
left_index = 0
right_index = 0
while left_index < len(left) and right_index < len(right):
if left[left_index] > right[right_index]:
merged.append(right[right_index])
right_index += 1
else:
merged.append(left[left_index])
left_index += 1
merged += left[left_index:]
merged += right[right_index:]
return merged
test_list_1 = [8, 3, 1, 7, 0, 10, 2]
test_list_2 = [1, 0]
test_list_3 = [97, 98, 99]
print('{} to {}'.format(test_list_1, mergesort(test_list_1)))
print('{} to {}'.format(test_list_2, mergesort(test_list_2)))
print('{} to {}'.format(test_list_3, mergesort(test_list_3)))
# -
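# As a final sanity check (not part of the original solution), the finished sort can be compared against Python's built-in `sorted` on random inputs — a standalone copy so the cell runs on its own:

```python
import random

def mergesort(items):
    if len(items) <= 1:
        return items
    mid = len(items) // 2
    return merge(mergesort(items[:mid]), mergesort(items[mid:]))

def merge(left, right):
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] > right[j]:
            merged.append(right[j])
            j += 1
        else:
            merged.append(left[i])
            i += 1
    # Leftovers are already sorted and belong at the end
    return merged + left[i:] + right[j:]

# Compare against the built-in sort on many random lists
for _ in range(100):
    data = [random.randint(-50, 50) for _ in range(random.randint(0, 20))]
    assert mergesort(data) == sorted(data)
print("all random tests passed")
```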
| sorting/merge_sort.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernel_info:
# name: python3
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Pre-requisite Material for Foundations of Machine Learning
# * The following is a list of topics you should know to succeed in this course.
# * This covers material from calculus, linear algebra, statistics, and programming in Python.
# * Please review the following material carefully.
# * If you do not feel confident in all of the following material, you may want to reconsider taking the course at this time.
#
# # Python Review
#
# Some Python tutorials and references to go over are:
#
# * Learn Python the hard way: https://learnpythonthehardway.org/python3/
# * Python for absolute beginners: https://www.youtube.com/playlist?list=PLS1QulWo1RIaJECMeUT4LFwJ-ghgoSH6n
# * Python Docs tutorial: https://docs.python.org/3/tutorial/
# * Numpy quickstart tutorial: https://docs.scipy.org/doc/numpy/user/quickstart.html
#
# - Python For Data Science - A Cheat Sheet For Beginners: https://www.datacamp.com/community/tutorials/python-data-science-cheat-sheet-basics
# - Other Python Cheat Sheets: https://towardsdatascience.com/collecting-data-science-cheat-sheets-d2cdff092855 (Credits to <NAME>)
# # Calculus Review
#
# Calculus topics to review include:
#
# * Derivatives: https://www.youtube.com/watch?v=9vKqVkMQHKk&list=PLZHQObOWTQDMsr9K-rj53DwVRMYO3t5Yr&index=2
# * Chain rule and product rule: https://www.youtube.com/watch?v=YG15m2VwSjA&list=PLZHQObOWTQDMsr9K-rj53DwVRMYO3t5Yr&index=4
# * Integration: https://www.youtube.com/watch?v=rfG8ce4nNh0&list=PLZHQObOWTQDMsr9K-rj53DwVRMYO3t5Yr&index=8
# * Taylor Series: https://www.youtube.com/watch?v=3d6DsjIBzJ4&list=PLZHQObOWTQDMsr9K-rj53DwVRMYO3t5Yr&index=11
#
# Additional resources:
# * Full 3Blue1Brown Calculus series: https://www.youtube.com/playlist?list=PLZHQObOWTQDMsr9K-rj53DwVRMYO3t5Yr
# * ML-CheatSheet for Calculus: https://ml-cheatsheet.readthedocs.io/en/latest/calculus.html
# ## Linear Algebra Review
#
#
# Topics and definitions to know include:
# * Vector: https://www.youtube.com/watch?v=fNk_zzaMoSs&list=PLZHQObOWTQDPD3MizzM2xVFitgF8hE_ab&index=2&t=0s
# \begin{equation} \mathbf{x} = \left[ \begin{array}{c} x_1 \\ x_2 \\ \vdots \\ x_D\end{array} \right] \end{equation}
# * Matrix:
# \begin{equation}
# \mathbf{X} = \left[\!\begin{array}{c c c c}
# x_{11} & x_{12} & \cdots & x_{1n}\\
# x_{21} & x_{22} & \cdots & x_{2n}\\
# \vdots & \vdots & \ddots & \vdots\\
# x_{d1} & x_{d2} & \cdots & x_{dn}\end{array}\!\right]\! \!\in\! \mathcal{R}^{d \times n}
# \end{equation}
# * Transpose operation:
# \begin{equation}
# \mathbf{x}^T = \left[ x_1, x_2 , \cdots , x_D \right]
# \end{equation}
#
# \begin{equation}\left(\mathbf{A}^T\mathbf{B}\right)^T = \mathbf{B}^T\mathbf{A}\end{equation}
#
# * Vector/Matrix scaling: Given a vector $\mathbf{x} \in \mathbb{R}^D$ and a scalar value $a$, *what is $a\mathbf{x}$?* *What does this operation do geometrically?*
# * Vector/Matrix addition: Given $\mathbf{x} \in \mathbb{R}^D$ and $\mathbf{y} \in \mathbb{R}^D$, *what is $\mathbf{x} + \mathbf{y}$? What is the geometric interpretation?*
# * Vector/Matrix subtraction: Given $\mathbf{x} \in \mathbb{R}^D$ and $\mathbf{y} \in \mathbb{R}^D$, *what is $\mathbf{x} - \mathbf{y}$? What is the geometric interpretation?*
# * Inner product: $\mathbf{x}^T\mathbf{y} = \mathbf{y}^T\mathbf{x} = \sum_{i=1}^D x_iy_i$
# * Outer product: $xy^\top \!=\! \left[\!\begin{array}{c}
# x_1\\
# x_2\\
# \vdots\\
# x_d\end{array}\!\right]\!\!
# \left[\!\begin{array}{c}
# y_1\\
# y_2\\
# \vdots\\
# y_n\end{array}\!\right]^\top \!\!=\! \left[\!\begin{array}{c c c c}
# x_1y_1 & x_1y_2 & \cdots & x_1y_n\\
# x_2y_1 & x_2y_2 & \cdots & x_2y_n\\
# \vdots & \vdots & \ddots & \vdots\\
# x_dy_1 & x_dy_2 & \cdots & x_dy_n\end{array}\!\right]\! \!\in\! \mathcal{R}^{d \times n}.
# $
# * Linear transformations: https://www.youtube.com/watch?v=kYB8IZa5AuE&list=PLZHQObOWTQDPD3MizzM2xVFitgF8hE_ab&index=3
# * Inverse: https://www.youtube.com/watch?v=uQhTuRlWMxw&list=PLZHQObOWTQDPD3MizzM2xVFitgF8hE_ab&index=7
#
# * L-p norm: Given a vector, $\mathbf{x}$ and a $p$-value, the $l_p$ norm is defined as:
# \begin{eqnarray}
# \left\|\mathbf{x}\right\|_p = \left( \sum_{d=1}^D |x_d|^p \right)^{\frac{1}{p}}
# \end{eqnarray}
#
# So, if $p=2$, then the $l_2$ norm of a vector is:
# \begin{eqnarray}
# \left\|\mathbf{x}\right\|_2 = \left( \sum_{d=1}^D |x_d|^2 \right)^{\frac{1}{2}}
# \end{eqnarray}
#
# * Eigenvectors and Eigenvalues: https://www.youtube.com/watch?v=PFDu9oVAE-g&list=PLZHQObOWTQDPD3MizzM2xVFitgF8hE_ab&index=14
#
#
# +
# Associated python code to review some of the concepts listed above.
import numpy as np
a = 2
x = np.array([[1],[2],[3]])
y = np.array([[4],[5],[6]])
#Print Vector x
print('x:',x)
#Transpose Vector x
print('x.T:',x.T)
#Scale Vector x with scalar a
print('a*x:', a*x)
#Vector addition
print('x+y:', x+y)
#Vector subtraction
print('y-x:',y-x)
# +
#Several ways to compute the inner product of vectors x and y in python/numpy
#First by using matrix multiplication operator '@' that numpy supports:
print(x.T@y)
print(y.T@x)
#Second with numpy.matmul function for matrix multiplication:
print(np.matmul(x.T,y))
print(np.matmul(y.T,x))
#Third with numpy.inner function:
print(np.inner(x.T,y.T))
#Fourth with numpy.dot function; note that numpy.dot acts the same as '@' or 'np.matmul' for 2D arrays:
print(x.T.dot(y))
print(y.T.dot(x))
#for 1D arrays, acts similar to numpy.inner function:
print(np.dot([1,2,3],[4,5,6]))
# -
#Compute the outer product of vectors x and y
print(np.outer(x,y))
print(np.outer(x,y).T)
#Compute l-p norm for p = 2, 3, 1
x = np.array([1, 2, 3])
print(np.linalg.norm(x,ord=2))
print((x@x)**(1/2))
print(np.linalg.norm(x,ord=3))
print(np.linalg.norm(x,ord=1))
#Compute l-p norm for p = 2, 3, 1, 0
x = np.array([-1, -2, -3])
print(np.linalg.norm(x,ord=2))
print((x@x)**(1/2))
print(np.linalg.norm(x,ord=3))
print(np.linalg.norm(x,ord=1))
print(np.linalg.norm(x,ord=0))
#Compute l-p norm for p = 2, 3, 1, 0
x = np.array([-1, 0, 3])
print(np.linalg.norm(x,ord=2))
print((x@x)**(1/2))
print(np.linalg.norm(x,ord=3))
print(np.linalg.norm(x,ord=1))
print(np.linalg.norm(x,ord=0))
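# The eigenvector/eigenvalue topic above has no accompanying code cell; a minimal sketch with numpy (a small symmetric matrix chosen for illustration) verifies the defining property $\mathbf{A}\mathbf{v} = \lambda\mathbf{v}$:

```python
import numpy as np

# A symmetric matrix has real eigenvalues and orthogonal eigenvectors,
# so np.linalg.eigh is the appropriate routine here
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])
eigenvalues, eigenvectors = np.linalg.eigh(A)
print(eigenvalues)  # ascending order: [1. 3.]

# Verify A v = lambda v for each eigenpair (eigenvectors are the columns)
for lam, v in zip(eigenvalues, eigenvectors.T):
    assert np.allclose(A @ v, lam * v)
```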
# ### Note the notation:
# * scalar values are unbolded (e.g., $N$, $x$)
# * vectors are lower case and bolded (e.g., $\mathbf{x}$)
# * matrices are uppercase and bolded (e.g., $\mathbf{A} \in \mathbb{R}^{D \times N}$)
# * vectors are generally assumed to be column vectors (e.g., $\mathbf{x}^T = \left(x_1, \ldots, x_N\right)$ and $\mathbf{x} = \left(x_1, \ldots, x_N\right)^T$ )
#
# ### Additional reading and videos to review linear algebra concepts:
#
# * Strang, Gilbert, et al. Introduction to linear algebra. Vol. 4. Wellesley, MA: Wellesley-Cambridge Press, 2009.
# Chapters 1-7
#
# * Lay, <NAME>. "Linear Algebra and its Applications, 3rd updated Edition." (2005).
#
# * MITOpenCourseWare Linear Algebra: https://ocw.mit.edu/courses/mathematics/18-06-linear-algebra-spring-2010/video-lectures/
#
# * 3Blue1Brown Linear Algebra Review: https://www.youtube.com/playlist?list=PLZHQObOWTQDPD3MizzM2xVFitgF8hE_ab
#
# * SciPy Cheat Sheet: Linear Algebra in Python: https://www.datacamp.com/community/blog/python-scipy-cheat-sheet
#
# # Statistics Review
#
# Topics and definitions to know include:
#
# - Likelihood and Probability
# - Expected Value: https://www.youtube.com/watch?v=j__Kredt7vY
# - Variance and covariance: https://www.youtube.com/watch?v=ualmyZiPs9w
# - Random variables: https://www.youtube.com/watch?v=3v9w79NhsfI
# - Probability density functions: https://www.youtube.com/watch?v=Fvi9A_tEmXQ
# - Marginal and conditional probability: https://www.youtube.com/watch?v=CAXQvTKP8sg
# - Independence and conditional independence: https://www.youtube.com/watch?v=uzkc-qNVoOk
# - Normal/Gaussian distribution: https://www.youtube.com/watch?v=hgtMWR3TFnY
# - Central Limit Theorem: https://www.youtube.com/watch?v=JNm3M9cqWyc
# - Bayes' Rule: https://www.youtube.com/watch?v=XQoLVl31ZfQ
#
# Additional Reading: <NAME> al. "Deep Learning", MIT Press, 2016. Chapter 3: Probability and Information Theory, Pages 51-70. http://www.deeplearningbook.org/contents/prob.html
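# As a quick numeric illustration of Bayes' rule (the rates below are illustrative, not from the reading list): a test with 99% sensitivity and 95% specificity applied at 1% prevalence still yields a surprisingly low posterior probability of disease given a positive result.

```python
# P(D|+) = P(+|D) P(D) / [ P(+|D) P(D) + P(+|~D) P(~D) ]
sensitivity = 0.99   # P(+ | disease)
specificity = 0.95   # P(- | no disease)
prevalence = 0.01    # P(disease)

p_positive = sensitivity * prevalence + (1 - specificity) * (1 - prevalence)
posterior = sensitivity * prevalence / p_positive
print(round(posterior, 3))  # 0.167 -- only about 1 in 6
```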
| 00_Prereqs/Lecture00_Prerequisites.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ! pip install excel-to-json
# +
# Writing a module
# -
# %%writefile file1.py
def myfun(x):
return [num for num in range(x) if num%2==0]
list1 = myfun(15)
# +
# Writing a script
# +
# %%writefile file2.py
import file1
file1.list1.append(98)
print(file1.list1)
# -
# ! python file2.py
# +
# %%writefile file3.py
import file1
list2 = file1.myfun(30)
print(list2)
# -
# ! python file3.py
# %%writefile file4.py
import sys
import file1
num = int(sys.argv[1])
print(file1.myfun(num))
# ! python file4.py 50
# +
# Exploring the built-in function
# -
import math
math.cos(30)  # note: math.cos expects radians, not degrees
print(dir(math.cos))
print(dir(math.cos.__call__))
# +
# %%writefile exp1.py
def hello():
print('hello i am inside exp1 directly')
def helloInDirectly():
print('hello i am inside exp1 and saying indirectly')
if __name__ == '__main__':
hello()
else:
helloInDirectly()
# -
# ! python exp1.py
# +
# %%writefile exp2.py
import exp1
print ('I am in Exp2.py')
# -
# ! python exp2.py
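# The notebook title mentions packages, but the cells above only demonstrate modules. A package is simply a directory containing an `__init__.py`; a minimal sketch that builds and imports one at runtime (the `mypkg` name and its contents are hypothetical, made up for this example):

```python
import importlib
import os
import sys

# A package is a directory with an __init__.py file
os.makedirs('mypkg', exist_ok=True)
with open(os.path.join('mypkg', '__init__.py'), 'w') as f:
    f.write("from .helpers import evens\n")
with open(os.path.join('mypkg', 'helpers.py'), 'w') as f:
    f.write("def evens(x):\n    return [n for n in range(x) if n % 2 == 0]\n")

# Make sure the current directory is importable and caches are refreshed
sys.path.insert(0, os.getcwd())
importlib.invalidate_caches()

import mypkg
print(mypkg.evens(10))  # [0, 2, 4, 6, 8]
```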
| Modules and Packages.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python [conda env:cta] *
# language: python
# name: conda-env-cta-py
# ---
import numpy as np
from matplotlib import pyplot as plt
# %matplotlib inline
from sstcam_simulation import Camera
from sstcam_simulation.plotting import CameraImage
camera = Camera()
n_pixels = camera.mapping.n_pixels
# # Tutorial 2: Simulating Photoelectron Sources
#
# In sstcam-simulation, the simulation of the illumination source and the photosensor's response to it are coupled in the same methods, accounting for all the statistical fluctuations involved in photon counting. These methods are provided in the `PhotoelectronSource` class.
#
# The output of the photoelectron sources are a `Photoelectrons` object, which is a container for three one-dimensional arrays, containing information about the detected photoelectrons in this simulated event. Each entry in these arrays corresponds to one photoelectron.
# * `pixel`: the pixel the photoelectron is detected in
# * `time`: the time at which the photoelectron arrives in the photosensor
# * `charge`: the charge reported by the photosensor for this photoelectron (resulting from the SPE spectrum)
#
# This notebook demonstrates the different methods available in the `PhotoelectronSource` class.
# +
from sstcam_simulation.event.source import PhotoelectronSource
source = PhotoelectronSource(camera=camera, seed=1)
# PhotoelectronSource?
# -
# ## Night Sky Background (NSB)
#
# This photoelectron source simulates the random arrival of NSB photons, and takes the rate (in MHz) as an input.
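# Under the hood, random NSB arrivals like these can be modelled as a homogeneous Poisson process. A standalone numpy sketch for a single pixel (this is an illustration of the idea, not `sstcam_simulation`'s actual implementation):

```python
import numpy as np

rate_mhz = 10        # NSB rate in MHz (photoelectrons per microsecond)
duration_ns = 1000   # readout window in ns
rng = np.random.default_rng(1)

# Expected count = rate * duration; draw the actual count from a Poisson
# distribution, then scatter arrival times uniformly over the window
expected = rate_mhz * 1e6 * duration_ns * 1e-9
n = rng.poisson(expected)
times = rng.uniform(0, duration_ns, n)
print(expected)  # 10.0 expected photoelectrons in 1000 ns at 10 MHz
```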
# +
# source.get_nsb?
# -
photoelectrons = source.get_nsb(rate=10)
print("Total number of photoelectrons in event: ", len(photoelectrons))
plt.hist(photoelectrons.time, bins=100)
plt.xlabel("Photoelectron arrival time (ns)")
_ = plt.ylabel("N")
# Uniform distribution of arrival times, readout duration defined by:
print("Continuous readout duration = ", camera.continuous_readout_duration)
plt.hist(photoelectrons.charge, bins=1000)
plt.xlabel("Charge (p.e.)")
_ = plt.ylabel("N")
# Charge histogram reproduces SPE spectrum
image = CameraImage.from_coordinates(camera.mapping.pixel)
image.image = photoelectrons.get_photoelectrons_per_pixel(n_pixels=n_pixels)
image.add_colorbar("Total Number of Photoelectrons")
image = CameraImage.from_coordinates(camera.mapping.pixel)
image.image = photoelectrons.get_charge_per_pixel(n_pixels=n_pixels)
image.add_colorbar("Total charge reported (p.e.)")
# The difference between these two images is that the first (total number of photoelectrons) is the integer number of photoelectrons that were generated in each pixel during the duration. Whereas the second (total charge reported) is the sum of the reported charges from the pixels, i.e. post SPE spectrum sampling for each photoelectron.
# ## Uniform Illumination
#
# This photoelectron source simulates a uniform illumination source with a configurable intensity, similar to illuminating the camera with a laser in the lab. However, the illumination behaves as if the camera curvature is already accounted for, and the measured illumination is perfectly uniform (before Poisson and SPE spectrum effects).
# +
# source.get_uniform_illumination?
# -
photoelectrons = source.get_uniform_illumination(time=400, illumination=100)
print("Total number of photoelectrons in event: ", len(photoelectrons))
plt.hist(photoelectrons.time, bins=100)
plt.xlabel("Photoelectron arrival time (ns)")
_ = plt.ylabel("N")
# All photoelectrons arrive at same time
photoelectrons = source.get_uniform_illumination(time=400, illumination=100, laser_pulse_width=2)
plt.hist(photoelectrons.time, bins=100)
plt.xlabel("Photoelectron arrival time (ns)")
_ = plt.ylabel("N")
# Photoelectrons arrive with a spread according to the illumination pulse width
plt.hist(photoelectrons.charge, bins=1000)
plt.xlabel("Charge (p.e.)")
_ = plt.ylabel("N")
# Charge histogram reproduces SPE spectrum, as each entry in the array is still a single photoelectron
charge_per_pixel = photoelectrons.get_charge_per_pixel(n_pixels=n_pixels)
plt.hist(charge_per_pixel, bins=100)
plt.xlabel("Charge (p.e.)")
_ = plt.ylabel("N")
# Sum of charge produced in each pixel is centered on 100 p.e.
image = CameraImage.from_coordinates(camera.mapping.pixel)
image.image = charge_per_pixel
image.add_colorbar("Charge (p.e.)")
# ## Cherenkov Shower Ellipse
#
# A toymodel of the illumination from a Cherenkov shower can be simulated through specifying the ellipse and timing properties.
# +
# source.get_cherenkov_shower?
# -
photoelectrons = source.get_cherenkov_shower(
centroid_x=0.02,
centroid_y=0.02,
length=0.04,
width=0.01,
psi=40,
time_gradient=200,
time_intercept=400,
intensity=10000,
cherenkov_pulse_width=3,
)
print("Total number of photoelectrons in event: ", len(photoelectrons))
image = CameraImage.from_coordinates(camera.mapping.pixel)
image.image = photoelectrons.get_charge_per_pixel(n_pixels=n_pixels)
image.add_colorbar("Charge (p.e.)")
image = CameraImage.from_coordinates(camera.mapping.pixel)
image.image = photoelectrons.get_average_time_per_pixel(n_pixels=n_pixels)
image.add_colorbar("Average Time (ns)")
# # Random Cherenkov Shower
#
# It is also possible to obtain a random Cherenkov shower, where the ellipse and timing parameters are extracted from uniform distributions.
# +
# source.get_random_cherenkov_shower?
# -
source = PhotoelectronSource(camera=camera)
for i in range(10):
photoelectrons = source.get_random_cherenkov_shower(cherenkov_pulse_width=3)
image = CameraImage.from_coordinates(camera.mapping.pixel)
image.image = photoelectrons.get_charge_per_pixel(n_pixels=n_pixels)
image.add_colorbar("Charge (p.e.)")
| tutorials/2_photoelectron_source.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] id="xRR-Opzm3cHF"
# # Description
# -
# This notebook implements and assesses EEG single-trial classification of motor attempt in ten SCI patients using a linear, multiclass, probabilistic Support Tensor Machine (STM).
# ## Libs
# +
import mne
from scipy import stats
from mne.stats import bonferroni_correction, fdr_correction
from mne import Epochs, pick_types, events_from_annotations
from mne.channels import make_standard_montage
import pickle
import wget
import pandas as pd
import zipfile
import os
import shutil
import numpy as np
import time
from numpy.fft import rfft
import plotly.graph_objects as go
from plotly.subplots import make_subplots
from itertools import combinations
from sklearn.model_selection import ShuffleSplit, cross_val_score
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report
from sklearn.metrics import precision_score, recall_score, accuracy_score, f1_score
from spectrum import *
from scipy import fft
from scipy import signal
from scipy.stats import binom
from scipy.signal import butter, lfilter, filtfilt
import pystmm
import warnings
warnings.filterwarnings('ignore')
# %matplotlib inline
# -
# ## Functions
# ### 0. Get EEG Data as a Dataframe and as a MNE raw from BNCI Horizon
def getEEGData(URL, User):
# Data Description: https://lampx.tugraz.at/~bci/database/001-2019/dataset_description_v1-1.pdf
# Offline: Type run and event codes
dictTypeRun = {'Run 1':'EyeMovements',
'Run 2':'Rest',
'Run 3':'AttemptedMovement',
'Run 4':'AttemptedMovement',
'Run 5':'AttemptedMovement',
'Run 6':'AttemptedMovement',
'Run 7':'AttemptedMovement',
'Run 8':'EyeMovements',
'Run 9':'Rest',
'Run 10':'AttemptedMovement',
'Run 11':'AttemptedMovement',
'Run 12':'AttemptedMovement',
'Run 13':'AttemptedMovement',
'Run 14':'EyeMovements',
'Run 15':'Rest'
}
dictEvents = dict(TrialStart = 0x300,
Beep = 0x311,
FixationCross = 0x312,
SupinationClassCue = 0x308,
PronationClassCue = 0x309,
HandOpenClassCue = 0x30B,
PalmarGraspClassCue = 0x39D,
LateralGraspClassCue = 0x39E)
dictEvents = dict(zip([str(val) for val in list(dictEvents.values())],list(dictEvents.keys())))
dictColNames = dict(zip(list(dictEvents.keys()), list(range(len(list(dictEvents.keys()))))))
# Unzip on User folder
try:
shutil.rmtree(User)
os.mkdir(User)
except:
os.mkdir(User)
if not os.path.exists(User+'.zip'):
# Download file if not exist
print('Downloading: ',User,' from ',URL+User+'.zip')
filename = wget.download(URL+User+'.zip')
else:
filename = User+'.zip'
with zipfile.ZipFile(filename, 'r') as zip_ref:
zip_ref.extractall(User)
RunFiles = [os.path.join(User,f) for f in os.listdir(User) if os.path.isfile(os.path.join(User, f)) and f.lower().endswith('.gdf')]
# Prepare DataFrame
listDFRaw = []
typeRun = []
numRun = []
samplingFrequencyList = []
raw_all = mne.concatenate_raws([mne.io.read_raw_gdf(f, preload=True) for f in RunFiles])
for run in dictTypeRun:
rfile = None
for runfile in RunFiles:
if run in runfile:
rfile = runfile
break
if rfile is None:
continue
raw = mne.io.read_raw_gdf(rfile)
samplingFrequencyList.append(raw.info['sfreq'])
ch_names = raw.info['ch_names']
dfData = pd.DataFrame(data=raw.get_data().T,columns=ch_names)
dfData = dfData.dropna(how='all')
dfData.fillna(method='bfill',inplace=True)
dfData = dfData.reset_index()
dfData = dfData[ch_names]
events, dictEventsRun = mne.events_from_annotations(raw)
dictEventsRun = dict(zip([val for val in list(dictEventsRun.values())],list(dictEventsRun.keys())))
sampleTime, eventNum = list(events[:,0]), list(events[:,2])
listEventsPerColumn = [[0]*len(dfData)]*len(dictColNames)
listEventsPerColumn = np.array(listEventsPerColumn)
for s, e in zip(sampleTime, eventNum):
if dictEventsRun[e] in dictColNames:
listEventsPerColumn[dictColNames[dictEventsRun[e]],s] = 1
dfEvents = pd.DataFrame(data=listEventsPerColumn.T,columns=[dictEvents[val] for val in list(dictColNames.keys())])
dfRaw = pd.concat([dfEvents.copy(),dfData.copy()],axis=1,ignore_index=True)
dfRaw.columns = [dictEvents[val] for val in list(dictColNames.keys())] + ch_names
listDFRaw.append(dfRaw.copy())
typeRun += [dictTypeRun[run]]*len(dfData)
numRun += [run]*len(dfData)
shutil.rmtree(User)
# Build DataFrame, MNE raw and return
df = pd.concat(listDFRaw, ignore_index=True)
df['TypeRun'] = typeRun
df['Run'] = numRun
df = df[['Run','TypeRun']+[dictEvents[val] for val in list(dictColNames.keys())]+ch_names]
return df, raw_all, list(set(samplingFrequencyList))[0]
# ### 1. Multitaper spectral estimation
def multitaperSpectral(y, fs, NW=2.5,k=4):
N=len(y)
dt = 1.0/fs
# The multitapered method
[tapers, eigen] = dpss(N, NW, k)
Sk_complex, weights, eigenvalues=pmtm(y, e=eigen, v=tapers, NFFT=N, show=False)
Sk = abs(Sk_complex)**2
Sk = np.mean(Sk * np.transpose(weights), axis=0) * dt
return Sk
# ### 2. Band-pass filtering
# + id="e4czDqxo3eD3"
def butterBandpass(lowcut, highcut, fs, order=5):
nyq = 0.5 * fs
low = lowcut / nyq
high = highcut / nyq
b, a = butter(order, [low, high], btype='band')
return b, a
def butterBandpassFilter(data, lowcut, highcut, fs, order=5):
    b, a = butterBandpass(lowcut, highcut, fs, order=order)
    y = filtfilt(b, a, data)
    return y
def dfBandpassFiltering(df,eeg_channels_of_interest,lowcut,highcut, fs, order):
    for col in eeg_channels_of_interest:
        df[col] = butterBandpassFilter(df[col].values, lowcut, highcut, fs, order=order)
    return df
# -
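# A quick standalone check of this band-pass design on a synthetic signal (the filter is copied locally so the cell runs on its own; frequencies chosen for illustration): a 15 Hz tone inside the 8-24 Hz band should survive, while a 2 Hz tone is suppressed.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def butterBandpass(lowcut, highcut, fs, order=5):
    nyq = 0.5 * fs
    b, a = butter(order, [lowcut / nyq, highcut / nyq], btype='band')
    return b, a

fs = 256
t = np.arange(0, 4, 1 / fs)
# Out-of-band 2 Hz component plus in-band 15 Hz component
sig = np.sin(2 * np.pi * 2 * t) + np.sin(2 * np.pi * 15 * t)
b, a = butterBandpass(8, 24, fs, order=4)
filtered = filtfilt(b, a, sig)

# After filtering, the dominant spectral peak should sit at 15 Hz
spectrum = np.abs(np.fft.rfft(filtered))
freqs = np.fft.rfftfreq(len(filtered), 1 / fs)
print(freqs[np.argmax(spectrum)])  # peak near 15 Hz
```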
# ### 3. CAR filtering on Dataframe
def dfCARFiltering(df, channels_interest):
    # Common Average Reference: subtract the per-sample mean across channels
    df = df.copy()
    df[channels_interest] = df[channels_interest].sub(df[channels_interest].mean(axis=1), axis=0)
    return df
# ### 4. Get Chance Level
def getChanceLevel(c,n,alpha):
'''
c: Number of classes
n: total samples
alpha: statistical significance level
'''
return (1/n)*binom.ppf(1-alpha, n, 1/c)
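# For example, with 4 classes and alpha = 0.05 (the class count matches this analysis; the trial count of 288 is illustrative), the statistically significant chance level sits a few points above the naive 25%:

```python
from scipy.stats import binom

def getChanceLevel(c, n, alpha):
    # Highest accuracy a random guesser reaches with probability alpha
    return (1 / n) * binom.ppf(1 - alpha, n, 1 / c)

level = getChanceLevel(c=4, n=288, alpha=0.05)
print(level)  # roughly 0.29, i.e. above the naive 1/4
```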
# ### 5. Confidence Intervals by Bootstrapping
# + id="4-znvoXH3gav"
# Retrieved from: http://www.jtrive.com/the-empirical-bootstrap-for-confidence-intervals-in-python.html
def bootstrap(data, n=1000, func=np.mean):
"""
Generate `n` bootstrap samples, evaluating `func`
at each resampling. `bootstrap` returns a function,
which can be called to obtain confidence intervals
of interest.
"""
simulations = list()
sample_size = len(data)
xbar_init = np.mean(data)
for c in range(n):
itersample = np.random.choice(data, size=sample_size, replace=True)
simulations.append(func(itersample))
simulations.sort()
def ci(p):
"""
Return 2-sided symmetric confidence interval specified
by p.
"""
u_pval = (1+p)/2.
l_pval = (1-u_pval)
l_indx = int(np.floor(n*l_pval))
u_indx = int(np.floor(n*u_pval))
return(simulations[l_indx],simulations[u_indx])
return(ci)
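# A usage sketch for the bootstrap above (a standalone, slightly condensed copy, seeded for reproducibility; the simulated accuracy scores are made up for illustration):

```python
import numpy as np

def bootstrap(data, n=1000, func=np.mean):
    # Resample with replacement n times; return a function mapping a
    # confidence level p to the empirical 2-sided interval
    simulations = sorted(func(np.random.choice(data, size=len(data), replace=True))
                         for _ in range(n))
    def ci(p):
        u_pval = (1 + p) / 2.0
        l_indx = int(np.floor(n * (1 - u_pval)))
        u_indx = int(np.floor(n * u_pval))
        return simulations[l_indx], simulations[u_indx]
    return ci

np.random.seed(0)
data = np.random.normal(loc=0.75, scale=0.05, size=50)  # e.g. 50 accuracy scores
lower, upper = bootstrap(data, n=1000)(0.95)
print(lower, upper)  # 95% CI for the mean accuracy, near 0.75
```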
# + [markdown] id="PtQ1tKV73iUM"
# ## Main
# + [markdown] id="NdTbqK5Q3lMM"
# ### 0. Prepare Representations of Epochs per patients
# +
users = ['P01','P02','P03','P04','P05','P06','P07','P08','P09','P10']
totalUsers = len(users)
eegChannels = ['AFz', 'F3', 'F1', 'Fz', 'F2', 'F4', 'FFC5h', 'FFC3h', 'FFC1h', 'FFC2h',
'FFC4h', 'FFC6h', 'FC5', 'FC3', 'FC1', 'FCz', 'FC2', 'FC4', 'FC6', 'FCC5h',
'FCC3h', 'FCC1h', 'FCC2h', 'FCC4h', 'FCC6h', 'C5', 'C3', 'C1', 'Cz', 'C2',
'C4', 'C6', 'CCP5h', 'CCP3h', 'CCP1h', 'CCP2h', 'CCP4h', 'CCP6h', 'CP5',
'CP3', 'CP1', 'CPz', 'CP2', 'CP4', 'CP6', 'CPP5h', 'CPP3h', 'CPP1h', 'CPP2h',
'CPP4h', 'CPP6h', 'P5', 'P3', 'P1', 'Pz', 'P2', 'P4', 'P6', 'PPO1h', 'PPO2h', 'POz']
movementTypes = ['HandOpenClassCue', 'PalmarGraspClassCue','LateralGraspClassCue']
classNames = ['Rest'] +[val.replace('ClassCue','') for val in movementTypes]
additionalCols = ['Run','TypeRun','TrialStart','Beep','FixationCross']
tmin = -2.0
tmax = 3.0
NW = 2.5
totalTapers = 4
representationChannelTimeEpochs = []
representationChannelChanelTimeEpochs = []
representationChannelFrequencyEpochs = []
representationChannelChannelFrequencyEpochs = []
representationChannelTimeLabels = []
representationChannelChanelTimeLabels = []
representationChannelFrequencyLabels = []
representationChannelChannelFrequencyLabels = []
for indexUser in range(totalUsers):
startProcessing = time.time()
print('Preparing data for user: ', users[indexUser])
print('Getting data ...')
df, rawMNE, sfreq = getEEGData(URL='http://bnci-horizon-2020.eu/database/data-sets/001-2019/', User=users[indexUser])
listChannelTimeEpochs = []
listChannelChanelTimeEpochs = []
listChannelFrequencyEpochs = []
listChannelChannelFrequencyEpochs = []
listChannelTimeLabels = []
listChannelChanelTimeLabels = []
listChannelFrequencyLabels = []
listChannelChannelFrequencyLabels = []
dfFiltered = df[additionalCols+movementTypes+eegChannels]
dfFiltered = dfCARFiltering(dfFiltered, eegChannels)
### 1. Getting REST EEG Epochs
# Filter by TypeRun equal to Rest
dfRest = dfFiltered[dfFiltered['TypeRun']=='Rest']
dfRest.reset_index(inplace=True)
#Signal rest
signalRest = dfRest['TrialStart'].values.tolist()
#Retrieve indexes where signal equals 1
indexesOnes = [i for i,val in enumerate(signalRest) if val == 1]
timesOnes = [val/(sfreq) for val in indexesOnes]
consecutiveOnes = [timesOnes[i+1]-timesOnes[i] for i,val in enumerate(timesOnes) if i+1 < len(timesOnes)]
#Estimating start and end for every rest epoch
totalEpochs = 72
epochsPerOnes = round(totalEpochs/len(indexesOnes))
starts = []
ends = []
startstimes = []
endstimes = []
offset = 0.5*sfreq # One half second
totalSamples = round(1 + ((tmax - tmin)*sfreq))
steps = round(0.5*totalSamples)
for starIndexOnes in indexesOnes:
for i in range(epochsPerOnes):
start = round(starIndexOnes+offset+(i*steps))
starts.append(start)
ends.append(start+totalSamples)
#Transforming from sample to time
startstimes = [val/(sfreq) for val in starts]
endstimes = [val/(sfreq) for val in ends]
epochsChannelTimeRest = []
epochsChannelChannelRest = []
epochsChannelFrequencyRest = []
epochsChannelChannelFrequencyRest = []
labelsRestEpochs = []
print('Creating representations for REST...')
for start,end in zip(starts,ends):
epoch = dfRest.loc[start:end,eegChannels].values.T #Ch-Time Representation
epochCov = np.cov(epoch) #Ch-Ch (time) Representation
epochMultitaper = np.zeros(epoch.shape) #Ch-Freq Representation
for chann in range(epoch.shape[0]):
r = multitaperSpectral(y=epoch[chann,:], fs=sfreq, NW=NW,k=totalTapers)
epochMultitaper[chann,:] = r
epochMultitaperCov = np.cov(epochMultitaper) #Ch-Ch (Freq.) Representation
epochsChannelTimeRest.append(epoch.copy())
epochsChannelChannelRest.append(epochCov.copy())
epochsChannelFrequencyRest.append(epochMultitaper.copy())
epochsChannelChannelFrequencyRest.append(epochMultitaperCov.copy())
labelsRestEpochs.append(0)
epochsChannelTimeRest = np.stack(epochsChannelTimeRest,axis=0)
epochsChannelChannelRest = np.stack(epochsChannelChannelRest,axis=0)
epochsChannelFrequencyRest = np.stack(epochsChannelFrequencyRest,axis=0)
epochsChannelChannelFrequencyRest = np.stack(epochsChannelChannelFrequencyRest,axis=0)
labelsRestEpochs = np.array(labelsRestEpochs)
listChannelTimeEpochs.append(epochsChannelTimeRest.copy())
listChannelChanelTimeEpochs.append(epochsChannelChannelRest.copy())
listChannelFrequencyEpochs.append(epochsChannelFrequencyRest.copy())
listChannelChannelFrequencyEpochs.append(epochsChannelChannelFrequencyRest.copy())
listChannelTimeLabels.append(labelsRestEpochs.copy())
listChannelChanelTimeLabels.append(labelsRestEpochs.copy())
listChannelFrequencyLabels.append(labelsRestEpochs.copy())
listChannelChannelFrequencyLabels.append(labelsRestEpochs.copy())
### 2. Getting MOTOR ATTEMPT EEG Epochs
for it, mov in enumerate(movementTypes):
        print('Creating representations for', mov, '...')
dfMA = dfFiltered[dfFiltered['TypeRun']=='AttemptedMovement']
dfMA.reset_index(inplace=True)
#Signal MA
signalMA = dfMA[mov].values.tolist()
#Retrieve indexes where signal equals 1
indexesOnes = [i for i,val in enumerate(signalMA) if val == 1]
timesOnes = [val/(sfreq) for val in indexesOnes]
consecutiveOnes = [timesOnes[i+1]-timesOnes[i] for i,val in enumerate(timesOnes) if i+1 < len(timesOnes)]
#Estimating start and end for every epoch
totalSamples = round(1 + ((tmax - tmin)*sfreq))
starts = []
ends = []
startstimes = []
endstimes = []
        for starIndexOnes in indexesOnes:
            # Round to an integer sample index (sfreq*tmin is a float)
            start = round(starIndexOnes + sfreq*tmin)
            starts.append(start)
            ends.append(start+totalSamples)
#Transforming from sample to time
startstimes = [val/(sfreq) for val in starts]
endstimes = [val/(sfreq) for val in ends]
epochsChannelTimeMA = []
epochsChannelChannelMA = []
epochsChannelFrequencyMA = []
epochsChannelChannelFrequencyMA = []
labelsMAEpochs = []
for start,end in zip(starts,ends):
epoch = dfMA.loc[start:end,eegChannels].values.T #Ch-Time Representation
epochCov = np.cov(epoch) #Ch-Ch (time) Representation
epochMultitaper = np.zeros(epoch.shape) #Ch-Freq Representation
for chann in range(epoch.shape[0]):
r = multitaperSpectral(y=epoch[chann,:], fs=sfreq, NW=NW,k=totalTapers)
epochMultitaper[chann,:] = r
epochMultitaperCov = np.cov(epochMultitaper) #Ch-Ch (Freq.) Representation
epochsChannelTimeMA.append(epoch.copy())
epochsChannelChannelMA.append(epochCov.copy())
epochsChannelFrequencyMA.append(epochMultitaper.copy())
epochsChannelChannelFrequencyMA.append(epochMultitaperCov.copy())
labelsMAEpochs.append(it+1)
epochsChannelTimeMA = np.stack(epochsChannelTimeMA,axis=0)
epochsChannelChannelMA = np.stack(epochsChannelChannelMA,axis=0)
epochsChannelFrequencyMA = np.stack(epochsChannelFrequencyMA,axis=0)
epochsChannelChannelFrequencyMA = np.stack(epochsChannelChannelFrequencyMA,axis=0)
labelsMAEpochs = np.array(labelsMAEpochs)
listChannelTimeEpochs.append(epochsChannelTimeMA.copy())
listChannelChanelTimeEpochs.append(epochsChannelChannelMA.copy())
listChannelFrequencyEpochs.append(epochsChannelFrequencyMA.copy())
listChannelChannelFrequencyEpochs.append(epochsChannelChannelFrequencyMA.copy())
listChannelTimeLabels.append(labelsMAEpochs.copy())
listChannelChanelTimeLabels.append(labelsMAEpochs.copy())
listChannelFrequencyLabels.append(labelsMAEpochs.copy())
listChannelChannelFrequencyLabels.append(labelsMAEpochs.copy())
X = np.concatenate(listChannelTimeEpochs, axis=0)
y = np.concatenate(listChannelTimeLabels)
representationChannelTimeEpochs.append(X.copy())
representationChannelTimeLabels.append(y.copy())
X = np.concatenate(listChannelChanelTimeEpochs, axis=0)
y = np.concatenate(listChannelChanelTimeLabels)
representationChannelChanelTimeEpochs.append(X.copy())
representationChannelChanelTimeLabels.append(y.copy())
X = np.concatenate(listChannelFrequencyEpochs, axis=0)
y = np.concatenate(listChannelFrequencyLabels)
representationChannelFrequencyEpochs.append(X.copy())
representationChannelFrequencyLabels.append(y.copy())
X = np.concatenate(listChannelChannelFrequencyEpochs, axis=0)
y = np.concatenate(listChannelChannelFrequencyLabels)
representationChannelChannelFrequencyEpochs.append(X.copy())
representationChannelChannelFrequencyLabels.append(y.copy())
endProcessing = time.time()
print('Time elapsed: ', str((endProcessing-startProcessing)/60.0),' min.')
# + [markdown] id="2wnSCFVt3_mo"
# ## 1. Setting
# + colab={"base_uri": "https://localhost:8080/"} id="DSK_dvab6oIi" outputId="c78a19a3-d335-4727-ec48-e59e6ed02522"
dictEvents = dict(TrialStart = 0x300,
Beep = 0x311,
FixationCross = 0x312,
SupinationClassCue = 0x308,
PronationClassCue = 0x309,
HandOpenClassCue = 0x30B,
PalmarGraspClassCue = 0x39D,
LateralGraspClassCue = 0x39E)
dictEvents
# + id="u92Vpst_4Bgp"
typeRun = 'AttemptedMovement'
typeRest = 'Rest'
eeg_channels = ['AFz', 'F3', 'F1', 'Fz', 'F2', 'F4', 'FFC5h', 'FFC3h', 'FFC1h', 'FFC2h',
'FFC4h', 'FFC6h', 'FC5', 'FC3', 'FC1', 'FCz', 'FC2', 'FC4', 'FC6', 'FCC5h',
'FCC3h', 'FCC1h', 'FCC2h', 'FCC4h', 'FCC6h', 'C5', 'C3', 'C1', 'Cz', 'C2',
'C4', 'C6', 'CCP5h', 'CCP3h', 'CCP1h', 'CCP2h', 'CCP4h', 'CCP6h', 'CP5',
'CP3', 'CP1', 'CPz', 'CP2', 'CP4', 'CP6', 'CPP5h', 'CPP3h', 'CPP1h', 'CPP2h',
'CPP4h', 'CPP6h', 'P5', 'P3', 'P1', 'Pz', 'P2', 'P4', 'P6', 'PPO1h', 'PPO2h', 'POz']
movementTypes = ['SupinationClassCue','PronationClassCue','HandOpenClassCue', 'PalmarGraspClassCue','LateralGraspClassCue']
movementTypes_of_interest = movementTypes[2:]
target_names = ['Rest'] +[val for val in movementTypes_of_interest]
target_names = [val.replace('ClassCue','') for val in target_names]
maincols = ['Run','TypeRun','TrialStart','Beep','FixationCross']
eeg_channels_of_interest = ['AFz', 'F3', 'F1', 'Fz', 'F2', 'F4', 'FFC5h', 'FFC3h', 'FFC1h', 'FFC2h',
'FFC4h', 'FFC6h', 'FC5', 'FC3', 'FC1', 'FCz', 'FC2', 'FC4', 'FC6', 'FCC5h',
'FCC3h', 'FCC1h', 'FCC2h', 'FCC4h', 'FCC6h', 'C5', 'C3', 'C1', 'Cz', 'C2',
'C4', 'C6', 'CCP5h', 'CCP3h', 'CCP1h', 'CCP2h', 'CCP4h', 'CCP6h', 'CP5',
'CP3', 'CP1', 'CPz', 'CP2', 'CP4', 'CP6', 'CPP5h', 'CPP3h', 'CPP1h', 'CPP2h',
'CPP4h', 'CPP6h', 'P5', 'P3', 'P1', 'Pz', 'P2', 'P4', 'P6', 'PPO1h', 'PPO2h', 'POz']
colors = ['red','green', 'blue', 'indigo','yellow','gray']
lowcut = 8
highcut = 24
butter_order = 4
tmin = -2.0
tmax = 3.0
bootstrapiterations = 100
confidenceinterval = .95
channel = 'C2'
Users = ['P01','P02','P03','P04','P05','P06','P07','P08','P09','P10']
listclf = []
dataResult = []
for indexuser in range(len(Users)):
df, sfreq, raw_mne = getDFData(URL='http://bnci-horizon-2020.eu/database/data-sets/001-2019/', User=Users[indexuser])
sfreq = sfreq[0]
dfFiltered = df[maincols+movementTypes_of_interest+eeg_channels_of_interest]
dfFiltered = CARFiltering(dfFiltered, eeg_channels_of_interest)
#dfFiltered = BANDPassFiltering(dfFiltered,eeg_channels_of_interest,lowcut,highcut, sfreq, 5)
#### **1. How many "Trial Start" events occur in Runs of type Rest, and what is the temporal distance between consecutive "Trial Start" events?**
listEPOCHSALL = []
listLABELSALL = []
### Analysis
# Filter by TypeRun equal to Rest
dfRest = dfFiltered[dfFiltered['TypeRun']=='Rest']
dfRest.reset_index(inplace=True)
#Signal rest
signalRest = dfRest['TrialStart'].values.tolist()
#Retrieve indexes where signal equals 1
indexesOnes = [i for i,val in enumerate(signalRest) if val == 1]
timesOnes = [val/(sfreq) for val in indexesOnes]
consecutiveOnes = [timesOnes[i+1]-timesOnes[i] for i,val in enumerate(timesOnes) if i+1 < len(timesOnes)]
### Report
print('Report on REST:')
print('Total samples = %d'%(len(signalRest)))
print('Sampling frequency (Hz) = %d'%(sfreq))
print('Signal duration (s) = %.2f'%(len(signalRest)/sfreq))
print('Signal duration (min) = %.2f'%(len(signalRest)/(60.0*sfreq)))
print('Total trial start events = '+str(len(indexesOnes)))
print('Indexes (sample) = '+str(indexesOnes))
print('Times (s) = '+str(timesOnes))
print('Duration between consecutive ones (s) = '+str(consecutiveOnes))
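The event-timing bookkeeping above (sample indexes, times, then consecutive differences via a list comprehension) can be written more compactly with `np.diff`. The sampling rate and event sample indexes below are made-up illustrative values, not taken from the dataset:

```python
import numpy as np

# Compact equivalent of the event-timing bookkeeping above using np.diff.
# sfreq and the event sample indexes are made-up illustrative values.
sfreq = 256
indexes_ones = np.array([0, 15360, 30720, 46080])   # hypothetical event samples

times_ones = indexes_ones / sfreq        # seconds
consecutive = np.diff(times_ones)        # spacing between consecutive events
print(times_ones.tolist(), consecutive.tolist())
# -> [0.0, 60.0, 120.0, 180.0] [60.0, 60.0, 60.0]
```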
#### **2. What are the 72 epochs associated with REST?**
#R:/ From every "Trial Start" event we can slide a window of duration tmax - tmin.
#Estimating start and end for every epoch
totalEpochs = 72
epochsPerOnes = round(totalEpochs/len(indexesOnes))
starts = []
ends = []
startstimes = []
endstimes = []
offset = 0.5*sfreq # One half second
totalSamples = round(1 + ((tmax - tmin)*sfreq))
steps = round(0.5*totalSamples) # 50% advance (50% overlap between consecutive windows)
for starIndexOnes in indexesOnes:
for i in range(epochsPerOnes):
start = round(starIndexOnes+offset+(i*steps))
starts.append(start)
ends.append(start+totalSamples)
#Transforming from sample to time
startstimes = [val/(sfreq) for val in starts]
endstimes = [val/(sfreq) for val in ends]
#Print and check
print(startstimes)
print(endstimes)
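The windowing arithmetic above can be sanity-checked with concrete numbers. `sfreq = 256` Hz is an assumed example value (the actual rate comes from the dataset); `tmin`/`tmax` match the settings cell:

```python
# Concrete check of the epoch-windowing arithmetic above.
# sfreq = 256 Hz is an assumed example value; tmin/tmax match the settings.
sfreq = 256
tmin, tmax = -2.0, 3.0

total_samples = round(1 + (tmax - tmin) * sfreq)  # samples per 5 s epoch
step = round(0.5 * total_samples)                 # 50% advance -> 50% overlap
offset = 0.5 * sfreq                              # skip 0.5 s after the event

starts = [round(offset + i * step) for i in range(3)]
print(total_samples, step, starts)  # -> 1281 640 [128, 768, 1408]
```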
epochsRest = []
labelsRest = []
for start,end in zip(starts,ends):
epoch = dfRest.loc[start:end,eeg_channels_of_interest].values.T
epoch_multitaper = np.zeros(epoch.shape)
for chann in range(epoch.shape[0]):
r = multitaper_espectral(y=epoch[chann,:], fs=sfreq, NW=2.5,k=4)
epoch_multitaper[chann,:] = r
epoch_multitaper = np.cov(epoch_multitaper)
epochsRest.append(epoch_multitaper.copy())
labelsRest.append(0)
epochsRest = np.stack(epochsRest,axis=0)
labelsRest = np.array(labelsRest)
epochsRest.shape, labelsRest.shape
listEPOCHSALL.append(epochsRest.copy())
listLABELSALL.append(labelsRest.copy())
#### **3. How many epochs are in the data for each attempted movement (e.g. Lateral Grasp), and which are those epochs?**
### Analysis
for it, mov in enumerate(movementTypes_of_interest):
dfLG = dfFiltered[dfFiltered['TypeRun']=='AttemptedMovement']
dfLG.reset_index(inplace=True)
#Signal rest
signalLG = dfLG[mov].values.tolist()
#Retrieve indexes where signal equals 1
indexesOnes = [i for i,val in enumerate(signalLG) if val == 1]
timesOnes = [val/(sfreq) for val in indexesOnes]
consecutiveOnes = [timesOnes[i+1]-timesOnes[i] for i,val in enumerate(timesOnes) if i+1 < len(timesOnes)]
### Report
print('Report on:', mov)
print('Total samples = %d'%(len(signalLG)))
print('Sampling frequency (Hz) = %d'%(sfreq))
print('Signal duration (s) = %.2f'%(len(signalLG)/sfreq))
print('Signal duration (min) = %.2f'%(len(signalLG)/(60.0*sfreq)))
print('Total '+mov+' events = '+str(len(indexesOnes)))
print('Indexes (sample) = '+str(indexesOnes))
print('Times (s) = '+str(timesOnes))
print('Duration between consecutive ones (s) = '+str(consecutiveOnes))
#Estimating start and end for every epoch
totalSamples = round(1 + ((tmax - tmin)*sfreq))
starts = []
ends = []
startstimes = []
endstimes = []
for starIndexOnes in indexesOnes:
start = starIndexOnes + sfreq*tmin
starts.append(start)
ends.append(start+totalSamples)
#Transforming from sample to time
startstimes = [val/(sfreq) for val in starts]
endstimes = [val/(sfreq) for val in ends]
#Print and check
print(startstimes)
print(endstimes)
epochsLG = []
labelsLG = []
for start,end in zip(starts,ends):
epoch = dfLG.loc[start:end,eeg_channels_of_interest].values.T
epoch_multitaper = np.zeros(epoch.shape)
for chann in range(epoch.shape[0]):
r = multitaper_espectral(y=epoch[chann,:], fs=sfreq, NW=2.5,k=4)
epoch_multitaper[chann,:] = r
epoch_multitaper = np.cov(epoch_multitaper)
epochsLG.append(epoch_multitaper.copy())
labelsLG.append(it+1)
epochsLG = np.stack(epochsLG,axis=0)
labelsLG = np.array(labelsLG)
epochsLG.shape, labelsLG.shape
listEPOCHSALL.append(epochsLG.copy())
listLABELSALL.append(labelsLG.copy())
X = np.concatenate(listEPOCHSALL, axis=0)
y = np.concatenate(listLABELSALL)
X.shape, y.shape, y
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, stratify=y, random_state=42)
clf_multi = pystmm.classifier.STMM(typemulticlassifier='ovr',C1=10.0, C2=10.0, maxIter=5, tolSTM=1e-4, penalty = 'l2', dual = True, tol=1e-4,loss = 'squared_hinge', maxIterSVM=100000)
clf_multi.fit(X_train, y_train)
y_pred = clf_multi.predict(X_test)
y_pred_train = clf_multi.predict(X_train)
resulttrain = [Users[indexuser], 'Train','Precision']+[round(val,2) for val in precision_score(y_train, y_pred_train, average=None).tolist()]
dataResult.append(resulttrain)
resulttrain = [Users[indexuser], 'Train','Recall']+[round(val,2) for val in recall_score(y_train, y_pred_train, average=None).tolist()]
dataResult.append(resulttrain)
resulttrain = [Users[indexuser], 'Train','F1-score']+[round(val,2) for val in f1_score(y_train, y_pred_train, average=None).tolist()]
dataResult.append(resulttrain)
resulttrain = [Users[indexuser], 'Train','Support']+[len(y_train[y_train==cla]) for cla in range(len(target_names))]
dataResult.append(resulttrain)
resulttrain = [Users[indexuser], 'Train','Total-samples']+[len(y_train)]*len(target_names)
dataResult.append(resulttrain)
acc = round(accuracy_score(y_train, y_pred_train),2)
resulttrain = [Users[indexuser], 'Train','Accuracy']+[acc]*len(target_names)
dataResult.append(resulttrain)
chl = round(getChanceLevel(c=len(target_names),n=len(y_train),alpha=0.05),2)
resulttrain = [Users[indexuser], 'Train','Chance-Level']+[chl]*len(target_names)
dataResult.append(resulttrain)
resulttest = [Users[indexuser], 'Test','Precision']+[round(val,2) for val in precision_score(y_test, y_pred, average=None).tolist()]
dataResult.append(resulttest)
resulttest = [Users[indexuser], 'Test','Recall']+[round(val,2) for val in recall_score(y_test, y_pred, average=None).tolist()]
dataResult.append(resulttest)
resulttest = [Users[indexuser], 'Test','F1-score']+[round(val,2) for val in f1_score(y_test, y_pred, average=None).tolist()]
dataResult.append(resulttest)
resulttest = [Users[indexuser], 'Test','Support']+[len(y_test[y_test==cla]) for cla in range(len(target_names))]
dataResult.append(resulttest)
resulttest = [Users[indexuser], 'Test','Total-Samples']+[len(y_test)]*len(target_names)
dataResult.append(resulttest)
acc = round(accuracy_score(y_test, y_pred),2)
resulttest = [Users[indexuser], 'Test','Accuracy']+[acc]*len(target_names)
dataResult.append(resulttest)
chl = round(getChanceLevel(c=len(target_names),n=len(y_test),alpha=0.05),2)
resulttest = [Users[indexuser], 'Test','Chance-Level']+[chl]*len(target_names)
dataResult.append(resulttest)
print('------------------------------------------------------------------------------------------------')
print('REPORT')
print(Users[indexuser])
print(classification_report(y_test, y_pred, target_names=target_names))
print(classification_report(y_train, y_pred_train, target_names=target_names))
print('------------------------------------------------------------------------------------------------')
listclf.append(clf_multi)
# -
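`getChanceLevel` is defined elsewhere in the notebook. A plausible sketch of a binomial chance-level helper in the same spirit (the accuracy a c-class random guesser exceeds with probability alpha over n samples, following the Combrisson & Jerbi formulation) might look like this; the notebook's actual implementation may differ:

```python
from scipy.stats import binom

# Hypothetical sketch of a binomial chance-level helper in the spirit of
# getChanceLevel (defined elsewhere in the notebook; it may differ).
def chance_level(c, n, alpha=0.05):
    # Smallest accuracy not exceeded by a c-class random guesser
    # with probability 1 - alpha over n samples.
    return binom.ppf(1 - alpha, n, 1.0 / c) / n

# With 4 classes and 100 test samples the threshold sits well above 1/4.
print(chance_level(c=4, n=100))
```

This makes explicit why the reported chance level shrinks toward 1/c as the test set grows.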
X.shape
epoch_multitaper = np.zeros(epoch.shape)
for chann in range(epoch.shape[0]):
r = multitaper_espectral(y=epoch[chann,:], fs=sfreq, NW=2.5,k=4)
epoch_multitaper[chann,:] = r
dfResult = pd.DataFrame(data=dataResult, columns=['User','Data','Metric']+target_names)
dfResult[dfResult['User']=='P10'].head(15)
dfResultFreq
dfResultTemp
dfResultCov
# +
from sklearn.svm import SVC
from sklearn.model_selection import StratifiedKFold
from sklearn.model_selection import permutation_test_score
from sklearn.datasets import load_iris
iris = load_iris()
X = iris.data
y = iris.target
n_uncorrelated_features = 2200
rng = np.random.RandomState(seed=0)
# Use same number of samples as in iris and 2200 features
X_rand = rng.normal(size=(X.shape[0], n_uncorrelated_features))
clf = SVC(kernel='linear', random_state=7)
cv = StratifiedKFold(2, shuffle=True, random_state=0)
score_iris, perm_scores_iris, pvalue_iris = permutation_test_score(
clf, X, y, scoring="accuracy", cv=cv, n_permutations=1000)
score_rand, perm_scores_rand, pvalue_rand = permutation_test_score(
clf, X_rand, y, scoring="accuracy", cv=cv, n_permutations=1000)
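The p-value returned by `permutation_test_score` is simply the (smoothed) fraction of permuted scores that match or beat the unpermuted score. The scores below are made-up stand-ins to show the arithmetic:

```python
import numpy as np

# How permutation_test_score forms its p-value: the fraction of permuted
# scores at least as good as the unpermuted score, with +1 smoothing so the
# estimate is never exactly zero. The scores here are made-up stand-ins.
rng = np.random.default_rng(0)
score = 0.95                                     # hypothetical real CV score
perm_scores = rng.uniform(0.2, 0.5, size=1000)   # hypothetical null scores

pvalue = (np.sum(perm_scores >= score) + 1) / (len(perm_scores) + 1)
print(round(pvalue, 4))  # -> 0.001
```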
# +
import matplotlib.pyplot as plt
fig, ax = plt.subplots()
ax.hist(perm_scores_iris, bins=20, density=True)
ax.axvline(score_iris, ls='--', color='r')
score_label = (f"Score on original\ndata: {score_iris:.2f}\n"
f"(p-value: {pvalue_iris:.3f})")
ax.text(0.7, 260, score_label, fontsize=12)
ax.set_xlabel("Accuracy score")
_ = ax.set_ylabel("Probability")
# + id="kgxAQJS49l9p"
cv = ShuffleSplit(10, test_size=0.2, random_state=42)
cv_split = cv.split(X)
# + colab={"base_uri": "https://localhost:8080/"} id="9XKuW8_E9szP" outputId="309da67d-ec51-4f10-b43f-229c528173c9"
scores = cross_val_score(clf_multi, X, y, cv=cv, n_jobs=1)
print(scores)
# -
np.mean(scores), np.std(scores)
# + colab={"base_uri": "https://localhost:8080/"} id="C8orgy_RwM8Y" outputId="6b89618a-abf4-46b7-f2b3-b6f56810f16d"
for i,j in cv_split:
print(len(i),len(j))
# + id="fGLnnfNnvhZm"
# + colab={"base_uri": "https://localhost:8080/"} id="QrKtTxGkvw6-" outputId="2fe4a860-2260-4fb2-c312-732f5fc6bf62"
from spectrum import *
from scipy import fft
N=500
dt=2*10**-3
# Create a signal containing two sine waves.
x = np.linspace(0.0, N*dt, N)
y = np.sin(50.0 * 2.0*np.pi*x) + 0.5*np.sin(80.0 * 2.0*np.pi*x)
# classical FFT
yf = fft(y)
xf = np.linspace(0.0, 1.0/(2.0*dt), N//2)
# The multitapered method
NW=2.5
k=4
[tapers, eigen] = dpss(N, NW, k)
Sk_complex, weights, eigenvalues=pmtm(y, e=eigen, v=tapers, NFFT=N, show=False)
Sk = abs(Sk_complex)**2
Sk = np.mean(Sk * np.transpose(weights), axis=0) * dt
# -
x.shape, Sk.shape
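A NumPy-only sanity check that the two-sine test signal really peaks where the multitaper estimate should: the dominant FFT bin sits at 50 Hz, the full-amplitude component (same assumed `N` and `dt` as above):

```python
import numpy as np

# NumPy-only sanity check on the two-sine test signal above: the dominant
# FFT bin should sit at 50 Hz (the full-amplitude component).
N, dt = 500, 2e-3                       # same assumed N and dt as above
x = np.linspace(0.0, N * dt, N)
y = np.sin(50.0 * 2.0 * np.pi * x) + 0.5 * np.sin(80.0 * 2.0 * np.pi * x)

mag = np.abs(np.fft.rfft(y))            # one-sided spectrum magnitude
freqs = np.fft.rfftfreq(N, d=dt)        # matching frequency axis (Hz)

peak_freq = freqs[np.argmax(mag)]
print(peak_freq)  # -> 50.0
```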
| examples/EEG_Classification.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3.6
# language: python
# name: python36
# ---
# Copyright (c) Microsoft Corporation. All rights reserved.
#
# Licensed under the MIT License.
# # AutoML 12: Retrieving Training SDK Versions
# +
import logging
import os
import random
from matplotlib import pyplot as plt
from matplotlib.pyplot import imshow
import numpy as np
import pandas as pd
from sklearn import datasets
import azureml.core
from azureml.core.experiment import Experiment
from azureml.core.workspace import Workspace
from azureml.train.automl import AutoMLConfig
from azureml.train.automl.run import AutoMLRun
from azureml.train.automl.utilities import get_sdk_dependencies
# -
# ## Diagnostics
#
# Opt-in diagnostics for better experience, quality, and security of future releases.
from azureml.telemetry import set_diagnostics_collection
set_diagnostics_collection(send_diagnostics = True)
# # 1. Retrieve the SDK versions in the current environment
# To retrieve the SDK versions in the current environment, run `get_sdk_dependencies`.
get_sdk_dependencies()
# # 2. Train model using AutoML
# +
ws = Workspace.from_config()
# Choose a name for the experiment and specify the project folder.
experiment_name = 'automl-local-classification'
project_folder = './sample_projects/automl-local-classification'
experiment = Experiment(ws, experiment_name)
output = {}
output['SDK version'] = azureml.core.VERSION
output['Subscription ID'] = ws.subscription_id
output['Workspace'] = ws.name
output['Resource Group'] = ws.resource_group
output['Location'] = ws.location
output['Project Directory'] = project_folder
output['Experiment Name'] = experiment.name
pd.set_option('display.max_colwidth', None)  # None disables truncation (-1 is deprecated)
pd.DataFrame(data=output, index=['']).T
# +
digits = datasets.load_digits()
X_train = digits.data[10:,:]
y_train = digits.target[10:]
automl_config = AutoMLConfig(task = 'classification',
debug_log = 'automl_errors.log',
primary_metric = 'AUC_weighted',
iterations = 3,
n_cross_validations = 2,
verbosity = logging.INFO,
X = X_train,
y = y_train,
path = project_folder)
local_run = experiment.submit(automl_config, show_output = True)
# -
# # 3. Retrieve the SDK versions from RunHistory
# To get the SDK versions from RunHistory, first the run id needs to be recorded. This can either be done by copying it from the output message or by retrieving it after each run.
# +
# Use a run id copied from an output message.
#run_id = 'AutoML_c0585b1f-a0e6-490b-84c7-3a099468b28e'
# Retrieve the run id from a run.
run_id = local_run.id
print(run_id)
# -
# Initialize a new `AutoMLRun` object.
# +
experiment_name = 'automl-local-classification'
experiment = Experiment(ws, experiment_name)
ml_run = AutoMLRun(experiment = experiment, run_id = run_id)
# -
# Get parent training SDK versions.
ml_run.get_run_sdk_dependencies()
# Get the training SDK versions of a specific run.
ml_run.get_run_sdk_dependencies(iteration = 2)
| automl/12.auto-ml-retrieve-the-training-sdk-versions.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] deletable=false editable=false nbgrader={"cell_type": "markdown", "checksum": "ac72bfac81abf24f9450b9835bc85d07", "grade": false, "grade_id": "cell-06b3a652620668ef", "locked": true, "schema_version": 3, "solution": false, "task": false} id="Xe_frMA2yWxJ"
# Lambda School Data Science
#
# *Unit 2, Sprint 2, Module 1*
#
# ---
# + [markdown] id="XFSK7J9h1oqh"
# # Import
# + deletable=false editable=false id="o9eSnDYhUGD7" nbgrader={"cell_type": "code", "checksum": "d065a7f431f667bb063b90c023f0dbc1", "grade": false, "grade_id": "cell-e0ed37c9a5d0e6a9", "locked": true, "schema_version": 3, "solution": false, "task": false} outputId="d8229e01-26a1-4d0e-f9b2-87b1866c1a71" colab={"base_uri": "https://localhost:8080/"}
import sys
# If you're on Colab:
if 'google.colab' in sys.modules:
# !pip install category_encoders==2.*
# !pip install pandas-profiling==2.*
DATA_PATH = 'https://raw.githubusercontent.com/LambdaSchool/DS-Unit-2-Kaggle-Challenge/master/data/'
# + [markdown] deletable=false editable=false nbgrader={"cell_type": "markdown", "checksum": "ff7078bb0a4fb4b482d13864539dc6d8", "grade": false, "grade_id": "cell-85703297fb27b195", "locked": true, "schema_version": 3, "solution": false, "task": false} id="oI98MMj-yWxO"
# # Decision Trees
# + [markdown] deletable=false editable=false nbgrader={"cell_type": "markdown", "checksum": "ad8f90528d5c7adfc38d154586deae52", "grade": false, "grade_id": "cell-4b05735817ab984e", "locked": true, "schema_version": 3, "solution": false, "task": false} id="P02DutGgyWxO"
# ## Kaggle
#
# **Task 1:** [Sign up for a Kaggle account](https://www.kaggle.com/), if you don’t already have one. Go to our Kaggle InClass competition website (the URL is in Slack). Go to the Rules page. Accept the rules of the competition. Notice that the Rules page also has instructions for the Submission process. The Data page has feature definitions.
# + [markdown] deletable=false editable=false nbgrader={"cell_type": "markdown", "checksum": "fd9d50fa771b27245b5fd407ed3ea2e5", "grade": false, "grade_id": "cell-6c43283f79ddc4dc", "locked": true, "schema_version": 3, "solution": false, "task": false} id="a0CGjiAryWxP"
# # Wrangle Data
#
# **Task 2:** Add to the code below so that `id` is set as the index for `df`.
# + deletable=false nbgrader={"cell_type": "code", "checksum": "a99d8dc738e24ba553771b58c41e4e82", "grade": false, "grade_id": "cell-e2eb2544508f89ef", "locked": false, "schema_version": 3, "solution": true, "task": false} id="0-TymVd1yWxP" outputId="4334133d-eaa9-40f7-a855-b51b1150101a" colab={"base_uri": "https://localhost:8080/", "height": 430}
import pandas as pd
df = pd.merge(pd.read_csv(DATA_PATH + 'waterpumps/train_features.csv',
parse_dates=['date_recorded'],
na_values=[0]),
pd.read_csv(DATA_PATH + 'waterpumps/train_labels.csv')).set_index('id')
df.head()
# + id="a3_KLtMa9YFq"
## To save time later, change datetime column to int
df['yr_recorded'] = df['date_recorded'].dt.year
df = df.drop(columns='date_recorded')
# + id="tZwRoBTT74kP"
from pandas_profiling import ProfileReport
# + [markdown] id="s_Uf4P9t1R5p"
# ## Inspect Data
# + id="f62B17lg00y_" outputId="14a80300-d6ea-484f-f5cb-16dd209d2189" colab={"base_uri": "https://localhost:8080/"}
## First .info() implementation
df.info()
# + id="JQrOkxBV1KKN" outputId="7e1ab84a-cbf1-465e-dd12-61dd450a0cd7" colab={"base_uri": "https://localhost:8080/"}
## Sum null values
df.isnull().sum().sort_values()
# + id="Z12EfESKEAh5" outputId="47838535-bb2d-4efc-a99b-4808b878b65b" colab={"base_uri": "https://localhost:8080/"}
## Check details of largely null columns (both inspect and data dictionary)
#df['amount_tsh'].value_counts()
df['num_private'].value_counts()
# + id="Dgn22BpY1GQw" outputId="8550801f-7c13-4791-e0ad-0a37d9f95c31" colab={"base_uri": "https://localhost:8080/"}
## Check for duplicate columns
df.head(10).T.duplicated()
# + id="ZVC6MHMG7UIX" outputId="1ee5e1ae-1587-443d-dbd9-7c11d8d02662" colab={"base_uri": "https://localhost:8080/", "height": 282}
## Check for location outliers
import matplotlib.pyplot as plt
plt.scatter(df['longitude'], df['latitude'])
# + id="E0j-hsFJ9GIp" outputId="7605f2a6-947a-4a34-a384-ec364bf519f9" colab={"base_uri": "https://localhost:8080/"}
## Check for high cardinality columns
df.select_dtypes('object').nunique().sort_values()
# + id="dbpqrUsL-EhH"
## Columns look good by .info()
## There are a few high-cardinality columns
## quantity_group and extraction_type_group are duplicate columns
## A few location outliers can be replaced in the import statement
## amount_tsh and num_private have large amounts of nulls (can be imputed)
## recorded_by has only one value
# + id="owAGx55k_9Vd"
def wrangle(X):
#Make a copy of df
X = X.copy()
#Remove constant value columns
X = X.drop(columns='recorded_by')
#Remove high cardinality columns
hc_cols = [col for col in X.select_dtypes('object').columns if X[col].nunique() > 1000]
X = X.drop(columns=hc_cols)
#Remove duplicate columns
X = X.drop(columns=['quantity_group', 'extraction_type_group'])
return X
# + id="S8gU3b0SElZ2"
df = wrangle(df)
# + [markdown] deletable=false editable=false nbgrader={"cell_type": "markdown", "checksum": "58baf16b195a2c6f4cc23ee09560dbf7", "grade": false, "grade_id": "cell-f09f47f6f4e63cc9", "locked": true, "schema_version": 3, "solution": false, "task": false} id="CTjMFQh3yWxS"
# **Test 2**
# + deletable=false editable=false nbgrader={"cell_type": "code", "checksum": "81daab28521dfd13be1016b273534926", "grade": true, "grade_id": "cell-caa83cb6363d0cd7", "locked": true, "points": 1, "schema_version": 3, "solution": false, "task": false} id="GMwp10AvyWxS"
'''Task 2 Testing'''
assert isinstance(df, pd.DataFrame), 'Have you created the DataFrame `df`?'
assert df.shape == (59400, 40), '`df` is the wrong shape. Did you set the index to `id`?'
assert 69572 in df.index, 'The index for `df` has the wrong values. Did you set the index to `id`?'
# + [markdown] deletable=false editable=false nbgrader={"cell_type": "markdown", "checksum": "26f7a327a6b7a3e2e00173e6f75519bd", "grade": false, "grade_id": "cell-1a84f0ae77611032", "locked": true, "schema_version": 3, "solution": false, "task": false} id="c9L7pCiRyWxV"
# # Split Data
#
# **Task 3:** Create your target vector `y` and feature matrix `X`.
# + deletable=false nbgrader={"cell_type": "code", "checksum": "06665db4ef96abbf7f715c4dd554f19b", "grade": false, "grade_id": "cell-c94bceb606d02353", "locked": false, "schema_version": 3, "solution": true, "task": false} id="D04eSo_KyWxV"
target = 'status_group'
y = df[target]
X = df.drop(columns=target)
# + [markdown] deletable=false editable=false nbgrader={"cell_type": "markdown", "checksum": "0386c0e16323e2419d61f55374d0df05", "grade": false, "grade_id": "cell-48167ea99a6356ae", "locked": true, "schema_version": 3, "solution": false, "task": false} id="1aJgoYR_yWxZ"
# **Test 3**
# + deletable=false editable=false nbgrader={"cell_type": "code", "checksum": "5a2b2452b4ad94e5b5a562a185779dff", "grade": true, "grade_id": "cell-d0b6777daf75d805", "locked": true, "points": 1, "schema_version": 3, "solution": false, "task": false} id="aJyi66PqyWxa" outputId="eb6d026b-f155-4e86-a86f-a61365e09be5" colab={"base_uri": "https://localhost:8080/", "height": 212}
'''Task 3 Testing'''
assert isinstance(X, pd.DataFrame), '`X` is the wrong data type.'
assert isinstance(y, pd.Series), '`y` is the wrong data type.'
assert y.shape == (59400,), '`y` is the wrong shape.'
assert X.shape == (59400,39), '`X` is the wrong shape.'
# + [markdown] deletable=false editable=false nbgrader={"cell_type": "markdown", "checksum": "ad5f005274a47a2acc8d9bb8bc8bbd84", "grade": false, "grade_id": "cell-96e0701792e522b6", "locked": true, "schema_version": 3, "solution": false, "task": false} id="YMBjZWGPyWxd"
# We already have a test set for this model, the `test.csv` that you'll use to make the predictions you'll upload to Kaggle. However, since our competition only allows for 2 submissions per day, we need a way to estimate our generalization error so that we can quickly iterate and improve our model. We can achieve this by creating a validation set from the data we have.
#
# **Task 4:** Split `X` and `y` into training and validation sets. Your validation set should be 20% of the data you have. You should have four variables: `X_train`, `X_val`, `y_train`, and `y_val`.
# + deletable=false nbgrader={"cell_type": "code", "checksum": "4b417b896298dc539f4d351d4f958172", "grade": false, "grade_id": "cell-018b7c0deecc89dc", "locked": false, "schema_version": 3, "solution": true, "task": false} id="MP-syhXvyWxd"
from sklearn.model_selection import train_test_split
X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.2, random_state=42)
# + [markdown] deletable=false editable=false nbgrader={"cell_type": "markdown", "checksum": "614aceebebbcb231254f1ae7a842a77e", "grade": false, "grade_id": "cell-ab9ee727cfdc0131", "locked": true, "schema_version": 3, "solution": false, "task": false} id="ADMXp7WTyWxg"
# **Test 4**
# + deletable=false editable=false nbgrader={"cell_type": "code", "checksum": "7b998b6a51e9a2c17bbd024717a901ab", "grade": true, "grade_id": "cell-b1436df7d5901b26", "locked": true, "points": 1, "schema_version": 3, "solution": false, "task": false} id="YRxJ8-ujyWxh" outputId="05c7e1e5-60ec-4108-8ba6-87b5b008c897" colab={"base_uri": "https://localhost:8080/", "height": 229}
'''Task 4 Testing'''
assert X_train.shape == (47520, 39), '`X_train` is the wrong shape.'
assert X_val.shape == (11880, 39), '`X_val` is the wrong shape.'
assert y_train.shape == (47520,), '`y_train` is the wrong shape.'
assert y_val.shape == (11880,), '`y_val` is the wrong shape.'
# + [markdown] deletable=false editable=false nbgrader={"cell_type": "markdown", "checksum": "e787af946d17a1f823c209349a810942", "grade": false, "grade_id": "cell-4eb573c00f7e3219", "locked": true, "schema_version": 3, "solution": false, "task": false} id="b-XVjQC5yWxj"
# # Establish Baseline
#
# **Task 5:** This is a **classification** problem, so you need to establish the baseline accuracy for your training set. Find the majority class for `y_train` and calculate the percentage of labels in `y_train` belonging to that class. Assign your answer to the variable name `baseline_acc`.
# + deletable=false nbgrader={"cell_type": "code", "checksum": "6b5e3397d4e5c153e839b6fb53525f76", "grade": false, "grade_id": "cell-c6c63e2b6ff9e101", "locked": false, "schema_version": 3, "solution": true, "task": false} id="z6V5lGAGyWxj" outputId="d8f94131-e181-4357-f8bb-68988bd0e8f8" colab={"base_uri": "https://localhost:8080/"}
baseline_acc = y_train.value_counts(normalize=True).max()
print('Baseline Accuracy:', baseline_acc)
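The majority-class baseline computed above is just the most frequent label's share of the training labels; it can be illustrated on a toy series (the labels below are invented for illustration):

```python
import pandas as pd

# Toy illustration of the majority-class baseline computed above: the most
# frequent label's share of the labels. The labels here are invented.
y_toy = pd.Series(['functional'] * 6 + ['non functional'] * 3
                  + ['functional needs repair'])
baseline = y_toy.value_counts(normalize=True).max()
print(baseline)  # -> 0.6
```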
# + [markdown] deletable=false editable=false nbgrader={"cell_type": "markdown", "checksum": "2d6a544256ca1f163fb33b115da2345e", "grade": false, "grade_id": "cell-cf3ce1fce46a2f72", "locked": true, "schema_version": 3, "solution": false, "task": false} id="vc2d8UGeyWxm"
# **Task 5**
# + deletable=false editable=false nbgrader={"cell_type": "code", "checksum": "dad84b7c8b303e546add1509de1a0dd7", "grade": true, "grade_id": "cell-a0c23a1103429de2", "locked": true, "points": 1, "schema_version": 3, "solution": false, "task": false} id="UQIY4Hc1yWxm"
'''Task 5 Testing'''
assert isinstance(baseline_acc, float), '`baseline_acc` should be a `float`.'
assert 0.0 <= baseline_acc <= 1.0, '`baseline_acc` is a score that should be between 0 and 1.'
# + [markdown] deletable=false editable=false nbgrader={"cell_type": "markdown", "checksum": "8ff10811aa8b551ae8d1f93475d566fc", "grade": false, "grade_id": "cell-10c13b070533d8b5", "locked": true, "schema_version": 3, "solution": false, "task": false} id="yCTpO-wayWxp"
# # Build Model
# + [markdown] deletable=false editable=false nbgrader={"cell_type": "markdown", "checksum": "92d0226782f7a5b7b1a41be67a0cf1ae", "grade": false, "grade_id": "cell-9759a1c2167a5dfc", "locked": true, "schema_version": 3, "solution": false, "task": false} id="xSFLZm6RyWxq"
# **Task 6:** Create a model named `model` and train it with your training data. Your model should be a pipeline with (a) transformers that you think are appropriate to this dataset and (b) a `DecisionTreeClassifier` as your predictor. **Tip:** How can you transform categorical features and missing values in order to train your model?
# + deletable=false nbgrader={"cell_type": "code", "checksum": "8c5b2cabd3d5d1adbfad3706bd025cd4", "grade": false, "grade_id": "cell-0c7c4d20b0f0fc70", "locked": false, "schema_version": 3, "solution": true, "task": false} id="9Vc213JryWxr" outputId="ba9cef96-9535-432a-9c91-351c07342355" colab={"base_uri": "https://localhost:8080/"}
from category_encoders import OrdinalEncoder
from sklearn.impute import SimpleImputer
from sklearn.pipeline import make_pipeline
from sklearn.tree import DecisionTreeClassifier
model = make_pipeline(
OrdinalEncoder(),
SimpleImputer(),
DecisionTreeClassifier()
)
model.fit(X_train, y_train);
# + [markdown] deletable=false editable=false nbgrader={"cell_type": "markdown", "checksum": "3be2e5b6636f8de879f37e7b93e17282", "grade": false, "grade_id": "cell-bd1f0e078ca61b7a", "locked": true, "schema_version": 3, "solution": false, "task": false} id="XwdTUnAjyWxt"
# **Test 6**
# + deletable=false editable=false nbgrader={"cell_type": "code", "checksum": "acd5a348298e5f22c5e2dc1c4bc5f3e2", "grade": true, "grade_id": "cell-4edc1a13268269ac", "locked": true, "points": 1, "schema_version": 3, "solution": false, "task": false} id="Q4H7Ctl8yWxt"
'''Task 6 Testing'''
assert len(model) > 1, 'Your model pipeline should have multiple steps.'
assert isinstance(model[-1], DecisionTreeClassifier), 'Your pipeline should end in a `DecisionTreeClassifier`.'
assert hasattr(model, 'classes_'), 'Have you fit your model?'
# + [markdown] deletable=false editable=false nbgrader={"cell_type": "markdown", "checksum": "4bde05f99cea7be1f76987db649f3452", "grade": false, "grade_id": "cell-c64ad7ac1b03f6da", "locked": true, "schema_version": 3, "solution": false, "task": false} id="UL3MloLYyWxw"
# # Check Metrics
#
# **Task 7:** Calculate the training and validation accuracy of your model, and assign them to the variables `training_acc` and `validation_acc`, respectively. Your validation accuracy should be greater than your baseline accuracy.
# + deletable=false nbgrader={"cell_type": "code", "checksum": "5ad0c7ee9de8fae4b1f309d13a194bbc", "grade": false, "grade_id": "cell-87a17042f6131ba5", "locked": false, "schema_version": 3, "solution": true, "task": false} id="eEk4FdN2yWxx" outputId="41f424fb-1f98-4213-8071-608c5d433b9a" colab={"base_uri": "https://localhost:8080/"}
training_acc = model.score(X_train, y_train)
validation_acc = model.score(X_val, y_val)
print('Training Accuracy:', training_acc)
print('Validation Accuracy:', validation_acc)
# + id="vRRudG3lAeC8"
## Best val_acc so far: .708
# + [markdown] deletable=false editable=false nbgrader={"cell_type": "markdown", "checksum": "7187c867400846bff1a47be9c11e4651", "grade": false, "grade_id": "cell-8d44b362513262b9", "locked": true, "schema_version": 3, "solution": false, "task": false} id="CrUEo3dMyWxz"
# **Test 7**
# + deletable=false editable=false nbgrader={"cell_type": "code", "checksum": "c28add2cc4da7249cd4a9b4e367abb1f", "grade": true, "grade_id": "cell-5b2575d28995d5be", "locked": true, "points": 1, "schema_version": 3, "solution": false, "task": false} id="AYWD-a3tyWx0"
'''Task 7 Testing'''
assert isinstance(training_acc, float)
assert isinstance(validation_acc, float)
assert 0.0 <= training_acc <= 1.0
assert 0.0 <= validation_acc <= 1.0
# + [markdown] deletable=false editable=false nbgrader={"cell_type": "markdown", "checksum": "dbe8e21b37bb8bbcf790f6c3bb82409d", "grade": false, "grade_id": "cell-20e570b001622e4d", "locked": true, "schema_version": 3, "solution": false, "task": false} id="73WDUpT6yWx2"
# # Create Kaggle Submission
#
# **Task 8:** Load `'waterpumps/test.csv'` into a DataFrame named `X_test`. Generate a list of predictions, and then put them into a DataFrame `submission`. Be sure that `submission` has the same index as `X_test` and that the column name for your predictions is `'status_group'`.
# + deletable=false nbgrader={"cell_type": "code", "checksum": "751e42e23ad01c840e677d9061441489", "grade": false, "grade_id": "cell-f449c26c27917323", "locked": false, "schema_version": 3, "solution": true, "task": false} id="jjomRQ8_yWx2"
X_test = pd.read_csv(DATA_PATH + 'waterpumps/test_features.csv', index_col='id')
# YOUR CODE HERE
raise NotImplementedError()
submission.head()
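Since the solution cell above is intentionally left blank for grading, here is a hedged, toy-data sketch of the *shape* `submission` needs to have; `X_test_demo` and `y_pred_demo` are hypothetical stand-ins for the real `X_test` and model predictions.

```python
import pandas as pd

# Toy stand-in data: the real notebook uses X_test loaded from the csv and
# predictions from the fitted pipeline; names and values here are made up.
X_test_demo = pd.DataFrame({"feature": [1, 2, 3]},
                           index=pd.Index([10, 11, 12], name="id"))
y_pred_demo = ["functional", "non functional", "functional"]

# Key requirements from Task 8: same index as X_test, and a single
# column named 'status_group'.
submission_demo = pd.DataFrame({"status_group": y_pred_demo},
                               index=X_test_demo.index)
print(submission_demo.columns.tolist())  # → ['status_group']
```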
# + [markdown] deletable=false editable=false nbgrader={"cell_type": "markdown", "checksum": "c3ea8f69c1f63c2fc7c6b4dc9f36a4c7", "grade": false, "grade_id": "cell-88bdf757927010d2", "locked": true, "schema_version": 3, "solution": false, "task": false} id="4igO6t_hyWx5"
# **Test 8**
# + deletable=false editable=false nbgrader={"cell_type": "code", "checksum": "de1de506bac0c572293dcf81b592fda3", "grade": true, "grade_id": "cell-7184784b1c67a971", "locked": true, "points": 1, "schema_version": 3, "solution": false, "task": false} id="nK1M9VoNyWx6"
'''Task 8 Testing'''
assert isinstance(submission, pd.DataFrame), '`submission` should be a DataFrame.'
assert len(submission) == 14358, '`submission` should have 14358 rows.'
# + [markdown] deletable=false editable=false nbgrader={"cell_type": "markdown", "checksum": "9e1dd2ff80244929b807a600180c8db5", "grade": false, "grade_id": "cell-7501482e73c658e0", "locked": true, "schema_version": 3, "solution": false, "task": false} id="w8dMSTvYyWx9"
# **Task 9 (`stretch goal`):** Save `submission` as a csv file using [`.to_csv()`](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.to_csv.html) and submit it to our Kaggle competition.
# + deletable=false nbgrader={"cell_type": "code", "checksum": "53ceb2bae5f41f56eba8f65b2b56c38c", "grade": false, "grade_id": "cell-6a87d8169b9c48ea", "locked": false, "schema_version": 3, "solution": true, "task": false} id="t3lJarh2yWx-"
# YOUR CODE HERE
raise NotImplementedError()
# + [markdown] deletable=false editable=false nbgrader={"cell_type": "markdown", "checksum": "cba7b6461310b5fab62266e978074a78", "grade": false, "grade_id": "cell-591ad5292d4e1ee1", "locked": true, "schema_version": 3, "solution": false, "task": false} id="SM3LDSGsyWyA"
# # Explain
#
# **Task 10 (`stretch goal`):** Make a horizontal barchart of the 10 most important features for your model.
# + deletable=false nbgrader={"cell_type": "code", "checksum": "6e22d3e237d7383a3ca1bfa71f995bfc", "grade": false, "grade_id": "cell-297ec1119fbbaad5", "locked": false, "schema_version": 3, "solution": true, "task": false} id="3_U6LwGKyWyB" outputId="c44c7769-dda3-44f8-af96-947377506e7c" colab={"base_uri": "https://localhost:8080/", "height": 295}
import matplotlib.pyplot as plt
feat_imp = pd.Series(list(model['decisiontreeclassifier'].feature_importances_), X.columns)
feat_imp.sort_values().tail(10).plot.barh()
plt.xlabel('Feature Importance [%]')
plt.title('Feature Importance');
| module1-decision-trees/DS-module-project-221.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + colab={} colab_type="code" id="RnaQcNo2W3B9"
# !apt-get install -y xvfb python-opengl x11-utils ffmpeg > /dev/null 2>&1
# !pip install pyvirtualdisplay gym gym[atari] > /dev/null 2>&1
# + colab={} colab_type="code" id="lpmMu6GLeMxJ"
import gym
from gym import logger as gymlogger
from gym.wrappers import Monitor
gymlogger.set_level(40) #error only
# + colab={"base_uri": "https://localhost:8080/", "height": 54} colab_type="code" id="z-9zrVa2Y8T9" outputId="1791a4e2-8916-4c99-a817-3d6dc7a69d1c"
from pyvirtualdisplay import Display
display = Display(visible=0, size=(1400, 900))
display.start()
# + colab={} colab_type="code" id="vsg9izqJhJ2-"
import math
import glob
import io
import base64
from IPython.display import HTML
from IPython import display as ipythondisplay
"""
Utility functions to enable video recording of gym environment and displaying it
To enable video, just do "env = wrap_env(env)"
"""
def show_video():
mp4list = glob.glob('video/*.mp4')
if len(mp4list) > 0:
mp4 = mp4list[0]
video = io.open(mp4, 'r+b').read()
encoded = base64.b64encode(video)
ipythondisplay.display(HTML(data='''<video alt="test" autoplay
loop controls style="height: 400px;">
<source src="data:video/mp4;base64,{0}" type="video/mp4" />
</video>'''.format(encoded.decode('ascii'))))
else:
print("Could not find video")
def wrap_env(env):
env = Monitor(env, './video', force=True)
return env
# + colab={"base_uri": "https://localhost:8080/", "height": 455} colab_type="code" id="OTeF5wcbhmep" outputId="0cfeca56-aea9-4c9f-c2d6-365181a1409a"
gym_env = gym.make("MountainCar-v0")
#gym_env = gym.make('CartPole-v1')
#gym_env = gym.make("MsPacman-v0")
num_episodes = 1
for episode in range(num_episodes):
env = wrap_env(gym_env)
observation = env.reset()
final_score = 0
for t in range(1000):
env.render()
action = env.action_space.sample()
observation, reward, done, info = env.step(action) # take a random action
final_score += reward
if done:
print("Episode finished after {} timesteps".format(t+1))
print("Final Score: {}".format(int(final_score)))
break
env.close()
if episode == num_episodes-1:
show_video()
# + colab={"base_uri": "https://localhost:8080/", "height": 51} colab_type="code" id="HCxFTHLx5SaD" outputId="cbee35ff-be69-412b-9eb0-fd2c0c85df9e"
print(env.action_space)
print(env.observation_space)
# + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" id="Yl9h48FXjGgV" outputId="cf3eff5a-20a1-4fa5-fcdd-1d6680cbd960"
from gym import spaces
space = spaces.Discrete(8) # Set with 8 elements {0, 1, 2, ..., 7}
x = space.sample()
print(x)
assert space.contains(x)
assert space.n == 8
# + colab={"base_uri": "https://localhost:8080/", "height": 54} colab_type="code" id="tiX0KZ3U5mvN" outputId="7ee26414-9a70-4b7a-94e8-56c45ae14c8b"
from gym import envs
print(envs.registry.all())
| Algorithms Implementations/openai_gym_tutorial/RL_Script1_GymTutorial.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + pycharm={"name": "#%%\n"}
import elsie
slides = elsie.SlideDeck(320, 280)
# + pycharm={"name": "#%%\n"}
from ipywidgets import interact
import ipywidgets as widgets
# Using `ipywidgets`, you can create interactive slides.
# After each change of the slider, the slide will re-render.
def interactive_slide(x):
@slides.slide()
def slide_fn(slide):
slide.box().text(f"X: {x}")
row = slide.box(horizontal=True)
for _ in range(x):
row.box(width=50, height=50, padding=10).rect(bg_color="red")
return slide_fn
interact(interactive_slide, x=widgets.IntSlider(min=1, max=5, step=1, continuous_update=False));
# -
# SlideDeck with multiple fragments (steps) will include buttons to move to the previous/next step.
@slides.slide()
def multistep_slide(slide):
slide.box().text(f"Step 1")
slide.box(show="next+").text(f"Step 2")
slide.box(show="next+").text(f"Step 3")
multistep_slide
| examples/jupyter/jupyter.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# + tags=[]
from mikeio import Dfsu
filename = "../tests/testdata/HD2D.dfsu"
dfs = Dfsu(filename)
dfs
# -
ds = dfs.read(["Surface elevation","Current speed"]) # to read some variables
ds
Dfsu.show_progress = True # Turn on progress bar, useful to get progress for long-running tasks
ds = dfs.read()
ds.describe()
# Find which element is nearest to POI.
idx = dfs.find_nearest_elements(606200, 6905480)
# Extract a subset of the dataset from this element. (Discrete values, no interpolation)
selds = ds.isel(idx=idx)
selds
# Convert to a dataframe, for convenience.
df = selds.to_dataframe()
df.head()
df.plot()
# ## Other ways to subset data
# Assume that we are interested in these 3 points only
pt1 = (606200, 6905480)
pt2 = (606300, 6905410)
pt3 = (606400, 6905520)
pts_x = [pt1[0], pt2[0], pt3[0]]
pts_y = [pt1[1], pt2[1], pt3[1]]
elem_ids = dfs.find_nearest_elements(pts_x, pts_y)
# We can use these element ids either when we select the data from the complete dataset using the method isel() as shown above, or already when we read the data from file (particularly useful for files larger than memory)
ds_pts = dfs.read(elements=elem_ids)
ds_pts
# ### Select area
# Let's take the area North of y=6905480
yc = dfs.element_coordinates[:,1]
elem_ids = dfs.element_ids[yc>6905480]
# And find the maximum average current speed in this area in the last time step
# + tags=[]
item_num = 1
print(ds.items[item_num])
subset = ds.data[1][:,elem_ids]
subset_timeavg = subset.mean(axis=0)
idx = subset_timeavg.argmax()
coords = dfs.element_coordinates[idx,0:2].round(1)
print(f'Max current speed in area is found in {coords} and is {subset_timeavg[idx]:.3f}m/s')
# -
# Let us save the time averaged subset to a dfsu file.
# + tags=[]
outfilename1 = "HD2D_north.dfsu"
data = []
data.append(subset_timeavg.reshape(1,-1))
items = ds.items[item_num]
dfs.write(outfilename1, data, items=[items], elements=elem_ids)
# -
# # Create a new dfsu file
#
# * Subset of items
# * Subset of timesteps
# * Renamed variables
#
# First inspect the source file:
ds = dfs.read()
ds
# +
from mikeio.eum import ItemInfo, EUMType
from mikeio import Dataset
sourcefilename = filename
outfilename2 = "HD2D_selected.dfsu"
starttimestep = 4
time = ds.time[starttimestep:]
data = []
data.append(ds['U velocity'][starttimestep:,:])
data.append(ds['V velocity'][starttimestep:,:])
items = [ItemInfo("eastward_sea_water_velocity", EUMType.u_velocity_component),
ItemInfo("northward_sea_water_velocity",EUMType.v_velocity_component)]
newds = Dataset(data,time,items)
dfs.write(outfilename2, newds) # Note, this method was previously named create
# -
# Read the newly created file to verify the contents.
# +
newdfs = Dfsu(outfilename2)
newds2 = newdfs.read()
newds2
# -
# # Write mesh from dfsu file
# Don't you have the original mesh? No problem - you can re-create it from the dfsu file...
outmesh = 'mesh_from_HD2D.mesh'
dfs.to_mesh(outmesh)
# # Clean up
import os
os.remove(outfilename1)
os.remove(outfilename2)
os.remove(outmesh)
| notebooks/Dfsu - Read.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # DAT257x: Reinforcement Learning Explained
#
# ## Lab 2: Bandits
#
# ### Exercise 2.4 Thompson Beta
# +
import numpy as np
import sys
if "../" not in sys.path:
sys.path.append("../")
from lib.envs.bandit import BanditEnv
from lib.simulation import Experiment
# -
#Policy interface
class Policy:
#num_actions: (int) Number of arms [indexed by 0 ... num_actions-1]
def __init__(self, num_actions):
self.num_actions = num_actions
def act(self):
pass
def feedback(self, action, reward):
pass
# Now let's implement a Thompson Beta algorithm.
#
#
#Thompson Beta policy
class ThompsonBeta(Policy):
def __init__(self, num_actions):
Policy.__init__(self, num_actions)
#PRIOR Hyper-params: successes = 1; failures = 1
self.total_counts = np.zeros(num_actions, dtype = np.longdouble)
self.name = "Thompson Beta"
#For each arm, maintain success and failures
        self.successes = np.ones(num_actions, dtype=int)  # np.int was removed in NumPy 1.24
        self.failures = np.ones(num_actions, dtype=int)
def act(self):
current_action = np.argmax(np.random.beta(self.successes, self.failures))
return current_action
def feedback(self, action, reward):
if reward > 0:
self.successes[action] += 1
else:
self.failures[action] += 1
self.total_counts[action] += 1
# Now let's prepare the simulation.
evaluation_seed = 1239
num_actions = 10
trials = 10000
distribution = "bernoulli"
# What do you think the regret graph would look like?
env = BanditEnv(num_actions, distribution, evaluation_seed)
agent = ThompsonBeta(num_actions)
experiment = Experiment(env, agent)
experiment.run_bandit(trials)
# Now let's prepare another simulation with a different distribution, that is, set distribution = "normal".
# Run the simulation and observe the results.
# What do you think the regret graph would look like?
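The same Thompson sampling idea can be sketched self-contained with plain NumPy, without the course's `BanditEnv`/`Experiment` helpers; the arm probabilities below are made-up values for illustration.

```python
import numpy as np

rng = np.random.default_rng(1239)
true_probs = np.array([0.2, 0.5, 0.8])   # hypothetical Bernoulli arms
successes = np.ones(3)                   # Beta(1, 1) priors
failures = np.ones(3)

for _ in range(2000):
    # Sample one value per arm from its Beta posterior and pull the best
    arm = int(np.argmax(rng.beta(successes, failures)))
    reward = rng.random() < true_probs[arm]
    if reward:
        successes[arm] += 1
    else:
        failures[arm] += 1

pulls = successes + failures - 2         # subtract the priors
print(int(np.argmax(pulls)))             # the 0.8 arm should dominate
```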
| ReinforcementLearning/DAT257x/library/LabFiles/Module 2/Ex2.4 Thompson Beta.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: 'Python 3.9.7 64-bit (''py39-tf-m1'': conda)'
# language: python
# name: python397jvsc74a57bd0dde3562a707dd5c908cff731c5d458cabc735e0a8d718c859c4a294627fec5bb
# ---
# +
from qr_region_extractor import *
import matplotlib.pyplot as plt
image_path = 'images/test_01.jpg'
original_image = cv.imread(image_path)
# show original image
plt.figure(figsize=(15,15))
plt.imshow(original_image)
plt.show()
# +
# initialize the qr code extractor and extract qr regions from the image
extractor = QRRegionExtractor()
qr_regions = extractor.get_qr_region(image=original_image, debug_mode=True) # set debug mode to true to show additional processing information and images
# get the first qr region's bounding box
bounding_box = qr_regions[0]
left, top, width, height = bounding_box
# display the region
qr_frame = original_image[top:top+height, left:left+width]
plt.figure(figsize=(15,15))
plt.imshow(qr_frame)
plt.show()
# +
# decode the qr frame
from PIL import Image
import zbarlight
image = Image.fromarray(qr_frame)
codes = zbarlight.scan_codes(['qrcode'], image)
print('QR codes: %s' % codes)
# -
| qr_sample.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Day 2: Password Philosophy
# Your flight departs in a few days from the coastal airport; the easiest way down to the coast from here is via toboggan.
#
# The shopkeeper at the North Pole Toboggan Rental Shop is having a bad day. "Something's wrong with our computers; we can't log in!" You ask if you can take a look.
#
# Their password database seems to be a little corrupted: some of the passwords wouldn't have been allowed by the Official Toboggan Corporate Policy that was in effect when they were chosen.
#
# To try to debug the problem, they have created a list (your puzzle input) of passwords (according to the corrupted database) and the corporate policy when that password was set.
#
# For example, suppose you have the following list:
# ```
# 1-3 a: abcde
# 1-3 b: cdefg
# 2-9 c: ccccccccc
# ```
# Each line gives the password policy and then the password. The password policy indicates the lowest and highest number of times a given letter must appear for the password to be valid. For example, 1-3 a means that the password must contain a at least 1 time and at most 3 times.
#
# In the above example, 2 passwords are valid. The middle password, cdefg, is not; it contains no instances of b, but needs at least 1. The first and third passwords are valid: they contain one a or nine c, both within the limits of their respective policies.
#
# How many passwords are valid according to their policies?
with open("input.txt","r") as f:
input_data = f.read().split("\n")
def valid_pwd(reqpwd):
temp, pwd = reqpwd.split(":")
temp, letter = temp.split(" ")
lower, upper = temp.split("-")
return int(lower)<=sum([x == letter for x in pwd])<=int(upper)
def count_valid(input_data):
return sum([valid_pwd(reqpwd) for reqpwd in input_data])
count_valid(input_data)
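As a quick sanity check against the three sample lines quoted above, here is a self-contained restatement of the same parsing logic applied to the sample:

```python
def valid_pwd_sample(reqpwd):
    # Same logic as valid_pwd above, restated so this cell runs on its own
    policy, pwd = reqpwd.split(":")
    counts, letter = policy.split(" ")
    lower, upper = counts.split("-")
    return int(lower) <= sum(ch == letter for ch in pwd) <= int(upper)

sample = ["1-3 a: abcde", "1-3 b: cdefg", "2-9 c: ccccccccc"]
print(sum(valid_pwd_sample(line) for line in sample))  # → 2, as the puzzle states
```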
# # Part Two
# While it appears you validated the passwords correctly, they don't seem to be what the Official Toboggan Corporate Authentication System is expecting.
#
# The shopkeeper suddenly realizes that he just accidentally explained the password policy rules from his old job at the sled rental place down the street! The Official Toboggan Corporate Policy actually works a little differently.
#
# Each policy actually describes two positions in the password, where 1 means the first character, 2 means the second character, and so on. (Be careful; Toboggan Corporate Policies have no concept of "index zero"!) Exactly one of these positions must contain the given letter. Other occurrences of the letter are irrelevant for the purposes of policy enforcement.
#
# Given the same example list from above:
#
# * 1-3 a: abcde is valid: position 1 contains a and position 3 does not.
# * 1-3 b: cdefg is invalid: neither position 1 nor position 3 contains b.
# * 2-9 c: ccccccccc is invalid: both position 2 and position 9 contain c.
#
# How many passwords are valid according to the new interpretation of the policies?
#
#
def valid_pwd_2(reqpwd):
    temp, pwd = reqpwd.split(":")  # pwd keeps its leading space, so the 1-based positions line up with Python's 0-based indexing
temp, letter = temp.split(" ")
lower, upper = temp.split("-")
return (int(pwd[int(lower)] == letter) + int(pwd[int(upper)] == letter)) == 1
def count_valid_2(input_data):
return sum([valid_pwd_2(reqpwd) for reqpwd in input_data])
count_valid_2(input_data)
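Again as a sanity check against the worked sample above, a self-contained restatement of the position-based rule:

```python
def valid_pwd_2_sample(reqpwd):
    policy, pwd = reqpwd.split(":")    # pwd keeps its leading space,
    counts, letter = policy.split(" ") # which makes the 1-based positions
    lower, upper = counts.split("-")   # line up with Python indexing
    return (pwd[int(lower)] == letter) + (pwd[int(upper)] == letter) == 1

sample = ["1-3 a: abcde", "1-3 b: cdefg", "2-9 c: ccccccccc"]
print(sum(valid_pwd_2_sample(line) for line in sample))  # → 1, only the first line is valid
```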
| 2020/Day02/Day2.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import numpy as np
np.__version__
# Q1. Search for docstrings of the numpy functions on linear algebra.
# ?np.linalg.solve
# ?np.poly
P = np.array([0, 1./3])
np.poly(P)
P = np.array([[1, 2], [5, 2]])
np.poly(P)
P = np.array([[0, 2], [5, 2]])
np.poly(P)
P = np.array([[0, 2], [5, 0]])
np.poly(P)
P = np.array([[0, 5], [2, 0]])
np.poly(P)
len(P.shape)
P = np.array([[2, 1], [1, -1]])
np.poly(P)
# ?np.linalg.eig
w, v = np.linalg.eig(np.diag((1, 2, 3)))
v
# ?np.linalg.cond
# ?np.linalg.pinv
a = np.random.randn(9, 6)
B = np.linalg.pinv(a)
B
a
np.allclose(a, np.dot(a, np.dot(B, a)))
np.dot(a, np.dot(B, a))
# ?np.linalg.LinAlgError
# Q2. Get help information for numpy dot function.
# +
#https://m.blog.naver.com/PostView.nhn?blogId=cjh226&logNo=221356884894&proxyReferer=https:%2F%2Fwww.google.com%2F
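For Q2, one minimal way (a sketch) to pull up the documentation programmatically, rather than via the linked blog post:

```python
import numpy as np

# In a notebook you could also run `np.dot?` or `help(np.dot)`;
# here we just read the docstring attribute directly.
doc = np.dot.__doc__
print(doc.splitlines()[0])
```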
| 4_Numpy-specific_help_functions.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Website Quality Analysis
#
# Analyze website quality using data generated from Google Lighthouse, Pandas dataframes, and visualizations.
#
# ## Steps
#
# 1. Find all internal links on website
# 1. Build dataframe of links
# 1. Assess website quality scores on a per link basis, storing in dataframe
# 1. Sort and display dataframe results
# 1. Visualize results as needed
# +
import json
import os
import subprocess
from urllib.parse import urljoin, urlparse
import colorama
import numpy as np
import pandas as pd
import plotly.graph_objects as go
import requests
from bs4 import BeautifulSoup
from plotly.colors import n_colors
# -
target_url = "https://<your website here>"
# lightouse json report audits don't include categories,
# so we run a single csv report to gather audit id's per category
target_url_str = "{}".format(target_url).replace("https://", "").replace("/", "_")
command = "lighthouse --no-update-notifier --no-enable-error-reporting --output=csv --output-path={} --chrome-flags='--headless' {}".format(
target_url_str, target_url
)
p = subprocess.Popen(
command,
shell=True,
)
p.communicate()
# gather categories of audit ids from csv
df_cats = pd.read_csv(target_url_str)[["name", "category"]]
df_cats.head()
# +
# scraping code courtesy of @x4nth055 from the following link:
# https://github.com/x4nth055/pythoncode-tutorials/blob/master/web-scraping/link-extractor/link_extractor.py
# init the colorama module
colorama.init()
GREEN = colorama.Fore.GREEN
GRAY = colorama.Fore.LIGHTBLACK_EX
RESET = colorama.Fore.RESET
YELLOW = colorama.Fore.YELLOW
def is_valid(url):
"""
Checks whether `url` is a valid URL.
"""
parsed = urlparse(url)
return bool(parsed.netloc) and bool(parsed.scheme)
def get_all_website_links(url):
"""
Returns all URLs that is found on `url` in which it belongs to the same website
"""
# all URLs of `url`
urls = set()
# domain name of the URL without the protocol
domain_name = urlparse(url).netloc
headers = {
"User-Agent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_11_5) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/50.0.2661.102 Safari/537.36"
}
soup = BeautifulSoup(
requests.get(url, headers=headers, verify=False).content, "html.parser"
)
for a_tag in soup.findAll("a"):
href = a_tag.attrs.get("href")
if href == "" or href is None:
# href empty tag
continue
# join the URL if it's relative (not absolute link)
href = urljoin(url, href)
parsed_href = urlparse(href)
# remove URL GET parameters, URL fragments, etc.
href = parsed_href.scheme + "://" + parsed_href.netloc + parsed_href.path
if not is_valid(href):
# not a valid URL
continue
if href in internal_urls:
# already in the set
continue
if domain_name not in href:
# external link
if href not in external_urls:
# print(f"{GRAY}[!] External link: {href}{RESET}")
external_urls.add(href)
continue
# print(f"{GREEN}[*] Internal link: {href}{RESET}")
urls.add(href)
internal_urls.add(href)
return urls
# number of urls visited so far will be stored here
total_urls_visited = 0
def crawl(url, max_urls=1000):
"""
Crawls a web page and extracts all links.
You'll find all links in `external_urls` and `internal_urls` global set variables.
params:
max_urls (int): number of max urls to crawl, default is 30.
"""
global total_urls_visited
total_urls_visited += 1
# print(f"{YELLOW}[*] Crawling: {url}{RESET}")
links = get_all_website_links(url)
for link in links:
if total_urls_visited > max_urls:
break
crawl(link, max_urls=max_urls)
# +
# initialize the set of links (unique links)
internal_urls = set()
external_urls = set()
crawl(target_url)
print("[+] Total Internal links:", len(internal_urls))
print("[+] Total External links:", len(external_urls))
print("[+] Total URLs:", len(external_urls) + len(internal_urls))
# +
df = pd.DataFrame(internal_urls)
df = df.rename(columns={0: "URL"})
df = df[
(~df["URL"].str.contains("tel:"))
& (~df["URL"].str.contains("mailto:"))
& (~df["URL"].str.contains("http://"))
]
df["URL"] = df["URL"].str.rstrip("/")
df = df.drop_duplicates()
df.info()
# -
def get_lighthouse_audit(target_url: str) -> pd.DataFrame:
"""
takes url generate lighthouse report from system
returns dataframe of result
"""
# prepare command
command = "lighthouse --no-update-notifier --no-enable-error-reporting --output=json --chrome-flags='--headless' {}".format(
target_url
)
# run command from system
p = subprocess.Popen(command, shell=True, stdout=subprocess.PIPE)
# transform stdout str results to json dict
json_result = json.loads(p.communicate()[0])
# send audit results to pd.dataframe
df = pd.DataFrame(json_result["audits"]).T
# add url to result audits
df["url"] = target_url
# reorder df to show url as first column
df = df[["url"] + [col for col in df.columns if col != "url"]]
# reset the index
df = df.reset_index(drop=True)
# set the categories per audit id
df["category"] = df["id"]
df["category"] = df["category"].map(
dict(zip(df_cats["name"].values.tolist(), df_cats["category"].values.tolist()))
)
return df
# +
df_list = []
# go through each url and gather audit results
for target_url in df["URL"].tolist():
df_list.append(get_lighthouse_audit(target_url))
print(len(df_list))
# -
# join the results into one dataframe, or relabel the single element as the result
if len(df_list) > 1:
results = pd.concat(df_list)
else:
results = df_list[0]
results.head()
# filter results to only those we'll use to report on.
results_filtered = results[
(~results["category"].isna())
& (~(results["scoreDisplayMode"] == "notApplicable"))
& (~(results["scoreDisplayMode"] == "manual"))
& (~(results["scoreDisplayMode"] == "informative"))
]
# find the aggregated scores of each category by url
aggregate_scores = (
results_filtered.groupby(["url", "category"])["score"].sum()
/ results_filtered.groupby(["url", "category"])["id"].count()
)
aggregate_scores.head()
# +
# show mean of all urls
average_score = aggregate_scores.unstack(level=-1).mean().to_frame().T
# round scores as integers
average_score = (average_score.round(decimals=2) * 100).astype("int")
average_score
# +
colors = n_colors("rgb(250, 0, 50)", "rgb(100, 200, 0)", 101, colortype="rgb")
fig = go.Figure(
data=[
go.Table(
header=dict(
values=list(average_score.columns),
fill_color="paleturquoise",
align="center",
font=dict(color="black", size=11),
),
cells=dict(
values=[
average_score["Accessibility"],
average_score["Best Practices"],
average_score["Performance"],
average_score["Progressive Web App"],
average_score["SEO"],
],
line_color=[
np.array(colors)[average_score["Accessibility"]],
np.array(colors)[average_score["Best Practices"]],
np.array(colors)[average_score["Performance"]],
np.array(colors)[average_score["Progressive Web App"]],
np.array(colors)[average_score["SEO"]],
],
fill_color=[
np.array(colors)[average_score["Accessibility"]],
np.array(colors)[average_score["Best Practices"]],
np.array(colors)[average_score["Performance"]],
np.array(colors)[average_score["Progressive Web App"]],
np.array(colors)[average_score["SEO"]],
],
align="center",
font=dict(color="white", size=11),
),
)
]
)
fig.show()
fig.write_image("lighthouse_overall_average_score.png")
# -
# target accessibility category aggregate scores
aggregate_scores.unstack().sort_values("Accessibility", ascending=False)
# target accessibility category aggregate scores as csv
aggregate_scores.unstack().sort_values("Accessibility", ascending=False).to_csv(
"lighthouse_category_scores_by_url.csv"
)
# target all category individual scores as csv
results_filtered.sort_values(["url", "category"]).to_csv(
"lighthouse_scores_by_url.csv", index=False
)
# target accessibility category individual scores as csv
results_filtered[
(results_filtered["category"] == "Accessibility") & (results_filtered["score"] != 1)
].sort_values(["url", "category"]).to_csv(
"lighthouse_accessibility_low_scores_by_url.csv", index=False
)
| website-quality-analysis.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Bisection method
# The bisection method is a variation of the incremental search method in which the interval
# is always divided in half.
#
# $$x^{(0)} = \frac{a+b}{2}$$
#
# 
# The bisection method is a root-finding method that applies to any continuous functions for which one knows two values with opposite signs. The method consists of repeatedly bisecting the interval defined by these values and then selecting the subinterval in which the function changes sign, and therefore must contain a root.
#
# It is a very simple and robust method, but it is also relatively slow. Because of this, it is often used to obtain a rough approximation to a solution which is then used as a starting point for more rapidly converging methods. The method is also called the interval halving method, the binary search method, or the dichotomy method.
#
# The method may be written in pseudocode as follows:
# + active=""
# INPUT: Function f,
# endpoint values a, b,
# tolerance TOL,
# maximum iterations NMAX
# CONDITIONS: a < b,
# either f(a) < 0 and f(b) > 0 or f(a) > 0 and f(b) < 0
# OUTPUT: value which differs from a root of f(x) = 0 by less than TOL
#
# N ← 1
# while N ≤ NMAX do // limit iterations to prevent infinite loop
# c ← (a + b)/2 // new midpoint
# if f(c) = 0 or (b – a)/2 < TOL then // solution found
# Output(c)
# Stop
# end if
# N ← N + 1 // increment step counter
# if sign(f(c)) = sign(f(a)) then a ← c else b ← c // new interval
# end while
# Output("Method failed.") // max number of steps exceeded
#
# -
# ## Error Estimates for bisection method
# One way to do this is by estimating an approximate percent relative error as in:
#
# $$|\epsilon_a| = \left|\frac{x_r^{new} - x_r^{old}}{x_r^{new}}\right| \times 100\%$$
# Continuing with the example function $$f(m) = \sqrt{\frac{gm}{c_d}}\tanh(\sqrt{\frac{gc_d}{m}}t) - v(t)$$
#
# until the approximate error falls below a
# stopping criterion, let's say $|\epsilon_a|=0.5\%$
#
# We have this boundary conditions:
#
#
# $$c_d = 0.25 \\ g = 9.81 \\ v = 36 \\ t = 4 \\ m \in [50, 200]$$
# The initial estimate of the root $x_r$ lies at the midpoint of the interval.
#
# $$c = \frac{50+200}{2} = 125$$
# Doing this a second time we have
# $$c = \frac{125+200}{2} = 162.5 $$
# This means that the value of 162.5 calculated here has a percent relative error of
#
# $$|\epsilon_a| = \left|\frac{162.5-125}{162.5}\right|\times100\% = 23.08\% $$
# |Iteration|$a$|$b$|$c$|$\vert\epsilon_a\vert$ %|
# |---|---|---|---|---|
# |1|50|200|125|-----|
# |2|125|200|162.5|23.08|
# |3|125|162.5|143.75|13.04|
# |4|125|143.75|134.375|6.98|
# |5|134.375|143.75|139.0625|3.37|
# |6|139.0625|143.75|141.4063|1.66|
# |7|141.4063|143.75|142.5781|0.82|
# |8|142.5781|143.75|143.1641|0.41|
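The iterations above can be reproduced with a short script; this is a sketch using the parameter values stated earlier ($c_d = 0.25$, $g = 9.81$, $v = 36$, $t = 4$), and the function name `fm` is mine.

```python
import numpy as np

def fm(m, g=9.81, cd=0.25, t=4.0, v=36.0):
    # f(m) = sqrt(g*m/cd) * tanh(sqrt(g*cd/m) * t) - v
    return np.sqrt(g * m / cd) * np.tanh(np.sqrt(g * cd / m) * t) - v

a, b = 50.0, 200.0
c_old = None
for i in range(1, 9):
    c = (a + b) / 2
    if c_old is not None:
        ea = abs((c - c_old) / c) * 100
        print(f"{i}  a={a:<9.4f} b={b:<9.4f} c={c:<9.4f} ea={ea:.2f}%")
    else:
        print(f"{i}  a={a:<9.4f} b={b:<9.4f} c={c:<9.4f}")
    if fm(a) * fm(c) < 0:
        b = c            # root lies in [a, c]
    else:
        a = c            # root lies in [c, b]
    c_old = c
```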
import numpy as np
import scipy as sc
import matplotlib.pyplot as plt
def bisection(f, a, b, tol, N):
    """
    Find root of a function within an interval using bisection.
    Basic bisection routine to find a zero of the function `f` between the
    arguments `a` and `b`. `f(a)` and `f(b)` cannot have the same signs.
    Parameters
    ----------
    Input:
    f = name of function
    a, b = lower and upper guesses
    tol = desired relative error (epsilon)
    N = maximum allowable iterations
    Returns
    -------
    root = real root
    fx = value at root
    iteration = number of iterations
    tol_ap = approximate relative error
    """
    iteration = 0
    fa = f(a)
    while (iteration <= N):
        # Midpoint of the current bracket
        root = a + (b-a)/2
        fx = f(root)
        # Stop criteria
        if ((fx == 0) or (np.abs((b-a)/2) < tol)):
            tol_ap = (b-a)/2
            return root, fx, iteration, tol_ap
        # Update the interval [a,b]
        iteration += 1
        if (fa * fx > 0):
            a = root
            fa = fx
        else:
            b = root
    raise RuntimeError("Maximum number of iterations exceeded")
f = lambda x: np.sin(10*x) + np.cos(3*x)
a = 3
b = 4
root, fx, iteration, tol_ap = bisection(f, a, b, tol=0.5e-2, N=50)
print("The root is: " + str(root))
print("The value of f is: " + str(fx))
print("The number of iterations is: " + str(iteration))
print("The approximate relative error: " + str(tol_ap))
# The method is guaranteed to converge to a root of $f(x)$ if $f(x)$ is a continuous function on the interval $[a, b]$ and $f(a)$ and $f(b)$ have opposite signs. The absolute error is halved at each step so the method converges linearly, which is comparatively slow.
#
# The number of iterations needed, $n$, to achieve a given error (or tolerance), $\epsilon$, is given by:
# $$ n=\log _{2}\left({\frac {\epsilon _{0}}{\epsilon }}\right)={\frac {\log \epsilon _{0}-\log \epsilon }{\log 2}}, $$
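For the example above, with an initial bracket width $\epsilon_0 = 200 - 50 = 150$ and a desired absolute error of, say, $\epsilon = 0.5$ (an illustrative choice), the formula gives:

```python
import math

eps0 = 200 - 50      # initial bracket width from the example above
eps = 0.5            # desired absolute error (illustrative choice)
n = math.log2(eps0 / eps)
print(math.ceil(n))  # → 9 halvings suffice
```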
| src/02_roots_optimization/01_bracketing_methods/.ipynb_checkpoints/bisection_method-checkpoint.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import folium
import psycopg2
import pandas as pd
dbname = 'routing_db_crime_2'
username = 'nishan'
password = '<PASSWORD>'
con = psycopg2.connect(database=dbname, user=username, password=password)
query = """
SELECT *, ST_AsGeoJSON(the_geom) AS geojson_geometry FROM ways WHERE
ST_DWithin(the_geom, ST_GeomFromText('POINT(-73.947494 40.687179)', 4326), 0.05)
"""
df = pd.read_sql(query, con)
df.columns
df_tmp = df.loc[:,['gid', 'cost_crime0']]
# normalize between 0 and 1
df_tmp.cost_crime0 = df_tmp.cost_crime0/df_tmp.cost_crime0.max()
# (40.687179, -73.947494)
mymap = folium.Map(location=[40.687179, -73.947494], tiles='https://api.mapbox.com/styles/v1/mapbox/streets-v9/tiles/256/\{z\}/\{x\}/\{y\}?access_token=<KEY>', attr='My Data Attribution')
mymap
mymap.choropleth(geo_path='roads.json', data=df_tmp, columns=['gid', 'cost_crime0'], key_on='feature.id', fill_color='YlOrRd', fill_opacity=1.0, line_color='red', line_opacity=1.0, line_weight=2)
df.geojson_geometry[0]
# folium expects [lat, lon] pairs, not [lon, lat]
folium.PolyLine([[40.6843469, -73.9147568], [40.6840406, -73.9174143]], color='#0060ff', weight=10, opacity=1.0).add_to(mymap)
# +
# folium.PolyLine?
# -
mymap.save('test.html')
| folium_choropleth.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
def test_exercise_4_3(path_to_db) -> bool:
import sqlite3
import pandas as pd
conn = sqlite3.connect(path_to_db)
df = pd.read_sql_query('''SELECT * FROM user;''', conn)
return "m" in df["gender"].unique()
# -
def test_exercise_4_4(path_to_db,rows ) -> bool:
import sqlite3
import pandas as pd
with sqlite3.connect(path_to_db) as conn:
cursor = conn.cursor()
        rows_test = cursor.execute('SELECT * FROM user;').fetchall()
        return rows_test == rows
| Chapter08/unit_tests/.ipynb_checkpoints/Exercise 8.04-checkpoint.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # CSC280, Introduction to Computer Science I
# # Prof. <NAME>
# # Conditions, <NAME>, and Collatz
#
# Ok, today we are going to start off by learning a few more Python language constructs. These are a little different from what we have learned so far. So far, what we have written looks a lot like a recipe in a cookbook: there is a first, second, third step, etc. You just go from one thing to the next --- there aren't any choices.
#
# That doesn't make for a very interesting program.
#
# An interesting program will do things like ask questions and interact with its environment. There are _choices_ to be made. You do this sort of thing every day: you go to the cafeteria and see what's there. If it is good enough, you eat. Otherwise you have to make another decision.
#
# Python has several different constructions that are useful for making choices. The simplest is the `if` statement. The basic `if` statement needs two things:
# * a condition, which is either true or false
# * code to run if the condition is _true_
# For example, suppose that we were curious to learn whether the word "supercalifragilisticexpialidocious" is shorter than "pneumonoultramicroscopicsilicovolcanoconiosis". (Pretend that you can't just tell by looking at the lengths.) We could write
# ```
# supercali = "supercalifragilisticexpialidocious"
# pneumono = "pneumonoultramicroscopicsilicovolcanoconiosis"
# if len(supercali) < len(pneumono):
# print( supercali + " is shorter than " + pneumono )
# ```
# Go ahead and try it below:
# By the way, notice that the `print` statement is **indented**. This is how Python tells which part of your code is conditional on the `if`. We call that indented part a _code block_. (Other languages use things like curly braces `{}` to do the same thing, but even then you are expected to indent as a matter of proper style.)
#
# Also, notice the `len` method. This takes a string (and several other things) and gives back its length.
#
# Now, let's try to enhance our word length code. We want to write some code that reads in two words and compares them to see which one is longer, or if they have the same length. We saw before (in notebook `01`) how to read data from the user using `input`. The new bits that we will need are extensions to `if`.
#
# You could _conceivably_ write code that looks like this:
# ```
# if thing1:
# do_operation1
#
# if thing2:
# do_operation2
#
# if thing3:
# do_operation3
# ```
# But that's not great for our case. We know that _exactly_ one of these things is true:
# * the first word is longer than the second,
# * the second word is longer than the first, or
# * the two words have the same length.
#
# If we were to write code with three separate `if` statements, then it would look like we thought that any combination of these possibilities could be true! For our problem, if we know that `thing1` is true, we never need to check `thing2` or `thing3`! For this sort of situation, we can combine the statements:
# ```
# if thing1:
# do_operation1
# elif thing2:
# do_operation2
# else:
# do_operation3
# ```
# **A thing to think about:** What happened to `thing3`?
#
# **Your task:** Write code which reads in two words and tells the user which is shorter, or if the words have the same length. Make sure you test out all 3 possibilities.
# **Your next task:** Enhance the code; it should also test to see if the two words are actually the same word. You'll need to use the `==` operator to test equality. (Remember that `=` is the _assignment_ operator.)
#
# By the way, there is also a not-equal operator: `!=`
# Pretty soon we'll need to make more complex conditions for `if` statements. We can join together multiple conditions using `and` and `or` and switch truth values using `not`.
#
# **Your task:** Explore, by printing, the values of
# ```
# print( True and True )
# print( True and False )
# print( False and True )
# print( False and False )
# ...
# ```
# Similarly for `or`. These are called 'Truth-tables' in logic. How does `or` differ from the usual English-language usage of "or"?
# An odd question to think about: There are two [Boolean](https://en.wikipedia.org/wiki/George_Boole) values, `True` and `False`. What is `True + True`? Does that make sense? What does Python think it is? What do you think that means about how Python stores truth values, internally?
# There is an [infamous](https://blog.codinghorror.com/why-cant-programmers-program/) programming interview question called [FizzBuzz](https://en.wikipedia.org/wiki/Fizz_buzz), based on a kid's mathematics teaching game from the UK:
#
# > Write a program that prints the numbers from 1 to 100. But for multiples of three print "Fizz" instead of the number and for the multiples of five print "Buzz". For numbers which are multiples of both three and five print "FizzBuzz".
#
# (When you get a chance, you really should follow that first link.)
#
# **Your primary task:** Write the FizzBuzz program. (You'll need the modulus, `%`, operator we used in notebook `01`.)
#
# **Your secondary task:** What kinds of errors do you think that interviewees make? Remember that details matter!
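If you want to check your attempt afterwards, here is one common sketch of a solution (many variants exist; try the exercise yourself first). The detail interviewees most often get wrong is the order of the checks: the multiple-of-15 case must come before 3 or 5 alone.

```python
def fizzbuzz(n):
    # Check the combined case first; testing 3 or 5 alone first would shadow it.
    if n % 15 == 0:
        return "FizzBuzz"
    if n % 3 == 0:
        return "Fizz"
    if n % 5 == 0:
        return "Buzz"
    return str(n)

# Print the first 15 values on one line to spot-check the pattern
print(" ".join(fizzbuzz(n) for n in range(1, 16)))
# → 1 2 Fizz 4 Buzz Fizz 7 8 Fizz Buzz 11 Fizz 13 14 FizzBuzz
```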
# Another conditional construct is a kind of chimera, mixing properties of `if` and `for`. It goes by the name `while`, and like `for`, `while` loops --- it repeats the same block of code over and over again. However, like `if`, `while` checks a Boolean condition.
#
# (Alternatively, you can think of `for`-loops as being a special kind of `while`.)
#
# `while` shows up when we want to repeat some block of code as long as (i.e. while) some condition is true. For example, suppose that we wanted to ask the user if they really want to do something:
# ```
# choice = input('Are you sure you want to rm -rf / ?')
# while choice != 'yes' and choice != 'no':
# choice = input('I didn\'t understand, please type \'yes\' or \'no\'.')
# if choice == 'yes':
# print('Ok, deleting all files')
# else:
# print('Ok, since you aren\'t sure, I\'ll just make the choice for you. Deleting all files.')
# ```
# Try it out.
# # Collatz Conjecture
#
# 
#
# The [Collatz conjecture](https://en.wikipedia.org/wiki/Collatz_conjecture) says that if you take _any positive whole number_ $n$ and repeatedly get a new value for $n$ by the procedure:
# * if $n$ is even, divide by 2 and
# * if $n$ is odd, take $3n+1$.
#
# Then you will eventually get to $n=1$.
#
# The late, great mathematician <NAME> said the following about the Collatz conjecture: "Mathematics may not be ready for such problems." Nonetheless, we can verify that it is true for many, many examples!
#
# **Your task:** Write a program which reads in a number from the user, then prints out the corresponding Collatz sequence. Your program should halt when you reach 1. (Print that 1!)
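If you want to compare notes afterwards, here is one possible sketch of the Collatz program (many variants are valid; this one collects the sequence in a list instead of reading input and printing line by line):

```python
def collatz_sequence(n):
    # Apply the Collatz rule until n reaches 1, recording every value
    seq = [n]
    while n != 1:
        n = n // 2 if n % 2 == 0 else 3 * n + 1
        seq.append(n)
    return seq

print(collatz_sequence(6))  # → [6, 3, 10, 5, 16, 8, 4, 2, 1]
```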
| 03-Conditions_FizzBuzz_Collatz.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import pandas as pd
import geopandas as gpd
import hvplot.pandas
# import cartopy.crs as ccrs
# # load csv file
df = pd.read_csv('data-flooddamage/data.csv',index_col=0)
# # make bar chart
# +
count = df['date'].value_counts(sort=False)
scount = count.sort_index()
d = pd.DataFrame(scount)
d.columns = ['Damaged houses']
d.index.name = 'date'
g = d.hvplot.bar(y='Damaged houses', rot=30, width=400, height=250) # + d.hvplot.table(['date', 'Damaged houses'], index_position=None, width=200)
# -
# # make scatter plot
df2 = df[df['longitude'] != 0]
gdf = gpd.GeoDataFrame(df2, geometry=gpd.points_from_xy(df2.longitude, df2.latitude))
gdf = gdf.drop(['longitude','latitude'],axis = 1)
# gdf['date'] = pd.to_datetime(gdf['date'])
gdf.crs = 'epsg:4326'
from bokeh.models import HoverTool
hover = HoverTool(tooltips=[ ("address", "@address"),("damage", "@damage"),("cause", "@cause")])
g2 = gdf.hvplot.points(geo=True, groupby='date', frame_height=400, frame_width=500, tiles=False, size=100, color='red'
# ,crs = ccrs.PlateCarree(), project=True, projection=ccrs.PlateCarree()
, hover_cols = ['address','damage','cause']
).options(tools=[hover])
# # make polygon
poly = gpd.read_file('chofu.geojson')
p = poly.hvplot.polygons( geo=True, frame_height=400, frame_width=500, tiles="OSM", line_color='black', line_width=5, alpha=0.2, hover=False )
# # layout
layout = ( g + p*g2).cols(2)
# # on bokeh server
# +
import holoviews as hv
renderer = hv.renderer('bokeh')
# -
doc = renderer.server_doc(layout)
doc.title = 'chofu flood damage'
layout
| app-flooddamage.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Bayesian Final Project
# # Crisis Management using Tweets
# ### <NAME> - ss8xj
# ### <NAME> - ss4jg
# ### <NAME> - hg5mn
# +
import sys
import numpy as np
import pandas as pd
import pymc3 as pm
import seaborn as sns
import arviz as az
import scipy.stats as stats
import scipy as sp
import matplotlib.pyplot as plt
import graphviz
from dbn.tensorflow import SupervisedDBNClassification
from pybbn.graph.dag import Bbn
from pybbn.graph.edge import Edge, EdgeType
from pybbn.graph.jointree import EvidenceBuilder
from pybbn.graph.node import BbnNode
from pybbn.graph.variable import Variable
from pybbn.pptc.inferencecontroller import InferenceController
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_curve
from sklearn.metrics import roc_auc_score
from sklearn.metrics import accuracy_score, precision_score, recall_score
from sklearn.metrics import confusion_matrix
from sklearn.metrics import classification_report
from sklearn.model_selection import cross_val_predict
from sklearn import linear_model
from sklearn.model_selection import cross_val_score
from sklearn.model_selection import LeaveOneOut
from sklearn.naive_bayes import GaussianNB
from sklearn.naive_bayes import MultinomialNB
from sklearn.manifold import TSNE
from sklearn.feature_extraction.text import CountVectorizer
# %matplotlib inline
from pybbn.generator.bbngenerator import convert_for_drawing
import matplotlib.pyplot as plt
import networkx as nx
import warnings
import matplotlib.pyplot as plt
import seaborn as sns
# +
# command to install dbn.tensorflow package
# #!pip install git+git://github.com/albertbup/deep-belief-network.git
# -
# # Data Preprocessing
# load two csv files and merge them
wild_fires_data = pd.read_csv("2012_Colorado_wildfires-tweets_labeled.csv")
wild_fires_australia_data = pd.read_csv("2013_Australia_bushfire-tweets_labeled.csv")
wild_fires_data = [wild_fires_data,wild_fires_australia_data]
wild_fires_data = pd.concat(wild_fires_data)
wild_fires_data.head()
# drop tweet ID and Information source
columns = ['Tweet ID', ' Information Source']
wild_fires_data.drop(columns, inplace=True, axis=1)
wild_fires_data.info()
wild_fires_data[' Informativeness'] = wild_fires_data[' Informativeness'].astype('category')
# function to convert the categories in informativeness column to integer classes.
def Informativeness_to_numeric(x):
if x=='Related and informative':
return 3
if x=='Not related':
return 2
if x=='Related - but not informative':
return 1
if x=='Not applicable':
return 4
wild_fires_data[' Informativeness'] = wild_fires_data[' Informativeness'].apply(Informativeness_to_numeric)
wild_fires_data.head()
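The chained-`if` converter above can also be written with a plain dict and pandas `Series.map`; a brief sketch (the category strings are taken from the dataset as labelled above):

```python
import pandas as pd

# Same conversion as Informativeness_to_numeric, expressed as a mapping dict
informativeness_map = {
    'Related and informative': 3,
    'Not related': 2,
    'Related - but not informative': 1,
    'Not applicable': 4,
}

# Small demonstration on a stand-alone Series
s = pd.Series(['Not related', 'Related and informative'])
print(s.map(informativeness_map).tolist())  # → [2, 3]
```

Unmapped categories become NaN with `.map`, which makes unexpected labels easy to spot.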
# extract rows that have information type as not labelled
# we won't consider these rows, as we are doing supervised classification, which requires the data to be labelled
not_labelled = wild_fires_data[wild_fires_data[' Information Type'] == 'Not labeled']
not_labelled.head()
# extract labelled data
labelled = wild_fires_data[wild_fires_data[' Information Type'] != 'Not labeled']
labelled.head()
# categories in information type
np.unique(labelled[' Information Type'])
# function to convert information type to integer classes
def Information_Type_to_numeric(x):
if x=='Affected individuals':
return 3
if x=='Caution and advice':
return 2
if x=='Donations and volunteering':
return 1
if x=='Infrastructure and utilities':
return 4
if x=='Not applicable':
return 5
if x=='Other Useful Information':
return 6
if x=='Sympathy and support':
return 7
labelled[' Information Type'] = labelled[' Information Type'].apply(Information_Type_to_numeric)
labelled.head()
labelled.shape
# # Nested Naive Bayes Classifier
# function to implement nested naive bayes
def nested_classify_nb(X_train, X_test, y_train, y_test, latent_train, latent_test):
# fit two naive bayes models
mnb1 = MultinomialNB()
mnb1.fit(X_train, latent_train)
cv_predicted = cross_val_predict(mnb1, X_train, latent_train, cv=LeaveOneOut())
print ("Naive Bayes Training accuracy for Informativeness prediction: ", accuracy_score(latent_train, cv_predicted))
mnb2 = MultinomialNB()
mnb2.fit(X_train, y_train)
cv_predicted = cross_val_predict(mnb2, X_train, y_train, cv=LeaveOneOut())
print ("Naive Bayes Training accuracy for Information Type prediction: ", accuracy_score(y_train, cv_predicted))
# predict class for first classifier
pred1 = mnb1.predict(X_test)
# extract indices where the predicted class is 3(related and informative)
indices = np.where( pred1 == 3 )
y_test_ri = y_test.iloc[indices]
X_test_ri = X_test[indices]
pred2 = []
count = 0
# predict class for second classifier only when the first predicted class is 3
for i in range(len(pred1)):
if pred1[i] == 3:
x_test = X_test_ri[count]
pred2.append(mnb2.predict(x_test))
count = count + 1
print("Tweet ", i+1, " is informative with info type ", mnb2.predict(x_test))
else:
print("Tweet ", i+1, " is not informative!")
    print("Accuracy of the Information Type classifier, given the tweet is informative: ", accuracy_score(y_test_ri, pred2))
    print("Accuracy of the Informativeness classifier: ", accuracy_score(latent_test, pred1))
# +
X = labelled.loc[:,' Tweet Text']
y = labelled.loc[:, ' Information Type']
latent = labelled.loc[:, ' Informativeness']
X_train, X_test, latent_train, latent_test = train_test_split(X, latent, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)
# count vectorizer to get bag of words from the tweets
cv = CountVectorizer(strip_accents='ascii', token_pattern=u'(?ui)\\b\\w*[a-z]+\\w*\\b', lowercase=True, stop_words='english')
X_train_cv = cv.fit_transform(X_train)
X_test_cv = cv.transform(X_test)
nested_classify_nb(X_train_cv, X_test_cv, y_train, y_test, latent_train, latent_test)
# -
# # Deep Belief Network Classification
# input and output for the deep belief network
X = cv.fit_transform(labelled.loc[:,' Tweet Text'])
X = X.toarray()
X = pd.DataFrame(X)
Y = labelled[' Information Type']
from sklearn.preprocessing import StandardScaler
ss=StandardScaler()
X = ss.fit_transform(X)
# train test split
X_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size=0.2, random_state=0)
# building a deep belief network model with three hidden layers containing 250 units each
# number of epochs for each restricted Boltzmann machine is 10
# activation function used is ReLU; sigmoid can be used too
classifier = SupervisedDBNClassification(hidden_layers_structure = [250, 250, 250],
learning_rate_rbm=0.05,
learning_rate=0.1,
n_epochs_rbm=10,
n_iter_backprop=100,
batch_size=200,
activation_function='relu',
dropout_p=0.2)
# fit model
classifier.fit(X_train, Y_train)
# predict from the model
Y_pred = classifier.predict(X_test)
print('Done.\nAccuracy: %f' % accuracy_score(Y_test, Y_pred))
# # Bayesian Belief Network
# +
# creating Bayesian belief network nodes by initializing the states of each random variable
# and their respective transition probabilities
Info_Source = BbnNode(Variable(0,
'Info_Source',
['Business', 'Eyewitness','Government', 'Media', 'NGOs', 'Outsiders']),
[0.01066, 0.069, 0.078, 0.48, 0.0511, 0.308])
Informativeness = BbnNode(Variable(1,
'Informativeness',
['Related_Informative', 'Related_Not_Informative']),
[0.8, 0.2, 0.54, 0.46, 0.85, 0.15, 0.92, 0.08, 0.75, 0.25, 0.42, 0.58])
Info_Type = BbnNode(Variable(2,
'Info_Type',
['Affected Individuals', 'Caution', 'Donations', 'Infra', 'Other', 'Sympathy']),
[0.18, 0.064, 0.089, 0.1, 0.52, 0.04, 0.089, 0.0038, 0.062, 0.0077, 0.26, 0.569])
# -
# building a network with 3 nodes and 2 edges specifying the conditional relationship between the nodes
bbn1 = Bbn() \
.add_node(Info_Source) \
.add_node(Informativeness) \
.add_node(Info_Type) \
.add_edge(Edge(Info_Source, Informativeness, EdgeType.DIRECTED)) \
.add_edge(Edge(Informativeness, Info_Type, EdgeType.DIRECTED))
# plotting the belief network
with warnings.catch_warnings():
warnings.simplefilter('ignore')
graph = convert_for_drawing(bbn1)
pos = nx.nx_agraph.graphviz_layout(graph, prog='neato')
plt.figure(figsize=(20, 10))
plt.subplot(121)
labels = dict([(k, node.variable.name) for k, node in bbn1.nodes.items()])
nx.draw(graph, pos=pos, with_labels=True, labels=labels)
plt.title('BBN DAG')
plt.savefig('DAG')
# +
# convert the BBN to a join tree
join_tree = InferenceController.apply(bbn1)
# insert an observation evidence
ev = EvidenceBuilder() \
.with_node(join_tree.get_bbn_node_by_name('Info_Source')) \
.with_evidence('Outsiders', 1.0) \
.build()
join_tree.set_observation(ev)
# print the marginal probabilities
for node in join_tree.get_bbn_nodes():
potential = join_tree.get_bbn_potential(node)
print(node)
print(potential)
print('--------------------->')
# +
# convert the BBN to a join tree
join_tree = InferenceController.apply(bbn1)
# insert an observation evidence
ev = EvidenceBuilder() \
.with_node(join_tree.get_bbn_node_by_name('Info_Source')) \
.with_evidence('Government', 1.0) \
.build()
join_tree.set_observation(ev)
# print the marginal probabilities
for node in join_tree.get_bbn_nodes():
potential = join_tree.get_bbn_potential(node)
print(node)
print(potential)
print('--------------------->')
| BAYESIAN_PROJECT.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: visualization-curriculum-gF8wUgMm
# language: python
# name: visualization-curriculum-gf8wugmm
# ---
# + [markdown] papermill={"duration": 0.010326, "end_time": "2020-04-23T06:07:23.708053", "exception": false, "start_time": "2020-04-23T06:07:23.697727", "status": "completed"} tags=[]
# # Interactive Map - Confirmed Cases in the US by State
# > Interactive Visualizations of The Count and Growth of COVID-19 in the US.
#
# - comments: true
# - author: <NAME>
# - categories: [growth, usa, altair, interactive]
# - image: images/us-growth-state-map.png
# - permalink: /growth-map-us-states/
# + papermill={"duration": 0.522219, "end_time": "2020-04-23T06:07:24.236711", "exception": false, "start_time": "2020-04-23T06:07:23.714492", "status": "completed"} tags=[]
#hide
import requests
import numpy as np
import pandas as pd
import altair as alt
alt.data_transformers.disable_max_rows()
#https://github.com/altair-viz/altair/issues/1005#issuecomment-403237407
def to_altair_datetime(dt):
return alt.DateTime(year=dt.year, month=dt.month, date=dt.day,
hours=dt.hour, minutes=dt.minute, seconds=dt.second,
milliseconds=0.001 * dt.microsecond)
# + papermill={"duration": 0.018495, "end_time": "2020-04-23T06:07:24.261609", "exception": false, "start_time": "2020-04-23T06:07:24.243114", "status": "completed"} tags=[]
#hide
abbr2state = {
'AK': 'Alaska',
'AL': 'Alabama',
'AR': 'Arkansas',
'AS': 'American Samoa',
'AZ': 'Arizona',
'CA': 'California',
'CO': 'Colorado',
'CT': 'Connecticut',
'DC': 'District of Columbia',
'DE': 'Delaware',
'FL': 'Florida',
'GA': 'Georgia',
'GU': 'Guam',
'HI': 'Hawaii',
'IA': 'Iowa',
'ID': 'Idaho',
'IL': 'Illinois',
'IN': 'Indiana',
'KS': 'Kansas',
'KY': 'Kentucky',
'LA': 'Louisiana',
'MA': 'Massachusetts',
'MD': 'Maryland',
'ME': 'Maine',
'MI': 'Michigan',
'MN': 'Minnesota',
'MO': 'Missouri',
'MP': 'Northern Mariana Islands',
'MS': 'Mississippi',
'MT': 'Montana',
'NA': 'National',
'NC': 'North Carolina',
'ND': 'North Dakota',
'NE': 'Nebraska',
'NH': 'New Hampshire',
'NJ': 'New Jersey',
'NM': 'New Mexico',
'NV': 'Nevada',
'NY': 'New York',
'OH': 'Ohio',
'OK': 'Oklahoma',
'OR': 'Oregon',
'PA': 'Pennsylvania',
'PR': 'Puerto Rico',
'RI': 'Rhode Island',
'SC': 'South Carolina',
'SD': 'South Dakota',
'TN': 'Tennessee',
'TX': 'Texas',
'UT': 'Utah',
'VA': 'Virginia',
'VI': 'Virgin Islands',
'VT': 'Vermont',
'WA': 'Washington',
'WI': 'Wisconsin',
'WV': 'West Virginia',
'WY': 'Wyoming'
}
state2abbr = {s:a for a,s in abbr2state.items()}
# + papermill={"duration": 1.013374, "end_time": "2020-04-23T06:07:25.280859", "exception": false, "start_time": "2020-04-23T06:07:24.267485", "status": "completed"} tags=[]
#hide
states_daily_url = 'https://covidtracking.com/api/states/daily'
states_daily_raw = pd.DataFrame(requests.get(states_daily_url).json())
us_daily_df = states_daily_raw.copy()
cols_keep = ['date','state','positive','dateChecked','positiveIncrease','death']
us_daily_df = us_daily_df[cols_keep]
us_daily_df['date'] = pd.to_datetime(us_daily_df['date'], format='%Y%m%d')
us_daily_df['dateChecked'] = pd.to_datetime(us_daily_df['dateChecked'])
us_state_capitals_url = 'https://vega.github.io/vega-datasets/data/us-state-capitals.json'
state_cap_df = pd.DataFrame(requests.get(us_state_capitals_url).json())
state_cap_df['state'] = state_cap_df['state'].apply(lambda s: state2abbr.get(s))
us_daily_df = us_daily_df.merge(state_cap_df, on='state', how='left')
us_daily_df.rename(columns={'positive':'confirmed_count',
'positiveIncrease':'new_cases'}, inplace=True)
state_df = us_daily_df.sort_values('date').groupby(['state']).tail(1)
# + papermill={"duration": 0.091354, "end_time": "2020-04-23T06:07:25.379048", "exception": false, "start_time": "2020-04-23T06:07:25.287694", "status": "completed"} tags=[]
#hide
states_data = 'https://vega.github.io/vega-datasets/data/us-10m.json'
states = alt.topo_feature(states_data, feature='states')
selector = alt.selection_single(empty='none', fields=['state'], nearest=True, init={'state':'CA'})
curr_date = state_df.date.max().date().strftime('%Y-%m-%d')
dmax = (us_daily_df.date.max() + pd.DateOffset(days=3))
dmin = us_daily_df.date.min()
# US states background
background = alt.Chart(states).mark_geoshape(
fill='lightgray',
stroke='white'
).properties(
width=500,
height=400
).project('albersUsa')
points = alt.Chart(state_df).mark_circle().encode(
longitude='lon:Q',
latitude='lat:Q',
size=alt.Size('confirmed_count:Q', title= 'Number of Confirmed Cases'),
color=alt.value('steelblue'),
tooltip=['state:N','confirmed_count:Q']
).properties(
title=f'Total Confirmed Cases by State as of {curr_date}'
).add_selection(selector)
timeseries = alt.Chart(us_daily_df).mark_bar().properties(
width=500,
height=350,
title="New Cases by Day",
).encode(
x=alt.X('date:T', title='Date', timeUnit='yearmonthdate',
axis=alt.Axis(format='%y/%m/%d', labelAngle=-30),
scale=alt.Scale(domain=[to_altair_datetime(dmin), to_altair_datetime(dmax)])),
y=alt.Y('new_cases:Q',
axis=alt.Axis(title='# of New Cases',titleColor='steelblue'),
),
color=alt.Color('state:O'),
tooltip=['state:N','date:T','confirmed_count:Q', 'new_cases:Q']
).transform_filter(
selector
).add_selection(alt.selection_single()
)
timeseries_cs = alt.Chart(us_daily_df).mark_line(color='red').properties(
width=500,
height=350,
).encode(
x=alt.X('date:T', title='Date', timeUnit='yearmonthdate',
axis=alt.Axis(format='%y/%m/%d', labelAngle=-30),
scale=alt.Scale(domain=[to_altair_datetime(dmin), to_altair_datetime(dmax)])),
y=alt.Y('confirmed_count:Q',
#scale=alt.Scale(type='log'),
axis=alt.Axis(title='# of Confirmed Cases', titleColor='red'),
),
).transform_filter(
selector
).add_selection(alt.selection_single(nearest=True)
)
final_chart = alt.vconcat(
background + points,
alt.layer(timeseries, timeseries_cs).resolve_scale(y='independent'),
).resolve_scale(
color='independent',
shape='independent',
).configure(
padding={'left':10, 'bottom':40}
).configure_axis(
labelFontSize=10,
labelPadding=10,
titleFontSize=12,
).configure_view(
stroke=None
)
# + [markdown] papermill={"duration": 0.005914, "end_time": "2020-04-23T06:07:25.391406", "exception": false, "start_time": "2020-04-23T06:07:25.385492", "status": "completed"} tags=[]
# ### Click On State To Filter Chart Below
# + papermill={"duration": 0.217322, "end_time": "2020-04-23T06:07:25.615192", "exception": false, "start_time": "2020-04-23T06:07:25.397870", "status": "completed"} tags=[]
#hide_input
final_chart
# + [markdown] papermill={"duration": 0.014058, "end_time": "2020-04-23T06:07:25.643304", "exception": false, "start_time": "2020-04-23T06:07:25.629246", "status": "completed"} tags=[]
# Prepared by [<NAME>](https://twitter.com/5sigma)[^1]
#
# [^1]: Source: ["https://covidtracking.com/api/"](https://covidtracking.com/api/).
| _notebooks/2020-03-15-us-growth-by-state-map.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/MainakRepositor/ML-Algorithms/blob/master/Random_Forest_Classifier.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + [markdown] id="HbFWauEpY-ir"
# # Random Forest Classifier
#
# <hr>
# + [markdown] id="r-orZ8mWhXzx"
# ## 1. Importing necessary libraries
# + id="0Rq08N8bY67W" outputId="5d5e90d7-c15e-4fd3-ee00-ddab5c765b2a" colab={"base_uri": "https://localhost:8080/"}
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
print("Basic libraries included successfully!")
# + [markdown] id="oIJ_7hFCh-nQ"
# ## 2. Importing the datasets
# + id="bATpyKIoh3wt" outputId="babbaaee-e515-463a-c09b-b66b1db6436c" colab={"base_uri": "https://localhost:8080/", "height": 196}
url = 'https://raw.githubusercontent.com/MainakRepositor/Datasets-/master/Social_Network_Ads.csv'
df = pd.read_csv(url,error_bad_lines=False)
df.head()
# + [markdown] id="fwwOvPQflRPi"
# ## 3. Splitting the dataset into the Training set and Test set
# + id="dk-ddwiHjnd3"
X = df.iloc[:, :-1].values
y = df.iloc[:, -1].values
# + id="CeTjRGeHlXAb"
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.25, random_state = 0)
# + [markdown] id="CGjaIY6ZlilX"
# ## 4. Feature Scaling
# + id="9MygBmy5leNQ"
from sklearn.preprocessing import StandardScaler
sc = StandardScaler()
X_train = sc.fit_transform(X_train)
X_test = sc.transform(X_test)
# + [markdown] id="EY6855Drlsv2"
# ## 5. Training the Random Forest Classification model on the Training set
# + id="0D70fVIolomy" outputId="64989478-1bc4-433a-acc5-18e0fa540309" colab={"base_uri": "https://localhost:8080/"}
from sklearn.ensemble import RandomForestClassifier
classifier = RandomForestClassifier(n_estimators = 10, criterion = 'entropy', random_state = 0)
classifier.fit(X_train, y_train)
# + [markdown] id="jPoS8I6Nl1L8"
# ## 6. Predicting a new result
# + id="VY1G33YAly1L" outputId="5106cc55-b5a5-48c3-b668-eff294c81281" colab={"base_uri": "https://localhost:8080/"}
print(classifier.predict(sc.transform([[30,87000]])))
# + [markdown] id="9h4DSwTgl-0w"
# ## 7. Predicting the Test set results
# + id="bEtxWlfVl9Rn" outputId="61292dbe-8902-47fb-db48-cfd48c2206c9" colab={"base_uri": "https://localhost:8080/"}
y_pred = classifier.predict(X_test)
print(np.concatenate((y_pred.reshape(len(y_pred),1), y_test.reshape(len(y_test),1)),1))
# + [markdown] id="yGh_BpsJmLOR"
# ## 8. Making the Confusion Matrix
# + id="SGUlXEwFmFuW" outputId="fa672c5e-2fb7-493d-dc83-74e91e027165" colab={"base_uri": "https://localhost:8080/"}
from sklearn.metrics import confusion_matrix, accuracy_score
cm = confusion_matrix(y_test, y_pred)
print("Confusion matrix :")
print(cm)
print("\nAccuracy of the model is :",accuracy_score(y_test, y_pred))
# + id="B9kENTQRmRHc" outputId="a116512a-398a-4c46-a89e-096972f65ae8" colab={"base_uri": "https://localhost:8080/", "height": 349}
# Visualising the Training set results
from matplotlib.colors import ListedColormap
X_set, y_set = sc.inverse_transform(X_train), y_train
X1, X2 = np.meshgrid(np.arange(start = X_set[:, 0].min() - 10, stop = X_set[:, 0].max() + 10, step = 0.25),
np.arange(start = X_set[:, 1].min() - 1000, stop = X_set[:, 1].max() + 1000, step = 0.25))
plt.contourf(X1, X2, classifier.predict(sc.transform(np.array([X1.ravel(), X2.ravel()]).T)).reshape(X1.shape),
alpha = 0.75, cmap = ListedColormap(('red', 'green')))
plt.xlim(X1.min(), X1.max())
plt.ylim(X2.min(), X2.max())
for i, j in enumerate(np.unique(y_set)):
plt.scatter(X_set[y_set == j, 0], X_set[y_set == j, 1], c = ListedColormap(('red', 'green'))(i), label = j)
plt.title('Random Forest Classification (Training set)')
plt.xlabel('Age')
plt.ylabel('Estimated Salary')
plt.legend()
plt.show()
# + id="N658g7b2mjfn" outputId="fdfcd4ef-769e-4689-8bad-d2d23a28d9af" colab={"base_uri": "https://localhost:8080/", "height": 349}
# Visualising the Test set results
from matplotlib.colors import ListedColormap
X_set, y_set = sc.inverse_transform(X_test), y_test
X1, X2 = np.meshgrid(np.arange(start = X_set[:, 0].min() - 10, stop = X_set[:, 0].max() + 10, step = 0.25),
np.arange(start = X_set[:, 1].min() - 1000, stop = X_set[:, 1].max() + 1000, step = 0.25))
plt.contourf(X1, X2, classifier.predict(sc.transform(np.array([X1.ravel(), X2.ravel()]).T)).reshape(X1.shape),
alpha = 0.75, cmap = ListedColormap(('red', 'green')))
plt.xlim(X1.min(), X1.max())
plt.ylim(X2.min(), X2.max())
for i, j in enumerate(np.unique(y_set)):
plt.scatter(X_set[y_set == j, 0], X_set[y_set == j, 1], c = ListedColormap(('red', 'green'))(i), label = j)
plt.title('Random Forest Classification (Test set)')
plt.xlabel('Age')
plt.ylabel('Estimated Salary')
plt.legend()
plt.show()
# + id="5s1OUMMUo6vn"
| 6. Random_Forest_Classifier.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# **<font size="6" color='darkgreen'>Kaggle Credit Card Fraud Dataset</font>**<br>
# <br>
# <font size=5>We use an open-source [dataset](https://www.kaggle.com/mlg-ulb/creditcardfraud) from Kaggle.<font>
# # Split Datasets
# +
import pandas as pd
import numpy as np
# import some models
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from tensorflow import keras
# import evaluation metrics
from sklearn.metrics import roc_auc_score
from sklearn.metrics import f1_score
from sklearn.metrics import recall_score
from sklearn.metrics import precision_score
from sklearn.metrics import confusion_matrix
# plot some metrics
from sklearn.metrics import precision_recall_curve
from sklearn.metrics import roc_curve
import matplotlib.pyplot as plt
# %matplotlib inline
# +
df = pd.read_csv("creditcard.csv")
print("The number of examples in this dataset is:", len(df.iloc[:,1]))
df.tail()
# -
# **<font color='green' size=3>No PCA was applied to the *Time* and *Amount* columns, so I will manually scale them to make the data comparable and also to make gradient descent faster in the neural network later<font>**
# +
from sklearn.preprocessing import StandardScaler, RobustScaler
std_scaler = StandardScaler()
rob_scaler = RobustScaler()
df['scaled_amount'] = rob_scaler.fit_transform(df['Amount'].values.reshape(-1,1))
df['scaled_time'] = rob_scaler.fit_transform(df['Time'].values.reshape(-1,1))
df.drop(['Time','Amount'], axis=1, inplace=True)
# for new column of scaled_amount and scaled_time are inserted in the back
# let's move them in front for convenience processing
cols = df.columns.tolist()
cols = cols[-2:] + cols[:-2]
df = df[cols]
df.head()
# -
# create training and test sets
X = df.iloc[:,:-1]
y = df.iloc[:,-1]
# ratio of positive examples
sum(y)/len(y)
# **<font color='green' size=3>Now we see that the dataset is extremely imbalanced, with only 1~2 positive (fraud) examples per 1000.<br>
# It means that accuracy is not a good metric to evaluate model performance, since a dummy classifier that always predicts negative would have an accuracy of 99.8%<font>**
# <br>
# **<font size=3>split training and test set<font>**
X_train,X_test,y_train, y_test = train_test_split(X, y.values, test_size = 0.15)
print(X_train.shape)
print(y_train.shape)
print(X_test.shape)
X_train.head()
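# The imbalance point above can be illustrated with a quick side check: a dummy baseline that always predicts the majority class reaches near-perfect accuracy yet zero recall. This sketch uses synthetic labels at roughly the same positive rate, not the actual dataset:

```python
import numpy as np

# synthetic labels at roughly the dataset's ~0.17% positive rate
rng = np.random.default_rng(0)
y = (rng.random(100_000) < 0.0017).astype(int)

# a dummy classifier that always predicts "not fraud"
y_pred = np.zeros_like(y)

accuracy = (y_pred == y).mean()
recall = y_pred[y == 1].mean()  # fraction of frauds caught
print(f"accuracy: {accuracy:.4f}, recall: {recall:.1f}")
```

# High accuracy here is meaningless, which is why AUC, F1, precision and recall are used throughout this notebook.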
# # Quick implementation with logistic regression
# +
# quickly implement a simple model to get intuition
for c in [0.001,0.01,0.1,1,10]:
    log_reg = LogisticRegression(C=c, solver='lbfgs', penalty="l2", max_iter=1500).fit(X_train, y_train)
    print("\nAUC score of regularization with L2 of C=" + str(c) + " is:", roc_auc_score(y_test, log_reg.predict(X_test)))
    print("F1 score of regularization with L2 of C=" + str(c) + " is:", f1_score(y_test, log_reg.predict(X_test)))
# -
precision, recall, thresholds = precision_recall_curve(y_test,log_reg.predict_proba(X_test)[:,1])
pr_curve = plt.plot(precision, recall, label ='Precision-Recall Curve')
#
# <font color='green' size=3>From the above precision-recall curve we see that, with 75% AUC, a well-performing model would get around **0.7 recall and 0.8 precision**<font>
#
# # Random Forest
# Tree models are better for imbalanced datasets.
# Now we try several tree models to have a look.
# +
# separate a validation set from the training set for the grid search below
X_train_t,X_val,y_train_t, y_val = train_test_split(X_train, y_train, test_size = 0.15)
from sklearn.ensemble import RandomForestClassifier
# +
best_score = 0
for d in [10,15,17,19,22]:
    for l in [15,20,25,28,30,32]:
        forest = RandomForestClassifier(n_estimators=30, random_state=0, max_depth=d, max_leaf_nodes=l)
        forest.fit(X_train_t, y_train_t)
        score = f1_score(y_val, forest.predict(X_val))
        if score > best_score:
            best_score = score
            best_parameters = {"d": d, "l": l}
# report the parameters that gave the best score, not the last pair tried
print("Best depth is:", best_parameters["d"])
print("\nBest leaf nodes:", best_parameters["l"])
# print("\nAccuracy on training set: {:.3f}".format(forest.score(X_train_t, y_train_t)))
# print("\nAccuracy on validation set: {:.3f}".format(forest.score(X_val, y_val)))
# print("\nAUC score is", roc_auc_score(y_val,forest.predict(X_val)))
# print("\nF1 score is", f1_score(y_val,forest.predict(X_val)))
# -
# best parameters found:<br>
# depth: 22<br>
# leaf nodes: 32
# +
# train more rounds with best parameter to check if there's better output
forest = RandomForestClassifier(n_estimators=500, random_state=0,max_depth=22,max_leaf_nodes=32)
forest.fit(X_train_t, y_train_t)
print("Accuracy on training set: {:.3f}".format(forest.score(X_train_t, y_train_t)))
print("Accuracy on validation set: {:.3f}".format(forest.score(X_val, y_val)))
print("\nAUC score is", roc_auc_score(y_val,forest.predict(X_val)))
print("F1 score is", f1_score(y_val,forest.predict(X_val)))
# -
forest.feature_importances_
def plot_feature_importances(model):
    n_features = len(X.columns.tolist())
    plt.barh(range(n_features), model.feature_importances_, align='center')
    plt.yticks(np.arange(n_features), X.columns.tolist())
    plt.xlabel("Feature importance")
    plt.ylabel("Feature")
    plt.axis('tight')
plot_feature_importances(forest)
# +
# to export a beautiful tree plot
from sklearn.tree import export_graphviz
import graphviz
export_graphviz(forest.estimators_[0], out_file="forest.dot", class_names=["fraud", "normal"],
feature_names=X.columns.tolist(), impurity=False, filled=True)
with open("forest.dot") as f:
    dot_graph = f.read()
graphviz.Source(dot_graph)
# +
import itertools
def plot_confusion_matrix(cm, classes,
                          normalize=False,
                          title='Confusion matrix',
                          cmap=plt.cm.Blues):
    """
    This function prints and plots the confusion matrix.
    Normalization can be applied by setting `normalize=True`.
    """
    if normalize:
        cm = cm.astype('float') / cm.sum(axis=1)[:, np.newaxis]
        print("Normalized confusion matrix")
    else:
        print('Confusion matrix, without normalization')
    print(cm)
    plt.imshow(cm, interpolation='nearest', cmap=cmap)
    plt.title(title, fontsize=14)
    plt.colorbar()
    tick_marks = np.arange(len(classes))
    plt.xticks(tick_marks, classes, rotation=45)
    plt.yticks(tick_marks, classes)
    fmt = '.2f' if normalize else 'd'
    thresh = cm.max() / 2.
    for i, j in itertools.product(range(cm.shape[0]), range(cm.shape[1])):
        plt.text(j, i, format(cm[i, j], fmt),
                 horizontalalignment="center",
                 color="white" if cm[i, j] > thresh else "black")
    plt.tight_layout()
    plt.ylabel('True label')
    plt.xlabel('Predicted label')
# +
forest_cm = confusion_matrix(y_test, forest.predict(X_test))
labels = ['No Fraud', 'Fraud']
plt.figure()
plot_confusion_matrix(forest_cm, labels, title="Random Forest \n Confusion Matrix", cmap=plt.cm.Reds)
# -
# We can see that the recall score is not so satisfactory.
# # XGboost
# <br>
# Let's try another tree model.
# +
import xgboost as xgb
D_train = xgb.DMatrix(X_train, label=y_train)
D_test = xgb.DMatrix(X_test, label=y_test)
# set ordinary params to see performance quickly
param = {
'eta': 0.18,
'max_depth': 7,
'objective': 'multi:softprob',
'gamma':4,
'num_class': 2}
steps = 60
xgb_model = xgb.train(param, D_train, steps)
preds = xgb_model.predict(D_test)
best_preds = np.asarray([np.argmax(line) for line in preds])
#print("Accuracy on training set: {:.3f}".format(xgb_model.score(X_train, y_train)))
#print("Accuracy on test set: {:.3f}".format(xgb_model.score(X_test, y_test)))
print("\nAUC score is", roc_auc_score(y_test,best_preds))
print("F1 score is", f1_score(y_test,best_preds))
# +
xgboost_cm = confusion_matrix(y_test, best_preds)
labels = ['No Fraud', 'Fraud']
plt.figure()
plot_confusion_matrix(xgboost_cm, labels, title="Xgboost \n Confusion Matrix", cmap=plt.cm.Reds)
# -
# Now we have a better recall than random forest.
from xgboost import plot_importance
plot_importance(xgb_model)
# The feature importance is different from that of random forest.
# # Decision Tree
# <br>
# The fact is, we jumped straight into complicated tree models like random forest and XGBoost. Maybe this dataset requires no complicated models. Let's fit a single decision tree to check the baseline performance of tree models.
from sklearn.tree import DecisionTreeClassifier
tree = DecisionTreeClassifier(random_state=0, max_depth=6, max_leaf_nodes=15, min_samples_leaf=10)
tree.fit(X_train_t, y_train_t)  # fit on the sub-training set so that X_val stays held out
print("Accuracy on training set: {:.3f}".format(tree.score(X_train_t, y_train_t)))
print("Accuracy on validation set: {:.3f}".format(tree.score(X_val, y_val)))
print("\nAUC score is", roc_auc_score(y_val, tree.predict(X_val)))
print("F1 score is", f1_score(y_val, tree.predict(X_val)))
# <font color='green'>**Now we see that actually a simple decision tree could have a good performance.**</font>
plot_feature_importances(tree)
# +
from sklearn.tree import export_graphviz
import graphviz
# for class_names, list "normal" first, then "fraud"
aa = export_graphviz(tree, out_file=None, class_names=["normal", "fraud"],
feature_names=X.columns.tolist(), impurity=False, filled=True)
graph = graphviz.Source(aa)
graph
# +
tree_cm = confusion_matrix(y_test, tree.predict(X_test))
labels = ['No Fraud', 'Fraud']
plt.figure()
plot_confusion_matrix(tree_cm, labels, title="Decision Tree \n Confusion Matrix", cmap=plt.cm.Reds)
# -
# # Resample
# <font color='green'>Because the dataset is extremely imbalanced, the model is pushed to predict as many 0s as possible. To avoid this problem, we may resample the dataset. There are two ways of resampling: upsampling and undersampling.<br>
# By upsampling we mean creating more positive examples for training the model, and vice versa for undersampling.
# **Here, we use the SMOTE technique to upsample the training set with synthetic positive examples.<font>**
from imblearn.over_sampling import SMOTE
from collections import Counter
sm = SMOTE(sampling_strategy='minority')
X_smote, y_smote = sm.fit_sample(X_train, y_train)
Counter(y_smote)
# ## Decision Tree with SM
from sklearn.tree import DecisionTreeClassifier
tree1 = DecisionTreeClassifier(random_state=0,max_depth=6,max_leaf_nodes=15,min_samples_leaf=10)
tree1.fit(X_smote,y_smote)
print("Accuracy on training set: {:.3f}".format(tree1.score(X_smote, y_smote)))
print("Accuracy on validation set: {:.3f}".format(tree1.score(X_val, y_val)))
print("\nAUC score is", roc_auc_score(y_val,tree1.predict(X_val)))
print("F1 score is", f1_score(y_val,tree1.predict(X_val)))
print("\nAUC score is", roc_auc_score(y_test,tree1.predict(X_test)))
print("F1 score is", f1_score(y_test,tree1.predict(X_test)))
# We see that decision tree generates *poor* performance after upsampling.
plot_feature_importances(tree1)
# +
tree_cm1 = confusion_matrix(y_test, tree1.predict(X_test))
labels = ['No Fraud', 'Fraud']
plt.figure()
plot_confusion_matrix(tree_cm1, labels, title="Decision Tree \n Confusion Matrix", cmap=plt.cm.Reds)
# -
# ## Xgboost with SM
#X_test = X_test[X_train.columns]
import scipy
Xsmote = scipy.sparse.csc_matrix(X_smote)
Xtest = scipy.sparse.csc_matrix(X_test)
# +
import xgboost as xgb
test = X_test[X_train.columns]
D_train = xgb.DMatrix(Xsmote, label=y_smote)
D_test = xgb.DMatrix(Xtest, label=y_test)
param = {
'eta': 0.18,
'max_depth': 7,
'objective': 'multi:softprob',
'gamma':4,
'num_class': 2}
steps = 50
xgb_model1 = xgb.train(param, D_train, steps)
preds = xgb_model1.predict(D_test)
best_preds = np.asarray([np.argmax(line) for line in preds])
#print("Accuracy on training set: {:.3f}".score(xgb_model1.score(X_smote, y_smote)))
#print("Accuracy on test set: {:.3f}".format(xgb_model.score(X_test, y_test)))
print("\nAUC score is", roc_auc_score(y_test,best_preds))
print("F1 score is", f1_score(y_test,best_preds))
# -
# +
xgb_model1_cm1 = confusion_matrix(y_test, best_preds)
labels = ['No Fraud', 'Fraud']
plt.figure()
plot_confusion_matrix(xgb_model1_cm1, labels, title="Xgboost SM \n Confusion Matrix", cmap=plt.cm.Reds)
# -
# We get better recall but worse precision now.
# ## Logistic with SM
# **Now we have positive and negative sample with same amount.<br> So let's train a logistic regression model another time to check if there's improvements.**
for c in [0.001,0.01,0.1,1]:
    log_reg_s = LogisticRegression(C=c, solver='lbfgs', penalty="l2", max_iter=1500).fit(X_smote, y_smote)
    print("\nAUC score of regularization with L2 of C=" + str(c) + " is:", roc_auc_score(y_test, log_reg_s.predict(X_test)))
    print("F1 score of regularization with L2 of C=" + str(c) + " is:", f1_score(y_test, log_reg_s.predict(X_test)))
plt.figure()
precision, recall, thresholds = precision_recall_curve(y_test,log_reg_s.predict_proba(X_test)[:,1])
pr_curve = plt.plot(precision, recall, label ='Precision-Recall Curve')
# <font color='green'>We have seen that **the AUC score has improved significantly to around 90%**, although the above PR curve looks similar to before.<br>
# This implies that we can reset the prediction threshold to achieve a better F1 score.
# To explore the threshold we can run the rough search below:<font>
thresholds = [0.99,0.999,0.9999,0.99999,0.999999]
for i in thresholds:
    print('\nconfusion matrix:\n', confusion_matrix(y_test, log_reg_s.predict_proba(X_test)[:,1] > i))
    print('f1 is:', f1_score(y_test, log_reg_s.predict_proba(X_test)[:,1] > i))
    print('recall is:', recall_score(y_test, log_reg_s.predict_proba(X_test)[:,1] > i))
    print('AUC is:', roc_auc_score(y_test, log_reg_s.predict_proba(X_test)[:,1] > i))
# <font color='green'>**From the search above we see that increasing the threshold improves model performance in terms of F1 score.<br>
# The improvement basically comes from increasing precision while hurting recall only a little. In a business context, higher precision here means that every time the model predicts fraud, it is more likely to really be a fraud.<br>
# However, a higher precision comes with a lower recall. In a business context, this means that among all the fraud cases, each one is less likely to be detected by the model.**<br><font>
plt.figure()
precision, recall, thresholds = precision_recall_curve(y_test,log_reg_s.predict_proba(X_test)[:,1]>0.99999)
pr_curve = plt.plot(precision, recall, label ='Precision-Recall Curve')
# **By increasing the threshold we significantly *expand* our PR curve**
# # Stratified datasets
# <br>
# <font color='green'>The datasets might not be distributed evenly, which means that examples with similar features might cluster together and make our model overfit particular kinds of examples.<br>
# To avoid that, we can stratify and shuffle our datasets.<font>
# +
from sklearn.model_selection import KFold, StratifiedKFold
sss = StratifiedKFold(n_splits=5, random_state=None, shuffle=True)
for train_index, test_index in sss.split(X, y):
    print("Train:", train_index, "Test:", test_index)
    original_Xtrain, original_Xtest = X.iloc[train_index], X.iloc[test_index]
    original_ytrain, original_ytest = y.iloc[train_index], y.iloc[test_index]
# Turn into an array
original_Xtrain = original_Xtrain.values
original_Xtest = original_Xtest.values
original_ytrain = original_ytrain.values
original_ytest = original_ytest.values
train_unique_label, train_counts_label = np.unique(original_ytrain, return_counts=True)
test_unique_label, test_counts_label = np.unique(original_ytest, return_counts=True)
print('\nLabel Distributions: ')
print(train_counts_label/ len(original_ytrain))
print(test_counts_label/ len(original_ytest))
print("\nshape of original_Xtrain:", original_Xtrain.shape)
print("shape of original_Xtest:", original_Xtest.shape)
print("shape of original_ytrain:", original_ytrain.shape)
print("shape of original_ytest:", original_ytest.shape)
# -
sm = SMOTE(sampling_strategy='minority')
X_smote_s, y_smote_s = sm.fit_sample(original_Xtrain, original_ytrain)
Counter(y_smote_s)
for c in [0.001,0.01,0.1,1]:
    log_reg_sn = LogisticRegression(C=c, solver='lbfgs', penalty="l2", max_iter=1500).fit(X_smote_s, y_smote_s)
    print("\nAUC score of regularization with L2 of C=" + str(c) + " is:", roc_auc_score(original_ytest, log_reg_sn.predict(original_Xtest)))
    print("F1 score of regularization with L2 of C=" + str(c) + " is:", f1_score(original_ytest, log_reg_sn.predict(original_Xtest)))
# **We can see that the performance is even a little worse than before.**
plt.figure()
precision, recall, thresholds = precision_recall_curve(original_ytest,log_reg_sn.predict_proba(original_Xtest)[:,1])
pr_curve = plt.plot(precision, recall, label ='Precision-Recall Curve')
# # Shallow neural network with Keras
# <br>
# <font color='green'>Finally, we may try a more complicated model such as a neural network. To begin, we use Keras to quickly build a simple network.<font>
# +
n_inputs = X_smote_s.shape[1]
model_regularize = keras.Sequential([
keras.layers.Dense(units=n_inputs, input_shape=(n_inputs,),activation='relu',kernel_regularizer=keras.regularizers.l2(0.001)),
keras.layers.Dense(32, activation='relu',kernel_regularizer=keras.regularizers.l2(0.001)),
keras.layers.Dense(2, activation='softmax')
])
model_regularize.compile(optimizer='adam',
loss='sparse_categorical_crossentropy',
metrics=['accuracy'])
model_regularize.fit(X_smote_s, y_smote_s,validation_split=0.2, batch_size=64, epochs=10, shuffle=True, verbose=2)
# -
nn_prediction = model_regularize.predict(original_Xtest, batch_size=200, verbose=0)
nnclass_prediction = model_regularize.predict_classes(original_Xtest, batch_size=200, verbose=0)
# +
undersample_cm = confusion_matrix(original_ytest, nnclass_prediction)
actual_cm = confusion_matrix(original_ytest, original_ytest)
labels = ['No Fraud', 'Fraud']
plt.figure()
plot_confusion_matrix(undersample_cm, labels, title="Random UnderSample \n Confusion Matrix", cmap=plt.cm.Reds)
# -
# <font color='green'>**From above confusion matrix we can see that this shallow neural network does not outperform logistic regression.<br>
# This implies that our dataset does not include difficult non-linear features for the model to learn.**<font>
f1_score(original_ytest,model_regularize.predict_proba(original_Xtest)[:,1]>0.995)
plt.figure()
precision, recall, thresholds = precision_recall_curve(original_ytest,model_regularize.predict_proba(original_Xtest)[:,1]>0.995)
pr_curve = plt.plot(precision, recall, label ='Precision-Recall Curve')
# The PR curve is much smoother than that of logistic regression.<br>
# From the model output below we see that the neural network is more confident in its predictions.
np.round(model_regularize.predict_proba(original_Xtest),3)
# # Anomaly Detection with MultivariateGaussian
# <br>
# <font color='green'>When positive training examples are rare, or there is no particular pattern that identifies positive examples, supervised learning algorithms are difficult to train. That is where anomaly detection comes in.
# <br><br>
# Anomaly detection generally uses a Gaussian distribution: we find the mean and variance of the normal examples, then use that mean and variance to calculate the probability of each example in a validation set. Finally, we set a probability threshold so that every example whose calculated probability falls below the threshold is predicted to be an anomaly.<font>
df_p = df.loc[df['Class'] == 1]
df_n = df.loc[df['Class'] == 0]
print(df_p.shape)
print(df_n.shape)
# We only use negative (non-fraud) examples to calculate the mean and variance.<br>
# Thus the training set will only contain negative examples.
# +
X_train_anomaly = df_n.iloc[:,:-1]
y_train_anomaly = df_n.iloc[:,-1]
Xn = df_n.iloc[0:1000,:-1]
yn = df_n.iloc[0:1000,-1]
Xp = df_p.iloc[:,:-1]
yp = df_p.iloc[:,-1]
Xtest = pd.concat([Xn,Xp])
ytest = pd.concat([yn,yp])
print(X_train_anomaly.shape)
print(Xtest.shape)  # the anomaly-detection test set assembled above, not the earlier X_test
# +
def estimateGaussian(X):
    """
    This function estimates the parameters of a Gaussian distribution using the data in X
    """
    m = X.shape[0]
    # compute mean
    sum_ = np.sum(X, axis=0)
    mu = 1/m * sum_
    # compute variance
    var = 1/m * np.sum((X - mu)**2, axis=0)
    return mu, var


mu, sigma2 = estimateGaussian(X_train_anomaly.values)
# -
print(mu.shape)
print(sigma2.shape)
# +
def multivariateGaussian(X, mu, sigma2):
    """
    Computes the probability density function of the multivariate gaussian distribution.
    """
    k = len(mu)
    sigma2 = np.diag(sigma2)
    X = X - mu.T
    p = 1/((2*np.pi)**(k/2)*(np.linalg.det(sigma2)**0.5)) * np.exp(-0.5*np.sum(X @ np.linalg.pinv(sigma2) * X, axis=1))
    return p


p = multivariateGaussian(X_train_anomaly.values, mu, sigma2)
# -
p.shape
# +
def selectThreshold(yval, pval):
    """
    Find the best threshold (epsilon) to use for selecting outliers
    """
    best_epi = 0
    best_F1 = 0
    best_prec = 0
    best_rec = 0
    stepsize = (max(pval) - min(pval))/1000
    epi_range = np.arange(pval.min(), pval.max(), stepsize)
    for epi in epi_range:
        predictions = (pval < epi)[:, np.newaxis]
        tp = np.sum(predictions[yval == 1] == 1)
        fp = np.sum(predictions[yval == 0] == 1)
        fn = np.sum(predictions[yval == 1] == 0)
        # compute precision, recall and F1
        prec = tp/(tp+fp)
        rec = tp/(tp+fn)
        F1 = (2*prec*rec)/(prec+rec)
        if F1 > best_F1:
            best_F1 = F1
            best_epi = epi
            # keep the precision/recall that correspond to the best F1,
            # not the values from the last epsilon tried
            best_prec = prec
            best_rec = rec
    return best_epi, best_F1, best_prec, best_rec


pval = multivariateGaussian(Xtest.values, mu, sigma2)
epsilon, F1, prec, rec = selectThreshold(ytest.values, pval)
print("Best epsilon found using cross-validation:",epsilon)
print("Best F1 on Cross Validation Set:",F1)
print("Recall score:",rec)
print("Precision score:",prec)
print("Outliers found:",sum(pval<epsilon))
# -
# <font color='green'>**It turns out that anomaly detection achieves a very good recall, so that all fraud cases are detected, and the F1 of 71% is fairly good**<font>
# # Conclusion
# <br>
# In this credit card fraud dataset, where only **0.17%** of examples are positive, we have used typical supervised learning algorithms like logistic regression and a deep learning neural network to detect credit card frauds.
# <br>
# It turns out that simple tree models could have quite a good performance with F1 score of 86%. <br>
# <br>
# We also tried upsampling the positives to make the dataset more balanced. However, model performance after upsampling was no better than before. Then we tried a shallow neural network: the recall improved while the precision deteriorated.<br>
# <br>
# Finally, with anomaly detection, we easily achieved a recall of 100% while the F1 is 71%. Anomaly detection is well suited to situations where positive training examples are scarce and there is no particular pattern to the positive examples.
| creditcard_fraud.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python [default]
# language: python
# name: python3
# ---
# # Now You Code 3: List of numbers
#
# This example will demonstrate how to parse an input string into
# a list of numbers. You will then demonstrate that it is a list of
# numbers by calculating the count, sum, min, max, and average
# of the numbers using Python's len(), sum(), min(), and max() functions.
#
# Example Run #1:
# ```
# Enter numbers separated by a space: 10 5 0
#
# Numbers: [10.0, 5.0, 0.0]
# Count: 3
# Min: 0.0
# Max: 10.0
# Sum: 15.0
# Average: 5.0
# ```
#
# As usual, devise your plan, write the program, THEN figure out how to handle bad input in Example run #2.
#
# HINT: Use split() to make a list from the input, use a loop to convert
# each item in the list to a float
#
# Start out your program by writing your TODO list of steps
# you'll need to solve the problem!
# ## Step 1: Problem Analysis
#
# Inputs: a string of numbers separated by a space eg. `10 5 1`
#
# Outputs: the count, sum, min, max and average of the numbers
#
# Algorithm (Steps in Program):
#
# ```
# TODO: Write algorithm here
# ```
#
#
# +
# Step 2: Write solution
numbers = [] # start with an empty list
# -
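# One possible solution sketch (the `parse_numbers` helper and variable names are my own suggestions, not an official answer key), which also handles the bad input asked about in Step 3:

```python
def parse_numbers(text):
    """Convert a space-separated string into a list of floats.

    Prints an error and returns None if any item is not a number.
    """
    numbers = []
    for item in text.split():
        try:
            numbers.append(float(item))
        except ValueError:
            print(f"Error: {item} is not a number!")
            return None
    return numbers


# in the exercise this string would come from:
# input("Enter numbers separated by a space: ")
numbers = parse_numbers("10 5 0")
if numbers:
    print("Numbers:", numbers)
    print("Count:", len(numbers))
    print("Min:", min(numbers))
    print("Max:", max(numbers))
    print("Sum:", sum(numbers))
    print("Average:", sum(numbers) / len(numbers))
```

# Pulling the parsing into `parse_numbers` is also one answer to question 2: the function's input is the raw string, and its output is the list of floats (or None on bad input).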
# ## Step 3: Questions
#
# 1. Re-write the solution so that it handles bad input:
#
# Example Run #2: (Handles bad input)
# ```
# Enter numbers separated by a space: 5 mike 3
#
# Error: mike is not a number!
# ```
#
# 2. Which part of the solution could be refactored into a function? What would be the input(s) and output of that function?
#
# ## Reminder of Evaluation Criteria
#
# 1. What the problem attempted (analysis, code, and answered questions) ?
# 2. What the problem analysis thought out? (does the program match the plan?)
# 3. Does the code execute without syntax error?
# 4. Does the code solve the intended problem?
# 5. Is the code well written? (easy to understand, modular, and self-documenting, handles errors)
| content/lessons/09/Now-You-Code/NYC3-Numbers.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: 'Python 3.7.3 64-bit (''chem-eng-solver-AIlJxP1x-py3.7'': poetry)'
# name: python3
# ---
# # Introduction
#
# This notebook demonstrates methods and usage for the `fluids` module.
#
# # Examples
#
# ## Bernoulli's Equation
#
# ### Problem
#
# Suppose that water (density = 998.87 kg/m3) is held in a tank (v = 0 ft/s) at atmospheric pressure (14.6959 psia) at a height that is 10 yards above ground-level.
# Water is allowed to flow from the tank through a pipe that is 3.00123 m from ground-level.
# What is the velocity of the fluid in the pipe?
#
# ### Solution
#
# The method `Fluids.bernoulli` can numerically solve for the velocity in the pipe.
# Define the `initial` and `final` values for velocity, height, pressure, and density as *v, h, P, rho*, respectively.
# We do not know *v* final, so mark that value as `"unknown"`.
# To demonstrate the unit conversion capabilities of the module, this example enters values for *P* final in pascals (standard atmospheric pressure is 101325 Pa) and *rho* final in lb/ft^3 (62.423 lb/ft^3).
# Note that the units parser accepts either `^` or `**` as the symbol denoting exponentiation.
#
# In code, the solution is shown as below:
# +
from chem_eng_solver.fluids import Fluids
initial = {
"v": "0.0000 ft/s",
"h": "10.000 yard",
"P": "14.6959 psi",
"rho": "998.87 kg/m**3",
}
final = {
"v": "unknown",
"h": "3.00123 m",
"P": "101325 pascal",
"rho": "62.423 lb/ft^3",
}
fl = Fluids(initial, final, units_out="m/s")
fl.bernoulli()
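# As a rough hand check independent of the module: the tank and pipe are both at (essentially) atmospheric pressure, 14.6959 psi being roughly 101325 Pa, and the tank velocity is zero, so Bernoulli's equation reduces to Torricelli's law, $v = \sqrt{2 g \Delta h}$:

```python
import math

g = 9.80665              # standard gravity, m/s^2
h_initial = 10 * 0.9144  # 10 yards converted to metres
h_final = 3.00123        # metres

v = math.sqrt(2 * g * (h_initial - h_final))
print(f"v = {v:.3f} m/s")
```

# The solver's answer should land close to this value; any small difference comes from the tiny mismatch between 14.6959 psi and 101325 Pa.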
| examples/fluids.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import pandas as pd
import numpy as np
from collections import Counter
import random
import seaborn as sns
import matplotlib.pyplot as plt
from scipy.sparse import coo_matrix
import pickle
import surprise
from surprise import NormalPredictor
from surprise import Dataset
from surprise import Reader
from surprise.model_selection import cross_validate,train_test_split,KFold
from surprise import SVD,SVDpp, NMF,SlopeOne,CoClustering
from surprise import accuracy
from collections import defaultdict
my_seed = 789
random.seed(my_seed)
# -
interaction = pd.read_csv("src/data/kaggle_food_data/RAW_interactions.csv")
recipes = pd.read_pickle("src/data/kaggle_food_data/raw_recipes.pkl")
recipes.info()
interaction.head()
recipes.head()
def make_df_from_count(serie, name):
    counts = dict(Counter(serie))
    return pd.DataFrame.from_dict(counts, orient='index').reset_index().rename(columns={'index': name, 0: f'{name}_count'})
recipe_df = make_df_from_count(interaction.recipe_id,'recipe_id')
print(len(recipe_df[recipe_df['recipe_id_count'] <= 1])/len(recipe_df))
print(len(recipe_df[recipe_df['recipe_id_count'] <= 10])/len(recipe_df))
# around 40% of recipes have one review or fewer
#
# around 92% of recipes have ten reviews or fewer
user_df = make_df_from_count(interaction.user_id,'user_id')
user_df['user_id_count'].hist(bins = 1000)
plt.title("number of reviews made by users".title(), fontsize = 14)
ax = plt.gca()
ax.set_xlim((0,50))
plt.xlabel("Number of reviews made".title())
plt.ylabel("Freq");
print(f"percentage of users with less than or equal to 10 reviews: {len(user_df[user_df.user_id_count <=10])/len(user_df)}")
merged_df = interaction.merge(recipe_df,how="left", left_on="recipe_id",right_on = "recipe_id")
merged_df
interaction.head()
# +
# Surprise
# scale rating to 0 to 1
df_touse = interaction.copy()
df_touse["rating"] = df_touse["rating"].apply(lambda x: x/5)
# create reader for the dataset
scaled_reader = Reader(rating_scale = (0,1))
scaled_surprise_data = Dataset.load_from_df(df_touse["user_id recipe_id rating".split(" ")], scaled_reader)
reader = Reader(rating_scale= (0,5))
surprise_data = Dataset.load_from_df(interaction[["user_id","recipe_id","rating"]], reader)
# split into train test
#scaled_train, scaled_test = train_test_split(scaled_surprise_data, test_size = .25, random_state = 789)
#train, test = train_test_split(surprise_data, test_size = .25, random_state = 789)
# -
param_grid = {"n_factors":[100,150,200],
"n_epochs":[20,25,30],
"lr_all":[0.005,0.0001,0.0005],
"reg_all":[0.01,0.015,0.02,0.025],
"random_state":[789],
"verbose":[True]}
gs = surprise.model_selection.GridSearchCV(SVD,
param_grid = param_grid,
measures = ["mae","rmse"],
return_train_measures = True,
n_jobs= 3,cv=3,
joblib_verbose = 4)
gs.fit(surprise_data)
results_df = pd.DataFrame(gs.cv_results)
results_df.sort_values("mean_test_mae")
# best params with n_factors: 100, n_epochs: 20, lr_all: 0.005, reg_all: 0.025, random_state: 789
gs.best_params
alg_touse = SVD(**gs.best_params["mae"])  # unpack the best-parameter dict into keyword arguments
full_trainset = surprise_data.build_full_trainset()
alg = SVD(n_factors=100, n_epochs=20, lr_all= 0.005, reg_all= 0.025, random_state= 789,verbose=True)
alg.fit(full_trainset)
# alg1/alg2/alg3 were presumably defined in earlier cells; based on the comments
# below they compare SVD and CoClustering (plus one more model, assumed SlopeOne here).
alg1 = SVD(n_factors=100, n_epochs=20, lr_all=0.005, reg_all=0.025, random_state=789)
alg2 = CoClustering(random_state=789)
alg3 = SlopeOne()
cross_validate(alg3, surprise_data, measures = ["RMSE","MAE", "MSE","FCP"], cv=5, verbose = True)  # slopeone
alg.predict(214,123123).est/5
cross_validate(alg2, surprise_data, measures = ["RMSE","MAE", "MSE","FCP"], cv=5, verbose = True)  # cocluster
cross_validate(alg1, surprise_data, measures = ["RMSE","MAE", "MSE","FCP"], cv=5, verbose = True)  # svd
print(alg1.predict(245,1245))
print(alg2.predict(245,1245))
print(alg3.predict(245,1245))
# Note: Surprise algorithms have no top_recommendations() method; top-N lists are
# normally built by ranking predictions over an anti-testset.
recipes.head()
plt.hist(interaction["rating"]);
inter_copy = interaction.copy()
inter_copy.head()
temp_data = {"user_id":"special_user","recipe_id": 12374, "date":None, "rating":3, "review":None}
temp_user_data_df = pd.DataFrame(temp_data, index = [0])
pd.concat([inter_copy, temp_user_data_df], ignore_index=True)  # DataFrame.append is deprecated in newer pandas
| Surprise_collaborative_filtering.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python (ox)
# language: python
# name: ox
# ---
# # Unify and clean-up intersections of divided roads
#
# Divided roads are represented by separate centerline edges. The intersection of two divided roads thus creates 4 nodes, representing where each edge intersects a perpendicular edge. These 4 nodes represent a single intersection in the real world. Roundabouts similarly create a cluster of intersections where each edge connects to the roundabout. This function cleans up these clusters by buffering their points to an arbitrary distance, merging overlapping buffers, and returning a GeoSeries of their centroids. For best results, the tolerance argument should be adjusted to approximately match street design standards in the specific street network.
#
# - [Overview of OSMnx](http://geoffboeing.com/2016/11/osmnx-python-street-networks/)
# - [GitHub repo](https://github.com/gboeing/osmnx)
# - [Examples, demos, tutorials](https://github.com/gboeing/osmnx-examples)
# - [Documentation](https://osmnx.readthedocs.io/en/stable/)
# - [Journal article/citation](http://geoffboeing.com/publications/osmnx-complex-street-networks/)
import matplotlib.pyplot as plt
import numpy as np
import osmnx as ox
ox.config(use_cache=True, log_console=True)
# %matplotlib inline
ox.__version__
# get a street network and plot it with all edge intersections
address = '2700 Shattuck Ave, Berkeley, CA'
G = ox.graph_from_address(address, network_type='drive', distance=750)
G_proj = ox.project_graph(G)
fig, ax = ox.plot_graph(G_proj, fig_height=10, node_color='orange', node_size=30,
node_zorder=2, node_edgecolor='k')
# ### Clean up the intersections
#
# We'll specify that any nodes within 15 meters of each other in this network are part of the same intersection. Adjust this tolerance based on the street design standards in the community you are examining, and use a projected graph to work in meaningful units like meters. We'll also specify that we do not want dead-ends returned in our list of cleaned intersections. Then we extract their xy coordinates and plot them to show how the clean intersections below compare to the topological edge intersections above.
# clean up the intersections and extract their xy coords
intersections = ox.clean_intersections(G_proj, tolerance=15, dead_ends=False)
points = np.array([point.xy for point in intersections])
# plot the cleaned-up intersections
fig, ax = ox.plot_graph(G_proj, fig_height=10, show=False, close=False, node_alpha=0)
ax.scatter(x=points[:,0], y=points[:,1], zorder=2, color='#66ccff', edgecolors='k')
plt.show()
# Note that these cleaned up intersections give us more accurate intersection counts and densities, but do not alter or integrate with the network's topology.
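# The buffer-merge-centroid procedure described above can be approximated, for intuition only, in plain Python: treat nodes closer than the tolerance as one real-world intersection and average their coordinates. The `cluster_centroids` helper below is a hypothetical sketch (single-linkage clustering rather than true polygon buffering), not part of the OSMnx API.

```python
import math

def cluster_centroids(points, tolerance):
    """Merge points closer than `tolerance` into clusters (single-linkage
    via union-find) and return one centroid per cluster -- a simplified
    stand-in for buffering points and merging overlapping buffers."""
    parent = list(range(len(points)))

    def find(i):
        # follow parent pointers to the root, compressing the path as we go
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    def union(i, j):
        parent[find(i)] = find(j)

    # link every pair of points within the tolerance
    for i, (xi, yi) in enumerate(points):
        for j in range(i + 1, len(points)):
            xj, yj = points[j]
            if math.hypot(xi - xj, yi - yj) <= tolerance:
                union(i, j)

    # group points by their cluster root
    clusters = {}
    for i, p in enumerate(points):
        clusters.setdefault(find(i), []).append(p)

    # centroid = mean x and mean y of each cluster
    return [(sum(x for x, _ in c) / len(c), sum(y for _, y in c) / len(c))
            for c in clusters.values()]

# four nodes from two divided-road centerlines collapse into one intersection;
# the far-away node stays its own intersection
nodes = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0), (10.0, 10.0), (500.0, 500.0)]
print(cluster_centroids(nodes, tolerance=15))
```

# This sketch ignores the true geometric unions that OSMnx's `clean_intersections` performs; it only illustrates why a cluster of nearby nodes reduces to a single centroid.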
| notebooks/14-clean-intersection-node-clusters.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python
# language: python
# name: conda-env-python-py
# ---
# + [markdown] button=false deletable=true new_sheet=false run_control={"read_only": false}
# <a href="https://cognitiveclass.ai"><img src = "https://ibm.box.com/shared/static/9gegpsmnsoo25ikkbl4qzlvlyjbgxs5x.png" width = 400> </a>
#
# <h1 align=center><font size = 5>From Problem to Approach</font></h1>
# + [markdown] button=false deletable=true new_sheet=false run_control={"read_only": false}
# ## Introduction
#
# The aim of these labs is to reinforce the concepts that we discuss in each module's videos. These labs will revolve around the use case of food recipes, and together, we will walk through the process that data scientists usually follow when trying to solve a problem. Let's get started!
#
# In this lab, we will start learning about the data science methodology, and focus on the **Business Understanding** and the **Analytic Approach** stages.
#
# ------------
# + [markdown] button=false deletable=true new_sheet=false run_control={"read_only": false}
# ## Table of Contents
#
# <div class="alert alert-block alert-info" style="margin-top: 20px">
#
# 1. [Business Understanding](#0)<br>
# 2. [Analytic Approach](#2) <br>
# </div>
# <hr>
# + [markdown] button=false deletable=true new_sheet=false run_control={"read_only": false}
# # Business Understanding <a id="0"></a>
# + [markdown] button=false deletable=true new_sheet=false run_control={"read_only": false}
# This is the **Data Science Methodology**, a flowchart that begins with business understanding.
# + [markdown] button=false deletable=true new_sheet=false run_control={"read_only": false}
# <img src="https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/DS0103EN/labs/images/lab1_fig1_flowchart_business_understanding.png" width=500>
# + [markdown] button=false deletable=true new_sheet=false run_control={"read_only": false}
# #### Why is the business understanding stage important?
# + button=false deletable=true new_sheet=false run_control={"read_only": false} active=""
# Your Answer:
# The business understanding stage is important because this is where a shared understanding of the business context is established. It is fundamental to identify the stakeholders and their objectives.
#
# + [markdown] button=false deletable=true new_sheet=false run_control={"read_only": false}
# Double-click __here__ for the solution.
# <!-- The correct answer is:
# It helps clarify the goal of the entity asking the question.
# -->
# + [markdown] button=false deletable=true new_sheet=false run_control={"read_only": false}
# #### Looking at this diagram, we immediately spot two outstanding features of the data science methodology.
# + [markdown] button=false deletable=true new_sheet=false run_control={"read_only": false}
# <img src = "https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/DS0103EN/labs/images/lab1_fig2_datascience_methodology_flowchart.png" width = 500>
# + [markdown] button=false deletable=true new_sheet=false run_control={"read_only": false}
# #### What are they?
# + button=false deletable=true new_sheet=false run_control={"read_only": false} active=""
# Your Answer:
# 1. There is a methodological aspect: a step-by-step process to be followed
# 2. The process has embedded iterative cycles
# + [markdown] button=false deletable=true new_sheet=false run_control={"read_only": false}
# Double-click __here__ for the solution.
# <!--The correct answer is:
# \\ 1. The flowchart is highly iterative.
# \\ 2. The flowchart never ends.
# -->
# + [markdown] button=false deletable=true new_sheet=false run_control={"read_only": false}
# #### Now let's illustrate the data science methodology with a case study.
# + [markdown] button=false deletable=true new_sheet=false run_control={"read_only": false}
# Say, we are interested in automating the process of figuring out the cuisine of a given dish or recipe. Let's apply the business understanding stage to this problem.
# + [markdown] button=false deletable=true new_sheet=false run_control={"read_only": false}
# #### Q. Can we predict the cuisine of a given dish using the name of the dish only?
# + button=false deletable=true new_sheet=false run_control={"read_only": false} active=""
# Your Answer:
# Based on the name of the dish we can predict the "cuisine style" (like "French cuisine", "Chinese cuisine" or "other"), or we can predict some of the steps and ingredients. Which of the two do we want?
# What purpose do we want to give this information: use it as a baseline for matching recipes? use it to cook? share plain information with others?
# Who will use this information: cooks? laypeople? others?
#
# + [markdown] button=false deletable=true new_sheet=false run_control={"read_only": false}
# Double-click __here__ for the solution.
# <!--The correct answer is:
# No.
# -->
# + [markdown] button=false deletable=true new_sheet=false run_control={"read_only": false}
# #### Q. For example, the following dish names were taken from the menu of a local restaurant in Toronto, Ontario in Canada.
#
# #### 1. Beast
# #### 2. 2 PM
# #### 3. 4 Minute
# + [markdown] button=false deletable=true new_sheet=false run_control={"read_only": false}
# #### Are you able to tell the cuisine of these dishes?
# + button=false deletable=true new_sheet=false run_control={"read_only": false} active=""
# Your Answer:
# No.
#
# + [markdown] button=false deletable=true new_sheet=false run_control={"read_only": false}
# Double-click __here__ for the solution.
# <!--The correct answer is:
# The cuisine is <strong>Japanese</strong>. Here are links to the images of the dishes:
# -->
#
# <!--
# Beast: https://ibm.box.com/shared/static/5e7duvewfl5bk4317sna5skvdhrehro2.png
# -->
#
# <!--
# 2PM: https://ibm.box.com/shared/static/d9xuzqm8cq76zxxcc0f9gdts4iksipyk.png
# -->
#
# <!--
# 4 Minute: https://ibm.box.com/shared/static/f1fwvvwn4u8rx8tghep6zyj5pi6a8v8k.png
# -->
#
# <!--
# Photographs by Avlxyz: https://commons.wikimedia.org/wiki/Category:Photographs_by_Avlxyz
# -->
# + [markdown] button=false deletable=true new_sheet=false run_control={"read_only": false}
# #### Q. What about by appearance only? Yes or No.
# + button=false deletable=true new_sheet=false run_control={"read_only": false} active=""
# Your Answer:
#
# No
# + [markdown] button=false deletable=true new_sheet=false run_control={"read_only": false}
# Double-click __here__ for the solution.
# <!--The correct answer is:
# No, especially when it comes to countries in close geographical proximity such as Scandinavian countries, or Asian countries.
# -->
# + [markdown] button=false deletable=true new_sheet=false run_control={"read_only": false}
# At this point, we realize that automating the process of determining the cuisine of a given dish is not a straightforward problem as we need to come up with a way that is very robust to the many cuisines and their variations.
# + [markdown] button=false deletable=true new_sheet=false run_control={"read_only": false}
# #### Q. What about determining the cuisine of a dish based on its ingredients?
# + button=false deletable=true new_sheet=false run_control={"read_only": false} active=""
# Your Answer:
# Some ingredients are specific to each cuisine, so potentially we can identify the cuisine based on the ingredients.
#
# + [markdown] button=false deletable=true new_sheet=false run_control={"read_only": false}
# Double-click __here__ for the solution.
# <!--The correct answer is:
# Potentially yes, as there are specific ingredients unique to each cuisine.
# -->
# + [markdown] button=false deletable=true new_sheet=false run_control={"read_only": false}
# As you guessed, determining the cuisine of a given dish based on its ingredients does seem like a viable solution, as some ingredients are unique to particular cuisines. For example:
# + [markdown] button=false deletable=true new_sheet=false run_control={"read_only": false}
# * When we talk about **American** cuisines, the first ingredient that comes to one's mind (or at least to my mind =D) is beef or turkey.
#
# * When we talk about **British** cuisines, the first ingredient that comes to one's mind is haddock or mint sauce.
#
# * When we talk about **Canadian** cuisines, the first ingredient that comes to one's mind is bacon or poutine.
#
# * When we talk about **French** cuisines, the first ingredient that comes to one's mind is bread or butter.
#
# * When we talk about **Italian** cuisines, the first ingredient that comes to one's mind is tomato or ricotta.
#
# * When we talk about **Japanese** cuisines, the first ingredient that comes to one's mind is seaweed or soy sauce.
#
# * When we talk about **Chinese** cuisines, the first ingredient that comes to one's mind is ginger or garlic.
#
# * When we talk about **Indian** cuisines, the first ingredient that comes to one's mind is masala or chilies.
# + [markdown] button=false deletable=true new_sheet=false run_control={"read_only": false}
# #### Accordingly, can you determine the cuisine of the dish associated with the following list of ingredients?
# + [markdown] button=false deletable=true new_sheet=false run_control={"read_only": false}
# <img src="https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/DS0103EN/labs/images/lab1_fig3_ingredients.png" width=600>
# + button=false deletable=true new_sheet=false run_control={"read_only": false} active=""
# Your Answer:
#
# Seems like a Chinese dish.
# + [markdown] button=false deletable=true new_sheet=false run_control={"read_only": false}
# Double-click __here__ for the solution.
# <!--The correct answer is:
# Japanese since the recipe is most likely that of a sushi roll.
# -->
# + [markdown] button=false deletable=true new_sheet=false run_control={"read_only": false}
# # Analytic Approach <a id="2"></a>
# + [markdown] button=false deletable=true new_sheet=false run_control={"read_only": false}
# <img src="https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/DS0103EN/labs/images/lab1_fig4_flowchart_analytic_approach.png" width=500>
# + [markdown] button=false deletable=true new_sheet=false run_control={"read_only": false}
# #### So why are we interested in data science?
# + [markdown] button=false deletable=true new_sheet=false run_control={"read_only": false}
# Once the business problem has been clearly stated, the data scientist can define the analytic approach to solve the problem. This step entails expressing the problem in the context of statistical and machine-learning techniques, so that the entity or stakeholders with the problem can identify the most suitable techniques for the desired outcome.
# + [markdown] button=false deletable=true new_sheet=false run_control={"read_only": false}
# #### Why is the analytic approach stage important?
# + button=false deletable=true new_sheet=false run_control={"read_only": false} active=""
# Your Answer:
# Because this is the stage where the type of model to be built is defined.
#
# + [markdown] button=false deletable=true new_sheet=false run_control={"read_only": false}
# Double-click __here__ for the solution.
# <!--The correct answer is:
# Because it helps identify what type of patterns will be needed to address the question most effectively.
# -->
# + [markdown] button=false deletable=true new_sheet=false run_control={"read_only": false}
# #### Let's explore a machine learning algorithm, decision trees, and see if it is the right technique to automate the process of identifying the cuisine of a given dish or recipe while simultaneously providing us with some insight on why a given recipe is believed to belong to a certain type of cuisine.
# + [markdown] button=false deletable=true new_sheet=false run_control={"read_only": false}
# This is a decision tree that a naive person might create manually. Starting at the top with all the recipes for all the cuisines in the world, if a recipe contains **rice**, then this decision tree would classify it as a **Japanese** cuisine. Otherwise, it would be classified as not a **Japanese** cuisine.
# + [markdown] button=false deletable=true new_sheet=false run_control={"read_only": false}
# <img src="https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/DS0103EN/labs/images/lab1_fig5_decision_tree_3.png" width=500>
# + [markdown] button=false deletable=true new_sheet=false run_control={"read_only": false}
# #### Is this a good decision tree? Yes or No, and why?
# + button=false deletable=true new_sheet=false run_control={"read_only": false} active=""
# Your Answer:
# While it is simplistic, taking into account only one ingredient and one cuisine, it is a valid example of a decision tree.
#
# + [markdown] button=false deletable=true new_sheet=false run_control={"read_only": false}
# Double-click __here__ for the solution.
# <!--The correct answer is:
# No, because a plethora of dishes from other cuisines contain rice. Therefore, using rice as the ingredient in the Decision node to split on is not a good choice.
# -->
# + [markdown] button=false deletable=true new_sheet=false run_control={"read_only": false}
# #### In order to build a very powerful decision tree for the recipe case study, let's take some time to learn more about decision trees.
# + [markdown] button=false deletable=true new_sheet=false run_control={"read_only": false}
# * Decision trees are built using recursive partitioning to classify the data.
# * When partitioning the data, decision trees use the most predictive feature (ingredient in this case) to split the data.
# * **Predictiveness** is based on decrease in entropy, i.e., gain in information or reduction in *impurity*.
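# To make the "decrease in entropy" idea concrete, here is a minimal sketch (not part of the lab; the toy data and helper names are illustrative assumptions) that scores a candidate splitting ingredient by its information gain:

```python
from collections import Counter
import math

def entropy(labels):
    """Shannon entropy of a list of class labels, in bits."""
    total = len(labels)
    return -sum((n / total) * math.log2(n / total)
                for n in Counter(labels).values())

def information_gain(labels, has_feature):
    """Entropy decrease from splitting `labels` on a boolean feature."""
    yes = [l for l, f in zip(labels, has_feature) if f]
    no = [l for l, f in zip(labels, has_feature) if not f]
    weighted = (len(yes) * entropy(yes) + len(no) * entropy(no)) / len(labels)
    return entropy(labels) - weighted

# toy recipes: does splitting on "rice" or on "wasabi" better separate Japanese?
cuisines = ["japanese", "japanese", "chinese", "indian"]
has_rice = [True, True, True, True]      # rice appears in every recipe...
has_wasabi = [True, True, False, False]  # ...wasabi only in the Japanese ones

print(information_gain(cuisines, has_rice))    # rice tells us nothing
print(information_gain(cuisines, has_wasabi))  # wasabi is far more predictive
```

# A decision tree learner would therefore prefer wasabi over rice as the splitting ingredient, which is exactly the intuition behind the improved tree shown later.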
# + [markdown] button=false deletable=true new_sheet=false run_control={"read_only": false}
# #### Suppose that our data is comprised of green triangles and red circles.
# + [markdown] button=false deletable=true new_sheet=false run_control={"read_only": false}
# The following decision tree would be considered the optimal model for classifying the data into a node for green triangles and a node for red circles.
# + [markdown] button=false deletable=true new_sheet=false run_control={"read_only": false}
# <img src="https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/DS0103EN/labs/images/lab1_fig6_decision_tree_5.png" width=400>
# + [markdown] button=false deletable=true new_sheet=false run_control={"read_only": false}
# Each of the leaf nodes is completely pure – that is, each leaf node only contains datapoints that belong to the same class.
# + [markdown] button=false deletable=true new_sheet=false run_control={"read_only": false}
# On the other hand, the following decision tree is an example of the worst-case scenario that the model could output.
# + [markdown] button=false deletable=true new_sheet=false run_control={"read_only": false}
# <img src="https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/DS0103EN/labs/images/lab1_fig7_decision_tree_2.png" width=500>
# + [markdown] button=false deletable=true new_sheet=false run_control={"read_only": false}
# Each leaf node contains datapoints belonging to both classes, resulting in many datapoints ultimately being misclassified.
# + [markdown] button=false deletable=true new_sheet=false run_control={"read_only": false}
# #### A tree stops growing at a node when:
# * The data at the node is pure or nearly pure.
# * No remaining variables on which to further subset the data.
# * The tree has grown to a preselected size limit.
# + [markdown] button=false deletable=true new_sheet=false run_control={"read_only": false}
# #### Here are some characteristics of decision trees:
# + [markdown] button=false deletable=true new_sheet=false run_control={"read_only": false}
# <img src="https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/DS0103EN/labs/images/lab1_fig8_decision_trees_table.png" width=800>
# + [markdown] button=false deletable=true new_sheet=false run_control={"read_only": false}
# Now let's put what we learned about decision trees to use. Let's try and build a much better version of the decision tree for our recipe problem.
# + [markdown] button=false deletable=true new_sheet=false run_control={"read_only": false}
# <img src="https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/DS0103EN/labs/images/lab1_fig9_decision_tree_4.png" width = 500>
# + [markdown] button=false deletable=true new_sheet=false run_control={"read_only": false}
# I hope you agree that the above decision tree is a much better version than the previous one. Although we are still using **Rice** as the ingredient in the first *decision node*, recipes get divided into **Asian Food** and **Non-Asian Food**. **Asian Food** is then further divided into **Japanese** and **Not Japanese** based on the **Wasabi** ingredient. This process of splitting *leaf nodes* continues until each *leaf node* is pure, i.e., containing recipes belonging to only one cuisine.
# + [markdown] button=false deletable=true new_sheet=false run_control={"read_only": false}
# Accordingly, decision trees are a suitable technique or algorithm for our recipe case study.
# + [markdown] button=false deletable=true new_sheet=false run_control={"read_only": false}
# ### Thank you for completing this lab!
#
# This notebook was created by [<NAME>](https://www.linkedin.com/in/aklson/). I hope you found this lab session interesting. Feel free to contact me if you have any questions!
# + [markdown] button=false deletable=true new_sheet=false run_control={"read_only": false}
# This notebook is part of a course on **Coursera** called *Data Science Methodology*. If you accessed this notebook outside the course, you can take this course, online by clicking [here](http://cocl.us/DS0103EN_Coursera_LAB1).
# + [markdown] button=false deletable=true new_sheet=false run_control={"read_only": false}
# <hr>
#
# Copyright © 2019 [Cognitive Class](https://cognitiveclass.ai/?utm_source=bducopyrightlink&utm_medium=dswb&utm_campaign=bdu). This notebook and its source code are released under the terms of the [MIT License](https://bigdatauniversity.com/mit-license/).
| labs/DS0103EN/DS0103EN-1-1-1-From-Problem-to-Approach-v2.0.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
from astropy.table import Table
from astropy.io import ascii, fits
import astropy.coordinates as coord
import astropy.units as u
import numpy as np
import matplotlib.pyplot as pl
pl.style.use('apw-notebook')
# %matplotlib inline
import pandas as pd
kepler = Table(np.genfromtxt("../data/kepler-eb.csv", delimiter=",", names=True, dtype=None, skip_header=7))
apogee = Table(fits.getdata("/Users/adrian/Data/APOGEE_DR12/allStar-v603.fits", 1))
kepler_c = coord.Galactic(l=kepler['GLon']*u.degree, b=kepler['GLat']*u.degree)
apogee_c = coord.Galactic(l=apogee['GLON']*u.degree, b=apogee['GLAT']*u.degree)
idx,sep,_ = coord.match_coordinates_sky(kepler_c, apogee_c)
len(apogee)
match = kepler[sep < 3.*u.arcsecond]
match
period_idx = (match['period'] < 250) & (match['period'] > 75)
period_idx.sum()
match[period_idx]
match[match['KIC'] == 9246715]
coord.Galactic(l=80.7927*u.degree, b=7.6115*u.degree).transform_to(coord.ICRS)
# ---
# Stars in APOGEE with similar T_eff
similar = apogee[((apogee['TEFF'] > 4690) & (apogee['TEFF'] < 4710))]
similar = similar[(similar['LOGG'] > 2.4) & (similar['LOGG'] < 2.6)]
pl.hist(similar['LOGG'], bins=32);
similar[0]['APOGEE_ID']
| notebooks/Kepler APOGEE match.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## Description
#
# Contact: <NAME>, University of Oslo [<EMAIL>](<EMAIL>)
#
#
# Code for analyzing and plotting RCMIP data for AR6 IPCC.
#
#
# OBS: Some of the code is based on or copied directly with permission from [https://gitlab.com/rcmip/rcmip](https://gitlab.com/rcmip/rcmip)
# <NAME> ([<EMAIL>](<EMAIL>)).
#
# ## Download input data
# !wget https://zenodo.org/record/3593570/files/rcmip-phase-1-submission.tar.gz
# !cd ar6_ch6_rcmipfigs/data_in; tar zxf ../../rcmip-phase-1-submission.tar.gz; mv rcmip-tmp/data/* .;
# +
# just generating the list here, nothing to see...
from IPython.display import Markdown
from glob import glob
Markdown("\n".join([
"- <a href='{1}' target='_blank'>{0}</a>".format(x[9:-6].replace("_", " "), x)
for x in sorted(glob("ar6_ch6_rcmipfigs/notebooks/*.ipynb"))
]))
# -
| index.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import math
print('pi is', math.pi)
print('cos(pi) is', math.cos(math.pi))
# -
# help(math)
# +
from math import cos, pi
print('cos(pi) is', cos(pi))
# +
import math as m
print('cos(pi) is', m.cos(m.pi))
# -
x = 2
mysqrt = math.pow(x, 0.5)
print(mysqrt)
# +
bases = 'ACTTGCTTGAC'
from random import randrange
random_index = randrange(len(bases))
print(random_index, bases[random_index])
# +
from random import sample
print(sample(bases, 1)[0])
# -
bases="ACTTGCTTGAC"
import math
import random
n_bases = len(bases)
base_idx = random.randrange(n_bases)
print("random base ", bases[base_idx], "base index", base_idx)
import math as m
angle = m.degrees(m.pi / 2)
print(angle)
from math import degrees, pi
angle = degrees(pi / 2)
print(angle)
from math import log
log(0)  # raises ValueError: math domain error (the logarithm is undefined at 0)
| 06-libraries.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/seldoncode/Python_CoderDojo/blob/main/Python_CoderDojo14.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + [markdown] id="b_n3uqr_wl9w"
# # Removing elements from a list
# ## Using ```remove```
# * Neither the ```append``` method nor the ```remove``` method returns anything; they simply modify the list they are applied to.
# * The ```remove``` method deletes the given element, but if it appears twice only the first occurrence is removed.
# * Trying to ```remove``` an element that does not exist raises an error.
# + colab={"base_uri": "https://localhost:8080/"} id="YwtRyVEEwjV_" outputId="e5664258-f0bc-4ee0-a4d4-105ded9d4314"
compra = ["pan", "aceite", "azúcar"]
compra.append("pasta") # add an element with append
compra
# + colab={"base_uri": "https://localhost:8080/"} id="cZRdJ5kqxFxL" outputId="afe94632-19cd-4cda-a6f7-f4c1f8b637fa"
compra.remove("azúcar")
compra
# + colab={"base_uri": "https://localhost:8080/"} id="rL95TUI5yAXH" outputId="5f104169-234c-478b-dc73-cffeb860cc5c"
compra.append('pan') # now 'pan' is duplicated
print(compra)
compra.remove('pan') # removes only the first of the duplicated elements
print(compra)
# + id="6He0QUOtydQP"
#compra.remove('tomate') # raises an error when trying to remove an element that is not in the list
# + [markdown] id="v8ec26JS0jnO"
# ### Shopping list
# 1. Show a welcome message
# 2. Show a menu of options: ask the user to choose one.
# - Add product
# - Remove product
# - Show list
# - Exit
# 3. "Add product" option: ask for the product name.
# - If it is already in the list, say so
# - otherwise add it
# 4. "Remove product" option: ask for the product name.
# - If it is not in the list, say so
# - otherwise remove it
# 5. "Show list" option: print the list.
# 6. "Exit" option: quit the program.
# 7. Invalid option: report it.
# 8. After each action return to the menu, until "Exit" is chosen.
#
#
# + colab={"base_uri": "https://localhost:8080/"} id="kMVp4Kqf2FFQ" outputId="53a05bfa-4899-4bd3-ac0f-3dd3713fd8a7"
menu = '''Choose one of the following options:
1. Add product
2. Remove product
3. Show list
4. Exit
'''
lista = []
while True:
    print()
    print("SHOPPING LIST".center(40))
    print("=" * 40)
    print(menu)
    n = int(input("number → "))
    if n == 1:
        producto = input("Enter the name of the product to add: ").capitalize()
        if producto in lista:
            print("This product is already in the list. It will not be duplicated.")
        else:
            lista.append(producto)
            print(f"The product {producto} has been added to the list.")
    elif n == 2:
        producto = input("Enter the name of the product to remove: ").capitalize()
        if producto in lista:
            lista.remove(producto)
            print(f"The product {producto} has been removed from the list.")
        else:
            print("This product is not in the list. It cannot be removed.")
    elif n == 3:
        for i in range(len(lista)):
            print(f"\t{i+1}. {lista[i]}")
    elif n == 4:
        print("End of program")
        break
    else:
        print("You must choose a valid option")
# + [markdown] id="uKNDs_uK8ysP"
# ### Removing the months containing a certain letter
# Remove the months whose name contains the letter 'b'.
# * meses = ['enero','febrero','marzo','abril','mayo','junio','julio','agosto','septiembre','octubre','noviembre','diciembre']
# + [markdown] id="bahLcNJm7GZW"
# #### Incorrect solution
# * The following solution is not correct.
# * It does not work properly.
# * It raises no error, but the result is wrong.
# * The reason is that ```remove``` mutates the meses list after the ```for``` loop has already started.
# * This means the list can change on each iteration, so the ```for``` loop loses track of which element it is on.
# * When the ```for``` loop starts, the meses list has 12 elements, but as ```remove``` acts the list shrinks while the loop's internal index does not adapt.
# + colab={"base_uri": "https://localhost:8080/"} id="eYqXenFp9RGO" outputId="22cc9924-a775-4098-ab7f-ae0044856a92"
meses = ['enero','febrero','marzo','abril','mayo','junio','julio','agosto','septiembre','octubre','noviembre','diciembre']
############### C O D E   T H A T   D O E S   N O T   W O R K   P R O P E R L Y ###################
for mes in meses:
    if 'b' in mes:
        meses.remove(mes)
meses # note that the result still contains octubre and diciembre, which should not appear since they contain the letter 'b'
# + [markdown] id="1c_qxLrDDDeb"
# #### Solution 1
# Making an independent copy of the meses list.
# + colab={"base_uri": "https://localhost:8080/"} id="h3YTlw6QDRHF" outputId="c896f5a9-a5e9-418a-fc64-9fd0ae51f88b"
meses = ['enero','febrero','marzo','abril','mayo','junio','julio','agosto','septiembre','octubre','noviembre','diciembre']
copia_meses = list(meses)
for mes in copia_meses:
    if 'b' in mes:
        meses.remove(mes)
meses
# + [markdown] id="uMKQ65oS9Xo-"
# #### Solution 2
# Iterating from the end.
# + colab={"base_uri": "https://localhost:8080/"} id="4xEgbnnB8S9c" outputId="5b00801d-e577-4c78-b165-93bfa35d67ea"
meses = ['enero','febrero','marzo','abril','mayo','junio','julio','agosto','septiembre','octubre','noviembre','diciembre']
for i in range(len(meses), 0, -1):
    if 'b' in meses[i-1]: # i-1 because month 12 has index 11 and month 1 has index 0
        meses.remove(meses[i-1])
meses
# + [markdown] id="y4iwPmdP9h4U"
# #### Solution 3
# Building a new list with the elements that pass the filter.
# + colab={"base_uri": "https://localhost:8080/"} id="VgSyuRnB9p96" outputId="42b2d82b-4adb-4eb7-9839-ac3310903574"
meses = ['enero','febrero','marzo','abril','mayo','junio','julio','agosto','septiembre','octubre','noviembre','diciembre']
resultado = []
for mes in meses:
    if 'b' not in mes:
        resultado.append(mes)
resultado
# + [markdown] id="tIwQAMNCCEss"
# #### Solution 4
# * Using a ```while``` loop, controlling whether the index counter (i) advances and tracking the length (n) of the meses list.
# * This solution is probably more complicated than the previous ones.
# + colab={"base_uri": "https://localhost:8080/"} id="Mlol23O1AHfA" outputId="966f0009-74a2-4c28-bfbe-19c7758b0f62"
meses = ['enero','febrero','marzo','abril','mayo','junio','julio','agosto','septiembre','octubre','noviembre','diciembre']
n = len(meses) # n starts at 12 but shrinks each time we remove a month
i = 0
while i < n:
    if 'b' in meses[i]:
        meses.remove(meses[i])
        n -= 1
    else:
        i += 1
meses
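# A further approach, not shown above, is a list comprehension: the idiomatic Python way to filter a list, and one that sidesteps the mutation-during-iteration problem entirely:

```python
meses = ['enero','febrero','marzo','abril','mayo','junio','julio','agosto',
         'septiembre','octubre','noviembre','diciembre']
# a list comprehension never mutates the list it iterates over,
# so no element is ever skipped
meses = [mes for mes in meses if 'b' not in mes]
print(meses)  # ['enero', 'marzo', 'mayo', 'junio', 'julio', 'agosto']
```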
# + [markdown] id="zgmxWsY5EWqq"
# ### Removing duplicate elements from a list
# + [markdown] id="qNyjELqGFxH3"
# #### Solution 1
# Using an auxiliary list.
# + colab={"base_uri": "https://localhost:8080/"} id="XFB3kJ2IGqNH" outputId="258cf5df-fc73-4cfb-be45-eb4dbca08ce4"
numeros = [1,2,3,2,5,3,4,6,5,7,8]
unicos = []
for n in numeros:
    if n not in unicos:
        unicos.append(n)
unicos
# + [markdown] id="UidoRVW4HP5h"
# #### Solution 2
# Making a copy of the list.
# + colab={"base_uri": "https://localhost:8080/"} id="m0irvTAtHSfJ" outputId="9f0a81f0-193e-4c86-be36-7ea42ec99869"
numeros = [1,2,3,2,5,3,4,6,5,7,8]
unicos = []
numeros_copia = list(numeros)
for n in numeros_copia:
    if n not in unicos:
        unicos.append(n)
    else:
        numeros.remove(n)
print(unicos) # both lists (unicos and numeros) now contain the elements without repetition
print(numeros)
# + [markdown] id="94FQl3vgFrjO"
# #### Solution 3.1
# Iterating from the back.
# + colab={"base_uri": "https://localhost:8080/"} id="15NJT9nFJQ-7" outputId="4a6b4f62-5c06-43ed-dd29-fd05389bd423"
numeros = [1,2,3,2,5,3,4,6,5,7,8]
unicos = []
for i in range(len(numeros)-1, -1, -1):
    if numeros[i] not in unicos:
        unicos.append(numeros[i])
unicos
# + [markdown] id="aD8R03R5KFlz"
# Solution 3.2
# * A variant of the previous solution.
# * Here we want the numeros list to also end up containing the elements without repetition.
# * In the previous solution the numeros list was left unchanged (the same as at the start).
# + colab={"base_uri": "https://localhost:8080/"} id="Ns9L6SckGq_9" outputId="405f2866-9a84-4088-b058-f845fc0b0cf6"
numeros = [1,2,3,2,5,3,4,6,5,7,8]
unicos = []
for i in range(len(numeros)-1, -1, -1):
    if numeros[i] not in unicos:
        unicos.append(numeros[i])
    else:
        numeros.remove(numeros[i])
print(unicos) # both lists (unicos and numeros) now contain the elements without repetition
print(numeros)
# + [markdown] id="06bBW2C3GfKz"
# #### Solution 4
# Using ```count```, which counts how many times an element appears in a list.
# + colab={"base_uri": "https://localhost:8080/"} id="uJGen5hLEdom" outputId="fe109490-425d-462c-966f-b67039c0adae"
numeros = [1,2,3,2,5,3,4,6,5,7,8]
for i in range(len(numeros)-1, -1, -1):
    if numeros.count(numeros[i]) > 1:
        numeros.remove(numeros[i])
numeros
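# An additional idiom, not covered above: since Python 3.7 dictionaries preserve insertion order, so ```dict.fromkeys``` gives a concise order-preserving de-duplication:

```python
numeros = [1, 2, 3, 2, 5, 3, 4, 6, 5, 7, 8]
# dict keys are unique and keep insertion order, so this drops
# duplicates while preserving the order of first appearance
unicos = list(dict.fromkeys(numeros))
print(unicos)  # [1, 2, 3, 5, 4, 6, 7, 8]
```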
# + [markdown] id="mWKQOSe7I8PK"
# ## The ```del``` statement
# * Removes elements from a list
# * Deletes variables
#
# + id="WzI2RmLAKnhj"
x = 5
del x # this deletes the variable x; it can also be written del(x)
#x # raises an ERROR because the variable x no longer exists
# + id="l3rP7mvWLG6i"
lista = [1,2,3]
del lista # deletes the list; it can also be written del(lista)
# + [markdown] id="GuDgzDw7Lpth"
# ## Removing a single element from a list
# + colab={"base_uri": "https://localhost:8080/"} id="lMJ-LQECLvGP" outputId="2a5702fa-127e-47c6-e0ed-16b3376e570d"
numeros = [1, 2, 3, 4, 5, 6, 7, 8, 9]
del numeros[0] # remove the element at index 0
numeros
# + [markdown] id="DOaYlauEMG5s"
# ## Removing slices from a list
# + colab={"base_uri": "https://localhost:8080/"} id="CyO9es4MMMPd" outputId="d3e80d8c-7a35-46a2-9f1e-41c9d1630223"
numeros = [1, 2, 3, 4, 5, 6, 7, 8, 9]
del numeros[3:5] # remove the elements from index 3 up to index 5 (5 excluded)
numeros # [1, 2, 3, 6, 7, 8, 9]: the numbers 4 and 5, which occupied indices 3 and 4, have been removed
# + [markdown] id="cmjodgyNMq1r"
# ### Emptying the whole list
# + [markdown] id="jbpHM8_BM2xL"
# #### With ```del```
# + colab={"base_uri": "https://localhost:8080/"} id="oO9Ntr8ZMt6l" outputId="9bf40cdf-3333-420f-f9ff-4b6b617fb817"
numeros = [1, 2, 3, 4, 5, 6, 7, 8, 9]
del numeros[:]
numeros
# + [markdown] id="9NCzMXS3M9L7"
# #### With the ```clear``` method
# + colab={"base_uri": "https://localhost:8080/"} id="zo52lLSxM_XD" outputId="30bbec2f-11a2-4030-ab3d-52ff7222c444"
numeros = [1, 2, 3, 4, 5, 6, 7, 8, 9]
numeros.clear()
numeros
# + [markdown] id="lh-tNZ9JjFIR"
# ### Removing a series of items from a list
# * Generate a list of n random numbers, without repetition, between 10 and 50 with one decimal place.
# * Display the list on screen.
# * The value of n is another random number between 12 and 19.
# * Display n
# * The last m elements will be removed from the list
# * The value of m is another random number between 2 and n-5.
# * Display m
# * Display the resulting final list after the elements have been removed.
# + [markdown] id="RlWADiY-pydg"
# #### Solution 1
# * With slicing
# * Without using ```del```
# + colab={"base_uri": "https://localhost:8080/"} id="_13AB1rQkVVN" outputId="04f0cc7f-b4bf-446b-e727-19ea7d9089e6"
import random
random.seed()
n = random.randint(12,19)
datos = random.sample(range(100, 501), k=n) # build a list of n random elements, without repetition, between 100 and 500 (random.choices could repeat values, so sample is used instead)
for i in range(len(datos)): # with this for loop we visit every element of the list
    datos[i] /= 10          # to divide it by 10
print(f"Lista datos: {datos}")
m = random.randint(2,n-5) # the last m elements will be removed from the list
print(f"La lista datos tiene {n} elementos.")
print(f"Se van a eliminar los últimos {m} elementos.")
print(f"Se mantienen los {n-m} primeros datos.")
final = datos[:n-m] # the list final holds the first n-m items of the original
print(f"Lista final : {final}")
# + [markdown] id="aOB9UMLZqAAw"
# #### Solution 2
# * Using ```del```
# + colab={"base_uri": "https://localhost:8080/"} id="Yd8Dg63zqGfr" outputId="ab2dbdb1-21cf-4101-f1e3-be8dc24fe72f"
import random
random.seed()
n = random.randint(12,19)
datos = []
while len(datos)<n: # another way to build the list, using a while loop
    rnd = (random.random()*401+100)//1/10 # the minimum value is 10.0 and the maximum generated value is 50.0
    if rnd not in datos: # to avoid repetitions
        datos.append(rnd) # the random number is added only if it was not already in the list
print(f"Lista original: {datos}")
m = random.randint(2,n-5) # the last m elements will be removed from the list
print(f"La lista datos tiene {n} elementos.")
print(f"Se van a eliminar los últimos {m} elementos.")
print(f"Se mantienen los {n-m} primeros datos.")
del datos[n-m:] # the final list keeps the first n-m items of the original; the rest are deleted
print(f"Lista final : {datos}")
# + [markdown] id="S9CDfNTl16ak"
# ## The ```pop``` method
# * Removes the last element of the list.
# * Returns the removed element, so the returned value can be stored in a variable.
#
# + colab={"base_uri": "https://localhost:8080/"} id="PvHfsk4i2DC6" outputId="9d217add-7565-41ae-d4b3-94579f466163"
lista = [1,2,3,4,5,6,7,8,9]
lista.pop()
# + colab={"base_uri": "https://localhost:8080/"} id="yBr7BTT_2PAz" outputId="f0385c2a-031c-4a7d-b41d-aa2ef6912141"
eliminado = lista.pop()
print(f"Ahora se ha eliminado el {eliminado}.")
# + [markdown] id="-mFzGiww2sbg"
# ### The ```pop``` method with an argument
# * If we pass an argument to ```pop```, it removes the element at the indicated index.
# + colab={"base_uri": "https://localhost:8080/"} id="9zQQdqvQ26Xx" outputId="0eadd0bc-5100-4f8f-c0a4-5c4569060a3f"
lista = ['a','e','i','o','u']
letra_eliminada = lista.pop(1)
print(f"Se ha eliminado la letra {letra_eliminada}.")
print(f"Ahora la lista es {lista}")
# + [markdown] id="PxYUDkwk3-AQ"
# https://youtu.be/ebxkdrxKHWE
#
# + [markdown] id="8e989_VLAEGO"
# ### Dealing cards
# * We deal 5 cards to each of 4 players, leaving 20 undealt cards in the deck
# + [markdown] id="ZYr-rUTKEwli"
# #### Solution 1
# Based on the deck we had in Python_CoderDojo12.ipynb
# + colab={"base_uri": "https://localhost:8080/"} id="XENfX0nyAZ3U" outputId="160f63d0-3ca0-4002-ad62-471050993636"
import random
tantos = ['A','2','3','4','5','6','7','S','C','R']
palos = ['oros','copas','espadas','bastos']
baraja = []
for tanto in tantos:
    for palo in palos:
        carta = f"{tanto} de {palo}"
        baraja.append(carta)
print(baraja)
random.shuffle(baraja) # shuffle the deck
for i in range(0,40,4): # display the shuffled cards
    for j in range(4):
        print(f"{baraja[i+j]:14}", end=" ")
    print()
print()
# Dealing cards
# 5 cards to each of 4 players; 20 cards are left undealt
jugador1 = []
jugador2 = []
jugador3 = []
jugador4 = []
for i in range(5): # deal 5 cards to each player
    carta1 = baraja.pop(0) # pop(0) means we start dealing from the first card of the deck
    jugador1.append(carta1)
    carta2 = baraja.pop(0)
    jugador2.append(carta2)
    carta3 = baraja.pop(0)
    jugador3.append(carta3)
    carta4 = baraja.pop(0)  # these last two lines are equivalent to jugador4.append(baraja.pop(0)),
    jugador4.append(carta4) # which would avoid the variable carta4 (and likewise for the others)
jugadores = [jugador1, jugador2, jugador3, jugador4] # list of lists, holds all the dealt cards
# Displaying each player's cards
for i in range(4):
    print(f"Jugador {i+1}:")
    for j in range(5):
        print(f"{jugadores[i][j]:14}", end = " ")
    print()
# + [markdown] id="4YVFVLZnBSuw"
# #### Solution 2. Improving the code
# The section labelled "Dealing cards" is improved
#
# + colab={"base_uri": "https://localhost:8080/"} id="2XSjby6dCzgc" outputId="d9118352-afa6-41a3-9632-7166b816846b"
import random
tantos = ['A','2','3','4','5','6','7','S','C','R']
palos = ['oros','copas','espadas','bastos']
baraja = []
for tanto in tantos:
    for palo in palos:
        carta = f"{tanto} de {palo}"
        baraja.append(carta)
print(baraja)
random.shuffle(baraja)
for i in range(0,40,4):
    for j in range(4):
        print(f"{baraja[i+j]:14}", end=" ")
    print()
print()
# Dealing cards
# 5 cards to each of 4 players; 20 cards are left undealt
jugadores = [[],[],[],[]]
for i in range(5): # deal 5 cards to each player
    for j in range(4): # there are 4 players
        jugadores[j].append(baraja.pop()) # deal the last card of the deck
# Displaying each player's cards
for i in range(4):
    print(f"Jugador {i+1}:")
    for j in range(5):
        print(f"{jugadores[i][j]:14}", end = " ")
    print()
# + [markdown] id="v7uYl-GOFJrC"
# # Absolute value ```abs```
# + colab={"base_uri": "https://localhost:8080/"} id="6bzkS82qFP0v" outputId="ef91e54b-2f12-4000-e168-575048f54592"
abs(-7) # always returns the given value with a positive sign
# + [markdown] id="FKS4UcwjFkBl"
# ### Distance between ships
# * Two ships sit on a line (the real line, with zero at the center)
# * We want to compute the distance between them.
# + colab={"base_uri": "https://localhost:8080/"} id="bFZnXOBXOed8" outputId="4e11724b-245f-4603-8bdc-f17d53da45fb"
x1 = 5 # ship 1 is at point x1
x2 = 12 # ship 2 is at point x2
distancia = abs(x1-x2) # x1-x2 and x2-x1 give the same result because the absolute value is always positive
distancia
# + colab={"base_uri": "https://localhost:8080/"} id="qY0BScrePFWs" outputId="fb201d74-ec1d-4586-c678-cc61ef740b25"
distancia = abs(x2-x1) # using x2-x1
distancia
# + [markdown] id="6QBTW7FNPQiv"
# #### Distance with negative positions
#
# + colab={"base_uri": "https://localhost:8080/"} id="3qZbJfDdPaLO" outputId="3c731836-1923-4795-a1c2-4fa9cc8e6c3a"
x1 = -5 # ship 1 is at point x1
x2 = 15 # ship 2 is at point x2
distancia = abs(x1-x2) # x1-x2 and x2-x1 give the same result
distancia
# + colab={"base_uri": "https://localhost:8080/"} id="8eB02LpMPwql" outputId="b69dcf43-5cec-484f-e7de-7b4e3655e510"
x1 = -5 # x1 is negative
x2 = -15 # x2 is also negative
distancia = abs(x1-x2) # x1-x2 and x2-x1 give the same result
distancia
# + [markdown] id="czCh8IRPQAlN"
# ### Challenge 14.1. Temperature change
# * The maximum temperature of a day was +10°C and the minimum was -5°C.
# * Compute the temperature variation over that day.
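A minimal sketch of one possible solution to the challenge (variable names are my own), assuming the variation is simply the absolute difference between the two temperatures:

```python
t_max = 10                      # maximum temperature (°C)
t_min = -5                      # minimum temperature (°C)
variacion = abs(t_max - t_min)  # abs makes the order of subtraction irrelevant
print(variacion)                # 15
```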
# + [markdown] id="jz_F1QKoRH47"
# ### Maximum and minimum of a list
# * Generate a list of random numbers, positive and negative, between -99 and 99
# * The number of elements in the list is also a random number between 12 and 19.
# * Compute the maximum and the minimum of the list without using the ```max``` and ```min``` functions.
# * Then check that the values obtained match the ones returned by the built-in ```max``` and ```min``` functions.
# + [markdown] id="IdBTGchiv3mw"
# #### Solution 1
# Initializing the maximum and the minimum with the first element of the list numeros.
# + colab={"base_uri": "https://localhost:8080/"} id="ZCIx0SQuSsnG" outputId="7aaef639-f228-46d6-ca24-4ec0d7b671dc"
import random
import math
random.seed()
numeros = []
for i in range(random.randint(12,19)):
    rnd = random.uniform(-99,100) # generates floats between -99 and 100, never reaching 100
    n = math.floor(rnd) # floor rounds down to the nearest integer (e.g. -2.3 becomes -3)
    numeros.append(n) # repetitions are possible
print(f"{len(numeros)} números mixtos: {numeros}")
# Computing the maximum and the minimum
maximo = minimo = numeros[0] # we choose to initialize both variables with the first element of the list
for numero in numeros:
    if numero>maximo: maximo=numero
    if numero<minimo: minimo=numero
print(f"El mínimo es: {minimo}")
print(f"El máximo es: {maximo}")
if min(numeros)==minimo and max(numeros)==maximo:
    print("Coincide el máximo calculado con el max y el mínimo con el min.")
# + [markdown] id="1_uXF16MwBR4"
# #### Solution 2
# * Initializing the maximum and the minimum with ```None```.
# * If numeros were an empty list, this method would not fail, whereas the previous one would raise an error.
# + colab={"base_uri": "https://localhost:8080/"} id="2S97EyVkwODO" outputId="55df71bd-2ecf-4625-f5fe-f48c6923568f"
numeros = [4, -70, 73, -8, 87, -60, 46, 85, 49, 54, -31, -84, -58, 24, 76, 75, 33, -93, -39]
maximo = None
minimo = None
for n in numeros:
    if maximo is None or n > maximo: maximo = n
    if minimo is None or n < minimo: minimo = n
print(f"El mínimo es: {minimo}")
print(f"El máximo es: {maximo}")
# + [markdown] id="yU1zLeWRyGvr"
# # Reversing a string
# + [markdown] id="9SNKGyMoyNdI"
# #### Solution 1
# Iterating with a ```for``` loop from the end to the start.
# + colab={"base_uri": "https://localhost:8080/", "height": 35} id="9cI3iV78yVjw" outputId="3f4e4f7c-ccef-4d37-8c9f-2c1227e88639"
cadena = "Buenos días"
reverso = ''
for i in range(len(cadena)):
    reverso += cadena[-i-1]
reverso
# + [markdown] id="s7HYnAHozVSy"
# #### Solution 2
# + colab={"base_uri": "https://localhost:8080/", "height": 35} id="WB-GQ6atzCzw" outputId="28cb23e5-d93a-4866-acb8-d4555b6f6bef"
cadena = "Buenos días"
reverso = ''
for i in cadena:
    reverso = i + reverso
reverso
# + [markdown] id="tASATIRRzSvg"
# #### Solution 3
# + colab={"base_uri": "https://localhost:8080/", "height": 35} id="orbV0vD1zYiO" outputId="939cf05a-2bbf-4ead-e664-1a07ceacff06"
cadena = "Buenos días"
reverso = ''
for i in range(len(cadena),-1,-1):
    reverso += cadena[i:i+1]
reverso
# + [markdown] id="cotbJgRlz0xD"
# #### Solution 4
# + colab={"base_uri": "https://localhost:8080/", "height": 35} id="P1MF_Ejnz3l9" outputId="f41354d5-3bd4-4e36-c541-a6ce98ccfb78"
cadena = "Buenos días"
reverso = cadena[::-1]
reverso
# + [markdown] id="EbcTW6j10GPa"
# ### Palindromes
# * With numbers they are called "capicúas" (palindromic numbers)
# * With letters they are called palindromes
# * [Wikipedia](https://es.wikipedia.org/wiki/Pal%C3%ADndromo)
# + colab={"base_uri": "https://localhost:8080/", "height": 35} id="nSms2akO0UUm" outputId="7498c5f3-ee58-41b6-9ccc-13a1a6f785e0"
frase = "Yo hago yoga hoy"
palindromo = frase[::-1]
palindromo
# + [markdown] id="aRKIVWgN4JU0"
# ## The ```split``` method
# Applied to a string, it splits it into a list of substrings.
# + colab={"base_uri": "https://localhost:8080/"} id="s-z_8qKf4dg8" outputId="0bbc35f3-fb80-44d0-9a62-9e18673dcba9"
cadena = "La capital de Francia es París. La capital de Italia es Roma."
cadena.split() # with no argument, split separates on whitespace
# + colab={"base_uri": "https://localhost:8080/"} id="o-ND9ctT48pk" outputId="24774fb2-0f70-41ad-dd2d-f43e0453a9d7"
cadena = "La capital de Francia es París. La capital de Italia es Roma."
cadena.split(".") # with "." as the argument it splits on the periods
# it leaves an empty string at the end because of the final period
# + [markdown] id="Qbvi-hOn5Txm"
# ## The ```join``` method
# Given a list whose elements are strings, the ```join``` method joins all those elements into a single string.
# + colab={"base_uri": "https://localhost:8080/", "height": 53} id="SmxLnHbo5qTV" outputId="7ec72a4f-cc18-443a-8dad-07d7d2e00371"
cadena = "La capital de Francia es París. La capital de Italia es Roma."
lista = cadena.split() # first we split to get a list
print(lista)
"".join(lista) # then we join the pieces with join
# + colab={"base_uri": "https://localhost:8080/", "height": 53} id="cpoeOLEC6OP7" outputId="7eec1395-08eb-4459-e734-8715f3a64f9b"
cadena = "La capital de Francia es París. La capital de Italia es Roma."
lista = cadena.split()
print(lista)
" ".join(lista) # with a space between the quotes, join joins the items separated by spaces
# + colab={"base_uri": "https://localhost:8080/", "height": 53} id="NuLTtblG6gEK" outputId="b9360f58-0351-45d7-c626-e0e6c28182d6"
cadena = "La capital de Francia es París. La capital de Italia es Roma."
lista = cadena.split()
print(lista)
"-".join(lista) # with a hyphen between the quotes, join joins the items separated by hyphens
# + [markdown] id="-4-wyrp_63RN"
# ## The ```replace``` method
# Replaces one character (or substring) with another.
# + colab={"base_uri": "https://localhost:8080/", "height": 35} id="I4nDZPvR6_Rr" outputId="45c27f2b-d954-4ddc-a266-650edfaead76"
cadena = "La-capital-de-Francia-es-París.-La-capital-de-Italia-es-Roma."
cadena.replace("-"," ")
# + colab={"base_uri": "https://localhost:8080/", "height": 35} id="i52d01y77Uen" outputId="c63a5d50-dc47-4b73-bd8b-9aae99e3689a"
cadena.replace(".",",") # replaces the periods with commas
# + [markdown] id="OCSQwt_m6xoj"
# ### The ```replace``` method accepts a third argument
# The third argument is the maximum number of replacements to perform, starting from the beginning of the string.
# + colab={"base_uri": "https://localhost:8080/", "height": 35} id="pAAj7Xnb73b7" outputId="30ab6260-54c5-40c3-bf80-549d24d3ed6c"
cadena.replace(".",",",1) # replaces only the first period with a comma
# + [markdown] id="G4_3614B8Ssc"
# ## The is***** string methods
# * There are several methods that start with is
# - 'isalnum'
# - 'isalpha'
# - 'isascii'
# - 'isdecimal'
# - 'isdigit'
# - 'isidentifier'
# - 'islower'
# - 'isnumeric'
# - 'isprintable'
# - 'isspace'
# - 'istitle'
# - 'isupper'
# * They return True or False
# + colab={"base_uri": "https://localhost:8080/"} id="bF8bXgrD8iKE" outputId="13edc9a2-fa94-4c71-c9a5-2f449ad3a27d"
dir(str)
# + colab={"base_uri": "https://localhost:8080/"} id="Rs3pFFAK8_vG" outputId="46fef3a3-22ee-4bf4-908f-b516bdebcbd6"
"hola".isalpha()
# + colab={"base_uri": "https://localhost:8080/"} id="gRWfUsQx9Iae" outputId="01516de8-844c-4962-8015-539ec11c5ee9"
"123abc".isalpha() # only True if every character is alphabetic
# + colab={"base_uri": "https://localhost:8080/"} id="3d2erfCU9XgW" outputId="e0fe49b5-afdf-42b3-9682-aeeba3435f28"
"123.".isalnum() # only True if every character is alphanumeric, and here there is a dot
# + [markdown] id="I22BI90F1KVa"
# ### Check whether a sentence is a palindrome
# * frase = "Allí por la tropa portado, traído a ese paraje de maniobras, una tipa como capitán usar boina me dejara, pese a odiar toda tropa por tal ropilla."
# * Remove the periods
# * do not take accents into account
# * do not take capital letters into account
# + [markdown] id="5Asj5L6uEVF4"
# #### Solution 1
# + colab={"base_uri": "https://localhost:8080/"} id="wdx6P-4u1o9O" outputId="8e54e9c1-4d01-4fb3-955d-af93643671e9"
frase = "Allí por la tropa portado, traído a ese paraje de maniobras, una tipa como capitán usar boina me dejara, pese a odiar toda tropa por tal ropilla."
frase = frase.strip(".")
frase = frase.lower()
frase = frase.replace("í","i").replace("á","a").replace(",","").replace(" ","")
print(frase)
frase == frase[::-1]
# + [markdown] id="fgN4gFCdEY2q"
# #### Solution 2
# + colab={"base_uri": "https://localhost:8080/"} id="xFEUsRzkEUNs" outputId="ec636461-e083-4194-99fb-04311d901850"
frase = "Allí por la tropa portado, traído a ese paraje de maniobras, una tipa como capitán usar boina me dejara, pese a odiar toda tropa por tal ropilla."
minusculas = frase.lower()
palabras = minusculas.split()
# Remove the commas and periods
lista_plana = []
for p in palabras:
    q = p.strip(".").strip(",")
    lista_plana.append(q)
print(lista_plana)
cadena_plana = "".join(lista_plana)
print(cadena_plana)
# Remove the accents
cadena_plana = cadena_plana.replace("á","a").replace("é","e")\
.replace("í","i").replace("ó","o").replace("ú","u")
# Reverse the string
palindromo = cadena_plana[::-1]
print(palindromo)
# Compare the string with its reverse
if cadena_plana == palindromo:
    print("Son palíndromos")
else:
    print("No son palíndromos")
# + [markdown] id="3NNi_jjgG2al"
# #### Solution 3
# + colab={"base_uri": "https://localhost:8080/"} id="DLQkuY8MG5Dn" outputId="ca46ce84-c067-402f-fc62-31e4fdcc0e6f"
frase = "Allí por la tropa portado, traído a ese paraje de maniobras, una tipa como capitán usar boina me dejara, pese a odiar toda tropa por tal ropilla."
minusculas = frase.lower()
# Remove everything that is not a letter (commas, periods, spaces)
cadena_plana = ""
for letra in minusculas:
    if letra.isalpha():
        cadena_plana += letra
# Remove the accents
cadena_plana = cadena_plana.replace("á","a").replace("é","e")\
    .replace("í","i").replace("ó","o").replace("ú","u")
# Reverse the string
palindromo = cadena_plana[::-1]
print(palindromo)
# Compare the string with its reverse
if cadena_plana == palindromo:
    print("Son palíndromos")
else:
    print("No son palíndromos")
# + [markdown] id="6ODjdYPAHcpl"
# ### Palindromic numbers (capicúas)
# Find all the palindromic numbers that exist between two given numbers.
#
# + colab={"base_uri": "https://localhost:8080/"} id="9S-DnSsrHqIG" outputId="4832238e-1e2f-484a-f5b3-633f29e7cf64"
inicial = 10_000
final = 12_000
capicuas = []
for numero in range(inicial, final+1):
    texto = str(numero)
    if texto == texto[::-1]:
        capicuas.append(numero)
print(f"Existen {len(capicuas)} capicuas en ese rango.")
print(capicuas)
| Python_CoderDojo14.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# # Tabular Playground Series - February 2022
#
# For this month's TPS, we are predicting the class of bacteria based on the histogram of DNA bases found in 10-mers of the DNA segments. We are told that our data includes simulated measurement errors.
# +
import pandas as pd
import numpy as np
import pyarrow
import time
import re
import math
import plotly.express as px
import matplotlib.pyplot as plt
from IPython.display import Image
import seaborn as sns; sns.set_theme()
# +
from sklearn.preprocessing import LabelEncoder
from math import factorial
import gc
def prepare_data(path, integer = False, remove_dupes = True):
    # Load Data
    df = pd.read_csv(path)
    df = df.drop('row_id', axis = 1)
    features = [x for x in df.columns if x not in ['row_id','target']]
    bias = lambda w, x, y, z: factorial(10) / (factorial(w) * factorial(x) * factorial(y) * factorial(z) * 4**10)
    # Create integer data
    df_i = dict()
    for col in features:
        w = int(col[1:col.index('T')])
        x = int(col[col.index('T')+1:col.index('G')])
        y = int(col[col.index('G')+1:col.index('C')])
        z = int(col[col.index('C')+1:])
        df_i[col] = ((df[col] + bias(w, x, y, z)) * 1000000).round().astype(int)
    df_i = pd.DataFrame(df_i)
    # Get GCDs
    gcd = df_i[features[0]]
    for col in features[1:]:
        gcd = np.gcd(gcd, df_i[col])
    df['gcd'] = gcd
    # Return integer histograms?
    if integer:
        df[features] = df_i[features]
    gc.collect()
    # Get sample weight
    if remove_dupes:
        vc = df.value_counts()
        df = pd.DataFrame([list(tup) for tup in vc.index.values], columns = df.columns)
        df['sample_weight'] = vc.values
    # Save Memory
    for col, dtype in df.dtypes.items():  # iteritems() was removed in pandas 2.0
        if dtype.name.startswith('int'):
            df[col] = pd.to_numeric(df[col], downcast ='integer')
        elif dtype.name.startswith('float'):
            df[col] = pd.to_numeric(df[col], downcast ='float')
    return df
def load_data():
    try:
        # Successive notebook runs will load the preprocessed data locally
        train = pd.read_feather('../data/train.feather')
        test = pd.read_feather('../data/test.feather')
        submission = pd.read_csv('../data/sample_submission.csv')
    except FileNotFoundError:
        # First run has to perform preprocessing
        train = prepare_data('../data/train.csv')
        train.to_feather('../data/train.feather')
        test = prepare_data('../data/test.csv', remove_dupes = False)
        test.to_feather('../data/test.feather')
        submission = pd.read_csv('../data/sample_submission.csv')
    encoder = LabelEncoder()
    train['target'] = encoder.fit_transform(train['target'])
    return train, test, submission, encoder
# -
# # Load Data
# +
# %%time
train, test, submission, encoder = load_data()
target_bins = train['target'].astype(str) + train['gcd'].astype(str)
# Features
features = [x for x in train.columns if x not in ['row_id','target','sample_weight','gcd']]
print(f'Training Samples: {len(train)}')
# -
# # Duplicate Data
#
# Our training data contains many duplicate rows, which have been combined and given a weight based on how often each row was duplicated
# +
# Training Data Dupes
temp = train.groupby(['gcd','sample_weight'])['target'].count()
temp = temp.reset_index()
temp['sample_weight'] = temp['sample_weight'].astype('str')
fig, ax = plt.subplots(2, 2, figsize = (12,9))
gcd = [[1,10],[1000,10000]]
for row in range(2):
    for col in range(2):
        idx = 2*row + col
        ax[row,col].bar(
            temp[temp.gcd == gcd[row][col]]['sample_weight'],
            temp[temp.gcd == gcd[row][col]]['target'],
        )
        ax[row,col].set_title(f'Duplicates for GCD = {gcd[row][col]} (Training)')
# -
# # Labels
# +
import plotly.express as px
# Training Data Dupes
temp = train.groupby(['gcd','sample_weight', 'target'])['A0T0G0C10'].count()
temp = temp.reset_index()
temp['sample_weight'] = temp['sample_weight'].astype('str')
fig = px.bar(
temp[temp.gcd == 1000], x="sample_weight", y='A0T0G0C10',
color="target", title = "Duplicates Per Label (GCD = 1000)",
labels = {'A0T0G0C10':'Counts'}
)
img_bytes = fig.to_image(format="png")
Image(img_bytes)
# +
# Training Data Dupes
temp = train.groupby(['gcd','sample_weight', 'target'])['A0T0G0C10'].count()
temp = temp.reset_index()
temp['sample_weight'] = temp['sample_weight'].astype('str')
fig = px.bar(
temp[temp.gcd == 10000], x="sample_weight", y='A0T0G0C10',
color="target", title = "Duplicates Per Label (GCD = 10000)",
labels = {'A0T0G0C10':'Counts'}
)
img_bytes = fig.to_image(format="png")
Image(img_bytes)
# -
# # Test Data
# +
# Test Data dupes
temp = prepare_data('../data/test.csv')
temp = temp.groupby(['gcd','sample_weight'])['A0T0G0C10'].count()
temp = temp.reset_index()
temp['sample_weight'] = temp['sample_weight'].astype('str')
fig, ax = plt.subplots(2, 2, figsize = (12,9))
gcd = [[1,10],[1000,10000]]
for row in range(2):
    for col in range(2):
        idx = 2*row + col
        ax[row,col].bar(
            temp[temp.gcd == gcd[row][col]]['sample_weight'],
            temp[temp.gcd == gcd[row][col]]['A0T0G0C10'],
        )
        ax[row,col].set_title(f'Duplicates for GCD = {gcd[row][col]} (Test)')
# -
# # Original Histograms
#
# The training data is formed by creating histograms out of 10-mers, then subtracting off the "bias". The bias is the 10-mer count you would expect if you generated completely random DNA sequences. This unbiased count is then divided by the total number of 10-mers (1 million). Each row consists of a different number of reads, which is multiplied by a constant so that the row sum is 1 million.
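As a quick sanity check of that description, the bias for a base composition (w, x, y, z) of A/T/G/C counts is the multinomial probability 10! / (w!·x!·y!·z!·4^10), and these probabilities sum to 1 over all valid compositions. A small illustrative sketch (mirroring the `bias` lambda used in `prepare_data` above):

```python
from math import factorial
from itertools import product

def bias(w, x, y, z):
    # probability that a random 10-mer has exactly w A's, x T's, y G's, z C's
    return factorial(10) / (factorial(w) * factorial(x) * factorial(y) * factorial(z) * 4**10)

print(bias(0, 0, 0, 10))   # a homopolymer composition is the least likely: 4**-10

# the probabilities over all compositions with w+x+y+z == 10 sum to 1
total = sum(bias(w, x, y, z)
            for w, x, y, z in product(range(11), repeat=4)
            if w + x + y + z == 10)
print(round(total, 9))
```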
# +
# %%time
original_train = prepare_data('../data/train.csv', integer = True, remove_dupes = False)
original_test = prepare_data('../data/test.csv', integer = True, remove_dupes = False)
print(f'Training Samples: {len(original_train)}')
print(f'Test Samples: {len(original_test)}')
original_train = original_train[original_train.gcd == 1][features].sum(axis = 0)
original_test = original_test[original_test.gcd == 1][features].sum(axis = 0)
original_train //= 2
# -
temp = original_train - original_test
temp.sort_values().head(20)
# # Correlation
#
# We should expect a good deal of correlation, since the 10-mer counts overlap heavily: shifting a window left or right changes at most two values by one.
corr_matrix = train[features].corr()
np.fill_diagonal(corr_matrix.values, 0)
fig, ax = plt.subplots(figsize=(9, 6))
sns.heatmap(corr_matrix, ax=ax)
# # Principal Components
#
# We can look at the principal components explained variance to see the redundancy.
# +
from sklearn.decomposition import PCA
pca = PCA()
pca.fit(train[features])
cumsum = np.cumsum(pca.explained_variance_ratio_)
fig, ax = plt.subplots(figsize = (9,6))
ax.plot(range(1,len(cumsum)+1), cumsum)
plt.ylabel('Explained Variance')
plt.xlabel('# of Components')
# +
# PCA scatter plots, one panel per GCD group
fig, ax = plt.subplots(2, 2, figsize = (12,9))
GCD = [[1,10],[1000,10000]]
for row in range(2):
    for col in range(2):
        idx = 2*row + col
        gcd = GCD[row][col]
        pca = PCA(whiten = True, random_state = 0)
        pca.fit(train[features][train.gcd == gcd])
        train_pca = pca.transform(train[features][train.gcd == gcd])
        ax[row,col].scatter(
            train_pca[:,0],train_pca[:,1],
            c = train[train.gcd == gcd]['target'], s = 1
        )
        ax[row,col].set_title(
            f"{1000000 // gcd} reads ({(train['gcd'] == gcd).sum()} unique samples)"
        )
plt.show()
# -
# # Test Data w/ PCA
# +
# PCA scatter plots with test data overlaid, one panel per GCD group
fig, ax = plt.subplots(2, 2, figsize = (12,9))
GCD = [[1,10],[1000,10000]]
for row in range(2):
    for col in range(2):
        idx = 2*row + col
        gcd = GCD[row][col]
        pca = PCA(whiten = True, random_state = 0)
        pca.fit(train[features][train.gcd == gcd])
        train_pca = pca.transform(train[features][train.gcd == gcd])
        test_pca = pca.transform(test[features][test.gcd == gcd])
        ax[row,col].scatter(
            train_pca[:,0],train_pca[:,1],
            c = train[train.gcd == gcd]['target'], s = 1
        )
        ax[row,col].scatter(
            test_pca[:,0],test_pca[:,1],
            c = 'green', s = 1
        )
        ax[row,col].set_title(
            f"{1000000 // gcd} reads ({(train['gcd'] == gcd).sum()} unique samples)"
        )
plt.show()
| tps-2022-02/notebooks/Notebook 1 - Exploratory Analysis.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .jl
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Julia 1.4.1
# language: julia
# name: julia-1.4
# ---
# # Notebook demonstrating how students can run tests
# One bonus of using Jupyter notebooks is that they can include automated *tests*. These can be used by:
# - Students, to get instantaneous feedback on their responses,
# - Teachers, to automate (at least a portion of) the grading, or
# - Both!
#
# First, we need to load a package, *NBInclude* that is able to parse Jupyter notebooks.
using NBInclude
# First, we'll read in the original notebook, which did not include correct student responses.
@nbinclude("ex1.ipynb");
# Now, we'll run the tests that have been defined in the file ```test/test1.jl```.
include("test/test1.jl");
# The student can clearly see that their notebook does not pass the tests. Additionally, the output shows which tests failed and why.
# The information below ```Stacktrace``` can be used to isolate the location in the notebook where the error occurred. Alternatively, non-programming students can ignore it.
# For an example of tests that pass, see this [same file in the *completed* branch](https://github.com/PsuAstro528/TLTDemo-start/blob/completed/Self-check.ipynb).
| Self-check.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: action_prediction
# language: python
# name: action_prediction
# ---
# +
# load libraries
import pandas as pd
import numpy as np
import seaborn as sns
import os
import matplotlib.pyplot as plt
from sklearn.linear_model import LinearRegression, Ridge
from action_prediction import constants as const
from action_prediction.data import DataSet
from action_prediction import visualize as vis
from action_prediction import inter_subject_congruency as isc
# %load_ext autoreload
# %autoreload 2
# -
# ## Load behavioral and eyetracking data
#
# ### Merge these datasets into a `df_merged` dataframe
# +
# initialize class
data = DataSet(task='action_observation')
# load behavior
df_behav = data.load_behav()
# load eyetracking
df_eye = data.load_eye(data_type='events')
# merge eyetracking with behav
df_merged = data.merge_behav_eye(dataframe_behav=df_behav, dataframe_eye=df_eye)
# -
# initialize plotting style
vis.plotting_style()
# ## Accuracy increases over time
# ### The dashed black line indicates the start of a new session
#
# #### On 20% of the trials, the player does not hit the instructed target. On those trials, the learning rates are significantly worse than on trials where the player successfully hits the intended target
# visualize accuracy across runs
vis.plot_acc(dataframe=df_behav.query('player_success=="T"'), x='run_num', hue='player_success')
# ## Reaction time decreases over time
#
# ### There is an anomalous spike in RT in run 10 (need to check this out)
# visualize rt across runs
vis.plot_rt(dataframe=df_behav.query('player_success=="T"'), x='run_num', hue=None)
# +
# plot diameter
# vis.plot_diameter(dataframe=df_merged)
# -
# ## Visualization of Correlations
# Another approach that visualizes correlations between subjects
#
# reference source: https://medium.com/@sebastiannorena/finding-correlation-between-many-variables-multidimensional-dataset-with-python-5deb3f39ffb3
# +
# get all subj fixations
fixations = isc.get_subj_fixations(dataframe=df_eye, data_type='events')
# grid subject data
gridded_data = isc.grid_all_subjects(fixations)
# visualize corr matrix
correlations = vis.visualize_corr(gridded_data)
# -
# # Inter-observer congruency
#
# Idea: Build a saliency map from all observers except the ith observer. To calculate the degree of similarity between the ith observer and the others, calculate the hit rate. Iterate over all subjects and average the scores to get the IOC.
#
# * Iterate over all fixation positions (x,y)
# * Convert fixation positions to gridded positions
# * For each grid position, map the position to the number of fixations at that position
# * Adaptive binarization: decide yes or no whether there are at least some threshold number t of fixations at that position
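The steps above can be sketched with a minimal leave-one-out implementation in NumPy. This is my own illustrative version, not the actual `isc.ioc` code: each subject's gridded fixation counts are scored against the binarized, pooled map of all other subjects, and the per-subject hit rates are averaged.

```python
import numpy as np

def ioc_hit_rate(grids, threshold):
    """grids: dict mapping subject -> 2D array of fixation counts on a shared grid.
    Returns the mean fraction of each subject's fixations that land on cells
    where the pooled map of all *other* subjects has at least `threshold` fixations."""
    rates = []
    for subj, own in grids.items():
        pooled = sum(g for s, g in grids.items() if s != subj)  # leave-one-out saliency map
        salient = pooled >= threshold                           # adaptive binarization
        rates.append(own[salient].sum() / own.sum())            # hit rate for this subject
    return float(np.mean(rates))

# toy example with two subjects on a 2x2 grid
grids = {'s1': np.array([[4, 0], [0, 0]]),
         's2': np.array([[3, 1], [0, 0]])}
print(ioc_hit_rate(grids, threshold=3))  # (4/4 + 3/4) / 2 = 0.875
```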
# +
ioc_rates = isc.ioc(gridded_data=gridded_data, thresholds=[5, 10, 15, 20])
sns.catplot(x='threshold', y='hit_rate', data=ioc_rates)  # factorplot was renamed to catplot in seaborn 0.9
plt.title("Inter-Observer Congruency");
# -
# ## Pairwise Comparison
# To measure how much of a subject's fixations match with all other subjects'
#
# * Convert fixations to their locations in a grid
# * Since all the subjects have the same grid, the vectors being compared now all have the same length: each subject has one entry per grid position, containing the number of fixations at that position
# * Calculate the correlation between each pair of subjects
# * Convert the correlations to z-scores with the Fisher Z-Transformation
# * Average the z-scores
# * Convert back to probabilities
# Note on Fisher Z-Transformation:
#
# The Fisher Z-Transformation is a way to transform the sampling distribution of Pearson’s r (i.e. the correlation coefficient) so that it becomes normally distributed. The “z” in Fisher Z stands for a z-score. The formula to transform r to a z-score is: z’ = .5[ln(1+r) – ln(1-r)]
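# The pairwise pipeline reduces to: correlate, arctanh (the Fisher z), average, tanh back. A minimal sketch on flattened grid vectors, assuming the real `isc.pairwise` may differ in detail:

```python
import numpy as np

def pairwise_fisher(vectors):
    """vectors: (n_subjects, n_cells) flattened grid counts.
    Returns each subject's average Fisher-z correlation with the
    others, mapped back to an r via the inverse transform tanh."""
    n = len(vectors)
    out = []
    for i in range(n):
        zs = []
        for j in range(n):
            if i == j:
                continue
            r = np.corrcoef(vectors[i], vectors[j])[0, 1]
            r = np.clip(r, -0.999999, 0.999999)  # keep arctanh finite at r = +/-1
            zs.append(np.arctanh(r))             # Fisher z = .5*ln((1+r)/(1-r))
        out.append(np.tanh(np.mean(zs)))
    return np.array(out)
```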
# +
results = isc.pairwise(gridded_data)
sns.barplot(x='observer', y='avrg_zscore', data=results)
# sns.lineplot(x='observer', y='probability', data=results)
# -
df_merged.columns
# ## The number of fixations increases over time
#
# ## Participants fixate more on trials where the player successfully hit the instructed target
# ### 20% of the trials are "False" trials where the player does not hit the instructed target
# plot fixation count
vis.plot_fixation_count(dataframe=df_merged, x='run_num', hue='player_success')
# ## Participants fixate more on trials with player "EW"
# plot fixation count
vis.plot_fixation_count(dataframe=df_merged, x='player_name', hue=None)
# +
# plot saccade count
# vis.plot_saccade_count(dataframe=df_merged, x='run_num', hue=None)
# -
# ## Fixation duration increases over time
# plot fixation duration
vis.plot_fixation_duration(dataframe=df_merged, x='run_num', hue=None)
# plot amplitude
vis.plot_amplitude(dataframe=df_merged, x='run_num', hue=None)
# +
# plot dispersion
# vis.plot_dispersion(dataframe=df_merged, x='run_num', hue=None)
# +
# plot peak velocity
# vis.plot_peak_velocity(dataframe=df_merged, x='run_num', hue=None)
# +
# heatmap
# one subj, one run
for run in range(14):
tmp = df_eye[(df_eye['subj']=='sIU') & (df_eye['run_num']==run+1) & (df_eye['type']=="fixations")]
vis.plot_gaze_positions(dataframe=tmp)
plt.title(f'run{run+1}')
# -
df = df_merged.groupby(['subj', 'sess', 'run_num','type', 'actors', 'label', 'condition_name']
).agg({'rt': 'mean',
'corr_resp': 'mean',
'dispersion': 'mean',
'amplitude': 'mean',
'peak_velocity': 'mean',
'duration': 'mean'}
).reset_index()
# +
# model data
from sklearn.linear_model import LinearRegression  # import was missing

model = LinearRegression()
model.fit(df[['corr_resp']], df[['rt']])
predictions = model.predict(df[['corr_resp']])
# -
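# A quick way to sanity-check a fit like the one above is its R². A toy sketch with made-up numbers, using a plain least-squares line (np.polyfit) in place of LinearRegression:

```python
import numpy as np

# Hypothetical toy data: mean accuracy (corr_resp) vs mean reaction time (rt).
acc = np.array([0.2, 0.5, 0.8, 1.0])
rt = 2.0 - acc                                # faster responses at higher accuracy (made up)
slope, intercept = np.polyfit(acc, rt, 1)     # least-squares line
pred = slope * acc + intercept
r2 = 1 - np.sum((rt - pred) ** 2) / np.sum((rt - rt.mean()) ** 2)
```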
| notebooks/1.0-mk-action-prediction.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ***Data collection code for acceptable (i.e., better) ads***
import requests
import urllib.request
import time
from bs4 import BeautifulSoup as bs
import csv
import pandas as pd
# ### 101 Entries of Acceptable Ads
#
# *Because precision is integral for classification and there is no reliable method for automatically distinguishing ad formats, I manually crawled each acceptable ad using Selenium.*
# ***Open Sample Sites List***
with open("/Users/SeoyeonHong/Desktop/annoying_ad_classifier/data_collection/sites_list/acceptable_ad_list.txt") as sample:
urls = []
for line in sample:
urls.append(line.rstrip())
len(urls)
# ***Manual collection of HTML snippets corresponding to each ad***
def open_url(url):
    """Open URL, wait for it to load, then scroll past the fold so lazy-loaded ads render"""
    driver.get(url)
    time.sleep(10)
    driver.execute_script("window.scrollTo(0, 150)")
    time.sleep(10)
data = []
from selenium import webdriver
#driver.close()
driver = webdriver.Firefox()
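# The 100+ cells below all repeat the same four lines with a different selector. A small helper (a sketch only — the cells below do not use it) would collapse each cell to one call; "css selector" is the locator-strategy string that Selenium 4's `By.CSS_SELECTOR` resolves to, replacing the deprecated `find_element_by_css_selector` used in this notebook.

```python
def collect_ad(driver, data, url, selector, ad_type="better", opener=None):
    """Fetch one ad's outerHTML via a CSS selector and record it.

    `opener`, if given, is a callable like open_url that navigates first.
    Uses the Selenium 4 find_element(by, value) signature, where the
    "css selector" strategy string equals By.CSS_SELECTOR.
    """
    if opener:
        opener(url)
    element = driver.find_element("css selector", selector)
    source = element.get_attribute("outerHTML")
    print(source)
    data.append({"TYPE": ad_type, "ad_code": source, "URL": url})
    return source
```

# Each cell below would then reduce to e.g. `collect_ad(driver, data, urls[0], '#secondary', opener=open_url)`.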
# url 1
i = urls[0]
open_url(i)
element = driver.find_element_by_css_selector('#secondary')
source = element.get_attribute("outerHTML")
print(source)
ad_type = "better"
data.append({"TYPE" : ad_type, "ad_code" : source, "URL" : i})
# url 2
i = urls[1]
open_url(i)
element = driver.find_element_by_css_selector('#ad-standard-wrap')
source = element.get_attribute("outerHTML")
print(source)
ad_type = "better"
data.append({"TYPE" : ad_type, "ad_code" : source, "URL" : i})
# url 3
i = urls[2]
open_url(i)
element = driver.find_element_by_css_selector('#ad_contentslot_1')
source = element.get_attribute("outerHTML")
print(source)
ad_type = "better"
data.append({"TYPE" : ad_type, "ad_code" : source, "URL" : i})
# url 4
i = urls[3]
open_url(i)
element = driver.find_element_by_css_selector('.google-auto-placed')
source = element.get_attribute("outerHTML")
print(source)
ad_type = "better"
data.append({"TYPE" : ad_type, "ad_code" : source, "URL" : i})
# url 5
i = urls[4]
open_url(i)
element = driver.find_element_by_css_selector('#mainLeaderboard')
source = element.get_attribute("outerHTML")
print(source)
ad_type = "better"
data.append({"TYPE" : ad_type, "ad_code" : source, "URL" : i})
# url 6
i = urls[5]
open_url(i)
element = driver.find_element_by_css_selector('div.sidesection:nth-child(1)')
source = element.get_attribute("outerHTML")
print(source)
ad_type = "better"
data.append({"TYPE" : ad_type, "ad_code" : source, "URL" : i})
# url 7
i = urls[6]
open_url(i)
element = driver.find_element_by_css_selector('aside.l-region')
source = element.get_attribute("outerHTML")
print(source)
ad_type = "better"
data.append({"TYPE" : ad_type, "ad_code" : source, "URL" : i})
# url 8
i = urls[7]
open_url(i)
element = driver.find_element_by_css_selector('div.horizontal:nth-child(1)')
source = element.get_attribute("outerHTML")
print(source)
ad_type = "better"
data.append({"TYPE" : ad_type, "ad_code" : source, "URL" : i})
# url 9
i = urls[8]
open_url(i)
element = driver.find_element_by_css_selector('.vertical')
source = element.get_attribute("outerHTML")
print(source)
ad_type = "better"
data.append({"TYPE" : ad_type, "ad_code" : source, "URL" : i})
# url 10
i = urls[9]
open_url(i)
element = driver.find_element_by_css_selector('.extension')
source = element.get_attribute("outerHTML")
print(source)
ad_type = "better"
data.append({"TYPE" : ad_type, "ad_code" : source, "URL" : i})
# url 11
i = urls[10]
#open_url(i)
element = driver.find_element_by_css_selector('aside.panel-rt')
source = element.get_attribute("outerHTML")
print(source)
ad_type = "better"
data.append({"TYPE" : ad_type, "ad_code" : source, "URL" : i})
# url 12
i = urls[11]
#open_url(i)
element = driver.find_element_by_css_selector('#leaderboard_top_ad')
source = element.get_attribute("outerHTML")
print(source)
ad_type = "better"
data.append({"TYPE" : ad_type, "ad_code" : source, "URL" : i})
# url 13
i = urls[12]
#open_url(i)
element = driver.find_element_by_css_selector('.fixed-ads-r')
source = element.get_attribute("outerHTML")
print(source)
ad_type = "better"
data.append({"TYPE" : ad_type, "ad_code" : source, "URL" : i})
# url 14
i = urls[13]
#open_url(i)
element = driver.find_element_by_css_selector('#xz1')
source = element.get_attribute("outerHTML")
print(source)
ad_type = "better"
data.append({"TYPE" : ad_type, "ad_code" : source, "URL" : i})
# url 15
i = urls[14]
#open_url(i)
element = driver.find_element_by_css_selector('#xz5')
source = element.get_attribute("outerHTML")
print(source)
ad_type = "better"
data.append({"TYPE" : ad_type, "ad_code" : source, "URL" : i})
# url 16
i = urls[15]
#open_url(i)
element = driver.find_element_by_css_selector('.favorite-cards')
source = element.get_attribute("outerHTML")
print(source)
ad_type = "better"
data.append({"TYPE" : ad_type, "ad_code" : source, "URL" : i})
# url 17
i = urls[16]
#open_url(i)
element = driver.find_element_by_css_selector('.card-feature')
source = element.get_attribute("outerHTML")
print(source)
ad_type = "better"
data.append({"TYPE" : ad_type, "ad_code" : source, "URL" : i})
# url 18
i = urls[17]
#open_url(i)
element = driver.find_element_by_css_selector('#div-gpt-ad-desktop_leaderboard_variable')
source = element.get_attribute("outerHTML")
print(source)
ad_type = "better"
data.append({"TYPE" : ad_type, "ad_code" : source, "URL" : i})
# url 19
i = urls[18]
#open_url(i)
element = driver.find_element_by_css_selector('div.l-sidebar-fixed:nth-child(3) > div:nth-child(2)')
source = element.get_attribute("outerHTML")
print(source)
ad_type = "better"
data.append({"TYPE" : ad_type, "ad_code" : source, "URL" : i})
# url 20
i = urls[19]
#open_url(i)
element = driver.find_element_by_css_selector('.m-ad__btf_leaderboard_variable')
source = element.get_attribute("outerHTML")
print(source)
ad_type = "better"
data.append({"TYPE" : ad_type, "ad_code" : source, "URL" : i})
# url 21
i = urls[20]
#open_url(i)
element = driver.find_element_by_css_selector('#bbccom_leaderboard')
source = element.get_attribute("outerHTML")
print(source)
ad_type = "better"
data.append({"TYPE" : ad_type, "ad_code" : source, "URL" : i})
# url 22
i = urls[21]
#open_url(i)
element = driver.find_element_by_css_selector('#bbccom_mpu_1_2_3_4_5')
source = element.get_attribute("outerHTML")
print(source)
ad_type = "better"
data.append({"TYPE" : ad_type, "ad_code" : source, "URL" : i})
# url 23
i = urls[22]
#open_url(i)
element = driver.find_element_by_css_selector('#cookie-notice')
source = element.get_attribute("outerHTML")
print(source)
ad_type = "better"
data.append({"TYPE" : ad_type, "ad_code" : source, "URL" : i})
# url 24
i = urls[23]
#open_url(i)
element = driver.find_element_by_css_selector('.site-content > div:nth-child(3)')
source = element.get_attribute("outerHTML")
print(source)
ad_type = "better"
data.append({"TYPE" : ad_type, "ad_code" : source, "URL" : i})
# url 25
i = urls[24]
#open_url(i)
element = driver.find_element_by_css_selector('.transporter-ad')
source = element.get_attribute("outerHTML")
print(source)
ad_type = "better"
data.append({"TYPE" : ad_type, "ad_code" : source, "URL" : i})
# url 26
i = urls[25]
#open_url(i)
element = driver.find_element_by_css_selector('#banner1_homepage_8f79b6d7-00e3-44f7-a5b3-77938c782b6a')
source = element.get_attribute("outerHTML")
print(source)
ad_type = "better"
data.append({"TYPE" : ad_type, "ad_code" : source, "URL" : i})
# url 27
i = urls[26]
#open_url(i)
element = driver.find_element_by_css_selector('.asidead')
source = element.get_attribute("outerHTML")
print(source)
ad_type = "better"
data.append({"TYPE" : ad_type, "ad_code" : source, "URL" : i})
# url 28
i = urls[27]
#open_url(i)
element = driver.find_element_by_css_selector('div.rr_container:nth-child(6)')
source = element.get_attribute("outerHTML")
print(source)
ad_type = "better"
data.append({"TYPE" : ad_type, "ad_code" : source, "URL" : i})
# url 29
i = urls[28]
#open_url(i)
element = driver.find_element_by_css_selector('div.dfp-ad:nth-child(12)')
source = element.get_attribute("outerHTML")
print(source)
ad_type = "better"
data.append({"TYPE" : ad_type, "ad_code" : source, "URL" : i})
# url 30
i = urls[29]
#open_url(i)
element = driver.find_element_by_css_selector('.top-header')
source = element.get_attribute("outerHTML")
print(source)
ad_type = "better"
data.append({"TYPE" : ad_type, "ad_code" : source, "URL" : i})
# url 31
i = urls[30]
#open_url(i)
element = driver.find_element_by_css_selector('#html_javascript_adder-5')
source = element.get_attribute("outerHTML")
print(source)
ad_type = "better"
data.append({"TYPE" : ad_type, "ad_code" : source, "URL" : i})
# url 32
i = urls[31]
#open_url(i)
element = driver.find_element_by_css_selector('.ad')
source = element.get_attribute("outerHTML")
print(source)
ad_type = "better"
data.append({"TYPE" : ad_type, "ad_code" : source, "URL" : i})
# url 33
i = urls[32]
#open_url(i)
element = driver.find_element_by_css_selector('div.dfsmall:nth-child(44) > div:nth-child(2)')
source = element.get_attribute("outerHTML")
print(source)
ad_type = "better"
data.append({"TYPE" : ad_type, "ad_code" : source, "URL" : i})
# url 34
i = urls[33]
#open_url(i)
element = driver.find_element_by_css_selector('#sky')
source = element.get_attribute("outerHTML")
print(source)
ad_type = "better"
data.append({"TYPE" : ad_type, "ad_code" : source, "URL" : i})
# url 35
i = urls[34]
#open_url(i)
element = driver.find_element_by_css_selector('.base-page-wrapper')
source = element.get_attribute("outerHTML")
print(source)
ad_type = "better"
data.append({"TYPE" : ad_type, "ad_code" : source, "URL" : i})
# url 36
i = urls[35]
#open_url(i)
element = driver.find_element_by_css_selector('.css-1ruifza')
source = element.get_attribute("outerHTML")
print(source)
ad_type = "better"
data.append({"TYPE" : ad_type, "ad_code" : source, "URL" : i})
# url 37
i = urls[36]
#open_url(i)
element = driver.find_element_by_css_selector('#sej-page-wrapper')
source = element.get_attribute("outerHTML")
print(source)
ad_type = "better"
data.append({"TYPE" : ad_type, "ad_code" : source, "URL" : i})
# url 38
i = urls[37]
#open_url(i)
element = driver.find_element_by_css_selector('.link-sect')
source = element.get_attribute("outerHTML")
print(source)
ad_type = "better"
data.append({"TYPE" : ad_type, "ad_code" : source, "URL" : i})
# url 39
i = urls[38]
#open_url(i)
element = driver.find_element_by_css_selector('#SEJ_300x250_Sidebar0-parent')
source = element.get_attribute("outerHTML")
print(source)
ad_type = "better"
data.append({"TYPE" : ad_type, "ad_code" : source, "URL" : i})
# url 40
i = urls[39]
#open_url(i)
element = driver.find_element_by_css_selector('aside.sej-widget:nth-child(2)')
source = element.get_attribute("outerHTML")
print(source)
ad_type = "better"
data.append({"TYPE" : ad_type, "ad_code" : source, "URL" : i})
# url 41
i = urls[40]
#open_url(i)
element = driver.find_element_by_css_selector('div.spy__buy-now-wrapper:nth-child(54)')
source = element.get_attribute("outerHTML")
print(source)
ad_type = "better"
data.append({"TYPE" : ad_type, "ad_code" : source, "URL" : i})
# url 42
i = urls[41]
#open_url(i)
element = driver.find_element_by_css_selector('#secondary')
source = element.get_attribute("outerHTML")
print(source)
ad_type = "better"
data.append({"TYPE" : ad_type, "ad_code" : source, "URL" : i})
# url 43
i = urls[42]
#open_url(i)
element = driver.find_element_by_css_selector('#contentAccess > aside:nth-child(10)')
source = element.get_attribute("outerHTML")
print(source)
ad_type = "better"
data.append({"TYPE" : ad_type, "ad_code" : source, "URL" : i})
# url 44
i = urls[43]
#open_url(i)
element = driver.find_element_by_css_selector('div.cubeAd:nth-child(1)')
source = element.get_attribute("outerHTML")
print(source)
ad_type = "better"
data.append({"TYPE" : ad_type, "ad_code" : source, "URL" : i})
# url 45
i = urls[44]
#open_url(i)
element = driver.find_element_by_css_selector('div.ad:nth-child(3)')
source = element.get_attribute("outerHTML")
print(source)
ad_type = "better"
data.append({"TYPE" : ad_type, "ad_code" : source, "URL" : i})
# url 46
i = urls[45]
#open_url(i)
element = driver.find_element_by_css_selector('#hs-eu-cookie-confirmation')
source = element.get_attribute("outerHTML")
print(source)
ad_type = "better"
data.append({"TYPE" : ad_type, "ad_code" : source, "URL" : i})
# url 47
i = urls[46]
#open_url(i)
element = driver.find_element_by_css_selector('.Page-above')
source = element.get_attribute("outerHTML")
print(source)
ad_type = "better"
data.append({"TYPE" : ad_type, "ad_code" : source, "URL" : i})
# url 48
i = urls[47]
#open_url(i)
element = driver.find_element_by_css_selector('.NewsletterModule')
source = element.get_attribute("outerHTML")
print(source)
ad_type = "better"
data.append({"TYPE" : ad_type, "ad_code" : source, "URL" : i})
# url 49
i = urls[48]
#open_url(i)
element = driver.find_element_by_css_selector('.ArticlePage-aside-content')
source = element.get_attribute("outerHTML")
print(source)
ad_type = "better"
data.append({"TYPE" : ad_type, "ad_code" : source, "URL" : i})
# url 50
i = urls[49]
#open_url(i)
element = driver.find_element_by_css_selector('.footer-ad')
source = element.get_attribute("outerHTML")
print(source)
ad_type = "better"
data.append({"TYPE" : ad_type, "ad_code" : source, "URL" : i})
# url 51
i = urls[50]
#open_url(i)
element = driver.find_element_by_css_selector('.footer-bck')
source = element.get_attribute("outerHTML")
print(source)
ad_type = "better"
data.append({"TYPE" : ad_type, "ad_code" : source, "URL" : i})
# url 52
i = urls[51]
#open_url(i)
element = driver.find_element_by_css_selector('div.abl:nth-child(5)')
source = element.get_attribute("outerHTML")
print(source)
ad_type = "better"
data.append({"TYPE" : ad_type, "ad_code" : source, "URL" : i})
# url 53
i = urls[52]
#open_url(i)
element = driver.find_element_by_css_selector('#definition-right-rail')
source = element.get_attribute("outerHTML")
print(source)
ad_type = "better"
data.append({"TYPE" : ad_type, "ad_code" : source, "URL" : i})
# url 54
i = urls[53]
#open_url(i)
element = driver.find_element_by_css_selector('#subscribe-unabridged')
source = element.get_attribute("outerHTML")
print(source)
ad_type = "better"
data.append({"TYPE" : ad_type, "ad_code" : source, "URL" : i})
# url 55
i = urls[54]
#open_url(i)
element = driver.find_element_by_css_selector('.global-footer')
source = element.get_attribute("outerHTML")
print(source)
ad_type = "better"
data.append({"TYPE" : ad_type, "ad_code" : source, "URL" : i})
# url 56
i = urls[55]
#open_url(i)
element = driver.find_element_by_css_selector('#div-gpt-square-flex-1')
source = element.get_attribute("outerHTML")
print(source)
ad_type = "better"
data.append({"TYPE" : ad_type, "ad_code" : source, "URL" : i})
# url 57
i = urls[56]
#open_url(i)
element = driver.find_element_by_css_selector('#footermagmore')
source = element.get_attribute("outerHTML")
print(source)
ad_type = "better"
data.append({"TYPE" : ad_type, "ad_code" : source, "URL" : i})
# url 58
i = urls[57]
#open_url(i)
element = driver.find_element_by_css_selector('.m-adaptive-ad-component')
source = element.get_attribute("outerHTML")
print(source)
ad_type = "better"
data.append({"TYPE" : ad_type, "ad_code" : source, "URL" : i})
# url 59
i = urls[58]
#open_url(i)
element = driver.find_element_by_css_selector('div.mtm-ad-col:nth-child(2) > div:nth-child(2)')
source = element.get_attribute("outerHTML")
print(source)
ad_type = "better"
data.append({"TYPE" : ad_type, "ad_code" : source, "URL" : i})
# url 60
i = urls[59]
#open_url(i)
element = driver.find_element_by_css_selector('#adslot5')
source = element.get_attribute("outerHTML")
print(source)
ad_type = "better"
data.append({"TYPE" : ad_type, "ad_code" : source, "URL" : i})
# url 61
i = urls[60]
#open_url(i)
element = driver.find_element_by_css_selector('#gform_wrapper_249')
source = element.get_attribute("outerHTML")
print(source)
ad_type = "better"
data.append({"TYPE" : ad_type, "ad_code" : source, "URL" : i})
# url 62
i = urls[61]
#open_url(i)
element = driver.find_element_by_css_selector('#pos_1_atf_728x90-wrap')
source = element.get_attribute("outerHTML")
print(source)
ad_type = "better"
data.append({"TYPE" : ad_type, "ad_code" : source, "URL" : i})
# url 63
i = urls[62]
#open_url(i)
element = driver.find_element_by_css_selector('div.main-container:nth-child(3)')
source = element.get_attribute("outerHTML")
print(source)
ad_type = "better"
data.append({"TYPE" : ad_type, "ad_code" : source, "URL" : i})
# url 64
i = urls[63]
#open_url(i)
element = driver.find_element_by_css_selector('div.global-nav-subscribe')
source = element.get_attribute("outerHTML")
print(source)
ad_type = "better"
data.append({"TYPE" : ad_type, "ad_code" : source, "URL" : i})
# url 65
i = urls[64]
#open_url(i)
element = driver.find_element_by_css_selector('#google_ads_iframe_\/4312434\/consumer\/webmd_1__container__')
source = element.get_attribute("outerHTML")
print(source)
ad_type = "better"
data.append({"TYPE" : ad_type, "ad_code" : source, "URL" : i})
# url 66
i = urls[65]
#open_url(i)
element = driver.find_element_by_css_selector('div.main-container:nth-child(3)')
source = element.get_attribute("outerHTML")
print(source)
ad_type = "better"
data.append({"TYPE" : ad_type, "ad_code" : source, "URL" : i})
# url 67
i = urls[66]
#open_url(i)
element = driver.find_element_by_css_selector('div.sal--wrapper:nth-child(2)')
source = element.get_attribute("outerHTML")
print(source)
ad_type = "better"
data.append({"TYPE" : ad_type, "ad_code" : source, "URL" : i})
# url 68
i = urls[67]
#open_url(i)
element = driver.find_element_by_css_selector('.ad-header')
source = element.get_attribute("outerHTML")
print(source)
ad_type = "better"
data.append({"TYPE" : ad_type, "ad_code" : source, "URL" : i})
# url 69
i = urls[68]
#open_url(i)
element = driver.find_element_by_css_selector('.AsidePromo')
source = element.get_attribute("outerHTML")
print(source)
ad_type = "better"
data.append({"TYPE" : ad_type, "ad_code" : source, "URL" : i})
# url 70
i = urls[69]
#open_url(i)
element = driver.find_element_by_css_selector('.sidebar-content')
source = element.get_attribute("outerHTML")
print(source)
ad_type = "better"
data.append({"TYPE" : ad_type, "ad_code" : source, "URL" : i})
# url 71
i = urls[70]
#open_url(i)
element = driver.find_element_by_css_selector('#web-push-prompt')
source = element.get_attribute("outerHTML")
print(source)
ad_type = "better"
data.append({"TYPE" : ad_type, "ad_code" : source, "URL" : i})
# url 72
i = urls[71]
open_url(i)
element = driver.find_element_by_css_selector('.header-banners')
source = element.get_attribute("outerHTML")
print(source)
ad_type = "better"
data.append({"TYPE" : ad_type, "ad_code" : source, "URL" : i})
# url 73
i = urls[72]
#open_url(i)
element = driver.find_element_by_css_selector('#ad-standard-wrap')
source = element.get_attribute("outerHTML")
print(source)
ad_type = "better"
data.append({"TYPE" : ad_type, "ad_code" : source, "URL" : i})
# url 74
i = urls[73]
#open_url(i)
element = driver.find_element_by_css_selector('div.two-col-main-content:nth-child(3) > div:nth-child(2)')
source = element.get_attribute("outerHTML")
print(source)
ad_type = "better"
data.append({"TYPE" : ad_type, "ad_code" : source, "URL" : i})
# url 75
i = urls[74]
#open_url(i)
element = driver.find_element_by_css_selector('.article-right-rail')
source = element.get_attribute("outerHTML")
print(source)
ad_type = "better"
data.append({"TYPE" : ad_type, "ad_code" : source, "URL" : i})
# url 76
i = urls[75]
#open_url(i)
element = driver.find_element_by_css_selector('#genesis-sidebar-primary')
source = element.get_attribute("outerHTML")
print(source)
ad_type = "better"
data.append({"TYPE" : ad_type, "ad_code" : source, "URL" : i})
# url 77
i = urls[76]
#open_url(i)
element = driver.find_element_by_css_selector('.box-4')
source = element.get_attribute("outerHTML")
print(source)
ad_type = "better"
data.append({"TYPE" : ad_type, "ad_code" : source, "URL" : i})
# url 78
i = urls[77]
#open_url(i)
element = driver.find_element_by_css_selector('.medrectangle-1')
source = element.get_attribute("outerHTML")
print(source)
ad_type = "better"
data.append({"TYPE" : ad_type, "ad_code" : source, "URL" : i})
# url 79
i = urls[78]
#open_url(i)
element = driver.find_element_by_css_selector('.announcement-widget')
source = element.get_attribute("outerHTML")
print(source)
ad_type = "better"
data.append({"TYPE" : ad_type, "ad_code" : source, "URL" : i})
# url 80
i = urls[79]
#open_url(i)
element = driver.find_element_by_css_selector('.sidebar')
source = element.get_attribute("outerHTML")
print(source)
ad_type = "better"
data.append({"TYPE" : ad_type, "ad_code" : source, "URL" : i})
# url 81
i = urls[80]
#open_url(i)
element = driver.find_element_by_css_selector('#div-gpt-ad-1391092477373-2')
source = element.get_attribute("outerHTML")
print(source)
ad_type = "better"
data.append({"TYPE" : ad_type, "ad_code" : source, "URL" : i})
# url 82
i = urls[81]
#open_url(i)
element = driver.find_element_by_css_selector('.quinstreet-widget > div:nth-child(3)')
source = element.get_attribute("outerHTML")
print(source)
ad_type = "better"
data.append({"TYPE" : ad_type, "ad_code" : source, "URL" : i})
# url 83
i = urls[82]
#open_url(i)
element = driver.find_element_by_css_selector('div.aside-block:nth-child(4)')
source = element.get_attribute("outerHTML")
print(source)
ad_type = "better"
data.append({"TYPE" : ad_type, "ad_code" : source, "URL" : i})
# url 84
i = urls[83]
#open_url(i)
element = driver.find_element_by_css_selector('.sidebar-item')
source = element.get_attribute("outerHTML")
print(source)
ad_type = "better"
data.append({"TYPE" : ad_type, "ad_code" : source, "URL" : i})
# url 85
i = urls[84]
#open_url(i)
element = driver.find_element_by_css_selector('.above-footer')
source = element.get_attribute("outerHTML")
print(source)
ad_type = "better"
data.append({"TYPE" : ad_type, "ad_code" : source, "URL" : i})
# url 86
i = urls[85]
#open_url(i)
element = driver.find_element_by_css_selector('#ad-728x90_LL_td_3')
source = element.get_attribute("outerHTML")
print(source)
ad_type = "better"
data.append({"TYPE" : ad_type, "ad_code" : source, "URL" : i})
ad_type = "better"
# url 87
i = urls[86]
#open_url(i)
element = driver.find_element_by_css_selector('.ad_area')
source = element.get_attribute("outerHTML")
print(source)
data.append({"TYPE" : ad_type, "ad_code" : source, "URL" : i})
# url 88
i = urls[87]
#open_url(i)
element = driver.find_element_by_css_selector('.module-article-newsletter')
source = element.get_attribute("outerHTML")
print(source)
data.append({"TYPE" : ad_type, "ad_code" : source, "URL" : i})
# url 89
i = urls[88]
#open_url(i)
element = driver.find_element_by_css_selector('.mayoad')
source = element.get_attribute("outerHTML")
print(source)
data.append({"TYPE" : ad_type, "ad_code" : source, "URL" : i})
# url 90
i = urls[89]
#open_url(i)
element = driver.find_element_by_css_selector('#mpu-plus-top')
source = element.get_attribute("outerHTML")
print(source)
data.append({"TYPE" : ad_type, "ad_code" : source, "URL" : i})
# url 91
i = urls[90]
#open_url(i)
element = driver.find_element_by_css_selector('.symposia-brand-footer')
source = element.get_attribute("outerHTML")
print(source)
data.append({"TYPE" : ad_type, "ad_code" : source, "URL" : i})
# url 92
i = urls[91]
#open_url(i)
element = driver.find_element_by_css_selector('.elementor-element-7ccca8b')
source = element.get_attribute("outerHTML")
print(source)
data.append({"TYPE" : ad_type, "ad_code" : source, "URL" : i})
# url 93
i = urls[92]
#open_url(i)
element = driver.find_element_by_css_selector('.elementor-element-5adb56b')
source = element.get_attribute("outerHTML")
print(source)
data.append({"TYPE" : ad_type, "ad_code" : source, "URL" : i})
# url 94
i = urls[93]
#open_url(i)
element = driver.find_element_by_css_selector('.elementor-element-84c3a1b > div:nth-child(1) > div:nth-child(1) > p:nth-child(1)')
source = element.get_attribute("outerHTML")
print(source)
data.append({"TYPE" : ad_type, "ad_code" : source, "URL" : i})
# url 95
i = urls[94]
#open_url(i)
element = driver.find_element_by_css_selector('#js-gdpr-consent-banner')
source = element.get_attribute("outerHTML")
print(source)
data.append({"TYPE" : ad_type, "ad_code" : source, "URL" : i})
# url 96
i = urls[95]
#open_url(i)
element = driver.find_element_by_css_selector('.inline-container > div:nth-child(1)')
source = element.get_attribute("outerHTML")
print(source)
data.append({"TYPE" : ad_type, "ad_code" : source, "URL" : i})
# url 97
i = urls[96]
#open_url(i)
element = driver.find_element_by_css_selector('article.article > div:nth-child(2) > div:nth-child(1) > div:nth-child(3)')
source = element.get_attribute("outerHTML")
print(source)
data.append({"TYPE" : ad_type, "ad_code" : source, "URL" : i})
# url 98
i = urls[97]
#open_url(i)
element = driver.find_element_by_css_selector('.bh_ad_container')
source = element.get_attribute("outerHTML")
print(source)
data.append({"TYPE" : ad_type, "ad_code" : source, "URL" : i})
# url 99
i = urls[98]
#open_url(i)
element = driver.find_element_by_css_selector('#billboard1-dynamic_1-0')
source = element.get_attribute("outerHTML")
print(source)
data.append({"TYPE" : ad_type, "ad_code" : source, "URL" : i})
# url 100
i = urls[99]
#open_url(i)
element = driver.find_element_by_css_selector('#leaderboard-post-content_1-0')
source = element.get_attribute("outerHTML")
print(source)
data.append({"TYPE" : ad_type, "ad_code" : source, "URL" : i})
# url 101
i = urls[100]
#open_url(i)
element = driver.find_element_by_css_selector('.photo-info-boxes')
source = element.get_attribute("outerHTML")
print(source)
data.append({"TYPE" : ad_type, "ad_code" : source, "URL" : i})
# url 102
i = urls[101]
#open_url(i)
element = driver.find_element_by_css_selector('.slot-topOfSidebar')
source = element.get_attribute("outerHTML")
print(source)
data.append({"TYPE" : ad_type, "ad_code" : source, "URL" : i})
# url 103
i = urls[102]
#open_url(i)
element = driver.find_element_by_css_selector('section.box')
source = element.get_attribute("outerHTML")
print(source)
data.append({"TYPE" : ad_type, "ad_code" : source, "URL" : i})
# url 104
i = urls[103]
#open_url(i)
element = driver.find_element_by_css_selector('.slot-single-height-0-598')
source = element.get_attribute("outerHTML")
print(source)
data.append({"TYPE" : ad_type, "ad_code" : source, "URL" : i})
# url 105
i = urls[104]
#open_url(i)
element = driver.find_element_by_css_selector('#mc-embedded-subscribe-form')
source = element.get_attribute("outerHTML")
print(source)
data.append({"TYPE" : ad_type, "ad_code" : source, "URL" : i})
# url 106
i = urls[105]
#open_url(i)
element = driver.find_element_by_css_selector('div.horizontal:nth-child(1)')
source = element.get_attribute("outerHTML")
print(source)
data.append({"TYPE" : ad_type, "ad_code" : source, "URL" : i})
# url 107
i = urls[106]
#open_url(i)
element = driver.find_element_by_css_selector('div.container-gas:nth-child(2)')
source = element.get_attribute("outerHTML")
print(source)
data.append({"TYPE" : ad_type, "ad_code" : source, "URL" : i})
# url 108
i = urls[107]
#open_url(i)
element = driver.find_element_by_css_selector('.footer-newsletter-panel')
source = element.get_attribute("outerHTML")
print(source)
data.append({"TYPE" : ad_type, "ad_code" : source, "URL" : i})
# url 109
i = urls[108]
#open_url(i)
element = driver.find_element_by_css_selector('.layout-2x2 > div:nth-child(1) > div:nth-child(2) > section:nth-child(1)')
source = element.get_attribute("outerHTML")
print(source)
data.append({"TYPE" : ad_type, "ad_code" : source, "URL" : i})
# url 110
i = urls[109]
#open_url(i)
element = driver.find_element_by_css_selector('#adm-widget-dsk-tab-ros-rail-bottom')
source = element.get_attribute("outerHTML")
print(source)
data.append({"TYPE" : ad_type, "ad_code" : source, "URL" : i})
# url 111
i = urls[110]
#open_url(i)
element = driver.find_element_by_css_selector('.signup-newsletter')
source = element.get_attribute("outerHTML")
print(source)
data.append({"TYPE" : ad_type, "ad_code" : source, "URL" : i})
# url 112
i = urls[111]
#open_url(i)
element = driver.find_element_by_css_selector('div.feed-block-container:nth-child(2) > div:nth-child(1) > div:nth-child(1) > div:nth-child(1)')
source = element.get_attribute("outerHTML")
print(source)
data.append({"TYPE" : ad_type, "ad_code" : source, "URL" : i})
# url 113
i = urls[112]
#open_url(i)
element = driver.find_element_by_css_selector('div.feed-block-container:nth-child(6) > div:nth-child(1) > div:nth-child(1) > div:nth-child(1)')
source = element.get_attribute("outerHTML")
print(source)
data.append({"TYPE" : ad_type, "ad_code" : source, "URL" : i})
# url 114
i = urls[113]
#open_url(i)
element = driver.find_element_by_css_selector('.breaker-ad')
source = element.get_attribute("outerHTML")
print(source)
data.append({"TYPE" : ad_type, "ad_code" : source, "URL" : i})
# url 115
i = urls[114]
#open_url(i)
element = driver.find_element_by_css_selector('div.feed-block-container:nth-child(2) > div:nth-child(1) > div:nth-child(1) > div:nth-child(1)')
source = element.get_attribute("outerHTML")
print(source)
data.append({"TYPE" : ad_type, "ad_code" : source, "URL" : i})
# url 116
i = urls[115]
#open_url(i)
element = driver.find_element_by_css_selector('#HomepageAd1')
source = element.get_attribute("outerHTML")
print(source)
data.append({"TYPE" : ad_type, "ad_code" : source, "URL" : i})
# url 117
i = urls[116]
#open_url(i)
element = driver.find_element_by_css_selector('#ad-cid-PXrkIurs')
source = element.get_attribute("outerHTML")
print(source)
data.append({"TYPE" : ad_type, "ad_code" : source, "URL" : i})
# url 118
i = urls[117]
#open_url(i)
element = driver.find_element_by_css_selector('.mpu-container')
source = element.get_attribute("outerHTML")
print(source)
data.append({"TYPE" : ad_type, "ad_code" : source, "URL" : i})
# url 119
i = urls[118]
#open_url(i)
element = driver.find_element_by_css_selector('div.footer-section:nth-child(1)')
source = element.get_attribute("outerHTML")
print(source)
data.append({"TYPE" : ad_type, "ad_code" : source, "URL" : i})
# url 120
i = urls[119]
#open_url(i)
element = driver.find_element_by_css_selector('#div-gpt-ad-tower')
source = element.get_attribute("outerHTML")
print(source)
data.append({"TYPE" : ad_type, "ad_code" : source, "URL" : i})
# url 121
i = urls[120]
#open_url(i)
element = driver.find_element_by_css_selector('.region-pre-footer')
source = element.get_attribute("outerHTML")
print(source)
data.append({"TYPE" : ad_type, "ad_code" : source, "URL" : i})
# url 122
i = urls[121]
#open_url(i)
element = driver.find_element_by_css_selector('.rightside')
source = element.get_attribute("outerHTML")
print(source)
data.append({"TYPE" : ad_type, "ad_code" : source, "URL" : i})
# url 123
i = urls[122]
#open_url(i)
element = driver.find_element_by_css_selector('div.container:nth-child(14)')
source = element.get_attribute("outerHTML")
print(source)
data.append({"TYPE" : ad_type, "ad_code" : source, "URL" : i})
# url 124
i = urls[123]
#open_url(i)
element = driver.find_element_by_css_selector('div.container:nth-child(4)')
source = element.get_attribute("outerHTML")
print(source)
data.append({"TYPE" : ad_type, "ad_code" : source, "URL" : i})
# url 125
i = urls[124]
#open_url(i)
element = driver.find_element_by_css_selector('#Homepage_BTF_970x250_Wrapper')
source = element.get_attribute("outerHTML")
print(source)
data.append({"TYPE" : ad_type, "ad_code" : source, "URL" : i})
# url 126
i = urls[125]
#open_url(i)
element = driver.find_element_by_css_selector('section.house-promo:nth-child(4)')
source = element.get_attribute("outerHTML")
print(source)
data.append({"TYPE" : ad_type, "ad_code" : source, "URL" : i})
# url 127
i = urls[126]
#open_url(i)
element = driver.find_element_by_css_selector('#formulario-newsletter-home')
source = element.get_attribute("outerHTML")
print(source)
data.append({"TYPE" : ad_type, "ad_code" : source, "URL" : i})
# url 128
i = urls[127]
#open_url(i)
element = driver.find_element_by_css_selector('aside.col-md-4:nth-child(2)')
source = element.get_attribute("outerHTML")
print(source)
data.append({"TYPE" : ad_type, "ad_code" : source, "URL" : i})
# url 129
i = urls[128]
#open_url(i)
element = driver.find_element_by_css_selector('#div-hola-slot-bannerinferior')
source = element.get_attribute("outerHTML")
print(source)
data.append({"TYPE" : ad_type, "ad_code" : source, "URL" : i})
# url 130
i = urls[129]
#open_url(i)
element = driver.find_element_by_css_selector('.home__poster-sidebar')
source = element.get_attribute("outerHTML")
print(source)
data.append({"TYPE" : ad_type, "ad_code" : source, "URL" : i})
# url 131
i = urls[130]
#open_url(i)
element = driver.find_element_by_css_selector('#newsletter')
source = element.get_attribute("outerHTML")
print(source)
data.append({"TYPE" : ad_type, "ad_code" : source, "URL" : i})
# url 132
i = urls[131]
#open_url(i)
element = driver.find_element_by_css_selector('.sectionsRundown__adWrapper--1WdeK')
source = element.get_attribute("outerHTML")
print(source)
data.append({"TYPE" : ad_type, "ad_code" : source, "URL" : i})
# url 133
i = urls[132]
#open_url(i)
element = driver.find_element_by_css_selector('div.container:nth-child(3) > div:nth-child(1) > div:nth-child(1)')
source = element.get_attribute("outerHTML")
print(source)
data.append({"TYPE" : ad_type, "ad_code" : source, "URL" : i})
# url 134
i = urls[133]
#open_url(i)
element = driver.find_element_by_css_selector('.NET-INSIGHT')
source = element.get_attribute("outerHTML")
print(source)
data.append({"TYPE" : ad_type, "ad_code" : source, "URL" : i})
# url 135
i = urls[134]
#open_url(i)
element = driver.find_element_by_css_selector('div.special--fixed:nth-child(2)')
source = element.get_attribute("outerHTML")
print(source)
data.append({"TYPE" : ad_type, "ad_code" : source, "URL" : i})
# url 136
i = urls[135]
#open_url(i)
element = driver.find_element_by_css_selector('div.fl-row:nth-child(10)')
source = element.get_attribute("outerHTML")
print(source)
data.append({"TYPE" : ad_type, "ad_code" : source, "URL" : i})
# url 137
i = urls[136]
#open_url(i)
element = driver.find_element_by_css_selector('div.fl-row:nth-child(14)')
source = element.get_attribute("outerHTML")
print(source)
data.append({"TYPE" : ad_type, "ad_code" : source, "URL" : i})
# url 138
i = urls[137]
#open_url(i)
element = driver.find_element_by_css_selector('.ad-leaderboard-container')
source = element.get_attribute("outerHTML")
print(source)
data.append({"TYPE" : ad_type, "ad_code" : source, "URL" : i})
# url 139
i = urls[138]
#open_url(i)
element = driver.find_element_by_css_selector('#div-gpt-ad-sidebar-sky')
source = element.get_attribute("outerHTML")
print(source)
data.append({"TYPE" : ad_type, "ad_code" : source, "URL" : i})
# url 140
i = urls[139]
#open_url(i)
element = driver.find_element_by_css_selector('.signup-nl-right-rail-top')
source = element.get_attribute("outerHTML")
print(source)
data.append({"TYPE" : ad_type, "ad_code" : source, "URL" : i})
# url 141
i = urls[140]
#open_url(i)
element = driver.find_element_by_css_selector('.hp-right-rail-top-ad')
source = element.get_attribute("outerHTML")
print(source)
data.append({"TYPE" : ad_type, "ad_code" : source, "URL" : i})
# url 142
i = urls[141]
#open_url(i)
element = driver.find_element_by_css_selector('#ATF_300x250_300x600_300x1050')
source = element.get_attribute("outerHTML")
print(source)
data.append({"TYPE" : ad_type, "ad_code" : source, "URL" : i})
# url 143
i = urls[142]
#open_url(i)
element = driver.find_element_by_css_selector('section.ad_right:nth-child(1)')
source = element.get_attribute("outerHTML")
print(source)
data.append({"TYPE" : ad_type, "ad_code" : source, "URL" : i})
# url 144
i = urls[143]
#open_url(i)
element = driver.find_element_by_css_selector('.BarronsTheme--adWrapper--JuoHipNd')
source = element.get_attribute("outerHTML")
print(source)
data.append({"TYPE" : ad_type, "ad_code" : source, "URL" : i})
# url 145
i = urls[144]
#open_url(i)
element = driver.find_element_by_css_selector('#custom_html-1002')
source = element.get_attribute("outerHTML")
print(source)
data.append({"TYPE" : ad_type, "ad_code" : source, "URL" : i})
# url 146
i = urls[145]
#open_url(i)
element = driver.find_element_by_css_selector('.sc-7xvhmc-2 > div:nth-child(1) > div:nth-child(1)')
source = element.get_attribute("outerHTML")
print(source)
data.append({"TYPE" : ad_type, "ad_code" : source, "URL" : i})
# url 147
i = urls[146]
#open_url(i)
element = driver.find_element_by_css_selector('.sc-7xvhmc-7 > div:nth-child(2)')
source = element.get_attribute("outerHTML")
print(source)
data.append({"TYPE" : ad_type, "ad_code" : source, "URL" : i})
# url 148
i = urls[147]
#open_url(i)
element = driver.find_element_by_css_selector('.mid-content-ad')
source = element.get_attribute("outerHTML")
print(source)
data.append({"TYPE" : ad_type, "ad_code" : source, "URL" : i})
# url 149
i = urls[148]
#open_url(i)
element = driver.find_element_by_css_selector('.safety-tips-ad')
source = element.get_attribute("outerHTML")
print(source)
data.append({"TYPE" : ad_type, "ad_code" : source, "URL" : i})
driver.close()
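# The near-identical blocks above differ only in URL index and CSS selector. As a
# sketch (not the notebook's actual code), they could be collapsed into one
# table-driven loop; the selector pairs below are illustrative, and
# `find_outer_html` is a hypothetical stand-in for the Selenium call
# `driver.find_element_by_css_selector(sel).get_attribute("outerHTML")` so the
# bookkeeping can be exercised without a browser.

```python
# Illustrative subset of (url index, selector) pairs from the blocks above
SELECTORS = [
    (110, '.signup-newsletter'),
    (113, '.breaker-ad'),
    (117, '.mpu-container'),
]

def scrape_ads(find_outer_html, urls, selectors, ad_type):
    """Collect one record per (url index, selector) pair.

    find_outer_html(selector) abstracts the Selenium lookup so the
    record-building logic is testable without a live driver."""
    records = []
    for idx, sel in selectors:
        records.append({"TYPE": ad_type,
                        "ad_code": find_outer_html(sel),
                        "URL": urls[idx]})
    return records
```

# With a live driver you would pass
# `lambda sel: driver.find_element_by_css_selector(sel).get_attribute("outerHTML")`.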
import pandas as pd
df1 = pd.DataFrame(data)
print(df1)
df1.to_csv('acceptable_ad_code_raw.csv') # saving data
# ***Data Cleansing***
from bs4 import BeautifulSoup as bs # used by the HTML-parsing helpers below

df = pd.read_csv('acceptable_ad_code_raw.csv')
df = df.drop('Unnamed: 0', axis=1)
code = df.ad_code
def get_divs(code):
"""Extract div tags from HTML for better manipulation"""
soup = bs(code)
divs = soup.select("div")
return divs
df["divs"] = df["ad_code"].map(get_divs)
def strip_html_tags(text):
"""Remove html tags from text"""
soup = bs(text, "html.parser")
stripped_text = soup.get_text(separator=" ")
return stripped_text
import string
import re
# +
def strip_jamo(string):
"""Remove Korean from text"""
jamo_list = re.findall(u'[^\uAC00-\uD7A3]', string)
for c in string:
if c not in jamo_list:
string = string.replace(c, '')
return string
def strip_cjk(string):
"""Remove Chinese, Japanese, Korean from text"""
cjk_list = re.findall(u'[^\u4E00-\u9FFF]', string)
for c in string:
if c not in cjk_list:
string = string.replace(c, '')
return string
def strip_tamazight(string):
"""Strip Tamazight from text"""
tama_list = re.findall(u'[^\u23D0-\u2D7F]', string)
for c in string:
if c not in tama_list:
string = string.replace(c, '')
return string
def strip_hindu(string):
"""Strip Hindu from text"""
hindu_list = re.findall(u'[^\u0900-\u097F]', string)
for c in string:
if c not in hindu_list:
string = string.replace(c, '')
return string
def strip_khmer(string):
"""Strip Khmer from text"""
khmer_list = re.findall(u'[^\u1780-\u17FF]', string)
for c in string:
if c not in khmer_list:
string = string.replace(c, '')
return string
def strip_ethiopic(string):
"""Strip Ethiopic from text"""
ethiopic_list = re.findall(u'[^\u1200-\u137F]', string)
for c in string:
if c not in ethiopic_list:
string = string.replace(c, '')
return string
def strip_george(string):
"""Strip Georgian from text"""
george_list = re.findall(u'[^\u10A0-\u10FF]', string)
for c in string:
if c not in george_list:
string = string.replace(c, '')
return string
def strip_myan(string):
"""Strip Myan from text"""
myan_list = re.findall(u'[^\u1000-\u109F]', string)
for c in string:
if c not in myan_list:
string = string.replace(c, '')
return string
def strip_tibet(string):
"""Strip Tibetian from text"""
tibet_list = re.findall(u'[^\u0F00-\u0FFF]', string)
for c in string:
if c not in tibet_list:
string = string.replace(c, '')
return string
def strip_lao(string):
"""Strip Laotian from text"""
lao_list = re.findall(u'[^\u0E80-\u0EFF]', string)
for c in string:
if c not in lao_list:
string = string.replace(c, '')
return string
def strip_thai(string):
"""Strip Thai from text"""
thai_list = re.findall(u'[^\u0E00-\u0E7F]', string)
for c in string:
if c not in thai_list:
string = string.replace(c, '')
return string
def strip_sinhala(string):
"""Strip Sinhala from text"""
hala_list = re.findall(u'[^\u0D80-\u0DFF]', string)
for c in string:
if c not in hala_list:
string = string.replace(c, '')
return string
def strip_malayalam(string):
"""Strip Malayalam from text"""
mala_list = re.findall(u'[^\u0D00-\u0D7F]', string)
for c in string:
if c not in mala_list:
string = string.replace(c, '')
return string
def strip_kannada(string):
"""Strip Kannada from text"""
kan_list = re.findall(u'[^\u0C80-\u0CFF]', string)
for c in string:
if c not in kan_list:
string = string.replace(c, '')
return string
def strip_telugu(string):
"""Strip Telugu from text"""
tel_list = re.findall(u'[^\u0C00-\u0C7F]', string)
for c in string:
if c not in tel_list:
string = string.replace(c, '')
return string
def strip_tamil(string):
"""Strip Tamil from text"""
ta_list = re.findall(u'[^\u0B80-\u0BFF]', string)
for c in string:
if c not in ta_list:
string = string.replace(c, '')
return string
def strip_gujarati(string):
"""Strip Gujarati from Text"""
guj_list = re.findall(u'[^\u0A80-\u0AFF]', string)
for c in string:
if c not in guj_list:
string = string.replace(c, '')
return string
def strip_mongolia(string):
"""Strip Mongolian from text"""
mon_list = re.findall(u'[^\u1800-\u18AF]', string)
for c in string:
if c not in mon_list:
string = string.replace(c, '')
return string
def strip_punjabi(string):
"""Strip Punjabi from text"""
pun_list = re.findall(u'[^\u0A00-\u0A7F]', string)
for c in string:
if c not in pun_list:
string = string.replace(c, '')
return string
def strip_bengali(string):
"""Strip Bengali from text"""
ben_list = re.findall(u'[^\u0980-\u09FF]', string)
for c in string:
if c not in ben_list:
string = string.replace(c, '')
return string
def strip_urdu(string):
"""Strip Urdu from text"""
ur_list = re.findall(u'[^\u0600-\u06FF]', string)
for c in string:
if c not in ur_list:
string = string.replace(c, '')
return string
def strip_herbrew_a(string):
"""Strip Hebrew from text"""
he_list = re.findall(u'[^\u0590-\u05FF]', string)
for c in string:
if c not in he_list:
string = string.replace(c, '')
return string
def strip_herbrew(string):
"""Strip Hebrew from text"""
brew_list = re.findall(u'[^\uFB1D-\uFB4F]', string)
for c in string:
if c not in brew_list:
string = string.replace(c, '')
return string
def strip_armenian(string):
"""Strip Armenian from text"""
ar_list = re.findall(u'[^\u0530-\u058F]', string)
for c in string:
if c not in ar_list:
string = string.replace(c, '')
return string
def strip_cyrillic(string):
"""Strip Cyrillic from text"""
cy_list = re.findall(u'[^\u0400-\u04FF]', string)
for c in string:
if c not in cy_list:
string = string.replace(c, '')
return string
def strip_greek(string):
"""Strip Greek from text"""
gre_list = re.findall(u'[^\u0370-\u03FF]', string)
for c in string:
if c not in gre_list:
string = string.replace(c, '')
return string
def strip_lat(string):
"""Strip Latin-1 Supplement characters from text"""
lat_list = re.findall(u'[^\u00A0-\u00FF]', string)
for c in string:
if c not in lat_list:
string = string.replace(c, '')
return string
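# All of the `strip_*` helpers above implement the same algorithm, parameterised
# only by a Unicode range. As a sketch (only a subset of the notebook's ranges is
# shown, and `strip_range`/`strip_scripts` are hypothetical names), one helper
# can cover them all with a single linear `re.sub` pass instead of rescanning the
# string once per character with `str.replace`.

```python
import re

def strip_range(text, lo, hi):
    """Remove all characters in the inclusive Unicode range [lo, hi]."""
    return re.sub(u'[%s-%s]' % (lo, hi), '', text)

# Subset of the notebook's script ranges, for illustration
SCRIPT_RANGES = {
    'hangul':   (u'\uAC00', u'\uD7A3'),
    'cjk':      (u'\u4E00', u'\u9FFF'),
    'greek':    (u'\u0370', u'\u03FF'),
    'cyrillic': (u'\u0400', u'\u04FF'),
}

def strip_scripts(text, ranges=SCRIPT_RANGES):
    """Apply every configured script range to the text once."""
    for lo, hi in ranges.values():
        text = strip_range(text, lo, hi)
    return text
```

# Adding a script then becomes a one-line dictionary entry rather than a new function.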
# +
div_refined = []
for i in code:
divs = get_divs(i)
divs = str(divs)
divs = divs.lower() # convert to lowercase
divs = strip_html_tags(divs)
divs = re.sub(r"[{(,.;:@#?!&$)}]+\ *", " ", divs) # strip punctuation
divs = divs.replace("_", " ")
divs = divs.replace("-", " ")
# strip non-english characters
divs = strip_jamo(divs)
divs = strip_cjk(divs)
divs = strip_hindu(divs)
divs = strip_khmer(divs)
divs = strip_ethiopic(divs)
divs = strip_george(divs)
divs = strip_myan(divs)
divs = strip_tibet(divs)
divs = strip_lao(divs)
divs = strip_thai(divs)
divs = strip_sinhala(divs)
divs = strip_malayalam(divs)
divs = strip_kannada(divs)
divs = strip_telugu(divs)
divs = strip_tamil(divs)
divs = strip_gujarati(divs)
divs = strip_mongolia(divs)
divs = strip_punjabi(divs)
divs = strip_bengali(divs)
divs = strip_urdu(divs)
divs = strip_armenian(divs)
divs = strip_cyrillic(divs)
divs = strip_greek(divs)
div_refined.append(divs)
# -
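# The cleaning sequence in the loop above is repeated almost verbatim for the
# class and style columns below. A sketch of a shared helper (hypothetical name
# `clean_text`; `strippers` stands in for whichever subset of the `strip_*`
# functions is wanted) would remove that triplication:

```python
import re

PUNCT_RE = re.compile(r"[{(,.;:@#?!&$)}]+\s*")

def clean_text(text, strippers=()):
    """Shared cleaning for the div/class/style loops: lowercase, strip
    punctuation, split snake/kebab case, then apply each script stripper
    exactly once."""
    text = str(text).lower()
    text = PUNCT_RE.sub(" ", text)
    text = text.replace("_", " ").replace("-", " ")
    for strip in strippers:
        text = strip(text)
    return text
```

# e.g. `div_refined = [clean_text(get_divs(i), strippers) for i in code]`.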
len(div_refined)
df['n_divs'] = div_refined
def get_class(code):
"""Extract classes from HTML for better manipulation"""
soup = bs(code)
elements = soup.select('*')
classes = []
for element in elements:
classes.extend(element.attrs.get("class", []))
return classes
df["classes"] = df["ad_code"].map(get_class)
# +
class_refined = []
for i in code:
classes = get_class(i)
classes = str(classes)
classes = classes.lower() # convert to lowercase
classes = re.sub(r"[{(,.;:@#?!&$)}]+\ *", " ", classes) # strip punctuation
classes = classes.replace("_", " ")
classes = classes.replace("-", " ")
# strip foreign characters
classes = strip_jamo(classes)
classes = strip_cjk(classes)
classes = strip_hindu(classes)
classes = strip_khmer(classes)
classes = strip_ethiopic(classes)
classes = strip_george(classes)
classes = strip_myan(classes)
classes = strip_tibet(classes)
classes = strip_lao(classes)
classes = strip_thai(classes)
classes = strip_sinhala(classes)
classes = strip_malayalam(classes)
classes = strip_kannada(classes)
classes = strip_telugu(classes)
classes = strip_tamil(classes)
classes = strip_gujarati(classes)
classes = strip_mongolia(classes)
classes = strip_punjabi(classes)
classes = strip_bengali(classes)
classes = strip_urdu(classes)
classes = strip_armenian(classes)
classes = strip_cyrillic(classes)
classes = strip_greek(classes)
class_refined.append(classes)
# -
df['n_classes'] = class_refined
def get_style(code):
"""Extract style tags from HTMl"""
soup = bs(code)
divs = soup.select('div')
styles = []
for div in divs:
style = div.attrs.get("style")
if style:
styles.append(div.attrs.get("style"))
return styles
df.ad_code.map(get_style)
df["style"] = df["ad_code"].map(get_style)
# +
style_refined = []
for i in code:
style = get_style(i)
style = str(style)
style = style.lower()
style = re.sub(r"[{(,.;:@#?!&$)}]+\ *", " ", style) # strip punctuation (stray '/' removed from pattern)
style = style.replace("_", " ")
style = style.replace("-", " ")
# strip foreign characters
style = strip_jamo(style)
style = strip_cjk(style)
style = strip_hindu(style)
style = strip_khmer(style)
style = strip_ethiopic(style)
style = strip_george(style)
style = strip_myan(style)
style = strip_tibet(style)
style = strip_lao(style)
style = strip_thai(style)
style = strip_sinhala(style)
style = strip_malayalam(style)
style = strip_kannada(style)
style = strip_telugu(style)
style = strip_tamil(style)
style = strip_gujarati(style)
style = strip_mongolia(style)
style = strip_punjabi(style)
style = strip_bengali(style)
style = strip_urdu(style)
style = strip_armenian(style)
style = strip_cyrillic(style)
style = strip_greek(style)
style = str(style)
style_refined.append(style)
# -
df['n_style'] = style_refined
df
df.to_csv('/Users/SeoyeonHong/Desktop/annoying_ad_classifier/data_collection/data/acceptable_ad_code.csv')
| annoying_ad_classifier/data_collection/acceptable_ad_crawl.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Reshape Fuel Prices - Duke Energy Progress
#
# 3/18/2021 \
# by [<NAME>](<EMAIL>)
import csv
import datetime as dt
import numpy as np
import pandas as pd
df_lookup = pd.read_csv('./inputs/UnitLookupAndDetailTable_(DEC-DEP).csv')
df_fuel_DEP = pd.read_csv('./inputs/UNIT_FUEL_PRICE(DEP 2019).csv')
list(df_fuel_DEP.columns)
# +
#Slicing data and filter all the values where end date is before Jan 1st
df_fuel_DEP['UNIT_ID'] = df_fuel_DEP.UNIT_NAME + '_'+ df_fuel_DEP.CC_KEY.apply(str)
df_fuel_DEP = df_fuel_DEP.loc[:, ['UNIT_ID', 'FUEL_TYPE','PRICE $/MBTU', 'FROM_DATE', 'TO_DATE']]
df_fuel_DEP.sort_values(by=['UNIT_ID', 'FUEL_TYPE'], inplace=True)
df_fuel_DEP.to_csv('./outputs/UNIT_FUEL_PRICE(DEP 2019)_sorted.csv', sep=',', encoding='utf-8', index= False)
df_fuel_DEP.head()
# -
# ## Descriptive statistics
#
# Data from Duke Energy Carolinas and Duke Energy Progress
df_fuel_DEP.describe(include='all')
# ### Calculating range of days between initial and end dates
# +
def convertStringToDate(date_string):
date_obj = dt.datetime.strptime(date_string.split(" ")[0], '%m/%d/%Y')
#if date_obj - dt.date(2018, 7, 11)
return date_obj
#convertStringToDate('5/10/2018')
df_fuel_DEP['FROM_DATE'] = df_fuel_DEP['FROM_DATE'].apply(convertStringToDate)
df_fuel_DEP['TO_DATE'] = df_fuel_DEP['TO_DATE'].apply(convertStringToDate)
df_fuel_DEP.describe(include='all')
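# A self-contained restatement of `convertStringToDate` (under the hypothetical
# name `parse_mdy`), useful for checking the parse behaviour in isolation — it
# drops any trailing time component and parses month/day/year:

```python
import datetime as dt

def parse_mdy(date_string):
    """Keep only the date part of the string and parse it as m/d/Y."""
    return dt.datetime.strptime(date_string.split(" ")[0], '%m/%d/%Y')
```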
# +
First_day = convertStringToDate('1/1/2019')
Last_day = convertStringToDate('12/31/2019')
#remove all the values where the end dates are in 2018
df_fuel_DEP['END_YEAR'] = df_fuel_DEP['TO_DATE'].map(lambda TO_DATE: TO_DATE.year)
df_fuel_DEP['START_YEAR'] = df_fuel_DEP['FROM_DATE'].map(lambda FROM_DATE: FROM_DATE.year)
df_fuel_DEP = df_fuel_DEP[df_fuel_DEP['START_YEAR'] < 2020]
df_fuel_DEP = df_fuel_DEP[df_fuel_DEP['END_YEAR'] >= 2019]
df_fuel_DEP['FROM_DATE'] = df_fuel_DEP['FROM_DATE'].map(lambda FROM_DATE: First_day if (First_day - FROM_DATE).days > 0 else FROM_DATE )
df_fuel_DEP['TO_DATE'] = df_fuel_DEP['TO_DATE'].map(lambda TO_DATE: Last_day if (TO_DATE - Last_day).days > 0 else TO_DATE)
df_fuel_DEP = df_fuel_DEP[df_fuel_DEP['TO_DATE'] != First_day]
df_fuel_DEP.describe(include='all')
# +
# Adding columns to compute number of days from FROM_DATE to TO_DATE
df_fuel_DEP['DAYS'] = df_fuel_DEP['TO_DATE'] - df_fuel_DEP['FROM_DATE']
df_fuel_DEP['DAYS'] = df_fuel_DEP['DAYS'].map(lambda DAYS: DAYS.days )
df_fuel_DEP['REF_FROM_DATE'] = df_fuel_DEP['FROM_DATE'] - First_day
df_fuel_DEP['REF_FROM_DATE'] = df_fuel_DEP['REF_FROM_DATE'].map(lambda DAYS: DAYS.days )
# Replace last value when the number of days is zero
df_fuel_DEP['DAYS'] = np.where((df_fuel_DEP['DAYS'] == 0) & (df_fuel_DEP['TO_DATE'] == Last_day), 1, df_fuel_DEP['DAYS'])
df_fuel_DEP = df_fuel_DEP.loc[:, ['UNIT_ID', 'FUEL_TYPE', 'PRICE $/MBTU', 'FROM_DATE', 'TO_DATE', 'DAYS', 'REF_FROM_DATE']]
df_fuel_DEP.head()
# -
# Creating pivot table to summarize units and fuel type
df_fuel_DEP_pivot = df_fuel_DEP.groupby(['UNIT_ID', 'FUEL_TYPE']).sum()
df_fuel_DEP_pivot.to_csv('./outputs/fuel_summary.csv', sep=',', encoding='utf-8')
#print(list(df_fuel_DEP_pivot.index))
df_fuel_DEP_pivot
# ## Manipulating dataframe to organize data
# +
First_day = convertStringToDate('1/1/2019')
Last_day = convertStringToDate('12/31/2019')
#Create list with dates from First_day to last_day
date_list = [First_day + dt.timedelta(days=x) for x in range(0, (Last_day-First_day).days + 1)]
date_str_list = []
for date in date_list:
date_str_list.append(date.strftime("%m/%d/%Y"))
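# The timedelta comprehension above can also be written with pandas' built-in
# `date_range` (a sketch for the same 2019 window; `date_index`/`date_labels`
# are illustrative names):

```python
import pandas as pd

# Every calendar day of 2019, inclusive of both endpoints
date_index = pd.date_range(start='2019-01-01', end='2019-12-31', freq='D')
date_labels = [d.strftime('%m/%d/%Y') for d in date_index]
```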
# +
#create results dataframe to store prices every day
df_fuel_result = pd.DataFrame(index=df_fuel_DEP_pivot.index, columns=date_list)
#df_fuel_DEP_pivot = df_fuel_DEP_pivot.reindex(columns = df_fuel_DEP_pivot.columns.tolist() + date_str_list)
df_fuel_result.head(n=5)
# +
current_index = ()
old_index = ()
aux_index = 0
fuel_price_list = [None] * 365
for index, row in df_fuel_DEP.iterrows():
aux_index = index
index_current = (row['UNIT_ID'], row['FUEL_TYPE'])
# access data using column names
fuel_price = row['PRICE $/MBTU']
days = row['DAYS']
ref_day = row['REF_FROM_DATE']
current_index = (row['UNIT_ID'], row['FUEL_TYPE'])
#print(index, row['UNIT_ID'], row['FUEL_TYPE'], row['PRICE $/MBTU'], row['REF_FROM_DATE'], row['DAYS'])
# initialise on the first iteration; after the filtering above, the first label is not guaranteed to be 0
if old_index == ():
old_index = current_index
if (old_index != current_index):
df_fuel_result.loc[old_index] = fuel_price_list
old_index = current_index
fuel_price_list = [None] * 365
fuel_price_list[ref_day:(ref_day + days)] = [fuel_price]*(days)
#print(index, row['PRICE $/MBTU'], row['REF_FROM_DATE'], row['DAYS'])
#Save last value (only if the loop actually ran)
if current_index != ():
df_fuel_result.loc[current_index] = fuel_price_list
df_fuel_result.head()
# -
df_fuel_result.to_csv('./outputs/UNIT_FUEL_PRICES_DEP_Results.csv', sep=',', encoding='utf-8')
df_fuel_DEP.to_csv('./outputs/UNIT_FUEL_PRICES_DEP_Short.csv', sep=',', encoding='utf-8')
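# The `old_index`/`current_index` bookkeeping in the iterrows loop above can be
# sketched as a dictionary-based fill with the same REF_FROM_DATE/DAYS
# semantics. `fill_daily_prices` is a hypothetical helper, shown on plain
# tuples so it can be checked independently; the dict also drops the
# requirement that rows for one (unit, fuel) key be contiguous.

```python
def fill_daily_prices(rows, n_days=365):
    """rows: iterable of (key, price, ref_day, days) tuples.
    Returns {key: list of n_days daily prices, None where no row applies}."""
    result = {}
    for key, price, ref_day, days in rows:
        daily = result.setdefault(key, [None] * n_days)
        # overwrite the window [ref_day, ref_day + days) with this price
        daily[ref_day:ref_day + days] = [price] * days
    return result
```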
# +
#dfSummary['UNIT_ID'] dfSummary.UNIT_ID == 'ALLE_UN01_0')
#dfSummary[dfSummary.DAYS == 364]
| data_cleaning_analysis/Reshape_data_fuel_prices_DEP.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import numpy as np
import pandas as pd
from tqdm import tqdm
from rdkit import Chem
import seaborn as sns
from sklearn.cluster import AgglomerativeClustering, DBSCAN, SpectralClustering
from scipy.stats import ks_2samp, chisquare, power_divergence
import tmap, os
from faerun import Faerun
from mhfp.encoder import MHFPEncoder
from rdkit.Chem import AllChem
#from map4 import MAP4Calculator, to_mol
import matplotlib.pyplot as plt
# %matplotlib inline
tqdm.pandas(ascii=True)
np.random.seed(123)
# -
dim = 1024
n_clusters = 5
# +
#https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.ks_2samp.html
#https://reneshbedre.github.io/blog/chisq.html#-chi-square-%CF%872-test-for-independence-pearson-chi-square-test
from chembench import dataset
data = dataset.load_FreeSolv() #load_ESOL, load_Lipop, load_Malaria, load_PDBF, load_HIV, load_BACE,load_BBBP
# -
task_name = data.task_name
# +
data_save_folder = './cluster_split_results/%s' % task_name
if not os.path.exists(data_save_folder):
os.makedirs(data_save_folder)
# -
# +
mols = [Chem.MolFromSmiles(s) for s in data.x]
ECFP4_fps = [AllChem.GetMorganFingerprintAsBitVect(x,2,dim) for x in tqdm(mols, ascii=True)]
ecfps = [tmap.VectorUchar(list(fp)) for fp in ECFP4_fps]
enc = tmap.Minhash(dim,seed = 42)
lf = tmap.LSHForest(dim)
lf.batch_add(enc.batch_from_binary_array(ecfps))
lf.index()
# # # Calculate the MAP4 fp
# calc = MAP4Calculator(dimensions=dim)
# fps = calc.calculate_many([to_mol(s) for s in data.x])
# # # Calculate the MHFP
# # # enc = MHFPEncoder(dim)
# # # fps = [tmap.VectorUint(enc.encode(s)) for s in data.x]
# # Initialize the LSH Forest
# lf = tmap.LSHForest(dim)
# # Add the Fingerprints to the LSH Forest and index
# lf.batch_add(fps)
# lf.index()
# +
x, y, s, t, gp = tmap.layout_from_lsh_forest(lf)
X = np.array([x,y]).T
def adj_list_to_matrix(adj_list):
n = len(adj_list)
adj_matrix = np.zeros((n,n))
for i,c in enumerate(adj_list):
for (j, weight) in c:
adj_matrix[i, j] = weight
return adj_matrix
adj_csr = adj_list_to_matrix(gp.adjacency_list)
clustering = AgglomerativeClustering(n_clusters = n_clusters, connectivity = adj_csr,).fit(X)
# clustering= SpectralClustering(n_clusters = n_clusters, random_state = 2, n_init = 100).fit(X)
dft = pd.concat([pd.Series(clustering.labels_), pd.Series(x)], axis=1)
order_dict = dft.groupby(0)[1].apply(np.min).sort_values().argsort().to_dict()
clustering.labels_ = pd.Series(clustering.labels_).map(order_dict).values
pd.Series(clustering.labels_).value_counts()
# -
# +
mapd = {}
for k, v in pd.Series(clustering.labels_ + 1).value_counts().items():
mapd.update({k:'%s(%s)'% (k,v)})
branch_name = 'Group'
df = data.df
df = pd.DataFrame(data.y, columns = [task_name])
df[branch_name]= (clustering.labels_ + 1)
df['TMAP1'] = x
df['TMAP2'] = y
df[branch_name] = df[branch_name].map(mapd)
df['smiles'] = data.x
df[[branch_name]].to_pickle(os.path.join(data_save_folder, 'cluster_split_%s.idx' % task_name))
# -
# +
sns.set(style='white', font_scale = 1.4)
size = 12
palette = sns.color_palette("Set1", n_clusters)
order = df[branch_name].unique()
order.sort()
fig, axes = plt.subplots(ncols=3,figsize=(20,6))
ax1, ax2, ax3 = axes
sns.set(style="white")
_ = sns.scatterplot(x='TMAP1', y='TMAP2', hue = branch_name, palette = palette, hue_order = order, s = size,
data = df, ax = ax1, linewidth = 0)
ax1.legend(loc='upper right')
if data.task_type == 'regression':
num = 6
_ = sns.catplot(x = branch_name, y = task_name, kind="swarm", palette = palette,order = order, data=df, ax= ax2 , )
else:
num = 1
gb = df.groupby([branch_name, task_name]).size().unstack()
gb.columns = gb.columns.astype(int)
# _ = gb.plot(kind='bar', stacked = True, cmap = 'rainbow', ax= ax2)
gbb = gb[1]/gb[0]
gbb.plot(kind = 'bar', color = palette, ax= ax2, rot=0)
ax2.set_ylabel('Ratio(positive/negative)')
im3 = ax3.scatter(x = df.TMAP1, y = df.TMAP2, alpha = .8, c = df[task_name].tolist(), cmap = 'rainbow', s = size)
ax3.set_xlabel('TMAP1')
ax3.set_ylabel('TMAP2')
# fig.colorbar(im, ax=ax3)
lg3 = ax3.legend(*im3.legend_elements(num = num), loc="upper right", title=task_name,)
ax3.add_artist(lg3)
# fig.tight_layout()
fig.show()
plt.close(2)
plt.tight_layout()
plt.subplots_adjust(left=None, bottom=None, right=None, top=None, wspace=0.25, hspace=None)
fig.savefig(os.path.join(data_save_folder, '%s.png' % task_name), dpi=300, format='png')
fig.savefig(os.path.join(data_save_folder, '%s.pdf' % task_name), dpi=300, format='pdf')
# +
sns.set(style='white', font_scale = 1.2)
fig, axes = plt.subplots(ncols=2,figsize=(16,6))
ax1, ax2, = axes
fontsize = 16
if data.task_type == 'regression':
gb = df.groupby('Group')[task_name].apply(lambda x:x.values)
ks_values = []
p_values = []
for i in gb.index:
for j in gb.index:
expected = gb.loc[i]
observed = gb.loc[j]
ks, p = ks_2samp(expected, observed)
ks_values.append(ks)
p_values.append(p)
arrv = np.array(ks_values).reshape(len(gb), len(gb)).astype('float16')
arrp = np.array(p_values).reshape(len(gb), len(gb))
dfv = pd.DataFrame(arrv, index = gb.index, columns = gb.index)
dfp = pd.DataFrame(arrp, index = gb.index, columns = gb.index)
vax = sns.heatmap(dfv, annot=True, cmap = 'Greens', fmt='.3g', ax = ax1,
linewidths = 0.5, linecolor='0.9', cbar_kws={'label': 'KS value'})
vax.figure.axes[-1].yaxis.label.set_size(fontsize)
vax.collections[0].colorbar.ax.tick_params(labelsize=15) #cbar ticklabel size
pax = sns.heatmap(dfp, vmax = 0.05, annot=True, cmap = 'Greens', fmt='.3g', ax= ax2,
linewidths = 0.5, linecolor='0.9', cbar_kws={'label': 'p value', })
pax.figure.axes[-1].yaxis.label.set_size(fontsize)
pax.collections[0].colorbar.ax.tick_params(labelsize=15) #cbar ticklabel size
else:
gb = df.groupby([branch_name, task_name]).size().unstack()
gb.columns = gb.columns.astype(int)
chisq_values = []
p_values = []
for i in gb.index:
for j in gb.index:
expected = gb.loc[i].values
observed = gb.loc[j].values
# adjust the number of the expected
expected_adjust = (expected / expected.sum()) * observed.sum()
chisq, p = chisquare(expected_adjust, observed)
chisq_values.append(chisq)
p_values.append(p)
arrv = np.array(chisq_values).reshape(len(gb), len(gb)).astype('float16')
arrp = np.array(p_values).reshape(len(gb), len(gb))
dfv = pd.DataFrame(arrv, index = gb.index, columns = gb.index)
dfp = pd.DataFrame(arrp, index = gb.index, columns = gb.index)
vax = sns.heatmap(dfv, vmax = 10, annot=True, cmap = 'Greens', fmt='.3g', ax = ax1,
linewidths = 0.5, linecolor='0.9', cbar_kws={'label': 'chi-square value'})
vax.figure.axes[-1].yaxis.label.set_size(fontsize)
vax.collections[0].colorbar.ax.tick_params(labelsize=15) #cbar ticklabel size
pax = sns.heatmap(dfp, vmax = 0.05, annot=True, cmap = 'Greens', fmt='.3g', ax= ax2,
linewidths = 0.5, linecolor='0.9', cbar_kws={'label': 'p value',})
pax.figure.axes[-1].yaxis.label.set_size(fontsize)
pax.collections[0].colorbar.ax.tick_params(labelsize=15) #cbar ticklabel size
for ax in [ax1, ax2]:
ax.set_yticklabels(dfv.index, rotation=0, fontsize="15", va="center")
ax.set_xticklabels(dfv.index, rotation=0, fontsize="15", va="center")
ax.axhline(y=0, color='0.9',lw= 0.5, ls = '--')
ax.axhline(y=dfv.shape[0], color='0.9',lw= 0.5, ls = '--')
ax.autoscale()
ax.axvline(x=dfv.shape[1], color='0.9',lw= 0.5, ls = '--')
ax.axvline(x=0, color='0.9',lw= 0.5, ls = '--')
ax.set_xlabel('Group', fontsize = 16)
ax.set_ylabel('Group', fontsize = 16)
fig.tight_layout()
plt.subplots_adjust(left=None, bottom=None, right=None, top=None, wspace=0.3, hspace=None)
fig.savefig(os.path.join(data_save_folder, '%s_stat_test.png' % task_name), dpi=300, format='png')
fig.savefig(os.path.join(data_save_folder, '%s_stat_test.pdf' % task_name), dpi=300, format='pdf')
dfv['Value'] = 'statistic value'
dfv = dfv.reset_index().set_index(['Value', 'Group'])
dfp['Value'] = 'p value'
dfp = dfp.reset_index().set_index(['Value', 'Group'])
pd.concat([dfv, dfp]).to_excel(os.path.join(data_save_folder, '%s_stat_test.xlsx' % task_name))
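# The expected-count rescaling done inside the chi-square loop above can be
# isolated as a small helper (hypothetical name `adjust_expected`): `chisquare`
# requires `sum(expected) == sum(observed)`, so the expected class counts are
# scaled to the observed total.

```python
import numpy as np

def adjust_expected(expected, observed):
    """Scale expected class counts so they sum to the observed total."""
    expected = np.asarray(expected, dtype=float)
    observed = np.asarray(observed, dtype=float)
    return expected / expected.sum() * observed.sum()
```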
# -
# +
# Now plot interactive results
if data.task_type == 'regression':
categorical=[False, True,]
else:
categorical = [True, True,]
faerun = Faerun(view="front", clear_color='#111111',coords=False) #'#ffffff'
faerun.add_scatter(
task_name,
{ "x": x,
"y": y,
"c": [data.y.reshape(-1, ), clustering.labels_],
"labels": data.x},
point_scale=5,
colormap = ['rainbow', 'Set1'],
has_legend=True,
categorical = categorical,
series_title = [task_name, branch_name],
legend_labels = [None, [(i, "%s" % (i+1)) for i in range(n_clusters)]],
shader = 'smoothCircle'
)
faerun.add_tree(task_name + "_tree", {"from": s, "to": t}, point_helper=task_name, color='#666666', ) #colors when no value
# Choose the "smiles" template to display structure on hover
faerun.plot(task_name, path = data_save_folder, template="smiles", notebook_height=750)
# -
#
| chembench/cluster/cluster_split/00_split_data_FreeSolv.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## Code Details
# Author: <NAME><br>
# Created: 19JAN19<br>
# Version: 0.1<br>
# ***
# This code loads the data from Mongo and processes it to work out who has access to the different CLARA results based on their group membership.<br>
# At the moment this code is experimental and its functionality will be refined as I come to understand the data better.
# # Package Importing + Variable Setting
# +
import matplotlib
#need to use this otherwise nothing appears in the notebook from the charting point of view
matplotlib.use('module://ipykernel.pylab.backend_inline')
import matplotlib.pyplot as plt
import matplotlib.gridspec as gridspec
from itertools import cycle, islice
import pandas as pd
import numpy as np
from math import pi
from math import ceil
from math import floor
import datetime
# mongo stuff
import pymongo
from pymongo import MongoClient
from bson.objectid import ObjectId
# -
# packages for the widgets
from ipywidgets import interact, interactive, fixed, interact_manual
import ipywidgets as widgets
# if true the code outputs to the notebook a whole of diagnostic data that is helpful when writing but not so much when running it for real
verbose = False
# first run will truncate the target database and reload it from scratch. Once delta updates have been implemented this needs adjusting
first_run = True
# # Set display options
# +
# further details found by running:
# pd.describe_option('display')
# set the values to show all of the columns etc.
pd.set_option('display.max_columns', None) # or 1000
pd.set_option('display.max_rows', None) # or 1000
pd.set_option('display.max_colwidth', None)  # or a number like 199; -1 is no longer accepted
# locals() # show all of the local environments
# -
# # Connect to Mongo DB
# +
# create the connection to MongoDB
# define the location of the Mongo DB Server
# in this instance it is a local copy running on the dev machine. This is configurable at this point.
client = MongoClient('127.0.0.1', 27017)
# define what the database is called.
db = client.CLARA
# define the collections
coachDataCollection = db.raw_data_coach_coachee
groupDataCollection = db.raw_data_group_user
resultsDataCollection = db.raw_data_combined_user_results
usersDataCollection = db.raw_data_claraUsers
# -
# ## Read Data
# This uses a variable that is defined above and puts it into a filter based on the student index. <br>
# This needs to be replaced with the student ID.
# Use this to define what to search on
searchCriteria = [
"f2f3c425-d4bf-4b1a-8112-e5d23d48719b",
"539693e6-bd4c-4c25-aeed-f62789032181"
]
# ### Read the coaching relationship file
# +
# Need to introduce this check as the commands are different if I want to conduct a search or return all values
# create a variable to determine if I want to perform a wildcard search
wildcardSearch = True
# +
# Get the group information from the data base
# Query the data from the database using a filter
queryField = "userId"
sortField = "coachId"
# define the search query
query = {
queryField: {
'$in': searchCriteria
} # matches ensuring only the requested students are supplied
}
if wildcardSearch:
# no filter is provided for a wildcard search
# return sorted results
cursor = coachDataCollection.find().sort([(sortField, 1)])
else:
# return filtered and sorted results
cursor = coachDataCollection.find(query).sort([(sortField, 1)])
# put the results into a dataframe
dfCoach = pd.DataFrame(list(cursor))
if verbose:
print(dfCoach.shape)
display(dfCoach.head())
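The wildcard-vs-filter fetch above is repeated for every collection; the filter construction could be factored into a small helper. A sketch (`build_in_query` is a hypothetical name, not part of the existing code):

```python
def build_in_query(field, values, wildcard=False):
    """Build a Mongo filter: an empty dict for a wildcard (match-all) search,
    otherwise an $in match on the given field."""
    return {} if wildcard else {field: {"$in": list(values)}}

# An empty filter passed to collection.find() matches every document, so both
# branches collapse into a single call:
#   collection.find(build_in_query(queryField, searchCriteria, wildcardSearch))
#             .sort([(sortField, 1)])
```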
# +
dfCoach.drop(['_id', 'insertdate', 'userGroup_index'],
inplace=True,
axis=1,
errors='ignore')
if verbose:
display(dfCoach)
# -
# ### Read the users file
# +
# Need to introduce this check as the commands are different if I want to conduct a search or return all values
# create a variable to determine if I want to perform a wildcard search
wildcardSearch = True
# +
# Get the group information from the data base
# Query the data from the database using a filter
queryField = "userId"
sortField = "coachId"
# define the search query
query = {
queryField: {
'$in': searchCriteria
} # matches ensuring only the requested students are supplied
}
if wildcardSearch:
# no filter is provided for a wildcard search
# return sorted results
cursor = usersDataCollection.find().sort([(sortField, 1)])
else:
# return filtered and sorted results
cursor = usersDataCollection.find(query).sort([(sortField, 1)])
# put the results into a dataframe
dfUsers = pd.DataFrame(list(cursor))
if verbose:
print(dfUsers.shape)
display(dfUsers.head())
# +
# drop columns that are not needed from the data frame
# 'orgUserId' is needed to link back to the results
dfUsers.drop([
'_id', 'insertdate', 'user_index', 'isSSO', 'additionalData',
'learningPlatformUserId', 'coachId'
],
inplace=True,
axis=1,
errors='ignore')
if verbose:
display(dfUsers.head())
# -
# ### Merge the files together to bring in the coach's name and identifiers against the coachee's record
# +
# add in the coach's name, if there is one, by merging the users file into the coach / coachee file on the coach id
dfCoach = pd.merge(
dfCoach, dfUsers, how='left', left_on='coachId', right_on='orgUser_Id')
if verbose:
display(dfCoach.head())
# +
# drop the unneeded columns (orgUser_Id is a duplicate of the coachId)
dfCoach.drop([
'orgUser_Id', 'AvatarSupplied', 'clientUserId', 'declaraLinked',
'languagePreference', 'orgUser_Id', 'primaryEmail', 'userDeletedAt',
'userStatus'
],
inplace=True,
axis=1,
errors='ignore')
if verbose:
display(dfCoach.head())
# +
# Rename the columns to indicate which are the ones that belong to the coaches DisplayName and id's
dfCoach.columns = [
'coachId', 'dateFrom', 'dateTo', 'learnerId', 'coachDisplayName', 'coachNameId', 'coachOrgUserId'
]
if verbose:
display(dfCoach.head())
# +
# Update the users table with the coach / coachee details
dfUsers = pd.merge(
dfUsers, dfCoach, how='left', left_on='orgUser_Id', right_on='learnerId')
if verbose:
display(dfUsers.head())
# -
if verbose:
print(*dfUsers, sep='\n')
display(dfUsers)
# +
# rename columns
# The key renaming is orgUserId -> userId & orgUser_Id -> user_Id
dfUsers.columns = ['AvatarSupplied', 'clientUserId', 'declaraLinked', 'displayName', 'languagePreference', 'nameId', 'userId', 'user_Id', 'primaryEmail', 'userDeletedAt', 'userStatus', 'coachId', 'dateFrom',
'dateTo', 'learnerId', 'coachDisplayName', 'coachNameId', 'coachOrgUserId']
if verbose:
display(dfUsers.head())
# +
# Extract the list of coaches
# get the unique users from the data frame. This is what we will iterate through
coachNames = dfUsers['coachDisplayName'].dropna().unique()
if verbose:
display(coachNames.shape)
display(coachNames)
# -
# ## Group Data Retrieval
# ### Take note of the setting here of the wild card
# The code is left like this in case there is a need to apply search criteria at a later date; for now, check whether the wildcard is set in the next cell
# +
# Need to introduce this check as the commands are different if I want to conduct a search or return all values
# create a variable to determine if I want to perform a wildcard search
wildcardSearch = True
# +
# Get the group information from the data base
# Query the data from the database using a filter
queryField = "userId"
sortField = "groupName"
# define the search query
query = {
queryField: {
'$in': searchCriteria
} # matches ensuring only the requested students are supplied
}
if wildcardSearch:
# no filter is provided for a wildcard search
# return sorted results
cursor = groupDataCollection.find().sort([(sortField, 1)])
else:
# return filtered and sorted results
cursor = groupDataCollection.find(query).sort([(sortField, 1)])
# put the results into a dataframe
dfGroup = pd.DataFrame(list(cursor))
if verbose:
print(dfGroup.shape)
display(dfGroup.head())
# -
if verbose:
# count columns and rows
print("Number of columns are " + str(len(dfGroup.columns)))
print("Number of rows are " + str(len(dfGroup.index)))
print()
# output the shape of the dataframe
print("The shape of the data frame is " + str(dfGroup.shape))
print()
# output the column names
print("The column names of the data frame are: ")
print(*dfGroup, sep='\n')
print()
# output the column names and datatypes
print("The datatypes of the data frame are: ")
print(dfGroup.dtypes)
print()
# # Group stuff
# This needs work because this is not the correct relationship in this code. An intersection table needs to be used, and the dates that the surveys were taken also need to be taken into account.
# +
# get the unique users from the data frame. This is what we will iterate through
groupId = dfGroup['groupName'].unique()
if verbose:
display(groupId.shape)
display(groupId)
# -
# ### CLARA Results Retrieval
# +
# Need to introduce this check as the commands are different if I want to conduct a search or return all values
# create a variable to determine if I want to perform a wildcard search
wildcardSearch = True
# Get the group information from the data base
# Query the data from the database using a filter
queryField = "userId"
sortField = "groupName"
# define the search query
query = {
queryField: {
'$in': searchCriteria
} # matches ensuring only the requested students are supplied
}
if wildcardSearch:
# no filter is provided for a wildcard search
# return sorted results
cursor = resultsDataCollection.find().sort([(sortField, 1)])
else:
# return filtered and sorted results
cursor = resultsDataCollection.find(query).sort([(sortField, 1)])
# put the results into a dataframe
dfResults = pd.DataFrame(list(cursor))
if verbose:
print(dfResults.shape)
display(dfResults.head())
# -
# Display information about the data that has been retrieved
if verbose:
# count columns and rows
print("Number of columns are " + str(len(dfResults.columns)))
print("Number of rows are " + str(len(dfResults.index)))
print()
# output the shape of the dataframe
print("The shape of the data frame is " + str(dfResults.shape))
print()
# output the column names
print("The column names of the data frame are: ")
print(*dfResults, sep='\n')
print()
# output the column names and datatypes
print("The datatypes of the data frame are: ")
print(dfResults.dtypes)
print()
# # Create additional columns
# These will help in understanding the data and if the surveys are valid or not.
# There are many test journeys and the like that we don't want to include in the analysis
# ## Second Survey
# This makes a column that indicates if the student completed a second survey
# test to see if the start date for the 2nd survey is blank, if so, then False
dfResults["completedSecondSurvey"] = np.where(
dfResults["measure_ClaraResultsCreatedAt"].isnull(), False, True)
# ## Survey Duration
# +
# Duration of surveys
# convert the number fields to a datetime field
dfResults.loc[:, "diagnose_ClaraResultsCreatedAt"] = pd.to_datetime(
dfResults.loc[:, 'diagnose_ClaraResultsCreatedAt'])
dfResults.loc[:, "diagnose_ClaraResultCompletedAt"] = pd.to_datetime(
dfResults.loc[:, 'diagnose_ClaraResultCompletedAt'])
dfResults.loc[:, "measure_ClaraResultsCreatedAt"] = pd.to_datetime(
dfResults.loc[:, 'measure_ClaraResultsCreatedAt'])
dfResults.loc[:, "measure_ClaraResultCompletedAt"] = pd.to_datetime(
dfResults.loc[:, 'measure_ClaraResultCompletedAt'])
# -
# this calcs the duration of the surveys in HH:MM:SS
dfResults.loc[:,
"surveyOneDuration"] = dfResults.loc[:,
"diagnose_ClaraResultCompletedAt"] - dfResults.loc[:,
"diagnose_ClaraResultsCreatedAt"]
dfResults.loc[:,
"surveyTwoDuration"] = dfResults.loc[:,
"measure_ClaraResultCompletedAt"] - dfResults.loc[:,
"measure_ClaraResultsCreatedAt"]
# Calc the time between the end of the first survey and the start of the second one
dfResults.loc[:,
"surveyBetweenDuration"] = dfResults.loc[:,
"measure_ClaraResultsCreatedAt"] - dfResults.loc[:,
"diagnose_ClaraResultCompletedAt"]
# # Interactive section
# As the code for the widgets executes immediately, it needs to be nested inside a function; otherwise it doesn't wait for the user to select their input.<br>
# This function contains the code for a second button to seek user interaction to save the file.<br>
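The deferred-execution point can be illustrated without ipywidgets: top-level cell code runs eagerly, while code inside a callback runs only when the framework invokes it. A hypothetical minimal sketch:

```python
selections = []

def on_pick(group_id, coach_name):
    # All selection-dependent logic lives inside the callback,
    # so nothing runs until the user presses the button.
    selections.append((group_id, coach_name))

# At definition time nothing has happened yet...
assert selections == []
# ...until the widget framework calls the function (simulated here):
on_pick("Group A", "Coach B")
```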
# +
# this function is used to display the groups to the user and set the value for use in the rest of the munging
# it contains all of the remaining code; otherwise execution just skips past this button, which is meaningless in this context
def GroupIdSelect(groupId, coachName):
# Get the user id's into a variable for selecting from the results data frame
# Compare the group name selected by the user and return the columns where it matches
groupUserId = list(dfGroup.loc[dfGroup['groupName'] == groupId]['userId'])
# Get the user id's into a variable for selecting from the results data frame
coachUserId = list(
dfUsers.loc[dfUsers['coachDisplayName'] == coachName]['userId'])
# if any of the options are blank then don't include them
if (groupId == "") & (coachName == ""):
combinedUserId = []
# print out to the user their selection and their matches
print("\n" + "*" * 45)
print(("\n You did not select a group \n"))
print(("\n You did not select a coach \n"))
print("\nThis results in " + str(len(combinedUserId)) +
" people selected")
elif groupId == "":
combinedUserId = coachUserId
# print out to the user their selection and their matches
print("\n" + "*" * 45)
print(("\n You did not select a group \n"))
print(("\n The coach you picked is: \n"))
print(" " + coachName + "")
print("\nThe number of members in the coach selection is: " +
str(len(coachUserId)))
print("\nThis results in " + str(len(combinedUserId)) +
" people selected")
elif coachName == "":
combinedUserId = groupUserId
# print out to the user their selection and their matches
print("\n" + "*" * 45)
print(("\n The group you picked is: \n"))
print(" " + groupId + "")
print("\nThe number of members in the group is: " +
str(len(groupUserId)))
print(("\n You did not select a coach \n"))
print("\nThis results in " + str(len(combinedUserId)) +
" people selected")
else:
# this is the intersection of the two selections
# only the students that match both
combinedUserId = list(set(groupUserId) & set(coachUserId))
# print out to the user their selection and their matches
print("\n" + "*" * 45)
print(("\n The group you picked is: \n"))
print(" " + groupId + "")
print("\n The number of members in the group is: " +
str(len(groupUserId)))
print(("\n The coach you picked is: \n"))
print(" " + coachName + "")
print("\nThe number of members in the coach selection is: " +
str(len(coachUserId)))
print("\nThis results in " + str(len(combinedUserId)) +
" people selected")
# footer for display
print("\n" + "*" * 45 + "\n")
# use isin function to select the rows that match the userId's that are in the selected group
selectedResults = dfResults.loc[dfResults['userId'].isin(
combinedUserId)].copy()
print()
print("The " + str(len(combinedUserId)) + " people selected have " +
str(len(selectedResults)) + " corresponding CLARA Journey results")
print("\nA sample of the key columns and data is: \n")
display(selectedResults[[
'userPrimaryEmail', 'completedSecondSurvey', 'journeyTitle',
'journeyGoal', 'diagnose_ClaraResultsCreatedAt',
'measure_ClaraResultsCreatedAt'
]].head())
# drop the unneeded columns from the results set
selectedResults.drop([
'_id', 'claraResultsJourneyStep', 'diagnose_ClaraId', 'insertdate',
'journeyId', 'measure_ClaraId', 'measure_ClaraResultsJourneyStep',
'numTotalClaraJourneySurveys', 'numTotalClaraSurveys', 'rowIndex',
'userDeclaraLinked', 'userDeletedAt', 'userAvatarSupplied',
'userClientUserId', 'userExtraData', 'userId',
'userLanguagePreference', 'userStatus', 'userName'
],
inplace=True,
axis=1,
errors='ignore')
if verbose:
print("\nThe column names: \n")
print(selectedResults.columns)
print("\nA sample of the all columns and data is: \n")
display(selectedResults.head())
# ############# !~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~! #############
# Saving the file section really starts here
# add a few extra lines for separation from the save file
print("\n" * 3)
print("\n" + "*" * 80 + "\n")
print("Specify the name of the file to be saved... ")
print(
"This will automatically place a comma serparated file onto the platform"
)
print(
"at the following location - http://127.0.0.1:8888/tree/datasets/CLARA/UserSaved"
)
print(
"Please note that this will overwrite any file with the same name, please check before saving the file."
)
print()
print(
"A good format could be YYMMDD_SubjectOrGroup_LecturerOrTutor - It depends on the the data that you have selected."
)
print()
# # Save File
# This section waits for the user to interact with the code to set the file name of the csv file.
# set a variable to hold the filename - note there is a check if the user tries to use this value to save the file.
filename = "Specify_filename"
# define the widget controls
wFilename = widgets.Text(
value=filename,
placeholder='Go on... type the file name?',
description='Filename:',
disabled=False)
# this function is used to manage the setting of the filename and saving that file.
def saveFile(filename):
# check to see if the file ends in .csv, if it does do nothing otherwise, add it to the end of the filename
# note that the tests below have been modified to take this into account
# therefore blank file name is actually ".csv"
if filename[-4:] != ".csv":
# add .csv to the filename
filename = filename + ".csv"
# write out data to CSV file.
# test to see if the user has entered a filename otherwise reject and let them try again
# see lines above for why the tests have .csv in them
if filename == ".csv": # file name is blank
print("\n" + "*" * 45 + "\n")
print(" You need to enter a filename!")
print("\n" + "*" * 45)
elif filename != "Specify_filename.csv": # file name has been changed by the user - not negative testing
# don't write the index to the file as it is not required and it breaks when reading the file back in later on
selectedResults.to_csv(
"~/datasets/CLARA/UserSaved/" + filename, index=False)
print("\n" + "##!" * 45 + "\n")
print("Congrats! \n\nFile of " + str(len(selectedResults)) +
" records succesfully written to: " +
"~/datasets/CLARA/UserSaved/" + filename)
print("\n" + "##!" * 45 + "\n")
        else:  # the user left the default placeholder filename unchanged
print("\n" + "*" * 45 + "\n")
print((" File is NOT saved!"))
print(" You need to specify the file name in the box.")
print("\n" + "*" * 45 + "\n")
# end function saveFile and return with nothing as file has been written or not...
return
## Set up the interaction component
# set the name on the button
interact_save = interact_manual.options(manual_name="Save File")
# This file is used by the next step of the analysis to save the file interactively
interact_save(saveFile, filename=wFilename)
# end function GroupIdSelect and return with nothing as data was selected and then written...
return
# -
# ## Set up the interaction component
# +
# add a blank value to the variable - this is used as a default value
groupId = np.append(groupId, "")
# define the widget controls for the groups that are available
wGroupId = widgets.Dropdown(
options=groupId, description='Select Group', value='', disabled=False)
if verbose:
print(groupId)
# +
# add a blank value to the variable - this is used as a default value
coachNames = np.append(coachNames, "")
# define the widget controls for the coach names
wCoachNames = widgets.Dropdown(
options=coachNames, description='Select Coach', value='', disabled=False)
if verbose:
print(coachNames)
# -
# set the name on the button
interactGroupId = interact_manual.options(manual_name="Pick Group + Coach")
# +
print("\n" * 3)
# This file is used by the next step of the analysis
interactGroupId(GroupIdSelect, groupId=wGroupId, coachName=wCoachNames)
print("\n" * 3)
| 3_Code/06_MongoDB_to_CSV_RealData_ClaraResults_SSODataModel_CoachingandGroups.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
#
import pandas as pd
from pandas import Series, DataFrame
import numpy as np
from numpy import nan as NA
import sys
#
import matplotlib.pylab as plt
from numpy.random import randn
# %matplotlib inline
# # Chapter 7: Data Wrangling: Clean, Transform, Merge, Reshape
# Run next cell to create df1 and df2
#
df1 = DataFrame({'key': ['b', 'b', 'a', 'c', 'a', 'b'],
'data1': range(6)})
df1
df2 = DataFrame({'key': ['a', 'b', 'a', 'b', 'd'],
'data2': range(5)})
df2
# Merge (join) df1 and df2 on their shared column name. Try inner, left, right, and outer merge and verify your expectation
pd.merge(df1, df2, on='key', how='inner')
pd.merge(df1, df2, on='key', how='left')
pd.merge(df1, df2, on='key', how='right')
pd.merge(df1, df2, on='key', how='outer')
# Run next cell to create df3 and df4
df3 = DataFrame({'lkey': ['b', 'b', 'a', 'c', 'a', 'a', 'b'],
'data1': range(7)})
df4 = DataFrame({'rkey': ['a', 'b', 'd'],
'data2': range(3)})
# Merge df3 and df4 on 'lkey' and 'rkey'
pd.merge(df3, df4, left_on='lkey', right_on='rkey')
# Run next cell to create 'left' and 'right'
#
left = DataFrame({'key1': ['foo', 'foo', 'bar'],
'key2': ['one', 'two', 'one'],
'lval': [1, 2, 3]})
right = DataFrame({'key1': ['foo', 'foo', 'bar', 'bar'],
'key2': ['one', 'one', 'one', 'two'],
'rval': [4, 5, 6, 7]})
# Merge left and right by two keys of key1 and key2, and verify your thoughts. Try both inner and outer
left
right
pd.merge(left, right, on=['key1', 'key2'])
pd.merge(left, right, how='outer')
# Merge left and right on key1 only
pd.merge(left, right, on='key1')
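Note that merging on key1 alone leaves two key2 columns, which pandas disambiguates with default `_x`/`_y` suffixes; the `suffixes` parameter gives them more readable names. A small self-contained sketch:

```python
import pandas as pd

left = pd.DataFrame({'key1': ['foo', 'foo', 'bar'],
                     'key2': ['one', 'two', 'one'],
                     'lval': [1, 2, 3]})
right = pd.DataFrame({'key1': ['foo', 'foo', 'bar', 'bar'],
                      'key2': ['one', 'one', 'one', 'two'],
                      'rval': [4, 5, 6, 7]})

# Overlapping non-key columns get the given suffixes instead of _x/_y
merged = pd.merge(left, right, on='key1', suffixes=('_left', '_right'))
```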
# Run next cell to generate left1 and right1
#
left1 = DataFrame({'key': ['a', 'b', 'a', 'a', 'b', 'c'],
'value': range(6)})
right1 = DataFrame({'group_val': [3.5, 7]}, index=['a', 'b'])
# Observe left1 and right1, then merge them
left1
right1
pd.merge(left1, right1, left_on='key', right_index=True)
# Run next cell to create left2 and right2
left2 = DataFrame([[1., 2.], [3., 4.], [5., 6.]], index=['a', 'c', 'e'],
columns=['Ohio', 'Nevada'])
right2 = DataFrame([[7., 8.], [9., 10.], [11., 12.], [13, 14]],
index=['b', 'c', 'd', 'e'], columns=['Missouri', 'Alabama'])
# Observe left2 and right2, and merge them
left2
right2
pd.merge(left2, right2, left_index=True, right_index=True)
# Run next cell to create series s1, s2, and s3
#
s1 = Series([0, 1], index=['a', 'b'])
s2 = Series([2, 3, 4], index=['c', 'd', 'e'])
s3 = Series([4, 5], index=['f', 'g'])
# Concatenate s1, s2, and s3 by default settings
pd.concat([s1, s2, s3])
# Now concatenate them along axis 1
pd.concat([s1, s2, s3], axis=1)
# Now repeat the last question but request an inner join on the index. Verify your thought
pd.concat([s1, s2, s3], axis=1, join='inner')
# Run next cell to create s4
#
s4 = pd.concat([s1*5, s3])
s4
# Now concatenate s4 and s1 in axis 0 and 1 separately, and verify your thought
s1
pd.concat([s1, s4], axis=1)
# Concatenate s1, s2, and s3 in axis 0, and assign keys to identify the groups
pd.concat([s1, s2, s3], keys=['s1', 's2', 's3'], axis=0)
# Run next cell to create df1 and df2
#
df1 = DataFrame(np.arange(6).reshape(3, 2), index=['a', 'b', 'c'],
columns=['one', 'two'])
df2 = DataFrame(5 + np.arange(4).reshape(2, 2), index=['a', 'c'],
columns=['three', 'two'])
# Concatenate df1 and df2 along each axis, distinguishing df1 and df2 with keys; verify your thought.
df1
df2
pd.concat([df1, df2], axis=0, keys=['df1', 'df2'])
pd.concat([df1, df2], axis=1, keys=['df1', 'df2'])
# Run next cell to create df1 and df2
#
df1 = DataFrame(np.random.randn(3, 4), columns=['a', 'b', 'c', 'd'])
df2 = DataFrame(np.random.randn(2, 3), columns=['b', 'd', 'a'])
df1
df2
# Concatenate df1 and df2. Since their index has no physical meaning, ignore the index.
pd.concat([df1, df2], ignore_index=True)
# Run next cell to create series a and b
#
a = Series([np.nan, 2.5, np.nan, 3.5, 4.5, np.nan],
index=['f', 'e', 'd', 'c', 'b', 'a'])
b = Series(np.arange(len(a), dtype=np.float64),
index=['f', 'e', 'd', 'c', 'b', 'a'])
a
b
# Fill in the missing values in a with the corresponding values in b
a.combine_first(b)
# Run next cell to create df1 and df2
#
df1 = DataFrame({'a': [1., np.nan, 5., np.nan],
'b': [np.nan, 2., np.nan, 6.],
'c': range(2, 18, 4)})
df2 = DataFrame({'a': [5., 4., np.nan, 3., 7.],
'b': [np.nan, 3., 4., 6., 8.]})
df1
df2
# Similarly, fill in the missing values in df1 by df2
df1.combine_first(df2)
# Try df1.combine_first(df2) and df1.fillna(df2) and observe the difference: fillna only fills NaNs at labels that already exist in df1, while combine_first also keeps df2's extra rows and columns.
df1.fillna(df2)
# Run next cell to create DataFrame 'data'
#
data = DataFrame(np.arange(6).reshape((2, 3)),
index=pd.Index(['Ohio', 'Colorado'], name='state'),
columns=pd.Index(['one', 'two', 'three'], name='number'))
data
# Reshaping: Stack DataFrame data into a series 'result'. Stacking produces a longer data frame or series
result = data.stack()
result
# Reshaping: Unstack result to recover data. Unstacking produces a shorter (wider) data frame
result.unstack()
# Reshaping: Try two methods to unstack result by "state"
result.unstack(0)
result.unstack('state')
# Note that the level stacked or unstacked becomes the lowest level in the other axis
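This level behaviour can be verified directly; a self-contained sketch rebuilding the same data frame as above:

```python
import numpy as np
import pandas as pd

data = pd.DataFrame(np.arange(6).reshape((2, 3)),
                    index=pd.Index(['Ohio', 'Colorado'], name='state'),
                    columns=pd.Index(['one', 'two', 'three'], name='number'))

# Unstacking 'state' moves it from the row index to the column axis,
# leaving 'number' as the row index
swapped = data.stack().unstack('state')
```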
# Create a data frame ldata by reading ldata.txt
ldata=pd.read_csv('ldata.txt')
ldata
# In ldata, date and item are the primary keys. Now create a pivoted data frame by these two keys
pivoted = ldata.pivot(index='date', columns='item', values='value')  # keyword args; positional use is removed in pandas 2.0
pivoted
# Now add another column 'value2' of random number to ldata
ldata['value2'] = np.random.randn(len(ldata))
ldata
# Now pivot again and keep value2 only.
ldata.pivot(index='date', columns='item', values='value2')
# Pivot again and keep both value and value2
ldata.pivot(index='date', columns='item')
# Pivot is just a shortcut for creating a hierarchical index using set_index and reshaping with unstack. Now use set_index and unstack to solve the last question.
ldata.set_index(['date', 'item']).unstack('item')
# Run next cell to create DataFrame data
#
data = DataFrame({'k1': ['one']*3 + ['two']*4,
'k2': [1, 1, 2, 3, 3, 4, 4]})
data
# Check which row of data is a duplicated row
data.duplicated()
# Now drop all duplicated rows
data.drop_duplicates()
# Now add a new column 'v1' = range(7) to data
data['v1'] = range(7)
data
# Now drop duplicate based on k1
data.drop_duplicates('k1')
# Now drop duplicates by k1 and k2
data.drop_duplicates(['k1', 'k2'])
# Drop duplicates by k1 and k2, and keep the last value instead of the first value
data.drop_duplicates(['k1', 'k2'], keep = 'last')
# Run the next line to create a data frame of 'data'
#
data = DataFrame({'food': ['bacon', 'pulled pork', 'bacon', 'Pastrami',
'corned beef', 'Bacon', 'pastrami', 'honey ham',
'nova lox'],
'ounces': [4, 3, 12, 6, 7.5, 8, 3, 5, 6]})
data
# Run next line to create a mapping from 'food' to 'animal'
#
meat_to_animal = {
'bacon': 'pig',
'pulled pork': 'pig',
'pastrami': 'cow',
'corned beef': 'cow',
'honey ham': 'pig',
'nova lox': 'salmon'
}
# Now add another column 'animal' to data by the mapping above. Note that the mapping should be case insensitive.
data['animal'] = data.food.map(lambda x: meat_to_animal[x.lower()])
data
# Run the next line to create a series 'data'
data = Series([1., -999., 2., -999., -1000., 3.])
data
# Now replace -999 by np.nan and return as data1
data1 = data.replace(-999, np.nan)
data1
# Now replace both -999 and -1000 by np.nan and return as data2
data2 = data.replace([-999, -1000], np.nan)
data2
# Now replace -999 by np.nan, -1000 by 0
data.replace({-999:np.nan, -1000:0})
# Run next line to create a data frame "data"
#
data = DataFrame(np.arange(12).reshape((3, 4)),
index=['Ohio', 'Colorado', 'New York'],
columns=['one', 'two', 'three', 'four'])
data
# Now change the index of data to UPPERCASE in place using the map function
data.index = data.index.map(str.upper)
data
# Change the index to title format and change the column names to uppercase by using rename method
data.rename(index=str.title, columns=str.upper)
# + active=""
# For data, change the index of 'OHIO' to 'INDIANA' and change the column of 'three' to 'peekaboo'; do this in place
# -
data.rename(index={'OHIO':'INDIANA'}, columns={'three':'peekaboo'}, inplace=True)
data
# Run the next line to create a list
#
ages = [20, 22, 25, 27, 21, 23, 37, 31, 61, 45, 41, 32]
# Now create a series showing the number of elements in ages falling into these four groups: (18, 25], (25, 35], (35,60], (60, 100], and name these groups as 'Youth', 'YoungAdult', 'MiddleAged', 'Senior'
cats = pd.cut(ages, [18, 25, 35, 60, 100],
labels=['Youth', 'YoungAdult', 'MiddleAged', 'Senior'])
result = cats.value_counts()  # pd.value_counts is deprecated
result
# Create a data frame 'data' from a 1000*4 random number matrix, generated from seed 12345
np.random.seed(12345)
data = DataFrame(np.random.randn(1000, 4))
# Show the description of data
data.describe()
# Find out all rows that has 1 or more values exceeding 3 or -3
data[(np.abs(data) > 3).any(axis=1)]
# For data, replace a value by 3 if it is larger than 3; by -3 if it is smaller than -3
data[data>3] = 3
data[data<-3] = -3
# Now show the description of data again to verify the result
data.describe()
# Run the next line to generate a data frame df
#
df = DataFrame(np.arange(5*4).reshape(5, 4))
df
# Now permutate (randomly reorder) the rows in df
sampler = np.random.permutation(len(df))
df.iloc[sampler]
# Randomly select three rows from df without replacement
df.iloc[np.random.permutation(len(df))[:3]]
# Now randomly select three rows from df with replacement
sampler = np.random.randint(0, len(df), size=3)
df.iloc[sampler]
# Run next line to create a data frame 'df'
#
df = DataFrame({'key': ['b', 'b', 'a', 'c', 'a', 'b'],
'data1': range(6)})
df
# Now create a new data frame showing the dummy matrix of df.key
pd.get_dummies(df.key)
# Create a data frame movies from movies.dat with column names of 'movie_id', 'title', 'genres'
movies = pd.read_csv('datasets/movielens/movies.dat',
sep='::', header=None,
names=['movie_id', 'title', 'genres'],
engine='python')
movies.head(10)
# Get a list of unique movie genres, sorted alphabetically
genre_iter = [set(x.split('|')) for x in movies.genres]
genres = sorted(set.union(*genre_iter))
genres
| Chapter7.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Review of Supervised Learning with scikit-learn
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import sklearn
sklearn.set_config(print_changed_only=True)
# read data.
# you can find a description in data/bank-campaign-desc.txt
data = pd.read_csv("data/bank-campaign.csv")
data.shape
data.columns
data.head()
y = data.target
X = data.drop("target", axis=1)
X.shape
y.shape
y.head()
data.target.value_counts()
data.target.value_counts(normalize=True)
# Splitting the data:
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(
X, y, test_size=.2, random_state=42, stratify=y)
np.sum(y_train == "yes") / len(y_train)
np.sum(y_test == "yes") / len(y_test)
# import model
from sklearn.linear_model import LogisticRegression
# instantiate model, set parameters
lr = LogisticRegression(C=0.1, max_iter=1000)
# fit model
lr.fit(X_train, y_train)
lr.coef_
# Evaluate the model (score predicts internally); compare with the majority-class ('no') baseline:
lr.score(X_train, y_train)
(y_train == "no").mean()
lr.score(X_test, y_test)
# # https://github.com/amueller/ml-workshop-2-of-4
# # Exercise
# Load the dataset ``data/bike_day_raw.csv``, which has the regression target ``cnt``.
# This dataset is hourly bike rentals in the citybike platform. The ``cnt`` column is the number of rentals, which we want to predict from date and weather data.
#
# Split the data into a training and a test set using ``train_test_split``.
# Use the ``LinearRegression`` class to learn a regression model on this data. You can evaluate with the ``score`` method, which provides the $R^2$ or using the ``mean_squared_error`` function from ``sklearn.metrics`` (or write it yourself in numpy).
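One possible shape of the solution, demonstrated on synthetic data since `data/bike_day_raw.csv` is not included here (the feature column names below are invented for this sketch):

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split

# Synthetic stand-in for the bike data: in practice, replace with
#   df = pd.read_csv("data/bike_day_raw.csv"); y = df.pop("cnt"); X = df
rng = np.random.RandomState(0)
X_demo = pd.DataFrame(rng.randn(300, 3), columns=['temp', 'hum', 'windspeed'])
y_demo = 3 * X_demo['temp'] - 2 * X_demo['hum'] + 0.1 * rng.randn(300)

X_tr, X_te, y_tr, y_te = train_test_split(X_demo, y_demo, random_state=0)
reg = LinearRegression().fit(X_tr, y_tr)
r2 = reg.score(X_te, y_te)                        # coefficient of determination
mse = mean_squared_error(y_te, reg.predict(X_te))
```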
# +
# # %load solutions/bike_regression.py
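# One possible sketch of the exercise workflow on synthetic data (the real notebook loads ``data/bike_day_raw.csv``; the features and coefficients below are made up for illustration):

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split

# Synthetic stand-in for the date/weather features and the `cnt` target
rng = np.random.RandomState(0)
X = rng.uniform(size=(300, 4))
cnt = X @ np.array([10., -3., 5., 0.]) + rng.normal(scale=0.1, size=300)

X_train, X_test, y_train, y_test = train_test_split(X, cnt, random_state=42)
reg = LinearRegression().fit(X_train, y_train)

# `score` reports R^2; mean_squared_error gives the MSE on the same split
print("R^2:", reg.score(X_test, y_test))
print("MSE:", mean_squared_error(y_test, reg.predict(X_test)))
```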
| notebooks/01 - Review of Supervised Learning.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/shamim237/ML-Handcrafted_Features/blob/main/SIFT_SVM.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + id="P0eGPBtM27W-"
import os
import numpy as np
import matplotlib.pyplot as plt
import cv2
import pickle
from skimage.feature import hog
import random
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, recall_score, f1_score, precision_score
from sklearn.metrics import f1_score
from sklearn.metrics import confusion_matrix
# + id="UPBGpFHM3Oyk"
# img_dir = '/content/drive/MyDrive/digit_dataset'
# + id="aUI2GQ0s3QGb"
#categories = ['0', '1', '2', '3', '4', '5', '6', '7', '8', '9']
# + id="eGjFv6ID3QJf"
# data = []
# for category in categories:
# path = os.path.join(img_dir, category)
# label = categories.index(category)
# for img in os.listdir(path):
# img_path = os.path.join(path, img)
# img_data = cv2.imread(img_path)
# #resized_img = cv2.resize(img_data, (128, 128))
# gray= cv2.cvtColor(img_data,cv2.COLOR_BGR2GRAY)
# # create SIFT feature extractor
# sift = cv2.xfeatures2d.SIFT_create()
# # detect features from the image
# keypoints, descriptors = sift.detectAndCompute(img_data, None)
# # draw the detected key points
# sift_image = cv2.drawKeypoints(gray, keypoints, img_data)
# image = np.array(sift_image).flatten()
# data.append([image, label])
# + id="onRSmOF13QMZ"
#print(len(data))
# + id="oLiHoOMG3QOU"
# pick_in = open('/content/drive/MyDrive/Colab Notebooks/sift_sign_data.pickle', 'wb')
# pickle.dump(data, pick_in)
# pick_in.close()
# + id="NGrDPAkt3QQh"
pick_in = open('/content/drive/MyDrive/Colab Notebooks/sift_sign_data.pickle', 'rb')
data = pickle.load(pick_in)
pick_in.close()
# + id="xeMEP1Ti3QSs"
random.shuffle(data)
features = []
labels = []
# + id="-8lCKavk3QVO"
for feature,label in data:
features.append(feature)
labels.append(label)
# + id="uYZvTUQA3QYC"
x_train, x_test, y_train, y_test = train_test_split(features, labels, test_size=0.11, shuffle=True)
# + colab={"base_uri": "https://localhost:8080/"} id="qiHnL3Sj9BLi" outputId="c6637196-57c9-4015-b2d1-7f5402ed987b"
print(len(x_test))
# + id="yicU3Xn33QZz"
rbf = SVC(kernel='rbf').fit(x_train, y_train)
# + id="0QIUDXKQ3Qco"
rbf_pred = rbf.predict(x_test)
# + id="_Sa_Pk5XRtvI"
lin = SVC(kernel='linear').fit(x_train, y_train)
# + id="mAHz7ZQeRwx1"
lin_pred = lin.predict(x_test)
# + id="p43ojSg5srWw"
poly = SVC(kernel = 'poly')
# + id="CUDcgkrd2yoR"
model= poly.fit(x_train, y_train)
# + id="fGfUD5Sisrdi"
poly_pred = poly.predict(x_test)
# + colab={"base_uri": "https://localhost:8080/"} id="U8a3_s5gEHaz" outputId="fa2522e3-1a29-44d1-dd76-3aba4393dc84"
rbf_accuracy = accuracy_score(y_test, rbf_pred)
rbf_f1 = f1_score(y_test, rbf_pred, average='weighted')
rbf_precision = precision_score(y_test, rbf_pred, average= 'weighted')
rbf_recall = recall_score(y_test, rbf_pred, average= 'weighted')
print('Accuracy (RBF Kernel): ', "%.2f" % (rbf_accuracy*100))
print('F1 (RBF Kernel): ', "%.2f" % (rbf_f1*100))
print('Precision: ', "%.2f" % (rbf_precision*100))
print('Recall: ', "%.2f" % (rbf_recall*100))
# + colab={"base_uri": "https://localhost:8080/"} id="UCO4ZAp7MmEE" outputId="f0cafebc-4a42-4b70-ed2f-925278822cca"
poly_accuracy = accuracy_score(y_test, poly_pred)
poly_f1 = f1_score(y_test, poly_pred, average='weighted')
poly_precision = precision_score(y_test, poly_pred, average= 'weighted')
poly_recall = recall_score(y_test, poly_pred, average= 'weighted')
print('Accuracy (Polynomial Kernel): ', "%.2f" % (poly_accuracy*100))
print('F1 (Polynomial Kernel): ', "%.2f" % (poly_f1*100))
print('Precision: ', "%.2f" % (poly_precision*100))
print('Recall: ', "%.2f" % (poly_recall*100))
# + colab={"base_uri": "https://localhost:8080/"} id="22XGXkEzXRDY" outputId="5b452cec-5981-41ae-db4e-2a690bb490ef"
lin_accuracy = accuracy_score(y_test, lin_pred)
lin_f1 = f1_score(y_test, lin_pred, average='weighted')
lin_precision = precision_score(y_test, lin_pred, average= 'weighted')
lin_recall = recall_score(y_test, lin_pred, average= 'weighted')
print('Accuracy (Linear Kernel): ', "%.2f" % (lin_accuracy*100))
print('F1 (Linear Kernel): ', "%.2f" % (lin_f1*100))
print('Precision: ', "%.2f" % (lin_precision*100))
print('Recall: ', "%.2f" % (lin_recall*100))
# + colab={"base_uri": "https://localhost:8080/"} id="FXRokm-lXUZl" outputId="fc790838-cab1-4f06-8637-f93ee89e015c"
cm = confusion_matrix(y_test, lin_pred)
print(cm)
# + colab={"base_uri": "https://localhost:8080/"} id="RqATfH5wEHl6" outputId="68aa2222-e374-4324-fc75-c03e5155c024"
cm = confusion_matrix(y_test, rbf_pred)
print(cm)
# + colab={"base_uri": "https://localhost:8080/", "height": 604} id="Ir5pDT_UEHpG" outputId="b3023e62-bf0d-46e6-a004-41d9e377b875"
from sklearn.metrics import confusion_matrix
import pandas as pd
import seaborn as sn
import matplotlib.pyplot as plt
# %matplotlib inline
import numpy as np
data = confusion_matrix(y_test, poly_pred)
df_cm = pd.DataFrame(data, columns=np.unique(y_test), index = np.unique(y_test))
df_cm.index.name = 'Actual'
df_cm.columns.name = 'Predicted'
plt.figure(figsize = (10,9))
plt.title('Confusion Matrix (SIFT-SVM (Polynomial))')
sn.set(font_scale=1.4)#for label size
sn.heatmap(df_cm, cmap="Blues", annot=True, fmt= 'g' ,annot_kws={"size": 12})# font size
| SIFT_SVM.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
x=input("enter username")
if x== "Aaron":
print("username correct")
else:
print("username incorrect")
income=int(input("enter your income in lakhs"))
if((income>=2) and (income<=5)):
print("base tax")
else:
print("special tax rate")
income=int(input("enter your income"))
if((income>=2) and (income<=50)):
    print("tax rate is 5%")
else:
    print("tax rate is 10%")
x=int(input("base price"))
gst=int(input("enter the slab value"))
y=int(input("enter road development cess"))
z=int(input("enter the road tax"))
f=int(input("enter economic cess"))
b=int(input("enter vat"))
price=x+(gst+y+z+f+b)*x/100
print(price)
x=int(input("base price"))
gst=int(input("enter the slab value"))
if gst==1:
    rate=0
elif gst==2:
    rate=5
elif gst==3:
    rate=12
elif gst==4:
    rate=18
elif gst==5:
    rate=28
else:
    rate=0
y=int(input("enter road development cess"))
z=int(input("enter the road tax"))
f=int(input("enter economic cess"))
b=int(input("enter vat"))
price=x+(rate+y+z+f+b)*x/100
print(price)
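# A non-interactive rewrite of the slab lookup above, using a dict instead of a chain of `if` statements (the figures passed in are made up for illustration):

```python
# Slab number -> GST percentage, same mapping as the if-chain above
GST_RATES = {1: 0, 2: 5, 3: 12, 4: 18, 5: 28}

def on_road_price(base, slab, road_cess, road_tax, eco_cess, vat):
    rate = GST_RATES[slab]
    # All charges are percentages applied to the base price
    return base + (rate + road_cess + road_tax + eco_cess + vat) * base / 100

print(on_road_price(100000, 4, 1, 2, 1, 5))  # -> 127000.0
```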
# +
| Day 8.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
from sklearn import svm, datasets
from sklearn.model_selection import GridSearchCV

iris = datasets.load_iris()
parameters = {'kernel':('linear', 'rbf'), 'C':[1, 10]}
svr = svm.SVC()
clf = GridSearchCV(svr, parameters)
clf.fit(iris.data, iris.target)
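# A self-contained version with the current scikit-learn API (`GridSearchCV` now lives in `sklearn.model_selection`; the old `grid_search` module was removed), showing how to read off the winning parameters after the fit:

```python
from sklearn import svm, datasets
from sklearn.model_selection import GridSearchCV

iris = datasets.load_iris()
parameters = {'kernel': ('linear', 'rbf'), 'C': [1, 10]}
clf = GridSearchCV(svm.SVC(), parameters)
clf.fit(iris.data, iris.target)

# The fitted search object exposes the best combination and its CV score
print(clf.best_params_)
```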
| Model Evaluation and Validation/GridSearchCV.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import numpy as np
import pandas as pd
import astroquery
import matplotlib.pyplot as plt
import glob
from tqdm import tqdm
from numpy.random import poisson, beta, uniform
import occSimFuncs as occFunc
from tvguide import TessPointing
from astropy.coordinates import SkyCoord
from astropy import units as u
# %matplotlib inline
# -
np.arange(0,10,1)
| code/Untitled4.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Survival Analysis
# + [markdown] tags=["remove-cell"]
# Think Bayes, Second Edition
#
# Copyright 2020 <NAME>
#
# License: [Attribution-NonCommercial-ShareAlike 4.0 International (CC BY-NC-SA 4.0)](https://creativecommons.org/licenses/by-nc-sa/4.0/)
# + tags=["remove-cell"]
# If we're running on Colab, install empiricaldist
# https://pypi.org/project/empiricaldist/
import sys
IN_COLAB = 'google.colab' in sys.modules
if IN_COLAB:
# !pip install empiricaldist
# + tags=["remove-cell"]
# Get utils.py
import os
if not os.path.exists('utils.py'):
# !wget https://github.com/AllenDowney/ThinkBayes2/raw/master/soln/utils.py
# + tags=["remove-cell"]
from utils import set_pyplot_params
set_pyplot_params()
# -
# This chapter introduces "survival analysis", which is a set of statistical methods used to answer questions about the time until an event.
# In the context of medicine it is literally about survival, but it can be applied to the time until any kind of event, or instead of time it can be about space or other dimensions.
#
# Survival analysis is challenging because the data we have are often incomplete. But as we'll see, Bayesian methods are particularly good at working with incomplete data.
#
# As examples, we'll consider two applications that are a little less serious than life and death: the time until light bulbs fail and the time until dogs in a shelter are adopted.
# To describe these "survival times", we'll use the Weibull distribution.
# ## The Weibull Distribution
#
# The [Weibull distribution](https://en.wikipedia.org/wiki/Weibull_distribution) is often used in survival analysis because it is a good model for the distribution of lifetimes for manufactured products, at least over some parts of the range.
#
# SciPy provides several versions of the Weibull distribution; the one we'll use is called `weibull_min`.
# To make the interface consistent with our notation, I'll wrap it in a function that takes as parameters $\lambda$, which mostly affects the location or "central tendency" of the distribution, and $k$, which affects the shape.
# +
from scipy.stats import weibull_min
def weibull_dist(lam, k):
return weibull_min(k, scale=lam)
# -
# As an example, here's a Weibull distribution with parameters $\lambda=3$ and $k=0.8$.
lam = 3
k = 0.8
actual_dist = weibull_dist(lam, k)
# The result is an object that represents the distribution.
# Here's what the Weibull CDF looks like with those parameters.
# +
import numpy as np
from empiricaldist import Cdf
from utils import decorate
qs = np.linspace(0, 12, 101)
ps = actual_dist.cdf(qs)
cdf = Cdf(ps, qs)
cdf.plot()
decorate(xlabel='Duration in time',
ylabel='CDF',
title='CDF of a Weibull distribution')
# -
# `actual_dist` provides `rvs`, which we can use to generate a random sample from this distribution.
# + tags=["remove-cell"]
np.random.seed(17)
# -
data = actual_dist.rvs(10)
data
# So, given the parameters of the distribution, we can generate a sample.
# Now let's see if we can go the other way: given the sample, we'll estimate the parameters.
#
# Here's a uniform prior distribution for $\lambda$:
# +
from utils import make_uniform
lams = np.linspace(0.1, 10.1, num=101)
prior_lam = make_uniform(lams, name='lambda')
# -
# And a uniform prior for $k$:
ks = np.linspace(0.1, 5.1, num=101)
prior_k = make_uniform(ks, name='k')
# I'll use `make_joint` to make a joint prior distribution for the two parameters.
# +
from utils import make_joint
prior = make_joint(prior_lam, prior_k)
# -
# The result is a `DataFrame` that represents the joint prior, with possible values of $\lambda$ across the columns and values of $k$ down the rows.
#
# Now I'll use `meshgrid` to make a 3-D mesh with $\lambda$ on the first axis (`axis=0`), $k$ on the second axis (`axis=1`), and the data on the third axis (`axis=2`).
lam_mesh, k_mesh, data_mesh = np.meshgrid(
prior.columns, prior.index, data)
# Now we can use `weibull_dist` to compute the PDF of the Weibull distribution for each pair of parameters and each data point.
densities = weibull_dist(lam_mesh, k_mesh).pdf(data_mesh)
densities.shape
# The likelihood of the data is the product of the probability densities along `axis=2`.
likelihood = densities.prod(axis=2)
likelihood.sum()
# Now we can compute the posterior distribution in the usual way.
# + tags=["hide-output"]
from utils import normalize
posterior = prior * likelihood
normalize(posterior)
# -
# The following function encapsulates these steps.
# It takes a joint prior distribution and the data, and returns a joint posterior distribution.
def update_weibull(prior, data):
"""Update the prior based on data."""
lam_mesh, k_mesh, data_mesh = np.meshgrid(
prior.columns, prior.index, data)
densities = weibull_dist(lam_mesh, k_mesh).pdf(data_mesh)
likelihood = densities.prod(axis=2)
posterior = prior * likelihood
normalize(posterior)
return posterior
# Here's how we use it.
posterior = update_weibull(prior, data)
# And here's a contour plot of the joint posterior distribution.
# + tags=["hide-input"]
from utils import plot_contour
plot_contour(posterior)
decorate(title='Posterior joint distribution of Weibull parameters')
# -
# It looks like the range of likely values for $\lambda$ is about 1 to 4, which contains the actual value we used to generate the data, 3.
# And the range for $k$ is about 0.5 to 1.5, which contains the actual value, 0.8.
# + [markdown] tags=["hide-cell"]
# ## Marginal Distributions
#
# To be more precise about these ranges, we can extract the marginal distributions:
# + tags=["hide-cell"]
from utils import marginal
posterior_lam = marginal(posterior, 0)
posterior_k = marginal(posterior, 1)
# + [markdown] tags=["hide-cell"]
# And compute the posterior means and 90% credible intervals.
# + tags=["hide-cell"]
import matplotlib.pyplot as plt
plt.axvline(3, color='C5')
posterior_lam.plot(color='C4', label='lambda')
decorate(xlabel='lam',
ylabel='PDF',
title='Posterior marginal distribution of lam')
# + [markdown] tags=["hide-cell"]
# The vertical gray line shows the actual value of $\lambda$.
#
# Here's the marginal posterior distribution for $k$.
# + tags=["hide-cell"]
plt.axvline(0.8, color='C5')
posterior_k.plot(color='C12', label='k')
decorate(xlabel='k',
ylabel='PDF',
title='Posterior marginal distribution of k')
# + [markdown] tags=["hide-cell"]
# The posterior distributions are wide, which means that with only 10 data points we can't estimate the parameters precisely.
# But for both parameters, the actual value falls in the credible interval.
# + tags=["hide-cell"]
print(lam, posterior_lam.credible_interval(0.9))
# + tags=["hide-cell"]
print(k, posterior_k.credible_interval(0.9))
# -
# ## Incomplete Data
#
# In the previous example we were given 10 random values from a Weibull distribution, and we used them to estimate the parameters (which we pretended we didn't know).
#
# But in many real-world scenarios, we don't have complete data; in particular, when we observe a system at a point in time, we generally have information about the past, but not the future.
#
# As an example, suppose you work at a dog shelter and you are interested in the time between the arrival of a new dog and when it is adopted.
# Some dogs might be snapped up immediately; others might have to wait longer.
# The people who operate the shelter might want to make inferences about the distribution of these residence times.
#
# Suppose you monitor arrivals and departures over 8 weeks and 10 dogs arrive during that interval.
# I'll assume that their arrival times are distributed uniformly, so I'll generate random values like this.
# + tags=["remove-cell"]
np.random.seed(19)
# -
start = np.random.uniform(0, 8, size=10)
start
# Now let's suppose that the residence times follow the Weibull distribution we used in the previous example.
# We can generate a sample from that distribution like this:
# + tags=["remove-cell"]
np.random.seed(17)
# -
duration = actual_dist.rvs(10)
duration
# I'll use these values to construct a `DataFrame` that contains the arrival and departure times for each dog, called `start` and `end`.
# +
import pandas as pd
d = dict(start=start, end=start+duration)
obs = pd.DataFrame(d)
# -
# For display purposes, I'll sort the rows of the `DataFrame` by arrival time.
obs = obs.sort_values(by='start', ignore_index=True)
obs
# Notice that several of the lifelines extend past the observation window of 8 weeks.
# So if we observed this system at the beginning of Week 8, we would have incomplete information.
# Specifically, we would not know the future adoption times for Dogs 6, 7, and 8.
#
# I'll simulate this incomplete data by identifying the lifelines that extend past the observation window:
censored = obs['end'] > 8
# `censored` is a Boolean Series that is `True` for lifelines that extend past Week 8.
#
# Data that is not available is sometimes called "censored" in the sense that it is hidden from us.
# But in this case it is hidden because we don't know the future, not because someone is censoring it.
#
# For the lifelines that are censored, I'll modify `end` to indicate when they are last observed and `status` to indicate that the observation is incomplete.
obs['status'] = 1                  # 1 means the adoption time is known
obs.loc[censored, 'end'] = 8
obs.loc[censored, 'status'] = 0    # 0 means the observation is censored
# Now we can plot a "lifeline" for each dog, showing the arrival and departure times on a time line.
# + tags=["hide-cell"]
def plot_lifelines(obs):
"""Plot a line for each observation.
obs: DataFrame
"""
for y, row in obs.iterrows():
start = row['start']
end = row['end']
status = row['status']
if status == 0:
# ongoing
plt.hlines(y, start, end, color='C0')
else:
# complete
plt.hlines(y, start, end, color='C1')
plt.plot(end, y, marker='o', color='C1')
decorate(xlabel='Time (weeks)',
ylabel='Dog index',
title='Lifelines showing censored and uncensored observations')
plt.gca().invert_yaxis()
# + tags=["hide-input"]
plot_lifelines(obs)
# -
# And I'll add one more column to the table, which contains the duration of the observed parts of the lifelines.
obs['T'] = obs['end'] - obs['start']
# What we have simulated is the data that would be available at the beginning of Week 8.
# ## Using Incomplete Data
#
# Now, let's see how we can use both kinds of data, complete and incomplete, to infer the parameters of the distribution of residence times.
#
# First I'll split the data into two sets: `data1` contains residence times for dogs whose arrival and departure times are known; `data2` contains incomplete residence times for dogs who were not adopted during the observation interval.
data1 = obs.loc[~censored, 'T']
data2 = obs.loc[censored, 'T']
# + tags=["hide-cell"]
data1
# + tags=["hide-cell"]
data2
# -
# For the complete data, we can use `update_weibull`, which uses the PDF of the Weibull distribution to compute the likelihood of the data.
posterior1 = update_weibull(prior, data1)
# For the incomplete data, we have to think a little harder.
# At the end of the observation interval, we don't know what the residence time will be, but we can put a lower bound on it; that is, we can say that the residence time will be greater than `T`.
#
# And that means that we can compute the likelihood of the data using the survival function, which is the probability that a value from the distribution exceeds `T`.
#
# The following function is identical to `update_weibull` except that it uses `sf`, which computes the survival function, rather than `pdf`.
def update_weibull_incomplete(prior, data):
"""Update the prior using incomplete data."""
lam_mesh, k_mesh, data_mesh = np.meshgrid(
prior.columns, prior.index, data)
# evaluate the survival function
probs = weibull_dist(lam_mesh, k_mesh).sf(data_mesh)
likelihood = probs.prod(axis=2)
posterior = prior * likelihood
normalize(posterior)
return posterior
# Here's the update with the incomplete data.
posterior2 = update_weibull_incomplete(posterior1, data2)
# And here's what the joint posterior distribution looks like after both updates.
plot_contour(posterior2)
decorate(title='Posterior joint distribution, incomplete data')
# Compared to the previous contour plot, it looks like the range of likely values for $\lambda$ is substantially wider.
# We can see that more clearly by looking at the marginal distributions.
posterior_lam2 = marginal(posterior2, 0)
posterior_k2 = marginal(posterior2, 1)
# Here's the posterior marginal distribution for $\lambda$ compared to the distribution we got using all complete data.
# + tags=["hide-input"]
posterior_lam.plot(color='C5', label='All complete',
linestyle='dashed')
posterior_lam2.plot(color='C2', label='Some censored')
decorate(xlabel='lambda',
ylabel='PDF',
title='Marginal posterior distribution of lambda')
# -
# The distribution with some incomplete data is substantially wider.
#
# As an aside, notice that the posterior distribution does not come all the way to 0 on the right side.
# That suggests that the range of the prior distribution is not wide enough to cover the most likely values for this parameter.
# If I were concerned about making this distribution more accurate, I would go back and run the update again with a wider prior.
#
# Here's the posterior marginal distribution for $k$:
# + tags=["hide-input"]
posterior_k.plot(color='C5', label='All complete',
linestyle='dashed')
posterior_k2.plot(color='C12', label='Some censored')
decorate(xlabel='k',
ylabel='PDF',
title='Posterior marginal distribution of k')
# -
# In this example, the marginal distribution is shifted to the left when we have incomplete data, but it is not substantially wider.
#
# In summary, we have seen how to combine complete and incomplete data to estimate the parameters of a Weibull distribution, which is useful in many real-world scenarios where some of the data are censored.
#
# In general, the posterior distributions are wider when we have incomplete data, because less information leads to more uncertainty.
#
# This example is based on data I generated; in the next section we'll do a similar analysis with real data.
# ## Light Bulbs
#
# In 2007 [researchers ran an experiment](https://www.researchgate.net/publication/225450325_Renewal_Rate_of_Filament_Lamps_Theory_and_Experiment) to characterize the distribution of lifetimes for light bulbs.
# Here is their description of the experiment:
#
# > An assembly of 50 new Philips (India) lamps with the rating 40 W, 220 V (AC) was taken and installed in the horizontal orientation and uniformly distributed over a lab area 11 m x 7 m.
# >
# > The assembly was monitored at regular intervals of 12 h to look for failures. The instants of recorded failures were [recorded] and a total of 32 data points were obtained such that even the last bulb failed.
# + tags=["hide-cell"]
import os
datafile = 'lamps.csv'
if not os.path.exists(datafile):
# !wget https://gist.github.com/epogrebnyak/7933e16c0ad215742c4c104be4fbdeb1/raw/c932bc5b6aa6317770c4cbf43eb591511fec08f9/lamps.csv
# -
# We can load the data into a `DataFrame` like this:
df = pd.read_csv('lamps.csv', index_col=0)
df.head()
# Column `h` contains the times when bulbs failed in hours; Column `f` contains the number of bulbs that failed at each time.
# We can represent these values and frequencies using a `Pmf`, like this:
# +
from empiricaldist import Pmf
pmf_bulb = Pmf(df['f'].to_numpy(), df['h'])
pmf_bulb.normalize()
# -
# Because of the design of this experiment, we can consider the data to be a representative sample from the distribution of lifetimes, at least for light bulbs that are lit continuously.
# + [markdown] tags=["hide-cell"]
# The average lifetime is about 1400 h.
# + tags=["hide-cell"]
pmf_bulb.mean()
# -
# Assuming that these data are well modeled by a Weibull distribution, let's estimate the parameters that fit the data.
# Again, I'll start with uniform priors for $\lambda$ and $k$:
lams = np.linspace(1000, 2000, num=51)
prior_lam = make_uniform(lams, name='lambda')
ks = np.linspace(1, 10, num=51)
prior_k = make_uniform(ks, name='k')
# For this example, there are 51 values in the prior distribution, rather than the usual 101. That's because we are going to use the posterior distributions to do some computationally-intensive calculations.
# They will run faster with fewer values, but the results will be less precise.
#
# As usual, we can use `make_joint` to make the prior joint distribution.
prior_bulb = make_joint(prior_lam, prior_k)
# Although we have data for 50 light bulbs, there are only 32 unique lifetimes in the dataset. For the update, it is convenient to express the data in the form of 50 lifetimes, with each lifetime repeated the given number of times.
# We can use `np.repeat` to transform the data.
data_bulb = np.repeat(df['h'], df['f'])
len(data_bulb)
# Now we can use `update_weibull` to do the update.
posterior_bulb = update_weibull(prior_bulb, data_bulb)
# Here's what the posterior joint distribution looks like:
# + tags=["hide-input"]
plot_contour(posterior_bulb)
decorate(title='Joint posterior distribution, light bulbs')
# -
# To summarize this joint posterior distribution, we'll compute the posterior mean lifetime.
# ## Posterior Means
#
# To compute the posterior mean of a joint distribution, we'll make a mesh that contains the values of $\lambda$ and $k$.
lam_mesh, k_mesh = np.meshgrid(
prior_bulb.columns, prior_bulb.index)
# Now for each pair of parameters we'll use `weibull_dist` to compute the mean.
means = weibull_dist(lam_mesh, k_mesh).mean()
means.shape
# The result is an array with the same dimensions as the joint distribution.
#
# Now we need to weight each mean with the corresponding probability from the joint posterior.
prod = means * posterior_bulb
# Finally we compute the sum of the weighted means.
prod.to_numpy().sum()
# Based on the posterior distribution, we think the mean lifetime is about 1413 hours.
#
# The following function encapsulates these steps:
def joint_weibull_mean(joint):
"""Compute the mean of a joint distribution of Weibulls."""
lam_mesh, k_mesh = np.meshgrid(
joint.columns, joint.index)
means = weibull_dist(lam_mesh, k_mesh).mean()
prod = means * joint
return prod.to_numpy().sum()
# + [markdown] tags=["hide-cell"]
# ## Incomplete Information
#
# The previous update was not quite right, because it assumed each light bulb died at the instant we observed it.
# According to the report, the researchers only checked the bulbs every 12 hours. So if they see that a bulb has died, they know only that it died during the 12 hours since the last check.
#
# It is more strictly correct to use the following update function, which uses the CDF of the Weibull distribution to compute the probability that a bulb dies during a given 12 hour interval.
# + tags=["hide-cell"]
def update_weibull_between(prior, data, dt=12):
"""Update the prior based on data."""
lam_mesh, k_mesh, data_mesh = np.meshgrid(
prior.columns, prior.index, data)
dist = weibull_dist(lam_mesh, k_mesh)
cdf1 = dist.cdf(data_mesh)
    cdf2 = dist.cdf(data_mesh-dt)
likelihood = (cdf1 - cdf2).prod(axis=2)
posterior = prior * likelihood
normalize(posterior)
return posterior
# + [markdown] tags=["hide-cell"]
# The probability that a value falls in an interval is the difference between the CDF at the beginning and end of the interval.
#
# Here's how we run the update.
# + tags=["hide-cell"]
posterior_bulb2 = update_weibull_between(prior_bulb, data_bulb)
# + [markdown] tags=["hide-cell"]
# And here are the results.
# + tags=["hide-cell"]
plot_contour(posterior_bulb2)
decorate(title='Joint posterior distribution, light bulbs')
# + [markdown] tags=["hide-cell"]
# Visually this result is almost identical to what we got using the PDF.
# And that's good news, because it suggests that using the PDF can be a good approximation even if it's not strictly correct.
#
# To see whether it makes any difference at all, let's check the posterior means.
# + tags=["hide-cell"]
joint_weibull_mean(posterior_bulb)
# + tags=["hide-cell"]
joint_weibull_mean(posterior_bulb2)
# + [markdown] tags=["hide-cell"]
# When we take into account the 12-hour interval between observations, the posterior mean is about 6 hours less.
# And that makes sense: if we assume that a bulb is equally likely to expire at any point in the interval, the average would be the midpoint of the interval.
# -
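# A quick numerical check (not from the book) of the midpoint intuition: if bulbs die uniformly within each 12-hour check interval, recorded failure times overstate the true lifetimes by about $dt/2 = 6$ hours on average. The simulated lifetimes below are made up.

```python
import numpy as np

rng = np.random.RandomState(0)
dt = 12
true = rng.uniform(0, 2000, size=100000)   # hypothetical true lifetimes
recorded = np.ceil(true / dt) * dt          # failure noticed at the next check

# Average overstatement should be close to dt/2 = 6 hours
print((recorded - true).mean())
```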
# ## Posterior Predictive Distribution
#
# Suppose you install 100 light bulbs of the kind in the previous section, and you come back to check on them after 1000 hours. Based on the posterior distribution we just computed, what is the distribution of the number of bulbs you find dead?
#
# If we knew the parameters of the Weibull distribution for sure, the answer would be a binomial distribution.
#
# For example, if we know that $\lambda=1550$ and $k=4.25$, we can use `weibull_dist` to compute the probability that a bulb dies before you return:
# +
lam = 1550
k = 4.25
t = 1000
prob_dead = weibull_dist(lam, k).cdf(t)
prob_dead
# -
# If there are 100 bulbs and each has this probability of dying, the number of dead bulbs follows a binomial distribution.
# +
from utils import make_binomial
n = 100
p = prob_dead
dist_num_dead = make_binomial(n, p)
# + [markdown] tags=["hide-cell"]
# And here's what it looks like.
# + tags=["hide-cell"]
dist_num_dead.plot(label='known parameters')
decorate(xlabel='Number of dead bulbs',
ylabel='PMF',
title='Predictive distribution with known parameters')
# -
# But that's based on the assumption that we know $\lambda$ and $k$, and we don't.
# Instead, we have a posterior distribution that contains possible values of these parameters and their probabilities.
#
# So the posterior predictive distribution is not a single binomial; instead it is a mixture of binomials, weighted with the posterior probabilities.
#
# We can use `make_mixture` to compute the posterior predictive distribution.
# It doesn't work with joint distributions, but we can convert the `DataFrame` that represents a joint distribution to a `Series`, like this:
posterior_series = posterior_bulb.stack()
posterior_series.head()
# The result is a `Series` with a `MultiIndex` that contains two "levels": the first level contains the values of `k`; the second contains the values of `lam`.
#
# With the posterior in this form, we can iterate through the possible parameters and compute a predictive distribution for each pair.
pmf_seq = []
for (k, lam) in posterior_series.index:
prob_dead = weibull_dist(lam, k).cdf(t)
pmf = make_binomial(n, prob_dead)
pmf_seq.append(pmf)
# Now we can use `make_mixture`, passing as parameters the posterior probabilities in `posterior_series` and the sequence of binomial distributions in `pmf_seq`.
# +
from utils import make_mixture
post_pred = make_mixture(posterior_series, pmf_seq)
# -
# Here's what the posterior predictive distribution looks like, compared to the binomial distribution we computed with known parameters.
# + tags=["hide-input"]
dist_num_dead.plot(label='known parameters')
post_pred.plot(label='unknown parameters')
decorate(xlabel='Number of dead bulbs',
ylabel='PMF',
title='Posterior predictive distribution')
# -
# The posterior predictive distribution is wider because it represents our uncertainty about the parameters as well as our uncertainty about the number of dead bulbs.
# ## Summary
#
# This chapter introduces survival analysis, which is used to answer questions about the time until an event, and the Weibull distribution, which is a good model for "lifetimes" (broadly interpreted) in a number of domains.
#
# We used joint distributions to represent prior probabilities for the parameters of the Weibull distribution, and we updated them three ways: knowing the exact duration of a lifetime, knowing a lower bound, and knowing that a lifetime fell in a given interval.
#
# These examples demonstrate a feature of Bayesian methods: they can be adapted to handle incomplete, or "censored", data with only small changes. As an exercise, you'll have a chance to work with one more type of censored data, when we are given an upper bound on a lifetime.
#
# The methods in this chapter work with any distribution with two parameters.
# In the exercises, you'll have a chance to estimate the parameters of a two-parameter gamma distribution, which is used to describe a variety of natural phenomena.
#
# And in the next chapter we'll move on to models with three parameters!
# ## Exercises
# **Exercise:** Using data about the lifetimes of light bulbs, we computed the posterior distribution of the parameters of a Weibull distribution, $\lambda$ and $k$, and the posterior predictive distribution for the number of dead bulbs, out of 100, after 1000 hours.
#
# Now suppose you do the experiment: You install 100 light bulbs, come back after 1000 hours, and find 20 dead light bulbs.
# Update the posterior distribution based on this data.
# How much does it change the posterior mean?
# + [markdown] tags=["hide-cell"]
# Suggestions:
#
# 1. Use a mesh grid to compute the probability of finding a bulb dead after 1000 hours for each pair of parameters.
#
# 2. For each of those probabilities, compute the likelihood of finding 20 dead bulbs out of 100.
#
# 3. Use those likelihoods to update the posterior distribution.
# +
# Solution
t = 1000
lam_mesh, k_mesh = np.meshgrid(
prior_bulb.columns, prior_bulb.index)
prob_dead = weibull_dist(lam_mesh, k_mesh).cdf(t)
prob_dead.shape
# +
# Solution
from scipy.stats import binom
k = 20
n = 100
likelihood = binom(n, prob_dead).pmf(k)
likelihood.shape
# +
# Solution
posterior_bulb3 = posterior_bulb * likelihood
normalize(posterior_bulb3)
plot_contour(posterior_bulb3)
decorate(title='Joint posterior distribution with k=20')
# +
# Solution
# Since there were more dead bulbs than expected,
# the posterior mean is a bit less after the update.
joint_weibull_mean(posterior_bulb3)
# -
# **Exercise:** In this exercise, we'll use one month of data to estimate the parameters of a distribution that describes daily rainfall in Seattle.
# Then we'll compute the posterior predictive distribution for daily rainfall and use it to estimate the probability of a rare event, like more than 1.5 inches of rain in a day.
#
# According to hydrologists, the distribution of total daily rainfall (for days with rain) is well modeled by a two-parameter
# gamma distribution.
#
# When we worked with the one-parameter gamma distribution in <<_TheGammaDistribution>>, we used the Greek letter $\alpha$ for the parameter.
#
# For the two-parameter gamma distribution, we will use $k$ for the "shape parameter", which determines the shape of the distribution, and the Greek letter $\theta$ or `theta` for the "scale parameter".
# + [markdown] tags=["hide-cell"]
# The following function takes these parameters and returns a `gamma` object from SciPy.
# + tags=["hide-cell"]
import scipy.stats
def gamma_dist(k, theta):
"""Makes a gamma object.
k: shape parameter
theta: scale parameter
returns: gamma object
"""
return scipy.stats.gamma(k, scale=theta)
# + [markdown] tags=["hide-cell"]
# Now we need some data.
# The following cell downloads data I collected from the National Oceanic and Atmospheric Administration ([NOAA](http://www.ncdc.noaa.gov/cdo-web/search)) for Seattle, Washington in May 2020.
# + tags=["hide-cell"]
# Load the data file
datafile = '2203951.csv'
if not os.path.exists(datafile):
# !wget https://github.com/AllenDowney/ThinkBayes2/raw/master/data/2203951.csv
# + [markdown] tags=["hide-cell"]
# Now we can load it into a `DataFrame`:
# + tags=["hide-cell"]
weather = pd.read_csv('2203951.csv')
weather.head()
# + [markdown] tags=["hide-cell"]
# I'll make a Boolean Series to indicate which days it rained.
# + tags=["hide-cell"]
rained = weather['PRCP'] > 0
rained.sum()
# + [markdown] tags=["hide-cell"]
# And select the total rainfall on the days it rained.
# + tags=["hide-cell"]
prcp = weather.loc[rained, 'PRCP']
prcp.describe()
# + [markdown] tags=["hide-cell"]
# Here's what the CDF of the data looks like.
# + tags=["hide-cell"]
cdf_data = Cdf.from_seq(prcp)
cdf_data.plot()
decorate(xlabel='Total rainfall (in)',
ylabel='CDF',
title='Distribution of rainfall on days it rained')
# + [markdown] tags=["hide-cell"]
# The maximum in the data is 1.14 inches of rain in one day.
# To estimate the probability of more than 1.5 inches, we need to extrapolate from the data we have, so our estimate will depend on whether the gamma distribution is really a good model.
# -
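# Whether that extrapolation is trustworthy depends on how well the gamma model fits. One quick, informal check is to compare the sample against the fitted distribution with a Kolmogorov–Smirnov statistic — informal because fitting the parameters first biases the test. A sketch with a hypothetical rainfall-like sample:

```python
import numpy as np
from scipy import stats

# Hypothetical sample drawn from a gamma distribution, for illustration only
sample = stats.gamma(0.8, scale=0.3).rvs(size=26, random_state=np.random.default_rng(0))

# Fit shape and scale with the location pinned at zero, then compare
k_hat, _, theta_hat = stats.gamma.fit(sample, floc=0)
result = stats.kstest(sample, stats.gamma(k_hat, scale=theta_hat).cdf)
print(result.statistic)  # small values indicate the model is plausible
```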
# I suggest you proceed in the following steps:
#
# 1. Construct a prior distribution for the parameters of the gamma distribution. Note that $k$ and $\theta$ must be greater than 0.
#
# 2. Use the observed rainfalls to update the distribution of parameters.
#
# 3. Compute the posterior predictive distribution of rainfall, and use it to estimate the probability of getting more than 1.5 inches of rain in one day.
# + tags=["hide-cell"]
# Solution
# I'll use the MLE parameters of the gamma distribution
# to help me choose priors
k_est, _, theta_est = scipy.stats.gamma.fit(prcp, floc=0)
k_est, theta_est
# +
# Solution
# I'll use uniform priors for the parameters.
# I chose the upper bounds by trial and error.
ks = np.linspace(0.01, 2, num=51)
prior_k = make_uniform(ks, name='k')
# +
# Solution
thetas = np.linspace(0.01, 1.5, num=51)
prior_theta = make_uniform(thetas, name='theta')
# +
# Solution
# Here's the joint prior
prior = make_joint(prior_k, prior_theta)
# +
# Solution
# I'll use a grid to compute the densities
k_mesh, theta_mesh, data_mesh = np.meshgrid(
prior.columns, prior.index, prcp)
# +
# Solution
# Here's the 3-D array of densities
densities = gamma_dist(k_mesh, theta_mesh).pdf(data_mesh)
densities.shape
# +
# Solution
# Which we reduce by multiplying along axis 2
likelihood = densities.prod(axis=2)
likelihood.sum()
# +
# Solution
# Now we can do the update in the usual way
posterior = prior * likelihood
normalize(posterior)
# +
# Solution
# And here's what the posterior looks like
plot_contour(posterior)
decorate(title='Posterior distribution, parameters of a gamma distribution')
# +
# Solution
# I'll check the marginal distributions to make sure the
# range of the priors is wide enough
from utils import marginal
posterior_k = marginal(posterior, 0)
posterior_theta = marginal(posterior, 1)
# +
# Solution
# The marginal distribution for k is close to 0 at both ends
posterior_k.plot(color='C4')
decorate(xlabel='k',
ylabel='PDF',
title='Posterior marginal distribution of k')
# +
# Solution
posterior_k.mean(), posterior_k.credible_interval(0.9)
# +
# Solution
# Same with the marginal distribution of theta
posterior_theta.plot(color='C2')
decorate(xlabel='theta',
ylabel='PDF',
title='Posterior marginal distribution of theta')
# +
# Solution
posterior_theta.mean(), posterior_theta.credible_interval(0.9)
# +
# Solution
# To compute the posterior predictive distribution,
# I'll stack the joint posterior to make a Series
# with a MultiIndex
posterior_series = posterior.stack()
posterior_series.head()
# +
# Solution
# I'll extend the predictive distribution up to 2 inches
low, high = 0.01, 2
# +
# Solution
# Now we can iterate through `posterior_series`
# and make a sequence of predictive Pmfs, one
# for each possible pair of parameters
from utils import pmf_from_dist
qs = np.linspace(low, high, num=101)
pmf_seq = []
for (theta, k) in posterior_series.index:
dist = gamma_dist(k, theta)
pmf = pmf_from_dist(dist, qs)
pmf_seq.append(pmf)
# +
# Solution
# And we can use `make_mixture` to make the posterior predictive
# distribution
post_pred = make_mixture(posterior_series, pmf_seq)
# +
# Solution
# Here's what it looks like.
post_pred.make_cdf().plot(label='rainfall')
decorate(xlabel='Total rainfall (in)',
ylabel='CDF',
title='Posterior predictive distribution of rainfall')
# +
# Solution
# The probability of more than 1.5 inches of rain is small
cdf = post_pred.make_cdf()
p_gt = 1 - cdf(1.5)
p_gt
# +
# Solution
# The reciprocal is easier to interpret: the average
# number of rainy days between such events
1 / p_gt
# -
| soln/chap14.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## How to Test
# ### Equivalence partitioning
# Think hard about the different cases the code will run under: this is science, not coding!
# We can't write a test for every possible input: this is an infinite amount of work.
# We need to write tests to rule out different bugs. There's no need to separately test *equivalent* inputs.
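# One common way to encode "one test per equivalence class" is a table of representative cases driven through a single loop. A sketch with a toy function (test frameworks offer nicer parametrization, as we'll see later):

```python
def sign(x):
    # Toy function under test: its inputs fall into three equivalence classes
    return (x > 0) - (x < 0)

# One representative input per class, not every possible value
cases = [(-7, -1), (0, 0), (42, 1)]
for value, expected in cases:
    assert sign(value) == expected
```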
# Let's look at an example of this question outside of coding:
# * Research Project : Evolution of agricultural fields in Saskatchewan from aerial photography
# * In silico translation : Compute overlap of two rectangles
import matplotlib.pyplot as plt
from matplotlib.path import Path
import matplotlib.patches as patches
# %matplotlib inline
# Let's make a little fragment of matplotlib code to visualise a pair of fields.
# +
def show_fields(field1, field2):
def vertices(left, bottom, right, top):
verts = [(left, bottom),
(left, top),
(right, top),
(right, bottom),
(left, bottom)]
return verts
codes = [Path.MOVETO,
Path.LINETO,
Path.LINETO,
Path.LINETO,
Path.CLOSEPOLY]
path1 = Path(vertices(*field1), codes)
path2 = Path(vertices(*field2), codes)
fig = plt.figure()
ax = fig.add_subplot(111)
patch1 = patches.PathPatch(path1, facecolor='orange', lw=2)
patch2 = patches.PathPatch(path2, facecolor='blue', lw=2)
ax.add_patch(patch1)
ax.add_patch(patch2)
ax.set_xlim(0,5)
ax.set_ylim(0,5)
show_fields((1.,1.,4.,4.), (2.,2.,3.,3.))
# -
# Here, we can see that the area of overlap is the same as the area of the smaller field: 1.
# We could now go ahead and write a subroutine to calculate that, and also write some test cases for our answer.
# But first, let's consider the question abstractly: what other cases, *not equivalent to this one*, might there be?
# For example, this case is still just a full overlap, and is sufficiently equivalent that it's not worth another test:
show_fields((1.,1.,4.,4.),(2.5,1.7,3.2,3.4))
# But this case is no longer a full overlap, and should be tested separately:
show_fields((1.,1.,4.,4.),(2.,2.,3.,4.5))
# On a piece of paper, sketch now the other cases you think should be treated as non-equivalent. Some answers are below:
for _ in range(10):
print("\n\n\nSpoiler space\n\n\n")
show_fields((1.,1.,4.,4.),(2,2,4.5,4.5)) # Overlap corner
show_fields((1.,1.,4.,4.),(2.,2.,3.,4.)) # Just touching
show_fields((1.,1.,4.,4.),(4.5,4.5,5,5)) # No overlap
show_fields((1.,1.,4.,4.),(2.5,4,3.5,4.5)) # Just touching from outside
show_fields((1.,1.,4.,4.),(4,4,4.5,4.5)) # Touching corner
# ### Using our tests
# OK, so how might our tests be useful?
# Here's some code that **might** correctly calculate the area of overlap:
def overlap(field1, field2):
    left1, bottom1, right1, top1 = field1
    left2, bottom2, right2, top2 = field2
overlap_left = max(left1, left2)
overlap_bottom = max(bottom1, bottom2)
overlap_right = min(right1, right2)
overlap_top = min(top1, top2)
overlap_height = (overlap_top-overlap_bottom)
overlap_width = (overlap_right-overlap_left)
return overlap_height * overlap_width
# So how do we check our code?
# The manual approach would be to pick some cases, run the code once, and check the result by hand:
overlap((1.,1.,4.,4.),(2.,2.,3.,3.))
# That looks OK.
# But we can do better, we can write code which **raises an error** if it gets an unexpected answer:
assert overlap((1.,1.,4.,4.),(2.,2.,3.,3.)) == 1.0
assert overlap((1.,1.,4.,4.),(2.,2.,3.,4.5)) == 2.0
assert overlap((1.,1.,4.,4.),(2.,2.,4.5,4.5)) == 4.0
assert overlap((1.,1.,4.,4.),(4.5,4.5,5,5)) == 0.0
print(overlap((1.,1.,4.,4.),(4.5,4.5,5,5)))
show_fields((1.,1.,4.,4.),(4.5,4.5,5,5))
# What? Why is this wrong?
# In our calculation, we are actually getting:
overlap_left = 4.5
overlap_right = 4
overlap_width = -0.5
overlap_height = -0.5
# Both width and height are negative, resulting in a positive area.
# The above code didn't take into account the non-overlap correctly.
# It should be:
#
def overlap(field1, field2):
    left1, bottom1, right1, top1 = field1
    left2, bottom2, right2, top2 = field2
overlap_left = max(left1, left2)
overlap_bottom = max(bottom1, bottom2)
overlap_right = min(right1, right2)
overlap_top = min(top1, top2)
overlap_height = max(0, (overlap_top-overlap_bottom))
overlap_width = max(0, (overlap_right-overlap_left))
return overlap_height*overlap_width
assert overlap((1,1,4,4), (2,2,3,3)) == 1.0
assert overlap((1,1,4,4), (2,2,3,4.5)) == 2.0
assert overlap((1,1,4,4), (2,2,4.5,4.5)) == 4.0
assert overlap((1,1,4,4), (4.5,4.5,5,5)) == 0.0
assert overlap((1,1,4,4), (2.5,4,3.5,4.5)) == 0.0
assert overlap((1,1,4,4), (4,4,4.5,4.5)) == 0.0
# Note that we reran our other tests, to check that our fix didn't break something else. (Breakage like that is called "fallout".)
# ### Boundary cases
# "Boundary cases" are an important area to test:
#
# * Limit between two equivalence classes: edge and corner sharing fields
# * Wherever indices appear, check values at ``0``, ``N``, ``N+1``
# * Empty arrays:
# + [markdown] attributes={"classes": [" python"], "id": ""}
# ``` python
# atoms = [read_input_atom(input_atom) for input_atom in input_file]
# energy = force_field(atoms)
# ```
# -
# * What happens if ``atoms`` is an empty list?
# * What happens when a matrix/data-frame reaches one row, or one column?
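# For instance, a (hypothetical) `force_field` written in terms of `sum` handles the empty list gracefully, because the sum of no terms is zero — a property worth pinning down with a boundary-case test:

```python
def force_field(atoms):
    # Hypothetical pairwise energy: sum of |a - b| over distinct pairs.
    # sum() over an empty generator returns 0, so no atoms means zero energy.
    return sum(abs(a - b) for i, a in enumerate(atoms) for b in atoms[i + 1:])

assert force_field([]) == 0           # empty input: no pairs, zero energy
assert force_field([1.0]) == 0        # a single atom: still no pairs
assert force_field([1.0, 3.0]) == 2.0
```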
# ### Positive *and* negative tests
#
# * **Positive tests**: code should give correct answer with various inputs
# * **Negative tests**: code should crash as expected given invalid inputs, rather than lying
#
# Bad input should be expected and should fail early and explicitly.
#
# Testing should ensure that explicit failures do indeed happen.
# ### Raising exceptions
# In Python, we can signal an error state by raising an error:
# + attributes={"classes": [" python"], "id": ""}
def I_only_accept_positive_numbers(number):
# Check input
if number < 0:
raise ValueError("Input {} is negative".format(number))
# Do something
# -
I_only_accept_positive_numbers(5)
I_only_accept_positive_numbers(-5)
# There are standard exception types, like `ValueError`, that we can `raise`.
# We would like to be able to write tests like this (which is not valid Python as written):
#
# ``` python
# assert I_only_accept_positive_numbers(-5) == # Gives a value error
# ```
# But to do that, we need to learn about more sophisticated testing tools, called "test frameworks".
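# As a preview, here is one way to express such a test with the standard library's `unittest` (frameworks like pytest offer similar, more concise tools):

```python
import unittest

def I_only_accept_positive_numbers(number):
    # Same validation as above
    if number < 0:
        raise ValueError("Input {} is negative".format(number))

class TestValidation(unittest.TestCase):
    def test_negative_input_raises(self):
        # The test passes only if ValueError is actually raised
        with self.assertRaises(ValueError):
            I_only_accept_positive_numbers(-5)

    def test_positive_input_accepted(self):
        I_only_accept_positive_numbers(5)  # should not raise

suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestValidation)
result = unittest.TextTestRunner(verbosity=0).run(suite)
assert result.wasSuccessful()
```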
| ch03tests/02SaskatchewanFields.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# + [markdown] tags=["remove_cell"]
# # Quantum Circuits
# -
# ## Contents
#
# 1. [Introduction](#intro)
# 2. [What Is a Quantum Circuit?](#whatis)
# 3. [Example: Quantum Teleportation](#teleportation)
# 4. [Example: Variational Quantum Eigensolvers](#vqe)
# 5. [Why the Classical Parts?](#why-classical)
#
# ## 1. Introduction<a id="intro"></a>
#
# So far, we have seen several [single-qubit](https://qiskit.org/textbook/ch-states/single-qubit-gates.html) and [multi-qubit](https://qiskit.org/textbook/ch-gates/introduction.html) gates. We have also seen how to use these gates together with other components to build quantum circuits.
#
# Before implementing quantum algorithms on real quantum computers, it is important to pin down the definition of a quantum circuit, since we will be building quantum circuits to implement these algorithms.
#
# ## 2. What Is a Quantum Circuit?<a id="whatis"></a>
#
# A quantum circuit is a computational routine consisting of *coherent quantum operations on quantum data, such as qubits, and concurrent real-time classical computation*. It is an ordered sequence of *quantum gates, measurements, and resets*, all of which may be conditioned on, and use data from, real-time classical computation.
#
# A set of quantum gates is said to be [universal](https://qiskit.org/textbook/ch-gates/proving-universality.html) if any unitary transformation of the quantum data can be efficiently approximated arbitrarily well as a sequence of gates in the set. Any quantum program can be represented by a sequence of quantum circuits and non-concurrent classical computation.
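# To make "coherent operations" concrete, here is a small NumPy sketch (not Qiskit) checking that the Hadamard gate is unitary, and that Hadamard followed by CNOT maps $\vert 00\rangle$ to an entangled Bell state — the entangled-pair preparation used in the teleportation circuit below:

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])

# Unitarity: U @ U†  == I, so the operation is reversible (coherent)
assert np.allclose(H @ H.conj().T, np.eye(2))

# H on the first qubit, then CNOT: |00> -> (|00> + |11>)/sqrt(2)
state = np.zeros(4)
state[0] = 1.0
state = CNOT @ np.kron(H, np.eye(2)) @ state
assert np.allclose(state, np.array([1, 0, 0, 1]) / np.sqrt(2))
```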
#
# ## 3. Example: Quantum Teleportation<a id="teleportation"></a>
#
# Take a look at the quantum circuit below. You will learn later in this chapter that it implements the [quantum teleportation algorithm](https://qiskit.org/textbook/ch-algorithms/teleportation.html). For now, it suffices to look at the components of the quantum circuit.
#
# ![teleportation_labelled.svg](images/defining-quantum-circuits/teleportation_labelled.svg)
#
# The quantum circuit uses three qubits and two classical bits. There are four main components in this quantum circuit.
#
# ### Initialization and reset
#
# First, we need to start our quantum computation with a well-defined quantum state. This is achieved using the initialization and reset operations. Resets can be performed by a combination of single-qubit gates and concurrent real-time classical computation that monitors, via measurements, whether we have successfully created the desired state. The initialization of $q_0$ into a desired state $\vert\psi\rangle$ can then follow by applying single-qubit gates.
#
# ### Quantum gates
#
# Second, we apply a sequence of quantum gates that manipulate the three qubits as required by the teleportation algorithm. In this case, we only need to apply single-qubit Hadamard ($H$) and two-qubit Controlled-X ($\oplus$) gates.
#
# ### Measurements
#
# Third, we measure two of the three qubits. A classical computer interprets the measurement of each qubit as a classical outcome (0 or 1) and stores it in one of the two classical bits.
#
# ### Classically conditioned quantum gates
#
# Fourth, we apply single-qubit $Z$ and $X$ quantum gates on the third qubit. These gates are conditioned on the outcomes of the measurements stored in the two classical bits. In this case, we are using the results of concurrent real-time classical computation within the same quantum circuit.
# ## 4. Example: Variational Quantum Eigensolvers<a id="vqe"></a>
#
# Here is an example of a quantum program. You will learn in the following chapters that it implements a [variational quantum eigensolver](https://qiskit.org/textbook/ch-applications/vqe-molecules.html). In this program, a classical computer works *non-concurrently* in concert with a quantum computer.
#
# ![vqe-labeled.png](images/defining-quantum-circuits/vqe-labeled.png)
#
# ### The quantum block
#
# As with the quantum teleportation example above, a quantum state $\vert\Psi(\theta)\rangle$ is prepared by a combination of resets together with single- and multi-qubit quantum gates. Here, the state is parameterized by the quantity $\theta$. Once prepared, the quantum state is manipulated using quantum gates and measured. All of the operations within the quantum block consist of quantum circuits.
#
# ### The classical block
#
# Once the quantum state has been measured, a classical computer interprets the measurement results and computes their cost using a cost function chosen for the intended application. Based on this cost, the classical computer determines another value for the parameter $\theta$.
#
# ### Combined operation
#
# Once the classical computer determines the next value of $\theta$, a sequence of resets and single- and multi-qubit quantum gates is used in a quantum circuit to prepare $\vert\Psi(\theta)\rangle$, and this process continues until the cost of the measured quantum states stabilizes, or until some other predetermined outcome is reached.
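# The classical half of this loop can be sketched in plain Python. Here the quantum block is replaced by a toy cost function (a stand-in for "run the parameterized circuit, measure, and average"; a real VQE would estimate this from quantum measurements), and the classical optimizer is a simple finite-difference gradient descent:

```python
def estimated_cost(theta):
    # Toy stand-in for the quantum block: a real VQE would estimate
    # this cost from measurements of the state |Psi(theta)>.
    return (theta - 1.3) ** 2 + 0.5

theta, step = 0.0, 0.1
for _ in range(200):
    # Classical block: finite-difference gradient, then a parameter update
    grad = (estimated_cost(theta + 1e-4) - estimated_cost(theta - 1e-4)) / 2e-4
    theta -= step * grad

# The loop settles near the minimizing parameter
assert abs(theta - 1.3) < 1e-3
```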
# ## 5. Why the Classical Parts?<a id="why-classical"></a>
#
# While a universal quantum computer can do anything a classical computer can, we often add classical parts to our quantum circuits because quantum states are fragile.
#
# When we measure a qubit, we collapse its state and destroy a lot of information. Since all measurement does is destroy information, in theory we can always measure last and lose no computational advantage. In practice, measuring early offers many practical benefits.
#
# For example, in the teleportation circuit, we measure the qubits so we can send the information over classical channels instead of quantum channels. The advantage is that classical channels are very stable, whereas we don't really have a way of sending quantum information to other people, since such channels are so difficult to create.
#
# In the variational quantum eigensolver example, splitting the computation into smaller quantum computations actually loses us some computational advantage, but makes up for it on noisy hardware by reducing the time our qubits spend in superposition. This means there is less chance that interference will introduce inaccuracies into our results.
#
# Finally, to use the results of our quantum computation in our everyday classical world, we need to measure and interpret these states at the end of our computation.
#
| es/notebooks/ch-algorithms/defining-quantum-circuits.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # 2A.i - Mortality table in several containers
#
# No life-expectancy computation here, just different ways of reading the data of a mortality table.
# %matplotlib inline
import matplotlib.pyplot as plt
from jyquickhelper import add_notebook_menu
add_notebook_menu()
# ## Retrieving the data
#
# The data is listed on [Data Publica](http://www.data-publica.com/): [Table de mortalité](http://www.data-publica.com/opendata/7098--population-et-conditions-sociales-table-de-mortalite-de-1960-a-2010), which retrieved it from the Eurostat website via the following [listing](http://epp.eurostat.ec.europa.eu/NavTree_prod/everybody/BulkDownloadListing?sort=1&dir=data). In short, the link is [demo_mlifetable.tsv.gz](http://epp.eurostat.ec.europa.eu/NavTree_prod/everybody/BulkDownloadListing?file=data/demo_mlifetable.tsv.gz). The file is compressed in [gzip](http://fr.wikipedia.org/wiki/Gzip) format. We download it and decompress it.
url = "http://ec.europa.eu/eurostat/estat-navtree-portlet-prod/BulkDownloadListing?file=data/"
file = "demo_mlifetable.tsv.gz"
import pyensae
local = pyensae.download_data("demo_mlifetable.tsv.gz", url=url)
local = local[0]+".gz"
import gzip
with gzip.open(local, 'rb') as f:
file_content = f.read()
content = str(file_content, encoding="utf8")
with open("mortalite.txt", "w", encoding="utf8") as f:
f.write(content)
# Then we load it as a dataframe:
import pandas
dff = pandas.read_csv("mortalite.txt", sep="\t", encoding="utf8")
dff.head()
dff.shape
# The first column contains an aggregation of several fields. We want to transform this table so that we end up with a small number of columns:
#
# - indicateur (indicator)
# - genre (gender)
# - age
# - pays (country, or group of countries)
# - annee (year)
# - valeur (value)
#
# Age is represented as a string so that values such as ``Y_LT1`` (less than one year) and ``Y_GE85`` (85 and over) can be expressed. We change the format slightly so that ages sort in increasing order (otherwise ``Y2`` comes after ``Y10``). We save everything to a file so we don't have to start over later. Even so, the code below is quite slow on the full table, which will end up with nearly 5 million rows. We drop the missing values.
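# A quick standalone illustration of why the zero-padding matters: lexicographic sorting puts ``"Y10"`` before ``"Y2"``, while padded labels sort in numeric order:

```python
ages = ["Y2", "Y10", "Y1"]
# Plain string sort: "Y10" < "Y2" lexicographically, which is wrong numerically
assert sorted(ages) == ["Y1", "Y10", "Y2"]

# Zero-padding restores the numeric order
padded = ["Y%02d" % int(a.strip("Y")) for a in ages]
assert sorted(padded) == ["Y01", "Y02", "Y10"]
```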
# +
def format_age(s):
    if s.startswith("Y_"):
        if s.startswith("Y_LT"): s = "Y00_LT" + s[4:]
        elif s.startswith("Y_GE"): s = "Y85_GE" + s[4:]
        else: raise ValueError(s)
        return s
    else:
        i = int(s.strip("Y"))
        return "Y%02d" % i
def format_value(s):
if s.strip() == ":" : return -1
else : return float(s.strip(" ebp"))
if False: # on the full dataset this is rather slow; reduce the size to experiment
    dfsmall = dff.head(n=1000)  # we reduce the size in order to
    df = dfsmall                # develop the transformation
else:
    df = dff
print("step 1", df.shape)
dfi = df.reset_index().set_index("indic_de,sex,age,geo\\time")
dfi = dfi.drop('index', axis=1)
dfs = dfi.stack()
dfs = pandas.DataFrame({"valeur": dfs } )
print("step 2", dfs.shape)
dfs["valeur"] = dfs["valeur"].astype(str)
dfs["valeur"] = dfs["valeur"].apply( format_value )
dfs = dfs[ dfs.valeur >= 0 ].copy()
dfs = dfs.reset_index()
dfs.columns = ["index", "annee", "valeur"]
print("step 3", dfs.shape)
dfs["age"] = dfs["index"].apply ( lambda i : format_age(i.split(",")[2]))
dfs["indicateur"] = dfs["index"].apply ( lambda i : i.split(",")[0])
dfs["genre"] = dfs["index"].apply ( lambda i : i.split(",")[1])
dfs["pays"] = dfs["index"].apply ( lambda i : i.split(",")[3])
print("step 4")
dfy = dfs.drop('index', axis=1)
dfy.to_csv("mortalite_5column.txt", sep="\t", encoding="utf8", index=False)
dfy.head()
# -
# Plot of one slice of the mortality table:
view = dfs [ (dfs.pays=="FR") &
(dfs.age == "Y80") &
(dfs.indicateur == "DEATHRATE") &
(dfs.genre == "T") ]
view = view.sort_values("annee")
view.plot(x="annee", y="valeur")
# ### SQLite
#
# [SQLite](http://www.sqlite.org/) is a local database engine. Built into Python, it requires no installation. It is very useful when [Microsoft Excel](http://fr.wikipedia.org/wiki/Microsoft_Excel) cannot hold all the data you want to browse — more than two million rows in the case of this table.
# ### version 1: pandas to SQLite
#
# We use the [to_sql](http://pandas.pydata.org/pandas-docs/dev/generated/pandas.DataFrame.to_sql.html) method and the [sqlite3](https://docs.python.org/3.4/library/sqlite3.html) module. This takes a little time (two to three minutes).
import sqlite3
con = sqlite3.connect("mortalite_sqlite3_y2.db3")
dfy.to_sql("table_mortalite",con)
con.close() # the database must be closed, otherwise it stays open as long as the
            # notebook is --> nobody but this notebook can modify it
import os
[ _ for _ in os.listdir(".") if "sqlite3" in _ ]
# We use an SQL query to retrieve the data, equivalent to the pandas filtering code shown above:
con = sqlite3.connect("mortalite_sqlite3_y2.db3")
view = pandas.read_sql("""SELECT * FROM table_mortalite WHERE pays=="FR"
AND age == "Y80"
AND indicateur == "DEATHRATE"
AND genre == "T"
ORDER BY annee""", con)
con.close()
view.plot(x="annee", y="valeur")
# ### version 2: pyensae
#
# [import_flatfile_into_database](http://www.xavierdupre.fr/app/pyensae/helpsphinx/pyensae/sql/database_helper.html) is a function to use when you don't always know the column separator in the file to import. The function guesses it for you, as well as the type of each column (when possible). Another useful aspect is that it displays its progress, so you notice more quickly when something goes wrong. Finally, for large files, the function does not load the whole file into memory, which makes it possible to put billions of rows into a SQLite database (this can take more than an hour). That is not the case here; it is just shown as an example. We store the full dataset in SQLite 3 format so it can be browsed more easily.
from pyensae.sql import import_flatfile_into_database
import_flatfile_into_database("mortalite.db3", "mortalite_5column.txt")
# Afterwards, the data can easily be browsed with [SQLiteSpy](http://www.yunqa.de/delphi/doku.php/products/sqlitespy/index) (on Windows) or the [sqlite-manager](https://addons.mozilla.org/fr/firefox/addon/sqlite-manager/) extension for Firefox on every platform. For this exercise, we run:
# +
sql = """SELECT * FROM mortalite_5column WHERE pays=="FR"
AND age == "Y80"
AND indicateur == "DEATHRATE"
AND genre == "T"
ORDER BY annee"""
from pyensae.sql import Database
db = Database("mortalite.db3", LOG = lambda *l : None)
db.connect()
view = db.to_df(sql)
view
# -
# Visually, this gives:
view.plot(x="annee", y="valeur")
# ## Data cube
#
# The expression *data cube* refers to multi-dimensional arrays, often represented as a list of ``coordinates, values`` pairs. That is usually a lot of data and not necessarily a convenient way to manipulate it. We use the [xarray](http://xarray.pydata.org/en/stable/) module. [pandas](https://pandas.pydata.org/pandas-docs/stable/) can export data to this module directly with [to_xarray](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.to_xarray.html#pandas-dataframe-to-xarray).
import pandas
df = pandas.read_csv("mortalite_5column.txt", sep="\t", encoding="utf8")
df.shape
df.head(n=2)
# We move every column except *valeur* to the index side.
cols = [_ for _ in df.columns if _ != "valeur"]
cols
# We drop the missing values.
df.shape
df = df.dropna()
df.shape
# We check that there are no duplicates, because the conversion to a *cube* fails in that case: two values would be indexed with the same coordinates.
dup = df.groupby(cols).count().sort_values("valeur", ascending=False)
dup = dup[dup.valeur > 1]
dup.head(n=2)
dup.shape
dfi = df.set_index(cols, verify_integrity=True)
dfi.head(n=2)
type(dfi.index)
# We check that [xarray](http://xarray.pydata.org/en/stable/) is installed.
import xarray
# And we convert to a cube.
cube = xarray.Dataset.from_dataframe(dfi) # ou dfi.to_xarray()
cube
back_to_pandas = cube.to_dataframe().reset_index(drop=True)
back_to_pandas.head(n=2)
# And we take the maximum by *indicateur* and *genre*.
cube.max(dim=["age", "annee", "pays"]).to_dataframe().reset_index().pivot(index="indicateur", columns="genre", values="valeur")
cube.groupby("indicateur").max().to_dataframe().head()
cube["valeur"].sel(indicateur="LIFEXP", genre="F", annee=slice(1980, 1985)).to_dataframe().head()
cube["valeur"].max(dim=["age"]).to_dataframe().head()
# We add a column with a ratio, dividing by the maximum over the age groups.
cube["max_valeur"] = cube["valeur"] / cube["valeur"].max(dim=["age"])
cube.to_dataframe().head()
| _doc/notebooks/expose/ml_table_mortalite.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] id="8d0bbac2"
# # Finetuning FastPitch for a new speaker
#
# In this tutorial, we will finetune a single speaker FastPitch (with alignment) model on 5 mins of a new speaker's data. We will finetune the model parameters only on the new speaker's text and speech pairs (though see the section at the end to learn more about mixing speaker data).
#
# We will download the training data, then generate and run a training command to finetune FastPitch on 5 mins of data, and synthesize the audio from the trained checkpoint.
#
# A final section will describe approaches to improve audio quality past this notebook.
# + [markdown] id="nGw0CBaAtmQ6"
# ## License
#
# > Copyright (c) 2021, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
# >
# > Licensed under the Apache License, Version 2.0 (the "License");
# > you may not use this file except in compliance with the License.
# > You may obtain a copy of the License at
# >
# > http://www.apache.org/licenses/LICENSE-2.0
# >
# > Unless required by applicable law or agreed to in writing, software
# > distributed under the License is distributed on an "AS IS" BASIS,
# > WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# > See the License for the specific language governing permissions and
# > limitations under the License.
# + id="U7bOoIgLttRC"
"""
You can either run this notebook locally (if you have all the dependencies and a GPU) or on Google Colab.
Instructions for setting up Colab are as follows:
1. Open a new Python 3 notebook.
2. Import this notebook from GitHub (File -> Upload Notebook -> "GITHUB" tab -> copy/paste GitHub URL)
3. Connect to an instance with a GPU (Runtime -> Change runtime type -> select "GPU" for hardware accelerator)
4. Run this cell to set up dependencies.
"""
BRANCH = 'main'
# # If you're using Google Colab and not running locally, uncomment and run this cell.
# # !apt-get install sox libsndfile1 ffmpeg
# # !pip install wget unidecode pynini==2.1.4
# # !python -m pip install git+https://github.com/NVIDIA/NeMo.git@$BRANCH#egg=nemo_toolkit[all]
# + [markdown] id="2502cf61"
# ## Downloading data
# + [markdown] id="81fa2c02"
# For our tutorial, we will use a small part of the Hi-Fi Multi-Speaker English TTS (Hi-Fi TTS) dataset. You can read more about the dataset [here](https://arxiv.org/abs/2104.01497). We will use speaker 6097 as the target speaker, and only a 5-minute subset of audio will be used for this fine-tuning example. We additionally resample the audio to 22,050 Hz.
# + id="VIFgqxLOpxha"
# !wget https://nemo-public.s3.us-east-2.amazonaws.com/6097_5_mins.tar.gz # Contains 10MB of data
# !tar -xzf 6097_5_mins.tar.gz
# + [markdown] id="gSQqq0fBqy8K"
# Looking at `manifest.json`, we see a standard NeMo json that contains the filepath, text, and duration. Please note that our `manifest.json` contains the relative path.
#
# Let's make sure that the entries look something like this:
#
# ```
# {"audio_filepath": "audio/presentpictureofnsw_02_mann_0532.wav", "text": "not to stop more than ten minutes by the way", "duration": 2.6, "text_no_preprocessing": "not to stop more than ten minutes by the way,", "text_normalized": "not to stop more than ten minutes by the way,"}
# ```
# -
# !head -n 1 ./6097_5_mins/manifest.json
# Let's take 2 samples from the dataset and split them off into a validation set; all the other samples go into the training set.
#
# As mentioned, since the paths in the manifest are relative, we also create a symbolic link to the audio folder such that `audio/` goes to the correct directory.
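The `tail`/`head` shell split below can also be done in Python, which additionally lets us validate each manifest line. A minimal sketch (the helper name is ours, not NeMo's):

```python
import json
from pathlib import Path

def split_manifest(manifest_path, dev_path, train_path, n_dev=2):
    """Put the last n_dev lines of a JSON-lines manifest in the dev set
    and everything else in the train set (mirrors the tail/head shell trick)."""
    lines = Path(manifest_path).read_text().splitlines()
    # Validate that every line is well-formed JSON before writing anything.
    for line in lines:
        json.loads(line)
    Path(dev_path).write_text("\n".join(lines[-n_dev:]) + "\n")
    Path(train_path).write_text("\n".join(lines[:-n_dev]) + "\n")
    return len(lines) - n_dev, n_dev
```

A random split would usually be preferable, but with only a few minutes of audio the last-two-lines convention keeps the notebook reproducible.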
# + id="B8gVfp5SsuDd"
# !cat ./6097_5_mins/manifest.json | tail -n 2 > ./6097_manifest_dev_ns_all_local.json
# !cat ./6097_5_mins/manifest.json | head -n -2 > ./6097_manifest_train_dur_5_mins_local.json
# !ln -s ./6097_5_mins/audio audio
# + [markdown] id="lhhg2wBNtW0r"
# Let's also download the pretrained checkpoint that we want to finetune from. NeMo will save checkpoints to `~/.cache`, so let's move that to our current directory.
#
# *Note: please, check that `home_path` refers to your home folder. Otherwise, change it manually.*
# -
home_path = !(echo $HOME)
home_path = home_path[0]
print(home_path)
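The shell capture above works in IPython; outside a notebook, a pure-Python equivalent avoids shelling out entirely:

```python
import os
from pathlib import Path

# Both of these resolve the user's home directory without a subprocess.
home_path = os.path.expanduser("~")
print(home_path)
print(Path.home())
```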
# + id="LggELooctXCT"
import os
import json
import torch
import IPython.display as ipd
from matplotlib.pyplot import imshow
from matplotlib import pyplot as plt
from nemo.collections.tts.models import FastPitchModel
FastPitchModel.from_pretrained("tts_en_fastpitch")
from pathlib import Path
nemo_files = [p for p in Path(f"{home_path}/.cache/torch/NeMo/").glob("**/tts_en_fastpitch_align.nemo")]
print(f"Copying {nemo_files[0]} to ./")
Path("./tts_en_fastpitch_align.nemo").write_bytes(nemo_files[0].read_bytes())
# + [markdown] id="6c8b13b8"
# To finetune the FastPitch model on the above created filelists, we use the `examples/tts/fastpitch_finetune.py` script to train the models with the `fastpitch_align_v1.05.yaml` configuration.
#
# Let's grab those files.
# + id="3zg2H-32dNBU"
# !wget https://raw.githubusercontent.com/nvidia/NeMo/$BRANCH/examples/tts/fastpitch_finetune.py
# !mkdir -p conf \
# && cd conf \
# && wget https://raw.githubusercontent.com/nvidia/NeMo/$BRANCH/examples/tts/conf/fastpitch_align_v1.05.yaml \
# && cd ..
# -
# We also need some additional files (see `FastPitch_MixerTTS_Training.ipynb` tutorial for more details) for training. Let's download these, too.
# additional files
# !mkdir -p tts_dataset_files && cd tts_dataset_files \
# && wget https://raw.githubusercontent.com/NVIDIA/NeMo/$BRANCH/scripts/tts_dataset_files/cmudict-0.7b_nv22.01 \
# && wget https://raw.githubusercontent.com/NVIDIA/NeMo/$BRANCH/scripts/tts_dataset_files/heteronyms-030921 \
# && wget https://raw.githubusercontent.com/NVIDIA/NeMo/$BRANCH/nemo_text_processing/text_normalization/en/data/whitelist_lj_speech.tsv \
# && cd ..
# + [markdown] id="ef75d1d5"
# ## Finetuning FastPitch
# + [markdown] id="12b5511c"
# We can now train our model with the following command:
#
# **NOTE: This will take about 50 minutes on Colab's K80 GPUs.**
# + id="reY1LV4lwWoq"
# TODO(oktai15): remove +model.text_tokenizer.add_blank_at=true when we update FastPitch checkpoint
!(python fastpitch_finetune.py --config-name=fastpitch_align_v1.05.yaml \
train_dataset=./6097_manifest_train_dur_5_mins_local.json \
validation_datasets=./6097_manifest_dev_ns_all_local.json \
sup_data_path=./fastpitch_sup_data \
phoneme_dict_path=tts_dataset_files/cmudict-0.7b_nv22.01 \
heteronyms_path=tts_dataset_files/heteronyms-030921 \
whitelist_path=tts_dataset_files/whitelist_lj_speech.tsv \
exp_manager.exp_dir=./ljspeech_to_6097_no_mixing_5_mins \
+init_from_nemo_model=./tts_en_fastpitch_align.nemo \
+trainer.max_steps=1000 ~trainer.max_epochs \
trainer.check_val_every_n_epoch=25 \
model.train_ds.dataloader_params.batch_size=24 model.validation_ds.dataloader_params.batch_size=24 \
model.n_speakers=1 model.pitch_mean=121.9 model.pitch_std=23.1 \
model.pitch_fmin=30 model.pitch_fmax=512 model.optim.lr=2e-4 \
~model.optim.sched model.optim.name=adam trainer.devices=1 trainer.strategy=null \
+model.text_tokenizer.add_blank_at=true \
)
# + [markdown] id="j2svKvd1eMhf"
# Let's take a closer look at the training command:
#
# * `--config-name=fastpitch_align_v1.05.yaml`
# * We first tell the script what config file to use.
#
# * `train_dataset=./6097_manifest_train_dur_5_mins_local.json
# validation_datasets=./6097_manifest_dev_ns_all_local.json
# sup_data_path=./fastpitch_sup_data`
# * We tell the script what manifest files to train and eval on, as well as where supplementary data is located (or will be calculated and saved during training if not provided).
#
# * `phoneme_dict_path=tts_dataset_files/cmudict-0.7b_nv22.01
# heteronyms_path=tts_dataset_files/heteronyms-030921
# whitelist_path=tts_dataset_files/whitelist_lj_speech.tsv
# `
# * We tell the script where `phoneme_dict_path`, `heteronyms-030921` and `whitelist_path` are located. These are the additional files we downloaded earlier, and are used in preprocessing the data.
#
# * `exp_manager.exp_dir=./ljspeech_to_6097_no_mixing_5_mins`
# * Where we want to save our log files, tensorboard file, checkpoints, and more.
#
# * `+init_from_nemo_model=./tts_en_fastpitch_align.nemo`
# * We tell the script what checkpoint to finetune from.
#
# * `+trainer.max_steps=1000 ~trainer.max_epochs trainer.check_val_every_n_epoch=25`
# * For this experiment, we tell the script to train for 1000 training steps/iterations rather than specifying a number of epochs to run. Since the config file specifies `max_epochs` instead, we need to remove that using `~trainer.max_epochs`.
#
# * `model.train_ds.dataloader_params.batch_size=24 model.validation_ds.dataloader_params.batch_size=24`
# * Set batch sizes for the training and validation data loaders.
#
# * `model.n_speakers=1`
# * The number of speakers in the data. There is only 1 for now, but we will revisit this parameter later in the notebook.
#
# * `model.pitch_mean=121.9 model.pitch_std=23.1 model.pitch_fmin=30 model.pitch_fmax=512`
# * For the new speaker, we need to define new pitch hyperparameters for better audio quality.
# * These parameters work for speaker 6097 from the Hi-Fi TTS dataset.
# * For speaker 92, we suggest `model.pitch_mean=214.5 model.pitch_std=30.9 model.pitch_fmin=80 model.pitch_fmax=512`.
# * fmin and fmax are hyperparameters to librosa's pyin function. We recommend tweaking these per speaker.
# * After fmin and fmax are defined, pitch mean and std can be easily extracted.
#
# * `model.optim.lr=2e-4 ~model.optim.sched model.optim.name=adam`
# * For fine-tuning, we lower the learning rate.
# * We use a fixed learning rate of 2e-4.
# * We switch from the lamb optimizer to the adam optimizer.
#
# * `trainer.devices=1 trainer.strategy=null`
# * For this notebook, we default to 1 gpu which means that we do not need ddp.
# * If you have the compute resources, feel free to scale this up to the number of free gpus you have available.
# * Please remove the `trainer.strategy=null` section if you intend on multi-gpu training.
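Once `fmin`/`fmax` are fixed, the pitch mean and std are just statistics over the voiced frames of the estimated f0 track. A minimal numpy sketch — in practice `f0` would come from something like `librosa.pyin(audio, fmin=30, fmax=512, sr=22050)` (shown only as an assumption); here we fabricate a short f0 track with NaNs marking unvoiced frames, as pyin returns:

```python
import numpy as np

# Fabricated f0 track; NaN = unvoiced frame (pyin's convention).
f0 = np.array([np.nan, 110.0, 115.0, 120.0, np.nan, 125.0, np.nan])

voiced = f0[~np.isnan(f0)]
pitch_mean = float(voiced.mean())
pitch_std = float(voiced.std())
print(f"model.pitch_mean={pitch_mean:.1f} model.pitch_std={pitch_std:.1f}")
```

Running the same computation over all training clips of the new speaker gives the values to pass on the command line.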
# + [markdown] id="c3bdf1ed"
# ## Synthesize Samples from Finetuned Checkpoints
# + [markdown] id="f2b46325"
# Once we have finetuned our FastPitch model, we can synthesize the audio samples for given text using the following inference steps. We use a HiFi-GAN vocoder trained on LJSpeech.
#
# We define some helper functions as well.
# + id="886c91dc"
from nemo.collections.tts.models import HifiGanModel
from nemo.collections.tts.models import FastPitchModel
vocoder = HifiGanModel.from_pretrained("tts_hifigan")
vocoder = vocoder.eval().cuda()
# + id="0a4c986f"
def infer(spec_gen_model, vocoder_model, str_input, speaker=None):
"""
Synthesizes spectrogram and audio from a text string given a spectrogram synthesis and vocoder model.
Args:
spec_gen_model: Spectrogram generator model (FastPitch in our case)
vocoder_model: Vocoder model (HiFiGAN in our case)
str_input: Text input for the synthesis
speaker: Speaker ID
Returns:
spectrogram and waveform of the synthesized audio.
"""
with torch.no_grad():
parsed = spec_gen_model.parse(str_input)
if speaker is not None:
speaker = torch.tensor([speaker]).long().to(device=spec_gen_model.device)
spectrogram = spec_gen_model.generate_spectrogram(tokens=parsed, speaker=speaker)
audio = vocoder_model.convert_spectrogram_to_audio(spec=spectrogram)
if spectrogram is not None:
if isinstance(spectrogram, torch.Tensor):
spectrogram = spectrogram.to('cpu').numpy()
if len(spectrogram.shape) == 3:
spectrogram = spectrogram[0]
if isinstance(audio, torch.Tensor):
audio = audio.to('cpu').numpy()
return spectrogram, audio
def get_best_ckpt_from_last_run(
base_dir,
new_speaker_id,
duration_mins,
mixing_enabled,
original_speaker_id,
model_name="FastPitch"
):
mixing = "no_mixing" if not mixing_enabled else "mixing"
d = f"{original_speaker_id}_to_{new_speaker_id}_{mixing}_{duration_mins}_mins"
exp_dirs = list([i for i in (Path(base_dir) / d / model_name).iterdir() if i.is_dir()])
last_exp_dir = sorted(exp_dirs)[-1]
last_checkpoint_dir = last_exp_dir / "checkpoints"
last_ckpt = list(last_checkpoint_dir.glob('*-last.ckpt'))
if len(last_ckpt) == 0:
raise ValueError(f"There is no last checkpoint in {last_checkpoint_dir}.")
return str(last_ckpt[0])
# + [markdown] id="0153bd5a"
# Specify the speaker ID, duration of the dataset in minutes, and speaker mixing variables to find the relevant checkpoint (for example, we've saved our model in `ljspeech_to_6097_no_mixing_5_mins/`) and compare the synthesized audio with validation samples of the new speaker.
#
# The mixing variable is False for now, but we include some code to handle multiple speakers for reference.
# + id="8901f88b"
new_speaker_id = 6097
duration_mins = 5
mixing = False
original_speaker_id = "ljspeech"
last_ckpt = get_best_ckpt_from_last_run("./", new_speaker_id, duration_mins, mixing, original_speaker_id)
print(last_ckpt)
spec_model = FastPitchModel.load_from_checkpoint(last_ckpt)
spec_model.eval().cuda()
# Only need to set speaker_id if there is more than one speaker
speaker_id = None
if mixing:
speaker_id = 1
num_val = 2 # Number of validation samples
val_records = []
with open(f"{new_speaker_id}_manifest_dev_ns_all_local.json", "r") as f:
for i, line in enumerate(f):
val_records.append(json.loads(line))
if len(val_records) >= num_val:
break
for val_record in val_records:
print("Real validation audio")
ipd.display(ipd.Audio(val_record['audio_filepath'], rate=22050))
print(f"SYNTHESIZED FOR -- Speaker: {new_speaker_id} | Dataset size: {duration_mins} mins | Mixing:{mixing} | Text: {val_record['text']}")
spec, audio = infer(spec_model, vocoder, val_record['text'], speaker=speaker_id)
ipd.display(ipd.Audio(audio, rate=22050))
# %matplotlib inline
imshow(spec, origin="lower", aspect="auto")
plt.show()
# + [markdown] id="ge2s7s9-w3py"
# ## Improving Speech Quality
#
# We see that by fine-tuning FastPitch we were able to generate audio in a male voice, but the audio quality is not as good as we would like. We recommend two steps to improve audio quality:
#
# * Finetuning HiFi-GAN
# * Adding more data
#
# **Note that both of these steps are outside the scope of the notebook due to the limited compute available on Colab, but the code is included below for you to use outside of this notebook.**
#
# ### Finetuning HiFi-GAN
# From the synthesized samples, there might be audible audio crackling. To fix this, we need to finetune HiFi-GAN on the new speaker's data. HiFi-GAN shows improvement using **synthesized mel spectrograms**, so the first step is to generate mel spectrograms with our finetuned FastPitch model to use as input.
#
# The code below uses our finetuned model to generate synthesized mels for the training set we have been using. You will also need to do the same for the validation set (code should be very similar, just with paths changed).
# +
import json
import numpy as np
import torch
import soundfile as sf
from pathlib import Path
from nemo.collections.tts.torch.helpers import BetaBinomialInterpolator
def load_wav(audio_file):
with sf.SoundFile(audio_file, 'r') as f:
samples = f.read(dtype='float32')
return samples.transpose()
# Get records from the training manifest
manifest_path = "./6097_manifest_train_dur_5_mins_local.json"
records = []
with open(manifest_path, "r") as f:
for i, line in enumerate(f):
records.append(json.loads(line))
beta_binomial_interpolator = BetaBinomialInterpolator()
spec_model.eval()
device = spec_model.device
save_dir = Path("./6097_manifest_train_dur_5_mins_local_mels")
save_dir.mkdir(exist_ok=True, parents=True)
# Generate spectrograms (we need to use ground-truth alignment for correct matching between audio and mels)
for i, r in enumerate(records):
audio = load_wav(r["audio_filepath"])
audio = torch.from_numpy(audio).unsqueeze(0).to(device)
audio_len = torch.tensor(audio.shape[1], dtype=torch.long, device=device).unsqueeze(0)
# Again, our finetuned FastPitch model doesn't use multiple speakers,
# but we keep the code to support it here for reference
if spec_model.fastpitch.speaker_emb is not None and "speaker" in r:
speaker = torch.tensor([r['speaker']]).to(device)
else:
speaker = None
with torch.no_grad():
if "normalized_text" in r:
text = spec_model.parse(r["normalized_text"], normalize=False)
else:
text = spec_model.parse(r['text'])
text_len = torch.tensor(text.shape[-1], dtype=torch.long, device=device).unsqueeze(0)
spect, spect_len = spec_model.preprocessor(input_signal=audio, length=audio_len)
# Generate attention prior and spectrogram inputs for HiFi-GAN
attn_prior = torch.from_numpy(
beta_binomial_interpolator(spect_len.item(), text_len.item())
).unsqueeze(0).to(text.device)
spectrogram = spec_model.forward(
text=text,
input_lens=text_len,
spec=spect,
mel_lens=spect_len,
attn_prior=attn_prior,
speaker=speaker,
)[0]
save_path = save_dir / f"mel_{i}.npy"
np.save(save_path, spectrogram[0].to('cpu').numpy())
r["mel_filepath"] = str(save_path)
hifigan_manifest_path = "hifigan_train_ft.json"
with open(hifigan_manifest_path, "w") as f:
for r in records:
f.write(json.dumps(r) + '\n')
# Please do the same for the validation json. Code is omitted.
# -
# We can then finetune hifigan similarly to fastpitch using NeMo's [hifigan_finetune.py](https://github.com/NVIDIA/NeMo/blob/main/examples/tts/hifigan_finetune.py) and [hifigan.yaml](https://github.com/NVIDIA/NeMo/blob/main/examples/tts/conf/hifigan/hifigan.yaml):
#
# ```bash
# python examples/tts/hifigan_finetune.py \
# --config-name=hifigan.yaml \
# model.train_ds.dataloader_params.batch_size=32 \
# model.max_steps=1000 \
# model.optim.lr=0.0001 \
# ~model.optim.sched \
# train_dataset=./hifigan_train_ft.json \
# validation_datasets=./hifigan_val_ft.json \
# exp_manager.exp_dir=hifigan_ft \
# +init_from_nemo_model=tts_hifigan.nemo \
# trainer.check_val_every_n_epoch=10 \
# model.train_ds=hifigan_train_ft.json \
# model.validation_ds=<your_hifigan_val_manifest>
# ```
#
# Like when finetuning FastPitch, we lower the learning rate and get rid of the optimizer schedule for finetuning. You will need to create `<your_hifigan_val_manifest>` and the synthesized mels corresponding to it.
#
# As mentioned, the above command is not runnable in Colab due to limited compute resources, but you are free to finetune HiFi-GAN on your own machines.
#
# ### Adding more data
# We can add more data in two ways. They can be combined for the best effect:
#
# * **Add more training data from the new speaker**
#
# The entire notebook can be repeated from the top after a new JSON manifest is defined that includes the additional data. Modify your finetuning commands to point to the new manifest. Be sure to increase the number of steps as more data is added to both the FastPitch and HiFi-GAN finetuning.
#
#   We recommend **1000 steps per minute of audio for FastPitch and 500 steps per minute of audio for HiFi-GAN**.
#
# * **Mix new speaker data with old speaker data**
#
# We recommend finetuning FastPitch (but not HiFi-GAN) using both old speaker data (LJSpeech in this notebook) and the new speaker data. In this case, please modify the JSON manifests when finetuning FastPitch to include speaker information by adding a `speaker` field to each entry:
#
# ```
# {"audio_filepath": "new_speaker.wav", "text": "sample", "duration": 2.6, "speaker": 1}
# {"audio_filepath": "old_speaker.wav", "text": "LJSpeech sample", "duration": 2.6, "speaker": 0}
# ```
#
# 5 hours of data from the old speaker should be sufficient for training; it's up to you how much data from the old speaker to use in validation.
#
# For the training manifest, since we likely have less data from the new speaker, we need to ensure that the model sees a similar amount of new data and old data. We can do this by repeating samples from the new speaker until we have an equivalent number of samples from the old and new speaker. For each sample from the old speaker, please add a sample from the new speaker in the .json.
#
# As a toy example, if we use 4 samples of the old speaker and only 2 samples of the new speaker, we would want the order of samples in our manifest to look something like this:
#
# ```
# old_speaker_sample_0
# new_speaker_sample_0
# old_speaker_sample_1
# new_speaker_sample_1
# old_speaker_sample_2
# new_speaker_sample_0 # Start repeat of new speaker samples
# old_speaker_sample_3
# new_speaker_sample_1
# ```
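The oversampling pattern above can be generated with `itertools.cycle`, which repeats the shorter new-speaker list as needed. A small sketch (the sample names are placeholders for manifest entries):

```python
from itertools import cycle

old_samples = [f"old_speaker_sample_{i}" for i in range(4)]
new_samples = [f"new_speaker_sample_{i}" for i in range(2)]

# Pair every old-speaker sample with a new-speaker sample, cycling
# through the (shorter) new-speaker list so counts stay balanced.
new_cycle = cycle(new_samples)
manifest_order = []
for old in old_samples:
    manifest_order.append(old)
    manifest_order.append(next(new_cycle))

print(manifest_order)
```

Writing the actual manifest then just means emitting the corresponding JSON lines in this order.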
#
# Once the manifests are created, we can modify the FastPitch training command to point to the new training and validation JSON files.
#
# We also need to update `model.n_speakers=1` to `model.n_speakers=2`, as well as update the `sup_data_types` specified in the config file to include `speaker_id` (`sup_data_types=[align_prior_matrix,pitch,speaker_id]`). Updating these two fields is very important; otherwise the model will not recognize that there is more than one speaker!
#
# Ensure the pitch statistics correspond to the new speaker rather than the old speaker for best results.
#
# **For HiFiGAN finetuning, the training should be done on the new speaker data.**
| tutorials/tts/FastPitch_Finetuning.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .jl
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Julia 1.7.2
# language: julia
# name: julia-1.7
# ---
pwd()
using Pkg; Pkg.activate("../../FermiCG/")
using FermiCG, NPZ
readdir()
h0 = npzread("tetraphenyl-ethylene_scf_integrals_h0.npz.npy")
h1 = npzread("tetraphenyl-ethylene_scf_integrals_h1.npz.npy")
h2 = npzread("tetraphenyl-ethylene_scf_integrals_h2.npz.npy");
ints = InCoreInts(h0, h1, h2)
print(size(h0), size(h1), size(h2))
# +
# Define clusters
clusters_in = [
(1:6), # Benzene 1
(7:12), # Benzene 2
(13:18), # Benzene 3
(19:24), # Benzene 4
(25:26) # ethylene
]
clusters = [Cluster(i,collect(clusters_in[i])) for i = 1:length(clusters_in)]
init_fspace = [
(3,3),
(3,3),
(3,3),
(3,3),
(1,1)
];
print(clusters)
print(init_fspace)
# +
rdm1 = zeros(size(ints.h1))
e_cmf, U, da1, db1 = FermiCG.cmf_oo(ints, clusters, init_fspace, rdm1, rdm1, verbose=0, gconv=1e-6, method="cg",sequential=true)
# + jupyter={"outputs_hidden": true}
# rotate integrals to new basis
ints = FermiCG.orbital_rotation(ints,U)
# +
Da = da1
Db = db1
max_roots = 20
#
# form Cluster data
cluster_bases = FermiCG.compute_cluster_eigenbasis(ints, clusters, verbose=0,
max_roots=max_roots,
init_fspace=init_fspace,
rdm1a=Da, rdm1b=Db)
clustered_ham = FermiCG.extract_ClusteredTerms(ints, clusters)
cluster_ops = FermiCG.compute_cluster_ops(cluster_bases, ints);
FermiCG.add_cmf_operators!(cluster_ops, cluster_bases, ints, Da, Db);
check = 0.0
for ci_ops in cluster_ops
for (opstr, ops) in ci_ops
for (ftrans, op) in ops
check += sum(abs.(op))
end
end
end
# + jupyter={"outputs_hidden": true} tags=[]
# Create BST state
v = FermiCG.BSTstate(clusters, FockConfig(init_fspace), cluster_bases)
xspace = FermiCG.build_compressed_1st_order_state(v, cluster_ops, clustered_ham, nbody=3, thresh=1e-3)
xspace = FermiCG.compress(xspace, thresh=1e-3)
display(size(xspace))
FermiCG.nonorth_add!(v, xspace)
v = FermiCG.BSTstate(v,R=6)
FermiCG.randomize!(v)
FermiCG.orthonormalize!(v)
e_ci, v = FermiCG.tucker_ci_solve(v, cluster_ops, clustered_ham)
# -
display(v)
using Printf
for ei in 1:length(e_ci)
@printf(" %5i %12.8f %12.8f eV\n", ei, e_ci[ei]+ints.h0, (e_ci[ei]-e_ci[1])*27.21165)
end
# + jupyter={"outputs_hidden": true} tags=[]
e_var, v_var = FermiCG.block_sparse_tucker(v, cluster_ops, clustered_ham,
max_iter = 20,
max_iter_pt = 200,
nbody = 4,
H0 = "Hcmf",
thresh_var = 1e-1,
thresh_foi = 1e-3,
thresh_pt = 1e-3,
ci_conv = 1e-5,
do_pt = true,
resolve_ss = false,
tol_tucker = 1e-4)
# -
| tetraphenyl-ethylene/cmf_bst1.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import pyspark
from pyspark import SparkContext
sc = SparkContext.getOrCreate();
import findspark
findspark.init()
from pyspark.sql import SparkSession
spark = SparkSession.builder.master("local[*]").getOrCreate()
spark.conf.set("spark.sql.repl.eagerEval.enabled", True) # Property used to format output tables better
spark
import pandas as pd
import tweepy
def printtweetdata(n, ith_tweet):
print()
print(f"Tweet {n}:")
print(f"Username:{ith_tweet[0]}")
print(f"Follower Count:{ith_tweet[1]}")
print(f"Retweet Count:{ith_tweet[2]}")
def scrape(words, date_since, numtweet):
db = pd.DataFrame(columns=['username','followers', 'retweetcount'])
tweets = tweepy.Cursor(api.search, q=words, lang="en",since=date_since, tweet_mode='extended').items(numtweet)
list_tweets = [tweet for tweet in tweets]
i = 1
for tweet in list_tweets:
username = tweet.user.screen_name
followers = tweet.user.followers_count
retweetcount = tweet.retweet_count
try:
text = tweet.retweeted_status.full_text
except AttributeError:
text = tweet.full_text
ith_tweet = [username,followers,retweetcount]
db.loc[len(db)] = ith_tweet
printtweetdata(i, ith_tweet)
i = i+1
filename = 'ScrapedTweets.csv'
db.to_csv(filename)
consumer_key = 'PApQojAQOSd8iWdMdWeumD5i2'
consumer_secret = 'lUEA33FLEUK1hO52LlLnRbFulopaDYYLes5EDqnImO5XmyPqLg'
access_key = '<KEY>'
access_secret = 'jdb2xaTL50gfiyIGx6aNMwihLe0ff0T4peX0ectErrYEy'
auth = tweepy.OAuthHandler(consumer_key, consumer_secret)
auth.set_access_token(access_key, access_secret)
api = tweepy.API(auth)
words = "joebiden"
date_since = "2021-06-05"
numtweet = 100
scrape(words, date_since, numtweet)
| follower_and_retweet_count_on_famous_personality.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + active=""
# # cheat sheet:
#
# # use this code to: convert to numeric
# <var>["<header>"] = pd.to_numeric(<var>["<header>"])
#
# # use this code to: see all rows if necessary
# # note from reuben on 06/16/18 @ 8:22PM: the option name needs the "display." prefix
# pd.set_option("display.max_rows", len(tesla_df))
#
# # use this code to: save to csv if necessary
# tesla_df.to_csv("tesla_df.csv",index=False,header=True)
# + active=""
# # Marianas Code - For Dual Axis
#
# plt.figure(figsize=(10,5))
#
# ax = tesla_df.plot(x="date", y="Close", legend=False)
# ax2 = ax.twinx()
# tesla_df.plot(x="date", y="operating_income", ax=ax2, legend=False, color="r")
# ax.figure.legend()
# plt.title("Operating Income Vs. Stock price")
# plt.xlabel("Date")
# plt.ylabel("Operating Income")
#
# plt.show()
# -
#
# +
# dependencies
import pandas as pd
import seaborn as sns
import numpy as np
import matplotlib.pyplot as plt
financials = "./Resources/Tesla Financials/Testla Operations Financials.csv"
stock_price = "./Resources/Tesla Stock Price/TSLAstock.xlsx"
ps_ratio = "./Resources/Tesla Stock Price/TSLApsratio.xlsx"
twitter = "./Resources/Tesla Twitter/twitter tweets.csv"
# -
# set df_variable: financial statement
financials_csv = pd.read_csv(financials)
financials_df = financials_csv[["date","operating_income","cf_from_operations"]]
# set df_variable: stock price
stock_price_excel = pd.read_excel(stock_price)
stock_price_df = stock_price_excel[["date","Close","Volume"]]
# set df_variable: price-to-sales ratio
ps_ratio_xlsx = pd.read_excel(ps_ratio)
ps_ratio_df = ps_ratio_xlsx[["date","P/S Ratio"]]
# set df_variable: twitter sentiment
twitter_csv = pd.read_csv(twitter)
twitter_df = twitter_csv[["date","TSLA_compound","ElonMusk_compound"]]
# merge: financials and stock price (AS: tesla_df_1)
tesla_df_1 = pd.merge(financials_df,stock_price_df, how="outer",on="date")
# merge: tesla_df_1 and ps ratio (AS: tesla_df_2)
tesla_df_2 = pd.merge(tesla_df_1,ps_ratio_df, how="outer",on="date")
# merge: tesla_df_2 and twitter (AS: tesla_df_3)
tesla_df_3 = pd.merge(tesla_df_2,twitter_df, how="outer",on="date")
# converting column: convert 'date' column to pandas.to_datetime(), THEN adding it as a new column to df
tesla_df_3["date_converted"] = pd.to_datetime(tesla_df_3["date"])
# +
# sort: by date (final wrangling/applying as: tesla_df )
tesla_df = tesla_df_3.sort_values(by=["date_converted"],ascending = False)
tesla_df.head()
# use to this code: to audit via excel
# tesla_df.to_csv("tesla_df.csv",index=False,header=True)
# + active=""
#
# -
# # Notes for Reuben
# +
# adding daily return pct
tesla_df["daily_pct_change"] = tesla_df["Close"].pct_change(1)
tesla_df.head()
tesla_df.to_csv("tesla_df.csv",index=False,header=True)
# +
# tying out to excel file
excel = pd.read_excel("tesla_df.xlsx")
test = pd.merge(tesla_df,excel,on="date",how="outer")
test[["date","daily_pct_change","pct_daily_return"]].head()
# -
test["test"] = test["daily_pct_change"] == test["pct_daily_return"]
test["test"]
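Exact `==` on recomputed percentage returns can flag spurious mismatches that are only floating-point rounding; `np.isclose` is the safer check. A minimal sketch with made-up numbers (column names mirror the tie-out above):

```python
import numpy as np
import pandas as pd

# Two columns that should agree but differ by float rounding.
check = pd.DataFrame({
    "daily_pct_change": [0.1 + 0.2, 0.05],
    "pct_daily_return": [0.3, 0.05],
})

exact = check["daily_pct_change"] == check["pct_daily_return"]
close = np.isclose(check["daily_pct_change"], check["pct_daily_return"])

print(exact.tolist())  # exact equality misses 0.1 + 0.2 vs 0.3
print(close.tolist())  # tolerance-based comparison accepts both rows
```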
# # End Note
#
# # Graph Section
# +
# Financials
# plot - scatter: operating income
plt.scatter(tesla_df["date"],
tesla_df["operating_income"])
# s=<var>,
# c=<var>,
# label=<var>,
# alpha=<int>
# edgecolors="none")
# plot - scatter: cash flows
plt.scatter(tesla_df["date"],
tesla_df["cf_from_operations"])
# s=<var>,
# c=<var>,
# label=<var>,
# alpha=<int>
# edgecolors="none")
# plot - line: close stock prices
plt.plot(tesla_df["date"],
tesla_df["Close"])
plt.show()
# +
# plot - line: close stock prices
plt.plot(tesla_df["date"],
tesla_df["Close"])
plt.show()
# -
fig, ax1 = plt.subplots()
print(ax1)
#
# +
# plot - Stock Price vs PS ratio
andy_df = tesla_df[["date","Close","P/S Ratio"]].dropna(how="any")
plt.figure(figsize=(9, 6))
ax = andy_df.plot(x="date",
y="Close",
legend=False,
color="blue")
ax2 = ax.twinx()
andy_df.plot(x="date",
y="P/S Ratio",
ax=ax2,
legend=False,
color="Green",
figsize=(10,7))
ax.figure.legend()
plt.title("Stock Price vs Price-To-Sale Ratio")
ax.set_xlabel("Date")
ax.set_ylabel("Stock Price")
ax2.set_ylabel("Price-To-Sale Ratio")
plt.savefig("Stock Price vs Price-To-Sale Ratio.png")
# +
# plot - Financials vs PS ratio
andy_df = tesla_df[["date","Close","P/S Ratio"]].dropna(how="any")
plt.figure(figsize=(9, 6))
ax = andy_df.plot(x="date",
y="Close",
legend=True,
color="blue")
ax2 = ax.twinx()
andy_df.plot(x="date",
y="P/S Ratio",
ax=ax2,
legend=True,
color="Green")
plt.show()
# -
#
# +
# plot - Financials vs PS ratio
| 18.06.21 - Project1/Archive (Delete)/(1) Backups/(v 0.02) Tesla.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import cv2
import numpy as np
image = cv2.imread('Untitled.jpg')
cv2.imshow('shapes', image)
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
ret, thresh = cv2.threshold(gray, 127, 255, cv2.THRESH_BINARY)
contours, hierarchy = cv2.findContours(thresh, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
len(contours)
cnt_list = []
len(cnt_list)
i = 0
for cnt in contours:
if i == 0:
i = 1
continue
approx = cv2.approxPolyDP(cnt, 0.01*cv2.arcLength(cnt, True), True)
i = len(approx)
cnt_list.append(i)
if len(approx) == 3:
shape_name = "Triangle 3"
cv2.drawContours(image, [cnt], 0, (0, 255, 0), 5)
#placing text at centre
M = cv2.moments(cnt)
cx = int(M['m10'] / M['m00'])
cy = int(M['m01'] / M['m00'])
cv2.putText(image, shape_name, (cx-50, cy), cv2.FONT_HERSHEY_SIMPLEX, 1, (0,0,0), 1)
elif len(approx) == 5:
shape_name = "Pentagon 5"
cv2.drawContours(image, [cnt], -1, (0,255,0), 5)
#placing text at centre
M = cv2.moments(cnt)
cx = int(M['m10'] / M['m00'])
cy = int(M['m01'] / M['m00'])
cv2.putText(image, shape_name, (cx-50, cy), cv2.FONT_HERSHEY_SIMPLEX, 1, (0,0,0), 1)
elif len(approx) == 6:
shape_name = "Hexagon 6"
cv2.drawContours(image, [cnt], -1, (0,255,0), 5)
#placing text at centre
M = cv2.moments(cnt)
cx = int(M['m10'] / M['m00'])
cy = int(M['m01'] / M['m00'])
cv2.putText(image, shape_name, (cx-50, cy), cv2.FONT_HERSHEY_SIMPLEX, 1, (0,0,0), 1)
elif len(approx) == 8:
shape_name = "Octagon 8"
cv2.drawContours(image, [cnt], -1, (0,255,0), 5)
#placing text at centre
M = cv2.moments(cnt)
cx = int(M['m10'] / M['m00'])
cy = int(M['m01'] / M['m00'])
cv2.putText(image, shape_name, (cx-50, cy), cv2.FONT_HERSHEY_SIMPLEX, 1, (0,0,0), 1)
elif len(approx) == 12:
shape_name = "Star 12"
cv2.drawContours(image, [cnt], -1, (0,255,0), 5)
#placing text at centre
M = cv2.moments(cnt)
cx = int(M['m10'] / M['m00'])
cy = int(M['m01'] / M['m00'])
cv2.putText(image, shape_name, (cx-50, cy), cv2.FONT_HERSHEY_SIMPLEX, 1, (0,0,0), 1)
elif len(approx) == 4:
x, y, w, h = cv2.boundingRect(approx)
aspectRatio = float(w)/h
if aspectRatio >= 0.5 and aspectRatio <= 1.5:
shape_name = "Square 4"
cv2.drawContours(image, [cnt], -1, (0,255,0), 5)
#placing text at centre
M = cv2.moments(cnt)
cx = int(M['m10'] / M['m00'])
cy = int(M['m01'] / M['m00'])
cv2.putText(image, shape_name, (cx-50, cy), cv2.FONT_HERSHEY_SIMPLEX, 1, (0,0,0), 1)
else:
shape_name = "Rectangle 4"
cv2.drawContours(image, [cnt], -1, (0,255,0), 5)
#placing text at centre
M = cv2.moments(cnt)
cx = int(M['m10'] / M['m00'])
cy = int(M['m01'] / M['m00'])
cv2.putText(image, shape_name, (cx-50, cy), cv2.FONT_HERSHEY_SIMPLEX, 1, (0,0,0), 1)
elif len(approx) > 15:
shape_name = "Circle"
cv2.drawContours(image, [cnt], -1, (0,255,0), 5)
#placing text at centre
M = cv2.moments(cnt)
cx = int(M['m10'] / M['m00'])
cy = int(M['m01'] / M['m00'])
cv2.putText(image, shape_name, (cx-50, cy), cv2.FONT_HERSHEY_SIMPLEX, 1, (0,0,0), 1)
print(len(cnt_list))
cnt_list.sort()
print(len(cnt_list))
for i in cnt_list:
if i == 3:
print("triangle")
elif i == 4:
print("quadrilateral")
elif i == 5:
print("Pentagon")
elif i == 6:
print("Hexagon")
elif i == 8:
print("Octagon")
elif i == 12:
print("Star")
elif i > 15:
print("Circle")
cv2.imshow("image1", image)
cv2.waitKey(0)
cv2.destroyAllWindows()
# -
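The long if/elif chain above maps approximated vertex counts to shape names; a dict lookup expresses the same mapping more compactly. A sketch using the same counts as above (`name_for` is a hypothetical helper, not part of the original script):

```python
# Sketch: vertex-count -> shape-name mapping used by the contour loop above.
SHAPE_NAMES = {3: "Triangle", 4: "Quadrilateral", 5: "Pentagon",
               6: "Hexagon", 8: "Octagon", 12: "Star"}

def name_for(n_vertices):
    # Contours approximated with many vertices are treated as circles above.
    if n_vertices > 15:
        return "Circle"
    return SHAPE_NAMES.get(n_vertices, "Unknown")
```

The same dict can drive both the per-contour labeling and the final summary loop, so the two lists of shape names cannot drift apart.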
| Assignments/Aditya/Detect_shapes1.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import plotly.graph_objects as go
import pandas as pd
df = pd.read_csv("data/World/time_series_covid_19_confirmed.csv")
df = df.rename(columns= {"Country/Region" : "Country", "Province/State": "Province"})
total_list = df.groupby('Country')['4/13/20'].sum().tolist()
country_list = df["Country"].tolist()
country_set = set(country_list)
country_list = list(country_set)
country_list.sort()
new_df = pd.DataFrame(list(zip(country_list, total_list)),
columns =['Country', 'Total_Cases'])
# -
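The per-country totals above are built with two parallel lists that happen to share the same sort order. A single groupby call produces the same frame without relying on that alignment (a sketch on toy data; column names match the renamed frame above):

```python
import pandas as pd

# Toy frame standing in for the renamed COVID dataframe above.
df = pd.DataFrame({"Country": ["B", "A", "B"], "4/13/20": [2, 1, 3]})

# groupby().sum() keeps each country and its total aligned in one frame.
new_df = (df.groupby("Country")["4/13/20"].sum()
            .reset_index()
            .rename(columns={"4/13/20": "Total_Cases"}))
```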
new_df.head(10)
# +
colors = ["#F9F9F5", "#FAFAE6", "#FCFCCB", "#FCFCAE", "#FCF1AE", "#FCEA7D", "#FCD97D",
"#FCCE7D", "#FCC07D", "#FEB562", "#F9A648", "#F98E48", "#FD8739", "#FE7519",
"#FE5E19", "#FA520A", "#FA2B0A", "#9B1803", "#861604", "#651104", "#570303",]
fig = go.Figure(data=go.Choropleth(
locationmode = "country names",
locations = new_df['Country'],
z = new_df['Total_Cases'],
text = new_df['Total_Cases'],
colorscale = colors,
autocolorscale=False,
reversescale=False,
colorbar_title = 'Reported Covid-19 Cases',
))
fig.update_layout(
title_text='Reported Covid-19 Cases',
geo=dict(
showcoastlines=True,
),
)
fig.write_html('first_figure.html', auto_open=True)
| COVID19_Map2.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
# See mission 6.5 ML Intermediate for more on the NBA players
# +
import pandas as pd
nba = pd.read_csv("nba_2013.csv")
# The names of the columns in the data
print(nba.columns.values)
# +
# Finding Similar Rows With Euclidean Distance
selected_player = nba[nba["player"] == "<NAME>"].iloc[0]
distance_columns = ['age', 'g', 'gs', 'mp', 'fg', 'fga', 'fg.', 'x3p', 'x3pa', 'x3p.', 'x2p', 'x2pa', 'x2p.', 'efg.', 'ft', 'fta', 'ft.', 'orb', 'drb', 'trb', 'ast', 'stl', 'blk', 'tov', 'pf', 'pts']
import math
def euclidean_distance(row):
inner_value = 0
for k in distance_columns:
inner_value += (row[k] - selected_player[k]) ** 2
return math.sqrt(inner_value)
lebron_distance = nba.apply(euclidean_distance, axis=1)
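The same Euclidean distance can be computed without the per-key Python loop; on NumPy arrays the whole formula vectorizes. A sketch on toy vectors (a 3-4-5 triangle), not the NBA data:

```python
import numpy as np

# Euclidean distance between two toy feature vectors.
a = np.array([0.0, 0.0])
b = np.array([3.0, 4.0])
dist = np.sqrt(((a - b) ** 2).sum())
```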
# +
# Normalizing Columns
nba_numeric = nba[distance_columns]
nba_normalized = (nba_numeric - nba_numeric.mean()) / nba_numeric.std()
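Z-score normalization as done above leaves every column with mean 0 and standard deviation 1, which keeps large-valued columns (like minutes played) from dominating the distance. A quick check on a toy column (sketch):

```python
import pandas as pd

# One toy column; the same (x - mean) / std transform as above.
demo = pd.DataFrame({"a": [1.0, 2.0, 3.0, 4.0]})
norm = (demo - demo.mean()) / demo.std()
```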
# +
# Finding the Nearest Neighbor
import pandas
from scipy.spatial import distance
# Fill in the NA values in nba_normalized
nba_normalized.fillna(0, inplace=True)
# Find the normalized vector for <NAME>
lebron_normalized = nba_normalized[nba["player"] == "<NAME>"]
# Find the distance between <NAME> and everyone else.
euclidean_distances = nba_normalized.apply(lambda row: distance.euclidean(row, lebron_normalized), axis=1)
distance_frame = pandas.DataFrame(data={"dist": euclidean_distances, "idx": euclidean_distances.index})
distance_frame.sort_values("dist", inplace=True)
second_smallest = distance_frame.iloc[1]["idx"]
most_similar_to_lebron = nba.loc[int(second_smallest)]["player"]
print(most_similar_to_lebron)
# +
# Generating Training and Testing Sets
import random
from numpy.random import permutation
# Randomly shuffle the index of nba
random_indices = permutation(nba.index)
# Set a cutoff for how many items we want in the test set (in this case 1/3 of the items)
test_cutoff = math.floor(len(nba)/3)
# Generate the test set by taking the first 1/3 of the randomly shuffled indices
test = nba.loc[random_indices[:test_cutoff]]
# Generate the train set with the rest of the data
train = nba.loc[random_indices[test_cutoff:]]
# -
# Instead of having to do it all ourselves, we can use the kNN implementation in scikit-learn. While scikit-learn (Sklearn for short) makes a regressor and a classifier available, we'll be using the regressor, as we have continuous values to predict on.
#
# Sklearn performs the distance computations for us and lets us specify how many neighbors we want to look at. Note that, unlike the manual steps above, it does not normalize the features automatically, so scaled inputs still need to be prepared beforehand.
# +
import numpy as np
col_mask=nba.isnull().any(axis=0)
# -
row_mask=nba.isnull().any(axis=1)
nba.loc[row_mask,col_mask]
# +
# The columns that we'll be using to make predictions
x_columns = ['age', 'g', 'gs', 'mp', 'fg', 'fga', 'fg.', 'x3p', 'x3pa', 'x3p.', 'x2p', 'x2pa', 'x2p.', 'efg.', 'ft', 'fta', 'ft.', 'orb', 'drb', 'trb', 'ast', 'stl', 'blk', 'tov', 'pf']
# The column we want to predict
y_column = ["pts"]
from sklearn.neighbors import KNeighborsRegressor
# Create the kNN model
knn = KNeighborsRegressor(n_neighbors=5)
# Fit the model on the training data
knn.fit(train[x_columns], train[y_column])
# Make predictions on the test set using the fit model
predictions = knn.predict(test[x_columns])
# +
# Hardcoded predictions saved from an earlier run; note this overwrites
# the knn predictions computed in the previous cell.
predictions = np.array([[5.1900e+02],
[1.3560e+02],
[2.9720e+02],
[3.4460e+02],
[2.2800e+01],
[2.8240e+02],
[7.1400e+01],
[4.3040e+02],
[4.9380e+02],
[3.5340e+02],
[9.7440e+02],
[1.6920e+02],
[4.3780e+02],
[5.6460e+02],
[1.3014e+03],
[4.3520e+02],
[8.4320e+02],
[9.6400e+01],
[7.8300e+02],
[2.7600e+02],
[2.0760e+02],
[1.5620e+02],
[1.0200e+01],
[2.5100e+02],
[6.3580e+02],
[1.5102e+03],
[7.9200e+02],
[3.3940e+02],
[7.9800e+01],
[1.4612e+03],
[9.6400e+01],
[8.6600e+02],
[8.0440e+02],
[1.1048e+03],
[3.3680e+02],
[1.0678e+03],
[2.7180e+02],
[1.8170e+03],
[1.1434e+03],
[1.0542e+03],
[1.8000e+02],
[8.9600e+01],
[1.1160e+02],
[8.7800e+01],
[7.8160e+02],
[1.2780e+02],
[1.4100e+02],
[1.7886e+03],
[1.6500e+02],
[1.8140e+02],
[1.4238e+03],
[1.6020e+02],
[2.2200e+01],
[1.5600e+01],
[9.9460e+02],
[3.3320e+02],
[3.3100e+02],
[2.1880e+02],
[6.8200e+01],
[1.3292e+03],
[1.2000e+01],
[3.0000e+00],
[3.5340e+02],
[3.8120e+02],
[1.3292e+03],
[1.3020e+02],
[6.0300e+02],
[6.0200e+01],
[1.3828e+03],
[9.3100e+02],
[1.2314e+03],
[6.6660e+02],
[2.3440e+02],
[9.7060e+02],
[2.8460e+02],
[7.7020e+02],
[1.7886e+03],
[4.2000e+00],
[4.8980e+02],
[9.2360e+02],
[1.2200e+01],
[3.1380e+02],
[2.0040e+02],
[8.2400e+01],
[1.4704e+03],
[1.2200e+01],
[4.1000e+01],
[6.3800e+01],
[4.0200e+02],
[6.0000e-01],
[1.0520e+03],
[1.7760e+02],
[1.6000e+00],
[1.1056e+03],
[1.7840e+02],
[3.8680e+02],
[2.4960e+02],
[3.1020e+02],
[3.3860e+02],
[1.1640e+02],
[7.3220e+02],
[1.8440e+02],
[1.0720e+03],
[1.2300e+02],
[3.4800e+01],
[4.1620e+02],
[2.5200e+01],
[1.5960e+02],
[5.9000e+02],
[6.0260e+02],
[1.6842e+03],
[2.7160e+02],
[2.2600e+02],
[1.0036e+03],
[8.6000e+01],
[8.9200e+02],
[9.8000e+00],
[1.2000e+02],
[1.2000e+00],
[5.2400e+01],
[1.2828e+03],
[1.0160e+02],
[5.1420e+02],
[1.2456e+03],
[9.7200e+01],
[9.6400e+01],
[1.4400e+03],
[1.0976e+03],
[7.9560e+02],
[8.9600e+01],
[3.3200e+01],
[1.2736e+03],
[3.7880e+02],
[1.2086e+03],
[1.4400e+01],
[1.3078e+03],
[7.3000e+01],
[5.9980e+02],
[1.1052e+03],
[2.8440e+02],
[5.3820e+02],
[4.6120e+02],
[7.1400e+01],
[1.7840e+02],
[1.0586e+03],
[1.3292e+03],
[9.9400e+01],
[1.5272e+03],
[3.4400e+02],
[2.9440e+02],
[5.9380e+02],
[1.0800e+02],
[6.0900e+02],
[6.8220e+02],
[3.5600e+01],
[2.9860e+02],
[9.4200e+01],
[9.8500e+02],
[1.0618e+03]])
actual = test[y_column]
mse = (((predictions - actual) ** 2).sum()) / len(predictions)
print(mse)
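The MSE formula used above averages the squared differences between predictions and actual values; on toy arrays (a sketch):

```python
import numpy as np

# Mean squared error on a toy prediction/actual pair.
pred = np.array([2.0, 4.0])
actual = np.array([1.0, 2.0])
mse = ((pred - actual) ** 2).mean()  # (1**2 + 2**2) / 2
```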
# +
# Splitting the Data into train and test
from sklearn.model_selection import train_test_split
# Pass features and target together; fill NA values (as was done for
# nba_normalized above) so the regression can fit.
X_train, X_test, y_train, y_test = train_test_split(
    nba[x_columns].fillna(0), nba["pts"], test_size=0.2, random_state=1)
# +
# making predictions with fit
from sklearn.linear_model import LinearRegression
clf = LinearRegression()
clf.fit(X_train, y_train)
predictions = clf.predict(X_test)
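`train_test_split` takes the feature matrix and target together and returns four arrays; a minimal sketch on toy data:

```python
import numpy as np
from sklearn.model_selection import train_test_split

# 10 samples with 20% held out -> 8 train / 2 test.
X = np.arange(10).reshape(-1, 1)
y = np.arange(10)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=1)
```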
# -
| 7.4 Exploring Topics in Data Science/ML with K-Nearest Neighbors.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
## tensorflow-gpu==2.3.0 has a bug where load_weights fails after calling inference, so pin 2.2.0
# !pip install tensorflow-gpu==2.2.0
# +
import yaml
import numpy as np
import matplotlib.pyplot as plt
import tensorflow as tf
from tensorflow_tts.inference import AutoConfig
from tensorflow_tts.inference import TFAutoModel
# -
config = AutoConfig.from_pretrained("../examples_tts/multiband_melgan/conf/multiband_melgan.v1.yaml")
mb_melgan = TFAutoModel.from_pretrained(
config=config,
pretrained_path=None, # "../examples_tts/fastspeech2/checkpoints/model-150000.h5",
is_build=False, # don't build model if you want to save it to pb. (TF related bug)
name="mb_melgan"
)
fake_mels = tf.random.uniform(shape=[4, 256, 80], dtype=tf.float32)
audios = mb_melgan.inference(fake_mels)
mb_melgan.load_weights("../examples_tts/multiband_melgan/checkpoints/mb_melgan.v1-940K.h5")
# # Save to Pb
tf.saved_model.save(mb_melgan, "./mb_melgan", signatures=mb_melgan.inference)
# # Load and Inference
mb_melgan = tf.saved_model.load("./mb_melgan")
mels = np.load("../dump/valid/norm-feats/LJ001-0009-norm-feats.npy")
audios = mb_melgan.inference(mels[None, ...])
plt.plot(audios[0, :, 0])
| notebooks/multiband_melgan_inference.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import numpy as np
import pandas as pd
pd.plotting.register_matplotlib_converters()
import matplotlib.pyplot as plt
# %matplotlib inline
import seaborn as sns
fig_dims = (20,10)
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score
import xgboost
from xgboost import XGBClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.naive_bayes import MultinomialNB
from sklearn.naive_bayes import BernoulliNB
import os
# # Preparing Data
#
# * Train data set has the pixel information of the images and the label of what digit it is.
# * Split the training set as x and y with labels
# * Normalize the data to make the models train faster
# +
train = pd.read_csv("train.csv")
test = pd.read_csv("test.csv")
train.head()
# -
x = train.copy()
x = x.drop(columns=["label"])
y = train["label"]
x.head()
# +
x = x / 255.0
x.iloc[0].max()
# -
# # KNeighbours Classifier
# +
train_x,val_x,train_y,val_y = train_test_split(x,y,random_state=32)
model = KNeighborsClassifier(n_neighbors=5, weights='distance')
model.fit(train_x,train_y)
preds = model.predict(val_x)
score = accuracy_score(preds,val_y)*100
print('Accuracy: ', score)
# -
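The accuracy reported in these cells is simply the fraction of predictions matching the true labels, scaled to a percentage; a dependency-free sketch of what `accuracy_score` computes:

```python
# Fraction of matching labels, scaled to a percentage as in the cells above.
def accuracy_pct(preds, truth):
    matches = sum(p == t for p, t in zip(preds, truth))
    return matches / float(len(truth)) * 100
```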
# # Multinomial Naive Bayes
nb = MultinomialNB()
nb.fit(train_x,train_y)
preds = nb.predict(val_x)
score = accuracy_score(preds,val_y)* 100
print('Accuracy: ', score)
# # Gaussian Naive Bayes
gnb = GaussianNB()
gnb.fit(train_x,train_y)
preds = gnb.predict(val_x)
score = accuracy_score(preds,val_y)* 100
print('Accuracy: ', score)
# # Bernoulli Naive Bayes
bnb = BernoulliNB()
bnb.fit(train_x,train_y)
preds = bnb.predict(val_x)
score = accuracy_score(preds,val_y)* 100
print('Accuracy: ', score)
# # XGBoost Classifier
# Use a distinct variable name so we don't shadow the imported xgboost module
xgb_model = XGBClassifier()
xgb_model.fit(train_x,train_y)
preds = xgb_model.predict(val_x)
score = accuracy_score(preds,val_y)*100
print('Accuracy: ', score)
# +
submission = pd.read_csv('sample_submission.csv')
test = test / 255.0
predictions = xgb_model.predict(test)
submission['Label'] = predictions
submission.to_csv('submission.csv',index=False)
| Digit Classifier/SimpleClassification/Digit Classifier.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
import pickle
import os
import numpy as np
base_dirs = ['/localdata/juan/local/',
'/localdata/juan/erehwon/',
'/localdata/juan/numenor/']
experiments = ['pilco_ssgp_rbfp_1',
'mcpilco_dropoutd_rbfp_3',
'mcpilco_dropoutd_mlpp_4',
'mcpilco_lndropoutd_rbfp_5',
'mcpilco_lndropoutd_mlpp_6',
'mcpilco_dropoutd_dropoutp_7',
'mcpilco_lndropoutd_dropoutp_8']
result_files = []
for b in base_dirs:
dirs = os.listdir(b)
for e in experiments:
for d in dirs:
if d.find(e) == 0:
res_dir = os.path.join(b,d)
res_file = os.path.join(res_dir, 'results_50_10')
result_files.append(res_file)
res_file = os.path.join(res_dir, 'results_50_20')
result_files.append(res_file)
print(result_files)
# +
base_dirs = ['/localdata/juan/inferno/',
'/localdata/juan/rhys/',
'/localdata/juan/numenor/'
]
experiments = ['pilco_ssgp_rbfp_1',
'mcpilco_dropoutd_rbfp_3',
'mcpilco_dropoutd_mlpp_4',
'mcpilco_lndropoutd_rbfp_5',
'mcpilco_lndropoutd_mlpp_6',
'mcpilco_dropoutd_dropoutp_7',
'mcpilco_lndropoutd_dropoutp_8']
result_files = []
for b in base_dirs:
for e in experiments:
res_dir = os.path.join(b,e)
res_file = os.path.join(res_dir, 'results_50_10')
result_files.append(res_file)
print(result_files)
# -
from collections import OrderedDict
result_arrays = OrderedDict()
for rpath in result_files:
if not os.path.isfile(rpath):
continue
with open(rpath, 'rb') as f:
print('Opening %s' % rpath)
exp_type = None
for e in experiments:
if rpath.find(e) >= 0:
exp_type = e
break
arrays = result_arrays.get(exp_type, [])
arrays.append(pickle.load(f))
result_arrays[exp_type] = arrays
ids = [0,1,2,4,3,5,6]
names = ['SSGP-DYN_RBF-POL (PILCO)', 'DROPOUT-DYN_RBF-POL', 'DROPOUT-DYN_MLP-POL',
'LOGNORMAL-DYN_RBF-POL', 'DROPOUT-DYN_DROPOUT-POL', 'LOGNORMAL-DYN_MLP-POL',
'LOGNORMAL-DYN_DROPOUT-POL']
from collections import OrderedDict
# gather all costs
costs = OrderedDict()
for e in experiments:
exp_results = result_arrays[e]
costs_e = []
for results in exp_results:
costs_i = []
#learning_iter
for rj in results:
costs_ij = []
#trial
for r in rj:
costs_ij.append(r[2])
costs_i.append(costs_ij)
if len(costs_i) > 0 :
costs_e.append(costs_i)
costs_i = np.concatenate(costs_e, axis=1).squeeze()
costs_i = costs_i/costs_i.shape[-1]
print(costs_i.shape)
mean_sum_costs = costs_i.sum(-1).mean(-1)
std_sum_costs = costs_i.sum(-1).std(-1)
costs[e] = (mean_sum_costs, std_sum_costs)
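The aggregation above sums costs over time steps, then takes mean and standard deviation across runs; on a toy (runs x steps) array (a sketch, runnable under both Python 2 and 3):

```python
import numpy as np

# 2 runs x 3 steps of per-step costs.
costs_i = np.array([[1.0, 2.0, 3.0],
                    [3.0, 4.0, 5.0]])
sum_per_run = costs_i.sum(-1)    # total cost per run: [6, 12]
mean_sum = sum_per_run.mean(-1)  # mean over runs
std_sum = sum_per_run.std(-1)    # spread over runs
```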
import matplotlib
from matplotlib import pyplot as plt
exp1 = ['mcpilco_dropoutd_mlpp_4',
'mcpilco_lndropoutd_mlpp_6',
'mcpilco_dropoutd_dropoutp_7',
'mcpilco_lndropoutd_dropoutp_8']
names = ['Binary Drop (p=0.1) Dyn, MLP Pol', 'Log-Normal Drop Dyn, MLP Pol', 'Binary Drop (p=0.1) Dyn, Drop MLP Pol p=0.1', 'Log-Normal Drop Dyn, Drop MLP Pol p=0.1']
linetype = ['--','-','--','-']
names = dict(zip(exp1 ,names))
linetype = dict(zip(exp1 ,linetype))
matplotlib.rcParams.update({'font.size': 20})
fig = plt.figure(figsize=(15,9))
t = range(len(costs.values()[0][1]))
for e in exp1:
mean, std = costs[e]
min_ = mean
std_ = std
#min_ = np.minimum.accumulate(mean)
#std_ = std[np.array([np.where(mean==mi) for mi in min_]).flatten()]
if names[e].find('rbf') < 0:
pl, = plt.plot(t, min_, linetype[e], label=names[e], linewidth=2)
#pl, = plt.plot(t, max_, linetype[e], label=names[e], linewidth=2, color=pl.get_color())
alpha = 0.5
for i in range(1,2):
alpha = alpha*0.5
lower_bound = min_ - i*std_*0.5
#lower_bound = min_
upper_bound = min_ + i*std_*0.5
#upper_bound = max_
plt.plot(t, upper_bound, linetype[e], linewidth=2,color=pl.get_color(),alpha=alpha)
plt.plot(t, lower_bound, linetype[e], linewidth=2,color=pl.get_color(),alpha=alpha)
plt.fill_between(t, lower_bound, upper_bound, alpha=alpha, color=pl.get_color(), linestyle=linetype[e])
plt.legend()
plt.xlabel('Number of interactions (2.5 secs at 10 Hz each)')
plt.ylabel('Average cost (over 30 runs)')
plt.show()
import matplotlib
from matplotlib import pyplot as plt
exp1 = ['pilco_ssgp_rbfp_1',
'mcpilco_dropoutd_rbfp_3',
'mcpilco_lndropoutd_rbfp_5',]
names = ['SSGP Dyn, RBF Pol', 'Binary Drop (p=0.1) Dyn, RBF Pol', 'Log-Normal Drop Dyn, RBF Pol']
linetype = ['--','-','--','-']
names = dict(zip(exp1 ,names))
linetype = dict(zip(exp1 ,linetype))
matplotlib.rcParams.update({'font.size': 20})
fig = plt.figure(figsize=(15,9))
t = range(len(costs.values()[0][1]))
for e in exp1:
mean, std = costs[e]
min_, std_ = mean, std
#min_ = np.minimum.accumulate(mean)
#std_ = std[np.array([np.where(mean==mi) for mi in min_]).flatten()]
if names[e].find('rbf') < 0:
pl, = plt.plot(t, min_, linetype[e], label=names[e], linewidth=2)
#pl, = plt.plot(t, max_, linetype[e], label=names[e], linewidth=2, color=pl.get_color())
alpha = 0.5
for i in range(1,2):
alpha = alpha*0.5
lower_bound = min_ - i*std_*0.5
#lower_bound = min_
upper_bound = min_ + i*std_*0.5
#upper_bound = max_
plt.plot(t, upper_bound, linetype[e], linewidth=2,color=pl.get_color(),alpha=alpha)
plt.plot(t, lower_bound, linetype[e], linewidth=2,color=pl.get_color(),alpha=alpha)
plt.fill_between(t, lower_bound, upper_bound, alpha=alpha, color=pl.get_color(), linestyle=linetype[e])
plt.legend()
plt.xlabel('Number of interactions (2.5 secs at 10 Hz each)')
plt.ylabel('Average cost (over 30 runs)')
plt.show()
| examples/notebooks/evaluate_policy.ipynb |