# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: dataset2vec
# language: python
# name: dataset2vec
# ---
# +
import ismlldataset
dataset_id = 31 # (between 0-119)
dataset = ismlldataset.datasets.get_dataset(dataset_id=dataset_id)
# get data
x, y = dataset.get_data()
# get a specific split
x, y = dataset.get_folds(split=1, return_valid=True)
train_x, valid_x, test_x = x
train_y, valid_y, test_y = y
# -
import tensorflow as tf
print(tf.__version__)
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python [conda root]
# language: python
# name: conda-root-py
# ---
# # Assignment 4: Word Sense Disambiguation: from start to finish
#
# ## Due: Tuesday 6 December 2016, 15:00
#
# Please name your Jupyter notebook using the following naming convention: ASSIGNMENT_4_FIRSTNAME_LASTNAME.ipynb
#
# Please send your assignment to `<EMAIL>`.
# A well-known NLP task is [Word Sense Disambiguation (WSD)](https://en.wikipedia.org/wiki/Word-sense_disambiguation). The goal is to identify the sense of a word in a sentence. Here is an example of the output of one of the best systems, called [Babelfy](http://babelfy.org/index). 
# Since 1998, there have been WSD competitions: [Senseval and SemEval](https://en.wikipedia.org/wiki/SemEval). The idea is simple: a few people annotate words in a sentence with the correct meaning, and systems try to do the same. Because we have the manual annotations, we can score how well each system performs. In this exercise, we are going to compete in [SemEval-2013 task 12: Multilingual Word Sense Disambiguation](https://www.cs.york.ac.uk/semeval-2013/task12.html).
#
# The main steps in this exercise are:
# * Introduction of the data and goals
# * Performing WSD
# * Loading manual annotations (which we will call **gold data**)
# * System output
# * Write an XML file containing both the gold data and our system output
# * Read the XML file and evaluate our performance
# * Please only use **xpath** if you are comfortable with it; it is not needed to complete the assignment.
# ## Introduction of the data and goals
# We will use the following data (originating from [SemEval-2013 task 12 test data](https://www.cs.york.ac.uk/semeval-2013/task12/data/uploads/datasets/semeval-2013-task12-test-data.zip)):
# * **system input**: data/multilingual-all-words.en.xml
# * **gold data**: data/sem2013-aw.key
# Given a word in a sentence, the goal of our system is to determine the correct meaning of that word. For example, look at the **system input** file (data/multilingual-all-words.en.xml) at lines 1724-1740.
# Note that each *sentence* element has *wf* and *instance* children; the *instance* elements are the ones for which we have to provide a meaning.
#
# ```xml
# <sentence id="d003.s005">
# <wf lemma="frankly" pos="RB">Frankly</wf>
# <wf lemma="," pos=",">,</wf>
# <wf lemma="the" pos="DT">the</wf>
# <instance id="d003.s005.t001" lemma="market" pos="NN">market</instance>
# <wf lemma="be" pos="VBZ">is</wf>
# <wf lemma="very" pos="RB">very</wf>
# <wf lemma="calm" pos="JJ">calm</wf>
# <wf lemma="," pos=",">,</wf>
# <wf lemma="observe" pos="VVZ">observes</wf>
# <wf lemma="Mace" pos="NP">Mace</wf>
# <wf lemma="Blicksilver" pos="NP">Blicksilver</wf>
# <wf lemma="of" pos="IN">of</wf>
# <wf lemma="Marblehead" pos="NP">Marblehead</wf>
# <instance id="d003.s005.t002" lemma="asset_management" pos="NE">Asset_Management</instance>
# <wf lemma="." pos="SENT">.</wf>
# </sentence>
# ```
# As a way to determine the possible meanings of a word, we will use [WordNet](https://wordnet.princeton.edu/). For example, for the lemma **market**, WordNet lists the following meanings:
from nltk.corpus import wordnet as wn
for synset in wn.synsets('market', pos='n'):
    print(synset, synset.definition())
# In order to know which meaning the manual annotators chose, we go to the **gold data** (data/sem2013-aw.key). For the identifier *d003.s005.t001*, we find:
# d003 d003.s005.t001 market%1:14:01::
# In order to know to which synset *market%1:14:01::* belongs, we can do the following:
lemma = wn.lemma_from_key('market%1:14:01::')
synset = lemma.synset()
print(synset, synset.definition())
# Hence, the manual annotators chose **market.n.04**.
# ## Performing WSD
# As a first step, we will perform WSD. For this, we will use the [**lesk** WSD algorithm](http://www.d.umn.edu/~tpederse/Pubs/banerjee.pdf) as implemented in the [NLTK](http://www.nltk.org/howto/wsd.html). One of the applications of the Lesk algorithm is to determine which senses of words are related. Imagine that **pine** has two senses, and **cone** has three senses (example from the [paper](http://www.d.umn.edu/~tpederse/Pubs/banerjee.pdf)):
#
# **Pine**
# * Sense 1: kind of *evergreen tree* with needle-shaped leaves
# * Sense 2: waste away through sorrow or illness
#
# **Cone**
# * Sense 1: solid body which narrows to a point
# * Sense 2: something of this shape whether solid or hollow
# * Sense 3: fruit of certain *evergreen tree*
#
# As you can see, **sense 1 of pine** and **sense 3 of cone** overlap in their definitions (*evergreen tree*), which indicates that these senses are related. This idea can be used to perform WSD: the words in the sentence surrounding a target word are compared against the definition of each of its senses, and the sense whose definition shares the most words with the sentence is chosen as the correct one.
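# The overlap counting at the heart of simplified Lesk can be sketched in plain Python. The glosses below are made up for illustration (they are not real WordNet definitions), and the real NLTK implementation tokenizes and scores more carefully:

```python
def lesk_overlap(context_words, definition):
    # count how many distinct context words also appear in the sense definition
    return len(set(context_words) & set(definition.lower().split()))

# toy sense inventory for 'bank' (hypothetical glosses)
bank_senses = {
    'bank.financial': 'an institution where you deposit money',
    'bank.river': 'sloping land beside a body of water',
}

context = ['i', 'went', 'to', 'the', 'bank', 'to', 'deposit', 'money']
best = max(bank_senses, key=lambda sense: lesk_overlap(context, bank_senses[sense]))
print(best)  # bank.financial: 'deposit' and 'money' overlap with the context
```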
from nltk.wsd import lesk
# Given is a function that allows you to perform WSD on a sentence. The output is a **WordNet sensekey**, hence an identifier of a sense.
# #### The function is given, but it is important that you understand how to call it.
# +
def perform_wsd(sent, lemma, pos):
    '''
    perform WSD using the lesk algorithm as implemented in the nltk

    :param list sent: list of words
    :param str lemma: a lemma
    :param str pos: a pos (n | v | a | r)

    :rtype: str
    :return: wordnet sensekey or not_found
    '''
    sensekey = 'not_found'
    wsd_result = lesk(sent, lemma, pos)
    if wsd_result is not None:
        for lemma_obj in wsd_result.lemmas():
            if lemma_obj.name() == lemma:
                sensekey = lemma_obj.key()
    return sensekey
sent = ['I', 'went', 'to', 'the', 'bank', 'to', 'deposit', 'money', '.']
assert perform_wsd(sent, 'bank', 'n') == 'bank%1:06:01::', 'key is %s' % perform_wsd(sent, 'bank', 'n')
assert perform_wsd(sent, 'dfsdf', 'n') == 'not_found', 'key is %s' % perform_wsd(sent, 'dfsdf', 'n')
print(perform_wsd(sent, 'bank', 'n'))
# -
# ## Loading manual annotations
# Your job now is to load the manual annotations from 'data/sem2013-aw.key'.
# * Tip: you can use [**repr**](https://docs.python.org/3/library/functions.html#repr) to check which delimiter (space, tab, etc.) was used.
# * Sometimes there is more than one sensekey for an identifier (see line 25 of the file for an example).
# You can use the **set** function to convert a list to a set
a_list = [1, 1, 2, 1, 3]
a_set = set(a_list)
print(a_set)
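# Parsing a single line of the key file can be sketched as follows (assuming whitespace-delimited fields: document id, instance id, then one or more sensekeys):

```python
line = 'd003 d003.s005.t001 market%1:14:01::\n'
fields = line.strip().split()       # split on whitespace
identifier = fields[1]              # the instance id
goldkeys = set(fields[2:])          # everything after the id is a sensekey
print(identifier, goldkeys)
```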
def load_gold_data(path_to_gold_key):
    '''
    given the path to the gold data of semeval2013 task 12,
    this function creates a dictionary mapping the identifier to the
    gold answers

    HINT: sometimes, there is more than one sensekey for an identifier

    :param str path_to_gold_key: path to where the gold data file is stored

    :rtype: dict
    :return: identifier (str) -> goldkeys (set)
    '''
    gold = {}
    with open(path_to_gold_key) as infile:
        for line in infile:
            # find the identifier and the goldkeys
            # add them to the dictionary
            # gold[identifier] = goldkeys
            pass
    return gold
# Please check if your function works correctly by running the cell below.
gold = load_gold_data('data/sem2013-aw.key')
assert len(gold) == 1644, 'number of gold items is %s' % len(gold)
# ## Combining system input + system output + gold data
# We are going to create a dictionary that looks like this:
# ```python
# {10: {'sent_id': 1,
#       'text': 'banks',
#       'lemma': 'bank',
#       'pos': 'n',
#       'instance_id': 'd003.s005.t001',
#       'gold_keys': {'bank%1:14:00::'},
#       'system_key': 'bank%1:14:00::'}
# }
# ```
#
# This dictionary maps a number (int) to a dictionary. Combining all relevant information in one dictionary will help us to create the NAF XML file. In order to do this, we will write several functions. To work with XML, we will first import the lxml module.
from lxml import etree
def load_sentences(semeval_2013_input):
    '''
    given the path to the semeval input xml,
    this function creates a dictionary mapping sentence identifier
    to the sentence (list of words)

    HINT: you need the text of both:
    text/sentence/instance and text/sentence/wf elements

    :param str semeval_2013_input: path to semeval 2013 input xml

    :rtype: dict
    :return: mapping sentence identifier -> list of words
    '''
    sentences = dict()
    doc = etree.parse(semeval_2013_input)
    # insert code here
    return sentences
# please check that your function works by running the cell below.
sentences = load_sentences('data/multilingual-all-words.en.xml')
assert len(sentences) == 306, 'number of sentences is different from needed 306: namely %s' % len(sentences)
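# If the element traversal is unfamiliar, here is a minimal sketch on a toy sentence. It uses the standard-library `xml.etree.ElementTree`, whose `findall` and child-iteration behaviour matches what we need from lxml:

```python
import xml.etree.ElementTree as ET

toy = '''<text>
<sentence id="d003.s005">
<wf lemma="the" pos="DT">the</wf>
<instance id="d003.s005.t001" lemma="market" pos="NN">market</instance>
<wf lemma="be" pos="VBZ">is</wf>
</sentence>
</text>'''

root = ET.fromstring(toy)
toy_sentences = {}
for sent_el in root.findall('sentence'):
    # both wf and instance children contribute their text to the sentence
    words = [child.text for child in sent_el]
    toy_sentences[sent_el.get('id')] = words
print(toy_sentences)  # {'d003.s005': ['the', 'market', 'is']}
```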
def load_input_data(semeval_2013_input):
    '''
    given the path to the input xml file, we will create a dictionary that looks like this:

    :rtype: dict
    :return: {10: {'sent_id': 1,
                   'text': 'banks',
                   'lemma': 'bank',
                   'pos': 'n',
                   'instance_id': 'd003.s005.t001',
                   'gold_keys': {},
                   'system_key': ''}
              }
    '''
    data = dict()
    doc = etree.parse(semeval_2013_input)
    identifier = 1
    for sent_el in doc.findall('text/sentence'):
        # insert code here
        for child_el in sent_el.getchildren():
            # insert code here
            info = {
                'sent_id': None,      # to fill
                'text': None,         # to fill
                'lemma': None,        # to fill
                'pos': None,          # to fill
                'instance_id': None,  # to fill if instance element, else empty string
                'gold_keys': set(),   # this is ok for now
                'system_key': ''      # this is ok for now
            }
            data[identifier] = info
            identifier += 1
    return data
data = load_input_data('data/multilingual-all-words.en.xml')
assert len(data) == 8142, 'number of tokens is not the needed 8142, namely %s' % len(data)
def add_gold_and_wsd_output(data, gold, sentences):
    '''
    the goal of this function is to fill the keys 'system_key'
    and 'gold_keys' for the entries in which the 'instance_id' is not an empty string.

    :param dict data: see output of function 'load_input_data'
    :param dict gold: see output of function 'load_gold_data'
    :param dict sentences: see output of function 'load_sentences'

    NOTE: not all instance_ids have a gold answer!

    :rtype: dict
    :return: {10: {'sent_id': 1,
                   'text': 'banks',
                   'lemma': 'bank',
                   'pos': 'n',
                   'instance_id': 'd003.s005.t001',
                   'gold_keys': {'bank%1:14:00::'},
                   'system_key': 'bank%1:14:00::'}
              }
    '''
    for identifier, info in data.items():
        instance_id = info['instance_id']  # get the instance id
        if instance_id:
            # perform wsd and get the sensekey that lesk proposes
            # add the system key to our dictionary
            # info['system_key'] = sensekey
            if instance_id in gold:
                info['gold_keys'] = gold[instance_id]
# Call the function to combine all information.
add_gold_and_wsd_output(data, gold, sentences)
# ## Create NAF with system run and gold information
# We are going to create one [NAF XML](http://www.newsreader-project.eu/files/2013/01/techreport.pdf) file containing both the gold information and our system run. We will guide you through the process step by step.
# ### [CODE IS GIVEN] Step a: create an xml object
# **NAF** will be our root element.
new_root = etree.Element('NAF')
new_tree = etree.ElementTree(new_root)
new_root = new_tree.getroot()
# We can inspect what we have created by using the **etree.dump** method. As you can see, we only have the root node **NAF** currently in our document.
etree.dump(new_root)
# ### [CODE IS GIVEN] Step b: add children
# We will now add the elements in which we will place the **wf** and **term** elements.
# +
text_el = etree.Element('text')
terms_el = etree.Element('terms')
new_root.append(text_el)
new_root.append(terms_el)
# -
etree.dump(new_root)
# ### Step c: functions to create wf and term elements
# For this step, the code is not given. Please complete the functions.
# #### TIP: check the subsection *Creating your own XML* from Topic 5
def create_wf_element(identifier, sent_id, text):
    '''
    create NAF wf element, such as:
    <wf id="11" sent_id="d001.s002">conference</wf>

    :param int identifier: our own identifier (convert this to string)
    :param str sent_id: the sentence id of the competition
    :param str text: the text
    '''
    # complete from here
    wf_el = etree.Element('wf')  # still to do: set the attributes and the text
    return wf_el
# #### TIP: **externalRef** elements are children of **term** elements
def create_term_element(identifier, instance_id, system_key, gold_keys):
    '''
    create NAF term element, such as:
    <term id="3885">
        <externalRef instance_id="d007.s013.t004" provenance="lesk" wordnetkey="player%1:18:04::"/>
        <externalRef instance_id="d007.s013.t004" provenance="gold" wordnetkey="player%1:18:01::"/>
    </term>

    :param int identifier: our own identifier (convert this to string)
    :param str instance_id: the instance id of the competition
    :param str system_key: system output
    :param set gold_keys: goldkeys
    '''
    # complete code here
    term_el = etree.Element('term')  # still to do: set the id and add the externalRef children
    return term_el
# ### [CODE IS GIVEN] Step d: add wf and term elements
for identifier, info in data.items():
    wf_el = create_wf_element(identifier, info['sent_id'], info['text'])
    text_el.append(wf_el)

    term_el = create_term_element(identifier,
                                  info['instance_id'],
                                  info['system_key'],
                                  info['gold_keys'])
    terms_el.append(term_el)
# ### [CODE IS GIVEN]: write to file
with open('semeval2013_run1.naf', 'wb') as outfile:
    new_tree.write(outfile,
                   pretty_print=True,
                   xml_declaration=True,
                   encoding='utf-8')
# ## Score our system run
# Read the NAF file and extract relevant statistics, such as:
# * overall performance (how many are correct?)
# * [optional]: anything that you find interesting
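# Once the gold and system keys sit side by side, the core of the overall-performance computation can be sketched like this (toy entries with made-up sensekeys; the real numbers come from the NAF file you wrote):

```python
# toy version of the combined dictionary: only instances with gold answers are scored
toy_data = {
    1: {'system_key': 'market%1:14:01::', 'gold_keys': {'market%1:14:01::'}},
    2: {'system_key': 'not_found', 'gold_keys': {'calm%3:00:00::'}},
    3: {'system_key': '', 'gold_keys': set()},  # a wf token: skipped
}

scored = [info for info in toy_data.values() if info['gold_keys']]
correct = sum(info['system_key'] in info['gold_keys'] for info in scored)
accuracy = correct / len(scored)
print(accuracy)  # 0.5
```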
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="aaF59Lg6R07M"
# # Where is your CNN Model looking at? - (Grad-CAM)
# > Use Grad-CAM to visualize where the network is looking at or which pixels in the image contribute most to the prediction being made
#
# - toc: false
# - badges: false
# - comments: false
# - categories: [Grad-CAM, Gradient Heat map visualization, Image Classification, CIFAR-10, Cutout Augmentation]
# - image: images/grad-cam.png
#
# + [markdown] id="TawFwX1AS8RN"
# ## Gradient-weighted Class Activation Mapping (Grad-CAM)
# ### Grad-CAM is a technique to visually represent where a model is looking and why it has made a certain prediction. It was first presented in the paper [Grad-CAM: Visual Explanations from Deep Networks via Gradient-based Localization](https://arxiv.org/abs/1610.02391)
# + [markdown] id="5vYLb4A1hN8i"
# We will classify images in the CIFAR-10 dataset and integrate Grad-CAM visualization. We will also use Cutout image augmentation to train the model.
#
# ### Import necessary modules
# + id="KkwXnw9OfHZl"
from keras import backend as K
import time
import matplotlib.pyplot as plt
import numpy as np
# %matplotlib inline
np.random.seed(2017)
from keras.models import Sequential
from keras.layers.convolutional import Convolution2D, MaxPooling2D
from keras.layers import Activation, Flatten, Dropout
from keras.layers.normalization import BatchNormalization
from keras.utils import np_utils
# + [markdown] id="ipgjdjmAhvfv"
# ### Create train and test data using the CIFAR-10 dataset in Keras
# + id="NHpnoCHZfO8g" outputId="1a1fbf6f-59cd-4827-d9e8-640663606103" colab={"base_uri": "https://localhost:8080/", "height": 51}
from keras.datasets import cifar10
(train_features, train_labels), (test_features, test_labels) = cifar10.load_data()
num_train, img_rows, img_cols, img_channels = train_features.shape  # channels-last ordering
num_test, _, _, _ = test_features.shape
num_classes = len(np.unique(train_labels))
# + [markdown] id="e9wIkqDgiDUh"
# ### Plot some of the images in the dataset along with their class labels
# + id="14HyBUXdfS6G" outputId="46a5e2a3-4544-4aa7-9971-f93bfb2fa809" colab={"base_uri": "https://localhost:8080/", "height": 213}
class_names = ['airplane', 'automobile', 'bird', 'cat', 'deer',
               'dog', 'frog', 'horse', 'ship', 'truck']
fig = plt.figure(figsize=(8, 3))
for i in range(num_classes):
    ax = fig.add_subplot(2, 5, 1 + i, xticks=[], yticks=[])
    idx = np.where(train_labels[:] == i)[0]
    features_idx = train_features[idx, ::]
    img_num = np.random.randint(features_idx.shape[0])
    im = features_idx[img_num]
    ax.set_title(class_names[i])
    plt.imshow(im)
plt.show()
# + [markdown] id="phNvqFCVixXy"
# ### Scale the input features to be within 0 and 1
# ### Convert the train and test labels to 10-class categorical format
# + id="T5c5nDvxm6zR"
train_features = train_features.astype('float32')/255
test_features = test_features.astype('float32')/255
# convert class labels to binary class labels
train_labels = np_utils.to_categorical(train_labels, num_classes)
test_labels = np_utils.to_categorical(test_labels, num_classes)
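# What `np_utils.to_categorical` does can be sketched with plain NumPy (a stand-in for illustration, not the Keras implementation):

```python
import numpy as np

def to_one_hot(labels, num_classes):
    # build an (N, num_classes) matrix with a single 1.0 per row
    labels = np.asarray(labels).ravel()
    one_hot = np.zeros((labels.size, num_classes), dtype='float32')
    one_hot[np.arange(labels.size), labels] = 1.0
    return one_hot

print(to_one_hot([3, 0], 10))  # row 0 has its 1.0 at index 3, row 1 at index 0
```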
# + [markdown] id="NfBJFe9WKMoU"
# #### Define Model for image classification
# + id="sougsIPE_Uqx" outputId="7ce40423-7e51-429a-8f7a-b7a543cccf37" colab={"base_uri": "https://localhost:8080/", "height": 1000}
#hide_output
from keras.layers import Conv2D,BatchNormalization,MaxPooling2D,Activation,Flatten
# Define the model #RF
model = Sequential()
model.add(Conv2D(32, 3, border_mode='same', name='layer1', input_shape=(32, 32, 3))) #3
model.add(BatchNormalization(name='BN1'))
model.add(Activation('relu',name='rl1'))
#Conv block 1
model.add(Conv2D(64, 3,name='layer2',border_mode='same')) #5
model.add(BatchNormalization(name='BN2'))
model.add(Activation('relu',name='rl2'))
model.add(Conv2D(128, 3,name='layer3')) #7
model.add(BatchNormalization(name='BN3'))
model.add(Activation('relu',name='rl3'))
#dropout after conv block1
model.add(Dropout(0.1,name='drp1'))
#Transition Block 1
model.add(Conv2D(32,1,name='tb1'))
model.add(BatchNormalization(name='tb-BN1'))
model.add(Activation('relu',name='tb-rl1'))
model.add(MaxPooling2D(pool_size=(2, 2),name='mp1')) #14
#Conv Block 2
model.add(Conv2D(64, 3, name='layer4',border_mode='same')) #16
model.add(BatchNormalization(name='BN4'))
model.add(Activation('relu',name='rl4'))
model.add(Conv2D(128, 3,name='layer5',border_mode='same')) #18
model.add(BatchNormalization(name='BN5'))
model.add(Activation('relu',name='rl5'))
#dropout after conv block2
model.add(Dropout(0.1,name='drp2'))
#Transition Block 2
model.add(Conv2D(32,1,name='tb2'))
model.add(BatchNormalization(name='tb-BN2'))
model.add(Activation('relu',name='tb-rl2'))
model.add(MaxPooling2D(pool_size=(2, 2),name='mp2')) #36 - we have reached the image size here
#final conv Block
model.add(Conv2D(64, 3, name='layer6',border_mode='same')) #38
model.add(BatchNormalization(name='BN6'))
model.add(Activation('relu',name='rl6'))
model.add(Conv2D(128, 3,name='layer7',border_mode='same')) #40
model.add(BatchNormalization(name='BN7'))
model.add(Activation('relu',name='rl7'))
#dropout after final conv block
model.add(Dropout(0.1,name='d3'))
#Pointwise convolution to squash 128 channels to 10 output channels
model.add(Conv2D(10,1,name='red1'))
model.add(BatchNormalization(name='red-BN1'))
model.add(Activation('relu',name='rrl1'))
#last conv layer - No ReLU activation, No Batch Normalization
model.add(Conv2D(10,7,name='layer8')) #47
#Flatten the output
model.add(Flatten())
#Softmax activation to output likelihood values for classes
model.add(Activation('softmax'))
#Print model summary
model.summary()
# + [markdown] id="VTdsbwYhzb-t"
# ### Learning Rate Scheduler: we will add a custom learning rate scheduler that reduces the rate every 3rd epoch, subject to a minimum of 0.0005. We will also start with a slightly larger lr of 0.003 compared to the default of 0.001 for the Adam optimizer
# + id="KVxgzGEItcF1"
# define a learning rate scheduler: reduce the lr by 10% every 3 epochs,
# subject to a minimum lr of 0.0005
from keras.optimizers import Adam
from keras.callbacks import LearningRateScheduler

def scheduler(epoch, lr):
    if epoch % 3 == 0 and epoch:
        new_lr = max(0.9 * lr, 0.0005)
    else:
        new_lr = lr
    return round(new_lr, 10)

lr_scheduler = LearningRateScheduler(scheduler, verbose=1)
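# To sanity-check the schedule, we can run the same rule over a few epochs outside Keras (epoch numbering starts at 0, as `LearningRateScheduler` passes it; the function is repeated here so the sketch is self-contained):

```python
def scheduler_sketch(epoch, lr):
    # every 3rd epoch (except epoch 0), decay the lr by 10%, floor at 0.0005
    if epoch % 3 == 0 and epoch:
        lr = max(0.9 * lr, 0.0005)
    return round(lr, 10)

lr = 0.003
trace = []
for epoch in range(7):
    lr = scheduler_sketch(epoch, lr)
    trace.append(lr)
print(trace)  # decays at epochs 3 and 6: [0.003, 0.003, 0.003, 0.0027, 0.0027, 0.0027, 0.00243]
```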
# + id="RFpdzh6CtcF3" outputId="31c690f1-0d41-4961-8d31-7849e8ac614c" colab={"base_uri": "https://localhost:8080/", "height": 71}
#hide_output
#start with a higher lr of 0.003
model.compile(loss='categorical_crossentropy', optimizer=Adam(lr=0.003), metrics=['accuracy'])
# + [markdown] id="5kPGjkaewSi1"
# ### Mount Google Drive so that you can save the model with the best validation accuracy and use it later for prediction tasks
# + id="FEeWmjMYbsup" outputId="349ab084-944b-43c7-8f8c-72bb556b0989" colab={"base_uri": "https://localhost:8080/", "height": 122}
#hide_output
from google.colab import drive
def mount_drive():
    drive.mount('/gdrive', force_remount=True)

mount_drive()
# + [markdown] id="p5nonS1EwhQP"
# ### Create a ModelCheckpoint callback to check validation accuracy at the end of each epoch and save the model with the best validation accuracy
# + id="6geQ-_37bg4I"
from keras.callbacks import ModelCheckpoint
chkpoint_model=ModelCheckpoint("/gdrive/My Drive/EVA/Session9/model_customv1_cifar10_best.h5", monitor='val_acc', verbose=1, save_best_only=True, save_weights_only=False, mode='max')
# + [markdown] id="mO4wuH94_Uq7"
# ### Data Augmentation: define a data generator with horizontal flip set to True and a zoom range of 0.15
#
# ### Train the model for 100 epochs
#
# + id="ddjtsX0F_Uq8" outputId="56ca4d61-3bc7-4c83-e4e2-fe8216df5c85" colab={"base_uri": "https://localhost:8080/", "height": 1000}
#hide_output
from keras.preprocessing.image import ImageDataGenerator

datagen = ImageDataGenerator(zoom_range=0.15,
                             horizontal_flip=True)

# train the model and time the run
start = time.time()
model_info = model.fit_generator(datagen.flow(train_features, train_labels, batch_size=128),
                                 samples_per_epoch=train_features.shape[0], nb_epoch=100,
                                 validation_data=(test_features, test_labels),
                                 callbacks=[chkpoint_model, lr_scheduler], verbose=1)
end = time.time()
print("Model took %0.2f seconds to train\n" % (end - start))
# + [markdown] id="VfI8zCGaw8WS"
# ### The model trained for 100 epochs and reached a maximum validation accuracy of 88.05%. The model with the best validation accuracy was saved to Google Drive
# + [markdown] id="jrDHvlkExLFP"
# ### Load the model with best validation accuracy
# + id="ScRHF6AGZbD5"
from keras.models import load_model
model1=load_model('/gdrive/My Drive/EVA/Session9/model_customv1_cifar10_best.h5')
# + [markdown] id="UXVYz6nOxTqy"
# ## Integrate Grad-CAM to visualize gradient heatmaps
# ### We will integrate Grad-CAM to visualize where the network is looking, i.e. which pixels in the image contribute most to the prediction being made.
#
# ### Choose 4 images from the test dataset, predict their classes, and print the Grad-CAM heatmap visualization for them
# + id="I71r7UoCZ1H2" outputId="0b13d773-d102-4c00-eb25-b79d324d49bf" colab={"base_uri": "https://localhost:8080/", "height": 789}
import cv2
from mpl_toolkits.axes_grid1 import ImageGrid
from google.colab.patches import cv2_imshow
#select test images and corresponding labels to print heatmap
#select test images and corresponding labels to print heatmaps for
x = np.array([test_features[41], test_features[410], test_features[222], test_features[950]])
y = [test_labels[41], test_labels[410], test_labels[222], test_labels[950]]
#make predictions for these 4 images
preds = model1.predict(x)
for j in range(4):
    # get the class id from the prediction values
    class_idx = np.argmax(preds[j])
    class_output = model1.output[:, class_idx]
    # choose the layer before the last 7x7 layer
    last_conv_layer = model1.get_layer("rrl1")
    # compute the gradients and from them the heatmap
    grads = K.gradients(class_output, last_conv_layer.output)[0]
    pooled_grads = K.mean(grads, axis=(0, 1, 2))
    iterate = K.function([model1.input], [pooled_grads, last_conv_layer.output[0]])
    # feed only image j, so the conv activations belong to the image being explained
    pooled_grads_value, conv_layer_output_value = iterate([x[j:j + 1]])
    for i in range(10):
        conv_layer_output_value[:, :, i] *= pooled_grads_value[i]
    heatmap = np.mean(conv_layer_output_value, axis=-1)
    heatmap = np.maximum(heatmap, 0)
    heatmap /= np.max(heatmap)
    img = x[j]
    # resize the 7x7 heatmap to the image size of 32x32
    heatmap = cv2.resize(heatmap, (img.shape[1], img.shape[0]))
    heatmap = np.uint8(255 * heatmap)
    heatmap = cv2.applyColorMap(heatmap, cv2.COLORMAP_JET)
    # convert from BGR to RGB
    heatmap1 = cv2.cvtColor(heatmap, cv2.COLOR_BGR2RGB)
    # create a superimposed image in case we want to print it using cv2 (cv2_imshow is supported in colab)
    superimposed_img = cv2.addWeighted(img, 0.8, heatmap, 0.2, 0, dtype=5)
    # since cv2.imshow does not work in jupyter notebooks and colab,
    # we use matplotlib to print the image and its heatmap
    fig = plt.figure(1, (5, 5))
    grid = ImageGrid(fig, 111,
                     nrows_ncols=(1, 2),
                     axes_pad=0.3,
                     )
    print(" original class is: " + class_names[np.argmax(y[j])] + " and predicted class is: " + str(class_names[class_idx]))
    grid[0].imshow(img)
    grid[0].set_title('Original')
    # show the original image with the heatmap on top at 70% opacity
    grid[1].imshow(img, alpha=1)
    grid[1].imshow(heatmap1, alpha=0.7)
    grid[1].set_title('superimposed heatmap')
    plt.show()
# + [markdown] id="8sZGXI8v5YAi"
# ## How about misclassified images?
#
# ### Let us also choose 4 misclassified images to visualize their Grad-CAM heatmaps
# + id="m7ETpXW7IkJA"
pred = model1.predict(test_features)
pred2 = np.argmax(pred, axis=1)
wrong_set = []
correct_set = []
wrong_labels = []
true_labels = []
wrong_indices = []
for i in range(len(test_features)):
    if pred2[i] == np.argmax(test_labels[i]):
        correct_set.append(test_features[i])
    else:
        wrong_indices.append(i)
        wrong_labels.append(class_names[pred2[i]])
        true_labels.append(class_names[np.argmax(test_labels[i])])
        wrong_set.append(test_features[i])
# + [markdown] id="oRy6rySc5rKf"
# ### A selection of 4 misclassified images
# + id="m_Sgi3VZd0r0" outputId="99396598-e398-42a9-9a49-44a587841224" colab={"base_uri": "https://localhost:8080/", "height": 228}
print(' Selection of 4 misclassified images \n _________________________________\n')
from mpl_toolkits.axes_grid1 import ImageGrid

fig = plt.figure(1, (12, 12))
grid = ImageGrid(fig, 111,
                 nrows_ncols=(1, 4),
                 axes_pad=1,
                 )
for i in range(5, 9):
    grid[i - 5].imshow(wrong_set[i].reshape(32, 32, 3))
    grid[i - 5].set_title('{2}: {0}, predicted: {1}'.format(true_labels[i], wrong_labels[i], wrong_indices[i]))
plt.show()
# + [markdown] id="oY8PQdl75zT-"
# ### Print the Grad-CAM heatmap for the misclassified images
# + id="t9MA7bJZh0Z4" outputId="561aa4b7-7620-4da8-dec6-b2015038a6ba" colab={"base_uri": "https://localhost:8080/", "height": 789}
#select 4 of the misclassified images to be visualized
w_list = wrong_indices[5:9]
x = []
y = []
for i in range(len(w_list)):
    x.append(test_features[w_list[i]])
    y.append(test_labels[w_list[i]])
#convert the image list to a numpy array
x = np.array(x)
#make predictions for these 4 images
preds = model1.predict(x)
for j in range(len(x)):
    # get the class id from the prediction values
    class_idx = np.argmax(preds[j])
    class_output = model1.output[:, class_idx]
    # choose the layer before the last 7x7 layer
    last_conv_layer = model1.get_layer("rrl1")
    # compute the gradients and from them the heatmap
    grads = K.gradients(class_output, last_conv_layer.output)[0]
    pooled_grads = K.mean(grads, axis=(0, 1, 2))
    iterate = K.function([model1.input], [pooled_grads, last_conv_layer.output[0]])
    # feed only image j, so the conv activations belong to the image being explained
    pooled_grads_value, conv_layer_output_value = iterate([x[j:j + 1]])
    for i in range(10):
        conv_layer_output_value[:, :, i] *= pooled_grads_value[i]
    heatmap = np.mean(conv_layer_output_value, axis=-1)
    heatmap = np.maximum(heatmap, 0)
    heatmap /= np.max(heatmap)
    img = x[j]
    # resize the 7x7 heatmap to the image size of 32x32
    heatmap = cv2.resize(heatmap, (img.shape[1], img.shape[0]))
    heatmap = np.uint8(255 * heatmap)
    heatmap = cv2.applyColorMap(heatmap, cv2.COLORMAP_JET)
    # convert from BGR to RGB
    heatmap1 = cv2.cvtColor(heatmap, cv2.COLOR_BGR2RGB)
    # create a superimposed image in case we want to print it using cv2 (cv2_imshow is supported in colab)
    superimposed_img = cv2.addWeighted(img, 0.8, heatmap, 0.2, 0, dtype=5)
    # since cv2.imshow does not work in jupyter notebooks and colab,
    # we use matplotlib to print the image and its heatmap
    fig = plt.figure(1, (5, 5))
    grid = ImageGrid(fig, 111,
                     nrows_ncols=(1, 2),
                     axes_pad=0.3,
                     )
    print(" original class is: " + class_names[np.argmax(y[j])] + " and predicted class is: " + str(class_names[class_idx]))
    grid[0].imshow(img)
    grid[0].set_title('Original')
    # show the original image with the heatmap on top at 70% opacity
    grid[1].imshow(img, alpha=1)
    grid[1].imshow(heatmap1, alpha=0.7)
    grid[1].set_title('superimposed heatmap')
    plt.show()
# + [markdown] id="Z4x6TO6Y6FQE"
# ### We trained the model with some basic data augmentation techniques available in Keras and visualized the Grad-CAM heatmaps for a selection of 4 correctly classified and 4 misclassified images. Let us now train the model with another augmentation technique called cutout, see whether it improves the prediction of these misclassified images, and again visualize where the model looks when making its predictions
# + [markdown] id="q9TJRVWvTtNJ"
# ## Model prediction improvement with Cutout augmentation
# Now let us try to improve the model's prediction accuracy using an image augmentation technique called Cutout, and see how the model then performs on the misclassified images
# + [markdown] id="sht0wCzh5_Ne"
# ## Cutout Augmentation
# Cutout was first presented as an effective augmentation technique in these two papers :
#
# [Improved Regularization of Convolutional Neural Networks with Cutout](https://arxiv.org/abs/1708.04552)
# and
# [Random Erasing Data Augmentation](https://arxiv.org/abs/1708.04896)
#
# The idea is to randomly cut away patches of information from the images a model trains on, forcing it to learn from more parts of each image. The model then learns more features of a class instead of relying on simple cues from small areas of the image, which helps it generalize and make better predictions.
#
# We will use python code for cutout /random erasing found at https://github.com/yu4u/cutout-random-erasing
#
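# Before pulling in that script, the core cutout operation is easy to sketch in NumPy. This is a simplified stand-in for the linked `random_eraser.py`, which additionally randomizes the patch size, aspect ratio, and erasing probability:

```python
import numpy as np

def cutout(image, patch_size=8, rng=None):
    # return a copy of image (H, W, C) with one random square patch
    # replaced by random values, as in random erasing
    if rng is None:
        rng = np.random.default_rng(0)
    out = image.copy()
    h, w, c = out.shape
    top = rng.integers(0, h - patch_size + 1)
    left = rng.integers(0, w - patch_size + 1)
    out[top:top + patch_size, left:left + patch_size] = rng.random((patch_size, patch_size, c))
    return out

img = np.zeros((32, 32, 3), dtype='float32')
aug = cutout(img)
```

A function like this is typically plugged into the Keras `ImageDataGenerator` via its `preprocessing_function` argument, so each batch gets a fresh random patch.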
# + id="GB0RhxhAj2UO" outputId="5dd24d81-33c5-4fe7-f8b4-eaac724f13ee" colab={"base_uri": "https://localhost:8080/", "height": 204}
#get code for random erasing from https://github.com/yu4u/cutout-random-erasing
# !wget https://raw.githubusercontent.com/yu4u/cutout-random-erasing/master/random_eraser.py
# + [markdown] id="pfEx0Oo39kP4"
# ### Define model (It is the same as above)
# + id="M1CZOU5CkyLj" outputId="6b6b79ad-2b70-4cb1-ca3c-3f02d2ab6997" colab={"base_uri": "https://localhost:8080/", "height": 1000}
#hide_output
from keras.layers import Conv2D,BatchNormalization,MaxPooling2D,Activation,Flatten
# Define the model #RF
model = Sequential()
model.add(Conv2D(32, 3, border_mode='same', name='layer1', input_shape=(32, 32, 3))) #3
model.add(BatchNormalization(name='BN1'))
model.add(Activation('relu',name='rl1'))
#Conv block 1
model.add(Conv2D(64, 3,name='layer2',border_mode='same')) #5
model.add(BatchNormalization(name='BN2'))
model.add(Activation('relu',name='rl2'))
model.add(Conv2D(128, 3,name='layer3')) #7
model.add(BatchNormalization(name='BN3'))
model.add(Activation('relu',name='rl3'))
#dropout after conv block1
model.add(Dropout(0.1,name='drp1'))
#Transition Block 1
model.add(Conv2D(32,1,name='tb1'))
model.add(BatchNormalization(name='tb-BN1'))
model.add(Activation('relu',name='tb-rl1'))
model.add(MaxPooling2D(pool_size=(2, 2),name='mp1')) #14
#Conv Block 2
model.add(Conv2D(64, 3, name='layer4',border_mode='same')) #16
model.add(BatchNormalization(name='BN4'))
model.add(Activation('relu',name='rl4'))
model.add(Conv2D(128, 3,name='layer5',border_mode='same')) #18
model.add(BatchNormalization(name='BN5'))
model.add(Activation('relu',name='rl5'))
#dropout after conv block2
model.add(Dropout(0.1,name='drp2'))
#Transition Block 2
model.add(Conv2D(32,1,name='tb2'))
model.add(BatchNormalization(name='tb-BN2'))
model.add(Activation('relu',name='tb-rl2'))
model.add(MaxPooling2D(pool_size=(2, 2),name='mp2')) #36 - we have reached the image size here
#final conv Block
model.add(Conv2D(64, 3, name='layer6',border_mode='same')) #38
model.add(BatchNormalization(name='BN6'))
model.add(Activation('relu',name='rl6'))
model.add(Conv2D(128, 3,name='layer7',border_mode='same')) #40
model.add(BatchNormalization(name='BN7'))
model.add(Activation('relu',name='rl7'))
#dropout after final conv block
model.add(Dropout(0.1,name='d3'))
#Pointwise convolution to squash 128 channels to 10 output channels
model.add(Conv2D(10,1,name='red1'))
model.add(BatchNormalization(name='red-BN1'))
model.add(Activation('relu',name='rrl1'))
#last conv layer - No ReLU activation, No Batch Normalization
model.add(Conv2D(10,7,name='layer8')) #47
#Flatten the output
model.add(Flatten())
#Softmax activation to output likelihood values for classes
model.add(Activation('softmax'))
#Print model summary
model.summary()
# + [markdown] id="hdf_XNSA9qdb"
# ### Compile the model with Adam optimizer and initial learning rate of 0.003
# + id="JhbyAnePljbU"
model.compile(loss='categorical_crossentropy', optimizer=Adam(lr=0.003), metrics=['accuracy'])
# + [markdown] id="bH6YaqY594-0"
# ### Define a new ModelCheckpoint to save this model, trained with cutout augmentation, to a separate path on Drive
# + id="QfzYe3Fwk_fi"
chkpoint_model=ModelCheckpoint("/gdrive/My Drive/EVA/Session9/model3_with_cutout_cifar10_best.h5", monitor='val_acc', verbose=1, save_best_only=True, save_weights_only=False, mode='max')
# + [markdown] id="PDhQx-uHk_fk"
# ### Data Augmentation:
# - Define a data generator with horizontal flip set to True and a zoom range of 0.15.
# - Add random erasing (cutout) as a preprocessing step, using the default parameters from the random-eraser code.
#
# ### Train the model for 100 epochs
#
#
# + id="TAFBuBi8k_fl" outputId="47cde4f0-9501-4dc5-8b40-c360e91de9bc" colab={"base_uri": "https://localhost:8080/", "height": 1000}
#hide_output
from random_eraser import get_random_eraser
from keras.preprocessing.image import ImageDataGenerator
datagen = ImageDataGenerator(preprocessing_function=get_random_eraser(v_l=0, v_h=1),
zoom_range=0.15,
horizontal_flip=True)
# train the model
start = time.time()
# Train the model
model_info = model.fit_generator(datagen.flow(train_features, train_labels, batch_size = 128),
samples_per_epoch = train_features.shape[0], nb_epoch = 100,
validation_data = (test_features, test_labels),
callbacks=[chkpoint_model,lr_scheduler],verbose=1)
end = time.time()
print ("Model took %0.2f seconds to train\n"%(end - start))
# + [markdown] id="agc1bExP-cMw"
# Validation accuracy after 100 epochs is 88.28%.
#
# ### Load the new model trained with cutout augmentation
# + id="ZXcu7ftIsTj6"
model1=load_model('/gdrive/My Drive/EVA/Session9/model3_with_cutout_cifar10_best.h5')
# + [markdown] id="ZGOBcUjG-ipQ"
# ### visualize the same 4 images using Grad-CAM heatmap
# + id="sGiV05cQsHxq" outputId="d803a821-9678-463f-ff65-dcb6ccbcb852" colab={"base_uri": "https://localhost:8080/", "height": 789}
#select test images and corresponding labels to print heatmap
x=np.array([test_features[41],test_features[410],test_features[222],test_features[950]])
y=[test_labels[41],test_labels[410],test_labels[222],test_labels[950]]
#make prediction for these 4 images
preds = model1.predict(x)
for j in range(4):
#get class id from the prediction values
class_idx = np.argmax(preds[j])
class_output = model1.output[:, class_idx]
## choose the layer before last 7x7 layer
last_conv_layer = model1.get_layer("rrl1")
# compute gradients and from it heatmap
grads = K.gradients(class_output, last_conv_layer.output)[0]
pooled_grads = K.mean(grads, axis=(0, 1, 2))
iterate = K.function([model1.input], [pooled_grads, last_conv_layer.output[0]])
pooled_grads_value, conv_layer_output_value = iterate([x])
for i in range(10):
conv_layer_output_value[:, :, i] *= pooled_grads_value[i]
heatmap = np.mean(conv_layer_output_value, axis=-1)
heatmap = np.maximum(heatmap, 0)
heatmap /= np.max(heatmap)
img = x[j]
#resize heatmap 7x7 to image size of 32x32
heatmap = cv2.resize(heatmap, (img.shape[1], img.shape[0]))
heatmap = np.uint8(255 * heatmap)
heatmap = cv2.applyColorMap(heatmap, cv2.COLORMAP_JET)
# convert from BGR to RGB
heatmap1 = cv2.cvtColor(heatmap, cv2.COLOR_BGR2RGB)
# create superimposed image if we want to print using cv2 (cv2_imshow supported in colab)
superimposed_img = cv2.addWeighted(img, 0.8, heatmap, 0.2, 0,dtype=5)
# since cv.imshow does not work in jupyter notebooks and colab
# we will use matplotlib to print the image and its heatmap
fig = plt.figure(1, (5,5))
grid = ImageGrid(fig, 111,
nrows_ncols=(1,2),
axes_pad=0.3,
)
print(" original class is :"+class_names[np.argmax(y[j])]+" and predicted class is :"+str(class_names[class_idx]))
grid[0].imshow(img)
grid[0].set_title('Original')
#print the original image and on top of it place the heat map at 70% transparency
grid[1].imshow(img,alpha=1)
grid[1].imshow(heatmap1,alpha=0.7)
grid[1].set_title('superimposed heatmap')
plt.show()
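# The channel-weighting arithmetic inside the loop above can be factored into a small helper. This is a sketch of the same computation in plain numpy:

```python
import numpy as np

def gradcam_heatmap(conv_output, pooled_grads):
    """Weight each channel of a conv feature map by its pooled gradient,
    average over channels, apply ReLU and normalize to [0, 1]."""
    weighted = conv_output * pooled_grads  # broadcasts over the channel axis
    heatmap = np.maximum(weighted.mean(axis=-1), 0)
    peak = heatmap.max()
    return heatmap / peak if peak > 0 else heatmap
```

# With `conv_layer_output_value` of shape (7, 7, 10) and `pooled_grads_value` of shape (10,), this reproduces the heatmap computed step by step in the loop.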
# + [markdown] id="aq5NYHjb_7cy"
# ### Let us see what happened to the 4 misclassified images after cutout augmentation: did the prediction change, and did the heatmap pattern change too?
# + id="C6yRHOCvso6W" outputId="9c6148ae-4559-4c99-ee82-418054f964a5" colab={"base_uri": "https://localhost:8080/", "height": 789}
#select the previously misclasified images to be visualized
w_list=wrong_indices[5:9]
x=[]
y=[]
for i in range(len(w_list)):
x.append(test_features[w_list[i]])
y.append(test_labels[w_list[i]])
#convert the image list to numpy array
x=np.array(x)
#make prediction for these 4 images
preds = model1.predict(x)
for j in range(len(x)):
#get class id from the prediction values
class_idx = np.argmax(preds[j])
class_output = model1.output[:, class_idx]
## choose the layer before last 7x7 layer
last_conv_layer = model1.get_layer("rrl1")
# compute gradients and from it heatmap
grads = K.gradients(class_output, last_conv_layer.output)[0]
pooled_grads = K.mean(grads, axis=(0, 1, 2))
iterate = K.function([model1.input], [pooled_grads, last_conv_layer.output[0]])
pooled_grads_value, conv_layer_output_value = iterate([x])
for i in range(10):
conv_layer_output_value[:, :, i] *= pooled_grads_value[i]
heatmap = np.mean(conv_layer_output_value, axis=-1)
heatmap = np.maximum(heatmap, 0)
heatmap /= np.max(heatmap)
img = x[j]
#resize heatmap 7x7 to image size of 32x32
heatmap = cv2.resize(heatmap, (img.shape[1], img.shape[0]))
heatmap = np.uint8(255 * heatmap)
heatmap = cv2.applyColorMap(heatmap, cv2.COLORMAP_JET)
# convert from BGR to RGB
heatmap1 = cv2.cvtColor(heatmap, cv2.COLOR_BGR2RGB)
# create superimposed image if we want to print using cv2 (cv2_imshow supported in colab)
superimposed_img = cv2.addWeighted(img, 0.8, heatmap, 0.2, 0,dtype=5)
# since cv.imshow does not work in jupyter notebooks and colab
# we will use matplotlib to print the image and its heatmap
fig = plt.figure(1, (5,5))
grid = ImageGrid(fig, 111,
nrows_ncols=(1,2),
axes_pad=0.3,
)
print(" original class is :"+class_names[np.argmax(y[j])]+" and predicted class is :"+str(class_names[class_idx]))
grid[0].imshow(img)
grid[0].set_title('Original')
#print the original image and on top of it place the heat map at 70% transparency
grid[1].imshow(img,alpha=1)
grid[1].imshow(heatmap1,alpha=0.7)
grid[1].set_title('superimposed heatmap')
plt.show()
# + [markdown] id="4tbZ2LE6AWl1"
# ### We can see that cutout augmentation forced the model to look at different parts of the image than it did earlier, which helped it classify this set of 4 previously misclassified images correctly
#
# ### Note also that the validation accuracy is still only 88.28% even with cutout; we should train the network for more epochs and with different combinations of augmentations to get better results
# + [markdown] id="YKQQiVZ4IE3w"
#
| _notebooks/2020-04-11-Grad-CAM-visualization-.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python [conda env:sonata-py37]
# language: python
# name: conda-env-sonata-py37-py
# ---
# # SONATA Spike Train Reports
#
# Spike-train reports are an HDF5-based format for representing neuronal spikes in a manner optimized for large-scale simulations. pySonata includes classes for creating these reports and reading them into your own code.
# +
import numpy as np
from sonata.reports.spike_trains import SpikeTrains, sort_order, PoissonSpikeGenerator
# -
# ## Saving Spiking information
# We start by showing how to create a sonata file for storing spike trains of multi-neuron populations.
#
# Once we have initialized a SpikeTrains object, we start by adding individual spiking events. To represent a spike we need three things: the id of the node (e.g. cell) that spiked, the name of the population (in this case the area of the brain the node came from), and the time of the spike. By default SONATA assumes the units are milliseconds, but that can be changed in the format.
#
# To add an individual spike we can use the **add_spike** method
st_buffer = SpikeTrains()
st_buffer.add_spike(node_id=0, timestamp=1.0, population='VTA')
st_buffer.add_spike(node_id=0, timestamp=23.0, population='VTA')
st_buffer.add_spike(node_id=0, timestamp=50.1, population='VTA')
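# Conceptually, the buffer is just an append-only collection of (time, population, node_id) records that is sorted and written out at the end. A plain-Python sketch of that bookkeeping (not pySonata's actual implementation):

```python
class SpikeBufferSketch:
    """Minimal in-memory stand-in for SpikeTrains' bookkeeping."""

    def __init__(self):
        self._spikes = []  # (timestamp, population, node_id) records

    def add_spike(self, node_id, timestamp, population):
        self._spikes.append((timestamp, population, node_id))

    def add_spikes(self, node_ids, timestamps, population):
        # a scalar node_id applies to every timestamp
        if not hasattr(node_ids, '__len__'):
            node_ids = [node_ids] * len(timestamps)
        for nid, ts in zip(node_ids, timestamps):
            self.add_spike(nid, ts, population)

    def sorted_by_time(self):
        return sorted(self._spikes)
```

# The real class additionally streams records to HDF5 and can spill to a disk cache; the interface, though, mirrors the calls shown in this notebook.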
# The **add_spikes** method allows us to save an array of spikes at a time.
# By setting node_ids to a single node value, all 10 spikes will be associated with node 1.
times = np.sort(np.random.uniform(0.01, 1000.0, size=10))
st_buffer.add_spikes(node_ids=1, timestamps=times, population='VTA')
# We can also pass in a list of nodes that is the same size as the list of spikes, creating a one-to-one correspondence
node_ids = np.random.choice([2, 3, 4, 5], size=100, replace=True)
timestamps = np.random.uniform(0.01, 1000.0, size=100)
st_buffer.add_spikes(node_ids=node_ids, timestamps=timestamps, population='VTA')
# Often in simulations we are recording from cells from different populations/areas. By specifying a different population we prevent id clashes.
st_buffer.add_spikes(node_ids=0, timestamps=np.random.uniform(0.01, 1000.0, size=10), population='PFC')
# Finally, we save our spikes to an HDF5 file in the SONATA format. Use the *sort_order* argument to sort the spikes by time (*sort_order.by_time*) or by node_id (*sort_order.by_id*). If *sort_order* is left blank or set to *sort_order.unknown*, the spikes are saved in the same order they were inserted.
st_buffer.to_sonata('output/recorded_spiketimes.h5', sort_order=sort_order.by_time)
# ## Poisson-distributed spike reports
#
# pySonata also includes special tools for creating spike-train reports from predefined distributions. Below we create a SONATA file containing a population of 100 nodes, each firing at a constant rate of 15 Hz over the interval 0.0 to 3.0 seconds.
psg = PoissonSpikeGenerator()
psg.add(node_ids=range(0, 100), firing_rate=15.0, times=(0.0, 3.0), population='const')
# We can also create a non-homogeneous Poisson distribution by passing in a list of rates (make sure the rates are always non-negative). Here we generate a second population whose firing rate varies across the recorded time.
# +
times = np.linspace(0, 3.0, 1000)
rates = 15.0 + 15.0*np.sin(times)
psg.add(node_ids=range(0, 100), firing_rate=rates, times=times, population='sinosodial')
# -
psg.to_sonata('output/poisson_spikes.h5')
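# An inhomogeneous Poisson process like the sinusoidal one above is commonly simulated by thinning (the Lewis–Shedler method): draw candidate spikes at the maximum rate and keep each with probability rate(t)/rate_max. A sketch of that algorithm (pySonata's internals may differ):

```python
import numpy as np

def poisson_thinning(rate_fn, t_start, t_stop, rate_max, rng):
    """Sample spike times in [t_start, t_stop) from a rate function (Hz,
    times in seconds) by rejection from a homogeneous process at rate_max."""
    spikes = []
    t = t_start
    while True:
        t += rng.exponential(1.0 / rate_max)  # next candidate event
        if t >= t_stop:
            return np.array(spikes)
        if rng.uniform() < rate_fn(t) / rate_max:  # accept with prob rate/rate_max
            spikes.append(t)
```

# For the sinusoidal rate above, `rate_max` would be 30 Hz; the expected spike count is the integral of the rate over the window.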
# ## Extra Options
#
# ### Working with single population
#
# More often than not there is only a single node population being recorded from, and having to specify the *population* becomes burdensome. Use the *default_population* parameter so that you can omit the population when calling the add_spike(s) methods.
#
# ```python
# st_buffer = SpikeTrains(default_population='VTA')
# st_buffer.add_spike(node_id=0, timestamp=1.0)
# ...
# ```
#
#
#
# ### Excessively large spike trains
#
# pySonata tries to be as memory-efficient as it can; however, during a very active simulation with millions of cells there may be too many spikes to hold in memory. For large simulations run on machines with limited memory, use the *cache_dir* parameter to have spikes temporarily saved to disk, preventing out-of-memory errors (but expect a slow-down).
#
# ```python
# st_buffer = SpikeTrains(cache_dir='output/tmp_spikes')
# ...
# ```
# ### Parallelized simulations
#
# If MPI is installed, you can split the creation of the spike-report file across different nodes of a multi-core cluster. Initializing and saving the spikes are the same as above; only the calls to add_spike(s) are distributed among the different cores:
#
# ```python
# from mpi4py import MPI
# rank = MPI.COMM_WORLD.Get_rank()
#
# st_buffer = SpikeTrains(default_population='hpc')
# if rank == 0:
# st_buffer.add_spikes(node_ids=0, timestamps=np.linspace(0, 1.0, 20))
# else:
# st_buffer.add_spikes(node_ids=rank, timestamps=np.random.uniform(1.0, 2.0, size=20))
#
# st_buffer.to_sonata('hpc_spikes.h5')
# ```
# ## Reading SONATA Reports
#
# Now that we have a SONATA file containing spike trains, we can read it in using the **from_sonata** method and use the **populations** property to see what node populations exist in the file.
spikes = SpikeTrains.from_sonata('output/recorded_spiketimes.h5')
print(spikes.populations)
# Get the node_ids associated with the 'VTA' population
spikes.nodes('VTA')
# find the first and last spike time
spikes.time_range('VTA')
# get number of spikes generated by VTA neurons
spikes.n_spikes('VTA')
# There are a couple of ways to fetch the spikes in the file.
# Returns all spikes as a dataframe
spikes.to_dataframe()
# get the spikes on for VTA node 0.
spikes.get_times(node_id=0, population='VTA')
# **spikes()** is a generator method, useful for analyzing very large spike files.
for time, pop, node_id in spikes.spikes(populations='PFC'):
print(time, pop, node_id)
| tutorials/pySonata/spike-reports.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# ---
# Most examples work across multiple plotting backends, this example is also available for:
#
# * [Matplotlib - scatter_economic](../matplotlib/scatter_economic.ipynb)
import pandas as pd
import holoviews as hv
from holoviews import opts
hv.extension('bokeh')
# ## Declaring data
macro_df = pd.read_csv('http://assets.holoviews.org/macro.csv', sep='\t')
key_dimensions = [('year', 'Year'), ('country', 'Country')]
value_dimensions = [('unem', 'Unemployment'), ('capmob', 'Capital Mobility'),
('gdp', 'GDP Growth'), ('trade', 'Trade')]
macro = hv.Table(macro_df, key_dimensions, value_dimensions)
# ## Plot
gdp_unem_scatter = macro.to.scatter('Year', ['GDP Growth', 'Unemployment'])
overlay = gdp_unem_scatter.overlay('Country')
overlay.options(
opts.Scatter(width=700, height=400, scaling_method='width', scaling_factor=2,
size_index=2, show_grid=True, color=hv.Cycle('Category20'), line_color='k'),
opts.NdOverlay(legend_position='left', show_frame=False)
)
| examples/gallery/demos/bokeh/scatter_economic.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # [NTDS'18] tutorial 2: build a graph from an edge list
# [ntds'18]: https://github.com/mdeff/ntds_2018
#
# [<NAME>](https://people.epfl.ch/benjamin.ricaud), [EPFL LTS2](https://lts2.epfl.ch)
#
# * Dataset: [Open Tree of Life](https://tree.opentreeoflife.org)
# * Tools: [pandas](https://pandas.pydata.org), [numpy](http://www.numpy.org), [networkx](https://networkx.github.io), [gephi](https://gephi.org/)
# ## Tools
# The below line is a [magic command](https://ipython.readthedocs.io/en/stable/interactive/magics.html) that allows plots to appear in the notebook.
# %matplotlib inline
# The first thing is always to import the packages we'll use.
import pandas as pd
import numpy as np
import networkx as nx
# Tutorials on pandas can be found at:
# * <https://pandas.pydata.org/pandas-docs/stable/10min.html>
# * <https://pandas.pydata.org/pandas-docs/stable/tutorials.html>
#
# Tutorials on numpy can be found at:
# * <https://docs.scipy.org/doc/numpy/user/quickstart.html>
# * <http://www.scipy-lectures.org/intro/numpy/index.html>
# * <http://www.scipy-lectures.org/advanced/advanced_numpy/index.html>
#
# A tutorial on networkx can be found at:
# * <https://networkx.github.io/documentation/stable/tutorial.html>
# ## Import the data
#
# We will play with an excerpt of the Tree of Life that can be found together with this notebook. This dataset is reduced to the first 1000 taxa (starting from the root node). The full version is available here: [Open Tree of Life](https://tree.opentreeoflife.org/about/taxonomy-version/ott3.0).
#
# 
# 
tree_of_life = pd.read_csv('data/taxonomy_small.tsv', sep='\t\|\t?', encoding='utf-8', engine='python')
# If you do not remember the details of a function:
# +
# pd.read_csv?
# -
# For more info on the separator, see [regex](https://docs.python.org/3.6/library/re.html).
# Now, what is the object `tree_of_life`? It is a Pandas DataFrame.
tree_of_life
# The description of the entries is given here:
# https://github.com/OpenTreeOfLife/reference-taxonomy/wiki/Interim-taxonomy-file-format
# ## Explore the table
tree_of_life.columns
# Let us drop some columns.
tree_of_life = tree_of_life.drop(columns=['sourceinfo', 'uniqname', 'flags','Unnamed: 7'])
tree_of_life.head()
# Pandas inferred the type of the values inside each column (int, float, string and string). The parent_uid column has float values because there was a missing value, converted to `NaN`.
print(tree_of_life['uid'].dtype, tree_of_life.parent_uid.dtype)
# How to access individual values.
tree_of_life.iloc[0, 2]
tree_of_life.loc[0, 'name']
# **Exercise**: Guess the output of the below line.
# +
# tree_of_life.uid[0] == tree_of_life.parent_uid[1]
# -
# Ordering the data.
tree_of_life.sort_values(by='name').head()
# ## Operation on the columns
# Unique values, useful for categories:
tree_of_life['rank'].unique()
# Selecting only one category.
tree_of_life[tree_of_life['rank'] == 'species'].head()
# How many species do we have?
len(tree_of_life[tree_of_life['rank'] == 'species'])
tree_of_life['rank'].value_counts()
# ## Building the graph
# Let us build the adjacency matrix of the graph. For that we need to reorganize the data. First we separate the nodes and their properties from the edges.
nodes = tree_of_life[['uid', 'name','rank']]
edges = tree_of_life[['uid', 'parent_uid']]
# When using an adjacency matrix, nodes are indexed by their row or column number and not by a `uid`. Let us create a new index for the nodes.
# Create a column for node index.
nodes.reset_index(level=0, inplace=True)
nodes = nodes.rename(columns={'index':'node_idx'})
nodes.head()
# Create a conversion table from uid to node index.
uid2idx = nodes[['node_idx', 'uid']]
uid2idx = uid2idx.set_index('uid')
uid2idx.head()
edges.head()
# Now we are ready to use yet another powerful function of Pandas. Those familiar with SQL will recognize it: the `join` function.
# Add a new column, matching the uid with the node_idx.
edges = edges.join(uid2idx, on='uid')
# Do the same with the parent_uid.
edges = edges.join(uid2idx, on='parent_uid', rsuffix='_parent')
# Drop the uids.
edges = edges.drop(columns=['uid','parent_uid'])
edges.head()
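# The join above is simply a lookup of each uid in the conversion table. A plain-Python equivalent on toy data (the uid values here are made up for illustration):

```python
# toy conversion table and edge list mirroring the DataFrames above
uid2idx_toy = {805080: 0, 102415: 1, 739994: 2}
edge_uids = [(102415, 805080), (739994, 805080)]  # (uid, parent_uid) pairs

# joining on uid / parent_uid == replacing each uid by its node index
edges_idx = [(uid2idx_toy[uid], uid2idx_toy.get(parent)) for uid, parent in edge_uids]
```

# The pandas version does the same thing, but vectorized and with `NaN` for uids missing from the table (such as the root's parent).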
# The above table is a list of edges connecting nodes and their parents.
# ## Building the (weighted) adjacency matrix
#
# We will use numpy to build this matrix. Note that we don't have edge weights here, so our graph is going to be unweighted.
n_nodes = len(nodes)
adjacency = np.zeros((n_nodes, n_nodes), dtype=int)
for idx, row in edges.iterrows():
if np.isnan(row.node_idx_parent):
continue
i, j = int(row.node_idx), int(row.node_idx_parent)
adjacency[i, j] = 1
adjacency[j, i] = 1
adjacency[:15, :15]
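# For an undirected graph built this way, the matrix must be symmetric, and a tree on $n$ nodes has exactly $n-1$ edges. A quick sanity check on a toy tree (the same checks apply to `adjacency`):

```python
import numpy as np

# toy tree: edges 0-1, 0-2, 2-3
toy = np.zeros((4, 4), dtype=int)
for i, j in [(0, 1), (0, 2), (2, 3)]:
    toy[i, j] = toy[j, i] = 1

assert (toy == toy.T).all()     # undirected => symmetric
assert toy.sum() // 2 == 4 - 1  # a tree on n nodes has n-1 edges
degrees = toy.sum(axis=1)       # node degrees from row sums
```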
# Congratulations, you have built the adjacency matrix!
# ## Graph visualization
#
# To conclude, let us visualize the graph. We will use the python module networkx.
# A simple command to create the graph from the adjacency matrix.
graph = nx.from_numpy_array(adjacency)
# In addition, let us add some attributes to the nodes:
node_props = nodes.to_dict()
for key in node_props:
# print(key, node_props[key])
nx.set_node_attributes(graph, node_props[key], key)
# Let us check if it is correctly recorded:
graph.nodes[1]
# Draw the graph with two different [layout algorithms](https://en.wikipedia.org/wiki/Graph_drawing#Layout_methods).
nx.draw_spectral(graph)
nx.draw_spring(graph)
# Save the graph to disk in the `gexf` format, readable by gephi and other tools that manipulate graphs. You may now explore the graph using gephi and compare the visualizations.
nx.write_gexf(graph, 'tree_of_life.gexf')
| tutorials/02a_graph_from_edge_list.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3.9.5 64-bit
# name: python3
# ---
# # Principal Component Analysis
# This notebook introduces adaptive best-subset-selection principal component analysis (SparsePCA) and uses a real-data example to show how to use it.
#
# ## PCA
# Principal component analysis (PCA) is an important method in the field of data science, which can reduce the dimension of the data and simplify our model. It solves an optimization problem of the form:
#
# $$
# \max_{v} v^{\top}\Sigma v,\qquad s.t.\quad v^{\top}v=1.
# $$
#
# where $\Sigma = X^TX / (n-1)$ and $X$ is the **centered** sample matrix. We also denote that $X$ is a $n\times p$ matrix, where each row is an observation and each column is a variables.
#
# Then, before further analysis, we can project $X$ to $v$ (thus dimensional reduction), without losing too much information.
#
# However, consider that:
#
# - The PC is a linear combination of all primary variables ($Xv$), but sometimes we may prefer to use fewer variables for clearer interpretation (and lower computational complexity);
# - It has been proved that if $p/n$ does not converge to $0$, classical PCA is not consistent, and this situation does arise in some high-dimensional data analyses.
#
# > For example, in gene analysis, a dataset may contain plenty of genes (variables) and we would like to find a subset of them that explains most of the information. Compared with using all genes, this small subset may be easier to interpret without losing much information. We can then focus on these variables in further analysis.
#
# When faced with these problems, classical PCA may not be the best choice, since it uses all variables. One alternative is `SparsePCA`, which seeks a principal component under a sparsity constraint:
#
# $$
# \max_{v} v^{\top}\Sigma v,\qquad s.t.\quad v^{\top}v=1,\ ||v||_0\leq s.
# $$
#
# where $s$ is a non-negative integer indicating how many primary variables are used in the principal component. With `SparsePCA`, we can search for the best subset of variables to form the principal component, and it remains consistent even when $p\gg n$. We make two remarks:
#
# > Clearly, if $s$ is equal to or larger than the number of primary variables, the sparsity constraint has no effect, so the problem is equivalent to classical PCA.
#
# > With fewer variables, the PC necessarily explains less variance. However, the decrease is slight if we choose a good $s$, and at this price we can interpret the PC much better. It is worthwhile.
#
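# To make the constraint concrete, here is a simple (if much less sophisticated) way to approximate this problem: the truncated power method, i.e. power iteration where, after each step, all but the $s$ largest-magnitude entries are zeroed. This is only an illustration; abess itself uses a different, more efficient algorithm.

```python
import numpy as np

def truncated_power_method(Sigma, s, n_iter=200, seed=0):
    """Approximate the s-sparse leading eigenvector of Sigma."""
    rng = np.random.default_rng(seed)
    v = rng.standard_normal(Sigma.shape[0])
    v /= np.linalg.norm(v)
    for _ in range(n_iter):
        v = Sigma @ v
        v[np.argsort(np.abs(v))[:-s]] = 0.0  # keep only the s largest entries
        v /= np.linalg.norm(v)
    return v
```

# On a covariance matrix with a planted 3-sparse leading direction, this recovers the correct support and direction.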
# In the next section, we will show how to use `SparsePCA`.
# ## Real Data Example
#
# ### Communities and Crime Dataset
#
# Here we use a real data analysis to show how to use `SparsePCA`. The data come from the [UCI
# Communities and Crime Data Set](https://archive.ics.uci.edu/ml/datasets/Communities+and+Crime), of which we pick the 99 predictive variables as our samples.
#
# Firstly, we read the data and pick the variables we are interested in.
# +
import numpy as np
from abess.decomposition import SparsePCA
X = np.genfromtxt('communities.data', delimiter = ',')
X = X[:, 5:127] # numeric predictors
X = X[:, ~np.isnan(X).any(axis = 0)] # drop variables with nan
n, p = X.shape
print(n)
print(p)
# -
# ### Model fitting
#
# To build a SparsePCA model, we need to give the target sparsity to its `support_size` argument. Our program also supports adaptively finding the best sparsity in a given range.
#
# #### Fixed sparsity
#
# If we only focus on one fixed sparsity, we can simply give a single integer. The fitted sparse principal component is then stored in `SparsePCA.coef_`:
model = SparsePCA(support_size = 20)
# Give either $X$ or $\Sigma$ to `model.fit()` and the fitting process will start. The argument `is_normal = False` here means that the program will not normalize $X$. Note that if both $X$ and $\Sigma$ are given, the program prefers to use $X$.
model.fit(X = X, is_normal = False)
# model.fit(Sigma = np.cov(X.T))
# After fitting, `model.coef_` returns the sparse principal component, and its non-zero positions correspond to the variables used.
temp = np.nonzero(model.coef_)[0]
print('sparsity: ', temp.size)
print('non-zero position: \n', temp)
print(model.coef_.T)
# #### Adaptive sparsity
#
# What's more, **abess** also supports a range of sparsities and adaptively chooses the best-explaining one. Note, however, that a higher sparsity level usually leads to higher explained variance.
#
# Now, you need to build an $s_{max} \times 1$ binary matrix, where $s_{max}$ indicates the maximum target sparsity and each row indicates one sparsity level (i.e. starting from $1$ up to $s_{max}$). For each position containing a $1$, **abess** will try to fit the model under that sparsity and finally return the best one.
# fit sparsity from 1 to 20
support_size = np.ones((20, 1))
# build model
model = SparsePCA(support_size = support_size)
model.fit(X, is_normal = False)
# results
temp = np.nonzero(model.coef_)[0]
print('chosen sparsity: ', temp.size)
print('non-zero position: \n', temp)
print(model.coef_.T)
# *Because of warm starts, the results here may not be the same as those from fitting each sparsity separately.*
#
# Then, the explained variance can be computed by:
Xc = X - X.mean(axis = 0)
Xv = Xc @ model.coef_
explained = Xv.T @ Xv # explained variance (information)
total = sum(np.diag(Xc.T @ Xc)) # total variance (information)
print( 'explained ratio: ', explained / total )
# ### More on the results
#
# We can give different target sparsities (changing `s_begin` and `s_end`) to get different sparse loadings. Interestingly, we can seek a smaller sparsity that still explains most of the variance.
#
# In this example, if we try sparsities from $1$ to $p-1$ and calculate the ratio of explained variance:
# +
num = 30
i = 0
sparsity = np.linspace(1, p - 1, num, dtype='int')
explain = np.zeros(num)
Xc = X - X.mean(axis = 0)
for s in sparsity:
model = SparsePCA(
support_size = np.ones((s, 1)),
exchange_num = int(s),
max_iter = 50
)
model.fit(X, is_normal = False)
Xv = Xc @ model.coef_
explain[i] = Xv.T @ Xv
i += 1
print('80%+ : ', sparsity[explain > 0.8 * explain[num-1]])
print('90%+ : ', sparsity[explain > 0.9 * explain[num-1]])
# -
# If we denote the explained ratio from all 99 variables as 100%, the curve indicates that as few as 31 variables reach 80% of it (blue dashed line) and 41 variables reach 90% (red dashed line).
# +
import matplotlib.pyplot as plt
plt.plot(sparsity, explain)
plt.xlabel('Sparsity')
plt.ylabel('Explained variance')
ind = np.where(explain > 0.8 * explain[num-1])[0][0]
plt.plot([0, sparsity[ind]], [explain[ind], explain[ind]], 'b--')
plt.plot([sparsity[ind], sparsity[ind]], [0, explain[ind]], 'b--')
plt.text(sparsity[ind], 0, str(sparsity[ind]))
plt.text(0, explain[ind], '80%')
ind = np.where(explain > 0.9 * explain[num-1])[0][0]
plt.plot([0, sparsity[ind]], [explain[ind], explain[ind]], 'r--')
plt.plot([sparsity[ind], sparsity[ind]], [0, explain[ind]], 'r--')
plt.text(sparsity[ind], 0, str(sparsity[ind]))
plt.text(0, explain[ind], '90%')
plt.plot([0, p], [explain[num-1], explain[num-1]], color='gray', linestyle='--')
plt.text(0, explain[num-1],'100%')
plt.show()
# -
# This result shows that using fewer than half of the 99 variables comes close to the full-variable result. For example, if we choose sparsity 31, the variables used are:
model = SparsePCA(support_size = 31)
model.fit(X, is_normal = False)
temp = np.nonzero(model.coef_)[0]
print('non-zero position: \n', temp)
# ## Extension: Group PCA
#
# ### Group PCA
#
# Furthermore, in some situations some variables may need to be considered together; that is, they should be "used" or "unused" for the PC at the same time, which we call "group information". The optimization problem becomes:
#
# $$
# \max_{v} v^{\top}\Sigma v,\qquad s.t.\quad v^{\top}v=1,\ \sum_{g=1}^G I(||v_g||\neq 0)\leq s.
# $$
#
# where we suppose there are $G$ groups, the $g$-th of which corresponds to $v_g$, with $v = [v_1^{\top},v_2^{\top},\cdots,v_G^{\top}]^{\top}$. We are then interested in finding the $s$ (or fewer) most important groups.
#
# > Group structure is extraordinarily important in real data analysis. Taking gene analysis as an example again, several sites may be related to one character, and it is meaningless to consider each of them alone.
#
# `SparsePCA` can also deal with group information. Here we assume that variables in the same group are adjacent to each other (if not, the data should be sorted first).
#
# ### Simulated Data Example
#
# Suppose that the data above have group information like:
#
# - Group 0: {the 1st, 2nd, ..., 6th variable};
# - Group 1: {the 7th, 8th, ..., 12th variable};
# - ...
# - Group 15: {the 91st, 92nd, ..., 96th variable};
# - Group 16: {the 97th, 98th, 99th variables}.
#
# Denote different groups as different number:
# +
g_info = np.arange(17)
g_info = g_info.repeat(6)
g_info = g_info[0:99]
print(g_info)
# -
# And fit a group sparse PCA model with the additional argument `group=g_info`:
model = SparsePCA(support_size = np.ones((6, 1)))
model.fit(X, group = g_info, is_normal = False)
# The result comes to:
# +
print(model.coef_.T)
temp = np.nonzero(model.coef_)[0]
temp = np.unique(g_info[temp])
print('non-zero group: \n', temp)
print('chosen sparsity: ', temp.size)
# -
# Hence we can focus on the variables in Groups 0, 8, 9, 10, 11, and 15.
# ## Extension: Multiple principal components
#
# ### Multiple principal components
#
# In some cases, we may seek more than one principal component under sparsity. We can iteratively solve for the leading principal component and then map the covariance matrix onto its orthogonal space:
#
# $$
# \Sigma' = (I-vv^{\top})\Sigma(I-vv^{\top})
# $$
#
# where $I$ is the identity matrix, $\Sigma$ is the current covariance matrix, and $v$ is its (sparse) leading principal component. We map $\Sigma$ into $\Sigma'$, which represents the orthogonal space of $v$, and then solve for the sparse principal component again.
#
# By this iteration process, we can acquire multiple principal components and they are sorted from the largest to the smallest.
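# A numpy sketch of this deflation step:

```python
import numpy as np

def deflate(Sigma, v):
    """Project Sigma onto the orthogonal complement of the unit vector v."""
    P = np.eye(len(v)) - np.outer(v, v)
    return P @ Sigma @ P
```

# After deflation, $v$ carries no remaining variance, so the next leading sparse eigenvector found is orthogonal to it.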
# In our program, there is an additional argument `number`, which indicates how many principal components we need; it defaults to 1.
# Now `support_size` has shape $s_{max}\times \text{number}$, and each column corresponds to one principal component.
model = SparsePCA(support_size = np.ones((31, 3)))
model.fit(X, is_normal = False, number = 3)
model.coef_.shape
# Here, each column of `model.coef_` is a sparse PC (ordered from the largest to the smallest); for example, the second one is:
model.coef_[:,1]
# If we want to compute the explained variance of them, it is also quite easy:
Xv = Xc.dot(model.coef_)
explained = np.sum(np.diag(Xv.T.dot(Xv)))
print( 'explained ratio: ', explained / total )
# ## R tutorial
#
# For R tutorial, please view [https://abess-team.github.io/abess/articles/v08-sPCA.html](https://abess-team.github.io/abess/articles/v08-sPCA.html).
| docs/Tutorial/PCA.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# <script async src="https://www.googletagmanager.com/gtag/js?id=UA-59152712-8"></script>
# <script>
# window.dataLayer = window.dataLayer || [];
# function gtag(){dataLayer.push(arguments);}
# gtag('js', new Date());
#
# gtag('config', 'UA-59152712-8');
# </script>
#
# # [Shifted Kerr-Schild Solution](https://arxiv.org/pdf/1704.00599.pdf) Initial Data
#
# ## Authors: <NAME> & <NAME>
# ### Formatting improvements courtesy <NAME>
#
# ## This module sets up Shifted Kerr-Schild initial data [Etienne et al., 2017 GiRaFFE](https://arxiv.org/pdf/1704.00599.pdf).
#
# **Notebook Status:** <font color='green'><b> Validated </b></font>
#
# **Validation Notes:** This module has been validated to exhibit convergence to zero of the Hamiltonian and momentum constraint violations at the expected order to the exact solution (see plots at bottom of [the exact initial data validation start-to-finish tutorial notebook](Tutorial-Start_to_Finish-BSSNCurvilinear-Setting_up_Exact_Initial_Data.ipynb)).
#
# ### NRPy+ Source Code for this module: [BSSN/ShiftedKerrSchild.py](../edit/BSSN/ShiftedKerrSchild.py)
#
#
#
#
# ## Introduction:
# Shifted Kerr-Schild coordinates are similar to the trumpet spacetime, in that $r=0$ maps to some finite radius surface in Kerr-Schild coordinates. The radial shift $r_0$ both reduces the black hole's coordinate size and causes the very strongly-curved spacetime fields at $r<r_{0}$ to vanish deep inside the horizon, which aids in numerical stability, e.g., when evolving hydrodynamic, MHD, and FFE fields inside the horizon.
# <a id='toc'></a>
#
# # Table of Contents:
# $$\label{toc}$$
#
# 1. [Step 1](#initialize_nrpy): Set up the needed NRPy+ infrastructure and declare core gridfunctions
# 1. [Step 2](#kerr_schild_lapse): The Kerr-Schild Lapse, Shift, and 3-Metric
# 1. [Step 2.a](#define_rho): Define $\rho^{2}$, $\alpha$, $\beta^{r}$, $\beta^{\theta}$, $\beta^{\phi}$, $\gamma_{r\theta}$, $\gamma_{\theta\phi}$
# 1. [Step 2.b](#nonzero_gamma): Define and construct nonzero components of $\gamma_{ij}$
# 1. [Step 3](#extrinsic_curvature): The extrinsic curvature $K_{ij}$
# 1. [Step 3.a](#abc): Define useful quantities $A$, $B$, $D$
# 1. [Step 3.b](#nonzero_k): Define and construct nonzero components of $K_{ij}$
# 1. [Step 4](#code_validation): Code Validation against `BSSN.ShiftedKerrSchild` NRPy+ module
# 1. [Step 5](#latex_pdf_output): Output this notebook to $\LaTeX$-formatted PDF file
# <a id='initialize_nrpy'></a>
#
# # Step 1: Set up the needed NRPy+ infrastructure and declare core gridfunctions \[Back to [top](#toc)\]
# $$\label{initialize_nrpy}$$
#
# First, we will import the core modules from Python/NRPy+ and specify the main gridfunctions we will need.
#
# **Input for initial data**:
#
# * The black hole mass $M$.
# * The black hole spin parameter $a$
# * The radial offset $r_0$
#
# +
# Step P0: Load needed modules
import sympy as sp # SymPy: The Python computer algebra package upon which NRPy+ depends
import NRPy_param_funcs as par # NRPy+: Parameter interface
import indexedexp as ixp # NRPy+: Symbolic indexed expression (e.g., tensors, vectors, etc.) support
import BSSN.ADM_Exact_Spherical_or_Cartesian_to_BSSNCurvilinear as AtoB
# All gridfunctions will be written in terms of spherical coordinates (r, th, ph):
r,th,ph = sp.symbols('r th ph', real=True)
thismodule = "ShiftedKerrSchild"
DIM = 3
par.set_parval_from_str("grid::DIM",DIM)
# Input parameters:
M, a, r0 = par.Cparameters("REAL", thismodule,
["M", "a", "r0"],
[1.0, 0.9, 1.0])
# Auxiliary variables:
rho2 = sp.symbols('rho2', real=True)
# -
# <a id='kerr_schild_lapse'></a>
#
# # Step 2: The Kerr-Schild Lapse, Shift, and 3-Metric \[Back to [top](#toc)\]
# $$\label{kerr_schild_lapse}$$
# <a id='define_rho'></a>
#
# ## Step 2.a: Define $\rho^{2}$, $\alpha$, $\beta^{r_{\rm KS}}$, $\beta^{\theta}$, $\beta^{\phi}$, $\gamma_{r_{\rm KS}\theta}$, $\gamma_{\theta\phi}$ \[Back to [top](#toc)\]
# $$\label{define_rho}$$
#
# The relationship between the Kerr-Schild radius $r_{\rm KS}$ and the radial coordinate used on our numerical grid $r$, is given by
#
# $$
# r_{\rm KS} = r + r_0,
# $$
# where $r_0\ge 0$ is the radial shift.
#
# Notice that the radial shift has no impact on Jacobians since $\frac{\partial{r_{\rm KS}}}{\partial{r}}=1$. $r_0$ must be set to a value less than the horizon radius $R$, but not so close to $R$ that finite-difference stencils from outside the horizon cross $r=0$. Thus $r_0$ must be set with consideration of the numerical grid structure in mind, as nonzero values of $r_0$ will shrink the coordinate size of the black hole by exactly $r_0$.
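# The Jacobian claim above can be checked symbolically. This is a standalone SymPy snippet (its symbols are defined locally so as not to clash with the notebook's own `r` and `r0`) confirming that $\frac{\partial r_{\rm KS}}{\partial r}=1$:

```python
import sympy as sp

# r_KS = r + r0, so the constant radial shift drops out of the derivative
r_, r0_ = sp.symbols('r r0', real=True)
rKS_ = r_ + r0_
print(sp.diff(rKS_, r_))  # 1
```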
#
# All of these equations are as defined in the appendix of the original GiRaFFE paper ([Etienne et al., 2017 GiRaFFE](https://arxiv.org/pdf/1704.00599.pdf)).
# <br>
# First, we define $\rho^{2}$ as
#
# <br>
#
# $$ \rho^2 = r_{\rm KS}^2 + a^{2}\cos^{2}(\theta) $$
#
# <br>
#
# And we then define the Kerr-Schild lapse $\alpha$ from equation (A.1)
#
# <br>
#
# $$ \alpha = \frac{1}{\sqrt{1 + \frac{2Mr_{\rm KS}}{\rho^2}}} $$
#
# <br>
#
# And the shift $\beta$ from equations (A.2) & (A.3)
#
# <br>
#
# $$ \beta^{r_{\rm KS}} = \alpha^2\frac{2Mr_{\rm KS}}{\rho^2} $$
#
# <br>
#
# $$ \beta^{\theta} = \beta^{\phi} = \gamma_{r_{\rm KS}\theta} = \gamma_{\theta\phi} = 0 $$
# +
# Step 1: Define rho^2, alpha, beta^(r_{KS}), beta^(theta), beta^(phi), gamma_{r_{KS}theta}, gamma_{theta\phi}
# r_{KS} = r + r0
rKS = r+r0
# rho^2 = rKS^2 + a^2*cos^2(theta)
rho2 = rKS*rKS + a*a*sp.cos(th)**2
# alpha = 1/sqrt{1 + 2M*rKS/rho^2}
alphaSph = 1/(sp.sqrt(1 + 2*M*rKS/rho2))
# Initialize the shift vector, \beta^i, to zero.
betaSphU = ixp.zerorank1()
# beta^r = alpha^2*2Mr/rho^2
betaSphU[0] = alphaSph*alphaSph*2*M*rKS/rho2
# Time derivative of shift vector beta^i, B^i, is zero.
BSphU = ixp.zerorank1()
# -
# <a id='nonzero_gamma'></a>
#
# ## Step 2.b: Define and construct nonzero components $\gamma_{r_{\rm KS}r_{\rm KS}}$, $\gamma_{r_{\rm KS}\phi}$, $\gamma_{\theta\theta}$, $\gamma_{\phi\phi}$ \[Back to [top](#toc)\]
# $$\label{nonzero_gamma}$$
#
# From equations (A.4)-(A.7) of [Etienne et al., 2017](https://arxiv.org/pdf/1704.00599.pdf) we define the nonzero components of the 3-metric:
#
# <br>
#
# $$ \gamma_{r_{\rm KS}r_{\rm KS}} = 1 + \frac{2Mr_{\rm KS}}{\rho^2} $$
#
# <br>
#
# $$ \gamma_{r_{\rm KS}\phi} = -a\gamma_{r_{\rm KS}r_{\rm KS}}\sin^2(\theta) $$
#
# <br>
#
# $$ \gamma_{\theta\theta} = \rho^2 $$
#
# <br>
#
# $$ \gamma_{\phi\phi} = \left(r_{\rm KS}^2 + a^2 + \frac{2Mr_{\rm KS}}{\rho^2}a^{2}\sin^{2}(\theta)\right)\sin^{2}(\theta) $$
# +
# Step 2: Define and construct nonzero components gamma_{r_{KS}r_{KS}}$, gamma_{r_{KS}phi},
# gamma_{thetatheta}, gamma_{phiphi}
# Initialize \gamma_{ij} to zero.
gammaSphDD = ixp.zerorank2()
# gammaDD{rKS rKS} = 1 + 2M*rKS/rho^2
gammaSphDD[0][0] = 1 + 2*M*rKS/rho2
# gammaDD{rKS phi} = -a*gammaDD{rKS rKS}*sin^2(theta)
gammaSphDD[0][2] = gammaSphDD[2][0] = -a*gammaSphDD[0][0]*sp.sin(th)**2
# gammaDD{theta theta} = rho^2
gammaSphDD[1][1] = rho2
# gammaDD{phi phi} = (rKS^2 + a^2 + 2Mr/rho^2*a^2*sin^2(theta))*sin^2(theta)
gammaSphDD[2][2] = (rKS*rKS + a*a + 2*M*rKS*a*a*sp.sin(th)**2/rho2)*sp.sin(th)**2
# -
# <a id='extrinsic_curvature'></a>
#
# # Step 3: The extrinsic curvature $K_{ij}$ \[Back to [top](#toc)\]
# $$\label{extrinsic_curvature}$$
# <a id='abc'></a>
#
# ## Step 3.a: Define useful quantities $A$, $B$, $D$ \[Back to [top](#toc)\]
# $$\label{abc}$$
#
# From equations (A.8)-(A.10) of [Etienne et al., 2017](https://arxiv.org/pdf/1704.00599.pdf) we define the following expressions which will help simplify the nonzero extrinsic curvature components:
#
# <br>
#
# $$ A = \left(a^{2}\cos(2\theta) + a^{2} + 2r_{\rm KS}^{2}\right) $$
#
# <br>
#
# $$ B = A + 4Mr_{\rm KS} $$
#
# <br>
#
# $$ D = \sqrt{\frac{2Mr_{\rm KS}}{a^{2}\cos^{2}(\theta) + r_{\rm KS}^2} + 1} $$
#
# +
# Step 3: Define useful quantities A, B, D
# A = (a^2*cos(2theta) + a^2 + 2rKS^2)
A = (a*a*sp.cos(2*th) + a*a + 2*rKS*rKS)
# B = A + 4M*rKS
B = A + 4*M*rKS
# D = \sqrt(2M*rKS/(a^2cos^2(theta) + rKS^2) + 1)
D = sp.sqrt(2*M*rKS/(a*a*sp.cos(th)**2 + rKS*rKS) + 1)
# -
# <a id='nonzero_k'></a>
#
# ## Step 3.b: Define and construct nonzero components of $K_{ij}$ \[Back to [top](#toc)\]
# $$\label{nonzero_k}$$
#
# We will now express the extrinsic curvature $K_{ij}$ in spherical polar coordinates.
#
# From equations (A.11) - (A.13) of [Etienne et al., 2017](https://arxiv.org/pdf/1704.00599.pdf) we define the following:
#
# $$ K_{r_{\rm KS}r_{\rm KS}} = \frac{D(A + 2Mr_{\rm KS})}{A^{2}B}\left[4M\left(a^{2}\cos(2\theta) + a^{2} - 2r_{\rm KS}^{2}\right)\right] $$
#
# <br>
#
# $$ K_{r_{\rm KS}\theta} = \frac{D}{AB}\left[8a^{2}Mr_{\rm KS}\sin(\theta)\cos(\theta)\right] $$
#
# <br>
#
# $$ K_{r_{\rm KS}\phi} = \frac{D}{A^2}\left[-2aM\sin^{2}(\theta)\left(a^{2}\cos(2\theta) + a^{2} - 2r_{\rm KS}^{2}\right)\right] $$
# +
# Step 4: Define the extrinsic curvature in spherical polar coordinates
# Establish the 3x3 zero-matrix
KSphDD = ixp.zerorank2()
# *** Fill in the nonzero components ***
# *** This will create an upper-triangular matrix ***
# K_{r r} = D(A+2Mr)/(A^2*B)[4M(a^2*cos(2theta) + a^2 - 2r^2)]
KSphDD[0][0] = D*(A+2*M*rKS)/(A*A*B)*(4*M*(a*a*sp.cos(2*th)+a*a-2*rKS*rKS))
# K_{r theta} = D/(AB)[8a^2*Mr*sin(theta)cos(theta)]
KSphDD[0][1] = KSphDD[1][0] = D/(A*B)*(8*a*a*M*rKS*sp.sin(th)*sp.cos(th))
# K_{r phi} = D/A^2[-2aMsin^2(theta)(a^2cos(2theta)+a^2-2r^2)]
KSphDD[0][2] = KSphDD[2][0] = D/(A*A)*(-2*a*M*sp.sin(th)**2*(a*a*sp.cos(2*th)+a*a-2*rKS*rKS))
# -
# And from equations (A.14) - (A.17) of [Etienne et al., 2017](https://arxiv.org/pdf/1704.00599.pdf) we define the following expressions to complete the upper-triangular matrix $K_{ij}$:
#
# $$ K_{\theta\theta} = \frac{D}{B}\left[4Mr_{\rm KS}^{2}\right] $$
#
# <br>
#
# $$ K_{\theta\phi} = \frac{D}{AB}\left[-8a^{3}Mr_{\rm KS}\sin^{3}(\theta)\cos(\theta)\right] $$
#
# <br>
#
# $$ K_{\phi\phi} = \frac{D}{A^{2}B}\left[2Mr_{\rm KS}\sin^{2}(\theta)\left(a^{4}(r_{\rm KS}-M)\cos(4\theta) + a^{4}(M + 3r_{\rm KS}) + 4a^{2}r_{\rm KS}^{2}(2r_{\rm KS} - M) + 4a^{2}r_{\rm KS}\cos(2\theta)\left(a^{2} + r_{\rm KS}(M + 2r_{\rm KS})\right) + 8r_{\rm KS}^{5}\right)\right] $$
# +
# K_{theta theta} = D/B[4Mr^2]
KSphDD[1][1] = D/B*(4*M*rKS*rKS)
# K_{theta phi} = D/(AB)*(-8*a^3*Mr*sin^3(theta)cos(theta))
KSphDD[1][2] = KSphDD[2][1] = D/(A*B)*(-8*a**3*M*rKS*sp.sin(th)**3*sp.cos(th))
# K_{phi phi} = D/(A^2*B)[2Mr*sin^2(theta)(a^4(r-M)cos(4theta) + a^4(M+3r)
#              + 4a^2r^2(2r-M) + 4a^2r*cos(2theta)(a^2+r(M+2r)) + 8r^5)]
KSphDD[2][2] = D/(A*A*B)*(2*M*rKS*sp.sin(th)**2*(a**4*(rKS-M)*sp.cos(4*th)\
+ a**4*(M+3*rKS)+4*a*a*rKS*rKS*(2*rKS-M)\
+ 4*a*a*rKS*sp.cos(2*th)*(a*a + rKS*(M + 2*rKS)) + 8*rKS**5))
# -
# <a id='code_validation'></a>
#
# # Step 4: Code Validation against `BSSN.ShiftedKerrSchild` NRPy+ module \[Back to [top](#toc)\]
# $$\label{code_validation}$$
#
# Here, as a code validation check, we verify agreement in the SymPy expressions for Shifted Kerr-Schild initial data between
#
# 1. this tutorial and
# 2. the NRPy+ [BSSN.ShiftedKerrSchild](../edit/BSSN/ShiftedKerrSchild.py) module.
# +
# First we import reference_metric, which is
# needed since BSSN.ShiftedKerrSchild calls
# BSSN.ADM_Exact_Spherical_or_Cartesian_to_BSSNCurvilinear, which
# depends on reference_metric:
import reference_metric as rfm # NRPy+: Reference metric support
rfm.reference_metric()
import BSSN.ShiftedKerrSchild as sks
sks.ShiftedKerrSchild()
# It is SAFE to ignore the warning(s) from re-initializing parameters.
print("Consistency check between ShiftedKerrSchild tutorial and NRPy+ BSSN.ShiftedKerrSchild module. ALL SHOULD BE ZERO.")
print("alphaSph - sks.alphaSph = "+str(sp.simplify(alphaSph - sks.alphaSph)))
for i in range(DIM):
print("betaSphU["+str(i)+"] - sks.betaSphU["+str(i)+"] = "+\
str(sp.simplify(betaSphU[i] - sks.betaSphU[i])))
    print("BSphU["+str(i)+"] - sks.BSphU["+str(i)+"] = "+str(sp.simplify(BSphU[i] - sks.BSphU[i])))
for j in range(DIM):
print("gammaSphDD["+str(i)+"]["+str(j)+"] - sks.gammaSphDD["+str(i)+"]["+str(j)+"] = "+\
str(sp.simplify(gammaSphDD[i][j] - sks.gammaSphDD[i][j])))
print("KSphDD["+str(i)+"]["+str(j)+"] - sks.KSphDD["+str(i)+"]["+str(j)+"] = "+\
str(sp.simplify(KSphDD[i][j] - sks.KSphDD[i][j])))
# -
# <a id='latex_pdf_output'></a>
#
# # Step 5: Output this notebook to $\LaTeX$-formatted PDF \[Back to [top](#toc)\]
# $$\label{latex_pdf_output}$$
#
# The following code cell converts this Jupyter notebook into a proper, clickable $\LaTeX$-formatted PDF file. After the cell is successfully run, the generated PDF may be found in the root NRPy+ tutorial directory, with filename
# [Tutorial-ADM_Initial_Data-ShiftedKerrSchild.pdf](Tutorial-ADM_Initial_Data-ShiftedKerrSchild.pdf) (Note that clicking on this link may not work; you may need to open the PDF file through another means.)
import cmdline_helper as cmd # NRPy+: Multi-platform Python command-line interface
cmd.output_Jupyter_notebook_to_LaTeXed_PDF("Tutorial-ADM_Initial_Data-ShiftedKerrSchild")
| Tutorial-ADM_Initial_Data-ShiftedKerrSchild.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# +
import json
import requests
import pandas as pd
key_dict = {"key": "key_code_obtained"}
with open("credentials_juan.json", "w") as output:
json.dump(key_dict, output)
key_json = json.load(open("credentials_juan.json"))
# key_json["key"]
gmaps_key = key_json["key"]
# gmaps_key
url = "https://maps.googleapis.com/maps/api/place/textsearch/json?"
# text string on which to search
head_names = ["names", "address", "lat", "long", "types", "codes"]
df1 = pd.read_csv("notebooksMigros_python_mined.csv", names=head_names)
urls_list = []
for i in range(0, 10):  # full run: range(len(df1["names"]))
    place_id = "{0}".format(df1["codes"][i])
    # actual api key
    api_key = gmaps_key
    # build the Place Details request URL for this place id
    url2 = ("https://maps.googleapis.com/maps/api/place/details/json?placeid=" + place_id + "&key=" + api_key)
print(url2)
# get method of requests module, return response object
req = requests.get(url2)
# json method of response object: json format data -> python format data
places_json = req.json()
my_result = places_json["result"]
# now result contains list of nested dictionaries
urls_list.append(my_result["url"])
# +
urls_df = pd.DataFrame({"urls": urls_list})
df12 = df1.iloc[0:10,:]
df12
#df12.join(urls_df)
urls_df.to_csv("migros_urls.csv", index=False)
# -
| Workbooks/Data Scraping/Selenium/3_code_to_urls.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Present Value of Liabilities and Funding Ratio
#
# In this lab session, we'll examine how to discount future liabilities to compute the present value of future liabilities, and measure the funding ratio.
#
# The funding ratio is the ratio of the current value of assets to the present value of the liabilities.
#
# In order to compute the present value, we need to discount the amount of the liability based on the relevant interest rate derived from the yield curve.
#
# For simplicity, we'll assume that the yield curve is flat, and so the interest rate is the same for all horizons.
#
# The present value of a set of liabilities $L$ where each liability $L_i$ is due at time $t_i$ is given by:
#
# $$ PV(L) = \sum_{i=1}^{k} B(t_i) L_i$$
#
# where $B(t_i)$ is the price of a pure discount bond that pays 1 dollar at time $t_i$
#
# If we assume the yield curve is flat and the annual rate of interest is $r$ then $B(t)$ is given by
#
# $$B(t) = \frac{1}{(1+r)^t}$$
#
#
# +
import numpy as np
import pandas as pd
import edhec_risk_kit_124 as erk
# %load_ext autoreload
# %autoreload 2
# -
def discount(t, r):
"""
Compute the price of a pure discount bond that pays $1 at time t where t is in years and r is the annual interest rate
"""
return (1+r)**(-t)
b = discount(10, .03)
b
# You can verify that if you buy that bond today and hold it for 10 years at an interest rate of 3 percent per year, you will get paid \$1
b*(1.03**10)
def pv(l, r):
"""
Compute the present value of a list of liabilities given by the time (as an index) and amounts
"""
dates = l.index
discounts = discount(dates, r)
return (discounts*l).sum()
# Assume that you have 4 liabilities, of 1, 1.5, 2, and 2.5M dollars. Assume the first of these are 3 years away and the subsequent ones are spaced out 6 months apart, i.e. at time 3, 3.5, 4 and 4.5 years from now. Let's compute the present value of the liabilities based on an interest rate of 3% per year.
#
# In an individual investment context, you can think of liabilities as goals, such as saving for life events: a down payment on a house, college expenses for your children, or retirement income. In each of these cases, we have a requirement for a cash flow at some point in the future ... anytime you have a future cash requirement, you can think of it as a liability.
liabilities = pd.Series(data=[1, 1.5, 2, 2.5], index=[3, 3.5, 4, 4.5])
pv(liabilities, 0.03)
# We can now compute the funding ratio, based on current asset values:
def funding_ratio(assets, liabilities, r):
"""
Computes the funding ratio of a series of liabilities, based on an interest rate and current value of assets
"""
return assets/pv(liabilities, r)
funding_ratio(5, liabilities, 0.03)
# Now assume interest rates go down to 2% ... let's recompute the funding ratio:
funding_ratio(5, liabilities, 0.02)
# We can examine the effect of interest rates on funding ratio:
#
# Recall that our liabilities are:
liabilities
# +
import ipywidgets as widgets
from IPython.display import display
# %matplotlib inline
def show_funding_ratio(assets, r):
fr = funding_ratio(assets, liabilities, r)
print(f'{fr*100:.2f}%')
controls = widgets.interactive(show_funding_ratio,
assets=widgets.IntSlider(min=1, max=10, step=1, value=5),
r=(0, .20, .01)
)
display(controls)
# -
# As the illustration above shows, even if your assets do not go down in value, cash can be a risky asset if you think about the funding ratio rather than the asset value. Even though cash is a "safe asset" in the sense that the asset value does not go down, cash can be a very risky asset because the value of the liabilities goes up when interest rates go down. Therefore, if you think about your savings in terms of funding ratio (i.e. how much money do you have compared to what you need) then cash is a risky asset and can result in a decline in your funding ratio.
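# This effect can also be shown numerically with a fixed asset value. The following minimal standalone sketch (it re-defines small versions of the helpers above, with underscored names, for portability) shows the present value of the liabilities rising as rates fall, so the funding ratio drops even though assets are unchanged:

```python
import pandas as pd

def discount_(t, r):
    """Price of a pure discount bond paying $1 at time t (years)."""
    return (1 + r) ** (-t)

def pv_(l, r):
    """Present value of liabilities given as a Series indexed by time."""
    return (discount_(l.index, r) * l).sum()

liabs = pd.Series(data=[1, 1.5, 2, 2.5], index=[3, 3.5, 4, 4.5])
assets = 5
for r in [0.05, 0.03, 0.01]:
    print(f"r={r:.0%}: funding ratio = {assets / pv_(liabs, r):.2%}")
```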
#
# We'll investigate this and solutions to this in the next session, but for now, add the `discount`, `pv`, and `funding_ratio` functions to the `edhec_risk_kit.py` file.
#
# ```python
# def discount(t, r):
# """
# Compute the price of a pure discount bond that pays a dollar at time t where t is in years and r is the annual interest rate
# """
# return (1+r)**(-t)
#
# def pv(l, r):
# """
# Compute the present value of a list of liabilities given by the time (as an index) and amounts
# """
# dates = l.index
# discounts = discount(dates, r)
# return (discounts*l).sum()
#
# def funding_ratio(assets, liabilities, r):
# """
# Computes the funding ratio of a series of liabilities, based on an interest rate and current value of assets
# """
# return assets/pv(liabilities, r)
# ```
#
| Investment Management/Course1/.ipynb_checkpoints/lab_124-checkpoint.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import pandas as pd
from numpy import set_printoptions
from sklearn.preprocessing import MinMaxScaler
names = ['preg', 'plas', 'pres', 'skin', 'test', 'mass', 'pedi', 'age', 'class']
data = pd.read_csv('datasets/pima-indians-diabetes.csv', names=names)
array = data.values
X = array[:,0:8]
Y = array[:,8]
#rescaling
scaler = MinMaxScaler(feature_range=(0,1))
rescaledX = scaler.fit_transform(X)
set_printoptions(precision=3)
print(rescaledX[0:5, :])
from sklearn.preprocessing import StandardScaler
#Standardization
scaler = StandardScaler().fit(X)
rescaledX = scaler.transform(X)
set_printoptions(precision=3)
print(rescaledX[0:5, :])
from sklearn.preprocessing import Normalizer
#Normalizing Data
scaler = Normalizer().fit(X)
normalizedX = scaler.transform(X)
set_printoptions(precision=3)
print(normalizedX[0:5, :])
from sklearn.preprocessing import Binarizer
#Binarize Data
binarizer = Binarizer(threshold=0.0).fit(X)
binaryX = binarizer.transform(X)
set_printoptions(precision=3)
print(binaryX[0:5, :])
| .ipynb_checkpoints/03. Prepare Data from Machine Learning -checkpoint.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import sys
sys.path.append('..')
import os
import torch
import numpy as np
import matplotlib.pyplot as plt
from sympy import simplify_logic
import time
from sklearn.metrics import accuracy_score
import pandas as pd
from sklearn.tree import DecisionTreeClassifier, plot_tree
from sklearn.tree import _tree, export_text
import lens
from lens.utils.base import validate_network, set_seed, tree_to_formula
from lens.utils.layer import prune_logic_layers
from lens import logic
results_dir = 'results_ll/xor'
if not os.path.isdir(results_dir):
os.makedirs(results_dir)
device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')
concepts = ['c1', 'c2']
n_rep = 10
tot_epochs = 2001
# +
# XOR problem
x_train = torch.tensor([
[0, 0],
[0, 1],
[1, 0],
[1, 1],
], dtype=torch.float)
y_train = torch.tensor([0, 1, 1, 0], dtype=torch.float)
x_test = x_train
y_test = y_train
# -
def train_nn(x_train, y_train, seed, device, verbose=False):
set_seed(seed)
x_train = x_train.to(device)
y_train = y_train.to(device)
layers = [
lens.nn.XLogic(2, 5, activation='identity', first=True),
torch.nn.LeakyReLU(),
torch.nn.Linear(5, 5),
torch.nn.LeakyReLU(),
torch.nn.Linear(5, 1),
lens.nn.XLogic(1, 1, activation='sigmoid', top=True),
]
model = torch.nn.Sequential(*layers).to(device)
optimizer = torch.optim.AdamW(model.parameters(), lr=0.01)
loss_form = torch.nn.BCELoss()
model.train()
need_pruning = True
for epoch in range(tot_epochs):
optimizer.zero_grad()
y_pred = model(x_train).squeeze()
loss = loss_form(y_pred, y_train)
loss.backward()
optimizer.step()
# compute accuracy
if epoch % 100 == 0 and verbose:
y_pred_d = y_pred > 0.5
accuracy = y_pred_d.eq(y_train).sum().item() / y_train.size(0)
print(f'Epoch {epoch}: train accuracy: {accuracy:.4f}')
return model
def c_to_y(method, verbose=False):
methods = []
splits = []
explanations = []
model_accuracies = []
explanation_accuracies = []
explanation_fidelities = []
explanation_complexities = []
elapsed_times = []
for seed in range(n_rep):
        explanation, explanation_inv = '', ''
        explanation_accuracy, explanation_accuracy_inv = 0, 0
        # default values so the results table can be built even when no
        # explanation is extracted (e.g. for the 'tree' method)
        explanation_fidelity, explanation_complexity = 0, 0
print(f'Seed [{seed+1}/{n_rep}]')
if method == 'tree':
classifier = DecisionTreeClassifier(random_state=seed)
classifier.fit(x_train.detach().numpy(), y_train.detach().numpy())
y_preds = classifier.predict(x_test.detach().numpy())
model_accuracy = accuracy_score(y_test.detach().numpy(), y_preds)
target_class = 1
start = time.time()
explanation = tree_to_formula(classifier, concepts, target_class)
elapsed_time = time.time() - start
target_class_inv = 0
start = time.time()
explanation_inv = tree_to_formula(classifier, concepts, target_class_inv)
elapsed_time = time.time() - start
else:
model = train_nn(x_train, y_train, seed, device, verbose=False)
y_preds = model(x_test.to(device)).cpu().detach().numpy() > 0.5
model_accuracy = accuracy_score(y_test.cpu().detach().numpy(), y_preds)
# positive class
start = time.time()
class_explanation, class_explanations = lens.logic.explain_class(model.cpu(), x_train.cpu(), y_train.cpu(),
binary=True, target_class=1,
topk_explanations=10)
elapsed_time = time.time() - start
if class_explanation:
explanation = logic.base.replace_names(class_explanation, concepts)
explanation_accuracy, y_formula = logic.base.test_explanation(class_explanation,
target_class=1,
x=x_train, y=y_train,
metric=accuracy_score)
explanation_fidelity = lens.logic.fidelity(y_formula, y_preds)
explanation_complexity = lens.logic.complexity(class_explanation)
if verbose:
print(f'\t Model\'s accuracy: {model_accuracy:.4f}')
print(f'\t Class 1 - Global explanation: "{explanation}" - Accuracy: {explanation_accuracy:.4f}')
print(f'\t Elapsed time {elapsed_time}')
methods.append(method)
splits.append(seed)
explanations.append(explanation)
model_accuracies.append(model_accuracy)
explanation_accuracies.append(explanation_accuracy)
explanation_fidelities.append(explanation_fidelity)
explanation_complexities.append(explanation_complexity)
elapsed_times.append(elapsed_time)
results = pd.DataFrame({
'method': methods,
'split': splits,
'explanation': explanations,
'model_accuracy': model_accuracies,
'explanation_accuracy': explanation_accuracies,
'explanation_fidelity': explanation_fidelities,
'explanation_complexity': explanation_complexities,
'elapsed_time': elapsed_times,
})
results.to_csv(os.path.join(results_dir, f'results_{method}.csv'))
return results
# # General pruning
results_pruning = c_to_y(method='logic_layer', verbose=False)
results_pruning
| experiments/old/experiment_logic_layer_01_xor.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import sklearn
import pandas as pd
# import d6tflow tasks
import tasks
# -
model = tasks.TaskTrain().output().load()
df_train = tasks.TaskPreprocess().output().load()
print(sklearn.metrics.accuracy_score(df_train['y'],model.predict(df_train.iloc[:,:-1])))
# +
df_importance = pd.Series(model.feature_importances_, index=df_train.iloc[:,:-1].columns)
import matplotlib.pyplot as plt
df_importance.sort_values(ascending=False).plot.bar()
# -
# save figure
plt.savefig('reports/plot.png')
| visualize.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import numpy as np
import cv2
from PIL import Image,ImageDraw
import matplotlib.pyplot as plt
import math
import copy
import sys
# Sobel masks are used to calculate gradients
sobel_X=np.array([[1,0,-1],[2,0,-2],[1,0,-1]])
sobel_Y=np.array([[1,2,1],[0,0,0],[-1,-2,-1]])
count=0
# threshold values for different images
threshold_val=[0.002,0.001,0.05,0.05,0.05]
image_list=['buildingGray.jpg','rose.jpg','lena.jpg','bicycle.bmp','Image_fig2.jpg']
def rgb2gray(image):
'''
Converts Color Image (RGB) to GrayScale Image
'''
gray=cv2.cvtColor(image,cv2.COLOR_BGR2GRAY)
return gray
def display_image(arr):
'''
Displays Image
'''
image_harris=Image.fromarray(arr)
# Display the image
image_harris.show()
def gaussian_smoothing(sigma=1,mean=0):
'''
Returns a Gaussian Mask
Arguments:
sigma -- Default value -> 1
mean -- Default value -> 0
Returns:
gaussian_filter -- numpy array of size (2*sigma+1,2*sigma+1)
'''
# radius of the gaussian curve
radius=int(sigma)
# X and Y values
X=np.linspace(-radius,radius,2*radius+1)
Y=np.linspace(-radius,radius,2*radius+1)
[grid_X,grid_Y]=np.meshgrid(X,Y)
#gaussian_filter=np.exp(-(grid_X**2+grid_Y**2)/(2*(sigma)**2))/(math.sqrt(2*math.pi)*sigma)
# Gaussian Mask
gaussian_filter=np.exp(-(grid_X**2+grid_Y**2-mean**2)/(2*(sigma)**2))
gaussian_filter=gaussian_filter/np.sum(gaussian_filter)
return gaussian_filter
def add_padding(I,H):
'''
This function adds Padding to an Image
'''
# Padding to added in row and column
pad_R=int((H.shape[0]-1)/2)+(H.shape[0]+1)%2
pad_C=int((H.shape[1]-1)/2)+(H.shape[1]+1)%2
# Padding of zeros to be added to row and column
Z_R=np.zeros((pad_R,I.shape[1]))
if H.shape[0]%2==0:
Z_C=np.zeros((I.shape[0]+pad_R,pad_C))
else:
Z_C=np.zeros((I.shape[0]+2*pad_R,pad_C))
# If height and width of mask are even
if H.shape[0]%2==0 and H.shape[1]%2==0:
I=np.append(Z_R,I,axis=0)
I=np.append(Z_C,I,axis=1)
# If height of mask is even and width of mask are odd
elif H.shape[0]%2==0 and H.shape[1]%2==1:
I=np.append(Z_R,I,axis=0)
I=np.concatenate((Z_C,I,Z_C),axis=1)
# If height of mask is odd and width of mask are even
elif H.shape[0]%2==1 and H.shape[1]%2==0:
I=np.concatenate((Z_R,I,Z_R),axis=0)
I=np.append(Z_C,I,axis=1)
# If height and width of mask are odd
else:
I=np.concatenate((Z_R,I,Z_R),axis=0)
I=np.concatenate((Z_C,I,Z_C),axis=1)
return I
def convolution(img,H):
'''
Performs Convolution Operation
'''
# Adds padding to the image
I_pad=add_padding(img,H)
# convolution array
conv=np.zeros(img.shape)
# Mask is inverted about X and then about Y
H=np.rot90(np.rot90(H))
for i in range(img.shape[0]):
for j in range(img.shape[1]):
# Window in the image
window=I_pad[i:i+H.shape[0],j:j+H.shape[1]]
conv[i,j]=np.sum(window*H)
return conv
def NMR(matrix):
'''
Performs Non-Max Suppression
'''
# empty list
points=[]
# For all pixel coordinates in matrix
for k in matrix.keys():
max_R=0
# NMR in window of size 3x3
for i in range(k[0]-1,k[0]+2):
for j in range(k[1]-1,k[1]+2):
# if the coordinate is in matrix
if (i,j) in matrix.keys():
if matrix[(i,j)]>max_R:
pt=(i,j)
max_R=matrix[(i,j)]
# pixel in the window with max R value
if pt not in points:
points.append(pt)
return points
def adaptive_NMR(matrix):
'''
Performs Adaptive Non-Max Suppression
'''
# matrix is sorted w.r.t R value
I_sort = dict(sorted(matrix.items(), key=lambda item: item[1],reverse=True))
# pixel with max R value is added
final_pt=[list(I_sort.keys())[0]]
for i in I_sort.keys():
for j in final_pt:
# if the absolute difference of both X and Y values is less than 5
if abs(i[0]-j[0])<=5 and abs(i[1]-j[1])<=5:
break
else:
final_pt.append(i)
return final_pt
def harris_detector(I,alpha=0.04,method='none',supression='normal',sigm=1):
'''
Performs Harris Corner Detector Operation
'''
# Checks if the image is colored
if len(I.shape)==3:
# Convertes image to gray scale
I_gray=rgb2gray(I)
else:
I_gray=I
print(f'Solving for Image : {image_list[count]}')
print('Performing Gaussian Smoothing')
# Gaussian Smoothing of the image
gaussian_filter=gaussian_smoothing(sigma=sigm)
Gaussian=convolution(I_gray,gaussian_filter)
#Gaussian=cv2.GaussianBlur(I_gray,(3,3),3)
print('Phase I : Calculating Gradients')
# Gradient in X direction
grad_X=convolution(Gaussian,sobel_X)
grad_X=convolution(grad_X,gaussian_smoothing(sigma=sigm))
#grad_X=cv2.GaussianBlur(grad_X,(3,3),1.4)
# Gradient in Y direction
grad_Y=convolution(Gaussian,sobel_Y)
grad_Y=convolution(grad_Y,gaussian_smoothing(sigma=sigm))
#grad_Y=cv2.GaussianBlur(grad_Y,(3,3),1.4)
# Square of gradient in X and Y direction
grad_XX=np.square(grad_X)
grad_YY=np.square(grad_Y)
# Gradients in X and Y direction are multiplied
grad_XY=np.multiply(grad_X,grad_Y)
print('Phase II : Calculating R values')
max_R=0
mat={}
for row in range(1,I.shape[0]-1):
for clm in range(1,I.shape[1]-1):
# Values in a window of size 3x3
Window_XX=np.sum(grad_XX[row-1:row+2,clm-1:clm+2])
Window_XY=np.sum(grad_XY[row-1:row+2,clm-1:clm+2])
Window_YY=np.sum(grad_YY[row-1:row+2,clm-1:clm+2])
# M matrix is created
M=np.array([[Window_XX,Window_XY],[Window_XY,Window_YY]])
# R value is calculated using Szeliski or Normal method
if method=='none':
R=np.linalg.det(M)-alpha*(np.trace(M))**2
elif method=='szeliski':
R=np.linalg.det(M)/np.trace(M)
else:
print('Incorrect Option !!!')
sys.exit()
mat[(row,clm)]=R
# Max value of R is stored
if R>max_R:
max_R=R
mat_copy=mat.copy()
threshold=max_R*threshold_val[count]
# Deleting pixels with R value less than threshold
for k,v in mat_copy.items():
if not v>threshold:
del mat[k]
print('Phase III : Non Max Suppression')
# Non-max suppression is performed
if supression=='normal':
points=NMR(mat)
elif supression=='adaptive':
points=adaptive_NMR(mat)
else:
print('Incorrect Option !!!')
sys.exit()
# Final corners are plotted on the image
for val in points:
I=cv2.circle(I,(val[1],val[0]),2,(255,0,0),-1)
# Display the Image
display_image(I)
plt.imshow(I)
plt.show()
# the canvas is saved as an image
cv2.imwrite(f'Output/output_{image_list[count]}.jpg',cv2.cvtColor(I, cv2.COLOR_BGR2RGB))
# Performing Harris Corner Detection for all the images
for img in image_list:
# loading the image
image=Image.open(f'Data/{img}')
img_arr=np.array(image)
harris_detector(img_arr,supression='adaptive')
count+=1
| Harris Corner Detector.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## IPython Widgets
# IPython widgets are tools that give us interactivity within our analysis. This is most useful when looking at a complicated plot and trying to figure out how it depends on a single parameter. You could make 20 different plots and vary the parameter a bit each time, or you could use an IPython slider widget. Let's first import the widgets.
import IPython.html.widgets as widg  # note: in current Jupyter versions this functionality lives in the separate ipywidgets package
import numpy as np
import matplotlib.pyplot as plt
from scipy.integrate import odeint
# %matplotlib inline
# The object we will learn about today is called interact. Let's find out how to use it.
# +
# widg.interact?
# -
# We see that we need a function with parameters that we want to vary, so let's make one. We will examine the Lorenz equations, which exhibit chaotic behaviour and are quite beautiful.
def lorentz_derivs(yvec, t, sigma, rho, beta):
"""Compute the the derivatives for the Lorentz system at yvec(t)."""
dx = sigma*(yvec[1]-yvec[0])
dy = yvec[0]*(rho-yvec[2])-yvec[1]
dz = yvec[0]*yvec[1]-beta*yvec[2]
return [dx,dy,dz]
def solve_lorentz(ic, max_time=4.0, sigma=10.0, rho=28.0, beta=8.0/3.0):
"""Solve the Lorenz system for a single initial condition.
Parameters
----------
ic : array, list, tuple
Initial conditions [x,y,z].
max_time: float
The max time to use. Integrate with 250 points per time unit.
sigma, rho, beta: float
Parameters of the differential equation.
Returns
-------
soln : np.ndarray
The array of the solution. Each row will be the solution vector at that time.
t : np.ndarray
The array of time points used.
"""
t = np.linspace(0, max_time, int(max_time*250))  # linspace needs an integer number of points
return odeint(lorentz_derivs, ic, t, args = (sigma, rho, beta)), t
def plot_lorentz(N=1, max_time=4.0, sigma=10.0, rho=28.0, beta=8.0/3.0):
"""Plot [x(t),z(t)] for the Lorenz system.
Parameters
----------
N : int
Number of initial conditions and trajectories to plot.
max_time: float
Maximum time to use.
sigma, rho, beta: float
Parameters of the differential equation.
"""
f = plt.figure(figsize=(15, N*8))
np.random.seed(1)
colors = plt.cm.hot(np.linspace(0,1,N))
for n in range(N):
plt.subplot(N,1,n+1)  # subplot indices are 1-based
x0 = np.random.uniform(-15, 15)
y0 = np.random.uniform(-15, 15)
z0 = np.random.uniform(-15, 15)
soln, t = solve_lorentz([x0,y0,z0], max_time, sigma, rho, beta)
plt.plot(soln[:,0], soln[:, 2], color=colors[n])
plot_lorentz()
widg.interact(plot_lorentz, N=1, max_time=(0,10,.1), sigma=(0,10,.1), rho=(0,100, .1), beta=(0,10,.1))
# Okay! So now you are ready to analyze the world! Just kidding. Let's make a simpler example. Consider the best-fitting straight line through a set of points. When a curve fitter fits a straight line, it tries to minimize the sum of the squared "errors" between all the data points and the fit line. Mathematically this is represented as
#
# $$\sum_{i=0}^{n}(f(x_i)-y_i)^2$$
#
# Now, $f(x_i)=mx_i+b$. Your task is to write a function that plots a line and prints out the error, make an interact that allows you to vary the $m$ and $b$ parameters, and then vary those parameters until you find the smallest error.
#Make a function that takes two parameters m and b, prints the total error, and plots the line and the data.
#Use this x and y in your function as the data
x=np.linspace(0,1,10)
y=(np.random.rand(10)+4)*x+5
#Make an interact as above that allows you to vary m and b.
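# One possible sketch of a solution (hedged: the helper name `plot_line` is my own choice, and the cell redefines the data so it is self-contained):

```python
import numpy as np
import matplotlib.pyplot as plt

x = np.linspace(0, 1, 10)
y = (np.random.rand(10) + 4) * x + 5  # same generating process as above

def plot_line(m=1.0, b=0.0):
    """Plot the line m*x + b over the data and print the total squared error."""
    error = np.sum((m * x + b - y) ** 2)
    plt.plot(x, y, 'o', label='data')
    plt.plot(x, m * x + b, label=f'm={m:.2f}, b={b:.2f}')
    plt.legend()
    print('total error:', error)

plot_line(4.5, 5.0)  # the generating slope is between 4 and 5, the intercept is 5
```

# Hooking it up with `widg.interact(plot_line, m=(0.0, 10.0, 0.1), b=(0.0, 10.0, 0.1))` then lets you search for the minimum error by hand.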
| Python Workshop/IPython Widgets.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] toc=true
# <h1>Table of Contents<span class="tocSkip"></span></h1>
# <div class="toc"><ul class="toc-item"></ul></div>
# -
from examples.example_target_sequences import VDR, MTOR, GABA
import pandas as pd
import mltle as mlt
model = mlt.predict.Model('Res2_06')
# +
# this model expects SMILES strings to be canonical
vdr_ligand_calcitriol = "C=C1C(=CC=C2CCCC3(C)C2CCC3C(C)CCCC(C)(C)O)CC(O)CC1O"
vdr_ligand_calcitriol = mlt.utils.to_non_isomeric_canonical(vdr_ligand_calcitriol)
gaba_ligand_diazepam = "CN1C(=O)CN=C(c2ccccc2)c2cc(Cl)ccc21"
gaba_ligand_diazepam = mlt.utils.to_non_isomeric_canonical(gaba_ligand_diazepam)
mtor_ligand_torin1 = "CCC(=O)N1CCN(c2ccc(-n3c(=O)ccc4cnc5ccc(-c6cnc7ccccc7c6)cc5c43)cc2C(F)(F)F)CC1"
mtor_ligand_torin1 = mlt.utils.to_non_isomeric_canonical(mtor_ligand_torin1)
# -
X_predict = pd.DataFrame()
X_predict['drug'] = [vdr_ligand_calcitriol, gaba_ligand_diazepam, mtor_ligand_torin1]
X_predict['protein'] = [VDR, GABA, MTOR]
X_predict.head()
model.predict(X_predict)
| examples/predict.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: 'Python 3.8.8 64-bit (''base'': conda)'
# name: python3
# ---
# ## PV parameter extraction from a Si solar cell and µSMU
#
# - Measure the JV-curve of solar cells using the micro-SMU under light
# - Import data into the Jupyter Notebook and analyse using python3
# - Plot the JV curve
# - Extract parameters of the solar cell: Voc, Jsc, FF, Vmax, Imax
#
# Note: The polycrystalline Si mini-modules were acquired from AliExpress (cheap devices). Parameters: Voc = 2 V, I = 40 mA (4 mA each, 40 mA total at 1000 W/m2) url: https://es.aliexpress.com/item/4000512203943.html
# +
# Import libraries
import numpy as np
import matplotlib.pyplot as plt
import statsmodels.api as sm
# +
# Import data into Jupyter Notebook
filename = 'data/celdaSi_ligh_near_01.dat' # Under Light (White LED lamp with 500 W/m2 irradiance)
np_data = np.loadtxt(filename, delimiter=',', skiprows =2) # Import as np.ndarray
# +
#Separate data in two arrays - V and J
V = np_data[:,0] # voltage is in [V]
J = np_data[:,1] # current density is in [mA/cm2]
# +
# Find Voc_approx and Jsc_approx
a = (J > 0) #Voc_approx is located where J > 0
Voc_approx = np.min(V[a])
Voc_approx = round(Voc_approx,4)
b = (V < 0) #Jsc_approx is located where V < 0
Jsc_approx = np.max(J[b])
Jsc_approx = round(Jsc_approx,4)
# +
# Find the approx Fill Factor
P = J*V # mW/cm2
P_min = np.min(P)
FF_approx = P_min/(Voc_approx*Jsc_approx)
FF_approx = round(FF_approx,4)
# +
## Plot the JV curve with the approximated values of Voc and Jsc
# Print data inside the Voc and Jsc limits
plt.plot(V,J,'or')
plt.ylim(np.min(J),0)
plt.xlim(0,Voc_approx) # Plot the 4th quadrant from 0 to Voc_approx
plt.title('poly-Si solar cell, 500 $W/m^2$', fontsize =16)
plt.ylabel('Current density $ \:[mA/cm^2]$', fontsize=14)
plt.xlabel('Voltage $[V]$', fontsize=14)
plt.grid()
plt.show()
# +
# These are the approximate values extracted from the light JV-curve
print (f'Voc_approx = {Voc_approx} V')
print (f'Jsc_approx = {Jsc_approx} mA/cm2')
print (f'FF_approx = {FF_approx}')
# +
## Refine the Voc through linear fitting
#Select data near Voc to calculate the series resistance (slope)
Vs = V[(V < (Voc_approx + 0.04)) & (V > (Voc_approx - 0.04))]
Js = J[(V < (Voc_approx + 0.04)) & (V > (Voc_approx - 0.04))]
#add the column of ones to the inputs if you want statsmodels to calculate the intercept 𝑏₀.
Vs = sm.add_constant(Vs)
# Create the model for the regression and show results
model = sm.OLS(Js, Vs) # Note the (Js, Vs) order: endog (y) first, then exog (X)
results = model.fit()
# Extract the y = mx+b coefficients for Voc calculation
b1 = results.params[0]
m1 = results.params[1]
# Linear fit: Voc is the voltage where mx + b = 0
Voc_fit = -b1/m1
Voc_fit = round(Voc_fit,4)
# Calculation of series resistance (Rs) from the linear model
Rs = 1/(m1*1e-3)
Rs = round(Rs,2) # Series resistance
# +
## Refine the Jsc through linear fitting
#Select data near V = 0 to calculate the parallel (shunt) resistance (slope)
Vp = V[(V < (0 + 0.04)) & (V > (0 - 0.04))]
Jp = J[(V < (0 + 0.04)) & (V > (0 - 0.04))]
#add the column of ones to the inputs if you want statsmodels to calculate the intercept 𝑏₀.
Vp = sm.add_constant(Vp)
# Create the model for the regression and show results
model_sc = sm.OLS(Jp, Vp)
results_sc = model_sc.fit()
# Extract the y = mx+b coefficients for Jsc calculation
b1 = results_sc.params[0]
m1 = results_sc.params[1]
# Jsc is the intercept b, i.e. J at V = 0
Jsc_fit = b1
Jsc_fit = round(Jsc_fit,4)
# Calculation of parallel (shunt) resistance (Rp) from the linear model
Rp = 1/(m1*1e-3)
Rp = round(Rp,2) # Parallel (shunt) resistance
# +
# Find the accurate Fill Factor (FF)
P = J*V # mW/cm2
P_min = np.min(P)
FF_fit = P_min/(Voc_fit*Jsc_fit)
FF_fit = round(FF_fit,4)
# +
# Plot the whole J(V) curve and use vlines and hlines to delimit the 4th quadrant
# Print data inside the Voc_fit and Jsc_fit limits
plt.plot(V,J,'or')
plt.ylim(np.min(J)-0.05,0+0.05) #This y_limit can be modified for your values
plt.xlim(0-0.1,Voc_fit+0.1) #This x_limit can be modified for your values
plt.title('poly-Si solar cell', fontsize =16)
plt.ylabel('Current density $ \:[mA/cm^2]$', fontsize=14)
plt.xlabel('Voltage $[V]$', fontsize=14)
plt.vlines(0, -0.7, 0.1, linestyle="dashed")
plt.hlines(0 , 0-0.2, Voc_fit+0.2, linestyle="dashed")
#plt.grid()
plt.show()
# +
#These are the accurate PV parameters from the light JV-curve
print (f'Voc_fit = {Voc_fit} V')
print (f'Jsc_fit = {Jsc_fit} mA/cm2')
print (f'FF_fit = {FF_fit}')
# -
Rs
| light_jv_parameters.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # COVID-19 $R_t$
# > "State level Rt estimates for India"
#
# - toc: true
# - branch: master
# - badges: false
# - comments: false
# - use_math: true
# - author: <NAME>
# #### Based on [k-sys/covid-19](https://github.com/k-sys/covid-19).
# +
#collapse
# For some reason Theano is unhappy when I run the GP, need to disable future warnings
import warnings
warnings.simplefilter(action='ignore', category=FutureWarning)
import os
import requests
import pymc3 as pm
import pandas as pd
import numpy as np
import theano
import theano.tensor as tt
from matplotlib import pyplot as plt
from matplotlib import dates as mdates
from matplotlib import ticker
from datetime import date
from datetime import datetime
from IPython.display import clear_output
# %config InlineBackend.figure_format = 'retina'
# -
# #### Load State Information
#collapse
url = 'data/india-states.csv'
states = pd.read_csv(url,
parse_dates=['date'],
index_col=['state', 'date']).sort_index()
states = states.drop([])
# ## Load Patient Information
# #### Global Covid-19 Onset/Symptom Data
# +
#collapse
def download_file(url, local_filename):
"""From https://stackoverflow.com/questions/16694907/"""
with requests.get(url, stream=True) as r:
r.raise_for_status()
with open(local_filename, 'wb') as f:
for chunk in r.iter_content(chunk_size=8192):
if chunk: # filter out keep-alive new chunks
f.write(chunk)
return local_filename
URL = 'data/linelist.csv'
LINELIST_PATH = 'data/linelist.csv'
if not os.path.exists(LINELIST_PATH):
print('Downloading file, this will take a while ~100mb')
try:
download_file(URL, LINELIST_PATH)
clear_output(wait=True)
print('Done downloading.')
except:
print('Something went wrong. Try again.')
else:
print('Already downloaded CSV')
# -
# #### Parse & Clean Patient Info
# +
#collapse
# Load the patient CSV
patients = pd.read_csv(
'data/linelist.csv',
parse_dates=False,
usecols=[
'date_confirmation',
'date_onset_symptoms'],
low_memory=False)
patients.columns = ['Onset', 'Confirmed']
# There's an errant reversed date
patients = patients.replace('01.31.2020', '31.01.2020')
# Only keep if both values are present
patients = patients.dropna()
# Must have strings that look like individual dates
# "2020.03.09" is 10 chars long
is_ten_char = lambda x: x.str.len().eq(10)
patients = patients[is_ten_char(patients.Confirmed) &
is_ten_char(patients.Onset)]
# Convert both to datetimes
patients.Confirmed = pd.to_datetime(
patients.Confirmed, format='%d.%m.%Y')
patients.Onset = pd.to_datetime(
patients.Onset, format='%d.%m.%Y')
# Only keep records where confirmed > onset
patients = patients[patients.Confirmed >= patients.Onset]
# -
# #### Show Relationship between Onset of Symptoms and Confirmation
# +
#collapse
ax = patients.plot.scatter(
title='Onset vs. Confirmed Dates - COVID19',
x='Onset',
y='Confirmed',
alpha=.1,
lw=0,
s=10,
figsize=(6,6))
formatter = mdates.DateFormatter('%m/%d')
locator = mdates.WeekdayLocator(interval=2)
for axis in [ax.xaxis, ax.yaxis]:
axis.set_major_formatter(formatter)
axis.set_major_locator(locator)
# -
# #### Calculate the Probability Distribution of Delay
# +
#collapse
# Calculate the delta in days between onset and confirmation
delay = (patients.Confirmed - patients.Onset).dt.days
# Convert samples to an empirical distribution
p_delay = delay.value_counts().sort_index()
new_range = np.arange(0, p_delay.index.max()+1)
p_delay = p_delay.reindex(new_range, fill_value=0)
p_delay /= p_delay.sum()
# Show our work
fig, axes = plt.subplots(ncols=2, figsize=(9,3))
p_delay.plot(title='P(Delay)', ax=axes[0])
p_delay.cumsum().plot(title='P(Delay <= x)', ax=axes[1])
for ax in axes:
ax.set_xlabel('days')
# -
# ## Odisha State
#collapse
state = 'OR'
confirmed = states.xs(state).positive.diff().dropna()
confirmed.tail()
# ### Translate Confirmation Dates to Onset Dates
#
# Our goal is to translate positive test counts to the dates where they likely occurred. Since we have the delay distribution, we can distribute case counts back in time according to that distribution. To accomplish this, we reverse the case time series, convolve it with the distribution of delay from onset to confirmation, and then reverse the series again to obtain the onset curve. Note that this means the data will be 'right censored': there are onset cases that have yet to be reported, so it looks as if the count has gone down.
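# A toy illustration of the reverse-convolve-reverse step described above (hypothetical counts and a hypothetical two-day delay distribution, not the real data):

```python
import numpy as np

confirmed = np.array([0, 0, 4, 10])  # hypothetical daily confirmed counts
p_delay = np.array([0.5, 0.5])       # toy delay: half same-day, half one day later

# Reverse, convolve with the delay distribution, reverse again
convolved = np.convolve(confirmed[::-1], p_delay)
onset = convolved[::-1]  # one extra (earlier) day appears at the front
print(onset)
```

# The 10 cases confirmed on the last day are split between same-day onset and onset one day earlier, which is exactly what `confirmed_to_onset` below does with the empirical distribution.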
# +
#collapse
def confirmed_to_onset(confirmed, p_delay):
assert not confirmed.isna().any()
# Reverse cases so that we convolve into the past
convolved = np.convolve(confirmed[::-1].values, p_delay)
# Calculate the new date range
dr = pd.date_range(end=confirmed.index[-1],
periods=len(convolved))
# Flip the values and assign the date range
onset = pd.Series(np.flip(convolved), index=dr)
return onset
onset = confirmed_to_onset(confirmed, p_delay)
# -
# ### Adjust for Right-Censoring
#
# Since we distributed observed cases into the past to recreate the onset curve, we now have a right-censored time series. We can correct for that by asking what % of people have a delay less than or equal to the time between the day in question and the current day.
#
# For example, 5 days ago, there might have been 100 cases onset. Over the course of the next 5 days some portion of those cases will be reported. This portion is equal to the cumulative distribution function of our delay distribution. If we know that portion is, say, 60%, then our current count of onset cases on that day represents only 60% of the total, so the true total is roughly 100/60 ≈ 167% of the observed count. We apply this correction to get an idea of what actual onset cases are likely, thus removing the right censoring.
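# A toy numeric sketch of that correction (hypothetical delay distribution and counts):

```python
import numpy as np

# Hypothetical P(delay = d days), d = 0..5 (toy numbers, not the real distribution)
p_delay = np.array([0.1, 0.3, 0.2, 0.2, 0.1, 0.1])
cdf = p_delay.cumsum()  # P(delay <= d days)

# Suppose for some onset day only delays of <= 2 days could have been
# reported by now, and we have observed 60 onset cases for that day.
observed = 60
portion_reported = cdf[2]  # 0.1 + 0.3 + 0.2 = 0.6, i.e. 60% reported so far
estimated_total = observed / portion_reported  # about 100 cases expected in total
print(portion_reported, estimated_total)
```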
# +
#collapse
def adjust_onset_for_right_censorship(onset, p_delay):
cumulative_p_delay = p_delay.cumsum()
# Calculate the additional ones needed so shapes match
ones_needed = len(onset) - len(cumulative_p_delay)
padding_shape = (0, ones_needed)
# Add ones and flip back
cumulative_p_delay = np.pad(
cumulative_p_delay,
padding_shape,
constant_values=1)
cumulative_p_delay = np.flip(cumulative_p_delay)
# Adjusts observed onset values to expected terminal onset values
adjusted = onset / cumulative_p_delay
return adjusted, cumulative_p_delay
adjusted, cumulative_p_delay = adjust_onset_for_right_censorship(onset, p_delay)
# -
# Take a look at all three series: confirmed, onset and onset adjusted for right censoring.
# +
#collapse
fig, ax = plt.subplots(figsize=(5,3))
confirmed.plot(
ax=ax,
label='Confirmed',
title=state,
c='k',
alpha=.25,
lw=1)
onset.plot(
ax=ax,
label='Onset',
c='k',
lw=1)
adjusted.plot(
ax=ax,
label='Adjusted Onset',
c='k',
linestyle='--',
lw=1)
ax.legend();
# -
# Let's have the model run only on days where we have enough data (roughly the last 40)
# ### Sample the Posterior with PyMC3
# We assume a poisson likelihood function and feed it what we believe is the onset curve based on reported data. We model this onset curve based on the same math in the previous notebook:
#
# $$ I^\prime = Ie^{\gamma(R_t-1)} $$
#
# We define $\theta = \gamma(R_t-1)$ and model $ I^\prime = Ie^{\theta} $ where $\theta$ observes a random walk. We let $\gamma$ vary independently based on known parameters for the serial interval. Therefore, we can recover $R_t$ easily by $R_t = \frac{\theta}{\gamma}+1$
#
# The only tricky part is understanding that we're feeding in _onset_ cases to the likelihood. So $\mu$ of the poisson is the positive, non-zero, expected onset cases we think we'd see today.
#
# We calculate this by figuring out how many cases we'd expect there to be yesterday total when adjusted for bias and plugging it into the first equation above. We then have to re-bias this number back down to get the expected amount of onset cases observed that day.
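# As a tiny numeric sketch of the $R_t = \frac{\theta}{\gamma}+1$ recovery (toy values, not output of the model):

```python
import numpy as np

serial_interval = 4.0  # toy value; the model treats this as a random variable
gamma = 1.0 / serial_interval
theta = np.array([0.10, 0.05, -0.02])  # hypothetical random-walk values
r_t = theta / gamma + 1  # theta > 0 implies growth (R_t > 1), theta < 0 decline
print(r_t)
```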
#collapse
class MCMCModel(object):
def __init__(self, region, onset, cumulative_p_delay, window=40):
# Just for identification purposes
self.region = region
# For the model, we'll only look at the last N
self.onset = onset.iloc[-window:]
self.cumulative_p_delay = cumulative_p_delay[-window:]
# Where we store the results
self.trace = None
self.trace_index = self.onset.index[1:]
def run(self, chains=1, tune=3000, draws=1000, target_accept=.95):
with pm.Model() as model:
# Random walk magnitude
step_size = pm.HalfNormal('step_size', sigma=.03)
# Theta random walk
theta_raw_init = pm.Normal('theta_raw_init', 0.1, 0.1)
theta_raw_steps = pm.Normal('theta_raw_steps', shape=len(self.onset)-2) * step_size
theta_raw = tt.concatenate([[theta_raw_init], theta_raw_steps])
theta = pm.Deterministic('theta', theta_raw.cumsum())
# Let the serial interval be a random variable and calculate r_t
serial_interval = pm.Gamma('serial_interval', alpha=6, beta=1.5)
gamma = 1.0 / serial_interval
r_t = pm.Deterministic('r_t', theta/gamma + 1)
inferred_yesterday = self.onset.values[:-1] / self.cumulative_p_delay[:-1]
expected_today = inferred_yesterday * self.cumulative_p_delay[1:] * pm.math.exp(theta)
# Ensure cases stay above zero for poisson
mu = pm.math.maximum(.1, expected_today)
observed = self.onset.round().values[1:]
cases = pm.Poisson('cases', mu=mu, observed=observed)
self.trace = pm.sample(
chains=chains,
tune=tune,
draws=draws,
target_accept=target_accept)
return self
def run_gp(self):
with pm.Model() as model:
gp_shape = len(self.onset) - 1
length_scale = pm.Gamma("length_scale", alpha=3, beta=.4)
eta = .05
cov_func = eta**2 * pm.gp.cov.ExpQuad(1, length_scale)
gp = pm.gp.Latent(mean_func=pm.gp.mean.Constant(c=0),
cov_func=cov_func)
# Place a GP prior over the function f.
theta = gp.prior("theta", X=np.arange(gp_shape)[:, None])
# Let the serial interval be a random variable and calculate r_t
serial_interval = pm.Gamma('serial_interval', alpha=6, beta=1.5)
gamma = 1.0 / serial_interval
r_t = pm.Deterministic('r_t', theta / gamma + 1)
inferred_yesterday = self.onset.values[:-1] / self.cumulative_p_delay[:-1]
expected_today = inferred_yesterday * self.cumulative_p_delay[1:] * pm.math.exp(theta)
# Ensure cases stay above zero for poisson
mu = pm.math.maximum(.1, expected_today)
observed = self.onset.round().values[1:]
cases = pm.Poisson('cases', mu=mu, observed=observed)
self.trace = pm.sample(chains=1, tune=1000, draws=1000, target_accept=.8)
return self
# ### Run Pymc3 Model
# +
#collapse
def df_from_model(model):
r_t = model.trace['r_t']
mean = np.mean(r_t, axis=0)
median = np.median(r_t, axis=0)
hpd_90 = pm.stats.hpd(r_t, credible_interval=.9)
hpd_50 = pm.stats.hpd(r_t, credible_interval=.5)
idx = pd.MultiIndex.from_product([
[model.region],
model.trace_index
], names=['region', 'date'])
df = pd.DataFrame(data=np.c_[mean, median, hpd_90, hpd_50], index=idx,
columns=['mean', 'median', 'lower_90', 'upper_90', 'lower_50','upper_50'])
return df
def create_and_run_model(name, state):
confirmed = state.positive.diff().dropna()
onset = confirmed_to_onset(confirmed, p_delay)
adjusted, cumulative_p_delay = adjust_onset_for_right_censorship(onset, p_delay)
return MCMCModel(name, onset, cumulative_p_delay).run()
# +
#hide
models = {}
for state, grp in states.groupby('state'):
print(state)
if state in models:
print(f'Skipping {state}, already in cache')
continue
models[state] = create_and_run_model(state, grp.droplevel(0))
# -
# ### Handle Divergences
# +
#collapse
# Check to see if there were divergences
n_diverging = lambda x: x.trace['diverging'].nonzero()[0].size
divergences = pd.Series([n_diverging(m) for m in models.values()], index=models.keys())
has_divergences = divergences.gt(0)
print('Diverging states:')
display(divergences[has_divergences])
# Rerun states with divergences
for state, n_divergences in divergences[has_divergences].items():
models[state].run()
# -
# ## Compile Results
# +
#collapse
results = None
for state, model in models.items():
df = df_from_model(model)
if results is None:
results = df
else:
results = pd.concat([results, df], axis=0)
# -
# ### Render Charts
#collapse
def plot_rt(name, result, ax, c=(.3,.3,.3,1), ci=(0,0,0,.05)):
ax.set_ylim(0.5, 1.6)
ax.set_title(name)
ax.plot(result['median'],
marker='o',
markersize=4,
markerfacecolor='w',
lw=1,
c=c,
markevery=2)
ax.fill_between(
result.index,
result['lower_90'].values,
result['upper_90'].values,
color=ci,
lw=0)
ax.axhline(1.0, linestyle=':', lw=1)
ax.xaxis.set_major_formatter(mdates.DateFormatter('%m/%d'))
ax.xaxis.set_major_locator(mdates.WeekdayLocator(interval=2))
# # COVID-19 $R_t$ for Indian States
# > Last updated // 10 September, 2020 by [Reeva](https://twitter.com/reeva_mishra)
# +
#collapse
ncols = 2
nrows = int(np.ceil(results.index.levels[0].shape[0] / ncols))
fig, axes = plt.subplots(
nrows=nrows,
ncols=ncols,
figsize=(14, nrows*3),
sharey='row')
for ax, (state, result) in zip(axes.flat, results.groupby('region')):
plot_rt(state, result.droplevel(0), ax)
fig.tight_layout()
fig.set_facecolor('w')
# -
# ### States and Union Territories (Short Codes)
#
# * Andaman and Nicobar Islands (AN)
# * Andhra Pradesh (AP)
# * Arunachal Pradesh (AR)
# * Assam (AS)
# * Bihar (BR)
# * Chandigarh (CH)
# * Chhattisgarh (CT)
# * Dadra and Nagar Haveli (DN)
# * Daman and Diu (DD)
# * Delhi (DL)
# * Goa (GA)
# * Gujarat (GJ)
# * Haryana (HR)
# * Himachal Pradesh (HP)
# * Jammu and Kashmir (JK)
# * Jharkhand (JH)
# * Karnataka (KA)
# * Kerala (KL)
# * Ladakh (LA)
# * Lakshadweep (LD)
# * Madhya Pradesh (MP)
# * Maharashtra (MH)
# * Manipur (MN)
# * Meghalaya (ML)
# * Mizoram (MZ)
# * Nagaland (NL)
# * Odisha (OR)
# * Puducherry (PY)
# * Punjab (PB)
# * Rajasthan (RJ)
# * Sikkim (SK)
# * Tamil Nadu (TN)
# * Telangana (TG)
# * Tripura (TR)
# * Uttar Pradesh (UP)
# * Uttarakhand (UT)
# * West Bengal (WB)
| _notebooks/Realtime Rt mcmc.ipynb |
# -*- coding: utf-8 -*-
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .jl
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Julia 1.0.1
# language: julia
# name: julia-1.0
# ---
# # AR(1) + GARCH(1,1) Model
# ## Loading Packages
# +
using Dates, DelimitedFiles, Statistics, LinearAlgebra, Optim, ForwardDiff
include("jlFiles/printmat.jl")
# +
using Plots
backend = "gr" #"gr" (default), "pyplot"
if backend == "pyplot"
pyplot(size=(600,400))
else
gr(size=(480,320))
default(fmt = :svg)
end
# -
# ## Loading Data
# +
xx = readdlm("Data/FFdSizePs.csv",',',skipstart=1)
ymd = round.(Int,xx[:,1]) #YearMonthDay, like 20121231
R = xx[:,2] #returns for the smallest size portfolio
xx = nothing
y = R[2:end] #dependent variable, y(t)
x = [ones(size(R,1)-1) R[1:end-1]] #regressors, [1, y(t-1)]
dN = Date.(string.(ymd),"yyyymmdd")
println("The first four dates:")
printmat(dN[1:4])
# -
# ## The Likelihood Function
# Consider a regression equation, where the residual follows a GARCH(1,1) process
#
# $
# y_{t} =x_{t}^{\prime}b+u_{t} \: \text{ with }\: u_{t}=v_{t}\sigma_{t} \: \text{ and }
# $
#
# $
# \sigma_{t}^{2} =\omega+\alpha u_{t-1}^{2}+\beta\sigma_{t-1}^{2}.
# $
#
# Notice that we require $(\omega,\alpha,\beta)$ to all be positive and $\alpha + \beta < 1$.
#
# If $v_{t}\sim N(0,1)$, then the likelihood function is
#
# $
# \ln L=-\frac{T}{2}\ln(2\pi)
# -\frac{1}{2}\sum_{t=1}^{T}\ln\sigma_{t}^{2}-
# \frac{1}{2}\sum_{t=1}^{T}\frac{u_{t}^{2}}{\sigma_{t}
# ^{2}}.
# $
#
# The likelihood function of a GARCH(1,1) model can be coded as in
# *garch11LL*. The first function calculates time-varying variances and
# the likelihood contributions (for each period). The second functions forms the
# loss function used in the minimization.
# +
function garch11LL(par::Vector,y,x)
(T,k) = (size(x,1),size(x,2))
b = par[1:k] #mean equation, y = x'*b
(omega,alpha,beta1) = par[k+1:k+3] #GARCH(1,1) equation:
#s2(t) = omega + alpha*u(t-1)^2 + beta1*s2(t-1)
yhat = x*b
u = y - yhat
s2_0 = var(u) #var(u,1) gives a matrix, var(u) a scalar
s2 = zeros(typeof(alpha),T) #works also with ForwardDiff
s2[1] = omega + alpha*s2_0 + beta1*s2_0 #simple, but slow approach
for t = 2:T #using filter() is perhaps quicker
s2[t] = omega + alpha*u[t-1]^2 + beta1*s2[t-1]
end
LL = -(1/2)*log(2*pi) .- (1/2)*log.(s2) .- (1/2)*(u.^2)./s2
LL[1] = 0.0 #effectively skip the first observation
return LL,s2,yhat
end
function garch11LLLoss(par::Vector,y,x) #loss fn to minimize
par1 = copy(par)
par1[end-2:end] = abs.(par1[end-2:end]) #impose non-negativity on (omega,alpha,beta)
LL, = garch11LL(par1,y,x)
Loss = -sum(LL) #to minimize: -sum(LL)
return Loss
end
function garch11LLRLoss(par::Vector,y,x,rho) #loss fn, with penalty on alpha+beta1 > 1
par1 = copy(par)
par1[end-2:end] = abs.(par1[end-2:end]) #impose non-negativity
LL, = garch11LL(par1,y,x)
(alpha,beta1) = par1[end-1:end] #s2(t) = omega + alpha*u(t-1)^2 + beta1*s2(t-1)
g = [alpha + beta1 - 1] #alpha + beta1 < 1
Loss = -sum(LL) + rho*sum(max.(0,g).^2)
return Loss
end
# -
# ## Try the Likelihood Function
# +
par0 = [mean(y),0,var(y)*0.05,0.05,0.93] #initial parameter guess
(loglik,s2,yhat) = garch11LL(par0,y,x) #just testing the log lik
LL = garch11LLLoss(par0,y,x)
printlnPs("Value of log-likelihood fn at starting guess: ",-LL)
# -
# ## Maximize the Likelihood Function
# +
Sol = optimize(par->garch11LLLoss(par,y,x),par0) #minimize -sum(LL)
parHat = Optim.minimizer(Sol) #extract the optimal solution
parHat[end-2:end] = abs.(parHat[end-2:end]) #since the likelihood function uses abs(these values)
LLHat = garch11LLLoss(parHat,y,x)
printlnPs("Value of log-likelihood fn at estimate: ",-LLHat)
println("\nParameter estimates (b[1],b[2],omega,alpha,beta1): ")
printmat(parHat)
# -
# ## Standard Errors of the Estimates
#
# MLE is typically asymptotically normally distributed
#
# $
# \sqrt{T}(\hat{\theta}-\theta) \rightarrow^{d}N(0,V) \: \text{, where } \: V=I(\theta)^{-1}\text{ with }
# $
#
# $
# I(\theta) =-\text{E}\frac{\partial^{2}\ln L_t}{\partial\theta\partial\theta^{\prime}}
# $
#
# where $\ln L_t$ is the contribution of period $t$ to the likelihood function and $I(\theta)$ is the information matrix.
#
# The code below calculates numerical derivatives.
#
#
# Alternatively, we can use the outer product of the gradients to calculate the
# information matrix as
#
# $
# J(\theta)=\text{E}\left[ \frac{\partial\ln L_t}{\partial\theta
# }\frac{\partial\ln L_t}{\partial\theta^{\prime}}\right]
# $
#
# We could also use the "sandwich" estimator
#
# $
# V=I(\theta)^{-1}J(\theta)I(\theta)^{-1}.
# $
# ### Std from Hessian
# +
T = size(y,1) #finding std(coefs) by inverse of information matrix
Ia = -ForwardDiff.hessian(par->mean(garch11LL(par,y,x)[1]),parHat)
Ia = (Ia+Ia')/2 #to guarantee symmetry
vcv = inv(Ia)/T
std_parHat = sqrt.(diag(vcv))
println("std from Hessian")
printmat(std_parHat)
# -
# ### Std from Gradient and Sandwich
# +
LLgrad = ForwardDiff.jacobian(par->garch11LL(par,y,x)[1],parHat) #T x length(par) matrix, T gradients
J = LLgrad'LLgrad/T
vcv = inv(J)/T
stdb_parHat = sqrt.(diag(vcv)) #std from gradients
vcv = inv(Ia) * J * inv(Ia)/T
stdc_parHat = sqrt.(diag(vcv)) #std from sandwich
printlnPs("\nGARCH parameter estimates and 3 different standard errors
coef hessian gradient sandwich")
printmat([parHat std_parHat stdb_parHat stdc_parHat])
# -
# ## Redo MLE, but with Penalty on alpha+beta>1
# +
println("\nIterate with a harder and harder penalty on the α + β < 1 restriction. PERHAPS NOT NEEDED")
println("\n penalty b0 b1 omega alpha beta")
options = Optim.Options(show_trace=false,show_every=10)
for rho = 0.0:0.1:1
local Sol, par
global par0
Sol = optimize(par->garch11LLRLoss(par,y,x,rho),par0,options)
par = Optim.minimizer(Sol)
par[end-2:end] = abs.(par[end-2:end])   #same (omega,alpha,beta) slice as in the loss function
printmat([rho par'])
par0 = copy(par)
end
# -
# # Value at Risk
#
# calculated by assuming conditional (time-varying) normality,
#
# $
# \text{VaR} = -(\mu_t - 1.645 \sigma_t),
# $
#
# where
# $\mu_t$ are the predictions from the estimated mean equation ($x_t'b$) and $\sigma_t$ from the GARCH(1,1) model.
# +
(_,σ²,μ) = garch11LL(parHat,y,x)
VaR95 = -(μ - 1.645*sqrt.(σ²))
CovRatio = mean((-y) .>= VaR95) #coverage ratio for VaR
printlnPs("\nCoverage ratio for VaR(95%): ",CovRatio)
# +
xTicks = [Date(1990);Date(2000);Date(2010)] #controlling the tick locations
plot(dN[2:end],VaR95,xticks=Dates.value.(xTicks),legend=false)
title!("1-day VaR (95%)")
# -
| Garch.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import numpy as np
import pandas as pd
import seaborn as sns
import scipy.stats as stats
import matplotlib.pyplot as plt
from statsmodels.stats.proportion import proportions_ztest
from statsmodels.stats.power import TTestIndPower
sns.set()
# -
# ## Statistical Power
# **Type I error**
#
# Rejecting the null when it is actually true, denoted by $\alpha$. It is essentially a false positive because we recommend a treatment when it actually does not work.
#
# **Type II error**
#
# Failing to reject the null when it is actually false, denoted by $\beta$. It is a false negative because we end up not recommending a treatment that works.
#
# **Significance level**
#
# A significance level of 0.05 means that there is a 5% chance of a false positive. Choosing the level of significance is somewhat arbitrary, but for many applications a level of 5% is chosen, for no better reason than that it is conventional.
#
# **Power**
#
# Power of 0.80 means that there is an 80% chance that if there was an effect, we would detect it (or a 20% chance that we'd miss the effect). In other words, power is equivalent to $1 - \beta$. There are no formal standards for power; most researchers assess the power of their tests using 0.80 for adequacy.
#
# | Scenario | $H_0$ is true | $H_0$ is false |
# |--------------|:-----------------------------------:|-------------------------:|
# | Accept $H_0$ | Correct Decision | Type 2 Error (1 - power) |
# | Reject $H_0$ | Type 1 Error (significance level) | Correct decision |
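To make the table above concrete, here is a small simulation (not part of the original notebook, and assuming a two-sample t-test with 50 observations per group): when the null hypothesis is true, a test at the 5% significance level should produce a false positive (Type 1 error) about 5% of the time.

```python
# Illustrative simulation: both groups are drawn from the SAME normal
# distribution, so H0 is true by construction. The fraction of p-values
# below alpha should then be close to alpha itself.
import numpy as np
import scipy.stats as stats

rng = np.random.default_rng(42)
alpha = 0.05
n_trials = 5000

a = rng.normal(0.0, 1.0, size=(n_trials, 50))
b = rng.normal(0.0, 1.0, size=(n_trials, 50))
_, p_values = stats.ttest_ind(a, b, axis=1)

false_positive_rate = (p_values < alpha).mean()
print(f"Observed Type 1 error rate: {false_positive_rate:.3f}")  # close to 0.05
```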
# ### Intuition
#
# A good way to get a feel for the underlying mechanics is to plot the probability distribution of $z$ assuming that the null hypothesis is true. Then do the same assuming that the alternative hypothesis is true, and overlay the two plots.
#
# Consider the following example:
# * $H_0$: $p_a = p_b$
# * $H_1$: $p_a > p_b$
#
# $n_1 = 2500, n_2 = 2500, p_1 = 0.08, p_2 = 0.10$
# +
def plot_power(
    n1: int,
    n2: int,
    p1: float,
    p2: float,
    significance: float = 0.05
) -> None:
    counts = np.array([p1 * n1, p2 * n2])
    nobs = np.array([n1, n2])
    zscore, _ = proportions_ztest(counts, nobs, alternative='larger')

    # calculate the null and alternative distributions
    h0_dist = stats.norm(loc=0, scale=1)
    h1_dist = stats.norm(loc=zscore, scale=1)

    # calculate the rejection threshold and the power
    x = np.linspace(-5, 6, num=100)
    threshold = h0_dist.ppf(1 - significance)
    mask = x > threshold
    power = np.round(1 - h1_dist.cdf(threshold), 2)

    # plot both distributions, shading the area beyond the threshold
    plt.figure(figsize=(8, 8))
    ax = plt.subplot(1, 1, 1)
    sns.lineplot(x=x, y=h1_dist.pdf(x), label="$H_1$ is true", ax=ax, color="navy")
    ax.fill_between(x=x[mask], y1=0.0, y2=h1_dist.pdf(x)[mask], alpha=0.2, color="navy")
    sns.lineplot(x=x, y=h0_dist.pdf(x), label="$H_0$ is true", ax=ax, color="green")
    ax.fill_between(x=x[mask], y1=0.0, y2=h0_dist.pdf(x)[mask], alpha=0.2, color="green")
    ax.set_title(f"n1: {n1}, n2: {n2}, p1: {p1*100}%, p2: {p2*100}%, power: {power*100}%", fontsize=16)
    plt.show()
plot_power(n1=2500, n2=2500, p1=0.10, p2=0.08)
# -
# The shaded green area denotes the significance region, while the shaded blue area denotes the power (note that it includes the shaded green area). Note that if we pick a smaller N, or a smaller probability difference between the control and experiment group, the power drops (the shaded blue area decreases), meaning that if there is in fact a change, there is a smaller chance that we'll detect it.
plot_power(n1=1250, n2=1250, p1=0.10, p2=0.08)
plot_power(n1=2500, n2=2500, p1=0.10, p2=0.099)
# ### Power Analysis
#
# Statistical power is one piece in a puzzle that has four related parts:
#
# * **Effect Size:** The quantified magnitude of a result present in the population
# * **Sample Size:** The number of observations in the sample.
# * **Significance:** The significance level used in the statistical test, e.g. alpha. Often set to 5% or 0.05.
# * **Statistical Power:** The probability of rejecting the null hypothesis when it is in fact false.
#
#
# Say we've followed the rule of thumb and required the significance level to be 5% and the power to be 80%. This means we have now specified two key components of a power analysis:
#
# * A decision rule for when to reject the null hypothesis: we reject the null when the p-value is less than 5%.
# * Our tolerance for committing type 2 error: $1 - 80\% = 20\%$.
#
# To actually solve for the equation of finding the suitable sample size, we also need to specify the detectable difference, i.e. the level of impact we want to be able to detect with our test.
#
# In order to explain the dynamics behind this, we'll return to the definition of power: the power is the probability of rejecting the null hypothesis when it is false. Hence for us to calculate the power, we need to define what "false" means to us in the context of the study. In other words, how much impact, i.e., difference between test and control, do we need to observe in order to reject the null hypothesis and conclude that the action worked?
#
# Let's consider two illustrative examples: if we think that an event rate reduction of, say, $10^{-10}$ is enough to reject the null hypothesis, then we need a very large sample size to get a power of 80%. This is pretty easy to deduce from the charts above: if the difference in event rates between test and control is a small number like $10^{-10}$, the null and alternative probability distributions will be nearly indistinguishable. Hence we will need to increase the sample size in order to move the alternative distribution to the right and gain power. Conversely, if we only require a reduction of 0.02 in order to claim success, we can make do with a much smaller sample size.
#
# The smaller the detectable difference, the larger the required sample size.
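As a rough sketch of that rule of thumb (not from the original notebook), the standard normal-approximation sample size formula for a two-sided two-proportion z-test shows the required sample size per group growing rapidly as the detectable difference shrinks. The 10% baseline rate here is an arbitrary choice for illustration.

```python
# Required sample size per group for a two-sided two-proportion z-test,
# using the normal approximation: n = (z_a + z_b)^2 * (p1*q1 + p2*q2) / d^2.
import scipy.stats as stats

def required_sample_size(p1, p2, alpha=0.05, power=0.80):
    z_alpha = stats.norm.ppf(1 - alpha / 2)
    z_beta = stats.norm.ppf(power)
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return (z_alpha + z_beta) ** 2 * variance / (p1 - p2) ** 2

baseline = 0.10
for diff in (0.005, 0.01, 0.02):
    n = required_sample_size(baseline, baseline - diff)
    print(f"detectable difference {diff:.3f}: n per group = {n:,.0f}")
```

Halving the detectable difference roughly quadruples the required sample size, since the difference enters the formula squared.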
#
# #### Student’s t Test Power Analysis
# +
alpha = 0.05
beta = 0.2
power = 1 - beta
effect = 0.80
# perform power analysis
analysis = TTestIndPower()
result = analysis.solve_power(
effect,
power=power,
nobs1=None,
ratio=1.0,
alpha=alpha
)
print(f"Sample Size: {np.ceil(result)}")
# -
# #### Power Curves
# +
# parameters for power analysis
effect_sizes = np.array([0.2, 0.5, 0.8])
sample_sizes = np.array(range(5, 100))
analysis = TTestIndPower()
analysis.plot_power(dep_var='nobs', nobs=sample_sizes, effect_size=effect_sizes)
plt.show()
| statistics/power_analysis.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # High-Performance Pandas: ``eval()`` and ``query()``
import numpy as np
rng = np.random.RandomState(42)
x = rng.rand(int(1E6))
y = rng.rand(int(1E6))
# %timeit x + y
# %timeit np.fromiter((xi + yi for xi, yi in zip(x, y)), dtype=x.dtype, count=len(x))
mask = (x > 0.5) & (y < 0.5)
tmp1 = (x > 0.5)
tmp2 = (y < 0.5)
mask = tmp1 & tmp2
import numexpr
mask_numexpr = numexpr.evaluate('(x > 0.5) & (y < 0.5)')
np.allclose(mask, mask_numexpr)
import pandas as pd
nrows, ncols = 100000, 100
rng = np.random.RandomState(42)
df1, df2, df3, df4 = (pd.DataFrame(rng.rand(nrows, ncols))
for i in range(4))
# %timeit df1 + df2 + df3 + df4
# %timeit pd.eval('df1 + df2 + df3 + df4')
np.allclose(df1 + df2 + df3 + df4,
pd.eval('df1 + df2 + df3 + df4'))
df1, df2, df3, df4, df5 = (pd.DataFrame(rng.randint(0, 1000, (100, 3)))
for i in range(5))
result1 = -df1 * df2 / (df3 + df4) - df5
result2 = pd.eval('-df1 * df2 / (df3 + df4) - df5')
np.allclose(result1, result2)
result1 = (df1 < df2) & (df2 <= df3) & (df3 != df4)
result2 = pd.eval('df1 < df2 <= df3 != df4')
np.allclose(result1, result2)
result1 = (df1 < 0.5) & (df2 < 0.5) | (df3 < df4)
result2 = pd.eval('(df1 < 0.5) & (df2 < 0.5) | (df3 < df4)')
np.allclose(result1, result2)
result3 = pd.eval('(df1 < 0.5) and (df2 < 0.5) or (df3 < df4)')
np.allclose(result1, result3)
result1 = df2.T[0] + df3.iloc[1]
result2 = pd.eval('df2.T[0] + df3.iloc[1]')
np.allclose(result1, result2)
df = pd.DataFrame(rng.rand(1000, 3), columns=['A', 'B', 'C'])
df.head()
result1 = (df['A'] + df['B']) / (df['C'] - 1)
result2 = pd.eval("(df.A + df.B) / (df.C - 1)")
np.allclose(result1, result2)
result3 = df.eval('(A + B) / (C - 1)')
np.allclose(result1, result3)
df.head()
df.eval('D = (A + B) / C', inplace=True)
df.head()
df.eval('D = (A - B) / C', inplace=True)
df.head()
column_mean = df.mean(1)
result1 = df['A'] + column_mean
result2 = df.eval('A + @column_mean')
np.allclose(result1, result2)
result1 = df[(df.A < 0.5) & (df.B < 0.5)]
result2 = pd.eval('df[(df.A < 0.5) & (df.B < 0.5)]')
np.allclose(result1, result2)
result2 = df.query('A < 0.5 and B < 0.5')
np.allclose(result1, result2)
Cmean = df['C'].mean()
result1 = df[(df.A < Cmean) & (df.B < Cmean)]
result2 = df.query('A < @Cmean and B < @Cmean')
np.allclose(result1, result2)
x = df[(df.A < 0.5) & (df.B < 0.5)]
tmp1 = df.A < 0.5
tmp2 = df.B < 0.5
tmp3 = tmp1 & tmp2
x = df[tmp3]
df.values.nbytes
| code_listings/03.12-Performance-Eval-and-Query.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] colab_type="text" id="TBFXQGKYUc4X"
# ##### Copyright 2018 The TensorFlow Authors.
# + cellView="form" colab={} colab_type="code" id="1z4xy2gTUc4a"
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# + [markdown] colab_type="text" id="FE7KNzPPVrVV"
# # Image classification
# + [markdown] colab_type="text" id="KwQtSOz0VrVX"
# <table class="tfo-notebook-buttons" align="left">
# <td>
# <a target="_blank" href="https://www.tensorflow.org/beta/images/image_classification"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a>
# </td>
# <td>
# <a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs/blob/master/site/en/r2/tutorials/images/image_classification.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
# </td>
# <td>
# <a target="_blank" href="https://github.com/tensorflow/docs/blob/master/site/en/r2/tutorials/images/image_classification.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
# </td>
# </table>
# + [markdown] colab_type="text" id="gN7G9GFmVrVY"
# This tutorial shows how to classify cats or dogs from images. It builds an image classifier using a `tf.keras.Sequential` model and load data using `tf.keras.preprocessing.image.ImageDataGenerator`. You will get some practical experience and develop intuition for the following concepts:
#
# * Building _data input pipelines_ using the `tf.keras.preprocessing.image.ImageDataGenerator` class to efficiently work with data on disk to use with the model.
# * _Overfitting_ —How to identify and prevent it.
# * _Data augmentation_ and _dropout_ —Key techniques to fight overfitting in computer vision tasks to incorporate into the data pipeline and image classifier model.
#
# This tutorial follows a basic machine learning workflow:
#
# 1. Examine and understand data
# 2. Build an input pipeline
# 3. Build the model
# 4. Train the model
# 5. Test the model
# 6. Improve the model and repeat the process
# + [markdown] colab_type="text" id="zF9uvbXNVrVY"
# # Import packages
# + [markdown] colab_type="text" id="VddxeYBEVrVZ"
# Let's start by importing the required packages. The `os` package is used to read files and directory structure, NumPy is used to convert python list to numpy array and to perform required matrix operations and `matplotlib.pyplot` to plot the graph and display images in the training and validation data.
# + colab={} colab_type="code" id="rtPGh2MAVrVa"
from __future__ import absolute_import, division, print_function, unicode_literals
import os
import numpy as np
import matplotlib.pyplot as plt
# + [markdown] colab_type="text" id="TI3XEQuJVrVd"
# This tutorial uses data available as `.zip` archive file. Use the `zipfile` module to extract its contents.
# + colab={} colab_type="code" id="19eGwtQ4VrVe"
import zipfile
# + [markdown] colab_type="text" id="Jlchl4x2VrVg"
# Import Tensorflow and the Keras classes needed to construct our model.
# + colab={} colab_type="code" id="L1WtoaOHVrVh"
# !brew install wget # added by Miles
# !pip install --upgrade --ignore-installed wrapt # added by Miles
# !pip install tensorflow==2.0.0-beta0 # modified by Miles: change to CPU version
# + colab={} colab_type="code" id="L1WtoaOHVrVh"
import tensorflow as tf
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Conv2D, Flatten, Dropout, MaxPooling2D
from tensorflow.keras.preprocessing.image import ImageDataGenerator
# + [markdown] colab_type="text" id="UZZI6lNkVrVm"
# # Load data
# + [markdown] colab_type="text" id="DPHx8-t-VrVo"
# Begin by downloading the dataset. This tutorial uses a filtered version of <a href="https://www.kaggle.com/c/dogs-vs-cats/data" target="_blank">Dogs vs Cats</a> dataset from Kaggle. Download the archive version of the dataset and store it in the "/tmp/" directory.
# + colab={} colab_type="code" id="rpUSoFjuVrVp"
# !wget --no-check-certificate \
# https://storage.googleapis.com/mledu-datasets/cats_and_dogs_filtered.zip \
# -O /tmp/cats_and_dogs_filtered.zip
# + [markdown] colab_type="text" id="_lPjfOmNVrVs"
# Extract the dataset contents:
# + colab={} colab_type="code" id="OYmOylPlVrVt"
local_zip = '/tmp/cats_and_dogs_filtered.zip' # local path of downloaded .zip file
zip_ref = zipfile.ZipFile(local_zip, 'r')
zip_ref.extractall('/tmp') # contents are extracted to '/tmp' folder
zip_ref.close()
# + [markdown] colab_type="text" id="Giv0wMQzVrVw"
# The dataset has the following directory structure:
#
# <pre>
# <b>cats_and_dogs_filtered</b>
# |__ <b>train</b>
# |______ <b>cats</b>: [cat.0.jpg, cat.1.jpg, cat.2.jpg ....]
# |______ <b>dogs</b>: [dog.0.jpg, dog.1.jpg, dog.2.jpg ...]
# |__ <b>validation</b>
# |______ <b>cats</b>: [cat.2000.jpg, cat.2001.jpg, cat.2002.jpg ....]
# |______ <b>dogs</b>: [dog.2000.jpg, dog.2001.jpg, dog.2002.jpg ...]
# </pre>
# + [markdown] colab_type="text" id="VpmywIlsVrVx"
# After extracting its contents, assign variables with the proper file path for the training and validation set.
# + colab={} colab_type="code" id="sRucI3QqVrVy"
base_dir = '/tmp/cats_and_dogs_filtered'
train_dir = os.path.join(base_dir, 'train')
validation_dir = os.path.join(base_dir, 'validation')
# + colab={} colab_type="code" id="Utv3nryxVrV0"
train_cats_dir = os.path.join(train_dir, 'cats') # directory with our training cat pictures
train_dogs_dir = os.path.join(train_dir, 'dogs') # directory with our training dog pictures
validation_cats_dir = os.path.join(validation_dir, 'cats') # directory with our validation cat pictures
validation_dogs_dir = os.path.join(validation_dir, 'dogs') # directory with our validation dog pictures
# + [markdown] colab_type="text" id="ZdrHHTy2VrV3"
# ### Understand the data
# + [markdown] colab_type="text" id="LblUYjl-VrV3"
# Let's look at how many cats and dogs images are in the training and validation directory:
# + colab={} colab_type="code" id="vc4u8e9hVrV4"
num_cats_tr = len(os.listdir(train_cats_dir))
num_dogs_tr = len(os.listdir(train_dogs_dir))
num_cats_val = len(os.listdir(validation_cats_dir))
num_dogs_val = len(os.listdir(validation_dogs_dir))
total_train = num_cats_tr + num_dogs_tr
total_val = num_cats_val + num_dogs_val
# + colab={} colab_type="code" id="g4GGzGt0VrV7"
print('total training cat images:', num_cats_tr)
print('total training dog images:', num_dogs_tr)
print('total validation cat images:', num_cats_val)
print('total validation dog images:', num_dogs_val)
print("--")
print("Total training images:", total_train)
print("Total validation images:", total_val)
# + [markdown] colab_type="text" id="tdsI_L-NVrV_"
# # Set the model parameters
# + [markdown] colab_type="text" id="8Lp-0ejxOtP1"
# For convenience, set up variables to use while pre-processing the dataset and training the network.
# + colab={} colab_type="code" id="3NqNselLVrWA"
batch_size = 100
epochs = 15
IMG_SHAPE = 150 # Our training data consists of images with width of 150 pixels and height of 150 pixels
# + [markdown] colab_type="text" id="INn-cOn1VrWC"
# # Data preparation
# + [markdown] colab_type="text" id="5Jfk6aSAVrWD"
# Format the images into appropriately pre-processed floating point tensors before feeding to the network:
#
# 1. Read images from the disk.
# 2. Decode contents of these images and convert it into proper grid format as per their RGB content.
# 3. Convert them into floating point tensors.
# 4. Rescale the tensors from values between 0 and 255 to values between 0 and 1, as neural networks prefer to deal with small input values.
#
# Fortunately, all these tasks can be done with the `ImageDataGenerator` class provided by `tf.keras`. It can read images from disk and preprocess them into proper tensors. It will also set up generators that convert these images into batches of tensors—helpful when training the network.
# + colab={} colab_type="code" id="syDdF_LWVrWE"
train_image_generator = ImageDataGenerator(rescale=1./255) # Generator for our training data
validation_image_generator = ImageDataGenerator(rescale=1./255) # Generator for our validation data
# + [markdown] colab_type="text" id="RLciCR_FVrWH"
# After defining the generators for training and validation images, the `flow_from_directory` method loads images from the disk, applies rescaling, and resizes the images into the required dimensions.
# + colab={} colab_type="code" id="Pw94ajOOVrWI"
train_data_gen = train_image_generator.flow_from_directory(batch_size=batch_size,
directory=train_dir,
# It's usually best practice to shuffle the training data
shuffle=True,
target_size=(IMG_SHAPE,IMG_SHAPE), #(150,150)
class_mode='binary')
# + colab={} colab_type="code" id="2oUoKUzRVrWM"
val_data_gen = validation_image_generator.flow_from_directory(batch_size=batch_size,
directory=validation_dir,
target_size=(IMG_SHAPE,IMG_SHAPE), #(150,150)
class_mode='binary')
# + [markdown] colab_type="text" id="hyexPJ8CVrWP"
# ### Visualize training images
# + [markdown] colab_type="text" id="60CnhEL4VrWQ"
# Visualize the training images by extracting a batch of images from the training generator—which is 100 images in this example, given the batch size set above—then plot five of them with `matplotlib`.
# + colab={} colab_type="code" id="3f0Z7NZgVrWQ"
sample_training_images, _ = next(train_data_gen)
# + [markdown] colab_type="text" id="49weMt5YVrWT"
# The `next` function returns a batch from the dataset. The return value of `next` is of the form `(x_train, y_train)`, where `x_train` contains the training features and `y_train` their labels. Discard the labels to only visualize the training images.
# + colab={} colab_type="code" id="JMt2RES_VrWU"
# This function will plot images in the form of a grid with 1 row and 5 columns where images are placed in each column.
def plotImages(images_arr):
fig, axes = plt.subplots(1, 5, figsize=(20,20))
axes = axes.flatten()
for img, ax in zip( images_arr, axes):
ax.imshow(img)
plt.tight_layout()
plt.show()
# + colab={} colab_type="code" id="d_VVg_gEVrWW"
plotImages(sample_training_images[:5])
# + [markdown] colab_type="text" id="b5Ej-HLGVrWZ"
# # Create the model
# + [markdown] colab_type="text" id="wEgW4i18VrWZ"
# The model consists of three convolution blocks with a max pool layer in each of them. On top of them is a fully connected layer with 512 units that is activated by a `relu` activation function. The model outputs class probabilities based on binary classification via the `sigmoid` activation function.
# + colab={} colab_type="code" id="F15-uwLPVrWa"
model = Sequential()
model.add(Conv2D(16, 3, padding='same', activation='relu', input_shape=(IMG_SHAPE,IMG_SHAPE, 3,)))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Conv2D(32, 3, padding='same', activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Conv2D(64, 3, padding='same', activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Flatten())
model.add(Dense(512, activation='relu'))
model.add(Dense(1, activation='sigmoid'))
# + [markdown] colab_type="text" id="PI5cdkMQVrWc"
# ### Compile the model
#
# For this tutorial, choose the *ADAM* optimizer and *binary cross entropy* loss function. To view training and validation accuracy for each training epoch, pass the `metrics` argument.
# + colab={} colab_type="code" id="6Mg7_TXOVrWd"
model.compile(optimizer='adam',
loss='binary_crossentropy',
metrics=['accuracy']
)
# + [markdown] colab_type="text" id="2YmQZ3TAVrWg"
# ### Model summary
#
# View all the layers of the network using the model's `summary` method:
# + colab={} colab_type="code" id="Vtny8hmBVrWh"
model.summary()
# + [markdown] colab_type="text" id="N06iqE8VVrWj"
# ### Train the model
# + [markdown] colab_type="text" id="oub9RtoFVrWk"
# Use the `fit_generator` method of the `ImageDataGenerator` class to train the network.
# + colab={} colab_type="code" id="KSF2HqhDVrWk"
history = model.fit_generator(
train_data_gen,
steps_per_epoch=int(np.ceil(total_train / float(batch_size))),
epochs=epochs,
validation_data=val_data_gen,
validation_steps=int(np.ceil(total_val / float(batch_size)))
)
# -
# + [markdown] colab_type="text" id="ojJNteAGVrWo"
# ### Visualize training results
# + [markdown] colab_type="text" id="LZPYT-EmVrWo"
# Now visualize the results after training the network.
# + colab={} colab_type="code" id="K6oA77ADVrWp"
acc = history.history['accuracy']
val_acc = history.history['val_accuracy']
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs_range = range(epochs)
plt.figure(figsize=(8, 8))
plt.subplot(1, 2, 1)
plt.plot(epochs_range, acc, label='Training Accuracy')
plt.plot(epochs_range, val_acc, label='Validation Accuracy')
plt.legend(loc='lower right')
plt.title('Training and Validation Accuracy')
plt.subplot(1, 2, 2)
plt.plot(epochs_range, loss, label='Training Loss')
plt.plot(epochs_range, val_loss, label='Validation Loss')
plt.legend(loc='upper right')
plt.title('Training and Validation Loss')
plt.show()
# + [markdown] colab_type="text" id="kDnr50l2VrWu"
# As you can see from the plots, training accuracy and validation accuracy are off by a large margin and the model has achieved only around **70%** accuracy on the validation set.
#
# Let's look at what went wrong and try to increase overall performance of the model.
# +
# Note (added by Miles): this is the end of the code-along
# What we've learned: training a CNN from scratch
# -
# # << End of Code-Along >>
# +
# Note (added by Miles): the following is optional material
# A better solution to this network's poor performance is to use transfer learning!
# + [markdown] colab_type="text" id="rLO7yhLlVrWu"
# # Overfitting
# + [markdown] colab_type="text" id="hNyx3Lp4VrWv"
# In the plots above, the training accuracy is increasing linearly over time, whereas validation accuracy stalls around 70% in the training process. Also, the difference in accuracy between training and validation accuracy is noticeable—a sign of *overfitting*.
#
# When there are a small number of training examples, the model sometimes learns from noise or unwanted details in the training examples—to an extent that it negatively impacts the performance of the model on new examples. This phenomenon is known as overfitting. It means that the model will have a difficult time generalizing to a new dataset.
#
# There are multiple ways to fight overfitting in the training process. In this tutorial, you'll use *data augmentation* and add *dropout* to our model.
#
# To begin, clear the previous Keras session and start a new one:
# + colab={} colab_type="code" id="_L8qd6IxVrWw"
# Clear resources
tf.keras.backend.clear_session()
epochs = 80
# + [markdown] colab_type="text" id="UOoVpxFwVrWy"
# # Data augmentation
# + [markdown] colab_type="text" id="Wn_QLciWVrWy"
# Overfitting generally occurs when there are a small number of training examples. One way to fix this problem is to augment the dataset so that it has a sufficient number of training examples. Data augmentation takes the approach of generating more training data from existing training samples by augmenting the samples with random transformations that yield believable-looking images. The goal is that the model never sees the exact same picture twice during training. This helps expose the model to more aspects of the data and generalize better.
#
# Implement this in `tf.keras` using the `ImageDataGenerator` class. Pass different transformations to the dataset and it will take care of applying it during the training process.
# + [markdown] colab_type="text" id="2uJ1G030VrWz"
# ## Augment and visualize data
# + [markdown] colab_type="text" id="hvX7hHlgVrW0"
# Begin by applying random horizontal flip augmentation to the dataset and see what individual images look like after the transformation.
# + [markdown] colab_type="text" id="rlVj6VqaVrW0"
# ### Apply horizontal flip
# + [markdown] colab_type="text" id="xcdvx4TVVrW1"
# Pass `horizontal_flip` as an argument to the `ImageDataGenerator` class and set it to `True` to apply this augmentation.
# + colab={} colab_type="code" id="Bi1_vHyBVrW2"
image_gen = ImageDataGenerator(rescale=1./255, horizontal_flip=True)
# + colab={} colab_type="code" id="zvwqmefgVrW3"
train_data_gen = image_gen.flow_from_directory(
batch_size=batch_size,
directory=train_dir,
shuffle=True,
target_size=(IMG_SHAPE,IMG_SHAPE)
)
# + [markdown] colab_type="text" id="zJpRSxJ-VrW7"
# Take one sample image from the training examples and repeat it five times so that the augmentation is applied to the same image five times.
# + colab={} colab_type="code" id="RrKGd_jjVrW7"
augmented_images = [train_data_gen[0][0][0] for i in range(5)]
# + colab={} colab_type="code" id="EvBZoQ9xVrW9"
# Re-use the same custom plotting function defined and used
# above to visualize the training images
plotImages(augmented_images)
# + [markdown] colab_type="text" id="i7n9xcqCVrXB"
# ### Randomly rotate the image
# + [markdown] colab_type="text" id="qXnwkzFuVrXB"
# Let's take a look at a different augmentation called rotation and apply 45 degrees of rotation randomly to the training examples.
# + colab={} colab_type="code" id="1zip35pDVrXB"
image_gen = ImageDataGenerator(rescale=1./255, rotation_range=45)
# + colab={} colab_type="code" id="kVoWh4OIVrXD"
train_data_gen = image_gen.flow_from_directory(
batch_size=batch_size,
directory=train_dir,
shuffle=True,
target_size=(IMG_SHAPE, IMG_SHAPE)
)
augmented_images = [train_data_gen[0][0][0] for i in range(5)]
# + colab={} colab_type="code" id="wmBx8NhrVrXK"
plotImages(augmented_images)
# + [markdown] colab_type="text" id="FOqGPL76VrXM"
# ### Apply zoom augmentation
# + [markdown] colab_type="text" id="NvqXaD8BVrXN"
# Apply a zoom augmentation to the dataset to zoom images up to 50% randomly.
# + colab={} colab_type="code" id="tGNKLa_YVrXR"
image_gen = ImageDataGenerator(rescale=1./255, zoom_range=0.5)
# + colab={} colab_type="code" id="VOvTs32FVrXU"
train_data_gen = image_gen.flow_from_directory(
batch_size=batch_size,
directory=train_dir,
shuffle=True,
target_size=(IMG_SHAPE, IMG_SHAPE)
)
augmented_images = [train_data_gen[0][0][0] for i in range(5)]
# + colab={} colab_type="code" id="-KQWw8IZVrXZ"
plotImages(augmented_images)
# + [markdown] colab_type="text" id="usS13KCNVrXd"
# ### Put it all together
# + [markdown] colab_type="text" id="OC8fIsalVrXd"
# Apply all the previous augmentations together. Here, rescaling, 45-degree rotation, width shift, height shift, horizontal flip, and zoom augmentations are applied to the training images.
# + colab={} colab_type="code" id="gnr2xujaVrXe"
image_gen_train = ImageDataGenerator(
rescale=1./255,
rotation_range=45,
width_shift_range=.15,
height_shift_range=.15,
horizontal_flip=True,
zoom_range=0.5
)
# + colab={} colab_type="code" id="K0Efxy7EVrXh"
train_data_gen = image_gen_train.flow_from_directory(
batch_size=batch_size,
directory=train_dir,
shuffle=True,
target_size=(IMG_SHAPE,IMG_SHAPE),
class_mode='binary'
)
# + [markdown] colab_type="text" id="AW-pV5awVrXl"
# Visualize how a single image would look five different times when passing these augmentations randomly to the dataset.
# + colab={} colab_type="code" id="z2m68eMhVrXm"
augmented_images = [train_data_gen[0][0][0] for i in range(5)]
plotImages(augmented_images)
# + [markdown] colab_type="text" id="J8cUd7FXVrXq"
# ### Create validation data generator
# + [markdown] colab_type="text" id="a99fDBt7VrXr"
# Generally, only apply data augmentation to the training examples. In this case, only rescale the validation images and convert them into batches using `ImageDataGenerator`.
# + colab={} colab_type="code" id="54x0aNbKVrXr"
image_gen_val = ImageDataGenerator(rescale=1./255)
# + colab={} colab_type="code" id="1PCHKzI8VrXv"
val_data_gen = image_gen_val.flow_from_directory(batch_size=batch_size,
directory=validation_dir,
target_size=(IMG_SHAPE, IMG_SHAPE),
class_mode='binary')
# + [markdown] colab_type="text" id="yQGhdqHFVrXx"
# # Dropout
# + [markdown] colab_type="text" id="2Iq5TAH_VrXx"
# Another technique to reduce overfitting is to introduce *dropout* to the network. It is a form of *regularization* that forces the weights in the network to take only small values, which makes the distribution of weight values more regular so that the network can reduce overfitting on small training sets. Dropout is one of the regularization techniques used in this tutorial.
#
# When you apply dropout to a layer, it randomly drops out (sets to zero) a number of output units from that layer during the training process. Dropout takes a fractional number as its input value, such as 0.1, 0.2, or 0.4. This means dropping out 10%, 20%, or 40% of the output units randomly from the applied layer.
#
# When applying 0.1 dropout to a certain layer, it randomly drops 10% of the output units in each training step.
#
# Create a network architecture with this new dropout feature and apply it to different convolutions and fully-connected layers.
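As a minimal numpy sketch of the mechanics just described (not part of the tutorial itself), inverted dropout (the variant that `tf.keras.layers.Dropout` implements) zeroes a random fraction of activations during training and rescales the survivors so the expected activation is unchanged:

```python
# Minimal illustration of inverted dropout: zero a fraction `rate` of
# activations and scale the survivors by 1 / (1 - rate), so the expected
# value of each activation is preserved between training and inference.
import numpy as np

def dropout(activations, rate, rng):
    keep_prob = 1.0 - rate
    mask = rng.random(activations.shape) < keep_prob
    return activations * mask / keep_prob

rng = np.random.default_rng(0)
x = np.ones((4, 8))
y = dropout(x, rate=0.25, rng=rng)
print("fraction zeroed:", (y == 0).mean())  # roughly 0.25
print("surviving value:", y.max())          # 1 / 0.75, about 1.333
```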
# + [markdown] colab_type="text" id="DyxxXRmVVrXy"
# # Creating a new network with Dropouts
# + [markdown] colab_type="text" id="1Ba2LjtkVrXy"
# Here, you apply dropout to the first and last max pool layers and to a fully connected layer that has 512 output units. 30% of the first and last max pool layer outputs, and 10% of the fully connected layer outputs, are randomly set to zero during training.
# + colab={} colab_type="code" id="2fjio8EsVrXz"
model = Sequential()
model.add(Conv2D(16, 3, padding='same', activation='relu', input_shape=(150,150,3,)))
model.add(MaxPooling2D(pool_size=2))
model.add(Dropout(0.3))
model.add(Conv2D(32, 3, padding='same', activation='relu'))
model.add(MaxPooling2D(pool_size=2))
model.add(Conv2D(64, 3, padding='same', activation='relu'))
model.add(MaxPooling2D(pool_size=2))
model.add(Dropout(0.3))
model.add(Flatten())
model.add(Dense(512, activation='relu'))
model.add(Dropout(0.1))
model.add(Dense(1, activation='sigmoid'))
# + [markdown] colab_type="text" id="tpTgIxWAVrX0"
# ### Compile the model
# + [markdown] colab_type="text" id="1osvc_iTVrX1"
# After introducing dropouts to the network, compile the model and view the layers summary.
# + colab={} colab_type="code" id="OkIJhS-WVrX1"
model.compile(optimizer='adam',
loss='binary_crossentropy',
metrics=['accuracy']
)
model.summary()
# + [markdown] colab_type="text" id="7KiDshEUVrX6"
# ### Train the model
# + [markdown] colab_type="text" id="NFj0oVqVVrX6"
# After successfully introducing data augmentations to the training examples and adding dropouts to the network, train this new network:
# + colab={} colab_type="code" id="GWxHs_luVrX7"
history = model.fit_generator(
train_data_gen,
steps_per_epoch=int(np.ceil(total_train / float(batch_size))),
epochs=epochs,
validation_data=val_data_gen,
validation_steps=int(np.ceil(total_val / float(batch_size)))
)
# + [markdown] colab_type="text" id="bbdyqZdxVrYA"
# ### Visualize the model
# + [markdown] colab_type="text" id="OgvF2nt7OtR7"
# Visualize the new model after training and see if there are signs of overfitting:
# + colab={} colab_type="code" id="7BTeMuNAVrYC"
acc = history.history['accuracy']
val_acc = history.history['val_accuracy']
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs_range = range(epochs)
plt.figure(figsize=(8, 8))
plt.subplot(1, 2, 1)
plt.plot(epochs_range, acc, label='Training Accuracy')
plt.plot(epochs_range, val_acc, label='Validation Accuracy')
plt.legend(loc='lower right')
plt.title('Training and Validation Accuracy')
plt.subplot(1, 2, 2)
plt.plot(epochs_range, loss, label='Training Loss')
plt.plot(epochs_range, val_loss, label='Validation Loss')
plt.legend(loc='upper right')
plt.title('Training and Validation Loss')
plt.show()
# + [markdown] colab_type="text" id="LrO5SdNoVrYH"
# ### Evaluating the model
#
# As the learning curves show, training is much better behaved than before and there is far less overfitting. The model achieves an accuracy of ~*75%*.
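#
# The remaining gap between the final training and validation accuracies is a
# quick way to quantify overfitting. The sketch below uses hypothetical curve
# values standing in for `history.history`:
# +
acc_curve = [0.55, 0.65, 0.72, 0.76]      # stand-in for history.history['accuracy']
val_acc_curve = [0.54, 0.63, 0.70, 0.74]  # stand-in for history.history['val_accuracy']
gap = acc_curve[-1] - val_acc_curve[-1]
print(f"final train/val accuracy gap: {gap:.2f}")
# -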
| site/en/r2/tutorials/images/_image_classification.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# #### 1. Once set, context variables can be accessed for the duration of the conversation with a given user.
# ##### Ans: True
# #### 2. Slots allow us to collect information from the user and store it in context variables.
# ##### Ans: True
# #### 3. Slots with no question defined are optional and will only set the context variable if the condition (e.g., @location) is detected.
# ##### Ans: True
# #### 4. A node can only have one slot and therefore cannot assign more than one context variable.
# ##### Ans: False
# #### 5. In general, a required slot will only ask its question to the user once, even if the user replies with irrelevant information.
# ##### Ans: False
| Coursera/How to Build a Chatbot Without Coding/Week-3/Quiz/Module-6-Quiz-Context-Variables-&-Slots.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: 'Python 3.9.7 64-bit (''doutorado'': conda)'
# name: python3
# ---
from analysis import analysis_utils
import re
import pandas as pd
# +
metric_name = "overall_acc"
df = analysis_utils.wandb_to_df(
[
# "exp_14_stacking",
# "exp_14_stacking_rtol0.01",
# "exp_11",
# "exp_11_boosting",
# # "exp_12_rtol",
# "exp_13_rol_boosting",
# "exp_004_clean"
# "exp_011_rtol0.01",
# "exp_011_rtol0.001",
# "exp_011_autoencoders",
# "exp_011_autoencoders_50",
# "exp_012_rtol0.01",
# "exp_013_12foiautoencoders_rtol0.01",
# "exp_014_rtol_defato_0.01",
"exp0007",
"exp0009_stack_hidden_maxlayers2_noappend",
"exp0009_maxlayers1",
"exp0009_maxlayers2",
"exp0009_stack_hidden_maxlayers2",
"exp0016",
"exp0016_tanh",
"exp0016_relu",
],
metric_name,
)
df = df.sort_index(axis=1)
# -
df.head()
# +
metric_columns = [
c
for c in df.columns
    if re.match(r"test_.*(1-nn|3-nn|svm|xgboost|rf|autoconstructive)_*" + metric_name, c)  # group for alternation, not a character class
# if re.match("test_.*[drl]_" + metric, c)
]
df_filtered_tmp = df[metric_columns + ["project", "dataset_name"]]
df_filtered = pd.DataFrame()
for project in df_filtered_tmp['project'].unique():
df_p = df[df['project'] == project]
for dataset_name in df_p['dataset_name'].unique():
df_p_d = df_p[df_p['dataset_name'] == dataset_name]
tmp_df = df_p_d.iloc[:20, :]
df_filtered = pd.concat((df_filtered, tmp_df))
df_wilcoxon = analysis_utils.wilcoxon_tests(df_filtered, metric_name)
df_wilcoxon.to_csv("analysis_wilcoxon.csv")
df_pivot = df_wilcoxon[df_wilcoxon["g1"] == f"test_drl_untrained_{metric_name}"].pivot(
index=["project", "dataset_name"],
columns="g2",
values=["g1_mean", "g2_mean", "wilcoxon_result"],
)
df_pivot.columns = df_pivot.columns.swaplevel(0, 1)
df_pivot.sort_index(axis=1).to_csv("pivot_untrained.csv")
df_pivot = df_wilcoxon[df_wilcoxon["g1"] == f"test_drl_{metric_name}"].pivot(
index=["project", "dataset_name"],
columns="g2",
values=["g1_mean", "g2_mean", "wilcoxon_result"],
)
df_pivot.columns = df_pivot.columns.swaplevel(0, 1)
df_pivot.sort_index(axis=1).to_csv("pivot_trained.csv")
# plot_html(df)
# df[["project", "dataset_name", "g2", "wilcoxon_result"]].pivot(
# "project", "dataset_name", "g2", "wilcoxon_result"
# ).to_csv("pivot.csv")
df_filtered.melt(["project", "dataset_name"]).groupby(
["project", "dataset_name", "variable"]
).mean().unstack([0, 2]).to_csv("analysis2.csv")
avg = df_wilcoxon.groupby(["project", "dataset_name"]).mean()
avg.to_csv("analysis.csv")
with open("analysis.html", "w") as html_file:
    html_file.write(avg.style.highlight_max(color="lightgreen", axis=1).to_html())
print(df_wilcoxon)
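# The pivot/swaplevel pattern used above can be illustrated on a tiny frame
# with hypothetical values (same column names as the real data):
# +
import pandas as pd

toy = pd.DataFrame({
    "project": ["p1", "p1", "p2", "p2"],
    "dataset_name": ["d1", "d1", "d1", "d1"],
    "g2": ["rf", "svm", "rf", "svm"],
    "g2_mean": [0.8, 0.9, 0.6, 0.7],
})
# one row per (project, dataset), one column block per comparison group
toy_pivot = toy.pivot(index=["project", "dataset_name"], columns="g2", values=["g2_mean"])
toy_pivot.columns = toy_pivot.columns.swaplevel(0, 1)
print(toy_pivot.sort_index(axis=1))
# -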
| analysis/analysis_jupyter.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Statistics with Python
#
# ### GitHub repository: https://github.com/jorgemauricio/python_statistics
#
# ### Instructor: <NAME>
# ## EDA
#
# Exploratory Data Analysis refers to the critical process of performing initial investigations on data so as to discover patterns, spot anomalies, test hypotheses, and check assumptions with the help of summary statistics and graphical representations.
# libraries
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
# %matplotlib inline
# read the csv file "store_info.csv"
df = pd.read_csv("data/store_info.csv")
# inspect the structure of the file
df.head()
# inspect the data type of each column
df.info()
# strip the $ sign and thousands separators from the data
df['Amount'] = df['Amount'].str.replace('$', '', regex=False).str.replace(',', '', regex=False)
# verify the structure
df.head()
df.info()
# convert the 'Amount' column to a numeric type
df['Amount'] = pd.to_numeric(df['Amount'])
df.info()
# rows and columns
df.shape
df.describe()
# distribution of sales
plt.hist(df["Amount"],bins=20,density=1,facecolor='blue',alpha=0.7)
plt.show()
# The sales amounts range roughly from -1000 to 1000
# ## Sales by month, day and hour
# Monthly
sales_by_month = df.groupby('Month').size()
print(sales_by_month)
# Plot the graph
plot_by_month = sales_by_month.plot(title='Ventas Mensuales',xticks=(1,2,3,4,5,6,7,8,9,10,11,12))
plot_by_month.set_xlabel('Mes')
plot_by_month.set_ylabel('Venta Total')
# By day
fig = plt.figure(figsize=(15,5))
sales_by_day = df.groupby('Day').size()
plot_by_day = sales_by_day.plot(title='Ventas diarias',xticks=(range(1,31)),rot=45)
plot_by_day.set_xlabel('Día')
plot_by_day.set_ylabel('Venta Total')
# The day with the highest sales was day 18
sales_by_hour = df.groupby('Hour').size()
plot_by_hour = sales_by_hour.plot(title='Ventas por Hora',xticks=(range(5,22)))
plot_by_hour.set_xlabel('Hora')
plot_by_hour.set_ylabel('Venta Total')
sales_by_day.idxmax()
# By day
fig = plt.figure(figsize=(15,5))
sales_by_day = df.groupby('Day').size()
plot_by_day = sales_by_day.plot(title='Ventas diarias',xticks=(range(1,31)),rot=45)
plot_by_day.set_xlabel('Día')
plot_by_day.set_ylabel('Venta Total')
plot_by_day.text(sales_by_day.idxmax() + 1,sales_by_day.max(),"Mejor Venta")
sales_by_day.max()
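# The idxmax/max pair used for the annotation above, shown on a toy Series:
# +
import pandas as pd

toy_sales = pd.Series([3, 9, 5], index=[1, 2, 3])
print(toy_sales.idxmax(), toy_sales.max())  # index of the largest value, and the value itself
# -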
| EDA.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/f2010126/CodeworkWS2021/blob/main/DLComp.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + [markdown] id="wPmhED-bFIHh"
# Add Drive
#
# + colab={"base_uri": "https://localhost:8080/"} id="r9Z4_o3gFLsL" outputId="6f3da98c-5c72-4987-d8a6-abff03e97e26"
from google.colab import drive
drive.mount('/content/drive')
# + [markdown] id="pm-fPrvH4S91"
# Code from evaluate.py
# + id="4S3lB3hny_Kr"
import os
import torch
from tqdm import tqdm
from torchvision.datasets import ImageFolder
from torch.utils.data import DataLoader
# commented for notebook
# from src.cnn import *
device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')
class AverageMeter(object):
def __init__(self):
self.reset()
def reset(self):
self.avg = 0
self.sum = 0
self.cnt = 0
def update(self, val, n=1):
self.sum += val * n
self.cnt += n
self.avg = self.sum / self.cnt
def accuracy(logits, labels):
preds = torch.argmax(logits, axis=1)
return torch.sum(preds == labels) / len(labels)
def eval_fn(model, loader, device, train=False):
"""
Evaluation method
:param model: model to evaluate
:param loader: data loader for either training or testing set
:param device: torch device
:param train: boolean to indicate if training or test set is used
:return: accuracy on the data
"""
score = AverageMeter()
model.eval()
t = tqdm(loader)
with torch.no_grad(): # no gradient needed
for images, labels in t:
images = images.to(device)
labels = labels.to(device)
outputs = model(images)
acc = accuracy(outputs, labels)
score.update(acc.item(), images.size(0))
t.set_description('(=> Test) Score: {:.4f}'.format(score.avg))
return score.avg
def eval_model(model, saved_model_file, test_data_dir, data_augmentations):
model = model.to(device)
model.load_state_dict(torch.load(os.path.join(os.getcwd(), 'models', saved_model_file)))
data = ImageFolder(test_data_dir, transform=data_augmentations)
test_loader = DataLoader(dataset=data,
batch_size=128,
shuffle=False)
score = eval_fn(model, test_loader, device, train=False)
print('Avg accuracy:', str(score*100) + '%')
# + [markdown] id="b3fpLM3r41gu"
# cnn.py
#
# + id="856zY24U4gtS"
""" File with CNN models. Add your custom CNN model here. """
import torch.nn as nn
import torch.nn.functional as F
class SampleModel(nn.Module):
"""
A sample PyTorch CNN model
"""
def __init__(self, input_shape=(3, 64, 64), num_classes=10):
super(SampleModel, self).__init__()
self.conv1 = nn.Conv2d(in_channels=input_shape[0], out_channels=10, kernel_size=(3,3), padding=(1,1))
self.conv2 = nn.Conv2d(in_channels=10, out_channels=20, kernel_size=(3,3), padding=(1,1))
self.pool = nn.MaxPool2d(3, stride=2)
# The input features for the linear layer depends on the size of the input to the convolutional layer
# So if you resize your image in data augmentations, you'll have to tweak this too.
self.fc1 = nn.Linear(in_features=4500, out_features=32)
self.fc2 = nn.Linear(in_features=32, out_features=num_classes)
def forward(self, x):
x = self.conv1(x)
x = self.pool(x)
x = self.conv2(x)
x = self.pool(x)
x = x.view(x.size(0), -1)
x = self.fc1(x)
x = F.relu(x)
x = self.fc2(x)
return x
# + [markdown] id="s5yLlG8x5Qho"
# data_augmentations.py
# + id="gwOpmzCE40MX"
from torchvision import datasets, transforms
import torch.nn as nn
resize_to_64x64 = transforms.Compose([
transforms.Resize((64, 64)),
transforms.ToTensor()
])
resize_and_colour_jitter = transforms.Compose([
transforms.Resize((64, 64)),
transforms.ColorJitter(brightness=0.1, contrast=0.1),
transforms.ToTensor()
])
# + [markdown] id="y2N3TSeE5jYR"
# main.py
# + colab={"base_uri": "https://localhost:8080/", "height": 783} id="PDsj41Ah5b3G" outputId="0bde700a-59da-48bd-b22d-25866d4fa8ec"
import os
import argparse
import logging
import time
import numpy as np
import torch
import torchvision.transforms as transforms
from torch.utils.data import DataLoader, Subset, ConcatDataset
from torchsummary import summary
from torchvision.datasets import ImageFolder
# commented for notebook
# from src.cnn import *
# from src.eval.evaluate import eval_fn, accuracy
# from src.training import train_fn
# from src.data_augmentations import *
data_dir ='/content/drive/My Drive/DEEP_LEARNING/DLCOMP/dataset'
def main(data_dir,
torch_model,
num_epochs=10,
batch_size=50,
learning_rate=0.001,
train_criterion=torch.nn.CrossEntropyLoss,
model_optimizer=torch.optim.Adam,
data_augmentations=None,
save_model_str=None,
use_all_data_to_train=False,
exp_name=''):
"""
Training loop for configurableNet.
:param data_dir: dataset path (str)
:param num_epochs: (int)
:param batch_size: (int)
:param learning_rate: model optimizer learning rate (float)
:param train_criterion: Which loss to use during training (torch.nn._Loss)
    :param model_optimizer: Which model optimizer to use during training (torch.optim.Optimizer)
:param data_augmentations: List of data augmentations to apply such as rescaling.
(list[transformations], transforms.Composition[list[transformations]], None)
If none only ToTensor is used
:return:
"""
# Device configuration
device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')
if data_augmentations is None:
data_augmentations = transforms.ToTensor()
elif isinstance(data_augmentations, list):
data_augmentations = transforms.Compose(data_augmentations)
elif not isinstance(data_augmentations, transforms.Compose):
raise NotImplementedError
# Load the dataset
train_data = ImageFolder(os.path.join(data_dir, 'train'), transform=data_augmentations)
val_data = ImageFolder(os.path.join(data_dir, 'val'), transform=data_augmentations)
test_data = ImageFolder(os.path.join(data_dir, 'test'), transform=data_augmentations)
channels, img_height, img_width = train_data[0][0].shape
# image size
input_shape = (channels, img_height, img_width)
# instantiate training criterion
train_criterion = train_criterion().to(device)
score = []
if use_all_data_to_train:
train_loader = DataLoader(dataset=ConcatDataset([train_data, val_data, test_data]),
batch_size=batch_size,
shuffle=True)
logging.warning('Training with all the data (train, val and test).')
else:
train_loader = DataLoader(dataset=train_data,
batch_size=batch_size,
shuffle=True)
val_loader = DataLoader(dataset=val_data,
batch_size=batch_size,
shuffle=False)
model = torch_model(input_shape=input_shape,
num_classes=len(train_data.classes)).to(device)
# instantiate optimizer
optimizer = model_optimizer(model.parameters(), lr=learning_rate)
# Info about the model being trained
# You can find the number of learnable parameters in the model here
logging.info('Model being trained:')
summary(model, input_shape,
device='cuda' if torch.cuda.is_available() else 'cpu')
# Train the model
for epoch in range(num_epochs):
logging.info('#' * 50)
logging.info('Epoch [{}/{}]'.format(epoch + 1, num_epochs))
train_score, train_loss = train_fn(model, optimizer, train_criterion, train_loader, device)
logging.info('Train accuracy: %f', train_score)
if not use_all_data_to_train:
test_score = eval_fn(model, val_loader, device)
logging.info('Validation accuracy: %f', test_score)
score.append(test_score)
if save_model_str:
# Save the model checkpoint can be restored via "model = torch.load(save_model_str)"
model_save_dir = os.path.join(os.getcwd(), save_model_str)
if not os.path.exists(model_save_dir):
os.mkdir(model_save_dir)
save_model_str = os.path.join(model_save_dir, exp_name + '_model_' + str(int(time.time())))
torch.save(model.state_dict(), save_model_str)
if not use_all_data_to_train:
logging.info('Accuracy at each epoch: ' + str(score))
logging.info('Mean of accuracies across all epochs: ' + str(100*np.mean(score))+'%')
logging.info('Accuracy of model at final epoch: ' + str(100*score[-1])+'%')
if __name__ == '__main__':
"""
This is just an example of a training pipeline.
Feel free to add or remove more arguments, change default values or hardcode parameters to use.
"""
loss_dict = {'cross_entropy': torch.nn.CrossEntropyLoss} # Feel free to add more
opti_dict = {'sgd': torch.optim.SGD, 'adam': torch.optim.Adam} # Feel free to add more
cmdline_parser = argparse.ArgumentParser('DL WS20/21 Competition')
cmdline_parser.add_argument('-m', '--model',
default='SampleModel',
help='Class name of model to train',
type=str)
cmdline_parser.add_argument('-e', '--epochs',
default=50,
help='Number of epochs',
type=int)
cmdline_parser.add_argument('-b', '--batch_size',
default=282,
help='Batch size',
type=int)
# cmdline_parser.add_argument('-D', '--data_dir',
# default=os.path.join(os.path.dirname(os.path.abspath(__file__)),
# '..', 'dataset'),
# help='Directory in which the data is stored (can be downloaded)')
cmdline_parser.add_argument('-l', '--learning_rate',
default=2.244958736283895e-05,
help='Optimizer learning rate',
type=float)
cmdline_parser.add_argument('-L', '--training_loss',
default='cross_entropy',
help='Which loss to use during training',
choices=list(loss_dict.keys()),
type=str)
cmdline_parser.add_argument('-o', '--optimizer',
default='adam',
help='Which optimizer to use during training',
choices=list(opti_dict.keys()),
type=str)
cmdline_parser.add_argument('-p', '--model_path',
default='models',
help='Path to store model',
type=str)
cmdline_parser.add_argument('-v', '--verbose',
default='INFO',
choices=['INFO', 'DEBUG'],
help='verbosity')
cmdline_parser.add_argument('-n', '--exp_name',
default='default',
help='Name of this experiment',
type=str)
cmdline_parser.add_argument('-d', '--data-augmentation',
default='resize_and_colour_jitter',
help='Data augmentation to apply to data before passing to the model.'
+ 'Must be available in data_augmentations.py')
cmdline_parser.add_argument('-a', '--use-all-data-to-train',
action='store_true',
help='Uses the train, validation, and test data to train the model if enabled.')
args, unknowns = cmdline_parser.parse_known_args()
log_lvl = logging.INFO if args.verbose == 'INFO' else logging.DEBUG
logging.basicConfig(level=log_lvl)
if unknowns:
logging.warning('Found unknown arguments!')
logging.warning(str(unknowns))
logging.warning('These will be ignored')
main(
data_dir=data_dir,
torch_model=eval(args.model),
num_epochs=args.epochs,
batch_size=args.batch_size,
learning_rate=args.learning_rate,
train_criterion=loss_dict[args.training_loss],
model_optimizer=opti_dict[args.optimizer],
data_augmentations=eval(args.data_augmentation), # Check data_augmentations.py for sample augmentations
save_model_str=args.model_path,
exp_name=args.exp_name,
use_all_data_to_train=args.use_all_data_to_train
)
# + [markdown] id="NdsrlvC05q1Y"
# training.py
# + id="QZeDOCOo5pbW"
from tqdm import tqdm
import time
import torch
import numpy as np
# commented for notebook
# from src.eval.evaluate import AverageMeter, accuracy
def train_fn(model, optimizer, criterion, loader, device, train=True):
"""
Training method
:param model: model to train
:param optimizer: optimization algorithm
:criterion: loss function
:param loader: data loader for either training or testing set
:param device: torch device
:param train: boolean to indicate if training or test set is used
:return: (accuracy, loss) on the data
"""
time_begin = time.time()
score = AverageMeter()
losses = AverageMeter()
model.train()
time_train = 0
t = tqdm(loader)
for images, labels in t:
images = images.to(device)
labels = labels.to(device)
optimizer.zero_grad()
logits = model(images)
loss = criterion(logits, labels)
loss.backward()
optimizer.step()
acc = accuracy(logits, labels)
n = images.size(0)
losses.update(loss.item(), n)
score.update(acc.item(), n)
t.set_description('(=> Training) Loss: {:.4f}'.format(losses.avg))
time_train += time.time() - time_begin
print('training time: ' + str(time_train))
return score.avg, losses.avg
| DLComp.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# original
# * https://gist.github.com/karpathy/d4dee566867f8291f086
import numpy as np
# +
with open('book.txt') as f:
data = f.read()
data = data.lower()
chars = list(set(data))
data_size, vocab_size = len(data), len(chars)
print(f'data has {data_size} characters, {vocab_size} unique')
# -
print(data[:500])
char_to_ix = { ch:i for i,ch in enumerate(chars) }
ix_to_char = { i:ch for i,ch in enumerate(chars) }
class T(object):
def __init__(self, data, seq_len):
self.seq_len = seq_len
self.data = data
# pointer
self.p = 0
def get(self):
if self.p + self.seq_len + 1 > len(self.data):
self.p = 0
X = [char_to_ix[char] for char in self.data[self.p :self.p+self.seq_len ]]
y = [char_to_ix[char] for char in self.data[self.p+1:self.p+self.seq_len+1]]
self.p += self.seq_len
return X, y
# +
class RNN(object):
def __init__(self, vocab_dim, h_dim, seq_len, learning_rate, seed=None):
if seed:
np.random.seed(seed)
# vocabulary dimension
self.vocab_dim = vocab_dim
# hidden nodes dimension
self.h_dim = h_dim
# sequence length
self.seq_len = seq_len
#
self.learning_rate = learning_rate
#
self._build()
def _build(self):
self._Wx = np.random.randn(self.vocab_dim, self.h_dim) * 1e-2
self._Wh = np.random.randn(self.h_dim, self.h_dim) * 1e-2
self._Wy = np.random.randn(self.h_dim, self.vocab_dim) * 1e-2
self._bh = np.zeros((1, self.h_dim))
self._by = np.zeros((1, self.vocab_dim))
self._mWx, self._mWh, self._mWy = np.zeros_like(self._Wx), np.zeros_like(self._Wh), np.zeros_like(self._Wy)
self._mbh, self._mby = np.zeros_like(self._bh), np.zeros_like(self._by)
def _forward(self, X, y, hprev):
"""
X (seq_len, vocab_size)
"""
h = np.dot(X, self._Wx) + np.dot(hprev, self._Wh) + self._bh
h = np.tanh(h)
y_pred = np.dot(h, self._Wy) + self._by
y_pred = self._softmax(y_pred)
loss = -np.log(y_pred[0,y])
return h, y_pred, loss
def _backward(self, X, y, H, hprev, Y):
Y[y] -= 1
self._dWhy += np.dot(H[None].T, Y[None])
self._dby += Y[None]
dh = np.dot(Y[None], self._Wy.T) + self._dhnext
dhraw = self._tanh_derivative(H) * dh
assert dhraw.shape == (1, self.h_dim)
self._dWxh += np.dot( X[None].T, dhraw)
self._dWhh += np.dot(hprev[None].T, dhraw)
self._dbh += dhraw
self._dhnext = np.dot(dhraw, self._Wh.T)
assert self._dhnext.shape == (1, self.h_dim)
def _softmax(self, x):
y = np.exp(x - np.max(x))
return y/np.sum(y)
def _tanh_derivative(self, value):
return 1 - value**2
def _clip_gradients(self):
for dparam in [self._dWxh, self._dWhh, self._dWhy, self._dbh, self._dby]:
np.clip(dparam, -5, 5, out=dparam)
def forward_backward(self, inputs, targets, hprev):
X = np.zeros((self.seq_len, self.vocab_dim))
for t, input in enumerate(inputs):
X[t, input] = 1
H = np.zeros((self.seq_len, self.h_dim))
Y = np.zeros_like(X)
total_loss = 0
for t in range(self.seq_len):
h = hprev if t==0 else H[t-1]
H[t], Y[t], loss = self._forward(X[t], targets[t], h)
total_loss += loss
self._dWxh, self._dWhh, self._dWhy = np.zeros_like(self._Wx), np.zeros_like(self._Wh), np.zeros_like(self._Wy)
self._dbh, self._dby = np.zeros_like(self._bh), np.zeros_like(self._by)
self._dhnext = np.zeros_like(H[0])
for t in range(self.seq_len)[::-1]:
self._backward(X[t], targets[t], H[t], H[t-1], Y[t])
self._clip_gradients()
params = [self._Wx, self._Wh, self._Wy, self._bh, self._by]
mparams = [self._mWx, self._mWh, self._mWy, self._mbh, self._mby]
dparams = [self._dWxh, self._dWhh, self._dWhy, self._dbh, self._dby]
# print(f'Wx: {np.sum(self._Wx):4.4} - Wh: {np.sum(self._Wh):4.4f} - Wy: {np.sum(self._Wy):4.4f}')
# print(f'Wx: {np.sum(self._dWxh):4.4} - Wh: {np.sum(self._dWhh):4.4f} - Wy: {np.sum(self._dWhy):4.4f}')
for param, dparam, mparam in zip(params, dparams, mparams):
mparam += dparam ** 2
param -= self.learning_rate * dparam/np.sqrt(mparam + 1e-8)
# print(f'Wx: {np.sum(self._Wx):4.4} - Wh: {np.sum(self._Wh):4.4f} - Wy: {np.sum(self._Wy):4.4f}')
return total_loss, H[-1]
def sample(self, input, hprev, seq_len):
X = np.zeros((1, self.vocab_dim))
X[0, input] = 1
ixes = []
for t in range(seq_len):
h = np.dot(X, self._Wx) + np.dot(hprev, self._Wh) + self._bh
h = np.tanh(h)
y = np.dot(h, self._Wy) + self._by
y = self._softmax(y)
ix = np.random.choice(range(self.vocab_dim), p=y.ravel())
X *= 0
X[0,ix] = 1
ixes.append(ix)
print('\n\n', ''.join(ix_to_char[ix] for ix in ixes))
# +
seq_len = 25
threshold = len(data)//25
data_supplier = T(data, seq_len)
rnn = RNN(vocab_size, 100, seq_len, 1e-1, seed=42)
# -
n = 0
while n <= 1000000:
inputs, targets = data_supplier.get()
if n % threshold == 0:
hprev = np.zeros((1, 100))
loss, hprev = rnn.forward_backward(inputs, targets, hprev)
print(f'\riter {n:2}, loss: {loss:4.6f}', end='')
if (n+1) % 5000 == 0:
rnn.sample(inputs[0], hprev, 200)
n += 1
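# The sampling step above draws the next character index from a softmax
# distribution over the vocabulary; a minimal standalone illustration
# (the helper name `stable_softmax` is illustrative):
# +
import numpy as np

def stable_softmax(x):
    # subtract the max for numerical stability, as in RNN._softmax
    e = np.exp(x - np.max(x))
    return e / np.sum(e)

probs = stable_softmax(np.array([2.0, 1.0, 0.1]))
next_ix = np.random.choice(len(probs), p=probs)
print(probs.round(3), next_ix)
# -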
| 01 - RNN Numpy.ipynb |
# -*- coding: utf-8 -*-
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .jl
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Julia 0.6.0
# language: julia
# name: julia-0.6
# ---
# Assignment 2
# +
using DataFrames
using Gadfly
set_default_plot_size(30cm, 10cm)
data = readtable("inflation.csv", header = false)
rename!(data, :x1, :year)
rename!(data, :x2, :CPI)
CPI = Array(data[:, 2]);
# +
# original series
inflation = [100*(CPI[i]-CPI[i-1])/CPI[i-1] for i in 2:length(CPI)]
original = inflation[12:end]
# moving average
MA_series = [sum([inflation[i-j] for j in 0:11])/12 for i in 12:length(inflation)];
# log seasonal difference
log_season = [100*(log(CPI[i]) - log(CPI[i-12])) for i in 13:length(CPI)];
# +
X = Array(data[13:end, 1]);
l1 = layer(x = X, y = original, Geom.line, Theme(default_color="green"))
l2 = layer(x = X, y = MA_series, Geom.line, Theme(default_color="deepskyblue"))
l3 = layer(x = X, y = log_season, Geom.line, Theme(default_color="red"))
myplot = plot(l1,l2,l3, Guide.manual_color_key("Legend", ["original", "MA", "log_season"], ["green", "deepskyblue", "red"]),
Guide.xlabel("time"),
Guide.ylabel("inflation rate"),
Coord.cartesian(ymin=-2, ymax=5))
# -
draw(PNG("myplot.png", 3inch, 3inch), myplot)
# +
df = DataFrame(
MA = MA_series,
log= log_season
)
writetable("SA.csv", df)
| hw1.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import pandas as pd
import fastkml
import geopy
import shapely
import matplotlib
import seaborn as sns
import matplotlib.pyplot as plt
sns.set(style='white')
# %matplotlib inline
pd.set_option('display.max_rows', None)
pd.set_option('display.max_columns', None)
pd.set_option('max_colwidth', 1000)
pd.set_option('expand_frame_repr', False) # more options can be specified also
data = pd.read_csv("annot_houses.csv")
data = data.drop(columns=["type"])
data = data.set_index("Unnamed: 0")
data[data["str inf"] == "strada Clujului"]
addresses = data[data["city inf"].isnull()]["str inf"].unique()[1:]
addresses
address_coords = {}
k = fastkml.KML()
k.from_string(open("../Cartiere.kml", "rb").read())
len(list(list(list(k.features())[0].features())[0].features()))
coder = geopy.geocoders.Nominatim(user_agent="rolisz")
for a in addresses:
res = coder.geocode(a +" Oradea")
if res is None:
print(a)
continue
address_coords[a] = shapely.geometry.Point(res.longitude, res.latitude)
addres_neighb = {}
for a in address_coords:
for feat in list(list(list(k.features())[0].features())[0].features()):
if feat.geometry.contains(address_coords[a]):
addres_neighb[a] = feat.name.strip()
break
else:
print("not found", a)
addres_neighb["calea santandrei"] = "<NAME>"
addres_neighb["strada <NAME>"] = "Oncea"
addres_neighb["str <NAME>"] = "Episcopia"
addres_neighb["str.<NAME>"] = "Episcopia"
addres_neighb["str. Leaganului"] = "Olosig"
addres_neighb["strada <NAME>"] = "Episcopia"
addres_neighb["Str. <NAME>"] = "Episcopia"
addres_neighb["str Ciheiului"] = "Nufarul"
addres_neighb["str. Lugojului"] = "Dorobantilor"
addres_neighb["<NAME>"] = "<NAME>"
addres_neighb["strada bartok bela"] = "<NAME>"
addres_neighb["Strada Traian"] = "<NAME>"
addres_neighb["str Sarmisegetuza"] = "C<NAME>"
addres_neighb["str.Clujului"] = "Tokai"
addres_neighb["str. <NAME>"] = "Oncea"
addres_neighb["Str. Clujului"] = "<NAME>"
addres_neighb["strada Veteranilor"] = "Iosia"
addres_neighb["StradaTuberozelor"] = "Orasul Nou"
addres_neighb["strada Clujului"] = "Tokai"
addres_neighb["str Veteranilor"] = "Iosia"
addres_neighb["str. <NAME>"] = "Rogerius"
addres_neighb["str. Episcop <NAME>"] = "Episcopia"
addres_neighb["strada Podgoriei"] = "Podgoria"
addres_neighb["str.<NAME>"] = "Grigorescu"
addres_neighb["strada C<NAME>"] = "Grigorescu"
neighb = data["neighb inf"].unique()[1:]
neighb
neighb_map = {'ultracentral': "Orasul Nou", 'primariei': "Orasul Nou", "horea": "Dorobantilor",
'centrala': "Olosig", 'tineretului': "Velenta", 'muntele găina,': "<NAME>",
'semicentrala':"Olosig", 'garii': "Rogerius", 'eminescu': "Mih<NAME>", 'doja': "<NAME>",
'cantemir': "<NAME>", 'horia': "Dorobantilor", 'episcopia bihor': "Episcopia",
'piata devei': "Iosia", 'decebal': "<NAME>", 'spitalul judetean': "<NAME>",
'dealuri': "<NAME>", 'calea clujului': "Tokai", 'ultra central': "Orasul Nou",
'pieței i. creanga': "Olosig", 'h. ipsen': "Iosia", 'armatei romane': "Dorobantilor",
'balcescu': "C<NAME>", "Centrala": "Olosig", "Tineretului": "Tokai",
'bazilicii romano-catolice': "Olosig", 'era': "Calea Santandrei", "Balcescu": "Calea Aradului",
"Decebal": "<NAME>", "Doja": "<NAME>", "Cantemir": "<NAME>",
"Ioșia": "Iosia", "Muntele Găina": "<NAME>", "Calea Clujului": "Tokai",
"Horea": "Dorobantilor",
'parcului bratianu':"Olosig", 'gh. doja': "Gheorghe Doja", 'garii centrale':"Rogerius", 'clujului': "Tokai",
'bancpost': "Orasul Nou", 'crisan': "Dorobantilor"}
def get_nb(row):
st = row["str inf"]
nb = row["neighb inf"]
neighborhood = ""
if st in addres_neighb:
neighborhood = addres_neighb[st]
if type(nb) == str:
if nb in neighb_map:
if neighborhood == neighb_map[nb]:
return neighborhood
else:
return neighb_map[nb]
if nb.title() in neighb_map:
return neighb_map[nb.title()]
if nb.title() == neighborhood:
return neighborhood
else:
return nb.title()
return neighborhood
data["Neighborhood"] = data.apply(get_nb, axis=1)
def convert_price(prc):
if prc[-1] == "€":
return float(prc[:-1])
elif prc[-3:] == "lei":
return float(prc[:-3])/4.5
elif prc == "Schimb":
return -1
return prc
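# Standalone sketch of the conversion logic above (hypothetical helper name,
# assuming the same fixed 4.5 lei/EUR rate):
# +
def to_eur(prc):
    if prc.endswith("€"):
        return float(prc[:-1])
    if prc.endswith("lei"):
        return float(prc[:-3]) / 4.5
    return -1.0

print(to_eur("100€"), to_eur("45lei"))
# -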
data["price"] = data["price"].apply(convert_price)
"DECEBAL".title()
data[(data["city inf"].isnull()) & (data["Neighborhood"] != "")].shape  # compare the column, not the literal string
nb_data = data[(data["city inf"].isnull()) & (data["Neighborhood"] != "") & (data["price"] > 0)].groupby('text').first()
means = nb_data.groupby("Neighborhood")["price"].mean()
means.plot(kind='bar')
plt.figure(figsize=(25,16))
plot = sns.barplot(data=nb_data, x="Neighborhood", y="price")
for item in plot.get_yticklabels():
item.set_fontsize(18)
for item in plot.get_xticklabels():
item.set_rotation(90)
item.set_fontsize(18)
# +
plt.figure(figsize=(25,16))
plot = sns.countplot(data=nb_data, x="Neighborhood")
for item in plot.get_yticklabels():
item.set_fontsize(18)
for item in plot.get_xticklabels():
item.set_rotation(90)
item.set_fontsize(18)
# -
| notebooks/EDA.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + id="R0iis46JultV" colab_type="code" colab={}
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestRegressor
# + id="Ce0L5sPtuaMy" colab_type="code" outputId="9a6fbb82-e234-4649-e74f-f49be3fe4584" executionInfo={"status": "ok", "timestamp": 1561131507176, "user_tz": 300, "elapsed": 1649, "user": {"displayName": "<NAME>.", "photoUrl": "https://lh3.googleusercontent.com/-p2FrFpD_hQk/AAAAAAAAAAI/AAAAAAAAAkU/Qol50T4G-Pc/s64/photo.jpg", "userId": "11714338394019695389"}} colab={"base_uri": "https://localhost:8080/", "height": 34}
# Load the gold-price-change-by-earthquake dataset
df_quake_gold = pd.read_csv("https://raw.githubusercontent.com/labs13-quake-viewer/ds-data/master/" +
"Gold%20Price%20Change%20by%20Earthquake(5.5+).csv", index_col=0)
df_quake_gold.shape
# + id="z4dfryggb7hg" colab_type="code" outputId="478a875a-9bd7-4d4b-c315-c3a35d713499" executionInfo={"status": "ok", "timestamp": 1561131508502, "user_tz": 300, "elapsed": 252, "user": {"displayName": "<NAME>.", "photoUrl": "https://lh3.googleusercontent.com/-p2FrFpD_hQk/AAAAAAAAAAI/AAAAAAAAAkU/Qol50T4G-Pc/s64/photo.jpg", "userId": "11714338394019695389"}} colab={"base_uri": "https://localhost:8080/", "height": 482}
df_quake_gold.head()
# + id="e2ovujy9wfcw" colab_type="code" colab={}
dates = []
for i in df_quake_gold.Date:
dates.append(int(''.join(c for c in i if c.isdigit())))
# + id="a5SH-G0E1ZYb" colab_type="code" colab={}
df_quake_gold["magg"] = (df_quake_gold["Mag"] * 10).astype(int)
# + id="pcJ70PWkyKs_" colab_type="code" colab={}
df_quake_gold["dates"] = dates
# + id="FP0Qss2_6bKU" colab_type="code" outputId="33268faf-f4e6-49fe-d3d3-aba0be06c452" executionInfo={"status": "ok", "timestamp": 1561131515746, "user_tz": 300, "elapsed": 273, "user": {"displayName": "<NAME>.", "photoUrl": "https://lh3.googleusercontent.com/-p2FrFpD_hQk/AAAAAAAAAAI/AAAAAAAAAkU/Qol50T4G-Pc/s64/photo.jpg", "userId": "11714338394019695389"}} colab={"base_uri": "https://localhost:8080/", "height": 425}
df_quake_gold.info()
# + id="Kd9mYnMq-xOH" colab_type="code" outputId="cd68fcbb-6a3a-414f-9393-17b34c640414" executionInfo={"status": "ok", "timestamp": 1561131519378, "user_tz": 300, "elapsed": 283, "user": {"displayName": "<NAME>.", "photoUrl": "https://lh3.googleusercontent.com/-p2FrFpD_hQk/AAAAAAAAAAI/AAAAAAAAAkU/Qol50T4G-Pc/s64/photo.jpg", "userId": "11714338394019695389"}} colab={"base_uri": "https://localhost:8080/", "height": 119}
y = df_quake_gold['Appr_Day_30']
X = df_quake_gold[['dates', 'Mag', 'Lat', 'Long', 'Depth']]
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=.25, random_state=42)
print("Original shape:", X.shape, "\n")
print("X_train shape:", X_train.shape)
print("X_test shape:", X_test.shape)
print("y_train shape:", y_train.shape)
print("y_test shape:", y_test.shape)
# + id="QubjVvpLzp-D" colab_type="code" outputId="d10b89e8-f28a-4f04-f7c6-d95eb69988f5" executionInfo={"status": "ok", "timestamp": 1561131522244, "user_tz": 300, "elapsed": 261, "user": {"displayName": "<NAME>.", "photoUrl": "https://lh3.googleusercontent.com/-p2FrFpD_hQk/AAAAAAAAAAI/AAAAAAAAAkU/Qol50T4G-Pc/s64/photo.jpg", "userId": "11714338394019695389"}} colab={"base_uri": "https://localhost:8080/", "height": 80}
X_train.sample()
# + id="VL_3ZFVxY34r" colab_type="code" colab={}
# Instantiate model with 100 decision trees
rf = RandomForestRegressor(n_estimators = 100, random_state = 42)
# + id="jrlrFsm1Y_cm" colab_type="code" outputId="8b660ccd-a14b-44d3-90a3-887b766d6e7f" executionInfo={"status": "ok", "timestamp": 1561131537529, "user_tz": 300, "elapsed": 10205, "user": {"displayName": "<NAME>.", "photoUrl": "https://lh3.googleusercontent.com/-p2FrFpD_hQk/AAAAAAAAAAI/AAAAAAAAAkU/Qol50T4G-Pc/s64/photo.jpg", "userId": "11714338394019695389"}} colab={"base_uri": "https://localhost:8080/", "height": 136}
# Train model on training data
rf.fit(X_train, y_train)
# + id="5IYGWS9LZQcs" colab_type="code" colab={}
# Use forest's predict method on test data
predictions = rf.predict(X_test)
# + id="D55oNxX4Zvqw" colab_type="code" colab={}
# Calculate absolute errors
errors = abs(predictions - y_test)
# + id="M2A_rs8PaXpq" colab_type="code" outputId="e80bf840-1177-4b7d-ba69-937c6d4a2f6c" executionInfo={"status": "ok", "timestamp": 1561131545482, "user_tz": 300, "elapsed": 244, "user": {"displayName": "<NAME>.", "photoUrl": "https://lh3.googleusercontent.com/-p2FrFpD_hQk/AAAAAAAAAAI/AAAAAAAAAkU/Qol50T4G-Pc/s64/photo.jpg", "userId": "11714338394019695389"}} colab={"base_uri": "https://localhost:8080/", "height": 34}
# Print out mean absolute error
print('Mean Absolute Error:', round(np.mean(errors), 2))
# + id="m36Q-MYXavAz" colab_type="code" outputId="f444b6a1-5c5a-47d9-afa9-722042c96de4" executionInfo={"status": "ok", "timestamp": 1561131547892, "user_tz": 300, "elapsed": 721, "user": {"displayName": "<NAME>.", "photoUrl": "https://lh3.googleusercontent.com/-p2FrFpD_hQk/AAAAAAAAAAI/AAAAAAAAAkU/Qol50T4G-Pc/s64/photo.jpg", "userId": "11714338394019695389"}} colab={"base_uri": "https://localhost:8080/", "height": 68}
# Calculate and display the relative error as a percentage of total appreciation
relative_error = 100 * errors.sum() / y_test.abs().sum()
print("For Gold, Incident Mag >= 5.5 ({} incidents)".format(df_quake_gold.shape[0]))
print("Random Forest Regressor Model score:", rf.score(X_train, y_train))
print('Relative Error:', round(relative_error, 2), '%.')
| financial_models/Gold_Random_Forest_Regressor_5.5+.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Introduction
# In this tutorial, we will go through a typical ML workflow with Foreshadow using a subset of the [adult data set](https://archive.ics.uci.edu/ml/datasets/Adult) from the UCI machine learning repository.
#
#
# # Getting Started
# To get started with foreshadow, install the package using `pip install foreshadow`. This will also install the dependencies. Now create a simple python script that uses all the defaults with Foreshadow. Note that Foreshadow requires `Python >=3.6, <4.0`.
#
# First import foreshadow related classes. Also import sklearn, pandas and numpy packages.
# +
from foreshadow import Foreshadow
from foreshadow.intents import IntentType
from foreshadow.utils import ProblemType
from foreshadow.logging import logging
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.linear_model import LinearRegression
# -
# Configure the random seed and logging level.
np.random.seed(42)
logging.set_level('warning')
# # Load the dataset
data = pd.read_csv('adult.csv').iloc[:2000]
data.head()
data.describe()
# #### Split data to Train and Test
X_df = data.drop(columns="class")
y_df = data[["class"]]
X_train, X_test, y_train, y_test = train_test_split(X_df, y_df, test_size=0.2)
# # Train a Simple LogisticRegression Model using Foreshadow and making predictions
#
# **The following example is for classification. For regression problems, use `problem_type=ProblemType.REGRESSION`.**
shadow = Foreshadow(problem_type=ProblemType.CLASSIFICATION,
estimator=LogisticRegression())
_ = shadow.fit(X_train, y_train)
# ## Making predictions
predictions = shadow.predict(X_test)
predictions.head()
# ### Use the trained estimator to compute the evaluation score.
# Note that the scoring method is defined by the selected estimator.
shadow.score(X_test, y_test)
# ## You can inspect and change Foreshadow's decision
#
# Foreshadow uses a machine learning model to power the auto intent resolving step. As a user, you may not agree with the decision made by Foreshadow. The following APIs allow you to inspect the decisions and change them if you have a different opinion.
# ### To check the intent of a particular column
shadow.get_intent('education-num')
# ### Override the decision of intent resolving
# If you want to explore a different intent type, simply call the `override_intent` API.
shadow.override_intent('education-num', IntentType.CATEGORICAL)
_ = shadow.fit(X_train, y_train)
shadow.score(X_test, y_test)
# To show that the intent has been updated:
shadow.get_intent('education-num')
# ### You can also provide override to fix the intent/column type before fitting the data
#
# This tells Foreshadow to not run auto intent resolving on some columns but use your decisions instead.
shadow = Foreshadow(problem_type=ProblemType.CLASSIFICATION, estimator=LogisticRegression())
shadow.override_intent('education-num', IntentType.CATEGORICAL)
_ = shadow.fit(X_train, y_train)
print(shadow.get_intent('education-num'))
# # Now Let's Search the Best Model and Hyper-Parameter
# At this point, you have a basic pipeline fitted by Foreshadow using a logistic regression estimator. You can update the estimator to something more powerful and retrain the model. Another way is to use the AutoEstimator option in Foreshadow.
#
# Foreshadow leverages the [TPOT AutoML](https://epistasislab.github.io/tpot/using/) package to search the best model and hyper-parameter for you. **Note that AutoML algorithms can take a long time to finish their search, so here we only configure Foreshadow to search for 2 minutes. Please refer to the TPOT manual for more details.**
from foreshadow.estimators import AutoEstimator
estimator = AutoEstimator(
problem_type=ProblemType.CLASSIFICATION,
auto="tpot",
auto_estimator_kwargs={"max_time_mins": 2}, # change here
)
shadow = Foreshadow(problem_type=ProblemType.CLASSIFICATION, estimator=estimator)
shadow.override_intent('education-num', IntentType.CATEGORICAL)
_ = shadow.fit(X_df, y_df)
# ## Making predictions and evaluations
predictions = shadow.predict(X_test)
shadow.score(X_test, y_test)
# # Model persistence
# ## Save the fitted pipeline
# After finding the best pipeline, you can export the fitted pipeline as a pickle file for your prediction task.
pickled_fitted_pipeline_location = "fitted_pipeline.p"
shadow.pickle_fitted_pipeline(pickled_fitted_pipeline_location)
# ## Load back the pipeline for prediction
# +
import pickle
with open(pickled_fitted_pipeline_location, "rb") as fopen:
shadow_reload = pickle.load(fopen)
# -
# ## Reuse the pipeline to do predictions and evaluations
predictions = shadow_reload.predict(X_test)
predictions.head()
shadow_reload.score(X_test, y_test)
# # [Experimental] Register customized data cleaners
# Foreshadow provides several built-in data cleaning transformations. These transformations work on a per column basis.
# - datetime cleaner (convert a datetime into YYYY, mm, and dd respectively)
# - financial number cleaner (reformat financial numbers by removing signs like "$" and ",")
# - drop cleaner (drop a column if a column has over 90% NaN values)
#
# It is also possible to provide your own data cleaning transformations. The following (dummy) example shows how to change a column of strings to lowercase.
# ## Define your own cleaner and transformation function
# There are two components when defining your own data cleaner (We may change it to only 1 component in the future).
#
# - One is the transformation you want to apply to each row in a column.
#
# - The second is a subclass of `CustomizableBaseCleaner`. You will need to override the `metric_score` method, which returns a confidence score between 0 and 1 representing how confident we are that this particular cleaner should be applied to the column being processed.
#
#
# +
from foreshadow.concrete.internals.cleaners.customizable_base import (
CustomizableBaseCleaner,
)
def lowercase_row(row):
"""Lowercase a row.
Args:
row: string of text
Returns:
transformed row.
"""
return row if row is None else str(row).lower()
class LowerCaseCleaner(CustomizableBaseCleaner):
def __init__(self):
super().__init__(transformation=lowercase_row)
def metric_score(self, X: pd.DataFrame) -> float:
"""Calculate the matching metric score of the cleaner on this col.
In this method, you specify the condition on when to apply the
cleaner and calculate a confidence score between 0 and 1 where 1
means 100% certainty to apply the transformation.
Args:
X: a column as a dataframe.
Returns:
the confidence score.
"""
column_name = list(X.columns)[0]
if column_name == "workclass":
return 1
else:
return 0
# -
# ## Register the cleaner in foreshadow object then train the model
# +
# Note that you need to reinitialize the Foreshadow object to pick up the customized data cleaner
shadow = Foreshadow(problem_type=ProblemType.CLASSIFICATION,
estimator=LogisticRegression())
shadow.register_customized_data_cleaner(data_cleaners=[LowerCaseCleaner])
# -
# ### List the unique values of the workclass column
workclass_values = list(X_train["workclass"].unique())
print(workclass_values)
# ### List the unique values of the workclass after the transformation
# +
X_train_cleaned = shadow.X_preparer.steps[0][1].fit_transform(X_train)
workclass_values_transformed = list(X_train_cleaned["workclass"].unique())
print(workclass_values_transformed)
# -
# ## Train, predict and evaluate as usual
# +
# Note that right now you need to reinitialize the Foreshadow object before retraining.
shadow = Foreshadow(problem_type=ProblemType.CLASSIFICATION,
estimator=LogisticRegression())
shadow.register_customized_data_cleaner(data_cleaners=[LowerCaseCleaner])
shadow.fit(X_train, y_train)
predictions = shadow.predict(X_test)
shadow.score(X_test, y_test)
# -
| examples/user_tutorial_demo.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: book_new
# language: python
# name: book_new
# ---
# ## Sentiment prediction from movie reviews (incomplete)
# The words have been replaced by integers that indicate each word's overall frequency rank in the dataset. The sentences in each review are therefore composed of a sequence of integers.
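# A toy illustration of this encoding (hypothetical five-word vocabulary; the real frequency-rank mapping comes from the IMDB dataset itself):

```python
# Hypothetical frequency-ranked vocabulary: rank 1 = most frequent word.
word_index = {"the": 1, "movie": 2, "was": 3, "great": 4, "terrible": 5}

def encode(review):
    """Replace each word with its frequency rank."""
    return [word_index[w] for w in review.lower().split()]

assert encode("The movie was great") == [1, 2, 3, 4]
```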
import numpy
from keras.datasets import imdb
from matplotlib import pyplot
# +
# load the dataset
(X_train, y_train), (X_test, y_test) = imdb.load_data()
# -
X = numpy.concatenate((X_train, X_test), axis=0)
y = numpy.concatenate((y_train, y_test), axis=0)
# summarize size
print("Training data: ")
print(X.shape)
print(y.shape)
# The classification is binary: good (1) and bad (0)
# Summarize number of classes
print("Classes: ")
print(numpy.unique(y))
# We can get an idea of the total number of unique words in the dataset.
#
# Interestingly, we can see that there are just under 100,000 unique words in the whole dataset.
# Summarize number of words
print("Number of words: ")
print(len(numpy.unique(numpy.hstack(X))))
type(X)
# Summarize review length
print("Review length: ")
result = map(len, X)
result = numpy.fromiter(result, dtype=int)  # numpy.int was removed in NumPy 1.24; use the builtin int
print("Mean %.2f words (%f)" % (numpy.mean(result), numpy.std(result)))
# plot review length as a boxplot and histogram
pyplot.subplot(121)
pyplot.boxplot(result)
pyplot.subplot(122)
pyplot.hist(result)
pyplot.show()
| DL_PY/cap22.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # 3.1 signac-flow minimal example
#
# ## About
#
# This notebook contains a minimal example for running a signac-flow project from scratch.
# The example demonstrates how to compare an ideal gas with a Lennard-Jones fluid by calculating a p-V phase diagram.
#
# ## Author
#
# <NAME>
#
# ## Before you start
#
# Make sure you installed signac and signac-flow, e.g., with:
#
# ```
# conda install -c conda-forge signac
# conda install -c glotzer signac-flow
# ```
# +
import signac
import flow
import numpy as np
# Enter the signac project directory
project = signac.init_project('FlowTutorialProject', 'projects/tutorial-signac-flow')
# -
# We want to generate a pressure-volume (p-V) phase diagram for an ideal gas.
#
# We define a function to calculate the result for a given state point:
def V_idg(N, p, kT):
return N * kT / p
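# A quick sanity check of the ideal-gas relation (pure arithmetic, no signac needed; the function is repeated here so the check is self-contained):

```python
def V_idg(N, p, kT):
    # Ideal gas law: pV = N kT, so V = N kT / p
    return N * kT / p

# At N=1728, kT=1.0, p=1.0 the volume equals N itself;
# doubling the pressure halves the volume.
assert V_idg(N=1728, p=1.0, kT=1.0) == 1728.0
assert V_idg(N=1728, p=2.0, kT=1.0) == 864.0
```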
# We want to use **signac** to manage our data, therefore we define an *operation* which has only the *signac job* as argument:
def compute_volume(job):
job.document['V'] = V_idg(** job.statepoint())
# For this demonstration we will specialize a `flow.FlowProject` to manage our simple *workflow*.
#
# The workflow is controlled by two core functions: `classify()` and `next_operation()`:
# - The `classify()` function allows us to *label* our jobs and get a good overview of the project *status*. This is especially important once the data space becomes larger and more complex and operations more expensive.
# - The `next_operation()` functions helps to automate the workflow by identifying the next required operation for each job.
#
# In this case there is only **one** operation:
class MyProject(flow.FlowProject):
def classify(self, job):
yield 'init'
if 'V' in job.document:
yield 'estimated'
def next_operation(self, job):
labels = set(self.classify(job))
if 'V' not in job.document:
return 'compute_volume'
# We need to use the `get_project()` *class method* to get a project handle for this special project class.
project = MyProject.get_project(root='projects/tutorial-signac-flow')
# Now it's time to actually generate some data! Let's initialize the data space!
#
for p in np.linspace(0.5, 5.0, 10):
sp = dict(N=1728, kT=1.0, p=p)
project.open_job(sp).init()
# The `print_status()` function allows us to get a quick overview of our project's *status*:
project.print_status(detailed=True, parameters=['p'])
# The next cell will attempt to execute all operations by cycling through jobs and operations until no *next operations* are defined anymore.
#
# We limit the max. number of cycles to prevent accidental infinite loops; the exact number of cycles is arbitrary.
for i in range(3):
for job in project:
for j in range(5):
next_op = project.next_operation(job)
if next_op is None:
break
print('execute', job, next_op)
globals()[next_op](job)
assert next_op != project.next_operation(job)
else:
raise RuntimeError("Reached max. # cycle limit!")
# Let's double check the project status.
project.print_status()
# After running all operations we can make a brief examination of the collected data.
for job in project:
print(job.statepoint()['p'], job.document.get('V'))
# For a better presentation of the results we need to aggregate all results and sort them by pressure.
# +
from matplotlib import pyplot as plt
# %matplotlib inline
V = dict()
for job in project:
V[job.statepoint()['p']] = job.document['V']
p = sorted(V.keys())
V = [V[p_] for p_ in p]
print(V)
plt.plot(p, V, label='idG')
plt.xlabel(r'pressure [$\epsilon / \sigma^3$]')
plt.ylabel(r'volume [$\sigma^3$]')
plt.legend()
# -
# As a final step, we can generate an index of our project data.
# You can store this index in a variable or within a database, e.g., for search operations.
for doc in project.index():
print(doc)
# Uncomment and execute the following line to remove all data and start over.
# +
#% rm -r projects/tutorial-signac-flow/workspace
| notebooks/signac-flow_Ideal_Gas_Example.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Assignment goal:
#
# Use CIFAR-100 and observe the effect of a larger dataset
#
#
# # Assignment focus:
#
# Understand the differences between the CIFAR-100 and CIFAR-10 datasets
#
import numpy
from keras.datasets import cifar100
import numpy as np
np.random.seed(100)
# # Data preparation
(x_img_train,y_label_train), \
(x_img_test, y_label_test)=cifar100.load_data()
print('train:',len(x_img_train))
print('test :',len(x_img_test))
# Check the array dimensions
x_img_train.shape
# Check the array dimensions
x_img_test.shape
# Check the array dimensions
y_label_test.shape
# +
# Build a dictionary mapping the dataset's class indices to labels
label_dict={0:"0",1:"1",2:"2",3:"3",4:"4",}
# +
# Import the plotting module
import matplotlib.pyplot as plt
# Declare a function that displays images with their labels
def plot_images_labels_prediction(images,labels,prediction,
idx,num=10):
fig = plt.gcf()
fig.set_size_inches(12, 14)
if num>25: num=25
for i in range(0, num):
ax=plt.subplot(5,5, 1+i)
ax.imshow(images[idx],cmap='binary')
        try:
            # use idx (not the loop counter i) so the title matches the displayed image
            title = str(idx) + ',' + label_dict[labels[idx][0]]
        except (KeyError, IndexError):
            continue
        if len(prediction) > 0:
            title += '=>' + label_dict[prediction[idx]]
ax.set_title(title,fontsize=10)
ax.set_xticks([]);ax.set_yticks([])
idx+=1
plt.show()
# +
# Display the training images with their labels
plot_images_labels_prediction(x_img_train,y_label_train,[],0)
# -
print('x_img_test:',x_img_test.shape)
print('y_label_test :',y_label_test.shape)
# # Image normalize
x_img_train[0][0][0]
x_img_train_normalize = x_img_train.astype('float32') / 255.0
x_img_test_normalize = x_img_test.astype('float32') / 255.0
x_img_train_normalize[0][0][0]
# # Convert labels to one-hot encoding
y_label_train.shape
y_label_train[:5]
from keras.utils import np_utils
y_label_train_OneHot = np_utils.to_categorical(y_label_train)
y_label_test_OneHot = np_utils.to_categorical(y_label_test)
y_label_train_OneHot.shape
y_label_train_OneHot[:5]
| homeworks/D067/Day67-Keras_Dataset_HW.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Django Shell-Plus
# language: python
# name: django_extensions
# ---
# +
# Django Session
from django.conf import settings
from django.db import models
from products.models import Product
User = settings.AUTH_USER_MODEL
class Cart(models.Model):
    user = models.ForeignKey(User, null=True, blank=True, on_delete=models.SET_NULL)  # on_delete is required in Django 2+
products = models.ManyToManyField(Product, blank=True)
total = models.DecimalField(default=0.00, max_digits=100, decimal_places=2)
updated = models.DateTimeField(auto_now=True)
timestamp = models.DateTimeField(auto_now_add=True)
def __str__(self):
return str(self.id)
# +
# Creating Cart in Session
from django.shortcuts import render
from .models import Cart
def cart_home(request):
cart_id = request.session.get("cart_id", None)
if cart_id is None:
print('create new cart')
request.session['cart_id'] = 12
else:
print('Cart ID exists')
return render(request, "carts/home.html", {})
# +
# create Cart Id using Cart model
from django.shortcuts import render
from .models import Cart
def cart_home(request):
del request.session['cart_id']
cart_id = request.session.get("cart_id", None)
if cart_id is None:
cart_obj = Cart.objects.create(user=None)
request.session['cart_id'] = cart_obj.id
else:
print('Cart ID exists')
print(cart_id)
cart_obj = Cart.objects.get(id=cart_id)
return render(request, "carts/home.html", {})
| django_ecommerce_cart_modelling.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: db_covid19
# language: python
# name: db_covid19
# ---
# +
import pandas as pd
import seaborn as sns
tips = sns.load_dataset('tips')
# +
import matplotlib.pyplot as plt
tips.tip.plot(kind='hist')
plt.show()
# -
cts = tips.sex.value_counts()
cts.plot(kind='bar')
plt.show()
# Scatter plot between the tip and total_bill
tips.plot(kind='scatter', x='total_bill', y='tip')
plt.show()
# Boxplot of the tip column
tips.boxplot(column='tip')
plt.show()
# Boxplot of the tip column by sex
tips.boxplot(column='tip', by='sex')
plt.show()
# seaborn
sns.countplot(x='sex', data=tips)
plt.show()
sns.distplot(tips.total_bill)
plt.show()
sns.boxplot(x="sex", y="tip", data=tips)
plt.show()
# Scatter plot of total_bill and tip faceted by smoker and colored by sex
sns.lmplot(x='total_bill', y='tip', data=tips, hue='sex', col='smoker')
plt.show()
# +
# FacetGrid of time and smoker colored by sex
facet = sns.FacetGrid(tips, col='time', row='smoker', hue='sex')
# Map the scatter plot of total_bill and tip to the FacetGrid
facet.map(plt.scatter, 'total_bill', 'tip')
plt.show()
# -
# matplotlib parts of a figure
#
# https://matplotlib.org/3.1.0/gallery/showcase/anatomy.html
#
# https://matplotlib.org/1.5.1/faq/usage_faq.html
# +
# Fig with 1 axes
fig, ax = plt.subplots(1, 1)
# scatter plot axes
ax.scatter(tips.tip, tips.total_bill)
plt.show()
# -
# Create a figure with scatter plot and histogram
fig, (ax1, ax2) = plt.subplots(1, 2)
ax1.scatter(tips.tip, tips.total_bill)
ax2.hist(tips.total_bill)
plt.show()
# +
# seaborn plot
dis = sns.distplot(tips.tip)
# returns an axes
print(type(dis))
# -
# Figure with 2 axes of displot and regplot
fig, (ax1, ax2) = plt.subplots(1, 2)
sns.distplot(tips.tip, ax=ax1)
sns.regplot(x='total_bill', y='tip', data=tips, ax=ax2)
plt.show()
| class/12-plots.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# Given an array of integers where 1 ≤ a[i] ≤ n (n = size of array), some elements appear twice and others appear once.
#
# Find all the elements of [1, n] inclusive that do not appear in this array.
#
# Could you do it **without extra space and in O(n) runtime?** You may assume the returned list does not count as extra space.
#
# Example:
#
# Input:[4,3,2,7,8,2,3,1]
#
# Output:[5,6]
# ### Thoughts
# After the code below:
#
# for n in nums:
# nums[abs(n) - 1] = -abs(nums[abs(n) - 1])
#
# Input: [4,3,2,7,8,2,3,1] will become [-4,-3,-2,-7,**8, 2,** -3, -1]. There are two numbers shown as positive number. Their [locations + 1] in the list represent the missing numbers.
def findDisappearedNumbers(self, nums):
"""
:type nums: List[int]
:rtype: List[int]
"""
for n in nums:
nums[abs(n) - 1] = -abs(nums[abs(n) - 1])
res = []
for i in range(len(nums)):
if nums[i] > 0:
res.append(i+1)
return res
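# The function above is written LeetCode-style with a `self` parameter; redefined here for a self-contained check and called standalone (passing `None` for `self`), it reproduces the example:

```python
def findDisappearedNumbers(self, nums):
    # Negate the value at index |n|-1 to mark n as seen; indices left
    # positive correspond to the missing numbers.
    for n in nums:
        nums[abs(n) - 1] = -abs(nums[abs(n) - 1])
    return [i + 1 for i in range(len(nums)) if nums[i] > 0]

assert findDisappearedNumbers(None, [4, 3, 2, 7, 8, 2, 3, 1]) == [5, 6]
```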
| 448. Find All Numbers Disappeared in an Array.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# Fill in your name using the format below and student ID number
your_name = "<NAME>"
student_id = "1518798"
# # Assignment 2
# The [Speed Dating dataset](https://www.openml.org/d/40536) collects feedback gathered from participants in experimental speed dating events. Every participant rated themselves and their dates according to different attributes (e.g. attractiveness, sincerity, intelligence, fun, ambition, shared interests,...), and whether or not they were interested in a second date. Our goal is to build a machine learning model able to predict whether there will be a match (or not) between two different people. Will you be able to trust your final model?
# imports
# %matplotlib inline
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import openml
# ### Additional packages:
# * TargetEncoder
# - Run `pip install category_encoders` or `conda install -c conda-forge category_encoders`
# * Seaborn (plotting)
# - Run `pip install seaborn` or `conda install seaborn`
# Pre-flight checklist. Do not change this code.
# Make sure that you have installed recent versions of key packages.
# You could lose points if these checks do not pass.
from packaging import version
import sklearn
import category_encoders
import seaborn
sklearn_version = sklearn.__version__
catencoder_version = category_encoders.__version__
if version.parse(sklearn_version) < version.parse("0.22.0"):
print("scikit-learn is outdated. Please update now!")
if version.parse(catencoder_version) < version.parse("2.0.0"):
print("category_encoders is outdated. Please update now!")
else:
print("OK. You may continue :)")
# Download Speed Dating data. Takes a while the first time. Do not change this code!
# Note that X is a pandas dataframe
dates = openml.datasets.get_dataset(40536)
X, y, _, feat_names = dates.get_data(target=dates.default_target_attribute)
# +
# Cleanup. Do not change this code!
# Remove irrelevant or preprocessed columns
cols = [c for c in X.columns if ((c.lower()[:2] != 'd_' or c.lower() == 'd_age') and c.lower() not in ['wave','has_null'])]
X = X[cols]
classes = ['No match','Match']
# Fix feature name typos
X = X.rename(columns={'ambtition_important': 'ambition_important',
'sinsere_o': 'sincere_o'})
# Harmonize the field names somewhat
X['field'] = X['field'].str.lower()
X = X.astype({'field': 'category'})
# Drop columns with more than 10% missing values
missing_counts = X.isnull().sum() * 100 / len(X)
d = {k:v for (k,v) in missing_counts.items() if v>10}
X.drop(d.keys(), axis=1, inplace=True)
# Solves an implementation issue with TargetEncoder
y=y.astype(int)
# -
# THIS WILL BE HELPFUL
# The list of the names of all categorical features
categorical = X.select_dtypes(include=["category"]).columns.tolist()
# The list of the names of all numerical features
numerical = X.select_dtypes(exclude=["category"]).columns.tolist()
# ### Exploring the data
# Uncomment these lines to learn more about the data. Comment or remove to run the notebook faster.
# +
# Peek at the remaining data
# X
# +
# Check the column data types and missing data
#X.info()
# +
# Some categorical columns have a large number of possible values
# Note: It looks like some manual cleaning should be done, but let's move on
#X['field'].value_counts().plot(kind='barh', figsize=(5,40));
# +
# Distributions of numeric data
#X.hist(layout=(20,4), figsize=(20,50));
# -
# What do people find important? Is this related to the outcome (match / no match)?
import seaborn as sns
subset = ['attractive_important','ambition_important','attractive_partner','ambition_partner']
X_sub = X[subset].copy()  # copy to avoid a SettingWithCopyWarning on the next line
X_sub['match'] = [classes[int(x)] for x in y]
sns.set(style="ticks")
sns.pairplot(X_sub, hue="match");
# ## Part 1: Preprocessing
# ### Question 1.1 (5 points)
# Implement a function `simple_pipeline` that returns an sklearn pipeline that preprocesses the data in a minimal way before running a classifier:
# - Categorical features:
# - Impute missing values by replacing them with the most frequent value for that feature
# - Perform one-hot encoding. Use `sparse=False` so that a dense array is returned instead of a sparse matrix. Use `handle_unknown='ignore'` to ignore categorical values that were not seen during training.
# - Numeric features:
# - Impute missing values by replacing them with the mean value for that feature
# +
# Implement
from sklearn.tree import DecisionTreeClassifier
from sklearn.pipeline import Pipeline
from sklearn.impute import SimpleImputer
from sklearn.compose import ColumnTransformer, make_column_transformer
from sklearn.preprocessing import OneHotEncoder
from sklearn.decomposition import PCA
from sklearn.svm import SVC
def simple_pipeline(categorical, clf):
""" Returns a minimal pipeline that imputes missing values and does one-hot-encoding for categorical features
Keyword arguments:
categorical -- A list of categorical column names. Example: ['gender', 'country'].
clf -- any scikit-learn classifier
Returns: a scikit-learn pipeline which preprocesses the data and then runs the classifier
"""
num_pipe = Pipeline(steps=[('imputer', SimpleImputer(strategy='mean'))])
cat_pipe = Pipeline(steps=[
('imputer', SimpleImputer(strategy='most_frequent')),
('onehot', OneHotEncoder(sparse=False, handle_unknown='ignore'))
])
preprocessor = make_column_transformer((cat_pipe, categorical),remainder = num_pipe)
return Pipeline([('preprocessor', preprocessor),('classifier', clf)])
# -
# #### Sanity check
# To be correct, this pipeline should be able to fit any classifier without error. Uncomment and run this code to do a sanity check.
# +
#s = simple_pipeline(categorical, DecisionTreeClassifier()).fit(X,y)
#print(s.steps[1][1].feature_importances_.shape)
#print(s.steps[1][1].max_features_)
# -
# ### Question 1.2 (1 point)
# How many features are being constructed by this pipeline (i.e. on how many features is the classifier trained)?
# Fill in the correct answer, should be an integer. Don't change the name of the variable
q_1_2 = 287
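# A toy illustration of how one-hot encoding inflates the feature count (hypothetical data, using `pd.get_dummies` as a stand-in for `OneHotEncoder`): each categorical column expands into one column per distinct value, and numeric columns pass through unchanged.

```python
import pandas as pd

toy = pd.DataFrame({"color": ["red", "blue", "red"], "size": ["S", "M", "L"]})
onehot = pd.get_dummies(toy)
# 2 distinct colors + 3 distinct sizes -> 5 one-hot columns
assert onehot.shape == (3, 5)
```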
# ### Question 1.3 (3 points)
# Implement a function `flexible_pipeline` that has two additional options:
# - Allow to add a feature scaling method for numeric features. The default is standard scaling. 'None' means no scaling
# - Allow the one-hot encoder to be replaced with another encoder. The default is one-hot encoding.
# +
# Implement
from sklearn.preprocessing import StandardScaler, OneHotEncoder
def flexible_pipeline(categorical, clf, scaler=StandardScaler(), encoder=OneHotEncoder(sparse=False, handle_unknown='ignore')):
""" Returns a pipeline that imputes all missing values, encodes categorical features and scales numeric ones
Keyword arguments:
categorical -- A list of categorical column names. Example: ['gender', 'country'].
clf -- any scikit-learn classifier
scaler -- any scikit-learn feature scaling method (Optional)
encoder -- any scikit-learn category encoding method (Optional)
Returns: a scikit-learn pipeline which preprocesses the data and then runs the classifier
"""
#num_pipe = Pipeline(steps=[('imputer', SimpleImputer(strategy='mean'))])
categorical_pipe = Pipeline(steps=[
('imputer', SimpleImputer(strategy='most_frequent', fill_value='missing')),
('encoder', encoder)
])
numerical_pipe = Pipeline([('imputer', SimpleImputer(strategy='mean'))])
if scaler:
numerical_pipe.steps.append(('scaler', scaler))
preprocessor = ColumnTransformer([('cat', categorical_pipe, categorical)],
remainder = numerical_pipe )
return Pipeline([('preprocessor', preprocessor),('classifier', clf)])
# +
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score, KFold
kfold = KFold(n_splits=3, shuffle=True, random_state=1)
pipe = flexible_pipeline(categorical,
clf=LogisticRegression(random_state=1))
np.mean(cross_val_score(pipe, X, y, scoring='roc_auc', cv=kfold))
# -
# ### Question 1.4 (3 points)
# Implement a function `plot_1_4` which plots a heatmap comparing several combinations of scaling methods and classifiers:
# * As classifiers, the following algorithms in their default hyperparameters settings:
# * Logistic regression
# * SVM with RBF kernel
# * Random Forest
# * As options, the following feature scaling options in their default settings:
# * No scaling
# * Standard scaling
# * Normalize
# * PowerTransformer
# * In all cases, use OneHotEncoder with `sparse=False` and `handle_unknown='ignore'`
#
# You should evaluate all pipelines using AUC (area under the ROC curve) with 3-fold cross-validation.
# Compare all methods with the same cross-validation folds, shuffle the data and use `random_state=1`.
# Where possible, also use `random_state=1` for the classifiers.
# Only report the test scores (not the training scores).
### Helper plotting function. Do not change.
import seaborn as sns
def heatmap(columns, rows, scores):
""" Simple heatmap.
Keyword arguments:
columns -- list of options in the columns
rows -- list of options in the rows
scores -- numpy array of scores
"""
df = pd.DataFrame(scores, index=rows, columns=columns)
sns.heatmap(df, cmap='RdYlGn_r', linewidths=0.5, annot=True, fmt=".3f")
# +
# Implement
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier
from sklearn.preprocessing import MinMaxScaler, StandardScaler, Normalizer, PowerTransformer
from tqdm import tqdm_notebook as tqdm
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import cross_val_score, KFold
def plot_1_4(X, y):
""" Evaluates 3 classifiers together with 4 types of scaling. See description above.
"""
classifiers = [LogisticRegression(random_state=1), SVC(random_state=1), RandomForestClassifier(random_state=1)]
classifiers_name = [type(clf).__name__ for clf in classifiers]
scalers = [None, StandardScaler(), Normalizer(), PowerTransformer()]
scalers_name = [type(scl).__name__ for scl in scalers]
encoder = OneHotEncoder(sparse=False, handle_unknown='ignore')
sol = np.zeros((3,4))
kfold = KFold(n_splits=3, shuffle=True, random_state=1)
for c, classifier in enumerate(classifiers):
for s, scaler in enumerate(scalers):
pipe = flexible_pipeline(categorical, clf=classifier, scaler=scaler, encoder=encoder)
sol[c,s] = np.mean(cross_val_score(pipe, X, y, scoring='roc_auc', cv=kfold))
heatmap(scalers_name, classifiers_name, sol)
# -
plot_1_4(X, y)
# ### Question 1.5 (1 point)
# Interpret the heatmap of Question 1.4. Which of the following are correct?
# Enter your answer as a comma-separated string without spaces, e.g. "A,B,C"
# - 'A': All scaling methods perform equally well. It doesn't matter which scaling method is used.
# - 'B': The best scaling method depends on the classifier that will be used.
# - 'C': Scaling is important for SVMs and logistic regression, but not needed for Random Forests.
# - 'D': The power transformer is much better than other techniques on this dataset because many features have a power law distribution.
# - 'E': The Normalizer works badly because information gets lost in the scaling.
# - 'F': No answer
# Fill in the correct answers, e.g. 'A,B,C'. Don't change the name of the variable
q_1_5 = 'B,C,E'
# ### Question 1.6 (3 points)
# Optimize the encoding method for the categorical features. Use your `flexible_pipeline` to compare OneHotEncoding and TargetEncoding
# together with the same 3 classifiers as in question 1.4. Always use standard scaling. Implement a function `plot_1_6` which plots a heatmap with the results.
#
# TargetEncoding is part of the category encoders extension of scikit-learn. [Read more about it.](https://contrib.scikit-learn.org/categorical-encoding/targetencoder.html)
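# The idea behind target encoding can be sketched by hand on toy data (column names here are illustrative; the assignment uses the library implementation, which also smooths each estimate toward the global mean): each category is replaced by the mean target value observed for it.

```python
import pandas as pd

# toy data: a categorical feature and a binary target
df = pd.DataFrame({"city": ["A", "A", "B", "B", "B"],
                   "match": [1, 0, 1, 1, 0]})

# replace each category with the mean target for that category
means = df.groupby("city")["match"].mean()   # A -> 0.5, B -> 0.666...
df["city_encoded"] = df["city"].map(means)
print(df)
```

Note that, unlike one-hot encoding, this yields a single numeric column per categorical feature, which explains the much smaller feature count in Question 1.8.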
# +
from category_encoders import TargetEncoder
# Implement
def plot_1_6(X, y):
    """ Evaluates 3 classifiers and plots the results in a heatmap.
    Also compares different category encoders.
"""
classifiers = [LogisticRegression(random_state=1), SVC(random_state=1), RandomForestClassifier(random_state=1)]
classifiers_name = [type(clf).__name__ for clf in classifiers]
encoders = [OneHotEncoder(sparse=False, handle_unknown='ignore'), TargetEncoder(handle_unknown='ignore')]
encoders_name = [type(enc).__name__ for enc in encoders]
score = np.zeros((3,2))
kfold = KFold(n_splits=3, shuffle=True, random_state=1)
for c, classifier in enumerate(classifiers):
for e, encoder in enumerate(encoders):
pipe = flexible_pipeline(categorical, clf=classifier, encoder=encoder)
score[c,e] = np.mean(cross_val_score(pipe, X, y, scoring='roc_auc', cv=kfold))
heatmap(encoders_name, classifiers_name, score)
plot_1_6(X,y)
# -
# ### Question 1.7 (1 point)
# Interpret the heatmap of Question 1.6. Which of the following are correct?
# Enter your answer as a comma-separated string without spaces, e.g. "A,B,C"
# - 'A': They perform equally well
# - 'B': Target encoding is slightly better
# - 'C': One-hot-encoding is slightly better
# - 'D': It depends on the algorithm.
# - 'E': No answer
# Fill in the correct answers, e.g. 'A,B,C'. Don't change the name of the variable
q_1_7 = 'A,D'
# ### Question 1.8 (1 point)
# How many features are being constructed by the target encoder pipeline (i.e. on how many features is the classifier trained)?
# Fill in the correct answer, should be an integer. Don't change the name of the variable
q_1_8 = 59
# ## Part 2: Feature importance
# In this part, we will continue with your `flexible_pipeline`, and we use a random forest to learn which features
# are most important to predict the outcome of a dating 'match'. We will do this with both Random Forest's importance estimates and with permutation importance.
#
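# Before the full pipeline version below, the idea behind permutation importance can be sketched on synthetic data (the data and model here are illustrative, not the dating set): shuffle one feature at a time on the test set and measure how much the score drops.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 3))
y = (X[:, 0] + 0.1 * rng.normal(size=500) > 0).astype(int)  # only feature 0 matters
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=1)

clf = RandomForestClassifier(random_state=1).fit(X_tr, y_tr)
base = clf.score(X_te, y_te)
for j in range(X.shape[1]):
    X_perm = X_te.copy()
    X_perm[:, j] = rng.permutation(X_perm[:, j])  # break feature j's link to y
    print(f"feature {j}: score drop = {base - clf.score(X_perm, y_te):.3f}")
```

Only permuting feature 0 should cause a large drop; `sklearn.inspection.permutation_importance` repeats this shuffling several times per feature.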
# ### Question 2.1 (5 points)
# Implement a function `plot_2_1` that does the following:
# * Split the data using a standard stratified and shuffled train-test split. Use `random_state=1`.
# * Combine your `flexible_pipeline`, without feature scaling but with one-hot-encoding, with a RandomForest classifier. Train that pipeline on the training set.
# * Remember that the categorical features were encoded. Retrieve their encoded names from the one-hot-encoder (with `get_feature_names`).
# * Retrieve the feature importances from the trained random forest and match them to the correct names. Depending on how you implemented your `flexible_pipeline` these are likely the first or the last columns in the processed dataset.
# * Compute the permutation importances given the random forest pipeline and the test set. Use `random_state=1` and at least 10 iterations.
# * Pass the tree-based and permutation importances to the plotting function `compare_importances` below.
# Plotting function. Do not edit.
def compare_importances(rf_importance, perm_importance, rf_feature_names, feature_names):
""" Compares the feature importances from random forest to permutation importance
Keyword arguments:
rf_importance -- The random forest's feature_importances_
perm_importance -- The permutation importances as computed by sklearn.inspection.permutation_importance
rf_feature_names -- The names of the features received by the random forest, in the same order as their importances
feature_names -- The original features names in their original order
"""
topk = 30
# Trees
sorted_idx = rf_importance.argsort()[-topk:]
fig, ax = plt.subplots(nrows=1, ncols=2, figsize=(12, 6))
y_ticks = np.arange(0, topk)
ax[0].barh(y_ticks, rf_importance[sorted_idx])
ax[0].set_yticklabels(rf_feature_names[sorted_idx])
ax[0].set_yticks(y_ticks)
ax[0].set_title("Random Forest Feature Importances")
# Permutations
sorted_idx = perm_importance.importances_mean.argsort()[-topk:]
ax[1].boxplot(perm_importance.importances[sorted_idx].T, vert=False, labels=feature_names[sorted_idx])
ax[1].set_title("Permutation Importances (test set)")
fig.tight_layout()
plt.show()
# +
# Implement
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
def plot_2_1(X, y):
""" See detailed description above.
"""
X_train, X_test, y_train, y_test = train_test_split(X,y, stratify=y, shuffle=True, train_size=0.5, random_state=1)
features_original = X_train.columns.to_numpy()
pipe = flexible_pipeline(categorical, clf=RandomForestClassifier(), scaler=None)
pipe.fit(X_train, y_train)
ohe = (pipe.named_steps['preprocessor'].named_transformers_['cat'].named_steps['encoder'])
pipe_feature_names = ohe.get_feature_names(input_features = categorical)
pipe_feature_names = np.r_[pipe_feature_names, numerical]
pipe_importances = pipe.steps[-1][1].feature_importances_
sorted_idx = pipe_importances.argsort()
perm_importances = permutation_importance(pipe, X_test, y_test, n_repeats=10,random_state=1)
pipe_importances_ordered = pipe_importances[sorted_idx]
pipe_feature_names_ordered = pipe_feature_names[sorted_idx]
compare_importances(pipe_importances_ordered, perm_importances, pipe_feature_names_ordered, features_original)
plot_2_1(X, y)
# -
# ### Question 2.2 (1 point)
# Interpret the results of Question 2.1. Which of the following are correct?
# Enter your answer as a comma-separated string without spaces, e.g. "A,B,C"
# - 'A': The topmost feature importances are roughly the same for both methods
# - 'B': The topmost feature importances are very different
# - 'C': Categorical features (race, race_o, field,...) are ranked higher in the random forest ranking
# - 'D': Categorical features are ranked lower in the random forest ranking
# - 'E': No answer
# Fill in the correct answers, e.g. 'A,B,C'. Don't change the name of the variable
q_2_2 = 'A,D'
# ## Part 3: Calibrating predictions
# ### Question 3.1 (2 points)
# Use a grid search to optimize the RandomForest pipeline from question 2.1. Vary the number of trees from 100 to 1500 and `max_features` from
# 0.05 to 0.1. Use at least 2 values for every hyperparameter. Evaluate all pipelines using AUC (area under the ROC curve) with 3-fold cross-validation. Compare all methods with the same cross-validation folds, shuffle the data and use `random_state=1`.
# Plot the results in a heatmap in function `plot_3_1`.
# +
#Implement
from sklearn.model_selection import GridSearchCV
def plot_3_1(X, y):
""" See detailed description above.
"""
param_grid = {'classifier__n_estimators': np.linspace(100, 1500, 4).astype(int),
'classifier__max_features': np.linspace(0.05, 0.1, 4)}
pipe = flexible_pipeline(categorical, clf=RandomForestClassifier(), scaler=None)
kfold = KFold(n_splits=3, shuffle=True, random_state=1)
res = GridSearchCV(pipe, param_grid, scoring='roc_auc', cv=kfold).fit(X, y)
heatmap(param_grid['classifier__n_estimators'],
param_grid['classifier__max_features'],
res.cv_results_['mean_test_score'].reshape(param_grid['classifier__n_estimators'].shape[0],
param_grid['classifier__max_features'].shape[0]))
plot_3_1(X, y)
# -
# ### Question 3.2 (2 points)
# Implement a function `plot_3_2` that plots the ROC curve for the Random Forest pipeline with `n_estimators=1000`.
# Also indicate the point on the curve that corresponds to the 0.5 probability decision threshold.
# +
#Implement
from sklearn.metrics import roc_curve
def plot_3_2(X, y):
""" See description above.
"""
X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=1)
pipe = flexible_pipeline(categorical, clf=RandomForestClassifier(n_estimators=1000, random_state=1), scaler=None).fit(X_train, y_train)
fpr, tpr, thresholds = roc_curve(y_test, pipe.predict_proba(X_test)[:, 1])
#idx = np.nanargmin(np.abs((tpr/fpr)-2))
#print((tpr/fpr)[idx])
#print("Best threshold: ", thresholds[idx])
plt.plot(fpr, tpr, label="ROC Curve RF")
plt.xlabel("FPR")
plt.ylabel("TPR (recall)")
close_default = np.argmin(np.abs(thresholds - 0.5))
plt.plot(fpr[close_default], tpr[close_default], '^', markersize=10,
label="threshold 0.5 RF", fillstyle="none", c='k', mew=2)
plt.legend(loc=4);
plot_3_2(X, y)
# -
# ### Question 3.3 (2 points)
# Calibrate your model to get a higher recall. What would be the optimal decision threshold (approximately) assuming that a false negative (missing a good match) is twice as bad as a false positive (going on a date with someone who is not a good match)? The grade will depend on the distance to the actual optimum (within a tolerance).
# Fill in the correct answer, should be a float. Don't change the name of the variable
q_3_3 = 0.114
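# How such a threshold can be found: sweep candidate thresholds and pick the one minimizing the expected cost 2*FN + 1*FP. A sketch on synthetic scores (not the dating data, so the optimum here differs from the answer above):

```python
import numpy as np
from sklearn.metrics import confusion_matrix

rng = np.random.default_rng(1)
y_true = rng.integers(0, 2, 2000)
# assumed scores: positives tend to score higher than negatives
scores = np.clip(0.35 * y_true + 0.3 + 0.3 * rng.normal(size=2000), 0, 1)

best_t, best_cost = 0.5, np.inf
for t in np.linspace(0.01, 0.99, 99):
    y_pred = (scores >= t).astype(int)
    tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
    cost = 2 * fn + 1 * fp  # a false negative is twice as costly
    if cost < best_cost:
        best_t, best_cost = t, cost
print(f"cost-minimizing threshold ~ {best_t:.2f}")
```

Because false negatives are penalized more, the optimum lands below the default 0.5 threshold.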
| Assignment_2/Assignment 2.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ### In-class exercises Module 3 Answers (Handling arrays with numpy)
# Q.1
import numpy as np
# np.b<TAB>  # press Tab after "np.b" in a notebook to list completions (e.g. np.binary_repr, np.broadcast)
# Q.2
import numpy as np
list1 = [2*i for i in range(1,51)]
array1 = np.array(list1)
array1
# Q.3
np.random.seed(0)
x = np.random.randint(2,12, size=(4,5,3))
print(x)
print("dimension = ",x.ndim, " shape = ",x.shape, " data type is ", x.dtype)
# Q.4
x = np.random.randn(100)
x_reshaped = x.reshape(10,10)
x_reshaped
# Q.4
x_reshaped[::2]
# Q.4
x_reshaped[1,-1]
# Q.4
x[:,np.newaxis]
# Q.5
x = np.linspace(0,5,20)
x = np.array([x]) # this step converts x into a 2 dimensional array, which allows for horizontal stacking.
np.concatenate((x,x))
#Q.5
np.concatenate((x,x),axis=1)
#Q.5
print(np.vstack((x,x)))
print(np.hstack((x,x)))
x = np.linspace(0,5,20) # convert x back to a one dimensional array.
np.split(x,5)
# Q.6
import matplotlib.pyplot as plt
x = np.linspace(-6,6,100)
y1 = np.sqrt(36 - x**2)
y2 = -np.sqrt(36 - x**2)
plt.plot(x,y1, "b") # set the color of the line as blue
plt.plot(x,y2, "b") # set the color of the line as blue
plt.gca().set_aspect('equal')
plt.show()
# Q.7
import matplotlib.pyplot as plt
x = np.arange(-10,10,step = 0.01)
y = np.log(np.abs(x))
plt.plot(x,y)
# Q.8
x1 = np.ones(1000)*30
x2 = np.zeros(1000)
np.sum(x1)
# Q.9
import numpy as np
import matplotlib.pyplot as plt
array1 = np.random.randn(1000)
plt.plot(array1)
# Q.9
print ("mean = ", array1.mean())
print ("min = ", array1.min())
print ("max = ", array1.max())
print ("standard dev = ", array1.std())
# +
# Q.10
array2 = np.random.randint(0,101,size=(8,12))
print(array2)
print(array2[1].max())
print(array2.mean(axis=0)) # to find the mean of each column, we collapse axis=0
print(array2.mean(axis=1)) # to find the mean of each row, we collapse axis=1
# +
# Q.11
import numpy as np
from pandas_datareader import data
goog = data.DataReader('GOOG', start='2017', end='2018',
data_source='yahoo')
goog_price = np.array(goog["Close"])
print ("mean of google price in 2017 =", goog_price.mean())
print ("max google price in 2017 =", goog_price.max())
print ("min google price in 2017 =", goog_price.min())
print ("standard deviation google price in 2017 =", goog_price.std())
# +
# Q.11
goog_2016lastday = data.DataReader('GOOG', start = "2016-12-30", end="2017-01-01",
data_source="yahoo")
goog_2016lastday_price = np.array(goog_2016lastday["Close"])
print("number of days in 2017, such that the closing price is higher than the closing price of last trading day in 2016 = ",
np.sum(goog_price>goog_2016lastday_price))
# +
# Q.12
import pandas as pd
import matplotlib.pyplot as plt
import seaborn; seaborn.set()
data = pd.read_csv("Seattle2014.csv")
TMIN = np.array(data["TMIN"])
print("shape =",TMIN.shape)
print("sample size =",TMIN.shape[0])
plt.hist(TMIN, label="min temperature")
plt.legend()
# -
# Q.12
print("number of days on which min temperature is greater than 100 =", np.sum(TMIN > 100))
days = np.arange(365)
summer = (days > 172) & (days < 262)
print("standard deviation of temperatures in summer days =",
np.std(TMIN[summer]))
# Q.12
TMIN.sort() # sort the array in ascending order (in place)
TMIN[::-1] # a reversed view of the sorted array gives descending order
# Q.13
np.random.seed(1234)
x = np.random.randn(1000)
y = np.random.randn(1000)
z = [max(a,b) for a,b in zip(x,y)] # element-wise maximum; equivalently np.maximum(x, y)
z = np.array(z)
# Q.14
data = pd.read_csv("salary.csv")
salary=np.array(data["Salary"])
print("data type = ",salary.dtype)
print("sample size = ",salary.shape[0])
plt.hist(salary)
print("number of people with salary > 90k =", np.sum(salary > 90000))
print("max salary", np.max(salary))
print("median salary", np.median(salary))
print("standard deviation of salary", np.std(salary))
# Q.14
salary.sort()
print(salary)
# +
# Q.15 (a)
x = np.linspace(-10,10,100)
y1 = np.sqrt(100-x**2)
y2 = -np.sqrt(100-x**2)
plt.plot(x,y1, "b") # set the color of the line as blue
plt.plot(x,y2, "b") # set the color of the line as blue
plt.gca().set_aspect('equal')
# -
# Q.15 (b)
x = np.linspace(-2*np.pi, 2*np.pi, 100)
y = np.cos(x)
plt.plot(x,y)
# Q.15 (c)
x = np.linspace(-2, 2, 100)
y2 = x**2
y3 = x**3
plt.plot(x,y2, label="square function")
plt.plot(x,y3, label="cubic function")
plt.legend()
# Q.16
import numpy as np
import matplotlib.pyplot as plt
a = np.arange(-5,5,0.01)
b = np.arange(-5,5,0.01)
x,y = np.meshgrid(a,b)
z = np.sqrt(x**2 + y**2)
plt.imshow(z,extent=[-5,5,-5,5], cmap=plt.cm.gray)
plt.colorbar()
| inclass-exercises/module-3.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + slideshow={"slide_type": "slide"}
# Load numpy for math/array operations
# and matplotlib for plotting
import numpy as np
from matplotlib import ticker
import matplotlib.pyplot as plt
# + slideshow={"slide_type": "subslide"}
# %matplotlib inline
# Set up figure size and DPI for screen demo
plt.rcParams['figure.figsize'] = (4,3)
plt.rcParams['figure.dpi'] = 150
# -
# # Modifying tick labels & formatting manually
#Manually set tick labels
index = ('one', 'two', 'three', 'four')
plt.plot()
plt.ylim(1,4)
plt.yticks([1,2,3,4], index)  # tick positions and custom labels
#Rotating ticks
index = ('one', 'two', 'three', 'four')
plt.plot()
plt.ylim(1,4)
plt.yticks([1,2,3,4], index, rotation=45)  # rotation angle chosen for illustration
#Tick color
index = ('one', 'two', 'three', 'four')
plt.plot()
plt.ylim(1,4)
plt.yticks([1,2,3,4], index, color='red')  # tick label color chosen for illustration
# # Using formatters
#NullFormatter (no labels)
plt.plot()
plt.gca().yaxis.set_major_formatter(ticker.NullFormatter())  # hides the tick labels
#Log vs. Linear formatting
plt.loglog()
plt.ylim(1,1e7)
plt.gca().yaxis.set_major_formatter(ticker.ScalarFormatter())  # plain (linear-style) labels on a log axis; one plausible completion
#EngFormatter w/ unit
plt.loglog()
plt.ylim(1,10000)
plt.gca().yaxis.set_major_formatter(ticker.EngFormatter(unit='Hz'))  # unit chosen for illustration
#FormatStrFormatter
plt.plot()
plt.gca().yaxis.set_major_formatter(ticker.FormatStrFormatter('%.2f'))  # format string chosen for illustration
#FuncFormatter
func = lambda x,pos: r"$\sqrt{\rm tick}$ is %3.2f" % np.sqrt(x)
plt.plot()
plt.ylim(1,10)
plt.gca().yaxis.set_major_formatter(ticker.FuncFormatter(func))
| 6506_03_code_ACC_SB/Video 3.5- Tick formatting.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: nsci
# language: python
# name: nsci
# ---
# + [markdown] slideshow={"slide_type": "slide"}
# # So Far...
# + [markdown] slideshow={"slide_type": "subslide"}
# We've gone over a lot of stuff so far and you all have been doing great with everything I've thrown at you
# + [markdown] slideshow={"slide_type": "slide"}
# # Measures of Descriptive Statistics
# + [markdown] slideshow={"slide_type": "subslide"}
# All descriptive statistics are either measures of central tendency or measures of variability, also known as measures of dispersion. Measures of central tendency focus on the average or middle values of data sets; whereas, measures of variability focus on the dispersion of data. These two measures use graphs, tables, and general discussions to help people understand the meaning of the analyzed data.
# + [markdown] slideshow={"slide_type": "slide"}
# # Central Tendency
# + [markdown] slideshow={"slide_type": "subslide"}
# Measures of central tendency describe the center position of a distribution for a data set. A person analyzes the frequency of each data point in the distribution and describes it using the mean, median, or mode, which measures the most common patterns of the analyzed data set.
# + slideshow={"slide_type": "subslide"}
from scipy import stats
import numpy as np
#make random data
nums=np.random.normal(0, 10, 1000)
import matplotlib.pyplot as plt
f, ax1 = plt.subplots()
ax1.hist(nums, bins='auto')
ax1.set_title('probability density (random)')
plt.tight_layout()
# + slideshow={"slide_type": "subslide"}
f, ax1 = plt.subplots()
ax1.hist(nums, bins='auto')
ax1.set_title('probability density (random)')
plt.tight_layout()
ax1.plot([np.mean(nums)]*2,[0,100],'r')
#ax1.plot([np.median(nums)]*2,[0,100],'g')
# ax1.plot([stats.mode(nums)[0]]*2,[0,100],'g')
plt.show()
print("The Mean is: ",np.mean(nums))
print("The Mode is: ",stats.mode(nums))
print("The Median is: ",np.median(nums))
# + [markdown] slideshow={"slide_type": "slide"}
# # Dispersion
# + [markdown] slideshow={"slide_type": "subslide"}
# Measures of variability, or the measures of spread, aid in analyzing how spread-out the distribution is for a set of data. For example, while the measures of central tendency may give a person the average of a data set, it does not describe how the data is distributed within the set. So, while the average of the data may be 65 out of 100, there can still be data points at both 1 and 100. Measures of variability help communicate this by describing the shape and spread of the data set. Range, quartiles, absolute deviation, and variance are all examples of measures of variability. Consider the following data set: 5, 19, 24, 62, 91, 100. The range of that data set is 95, which is calculated by subtracting the lowest number (5) in the data set from the highest (100).
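# The example above can be checked directly; quartiles via np.percentile are shown as another measure of spread.

```python
import numpy as np

data = np.array([5, 19, 24, 62, 91, 100])
print("range =", data.max() - data.min())            # 95, as stated above
print("quartiles =", np.percentile(data, [25, 50, 75]))
```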
# + [markdown] slideshow={"slide_type": "slide"}
# # Range
# + [markdown] slideshow={"slide_type": "subslide"}
#
# The range is the simplest measure of variability to calculate, and one you have probably encountered many times in your life. The range is simply the highest score minus the lowest score
# + slideshow={"slide_type": "subslide"}
max_nums = max(nums)
min_nums = min(nums)
range_nums = max_nums-min_nums
print(max_nums)
print(min_nums)
print("The Range is :", range_nums)
# + [markdown] slideshow={"slide_type": "slide"}
# # Standard deviation
# + [markdown] slideshow={"slide_type": "subslide"}
# The standard deviation is also a measure of the spread of your observations, but is a statement of how much your data deviates from a typical data point. That is to say, the standard deviation summarizes how much your data differs from the mean. This relationship to the mean is apparent in standard deviation’s calculation.
# + [markdown] slideshow={"slide_type": "subslide"}
# 
# + slideshow={"slide_type": "subslide"}
print(np.std(nums))
# + [markdown] slideshow={"slide_type": "slide"}
# # Variance
# + [markdown] slideshow={"slide_type": "subslide"}
# Often, standard deviation and variance are lumped together for good reason. The following is the equation for variance, does it look familiar?
# + [markdown] slideshow={"slide_type": "subslide"}
# 
# + [markdown] slideshow={"slide_type": "subslide"}
# Standard deviation looks at how spread out a group of numbers is from the mean, by taking the square root of the variance. The variance measures the average of the squared differences of each point from the mean (the average of all data points).
# + slideshow={"slide_type": "subslide"}
print(np.var(nums))
# + [markdown] slideshow={"slide_type": "slide"}
# # Shape
# + [markdown] slideshow={"slide_type": "subslide"}
# Skewness is a parameter that measures the symmetry of a data set, and kurtosis measures how heavy its tails are compared to a normal distribution.
# + slideshow={"slide_type": "subslide"}
import numpy as np
from scipy.stats import kurtosis, skew, skewnorm
n = 10000
start = 0
width = 20
a = 0
data_normal = skewnorm.rvs(size=n, a=a,loc = start, scale=width)
a = 3
data_skew = skewnorm.rvs(size=n, a=a,loc = start, scale=width)
import matplotlib.pyplot as plt
f, (ax1, ax2) = plt.subplots(1, 2)
ax1.hist(data_normal, bins='auto')
ax1.set_title('probability density (random)')
ax2.hist(data_skew, bins='auto')
ax2.set_title('(your dataset)')
plt.tight_layout()
sig1 = data_normal
print("mean : ", np.mean(sig1))
print("var : ", np.var(sig1))
print("skew : ", skew(sig1))
print("kurt : ", kurtosis(sig1))
# + [markdown] slideshow={"slide_type": "slide"}
# # Correlation/Regression
# + [markdown] slideshow={"slide_type": "subslide"}
# ## Assumptions
# + [markdown] slideshow={"slide_type": "subslide"}
# The assumptions for Pearson correlation coefficient are as follows: level of measurement, related pairs, absence of outliers, normality of variables, linearity, and homoscedasticity.
#
# Level of measurement refers to each variable. For a Pearson correlation, each variable should be continuous. If one or both of the variables are ordinal in measurement, then a Spearman correlation could be conducted instead.
#
# Related pairs refers to the pairs of variables. Each participant or observation should have a pair of values. So if the correlation was between weight and height, then each observation used should have both a weight and a height value.
#
# Absence of outliers refers to not having outliers in either variable. Having an outlier can skew the results of the correlation by pulling the line of best fit formed by the correlation too far in one direction or another. Typically, an outlier is defined as a value that is 3.29 standard deviations from the mean, or a standardized value of less than ±3.29.
#
# Linearity and homoscedasticity refer to the shape formed by the values in the scatterplot. For linearity, a “straight line” relationship between the variables should be formed. If a line were drawn through all the dots from left to right, it should be straight and not curved. Homoscedasticity refers to the distance from the points to that straight line: the scatterplot should be tube-like in shape. If the shape is cone-like, homoscedasticity would not be met.
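# The ±3.29 standardized-value cutoff mentioned above can be applied with a z-score check (synthetic data with one planted outlier):

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.append(rng.normal(50, 5, 100), 120)  # one planted outlier

z = (x - x.mean()) / x.std()                # standardized values
print("outliers:", x[np.abs(z) > 3.29])     # only the planted value is flagged
```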
# + slideshow={"slide_type": "subslide"}
import pandas as pd
path_to_data = '/Users/joe/Cook Share Dropbox/<NAME>/NSCI Teaching/Lectures/Lectures1/Practice/rois.csv'
data_in = pd.read_csv(path_to_data).values
plt.scatter(data_in[:,1],data_in[:,2])
plt.xlabel('Height (inches)', size=18)
plt.ylabel('Weight (pounds)', size=18);
# + [markdown] slideshow={"slide_type": "subslide"}
# A scatter plot is a two-dimensional data visualization that shows the relationship between two numerical variables — one plotted along the x-axis and the other plotted along the y-axis. Matplotlib is a Python 2D plotting library with a built-in function to create scatter plots: matplotlib.pyplot.scatter(). ALWAYS PLOT YOUR RAW DATA.
#
# + [markdown] slideshow={"slide_type": "slide"}
# # Pearson Correlation Coefficient
# + [markdown] slideshow={"slide_type": "subslide"}
# Correlation measures the extent to which two variables are related. The Pearson correlation coefficient is used to measure the strength and direction of the linear relationship between two variables. This coefficient is calculated by dividing the covariance of the variables by the product of their standard deviations and has a value between +1 and -1, where 1 is a perfect positive linear correlation, 0 is no linear correlation, and −1 is a perfect negative linear correlation.
# We can obtain the correlation coefficients of the variables of a dataframe by using the .corr() method. By default, Pearson correlation coefficient is calculated; however, other correlation coefficients can be computed such as, Kendall or Spearman
#
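# The definition (covariance divided by the product of the standard deviations) can be verified by hand on synthetic data before applying np.corrcoef to the dataframe below:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=200)
y = 2 * x + rng.normal(size=200)  # linearly related with noise

# population covariance over product of population standard deviations
r_manual = np.cov(x, y, ddof=0)[0, 1] / (np.std(x) * np.std(y))
print(r_manual, np.corrcoef(x, y)[0, 1])  # identical
```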
# + slideshow={"slide_type": "subslide"}
np.corrcoef(data_in[:,1],data_in[:,2])
# + [markdown] slideshow={"slide_type": "subslide"}
# A rule of thumb for interpreting the size of the correlation coefficient is the following:
# - 1–0.8 → Very strong
# - 0.799–0.6 → Strong
# - 0.599–0.4 → Moderate
# - 0.399–0.2 → Weak
# - 0.199–0 → Very weak
# + [markdown] slideshow={"slide_type": "slide"}
# # Regression
# + [markdown] slideshow={"slide_type": "subslide"}
# Linear regression is an analysis that assesses whether one or more predictor variables explain the dependent (criterion) variable. The regression has five key assumptions:
#
# Linear relationship
# Multivariate normality
# No or little multicollinearity
# No auto-correlation
# Homoscedasticity
#
# A note about sample size: in linear regression, a common rule of thumb is that the analysis requires at least 20 cases per independent variable.
# + slideshow={"slide_type": "subslide"}
import statsmodels.api as sm
X = sm.add_constant(data_in[:,1]) # statsmodels OLS does not add an intercept by default
y = data_in[:,2]
# Note the difference in argument order (y first, then X)
model = sm.OLS(y, X).fit()
predictions = model.predict(X) # make the predictions by the model
# Print out the statistics
model.summary()
# +
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
# import seaborn as seabornInstance
#from sklearn.model_selection import train_test_split
from sklearn.linear_model import LinearRegression
from sklearn import metrics
p1 = '/Users/joe/Cook Share Dropbox/<NAME>/NSCI Teaching/Lectures/Lectures1/Practice/Weather.csv'
dataset = pd.read_csv(p1)
dataset.plot(x='MinTemp', y='MaxTemp', style='o')
plt.title('MinTemp vs MaxTemp')
plt.xlabel('MinTemp')
plt.ylabel('MaxTemp')
plt.show()
# + slideshow={"slide_type": "subslide"}
X = dataset['MinTemp'].values.reshape(-1,1)
y = dataset['MaxTemp'].values.reshape(-1,1)
# + slideshow={"slide_type": "subslide"}
regressor = LinearRegression()
regressor.fit(X, y) #training the algorithm
y_pred = regressor.predict(X)
plt.scatter(X, y, color='gray')
plt.plot(X, y_pred, color='red', linewidth=2)
plt.show()
# + [markdown] slideshow={"slide_type": "slide"}
# # Another Example
# + slideshow={"slide_type": "subslide"}
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn import metrics
import matplotlib.pyplot as plt
x_vals1 = np.random.randint(-100,-50,100)
y_vals1 = np.random.randint(-100,-50,100)
x_vals2 = np.random.randint(35,100,100)
y_vals2 = np.random.randint(60,100,100)
x_t = np.concatenate((x_vals1,x_vals2))
y_t = np.concatenate((y_vals1,y_vals2))
plt.scatter(x_t, y_t)
plt.show()
# + slideshow={"slide_type": "subslide"}
regressor = LinearRegression().fit((x_t).reshape(-1,1),(y_t).reshape(-1,1))
y_pred = regressor.predict(x_t.reshape(-1,1))
plt.scatter(x_t, y_t)
plt.plot((x_t).reshape(-1,1), y_pred, color='red', linewidth=2)
plt.show()
# + [markdown] slideshow={"slide_type": "subslide"}
# Whats wrong with this??
# + [markdown] slideshow={"slide_type": "slide"}
# # The Logic of Hypothesis Testing
# + [markdown] slideshow={"slide_type": "subslide"}
# State the Hypothesis: We state a hypothesis (guess) about a population. Usually the hypothesis concerns the value of a population parameter. ... Gather Data: We obtain a random sample from the population. Make a Decision: We compare the sample data with the hypothesis about the population.
#
# + [markdown] slideshow={"slide_type": "subslide"}
# As just stated, the logic of hypothesis testing in statistics involves four steps.
# State the Hypothesis: We state a hypothesis (guess) about a population. Usually the hypothesis concerns the value of a population parameter.
# Define the Decision Method: We define a method to make a decision about the hypothesis. The method involves sample data.
# Gather Data: We obtain a random sample from the population.
# Make a Decision: We compare the sample data with the hypothesis about the population. Usually we compare the value of a statistic computed from the sample data with the hypothesized value of the population parameter.
# If the data are consistent with the hypothesis we conclude that the hypothesis is reasonable. NOTE: We do not conclude it is right, but reasonable! AND: We actually do this by rejecting the opposite hypothesis (called the NULL hypothesis). More on this later.
# If there is a big discrepency between the data and the hypothesis we conclude that the hypothesis was wrong.
# We expand on those steps in this section:
# + [markdown] slideshow={"slide_type": "subslide"}
# First Step: State the Hypothesis
#
# Stating the hypothesis actually involves stating two opposing hypotheses about the value of a population parameter.
# Example: Suppose we are interested in the effect of prenatal exposure to alcohol on the birth weight of rats. Also, suppose that we know that the mean birth weight of the population of untreated lab rats is 18 grams.
#
# Here are the two opposing hypotheses:
#
# The Null Hypothesis (Ho). This hypothesis states that the treatment has no effect. For our example, we formally state:
# The null hypothesis (Ho) is that prenatal exposure to alcohol has no effect on the birth weight for the population of lab rats. The birth weight will be equal to 18 grams. This is denoted Ho: μ = 18.
#
#
# The Alternative Hypothesis (H1). This hypothesis states that the treatment does have an effect. For our example, we formally state:
# The alternative hypothesis (H1) is that prenatal exposure to alcohol has an effect on the birth weight for the population of lab rats. The birth weight will be different from 18 grams. This is denoted H1: μ ≠ 18.
#
#
# Second Step: Define the Decision Method
#
# We must define a method that lets us decide whether the sample mean is different from the hypothesized population mean. The method will let us conclude whether (reject null hypothesis) or not (accept null hypothesis) the treatment (prenatal alcohol) has an effect (on birth weight).
# We will go into details later.
#
# Third Step: Gather Data.
#
# Now we gather data. We do this by obtaining a random sample from the population.
# Example: A random sample of rats receives daily doses of alcohol during pregnancy. At birth, we measure the weight of the sample of newborn rats. The weights, in grams, are shown in the table.
#
# We calculate the mean birth weight.
#
# Experiment 1
# Sample Mean = 13
# Fourth Step: Make a Decision
#
# We make a decision about whether the mean of the sample is consistent with our null hypothesis about the population mean.
# If the data are consistent with the null hypothesis we conclude that the null hypothesis is reasonable.
# Formally: we do not reject the null hypothesis.
#
# If there is a big discrepancy between the data and the null hypothesis we conclude that the null hypothesis was wrong.
# Formally: we reject the null hypothesis.
#
# Example: We compare the observed mean birth weight with the hypothesized value, under the null hypothesis, of 18 grams.
#
# If a sample of rat pups which were exposed to prenatal alcohol has a birth weight "near" 18 grams, we conclude that the treatment does not have an effect.
# Formally: We do not reject the null hypothesis that prenatal exposure to alcohol has no effect on the birth weight for the population of lab rats.
#
# If our sample of rat pups has a birth weight "far" from 18 grams, we conclude that the treatment does have an effect.
# Formally: We reject the null hypothesis that prenatal exposure to alcohol has no effect on the birth weight for the population of lab rats.
#
# For this example, we would probably decide that the observed mean birth weight of 13 grams is "different" from the value of 18 grams hypothesized under the null hypothesis.
# Formally: We reject the null hypothesis that prenatal exposure to alcohol has no effect on the birth weight for the population of lab rats.
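# The decision in the fourth step can be made formally with a one-sample t-test. The sketch below uses made-up pup weights with mean 13 grams (the original data table did not survive in this copy, so these numbers are purely illustrative):

```python
import numpy as np
from scipy.stats import ttest_1samp

# hypothetical birth weights (grams) standing in for the missing table; mean = 13
weights = np.array([12, 13, 11, 14, 13, 12, 14, 13, 15, 13])
t_stat, p_value = ttest_1samp(weights, popmean=18)
print(t_stat, p_value)
# a tiny p-value -> reject the null hypothesis that the population mean is 18 g
```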
# + [markdown] slideshow={"slide_type": "slide"}
# # Statistical significance
# + [markdown] slideshow={"slide_type": "subslide"}
# Statistical significance is the likelihood that a relationship between two or more variables is caused by something other than chance.
#
# Statistical significance is used to provide evidence concerning the plausibility of the null hypothesis, which hypothesizes that there is nothing more than random chance at work in the data.
#
# Statistical hypothesis testing is used to determine whether the result of a data set is statistically significant
# + [markdown] slideshow={"slide_type": "subslide"}
# Understanding Statistical Significance
# Statistical significance is a determination about the null hypothesis, which hypothesizes that the results are due to chance alone. A data set provides statistical significance when the p-value is sufficiently small.
#
# When the p-value is large, then the results in the data are explainable by chance alone, and the data are deemed consistent with (while not proving) the null hypothesis.
#
# When the p-value is sufficiently small (e.g., 5% or less), then the results are not easily explained by chance alone, and the data are deemed inconsistent with the null hypothesis; in this case the null hypothesis of chance alone as an explanation of the data is rejected in favor of a more systematic explanation
# + slideshow={"slide_type": "subslide"}
from scipy.stats import ttest_ind
# 100 t-tests on pairs of samples drawn from the SAME distribution:
# at alpha = 0.05 we still expect about 5 "significant" results by chance alone
for i in range(100):
    vals1 = np.random.rand(100)
    vals2 = np.random.rand(100)
    result = ttest_ind(vals1, vals2)
    if result[1] < 0.05:
        print(result)
# + [markdown] slideshow={"slide_type": "slide"}
# # Multiple Comparisons
# + [markdown] slideshow={"slide_type": "subslide"}
# Multiple comparisons arise when a statistical analysis involves multiple simultaneous statistical tests, each of which has a potential to produce a "discovery." A stated confidence level generally applies only to each test considered individually, but often it is desirable to have a confidence level for the whole family of simultaneous tests. Failure to compensate for multiple comparisons can have important real-world consequences, as illustrated by the following examples:
#
# Suppose the treatment is a new way of teaching writing to students, and the control is the standard way of teaching writing. Students in the two groups can be compared in terms of grammar, spelling, organization, content, and so on. As more attributes are compared, it becomes increasingly likely that the treatment and control groups will appear to differ on at least one attribute due to random sampling error alone.
# Suppose we consider the efficacy of a drug in terms of the reduction of any one of a number of disease symptoms. As more symptoms are considered, it becomes increasingly likely that the drug will appear to be an improvement over existing drugs in terms of at least one symptom.
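# A common (if conservative) remedy, sketched below, is the Bonferroni correction: divide the significance level by the number of tests. With 100 tests on identically distributed samples, uncorrected testing at alpha = 0.05 produces false discoveries by chance, while the corrected threshold rarely does:

```python
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(0)
n_tests, alpha = 100, 0.05
# both samples come from the same distribution, so every "hit" is a false positive
pvals = [ttest_ind(rng.random(100), rng.random(100)).pvalue for _ in range(n_tests)]
raw_hits = sum(p < alpha for p in pvals)                   # no correction
bonferroni_hits = sum(p < alpha / n_tests for p in pvals)  # Bonferroni-corrected
print(raw_hits, bonferroni_hits)
```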
# + [markdown] slideshow={"slide_type": "slide"}
# # Different Test Statistics
# + [markdown] slideshow={"slide_type": "subslide"}
# A test statistic is a random variable that is calculated from sample data and used in a hypothesis test. You can use test statistics to determine whether to reject the null hypothesis. The test statistic compares your data with what is expected under the null hypothesis.
# + [markdown] slideshow={"slide_type": "subslide"}
# A test statistic measures the degree of agreement between a sample of data and the null hypothesis. Its observed value changes randomly from one random sample to a different sample. A test statistic contains information about the data that is relevant for deciding whether to reject the null hypothesis. The sampling distribution of the test statistic under the null hypothesis is called the null distribution. When the data show strong evidence against the assumptions in the null hypothesis, the magnitude of the test statistic becomes too large or too small depending on the alternative hypothesis. This causes the test's p-value to become small enough to reject the null hypothesis.
# + [markdown] slideshow={"slide_type": "subslide"}
# Different hypothesis tests make different assumptions about the distribution of the random variable being sampled in the data. These assumptions must be considered when choosing a test and when interpreting the results.
#
# + [markdown] slideshow={"slide_type": "subslide"}
# ## Z-Stat
# + [markdown] slideshow={"slide_type": "subslide"}
# The z-test assumes that the data are independently sampled from a normal distribution. Secondly, it assumes that the standard deviation σ of the underlying normal distribution is known;
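# As a small sketch (numbers invented for illustration), the z statistic compares the sample mean to the hypothesized mean in units of the standard error computed from the known σ:

```python
import numpy as np

sample = np.array([18.2, 17.9, 18.4, 18.1, 17.8])  # illustrative measurements
mu0 = 18.0     # hypothesized population mean
sigma = 0.5    # population standard deviation, assumed KNOWN for a z-test
z = (sample.mean() - mu0) / (sigma / np.sqrt(len(sample)))
print(z)
```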
# + [markdown] slideshow={"slide_type": "subslide"}
# ## t-Stat
# + [markdown] slideshow={"slide_type": "subslide"}
# The t-test also assumes that the data are independently sampled from a normal distribution, but unlike the z-test it does not assume that the standard deviation σ of the underlying normal distribution is known; instead, σ is estimated from the sample.
# + [markdown] slideshow={"slide_type": "subslide"}
# ## F-Stat
#
# + [markdown] slideshow={"slide_type": "subslide"}
# An F-test assumes that data are normally distributed and that samples are independent from one another.
# Data that differ from the normal distribution could do so for a few reasons: the data could be skewed, or the sample size could be too small to approach a normal distribution. Regardless of the reason, F-tests assume a normal distribution and will produce inaccurate results if the data differ significantly from this distribution.
#
# F-tests also assume that data points are independent from one another. For example, you are studying a population of giraffes and you want to know how body size and sex are related. You find that females are larger than males, but you didn't take into consideration that substantially more of the adults in the population are female than male. Thus, in your dataset, sex is not independent of age.
#
# -
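# A minimal sketch of an F-test (a one-way ANOVA via `scipy.stats.f_oneway`, with invented group data):

```python
import numpy as np
from scipy.stats import f_oneway

rng = np.random.default_rng(1)
# three independent, normally distributed groups with different means
g1 = rng.normal(0.0, 1.0, 50)
g2 = rng.normal(0.5, 1.0, 50)
g3 = rng.normal(1.0, 1.0, 50)
f_stat, p_value = f_oneway(g1, g2, g3)  # F statistic and its p-value
print(f_stat, p_value)
```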
| .ipynb_checkpoints/NSCI801_Descriptive_Stats-NEW-checkpoint.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## Convert visibilities from CASA MS format to Python save file
# To be executed in CASA.
# +
import numpy as np
import pickle
import os
sourcetag,workingdir,vis,nvis,mosaic,phasecenter,weighting,robust,uvtaper,interactive = pickle.load(open('../imaging/imagepars.npy','rb'))
vis=[x.encode('ascii') for x in vis]
tb = casac.table()
ms = casac.ms()
cc=2.9979e10 #cm/s
#Use CASA table tools to get columns of UVW, DATA, WEIGHT, etc.
outputfilename=[x[:-3]+'.npy' for x in vis]
filename=vis
for ii in np.arange(len(filename)):
    tb.open(filename[ii])
    #NB for this to run smoothly we need to have the same number of channels for ALL scans. So no spectral windows with different numbers of channels, otherwise it gets complicated.
    #See https://safe.nrao.edu/wiki/pub/Main/RadioTutorial/BandwidthSmearing.pdf to choose how much to smooth a dataset in frequency.
    #Errors are taken into account when time averaging in split: https://casa.nrao.edu/casadocs/casa-5.1.1/uv-manipulation/time-average
    #And when channel averaging: https://casa.nrao.edu/casadocs/casa-5.1.1/uv-manipulation/channel-average
    data = tb.getcol("DATA")
    uvw = tb.getcol("UVW")
    flags = tb.getcol("FLAG")
    spwid = tb.getcol("DATA_DESC_ID")
    weight = tb.getcol("WEIGHT")
    ant1 = tb.getcol("ANTENNA1")
    ant2 = tb.getcol("ANTENNA2")
    tb.close()
    if np.any(flags):
        print("Note: some of the data is FLAGGED")
    print("Found data with "+str(data.shape[-1])+" uv points per channel")
    #Use CASA ms tools to get the channel/spw info
    ms.open(filename[ii])
    spwstuff = ms.getspectralwindowinfo()
    nchan = spwstuff["0"]["NumChan"]
    npol = spwstuff["0"]["NumCorr"]
    ms.close()
    print("with "+str(nchan)+" channels per SPW and "+str(npol)+" polarizations,")
    #Use CASA table tools to get frequencies, which are needed to calculate u-v points from baseline lengths
    tb.open(filename[ii]+"/SPECTRAL_WINDOW")
    freqs = tb.getcol("CHAN_FREQ")
    rfreq = tb.getcol("REF_FREQUENCY")
    tb.close()
    print(str(freqs.shape[1])+" SPWs and Channel 0 frequency of 1st SPW of "+str(rfreq[0]/1e9)+" GHz")
    print("corresponding to "+str(2.9979e8/rfreq[0]*1e3)+" mm")
    print("Dataset has baselines between "+str(np.min(np.sqrt(uvw[0,:]**2.0+uvw[1,:]**2.0)))+" and "+str(np.max(np.sqrt(uvw[0,:]**2.0+uvw[1,:]**2.0)))+" m")
    #Initialize u and v arrays (coordinates in Fourier space)
    uu=np.zeros((freqs.shape[0],uvw[0,:].size))
    vv=np.zeros((freqs.shape[0],uvw[0,:].size))
    #Fill u and v arrays appropriately from data values.
    for i in np.arange(freqs.shape[0]):
        for j in np.arange(uvw.shape[1]):
            uu[i,j]=uvw[0,j]*freqs[i,spwid[j]]/(cc/100.0)
            vv[i,j]=uvw[1,j]*freqs[i,spwid[j]]/(cc/100.0)
    #Extract real and imaginary part of the visibilities at all u-v coordinates, for both polarization states (XX and YY), extract weights which correspond to 1/(uncertainty)^2
    Re_xx = data[0,:,:].real
    Im_xx = data[0,:,:].imag
    weight_xx = weight[0,:]
    if npol>=2:
        Re_yy = data[1,:,:].real
        Im_yy = data[1,:,:].imag
        weight_yy = weight[1,:]
        #Since we don't care about polarization, combine polarization states (average them together) and fix the weights accordingly. Also if any of the two polarization states is flagged, flag the outcome of the combination.
        flags = flags[0,:,:]*flags[1,:,:]
        Re = np.where((weight_xx + weight_yy) != 0, (Re_xx*weight_xx + Re_yy*weight_yy) / (weight_xx + weight_yy), 0.)
        Im = np.where((weight_xx + weight_yy) != 0, (Im_xx*weight_xx + Im_yy*weight_yy) / (weight_xx + weight_yy), 0.)
        wgts = (weight_xx + weight_yy)
    else:
        Re=Re_xx
        Im=Im_xx
        wgts=weight_xx
        flags=flags[0,:,:]
    # Find which of the data represents cross-correlation between two antennas as opposed to auto-correlation of a single antenna.
    # We don't care about the latter so we don't want it.
    xc = np.where(ant1 != ant2)[0]
    #Select only cross-correlation data
    data_real = Re[:,xc]
    data_imag = Im[:,xc]
    flags = flags[:,xc]
    data_uu = uu[:,xc]
    data_vv = vv[:,xc]
    data_wgts = np.reshape(np.repeat(wgts[xc], uu.shape[0]), data_uu.shape)
    #Delete previously used (and not needed) variables (to free up some memory?)
    del Re
    del Im
    del wgts
    del uu
    del vv
    #Select only data that is NOT flagged
    data_real = data_real[np.logical_not(flags)]
    data_imag = data_imag[np.logical_not(flags)]
    data_wgts = data_wgts[np.logical_not(flags)]
    data_uu = data_uu[np.logical_not(flags)]
    data_vv = data_vv[np.logical_not(flags)]
    #Wrap up all the arrays/matrices we need (u-v coordinates, complex visibilities, and weights for each visibility) and save them all together in a numpy file
    u, v, Re, Im, w = data_uu, data_vv, data_real, data_imag, data_wgts
    np.save(filename[ii][:-3]+'.npy', [u, v, Re, Im, w])
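# Outside CASA, the saved file can be read back by unpacking the stacked array. A sketch with toy arrays standing in for the real visibility data:

```python
import os
import tempfile
import numpy as np

# toy stand-ins for the real u, v, Re, Im, w arrays (all the same length)
u = np.array([1.0, 2.0]); v = np.array([3.0, 4.0])
Re = np.array([0.1, 0.2]); Im = np.array([0.0, -0.1])
w = np.array([1.0, 1.0])

path = os.path.join(tempfile.gettempdir(), 'vis_roundtrip_test.npy')
np.save(path, [u, v, Re, Im, w])      # the list is stacked into a (5, N) array
u2, v2, Re2, Im2, w2 = np.load(path)  # unpack the rows on the way back
print(np.allclose(u, u2) and np.allclose(w, w2))
```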
| utils/.ipynb_checkpoints/mstonumpyortxt_multiple-checkpoint.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## Testing Spatial K-Fold Cross Validation
# +
import sys
import numpy as np
import matplotlib.pyplot as plt
sys.path.append('../Scripts')
from deafrica_classificationtools import spatial_clusters, SKCV, spatial_train_test_split
from sklearn.model_selection import ShuffleSplit, GroupShuffleSplit, GroupKFold
# %load_ext autoreload
# %autoreload 2
# -
# ## Analysis Parameters
# +
training_data = "eastern_cropmask/results/training_data/gm_mads_two_seasons_training_data_20201204.txt"
coordinate_data = "eastern_cropmask/results/training_data/training_data_coordinates_20201204.txt"
cv_splits = 5
max_distance = 250000
test_size = 0.20
cluster_method='Hierarchical'
# -
# ### Load data
# +
# load the data
model_input = np.loadtxt(training_data)
coordinates = np.loadtxt(coordinate_data)
# load the column_names
with open(training_data, 'r') as file:
    header = file.readline()
column_names = header.split()[1:]
# Extract relevant indices from training data
model_col_indices = [column_names.index(var_name) for var_name in column_names[1:]]
#convert variable names into sci-kit learn nomenclature
X = model_input[:, model_col_indices]
y = model_input[:, 0]
# -
# ## Generate spatial clusters to visualize
# +
spatial_groups = spatial_clusters(coordinates, method='Hierarchical', max_distance=max_distance, verbose=True)
plt.figure(figsize=(6,8))
plt.scatter(coordinates[:, 0], coordinates[:, 1], c=spatial_groups,
s=50, cmap='viridis');
plt.title('Spatial clusters of training data')
plt.ylabel('Northings')
plt.xlabel('Eastings');
# -
# ## Test the different SKCV methods and random shufflesplit
# +
#generate n_splits of train-test_splits
kfold = SKCV(coordinates=coordinates,
max_distance=max_distance,
n_splits=cv_splits,
cluster_method=cluster_method,
kfold_method='SpatialKFold',
test_size=test_size,
balance=True
)
kfold=kfold.split(coordinates)
shuffle = SKCV(coordinates=coordinates,
max_distance=max_distance,
n_splits=cv_splits,
cluster_method=cluster_method,
kfold_method='SpatialShuffleSplit',
test_size=test_size,
balance=10
)
shuffle = shuffle.split(coordinates)
rs = ShuffleSplit(n_splits=cv_splits, test_size=test_size, random_state=0)
rs = rs.split(coordinates)
gss = GroupShuffleSplit(n_splits=cv_splits, test_size=test_size, random_state=0)
gss = gss.split(coordinates, groups=spatial_groups)
gkf = GroupKFold(n_splits=cv_splits)
gkf = gkf.split(coordinates, groups=spatial_groups)
# -
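# The point of the group-aware splitters above is that no spatial cluster ever appears in both train and test. A tiny sketch with synthetic groups:

```python
import numpy as np
from sklearn.model_selection import GroupKFold

X_toy = np.arange(12).reshape(6, 2)
groups = np.array([0, 0, 1, 1, 2, 2])
seen_test_groups = set()
for train_idx, test_idx in GroupKFold(n_splits=3).split(X_toy, groups=groups):
    # a group never straddles the train/test boundary
    assert set(groups[train_idx]).isdisjoint(set(groups[test_idx]))
    seen_test_groups |= set(groups[test_idx])
print(seen_test_groups)  # every group is used for testing across the folds
```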
# ### Plot train-test coordinate data
# +
fig, axes = plt.subplots(
5,
cv_splits,
figsize=(20, 20),
sharex=True,
sharey=True,
)
for row, title, folds in zip(axes, ['Spatial-KF', 'GKF','Spatial-SS','GSS', 'random'], [kfold,gkf,shuffle, gss, rs]):
    for i, (ax, fold) in enumerate(zip(row, folds)):
        train, test = fold
        X_tr, X_tt = coordinates[train,:], coordinates[test,:]
        ax.set_title("{} fold {} ({} testing points)".format(title, i, test.size))
        ax.plot(
            np.transpose(X_tr)[0],
            np.transpose(X_tr)[1],
            ".b",
            markersize=2,
            label="Train",
        )
        ax.plot(
            np.transpose(X_tt)[0],
            np.transpose(X_tt)[1],
            ".r",
            markersize=2,
            label="Test",
        )

# Place a legend on the first plot
axes[0, 0].legend(loc="upper right", markerscale=5)
plt.subplots_adjust(
    hspace=0.1, wspace=0.05, top=0.95, bottom=0.05, left=0.05, right=0.95
)
plt.show()
# -
# ## Train-test-split
# +
train_features, test_features, train_labels, test_labels = spatial_train_test_split(X=model_input[:, model_col_indices],
y=model_input[:, 0],
coordinates = coordinates,
cluster_method='Hierarchical',
max_distance=max_distance,
kfold_method = 'SpatialShuffleSplit',
test_size=test_size,
random_state=0,
balance=10
)
print("train_features shape:", train_features.shape)
print("test_features shape:", test_features.shape)
# -
| pre-post_processing/test_SKCV.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## MERGE
# +
import numpy as np
import pandas as pd
import os
pd.set_option('display.max_columns', None)
path = os.getcwd()
# -
df_census = pd.read_csv(os.path.join(path, 'csv/census_by_city.csv')) # (25235, 35)
df_crime = pd.read_csv(os.path.join(path, 'csv/fbi_crime_uscities.csv')) # (8103, 16)
df_rent = pd.read_csv(os.path.join(path, 'csv/rent_cleaned.csv')) # (2831, 4)
df_poll = pd.read_csv(os.path.join(path, 'csv/pollution_summary.csv')) # (845, 20)
df_lat_lon = pd.read_excel(os.path.join(path, 'csv/uscities.xlsx')) # (28338, 17)
# replace 'State_Id' in df_census so it matches the other columns
df_census.rename(columns = {'State_Id':'State'}, inplace = True)
df_lat_lon.rename(columns = {'state_id':'State','city': 'City'}, inplace = True)
df_lat_lon = df_lat_lon[['City', 'State', 'lat', 'lng']]
# +
# put all the dfs in a list
df_lst = [df_census, df_crime, df_rent, df_poll, df_lat_lon]
# remove whitespace in all the dfs
for x in df_lst:
    x['City'] = x['City'].str.strip()
    x['State'] = x['State'].str.strip()

# make sure it worked
for x in df_lst:
    print(x['City'][0])
    print(x['State'][0])
# -
# merging the data and following its shape sequentially
df_net = pd.merge(df_census, df_crime, on=['City', 'State'])
print(df_net.shape)
df_net2 = pd.merge(df_net, df_rent, on=['City', 'State'])
print(df_net2.shape)
df_net3 = pd.merge(df_net2, df_poll, on=['City', 'State'])
print(df_net3.shape)
df_net4 = pd.merge(df_net3, df_lat_lon, on=['City', 'State'])
print(df_net4.shape)
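# The shrinking row counts above come from the default inner join: `pd.merge` keeps only the (`City`, `State`) pairs present in both frames. A minimal sketch:

```python
import pandas as pd

a = pd.DataFrame({'City': ['A', 'B', 'C'], 'State': ['X', 'X', 'Y'], 'v1': [1, 2, 3]})
b = pd.DataFrame({'City': ['A', 'C'], 'State': ['X', 'Y'], 'v2': [10, 30]})
m = pd.merge(a, b, on=['City', 'State'])  # inner join by default
print(m.shape)  # (2, 4): city B drops out because it has no match in b
```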
# ## CLEANUP
# ### Update Crime Ratings
# - comparison is across fewer cities.
# - run pd.qcut again
df_net4['Crime Rating'].value_counts()
# crime rating skews very high because after the merging there were only a little over 400 cities versus the 8000 cities reported in the FBI crime data.
# dropping old crime rating column
df_net4 = df_net4.drop(columns = ['Crime Rating'])
# Create a new crime rating by using qcut() on the number of merged cities we have
df_net4['Crime Rating'] = pd.qcut(df_net4['Crime Rate (per 1000 residents)'], q=3, labels=['Low', 'Medium', 'High'])
df_net4['Crime Rating'].value_counts()
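# `pd.qcut` cuts on sample quantiles, so with `q=3` it aims for three equally populated bins regardless of how the values are spread. A small sketch:

```python
import pandas as pd

rates = pd.Series([1.0, 2.0, 3.0, 10.0, 20.0, 100.0])
rating = pd.qcut(rates, q=3, labels=['Low', 'Medium', 'High'])
print(rating.value_counts())  # two values land in each bin
```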
# ### Clean Columns
col = ["Poverty", "ChildPoverty", "Unemployment", "Professional", "Office", "Service", "Construction", "Production"]
df_net4[col] = df_net4[col].round(2)
# ### Reorder Columns
df_net4.columns
df_net4 = df_net4[[
'City', 'State',
'lat', 'lng',
'TotalPop', 'Men', 'Women', 'Hispanic', 'White',
'Black', 'Native', 'Asian', 'Pacific', 'Income', 'IncomeErr','IncomePerCap',
'IncomePerCapErr', 'Poverty', 'ChildPoverty', 'Employed', 'Unemployment',
'PrivateWork', 'PublicWork', 'SelfEmployed', 'FamilyWork', 'Professional',
'Service', 'Office', 'Construction', 'Production', 'Drive', 'Carpool',
'Transit', 'Walk', 'OtherTransp', 'WorkAtHome', 'MeanCommute',
'Rent', 'Year',
'Population','Violent crime', 'Murder and nonnegligent manslaughter', 'Rape',
'Robbery', 'Aggravated assault', 'Property crime', 'Burglary','Larceny- theft',
'Motor vehicle theft', 'Arson', 'Crime Rate (per 1000 residents)', 'Crime Rating',
'Days with AQI', 'Good Days', 'Moderate Days', 'Unhealthy for Sensitive Groups Days',
'Unhealthy Days', 'Very Unhealthy Days', 'Hazardous Days', 'Max AQI',
'90th Percentile AQI', 'Median AQI', 'Days CO', 'Days NO2', 'Days Ozone', 'Days SO2',
'Days PM2.5', 'Days PM10', 'Level of Concern',
]]
df_net4.rename(columns = {'Level of Concern':'Air Quality Index',
'Crime Rate (per 1000 residents)': 'Crime Rate per 1000',
'lng': 'lon'}, inplace = True)
# ### Drop Duplicates
df_net4[df_net4.duplicated(subset=['City', 'State', 'lat', 'lon'], keep=False)]
df_net4 = df_net4.drop_duplicates(subset=['City', 'State', 'lat', 'lon'], keep='first')
# ### Save
print(df_net4.shape)
df_net4.head()
# extract the merged df as a .csv
df_net4.to_csv('merged.csv', index = False)
| notebooks/datasets/datasets_to_merge/old/merging_datasets.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import numpy as np
from langmodels.models import *
import langmodels.utf8codec as utf8codec
from langmodels.utils.tools import *
import torch.nn.functional as F
import torch.nn as nn
import torch
from fairseq.modules.transformer_layer import MultiheadAttention, TransformerDecoderLayer, TransformerEncoderLayer
# +
from types import SimpleNamespace  # SimpleNamespace is used below to build the args object
in_conv_dim=1024
conv_proj_dim=256
att_layers=2
att_dim=128
att_encoder_heads=8
att_encoder_ff_embed_dim=1024
dropout=0.1
att_dropout=0.1
activation=None
residual=True
args = SimpleNamespace(**{"encoder_embed_dim": att_dim, "decoder_embed_dim": att_dim,
"encoder_attention_heads": att_encoder_heads, "decoder_attention_heads": att_encoder_heads,
"attention_dropout": att_dropout,
"dropout": dropout,
"activation_fn": "gelu",
"encoder_normalize_before": True, "decoder_normalize_before": True,
"encoder_ffn_embed_dim": att_encoder_ff_embed_dim, "decoder_ffn_embed_dim": att_encoder_ff_embed_dim,
})
# -
args
tenc = TransformerDecoderLayer(args)
conv_col = Conv1DColNet()
count_parameters(conv_col), count_trainable_parameters(conv_col)
conv_col
convatt_col = ConvAttColNet(conv_col)
count_parameters(convatt_col), count_trainable_parameters(convatt_col)
convatt_col
convatt_col = ConvAttColNet(conv_col,
in_dim=324, hidd_embed_dim=768, out_embed_dim=96, # input Embedding adaptor channel-wise
in_conv_channels=96, lin_channels=96,
in_conv_dim=1024, conv_proj_dim=256, att_layers=2, att_dim=256,
att_encoder_heads=16, att_encoder_ff_embed_dim=1024,
dropout=0.1, att_dropout=0.1, activation=None, residual=True,
out_in_dim=96, out_hidd_embed_dim=1024, # decoder channel_wise dimensions
out_seq_len=512, # output sequence length, maximum attention that will appear there
)
count_parameters(convatt_col), count_trainable_parameters(convatt_col)
convatt_col
convatt_col.to("cuda:0")
# This convatt_col module is aprox 600-700MB in the GPU
# tcn_col = TCNColumn(1024, [128])
tcn_col = TCNColumn()
count_parameters(tcn_col)
256+128
utf8codes = np.load("./utf8-codes/utf8_codebook_overfit_matrix_2seg_dim64.npy")
utf8codes = utf8codes.reshape(1987,64)
gconv_net = GatedConv1DPoS(utf8codes)
count_parameters(gconv_net), count_trainable_parameters(gconv_net)
| predictors/sequence/text/ParamCount.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Week 2 lecture notebook
# ## Outline
#
# [Missing values](#missing-values)
#
# [Decision tree classifier](#decision-tree)
#
# [Apply a mask](#mask)
#
# [Imputation](#imputation)
# <a name="missing-values"></a>
# ## Missing values
import numpy as np
import pandas as pd
df = pd.DataFrame({"feature_1": [0.1,np.NaN,np.NaN,0.4],
"feature_2": [1.1,2.2,np.NaN,np.NaN]
})
df
# ### Check if each value is missing
df.isnull()
# ### Check if any values in a row are true
#
df_booleans = pd.DataFrame({"col_1": [True,True,False],
"col_2": [False,False,False]
})
df_booleans
# - If we use pandas.DataFrame.any(), it checks if at least one value in a column is `True`, and if so, returns `True`.
# - If all rows are `False`, then it returns `False` for that column
df_booleans.any()
# - Setting the axis to zero also checks if any item in a column is `True`
df_booleans.any(axis=0)
# - Setting the axis to `1` checks if any item in a **row** is `True`, and if so, returns `True`
# - Similarly, only when all values in a row are `False` does the function return `False`.
df_booleans.any(axis=1)
# ### Sum booleans
series_booleans = pd.Series([True,True,False])
series_booleans
# - When applying `sum` to a series (or list) of booleans, the `sum` function treats `True` as 1 and `False` as zero.
sum(series_booleans)
# You will make use of these functions in this week's assignment!
# ### This is the end of this practice section.
#
# Please continue on with the lecture videos!
#
# ---
# <a name="decision-tree"></a>
# ## Decision Tree Classifier
#
import pandas as pd
X = pd.DataFrame({"feature_1":[0,1,2,3]})
y = pd.Series([0,0,1,1])
X
y
from sklearn.tree import DecisionTreeClassifier
dt = DecisionTreeClassifier()
dt
dt.fit(X,y)
# ### Set tree parameters
dt = DecisionTreeClassifier(criterion='entropy',
max_depth=10,
min_samples_split=2
)
dt
# ### Set parameters using a dictionary
#
# - In Python, we can use a dictionary to set parameters of a function.
# - We can define the name of the parameter as the 'key', and the value of that parameter as the 'value' for each key-value pair of the dictionary.
tree_parameters = {'criterion': 'entropy',
'max_depth': 10,
'min_samples_split': 2
}
# - We can pass in the dictionary and use `**` to 'unpack' that dictionary's key-value pairs as parameter values for the function.
dt = DecisionTreeClassifier(**tree_parameters)
dt
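# The same `**` unpacking works for any Python callable, not just scikit-learn estimators, e.g.:

```python
def greet(name, punctuation):
    # each dictionary key must match a parameter name exactly
    return "Hello, " + name + punctuation

params = {"name": "Ada", "punctuation": "!"}
print(greet(**params))  # Hello, Ada!
```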
# ### This is the end of this practice section.
#
# Please continue on with the lecture videos!
#
# ---
# <a name="mask"></a>
# ## Apply a mask
#
# Use a 'mask' to filter data of a dataframe
import pandas as pd
df = pd.DataFrame({"feature_1": [0,1,2,3,4]})
df
mask = df["feature_1"] >= 3
mask
df[mask]
# ### Combining comparison operators
#
# You'll want to be careful when combining more than one comparison operator, to avoid errors.
# - Using the `and` operator on a series will result in a `ValueError`, because the truth value of an entire series is ambiguous.
df["feature_1"] >= 2
df["feature_1"] <= 3
# NOTE: This will result in a ValueError
df["feature_1"] >= 2 and df["feature_1"] <= 3
# ### How to combine two logical operators for Series
# What we want is to look at the same row of each of the two series, and compare each pair of items, one row at a time. To do this, use:
# - the `&` operator instead of `and`
# - the `|` operator instead of `or`.
# - Also, you'll need to surround each comparison with parentheses `(...)`
# This will compare the series, one row at a time
(df["feature_1"] >= 2) & (df["feature_1"] <= 3)
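# Using the combined mask to filter the dataframe keeps only the rows where both conditions hold:

```python
import pandas as pd

df = pd.DataFrame({"feature_1": [0, 1, 2, 3, 4]})
mask = (df["feature_1"] >= 2) & (df["feature_1"] <= 3)
print(df[mask]["feature_1"].tolist())  # [2, 3]
```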
# ### This is the end of this practice section.
#
# Please continue on with the lecture videos!
#
# ---
# <a name="imputation"></a>
# ## Imputation
#
# We will use imputation functions provided by scikit-learn. See the scikit-learn [documentation on imputation](https://scikit-learn.org/stable/modules/impute.html#iterative-imputer)
import pandas as pd
import numpy as np
df = pd.DataFrame({"feature_1": [0,1,2,3,4,5,6,7,8,9,10],
"feature_2": [0,np.NaN,20,30,40,50,60,70,80,np.NaN,100],
})
df
# ### Mean imputation
from sklearn.impute import SimpleImputer
mean_imputer = SimpleImputer(missing_values=np.NaN, strategy='mean')
mean_imputer
mean_imputer.fit(df)
nparray_imputed_mean = mean_imputer.transform(df)
nparray_imputed_mean
# Notice how the missing values are replaced with `50` in both cases.
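# The same result can be obtained (as a cross-check, staying in pandas) with `fillna` and the column means:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({"feature_1": [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10],
                   "feature_2": [0, np.nan, 20, 30, 40, 50, 60, 70, 80, np.nan, 100]})
filled = df.fillna(df.mean())  # column-wise mean imputation
print(filled["feature_2"].tolist())  # both NaNs become 50.0, the column mean
```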
# ### Regression Imputation
from sklearn.experimental import enable_iterative_imputer
from sklearn.impute import IterativeImputer
reg_imputer = IterativeImputer()
reg_imputer
reg_imputer.fit(df)
nparray_imputed_reg = reg_imputer.transform(df)
nparray_imputed_reg
# Notice how the filled in values are replaced with `10` and `90` when using regression imputation. The imputation assumed a linear relationship between feature 1 and feature 2.
# ### This is the end of this practice section.
#
# Please continue on with the lecture videos!
#
# ---
| AI_for_Medical_Prognosis/Week2/Ungraded_Exercises/C2_W2_lecture.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # <span style="color:orange">Natural Language Processing Tutorial (NLP101) - Level Beginner</span>
# **Created using: PyCaret 2.0** <br /> **Date Updated: August 24, 2020**
#
# # 1.0 Tutorial Objective
#
# Welcome to the Natural Language Processing Tutorial **(NLP101)**. This tutorial assumes that you are new to PyCaret and looking to get started with Natural Language Processing using the `pycaret.nlp` module.
#
# In this tutorial we will learn:
#
#
# - **Getting Data:** How to import data from PyCaret's repository?
# - **Setting up Environment:** How to set up an environment in PyCaret and perform critical text pre-processing tasks?
# - **Create Model:** How to create a topic model?
# - **Assign Model:** How to assign documents/text to topics using a trained model?
# - **Plot Model:** How to analyze the topic model or the overall corpus using various plots?
# - **Save / Load Model:** How to save/load model for future use?
#
# Read Time: Approx. 30 Minutes
#
#
# ## 1.1 Installing PyCaret
#
# The first step to getting started with PyCaret is to install pycaret. Installation is easy and takes only a few minutes. Follow the instructions below.
#
# #### Installing PyCaret in Local Jupyter Notebook
#
# Run `pip install pycaret` to install PyCaret in a Jupyter Notebook.
#
# #### Installing PyCaret on Google Colab or Azure Notebooks
#
# `!pip install pycaret`
#
#
# ## 1.2 Pre-Requisites
#
# - Python 3.6 or greater
# - PyCaret 2.0 or greater
# - Internet connection to load data from pycaret's repository
# - Basic knowledge of NLP
#
# ## 1.3 For Google Colab users
#
# If you are running this notebook on Google Colab, you must run the following code at the top of the notebook to display interactive visuals:
#
# `from pycaret.utils import enable_colab`
#
# `enable_colab()`
#
# ## 1.4 See also:
# - __[Natural Language Processing Tutorial (NLP102) - Level Intermediate](https://github.com/pycaret/pycaret/blob/master/tutorials/Natural%20Language%20Processing%20Tutorial%20Level%20Intermediate%20-%20NLP102.ipynb)__
# - __[Natural Language Processing Tutorial (NLP103) - Level Expert](https://github.com/pycaret/pycaret/blob/master/tutorials/Natural%20Language%20Processing%20Tutorial%20Level%20Expert%20-%20NLP103.ipynb)__
# # 2.0 What is Natural Language Processing?
#
# Natural Language Processing (NLP) is a branch of artificial intelligence concerned with analyzing, understanding, and generating the languages that humans use naturally, so that people can interact with computers in written and spoken contexts using natural human language rather than computer languages. Common use cases of NLP in machine learning include:
#
# - **Topic Discovery and Modeling:** Capture the meaning and themes in text collections and apply advanced modeling techniques such as topic modeling to group similar documents together.
# - **Document Summarization:** Automatically generate summaries of large bodies of text.
# - **Speech-to-Text and Text-to-Speech Conversion:** Transform voice commands into written text, and vice versa.
# - **Machine Translation:** Automatic translation of text or speech from one language to another.
#
# __[Learn more about Natural Language Processing](https://en.wikipedia.org/wiki/Natural_language_processing)__
# # 3.0 Overview of Natural Language Processing Module in PyCaret
# PyCaret's NLP module (`pycaret.nlp`) is an unsupervised machine learning module that can be used to analyze text data by creating topic models to find hidden semantic structure in documents. PyCaret's NLP module comes with a wide range of built-in text pre-processing techniques, which are the fundamental steps of any NLP problem: they transform raw text into a format that machine learning algorithms can learn from.
#
# As of its first release, PyCaret's NLP module only supports the `English` language and provides several well-known implementations of topic models, from Latent Dirichlet Allocation to Non-Negative Matrix Factorization. It also offers over 5 algorithms and more than 10 plots ready to use for analyzing text. PyCaret's NLP module also implements a unique function `tune_model()` that can tune the hyperparameters of a topic model to optimize a supervised-learning objective, such as `AUC` for classification or `R2` for regression.
# # 4.0 Dataset for the Tutorial
# For this tutorial we will be using data from **Kiva Microfunds** https://www.kiva.org/, a non-profit organization that allows individuals to lend money to low-income entrepreneurs and students around the world. Since it began in 2005, Kiva has crowdfunded millions of loans with a repayment rate of roughly 98%. On Kiva, each loan request includes both conventional demographic information about the borrower, such as gender and location, and a personal story. In this tutorial we will use the text given in the personal stories to gain insight into the dataset and understand the hidden semantic structure in the text. The dataset contains 6,818 samples. Short descriptions of each feature are as follows:
#
# - **country:** country of borrower
# - **en:** personal story of borrower when applied for loan
# - **gender:** Gender (M=male, F=female)
# - **loan_amount:** amount of loan approved and disbursed
# - **nonpayment:** Type of lender (Lender = registered individual user on the Kiva website, Partner = microfinance institution working with Kiva to find and fund loans)
# - **sector:** sector of borrower
# - **status:** status of loan (1-default, 0-repaid)
#
# In this tutorial we will only use the `en` column to create a topic model. In the next tutorial, __[Natural Language Processing (NLP102) - Level Intermediate](https://github.com/pycaret/pycaret/blob/master/tutorials/Natural%20Language%20Processing%20Tutorial%20Level%20Intermediate%20-%20NLP102.ipynb)__, we will use the topic model to build a classifier that predicts the `status` of a loan, i.e. whether an applicant will default.
#
# #### Dataset Acknowledgement: Kiva Microfunds https://www.kiva.org/
# # 5.0 Getting the Data
# You can download the data from PyCaret's git repository __[Click Here to Download](https://github.com/pycaret/pycaret/blob/master/datasets/kiva.csv)__ or load it using the `get_data()` function (requires an internet connection).
from pycaret.datasets import get_data
data = get_data('kiva')
#check the shape of data
data.shape
# sampling the data to select only 1000 documents
data = data.sample(1000, random_state=786).reset_index(drop=True)
data.shape
# # 6.0 Setting up Environment in PyCaret
# The `setup()` function initializes the environment in pycaret and performs several text pre-processing steps that are imperative for working with natural language processing problems. setup must be called before executing any other function in pycaret. It takes two parameters: a pandas dataframe and the name of the text column passed as the `target` parameter. You can also pass a `list` containing the text, in which case no `target` parameter is needed. When setup is executed, the following pre-processing steps are applied automatically:
#
# - **Removing Numeric Characters:** All numeric characters are removed from the text.
# - **Removing Special Characters:** All non-alphanumeric special characters are removed.
# - **Word Tokenization:** Word tokenization is the process of splitting a large sample of text into words. This is a core requirement in natural language processing tasks where each word needs to be captured separately for further analysis. __[Read More](https://nlp.stanford.edu/IR-book/html/htmledition/tokenization-1.html)__
# - **Stopword Removal:** Stopwords (or stop words) are words which may carry linguistic meaning but are often removed from text because they are common and add little value for information retrieval. Examples of such words in English are "the", "a", "an", and "in". __[Read More](https://en.wikipedia.org/wiki/Stop_words)__
# - **Bigram Extraction:** A bigram is a sequence of two adjacent elements from a string of tokens, which are typically letters, syllables, or words. For example, the words New York are recognized as two separate words "New" and "York" when tokenized, but if they are repeated often enough, bigram extraction will represent them as a single token, i.e. "New_York". __[Read More](https://en.wikipedia.org/wiki/Bigram)__
# - **Trigram Extraction:** Similar to bigram extraction, a sequence of three adjacent elements in a string of tokens is represented as a trigram. __[Read More](https://en.wikipedia.org/wiki/Trigram)__
# - **Lemmatizing:** Lemmatization is the process of grouping together the inflected forms of a word so they can be analyzed as a single item, identified by the word's lemma, or dictionary form. In English, words appear in several inflected forms. For example, the verb "to walk" may appear as "walk", "walked", "walks", or "walking". The base form "walk", which one might look up in a dictionary, is called the lemma of the word. __[Read More](https://en.wikipedia.org/wiki/Lemmatisation)__
# - **Custom Stopwords:** Often a text contains words that are not stopwords by the rules of the language yet add little or no information. For example, this tutorial uses a loans dataset, so words like "loan", "bank", "money", and "business" are too obvious to add any value and may also introduce a lot of noise into the topic model. Such words can be removed from the corpus using the `custom_stopwords` parameter. In the next tutorial, __[Natural Language Processing Tutorial (NLP102) - Level Intermediate](https://github.com/pycaret/pycaret/blob/master/tutorials/Natural%20Language%20Processing%20Tutorial%20Level%20Intermediate%20-%20NLP102.ipynb)__, we will demonstrate the use of the `custom_stopwords` parameter inside `setup()`.
#
# **Note:** Some functionalities in `pycaret.nlp` require an English language model. The language model is not downloaded automatically when you install pycaret. You will have to download it from a Python command-line interface such as the Anaconda Prompt. To download the language model, type the following in the command line:
#
# `python -m spacy download en_core_web_sm` <br/> `python -m textblob.download_corpora`
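# As a rough illustration of the bigram-extraction idea described above (simple frequency counting, not PyCaret's internal implementation), frequent adjacent token pairs can be merged into single tokens:

```python
from collections import Counter

# Toy corpus of pre-tokenized documents (hypothetical examples)
docs = [
    ["new", "york", "is", "large"],
    ["she", "moved", "to", "new", "york"],
    ["new", "york", "has", "many", "banks"],
]

# Count every pair of adjacent tokens across all documents
pair_counts = Counter((a, b) for doc in docs for a, b in zip(doc, doc[1:]))

# Merge pairs that occur at least `min_count` times into single tokens
def merge_bigrams(doc, pair_counts, min_count=2):
    merged, i = [], 0
    while i < len(doc):
        if i + 1 < len(doc) and pair_counts[(doc[i], doc[i + 1])] >= min_count:
            merged.append(doc[i] + "_" + doc[i + 1])
            i += 2
        else:
            merged.append(doc[i])
            i += 1
    return merged

print(merge_bigrams(docs[0], pair_counts))  # ['new_york', 'is', 'large']
```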
from pycaret.nlp import *
exp_nlp101 = setup(data = data, target = 'en', session_id = 123)
# Once the setup is successfully executed it prints the information grid with the following information:
#
# - **session_id :** A pseudo-random number distributed as a seed in all functions for later reproducibility. If no `session_id` is passed, a random number is automatically generated and distributed to all functions. In this experiment the session_id is set to `123` for later reproducibility.<br/>
# <br/>
# - **# Documents :** Number of documents (or samples in dataset if dataframe is passed). <br/>
# <br/>
# - **Vocab Size :** Size of vocabulary in the corpus after applying all text pre-processing such as removal of stopwords, bigram/trigram extraction, lemmatization etc. <br/>
#
# Notice that all text pre-processing steps are performed automatically when you execute `setup()`. These steps are imperative to perform any NLP experiment. `setup()` function prepares the corpus and dictionary that is ready-to-use for the topic models that you can create using `create_model()` function. Another way to pass the text is in the form of list in which case no `target` parameter is needed.
# # 7.0 Create a Topic Model
# **What is Topic Model?** In machine learning and natural language processing, a topic model is a type of statistical model for discovering the abstract "topics" that occur in a collection of documents. Topic modeling is a frequently used text-mining tool for discovery of hidden semantic structures in a text body. Intuitively, given that a document is about a particular topic, one would expect particular words to appear in the document more or less frequently: "dog" and "bone" will appear more often in documents about dogs, "cat" and "meow" will appear in documents about cats, and "the" and "is" will appear equally in both. A document typically concerns multiple topics in different proportions; thus, in a document that is 10% about cats and 90% about dogs, there would probably be about 9 times more dog words than cat words. The "topics" produced by topic modeling techniques are clusters of similar words. A topic model captures this intuition in a mathematical framework, which allows examining a set of documents and discovering, based on the statistics of the words in each, what the topics might be and what each document's balance of topics is. __[Read More](https://en.wikipedia.org/wiki/Topic_model)__
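# The cats/dogs intuition above can be made concrete with a tiny numerical sketch (all probabilities here are made up for illustration):

```python
import numpy as np

# Hypothetical topic-word probabilities over a 3-word vocabulary
vocab = ["dog", "cat", "the"]
topics = np.array([
    [0.6, 0.0, 0.4],  # a "dogs" topic
    [0.0, 0.6, 0.4],  # a "cats" topic
])

# A document that is 90% about dogs and 10% about cats
doc_mixture = np.array([0.9, 0.1])

# Expected word distribution for this document
word_probs = doc_mixture @ topics
for w, p in zip(vocab, word_probs):
    print(w, round(float(p), 2))

# "dog" words are expected about 9x as often as "cat" words
print(round(float(word_probs[0] / word_probs[1]), 2))  # 9.0
```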
#
# Creating a topic model in PyCaret is simple and similar to how you would create a model in the supervised modules of pycaret. A topic model is created using the `create_model()` function, which takes one mandatory parameter: the name of the model as a string. This function returns a trained model object. There are 5 topic models available in PyCaret; see the docstring of `create_model()` for the complete list of models. See an example below where we create a Latent Dirichlet Allocation (LDA) model:
lda = create_model('lda')
print(lda)
# We have created a Latent Dirichlet Allocation (LDA) model with just one word, i.e. `create_model()`. Notice the `num_topics` parameter is set to `4`, which is the default value used when you do not pass the `num_topics` parameter to `create_model()`. In the example below, we will create an LDA model with 6 topics and also set the `multi_core` parameter to `True`. When `multi_core` is set to `True`, Latent Dirichlet Allocation (LDA) uses all CPU cores to parallelize and speed up model training.
lda2 = create_model('lda', num_topics = 6, multi_core = True)
print(lda2)
# # 8.0 Assign a Model
# Now that we have created a topic model, we would like to assign the topic proportions to our dataset (the 1,000 sampled documents) to analyze the results. We will achieve this using the `assign_model()` function. See an example below:
lda_results = assign_model(lda)
lda_results.head()
# Notice how 6 additional columns are now added to the dataframe. `en` is the text after all pre-processing. `Topic_0 ... Topic_3` are the topic proportions and represent the distribution of topics for each document. `Dominant_Topic` is the topic number with the highest proportion, and `Perc_Dominant_Topic` is the percentage of the dominant topic over 1 (only shown when models are stochastic, i.e. the sum of all proportions equals 1).
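# As an illustration (with made-up proportions, not the real output), the `Dominant_Topic` and `Perc_Dominant_Topic` columns can be reproduced by hand from the proportion columns:

```python
import pandas as pd

# Made-up topic proportions for three documents
df = pd.DataFrame({
    "Topic_0": [0.10, 0.70, 0.25],
    "Topic_1": [0.20, 0.10, 0.25],
    "Topic_2": [0.05, 0.10, 0.25],
    "Topic_3": [0.65, 0.10, 0.25],
})

topic_cols = ["Topic_0", "Topic_1", "Topic_2", "Topic_3"]

# Dominant topic = column with the highest proportion per row
df["Dominant_Topic"] = df[topic_cols].idxmax(axis=1)

# Its share of the (unit-sum) proportions
df["Perc_Dominant_Topic"] = df[topic_cols].max(axis=1)

print(df[["Dominant_Topic", "Perc_Dominant_Topic"]])
```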
# # 9.0 Plot a Model
# `plot_model()` function can be used to analyze the overall corpus or only specific topics extracted through topic model. Hence the function `plot_model()` can also work without passing any trained model object. See examples below:
# ### 9.1 Frequency Distribution of Entire Corpus
plot_model()
# ### 9.2 Top 100 Bigrams on Entire Corpus
plot_model(plot = 'bigram')
# ### 9.3 Frequency Distribution of Topic 1
# `plot_model()` can also be used to analyze the same plots for specific topics. To generate plots at topic level, function requires trained model object to be passed inside `plot_model()`. In example below we will generate frequency distribution on `Topic 1` only as defined by `topic_num` parameter.
plot_model(lda, plot = 'frequency', topic_num = 'Topic 1')
# ### 9.4 Topic Distribution
plot_model(lda, plot = 'topic_distribution')
# Each document is a distribution over topics, not a single topic. However, if the task is to categorize documents into specific topics, it wouldn't be wrong to use the topic with the highest proportion to assign each document to **a topic**. In the plot above, each document is categorized into one topic using the largest topic weight. We can see most of the documents are in `Topic 3`, with only a few in `Topic 1`. If you hover over these bars, you will get a basic idea of the themes in each topic by looking at the keywords. For example, if you evaluate `Topic 2`, you will see keywords like 'farmer', 'rice', and 'land', which probably means that the loan applicants in this category applied for agricultural/farming loans. However, if you hover over `Topic 0` and `Topic 3` you will observe a lot of repetition, and keywords overlap across topics; for example, the words "loan" and "business" appear in both `Topic 0` and `Topic 3`. In the next tutorial, __[Natural Language Processing Tutorial (NLP102) - Level Intermediate](https://github.com/pycaret/pycaret/blob/master/Tutorials/Natural%20Language%20Processing%20Tutorial%20Level%20Intermediate%20-%20NLP102.ipynb)__, we will demonstrate the use of `custom_stopwords`, at which point we will re-analyze this plot.
# ### 9.5 T-distributed Stochastic Neighbor Embedding (t-SNE)
plot_model(lda, plot = 'tsne')
# T-distributed Stochastic Neighbor Embedding (t-SNE) is a nonlinear dimensionality reduction technique well-suited for embedding high-dimensional data for visualization in a low-dimensional space of two or three dimensions.
#
# __[Learn More](https://en.wikipedia.org/wiki/T-distributed_stochastic_neighbor_embedding)__
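# As a rough standalone sketch (using scikit-learn directly rather than PyCaret's plot), t-SNE maps a high-dimensional matrix, such as document-topic proportions, to 2-D points suitable for a scatter plot:

```python
import numpy as np
from sklearn.manifold import TSNE

rng = np.random.default_rng(0)
# Stand-in for document-topic proportions: 60 documents x 6 topics
doc_topic = rng.random((60, 6))

# Embed into 2 dimensions for visualization
embedding = TSNE(n_components=2, perplexity=15, random_state=0).fit_transform(doc_topic)
print(embedding.shape)  # (60, 2)
```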
# ### 9.6 Uniform Manifold Approximation and Projection Plot
plot_model(lda, plot = 'umap')
# UMAP (Uniform Manifold Approximation and Projection) is a novel manifold learning technique for dimensionality reduction. It is similar to tSNE and PCA in its purpose as all of them are techniques to reduce dimensionality for 2d/3d projections. UMAP is constructed from a theoretical framework based in Riemannian geometry and algebraic topology.
#
# __[Learn More](https://towardsdatascience.com/how-exactly-umap-works-13e3040e1668)__
# # 10.0 Evaluate Model
# Another way to analyze the performance of models is to use the `evaluate_model()` function, which displays a user interface for all of the available plots for a given model. It internally uses the `plot_model()` function. See the example below, where we generate the Sentiment Polarity plot for `Topic 3` using the LDA model stored in the `lda` variable.
evaluate_model(lda)
# # 11.0 Saving the model
# As you get deeper into Natural Language Processing, you will learn that the training time of topic models increases exponentially as the size of the corpus increases. If you would like to continue your experiment or analysis at a later point, you don't need to repeat the entire experiment and re-train your model. PyCaret's inbuilt function `save_model()` allows you to save the model for later use.
save_model(lda,'Final LDA Model 08Feb2020')
# # 12.0 Loading the model
# To load a saved model at a future date, in the same or a different environment, we use PyCaret's `load_model()` function.
saved_lda = load_model('Final LDA Model 08Feb2020')
print(saved_lda)
# # 13.0 Wrap-up / Next Steps?
# What we have covered in this tutorial is the entire workflow of a Natural Language Processing experiment. Our task today was to create and analyze a topic model. We performed several text pre-processing steps using `setup()`, created a topic model using `create_model()`, assigned topics to the dataset using `assign_model()`, and analyzed the results using `plot_model()`. All of this was completed in fewer than 10 commands that are naturally constructed and very intuitive to remember. Re-creating the entire experiment without PyCaret would have taken well over 100 lines of code.
#
# In this tutorial, we have only covered the basics of `pycaret.nlp`. In the next tutorial we will demonstrate the use of `tune_model()` to automatically select the number of topics for a topic model. We will also go deeper into a few concepts and techniques, such as `custom_stopwords`, to improve the results of a topic model.
#
# See you at the next tutorial. Follow the link to __[Natural Language Processing (NLP102) - Level Intermediate](https://github.com/pycaret/pycaret/blob/master/tutorials/Natural%20Language%20Processing%20Tutorial%20Level%20Intermediate%20-%20NLP102.ipynb)__
| tutorials/Japanese/JA-Natural Language Processing Tutorial Level Beginner - NLP101.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + id="lMV49YYKJP4W" colab={"base_uri": "https://localhost:8080/"} outputId="a0d77f53-e9ea-4ebc-c630-e3619b42a11e"
import glob
import os
import librosa
import numpy as np
# !pip install pretty_midi
import pretty_midi
# + id="7S3iwqnIPlqF" colab={"base_uri": "https://localhost:8080/"} outputId="941ffb87-5b5c-4b32-cdb6-57d7a7284056"
from google.colab import drive
drive.mount('/content/drive', force_remount=True)
# + [markdown] id="UfyfIelgQXWP"
# **Please update the start path and destination path**
# + id="n4CfI2XnQTyp"
start ='/content/drive/MyDrive/MUS' # Divide all 9 directories of the MAPS Dataset into train/test/val and provide its path (format eg. test/.wav)
dest = '/content/drive/MyDrive/cqt_test_train' # Destination Path for storing the .npy files
# + id="qh_QyIq-PoTO"
RangeMIDInotes=[21,108]
sr=16000.
bins_per_octave=36
n_octave=7
val_rate=1./7
pretty_midi.pretty_midi.MAX_TICK = 1e10
n_bins= n_octave * bins_per_octave
hop_length = 512
win_width = 32
kernel_size=7
overlap=True
# + id="X<KEY>"
def midi2mat(midi_path_train, length, cqt_len, sr, RangeMIDInotes=RangeMIDInotes):
midi_data = pretty_midi.PrettyMIDI(midi_path_train)
pianoRoll = midi_data.instruments[0].get_piano_roll(fs=cqt_len * sr/length)
Ground_truth_mat = (pianoRoll[RangeMIDInotes[0]:RangeMIDInotes[1] + 1, :cqt_len] > 0)
return Ground_truth_mat
# + id="vZCL6-i8QxTh" colab={"base_uri": "https://localhost:8080/"} outputId="fb6fced0-c6a6-4b96-be75-4ff30e399eaa"
fil = [direc for direc in os.listdir(start)]
for direc in fil:
j=0
k=0
startpath= os.path.join(start,direc)
destpath = os.path.join(dest,direc)
if not os.path.exists(destpath):
os.makedirs(destpath)
print(direc)
files = [f for f in os.listdir(startpath)]
for f in files:
fpath=startpath
f1=f
if 1:
ffile=os.path.join(fpath,f1)
file_name,file_extensions=os.path.splitext(f1)
if file_extensions == '.txt':
continue
if file_extensions==".mid":
ffile=os.path.join(fpath,file_name+'.wav')
x,sr = librosa.load(ffile,sr=sr)
# perform the constant-Q transform (CQT) to obtain a 2D array of complex values (frequencies)
cqt_file = librosa.cqt(x,sr=sr,fmin=librosa.note_to_hz('A0'),hop_length = hop_length,n_bins=n_bins,bins_per_octave=bins_per_octave,scale=False)
# taking their absolute values
cqt_abs = np.abs(cqt_file)
# converting the frequency data to a logarithmic (dB) scale for better visualisation
cqt = np.transpose(librosa.amplitude_to_db(cqt_abs))
midi_file = os.path.join(fpath,f1)
if file_extensions==".wav":
midi_file = os.path.join(fpath,file_name+'.mid')
Ground_truth_mat=midi2mat(midi_file, len(x), cqt.shape[0], sr, RangeMIDInotes=RangeMIDInotes)
midi_train = np.transpose(Ground_truth_mat)
#midi length < cqt length, cut cqt
if midi_train.shape[0]<cqt.shape[0]:
cqt=cqt[:midi_train.shape[0],:]
if file_extensions == ".wav" :
ofolder = 'wav'
subname = 'CQT'
no=j
elif file_extensions == ".mid" :
ofolder = 'mid'
subname = 'label'
no=k
opath = os.path.join(destpath,f,ofolder,file_name)+subname+'.npy'
temp_path = os.path.join(destpath,f,ofolder)
if not os.path.exists(temp_path):
os.makedirs(temp_path)
if file_extensions == ".wav":
np.save(opath,cqt)
elif file_extensions == ".mid":
np.save(opath,midi_train)
# print('Preprocessed ',f1)
# cut the tensor along the first axis by the win_width with a single frame hop
matrix = np.array(np.load(opath))
l=matrix.shape[0]
cut_matrix=[]
nb_win=int(l/win_width) #integer division=floor
if not overlap:
for i in range(nb_win):
cut_matrix.append(matrix[i*win_width:(i+1)*win_width,:])
else:
w=matrix.shape[1]
matrix_1=np.concatenate([np.zeros([int(kernel_size/2),w]),matrix,np.zeros([int(kernel_size/2),w])],axis=0) #padding
cut_matrix = []
for i in range(nb_win):
cut_matrix.append(matrix_1[i * win_width:(i + 1) * win_width+kernel_size-1,:])
cut_matrix = np.asarray(cut_matrix)
os.remove(opath)
# print("Removed ",f1)
if file_extensions == ".wav":
if j == 0:
X = cut_matrix
#print(cut_matrix.shape)
else:
X = np.concatenate((X,cut_matrix),axis=0)
#print(cut_matrix.shape)
j=j+1
elif file_extensions == ".mid":
if k == 0:
Y = cut_matrix
#print(cut_matrix.shape)
else:
Y = np.concatenate((Y,cut_matrix),axis=0)
#print(cut_matrix.shape)
k=k+1
print('Joined ',f1,"no ",no)
# print('--------------')
os.rmdir(temp_path)
os.rmdir(os.path.join(destpath,f))
X = np.expand_dims(X,axis=-2)
Y = np.expand_dims(Y,axis=-2)
opath1= os.path.join(destpath,"X_final_CQT_")+direc+'.npy'
opath2= os.path.join(destpath,"Y_final_CQT_")+direc+'.npy'
np.save(opath1,X)
np.save(opath2,Y)
# print('Saved X_train final')
# print('Saved Y_train final')
# print('X_train_Shape -',X.shape)
# print('Y_train_Shape -',Y.shape)
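# The overlapping window cut performed inside the loop above can be sketched in isolation on toy data (same `win_width`/`kernel_size` convention as the script):

```python
import numpy as np

win_width, kernel_size = 32, 7

# Toy "CQT" matrix: 100 frames x 12 bins
matrix = np.arange(100 * 12, dtype=np.float32).reshape(100, 12)

# Zero-pad kernel_size//2 frames on each side, then cut windows of
# win_width + kernel_size - 1 frames with a hop of win_width frames
pad = kernel_size // 2
padded = np.concatenate(
    [np.zeros((pad, matrix.shape[1])), matrix, np.zeros((pad, matrix.shape[1]))], axis=0
)
n_win = matrix.shape[0] // win_width
windows = np.asarray(
    [padded[i * win_width : (i + 1) * win_width + kernel_size - 1] for i in range(n_win)]
)
print(windows.shape)  # (3, 38, 12)
```

Each window carries `kernel_size - 1` frames of context, so a convolution with a length-7 kernel applied to a window yields exactly `win_width` output frames.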
| MODEL -2 (CNN)/CQT_CNN_PreprocessingCode_Model2.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: local-venv
# language: python
# name: local-venv
# ---
# # Silvia Control
# Evaluation of espresso machine control theory based upon [Control Bootcamp](https://www.youtube.com/playlist?list=PLMrJAkhIeNNR20Mz-VpzgfQs5zrYi085m) by <NAME>.
# ## Boundary Diagram
# <img src="boiler_boundary_diagram.png" width="400" />
#
# ## System Dynamics
#
# ### Differential equations
# The rate of change of internal energy $\frac{dU}{dt}$ of an incompressible substance is as follows, where $c$ is the specific heat:
#
# $$ \frac{dU}{dt} = m . c . \frac{dT}{dt} $$
#
# Heat transfer $Q$ is:
#
# $$ Q = \int_{0}^{\Delta{}t} \dot{Q} dt $$
#
# The heat rate required to increase the boiler temperature is:
#
# $$ \dot{Q}_{net} = c . m . \frac{dT_{boiler}}{dt} $$
#
# The convection gain, $\dot{Q}_{conv}$ is:
#
# $$ \dot{Q}_{conv} = h_{conv} . A . (T_{amb} - T_{boiler}) $$
#
# The net heat input to the boiler, $\dot{Q}_{net}$ is:
#
# $$ \dot{Q}_{net} = \dot{Q}_{heat} + \dot{Q}_{conv} $$
#
# So the differential temperature equation is:
# $$ \frac{dT_{boiler}}{dt} = \frac{u}{cm} + \frac{hA}{cm}(T_{amb} - T_{boiler}) $$
#
# ### Finite time equations
#
# The heat required to increase the boiler temperature is:
#
# $$ Q_{net} = c . m . (T_{boiler}^{(1)} - T_{boiler}^{(0)}) $$
#
# The temperature of the boiler, $T_{boiler}$, after heating is therefore:
# $$ T_{boiler}^{(1)} = \frac{ (\dot{Q}_{heat} + \dot{Q}_{conv}) . \Delta{}t }{ c . m } + T_{boiler}^{(0)} $$
#
# $$ \Delta{}T_{boiler} = \frac{ (\dot{Q}_{heat} + h_{conv} . A . (T_{amb} - T_{boiler})) . \Delta{}t }{ c . m } $$
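# A minimal forward-Euler sketch of the finite-time update above; the property values here are rough placeholders, not the fitted values used later in this notebook:

```python
# Forward-Euler integration of dT = (Q_heat + h*A*(T_amb - T)) * dt / (c*m)
c, m = 375.0, 3.0            # J/(kg.K), kg
h, A = 10.0, 0.0275          # W/(m^2.K), m^2
T_amb, Q_heat = 20.0, 100.0  # degC, W

T, dt = 20.0, 1.0
for _ in range(6000):
    T += (Q_heat + h * A * (T_amb - T)) * dt / (c * m)
print(round(T, 1))  # approaches T_amb + Q_heat/(h*A) ~ 384 degC as t grows
```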
#
# ### Notes
# The heat input $\dot{Q}_{heat}$ to the system equals the electrical power $P_{heat}$ of the element:
# $$ \dot{Q}_{heat} = P_{heat} $$
# ### State
# $$ \vec{x} = \begin{bmatrix} x_0 \end{bmatrix} = \begin{bmatrix} T \end{bmatrix} $$
# ### Control
# Looking at the equations on the form:
# $$ \dot{x} = Ax + Bu $$
#
# To account for the constant term, define new variable $z$:
# $$ z = x + \alpha $$
# where:
# $$ \alpha = \frac{hA}{cm}T_{amb} $$
#
# The A matrix defines the system:
# $$ A = \begin{bmatrix} -\frac{hA}{cm} \end{bmatrix} $$
#
# The B matrix is the control:
# $$ B = \begin{bmatrix} \frac{1}{cm} \end{bmatrix} $$
#
# Putting it together, the full equations are:
# $$ \dot{x} = -\frac{hA}{cm}T + \frac{1}{cm}u + \frac{hA}{cm}T_{amb} $$
# $$ \dot{z} = \begin{bmatrix} -\frac{hA}{cm} \end{bmatrix} z + \begin{bmatrix} \frac{1}{cm} \end{bmatrix} u $$
# ## System Properties
#
# |Property |Quantity |Reference |
# |:- |:- |:- |
# |Boiler Mass |- $kg$ | |
# |Boiler Specific Heat Capacity |375 $\frac{J}{kg.K}$ |[Engineering Toolbox](https://www.engineeringtoolbox.com/specific-heat-capacity-d_391.html) |
# |Water Mass |- $kg$ | |
# |Water Specific Heat Capacity |4182 $\frac{J}{kg.K}$ |[Engineering Toolbox](https://www.engineeringtoolbox.com/specific-heat-capacity-d_391.html) |
# |Natural Convection Coefficient |10 $\frac{W}{m^2.K}$ | |
# +
import sympy as sp
from sympy.utilities.lambdify import lambdify
import numpy as np
import matplotlib.pyplot as plt
import control.matlab as ctrlm
import control as ctrl
from control import place
import slycot
from scipy import integrate
# State
T, dT = sp.symbols("T \dot{T}")
# Properties
m, c, h, T_amb, a = sp.symbols("m, c, h, T_amb, a")
# Control
u = sp.symbols("u")
# -
# ## Define System Matrices
# Define function for the temperature rate
z = sp.Matrix([T - h * a * T_amb / (c * m) ])
u = sp.Matrix([u])
A = sp.Matrix([-h * a / (c * m)])
B = sp.Matrix([1 / (c * m)])
A
T_rate = A*z + B*u
sp.Eq(dT, T_rate, evaluate=False)
# +
r = 0.035
l = 0.09
area = 2 * 3.14 * r**2 + 2 * 3.14 * r * l
properties = {
'm': 3,
'c': 375,
'h': 10,
'T_amb': 20,
'a': area
}
area
# -
# ## State space representation
# Redefine matrices in numpy form
A_np = np.array(A.subs(properties))
B_np = np.array(B.subs(properties))
C_np = np.array([1])
D_np = np.zeros((1))
A_np, B_np
ss_model = ctrl.ss(A_np, B_np, C_np, D_np)
ss_model
def convert_x_z(data, output, properties):
h = properties['h']
a = properties['a']
T_amb = properties['T_amb']
m = properties['m']
c = properties['c']
if output == "x":
return data + h * a * T_amb / (c * m)
else:
return data - h * a * T_amb / (c * m)
# ## Design LQR Controller
Q = np.eye(A.shape[0])
R = 0.001
K = ctrl.lqr(ss_model, Q, R)[0]
K
def heatMass(t, x, properties, uf):
dx = np.empty([1])
h = properties['h']
a = properties['a']
T_amb = properties['T_amb']
m = properties['m']
c = properties['c']
u = uf(x)
dx[0] = (u + h * a * (- x)) / (c * m)
t_log.append(t)
u_log.append(u.item(0))
return dx
# +
t_lims = (0, 6000)
tspan = np.arange(t_lims[0], t_lims[1], 1)
z0 = np.array([convert_x_z(0, 'z', properties)]) # Start temperature
wr = np.array([100]) # Target temperature
# uf = lambda x: -K @ (x - wr)
uf = lambda x: x * 0 + 100
t_log = []
u_log = []
# x = integrate.odeint(heatMass, x0, tspan, args=(properties, uf))
sol = integrate.solve_ivp(heatMass, t_lims, z0, method="DOP853", args=(properties, uf), t_eval=tspan)
sol.message
# -
plt.plot(sol.t, convert_x_z(sol.y[0], "x", properties), linewidth=2, label='T')
plt.scatter(t_log, u_log, linewidth=1, label='u', c="orange")
plt.legend()
plt.xlabel('Time (s)')
plt.ylabel('State')
# +
plt.scatter(t_log, u_log, linewidth=1, label='u')
plt.xlabel('Time (s)')
plt.ylabel('Heat')
plt.legend()
plt.show()
# -
# ## Simulate with control library
t_sample = 5
t_sim = np.arange(0, 6000, t_sample)
u_sim = t_sim * 0 + 100
t_step, y_step, x_step = ctrl.forced_response(ss_model, T=t_sim, U=u_sim, X0=z0)
plt.plot(t_step, convert_x_z(y_step, "x", properties), label='y')
plt.plot(t_step, convert_x_z(x_step, "x", properties), label='x')
plt.plot(t_step, u_sim, '--', label='u')
plt.legend()
plt.xlabel('Time (s)')
# # Compare to measures response
# +
import pickle
with open('dataset_01.pickle', 'rb') as f:
measured_data = pickle.load(f)
power = 1100 # W
plt.plot(measured_data["t"], measured_data["y"], label='y')
plt.plot(measured_data["t"], measured_data["u"], label='u')
plt.legend()
plt.xlabel('Time (s)')
plt.xlabel('Duty [%], Temperature [degC]')
# -
# ## Simulate using measured input
t_sim, y_sim, x_sim = ctrl.forced_response(
ss_model, T=measured_data["t"], U=measured_data["u"]*power/100, X0=measured_data["y"][0]
)
plt.plot(measured_data["t"], measured_data["y"], label='y_measured')
plt.plot(measured_data["t"], measured_data["u"], label='u')
plt.plot(measured_data["t"], convert_x_z(y_sim, "x", properties), label='y_simulated')
plt.legend()
plt.xlabel('Time (s)')
# Try changing the properties.
properties = {
'm': 2,
'c': 50,
'h': 3,
'T_amb': 20,
'a': 0.03
}
A_np = np.array(A.subs(properties))
B_np = np.array(B.subs(properties))
ss_model = ctrl.ss(A_np, B_np, C_np, D_np)
t_sim, y_sim, x_sim = ctrl.forced_response(
ss_model, T=measured_data["t"], U=measured_data["u"], X0=measured_data["y"][0]
)
plt.plot(measured_data["t"], measured_data["y"], label='y_measured')
plt.plot(measured_data["t"], measured_data["u"], label='u')
plt.plot(measured_data["t"], convert_x_z(y_sim, "x", properties), label='y_simulated')
plt.legend()
plt.xlabel('Time (s)')
# The delay in the response is not being modelled, as the boiler is assumed to be at a uniform (lumped) temperature.
| docs/docs/control/assets/.ipynb_checkpoints/silvia_control_notebook-checkpoint.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .jl
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Julia 1.4.0
# language: julia
# name: julia-1.4
# ---
# # 1.0 Load Packages & Testing Data
Threads.nthreads()
# Load Packages
using Clustering
using ParallelKMeans
using BenchmarkTools
using DelimitedFiles
# ## Read Data as transposed matrices since Julia is column major
X_1m = permutedims(DelimitedFiles.readdlm("data_1m.csv", ',', Float64));
X_100k = permutedims(DelimitedFiles.readdlm("data_100k.csv", ',', Float64));
X_10k = permutedims(DelimitedFiles.readdlm("data_10k.csv", ',', Float64));
X_1k = permutedims(DelimitedFiles.readdlm("data_1k.csv", ',', Float64));
# # 2.0 Elbow Method Clustering.jl
@btime [Clustering.kmeans(X_1m, i; tol=1e-6, maxiter=1000).totalcost for i = 2:10]
@btime [Clustering.kmeans(X_100k, i; tol=1e-6, maxiter=1000).totalcost for i = 2:10]
@btime [Clustering.kmeans(X_10k, i; tol=1e-6, maxiter=1000).totalcost for i = 2:10]
@btime [Clustering.kmeans(X_1k, i; tol=1e-6, maxiter=1000).totalcost for i = 2:10]
# # 3.0 Elbow Method Speed ParallelKMeans.jl
# ## Lloyd
@btime [ParallelKMeans.kmeans(Lloyd(), X_1m, i; tol=1e-6, max_iters=1000, verbose=false).totalcost for i = 2:10]
@btime [ParallelKMeans.kmeans(Lloyd(), X_100k, i; tol=1e-6, max_iters=1000, verbose=false).totalcost for i = 2:10]
@btime [ParallelKMeans.kmeans(Lloyd(), X_10k, i; tol=1e-6, max_iters=1000, verbose=false).totalcost for i = 2:10]
@btime [ParallelKMeans.kmeans(Lloyd(), X_1k, i; tol=1e-6, max_iters=1000, verbose=false).totalcost for i = 2:10]
# ## Hamerly
@btime [ParallelKMeans.kmeans(Hamerly(), X_1m, i; tol=1e-6, max_iters=1000, verbose=false).totalcost for i = 2:10]
@btime [ParallelKMeans.kmeans(Hamerly(), X_100k, i; tol=1e-6, max_iters=1000, verbose=false).totalcost for i = 2:10]
@btime [ParallelKMeans.kmeans(Hamerly(), X_10k, i; tol=1e-6, max_iters=1000, verbose=false).totalcost for i = 2:10]
@btime [ParallelKMeans.kmeans(Hamerly(), X_1k, i; tol=1e-6, max_iters=1000, verbose=false).totalcost for i = 2:10]
# ## Elkan
@btime [ParallelKMeans.kmeans(Elkan(), X_1m, i; tol=1e-6, max_iters=1000, verbose=false).totalcost for i = 2:10]
@btime [ParallelKMeans.kmeans(Elkan(), X_100k, i; tol=1e-6, max_iters=1000, verbose=false).totalcost for i = 2:10]
@btime [ParallelKMeans.kmeans(Elkan(), X_10k, i; tol=1e-6, max_iters=1000, verbose=false).totalcost for i = 2:10]
@btime [ParallelKMeans.kmeans(Elkan(), X_1k, i; tol=1e-6, max_iters=1000, verbose=false).totalcost for i = 2:10]
| extras/ClusteringJL & ParallelKMeans Benchmarks Final.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Py3 research env
# language: python
# name: py3_research
# ---
# + colab={} colab_type="code" id="Civy2l4cos1t"
# %matplotlib inline
# -
# _Credits: First two parts of this notebook are based on PyTorch official_ [tensor](https://pytorch.org/tutorials/beginner/blitz/tensor_tutorial.html#sphx-glr-beginner-blitz-tensor-tutorial-py) _and_ [autograd](https://pytorch.org/tutorials/beginner/blitz/autograd_tutorial.html#sphx-glr-beginner-blitz-autograd-tutorial-py) _tutorials._
# + [markdown] colab_type="text" id="-nEtOTFPos11"
#
# Once again, what is PyTorch?
# ================
#
# It’s a Python-based scientific computing package targeted at two sets of
# audiences:
#
# - A replacement for NumPy to use the power of GPUs
# - a deep learning research platform that provides maximum flexibility
# and speed
#
# Getting Started
# ---------------
#
# #### Tensors
#
# Tensors are similar to NumPy’s ndarrays, with the addition being that
# Tensors can also be used on a GPU to accelerate computing.
#
#
# + colab={} colab_type="code" id="wCg57jzaos12"
import torch
# + [markdown] colab_type="text" id="y2XO1pKzos17"
# Construct a 5x3 matrix, uninitialized:
#
#
# + colab={} colab_type="code" id="z4hzsf2Zos2A"
x = torch.empty(5, 3)
print(x)
# + [markdown] colab_type="text" id="s1UXUAyfos2G"
# Construct a randomly initialized matrix:
#
#
# + colab={} colab_type="code" id="2pZuhVZdos2I"
x = torch.rand(5, 3)
print(x)
# + [markdown] colab_type="text" id="XDuGrzjJos2O"
# Construct a matrix filled zeros and of dtype long:
#
#
# + colab={} colab_type="code" id="d7RuB4xhos2P"
x = torch.zeros(5, 3, dtype=torch.long)
print(x)
# + [markdown] colab_type="text" id="dh6Esa3Fos2W"
# Construct a tensor directly from data:
#
#
# + colab={} colab_type="code" id="RnInKPIQos2X"
x = torch.tensor([5.5, 3])
print(x)
# + [markdown] colab_type="text" id="vMJZ3uZQos2q"
# or create a tensor based on an existing tensor. These methods
# will reuse properties of the input tensor, e.g. dtype, unless
# new values are provided by user
#
#
# + colab={} colab_type="code" id="bKfLS4Mios2r"
x = x.new_ones(5, 3, dtype=torch.double) # new_* methods take in sizes
print(x)
x = torch.randn_like(x, dtype=torch.float) # override dtype!
print(x) # result has the same size
# -
import numpy as np
np.random.randint(0, 10, size=(2, 5))  # a random 2x5 integer array
# +
a = np.random.randint(0, 10, size=(2, 5))

# Create a torch tensor from the numpy array and cast it to float32 type
a_t = torch.tensor(a, dtype=torch.float32)  # YOUR CODE HERE
assert a_t.dtype == torch.float32
# -
a_t
# + [markdown] colab_type="text" id="cHcJcupjos2u"
# Get its size:
#
#
# + colab={} colab_type="code" id="glNo_10Kos2v"
print(x.size())
# -
x.shape[:2]
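# Since ``torch.Size`` subclasses ``tuple``, size objects support unpacking and
# indexing like any tuple; a quick sketch:

```python
import torch

x = torch.zeros(5, 3)
rows, cols = x.size()               # tuple-style unpacking
assert (rows, cols) == (5, 3)
assert x.size()[-1] == 3            # tuple-style indexing
assert isinstance(x.size(), tuple)  # torch.Size is a tuple subclass
```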
# + [markdown] colab_type="text" id="_scE1Qvwos21"
# <div class="alert alert-info"><h4>Note</h4><p>``torch.Size`` is in fact a tuple, so it supports all tuple operations.</p></div>
#
# #### Operations
#
# There are multiple syntaxes for operations. In the following
# example, we will take a look at the addition operation.
#
# Addition: syntax 1
#
#
# + colab={} colab_type="code" id="luQAJTT9os22"
y = torch.rand(5, 3)
print(x + y)
# + [markdown] colab_type="text" id="e8s8E-Q2os26"
# Addition: syntax 2
#
#
# + colab={} colab_type="code" id="FLwb3hgyos28"
print(torch.add(x, y))
# + [markdown] colab_type="text" id="2dItCJmIos2_"
# Addition: providing an output tensor as argument
#
#
# + colab={} colab_type="code" id="eTXFN-TTos3B"
result = torch.empty(5, 3)
torch.add(x, y, out=result)
print(result)
# + [markdown] colab_type="text" id="BmLxccujos3F"
# Addition: in-place
#
#
# + colab={} colab_type="code" id="udSpL0x0os3H"
# adds x to y
y.add_(x)
print(y)
# + [markdown] colab_type="text" id="zqQU0LO0os3M"
# <div class="alert alert-info"><h4>Note</h4><p>Any operation that mutates a tensor in-place is post-fixed with an ``_``.
# For example: ``x.copy_(y)``, ``x.t_()``, will change ``x``.</p></div>
#
# You can use standard NumPy-like indexing with all bells and whistles!
#
#
# + colab={} colab_type="code" id="wcuoNbqOos3Q"
print(x[:, 1])
# + [markdown] colab_type="text" id="XyKhJ4E8os3U"
# Resizing: If you want to resize/reshape tensor, you can use ``torch.view``:
#
#
# + colab={} colab_type="code" id="LmCUrv3qos3V"
x = torch.randn(4, 4)
y = x.view(16)
z = x.view(-1, 8) # the size -1 is inferred from other dimensions
print(x.size(), y.size(), z.size())
# + [markdown] colab_type="text" id="Srv3YKMMos3Y"
# If you have a one element tensor, use ``.item()`` to get the value as a
# Python number
#
#
# + colab={} colab_type="code" id="s13XZt8Cos3Z"
x = torch.randn(1)
print(x)
print(x.item())
# + [markdown] colab_type="text" id="orMsQvQAos3c"
# **Read later:**
#
#
# 100+ Tensor operations, including transposing, indexing, slicing,
# mathematical operations, linear algebra, random numbers, etc.,
# are described
# [here](https://pytorch.org/docs/torch.html).
#
# NumPy Bridge
# ------------
#
# Converting a Torch Tensor to a NumPy array and vice versa is a breeze.
#
# The Torch Tensor and NumPy array will share their underlying memory
# locations, and changing one will change the other.
#
# #### Converting a Torch Tensor to a NumPy Array
#
#
# + colab={} colab_type="code" id="BOdXxb_Vos3d"
a = torch.ones(5)
print(a)
# + colab={} colab_type="code" id="68cPoOiIos3i"
b = a.numpy()
print(b)
# + [markdown] colab_type="text" id="N9q-JAGdos3n"
# See how the numpy array changed in value.
#
#
# + colab={} colab_type="code" id="0-AYXf1Los3p"
a.add_(1)
print(a)
print(b)
# + [markdown] colab_type="text" id="8QtC7o0zos3s"
# #### Converting NumPy Array to Torch Tensor
#
# See how changing the np array changed the Torch Tensor automatically
#
#
# + colab={} colab_type="code" id="MxF1_zXDos3t"
import numpy as np
a = np.ones(5)
b = torch.from_numpy(a)
np.add(a, 1, out=a)
print(a)
print(b)
# + [markdown] colab_type="text" id="-w5pWYnaos3w"
# All the Tensors on the CPU except a CharTensor support converting to
# NumPy and back.
#
# CUDA Tensors
# ------------
#
# Tensors can be moved onto any device using the ``.to`` method.
#
#
# + colab={} colab_type="code" id="LHW5QuDSos3x"
# let us run this cell only if CUDA is available
# We will use ``torch.device`` objects to move tensors in and out of GPU
if torch.cuda.is_available():
    device = torch.device("cuda")          # a CUDA device object
    y = torch.ones_like(x, device=device)  # directly create a tensor on GPU
    x = x.to(device)                       # or just use strings ``.to("cuda")``
    z = x + y
    print(z)
    print(z.to("cpu", torch.double))       # ``.to`` can also change dtype together!
# -
z.device
z.shape
a = torch.ones((2, 8))
z.to("cpu") + a
x.device
torch.cuda.is_available()
#
# Autograd: Automatic Differentiation
# ===================================
#
# Central to all neural networks in PyTorch is the ``autograd`` package.
# Let’s first briefly visit this, and we will then go to training our
# first neural network.
#
#
# The ``autograd`` package provides automatic differentiation for all operations
# on Tensors. It is a define-by-run framework, which means that your backprop is
# defined by how your code is run, and that every single iteration can be
# different.
#
# Let us see this in more simple terms with some examples.
#
# Tensor
# --------
#
# ``torch.Tensor`` is the central class of the package. If you set its attribute
# ``.requires_grad`` as ``True``, it starts to track all operations on it. When
# you finish your computation you can call ``.backward()`` and have all the
# gradients computed automatically. The gradient for this tensor will be
# accumulated into ``.grad`` attribute.
#
# To stop a tensor from tracking history, you can call ``.detach()`` to detach
# it from the computation history, and to prevent future computation from being
# tracked.
#
# To prevent tracking history (and using memory), you can also wrap the code block
# in ``with torch.no_grad():``. This can be particularly helpful when evaluating a
# model because the model may have trainable parameters with `requires_grad=True`,
# but for which we don't need the gradients.
#
# There’s one more class which is very important for autograd
# implementation - a ``Function``.
#
# ``Tensor`` and ``Function`` are interconnected and build up an acyclic
# graph, that encodes a complete history of computation. Each tensor has
# a ``.grad_fn`` attribute that references a ``Function`` that has created
# the ``Tensor`` (except for Tensors created by the user - their
# ``grad_fn is None``).
#
# If you want to compute the derivatives, you can call ``.backward()`` on
# a ``Tensor``. If ``Tensor`` is a scalar (i.e. it holds a one element
# data), you don’t need to specify any arguments to ``backward()``,
# however if it has more elements, you need to specify a ``gradient``
# argument that is a tensor of matching shape.
#
#
import torch
# Create a tensor and set requires_grad=True to track computation with it
#
#
x = torch.ones(2, 2, requires_grad=True)
print(x)
# Do an operation of tensor:
#
#
y = x + 2
print(y)
# ``y`` was created as a result of an operation, so it has a ``grad_fn``.
#
#
print(y.grad_fn)
# Do more operations on y
# +
z = y * y * 3
out = z.mean()
print(z, out)
# -
# ``.requires_grad_( ... )`` changes an existing Tensor's ``requires_grad``
# flag in-place. The input flag defaults to ``False`` if not given.
#
#
a = torch.randn(2, 2)
a = ((a * 3) / (a - 1))
print(a.requires_grad)
a.requires_grad_(True)
print(a.requires_grad)
b = (a * a).sum()
print(b.grad_fn)
# Gradients
# ---------
# Let's backprop now
# Because ``out`` contains a single scalar, ``out.backward()`` is
# equivalent to ``out.backward(torch.tensor(1.))``.
#
#
out.backward()
# print gradients d(out)/dx
#
#
#
print(x.grad)
# You should have got a matrix of ``4.5``. Let’s call the ``out``
# *Tensor* “$o$”.
# We have that
# $$o = \frac{1}{4}\sum_i z_i,$$
#
# $$z_i = 3(x_i+2)^2$$ and $$z_i\bigr\rvert_{x_i=1} = 27$$
#
# Therefore,
#
# $$\frac{\partial o}{\partial x_i} = \frac{3}{2}(x_i+2),$$ hence
# $$\frac{\partial o}{\partial x_i}\bigr\rvert_{x_i=1} = \frac{9}{2} = 4.5$$
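# The closed-form gradient can be checked against autograd directly; a small
# self-contained sketch reusing the same construction as above:

```python
import torch

x = torch.ones(2, 2, requires_grad=True)
out = (3 * (x + 2) ** 2).mean()     # o = (1/4) * sum_i 3 * (x_i + 2)^2
out.backward()

expected = 1.5 * (x.detach() + 2)   # d(o)/dx_i = (3/2) * (x_i + 2)
assert torch.allclose(x.grad, expected)  # every entry is 4.5 at x_i = 1
```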
# You can do many crazy things with autograd!
#
#
# +
x = torch.randn(3, requires_grad=True)
y = x * 2
while y.data.norm() < 1000:
    y = y * 2
print(y)
# +
gradients = torch.tensor([0.1, 1.0, 0.0001], dtype=torch.float)
y.backward(gradients)
print(x.grad)
# -
# You can also stop autograd from tracking history on Tensors
# with ``.requires_grad=True`` by wrapping the code block in
# ``with torch.no_grad()``:
#
#
# +
print(x.requires_grad)
print((x ** 2).requires_grad)
with torch.no_grad():
    print((x ** 2).requires_grad)
# -
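# ``.detach()`` (mentioned earlier) is the per-tensor counterpart of
# ``torch.no_grad()``: it returns a tensor with the same values but no history.
# A minimal sketch:

```python
import torch

x = torch.randn(3, requires_grad=True)
y = (x * 2).detach()     # same values, cut off from the graph
assert not y.requires_grad
assert y.grad_fn is None
```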
# **Read Later:**
#
# Documentation of ``autograd`` and ``Function`` is at
# http://pytorch.org/docs/autograd
#
#
def plot_train_process(train_loss, val_loss, train_accuracy, val_accuracy, title_suffix=''):
    fig, axes = plt.subplots(1, 2, figsize=(15, 5))

    axes[0].set_title(' '.join(['Loss', title_suffix]))
    axes[0].plot(train_loss, label='train')
    axes[0].plot(val_loss, label='validation')
    axes[0].legend()

    axes[1].set_title(' '.join(['Validation accuracy', title_suffix]))
    axes[1].plot(train_accuracy, label='train')
    axes[1].plot(val_accuracy, label='validation')
    axes[1].legend()
    plt.show()
# ### Dealing with the simple task
# Now we will tackle the car classification problem with new techniques: neural networks. Let's get started
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
try:
    dataset = pd.read_csv('../datasets/car_dataset/car_data.csv', delimiter=',', header=None).values
except FileNotFoundError:
    # !wget https://raw.githubusercontent.com/neychev/made_nlp_course/master/datasets/car_dataset/car_data.csv -nc
    dataset = pd.read_csv('car_data.csv', delimiter=',', header=None).values
# +
data = dataset[:, :-1].astype(int)
target = dataset[:, -1]
print(data.shape, target.shape)
X_train, X_test, y_train_raw, y_test_raw = train_test_split(data, target, test_size=0.15)
print(X_train.shape, y_train_raw.shape, X_test.shape, y_test_raw.shape)
# -
# Now we need to map all the class labels to numbers with some `dict`. PyTorch does not like non-numeric labels.
mapper = # YOUR CODE HERE
y_train = np.array([mapper[y] for y in y_train_raw])
y_test = np.array([mapper[y] for y in y_test_raw])
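# One possible ``mapper`` (a sketch, not the only solution): enumerate the
# sorted unique labels. The labels below are toy stand-ins for ``y_train_raw``:

```python
import numpy as np

y_train_raw = np.array(['unacc', 'acc', 'good', 'acc'])  # toy stand-in
mapper = {label: i for i, label in enumerate(np.unique(y_train_raw))}
y_train = np.array([mapper[y] for y in y_train_raw])

assert mapper == {'acc': 0, 'good': 1, 'unacc': 2}
assert list(y_train) == [2, 0, 1, 0]
```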
# And let's preprocess the feature matrices as well
# +
from sklearn.preprocessing import StandardScaler
scaler = # YOUR CODE HERE
X_train_scaled = # YOUR CODE HERE
X_test_scaled = # YOUR CODE HERE
# -
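# A sketch of the scaling step: fit ``StandardScaler`` on the training split
# only, then transform both splits (toy matrices stand in for the car data):

```python
import numpy as np
from sklearn.preprocessing import StandardScaler

X_train = np.array([[1., 10.], [3., 20.], [5., 30.]])  # toy stand-in
X_test = np.array([[2., 15.]])

scaler = StandardScaler().fit(X_train)   # statistics come from train only
X_train_scaled = scaler.transform(X_train)
X_test_scaled = scaler.transform(X_test)

assert np.allclose(X_train_scaled.mean(axis=0), 0.0)
assert np.allclose(X_train_scaled.std(axis=0), 1.0)
```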
X_train_scaled
import torch
from torch import nn
from torch.nn import functional as F
import torchsummary
model = nn.Sequential()
model.add_module('l1', nn.Linear(19, 4))
torchsummary.summary(model, (19,))
X_train.shape
opt = torch.optim.AdamW(model.parameters(), lr=3e-3)
# And here comes the loss function as well. `nn.CrossEntropyLoss` combines both log-softmax and `NLLLoss`.
# __Be careful with it! Criterion `nn.CrossEntropyLoss` will still work with log-softmax output, but it won't let you converge to the optimum.__ A small demonstration follows:
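# The claim that ``nn.CrossEntropyLoss`` combines log-softmax and ``NLLLoss``
# is easy to verify on random logits:

```python
import torch
from torch import nn
import torch.nn.functional as F

logits = torch.randn(4, 3)
targets = torch.tensor([0, 2, 1, 2])

ce = nn.CrossEntropyLoss()(logits, targets)                # raw logits in
nll = nn.NLLLoss()(F.log_softmax(logits, dim=1), targets)  # log-probs in
assert torch.allclose(ce, nll)
```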
x = torch.randn((1, 10))
x_new = x
for n_sequential_softmax_transforms in range(1, 5):
    x_new = F.softmax(x_new, dim=1)
    fig = plt.figure()
    plt.bar(range(10), x_new.numpy().squeeze())
    plt.title('N sequential softmax transforms: {}'.format(n_sequential_softmax_transforms))
    plt.show()
# As you can see, the _entropy_ of the labels distribution increases (so it becomes closer to uniform) if you use several `softmax` transformations in a row. But it won't affect the accuracy at first sight, because softmax preserves the maximum value position.
# loss_function = nn.NLLLoss()
loss_function = nn.CrossEntropyLoss()
# +
X_train_torch = torch.tensor(X_train_scaled, dtype=torch.float32)
X_test_torch = torch.tensor(X_test_scaled, dtype=torch.float32)
y_train_torch = torch.tensor(y_train, dtype=torch.long)
y_test_torch = torch.tensor(y_test, dtype=torch.long)
# -
# example loss
loss_function(model(X_train_torch[:3]), y_train_torch[:3])
# +
# # !pip install scikit-plot
# -
from sklearn.metrics import accuracy_score, f1_score
import scikitplot
from torch.optim.lr_scheduler import StepLR, ReduceLROnPlateau
# To reduce learning rate on plateau of the loss functions
lr_scheduler = ReduceLROnPlateau(opt, patience=5)
opt
from IPython import display
# +
train_loss_history = []
train_acc_history = []
val_loss_history = []
val_acc_history = []
local_train_loss_history = []
local_train_acc_history = []
for i in range(5000):
    # sample 256 random observations
    ix = np.random.randint(0, len(X_train), 256)
    x_batch = X_train_torch[ix]
    y_batch = y_train_torch[ix]

    # predict log-probabilities or logits
    ### YOUR CODE

    # compute loss, just like before
    loss = ### YOUR CODE

    # compute gradients
    ### YOUR CODE

    # Adam step
    ### YOUR CODE

    # clear gradients
    ### YOUR CODE

    local_train_loss_history.append(loss.data.numpy())
    local_train_acc_history.append(
        accuracy_score(
            y_batch.detach().numpy(),
            y_predicted.detach().numpy().argmax(axis=1)
        )
    )

    if i % 200 == 0:
        train_loss_history.append(np.mean(local_train_loss_history))
        train_acc_history.append(np.mean(local_train_acc_history))
        local_train_loss_history, local_train_acc_history = [], []

        predictions_val = model(X_test_torch)
        val_loss_history.append(loss_function(predictions_val, y_test_torch).detach().item())
        acc_score_val = accuracy_score(y_test, predictions_val.detach().numpy().argmax(axis=1))
        val_acc_history.append(acc_score_val)

        lr_scheduler.step(train_loss_history[-1])

        display.clear_output()
        plot_train_process(train_loss_history, val_loss_history, train_acc_history, val_acc_history)
# -
# Now we get the predictions
# +
y_predicted_train = model(torch.from_numpy(
X_train_scaled
).type(torch.float32)).detach().numpy()
y_predicted_test = model(torch.from_numpy(
X_test_scaled
).type(torch.float32)).detach().numpy()
# -
print('Accuracy train: {}\nAccuracy test: {}\nf1 train: {}\nf1 test: {}'.format(
accuracy_score(y_train, np.argmax(y_predicted_train, axis=1)),
accuracy_score(y_test, np.argmax(y_predicted_test, axis=1)),
f1_score(y_train, np.argmax(y_predicted_train, axis=1), average='macro'),
f1_score(y_test, np.argmax(y_predicted_test, axis=1), average='macro')
))
print('Accuracy train: {}\nAccuracy test: {}\nf1 train: {}\nf1 test: {}'.format(
accuracy_score(y_train, np.argmax(y_predicted_train, axis=1)),
accuracy_score(y_test, np.argmax(y_predicted_test, axis=1)),
f1_score(y_train, np.argmax(y_predicted_train, axis=1), average='weighted'),
f1_score(y_test, np.argmax(y_predicted_test, axis=1), average='weighted')
))
scikitplot.metrics.plot_roc(y_test, y_predicted_test)
# __Not that good, yeah? Let's get back and make it really work__
# ### And now let's do it with texts
try:
    data = pd.read_csv('../datasets/comments_small_dataset/comments.tsv', sep='\t')
except FileNotFoundError:
    # ! wget https://raw.githubusercontent.com/neychev/made_nlp_course/master/datasets/comments_small_dataset/comments.tsv -nc
    data = pd.read_csv("comments.tsv", sep='\t')
# +
texts = data['comment_text'].values
target = data['should_ban'].values
data[50::200]
# -
from sklearn.model_selection import train_test_split
texts_train, texts_test, y_train, y_test = train_test_split(texts, target, test_size=0.5, random_state=42)
# +
from nltk.tokenize import TweetTokenizer
tokenizer = TweetTokenizer()
preprocess = lambda text: ' '.join(tokenizer.tokenize(text.lower()))
text = 'How to be a grown-up at work: replace "I don\'t want to do that" with "Ok, great!".'
print("before:", text,)
print("after:", preprocess(text),)
# +
# task: preprocess each comment in train and test
texts_train = ### YOUR CODE HERE
texts_test = ### YOUR CODE HERE
# -
assert texts_train[5] == 'who cares anymore . they attack with impunity .'
assert texts_test[89] == 'hey todds ! quick q ? why are you so gay'
assert len(texts_test) == len(y_test)
# +
# task: find up to k most frequent tokens in texts_train,
# sort them by number of occurrences (highest first)
k = min(10000, len(set(' '.join(texts_train).split())))
bow_vocabulary = ### YOUR CODE HERE
print('example features:', sorted(bow_vocabulary)[::100])
# -
def text_to_bow(text):
    """ convert text string to an array of token counts. Use bow_vocabulary. """
    # <YOUR CODE>
    return np.array(<...>, 'float32')
X_train_bow = np.stack(list(map(text_to_bow, texts_train)))
X_test_bow = np.stack(list(map(text_to_bow, texts_test)))
# Small check that everything is done properly
k_max = len(set(' '.join(texts_train).split()))
assert X_train_bow.shape == (len(texts_train), min(k, k_max))
assert X_test_bow.shape == (len(texts_test), min(k, k_max))
assert np.all(X_train_bow[5:10].sum(-1) == np.array([len(s.split()) for s in texts_train[5:10]]))
assert len(bow_vocabulary) <= min(k, k_max)
assert X_train_bow[6, bow_vocabulary.index('.')] == texts_train[6].split().count('.')
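# The asserts above can be satisfied, for example, by a ``Counter``-based
# vocabulary and count vector. A self-contained sketch on a toy corpus (one
# possible implementation, not necessarily the intended one):

```python
from collections import Counter
import numpy as np

texts_train = ['the cat sat', 'the dog sat down', 'cat and dog']  # toy corpus
k = 10

counts = Counter(' '.join(texts_train).split())
bow_vocabulary = [tok for tok, _ in counts.most_common(k)]  # most frequent first
token_to_col = {tok: i for i, tok in enumerate(bow_vocabulary)}

def text_to_bow(text):
    # count how often each vocabulary token appears in the text
    vec = np.zeros(len(bow_vocabulary))
    for tok in text.split():
        if tok in token_to_col:
            vec[token_to_col[tok]] += 1
    return np.array(vec, 'float32')

X = np.stack([text_to_bow(t) for t in texts_train])
assert X.shape == (3, len(bow_vocabulary))
assert X.sum() == sum(len(t.split()) for t in texts_train)  # k covers all tokens here
```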
# Let's solve it using `sklearn` logistic regression model:
from sklearn.linear_model import LogisticRegression
bow_model = LogisticRegression().fit(X_train_bow, y_train)
X_train_bow.shape
bow_model.score(X_train_bow, y_train)
bow_model.score(X_test_bow, y_test)
# +
from sklearn.metrics import roc_auc_score, roc_curve
for name, X, y, model in [
    ('train', X_train_bow, y_train, bow_model),
    ('test ', X_test_bow, y_test, bow_model)
]:
    proba = model.predict_proba(X)[:, 1]
    auc = roc_auc_score(y, proba)
    plt.plot(*roc_curve(y, proba)[:2], label='%s AUC=%.4f' % (name, auc))

plt.plot([0, 1], [0, 1], '--', color='black',)
plt.legend(fontsize='large')
plt.grid()
# -
# And now let's achieve similar results using PyTorch:
# +
linear_model = ### YOUR CODE HERE
opt = ### YOUR CODE HERE
loss_func = ### YOUR CODE HERE
X_train_bow_torch = torch.from_numpy(X_train_bow)
y_train_torch = torch.from_numpy(y_train)
# -
# Simple training loop. No need for batch generation here.
for _ in range(100):
    ### YOUR CODE HERE
y_pred_train = linear_model(X_train_bow_torch).argmax(dim=1).numpy()
y_pred_test = linear_model(torch.from_numpy(X_test_bow)).argmax(dim=1).numpy()
np.mean(y_pred_train == y_train)
np.mean(y_pred_test == y_test)
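# One way to fill in the PyTorch part above — a sketch with toy data standing
# in for ``X_train_bow_torch`` / ``y_train_torch``, using a linear layer with
# two outputs and ``CrossEntropyLoss``:

```python
import torch
from torch import nn

# toy stand-ins for the bag-of-words features and binary labels
X = torch.randn(64, 20)
y = torch.randint(0, 2, (64,))

linear_model = nn.Linear(20, 2)  # logits for the two classes
opt = torch.optim.Adam(linear_model.parameters(), lr=1e-2)
loss_func = nn.CrossEntropyLoss()

for _ in range(100):
    loss = loss_func(linear_model(X), y)
    opt.zero_grad()
    loss.backward()
    opt.step()

preds = linear_model(X).argmax(dim=1)
assert preds.shape == y.shape
```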
# ### Extra stuff: NotMNIST
# ! wget https://raw.githubusercontent.com/neychev/made_nlp_course/master/week01_Intro/notmnist.py -nc
# +
from notmnist import load_notmnist
X_train, y_train, X_test, y_test = load_notmnist()
X_train, X_test = X_train.reshape([-1, 784]), X_test.reshape([-1, 784])
print("Train size = %i, test_size = %i"%(len(X_train),len(X_test)))
# -
for i in [0, 1]:
    plt.subplot(1, 2, i + 1)
    plt.imshow(X_train[i].reshape([28, 28]))
    plt.title(str(y_train[i]))
# +
# Your turn: create a multiclass classifier in here for notMNIST dataset.
# +
# create a network that stacks layers on top of each other
model = nn.Sequential()
# add first "dense" layer with 784 input units and 1 output unit.
model.add_module('l1', # YOUR CODE HERE)
model.add_module('a1', # YOUR CODE HERE)
...
# -
# Take a look at the model structure:
torchsummary.summary(model, (784,))
# Let's check that everything works correctly:
# +
# create dummy data with 3 samples and 784 features
x = torch.tensor(X_train[:3], dtype=torch.float32)
y = torch.tensor(y_train[:3], dtype=torch.float32)
# compute outputs given inputs, both are variables
y_predicted = model(x)[:, 0]
y_predicted # display what we've got
# -
# Let's call the loss function from `torch.nn`.
loss_function = nn.CrossEntropyLoss()
# Define some optimizer for your model parameters:
opt = # YOUR CODE HERE
# Compute the loss for some batch of objects:
loss = # YOUR CODE HERE
# Do a backward pass and optimizator step:
# +
# YOUR CODE HERE
# -
# Finally, implement the optimization pipeline and monitor the model quality during the optimization.
# +
# YOUR CODE HERE
| week01_Intro/week01_PyTorch_hands_on_practice.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ### Install Required Packages
# + pycharm={"name": "#%%\n"}
# ! pip install numpy pandas sklearn seaborn statsmodels matplotlib
# + [markdown] pycharm={"name": "#%% md\n"}
# ### Imports
# + pycharm={"is_executing": true}
import sys
import statistics
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
# %matplotlib inline
plt.rcParams.update({'figure.figsize': (10, 8)})
# -
# ### The KMeans model
# + pycharm={"name": "#%%\n"}
class KMeans:
    def __init__(self, k: int, init_method: str) -> None:
        self.k = k
        self.labels = None
        self.centroids = [None] * k
        self.cost = None
        if init_method == 'random':
            self.init = self._random_init
            self.init_str = 'random'
        elif init_method == 'kmeans++':
            self.init = self._k_means_plus_plus_init
            self.init_str = 'kmeans++'
        else:
            raise Exception("Unknown init method. Available choices are: 'random', 'kmeans++'")

    def _random_init(self, X: np.array) -> None:
        """ Random initialization of KMeans

        :param X: NxD matrix where N: number of examples, D: dimensionality of data
        :return: None
        """
        self.labels = np.random.randint(low=1, high=self.k + 1, size=X.shape[0])

    def _k_means_plus_plus_init(self, X: np.array) -> None:
        """ KMeans++ initialization of KMeans

        :param X: NxD matrix where N: number of examples, D: dimensionality of data
        :return: None
        """
        centroids = list()
        centroids.append(X[np.random.randint(X.shape[0]), :])
        for cluster_id in range(self.k - 1):
            # Initialize a list to store distances of data points from nearest centroid
            dist = []
            for i in range(X.shape[0]):
                point = X[i, :]
                d = sys.maxsize
                # Compute the minimum distance of 'point' from each of the previously selected centroids
                for j in range(len(centroids)):
                    temp_dist = np.sum((point - centroids[j]) ** 2)
                    d = min(d, temp_dist)
                dist.append(d)
            # Select the data point with maximum distance as our next centroid
            dist = np.array(dist)
            next_centroid = X[np.argmax(dist), :]
            centroids.append(next_centroid)
        self.centroids = centroids

    def _calculate_centroids(self, X: np.array) -> None:
        """ Estimates the new centroids of each cluster

        :param X: NxD matrix where N: number of examples, D: dimensionality of data
        :return: None
        """
        for cluster_id in range(1, self.k + 1):
            # Get the indexes of the datapoints that have been assigned to the current cluster
            indexes = np.where(self.labels == cluster_id)
            X_split = X[tuple([indexes[0]])]
            centroid = np.mean(X_split, axis=0)
            if not any(np.isnan(centroid)):
                self.centroids[cluster_id - 1] = centroid

    def _assign_observations_to_clusters(self, X: np.array, predict_mode: bool = False) -> np.ndarray:
        """ Estimates the cluster id of each point

        :param X: NxD matrix where N: number of examples, D: dimensionality of data
        :param predict_mode: Flag that indicates if this method is called from predict()
        :return: The total cost (sum of all euclidean distances of the points)
        """
        np_centroids = np.array(self.centroids)
        distances = None
        for cluster_id in range(np_centroids.shape[0]):
            arr = np.array([np_centroids[cluster_id]] * X.shape[0])
            dist = np.linalg.norm(X - arr, axis=1)
            if distances is not None:
                distances = np.vstack((distances, dist))
            else:
                distances = dist
        if self.k == 1:
            distances = distances[np.newaxis]
        distances = distances.T
        min_indexes = np.argmin(distances, axis=1)
        if not predict_mode:
            self.labels = min_indexes + 1
            return np.sum(np.min(distances, axis=1))
        else:
            return min_indexes + 1

    def fit(self, X: np.array, num_restarts: int):
        """ Fit method that trains the model

        :param X: NxD matrix where N: number of examples, D: dimensionality of data
        :param num_restarts: The number of executions of the algorithm from random starting points.
        :return: None
        """
        best_fit_labels = None
        best_fit_centroids = None
        min_cost = sys.maxsize
        for iteration in range(num_restarts):
            self.init(X)
            prev_labels = None
            loops_to_converge = 0
            while True:
                # If the init method is kmeans++, skip centroid estimation for the first loop
                if self.init_str != 'kmeans++' or loops_to_converge != 0:
                    self._calculate_centroids(X)
                cost = self._assign_observations_to_clusters(X)
                if prev_labels is not None and np.array_equal(prev_labels, self.labels):
                    break
                prev_labels = self.labels
                loops_to_converge += 1
            if cost < min_cost:
                min_cost = cost
                best_fit_labels = self.labels
                best_fit_centroids = self.centroids
        # Make sure that the model will hold the centroids that produce the smallest cost
        self.cost = min_cost
        self.labels = best_fit_labels
        self.centroids = best_fit_centroids

    def predict(self, y: np.array) -> np.array:
        """ Predicts the cluster ids of the points of a matrix

        :param y: The input matrix that contains points to predict their cluster
        :return: Matrix with the cluster ids of each data point of y
        """
        return self._assign_observations_to_clusters(y, predict_mode=True)
# -
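# As a design note, the per-centroid loop in ``_assign_observations_to_clusters``
# can be collapsed into a single broadcasted computation; an equivalent
# vectorized sketch on toy data:

```python
import numpy as np

X = np.random.rand(100, 2)          # toy data
centroids = np.random.rand(3, 2)    # k = 3 toy centroids

# (N, 1, D) - (1, K, D) broadcasts to (N, K, D); norms give an (N, K) matrix
distances = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
labels = distances.argmin(axis=1) + 1       # cluster ids starting at 1
cost = distances.min(axis=1).sum()          # same objective as in the class

assert labels.shape == (100,)
assert set(labels.tolist()) <= {1, 2, 3}
```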
# ### Read datasets
# + pycharm={"name": "#%%\n"}
X_train = np.load('datasets/clustering/X.npy')
print(X_train)
sns.scatterplot(x=X_train[:, 0], y=X_train[:, 1])
plt.show()
# -
# ### Helper plot functions
# + pycharm={"name": "#%%\n"}
def plot_clusters(X: np.array, labels: np.array) -> None:
    """ Scatter plots the dataset, with different colors for each cluster

    :param X: NxD matrix where N: number of examples, D: dimensionality of data
    :param labels: matrix that contains the cluster id of each datapoint of X
    :return: None
    """
    df = pd.DataFrame(np.column_stack((X, labels[:, None])), columns=['x', 'y', 'label'])
    df['label'] = df['label'].astype(int)
    unique = df['label'].unique()
    palette = dict(zip(unique, sns.color_palette(n_colors=len(unique))))
    sns.scatterplot(x=df['x'], y=df['y'], hue=df['label'], palette=palette)
    plt.show()


def plot_objective_per_k(X: np.array, init_method: str) -> None:
    """ Plots the elbow curve for k=[1,20]

    :param X: NxD matrix where N: number of examples, D: dimensionality of data
    :param init_method: Defines the KMeans init method
    :return: None
    """
    k = list(range(1, 21))
    costs = []
    for i in k:
        kmeans = KMeans(i, init_method=init_method)
        kmeans.fit(X, 10)
        costs.append(kmeans.cost)
        # plot_clusters(X, kmeans.labels)
    plt.plot(k, costs, 'bx-')
    plt.xlabel('Values of K')
    plt.ylabel('Cost')
    plt.title(f'The Elbow Curve for Initialization Method: {init_method}')
    plt.xticks(np.arange(min(k), max(k) + 1, 1.0))
    plt.show()
# -
# ### Elbow Curves
# + pycharm={"name": "#%%\n"}
plot_objective_per_k(X_train, 'random')
plot_objective_per_k(X_train, 'kmeans++')
# -
# ### Cluster Plots
# + pycharm={"name": "#%%\n"}
kmeans = KMeans(9, init_method='random')
kmeans.fit(X_train, 10)
plot_clusters(X_train, kmeans.labels)
kmeans = KMeans(9, init_method='kmeans++')
kmeans.fit(X_train, 10)
plot_clusters(X_train, kmeans.labels)
# +
kmeans = KMeans(9, init_method='random')
kmeans.fit(X_train, 30)
plot_clusters(X_train, kmeans.labels)
kmeans = KMeans(9, init_method='kmeans++')
kmeans.fit(X_train, 30)
plot_clusters(X_train, kmeans.labels)
# -
# ### Standard Deviation and Mean for KMeans Random & KMeans++ objective values
#
# Using `k=9` and `num_restarts=1` for 800 runs, we yield the following metrics.
# + pycharm={"name": "#%%\n"}
costs_random = []
costs_kmeans_plus_plus = []
for i in range(800):
    kmeans = KMeans(9, init_method='random')
    kmeans.fit(X_train, 1)
    costs_random.append(kmeans.cost)

    kmeans = KMeans(9, init_method='kmeans++')
    kmeans.fit(X_train, 1)
    costs_kmeans_plus_plus.append(kmeans.cost)
print(f'Standard Deviation of Random Init: {statistics.stdev(costs_random)}')
print(f'Mean of Random Init: {statistics.mean(costs_random)}')
print(f'Standard Deviation of KMeans++ Init: {statistics.stdev(costs_kmeans_plus_plus)}')
print(f'Mean of KMeans++ Init: {statistics.mean(costs_kmeans_plus_plus)}')
| kmeans_svm/kmeans.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # XGBoost Decision Tree Classifier Assignment
# ## By: <NAME> 8)
# # Some Background on The Dataset
# - This dataset also pertains to mental health. It comes from OSMI (Open Sourcing Mental Illness), a nonprofit organization committed to using technology and open source software to improve the mental health of those in tech-related jobs.
# - They conduct a yearly survey that asks people in tech all kinds of questions about their mental health in the context of their tech job, as well as the support that they have or have not received.
# - This dataset is the collection of responses to this survey from 2014-2019!
#
# # What is Our Goal?
# - Predict if a person has sought treatment for any mental health issue given their responses to the survey
#
# the usual imports
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
data = pd.read_csv('./OSMIcleaned.csv')
# preview data
data.head()
# As we can see, there are a lot of invalid values (look at the NaNs), so we are going to have to do some preprocessing!
# # Preprocessing
# function to fill in NaN values
def bestFill(dataset):
    for feature in dataset:
        if dataset[feature].dtype == np.int64:
            dataset[feature] = pd.to_numeric(dataset[feature], errors='coerce').astype(int)
            dataset[feature].fillna(-1, inplace=True)
        elif dataset[feature].dtype == np.float64:
            dataset[feature].fillna(-1, inplace=True)
            dataset[feature] = pd.to_numeric(dataset[feature], errors='coerce').astype(float)
        elif dataset[feature].dtype == object:
            dataset[feature].fillna('NaN', inplace=True)
bestFill(data)
# features we will use to predict y
X = data.drop(['Sought Treatment', 'Describe Past Experience', 'Location', 'Disorder Notes'], axis = 1)
# what we will predict
y= data['Sought Treatment']
# ## Encoding/Scaling Features
# - Why do we scale data?
# - to normalize data within a certain range
# - we will scale our numerical features
# - Why do we encode data?
# - Because numbers are easier to work with 8)
# - we will encode our categorical features
# +
numerical_features = (X.dtypes == 'float') | (X.dtypes == 'int')
categorical_features = ~numerical_features
from sklearn.compose import make_column_transformer
from sklearn.preprocessing import StandardScaler, OneHotEncoder
preprocess = make_column_transformer(
(StandardScaler(), numerical_features),
(OneHotEncoder(), categorical_features), remainder="drop", n_jobs= -1, verbose = True
)
from sklearn.model_selection import train_test_split
X_train, X_test, Y_train, Y_test = train_test_split(X, y, test_size=0.40, random_state=1)
# -
# # Model Evaluation
# - If you want to learn what these metrics mean, I would take a look at sklearn.metrics documentation
# +
from sklearn import metrics
from sklearn.pipeline import make_pipeline
def evaluateModel(model, yPredClass, plot=False):
    # Classification Accuracy: Overall, how often is the classifier correct?
    accuracy = metrics.accuracy_score(Y_test, yPredClass)
    print('Classification Accuracy:', accuracy * 100)

    # Comparing the true and predicted response values
    print('\nTrue:', Y_test.values[0:25])
    print('Pred:', yPredClass[0:25])

    # Metrics computed from a confusion matrix
    confusion = metrics.confusion_matrix(Y_test, yPredClass)
    TP = confusion[1, 1]  # True Positive
    TN = confusion[0, 0]  # True Negative
    FP = confusion[0, 1]  # False Positive
    FN = confusion[1, 0]  # False Negative

    # False Positive Rate: When the actual value is negative, how often is the prediction incorrect?
    false_positive_rate = FP / float(FP + TN)
    print('\nFalse Positive Rate:', false_positive_rate)

    # Precision: When a positive value is predicted, how often is the prediction correct?
    print('Precision:', metrics.precision_score(Y_test, yPredClass))

    # IMPORTANT: first argument is true values, second argument is predicted probabilities
    print('AUC Score:', metrics.roc_auc_score(Y_test, yPredClass))

    # store the predicted probabilities for class 1
    y_pred_prob = model.predict_proba(X_test)[:, 1]

    # visualize Confusion Matrix
    fig = sns.heatmap(confusion, annot=True, fmt="d")
    bottom, top = fig.get_ylim()
    fig.set_ylim(bottom + 0.5, top - 0.5)
    plt.title('Confusion Matrix')
    plt.xlabel('Predicted')
    plt.ylabel('Actual')
    plt.show()

    # histogram of predicted probabilities
    if plot:
        plt.rcParams['font.size'] = 12
        plt.hist(y_pred_prob, bins=4)
        # x-axis limit from 0 to 1
        plt.xlim(0, 1)
        plt.title('Histogram of predicted probabilities')
        plt.xlabel('Predicted probability of treatment')
        plt.ylabel('Frequency')

    # AUC is the percentage of the ROC plot that is underneath the curve
    # Higher value = better classifier
    roc_auc = metrics.roc_auc_score(Y_test, y_pred_prob)

    # roc_curve returns 3 objects: fpr, tpr, thresholds
    # fpr: false positive rate
    # tpr: true positive rate
    fpr, tpr, thresholds = metrics.roc_curve(Y_test, y_pred_prob)
    if plot:
        plt.figure()
        plt.plot(fpr, tpr, label='ROC curve (area = %0.2f)' % roc_auc)
        plt.plot([0, 1], [0, 1], linestyle='--')
        plt.xlim([0.0, 1.0])
        plt.ylim([0.0, 1.0])
        plt.rcParams['font.size'] = 12
        plt.title('ROC curve for treatment classifier')
        plt.xlabel('False Positive Rate (1 - Specificity)')
        plt.ylabel('True Positive Rate (Sensitivity)')
        plt.legend(loc="lower right")
        plt.show()

    return accuracy
# -
# # Using Models!
# - To give you guys some help, I've trained a model using Logistic Regression. The process for training an XgBoost classifier should be very similar!
# +
from sklearn.linear_model import LogisticRegression
def logisticRegression():
modelLogisticRegression = make_pipeline(preprocess, LogisticRegression(solver='liblinear', multi_class='ovr'))
modelLogisticRegression.fit(X_train, Y_train)
# make class predictions for the testing set
y_pred_class = modelLogisticRegression.predict(X_test)
print('############### Logistic Regression ###############')
accuracy_score = evaluateModel(modelLogisticRegression, y_pred_class, True)
logisticRegression()
# -
# ## Now it's your turn to implement your own XGBoost Classifier!
# Some notes:
# - Run `pip install xgboost` to get the xgboost package
# - Once you are done training, try evaluating the accuracy of your model on the test dataset using the `evaluateModel` function above. This model should be more accurate than the Logistic Regression classifier.
# - try playing around with tuning the parameters to get the highest accuracy possible!
# - in particular, try tuning the `eta` and `max_depth` parameters!
# +
import xgboost
############################ TODO ############################
# def xgBoost():
| Analysis/gradient_boosted_trees/XGBoost Implementation.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
# +
# Imports
import pandas as pd
import numpy as np
# machine learning
from sklearn import svm
from sklearn.ensemble import VotingClassifier
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import cross_val_score
from sklearn.ensemble import RandomForestClassifier
from sklearn import preprocessing
# +
# Custom helper functions
# Count the occurrences of each value in a list
def word_count(data_list):
data_dict = {}
data_dict['nan'] = 0
for item in data_list:
if pd.isnull(item):
data_dict['nan'] += 1
else:
if item not in data_dict:
data_dict[item] = 1
else:
data_dict[item] += 1
return data_dict
# Compute accuracy, precision, and recall via cross-validation
def performance(clf, X_train, Y_train, cv_num = 4):
scores = cross_val_score(clf, X_train, Y_train, cv=cv_num , scoring='precision')
print "precision is {}".format(scores.mean())
scores = cross_val_score(clf, X_train, Y_train, cv=cv_num , scoring='recall')
print "recall is {}".format(scores.mean())
scores = cross_val_score(clf, X_train, Y_train, cv=cv_num , scoring='accuracy')
print "accuracy is {}".format(scores.mean())
# -
# get titanic & test csv files as a DataFrame
titanic_df = pd.read_csv("/Users/wy/notebook/kaggle_competitions/titanic/train.csv")
test_df = pd.read_csv("/Users/wy/notebook/kaggle_competitions/titanic/test.csv")
# Take a quick look at what the data looks like
titanic_df.head()
titanic_df.info()
print "--------------"
test_df.info()
titanic_df.columns
# +
# Examine the distribution of values in each column
print word_count(titanic_df['Survived'].tolist())
print word_count(titanic_df[u'Pclass'].tolist())
print word_count(titanic_df[u'Sex'].tolist())
# print word_count(titanic_df[u'SibSp'].tolist())
# print word_count(titanic_df[u'Parch'].tolist())
# print word_count(titanic_df[u'Embarked'].tolist())
# print word_count(titanic_df[u'Fare'].tolist())
# +
# Results
# PassengerId sequential ID, not informative
# Survived {0: 549, 1: 342, 'nan': 0}
# Pclass {1: 216, 2: 184, 3: 491, 'nan': 0}
# Name not informative
# Sex {'female': 314, 'male': 577, 'nan': 0}
# Age 0.42 - 80, 'nan': 177, needs handling
# SibSp {0: 608, 1: 209, 2: 28, 3: 16, 4: 18, 5: 5, 8: 7, 'nan': 0}
# Parch {0: 678, 1: 118, 2: 80, 3: 5, 4: 4, 5: 5, 6: 1, 'nan': 0}
# Ticket not informative
# Fare 0 - 512
# Cabin too many NaN values
# Embarked {'C': 168, 'Q': 77, 'S': 644, 'nan': 2}
# Handling strategy
# Drop PassengerId, Name, Ticket (not informative)
# Age: fill missing values with random ages
# Embarked: fill the 2 NaN values with the most common value, S
# +
# Process titanic_df
# Drop PassengerId, Name, Ticket (not informative)
titanic_df = titanic_df.drop(['PassengerId','Name','Ticket','Cabin'], axis=1)
# Embarked: fill the 2 NaN values with the most common value, S
titanic_df["Embarked"] = titanic_df["Embarked"].fillna("S")
# Age: fill missing values with random ages
# get average, std, and number of NaN values in titanic_df
average_age_titanic = titanic_df["Age"].mean()
std_age_titanic = titanic_df["Age"].std()
count_nan_age_titanic = titanic_df["Age"].isnull().sum()
# generate random numbers between (mean - std) & (mean + std)
rand_1 = np.random.randint(average_age_titanic - std_age_titanic,
average_age_titanic + std_age_titanic, size = count_nan_age_titanic)
titanic_df.loc[np.isnan(titanic_df["Age"]), "Age"] = rand_1
# +
# Process test_df
# Drop PassengerId, Name, Ticket (not informative)
test_passengerId = test_df["PassengerId"]
test_df = test_df.drop(['PassengerId','Name','Ticket','Cabin'], axis=1)
# Age: fill missing values with random ages
# get average, std, and number of NaN values in titanic_df
average_age_test_titanic = test_df["Age"].mean()
std_age_test_titanic = test_df["Age"].std()
count_nan_age_test_titanic = test_df["Age"].isnull().sum()
# generate random numbers between (mean - std) & (mean + std)
rand_2 = np.random.randint(average_age_test_titanic - std_age_test_titanic,
average_age_test_titanic + std_age_test_titanic, size = count_nan_age_test_titanic)
test_df.loc[np.isnan(test_df["Age"]), "Age"] = rand_2
# +
# data processing
# train data
# Separate the target column from the training features
X_train = titanic_df.drop("Survived",axis=1)
Y_train = titanic_df["Survived"]
# One-hot encode Embarked and Sex
Embarked_dummies = pd.get_dummies(X_train['Embarked'])
Sex_dummies = pd.get_dummies(X_train['Sex'])
X_train.drop(['Embarked','Sex'], axis=1, inplace=True)
X_train = X_train.join(Embarked_dummies).join(Sex_dummies)
# minmax
min_max_scaler = preprocessing.MinMaxScaler()
X_train_minmax = min_max_scaler.fit_transform(X_train)
# test data
# One-hot encode Embarked and Sex
Embarked_dummies = pd.get_dummies(test_df['Embarked'])
Sex_dummies = pd.get_dummies(test_df['Sex'])
test_df.drop(['Embarked','Sex'], axis=1, inplace=True)
test_df = test_df.join(Embarked_dummies).join(Sex_dummies)
# Fill the missing Fare value in the test set
test_df['Fare'] = test_df['Fare'].fillna(test_df['Fare'].mean())
# minmax (reuse the scaler fitted on the training data; do not refit on the test set)
X_test_minmax = min_max_scaler.transform(test_df)
# +
# Data processing complete; start running the models
# clf
clf1 = RandomForestClassifier(n_estimators=50, max_depth=None,min_samples_split=2, random_state=443)
print "rf"
performance(clf1, X_train_minmax, Y_train)
clf2 = svm.SVC()
print "svm"
performance(clf2, X_train_minmax, Y_train)
clf3 = AdaBoostClassifier(n_estimators=100)
print "Ada"
performance(clf3, X_train_minmax, Y_train)
clf4 = GradientBoostingClassifier(n_estimators=100, learning_rate=1.0,max_depth=1, random_state=0)
print "Gbc"
performance(clf4, X_train_minmax, Y_train)
eclf = VotingClassifier(estimators=[('rf', clf1), ('svm', clf2), ('Ada', clf3), ('Gbc', clf4)], voting='hard')
print "eclf"
performance(eclf, X_train_minmax, Y_train)
# -
eclf = eclf.fit(X_train_minmax,Y_train)
test_predict = eclf.predict(X_test_minmax)
submission = pd.DataFrame({
"PassengerId": test_passengerId,
"Survived": test_predict
})
submission.to_csv('/Users/wy/Desktop/titanic2.csv', index=False)
| titanic/Titanic.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# **NOTE:** This Notebook is downloaded from Kaggle and is therefore intended to be used as a Kaggle Kernel
# + [markdown] papermill={"duration": 0.009238, "end_time": "2021-10-23T04:16:23.744523", "exception": false, "start_time": "2021-10-23T04:16:23.735285", "status": "completed"} tags=[]
# # 📦 Packages and Basic Setup
# + _kg_hide-input=true _kg_hide-output=true papermill={"duration": 43.644924, "end_time": "2021-10-23T04:17:07.397967", "exception": false, "start_time": "2021-10-23T04:16:23.753043", "status": "completed"} tags=[]
# %%capture
# -------- Offline Installs for REMBert -------- #
# !pip uninstall fsspec -qq -y
# !pip install -U --no-build-isolation --no-deps ../input/transformers-master/ -qq
# !pip install --no-index --find-links ../input/hf-datasets/wheels datasets -qq
# + _kg_hide-input=true _kg_hide-output=true papermill={"duration": 9.810453, "end_time": "2021-10-23T04:17:17.216932", "exception": false, "start_time": "2021-10-23T04:17:07.406479", "status": "completed"} tags=[]
# %%capture
# -------- Basic Packages -------- #
import os
import gc
import sys
gc.enable()
import math
import time
import torch
import numpy as np
import pandas as pd
from string import punctuation
from sklearn import model_selection
from transformers import AutoTokenizer
from torch.utils.data import DataLoader, SequentialSampler
# -------- Output Prettification ✨ -------- #
import warnings
warnings.filterwarnings("ignore", category=UserWarning)
from transformers import logging
logging.set_verbosity_warning()
logging.set_verbosity_error()
# -------- Custom Library -------- #
wrapperdir = "../input/coffee"
sys.path.append(wrapperdir)
from coffee.dataloader import Dataset
from coffee.helpers import make_model
from coffee.data_utils import prepare_test_features
from coffee.utils import optimal_num_of_loader_workers
# + [markdown] papermill={"duration": 0.008334, "end_time": "2021-10-23T04:17:17.234147", "exception": false, "start_time": "2021-10-23T04:17:17.225813", "status": "completed"} tags=[]
# # 📃 Configuration
# + papermill={"duration": 0.016116, "end_time": "2021-10-23T04:17:17.258774", "exception": false, "start_time": "2021-10-23T04:17:17.242658", "status": "completed"} tags=[]
CONFIG = dict(
# Model
model_type = 'rembert',
model_name_or_path = "../input/rembert-pt",
config_name = "../input/rembert-pt",
output_head_dropout_prob = 0.0,
gradient_accumulation_steps = 2,
# Tokenizer
tokenizer_name = "../input/rembert-pt",
max_seq_length = 400,
doc_stride = 135,
# Training
epochs = 1,
folds = 4,
train_batch_size = 2,
eval_batch_size = 8,
# Optimizer
optimizer_type = 'AdamW',
learning_rate = 1.5e-5,
weight_decay = 1e-2,
epsilon = 1e-8,
max_grad_norm = 1.0,
# Scheduler
decay_name = 'cosine-warmup',
warmup_ratio = 0.1,
logging_steps = 100,
# Misc
output_dir = 'output',
seed = 21,
# W&B
competition = 'chaii',
_wandb_kernel = 'sauravm'
)
# + [markdown] papermill={"duration": 0.008308, "end_time": "2021-10-23T04:17:17.275332", "exception": false, "start_time": "2021-10-23T04:17:17.267024", "status": "completed"} tags=[]
# # 💿 Dataset
# + papermill={"duration": 1.160903, "end_time": "2021-10-23T04:17:18.444602", "exception": false, "start_time": "2021-10-23T04:17:17.283699", "status": "completed"} tags=[]
test = pd.read_csv('../input/chaii-hindi-and-tamil-question-answering/test.csv')
base_model_path = '../input/bestrembertchaii/'
tokenizer = AutoTokenizer.from_pretrained(CONFIG["tokenizer_name"])
test_features = []
for i, row in test.iterrows():
test_features += prepare_test_features(CONFIG, row, tokenizer)
args = CONFIG
test_dataset = Dataset(test_features, mode='test')
test_dataloader = DataLoader(
test_dataset,
batch_size=CONFIG["eval_batch_size"],
sampler=SequentialSampler(test_dataset),
num_workers=optimal_num_of_loader_workers(),
pin_memory=True,
drop_last=False
)
# + [markdown] papermill={"duration": 0.008301, "end_time": "2021-10-23T04:17:18.461597", "exception": false, "start_time": "2021-10-23T04:17:18.453296", "status": "completed"} tags=[]
# # 🔥 Inference
# + papermill={"duration": 172.299991, "end_time": "2021-10-23T04:20:10.770052", "exception": false, "start_time": "2021-10-23T04:17:18.470061", "status": "completed"} tags=[]
# %%capture
def get_predictions(checkpoint_path):
config, tokenizer, model = make_model(CONFIG)
model.cuda();
model.load_state_dict(
torch.load(base_model_path + checkpoint_path)
);
start_logits = []
end_logits = []
for batch in test_dataloader:
with torch.no_grad():
outputs_start, outputs_end = model(batch['input_ids'].cuda(), batch['attention_mask'].cuda())
start_logits.append(outputs_start.cpu().numpy().tolist())
end_logits.append(outputs_end.cpu().numpy().tolist())
del outputs_start, outputs_end
del model, tokenizer, config
gc.collect()
return np.vstack(start_logits), np.vstack(end_logits)
start_logits1, end_logits1 = get_predictions('checkpoint-fold-0/pytorch_model.bin')
start_logits2, end_logits2 = get_predictions('checkpoint-fold-1/pytorch_model.bin')
start_logits3, end_logits3 = get_predictions('checkpoint-fold-2/pytorch_model.bin')
start_logits4, end_logits4 = get_predictions('checkpoint-fold-3/pytorch_model.bin')
start_logits = (start_logits1 + start_logits2 + start_logits3 + start_logits4) / 4
end_logits = (end_logits1 + end_logits2 + end_logits3 + end_logits4) / 4
# + _kg_hide-input=true _kg_hide-output=true papermill={"duration": 0.061972, "end_time": "2021-10-23T04:20:10.841921", "exception": false, "start_time": "2021-10-23T04:20:10.779949", "status": "completed"} tags=[]
import collections
def postprocess_qa_predictions(examples, features, raw_predictions, n_best_size = 20, max_answer_length = 30):
all_start_logits, all_end_logits = raw_predictions
example_id_to_index = {k: i for i, k in enumerate(examples["id"])}
features_per_example = collections.defaultdict(list)
for i, feature in enumerate(features):
features_per_example[example_id_to_index[feature["example_id"]]].append(i)
predictions = collections.OrderedDict()
print(f"Post-processing {len(examples)} example predictions split into {len(features)} features.")
for example_index, example in examples.iterrows():
feature_indices = features_per_example[example_index]
min_null_score = None
valid_answers = []
context = example["context"]
for feature_index in feature_indices:
start_logits = all_start_logits[feature_index]
end_logits = all_end_logits[feature_index]
sequence_ids = features[feature_index]["sequence_ids"]
context_index = 1
features[feature_index]["offset_mapping"] = [
(o if sequence_ids[k] == context_index else None)
for k, o in enumerate(features[feature_index]["offset_mapping"])
]
offset_mapping = features[feature_index]["offset_mapping"]
cls_index = features[feature_index]["input_ids"].index(tokenizer.cls_token_id)
feature_null_score = start_logits[cls_index] + end_logits[cls_index]
if min_null_score is None or min_null_score < feature_null_score:
min_null_score = feature_null_score
start_indexes = np.argsort(start_logits)[-1 : -n_best_size - 1 : -1].tolist()
end_indexes = np.argsort(end_logits)[-1 : -n_best_size - 1 : -1].tolist()
for start_index in start_indexes:
for end_index in end_indexes:
if (
start_index >= len(offset_mapping)
or end_index >= len(offset_mapping)
or offset_mapping[start_index] is None
or offset_mapping[end_index] is None
):
continue
# Don't consider answers with a length that is either < 0 or > max_answer_length.
if end_index < start_index or end_index - start_index + 1 > max_answer_length:
continue
start_char = offset_mapping[start_index][0]
end_char = offset_mapping[end_index][1]
valid_answers.append(
{
"score": start_logits[start_index] + end_logits[end_index],
"text": context[start_char: end_char]
}
)
if len(valid_answers) > 0:
best_answer = sorted(valid_answers, key=lambda x: x["score"], reverse=True)[0]
else:
best_answer = {"text": "", "score": 0.0}
predictions[example["id"]] = best_answer["text"]
return predictions
predictions = postprocess_qa_predictions(test, test_features, (start_logits, end_logits))
# + papermill={"duration": 0.055765, "end_time": "2021-10-23T04:20:10.907539", "exception": false, "start_time": "2021-10-23T04:20:10.851774", "status": "completed"} tags=[]
submission = []
for p1, p2 in predictions.items():
p2 = " ".join(p2.split())
p2 = p2.strip(punctuation)
submission.append((p1, p2))
sample = pd.DataFrame(submission, columns=["id", "PredictionString"])
test_data = pd.merge(left=test, right=sample, on='id')
bad_starts = [".", ",", "(", ")", "-", "–", ",", ";"]
bad_endings = ["...", "-", "(", ")", "–", ",", ";"]
tamil_ad = "கி.பி"
tamil_bc = "கி.மு"
tamil_km = "கி.மீ"
hindi_ad = "ई"
hindi_bc = "ई.पू"
cleaned_preds = []
for pred, context in test_data[["PredictionString", "context"]].to_numpy():
if pred == "":
cleaned_preds.append(pred)
continue
while any([pred.startswith(y) for y in bad_starts]):
pred = pred[1:]
while any([pred.endswith(y) for y in bad_endings]):
if pred.endswith("..."):
pred = pred[:-3]
else:
pred = pred[:-1]
if pred.endswith("..."):
pred = pred[:-3]
if any([pred.endswith(tamil_ad), pred.endswith(tamil_bc), pred.endswith(tamil_km), pred.endswith(hindi_ad), pred.endswith(hindi_bc)]) and pred+"." in context:
pred = pred+"."
cleaned_preds.append(pred)
test_data["PredictionString"] = cleaned_preds
test_data[['id', 'PredictionString']].to_csv('submission.csv', index=False)
# + papermill={"duration": 0.029372, "end_time": "2021-10-23T04:20:10.946867", "exception": false, "start_time": "2021-10-23T04:20:10.917495", "status": "completed"} tags=[]
test_data
| notebooks/coffee-rembert-inference.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Flatbed Scanner Seed Analysis Tutorial
#
# This is a full workflow that shows methods for counting and analyzing the shape and color of seeds. Similar methods should work for other types of seeds.
# # Section 1: Importing libraries and image
# Set the notebook display method
# inline = embedded plots, notebook = interactive plots
# %matplotlib inline
# +
#Import libraries
# %matplotlib notebook
import os
import argparse
import matplotlib
import cv2
import numpy as np
from plantcv import plantcv as pcv
# -
# ## Input variables
#
# The options class mimics the workflow command-line argument parser that is used for workflow parallelization. Using it while developing a workflow in Jupyter makes it easier to convert the workflow to a script later.
# +
# Input image into self.image (include file path if image is not in
# the same folder as jupyter notebook)
# Set self.debug to "plot" so that image outputs for each step is shown
# once cell is run in jupyter notebooks (recommended)
class options:
def __init__(self):
self.image = "./img/MaizeSeedScan.jpg"
self.debug = "plot"
self.writeimg = False
self.result = "seed_analysis_results"
self.outdir = "."
# +
# Get options
args = options()
# Set debug to the global parameter
pcv.params.debug = args.debug
# Set plotting size (default = 100)
pcv.params.dpi = 100
# Increase text size and thickness to make labels clearer
# (size may need to be altered based on original image size)
pcv.params.text_size = 10
pcv.params.text_thickness = 20
# -
# ## Read the input image
# Inputs:
# filename = Image file to be read in
# mode = How to read in the image; either 'native' (default), 'rgb', 'gray', or 'csv'
img, path, filename = pcv.readimage(filename=args.image)
# # Section 2: Segmenting plant from background and identifying plant object(s)
#
# * Requires successful import of image
# * See the Threshold Tools tutorial for other functions that can be used to create a binary mask
# ## Crop image
#
# Cropping out aspects of the image that may interfere with the binary mask makes it easier to isolate plant material from background. This is also useful to save memory in these tutorials.
# Inputs:
# x = top left x-coordinate
# y = top left y-coordinate
# h = height of final cropped image
# w = width of final cropped image
img = pcv.crop(img=img, x=100, y=100, h=3500, w=3000)
# ## Visualize the distribution of grayscale values
#
# A histogram can be used to visualize the distribution of values in an image. The histogram can aid in the selection of a threshold value.
#
# For this image, the large peak between 125-130 is from the darker background pixels. The smaller peak between 150-160 is from the lighter seed pixels.
# Inputs:
# img = gray image in selected colorspace
# mask = None (default), or mask
# bins = 100 (default) or number of desired number of evenly spaced bins
# lower-bound = None (default) or minimum value on x-axis
# upper-bound = None (default) or maximum value on x-axis
# title = None (default) or custom plot title
# hist_data = False (default) or True (if frequency distribution data is desired)
hist = pcv.visualize.histogram(img=img)
# ## Threshold the grayscale image
# Use a threshold function (binary in this case) to segment the grayscale image into plant (white) and background (black) pixels. Using the histogram above, a threshold point between 130-150 will segment the plant and background peaks. Because the seeds are the lighter pixels in this image, use object_type="light" to do a traditional threshold.
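Independent of PlantCV, the binary-threshold rule just described is an elementwise comparison; a minimal NumPy sketch with made-up pixel values:

```python
import numpy as np

# Hypothetical grayscale values around the thresholds discussed above
gray = np.array([[120, 135], [160, 90]], dtype=np.uint8)
threshold, max_value = 130, 255

# object_type="light": pixels brighter than the threshold become white (255)
mask = np.where(gray > threshold, max_value, 0).astype(np.uint8)
print(mask)
```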
# Inputs:
# gray_img = black and white image created from selected colorspace
# threshold = cutoff pixel intensity value (all pixels below value will become black, all above will become white)
# max_value = maximum pixel value
# object_type = 'dark' or 'light' depending on if seeds are darker or lighter than background
b_thresh = pcv.threshold.binary(gray_img=img, threshold=100, max_value=255, object_type='dark')
# ^ ^
# | |
# change this value change this value
# ## Remove small background noise
#
# Thresholding mostly labeled plant pixels white but also labeled small regions of the background white. The fill function removes "salt" noise from the background by filtering white regions by size.
# Inputs:
# bin_img - binary mask image
# size - maximum size for objects that should be filled in as background (non-plant) pixels
b_fill = pcv.fill(bin_img=b_thresh, size=300)
# ^
# |
# change this value
# # Section 3: Count and Analyze Seeds
#
# * Need a completed binary mask
# ## Identify simple seed objects
#
# The binary mask can be used to find objects, or contours, each of which will outline a seed. Unlike the PlantCV find_objects function, this uses findContours from OpenCV with the input cv2.RETR_EXTERNAL to ignore layered contours. The output from this step can be used to count seeds, but CANNOT be used as input for shape and color analysis.
# Inputs:
# mask = binary mask with extra noise filled in
objects = cv2.findContours(b_fill, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
# ## Count seeds
#
# Count the number of seeds (simple objects) by counting the contours returned in the previous step.
# +
# Find number of seeds
# Inputs:
# contours = list of contours
# Note: findContours returns (image, contours, hierarchy) in OpenCV 3.x
# but (contours, hierarchy) in OpenCV 4.x; index -2 selects contours in both
number_seeds = len(objects[-2])
number_seeds
# -
# ## Identify seed objects for shape and color analysis
#
# For shape and color analysis, we need to use find_objects from PlantCV to get the objects and object hierarchy that we need as inputs in the following analyses. OpenCV findContours and PlantCV find_objects do not behave in the same way or provide the same outputs, which is why we must identify objects twice in this workflow.
# Inputs:
# img = rgb image
# mask = binary mask
objects2, obj_hierarchy = pcv.find_objects(img=img, mask=b_fill)
# ## Measure each seed
#
# To measure each seed, iterate over the objects (which occur when obj_hierarchy[0][i][3] == -1). For each object, the following steps are done:
#
# 1. Contours are consolidated, so that all contours that correspond to one seed are compiled into a single object and mask.
# 2. Analyze seed shape
# 3. Analyze seed color
# +
# Create a copy of the RGB image for shape analysis annotations
# Inputs:
# img = image
shape_img = np.copy(img)
# Turn off plot debugging
pcv.params.debug = None
# Iterate through all objects in objects2 and do a shape and color analysis
# for i in range(0, len(objects2)):
# The loop above takes up too much memory for binder, but ideally you'd loop over every seed
# For demonstration purposes, we only loop through the first 28 objects
for i in range(0, 28):
# Check to see if the object has an offshoot in the hierarchy
if obj_hierarchy[0][i][3] == -1:
# Create an object and a mask for one object
#
# Inputs:
# img - rgb image
# contours - list entry i in objects2
# hierarchy - np.array of obj_hierarchy[0][1]
seed, seed_mask = pcv.object_composition(img=img, contours=[objects2[i]], hierarchy=np.array([[obj_hierarchy[0][i]]]))
# Analyze shape of each seed
#
# Inputs:
# img - rgb image
# obj - seed
# mask - mask created of single seed
# label - label for each seed in image
shape_img = pcv.analyze_object(img=shape_img, obj=seed, mask=seed_mask, label=f"seed{i}")
# Analyze color of each seed
#
# Inputs:
# img - rgb image
# obj - seed
# hist_plot_type - 'all', or None for no histogram plot
# label - 'default'
# color_img = pcv.analyze_color(rgb_img=img, mask=b_fill, hist_plot_type=None, label="default")
# -
# ## Visualize shape analysis of seeds
#
# Debugging was turned off during the for loop because plotting every intermediate result significantly slows down the analysis; here we plot the final shape analysis to ensure that the results look correct.
# Inputs:
# img = image for shape analysis
pcv.plot_image(img=shape_img)
# ## Save results
#
# During analysis, measurements are stored in the background in the outputs recorder.
#
# This example includes image analysis for 'area', 'convex_hull_area', 'solidity', 'perimeter', 'width', 'height', 'longest_path', 'center_of_mass', 'convex_hull_vertices', 'object_in_frame', 'ellipse_center', 'ellipse_major_axis', 'ellipse_minor_axis', 'ellipse_angle', 'ellipse_eccentricity' using analyze_object, and color analysis using analyze_color.
#
# Here, results are saved to a CSV file for easy viewing, but when running workflows in parallel, save results as "json"
# Inputs:
# filename = filename for saving results
# outformat = output file format: "json" (default) hierarchical format or "csv" tabular format
pcv.outputs.save_results(filename=args.result, outformat="csv")
| .ipynb_checkpoints/ShootCounterAnalysis_PlantCV-checkpoint.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + id="O3G_mRXylgqS" colab_type="code" colab={}
# %load_ext autoreload
# %autoreload 2
# %matplotlib inline
# + [markdown] id="OwucUyEYlgqV" colab_type="text"
# ## Does nn.Conv2d init work well?
# + [markdown] id="T8sBryeclgqW" colab_type="text"
# [Jump_to lesson 9 video](https://course.fast.ai/videos/?lesson=9&t=21)
# + id="5Hx_tJ6YlgqY" colab_type="code" colab={}
#export
from exp.nb_02 import *
def get_data():
path = datasets.download_data(MNIST_URL, ext='.gz')
with gzip.open(path, 'rb') as f:
((x_train, y_train), (x_valid, y_valid), _) = pickle.load(f, encoding='latin-1')
return map(tensor, (x_train,y_train,x_valid,y_valid))
def normalize(x, m, s): return (x-m)/s
# + id="ANGSJqGVlgqa" colab_type="code" colab={}
# torch.nn.modules.conv._ConvNd.reset_parameters??
# + id="hiHZcDbZlgqe" colab_type="code" colab={}
x_train,y_train,x_valid,y_valid = get_data()
train_mean,train_std = x_train.mean(),x_train.std()
x_train = normalize(x_train, train_mean, train_std)
x_valid = normalize(x_valid, train_mean, train_std)
# + id="Bt386ZjAlgqh" colab_type="code" colab={} outputId="4258aa23-610e-467a-cda2-3327a57bcb88"
x_train = x_train.view(-1,1,28,28)
x_valid = x_valid.view(-1,1,28,28)
x_train.shape,x_valid.shape
# + id="2EehTuP7lgqk" colab_type="code" colab={} outputId="653dd5b2-36bf-44f8-e5ff-48cd8f2cd5c7"
n,*_ = x_train.shape
c = y_train.max()+1
nh = 32
n,c
# + id="k2g96jUOlgqn" colab_type="code" colab={}
l1 = nn.Conv2d(1, nh, 5)
# + id="cDoM9CGAlgqp" colab_type="code" colab={}
x = x_valid[:100]
# + id="iuXvWDoClgqs" colab_type="code" colab={} outputId="2b0717bf-3a7f-4b6d-c22d-673cd5e904e7"
x.shape
# + id="wRRZyLZalgqv" colab_type="code" colab={}
def stats(x): return x.mean(),x.std()
# + id="eB9Emnadlgqx" colab_type="code" colab={} outputId="247f7399-8665-4806-f1d5-6b596593baa1"
l1.weight.shape
# + id="7xmH3pyclgqz" colab_type="code" colab={} outputId="a8c8ca76-5efc-4050-dacf-f58eed4b2278"
stats(l1.weight),stats(l1.bias)
# + id="QQIgu8c4lgq1" colab_type="code" colab={}
t = l1(x)
# + id="auVhczmklgq9" colab_type="code" colab={} outputId="e41e001c-a619-4366-cd74-8df0fe215118"
stats(t)
# + id="QpdszC3zlgrB" colab_type="code" colab={} outputId="92970bb8-b6d5-453f-9a96-0dda5380f86d"
init.kaiming_normal_(l1.weight, a=1.)
stats(l1(x))
# + id="_E3-lMsplgrF" colab_type="code" colab={}
import torch.nn.functional as F
# + id="o85WYsGylgrH" colab_type="code" colab={}
def f1(x,a=0): return F.leaky_relu(l1(x),a)
# + id="V5O7rNMJlgrJ" colab_type="code" colab={} outputId="f43e9a85-4e31-4c23-d0c3-f4412d0146ea"
init.kaiming_normal_(l1.weight, a=0)
stats(f1(x))
# + id="8RenzfyilgrM" colab_type="code" colab={} outputId="23b35677-8056-46f5-8cbb-2b5fcb13209a"
l1 = nn.Conv2d(1, nh, 5)
stats(f1(x))
# + id="O64FdQEUlgrP" colab_type="code" colab={} outputId="ac20493c-4349-48b9-bbfe-5b80a913c199"
l1.weight.shape
# + id="BvD6ptBDlgrW" colab_type="code" colab={} outputId="1b6aa33f-e466-4a5d-fd03-4bc1ed4fe45c"
# receptive field size
rec_fs = l1.weight[0,0].numel()
rec_fs
# + id="liCnynHflgrb" colab_type="code" colab={} outputId="12272c3c-2b2b-4adf-9c7e-6c4a141ffba4"
nf,ni,*_ = l1.weight.shape
nf,ni
# + id="xHnmtBsFlgrd" colab_type="code" colab={} outputId="b6b59d7c-7b4d-4824-90a6-9cde6cfbd796"
fan_in = ni*rec_fs
fan_out = nf*rec_fs
fan_in,fan_out
# + id="ZHLEi_solgrg" colab_type="code" colab={}
def gain(a): return math.sqrt(2.0 / (1 + a**2))
# + id="hqlkZZ1ylgrj" colab_type="code" colab={} outputId="33587a2c-778f-4bfe-c2df-38f1a161a063"
gain(1),gain(0),gain(0.01),gain(0.1),gain(math.sqrt(5.))
# + id="roK3YSVSlgrm" colab_type="code" colab={} outputId="5be0d842-cddb-45e1-f8c7-bc31a5bc253c"
torch.zeros(10000).uniform_(-1,1).std()
# + id="dfqWb8Gylgro" colab_type="code" colab={} outputId="22f3a1b0-4011-4bcb-a83e-7cc9290693c9"
1/math.sqrt(3.)
# + id="F4vqBYHmlgrr" colab_type="code" colab={}
def kaiming2(x,a, use_fan_out=False):
nf,ni,*_ = x.shape
rec_fs = x[0,0].shape.numel()
fan = nf*rec_fs if use_fan_out else ni*rec_fs
std = gain(a) / math.sqrt(fan)
bound = math.sqrt(3.) * std
x.data.uniform_(-bound,bound)
# + id="ayyEUKDRlgrt" colab_type="code" colab={} outputId="7fdb3ec1-8357-4ea1-adb9-2126739f7b67"
kaiming2(l1.weight, a=0);
stats(f1(x))
# + id="XaSilatxlgrv" colab_type="code" colab={} outputId="a1a906cd-0cb8-47a8-e095-d344dd7d7ce7"
kaiming2(l1.weight, a=math.sqrt(5.))
stats(f1(x))
# + id="msUoB7Gklgrx" colab_type="code" colab={}
class Flatten(nn.Module):
def forward(self,x): return x.view(-1)
# + id="fIKxQM6flgr0" colab_type="code" colab={}
m = nn.Sequential(
nn.Conv2d(1,8, 5,stride=2,padding=2), nn.ReLU(),
nn.Conv2d(8,16,3,stride=2,padding=1), nn.ReLU(),
nn.Conv2d(16,32,3,stride=2,padding=1), nn.ReLU(),
nn.Conv2d(32,1,3,stride=2,padding=1),
nn.AdaptiveAvgPool2d(1),
Flatten(),
)
# + id="SClIVa8flgr3" colab_type="code" colab={}
y = y_valid[:100].float()
# + id="9mtMEe-Plgr7" colab_type="code" colab={} outputId="5f6a46ad-82e6-4170-844e-6d1028c4b476"
t = m(x)
stats(t)
# + id="Yv_71Kw6lgsD" colab_type="code" colab={}
l = mse(t,y)
l.backward()
# + id="rJcBrOvNlgsF" colab_type="code" colab={} outputId="2db8469c-656e-4cf6-982f-8e5b3948a967"
stats(m[0].weight.grad)
# + id="mfuetjQulgsH" colab_type="code" colab={}
# init.kaiming_uniform_??
# + id="t02tRKLolgsL" colab_type="code" colab={}
for l in m:
if isinstance(l,nn.Conv2d):
init.kaiming_uniform_(l.weight)
l.bias.data.zero_()
# + id="YxlLqcY2lgsP" colab_type="code" colab={} outputId="7a45cf20-28fa-457e-ae60-e7839b043319"
t = m(x)
stats(t)
# + id="EYJxeL_QlgsS" colab_type="code" colab={} outputId="face2d76-5f64-41df-91e1-86b13a5df9a2"
l = mse(t,y)
l.backward()
stats(m[0].weight.grad)
# + [markdown] id="Cau1ifvklgsV" colab_type="text"
# ## Export
# + id="6IK2K3BFlgsW" colab_type="code" colab={}
# !./notebook2script.py 02a_why_sqrt5.ipynb
# + id="rAA66DoslgsY" colab_type="code" colab={}
| nbs/dl2/02a_why_sqrt5_copy.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# + [markdown] heading_collapsed=true
# # 0.0. Step 0 - Imports
# + hidden=true
import math
import numpy as np
import pandas as pd
import inflection
import seaborn as sns
from numpy import int64
from matplotlib import pyplot as plt
from IPython.core.display import HTML
from IPython.display import Image
# + [markdown] heading_collapsed=true hidden=true
# ## 0.1. Helper Functions
# + hidden=true
def jupyter_settings():
# %matplotlib inline
# %pylab inline
plt.style.use('bmh')
plt.rcParams['figure.figsize'] = [20, 12]
plt.rcParams['font.size'] = 25
#
display(HTML( '<style>.container {width:100% !important;}</style>'))
# pd.options.display.max_columns = None
# pd.options.display.max_rows = None
# pd.set_option('display.expand_frame_repr', False)
sns.set()
# + hidden=true
jupyter_settings ()
# + [markdown] hidden=true
# ## 0.2. Loading Data
# + [markdown] heading_collapsed=true
# # 1.0. Step 1 - Data Description
# + [markdown] heading_collapsed=true hidden=true
# ## 1.1. Data
# + hidden=true
df_sales_raw = pd.read_csv ('data/train.csv', low_memory = False)
df_store_raw = pd.read_csv ('data/store.csv', low_memory = False)
#merge
df_raw = pd.merge (df_sales_raw, df_store_raw, how='left', on='Store')
# + hidden=true
df1 = df_raw.copy()
# + [markdown] heading_collapsed=true hidden=true
# ## 1.2. Rename Columns
# + hidden=true
cols_old = ['Store', 'DayOfWeek', 'Date', 'Sales', 'Customers', 'Open', 'Promo','StateHoliday', 'SchoolHoliday',
'StoreType', 'Assortment', 'CompetitionDistance', 'CompetitionOpenSinceMonth','CompetitionOpenSinceYear',
'Promo2', 'Promo2SinceWeek','Promo2SinceYear', 'PromoInterval']
snakecase = lambda x: inflection.underscore(x)
cols_new = list(map(snakecase, cols_old))
#rename
df1.columns = cols_new
# + [markdown] heading_collapsed=true hidden=true
# ## 1.3. Data Dimensions
# + hidden=true
print('Number of Rows {}'.format(df1.shape[0]))
print('Number of Columns {}'.format(df1.shape[1]))
# + [markdown] heading_collapsed=true hidden=true
# ## 1.4. Data Types
# + hidden=true
df1['date'] = pd.to_datetime(df1['date'])
df1.dtypes
# + [markdown] heading_collapsed=true hidden=true
# ## 1.5. Check NA
# + hidden=true
df1.isna().sum()
# + [markdown] heading_collapsed=true hidden=true
# ## 1.6. Fillout NA
# + hidden=true
#competition_distance
df1['competition_distance'] = df1['competition_distance'].apply(lambda x: 2000000.0 if math.isnan (x) else (x))
#competition_open_since_month
df1['competition_open_since_month'] = df1.apply(lambda x: x['date'].month if math.isnan(x['competition_open_since_month']) else x['competition_open_since_month'], axis=1)
#competition_open_since_year
df1['competition_open_since_year'] = df1.apply(lambda x: x['date'].year if math.isnan(x['competition_open_since_year']) else x['competition_open_since_year'], axis=1 )
#promo2_since_week
df1['promo2_since_week'] = df1.apply(lambda x: x['date'].week if math.isnan(x['promo2_since_week']) else x['promo2_since_week'], axis=1 )
#promo2_since_year
df1['promo2_since_year'] = df1.apply(lambda x: x['date'].year if math.isnan(x['promo2_since_year']) else x['promo2_since_year'], axis=1 )
#promo_interval
month_map = {1:'Jan', 2:'Feb', 3:'Mar', 4:'Apr', 5:'May', 6:'Jun', 7:'Jul', 8:'Aug', 9:'Sep', 10:'Oct', 11:'Nov', 12:'Dec'}
df1['promo_interval'].fillna(0, inplace=True)
df1['month_map'] = df1['date'].dt.month.map(month_map)
df1['is_promo'] = df1[['promo_interval', 'month_map' ]].apply(lambda x: 0 if x['promo_interval'] == 0 else 1 if x['month_map'] in x['promo_interval'].split( ',') else 0, axis=1)
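# The row-wise `is_promo` rule above is dense; as a sketch (the argument values below are hypothetical), the same logic can be isolated in a plain function that is easier to read and test:

```python
def is_promo(promo_interval, month_abbrev):
    # 0 when the store never runs promo2 (promo_interval was filled with 0),
    # 1 when the row's month appears in the comma-separated interval
    if promo_interval == 0:
        return 0
    return 1 if month_abbrev in promo_interval.split(',') else 0

print([is_promo('Jan,Apr,Jul,Oct', 'Apr'),
       is_promo(0, 'Apr'),
       is_promo('Feb,May,Aug,Nov', 'Apr')])
```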
# + [markdown] heading_collapsed=true hidden=true
# ## 1.7. Change Types
# + hidden=true
df1['competition_open_since_month'] = df1['competition_open_since_month'].astype(int64)
df1['competition_open_since_year'] = df1['competition_open_since_year'].astype(int64)
df1['promo2_since_week'] = df1['promo2_since_week'].astype(int64)
df1['promo2_since_year'] = df1['promo2_since_year'].astype(int64)
# + [markdown] heading_collapsed=true hidden=true
# ## 1.8. Descriptive Statistics
# + hidden=true
num_attributes = df1.select_dtypes(include=['int64', 'float64'])
cat_attributes = df1.select_dtypes(exclude=['int64', 'float64','datetime64[ns]'])
# + [markdown] heading_collapsed=true hidden=true
# ### 1.8.1 Numerical Attributes
# + hidden=true
# central tendency - mean, median
ct1 = pd.DataFrame(num_attributes.apply(np.mean)).T
ct2 = pd.DataFrame(num_attributes.apply(np.median)).T
# dispersion - std, min, max, range, skew, Kurtosis
d1 = pd.DataFrame(num_attributes.apply(np.std)).T
d2 = pd.DataFrame(num_attributes.apply(min)).T
d3 = pd.DataFrame(num_attributes.apply(max)).T
d4 = pd.DataFrame(num_attributes.apply(lambda x: x.max() - x.min())).T
d5 = pd.DataFrame(num_attributes.apply(lambda x: x.skew())).T
d6 = pd.DataFrame(num_attributes.apply(lambda x: x.kurtosis())).T
#concatenate
m = pd.concat([d2, d3, d4, ct1, ct2, d1, d5, d6]).T.reset_index()
m.columns = ['attributes', 'min', 'max', 'range', 'mean', 'median', 'std', 'skew', 'kurtosis']
m
# + hidden=true
sns.kdeplot(df1['sales']);
# + [markdown] heading_collapsed=true hidden=true
# ### 1.8.2 Categorical Attributes
# + hidden=true
cat_attributes.apply(lambda x: x.unique().shape[0] )
# + hidden=true
aux1 = df1[(df1['state_holiday'] != '0') & (df1['sales'] > 0)]
aux2 = df1[df1['sales'] > 0]
plt.subplot(131)
sns.boxplot(x='state_holiday', y='sales', data=aux1);
plt.subplot(132)
sns.boxplot(x='store_type', y='sales', data=aux2);
plt.subplot(133)
sns.boxplot(x='assortment', y='sales', data=aux2);
# -
# # 2.0. Step 2 - Feature Engineering
# + [markdown] heading_collapsed=true
# ### 2.1. Mind Map Hypothesis
# + hidden=true
Image('img/MindMapHypothesis.png')
# -
# ### 2.2. Create Hypotheses
# #### 2.2.1. Store Hypotheses
# 1. Stores with a larger number of employees should sell more.
#
# 2. Stores with larger inventory capacity should sell more.
#
# 3. Larger stores should sell more.
#
# 4. Stores with larger assortments should sell more.
#
# 5. Stores with closer competitors should sell less.
#
# 6. Stores with longer-established competitors should sell more.
# #### 2.2.2. Product Hypotheses
# **1**. Stores that invest more in marketing should sell more.
#
# **2**. Stores with greater product exposure should sell more.
#
# **3**. Stores with lower-priced products should sell more.
#
# **5**. Stores with more aggressive promotions (bigger discounts) should sell more.
#
# **6**. Stores with promotions active for longer should sell more.
#
# **7**. Stores with more promotion days should sell more.
#
# **8**. Stores with more consecutive promotions should sell more.
# #### 2.2.3. Time Hypotheses (seasonality)
# **1**. Stores open during the Christmas holiday should sell more.
#
# **2**. Stores should sell more over the years.
#
# **3**. Stores should sell more in the second half of the year.
#
# **4**. Stores should sell more after the 10th of each month.
#
# **5**. Stores should sell less on weekends.
#
# **6**. Stores should sell less during school holidays.
# ### 2.3. Final List of Hypotheses
# **1**. Stores with larger assortments should sell more.
#
# **2**. Stores with closer competitors should sell less.
#
# **3**. Stores with longer-established competitors should sell more.
#
# **4**. Stores with promotions active for longer should sell more.
#
# **5**. Stores with more promotion days should sell more.
#
# **6**. Stores with more consecutive promotions should sell more.
#
# **7**. Stores open during the Christmas holiday should sell more.
#
# **8**. Stores should sell more over the years.
#
# **9**. Stores should sell more in the second half of the year.
#
# **10**. Stores should sell more after the 10th of each month.
#
# **11**. Stores should sell less on weekends.
#
# **12**. Stores should sell less during school holidays.
| m03_v01_store_sales_prediction.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Code With Us Demo
#
# In this notebook, we will provide the ability to run an ICF cadCAD model. Below we provide an overview of cadCAD and the model process.
#
# ## cadCAD Model Structure Overview
# In the cadCAD simulation methodology, we operate on four layers: Policies, Mechanisms, States, and Metrics. Information flows do not have explicit feedback loops unless noted. Policies determine the inputs into the system dynamics, and can come from user input, observations from the exogenous environment, or algorithms. Mechanisms (sometimes referred to as State Update Logic) are functions that take the policy decisions and update the States to reflect the policy level changes. States are variables that represent the system quantities at the given point in time, and Metrics are computed from state variables to assess the health of the system, essentially views on a complex data structure. Metrics can often be thought of as Key Performance Indicators (KPIs).
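# As a toy illustration of these four layers in plain Python (this is not the cadCAD API — the names and the minting rule below are hypothetical), policies read the current state, state-update logic applies the policy signals, and metrics are views computed over the resulting states:

```python
def policy_mint(state):
    # policy: decide how much to mint this step (hypothetical rule)
    return {"mint": 1 if state["supply"] < 5 else 0}

def update_supply(state, signals):
    # mechanism / state-update logic: apply the policy signal to the state
    return {**state, "supply": state["supply"] + signals["mint"]}

def metric_supply_ratio(state, cap=10):
    # metric: a view computed from state variables (a KPI)
    return state["supply"] / cap

state = {"supply": 0}
history = []
for t in range(8):  # timesteps
    signals = policy_mint(state)
    state = update_supply(state, signals)
    history.append(metric_supply_ratio(state))

print(state["supply"], history[-1])
```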
#
# At a more granular level, to setup a model, there are system conventions and configurations that must be followed.
#
# The way to think of cadCAD modeling is analogous to machine learning pipelines which normally consist of multiple steps when training and running a deployed model. There is preprocessing, which includes segregating features between continuous and categorical, transforming or imputing data, and then instantiating, training, and running a machine learning model with specified hyperparameters. cadCAD modeling can be thought of in the same way as states, roughly translating into features, are fed into pipelines that have built-in logic to direct traffic between different mechanisms, such as scaling and imputation. Accuracy scores, ROC, etc. are analogous to the metrics that can be configured on a cadCAD model, specifying how well a given model is doing in meeting its objectives. The parameter sweeping capability of cadCAD can be thought of as a grid search, or way to find the optimal hyperparameters for a system by running through alternative scenarios. A/B style testing that cadCAD enables is used in the same way machine learning models are A/B tested, except out of the box, in providing a side by side comparison of multiple different models to compare and contrast performance. Utilizing the field of Systems Identification, dynamical systems models can be used to "online learn" by providing a feedback loop to generative system mechanisms.
#
# cadCAD models are micro founded with metrics being at the macro or the institutional level. If you are interested in institutional dynamics, see Dr. Zargham's recent paper: [<NAME> and <NAME> (2019) Foundations of Cryptoeconomic Systems. Working Paper Series / Institute for Cryptoeconomics / Interdisciplinary Research, 1. Research Institute for Cryptoeconomics, Vienna.](https://epub.wu.ac.at/7309/8/Foundations%20of%20Cryptoeconomic%20Systems.pdf)
#
#
#
# ### Model File structure
# * Code With Us.ipynb
# * src/sim
# * src/sim/model
# * src/sim/model/parts
#
# In the sim folder there exist 3 files and a model folder, the [config.py](src/sim/config.py), [run.py](src/sim/run.py), and [sim_setup.py](src/sim/sim_setup.py). The [config.py](src/sim/config.py) contains the simulation configurations, aggregating the partial states, and the state variables. [run.py](src/sim/run.py) actually runs the simulation, and [sim_setup.py](src/sim/sim_setup.py) defines the number of timesteps and monte carlo runs (these 12 simulations have 100 timesteps and no monte carlo runs).
#
# Within the src/sim/model folder, there are 3 files and a parts folder. The [partial_state_update_block.py](src/sim/model/partial_state_update_block.py) contains the partial state update blocks and how they update the state variables, [state_variables.py](src/sim/model/state_variables.py) defines the state variables, and [sys_params.py](src/sim/model/sys_params.py) specifies hyperparameters for the simulation.
#
#
# The mechanisms of the model live within the parts subfolder as:
# * [attest.py](src/sim/model/parts/attest.py)
# * [bondburn.py](src/sim/model/parts/bondburn.py)
# * [choose_action.py](src/sim/model/parts/choose_action.py)
# * [choose_agent.py](src/sim/model/parts/choose_agent.py)
# * [monthly_instalment.py](src/sim/model/parts/monthly_instalment.py)
# * [private_beliefs.py](src/sim/model/parts/private_beliefs.py)
# * [put_agent_back_to_df.py](src/sim/model/parts/put_agent_back_to_df.py)
# * [uniswap.py](src/sim/model/parts/uniswap.py)
# * [utils.py](src/sim/model/parts/utils.py)
#
#
# ## Model Diagram
#
# 
#
#
# To reproduce this code, we recommend downloading Python 3.7 via the following link: https://www.anaconda.com/products/individual. To install the specific version of cadCAD this repository was built with, run: `pip install cadCAD==0.4.23`
#
# Then run `cd InterchainFoundation` to enter the repository. Finally, run `jupyter notebook` to open a notebook server to run the various notebooks in this repository.
# ### Installed cadCAD Version Check
# + tags=[]
pip freeze | grep cadCAD
# -
# ## Parametric testing of the initialization of the Bonding Curve
#
# ### The specific simulation of the model is as follows:
#
# * 10 agents
# * 14 days; each day, every participant receives 1 xCHF
# * Each day the 10 participants all buy tokens on the bonding curve with their 1 xCHF
# * After 14 days the bond closes because the project is over (succeeds)
# * Because the project wasn't actually spending any funds, the total reward will in fact be C + the reserve
#
#
# * Bond token: uFIT (micro FIT)
# * Reserve token: uXCHF (micro XCHF)
# * C (Outcome payment): 300000000 [uxchf]
# * d0: 1000000
# * p0: 1
# * theta: 0
# * kappa: 3.0
# * max supply: 20000000 [ufit]
# ### Import Libraries for Analysis and Visualization
# + tags=[]
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
# For analysis
import numpy as np
import pandas as pd
sns.set_style("whitegrid")
# -
# ### Import Parent cadCAD Model and Utilities
# +
from cadCAD.configuration import Experiment
from cadCAD import configs
from src.config_wrapper import ConfigWrapper
from src import run_wrapper2
from src import run_wrapper
import src.sim as sim
# custom plotting function
from src.utils import param_test_plot
# -
# ### Choose number of Monte Carlo runs ('N') and/or number of Timesteps ('T') and/or to update parameter values ('M')
#
# ### Current N, T, and M
# + tags=[]
# get list of keys and values from M
parametric_experiment = ConfigWrapper(sim)
model_keys = parametric_experiment.get_config()
model_keys[0]
# -
## Choose parameter values
update_params = {
# disable selling
'ENABLE_BURN' : [False],
'THETA' : [0],
# 'alpha_test' : ['success'],
'alpha_test' : ['failure'],
}
# ### Update Timesteps, if desired
# Current Number of Timesteps
# +
# Original
New_Timesteps = model_keys[0]['T']
# New Change Value
# New_Timesteps = range(365)
New_Timesteps
# -
# ### Update Monte Carlo Runs, if desired
# Current Number of Runs
# +
# Original
New_Runs = model_keys[0]['N']
# New Change Value
# New_Runs = 10
New_Runs
# + tags=[]
parametric_experiment = ConfigWrapper(sim, M=update_params, N=New_Runs, T=New_Timesteps)
# -
# ### Get Initial Conditions from Config
initial_state = parametric_experiment.get_initial_conditions()
initial_state
# ### Update Agents
# #### Choose Number of Agents
number_of_agents = 10
# +
########## AGENT INITIALIZATION ##########
PRIVATE_ALPHA = 0.5
PRIVATE_PRICE = 0.5
r = 0 #1000000 # Agent reserve, the amount of fiat tokens an agent starts with
s = 0
s1 = 0
s0 = 0
s_free = s - (s1+s0)
# Configure agents for agent-based model
agents_df = pd.DataFrame({
'agent_attestations_1': 0,
'agent_attestations_0': 0,
'agent_reserve': r,
'agent_supply_1': s1,
'agent_supply_0': s0,
'agent_supply_free': s_free,
'agent_private_alpha': PRIVATE_ALPHA,
'agent_private_price': PRIVATE_PRICE,
'agent_private_alpha_signal': 0,
'agent_private_price_signal': 0,
'agent_public_alpha_signal': 0,
'agent_public_price_signal': 0}, index=[0])
agents_df = pd.concat([agents_df]*number_of_agents, ignore_index=True)
# Adding IDs to agents
agents_df.insert(0, 'id', range(0, len(agents_df)))
# vary agent reserves
# agents_df['agent_reserve'] = 1000000 #[round(num, 2) for num in list(np.random.uniform(1000000,7000000,10))]
agents_df['agent_private_alpha'] = [0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5]
# vary agent private price
agents_df['agent_private_price'] = 100 # [round(num, 2) for num in list(np.random.uniform(0.4,0.9,10))]
# -
## see initialized values
agents_df[['agent_reserve','agent_private_price']]
# ### Update Agent into Initial State
initial_state['agents'] = agents_df
# ### Re-Instantiate Config with Updated Initial Conditions
parametric_experiment = ConfigWrapper(sim, M=update_params, N=New_Runs, T=New_Timesteps)
# +
del configs[:]
parametric_experiment.append()
# -
parametric_experiment.get_config()
# ### Generate config_ids to match results with swept variable input
# +
def get_M(k, v):
if k == 'sim_config':
k, v = 'M', v['M']
return k, v
config_ids = [
dict(
get_M(k, v) for k, v in config.__dict__.items() if k in ['simulation_id', 'run_id', 'sim_config', 'subset_id']
) for config in configs
]
# + [markdown] tags=[]
# ### Execute cadCAD Simulation
# + tags=[]
(data, tensor_field, sessions) = run_wrapper.run(drop_midsteps=True)
experiments = data
# -
experiments.head()
experiments.tail()
# +
from src.sim.model.parts.utils import *
alpha_plot(experiments,'Code With Me - Alpha',len(New_Timesteps))
# -
alpha(experiments,'Code With Me - Alpha', len(New_Timesteps))
reserve_supply(experiments,'Code With Me - Reserve',len(New_Timesteps))
supply_plot(experiments,'Code with Me - Supply',len(New_Timesteps))
price(experiments,'Code with Me - Price',len(New_Timesteps))
agent_payout(experiments,len(New_Timesteps))
# +
#agent_ROI(experiments,len(New_Timesteps))
# -
agent_profit(experiments,len(New_Timesteps))
# ## Conclusion
#
# In this notebook, we have provided the ability for users to visualize a bonding curve implementation in action, and play around with the parameters.
| Pilot/.ipynb_checkpoints/Code_With_Us_Failure-checkpoint.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: 'Python 3.8.3 64-bit (''base'': conda)'
# language: python
# name: python38364bitbasecondaf64df65f8fc24e9fa1a2d1db350ec619
# ---
# # Description
#
# We are assigned to train a model that will predict a student's score based on the number of hours they've studied the material.
import pandas as pd
import matplotlib.pyplot as plt
# ## Defining a solution
#
# We are starting by loading a small dataset of students' scores and number of hours of their study.
student_scores = pd.read_csv("./student_scores.csv")
# I learned that proper **visualization** is key to understanding the data. In the following plot, we observe a linear trend between the number of hours and the scores.
student_scores.plot.scatter(x="Hours", y="Scores")
# The first algorithm that I learned is called **Linear Regression**. By doing some math, we define a function that produces a predicted score. It depends on the x variable and a set of weights, applied to each term of the polynomial:
# Where w0, w1 are defined as weights (theta)
hypothesis = lambda w0, w1: w0 + w1 * student_scores["Hours"]
# **Gradient Descent** is an algorithm coming from calculus that is designed to find optimal weight values for the function. This process is called optimization.
def gradient_descent(w0, w1, alpha, predictions):
    # n (the collection size) is defined globally before the first call
    coefficient = alpha * 2 / n
    w0 -= coefficient * sum(predictions - student_scores["Scores"])
    w1 -= coefficient * sum((predictions - student_scores["Scores"]) * student_scores["Hours"])
    return [w0, w1]
# For a small dataset, 2,000 iterations are good enough. At each iteration, we take one step toward the function's minimum and assign the new weight values
# + tags=[]
w = [0, 0] # initial weights (theta)
n = len(student_scores) # collection size
a = 0.01 # define step size
def gradient(alpha, weights):
for i in range(2000):
weights = gradient_descent(*weights, alpha, hypothesis(*weights))
return weights
w = gradient(a, w)
print(w)
# -
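# As a quick sanity check on synthetic data (not the student scores — the line y = 3 + 2x below is made up), the same update rule recovers the known weights:

```python
# synthetic data from a known line: y = 3 + 2x
xs = [float(i) for i in range(10)]
ys = [3.0 + 2.0 * x for x in xs]
m = len(xs)

def step(w0, w1, alpha):
    # one gradient-descent step on the mean-squared-error objective
    pred = [w0 + w1 * x for x in xs]
    coef = alpha * 2 / m
    w0 -= coef * sum(p - y for p, y in zip(pred, ys))
    w1 -= coef * sum((p - y) * x for p, y, x in zip(pred, ys, xs))
    return w0, w1

w0, w1 = 0.0, 0.0
for _ in range(20000):
    w0, w1 = step(w0, w1, 0.01)
print(round(w0, 4), round(w1, 4))
```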
# ## Solution
#
# Our model now has a relatively small error rate
# +
# assigning labels
plt.xlabel("Hours")
plt.ylabel("Scores")
plt.scatter(student_scores["Hours"], student_scores["Scores"]) # scattering points
plt.plot(student_scores["Hours"], hypothesis(*w)) # line, predicting the score by passing an hour
| src/student-scores.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + id="lliGZ6gyXR8E" colab_type="code" colab={}
import requests
import numpy as np
import pandas as pd
from sklearn.preprocessing import LabelEncoder
from sklearn.model_selection import train_test_split
from sklearn.model_selection import cross_val_score
from sklearn.metrics import confusion_matrix
from sklearn.neighbors import KNeighborsClassifier
from joblib import dump
# + id="INPvZyAvcIC-" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 173} outputId="6a4eeef5-00b2-46f6-f82a-6ecd380754f7"
DATA_FILE = 'crawler/coinmarketExample.csv'
data = pd.read_csv(DATA_FILE)
data.describe()
# + id="__cUySIfcISX" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 221} outputId="7e8e6a84-2cee-4294-e580-960b4cdb08c7" tags=[]
print(data['open'])
# + [markdown] id="ACGFaVpknTms" colab_type="text"
# ## Try to replace all market cap values with 2 IF the value in the table is greater than 2500, and 1 otherwise
# + id="R8_Qw174nSga" colab_type="code" colab={}
# label each row's market cap based on the raw string value
for idx, value in enumerate(data['market cap']):
    if "2" in value:
        data.loc[idx, 'market cap'] = 0
    else:
        data.loc[idx, 'market cap'] = 1
# + [markdown] id="5amkr1k0efmE" colab_type="text"
# ## Then convert everything to integers
# + id="clNZA7kQdU5m" colab_type="code" colab={}
# extract the leading digits from each column and cast to int,
# skipping columns that are already integer-typed
for col in ['open', 'high', 'low', 'close', 'volume', 'market cap']:
    if (not data[col].dtype == np.dtype(int)):
        data[col] = data[col].str.extract(r'(\d+)').astype(int)
# + id="JuAKgsOBdU8N" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 297} outputId="bd248107-5da9-4af5-dbeb-5a411c2a647f"
data.describe()
# + [markdown] id="a-MmPOBAesma" colab_type="text"
# ## Start the training
# + id="GRQ7wg2QdUQY" colab_type="code" colab={}
X = data[['open', 'high', 'low', 'close', 'volume']].values
Y = data['market cap'].values
X_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size = 0.2, random_state = 1)
# + [markdown] id="mD-VS9W2e3im" colab_type="text"
# X holds the variables I will use to reach Y; in his example, he uses temp, wind, and humidity to reach weather
# + id="AEDFm4JQdUUC" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 224} outputId="e9f82967-0ba5-41b9-9287-520a72c17902" tags=[]
knn_clf = KNeighborsClassifier(n_neighbors = 5)
score = cross_val_score(knn_clf, X, Y, cv = 8)
print("CV: {}".format(np.mean(score)))
knn_clf.fit(X_train, Y_train)
confusion = confusion_matrix(y_true = Y_test, y_pred = knn_clf.predict(X_test))
print('Confusion matrix:')
print(confusion)
# + [markdown] id="XLycS3CWfGdA" colab_type="text"
# Dump the model with joblib once all the training is finished
# + id="AhJ1tTpadUWn" colab_type="code" colab={}
dump(knn_clf, 'service/knn_clf.joblib')
# + id="euKsnS93dUZG" colab_type="code" colab={}
| training.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Section 4.3 $\quad$ Subspaces (cont)
# ## Definition of Linear Combination
# Let $\mathbf{v}_1$, $\mathbf{v}_2$, $\cdots$, $\mathbf{v}_k$ be vectors in a vector space $V$. <br /><br /><br /><br />
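# For reference, the standard statement: a vector $\mathbf{v}$ in $V$ is called a **linear combination** of $\mathbf{v}_1$, $\mathbf{v}_2$, $\cdots$, $\mathbf{v}_k$ if
# \begin{equation*}
# \mathbf{v} = c_1\mathbf{v}_1 + c_2\mathbf{v}_2 + \cdots + c_k\mathbf{v}_k
# \end{equation*}
# for some real numbers $c_1$, $c_2$, $\cdots$, $c_k$.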
# ### Example 1
# Every polynomial of degree $\leq 2$ is a linear combination of $t^2$, $t$, $1$.
# ### Example 2
# Show that every vector in $\mathbb{R}^3$ of the form $\left[\begin{array}{c}a \\ b \\ a+b \end{array}\right]$ is a linear combination of $\mathbf{v}_1 = \left[\begin{array}{c}1 \\ 0 \\ 1 \end{array}\right]$ and $\mathbf{v}_2 = \left[\begin{array}{c}0 \\ 1 \\ 1 \end{array}\right]$.
# ### Example 3
# In $\mathbb{R}^3$, let
# \begin{equation*}
# \mathbf{v}_1 = \left[\begin{array}{c}1 \\ 2 \\ 1 \end{array}\right],~~~
# \mathbf{v}_2 = \left[\begin{array}{c}1 \\ 0 \\ 2 \end{array}\right],~~~
# \mathbf{v}_3 = \left[\begin{array}{c}1 \\ 1 \\ 0 \end{array}\right]
# \end{equation*}
# Verify that the vector
# \begin{equation*}
# \mathbf{v} = \left[\begin{array}{c}2 \\ 1 \\ 5 \end{array}\right]
# \end{equation*}
# is a linear combination of $\mathbf{v}_1$, $\mathbf{v}_2$, and $\mathbf{v}_3$.
# +
from sympy import *
a, b, c = symbols('a b c');
Eq1 = a + b + c - 2;
Eq2 = 2*a + c - 1;
Eq3 = a + 2*b - 5;
solve([Eq1, Eq2, Eq3], (a, b, c))
# -
# ### Example 4
# Consider the homogeneous system
# $$A\mathbf{x} = \mathbf{0}$$
# where $A$ is an $m\times n$ matrix. The set $W$ of solutions is a subset of $\mathbb{R}^n$. Verify that $W$ is a subspace of $\mathbb{R}^n$ (called **solution space**).
# **Remark** The set of all solutions of the linear system $A\mathbf{x} = \mathbf{b}$, with $\mathbf{b} \neq \mathbf{0}$, is <br /><br /><br /><br />
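# As a numerical sketch of Example 4 (the matrix and solutions below are one hypothetical instance, the plane $x + y - z = 0$), sums and scalar multiples of solutions of $A\mathbf{x} = \mathbf{0}$ remain solutions — the closure needed for a subspace:

```python
def matvec(A, x):
    # multiply a matrix (list of rows) by a vector
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

A = [[1, 1, -1]]          # the plane x + y - z = 0
u = [1, 0, 1]             # a solution: 1 + 0 - 1 = 0
v = [0, 1, 1]             # another solution
u_plus_v = [a + b for a, b in zip(u, v)]
five_u = [5 * a for a in u]
print(matvec(A, u_plus_v), matvec(A, five_u))
```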
| Jupyter_Notes/Lecture15_Sec4-3_SubspacesPart2.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # K-Nearest Neighbors (k-NN)
#
# In this chapter, we describe the k-nearest-neighbors algorithm that can be used for classification (of a categorical outcome) or prediction (of a numerical outcome). To classify or predict a new record, the method relies on finding "similar" records in the training data. These "neighbors" are then used to derive a classification or prediction for the new record by voting (for classification) or averaging (for prediction). We explain how similarity is determined, how the number of neighbors is chosen, and how a classification or prediction is computed. k-NN is a highly automated data-driven method. We discuss the advantages and weaknesses of the k-NN method in terms of performance and practical considerations such as computational time.
# ## Imports
#
# Imports required for this notebook
# +
import pandas as pd
import matplotlib.pylab as plt
from sklearn import preprocessing
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.neighbors import NearestNeighbors, KNeighborsClassifier
# -
# ## The k-NN Classifier (Categorical Outcome)
#
# The idea in *k*-nearest-neighbors methods is to identify *k* records in the training dataset that are similar to a new record that we wish to classify. We then use these similar (neighboring) records to classify the new record into a class, assigning the new record to the predominant class among these neighbors. Denote the values of the predictors for this new record by $x_1$, $x_2$, ..., $x_p$. We look for records in our training data that are similar or "near" the record to be classified in the predictor space (i.e., records that have values close to $x_1$, $x_2$, ..., $x_p$). Then, based on the classes to which those proximate records belong, we assign a class to the record that we want to classify.
#
# ### Determining Neighbors
#
# The k-nearest-neighbors algorithm is a classification method that does not make assumptions about the form of the relationship between the class membership (*Y*) and the predictors $X_1$, $X_2$, ..., $X_p$. This is a nonparametric method because it does not involve estimation of parameters in an assumed function form, such as the linear form assumed in [linear regression](./multiple-linear-regression.ipynb). Instead, this method draws information from similarities between the predictor values of the records in the dataset.
#
# A central question is how to measure the distance between records based on their predictor values. The most popular measure of distance is the Euclidean distance. The Euclidean distance between two records ($x_1$, $x_2$, ..., $x_p$) and ($u_1$, $u_2$, ..., $u_p$) is:
#
# <p>
# <center>
# $\sqrt{(x_1 - u_1)^2 + (x_2 - u_2)^2 + ... + (x_p - u_p)^2}$
# </center>
# </p>
#
# You will find a host of other distance metrics elsewhere, for both numerical and categorical variables. However, the *k*-NN algorithm relies on many distance computations (between each record to be predicted and every record in the training set), and therefore the Euclidean distance, which is computationally cheap, is the most popular in *k*-NN.
#
# To equalize the scales that the various predictors may have, note that in most cases, predictors should first be standardized before computing a Euclidean distance. Also note that the means and standard deviations used to standardize new records are those of the *training* data, and the new record is not included in calculating them. The validation data, like new data, are also not included in this calculation.
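# A small sketch of the distance and standardization points (the income/lot-size numbers below are made up): standardize with the *training* means and standard deviations, then measure the Euclidean distance between standardized records.

```python
train = [[60.0, 18.4], [85.5, 16.8], [64.8, 21.6]]   # e.g. income, lot size
means = [sum(col) / len(col) for col in zip(*train)]
stds = [(sum((x - m) ** 2 for x in col) / len(col)) ** 0.5
        for col, m in zip(zip(*train), means)]

def standardize(record):
    # uses the training means/stds only, even for new records
    return [(x - m) / s for x, m, s in zip(record, means, stds)]

def euclidean(u, x):
    return sum((ui - xi) ** 2 for ui, xi in zip(u, x)) ** 0.5

new = standardize([60.0, 20.0])          # a new record
d = euclidean(new, standardize(train[0]))
print(round(d, 4))
```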
#
# ### Classification Rule
#
# After computing the distances between the record to be classified and existing records, we need a rule to assign a class to the record to be classified, based on the classes of its neighbors. The simplest case is *k* = 1, where we look for the record that is closest (the nearest neighbor) and classify the new record as belonging to the same class as its closest neighbor. It is a remarkable fact that this simple, intuitive idea of using a single nearest neighbor to classify records can be very powerful when we have a large number of records in our training set. It turns out that the misclassification error of the 1-nearest neighbor scheme has a misclassification rate that is no more than twice the error when we know exactly the probability density functions for each class.
#
# The idea of the *1-nearest neighbor* can be extended to *k* > 1 neighbors as follows:
#
# 1. Find the nearest *k* neighbors to the record to be classified.
#
# 2. Use a majority decision rule to classify the record, where the record is classified as a member of the majority class of the *k* neighbors.
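# The two steps can be sketched from scratch (the 2-D points below are hypothetical and assumed already standardized):

```python
from collections import Counter

def knn_classify(train_X, train_y, x, k):
    # step 1: rank training records by Euclidean distance to x
    dists = sorted(
        (sum((a - b) ** 2 for a, b in zip(rec, x)) ** 0.5, i)
        for i, rec in enumerate(train_X)
    )
    # step 2: majority vote among the k nearest neighbors
    votes = Counter(train_y[i] for _, i in dists[:k])
    return votes.most_common(1)[0][0]

train_X = [[0.1, 0.2], [0.0, 0.4], [2.0, 1.9], [2.2, 2.1], [1.9, 2.3]]
train_y = ["owner", "owner", "nonowner", "nonowner", "nonowner"]
print(knn_classify(train_X, train_y, [0.2, 0.3], k=1))
print(knn_classify(train_X, train_y, [2.1, 2.0], k=3))
```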
# ## Example: Riding Mowers
#
# A riding-mower manufacturer would like to find a way of classifying families in a city into those likely to purchase a riding mower and those not likely to buy one. A pilot random sample is undertaken of 12 owners and 12 nonowners in the city. The data are shown below. We first partition the data into training data (14 households) and validation data (10 households). Obviously, this dataset is too small for partitioning, which can result in unstable results, but we will continue with this partitioning for illustration purposes. A scatter plot of the training data is also shown below.
#
# Now consider a new household with \\$60,000 income and lot size 20,000 ft² (star marker on the figure). Among the households in the training set, the one closest to the new household (in Euclidean distance after normalizing income and lot size) is household 4, with \\$61,500 income and lot size 20,800 ft² . If we use a 1-NN classifier, we would classify the new household as an owner, like household 4. If we use *k* = 3, the three nearest households are 4,
# 14, and 1, as can be seen visually in the scatter plot, and as computed by the software. Two of these neighbors are owners of riding mowers, and one is a nonowner. The majority vote is therefore *owner*, and the new household would be classified as an owner.
# +
mower_df = pd.read_csv("../datasets/RidingMowers.csv")
mower_df["Number"] = mower_df.index + 1
## new household
new_household = pd.DataFrame({"Income": [60], "Lot_Size": [20],
"Number": [25], "Ownership": ["?"]})
pd.concat([mower_df, new_household])[["Income", "Lot_Size", "Ownership"]]
# +
train_data, valid_data = train_test_split(mower_df, test_size=0.4,
random_state=26)
## scatter plot
def plot_dataset(ax, data, show_label=True, **kwargs):
subset = data.loc[data["Ownership"]=="Owner"]
ax.scatter(subset.Income, subset.Lot_Size, marker="o",
label="Owner" if show_label else None, color="C1", **kwargs)
subset = data.loc[data["Ownership"]=="Nonowner"]
ax.scatter(subset.Income, subset.Lot_Size, marker="D",
label="Nonowner" if show_label else None, color="C0", **kwargs)
plt.xlabel("Income") # set x-axis label
plt.ylabel("Lot_Size") # set y-axis label
for _, row in data.iterrows():
ax.annotate(row.Number, (row.Income + 2, row.Lot_Size))
fig, ax = plt.subplots()
plot_dataset(ax, train_data)
plot_dataset(ax, valid_data, show_label=False, facecolors="none")
ax.scatter(new_household.Income, new_household.Lot_Size, marker="*",
label="New household", color="black", s=150)
plt.xlabel("Income"); plt.ylabel("Lot_Size")
ax.set_xlim(40, 115)
handles, labels = ax.get_legend_handles_labels()
ax.legend(handles, labels, loc=4)
plt.show()
# +
# initialize normalized training, validation, and complete data frames
# use the training data to learn the transformation.
scaler = preprocessing.StandardScaler()
scaler.fit(train_data[["Income", "Lot_Size"]]) # Note the use of array of column names
# transform the full dataset
mower_norm = pd.concat([pd.DataFrame(scaler.transform(mower_df[["Income", "Lot_Size"]]),
columns=["zIncome", "zLot_Size"]),
mower_df[["Ownership", "Number"]]], axis=1)
train_norm = mower_norm.iloc[train_data.index]
valid_norm = mower_norm.iloc[valid_data.index]
## new household
new_household = pd.DataFrame({"Income": [60], "Lot_Size": [20]})
new_household_norm = pd.DataFrame(scaler.transform(new_household),
                                  columns=["zIncome", "zLot_Size"])
# use NearestNeighbors from scikit-learn to compute knn
knn = NearestNeighbors(n_neighbors=3)
knn.fit(train_norm.iloc[:, 0:2])
distances, indices = knn.kneighbors(new_household_norm)
# indices is a 2D array; we are only interested in the first row
train_norm.iloc[indices[0], :]
# -
# ### Choosing k
#
# The advantage of choosing *k* > 1 is that higher values of *k* provide smoothing that reduces the risk of overfitting due to noise in the training data. Generally speaking, if *k* is too low, we may be fitting to the noise in the data. However, if *k* is too high, we will miss out on the method's ability to capture the local structure in the data, one of its main advantages. In the extreme, *k* = *n* = the number of records in the training dataset. In that case, we simply assign all records to the majority class in the training data, irrespective of the values of ($x_1$, $x_2$, ..., $x_p$), which coincides with the naive rule! This is clearly a case of oversmoothing in the absence of useful information in the predictors about the class membership. In other words, we want to balance between overfitting to the predictor information and ignoring this information completely. A balanced choice greatly depends on the nature of the data. The more complex and irregular the structure of the data, the lower the optimum value of *k*. Typically, values of *k* fall in the range of 1-20. We will use odd numbers to avoid ties.
#
# So how is *k* chosen? Answer: We choose the *k* with the best classification performance. We use the training data to classify the records in the validation data, then compute error rates for various choices of *k*. For our example, if we choose *k* = 1, we will classify in a way that is very sensitive to the local characteristics of the training data. On the other hand, if we choose a large value of *k*, such as *k* = 14, we would simply predict the most frequent class in the dataset in all cases. This is a very stable prediction but it completely ignores the information in the predictors. To find a balance, we examine the accuracy (of predictions in the validation set) that results from different choices of *k* between 1 and 14. For an even number *k*, if there is a tie in classifying a household, the tie is broken randomly. This is shown in the table below.
# +
train_X = train_norm[["zIncome", "zLot_Size"]]
train_y = train_norm["Ownership"]
valid_X = valid_norm[["zIncome", "zLot_Size"]]
valid_y = valid_norm["Ownership"]
# Train a classifier for different values of k
results = []
for k in range(1, 15):
knn = KNeighborsClassifier(n_neighbors=k).fit(train_X, train_y)
results.append({"k": k,
"accuracy": accuracy_score(valid_y, knn.predict(valid_X))})
# Convert results to a pandas data frame
results = pd.DataFrame(results)
results
# -
# Best *k* is 4!
#
# Once *k* is chosen, we rerun the algorithm on the combined training and validation sets in order to generate classifications of new records. An example is shown below, where the four nearest neighbors are used to classify the new household.
# +
# Retrain with full dataset
mower_X = mower_norm[["zIncome", "zLot_Size"]]
mower_y = mower_norm["Ownership"]
knn = KNeighborsClassifier(n_neighbors=4).fit(mower_X, mower_y)
distances, indices = knn.kneighbors(new_household_norm)
print("Predicted Ownership:", knn.predict(new_household_norm))
print("Distances:", distances)
print("Indices:", indices)
print(mower_norm.iloc[indices[0], :])
# -
# ### Setting the Cutoff Value
#
# *k*-NN uses a majority decision rule to classify a new record, where the record is classified as a member of the majority class of the *k* neighbors. The definition of "majority" is directly linked to the notion of a cutoff value applied to the class membership probabilities. Let us consider a binary outcome case. For a new record, the proportion of class 1 members among its neighbors is an estimate of its propensity (probability) of belonging to class 1. In the riding mowers example with *k* = 4, we found that the four nearest neighbors to the new household (with income = \\$60,000 and lot size = 20,000 ft²) are households 4, 9, 14, and 1.
#
# Since three of these are owners and one is a nonowner, we can estimate for the new household a probability of 0.75 of being an owner (and 0.25 for being a nonowner). Using a simple majority rule is equivalent to setting the cutoff value to 0.5. In the above results, we see that the software assigned class *owner* to this record.
#
# As mentioned in [Evaluating Prediction Performance](./evaluating-predictive-performance.ipynb), changing the cutoff value affects the confusion matrix (i.e., the error rates). Hence, in some cases we might want to choose a cutoff other than the default 0.5 for the purpose of maximizing accuracy or for incorporating misclassification costs.
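# With scikit-learn, the neighbor proportion is what `predict_proba` returns for a (uniformly weighted) *k*-NN classifier, so a non-default cutoff can be applied directly to it. A minimal sketch with made-up standardized data:

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

# toy standardized predictors and binary classes (hypothetical values)
X = np.array([[0.1, 0.2], [0.3, 0.1], [0.9, 0.8],
              [0.8, 0.9], [0.2, 0.9], [0.7, 0.2]])
y = np.array(["Nonowner", "Nonowner", "Owner", "Owner", "Owner", "Nonowner"])

knn = KNeighborsClassifier(n_neighbors=4).fit(X, y)
new = np.array([[0.6, 0.7]])

# proportion of "Owner" neighbors = estimated propensity of being an owner
p_owner = knn.predict_proba(new)[0, list(knn.classes_).index("Owner")]

# apply a user-chosen cutoff instead of the default 0.5 majority rule
cutoff = 0.75
label = "Owner" if p_owner >= cutoff else "Nonowner"
print(p_owner, label)
```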
#
# ### k-NN with More Than Two Classes
#
# The k-NN classifier can easily be applied to an outcome with *m* classes, where *m* > 2. The "majority rule" means that a new record is classified as a member of the majority class of its *k* neighbors. An alternative, when there is a specific class that we are interested in identifying (and are willing to "overidentify" records as belonging to this class), is to calculate the proportion of the *k* neighbors that belong to this class of interest, use that as an estimate of the probability (propensity) that the new record belongs to that class, and then refer to a user-specified cutoff value to decide whether to assign the new record to that class. For more on the use of cutoff value in classification where there is a single class of interest, see [Evaluating Prediction Performance](./evaluating-predictive-performance.ipynb)
#
# ### Converting Categorical Variables to Binary Dummies
#
# It usually does not make sense to calculate Euclidean distance between two non-numeric categories (e.g., cookbooks and maps, in a bookstore). Therefore, before *k*-NN can be applied, categorical variables must be converted to binary dummies. In contrast to the situation with statistical models such as regression, all *m* binaries should be created and used with *k*-NN. While mathematically this is redundant, since *m* - 1 dummies contain the same information as *m* dummies, this redundant information does not create the multicollinearity problems that it does for linear models. Moreover, in *k*-NN the use of *m* - 1 dummies can yield different classifications than the use of *m* dummies, and lead to an imbalance in the contribution of the different categories to the model.
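# In pandas, this amounts to calling `get_dummies` with `drop_first=False` (the default) so that all *m* dummies are kept. A small sketch with a hypothetical categorical predictor:

```python
import pandas as pd

# hypothetical categorical predictor
df = pd.DataFrame({"genre": ["cookbook", "map", "cookbook", "novel"]})

# for k-NN, keep all m dummies (drop_first=False), unlike in regression
dummies = pd.get_dummies(df["genre"], prefix="genre", drop_first=False)
print(dummies.columns.tolist())
```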
# ## k-NN for a Numerical Outcome
#
# The idea of *k*-NN can readily be extended to predicting a continuous value (as is our aim with multiple linear regression models). The first step of determining neighbors by computing distances remains unchanged. The second step, where a majority vote of the neighbors is used to determine class, is modified such that we take the average outcome value of the *k*-nearest neighbors to determine the prediction. Often, this average is a weighted average, with the weight decreasing with increasing distance from the point at which the prediction is required. In `scikit-learn`, we can use `KNeighborsRegressor` to compute *k*-NN numerical predictions for the validation set.
#
# Another modification is in the error metric used for determining the "best k". Rather than the overall error rate used in classification, RMSE (root-mean-squared error) or another prediction error metric should be used in prediction.
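# A minimal sketch of both modifications (made-up one-predictor data; `weights="distance"` gives the distance-weighted average of the neighbors, and RMSE is computed on the validation set):

```python
import numpy as np
from sklearn.neighbors import KNeighborsRegressor
from sklearn.metrics import mean_squared_error

# hypothetical standardized predictor and numerical outcome
X_train = np.array([[0.0], [1.0], [2.0], [3.0], [4.0]])
y_train = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
X_valid = np.array([[1.5], [2.5]])
y_valid = np.array([2.4, 3.6])

# prediction = distance-weighted average of the k nearest outcomes
knn = KNeighborsRegressor(n_neighbors=2, weights="distance").fit(X_train, y_train)
pred = knn.predict(X_valid)

# RMSE on the validation set replaces the classification error rate
rmse = np.sqrt(mean_squared_error(y_valid, pred))
print(pred, rmse)
```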
#
# **PANDORA**
#
# Pandora is an Internet music radio service that allows users to build customized "stations" that play music similar to a song or artist that they have specified. Pandora uses a k-NN style clustering/classification process called the Music Genome Project to locate new songs or artists that are close to the user-specified song or artist.
#
# Pandora was the brainchild of <NAME>, who worked as a musician and a nanny when he graduated from Stanford in the 1980s. Together with <NAME>, who was studying medieval music, he developed a "matching engine" by entering data about a song's characteristics into a spreadsheet. The first result was surprising: a Beatles song matched to a Bee Gees song. Nevertheless, they built a company around the concept. The early days were hard: Westergren racked up over \\$300,000 in personal debt, maxed out 11 credit cards, and ended up in the hospital once due to stress-induced heart palpitations. A venture capitalist finally invested funds in 2004 to rescue the firm, and as of 2013, it is listed on the NY Stock Exchange.
#
# In simplified terms, the process works roughly as follows for songs:
#
# 1. Pandora has established hundreds of variables on which a song can be measured on a scale from 0 to 5. Four such variables from the beginning of the list are
#
# - Acid Rock Qualities
# - Accordion Playing
# - Acousti-Lectric Sonority
# - Acousti-Synthetic Sonority
#
# 2. Pandora pays musicians to analyze tens of thousands of songs, and rate each song on each of these attributes. Each song will then be represented by a row vector of values between 0 and 5, for example, for Led Zeppelin's Kashmir: Kashmir 4 0 3 3 ... (high on acid rock attributes, no accordion, etc.)
#
# This step represents a costly investment, and lies at the heart of Pandora's value because these variables have been tested and selected because they accurately reflect the essence of a song, and provide a basis for defining highly individualized preferences.
#
# 3. The online user specifies a song that s/he likes (the song must be in Pandora's database).
#
# 4. Pandora then calculates the statistical distance between the user's song and the songs in its database. It selects a song that is close to the user-specified song and plays it.
#
# 5. The user then has the option of saying "I like this song", "I don't like this song," or saying nothing.
#
# 6. If "like" is chosen, the original song plus the new song are merged into a two-song cluster that is represented by a single vector comprising the means of the variables in the original two song vectors.
#
# 7. If "dislike" is chosen, the vector of the song that is not liked is stored for future reference. (If the user does not express an opinion about the song, in our simplified example here, the new song is not used for further comparisons.)
#
# 8. Pandora looks in its database for a new song, one whose statistical distance is close to the "like" song cluster and not too close to the "dislike" song. Depending on the user's reaction, this new song might be added to the "like" cluster or the "dislike" cluster.
#
# Over time, Pandora develops the ability to deliver songs that match a particular taste of a particular user. A single user might build up multiple stations around different song clusters. Clearly, this is a less limiting approach than selecting music in terms of which "genre" it belongs to.
#
# While the process described above is a bit more complex than the basic "classification of new data" process described in this chapter, the fundamental process of classifying a record according to its proximity to other records is the same at its core. Note the role of domain knowledge in this machine learning process: the variables have been tested and selected by the project leaders, and the measurements have been made by human experts.
#
# Further reading: See www.pandora.com, Wikipedia's article on the Music Genome Project, and <NAME>'s article "Pandora and the Music Genome Project", Scientific Computing, vol. 23, no. 10, pp. 40–41, Sep. 2006.
# ## Advantages and Shortcomings of k-NN Algorithms
#
# The main advantage of *k*-NN methods is their simplicity and lack of parametric assumptions. In the presence of a large enough training set, these methods perform surprisingly well, especially when each class is characterized by multiple combinations of predictor values. For instance, in real-estate databases, there are likely to be multiple combinations of {home type, number of rooms, neighborhood, asking price, etc.} that characterize homes that sell quickly vs. those that remain for a long period on the market.
#
# There are three difficulties with the practical exploitation of the power of the *k*-NN approach. First, although no time is required to estimate parameters from the training data (as would be the case for parametric models such as regression), the time to find the nearest neighbors in a large training set can be prohibitive. A number of ideas have been implemented to overcome this difficulty. The main ideas are:
#
# - Reduce the time taken to compute distances by working in a reduced dimension using dimension reduction techniques such as [principal components analysis](./dimension-reduction.ipynb)
#
# - Use sophisticated data structures such as search trees to speed up identification of the nearest neighbor. This approach often settles for an "almost nearest" neighbor to improve speed. An example is using *bucketing*, where the records are grouped into buckets so that records within each bucket are close to each other. For a to-be-predicted record, buckets are ordered by their distance to the record. Starting from the nearest bucket, the distance to each of the records within the bucket is measured. The algorithm stops when the distance to a bucket is larger than the distance to the closest record thus far.
#
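# In scikit-learn, such tree-based search structures are available through the `algorithm` parameter of `NearestNeighbors`; a small sketch on synthetic data (both searches here are exact, so they return the same neighbors, but the tree avoids scanning every record per query):

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)
X = rng.normal(size=(10_000, 5))  # hypothetical training records

# the k-d tree is built once at fit time; each query then prunes the search
nn_tree = NearestNeighbors(n_neighbors=3, algorithm="kd_tree").fit(X)
nn_brute = NearestNeighbors(n_neighbors=3, algorithm="brute").fit(X)

query = rng.normal(size=(1, 5))
_, idx_tree = nn_tree.kneighbors(query)
_, idx_brute = nn_brute.kneighbors(query)
print(np.array_equal(idx_tree, idx_brute))  # exact search: same neighbors
```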
# Second, the number of records required in the training set to qualify as large increases exponentially with the number of predictors *p*. This is because the expected distance to the nearest neighbor goes up dramatically with *p* unless the size of the training set increases exponentially with *p*. This phenomenon is known as the *curse of dimensionality*, a fundamental issue pertinent to all classification, prediction, and clustering techniques. This
# is why we often seek to reduce the number of predictors through methods such as selecting subsets of the predictors for our model or by combining them using methods such as principal components analysis, singular value decomposition, and factor analysis ([dimension reduction notebook](./dimension-reduction.ipynb)).
#
# Third, *k*-NN is a "lazy learner": the time-consuming computation is deferred to the time of prediction. For every record to be predicted, we compute its distances from the entire set of training records only at the time of prediction. This behavior prohibits using this algorithm for real-time prediction of a large number of records simultaneously.
| analysis/k-nearest-neighbors.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3.6
# language: python
# name: python36
# ---
# Copyright (c) Microsoft Corporation. All rights reserved.
#
# Licensed under the MIT License.
# 
# # Tutorial: Get started (day 1) with Azure Machine Learning (Part 1 of 4)
#
# ---
# ## Introduction <a id='intro'></a>
#
# In this **four-part tutorial series**, you will learn the fundamentals of Azure Machine Learning and complete jobs-based Python machine learning tasks in the Azure cloud, including:
#
# 1. Set up a compute cluster
# 2. Run code in the cloud using Azure Machine Learning's Python SDK.
# 3. Manage the Python environment you use for model training.
# 4. Upload data to Azure and consume that data in training.
#
# In this first part of the tutorial series, you learn how to create an Azure Machine Learning compute cluster, which you will submit jobs to in the subsequent parts of the series. This notebook follows the steps provided on the [Python (day 1) - set up local computer documentation page](https://aka.ms/day1aml).
#
# ## Pre-requisites <a id='pre-reqs'></a>
#
# - An Azure Subscription. If you don't have an Azure subscription, create a free account before you begin. Try [Azure Machine Learning](https://aka.ms/AMLFree) today.
# - Familiarity with Python and Machine Learning concepts. For example, environments, training, scoring, and so on.
# - If you are using a compute instance in Azure Machine Learning to run this notebook series, you are all set. Otherwise, please follow the [Configure a development environment for Azure Machine Learning](https://docs.microsoft.com/azure/machine-learning/how-to-configure-environment) guide.
#
# ---
# ## Ensure you have the latest Azure Machine Learning Python SDK
#
# This tutorial series depends on having the Azure Machine Learning SDK version 1.14.0 onwards installed. You can check your version using the code cell below.
# +
from azureml.core import VERSION
print('Version: ' + VERSION)
# -
# If your version is below 1.14.0, then upgrade the SDK using `pip` (**Note: You may need to restart your kernel for the changes to take effect. Re-run the cell above to ensure you have the right version**).
#
# ```bash
# # !pip install -U azureml-sdk
# ```
# ## Create an Azure Machine Learning compute cluster <a id='createcc'></a>
#
# As this tutorial focuses on jobs-based machine learning tasks, you will be submitting python code to run on an Azure Machine Learning **Compute cluster**, which is well suited for large jobs and production. Therefore, you create an Azure Machine Learning compute cluster that will auto-scale between zero and four nodes:
# + tags=["create mlc", "batchai"]
from azureml.core import Workspace
from azureml.core.compute import ComputeTarget, AmlCompute
from azureml.core.compute_target import ComputeTargetException
ws = Workspace.from_config() # this automatically looks for a directory .azureml
# Choose a name for your CPU cluster
cpu_cluster_name = "cpu-cluster"
# Verify that cluster does not exist already
try:
cpu_cluster = ComputeTarget(workspace=ws, name=cpu_cluster_name)
print('Found existing cluster, use it.')
except ComputeTargetException:
compute_config = AmlCompute.provisioning_configuration(vm_size='STANDARD_D2_V2',
max_nodes=4,
idle_seconds_before_scaledown=2400)
cpu_cluster = ComputeTarget.create(ws, cpu_cluster_name, compute_config)
cpu_cluster.wait_for_completion(show_output=True)
# -
# > <span style="color:darkblue;font-weight:bold"> ! INFORMATION
# > When the cluster has been created it will have 0 nodes provisioned. Therefore, the cluster does not incur costs until you submit a job. This cluster will scale down when it has been idle for 2400 seconds (40 minutes).</span>
# ## Next Steps
#
# In the next tutorial, you walk through submitting a script to the Azure Machine Learning compute cluster.
#
# [Tutorial: Run "Hello World" Python Script on Azure](day1-part2-hello-world.ipynb)
#
| tutorials/get-started-day1/day1-part1-setup.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Numpy #
#
# Numpy is a library created to provide ***n-dimensional arrays in Python***. Its main characteristics are that it is **extremely fast, versatile, and used by many other libraries**; indeed, as we will see later, other libraries build on it to perform faster operations and to have control over memory.
#
# <div class="alert alert-block alert-success">
# The reason numpy is so fast is that it is built on lower-level languages, namely C and Fortran; as we will see later, this can be noticed from the presence of variable types with a defined number of bits.
# </div>
#
# <div class="alert alert-block alert-warning">
# Numpy should already be installed with anaconda; if it is not, install it by following lesson 1-Module.
# </div>
#
# ## What are arrays? ##
#
# Arrays are a computer-science concept inspired by the notion of vectors, matrices, or more generally the tensors used in geometry and mathematics.
# The concept of an array was already introduced indirectly in chapter 1 with the definition of data structures, in particular with the definition of lists and dictionaries. To get a picture of what these arrays are, let us use an image from numpy.
#
# 
#
# As we can see from the image, a 1d array is equivalent to a Python list. The problem arises when we move to 2d or higher arrays, for which there is no direct built-in instruction; in that case we turn to the library, noting that numpy will create a list of lists tied to the number of dimensions considered.
# Let us now see how to use it through the library.
#
# ## Creating arrays and operations ##
#
# ### Creating arrays ###
# To create an array in numpy, the first step is to import the library; after that you can initialize an array, choosing whether it should contain zeros or ones. Note that the values you see may be machine-dependent approximations that are merely close to those values.
import numpy as np
# note the double parentheses carefully
ones_1d = np.ones((4,))  # contains only 4 elements
zeros_1d = np.zeros((4,))
print('1-dimensional array of ones', ones_1d)
print('1-dimensional array of zeros', zeros_1d)
# To create 2D arrays, the only change needed is to add the size of the second dimension inside the double parentheses.
ones_2d = np.ones((2,3))
zeros_2d = np.zeros((2,3))
print('2-dimensional array of ones:\n', ones_2d)
print('2-dimensional array of zeros:\n', zeros_2d)
# As we can see, the 1d array is the analogue of a list, while the 2d one is a list of lists; this is because, as we said, python has no other structures beyond these for storing data in an ordered way.
# For 3d arrays we repeat the previous steps.
ones_3d = np.ones((4,3,2))
zeros_3d = np.zeros((4,3,2))
print('3-dimensional array of ones:\n', ones_3d)
print('3-dimensional array of zeros:\n', zeros_3d)
# ### Operations on arrays ###
# On these arrays it is possible to perform operations as if they were variables, and many more besides.
# #### Addition ####
# To add two arrays we simply use `+` or `np.add`.
print(ones_1d + ones_1d)
print(np.add(ones_1d, ones_1d))
# #### Subtraction ####
# To subtract two arrays we simply use `-` or `np.subtract`.
print(ones_2d - ones_2d)
# #### Multiplication ####
# To multiply two arrays we simply use `*` or `np.multiply`.
print(ones_2d * zeros_2d)
# #### Division ####
# To divide two arrays we simply use `/` or `np.divide`.
print(ones_2d / 2)
# <div class="alert alert-block alert-success">
# Note carefully that the last operation above is not between two arrays; it works thanks to broadcasting, which manipulates the arrays so that they end up with the same shape. This does not happen for all operations; for clarification refer to this <a href="https://www.tutorialspoint.com/numpy/numpy_broadcast.htm">link</a>.
# </div>
#
# There are many other operations in the library, too many to list here; for an overview you can consult this __[cheatsheet](https://s3.amazonaws.com/assets.datacamp.com/blog_assets/Numpy_Python_Cheat_Sheet.pdf)__, while if you have doubts you can consult the __[documentation](https://numpy.org/doc/stable/index.html)__.
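# A minimal sketch of broadcasting, mentioned above: a 1d array is "stretched" along the missing axis so the shapes match before the operation is applied.

```python
import numpy as np

a = np.ones((2, 3))                # shape (2, 3)
b = np.array([10.0, 20.0, 30.0])   # shape (3,)

# b is broadcast (virtually repeated) along the first axis to match a's shape
result = a + b
print(result)
```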
# ## Accessing elements in arrays ##
# To access array elements the same rules as for lists apply, so remember that indices start at 0. To access single elements in multidimensional arrays you must use a number of indices equal to the array's dimension, while if you want to access multiple elements you just define an interval as with lists. Let us look at some examples to clarify this.
# select one or more elements in a 1d array
print('1d array')
print('Element at index 2', ones_1d[2])
print('All but the last two elements', ones_1d[:-2])
print('All elements', ones_1d[:])
# select one or more elements in 2d
print('2d array')
print('element at position (1,2)', ones_2d[1,2])
print('First row', ones_2d[0,:])
print('First column', ones_2d[:,0])
# ## Data types in numpy ##
# Since numpy is built on lower-level languages, it provides more information about how the data are constructed; let us see this by looking at the information of an array.
np.info(ones_2d)
# As we can see, we get information about the array:
# - the class
# - shape: the shape of the array
# - strides: how many bytes separate one memory cell from the next
# - itemsize: the length of each item in bytes
# - aligned: whether it is aligned in memory
# - contiguous: whether the array is laid out using the C convention
# - fortran: the analogue of contiguous, with the condition placed on the Fortran language
# - the data pointer gives the memory address where the data are stored
# - byteorder gives the byte order of this data type
# - byteswap tells whether we want to swap the byteorder
# - type: the data type, with its number of bits
#
# The central point, however, is the type, which tells us it is float64, i.e. the float type already seen in python, with the addition that each number is stored in 64 bits.<br>
# 64-bit is considered the standard format because it is a good compromise between speed and precision; if you need more speed and less precision, you can convert the array to a format with fewer bits using the command `.astype('new_type')`, where the new type must be among those allowed; see this __[link](https://www.delftstack.com/tutorial/python-numpy/numpy-datatype-and-conversion/)__ for more info.
# create a new array based on the previous one, with type float32
ones_2d_32bit = ones_2d.astype('float32')
np.info(ones_2d_32bit)
# <div class="alert alert-block alert-danger">
# Be careful when using a low number of bits for particularly large numbers, since you may run into approximation errors, due to an insufficient number of bits, when performing operations; also be careful when converting integers to floating point and other data types.
# </div>
#
# ***
# CONGRATULATIONS, YOU HAVE COMPLETED THE NUMPY LESSON!
| 2.librerie python/2-Numpy.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
git --version
# Show git-config(1) Manual Page in browser.
git help config
# --global : apply this config for all projects.
git config --global user.name "yourusername"
git config --global user.email <EMAIL>
git config --list
# Open git-init(1) Manual Page in browser.
git help init
# Create a directory and convert it to a repository.
mkdir directoryproject
# cd directoryproject
git init
# What has changed in the project?
git status
# <pre>
# All steps:
# 1.Untrack
# 2.Staged
# 3.Commit
# If you are certain about changes in a file, you can add file to Staged step by this command: </pre>
git add filename
# To remove a staged file from the staging area (untrack it)
git rm --cached filename
# To commit files use git commit. git commit commits only staged files; untracked files remain unchanged.
git commit -m "description" filename
| .ipynb_checkpoints/Git-checkpoint.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## Check leak on external data
#
# List of external data checked :
# * https://www.kaggle.com/shymammoth/shopee-reviews
# # Library
# !pip install jellyfish
# + _cell_guid="b1076dfc-b9ad-4769-8c92-a6c4dae69d19" _uuid="8f2839f25d086af736a60e9eeb907d3b93b6e0e5"
import numpy as np
import pandas as pd
import jellyfish
# -
# # Dataset
# !ls /kaggle/input
# + _cell_guid="79c7e3d0-c299-4dcb-8224-4455121ee9b0" _uuid="d629ff2d2480ee46fbb7e2d37f6b5fab8052498a"
df_test = pd.read_csv('/kaggle/input/student-shopee-code-league-sentiment-analysis/test.csv')
df_test
# -
df_train_ext = pd.read_csv('/kaggle/input/shopee-reviews/shopee_reviews.csv')
df_train_ext
# # Exact match
set_test = set()
for i in df_test.index:
set_test.add(df_test.loc[i, 'review'])
list_exact_match = []
for i in df_train_ext.index:
if df_train_ext.loc[i, 'text'] in set_test:
list_exact_match.append(df_train_ext.loc[i, 'text'])
len(list_exact_match)
list_exact_match
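The row-by-row loops above work, but the same exact-match check can be expressed directly with pandas `isin`; a minimal sketch on toy data (the column names `review` and `text` mirror the notebook, the values are made up):

```python
import pandas as pd

# Toy stand-ins for df_test and df_train_ext.
df_test = pd.DataFrame({'review': ['great product', 'bad quality', 'fast delivery']})
df_train_ext = pd.DataFrame({'text': ['fast delivery', 'great product', 'never arrived']})

# Boolean mask marking external-training texts that also appear in the test set.
mask = df_train_ext['text'].isin(set(df_test['review']))
list_exact_match = df_train_ext.loc[mask, 'text'].tolist()
print(list_exact_match)  # → ['fast delivery', 'great product']
```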
# # String similarity
#
# > Unused due to the O(n·m) cost of comparing every external-train/test pair
#
# Using jaro_distance, with output between 0 and 1.
#
# See https://jellyfish.readthedocs.io/en/latest/comparison.html#jaro-similarity
# ## Example
jellyfish.jaro_distance('great product fast delivery', 'Amazing product, quick delivery')
jellyfish.jaro_distance('easy to use', 'hard to use')
jellyfish.jaro_distance('good', 'goods')
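jellyfish is an external dependency; for a quick sanity check without installing anything, the standard library's `difflib` provides a similarity ratio that is also in [0, 1] (it is a different metric from Jaro, so scores will not match jellyfish exactly):

```python
import difflib

def similarity(a: str, b: str) -> float:
    """Return a similarity ratio in [0, 1]; 1.0 means identical strings."""
    return difflib.SequenceMatcher(None, a, b).ratio()

print(similarity('good', 'goods'))              # high: only one extra character
print(similarity('easy to use', 'hard to use'))
print(similarity('good', 'good'))               # → 1.0
```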
# ## Check
# +
# set_test = set()
# for i in df_test.index:
# set_test.add(df_test.loc[i, 'review'])
# len(set_test)
# +
# set_train_ext = set()
# for i in df_train_ext.index:
# set_train_ext.add(df_train_ext.loc[i, 'text'])
# len(set_train_ext)
# +
# df_similar_string = pd.DataFrame(columns=['test', 'train_ext', 'similarity_score'])
# +
# for r_train in set_train_ext:
# for r_test in set_test:
# similarity_score = jellyfish.jaro_distance(r_test, r_train)
# if similarity_score >= 0.9:
# df_similar_string.loc[df_similar_string.shape[0]] = [r_test, r_train, similarity_score]
# +
# pd.set_option('display.max_rows', None)
# pd.set_option('display.max_columns', None)
# pd.set_option('display.width', None)
# +
# df_similar_string
# +
# df_similar_string.to_csv('similarity.csv', index=None)
| 00_check_leak.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import librosa
import numpy as np
from tensorflow import keras
# %load_ext autoreload
# %autoreload 2
# %run ./S0_util.ipynb
# -
# ### Network Summary
# |Layer (type)                | Output Shape        | Param   |
# | -------------------------- | ------------------- |---------|
# |conv2d (Conv2D) | (None, 257, 22, 32) | 320 |
# |max_pooling2d (MaxPooling2D)| (None, 129, 11, 32) | 0 |
# |batch_normalization (BatchNo| (None, 129, 11, 32) | 128 |
# |conv2d_1 (Conv2D) | (None, 127, 9, 32) | 9248 |
# |max_pooling2d_1 (MaxPooling2| (None, 64, 5, 32) | 0 |
# |batch_normalization_1 (Batch| (None, 64, 5, 32) | 128 |
# |conv2d_2 (Conv2D) | (None, 63, 4, 32) | 4128 |
# |max_pooling2d_2 (MaxPooling2| (None, 32, 2, 32) | 0 |
# |batch_normalization_2 (Batch| (None, 32, 2, 32) | 128 |
# |flatten (Flatten) | (None, 2048) | 0 |
# |dense (Dense) | (None, 64) | 131136 |
# |dropout (Dropout) | (None, 64) | 0 |
# |dense_1 (Dense) | (None, 2) | 130 |
# ##### Batch size 50 and 50 EPOCHS
# ### Contents
# 1. F: MFCC [link](#mfcc)
# 1. F: MFCC,RMS [link](#mfcc_rms)
# 1. F: MFCC,RMS,ZCR [link](#mfcc_rms_zcr)
# 1. F: MFCC,RMS,ZCR,ONSET [link](#mfcc_rms_zcr_onset)
# 1. F: MFCC,RMS,ZCR,ONSET,CENTROID [link](#mfcc_rms_zcr_onset_centroid)
# 1. F: MFCC, MEL SPEC [link](#mfcc_mel_spec)
# 1. F: MEL SPEC [link](#mel_spec)
# 1. F: MEL SPEC,RMS [link](#mel_spec_rms)
# 1. F: MEL SPEC,RMS,ZCR [link](#mel_spec_rms_zcr)
# 1. F: MEL SPEC,RMS,ZCR,ONSET [link](#mel_spec_rms_zcr_onset)
# 1. F: MEL SPEC,RMS,ZCR,ONSET,CENTROID [link](#mel_spec_rms_zcr_onset_centroid)
# <a id="mfcc"></a>
# #### MFCC
# > Only `13` MFCC values were generated (widely used for pattern recognition)
# * Soap opera: A Favorita
print("Abertura".center(20, '*'))
run_model(["mfcc"], '../data/globo/8701320.wav')
print("Bloco1".center(20, '*'))
run_model(["mfcc"],'../data/globo/949809.wav')
# * Soap opera: Órfãos da Terra
print("Abertura".center(20, '*'))
run_model(["mfcc"], "../data/globo/7822663.wav")
print("Bloco1".center(20, '*'))
run_model(["mfcc"],"../data/globo/7957359.wav")
# <a id="mfcc_rms"></a>
# #### MFCC + RMS
# > Only `13` MFCC values were generated (widely used for pattern recognition)
# * Soap opera: A Favorita
print("Abertura".center(20, '*'))
run_model(["mfcc","rms"], '../data/globo/8701320.wav')
print("Bloco1".center(20, '*'))
run_model(["mfcc","rms"],'../data/globo/949809.wav')
# * Soap opera: Órfãos da Terra
print("Abertura".center(20, '*'))
run_model(["mfcc","rms"], "../data/globo/7822663.wav")
print("Bloco1".center(20, '*'))
run_model(["mfcc","rms"],"../data/globo/7957359.wav")
# <a id="mfcc_rms_zcr"></a>
# #### MFCC + RMS + ZCR
# > Only `13` MFCC values were generated (widely used for pattern recognition); a constant was added to the ZCR
# * Soap opera: A Favorita
print("Abertura".center(20, '*'))
run_model(["mfcc","rms", "zcr"], '../data/globo/8701320.wav')
print("Bloco1".center(20, '*'))
run_model(["mfcc","rms", "zcr"],'../data/globo/949809.wav')
# * Soap opera: Órfãos da Terra
print("Abertura".center(20, '*'))
run_model(["mfcc","rms", "zcr"], "../data/globo/7822663.wav")
print("Bloco1".center(20, '*'))
run_model(["mfcc","rms", "zcr"],"../data/globo/7957359.wav")
# <a id="mfcc_rms_zcr_onset"></a>
# #### MFCC + RMS + ZCR + ONSET
# > Only `13` MFCC values were generated (widely used for pattern recognition); a constant was added to the ZCR
# * Soap opera: A Favorita
print("Abertura".center(20, '*'))
run_model(["mfcc","rms", "zcr", "onset"], '../data/globo/8701320.wav')
print("Bloco1".center(20, '*'))
run_model(["mfcc","rms", "zcr", "onset"],'../data/globo/949809.wav')
# * Soap opera: Órfãos da Terra
print("Abertura".center(20, '*'))
run_model(["mfcc","rms", "zcr", "onset"], "../data/globo/7822663.wav")
print("Bloco1".center(20, '*'))
run_model(["mfcc","rms", "zcr", "onset"],"../data/globo/7957359.wav")
# <a id="mfcc_rms_zcr_onset_centroid"></a>
# #### MFCC + RMS + ZCR + ONSET + CENTROID
# > Only `13` MFCC values were generated (widely used for pattern recognition); a constant was added to the ZCR
# * Soap opera: A Favorita
print("Abertura".center(20, '*'))
run_model(["mfcc","rms", "zcr", "onset", "centroid"], '../data/globo/8701320.wav')
print("Bloco1".center(20, '*'))
run_model(["mfcc","rms", "zcr", "onset", "centroid"],'../data/globo/949809.wav')
# * Soap opera: Órfãos da Terra
print("Abertura".center(20, '*'))
run_model(["mfcc","rms", "zcr", "onset", "centroid"], "../data/globo/7822663.wav")
print("Bloco1".center(20, '*'))
run_model(["mfcc","rms", "zcr", "onset", "centroid"],"../data/globo/7957359.wav")
# <a id="mfcc_mel_spec"></a>
# #### MFCC + MEL SPEC
# > Only `13` MFCC values were generated; the Mel spectrogram was generated using `20` bands
# * Soap opera: A Favorita
print("Abertura".center(20, '*'))
run_model(["mfcc","mel_spec"], '../data/globo/8701320.wav')
print("Bloco1".center(20, '*'))
run_model(["mfcc","mel_spec"],'../data/globo/949809.wav')
# * Soap opera: Órfãos da Terra
print("Abertura".center(20, '*'))
run_model(["mfcc","mel_spec"], "../data/globo/7822663.wav")
print("Bloco1".center(20, '*'))
run_model(["mfcc","mel_spec"],"../data/globo/7957359.wav")
# <a id="mel_spec"></a>
# #### MEL SPEC
# > The Mel spectrogram was generated using 20 bands
# * Soap opera: A Favorita
print("Abertura".center(20, '*'))
run_model(["mel_spec"], '../data/globo/8701320.wav')
print("Bloco1".center(20, '*'))
run_model(["mel_spec"],'../data/globo/949809.wav')
# * Soap opera: Órfãos da Terra
print("Abertura".center(20, '*'))
run_model(["mel_spec"], "../data/globo/7822663.wav")
print("Bloco1".center(20, '*'))
run_model(["mel_spec"],"../data/globo/7957359.wav")
# <a id="mel_spec_rms"></a>
# #### MEL SPEC + RMS
# > The Mel spectrogram was generated using `20` bands
# * Soap opera: A Favorita
print("Abertura".center(20, '*'))
run_model(["mel_spec","rms"], '../data/globo/8701320.wav')
print("Bloco1".center(20, '*'))
run_model(["mel_spec","rms"],'../data/globo/949809.wav')
# * Soap opera: Órfãos da Terra
print("Abertura".center(20, '*'))
run_model(["mel_spec","rms"], "../data/globo/7822663.wav")
print("Bloco1".center(20, '*'))
run_model(["mel_spec","rms"],"../data/globo/7957359.wav")
# <a id="mel_spec_rms_zcr"></a>
# #### MEL SPEC + RMS + ZCR
# > The Mel spectrogram was generated using `20` bands; a constant was added to the ZCR
# * Soap opera: A Favorita
print("Abertura".center(20, '*'))
run_model(["mel_spec","rms", "zcr"], '../data/globo/8701320.wav')
print("Bloco1".center(20, '*'))
run_model(["mel_spec","rms", "zcr"],'../data/globo/949809.wav')
# * Soap opera: Órfãos da Terra
print("Abertura".center(20, '*'))
run_model(["mel_spec","rms", "zcr"], "../data/globo/7822663.wav")
print("Bloco1".center(20, '*'))
run_model(["mel_spec","rms", "zcr"],"../data/globo/7957359.wav")
# <a id="mel_spec_rms_zcr_onset"></a>
# #### MEL SPEC + RMS + ZCR + ONSET
# > The Mel spectrogram was generated using `20` bands; a constant was added to the ZCR
# * Soap opera: A Favorita
print("Abertura".center(20, '*'))
run_model(["mel_spec","rms", "zcr", "onset"], '../data/globo/8701320.wav')
print("Bloco1".center(20, '*'))
run_model(["mel_spec","rms", "zcr", "onset"],'../data/globo/949809.wav')
# * Soap opera: Órfãos da Terra
print("Abertura".center(20, '*'))
run_model(["mel_spec","rms", "zcr", "onset"], "../data/globo/7822663.wav")
print("Bloco1".center(20, '*'))
run_model(["mel_spec","rms", "zcr", "onset"],"../data/globo/7957359.wav")
# <a id="mel_spec_rms_zcr_onset_centroid"></a>
# #### MEL SPEC + RMS + ZCR + ONSET + CENTROID
# > The Mel spectrogram was generated using `20` bands; a constant was added to the ZCR
# * Soap opera: A Favorita
print("Abertura".center(20, '*'))
run_model(["mel_spec","rms", "zcr", "onset", "centroid"], '../data/globo/8701320.wav')
print("Bloco1".center(20, '*'))
run_model(["mel_spec","rms", "zcr", "onset", "centroid"],'../data/globo/949809.wav')
# * Soap opera: Órfãos da Terra
print("Abertura".center(20, '*'))
run_model(["mel_spec","rms", "zcr", "onset", "centroid"], "../data/globo/7822663.wav")
print("Bloco1".center(20, '*'))
run_model(["mel_spec","rms", "zcr", "onset", "centroid"],"../data/globo/7957359.wav")
| audio_classify/test/S3_run_mlp_model_v1.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# Copyright (c) 2019 <NAME> UCSB Licensed under BSD 2-Clause [see LICENSE for details] Written by <NAME>
#
# This batch processes ST-images (3C) into ethograms.
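The overall mapping implemented below — per-frame class probabilities from the CNN are collapsed with argmax, and frames with too little movement are overridden with the "no movement" label 7 — can be sketched in NumPy (a simplified illustration, not the project's actual pipeline):

```python
import numpy as np

def probs_to_etho(prob_mat, movement, threshold=10, quiet_label=7):
    """prob_mat: (n_classes, n_frames); movement: (n_frames,) max-movement signal."""
    etho = np.argmax(prob_mat, axis=0).astype(float)
    etho[movement < threshold] = quiet_label  # too little motion: force quiet label
    return etho

probs = np.array([[0.1, 0.8],
                  [0.9, 0.2]])         # 2 classes x 2 frames
movement = np.array([50.0, 3.0])       # second frame is below the threshold
print(probs_to_etho(probs, movement))  # → [1. 7.]
```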
# +
# get the prob matrix and movement matrix (signal level) from 3C ST images
import numpy as np
import scipy
from scipy import ndimage
from scipy import misc
import pickle
import pandas as pd
import time
import matplotlib.pyplot as plt
import cv2
import os
import matplotlib.colors as mcolors
import natsort
from PIL import Image
from sklearn.utils import shuffle
from tensorflow import keras
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Dropout, Activation, Flatten
from tensorflow.keras.layers import Conv2D, MaxPooling2D
from tensorflow.keras.callbacks import TensorBoard
from ABRS_modules import discrete_radon_transform
from ABRS_modules import etho2ethoAP
from ABRS_modules import smooth_1d
pathToABRS_GH_folder = 'D:\\ABRS'
topInputFolder = 'D:\\ABRS\\Data\\3C'
modelName = 'modelConv2ABRS_3C_train_with_descendingcombinedwithothers_avi_10' # MOD12
model = keras.models.load_model(modelName)
model.summary()
outputFolderEtho = pathToABRS_GH_folder + '\\Output';
storeFrameRec = 0
normalizeByMax = 1;
thresholdMovement = 10;
subfolderList = sorted(os.listdir(topInputFolder))
startNewEtho = 1
firstFolder = 0
numbSubfolders = 4
lengthEtho = 5*50
if startNewEtho == 1:
ethoMat = np.zeros((1, lengthEtho))
probMat = np.zeros((10, lengthEtho))
ethoMovementMat = np.zeros((1, lengthEtho))
for fld in range(firstFolder, numbSubfolders, 1):
inputSubfolderName = subfolderList[fld]
print(inputSubfolderName)
inputSubfolderPathName = topInputFolder + '\\' + inputSubfolderName
print(inputSubfolderPathName)
fileList = sorted(os.listdir(inputSubfolderPathName))
numbFiles = np.shape(fileList)[0];
skipFilesNumb = 1;
skipFrameNumb = 1;
yi = np.zeros((1, 10))
yiVect = np.zeros((1, 1))
if storeFrameRec == 1:
rtImRec = np.zeros((np.shape(fileList)[0] * 50, 80, 80, 3))
predictionsProbDataRec = np.zeros((10, lengthEtho))
etho = np.zeros((1, lengthEtho))
ethoMovement = np.zeros((1, lengthEtho))
indIm = 0
for fl in range(0, numbFiles - 0, skipFilesNumb): #
inputFileName = fileList[fl];
print(inputFileName);
fileDirPathInputName = inputSubfolderPathName + '\\' + inputFileName;
with open(fileDirPathInputName, "rb") as f:
dict3C = pickle.load(f)
recIm3C = dict3C["recIm3C"]
maxMovRec = dict3C['maxMovementRec'];
for i in range(0, recIm3C.shape[0] - 0, skipFrameNumb):
im3CRaw = recIm3C[i, :, :, :] / 1
if np.count_nonzero(im3CRaw[:, :, 0]) > 5500:
im3CRaw[:, :, 0] = np.zeros((80, 80))
if np.count_nonzero(im3CRaw[:, :, 1]) > 800:
im3CRaw[:, :, 1] = np.zeros((80, 80))
rgbArray = np.zeros((80, 80, 3), 'uint8')
rgbArray[..., 0] = im3CRaw[:, :, 0]
rgbArray[..., 1] = im3CRaw[:, :, 1]
rgbArray[..., 2] = im3CRaw[:, :, 2]
im3C = Image.fromarray(rgbArray)
if storeFrameRec == 1:
rtImRec[indIm, :, :, :] = im3C
X_rs = np.zeros((1, 80, 80, 3))
X_rs[0, :, :, :] = im3C
X = X_rs / 256
predictionsProbData = model.predict(X)
predictionsProbData[0, 2] = predictionsProbData[0, 2] + 0.0
predictionsProbData[0, 5] = predictionsProbData[0, 5] + 0.0
            predictionsProbDataRec[:, indIm] = predictionsProbData[0, :]  # predict returns shape (1, 10)
predictionLabelData = np.zeros((1, np.shape(predictionsProbData)[0]))
etho[0, indIm] = np.argmax(predictionsProbData, axis=1)
ethoMovement[0, indIm] = maxMovRec[i]
if maxMovRec[i] < thresholdMovement:
# behSignal[0,indIm]=1
# print(maxMovRec[i]);print('No movement detected')
etho[0, indIm] = 7
indIm = indIm + 1
probMat = np.vstack((probMat, predictionsProbDataRec))
ethoMat = np.vstack((ethoMat, etho))
ethoMovementMat = np.vstack((ethoMovementMat, ethoMovement))
with open(outputFolderEtho + '\\ethoMatRawJaneliaWin16_MOD12.pickle', "wb") as f:
pickle.dump(ethoMat, f)
with open(outputFolderEtho +'\\ethoMovementMatJaneliaWin16.pickle', "wb") as f:
pickle.dump(ethoMovementMat, f)
with open(outputFolderEtho +'\\probMatJaneliaWin16.pickle', "wb") as f:
pickle.dump(probMat, f)
with open(outputFolderEtho +'\\subfolderList.pickle', "wb") as f:
pickle.dump(subfolderList, f)
# +
#Generate the final ethograms
ethoPath = 'D:\\ABRS\\Output'
with open(ethoPath + '\\' + 'ethoMatRawJaneliaWin16_MOD12.pickle', "rb") as f:
ethoMatLoaded = pickle.load(f)
with open(ethoPath + '\\' + 'ethoMovementMatJaneliaWin16.pickle', "rb") as f:
ethoMovementMatLoaded = pickle.load(f)
with open(ethoPath + '\\' + 'probMatJaneliaWin16.pickle', "rb") as f:
probMatLoaded = pickle.load(f)
ethoMatLoaded[0, 2] = 0;
ethoMatLoaded[0, 3] = 7
probMat = probMatLoaded
#probMat = probMat
ethoMat = ethoMatLoaded
ethoMovementMat = ethoMovementMatLoaded
ethoMatSilences = ethoMatLoaded
ethoMatSilences[ethoMovementMat < 150] = 7
ethoMatSilences[0, 2] = 0
ethoMatSilences[0, 3] = 7
# smooth the prob matrix and movement matrix
#probMatSmWhole = smooth_1d (probMat, 59) #29 for 30Hz; 59 for 60Hz
probMatSmWhole = smooth_1d (probMat, 29) #29 for 30Hz; 59 for 60Hz
#ethoMovementMatSm = smooth_1d (ethoMovementMat, 175)
ethoMovementMatSm = smooth_1d (ethoMovementMat, 89)
# construct ethogram matrix ethoMatNew from prob matrix
thWk = 0.5 # prob of walking
#thSignal = 230 #temperature project wpn1-spgal4>gtacr2
thSignal = 110 # janelia
#thSignal = 170 # variation1
thresholdFHbaseline = 0 #default should be 0; this moves the thresholds up or down
#thresholdABbaseline = 0.1 #variation1
#thresholdFHbaseline = -0.1 # default should be 0; this moves the thresholds up or down; janelia
thresholdABbaseline = -0.0
ethoMatNew = np.zeros((np.shape(ethoMat)[0], np.shape(ethoMat)[1]))
probBehDiffMatFH = np.zeros((np.shape(ethoMat)[0], np.shape(ethoMat)[1]))
indEtho = 0
for p in range(0, np.shape(probMat)[0], 10):
# print(p)
probMatSm = probMatSmWhole[p:p + 9, :]
ethoMovementMatCurrent = np.zeros((1, np.shape(probMat)[1]))
ethoMovementMatCurrent[0, :] = ethoMovementMatSm[indEtho, :]
newEthoFH = np.zeros((1, np.shape(probMat)[1]))
newEthoAB = np.zeros((1, np.shape(probMat)[1]))
newEthoW = np.zeros((1, np.shape(probMat)[1]))
newEthoWk = np.zeros((1, np.shape(probMat)[1]))
newEthoFull = np.zeros((1, np.shape(probMat)[1]))
maxEtho = np.zeros((1, np.shape(probMat)[1]))
maxEtho[0, :] = np.argmax(probMatSm[0:8, :], axis=0)
probA = np.zeros((1, np.shape(probMat)[1]))
probA[0, :] = probMatSm[1, :] + probMatSm[2, :]
probASm = smooth_1d(probA, 175)
probP = np.zeros((1, np.shape(probMat)[1]))
probP[0, :] = probMatSm[3, :] + probMatSm[4, :] + probMatSm[5, :]
probPSm = smooth_1d(probP, 175)
probWk = np.zeros((1, np.shape(probMat)[1]))
probWk[0, :] = probMatSm[6, :]
diffFH = np.zeros((1, np.shape(probMat)[1]))
diffFH[0, :] = probMatSm[1, :] - probMatSm[2, :]
thFH = smooth_1d(diffFH, 239) / 2 - thresholdFHbaseline
diffAB = np.zeros((1, np.shape(probMat)[1]))
diffAB[0, :] = probMatSm[3, :] - probMatSm[4, :]
thAB = smooth_1d(diffAB, 239) / 2 - thresholdABbaseline
newEthoFH[0, diffFH[0, :] > thFH[0, :]] = 1
newEthoFH[0, diffFH[0, :] <= thFH[0, :]] = 2
newEthoFH[0, probASm[0, :] <= probPSm[0, :]] = 0
newEthoFH[0, probWk[0, :] > thWk] = 0
newEthoAB[0, diffAB[0, :] > thAB[0, :]] = 3
newEthoAB[0, diffAB[0, :] <= thAB[0, :]] = 4
newEthoAB[0, maxEtho[0, :] == 5] = 5
newEthoAB[0, probPSm[0, :] <= probASm[0, :]] = 0
newEthoAB[0, probWk[0, :] > thWk] = 0
newEthoWk[0, probWk[0, :] > thWk] = 6
newEthoFull = newEthoFH + newEthoAB + newEthoWk
newEthoFull[0, ethoMovementMatCurrent[0, :] < thSignal] = 7
probBehDiffMatFH[indEtho, :] = diffFH
ethoMatNew[indEtho, :] = newEthoFull
indEtho = indEtho + 1
from ABRS_data_vis import cmapG
from ABRS_data_vis import cmapAP
ethoMatPlot = ethoMatNew
ethoMatPlot[0,0] = 0;ethoMatPlot[0,1] = 7
##### post process ethograms ######
import ABRS_behavior_analysis
rawEthoMat = ethoMatNew
# rawEthoMat = ethoMat
ethoMat = rawEthoMat[0:np.shape(rawEthoMat)[0], :]
shEthoMat = np.shape(ethoMat)
ethoMatPP = np.zeros((shEthoMat[0], shEthoMat[1]))
minDurWalk = 10;
minDurSilence = 5;
minDurAPW = 5;
minDurAPA = 30;
# minDurWalk=0;
# minDurSilence=0;
# minDurAPW=0;
# minDurAPA=0;
for e in range(0, shEthoMat[0]):
idx = ethoMat[[e]]
idxPP = ABRS_behavior_analysis.post_process_etho3(idx, minDurWalk, minDurSilence, minDurAPW, minDurAPA)
ethoMatPP[e, :] = idxPP
print(e)
ethoMatPPZ = ethoMatPP;
ethoMatPPZ[0, 1] = 0;
with open(outputFolderEtho +'\\etho.pickle', "wb") as f:
pickle.dump(ethoMatPPZ, f)
plt.matshow(ethoMatPPZ, interpolation=None, aspect='auto', cmap=cmapG);
plt.show()
# -
| .ipynb_checkpoints/2_batch_3C_to_etho-checkpoint.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3.7 - Tensorflow
# language: python
# name: py37_tensorflow
# ---
from IPython.core.display import display, HTML
display(HTML("<style>.container { width:100% !important; }</style>"))
# +
import numpy as np
import data_load
import train_util
import train_tl
from importlib import reload
import tensorflow.keras as keras
from tensorflow.keras import applications
from tensorflow.keras.layers import Flatten, LeakyReLU, Dense
from tensorflow.keras.models import *
from tensorflow.keras import optimizers
from tensorflow.keras.callbacks import ReduceLROnPlateau, EarlyStopping
import matplotlib.pyplot as plt
from tensorflow import keras
from tensorflow.keras import layers
import tensorflow as tf
import datetime
# training data
image_size = 224
#labels = pd.read_csv("data/fgvc7/train.csv")
image_dir = "data/leaf"
tf.random.set_seed(1234)
np.random.seed(seed=1234)
# +
from tensorflow.keras import applications
base_model = applications.EfficientNetB0(weights='imagenet', input_shape=(image_size, image_size, 3), include_top=False)
# Create new model on top.
inputs = keras.Input(shape=(image_size, image_size, 3))
x = base_model(inputs, training=False)
x = keras.layers.GlobalAveragePooling2D()(x)
x = keras.layers.Dense(512, activation='relu')(x)
x = keras.layers.Dropout(.2)(x)
outputs = keras.layers.Dense(512)(x)
model = keras.Model(inputs, outputs)
model.summary()
# -
# import training data and combine labels
data = data_load.load_unlabeled_data(image_size, image_dir)
print(len(data))
# +
warmup_epoch = 5
total_epoch = 50
lr = 0.001
temperature = 0.1
model = train_util.train(model = model, data = data, batch_size = 32, warmup_epoch = warmup_epoch, total_epoch = total_epoch, lr = lr, temperature=temperature)
# -
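`train_util.train` is a project-local module, so its loss is not shown here; in contrastive setups the `temperature` argument typically divides pairwise similarities before a softmax, and smaller values sharpen the resulting distribution. A minimal NumPy illustration of that effect (not the project's actual loss):

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())  # subtract max for numerical stability
    return e / e.sum()

sims = np.array([0.9, 0.5, 0.1])  # similarities of an anchor to three candidates
p_warm = softmax(sims / 1.0)      # temperature 1.0: relatively flat
p_cold = softmax(sims / 0.1)      # temperature 0.1: sharply peaked on the best match
print(p_warm.round(3), p_cold.round(3))
```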
wheat_data = data_load.load_unlabeled_data(image_size, image_dir = 'data/wheat')
# +
reload(train_util)
model = train_util.train(model = model, data = wheat_data, batch_size = 32, warmup_epoch = 0, total_epoch = 50, lr = lr, temperature=temperature)
# -
x2 = keras.layers.GlobalAveragePooling2D()(model.layers[1].output)
outputs2 = keras.layers.Dense(4)(x2)
model2 = keras.Model(model.layers[1].input, outputs2)  # rebinding replaces any previous model2
# +
reload(train_tl)
model_name = f'CL_{model.layers[1].name}_{datetime.datetime.now().strftime("%m%d_%H%M")}'
submission_file = f"submissions/{model_name}.csv"
model2, history = train_tl.train_tl(model2, submission_filename = submission_file)
# -
#comments = f'"CL_{model.layers[1].name}_{date.today()}"'
file = f'"{submission_file}"'
layers = [l.weights[0].shape[1] for l in model.layers[3-len(model.layers):] if type(l).__name__ == 'Dense']
comment = f'"{warmup_epoch},{total_epoch},{lr},{temperature},{layers},{len(data)}"'
# !kaggle competitions submit -f $file -m $comment plant-pathology-2020-fgvc7
# +
# !kaggle competitions submissions plant-pathology-2020-fgvc7
model2.save(f'models/{model_name}.pb')
# -
# ## Grad-Cam test
import grad_cam
grad_cam.showGradCam(model2, img_path = r"F:\notebooks\capstone\data\multipleplants\Tomato___Wilt\0.png")
grad_cam.showGradCam(model2, img_path = r"F:\notebooks\capstone\data\fgvc7\images\Test_3.jpg")
grad_cam.showGradCam(model2, img_path = r"F:\notebooks\capstone\data\fgvc7\images\Test_4.jpg")
| 4. CL Cleaned.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + active=""
# <script>
# jQuery(document).ready(function($) {
#
# $(window).load(function(){
# $('#preloader').fadeOut('slow',function(){$(this).remove();});
# });
#
# });
# </script>
#
# <style type="text/css">
# div#preloader { position: fixed;
# left: 0;
# top: 0;
# z-index: 999;
# width: 100%;
# height: 100%;
# overflow: visible;
# background: #fff url('http://preloaders.net/preloaders/720/Moving%20line.gif') no-repeat center center;
# }
# </style>
# <div id="preloader"></div>
# + active=""
# <script>
# function code_toggle() {
# if (code_shown){
# $('div.input').hide('500');
# $('#toggleButton').val('Show Code')
# } else {
# $('div.input').show('500');
# $('#toggleButton').val('Hide Code')
# }
# code_shown = !code_shown
# }
#
# $( document ).ready(function(){
# code_shown=false;
# $('div.input').hide()
# });
# </script>
# -
# <div class="header">
# <img src="cgat_logo.jpeg" alt="logo" style="float:left;width:150px; height:150px"/>
# <img src="wimm.png" alt="logo" style="float:right;width:150px; height:140px"/>
# <center><b><font size="5" color='firebrick'>CGAT BAMStats Report: for summarizing various information from BAM alignment files</font> </b></center>
# </div>
#
# <div class="footer">
# <img src="oxford.png" alt="logo" style="float:right;width:150px; height:50px"/>
# </div>
#
#
# <font size="4"> Typically in a sequencing experiment a lot of focus is directed towards ensuring that the quality control of fastq files is good. However, equally important is quality checking the mapping. Currently there are a number of cgat specific and external tools that have been developed to assess this.
# <br><br>
# The aim of this report is to collate the quality statistics generated from these tools across your bam files following mapping. The pipeline inputs a bam file and then runs the following tools:</font>
# <br><br><br>
#
#
# <table id="CGAT_REPORT">
# <tr>
# <th> Tools </th>
# <th> Description </th>
# </tr>
# <tr>
# <td>IdxStats</td>
# <td>Samtools idxstats is run and calculates
# the number of mapped and unmapped reads per contig.</td>
# </tr>
# <tr>
# <td>BamStats</td>
# <td>This is a CGAT script (bam2stats) that performs stats
# on a bam file and outputs alignment statistics</td>
# </tr>
# <tr>
# <td>PicardStats</td>
# <td>This runs the Picard CollectRnaSeqMetrics tool</td>
# </tr>
# <tr>
# <td>StrandSpec</td>
# <td>Gives a measure of the proportion of reads that map to
# each strand. It is used to work out the strandedness of the
# library if unknown</td>
# </tr>
# <tr>
# <td>nreads</td>
# <td>Calculates the number of reads in the bam file.</td>
# </tr>
# <tr>
# <td>Paired_QC</td>
# <td>This contains metrics that are only required for paired-end
# sequencing. Exon validation is performed using the cgat script
# bam_vs_gtf. Most of the statistics concern
# splicing</td>
# </tr>
# </table>
# #### Please find links below to access the report:
#
# <b> <a style="cursor:pointer;" href="CGAT_context_stats_report.html">Context Stats Report</a> </b>
#
# <b> <a style="cursor:pointer;" href="CGAT_idx_stats_report.html">Idx Stats Report</a> </b>
#
# <b> <a style="cursor:pointer;" href="CGAT_fragment_library_type_report.html">Fragment Library Type Report</a> </b>
#
# <b> <a style="cursor:pointer;" href="CGAT_picardstats_report.html">Picard Stats Report</a> </b>
#
# <b> <a style="cursor:pointer;" href="CGAT_transcript_profile_report.html">Transcript Profile Report</a> </b>
#
# <b> <a style="cursor:pointer;" href="CGAT_bamstats_report.html">Bamstats Report</a> </b>
#
# <b> <a style="cursor:pointer;" href="CGAT_exon_validation_report.html">Exon validation Report</a> </b>
#
# + active=""
# <script>
# $(document).ready(function(){
# $('div.prompt').hide();
# $('div.back-to-top').hide();
# $('nav#menubar').hide();
# $('.breadcrumb').hide();
# $('.hidden-print').hide();
# });
# </script>
#
# <footer id="attribution" style="float:right; color:#999; background:#fff;">
# Created with Jupyter, by Reshma.
# </footer>
| CGATPipelines/pipeline_docs/pipeline_bamstats/Jupyter_report/CGAT_FULL_BAM_STATS_REPORT.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] colab_type="text" id="2T1wWDPa5bO_"
# # Data formats 3 - JSON
#
# ## [Download the exercises zip](../_static/generated/formats.zip)
#
# [Browse the files online](https://github.com/DavidLeoni/softpython-it/tree/master/formats)
#
#
#
# -
# JSON is a more elaborate format, widely used in the world of web applications.
#
# A file with the `.json` extension is simply a text file, structured _as a tree_. Let's look at an example, taken from the Lavis bike-sharing stations dataset:
#
# - Data source: [dati.trentino.it](https://dati.trentino.it/dataset/stazioni-bike-sharing-emotion-trentino) - Servizio Trasporti Provincia Autonoma di Trento
# - License: [CC-BY 4.0](http://creativecommons.org/licenses/by/4.0/deed.it)
#
#
# File [bike-sharing-lavis.json](bike-sharing-lavis.json):
#
# ```json
# [
# {
# "name": "Grazioli",
# "address": "Piazza Grazioli - Lavis",
# "id": "Grazioli - Lavis",
# "bikes": 3,
# "slots": 7,
# "totalSlots": 10,
# "position": [
# 46.139732902099794,
# 11.111516155225331
# ]
# },
# {
# "name": "Pressano",
# "address": "Piazza della Croce - Pressano",
# "id": "Pressano - Lavis",
# "bikes": 2,
# "slots": 5,
# "totalSlots": 7,
# "position": [
# 46.15368174037716,
# 11.106601229430453
# ]
# },
# {
# "name": "<NAME>",
# "address": "Via Stazione - Lavis",
# "id": "Stazione RFI - Lavis",
# "bikes": 4,
# "slots": 6,
# "totalSlots": 10,
# "position": [
# 46.148180371138814,
# 11.096753997622727
# ]
# }
# ]
#
# ```
#
#
#
# As you can see, the JSON format is very similar to data structures we already have in Python, such as strings, integers, floats, lists and dictionaries. The only difference is that JSON `null` fields become `None` in Python. So converting to Python is almost always quick and painless: just use the ready-made `json` module with the `json.load` function, which parses the text from the json file and converts it into Python data structures:
# +
import json
with open('bike-sharing-lavis.json', encoding='utf-8') as f:
contenuto_python = json.load(f)
print(contenuto_python)
# -
# Note that what we read with the `json.load` function is no longer plain text but Python objects. For this json, the outermost object is a list (note the square brackets at the beginning and end of the file), and using `type` on `contenuto_python` confirms it:
type(contenuto_python)
# Looking more closely at the JSON, you will see that it is a list of dictionaries. So, to access the first dictionary (the one at index zero), we can write
contenuto_python[0]
# We see that it is the station in Piazza Grazioli. To get its exact address, we access the `'address'` key of the first dictionary:
contenuto_python[0]['address']
# To access the position, we use the corresponding key:
contenuto_python[0]['position']
# Note how it is in turn a list. In JSON, trees can branch arbitrarily, without necessarily having a regular structure (although when we generate a json ourselves it is always advisable to keep a regular schema in the data).
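The opposite direction works the same way: `json.dumps` turns Python objects into a JSON string (and `json.dump` writes straight to a file). A minimal round-trip example:

```python
import json

station = {
    "name": "Grazioli",
    "bikes": 3,
    "position": [46.1397, 11.1115],
}
text = json.dumps(station, indent=2)  # Python object -> JSON text
roundtrip = json.loads(text)          # JSON text -> Python object
print(roundtrip == station)           # → True
```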
# ### JSONL
#
# There is a particular kind of JSON file called [JSONL](http://jsonlines.org/) (note the 'L' at the end): a text file containing a sequence of lines, each representing a valid json object.
#
#
# For example, look at the file [impiegati.jsonl](impiegati.jsonl):
# ```json
# {"nome": "Mario", "cognome":"Rossi"}
# {"nome": "Paolo", "cognome":"Bianchi"}
# {"nome": "Luca", "cognome":"Verdi"}
# ```
# To read it, we can open the file, split it into text lines and then parse each one as a single JSON object
# +
import json
with open('./impiegati.jsonl', encoding='utf-8',) as f:
    lista_testi_json = list(f) # converts the lines of the text file into a Python list
# in this case we get one python object for each line of the original file
i = 0
for testo_json in lista_testi_json:
    contenuto_python = json.loads(testo_json) # converts json text into a python object
    print('Object ', i)
print(contenuto_python)
i = i + 1
# -
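Writing JSONL is symmetric to reading it: serialize each object with `json.dumps` and write one object per line. A self-contained sketch using a temporary file:

```python
import json
import os
import tempfile

employees = [{"nome": "Mario", "cognome": "Rossi"},
             {"nome": "Paolo", "cognome": "Bianchi"}]

path = os.path.join(tempfile.mkdtemp(), "out.jsonl")
with open(path, "w", encoding="utf-8") as f:
    for obj in employees:
        f.write(json.dumps(obj) + "\n")  # one JSON object per line

with open(path, encoding="utf-8") as f:
    loaded = [json.loads(line) for line in f]
print(loaded == employees)  # → True
```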
# <div class="alert alert-warning">
#
# **WARNING: this worksheet is IN PROGRESS**
# </div>
# +
# write here
#todo
# -
# ## Next steps
#
# Continue with the [challenges](https://it.softpython.org/formats/formats4-chal.html)
| formats/formats3-json-sol.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# !pip install torch==1.7.1
# # !pip install -q --user torch==1.4.0 -f
# !pip install transformers
# !pip install sentencepiece==0.1.94
# !pip install ipywidgets
# !pip install rouge
# !pip install python-meteor
# +
import pandas as pd
import numpy as np
import math
from matplotlib import pyplot as plt
import torch
import json
from transformers import T5Tokenizer, T5ForConditionalGeneration, T5Config
# load model/tokenizer and pick the device; summarize() below relies on these names
model = T5ForConditionalGeneration.from_pretrained('t5-small')
tokenizer = T5Tokenizer.from_pretrained('t5-small')
device = 'cuda' if torch.cuda.is_available() else 'cpu'
model = model.to(device)
# -
#Load dataset
WikiHow_sample_leq512 = pd.read_csv('WikiHow_sample_leq512_withsummary.csv')
WikiHow_sample_in1024 = pd.read_csv('WikiHow_sample_in1024_withsummary.csv')
WikiHow_sample_in2048 = pd.read_csv('WikiHow_sample_in2048_withsummary.csv')
# +
#Prepare the model
def preprocess(text):
preprocess_text = text.strip().replace("\n","")
t5_prepared_Text = "summarize: "+preprocess_text
return t5_prepared_Text, len(preprocess_text.split())
def summarize(t5_prepared_Text,ml):
tokenized_text = tokenizer.encode(t5_prepared_Text,max_length=2048,truncation=True, return_tensors="pt").to(device)
summary_ids = model.generate(tokenized_text,
max_length= ml,
num_beams=2,
repetition_penalty=2.5,
length_penalty=1.0,
early_stopping=True)
output = tokenizer.decode(summary_ids[0], skip_special_tokens=True)
return output
# -
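# The `preprocess` step above is plain string manipulation. Below is a minimal
# standalone sketch, re-implemented here so it runs without the notebook's
# globals. Note that dropping `"\n"` outright can glue words together across
# line breaks, mirroring the original's behavior.

```python
def t5_preprocess(text: str):
    """Strip whitespace, drop newlines, and add the T5 summarization prefix."""
    cleaned = text.strip().replace("\n", "")
    return "summarize: " + cleaned, len(cleaned.split())

prepared, n_words = t5_preprocess("  The quick brown fox\njumps over the lazy dog.  ")
```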
#This is used to split the dataset into multiple chunks ("epochs") to prevent server crashes
def get_epoch(which_epoch, epoch_amount, data_length):
    epoch_size = math.floor(data_length/epoch_amount)
    if which_epoch == epoch_amount:
        # the final chunk absorbs any remainder rows
        indices = list(range((which_epoch-1)*epoch_size, data_length))
    else:
        indices = list(range((which_epoch-1)*epoch_size, which_epoch*epoch_size))
    return indices
def get_summary(data,which_epoch, epoch_amount):
data_length = len(data)
epoch = get_epoch(which_epoch, epoch_amount, data_length)
for i,text in zip(epoch,data['text'][epoch]):
t5_prepared_Text,train_length = preprocess(text)
ml = round(0.33 * train_length)
data.loc[i,'train_length'] = train_length
data.loc[i,'summary'] = summarize(t5_prepared_Text,ml)
return data
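# `get_epoch` just slices the row range into near-equal chunks, with the final
# chunk meant to absorb the remainder. A self-contained sketch of that logic
# (re-implemented here under the assumption that the last chunk should cover
# the tail of the data):

```python
import math

def epoch_indices(which_epoch: int, epoch_amount: int, data_length: int):
    """Return the row indices belonging to one chunk of the data.
    The last chunk absorbs the remainder when data_length is not divisible."""
    epoch_size = math.floor(data_length / epoch_amount)
    if which_epoch == epoch_amount:
        return list(range((which_epoch - 1) * epoch_size, data_length))
    return list(range((which_epoch - 1) * epoch_size, which_epoch * epoch_size))

# 10 rows split into 3 chunks: sizes 3, 3 and 4 (remainder goes to the last chunk)
chunks = [epoch_indices(e, 3, 10) for e in (1, 2, 3)]
```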
epoch = get_epoch(1,8,len(WikiHow_sample_in2048))
for i,text in zip(epoch,WikiHow_sample_in2048['text'][epoch]):
t5_prepared_Text,train_length = preprocess(text)
ml = round(0.33 * train_length)
    WikiHow_sample_in2048.loc[i,'train_length'] = train_length
    WikiHow_sample_in2048.loc[i,'summary'] = summarize(t5_prepared_Text,ml)
#merge datasets (DataFrame.append is deprecated; use pd.concat)
WikiHow_sample_all = pd.concat(
    [WikiHow_sample_leq512.loc[:36200,:],
     WikiHow_sample_in1024.loc[:8939,:],
     WikiHow_sample_in2048.loc[:4503,:]],
    ignore_index=True)
#save results
WikiHow_sample_leq512.to_csv('WikiHow_sample_leq512_withsummary.csv')
WikiHow_sample_in1024.to_csv('WikiHow_sample_in1024_withsummary.csv')
WikiHow_sample_in2048.to_csv('WikiHow_sample_in2048_withsummary.csv')
WikiHow_sample_all.to_csv('WikiHow_sample_all_withsummary.csv')
| Code/T5.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import matplotlib
matplotlib.use('nbagg')
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
from Brain import Neuron, Net, GMM
from scipy.stats import multivariate_normal
from matplotlib.lines import Line2D
matplotlib.rcParams.update({'font.size': 10})
from mpl_toolkits.mplot3d import Axes3D
p = GMM([0.4,0.6], np.array([[[0.25,0.5],.08],[[0.75,0.5],0.07]]))
q1 = Neuron([1,1], np.array([[0.35]]), 0.0007, 0.025, 1, lr_decay=0.005)
q2 = Neuron([1,1], np.array([[0.2]]), np.power(0.035,2), 0.002, 1, lr_decay=0.005)
q3 = Neuron([1,1], np.array([[0.55]]), np.power(0.07,2), 0.015, 1, lr_decay=0.005)
q4 = Neuron([1,3], np.array([[0.02,0.7997,0.2]]), 0.005, 0.04, 1, lr_decay=0.001)
q5 = Neuron([1,3], np.array([[0.69,0.01,0.3]]), 0.005, 0.04, 1, lr_decay=0.001)
# +
num_samples = 1000
samples, labels = p.sample(num_samples)
num_grid_pts = 500
t1 = np.linspace(0,1.0,num_grid_pts)
t2 = np.linspace(0,1.0,num_grid_pts)
q1_hist, q2_hist, q3_hist = ([], [], [])
fig2 = plt.figure(2)
ax = fig2.add_subplot(111, projection='3d')
colors = ['orange','black']
# For plotting the 3D neurons
u = np.linspace(0, 2 * np.pi, 100)
v = np.linspace(0, np.pi, 100)
x_pts = np.outer(np.cos(u), np.sin(v))
y_pts = np.outer(np.sin(u), np.sin(v))
z_pts = np.outer(np.ones(np.size(u)), np.cos(v))
# -
for k in range(1000):
x = np.array(samples[k])
l = labels[k]
# Expose the neurons to the stimulus
q1_hist.append(q1(x[1].reshape(1,1,1)))
q2_hist.append(q2(x[0].reshape(1,1,1)))
q3_hist.append(q3(x[0].reshape(1,1,1)))
pt_3d = np.array([q2_hist[-1],q3_hist[-1],q1_hist[-1]]).reshape(1,1,3)
q4(pt_3d)
q5(pt_3d)
sb_plt = sns.JointGrid(x=samples[:,0], y=samples[:,1], xlim=(0,1), ylim=(0,1))
x_hist = sb_plt.ax_marg_x.hist(samples[:,0],density=True,bins=100,color='blue')
y_hist = sb_plt.ax_marg_y.hist(samples[:,1],density=True,bins=100,color='blue',orientation='horizontal')
c1 = sb_plt.ax_joint.scatter(samples[:,0], samples[:,1])
sb_plt.ax_joint.scatter(x[0], x[1],c='magenta')
sb_plt.ax_joint.plot([x[0],x[0]],[0,1],c='magenta',ls='dashed')
sb_plt.ax_joint.plot([0,1],[x[1],x[1]],c='magenta',ls='dashed')
sb_plt.ax_joint.set_xlabel("$x_2$")
sb_plt.ax_joint.set_xticklabels([])
sb_plt.ax_joint.set_ylabel("$x_1$")
sb_plt.ax_joint.set_yticklabels([])
q1_vals = q1(t1.reshape(num_grid_pts,1,1), update=False)
q2_vals = q2(t2.reshape(num_grid_pts,1,1), update=False)
q3_vals = q3(t2.reshape(num_grid_pts,1,1), update=False)
sb_plt.ax_marg_y.plot((q1_vals/q1_vals.max())*y_hist[0].max(),t1,c='y',label='$q_1$')
sb_plt.ax_marg_y.plot([0,y_hist[0].max()],[x[1],x[1]],c='magenta',lw=3)
sb_plt.ax_marg_x.plot(t2, (q2_vals/q2_vals.max())*x_hist[0].max(),c='g',label='$q_2$')
sb_plt.ax_marg_x.plot([x[0],x[0]],[0,x_hist[0].max()],c='magenta',lw=3)
sb_plt.ax_marg_x.plot(t2, (q3_vals/q3_vals.max())*x_hist[0].max(),c='r',label='$q_3$')
sb_plt.ax_marg_x.legend()
sb_plt.ax_marg_y.legend()
sb_plt.fig.savefig(f"figs/2d/fig{str(k).zfill(4)}.jpg")
# Make 3D plot
ax.scatter(q2_hist, q3_hist, q1_hist, c=[colors[li] for li in labels[:k+1]])
# find the rotation matrix and radii of the axes
U, q4_s, q4_r = np.linalg.svd(np.diag(q4.get_bias()))
q4_radii = np.sqrt(q4_s)
q4_center = q4.get_weights()[0]
q4_x = q4_radii[0] * x_pts
q4_y = q4_radii[1] * y_pts
q4_z = q4_radii[2] * z_pts
U, q5_s, q5_r = np.linalg.svd(np.diag(q5.get_bias()))
q5_radii = np.sqrt(q5_s)
q5_center = q5.get_weights()[0]
q5_x = q5_radii[0] * x_pts
q5_y = q5_radii[1] * y_pts
q5_z = q5_radii[2] * z_pts
# Rotate and Translate data points
for i in range(len(q4_x)):
for j in range(len(q4_x)):
[q4_x[i,j],q4_y[i,j],q4_z[i,j]] = np.dot([q4_x[i,j],q4_y[i,j],q4_z[i,j]], q4_r) + q4_center
[q5_x[i,j],q5_y[i,j],q5_z[i,j]] = np.dot([q5_x[i,j],q5_y[i,j],q5_z[i,j]], q5_r) + q5_center
# Plot the 3D Gaussians
ax.plot_wireframe(q4_x, q4_y, q4_z, color='magenta', rcount=10, ccount=10)
ax.plot_wireframe(q5_x, q5_y, q5_z, color='cyan', rcount=10, ccount=10)
#ax.plot_wireframe(x_pts, y_pts, z_pts, color='b',rcount=20,ccount=20)
# Plotting configuration
ax.set_xlabel('$q_2(x)$')
ax.set_ylabel('$q_3(x)$')
ax.set_zlabel('$q_1(x)$')
ax.set_xlim3d([0,1])
ax.set_ylim3d([0,1])
ax.set_zlim3d([0,1])
ax.view_init(azim=(45+2*k)%360)
fig2.savefig(f"figs/3d/fig{str(k).zfill(4)}.jpg")
ax.cla()
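# The wireframe ellipsoids in the loop above are built by scaling a unit sphere
# by the square roots of the singular values and rotating by the right singular
# vectors. A small standalone check with a diagonal matrix (values made up for
# illustration):

```python
import numpy as np

# For a diagonal matrix the SVD is trivial: the radii are the square roots of
# the (descending-sorted) diagonal entries and the rotation is axis-aligned.
cov = np.diag([4.0, 9.0, 1.0])
_, s, rot = np.linalg.svd(cov)
radii = np.sqrt(s)

# A point on the unit sphere maps onto the ellipsoid surface via scale-then-rotate;
# an orthogonal rotation preserves the scaled point's norm.
unit_pt = np.array([1.0, 0.0, 0.0])
ellipsoid_pt = (radii * unit_pt) @ rot
```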
| dissertation/experiments/2019-06-16-17.15/.ipynb_checkpoints/2D_example-checkpoint.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ! pwd
import sys
sys.path.append("/Users/niarfe/tmprepos/hydra_inc/hydraseq")
import hydraseq
from hydraseq import Hydraseq
from hydraseq.columns import run_convolutions
sentence = "the quick brown fox jumped over the lazy dog"
# +
hdq1 = hydraseq.Hydraseq('one')
for pattern in [
"the _ART",
"quick _ADJ",
"brown _ADJ",
"fox _NOU",
"jumped _VER",
"over _PRO",
"lazy _ADJ",
"dog _NOU"
]:
hdq1.insert(pattern)
hdq2 = hydraseq.Hydraseq('two')
for pattern in [
"_NOU *NP*",
"_ADJ _NOU *NP*",
"_VER *VP*",
"_ADV _VER *VP*",
"_ART _NOU *NP*",
"_ART _ADJ _ADJ _NOU *NP*",
]:
hdq2.insert(pattern)
hdq3 = hydraseq.Hydraseq('three')
for pattern in [
"*NP* *VP* #BINGO#"
]:
hdq3.insert(pattern)
# -
hdq1.look_ahead('the')
convos = run_convolutions(sentence.split(), hdq1, "_")
#print(convos)
encoded = list(map(lambda x: x[2], convos))
for idx, tup in enumerate(zip(sentence.split(), convos)):
print(idx, tup)
convos = run_convolutions(list(encoded), hdq2, "*")
#print(convos)
encoded = list(map(lambda x: x[2], convos))
words = sentence.split()
for idx, convo in enumerate(convos):
print(idx, words[convo[0]:convo[1]], convo)
convos = run_convolutions(encoded, hdq3, "#")
print(convos)
| notebooks/lstm.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# Machine Learning is a huge and growing area. In this chapter, we cannot
# possibly even survey this area, but we can provide some context and some
# connections to probability and statistics that should make it easier to think
# about machine learning and how to apply these methods to real-world problems.
# The fundamental problem of statistics is basically the same as machine
# learning: given some data, how to make it actionable? For statistics, the
# answer is to construct analytic estimators using powerful theory. For machine
# learning, the answer is algorithmic prediction. Given a data set, what
# forward-looking inferences can we draw? There is a subtle bit in this
# description: how can we know the future if all we have is data about the past?
# This is the crux of the matter for machine learning, as we will explore in the
# chapter.
| chapter/machine_learning/intro.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Obtaining application / publication numbers from the Japan Patent Office
#
# **Version**: Dec 17 2020
#
# Data acquisition using Selenium and Python.
from selenium import webdriver
from selenium.webdriver.chrome.options import Options
from selenium.common.exceptions import NoSuchElementException
# +
# set chrome options
chrome_options = Options()
chrome_options.add_argument('--headless')
chrome_options.add_argument('--no-sandbox')
chrome_options.add_argument('--disable-dev-shm-usage')
# create a chrome webdriver
driver = webdriver.Chrome('/usr/bin/chromedriver', options=chrome_options)
# -
# ## Navigate to the Japan Patent Office search page.
driver.get("https://www.j-platpat.inpit.go.jp/p0000")
# ## Change the language from Japanese to English.
# click on the English link and make sure language changed
driver.find_element_by_link_text("English").click()
language_elem = driver.find_element_by_id("cfc001_header_lnkLangChange")
print(language_elem.text)
if language_elem.text != "Japanese":
print("error in changing language from Japanese to English")
# ## Try to select on the "Number type" drop-down menu.
#
# I first tried this using the Selenium `Select` object, but this webpage uses `<mat-select>` tags, which are incompatible with it.
# +
from selenium.webdriver.support.ui import Select
select = Select(driver.find_element_by_id("p00_srchCondtn_selDocNoInputType0"))
# select by visible text
select.select_by_visible_text('Patent application number')
# select by value
select.select_by_value('1')
# -
# ## Click on the drop-down arrow on the "Number type" menu.
driver.find_element_by_xpath('//*[@id="p00_srchCondtn_selDocNoInputType0"]/div/div[2]').click()
# ## Check what the "Number type" value is.
# If the request type is the same as this value, we don't need to alter it.
inputElement = driver.find_element_by_id("p00_srchCondtn_selDocNoInputType0")
inputElement.text
# ## Let's try to click on another option for publication number.
# This approach, looking for the element by its xpath, doesn't seem to work.
driver.find_element_by_xpath('//*[@id="mat-option-13"]/span').click()
# ## Instead of using the xpath identifier, search for the specific `mat-option` text to find the selection to click.
# https://stackoverflow.com/a/54117155
driver.find_element_by_xpath("//mat-option/span[contains(.,'Publication number of Japanese translation of PCT international application (A)')]").click()
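# The text-based lookup works because `contains(.)` matches on the rendered
# label rather than a brittle auto-generated id. A tiny hypothetical helper
# (the function name is mine, and it assumes the option text contains no
# apostrophes) that builds such an XPath:

```python
def mat_option_xpath(option_text: str) -> str:
    """Build an XPath that matches a <mat-option> by its visible label text,
    since Selenium's Select helper only works with native <select> elements."""
    return f"//mat-option/span[contains(.,'{option_text}')]"

xpath = mat_option_xpath("Patent application number")
```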
# ## Check this by re-calling the value of the "Number type" menu as done earlier.
inputElement = driver.find_element_by_id("p00_srchCondtn_selDocNoInputType0")
inputElement.text
# ## Alternatively, take a screenshot of the element to see that the selection was updated.
#
# You can also take a full screen screenshot by the following:
# ```
# driver.get_screenshot_as_file("screenshot.png")
# ```
#
# Note: You may need to change the webpage of the driver to see the full screen. Do this in the browser options before creating the driver.
driver.find_element_by_xpath('//*[@id="p00_srchCondtn_selDocNoInputType0"]/div/div[2]').screenshot("element.png")
driver.save_screenshot("test.png")
# ## Change the "Number type" back to "Patent application number".
driver.find_element_by_xpath('//*[@id="p00_srchCondtn_selDocNoInputType0"]/div/div[2]').click()
driver.find_element_by_xpath("//mat-option/span[contains(.,'Patent application number')]").click()
inputElement = driver.find_element_by_id("p00_srchCondtn_selDocNoInputType0")
inputElement.text
# ## Enter in the publication number to search.
inputElement = driver.find_element_by_id("p00_srchCondtn_txtDocNoInputNo0")
inputElement.send_keys('2001-531855')
driver.get_screenshot_as_file("test.png")
# ## Click the search button.
# Note that on this webpage hitting the Enter button does not trigger search. If it did, one could try:
# ```
# inputElement.send_keys(Keys.ENTER)
# ```
driver.find_element_by_xpath('//*[@id="p00_searchBtn_btnDocInquiry"]').click()
driver.get_screenshot_as_file("test.png")
# ## Now we're on the results page. Pull out the application and publication numbers.
# The page has updated, but the URL is the same. Before moving on, we should check that a result actually shows up. Assuming it does, however:
appno = driver.find_element_by_xpath('//*[@id="patentUtltyIntnlNumOnlyLst_tableView_appNum0"]/label').text
appno
pubno = driver.find_element_by_xpath('//*[@id="patentUtltyIntnlNumOnlyLst_tableView_publicNumArea0"]/a').text
pubno
# ## That gives us the desired information. We could go even further and click the publication number link.
#
# It opens up a new window: `https://www.j-platpat.inpit.go.jp/p0200`. There, we could extract more information about this patent application. Opening the "Bibliography" section gives the following:
#
# ```
# (19) [Publication country] Japan Patent Office (JP)
# (12) [Kind of official gazette] Published Japanese translations of PCT international publication for patent applications (A)
# (11) [Publication number of Japanese translation of PCT international application] JP 2003 - 512037A (P2003-512037A)
# (43) [Publication date of Japanese translation of PCT international application] Heisei 15(2003) April 2 (2003.4.2)
# (54) [Title of the invention] Antisense modulation of Bcl - 6 expression
# (51) [International Patent Classification 7th Edition]
# C12N 15/09 ZNA
# A61K 31/7088
# 48/00
# A61P 35/00
# C12N 5/06
# [FI]
# A61K 31/7088
# 48/00
# A61P 35/00
# C12N 15/00 ZNA A
# 5/00 E
# [Request for examination] Y
# [Request for preliminary examination] Y
# [Total number of pages] 104
# (21) [Application number] Japanese Patent Application No. 2001-531855 (P2001-531855)
# (86)(22)[Filing date]Heisei 12(2000) October 11 (2000.10.11)
# (85) [Submission date of translated text] Heisei 14(2002) April 11 (2002.4.11)
# (86) [International application number] PCT/US00/27963
# (87) [International publication number] WO01/029057
# (87) [International publication date] Heisei 13(2001) April 26 (2001.4.26)
# (31) [Application number of the priority] 09/418,640
# (32) [Priority date] Heisei 11(1999) October 15 (1999.10.15)
# (33) [Priority claim country] U.S. (US)
# (81) [Designated country/region] EP (AT, BE, CH, CY, DE, DK, ES, FI, FR, GB, GR, IE, IT, LU, MC, NL, PT, SE), OA (BF, BJ, CF, CG, CI, CM, GA, GN, GW, ML, MR, NE, SN, TD, TG), AP (GH, GM, KE, LS, MW, MZ, SD, SL, SZ, TZ, UG, ZW), EA (AM, AZ, BY, KG, KZ, MD, RU, TJ, TM), AE, AG, AL, AM, AT, AU, AZ, BA, BB, BG, BR, BY, BZ, CA, CH, CN, CR, CU, CZ, DE, DK, DM, DZ, EE, ES, FI, GB, GD, GE, GH, GM, HR, HU, ID, IL, IN, IS, JP, KE, KG, KP, KR, KZ, LC, LK, LR, LS, LT, LU, LV, MA, MD, MG, MK, MN, MW, MX, MZ, NO, NZ, PL, PT, RO, RU, SD, SE, SG, SI, SK, SL, TJ, TM, TR, TT, TZ, UA, UG, US, UZ, VN, YU,ZA,ZW
# (71) [Applicant]
# [Name]The <NAME> <NAME>uce Inc.
# [Name (in original language)]Isis PharmaceuticalsInc
# (72) [Inventor]
# [Name]Taylor, <NAME>
# (72) [Inventor]
# [Name]Cow ***, REXX em
# (74) [Representative]
# [Patent attorney]
# [Name] <NAME> (and 5 others)
# [Theme code (reference)]
# 4B024
# 4B065
# 4C084
# 4C086
# [F-term (reference) ]
# 4B024 AA01 CA05 HA12
# 4B065 AA93X BB14 BD35 CA46
# 4C084 AA13 MA01 NA14 ZB212 ZB262
# 4C086 AA01 AA02 AA03 EA16 MA01 MA04 NA14 ZB21 ZB26
# ```
| jpo/explore_jpo_data.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # SWELL-KW GRU
# Adapted from Microsoft's notebooks, available at https://github.com/microsoft/EdgeML authored by Dennis et al.
import pandas as pd
import numpy as np
from tabulate import tabulate
import os
import datetime as datetime
import pickle as pkl
import pathlib
# +
from __future__ import print_function
import os
import sys
import tensorflow as tf
import numpy as np
# Making sure edgeml is part of python path
sys.path.insert(0, '../../')
# Select GPU 0; set to '' instead to force CPU-only processing.
os.environ['CUDA_VISIBLE_DEVICES'] ='0'
np.random.seed(42)
tf.set_random_seed(42)
# MI-RNN and EMI-RNN imports
from edgeml.graph.rnn import EMI_DataPipeline
from edgeml.graph.rnn import EMI_GRU
from edgeml.trainer.emirnnTrainer import EMI_Trainer, EMI_Driver
import edgeml.utils
import keras.backend as K
cfg = K.tf.ConfigProto()
cfg.gpu_options.allow_growth = True
K.set_session(K.tf.Session(config=cfg))
# +
# Network parameters for our GRU + FC layer
NUM_HIDDEN = 128
NUM_TIMESTEPS = 8
ORIGINAL_NUM_TIMESTEPS = 20
NUM_FEATS = 22
FORGET_BIAS = 1.0
NUM_OUTPUT = 2
USE_DROPOUT = True
KEEP_PROB = 0.75
# For dataset API
PREFETCH_NUM = 5
BATCH_SIZE = 32
# Number of epochs in *one iteration*
NUM_EPOCHS = 2
# Number of iterations in *one round*. After each iteration,
# the model is dumped to disk. At the end of the current
# round, the best model among all the dumped models in the
# current round is picked up.
NUM_ITER = 4
# A round consists of multiple training iterations and a belief
# update step using the best model from all of these iterations
NUM_ROUNDS = 30
LEARNING_RATE=0.001
# A staging directory to store models
MODEL_PREFIX = '/home/sf/data/SWELL-KW/models/GRU/model-gru'
# + [markdown] heading_collapsed=true
# # Loading Data
# + hidden=true
# Loading the data
x_train, y_train = np.load('/home/sf/data/SWELL-KW/8_3/x_train.npy'), np.load('/home/sf/data/SWELL-KW/8_3/y_train.npy')
x_test, y_test = np.load('/home/sf/data/SWELL-KW/8_3/x_test.npy'), np.load('/home/sf/data/SWELL-KW/8_3/y_test.npy')
x_val, y_val = np.load('/home/sf/data/SWELL-KW/8_3/x_val.npy'), np.load('/home/sf/data/SWELL-KW/8_3/y_val.npy')
# BAG_TEST, BAG_TRAIN, BAG_VAL represent bag_level labels. These are used for the label update
# step of EMI/MI RNN
BAG_TEST = np.argmax(y_test[:, 0, :], axis=1)
BAG_TRAIN = np.argmax(y_train[:, 0, :], axis=1)
BAG_VAL = np.argmax(y_val[:, 0, :], axis=1)
NUM_SUBINSTANCE = x_train.shape[1]
print("x_train shape is:", x_train.shape)
print("y_train shape is:", y_train.shape)
print("x_val shape is:", x_val.shape)
print("y_val shape is:", y_val.shape)
# -
# # Computation Graph
# +
# Define the linear secondary classifier
def createExtendedGraph(self, baseOutput, *args, **kwargs):
W1 = tf.Variable(np.random.normal(size=[NUM_HIDDEN, NUM_OUTPUT]).astype('float32'), name='W1')
B1 = tf.Variable(np.random.normal(size=[NUM_OUTPUT]).astype('float32'), name='B1')
y_cap = tf.add(tf.tensordot(baseOutput, W1, axes=1), B1, name='y_cap_tata')
self.output = y_cap
self.graphCreated = True
def restoreExtendedGraph(self, graph, *args, **kwargs):
y_cap = graph.get_tensor_by_name('y_cap_tata:0')
self.output = y_cap
self.graphCreated = True
def feedDictFunc(self, keep_prob=None, inference=False, **kwargs):
if inference is False:
feedDict = {self._emiGraph.keep_prob: keep_prob}
else:
feedDict = {self._emiGraph.keep_prob: 1.0}
return feedDict
EMI_GRU._createExtendedGraph = createExtendedGraph
EMI_GRU._restoreExtendedGraph = restoreExtendedGraph
if USE_DROPOUT is True:
EMI_Driver.feedDictFunc = feedDictFunc
# -
inputPipeline = EMI_DataPipeline(NUM_SUBINSTANCE, NUM_TIMESTEPS, NUM_FEATS, NUM_OUTPUT)
emiGRU = EMI_GRU(NUM_SUBINSTANCE, NUM_HIDDEN, NUM_TIMESTEPS, NUM_FEATS,
useDropout=USE_DROPOUT)
emiTrainer = EMI_Trainer(NUM_TIMESTEPS, NUM_OUTPUT, lossType='xentropy',
stepSize=LEARNING_RATE)
tf.reset_default_graph()
g1 = tf.Graph()
with g1.as_default():
# Obtain the iterators to each batch of the data
x_batch, y_batch = inputPipeline()
# Create the forward computation graph based on the iterators
y_cap = emiGRU(x_batch)
# Create loss graphs and training routines
emiTrainer(y_cap, y_batch)
# # EMI Driver
# +
with g1.as_default():
emiDriver = EMI_Driver(inputPipeline, emiGRU, emiTrainer)
emiDriver.initializeSession(g1)
y_updated, modelStats = emiDriver.run(numClasses=NUM_OUTPUT, x_train=x_train,
y_train=y_train, bag_train=BAG_TRAIN,
x_val=x_val, y_val=y_val, bag_val=BAG_VAL,
numIter=NUM_ITER, keep_prob=KEEP_PROB,
numRounds=NUM_ROUNDS, batchSize=BATCH_SIZE,
numEpochs=NUM_EPOCHS, modelPrefix=MODEL_PREFIX,
fracEMI=0.5, updatePolicy='top-k', k=1)
# -
# # Evaluating the trained model
# +
# Early Prediction Policy: We make an early prediction based on the predicted classes
# probability. If the predicted class probability > minProb at some step, we make
# a prediction at that step.
def earlyPolicy_minProb(instanceOut, minProb, **kwargs):
assert instanceOut.ndim == 2
classes = np.argmax(instanceOut, axis=1)
prob = np.max(instanceOut, axis=1)
index = np.where(prob >= minProb)[0]
if len(index) == 0:
assert (len(instanceOut) - 1) == (len(classes) - 1)
return classes[-1], len(instanceOut) - 1
index = index[0]
return classes[index], index
def getEarlySaving(predictionStep, numTimeSteps, returnTotal=False):
predictionStep = predictionStep + 1
predictionStep = np.reshape(predictionStep, -1)
totalSteps = np.sum(predictionStep)
maxSteps = len(predictionStep) * numTimeSteps
savings = 1.0 - (totalSteps / maxSteps)
if returnTotal:
return savings, totalSteps
return savings
# -
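# The savings arithmetic below can be checked standalone. This sketch
# re-implements `getEarlySaving` and the compounding formula, using the
# notebook's 8-of-20 timestep setting and made-up prediction steps:

```python
import numpy as np

def early_savings(prediction_step, num_timesteps):
    """Fraction of timesteps skipped thanks to early prediction
    (prediction_step is 0-indexed, hence the +1)."""
    steps_used = np.reshape(prediction_step, -1) + 1
    return 1.0 - steps_used.sum() / (len(steps_used) * num_timesteps)

# Three instances predicted at (0-indexed) steps 3, 7 and 1 out of 8 timesteps
emi = early_savings(np.array([3, 7, 1]), 8)   # 1 - 14/24
mi = 1 - 8 / 20                               # MI-RNN: 8 of the original 20 steps
total = mi + (1 - mi) * emi                   # compounded savings
```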
k = 2
predictions, predictionStep = emiDriver.getInstancePredictions(x_test, y_test, earlyPolicy_minProb,
minProb=0.99, keep_prob=1.0)
bagPredictions = emiDriver.getBagPredictions(predictions, minSubsequenceLen=k, numClass=NUM_OUTPUT)
print('Accuracy at k = %d: %f' % (k, np.mean((bagPredictions == BAG_TEST).astype(int))))
mi_savings = (1 - NUM_TIMESTEPS / ORIGINAL_NUM_TIMESTEPS)
emi_savings = getEarlySaving(predictionStep, NUM_TIMESTEPS)
total_savings = mi_savings + (1 - mi_savings) * emi_savings
print('Savings due to MI-RNN : %f' % mi_savings)
print('Savings due to Early prediction: %f' % emi_savings)
print('Total Savings: %f' % (total_savings))
# A slightly more detailed analysis method is provided.
df = emiDriver.analyseModel(predictions, BAG_TEST, NUM_SUBINSTANCE, NUM_OUTPUT)
# ## Picking the best model
devnull = open(os.devnull, 'r')
for val in modelStats:
round_, acc, modelPrefix, globalStep = val
emiDriver.loadSavedGraphToNewSession(modelPrefix, globalStep, redirFile=devnull)
predictions, predictionStep = emiDriver.getInstancePredictions(x_test, y_test, earlyPolicy_minProb,
minProb=0.99, keep_prob=1.0)
bagPredictions = emiDriver.getBagPredictions(predictions, minSubsequenceLen=k, numClass=NUM_OUTPUT)
print("Round: %2d, Validation accuracy: %.4f" % (round_, acc), end='')
    print(', Test Accuracy (k = %d): %f, ' % (k, np.mean((bagPredictions == BAG_TEST).astype(int))), end='')
mi_savings = (1 - NUM_TIMESTEPS / ORIGINAL_NUM_TIMESTEPS)
emi_savings = getEarlySaving(predictionStep, NUM_TIMESTEPS)
total_savings = mi_savings + (1 - mi_savings) * emi_savings
print("Total Savings: %f" % total_savings)
# Record the hyperparameters actually used in this run alongside the results.
params = {
    "NUM_HIDDEN" : NUM_HIDDEN,
    "NUM_TIMESTEPS" : NUM_TIMESTEPS, # subinstance length
    "ORIGINAL_NUM_TIMESTEPS" : ORIGINAL_NUM_TIMESTEPS,
    "NUM_FEATS" : NUM_FEATS,
    "FORGET_BIAS" : FORGET_BIAS,
    "NUM_OUTPUT" : NUM_OUTPUT,
    "USE_DROPOUT" : USE_DROPOUT,
    "KEEP_PROB" : KEEP_PROB,
    "PREFETCH_NUM" : PREFETCH_NUM,
    "BATCH_SIZE" : BATCH_SIZE,
    "NUM_EPOCHS" : NUM_EPOCHS,
    "NUM_ITER" : NUM_ITER,
    "NUM_ROUNDS" : NUM_ROUNDS,
    "LEARNING_RATE" : LEARNING_RATE,
    "MODEL_PREFIX" : MODEL_PREFIX
}
# +
gru_dict = {**params}
gru_dict["k"] = k
gru_dict["accuracy"] = np.mean((bagPredictions == BAG_TEST).astype(int))
gru_dict["total_savings"] = total_savings
gru_dict["y_test"] = BAG_TEST
gru_dict["y_pred"] = bagPredictions
# A slightly more detailed analysis method is provided.
df = emiDriver.analyseModel(predictions, BAG_TEST, NUM_SUBINSTANCE, NUM_OUTPUT)
print (tabulate(df, headers=list(df.columns), tablefmt='grid'))
# +
dirname = "/home/sf/data/SWELL/GRU/"
pathlib.Path(dirname).mkdir(parents=True, exist_ok=True)
print ("Results for this run have been saved at" , dirname, ".")
now = datetime.datetime.now()
filename = list((str(now.year),"-",str(now.month),"-",str(now.day),"|",str(now.hour),"-",str(now.minute)))
filename = ''.join(filename)
#Save the dictionary containing the params and the results.
pkl.dump(gru_dict,open(dirname + filename + ".pkl",mode='wb'))
# -
dirname+filename+'.pkl'
| SWELL-KW/SWELL-KW_GRU.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: 'Python 3.7.3 64-bit (''base'': conda)'
# name: python3
# ---
import pandas as pd
import os
import nltk
import re
import sklearn
import numpy as np
from sklearn.preprocessing import LabelEncoder
from nltk.tokenize import RegexpTokenizer
from nltk.stem import WordNetLemmatizer
from nltk.corpus import stopwords
import re
# +
def normalize_document(doc):
    # lower case and remove special characters/whitespace
    # (flags must be passed by keyword; positionally they land in re.sub's count argument)
    doc = re.sub(r'[^a-zA-Z\s]', '', doc, flags=re.I|re.A)
doc = doc.lower()
doc = doc.strip()
# tokenize document
tokens = wpt.tokenize(doc)
# filter stopwords out of document
filtered_tokens = [token for token in tokens if token not in stop_words]
# re-create document from filtered tokens
doc = ' '.join(filtered_tokens)
return doc
def processing_text(series_to_process):
new_list = []
tokenizer = RegexpTokenizer(r'(\w+)')
lemmatizer = WordNetLemmatizer()
for i in range(len(series_to_process)):
#TOKENISED ITEM(LONG STRING) IN A LIST
dirty_string = (series_to_process)[i].lower()
words_only = tokenizer.tokenize(dirty_string) #WORDS_ONLY IS A LIST THAT DOESN'T HAVE PUNCTUATION
#LEMMATISE THE ITEMS IN WORDS_ONLY
words_only_lem = [lemmatizer.lemmatize(i) for i in words_only]
#REMOVING STOP WORDS FROM THE LEMMATIZED LIST
words_without_stop = [i for i in words_only_lem if i not in stopwords.words("english")]
        #RETURN SEPARATED WORDS INTO LONG STRING
long_string_clean = " ".join(word for word in words_without_stop)
new_list.append(long_string_clean)
return new_list
def processing_label(series_to_process):
new_list = []
for i in range(len(series_to_process)):
if series_to_process[i] in ("suicide", "depression"):
new_list.append(1)
else:
new_list.append(0)
return new_list
# -
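# As a self-contained sketch of the normalization step (using a tiny stand-in
# stopword list rather than NLTK's, so it runs without any downloads):

```python
import re

STOP_WORDS = {"the", "a", "is", "of"}   # tiny stand-in for nltk's English stopword list

def normalize(doc: str) -> str:
    # keep letters and whitespace only, lowercase, then drop stopwords
    doc = re.sub(r"[^a-zA-Z\s]", "", doc).lower().strip()
    tokens = doc.split()
    return " ".join(t for t in tokens if t not in STOP_WORDS)

cleaned = normalize("The rain, in Spain!")
```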
class LabelEncoderExt(object):
def __init__(self):
"""
It differs from LabelEncoder by handling new classes and providing a value for it [Unknown]
Unknown will be added in fit and transform will take care of new item. It gives unknown class id
"""
self.label_encoder = LabelEncoder()
# self.classes_ = self.label_encoder.classes_
def fit(self, data_list):
"""
This will fit the encoder for all the unique values and introduce unknown value
:param data_list: A list of string
:return: self
"""
self.label_encoder = self.label_encoder.fit(list(data_list) + ['Unknown'])
self.classes_ = self.label_encoder.classes_
return self
def transform(self, data_list):
"""
This will transform the data_list to id list where the new values get assigned to Unknown class
:param data_list:
:return:
"""
new_data_list = list(data_list)
for unique_item in np.unique(data_list):
if unique_item not in self.label_encoder.classes_:
new_data_list = ['Unknown' if x==unique_item else x for x in new_data_list]
return self.label_encoder.transform(new_data_list)
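# The behavior of `LabelEncoderExt` can be sketched with a plain dict: classes
# are frozen at fit time and anything unseen maps to the `'Unknown'` id. This
# stand-in (the class name is mine) mirrors that contract without sklearn:

```python
class SimpleEncoderExt:
    """Dict-based sketch of an unknown-safe label encoder."""
    def fit(self, values):
        # 'Unknown' is always part of the class list, exactly as in fit() above
        self.classes_ = sorted(set(values) | {"Unknown"})
        self._index = {c: i for i, c in enumerate(self.classes_)}
        return self
    def transform(self, values):
        unk = self._index["Unknown"]
        return [self._index.get(v, unk) for v in values]

enc = SimpleEncoderExt().fit(["cat", "dog"])
codes = enc.transform(["dog", "bird"])   # "bird" was never seen -> Unknown id
```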
# + tags=[]
wpt = nltk.WordPunctTokenizer()
stop_words = nltk.corpus.stopwords.words('english')
### Consolidate the Depression and Suicide files into one dataframe with 2 columns (label, text),
### ensuring a 50/50 breakdown between the two.
### A third category, Recovery, is appended afterwards.
df = pd.DataFrame(columns=["label", "text"])
i = 0
for file in os.listdir("./data/slighly less uncleaned/Suicide"):
text = open(("data/slighly less uncleaned/Suicide/" + file)).read()
df.loc[i] = ["suicide"] + [text]
i += 1
j = 0
while j < i:
file = os.listdir("./data/slighly less uncleaned/Depression")[j]
print(file)
text = open(("data/slighly less uncleaned/Depression/" + file)).read()
df.loc[i + j] = ["depression"] + [text]
j += 1
k = 0
while k < j:
file = os.listdir("./data/slighly less uncleaned/Recovery")[k]
print(file)
text = open(("data/slighly less uncleaned/Recovery/" + file)).read()
df.loc[i + j + k] = ["recovery"] + [text]
k += 1
## We know that this dataset is collected from actual surveys of participants.
## thus we can safely disregard syntactic or semantic structures of the text.
text_values = df["text"]
print(df)
## we do some simple normalization of the text data.
i = 0
while i < 3*j:
    text = df.loc[i, 'text']
    text = normalize_document(text)
    df.loc[i, 'text'] = text
    i += 1
df["text_clean"] = processing_text(df["text"])
# -
# Credits to: https://github.com/hesamuel/goodbye_world/blob/master/code/03_Modelling.ipynb
# +
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
pd.set_option('display.max_columns', 100)
sns.set_style("white")
from sklearn.feature_extraction.text import CountVectorizer, HashingVectorizer, TfidfVectorizer
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split, GridSearchCV
from sklearn.pipeline import Pipeline
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.naive_bayes import MultinomialNB
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import confusion_matrix, classification_report, accuracy_score, roc_auc_score
# import warnings filter
from warnings import simplefilter
# ignore all future warnings
simplefilter(action='ignore', category=FutureWarning)
# + tags=["outputPrepend"]
df["is_bad"] = processing_label(df["label"])
print(df)
# -
# ## Baseline
df["is_bad"].mean()
# ## Finding a Production Model
# +
# DEFINING A FUNCTION THAT WILL RUN MULTIPLE MODELS AND GRIDSEARCH FOR BEST PARAMETERS
def gridsearch_multi(steps_titles, steps_list, pipe_params):
#DEFINING X and y
X = df["text_clean"]
y = df['is_bad']
#TRAIN-TEST SPLIT
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=42, stratify=y)
# DATAFRAME TO DISPLAY RESULTS
gs_results = pd.DataFrame(columns=['model','AUC Score', 'precision', 'recall (sensitivity)',
'best_params', 'best score', 'confusion matrix',
'train_accuracy','test_accuracy','baseline_accuracy',
'specificity', 'f1-score'])
# FOR LOOP THROUGH STEPS LIST
for i in range(len(steps_list)):
# INSTATIATE PIPELINE
pipe = Pipeline(steps=steps_list[i])
# INSTANTIATE GRIDSEARCHCV WITH PARAMETER ARGUMENT
gs = GridSearchCV(pipe, pipe_params[i], cv=3)
gs.fit(X_train, y_train)
#GETTING PREDICTIONS FROM MODEL
pred = gs.predict(X_test)
# DEFINE CONFUSION MATRIX ELEMENTS
tn, fp, fn, tp = confusion_matrix(y_test, gs.predict(X_test)).ravel()
#CREATING A DICTIONARY FROM THE CLASSIFICATION REPORT(WE'LL DRAW SOME METRICS FROM HERE)
classi_dict = (classification_report(y_test,pred, output_dict=True))
#CALCULATING AREA UNDER THE CURVE
gs.predict_proba(X_test)
pred_proba = [i[1] for i in gs.predict_proba(X_test)]
auc = roc_auc_score(y_test, pred_proba)
#DEFINE DATAFRAME COLUMNS
model_results = {}
model_results['model'] = steps_titles[i]
model_results['AUC Score'] = auc
model_results['precision']= classi_dict['weighted avg']['precision']
model_results['recall (sensitivity)']= classi_dict['weighted avg']['recall']
model_results['best params'] = gs.best_params_
model_results['best score'] = gs.best_score_
model_results['confusion matrix']={"TP": tp,"FP":fp, "TN": tn, "FN": fn}
model_results['train accuracy'] = gs.score(X_train, y_train)
model_results['test accuracy'] = gs.score(X_test, y_test)
model_results['baseline accuracy'] = 2/3
model_results['specificity']= tn/(tn+fp)
model_results['f1-score']= classi_dict['weighted avg']['f1-score']
#APPEND RESULTS TO A NICE DATAFRAME
df_list.append(model_results)
pd.set_option("display.max_colwidth", 200)
return (pd.DataFrame(df_list)).round(2)
# +
#USING THE FUNCTION WITH COUNT VECTORIZER
# EMPTY LIST THAT WILL HOLD RESULTS
df_list=[]
# LIST OF MODELS
steps_titles = ['cvec+ multi_nb','cvec + ss + knn','cvec + ss + logreg']
# CODE FOR PIPELINE TO INSTANTIATE MODELS
steps_list = [
[('cv', CountVectorizer()),('multi_nb', MultinomialNB())],
[('cv', CountVectorizer()),('scaler', StandardScaler(with_mean=False)),('knn', KNeighborsClassifier())],
[('cv', CountVectorizer()),('scaler', StandardScaler(with_mean=False)),('logreg', LogisticRegression())]
]
# LIST OF PARAMETER DICTIONARIES
pipe_params = [
{'cv__stop_words':['english'], 'cv__ngram_range':[(1,1),(1,2)],'cv__max_features': [20, 30, 50],'cv__min_df': [2, 3],'cv__max_df': [.2, .25, .3]},
{'cv__stop_words':['english'], 'cv__ngram_range':[(1,1),(1,2)],'cv__max_features': [20, 30, 50],'cv__min_df': [2, 3],'cv__max_df': [.2, .25, .3]},
{'cv__stop_words':['english'], 'cv__ngram_range':[(1,1),(1,2)],'cv__max_features': [20, 30, 50],'cv__min_df': [2, 3],'cv__max_df': [.2, .25, .3]}
]
#RUNNING THE FUNCTION
gridsearch_multi(steps_titles, steps_list, pipe_params)
# +
#USING THE FUNCTION WITH TF-IDF VECTORIZER
# LIST OF MODELS
steps_titles = ['tvec + multi_nb','tvec + ss + knn','tvec + ss + logreg']
# CODE FOR PIPELINE TO INSTANTIATE MODELS
steps_list = [
[('tv', TfidfVectorizer()),('multi_nb', MultinomialNB())],
[('tv', TfidfVectorizer()),('scaler', StandardScaler(with_mean=False)),('knn', KNeighborsClassifier())],
[('tv', TfidfVectorizer()),('scaler', StandardScaler(with_mean=False)),('logreg', LogisticRegression())]
]
# LIST OF PARAMETER DICTIONARIES
pipe_params = [
{'tv__stop_words':['english'], 'tv__ngram_range':[(1,1),(1,2)],'tv__max_features': [20, 30, 50],'tv__min_df': [2, 3],'tv__max_df': [.2, .25, .3]},
{'tv__stop_words':['english'], 'tv__ngram_range':[(1,1),(1,2)],'tv__max_features': [20, 30, 50],'tv__min_df': [2, 3],'tv__max_df': [.2, .25, .3]},
{'tv__stop_words':['english'], 'tv__ngram_range':[(1,1),(1,2)],'tv__max_features': [20, 30, 50],'tv__min_df': [2, 3],'tv__max_df': [.2, .25, .3]}
]
#RUNNING THE FUNCTION
gridsearch_multi(steps_titles, steps_list, pipe_params)
# +
#USING THE FUNCTION WITH HASHING VECTORIZER
# LIST OF MODELS
steps_titles = ['hvec + multi_nb','hvec + ss + knn','hvec + ss + logreg']
# CODE FOR PIPELINE TO INSTANTIATE MODELS
steps_list = [
[('hv', HashingVectorizer(alternate_sign=False)),('multi_nb', MultinomialNB())],
[('hv', HashingVectorizer(alternate_sign=False)),('scaler', StandardScaler(with_mean=False)),('knn', KNeighborsClassifier())],
[('hv', HashingVectorizer(alternate_sign=False)),('scaler', StandardScaler(with_mean=False)),('logreg', LogisticRegression())]
]
# LIST OF PARAMETER DICTIONARIES
pipe_params = [
{'hv__stop_words':['english'], 'hv__ngram_range':[(1,1),(1,2)]},
{'hv__stop_words':['english'], 'hv__ngram_range':[(1,1),(1,2)]},
{'hv__stop_words':['english'], 'hv__ngram_range':[(1,1),(1,2)]}
]
#RUNNING THE FUNCTION
gridsearch_multi(steps_titles, steps_list, pipe_params)
# -
# ## Production Model
# +
#CHECKING SCORES OF THE OPTIMISED MODEL USING TEST DATA
#DEFINING X and y
X = df["text_clean"]
y = df['is_bad']
#TRAIN-TEST SPLIT
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=42, stratify=y)
# NOTE: these values fall outside the grids searched above (max_features=70 > 50, ngram_range=(1, 3))
tvec_optimised = TfidfVectorizer(max_df=0.5, max_features=70, min_df=2, ngram_range=(1, 3), stop_words='english')
X_train_tvec = tvec_optimised.fit_transform(X_train).toarray()
X_test_tvec = tvec_optimised.transform(X_test).toarray()
#SAVE TO PICKLE
import pickle
pickle.dump(tvec_optimised, open("vectorizer.pickle", "wb"))
#FINDING THE ACCURACY SCORE ON THE TEST DATA
log_reg = LogisticRegression()
log_reg.fit(X_train_tvec, y_train)
accuracy = log_reg.score(X_test_tvec, y_test)
#CALCULATING AREA UNDER THE CURVE
pred_proba = [i[1] for i in log_reg.predict_proba(X_test_tvec)]
auc = roc_auc_score(y_test, pred_proba)
print("ACCURACY: {}\nAUC SCORE: {}".format(accuracy, auc) )
# -
test = ["I think it's overfitted"]
vectorizer = pickle.load(open("vectorizer.pickle", "rb"))
new_X = vectorizer.transform(test).toarray()
result = log_reg.predict(new_X)
print(result)
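Only the vectorizer is pickled above; the fitted model exists only in memory, so the final prediction cell would fail in a fresh session. A sketch of persisting and reloading both objects — the tiny corpus, labels, and the `model.pickle` file name are illustrative assumptions:

```python
import pickle

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Tiny illustrative corpus; in the notebook these would be the fitted
# tvec_optimised and log_reg objects from the cells above.
texts = ["good clean text", "bad messy text", "another good text"]
labels = [0, 1, 0]
vec = TfidfVectorizer()
clf = LogisticRegression().fit(vec.fit_transform(texts), labels)

# Persist both objects ("model.pickle" is an assumed file name).
with open("vectorizer.pickle", "wb") as f:
    pickle.dump(vec, f)
with open("model.pickle", "wb") as f:
    pickle.dump(clf, f)

# Reload and predict, as a fresh session would.
with open("vectorizer.pickle", "rb") as f:
    vec2 = pickle.load(f)
with open("model.pickle", "rb") as f:
    clf2 = pickle.load(f)
print(clf2.predict(vec2.transform(["good text"])))
```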
| model.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Practical 2: Logistic regression
# ---
# ### Authors:
# <NAME> - 5th year, Double Degree in Computer Science and Mathematics
# <NAME> - 4th year, Degree in Computer Science
#
# ---
# **Due date:** 25 October 2018, 18:00
# %matplotlib notebook
import numpy as np
import matplotlib.pyplot as plt
from pandas.io.parsers import read_csv
import scipy.optimize as opt
from sklearn.preprocessing import PolynomialFeatures
# ## 1. Logistic regression
#
# The data in the file `ex2data1.csv` represent the scores obtained by a series of candidates
# in a university's two entrance exams, together with information on whether
# they were admitted (1) or not (0). The goal of this practical is to build a logistic
# regression model that estimates the probability that a student is admitted to the
# university based on their exam scores.
# ### 1.1. Visualizing the data
def carga_csv(filename):
data = read_csv(filename, header=None)
return np.array(data.astype(float))
def plot_data(X,Y):
plt.figure()
plt.scatter(X[pos, 0], X[pos, 1], marker='+', c='k')
plt.scatter(X[neg, 0], X[neg, 1], marker='o',c='y')
plt.legend(['Admitted','Not admitted'])
plt.xlabel('Exam 1 score')
plt.ylabel('Exam 2 score')
plt.show()
# +
data = carga_csv("ex2data1.csv")
X = data[:,:-1]
Y = data[:,-1]
pos = np.where(Y == 1)
neg = np.where(Y == 0)
plot_data(X,Y)
# -
# ### 1.2. Sigmoid function
# $$g(z) = \frac{1}{1+e^{-z}}$$
def sigmoid(z):
return 1.0/(1.0 + np.exp(-z))
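One caveat with this formulation: for large negative `z`, `np.exp(-z)` overflows float64 and NumPy emits a RuntimeWarning. `scipy.special.expit` computes the same function in a numerically stable way; a short sketch:

```python
import numpy as np
from scipy.special import expit

z = np.array([-1000.0, 0.0, 1000.0])
# expit avoids the overflow that 1.0 / (1.0 + np.exp(1000.0)) would warn about
print(expit(z))  # [0.  0.5 1. ]
```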
# ### 1.3. Computing the cost function and its gradient
# The value of the logistic regression cost function is given by:
# $$J(\theta) = \frac{1}{m}\sum_{i=1}^m \left[-y^{(i)}\log(h_{\theta}(x^{(i)})) - (1 - y^{(i)})\log(1 - h_{\theta}(x^{(i)}))\right]$$
# which in vectorized form can be computed as:
# $$J(\theta) = \frac{1}{m}\left(-\log(g(X\theta))^T y - \log(1 - g(X\theta))^T (1-y)\right)$$
def coste(theta, x, y):
return -((np.log(sigmoid(x.dot(theta)))).T.dot(y) + (np.log(1 - sigmoid(x.dot(theta)))).T.dot(1 - y))/len(y)
# The gradient of the cost function is a vector of the same length as $\theta$ whose component
# $j$ (for $j = 0, 1,\dots ,n$) is given by:
# $$\frac{\partial J(\theta)}{\partial \theta_j} = \frac{1}{m}\sum_{i=1}^m \left(h_{\theta}(x^{(i)}) - y^{(i)}\right) x_j^{(i)}$$
# which in vectorized form can be computed as:
# $$\frac{\partial J(\theta)}{\partial \theta} = \frac{1}{m} X^T \left(g(X \theta) - y\right)$$
def gradiente(theta, x, y):
return (x.T.dot(sigmoid(x.dot(theta)) - y))/len(y)
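A cheap way to validate the analytic gradient is a central finite-difference check against the cost. A self-contained sketch on synthetic data — `cost_fn`/`grad_fn` mirror `coste`/`gradiente`, and the data are illustrative:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def cost_fn(theta, x, y):  # mirrors coste
    h = sigmoid(x.dot(theta))
    return -(np.log(h).dot(y) + np.log(1 - h).dot(1 - y)) / len(y)

def grad_fn(theta, x, y):  # mirrors gradiente
    return x.T.dot(sigmoid(x.dot(theta)) - y) / len(y)

rng = np.random.default_rng(0)
x = np.hstack([np.ones((5, 1)), rng.normal(size=(5, 2))])
y = np.array([0.0, 1.0, 1.0, 0.0, 1.0])
theta = rng.normal(size=3)

# Central finite differences, one coordinate at a time.
eps = 1e-6
num_grad = np.array([
    (cost_fn(theta + eps * e, x, y) - cost_fn(theta - eps * e, x, y)) / (2 * eps)
    for e in np.eye(3)
])
print(np.max(np.abs(num_grad - grad_fn(theta, x, y))))  # tiny discretization error
```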
X_aux = np.hstack([np.ones((len(Y), 1)), X])
theta = [0,0,0]
print('The value of the cost function is ', end='')
print(coste(theta, X_aux, Y))
print('The gradient of the cost function is ', end='')
print(gradiente(theta, X_aux, Y))
# ### 1.4. Computing the optimal parameter values
result = opt.fmin_tnc(func=coste, x0=theta, fprime=gradiente, args=(X_aux, Y))
theta_opt = result[0]
print('The optimal cost is ', end="", flush=True)
print(coste(theta_opt, X_aux, Y))
def pinta_frontera_recta(X, Y, theta):
#plt.figure()
x1_min, x1_max = X[:, 0].min(), X[:, 0].max()
x2_min, x2_max = X[:, 1].min(), X[:, 1].max()
xx1, xx2 = np.meshgrid(np.linspace(x1_min, x1_max),
np.linspace(x2_min, x2_max))
h = sigmoid(np.c_[np.ones((xx1.ravel().shape[0], 1)),
xx1.ravel(),
xx2.ravel()].dot(theta))
h = h.reshape(xx1.shape)
    # the fourth parameter is the value of z whose boundary
    # we want to plot
plt.contour(xx1, xx2, h, [0.5], linewidths=1, colors='b')
#plt.savefig("frontera.pdf")
plt.show()
plot_data(X,Y)
pinta_frontera_recta(X, Y, theta_opt)
# ### 1.5. Evaluating the logistic regression
def accuracy(X, Y, theta):
    # use the theta passed in, not the global theta_opt
    predictions = sigmoid(np.dot(X, theta))
    return np.mean((predictions >= 0.5) == Y)
print('Correctly classified ', end='')
print(accuracy(X_aux, Y, theta_opt)*100, end='')
print('% of the training examples')
# ## 2. Regularized logistic regression
# In this part you will use regularized logistic regression to find a function that
# can predict whether a microchip will pass quality control, based on the results of
# two tests the microchips undergo.
def plot_data2(X,Y):
plt.figure()
plt.scatter(X[pos, 0], X[pos, 1], marker='+', c='k')
plt.scatter(X[neg, 0], X[neg, 1], marker='o',c='y')
plt.legend(['Pass','Fail'])
plt.xlabel('Microchip test 1')
plt.ylabel('Microchip test 2')
plt.show()
# +
data2 = carga_csv("ex2data2.csv")
X = data2[:,:-1]
Y = data2[:,-1]
pos = np.where(Y == 1)
neg = np.where(Y == 0)
plot_data2(X,Y)
# -
# ### 2.1. Feature mapping
poly = PolynomialFeatures(6)
mapFeature = poly.fit_transform(X)
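`PolynomialFeatures(6)` maps the two test results to every monomial $x_1^i x_2^j$ with $i+j \le 6$, which is 28 columns in total including the bias term. A quick check (the sample values are illustrative):

```python
import numpy as np
from sklearn.preprocessing import PolynomialFeatures

X_demo = np.array([[0.5, -0.5]])  # one sample with two features
mapped = PolynomialFeatures(6).fit_transform(X_demo)
print(mapped.shape)  # (1, 28)
```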
# ### 2.2. Computing the cost function and its gradient
# The cost function is given by:
# $$J(\theta) = \left[\frac{1}{m}\sum_{i=1}^m \left[-y^{(i)}\log(h_{\theta}(x^{(i)})) - (1 - y^{(i)})\log(1 - h_{\theta}(x^{(i)}))\right]\right] + \frac{\lambda}{2m} \sum_{j=1}^n\theta_j^2 $$
# which in vectorized form can be computed as:
# $$J(\theta) = \frac{1}{m}\left(-\log(g(X\theta))^T y - \log(1 - g(X\theta))^T (1-y)\right) + \frac{\lambda}{2m} \sum_{j=1}^n\theta_j^2$$
def coste_reg(theta, x, y, l):
return (coste(theta, x, y) + l/(2*len(y))*(np.square(theta[1:])).sum())
# The gradient of the cost function is a vector of the same length as $\theta$ whose component
# $j$ is given by:
# $$\begin{aligned}
# \frac{\partial J(\theta)}{\partial \theta_0} &= \frac{1}{m}\sum_{i=1}^m \left(h_{\theta}(x^{(i)}) - y^{(i)}\right) x_0^{(i)} && \text{for } j=0\\
# \frac{\partial J(\theta)}{\partial \theta_j} &= \left(\frac{1}{m}\sum_{i=1}^m \left(h_{\theta}(x^{(i)}) - y^{(i)}\right) x_j^{(i)}\right) + \frac{\lambda}{m}\theta_j && \text{for } j\geq 1
# \end{aligned}$$
# which in vectorized form can be computed as:
# $$\frac{\partial J(\theta)}{\partial \theta} = \frac{1}{m} X^T \left(g(X \theta) - y\right) + \frac{\lambda}{m}\theta$$
# taking care not to include the regularization term in the gradient with respect
# to $\theta_0$.
def gradiente_reg(theta, x, y, l):
    # avoid including the regularization term in the gradient with respect to theta_0
    aux = np.hstack(([0], theta[1:]))
return (gradiente(theta, x, y) + l*aux/len(y))
theta = np.zeros(mapFeature.shape[1])
l = 1
print('The value of the cost function is ', end='')
print(coste_reg(theta, mapFeature, Y, l))
# ### 2.3. Computing the optimal parameter values
theta = np.zeros(mapFeature.shape[1])
result2 = opt.fmin_tnc(func=coste_reg, x0=theta, fprime=gradiente_reg, args=(mapFeature, Y, l))
theta_opt2 = result2[0]
print('The optimal cost is ', end="", flush=True)
print(coste_reg(theta_opt2, mapFeature, Y, l))
def plot_decisionboundary(X, Y, theta, poly):
#plt.figure()
x1_min, x1_max = X[:, 0].min(), X[:, 0].max()
x2_min, x2_max = X[:, 1].min(), X[:, 1].max()
xx1, xx2 = np.meshgrid(np.linspace(x1_min, x1_max),
np.linspace(x2_min, x2_max))
h = sigmoid(poly.fit_transform(np.c_[xx1.ravel(),
xx2.ravel()]).dot(theta))
h = h.reshape(xx1.shape)
plt.contour(xx1, xx2, h, [0.5], linewidths=1, colors='g')
#plt.savefig("boundary.pdf")
plt.show()
plot_data2(X,Y)
plot_decisionboundary(X, Y, theta_opt2, poly)
# ### 2.4. Effects of regularization
# Experiment with different values of the parameter $\lambda$ to see how the regularization
# term affects the learning, comparing the resulting plots and evaluating the learning
# outcome on the training examples.
# +
# Initial plot of points
plot_data2(X,Y)
lambdas = np.linspace(0.1, 20, 10)
colors = ['r', 'b', 'y', 'g', 'orange', 'm', 'yellow', 'indigo', 'coral', 'tan', 'aqua']
x1_min, x1_max = X[:, 0].min(), X[:, 0].max()
x2_min, x2_max = X[:, 1].min(), X[:, 1].max()
xx1, xx2 = np.meshgrid(np.linspace(x1_min, x1_max),
np.linspace(x2_min, x2_max))
for l, c in zip(lambdas, colors):
    theta = np.zeros(mapFeature.shape[1])
theta_opt2 = opt.fmin_tnc(func=coste_reg, x0=theta, fprime=gradiente_reg, args=(mapFeature, Y, l))[0]
h = sigmoid(poly.fit_transform(np.c_[xx1.ravel(),
xx2.ravel()]).dot(theta_opt2))
h = h.reshape(xx1.shape)
plt.contour(xx1, xx2, h, [0.5], linewidths=1, colors=c)
#plt.legend([str(i) for i in lambdas])
plt.show()
# -
| P2.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
#######################################################################
# Copyright (C) #
# 2016-2018 <NAME>(<EMAIL>) #
# 2016 <NAME>(<EMAIL>) #
# 2017 <NAME>(<EMAIL>) #
# Permission given to modify the code as long as you keep this #
# declaration at the top #
#######################################################################
# +
import numpy as np
import matplotlib
matplotlib.use('Agg')
import matplotlib.pyplot as plt
# %matplotlib inline
import seaborn as sns
from tqdm import tqdm
# -
# actions: hit or stand
ACTION_HIT = 0
ACTION_STAND = 1  # "stick" in the book
ACTIONS = [ACTION_HIT, ACTION_STAND]
# policy for player
POLICY_PLAYER = np.zeros(22)
for i in range(12, 20):
POLICY_PLAYER[i] = ACTION_HIT
POLICY_PLAYER[20] = ACTION_STAND
POLICY_PLAYER[21] = ACTION_STAND
# function form of target policy of player
def target_policy_player(usable_ace_player, player_sum, dealer_card):
return POLICY_PLAYER[player_sum]
# function form of behavior policy of player
def behavior_policy_player(usable_ace_player, player_sum, dealer_card):
if np.random.binomial(1, 0.5) == 1:
return ACTION_STAND
return ACTION_HIT
# policy for dealer
POLICY_DEALER = np.zeros(22)
for i in range(12, 17):
POLICY_DEALER[i] = ACTION_HIT
for i in range(17, 22):
POLICY_DEALER[i] = ACTION_STAND
# get a new card
def get_card():
card = np.random.randint(1, 14)
card = min(card, 10)
return card
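`get_card` draws a rank uniformly from 1–13 (Ace = 1) and clamps jack, queen and king to 10, so the value 10 is four times as likely as any other value. The mapping at a glance:

```python
# Ranks 1-13 clamped to blackjack values, mirroring get_card's min(card, 10).
values = [min(rank, 10) for rank in range(1, 14)]
print(values)            # [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 10, 10, 10]
print(values.count(10))  # 4 of the 13 ranks are worth 10
```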
# play a game
# @policy_player: specify policy for player
# @initial_state: [whether player has a usable Ace, sum of player's cards, one card of dealer]
# @initial_action: the initial action
def play(policy_player, initial_state=None, initial_action=None):
# player status
# sum of player
player_sum = 0
# trajectory of player
player_trajectory = []
# whether player uses Ace as 11
usable_ace_player = False
# dealer status
dealer_card1 = 0
dealer_card2 = 0
usable_ace_dealer = False
if initial_state is None:
# generate a random initial state
num_of_ace = 0
# initialize cards of player
while player_sum < 12:
# if sum of player is less than 12, always hit
card = get_card()
# if get an Ace, use it as 11
if card == 1:
num_of_ace += 1
card = 11
usable_ace_player = True
player_sum += card
# if player's sum is larger than 21, he must hold at least one Ace, two Aces are possible
if player_sum > 21:
# use the Ace as 1 rather than 11
player_sum -= 10
# if the player only has one Ace, then he doesn't have usable Ace any more
if num_of_ace == 1:
usable_ace_player = False
# initialize cards of dealer, suppose dealer will show the first card he gets
dealer_card1 = get_card()
dealer_card2 = get_card()
else:
# use specified initial state
usable_ace_player, player_sum, dealer_card1 = initial_state
dealer_card2 = get_card()
# initial state of the game
state = [usable_ace_player, player_sum, dealer_card1]
# initialize dealer's sum
dealer_sum = 0
if dealer_card1 == 1 and dealer_card2 != 1:
dealer_sum += 11 + dealer_card2
usable_ace_dealer = True
elif dealer_card1 != 1 and dealer_card2 == 1:
dealer_sum += dealer_card1 + 11
usable_ace_dealer = True
elif dealer_card1 == 1 and dealer_card2 == 1:
dealer_sum += 1 + 11
usable_ace_dealer = True
else:
dealer_sum += dealer_card1 + dealer_card2
# game starts!
# player's turn
while True:
if initial_action is not None:
action = initial_action
initial_action = None
else:
# get action based on current sum
action = policy_player(usable_ace_player, player_sum, dealer_card1)
# track player's trajectory for importance sampling
player_trajectory.append([(usable_ace_player, player_sum, dealer_card1), action])
if action == ACTION_STAND:
break
# if hit, get new card
player_sum += get_card()
# player busts
if player_sum > 21:
# if player has a usable Ace, use it as 1 to avoid busting and continue
if usable_ace_player == True:
player_sum -= 10
usable_ace_player = False
else:
# otherwise player loses
return state, -1, player_trajectory
# dealer's turn
while True:
# get action based on current sum
action = POLICY_DEALER[dealer_sum]
if action == ACTION_STAND:
break
# if hit, get a new card
new_card = get_card()
        # count an ace as 11 whenever that keeps the dealer at or below 21
        if new_card == 1 and dealer_sum + 11 <= 21:
dealer_sum += 11
usable_ace_dealer = True
else:
dealer_sum += new_card
# dealer busts
if dealer_sum > 21:
if usable_ace_dealer == True:
# if dealer has a usable Ace, use it as 1 to avoid busting and continue
dealer_sum -= 10
usable_ace_dealer = False
else:
# otherwise dealer loses
return state, 1, player_trajectory
# compare the sum between player and dealer
if player_sum > dealer_sum:
return state, 1, player_trajectory
elif player_sum == dealer_sum:
return state, 0, player_trajectory
else:
return state, -1, player_trajectory
# Monte Carlo Sample with On-Policy
def monte_carlo_on_policy(episodes):
states_usable_ace = np.zeros((10, 10))
    # initialize counts to 1 to avoid division by 0
states_usable_ace_count = np.ones((10, 10))
states_no_usable_ace = np.zeros((10, 10))
    # initialize counts to 1 to avoid division by 0
states_no_usable_ace_count = np.ones((10, 10))
for i in tqdm(range(0, episodes)):
_, reward, player_trajectory = play(target_policy_player)
for (usable_ace, player_sum, dealer_card), _ in player_trajectory:
player_sum -= 12
dealer_card -= 1
if usable_ace:
states_usable_ace_count[player_sum, dealer_card] += 1
states_usable_ace[player_sum, dealer_card] += reward
else:
states_no_usable_ace_count[player_sum, dealer_card] += 1
states_no_usable_ace[player_sum, dealer_card] += reward
return states_usable_ace / states_usable_ace_count, states_no_usable_ace / states_no_usable_ace_count
# Monte Carlo with Exploring Starts
def monte_carlo_es(episodes):
# (playerSum, dealerCard, usableAce, action)
state_action_values = np.zeros((10, 10, 2, 2))
    # initialize counts to 1 to avoid division by 0
state_action_pair_count = np.ones((10, 10, 2, 2))
# behavior policy is greedy
def behavior_policy(usable_ace, player_sum, dealer_card):
usable_ace = int(usable_ace)
player_sum -= 12
dealer_card -= 1
# get argmax of the average returns(s, a)
values_ = state_action_values[player_sum, dealer_card, usable_ace, :] / \
state_action_pair_count[player_sum, dealer_card, usable_ace, :]
return np.random.choice([action_ for action_, value_ in enumerate(values_) if value_ == np.max(values_)])
# play for several episodes
for episode in tqdm(range(episodes)):
# for each episode, use a randomly initialized state and action
initial_state = [bool(np.random.choice([0, 1])),
np.random.choice(range(12, 22)),
np.random.choice(range(1, 11))]
initial_action = np.random.choice(ACTIONS)
current_policy = behavior_policy if episode else target_policy_player
_, reward, trajectory = play(current_policy, initial_state, initial_action)
for (usable_ace, player_sum, dealer_card), action in trajectory:
usable_ace = int(usable_ace)
player_sum -= 12
dealer_card -= 1
# update values of state-action pairs
state_action_values[player_sum, dealer_card, usable_ace, action] += reward
state_action_pair_count[player_sum, dealer_card, usable_ace, action] += 1
return state_action_values / state_action_pair_count
# Monte Carlo Sample with Off-Policy
def monte_carlo_off_policy(episodes):
initial_state = [True, 13, 2]
rhos = []
returns = []
for i in range(0, episodes):
_, reward, player_trajectory = play(behavior_policy_player, initial_state=initial_state)
# get the importance ratio
numerator = 1.0
denominator = 1.0
for (usable_ace, player_sum, dealer_card), action in player_trajectory:
if action == target_policy_player(usable_ace, player_sum, dealer_card):
denominator *= 0.5
else:
numerator = 0.0
break
rho = numerator / denominator
rhos.append(rho)
returns.append(reward)
rhos = np.asarray(rhos)
returns = np.asarray(returns)
weighted_returns = rhos * returns
weighted_returns = np.add.accumulate(weighted_returns)
rhos = np.add.accumulate(rhos)
ordinary_sampling = weighted_returns / np.arange(1, episodes + 1)
with np.errstate(divide='ignore',invalid='ignore'):
weighted_sampling = np.where(rhos != 0, weighted_returns / rhos, 0)
return ordinary_sampling, weighted_sampling
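The two estimators returned above differ only in the normalizer: ordinary importance sampling divides the accumulated weighted returns by the episode count, while weighted importance sampling divides by the accumulated importance ratios. A toy sketch with fixed, illustrative ratios and returns:

```python
import numpy as np

rhos = np.array([2.0, 0.0, 2.0, 0.0])       # per-episode importance ratios (illustrative)
returns = np.array([1.0, -1.0, 1.0, -1.0])  # per-episode returns (illustrative)

weighted_returns = np.add.accumulate(rhos * returns)
cum_rhos = np.add.accumulate(rhos)

ordinary = weighted_returns / np.arange(1, len(rhos) + 1)
with np.errstate(divide='ignore', invalid='ignore'):
    weighted = np.where(cum_rhos != 0, weighted_returns / cum_rhos, 0)

print(ordinary)  # 2.0, 1.0, 1.33..., 1.0 -- unbiased but higher variance
print(weighted)  # 1.0, 1.0, 1.0, 1.0   -- biased but lower variance
```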
# +
def figure_5_1():
states_usable_ace_1, states_no_usable_ace_1 = monte_carlo_on_policy(10000)
states_usable_ace_2, states_no_usable_ace_2 = monte_carlo_on_policy(500000)
states = [states_usable_ace_1,
states_usable_ace_2,
states_no_usable_ace_1,
states_no_usable_ace_2]
titles = ['Usable Ace, 10000 Episodes',
'Usable Ace, 500000 Episodes',
'No Usable Ace, 10000 Episodes',
'No Usable Ace, 500000 Episodes']
_, axes = plt.subplots(2, 2, figsize=(40, 30))
plt.subplots_adjust(wspace=0.1, hspace=0.2)
axes = axes.flatten()
for state, title, axis in zip(states, titles, axes):
fig = sns.heatmap(np.flipud(state), cmap="YlGnBu", ax=axis, xticklabels=range(1, 11),
yticklabels=list(reversed(range(12, 22))))
fig.set_ylabel('player sum', fontsize=30)
fig.set_xlabel('dealer showing', fontsize=30)
fig.set_title(title, fontsize=30)
plt.show()
figure_5_1()
# +
def figure_5_2():
state_action_values = monte_carlo_es(500000)
state_value_no_usable_ace = np.max(state_action_values[:, :, 0, :], axis=-1)
state_value_usable_ace = np.max(state_action_values[:, :, 1, :], axis=-1)
# get the optimal policy
action_no_usable_ace = np.argmax(state_action_values[:, :, 0, :], axis=-1)
action_usable_ace = np.argmax(state_action_values[:, :, 1, :], axis=-1)
images = [action_usable_ace,
state_value_usable_ace,
action_no_usable_ace,
state_value_no_usable_ace]
titles = ['Optimal policy with usable Ace',
'Optimal value with usable Ace',
'Optimal policy without usable Ace',
'Optimal value without usable Ace']
_, axes = plt.subplots(2, 2, figsize=(40, 30))
plt.subplots_adjust(wspace=0.1, hspace=0.2)
axes = axes.flatten()
for image, title, axis in zip(images, titles, axes):
fig = sns.heatmap(np.flipud(image), cmap="YlGnBu", ax=axis, xticklabels=range(1, 11),
yticklabels=list(reversed(range(12, 22))))
fig.set_ylabel('player sum', fontsize=30)
fig.set_xlabel('dealer showing', fontsize=30)
fig.set_title(title, fontsize=30)
plt.show()
figure_5_2()
# +
def figure_5_3():
true_value = -0.27726
episodes = 10000
runs = 100
error_ordinary = np.zeros(episodes)
error_weighted = np.zeros(episodes)
for i in tqdm(range(0, runs)):
ordinary_sampling_, weighted_sampling_ = monte_carlo_off_policy(episodes)
# get the squared error
error_ordinary += np.power(ordinary_sampling_ - true_value, 2)
error_weighted += np.power(weighted_sampling_ - true_value, 2)
error_ordinary /= runs
error_weighted /= runs
plt.plot(error_ordinary, label='Ordinary Importance Sampling')
plt.plot(error_weighted, label='Weighted Importance Sampling')
plt.xlabel('Episodes (log scale)')
plt.ylabel('Mean square error')
plt.xscale('log')
plt.legend()
plt.show()
figure_5_3()
| chapter05/blackjack.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## General information
#
# In this kernel I introduce a regression approach to this task.
# There have already been several Kaggle competitions scored with a kappa metric, and a regression approach with tuned thresholds has usually worked best.
#
# I use the code for feature generation from this kernel: https://www.kaggle.com/braquino/890-features
# The modelling is taken from my previous kernel. The code was converted to regression quickly, so it may not look polished yet.
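The regression-with-thresholds idea in one line: predict a continuous score, then cut it at three boundaries to recover the four ordinal classes. A minimal sketch — the boundary values here are illustrative, not the tuned ones:

```python
import numpy as np

bounds = [0.5, 1.5, 2.5]                # illustrative class boundaries
preds = np.array([0.2, 1.4, 1.9, 3.1])  # illustrative regression outputs
classes = np.digitize(preds, bounds)
print(classes)  # [0 1 2 3]
```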
# ## Importing libraries
# + _kg_hide-input=true
import numpy as np
import pandas as pd
import os
import copy
import matplotlib.pyplot as plt
# %matplotlib inline
from tqdm import tqdm_notebook
from sklearn.preprocessing import StandardScaler
from sklearn.svm import NuSVR, SVR
from sklearn.metrics import mean_absolute_error
pd.options.display.precision = 15
from collections import defaultdict
import lightgbm as lgb
import xgboost as xgb
import catboost as cat
import time
from collections import Counter
import datetime
from catboost import CatBoostRegressor
from sklearn.preprocessing import LabelEncoder
from sklearn.model_selection import StratifiedKFold, KFold, RepeatedKFold, GroupKFold, GridSearchCV, train_test_split, TimeSeriesSplit, RepeatedStratifiedKFold
from sklearn import metrics
from sklearn.metrics import classification_report, confusion_matrix
from sklearn import linear_model
import gc
import seaborn as sns
import warnings
warnings.filterwarnings("ignore")
from bayes_opt import BayesianOptimization
import eli5
import shap
from IPython.display import HTML
import json
import altair as alt
from category_encoders.ordinal import OrdinalEncoder
import networkx as nx
from typing import List
from numba import jit
from catboost import CatBoostRegressor, CatBoostClassifier
from sklearn import metrics
from typing import Any
from itertools import product
pd.set_option('display.max_rows', 500)
import re
from tqdm import tqdm
from joblib import Parallel, delayed
# -
# ## Helper functions and classes
# + _kg_hide-input=true
def add_datepart(df: pd.DataFrame, field_name: str,
prefix: str = None, drop: bool = True, time: bool = True, date: bool = True):
"""
Helper function that adds columns relevant to a date in the column `field_name` of `df`.
from fastai: https://github.com/fastai/fastai/blob/master/fastai/tabular/transform.py#L55
"""
field = df[field_name]
prefix = ifnone(prefix, re.sub('[Dd]ate$', '', field_name))
attr = ['Year', 'Month', 'Week', 'Day', 'Dayofweek', 'Is_month_end', 'Is_month_start']
if date:
attr.append('Date')
if time:
attr = attr + ['Hour', 'Minute']
for n in attr:
df[prefix + n] = getattr(field.dt, n.lower())
if drop:
df.drop(field_name, axis=1, inplace=True)
return df
def ifnone(a: Any, b: Any) -> Any:
"""`a` if `a` is not None, otherwise `b`.
from fastai: https://github.com/fastai/fastai/blob/master/fastai/core.py#L92"""
return b if a is None else a
# + _kg_hide-input=true
from sklearn.base import BaseEstimator, TransformerMixin
@jit
def qwk(a1, a2):
"""
Source: https://www.kaggle.com/c/data-science-bowl-2019/discussion/114133#latest-660168
:param a1:
:param a2:
:param max_rat:
:return:
"""
max_rat = 3
a1 = np.asarray(a1, dtype=int)
a2 = np.asarray(a2, dtype=int)
hist1 = np.zeros((max_rat + 1, ))
hist2 = np.zeros((max_rat + 1, ))
o = 0
for k in range(a1.shape[0]):
i, j = a1[k], a2[k]
hist1[i] += 1
hist2[j] += 1
o += (i - j) * (i - j)
e = 0
for i in range(max_rat + 1):
for j in range(max_rat + 1):
e += hist1[i] * hist2[j] * (i - j) * (i - j)
e = e / a1.shape[0]
return 1 - o / e
def eval_qwk_lgb(y_true, y_pred):
"""
Fast cappa eval function for lgb.
"""
y_pred = y_pred.reshape(len(np.unique(y_true)), -1).argmax(axis=0)
return 'cappa', qwk(y_true, y_pred), True
def eval_qwk_lgb_regr(y_true, y_pred):
"""
Fast cappa eval function for lgb.
"""
y_pred[y_pred <= 1.12232214] = 0
y_pred[np.where(np.logical_and(y_pred > 1.12232214, y_pred <= 1.73925866))] = 1
y_pred[np.where(np.logical_and(y_pred > 1.73925866, y_pred <= 2.22506454))] = 2
y_pred[y_pred > 2.22506454] = 3
# y_pred = y_pred.reshape(len(np.unique(y_true)), -1).argmax(axis=0)
return 'cappa', qwk(y_true, y_pred), True
class LGBWrapper_regr(object):
"""
A wrapper for lightgbm model so that we will have a single api for various models.
"""
def __init__(self):
self.model = lgb.LGBMRegressor()
def fit(self, X_train, y_train, X_valid=None, y_valid=None, X_holdout=None, y_holdout=None, params=None):
if params['objective'] == 'regression':
eval_metric = eval_qwk_lgb_regr
else:
eval_metric = 'auc'
eval_set = [(X_train, y_train)]
eval_names = ['train']
self.model = self.model.set_params(**params)
if X_valid is not None:
eval_set.append((X_valid, y_valid))
eval_names.append('valid')
if X_holdout is not None:
eval_set.append((X_holdout, y_holdout))
eval_names.append('holdout')
if 'cat_cols' in params.keys():
cat_cols = [col for col in params['cat_cols'] if col in X_train.columns]
if len(cat_cols) > 0:
categorical_columns = params['cat_cols']
else:
categorical_columns = 'auto'
else:
categorical_columns = 'auto'
self.model.fit(X=X_train, y=y_train,
eval_set=eval_set, eval_names=eval_names, eval_metric=eval_metric,
verbose=params['verbose'], early_stopping_rounds=params['early_stopping_rounds'],
categorical_feature=categorical_columns)
self.best_score_ = self.model.best_score_
self.feature_importances_ = self.model.feature_importances_
def predict(self, X_test):
return self.model.predict(X_test, num_iteration=self.model.best_iteration_)
def eval_qwk_xgb(y_pred, y_true):
"""
Fast cappa eval function for xgb.
"""
# print('y_true', y_true)
# print('y_pred', y_pred)
y_true = y_true.get_label()
y_pred = y_pred.argmax(axis=1)
return 'cappa', -qwk(y_true, y_pred)
class LGBWrapper(object):
"""
A wrapper for lightgbm model so that we will have a single api for various models.
"""
def __init__(self):
self.model = lgb.LGBMClassifier()
def fit(self, X_train, y_train, X_valid=None, y_valid=None, X_holdout=None, y_holdout=None, params=None):
eval_set = [(X_train, y_train)]
eval_names = ['train']
self.model = self.model.set_params(**params)
if X_valid is not None:
eval_set.append((X_valid, y_valid))
eval_names.append('valid')
if X_holdout is not None:
eval_set.append((X_holdout, y_holdout))
eval_names.append('holdout')
if 'cat_cols' in params.keys():
cat_cols = [col for col in params['cat_cols'] if col in X_train.columns]
if len(cat_cols) > 0:
categorical_columns = params['cat_cols']
else:
categorical_columns = 'auto'
else:
categorical_columns = 'auto'
self.model.fit(X=X_train, y=y_train,
eval_set=eval_set, eval_names=eval_names, eval_metric=eval_qwk_lgb,
verbose=params['verbose'], early_stopping_rounds=params['early_stopping_rounds'],
categorical_feature=categorical_columns)
self.best_score_ = self.model.best_score_
self.feature_importances_ = self.model.feature_importances_
def predict_proba(self, X_test):
if self.model.objective == 'binary':
return self.model.predict_proba(X_test, num_iteration=self.model.best_iteration_)[:, 1]
else:
return self.model.predict_proba(X_test, num_iteration=self.model.best_iteration_)
class CatWrapper(object):
"""
A wrapper for catboost model so that we will have a single api for various models.
"""
def __init__(self):
self.model = cat.CatBoostClassifier()
def fit(self, X_train, y_train, X_valid=None, y_valid=None, X_holdout=None, y_holdout=None, params=None):
eval_set = [(X_train, y_train)]
self.model = self.model.set_params(**{k: v for k, v in params.items() if k != 'cat_cols'})
if X_valid is not None:
eval_set.append((X_valid, y_valid))
if X_holdout is not None:
eval_set.append((X_holdout, y_holdout))
if 'cat_cols' in params.keys():
cat_cols = [col for col in params['cat_cols'] if col in X_train.columns]
if len(cat_cols) > 0:
categorical_columns = params['cat_cols']
else:
categorical_columns = None
else:
categorical_columns = None
self.model.fit(X=X_train, y=y_train,
eval_set=eval_set,
verbose=params['verbose'], early_stopping_rounds=params['early_stopping_rounds'],
cat_features=categorical_columns)
self.best_score_ = self.model.best_score_
self.feature_importances_ = self.model.feature_importances_
def predict_proba(self, X_test):
if 'MultiClass' not in self.model.get_param('loss_function'):
return self.model.predict_proba(X_test, ntree_end=self.model.best_iteration_)[:, 1]
else:
return self.model.predict_proba(X_test, ntree_end=self.model.best_iteration_)
class XGBWrapper(object):
"""
A wrapper for xgboost model so that we will have a single api for various models.
"""
def __init__(self):
self.model = xgb.XGBClassifier()
def fit(self, X_train, y_train, X_valid=None, y_valid=None, X_holdout=None, y_holdout=None, params=None):
eval_set = [(X_train, y_train)]
self.model = self.model.set_params(**params)
if X_valid is not None:
eval_set.append((X_valid, y_valid))
if X_holdout is not None:
eval_set.append((X_holdout, y_holdout))
self.model.fit(X=X_train, y=y_train,
eval_set=eval_set, eval_metric=eval_qwk_xgb,
verbose=params['verbose'], early_stopping_rounds=params['early_stopping_rounds'])
scores = self.model.evals_result()
self.best_score_ = {k: {m: m_v[-1] for m, m_v in v.items()} for k, v in scores.items()}
self.best_score_ = {k: {m: n if m != 'cappa' else -n for m, n in v.items()} for k, v in self.best_score_.items()}
self.feature_importances_ = self.model.feature_importances_
def predict_proba(self, X_test):
if self.model.objective == 'binary':
return self.model.predict_proba(X_test, ntree_limit=self.model.best_iteration)[:, 1]
else:
return self.model.predict_proba(X_test, ntree_limit=self.model.best_iteration)
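All three wrappers expose the same `fit`/`predict_proba` surface, which is what lets the model class below treat them interchangeably. A minimal dummy wrapper sketching that contract (this class and its constant-probability behavior are illustrative, not part of the original kernel):

```python
class ConstantWrapper(object):
    """A toy model following the same fit/predict_proba contract as the
    LGB/Cat/XGB wrappers above (this class is illustrative only)."""

    def fit(self, X_train, y_train, X_valid=None, y_valid=None,
            X_holdout=None, y_holdout=None, params=None):
        # "best score" here is just the training-set positive rate
        self.p_ = sum(y_train) / len(y_train)
        self.best_score_ = {'train': {'rate': self.p_}}
        self.feature_importances_ = [0.0] * len(X_train[0])
        return self

    def predict_proba(self, X_test):
        # always predict the same probability for every row
        return [self.p_ for _ in X_test]
```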
class MainTransformer(BaseEstimator, TransformerMixin):
def __init__(self, convert_cyclical: bool = False, create_interactions: bool = False, n_interactions: int = 20):
"""
Main transformer for the data. Can be used for processing on the whole data.
:param convert_cyclical: convert cyclical features into continuous
:param create_interactions: create interactions between features
"""
self.convert_cyclical = convert_cyclical
self.create_interactions = create_interactions
self.feats_for_interaction = None
self.n_interactions = n_interactions
def fit(self, X, y=None):
if self.create_interactions:
self.feats_for_interaction = [col for col in X.columns if 'sum' in col
or 'mean' in col or 'max' in col or 'std' in col
or 'attempt' in col]
self.feats_for_interaction1 = np.random.choice(self.feats_for_interaction, self.n_interactions)
self.feats_for_interaction2 = np.random.choice(self.feats_for_interaction, self.n_interactions)
return self
def transform(self, X, y=None):
data = copy.deepcopy(X)
if self.create_interactions:
for col1 in self.feats_for_interaction1:
for col2 in self.feats_for_interaction2:
data[f'{col1}_int_{col2}'] = data[col1] * data[col2]
if self.convert_cyclical:
            # divide each feature by its own period (the original divided all four by 23.0)
            data['timestampHour'] = np.sin(2 * np.pi * data['timestampHour'] / 24.0)
            data['timestampMonth'] = np.sin(2 * np.pi * data['timestampMonth'] / 12.0)
            data['timestampWeek'] = np.sin(2 * np.pi * data['timestampWeek'] / 52.0)
            data['timestampMinute'] = np.sin(2 * np.pi * data['timestampMinute'] / 60.0)
# data['installation_session_count'] = data.groupby(['installation_id'])['Clip'].transform('count')
# data['installation_duration_mean'] = data.groupby(['installation_id'])['duration_mean'].transform('mean')
# data['installation_title_nunique'] = data.groupby(['installation_id'])['session_title'].transform('nunique')
# data['sum_event_code_count'] = data[['2000', '3010', '3110', '4070', '4090', '4030', '4035', '4021', '4020', '4010', '2080', '2083', '2040', '2020', '2030', '3021', '3121', '2050', '3020', '3120', '2060', '2070', '4031', '4025', '5000', '5010', '2081', '2025', '4022', '2035', '4040', '4100', '2010', '4110', '4045', '4095', '4220', '2075', '4230', '4235', '4080', '4050']].sum(axis=1)
# data['installation_event_code_count_mean'] = data.groupby(['installation_id'])['sum_event_code_count'].transform('mean')
return data
def fit_transform(self, X, y=None, **fit_params):
data = copy.deepcopy(X)
self.fit(data)
return self.transform(data)
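A single sine maps two different hours (e.g. 3 and 9 on a 24-hour clock) to the same value, so cyclical features are often encoded as a sin/cos pair instead. A sketch of that idea (the helper name is ours, not the kernel's):

```python
import math

def encode_cyclical(value, period):
    """Map a cyclical feature (hour, month, ...) onto the unit circle so
    that period boundaries (e.g. hour 23 and hour 0) stay close together."""
    angle = 2 * math.pi * value / period
    return math.sin(angle), math.cos(angle)
```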
class FeatureTransformer(BaseEstimator, TransformerMixin):
def __init__(self, main_cat_features: list = None, num_cols: list = None):
"""
:param main_cat_features:
:param num_cols:
"""
self.main_cat_features = main_cat_features
self.num_cols = num_cols
def fit(self, X, y=None):
# self.num_cols = [col for col in X.columns if 'sum' in col or 'mean' in col or 'max' in col or 'std' in col
# or 'attempt' in col]
return self
def transform(self, X, y=None):
data = copy.deepcopy(X)
# for col in self.num_cols:
# data[f'{col}_to_mean'] = data[col] / data.groupby('installation_id')[col].transform('mean')
# data[f'{col}_to_std'] = data[col] / data.groupby('installation_id')[col].transform('std')
return data
def fit_transform(self, X, y=None, **fit_params):
data = copy.deepcopy(X)
self.fit(data)
return self.transform(data)
# + _kg_hide-input=true
class RegressorModel(object):
"""
A wrapper class for classification models.
It can be used for training and prediction.
Can plot feature importance and training progress (if relevant for model).
"""
def __init__(self, columns: list = None, model_wrapper=None):
"""
        :param columns:
:param model_wrapper:
"""
self.columns = columns
self.model_wrapper = model_wrapper
self.result_dict = {}
self.train_one_fold = False
self.preprocesser = None
def fit(self, X: pd.DataFrame, y,
X_holdout: pd.DataFrame = None, y_holdout=None,
folds=None,
params: dict = None,
eval_metric='rmse',
cols_to_drop: list = None,
preprocesser=None,
transformers: dict = None,
adversarial: bool = False,
plot: bool = True):
"""
Training the model.
:param X: training data
:param y: training target
:param X_holdout: holdout data
:param y_holdout: holdout target
:param folds: folds to split the data. If not defined, then model will be trained on the whole X
:param params: training parameters
        :param eval_metric: metric for validation
:param cols_to_drop: list of columns to drop (for example ID)
:param preprocesser: preprocesser class
:param transformers: transformer to use on folds
        :param adversarial: whether to run adversarial validation instead of normal training
:return:
"""
if folds is None:
folds = KFold(n_splits=3, random_state=42)
self.train_one_fold = True
self.columns = X.columns if self.columns is None else self.columns
self.feature_importances = pd.DataFrame(columns=['feature', 'importance'])
self.trained_transformers = {k: [] for k in transformers}
self.transformers = transformers
self.models = []
self.folds_dict = {}
self.eval_metric = eval_metric
n_target = 1
self.oof = np.zeros((len(X), n_target))
self.n_target = n_target
X = X[self.columns]
if X_holdout is not None:
X_holdout = X_holdout[self.columns]
if preprocesser is not None:
self.preprocesser = preprocesser
self.preprocesser.fit(X, y)
X = self.preprocesser.transform(X, y)
self.columns = X.columns.tolist()
if X_holdout is not None:
X_holdout = self.preprocesser.transform(X_holdout)
for fold_n, (train_index, valid_index) in enumerate(folds.split(X, y, X['installation_id'])):
if X_holdout is not None:
X_hold = X_holdout.copy()
else:
X_hold = None
self.folds_dict[fold_n] = {}
if params['verbose']:
print(f'Fold {fold_n + 1} started at {time.ctime()}')
self.folds_dict[fold_n] = {}
X_train, X_valid = X.iloc[train_index], X.iloc[valid_index]
y_train, y_valid = y.iloc[train_index], y.iloc[valid_index]
if self.train_one_fold:
                X_train = X[self.columns]
y_train = y
X_valid = None
y_valid = None
datasets = {'X_train': X_train, 'X_valid': X_valid, 'X_holdout': X_hold, 'y_train': y_train}
X_train, X_valid, X_hold = self.transform_(datasets, cols_to_drop)
self.folds_dict[fold_n]['columns'] = X_train.columns.tolist()
model = copy.deepcopy(self.model_wrapper)
if adversarial:
X_new1 = X_train.copy()
if X_valid is not None:
X_new2 = X_valid.copy()
elif X_holdout is not None:
X_new2 = X_holdout.copy()
X_new = pd.concat([X_new1, X_new2], axis=0)
y_new = np.hstack((np.zeros((X_new1.shape[0])), np.ones((X_new2.shape[0]))))
X_train, X_valid, y_train, y_valid = train_test_split(X_new, y_new)
model.fit(X_train, y_train, X_valid, y_valid, X_hold, y_holdout, params=params)
self.folds_dict[fold_n]['scores'] = model.best_score_
if self.oof.shape[0] != len(X):
self.oof = np.zeros((X.shape[0], self.oof.shape[1]))
if not adversarial:
self.oof[valid_index] = model.predict(X_valid).reshape(-1, n_target)
fold_importance = pd.DataFrame(list(zip(X_train.columns, model.feature_importances_)),
columns=['feature', 'importance'])
self.feature_importances = self.feature_importances.append(fold_importance)
self.models.append(model)
self.feature_importances['importance'] = self.feature_importances['importance'].astype(int)
# if params['verbose']:
self.calc_scores_()
if plot:
# print(classification_report(y, self.oof.argmax(1)))
fig, ax = plt.subplots(figsize=(16, 12))
plt.subplot(2, 2, 1)
self.plot_feature_importance(top_n=20)
plt.subplot(2, 2, 2)
self.plot_metric()
plt.subplot(2, 2, 3)
plt.hist(y.values.reshape(-1, 1) - self.oof)
plt.title('Distribution of errors')
plt.subplot(2, 2, 4)
plt.hist(self.oof)
plt.title('Distribution of oof predictions');
def transform_(self, datasets, cols_to_drop):
for name, transformer in self.transformers.items():
transformer.fit(datasets['X_train'], datasets['y_train'])
datasets['X_train'] = transformer.transform(datasets['X_train'])
if datasets['X_valid'] is not None:
datasets['X_valid'] = transformer.transform(datasets['X_valid'])
if datasets['X_holdout'] is not None:
datasets['X_holdout'] = transformer.transform(datasets['X_holdout'])
self.trained_transformers[name].append(transformer)
if cols_to_drop is not None:
cols_to_drop = [col for col in cols_to_drop if col in datasets['X_train'].columns]
datasets['X_train'] = datasets['X_train'].drop(cols_to_drop, axis=1)
if datasets['X_valid'] is not None:
datasets['X_valid'] = datasets['X_valid'].drop(cols_to_drop, axis=1)
if datasets['X_holdout'] is not None:
datasets['X_holdout'] = datasets['X_holdout'].drop(cols_to_drop, axis=1)
self.cols_to_drop = cols_to_drop
return datasets['X_train'], datasets['X_valid'], datasets['X_holdout']
def calc_scores_(self):
print()
datasets = [k for k, v in [v['scores'] for k, v in self.folds_dict.items()][0].items() if len(v) > 0]
self.scores = {}
for d in datasets:
scores = [v['scores'][d][self.eval_metric] for k, v in self.folds_dict.items()]
print(f"CV mean score on {d}: {np.mean(scores):.4f} +/- {np.std(scores):.4f} std.")
self.scores[d] = np.mean(scores)
def predict(self, X_test, averaging: str = 'usual'):
"""
Make prediction
:param X_test:
:param averaging: method of averaging
:return:
"""
full_prediction = np.zeros((X_test.shape[0], self.oof.shape[1]))
if self.preprocesser is not None:
X_test = self.preprocesser.transform(X_test)
for i in range(len(self.models)):
X_t = X_test.copy()
for name, transformers in self.trained_transformers.items():
X_t = transformers[i].transform(X_t)
if self.cols_to_drop is not None:
cols_to_drop = [col for col in self.cols_to_drop if col in X_t.columns]
X_t = X_t.drop(cols_to_drop, axis=1)
y_pred = self.models[i].predict(X_t[self.folds_dict[i]['columns']]).reshape(-1, full_prediction.shape[1])
            # in case the transformation changes the number of rows
if full_prediction.shape[0] != len(y_pred):
full_prediction = np.zeros((y_pred.shape[0], self.oof.shape[1]))
if averaging == 'usual':
full_prediction += y_pred
elif averaging == 'rank':
full_prediction += pd.Series(y_pred).rank().values
return full_prediction / len(self.models)
def plot_feature_importance(self, drop_null_importance: bool = True, top_n: int = 10):
"""
Plot default feature importance.
:param drop_null_importance: drop columns with null feature importance
:param top_n: show top n columns
:return:
"""
top_feats = self.get_top_features(drop_null_importance, top_n)
feature_importances = self.feature_importances.loc[self.feature_importances['feature'].isin(top_feats)]
feature_importances['feature'] = feature_importances['feature'].astype(str)
top_feats = [str(i) for i in top_feats]
sns.barplot(data=feature_importances, x='importance', y='feature', orient='h', order=top_feats)
plt.title('Feature importances')
def get_top_features(self, drop_null_importance: bool = True, top_n: int = 10):
"""
Get top features by importance.
:param drop_null_importance:
:param top_n:
:return:
"""
grouped_feats = self.feature_importances.groupby(['feature'])['importance'].mean()
if drop_null_importance:
grouped_feats = grouped_feats[grouped_feats != 0]
return list(grouped_feats.sort_values(ascending=False).index)[:top_n]
def plot_metric(self):
"""
Plot training progress.
Inspired by `plot_metric` from https://lightgbm.readthedocs.io/en/latest/_modules/lightgbm/plotting.html
:return:
"""
full_evals_results = pd.DataFrame()
for model in self.models:
evals_result = pd.DataFrame()
for k in model.model.evals_result_.keys():
evals_result[k] = model.model.evals_result_[k][self.eval_metric]
evals_result = evals_result.reset_index().rename(columns={'index': 'iteration'})
full_evals_results = full_evals_results.append(evals_result)
full_evals_results = full_evals_results.melt(id_vars=['iteration']).rename(columns={'value': self.eval_metric,
'variable': 'dataset'})
sns.lineplot(data=full_evals_results, x='iteration', y=self.eval_metric, hue='dataset')
plt.title('Training progress')
# + _kg_hide-input=true
class CategoricalTransformer(BaseEstimator, TransformerMixin):
def __init__(self, cat_cols=None, drop_original: bool = False, encoder=OrdinalEncoder()):
"""
Categorical transformer. This is a wrapper for categorical encoders.
:param cat_cols:
:param drop_original:
:param encoder:
"""
self.cat_cols = cat_cols
self.drop_original = drop_original
self.encoder = encoder
self.default_encoder = OrdinalEncoder()
def fit(self, X, y=None):
if self.cat_cols is None:
kinds = np.array([dt.kind for dt in X.dtypes])
is_cat = kinds == 'O'
self.cat_cols = list(X.columns[is_cat])
self.encoder.set_params(cols=self.cat_cols)
self.default_encoder.set_params(cols=self.cat_cols)
self.encoder.fit(X[self.cat_cols], y)
self.default_encoder.fit(X[self.cat_cols], y)
return self
def transform(self, X, y=None):
data = copy.deepcopy(X)
new_cat_names = [f'{col}_encoded' for col in self.cat_cols]
encoded_data = self.encoder.transform(data[self.cat_cols])
if encoded_data.shape[1] == len(self.cat_cols):
data[new_cat_names] = encoded_data
else:
pass
if self.drop_original:
data = data.drop(self.cat_cols, axis=1)
else:
data[self.cat_cols] = self.default_encoder.transform(data[self.cat_cols])
return data
def fit_transform(self, X, y=None, **fit_params):
data = copy.deepcopy(X)
self.fit(data)
return self.transform(data)
# -
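The transformer above delegates the actual encoding to a category_encoders-style `OrdinalEncoder`. The core idea is just a category-to-integer mapping; a dependency-free sketch (helper names are illustrative):

```python
def ordinal_fit(values):
    """Assign each distinct category an integer code, in first-seen order."""
    return {cat: code for code, cat in enumerate(dict.fromkeys(values))}

def ordinal_transform(values, mapping, unknown=-1):
    """Replace categories by their codes; unseen categories get a sentinel."""
    return [mapping.get(v, unknown) for v in values]
```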
# ## Data overview
#
# Let's have a look at the data at first.
# + _cell_guid="79c7e3d0-c299-4dcb-8224-4455121ee9b0" _uuid="d629ff2d2480ee46fbb7e2d37f6b5fab8052498a"
def read_data():
print('Reading train.csv file....')
train = pd.read_csv('/kaggle/input/data-science-bowl-2019/train.csv')
    print('Training.csv file has {} rows and {} columns'.format(train.shape[0], train.shape[1]))
print('Reading test.csv file....')
test = pd.read_csv('/kaggle/input/data-science-bowl-2019/test.csv')
    print('Test.csv file has {} rows and {} columns'.format(test.shape[0], test.shape[1]))
print('Reading train_labels.csv file....')
train_labels = pd.read_csv('/kaggle/input/data-science-bowl-2019/train_labels.csv')
    print('Train_labels.csv file has {} rows and {} columns'.format(train_labels.shape[0], train_labels.shape[1]))
print('Reading specs.csv file....')
specs = pd.read_csv('/kaggle/input/data-science-bowl-2019/specs.csv')
    print('Specs.csv file has {} rows and {} columns'.format(specs.shape[0], specs.shape[1]))
print('Reading sample_submission.csv file....')
sample_submission = pd.read_csv('/kaggle/input/data-science-bowl-2019/sample_submission.csv')
    print('Sample_submission.csv file has {} rows and {} columns'.format(sample_submission.shape[0], sample_submission.shape[1]))
return train, test, train_labels, specs, sample_submission
def encode_title(train, test, train_labels):
# encode title
train['title_event_code'] = list(map(lambda x, y: str(x) + '_' + str(y), train['title'], train['event_code']))
test['title_event_code'] = list(map(lambda x, y: str(x) + '_' + str(y), test['title'], test['event_code']))
all_title_event_code = list(set(train["title_event_code"].unique()).union(test["title_event_code"].unique()))
# make a list with all the unique 'titles' from the train and test set
list_of_user_activities = list(set(train['title'].unique()).union(set(test['title'].unique())))
# make a list with all the unique 'event_code' from the train and test set
list_of_event_code = list(set(train['event_code'].unique()).union(set(test['event_code'].unique())))
list_of_event_id = list(set(train['event_id'].unique()).union(set(test['event_id'].unique())))
# make a list with all the unique worlds from the train and test set
list_of_worlds = list(set(train['world'].unique()).union(set(test['world'].unique())))
# create a dictionary numerating the titles
activities_map = dict(zip(list_of_user_activities, np.arange(len(list_of_user_activities))))
activities_labels = dict(zip(np.arange(len(list_of_user_activities)), list_of_user_activities))
activities_world = dict(zip(list_of_worlds, np.arange(len(list_of_worlds))))
assess_titles = list(set(train[train['type'] == 'Assessment']['title'].value_counts().index).union(set(test[test['type'] == 'Assessment']['title'].value_counts().index)))
# replace the text titles with the number titles from the dict
train['title'] = train['title'].map(activities_map)
test['title'] = test['title'].map(activities_map)
train['world'] = train['world'].map(activities_world)
test['world'] = test['world'].map(activities_world)
train_labels['title'] = train_labels['title'].map(activities_map)
win_code = dict(zip(activities_map.values(), (4100*np.ones(len(activities_map))).astype('int')))
    # then set one element, 'Bird Measurer (Assessment)', to 4110, 10 more than the rest
win_code[activities_map['Bird Measurer (Assessment)']] = 4110
# convert text into datetime
train['timestamp'] = pd.to_datetime(train['timestamp'])
test['timestamp'] = pd.to_datetime(test['timestamp'])
return train, test, train_labels, win_code, list_of_user_activities, list_of_event_code, activities_labels, assess_titles, list_of_event_id, all_title_event_code
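The `win_code` construction above defaults every title to attempt event code 4100 and overrides one assessment. The same construction in isolation (the title list used when calling this sketch is illustrative):

```python
def build_win_code(titles):
    """Default attempt event_code is 4100; the Bird Measurer assessment
    uses 4110 instead, as in encode_title above."""
    win_code = {title: 4100 for title in titles}
    if 'Bird Measurer (Assessment)' in win_code:
        win_code['Bird Measurer (Assessment)'] = 4110
    return win_code
```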
def get_data(user_sample, test_set=False):
'''
    The user_sample is a DataFrame from train or test where only one
    installation_id is filtered.
    The test_set parameter controls the labels processing, which is only
    required when test_set=False.
'''
# Constants and parameters declaration
last_activity = 0
user_activities_count = {'Clip':0, 'Activity': 0, 'Assessment': 0, 'Game':0}
# new features: time spent in each activity
last_session_time_sec = 0
accuracy_groups = {0:0, 1:0, 2:0, 3:0}
all_assessments = []
accumulated_accuracy_group = 0
accumulated_accuracy = 0
accumulated_correct_attempts = 0
accumulated_uncorrect_attempts = 0
accumulated_actions = 0
counter = 0
time_first_activity = float(user_sample['timestamp'].values[0])
durations = []
last_accuracy_title = {'acc_' + title: -1 for title in assess_titles}
event_code_count: Dict[str, int] = {ev: 0 for ev in list_of_event_code}
event_id_count: Dict[str, int] = {eve: 0 for eve in list_of_event_id}
title_count: Dict[str, int] = {eve: 0 for eve in activities_labels.values()}
title_event_code_count: Dict[str, int] = {t_eve: 0 for t_eve in all_title_event_code}
    # iterates through each session of one installation_id
for i, session in user_sample.groupby('game_session', sort=False):
# i = game_session_id
# session is a DataFrame that contain only one game_session
# get some sessions information
session_type = session['type'].iloc[0]
session_title = session['title'].iloc[0]
session_title_text = activities_labels[session_title]
        # for each assessment, and only for this kind of session, the features below are processed
        # and a record is generated
if (session_type == 'Assessment') & (test_set or len(session)>1):
            # search for event_code 4100, which represents the assessment trials
all_attempts = session.query(f'event_code == {win_code[session_title]}')
# then, check the numbers of wins and the number of losses
true_attempts = all_attempts['event_data'].str.contains('true').sum()
false_attempts = all_attempts['event_data'].str.contains('false').sum()
            # copy a dict to use as a feature template; it is initialized with some items:
            # {'Clip':0, 'Activity': 0, 'Assessment': 0, 'Game':0}
features = user_activities_count.copy()
features.update(last_accuracy_title.copy())
features.update(event_code_count.copy())
features.update(event_id_count.copy())
features.update(title_count.copy())
features.update(title_event_code_count.copy())
# get installation_id for aggregated features
features['installation_id'] = session['installation_id'].iloc[-1]
# add title as feature, remembering that title represents the name of the game
features['session_title'] = session['title'].iloc[0]
# the 4 lines below add the feature of the history of the trials of this player
# this is based on the all time attempts so far, at the moment of this assessment
features['accumulated_correct_attempts'] = accumulated_correct_attempts
features['accumulated_uncorrect_attempts'] = accumulated_uncorrect_attempts
accumulated_correct_attempts += true_attempts
accumulated_uncorrect_attempts += false_attempts
# the time spent in the app so far
if durations == []:
features['duration_mean'] = 0
else:
features['duration_mean'] = np.mean(durations)
durations.append((session.iloc[-1, 2] - session.iloc[0, 2] ).seconds)
            # the accuracy is the all-time wins divided by the all-time attempts
features['accumulated_accuracy'] = accumulated_accuracy/counter if counter > 0 else 0
accuracy = true_attempts/(true_attempts+false_attempts) if (true_attempts+false_attempts) != 0 else 0
accumulated_accuracy += accuracy
last_accuracy_title['acc_' + session_title_text] = accuracy
# a feature of the current accuracy categorized
# it is a counter of how many times this player was in each accuracy group
if accuracy == 0:
features['accuracy_group'] = 0
elif accuracy == 1:
features['accuracy_group'] = 3
elif accuracy == 0.5:
features['accuracy_group'] = 2
else:
features['accuracy_group'] = 1
features.update(accuracy_groups)
accuracy_groups[features['accuracy_group']] += 1
# mean of the all accuracy groups of this player
features['accumulated_accuracy_group'] = accumulated_accuracy_group/counter if counter > 0 else 0
accumulated_accuracy_group += features['accuracy_group']
# how many actions the player has done so far, it is initialized as 0 and updated some lines below
features['accumulated_actions'] = accumulated_actions
            # there are some conditions that allow these features to be inserted into the datasets
            # if it's a test set, all sessions belong to the final dataset
            # if it's a train set, it needs to pass through this clause: session.query(f'event_code == {win_code[session_title]}')
            # that means an event_code 4100 or 4110 must exist
if test_set:
all_assessments.append(features)
elif true_attempts+false_attempts > 0:
all_assessments.append(features)
counter += 1
        # this piece counts how many actions were made for each event_code so far
def update_counters(counter: dict, col: str):
num_of_session_count = Counter(session[col])
for k in num_of_session_count.keys():
x = k
if col == 'title':
x = activities_labels[k]
counter[x] += num_of_session_count[k]
return counter
event_code_count = update_counters(event_code_count, "event_code")
event_id_count = update_counters(event_id_count, "event_id")
title_count = update_counters(title_count, 'title')
title_event_code_count = update_counters(title_event_code_count, 'title_event_code')
# counts how many actions the player has done so far, used in the feature of the same name
accumulated_actions += len(session)
if last_activity != session_type:
user_activities_count[session_type] += 1
            last_activity = session_type
    # if it's the test_set, only the last assessment must be predicted; the previous ones are discarded
if test_set:
return all_assessments[-1]
    # in the train_set, all assessments go to the dataset
return all_assessments
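The accuracy-to-group mapping buried inside `get_data` is the competition's target definition: solved on the first attempt gives group 3, on the second gives 2, eventually gives 1, never gives 0. A standalone sketch of that mapping:

```python
def accuracy_group(true_attempts, false_attempts):
    """Mirror of the mapping in get_data: first-try success -> 3,
    success on the second attempt -> 2, eventual success -> 1, never -> 0."""
    total = true_attempts + false_attempts
    accuracy = true_attempts / total if total else 0
    if accuracy == 0:
        return 0
    elif accuracy == 1:
        return 3
    elif accuracy == 0.5:
        return 2
    else:
        return 1
```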
def get_train_and_test(train, test):
compiled_train = []
compiled_test = []
for i, (ins_id, user_sample) in tqdm(enumerate(train.groupby('installation_id', sort = False)), total = 17000):
compiled_train += get_data(user_sample)
for ins_id, user_sample in tqdm(test.groupby('installation_id', sort = False), total = 1000):
test_data = get_data(user_sample, test_set = True)
compiled_test.append(test_data)
reduce_train = pd.DataFrame(compiled_train)
reduce_test = pd.DataFrame(compiled_test)
categoricals = ['session_title']
return reduce_train, reduce_test, categoricals
# read data
train, test, train_labels, specs, sample_submission = read_data()
# get useful dicts with the mapping encodings
train, test, train_labels, win_code, list_of_user_activities, list_of_event_code, activities_labels, assess_titles, list_of_event_id, all_title_event_code = encode_title(train, test, train_labels)
# transform function to get the train and test set
reduce_train, reduce_test, categoricals = get_train_and_test(train, test)
# -
def preprocess(reduce_train, reduce_test):
for df in [reduce_train, reduce_test]:
df['installation_session_count'] = df.groupby(['installation_id'])['Clip'].transform('count')
df['installation_duration_mean'] = df.groupby(['installation_id'])['duration_mean'].transform('mean')
#df['installation_duration_std'] = df.groupby(['installation_id'])['duration_mean'].transform('std')
df['installation_title_nunique'] = df.groupby(['installation_id'])['session_title'].transform('nunique')
df['sum_event_code_count'] = df[[2050, 4100, 4230, 5000, 4235, 2060, 4110, 5010, 2070, 2075, 2080, 2081, 2083, 3110, 4010, 3120, 3121, 4020, 4021,
4022, 4025, 4030, 4031, 3010, 4035, 4040, 3020, 3021, 4045, 2000, 4050, 2010, 2020, 4070, 2025, 2030, 4080, 2035,
2040, 4090, 4220, 4095]].sum(axis = 1)
df['installation_event_code_count_mean'] = df.groupby(['installation_id'])['sum_event_code_count'].transform('mean')
#df['installation_event_code_count_std'] = df.groupby(['installation_id'])['sum_event_code_count'].transform('std')
features = reduce_train.loc[(reduce_train.sum(axis=1) != 0), (reduce_train.sum(axis=0) != 0)].columns # delete useless columns
features = [x for x in features if x not in ['accuracy_group', 'installation_id']] + ['acc_' + title for title in assess_titles]
return reduce_train, reduce_test, features
# call feature engineering function
reduce_train, reduce_test, features = preprocess(reduce_train, reduce_test)
params = {'n_estimators':2000,
'boosting_type': 'gbdt',
'objective': 'regression',
'metric': 'rmse',
'subsample': 0.75,
'subsample_freq': 1,
'learning_rate': 0.04,
'feature_fraction': 0.9,
'max_depth': 15,
'lambda_l1': 1,
'lambda_l2': 1,
'verbose': 100,
'early_stopping_rounds': 100, 'eval_metric': 'cappa'
}
y = reduce_train['accuracy_group']
n_fold = 5
folds = GroupKFold(n_splits=n_fold)
cols_to_drop = ['game_session', 'installation_id', 'timestamp', 'accuracy_group', 'timestampDate']
mt = MainTransformer()
ft = FeatureTransformer()
transformers = {'ft': ft}
regressor_model1 = RegressorModel(model_wrapper=LGBWrapper_regr())
regressor_model1.fit(X=reduce_train, y=y, folds=folds, params=params, preprocesser=mt, transformers=transformers,
eval_metric='cappa', cols_to_drop=cols_to_drop)
# ## Making predictions
#
# The post-processing uses a class that was initially written by <NAME> here: https://www.kaggle.com/c/petfinder-adoption-prediction/discussion/76107 and later improved here: https://www.kaggle.com/naveenasaithambi/optimizedrounder-improved (the improvement is in speed).
#
# It can be used to find optimal threshold coefficients. In this kernel I'll show an example, but when you do it yourself, don't forget proper validation.
from functools import partial
import scipy as sp
class OptimizedRounder(object):
"""
An optimizer for rounding thresholds
to maximize Quadratic Weighted Kappa (QWK) score
# https://www.kaggle.com/naveenasaithambi/optimizedrounder-improved
"""
def __init__(self):
self.coef_ = 0
def _kappa_loss(self, coef, X, y):
"""
        Get loss according to the current coefficients
:param coef: A list of coefficients that will be used for rounding
:param X: The raw predictions
:param y: The ground truth labels
"""
X_p = pd.cut(X, [-np.inf] + list(np.sort(coef)) + [np.inf], labels = [0, 1, 2, 3])
return -qwk(y, X_p)
def fit(self, X, y):
"""
Optimize rounding thresholds
:param X: The raw predictions
:param y: The ground truth labels
"""
loss_partial = partial(self._kappa_loss, X=X, y=y)
initial_coef = [0.5, 1.5, 2.5]
self.coef_ = sp.optimize.minimize(loss_partial, initial_coef, method='nelder-mead')
def predict(self, X, coef):
"""
Make predictions with specified thresholds
:param X: The raw predictions
:param coef: A list of coefficients that will be used for rounding
"""
return pd.cut(X, [-np.inf] + list(np.sort(coef)) + [np.inf], labels = [0, 1, 2, 3])
def coefficients(self):
"""
Return the optimized coefficients
"""
return self.coef_['x']
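The `pd.cut` call above bins each raw prediction by the sorted thresholds, with boundary values falling into the lower (right-closed) bin. A dependency-free equivalent sketch using the standard library:

```python
import bisect

def round_with_thresholds(preds, coef):
    """Map raw regression outputs to the 0-3 accuracy groups using the same
    right-closed bins as pd.cut([-inf, c0, c1, c2, +inf])."""
    edges = sorted(coef)
    # bisect_left puts a value equal to an edge into the lower bin,
    # matching pd.cut's default right-closed intervals
    return [bisect.bisect_left(edges, p) for p in preds]
```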
# +
# %%time
pr1 = regressor_model1.predict(reduce_train)
optR = OptimizedRounder()
optR.fit(pr1.reshape(-1,), y)
coefficients = optR.coefficients()
# -
opt_preds = optR.predict(pr1.reshape(-1, ), coefficients)
qwk(y, opt_preds)
# some coefficients calculated by me.
pr1 = regressor_model1.predict(reduce_test)
pr1[pr1 <= 1.12232214] = 0
pr1[np.where(np.logical_and(pr1 > 1.12232214, pr1 <= 1.73925866))] = 1
pr1[np.where(np.logical_and(pr1 > 1.73925866, pr1 <= 2.22506454))] = 2
pr1[pr1 > 2.22506454] = 3
sample_submission['accuracy_group'] = pr1.astype(int)
sample_submission.to_csv('submission.csv', index=False)
sample_submission['accuracy_group'].value_counts(normalize=True)
| model_src/quick-and-dirty-regression.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # T<sub>2</sub> Ramsey Characterization
# The purpose of the $T_2$ Ramsey experiment is to determine two of the qubit's properties: the *Ramsey* or *detuning frequency* and $T_2^\ast$. The rough frequency of the qubit was determined previously; the control pulses are based on this frequency.
#
# In this experiment, we would like to get a more precise estimate of the qubit's frequency. The difference between the frequency used for the control rotation pulses, and the precise frequency is called the *detuning frequency*. This part of the experiment is called a *Ramsey Experiment*. $T_2^\ast$ represents the rate of decay toward a mixed state, when the qubit is initialized to the $\left|1\right\rangle$ state.
#
# Since the detuning frequency is relatively small, we add a phase gate to the circuit to enable better measurement. The actual frequency measured is the sum of the detuning frequency and the user induced *oscillation frequency* (`osc_freq` parameter).
import qiskit
from qiskit_experiments.library import T2Ramsey
# The circuit used for the experiment comprises the following:
#
# 1. Hadamard gate
# 2. delay
# 3. RZ gate that rotates the qubit in the x-y plane
# 4. Hadamard gate
# 5. measurement
#
# The user provides as input a series of delays and the time unit for the delays, e.g., seconds, milliseconds, etc. In addition, the user provides the oscillation frequency in Hz. During the delay, we expect the qubit to precess about the z-axis. If the phase gate and the precession offset each other perfectly, then the qubit will arrive at the $\left|0\right\rangle$ state (after the second Hadamard gate). By varying the lengths of the delays, we get a series of oscillations of the qubit state between the $\left|0\right\rangle$ and $\left|1\right\rangle$ states. We can draw the graph of the resulting function and analytically extract the desired values.
# set the computation units to microseconds
unit = "us" # microseconds
qubit = 0
# set the desired delays
delays = list(range(1, 50, 1))
# Create a T2Ramsey experiment. Print the first circuit as an example
exp1 = T2Ramsey(qubit, delays, unit=unit, osc_freq=1e5)
print(exp1.circuits()[0])
# We run the experiment on a simple, simulated backend, created specifically for this experiment's tutorial.
# +
from qiskit_experiments.test.t2ramsey_backend import T2RamseyBackend
# FakeJob is a wrapper for the backend, to give it the form of a job
from qiskit_experiments.test.utils import FakeJob
conversion_factor = 1e-6
# The behavior of the backend is determined by the following parameters
backend = T2RamseyBackend(
p0={
"A": [0.5],
"T2star": [20.0],
"f": [100100],
"phi": [0.0],
"B": [0.5],
},
initial_prob_plus=[0.0],
readout0to1=[0.02],
readout1to0=[0.02],
conversion_factor=conversion_factor,
)
# -
# The resulting graph will have the form:
# $f(t) = A e^{-t/T_2^\ast} \cdot \cos(2 \pi f t + \phi) + B$
# where $t$ is the delay, $T_2^\ast$ is the decay constant, and $f$ is the detuning frequency.
# `conversion_factor` is a scaling factor that depends on the measurement units used. It is 1E-6 here, because the unit is microseconds.
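# As a quick sanity check, the fit model can be written as a plain function; the parameter names mirror the backend's `p0` dictionary above (the numbers below are illustrative, in microsecond units):

```python
import numpy as np

def ramsey_model(t, A, T2star, f, phi, B):
    """Decaying-cosine Ramsey model: A * exp(-t/T2star) * cos(2*pi*f*t + phi) + B."""
    return A * np.exp(-t / T2star) * np.cos(2 * np.pi * f * t + phi) + B

# At t = 0 with phi = 0 the curve starts at A + B = 1,
# and for t >> T2star it decays toward the offset B.
y0 = ramsey_model(0.0, A=0.5, T2star=20.0, f=0.1001, phi=0.0, B=0.5)
y_late = ramsey_model(1e6, A=0.5, T2star=20.0, f=0.1001, phi=0.0, B=0.5)
```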
# +
exp1.set_analysis_options(user_p0=None, plot=True)
expdata1 = exp1.run(backend=backend, shots=2000)
expdata1.block_for_results() # Wait for job/analysis to finish.
# Display the figure
display(expdata1.figure(0))
# -
# Print results
for result in expdata1.analysis_results():
print(result)
# Additional fitter result data is stored in the `result.extra` field
expdata1.analysis_results("T2star").extra
# ### Providing initial user estimates
# The user can provide initial estimates for the parameters to help the analysis process. Because the curve is expected to decay toward $0.5$, the natural choice for parameters $A$ and $B$ is $0.5$. Varying the value of $\phi$ will shift the graph along the x-axis. Since this is not of interest to us, we can safely initialize $\phi$ to 0. In this experiment, `T2star` and `f` are the parameters of interest. Good estimates for them are values computed in previous experiments on this qubit, or similar values computed for other qubits.
# +
from qiskit_experiments.library.characterization import T2RamseyAnalysis
user_p0={
"A": 0.5,
"T2star": 20.0,
"f": 110000,
"phi": 0,
"B": 0.5
}
exp_with_p0 = T2Ramsey(qubit, delays, unit=unit, osc_freq=1e5)
exp_with_p0.set_analysis_options(user_p0=user_p0, plot=True)
expdata_with_p0 = exp_with_p0.run(backend=backend, shots=2000)
expdata_with_p0.block_for_results()
# Display fit figure
display(expdata_with_p0.figure(0))
# -
# Print results
for result in expdata_with_p0.analysis_results():
print(result)
# The units can be changed, but the output in the result is always given in seconds. The units in the backend must be adjusted accordingly.
# +
from qiskit.utils import apply_prefix
unit = "ns"
delays = list(range(1000, 50000, 1000))
conversion_factor = apply_prefix(1, unit)
print(conversion_factor)
# +
p0 = {
"A": [0.5],
"T2star": [20000],
"f": [100000],
"phi": [0.0],
"B": [0.5],
}
backend_in_ns = T2RamseyBackend(
p0=p0,
initial_prob_plus=[0.0],
readout0to1=[0.02],
readout1to0=[0.02],
conversion_factor=conversion_factor,
)
exp_in_ns = T2Ramsey(qubit, delays, unit=unit, osc_freq=1e5)
user_p0_ns = {
"A": 0.5,
"T2star": 20000.0,
"f": 110000,
"phi": 0,
"B": 0.5
}
exp_in_ns.set_analysis_options(user_p0=user_p0_ns, plot=True)
# Run experiment
expdata_in_ns = exp_in_ns.run(backend=backend_in_ns, shots=2000).block_for_results()
# Display Figure
display(expdata_in_ns.figure(0))
# -
# Print Results
for result in expdata_in_ns.analysis_results():
print(result)
import qiskit.tools.jupyter
# %qiskit_copyright
| docs/tutorials/t2ramsey_characterization.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: 'Python 3.9.5 64-bit (''shiing'': venv)'
# name: python3
# ---
# # CUHK [STAT3009](https://www.bendai.org/STAT3009/) Notebook7a: Performance for Netflix dataset
# ## load the developed methods
# +
import numpy as np

def rmse(true, pred):
    return np.sqrt(np.mean((pred - true)**2))
# baseline methods
class glb_mean(object):
def __init__(self):
self.glb_mean = 0
def fit(self, train_rating):
self.glb_mean = np.mean(train_rating)
def predict(self, test_pair):
pred = np.ones(len(test_pair))
pred = pred*self.glb_mean
return pred
class user_mean(object):
def __init__(self, n_user):
self.n_user = n_user
self.glb_mean = 0.
self.user_mean = np.zeros(n_user)
def fit(self, train_pair, train_rating):
self.glb_mean = train_rating.mean()
for u in range(self.n_user):
ind_train = np.where(train_pair[:,0] == u)[0]
if len(ind_train) == 0:
self.user_mean[u] = self.glb_mean
else:
self.user_mean[u] = train_rating[ind_train].mean()
def predict(self, test_pair):
pred = np.ones(len(test_pair))*self.glb_mean
j = 0
for row in test_pair:
user_tmp, item_tmp = row[0], row[1]
pred[j] = self.user_mean[user_tmp]
j = j + 1
return pred
class item_mean(object):
def __init__(self, n_item):
self.n_item = n_item
self.glb_mean = 0.
self.item_mean = np.zeros(n_item)
def fit(self, train_pair, train_rating):
self.glb_mean = train_rating.mean()
for i in range(self.n_item):
ind_train = np.where(train_pair[:,1] == i)[0]
if len(ind_train) == 0:
self.item_mean[i] = self.glb_mean
else:
self.item_mean[i] = train_rating[ind_train].mean()
def predict(self, test_pair):
pred = np.ones(len(test_pair))*self.glb_mean
j = 0
for row in test_pair:
user_tmp, item_tmp = row[0], row[1]
pred[j] = self.item_mean[item_tmp]
j = j + 1
return pred
class LFM(object):
def __init__(self, n_user, n_item, lam=.001, K=10, iterNum=10, tol=1e-4, verbose=1):
self.P = np.random.randn(n_user, K)
self.Q = np.random.randn(n_item, K)
# self.index_item = []
# self.index_user = []
self.n_user = n_user
self.n_item = n_item
self.lam = lam
self.K = K
self.iterNum = iterNum
self.tol = tol
self.verbose = verbose
def fit(self, train_pair, train_rating):
diff, tol = 1., self.tol
n_user, n_item, n_obs = self.n_user, self.n_item, len(train_pair)
K, iterNum, lam = self.K, self.iterNum, self.lam
## store user/item index set
self.index_item = [np.where(train_pair[:,1] == i)[0] for i in range(n_item)]
self.index_user = [np.where(train_pair[:,0] == u)[0] for u in range(n_user)]
if self.verbose:
print('Fitting Reg-LFM: K: %d, lam: %.5f' %(K, lam))
for i in range(iterNum):
## item update
score_old = self.rmse(test_pair=train_pair, test_rating=train_rating)
for item_id in range(n_item):
index_item_tmp = self.index_item[item_id]
if len(index_item_tmp) == 0:
self.Q[item_id,:] = 0.
continue
sum_pu, sum_matrix = np.zeros((K)), np.zeros((K, K))
for record_ind in index_item_tmp:
## double-check
if item_id != train_pair[record_ind][1]:
                        raise ValueError('the item_id is wrong when updating Q!')
user_id, rating_tmp = train_pair[record_ind][0], train_rating[record_ind]
sum_matrix = sum_matrix + np.outer(self.P[user_id,:], self.P[user_id,:])
sum_pu = sum_pu + rating_tmp * self.P[user_id,:]
self.Q[item_id,:] = np.dot(np.linalg.inv(sum_matrix + lam*n_obs*np.identity(K)), sum_pu)
for user_id in range(n_user):
index_user_tmp = self.index_user[user_id]
if len(index_user_tmp) == 0:
self.P[user_id,:] = 0.
continue
sum_pu, sum_matrix = np.zeros((K)), np.zeros((K, K))
for record_ind in index_user_tmp:
## double-check
if user_id != train_pair[record_ind][0]:
                        raise ValueError('the user_id is wrong when updating P!')
item_id, rating_tmp = train_pair[record_ind][1], train_rating[record_ind]
sum_matrix = sum_matrix + np.outer(self.Q[item_id,:], self.Q[item_id,:])
sum_pu = sum_pu + rating_tmp * self.Q[item_id,:]
self.P[user_id,:] = np.dot(np.linalg.inv(sum_matrix + lam*n_obs*np.identity(K)), sum_pu)
# compute the new rmse score
score_new = self.rmse(test_pair=train_pair, test_rating=train_rating)
diff = abs(score_new - score_old) / score_old
if self.verbose:
print("Reg-LFM: ite: %d; diff: %.3f RMSE: %.3f" %(i, diff, score_new))
if(diff < tol):
break
def predict(self, test_pair):
# predict ratings for user-item pairs
pred_rating = [np.dot(self.P[line[0]], self.Q[line[1]]) for line in test_pair]
return np.array(pred_rating)
def rmse(self, test_pair, test_rating):
# report the rmse for the fitted `LFM`
pred_rating = self.predict(test_pair=test_pair)
return np.sqrt( np.mean( (pred_rating - test_rating)**2) )
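# Each row update inside `LFM.fit` solves a small ridge-regression system in closed form: $q_i = (\sum_u p_u p_u^\top + \lambda n_{obs} I)^{-1} \sum_u r_{ui}\, p_u$. A toy numpy check of that update rule (the factors and ratings below are made up):

```python
import numpy as np

P = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])  # latent factors of 3 users
r = np.array([2.0, 3.0, 5.0])                        # their ratings of one item
reg = 1e-3                                           # plays the role of lam * n_obs

# Closed-form ridge solution for the item's factor vector.
q = np.linalg.solve(P.T @ P + reg * np.eye(2), P.T @ r)
```

# With a tiny `reg`, the solution is close to the least-squares fit `q ≈ [2, 3]`, which reproduces the three ratings exactly.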
from sklearn.model_selection import KFold
import itertools
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
class LFM_CV(object):
def __init__(self, n_user, n_item, cv=5,
lams=[.000001,.0001,.001,.01],
Ks=[3,5,10,20],
iterNum=10, tol=1e-4):
# self.index_item = []
# self.index_user = []
self.n_user = n_user
self.n_item = n_item
self.cv = cv
self.lams = lams
self.Ks = Ks
self.iterNum = iterNum
self.tol = tol
self.best_model = {}
self.cv_result = {'K': [], 'lam': [], 'train_rmse': [], 'valid_rmse': []}
def grid_search(self, train_pair, train_rating):
## generate all comb of `K` and `lam`
kf = KFold(n_splits=self.cv, shuffle=True)
for (K,lam) in itertools.product(self.Ks, self.lams):
train_rmse_tmp, valid_rmse_tmp = 0., 0.
for train_index, valid_index in kf.split(train_pair):
# produce training/validation sets
train_pair_cv, train_rating_cv = train_pair[train_index], train_rating[train_index]
valid_pair_cv, valid_rating_cv = train_pair[valid_index], train_rating[valid_index]
# fit the model based on CV data
model_tmp = LFM(self.n_user, self.n_item, K=K, lam=lam, verbose=0)
model_tmp.fit(train_pair=train_pair_cv, train_rating=train_rating_cv)
train_rmse_tmp_cv = model_tmp.rmse(test_pair=train_pair_cv, test_rating=train_rating_cv)
valid_rmse_tmp_cv = model_tmp.rmse(test_pair=valid_pair_cv, test_rating=valid_rating_cv)
train_rmse_tmp = train_rmse_tmp + train_rmse_tmp_cv / self.cv
valid_rmse_tmp = valid_rmse_tmp + valid_rmse_tmp_cv / self.cv
print('%d-Fold CV for K: %d; lam: %.5f: train_rmse: %.3f, valid_rmse: %.3f'
%(self.cv, K, lam, train_rmse_tmp_cv, valid_rmse_tmp_cv))
self.cv_result['K'].append(K)
self.cv_result['lam'].append(lam)
self.cv_result['train_rmse'].append(train_rmse_tmp)
self.cv_result['valid_rmse'].append(valid_rmse_tmp)
self.cv_result = pd.DataFrame.from_dict(self.cv_result)
best_ind = self.cv_result['valid_rmse'].argmin()
self.best_model = self.cv_result.loc[best_ind]
def plot_grid(self, data_source='valid'):
sns.set_theme()
        if data_source == 'train':
            cv_pivot = self.cv_result.pivot(index="K", columns="lam", values="train_rmse")
        elif data_source == 'valid':
            cv_pivot = self.cv_result.pivot(index="K", columns="lam", values="valid_rmse")
else:
raise ValueError('data_source must be train or valid!')
sns.heatmap(cv_pivot, annot=True, fmt=".3f", linewidths=.5, cmap="YlGnBu")
plt.show()
# -
# ## load dataset
# +
import numpy as np
import pandas as pd
dtrain = pd.read_csv('./dataset/train.csv')
dtest = pd.read_csv('./dataset/test.csv')
## save real ratings for test set for evaluation.
test_rating = np.array(dtest['rating'])
## remove the ratings in the test set to simulate prediction
dtest = dtest.drop(columns='rating')
## convert string to user_id and item_id -> [user_id, item_id, rating]
# pre-process for training data
train_pair = dtrain[['user_id', 'movie_id']].values
train_rating = dtrain['rating'].values
# pre-process for testing set
test_pair = dtest[['user_id', 'movie_id']].values
n_user, n_item = max(train_pair[:,0].max(), test_pair[:,0].max())+1, max(train_pair[:,1].max(), test_pair[:,1].max())+1
# -
# ## Define and training the predictive models based on `class`
## baseline user mean methods
user_ave = user_mean(n_user=n_user)
user_ave.fit(train_pair=train_pair, train_rating=train_rating)
pred_user = user_ave.predict(test_pair)
print('RMSE for user_mean: %.3f' %rmse(test_rating, pred_user) )
## baseline item mean methods
item_ave = item_mean(n_item=n_item)
item_ave.fit(train_pair=train_pair, train_rating=train_rating)
pred_item = item_ave.predict(test_pair)
print('RMSE for item_mean: %.3f' %rmse(test_rating, pred_item) )
## baseline user-item method
pred_user_item = user_ave.predict(test_pair)
train_rating_res = train_rating - user_ave.predict(test_pair=train_pair)
item_ave = item_mean(n_item=n_item)
item_ave.fit(train_pair=train_pair, train_rating=train_rating_res)
pred_user_item = pred_user_item + item_ave.predict(test_pair=test_pair)
print('RMSE for user+item_mean: %.3f' %rmse(test_rating, pred_user_item))
train_rating_res = train_rating_res - item_ave.predict(test_pair=train_pair)
## CV based on `LFM_CV`
# fit LFM_CV by residual ratings
Ks, lams = [3, 5, 10, 15], 10**np.arange(-6, -2, .5)
shiing_cv = LFM_CV(n_user, n_item, cv=3, Ks=Ks, lams=lams)
shiing_cv.grid_search(train_pair, train_rating_res)
shiing_cv.plot_grid('valid')
shiing_cv.plot_grid('train')
## refit the best model, and make prediction
best_K, best_lam = int(shiing_cv.best_model['K']), shiing_cv.best_model['lam']
print('best K: %d, best lam: %.5f' %(best_K, best_lam))
shiing=LFM(n_user, n_item, K=best_K, lam=best_lam)
shiing.fit(train_pair, train_rating_res)
pred_user_item_LFM = pred_user_item + shiing.predict(test_pair)
print('RMSE for item + user mean + LFM: %.3f' %rmse(test_rating, pred_user_item_LFM))
# ## Smooth Recommender Systems
# ## Step 1: Generate the side info for users and items
# +
from sklearn.preprocessing import StandardScaler
dtrain['res_rating'] = train_rating_res
user_info = pd.DataFrame({'user_id': list(range(n_user))})
user_info = user_info.set_index('user_id')
user_info['mean'] = dtrain.groupby('user_id')['res_rating'].mean()
user_info['q1'] = dtrain.groupby('user_id')['res_rating'].quantile(.1)
user_info['q3'] = dtrain.groupby('user_id')['res_rating'].quantile(.3)
user_info['q5'] = dtrain.groupby('user_id')['res_rating'].quantile(.5)
user_info['q7'] = dtrain.groupby('user_id')['res_rating'].quantile(.7)
user_info['q9'] = dtrain.groupby('user_id')['res_rating'].quantile(.9)
## fill NAN as the column mean
user_info = user_info.fillna(user_info.mean())
user_scaler = StandardScaler()
user_info = user_scaler.fit_transform(user_info)
movie_info = pd.DataFrame({'movie_id': list(range(n_item))})
movie_info = movie_info.set_index('movie_id')
movie_info['mean'] = dtrain.groupby('movie_id')['res_rating'].mean()
movie_info['q1'] = dtrain.groupby('movie_id')['res_rating'].quantile(.1)
movie_info['q3'] = dtrain.groupby('movie_id')['res_rating'].quantile(.3)
movie_info['q5'] = dtrain.groupby('movie_id')['res_rating'].quantile(.5)
movie_info['q7'] = dtrain.groupby('movie_id')['res_rating'].quantile(.7)
movie_info['q9'] = dtrain.groupby('movie_id')['res_rating'].quantile(.9)
## fill NAN as the column mean
movie_info = movie_info.fillna(movie_info.mean())
movie_scaler = StandardScaler()
movie_info = movie_scaler.fit_transform(movie_info)
# -
print(user_info)
print(movie_info)
# ## Step 2: Weight matrix
from sklearn.metrics.pairwise import cosine_similarity
user_sim = cosine_similarity(user_info)
movie_sim = cosine_similarity(movie_info)
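# For intuition, `cosine_similarity` simply normalizes each row to unit length and takes inner products; a minimal numpy equivalent:

```python
import numpy as np

def cosine_sim(A):
    """Cosine similarity between the rows of A (assumes no zero rows)."""
    X = A / np.linalg.norm(A, axis=1, keepdims=True)
    return X @ X.T

S = cosine_sim(np.array([[1.0, 0.0], [0.0, 2.0], [3.0, 3.0]]))
```

# Orthogonal rows get similarity 0, each row has similarity 1 with itself, and the 45-degree row scores $\sqrt{2}/2$ against each axis.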
# ## Step 3: Compute the augmented dataset
top = 5
index_item = [np.where(train_pair[:,1] == i)[0] for i in range(n_item)]
index_user = [np.where(train_pair[:,0] == u)[0] for u in range(n_user)]
## augmented data
fake_pair, fake_rating = [], []
for u in range(n_user):
print('UserId: %d' %u)
### find the top closest users for the user u
top_user_tmp = user_sim[u].argsort()[-top:][::-1]
valid_user_ind = []
### extend the records' index for the users
for u_tmp in top_user_tmp:
valid_user_ind.extend(index_user[u_tmp])
### find observed items under top users
obs_movie_tmp = train_pair[valid_user_ind,1]
for i in range(n_item):
### find top items
top_movie_tmp = movie_sim[i].argsort()[-top:][::-1]
### find valid item: intersect with top-items and observed item
valid_movie_tmp = np.intersect1d(top_movie_tmp, obs_movie_tmp)
if len(valid_movie_tmp) == 0:
continue
valid_item_ind = []
for i_tmp in valid_movie_tmp:
### extend all rating index for valid item
valid_item_ind.extend(index_item[i_tmp])
### find index close to (u,i)
valid_ind = np.intersect1d(valid_user_ind, valid_item_ind)
if len(valid_ind) > 0:
fake_pair.append([u,i])
fake_rating.append(train_rating_res[valid_ind].mean())
fake_pair, fake_rating = np.array(fake_pair), np.array(fake_rating)
# ## Step 4: Fit a Latent Factor Model with CV
aug_pair, aug_rating_res = np.vstack((train_pair, fake_pair)), np.hstack((train_rating_res, fake_rating))
## CV based on `LFM_CV`
# fit LFM_CV by residual ratings
Ks, lams = [3, 5, 10, 15], 10**np.arange(-6, -2, .5)
shiing_cv = LFM_CV(n_user, n_item, cv=3, Ks=Ks, lams=lams)
shiing_cv.grid_search(aug_pair, aug_rating_res)
shiing_cv.plot_grid('valid')
shiing_cv.plot_grid('train')
## fit the LFM model with the augmented dataset
K, lam = 5, 0.00005
sSVD=LFM(n_user, n_item, K=K, lam=lam, iterNum=50)
sSVD.fit(aug_pair, aug_rating_res)
## Baseline + LFM
pred_sSVD = pred_user_item + sSVD.predict(test_pair)
print('RMSE for glb + user_mean + smooth LFM: %.3f' %rmse(test_rating, pred_sSVD))
# ### Note 1: the performance can be further improved by fair cross-validation over (`K`, `lam`) and even `top`
# ### Note 2: the computational cost of the augmentation step is relatively large, but it could be reduced by using `SGD`.
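# A hedged sketch of what such an SGD update could look like, as an alternative to the alternating closed-form updates in `LFM.fit` (the learning rate, regularization, and toy data below are illustrative, not tuned values):

```python
import numpy as np

def sgd_epoch(P, Q, pairs, ratings, lr=0.05, lam=1e-4):
    """One pass of SGD over observed (user, item, rating) triples."""
    for (u, i), r in zip(pairs, ratings):
        err = r - P[u] @ Q[i]
        gP = err * Q[i] - lam * P[u]   # gradient step for the user factor
        gQ = err * P[u] - lam * Q[i]   # gradient step for the item factor
        P[u] += lr * gP
        Q[i] += lr * gQ

rng = np.random.default_rng(0)
P, Q = 0.5 * rng.standard_normal((2, 3)), 0.5 * rng.standard_normal((2, 3))
pairs = [(0, 0), (0, 1), (1, 0), (1, 1)]
ratings = np.array([1.0, -1.0, -1.0, 1.0])

mse_init = np.mean([(r - P[u] @ Q[i]) ** 2 for (u, i), r in zip(pairs, ratings)])
for _ in range(1000):
    sgd_epoch(P, Q, pairs, ratings)
mse_final = np.mean([(r - P[u] @ Q[i]) ** 2 for (u, i), r in zip(pairs, ratings)])
```

# Each observed rating costs O(K) per update, so a pass over the augmented data is much cheaper than rebuilding and inverting the K x K systems.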
| notebook7a.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
from converters.data_converter import DataConverter
import numpy as np
import matplotlib.pyplot as plt
url = r'https://raw.githubusercontent.com/surfaceanalytics/nipals_algorithm/main/TiO2_reduction.vms'
converter = DataConverter()
converter.load(url)
data = converter.data
# The data is stored in a list of dictionaries.
# Get the first element in the list and have a look at the keys.
print(data[0].keys())
fig, ax = plt.subplots(figsize=(6, 4))
for spectrum in data:
x = spectrum['data']['x']
y = spectrum['data']['y']
ax.plot(x, y)
plt.xlabel(spectrum['x_units'])
#ax.set_ylim([-4, 4])
#ax.grid(True)
plt.show()
| nipals_algorithm.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3.7
# language: python
# name: python3
# ---
# # Watson Assistant Dialog Export (Readable Format)
# This notebook will export the dialog from a given skill into a readable .csv format. The output will resemble:
#
# | Title | Conditions | Output |
# |------|------|------|
# | Welcome | #welcome| Welcome to my bot! |
# |...|...|...|
# |Live Agent|#escalate|One moment while I transfer you...|
#
# Follow Steps 1 & 2 below then run all the cells. You can download the output in your project assets as `dialog_output.csv`
# ## Step 1: Insert your project token here ([Directions](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/token.html))
# ## Step 2: Enter your Watson Assistant credentials
wa_version = "2020-09-24"
wa_apikey = "123myapikey"
wa_url = "https://gateway.watsonplatform.net/assistant/api/"
wa_skill = "skillid"
# ### This section instantiates the Watson Assistant connection
# +
import json
# !pip install ibm-watson
import os
import pandas as panda
import ibm_watson
from ibm_cloud_sdk_core.authenticators import IAMAuthenticator
authenticator = IAMAuthenticator(wa_apikey)
assistant_service=ibm_watson.AssistantV1(
version = wa_version,
authenticator = authenticator
)
assistant_service.set_service_url(wa_url)
# -
# ### This section formats the output into a readable format and saves it as dialog_output.csv
# +
workspace_response = assistant_service.get_workspace(
workspace_id = wa_skill,
export=True
).get_result()
formattedOutputDict = []
for node in workspace_response['dialog_nodes']:
outputText = ""
if 'output' in node:
if 'generic' in node['output']:
for item in node['output']['generic']:
if item['response_type'] == "text":
for response in item['values']:
outputText += " " + response['text'] + " "
if item['response_type'] == "option":
outputText += str(item)
if item['response_type'] == "connect_to_agent":
outputText += " CONNECT TO AGENT "
if item['response_type'] == "search_skill":
outputText += " SEARCH SKILL "
if 'title' in node:
title = node['title']
else:
title = 'NONE'
if 'conditions' in node:
condition = node['conditions']
else:
condition = 'NONE'
if len(outputText) > 0:
thisNode = {'title': title, 'conditions': condition, 'outputText': outputText}
formattedOutputDict.append(thisNode)
df = panda.DataFrame(formattedOutputDict)
print(df)
project.save_data("dialog_output.csv", df.to_csv(index=False), overwrite=True)
print("Dialog downloaded ✅")
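# The `project.save_data` call above depends on the Watson Studio `project` object created by the token in Step 1. When running outside Watson Studio, a plain pandas export produces the same file locally (the sample row below is illustrative):

```python
import pandas as pd

# Same readable table, written straight to disk instead of project assets.
df = pd.DataFrame([
    {"title": "Welcome", "conditions": "#welcome", "outputText": "Welcome to my bot!"},
])
df.to_csv("dialog_output.csv", index=False)
```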
| WA Dialog Export (Readable Format).ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Toy model ch3 page 82
#
# +
import matplotlib.pyplot as plt
import numpy as np
from scipy import stats
import seaborn as sns
import pymc3 as pm
import pandas as pd
# %matplotlib inline
sns.set(font_scale=1.5)
# -
N_samples = [30, 30, 30]
G_samples = [18, 18, 18]
group_idx = np.repeat(np.arange(len(N_samples)), N_samples)
data = []
for i in range(0, len(N_samples)):
data.extend(np.repeat([1, 0], [G_samples[i], N_samples[i]-G_samples[i]]))
with pm.Model() as model_h:
alpha = pm.HalfCauchy('alpha', beta=10)
beta = pm.HalfCauchy('beta', beta=10)
theta = pm.Beta('theta', alpha, beta, shape=len(N_samples))
y = pm.Bernoulli('y', p=theta[group_idx], observed=data)
trace_j = pm.sample(2000, chains=4)
chain_h = trace_j[200:]
pm.traceplot(chain_h);
# +
N_samples = [30, 30, 30]
G_samples = [18, 18, 18]
group_idx = np.repeat(np.arange(len(N_samples)), N_samples)
data = []
for i in range(0, len(N_samples)):
data.extend(np.repeat([1, 0], [G_samples[i], N_samples[i]-G_samples[i]]))
with pm.Model() as model_h:
alpha = pm.HalfCauchy('alpha', beta=10)
beta = pm.HalfCauchy('beta', beta=10)
theta = pm.Beta('theta', alpha, beta, shape=len(N_samples))
y = pm.Bernoulli('y', p=theta[group_idx], observed=data)
trace_j = pm.sample(2000, chains=4)
chain_h1 = trace_j[200:]
N_samples = [30, 30, 30]
G_samples = [3, 3, 3]
group_idx = np.repeat(np.arange(len(N_samples)), N_samples)
data = []
for i in range(0, len(N_samples)):
data.extend(np.repeat([1, 0], [G_samples[i], N_samples[i]-G_samples[i]]))
with pm.Model() as model_h:
alpha = pm.HalfCauchy('alpha', beta=10)
beta = pm.HalfCauchy('beta', beta=10)
theta = pm.Beta('theta', alpha, beta, shape=len(N_samples))
y = pm.Bernoulli('y', p=theta[group_idx], observed=data)
trace_j = pm.sample(2000, chains=4)
chain_h2 = trace_j[200:]
N_samples = [30, 30, 30]
G_samples = [18, 3, 3]
group_idx = np.repeat(np.arange(len(N_samples)), N_samples)
data = []
for i in range(0, len(N_samples)):
data.extend(np.repeat([1, 0], [G_samples[i], N_samples[i]-G_samples[i]]))
with pm.Model() as model_h:
alpha = pm.HalfCauchy('alpha', beta=10)
beta = pm.HalfCauchy('beta', beta=10)
theta = pm.Beta('theta', alpha, beta, shape=len(N_samples))
y = pm.Bernoulli('y', p=theta[group_idx], observed=data)
trace_j = pm.sample(2000, chains=4)
chain_h3 = trace_j[200:]
# -
pm.summary(chain_h1)
pm.summary(chain_h2)
pm.summary(chain_h3)
# ### Look at the estimated priors
# +
fig, ax = plt.subplots(figsize=(10,5))
x = np.linspace(0, 1, 100)
for i in np.random.randint(0, len(chain_h), size=100):
pdf = stats.beta(chain_h['alpha'][i], chain_h['beta'][i]).pdf(x)
plt.plot(x, pdf, 'g', alpha=0.1)
dist = stats.beta(chain_h['alpha'].mean(), chain_h['beta'].mean())
pdf = dist.pdf(x)
mode = x[np.argmax(pdf)]
mean = dist.moment(1)
plt.plot(x, pdf, label='mode = {:.2f}\nmean = {:.2f}'.format(mode, mean), lw=3)
plt.legend()
plt.xlabel(r'$\theta_{prior}$', )
# -
plt.figure(figsize=(8,5))
pm.plot_posterior(chain_h,)
| BayesianAnalysisWithPython/Heirarchical models.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # LLN and CLT
# ## 1. Overview
# This section introduces the law of large numbers (LLN) and the central limit theorem (CLT), together with two extensions:
#
# - The delta method: a statistical technique for deriving asymptotic distributions by transforming one random variable into another; it can be applied to sample means, sample moment functions, and so on.
# - The multivariate case
#
# ## 2. Relationship
# The CLT refines the LLN: the LLN describes the conditions under which sample moments converge to population moments as the sample size grows, while the CLT describes the rate at which that convergence occurs.
#
# ## 3. LLN (Law of Large Numbers)
# ### The Classical LLN
# The strong law of large numbers:
# let $X_{1}, \ldots, X_{n}$ be i.i.d. random variables with common distribution $F$ and population mean $\mu$,
# $$\mu = \mathbb EX = \int xF(dx)$$
# and sample mean
# $$\overline{X}_{n} = \frac{1}{n}\sum_{i=1}^{n} X_i$$
# Kolmogorov's strong LLN states that if $\mathbb E|X|$ is finite, then
# $$\mathbb{P} \left\{\overline{X}_{n} \rightarrow \mu~~\text{as}~~ n\rightarrow \infty\right\}=1$$
# That is, the sample mean converges almost surely to the population mean.
# The weak LLN instead asserts convergence in probability to the population mean.
#
# http://students.brown.edu/seeing-theory
# ### Simulation
# Generate i.i.d. variables and plot how $\overline{X}_{n}$ changes as $n$ increases.
# Each dot represents a draw of the random variable $X_i$; the plot illustrates that $\overline{X}_n$ converges to the population mean $\mu$.
# +
import random
import numpy as np
from scipy.stats import t, beta, lognorm, expon, gamma, poisson
import matplotlib.pyplot as plt
import matplotlib
#from jupyterthemes import jtplot
#jtplot.style(theme='grade3')
n = 100
#matplotlib.style.use('ggplot')
# == Arbitrary collection of distributions == #
distributions = {"student's t with 10 degrees of freedom": t(10),
"β(2, 2)": beta(2, 2),
"lognormal LN(0, 1/2)": lognorm(0.5),
"γ(5, 1/2)": gamma(5, scale=2),
"poisson(4)": poisson(4),
                 "exponential with λ = 1": expon()}  # expon() has scale 1, i.e. rate λ = 1
# == Create a figure and some axes == #
num_plots = 3
fig, axes = plt.subplots(num_plots, 1, figsize=(20, 20))
# == Set some plotting parameters to improve layout == # legend position
bbox = (0., 1.02, 1., .102)
legend_args = {'ncol': 2,
'bbox_to_anchor': bbox,
'loc': 3,
'mode': 'expand'}
plt.subplots_adjust(hspace=0.5)
for ax in axes:
# == Choose a randomly selected distribution == #
    name = random.choice(list(distributions.keys()))  # keys() returns the dict's keys
    distribution = distributions.pop(name)  # pop() removes the entry and returns it
# == Generate n draws from the distribution == #
data = distribution.rvs(n)
# == Compute sample mean at each n == #
    sample_mean = np.empty(n)  # allocate an uninitialized array of the given shape
for i in range(n):
        sample_mean[i] = np.mean(data[:i+1])  # running mean of the first i+1 draws
# == Plot == #
ax.plot(list(range(n)), data, 'o', color='grey', alpha=0.5)
axlabel = '$\\bar X_n$ for $X_i \sim$' + name
    ax.plot(list(range(n)), sample_mean, 'g-', lw=3, alpha=0.6, label=axlabel)  # 'g-' sets the line style
m = distribution.mean()
ax.plot(list(range(n)), [m] * n, 'k--', lw=1.5, label='$\mu$')
    ax.vlines(list(range(n)), m, data, lw=0.2)  # vertical lines from the mean to each draw
    ax.legend(**legend_args)  # ** unpacks the dict as keyword arguments
plt.show()
# -
# ### When the LLN fails: Infinite Mean
# An example is the Cauchy distribution, one of whose key features is that neither its mean nor its variance exists. Its density is:
# $$f(x)=\frac{1}{\pi (1+x^2)} \ (x\in \mathbb R)$$
# The characteristic function of the Cauchy distribution is
# $$\phi(t)=\mathbb E e^{itX}=\int e^{itx}f(x)dx=e^{-|t|}$$
# The characteristic function of the sample mean is
# $$
# \begin{aligned}
# \mathbb E e^{it\bar X_n} & =\mathbb E \exp\{i\frac{t}{n}\sum_{j=1}^n X_j\} \\
# & = \mathbb E \prod_{j=1}^n\exp\{i\frac{t}{n}X_j\} \\
# & = \prod_{j=1}^n\mathbb E\exp\{i\frac{t}{n}X_j\}=[\phi(t/n)]^n
# \end{aligned}
# $$
# Since $[\phi(t/n)]^n = e^{-|t|} = \phi(t)$, the sample mean $\bar X_n$ has the same Cauchy distribution as a single observation, so it does not converge to a point.
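# The identity $[\phi(t/n)]^n = \phi(t)$ can be checked numerically, confirming that the sample mean of $n$ Cauchy draws has exactly the same distribution as a single draw:

```python
import numpy as np

phi = lambda t: np.exp(-np.abs(t))  # Cauchy characteristic function

# [phi(t/n)]^n equals phi(t) for every n, so averaging never helps.
for n in (2, 10, 100):
    assert np.isclose(phi(1.7 / n) ** n, phi(1.7))
```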
# +
from scipy.stats import cauchy
n = 100
distribution = cauchy()
fig, ax = plt.subplots(figsize=(10, 6))
data = distribution.rvs(n)
ax.plot(list(range(n)), data, linestyle='', marker='o', alpha=0.5)
ax.vlines(list(range(n)), 0, data, lw=0.2)
ax.set_title(f"{n} observations from the Cauchy distribution")
plt.show()
# +
n = 1000
distribution = cauchy()
fig, ax = plt.subplots(figsize=(10, 6))
data = distribution.rvs(n)
# == Compute sample mean at each n == #
sample_mean = np.empty(n)
for i in range(1, n):
sample_mean[i] = np.mean(data[:i])
# == Plot == #
ax.plot(list(range(n)), sample_mean, 'r-', lw=3, alpha=0.6,
label='$\\bar X_n$')
ax.plot(list(range(n)), [0] * n, 'k--', lw=0.5)
ax.legend()
plt.xlabel("n")
plt.show()
# -
# ## 4. CLT (Central Limit Theorem)
# ### Statement of Theorem
# Let $X_1,\ldots,X_n$ be i.i.d. random variables with mean $\mu$ and variance $\sigma ^2$. Then
# $$\sqrt{n} (\overline{X}_{n} - \mu )\xrightarrow{d} N(0,\sigma ^2)~~ as ~~ n \rightarrow \infty$$
# For any distribution with a finite second moment, the distribution of the scaled, centered sample mean approaches a Gaussian curve as the number of random variables increases.
#
# ### Intuition
# **Bernoulli distribution**
# Let $\mathbb{P} \{X_i =0\} = \mathbb{P} \{X_i =1\} = 0.5$
# and plot the probability mass function for $n=1,2,4,8$. This can be read as the distribution of the number of heads in $n$ coin tosses. The probability mass function gives the probability of each possible value of the random variable.
# +
from scipy.stats import binom
fig,axes = plt.subplots(2,2,figsize = (10,6))
plt.subplots_adjust(hspace=0.4)
axes = axes.flatten()  # flatten() returns the axes as a 1-D array
ns = [1,2,4,8]
dom = list(range(9))
for ax, n in zip(axes, ns):  # zip pairs each axis with a sample size
    b = binom(n, 0.5)  # binomial(n, 0.5) distribution
    ax.bar(dom, b.pmf(dom), alpha=0.6, align='center')  # bar chart of the pmf
    '''
    binom.pmf(k, *args, **kwds)
    Probability mass function at k of the given RV.
    '''
    ax.set(xlim=(-0.5, 8.5), ylim=(0, 0.55),
           xticks=list(range(9)), yticks=(0, 0.2, 0.4),  # tick the x-axis at 0-8
           title=f'$n = {n}$')  # f-string: embeds the Python expression in braces
plt.show()
# -
zip(axes, ns)
# ### Simulation 1
# Draw sequences from an arbitrary distribution $F$ (the simulations below use the exponential distribution $F(x)=1- e^{-\lambda x}$).
# Generate $Y_{n}=\sqrt{n}(\overline{X}_{n} -\mu)$, approximate its distribution with a histogram, and compare it with $N(0,\sigma ^2)$.
#
# +
from scipy.stats import norm
# == Set parameters == #
n = 250 # Choice of n
k = 100000 # Number of draws of Y_n
distribution = expon(scale=2)  # Exponential distribution, λ = 1/2 (scale = 1/λ)
μ, s = distribution.mean(), distribution.std()
# == Draw underlying RVs. Each row contains a draw of X_1,..,X_n == #
data = distribution.rvs((k, n))
# == Compute mean of each row, producing k draws of \bar X_n == #
sample_means = data.mean(axis=1)  # axis=1: mean of each row
# == Generate observations of Y_n == #
Y = np.sqrt(n) * (sample_means - μ)
# == Plot == #
# set up the figure
fig, ax = plt.subplots(figsize=(10, 6))
xmin, xmax = -3 * s, 3 * s
ax.set_xlim(xmin, xmax)
# plot the histogram
ax.hist(Y, bins=60, alpha=0.5, density=True)
# overlay the normal density
xgrid = np.linspace(xmin, xmax, 200)  # linspace(start, stop, N): evenly spaced grid for a smooth curve
ax.plot(xgrid, norm.pdf(xgrid, scale=s), 'k-', lw=2, label='$N(0, \sigma^2)$')
ax.legend()
plt.show()
# -
# ### Simulation 2
# Examine the distribution of $Y_n=\sqrt{n}(\overline{X}_{n}-\mu)$, with $\mu =0$, as $n$ increases.
# When $n=1$, $Y_1 = X_1$.
# When $n=2$, $Y_2 = (X_1 + X_2)/\sqrt{2}$.
# As $n$ increases, the distribution of $Y_n$ approaches a bell shape.
# The figure below plots the density of $Y_n$ for $n$ from 1 to 5, where $X_i \sim f$ and $f$ is a convex combination of three beta densities.
# +
from scipy.stats import gaussian_kde
from mpl_toolkits.mplot3d import Axes3D
from matplotlib.collections import PolyCollection
beta_dist = beta(2, 2)
def gen_x_draws(k):
"""
Returns a flat array containing k independent draws from the
distribution of X, the underlying random variable. This distribution is
itself a convex combination of three beta distributions.
"""
bdraws = beta_dist.rvs((3, k))
# == Transform rows, so each represents a different distribution == #
    bdraws[0, :] -= 0.5  # shift each row so the rows represent different distributions
bdraws[1, :] += 0.6
bdraws[2, :] -= 1.1
# == Set X[i] = bdraws[j, i], where j is a random draw from {0, 1, 2} == #
    js = np.random.randint(0, 3, size=k)  # randint draws random integers in [0, 3)
    X = bdraws[js, np.arange(k)]
    # == Rescale, so that the random variable is zero mean == #
m, sigma = X.mean(), X.std()
return (X - m) / sigma
nmax = 5
reps = 100000
ns = list(range(1, nmax + 1))
# == Form a matrix Z such that each column is reps independent draws of X == #
Z = np.empty((reps, nmax))
for i in range(nmax):
    Z[:, i] = gen_x_draws(reps)  # column i holds independent draws of X
# == Take cumulative sum across columns == #
S = Z.cumsum(axis=1)  # e.g. np.cumsum([[1, 2, 3], [4, 5, 6]], axis=1) -> [[1, 3, 6], [4, 9, 15]]
# == Divide the j-th column by sqrt(j) == #
Y = (1 / np.sqrt(ns)) * S
# == Plot == # https://matplotlib.org/gallery/index.html
fig = plt.figure(figsize = (10, 6))
ax = fig.add_subplot(projection='3d')
a, b = -3, 3
gs = 100
xs = np.linspace(a, b, gs)
# == Build verts == #
greys = np.linspace(0.3, 0.7, nmax)
verts = []
for n in ns:
density = gaussian_kde(Y[:, n-1])
ys = density(xs)
    verts.append(list(zip(xs, ys)))  # store the (x, y) density curve for this n
poly = PolyCollection(verts, facecolors=[str(g) for g in greys])  # verts: list of (x, y) polygons
poly.set_alpha(0.85)
ax.add_collection3d(poly, zs=ns, zdir='x')  # lift each 2-D polygon into 3-D; zdir chooses which axis acts as z
ax.set(xlim3d=(1, nmax), xticks=(ns), ylabel='$Y_n$', zlabel='$p(y_n)$',
xlabel=("n"), yticks=((-3, 0, 3)), ylim3d=(a, b),
zlim3d=(0, 0.4), zticks=((0.2, 0.4)))
ax.invert_xaxis()
ax.view_init(30, 45) # elevation 30 degrees, azimuth 45 degrees
plt.show()
# -
beta_dist = beta(2, 2)
k = 10 # small sample size so the arrays below are easy to inspect (k was undefined outside the function)
bdraws = beta_dist.rvs((3, k))
# == Transform rows, so each represents a different distribution == #
bdraws[0, :] -= 0.5 # shift each row by a different amount
bdraws[1, :] += 0.6
bdraws[2, :] -= 1.1
# == Set X[i] = bdraws[j, i], where j is a random draw from {0, 1, 2} == #
js = np.random.randint(0, 3, size=k) # one draw from {0, 1, 2} per column
X = bdraws[js, np.arange(k)]
print('bdraws:', bdraws)
print('js:', js)
print('X:', X)
# ### Exercise 1
# If $g: \mathbb{R} \rightarrow \mathbb{R}$ is differentiable and $g'(\mu) \neq 0$, then
# $$\sqrt{n} \{g(\overline{X}_{n}) - g(\mu)\} \xrightarrow{d} N(0,g'(\mu)^2 \sigma ^2 ) ~~as~~ n\rightarrow \infty$$
# Let the random variables $X_i$ be uniform on $[0, \pi/2]$ and let $g(x)=\sin(x)$. As in Simulation 1, plot the approximate distribution and compare it with the normal curve.
# +
from scipy.stats import uniform, norm # norm is used for the comparison curve below
# == Set parameters == #
n = 250
replications = 100000
distribution = uniform(loc=0, scale=(np.pi / 2))
μ, s = distribution.mean(), distribution.std()
g = np.sin
g_prime = np.cos
# == Generate obs of sqrt{n} (g(X_n) - g(μ)) == #
data = distribution.rvs((replications, n))
sample_means = data.mean(axis=1) # Compute mean of each row
error_obs = np.sqrt(n) * (g(sample_means) - g(μ))
# == Plot == #
asymptotic_sd = g_prime(μ) * s
fig, ax = plt.subplots(figsize=(8, 5))
xmin = -3 * g_prime(μ) * s
xmax = -xmin
ax.set_xlim(xmin, xmax)
ax.hist(error_obs, bins=60, alpha=0.5, density=True) # "normed" was renamed to "density"
xgrid = np.linspace(xmin, xmax, 200)
lb = r"$N(0, g'(\mu)^2 \sigma^2)$"
ax.plot(xgrid, norm.pdf(xgrid, scale=asymptotic_sd), 'k-', lw=2, label=lb)
ax.legend()
plt.show()
# -
# ### The Multivariate Case
# A random vector $\mathbf X$ is a sequence of $k$ random variables $(X_1, \ldots, X_k)$.
# Its expectation $\mathbb E [\mathbf X]$ is:
#
# $$\begin{split}\mathbb E [\mathbf X]
# :=
# \left(
# \begin{array}{c}
# \mathbb E [X_1] \\
# \mathbb E [X_2] \\
# \vdots \\
# \mathbb E [X_k]
# \end{array}
# \right)
# =
# \left(
# \begin{array}{c}
# \mu_1 \\
# \mu_2\\
# \vdots \\
# \mu_k
# \end{array}
# \right)
# =: \boldsymbol \mu\end{split}$$
#
# The variance-covariance matrix of $\mathbf X$ is denoted $\Sigma$:
#
# $$\begin{split}Var[\mathbf X]
# =
# \left(
# \begin{array}{ccc}
# \mathbb E [(X_1 - \mu_1)(X_1 - \mu_1)]
# & \cdots & \mathbb E [(X_1 - \mu_1)(X_k - \mu_k)] \\
# \mathbb E [(X_2 - \mu_2)(X_1 - \mu_1)]
# & \cdots & \mathbb E [(X_2 - \mu_2)(X_k - \mu_k)] \\
# \vdots & \vdots & \vdots \\
# \mathbb E [(X_k - \mu_k)(X_1 - \mu_1)]
# & \cdots & \mathbb E [(X_k - \mu_k)(X_k - \mu_k)] \\
# \end{array}
# \right)\end{split}$$
#
# Let
# $$\bar{\mathbf X}_n := \frac{1}{n} \sum_{i=1}^n \mathbf X_i$$
#
# **LLN:** $$\mathbb P \left\{ \bar{\mathbf X}_n \to \boldsymbol \mu \text{ as } n \to \infty \right\} = 1$$
# where $\bar{\mathbf X}_n \to \boldsymbol \mu$ means $\| \bar{\mathbf X}_n - \boldsymbol \mu \| \to 0$.
# If $\Sigma$ is finite, then the **CLT** holds:
# $$\sqrt{n} ( \bar{\mathbf X}_n - \boldsymbol \mu ) \stackrel { d } {\to} N(\mathbf 0, \Sigma)
# \quad \text{as} \quad
# n \to \infty$$
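As a quick numerical sketch of the vector LLN stated above (not part of the original notebook; the bivariate distribution here is made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# A made-up bivariate random vector: X = (W, W + U), W ~ N(1, 1), U ~ N(2, 1)
n = 200_000
W = rng.normal(1.0, 1.0, n)
U = rng.normal(2.0, 1.0, n)
X = np.vstack((W, W + U))      # shape (2, n): n draws of the vector

mu = np.array([1.0, 3.0])      # the true mean vector E[X]
sample_mean = X.mean(axis=1)

# The LLN says each component of the sample mean converges to mu
print(np.abs(sample_mean - mu).max())
```

The same check works for any distribution with a finite mean; only the true `mu` changes.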
# ### Exercise 2: the multivariate CLT
# Let $\mathbf{X}_1,...,\mathbf{X}_n$ be a sequence of i.i.d. random vectors.
# Let $\boldsymbol \mu := \mathbb E [\mathbf X_i]$ and let $\Sigma$ be the variance-covariance matrix of $\mathbf X_i$.
# Then $\sqrt{n} ( \bar{\mathbf X}_n - \boldsymbol \mu ) \stackrel { d } {\to} N(\mathbf 0, \Sigma)$ holds.
# To standardize the right-hand side, note first that for a random vector $\mathbf X$ in $\mathbb{R}^{k}$ and a constant $k\times k$ matrix $\mathbf{A}$,
# $$Var[\mathbf{AX}] = \mathbf{A} Var[\mathbf{X}]\mathbf{A}'$$
# By the continuous mapping theorem, if $\mathbf Z_n \xrightarrow{d} \mathbf Z$, then:
# $$\mathbf A \mathbf Z_n
# \stackrel{d}{\to} \mathbf A \mathbf Z$$
# Suppose $\mathbf{S}$ is a $k\times k$ symmetric positive definite matrix. Then there exists a symmetric positive definite matrix $\mathbf{Q}$ satisfying
# $$\mathbf Q \mathbf S\mathbf Q' = \mathbf I$$
# where $\mathbf I$ is the $k\times k$ identity matrix.
# Combining the above, we obtain:
# $$\mathbf Z_n := \sqrt{n} \mathbf Q ( \bar{\mathbf X}_n - \boldsymbol \mu )
# \stackrel{d}{\to}
# \mathbf Z \sim N(\mathbf 0, \mathbf I)$$
# Applying the continuous mapping theorem once more gives:
# $$\| \mathbf Z_n \|^2
# \stackrel{d}{\to}
# \| \mathbf Z \|^2 \tag{1}$$
# and finally:
# $$n \| \mathbf Q ( \bar{\mathbf X}_n - \boldsymbol \mu ) \|^2
# \stackrel{d}{\to}
# \chi^2(k) \tag{2}$$
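The existence of such a $\mathbf Q$ can be checked numerically: $\mathbf Q = \mathbf S^{-1/2}$ computed with `scipy.linalg.sqrtm` satisfies $\mathbf Q \mathbf S \mathbf Q' = \mathbf I$ for any symmetric positive definite $\mathbf S$. A minimal sketch on an arbitrary example matrix:

```python
import numpy as np
from scipy.linalg import inv, sqrtm

# An arbitrary symmetric positive definite matrix
S = np.array([[1.0, 0.5],
              [0.5, 2.0]])

Q = inv(sqrtm(S))  # Q = S^{-1/2}, itself symmetric positive definite

# Q S Q' should be the identity, up to floating-point error
print(np.round(Q @ S @ Q.T, 10))
```

This is exactly the construction used in the simulation below, with $\mathbf S = \Sigma$.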
#
# We verify the statement above by simulation, where
# \begin{split}\mathbf X_i
# :=
# \left(
# \begin{array}{c}
# W_i \\
# U_i + W_i
# \end{array}
# \right)\end{split}
#
# $W_i$ is uniform on $[-1,1]$, $U_i$ is uniform on $[-2,2]$, and $W_i$, $U_i$ are independent.
#
#
# First we need to show that $$\sqrt{n} \mathbf Q ( \bar{\mathbf X}_n - \boldsymbol \mu )
# \stackrel{d}{\to}
# N(\mathbf 0, \mathbf I)$$
# Let $$\mathbf Y_n := \sqrt{n} ( \bar{\mathbf X}_n - \boldsymbol \mu )
# \quad \text{and} \quad
# \mathbf Y \sim N(\mathbf 0, \Sigma)$$
# By the continuous mapping theorem and the multivariate CLT:
# $$\mathbf Q \mathbf Y_n
# \stackrel{d}{\to}
# \mathbf Q \mathbf Y$$
# $\mathbf Q \mathbf Y$ has mean zero, and
# $$\mathrm{Var}[\mathbf Q \mathbf Y]
# = \mathbf Q \mathrm{Var}[\mathbf Y] \mathbf Q'
# = \mathbf Q \Sigma \mathbf Q'
# = \mathbf I$$
# $$\Rightarrow\mathbf Q \mathbf Y_n \stackrel{d}{\to} \mathbf Q \mathbf Y \sim N(\mathbf 0, \mathbf I)$$
# Applying the transformation in (1) then verifies (2).
# +
from scipy.stats import chi2
from scipy.linalg import inv, sqrtm
# == Set parameters == #
n = 250
replications = 50000
dw = uniform(loc=-1, scale=2) # Uniform(-1, 1)
du = uniform(loc=-2, scale=4) # Uniform(-2, 2)
sw, su = dw.std(), du.std()
vw, vu = sw**2, su**2 # ** is exponentiation, so these are the variances
Σ = ((vw, vw), (vw, vw + vu))
Σ = np.array(Σ) # the variance-covariance matrix of X_i
# == Compute Σ^{-1/2} == #
Q = inv(sqrtm(Σ)) # inv: matrix inverse, sqrtm: matrix square root
# == Generate observations of the normalized sample mean == Y_n#
error_obs = np.empty((2, replications))
for i in range(replications):
# == Generate one sequence of bivariate shocks == #
X = np.empty((2, n))
W = dw.rvs(n)
U = du.rvs(n)
# == Construct the n observations of the random vector == #
X[0, :] = W
X[1, :] = W + U
# == Construct the i-th observation of Y_n == #
error_obs[:, i] = np.sqrt(n) * X.mean(axis=1)
# == Premultiply by Q and then take the squared norm == #
temp = Q @ error_obs # @ is matrix multiplication
chisq_obs = np.sum(temp**2, axis=0)
# == Plot == #
fig, ax = plt.subplots(figsize=(10, 6))
xmax = 8
ax.set_xlim(0, xmax)
xgrid = np.linspace(0, xmax, 200)
lb = "Chi-squared with 2 degrees of freedom"
ax.plot(xgrid, chi2.pdf(xgrid, 2), 'k-', lw=2, label=lb)
ax.legend()
ax.hist(chisq_obs, bins=50, density=True) # "normed" was renamed to "density"
plt.show()
| 5.3 LLN_CLT.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import pandas as pd
from sklearn.impute import SimpleImputer # Imputer was removed from sklearn.preprocessing in scikit-learn 0.22
import numpy as np
# ## We will use the Pima Indians Diabetes Dataset
# download the data here: https://www.kaggle.com/uciml/pima-indians-diabetes-database/downloads/pima-indians-diabetes-database.zip/1
data = pd.read_csv('diabetes.csv')
# ## value of 0 means missing values for the following columns
# - Plasma glucose concentration
# - Diastolic blood pressure
# - Triceps skinfold thickness
# - 2-Hour serum insulin
# - Body mass index
data.describe()
# mark zero values as missing or NaN
data[['Glucose','BloodPressure','SkinThickness','Insulin','BMI']] = data[['Glucose','BloodPressure','SkinThickness','Insulin','BMI']].replace(0, np.nan)
# count the number of NaN values in each column
print(data.isnull().sum())
# ## impute missing values with the mean of the column
# The scikit-learn library provides the SimpleImputer pre-processing class (the successor of the removed Imputer class) that can be used to replace missing values.
#
# It is a flexible class that allows you to specify the value to replace (it can be something other than NaN) and the strategy used to replace it (such as mean, median, or most frequent). The SimpleImputer class operates directly on the NumPy array instead of the DataFrame.
#
# The example below uses the SimpleImputer class to replace missing values with the mean of each column, then prints the number of NaN values in the transformed matrix.
#
# source: https://machinelearningmastery.com/handle-missing-data-python/
# fill missing values with mean column values
values = data.values
imputer = SimpleImputer() # defaults to strategy="mean"
transformed_values = imputer.fit_transform(values)
# count the number of NaN values in each column
print(np.isnan(transformed_values).sum())
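Since the text mentions mean, median, and mode as possible strategies, here is a small sketch of the median variant with `SimpleImputer(strategy="median")` on a toy matrix (not the diabetes data):

```python
import numpy as np
from sklearn.impute import SimpleImputer

# Toy matrix with one missing entry per column
values = np.array([[1.0, np.nan],
                   [3.0, 4.0],
                   [np.nan, 8.0]])

# strategy can be "mean", "median", "most_frequent" or "constant"
imputer = SimpleImputer(strategy="median")
filled = imputer.fit_transform(values)

# The NaNs are replaced by the column medians of the observed values (2.0 and 6.0)
print(filled)
```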
| Section 1/1.3.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # ENGR 202 Solver
# importing the needed modules
import cmath as c
import math as m
# ## Solve for $X_C$
# +
# Where f is frequency, cap is the value of the capacitor, and xcap is the capacitive reactance
f = 5*10**3
cap = 50*(10**-9)
xcap = 1/-(2*m.pi*f*cap)
print("Xc =",xcap)
# -
# ## Solve for $X_L$
# +
# Where f is the frequency, l is the inductor value, and xind is the inductive reactance
f = 5*10**3
l = 200*(10**-3)
xind = 2*m.pi*f*l
print("XL =",xind)
# -
# ## Define A complex number in rectangular form
# +
# All values except r pulled from previous cells
# Solutions are given in Rectangular form
# Negative value for Xc already accounted for
r = 100 # Resistor value
x_c = r + 1j*(xcap)
print("For capacitor -",x_c)
x_i = r + 1j*(xind)
print("For inductor -",x_i)
# -
# ## Convert from Rectangular to Polar
# +
# Answers are given in magnitude and radians. Convert if degrees are necessary.
y = c.polar(x_c)
print("Magnitude, radians",y)
y = c.polar(x_i)
print("Magnitude, radians",y)
# -
# ## Convert from Radians to Degrees
# The above answers will be in radians, use the following code to convert to degrees.
# +
#substitute x_c and x_i as needed
z=c.phase(x_c)
m.degrees(z)
print("Angle in degrees =",m.degrees(z))
# -
# ## Simple Circuit in Series
# +
# For following three cells, if reactance is already given, replace "xind" or"xcap" with corresponding j value
# Resistor value is overwritten from previous cells when changed here
# Not all simple circuits will have all three components. Modify as needed.
# Original formula - series_comb = r + ind + cap
r = 100 # Resistor Value
ind = 0 + xind*1j
cap = 0 + xcap*1j # xcap is already negative, so this gives the correct -j reactance
series_comb = r + ind + cap
print("Series Rectangular Form =",series_comb)
# -
# ## Simple Parallel Circuit - Product/Sum
# +
# Product sum rule works only with 2 components
# Original Formula - prod_sum = res*cap/(res + cap)
ind = 0 + xind*1j
cap = 0 + xcap*1j
res = 100
prod_sum = res*cap/(res + cap)
print("Product/sum Rectangular Form =",prod_sum)
# -
# ## Simple Parallel Circuit
# +
# Use as many components as necessary
# Original formula - parallel_comb = 1/(1/res + 1/ind + 1/cap)
ind = 0 + xind*1j
cap = 0 + xcap*1j
res = 100
parallel_comb = 1/(1/res + 1/ind + 1/cap)
print("Parallel Rectangular Form =",parallel_comb)
# -
# ## Current Solver
# +
# Make sure to use the parallel cell that IS NOT product/sum
# Copy and paste cur_ind or cur_cap sections as necessary to account for all components. Some code modification/addition may be required.
# This cell useful as is for one of each component.
# Once previous cells are complete, this will populate automatically EXCEPT for E
E = 10 #Equivalent Voltage
Z_rect = parallel_comb
Z_polar = c.polar(Z_rect)
print("Z Polar = ",Z_polar,"\n")
print(" Z Rectangular =",parallel_comb,"\n")
cur_source = E/Z_rect
cur_source_p = c.polar(cur_source)
z=c.phase(cur_source)
m.degrees(z)
print("Source Current =",cur_source,"\n","Source Current, Polar =",cur_source_p,"\n","Angle = ",m.degrees(z),"\n")
cur_cap = cur_source*Z_rect/cap
cur_cap_p = c.polar(cur_cap)
z=c.phase(cur_cap)
m.degrees(z)
print("Capacitor Current =",cur_cap,"\n","Capacitor Current, Polar =",cur_cap_p,"\n","Angle = ",m.degrees(z),"\n")
cur_ind = cur_source*Z_rect/ind
cur_ind_p = c.polar(cur_ind)
z=c.phase(cur_ind)
m.degrees(z)
print("inductor Current =",cur_ind,"\n","Inductor Current, Polar =",cur_ind_p,"\n","Angle = ",m.degrees(z),"\n")
cur_res = cur_source*Z_rect/res
cur_res_p = c.polar(cur_res)
z=c.phase(cur_res)
m.degrees(z)
print("Resistor Current =",cur_res,"\n","Resistor Current, Polar =",cur_res_p,"\n","Angle = ",m.degrees(z),"\n")
# -
# ## Series-Parallel Circuits
# +
# Organization cell for component values
# Inductors
z1 = 200*1j
# Resistors
z2 = 300
z3 = 270
#Capacitors
z4 = -1500*1j
# +
# This cell is ambiguous with just z values to make it easy to modify. Keep track of z values.
# Original Form of equation - parallel_react = 1/(1/z1+1/z2+1/(z3+z4))
parallel_react = 1/(1/z1+1/z2+1/(z3+z4))
parallel_polar = c.polar(parallel_react)
print("Z Rectangular =",parallel_react,"\n","Z Polar =",parallel_polar)
# -
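As a usage sketch (not part of the original worksheet), the parallel bank above can be placed in series with a hypothetical resistor and driven by an assumed source voltage to get the total current, reusing the same `cmath`/`math` pattern:

```python
import cmath as c
import math as m

# Component values from the cells above, recomputed so the sketch is standalone
z1 = 200j      # inductor
z2 = 300       # resistor
z3 = 270       # resistor
z4 = -1500j    # capacitor
parallel_react = 1/(1/z1 + 1/z2 + 1/(z3 + z4))

z_series = 100  # hypothetical series resistor (an assumption, not in the worksheet)
E = 10          # assumed source voltage

z_total = z_series + parallel_react
current = E/z_total
mag, rad = c.polar(current)
print("I =", mag, "A at", m.degrees(rad), "degrees")
```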
| Applications/.ipynb_checkpoints/ENGR 202 Solver-checkpoint.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
# ## Bring in Alignment for mapping
#
# This program will map TFBS using the Biopython's motif package.
#
# **Inputs**:
# 1. before alignment (fasta)
# 2. after alignment (fasta)
# 3. TFBS Position Frequency Matrix.
#
# **Outputs**:
# 1. `.csv` file that outputs found TFBSs at each position, if any, in alignment (position, score, species, raw_position, strand motif_found)
# 2. `.csv` file that outputs only TFBS found.
#
# **To Do**:
#
# - [x] **Fix bug where species are labelled wrong only in reverse strand (how is this possible)?**
# - I arbitrarily set the length. An artifact when I was looking at one sequence at a time.
# - [ ] **Why are there NaNs when I pivot the table? Weird.**
# - [x] Make Vector that attaches
# - [ ] Attach real species name
# - [ ] Loop through all files in directory
# - [ ] Write file name in last column
# - [ ] Append to a file
# - [ ] Have all of the important checks read out into a file
from Bio import motifs
from Bio import SeqIO
from Bio.Seq import Seq
from Bio.Seq import MutableSeq
from Bio.SeqRecord import SeqRecord
from Bio.Alphabet import IUPAC, generic_dna, generic_protein
from collections import defaultdict
import re
import pandas as pd
import numpy as np
import os, sys
# +
#####################
## sys Inputs - to do
#####################
## So I can run in shell and loop through sequences AND motifs to get a giant dataset
## Need to get the alignment to raw sequence
## read in alignment as a list of sequences
#alignment = list(SeqIO.parse(sys.argv[1], "fasta"))
#motif = motifs.read(open(sys.argv[2],"pfm")
alignment = list(SeqIO.parse("../data/fasta/output_ludwig_eve-striped-2.fa", "fasta"))
motif = motifs.read(open("../data/PWM/transpose_fm/bcd_FlyReg.fm"),"pfm")
raw_sequences = []
for record in alignment:
raw_sequences.append(SeqRecord(record.seq.ungap("-"), id = record.id))
# +
##################
## Input 1 - Alignment Input
##################
## Sort alphabetically
alignment = [f for f in sorted(alignment, key=lambda x : x.id)]
## Make all ids lowercase
for record in alignment:
record.id = record.id.lower()
## Check
print("Found %i records in alignment file" % len(alignment))
## Sequence Length should be the same for all alignment sequences
for seq in alignment:
print(len(seq))
# +
#####################
## Input 2 - Raw Sequences Input
#####################
print("Found %i records in raw sequence file" % len(raw_sequences))
## make all IUPAC.IUPACUnambiguousDNA()
raw_sequences_2 = []
for seq in raw_sequences:
raw_sequences_2.append(Seq(str(seq.seq), IUPAC.IUPACUnambiguousDNA()))
# -
#####################
## Input 3 - Motif Input
#####################
## motif.weblogo("mymotif.png")
print(motif.counts)
pwm = motif.counts.normalize(pseudocounts=0.0) # Doesn't change from pwm
pssm = pwm.log_odds()
print(pssm) # Why do I need log odds exactly?
motif_length = len(motif) #for later retrival of nucleotide sequence
# +
######################
## Searching for Motifs in Sequences
######################
## Returns a list of arrays with a score for each position
## This give the score for each position
## If you print the length you get the length of the sequence minus TFBS length.
## Forward stand
pssm_list = [ ]
for seq in raw_sequences_2:
pssm_list.append(pssm.calculate(seq))
# +
##########################
## Automatic Calculation of threshold
##########################
## Ideal to find something that automatically calculates, as
## opposed to having human choosing.
## Approximate calculation of appropriate thresholds for motif finding
## Patser Threshold
## It selects such a threshold that the log(fpr)=-ic(M)
## note: the actual patser software uses natural logarithms instead of log_2, so the numbers
## are not directly comparable.
distribution = pssm.distribution(background=motif.background, precision=10**4)
patser_threshold = distribution.threshold_patser() #for use later
print("Patser Threshold is %5.3f" % patser_threshold) # Calculates Paster threshold.
# +
###################################
## Searching for motif in all raw_sequences
#################################
raw_id = []
for seq in raw_sequences:
raw_id.append(seq.id)
record_length = []
for record in raw_sequences_2:
record_length.append(len(record))
position_list = []
for i in range(len(raw_sequences_2)): # one pass per species, instead of a hard-coded count
for position, score in pssm.search(raw_sequences_2[i], threshold = patser_threshold):
positions = {'species': raw_id[i], 'score':score, 'position':position, 'seq_len': record_length[i] }
position_list.append(positions)
position_DF = pd.DataFrame(position_list)
# +
#############################
## Add strand and pos position information as columns to position_DF
#############################
position_list_pos = []
for i, x in enumerate(position_DF['position']):
if x < 0:
position_list_pos.append(position_DF.loc[position_DF.index[i], 'seq_len'] + x)
else:
position_list_pos.append(x)
## append to position_DF
position_DF['raw_position'] = position_list_pos
## print position_DF['raw_position']
## strand Column
strand = []
for x in position_DF['position']:
if x < 0:
strand.append("negative")
else:
strand.append("positive")
## append to position_DF
position_DF['strand'] = strand
## motif_found column
## First turn into a list of strings
raw_sequences_2_list = []
for seq in raw_sequences_2:
raw_sequences_2_list.append(str(seq))
## Now get all motifs found in sequences
# motif_found = []
# for x in position_DF['position']:
# motif_found.append(raw_sequences_2_list[i][x:x + motif_length])
## append to position_DF
# position_DF['motif_found'] = motif_found
print(position_DF)
## Check
## len(motif_found)
## print(motif_found)
## print(position_DF)
# +
##################
## get alignment position
#################
## Need to map to the sequence alignment position
remap_list = []
nuc_list = ['A', 'a', 'G', 'g', 'C', 'c', 'T', 't', 'N', 'n']
for i in range(len(alignment)): # loop over every record instead of an arbitrary hard-coded count
counter = 0
for xInd, x in enumerate(alignment[i].seq):
if x in nuc_list:
remaps = {'raw_position': counter, 'align_position':xInd, 'species':alignment[i].id}
counter += 1
remap_list.append(remaps)
remap_DF = pd.DataFrame(remap_list)
## Check
## print_full(remap_DF)
# +
## Merge both datasets
## Check first
## print(position_DF.shape)
## print(remap_DF.shape)
## Merge - all sites
TFBS_map_DF_all = pd.merge(position_DF, remap_DF, on=['species', 'raw_position'], how='outer')
## Sort
TFBS_map_DF_all = TFBS_map_DF_all.sort_values(by=['species','align_position'], ascending=[True, True])
## Check
## print_full(TFBS_map_DF_all)
## print(TFBS_map_DF_all.shape)
# Merge - only signal
TFBS_map_DF_only_signal = pd.merge(position_DF, remap_DF, on=['species', 'raw_position'], how='inner')
TFBS_map_DF_only_signal = TFBS_map_DF_only_signal.sort_values(by=['species','align_position'], ascending=[True, True])
## To quickly check if species share similar TFBS positions
## print_full(TFBS_map_DF_only_signal.sort_values(by=['raw_position'], ascending=[True]))
## Check
## print_full(TFBS_map_DF_only_signal)
## print(TFBS_map_DF_only_signal.shape)
print(TFBS_map_DF_all)
## Write out Files
## TFBS_map_DF_all.to_csv('../data/outputs/TFBS_map_DF_all_bicoid_test.csv', sep='\t', na_rep="NA")
# +
###############################
## Print binary info for each species
################################
# Create new column, 1 if TFBS is present, 0 if absent
TFBS_map_DF_all['presence'] = 0 # Initiate the column first.
TFBS_map_DF_all['presence'] = TFBS_map_DF_all['score'].notnull() # a row has a TFBS if it carries a score after the outer merge
TFBS_map_DF_all['presence'] = TFBS_map_DF_all['presence'].astype(int)
## Check
## print_full(TFBS_map_DF_all)
## Create new dataframe
## Check First
## list(TFBS_map_DF_all.columns)
TFBS_map_DF_binary = TFBS_map_DF_all[['species', 'presence', 'align_position', 'strand']].copy()
## Subset on strand
TFBS_map_DF_binary_positive = TFBS_map_DF_binary['strand'] == "positive"
TFBS_map_DF_binary_negative = TFBS_map_DF_binary['strand'] == "negative"
## Check
print(TFBS_map_DF_binary)
## Now long to wide
## [ ] NaNs are introduced here and not sure why and it is super annoying.
## - Maybe it has something to do with the negative and positive strands. These should be subsetted first.
## There should be two presense and absence...or maybe a 2 presented? This get back to the range issue.
## We should have the 1s maybe represent TFBS range, not just starting position.
TFBS_map_DF_binary = TFBS_map_DF_binary.pivot_table(index='species', columns='align_position', values='presence')
print(TFBS_map_DF_binary.iloc[:,6:40])
# +
####################
## Attach input files name as a column
#####################
## Ideally I would attach the file name of the 1. raw sequence and 2. the motif being tested
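A minimal sketch of what that could look like — the toy DataFrame and the file names below are stand-ins for `TFBS_map_DF_all` and the real inputs, not the actual pipeline:

```python
import pandas as pd

# Stand-in for the TFBS_map_DF_all DataFrame built above
df = pd.DataFrame({'species': ['a', 'b'], 'score': [3.1, 4.2]})

# Hypothetical input file names (in the planned loop these would come from os.listdir / sys.argv)
sequence_file = "output_ludwig_eve-striped-2.fa"
motif_file = "bcd_FlyReg.fm"
df['sequence_file'] = sequence_file
df['motif_file'] = motif_file

print(df.columns.tolist())
```

Appending each run to one growing CSV could then be done with `df.to_csv(path, mode='a', header=False)`.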
| py/.ipynb_checkpoints/TFBSmapping_multiple_files-checkpoint.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import numpy as np
import numpy.random as rd
import statistics
players = list(range(100))
points = [0 for i in range(100)]
print(players)
print(points)
# +
marginabovesecond = []
marginabovethird = []
for sim in range(10000): # Number of simulation runs
points = [0 for _ in range(100)] # Reset everyone's points to zero after each World Cup simulation
for game in range(6): # The Fortnite World Cup consists of six games
players = list(range(100)) # 100 players play in each game
storm = [-1 for _ in range(5)]
# We'll have the storm/fall damage eliminate up to five players per game. No player gets the elimination credit.
for elim in range(99): # 99 players get eliminated each game
eliminated = rd.choice(players)
players.remove(eliminated)
elimcred = rd.choice(list(range(len(players)))+storm)
# Grants the elimination credit to a random player, or to the storm/fall damage (five times per game)
if elimcred == -1:
storm.remove(elimcred)
else:
points[players[elimcred]] += 1 # Elimination points
if 25 > len(players) > 14: # Placement points
points[eliminated] += 3
if 15 > len(players) > 4:
points[eliminated] += 5
if 5 > len(players) > 1:
points[eliminated] += 7
if len(players) == 1:
points[eliminated] += 7
points[players[0]] += 10
leaderboard = sorted(points) # Highest scores after each World Cup are at the end.
marginabovesecond.append(leaderboard[-1] - leaderboard[-2])
marginabovethird.append(leaderboard[-1] - leaderboard[-3])
marginabovesecond_pvalue = statistics.mean([1 if i >= 26 else 0 for i in marginabovesecond])
marginabovethird_pvalue = statistics.mean([1 if i >= 27 else 0 for i in marginabovethird])
print("P value for margin above second:", marginabovesecond_pvalue)
print("P value for margin above third:", marginabovethird_pvalue)
# -
| Fortnite Monte Carlo Simulation.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Visualize the flow in and out of square cells as example of a mesoscopic view
#
# Use the density map and trajectory visualization in "PedestrianDynamics-MacroscopicVsMicroscopic.pdf" as basis. The trajectory visualization contains multiple cells arranged as 9 rows and 7 columns. Therefore, use a 9×9 array, use the flow as cell value, and use `matplotlib.matshow()` to visualize this array.
# +
import numpy as np
import matplotlib.pyplot as plt
# +
def use_custom_plot_settings(font_weight="normal"):
font_size_extra_small = 12
font_size_small = 16
font_size_medium = 18
font_size_big = 20
plt.style.use("default")
plt.rc("font", size=font_size_small, weight=font_weight)
plt.rc("axes", titlesize=font_size_big, titleweight=font_weight)
plt.rc("axes", labelsize=font_size_medium, labelweight=font_weight)
plt.rc("xtick", labelsize=font_size_small)
plt.rc("ytick", labelsize=font_size_small)
plt.rc("legend", fontsize=font_size_extra_small)
plt.rc("figure", titlesize=font_size_big, titleweight=font_weight)
def use_default_plot_settings():
plt.rcdefaults()
use_custom_plot_settings(font_weight="normal")
print(plt.style.available)
# +
flow_matrix = np.array([
[0, 0, 0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0, 0, 0],
[0, 0, 1, 1, 1, 1, 1, 0, 0],
[0, 0, 1, 1, 2, 3, 2, 0, 0],
[0, 0, 1, 2, 3, 4, 3, 0, 0],
[0, 0, 2, 3, 4, 3, 2, 0, 0],
[0, 0, 3, 4, 3, 2, 1, 0, 0],
[0, 0, 0, 4, 0, 0, 0, 0, 0],
[0, 0, 0, 4, 0, 0, 0, 0, 0]
])
fig = plt.figure(figsize=(12.8, 4.8))
ax = fig.add_subplot(111)
def apply_plot_settings(axis):
for side in ["left", "right","top", "bottom"]:
axis.spines[side].set_visible(False)
axis.tick_params(
axis="both",
which="both",
left=False,
right=False,
top=False,
bottom=False,
labelleft=False,
labeltop=False
)
ax.matshow(flow_matrix)
apply_plot_settings(ax)
fig.savefig("MesoscopicView-FlowVisualization.pdf", bbox_inches="tight", transparent=True)
| assets/data/Work/Presentations/2021-03-26 - How to Simulate Crowds/Figures/PedestrianStreamSimulations/LocomotionModels-ByType/MesoscopicView-FlowVisualization.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/sarahalyahya/SoftwareArt-Text/blob/main/LousyFairytaleGenerator_Assemblage_Project1_.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + [markdown] id="K6VUu3uaAS_b"
# #Lousy Fairytale Plot Generator | <NAME>
# *scroll to the end and click run to check the presentation first!*
# ##Idea
# For this project, I spent a lot of time browsing through Project Gutenberg. While I was doing that, I got an urge to look up what kind of children's books are on the website, inspired by a phone call with my niece. My niece is a very curious child, and I can only imagine that by the time she starts getting interested in stories and fairytales, her curiosity will mean that her parents will run out of made-up stories to tell her. So, I wanted to create a tool that helps them, well kind of...I wanted to capture that confused tone that parents have sometimes when they make up a story as they go.
# ##Research
# What makes a fairytale? A google search led to Missouri Southern State University's [website](https://libguides.mssu.edu/c.php?g=185298&p=1223898#:~:text=The%20basic%20structure%20of%20a,a%20solution%20can%20be%20found.). To summarize, the page explains that a fairy tale's main elements are:
#
#
# * Characters
# * The Moral Lesson
# * Obstacles
# * Magic
# * A Happily Ever After
#
# So, these elements formed the structure of my output.
#
# ##Elements
# *(More specific comments are in the code)*
# ####Characters
# For the characters, I scraped a few pages from a website called World of Tales. I chose it over Project Gutenberg as I found the HTML easier to work with. The program picks a random fairytale from the ones I have in an array, parses the HTML, finds all paragraphs and cleans them from new line and trailing characters before appending them to a list. Afterwards, to extract the main character's name I used NLTK and a function that counts proper nouns to find the most repeated proper noun which becomes my main character. As you can imagine, this isn't a perfect method since we could get "I" or "King's". I tried to do my best to limit such results, but it isn't perfect.
# ###The Moral Lesson
# Here, I scraped a random blog post that lists 101 valuable life lessons. I thought this would add a humorous tone to the work as some of the lessons are completely unrelated to the fairytale world, e.g. "Don’t go into bad debt (debt taken on for consumption)". Then, I parsed and cleaned the text, and sliced the results to remove any paragraph elements that weren't part of the list. For this part I also used Regular Expressions to remove list numbers ("1."). Unfortunately, there's a bit of hardcoded replacements and removals in this part. I found it difficult to figure out the pronouns, such as turning all you's into a neutral they resulting in "theyself"...etc. I left some of the failed RegEx attempts commented if anyone has any suggestions! This part picks a random lesson from the list everytime.
# ###Obstacle
# My obstacle was basically a villain the main character needs to fight. Which I got from a webpage that lists 10 fairytale villains. The process was similar, I parsed the page and cleaned the strings using replace() and regEx then randomly chose one of the villains listed.
# ###Magic
# Using an approach almost identical to the one for villains, I scraped a webpage that lists a bunch of superpowers and magical abilities, and made a list out of them to pick a random one when the program runs. I relied on random web pages to complement that "confused", "humorous" tone of scrambling to make up a random story on the spot!
# ###The Happily Ever After
# I made sure to end the text generation with a reference to the happily ever after and the fact that the hero beats the villain using the specific superpower.
#
# ##Presentation
# I made use of the Sleep() function to create a rhythm for the conversation. I embedded my generated text into human sounding text to really make it feel like you're talking to someone. Obviously, some outputs make it very obvious that this is a bot, such as when the main character is called "I", but I think the general feel is quite natural.
#
# ##Impact
# I would say the impact is more entertaining and maybe humorous than functional, although it is quite functional sometimes and suggests ideas that could make great fairytales!
#
# ##Final Reflection
# While working I realized that regular expressions really need a lot of practice, and that the more I do it the more it'll make sense...so I'm excited to do that!
# This was a really fun assignment to work on! It was very rewarding to gradually go through my ideas and realize that there's something in our "toolbox" that can help me achieve what I have in mind.
#
# + colab={"base_uri": "https://localhost:8080/"} id="JHOet1s568pf" outputId="9c31fd54-5645-4c89-deb4-89c64dea2aae"
import requests
from bs4 import BeautifulSoup
import random
import nltk
import re
from time import sleep
nltk.download('averaged_perceptron_tagger')
from nltk.tag import pos_tag
# + id="rPtzdTTvQz-u"
fairytaleLinks = ["Charles_Perrault/Little_Thumb.html#gsc.tab=0","Brothers_Grimm/Margaret_Hunt/The_Story_of_the_Youth_who_Went_Forth_to_Learn_What_Fear_Was.html#gsc.tab=0",
"Charles_Perrault/THE_FAIRY.html#gsc.tab=0","Hans_Christian_Andersen/Andersen_fairy_tale_17.html#gsc.tab=0","Brothers_Grimm/Grimm_fairy_stories/Cinderella.html#gsc.tab=0",
"Brothers_Grimm/Margaret_Hunt/Little_Snow-white.html#gsc.tab=0",
"Hans_Christian_Andersen/Andersen_fairy_tale_17.html#gsc.tab=0","Charles_Perrault/THE_MASTER_CAT,_OR_PUSS_IN_BOOTS.html#gsc.tab=0",
"Brothers_Grimm/Grimm_household_tales/The_Sleeping_Beauty.html#gsc.tab=0", "Hans_Christian_Andersen/Andersen_fairy_tale_31.html#gsc.tab=0",
"Hans_Christian_Andersen/Andersen_fairy_tale_47.html#gsc.tab=0","Brothers_Grimm/Grimm_fairy_stories/Snow-White_And_Rose-Red.html#gsc.tab=0",
"Brothers_Grimm/Margaret_Hunt/Hansel_and_Grethel.html#gsc.tab=0","Brothers_Grimm/RUMPELSTILTSKIN.html#gsc.tab=0",
"Brothers_Grimm/THE%20ELVES%20AND%20THE%20SHOEMAKER.html#gsc.tab=0","Brothers_Grimm/THE%20JUNIPER-TREE.html#gsc.tab=0","Brothers_Grimm/THE%20GOLDEN%20GOOSE.html#gsc.tab=0",
"Brothers_Grimm/Margaret_Hunt/The_Frog-King,_or_Iron_Henry.html#gsc.tab=0","Brothers_Grimm/Grimm_fairy_stories/Snow-White_And_Rose-Red.html#gsc.tab=0"]
fairytaleIndex = random.randint(0,(len(fairytaleLinks))-1)
# + id="4yLI8vfg7ggc"
fairytaleTargetUrl = "https://www.worldoftales.com/fairy_tales/" + fairytaleLinks[fairytaleIndex]
reqFairytale = requests.get(fairytaleTargetUrl)
moralLessonTargetUrl = "https://daringtolivefully.com/life-lessons"
reqMoralLesson = requests.get(moralLessonTargetUrl)
superpowerTargetUrl = "http://www.superheronation.com/2007/12/30/list-of-superpowers/"
reqSuperpower = requests.get(superpowerTargetUrl)
villainTargetUrl = "https://beat102103.com/life/top-10-fairytale-villains-ranked/"
reqVillain = requests.get(villainTargetUrl)
# + id="91oVeoXK8doW"
#FOR CHARACTER
soupFairytale = BeautifulSoup(reqFairytale.content, 'html.parser')
story = soupFairytale.find_all("p")
paragraphs = []
for i in story:
content = str(i.text)
content = content.replace("\r"," ")
content = content.replace("\n","")
paragraphs.append(content)
#print(paragraphs)
story = " "
story = story.join(paragraphs)
#print(story)
#Code from StackOverflow: https://stackoverflow.com/questions/17669952/finding-proper-nouns-using-nltk-wordnet
tagged_sent = pos_tag(story.split())
propernouns = [word for word,pos in tagged_sent if pos == 'NNP']
#print(propernouns)
highestCount = 0
#A loop that looks for the most repeated proper noun
for i in propernouns:
currentCount = propernouns.count(i)
if currentCount > highestCount:
highestCount = currentCount
countDictionary = {i:propernouns.count(i) for i in propernouns}
#Had an issue with this at first as there's no direct function to get the key using the value, so I researched solutions and used this page:
# https://www.geeksforgeeks.org/python-get-key-from-value-in-dictionary/
def get_key(val):
for key, value in countDictionary.items():
if val == value:
return key
characterName = get_key(highestCount)
#to eliminate instances like "king's"
characterName = characterName.replace("'", "")
characterName = characterName.replace('"', "")
#print(characterName)
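# As an aside, the highest-count loop plus the `get_key` lookup above can be collapsed into a single call with `collections.Counter`; a minimal sketch on a stand-in list (the real code would pass `propernouns`):

```python
from collections import Counter

# Stand-in for the propernouns list built above
sample_propernouns = ["Cinderella", "King", "Cinderella", "Prince", "Cinderella"]

# most_common(1) returns [(name, count)] for the most frequent item
most_common_name, count = Counter(sample_propernouns).most_common(1)[0]
print(most_common_name, count)  # → Cinderella 3
```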
# + id="7oH4DMIgE6av"
#FOR MORAL LESSON
soupLesson = BeautifulSoup(reqMoralLesson.content, 'html.parser')
lessonsHTML = soupLesson.find_all("p")
lessons = []
for i in lessonsHTML:
content = str(i.text)
content = content.replace("\r"," ")
content = content.replace("\n","")
lessons.append(content)
#used .index() to figure out the slicing here:
del lessons[0]
del lessons[101:len(lessons)+1]
# to make things more convenient :)
toRemove = ['2. Don’t postpone joy.',
'8. Pay yourself first: save 10% of what you earn.',
'10. Don’t go into bad debt (debt taken on for consumption).',
'15. Remember <NAME>’s admonishment: “Whether you think you can or whether you think you can’t, you’re right.”',
'19. Don’t smoke. Don’t abuse alcohol. Don’t do drugs.',
'36. Don’t take action when you’re angry. Take a moment to calm down and ask yourself if what you’re thinking of doing is really in your best interest.',
'38. Worry is a waste of time; it’s a misuse of your imagination.',
'44. Don’t gossip. Remember the following quote by Eleanor Roosevelt: “Great minds discuss ideas, average minds discuss events, small minds discuss people.”',
'52. Don’t procrastinate; procrastination is the thief of time.',
'61. Don’t take yourself too seriously.',
'62. During difficult times remember this: “And this too shall pass.”',
'63. When things go wrong remember that few things are as bad as they first seem.',
'64. Keep in mind that mistakes are stepping stones to success. Success and failure are a team; success is the hero and failure is the sidekick. Don’t be afraid to fail.',
'70. If you don’t know the answer, say so; then go and find the answer.',
'77. Don’t renege on your promises, whether to others or to yourself.',
'80. Don’t worry about what other people think.',
'89. Every time you fall simply get back up again.',
'95. Don’t argue for your limitations.',
'97. Listen to Eleanor Roosevelt’s advice: “No one can make you feel inferior without your consent.”',
'99. Remember the motto: “You catch more flies with honey.”']
for i in toRemove:
lessons.remove(i)
#print(lessons)
x = 0
strippedLessons = []
for i in lessons:
lessons[x] = lessons[x].strip()
#remove any digits + the period after the digits
    lessons[x] = re.sub(r"\d+\.", "", lessons[x])  # act on the stripped copy, not the raw item `i`
    #lessons[x] = re.sub("n'^Dot", "not to", lessons[x]) // attempt to turn all don't(s) at the beginning of the sentence to "not to"
    #remove periods ONLY at the end
    lessons[x] = re.sub(r"\.$", " ", lessons[x])
lessons[x] = lessons[x].replace("theirs","others'").replace("your","their").replace("you","they").replace("theyself","themselves").lower()
strippedLessons.append(lessons[x])
x+=1
#specifics = {"you've":"they've","theirself":"themself","theirs":"others'"} // #an attempt to fix any awkward results due to the replace function above
#y = 0
#for word in strippedLessons[y]:
# if word.lower() in specifics:
# srippedLessons = text.replace(word, specifics[word.lower()])
# y+=1
randomLessonIndex = random.randint(0, len(strippedLessons)-1)
chosenMoralLesson = strippedLessons[randomLessonIndex]
#print(chosenMoralLesson)
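# The chained `.replace` calls above can produce artifacts like "theyself" because "you" matches inside longer words. A whole-word alternative uses a regex with `\b` boundaries and a hypothetical pronoun map (the map below is illustrative, not the one used above):

```python
import re

# Hypothetical whole-word pronoun map; sorting longest-first stops "you"
# from matching inside "yourself"
pronoun_map = {"yourself": "themselves", "your": "their", "you": "they"}
pattern = re.compile(r"\b(" + "|".join(sorted(pronoun_map, key=len, reverse=True)) + r")\b")

def swap_pronouns(text):
    # each whole-word match is replaced via a lookup into the map
    return pattern.sub(lambda m: pronoun_map[m.group(1)], text.lower())

print(swap_pronouns("You should trust yourself and your friends."))
# → they should trust themselves and their friends.
```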
# + id="iseyU_pCJqLR"
#FOR SUPERPOWER
soupSuperpower = BeautifulSoup(reqSuperpower.content, 'html.parser')
superpowers = soupSuperpower.find_all("li")
allSuperpowers = []
for i in superpowers:
content = (str(i.text)).strip()
content = content.replace("\r"," ")
content = content.replace("\n"," ")
content = content.replace("\t","")
allSuperpowers.append(content)
#allSuperpowers.index('Superstrength')
#removing all non-Superpower elements
del allSuperpowers[67:len(allSuperpowers)+1]
del allSuperpowers[0:5]
toRemove2 = ['Skills and/or knowledge Popular categories: science, mechanical, computer/electronics, weapons-handling/military, driving, occult/magical.',
'Popular categories: science, mechanical, computer/electronics, weapons-handling/military, driving, occult/magical.',
'Resourcefulness (“I’m never more than a carton of baking soda away from a doomsday device”)']
for i in toRemove2:
allSuperpowers.remove(i)
randomSuperpowerIndex = random.randint(0, len(allSuperpowers)-1)
chosenSuperpower = allSuperpowers[randomSuperpowerIndex].lower()
#print(chosenSuperpower)
# + id="zKdnoTDsQ_y7"
#FOR VILLAIN:
soupVillain = BeautifulSoup(reqVillain.content, 'html.parser')
villainsHTML = soupVillain.find_all("strong")
villains = []
for i in villainsHTML:
content = str(i.text)
content = content.replace("\r"," ")
content = content.replace("\n","")
content = content.replace("\xa0"," ")
villains.append(content)
x = 0
for v in villains:
    villains[x] = re.sub(r"\d+\.", "", v)
    villains[x] = villains[x].strip()  # strip() returns a new string, so assign it back
x+=1
randomVillainIndex = random.randint(0, len(villains)-1)
chosenVillain = villains[randomVillainIndex].lower()
#print(chosenVillain)
# + colab={"base_uri": "https://localhost:8080/"} id="dzJTk760Ss87" outputId="9b894e9b-d900-4962-e06d-13246ab20bfe"
print(u"\U0001F5E8"+ " " +"Oh? You're out of bedtime stories to tell?")
sleep(1.5)
print(u"\U0001F5E8"+ " " +"hmmm...")
sleep(2)
print(u"\U0001F5E8"+ " " +"how about you tell a story about")
sleep(1)
print(u"\U0001F5E8"+ " " +".....")
sleep(3)
print(u"\U0001F5E8"+ " " +characterName+"?")
sleep(2)
print(u"\U0001F5E8"+ " " +"yeah...yeah, tell a story about " + characterName +" " + "and how they learnt to...")
sleep(1.5)
print(u"\U0001F5E8"+ " " +"I don't know...like..")
sleep(3)
print(u"\U0001F5E8"+ " " +"how they learnt to..." + chosenMoralLesson)
sleep(1.5)
print(u"\U0001F5E8"+ " " +"Yes! that sounds good I guess.")
sleep(2)
print(u"\U0001F5E8"+ " " +"and of course it isn't that easy...it isn't all rainbows and sunshine you know?")
sleep(3)
print(u"\U0001F5E8"+ " " +"I don't know... maybe talk about their struggles with...")
sleep(4)
print(u"\U0001F5E8"+ " " +"with " + chosenVillain + "...yikes")
sleep(4)
print(u"\U0001F5E8"+ " " +"but we need a happily ever after, so maybe say that " + characterName + " was able to defeat " + chosenVillain + " somehow...")
sleep(5)
print(u"\U0001F5E8"+ " " +"like by " + chosenSuperpower + " or something...does that make sense?")
sleep(2)
print(u"\U0001F5E8"+ " " +"I mean even if it doesn't...that's all I can give you tonight")
sleep(3)
print(u"\U0001F5E8"+ " " +"you should practice being imaginative or something...")
sleep(2)
print(u"\U0001F5E8"+ " " +"anyways, it's way past bedtime. Go tell your story!")
print("\n\n\nERROR:chat disconnected")
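# The repeated print-then-sleep pairs above could be folded into a small helper that keeps the speech-bubble prefix in one place; a sketch (the delay defaults to 0 so it runs instantly):

```python
from time import sleep

def say(message, delay=0.0):
    # compose the speech-bubble line once, print it, then pause for rhythm
    line = u"\U0001F5E8" + " " + message
    print(line)
    sleep(delay)
    return line

out = say("Oh? You're out of bedtime stories to tell?")
```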
| LousyFairytaleGenerator_Assemblage_Project1_.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Simple distributed wordcount with MapReduce
# Check that file `file.txt` exists, view size.
# !ls -hal file.txt
# Copy file to HDFS
# !hdfs dfs -put -f file.txt
# Erase `result` folder.
# !hdfs dfs -rm -R result 2>/dev/null
# Run the bash wordcount command `wc` in parallel on the distributed file.
# !mapred streaming \
# -input file.txt \
# -output result \
# -mapper /bin/cat \
# -reducer /usr/bin/wc
# Check result of MapReduce job
# !hdfs dfs -cat result/part*
# Check that the word count is correct by comparing with `wc` on local host (warning: do not try with too large files).
# !wc file.txt
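# The same map/reduce idea can be simulated locally in plain Python: an identity "mapper" (like `/bin/cat`) followed by a "reducer" that aggregates counts (like `wc`). A toy sketch on two hard-coded lines:

```python
lines = ["hello world", "hello hadoop"]

mapped = [line for line in lines]                      # identity mapper, like /bin/cat
num_lines = len(mapped)                                # `wc` line count
num_words = sum(len(line.split()) for line in mapped)  # `wc` word count
print(num_lines, num_words)  # → 2 4
```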
| simplest_mapreduce_bash_wordcount.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import csv
import os
import json
import numpy as np
import pandas as pd
from pprint import pprint
import random
datasetSizeController = 200
src_Folder = "C:\\Users\\haris\\Documents\\Altmetric Data\\data\\"
with open("RandomData.csv", mode = 'w', newline = '') as csvFile:
fieldnames = ['altmetric_id', 'abstract', 'blog_post']
csvWriter = csv.DictWriter(csvFile, fieldnames = fieldnames)
csvWriter.writeheader()
filesCount = 0
filesTried = 0
# +
with open("RandomData.csv", mode = 'a', newline = '') as csvFile:
csvWriter = csv.writer(csvFile)
random.seed(25)
while(filesCount < datasetSizeController):
rand = random.randint(10,846)
json_Folder = src_Folder + str(rand)
fileName = json_Folder + "\\" + str(random.choice(os.listdir(json_Folder)))
try:
with open(fileName) as json_file:
json_data = json.load(json_file)
#print("File opened")
filesTried = filesTried + 1
altmetric_id = json_data['altmetric_id']
if ('abstract' not in json_data['citation']):
#print("Debug1")
continue
else:
#print("Debug2")
abstract = json_data['citation']['abstract']
abstract = abstract.replace(',', '')
if('blogs' not in json_data['posts']):
#print("Debug3")
continue
else:
print("Debug4")
blog_post = json_data['posts']['blogs'][0]['summary']
blog_post = blog_post.replace(',', '')
print(str(blog_post))
filesCount = filesCount + 1
csvWriter.writerow([altmetric_id, abstract, blog_post])
print("Files Tried: " + str(filesTried))
print("Files Read: " + str(filesCount))
#print(str(filesCount))
        except (OSError, KeyError, json.JSONDecodeError):
            # skip unreadable or malformed records instead of hiding every possible error
            pass
#print("CSV File Generated!")
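# The nested key checks above can also be written with chained `dict.get` calls, so missing fields short-circuit to `None` instead of raising; a sketch on a made-up record shaped like the ones read in this notebook:

```python
# Made-up record: has a citation abstract but no blog posts
record = {"altmetric_id": 1,
          "citation": {"abstract": "A, B"},
          "posts": {}}

# .get with a {} default lets the chain continue even when a level is missing
abstract = record.get("citation", {}).get("abstract")
blogs = record.get("posts", {}).get("blogs") or []
first_blog = blogs[0]["summary"] if blogs else None
clean_abstract = abstract.replace(",", "") if abstract else None
print(clean_abstract, first_blog)  # → A B None
```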
# -
| readNews.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] toc=true
# <h1>Table of Contents<span class="tocSkip"></span></h1>
# <div class="toc"><ul class="toc-item"><li><span><a href="#Import-scripts-and-datasets" data-toc-modified-id="Import-scripts-and-datasets-1"><span class="toc-item-num">1 </span>Import scripts and datasets</a></span></li><li><span><a href="#Rank-sum-test" data-toc-modified-id="Rank-sum-test-2"><span class="toc-item-num">2 </span>Rank-sum test</a></span></li><li><span><a href="#Gradient-boosting-tree-as-single-cell-classifiers" data-toc-modified-id="Gradient-boosting-tree-as-single-cell-classifiers-3"><span class="toc-item-num">3 </span>Gradient boosting tree as single cell classifiers</a></span></li><li><span><a href="#Model-explaination-using-SHAP-values" data-toc-modified-id="Model-explaination-using-SHAP-values-4"><span class="toc-item-num">4 </span>Model explaination using SHAP values</a></span></li><li><span><a href="#Feature-selection-using-mean-SHAP" data-toc-modified-id="Feature-selection-using-mean-SHAP-5"><span class="toc-item-num">5 </span>Feature selection using mean SHAP</a></span></li><li><span><a href="#Apply-workflow-on-the-ICC-dataset" data-toc-modified-id="Apply-workflow-on-the-ICC-dataset-6"><span class="toc-item-num">6 </span>Apply workflow on the ICC dataset</a></span></li></ul></div>
# -
# ## Import scripts and datasets
# +
import sys
sys.path.append('../scripts')
import utils,analysis,test_features
import SCCML
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.pyplot import subplots
HIP_CER_df = pd.read_pickle('../data/HIP_CER.pkl')
ICC_df = pd.read_pickle('../data/ICC_rms.pkl')
SIMS_df = pd.read_pickle('../data/SIMS.pkl')
# -
HIP_CER_df.head(5)
# + code_folding=[3]
def plot_avg_spec(data):
types = list(set(data['type']))
fig,axes=subplots(1,len(types),figsize=(7*len(types),4))
ax = axes.ravel()
mzs = np.array(data.columns[data.columns!='type'])
for i in range(len(types)):
ax[i].stem(mzs,data[data['type']==types[i]].mean(0),label=types[i],basefmt='darkslateblue',markerfmt=' ',linefmt='darkslateblue',
use_line_collection=True)
ax[i].set_ylabel('Normalized Intensity',fontsize=12)
ax[i].set_xlabel('m/z',fontsize=12)
ax[i].legend()
plot_avg_spec(data=HIP_CER_df)
#plot_avg_spec(data=ICC_df,mzs=ICC_df.columns)
#plot_avg_spec(data=SIMS_df,mzs=SIMS_df.columns)
# -
# ## Rank-sum test
# + code_folding=[]
def rank_sum(data,groups):
x = data.loc[data['type']==groups[0], data.columns != 'type'].values
y = data.loc[data['type']==groups[1], data.columns != 'type'].values
mzs = np.array(data.columns[data.columns!='type'])
O,F,P=analysis.rank_sum_test(x,y,mzs,10000)
# plt.figure(figsize=(5,3))
# plt.hist(np.log10(P),30)
# plt.xlabel('log 10 p-value')
# plt.ylabel('number of mz features')
return O,F,P
O,F,P1 = rank_sum(HIP_CER_df,['Hippocampal','Cerebellar'])
O,F,P2 = rank_sum(ICC_df,['Astrocytes','Neurons'])
O,F,P3 = rank_sum(SIMS_df,['DRG','CER'])
# -
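# The rank-sum test itself lives in the project's `analysis` script, which is not shown here; a rough per-feature equivalent with `scipy.stats.ranksums` (without the permutation argument used above) might look like this, on synthetic data:

```python
import numpy as np
from scipy.stats import ranksums

rng = np.random.default_rng(0)
x = rng.normal(0.0, 1.0, size=(30, 4))   # group 1: 30 cells x 4 features
y = rng.normal(0.5, 1.0, size=(30, 4))   # group 2, mean shifted by 0.5

# one two-sided Wilcoxon rank-sum p-value per m/z feature
pvals = np.array([ranksums(x[:, j], y[:, j]).pvalue for j in range(x.shape[1])])
print(pvals.shape)
```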
fig,axes = subplots(1,3,figsize=(20,4))
ax = axes.ravel()
ax[0].hist(np.log10(P1),30)
ax[0].set_ylabel('number of features',fontsize=14)
ax[1].hist(np.log10(P2),30)
ax[1].set_xlabel('log10(P-value)',fontsize=14)
ax[2].hist(np.log10(P3),100)
ax[2].set_xlim([-50,0])
fig.savefig('pvalues.tif')
# ## Gradient boosting tree as single cell classifiers
# + code_folding=[]
#split the dataset into training and test set
from sklearn.model_selection import train_test_split
def split_data(data,test_size,random_state):
    X_train, X_test, y_train, y_test = train_test_split(data.drop('type', axis=1).values, data['type'].values,
                                                        test_size=test_size,
                                                        random_state=random_state)
data_dict = {'X_train':X_train,'X_test':X_test,'y_train':y_train,'y_test':y_test}
return data_dict
data_dict_hip_cer = split_data(data=HIP_CER_df,test_size=0.2,random_state=19)
data_dict_icc = split_data(data=ICC_df,test_size=0.2,random_state=19)
#data_dict_sims = split_data(data=SIMS_df,test_size=0.2,random_state=19)
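# If the cell-type counts were heavily imbalanced, `train_test_split` can also stratify on the labels so both partitions keep the class ratio (`stratify` is a real parameter of this function); a toy sketch:

```python
import numpy as np
from sklearn.model_selection import train_test_split

X = np.arange(20).reshape(10, 2)
y = np.array(["A"] * 8 + ["B"] * 2)   # imbalanced toy labels

# stratify=y keeps the A:B ratio roughly equal in train and test
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2,
                                          random_state=19, stratify=y)
print(len(X_tr), len(y_te))
```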
# + code_folding=[]
import xgboost
from sklearn.metrics import classification_report
model = xgboost.XGBClassifier(n_estimators=500,subsample=1)
def train_model(model,data_dict):
model.fit(data_dict['X_train'],data_dict['y_train'])
y_pred = model.predict(data_dict['X_test'])
prob = model.predict_proba(data_dict['X_test'])
    print(classification_report(data_dict['y_test'], y_pred))  # show the report; it is not returned
return model, y_pred, prob
model, y_pred, prob = train_model(model,data_dict_hip_cer)
# +
fig,axes = subplots(1,2,figsize=(18,4))
ax = axes.ravel()
ax[0].hist(prob[np.where(data_dict_hip_cer['y_test']=='Cerebellar')[0],0],bins=30)
ax[0].set_title('Cerebellum',fontsize=15)
ax[1].hist(prob[np.where(data_dict_hip_cer['y_test']=='Hippocampal')[0],1],bins=30)
ax[1].set_title('Hippocampus',fontsize=15)
ax[0].set_ylabel('number of cells',fontsize=15)
ax[1].set_ylabel('number of cells',fontsize=15)
ax[0].set_xlabel('predicted probability',fontsize=15)
ax[1].set_xlabel('predicted probability',fontsize=15)
ax[0].axvline(0.5,linestyle='-.',c='r',label='binary classification threshold')
ax[1].axvline(0.5,linestyle='-.',c='r',label='binary classification threshold')
ax[0].legend(fontsize=11,loc=2)
ax[1].legend(fontsize=11)
# -
features = HIP_CER_df.columns[:-1]
_,f_selected = SCCML.plot_featureImp_glob(model,features,0)
# ## Model explaination using SHAP values
_,_,contrib_xgb_best = SCCML.feature_contrib(model,HIP_CER_df.drop('type', axis=1),
                                             HIP_CER_df.columns[:-1],10,True)
# +
shap_ranked_index = np.argsort(abs(contrib_xgb_best).mean(0))[::-1]
X_df = pd.DataFrame(HIP_CER_df.drop('type',axis=1))
X_df.columns = HIP_CER_df.columns[:-1]
contrib_df = pd.DataFrame()
contrib_df['mass'] = features[shap_ranked_index]
contrib_df['mean SHAP'] = np.around(abs(contrib_xgb_best).mean(0),4)[shap_ranked_index]
contrib_df['mean SHAP'] = np.round(contrib_df['mean SHAP']/contrib_df['mean SHAP'].max(),4)
a=X_df.values*contrib_xgb_best
contrib_df.loc[a.mean(0)[shap_ranked_index]>0,'contribute to which GOI']='Hippocampal'
contrib_df.loc[a.mean(0)[shap_ranked_index]<0,'contribute to which GOI']='Cerebellar'
contrib_df = contrib_df[contrib_df['mean SHAP']!=0]
contrib_df.head(10)
# -
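# The mean-SHAP ranking built in the cell above reduces to a few numpy operations: mean absolute SHAP per feature, a descending sort, and normalization by the top feature. A self-contained sketch on a made-up (cells × features) SHAP matrix:

```python
import numpy as np

# made-up SHAP values: 2 cells, 3 features
shap_vals = np.array([[0.2, -0.05, 0.0],
                      [-0.4, 0.10, 0.0]])

mean_abs = np.abs(shap_vals).mean(axis=0)   # mean |SHAP| per feature
order = np.argsort(mean_abs)[::-1]          # feature indices, most important first
normalized = mean_abs / mean_abs.max()      # scale so the top feature is 1.0
print(order, normalized)  # → [0 1 2] [1.   0.25 0.  ]
```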
_,_,shap_pca = SCCML.shap_clustering(contrib_xgb_best,HIP_CER_df['type'].values)
selected_mass = [412.1391,322.1068,341.1018]
_ = SCCML.plot_shap_features(contrib_xgb_best,HIP_CER_df.drop('type', axis=1),selected_mass)
# ## Feature selection using mean SHAP
# +
label_ = HIP_CER_df['type'].values.copy()
label_[label_=='Hippocampal'] = 1
label_[label_=='Cerebellar'] = 0
acc,recall,f1,auc = test_features.feature_select(HIP_CER_df.drop('type', axis=1),label_,
                                                 model,shap_ranked_index,100)
# -
fig,axes = subplots(1,1)
axes.plot(auc,label='Area Under Curve')
axes.plot(acc,label='Accuracy')
axes.axvline(18,linestyle='-.',c='b',label='18 selected features, AUC=0.98')
axes.set_xlabel('number of features',fontsize=13)
plt.legend(fontsize=12)
acc1,recall1,f11,auc_1 = test_features.feature_sampling(HIP_CER_df.drop('type', axis=1),label_,model,18,300)
fig,axes = subplots(1,1)
axes.hist(auc_1,bins=30, color = "skyblue")
axes.axvline(0.98,linestyle='-.',c='r',label='18 top selected features with AUC=0.98')
axes.set_xlim([0.6,1])
axes.set_xlabel('AUC',fontsize=13)
plt.legend(fontsize=12)
# ## Apply workflow on the ICC dataset
import joblib  # sklearn.externals.joblib was removed in scikit-learn 0.23; use the standalone package
model = joblib.load('xgb_best.sav')
model
# +
model, y_pred, prob = train_model(model,data_dict_icc)
fig,axes = subplots(1,2,figsize=(18,4))
ax = axes.ravel()
ax[0].hist(prob[data_dict_icc['y_test']=='Astrocytes'][:,0],bins=20)
ax[0].set_title('astrocytes',fontsize=15)
ax[1].hist(prob[data_dict_icc['y_test']=='Neurons'][:,1],bins=20)
ax[1].set_title('neurons',fontsize=15)
ax[0].set_ylabel('number of cells',fontsize=15)
ax[1].set_ylabel('number of cells',fontsize=15)
ax[0].set_xlabel('predicted probability',fontsize=15)
ax[1].set_xlabel('predicted probability',fontsize=15)
ax[0].axvline(0.5,linestyle='-.',c='r',label='binary classification threshold')
ax[1].axvline(0.5,linestyle='-.',c='r',label='binary classification threshold')
ax[0].legend(fontsize=11,)
ax[1].legend(fontsize=11)
# -
features = ICC_df.columns[:-1]
_,_,contrib_xgb_best = SCCML.feature_contrib(model,ICC_df.drop('type', axis=1),
                                             ICC_df.columns[:-1],10,True)
# +
shap_ranked_index = np.argsort(abs(contrib_xgb_best).mean(0))[::-1]
X_df = pd.DataFrame(ICC_df.drop('type',axis=1))
X_df.columns = ICC_df.columns[:-1]
contrib_df = pd.DataFrame()
contrib_df['mass'] = features[shap_ranked_index]
contrib_df['mean SHAP'] = np.around(abs(contrib_xgb_best).mean(0),4)[shap_ranked_index]
contrib_df['mean SHAP'] = np.round(contrib_df['mean SHAP']/contrib_df['mean SHAP'].max(),4)
a=X_df.values*contrib_xgb_best
contrib_df.loc[a.mean(0)[shap_ranked_index]>0,'contribute to which GOI']='Neuron'
contrib_df.loc[a.mean(0)[shap_ranked_index]<0,'contribute to which GOI']='Astrocyte'
contrib_df = contrib_df[contrib_df['mean SHAP']!=0]
contrib_df.head(10)
# -
_,_,shap_pca = SCCML.shap_clustering(contrib_xgb_best,ICC_df['type'].values)
# +
label_ = ICC_df['type'].values.copy()
label_[label_=='Neurons'] = 1
label_[label_=='Astrocytes'] = 0
acc,recall,f1,auc = test_features.feature_select(ICC_df.drop('type', axis=1),label_,
                                                 model,shap_ranked_index,100)
# -
fig,axes = subplots(1,1)
axes.plot(auc,label='Area Under Curve')
axes.plot(acc,label='Accuracy')
axes.axvline(45,linestyle='-.',c='b',label='45 selected features, AUC=0.83')
axes.set_xlabel('number of features',fontsize=13)
plt.legend(fontsize=12)
acc1,recall1,f11,auc_1 = test_features.feature_sampling(ICC_df.drop('type', axis=1),label_,model,45,300)
fig,axes = subplots(1,1)
axes.hist(auc_1,bins=30, color = "skyblue")
axes.axvline(0.85,linestyle='-.',c='r',label='45 top selected features with AUC=0.83')
axes.set_xlabel('AUC',fontsize=13)
plt.legend(fontsize=12)
| notebooks/sc_classification.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: gt-venv
# language: python
# name: gt-venv
# ---
# %load_ext autoreload
# %autoreload 2
# +
import numpy as np
import matplotlib.pyplot as plt
from pprint import pprint
# %matplotlib inline
plt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
# -
# from custom_minigrid.envs import AdversarialEnv
import gym
import gym_minigrid
# +
env = gym.make('MultiGrid-Adversarial-v0')
env.reset()
before_img = env.render('rgb_array')
plt.imshow(before_img);
# -
env.reset_random()
reset_img = env.render('rgb_array')
plt.imshow(reset_img);
env.reset()
pass
done = False
num_steps = 0
while not done:
loc = env.adversary_action_space.sample()
# Basically first position is the agent, second is goal, rest are blocks
# not sure how agent orientation is determined
obs, _, done, _ = env.step_adversary(loc)
num_steps += 1
reset_img = env.render('rgb_array')
plt.imshow(reset_img);
| examples/display_custom_grid.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
from selenium import webdriver
import time
# !dir .\chromedriver.exe
browser = webdriver.Chrome('./chromedriver.exe')
browser.maximize_window()
browser.get('https://store.kakao.com/home/best')
# +
prev_height = browser.execute_script("return document.body.scrollHeight")
while True:
browser.execute_script("window.scrollTo(0,document.body.scrollHeight)")
time.sleep(2)
curr_height = browser.execute_script("return document.body.scrollHeight")
if(curr_height == prev_height):
break
else:
prev_height = browser.execute_script("return document.body.scrollHeight")
# -
html = browser.page_source
from bs4 import BeautifulSoup
soup = BeautifulSoup(html,'html.parser')
contents = soup.select('li.item-container')
# print(type(contents))
# contents[0]
content = contents[0]
# content
rank = content.select('div > em.rank_info')[0]
rank
title = content.select('span.product_name')[0]
title
shop = [] # or list()
for content in contents:
rank = content.select('div > em.rank_info')[0]
title = content.select('span.product_name')[0]
print(rank.text.strip(),title.text.strip())
shop.append([rank.text.strip(),title.text.strip()])
import pandas as pd
pd_data = pd.DataFrame(shop, columns=['Rank','Kakao shopping'])
pd_data
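# To keep the scraped table, the frame can be written out with `to_csv`; shown here on a stand-in frame (`utf-8-sig` adds a BOM so Excel renders Korean text correctly):

```python
import pandas as pd

# Stand-in for the pd_data frame built above
demo = pd.DataFrame([["1", "item A"], ["2", "item B"]],
                    columns=["Rank", "Kakao shopping"])
demo.to_csv("kakao_top100.csv", index=False, encoding="utf-8-sig")
print(len(demo))  # → 2
```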
| scraping/Kakao_shopping_top100.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Reading and Writing files
f = open("file-demo-read.txt", "r")
data = f.read()
print(data)
f.close()
# # More clean way. The close method will be implicitly called just before exiting with block.
# +
with open("file-demo-read.txt", "r") as f:
data = f.read()
print(data)
# -
# # Reading one line at a time
# +
with open("file-demo-read.txt", "r") as f:
data = f.readline()
print(data)
# -
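# A file object is itself an iterator over its lines, so all lines can be read with a plain `for` loop; a sketch using a throwaway file (hypothetical name, separate from the demo file above):

```python
# write a small sample file first
with open("file-demo-lines.txt", "w") as f:
    f.write("line 1\nline 2\n")

collected = []
with open("file-demo-lines.txt", "r") as f:
    for line in f:                        # yields one line per iteration
        collected.append(line.rstrip("\n"))
print(collected)  # → ['line 1', 'line 2']
```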
# # Writing into file
f = open("file-demo-write.txt", "w+")  # "w+" creates the file if missing; "r+" would fail on a missing file
f.write("This is a sample text.")
f.close()
# ### Reading the written content
f = open("file-demo-write.txt", "r")
data = f.read()
print(data)
f.close()
# # File positioning
f = open("file-demo-write.txt", "r")
f.seek(5)
data = f.readline()
print(data)
f.seek(0)
data = f.readline()
print(data)
f.close()
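# `tell()` reports the current offset, and in binary mode `seek()` also accepts a `whence` argument (1 means "relative to the current position"); a sketch on a throwaway file:

```python
import os

with open("file-demo-seek.txt", "w") as f:
    f.write("0123456789")

with open("file-demo-seek.txt", "rb") as f:   # relative seeks need binary mode
    f.seek(3)
    pos_after_seek = f.tell()                 # → 3
    f.seek(2, 1)                              # whence=1: 2 bytes forward from here
    rest = f.read().decode()                  # → "56789"

os.remove("file-demo-seek.txt")               # clean up the throwaway file
print(pos_after_seek, rest)
```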
# # File system operations
import os
# ### Create a directory
os.mkdir("demo")
# ### Rename a directory
os.rename("demo", "demo-dir")
# ### Change current working directory
curr_dir_backup = os.getcwd()
os.chdir("demo-dir")
# ### Print current working directory
curr_dir = os.getcwd()
print(curr_dir)
# ### Create a file under this directory
f = open("demo.txt", "w+")
f.close()
# ### Rename this file
os.rename("demo.txt", "demo-file.txt")
# ### Create two directory under this directory
os.mkdir("demo-sub-dir-1")
os.mkdir("demo-sub-dir-2")
# ### List all files and directories
dir_list = os.listdir()
print(dir_list)
# ### Get the statistics of a director. Current directory in the case below.
stat = os.stat(".")
print(stat)
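# The stat result is a named tuple; a few commonly used fields, demonstrated on a throwaway file created on the spot:

```python
import os

with open("stat-demo.txt", "w") as f:
    f.write("hello")

st = os.stat("stat-demo.txt")
size_bytes = st.st_size      # file size in bytes → 5
mtime = st.st_mtime          # last-modification time as a Unix timestamp
os.remove("stat-demo.txt")   # clean up the throwaway file
print(size_bytes)
```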
# ## Clean up
if(os.path.exists("../demo-dir")):
    if(os.path.exists("demo-sub-dir-1")):
        os.rmdir("demo-sub-dir-1")
    if(os.path.exists("demo-sub-dir-2")):
        os.rmdir("demo-sub-dir-2")
    if(os.path.isfile("demo-file.txt")):
        os.remove("demo-file.txt")
# step out of the directory before removing it: rmdir fails on the current working directory on Windows
os.chdir(curr_dir_backup)
if(os.path.exists("demo-dir")):
    os.rmdir("demo-dir")
| 8-working-with-files.ipynb |