# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python (Summer of AI)
# language: python
# name: summerofai
# ---
import torch
# Tensors are like arrays with special properties
# an array or tensor of 10 random numbers
torch.rand(10)
a = torch.rand(2, 2)
b = torch.rand(2, 2)
a
b
# For simple element-wise operations, use the usual arithmetic operators for addition, subtraction, multiplication, etc.
a + b
a * b
# For performing matrix operations, there are different operators. For example, use `@` for matrix multiplication
a @ b
# same as above
torch.mm(a, b)
# for tensor concatenation
torch.cat((a, b))
# Getting and manipulating tensor shapes is something that will come up frequently in deep learning
a.shape
# creating a new tensor with specific values
c = torch.tensor([1.4, 0, 3.5, 2])
c.shape
# will fail, since shape is different, even though the number of elements is equal
torch.cat((a, c))
# - Concatenation of tensors with different shapes will fail even though the number of elements is equal.
# - This can be fixed by changing the shape of the tensors before performing any operations on them.
# - `tensor.view` returns a reshaped view without modifying the original tensor; the result can be assigned to a variable and used later.
# - `tensor.view` is also handy when you only know one of the target dimensions: pass `-1` for the dimension you want inferred and specify the rest.
c.view(2, 2)
# original shape is not changed
c
# now we can use this value to concatenate with a
torch.cat((a, c.view(2, 2)))
c.view(4, -1)
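# Passing `-1` tells `view` to infer that dimension from the total element count. The inference rule itself is simple enough to sketch in plain Python (a toy illustration, not PyTorch's actual implementation):

```python
def infer_dim(shape, numel):
    """Resolve a single -1 in `shape` the way tensor.view does (a sketch)."""
    known = 1
    neg = None
    for i, s in enumerate(shape):
        if s == -1:
            neg = i
        else:
            known *= s
    if neg is None:
        return tuple(shape)
    out = list(shape)
    out[neg] = numel // known  # remaining elements go to the inferred dimension
    return tuple(out)

print(infer_dim([4, -1], 4))   # (4, 1)
print(infer_dim([-1, 2], 4))   # (2, 2)
```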
# Source: Week 1/Getting Started.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: conda_python3
# language: python
# name: conda_python3
# ---
# + [markdown] button=false new_sheet=false run_control={"read_only": false}
# ---
# ## <span style="color:orange"> Host Multiple TensorFlow Computer Vision Models using SageMaker Multi-Model Endpoint </span>
# ---
# ## <span style="color:black">Contents</span>
# 1. [Background](#Background)
# 1. [Setup](#Setup)
# 1. [Train Model 1 - CIFAR-10 Image Classification](#Train-Model-1---CIFAR-10-Image-Classification)
# 1. [Train Model 2 - Sign Language Image Classification](#Train-Model-2---Sign-Language-Image-Classification)
# 1. [Create a Multi-Model Endpoint](#Create-a-Multi-Model-Endpoint)
# 1. [Test Multi-Model Endpoint for Real Time Inference](#Test-Multi-Model-Endpoint-for-Real-Time-Inference)
# + [markdown] button=false new_sheet=false run_control={"read_only": false}
# ### Background
# In this notebook, we show how to host two computer vision models trained with the TensorFlow framework under one SageMaker multi-model endpoint. For the first model, we train a smaller version of the AlexNet CNN to classify images from the CIFAR-10 dataset. For the second model, we fine-tune a VGG16 CNN pretrained on the ImageNet dataset on the Sign Language Digits Dataset.
# + [markdown] button=false new_sheet=false run_control={"read_only": false}
# ### Setup
# + [markdown] button=false new_sheet=false run_control={"read_only": false}
# #### Prerequisites
# Choose Kernel for this notebook.<br>
# Under `Kernel` tab at the top of this notebook → `Choose kernel`, select `conda_python3`
# + button=false new_sheet=false run_control={"read_only": false}
# %%capture
# !pip install tensorflow==2.3.0
# + [markdown] button=false new_sheet=false run_control={"read_only": false}
# #### Imports
# + button=false new_sheet=false run_control={"read_only": false}
from tensorflow.keras.preprocessing.image import ImageDataGenerator
from sagemaker.tensorflow.serving import TensorFlowModel
from sagemaker.multidatamodel import MultiDataModel
from tensorflow.keras.datasets import cifar10
from sagemaker.tensorflow import TensorFlow
from sagemaker.inputs import TrainingInput
from sagemaker import get_execution_role
from tensorflow.keras import utils
import matplotlib.image as mpimg
import matplotlib.pyplot as plt
from datetime import datetime
import tensorflow as tf
import numpy as np
import sagemaker
import logging
import boto3
import time
import os
# + [markdown] button=false new_sheet=false run_control={"read_only": false}
# #### Setup Logger
# + button=false new_sheet=false run_control={"read_only": false}
logger = logging.getLogger(__name__)
logger.setLevel(logging.INFO)
logger.addHandler(logging.StreamHandler())
# + button=false new_sheet=false run_control={"read_only": false}
logger.info(f'[Using TensorFlow version: {tf.__version__}]')
logger.info(f'[Using SageMaker version: {sagemaker.__version__}]')
# + [markdown] button=false new_sheet=false run_control={"read_only": false}
# #### Seed for Reproducibility
# + button=false new_sheet=false run_control={"read_only": false}
SEED = 123
np.random.seed(SEED)
tf.random.set_seed(SEED)
# + [markdown] button=false new_sheet=false run_control={"read_only": false}
# #### Create Roles, Sessions and Data Locations
# + button=false new_sheet=false run_control={"read_only": false}
role = get_execution_role()
session = boto3.Session()
sagemaker_session = sagemaker.Session()
s3 = session.resource('s3')
TF_FRAMEWORK_VERSION = '2.3.0'
BUCKET = sagemaker.Session().default_bucket()
PREFIX = 'cv-models'
# + [markdown] button=false new_sheet=false run_control={"read_only": false}
# ### Train Model 1 - CIFAR-10 Image Classification
#
# <p align="justify">First, we will train a Convolutional Neural Network (CNN) model to classify images from the CIFAR-10 dataset. Image classification is the task of assigning a label to an image from a predefined set of categories. CIFAR-10 is an established CV dataset used for object recognition. It is a subset of the 80 Million Tiny Images dataset and consists of 60,000 32x32 color images, each labeled with one of 10 object classes, with 6,000 images per class.</p>
# + [markdown] button=false new_sheet=false run_control={"read_only": false}
# #### a) Load Data
#
# The first step is to load the pre-shuffled CIFAR-10 dataset into our train and test objects. Luckily, Keras provides the CIFAR dataset for us to load using the `load_data()` method. All we have to do is import keras.datasets and then load the data.
# + button=false new_sheet=false run_control={"read_only": false}
(X_train, y_train), (X_test, y_test) = cifar10.load_data()
# + button=false new_sheet=false run_control={"read_only": false}
logger.info(f'X_train Shape: {X_train.shape}')
logger.info(f'y_train Shape: {y_train.shape}')
logger.info(f'X_test Shape : {X_test.shape}')
logger.info(f'y_test Shape : {y_test.shape}')
# + [markdown] button=false new_sheet=false run_control={"read_only": false}
# #### b) Data Exploration
# + button=false new_sheet=false run_control={"read_only": false}
fig = plt.figure(figsize=(20, 5))
for i in range(36):
    ax = fig.add_subplot(3, 12, i+1, xticks=[], yticks=[])
    ax.imshow(X_train[i])
# + [markdown] button=false new_sheet=false run_control={"read_only": false}
# #### c) Data Preparation
# + [markdown] button=false new_sheet=false run_control={"read_only": false}
# ##### Rescale
# Rescales the images by dividing the pixel values by 255: [0,255] ⇒ [0,1]
# + button=false new_sheet=false run_control={"read_only": false}
X_train = X_train.astype('float32')/255
X_test = X_test.astype('float32')/255
# + [markdown] button=false new_sheet=false run_control={"read_only": false}
# ##### One Hot Encode Target Labels
# One-hot encoding is a process by which categorical variables are converted into a numeric form. It converts the (n × 1) integer label vector into an (n × 10) label matrix, where n is the number of sample images. So, if we have 1,000 images in our dataset, the label vector has dimensions (1000 × 1); after one-hot encoding, the label matrix has dimensions (1000 × 10). That’s why, when we define our network architecture in the next step, we make the output softmax layer contain 10 nodes, one per class, each representing the probability of that class.
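# A rough NumPy sketch of what `to_categorical` does under the hood (an illustration, not Keras' actual code):

```python
import numpy as np

labels = np.array([0, 2, 1, 2])                         # (n,) integer class labels
num_classes = 3
one_hot = np.eye(num_classes, dtype='float32')[labels]  # index rows of the identity matrix
print(one_hot.shape)  # (4, 3)
```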
# + button=false new_sheet=false run_control={"read_only": false}
num_classes = len(np.unique(y_train))
y_train = utils.to_categorical(y_train, num_classes)
y_test = utils.to_categorical(y_test, num_classes)
# + [markdown] button=false new_sheet=false run_control={"read_only": false}
# ##### Split Data
# Break original train set further into train and validation sets.
# + button=false new_sheet=false run_control={"read_only": false}
X_train, X_validation = X_train[500:], X_train[:500]
y_train, y_validation = y_train[500:], y_train[:500]
# + [markdown] button=false new_sheet=false run_control={"read_only": false}
# ##### Save to Local
# + [markdown] button=false new_sheet=false run_control={"read_only": false}
# Create a local `data/cifar_10` directory to save the datasets.
# + button=false new_sheet=false run_control={"read_only": false}
DATASET_PATH = './data/cifar_10'
# + button=false new_sheet=false run_control={"read_only": false}
os.makedirs(DATASET_PATH, exist_ok=True)
# + [markdown] button=false new_sheet=false run_control={"read_only": false}
# Save train, validation and test sets to local `data` directory
# + button=false new_sheet=false run_control={"read_only": false}
np.save(f'{DATASET_PATH}/X_train.npy', X_train)
np.save(f'{DATASET_PATH}/y_train.npy', y_train)
np.save(f'{DATASET_PATH}/X_validation.npy', X_validation)
np.save(f'{DATASET_PATH}/y_validation.npy', y_validation)
np.save(f'{DATASET_PATH}/X_test.npy', X_test)
np.save(f'{DATASET_PATH}/y_test.npy', y_test)
# + [markdown] button=false new_sheet=false run_control={"read_only": false}
# ##### Copy Datasets to S3
# Copy train, validation and test sets from the local dir to S3, since SageMaker expects datasets to be in S3 for training.
# + button=false new_sheet=false run_control={"read_only": false}
# !aws s3 cp ./{DATASET_PATH}/X_train.npy s3://{BUCKET}/{PREFIX}/cifar_10/train/
# !aws s3 cp ./{DATASET_PATH}/y_train.npy s3://{BUCKET}/{PREFIX}/cifar_10/train/
# !aws s3 cp ./{DATASET_PATH}/X_validation.npy s3://{BUCKET}/{PREFIX}/cifar_10/validation/
# !aws s3 cp ./{DATASET_PATH}/y_validation.npy s3://{BUCKET}/{PREFIX}/cifar_10/validation/
# !aws s3 cp ./{DATASET_PATH}/X_test.npy s3://{BUCKET}/{PREFIX}/cifar_10/test/
# !aws s3 cp ./{DATASET_PATH}/y_test.npy s3://{BUCKET}/{PREFIX}/cifar_10/test/
# + [markdown] button=false new_sheet=false run_control={"read_only": false}
# #### d) Create Training Inputs
# Using the S3 locations of the datasets we saved in the previous step, create pointers to these datasets using the `TrainingInput` class from the SageMaker SDK.
# + button=false new_sheet=false run_control={"read_only": false}
train_input = TrainingInput(s3_data=f's3://{BUCKET}/{PREFIX}/cifar_10/train',
                            distribution='FullyReplicated',
                            content_type='npy')
validation_input = TrainingInput(s3_data=f's3://{BUCKET}/{PREFIX}/cifar_10/validation',
                                 distribution='FullyReplicated',
                                 content_type='npy')
test_input = TrainingInput(s3_data=f's3://{BUCKET}/{PREFIX}/cifar_10/test',
                           distribution='FullyReplicated',
                           content_type='npy')
# + button=false new_sheet=false run_control={"read_only": false}
inputs = {'train': train_input, 'val': validation_input, 'test': test_input}
# + [markdown] button=false new_sheet=false run_control={"read_only": false}
# #### e) Define Model Architecture & create Training Script
#
# We will build a small CNN consisting of three convolutional layers and two dense layers.<br>
# <b>Note:</b> We will use the ReLU activation function for all hidden layers. The last dense layer uses a softmax activation with 10 nodes to return an array of 10 probability scores (summing to 1), each the probability that the current image belongs to one of our 10 image classes.
# + button=false new_sheet=false run_control={"read_only": false}
# !pygmentize cifar_train.py
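# Since the training script isn't rendered here, below is a rough Keras sketch of the architecture described above (filter and unit counts are assumptions; the authoritative definition lives in `cifar_train.py`):

```python
import tensorflow as tf
from tensorflow.keras import layers, models

def build_model(num_classes=10):
    # Three conv blocks followed by two dense layers, as described above.
    model = models.Sequential([
        layers.Input(shape=(32, 32, 3)),
        layers.Conv2D(16, 3, padding='same', activation='relu'),
        layers.MaxPooling2D(),
        layers.Conv2D(32, 3, padding='same', activation='relu'),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, padding='same', activation='relu'),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(128, activation='relu'),
        layers.Dense(num_classes, activation='softmax'),
    ])
    return model

model = build_model()
print(model.output_shape)
```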
# + [markdown] button=false new_sheet=false run_control={"read_only": false}
# #### f) Create a TensorFlow Estimator & fit the Model
# + button=false new_sheet=false run_control={"read_only": false}
model_name = 'cifar-10'
hyperparameters = {'epochs': 30}
estimator_parameters = {'entry_point': 'cifar_train.py',
                        'instance_type': 'ml.m5.2xlarge',
                        'instance_count': 1,
                        'model_dir': '/opt/ml/model',
                        'role': role,
                        'hyperparameters': hyperparameters,
                        'output_path': f's3://{BUCKET}/{PREFIX}/cifar_10/out',
                        'base_job_name': f'mme-cv-{model_name}',
                        'framework_version': TF_FRAMEWORK_VERSION,
                        'py_version': 'py37',
                        'script_mode': True}
estimator_1 = TensorFlow(**estimator_parameters)
# + button=false new_sheet=false run_control={"read_only": false}
estimator_1.fit(inputs)
# + [markdown] button=false new_sheet=false run_control={"read_only": false}
# ### Train Model 2 - Sign Language Image Classification
# + [markdown] button=false new_sheet=false run_control={"read_only": false}
# #### a) Load Data
# + button=false new_sheet=false run_control={"read_only": false}
train_path = './data/sign_language/train'
validation_path = './data/sign_language/validation'
test_path = './data/sign_language/test'
# + [markdown] button=false new_sheet=false run_control={"read_only": false}
# #### b) Data Exploration
# + button=false new_sheet=false run_control={"read_only": false}
img = mpimg.imread(f'{train_path}/0/IMG_1118.JPG')
plt.imshow(img)
# + button=false new_sheet=false run_control={"read_only": false}
img.shape
# + [markdown] button=false new_sheet=false run_control={"read_only": false}
# <p>`ImageDataGenerator` generates batches of tensor image data with real-time data augmentation. The data will be looped over in batches.</p>
# + button=false new_sheet=false run_control={"read_only": false}
train_batches = ImageDataGenerator().flow_from_directory(train_path,
                                                         target_size=(224, 224),
                                                         batch_size=10)
# + button=false new_sheet=false run_control={"read_only": false}
train_batches.next()[0][0].shape
# + [markdown] button=false new_sheet=false run_control={"read_only": false}
# Visualize random sample of images from `train_batches`
# + button=false new_sheet=false run_control={"read_only": false}
fig, rows = plt.subplots(nrows=1, ncols=4, figsize=(12, 3))
for row in rows:
    row.imshow(train_batches.next()[0][0].astype('int'))
    row.axis('off')
plt.show()
# + [markdown] button=false new_sheet=false run_control={"read_only": false}
# #### c) Copy local data to S3 for SageMaker training
# + button=false new_sheet=false run_control={"read_only": false}
# !aws s3 cp ./data/sign_language/ s3://{BUCKET}/{PREFIX}/sign_language/ --recursive
# + [markdown] button=false new_sheet=false run_control={"read_only": false}
# #### d) Define Model Architecture & create Training Script
# + button=false new_sheet=false run_control={"read_only": false}
# !pygmentize sign_language_train.py
# + [markdown] button=false new_sheet=false run_control={"read_only": false}
# #### e) Create a TensorFlow Estimator & fit the Model
# + button=false new_sheet=false run_control={"read_only": false}
train_input = TrainingInput(s3_data=f's3://{BUCKET}/{PREFIX}/sign_language/train',
                            distribution='ShardedByS3Key')
val_input = TrainingInput(s3_data=f's3://{BUCKET}/{PREFIX}/sign_language/validation',
                          distribution='ShardedByS3Key')
# + button=false new_sheet=false run_control={"read_only": false}
model_name = 'sign-language'
hyperparameters = {'epochs': 1}
estimator_parameters = {'entry_point': 'sign_language_train.py',
                        'instance_type': 'ml.m5.2xlarge',
                        'instance_count': 1,
                        'hyperparameters': hyperparameters,
                        'model_dir': '/opt/ml/model',
                        'role': role,
                        'output_path': f's3://{BUCKET}/{PREFIX}/sign_language/out',
                        'base_job_name': f'cv-{model_name}',
                        'framework_version': TF_FRAMEWORK_VERSION,
                        'py_version': 'py37',
                        'script_mode': True}
estimator_2 = TensorFlow(**estimator_parameters)
# + button=false new_sheet=false run_control={"read_only": false}
estimator_2.fit({'train': train_input, 'val': val_input})
# + [markdown] button=false new_sheet=false run_control={"read_only": false}
# ### Create a Multi-Model Endpoint
# + [markdown] button=false new_sheet=false run_control={"read_only": false}
# #### a) Copy Trained Models to a common S3 Prefix
# + button=false new_sheet=false run_control={"read_only": false}
tf_model_1 = estimator_1.model_data
output_1 = f's3://{BUCKET}/{PREFIX}/mme/cifar.tar.gz'
# + button=false new_sheet=false run_control={"read_only": false}
tf_model_2 = estimator_2.model_data
output_2 = f's3://{BUCKET}/{PREFIX}/mme/sign-language.tar.gz'
# + button=false new_sheet=false run_control={"read_only": false}
tf_model_2
# + button=false new_sheet=false run_control={"read_only": false}
# !aws s3 cp {tf_model_1} {output_1}
# !aws s3 cp {tf_model_2} {output_2}
# + [markdown] button=false new_sheet=false run_control={"read_only": false}
# #### b) Essentials
# + button=false new_sheet=false run_control={"read_only": false}
current_time = datetime.fromtimestamp(time.time()).strftime('%Y-%m-%d-%H-%M-%S')
current_time
# + button=false new_sheet=false run_control={"read_only": false}
IMAGE_URI = '763104351884.dkr.ecr.us-east-1.amazonaws.com/tensorflow-inference:2.3.1-cpu-py37-ubuntu18.04'
model_data_prefix = f's3://{BUCKET}/{PREFIX}/mme/'
# + [markdown] button=false new_sheet=false run_control={"read_only": false}
# #### c) Create a MultiDataModel using Model 1
# + button=false new_sheet=false run_control={"read_only": false}
model_1 = TensorFlowModel(model_data=output_1,
                          role=role,
                          image_uri=IMAGE_URI)
# + button=false new_sheet=false run_control={"read_only": false}
mme = MultiDataModel(name=f'mme-tensorflow-{current_time}',
                     model_data_prefix=model_data_prefix,
                     model=model_1,
                     sagemaker_session=sagemaker_session)
# + [markdown] button=false new_sheet=false run_control={"read_only": false}
# #### d) Deploy Multi-Model Endpoint
# + button=false new_sheet=false run_control={"read_only": false}
predictor = mme.deploy(initial_instance_count=1,
                       instance_type='ml.m5.2xlarge',
                       endpoint_name=f'mme-tensorflow-{current_time}')
# + button=false new_sheet=false run_control={"read_only": false}
list(mme.list_models())
# + [markdown] button=false new_sheet=false run_control={"read_only": false}
# ### Test Multi-Model Endpoint for Real Time Inference
# + [markdown] button=false new_sheet=false run_control={"read_only": false}
# #### a) Test Model-1 CIFAR Image Classification
# + button=false new_sheet=false run_control={"read_only": false}
# %matplotlib inline
from tensorflow.keras.preprocessing.image import img_to_array
from tensorflow.keras.preprocessing.image import load_img
from tensorflow.keras.preprocessing import image
from IPython.display import Image
import matplotlib.image as mpimg
import matplotlib.pyplot as plt
import numpy as np
# + button=false new_sheet=false run_control={"read_only": false}
CIFAR10_LABELS = ['airplane', 'automobile', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck']
# + button=false new_sheet=false run_control={"read_only": false}
Image('./data/cifar_10/raw_images/airplane.png')
# + button=false new_sheet=false run_control={"read_only": false}
img = load_img('./data/cifar_10/raw_images/airplane.png', target_size=(32, 32))
data = img_to_array(img)
data = data.astype('float32')
data = data / 255.0
data = data.reshape(1, 32, 32, 3)
# + button=false new_sheet=false run_control={"read_only": false}
payload = {'instances': data}
# + button=false new_sheet=false run_control={"read_only": false}
y_pred = predictor.predict(data=payload, initial_args={'TargetModel': 'cifar.tar.gz'})
# + button=false new_sheet=false run_control={"read_only": false}
predicted_label = CIFAR10_LABELS[np.argmax(y_pred)]
print(f'Predicted Label: [{predicted_label}]')
# + [markdown] button=false new_sheet=false run_control={"read_only": false}
# #### b) Test Model-2 Sign Language Classification
# + button=false new_sheet=false run_control={"read_only": false}
test_path = './data/sign_language/test'
img = mpimg.imread(f'{test_path}/0/IMG_4159.JPG')
plt.imshow(img)
# + button=false new_sheet=false run_control={"read_only": false}
def path_to_tensor(img_path):
    # loads RGB image as PIL.Image.Image type
    img = image.load_img(img_path, target_size=(224, 224))
    # convert PIL.Image.Image type to 3D tensor with shape (224, 224, 3)
    x = image.img_to_array(img)
    # convert 3D tensor to 4D tensor with shape (1, 224, 224, 3) and return it
    return np.expand_dims(x, axis=0)
# + button=false new_sheet=false run_control={"read_only": false}
data = path_to_tensor(f'{test_path}/0/IMG_4159.JPG')
payload = {'instances': data}
# + button=false new_sheet=false run_control={"read_only": false}
y_pred = predictor.predict(data=payload, initial_args={'TargetModel': 'sign-language.tar.gz'})
# + button=false new_sheet=false run_control={"read_only": false}
predicted_label = np.argmax(y_pred)
print(f'Predicted Label: [{predicted_label}]')
# Source: multi-model-endpoint-tensorflow-cv.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
from collections import Counter
my_list = [1, 2, 3, 4, 5, 1, 2, 1, 2, 1, 6, 3, 4]
counter = Counter(my_list)
x = counter.most_common(1)
# -
x = counter.most_common(1)
x
x = counter.most_common(2)
x
counter=Counter(my_list)
x = counter.most_common(1)[0]
print(f'value: {x[0]}, frequency: {x[1]}')
max(my_list)
max(my_list, key=my_list.count)
# +
# count digits
import math
number = 12312321321
print(len(str(number)))
# -
print(int(math.log10(number))+1)
numbers = -1234512
counter = 1
while abs(numbers) >= (10 ** counter):
    counter += 1
print(counter)
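# The string approach generalizes most cleanly; note that `math.log10` fails on zero and negative numbers, so a small helper covers those edge cases:

```python
def count_digits(n: int) -> int:
    """Count decimal digits, handling zero and negative numbers."""
    return len(str(abs(n)))

print(count_digits(0))         # 1
print(count_digits(-1234512))  # 7
```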
# Source: tip_tricks_python/Counter_most_frequent_value.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# +
import os
import time
import re
import copy
import numpy as np
import pandas as pd
import h5py
import tables
import random
from tqdm import tqdm
import matplotlib.pyplot as plt
import seaborn as sns
import rioxarray as rxr
import math
import pickle
import sklearn
import graphviz
import xgboost as xgb
import lightgbm as lgbm
from pickle import dump
import contextily as cx
import rasterio
import geopandas as gpd
from shapely.geometry import Point
# from mpl_toolkits.basemap import Basemap
from sklearn.preprocessing import MinMaxScaler
from sklearn.model_selection import train_test_split
from sklearn.model_selection import GridSearchCV
from sklearn.model_selection import StratifiedKFold
from sklearn.metrics import mean_squared_error
from sklearn.feature_selection import RFE
from sklearn.feature_selection import RFECV
from xgboost import cv
from xgboost import XGBRegressor
from xgboost import plot_importance as plot_importance_XGB
from lightgbm import LGBMRegressor
from lightgbm import plot_importance as plot_importance_LGBM
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers
from tensorflow.keras.models import load_model
from tensorflow.keras.models import Sequential
# from tensorflow.keras.utils import np_utils
from tensorflow.keras.layers import Dense, Activation, Dropout
from os import listdir
from os.path import isfile, join
from platform import python_version
import re
def atof(text):
    try:
        retval = float(text)
    except ValueError:
        retval = text
    return retval
def natural_keys(text):
    '''
    alist.sort(key=natural_keys) sorts in human order
    http://nedbatchelder.com/blog/200712/human_sorting.html
    (See Toothy's implementation in the comments)
    float regex comes from https://stackoverflow.com/a/12643073/190597
    '''
    return [atof(c) for c in re.split(r'[+-]?([0-9]+(?:[.][0-9]*)?|[.][0-9]+)', text)]
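# As a quick sanity check, natural sort orders numeric parts by value rather than lexically (definitions repeated here so the cell runs on its own):

```python
import re

def atof(text):
    try:
        return float(text)
    except ValueError:
        return text

def natural_keys(text):
    return [atof(c) for c in re.split(r'[+-]?([0-9]+(?:[.][0-9]*)?|[.][0-9]+)', text)]

names = ['run10', 'run2', 'run1']
names.sort(key=natural_keys)
print(names)  # ['run1', 'run2', 'run10']
```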
print(pd.__version__) # should be 1.3.0
print(sklearn.__version__) # should be 0.24.1
print(tf.__version__) # should be 2.4.0
print(python_version()) #should be 3.8.2
# -
os.getcwd()
os.chdir('..')
os.getcwd()
os.chdir('..')
os.getcwd()
# +
### define new regions
Region_list = ['N_Sierras',
'S_Sierras_High',
'S_Sierras_Low',
'Greater_Yellowstone',
'N_Co_Rockies',
'SW_Mont',
'SW_Co_Rockies',
'GBasin',
'N_Wasatch',
'N_Cascade',
'S_Wasatch',
'SW_Mtns',
'E_WA_N_Id_W_Mont',
'S_Wyoming',
'SE_Co_Rockies',
'Sawtooth',
'Ca_Coast',
'E_Or',
'N_Yellowstone',
'S_Cascade',
'Wa_Coast',
'Greater_Glacier',
'Or_Coast'
]
# +
Model_weights_initial = {}
for Region in Region_list:
    # set up filepath to extract best model
    checkpoint_filepath = 'Model/Initial_Models_Final/' + Region + '/'
    model_path = checkpoint_filepath + Region + '_model.h5'
    print(model_path)
    model = load_model(model_path)
    Model_weights_initial[Region] = {}
    for i in range(1, len(model.layers)):
        weights = model.layers[i].get_weights()
        Model_weights_initial[Region][i] = weights
pickle.dump(Model_weights_initial, open("Model/Model_Calibration/Initial_MLP/Model_Weights_Intitial.pkl", "wb"))
# +
Model_weights_final = {}
for Region in Region_list:
    # set up filepath to extract best model
    checkpoint_filepath = 'Model/Prev_SWE_Models_Final/' + Region + '/'
    model_path = checkpoint_filepath + Region + '_model.h5'
    print(model_path)
    model = load_model(model_path)
    Model_weights_final[Region] = {}
    for i in range(1, len(model.layers)):
        weights = model.layers[i].get_weights()
        Model_weights_final[Region][i] = weights
pickle.dump(Model_weights_final, open("Model/Model_Calibration/Prev_MLP/Model_Weights_Final.pkl", "wb"))
# -
# Source: Model/Model_Calibration/Model_Weights_MLP.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3.6
# language: python
# name: python36
# ---
# Copyright (c) Microsoft Corporation. All rights reserved.
#
# Licensed under the MIT License.
# 
# # MNIST Handwritten Digit Classification using ONNX and AzureML
#
# This example shows how to train a model on the MNIST data using PyTorch, save it as an ONNX model, and deploy it as a web service using Azure Machine Learning services and the ONNX Runtime.
#
# ## What is ONNX
# ONNX is an open format for representing machine learning and deep learning models. ONNX enables open and interoperable AI by letting data scientists and developers use the tools of their choice without lock-in, with the flexibility to deploy to a variety of platforms. ONNX is developed and supported by a community of partners including Microsoft, Facebook, and Amazon. For more information, explore the [ONNX website](http://onnx.ai).
#
# ## MNIST Details
# The Modified National Institute of Standards and Technology (MNIST) dataset consists of 70,000 grayscale images. Each image is a handwritten digit of 28x28 pixels, representing a number from 0 to 9. For more information about the MNIST dataset, please visit [Yann LeCun's website](http://yann.lecun.com/exdb/mnist/). More information about the MNIST model and how it was created can be found in the [ONNX Model Zoo on GitHub](https://github.com/onnx/models/tree/master/vision/classification/mnist).
# ## Prerequisites
# * Understand the [architecture and terms](https://docs.microsoft.com/azure/machine-learning/service/concept-azure-machine-learning-architecture) introduced by Azure Machine Learning
# * If you are using an Azure Machine Learning Notebook VM, you are all set. Otherwise, go through the [configuration notebook](../../../configuration.ipynb) to:
# * install the AML SDK
# * create a workspace and its configuration file (`config.json`)
# +
# Check core SDK version number
import azureml.core
print("SDK version:", azureml.core.VERSION)
# -
# ## Initialize workspace
# Initialize a [Workspace](https://docs.microsoft.com/azure/machine-learning/service/concept-azure-machine-learning-architecture#workspace) object from the existing workspace you created in the Prerequisites step. `Workspace.from_config()` creates a workspace object from the details stored in `config.json`.
# +
from azureml.core.workspace import Workspace
ws = Workspace.from_config()
print('Workspace name: ' + ws.name,
      'Azure region: ' + ws.location,
      'Resource group: ' + ws.resource_group, sep='\n')
# -
# ## Train model
#
# ### Create a remote compute target
# You will need to create a [compute target](https://docs.microsoft.com/azure/machine-learning/service/concept-azure-machine-learning-architecture#compute-target) to execute your training script on. In this tutorial, you create an [Azure Batch AI](https://docs.microsoft.com/azure/batch-ai/overview) cluster as your training compute resource. This code creates a cluster for you if it does not already exist in your workspace.
#
# **Creation of the cluster takes approximately 5 minutes.** If the cluster is already in your workspace this code will skip the cluster creation process.
# +
from azureml.core.compute import ComputeTarget, AmlCompute
from azureml.core.compute_target import ComputeTargetException
# choose a name for your cluster
cluster_name = "gpu-cluster"
try:
    compute_target = ComputeTarget(workspace=ws, name=cluster_name)
    print('Found existing compute target.')
except ComputeTargetException:
    print('Creating a new compute target...')
    compute_config = AmlCompute.provisioning_configuration(vm_size='STANDARD_NC6',
                                                           max_nodes=6)
    # create the cluster
    compute_target = ComputeTarget.create(ws, cluster_name, compute_config)
compute_target.wait_for_completion(show_output=True)
# Use the 'status' property to get a detailed status for the current cluster.
print(compute_target.status.serialize())
# -
# The above code creates a GPU cluster. If you instead want to create a CPU cluster, provide a different VM size to the `vm_size` parameter, such as `STANDARD_D2_V2`.
# ### Create a project directory
# Create a directory that will contain all the necessary code from your local machine that you will need access to on the remote resource. This includes the training script and any additional files your training script depends on.
# +
import os
project_folder = './pytorch-mnist'
os.makedirs(project_folder, exist_ok=True)
# -
# Copy the training script [`mnist.py`](mnist.py) into your project directory. Make sure the training script has the following code to create an ONNX file:
# ```python
# dummy_input = torch.randn(1, 1, 28, 28, device=device)
# model_path = os.path.join(output_dir, 'mnist.onnx')
# torch.onnx.export(model, dummy_input, model_path)
# ```
import shutil
shutil.copy('mnist.py', project_folder)
# ### Create an experiment
# Create an [Experiment](https://docs.microsoft.com/azure/machine-learning/service/concept-azure-machine-learning-architecture#experiment) to track all the runs in your workspace for this PyTorch MNIST tutorial.
# +
from azureml.core import Experiment
experiment_name = 'pytorch1-mnist'
experiment = Experiment(ws, name=experiment_name)
# -
# ### Create a PyTorch estimator
# The AML SDK's PyTorch estimator enables you to easily submit PyTorch training jobs for both single-node and distributed runs. For more information on the PyTorch estimator, refer [here](https://docs.microsoft.com/azure/machine-learning/service/how-to-train-pytorch). The following code will define a single-node PyTorch job.
# +
from azureml.train.dnn import PyTorch
estimator = PyTorch(source_directory=project_folder,
                    script_params={'--output-dir': './outputs'},
                    compute_target=compute_target,
                    entry_script='mnist.py',
                    use_gpu=True)
# upgrade to PyTorch 1.0 Preview, which has better support for ONNX
estimator.conda_dependencies.remove_conda_package('pytorch=0.4.0')
estimator.conda_dependencies.add_conda_package('pytorch-nightly')
estimator.conda_dependencies.add_channel('pytorch')
# -
# The `script_params` parameter is a dictionary containing the command-line arguments to your training script `entry_script`. Please note the following:
# - We specified the output directory as `./outputs`. The `outputs` directory is specially treated by AML in that all the content in this directory gets uploaded to your workspace as part of your run history. The files written to this directory are therefore accessible even once your remote run is over. In this tutorial, we will save our trained model to this output directory.
#
# To leverage the Azure VM's GPU for training, we set `use_gpu=True`.
# ### Submit job
# Run your experiment by submitting your estimator object. Note that this call is asynchronous.
run = experiment.submit(estimator)
print(run.get_details())
# ### Monitor your run
# You can monitor the progress of the run with a Jupyter widget. Like the run submission, the widget is asynchronous and provides live updates every 10-15 seconds until the job completes.
from azureml.widgets import RunDetails
RunDetails(run).show()
# Alternatively, you can block until the script has completed training before running more code.
# %%time
run.wait_for_completion(show_output=True)
# ### Download the model (optional)
#
# Once the run completes, you can choose to download the ONNX model.
# list all the files from the run
run.get_file_names()
model_path = os.path.join('outputs', 'mnist.onnx')
run.download_file(model_path, output_file_path=model_path)
# ### Register the model
# You can also register the model from your run to your workspace. The `model_path` parameter takes in the relative path on the remote VM to the model file in your `outputs` directory. You can then deploy this registered model as a web service through the AML SDK.
model = run.register_model(model_name='mnist', model_path=model_path)
print(model.name, model.id, model.version, sep = '\t')
# #### Displaying your registered models (optional)
#
# You can optionally list out all the models that you have registered in this workspace.
models = ws.models
for name, m in models.items():
print("Name:", name,"\tVersion:", m.version, "\tDescription:", m.description, m.tags)
# ## Deploying as a web service
#
# ### Write scoring file
#
# We are now going to deploy our ONNX model on Azure ML using the ONNX Runtime. We begin by writing a `score.py` file that will be invoked by the web service call. The `init()` function is called once when the container is started, so we load the model into a global session object using the ONNX Runtime.
# +
# %%writefile score.py
import json
import time
import sys
import os
from azureml.core.model import Model
import numpy as np # we're going to use numpy to process input and output data
import onnxruntime # to inference ONNX models, we use the ONNX Runtime
def init():
global session
# AZUREML_MODEL_DIR is an environment variable created during deployment.
# It is the path to the model folder (./azureml-models/$MODEL_NAME/$VERSION)
# For multiple models, it points to the folder containing all deployed models (./azureml-models)
model = os.path.join(os.getenv('AZUREML_MODEL_DIR'), 'mnist.onnx')
session = onnxruntime.InferenceSession(model)
def preprocess(input_data_json):
# convert the JSON data into the tensor input
return np.array(json.loads(input_data_json)['data']).astype('float32')
def postprocess(result):
# We use argmax to pick the highest confidence label
return int(np.argmax(np.array(result).squeeze(), axis=0))
def run(input_data_json):
try:
start = time.time() # start timer
input_data = preprocess(input_data_json)
input_name = session.get_inputs()[0].name # get the id of the first input of the model
result = session.run([], {input_name: input_data})
end = time.time() # stop timer
return {"result": postprocess(result),
"time": end - start}
except Exception as e:
result = str(e)
return {"error": result}
# -
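# Before deploying, it can help to sanity-check the scoring logic locally. The sketch below is not part of the original notebook: it copies the `preprocess` and `postprocess` helpers from `score.py` above and exercises them on a dummy 1x1x28x28 payload and a fake model output, without needing the ONNX Runtime.

```python
import json
import numpy as np

# mirrors of score.py's helpers, exercised locally without an InferenceSession
def preprocess(input_data_json):
    # convert the JSON body {"data": [...]} into a float32 tensor
    return np.array(json.loads(input_data_json)['data']).astype('float32')

def postprocess(result):
    # pick the highest-confidence label
    return int(np.argmax(np.array(result).squeeze(), axis=0))

payload = json.dumps({'data': np.zeros((1, 1, 28, 28)).tolist()})
x = preprocess(payload)
assert x.shape == (1, 1, 28, 28) and x.dtype == np.float32

fake_logits = [np.eye(10)[7].reshape(1, 10)]   # pretend the model scored digit 7 highest
print(postprocess(fake_logits))                # 7
```

# The same `{"data": ...}` payload shape is what you would POST to `aci_service.scoring_uri` once the service is up.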
# ### Create inference configuration
# First we create a YAML file that specifies which dependencies we would like to see in our container.
# +
from azureml.core.conda_dependencies import CondaDependencies
myenv = CondaDependencies.create(pip_packages=["numpy","onnxruntime","azureml-core"])
with open("myenv.yml","w") as f:
f.write(myenv.serialize_to_string())
# -
# Then we setup the inference configuration
# +
from azureml.core.model import InferenceConfig
inference_config = InferenceConfig(runtime= "python",
entry_script="score.py",
conda_file="myenv.yml",
extra_docker_file_steps = "Dockerfile")
# -
# ### Deploy the model
# +
from azureml.core.webservice import AciWebservice
aciconfig = AciWebservice.deploy_configuration(cpu_cores = 1,
memory_gb = 1,
tags = {'demo': 'onnx'},
description = 'web service for MNIST ONNX model')
# -
# The following cell will likely take a few minutes to run as well.
# +
from azureml.core.webservice import Webservice
from azureml.core.model import Model
from random import randint
aci_service_name = 'onnx-demo-mnist'+str(randint(0,100))
print("Service", aci_service_name)
aci_service = Model.deploy(ws, aci_service_name, [model], inference_config, aciconfig)
aci_service.wait_for_deployment(True)
print(aci_service.state)
# -
# In case the deployment fails, you can check the logs. Make sure to delete your `aci_service` before trying again.
if aci_service.state != 'Healthy':
# run this command for debugging.
print(aci_service.get_logs())
aci_service.delete()
# ## Success!
#
# If you've made it this far, you've deployed a working web service that does handwritten digit classification using an ONNX model. You can get the URL for the webservice with the code below.
print(aci_service.scoring_uri)
# When you are eventually done using the web service, remember to delete it.
# +
#aci_service.delete()
| how-to-use-azureml/deployment/onnx/onnx-train-pytorch-aml-deploy-mnist.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Adaptive Runge–Kutta
#
# Compile, run, and plot the result from `RKF.cpp`.
# +
import subprocess
import sys
import os
import time
# +
import numpy as np
import matplotlib
#matplotlib.use('WebAgg')
#matplotlib.use('Qt4Cairo')
#matplotlib.use('Qt5Cairo')
matplotlib.use('nbAgg')
import matplotlib.pyplot as plt
plt.rcParams['font.family']='serif'
plt.rcParams['font.size']=10
plt.rcParams['mathtext.fontset']='stixsans'
from scipy.integrate import odeint
from scipy.integrate import RK45  # Dormand–Prince 4(5), the same embedded pair used in RKF.cpp
# -
os.chdir('..')
os.system(r'make')
os.chdir('0-test')
# +
time0=time.time()
output=subprocess.check_output(["../RKF.run"]).decode(sys.stdout.encoding).split("\n")
print("time: {:10} s".format( time.time()-time0) )
solution=np.array([ (i.split(' '))[:-1] for i in output[:-1] ] ,np.float64)
# -
# +
t=solution[:,0]
y1=solution[:,1]
y2=solution[:,2]
y3=solution[:,3]
err1=solution[:,4]
err2=solution[:,5]
err3=solution[:,6]
# -
def f(t, y):
    lhs = np.zeros(3)
    lhs[0] = -20*y[0]*t**2
    lhs[1] = 5*y[0]*t**2 + 2*(-y[1]**2 + y[2]**2)*t
    lhs[2] = 15*y[0]*t**2 + 2*(y[1]**2 - y[2]**2)*t
    return lhs
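# The adaptive machinery itself lives in `RKF.cpp` and is not shown here. As a rough, self-contained illustration of what an adaptive integrator does, the sketch below controls the step size by step doubling with classical RK4 — an assumption for illustration only; an embedded pair such as Fehlberg's or Dormand–Prince's (as in `RKF.cpp` and scipy's `RK45`) obtains the error estimate more cheaply from a single set of stages.

```python
import numpy as np

def rk4_step(f, t, y, h):
    # one classical 4th-order Runge-Kutta step
    k1 = f(t, y)
    k2 = f(t + h/2, y + h/2*k1)
    k3 = f(t + h/2, y + h/2*k2)
    k4 = f(t + h, y + h*k3)
    return y + h/6*(k1 + 2*k2 + 2*k3 + k4)

def adaptive_rk4(f, t0, y0, t_end, rtol=1e-8, atol=1e-8, h=1e-2):
    # step doubling: compare one step of size h with two steps of size h/2;
    # their difference estimates the local error and drives the step size
    t, y = t0, np.asarray(y0, dtype=float)
    ts, ys = [t], [y]
    while t < t_end:
        h = min(h, t_end - t)
        y_full = rk4_step(f, t, y, h)
        y_half = rk4_step(f, t + h/2, rk4_step(f, t, y, h/2), h/2)
        err = np.max(np.abs(y_full - y_half) / (atol + rtol*np.abs(y_half)))
        err = max(err, 1e-16)          # avoid dividing by zero below
        if err <= 1.0:                 # accept the (more accurate) half-step result
            t, y = t + h, y_half
            ts.append(t); ys.append(y)
        # grow/shrink h; RK4's local error scales like h^5, hence the 1/5 exponent
        h *= min(5.0, max(0.2, 0.9 * err**(-0.2)))
    return np.array(ts), np.array(ys)

# decaying scalar test problem y' = -y, exact solution exp(-t)
ts, ys = adaptive_rk4(lambda t, y: -y, 0.0, [1.0], 5.0)
```

# On this test problem the final value agrees with `exp(-5)` to roughly the requested tolerance, and the accepted steps grow as the solution flattens — the same behavior the step-count histograms below visualize.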
# +
# # ?RK45
# +
sol_py=RK45(f,0,[y1[0],y2[0],y3[0]],t[-1],rtol=1e-8,atol=1e-8)
time0=time.time()
y_py=[]
t_py=[]
while sol_py.status=='running' :
sol_py.step()
y_py.append(sol_py.y)
t_py.append(sol_py.t)
# print(sol_py.step_size,sol_py.t)
y_py=np.array(y_py)
print("time: {:10} s".format( time.time()-time0) )
# -
def g(y,t):
return f(t,y)
time0=time.time()
sol_ode=odeint(g,y_py[0],t_py )
print("time: {:10} s".format( time.time()-time0) )
# +
fig=plt.figure(figsize=(9,6))
fig.subplots_adjust(bottom=0.05, left=0.1, top = 0.99, right=0.9,wspace=0.0,hspace=0.15)
fig.suptitle('')
_c=['xkcd:black','xkcd:red','xkcd:blue']
sub = fig.add_subplot(311)
sub.plot(t,y1,c=_c[0],alpha=0.5,linestyle='-',linewidth=3,label=r'$y_{1}(t)$')
sub.plot(t,y2,c=_c[1],alpha=0.5,linestyle='-',linewidth=3,label=r'$y_{2}(t)$')
sub.plot(t,y3,c=_c[2],alpha=0.5,linestyle='-',linewidth=3,label=r'$y_{3}(t)$')
sub.plot(t_py,y_py[:,0],c=_c[0],alpha=1,linestyle=':',linewidth=2,label=r'$y_{1}(t)$ scipy')
sub.plot(t_py,y_py[:,1],c=_c[1],alpha=1,linestyle=':',linewidth=2,label=r'$y_{2}(t)$ scipy')
sub.plot(t_py,y_py[:,2],c=_c[2],alpha=1,linestyle=':',linewidth=2,label=r'$y_{3}(t)$ scipy')
# sub.plot(t,sol_ode[:,0],c=_c[0],alpha=1,linestyle='--',linewidth=2,label=r'$y_{1}(t)$ scipy-odeint')
# sub.plot(t,sol_ode[:,1],c=_c[1],alpha=1,linestyle='--',linewidth=2,label=r'$y_{2}(t)$ scipy-odeint')
# sub.plot(t,sol_ode[:,2],c=_c[2],alpha=1,linestyle='--',linewidth=2,label=r'$y_{3}(t)$ scipy-odeint')
sub.legend(framealpha=0,ncol=2,loc='upper right',bbox_to_anchor=(1,.9))
# sub.set_xscale('log')
# sub.set_yscale('log')
sub.set_ylabel('y')
# sub.set_xlim(0,1)
sub = fig.add_subplot(312)
sub.hist(t,color=_c[0],label=r'mine',bins=25 )
# sub.plot(t,hist)
# sub.set_xscale('log')
sub.set_ylabel('No. steps')
sub.set_xlabel('')
sub = fig.add_subplot(313)
sub.plot(t,np.abs(err1/y1),c=_c[0],alpha=1,linestyle='--',linewidth=3,label=r'$y_{1}(t)$')
sub.plot(t,np.abs(err2/y2),c=_c[1],alpha=1,linestyle='--',linewidth=3,label=r'$y_{2}(t)$')
sub.plot(t,np.abs(err3/y3),c=_c[2],alpha=1,linestyle='--',linewidth=3,label=r'$y_{3}(t)$')
sub.legend(framealpha=0,ncol=2,loc='upper right',bbox_to_anchor=(1,.9))
sub.set_yscale('log')
# sub.set_xscale('log')
sub.set_ylabel(r' $\dfrac{\Delta y}{ y} $ ')
sub.set_xlabel('t')
plt.show()
# +
fig=plt.figure(figsize=(8,4))
fig.subplots_adjust(bottom=0.15, left=0.1, top = 0.99, right=0.9,wspace=0.0,hspace=0.2)
fig.suptitle('')
_c=['xkcd:black','xkcd:red','xkcd:blue']
sub = fig.add_subplot(111)
sub.hist(t,color=_c[0],label=r'mine',bins=int(t[-1]*5))
sub.hist(t_py,color=_c[2],label=r'scipy',alpha=0.5,bins=int(t[-1]*5))
# check also this
# sub.plot(t,hist,label=r'mine')
# sub.hist(t_py,label=r'scipy',alpha=0.5,bins=N)
sub.set_ylabel('No. steps')
sub.legend(framealpha=0,ncol=2,loc='upper right',bbox_to_anchor=(1,.9))
plt.show()
# -
len(t)
len(t_py)
| RKF/0-test/Adaptive-Runge-Kutta.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/vigneshdurairaj/Chandler_Bot/blob/master/chandler.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + id="-iCq__XIZZm2" colab_type="code" colab={}
from __future__ import print_function  # future imports must come first in the cell
import numpy as np
import pandas as pd
import re
import random
import sys
from keras.models import Sequential
from keras.layers.core import Dense, Activation, Dropout
from keras.layers.recurrent import LSTM
from keras.utils.data_utils import get_file
from keras.callbacks import EarlyStopping
# + id="4Pv3cTytZsmX" colab_type="code" colab={}
def split_data(df, train_perc = 0.8):
df['train'] = np.random.rand(len(df)) < train_perc
train = df[df.train == 1]
test = df[df.train == 0]
split_data ={'train': train, 'test': test}
return split_data
def cleanstr(somestring):
    rx = re.compile(r'\W+')  # raw string avoids an invalid escape sequence
    return rx.sub(' ', somestring).strip()
def sample(preds, temperature=1.0):
preds = np.asarray(preds).astype('float64')
preds = np.log(preds) / temperature
exp_preds = np.exp(preds)
preds = exp_preds / np.sum(exp_preds)
probas = np.random.multinomial(1, preds, 1)
return np.argmax(probas)
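# To see what the temperature parameter does, here is a small, self-contained demo (not from the original notebook) using the same logic as `sample()` on a fixed 3-way distribution: low temperature concentrates draws on the most likely index, high temperature flattens them out.

```python
import numpy as np

np.random.seed(0)

def sample_demo(preds, temperature=1.0):
    # same logic as sample() above: rescale log-probabilities by temperature,
    # renormalize, and draw a single index from the resulting multinomial
    preds = np.asarray(preds).astype('float64')
    preds = np.log(preds) / temperature
    exp_preds = np.exp(preds)
    preds = exp_preds / np.sum(exp_preds)
    return np.argmax(np.random.multinomial(1, preds, 1))

base = [0.1, 0.2, 0.7]
counts = {t: np.bincount([sample_demo(base, t) for _ in range(500)], minlength=3)
          for t in (0.2, 1.0, 2.0)}
# at t=0.2 nearly every draw is index 2; at t=2.0 the draws spread out
```

# This is why the generation loop further down tries several diversity values: each gives a different trade-off between repetitive-but-safe and varied-but-noisy text.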
# + id="Ugl160SQZ8u-" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 176} outputId="2defd51e-2262-42f6-d836-a2918994e6ac"
df = pd.read_csv('friends-transcripts corpus.txt',delimiter='\t')
df.head(3)
# + id="qqbjx8xXaEzb" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 142} outputId="41ec3153-c7d1-4f3e-8ae7-77e00ff623da"
df = df[2:]
df.drop("Season & Episode", axis=1 , inplace=True)
df.head(3)
# + id="gE_F79NIbPyU" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 111} outputId="a5402138-a306-4a31-c5fa-250642141628"
df.Season = pd.to_numeric(df.Season , errors='raise')
df.Episode = pd.to_numeric(df.Episode, errors='coerce')
df.Episode = df.Episode.replace(np.nan , 17)
df.Title = df.Title.astype(str)
df.Quote = df.Quote.astype(str)
df.Author = df.Author.astype(str)
df.head(2)
# + id="_Ukg4pEHbTho" colab_type="code" colab={}
Dataset = split_data(df , train_perc=0.8)
# + id="eFF5gYWVb8xF" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 51} outputId="af92031d-3188-4f07-fe7d-ed43da60337c"
text = ' '.join(Dataset['train'].Quote[Dataset['train'].Author == 'Chandler'].tolist())
text = text.lower()
print('Total characters:', len(text))
chars = sorted(set(text))  # sorted so the char-index mapping is reproducible across runs
print(text[:40])
# + id="h6FflYNecqv4" colab_type="code" colab={}
char_indices = dict((c, i) for i, c in enumerate(chars))
indices_char = dict((i, c) for i, c in enumerate(chars))
# + id="wW1BgrAHcsPP" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 51} outputId="861a0062-bf18-4a46-9f1d-79eea1049961"
maxlen = 40
step = 3
sentences = []
next_chars = []
for i in range(0, len(text) - maxlen, step):
sentences.append(text[i: i + maxlen])
next_chars.append(text[i + maxlen])
print('nb sequences:', len(sentences))
print('Vectorization...')
X = np.zeros((len(sentences), maxlen, len(chars)), dtype=bool)
y = np.zeros((len(sentences), len(chars)), dtype=bool)
for i, sentence in enumerate(sentences):
for t, char in enumerate(sentence):
X[i, t, char_indices[char]] = 1
y[i, char_indices[next_chars[i]]] = 1
# + id="njEfZ9l2c39R" colab_type="code" colab={}
model = Sequential()
model.add(LSTM(512, return_sequences=True, input_shape=(maxlen, len(chars))))
model.add(Dropout(0.2))
model.add(LSTM(512, return_sequences=False))
model.add(Dropout(0.2))
model.add(Dense(len(chars)))
model.add(Activation('softmax'))
# + id="qKm4WgAzdA_r" colab_type="code" colab={}
model.compile(loss='categorical_crossentropy', optimizer='rmsprop')
# + id="EfoM-pvVkhvp" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 1000} outputId="78a5ddbe-c4d5-4af7-94f4-d176cd4df73d"
model.fit(X, y, batch_size=128, epochs=30, callbacks=[EarlyStopping(monitor='loss', min_delta=0, patience=0, verbose=0, mode='auto', baseline=None, restore_best_weights=False)])
# + id="WxuakxaSkmal" colab_type="code" colab={}
from keras.models import load_model
model.save('chandler_F1.h5')
# + id="Z7B9xd5kdCTP" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 1000} outputId="fa8c5a30-9537-4e3e-f427-8affb3399cd5"
for iteration in range(1, 40):
print()
print('-' * 50)
print('Iteration', iteration)
start_index = random.randint(0, len(text) - maxlen - 1)
for diversity in [0.2, 0.5, 1.0, 1.2]:
print()
print('----- diversity:', diversity)
generated = ''
sentence = text[start_index: start_index + maxlen]
generated += sentence
print('----- Generating with seed: "' + sentence + '"')
sys.stdout.write(generated)
for i in range(100):
x = np.zeros((1, maxlen, len(chars)))
for t, char in enumerate(sentence):
x[0, t, char_indices[char]] = 1.
preds = model.predict(x, verbose=0)[0]
next_index = sample(preds, diversity)
next_char = indices_char[next_index]
generated += next_char
sentence = sentence[1:] + next_char
sys.stdout.write(next_char)
sys.stdout.flush()
print()
# + id="xXjWihsCoE-9" colab_type="code" colab={}
| chandler.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# This notebook contains an implementation of the third place result in the Rossman Kaggle competition as detailed in Guo/Berkhahn's [Entity Embeddings of Categorical Variables](https://arxiv.org/abs/1604.06737).
# The motivation behind exploring this architecture is its relevance to real-world applications. Much of our focus has been on computer-vision and NLP tasks, which largely deal with unstructured data.
#
# However, most of the data informing KPIs in industry is structured, time-series data. Here we explore the end-to-end process of using neural networks with practical structured data problems.
# %matplotlib inline
import math, keras, datetime, pandas as pd, numpy as np, keras.backend as K
import matplotlib.pyplot as plt, xgboost, operator, random, pickle
from utils2 import *
np.set_printoptions(threshold=50, edgeitems=20)
limit_mem()
from isoweek import Week
from pandas_summary import DataFrameSummary
# %cd ./data/rossman/
# ## Create datasets
# In addition to the provided data, we will be using external datasets put together by participants in the Kaggle competition. You can download all of them [here](http://files.fast.ai/part2/lesson14/rossmann.tgz).
#
# For completeness, the implementation used to put them together is included below.
def concat_csvs(dirname):
os.chdir(dirname)
filenames=glob.glob("*.csv")
wrote_header = False
with open("../"+dirname+".csv","w") as outputfile:
for filename in filenames:
name = filename.split(".")[0]
with open(filename) as f:
line = f.readline()
if not wrote_header:
wrote_header = True
outputfile.write("file,"+line)
for line in f:
outputfile.write(name + "," + line)
outputfile.write("\n")
os.chdir("..")
# +
# concat_csvs('googletrend')
# concat_csvs('weather')
# -
# Feature Space:
# * train: Training set provided by competition
# * store: List of stores
# * store_states: mapping of store to the German state they are in
# * List of German state names
# * googletrend: trend of certain google keywords over time, found by users to correlate well w/ given data
# * weather: weather
# * test: testing set
table_names = ['train', 'store', 'store_states', 'state_names',
'googletrend', 'weather', 'test']
# We'll be using the popular data manipulation framework pandas.
#
# Among other things, pandas allows you to manipulate tables/data frames in python as one would in a database.
# We're going to go ahead and load all of our csv's as dataframes into a list `tables`.
tables = [pd.read_csv(fname+'.csv', low_memory=False) for fname in table_names]
from IPython.display import display, HTML
# We can use `head()` to get a quick look at the contents of each table:
# * train: Contains store information on a daily basis; tracks things like sales, customers, whether that day was a holiday, etc.
# * store: general info about the store including competition, etc.
# * store_states: maps store to state it is in
# * state_names: Maps state abbreviations to names
# * googletrend: trend data for particular week/state
# * weather: weather conditions for each state
# * test: Same as training table, w/o sales and customers
#
for t in tables: display(t.head())
# This is very representative of a typical industry dataset.
# The following returns summarized aggregate information for each table across each field.
for t in tables: display(DataFrameSummary(t).summary())
# ## Data Cleaning / Feature Engineering
# As a structured data problem, we necessarily have to go through all the cleaning and feature engineering, even though we're using a neural network.
train, store, store_states, state_names, googletrend, weather, test = tables
len(train),len(test)
# Turn state Holidays to Bool
train.StateHoliday = train.StateHoliday!='0'
test.StateHoliday = test.StateHoliday!='0'
# Define function for joining tables on specific fields.
#
# By default, we'll be doing a left outer join of `right` on the `left` argument using the given fields for each table.
#
# Pandas does joins using the `merge` method. The `suffixes` argument describes the naming convention for duplicate fields. We've elected to leave the duplicate field names on the left untouched, and append a "_y" to those on the right.
def join_df(left, right, left_on, right_on=None):
if right_on is None: right_on = left_on
return left.merge(right, how='left', left_on=left_on, right_on=right_on,
suffixes=("", "_y"))
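# A toy example (hypothetical data, not from the competition) showing the left-join behavior the null-checks below rely on: unmatched keys on the left surface as NaN.

```python
import pandas as pd

# same definition as join_df above, repeated so this cell is self-contained
def join_df(left, right, left_on, right_on=None):
    if right_on is None: right_on = left_on
    return left.merge(right, how='left', left_on=left_on, right_on=right_on,
                      suffixes=("", "_y"))

stores = pd.DataFrame({'Store': [1, 2, 3]})
states = pd.DataFrame({'Store': [1, 2], 'State': ['BY', 'HE']})
out = join_df(stores, states, 'Store')
print(len(out[out.State.isnull()]))   # 1 -> store 3 had no match on the right
```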
# Join weather/state names.
weather = join_df(weather, state_names, "file", "StateName")
# In pandas you can add new columns to a dataframe by simply defining it. We'll do this for googletrends by extracting dates and state names from the given data and adding those columns.
#
# We're also going to replace all instances of state name 'NI' with the usage in the rest of the table, 'HB,NI'. This is a good opportunity to highlight pandas indexing. We can use `.loc[rows, cols]` to select a list of rows and a list of columns from the dataframe. In this case, we're selecting rows w/ statename 'NI' by using a boolean list `googletrend.State=='NI'` and selecting "State".
googletrend['Date'] = googletrend.week.str.split(' - ', expand=True)[0]
googletrend['State'] = googletrend.file.str.split('_', expand=True)[2]
googletrend.loc[googletrend.State=='NI', "State"] = 'HB,NI'
# The following extracts particular date fields from a complete datetime for the purpose of constructing categoricals.
#
# You should always consider this feature extraction step when working with date-time. Without expanding your date-time into these additional fields, you can't capture any trend/cyclical behavior as a function of time at any of these granularities.
def add_datepart(df):
df.Date = pd.to_datetime(df.Date)
df["Year"] = df.Date.dt.year
df["Month"] = df.Date.dt.month
df["Week"] = df.Date.dt.week
df["Day"] = df.Date.dt.day
# We'll add to every table w/ a date field.
add_datepart(weather)
add_datepart(googletrend)
add_datepart(train)
add_datepart(test)
trend_de = googletrend[googletrend.file == 'Rossmann_DE']
# Now we can outer join all of our data into a single dataframe.
#
# Recall that in outer joins, every time a value in the joining field on the left table does not have a corresponding value on the right table, the corresponding row in the new table has Null values for all right table fields.
#
# One way to check that all records are consistent and complete is to check for Null values post-join, as we do here.
#
# *Aside*: Why not just do an inner join?
# If you are assuming that all records are complete and match on the field you desire, an inner join will do the same thing as an outer join. However, in the event you are wrong or a mistake is made, an outer join followed by a null-check will catch it. (Comparing before/after # of rows for inner join is equivalent, but requires keeping track of before/after row #'s. Outer join is easier.)
store = join_df(store, store_states, "Store")
len(store[store.State.isnull()])
joined = join_df(train, store, "Store")
len(joined[joined.StoreType.isnull()])
joined = join_df(joined, googletrend, ["State","Year", "Week"])
len(joined[joined.trend.isnull()])
joined = joined.merge(trend_de, 'left', ["Year", "Week"], suffixes=('', '_DE'))
len(joined[joined.trend_DE.isnull()])
joined = join_df(joined, weather, ["State","Date"])
len(joined[joined.Mean_TemperatureC.isnull()])
joined_test = test.merge(store, how='left', left_on='Store', right_index=True)
len(joined_test[joined_test.StoreType.isnull()])
# Next we'll fill in missing values to avoid complications w/ na's.
joined.CompetitionOpenSinceYear = joined.CompetitionOpenSinceYear.fillna(1900).astype(np.int32)
joined.CompetitionOpenSinceMonth = joined.CompetitionOpenSinceMonth.fillna(1).astype(np.int32)
joined.Promo2SinceYear = joined.Promo2SinceYear.fillna(1900).astype(np.int32)
joined.Promo2SinceWeek = joined.Promo2SinceWeek.fillna(1).astype(np.int32)
# Next we'll extract features "CompetitionOpenSince" and "CompetitionDaysOpen". Note the use of `apply()` in mapping a function across dataframe values.
joined["CompetitionOpenSince"] = pd.to_datetime(joined.apply(lambda x: datetime.datetime(
    x.CompetitionOpenSinceYear, x.CompetitionOpenSinceMonth, 15), axis=1))
joined["CompetitionDaysOpen"] = joined.Date.subtract(joined["CompetitionOpenSince"]).dt.days
# We'll replace some erroneous / outlying data.
joined.loc[joined.CompetitionDaysOpen<0, "CompetitionDaysOpen"] = 0
joined.loc[joined.CompetitionOpenSinceYear<1990, "CompetitionDaysOpen"] = 0
# Add a "CompetitionMonthsOpen" field, limiting the maximum to 2 years to cap the number of unique embeddings.
joined["CompetitionMonthsOpen"] = joined["CompetitionDaysOpen"]//30
joined.loc[joined.CompetitionMonthsOpen>24, "CompetitionMonthsOpen"] = 24
joined.CompetitionMonthsOpen.unique()
# Same process for Promo dates.
joined["Promo2Since"] = pd.to_datetime(joined.apply(lambda x: Week(
    x.Promo2SinceYear, x.Promo2SinceWeek).monday(), axis=1))
joined["Promo2Days"] = joined.Date.subtract(joined["Promo2Since"]).dt.days
joined.loc[joined.Promo2Days<0, "Promo2Days"] = 0
joined.loc[joined.Promo2SinceYear<1990, "Promo2Days"] = 0
joined["Promo2Weeks"] = joined["Promo2Days"]//7
joined.loc[joined.Promo2Weeks<0, "Promo2Weeks"] = 0
joined.loc[joined.Promo2Weeks>25, "Promo2Weeks"] = 25
joined.Promo2Weeks.unique()
# ## Durations
# It is common when working with time series data to extract data that explains relationships across rows as opposed to columns, e.g.:
# * Running averages
# * Time until next event
# * Time since last event
#
# This is often difficult to do with most table manipulation frameworks, since they are designed to work with relationships across columns. As such, we've created a class to handle this type of data.
columns = ["Date", "Store", "Promo", "StateHoliday", "SchoolHoliday"]
# We've defined a class `elapsed` for cumulative counting across a sorted dataframe.
# For example, on any given date: how many days until the next state holiday, and how many days since the previous one.
#
# Given a particular field `fld` to monitor, this object will start tracking time since the last occurrence of that field. When the field is seen again, the counter is set to zero.
#
# Upon initialization, this will result in datetime na's until the field is encountered. This is reset every time a new store is seen.
#
# We'll see how to use this shortly.
class elapsed(object):
def __init__(self, fld):
self.fld = fld
self.last = pd.to_datetime(np.nan)
self.last_store = 0
def get(self, row):
if row.Store != self.last_store:
self.last = pd.to_datetime(np.nan)
self.last_store = row.Store
if (row[self.fld]): self.last = row.Date
return row.Date-self.last
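# Here is the class in action on a toy two-store frame (hypothetical data, not from the competition); the counter tracks days since the last `Promo` and resets when the store changes.

```python
import numpy as np
import pandas as pd

# mirror of the elapsed class above, repeated so this cell is self-contained
class elapsed(object):
    def __init__(self, fld):
        self.fld = fld
        self.last = pd.to_datetime(np.nan)   # NaT until the field is first seen
        self.last_store = 0
    def get(self, row):
        if row.Store != self.last_store:     # new store: reset the counter
            self.last = pd.to_datetime(np.nan)
            self.last_store = row.Store
        if (row[self.fld]): self.last = row.Date
        return row.Date - self.last

toy = pd.DataFrame({'Store': [1, 1, 1, 2],
                    'Date': pd.to_datetime(['2015-01-01', '2015-01-02',
                                            '2015-01-03', '2015-01-01']),
                    'Promo': [True, False, False, True]})
toy['AfterPromo'] = toy.apply(elapsed('Promo').get, axis=1)
print(toy.AfterPromo.dt.days.tolist())   # [0, 1, 2, 0]
```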
df = train[columns]
# And a function for applying said class across dataframe rows and adding values to a new column.
def add_elapsed(fld, prefix):
tmp_el = elapsed(fld)
df[prefix+fld] = df.apply(tmp_el.get, axis=1)
# Let's walk through an example.
#
# Say we're looking at School Holiday. We'll first sort by Store, then Date, and then call `add_elapsed('SchoolHoliday', 'After')`:
# This will generate an instance of the `elapsed` class for School Holiday:
# * Instance applied to every row of the dataframe in order of store and date
# * Will add to the dataframe the days since seeing a School Holiday
# * If we sort in the other direction, this will count the days until the next school holiday.
fld = 'SchoolHoliday'
df = df.sort_values(['Store', 'Date'])
add_elapsed(fld, 'After')
df = df.sort_values(['Store', 'Date'], ascending=[True, False])
add_elapsed(fld, 'Before')
# We'll do this for two more fields.
fld = 'StateHoliday'
df = df.sort_values(['Store', 'Date'])
add_elapsed(fld, 'After')
df = df.sort_values(['Store', 'Date'], ascending=[True, False])
add_elapsed(fld, 'Before')
fld = 'Promo'
df = df.sort_values(['Store', 'Date'])
add_elapsed(fld, 'After')
df = df.sort_values(['Store', 'Date'], ascending=[True, False])
add_elapsed(fld, 'Before')
# We're going to set the active index to Date.
df = df.set_index("Date")
# Then set null values from elapsed field calculations to 0.
columns = ['SchoolHoliday', 'StateHoliday', 'Promo']
for o in ['Before', 'After']:
for p in columns:
a = o+p
df[a] = df[a].fillna(pd.Timedelta(0)).dt.days
# Next we'll demonstrate window functions in pandas to calculate rolling quantities.
#
# The idea is to calculate, for e.g., given any date, how many state holidays were there in the previous week, and how many in the next week.
#
# Here we're sorting by date (`sort_index()`) and counting the number of events of interest (`sum()`) defined in `columns` in the following week (`rolling()`), grouped by Store (`groupby()`). We also do the same in the opposite direction for similar windowing over the next week.
bwd = df[['Store']+columns].sort_index().groupby("Store").rolling(7, min_periods=1).sum()
fwd = df[['Store']+columns].sort_index(ascending=False
).groupby("Store").rolling(7, min_periods=1).sum()
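# The same pattern on a toy single-store frame (hypothetical data, not from the competition) makes the windowing easier to see: each row gets the sum of `Promo` over that row and the two preceding days.

```python
import pandas as pd

toy = pd.DataFrame({'Store': [1]*5,
                    'Promo': [1, 0, 1, 1, 0]},
                   index=pd.date_range('2015-01-01', periods=5, name='Date'))
# rolling 3-day sum within each store, partial windows allowed at the start
bwd_toy = toy.sort_index().groupby('Store').rolling(3, min_periods=1).sum()
print(bwd_toy['Promo'].tolist())   # [1.0, 1.0, 2.0, 2.0, 2.0]
```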
# Next we want to drop the Store indices grouped together in the window function.
#
# Often in pandas, there is an option to do this in place. This is time and memory efficient when working with large datasets.
bwd.drop('Store',1,inplace=True)
bwd.reset_index(inplace=True)
fwd.drop('Store',1,inplace=True)
fwd.reset_index(inplace=True)
df.reset_index(inplace=True)
# Now we'll merge these values onto the df.
df = df.merge(bwd, 'left', ['Date', 'Store'], suffixes=['', '_bw'])
df = df.merge(fwd, 'left', ['Date', 'Store'], suffixes=['', '_fw'])
df.drop(columns,1,inplace=True)
df.head()
# It's usually a good idea to back up large tables of extracted / wrangled features before you join them onto another one; that way you can go back to them easily if you need to make changes.
df.to_csv('df.csv')
df = pd.read_csv('df.csv', index_col=0)
df["Date"] = pd.to_datetime(df.Date)
df.columns
joined = join_df(joined, df, ['Store', 'Date'])
# We'll back this up as well.
joined.to_csv('joined.csv')
# We now have our final set of engineered features.
joined = pd.read_csv('joined.csv', index_col=0)
joined["Date"] = pd.to_datetime(joined.Date)
joined.columns
# While these steps were explicitly outlined in the paper, these are all fairly typical feature engineering steps for dealing with time series data and are practical in any similar setting.
# ## Create features
# Now that we've engineered all our features, we need to convert to input compatible with a neural network.
#
# This includes converting categorical variables into contiguous integers or one-hot encodings, normalizing continuous features to standard normal, etc...
from sklearn_pandas import DataFrameMapper
from sklearn.preprocessing import LabelEncoder, Imputer, StandardScaler
# This dictionary maps categories to embedding dimensionality. In general, categories we might expect to be conceptually more complex are given a larger dimension.
cat_var_dict = {'Store': 50, 'DayOfWeek': 6, 'Year': 2, 'Month': 6,
'Day': 10, 'StateHoliday': 3, 'CompetitionMonthsOpen': 2,
'Promo2Weeks': 1, 'StoreType': 2, 'Assortment': 3, 'PromoInterval': 3,
'CompetitionOpenSinceYear': 4, 'Promo2SinceYear': 4, 'State': 6,
'Week': 2, 'Events': 4, 'Promo_fw': 1,
'Promo_bw': 1, 'StateHoliday_fw': 1,
'StateHoliday_bw': 1, 'SchoolHoliday_fw': 1,
'SchoolHoliday_bw': 1}
# Name categorical variables
cat_vars = [o[0] for o in
sorted(cat_var_dict.items(), key=operator.itemgetter(1), reverse=True)]
"""cat_vars = ['Store', 'DayOfWeek', 'Year', 'Month', 'Day', 'StateHoliday',
'StoreType', 'Assortment', 'Week', 'Events', 'Promo2SinceYear',
'CompetitionOpenSinceYear', 'PromoInterval', 'Promo', 'SchoolHoliday', 'State']"""
# Likewise for continuous
# mean/max wind; min temp; cloud; min/mean humid;
contin_vars = ['CompetitionDistance',
'Max_TemperatureC', 'Mean_TemperatureC', 'Min_TemperatureC',
'Max_Humidity', 'Mean_Humidity', 'Min_Humidity', 'Max_Wind_SpeedKm_h',
'Mean_Wind_SpeedKm_h', 'CloudCover', 'trend', 'trend_DE',
'AfterStateHoliday', 'BeforeStateHoliday', 'Promo', 'SchoolHoliday']
"""contin_vars = ['CompetitionDistance', 'Max_TemperatureC', 'Mean_TemperatureC',
'Max_Humidity', 'trend', 'trend_DE', 'AfterStateHoliday', 'BeforeStateHoliday']"""
# Replace nulls w/ 0 for continuous, "" for categorical.
for v in contin_vars: joined.loc[joined[v].isnull(), v] = 0
for v in cat_vars: joined.loc[joined[v].isnull(), v] = ""
# Here we create a list of tuples, each containing a variable and an instance of a transformer for that variable.
#
# For categoricals, we use a label encoder that maps categories to continuous integers. For continuous variables, we standardize them.
cat_maps = [(o, LabelEncoder()) for o in cat_vars]
contin_maps = [([o], StandardScaler()) for o in contin_vars]
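# As a quick illustration (not from the original notebook) of what each transformer does on its own — `DataFrameMapper` simply applies them column by column:

```python
import numpy as np
from sklearn.preprocessing import LabelEncoder, StandardScaler

# categoricals: map each distinct value to a contiguous integer
le = LabelEncoder()
print(le.fit_transform(['a', 'c', 'a', 'b']))   # [0 2 0 1]

# continuous variables: standardize to zero mean and unit variance
ss = StandardScaler()
scaled = ss.fit_transform(np.array([[1.0], [2.0], [3.0]]))
print(scaled.ravel())
```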
# The same instances need to be used for the test set as well, so values are mapped/standardized appropriately.
#
# DataFrame mapper will keep track of these variable-instance mappings.
cat_mapper = DataFrameMapper(cat_maps)
cat_map_fit = cat_mapper.fit(joined)
cat_cols = len(cat_map_fit.features)
cat_cols
contin_mapper = DataFrameMapper(contin_maps)
contin_map_fit = contin_mapper.fit(joined)
contin_cols = len(contin_map_fit.features)
contin_cols
# Example: the first five columns of the zeroth row being transformed appropriately.
cat_map_fit.transform(joined)[0,:5], contin_map_fit.transform(joined)[0,:5]
# We can also pickle these mappings, which is great for portability!
pickle.dump(contin_map_fit, open('contin_maps.pickle', 'wb'))
pickle.dump(cat_map_fit, open('cat_maps.pickle', 'wb'))
[len(o[1].classes_) for o in cat_map_fit.features]
# ## Sample data
# Next, the authors removed all instances where the store had zero sales, i.e. was closed.
joined_sales = joined[joined.Sales!=0]
n = len(joined_sales)
# We speculate that this may have cost them a higher standing in the competition. A little EDA reveals that there are often periods when stores are closed, typically for refurbishment, and sales naturally spike just before and after these periods. By omitting this data from their training, the authors gave up the ability to leverage information about these periods to predict this otherwise volatile behavior.
n
# We're going to run on a sample.
samp_size = 100000
np.random.seed(42)
idxs = sorted(np.random.choice(n, samp_size, replace=False))
joined_samp = joined_sales.iloc[idxs].set_index("Date")
samp_size = n
joined_samp = joined_sales.set_index("Date")
# In time series data, cross-validation is not random. Instead, our holdout data is always the most recent data, as it would be in a real application.
# We've taken the last 10% as our validation set.
train_ratio = 0.9
train_size = int(samp_size * train_ratio)
train_size
joined_valid = joined_samp[train_size:]
joined_train = joined_samp[:train_size]
len(joined_valid), len(joined_train)
# Here's a preprocessor for our categoricals using our instance mapper.
def cat_preproc(dat):
return cat_map_fit.transform(dat).astype(np.int64)
cat_map_train = cat_preproc(joined_train)
cat_map_valid = cat_preproc(joined_valid)
# Same for continuous.
def contin_preproc(dat):
return contin_map_fit.transform(dat).astype(np.float32)
contin_map_train = contin_preproc(joined_train)
contin_map_valid = contin_preproc(joined_valid)
# Grab our targets.
y_train_orig = joined_train.Sales
y_valid_orig = joined_valid.Sales
# Finally, the authors modified the target values by applying a logarithmic transformation and normalizing to unit scale by dividing by the maximum log value.
#
# Log transformations are frequently used on this type of data to attain a nicer shape, because sales data is typically heavily skewed.
#
# Further, by scaling to the unit interval, we can use a sigmoid output in our neural network. To recover a prediction, we multiply by the maximum log value and then exponentiate to undo the log.
#
# However, since a sigmoid can only approach 1 asymptotically, targets at the very top of the scale are hard to fit; it might be prudent to scale by `(1.25 * max_log_y)` instead of `max_log_y` so the maximum target sits comfortably inside the sigmoid's range.
#
# **Aside:** If we take logs of the target variable, MAE should be used in the loss function instead of MSE.
max_log_y = np.max(np.log(joined_samp.Sales))
y_train = np.log(y_train_orig)/max_log_y
y_valid = np.log(y_valid_orig)/max_log_y
# Note: Some testing shows this doesn't make a big difference.
"""#y_train = np.log(y_train)
ymean=y_train_orig.mean()
ystd=y_train_orig.std()
y_train = (y_train_orig-ymean)/ystd
#y_valid = np.log(y_valid)
y_valid = (y_valid_orig-ymean)/ystd"""
# Root-mean-squared percent error is the metric Kaggle used for this competition.
def rmspe(y_pred, targ = y_valid_orig):
pct_var = (targ - y_pred)/targ
return math.sqrt(np.square(pct_var).mean())
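As a sanity check (a self-contained copy of the computation, since the default `targ` above is bound to the validation set): if every prediction overshoots its target by exactly 10%, RMSPE should be 0.1.

```python
import math
import numpy as np

def rmspe_demo(y_pred, targ):
    pct_var = (targ - y_pred) / targ
    return math.sqrt(np.square(pct_var).mean())

targ = np.array([100.0, 200.0, 400.0])
pred = targ * 1.1  # every prediction 10% too high
err = rmspe_demo(pred, targ)
```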
# These undo the target transformations.
def log_max_inv(preds, mx = max_log_y):
return np.exp(preds * mx)
def normalize_inv(preds):
return preds * ystd + ymean
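A round-trip sketch of the target transform and its inverse, using a stand-in for `max_log_y`:

```python
import numpy as np

sales = np.array([5000.0, 12000.0, 30000.0])
mx = np.max(np.log(sales))       # stand-in for max_log_y
scaled = np.log(sales) / mx      # network targets, in (0, 1]
recovered = np.exp(scaled * mx)  # what log_max_inv does
```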
# ## Create models
# Now we're ready to put together our models.
# Much of the following code has commented-out portions and alternate implementations.
"""
1 97s - loss: 0.0104 - val_loss: 0.0083
2 93s - loss: 0.0076 - val_loss: 0.0076
3 90s - loss: 0.0071 - val_loss: 0.0076
4 90s - loss: 0.0068 - val_loss: 0.0075
5 93s - loss: 0.0066 - val_loss: 0.0075
6 95s - loss: 0.0064 - val_loss: 0.0076
7 98s - loss: 0.0063 - val_loss: 0.0077
8 97s - loss: 0.0062 - val_loss: 0.0075
9 95s - loss: 0.0061 - val_loss: 0.0073
0 101s - loss: 0.0061 - val_loss: 0.0074
"""
def split_cols(arr): return np.hsplit(arr,arr.shape[1])
map_train = split_cols(cat_map_train) + [contin_map_train]
map_valid = split_cols(cat_map_valid) + [contin_map_valid]
len(map_train)
map_train = split_cols(cat_map_train) + split_cols(contin_map_train)
map_valid = split_cols(cat_map_valid) + split_cols(contin_map_valid)
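`split_cols` exists because a Keras functional model with many `Input`s expects one array per input: `np.hsplit(arr, arr.shape[1])` turns an `(n, k)` array into a list of `k` arrays of shape `(n, 1)`. A quick check on toy data:

```python
import numpy as np

def split_cols(arr):
    return np.hsplit(arr, arr.shape[1])

arr = np.arange(6).reshape(3, 2)  # [[0, 1], [2, 3], [4, 5]]
cols = split_cols(arr)
```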
# Helper function for getting categorical name and dim.
def cat_map_info(feat): return feat[0], len(feat[1].classes_)
cat_map_info(cat_map_fit.features[1])
def my_init(scale):
return lambda shape, name=None: initializations.uniform(shape, scale=scale, name=name)
def emb_init(shape, name=None):
return initializations.uniform(shape, scale=2/(shape[1]+1), name=name)
# Helper function for constructing embeddings. Notice the commented-out code: several different ways to compute embeddings are at play.
#
# Also, note we're flattening the embedding. Keras returns embeddings as a sequence (as it would for a sequence of words); since each input here is a single value, we flatten the length-1 sequence into a plain vector so the embeddings can be concatenated.
def get_emb(feat):
name, c = cat_map_info(feat)
#c2 = cat_var_dict[name]
c2 = (c+1)//2
if c2>50: c2=50
inp = Input((1,), dtype='int64', name=name+'_in')
# , W_regularizer=l2(1e-6)
u = Flatten(name=name+'_flt')(Embedding(c, c2, input_length=1, init=emb_init)(inp))
# u = Flatten(name=name+'_flt')(Embedding(c, c2, input_length=1)(inp))
return inp,u
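Numerically, an embedding layer is just a row lookup in a weight matrix; here is a NumPy sketch of what the `Embedding` + `Flatten` pair produces for one categorical value (toy weights, not Keras itself):

```python
import numpy as np

c, c2 = 7, 4  # cardinality and embedding width
rng = np.random.default_rng(0)
W = rng.uniform(-0.25, 0.25, size=(c, c2))  # embedding weight matrix

idx = np.array([3])             # one categorical value (input_length=1)
seq_out = W[idx]                # Embedding output: a length-1 sequence, shape (1, c2)
flat_out = seq_out.reshape(-1)  # Flatten: a plain vector, shape (c2,)
```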
# Helper function for continuous inputs.
def get_contin(feat):
name = feat[0][0]
inp = Input((1,), name=name+'_in')
return inp, Dense(1, name=name+'_d', init=my_init(1.))(inp)
# Let's build them.
contin_inp = Input((contin_cols,), name='contin')
contin_out = Dense(contin_cols*10, activation='relu', name='contin_d')(contin_inp)
#contin_out = BatchNormalization()(contin_out)
# Now we can put them together. Given the inputs, continuous and categorical embeddings, we're going to concatenate all of them.
#
# Next, we're going to pass through some dropout, then two dense layers w/ ReLU activations, then dropout again, then the sigmoid activation we mentioned earlier.
#
# **Aside:** Please note that we have used MAE in the loss function instead of MSE, since we have taken logs of the target variable.
# +
embs = [get_emb(feat) for feat in cat_map_fit.features]
#conts = [get_contin(feat) for feat in contin_map_fit.features]
#contin_d = [d for inp,d in conts]
x = concatenate([emb for inp,emb in embs] + [contin_out])
#x = merge([emb for inp,emb in embs] + contin_d, mode='concat')
x = Dropout(0.02)(x)
x = Dense(1000, activation='relu', init='uniform')(x)
x = Dense(500, activation='relu', init='uniform')(x)
x = Dropout(0.2)(x)
x = Dense(1, activation='sigmoid')(x)
model = Model([inp for inp,emb in embs] + [contin_inp], x)
#model = Model([inp for inp,emb in embs] + [inp for inp,d in conts], x)
model.compile('adam', 'mean_absolute_error')
#model.compile(Adam(), 'mse')
# -
# ### Start training
# %%time
hist = model.fit(map_train, y_train, batch_size=128, epochs=25,
verbose=0, validation_data=(map_valid, y_valid))
hist.history
plot_train(hist)
preds = np.squeeze(model.predict(map_valid, 1024))
# Result on validation data: 0.1678 (samp 150k, 0.75 trn)
log_max_inv(preds)
normalize_inv(preds)
# + [markdown] heading_collapsed=true
# ## Using 3rd place data
# + hidden=true
pkl_path = '/data/jhoward/github/entity-embedding-rossmann/'
# + hidden=true
def load_pickle(fname):
return pickle.load(open(pkl_path+fname + '.pickle', 'rb'))
# + hidden=true
[x_pkl_orig, y_pkl_orig] = load_pickle('feature_train_data')
# + hidden=true
max_log_y_pkl = np.max(np.log(y_pkl_orig))
y_pkl = np.log(y_pkl_orig)/max_log_y_pkl
# + hidden=true
pkl_vars = ['Open', 'Store', 'DayOfWeek', 'Promo', 'Year', 'Month', 'Day',
'StateHoliday', 'SchoolHoliday', 'CompetitionMonthsOpen', 'Promo2Weeks',
'Promo2Weeks_L', 'CompetitionDistance',
'StoreType', 'Assortment', 'PromoInterval', 'CompetitionOpenSinceYear',
'Promo2SinceYear', 'State', 'Week', 'Max_TemperatureC', 'Mean_TemperatureC',
'Min_TemperatureC', 'Max_Humidity', 'Mean_Humidity', 'Min_Humidity', 'Max_Wind_SpeedKm_h',
'Mean_Wind_SpeedKm_h', 'CloudCover','Events', 'Promo_fw', 'Promo_bw',
'StateHoliday_fw', 'StateHoliday_bw', 'AfterStateHoliday', 'BeforeStateHoliday',
'SchoolHoliday_fw', 'SchoolHoliday_bw', 'trend_DE', 'trend']
# + hidden=true
x_pkl = np.array(x_pkl_orig)
# + hidden=true
gt_enc = StandardScaler()
gt_enc.fit(x_pkl[:,-2:])
# + hidden=true
x_pkl[:,-2:] = gt_enc.transform(x_pkl[:,-2:])
# + hidden=true
x_pkl.shape
# + hidden=true
x_pkl = x_pkl[idxs]
y_pkl = y_pkl[idxs]
# + hidden=true
x_pkl_trn, x_pkl_val = x_pkl[:train_size], x_pkl[train_size:]
y_pkl_trn, y_pkl_val = y_pkl[:train_size], y_pkl[train_size:]
# + hidden=true
x_pkl_trn.shape
# + hidden=true
xgb_parms = {'learning_rate': 0.1, 'subsample': 0.6,
'colsample_bylevel': 0.6, 'silent': True, 'objective': 'reg:linear'}
# + hidden=true
xdata_pkl = xgboost.DMatrix(x_pkl_trn, y_pkl_trn, feature_names=pkl_vars)
# + hidden=true
xdata_val_pkl = xgboost.DMatrix(x_pkl_val, y_pkl_val, feature_names=pkl_vars)
# + hidden=true
xgb_parms['seed'] = random.randint(0, 10**9)
model_pkl = xgboost.train(xgb_parms, xdata_pkl)
# + hidden=true
model_pkl.eval(xdata_val_pkl)
# + hidden=true
#0.117473
# + hidden=true
importance = model_pkl.get_fscore()
importance = sorted(importance.items(), key=operator.itemgetter(1))
df = pd.DataFrame(importance, columns=['feature', 'fscore'])
df['fscore'] = df['fscore'] / df['fscore'].sum()
df.plot(kind='barh', x='feature', y='fscore', legend=False, figsize=(6, 10))
plt.title('XGBoost Feature Importance')
plt.xlabel('relative importance');
# + [markdown] heading_collapsed=true hidden=true
# ### Neural net
# + hidden=true
#np.savez_compressed('vars.npz', pkl_cats, pkl_contins)
#np.savez_compressed('deps.npz', y_pkl)
# + hidden=true
pkl_cats = np.stack([x_pkl[:,pkl_vars.index(f)] for f in cat_vars], 1)
pkl_contins = np.stack([x_pkl[:,pkl_vars.index(f)] for f in contin_vars], 1)
# + hidden=true
co_enc = StandardScaler().fit(pkl_contins)
pkl_contins = co_enc.transform(pkl_contins)
# + hidden=true
pkl_contins_trn, pkl_contins_val = pkl_contins[:train_size], pkl_contins[train_size:]
pkl_cats_trn, pkl_cats_val = pkl_cats[:train_size], pkl_cats[train_size:]
y_pkl_trn, y_pkl_val = y_pkl[:train_size], y_pkl[train_size:]
# + hidden=true
def get_emb_pkl(feat):
name, c = cat_map_info(feat)
c2 = (c+2)//3
if c2>50: c2=50
inp = Input((1,), dtype='int64', name=name+'_in')
u = Flatten(name=name+'_flt')(Embedding(c, c2, input_length=1, init=emb_init)(inp))
return inp,u
# + hidden=true
n_pkl_contin = pkl_contins_trn.shape[1]
contin_inp = Input((n_pkl_contin,), name='contin')
contin_out = BatchNormalization()(contin_inp)
# + hidden=true
map_train_pkl = split_cols(pkl_cats_trn) + [pkl_contins_trn]
map_valid_pkl = split_cols(pkl_cats_val) + [pkl_contins_val]
# + hidden=true
def train_pkl(bs=128, ne=10):
return model_pkl.fit(map_train_pkl, y_pkl_trn, batch_size=bs, nb_epoch=ne,
verbose=0, validation_data=(map_valid_pkl, y_pkl_val))
# + hidden=true
def get_model_pkl():
    # NB: get_contin_pkl is not defined in this notebook, and conts is unused below
    #conts = [get_contin_pkl(feat) for feat in contin_map_fit.features]
embs = [get_emb_pkl(feat) for feat in cat_map_fit.features]
x = merge([emb for inp,emb in embs] + [contin_out], mode='concat')
x = Dropout(0.02)(x)
x = Dense(1000, activation='relu', init='uniform')(x)
x = Dense(500, activation='relu', init='uniform')(x)
x = Dense(1, activation='sigmoid')(x)
model_pkl = Model([inp for inp,emb in embs] + [contin_inp], x)
model_pkl.compile('adam', 'mean_absolute_error')
#model.compile(Adam(), 'mse')
return model_pkl
# + hidden=true
model_pkl = get_model_pkl()
# + hidden=true
train_pkl(128, 10).history['val_loss']
# + hidden=true
K.set_value(model_pkl.optimizer.lr, 1e-4)
train_pkl(128, 5).history['val_loss']
# + hidden=true
"""
1 97s - loss: 0.0104 - val_loss: 0.0083
2 93s - loss: 0.0076 - val_loss: 0.0076
3 90s - loss: 0.0071 - val_loss: 0.0076
4 90s - loss: 0.0068 - val_loss: 0.0075
5 93s - loss: 0.0066 - val_loss: 0.0075
6 95s - loss: 0.0064 - val_loss: 0.0076
7 98s - loss: 0.0063 - val_loss: 0.0077
8 97s - loss: 0.0062 - val_loss: 0.0075
9 95s - loss: 0.0061 - val_loss: 0.0073
0 101s - loss: 0.0061 - val_loss: 0.0074
"""
# + hidden=true
plot_train(hist)
# + hidden=true
preds = np.squeeze(model_pkl.predict(map_valid_pkl, 1024))
# + hidden=true
y_orig_pkl_val = log_max_inv(y_pkl_val, max_log_y_pkl)
# + hidden=true
rmspe(log_max_inv(preds, max_log_y_pkl), y_orig_pkl_val)
# -
# ## XGBoost
# XGBoost is extremely quick and easy to use. Aside from being a powerful predictive model, it gives us information about feature importance.
X_train = np.concatenate([cat_map_train, contin_map_train], axis=1)
X_valid = np.concatenate([cat_map_valid, contin_map_valid], axis=1)
all_vars = cat_vars + contin_vars
xgb_parms = {'learning_rate': 0.1, 'subsample': 0.6,
'colsample_bylevel': 0.6, 'silent': True, 'objective': 'reg:linear'}
xdata = xgboost.DMatrix(X_train, y_train, feature_names=all_vars)
xdata_val = xgboost.DMatrix(X_valid, y_valid, feature_names=all_vars)
xgb_parms['seed'] = random.randint(0, 10**9)
model = xgboost.train(xgb_parms, xdata)
model.eval(xdata_val)
# Competition distance is easily the most important feature, while events matter hardly at all.
#
# In real applications, putting together a feature importance plot is often a first step. Oftentimes, we can remove hundreds of thousands of features from consideration with importance plots.
# +
importance = model.get_fscore()
importance = sorted(importance.items(), key=operator.itemgetter(1))
df = pd.DataFrame(importance, columns=['feature', 'fscore'])
df['fscore'] = df['fscore'] / df['fscore'].sum()
df.plot(kind='barh', x='feature', y='fscore', legend=False, figsize=(6, 10))
plt.title('XGBoost Feature Importance')
plt.xlabel('relative importance');
# -
# ## End
| deeplearning2/nbs/rossman.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Example of category and frequency based aspect detection
import numpy as np
import pandas as pd
from tqdm.notebook import tqdm
import json
import sqlalchemy
from sqlalchemy import create_engine
from sqlalchemy.engine import reflection
from sqlalchemy.schema import Table, MetaData
connstr = "postgresql+psycopg2://{user}:{pwd}@{ipaddress}/{dbname}".format(
user='postgres', pwd='<PASSWORD>', ipaddress='localhost', dbname='bdlab'
)
engine = create_engine(connstr)
def read(sql, engine):
conn = engine.connect()
data = pd.read_sql(sql, conn)
conn.close()
return data
# ## Get reviews
sql = """
select b.id, b.name, r.content
from yelp.textclip as r
join yelp.business as b on r.business = b.id
join yelp.reviewer as u on r.author = u.id
where b.review > 500
limit 10000
"""
R = read(sql, engine=engine)
R.shape
R.head()
# ## Get categories
K = read("select * from yelp.incat", engine)
K.head(2)
category = lambda x: list(K[K.business==x].category.values)
sample_cat = R.id.unique()
category_map = dict([(b, category(b)) for b in sample_cat])
category_map['AfN3Z1U6QPEgAb5F2CQm8w']
# ## Indexing
# We can extract n-grams from the text and index them by category
import spacy
nlp = spacy.load("en_core_web_sm")
from collections import defaultdict
import time
def tokenize(text):
doc = nlp(text)
tokens = []
for sentence in doc.sents:
tokens.append([(t.lemma_, t.pos_) for t in sentence])
return tokens
def shift_ngrams(text, window=3):
grams = []
for sent in tokenize(text):
for i, (token, pos) in enumerate(sent):
if pos == 'NOUN':
grams.append(token)
for token_j, pos_j in sent[max([i-window, 0]):i+window+1]:
if pos_j in ['NOUN', 'ADJ', 'VERB', 'ADV'] and token_j != token:
grams.append((token_j, token))
return grams
test = R.content.values[0]
print(test)
shift_ngrams(test)
# ### Note: this is slow, so we pre-compute and save indexes
# +
unigram = defaultdict(lambda: defaultdict(lambda: 0))
bigram = defaultdict(lambda: defaultdict(lambda: defaultdict(lambda: 0)))
business_units = R.id.values
reviews = R.content.values
for i, text in tqdm(list(enumerate(reviews))):
business = business_units[i]
for token in shift_ngrams(text):
for k in category_map[business]:
if isinstance(token, tuple):
for x in token:
unigram[k][x] += 1
bigram[k][token[1]][token[0]] += 1
else:
unigram[k][token] += 1
U = dict([(x, dict(y)) for x, y in unigram.items()])
B = {}
for x, y in bigram.items():
data = dict([(p, dict(q)) for p, q in y.items()])
B[x] = data
# -
with open('data/unigram.json', 'w') as uo:
json.dump(U, uo)
with open('data/bigram.json', 'w') as bo:
json.dump(B, bo)
# ## Aspect detection
with open('data/unigram.json', 'r') as uo:
U = json.load(uo)
with open('data/bigram.json', 'r') as bo:
B = json.load(bo)
list(U['Restaurants'].items())[:4]
for k, d in list(B['Restaurants'].items())[:4]:
print(k, list(d.items())[:4])
# ## Kullback–Leibler approach
# $$
# KL_t = p(t)\log \frac{p(t)}{q(t)}
# $$
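As a toy numeric example (hypothetical probabilities): if a term has probability 0.5 within the category but only 0.25 in the global corpus, its contribution is $0.5\ln 2 \approx 0.347$.

```python
import numpy as np

def kl_term(p_t, q_t):
    # contribution of a single term t to KL(category || corpus)
    return p_t * np.log(p_t / q_t)

score = kl_term(0.5, 0.25)
```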
from collections import defaultdict
global_u = defaultdict(lambda: 0)
global_s = 0
for k, v in U.items():
for t, w in v.items():
global_u[t] += w
global_s += w
def kl_unigram(category, unigram):
kl = {}
s = sum(unigram[category].values())
for k, v in unigram[category].items():
p_k = v / s
q_k = global_u[k] / global_s
kl[k] = p_k * np.log(p_k / q_k)
return kl
klu = kl_unigram('Restaurants', U)
candidates = [(k, v) for k, v in sorted(klu.items(), key=lambda x: -x[1])]
candidates[:20]
# ## Probability for bigrams
# $$
# p(i \mid j) \approx \frac{count(i, j)}{\sum\limits_{k} count(k, j)} \approx \frac{count(i, j)}{2 \cdot window \cdot count(j)}
# $$
#
# $$
# pmi(a, b) = p(a, b)\log \frac{p(a, b)}{p(a)p(b)}
# $$
kU = U['Restaurants']
kB = B['Restaurants']
def pab(w1, w2, idx, uidx):
n = sum(uidx.values())
try:
p = idx[w1][w2]
z = sum(idx[w1].values())
p_ab = p / z
p_w1 = uidx[w1] / n
p_w2 = uidx[w2] / n
p_c = p_w1 * p_w2
pmi = p_ab * np.log(p_ab / p_c)
except KeyError:
return 0
return pmi
pab('service', 'food', kB, kU)
# ## Exercise: suggest ideas on how to aggregate these terms into aspects
from nltk.corpus import wordnet as wn
# ### Example 1: WordNet
# +
def h_context(word):
h = lambda s: s.hypernyms()
b_all = wn.synsets(word, pos=wn.NOUN)
H = {}
for b in b_all:
H[b] = 0
for i, j in enumerate(b.closure(h)):
if j in H.keys():
if H[j] < i + 1:
H[j] = i + 1
else:
H[j] = i + 1
return H
def containement(context_a, context_b):
common = [(x, y, context_b[x]) for x, y in context_a.items() if x in context_b.keys()]
return common
# -
c_a = h_context('pizza')
c_b = h_context('food')
containement(c_a, c_b)
k = [x for x, _ in candidates[:20]]
for i, w1 in enumerate(k):
for w2 in k:
if w1 != w2:
cont = containement(h_context(w1), h_context(w2))
if len(cont) > 0:
print(w1, w2, cont[0])
# ### Example 2: word context
kU = U['Restaurants']
kB = B['Restaurants']
V = list(kU.keys())
k = [x for x, _ in candidates[:20]]
m = np.zeros((len(k), len(V)))
for w in k:
i = k.index(w)
for w_con, _ in kB[w].items():
try:
j = V.index(w_con)
m[i,j] = pab(w, w_con, kB, kU)
    except ValueError:  # list.index raises ValueError for a missing element, not IndexError
pass
from sklearn.metrics.pairwise import cosine_similarity
sigma = cosine_similarity(m, m)
ip = k.index('pizza')
most_sim = [(k, v) for k, v in sorted(enumerate(sigma[ip]), key=lambda x: -x[1])]
for wi, w_score in most_sim:
print(k[wi], round(w_score, 3))
from sklearn.cluster import KMeans
kmeans = KMeans(n_clusters=6)
clusters = kmeans.fit_predict(m)
cii = set(clusters)
vectors = {}
for c in cii:
bag = []
for i, cluster in enumerate(clusters):
if cluster == c:
bag.append(k[i])
vectors[c] = bag
m_1 = np.array([m[k.index(w)] for w in vectors[1]])
m_1.shape
sim_c = cosine_similarity(kmeans.cluster_centers_[1].reshape(1, -1), m_1)
sim_c.shape
for pos, values in sorted(enumerate(sim_c[0]), key=lambda x: -x[1]):
print(vectors[1][pos], round(values, 3))
| absa/aspects.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .r
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Ruby 2.7.1
# language: ruby
# name: ruby
# ---
# +
$LOAD_PATH << File.dirname(__FILE__) + "/../lib"
require 'toji'
require './example_core'
require 'rbplotly'
moromi = Example::Progress::MoromiProgress.load_yaml_file("moromi_progress.yaml")
moromi.progress_note.plot.show
moromi.progress_note.table.tap{|t| t.layout.height=1000}.show
moromi.bmd.plot.show
moromi.ab(1.4)
.expect(16.5, +3)
.expect(17.0, +3)
.expect(16.5, +0)
.plot.show
nil
# -
| example/moromi_progress.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # lawzhidao_filter Overview
# 0. **Download:** [Baidu Zhidao](https://pan.baidu.com/s/18Lwq16VBo6wBD_qLb3i33g)
# 1. **Data overview:** 36,000 legal question-answer pairs
# 2. **Suggested experiment:** FAQ question-answering systems
# 3. **Data source:** Baidu Zhidao
# 4. **Processing:**
#     1. Dropped the id, url, qid, reply_t, and user fields
#     2. Anonymized the question and reply fields
import pandas as pd
path = 'path_to_the_lawzhidao_folder'
# # 1. lawzhidao_filter.csv
# ## Load the data
pd_all = pd.read_csv(path + 'lawzhidao_filter.csv')
# ## Field descriptions
#
# | Field | Description |
# | ---- | ---- |
# | title | Title of the question |
# | question | Body of the question (may be empty) |
# | reply | Content of the reply |
# | is_best | Whether this is the best answer displayed on the page |
pd_all.sample(n=20)
| resources/nlp/ChineseNlpCorpus/datasets/lawzhidao/intro.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Practical Deep Learning for Coders, v3
# # Lesson4_collab
from fastai.collab import *
from fastai.tabular import *
# # Collaborative filtering example
# `collab` models use data in a `DataFrame` of users, items, and ratings.
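Under the hood, the fastai `collab` model scores a (user, item) pair with a dot product of their embeddings plus per-user and per-item biases, squashed into `y_range` with a sigmoid. A minimal NumPy sketch of that scoring rule (toy random embeddings, not fastai's trained weights):

```python
import numpy as np

rng = np.random.default_rng(0)
n_users, n_movies, n_factors = 3, 4, 5
U = rng.normal(size=(n_users, n_factors))   # user embeddings
M = rng.normal(size=(n_movies, n_factors))  # movie embeddings
bu = np.zeros(n_users)                      # user biases
bm = np.zeros(n_movies)                     # movie biases

def predict(u, m, y_range=(0, 5.5)):
    lo, hi = y_range
    raw = U[u] @ M[m] + bu[u] + bm[m]
    return lo + (hi - lo) / (1 + np.exp(-raw))  # sigmoid scaled into y_range

p = predict(0, 1)
```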
user,item,title = 'userId','movieId','title'
path = untar_data(URLs.ML_SAMPLE)
path
ratings = pd.read_csv(path/'ratings.csv')
ratings.head()
# That's all we need to create and train a model:
data = CollabDataBunch.from_df(ratings, seed=42)
y_range = [0,5.5]
learn = collab_learner(data, n_factors=50, y_range=y_range)
learn.fit_one_cycle(3, 5e-3)
# ## Movielens 100k
# Let's try with the full Movielens 100k dataset, available from http://files.grouplens.org/datasets/movielens/ml-100k.zip
path=Config.data_path()/'ml-100k'
ratings = pd.read_csv(path/'u.data', delimiter='\t', header=None,
names=[user,item,'rating','timestamp'])
ratings.head()
movies = pd.read_csv(path/'u.item', delimiter='|', encoding='latin-1', header=None,
names=[item, 'title', 'date', 'N', 'url', *[f'g{i}' for i in range(19)]])
movies.head()
len(ratings)
rating_movie = ratings.merge(movies[[item, title]])
rating_movie.head()
data = CollabDataBunch.from_df(rating_movie, seed=42, valid_pct=0.1, item_name=title)
data.show_batch()
y_range = [0,5.5]
learn = collab_learner(data, n_factors=40, y_range=y_range, wd=1e-1)
learn.lr_find()
learn.recorder.plot(skip_end=15)
learn.fit_one_cycle(5, 5e-3)
learn.save('dotprod')
# Here are [some benchmarks](https://www.librec.net/release/v1.3/example.html) on the same dataset for the popular Librec collaborative filtering system. Their best result has an RMSE of 0.91, which corresponds to an MSE of `0.91**2 = 0.83`.
# ## Interpretation
# ### Setup
learn.load('dotprod');
learn.model
g = rating_movie.groupby(title)['rating'].count()
top_movies = g.sort_values(ascending=False).index.values[:1000]
top_movies[:10]
# ### Movie bias
movie_bias = learn.bias(top_movies, is_item=True)
movie_bias.shape
mean_ratings = rating_movie.groupby(title)['rating'].mean()
movie_ratings = [(b, i, mean_ratings.loc[i]) for i,b in zip(top_movies,movie_bias)]
item0 = lambda o:o[0]
sorted(movie_ratings, key=item0)[:15]
sorted(movie_ratings, key=lambda o: o[0], reverse=True)[:15]
# ### Movie weights
movie_w = learn.weight(top_movies, is_item=True)
movie_w.shape
movie_pca = movie_w.pca(3)
movie_pca.shape
fac0,fac1,fac2 = movie_pca.t()
movie_comp = [(f, i) for f,i in zip(fac0, top_movies)]
sorted(movie_comp, key=itemgetter(0), reverse=True)[:10]
sorted(movie_comp, key=itemgetter(0))[:10]
movie_comp = [(f, i) for f,i in zip(fac1, top_movies)]
sorted(movie_comp, key=itemgetter(0), reverse=True)[:10]
sorted(movie_comp, key=itemgetter(0))[:10]
idxs = np.random.choice(len(top_movies), 50, replace=False)
idxs = list(range(50))
X = fac0[idxs]
Y = fac2[idxs]
plt.figure(figsize=(15,15))
plt.scatter(X, Y)
for i, x, y in zip(top_movies[idxs], X, Y):
plt.text(x,y,i, color=np.random.rand(3)*0.7, fontsize=11)
plt.show()
| zh-nbs/Lesson4_collab.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # 4.6. Using stride tricks with NumPy
import numpy as np
def aid(x):
# This function returns the memory
# block address of an array.
return x.__array_interface__['data'][0]
x = np.zeros(10)
x.strides
y = np.zeros((10, 10))
y.strides
n = 1000
a = np.arange(n)
b = np.lib.stride_tricks.as_strided(a, (n, n), (0, 8))
b
b.size, b.shape, b.nbytes
# %timeit b * b.T
# %%timeit
np.tile(a, (n, 1)) * np.tile(a[:, np.newaxis], (1, n))
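To see that both expressions compute the same outer product, here's a small check (using the array's own `itemsize` for the stride rather than a hard-coded 8):

```python
import numpy as np
from numpy.lib.stride_tricks import as_strided

n = 4
a = np.arange(n)
# stride 0 along axis 0 repeats the same row n times without copying
b = as_strided(a, shape=(n, n), strides=(0, a.itemsize))
outer_view = b * b.T
outer_tile = np.tile(a, (n, 1)) * np.tile(a[:, np.newaxis], (1, n))
```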
| 001-Jupyter/001-Tutorials/002-IPython-Cookbook/chapter04_optimization/06_stride_tricks.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# [View in Colaboratory](https://colab.research.google.com/github/ruzbro/data-analysis/blob/master/03LinearRegressionModels_converted_from_JupyterHyub.ipynb)
# + [markdown] id="xr-yFWgVh-Xu" colab_type="text"
# # Berkeley Program on Data Science and Analytics
# ## Module IV, Part III: A Linear Regression Model
#
# <br/>
#
# <div class="container">
# <div style="float:left;width:40%">
# <img src="images/bikeshare_sun.jpg">
# </div>
# <div style="float:left;width:40%">
# <img src="images/bikeshare_snow.PNG">
# </div>
# </div>
#
# ### Table of Contents
#
# [Case Study: Bike Sharing](#section case)<br>
#
# [The Test-Train Split](#subsection 0)
#
#
# 1 - [Exploratory Data Analysis](#section 1)<br>
#
# a - [Data Types and Summary Statistics](#subsection 1a)
#
# b - [Visualizations Continued: Numerical Data and Widgets](#subsection 1b)
#
#
# 2 - [Linear Regression Model](#section 2)<br>
#
# a - [Explanatory and Response Variables](#subsection 2a)
#
# b - [Finding $\beta$](#subsection 2b)
#
# c - [Evaluating the Model](#subsection 2c)
#
#
# 3 - [Challenge: Improve the Model](#section 3)<br>
#
# + id="IUVe7DmlcAyc" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 1958} outputId="8b6d1d22-bc3f-43e4-b2c9-f9e488cb2738"
# !pip install ipywidgets
# !pip install --upgrade datascience
# + id="-DyPAwdUh-Xx" colab_type="code" colab={}
# run this cell to import some necessary packages
from datascience import *
import numpy as np
import matplotlib.pyplot as plt
plt.style.use('fivethirtyeight')  # plot style inspired by the FiveThirtyEight ("538") site
# %matplotlib inline
#from scripts.exec_ed_scripts import *
import pandas as pd
import ipywidgets as widgets
from scipy.linalg import lstsq
import seaborn as sns
# + id="bn0IC4ZjdjdP" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 2210} outputId="74b85173-3789-47a3-dbf1-5d52174ab843"
# !pip freeze
# + [markdown] id="oyqgcHOoh-X0" colab_type="text"
# ## Case Study: Capital Bike Share <a id= "section case"></a>
#
# Bike-sharing systems have become increasingly popular worldwide as environmentally-friendly solutions to traffic congestion, inadequate public transit, and the "last-mile" problem. Capital Bikeshare runs one such system in the Washington, D.C. metropolitan area.
#
# The Capital Bikeshare system comprises docks of bikes, strategically placed across the area, that can be unlocked by *registered* users who have signed up for a monthly or yearly plan or by *casual* users who pay by the hour or day. They collect data on the number of casual and registered users per hour and per day.
#
# Let's say that Capital Bikeshare is interested in a **prediction** problem: predicting how many riders they can expect to have on a given day. [UC Irvine's Machine Learning Repository](https://archive.ics.uci.edu/ml/datasets/Bike+Sharing+Dataset) has combined the bike sharing data with information about weather conditions and holidays to try to answer this question.
#
# In this notebook, we'll walk through the steps a data scientist would take to answer this question.
# + [markdown] id="ttNUMgvVh-X0" colab_type="text"
# ### The Test-Train Split <a id="subsection 0"> </a>
#
# When we train a model on a data set, we run the risk of [**over-fitting**](http://scikit-learn.org/stable/auto_examples/model_selection/plot_underfitting_overfitting.html). Over-fitting happens when a model becomes so complex that it makes very accurate predictions for the data it was trained on, but it can't generalize to make good predictions on new data.
#
# Over- and under-fitting are most easily explained visually. The [Scikit-Learn machine learning library](http://scikit-learn.org) has a good example:
#
# <img src="http://scikit-learn.org/stable/_images/sphx_glr_plot_underfitting_overfitting_001.png"/>
#
# The linear model on the left is **under-fitting**: we can see that there is a lot of vertical distance (the *error*) between the actual samples (the dots) and the prediction (the blue line). The 15-degree model on the right is over-fitting: there's almost no error, but the model is so complex it is unlikely to generalize to new data. Our goal is to get the model in the middle: reduce the error as much as possible while keeping the complexity low.
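The same effect can be sketched with NumPy alone: fit polynomials of increasing degree to noisy samples and watch the *training* error shrink even as the model grows too flexible (a hypothetical toy setup, not the Scikit-Learn example itself):

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(-1, 1, 30)
y = np.cos(1.5 * np.pi * x) + rng.normal(scale=0.1, size=x.size)

def train_mse(degree):
    # least-squares polynomial fit, evaluated on the same points it was fit to
    coeffs = np.polyfit(x, y, degree)
    return float(np.mean((np.polyval(coeffs, x) - y) ** 2))

errs = [train_mse(d) for d in (1, 4, 10)]
```

Training error alone cannot distinguish a good mid-degree fit from an over-flexible one; only held-out data can.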
#
# We can reduce the risk of overfitting by using a **test-train split**.
#
# 1. Randomly divide our data set into two smaller sets: one for training and one for testing
# 2. Train the data on the training set, changing our model along the way to increase accuracy
# 3. Test the data's predictions using the test set.
#
# <div class="alert alert-info">
#
# **Over-fitting to the test set**
# By using the test set over and over to check the predictive accuracy of different models, we run the risk of over-fitting to the test set as well. In the real world, data scientists get around this by also using a **validation set**- a portion of training data that the model isn't trained on, used to find optimal *hyperparameters* for the model (parameters that are set before the model is trained). Because we're using only limited hyperparameters, and because our model is for educational purposes, we'll only use training and test sets.
#
# </div>
#
# Our bike data has been divided ahead of time into test and training sets. Run the next cell to load the training and test data.
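A minimal sketch of how such a random split can be produced (hypothetical 80/20 ratio; the actual files were prepared ahead of time):

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100                    # number of rows in the full dataset
idx = rng.permutation(n)   # shuffle the row indices
cut = int(n * 0.8)         # 80% train / 20% test
train_idx, test_idx = idx[:cut], idx[cut:]
```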
# + id="MNcxPrf2i1FI" colab_type="code" colab={"resources": {"http://localhost:8080/nbextensions/google.colab/files.js": {"data": "<KEY> "ok": true, "headers": [["content-type", "application/javascript"]], "status": 200, "status_text": ""}}, "base_uri": "https://localhost:8080/", "height": 137} outputId="b20dea3e-bfa9-4537-8e38-c7a45fd2feee"
from google.colab import files
uploaded = files.upload()
for fn in uploaded.keys():
print('User uploaded file "{name}" with length {length} bytes'.format(
name=fn, length=len(uploaded[fn])))
# + id="6GbL0H7wh-X0" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 323} outputId="1daadccf-6922-44fd-bcc2-ef0269f2c66e"
# run this cell to load the data
bike_train = pd.read_csv("day_train.csv")
# load the test data
bike_test = pd.read_csv("day_test.csv")
# reformat the date column from strings to dates
bike_train['date'] = pd.to_datetime(bike_train['date'])
bike_test['date'] = pd.to_datetime(bike_test['date'])
bike_train.head()
# + [markdown] id="G4JElEtAh-X2" colab_type="text"
# **QUESTION:** Data is often expensive to collect, and having a good predictive model can be the difference between success and ruin. Given these factors, the decision of how much data to set aside for testing and validation is very personal.
#
# What are some reasons for putting a larger portion of data into the training set? What are some risks?
# + [markdown] id="i32h2Emvh-X3" colab_type="text"
# **ANSWER:** The assumption would be that more data makes for a more comprehensive training set, and therefore better predictions. The risk is that the test set becomes small: with only a few test points, our estimate of the model's accuracy on new data becomes noisy and unreliable.
# + [markdown] id="kyXJIS7xh-X4" colab_type="text"
# ## 1. Exploratory Data Analysis (EDA) <a id= "section 1"></a>
#
# > "It is important to understand what you CAN DO before you learn to measure how WELL you seem to have done it." -<NAME>, *Exploratory Data Analysis*
#
# **Exploratory Data Analysis (EDA)** is the process of 'looking at data to see what it seems to say'. EDA is an essential first step toward answering any research question. Through this process, we hope to accomplish several things:
# - learn about the overall 'shape' of the data: structure, organization, ranges of values
# - assess what assumptions we can make about the data as a basis for later statistical inference
# - figure out the appropriate tools and techniques for analysis
# - tentatively create testable, appropriate hypotheses or models
#
# We will do this by looking at summary statistics and visualizations of the different variables.
#
# ### 1a. Data Types and Summary Statistics <a id= "subsection 1a"></a>
#
# Before we even know how to visualize the data, we need to know what types of data we're working with. Run the following cell to show our bike sharing training data.
# + id="vVezGzyeh-X4" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 3043} outputId="8dab6b6a-f0b5-4fb9-939e-37ce5b03da17"
bike_train
# + [markdown] id="otSg5nPhh-X8" colab_type="text"
# A few of the less straight-forward columns can be described as follows:
# - **instant**: record index
# - **is 2012** : 1 if the date is in 2012, 0 if the date is in 2011
# - **is holiday** : 1 if day is a holiday, 0 otherwise
# - **is work day** : 1 if day is not a weekend or holiday, otherwise 0
# - **weather** :
#     - 1: Clear, Few clouds, Partly cloudy
# - 2: Mist + Cloudy, Mist + Broken clouds, Mist + Few clouds, Mist
# - 3: Light Snow, Light Rain + Thunderstorm + Scattered clouds, Light Rain + Scattered clouds
# - 4: Heavy Rain + Ice Pallets + Thunderstorm + Mist, Snow + Fog
# - **temp** : Normalized temperature in Celsius. The values are derived via (t-t_min)/(t_max-t_min), t_min=-8, t_max=+39 (only in hourly scale)
# Note that historic temperature in Washington DC has exceeded 39C several times over the past decade
# https://www.currentresults.com/Yearly-Weather/USA/DC/Washington/extreme-annual-washington-high-temperature.php
#
# - **felt temp**: Normalized feeling temperature in Celsius. The values are derived via (t-t_min)/(t_max-t_min), t_min=-16, t_max=+50 (only in hourly scale)
# - **humidity**: Normalized humidity. The values are divided by 100 (the maximum)
# - **windspeed**: Normalized wind speed. The values are divided by 67 (the maximum)
# - **casual**: count of casual users
# - **registered**: count of registered users
# - **total riders**: count of total rental bikes (casual + registered)
# + [markdown] id="zksmoZXDh-X9" colab_type="text"
# **QUESTION:** Which of the variables are numerical and which are categorical? Intuitively, which do you think would be useful for predicting the number of riders on a given day? Would you choose different variables depending on if you wanted to predict casual versus registered rider counts?
# + [markdown] id="H_bI2Mjeh-X-" colab_type="text"
# **ANSWER:**
# Which of the variables are numerical and which are categorical?
# Categorical: is 2012, is holiday, is work day, and weather; the rest are numerical (excluding the record index).
#
# Intuitively, which do you think would be useful for predicting the number of riders on a given day?
# Weather should be the biggest predictor of the number of riders (weather = 1 or 2 meaning more riders).
#
# Would you choose different variables depending on if you wanted to predict casual versus registered rider counts?
# Registered riders may be more committed cyclists, and hence willing to ride even in inclement weather, unlike casual riders, who may be more inclined to ride when the weather is great.
# + [markdown] id="WAU0miGGh-X_" colab_type="text"
# #### Summary Statistics
# It can also be useful to know some *summary statistics* about the different variables: things like the minimum, maximum, and average. Earlier, we learned how to do this on individual columns using functions like `min`, `max`, and `np.average`.
#
# Thankfully, we can generate a variety of summary statistics for many variables at the same time using a function called `describe`. `describe` works on a type of table called a DataFrame, which is how our bike data is already stored. Run the following cell to generate the summary statistics.
# + id="OsC2ILlZh-YA" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 334} outputId="a6e7c7d0-bbf8-4062-cbb1-c16cde28432b"
# generate summary statistics
bike_train.describe()
# + [markdown] id="AWFXFbL7h-YC" colab_type="text"
# **QUESTION:** Looking at these statistics as data scientists, we're interested in a few things in particular:
# - are there any values missing (e.g. days for which some data was not collected)?
# - what kinds of values does each variable take?
# - are there any extreme values that might throw off our analysis?
#
# Using the summary table, answer these questions below.
# + [markdown] id="j1VMUw-9h-YD" colab_type="text"
# **ANSWER:**
# Are there any values missing? No: every column contains 584 values.
# What kinds of values does each variable take? All values are numeric.
# Are there any extreme values that might throw off our analysis? None apparent from the summary statistics.
#
# + [markdown] id="xk0Gd4aOh-YD" colab_type="text"
# ### 1b. Visualization Continued: Numerical Data and Widgets <a id= "subsection 1b"></a>
# So far, we've worked largely with categorical variables, which we visualized with bar graphs. The bike sharing data contains several *numerical* variables, which will necessitate different visualizations.
#
# You've previously used the `hist` function to visualize the distribution of a numerical variable. The following cell creates a **widget** that will make different histograms based on the variable you choose in the drop box. Run the cell to create the widget (don't worry too much about the details of the code).
# + id="XXJJ5M_8mXg7" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 282} cellView="both" outputId="c9c59559-973a-4c26-f73c-50f408418c32"
#@title Histogram widget
# create a widget to plot and compare different histograms
# max values: temp: 39c, felt temp 50c, humidity: 100%, windspeed 67mph
explantory_slider = 'temp' #@param ["temp", "felt temp", "humidity", "windspeed"]
display(widgets.interactive(lambda x: bike_train.hist(x, bins=30), x=explantory_slider))
# + [markdown] id="helHilYwh-YH" colab_type="text"
# **QUESTION:** Describe the distributions of the different variables. Are they normally distributed? Are any of them *skewed* (that is, do any of them have most of their values to the left or right of the histogram's center)? What values do each of them take on?
# + [markdown] id="b_XTrxV-h-YI" colab_type="text"
# **ANSWER:** Temp and felt temp appear to be multi-modal, while humidity and windspeed are both roughly normally distributed (except for some skew in windspeed).
# + [markdown] id="twzs16XYh-YI" colab_type="text"
# To predict the number of riders (the **response variable**) based on an **explanatory variable**, we often want to plot them against one another.
#
# `plot.scatter` is a DataFrame method that creates a scatter plot of one numerical variable versus another. The first argument specifies the name of the variable to be plotted on the x-axis, and the second specifies the name of the variable on the y-axis.
# + id="iZYPpCb8h-YJ" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 298} outputId="1e7dd8ac-a2e2-48ff-b0b9-05354cd551c0"
# example of scatter: plot the number of casual riders against registered riders
bike_train.plot.scatter("casual", "registered")
# + [markdown] id="0DQOhORYh-YM" colab_type="text"
# As you might remember from Professor Kariv's videos in Module II, the shape of the scatter plot can give us some information about how two variables are **correlated**: what the value of one variable can or cannot tell you about the value of another.
#
# **EXERCISE:** Try plotting at least one numerical explanatory variable (temp, felt temp, windspeed, or humidity) against a response variable (casual, registered, or total riders). What would you say about the relationship between the two variables based on the scatter plot?
# + id="hhlxyEUJh-YM" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 298} outputId="076ba7a5-dfe2-4bcc-86d1-c7e12c7360e4"
# your code here
bike_train.plot.scatter("windspeed", "registered")
# + [markdown] id="gQXZl3LRh-YQ" colab_type="text"
# You can also *overlay* two scatter plots on top of one another. This can be helpful when comparing multiple response variables.
#
# To overlay two scatter plots with pandas, pass the axes object returned by the first plot as the `ax` argument of the second. The following cell gives an example: try substituting in different column names to create different plots.
# + id="aMIN1bX9h-YS" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 281} outputId="419ff29d-2cdf-41dc-f137-81f257aab46c"
# plot temperature against casual and registered riders
df1 = bike_train.plot.scatter("temp", "casual", label="casual", color="y")
df2 = bike_train.plot.scatter("temp", "registered", label="registered", color = "b", ax=df1)
# + id="KwBwOQsVh-YV" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 281} outputId="b98b862d-c8a1-49b9-fc3d-f9254616a271"
# plot humidity against casual and registered riders
df1 = bike_train.plot.scatter("humidity", "casual", label="casual", color="y")
df2 = bike_train.plot.scatter("humidity", "registered",label="registered", color="b", ax=df1)
# + id="ye3L5jigh-YZ" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 281} outputId="8c8b3e1e-f279-40cf-eacd-d0f306d83de2"
# plot humidity against casual and total riders
df1 = bike_train.plot.scatter("humidity", "casual", label="casual", color="y")
df2 = bike_train.plot.scatter("humidity", "total riders", label="total riders",color = "b", ax=df1)
# + id="g8IGQIIkh-Yc" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 281} outputId="4b7f917f-dc02-4741-edac-b61bf6ea22bd"
# plot windspeed against casual and total riders
df1 = bike_train.plot.scatter("windspeed", "casual", label="casual", color="y")
df2 = bike_train.plot.scatter("windspeed", "total riders", label="total riders", color = "b", ax=df1)
# + [markdown] id="lq12yEQ6h-Ye" colab_type="text"
# In the following cell, we've created another widget to make it easier to compare multiple variables against one another.
# + id="IIqgACznh-Ye" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 281} cellView="both" outputId="a30d0caa-088b-4151-ea5d-db3851412ff6"
# create a widget to make different scatter plots
x = 'humidity' #@param ["date", "temp", "felt temp", "humidity", "windspeed"]
df1 = bike_train.plot.scatter(x, "registered", label="registered", color="y")
df2 = bike_train.plot.scatter(x, "casual", label="casual", color="b", ax=df1)
# + [markdown] id="G8vAgmQuh-Yi" colab_type="text"
# **QUESTION:** Based on the scatter plots, which variables appear to be linearly correlated with rider counts? Which variables appear to be non-linearly correlated or uncorrelated? Is the apparent correlation different for casual or registered riders?
# + [markdown] id="fu7Qa5Heh-Yi" colab_type="text"
# **ANSWER:** Temp and felt temp seem to be linearly correlated with rider counts, although there's less correlation for registered users -- which probably indicates that registered riders are heavy users, unfazed by increases in temperature.
# Windspeed and humidity don't seem to be correlated with rider counts.
# + [markdown] id="v3W86-Y_h-Yk" colab_type="text"
# Finally, we want to visualize our categorical variables using bar graphs. Remember, for categorical variables we are grouping rows into the different possible categories (like the seven days of the week) and aggregating all the values in the group into a single value (in this case, taking the average).
#
# Run the next cell to create a widget for making the different bar plots.
# + id="tQAvIeqWh-Yk" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 265} cellView="both" outputId="8a23d794-7a01-4682-831d-d10a601fda9d"
# define a function that groups by an explanatory variable and creates a horizontal bar graph
def barh_bikes(expl_var):
g = bike_train[[expl_var, "registered", "casual"]]
g = g.groupby([expl_var]).agg(['mean'])
return g.plot.barh()
explantory_slider = 'day of week' #@param ["season", "month", "is holiday", "day of week", "is work day", "weather"]
display(widgets.interactive(barh_bikes, expl_var=explantory_slider))
# + [markdown] id="mhv5QQ89h-Ym" colab_type="text"
# **QUESTION:** Many of our categorical variables are related to time (e.g. week day, month, etc). How do usage patterns over time differ for registered and casual users? In what categories do the different user types act similarly?
# + [markdown] id="_3O6-F4Bh-Yn" colab_type="text"
# **ANSWER:** On holidays and weekends, the usage trend reverses between registered and casual riders. Registered users ride more on weekdays (probably commuting to work), while casual users ride more on weekends, presumably for recreation.
# The monthly usage trend is the same for both groups (peaking in June-July), and usage is highest when the weather is best (weather = 1 versus 3).
# + [markdown] id="O8BolUj9h-Yn" colab_type="text"
# ## 2. The Regression Model <a id= "section 2"></a>
#
# To try to predict the number of riders on a given day, we'll use a regression model. From Module II:
#
# > A **simple regression model** describes how the conditional mean of a response variable $y$ depends on an explanatory variable $x$: $$\mu_{y|x} = \beta_0 + \beta_1x$$ This equation describes our best guess for $y$ given a particular $x$.
#
# > A **multiple regression model** describes how the conditional mean of a response variable $y$ depends on multiple explanatory variables $x_1, x_2, ..., x_k$: $$\mu_{y|x_1, x_2, ..., x_k} = \beta_0 + \beta_1x_1 + \beta_2x_2 + ... + \beta_kx_k$$ This equation describes our best guess for $y$ given a particular vector $x_1, x_2, ..., x_k$.
#
# In this case, our model will look something like this:
# $$\mu_{\text{rider count}|\text{temp, humidity, ..., month}} = \beta_0 + \beta_1*\text{(temp)} + \beta_2*\text{(humidity)} + ... + \beta_k*\text{(month)}$$
# The code for either a simple or multiple regression model is basically identical except for the number of columns we select for inclusion in our explanatory variables.
#
# To create our model, we need to:
# 1. isolate the explanatory variables (X) and response variable (y) we want to use
# 2. find the values for the $\beta$ variables on the best-fit regression line
# 3. evaluate the accuracy of our model
#
# ### 2a. Explanatory and response variables <a id="subsection 2a"></a>
#
# First, let's decide on our response variable. We'll try to predict the *total number of riders* on a given day. The response variable needs to be a single column of values (not a whole table), so we'll select it from the DataFrame using bracket notation.
# + id="TxQG7S0Ah-Yo" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 1071} outputId="519e240f-c55d-417f-ad20-8c404d0efbd4"
# response variable: total count of riders in a day (training set)
y_train = bike_train['total riders']
# response variable: total count of riders in a day (validation set)
y_test = bike_test['total riders']
y_train
# + [markdown] id="qTsMveAeh-Yq" colab_type="text"
# Next, we want to choose our explanatory variables. Let's try predicting ridership in terms of _temperature_, _work day_, and _season_.
#
# <div class="alert alert-info">
#
# **Why don't we just use all the explanatory variables?**
# You might think that the best model would use *all* the available explanatory information. But using many variables makes a model **computationally expensive**. In the real world, where data sets may have a million or more rows, using a complex model can increase the time and computing power needed to make predictions. Additionally, many variables may have **little predictive power**, such that excluding them from the model doesn't lower the accuracy very much. Other variables might **add noise** that actually decreases the model's performance outside of the training data.
#
# </div>
#
# Here, we run into a problem: "work day" and "season" are categorical variables (even though they have numerical values). This gets tricky with regression: the computer treats the values as if they were numbers, and that can lead to questionable manipulations. For example, since Sunday is coded as 0 and Saturday is coded as 6, the computer might conclude that the average of Sunday and Saturday is Wednesday (since Wednesday is coded as 3).
#
# #### One-Hot Encoding
# To work around this, we will **one-hot encode** all our categorical variables. In one-hot encoding, the possible values of the variable each get their own column of 1s and 0s. The value is a 1 if that day falls in that category and a 0 otherwise.
#
# Here's an example. Say we have three possible weather states: rain, cloudy, or sunny.
# + id="tGf9IAM1h-Yq" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 142} outputId="2314c914-46a5-4f00-f17b-804a69f5645e"
# original statement: categorical = Table().with_columns("weather", ["rainy", "cloudy", "sunny", "cloudy"])
# corrected statement below eliminates duplicated value of "cloudy"
categorical = Table().with_columns("weather", ["rainy", "cloudy", "sunny"])
categorical
# + [markdown] id="PshbGVD_h-Ys" colab_type="text"
# The one-hot encoding would look like this:
# + id="gVHNz_-7h-Yt" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 142} outputId="bf704670-3379-4235-e4f9-f66d7b3170c8"
# pd.get_dummies is a function that does dummy encoding
one_hot = pd.get_dummies(categorical.column('weather'))
one_hot
# + [markdown] id="gcoc0S6Zh-Yw" colab_type="text"
# Notice that in each row, only one of the values is equal to 1 (hence the name, "one-hot" encoding), since no day can have more than one weather state.
#
# Notice also that we don't technically need the third column.
# + id="8nOVq4n9h-Yx" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 142} outputId="389629a7-6eb4-45a1-a9be-c531d3fee4c5"
# drop returns a new DataFrame, so the result must be reassigned to keep the change
one_hot = one_hot.drop("sunny", axis=1)
one_hot
# + [markdown] id="04LfItw5h-Y3" colab_type="text"
# If we know that there are only three possible weather states, and we see that day 2 was neither cloudy nor rainy (that is, `cloudy`=0 and `rainy`=0), day 2 *must* have been sunny. This is helpful to save computation time and space. If you have some knowledge of linear algebra, note that this is also helpful to solve the problem of *perfect multicollinearity*- a situation that can make it impossible to compute the optimal set of $\beta$s.
#
# For simplicity, we've provided a function called `format_X` that will take a Table of explanatory variables and convert it to the correct format for prediction, including one-hot encoding the categorical variables. `format_X` will also add a column called "intercept" that only contains 1s. This column will help find the intercept term $\beta_0$ in our regression line. You can think of the intercept term as an $x_0$ that gets multiplied by $\beta_0$ and is always equal to 1.
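# The `format_X` helper itself isn't shown in this notebook (the next cell performs the same steps manually). A hypothetical reconstruction of what such a function might look like -- the names `format_X`, `numeric_cols`, and `categorical_cols` are assumptions, and the course's actual implementation may differ:

```python
import pandas as pd

def format_X(df, numeric_cols, categorical_cols):
    """Hypothetical sketch: keep numeric columns as-is, one-hot encode each
    categorical column (dropping one level to avoid perfect multicollinearity),
    and prepend an all-ones intercept column."""
    parts = [df[numeric_cols]]
    for col in categorical_cols:
        # get_dummies produces one 0/1 column per category; drop the last one
        parts.append(pd.get_dummies(df[col], prefix=col).iloc[:, :-1])
    X = pd.concat(parts, axis=1)
    X.insert(0, "intercept", 1)  # constant column so the fit can find beta_0
    return X
```

# Under these assumptions, `format_X(bike_train, ['temp', 'is work day'], ['season'])` would mirror the manual steps in the next cell (up to column naming).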
# + id="BPW2vI-cmvFZ" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 204} outputId="25488aaa-4bd4-4484-b36a-67d9517f79cd"
# build the training explanatory matrix: temp, is work day, and one-hot encoded season
ds1 = bike_train['temp']
ds2 = bike_train['is work day']
ds3 = pd.get_dummies(bike_train['season'])
X_train = pd.concat([ds1, ds2, ds3], axis=1)
# drop one season column to avoid perfect multicollinearity
X_train = X_train.drop(labels='4:winter', axis=1)
# all-ones column so the fit can find the intercept term
X_train.insert(0, 'intercept', 1)
X_train.head()
# + [markdown] id="5UGxk0Gjh-Y7" colab_type="text"
# **EXERCISE:** Since we want to try the model on the test data as well, we will also perform the same transformations on the test set so it can fit the model. Fill in the code to first select the explanatory variables "temp", "is work day", and "season", then convert the explanatory table to the matrix format using `format_X`. Hint: we'll need to go through the exact same steps as in the above cell for the training data, but any references to training data should be replaced by their test data counterparts.
# + id="xjJAsIMBh-Y8" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 204} outputId="0df7b6fb-042d-43d8-f267-ec1688fe2d67"
ds1 = bike_test['temp']
ds2 = bike_test['is work day']
ds3 = bike_test['season']
ds3 = pd.get_dummies(ds3)
X_test = pd.concat([ds1, ds2, ds3], axis=1)
X_test = X_test.drop(labels='4:winter', axis=1)
X_test.insert(0, 'intercept', 1)
X_test.head()
# + [markdown] id="pA0nLcLth-Y-" colab_type="text"
# ### 2b. Finding $\beta$ <a id="subsection 2b"></a>
# The next step is to calculate the $\beta$ terms. We can do this with the `lstsq` function from the SciPy scientific computing package.
#
# Given a matrix of explanatory variables X and an array of response variables y, `lstsq` returns a vector $\beta$. `lstsq` uses *ordinary least squares* as its **loss function (L)**: the function that defines the training loss (error) and what we seek to minimize (often using linear algebra or calculus, depending on the loss function). The ordinary least squares equation is a common loss function that is used to minimize the sum of squared errors:
#
# $$L(\beta) = \frac{1}{n}\sum_{i=1}^{n}(y_i - \text{predicted}(\beta, x_i))^2$$
#
# where $n$ is the number of days, $y_i$ is the actual number of riders on the $i$th day, and $\text{predicted}(\beta, x_i)$ is the number of riders predicted for the $i$th day using $\beta$ and the explanatory variables $x_i$. Minimizing the loss function yields our optimal $\beta$.
#
# `lstsq` returns a list of four things, but for our purposes we're only interested in one of them: the array of the $\beta$ values for the best-fit line.
# + id="LtTtxezth-ZA" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 102} outputId="2bb90784-b731-44bb-c143-2aa809bce7f3"
# calculate the least squares solution
# X_train holds 6 explanatory columns: intercept, temp, is work day, 1:spring, 2:summer, 3:fall
# the second argument to lstsq is the response variable y_train ("total riders")
y_train = bike_train['total riders']
lstsq_results = lstsq(X_train, y_train)
print(lstsq_results)
# scipy.linalg.lstsq returns 4 values; the first is the array of betas for the best-fit line
beta = lstsq_results[0]
beta
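# As a sanity check on a tiny synthetic problem (not the bike data), the $\beta$ that least squares returns matches the closed-form "normal equations" solution $\beta = (X^TX)^{-1}X^Ty$. This sketch uses NumPy's `lstsq`, which behaves like the SciPy version used above:

```python
import numpy as np
from numpy.linalg import lstsq, solve

rng = np.random.default_rng(0)
# 50 synthetic observations: an intercept column plus one random feature
X_demo = np.column_stack([np.ones(50), rng.random(50)])
# true relationship y = 3 + 5x, plus a little noise
y_demo = 3 + 5 * X_demo[:, 1] + rng.normal(0, 0.1, 50)

beta_lstsq = lstsq(X_demo, y_demo, rcond=None)[0]
# normal equations: solve (X^T X) beta = X^T y directly
beta_normal = solve(X_demo.T @ X_demo, X_demo.T @ y_demo)
```

# Both routes recover betas close to the true values of 3 and 5; `lstsq` is preferred in practice because it stays numerically stable even when $X^TX$ is nearly singular.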
# + [markdown] id="bbUXd4foh-ZD" colab_type="text"
# We now have everything we need to make predictions about the total number of riders on a given day. Remember, the formula is: $$\mu_{y|x_1, x_2, ..., x_k} = \beta_0 + \beta_1x_1 + \beta_2x_2 + ... + \beta_kx_k$$
#
# So, to make a prediction for a given day, we take all the values in the X matrix corresponding to that day, multiply each value by its corresponding $\beta$, then add them all together. The `@` operator can help us with this matrix multiplication.
#
# For example, here are the first ten rows of our explanatory variable matrix.
# + id="6MSbdLtgh-ZE" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 359} outputId="6218c1ef-8975-47a4-916d-76f8fcea335c"
X_train.loc[0:9, :]
# + id="gSqkCOIjh-ZI" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 1105} outputId="2dc1f62d-2aeb-43e1-dab6-5a8e708f0911"
print(X_train)
# + [markdown] id="oBBuQlDwh-ZK" colab_type="text"
# To get the prediction, we use `@` to multiply each item in the row by each corresponding item in the $\beta$ vector and sum them all up. If you've taken linear algebra, you'll recognize this as the [*dot product*](https://en.wikipedia.org/wiki/Dot_product).
#
# <img src="images/vector_mult.png" />
#
#
# + id="ZU0QoRLPh-ZK" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="311318d9-5dd3-434d-e25a-107a72b4201c"
# multiply the arrays using Row 0 of X_train to get the prediction
X_train.loc[0, :] @ beta
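# To see that `@` really computes the dot product, here's a tiny standalone check with made-up numbers (unrelated to the bike data):

```python
import numpy as np

row = np.array([1.0, 0.5, 2.0])            # one made-up row of explanatory values
beta_demo = np.array([100.0, 40.0, -3.0])  # a made-up beta vector

# the @ operator: multiply element-wise and sum
by_operator = row @ beta_demo              # 1*100 + 0.5*40 + 2*(-3) = 114
# the same thing written out by hand
by_hand = sum(x * b for x, b in zip(row, beta_demo))
```

# Both give 114: each explanatory value scaled by its coefficient, then summed into a single prediction.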
# + [markdown] id="tQyzqh6Rh-ZM" colab_type="text"
# The `@` operator can also work on matrices. To get the predictions for *every* row in X, we use exactly the same syntax.
#
# <img src="images/matrix_mult.png" />
#
# + id="WEpPwiwlh-ZN" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 2499} outputId="2b417eb3-eacd-4bd2-c28a-34ff95686315"
predict_train = X_train @ beta
predict_train
# + [markdown] id="5T32ty5hh-ZO" colab_type="text"
#
# Now we can add our predictions to our original table.
# + id="_iJgHNLWh-ZP" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 3043} outputId="e8675aab-6042-403f-abe8-255ea0f3357e"
bike_train.insert(16, 'predicted total riders', predict_train)
bike_train
# + [markdown] id="E1YednBEh-ZR" colab_type="text"
# **EXERCISE:** We also want to make predictions for the test data using the $\beta$ we found during training. Replace the `...` in the cell below with an expression to calculate the predictions for the test set. Remember- you need to use `@` to multiply each row of explanatory variables in our test set by the $\beta$ vector. Look at how `predict_train` was calculated for a guide.
# + id="VKrDAtefvjdk" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 102} outputId="dd6d7e7e-8078-446f-88e9-edeb2db4e5c6"
# reuse the beta found on the training data: re-fitting lstsq on the test set
# would leak test information into the model and defeat the purpose of holding out data
y_test = bike_test['total riders']
# + id="aElDWXUUh-ZS" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 3043} outputId="1bd10d59-170c-46f3-e4c9-9bb5bec13a9c"
# predict test-set ridership using the training beta
predict_test = X_test @ beta
bike_test.insert(16, "predicted total riders", predict_test)
bike_test
# + [markdown] id="tC4wpEo0h-ZV" colab_type="text"
# ### 2c. Evaluating the model <a id="subsection 2c"></a>
#
# Our model makes predictions, but how good are they? We can start to get a sense of how we did by plotting the predictions versus the actual values on our training data on a scatter plot. Remember from Module II that if our model predicts perfectly:
#
# - the predicted values will be equal to the actual values
# - all points in the scatter plot will fall along a straight line with a slope of 1
#
# As a bonus, seaborn's `lmplot` has an argument called `fit_reg` that will add the best-fit line to the plot if you set it to `True`.
# + id="RggsHag8h-ZW" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 403} outputId="cc7d734a-1f70-4b22-cad7-9137067807e4"
df = bike_train
sns.lmplot(x='predicted total riders',y='total riders',data=df,fit_reg=True)
plt.title("Training set predictions")
# + [markdown] id="1LUXUO4Eh-Za" colab_type="text"
# Here are the test set predictions scattered against the actual total rider counts.
# + id="P-PaXrtlh-Za" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 403} outputId="4115b34e-dfa0-43ec-9cec-534ee0aabfc8"
df = bike_test
sns.lmplot(x='predicted total riders',y='total riders',data=df,fit_reg=True)
plt.title("Test set predictions")
# + [markdown] id="L4QE6Orvh-Zd" colab_type="text"
# We can also get a quantitative measure of how good our model is by calculating the **root mean squared error (RMSE)**. This is fairly straightforward to calculate from the predictions and actual values:
# 1. subtract the predictions from the actual values to get the errors
# 2. square the errors
# 3. take the average of the squared errors
# 4. take the square root of the average squared error
# + id="dyVX7mikh-Ze" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="2fc32d33-a0f5-45ed-cafa-e9db611e8560"
# the root mean squared error of the training data
errors = y_train - predict_train
sq_error = errors ** 2
mean_sq_error = np.average(sq_error)
root_mean_sq_err = np.sqrt(mean_sq_error)
root_mean_sq_err
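# The four steps above can be wrapped in a small helper function. This is just a sketch (a course-provided version could differ in details):

```python
import numpy as np

def rmse(predicted, actual):
    """Root mean squared error: the square root of the average squared difference
    between predictions and actual values."""
    errors = np.asarray(actual) - np.asarray(predicted)   # step 1: errors
    return np.sqrt(np.mean(errors ** 2))                  # steps 2-4: square, average, root
```

# For example, perfect predictions give an RMSE of 0, and `rmse([0, 0], [3, 4])` averages the squared errors 9 and 16 before taking the root.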
# + [markdown] id="kLSlLWlQh-Zh" colab_type="text"
# <div class="alert alert-info">
#
# **Why Root Mean Squared Error (RMSE)?**
# To know why RMSE is useful, it helps to think about the steps to derive it in order. We *square* the errors to get rid of any negative signs (we don't care if we overpredict or underpredict, we just want to know the magnitude of the error). We then want the *average* magnitude of the error to see how far off our model typically was. Finally, we take the *square root* to get back to the original units (in this case, number of riders as opposed to squared number of riders).
#
# </div>
#
# Next, we want to see what the RMSE is for our test set predictions, computed with the same steps as above.
#
# Before you run the next cell: would you expect the RMSE for the test set to be higher, lower, or about the same as the RMSE for the training set? Why?
# + id="8HQs317ih-Zj" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="5239e468-6e30-4d70-b6b0-740c06e64236"
errors = y_test - predict_test
sq_error = errors ** 2
mean_sq_error = np.average(sq_error)
root_mean_sq_err = np.sqrt(mean_sq_error)
root_mean_sq_err
# + [markdown] id="z-yMGz2eh-Zl" colab_type="text"
# We can also visualize our errors compared to the actual values on a scatter plot.
# + id="JScYIbIIh-Zl" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 403} outputId="cd4e17c2-2c4c-4bbd-b8f5-7f2f6c1f00b1"
# note: use the training errors here; the `errors` variable above holds test-set errors
bike_train.insert(17, "training error", y_train - predict_train)
df = bike_train
sns.lmplot(x='predicted total riders',y='training error',data=df,fit_reg=True)
plt.title("Training set errors")
# + id="lyVImV_Kh-Zn" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 403} outputId="5de3db72-81fa-4fc9-8391-2504bcedadc2"
bike_test.insert(17, "validation error", y_test - predict_test)
df = bike_test
sns.lmplot(x='predicted total riders',y='validation error',data=df,fit_reg=True)
plt.title("Test set errors")
# + [markdown] id="1HXiasCih-Zp" colab_type="text"
# **QUESTION:** Based on the plots and root mean squared error above, how well do you think our model is doing? What does the shape of the scatter plot of errors tell us about the appropriateness of the linear model here?
# + [markdown] id="_LlhDijlh-Zq" colab_type="text"
# **ANSWER:** The ratio of training to test RMSE is about 0.885 (1392 / 1572), and the non-linear pattern in the residual scatter plots shows that the model needs some improvement.
# + [markdown] id="WlyBRmoHh-Zr" colab_type="text"
# ## 3. Challenge: Improve the Model <a id="section 3"></a>
#
# Our model is currently not very good, but there are many things we could try to improve it.
#
# In the following cells you have almost everything you need to create a new linear regression model. To try a new model, fill in the two sets of ellipses below:
# - set `response` to the *string name* of the response variable you want to predict
# - set `expl` to be a *list of string names of explanatory variables* you want to incorporate into the model. Remember, the names should be strings (i.e. in quotation marks) and separated by commas in between the square brackets.
#
# Once you've filled in the ellipses, run all the cells below in order to recalculate the $\beta$ vector, make new predictions, and look at the residuals. A helpful tip: in the "Cell" menu at the top, clicking "Run All Below" will run all code cells below the cell you currently have selected.
#
# How accurate can you make the model?
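As a sketch of the pattern used in the cells below — one-hot encoding a categorical column with `pd.get_dummies` and joining it to numeric columns — here is a minimal example on a toy frame (the column names and values are illustrative, not the actual bike-sharing data):

```python
import pandas as pd

# toy stand-in for the bike-sharing data (hypothetical values)
df = pd.DataFrame({
    "temp": [0.3, 0.5, 0.7],
    "season": ["winter", "spring", "summer"],
})

# one-hot encode the categorical column and join it with the numeric one
X = pd.concat([df[["temp"]], pd.get_dummies(df["season"])], axis=1)
X.insert(0, "intercept", 1)  # constant column for the intercept term
```

Each category becomes its own 0/1 column, so the least-squares solver can fit a separate coefficient per season.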
# + id="pmyLS-X7h-Zr" colab_type="code" colab={}
# select a response variable: "casual", "registered", or "total riders"
# response = widgets.Dropdown(options=["casual","registered","total riders"])
# explantory_slider = widgets.Dropdown(options=["temp", "felt temp", "humidity", "windspeed"])
# display(widgets.interactive(lambda x: bike_train.hist(x, bins=30), x=explantory_slider))
bike_train = pd.read_csv("day_train.csv")
bike_test = pd.read_csv("day_test.csv")
response = "casual"
y_train = bike_train[response]
y_test = bike_test[response]
# + id="EDWi6-up0Mkg" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 204} outputId="9a9782de-7510-4069-8ddb-5e01a1403fda"
ds1 = bike_train['is holiday']
ds2 = bike_train['temp']
ds3 = bike_train['humidity']
ds4 = bike_train['windspeed']
ds5 = bike_train['season']
ds5 = pd.get_dummies(ds5)
X_train = pd.concat([ds1, ds2, ds3, ds4, ds5], axis=1)
X_train.insert(0, 'intercept', 1)
X_train.head()
# + id="o3zYX6Ii3Op7" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 204} outputId="3fabfd65-e7d7-47c2-e436-eb84a3556b38"
ds1 = bike_test['is holiday']
ds2 = bike_test['temp']
ds3 = bike_test['humidity']
ds4 = bike_test['windspeed']
ds5 = bike_test['season']
ds5 = pd.get_dummies(ds5)
X_test = pd.concat([ds1, ds2, ds3, ds4, ds5], axis=1)
X_test.insert(0, 'intercept', 1)
X_test.head()
# + id="uOc8EVZqh-Zw" colab_type="code" colab={}
# calculate the least squares solution
beta = lstsq(X_train, y_train)[0]
# + id="G4OlWKOFh-Zx" colab_type="code" colab={}
# calculate predicted values
pred_train = X_train @ beta
bike_train.insert(16, "predicted {}".format(response), pred_train)
pred_test = X_test @ beta
bike_test.insert(16, "predicted {}".format(response), pred_test)
# + id="OHY8D7pPh-Zz" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 298} outputId="f8fde8f5-6c38-40e0-cd54-41b78c0b88be"
# compare predictions to actual values on a scatter plot
bike_train.plot.scatter("predicted {}".format(response), response)
plt.title("Training set predictions");
# + id="XWEAGECWh-Z3" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 298} outputId="72367409-6ed6-418d-8ffc-7945622e697b"
bike_test.plot.scatter("predicted {}".format(response), response, color='y')
plt.title("Test set predictions");
# + id="qBxITI9Gh-Z4" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 51} outputId="0379cc4b-61be-4d9a-9fba-436e9c5ec471"
#calculate the root mean squared error
errors = y_test - pred_test
sq_error = errors ** 2
mean_sq_error = np.average(sq_error)
rmse_test = np.sqrt(mean_sq_error)
errors = y_train - pred_train
sq_error = errors ** 2
mean_sq_error = np.average(sq_error)
rmse_train = np.sqrt(mean_sq_error)
print("Training RMSE = {0}\nValidation RMSE = {1}".format(rmse_train, rmse_test))
# + id="pmyQFSpe7qPc" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 343} outputId="e67a5638-80a7-4097-b81e-51335cc7c0aa"
bike_train.head()
# + id="qINV9Za1h-Z7" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 298} outputId="eeff7d81-f21a-4e8f-8311-ce69cae270f6"
# plot the residuals on a scatter plot
bike_train.insert(17, "training error", y_train - pred_train)
bike_train.plot.scatter("predicted {}".format(response), "training error")
# + id="9heGDFB6h-Z_" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 298} outputId="6e8b6858-21e4-46cc-d7d4-37b0b1d42a51"
# plot the residuals on a scatter plot
bike_test.insert(17, "validation error", y_test - pred_test)
bike_test.plot.scatter("predicted {}".format(response), "validation error", color='y')
# + id="FuVsSwBvh-aC" colab_type="code" colab={}
# select a response variable: "casual", "registered", or "total riders"
bike_train = pd.read_csv("day_train.csv")
bike_test = pd.read_csv("day_test.csv")
response = "registered"
y_train = bike_train[response]
y_test = bike_test[response]
# + id="EJ5HcDF8h-aF" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 204} outputId="c489ef41-d1f6-4ee4-de98-59bc2ec7719e"
ds1 = bike_train['is holiday']
ds2 = bike_train['temp']
ds3 = bike_train['humidity']
ds4 = bike_train['windspeed']
ds5 = bike_train['season']
ds5 = pd.get_dummies(ds5)
X_train = pd.concat([ds1, ds2, ds3, ds4, ds5], axis=1)
X_train.insert(0, 'intercept', 1)
X_train.head()
# + id="OlPLoWgQ9qbv" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 204} outputId="480d7b62-2169-42dd-e5be-b6f782f3434d"
ds1 = bike_test['is holiday']
ds2 = bike_test['temp']
ds3 = bike_test['humidity']
ds4 = bike_test['windspeed']
ds5 = bike_test['season']
ds5 = pd.get_dummies(ds5)
X_test = pd.concat([ds1, ds2, ds3, ds4, ds5], axis=1)
X_test.insert(0, 'intercept', 1)
X_test.head()
# + id="othVhbNLh-aH" colab_type="code" colab={}
# calculate the least squares solution
beta = lstsq(X_train, y_train)[0]
# + id="6P1IRWeoh-aI" colab_type="code" colab={}
# calculate predicted values
pred_train = X_train @ beta
bike_train.insert(16, "predicted {}".format(response), pred_train)
pred_test = X_test @ beta
bike_test.insert(16, "predicted {}".format(response), pred_test)
# + id="962u4V4Yh-aK" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 298} outputId="5537853b-2b72-4034-ab1f-07e5b3ee9496"
# compare predictions to actual values on a scatter plot
bike_train.plot.scatter("predicted {}".format(response), response)
plt.title("Training set predictions");
# + id="jHDNfwRfh-aL" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 298} outputId="1f372b9d-4fa0-489c-db15-f79cd0bf59cc"
bike_test.plot.scatter("predicted {}".format(response), response, color='y')
plt.title("Test set predictions");
# + id="ybxoC2fHh-aN" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 51} outputId="e8abfc13-b7d0-48d9-cf4f-29aff2a8ea45"
#calculate the root mean squared error
errors = y_test - pred_test
sq_error = errors ** 2
mean_sq_error = np.average(sq_error)
rmse_test = np.sqrt(mean_sq_error)
errors = y_train - pred_train
sq_error = errors ** 2
mean_sq_error = np.average(sq_error)
rmse_train = np.sqrt(mean_sq_error)
print("Training RMSE = {0}\nValidation RMSE = {1}".format(rmse_train, rmse_test))
# + id="hHV8A6pph-aP" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 298} outputId="7547fe48-a7c9-4b00-c3b6-6d72f4840c77"
# plot the residuals on a scatter plot
bike_train.insert(17, "training error", y_train - pred_train)
bike_train.plot.scatter("predicted {}".format(response), "training error")
# + id="BPoPHYJgh-aQ" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 298} outputId="9ab69cbf-b4d6-456c-d693-6c2982ed4794"
# plot the residuals on a scatter plot
bike_test.insert(17, "validation error", y_test - pred_test)
bike_test.plot.scatter("predicted {}".format(response), "validation error", color='y')
# + [markdown] id="6de6vbT3h-aR" colab_type="text"
#
# **QUESTION:** What explanatory variables did you use in the best model you found? What metrics showed that it was the "best" model? Reference the scatter plots, fit lines, RMSE, etc.
# + [markdown] id="p2p3mq6fh-aS" colab_type="text"
# **ANSWER:**
# + [markdown] id="LODRh1pBh-aS" colab_type="text"
# #### References
# - Bike-Sharing data set from University of California Irvine's Machine Learning Repository https://archive.ics.uci.edu/ml/datasets/Bike+Sharing+Dataset
# - Portions of text and code adapted from Professor <NAME>'s Legal Studies 190 (Data, Prediction, and Law) course materials: [lab 2-22-18, Linear Regression](https://github.com/ds-modules/LEGALST-190/tree/master/labs/2-22) (Author <NAME>) and [lab 3-22-18, Exploratory Data Analysis](https://github.com/ds-modules/LEGALST-190/tree/masterlabs/3-22) (Author <NAME>)
# - "Capital Bikeshare, Washington, DC" header image by [<NAME>](https://www.flickr.com/photos/leeanncafferata/34309356871) licensed under [CC BY-ND 2.0](https://creativecommons.org/licenses/by-nd/2.0/)
| 03LinearRegressionModels_converted_from_JupyterHyub.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import os
origin_pics = os.listdir('assets/images/album/origin/')
th_pics = os.listdir('assets/images/album/ths/')
file = origin_pics[0]
tdir = 'assets/images/album/'
def cropimg(file):
    from PIL import Image
    im = Image.open(tdir+'origin/'+file)
    width,height = im.size
    # shrink to 30% of the original size
    im = im.resize((int(width*0.3),int(height*0.3)))
    width,height = im.size
    # largest centered 4:3 box that fits the resized image
    a = int(min(width,(height/3*4))/4)
    start1 = (width-a*4)/2
    end1 = start1+a*4
    start2 = (height-a*3)/2
    end2 = start2+a*3
    im = im.crop((int(start1),int(start2),int(end1),int(end2)))
    im.save(tdir+'ths/'+file)
print(' - url: '+tdir+'origin/'+file)
print(' image_path: '+tdir+'ths/'+file)
for file in origin_pics:
if file in th_pics:
pass
else:
        try:
            cropimg(file)
        except Exception:
            # skip files that PIL cannot open or crop
            pass
for file in origin_pics:
cropimg(file)
import os
origin_pics = os.listdir()
origin_pics
def cropimg(file):
from PIL import Image
im = Image.open(file)
width,height = im.size
im = im.resize((int(width*0.3),int(height*0.3)))
width,height = im.size
a = int(min(width,(height/3*4))/4)
start1 = (width-a*4)/2
end1 = start1+a*4
start2 = (height-a*3)/2
end2 = start2+a*3
im = im.crop((int(start1),int(start2),int(end1),int(end2)))
im.save(file)
origin_pics = os.listdir()
for file in origin_pics:
    try:
        cropimg(file)
    except Exception:
        # skip files that PIL cannot open or crop
        pass
| docs/source/gallery/html/crop img.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] slideshow={"slide_type": "slide"}
# # Programming with Class
# ## <NAME>
# ### 26.XI.2018, Python4Beginners
# + [markdown] slideshow={"slide_type": "slide"}
# ## 1. Why bother with classes
# + [markdown] slideshow={"slide_type": "subslide"}
# ### A long time ago in the Milky Way Galaxy
# There were no classes.
# Programs consisted of data (variables, arrays, structs) and functions.
# Building large programs turned out to be hard.
# It was difficult to maintain the discipline of separating data from the functions that process them.
# It was very easy to apply the wrong function to the data.
# + [markdown] slideshow={"slide_type": "subslide"}
# ### Back to the future
# People tried to solve the problem of programs/projects collapsing under their own size.
# One of the ideas was something that came to be called Object Oriented Programming.
# In short: **OOP**.
# + [markdown] slideshow={"slide_type": "subslide"}
# ### What OOP gives us
# In general, it divides the source code.
# It introduces namespaces.
# It combines data and operations into classes.
# Thanks to these improvements, building larger programs became easier.
# The frequency of the copy-paste method also decreased. (it should have)
# + [markdown] slideshow={"slide_type": "subslide"}
# ### There is no silver bullet
# After the initial enthusiasm it turned out that OOP is not easy!
# It solves some problems.
# But it adds its own, e.g. inheritance (multiple, long chains, ...).
# Above all, though, you still **have to think**.
# + [markdown] slideshow={"slide_type": "slide"}
# ## 2. The general notion of a class
# + [markdown] slideshow={"slide_type": "subslide"}
# A class is a bundling of data and functionality into a single object.
# Classes are "considered" nouns.
# Functions are "considered" verbs.
# + [markdown] slideshow={"slide_type": "subslide"}
# Creating a new class creates a new **type**.
# This makes it possible to create **instances** of that type.
# + [markdown] slideshow={"slide_type": "subslide"}
# Classes are a definition (a "recipe") of how instances should be created.
# + [markdown] slideshow={"slide_type": "subslide"}
# General properties of classes in Python:
# - methods and fields of classes/instances are **public** (like everything in Python)
# - functions are **virtual**, meaning that calls are resolved at runtime
# - you can inherit from built-in types
# - operators can be overridden
# - classes are objects
# - instances are objects
# + [markdown] slideshow={"slide_type": "subslide"}
# More information:
# * https://docs.python.org/3/tutorial/classes.html
# * https://en.wikipedia.org/wiki/Virtual_function
# * https://en.wikipedia.org/wiki/Polymorphism_(computer_science)
# * https://en.wikipedia.org/wiki/Method_(computer_programming)
# * https://www.digitalocean.com/community/tutorials/how-to-apply-polymorphism-to-classes-in-python-3
# * https://www.python.org/download/releases/2.3/mro/
# + [markdown] slideshow={"slide_type": "subslide"}
# ### Questions for the audience
# - What is a method?
# - What is polymorphism?
# - Is polymorphism available in Python?
# - What is "duck typing"?
# + [markdown] slideshow={"slide_type": "slide"}
# ## 3. Classes
# + [markdown] slideshow={"slide_type": "subslide"}
# A class defines how instances are to be created.
# + [markdown] slideshow={"slide_type": "subslide"}
# ```python
# class MyFirst():
# pass
# ```
# + [markdown] slideshow={"slide_type": "subslide"}
# Class definitions are executed → **class objects**
# + [markdown] slideshow={"slide_type": "subslide"}
# Two kinds of operations can be performed on class objects:
# - **attribute references**
# - **instantiation**
# + [markdown] slideshow={"slide_type": "subslide"}
# An attribute reference in Python looks like this:
# ```python
# obiekt.nazwa_atrybutu
# ```
#
# An attribute can be any object (a function, class, instance, method, ...)
# + [markdown] slideshow={"slide_type": "subslide"}
# ### Questions for the audience
# - Are classes objects?
# - What are attributes?
# - How can we access an attribute?
#
# + [markdown] slideshow={"slide_type": "slide"}
# ## 4. Instances
# + slideshow={"slide_type": "subslide"}
# creating a class object
class MyFirst():
pass
print(type(MyFirst))
# instantiation
pierwszy = MyFirst()
print(type(pierwszy))
# + [markdown] slideshow={"slide_type": "subslide"}
# Instantiation creates a new "empty" object.
# If we want the instance to hold some data, we should use `__init__` - a special method.
# `__init__` is called by the machinery that creates new instances.
# The call happens exactly once, only when a new instance is created.
# `__init__` is not a constructor but an initializer.
# The constructor is `__new__`, but usually `__init__` is enough.
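A minimal sketch illustrating the difference: `__new__` constructs the instance, while `__init__` only initializes the object it receives (the class name is made up for this example):

```python
class Logged:
    def __new__(cls, *args, **kwargs):
        # __new__ actually creates (constructs) the instance
        instance = super().__new__(cls)
        instance.created_by_new = True
        return instance

    def __init__(self, value):
        # __init__ only initializes the already-constructed object
        self.value = value

obj = Logged(42)
print(obj.created_by_new, obj.value)  # True 42
```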
# + slideshow={"slide_type": "subslide"}
class Person():
def __init__(self, name, surname):
self.name = name
self.surname = surname
janek = Person('Jan', 'Kos')
gustlik = Person('Gustaw', 'Jeleń')
print(type(Person))
for x in (janek, gustlik):
print(type(x), x.name, x.surname)
print(80 * '-')
# + [markdown] slideshow={"slide_type": "subslide"}
# In the example above we can see that `__init__` takes 3 positional arguments, while when creating new instances of the `Person` class we pass only 2 - and the last 2 at that. The first argument, `self`, seems to have been skipped, yet no exception was raised. This looks odd, but it is standard behaviour: every function defined inside a class takes the instance as its first argument (the instance is passed automatically). The name `self` is not mandatory - any name will do - but the convention is to use `self`, for readability. In Python, conventions matter.
# + [markdown] slideshow={"slide_type": "subslide"}
# Instances can essentially do only one thing - attribute references.
# An attribute can be any object (a number, a string, a function, ...).
# + [markdown] slideshow={"slide_type": "subslide"}
# ### Questions for the audience
# - How does a class differ from an instance?
# - Are instances objects?
# - Are two instances of the same class different objects?
# - What if we really wanted two instances to be the same object?
# - Are such objects used in Python?
# - Can instances share some region of memory?
# + [markdown] slideshow={"slide_type": "slide"}
# ## 5. Data - what goes in the class and what in the instance
# + [markdown] slideshow={"slide_type": "subslide"}
# ### Class and instance attributes
# + [markdown] slideshow={"slide_type": "subslide"}
# The split is simple - whatever we define in the class belongs to the class, and whatever we define on (in) the instance belongs to the instance.
# It seems simple, but it can surprise you.
# + slideshow={"slide_type": "subslide"}
class Surprise():
a = 44
b = []
def __init__(self, name):
self.name = name
sup1 = Surprise('first')
sup2 = Surprise('second')
def print_surprises():
    print(Surprise, Surprise.a, Surprise.b) # ,Surprise.name) <- not possible: it is not available yet
print(sup1, sup1.a, sup1.b, sup1.name)
print(sup2, sup2.a, sup2.b, sup2.name)
print_surprises()
# + slideshow={"slide_type": "subslide"}
sup1.a = 23
print_surprises()
# + slideshow={"slide_type": "subslide"}
Surprise.a = 55
print_surprises()
# + slideshow={"slide_type": "subslide"}
sup1.b.append('surprise :)')
print_surprises()
# + slideshow={"slide_type": "subslide"}
sup2.b = ['a completely different surprise']
print_surprises()
# + slideshow={"slide_type": "subslide"}
print(sup2.nowy)
# + slideshow={"slide_type": "subslide"}
Surprise.nowy = '<NAME>'
print(sup2.nowy)
# + [markdown] slideshow={"slide_type": "subslide"}
# The example above works because the attribute is first looked up on the instance, and only if it is not found there, on the class.
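A small sketch of that lookup order - an instance attribute shadows the class attribute of the same name, and deleting it makes lookup fall back to the class again (class and attribute names are made up):

```python
class Config:
    timeout = 30  # class attribute, acts as a shared default

c1 = Config()
c2 = Config()
c1.timeout = 99            # creates an instance attribute that shadows the class one
assert c2.timeout == 30    # other instances still see the class attribute
del c1.timeout             # removing the instance attribute...
assert c1.timeout == 30    # ...reveals the class attribute again
```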
# + [markdown] slideshow={"slide_type": "subslide"}
# Sharing state between instances is usually very risky. It is easy to make a mistake, and such a bug is hard to find.
# + [markdown] slideshow={"slide_type": "subslide"}
# ### Access control
# + [markdown] slideshow={"slide_type": "subslide"}
# Possible only by convention. Python has no mechanisms that block access to the inside (the implementation) of a class. Every client can poke around everywhere. That is not a good idea, but we are adults - we don't whine when we hurt ourselves.
# + [markdown] slideshow={"slide_type": "subslide"}
# #### Convention
# All names starting with a single underscore (`_`) are considered private, regardless of the level at which they are defined (module, function, variable, class field, ...). Changing the contents of such attributes is done at your own risk: in a newer version of, say, some library the behaviour may change and our modification will cause a catastrophe. People try to respect this convention. Names with double underscores at the front and back are reserved: some are already used by Python, and new ones may appear at any time.
# + [markdown] slideshow={"slide_type": "subslide"}
# #### Pseudo-privacy
# Names starting with `__` are, in a way, "more" private. The compiler/interpreter mangles them on the fly. Currently (and for many interpreter versions back) it prepends `_` plus the class name, but this "may" change in the future.
# This still does not give us privacy; it merely makes poking at the attributes a bit harder. My view is that a single underscore is entirely sufficient. Double underscores introduce confusion that later has to be untangled (even after several years).
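A sketch of the mangling in action (current CPython behaviour; as noted above, the exact scheme is an implementation detail, and the class name here is made up):

```python
class Account:
    def __init__(self):
        self.__balance = 100  # stored under the mangled name _Account__balance

acc = Account()
# the plain name is not visible from outside...
print(hasattr(acc, "__balance"))   # False
# ...but the mangled name still reaches it
print(acc._Account__balance)       # 100
```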
# + slideshow={"slide_type": "subslide"}
class Alfa():
def __init__(self):
        self.__shadow = 'you cannot see me'
a = Alfa()
from pprint import pprint
pprint(a.__dict__)
# + [markdown] slideshow={"slide_type": "subslide"}
# ### Referring to instance attributes inside a method
# + slideshow={"slide_type": "subslide"}
class Person():
def __init__(self, name, surname):
self.name = name
self.surname = surname
def formalize(self):
        # we refer to attributes through the method's first argument (self)
        return 'First name: {}, Surname: {}'.format(self.name, self.surname)
janek = Person('Jan', 'Kos')
print(janek.formalize())
print(janek.__dict__)
# + [markdown] slideshow={"slide_type": "subslide"}
# ### Questions for the audience
# - Is there a way to control attribute access in Python?
# - What is the difference between attributes starting with `_` and with `__`?
# - What is the difference between class attributes and instance attributes?
# - Where should we define instance attributes?
# - Where can we define class attributes?
# + [markdown] slideshow={"slide_type": "slide"}
# ## 6. Instance and class methods
# + [markdown] slideshow={"slide_type": "subslide"}
# ### Class methods
# Python has no method overloading based on argument types. Among other reasons, that is why we have a single constructor. If we really wanted an alternative constructor (as e.g. in `datetime`), we can use a class method.
# A class method (`classmethod`) is a function defined in a class (a method) whose first argument is not the instance but the class object.
# + slideshow={"slide_type": "subslide"}
class MyAbc():
    def __init__(self, a, b, c):
        print('I am in __init__')
        print('type(self): ', type(self))
        print('self: ', self)
        self.a = a
        self.b = b
        self.c = c
    def some_method(self):
        print('I am in some_method')
        print('type(self): ', type(self))
        print('self: ', self)
    @classmethod # this is a decorator; we will learn exactly what decorators are in the levelUp classes
    def from_string(cls, str_in):
        print('I am in from_string')
        print('type(cls): ', type(cls))
        print('cls: ', cls)
        a, b, c = str_in.split('.')
        return cls(a, b, c)
zinita = MyAbc(1, 2, 3)
zinita.some_method()
zstringa = MyAbc.from_string('Ala.ma.Asa')
print('zinita.__dict__: ', zinita.__dict__)
print('zstringa.__dict__: ', zstringa.__dict__)
# + [markdown] slideshow={"slide_type": "subslide"}
# ### Questions for the audience
# - What uses can a `classmethod` have?
# - What is always a method's first argument?
# + [markdown] slideshow={"slide_type": "slide"}
# ## 7. Inheritance
# + slideshow={"slide_type": "subslide"}
class A():
pass
class B(A): # class B inherits from A
pass
a = A()
b = B()
print(type(b))
print('isinstance(b, B): ', isinstance(b, B))
print('issubclass(B, A): ', issubclass(B, A))
print('issubclass(A, B): ', issubclass(A, B))
# + slideshow={"slide_type": "subslide"}
# multiple inheritance
class A():
pass
class B():
pass
class C(A, B): # class C inherits from A and B
pass
c = C()
print(type(c))
print('isinstance(c, A): ', isinstance(c, A))
print('isinstance(c, B): ', isinstance(c, B))
print('isinstance(c, C): ', isinstance(c, C))
# + [markdown] slideshow={"slide_type": "subslide"}
# In fact, all user-defined classes inherit from `object`, so every inheritance is multiple. The diamond problem is common. Python has an algorithm for determining the order in which attributes are looked up in the inheritance tree - the Method Resolution Order (MRO). For the purposes of this course we will not deal with multiple inheritance - inheritance is problematic enough on its own, and the multiple kind raises the difficulty level. It is very easy to make your life hard with it.
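A sketch of how the MRO linearizes a diamond; `__mro__` shows the exact lookup order Python will use (the class names are made up):

```python
class A: pass
class B(A): pass
class C(A): pass
class D(B, C): pass  # a "diamond": D inherits from A along two paths

mro_names = [cls.__name__ for cls in D.__mro__]
print(mro_names)  # ['D', 'B', 'C', 'A', 'object']
```

An attribute on `D()` is searched in `D`, then `B`, then `C`, then `A`, then `object` - `A` appears only once despite the two paths.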
# + [markdown] slideshow={"slide_type": "subslide"}
# ### Questions for the audience
# - What is inheritance?
# - Is multiple inheritance available in Python?
# - How does Python deal with diamond inheritance?
# + [markdown] slideshow={"slide_type": "slide"}
# ## 8. Overriding base-class methods
# + slideshow={"slide_type": "subslide"}
class Baza():
    def who_am_I(self):
        self.primitive_who_am_I()
        self.detailed_who_am_I()
    def primitive_who_am_I(self):
        print("name Baza")
    def detailed_who_am_I(self):
        print('self.__class__.__name__: ', self.__class__.__name__)
class Potomek(Baza): # Potomek inherits from Baza
    def primitive_who_am_I(self):
        print("name Potomek")
b = Baza()
p = Potomek()
print(type(b))
print(type(p))
print(80 * '-')
print(b.who_am_I())
print(80 * '-')
print(p.who_am_I())
# + [markdown] slideshow={"slide_type": "subslide"}
# ### Questions for the audience
# - Can base-class methods be overridden?
# - Can we still reach the overridden methods?
# + [markdown] slideshow={"slide_type": "slide"}
# ## 9. Calling base-class methods
# + slideshow={"slide_type": "subslide"}
class Baza():
    def who_am_I(self):
        self.primitive_who_am_I()
        self.detailed_who_am_I()
    def primitive_who_am_I(self):
        print("name Baza")
    def detailed_who_am_I(self):
        print('self.__class__.__name__: ', self.__class__.__name__)
class Potomek(Baza): # Potomek inherits from Baza
    def primitive_who_am_I(self):
        cos = super()
        print('cos: ', cos)
        cos.primitive_who_am_I()
        print("name Potomek")
b = Baza()
p = Potomek()
print(type(b))
print(type(p))
print(80 * '-')
print(b.who_am_I())
print(80 * '-')
print(p.who_am_I())
# + [markdown] slideshow={"slide_type": "subslide"}
# #### Let's learn from the best:
# * https://www.youtube.com/watch?v=EiOglTERPEo
# * https://rhettinger.wordpress.com/2011/05/26/super-considered-super/
# + [markdown] slideshow={"slide_type": "subslide"}
# ### Questions for the audience
# - How do we reach the overridden methods in the base class?
# + [markdown] slideshow={"slide_type": "slide"}
# ## 10. "Magic" methods
# + slideshow={"slide_type": "subslide"}
class Vector():
def __init__(self, *args):
self.coords = [x for x in args]
def __len__(self):
return len(self.coords)
def __repr__(self):
return '{}({})'.format(
self.__class__.__name__, ', '.join((str(x) for x in self.coords))
)
def __add__(self, other):
n = len(self.coords)
new_coords = [self.coords[i] + other.coords[i] for i in range(n)]
return Vector(*new_coords)
def __iadd__(self, other):
for i, x in enumerate(self.coords):
self.coords[i] += other.coords[i]
return self
def __eq__(self, other):
return self.coords == other.coords
def __mul__(self, other):
new_coords = [x * other for x in self.coords]
return Vector(*new_coords)
def __rmul__(self, other):
return self * other
# + [markdown] slideshow={"slide_type": "subslide"}
# Please implement the `Vector` class.
# The class will represent a vector of size n.
# Example instance creation:
# ```python
# Vector(5) # creates a one-dimensional vector of length 5 (n = 1)
# Vector(1, 2, 3) # creates a three-dimensional vector (n = 3)
# ```
# + [markdown] slideshow={"slide_type": "subslide"}
# ### \_\_len\_\_
# It should be possible to "measure" the number of dimensions of a vector with the built-in `len` function.
# + slideshow={"slide_type": "subslide"}
print(len(Vector(1, 2, 3)))
print(len(Vector(4, 5)))
# + [markdown] slideshow={"slide_type": "subslide"}
# ### \_\_repr\_\_
# It should be possible to cast instances of the Vector class to a string:
#
# * Hint 1: https://docs.python.org/3/reference/datamodel.html#object.__str__
# * Hint 2: https://docs.python.org/3/reference/datamodel.html#object.__repr__
#
#
#
# + slideshow={"slide_type": "subslide"}
v = Vector(1, 2)
print('v: ', v)
v1 = Vector(2, 4, 5.6, 7,8)
print('v1: ', v1)
# + [markdown] slideshow={"slide_type": "subslide"}
# ### \_\_str\_\_
# + [markdown] slideshow={"slide_type": "subslide"}
# ### \_\_add\_\_
# Adding two vectors of the same dimension should create a new vector of the same dimension:
#
# * Hint: https://docs.python.org/3/reference/datamodel.html#object.__add__
#
# We assume the operation will never be applied to vectors of different dimensions
# + slideshow={"slide_type": "subslide"}
v1 = Vector(1, 2)
v2 = Vector(3, 4)
v3 = v1 + v2
print(v1)
print(v2)
print(v3)
print(id(v1))
print(id(v2))
print(id(v3))
# + [markdown] slideshow={"slide_type": "subslide"}
# ### \_\_iadd\_\_
# We want to make it possible to "increase" a vector by another vector in place.
# + slideshow={"slide_type": "subslide"}
v1 = Vector(1, 2)
v2 = Vector(3, 4)
print('id(v1): ', id(v1))
print('id(v2): ', id(v2))
v1 += v2
print(v1)
print(v2)
print('id(v1): ', id(v1))
print('id(v2): ', id(v2))
# + [markdown] slideshow={"slide_type": "subslide"}
# ### \_\_eq\_\_
# It should be possible to check whether two different Vector instances are equal:
#
# * Hint 1: https://docs.python.org/3/reference/datamodel.html#object.__eq__
#
# we assume the operation will never be applied to vectors of different dimensions
# + slideshow={"slide_type": "subslide"}
v1 = Vector(1, 2, 3)
v2 = Vector(1, 2, 3)
print('id(v1): ', id(v1))
print('id(v2): ', id(v2))
print('v1 is v2: ', v1 is v2)
print('v1 == v2: ', v1 == v2) # two vectors should be considered equal only if they have the same "coordinates"
v3 = Vector(1, 2, 4)
print('v1 == v3: ', v1 == v3) # two vectors with different "coordinates" should be considered unequal
print('id(v3): ', id(v3))
# + [markdown] slideshow={"slide_type": "subslide"}
# ### \_\_mul\_\_
#
# Multiplying a vector by a number should produce a new vector of the same dimension with
# the "coordinates" multiplied accordingly:
#
# * Hint: https://docs.python.org/3/reference/datamodel.html#object.__mul__
# + slideshow={"slide_type": "subslide"}
v1 = Vector(1.1, 2.2, 3, 4)
v2 = v1 * 6
print(v1)
print(v2)
print('id(v1): ', id(v1))
print('id(v2): ', id(v2))
# + [markdown] slideshow={"slide_type": "subslide"}
# ### \_\_rmul\_\_
# Multiplying a number by a vector should produce the same vector as multiplying the vector by the number:
#
# * Hint: https://docs.python.org/3/reference/datamodel.html#object.__rmul__
# + slideshow={"slide_type": "subslide"}
v1 = Vector(1.1, 2.2, 3, 4)
v2 = v1 * 6
print(v1)
print(v2)
v1 = Vector(1.1, 2.2, 3, 4)
v2 = v1 * 6
v3 = 6 * v1
print(v1)
print(v2)
print(v3)
print(v2 == v3)
print('id(v1): ', id(v1))
print('id(v2): ', id(v2))
print('id(v3): ', id(v3))
# + [markdown] slideshow={"slide_type": "subslide"}
# ### \_\_getitem\_\_
# + [markdown] slideshow={"slide_type": "subslide"}
# ### \_\_getattribute\_\_
# + [markdown] slideshow={"slide_type": "subslide"}
# ### \_\_getattr\_\_
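The three hooks above are left as headings; a minimal sketch of two of them - `__getattr__` (called only when normal attribute lookup fails, so attributes can be generated on demand) and `__getitem__` (bracket indexing). The class and behaviour here are illustrative, not part of the `Vector` exercise:

```python
class Lazy:
    def __getattr__(self, name):
        # invoked only when normal attribute lookup fails
        return 'generated:' + name

    def __getitem__(self, index):
        # makes instances indexable with []
        return index * 2

lazy = Lazy()
print(lazy.anything)  # generated:anything
print(lazy[21])       # 42
```

Note that `__getattribute__`, unlike `__getattr__`, intercepts *every* attribute access, which makes it much easier to break a class with it.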
# + [markdown] slideshow={"slide_type": "subslide"}
# ### Questions for the audience
# - Can an object generate attributes "on demand"?
# - Can attributes be set on an object dynamically?
# - What do we need to do to access an object by index (brackets `[]`)?
# - How do `repr()` and `str()` differ in usage?
# - Can we call an object/class instance (and if so, how)?
# - What do we need to do so an object can be used in a `for` loop?
# + [markdown] slideshow={"slide_type": "slide"}
# ## 11. Generators
# + [markdown] slideshow={"slide_type": "subslide"}
# They are useful. A generator is, roughly, "a function that remembers its state" between calls.
# + [markdown] slideshow={"slide_type": "subslide"}
# There are two ways to create generators:
# - `gen-exp`
# - `yield`
# + slideshow={"slide_type": "subslide"}
# gen-exp
# they work similarly to list comprehensions
a = (x for x in (1, 2, 3, 4, 5))
print(type(a))
print(next(a)) # 1
print(next(a)) # 2
print(next(a)) # 3
print(next(a)) # 4
print(next(a)) # 5
print(next(a)) # the end... (raises StopIteration)
# + slideshow={"slide_type": "subslide"}
# `yield`
def count_to_five():
yield 1
yield 2
yield 3
yield 4
yield 5
b = count_to_five()
print(type(b))
print(next(b)) # 1
print(next(b)) # 2
print(next(b)) # 3
print(next(b)) # 4
print(next(b)) # 5
print(next(b)) # exhausted -- raises StopIteration
# + slideshow={"slide_type": "subslide"}
# `yield`
def count_to_five_II():
counter = 0
while counter < 5:
counter += 1
yield counter
c = count_to_five_II()
print(type(c))
print(next(c)) # 1
print(next(c)) # 2
print(next(c)) # 3
print(next(c)) # 4
print(next(c)) # 5
print(next(c)) # exhausted -- raises StopIteration
# + [markdown] slideshow={"slide_type": "subslide"}
# Generators are very useful, especially when we want to process a large data set without having all of the data loaded into memory at once.
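# As a hedged sketch of that idea: a generator that hands out a large stream in fixed-size chunks, so only one chunk lives in memory at a time (the names and sizes here are illustrative):

```python
def read_in_chunks(values, chunk_size):
    """Yield successive chunks from an iterable, one chunk in memory at a time."""
    chunk = []
    for v in values:
        chunk.append(v)
        if len(chunk) == chunk_size:
            yield chunk
            chunk = []
    if chunk:
        yield chunk


# e.g. sum a "large" stream without materializing it all at once:
stream = range(10)  # stand-in for a big file or database cursor
total = sum(sum(chunk) for chunk in read_in_chunks(stream, 4))
print(total)  # 45
```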
# + [markdown] slideshow={"slide_type": "subslide"}
# # Questions?
# + [markdown] slideshow={"slide_type": "subslide"}
# # That's all folks!
| daftacademy-python4beginners-autumn2019/03.Programowanie_z_klasa/Programowanie_z_klasa.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # BINARY BLACK HOLE SIGNALS IN LIGO OPEN DATA
#
# Version 1.63, 2017 Sept 11
#
# Welcome! This IPython notebook (or associated python script LOSC_Event_tutorial.py ) will go through some typical signal processing tasks on strain time-series data associated with the LIGO Event data releases from the LIGO Open Science Center (LOSC):
#
# * View the tutorial as a <a href='https://losc.ligo.org/s/events/GW150914/LOSC_Event_tutorial_GW150914.html'>web page, for GW150914</a>.
# * After setting the desired "eventname" below, you can just run the full notebook.
#
# Questions, comments, suggestions, corrections, etc: email <EMAIL>
# ## This tutorial is intended for educational purposes. The code shown here is not used to produce results papers published by the LIGO Scientific Collaboration, which instead rely on special purpose analysis software packages.
# ### For publicly available, gravitational-wave software analysis packages that are used to produce LSC and Virgo Collaboration results papers, see https://losc.ligo.org/software/.
# ### For technical notes on this tutorial, see https://losc.ligo.org/bbh_tutorial_notes/.
# ## Table of Contents
# * <a href='#Intro-to-signal-processing'>Intro to signal processing</a>
# * <a href='#Download-the-data-on-a-computer-with-a-python-installation'>Download the data</a>
# * <a href='#Set-the-event-name-to-choose-event-and-the-plot-type'>Set the event name to choose event and the plot type</a>
# * <a href='#Read-in-the-data'>Read in the data</a>
# * <a href='#Plot-the-Amplitude-Spectral-Density-(ASD)'>Plot the ASD</a>
# * <a href='#Binary-Neutron-Star-(BNS)-detection-range'>Binary Neutron Star detection range</a>
# * <a href='#Whitening'>Whitening</a>
# * <a href='#Spectrograms'>Spectrograms</a>
# * <a href='#Waveform-Template'>Waveform Template</a>
# * <a href='#Matched-filtering-to-find-the-signal'>Matched filtering to find the signal</a>
# * <a href='#Make-sound-files'>Make sound Files</a>
# * <a href='#Data-segments'>Data segments</a>
# ## Intro to signal processing
#
# This tutorial assumes that you are comfortable with <a href="https://www.python.org/">python</a>.
#
# This tutorial also assumes that you know a bit about signal processing of digital time series data (or want to learn!). This includes power spectral densities, spectrograms, digital filtering, whitening, audio manipulation. This is a vast and complex set of topics, but we will cover many of the basics in this tutorial.
#
# If you are a beginner, here are some resources from the web:
# * http://101science.com/dsp.htm
# * https://www.coursera.org/course/dsp
# * https://georgemdallas.wordpress.com/2014/05/14/wavelets-4-dummies-signal-processing-fourier-transforms-and-heisenberg/
# * https://en.wikipedia.org/wiki/Signal_processing
# * https://en.wikipedia.org/wiki/Spectral_density
# * https://en.wikipedia.org/wiki/Spectrogram
# * http://greenteapress.com/thinkdsp/
# * https://en.wikipedia.org/wiki/Digital_filter
#
# And, well, lots more on the web!
# ## Set the event name to choose event and the plot type
# +
#-- SET ME Tutorial should work with most binary black hole events
#-- Default is no event selection; you MUST select one to proceed.
eventname = ''
eventname = 'GW150914'
#eventname = 'GW151226'
#eventname = 'LVT151012'
#eventname = 'GW170104'
# want plots?
make_plots = 1
plottype = "png"
#plottype = "pdf"
# +
# Standard python numerical analysis imports:
import numpy as np
from scipy import signal
from scipy.interpolate import interp1d
from scipy.signal import butter, filtfilt, iirdesign, zpk2tf, freqz
import h5py
import json
# the IPython magic below must be commented out in the .py file, since it doesn't work there.
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import matplotlib.pyplot as plt
import matplotlib.mlab as mlab
# LIGO-specific readligo.py
import readligo as rl
# you might get a matplotlib warning here; you can ignore it.
# -
# ### Read the event properties from a local json file (download in advance):
# +
# Read the event properties from a local json file
fnjson = "../data/BBH_events_v3.json"
try:
events = json.load(open(fnjson,"r"))
except IOError:
print("Cannot find resource file "+fnjson)
print("You can download it from https://losc.ligo.org/s/events/"+fnjson)
print("Quitting.")
quit()
# did the user select the eventname ?
try:
events[eventname]
except KeyError:
print('You must select an eventname that is in '+fnjson+'! Quitting.')
quit()
# -
# Extract the parameters for the desired event:
event = events[eventname]
fn_H1 = "../data/" + event['fn_H1'] # File name for H1 data
fn_L1 = "../data/" + event['fn_L1'] # File name for L1 data
fn_template = "../data/" + event['fn_template'] # File name for template waveform
fs = event['fs'] # Set sampling rate
tevent = event['tevent'] # Set approximate event GPS time
fband = event['fband'] # frequency band for bandpassing signal
print("Reading in parameters for event " + event["name"])
print(event)
# ## Read in the data
# We will make use of the data, and waveform template, defined above.
#----------------------------------------------------------------
# Load LIGO data from a single file.
# FIRST, define the filenames fn_H1 and fn_L1, above.
#----------------------------------------------------------------
try:
# read in data from H1 and L1, if available:
strain_H1, time_H1, chan_dict_H1 = rl.loaddata(fn_H1, 'H1')
strain_L1, time_L1, chan_dict_L1 = rl.loaddata(fn_L1, 'L1')
except IOError:
print("Cannot find data files!")
print("You can download them from https://losc.ligo.org/s/events/"+eventname)
print("Quitting.")
quit()
# ## Data Gaps
# **NOTE** that in general, LIGO strain time series data has gaps (filled with NaNs) when the detectors are not taking valid ("science quality") data. Analyzing these data requires the user to
# <a href='https://losc.ligo.org/segments/'>loop over "segments"</a> of valid data stretches.
#
# **In this tutorial, for simplicity, we assume there are no data gaps - this will not work for all times!** See the
# <a href='https://losc.ligo.org/segments/'>notes on segments</a> for details.
#
# ## First look at the data from H1 and L1
# +
# both H1 and L1 will have the same time vector, so:
time = time_H1
# the time sample interval (uniformly sampled!)
dt = time[1] - time[0]
# Let's look at the data and print out some stuff:
print('time_H1: len, min, mean, max = ', \
len(time_H1), time_H1.min(), time_H1.mean(), time_H1.max() )
print('strain_H1: len, min, mean, max = ', \
len(strain_H1), strain_H1.min(),strain_H1.mean(),strain_H1.max())
print( 'strain_L1: len, min, mean, max = ', \
len(strain_L1), strain_L1.min(),strain_L1.mean(),strain_L1.max())
#What's in chan_dict? (See also https://losc.ligo.org/tutorials/)
bits = chan_dict_H1['DATA']
print("For H1, {0} out of {1} seconds contain usable DATA".format(bits.sum(), len(bits)))
bits = chan_dict_L1['DATA']
print("For L1, {0} out of {1} seconds contain usable DATA".format(bits.sum(), len(bits)))
# +
# plot +- deltat seconds around the event:
# index into the strain time series for this time interval:
deltat = 5
indxt = np.where((time >= tevent-deltat) & (time < tevent+deltat))
print(tevent)
if make_plots:
plt.figure()
plt.plot(time[indxt]-tevent,strain_H1[indxt],'r',label='H1 strain')
plt.plot(time[indxt]-tevent,strain_L1[indxt],'g',label='L1 strain')
plt.xlabel('time (s) since '+str(tevent))
plt.ylabel('strain')
plt.legend(loc='lower right')
plt.title('Advanced LIGO strain data near '+eventname)
# -
# The data are dominated by **low frequency noise**; there is no way to see a signal here, without some signal processing.
# ## Plot the Amplitude Spectral Density (ASD)
# Plotting these data in the Fourier domain gives us an idea of the frequency content of the data. A way to visualize the frequency content of the data is to plot the amplitude spectral density, ASD.
#
# The ASDs are the square root of the power spectral densities (PSDs), which are averages of the square of the fast fourier transforms (FFTs) of the data.
#
# They are an estimate of the "strain-equivalent noise" of the detectors versus frequency,
# which limit the ability of the detectors to identify GW signals.
#
# They are in units of strain/rt(Hz).
# So, if you want to know the root-mean-square (rms) strain noise in a frequency band,
# integrate (sum) the squares of the ASD over that band, then take the square-root.
#
# There's a signal in these data!
# For the moment, let's ignore that, and assume it's all noise.
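# The rms-from-ASD recipe above can be checked on synthetic data. A self-contained sketch, assuming unit-variance white noise (so the band-integrated rms should come out near 1):

```python
import numpy as np
import matplotlib.mlab as mlab

fs = 4096
rng = np.random.default_rng(0)
x = rng.standard_normal(8 * fs)            # white noise, standard deviation 1

Pxx, freqs = mlab.psd(x, Fs=fs, NFFT=fs)   # one-sided PSD, in units of 1/Hz
asd = np.sqrt(Pxx)                         # amplitude spectral density

# rms over a band = sqrt( sum of ASD^2 * df ) over that band
df = freqs[1] - freqs[0]
band = (freqs >= 20.) & (freqs <= 2000.)
rms_band = np.sqrt(np.sum(asd[band] ** 2) * df)
rms_full = np.sqrt(np.sum(asd ** 2) * df)
print(rms_full)   # close to 1.0, the standard deviation of the input noise
```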
# +
make_psds = 1
if make_psds:
# number of sample for the fast fourier transform:
NFFT = 4*fs
Pxx_H1, freqs = mlab.psd(strain_H1, Fs = fs, NFFT = NFFT)
Pxx_L1, freqs = mlab.psd(strain_L1, Fs = fs, NFFT = NFFT)
# We will use interpolations of the ASDs computed above for whitening:
psd_H1 = interp1d(freqs, Pxx_H1)
psd_L1 = interp1d(freqs, Pxx_L1)
# Here is an approximate, smoothed PSD for H1 during O1, with no lines. We'll use it later.
Pxx = (1.e-22*(18./(0.1+freqs))**2)**2+0.7e-23**2+((freqs/2000.)*4.e-23)**2
psd_smooth = interp1d(freqs, Pxx)
if make_plots:
# plot the ASDs, with the template overlaid:
f_min = 20.
f_max = 2000.
plt.figure(figsize=(10,8))
plt.loglog(freqs, np.sqrt(Pxx_L1),'g',label='L1 strain')
plt.loglog(freqs, np.sqrt(Pxx_H1),'r',label='H1 strain')
plt.loglog(freqs, np.sqrt(Pxx),'k',label='H1 strain, O1 smooth model')
plt.axis([f_min, f_max, 1e-24, 1e-19])
plt.grid('on')
plt.ylabel('ASD (strain/rtHz)')
plt.xlabel('Freq (Hz)')
plt.legend(loc='upper center')
plt.title('Advanced LIGO strain data near '+eventname)
# -
# NOTE that we only plot the data between f_min = 20 Hz and f_max = 2000 Hz.
#
# Below f_min, the data **are not properly calibrated**. That's OK, because the noise is so high below f_min that LIGO cannot sense gravitational wave strain from astrophysical sources in that band.
#
# The sample rate is fs = 4096 Hz (2^12 Hz), so the data cannot capture frequency content above the Nyquist frequency = fs/2 = 2048 Hz. That's OK, because our events only have detectable frequency content in the range given by fband, defined above; the upper end will (almost) always be below the Nyquist frequency. We set f_max = 2000, a bit below Nyquist.
#
# You can see strong spectral lines in the data; they are all of instrumental origin. Some are engineered into the detectors (mirror suspension resonances at ~500 Hz and harmonics, calibration lines, control dither lines, etc) and some (60 Hz and harmonics) are unwanted. We'll return to these, later.
#
# You can't see the signal in this plot, since it is relatively weak and less than a second long, while this plot averages over 32 seconds of data. So this plot is entirely dominated by instrumental noise.
#
# The smooth model is hard-coded and tuned by eye; it won't be right for arbitrary times. We will only use it below for things that don't require much accuracy.
# ## Binary Neutron Star (BNS) detection range
#
# A standard metric that LIGO uses to evaluate the sensitivity of our detectors, based on the detector noise ASD, is the BNS range.
#
# This is defined as the distance to which a LIGO detector can register a BNS signal with a single detector signal-to-noise ratio (SNR) of 8, averaged over source direction and orientation. Here, SNR 8 is used as a nominal detection threshold, similar to typical CBC detection thresholds of SNR 6-8.
#
# We take each neutron star in the BNS system to have a mass of 1.4 times the mass of the sun, and negligible spin.
#
# GWs from BNS mergers are like "standard sirens"; we know their amplitude at the source from theoretical calculations. The amplitude falls off like 1/r, so their amplitude at the detectors on Earth tells us how far away they are. This is great, because it is hard, in general, to know the distance to astronomical sources.
#
# The amplitude at the source is computed in the post-Newtonian "quadrupole approximation". This is valid for the inspiral phase only, and is approximate at best; there is no simple expression for the post-inspiral (merger and ringdown) phase. So this won't work for high-mass binary black holes like GW150914, which have a lot of signal strength in the post-inspiral phase.
#
# But, in order to use them as standard sirens, we need to know the source direction and orientation relative to the detector and its "quadrupole antenna pattern" response to such signals. It is a standard (if non-trivial) computation to average over all source directions and orientations; the average amplitude is 1./2.2648 times the maximum value.
#
# This calculation is described in Appendix D of:
# FINDCHIRP: An algorithm for detection of gravitational waves from inspiraling compact binaries
# <NAME> al., PHYSICAL REVIEW D 85, 122006 (2012) ; http://arxiv.org/abs/gr-qc/0509116
#
BNS_range = 1
if BNS_range:
#-- compute the binary neutron star (BNS) detectability range
#-- choose a detector noise power spectrum:
f = freqs.copy()
# get frequency step size
df = f[2]-f[1]
#-- constants
# speed of light:
clight = 2.99792458e8 # m/s
# Newton's gravitational constant
G = 6.67259e-11 # m^3/kg/s^2
# one parsec, popular unit of astronomical distance (around 3.26 light years)
parsec = 3.08568025e16 # m
# solar mass
MSol = 1.989e30 # kg
# solar mass in seconds (isn't relativity fun?):
tSol = MSol*G/np.power(clight,3) # s
# Single-detector SNR for detection above noise background:
SNRdet = 8.
# conversion from maximum range (horizon) to average range:
Favg = 2.2648
# mass of a typical neutron star, in solar masses:
mNS = 1.4
# Masses in solar masses
m1 = m2 = mNS
mtot = m1+m2 # the total mass
eta = (m1*m2)/mtot**2 # the symmetric mass ratio
mchirp = mtot*eta**(3./5.) # the chirp mass (FINDCHIRP, following Eqn 3.1b)
# distance to a fiducial BNS source:
dist = 1.0 # in Mpc
Dist = dist * 1.0e6 * parsec /clight # from Mpc to seconds
# We integrate the signal up to the frequency of the "Innermost stable circular orbit (ISCO)"
R_isco = 6. # Orbital separation at ISCO, in geometric units. 6M for PN ISCO; 2.8M for EOB
# frequency at ISCO (end the chirp here; the merger and ringdown follow)
f_isco = 1./(np.power(R_isco,1.5)*np.pi*tSol*mtot)
# minimum frequency (below which, detector noise is too high to register any signal):
f_min = 20. # Hz
# select the range of frequencies between f_min and fisco
fr = np.nonzero(np.logical_and(f > f_min , f < f_isco))
# get the frequency and spectrum in that range:
ffr = f[fr]
# In stationary phase approx, this is htilde(f):
# See FINDCHIRP Eqns 3.4, or 8.4-8.5
htilde = (2.*tSol/Dist)*np.power(mchirp,5./6.)*np.sqrt(5./96./np.pi)*(np.pi*tSol)
htilde *= np.power(np.pi*tSol*ffr,-7./6.)
htilda2 = htilde**2
# loop over the detectors
dets = ['H1', 'L1']
for det in dets:
        if det == 'L1': sspec = Pxx_L1.copy()
        else: sspec = Pxx_H1.copy()
sspecfr = sspec[fr]
# compute "inspiral horizon distance" for optimally oriented binary; FINDCHIRP Eqn D2:
D_BNS = np.sqrt(4.*np.sum(htilda2/sspecfr)*df)/SNRdet
# and the "inspiral range", averaged over source direction and orientation:
R_BNS = D_BNS/Favg
print(det+' BNS inspiral horizon = {0:.1f} Mpc, BNS inspiral range = {1:.1f} Mpc'.format(D_BNS,R_BNS))
# ## BBH range is >> BNS range!
#
# NOTE that, since mass is the source of gravity and thus also of gravitational waves, systems with higher masses (such as the binary black hole merger GW150914) are much "louder" and can be detected to much higher distances than the BNS range. We'll compute the BBH range, using a template with specific masses, below.
# ## Whitening
#
# From the ASD above, we can see that the data are very strongly "colored" - noise fluctuations are much larger at low and high frequencies and near spectral lines, reaching a roughly flat ("white") minimum in the band around 80 to 300 Hz.
#
# We can "whiten" the data (dividing it by the noise amplitude spectrum, in the fourier domain), suppressing the extra noise at low frequencies and at the spectral lines, to better see the weak signals in the most sensitive band.
#
# Whitening is always one of the first steps in astrophysical data analysis (searches, parameter estimation).
# Whitening requires no prior knowledge of spectral lines, etc; only the data are needed.
#
# To get rid of remaining high frequency noise, we will also bandpass the data.
#
# The resulting time series is no longer in units of strain; now in units of "sigmas" away from the mean.
#
# We will plot the whitened strain data, along with the signal template, after the matched filtering section, below.
# +
# function to whiten data
def whiten(strain, interp_psd, dt):
Nt = len(strain)
freqs = np.fft.rfftfreq(Nt, dt)
# whitening: transform to freq domain, divide by asd, then transform back,
# taking care to get normalization right.
hf = np.fft.rfft(strain)
norm = 1./np.sqrt(1./(dt*2))
white_hf = hf / np.sqrt(interp_psd(freqs)) * norm
white_ht = np.fft.irfft(white_hf, n=Nt)
return white_ht
whiten_data = 1
if whiten_data:
# now whiten the data from H1 and L1, and the template (use H1 PSD):
strain_H1_whiten = whiten(strain_H1,psd_H1,dt)
strain_L1_whiten = whiten(strain_L1,psd_L1,dt)
# We need to suppress the high frequency noise (no signal!) with some bandpassing:
bb, ab = butter(4, [fband[0]*2./fs, fband[1]*2./fs], btype='band')
normalization = np.sqrt((fband[1]-fband[0])/(fs/2))
strain_H1_whitenbp = filtfilt(bb, ab, strain_H1_whiten) / normalization
strain_L1_whitenbp = filtfilt(bb, ab, strain_L1_whiten) / normalization
# -
# ## Spectrograms
#
# Now let's plot a short time-frequency spectrogram around our event:
if make_plots:
# index into the strain time series for this time interval:
indxt = np.where((time >= tevent-deltat) & (time < tevent+deltat))
    # pick a shorter FFT time interval, like 1/8 of a second:
NFFT = int(fs/8)
# and with a lot of overlap, to resolve short-time features:
NOVL = int(NFFT*15./16)
# and choose a window that minimizes "spectral leakage"
# (https://en.wikipedia.org/wiki/Spectral_leakage)
window = np.blackman(NFFT)
# the right colormap is all-important! See:
# http://matplotlib.org/examples/color/colormaps_reference.html
# viridis seems to be the best for our purposes, but it's new; if you don't have it, you can settle for ocean.
#spec_cmap='viridis'
spec_cmap='ocean'
# Plot the H1 spectrogram:
plt.figure(figsize=(10,6))
spec_H1, freqs, bins, im = plt.specgram(strain_H1[indxt], NFFT=NFFT, Fs=fs, window=window,
noverlap=NOVL, cmap=spec_cmap, xextent=[-deltat,deltat])
plt.xlabel('time (s) since '+str(tevent))
plt.ylabel('Frequency (Hz)')
plt.colorbar()
plt.axis([-deltat, deltat, 0, 2000])
plt.title('aLIGO H1 strain data near '+eventname)
# Plot the L1 spectrogram:
plt.figure(figsize=(10,6))
    spec_L1, freqs, bins, im = plt.specgram(strain_L1[indxt], NFFT=NFFT, Fs=fs, window=window,
noverlap=NOVL, cmap=spec_cmap, xextent=[-deltat,deltat])
plt.xlabel('time (s) since '+str(tevent))
plt.ylabel('Frequency (Hz)')
plt.colorbar()
plt.axis([-deltat, deltat, 0, 2000])
plt.title('aLIGO L1 strain data near '+eventname)
# In the above spectrograms, you may see lots of excess power below ~20 Hz, as well as strong spectral lines at 500, 1000, 1500 Hz (also evident in the ASDs above). The lines at multiples of 500 Hz are the harmonics of the "violin modes" of the fibers holding up the mirrors of the Advanced LIGO interferometers.
#
# Now let's zoom in on where we think the signal is, using the whitened data, in the hope of seeing a chirp:
if make_plots:
# plot the whitened data, zooming in on the signal region:
    # pick a shorter FFT time interval, like 1/16 of a second:
NFFT = int(fs/16.0)
# and with a lot of overlap, to resolve short-time features:
NOVL = int(NFFT*15/16.0)
# choose a window that minimizes "spectral leakage"
# (https://en.wikipedia.org/wiki/Spectral_leakage)
window = np.blackman(NFFT)
# Plot the H1 whitened spectrogram around the signal
plt.figure(figsize=(10,6))
spec_H1, freqs, bins, im = plt.specgram(strain_H1_whiten[indxt], NFFT=NFFT, Fs=fs, window=window,
noverlap=NOVL, cmap=spec_cmap, xextent=[-deltat,deltat])
plt.xlabel('time (s) since '+str(tevent))
plt.ylabel('Frequency (Hz)')
plt.colorbar()
plt.axis([-0.5, 0.5, 0, 500])
plt.title('aLIGO H1 strain data near '+eventname)
# Plot the L1 whitened spectrogram around the signal
plt.figure(figsize=(10,6))
    spec_L1, freqs, bins, im = plt.specgram(strain_L1_whiten[indxt], NFFT=NFFT, Fs=fs, window=window,
noverlap=NOVL, cmap=spec_cmap, xextent=[-deltat,deltat])
plt.xlabel('time (s) since '+str(tevent))
plt.ylabel('Frequency (Hz)')
plt.colorbar()
plt.axis([-0.5, 0.5, 0, 500])
plt.title('aLIGO L1 strain data near '+eventname)
# Loud (high SNR) signals may be visible in these spectrograms. Compact object mergers show a characteristic "chirp" as the signal rises in frequency. If you can't see anything, try
# <a href='https://losc.ligo.org/events/GW150914/'>event GW150914</a>, by changing the `eventname` variable in the first cell above.
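# If no event data are at hand, the chirp morphology can also be previewed on a purely synthetic signal. A sketch (the sweep parameters are invented for illustration, not taken from any event):

```python
import numpy as np
from matplotlib import mlab
from scipy import signal

fs = 4096
t = np.arange(0, 2, 1. / fs)
# synthetic sweep from 30 Hz up to 300 Hz, loosely chirp-like:
x = signal.chirp(t, f0=30., t1=2., f1=300., method='quadratic')

NFFT = fs // 16
NOVL = NFFT * 15 // 16
spec, freqs, bins = mlab.specgram(x, NFFT=NFFT, Fs=fs, noverlap=NOVL)

# in a chirp, the dominant frequency rises from start to end:
dominant = freqs[np.argmax(spec, axis=0)]
print(dominant[0], dominant[-1])
```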
# ## Waveform Template
#
# The results of a full LIGO-Virgo analysis of this BBH event include a set of parameters that are consistent with a range of parameterized waveform templates. Here we pick one for use in matched filtering.
#
# As noted above, the results won't be identical to what is in the LIGO-Virgo papers, since we're skipping many subtleties, such as combining many consistent templates.
# read in the template (plus and cross) and parameters for the theoretical waveform
try:
f_template = h5py.File(fn_template, "r")
except IOError:
print("Cannot find template file!")
print("You can download it from https://losc.ligo.org/s/events/"+eventname+'/'+fn_template)
print("Quitting.")
quit()
# +
# extract metadata from the template file:
template_p, template_c = f_template["template"][...]
t_m1 = f_template["/meta"].attrs['m1']
t_m2 = f_template["/meta"].attrs['m2']
t_a1 = f_template["/meta"].attrs['a1']
t_a2 = f_template["/meta"].attrs['a2']
t_approx = f_template["/meta"].attrs['approx']
f_template.close()
# the template extends to roughly 16s, zero-padded to the 32s data length. The merger will be roughly 16s in.
template_offset = 16.
# whiten the templates:
template_p_whiten = whiten(template_p,psd_H1,dt)
template_c_whiten = whiten(template_c,psd_H1,dt)
template_p_whitenbp = filtfilt(bb, ab, template_p_whiten) / normalization
template_c_whitenbp = filtfilt(bb, ab, template_c_whiten) / normalization
# Compute, print and plot some properties of the template:
# constants:
clight = 2.99792458e8 # m/s
G = 6.67259e-11 # m^3/kg/s^2
MSol = 1.989e30 # kg
# template parameters: masses in units of MSol:
t_mtot = t_m1+t_m2
# final BH mass is typically 95% of the total initial mass:
t_mfin = t_mtot*0.95
# Final BH radius, in km:
R_fin = 2*G*t_mfin*MSol/clight**2/1000.
# complex template:
template = (template_p + template_c*1.j)
ttime = time-time[0]-template_offset
# compute the instantaneous frequency of this chirp-like signal:
tphase = np.unwrap(np.angle(template))
fGW = np.gradient(tphase)*fs/(2.*np.pi)
# fix discontinuities at the very end:
# iffix = np.where(np.abs(np.gradient(fGW)) > 100.)[0]
iffix = np.where(np.abs(template) < np.abs(template).max()*0.001)[0]
fGW[iffix] = fGW[iffix[0]-1]
fGW[np.where(fGW < 1.)] = fGW[iffix[0]-1]
# compute v/c:
voverc = (G*t_mtot*MSol*np.pi*fGW/clight**3)**(1./3.)
# index where f_GW is in-band:
f_inband = fband[0]
iband = np.where(fGW > f_inband)[0][0]
# index at the peak of the waveform:
ipeak = np.argmax(np.abs(template))
# number of cycles between inband and peak:
Ncycles = (tphase[ipeak]-tphase[iband])/(2.*np.pi)
print('Properties of waveform template in {0}'.format(fn_template))
print("Waveform family = {0}".format(t_approx))
print("Masses = {0:.2f}, {1:.2f} Msun".format(t_m1,t_m2))
print('Mtot = {0:.2f} Msun, mfinal = {1:.2f} Msun '.format(t_mtot,t_mfin))
print("Spins = {0:.2f}, {1:.2f}".format(t_a1,t_a2))
print('Freq at inband, peak = {0:.2f}, {1:.2f} Hz'.format(fGW[iband],fGW[ipeak]))
print('Time at inband, peak = {0:.2f}, {1:.2f} s'.format(ttime[iband],ttime[ipeak]))
print('Duration (s) inband-peak = {0:.2f} s'.format(ttime[ipeak]-ttime[iband]))
print('N_cycles inband-peak = {0:.0f}'.format(Ncycles))
print('v/c at peak = {0:.2f}'.format(voverc[ipeak]))
print('Radius of final BH = {0:.0f} km'.format(R_fin))
if make_plots:
plt.figure(figsize=(10,16))
plt.subplot(4,1,1)
plt.plot(ttime,template_p)
plt.xlim([-template_offset,1.])
plt.grid()
plt.xlabel('time (s)')
plt.ylabel('strain')
plt.title(eventname+' template at D_eff = 1 Mpc')
plt.subplot(4,1,2)
plt.plot(ttime,template_p)
plt.xlim([-1.1,0.1])
plt.grid()
plt.xlabel('time (s)')
plt.ylabel('strain')
#plt.title(eventname+' template at D_eff = 1 Mpc')
plt.subplot(4,1,3)
plt.plot(ttime,fGW)
plt.xlim([-1.1,0.1])
plt.grid()
plt.xlabel('time (s)')
plt.ylabel('f_GW')
#plt.title(eventname+' template f_GW')
plt.subplot(4,1,4)
plt.plot(ttime,voverc)
plt.xlim([-1.1,0.1])
plt.grid()
plt.xlabel('time (s)')
plt.ylabel('v/c')
#plt.title(eventname+' template v/c')
# -
# ## Matched filtering to find the signal
#
# Matched filtering is the optimal way to find a known signal buried in stationary, Gaussian noise. It is the standard technique used by the gravitational wave community to find GW signals from compact binary mergers in noisy detector data.
#
# For some loud signals, it may be possible to see the signal in the whitened data or spectrograms. On the other hand, low signal-to-noise ratio (SNR) signals or signals which are of long duration in time may not be visible, even in the whitened data. LIGO scientists use matched filtering to find such "hidden" signals. A matched filter works by compressing the entire signal into one time bin (by convention, the "end time" of the waveform).
#
# LIGO uses a rather elaborate software suite to match the data against a family of such signal waveforms ("templates"), to find the best match. This procedure helps to "optimally" separate signals from instrumental noise, and to infer the parameters of the source (masses, spins, sky location, orbit orientation, etc) from the best match templates.
#
# A blind search requires us to search over many compact binary merger templates (eg, 250,000) with different masses and spins, as well as over all times in all detectors, and then requiring triggers coincident in time and template between detectors. It's an extremely complex and computationally-intensive "search pipeline".
#
# Here, we simplify things, using only one template (the one identified in the full search as being a good match to the data).
#
# Assuming that the data around this event is fairly Gaussian and stationary, we'll use this simple method to identify the signal (matching the template) in our 32 second stretch of data. The peak in the SNR vs time is a "single-detector event trigger".
#
# This calculation is described in section IV of:
# FINDCHIRP: An algorithm for detection of gravitational waves from inspiraling compact binaries
# <NAME> et al., PHYSICAL REVIEW D 85, 122006 (2012) ; http://arxiv.org/abs/gr-qc/0509116
#
# The full search procedure is described in
# GW150914: First results from the search for binary black hole coalescence with Advanced LIGO,
# The LIGO Scientific Collaboration, the Virgo Collaboration, http://arxiv.org/abs/1602.03839
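# Before the full frequency-domain implementation below, the core idea can be seen in a toy time-domain version: cross-correlate white noise containing a known waveform against that waveform, and the SNR peaks at the injection time. Everything here (sample rate, waveform, amplitude) is invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(42)
fs = 1024
n = 4 * fs
t = np.arange(n) / fs

# a short windowed sinusoid as a stand-in "template":
template = np.sin(2 * np.pi * 60. * t[: fs // 4]) * np.hanning(fs // 4)

# unit-variance white noise with the template injected at t0 = 2 s:
t0 = 2.0
data = rng.standard_normal(n)
i0 = int(t0 * fs)
data[i0:i0 + template.size] += 5. * template

# a matched filter against white noise is just cross-correlation with the
# template, normalized so that pure noise gives SNR of order 1:
corr = np.correlate(data, template, mode='valid')
snr = np.abs(corr) / np.sqrt(np.sum(template ** 2))
print('peak SNR = {0:.1f} at t = {1:.3f} s'.format(snr.max(), np.argmax(snr) / fs))
```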
# +
# -- To calculate the PSD of the data, choose an overlap and a window (common to all detectors)
# that minimizes "spectral leakage" https://en.wikipedia.org/wiki/Spectral_leakage
NFFT = 4*fs
psd_window = np.blackman(NFFT)
# and a 50% overlap:
NOVL = NFFT // 2  # integer, as required for noverlap
# define the complex template, common to both detectors:
template = (template_p + template_c*1.j)
# We will record the time where the data match the END of the template.
etime = time+template_offset
# the length and sampling rate of the template MUST match that of the data.
datafreq = np.fft.fftfreq(template.size)*fs
df = np.abs(datafreq[1] - datafreq[0])
# to remove effects at the beginning and end of the data stretch, window the data
# https://en.wikipedia.org/wiki/Window_function#Tukey_window
try: dwindow = signal.tukey(template.size, alpha=1./8) # Tukey window preferred, but requires recent scipy version
except: dwindow = signal.blackman(template.size) # Blackman window OK if Tukey is not available
# prepare the template fft.
template_fft = np.fft.fft(template*dwindow) / fs
# loop over the detectors
dets = ['H1', 'L1']
for det in dets:
    if det == 'L1': data = strain_L1.copy()
    else: data = strain_H1.copy()
# -- Calculate the PSD of the data. Also use an overlap, and window:
data_psd, freqs = mlab.psd(data, Fs = fs, NFFT = NFFT, window=psd_window, noverlap=NOVL)
# Take the Fourier Transform (FFT) of the data and the template (with dwindow)
data_fft = np.fft.fft(data*dwindow) / fs
# -- Interpolate to get the PSD values at the needed frequencies
power_vec = np.interp(np.abs(datafreq), freqs, data_psd)
# -- Calculate the matched filter output in the time domain:
# Multiply the Fourier Space template and data, and divide by the noise power in each frequency bin.
# Taking the Inverse Fourier Transform (IFFT) of the filter output puts it back in the time domain,
# so the result will be plotted as a function of time off-set between the template and the data:
optimal = data_fft * template_fft.conjugate() / power_vec
optimal_time = 2*np.fft.ifft(optimal)*fs
# -- Normalize the matched filter output:
# Normalize the matched filter output so that we expect a value of 1 at times of just noise.
# Then, the peak of the matched filter output will tell us the signal-to-noise ratio (SNR) of the signal.
sigmasq = 1*(template_fft * template_fft.conjugate() / power_vec).sum() * df
sigma = np.sqrt(np.abs(sigmasq))
SNR_complex = optimal_time/sigma
# shift the SNR vector by the template length so that the peak is at the END of the template
peaksample = int(data.size / 2) # location of peak in the template
SNR_complex = np.roll(SNR_complex,peaksample)
SNR = abs(SNR_complex)
# find the time and SNR value at maximum:
indmax = np.argmax(SNR)
timemax = time[indmax]
SNRmax = SNR[indmax]
# Calculate the "effective distance" (see FINDCHIRP paper for definition)
# d_eff = (8. / SNRmax)*D_thresh
d_eff = sigma / SNRmax
    # -- Calculate optimal horizon distance
horizon = sigma/8
# Extract time offset and phase at peak
phase = np.angle(SNR_complex[indmax])
offset = (indmax-peaksample)
# apply time offset, phase, and d_eff to template
template_phaseshifted = np.real(template*np.exp(1j*phase)) # phase shift the template
template_rolled = np.roll(template_phaseshifted,offset) / d_eff # Apply time offset and scale amplitude
# Whiten and band-pass the template for plotting
template_whitened = whiten(template_rolled,interp1d(freqs, data_psd),dt) # whiten the template
template_match = filtfilt(bb, ab, template_whitened) / normalization # Band-pass the template
print('For detector {0}, maximum at {1:.4f} with SNR = {2:.1f}, D_eff = {3:.2f}, horizon = {4:0.1f} Mpc'
.format(det,timemax,SNRmax,d_eff,horizon))
if make_plots:
# plotting changes for the detectors:
if det == 'L1':  # use ==, not 'is', for string comparison
pcolor='g'
strain_whitenbp = strain_L1_whitenbp
template_L1 = template_match.copy()
else:
pcolor='r'
strain_whitenbp = strain_H1_whitenbp
template_H1 = template_match.copy()
# -- Plot the result
plt.figure(figsize=(10,8))
plt.subplot(2,1,1)
plt.plot(time-timemax, SNR, pcolor,label=det+' SNR(t)')
#plt.ylim([0,25.])
plt.grid('on')
plt.ylabel('SNR')
plt.xlabel('Time since {0:.4f}'.format(timemax))
plt.legend(loc='upper left')
plt.title(det+' matched filter SNR around event')
# zoom in
plt.subplot(2,1,2)
plt.plot(time-timemax, SNR, pcolor,label=det+' SNR(t)')
plt.grid('on')
plt.ylabel('SNR')
plt.xlim([-0.15,0.05])
#plt.xlim([-0.3,+0.3])
plt.grid('on')
plt.xlabel('Time since {0:.4f}'.format(timemax))
plt.legend(loc='upper left')
plt.figure(figsize=(10,8))
plt.subplot(2,1,1)
plt.plot(time-tevent,strain_whitenbp,pcolor,label=det+' whitened h(t)')
plt.plot(time-tevent,template_match,'k',label='Template(t)')
plt.ylim([-10,10])
plt.xlim([-0.15,0.05])
plt.grid('on')
plt.xlabel('Time since {0:.4f}'.format(timemax))
plt.ylabel('whitened strain (units of noise stdev)')
plt.legend(loc='upper left')
plt.title(det+' whitened data around event')
plt.subplot(2,1,2)
plt.plot(time-tevent,strain_whitenbp-template_match,pcolor,label=det+' resid')
plt.ylim([-10,10])
plt.xlim([-0.15,0.05])
plt.grid('on')
plt.xlabel('Time since {0:.4f}'.format(timemax))
plt.ylabel('whitened strain (units of noise stdev)')
plt.legend(loc='upper left')
plt.title(det+' Residual whitened data after subtracting template around event')
# -- Display PSD and template
# must multiply by sqrt(f) to plot template fft on top of ASD:
plt.figure(figsize=(10,6))
template_f = np.absolute(template_fft)*np.sqrt(np.abs(datafreq)) / d_eff
plt.loglog(datafreq, template_f, 'k', label='template(f)*sqrt(f)')
plt.loglog(freqs, np.sqrt(data_psd),pcolor, label=det+' ASD')
plt.xlim(20, fs/2)
plt.ylim(1e-24, 1e-20)
plt.grid()
plt.xlabel('frequency (Hz)')
plt.ylabel('strain noise ASD (strain/rtHz), template h(f)*rt(f)')
plt.legend(loc='upper left')
plt.title(det+' ASD and template around event')
# -
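# The matched-filter normalization above can be sanity-checked on synthetic data.
# The sketch below is an illustration, not the LIGO pipeline: with unit-variance
# white noise the PSD weighting drops out, and for a unit-norm template each
# correlation sample then has standard deviation 1, so the correlation is
# directly an SNR time series.

```python
import numpy as np

fs = 4096
N = fs                                      # 1 second of synthetic data
rng = np.random.default_rng(0)
t = np.arange(N) / fs

# unit-norm sine-Gaussian "template" centred at t = 0.5 s
template = np.sin(2*np.pi*100*t) * np.exp(-0.5*((t - 0.5)/0.05)**2)
template /= np.sqrt((template**2).sum())

# inject it into unit-variance white noise at amplitude 8 (so expected SNR ~ 8)
data = rng.normal(size=N) + 8.0 * template

snr_series = np.correlate(data, template, mode="full")
peak_snr = np.abs(snr_series).max()
print(round(peak_snr, 1))                   # close to the injected SNR of 8
```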
# ### Notes on these results
#
# * We make use of only one template, with a simple ASD estimate. The full analysis produces a Bayesian posterior result using many nearby templates. It does a more careful job estimating the ASD, and includes effects of uncertain calibration.
# * As a result, our parameters (SNR, masses, spins, D_eff) are somewhat different from what you will see in our papers.
# * We compute an "effective distance" D_eff. It is NOT an estimate of the actual (luminosity) distance, which depends also on the source location and orbit orientation.
# * These distances are at non-zero redshift, so cosmological effects must be taken into account (neglected here). Since we estimate the BH masses using the phase evolution of the waveform, which has been redshifted, our masses are themselves "redshifted". The true source masses must be corrected for this effect; they are smaller by a factor (1+z).
#
# ## Make sound files
#
# Make wav (sound) files from the filtered, downsampled data, +-2s around the event.
# +
# make wav (sound) files from the whitened data, +-2s around the event.
from scipy.io import wavfile
# function to keep the data within integer limits, and write to wavfile:
def write_wavfile(filename,fs,data):
d = np.int16(data/np.max(np.abs(data)) * 32767 * 0.9)
wavfile.write(filename,int(fs), d)
deltat_sound = 2. # seconds around the event
# index into the strain time series for this time interval:
indxd = np.where((time >= tevent-deltat_sound) & (time < tevent+deltat_sound))
# write the files:
write_wavfile("../"+eventname+"_H1_whitenbp.wav",int(fs), strain_H1_whitenbp[indxd])
write_wavfile("../"+eventname+"_L1_whitenbp.wav",int(fs), strain_L1_whitenbp[indxd])
# re-whiten the template using the smoothed PSD; it sounds better!
template_p_smooth = whiten(template_p,psd_smooth,dt)
# and the template, zooming in on +-2 s around the merger:
indxt = np.where((time >= (time[0]+template_offset-deltat_sound)) & (time < (time[0]+template_offset+deltat_sound)))
write_wavfile("../"+eventname+"_template_whiten.wav",int(fs), template_p_smooth[indxt])
# -
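# The rescaling inside `write_wavfile` can be checked on a pure tone: the output
# must be int16 and stay inside 90% of the int16 range (the 0.9 factor leaves
# headroom against clipping).

```python
import numpy as np

def to_int16(data):
    # same rescaling as write_wavfile above: 90% of the int16 range
    return np.int16(data / np.max(np.abs(data)) * 32767 * 0.9)

t = np.arange(0, 1.0, 1.0/4096)
d = to_int16(np.sin(2*np.pi*440*t))
print(d.dtype, np.abs(d).max())
```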
# ### Listen to the whitened template and data
#
# With good headphones, you may be able to hear a faint thump in the middle; that's our signal!
# +
from IPython.display import Audio
fna = "../"+eventname+"_template_whiten.wav"
print(fna)
Audio(fna)
# -
fna = "../"+eventname+"_H1_whitenbp.wav"
print(fna)
Audio(fna)
# ### Frequency shift the audio files
# We can enhance this by increasing the frequency;
# this is the "audio" equivalent of the enhanced visuals that NASA employs on telescope images with "false color".
#
# The code below will shift the data up by 400 Hz (by taking an FFT, shifting/rolling the frequency series, then inverse FFT-ing). The resulting sound file will be noticeably more high-pitched, and the signal will be easier to hear.
# +
# function that shifts frequency of a band-passed signal
def reqshift(data,fshift=100,sample_rate=4096):
"""Frequency shift the signal by constant
"""
x = np.fft.rfft(data)
T = len(data)/float(sample_rate)
df = 1.0/T
nbins = int(fshift/df)
# print T,df,nbins,x.real.shape
y = np.roll(x.real,nbins) + 1j*np.roll(x.imag,nbins)
y[0:nbins]=0.
z = np.fft.irfft(y)
return z
# parameters for frequency shift
fs = 4096
fshift = 400.
speedup = 1.
fss = int(float(fs)*float(speedup))
# shift frequency of the data
strain_H1_shifted = reqshift(strain_H1_whitenbp,fshift=fshift,sample_rate=fs)
strain_L1_shifted = reqshift(strain_L1_whitenbp,fshift=fshift,sample_rate=fs)
# write the files:
write_wavfile("../"+eventname+"_H1_shifted.wav",int(fs), strain_H1_shifted[indxd])
write_wavfile("../"+eventname+"_L1_shifted.wav",int(fs), strain_L1_shifted[indxd])
# and the template:
template_p_shifted = reqshift(template_p_smooth,fshift=fshift,sample_rate=fs)
write_wavfile("../"+eventname+"_template_shifted.wav",int(fs), template_p_shifted[indxt])
# -
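# A quick numeric check that `reqshift` really moves a tone up by `fshift`
# (synthetic 300 Hz tone, not detector data):

```python
import numpy as np

def reqshift(data, fshift=100, sample_rate=4096):
    """Frequency shift the signal by a constant (same algorithm as above)."""
    x = np.fft.rfft(data)
    df = 1.0 / (len(data)/float(sample_rate))
    nbins = int(fshift/df)
    y = np.roll(x.real, nbins) + 1j*np.roll(x.imag, nbins)
    y[0:nbins] = 0.
    return np.fft.irfft(y)

fs = 4096
t = np.arange(fs) / fs                      # 1 s of data -> 1 Hz frequency resolution
tone = np.sin(2*np.pi*300*t)                # 300 Hz tone
shifted = reqshift(tone, fshift=400, sample_rate=fs)
freqs = np.fft.rfftfreq(len(shifted), 1.0/fs)
peak = freqs[np.argmax(np.abs(np.fft.rfft(shifted)))]
print(peak)  # 700.0
```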
# ### Listen to the frequency-shifted template and data
fna = "../"+eventname+"_template_shifted.wav"
print(fna)
Audio(fna)
fna = "../"+eventname+"_H1_shifted.wav"
print(fna)
Audio(fna)
# ## Data segments
#
# As mentioned above, LIGO strain time series data has gaps (filled with NaNs) when the detectors are not taking valid ("science quality") data. Analyzing these data requires the user to loop over "segments" of valid data stretches. For details, see the <a href='https://losc.ligo.org/segments/'>notes on segments</a> or <a href='https://losc.ligo.org/tutorials/'>introduction to LIGO data files</a>.
#
# In the code below, we can check times around this event for gaps in the L1 data. You are welcome to repeat this with H1 data, with files containing 4096 seconds of data, and with data sampled at 16384 Hz. All of the relevant files can be downloaded from <a href='https://losc.ligo.org/events'>LOSC event pages.</a>
#
# We also unpack the DQ and HW injection bits to check what their values are.
data_segments = 1
if data_segments:
# read in the data at 4096 Hz:
# fn = 'L-L1_LOSC_4_V1-1126259446-32.hdf5'
strain, time, chan_dict = rl.loaddata(fn_L1, 'L1')
print("Contents of all the key, value pairs in chan_dict")
for keys,values in chan_dict.items():
print(keys)
print(values)
print('Total number of non-NaNs in these data = ',np.sum(~np.isnan(strain)))
print('GPS start, GPS stop and length of all data in this file = ',time[0], time[-1],len(strain))
# select the level of data quality; default is "DATA" but "CBC_CAT3" is a conservative choice:
DQflag = 'CBC_CAT3'
# readligo.py method for computing segments (start and stop times with continuous valid data):
segment_list = rl.dq_channel_to_seglist(chan_dict[DQflag])
print('Number of segments with DQflag',DQflag,' = ',len(segment_list))
# loop over segments and print out start, stop and length:
iseg = 0
for segment in segment_list:
time_seg = time[segment]
seg_strain = strain[segment]
print('GPS start, GPS stop and length of segment',iseg, \
'in this file = ',time_seg[0], time_seg[-1], len(seg_strain))
iseg = iseg+1
# here is where you would insert code to analyze the data in this segment.
# now look at segments with no CBC hardware injections:
DQflag = 'NO_CBC_HW_INJ'
segment_list = rl.dq_channel_to_seglist(chan_dict['NO_CBC_HW_INJ'])
print('Number of segments with DQflag',DQflag,' = ',len(segment_list))
iseg = 0
for segment in segment_list:
time_seg = time[segment]
seg_strain = strain[segment]
print('GPS start, GPS stop and length of segment',iseg, \
'in this file = ',time_seg[0], time_seg[-1], len(seg_strain))
iseg = iseg+1
# ## Comments on sampling rate
#
# LIGO data are acquired at 16384 Hz (2^14 Hz). Here, we have been working with data downsampled to 4096 Hz, to save on download time, disk space, and memory requirements.
#
# This is entirely sufficient for signals with no frequency content above f_Nyquist = fs/2 = 2048 Hz, such as signals from higher-mass binary black hole systems; the frequency at which the merger begins (at the innermost stable circular orbit) for equal-mass, spinless black holes is roughly 1557 Hz * (2.8/M_tot), where 2.8 solar masses is the total mass of a canonical binary neutron star system.
#
# If, however, you are interested in signals with frequency content above 2048 Hz, you need the data sampled at the full rate of 16384 Hz.
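# The Nyquist limit is easy to demonstrate numerically: a tone above fs/2 does not
# disappear, it aliases to a lower frequency. A hypothetical 3000 Hz tone sampled
# at 4096 Hz shows up at 4096 - 3000 = 1096 Hz:

```python
import numpy as np

fs = 4096
t = np.arange(fs) / fs                      # 1 s -> 1 Hz frequency resolution
x = np.sin(2*np.pi*3000*t)                  # 3000 Hz > f_Nyquist = 2048 Hz
freqs = np.fft.rfftfreq(len(x), 1.0/fs)
peak = freqs[np.argmax(np.abs(np.fft.rfft(x)))]
print(peak)  # 1096.0 (the aliased frequency)
```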
# ## Construct a csv file containing the whitened data and template
# +
# time vector around event
times = time-tevent
# zoom in on [-0.2,0.05] seconds around event
irange = np.nonzero((times >= -0.2) & (times < 0.05))
# construct a data structure for a csv file:
dat = [times[irange], strain_H1_whitenbp[irange],strain_L1_whitenbp[irange],
template_H1[irange],template_L1[irange] ]
datcsv = np.array(dat).transpose()
# make a csv filename, header, and format
fncsv = "../"+eventname+'_data.csv'
headcsv = eventname+' time-'+str(tevent)+ \
' (s),H1_data_whitened,L1_data_whitened,H1_template_whitened,L1_template_whitened'
fmtcsv = ",".join(["%10.6f"] * 5)
np.savetxt(fncsv, datcsv, fmt=fmtcsv, header=headcsv)
print("Wrote whitened data to file {0}".format(fncsv))
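# The same savetxt format round-trips cleanly with np.loadtxt: savetxt prefixes
# the header with '#', which loadtxt skips by default (toy array below, not the
# event data).

```python
import numpy as np

demo = np.column_stack([np.linspace(-0.2, 0.05, 5), np.zeros(5)])
fmt = ",".join(["%10.6f"] * 2)
np.savetxt("demo.csv", demo, fmt=fmt, header="t (s),h")
back = np.loadtxt("demo.csv", delimiter=",")
print(back.shape)  # (5, 2)
```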
| ligo_tale/LOSC_Event_tutorial.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + colab={"base_uri": "https://localhost:8080/"} id="vZYTQ9HCo946" outputId="b692ba69-3567-48eb-fe2e-e00395a965db"
# Copyright 2021 Google LLC
# Use of this source code is governed by an MIT-style
# license that can be found in the LICENSE file or at
# https://opensource.org/licenses/MIT.
# Notebook authors: <NAME> (<EMAIL>)
# and <NAME> (<EMAIL>)
# This notebook reproduces figures for chapter 18 from the book
# "Probabilistic Machine Learning: An Introduction"
# by <NAME> (MIT Press, 2021).
# Book pdf is available from http://probml.ai
# + [markdown] id="-NiY3c9uo946"
# <a href="https://opensource.org/licenses/MIT" target="_parent"><img src="https://img.shields.io/github/license/probml/pyprobml"/></a>
# + [markdown] id="NLgo1c4uo947"
# <a href="https://colab.research.google.com/github/probml/pyprobml/blob/master/book1/figures/chapter18_trees_forests_bagging_and_boosting_figures.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + [markdown] id="_8Y31zPbo947"
# ## Figure 18.1:<a name='18.1'></a> <a name='regtree'></a>
# + [markdown] id="D-8kezNXo947"
#
# A simple regression tree on two inputs. Adapted from Figure 9.2 of <a href='#HastieBook'>[HTF09]</a> .
# Figure(s) generated by [regtreeSurfaceDemo.py](https://github.com/probml/pyprobml/blob/master/scripts/regtreeSurfaceDemo.py)
# + colab={"base_uri": "https://localhost:8080/"} id="Ldo9NvJto948" outputId="304f1932-57db-4a2d-af22-537e1543847c"
#@title Click me to run setup { display-mode: "form" }
try:
if PYPROBML_SETUP_ALREADY_RUN:
print('skipping setup')
except:
PYPROBML_SETUP_ALREADY_RUN = True
print('running setup...')
# !git clone https://github.com/probml/pyprobml /pyprobml &> /dev/null
# %cd -q /pyprobml/scripts
import pyprobml_utils as pml
import colab_utils
import os
os.environ["PYPROBML"] = ".." # one above current scripts directory
import google.colab
from google.colab.patches import cv2_imshow
# %reload_ext autoreload
# %autoreload 2
def show_image(img_path,size=None,ratio=None):
img = colab_utils.image_resize(img_path, size)
cv2_imshow(img)
print('finished!')
# + colab={"base_uri": "https://localhost:8080/", "height": 397} id="ZZs_KqHmo949" outputId="07d1cf0d-e030-4fe3-b4c2-6aa641cbd3f8"
google.colab.files.view("./regtreeSurfaceDemo.py")
# %run regtreeSurfaceDemo.py
# + [markdown] id="-R-CUxuEo949"
# ## Figure 18.2:<a name='18.2'></a> <a name='dtreeClassif'></a>
# + [markdown] id="MUtjWFpio949"
#
# (a) A set of shapes with corresponding binary labels. The features are: color (values "blue", "red", "other"), shape (values "ellipse", "other"), and size (real-valued). (b) A hypothetical classification tree fitted to this data. A leaf labeled as $(n_1,n_0)$ means that there are $n_1$ positive examples that fall into this partition, and $n_0$ negative examples.
# + colab={"base_uri": "https://localhost:8080/"} id="VKznsp4Ro94-" outputId="de65e7e2-78be-4c43-8705-e02ce6dc0af7"
#@title Click me to run setup { display-mode: "form" }
try:
if PYPROBML_SETUP_ALREADY_RUN:
print('skipping setup')
except:
PYPROBML_SETUP_ALREADY_RUN = True
print('running setup...')
# !git clone https://github.com/probml/pyprobml /pyprobml &> /dev/null
# %cd -q /pyprobml/scripts
import pyprobml_utils as pml
import colab_utils
import os
os.environ["PYPROBML"] = ".." # one above current scripts directory
import google.colab
from google.colab.patches import cv2_imshow
# %reload_ext autoreload
# %autoreload 2
def show_image(img_path,size=None,ratio=None):
img = colab_utils.image_resize(img_path, size)
cv2_imshow(img)
print('finished!')
# + colab={"base_uri": "https://localhost:8080/", "height": 341} id="76tmUjHRo94-" outputId="699d6f0b-4d6e-4016-fb01-70f0b49160cb"
show_image("/pyprobml/book1/figures/images/Figure_18.2_A.png")
# + colab={"base_uri": "https://localhost:8080/", "height": 514} id="oHJx7Peoo94-" outputId="88e6c913-383f-4134-c953-6b97e79378a6"
show_image("/pyprobml/book1/figures/images/Figure_18.2_B.png")
# + [markdown] id="OQfqyODYo94_"
# ## Figure 18.3:<a name='18.3'></a> <a name='dtreeUnstable'></a>
# + [markdown] id="F6GeMaIfo94_"
#
# (a) A decision tree of depth 2 fit to the iris data, using just the petal length and petal width features. Leaf nodes are color coded according to the majority class. The number of training samples that pass from the root to each node is shown inside each box, as well as how many of these values fall into each class. This can be normalized to get a distribution over class labels for each node. (b) Decision surface induced by (a). (c) Fit to data where we omit a single data point (shown by red star). (d) Ensemble of the two models in (b) and (c).
# Figure(s) generated by [dtree_sensitivity.py](https://github.com/probml/pyprobml/blob/master/scripts/dtree_sensitivity.py)
# + colab={"base_uri": "https://localhost:8080/"} id="flKy_wAzo95A" outputId="0d3b53c5-266b-47ad-ad62-0265528efcf6"
#@title Click me to run setup { display-mode: "form" }
try:
if PYPROBML_SETUP_ALREADY_RUN:
print('skipping setup')
except:
PYPROBML_SETUP_ALREADY_RUN = True
print('running setup...')
# !git clone https://github.com/probml/pyprobml /pyprobml &> /dev/null
# %cd -q /pyprobml/scripts
import pyprobml_utils as pml
import colab_utils
import os
os.environ["PYPROBML"] = ".." # one above current scripts directory
import google.colab
from google.colab.patches import cv2_imshow
# %reload_ext autoreload
# %autoreload 2
def show_image(img_path,size=None,ratio=None):
img = colab_utils.image_resize(img_path, size)
cv2_imshow(img)
print('finished!')
# + colab={"base_uri": "https://localhost:8080/", "height": 1000} id="qFEM1Vcto95B" outputId="d120f0c4-bb7d-4982-aae8-3a6950341e30"
google.colab.files.view("./dtree_sensitivity.py")
# %run dtree_sensitivity.py
# + [markdown] id="4tESIQSBo95B"
# ## Figure 18.4:<a name='18.4'></a> <a name='bagging'></a>
# + [markdown] id="ls7WpMJmo95C"
#
# (a) A single decision tree. (b-c) Bagging ensemble of 10 and 50 trees. (d) Random forest of 50 trees. Adapted from Figure 7.5 of <a href='#Geron2019'>[Aur19]</a> .
# Figure(s) generated by [bagging_trees.py](https://github.com/probml/pyprobml/blob/master/scripts/bagging_trees.py) [rf_demo_2d.py](https://github.com/probml/pyprobml/blob/master/scripts/rf_demo_2d.py)
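# The bagging recipe behind these figures can be sketched without sklearn, using a
# hypothetical one-feature threshold "stump" in place of a full decision tree:
# train one stump per bootstrap resample, then take a majority vote over the ensemble.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 1))
y = (X[:, 0] > 0).astype(int)               # separable toy labels

def fit_stump(Xb, yb):
    # one-feature threshold classifier, oriented to fit its bootstrap sample
    thr = np.median(Xb[:, 0])
    flip = ((Xb[:, 0] > thr).astype(int) != yb).mean() > 0.5
    return thr, flip

def bagged_predict(X, stumps):
    votes = []
    for thr, flip in stumps:
        p = (X[:, 0] > thr).astype(int)
        votes.append(1 - p if flip else p)
    return (np.mean(votes, axis=0) > 0.5).astype(int)

stumps = []
for _ in range(25):
    idx = rng.integers(0, len(X), len(X))   # bootstrap: resample rows with replacement
    stumps.append(fit_stump(X[idx], y[idx]))

acc = (bagged_predict(X, stumps) == y).mean()
print(acc)                                  # close to 1.0 on this easy data
```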
# + colab={"base_uri": "https://localhost:8080/"} id="_RGk_4hSo95C" outputId="3f75f34f-cd32-4c9d-e220-f4e45daf9455"
#@title Click me to run setup { display-mode: "form" }
try:
if PYPROBML_SETUP_ALREADY_RUN:
print('skipping setup')
except:
PYPROBML_SETUP_ALREADY_RUN = True
print('running setup...')
# !git clone https://github.com/probml/pyprobml /pyprobml &> /dev/null
# %cd -q /pyprobml/scripts
import pyprobml_utils as pml
import colab_utils
import os
os.environ["PYPROBML"] = ".." # one above current scripts directory
import google.colab
from google.colab.patches import cv2_imshow
# %reload_ext autoreload
# %autoreload 2
def show_image(img_path,size=None,ratio=None):
img = colab_utils.image_resize(img_path, size)
cv2_imshow(img)
print('finished!')
# + colab={"base_uri": "https://localhost:8080/", "height": 34} id="jQtim2OGo95D" outputId="0c8e4a40-606a-4279-dca2-17bf6d397273"
google.colab.files.view("./bagging_trees.py")
# %run bagging_trees.py
# + colab={"base_uri": "https://localhost:8080/", "height": 1000} id="ErJThCNCo95D" outputId="f5921145-ada8-4c96-f4ee-436f8843d788"
google.colab.files.view("./rf_demo_2d.py")
# %run rf_demo_2d.py
# + [markdown] id="s4fx8GEJo95D"
# ## Figure 18.5:<a name='18.5'></a> <a name='spamCompare'></a>
# + [markdown] id="hpSu_Ixno95D"
#
# Predictive accuracy vs size of tree ensemble for bagging, random forests and gradient boosting with log loss. Adapted from Figure 15.1 of <a href='#HastieBook'>[HTF09]</a> .
# Figure(s) generated by [spam_tree_ensemble_compare.py](https://github.com/probml/pyprobml/blob/master/scripts/spam_tree_ensemble_compare.py)
# + colab={"base_uri": "https://localhost:8080/"} id="9Uv0Ap0ho95E" outputId="d62a3405-480f-4cf0-cdec-177c5b53d636"
#@title Click me to run setup { display-mode: "form" }
try:
if PYPROBML_SETUP_ALREADY_RUN:
print('skipping setup')
except:
PYPROBML_SETUP_ALREADY_RUN = True
print('running setup...')
# !git clone https://github.com/probml/pyprobml /pyprobml &> /dev/null
# %cd -q /pyprobml/scripts
import pyprobml_utils as pml
import colab_utils
import os
os.environ["PYPROBML"] = ".." # one above current scripts directory
import google.colab
from google.colab.patches import cv2_imshow
# %reload_ext autoreload
# %autoreload 2
def show_image(img_path,size=None,ratio=None):
img = colab_utils.image_resize(img_path, size)
cv2_imshow(img)
print('finished!')
# + colab={"base_uri": "https://localhost:8080/", "height": 435} id="PWOKysTFo95E" outputId="5e87ca99-7446-4f2f-a4c0-243e3f7b2f27"
google.colab.files.view("./spam_tree_ensemble_compare.py")
# %run spam_tree_ensemble_compare.py
# + [markdown] id="8GtFDJgJo95E"
# ## Figure 18.6:<a name='18.6'></a> <a name='boostedRegrTrees'></a>
# + [markdown] id="RPaakPH6o95F"
#
# Illustration of boosting using a regression tree of depth 2 applied to a 1d dataset. Adapted from Figure 7.9 of <a href='#Geron2019'>[Aur19]</a> .
# Figure(s) generated by [boosted_regr_trees.py](https://github.com/probml/pyprobml/blob/master/scripts/boosted_regr_trees.py)
# + colab={"base_uri": "https://localhost:8080/"} id="XNEPkOPho95F" outputId="80f6bd38-1273-4114-92e9-896626c4bea9"
#@title Click me to run setup { display-mode: "form" }
try:
if PYPROBML_SETUP_ALREADY_RUN:
print('skipping setup')
except:
PYPROBML_SETUP_ALREADY_RUN = True
print('running setup...')
# !git clone https://github.com/probml/pyprobml /pyprobml &> /dev/null
# %cd -q /pyprobml/scripts
import pyprobml_utils as pml
import colab_utils
import os
os.environ["PYPROBML"] = ".." # one above current scripts directory
import google.colab
from google.colab.patches import cv2_imshow
# %reload_ext autoreload
# %autoreload 2
def show_image(img_path,size=None,ratio=None):
img = colab_utils.image_resize(img_path, size)
cv2_imshow(img)
print('finished!')
# + colab={"base_uri": "https://localhost:8080/", "height": 674} id="M_W5eksfo95F" outputId="27b4c9fa-272f-4d2a-c0ec-b7d05645eeeb"
google.colab.files.view("./boosted_regr_trees.py")
# %run boosted_regr_trees.py
# + [markdown] id="YwqO9GyWo95G"
# ## Figure 18.7:<a name='18.7'></a> <a name='expLoss'></a>
# + [markdown] id="9QQnkOrNo95G"
#
# Illustration of various loss functions for binary classification. The horizontal axis is the margin $m(\mathbf{x}) = \tilde{y} F(\mathbf{x})$, the vertical axis is the loss. The log loss uses log base 2.
# Figure(s) generated by [hinge_loss_plot.py](https://github.com/probml/pyprobml/blob/master/scripts/hinge_loss_plot.py)
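# The curves in this figure are one-liners in numpy; note that all three losses
# pass through 1 at margin m = 0.

```python
import numpy as np

m = np.array([-1.0, 0.0, 1.0, 2.0])         # margins m = y*F(x)
hinge = np.maximum(0.0, 1.0 - m)
logloss = np.log2(1.0 + np.exp(-m))         # log base 2, as in the figure
exploss = np.exp(-m)
print(hinge)                                # [2. 1. 0. 0.]
```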
# + colab={"base_uri": "https://localhost:8080/"} id="qzzOWkpdo95H" outputId="09d3ed64-ca52-4113-c816-6957342adfb5"
#@title Click me to run setup { display-mode: "form" }
try:
if PYPROBML_SETUP_ALREADY_RUN:
print('skipping setup')
except:
PYPROBML_SETUP_ALREADY_RUN = True
print('running setup...')
# !git clone https://github.com/probml/pyprobml /pyprobml &> /dev/null
# %cd -q /pyprobml/scripts
import pyprobml_utils as pml
import colab_utils
import os
os.environ["PYPROBML"] = ".." # one above current scripts directory
import google.colab
from google.colab.patches import cv2_imshow
# %reload_ext autoreload
# %autoreload 2
def show_image(img_path,size=None,ratio=None):
img = colab_utils.image_resize(img_path, size)
cv2_imshow(img)
print('finished!')
# + colab={"base_uri": "https://localhost:8080/", "height": 282} id="crLInYrLo95H" outputId="f1309e80-1f35-4448-fa0c-71bbaf7da3ef"
google.colab.files.view("./hinge_loss_plot.py")
# %run hinge_loss_plot.py
# + [markdown] id="3vYI7d32o95I"
# ## Figure 18.8:<a name='18.8'></a> <a name='rfFeatureImportanceMnist'></a>
# + [markdown] id="A7ALECLZo95I"
#
# Feature importance of a random forest classifier trained to distinguish MNIST digits from classes 0 and 8. Adapted from Figure 7.6 of <a href='#Geron2019'>[Aur19]</a> .
# Figure(s) generated by [rf_feature_importance_mnist.py](https://github.com/probml/pyprobml/blob/master/scripts/rf_feature_importance_mnist.py)
# + colab={"base_uri": "https://localhost:8080/"} id="l-iGOZ7lo95I" outputId="509e94fe-da02-4889-8be6-c3dd1340f887"
#@title Click me to run setup { display-mode: "form" }
try:
if PYPROBML_SETUP_ALREADY_RUN:
print('skipping setup')
except:
PYPROBML_SETUP_ALREADY_RUN = True
print('running setup...')
# !git clone https://github.com/probml/pyprobml /pyprobml &> /dev/null
# %cd -q /pyprobml/scripts
import pyprobml_utils as pml
import colab_utils
import os
os.environ["PYPROBML"] = ".." # one above current scripts directory
import google.colab
from google.colab.patches import cv2_imshow
# %reload_ext autoreload
# %autoreload 2
def show_image(img_path,size=None,ratio=None):
img = colab_utils.image_resize(img_path, size)
cv2_imshow(img)
print('finished!')
# + colab={"base_uri": "https://localhost:8080/", "height": 275} id="euTd0osJo95J" outputId="b8f833f4-56b3-4cd7-98f9-7865c71b3fa4"
google.colab.files.view("./rf_feature_importance_mnist.py")
# %run rf_feature_importance_mnist.py
# + [markdown] id="XfsteIbVo95J"
# ## Figure 18.9:<a name='18.9'></a> <a name='spamPartialJoint'></a>
# + [markdown] id="s5_NvKhGo95J"
#
# (a) Relative feature importance for the spam classification problem. Adapted from Figure 10.6 of <a href='#HastieBook'>[HTF09]</a> . (b) Partial dependence of log-odds of the spam class for 4 important predictors. The red ticks at the base of the plot are deciles of the empirical distribution for this feature. (c) Joint partial dependence of log-odds on the features hp and !. Adapted from Figure 10.6--10.8 of <a href='#HastieBook'>[HTF09]</a> .
# Figure(s) generated by [spam_tree_ensemble_interpret.py](https://github.com/probml/pyprobml/blob/master/scripts/spam_tree_ensemble_interpret.py)
# + colab={"base_uri": "https://localhost:8080/"} id="Jb8jIo8_o95J" outputId="f2d28711-9d4a-45aa-917e-4952e3be0b60"
#@title Click me to run setup { display-mode: "form" }
try:
if PYPROBML_SETUP_ALREADY_RUN:
print('skipping setup')
except:
PYPROBML_SETUP_ALREADY_RUN = True
print('running setup...')
# !git clone https://github.com/probml/pyprobml /pyprobml &> /dev/null
# %cd -q /pyprobml/scripts
import pyprobml_utils as pml
import colab_utils
import os
os.environ["PYPROBML"] = ".." # one above current scripts directory
import google.colab
from google.colab.patches import cv2_imshow
# %reload_ext autoreload
# %autoreload 2
def show_image(img_path,size=None,ratio=None):
img = colab_utils.image_resize(img_path, size)
cv2_imshow(img)
print('finished!')
# + colab={"base_uri": "https://localhost:8080/", "height": 299} id="2E1KnQV8o95K" outputId="d5d0bf35-5757-4aa0-ff80-8ef10c261d4e"
google.colab.files.view("./spam_tree_ensemble_interpret.py")
# %run spam_tree_ensemble_interpret.py
# + [markdown] id="AqAMJYaIo95K"
# ## References:
# <a name='Geron2019'>[Aur19]</a> <NAME> "Hands-On Machine Learning with Scikit-Learn and TensorFlow: Concepts, Tools, and Techniques for Building Intelligent Systems (2nd edition)". (2019).
#
# <a name='HastieBook'>[HTF09]</a> <NAME>, <NAME> and <NAME>. "The Elements of Statistical Learning". (2009).
#
#
| book1/figures/chapter18_trees_forests_bagging_and_boosting_figures_output.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
from nltk.corpus import movie_reviews
from nltk import pos_tag
movie_reviews.categories()
len(movie_reviews.fileids())
movie_reviews.fileids()
movie_reviews.words(movie_reviews.fileids()[5])
documents=[]
for category in movie_reviews.categories():
for fileid in movie_reviews.fileids(category):
documents.append((movie_reviews.words(fileid),category))
documents[0:5]
import random
random.shuffle(documents)
documents[0:5]
from nltk.stem import WordNetLemmatizer
lemmatizer=WordNetLemmatizer()
from nltk.corpus import wordnet
def get_simple_pos(tag):
if tag.startswith('J'):
return wordnet.ADJ
elif tag.startswith('V'):
return wordnet.VERB
elif tag.startswith('N'):
return wordnet.NOUN
elif tag.startswith('R'):
return wordnet.ADV
else:
return wordnet.NOUN
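# The mapping above can be exercised standalone; the WordNet constants are just the
# strings 'a', 'v', 'n', 'r', so a stdlib-only sketch behaves identically (renamed
# here to avoid shadowing the function above):

```python
def simple_pos_sketch(tag):
    # Penn Treebank tag prefix -> WordNet POS value ('n' is the default)
    if tag.startswith('J'):
        return 'a'   # wordnet.ADJ
    elif tag.startswith('V'):
        return 'v'   # wordnet.VERB
    elif tag.startswith('N'):
        return 'n'   # wordnet.NOUN
    elif tag.startswith('R'):
        return 'r'   # wordnet.ADV
    return 'n'

print([simple_pos_sketch(t) for t in ['JJ', 'VBD', 'NNS', 'RB', 'DT']])
# ['a', 'v', 'n', 'r', 'n']
```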
from nltk.corpus import stopwords
import string
stops=set(stopwords.words('english'))
punctuations=list(string.punctuation)
stops.update(punctuations)
stops,string.punctuation
def clean_reviews(words):
output_words=[]
for w in words:
if not w.lower() in stops:
pos=pos_tag([w])
clean_word=lemmatizer.lemmatize(w,get_simple_pos(pos[0][1]))
output_words.append(clean_word.lower())
return output_words
documents=[(clean_reviews(document),category) for document,category in documents]
documents[0]
training_documents=documents[:1500]
testing_documents=documents[1500:]
all_words=[]
for doc in training_documents:
all_words+=doc[0]
import nltk
freq=nltk.FreqDist(all_words)
common=freq.most_common(3000)
features=[i[0] for i in common]
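# nltk.FreqDist behaves like collections.Counter, so the vocabulary selection can be
# sketched with the stdlib (toy word list, 2 instead of 3000 features):

```python
from collections import Counter

toy_words = ["good", "film", "good", "bad", "film", "good"]
vocab = [w for w, _ in Counter(toy_words).most_common(2)]
print(vocab)  # ['good', 'film']
```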
def get_feature_dict(words):
current_feature={}
words_set=set(words)
for w in features:
current_feature[w]=w in words_set
return current_feature
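# A self-contained version of the feature dict, with a hypothetical 5-word vocabulary
# (renamed so it does not shadow the real `features` list above):

```python
toy_vocab = ["good", "bad", "plot", "acting", "boring"]

def toy_feature_dict(words):
    words_set = set(words)
    return {w: (w in words_set) for w in toy_vocab}

print(toy_feature_dict(["the", "acting", "was", "good"]))
# {'good': True, 'bad': False, 'plot': False, 'acting': True, 'boring': False}
```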
output=get_feature_dict(training_documents[0][0])
training_data=[(get_feature_dict(doc),category) for doc,category in training_documents]
testing_data=[(get_feature_dict(doc),category) for doc,category in testing_documents]
# ## Classification using NLTK Naive Bayes
from nltk import NaiveBayesClassifier
classifier=NaiveBayesClassifier.train(training_data)
nltk.classify.accuracy(classifier,testing_data)
classifier.show_most_informative_features(15)
# ## Using Sklearn Classifier within NLTK
from sklearn.svm import SVC
from nltk.classify.scikitlearn import SklearnClassifier
SVC=SVC()
Classifier_sklearn=SklearnClassifier(SVC)
Classifier_sklearn.train(training_data)
nltk.classify.accuracy(Classifier_sklearn,testing_data)
from sklearn.ensemble import RandomForestClassifier
rfc=RandomForestClassifier()
Classifier_sklearn1=SklearnClassifier(rfc)
Classifier_sklearn1.train(training_data)
nltk.classify.accuracy(Classifier_sklearn1,testing_data)
#
| Lecture 22 NLP-2/Using Sklearn Classifier Within NLTK/Using Sklearn Classifier within NLTK-checkpoint.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/Ethan-Jeong/test_deeplearning/blob/master/practice_tensorflow_wine.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + id="B5dPn1Uubr5W"
from sklearn import datasets
# + colab={"base_uri": "https://localhost:8080/"} id="m-UGR4qDb0Nb" outputId="3ec1d428-5a37-40b6-d5dd-d4ac5c46c9a4"
wine = datasets.load_wine()
wine.keys()
# + colab={"base_uri": "https://localhost:8080/"} id="d5vjEMv1c_XH" outputId="3f0fa25d-ed4c-49e5-dc03-82a5b4e1cef3"
x_data = wine['data']
y_data = wine['target']
x_data.shape , y_data.shape
# + id="Yavd-ligdO0E"
import numpy as np
# + colab={"base_uri": "https://localhost:8080/"} id="pgeRwptddRPp" outputId="b6320323-b456-41f2-da7c-021b7a8808f0"
np.unique(y_data)
# + id="eDzO1tyvb4sn"
import pandas as pd
# + colab={"base_uri": "https://localhost:8080/"} id="ndbYslgmb6wD" outputId="2e1bbb5f-1cff-4414-f6a3-e618607cf3e4"
df_wine = pd.DataFrame(wine.data)
df_wine.info()
# + id="K30zks-bpYUw"
import sqlite3
connect = sqlite3.connect('./db.sqlite3')
df_wine.to_sql('wine_resource',connect,if_exists='append',index=False)
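# `to_sql`/`read_sql_query` wrap plain sqlite3 calls; a stdlib-only round trip
# (in-memory database, toy table and columns) looks like this:

```python
import sqlite3

conn = sqlite3.connect(':memory:')          # throwaway in-memory database
conn.execute('create table wine_demo (alcohol real, malic_acid real)')
conn.executemany('insert into wine_demo values (?, ?)',
                 [(13.0, 2.0), (12.5, 1.8)])
rows = conn.execute('select * from wine_demo').fetchall()
print(rows)  # [(13.0, 2.0), (12.5, 1.8)]
```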
# + colab={"base_uri": "https://localhost:8080/", "height": 173} id="00zdJ8HzqU38" outputId="6f8c1aa4-8ea1-41b5-c9f0-55afa3975ca4"
df_load = pd.read_sql_query('select * from wine_resource', connect)
df_load.head(4)
# + colab={"base_uri": "https://localhost:8080/"} id="v1iUiX1935tm" outputId="6ed823ab-60c7-40fa-c8f2-62c61fbfaa8e"
x_data = df_load.to_numpy()
x_data.shape
# + colab={"base_uri": "https://localhost:8080/"} id="OkrJAfp7qiyf" outputId="5da39099-65ef-4a19-c187-3d101b2859f1"
y_data = wine.target
y_data , np.unique(y_data)
# + colab={"base_uri": "https://localhost:8080/"} id="36r2n_hprvlo" outputId="9e5d7a51-142f-4629-85a9-de92f7ab3918"
x_data , x_data.shape
# + id="lBNZtu_PrFX8"
import tensorflow as tf
# + id="tjaI8YUKryxK"
model = tf.keras.Sequential()
# + colab={"base_uri": "https://localhost:8080/"} id="RDqsROLTr6Cf" outputId="1e727246-e814-4116-c94e-97692474ba89"
model.add(tf.keras.Input(shape=(13,))) # input layer
model.add(tf.keras.layers.Dense(64,activation='relu')) # hidden layer
model.add(tf.keras.layers.Dense(32,activation='relu')) # hidden layer
model.add(tf.keras.layers.Dense(3,activation='softmax'))
# + id="BRrk9gyysclT"
model.compile(optimizer='adam',loss='sparse_categorical_crossentropy',metrics=['acc'])
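# For one sample, sparse_categorical_crossentropy is just the negative log of the
# predicted probability of the true integer class. A numpy sketch (hypothetical
# scores, not the trained model's output):

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())                 # subtract max for numerical stability
    return e / e.sum()

scores = np.array([2.0, 0.5, -1.0])         # hypothetical pre-softmax scores
probs = softmax(scores)
label = 0                                   # integer class id, as in y_data
loss = -np.log(probs[label])
print(round(loss, 3))
```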
# + colab={"base_uri": "https://localhost:8080/"} id="QrWKiGMYroxh" outputId="55a7643f-fc6e-4bbc-c813-2603812c65ff"
hist = model.fit( x_data , y_data , epochs=50 , validation_split=0.3 )
# + colab={"base_uri": "https://localhost:8080/"} id="E9ebs4Des1y8" outputId="93cff9ed-d559-4524-b36a-b340d09d0196"
model.evaluate(x_data ,y_data)
# + colab={"base_uri": "https://localhost:8080/"} id="1tTclwW3yM8a" outputId="668c7d14-cb38-45a3-bf1e-dc7c581d9786"
x_data[25],y_data[25]
# + colab={"base_uri": "https://localhost:8080/"} id="2jtbGkWfyaXF" outputId="48f723a5-b56c-471c-d42b-20256fb32612"
hist.history.keys()
# + id="JIcKaW-3yphu"
import matplotlib.pyplot as plt
# + colab={"base_uri": "https://localhost:8080/", "height": 268} id="WZ7xaHbAytea" outputId="d9e42db4-3434-4567-a391-deb837b32b5b"
plt.plot(hist.history['loss'])
plt.plot(hist.history['val_loss'])
plt.show()
# + colab={"base_uri": "https://localhost:8080/", "height": 265} id="RNfZMXtby1_t" outputId="ef96d74c-7058-4b68-b451-20a0f862acc3"
plt.plot(hist.history['acc'])
plt.plot(hist.history['val_acc'])
plt.show()
# + colab={"base_uri": "https://localhost:8080/"} id="NqCfNpo8y6Ur" outputId="a0a5db21-7c31-46cb-bb31-e900014a5f2a"
pred = model.predict([[1.305e+01, 2.050e+00, 3.220e+00, 2.500e+01, 1.240e+02, 2.630e+00,
2.680e+00, 4.700e-01, 1.920e+00, 3.580e+00, 1.130e+00, 3.200e+00,
8.300e+02]])
pred , np.argmax(pred)
# + id="JA7Rzr19z_vs"
| practice_tensorflow_wine.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + colab={"base_uri": "https://localhost:8080/", "height": 666} colab_type="code" id="601NM1YNGaVT" outputId="3e70fda1-291b-4252-bee3-8c46138405c7"
pip install tensorflow-gpu
# + colab={} colab_type="code" id="5zizE3IRGLdL"
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers
import pandas as pd
import numpy as np
import seaborn as sns
from pylab import rcParams
import matplotlib.pyplot as plt
from matplotlib import rc
# %matplotlib inline
# %config InlineBackend.figure_format='retina'
sns.set(style='whitegrid', palette='muted', font_scale=1.5)
rcParams['figure.figsize'] = 14, 8
RANDOM_SEED = 42
np.random.seed(RANDOM_SEED)
tf.random.set_seed(RANDOM_SEED)
# + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" id="AxaTEc2qGU92" outputId="925a8df3-86eb-47b2-b330-b151d5505699"
tf.__version__
# + [markdown] colab_type="text" id="NARe9dG-OB7o"
# # Tensors
# + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" id="cEXlDWIoGWNl" outputId="2491b4eb-e5a9-432f-d3ee-1f619b196c18"
x = tf.constant(1)
print(x)
# + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" id="X3oQO-1DR3CY" outputId="6f6e5043-47cb-460a-9bc1-91aaec25f813"
x.numpy()
# + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" id="FLda630KR5Jf" outputId="f67630be-d900-496b-b60a-ddefb66583e1"
x.shape
# + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" id="_ab4zU1hX6Xi" outputId="43d4bd7a-f941-40b7-e940-0f5a81a11dda"
tf.rank(x).numpy()
# + colab={"base_uri": "https://localhost:8080/", "height": 68} colab_type="code" id="g-KsjDhsO8TB" outputId="a3e86c55-2106-417d-bc31-0fd970f66163"
m = tf.constant([[1, 2, 1], [3, 4, 2]])
print(m)
# + colab={} colab_type="code" id="Ew_GiInPR88I"
st = tf.constant(["Hello", "World"])
# + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" id="OG8kOjylSIQ6" outputId="8565b021-f3a4-44d1-e3eb-65accbce734c"
print(st)
# + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" id="nYAsRk8JX-rh" outputId="f84835d7-3f5d-4a4d-b02a-e7977ff04930"
tf.rank(st).numpy()
# + [markdown] colab_type="text" id="OaZHD6r1SZIq"
# ## Helpers
# + colab={"base_uri": "https://localhost:8080/", "height": 85} colab_type="code" id="d0BPI2x-SIz5" outputId="8ad9356a-d577-4f98-9a8a-78e67acc32d8"
ones = tf.ones([3, 3])
print(ones)
# + colab={} colab_type="code" id="zZik0WwfTH5h"
zeros = tf.zeros([2, 3])
# + colab={"base_uri": "https://localhost:8080/", "height": 68} colab_type="code" id="K5CclppGTTgW" outputId="1d59cf70-b7da-4ecd-c225-b9afe82fc344"
print(zeros)
# + colab={"base_uri": "https://localhost:8080/", "height": 85} colab_type="code" id="h1_qjxV2TT6D" outputId="1faa9c64-a44c-4e4e-f7dd-f4e23f24bd0f"
print(tf.reshape(zeros, [3, 2]))
# + colab={"base_uri": "https://localhost:8080/", "height": 85} colab_type="code" id="_RqSXxUjUPdB" outputId="f954ea8b-8e78-41f8-d6c6-c82942e4924b"
tf.transpose(zeros)
# + [markdown] colab_type="text" id="wlSbmG0yVKYV"
# # Tensor Math
# + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" id="w4v0E9GPUegj" outputId="b0d25237-0454-460d-de16-b7549eb2fc01"
a = tf.constant(1)
b = tf.constant(1)
tf.add(a, b).numpy()
# + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" id="4SoNQTg6VW1o" outputId="23c38b68-5518-48bc-d27a-73fcfb871cfc"
(a + b).numpy()
# + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" id="kYIXobGKV8lD" outputId="c243e6a7-8ad6-47ba-fa2f-7493ec84087c"
c = a + b
tf.square(c)
# + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" id="e-ioFyN_Wxx6" outputId="6e36b923-eadd-492e-e123-7eb2d32301fc"
c * c
# + colab={"base_uri": "https://localhost:8080/", "height": 51} colab_type="code" id="JpYEU0oSWzlf" outputId="28c1ab70-3acc-43c9-a053-2cd916cbc6fc"
d1 = tf.constant([[1, 2], [1, 2]]);
d2 = tf.constant([[3, 4], [3, 4]]);
tf.tensordot(d1, d2, axes=1).numpy()
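`tf.tensordot` with `axes=1` contracts the last axis of the first argument with the first axis of the second, which for 2-D inputs is ordinary matrix multiplication; the same contraction in NumPy:

```python
import numpy as np

d1 = np.array([[1, 2], [1, 2]])
d2 = np.array([[3, 4], [3, 4]])

# axes=1 sums over one shared axis, i.e. matrix multiplication for 2-D inputs
td = np.tensordot(d1, d2, axes=1)
```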
# + [markdown] colab_type="text" id="GGRdSkWBXmQD"
# # Sampling
# + colab={} colab_type="code" id="Qr3WL7zXW9-C"
norm = tf.random.normal(shape=(1000, 1), mean=0., stddev=1.)
# + colab={"base_uri": "https://localhost:8080/", "height": 495} colab_type="code" id="6hOhcsXBenDb" outputId="66f2a39e-acab-458a-840c-4ef9fe1f172a"
sns.distplot(norm);
# + colab={} colab_type="code" id="22yBFunneuL8"
unif = tf.random.uniform(shape=(1000, 1), minval=0, maxval=100)
# + colab={"base_uri": "https://localhost:8080/", "height": 495} colab_type="code" id="EsIO11GBgBtW" outputId="ab408f4e-bbb6-4f11-d4ca-e33dba42f1d1"
sns.distplot(unif);
# + colab={} colab_type="code" id="Kx49h7NNgCnk"
pois = tf.random.poisson(shape=(1000, 1), lam=0.8)
# + colab={"base_uri": "https://localhost:8080/", "height": 495} colab_type="code" id="z-57Vy-Ugnfo" outputId="668c59e4-b7a7-47fd-e64f-a00444515fe5"
sns.distplot(pois);
# + colab={} colab_type="code" id="j_6D65TIgqXv"
gam = tf.random.gamma(shape=(1000, 1), alpha=0.8)
# + colab={"base_uri": "https://localhost:8080/", "height": 495} colab_type="code" id="N0c0LoXHhhLu" outputId="c64f9997-b4d7-4961-c757-a4d4b328ccaa"
sns.distplot(gam);
# + [markdown] colab_type="text" id="No7RCrN1iB4Y"
# # Linear Regression
#
# https://vincentarelbundock.github.io/Rdatasets/datasets.html
# + colab={} colab_type="code" id="vwiGDIgskLLu"
data = tf.constant([
[4,2],
[4,10],
[7,4],
[7,22],
[8,16],
[9,10],
[10,18],
[10,26],
[10,34],
[11,17],
[11,28],
[12,14],
[12,20],
[12,24],
[12,28],
[13,26],
[13,34],
[13,34],
[13,46],
[14,26],
[14,36],
[14,60],
[14,80],
[15,20],
[15,26],
[15,54],
[16,32],
[16,40],
[17,32],
[17,40],
[17,50],
[18,42],
[18,56],
[18,76],
[18,84],
[19,36],
[19,46],
[19,68],
[20,32],
[20,48],
[20,52],
[20,56],
[20,64],
[22,66],
[23,54],
[24,70],
[24,92],
[24,93],
[24,120],
[25,85]
])
# + colab={} colab_type="code" id="Hycczf3AlYVO"
speed = data[:, 0]
stopping_distance = data[:, 1]
# + colab={"base_uri": "https://localhost:8080/", "height": 516} colab_type="code" id="qE9noO1ZlvX-" outputId="2781a5df-4ac5-45eb-8cc4-ce899944255e"
sns.scatterplot(speed, stopping_distance);
plt.xlabel("speed")
plt.ylabel("stopping distance");
# + colab={} colab_type="code" id="2bgTfH77nEpF"
lin_reg = keras.Sequential([
layers.Dense(1, activation='linear', input_shape=[1]),
])
# + colab={} colab_type="code" id="SPNUQvXfnRQJ"
optimizer = tf.keras.optimizers.RMSprop(0.001)
lin_reg.compile(loss='mse',
optimizer=optimizer,
metrics=['mse'])
# + colab={} colab_type="code" id="aiCVkGiAnYSn"
history = lin_reg.fit(
x=speed,
y=stopping_distance,
shuffle=True,
epochs=1000,
validation_split=0.2,
verbose=0
)
# + colab={} colab_type="code" id="_pjY0jBHvi3K"
def plot_error(history):
hist = pd.DataFrame(history.history)
hist['epoch'] = history.epoch
plt.figure()
plt.xlabel('Epoch')
plt.ylabel('Mean Square Error')
plt.plot(hist['epoch'], hist['mse'],
label='Train Error')
plt.plot(hist['epoch'], hist['val_mse'],
label = 'Val Error')
plt.legend()
plt.show()
# + colab={"base_uri": "https://localhost:8080/", "height": 516} colab_type="code" id="UK5kkmornvPd" outputId="833cfbc5-e8c7-4034-bbf1-13d6f1eca544"
plot_error(history)
# + colab={"base_uri": "https://localhost:8080/", "height": 187} colab_type="code" id="zuNE0Uq4calR" outputId="91d450ae-374e-4221-b916-1823f555598f"
lin_reg.summary()
# + colab={} colab_type="code" id="gmfy7I6Dcffh"
weights = lin_reg.get_layer("dense").get_weights()  # returns [kernel, bias]
slope = weights[0][0][0]
intercept = weights[1][0]
# + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" id="wJGROyipospK" outputId="8d67751c-0f8b-4411-e064-8ca819a0d50d"
slope
# + [markdown] colab_type="text" id="Er7Hgz8yiDrd"
# # Simple Neural Network
# + colab={} colab_type="code" id="3sk9PKFB0yhc"
def build_neural_net():
net = keras.Sequential([
layers.Dense(32, activation='relu', input_shape=[1]),
layers.Dense(16, activation='relu'),
layers.Dense(1),
])
optimizer = tf.keras.optimizers.RMSprop(0.001)
    net.compile(loss='mse',
                optimizer=optimizer,
                metrics=['mse'])  # accuracy is not a meaningful metric for regression
return net
# + colab={} colab_type="code" id="z4PJe5Cy09--"
net = build_neural_net()
# + colab={} colab_type="code" id="DfYOc5PArJ7C"
history = net.fit(
x=speed,
y=stopping_distance,
shuffle=True,
epochs=1000,
validation_split=0.2,
verbose=0
)
# + colab={"base_uri": "https://localhost:8080/", "height": 516} colab_type="code" id="gOGQz5z5rN1Q" outputId="f6909c86-bf8e-4e84-9660-6a96d96684c1"
plot_error(history)
# + [markdown] colab_type="text" id="_k7_35uW3TOC"
# ## Stop training early
# + colab={} colab_type="code" id="RzP_YuZx8-X0"
early_stop = keras.callbacks.EarlyStopping(
monitor='val_loss',
patience=10
)
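`EarlyStopping` with `patience=10` halts training once `val_loss` has gone 10 consecutive epochs without improving on its best value so far. A rough sketch of that rule (a simplification — the real callback also handles `min_delta`, `baseline`, and `restore_best_weights`):

```python
def should_stop(val_losses, patience=10):
    # stop when the last `patience` epochs failed to beat the best loss seen before them
    if len(val_losses) <= patience:
        return False
    best_before = min(val_losses[:-patience])
    return min(val_losses[-patience:]) >= best_before
```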
# + colab={} colab_type="code" id="-d3xYvcMv1ls"
net = build_neural_net()
history = net.fit(
x=speed,
y=stopping_distance,
shuffle=True,
epochs=1000,
validation_split=0.2,
verbose=0,
callbacks=[early_stop]
)
# + colab={"base_uri": "https://localhost:8080/", "height": 516} colab_type="code" id="wIdjFoMJ1cSV" outputId="66ba5e0c-9520-48b0-c941-e11a113da369"
plot_error(history)
# + [markdown] colab_type="text" id="ziq6mTc2iFag"
# # Save/Restore Model
# + colab={} colab_type="code" id="1itOlR6kiEsB"
net.save('simple_net.h5')
# + colab={} colab_type="code" id="0t_jb2co4VHR"
simple_net = keras.models.load_model('simple_net.h5')
# + colab={"base_uri": "https://localhost:8080/", "height": 255} colab_type="code" id="P2NOs0xV4bRe" outputId="49220ba3-7e38-41cc-f29e-6f84379c78aa"
simple_net.summary()
| 00.getting_started.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
# chatbot homework
# -
from time import sleep
def print_words(sentence):
    for word in sentence.split():
        for l in word:
            sleep(.01)
            print(l, end='', flush=True)  # flush so each letter appears immediately
        print(end=' ')
persona1 = "how are you?"
print_words(persona1)
prompt = "My name is Marco"
estoy_bien = input(prompt)  # store the user's reply
respuesta2 = "Nice to meet you"
print(respuesta2)
| Chatbox.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # 1. Univariate (Single-Feature) Linear Regression
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from mpl_toolkits import mplot3d
# %matplotlib notebook
# ## 1.1 Load the txt data and draw a scatter plot
data = pd.read_csv("ex1data1.txt",names = ['population','profit'])
data.head()
x = data.population
y = data.profit
plt.scatter(x,y)
# ## 1.2 Cost function
"Initialization: every variable below is an np.matrix"
df = data.copy()  # insert mutates the frame in place, so copy first (pitfall 1)
df.insert(0,"one",1)
X = df.iloc[:,0:df.shape[1]-1]
y = df.iloc[:,df.shape[1]-1:df.shape[1]]  # df.iloc[:,-1] is a 1-D Series; even reshape(97,1) won't fix it (pitfall 2)
theta = np.zeros(2)
y = np.matrix(y)
X = np.matrix(X)
x = np.matrix(x)
x = x.T  # easy to mix up row and column vectors (pitfall 3)
theta = np.matrix(theta)
H = X*(theta.T)
X,y,x,H,theta
def costfunction(X,y,H):
n = np.power((H-y),2)
return np.sum(n)/(2*len(X))
costfunction(X,y,H)
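The cost being minimized is J(θ) = (1/2m) Σᵢ (hᵢ − yᵢ)²; the same formula with plain NumPy arrays, checked on toy data where the fit is exact:

```python
import numpy as np

def cost(X, y, theta):
    # J(theta) = 1/(2m) * sum((X @ theta - y)^2)
    m = len(X)
    return np.sum((X @ theta - y) ** 2) / (2 * m)

X_toy = np.array([[1.0, 1.0], [1.0, 2.0]])  # first column is the bias term
y_toy = np.array([2.0, 3.0])
```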
# ## 1.3 Batch gradient descent
alpha = 0.01
m = len(X)
times = 1000
def gradient_descent(theta,X,y,alpha,m,H,times):
thetas_0 = [0]
thetas_1 = [0]
cost = [costfunction(X,y,H)]
for i in range(times):
H = X*theta.T
erro = H - y
temp = np.matrix([0,0])
        temp = theta - erro.T * X * alpha/m  # the matrix form does all the work; update via a temp variable (pitfall 4)
thetas_0.append(temp[0,0])
thetas_1.append(temp[0,1])
theta = temp
cost.append(costfunction(X,y,H))
return theta,cost,thetas_0,thetas_1
final_theta,cost,thetas_0,thetas_1= gradient_descent(theta,X,y,alpha,m,H,times)
final_theta,cost,thetas_0,thetas_1
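The vectorized update inside `gradient_descent` is θ ← θ − (α/m) Xᵀ(Xθ − y); one step of that rule with plain NumPy arrays:

```python
import numpy as np

X = np.array([[1.0, 1.0], [1.0, 2.0], [1.0, 3.0]])
y = np.array([1.0, 2.0, 3.0])
theta = np.zeros(2)
alpha, m = 0.1, len(X)

# one batch gradient-descent step: theta <- theta - (alpha/m) * X^T (X @ theta - y)
grad = X.T @ (X @ theta - y) / m
theta = theta - alpha * grad
```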
# ## 1.4 Plotting
fig,(ax1,ax2) = plt.subplots(2,1)
H = final_theta * X.T
H = H.T
ax1.plot(x,H,c = 'r',label = 'Prediction')
ax1.scatter(data.population,data.profit,label = 'data')
ax1.legend(loc = 2)
ax2.plot(cost)
ax1.set_xlabel('population')
ax1.set_ylabel('profit')
ax1.set_title('relationship between population and profit'.title())
ax2.set_xlabel('times')
ax2.set_ylabel('cost')
ax2.set_title('how does cost changed'.title())
fig.subplots_adjust(hspace = 0.8)
# ## 1.5 3D surface and contour plot of the cost function (not sure this is right)
fig = plt.figure()  # the 1x2 subplot grid was never used; a single 3D axes is enough
ax = plt.axes(projection='3d')
xline = np.arange(-10,21,0.2)
yline = np.arange(-10,21,0.2)
XX, YY = np.meshgrid(xline, yline)
t_theta = np.matrix(np.zeros((1,2)))
ZZ = np.zeros((155,155))
for i in range(155):
for j in range(155):
t_theta[:,0] = XX[i,j]
t_theta[:,1] = YY[i,j]
H = X*t_theta.T
ZZ[i,j] = costfunction(X,y,H)
ax.plot_surface(XX,YY,ZZ,cstride=1,rstride=1,cmap = 'rainbow')
ax.contourf(XX,YY,ZZ,zdir = 'z',offset = -10 , cmap = 'rainbow')
# # 2. Multivariate (Multi-Feature) Linear Regression
"""Load the data"""
data2 = pd.read_csv('ex1data2.txt',names=['square', 'bedrooms', 'price'])
"""Feature scaling"""
x = data2.iloc[:,0:data2.shape[1]-1]
y = data2.iloc[:,data2.shape[1]-1:]
x = (x - np.average(x,axis = 0))/np.std(x,axis = 0,ddof = 1)  # ddof=1 gives the sample (unbiased) standard deviation
y = (y - np.average(y,axis = 0))/np.std(y,axis = 0,ddof = 1)
data2.head()
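The scaling above is per-column z-score standardization with the sample standard deviation (`ddof=1`); after it, every column has mean 0 and sample standard deviation 1:

```python
import numpy as np

# toy square-footage / bedrooms columns
x = np.array([[2104.0, 3.0], [1600.0, 3.0], [2400.0, 4.0]])

# z-score per column, using the sample standard deviation (ddof=1)
x_scaled = (x - x.mean(axis=0)) / x.std(axis=0, ddof=1)
```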
"""Matrix form"""
X = x.copy()
X.insert(0,'one',1)
X = np.matrix(X)
y = np.matrix(y)
theta = np.matrix(np.zeros(X.shape[1]))
H = X*theta.T
X.shape,y.shape,theta.shape,H.shape
"""Gradient descent"""
m = len(X)  # data2 has 47 rows; reusing m = 97 from part 1 would be wrong
final_theta,cost,thetas_0,thetas_1= gradient_descent(theta,X,y,alpha,m,H,times)
final_theta,cost
final_theta
"""Plot; I couldn't get the fitted line to draw... a precision issue?"""
fig,ax = plt.subplots()
ax = plt.axes(projection='3d')
ax.scatter(x.iloc[:,0],x.iloc[:,1],y,label = 'data')
x_1 = np.array(X[:,1]).reshape(47)
x_2 = np.array(X[:,2]).reshape(47)
ax.plot3D(x_1,x_2,x_1*final_theta[0,1]+x_2*final_theta[0,2]+final_theta[0,0],label = 'prediction',c = 'r')
### unsure about plot3D's arguments: must one or two of the three coordinates be 1-D arrays, with the rest inferred from them?
ax.legend(loc = 2)
"""Convergence"""
fig,ax = plt.subplots()
ax.set_xlabel('times')
ax.set_ylabel('cost')
ax.set_title('how does cost changed(alpha = 0.01)'.title())
ax.plot(cost)
# ### Learning rate
alphas = [1,0.1,0.01,0.001,0.0001,0.00001]
fig,ax = plt.subplots()
ax.set_xlabel('times')
ax.set_ylabel('cost')
ax.set_title('how does cost changed'.title())
for alpha in alphas:
final_theta,cost,thetas_0,thetas_1= gradient_descent(theta,X,y,alpha,m,H,times)
ax.plot(cost,label = alpha)
ax.legend(loc=1)
# # 3. Normal Equation
"""Load the data"""
data3 = pd.read_csv('ex1data2.txt',names=['square', 'bedrooms', 'price'])
"""Feature scaling"""
x = data3.iloc[:,0:data3.shape[1]-1]
y = data3.iloc[:,data3.shape[1]-1:]
x = (x - np.average(x,axis = 0))/np.std(x,axis = 0,ddof = 1)
y = (y - np.average(y,axis = 0))/np.std(y,axis = 0,ddof = 1)
data3.head()
"""Matrix form"""
X = x.copy()
X.insert(0,'one',1)
X = np.matrix(X)
y = np.matrix(y)
theta = np.matrix(np.zeros(X.shape[1]))
def normalEqn(X, y):
inner = X.T*X
theta = inner.I * X.T * y
return theta
final_theta = normalEqn(X, y).T
final_theta
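The closed-form solution θ = (XᵀX)⁻¹ Xᵀ y should agree with a library least-squares solver; a quick NumPy sanity check on synthetic data:

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.column_stack([np.ones(20), rng.normal(size=(20, 2))])  # bias column + 2 features
y = X @ np.array([1.0, 2.0, -0.5]) + 0.01 * rng.normal(size=20)

theta_ne = np.linalg.inv(X.T @ X) @ X.T @ y          # normal equation
theta_ls, *_ = np.linalg.lstsq(X, y, rcond=None)     # library least squares
```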
"""Does the fit look good?"""
fig,ax = plt.subplots()
ax = plt.axes(projection='3d')
ax.scatter(x.iloc[:,0],x.iloc[:,1],y,label = 'data')
x_1 = np.array(X[:,1]).reshape(47)
x_2 = np.array(X[:,2]).reshape(47)
ax.plot3D(x_1,x_2,x_1*final_theta[0,1]+x_2*final_theta[0,2]+final_theta[0,0],c = 'r',label = 'prediction')
ax.legend(loc = 2)
| code-homework/ML/ex1_Linear Regression/.ipynb_checkpoints/ex1-checkpoint.ipynb |
-- -*- coding: utf-8 -*-
-- ---
-- jupyter:
-- jupytext:
-- text_representation:
-- extension: .hs
-- format_name: light
-- format_version: '1.5'
-- jupytext_version: 1.14.4
-- kernelspec:
-- display_name: Haskell
-- language: haskell
-- name: haskell
-- ---
-- # Chapter 5: Types
-- ## Multiple Choice
-- `[a]` is a list whose elements are all of some type `a`
-- `[[a]] -> [a]` could take a list of strings as an argument
-- `[a] -> Int -> a` returns one element of type a from a list
-- `(a, b) -> a` takes a tuple argument and returns the first value
-- ## Determine the type
:type (* 9) 6
:type head [(0,"doge"),(1,"kitteh")]
:type head [(0 :: Integer ,"doge"),(1,"kitteh")]
:type if False then True else False
:type length [1, 2, 3, 4, 5]
:type (length [1, 2, 3, 4]) > (length "TACOCAT")
-- ---
x = 5
y = x + 5
w = y * 10
:type w
x = 5
y = x + 5
z y = y * 10
:type z
x = 5
y = x + 5
f = 4 / y
:type f
x = "Julie"
y = " <3 "
z = "Haskell"
f = x ++ y ++ z
:type f
-- ## Does it compile?
bigNum = (^) 5 -- the original `bigNum = (^) 5 10` makes bigNum a number, so `bigNum 10` below would not compile
:type bigNum
wahoo = bigNum 10
x = print
y = print "woohoo!"
z = x "hello world"
z
a = (+)
b = 5
c = a b 10 -- the original `c = b 10` applies the number 5 as a function and does not compile
d = a c 200
a = 12 + b
b = 10000 * c
c = 5 -- c was not in scope in the original; any numeric definition makes this compile
-- ## Type variable or specific type constructor?
-- `f :: zed -> Zed -> Blah`
--
-- - `zed` is a fully polymorphic type variable,
-- - `Zed` is a concrete type constructor,
-- - `Blah` is a concrete type constructor.
-- `f :: Enum b => a -> b -> C`
--
-- - `a` is a fully polymorphic type variable,
-- - `b` is a constrained polymorphic type variable,
-- - `C` is a concrete type constructor.
-- `f :: f -> g -> C`
--
-- - `f` is a fully polymorphic type variable, as well as a name for defined function. Those names do not collide as they are in the different namespaces (one is a name in the type namespace, another is a name in the value namespace).
-- - `g` is a fully polymorphic type variable.
-- - `C` is a concrete type constructor.
-- ## Write a type signature
-- +
f (x:_) = x
:type f
functionH :: [a] -> a
functionH = f
-- +
f x y = if (x > y) then True else False
:type f
functionC :: Ord a => a -> a -> Bool
functionC = f
-- +
f (x, y) = y
:type f
functionS :: (a, b) -> b
functionS = f
-- -
-- ## Given a type, write the function
-- +
i :: a -> a
i x = x
i 12
-- +
c :: a -> b -> a
c x y = x
c 10 2
-- +
c'' :: b -> a -> b
c'' x y = x
c'' 10 2
-- +
c' :: a -> b -> b
c' x y = y
c' 10 2
-- +
r :: [a] -> [a]
r = reverse
r "123"
-- -
co :: (b -> c) -> (a -> b) -> a -> c
co = (.)
a :: (a -> c) -> a -> a
a _ x = x
a' :: (a -> b) -> a -> b
a' x = x
-- ## Fix it
-- +
-- module Sing where
fstString :: String -> String
fstString x = x ++ " in the rain"
sndString :: String -> String
sndString x = x ++ " over the rainbow"
sing = if x > y then fstString x else sndString y where x = "Singin"; y = "Somewhere"
sing
-- -
sing2 = if x < y then fstString x else sndString y where x = "Singin"; y = "Somewhere"
sing2
-- +
-- module Arith3Broken where
main :: IO ()
main = do
print $ 1 + 2
print 10
print (negate (-1))
print ((+) 0 blah)
where blah = negate 1
-- -
-- ## Type-Kwon-Do
-- +
f :: Int -> String
f = undefined
g :: String -> Char
g = undefined
h :: Int -> Char
h = g . f
-- +
data A
data B
data C
q :: A -> B
q = undefined
w :: B -> C
w = undefined
e :: A -> C
e = w . q
-- +
data X
data Y
data Z
xz :: X -> Z
xz = undefined
yz :: Y -> Z
yz = undefined
xform :: (X, Y) -> (Z, Z)
xform (x, y) = (xz x, yz y)
-- +
munge :: (x -> y) -> (y -> (w, z)) -> x -> w
munge x2y y2t x = fst (y2t (x2y x))
munge' :: (x -> y) -> (y -> (w, z)) -> x -> w
munge' x2y y2t = fst . y2t . x2y
| src/haskell-programming-from-first-principles/05-types.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import numpy as np
import pandas as pd
import pydicom as dicom
import yaml
import os
import pathlib
import shutil
import cv2
from tqdm import tqdm
from sklearn.model_selection import train_test_split
def build_dataset(cfg):
'''
Build a dataset of filenames and labels according to the type of classification
:param cfg: Project config dictionary
:return: DataFrame of file names of examples and corresponding class labels
'''
# Get paths of raw datasets to be included
mila_data_path = cfg['PATHS']['MILA_DATA']
fig1_data_path = cfg['PATHS']['FIGURE1_DATA']
rsna_data_path = cfg['PATHS']['RSNA_DATA']
# Assemble filenames comprising Mila dataset
mila_df = pd.read_csv(mila_data_path + 'metadata.csv')
mila_df['filename'] = mila_data_path.split('\\')[-2] + '\\images\\' + mila_df['filename'].astype(str)
mila_views_cxrs_df = (mila_df['view'].str.contains('|'.join(cfg['DATA']['VIEWS']))) # Select desired X-ray views
mila_covid_pts_df = (mila_df['finding'] == 'COVID-19')
mila_covid_views_df = mila_df[mila_covid_pts_df & mila_views_cxrs_df] # Images for patients diagnosed with COVID-19
# Assemble filenames comprising Figure 1 dataset
fig1_df = pd.read_csv(fig1_data_path + 'metadata.csv', encoding='ISO-8859-1')
fig1_df['filename'] = ''
for i, row in fig1_df.iterrows():
if os.path.exists(fig1_data_path + 'images\\' + fig1_df.loc[i, 'patientid'] + '.jpg'):
fig1_df.loc[i, 'filename'] = fig1_data_path.split('\\')[-2] + '\\images\\' + fig1_df.loc[i, 'patientid'] + '.jpg'
else:
fig1_df.loc[i, 'filename'] = fig1_data_path.split('\\')[-2] + '\\images\\' + fig1_df.loc[i, 'patientid'] + '.png'
fig1_df['view'].fillna('PA or AP', inplace=True) # All images in this dataset are either AP or PA
fig1_views_cxrs_df = (fig1_df['view'].str.contains('|'.join(cfg['DATA']['VIEWS']))) # Select desired X-ray views
fig1_covid_pts_df = (fig1_df['finding'] == 'COVID-19')
fig1_covid_views_df = fig1_df[fig1_covid_pts_df & fig1_views_cxrs_df] # Images for patients diagnosed COVID-19
# Assemble filenames comprising RSNA dataset
rsna_metadata_path = rsna_data_path + 'stage_2_train_labels.csv'
rsna_df = pd.read_csv(rsna_metadata_path)
num_rsna_imgs = cfg['DATA']['NUM_RSNA_IMGS']
rsna_normal_df = rsna_df[rsna_df['Target'] == 0]
rsna_pneum_df = rsna_df[rsna_df['Target'] == 1]
# Convert dicom files of CXRs with no findings to jpg if not done already in a previous run. Select desired views.
file_counter = 0
normal_idxs = []
for df_idx in rsna_normal_df.index.values.tolist():
filename = rsna_normal_df.loc[df_idx]['patientId']
ds = dicom.dcmread(os.path.join(rsna_data_path + 'stage_2_train_images\\' + filename + '.dcm'))
if any(view in ds.SeriesDescription.split(' ')[1] for view in cfg['DATA']['VIEWS']): # Select desired X-ray views
if not os.path.exists(rsna_data_path + filename + '.jpg'):
cv2.imwrite(os.path.join(rsna_data_path + filename + '.jpg'), ds.pixel_array) # Save as .jpg
normal_idxs.append(df_idx)
file_counter += 1
if file_counter >= num_rsna_imgs // 2:
break
rsna_normal_df = rsna_normal_df.loc[normal_idxs]
# Convert dicom files of CXRs with pneumonia to jpg if not done already in a previous run. Select desired views.
file_counter = 0
pneum_idxs = []
num_remaining = num_rsna_imgs - num_rsna_imgs // 2
for df_idx in rsna_pneum_df.index.values.tolist():
filename = rsna_pneum_df.loc[df_idx]['patientId']
ds = dicom.dcmread(os.path.join(rsna_data_path + 'stage_2_train_images\\' + filename + '.dcm'))
if any(view in ds.SeriesDescription.split(' ')[1] for view in cfg['DATA']['VIEWS']): # Select desired X-ray views
if not os.path.exists(rsna_data_path + filename + '.jpg'):
cv2.imwrite(os.path.join(rsna_data_path + filename + '.jpg'), ds.pixel_array) # Save as .jpg
pneum_idxs.append(df_idx)
file_counter += 1
if file_counter >= num_remaining:
break
rsna_pneum_df = rsna_pneum_df.loc[pneum_idxs]
mode = cfg['TRAIN']['CLASS_MODE']
n_classes = len(cfg['DATA']['CLASSES'])
class_dict = {cfg['DATA']['CLASSES'][i]: i for i in range(n_classes)} # Map class name to number
label_dict = {i: cfg['DATA']['CLASSES'][i] for i in range(n_classes)} # Map class name to number
if mode == 'binary':
mila_covid_views_df['label'] = 1 # Mila images with COVID-19 diagnosis
mila_other_views_df = mila_df[~mila_covid_pts_df & mila_views_cxrs_df]
mila_other_views_df['label'] = 0 # Mila images with alternative diagnoses
fig1_covid_views_df['label'] = 1 # Figure 1 images with COVID-19 diagnosis
file_df = pd.concat([mila_covid_views_df[['filename', 'label']], mila_other_views_df[['filename', 'label']],
fig1_covid_views_df[['filename', 'label']]], axis=0)
rsna_df = pd.concat([rsna_normal_df, rsna_pneum_df], axis=0)
rsna_filenames = rsna_data_path.split('\\')[-2] + '\\' + rsna_df['patientId'].astype(str) + '.jpg'
rsna_file_df = pd.DataFrame({'filename': rsna_filenames, 'label': 0})
file_df = pd.concat([file_df, rsna_file_df], axis=0) # Combine both datasets
else:
mila_covid_views_df['label'] = class_dict['COVID-19']
mila_views_pneum_df = mila_df[mila_df['finding'].isin(['SARS', 'Steptococcus', 'MERS', 'Legionella', 'Klebsiella',
'Chlamydophila', 'Pneumocystis']) & mila_views_cxrs_df]
mila_views_pneum_df['label'] = class_dict['other_pneumonia'] # Mila CXRs with other peumonias
mila_views_normal_df = mila_df[mila_df['finding'].isin(['No finding']) & mila_views_cxrs_df]
mila_views_normal_df['label'] = class_dict['normal'] # Mila CXRs with no finding
fig1_covid_views_df['label'] = class_dict['COVID-19'] # Figure 1 CXRs with COVID-19 finding
file_df = pd.concat([mila_covid_views_df[['filename', 'label']], mila_views_pneum_df[['filename', 'label']],
mila_views_normal_df[['filename', 'label']], fig1_covid_views_df[['filename', 'label']]], axis=0)
# Organize some files from RSNA dataset into "normal", and "pneumonia" XRs
rsna_normal_filenames = rsna_data_path.split('\\')[-2] + '\\' + rsna_normal_df['patientId'].astype(str) + '.jpg'
rsna_pneum_filenames = rsna_data_path.split('\\')[-2] + '\\' + rsna_pneum_df['patientId'].astype(str) + '.jpg'
rsna_normal_file_df = pd.DataFrame({'filename': rsna_normal_filenames, 'label': class_dict['normal']})
rsna_pneum_file_df = pd.DataFrame({'filename': rsna_pneum_filenames, 'label': class_dict['other_pneumonia']})
rsna_file_df = pd.concat([rsna_normal_file_df, rsna_pneum_file_df], axis=0)
file_df = pd.concat([file_df, rsna_file_df], axis=0) # Combine both datasets
file_df['label_str'] = file_df['label'].map(label_dict) # Add column for string representation of label
return file_df
def remove_text(img):
'''
Attempts to remove bright textual artifacts from X-ray images. For example, many images indicate the right side of
the body with a white 'R'. Works only for very bright text.
:param img: Numpy array of image
:return: Array of image with (ideally) any characters removed and inpainted
'''
mask = cv2.threshold(img, 230, 255, cv2.THRESH_BINARY)[1][:, :, 0].astype(np.uint8)
img = img.astype(np.uint8)
result = cv2.inpaint(img, mask, 10, cv2.INPAINT_NS).astype(np.float32)
return result
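`remove_text` first builds a binary mask of near-white pixels (intensity above 230) and then inpaints the masked region; the masking step alone, without OpenCV:

```python
import numpy as np

img = np.array([[10, 240], [250, 100]], dtype=np.uint8)

# pixels brighter than 230 -> 255, everything else -> 0,
# mirroring cv2.threshold(img, 230, 255, cv2.THRESH_BINARY)
mask = np.where(img > 230, 255, 0).astype(np.uint8)
```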
def preprocess(cfg=None):
'''
Preprocess and partition image data. Assemble all image file paths and partition into training, validation and
test sets. Copy raw images to folders for training, validation and test sets.
:param cfg: Optional parameter to set your own config object.
'''
if cfg is None:
cfg = yaml.full_load(open("C:\\Users\\PaulDS3\\Downloads\\project\\covid-cxr\\config.yml", 'r')) # Load config data
# Build dataset based on type of classification
file_df = build_dataset(cfg)
# Split dataset into train, val and test sets
val_split = cfg['DATA']['VAL_SPLIT']
test_split = cfg['DATA']['TEST_SPLIT']
file_df_train, file_df_test = train_test_split(file_df, test_size=test_split, stratify=file_df['label'])
relative_val_split = val_split / (1 - test_split) # Calculate fraction of train_df to be used for validation
file_df_train, file_df_val = train_test_split(file_df_train, test_size=relative_val_split,
stratify=file_df_train['label'])
# Save training, validation and test sets
if not os.path.exists(cfg['PATHS']['PROCESSED_DATA']):
os.makedirs(cfg['PATHS']['PROCESSED_DATA'])
file_df_train.to_csv(cfg['PATHS']['TRAIN_SET'])
file_df_val.to_csv(cfg['PATHS']['VAL_SET'])
file_df_test.to_csv(cfg['PATHS']['TEST_SET'])
return
if __name__ == '__main__':
preprocess()
# -
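Because the test set is carved off first, the validation fraction passed to the second `train_test_split` has to be rescaled: taking `val_split / (1 - test_split)` of the remaining training portion yields exactly `val_split` of the full dataset. A quick check of the arithmetic:

```python
val_split, test_split = 0.1, 0.2
relative_val_split = val_split / (1 - test_split)

n = 1000
n_test = n * test_split                    # 200 examples go to test
n_val = (n - n_test) * relative_val_split  # 800 * 0.125 = 100 examples go to validation
```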
| Part-1/src/data/.ipynb_checkpoints/preprocess-checkpoint.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import os
import cv2
import sheet_id.deepmatching_cpu.deepmatching as dm
from tqdm import tqdm_notebook as tqdm
from sheet_id.utils.base_utils import loadSettings, loadScoresDataset, loadCellphoneScoresDataset, \
generateScoresDB, calculateMRR
from matplotlib.patches import ConnectionPatch
from multiprocessing import Pool, freeze_support
# -
settings = loadSettings()
scores_path = loadScoresDataset(settings['DB_PATH'])
cellphone_scores_path = loadCellphoneScoresDataset(settings['CELLPHONE_IMG_PATH'])
db = generateScoresDB(scores_path)
def imageDistance(img_a, img_b):
_, img1 = cv2.threshold(img_a,127,255,cv2.THRESH_BINARY)
_, img2 = cv2.threshold(img_b,127,255,cv2.THRESH_BINARY)
if img1.shape[0] >= 1000:
scale1 = 1.0 / (img1.shape[0] // 1000)
img1 = cv2.resize(img1, (0,0), fx=scale1, fy=scale1)
if img2.shape[0] >= 1000:
scale2 = 1.0 / (img2.shape[0] // 1000)
img2 = cv2.resize(img2, (0,0), fx=scale2, fy=scale2)
img1 = cv2.cvtColor(img1, cv2.COLOR_GRAY2RGB)
img2 = cv2.cvtColor(img2, cv2.COLOR_GRAY2RGB)
matches = dm.deepmatching(img1, img2, '-downscale 2 -nt 12' )
score = len(matches)
distance = -score
return distance
def search(query_img, db):
"""
Search the query from the db and return the list of predictions ranked by the score in
the form of
(ref_name, distance)
"""
scoreRanking = []
for ref in tqdm(db):
ref_img = db[ref]
dist = imageDistance(query_img, ref_img)
scoreRanking.append((ref, dist))
return sorted(scoreRanking, key=lambda x: x[1]) # Sort based on distance in increasing order
# + code_folding=[]
def findRank(predictions, ref):
"""
Find the rank of ref in the list of predictions
"""
rank = 1
for (prediction, dist) in predictions:
if prediction == ref:
return rank
rank += 1
raise ValueError('Ref not found')
# -
ranks = []
for i in tqdm(range(len(cellphone_scores_path))):
img = cv2.imread(cellphone_scores_path[i], 0)
fileNameNoExt = os.path.splitext(os.path.split(cellphone_scores_path[i])[1])[0]
predictions = search(img, db)
rank = findRank(predictions, fileNameNoExt)
ranks.append(rank)
print("Query {:} [{:}]: {:} [top10={:}]".format(i+1, fileNameNoExt, rank, predictions[:10]))
MRR = calculateMRR(ranks)
print("MRR: {:}".format(MRR))
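Assuming `calculateMRR` implements the standard definition, MRR is the mean of 1/rank over all queries — a query whose correct reference ranks first contributes 1, rank 4 contributes 0.25. A minimal sketch:

```python
def mean_reciprocal_rank(ranks):
    # average of reciprocal ranks over all queries
    return sum(1.0 / r for r in ranks) / len(ranks)
```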
# + [markdown] heading_collapsed=true
# # Raw Results
# + [markdown] hidden=true
# ```
# Query 1 [01106_page_45]: 1 [top10=[('01106_page_45', -1623), ('415756_page_27', -515), ('295562_page_85', -509), ('515927_page_100', -498), ('514286_page_110', -456), ('458322_page_23', -448), ('517023_page_108', -445), ('01749_page_1', -443), ('330636_page_115', -431), ('01380_page_5', -430)]]
#
# Query 2 [01380_page_5]: 1 [top10=[('01380_page_5', -1537), ('458322_page_23', -523), ('238622_page_16', -520), ('01749_page_1', -519), ('433671_page_7', -491), ('57857_page_22', -477), ('295562_page_85', -466), ('330636_page_115', -463), ('279665_page_680', -458), ('16314_page_29', -455)]]
#
# Query 3 [01685_page_24]: 1 [top10=[('01685_page_24', -1632), ('61784_page_40', -632), ('241869_page_42', -607), ('371085_page_36', -588), ('340340_page_96', -556), ('295562_page_85', -545), ('62251_page_32', -539), ('378078_page_75', -530), ('517423_page_110', -523), ('03031_page_19', -520)]]
#
# Query 4 [01749_page_1]: 1 [top10=[('01749_page_1', -1588), ('305764_page_88', -443), ('371085_page_36', -442), ('35927_page_30', -439), ('16314_page_29', -436), ('01380_page_5', -434), ('238622_page_16', -414), ('27475_page_7', -409), ('02331_page_69', -403), ('63473_page_83', -403)]]
#
# Query 5 [01923_page_10]: 1 [top10=[('01923_page_10', -1071), ('84124_page_22', -603), ('01749_page_1', -566), ('16314_page_29', -526), ('57857_page_22', -522), ('234233_page_19', -519), ('295562_page_85', -518), ('335945_page_113', -518), ('54136_page_56', -495), ('331259_page_26', -492)]]
#
# Query 6 [01938_page_1]: 1 [top10=[('01938_page_1', -1560), ('55127_page_2', -539), ('50532_page_1', -518), ('247445_page_68', -517), ('172491_page_2', -504), ('62251_page_32', -500), ('27475_page_7', -498), ('50754_page_1', -489), ('43238_page_23', -486), ('52189_page_6', -484)]]
#
# Query 7 [02235_page_77]: 1 [top10=[('02235_page_77', -1419), ('335945_page_113', -572), ('234233_page_19', -559), ('01380_page_5', -495), ('433671_page_7', -494), ('340340_page_96', -488), ('331259_page_26', -486), ('61784_page_40', -475), ('80125_page_14', -474), ('251579_page_24', -464)]]
#
# Query 8 [02267_page_2]: 1 [top10=[('02267_page_2', -1212), ('241869_page_42', -603), ('62251_page_32', -603), ('80125_page_14', -585), ('458322_page_23', -544), ('335945_page_113', -528), ('340340_page_96', -527), ('218668_page_32', -512), ('295562_page_85', -511), ('517023_page_108', -508)]]
#
# Query 9 [02331_page_69]: 1 [top10=[('02331_page_69', -1600), ('27475_page_7', -633), ('415756_page_27', -620), ('06563_page_14', -615), ('57857_page_22', -579), ('279985_page_53', -561), ('303258_page_144', -557), ('30533_page_104', -555), ('84124_page_22', -552), ('331259_page_26', -546)]]
#
# Query 10 [02908_page_24]: 1 [top10=[('02908_page_24', -1452), ('295562_page_85', -664), ('62251_page_32', -623), ('241869_page_42', -596), ('42626_page_7', -567), ('03031_page_19', -566), ('84124_page_22', -566), ('27475_page_7', -554), ('335945_page_113', -552), ('105117_page_27', -545)]]
#
# Query 11 [03031_page_19]: 1 [top10=[('03031_page_19', -1828), ('295562_page_85', -547), ('241869_page_42', -518), ('62251_page_32', -496), ('61784_page_40', -477), ('109509_page_7', -450), ('191419_page_23', -440), ('516486_page_56', -439), ('01380_page_5', -438), ('203699_page_15', -438)]]
#
# Query 12 [03333_page_17]: 1 [top10=[('03333_page_17', -909), ('03366_page_36', -542), ('57857_page_22', -513), ('516486_page_56', -497), ('295562_page_85', -495), ('01106_page_45', -484), ('84124_page_22', -469), ('458322_page_23', -468), ('371085_page_36', -466), ('517023_page_108', -462)]]
#
# Query 13 [03366_page_36]: 1 [top10=[('03366_page_36', -1141), ('241869_page_42', -596), ('458322_page_23', -586), ('295562_page_85', -556), ('516486_page_56', -517), ('203699_page_15', -516), ('378078_page_75', -515), ('01749_page_1', -505), ('101480_page_10', -490), ('03333_page_17', -488)]]
#
# Query 14 [03684_page_48]: 1 [top10=[('03684_page_48', -1557), ('241869_page_42', -606), ('335945_page_113', -599), ('61784_page_40', -583), ('62251_page_32', -538), ('84124_page_22', -533), ('458322_page_23', -525), ('51740_page_8', -504), ('22228_page_36', -494), ('371085_page_36', -487)]]
#
# Query 15 [06028_page_50]: 179 [top10=[('18971_page_107', -570), ('281748_page_5', -451), ('517025_page_594', -389), ('241869_page_42', -368), ('62251_page_32', -363), ('57857_page_22', -361), ('263108_page_23', -357), ('28595_page_19', -357), ('20018_page_3', -355), ('517401_page_154', -347)]]
#
# Query 16 [06563_page_14]: 1 [top10=[('06563_page_14', -675), ('01749_page_1', -521), ('02331_page_69', -504), ('458322_page_23', -459), ('433671_page_7', -456), ('269148_page_9', -450), ('238622_page_16', -432), ('27475_page_7', -431), ('01380_page_5', -421), ('415756_page_27', -413)]]
#
# Query 17 [08851_page_46]: 4 [top10=[('54136_page_56', -416), ('191419_page_23', -407), ('433671_page_7', -407), ('08851_page_46', -397), ('295562_page_85', -392), ('340340_page_96', -390), ('24577_page_142', -379), ('61784_page_40', -362), ('01380_page_5', -361), ('03031_page_19', -349)]]
#
# Query 18 [100913_page_38]: 1 [top10=[('100913_page_38', -653), ('241869_page_42', -528), ('109509_page_7', -504), ('203699_page_15', -492), ('22228_page_36', -490), ('52189_page_6', -487), ('42626_page_7', -476), ('335945_page_113', -469), ('35927_page_30', -465), ('63015_page_38', -460)]]
#
# Query 19 [101480_page_10]: 1 [top10=[('101480_page_10', -1194), ('295562_page_85', -598), ('234233_page_19', -579), ('01380_page_5', -562), ('415756_page_27', -551), ('281748_page_5', -541), ('458322_page_23', -534), ('57857_page_22', -513), ('433671_page_7', -509), ('340340_page_96', -506)]]
#
# Query 20 [105117_page_27]: 1 [top10=[('105117_page_27', -1990), ('24577_page_142', -383), ('20018_page_3', -356), ('371085_page_36', -348), ('335945_page_113', -342), ('03031_page_19', -338), ('19459_page_3', -319), ('289284_page_12', -317), ('281748_page_5', -314), ('27475_page_7', -312)]]
#
# Query 21 [105370_page_93]: 1 [top10=[('105370_page_93', -690), ('371085_page_36', -551), ('57857_page_22', -507), ('517023_page_108', -500), ('01380_page_5', -495), ('516486_page_56', -473), ('281748_page_5', -460), ('18971_page_107', -442), ('01106_page_45', -433), ('54136_page_56', -425)]]
#
# Query 22 [106106_page_41]: 1 [top10=[('106106_page_41', -803), ('289284_page_12', -295), ('105117_page_27', -285), ('24547_page_66', -244), ('19459_page_3', -230), ('24577_page_142', -221), ('293281_page_253', -221), ('01380_page_5', -209), ('331259_page_26', -200), ('58926_page_33', -198)]]
#
# Query 23 [10719_page_3]: 1 [top10=[('10719_page_3', -1333), ('281748_page_5', -408), ('371085_page_36', -407), ('50754_page_1', -379), ('203699_page_15', -372), ('47447_page_6', -369), ('18971_page_107', -356), ('335945_page_113', -353), ('80125_page_14', -347), ('241869_page_42', -343)]]
#
# Query 24 [107692_page_8]: 1 [top10=[('107692_page_8', -1530), ('234233_page_19', -556), ('335945_page_113', -549), ('371085_page_36', -540), ('03684_page_48', -530), ('61784_page_40', -524), ('203699_page_15', -509), ('340340_page_96', -503), ('03031_page_19', -502), ('01685_page_24', -501)]]
#
# Query 25 [108781_page_39]: 1 [top10=[('108781_page_39', -1434), ('371085_page_36', -477), ('191419_page_23', -470), ('54136_page_56', -468), ('57857_page_22', -436), ('433671_page_7', -432), ('01749_page_1', -428), ('62251_page_32', -428), ('281748_page_5', -427), ('295562_page_85', -425)]]
#
# Query 26 [109508_page_151]: 1 [top10=[('109508_page_151', -698), ('01106_page_45', -513), ('01380_page_5', -505), ('35927_page_30', -495), ('303258_page_144', -481), ('84124_page_22', -480), ('335945_page_113', -478), ('514870_page_183', -472), ('415756_page_27', -467), ('61784_page_40', -459)]]
#
# Query 27 [110319_page_23]: 1 [top10=[('110319_page_23', -988), ('516486_page_56', -464), ('270880_page_63', -453), ('18971_page_107', -422), ('01380_page_5', -417), ('281748_page_5', -413), ('02267_page_2', -391), ('03031_page_19', -386), ('20018_page_3', -386), ('15562_page_57', -375)]]
#
# Query 28 [114583_page_42]: 1 [top10=[('114583_page_42', -1248), ('80125_page_14', -462), ('433671_page_7', -457), ('335945_page_113', -442), ('01380_page_5', -439), ('234233_page_19', -438), ('01749_page_1', -425), ('203699_page_15', -425), ('516486_page_56', -417), ('152912_page_182', -413)]]
#
# Query 29 [114608_page_248]: 1 [top10=[('114608_page_248', -1216), ('281748_page_5', -436), ('203699_page_15', -426), ('241869_page_42', -413), ('234233_page_19', -408), ('55127_page_2', -408), ('01380_page_5', -399), ('371085_page_36', -390), ('517025_page_594', -379), ('01685_page_24', -370)]]
#
# Query 30 [114900_page_4]: 1 [top10=[('114900_page_4', -1125), ('371085_page_36', -510), ('62251_page_32', -483), ('335945_page_113', -478), ('340340_page_96', -447), ('27475_page_7', -433), ('61784_page_40', -423), ('458322_page_23', -419), ('03684_page_48', -414), ('19110_page_5', -414)]]
#
# Query 31 [12100_page_30]: 1 [top10=[('12100_page_30', -1189), ('295562_page_85', -419), ('241869_page_42', -392), ('371085_page_36', -382), ('335945_page_113', -372), ('270880_page_63', -361), ('517401_page_154', -353), ('234233_page_19', -352), ('27475_page_7', -350), ('340340_page_96', -349)]]
#
# Query 32 [128147_page_77]: 1 [top10=[('128147_page_77', -1480), ('281748_page_5', -468), ('335945_page_113', -457), ('57857_page_22', -451), ('517025_page_594', -449), ('01380_page_5', -448), ('62251_page_32', -442), ('203699_page_15', -430), ('516486_page_56', -417), ('84124_page_22', -412)]]
#
# Query 33 [134103_page_4]: 1 [top10=[('134103_page_4', -1160), ('52189_page_6', -439), ('263108_page_23', -435), ('02331_page_69', -428), ('371085_page_36', -425), ('18971_page_107', -418), ('35927_page_30', -414), ('01106_page_45', -411), ('01749_page_1', -408), ('281748_page_5', -407)]]
#
# Query 34 [146002_page_3]: 1 [top10=[('146002_page_3', -579), ('01685_page_24', -447), ('19459_page_3', -373), ('295562_page_85', -371), ('109509_page_7', -365), ('03031_page_19', -360), ('335945_page_113', -358), ('02267_page_2', -350), ('01380_page_5', -347), ('03684_page_48', -334)]]
#
# Query 35 [172491_page_2]: 1 [top10=[('172491_page_2', -1233), ('234233_page_19', -537), ('335945_page_113', -508), ('61784_page_40', -489), ('203699_page_15', -459), ('54136_page_56', -439), ('270880_page_63', -434), ('52189_page_6', -433), ('55127_page_2', -427), ('03684_page_48', -423)]]
#
# Query 36 [19459_page_3]: 107 [top10=[('289284_page_12', -314), ('42626_page_7', -301), ('01380_page_5', -288), ('330635_page_67', -286), ('03031_page_19', -284), ('80125_page_14', -284), ('24577_page_142', -279), ('295562_page_85', -279), ('293281_page_253', -276), ('01106_page_45', -273)]]
#
# Query 37 [209693_page_3]: 1 [top10=[('209693_page_3', -1441), ('18971_page_107', -593), ('517023_page_108', -593), ('482211_page_1', -530), ('517025_page_594', -523), ('281748_page_5', -521), ('517401_page_154', -501), ('241869_page_42', -494), ('21896_page_64', -486), ('370600_page_1', -473)]]
#
# Query 38 [25315_page_2]: 1 [top10=[('25315_page_2', -1389), ('415756_page_27', -519), ('01380_page_5', -516), ('101480_page_10', -513), ('378078_page_75', -509), ('295562_page_85', -506), ('54136_page_56', -496), ('84124_page_22', -485), ('303258_page_144', -473), ('61784_page_40', -469)]]
#
# Query 39 [253851_page_2]: 1 [top10=[('253851_page_2', -1336), ('241869_page_42', -572), ('378078_page_75', -476), ('295562_page_85', -475), ('65711_page_23', -467), ('428455_page_49', -460), ('458322_page_23', -458), ('47447_page_6', -458), ('44543_page_7', -448), ('50532_page_1', -433)]]
# ```
# + [markdown] hidden=true
# ```
# MRR = 0.93 (1x36, 4x1, 107x1, 179x1)
# ```
# -
# # DEBUG: Deepmatching Testing
import matplotlib.pyplot as plt
from matplotlib.patches import ConnectionPatch  # used below to draw match lines across subplots
img_a = db['19459_page_3']
img_b = db['19459_page_3'] #cv2.imread(cellphone_scores_path[35], 0)
plt.figure(figsize=(20,20))
plt.subplot(1,2,1)
plt.imshow(img_a, cmap='gray')
plt.subplot(1,2,2)
plt.imshow(img_b, cmap='gray')
plt.show()
_, img1 = cv2.threshold(img_a,127,255,cv2.THRESH_BINARY)
_, img2 = cv2.threshold(img_b,127,255,cv2.THRESH_BINARY)
if img1.shape[0] >= 1000:
scale1 = 1.0 / (img1.shape[0] // 1000)
img1 = cv2.resize(img1, (0,0), fx=scale1, fy=scale1)
if img2.shape[0] >= 1000:
scale2 = 1.0 / (img2.shape[0] // 1000)
img2 = cv2.resize(img2, (0,0), fx=scale2, fy=scale2)
img1 = cv2.cvtColor(img1, cv2.COLOR_GRAY2RGB)
img2 = cv2.cvtColor(img2, cv2.COLOR_GRAY2RGB)
plt.figure(figsize=(20,20))
plt.subplot(1,2,1)
plt.imshow(img1, cmap='gray')
plt.subplot(1,2,2)
plt.imshow(img2, cmap='gray')
plt.show()
matches = dm.deepmatching(img1, img2, '-downscale 3 -nt 12' )
len(matches)
# +
fig = plt.figure(figsize=(20,20))
ax1 = fig.add_subplot(1,2,1)
ax1.imshow(img1)
for i in range(len(matches)):
ax1.plot([matches[i][0]], [matches[i][1]], 'ko')
ax2 = fig.add_subplot(1,2,2)
ax2.imshow(img2)
for i in range(len(matches)):
ax2.plot([matches[i][2]], [matches[i][3]], 'ko')
for i in range(len(matches)):
con = ConnectionPatch(xyA=(matches[i][2], matches[i][3]), coordsA='data', coordsB='data',
xyB=(matches[i][0], matches[i][1]), axesA=ax2, axesB=ax1, color='red')
ax2.add_artist(con)
# -
| sheet_id/notebooks/DeepMatching-Baseline.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import csv
import requests
from xml.dom import minidom
from ContentDownloader import ContentDownloader
from tqdm import tqdm
from ipdb import set_trace
import pandas as pd
import numpy as np
from bs4 import BeautifulSoup
from readability import Document
def loadXML(url='https://www.salvageautosauction.com/sitemap.xml', name=''):
    # creating an HTTP response object from the given sitemap/feed url
    resp = requests.get(url)
    # saving the xml file; fall back to the url basename if no name is given
    with open('csv/' + (name if name else url.split('/')[-1]), 'wb') as f:
        f.write(resp.content)
def parseXML(xmlfile):
    xmldoc = minidom.parse(xmlfile)
    itemlist = xmldoc.getElementsByTagName('loc')
    print('URLs found: ', len(itemlist), '\nProcessing XML file ...')
urls = []
for item in tqdm(itemlist):
url = item.firstChild.nodeValue
if 'vehicle_detail' in url:
urls.append(url)
return urls
def find_bid(page):
    try:
        soup = BeautifulSoup(page, 'html.parser')
        dom = soup.findAll("p", {"class": "text-center"})[1]
        # keep only the digits of the bid string, e.g. "$1,250" -> 1250
        return int(''.join([s for s in dom.contents[0] if s.isdigit()]))
    except Exception:
        return np.nan
def format_data(df):
res_df = pd.DataFrame()
for index, row in tqdm(df.iterrows()):
pagetable = pd.read_html(row['Text'])[0]
item_name = row['URL'].split('vehicle_detail/')[1].split('/')[0]
pagetable.columns = ['Vehicle Name',item_name]
pagetable = pagetable.set_index('Vehicle Name').T
        pagetable['bid'] = find_bid(row['Text'])
res_df = res_df.append(pagetable,sort=False)
return res_df
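# The digit-join trick used in `find_bid` strips everything but digits from the
# bid string; shown here on a hypothetical scraped snippet:

```python
text = "Current Bid: $1,250"  # hypothetical bid text from a vehicle page
bid = int("".join(c for c in text if c.isdigit()))
print(bid)  # → 1250
```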
# +
xlm_name = 'SAA_sitemap.xml'
text_file_name = 'SalvageAutos.csv'
loadXML(name=xlm_name)
urls = parseXML('csv/' + xlm_name)
scraped_data = ContentDownloader.run_url_download(batch_size=100,urls_list=urls[:300] ,path_to_csv=text_file_name)
#scraped_data = pd.read_csv('csv/SalvageAutos.csv')
res_df = format_data(scraped_data)
# -
res_df.to_csv('csv/formatted_date.csv')
| DownloadCurrentData.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Environment (conda_da37)
# language: python
# name: conda_da37
# ---
# # Annealing by Toshiba SBM
import subprocess
import joblib
ip="ip address for server"
cmd=f"""
curl -i -H "Content-Type: application/octet-stream" -X POST "{ip}:8000/solver/ising?steps=0&loops=10" --data-binary "@test.mm"
"""
cmd=cmd.replace("\n","")
#test job
# %time res=subprocess.check_output(cmd, shell=True)
res
# # Make qubo for SBM
# +
#make sparse dict from array-type qubo
original_qubo=joblib.load("../data/rbm_J.bin")
scale=10**5
dim=original_qubo.shape[0]
sparse_dict={}
for i in range(dim):
for j in range(dim):
val=-original_qubo[i][j]*scale
if val!=0:
sparse_dict[f"{j+1} {i+1}"]=int(val)
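# The loop above negates, scales and 1-indexes the QUBO entries for the
# MatrixMarket file; the same transformation on a hypothetical 2x2 array:

```python
import numpy as np

def to_sparse_dict(qubo, scale=10**5):
    # negate, scale, keep non-zeros, 1-based "column row" keys
    d = {}
    for i in range(qubo.shape[0]):
        for j in range(qubo.shape[1]):
            val = -qubo[i][j] * scale
            if val != 0:
                d[f"{j+1} {i+1}"] = int(val)
    return d

print(to_sparse_dict(np.array([[0.0, -0.5], [0.25, 0.0]])))
# → {'2 1': 50000, '1 2': -25000}
```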
# +
#save as mm file
num_non_zero_params=len(sparse_dict.keys())
output=f"""%%MatrixMarket matrix coordinate real symmetric
{dim} {dim} {num_non_zero_params}
"""
#output
for k,v in sparse_dict.items():
output+=f"{k} {v}\n"
with open("qubo.mm","w") as f:
f.write(output[:-1])
# -
# # Annealing by SBM
# +
timeout=60
cmd=f"""
curl -i -H "Content-Type: application/octet-stream" -X POST "{ip}:8000/solver/ising?steps=0&loops=10&timeout={timeout}" --data-binary "@qubo.mm"
"""
cmd=cmd.replace("\n","")
# +
# #%time res=subprocess.check_output(cmd, shell=True)
# -
print(res)
| 3_2_anneal_comparison/toshiba_sbm/SBM.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
from ahh import pre, ext, vis, sci
import pandas as pd
import datetime
# +
df = pre.read_csv('monthly/sleep_summary_20170301_20171028.csv', date='SLEEP DATE', time='START TIME')
df = df.drop('sleep date', axis=1).drop('start time', axis=1).drop('minutes to fall asleep', axis=1)
df.index = df.index - datetime.timedelta(hours=4) # because I sometimes sleep past 12 AM and it counts as the next day
df['weekday'] = df.index.weekday
dts = df.index
tib = df['time in bed'] / 60
tas = df['minutes asleep'] / 60
df.columns
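# The 4-hour shift above makes a past-midnight bedtime count toward the
# previous night; for example, with a made-up timestamp:

```python
import datetime

bedtime = datetime.datetime(2017, 3, 2, 1, 30)  # 1:30 AM on March 2nd
night_of = (bedtime - datetime.timedelta(hours=4)).date()
print(night_of)  # → 2017-03-01
```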
# +
ax = vis.plot_line(dts, tib, label='time in bed')
_ = vis.plot_line(dts, tas, figsize='na', color='blue', label='time asleep')
ax.set_ylim(3, 12)
_ = vis.set_labels(ax, ylabel='Hours', title='March to October 2017', title_pad=1.12)
vis.set_legend(ax, ncol=3, loc='top center')
# -
df_gb = df.groupby([df.index.month, 'weekday']).mean()
df_gb = df_gb.reset_index()
df_piv = df_gb.pivot(index='weekday', columns='level_0', values='minutes asleep') / 60
df_piv.index = ext.MISC['weekdays_short']
df_piv.columns = ext.MISC['months_short'][2:10]
vis.plot_heatmap(df_piv, data_lim=(6, 9), ylabel='Night of', xlabel='Month', cbar_label='Hours Asleep')
df_gb = df.groupby([df.index.month, 'weekday']).mean()
df_gb = df_gb.reset_index()
df_piv = df_gb.pivot(index='weekday', columns='level_0', values='minutes awake')
df_piv.index = ext.MISC['weekdays_short']
df_piv.columns = ext.MISC['months_short'][2:10]
vis.plot_heatmap(df_piv, data_lim=(50, 90), interval=5, ylabel='Night of', xlabel='Month', cbar_label='Minutes Awake')
df
| sketches/fitbit/sleep_heatmaps.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: custom_demo
# language: python
# name: custom_demo
# ---
# + [markdown] id="YDa6KZkQL6xT"
# # ✨ REST API Usage
# ---
#
# This notebook explains how Superwise model KPIs can be consumed and analyzed using REST API calls. There are three main parts:
#
# [**1. Connection**](#1.-Connection) - Initiates the mandatory token-based authentication. [More details here](https://docs.superwise.ai/v0.1/docs/authentication).
#
# [**2. Data consumption**](#2.-Data-consumption) - Explains how to calculate a specific metric on a data entity [More information about common Superwise concepts.](https://docs.superwise.ai/docs/overview-1).
#
# [**3. Analyze results**](#3.-Analyze-results) - Provides examples of trend analysis
#
# + [markdown] id="xk8-xE-ML6xU"
# ## 1. Connection
# + [markdown] id="EgNHE81JL6xV"
# To initiate a connection, we first create an API token with permission to send API requests.
# To create a token, send a POST request to the route listed below, including your clientId and secret in the body of the request.
#
# [For more about clientID and Secret](https://docs.superwise.ai/v0.1/docs/authentication)
# + id="U56bypZAL6xX"
import requests
import pandas as pd
import matplotlib.pyplot as plt
# + id="RaZDuLn7Gt-l"
CLIENT_ID = "REPLACE-WITH-YOUR-CLIENTID"
SECRET = "REPLACE-WITH-YOUR-SECRET"
CUSTOMER = "REPLACE-WITH-YOUR-CUSTOMER-NAME"
# + id="XSTO6IAJGt-l"
url = "https://auth.superwise.ai/identity/resources/auth/v1/api-token"
headers = {
"Accept": "application/json",
"Content-Type": "application/json"
}
payload = {
"clientId": CLIENT_ID,
"secret": SECRET
}
res = requests.post(url, json=payload, headers=headers)
res.raise_for_status()
token = res.json()['accessToken']
# + [markdown] id="51IMMm6EL6xW"
# **Set authorization headers**
# + id="AVEdTnZhL6xW"
HEADERS = {'Authorization': f'Bearer {token}'}
URL_PREFIX = f"https://portal.superwise.ai/{CUSTOMER}"
# + [markdown] id="o3pPdLrmL6xX"
# ## 2. Data consumption
# Once you create a token, you can start querying data.
# Here’s how to get the Data Drift metric for the model's five most important features.
# To consume the KPI, provide the following parameters:
#
# - Task and version - Sets the exact model that will compute the KPI.
# - Entity ID - The IDs of any data entities (e.g., features, labels, predictions, values) on which you want to compute the metric. Use id=-1 for metrics computed on the entire dataset, regardless of the specific entity (e.g., quantity).
# - Metric ID - The ID of the metric for which you want the KPI computed (e.g., data drift, sum_value, entropy, etc.).
#
# **For more about the terminology used here, refer to our [glossary](https://docs.superwise.ai/docs/concepts)**
#
# #### 2.1 Consume Task
# First, decide which tasks you want to select and make sure you have their IDs and version numbers.
# For more info about tasks, refer to the API [documentation](https://docs.superwise.ai/reference#task).
# + id="lRMQ1SCJL6xX" outputId="6c333a5f-1152-459d-a141-2c66af68bf9e"
request_url = f'{URL_PREFIX}/admin/v1/tasks'
res = requests.get(request_url, headers=HEADERS)
res.raise_for_status()
tasks_df = pd.DataFrame(res.json())
tasks_df.head()
# + [markdown] id="R7gdXBd1L6xY"
# **Set the task id and version id of the model**
# + id="UNT23RjAL6xY"
task = tasks_df[tasks_df['name'] == 'Fraud Detection']
TASK_ID = int(task['id'])
VERSION_ID = int(task['active_version_id'])
# + [markdown] id="9Y-Lah3uL6xY"
# #### 2.2 Consume Entites
# **Get all the model’s features and select the five most important ones**
# For more info about data entities, refer to the API [documentation](https://docs.superwise.ai/reference#data-set).
# + id="rlg8_KolL6xY" outputId="d79e3b4c-28a8-443c-e28d-e4d8b06898f0"
request_url = f'{URL_PREFIX}/model/v1/versions/{VERSION_ID}/data_entities'
res = requests.get(request_url, headers=HEADERS)
res.raise_for_status()
version_entities = pd.DataFrame(res.json(), columns=["data_entity", "feature_importance"])
flatten_version_entities = pd.json_normalize(version_entities["data_entity"], max_level=0)
flatten_version_entities["feature_importance"] = version_entities["feature_importance"]
features = flatten_version_entities.sort_values('feature_importance').tail(5)[["id", "name", "type", "feature_importance"]]
features.head()
# + [markdown] id="snmoYnDBL6xZ"
# **Set the selected Entites**
# + id="lVvBGGThL6xZ" outputId="a7964636-44d2-49d0-b2c1-b61c5de96e82"
ENTITY_IDS= features.id.to_list()
ENTITY_IDS
# + [markdown] id="i2tmVnNhL6xZ"
# #### 2.3 Get Metrics
# To get the data drift metric for the top five most important features, first fetch all the metrics so you can get its ID.
# + colab={"base_uri": "https://localhost:8080/", "height": 236} id="EVHVjJOcL6xZ" outputId="3d7f177b-c2ec-4617-eba0-90713a53f558"
request_url = f'{URL_PREFIX}/kpi/v1/metrics-functions'
res = requests.get(request_url, headers=HEADERS)
res.raise_for_status()
metrics = pd.DataFrame(res.json())
metrics.head()
# + id="DD8xL-ZsL6xa" outputId="18f3b9a5-5d10-4764-89ea-71de04ae0049"
metrics.set_index('name').loc['data_drift']
# + id="irE3f-KjL6xa"
METRIC_ID=metrics.set_index('name').loc['data_drift']['id']
# + [markdown] id="sZh8TGtzL6xa"
# #### 2.4 Get Metric
#
# The requests should include the following parameters:
#
# - task_id - The identifier of the model
# - version_id - The identifier of the model's version
# - segment_id - By default this is -1 for the whole dataset
# - entity_id - List of all the features IDs
# - metric_id - List of all metric IDs we want to get
#
# The minimal required parameters are task and version; these help Superwise recognize your model.
# For more info about metrics, refer to the API [documentation](https://docs.superwise.ai/reference#metric).
# + id="wI7fzUgPL6xa" outputId="5a88dd9c-b3eb-45c9-d3c8-26b8e71e181f"
request_url = f'{URL_PREFIX}/kpi/v1/metrics'
requests_params = dict(task_id=TASK_ID, version_id=VERSION_ID, entity_id=ENTITY_IDS, segment_id=-1, metric_id=[METRIC_ID], time_unit='D')
res = requests.get(request_url,params=requests_params,headers=HEADERS)
res.raise_for_status()
results_df = pd.DataFrame(res.json())
results_df['entity_name'] = results_df['entity_id'].map(features.set_index('id')['name'].to_dict())
results_df['date_hour'] = pd.to_datetime(results_df['date_hour'])
results_df.head()
# + [markdown] id="p1nSmTMVL6xb"
# ## 3. 📈 Analyze-results
# By plotting the KPIs we got from Superwise, we can try to spot anomalies and gain insight into how these KPIs change over time.
# + id="AMW9cq7NL6xc" outputId="a5e680b8-4da7-4169-bcfc-b5e6e267a64a"
plot_df = results_df.pivot(index='date_hour', columns='entity_name', values='value')
plot_df.plot(figsize=(35,10), title='Data Drift Trend', legend=True, xlabel='Date', ylabel='Drift')
# + [markdown] id="b0c3E-vYL6xd"
# ## 📝 Tutorial Conclusion
# In this tutorial, we demonstrated how you can easily query Superwise to get meaningful KPIs and visualize them to gain insight into your models.
| examples/consume_metrics.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Starbucks: Analyze-a-Coffee
#
# Starbucks possesses a unique way of connecting with and rewarding its customers who purchase its products. The Starbucks Rewards program allows the company to create a loyalty program where it rewards loyal customers with incentives for buying products with special offers. It utilizes many channels to market its products, from social media to TV spots and ads. Starbucks executes its extraordinary marketing strategy by deploying a combination of marketing media channels, where it creates brand recognition. Starbucks not only understands its products and customers, but also keeps up with how its customers use technology. The Starbucks App enables customers to keep track of the available offers and happy hour deals at participating stores. It allows customers to earn and collect stars (two stars per $1) that can be redeemed in-store or via the app.
#
# Here, we are going to investigate and analyze three files that simulate how people make purchasing decisions and how promotional offers influence those decisions. The analysis and its findings are observational only and not the result of a formal study. General business questions are listed below to guide us through the analysis; we will develop a set of heuristics, so the findings are not guaranteed to be optimal or rational.
#
# <img src="https://miro.medium.com/max/2000/1*7oeHvzCASgNX2u8V01FLFA.jpeg" style="width:800px;"/>
# ## Business Questions
#
# The purpose of the analysis is to examine how Starbucks’ customers respond to an offer, whether it is a BOGO or a Discount. Not all customers have the same incentives to view an offer and then make a transaction to complete it. Many factors play an important role in how customers make purchasing decisions; for instance, some customers prefer offers that allow them to collect more and more stars toward exclusive perks or even free products. Customers in one age group may prefer a different offer than another group does. Moreover, we should keep in mind that female customers may react to an offer differently than male customers do. Many aspects can be investigated and analyzed to find answers to such questions. All of that would help Starbucks target its customers, and then personalize and customize the offers it sends depending on who the audience is. Many questions can be asked; here is some of what we are going to investigate:
#
# 1. What is the number of customers who received at least one offer?
# 2. Who usually spend more at Starbucks, female or male?
# 3. For the customers who spend more; Who makes more income per year?
# 4. How old are most of Starbucks customers with respect to gender?
# 5. How much do customers spend at any time since the start of an offer?
# 6. Can we find the most popular offer by an age group or a gender, then compare it to other offers, or even another age group?
# 7. Which offer has made the most for Starbucks? Is there a difference between BOGO offers and Discount offers? If so, Do male customers react the same as female customers do for any of the two offer types?
# +
# importing libraries
import pandas as pd
import numpy as np
import json
import matplotlib.pyplot as plt
# magic word for producing visualizations in notebook
# %matplotlib inline
import plotly.plotly as py #for creating interactive data visualizations
import plotly.graph_objs as go
import plotly.tools as tls
py.sign_in('', '') #API key has been removed for security
from plotly.offline import download_plotlyjs,init_notebook_mode,plot,iplot #to work with data visualization offline
init_notebook_mode(connected=True)
import cufflinks as cf #connects Plotly with pandas to produce the interactive data visualizations
cf.go_offline()
from IPython.display import Image
# -
# reading the json files
portfolio = pd.read_json('portfolio.json', orient='records', lines=True)
profile = pd.read_json('profile.json', orient='records', lines=True)
transcript = pd.read_json('transcript.json', orient='records', lines=True)
# ---
# ---
# ---
# ## 1. Data Wrangling
#
# Many alterations and preparations were applied to the three files to get a final, clean dataset containing all the information needed for each variable, from offer types to gender and age groups.
#
# ### 1.1 Data Sets
# The data is contained in three files:
#
# * portfolio.json - containing offer ids and meta data about each offer (duration, type, etc.)
# * profile.json - demographic data for each customer
# * transcript.json - records for transactions, offers received, offers viewed, and offers completed
#
# #### <font color=blue> 1.1.1 Portfolio
# +
# printing the portfolio data
print(f"Number of offers: {portfolio.shape[0]}")
print(f"Number of variables: {portfolio.shape[1]}")
portfolio
# -
# Here is the schema and explanation of each variable in the file:
#
# **portfolio.json**
# * id (string) - offer id
# * offer_type (string) - type of offer ie BOGO, discount, informational
# * difficulty (int) - minimum required spend to complete an offer
# * reward (int) - reward given for completing an offer
# * duration (int) - time for offer to be open, in days
# * channels (list of strings)
# +
# overall info about the portfolio data
portfolio.info()
# -
portfolio.describe()
# The first look at the portfolio indicates that we will have to split the channels in the cleaning process.
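# One common way to do that split is one boolean column per channel, sketched here
# on a hypothetical two-row portfolio (the real cleaning happens later in the notebook):

```python
import pandas as pd

demo = pd.DataFrame({"id": ["offer_a", "offer_b"],
                     "channels": [["email", "web"], ["email", "mobile", "social"]]})
# one indicator column per channel, then drop the original list column
for ch in ["web", "email", "mobile", "social"]:
    demo[ch] = demo["channels"].apply(lambda lst, c=ch: int(c in lst))
demo = demo.drop("channels", axis=1)
print(demo)
```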
# ---
# #### <font color=blue> 1.1.2 Profile
# +
# printing the profile data
print(f"Number of customers: {profile.shape[0]}")
print(f"Number of variables: {profile.shape[1]}")
profile.sample(10)
# -
# Here is the schema and explanation of each variable in the file:
#
# **profile.json**
# * age (int) - age of the customer
# * became_member_on (int) - date when customer created an app account
# * gender (str) - gender of the customer (note some entries contain 'O' for other rather than M or F)
# * id (str) - customer id
# * income (float) - customer's income
# +
# overall info about the profile data
profile.info()
# -
# We can see that we have 17000 customers in our data. However, the gender and income variables have some null values, which will be investigated next.
profile.describe()
# The age variable contains a maximum value of 118, an unusual value that will be investigated further.
# +
# checking the number of null values
profile.isnull().sum()
# +
# printing the number of null values for the gender column by age
print(f"Number of null values: {profile.age[profile.gender.isnull()].value_counts().iloc[0]}")
print(f"Number of unique ages among the customers with null values: {profile.age[profile.gender.isnull()].nunique()}")
print(f"The age shared by all customers with null values: {profile.age[profile.gender.isnull()].iloc[0]}")
# -
# As expected, all the null values in the profile data belong to customers whose age is recorded as 118. Therefore, the rows with age 118 will be dropped as part of the cleaning process for the profile file.
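# That claim can be sanity-checked; a minimal sketch on toy rows (hypothetical values, same pattern as the real profile) shows the rows with any null are exactly the rows with age 118:

```python
import numpy as np
import pandas as pd

# toy profile: the age-118 rows carry the null gender/income, as in the real data
toy = pd.DataFrame({'age': [118, 25, 118, 40],
                    'gender': [None, 'F', None, 'M'],
                    'income': [np.nan, 50000.0, np.nan, 70000.0]})

# index of rows with any null vs. index of rows with age == 118
null_rows = toy[toy.gender.isnull() | toy.income.isnull()].index
age_118_rows = toy[toy.age == 118].index
same = null_rows.equals(age_118_rows)
```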
# ---
# #### <font color=blue> 1.1.3 Transcript
# +
# printing the transcript data
print(f"Number of transcripts: {transcript.shape[0]}")
print(f"Number of variables: {transcript.shape[1]}")
transcript.sample(5)
# -
# Here is the schema and explanation of each variable in the files:
#
# **transcript.json**
# * event (str) - record description (ie transaction, offer received, offer viewed, etc.)
# * person (str) - customer id
# * time (int) - time in hours since start of test. The data begins at time t=0
# * value - (dict of strings) - either an offer id or transaction amount depending on the record
# +
# overall info about the transcript data
transcript.info()
# -
transcript.time.describe()
# The transcript data includes an important variable, **time**, that will be used to make sure each event or transaction was made within the **duration** of an offer. This cleaning step will be applied next.
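# As a preview of that check (hypothetical column names; the real merge and filter happen later in the cleaning), the on-time test boils down to comparing the two hour columns:

```python
import pandas as pd

# hypothetical rows pairing an event's time with its offer's duration, both in hours
merged = pd.DataFrame({'time_hours': [0, 120, 200],
                       'duration_hours': [168, 168, 168]})

# an event is on time if it happened before the offer window closed
on_time = merged[merged['time_hours'] <= merged['duration_hours']]
```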
# ---
# ### 1.2 Data Cleaning
#
# #### <font color=blue> 1.2.1 Portfolio
# +
# creating a new copy for cleaning purposes
portfolio_clean = portfolio.copy()
# +
# overall view of the portfolio data before cleaning
portfolio_clean.sample(3)
# +
# showing the channels available before splitting them to 4 columns
portfolio_clean.channels.sample(5)
# +
# splitting the channels
def col_split(df, column):
splits = []
for s in df[column]:
for i in s:
if i not in splits:
splits.append(i)
for split in splits:
df[split] = df[column].apply(lambda x: 1 if split in x else 0)
df.drop([column], axis=1, inplace=True)
return splits
col_split(portfolio_clean, 'channels')
# +
# overall view of the portfolio data after the split
portfolio_clean.sample(3)
# -
# We have just split the channels so that each channel becomes a variable with the value 1 if it was used as a channel for a specific offer and 0 if it wasn't.
# +
# showing the type of each column
portfolio_clean.dtypes
# +
# converting the duration column to be in hours instead of days
# to be compared with the times columns in the transcript file
portfolio_clean['duration'] = portfolio_clean['duration'] * 24
# -
# As mentioned earlier, the duration variable will play a major role in the cleaning process, where it will be compared to the time of each transcript. Thus, it is converted to hours to match the time variable.
# +
# renaming the columns for the portfolio data
portfolio_clean.rename(columns={'difficulty': 'difficulty($)',
'duration': 'duration(hours)',
'id': 'offer_id'}, inplace=True)
# +
# overall view of the portfolio data after renaming the columns
portfolio_clean.head(3)
# +
# ordering the columns for the final look at the portfolio data after being clean
portfolio_clean = portfolio_clean[['offer_id',
'offer_type',
'difficulty($)',
'duration(hours)',
'reward',
'email',
'mobile',
'social',
'web']]
portfolio_clean.head(3)
# -
# ---
# #### <font color=blue> 1.2.2 Profile
# +
# creating a new copy for cleaning purposes
profile_clean = profile.copy()
# +
# overall view of the profile data before cleaning
profile_clean.head(3)
# +
# showing the type of each column
profile_clean.dtypes
# +
# converting the 'became_member_on' column to date
# adding a period of the membership by month
profile_clean['membership_start'] = profile_clean.became_member_on.apply(lambda x: pd.to_datetime(str(x), format='%Y%m%d'))
profile_clean['membership_period'] = profile_clean['membership_start'].dt.to_period('M')
profile_clean.drop(['became_member_on'], axis=1, inplace=True)
# -
# The variable "became_member_on" could be used for finding trends in when most customers joined the Starbucks App. Therefore, the column was converted from int to date, and a monthly membership_period was created that shows the month and year in which the customer became a member.
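# A small aside on the cell above: `pd.to_datetime` accepts a whole Series, so the per-row `apply` can be replaced by a single vectorized call (sketched here on sample values in the same yyyymmdd integer format):

```python
import pandas as pd

# sample membership dates in the same yyyymmdd integer format
became_member_on = pd.Series([20170512, 20180126])

# one vectorized call instead of applying pd.to_datetime row by row
starts = pd.to_datetime(became_member_on.astype(str), format='%Y%m%d')
periods = starts.dt.to_period('M')
```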
# +
# overall view of the profile data after applying some cleaning
profile_clean.head(3)
# +
# renaming the id column for the profile data
profile_clean.rename(columns={'id': 'customer_id'}, inplace=True)
# +
# listing the customer_ids whose age value is '118',
# which account for all the null values
age_118 = profile_clean[profile_clean['age'] == 118]
list(age_118.customer_id)
# +
# making sure the profile data doesn't contain the customer_ids whose age value is '118',
# which account for all the null values
profile_clean = profile_clean[profile_clean.age != 118]
profile_clean.head(3)
# -
print(f"Number of customers after cleaning: {profile_clean.shape[0]}")
print(f"Number of variables: {profile_clean.shape[1]}")
# We talked about the customers who have nulls, and found that all nulls can be dropped by dropping the rows with age = 118, which is what we have just done. We recreated the profile data to exclude those rows. As a result, the profile dataset contains information about 14825 customers instead of 17000.
# +
# checking the number of null values after dropping the customer ids with null income and gender
profile_clean.isnull().sum()
# +
# ordering the columns for the final look at the profile data after being clean
profile_clean = profile_clean[['customer_id',
'age',
'gender',
'income',
'membership_start',
'membership_period']]
profile_clean.head(3)
# -
# ---
# #### <font color=blue> 1.2.3 Transcript
# +
# creating a new copy for cleaning purposes
transcript_clean = transcript.copy()
# +
# overall view of the transcript data before cleaning
transcript_clean.head(3)
# +
# showing the type of each column
transcript_clean.dtypes
# +
# showing the values available before splitting them to
# 2 columns: record (offer id or transactions) and record_value (amount or id)
transcript_clean.value.sample(10)
# -
# While the time variable will play an important role during the analysis, we should first split the value column into a record (an offer id or a transaction) and its record_value (the id or the amount). The next function does exactly that.
# +
# splitting the value column
def df_values(df=transcript_clean):
df['record'] = df.value.apply(lambda x: list(x.keys())[0])
df['record_value'] = df.value.apply(lambda x: list(x.values())[0])
df.drop(['value'], axis=1, inplace=True)
return None
df_values()
# +
# renaming the person and time columns for the transcript data
transcript_clean.rename(columns={'person': 'customer_id',
'time': 'time(hours)'}, inplace=True)
transcript_clean.head(3)
# +
# ordering the columns for the final look at the transcript data after being clean
transcript_clean = transcript_clean[['customer_id',
'event',
'record',
'record_value',
'time(hours)']]
transcript_clean = transcript_clean[~transcript_clean.customer_id.isin(age_118.customer_id)]
transcript_clean.head(3)
# -
# After splitting the ids and the transactions, we are going to create two datasets: offers, which contains the rows where the record is an offer_id and the record value is the id, and transactions, which includes the amount of each transaction made by a customer.
print(f"Number of transcripts after cleaning: {transcript_clean.shape[0]}")
print(f"Number of variables: {transcript_clean.shape[1]}")
# +
# splitting the transcript dataset into 2 datasets: offers and transactions
# (.copy() avoids SettingWithCopyWarning on the in-place renames and drops below)
transactions = transcript_clean[transcript_clean.event == 'transaction'].copy()
offers = transcript_clean[transcript_clean.event != 'transaction'].copy()
# +
# overall view of the offers data
offers.sample(3)
# +
# renaming the record_value
offers.rename(columns={'record_value': 'offer_id'}, inplace=True)
offers.drop(['record'], axis=1, inplace=True)
# +
print(f"Number of offers after cleaning: {offers.shape[0]}")
print(f"Number of variables: {offers.shape[1]}")
offers.sample(3)
# +
# converting the record value for the transactions to be numerical
transactions['record_value'] = pd.to_numeric(transactions['record_value'])
transactions.dtypes
# +
# overall view of the transactions data
transactions.sample(3)
# +
# renaming the record_value
transactions.rename(columns={'record_value': 'transaction_amount'}, inplace=True)
transactions.drop(['record', 'event'], axis=1, inplace=True)
# +
print(f"Number of transactions after cleaning: {transactions.shape[0]}")
print(f"Number of variables: {transactions.shape[1]}")
transactions.sample(3)
# -
# ---
# ## 2. Data Exploration
#
# ### 2.1 Part 1
#
# #### <font color=blue> 2.1.1 Profile
# +
# overall view of the profile data after being clean
profile_clean.sample(3)
# +
# describing the values of the income column
profile_clean.income.describe()
# +
# describing the values of the age column
profile_clean.age.describe()
# +
# plotting the numbers and percentages of customers by gender
gender = profile_clean.gender.value_counts()
labels = gender.index
values = gender
colors = ['cornflowerblue', 'pink', 'mediumblue']
trace1 = go.Pie(labels=labels, values=round(100*values/values.sum(), 2),
text=values,
textfont=dict(size=20),
hoverinfo='skip',
hole=0.9,
showlegend=False,
opacity=0.6,
marker=dict(colors=colors,
line=dict(color='#000000', width=2)))
trace2 = go.Bar(
x=['Male'],
y=[values['M']],
name='Male',
marker=dict(
color='cornflowerblue',
line=dict(
color='cornflowerblue',
width=1.5,
)
),
opacity=0.6
)
trace3 = go.Bar(
x=['Female'],
y=[values['F']],
name='Female',
marker=dict(
color='pink',
line=dict(
color='pink',
width=1.5,
)
),
opacity=0.6
)
trace4 = go.Bar(
x=['Other'],
y=[values['O']],
name='Other',
marker=dict(
color='mediumblue',
line=dict(
color='mediumblue',
width=1.5,
)
),
opacity=0.6
)
data1 = go.Data([trace1, trace2, trace3, trace4])
layout = go.Layout(
title='Number of Customers by Gender',
xaxis=dict(
title='Gender',
domain=[0.4, 0.6]
),
yaxis=dict(
title='Count',
domain=[0.4, 0.7]
)
)
fig = go.Figure(data=data1, layout=layout)
py.iplot(fig, filename='percentage of customer by gender')
# -
# The donut chart illustrates that female customers account for 41.30% and male customers for 57.20% of the data overall, regardless of whether they completed an offer or made a transaction before the end of one. Customers who chose other as their gender account for only 1.43%. Can we assume that having more male customers means they generate a larger total transaction amount than female customers? Next, we will investigate the numbers further.
# +
# plotting a gender comparsion with respect to income and age
def array(df, gen, col):
arr = df[df.gender == gen]
arr = np.array(arr[col])
return arr
def gender_comparsion(df, col, plot_name, avg):
x_data = ['Male', 'Female', 'Other']
y0 = array(df, 'M', col)
y1 = array(df, 'F', col)
y2 = array(df, 'O', col)
y_data = [y0,y1,y2]
colors = ['cornflowerblue', 'pink', 'mediumblue']
traces = []
for xd, yd, cls in zip(x_data, y_data, colors):
traces.append(go.Box(
y=yd,
name=xd,
boxmean=True,
boxpoints='all',
jitter=0.5,
whiskerwidth=0.2,
marker=dict(
size=2,
color=cls,
),
line=dict(color=cls,
width=1),
))
layout = {
'title': plot_name,
'shapes': [
# Line Horizontal, average
{
'type': 'line',
'xref': 'paper',
'x0': 0,
'y0': avg,
'x1': 1,
'y1': avg,
'line': {
'color': 'black',
'width': 1,
'dash': 'dashdot',
}
}]}
layout.update(dict(
annotations=[go.Annotation(text="Overall Average", y=avg)]))
fig = go.Figure(data=traces, layout=layout)
plot = py.iplot(fig, filename=plot_name)
return plot
# -
gender_comparsion(profile_clean, 'income', 'Income Comparison', 64000)
# We can see that, on average, male customers earn less than female customers do in a year. In fact, female customers earn more than the average income per year for all gender types. While female customers make, on average, around 71k a year, male customers make, on average, around 61k.
gender_comparsion(profile_clean, 'age', 'Age Comparison', 55)
# The box plots describe a summary of the age distribution by gender. They show that, on average, female customers are older than male customers.
# +
# grouping the data by income and then reporting the average number of customers per income
print(f"The average number of customers per income rate per year: {round(profile_clean.groupby('income')['customer_id'].count().mean(),0)}")
# +
# plotting the Income Distribution by Gender
dis = profile_clean.drop_duplicates(subset='customer_id', keep='first')
female_income = dis[dis['gender'] == 'F']
male_income = dis[dis['gender'] == 'M']
other_income = dis[dis['gender'] == 'O']
x1 = female_income['income']
x2 = male_income['income']
x3 = other_income['income']
trace1 = go.Histogram(
x=x1,
name='Female',
opacity=0.6,
nbinsx = 91,
marker=dict(
color='pink')
)
trace2 = go.Histogram(
x=x2,
name='Male',
opacity=0.6,
nbinsx = 91,
marker=dict(
color='cornflowerblue')
)
trace3 = go.Histogram(
x=x3,
name='Other',
opacity=0.6,
nbinsx = 91,
marker=dict(
color='mediumblue')
)
data3 = [trace1, trace2, trace3]
updatemenus = list([
dict(active=0,
buttons=list([
dict(label = 'All',
method = 'update',
args = [{'visible': [True, True, True]},
{'title': 'Income Distribution by Gender'}]),
dict(label = 'Female',
method = 'update',
args = [{'visible': [True, False, False]},
{'title': 'Income Distribution of Female Customers'}]),
dict(label = 'Male',
method = 'update',
args = [{'visible': [False, True, False]},
{'title': 'Income Distribution of Male Customers'}])
]),
)
])
layout = go.Layout(dict(updatemenus=updatemenus,
barmode='stack',
bargap=0.2,
title = 'Income by Gender',
xaxis=dict(
title='Income($)'),
yaxis=dict(
title='Number of Customers')))
fig = go.Figure(data=data3, layout=layout)
py.iplot(fig, filename='Income Distribution by Gender')
# -
# Here, the plot clarifies that, overall, the majority of our customers make between 50k and 75k. The average number of customers per income rate is 163. Thus, while a minority make more than 100k per year, some customers earn less than 50k and others earn between 76k and 100k. Overall, most of the female customers make between 60k and 75k, and most of the male customers earn 45k to 60k. Next, we will confirm the difference between female and male customers' income rates.
# ---
# #### <font color=blue> 2.1.2 Portfolio
# +
# overall view of the portfolio data after being clean
portfolio_clean
# +
# showing the number of offers by offer_type
portfolio_clean.groupby('offer_type')['offer_id'].count()
# -
# To be used for further analysis: we can see that 4 of the offers are BOGO, 4 are discounts, and 2 are informational; the informational offers don't require any completion by customers and don't provide any rewards. Hence, we will focus only on the BOGO and discount offers.
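# As a minimal sketch of that restriction (toy rows with hypothetical ids, not the cleaned portfolio itself):

```python
import pandas as pd

# toy portfolio rows covering the three offer types
toy = pd.DataFrame({'offer_id': [1, 2, 3],
                    'offer_type': ['bogo', 'discount', 'informational']})

# keep only offers that can actually be completed and rewarded
actionable = toy[toy.offer_type != 'informational']
```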
# ---
# #### <font color=blue> 2.1.3 Transcript
# +
# overall view of the transcript data after being clean
transcript_clean.sample(3)
# +
# showing the number of transcripts by event
transcript_clean.event.value_counts()
# +
# showing the percentage of transcripts by event
transcript_clean['event'].value_counts(normalize = True)
# -
# In the transcripts, customers made 123957 transactions overall, whether or not they were made on time (before the end of an offer). Therefore, we will clean the transcript dataset further by keeping only the transactions made on time by customers who viewed and completed the offer.
# +
# functions to return 3 datasets by event
def offers_event1(df, ev):
offers = df[df.event == ev]
return offers
offer_received = offers_event1(offers, 'offer received')
offers_viewed = offers_event1(offers, 'offer viewed')
offers_completed = offers_event1(offers, 'offer completed')
def offers_event2(df):
df_offers = pd.DataFrame(df.offer_id.value_counts()).reset_index().rename(columns={'index': 'offer_id', 'offer_id': 'count'})
return df_offers
df_received = offers_event2(offer_received)
df_viewed = offers_event2(offers_viewed)
df_completed = offers_event2(offers_completed)
# +
# merging the datasets for each event created from the previous functions to calculate
# the numbers and percentages of the customers for each offer
offers_count = pd.merge(pd.merge(df_received, df_viewed, on='offer_id'), df_completed, on='offer_id')
offers_count = offers_count.rename(columns={'count_x': 'received', 'count_y': 'viewed','count': 'completed'})
offers_count['offer_views(%)'] = offers_count['viewed']/offers_count['received']*100
offers_count['offer_completion(%)'] = offers_count['completed']/offers_count['received']*100
offers_count['completed(not_viewed)'] = offers_count['completed']-offers_count['viewed']
offers_count
# -
print(f"The offer with the most completion percentage: {offers_count.loc[offers_count['offer_completion(%)'] == offers_count['offer_completion(%)'].max(), 'offer_id'].iloc[0]}")
print(f"Its completion percentage: {round(offers_count['offer_completion(%)'].max(), 2)}")
# An overall analysis of each offer before cleaning the data further to meet our requirements: a customer viewed and completed an offer by making a transaction. Here we can see that the offer "fafdcd668e3743c1bb461111dcafc2a4" is the most popular, with around a 75% completion rate among the customers who received it, followed by "2298d6c36e964ae4a3e7e9706d1fb8c2" with a 73% completion rate. The least popular offer is "0b1e1539f2cc45b7b9fa7c272da2e1d7": only 33% of the customers who received it viewed the offer, and 50% of the customers who received it completed it.
# +
# merging the offers_count data with portfolio data to have a complete info about each offer
offers_comparsion = pd.merge(offers_count, portfolio_clean, on='offer_id')
offers_comparsion
# -
# A final look at the offers analysis with even more details about each offer. It shows that the two most popular offers just discovered are discounts, use all four channels, and require a minimum spend of $10 per transaction. In fact, more than 95% of the customers who received these offers viewed them through one of the channels.
offers_comparsion.sample(2)
# ### 2.2 Part 2
#
# +
# merging all three datasets to have a complete, clean dataset
full_data = pd.merge(pd.merge(pd.merge(profile_clean,
transactions, on='customer_id'),
offers, on='customer_id'),
offers_comparsion, on='offer_id')
# +
# looking at the full dataset
full_data
# -
# renaming the time hours of the transactions to be distinguished from the time of offers
full_data.rename(columns={'time(hours)_x': 'time_of_transaction(hours)'}, inplace=True)
# +
# a function to recreate the id columns as simple ids that are easy to use and communicate
def id_mapper(df, column):
coded_dict = dict()
cter = 1
id_encoded = []
for val in df[column]:
if val not in coded_dict:
coded_dict[val] = cter
cter+=1
id_encoded.append(coded_dict[val])
return id_encoded
cid_encoded = id_mapper(full_data, 'customer_id')
full_data['customer_id'] = cid_encoded
oid_encoded = id_mapper(full_data, 'offer_id')
full_data['offer_id'] = oid_encoded
# -
# To make the customer and offer ids easier to communicate, they have been mapped to numbers instead of hash-like codes.
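# The same mapping can be sketched with `pd.factorize`, which assigns consecutive integer codes in order of first appearance (it starts at 0 rather than 1, unlike the `id_mapper` function above):

```python
import pandas as pd

# sample hash-like ids with a repeat
ids = pd.Series(['abc123', 'def456', 'abc123', 'xyz789'])

# codes are integers in order of first appearance; uniques recovers the originals
codes, uniques = pd.factorize(ids)
```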
# +
# looking at the full dataset after recreating the ids
full_data
# +
# making sure the dataset contains only the transactions that appear before the end of an offer
full_data = full_data[full_data['time_of_transaction(hours)'] <= full_data['duration(hours)']]
full_data
# -
print(f"The number of transactions in the full data: {full_data.shape[0]}")
print(f"The number of variables: {full_data.shape[1]}")
# As mentioned earlier, our goal is to have a dataset that contains only viewed and completed offers, where the transactions made by customers took place before the end of an offer's duration. The code above ensures that transactions made after the end of an offer are not included in the final, clean data.
# +
# showing the number of transactions by offer_type for the final and clean data
full_data['offer_type'].value_counts()
# -
# We talked about the popularity of the 10 offers we have, and found that two discount offers were the most popular. Here, we can see that most transactions were made for discount offers.
# +
# functions to return the individual datasets with respect to each column in the full_data
def full_dataset(df, column, ev):
data = df[df[column] == ev]
return data
def offer_dataset(df, offer_num):
offer_num = full_dataset(df, 'offer_id', offer_num)
return offer_num
# -
#
# #### <font color=blue> 2.2.1 Events
# +
# creating datasets of each event for further analysis
df_received = full_dataset(full_data, 'event', 'offer received')
df_viewed = full_dataset(full_data, 'event', 'offer viewed')
df_completed = full_dataset(full_data, 'event', 'offer completed')
# +
print(f"The number of received offers: {df_received.shape[0]}")
df_received.sample(3)
# +
# overview of the transaction where the customers did view an offer
print(f"The number of viewed offers: {df_viewed.shape[0]}")
df_viewed
# +
# overview of the transaction where the customers did complete an offer
print(f"The number of completed offers: {df_completed.shape[0]}")
df_completed
# -
# After writing some functions that return subsets of the full data with respect to a specific variable, we will next look at the top customers who completed any offer and paid the largest amounts.
# +
# making sure we have only unique_customers even if the customer made transaction more than once for an offer
unique_customers = df_completed.drop_duplicates(subset=['customer_id', 'time_of_transaction(hours)', 'offer_id'], keep='first')
# finding the top customers based on the sum of their transaction_amount of all offers completed
top_customers = unique_customers.groupby('customer_id')['transaction_amount'].sum().sort_values(ascending=False).head(10)
top_customers
# -
# It looks like some customers paid thousands of dollars at Starbucks. Next, we will see the total amount of money made from each offer.
# +
# finding the top offers based on the sum of the transaction_amount made by all customers who completed the offer
top_offers = unique_customers.groupby('offer_id')['transaction_amount'].sum().sort_values(ascending=False).head(10)
top_offers
# -
# Offers 5 and 7 have the highest total transaction amounts across all customers.
# +
# finding the average transaction_amount over all transactions and offer where customers completed the offers
df_completed['transaction_amount'].mean()
# +
# creating a dataframe grouped by the time each transaction takes place after the start of an offer,
# showing a description of the amount spent by that time
transcation_by_time = df_completed.groupby('time_of_transaction(hours)').describe()['transaction_amount'].reset_index()
transcation_by_time = transcation_by_time.drop(['std', '25%', '75%'], axis=1)
transcation_by_time
# +
# a function to split the data by column and then return a description of all transactions grouped by their time
def transcation_by_time(df, col, target):
transcations = df[df[col] == target]
transcations = transcations.groupby('time_of_transaction(hours)').describe()['transaction_amount'].reset_index()
transcations = transcations.drop(['std', '25%', '75%'], axis=1)
return transcations
# +
# plotting a trend that shows the average Amount Spent since a Start of an Offer at a specific time
f_transcations = transcation_by_time(df_completed, 'gender', 'F')
m_transcations = transcation_by_time(df_completed, 'gender', 'M')
trace_mean1 = go.Scatter(
x=f_transcations['time_of_transaction(hours)'],
y=f_transcations['mean'],
name = "Female",
line = dict(color = 'pink'),
opacity = 0.6)
trace_mean2 = go.Scatter(
x=m_transcations['time_of_transaction(hours)'],
y=m_transcations['mean'],
name = "Male",
line = dict(color = 'cornflowerblue'),
opacity = 0.6)
data1 = [trace_mean1, trace_mean2]
layout = {
'title': 'Average Amount Spent Since a Start of an Offer Trend',
'xaxis': {'title': 'Time Since a Start of an Offer (hours)'},
'yaxis': {'title': 'Average Amount ($)',
"range": [
10,
30
]},
'shapes': [
# Line Horizontal, average
{
'type': 'line',
'x0': 0,
'y0': 16.55,
'x1': 243,
'y1': 16.55,
'line': {
'color': 'black',
'width': 1,
'dash': 'dashdot',
}
},
# 1st highlight above average amount
{
'type': 'rect',
# x-reference is assigned to the x-values
'xref': 'paper',
# y-reference is assigned to the plot [0,1]
'yref': 'y',
'x0': 0,
'y0': 16.55,
'x1': 1,
'y1': 30,
'fillcolor': 'olive',
'opacity': 0.1,
'line': {
'width': 0,
}
},
# 2nd highlight: below-average amount
{
'type': 'rect',
'xref': 'paper',
'yref': 'y',
'x0': 0,
'y0': 16.55,
'x1': 1,
'y1': 0,
'fillcolor': 'tomato',
'opacity': 0.1,
'line': {
'width': 0,
}
}
]
}
layout.update(dict(annotations=[go.Annotation(text="Overall Average Amount ($16.55) Spent After the Start of an Offer",
x=150,
y=16.55,
ax=10,
ay=-120)]))
fig = dict(data=data1, layout=layout)
py.iplot(fig, filename = "Amount Spent since a Start of an Offer Trend")
# -
# The trend of the transactions made since the start of an offer shows that, on average, female customers paid more than male customers at any time from the beginning of an offer up to 10 days. Back to our concern from the first chart: even though the number of male customers is higher, female customers paid more. The only exception is the transactions made 228 hours after the start of an offer. On average, all customers paid $16.55 at any time since an offer started. Female customers paid more than this average at any time, whereas male customers paid less than the average most of the time. We can also observe some peaks over time for both genders where they, on average, paid more than usual; these could be during weekends or at specific times of day.
#
# #### <font color=blue> 2.2.2 Offer Type
# +
# creating datasets of each offer_type for further analysis
df_bogo = full_dataset(full_data, 'offer_type', 'bogo')
df_discount = full_dataset(full_data, 'offer_type', 'discount')
# +
# overview of the transaction where the offer is BOGO
df_bogo.sample(3)
# -
df_discount.sample(3)
# +
# function to return top customers or offers for a specific offer type
# based on the sum of their transaction_amount of all offers completed
def tops(df, col):
# making sure we have only unique_customers who completed an offer even if the customer made transaction more than once for that offer
completed_offers = df[df['event'] == 'offer completed']
unique_customers = completed_offers.drop_duplicates(subset=['customer_id', 'time_of_transaction(hours)', 'offer_id'], keep='first')
# finding the tops based on the sum of their transaction_amount of all offers completed for that specific offer type
top1 = unique_customers.groupby(col)['transaction_amount'].sum().sort_values(ascending=False).head(10)
return top1
def tops_by_gender(df):
# making sure we have only unique_customers who completed an offer even if the customer made transaction more than once for that offer
completed_offers = df[df['event'] == 'offer completed']
unique_customers = completed_offers.drop_duplicates(subset=['customer_id', 'time_of_transaction(hours)', 'offer_id'], keep='first')
# finding the amount of all completed offer by gender
amount = unique_customers.groupby(['offer_id', 'gender'])['transaction_amount'].sum()
return amount
def customer_report(cid, df=df_discount):
report = df[df.event == 'offer completed']
report = report[report.customer_id == cid]
return report
# +
# finding the top customers based on the sum of their transaction_amount of all BOGO offers completed
tops(df_bogo, 'customer_id')
# -
# Now focusing on the transactions made only for the BOGO offers, we can see the top 10 customers who paid the most across all BOGO offers.
# +
# finding the top customers based on the sum of their transaction_amount of all discount offers completed
tops(df_discount, 'customer_id')
# -
# Here, we find the top 10 customers with respect to the discount offers.
# +
# reporting more info about a customer
customer_report(8658, df_discount)
# +
# finding the top offers based on the sum of the transaction_amount made by all customers who completed the offer
tops(df_bogo, 'offer_id')
# -
# As mentioned earlier, we have 4 BOGO offers and 4 discount offers. Here we see that offer 2 had the highest total transaction amount among the BOGO offers.
# +
# finding the top offers based on the sum of the transaction_amount made by all customers who completed the offer
tops(df_discount, 'offer_id')
# -
# Offer 5 made the most among the discount offers, and the most overall!
# +
# finding the total amount for each BOGO offer by gender
tops_by_gender(df_bogo)
# -
# More details about the total transaction amounts for each BOGO offer by gender. Overall, female customers paid more than male customers, and both paid the most for offer 2.
# +
# finding the total amount for each discount offer by gender
tops_by_gender(df_discount)
# -
# With respect to discount offers, again, female customers paid more than male customers. Both paid the most for offer 5.
#
# #### <font color=blue> 2.2.3 Gender
# +
# creating datasets for each gender for further analysis
df_male = full_dataset(full_data, 'gender', 'M')
df_female = full_dataset(full_data, 'gender', 'F')
df_other = full_dataset(full_data, 'gender', 'O')
# -
# overview of the transactions made by male customers
df_male.sample(3)
# +
# overview of the transactions made by female customers
df_female.sample(3)
# +
# finding the top male customers based on the sum of their transaction_amount
tops(df_male, 'customer_id')
# -
# Earlier, we looked at the overall top customers. Here, we find the top 10 male customers based on the total amount they paid across all offers.
# +
# finding the top female customers based on the sum of their transaction_amount
tops(df_female, 'customer_id')
# -
# The top 10 female customers.
# +
# reporting more info about a customer
customer_report(9904, df_female)
# +
# finding the top offers based on the sum of the transaction_amount made by male customers who completed that offer
tops(df_male, 'offer_id')
# +
# finding the top offers based on the sum of the transaction_amount made by female customers who completed that offer
tops(df_female, 'offer_id')
# +
# function to return plots with respect to different distributions by gender
def age_distributions(df, df2, color, color2):
df = df.drop_duplicates(subset='customer_id', keep='first')
df2 = df2.drop_duplicates(subset='customer_id', keep='first')
x = df.age
x2 = df2.age
trace1 = go.Histogram(
x=x,
name='Female',
opacity=0.6,
nbinsx = 7,
marker=dict(
color=color)
)
trace2 = go.Histogram(
x=x2,
name='Male',
opacity=0.6,
nbinsx = 7,
marker=dict(
color=color2)
)
data1 = [trace1, trace2]
layout = go.Layout(
barmode='stack',
bargap=0.1,
title = 'Age by Gender',
xaxis=dict(
title='Age'),
yaxis=dict(
title='Total number of Customers'))
updatemenus = list([
dict(active=0,
buttons=list([
dict(label = 'All',
method = 'update',
args = [{'visible': [True, True]},
{'title': 'Age Distribution by Gender'}]),
dict(label = 'Female',
method = 'update',
args = [{'visible': [True, False]},
{'title': 'Age Distribution of Female Customers'}]),
dict(label = 'Male',
method = 'update',
args = [{'visible': [False, True]},
{'title': 'Age Distribution of Male Customers'}])
]),
)
])
layout.update(dict(updatemenus=updatemenus))
fig = go.Figure(data=data1, layout=layout)
plot = py.iplot(fig, filename='Age by Gender')
return plot
def income_distributions(df, df2, color, color2):
df = df.drop_duplicates(subset='customer_id', keep='first')
df2 = df2.drop_duplicates(subset='customer_id', keep='first')
x = df.income
x2 = df2.income
trace1 = go.Histogram(
x=x,
name='Female',
opacity=0.6,
nbinsx = 7,
marker=dict(
color=color)
)
trace2 = go.Histogram(
x=x2,
name='Male',
opacity=0.6,
nbinsx = 7,
marker=dict(
color=color2)
)
data2 = [trace1, trace2]
layout = go.Layout(
barmode='stack',
bargap=0.1,
title = 'Income by Gender',
xaxis=dict(
title='Income'),
yaxis=dict(
title='Total number of Customers'))
updatemenus = list([
dict(active=0,
buttons=list([
dict(label = 'All',
method = 'update',
args = [{'visible': [True, True]},
{'title': 'Income Distribution by Gender'}]),
dict(label = 'Female',
method = 'update',
args = [{'visible': [True, False]},
                              {'title': 'Income Distribution of Female Customers'}]),
dict(label = 'Male',
method = 'update',
args = [{'visible': [False, True]},
                              {'title': 'Income Distribution of Male Customers'}])
]),
)
])
layout.update(dict(updatemenus=updatemenus))
fig = go.Figure(data=data2, layout=layout)
plot = py.iplot(fig, filename='Income by Gender')
return plot
# function to return an analysis of the numbers and percentages of transactions for each offer, broken down by event
def analysis(df1, col1, col2, col3):
v = df1[df1['event'] == 'offer viewed']
v = v.drop_duplicates(subset=['customer_id', 'offer_id'], keep='first')
c = df1[df1['event'] == 'offer completed']
c = c.drop_duplicates(subset=['customer_id', 'offer_id'], keep='first')
received = df1.drop_duplicates(subset=['customer_id', 'offer_id'], keep='first')
viewed = pd.Series(v.offer_id).value_counts().reset_index().rename(columns={'index': 'offer_id_v', 'offer_id': 'viewed'})
completed = pd.Series(c.offer_id).value_counts().reset_index().rename(columns={'index': 'offer_id_c', 'offer_id': 'completed'})
received = pd.Series(received.offer_id).value_counts().reset_index().rename(columns={'index': 'offer_id_r', 'offer_id': 'received'})
analysis = pd.concat([received, viewed, completed], axis=1).rename(columns={'offer_id_r': 'offer_id', 'viewed': col2, 'completed': col3, 'received': col1})
analysis[col2+'(%)'] = round(analysis[col2]/analysis[col1]*100, 2)
analysis[col3+'(%)'] = round(analysis[col3]/analysis[col1]*100, 2)
analysis = analysis.drop(['offer_id_c', 'offer_id_v'], axis=1)
return analysis
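# As a sanity check, the percentage logic inside `analysis()` can be illustrated on a tiny, made-up event log (hypothetical data, not the real transcript): views and completions are counted once per (customer, offer) pair, then divided by the number of offers received.

```python
import pandas as pd

# Toy event log: three customers received offer 5, two viewed it,
# one completed it.
events = pd.DataFrame({
    'customer_id': [1, 1, 1, 2, 2, 3],
    'offer_id':    [5, 5, 5, 5, 5, 5],
    'event': ['offer received', 'offer viewed', 'offer completed',
              'offer received', 'offer viewed', 'offer received'],
})

# Count each (customer, offer) pair at most once per event type.
received = events.drop_duplicates(['customer_id', 'offer_id'])
viewed = events[events.event == 'offer viewed'].drop_duplicates(['customer_id', 'offer_id'])
completed = events[events.event == 'offer completed'].drop_duplicates(['customer_id', 'offer_id'])

view_rate = round(len(viewed) / len(received) * 100, 2)
completion_rate = round(len(completed) / len(received) * 100, 2)
print(view_rate, completion_rate)  # 66.67 33.33
```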
# +
# plotting the age_distributions
age_distributions(df_female, df_male, 'pink', 'cornflowerblue')
# -
# Another look at the age distribution by gender. The histograms confirm that the majority of the customers of both genders are between 40 and 60 years old.
# +
# plotting the income_distributions
income_distributions(df_female, df_male, 'pink', 'cornflowerblue')
# -
# Here, we confirm the previous findings on the income distributions. Most female customers make between 60k and 80k a year, while most male customers make between 40k and 60k.
# +
# showing an overall analysis of each offer across all customers
overall_analysis = analysis(full_data, 'received', 'overall_views', 'overall_completion')
overall_analysis
# +
# showing an analysis of each offer for male customers
male_analysis = analysis(df_male, 'received', 'male_views', 'male_completion')
male_analysis
# +
# showing an analysis of each offer for female customers
female_analysis = analysis(df_female, 'received', 'female_views', 'female_completion')
female_analysis
# +
# preparing a dataframe that shows the completion percentages of each offer by gender
completion_perc = pd.merge(pd.merge(female_analysis,
male_analysis, on='offer_id'),
overall_analysis, on='offer_id')
col_list = ['offer_id', 'female_completion(%)', 'male_completion(%)', 'overall_completion(%)']
completion_perc = completion_perc[col_list].sort_values(by='offer_id').set_index('offer_id')
completion_perc
# -
# After the further cleaning done in the previous steps, here is a completion-rate comparison between the genders for each offer. The overall completion rate accounts for all customers; again, offer 5 is the most popular offer. Female customers clearly respond to offers more than male customers do. The following chart illustrates these numbers, where the difference between female and male customers' responses to offers is easy to see.
# +
# plotting the Percentages of Completed offers by Gender
f_completed = completion_perc['female_completion(%)']
m_completed = completion_perc['male_completion(%)']
overall_completed = completion_perc['overall_completion(%)']
offers = completion_perc.index
x = offers
y = f_completed
y2 = m_completed
y3 = overall_completed
trace1 = go.Bar(
x=x,
y=y,
name='Female',
#hoverinfo = 'y',
hovertemplate = '<i>Percentage of all Female Customers who completed the Offer</i>: %{y:.2f}%'
'<br><b>Offer</b>: %{x}<br>',
marker=dict(
color='pink',
line=dict(
color='grey',
width=1.5),
),
opacity=0.6
)
trace2 = go.Bar(
x=x,
y=y2,
name='Male',
#hoverinfo = 'y',
hovertemplate = '<i>Percentage of all Male Customers who completed the Offer</i>: %{y:.2f}%'
'<br><b>Offer</b>: %{x}<br>',
marker=dict(
color='cornflowerblue',
line=dict(
color='grey',
width=1.5),
),
opacity=0.6
)
trace3 = go.Scatter(
x=x,
y=y3,
name='Overall',
#hoverinfo= 'y',
hovertemplate = '<i>Percentage of all Customers, Male, Female, and Other who completed the Offer</i>: %{y:.2f}%'
'<br><b>Offer</b>: %{x}<br>',
marker=dict(
color='grey',
)
)
data1 = [trace1, trace2, trace3]
layout = go.Layout(
title = "Percentage of Completed offers by Gender",
xaxis=dict(title = 'Offers',
type='category'),
barmode='group',
yaxis = dict(title = 'Percentage of Completed offers'
#hoverformat = '.2f'
)
)
fig = go.Figure(data=data1, layout=layout)
py.iplot(fig, filename='Percentage of Completed offers by Gender')
# -
# The bar chart illustrates the numbers found earlier, where we can see a clear difference between female customers’ behavior toward offers and male customers’ behavior. Overall, offer 8 is the least popular offer and offer 5 the most popular for both genders; specifically, offer 8 is the least popular for female customers and offer 3 the least popular for male customers.
#
# #### <font color=blue> 2.2.4 Offers
offers_list = full_data.drop_duplicates(subset=['offer_id'], keep='first')
offers_list = offers_list.drop(['customer_id',
'age',
'gender',
'membership_start', 'membership_period',
'transaction_amount', 'time_of_transaction(hours)',
'event',
'income',
'time(hours)_y',
'offer_views(%)',
'completed(not_viewed)'], axis=1)
offers_list.set_index('offer_id')
offers_list[offers_list.offer_type == 'bogo'].set_index('offer_id')
offers_list[offers_list.offer_type == 'discount'].set_index('offer_id')
# +
# function to return an analysis of the numbers and percentages of transactions by age group for a single offer
def analysis2(df1, col1, col2, col3, offer):
v = df1[df1['event'] == 'offer viewed']
v = v.drop_duplicates(subset=['customer_id', 'offer_id'], keep='first')
c = df1[df1['event'] == 'offer completed']
c = c.drop_duplicates(subset=['customer_id', 'offer_id'], keep='first')
received = df1.drop_duplicates(subset=['customer_id', 'offer_id'], keep='first')
viewed = pd.Series(v.age).value_counts().reset_index().rename(columns={'index': 'age_v', 'age': 'viewed'})
completed = pd.Series(c.age).value_counts().reset_index().rename(columns={'index': 'age_c', 'age': 'completed'})
received = pd.Series(received.age).value_counts().reset_index().rename(columns={'index': 'age_r', 'age': 'received'})
analysis2 = pd.concat([viewed, completed, received], axis=1).rename(columns={'age_v': 'age', 'viewed': col2, 'completed': col3, 'received': col1})
analysis2['offer'+offer+'_'+col2+'(%)'] = round(analysis2[col2]/analysis2[col1]*100, 2)
analysis2['offer'+offer+'_'+col3+'(%)'] = round(analysis2[col3]/analysis2[col1]*100, 2)
analysis2 = analysis2.drop(['age_r', 'age_c'], axis=1).sort_values(by='age')
return analysis2
# function to return a dataframe grouped by the time at which each transaction takes place from the start of an offer,
# showing a description of the amount spent by that time
def offer_transcations(offer):
offer_trans = transcation_by_time(df_completed, 'offer_id', offer)
return offer_trans
# function to plot, for any two offers, the trend of the average amount spent since the start of the offer
def plot2(offer1, offer2):
offer_trans1 = offer_transcations(offer1)
offer_trans2 = offer_transcations(offer2)
trace_mean1 = go.Scatter(
x=offer_trans1['time_of_transaction(hours)'],
y=offer_trans1['mean'],
name = offer1,
opacity = 0.6)
trace_mean2 = go.Scatter(
x=offer_trans2['time_of_transaction(hours)'],
y=offer_trans2['mean'],
name = offer2,
opacity = 0.6)
data1 = [trace_mean1, trace_mean2]
layout = {
'title': 'Average Amount Spent Since a Start of an Offer Trend',
'xaxis': {'title': 'Time Since a Start of an Offer (hours)'},
'yaxis': {'title': 'Average Amount ($)',
"range": [
10,
35
]}}
fig = dict(data=data1, layout=layout)
plot2 = py.iplot(fig, filename = "Amount Spent since a Start of an Offer Trend by Offer")
return plot2
# +
# showing a description of the amount spent by the time of transactions
offer_transcations(1)
# +
# plotting and comparing the trend of any two offers
plot2(5, 7)
# -
# +
# creating a dataframe that contains only transactions by a specific offer
offer1 = offer_dataset(full_data, 1)
offer1.sample(3)
# +
# showing the numbers and percentages of all transactions made by a specific age for a specific offer
offer1_analysis = analysis2(offer1, 'received', 'viewed', 'completed', '1')
offer1_analysis
# -
offer2 = offer_dataset(full_data, 2)
offer2.sample(3)
offer2_analysis = analysis2(offer2, 'received', 'viewed', 'completed', '2')
offer2_analysis
offer3 = offer_dataset(full_data, 3)
offer3_analysis = analysis2(offer3, 'received', 'viewed', 'completed', '3')
offer3_analysis
offer4 = offer_dataset(full_data, 4)
offer4_analysis = analysis2(offer4, 'received', 'viewed', 'completed', '4')
offer4_analysis
offer5 = offer_dataset(full_data, 5)
offer5_analysis = analysis2(offer5, 'received', 'viewed', 'completed', '5')
offer5_analysis
offer6 = offer_dataset(full_data, 6)
offer6_analysis = analysis2(offer6, 'received', 'viewed', 'completed', '6')
offer6_analysis
offer7 = offer_dataset(full_data, 7)
offer7_analysis = analysis2(offer7, 'received', 'viewed', 'completed', '7')
offer7_analysis
offer8 = offer_dataset(full_data, 8)
offer8_analysis = analysis2(offer8, 'received', 'viewed', 'completed', '8')
offer8_analysis
# +
# merging all the analyses created for each offer to build a dataframe that shows the completion percentage
# of each offer by age group
completion_perc_o = pd.merge(pd.merge(pd.merge(pd.merge(pd.merge(pd.merge(pd.merge(offer1_analysis, offer2_analysis, on='age'),
offer3_analysis, on='age'),
offer4_analysis, on='age'),
offer5_analysis, on='age'),
offer6_analysis, on='age'),
offer7_analysis, on='age'),
offer8_analysis, on='age')
col_list = ['age',
'offer1_completed(%)',
'offer2_completed(%)',
'offer3_completed(%)',
'offer4_completed(%)',
'offer5_completed(%)',
'offer6_completed(%)',
'offer7_completed(%)',
'offer8_completed(%)']
completion_perc_o = completion_perc_o[col_list]
completion_perc_o
# -
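# The seven nested `pd.merge` calls above can be flattened with `functools.reduce`; here is a small sketch on illustrative frames (the numbers are made up for the demo):

```python
from functools import reduce

import pandas as pd

# Each per-offer analysis shares the 'age' key, so the chain of nested
# merges collapses to a single reduce over a list of frames.
frames = [
    pd.DataFrame({'age': [25, 35], 'offer1_completed(%)': [77.55, 60.00]}),
    pd.DataFrame({'age': [25, 35], 'offer2_completed(%)': [50.00, 82.61]}),
    pd.DataFrame({'age': [25, 35], 'offer3_completed(%)': [40.00, 45.00]}),
]
merged = reduce(lambda left, right: pd.merge(left, right, on='age'), frames)
print(merged.columns.tolist())
```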
# A complete analysis of each offer was created to show the completion rate of each age group with respect to an offer. The following function will allow us to report the completion rates per age group. Then, we will plot a trend of the completion rates by each age group for each offer.
# +
# a function to return a report of an age group that contains all completion percentages by offer
def age_report(a, df=completion_perc_o):
report = df[df.age == a]
return report
# -
age_report(35)
age_report(40)
age_report(25)
# +
# creating a copy of the completion reports
plot1 = completion_perc_o.copy()
# plotting the Percentages of Completed offers by each Age group for each offer
plot1 = plot1.set_index('age')
trace1 = go.Scatter(
x=plot1.index,
y=plot1['offer1_completed(%)'],
name = "Offer 1",
opacity = 0.8)
trace2 = go.Scatter(
x=plot1.index,
y=plot1['offer2_completed(%)'],
name = "Offer 2",
opacity = 0.8)
trace3 = go.Scatter(
x=plot1.index,
y=plot1['offer3_completed(%)'],
name = "Offer 3",
opacity = 0.8)
trace4 = go.Scatter(
x=plot1.index,
y=plot1['offer4_completed(%)'],
name = "Offer 4",
opacity = 0.8)
trace5 = go.Scatter(
x=plot1.index,
y=plot1['offer5_completed(%)'],
name = "Offer 5",
opacity = 0.8)
trace6 = go.Scatter(
x=plot1.index,
y=plot1['offer6_completed(%)'],
name = "Offer 6",
opacity = 0.8)
trace7 = go.Scatter(
x=plot1.index,
y=plot1['offer7_completed(%)'],
name = "Offer 7",
opacity = 0.8)
trace8 = go.Scatter(
x=plot1.index,
y=plot1['offer8_completed(%)'],
name = "Offer 8",
opacity = 0.8)
data1 = [trace1, trace2, trace3, trace4, trace5, trace6, trace7, trace8]
layout = {
'title': 'Percentage of Completed offers by Age',
'xaxis': {'title': 'Age'},
'yaxis': {'title': 'Percentage Completed (%)'}}
layout.update(dict(xaxis=dict(rangeslider=dict(visible = True),type='linear')))
updatemenus = list([
dict(active=0,
buttons=list([
dict(label = 'All',
method = 'update',
args = [{'visible': [True, True, True, True, True, True, True, True]},
{'title': 'Percentage of Each Completed offers by Age'}]),
dict(label = 'Offer 1',
method = 'update',
args = [{'visible': [True, False, False, False, False, False, False, False]},
{'title': 'Percentage of Completed offers by Age for Offer 1'}]),
dict(label = 'Offer 2',
method = 'update',
args = [{'visible': [False, True, False, False, False, False, False, False]},
{'title': 'Percentage of Completed offers by Age for Offer 2'}]),
dict(label = 'Offer 3',
method = 'update',
args = [{'visible': [False, False, True, False, False, False, False, False]},
{'title': 'Percentage of Completed offers by Age for Offer 3'}]),
dict(label = 'Offer 4',
method = 'update',
args = [{'visible': [False, False, False, True, False, False, False, False]},
{'title': 'Percentage of Completed offers by Age for Offer 4'}]),
dict(label = 'Offer 5',
method = 'update',
args = [{'visible': [False, False, False, False, True, False, False, False]},
{'title': 'Percentage of Completed offers by Age for Offer 5'}]),
dict(label = 'Offer 6',
method = 'update',
args = [{'visible': [False, False, False, False, False, True, False, False]},
{'title': 'Percentage of Completed offers by Age for Offer 6'}]),
dict(label = 'Offer 7',
method = 'update',
args = [{'visible': [False, False, False, False, False, False, True, False]},
{'title': 'Percentage of Completed offers by Age for Offer 7'}]),
dict(label = 'Offer 8',
method = 'update',
args = [{'visible': [False, False, False, False, False, False, False, True]},
{'title': 'Percentage of Completed offers by Age for Offer 8'}])
]),
)
])
layout.update(dict(updatemenus=updatemenus))
fig = go.Figure(data=data1, layout=layout)
py.iplot(fig, filename = "Percentage of Completed offers by Age")
# -
# The plot illustrates the percentage of completed offers for each age group and can be filtered by offer. For example, for the 35-year-old age group, offer 6 has the highest completion rate at 82.61%, while the 25-year-old age group prefers offer 5 the most, with a completion rate of 77.55%.
# ### Bonus
# +
# plotting a sunburst chart that shows the numbers of customers
# with respect to all transactions where the customers completed an offer
trace = go.Sunburst(
labels=["Transactions",
"BOGO", "Discount",
"Offer 1", "Offer 2", "Offer 3", "Offer 4", "Offer 5",
"Offer 6", "Offer 7", "Offer 8",
"Female", "Male", "Other",
"Female", "Male", "Other",
"Female", "Male", "Other",
"Female", "Male", "Other",
"Female", "Male", "Other",
"Female", "Male", "Other",
"Female", "Male", "Other",
"Female", "Male", "Other"],
parents=["",
"Transactions", "Transactions",
"BOGO", "BOGO", "BOGO", "Discount", "Discount",
"Discount", "Discount", "BOGO",
"Offer 1", "Offer 1", "Offer 1",
"Offer 2", "Offer 2", "Offer 2",
"Offer 3", "Offer 3", "Offer 3",
"Offer 4", "Offer 4", "Offer 4",
"Offer 5", "Offer 5", "Offer 5",
"Offer 6", "Offer 6", "Offer 6",
"Offer 7", "Offer 7", "Offer 7",
"Offer 8", "Offer 8", "Offer 8"],
values=[26226,
9937, 12089,
2573, 2773, 2374, 2499, 3729,
2592, 3269, 1917,
1197, 1324, 52,
1281, 1441, 51,
1134, 917, 323,
1247, 1252, 0,
1626, 2052, 51,
1188, 1354, 50,
1371, 1755, 141,
872, 1045, 0],
branchvalues="total",
outsidetextfont = {"size": 15, "color": "#377eb8"},
marker = {"line": {"width": 2}})
layout = go.Layout(
    title = 'Completed Offers by Offer Type, Offer, and Gender',
    margin = go.layout.Margin(t=50, l=0, r=0, b=0))
py.iplot(go.Figure([trace], layout), filename='basic_sunburst_chart_total_branchvalues')
# -
# An interactive sunburst chart that shows more details about the number of transactions made where customers received an offer, viewed that offer through one of the channels used, and then made a transaction and completed the offer before the end of the duration of that offer.
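# Hard-coding the 30+ labels/parents/values literals above is error-prone. Assuming a completed-offers frame with `offer_id` and `gender` columns is available, the leaf arrays could instead be derived with a groupby (the counts below are illustrative, not the real totals):

```python
import pandas as pd

# Illustrative counts; the real notebook would group the
# completed-offer transactions instead.
completed = pd.DataFrame({
    'offer_id': ['Offer 1', 'Offer 1', 'Offer 5'],
    'gender':   ['Female', 'Male', 'Female'],
    'count':    [1197, 1324, 1626],
})

# One leaf per (offer, gender) pair; the index carries both levels.
leaf = completed.groupby(['offer_id', 'gender'])['count'].sum()
labels = [gender for (_, gender) in leaf.index]
parents = [offer for (offer, _) in leaf.index]
values = leaf.tolist()
print(labels, parents, values)
```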
| starbucks_analyze_a_coffee.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="oL9KopJirB2g"
# ##### Copyright 2018 The TensorFlow Authors.
# + cellView="form" id="SKaX3Hd3ra6C"
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# + [markdown] id="AXH1bmUctMld"
# # Unicode strings
#
# <table class="tfo-notebook-buttons" align="left">
# <td>
# <a target="_blank" href="https://www.tensorflow.org/tutorials/load_data/unicode"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a>
# </td>
# <td>
# <a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs/blob/master/site/en/tutorials/load_data/unicode.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
# </td>
# <td>
# <a target="_blank" href="https://github.com/tensorflow/docs/blob/master/site/en/tutorials/load_data/unicode.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
# </td>
# <td>
# <a href="https://storage.googleapis.com/tensorflow_docs/docs/site/en/tutorials/load_data/unicode.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a>
# </td>
# </table>
# + [markdown] id="LrHJrKYis06U"
# ## Introduction
#
# Models that process natural language often handle different languages with different character sets. *Unicode* is a standard encoding system that is used to represent characters from almost all languages. Each character is encoded using a unique integer [code point](https://en.wikipedia.org/wiki/Code_point) between `0` and `0x10FFFF`. A *Unicode string* is a sequence of zero or more code points.
#
# This tutorial shows how to represent Unicode strings in TensorFlow and manipulate them using Unicode equivalents of standard string ops. It separates Unicode strings into tokens based on script detection.
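# Code points can be inspected in plain Python, without TensorFlow, using the built-in `ord()` and `chr()` functions:

```python
# ord() maps a character to its integer code point; chr() maps back.
assert ord('A') == 65
assert ord('😊') == 0x1F60A      # emoji code points sit above 0xFFFF
assert chr(0x1F60A) == '😊'
print('code point of 😊:', ord('😊'))
```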
# + id="OIKHl5Lvn4gh"
import tensorflow as tf
# + [markdown] id="n-LkcI-vtWNj"
# ## The `tf.string` data type
#
# The basic TensorFlow `tf.string` `dtype` allows you to build tensors of byte strings.
# Unicode strings are utf-8 encoded by default.
# + id="3yo-Qv6ntaFr"
tf.constant(u"Thanks 😊")
# + [markdown] id="2kA1ziG2tyCT"
# A `tf.string` tensor can hold byte strings of varying lengths because the byte strings are treated as atomic units. The string length is not included in the tensor dimensions.
#
# + id="eyINCmTztyyS"
tf.constant([u"You're", u"welcome!"]).shape
# + [markdown] id="jsMPnjb6UDJ1"
# Note: When using Python to construct strings, the handling of Unicode differs between Python 2 and Python 3. In Python 2, Unicode strings are indicated by the "u" prefix, as above. In Python 3, strings are Unicode by default.
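# A quick plain-Python 3 check of the note above: the "u" prefix is redundant, and encoding to UTF-8 yields bytes rather than characters:

```python
# In Python 3, str is already Unicode, so the u prefix changes nothing.
s = u"Thanks 😊"
assert s == "Thanks 😊"
encoded = s.encode("utf-8")
print(len(s), len(encoded))  # 8 characters vs 11 bytes (the emoji takes 4)
```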
# + [markdown] id="hUFZ7B1Lk-uj"
# ## Representing Unicode
#
# There are two standard ways to represent a Unicode string in TensorFlow:
#
# * `string` scalar — where the sequence of code points is encoded using a known [character encoding](https://en.wikipedia.org/wiki/Character_encoding).
# * `int32` vector — where each position contains a single code point.
#
# For example, the following three values all represent the Unicode string `"语言处理"` (which means "language processing" in Chinese):
# + id="cjQIkfJWvC_u"
# Unicode string, represented as a UTF-8 encoded string scalar.
text_utf8 = tf.constant(u"语言处理")
text_utf8
# + id="yQqcUECcvF2r"
# Unicode string, represented as a UTF-16-BE encoded string scalar.
text_utf16be = tf.constant(u"语言处理".encode("UTF-16-BE"))
text_utf16be
# + id="ExdBr1t7vMuS"
# Unicode string, represented as a vector of Unicode code points.
text_chars = tf.constant([ord(char) for char in u"语言处理"])
text_chars
# + [markdown] id="B8czv4JNpBnZ"
# ### Converting between representations
#
# TensorFlow provides operations to convert between these different representations:
#
# * `tf.strings.unicode_decode`: Converts an encoded string scalar to a vector of code points.
# * `tf.strings.unicode_encode`: Converts a vector of code points to an encoded string scalar.
# * `tf.strings.unicode_transcode`: Converts an encoded string scalar to a different encoding.
# + id="qb-UQ_oLpAJg"
tf.strings.unicode_decode(text_utf8,
input_encoding='UTF-8')
# + id="kEBUcunnp-9n"
tf.strings.unicode_encode(text_chars,
output_encoding='UTF-8')
# + id="0MLhWcLZrph-"
tf.strings.unicode_transcode(text_utf8,
input_encoding='UTF8',
output_encoding='UTF-16-BE')
# + [markdown] id="QVeLeVohqN7I"
# ### Batch dimensions
#
# When decoding multiple strings, the number of characters in each string may not be equal. The return result is a [`tf.RaggedTensor`](../../guide/ragged_tensor.ipynb), where the length of the innermost dimension varies depending on the number of characters in each string:
# + id="N2jVzPymr_Mm"
# A batch of Unicode strings, each represented as a UTF8-encoded string.
batch_utf8 = [s.encode('UTF-8') for s in
[u'hÃllo', u'What is the weather tomorrow', u'Göödnight', u'😊']]
batch_chars_ragged = tf.strings.unicode_decode(batch_utf8,
input_encoding='UTF-8')
for sentence_chars in batch_chars_ragged.to_list():
print(sentence_chars)
# + [markdown] id="iRh3n1hPsJ9v"
# You can use this `tf.RaggedTensor` directly, or convert it to a dense `tf.Tensor` with padding or a `tf.SparseTensor` using the methods `tf.RaggedTensor.to_tensor` and `tf.RaggedTensor.to_sparse`.
# + id="yz17yeSMsUid"
batch_chars_padded = batch_chars_ragged.to_tensor(default_value=-1)
print(batch_chars_padded.numpy())
# + id="kBjsPQp3rhfm"
batch_chars_sparse = batch_chars_ragged.to_sparse()
# + [markdown] id="GCCkZh-nwlbL"
# When encoding multiple strings with the same lengths, a `tf.Tensor` may be used as input:
# + id="_lP62YUAwjK9"
tf.strings.unicode_encode([[99, 97, 116], [100, 111, 103], [ 99, 111, 119]],
output_encoding='UTF-8')
# + [markdown] id="w58CMRg9tamW"
# When encoding multiple strings with varying length, a `tf.RaggedTensor` should be used as input:
# + id="d7GtOtrltaMl"
tf.strings.unicode_encode(batch_chars_ragged, output_encoding='UTF-8')
# + [markdown] id="T2Nh5Aj9xob3"
# If you have a tensor with multiple strings in padded or sparse format, then convert it to a `tf.RaggedTensor` before calling `unicode_encode`:
# + id="R2bYCYl0u-Ue"
tf.strings.unicode_encode(
tf.RaggedTensor.from_sparse(batch_chars_sparse),
output_encoding='UTF-8')
# + id="UlV2znh_u_zm"
tf.strings.unicode_encode(
tf.RaggedTensor.from_tensor(batch_chars_padded, padding=-1),
output_encoding='UTF-8')
# + [markdown] id="hQOOGkscvDpc"
# ## Unicode operations
# + [markdown] id="NkmtsA_yvMB0"
# ### Character length
#
# The `tf.strings.length` operation has a parameter `unit`, which indicates how lengths should be computed. `unit` defaults to `"BYTE"`, but it can be set to other values, such as `"UTF8_CHAR"` or `"UTF16_CHAR"`, to determine the number of Unicode codepoints in each encoded `string`.
# + id="1ZzMe59mvLHr"
# Note that the final character takes up 4 bytes in UTF8.
thanks = u'Thanks 😊'.encode('UTF-8')
num_bytes = tf.strings.length(thanks).numpy()
num_chars = tf.strings.length(thanks, unit='UTF8_CHAR').numpy()
print('{} bytes; {} UTF-8 characters'.format(num_bytes, num_chars))
# + [markdown] id="fHG85gxlvVU0"
# ### Character substrings
#
# Similarly, the `tf.strings.substr` operation accepts the "`unit`" parameter, and uses it to determine what kind of offsets the "`pos`" and "`len`" parameters contain.
# + id="WlWRLV-4xWYq"
# default: unit='BYTE'. With len=1, we return a single byte.
tf.strings.substr(thanks, pos=7, len=1).numpy()
# + id="JfNUVDPwxkCS"
# Specifying unit='UTF8_CHAR', we return a single character, which in this case
# is 4 bytes.
print(tf.strings.substr(thanks, pos=7, len=1, unit='UTF8_CHAR').numpy())
# + [markdown] id="zJUEsVSyeIa3"
# ### Split Unicode strings
#
# The `tf.strings.unicode_split` operation splits unicode strings into substrings of individual characters:
# + id="dDjkh5G1ejMt"
tf.strings.unicode_split(thanks, 'UTF-8').numpy()
# + [markdown] id="HQqEEZEbdG9O"
# ### Byte offsets for characters
#
# To align the character tensor generated by `tf.strings.unicode_decode` with the original string, it's useful to know the offset for where each character begins. The method `tf.strings.unicode_decode_with_offsets` is similar to `unicode_decode`, except that it returns a second tensor containing the start offset of each character.
# + id="Cug7cmwYdowd"
codepoints, offsets = tf.strings.unicode_decode_with_offsets(u"🎈🎉🎊", 'UTF-8')
for (codepoint, offset) in zip(codepoints.numpy(), offsets.numpy()):
print("At byte offset {}: codepoint {}".format(offset, codepoint))
# + [markdown] id="2ZnCNxOvx66T"
# ## Unicode scripts
# + [markdown] id="nRRHqkqNyGZ6"
# Each Unicode code point belongs to a single collection of codepoints known as a [script](https://en.wikipedia.org/wiki/Script_%28Unicode%29) . A character's script is helpful in determining which language the character might be in. For example, knowing that 'Б' is in Cyrillic script indicates that modern text containing that character is likely from a Slavic language such as Russian or Ukrainian.
#
# TensorFlow provides the `tf.strings.unicode_script` operation to determine which script a given codepoint uses. The script codes are `int32` values corresponding to [International Components for
# Unicode](http://site.icu-project.org/home) (ICU) [`UScriptCode`](http://icu-project.org/apiref/icu4c/uscript_8h.html) values.
#
# + id="K7DeYHrRyFPy"
uscript = tf.strings.unicode_script([33464, 1041]) # ['芸', 'Б']
print(uscript.numpy()) # [17, 8] == [USCRIPT_HAN, USCRIPT_CYRILLIC]
# + [markdown] id="2fW992a1lIY6"
# The `tf.strings.unicode_script` operation can also be applied to multidimensional `tf.Tensor`s or `tf.RaggedTensor`s of codepoints:
# + id="uR7b8meLlFnp"
print(tf.strings.unicode_script(batch_chars_ragged))
# + [markdown] id="mx7HEFpBzEsB"
# ## Example: Simple segmentation
#
# Segmentation is the task of splitting text into word-like units. This is often easy when space characters are used to separate words, but some languages (like Chinese and Japanese) do not use spaces, and some languages (like German) contain long compounds that must be split in order to analyze their meaning. In web text, different languages and scripts are frequently mixed together, as in "NY株価" (New York Stock Exchange).
#
# We can perform very rough segmentation (without implementing any ML models) by using changes in script to approximate word boundaries. This will work for strings like the "NY株価" example above. It will also work for most languages that use spaces, as the space characters of various scripts are all classified as USCRIPT_COMMON, a special script code that differs from that of any actual text.
# + id="grsvFiC4BoPb"
# dtype: string; shape: [num_sentences]
#
# The sentences to process. Edit this line to try out different inputs!
sentence_texts = [u'Hello, world.', u'世界こんにちは']
# + [markdown] id="CapnbShuGU8i"
# First, we decode the sentences into character codepoints, and find the script identifier for each character.
# + id="ReQVcDQh1MB8"
# dtype: int32; shape: [num_sentences, (num_chars_per_sentence)]
#
# sentence_char_codepoint[i, j] is the codepoint for the j'th character in
# the i'th sentence.
sentence_char_codepoint = tf.strings.unicode_decode(sentence_texts, 'UTF-8')
print(sentence_char_codepoint)
# dtype: int32; shape: [num_sentences, (num_chars_per_sentence)]
#
# sentence_char_scripts[i, j] is the unicode script of the j'th character in
# the i'th sentence.
sentence_char_script = tf.strings.unicode_script(sentence_char_codepoint)
print(sentence_char_script)
# + [markdown] id="O2fapF5UGcUc"
# Next, we use those script identifiers to determine where word boundaries should be added. We add a word boundary at the beginning of each sentence, and for each character whose script differs from the previous character:
# + id="7v5W6MOr1Rlc"
# dtype: bool; shape: [num_sentences, (num_chars_per_sentence)]
#
# sentence_char_starts_word[i, j] is True if the j'th character in the i'th
# sentence is the start of a word.
sentence_char_starts_word = tf.concat(
[tf.fill([sentence_char_script.nrows(), 1], True),
tf.not_equal(sentence_char_script[:, 1:], sentence_char_script[:, :-1])],
axis=1)
# dtype: int64; shape: [num_words]
#
# word_starts[i] is the index of the character that starts the i'th word (in
# the flattened list of characters from all sentences).
word_starts = tf.squeeze(tf.where(sentence_char_starts_word.values), axis=1)
print(word_starts)
# + [markdown] id="LAwh-1QkGuC9"
# We can then use those start offsets to build a `RaggedTensor` containing the list of words from all batches:
# + id="bNiA1O_eBBCL"
# dtype: int32; shape: [num_words, (num_chars_per_word)]
#
# word_char_codepoint[i, j] is the codepoint for the j'th character in the
# i'th word.
word_char_codepoint = tf.RaggedTensor.from_row_starts(
values=sentence_char_codepoint.values,
row_starts=word_starts)
print(word_char_codepoint)
# + [markdown] id="66a2ZnYmG2ao"
# And finally, we can segment the word codepoints `RaggedTensor` back into sentences:
# + id="NCfwcqLSEjZb"
# dtype: int64; shape: [num_sentences]
#
# sentence_num_words[i] is the number of words in the i'th sentence.
sentence_num_words = tf.reduce_sum(
tf.cast(sentence_char_starts_word, tf.int64),
axis=1)
# dtype: int32; shape: [num_sentences, (num_words_per_sentence), (num_chars_per_word)]
#
# sentence_word_char_codepoint[i, j, k] is the codepoint for the k'th character
# in the j'th word in the i'th sentence.
sentence_word_char_codepoint = tf.RaggedTensor.from_row_lengths(
values=word_char_codepoint,
row_lengths=sentence_num_words)
print(sentence_word_char_codepoint)
# + [markdown] id="xWaX8WcbHyqY"
# To make the final result easier to read, we can encode it back into UTF-8 strings:
# + id="HSivquOgFr3C"
tf.strings.unicode_encode(sentence_word_char_codepoint, 'UTF-8').to_list()
| site/en/tutorials/load_data/unicode.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# Check ENV
# %env
# +
# QSTK Imports
import pftk.pftkutil.qsdateutil as du
import pftk.pftkutil.tsutil as tsu
import pftk.pftkutil.data_access as da
# Third Party Imports
import datetime as dt
import matplotlib.pyplot as plt
import pandas as pd
# List of symbols
ls_symbols = ["AGP", "CORP", "IJR"]
# Start and End date of the charts
dt_start = dt.datetime(2010, 1, 1)
dt_end = dt.datetime(2017, 12, 31)
# We need closing prices so the timestamp should be hours=16.
dt_timeofday = dt.timedelta(hours=16)
# Get a list of trading days between the start and the end.
ldt_timestamps = du.getNYSEdays(dt_start, dt_end, dt_timeofday)
# Creating an object of the dataaccess class
c_dataobj = da.DataAccess('EODHistoricalData')
# Keys to be read from the data, it is good to read everything in one go.
ls_keys = ['open', 'high', 'low', 'close', 'volume', 'actual_close']
# Reading the data, now d_data is a dictionary with the keys above.
# Timestamps and symbols are the ones that were specified before.
ldf_data = c_dataobj.get_data(ldt_timestamps, ls_symbols, ls_keys, verbose=True)
d_data = dict(zip(ls_keys, ldf_data))
# Filling the data for NAN
for s_key in ls_keys:
d_data[s_key] = d_data[s_key].fillna(method='ffill')
d_data[s_key] = d_data[s_key].fillna(method='bfill')
d_data[s_key] = d_data[s_key].fillna(1.0)
# Getting the numpy ndarray of close prices.
na_price = d_data['close'].values
# -
d_data
# +
# Plotting the prices with x-axis=timestamps
plt.clf()
plt.plot(ldt_timestamps, na_price)
plt.legend(ls_symbols)
plt.ylabel('Adjusted Close')
plt.xlabel('Date')
plt.savefig('adjustedclose.pdf', format='pdf')
# Normalizing the prices to start at 1 and see relative returns
na_normalized_price = na_price / na_price[0, :]
# Plotting the prices with x-axis=timestamps
plt.clf()
plt.plot(ldt_timestamps, na_normalized_price)
plt.legend(ls_symbols)
plt.ylabel('Normalized Close')
plt.xlabel('Date')
plt.savefig('normalized.pdf', format='pdf')
# Copy the normalized prices to a new ndarray to find returns.
na_rets = na_normalized_price.copy()
# Calculate the daily returns of the prices. (Inplace calculation)
# returnize0 works on ndarray and not dataframes.
tsu.returnize0(na_rets)
# Plotting the daily returns (first 50 days)
# Note: only three symbols are loaded, so valid column indices are 0-2.
plt.clf()
plt.plot(ldt_timestamps[0:50], na_rets[0:50, 0])  # AGP, first 50 days
plt.plot(ldt_timestamps[0:50], na_rets[0:50, 2])  # IJR, first 50 days
plt.axhline(y=0, color='r')
plt.legend([ls_symbols[0], ls_symbols[2]])
plt.ylabel('Daily Returns')
plt.xlabel('Date')
plt.savefig('rets.pdf', format='pdf')
# Plotting the scatter plot of daily returns, AGP vs IJR
plt.clf()
plt.scatter(na_rets[:, 0], na_rets[:, 2], c='blue')
plt.ylabel(ls_symbols[2])
plt.xlabel(ls_symbols[0])
plt.savefig('scatterAGPvIJR.pdf', format='pdf')
# Plotting the scatter plot of daily returns, AGP vs CORP
plt.clf()
plt.scatter(na_rets[:, 0], na_rets[:, 1], c='blue')
plt.ylabel(ls_symbols[1])
plt.xlabel(ls_symbols[0])
plt.savefig('scatterAGPvCORP.pdf', format='pdf')
| Examples/Basic/.ipynb_checkpoints/Tutorial1_NB-checkpoint.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python [default]
# language: python
# name: python3
# ---
# +
# Imports
import os
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import scipy.stats as sts
import seaborn as sns
import statsmodels
import statsmodels.api as sm
from statsmodels.formula.api import ols
from lib.utility_functions import *
from lib.exp4 import *
# Config
sns.set_style('white')
sns.set_context('talk')
pd.set_option('display.max_columns', 40)
# %matplotlib inline
# -
tidy = pd.read_csv('./tidy_data.csv', index_col=0)
# +
# Are subjects more likely to reproduce some features than others? *
# Are trained subjects more likely to reproduce game set features? ***
# Probability of missing a piece that is / is not part of a feature (or by # of features piece is part of)
# +
hstarts = [i for row in range(4) for i in range(9*row, 9*row + 6, 1)]
vstarts = list(range(9))
ddstarts = list(range(6))
dustarts = list(range(4, 9))
def _add_position_strings(bp, wp):
return ''.join([str(int(b) + int(w)) for b, w in zip(bp, wp)])
def _count_feature(bp, wp, feature):
# Get the overall occupancy of position
p = _add_position_strings(bp, wp)
# Initialize count matrices
bcounts = np.zeros(36, dtype=np.uint8)
wcounts = np.zeros(36, dtype=np.uint8)
    # Helper function to detect matches in different orientations
def _orient_count(start, increment):
end = start + 4 * increment
for orientation in [1, -1]:
total_match = p[start:end:increment] == feature[::orientation]
if not total_match:
# If the complete position is not the same as feature,
# it means that some locations that should have been
# empty were not, so just continue
continue
black_match = bp[start:end:increment] == feature[::orientation]
if black_match:
bcounts[start:end:increment] += 1
# If we found a black_match, no need to check white position
break
white_match = wp[start:end:increment] == feature[::orientation]
if white_match:
wcounts[start:end:increment] += 1
return None
# For every horizontal starting value
for start in hstarts:
_orient_count(start, 1)
# Etc
for start in vstarts:
_orient_count(start, 9)
for start in dustarts:
_orient_count(start, 8)
for start in ddstarts:
_orient_count(start, 10)
return bcounts + wcounts
def count_all_features(row):
features = ['1100', '1010', '1001', '1110', '1101', '1111']
bp = row['Black Position']
wp = row['White Position']
output_dict = {}
for feature in features:
count = _count_feature(bp, wp, feature)
output_dict[feature] = count
return output_dict
# +
def _detect_type_2_error(bi, bf, wi, wf):
    # Type 2 error: a square occupied in the initial position is empty in the final one
    final_empty = ((bf == '0') and (wf == '0'))
    initial_not_empty = ((bi == '1') or (wi == '1'))
    return int(final_empty and initial_not_empty)
def _detect_type_3_error(bi, bf, wi, wf):
b2w = ((bi == '1') and (wf == '1'))
w2b = ((wi == '1') and (bf == '1'))
return int(b2w or w2b)
def count_all_errors(row):
bpi = row['Black Position']
bpf = row['Black Position (final)']
wpi = row['White Position']
wpf = row['White Position (final)']
type_2_errors = [
_detect_type_2_error(bi, bf, wi, wf)
for bi, bf, wi, wf in zip(bpi, bpf, wpi, wpf)
]
type_3_errors = [
_detect_type_3_error(bi, bf, wi, wf)
for bi, bf, wi, wf in zip(bpi, bpf, wpi, wpf)
]
return {'Type 2': type_2_errors, 'Type 3': type_3_errors}
# -
feature_count_df = pd.DataFrame(tidy.apply(count_all_features, axis=1).tolist())
error_df = pd.DataFrame(tidy.apply(count_all_errors, axis=1).tolist())
sum_df = pd.concat([error_df, feature_count_df], axis=1)
# +
def sum_features(row):
counts = np.zeros(36, dtype=np.uint8)
for name in row.index:
if 'Type' not in name:
counts += np.stack(row[name])
return counts.tolist()
sum_df['all'] = sum_df.apply(sum_features, axis=1)
# +
def bin_errors_by_num_features(row, error_type):
type2 = row[error_type]
feats = row['all']
counts = {}
for i, f in enumerate(feats):
if f not in counts.keys():
counts[f] = 0
counts[f] += type2[i]
return counts
def bin_errors_type2(row):
return bin_errors_by_num_features(row, 'Type 2')
def bin_errors_type3(row):
return bin_errors_by_num_features(row, 'Type 3')
def bin_features(row):
idx = row.name
bp = tidy.iloc[idx]['Black Position']
wp = tidy.iloc[idx]['White Position']
p = _add_position_strings(bp, wp)
p = list(map(int, p))
feats = row['all']
counts = {}
for i, f in enumerate(feats):
if f not in counts.keys():
counts[f] = 0
counts[f] += p[i]
return counts
type2_counts = pd.DataFrame(sum_df.apply(bin_errors_type2, axis=1).tolist()).fillna(0)
type3_counts = pd.DataFrame(sum_df.apply(bin_errors_type3, axis=1).tolist()).fillna(0)
feature_counts = pd.DataFrame(sum_df.apply(bin_features, axis=1).tolist()).fillna(0)
# +
# Spearman: # features, # errors
# -
type2_counts.sum(axis=0) / feature_counts.sum(axis=0)
sum_df.head()
dist2 = type2_counts.sum(axis=0) / feature_counts.sum(axis=0)
# for Type II/III errors, number of possible errors is limited by number of pieces
# so feature_counts is for each position the number of pieces
# with X features present
dist3 = type3_counts.sum(axis=0) / feature_counts.sum(axis=0)
sts.ks_2samp(dist2.values, dist3.values)
# +
# For each number of features, count the number of Type 2 errors
type2 = sum_df.iloc[0]['Type 2']
feats = sum_df.iloc[0]['all']
print(type2)
print(feats)
# -
type_2_error_counts = np.stack(sum_df['Type 2'].values)
total_feature_counts = np.stack(sum_df['all'].values)
# +
def error_count_against_num_features(row, error_type):
fc = np.stack(row['all']).astype(np.uint8)
ec = np.stack(row[error_type]).astype(np.uint8)
pcount = {
k: np.sum(ec[fc == k])
for k in range(fc.max()+1)
}
return pcount
def error2_count_against_num_features(row):
return error_count_against_num_features(row, 'Type 2')
def error3_count_against_num_features(row):
return error_count_against_num_features(row, 'Type 3')
def instance_count_against_num_features(row):
fc = np.stack(row['all']).astype(np.uint8)
pcount = {
k: np.sum(fc == k)
for k in range(fc.max()+1)
}
return pcount
# +
type2_errors_by_feature_count = pd.DataFrame(
sum_df.apply(error2_count_against_num_features, axis=1).tolist()
).fillna(0)
type3_errors_by_feature_count = pd.DataFrame(
sum_df.apply(error3_count_against_num_features, axis=1).tolist()
).fillna(0)
instances_by_feature_count = pd.DataFrame(
sum_df.apply(instance_count_against_num_features, axis=1).tolist()
).fillna(0)
# +
p_type2_j_num_features = type2_errors_by_feature_count.sum(axis=0) / tidy['Num Pieces'].sum()
p_num_features = instances_by_feature_count.sum(axis=0) / instances_by_feature_count.sum()
err2_dist = p_type2_j_num_features / p_num_features
# -
err2_dist
# +
p_type3_j_num_features = type3_errors_by_feature_count.sum(axis=0) / tidy['Num Pieces'].sum()
err3_dist = p_type3_j_num_features / p_num_features
# -
err3_dist.mean()
err2_dist.mean()
# +
fig, axes = plt.subplots(1, 2, figsize=(12, 6))
axes[0].bar(np.arange(7), err2_dist)
axes[1].bar(np.arange(7), err3_dist)
sns.despine()
# +
err2_tidy = pd.melt(
type2_errors_by_feature_count,
var_name='Num Features', value_name='Error Count'
)
err2_tidy['dummy'] = err2_tidy['Error Count']
err2_sum_piv = err2_tidy.pivot_table(
index='Num Features', values='Error Count',
aggfunc=np.sum
)
err2_len_piv = err2_tidy.pivot_table(
index='Num Features', values='Error Count',
aggfunc=len
)
err2_sum_piv / err2_len_piv
# -
err2_tidy.head()
# +
err2_len_piv = err2_tidy.pivot_table(
index='Num Features', columns='Error Count', values='dummy',
aggfunc=len
)
err2_len_piv.fillna(0)
# +
err2_sum_piv = err2_tidy.pivot_table(
index='Num Features', columns='Error Count', values='dummy',
aggfunc=np.sum
)
p_num_err2_j_num_feat = err2_sum_piv.fillna(0) / err2_tidy['Error Count'].sum()
# -
p_num_feat = instances_by_feature_count.sum() / instances_by_feature_count.sum().sum()
p_num_feat
p_num_feat.sum()
p_num_err2_j_num_feat.sum().sum()
p_num_err2_c_num_feat = p_num_err2_j_num_feat.copy()
p_num_err2_c_num_feat.loc[:, :] = p_num_err2_j_num_feat.values / p_num_feat.values[:, np.newaxis]
p_num_err2_c_num_feat
p_num_err2_c_num_feat.sum(axis=1)
err2_tidy['Error Count'].sum()
# +
fig, axes = plt.subplots(1, 2, figsize=(14, 6))
# err2_tidy above holds raw counts; build per-feature-count probability versions here
err2_prob_tidy = pd.melt(
    type2_errors_by_feature_count / instances_by_feature_count,
    var_name='Num Features', value_name='Error Prob'
)
err3_tidy = pd.melt(
    type3_errors_by_feature_count / instances_by_feature_count,
    var_name='Num Features', value_name='Error Prob'
)
sns.factorplot(
    x='Num Features', y='Error Prob', data=err2_prob_tidy, ax=axes[0],
    kind='bar', ci=95, n_boot=1000, color='grey'
)
sns.factorplot(
x='Num Features', y='Error Prob', data=err3_tidy, ax=axes[1],
kind='bar', ci=95, n_boot=1000, color='grey'
)
plt.setp(axes[0], ylabel='Type 2 Error Probability')
plt.setp(axes[1], ylabel='Type 3 Error Probability')
sns.despine(ax=axes[0])
sns.despine(ax=axes[1])
# -
tidy['Type III Errors'].sum() / tidy['Num Pieces'].sum()
# +
dustarts
_idx = list(range(36))[8:40:8]
_l = np.zeros(36)
_l[_idx] = 1
_l.reshape((4, 9))
print(list(range(36))[5:45:10])
row = sum_df.iloc[0]
row.index
# +
position_string = tidy.iloc[0]['Black Position']
feature = '1010'
start, end = 0, 4
print(position_string)
position_string[start:end] == feature
position_string[start:end:9] == feature
# +
row = tidy.iloc[0]
bpi = row['Black Position']
bpf = row['Black Position (final)']
wpi = row['White Position']
wpf = row['White Position (final)']
error_counts = count_all_errors(row)
print(''.join([str(i) for i in error_counts['Type 2']]))
# -
initial = ''.join([str(int(b) + int(w)) for b, w in zip(bpi, wpi)])
final = ''.join([str(int(b) + int(w)) for b, w in zip(bpf, wpf)])
print(initial)
print(''.join([str(i) for i in error_counts['Type 2']]))
print(final)
print(bpi)
print(wpf)
print(''.join([str(i) for i in error_counts['Type 3']]))
start = 1
position_string[start:start+28:9]
# +
def position_string_to_array(position_string):
position_list = np.stack([int(c) for c in position_string]).reshape((4, 9))
return position_list
black_positions = np.stack(tidy['Black Position'].map(position_string_to_array).values)
# -
black_positions[0]
black_positions.shape
# +
feature1 = np.array([1, 1, 0, 0])
feature2 = np.array([1, 0, 1, 0])
feature3 = np.array([1, 0, 0, 1])
feature4 = np.array([1, 1, 1, 0])
feature5 = np.array([1, 1, 0, 1])
feature6 = np.array([1, 1, 1, 1])
def count_feature_occurrences(positions, feature):
counts = np.zeros_like(positions)
pass
# -
position_string = tidy.iloc[0]['Black Position']
position = np.stack([c for c in position_string]).astype(np.uint8)
position
feature = np.zeros_like(position)
start, end = 0, 4
all(position[np.arange(start, end, 1)] == feature1)
from scipy.signal import convolve2d
feature = feature1
convolve2d(black_positions[0], feature[np.newaxis, :], mode='same') == feature.sum()
black_positions[0]
| src/4 Feature Detection.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python [default]
# language: python
# name: python2
# ---
# <small><i>This notebook was put together by [<NAME>](http://www.vanderplas.com) for PyCon 2015. Source and license info is on [GitHub](https://github.com/jakevdp/sklearn_pycon2015/).</i></small>
# # Density Estimation: Gaussian Mixture Models
# Here we'll explore **Gaussian Mixture Models**, which is an unsupervised clustering & density estimation technique.
#
# We'll start with our standard set of initial imports
# +
# %matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
from scipy import stats
# use seaborn plotting defaults
import seaborn as sns; sns.set()
# -
# ## Introducing Gaussian Mixture Models
#
# We previously saw an example of K-Means, a clustering algorithm that is most often fit using an expectation-maximization approach.
#
# Here we'll consider an extension to this which is suitable for both **clustering** and **density estimation**.
#
# For example, imagine we have some one-dimensional data in a particular distribution:
np.random.seed(2)
x = np.concatenate([np.random.normal(0, 2, 2000),
np.random.normal(5, 5, 2000),
np.random.normal(3, 0.5, 600)])
plt.hist(x, 80, normed=True)
plt.xlim(-10, 20);
# Gaussian mixture models will allow us to approximate this density:
# +
from sklearn.mixture import GMM
clf = GMM(4, n_iter=500, random_state=3).fit(x)
xpdf = np.linspace(-10, 20, 1000)
density = np.exp(clf.score(xpdf))
plt.hist(x, 80, normed=True, alpha=0.5)
plt.plot(xpdf, density, '-r')
plt.xlim(-10, 20);
# -
# Note that this density is fit using a **mixture of Gaussians**, which we can examine by looking at the ``means_``, ``covars_``, and ``weights_`` attributes:
clf.means_
clf.covars_
clf.weights_
# +
plt.hist(x, 80, normed=True, alpha=0.3)
plt.plot(xpdf, density, '-r')
for i in range(clf.n_components):
pdf = clf.weights_[i] * stats.norm(clf.means_[i, 0],
np.sqrt(clf.covars_[i, 0])).pdf(xpdf)
plt.fill(xpdf, pdf, facecolor='gray',
edgecolor='none', alpha=0.3)
plt.xlim(-10, 20);
# -
# These individual Gaussian distributions are fit using an expectation-maximization method, much as in K means, except that rather than explicit cluster assignment, the **posterior probability** is used to compute the weighted mean and covariance.
# Somewhat surprisingly, this algorithm **provably** converges to the optimum (though the optimum is not necessarily global).
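# The EM update described above can be sketched for a 1-D mixture. This is a minimal illustrative implementation under stated assumptions, not sklearn's internals; the helper name `em_step` is hypothetical.

```python
import numpy as np
from scipy.stats import norm

def em_step(x, weights, means, stds):
    # E-step: responsibilities -- posterior probability of each component
    # for each point, shape (n_points, n_components)
    dens = np.stack([w * norm(m, s).pdf(x)
                     for w, m, s in zip(weights, means, stds)], axis=1)
    resp = dens / dens.sum(axis=1, keepdims=True)
    # M-step: posterior-weighted updates of weights, means, and variances
    nk = resp.sum(axis=0)
    new_weights = nk / len(x)
    new_means = (resp * x[:, None]).sum(axis=0) / nk
    new_vars = (resp * (x[:, None] - new_means) ** 2).sum(axis=0) / nk
    return new_weights, new_means, np.sqrt(new_vars)

rng = np.random.RandomState(0)
x = np.concatenate([rng.normal(0, 1, 200), rng.normal(5, 1, 200)])
w, m, s = em_step(x, np.array([0.5, 0.5]),
                  np.array([-1.0, 4.0]), np.array([1.0, 1.0]))
```

Iterating `em_step` until the parameters stop changing is the whole fitting loop; the "soft" responsibilities are what distinguish this from K-Means' hard cluster assignment.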
# ## How many Gaussians?
#
# Given a model, we can use one of several means to evaluate how well it fits the data.
# For example, there are the Akaike Information Criterion (AIC) and the Bayesian Information Criterion (BIC)
print(clf.bic(x))
print(clf.aic(x))
# Let's take a look at these as a function of the number of gaussians:
# +
n_estimators = np.arange(1, 10)
clfs = [GMM(n, n_iter=1000).fit(x) for n in n_estimators]
bics = [clf.bic(x) for clf in clfs]
aics = [clf.aic(x) for clf in clfs]
plt.plot(n_estimators, bics, label='BIC')
plt.plot(n_estimators, aics, label='AIC')
plt.legend();
# -
# It appears that for both the AIC and BIC, 4 components is preferred.
# ## Example: GMM For Outlier Detection
#
# GMM is what's known as a **Generative Model**: it's a probabilistic model from which a dataset can be generated.
# One thing that generative models can be useful for is **outlier detection**: we can simply evaluate the likelihood of each point under the generative model; the points with a suitably low likelihood (where "suitable" is up to your own bias/variance preference) can be labeled outliers.
#
# Let's take a look at this by defining a new dataset with some outliers:
# +
np.random.seed(0)
# Add 20 outliers
true_outliers = np.sort(np.random.randint(0, len(x), 20))
y = x.copy()
y[true_outliers] += 50 * np.random.randn(20)
# +
clf = GMM(4, n_iter=500, random_state=0).fit(y)
xpdf = np.linspace(-10, 20, 1000)
density_noise = np.exp(clf.score(xpdf))
plt.hist(y, 80, normed=True, alpha=0.5)
plt.plot(xpdf, density_noise, '-r')
#plt.xlim(-10, 20);
# -
# Now let's evaluate the log-likelihood of each point under the model, and plot these as a function of ``y``:
log_likelihood = clf.score_samples(y)[0]
plt.plot(y, log_likelihood, '.k');
# +
detected_outliers = np.where(log_likelihood < -9)[0]
print("true outliers:")
print(true_outliers)
print("\ndetected outliers:")
print(detected_outliers)
# -
# The algorithm misses a few of these points, which is to be expected (some of the "outliers" actually land in the middle of the distribution!)
#
# Here are the outliers that were missed:
set(true_outliers) - set(detected_outliers)
# And here are the non-outliers which were spuriously labeled outliers:
set(detected_outliers) - set(true_outliers)
# Finally, we should note that although all of the above is done in one dimension, GMM does generalize to multiple dimensions, as we'll see in the breakout session.
# ## Other Density Estimators
#
# The other main density estimator that you might find useful is *Kernel Density Estimation*, which is available via ``sklearn.neighbors.KernelDensity``. In some ways, this can be thought of as a generalization of GMM where there is a gaussian placed at the location of *every* training point!
# +
from sklearn.neighbors import KernelDensity
kde = KernelDensity(0.15).fit(x[:, None])
density_kde = np.exp(kde.score_samples(xpdf[:, None]))
plt.hist(x, 80, normed=True, alpha=0.5)
plt.plot(xpdf, density, '-b', label='GMM')
plt.plot(xpdf, density_kde, '-r', label='KDE')
plt.xlim(-10, 20)
plt.legend();
# -
# All of these density estimators can be viewed as **Generative models** of the data: that is, the model tells us how more data can be created which fits the model.
| notebooks/04.3-Density-GMM.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# %%time
import malaya
# ## Explanation
#
# Positive subjectivity: based on or influenced by personal feelings, tastes, or opinions. Can be a positive or negative sentiment.
#
# Negative subjectivity: based on a report or a fact. Can be a positive or negative sentiment.
negative_text = 'Kerajaan negeri Kelantan mempersoalkan motif kenyataan Menteri Kewangan Lim Guan Eng yang hanya menyebut Kelantan penerima terbesar bantuan kewangan dari Kerajaan Persekutuan. Sedangkan menurut Timbalan Menteri Besarnya, Datuk Mohd Amar Nik Abdullah, negeri lain yang lebih maju dari Kelantan turut mendapat pembiayaan dan pinjaman.'
positive_text = 'kerajaan sebenarnya sangat bencikan rakyatnya, minyak naik dan segalanya'
# All models accept a `get_proba` parameter.
# If True, the model returns the probability of every class; otherwise it returns only the class with the highest probability. **Default is False.**
# ## Load multinomial model
model = malaya.subjective.multinomial()
print(model.predict(positive_text,get_proba=True))
print(model.predict(negative_text,get_proba=True))
model.predict_batch([negative_text,negative_text],get_proba=True)
# ## Load xgb model
model = malaya.subjective.xgb()
print(model.predict(positive_text,get_proba=True))
print(model.predict(negative_text,get_proba=True))
model.predict_batch([negative_text,negative_text],get_proba=True)
# ## List available deep learning models
malaya.subjective.available_deep_model()
# ## Load deep learning models
#
# A nice thing about Malaya's deep learning models is that they return an `Attention` result, i.e., which words have the highest impact on the prediction. To get `Attention`, you need to set `get_proba=True`.
import matplotlib.pyplot as plt
import seaborn as sns
sns.set()
# ### Load bahdanau model
model = malaya.subjective.deep_model('bahdanau')
# #### Predict single string
model.predict(positive_text)
result = model.predict(positive_text,get_proba=True,add_neutral=False)
result
plt.figure(figsize = (15, 5))
keys = result['attention'].keys()
values = result['attention'].values()
aranged = [i for i in range(len(keys))]
plt.bar(aranged, values)
plt.xticks(aranged, keys, rotation = 'vertical')
plt.show()
# #### Open subjectivity visualization dashboard
#
# By default, calling `predict_words` opens a browser with a visualization dashboard; you can disable this with `visualization=False`.
model.predict_words(negative_text)
# +
from IPython.core.display import Image, display
display(Image('subjective-bahdanau.png', width=800))
# -
# I tried to put the HTML and JavaScript inside a notebook cell, but it was pretty hard and full of weird bugs, so let's stick to HTTP serving.
#
# `predict_words` only accepts a single string; you can't predict multiple texts with it.
# #### Predict batch of strings
model.predict_batch([negative_text, positive_text],get_proba=True)
# **You might want to try `luong` and `self-attention` by yourself.**
# ## BERT model
#
# BERT is the best subjectivity model in terms of accuracy; you can check subjectivity accuracy here: https://malaya.readthedocs.io/en/latest/Accuracy.html#subjectivity-analysis. But be warned, the model size is 700MB! Make sure you have enough resources to use BERT, and install `bert-tensorflow` first:
#
# ```bash
# pip3 install bert-tensorflow
# ```
model = malaya.subjective.bert()
model.predict_batch([negative_text, positive_text],get_proba=True)
# ## Stacking models
#
# More information, you can read at [https://malaya.readthedocs.io/en/latest/Stack.html](https://malaya.readthedocs.io/en/latest/Stack.html)
multinomial = malaya.subjective.multinomial()
xgb = malaya.subjective.xgb()
bahdanau = malaya.subjective.deep_model('bahdanau')
malaya.stack.predict_stack([multinomial, xgb, bahdanau], positive_text)
# ## Load Sparse deep learning models
# What happens if a word is not included in the models' dictionary? Take `setan`: what if `setan` appears in text we want to classify? We hit this problem when classifying social media texts / posts, where the words used are often not standard, vocabulary-based language.
#
# Malaya treats **unknown words** as `<UNK>`, so to solve this problem we use character-based N-grams. Malaya chose tri-grams through five-grams.
#
# ```python
# setan = ['set', 'eta', 'tan']
# ```
#
# Sklearn provides an easy interface for n-grams; the problem is that the result is very sparse (a lot of zeros) and not memory efficient. Sklearn returns a sparse matrix, and luckily TensorFlow already provides some sparse operations.
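# A tiny sketch of the character n-gram idea above (a hypothetical helper, not Malaya's internal tokenizer):

```python
def char_ngrams(word, n_min=3, n_max=5):
    """Return all character n-grams of length n_min..n_max, in order."""
    grams = []
    for n in range(n_min, n_max + 1):
        grams.extend(word[i:i + n] for i in range(len(word) - n + 1))
    return grams

print(char_ngrams('setan'))
# 3-grams: 'set', 'eta', 'tan'; 4-grams: 'seta', 'etan'; 5-gram: 'setan'
```

An out-of-vocabulary word like `setan` thus still shares sub-word features with known words, instead of collapsing to a single `<UNK>` token.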
malaya.subjective.available_sparse_deep_model()
# Right now Malaya only provides one sparse model, `fast-text-char`. We will try to expand this.
sparse_model = malaya.subjective.sparse_deep_model()
sparse_model.predict(positive_text)
sparse_model.predict_batch([positive_text, negative_text])
sparse_model.predict_batch([positive_text, negative_text], get_proba=True)
# Right now the sparse models do not have a `neutral` class.
| example/subjectivity/load-subjectivity.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import os
import tensorflow as tf
import numpy as np
import itertools
import matplotlib.pyplot as plt
import gc
from datetime import datetime
from sklearn.utils import shuffle
from sklearn.preprocessing import StandardScaler
from sklearn.preprocessing import MinMaxScaler
from sklearn.model_selection import train_test_split
from tensorflow import keras
from tensorflow.keras import layers
from sklearn.metrics import confusion_matrix
input_label = []
output_label = []
# +
a,b = 0,0
ficheiro = open("..\\Dataset\\14-02-2018.csv", "r")
ficheiro.readline()
ficheiro.readline()
ficheiro.readline()
linha = ficheiro.readline()
while(linha != ""):
linha = linha.split(",")
out = linha.pop(37)
if(out == "Benign"):
out = 0
b += 1
else:
out = 1
a += 1
output_label.append(out)
input_label.append(linha)
linha = ficheiro.readline()
ficheiro.close()
print(str(a) + " " + str(b))
# -
scaler = MinMaxScaler(feature_range=(0,1))
scaler.fit(input_label)
input_label = scaler.transform(input_label)
input_label = np.array(input_label).reshape(len(input_label), 78, 1)
output_label = np.array(output_label)
input_label, output_label = shuffle(input_label, output_label)
inp_train, inp_test, out_train, out_test = train_test_split(input_label, output_label, test_size = 0.2)
model = keras.Sequential([
layers.Conv1D(filters = 128, kernel_size = 3, input_shape = (78,1), padding = "same", activation = "relu", use_bias = True),
layers.MaxPool1D(),
layers.Conv1D(filters = 64, kernel_size = 3, padding = "same", activation = "relu", use_bias = True),
layers.MaxPool1D(),
layers.Conv1D(filters = 32, kernel_size = 3, padding = "same", activation = "relu", use_bias = True),
layers.MaxPool1D(),
layers.Flatten(),
layers.Dense(units = 2, activation = "softmax")
])
model.compile(optimizer= keras.optimizers.SGD(learning_rate= 0.08), loss="sparse_categorical_crossentropy", metrics=['accuracy'])
treino1 = model.fit(x = inp_train, y = out_train, validation_split= 0.1, epochs = 10, shuffle = True,verbose = 1)
plt.plot(treino1.history["loss"])
plt.show()
plt.plot(treino1.history["accuracy"])
plt.show()
model.save("CNN1SshFtpBruteForceNet(14-02-2018).h5")
res = [np.argmax(resu) for resu in model.predict(inp_test)]
cm = confusion_matrix(y_true = out_test.reshape(len(out_test)), y_pred = np.array(res))
def plot_confusion_matrix(cm, classes, normaliza = False, title = "Confusion matrix", cmap = plt.cm.Blues):
plt.imshow(cm, interpolation='nearest', cmap=cmap)
plt.title(title)
plt.colorbar()
tick_marks = np.arange(len(classes))
plt.xticks(tick_marks, classes, rotation=45)
plt.yticks(tick_marks, classes)
if normaliza:
cm = cm.astype('float') / cm.sum(axis = 1)[:, np.newaxis]
print("Normalized confusion matrix")
else:
print("Confusion matrix, without normalization")
print(cm)
thresh = cm.max() / 2
for i, j in itertools.product(range(cm.shape[0]), range(cm.shape[1])):
plt.text(j, i, cm[i, j],
horizontalalignment="center",
color="white" if cm[i,j] > thresh else "black")
plt.tight_layout()
plt.ylabel('True label')
plt.xlabel('Predicted label')
labels = ["Benign", "SshFtpBruteForce"]
plot_confusion_matrix(cm = cm, classes = labels, title = "SshFtpBruteForce IDS")
| Modelos_sem_reducao/CNN_IDS/ModelosCodigo1D/CNN1SshFtpBruteForceIDS(14-02-2018).ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python Conda AC209a
# language: python
# name: ac209a
# ---
# +
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
from os import listdir
from os.path import isfile, join
import time
from itertools import chain, product
sns.set()
# -
# %matplotlib inline
# Spot check an individual songs file to explore its format and data.
songs = pd.read_pickle('../data/songs_counts_200.pkl')
playlists = pd.read_pickle('../data/playlists_song_ids_200.pkl')
# +
print(songs.shape)
display(songs.head())
print(playlists.shape)
display(playlists.head())
# -
# Verify song IDs are unique and complete with no gaps
assert min(songs.index.values) == 0
assert max(songs.index.values) == len(songs.index.values)-1
assert len(set(songs.index.values)) == len(songs.index.values)
all_songs_all_playlists = list(chain.from_iterable(playlists))
all_songs_all_playlists[0:10]
# Verify that song IDs used in playlists is the same set as those in the songs table:
assert set(all_songs_all_playlists) == set(songs.index.values)
# ## Sanity check: song IDs and playlists match up
# Manual inspection of `songs284.csv` indeed shows that the song data of song IDs stored in playlist $284_0$ match those in the file:
songs.loc[playlists['284_0']]
# ## EDA: songs
songs.describe()
plt.hist(songs.loc[(songs['count'] > 1) & (songs['count'] < 100), 'count'], bins = 30)
plt.suptitle('Distribution of song appearances across playlists')
plt.title('Filtered to 1 < frequency < 100 as there is an extremely long right tail')
# ## EDA: playlists
playlist_lengths = np.array([len(p) for p in playlists.values], 'int')
plt.hist(playlist_lengths, bins = 50)
plt.title('Distribution of number of songs in playlist')
from scipy.stats import describe
print(describe(playlist_lengths))
counts, bins = np.histogram(playlist_lengths,
bins = [1, 10, 20, 30, 40, 50, 100, 200, 300, 350])
for i in range(len(counts)):
print('[{}, {}): {}'.format(bins[i], bins[i+1], counts[i]))
# ## Dimensionality reduction
# We have a matrix of $1,000,000$ rows times $200,000$ features. This is a massive dataset. When we add in the metadata with song, album, and artist information this will only grow to, say, $200,100$ features. This is a challenge for several reasons:
# - The non-binary features will potentially get drowned out by the playlist indicators. Particularly if we do unsupervised learning there is no label to inform the algorithm of the importance of one feature versus another, so all the $200,000$ playlist indicators will drown out the $100$ non-binary features containing potentially more important information.
# - Even the longest playlist has fewer than $350$ songs. With $200,000$ indicators, this means every indicator will have at least $99.825\%$ sparsity. A lot of algorithms will either drop or struggle with such near-zero-variance features.
#
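# The $99.825\%$ sparsity figure quoted above can be reproduced with a quick back-of-the-envelope check using the stated numbers (at most $350$ songs per playlist versus $200,000$ indicators):

```python
# reproduce the minimum-sparsity figure quoted above
n_indicators = 200_000   # number of playlist indicator features
max_playlist_len = 350   # upper bound on songs per playlist

min_sparsity = 1 - max_playlist_len / n_indicators
print(f"{min_sparsity:.5%}")  # 99.82500%
```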
# We therefore need a way to reduce the dimensionality.
#
# Alternatives:
# - PCA is not really an alternative, as it assumes continuous data (works with covariance matrix), and the dimensions have very, very low variance.
#
# Since there seems to be no suitable dimensionality reduction method we can use, we can reduce scope by:
# - Limiting to playlists above or within a range of certain lengths
# - Limiting to songs that appear at least $n$ times
# **Limit to songs that appear at least $n$ times across all the playlists**.
# +
n = 10
songs_keep_ind = songs.loc[songs['count'] >= n].index.values
len(songs_keep_ind), songs_keep_ind
# -
# **Limit to playlists of length within range $l$**
# +
l = [50, 100]
playlists_keep_ind = np.where(
np.logical_and(playlist_lengths >= l[0], playlist_lengths <= l[1]))[0]
print(len(playlists_keep_ind))
len(playlists_keep_ind), playlists_keep_ind[0:10]
# +
# Crashes the kernel
#keep_playlists = indicators[:, keep_playlists_ind]
# -
# ## Widening songs df with indicators for playlists we wish to keep and songs we wish to keep
# +
indicators_sub = np.zeros((songs.shape[0], len(playlists_keep_ind)), 'int')
print(indicators_sub.shape)
for i, s in enumerate(playlists[playlists_keep_ind]):
indicators_sub[s, i] = 1
print(indicators_sub)
# -
# ## Sparse matrix
playlists
play, song = zip(*enumerate(playlists))
len(play), play[0:5]
len(song), song[0:2]
pairs = [[z[0], s] for z in zip(play, song) for s in z[1]]
# > 13 million songs in playlists (with repetitions)
len(pairs)
pairs[0:20]
# column is song ID, row is playlist ID
col, row = zip(*pairs)
assert len(row) == len(col)
# # Success!
# ## Sparse matrix with all songs across all playlists, baby!
#
# https://stackoverflow.com/questions/35944522/convert-list-of-lists-with-indexes-to-csr-matrix
# Create sparse matrix
from scipy.sparse import csr_matrix, coo_matrix
mat = csr_matrix((np.ones(len(col), dtype = 'int'), (row, col)))
mat.shape
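# The sparse matrix can be queried without ever densifying it. Below is a small self-contained sketch of the same construction on a toy example (three hypothetical playlists over five songs), showing row and column sums:

```python
import numpy as np
from scipy.sparse import csr_matrix

# toy version of the playlist/song membership matrix built above
toy_playlists = [[0, 1, 2], [1, 3], [0, 3, 4]]  # 3 playlists over 5 songs
pairs = [(p, s) for p, members in enumerate(toy_playlists) for s in members]
row, col = zip(*pairs)
mat = csr_matrix((np.ones(len(row), dtype='int'), (row, col)))

# these operations never materialize the dense matrix
song_counts = np.asarray(mat.sum(axis=0)).ravel()       # appearances per song
playlist_lengths = np.asarray(mat.sum(axis=1)).ravel()  # songs per playlist
print(song_counts)       # [2 2 1 2 1]
print(playlist_lengths)  # [3 2 3]
```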
# **Warning:** Usually `mat.A` gives you a dense matrix with the zeros stored explicitly rather than left out, *but* materializing it here will crash Jupyter due to the enormous memory requirements.
| wrangling/widen_indicators_200_sparse.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import plotly
import plotly.graph_objs as go
import time as time
import numpy as np
import matplotlib.pyplot as plt
import mpl_toolkits.mplot3d.axes3d as p3
from sklearn.cluster import AgglomerativeClustering
from sklearn.datasets import make_swiss_roll  # moved from sklearn.datasets.samples_generator in newer scikit-learn
# -
n_samples = 1500
noise = 0.05
X, _ = make_swiss_roll(n_samples, noise)
# Make it thinner
X[:, 1] *= .5
def matplotlib_to_plotly(cmap, pl_entries):
h = 1.0/(pl_entries-1)
pl_colorscale = []
for k in range(pl_entries):
C = list(map(np.uint8, np.array(cmap(k*h)[:3])*255))
pl_colorscale.append([k*h, 'rgb'+str((C[0], C[1], C[2]))])
return pl_colorscale
print("Compute unstructured hierarchical clustering...")
st = time.time()
ward = AgglomerativeClustering(n_clusters=6, linkage='ward').fit(X)
elapsed_time = time.time() - st
label = ward.labels_
print("Elapsed time: %.2fs" % elapsed_time)
print("Number of points: %i" % label.size)
# +
color = matplotlib_to_plotly(plt.cm.jet, 6)
data = [ ]
for l in np.unique(label):
trace = go.Scatter3d(x=X[label == l, 0],
y=X[label == l, 1],
z=X[label == l, 2],
mode='markers',
showlegend = False,
marker=dict( color= color[l][1],
line= dict(color='black', width=1)
))
data.append(trace)
layout = go.Layout(height = 600,
title = 'Without connectivity constraints (time %.2fs)' % elapsed_time,
scene = dict(
xaxis = dict(
backgroundcolor="rgb(233, 233, 233)",
showbackground=True),
yaxis = dict(
backgroundcolor="rgb(233, 233, 233)",
showbackground=True,),
zaxis = dict(
backgroundcolor="rgb(233, 233, 233)",
showbackground=True)),
margin=dict(
l=0, r=0,
b=0, t=50)
)
fig = go.Figure(data=data, layout = layout)
plotly.offline.iplot(fig)
# -
from sklearn.neighbors import kneighbors_graph
connectivity = kneighbors_graph(X, n_neighbors=10, include_self=False)
print("Compute structured hierarchical clustering...")
st = time.time()
ward = AgglomerativeClustering(n_clusters=6, connectivity=connectivity,
linkage='ward').fit(X)
elapsed_time = time.time() - st
label = ward.labels_
print("Elapsed time: %.2fs" % elapsed_time)
print("Number of points: %i" % label.size)
# +
color = matplotlib_to_plotly(plt.cm.jet, 6)
data = [ ]
for l in np.unique(label):
trace = go.Scatter3d(x=X[label == l, 0],
y=X[label == l, 1],
z=X[label == l, 2],
mode='markers',
showlegend = False,
marker=dict( color= color[l][1],
line= dict(color='black', width=1)
))
data.append(trace)
layout = go.Layout(height = 600,
title = 'With connectivity constraints (time %.2fs)' % elapsed_time,
scene = dict(
xaxis = dict(
backgroundcolor="rgb(233, 233, 233)",
showbackground=True),
yaxis = dict(
backgroundcolor="rgb(233, 233, 233)",
showbackground=True,),
zaxis = dict(
backgroundcolor="rgb(233, 233, 233)",
showbackground=True)),
margin=dict(
l=0, r=0,
b=0, t=50)
)
fig = go.Figure(data=data, layout = layout)
plotly.offline.iplot(fig)
# +
import plotly
import plotly.graph_objs as go
plotly.offline.init_notebook_mode(connected=True)
plotly.offline.iplot({
"data": [go.Scatter(x=[1, 2, 3, 4], y=[4, 3, 2, 1])],
"layout": go.Layout(title="hello world")
})
# -
| handson-ml/Swiss.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # A simple tool for quantum creativity
# One of the simplest things we can do with a quantum computer is to create interference effects. Here we will make some simple tools to help us do this.
#
# We will be making these tools for MicroQiskit. For a guide that uses Qiskit, see [here](https://github.com/quantumjim/blog/blob/master/Quantum_Procedural_Generation/3_FewQubit.ipynb).
from microqiskit import QuantumCircuit, simulate
from math import pi, ceil, log, sqrt
# Lots of different types of information can be nicely expressed as a list of numbers, such as the terrain in a 2D platform game, which could be described by a list of heights.
#
# For example, the following list could describe terrain that starts off at a height 0.5, slowly rises up to 1.0 and then quickly goes back down to 0.5.
height = [0.5,0.63,0.77,1,0.75,0.5]
# In this list, two neighbouring entries correspond to two neighbouring points. So we would expect that neighbouring entries will probably be somewhat similar. And if we change the list in some way, it would be good to maintain the similarity between neighbouring points.
#
# Another example could be the volume of a piece of music over time, perhaps expressed as a list of the volumes for each beat. Or it could be a list of brightnesses for a line of pixels. We'll stick with the height example for now, but everything that follows is valid for any other example as well.
#
# What we are going to do is take a list of numbers and encode it in a quantum state. We'll then manipulate the state in order to make fun things happen.
#
# A quantum state is essentially described by a list of bit strings which each have a corresponding number. The bit strings are the possible outputs that you could get when measuring the state. The corresponding numbers are known as the amplitudes, and can be used to calculate the probabilities for the bit strings. Here we will ignore any other information encoded in these amplitudes, and just focus on the probabilities.
#
# Our encoding of a list into a quantum state will then be as follows:
# * We assign a bit string to each position in our list;
# * Each number in our list is used as the probability for the bit string at that position.
#
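# As a tiny worked illustration (a hypothetical two-entry list, not one from this notebook): with values that already sum to 1, each entry is used directly as the probability of its one-bit string, and the amplitudes are the square roots of those probabilities.

```python
from math import sqrt

height = [0.25, 0.75]                     # toy two-entry list, already summing to 1
probs = {'0': height[0], '1': height[1]}  # probability for each bit string
amps = {b: sqrt(p) for b, p in probs.items()}  # real, non-negative amplitudes

print(amps['0'])  # 0.5
```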
# This process has a few technicalities. The first is choosing how to assign bit strings to positions in the list. For this, we will want to respect the notion of neighbouring entries, as described above.
#
# This will be most relevant when manipulating the state. We will mostly be using single qubit or controlled two qubit operations for these manipulations. The effect of these on simple bit strings is that they can only change a single bit value. For example, they can turn a `0000` into `0100` or `0001`, or turn `0110` into `1110`, and so on. Given this behaviour, we can think of bit strings as being neighbours if they differ on only one bit. And then we can make sure to assign neighbouring bit strings to neighbouring positions.
#
# This is done by the following function. It creates a list of bit strings of a given length. The number of bits is determined by the length of the list, to ensure that each bit string is used only once. The list is ordered such that neighbouring entries always differ on only a single bit.
def make_line ( length ):
# determine the number of bits required for at least `length` bit strings
n = int(ceil(log(length)/log(2)))
# start with the basic list of bit values
line = ['0','1']
# each application of the following process doubles the length of the list,
# and of the bit strings it contains
for j in range(n-1):
# this is done in each step by first appending a reverse-ordered version of the current list
line = line + line[::-1]
# then adding a '0' onto the end of all bit strings in the first half
for j in range(int(len(line)/2)):
line[j] += '0'
# and a '1' onto the end of all bit strings in the second half
for j in range(int(len(line)/2),int(len(line))):
line[j] += '1'
return line
# For example, here is a line long enough to encode the `height` list given as an example above.
line = make_line(6)
print(line)
# Note that here we requested a line of length 6, but a line of length 8 was given instead. This is because there are $2^n$ possible values for $n$-bit strings, and so the length of these lines will always be a power of 2. Since 2-bit strings would only cover lists of up to four entries, 3-bit strings are required to cover a list of 6 entries.
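# We can check the neighbouring-strings property directly. The snippet below repeats the `make_line` definition from above so it stands alone, then verifies that every pair of neighbouring bit strings differs in exactly one bit:

```python
from math import ceil, log

def make_line(length):
    # same algorithm as above: a Gray-code-style ordering of bit strings
    n = int(ceil(log(length) / log(2)))
    line = ['0', '1']
    for _ in range(n - 1):
        line = line + line[::-1]
        for k in range(len(line) // 2):
            line[k] += '0'
        for k in range(len(line) // 2, len(line)):
            line[k] += '1'
    return line

line = make_line(6)
# Hamming distance between each pair of neighbouring strings
diffs = [sum(a != b for a, b in zip(s, t)) for s, t in zip(line, line[1:])]
print(diffs)  # [1, 1, 1, 1, 1, 1, 1]
```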
#
# Now we have the bit strings, the next technicality is that of normalization. The trick of taking each number in the list to be a probability only works when those numbers are all non-negative, and when they all sum up to 1. To fix this we'll:
# * Only use lists of non-negative numbers (okay, I admit, this is not much of a fix);
# * Normalize the numbers to sum up to 1.
#
# This is done in the following function, which takes a list of numbers and creates a corresponding quantum circuit. In this circuit, the qubits are prepared in a state that encodes the list of numbers.
def height_to_circuit( height ):
line = make_line( len(height) )
n = int(ceil(log(len(line))/log(2)))
renorm = sqrt(sum(height))
real_vec = [0]*(2**n)
for j,h in enumerate(height):
real_vec[int(line[j],2)] = sqrt(h)/renorm
qc = QuantumCircuit(n)
qc.initialize( real_vec )
return qc
# To get the list back out again, we use the following function.
#
# Note that this post-processing needs to somehow undo the normalization. However, the normalization will cause us to forget what the maximum value of the list was. We therefore simply assume that the maximum value was 1, and unnormalize accordingly.
#
# If you didn't understand all this talk about normalization, the important fact is that the lists we get out will always have 1 as the maximum value.
def circuit_to_height( qc ):
n = qc._n
line = make_line( 2**n )
real_vec = simulate(qc,get='statevector')
height = [0]*(2**n)
for j,amp in enumerate(real_vec):
string = "{0:b}".format(j)
string = '0'*(n-len(string)) + string
k = line.index(string)
height[k] = amp[0]**2
max_prob = max(height)
for j,h in enumerate(height):
height[j] = h/max_prob
return height
# Now let's try it out. We already have a list of heights, we just need to encode them into a quantum circuit.
qc = height_to_circuit(height)
# We can then pull them back out and see what happened.
circuit_to_height(qc)
# They come out pretty much as they went in, which is nice.
#
# Now let's use some quantum operations to induce an interference effect, which will change the heights. For this we need to know how many qubits we have in our circuit, which is simply the length of the bit strings used.
n = len(line[0])
# A simple effect is to apply a single qubit `ry` operation to each qubit by some angle `theta`. For `theta=0` we'd see no effect, whereas `theta=pi/2` would be quite drastic. We'll look at a relatively small but nevertheless non-trivial effect.
# +
theta = pi/16
for j in range(n):
qc.ry(theta,j)
# -
# To take a look at what it did to the heights, we just use the appropriate function.
circuit_to_height(qc)
# And that's it. A very simple, quantum interference effect list manipulator thingy. Have fun!
| versions/MicroPython/tutorials/QuantumLine.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # RCS Sets
#
# ## Sets
#
# * unordered
# * uniques only
# * curly braces {3, 6, 7}
s={3,3,6,1,3,6,7, "Valdis", "Voldemars" , "Valdis"}
print(s)
a = set("ķiļķēni un klimpas")
a
sorted(list(a))
myletters = list(a)
myletters
myletters[:3]
al = list(a)
al
sorted(al)
s={3,3,6,1,3,6,7}
print(s)
nset = set(range(10))
nset
type(s)
a = set(range(20))
a
b = set(range(5,9))
b
c = set(range(7,12))
c
a.issubset(b)
a.issuperset(b)
b.issubset(a)
c.issubset(a)
print(b)
b.add(8)
print(b)
b.add(15)
b
# +
# b.clear() clears all elements from the set
# -
print(b,c)
b.difference(c)
c.difference(b)
# elements without matches, i.e. in only one of the two sets
b.symmetric_difference(c)
b.intersection(c)
c.intersection(b)
a.intersection(b)
d = {55,77}
a.isdisjoint(d)
b.isdisjoint(c)
# same as b.isdisjoint(c)
len(b.intersection(c)) == 0
b.union(c)
c.union(b)
a.union(b)
e = b.union(d)
e
print(e)
e.update(range(10), {5,77,99}, (53,66), [21,22,66])
print(e)
66 in e
100 in e
for el in e:
# order not guaranteed here
print(el, end=" - ")
for el in sorted(list(e)):
print(f"My element is {el}")
print(e)
e.remove(66)
66 in e
e.pop()
mylist = list(range(10))
mylist
import random
random.choice(mylist)
myset = set(mylist[5:])
myset.update(mylist[:7])
myset
random.choice(list(myset))
tempvar = myset.pop()
# do work with tempvar
tempvar
s2={3,6,76,2,8,8}
s2
s
s.difference(s2)
s.intersection(s2)
s2.intersection(s)
sunion = s.union(s2)
sunion
s2.union(s)
s,s2
s.difference(s2)
s3 = {2, 8, 9}
s3
s2.difference(s, s3)
s.clear()
s
# we can update with many different data types
s.update({3,3,6,2,7,9},range(4,15), [3,6,7,"Valdis", "Badac"])
s
dir(s)
# we can check if our set has anything in common with another data structure
s.isdisjoint([-3,-99])
s.isdisjoint((-22,-222,"HMMMM"))
s.issuperset({1,6})
s.issubset(set(range(100)))
s, s2
s
s2
s.symmetric_difference(s2)
s2.symmetric_difference(s)
# Returns elements that are in exactly one set (out of two)
s.symmetric_difference(s2)
s, s2
s3=s.union(s2)
s3
s, s2
# Leaves s4 with those elements from s which are NOT in s2
s4=s.difference(s2)
s4
s2.difference(s)
# the middle of Venn diagram, that is elements in BOTH sets
s.intersection(s2)
s2.intersection(s)
# returns the union
s3 = s.union(s2)
s3
# in-place update: similar to above, but we update s rather than returning a new set
s.update(s2)
s
# sets do not support the + operator; use union() instead
# s3 = s + s2 would raise a TypeError
s3 = s.union(s2)
s3
set(range(10))
# +
# More Practice with Sets
# https://www.hackerearth.com/practice/python/working-with-data/set/tutorial/
# -
s3
11 in s3
'Valdis' in s3
for el in s3:
print(el)
import random
random.seed(42)
myrandoms = [random.randint(1,200) for _ in range(100)]
myrandoms
myuniques = set(myrandoms)
len(myuniques), len(myrandoms)
stringmethods = list(dir(str))
len(stringmethods)
listmethods = list(dir(list))
len(listmethods)
uniq_string_methods = set(stringmethods)
len(uniq_string_methods)
uniq_list_methods = set(listmethods)
len(uniq_list_methods)
common_methods = uniq_list_methods.intersection(uniq_string_methods)
common_methods
# we filter for only regular methods
mycommonmethods = [el for el in common_methods if '__' not in el]
mycommonmethods
| Python_Core/Python Sets_in_class_01.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# # Software Development Environments
# ## tl;dr
#
# * Integrated Development Environments (IDEs): we recommend you use one for software development ✔
# * Jupyter notebooks: great for tutorials and as a playground for getting familiar with code, but not great for software engineering 🚸
# * plain text editors: try to avoid, although sometimes you have to use one ⛔
# ## Integrated Development Environments (IDEs)
#
# * All-in-one that comes with many features to help you code
# * We think it is worth the (small) effort to learn to use one
# * The two leading IDEs for Python are VS Code and PyCharm
# * We will be demoing useful features in VS Code throughout the week
# * Demo:
# * VS Code workspace introduction
# * Autocomplete
# * Static error checking (linting)
# * Git integration
# * SSH integration
# ## Jupyter Notebooks
#
# * Combination of code cells and Markdown text cells makes it useful for writing tutorials
# * Running/rerunning one cell at a time allows you to play around with the code/understand how it works
# * Useful for plotting, solving a problem (think of it as a document)
# * Output depends on order cells are run/rerun -> not good for repeatability
# * Not designed for programming a software package
# ## Plain text editors
#
# * The "old-school" editors (e.g., vim, emacs, nano, notepad++)
# * We generally recommend you avoid doing large amounts of programming in them as the code is prone to bugs
# * Sometimes inevitable in astronomy so it is good to learn a little bit of either vim or emacs
# * You can use VS Code over ssh, so you should not need to use these very often!
# # Object-Oriented and Functional Programming
# ## tl;dr
#
# * Object-oriented programming relies on the state of variables to determine the output
# * Good to keep track of something that is changing (e.g., the number of people in a Zoom meeting)
# * Functional programming relies solely on the inputs, which do not change, to determine the output
# * Good for math equations (e.g., computing the inverse of a matrix)
# * Typically, you will want a mix of both programming paradigms
# ## Object-Oriented Programming
#
# 
# ## Classes
#
# * Classes organize variables and functions into a single object
# * Objects can be used to track state - useful model for many things in the world
# * Refer to the diagnostic notebook for the basics on class and superclass syntax
#
# ### Activity
#
# * Finish the following free fall gravity simulator. Use your simulation to determine how long it takes for a particle to fall to the ground from a height of 10 meters. We will poll everyone on what result they get.
# * Bonus activity: In the future, we want particles that experience other forces and move in 3D. Write a `Particle` superclass that the `FreeFallParticle` is a subclass of. What fields go into the `Particle` class?
# +
# Object-Oriented Programming
import astropy.units as u
class FreeFallParticle(object):
"""
Simulate a particle falling due to Earth's gravity. Particle is stationary at first
Args:
height (float): a height in meters
dt (float): timestep of the simulation in seconds
"""
def __init__(self, height, dt=0.1):
"""
Function that is run to initialize the class
"""
self.height = height * u.m # current height
self.velocity = 0 * u.m/u.s # current velocity
self.time = 0 * u.s # time elapsed
self.dt = dt * u.s # timestep of the simulation
self.g = -9.8 * u.m/u.s**2 # gravitational acceleration (Don't change)
def get_num_steps_run(self):
"""
Function that returns the number of timesteps that have run by comparing self.time with self.dt
Returns:
num_steps (int): number of time steps already completed in the simulation
"""
num_steps = int(self.time / self.dt)
return num_steps
##### Activity ######
"""
Add functionality to advance the particle's height by one time step at a time. (hint: implement the function below).
Then use this code to calculate how long it takes for the particle to fall down from a height of 10 meters.
Some useful equations for how to calculate the particle's new state at the next time step.
Pseudo code below:
acceleration = g
new_velocity = current_velocity + acceleration * dt
new_height = current_height + new_velocity * dt
Add inputs and outputs.
"""
def simulate_timestep(self):
"""
Advance the simulation time by a single timestep (self.dt).
Update the simulation with the new time, height, and velocity
Returns:
height (float): the current height in meters
"""
return 0. # currently does nothing
# -
# Here's how you could call this function
ball = FreeFallParticle(10) # start out a 10 m above the ground
print(ball.time, ball.height)
ball.simulate_timestep()
print(ball.time, ball.height) # time should move forward by 0.1 seconds
# ## Object Oriented Programming
#
# * Code structured around objects
# * Depends on changing/"mutable" state of the object (e.g., `self.height`, `self.velocity`, etc.)
# * Most things in the world change, so it makes sense to frame things in this way
# * We recommend identifying entities that should become objects and program around this
# * For example, particles in a simulation can be grouped together in a class
#
# Some more subtle things to consider when using classes
#
# * Creating an object can be slow. Too many object creations can slow down code
# * Could be prone to bugs since function outputs depend on both inputs and the current state of the object
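# A minimal illustration of that last point, using a hypothetical counter class (not part of the activity above): the same call returns different results depending on accumulated state.

```python
class ClickCounter(object):
    """Tracks how many times click() has been called."""
    def __init__(self):
        self.count = 0

    def click(self):
        # output depends on mutable state, not just the (empty) argument list
        self.count += 1
        return self.count

c = ClickCounter()
print(c.click())  # 1
print(c.click())  # 2  <- identical call, different result
```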
# ## Functional Programming
#
# 
# ## Functional Programming
#
# * Key paradigm: function outputs depend solely on the inputs
# * Easier to guarantee correctness
# * More messy to track changing state of things
# * Functional programming != no objects. Objects however are static data structures.
# * You need to create a new object if you want to change an object
# * Useful for math problems, physics equations, unit conversions
# * `import astropy.units as u; u.m.to(u.nm)`
# ## Object Oriented vs Functional Programming
#
# * Object oriented programming is good when things change (e.g., the position of a planet, the current image being analyzed)
# * Functional programming is good for deterministic things (e.g., math equations, making sure you do not accidentally apply the same function twice)
# * Most packages use both
# +
# Functional Programming Example.
class Particle(object):
"""
A particle with a given height and vertical instantaneous velocity
Args:
height (float): height of the object currently in meters
velocity (float): velocity of the object in meters. Default is 0 (at rest)
"""
def __init__(self, height, velocity=0):
self.height = height * u.m
self.velocity = velocity * u.m/u.s
def freefall_timestep(thing, dt=0.1):
"""
Simulate free fall of the particle for a small time step
Args:
thing (Particle): the current position and velocity of the particle
dt (float): optional float that specifies the timestep in seconds
Returns:
new_thing (Particle): the updated position and velocity of the particle
"""
dt_units = dt * u.s
new_velocity = thing.velocity + -9.8 * u.m / u.s**2 * dt_units
new_height = thing.height + new_velocity * dt_units
new_thing = Particle(new_height.value, new_velocity.value)
return new_thing
ball = Particle(1) # start a ball at 1 m
ball_states = [ball]
print(0 * u.s, ball.height)
dt = 0.1
time = 0
for i in range(5):
new_ball = freefall_timestep(ball_states[-1], dt)
ball_states = ball_states + [new_ball,]
time += dt
print(time * u.s, new_ball.height)
# Running the function with the same inputs will return the same result
# This generally would not happen with object oriented programming
# When is this good or bad?
output_ball_1 = freefall_timestep(ball, dt)
output_ball_2 = freefall_timestep(ball, dt)
print("Are these the same?", output_ball_1.height, output_ball_2.height)
# -
# ## Bonus Activity
#
# Write a function that returns the history of heights that the object was at (and their corresponding times). For example, if the object was at `height = 1` at `time = 0`, `height = 0.902` at `t = 0.1`, and `height = 0.706` at `t = 0.2`, the function should return `[1, 0.902, 0.706]` for the heights and `[0, 0.1, 0.2]` for the corresponding times. Choose to implement it either in the object oriented or functional framework we provided. If you have time, try the other one too!
#
| Day1/code_developing.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## Fourth Assignment
# #### 1) Suppose that in this term you have only two courses: Calculus II and Programming Languages. Create a function **grades()** that receives two tuples (or lists) of three elements representing the grades of the assignments A1, A2 and A3 in that order. The first tuple (or list) refers to the grades of Calculus II and the second to the grades of Programming Languages. Return a dictionary whose keys are the names of the course and the values are another dict with keys 'A1', 'A2' and 'A3' and values being the corresponding grades. See example below.
#
# >```python
# >>>> grades((4,8,7),(9,8,0))
# >{'Calculus II': {'A1': 4, 'A2': 8, 'A3': 7}, 'Programming Languages': {'A1': 9, 'A2': 8, 'A3': 0}}
# ```
# #### 2) Create the function **sorts()**, which takes as a single argument a dictionary **d**, and sorts that dictionary based on its values, returning the ordered dictionary. Test cases will not have equal values. See examples:
#
# >```python
# >>>> sorts({1: 2, 3: 4, 4: 3, 2: 1, 0: 0})
# >{0: 0, 2: 1, 1: 2, 4: 3, 3: 4}
# >
# >>>> sorts({"fish":1, "chicken":10, "beef":5, "pork":8})
# >{"fish":1, "cow":5, "pork":8, "chicken":10}
# ```
# #### 3) Create the function **concatenate()** so that it receives as arguments an indefinite number of dictionaries. Hence, concatenate them in the given order, generating a single dictionary that must be returned as a function. Test cases will not have keys in common. See the examples:
#
# >```python
# >>>> concatenate({1:'a',3:'c'},{2:'b',4:'d'},{5:'e',6:'f'})
# >{1:'a',3:'c',2:'b',4:'d',5:'e',6:'f'}
# >
# >>>> concatenate({'a':1,'b':2,'e':5},{'d':4,'c':3,'f':6})
# >{'a':1,'b':2,'e':5,'d':4,'c':3,'f':6}
# ```
# #### 4) Create a class called **Triangle**, whose constructor receives the measurements from its three sides (in any order). Implement a method called `type`, which returns:
# - 0 if the measurements do not form a triangle
# - 1 if the triangle is equilateral
# - 2 if the triangle is isosceles
# - 3 if the triangle is scalene
#
# See the example:
#
# >```python
# >>>> tri = Triangle(5,5,5) #Equilateral
# >>>> tri.type()
# >1
# ```
# #### 5) Create the `Point` class representing a point object in the Cartesian plane, with methods `show()`, `move()` and `dist()`. The first must return a tuple with the coordinates of the point. The second must take as arguments the displacements in each of the axes and update the position of the object. The latter, on the other hand, must take another point as an argument and return the Euclidean distance between the two points.
#
# #### Obs: The unitary tests for `dist()` will accept a margin of error of 0.1. Do not change the existing lines in this question and do not create or remove functions or classes. With the class properly implemented, it should be possible to execute the following series of commands:
#
# >```python
# >>>> p1 = Point(2, 3)
# >>>> p2 = Point(3, 3)
# >>>> p1.show()
# >(2, 3)
# >>>> p2.show()
# >(3, 3)
# >>>> p1.move(10, -10)
# >>>> p1.show()
# >(12, -7)
# >>>> p2.show()
# >(3, 3)
# >>>> p1.dist(p2)
# >13.45362404707371
# ```
# #### 6) Create two classes `employee` and `manager` so that manager is a subclass of employee. The employee attributes are name, ssid, salary and department. In addition to these, the manager class must also include the password and the number of employees he/she manages. Make sure your constructors are consistent. In addition, the employee class must have the method `bonus()`, which does not receive parameters and increases the employee's salary by 10%. The manager class must have the methods `authenticate_password(password)`, which returns a Boolean resulting from the validation of the password against the entry, and the method `bonus()`, which increases your salary by 15%. Do not change the existing lines in this question and do not create or remove functions or classes.
#
# ```python
# >>> f1=employee("John",12345678900,2500,"TI")
# >>> f2=employee("Paul",12345678901,1800,"TI")
# >>> f3=manager("Marta",23456789012,6000,"TI",101101,2)
# >>> f1.name()
# John
# >>> f2.ssid()
# 12345678901
# >>> f3.department()
# TI
# >>> f2.bonus()
# >>> f2.salary()
# 1980.00
# >>> f3.bonus()
# 6900.00
# >>> f3.authenticate_password(<PASSWORD>)
# True
# >>> f3.authenticate_password(<PASSWORD>)
# False
# ```
# ## Challenge
#
# #### 7) There is a file called "alice.txt", which is the first chapter of the book Alice in Wonderland. The text has already been properly cleaned, punctuation was removed, as well as special characters and unnecessary spacing. There is a semi-ready function that reads the file and loads the text into the string-type variable called "alice"; you have to modify this function to return a dictionary whose keys are the unique words in the text, and the values are the number of times each word is repeated in the chapter (frequency distribution) - do not use the method collections.Counter.
#
# #### Extra: Try to discover the top 10 most used words. See the image below to get an idea of the answer (The bigger the word, the more often it is repeated).
#
# 
#
# Book: http://www.gutenberg.org/files/11/11-0.txt
#
# Image: https://pypi.org/project/wordcloud/
def read_text():
with open('../Data/TXT/alice.txt','r') as f:
alice = f.read()
| Assigments/Assignment_4.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python [default]
# language: python
# name: python3
# ---
# # Comparing TensorFlow (original) and PyTorch models
#
# You can use this small notebook to check the conversion of the model's weights from the TensorFlow model to the PyTorch model. In the following, we compare the weights of the last layer on a simple example (in `input.txt`) but both models return all the hidden layers so you can check every stage of the model.
#
# To run this notebook, follow these instructions:
# - make sure that your Python environment has both TensorFlow and PyTorch installed,
# - download the original TensorFlow implementation,
# - download a pre-trained TensorFlow model as indicated in the TensorFlow implementation readme,
# - run the script `convert_tf_checkpoint_to_pytorch.py` as indicated in the `README` to convert the pre-trained TensorFlow model to PyTorch.
#
# If needed, change the relative paths indicated in this notebook (at the beginning of Sections 1 and 2) to point to the relevant models and code.
import os
os.chdir('../')
# ## 1/ TensorFlow code
# +
original_tf_inplem_dir = "./tensorflow_code/"
model_dir = "../google_models/uncased_L-12_H-768_A-12/"
vocab_file = model_dir + "vocab.txt"
bert_config_file = model_dir + "bert_config.json"
init_checkpoint = model_dir + "bert_model.ckpt"
input_file = "./samples/input.txt"
max_seq_length = 128
# +
import importlib.util
import sys
spec = importlib.util.spec_from_file_location('*', original_tf_inplem_dir + '/extract_features_tensorflow.py')
module = importlib.util.module_from_spec(spec)
spec.loader.exec_module(module)
sys.modules['extract_features_tensorflow'] = module
from extract_features_tensorflow import *
# +
layer_indexes = list(range(12))
bert_config = modeling.BertConfig.from_json_file(bert_config_file)
tokenizer = tokenization.FullTokenizer(
vocab_file=vocab_file, do_lower_case=True)
examples = read_examples(input_file)
features = convert_examples_to_features(
examples=examples, seq_length=max_seq_length, tokenizer=tokenizer)
unique_id_to_feature = {}
for feature in features:
    unique_id_to_feature[feature.unique_id] = feature
# +
is_per_host = tf.contrib.tpu.InputPipelineConfig.PER_HOST_V2
run_config = tf.contrib.tpu.RunConfig(
master=None,
tpu_config=tf.contrib.tpu.TPUConfig(
num_shards=1,
per_host_input_for_training=is_per_host))
model_fn = model_fn_builder(
bert_config=bert_config,
init_checkpoint=init_checkpoint,
layer_indexes=layer_indexes,
use_tpu=False,
use_one_hot_embeddings=False)
# If TPU is not available, this will fall back to normal Estimator on CPU
# or GPU.
estimator = tf.contrib.tpu.TPUEstimator(
use_tpu=False,
model_fn=model_fn,
config=run_config,
predict_batch_size=1)
input_fn = input_fn_builder(
features=features, seq_length=max_seq_length)
# -
tensorflow_all_out = []
for result in estimator.predict(input_fn, yield_single_examples=True):
    unique_id = int(result["unique_id"])
    feature = unique_id_to_feature[unique_id]
    output_json = collections.OrderedDict()
    output_json["linex_index"] = unique_id
    tensorflow_all_out_features = []
    # for (i, token) in enumerate(feature.tokens):
    all_layers = []
    for (j, layer_index) in enumerate(layer_indexes):
        print("extracting layer {}".format(j))
        layer_output = result["layer_output_%d" % j]
        layers = collections.OrderedDict()
        layers["index"] = layer_index
        layers["values"] = layer_output
        all_layers.append(layers)
    tensorflow_out_features = collections.OrderedDict()
    tensorflow_out_features["layers"] = all_layers
    tensorflow_all_out_features.append(tensorflow_out_features)
    output_json["features"] = tensorflow_all_out_features
    tensorflow_all_out.append(output_json)
print(len(tensorflow_all_out))
print(len(tensorflow_all_out[0]))
print(tensorflow_all_out[0].keys())
print("number of tokens", len(tensorflow_all_out[0]['features']))
print("number of layers", len(tensorflow_all_out[0]['features'][0]['layers']))
tensorflow_all_out[0]['features'][0]['layers'][0]['values'].shape
tensorflow_outputs = list(tensorflow_all_out[0]['features'][0]['layers'][t]['values'] for t in layer_indexes)
# ## 2/ PyTorch code
os.chdir('./examples')
import extract_features
import pytorch_pretrained_bert as ppb
from extract_features import *
init_checkpoint_pt = "../../google_models/uncased_L-12_H-768_A-12/"
device = torch.device("cpu")
model = ppb.BertModel.from_pretrained(init_checkpoint_pt)
model.to(device)
# + code_folding=[]
all_input_ids = torch.tensor([f.input_ids for f in features], dtype=torch.long)
all_input_mask = torch.tensor([f.input_mask for f in features], dtype=torch.long)
all_input_type_ids = torch.tensor([f.input_type_ids for f in features], dtype=torch.long)
all_example_index = torch.arange(all_input_ids.size(0), dtype=torch.long)
eval_data = TensorDataset(all_input_ids, all_input_mask, all_input_type_ids, all_example_index)
eval_sampler = SequentialSampler(eval_data)
eval_dataloader = DataLoader(eval_data, sampler=eval_sampler, batch_size=1)
model.eval()
# +
layer_indexes = list(range(12))
pytorch_all_out = []
for input_ids, input_mask, input_type_ids, example_indices in eval_dataloader:
    print(input_ids)
    print(input_mask)
    print(example_indices)
    input_ids = input_ids.to(device)
    input_mask = input_mask.to(device)
    all_encoder_layers, _ = model(input_ids, token_type_ids=input_type_ids, attention_mask=input_mask)
    for b, example_index in enumerate(example_indices):
        feature = features[example_index.item()]
        unique_id = int(feature.unique_id)
        # feature = unique_id_to_feature[unique_id]
        output_json = collections.OrderedDict()
        output_json["linex_index"] = unique_id
        all_out_features = []
        # for (i, token) in enumerate(feature.tokens):
        all_layers = []
        for (j, layer_index) in enumerate(layer_indexes):
            print("layer", j, layer_index)
            layer_output = all_encoder_layers[int(layer_index)].detach().cpu().numpy()
            layer_output = layer_output[b]
            layers = collections.OrderedDict()
            layers["index"] = layer_index
            layers["values"] = layer_output if not isinstance(layer_output, (int, float)) else [layer_output]
            all_layers.append(layers)
        out_features = collections.OrderedDict()
        out_features["layers"] = all_layers
        all_out_features.append(out_features)
        output_json["features"] = all_out_features
        pytorch_all_out.append(output_json)
# -
print(len(pytorch_all_out))
print(len(pytorch_all_out[0]))
print(pytorch_all_out[0].keys())
print("number of tokens", len(pytorch_all_out))
print("number of layers", len(pytorch_all_out[0]['features'][0]['layers']))
print("hidden_size", len(pytorch_all_out[0]['features'][0]['layers'][0]['values']))
pytorch_all_out[0]['features'][0]['layers'][0]['values'].shape
pytorch_outputs = list(pytorch_all_out[0]['features'][0]['layers'][t]['values'] for t in layer_indexes)
print(pytorch_outputs[0].shape)
print(pytorch_outputs[1].shape)
print(tensorflow_outputs[0].shape)
print(tensorflow_outputs[1].shape)
# ## 3/ Comparing the standard deviation on the last layer of both models
import numpy as np
print('shape tensorflow layer, shape pytorch layer, standard deviation')
print('\n'.join(list(str((np.array(tensorflow_outputs[i]).shape,
np.array(pytorch_outputs[i]).shape,
np.sqrt(np.mean((np.array(tensorflow_outputs[i]) - np.array(pytorch_outputs[i]))**2.0)))) for i in range(12))))
| notebooks/Comparing-TF-and-PT-models.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: FinRL_Library_KIT
# language: python
# name: finrl_library_kit
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/AI4Finance-LLC/FinRL-Library/blob/master/Crypto_Binance_Historical_Data.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + [markdown] id="QYSfSKiBZeJf"
# # Fetch historical data
# + [markdown] id="YC1eCkAPZeJh"
# Python script to fetch historical data from binance using ccxt
# + colab={"base_uri": "https://localhost:8080/", "height": 1000} id="ft42NOrgZeJh" outputId="bff73263-dbba-44c5-d1be-61194c59cf95"
# Install openpyxl and CCXT
# !pip install openpyxl ccxt
# + id="WyylIG0mZeJi"
import os
from pathlib import Path
import sys
import csv
# -----------------------------------------------------------------------------
root = os.path.dirname(os.path.dirname(os.path.dirname(os.path.abspath(''))))
sys.path.append(root + '/python')
import ccxt
# -----------------------------------------------------------------------------
def retry_fetch_ohlcv(exchange, max_retries, symbol, timeframe, since, limit):
    # Retry the fetch up to max_retries times before giving up
    num_retries = 0
    while True:
        try:
            num_retries += 1
            ohlcv = exchange.fetch_ohlcv(symbol, timeframe, since, limit)
            # print('Fetched', len(ohlcv), symbol, 'candles from', exchange.iso8601(ohlcv[0][0]), 'to', exchange.iso8601(ohlcv[-1][0]))
            return ohlcv
        except Exception:
            if num_retries > max_retries:
                raise  # failed to fetch the OHLCV candles in max_retries attempts
def scrape_ohlcv(exchange, max_retries, symbol, timeframe, since, limit):
    earliest_timestamp = exchange.milliseconds()
    timeframe_duration_in_seconds = exchange.parse_timeframe(timeframe)
    timeframe_duration_in_ms = timeframe_duration_in_seconds * 1000
    timedelta = limit * timeframe_duration_in_ms
    all_ohlcv = []
    while True:
        fetch_since = earliest_timestamp - timedelta
        ohlcv = retry_fetch_ohlcv(exchange, max_retries, symbol, timeframe, fetch_since, limit)
        # if we have reached the beginning of history
        if ohlcv[0][0] >= earliest_timestamp:
            break
        earliest_timestamp = ohlcv[0][0]
        all_ohlcv = ohlcv + all_ohlcv
        print(len(all_ohlcv), symbol, 'candles in total from', exchange.iso8601(all_ohlcv[0][0]), 'to', exchange.iso8601(all_ohlcv[-1][0]))
        # if we have reached the checkpoint
        if fetch_since < since:
            break
    return all_ohlcv
def write_to_csv(filename, exchange, data):
    p = Path("./data/raw/", str(exchange))
    p.mkdir(parents=True, exist_ok=True)
    full_path = p / str(filename)
    with Path(full_path).open('w+', newline='') as output_file:
        csv_writer = csv.writer(output_file, delimiter=',', quotechar='"', quoting=csv.QUOTE_MINIMAL)
        csv_writer.writerows(data)
def scrape_candles_to_csv(filename, exchange_id, max_retries, symbol, timeframe, since, limit):
    # instantiate the exchange by id
    exchange = getattr(ccxt, exchange_id)({
        'enableRateLimit': True,  # required by the Manual
    })
    # convert since from string to milliseconds integer if needed
    if isinstance(since, str):
        since = exchange.parse8601(since)
    # preload all markets from the exchange
    exchange.load_markets()
    # fetch all candles
    ohlcv = scrape_ohlcv(exchange, max_retries, symbol, timeframe, since, limit)
    # save them to csv file
    write_to_csv(filename, exchange, ohlcv)
    print('Saved', len(ohlcv), 'candles from', exchange.iso8601(ohlcv[0][0]), 'to', exchange.iso8601(ohlcv[-1][0]), 'to', filename)
# + colab={"base_uri": "https://localhost:8080/"} id="KptosITqZeJi" outputId="c1a977b3-4a87-40f7-8d59-6a6cc89ecf83"
scrape_candles_to_csv('btc_usdt_1m.csv', 'binance', 3, 'BTC/USDT', '1m', '2019-01-01T00:00:00Z', 1000)
# scrape_candles_to_csv('./data/raw/binance/eth_btc_1m.csv', 'binance', 3, 'ETH/BTC', '1m', '2018-01-01T00:00:00Z', 1000)
# scrape_candles_to_csv('./data/raw/binance/ltc_btc_1m.csv', 'binance', 3, 'LTC/BTC', '1m', '2018-01-01T00:00:00Z', 1000)
# scrape_candles_to_csv('./data/raw/binance/xlm_btc_1m.csv', 'binance', 3, 'XLM/BTC', '1m', '2018-01-01T00:00:00Z', 1000)
# + id="iYbqDa68ZeJj"
| Crypto_Binance_Historical_Data.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## Lesson 6. Nonlinear Models for Classification
# ### <NAME>
# +
import numpy as np
import pandas as pd
import os
import seaborn as sns
import matplotlib.pyplot as plt
# %matplotlib inline
plt.style.use('ggplot')
# -
filename = 'wine.csv'
data = pd.read_csv(filename)
data = data.interpolate()
data.head(6)
array = data.values
X = array[:,1:13]
Y = array[:,0]
# ### k-Nearest Neighbors
# +
from sklearn.model_selection import KFold
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
kfold = KFold(n_splits=10, random_state=7, shuffle=True)
model = KNeighborsClassifier()
results = cross_val_score(model, X, Y, cv=kfold)
print(f'{round(results.mean()*100, 3)} %')
# -
# ### Naive Bayes
# +
from sklearn.naive_bayes import GaussianNB
kfold = KFold(n_splits=10, random_state=7, shuffle=True)
model = GaussianNB()
results = cross_val_score(model, X, Y, cv=kfold)
print(f'{round(results.mean()*100, 3)} %')
# -
# ### Classification and Regression Trees
# +
from sklearn.tree import DecisionTreeClassifier
kfold = KFold(n_splits=10, random_state=7, shuffle=True)
model = DecisionTreeClassifier()
results = cross_val_score(model, X, Y, cv=kfold)
print(f'{round(results.mean()*100, 3)} %')
# -
# Thus, the linear models performed better for this classification task, but Classification and Regression Trees can also be used.
| Marketing Analytics/Nonlinear models.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import pickle
import numpy as np
from PIL import Image
from sklearn.model_selection import train_test_split
import glob
filelist = glob.glob('crowdai/*/*.jpg')
X = np.array([np.array((Image.open(fname)).resize((64,64))) for fname in filelist])
y = np.array([int(''.join(fname[10:12])) for fname in filelist])
from sklearn.utils import shuffle
from sklearn.model_selection import train_test_split
X, y = shuffle(X, y)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.15,random_state=0)
X_train , X_valid , y_train, y_valid = train_test_split(X_train, y_train, test_size=0.15,random_state=0)
# +
n_train = len(X_train)
n_validation = len(X_valid)
n_test = len(X_test)
image_shape = X_train[0].shape
n_classes = len(set(y_valid))
print("Number of training examples =", n_train)
print("Number of testing examples =", n_test)
print("Image data shape =", image_shape)
print("Number of classes =", n_classes)
# +
import numpy as np
X_train=np.average(X_train,axis=3,weights=[0.299,0.587,0.114])
X_valid=np.average(X_valid,axis=3,weights=[0.299,0.587,0.114])
X_test=np.average(X_test,axis=3,weights=[0.299,0.587,0.114])
X_train=(X_train-128)/128
X_valid=(X_valid-128)/128
X_test=(X_test-128)/128
X_train=X_train.reshape(n_train, 64, 64,1)
X_test=X_test.reshape(n_test, 64, 64,1)
X_valid=X_valid.reshape(n_validation, 64, 64,1)
# X_train,X_validation,y_train,y_validation=train_test_split(X_train,y_train,test_size=0.2,random_state=0)
# -
# ### Model Architecture
# +
import tensorflow as tf
import numpy as np
EPOCHS = 20
BATCH_SIZE = 128
from tensorflow.contrib.layers import flatten
def LeNet(x):
    # Hyperparameters for tf.truncated_normal: random initialization of the weights and biases for each layer
    mu = 0
    sigma = 0.1
    #keep_prob=0.6
    # Layer 1: Convolutional. Input = 64x64x1. Output = 60x60x6.
    conv1_W = tf.Variable(tf.truncated_normal([5,5,1,6], mean=mu, stddev=sigma), name="Pholder3")
    conv1_B = tf.Variable(tf.zeros([6]), name="Pholder4")
    conv1 = tf.nn.conv2d(x, conv1_W, [1,1,1,1], 'VALID', name='tagget_conv') + conv1_B
    conv1 = tf.nn.relu(conv1, name="Pholder7")
    # Pooling. Input = 60x60x6. Output = 30x30x6.
    conv1 = tf.nn.max_pool(conv1, [1,2,2,1], [1,2,2,1], 'VALID', name="Pholder6")
    # Layer 2: Convolutional. Output = 26x26x16.
    conv2_W = tf.Variable(tf.truncated_normal([5,5,6,16], mean=mu, stddev=sigma, name="Pholder8"), name="Pholder5")
    conv2_B = tf.Variable(tf.zeros([16]), name="Pholder9")
    conv2 = tf.nn.conv2d(conv1, conv2_W, [1,1,1,1], 'VALID', name="Pholder10") + conv2_B
    conv2 = tf.nn.relu(conv2)
    # Pooling. Input = 26x26x16. Output = 13x13x16.
    conv2 = tf.nn.max_pool(conv2, [1,2,2,1], [1,2,2,1], 'VALID')
    # Flatten. Input = 13x13x16. Output = 2704.
    fc0 = flatten(conv2)
    fc1_W = tf.Variable(tf.truncated_normal(shape=(2704,120), mean=mu, stddev=sigma))
    fc1_B = tf.Variable(tf.zeros(120))
    fc1 = tf.matmul(fc0, fc1_W) + fc1_B
    fc1 = tf.nn.relu(fc1)
    fc1 = tf.nn.dropout(fc1, keep_prob)
    fc2_W = tf.Variable(tf.truncated_normal(shape=(120,84), mean=mu, stddev=sigma))
    fc2_B = tf.Variable(tf.zeros(84))
    fc2 = tf.matmul(fc1, fc2_W) + fc2_B
    fc2 = tf.nn.relu(fc2)
    fc2 = tf.nn.dropout(fc2, keep_prob)
    fc3_W = tf.Variable(tf.truncated_normal(shape=(84,38), mean=mu, stddev=sigma))
    fc3_B = tf.Variable(tf.zeros(38))
    logits = tf.matmul(fc2, fc3_W, name="logit") + fc3_B
    print(logits)
    return logits
# -
# ### Train, Validate and Test the Model
# +
x = tf.placeholder(tf.float32, (None, 64, 64, 1), name="Pholder0")
y = tf.placeholder(tf.int32, (None))
one_hot_y = tf.one_hot(y, 38)
keep_prob=tf.placeholder(tf.float32,name="Pholder11")
# +
rate = 0.00098
logits = LeNet(x)
cross_entropy = tf.nn.softmax_cross_entropy_with_logits(labels=one_hot_y, logits=logits,name="softmax")
loss_operation = tf.reduce_mean(cross_entropy,name="target_conv")
optimizer = tf.train.AdamOptimizer(learning_rate = rate,name="adam_optimized")
training_operation = optimizer.minimize(loss_operation)
# +
correct_prediction = tf.equal(tf.argmax(logits, 1), tf.argmax(one_hot_y, 1))
accuracy_operation = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
saver = tf.train.Saver()
def evaluate(X_data, y_data):
    num_examples = len(X_data)
    total_accuracy = 0
    #loss_accuracy = 0
    sess = tf.get_default_session()
    for offset in range(0, num_examples, BATCH_SIZE):
        batch_x, batch_y = X_data[offset:offset+BATCH_SIZE], y_data[offset:offset+BATCH_SIZE]
        accuracy = sess.run(accuracy_operation, feed_dict={x: batch_x, y: batch_y, keep_prob: 1})
        #loss = sess.run(loss_operation, feed_dict={x: batch_x, y: batch_y})
        total_accuracy += (accuracy * len(batch_x))
        #loss_accuracy += (loss * len(batch_x))
    return total_accuracy / num_examples
# -
from sklearn.utils import shuffle  # needed below to reshuffle the training set each epoch

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    num_examples = len(X_train)
    print("Training...")
    print()
    for i in range(EPOCHS):
        X_train, y_train = shuffle(X_train, y_train)
        for offset in range(0, num_examples, BATCH_SIZE):
            end = offset + BATCH_SIZE
            batch_x, batch_y = X_train[offset:end], y_train[offset:end]
            #print(batch_y.shape)
            sess.run(training_operation, feed_dict={x: batch_x, y: batch_y, keep_prob: 0.6})
        training_accuracy = evaluate(X_train, y_train)
        validation_accuracy = evaluate(X_valid, y_valid)
        print("EPOCH {} ...".format(i+1))
        print("Train Accuracy = {:.3f}".format(training_accuracy))
        print("Validation Accuracy = {:.3f}".format(validation_accuracy))
        print()
    saver.save(sess, './lenet')
    print("Model saved")
with tf.Session() as sess:
    saver.restore(sess, tf.train.latest_checkpoint('.'))
    test_accuracy = evaluate(X_test, y_test)
    print("Test Accuracy = {:.3f}".format(test_accuracy))
# +
import tensorflow as tf
meta_path = './lenet.meta'  # .meta file
output_node_names = ['logit']  # Output nodes
with tf.Session() as sess:
    # Restore the graph
    saver = tf.train.import_meta_graph(meta_path)
    # Load weights
    saver.restore(sess, tf.train.latest_checkpoint('.'))
    # Freeze the graph
    frozen_graph_def = tf.graph_util.convert_variables_to_constants(sess, sess.graph_def, output_node_names)
    # Save the frozen graph
    with open('output_graph.pb', 'wb') as f:
        f.write(frozen_graph_def.SerializeToString())
# +
import os
import cv2
import numpy as np
import matplotlib.pyplot as plt
test_images=[]
image_labels=[0,1,2,3,4,5,6,7,8,9]
path='./test_data/'
for image in os.listdir(path):
    print(image)
    image_path = cv2.imread(path + image)
    image = cv2.resize(image_path, (64, 64))
    # image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
    test_images.append(image)
test_image_array=np.array(test_images)
# +
import cv2
import glob
import matplotlib.pyplot as plt
filelist = glob.glob('crowdai/c_36/*.jpg')
# load the image with imread()
for i in filelist:
    imageSource = i
    img = cv2.imread(imageSource)
    horizontal_img = cv2.flip(img, 0)
    vertical_img = cv2.flip(img, 1)
    both_img = cv2.flip(img, -1)
    path = i.split('.')
    path = ''.join(path[:len(path)-1])
    cv2.imwrite(path + "H.jpg", horizontal_img)
    cv2.imwrite(path + "V.jpg", vertical_img)
    cv2.imwrite(path + "B.jpg", both_img)
# +
import cv2
import numpy as np
import glob
import matplotlib.pyplot as plt
filelist = glob.glob('crowdai/c_09/*.jpg')
# load the image with imread()
for i in filelist:
    img = cv2.imread(i)
    num_rows, num_cols = img.shape[:2]
    rotation_matrix = cv2.getRotationMatrix2D((num_cols/2, num_rows/2), 30, 1)
    img_rotation = cv2.warpAffine(img, rotation_matrix, (num_cols, num_rows))
    path = i.split('.')
    path = ''.join(path[:len(path)-1])
    cv2.imwrite(path + "Rotate.jpg", img_rotation)
# +
test_image_array=np.average(test_image_array,axis=3,weights=[0.299,0.587,0.114])
test_image_array=(test_image_array-128)/128
test_image_array=np.reshape(test_image_array,(10,64,64,1))
# -
import tensorflow as tf
from tensorflow.contrib.lite.toco.python.toco_wrapper import main
print(tf.__version__)
# ### Analyze Performance
# +
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    saver_test = tf.train.import_meta_graph('lenet.meta')
    saver_test.restore(sess, tf.train.latest_checkpoint('.'))
    test_accuracy = evaluate(test_image_array, image_labels)
    print("Test Accuracy={:.3f}".format(test_accuracy))
# +
def load_graph(frozen_graph_filename):
    # We load the protobuf file from the disk and parse it to retrieve the
    # unserialized graph_def
    with tf.gfile.GFile(frozen_graph_filename, "rb") as f:
        graph_def = tf.GraphDef()
        graph_def.ParseFromString(f.read())
    # Then, we can use again a convenient built-in function to import a graph_def into the
    # current default Graph
    with tf.Graph().as_default() as graph:
        tf.import_graph_def(
            graph_def,
            input_map=None,
            return_elements=None,
            name="prefix",
            op_dict=None,
            producer_op_list=None
        )
    return graph

graph = load_graph("./optimized_graph.pb")
for op in graph.get_operations():
    print(op.name)
    print(op.values())
x1 = graph.get_tensor_by_name('prefix/Pholder0:0')
y1 = graph.get_tensor_by_name('prefix/logit:0')
keep_probab = graph.get_tensor_by_name('prefix/Pholder11:0')
with tf.Session(graph=graph) as sess:
    test_features = test_image_array
    # compute the predicted output for test_x
    pred_y = sess.run(tf.nn.softmax(y1), feed_dict={x1: test_features, keep_probab: 1.0})
    # max = max(pred_y[0])
    top_five = sess.run(tf.nn.top_k(pred_y, k=1))
    print(top_five)
# -
# ### Output Top 5 Softmax Probabilities For Each Image Found on the Web
# +
softmax = tf.nn.softmax(logits)
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    saver = tf.train.import_meta_graph('lenet.meta')
    saver.restore(sess, tf.train.latest_checkpoint('.'))
    probability = sess.run(tf.nn.softmax(logits), {x: test_image_array, y: image_labels, keep_prob: 1.0})
    top_five = sess.run(tf.nn.top_k(probability, k=1))
    print(top_five)
# -
def outputFeatureMap(image_input, tf_activation, activation_min=-1, activation_max=-1, plt_num=1):
    activation = tf_activation.eval(session=sess, feed_dict={x: image_input})
    featuremaps = activation.shape[3]
    plt.figure(plt_num, figsize=(15, 15))
    for featuremap in range(featuremaps):
        plt.subplot(6, 8, featuremap + 1)  # sets the number of feature maps to show on each row and column
        plt.title('FeatureMap ' + str(featuremap))
        if activation_min != -1 and activation_max != -1:
            plt.imshow(activation[0, :, :, featuremap], interpolation="nearest", vmin=activation_min, vmax=activation_max, cmap="gray")
        elif activation_max != -1:
            plt.imshow(activation[0, :, :, featuremap], interpolation="nearest", vmax=activation_max, cmap="gray")
        elif activation_min != -1:
            plt.imshow(activation[0, :, :, featuremap], interpolation="nearest", vmin=activation_min, cmap="gray")
        else:
            plt.imshow(activation[0, :, :, featuremap], interpolation="nearest", cmap="gray")
| Kisan_Mitra.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/emcee21/colabML/blob/master/Sinus_Infection.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + id="RXYiiiXXkwGs" colab_type="code" cellView="both" colab={}
import numpy as np
from numpy.random import randint, uniform
import matplotlib.pyplot as plt
#@title Example form fields
#@markdown Forms support many types of fields.
# number N of points where a curve is sampled
SAMPLE_LEN = 64#@param {type: "integer"}
# number of curves in the training set
SAMPLE_SIZE = 32768 #@param {type: "integer"}
# least ordinate where to sample
X_MIN = -5.0 #@param {type: "number"}
# last ordinate where to sample
X_MAX = 5.0 #@param {type: "number"}
BATCH = 8172#@param {type: "integer"}
EPOCHS = 65536#@param {type: "integer"}
DISCRIMINATOR_DROPOUT_RATE = 0.25 #@param {type: "number"}
GENERATOR_LEAK_RATE = 0.1 #@param {type: "number"}
DISCRIMINATOR_LEARNING_RATE = 0.2 #@param {type: "number"}
GENERATOR_LEARNING_RATE = 0.004 #@param {type: "number"}
EPSILON = 0.0000001 #@param {type: "number"}
#@markdown ---
# The set of coordinates over which curves are sampled
X_COORDS = np.linspace(X_MIN , X_MAX, SAMPLE_LEN)
# The training set
SAMPLE = np.zeros((SAMPLE_SIZE, SAMPLE_LEN))
for i in range(0, SAMPLE_SIZE):
    b = uniform(0.5, 2.0)
    c = uniform(np.math.pi)
    SAMPLE[i] = np.array([np.sin(b*x + c) for x in X_COORDS])
# We plot the first 8 curves
fig, axis = plt.subplots(1, 1)
for i in range(8):
    axis.plot(X_COORDS, SAMPLE[i])
plt.show()
# + id="pgIlQsJmQwzm" colab_type="code" colab={}
print(f'SAMPLE_LEN={SAMPLE_LEN}')
print(f'SAMPLE_SIZE={SAMPLE_SIZE}')
print(f'X_MIN={X_MIN}')
print(f'X_MAX={X_MAX}')
print(f'BATCH={BATCH}')
print(f'EPOCHS={EPOCHS}')
print(f'DISCRIMINATOR_DROPOUT_RATE={DISCRIMINATOR_DROPOUT_RATE}')
print(f'GENERATOR_LEAK_RATE={GENERATOR_LEAK_RATE}')
print(f'DISCRIMINATOR_LEARNING_RATE={DISCRIMINATOR_LEARNING_RATE}')
print(f'GENERATOR_LEARNING_RATE={GENERATOR_LEARNING_RATE}')
print(f'EPSILON={EPSILON}')
from keras.models import Sequential
from keras.layers import Dense, Dropout, LeakyReLU
from keras.optimizers import Adam
DROPOUT = Dropout(DISCRIMINATOR_DROPOUT_RATE) # Empirical hyperparameter
discriminator = Sequential()
discriminator.add(Dense(SAMPLE_LEN))
discriminator.add(DROPOUT)
discriminator.add(Dense(512))
discriminator.add(DROPOUT)
discriminator.add(Dense(1, activation = "sigmoid"))
discriminator.compile(optimizer = Adam(learning_rate=DISCRIMINATOR_LEARNING_RATE, epsilon = EPSILON), loss = "binary_crossentropy", metrics = ["accuracy"])
LEAKY_RELU = LeakyReLU(GENERATOR_LEAK_RATE) # Empirical hyperparameter
generator = Sequential()
generator.add(Dense(SAMPLE_LEN))
generator.add(LEAKY_RELU)
generator.add(Dense(512))
generator.add(LEAKY_RELU)
generator.add(Dense(SAMPLE_LEN, activation = "tanh"))
#generator.compile(optimizer = Adam(learning_rate=GENERATOR_LEARNING_RATE, epsilon = EPSILON), loss = "mse", metrics = ["accuracy"])
#generator.compile()
gan = Sequential()
gan.add(generator)
gan.add(discriminator)
gan.compile(optimizer = Adam(learning_rate=GENERATOR_LEARNING_RATE, epsilon = EPSILON), loss = "binary_crossentropy", metrics = ["accuracy"])
NOISE = uniform(X_MIN, X_MAX, size = (SAMPLE_SIZE, SAMPLE_LEN))
ONES = np.ones((SAMPLE_SIZE))
ZEROS = np.zeros((SAMPLE_SIZE))
print("epoch | dis. loss | dis. acc | gen. loss | gen. acc")
print("------+-----------+----------+-----------+----------")
BATCHES_PER_EPOCH = (SAMPLE_SIZE // BATCH)
g_losses = np.zeros(([EPOCHS * BATCHES_PER_EPOCH]))
g_accuracy = np.zeros(([EPOCHS * BATCHES_PER_EPOCH]))
d_losses = np.zeros(([EPOCHS * BATCHES_PER_EPOCH]))
d_accuracy = np.zeros(([EPOCHS * BATCHES_PER_EPOCH]))
ax_index = 1
stats_index = 0
for e in range(EPOCHS):
    for k in range(BATCHES_PER_EPOCH):
        # Train the discriminator to distinguish real sinusoids from those produced by the generator
        n = randint(0, SAMPLE_SIZE, size=BATCH)
        # Now prepare a batch of training records for the discriminator
        p = generator.predict(NOISE[n])
        #p1 = discriminator.predict(SAMPLE[n])
        x = np.concatenate((SAMPLE[n], p))
        y = np.concatenate((ONES[n], ZEROS[n]))
        d_result = discriminator.train_on_batch(x, y)
        d_losses.put(stats_index, d_result[0])
        d_accuracy.put(stats_index, d_result[1])
        #p2 = discriminator.predict(x)
        discriminator.trainable = False
        g_result = gan.train_on_batch(NOISE[n], ONES[n])
        g_losses.put(stats_index, g_result[0])
        g_accuracy.put(stats_index, g_result[1])
        discriminator.trainable = True
        stats_index += 1
    print(f" {e:04n} | {d_result[0]:.5f} | {d_result[1]:.5f} | {g_result[0]:.5f} | {g_result[1]:.5f}")
    # At epochs 3, 13, 23, ... plot the last generator prediction
    if e % 10 == 3:
        #print(f"{generator.metrics_names}")
        #print(f"{discriminator.metrics_names}")
        #print(f"{gan.metrics_names}")
        #print(f"p1 sum={p1.sum()}")
        #print(f"p2 sum={p2.sum()}")
        #gan.summary()
        fig, axs = plt.subplots(3, 1, figsize=(8, 12))
        ax = axs[0]
        ax.plot(X_COORDS, p[-1])
        #print(p[-1])
        ax.xaxis.set_visible(False)
        ax.set_ylabel(f"Epoch: {e}")
        # ax = losses_fig.add_subplot(EPOCHS + 1 / 10, 1, ax_index)
        ax = axs[1]
        ax.plot(np.log(d_losses[0:stats_index - 1]))
        ax.plot(np.log(g_losses[0:stats_index - 1]))
        ax.xaxis.set_visible(False)
        ax.set_ylabel(f"Losses Epoch: {e}")
        # ax = accuracy_fig.add_subplot(EPOCHS + 1 / 10, 1, ax_index)
        ax = axs[2]
        ax.plot(d_accuracy[0:stats_index - 1])
        ax.plot(g_accuracy[0:stats_index - 1])
        ax.xaxis.set_visible(False)
        ax.set_ylabel(f"Accuracy Epoch: {e}")
        plt.show()
        ax_index += 1
# Plot a curve generated by the GAN
y = generator.predict(uniform(X_MIN, X_MAX, size=(1, SAMPLE_LEN)))[0]
ax = fig.add_subplot(EPOCHS + 1 / 10, 1, ax_index)
plt.plot(X_COORDS, y)
# + [markdown] id="RZm5ocBzQrAo" colab_type="text"
#
| Sinus_Infection.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3.6
# language: python
# name: python36
# ---
# Copyright (c) Microsoft Corporation. All rights reserved.
#
# Licensed under the MIT License.
# # 01. Train in the Notebook & Deploy Model to ACI
#
# * Load workspace
# * Train a simple regression model directly in the Notebook python kernel
# * Record run history
# * Find the best model in run history and download it.
# * Deploy the model as an Azure Container Instance (ACI)
# ## Prerequisites
# 1. Make sure you go through the [00. Installation and Configuration](00.configuration.ipynb) Notebook first if you haven't.
#
# 2. Install following pre-requisite libraries to your conda environment and restart notebook.
# ```shell
# (myenv) $ conda install -y matplotlib tqdm scikit-learn
# ```
#
# 3. Check that ACI is registered for your Azure Subscription.
# !az provider show -n Microsoft.ContainerInstance -o table
# If ACI is not registered, run following command to register it. Note that you have to be a subscription owner, or this command will fail.
# !az provider register -n Microsoft.ContainerInstance
# ## Validate Azure ML SDK installation and get version number for debugging purposes
# + tags=["install"]
from azureml.core import Experiment, Run, Workspace
import azureml.core
# Check core SDK version number
print("SDK version:", azureml.core.VERSION)
# -
# ## Initialize Workspace
#
# Initialize a workspace object from persisted configuration.
# + tags=["create workspace"]
ws = Workspace.from_config()
print('Workspace name: ' + ws.name,
'Azure region: ' + ws.location,
'Subscription id: ' + ws.subscription_id,
'Resource group: ' + ws.resource_group, sep='\n')
# -
# ## Set experiment name
# Choose a name for experiment.
experiment_name = 'train-in-notebook'
# ## Start a training run in local Notebook
# +
# load diabetes dataset, a well-known small dataset that comes with scikit-learn
from sklearn.datasets import load_diabetes
from sklearn.linear_model import Ridge
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split
from sklearn.externals import joblib
X, y = load_diabetes(return_X_y = True)
columns = ['age', 'gender', 'bmi', 'bp', 's1', 's2', 's3', 's4', 's5', 's6']
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
data = {
"train":{"X": X_train, "y": y_train},
"test":{"X": X_test, "y": y_test}
}
# -
# ### Train a simple Ridge model
# Train a very simple Ridge regression model in scikit-learn, and save it as a pickle file.
reg = Ridge(alpha = 0.03)
reg.fit(X=data['train']['X'], y=data['train']['y'])
preds = reg.predict(data['test']['X'])
print('Mean Squared Error is', mean_squared_error(data['test']['y'], preds))
joblib.dump(value=reg, filename='model.pkl');
# ### Add experiment tracking
# Now, let's add Azure ML experiment logging, and upload persisted model into run record as well.
# + tags=["local run", "outputs upload"]
experiment = Experiment(workspace=ws, name=experiment_name)
run = experiment.start_logging()
run.tag("Description","My first run!")
run.log('alpha', 0.03)
reg = Ridge(alpha=0.03)
reg.fit(data['train']['X'], data['train']['y'])
preds = reg.predict(data['test']['X'])
run.log('mse', mean_squared_error(data['test']['y'], preds))
joblib.dump(value=reg, filename='model.pkl')
run.upload_file(name='outputs/model.pkl', path_or_stream='./model.pkl')
run.complete()
# -
# We can browse to the recorded run. Please make sure you use Chrome to navigate the run history page.
run
# ### Simple parameter sweep
# Sweep over alpha values of a sklearn ridge model, and capture metrics and trained model in the Azure ML experiment.
# +
import numpy as np
import os
from tqdm import tqdm
model_name = "model.pkl"
# alpha values from 0.0 to 0.95 in steps of 0.05
alphas = np.arange(0.0, 1.0, 0.05)
# try a bunch of alpha values in a Linear Regression (Ridge) model
for alpha in tqdm(alphas):
# create a bunch of runs, each train a model with a different alpha value
with experiment.start_logging() as run:
# Use Ridge algorithm to build a regression model
reg = Ridge(alpha=alpha)
reg.fit(X=data["train"]["X"], y=data["train"]["y"])
preds = reg.predict(X=data["test"]["X"])
mse = mean_squared_error(y_true=data["test"]["y"], y_pred=preds)
# log alpha, mean_squared_error and feature names in run history
run.log(name="alpha", value=alpha)
run.log(name="mse", value=mse)
run.log_list(name="columns", value=columns)
with open(model_name, "wb") as file:
joblib.dump(value=reg, filename=file)
# upload the serialized model into run history record
run.upload_file(name="outputs/" + model_name, path_or_stream=model_name)
# now delete the serialized model from local folder since it is already uploaded to run history
os.remove(path=model_name)
# -
# Now let's take a look at the experiment in the Azure portal.
experiment
# ## Select best model from the experiment
# Load all experiment run metrics recursively from the experiment into a dictionary object.
# +
runs = {}
run_metrics = {}
for r in tqdm(experiment.get_runs()):
metrics = r.get_metrics()
if 'mse' in metrics.keys():
runs[r.id] = r
run_metrics[r.id] = metrics
# -
# Now find the run with the lowest Mean Squared Error value
best_run_id = min(run_metrics, key = lambda k: run_metrics[k]['mse'])
best_run = runs[best_run_id]
print('Best run is:', best_run_id)
print('Metrics:', run_metrics[best_run_id])
# You can add tags to your runs to make them easier to catalog
# + tags=["query history"]
best_run.tag(key="Description", value="The best one")
best_run.get_tags()
# -
# ### Plot MSE over alpha
#
# Let's observe the best model visually by plotting the MSE values over alpha values:
# +
# %matplotlib inline
import matplotlib
import matplotlib.pyplot as plt
best_alpha = run_metrics[best_run_id]['alpha']
min_mse = run_metrics[best_run_id]['mse']
alpha_mse = np.array([(run_metrics[k]['alpha'], run_metrics[k]['mse']) for k in run_metrics.keys()])
sorted_alpha_mse = alpha_mse[alpha_mse[:,0].argsort()]
plt.plot(sorted_alpha_mse[:,0], sorted_alpha_mse[:,1], 'r--')
plt.plot(sorted_alpha_mse[:,0], sorted_alpha_mse[:,1], 'bo')
plt.xlabel('alpha', fontsize = 14)
plt.ylabel('mean squared error', fontsize = 14)
plt.title('MSE over alpha', fontsize = 16)
# plot arrow
plt.arrow(x = best_alpha, y = min_mse + 39, dx = 0, dy = -26, ls = '-', lw = 0.4,
width = 0, head_width = .03, head_length = 8)
# plot "best run" text
plt.text(x = best_alpha - 0.08, y = min_mse + 50, s = 'Best Run', fontsize = 14)
plt.show()
# -
# ## Register the best model
# Find the model file saved in the run record of best run.
# + tags=["query history"]
for f in best_run.get_file_names():
print(f)
# -
# Now we can register this model in the model registry of the workspace
# + tags=["register model from history"]
model = best_run.register_model(model_name='best_model', model_path='outputs/model.pkl')
# -
# Verify that the model has been registered properly. If you have done this several times, you'll see the version number auto-increment each time.
# + tags=["register model from history"]
from azureml.core.model import Model
models = Model.list(workspace=ws, name='best_model')
for m in models:
print(m.name, m.version)
# -
# You can also download the registered model. Afterwards, you should see a `model.pkl` file in the current directory. You can then use it for local testing if you'd like.
# + tags=["download file"]
# remove the model file if it is already on disk
if os.path.isfile('model.pkl'):
os.remove('model.pkl')
# download the model
model.download(target_dir="./")
# -
# ## Scoring script
#
# Now we are ready to build a Docker image and deploy the model in it as a web service. The first step is creating the scoring script. For convenience, we have created the scoring script for you. It is printed below as text, but you can also run `%pycat ./score.py` in a cell to show the file.
#
# The scoring script consists of two functions: `init`, which loads the model into memory when the container starts, and `run`, which makes the prediction when the web service is called. Pay special attention to how the model is loaded in the `init()` function: when the Docker image is built for this model, the actual model file is downloaded and placed on disk, and the `get_model_path` function returns the local path where the model was placed.
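# A minimal sketch of what such a scoring script typically looks like. This is illustrative only: the `MeanModel` stand-in is hypothetical, and in the real `score.py` the trained model would instead be loaded with `joblib.load(Model.get_model_path('best_model'))`.

```python
import json

model = None

class MeanModel:
    """Hypothetical stand-in for the trained Ridge model, used only for illustration."""
    def predict(self, rows):
        return [sum(row) / len(row) for row in rows]

def init():
    # called once when the container starts: load the model into memory
    # (the real score.py would use Model.get_model_path to locate the model file on disk)
    global model
    model = MeanModel()

def run(raw_data):
    # called on every scoring request; the payload looks like {"data": [[...], [...]]}
    data = json.loads(raw_data)["data"]
    result = model.predict(data)
    return json.dumps({"result": result})

init()
print(run(json.dumps({"data": [[1.0, 3.0]]})))  # {"result": [2.0]}
```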
with open('./score.py', 'r') as scoring_script:
print(scoring_script.read())
# ## Create environment dependency file
#
# We need an environment dependency file `myenv.yml` to specify which libraries are needed by the scoring script when building the Docker image for web service deployment. We can manually create this file, or we can use the `CondaDependencies` API to automatically create it.
# +
from azureml.core.conda_dependencies import CondaDependencies
myenv = CondaDependencies()
myenv.add_conda_package("scikit-learn")
myenv.add_pip_package("pynacl==1.2.1")
print(myenv.serialize_to_string())
with open("myenv.yml","w") as f:
f.write(myenv.serialize_to_string())
# -
# ## Deploy web service into an Azure Container Instance
# The deployment process takes the registered model and your scoring script, and builds a Docker image. It then deploys the Docker image into Azure Container Instance as a running container with an HTTP endpoint ready for scoring calls. Read more about [Azure Container Instance](https://azure.microsoft.com/en-us/services/container-instances/).
#
# Note: ACI is great for quick and cost-effective dev/test deployment scenarios. For production workloads, please use [Azure Kubernetes Service (AKS)](https://azure.microsoft.com/en-us/services/kubernetes-service/) instead. Please follow the instructions in [this notebook](11.production-deploy-to-aks.ipynb) to see how that can be done from Azure ML.
#
# **Note:** The web service creation can take 6-7 minutes.
# + tags=["deploy service", "aci"]
from azureml.core.webservice import AciWebservice, Webservice
aciconfig = AciWebservice.deploy_configuration(cpu_cores=1,
memory_gb=1,
tags={'sample name': 'AML 101'},
description='This is a great example.')
# -
# Note the below `WebService.deploy_from_model()` function takes a model object registered under the workspace. It then bakes the model file into the Docker image so it can be looked up using the `Model.get_model_path()` function in `score.py`.
#
# If you have a local model file instead of a registered model object, you can also use the `WebService.deploy()` function, which registers the model and then deploys it.
# + tags=["deploy service", "aci"]
from azureml.core.image import ContainerImage
image_config = ContainerImage.image_configuration(execution_script="score.py",
runtime="python",
conda_file="myenv.yml")
# + tags=["deploy service", "aci"]
# %%time
# this will take 5-10 minutes to finish
# you can also use "az container list" command to find the ACI being deployed
service = Webservice.deploy_from_model(name='my-aci-svc',
deployment_config=aciconfig,
models=[model],
image_config=image_config,
workspace=ws)
service.wait_for_deployment(show_output=True)
# -
#
# ## Test web service
# + tags=["deploy service", "aci"]
print('web service is hosted in ACI:', service.scoring_uri)
# -
# Use the `run` API to call the web service with one row of data to get a prediction.
# + tags=["deploy service", "aci"]
import json
# score the first row from the test set.
test_samples = json.dumps({"data": X_test[0:1, :].tolist()})
service.run(input_data = test_samples)
# -
# Feed the entire test set and calculate the errors (residual values).
# + tags=["deploy service", "aci"]
# score the entire test set.
test_samples = json.dumps({'data': X_test.tolist()})
result = json.loads(service.run(input_data = test_samples))['result']
residual = result - y_test
# -
# You can also send raw HTTP request to test the web service.
# + tags=["deploy service", "aci"]
import requests
import json
# 2 rows of input data, each with 10 made-up numerical features
input_data = "{\"data\": [[1, 2, 3, 4, 5, 6, 7, 8, 9, 10], [10, 9, 8, 7, 6, 5, 4, 3, 2, 1]]}"
headers = {'Content-Type':'application/json'}
# for AKS deployment you'd need to include the service key in the header as well
# api_key = service.get_key()
# headers = {'Content-Type':'application/json', 'Authorization':('Bearer '+ api_key)}
resp = requests.post(service.scoring_uri, input_data, headers = headers)
print(resp.text)
# -
# ## Residual graph
# Plot a residual value graph to chart the errors on the entire test set. Observe the nice bell curve.
# +
f, (a0, a1) = plt.subplots(1, 2, gridspec_kw={'width_ratios':[3, 1], 'wspace':0, 'hspace': 0})
f.suptitle('Residual Values', fontsize = 18)
f.set_figheight(6)
f.set_figwidth(14)
a0.plot(residual, 'bo', alpha=0.4);
a0.plot([0,90], [0,0], 'r', lw=2)
a0.set_ylabel('residual values', fontsize=14)
a0.set_xlabel('test data set', fontsize=14)
a1.hist(residual, orientation='horizontal', color='blue', bins=10, histtype='step');
a1.hist(residual, orientation='horizontal', color='blue', alpha=0.2, bins=10);
a1.set_yticklabels([])
plt.show()
# -
# ## Delete ACI to clean up
# Deleting ACI is super fast!
# + tags=["deploy service", "aci"]
# %%time
service.delete()
# -
| 01.getting-started/01.train-within-notebook/01.train-within-notebook.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import pyodbc
import azure.cognitiveservices.speech as speechsdk
speech_key, service_region = "<KEY>", "eastus"
speech_config = speechsdk.SpeechConfig(subscription=speech_key, region=service_region)
# +
speech_recognizer = speechsdk.SpeechRecognizer(speech_config=speech_config)
result = speech_recognizer.recognize_once()
if result.reason == speechsdk.ResultReason.RecognizedSpeech:
print("Recognized: {}".format(result.text))
elif result.reason == speechsdk.ResultReason.NoMatch:
print("No speech could be recognized: {}".format(result.no_match_details))
elif result.reason == speechsdk.ResultReason.Canceled:
cancellation_details = result.cancellation_details
print("Speech Recognition canceled: {}".format(cancellation_details.reason))
if cancellation_details.reason == speechsdk.CancellationReason.Error:
print("Error details: {}".format(cancellation_details.error_details))
# +
cs = 'Driver={ODBC Driver 17 for SQL Server};Server=tcp:connor-unicart.database.windows.net,1433;Database=connor-unicart-speech;Uid=connor-admin;Pwd=1<PASSWORD>/.;Encrypt=yes;TrustServerCertificate=no;Connection Timeout=30;'
cnxn = pyodbc.connect(cs)
# +
cursor = cnxn.cursor()
cursor.execute("""
INSERT INTO dbo.SPEECH_TO_TEXT_RESULTS (ID, RESULTS)
VALUES (?, ?)""",
'test2', 'test2')
cnxn.commit()
# -
cnxn.close()
| labs/ai-edge/speech-to-text/speech_to_text.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import pandas as pd
import numpy as np
data = pd.read_csv("../data/england.csv")
data = data[(data.division != '3S')]
data = data[(data.division != '3N')]
data = data.astype({'division':int})
data = data[data.division == 1]
data.head()
# # Obtain First Division from season 1980
data_first_division = data[data.Season >= 1980]
data_first_division.head()
# # Remove unnecessary columns
data_first_division = data_first_division.drop(["tier", "FT", "division", "totgoal", "goaldif"], axis = 1)
data_first_division.tail()
# # Separate Date
date = data_first_division.Date.str.split('-', n=-1, expand=True)
date.columns = ["year", "month", "day"]
date.head()
data_first_division = data_first_division.drop("Date", axis = 1)
data_first_division = pd.concat([date, data_first_division], axis=1)
data_first_division.head()
# # Create teams dictionary
teams_set = sorted(list(set(data_first_division.home).union(set(data_first_division.visitor))))
team_number = range(len(teams_set))
teams_dict = dict(zip(teams_set, team_number))
teams_dict
teams_df = pd.DataFrame({"team_name":list(teams_set), "team_number":team_number})
teams_df.head()
teams_df.to_csv("../data/teams.csv", index=False)
data_first_division.home = data_first_division.home.apply(teams_dict.get)
data_first_division.visitor = data_first_division.visitor.apply(teams_dict.get)
data_first_division.head()
# # Modify Result
result_dict = {"D":1, "H":2, "A":3}
data_first_division.result = data_first_division.result.apply(result_dict.get)
data_first_division = data_first_division.rename(columns={'Season':'season'})
data_first_division.head()
# # Add week
temp = data_first_division[(data_first_division.year == "2018") | (data_first_division.year == "2019")]
temp.groupby(by=["year", "month", "day"]).count()
con = [(1995, 2020, 38), (1991, 1995, 42), (1988, 1991, 38), (1987, 1988, 40), (1980, 1987, 42)]
s = dict()
for a, b, jornada in con:
for i in range(a, b):
s[i] = jornada
# s: season => number of matchdays (jornadas) in that season
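# The same season-to-matchdays mapping can also be built with a dict comprehension, equivalent to the nested loops above:

```python
# (first_season, last_season_exclusive, matchdays) for English First Division eras
con = [(1995, 2020, 38), (1991, 1995, 42), (1988, 1991, 38), (1987, 1988, 40), (1980, 1987, 42)]

# season -> number of matchdays in that season
s = {season: matchdays for start, end, matchdays in con for season in range(start, end)}
print(s[1990], s[1987], len(s))  # 38 40 40
```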
data_first_division = data_first_division.reset_index()
data_first_division = data_first_division.drop("index", axis = 1)
data_first_division.tail()
# + tags=[]
temp = data_first_division.copy()
temp["week_day"] = np.zeros(len(temp))
for season in data_first_division.season.unique():
data_season = data_first_division[data_first_division.season == season]
equipos = data_season.home.unique()
conteo = dict(zip(equipos, [0] * len(equipos)))
for index, row in data_season.iterrows():
conteo[row.home] += 1
conteo[row.visitor] += 1
temp.iloc[index, -1] = conteo[row.home]
temp = temp.astype({"week_day":int})
temp.tail()
# -
data_first_division = temp
data_first_division.head()
# # Remove goals
data_first_division = data_first_division.drop(["hgoal", "vgoal"], axis = 1)
data_first_division.head()
# # Save CSV
data_first_division.to_csv("../data/england-clean.csv", index=False)
# # Execute data transformation files
from os import system
system('python3 feature_addition.py')
system('python3 creating_data_2020.py')
system('python3 team_value.py')
| data_transformation/data_cleaning.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# !wget https://github.com/rickiepark/python-machine-learning-book-2nd-edition/raw/master/code/ch09/movie_data.csv.gz
# +
import gzip
with gzip.open('movie_data.csv.gz') as f_in, open('movie_data.csv', 'wb') as f_out:
f_out.writelines(f_in)
# -
import nltk
nltk.download('stopwords')
# +
import numpy as np
import re
from nltk.corpus import stopwords
from nltk.stem import PorterStemmer
stop = stopwords.words('english')
porter = PorterStemmer()
def tokenizer(text):
    text = re.sub(r'<[^>]*>', '', text)
    emoticons = re.findall(r'(?::|;|=)(?:-)?(?:\)|\(|D|P)', text.lower())
    text = re.sub(r'[\W]+', ' ', text.lower()) + ' '.join(emoticons).replace('-', '')
tokenized = [w for w in text.split() if w not in stop]
return tokenized
def stream_docs(path):
with open(path, 'r', encoding='utf-8') as csv:
next(csv) # skip header
for line in csv:
text, label = line[:-3], int(line[-2])
yield text, label
# -
next(stream_docs(path='movie_data.csv'))
def get_minibatch(doc_stream, size):
docs, y = [], []
try:
for _ in range(size):
text, label = next(doc_stream)
docs.append(text)
y.append(label)
except StopIteration:
return None, None
return docs, y
# +
from sklearn.feature_extraction.text import HashingVectorizer
from sklearn.linear_model import SGDClassifier
vect = HashingVectorizer(decode_error='ignore',
n_features=2**21,
preprocessor=None,
tokenizer=tokenizer)
clf = SGDClassifier(loss='log', random_state=1, max_iter=1)
doc_stream = stream_docs(path='movie_data.csv')
# +
import pyprind
pbar = pyprind.ProgBar(45)
classes = np.array([0, 1])
for _ in range(45):
X_train, y_train = get_minibatch(doc_stream, size=1000)
if not X_train:
break
X_train = vect.transform(X_train)
clf.partial_fit(X_train, y_train, classes=classes)
pbar.update()
# +
X_test, y_test = get_minibatch(doc_stream, size=5000)
X_test = vect.transform(X_test)
print('Accuracy: %.3f' % clf.score(X_test, y_test))
# -
clf = clf.partial_fit(X_test, y_test)
# +
# Create a movieclassifier directory (with a pkl_objects subdirectory) and save the serialized Python objects there
## pickle.dump can store not only the trained logistic regression model but also library objects such as NLTK's stop-word list
import pickle
import os
dest = os.path.join('movieclassifier', 'pkl_objects')
if not os.path.exists(dest):
os.makedirs(dest)
pickle.dump(stop, open(os.path.join(dest, 'stopwords.pkl'), 'wb'), protocol=4)
pickle.dump(clf, open(os.path.join(dest, 'classifier.pkl'), 'wb'), protocol=4)
# +
# test that the objects were saved correctly
import os
os.chdir('movieclassifier')
import pickle
import re
import os
import import_ipynb
from vectorizer import vect
clf = pickle.load(open(os.path.join('pkl_objects', 'classifier.pkl'), 'rb'))
import numpy as np
# the classifier returns 0 or 1, so define a dictionary to map those to text labels
label = {0:'negative', 1:'positive'}
example = ['I love this movie']
# use the saved vectorizer.py to transform the sample document into a word vector
X = vect.transform(example)
# predict the label with the pickled logistic regression model's predict method
print('Prediction: %s\nProbability: %.2f%%' %\
(label[clf.predict(X)[0]],
np.max(clf.predict_proba(X))*100))
# -
| python-machine-learning-2nd/09.WebServing/pickel_tutorial.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import numpy as np
import os
import PyPDF2
pathDirectory = "/home/sarthak/Sem6/COD310/"
HashedPDF = pathDirectory + "HashedPDF/index.pdf"
toCompare = "b1674191a88ec5cdd733e4240a81803105dc412d6c6708d53ab94fc248f4f553"
pdfFileObj = open(HashedPDF, 'rb')
pdfReader = PyPDF2.PdfFileReader(pdfFileObj)
print(pdfReader.numPages)
pageObj = pdfReader.getPage(0)
text = pageObj.extractText()
numLines = len(text) // 65
# text[0]
# text[64]
# text[65]
for i in range(numLines):
newHash = text[65*i:65*i+64]
if(toCompare == newHash):
print("Got the correct File")
index = i
break
| getHashFromIndexFile.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# <small><small><i>
# All of these python notebooks are available at [https://gitlab.erc.monash.edu.au/andrease/Python4Maths.git]
# </i></small></small>
# # Working with strings
#
# ## The Print Statement
# As seen previously, the **print()** function prints all of its arguments as strings, separated by spaces and followed by a linebreak:
#
# - print("Hello World")
# - print("Hello",'World')
# - print("Hello", <Variable Containing the String>)
#
# Note that **print** is different in old versions of Python (2.7) where it was a statement and did not need parentheses around its arguments.
print("Hello","World")
# The print function has some optional arguments to control where and how it prints. These include `sep`, the separator (default space), `end` (the end character) and `file` to write to a file.
print("Hello","World",sep='...',end='!!')
# ## String Formatting
#
# There are lots of methods for formatting and manipulating strings built into Python. Some of these are illustrated here.
#
# String concatenation is the "addition" of two strings. Observe that while concatenating there will be no space between the strings.
string1='World'
string2='!'
print('Hello' + string1 + string2)
# The **%** operator is used to format a string inserting the value that comes after. It relies on the string containing a format specifier that identifies where to insert the value. The most common types of format specifiers are:
#
# - %s -> string
# - %d -> Integer
# - %f -> Float
# - %o -> Octal
# - %x -> Hexadecimal
# - %e -> exponential
print("Hello %s" % string1)
print("Actual Number = %d" %18)
print("Float of the number = %f" %18)
print("Octal equivalent of the number = %o" %18)
print("Hexadecimal equivalent of the number = %x" %18)
print("Exponential equivalent of the number = %e" %18)
# When referring to multiple variables, parentheses are used. Values are inserted in the order they appear in the parentheses (more on tuples in the next lecture)
print("Hello %s %s. The meaning of life is %d" %(string1,string2,42))
# We can also specify the width of the field and the number of decimal places to be used. For example:
print('Print width 10: |%10s|'%'x')
print('Print width 10: |%-10s|'%'x') # left justified
print("The number pi = %.2f to 2 decimal places"%3.1415)
print("More space pi = %10.2f"%3.1415)
print("Pad pi with 0 = %010.2f"%3.1415) # pad with zeros
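# The same width and precision controls are available with `str.format` and f-strings (Python 3.6+), which are often more readable. A quick equivalent of the `%` examples above:

```python
pi = 3.1415
print("The number pi = {:.2f} to 2 decimal places".format(pi))  # str.format style
print(f"The number pi = {pi:.2f} to 2 decimal places")          # f-string style
print(f"More space pi = {pi:10.2f}")
print(f"Pad pi with 0 = {pi:010.2f}")
```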
# ## Other String Methods
# Multiplying a string by an integer simply repeats it
print("Hello World! "*5)
# Strings can be transformed by a variety of functions:
s="hello wOrld"
print(s.capitalize())
print(s.upper())
print(s.lower())
print('|%s|' % "Hello World".center(30)) # center in 30 characters
print('|%s|'% " lots of space ".strip()) # remove leading and trailing whitespace
print("Hello World".replace("World","Class"))
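# Two related methods, `split` and `join`, convert between strings and lists of substrings. A quick sketch in the same spirit as the examples above:

```python
words = "the quick brown fox".split()   # split on any whitespace by default
print(words)                # ['the', 'quick', 'brown', 'fox']
print("-".join(words))      # the-quick-brown-fox
print("a,b,,c".split(","))  # an explicit separator keeps empty fields: ['a', 'b', '', 'c']
```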
# There are also lots of ways to inspect or check strings. Examples of a few of these are given here:
s="Hello World"
print("The length of '%s' is"%s,len(s),"characters") # len() gives length
s.startswith("Hello") and s.endswith("World") # check start/end
# count strings
print("There are %d 'l's but only %d World in %s" % (s.count('l'),s.count('World'),s))
print('"el" is at index',s.find('el'),"in",s) # find returns a 0-based index, or -1 if not found
# ## String comparison operations
# Strings can be compared in lexicographical order with the usual comparisons. In addition the `in` operator checks for substrings:
'abc' < 'bbc' <= 'bbc'
"ABC" in "This is the ABC of Python"
# ## Accessing parts of strings
# Strings can be indexed with square brackets. Indexing starts from zero in Python.
s = '123456789'
print('First character of',s,'is',s[0])
print('Last character of',s,'is',s[len(s)-1])
# Negative indices can be used to start counting from the back
print('First character of',s,'is',s[-len(s)])
print('Last character of',s,'is',s[-1])
# Finally, a substring (range of characters) can be specified using $a:b$ to select the characters at index $a,a+1,\ldots,b-1$. Note that the last character is *not* included.
print("First three characters",s[0:3])
print("Next three characters",s[3:6])
# An empty beginning and end of the range denotes the beginning/end of the string:
print("First three characters", s[:3])
print("Last three characters", s[-3:])
# ## Strings are immutable
#
# It is important to note that strings are constant, immutable values in Python. While new strings can easily be created, it is not possible to modify an existing string in place:
s='012345'
sX=s[:2]+'X'+s[3:] # this creates a new string with 2 replaced by X
print("creating new string",sX,"OK")
sX=s.replace('2','X') # the same thing
print(sX,"still OK")
s[2] = 'X' # an error!!!
| raw/Python4Maths-master/Intro-to-Python/02_python4math.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: 'Python 3.6.10 64-bit (''AIDY.ai'': conda)'
# language: python
# name: python3
# ---
# # TENSOR WITH PYTORCH
import torch
import numpy as np
import pandas as pd
z = torch.zeros(5,3)
print(z)
print(z.dtype)
# ## Above, we created a 5x3 matrix filled with zeros, and queried its datatype to find out that the zeros are 32-bit floating point numbers, which is the default in PyTorch
#
# ## If we wanted integers instead, we can override the default, like so below:
i = torch.ones((5, 3), dtype=torch.int16)
print(i)
# ## Initializing random tensors with a specific seed
# +
torch.manual_seed(1728)
r = torch.rand(2,2)
print(r)
r1 = torch.rand(2,2)
print('This will be a Different Tensor')
print(r1)
torch.manual_seed(1728)
r2=torch.rand(2,2)
print('\nShould match r')
print(r2)
# -
# ### Arithmetic with PyTorch tensors is intuitive. Tensors of similar shape may be added or multiplied. Operations between a scalar and a tensor will distribute over the cells of the tensor.
# ## EXAMPLES BELOW:
one1s = torch.ones(2, 3)
print(one1s)
tw2s = torch.ones(2,3)*2 # multiply every element of the ones tensor by 2
print(tw2s)
thr33s = one1s + tw2s ## performing element-wise addition
print(thr33s) ## Tensors are added element-wise
print(thr33s.shape) ## This has the same dimensions as input tensors
r1 = torch.rand(2,3)
r2 = torch.rand(3,2)
r3 = r1+r2
# ### ABOVE: Attempting to add two random tensors of different shapes raises a runtime error, because there is no way to do element-wise arithmetic operations with tensors of different shapes
# # TENSORS 1: CREATION AND CONVERSION OF TENSORS
# ### CREATING A 1-D TENSOR
a = [i+(-7) for i in range(16)]
A = torch.Tensor(a)
print(A.ndimension())
print(type(A))
print(A.type())
print(A.dtype)
A
# ### INDEXING A TENSOR
print(A[10])
print(A[4],'\n', A[11])
# ### ACCESSING THE ACTUAL NUMBER USING THE `item()` METHOD
print(A[11].item())
print(type(A[9].item()))
print(A.type())
# ### CONVERTING FROM FLOAT TENSOR TO LONG TENSOR USING TYPE()
A.type()
# +
vx = A.type(torch.LongTensor)
vx = vx.type(dtype=torch.int32)
print(vx.type(), vx.dtype, vx)
# -
# ### SIZE AND DIMENSION
print(vx)
print(vx.ndimension())
print(vx.size())
A
# ### CHANGING THE VIEW OF A TENSOR
K = A.view(4,4)
Kx = A.view(2,8)
xm = A.view(8,2)
print(K,K.type())
print(Kx, Kx.type())
print(xm, xm.type())
# +
n = [6,9,12]
#[2,1,5]
for k in n:
#print(k)
a = torch.tensor([i for i in range(k)])
a_col = a.view(-1,3)
print('Original tensor:', a)
print('Reshaped view of tensor with 3cols:\n',a_col)
print('='*55)
# -
# ### SWITCHING BETWEEN TENSOR AND NUMPY
# +
yx = [[12.4,15],[12,45],[9,7]]
nPy = np.arange(1.0,3.5,0.5)
dx = np.array(yx)
dx
# -
np_2_tensor = torch.from_numpy(dx)
np_2_tensor
tensor_2_np = np_2_tensor.numpy()
tensor_2_np
# # SIMPLE MATHEMATICAL OPERATIONS WITH TENSORS
# +
rx = torch.rand(2,2) - 0.5*2
print('A Random Matrix:\n',rx)
print('Absolute Value of rx: \n', torch.abs(rx))
print('LINEAR ALGEBRAIC OPERATIONS like DETERMINANTS AND SINGULAR VALUE DECOMPOSITION:\n',
      'Determinant of rx:',torch.det(rx), '\n',
      'Singular Value Decomposition of rx:', torch.svd(rx), '\n')
print('STATISTICAL AND AGGREGATE OPERATIONS:\n',
'STANDARD DEVIATION OF rx:', torch.std(rx),'\n',
'AVERAGE OR MEAN OF rx:', torch.mean(rx),'\n',
'MAXIMUM VALUE OF rx:', torch.max(rx),'\n'
)
# -
# ### Vector (Tensor) addition
u = torch.Tensor([1,4,2])
v = torch.Tensor([3,4,5])
w = u+v
x = u*v
print(w,x)
# ### MULTIPLYING A VECTOR WITH A SCALAR
y = torch.ones(2,3)
print(y)
z = y*100
print(z)
# ### LINEAR COMBINATION
u = torch.Tensor([1,2])
v = torch.Tensor([4,0])
print(u,v)
w = 3*u
x = 4*v
print(w,x)
print(w+x)
print(3*u+4*v)
# ### ELEMENT-WISE MULTIPLICATION OF THREE TENSORS
q = torch.Tensor([[20], [10], [10]])
c = torch.Tensor([[0],[0],[5]])
n = torch.Tensor([[11],[11],[0]])
print(q,'\n', c, '\n', n)
print(q*c*n)
# ### DOT PRODUCT USING `dot` METHOD
# +
a = [1,2,3,4]
b = [2,2,3,9]
A = torch.Tensor(a)
B = torch.Tensor(b)
print(torch.dot(A,B))
# -
# ### MATRIX MULTIPLICATION BETWEEN TENSORS
# #### NOTE: The `reshape()` method is used to reshape the tensors to ensure proper matrix multiplication
u = torch.Tensor([1,0,1])
v = torch.Tensor([[20],[10],[25]])
print(u,'\n', v)
u = u.reshape(1,3)
x = u.reshape(3,1)
print(x)
print(u)
u*x
print(v)
v = v.reshape(1,3)
print(u*v)
print(u.shape,x.shape,v.shape)
# ## TENSOR SCALAR MANIPULATION (ALSO CONSIDERED BROADCASTING)
u = torch.Tensor([1,2,3,4,5,6])
print(u*100)
print(u+0)
print(u+1000)
print(u-20)
print(u/10)
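# Scalar operations are the simplest case of broadcasting. The same rules also let tensors of compatible (but not identical) shapes combine. A small sketch:

```python
import torch

col = torch.tensor([[1.0], [2.0], [3.0]])       # shape (3, 1)
row = torch.tensor([[10.0, 20.0, 30.0, 40.0]])  # shape (1, 4)

# both operands are (virtually) expanded to shape (3, 4) before the element-wise add
result = col + row
print(result.shape)   # torch.Size([3, 4])
print(result[0, 0])   # tensor(11.)
```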
# # TENSOR 2: BASICS OF MATRIX OPERATIONS
# +
ls = []
for i in range(3):
in_list = []
for j in range(3):
in_list.append(10*(i+1)+j)
print(in_list)
ls.append(in_list)
# ls
# -
A = torch.Tensor(ls)
B = torch.tensor(ls)
print(A,'\n',B)
# ### DIMENSION, SHAPE AND SIZE OF THE 2-D TENSOR
print('Dimension of the tensor:', A.ndimension())
print('Shape of the TENSOR:', A.shape)
print('Total Size of the TENSOR:', A.size())
# ### GETTING TOTAL NUMBER OF ELEMENTS
print(np.array(A.size()).prod()) # cast the size attribute to an ndarray and apply prod (equivalently, A.numel())
# ### MATRIX(TENSOR) ADDITION
cx = [[1, 3 ,4], [110,23,223], [22,0,9]]
bx = [[10,1,20],[23,45,60],[11,90,65]]
cx = torch.Tensor(cx)
bx = torch.Tensor(bx)
print(cx)
print(bx)
print(cx.ndimension())
print(bx.ndimension())
bcx = cx+bx
bcx
print(bcx.dtype)
# ### MULTIPLYING A MATRIX BY A SCALAR
# +
dx = bcx*0
xs = bcx*-2.8
yx = bcx*-0.7
print(dx)
print(yx)
print(xs, xs.dtype)
# -
# ### ADDING A SCALAR TO A MATRIX
wq = xs+926
wq
# ### SLICING AND INDEXING MATRIX (TENSOR)
bx
bx[0]
bx[0,2]
bx[0,1:3]
bx[1:3,2]
bx
bx[[2,2]]
bx[:2,1:3]
bx[1:,2:3]
# ### ELEMENT-WISE PRODUCT OF MATRICES
ax = torch.Tensor([[10,20,30],[30,40,9],[12,12,94],[10,0,30],])
b = torch.Tensor([[10,0,0],[0,89,0],[0,0,1],[0,0,0]])
print(ax,'\n',b)
Z = ax*b
c = ax.mm(b)
print(Z)
c
# ### FOR MATRIX MULTIPLICATION, THE TENSORS MUST BE ARRANGED PROPERLY IN ORDER TO PREVENT THE ABOVE ERROR
print(ax.size(),b.size())
bn = b.reshape(3,4)
bn
wxc = ax.mm(bn)
wxc
# ### TRANSPOSE OF A 2-D TENSOR (MATRIX)
print(wxc.transpose(-2,0))
print(wxc.transpose(-2,-1))
print(wxc.transpose(-2,-2))
print(wxc.transpose(-1,-1))
print(wxc.transpose(-1,1))
print(wxc.transpose(-1,0))
print(wxc.transpose(1,-1))
# ### THESE ARE BASICALLY THE SAME THOUGH: IT'S JUST A MATTER OF SPECIFYING THE `dim0`, `dim1` ARGUMENTS TO THE METHOD
# ### MATRIX INVERSE
wxc
torch.Tensor.inverse(wxc)
# ### DETERMINANT OF A MATRIX
#
torch.det(wxc)
| LEARNING PYTORCH/DAY 1_ TENSOR MANIPULATION.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # How to translate
#
# - Xcode / Editor / Export for Localization / (Repository-root)/CDDDA-loc.
# - Translations.csv / Copy "en" column / Google translate from english to target language.
# - Translations.csv / New column with header named after lang, like "ru" without quotes.
# - Translations.csv / Paste appropriate translations to appropriate lines.
# - l10n.ipynb (this notebook) / Run all cells, fix errors in CSV sanity check.
# - l10n.ipynb (this notebook) / Check save log, that your language is present.
# - Xcode / Editor / Import Localizations / Select your localization.
# - Simulator / Change language of device to target one.
# - Xcode / Run in simulator. Check that screen menu is appropriately localized.
# +
import csv
import gettext
import pathlib
import itertools
import os
import lxml.etree as etree
import pandas as pd
# +
l10n_root = pathlib.Path('../CDDA-loc')
l10n_csv = pathlib.Path('../Translations.csv')
wrong_directory = l10n_root / 'CDDA-loc'
locale_dir = pathlib.Path('../Libraries/Cataclysm-DDA/lang/mo')
languages = 'en hu es zh ko de pt_BR ru fr ja'.split()
storyboard_file = pathlib.Path('../Bundle/Base.lproj/UIControls.storyboard')
gettext.bindtextdomain('cataclysm-dda', localedir=locale_dir)
gettext.textdomain('cataclysm-dda')
problematic_lines_to_strings_for_translation_and_handlers = {
'* Toggle Snap-to-target': ('[%c] target self; [%c] toggle snap-to-target', lambda t: t.rsplit('] ', 1)[1]),
'm Change Aim Mode': ('[%c] to switch aiming modes.', lambda t: t.replace('[%c] ', '')),
's Toggle Burst/Auto Mode': ('[%c] to switch firing modes.', lambda t: t.replace('[%c] ', '')),
}
xliffs = list(l10n_root.glob('*/Localized Contents/*.xliff'))
xliffs
# +
def csv_sanity_check():
with open(l10n_csv) as f:
reader = csv.reader(f)
langs = next(reader)
errors = 0
line = 1
for record in reader:
line += 1
source = record[0]
if (len(source) == 1) or (source[1] == ' '):
sym = source[0]
for lang, txn in zip(langs, record):
if not txn:
continue
if txn[0] != sym:
errors += 1
print(f'{line}: Symbol {sym} differs for language {lang}: {txn}')
if (len(source) == 1) and (len(txn) != 1):
errors += 1
print(f'{line}: Trailing symbols for {sym} in {txn}')
return errors
def findall(tree, tag):
return tree.findall(f'.//{{urn:oasis:names:tc:xliff:document:1.2}}{tag}')
def save_translation(xliff: pathlib.Path):
with open(xliff) as f:
tree = etree.parse(f)
with open(l10n_csv) as f:
translations_csv_reader = csv.reader(f)
langs = next(translations_csv_reader)[1:]
translation_mappings = {row[0].lower(): dict(zip(langs, row[1:])) for row in translations_csv_reader}
lang = xliff.name.split('.')[0]
trans_units = findall(tree, 'trans-unit')
errors = 0
for trans_unit in trans_units:
source = findall(trans_unit, 'source')[0]
try:
translation = translation_mappings[source.text.lower()][lang]
except KeyError:
print(f'txn for {lang}/{source.text} not found. Skipping.')
errors += 1
continue
try:
target = findall(trans_unit, 'target')[0]
except IndexError:
target = etree.SubElement(trans_unit, 'target')
target.text = translation
tree.write(str(xliff), encoding='utf8')
return errors
def get_targets_from_xliff(xliff):
with open(xliff) as f:
tree = etree.parse(f)
targets = (x.text for x in findall(tree, 'target'))
return targets
def translate(text: str):
for language_code in languages:
os.environ['LANGUAGE'] = language_code
yield language_code.split('_')[0], gettext.gettext(text)
def get_texts_from_csv():
with open(l10n_csv) as f:
reader = csv.reader(f)
langs = next(reader)
for record in reader:
yield record[0]
def get_texts_for_translation():
english_texts = get_texts_from_csv()
for text in english_texts:
if len(text) == 1:
continue
elif text in problematic_lines_to_strings_for_translation_and_handlers:
prefix = text[:2]
real_text, processing_function = problematic_lines_to_strings_for_translation_and_handlers[text]
elif text[1] == ' ':
prefix, real_text = text[:2], text[2:]
processing_function = None
else:
prefix = ''
real_text = text
processing_function = None
for language, translation in translate(real_text):
yield (text, language, prefix + (processing_function(translation) if processing_function else translation))
def analyze(df):
for text, subframe in df.groupby(df.text):
if subframe.describe().translation['unique'] == 1:
status = 0
else:
status = 1
yield text, status
def save_translations():
errors = 0
for xliff in xliffs:
if str(xliff).endswith('en.xliff'):
continue
print(f'Saving {xliff}')
errors += save_translation(xliff)
return errors
def replace_titles_in_storyboard(keys_to_titles):
with open(storyboard_file) as f:
tree = etree.parse(f)
states = tree.findall('.//state')
missing = []
for state in states:
old_title = state.attrib['title'].lower()
maybe_title = keys_to_titles.get(old_title)
if maybe_title:
state.attrib['title'] = maybe_title
else:
missing.append(old_title)
tree.write(str(storyboard_file), encoding='utf8')
return missing
assert not csv_sanity_check(), 'CSV file has errors'
assert not wrong_directory.is_dir(), f'Another localization directory {wrong_directory} was found inside the correct one. To prevent mistakes, delete both and start again by exporting from Xcode.'
# -
print('\n'.join(get_targets_from_xliff([x for x in xliffs if str(x).endswith('en.xliff')][0])))
print('\n'.join('\t'.join(x) for x in translate('Re-layer armor/clothing')))
df = pd.DataFrame.from_records(get_texts_for_translation(), columns='text language translation'.split())
df
df.describe()
status_df = pd.DataFrame.from_records(analyze(df), columns='text status'.split())
status_df[status_df.status == 0]
# +
untranslated_ok_strings = """Overlay UI enabled
Cataclysm RPG
CDDA
BTAB
TAB
ESC
SPACE
Invert panning direction
Invert scrolling direction""".split('\n')
assert len(status_df[status_df.status == 0]) == len(untranslated_ok_strings), f'Too many strings left untranslated: {len(status_df[status_df.status == 0])}, expected {len(untranslated_ok_strings)}. Untranslated: {status_df[status_df.status == 0]}'
# -
csv_df = pd.read_csv(l10n_csv).set_index('en')
csv_df
for key, record in df.iterrows():
text, language, translation = record
if translation.lower() != text.lower():
        csv_df.loc[text, language] = translation  # single .loc call; chained indexing may assign to a copy
csv_df
csv_df.to_csv(l10n_csv)
assert not replace_titles_in_storyboard(dict(zip(csv_df.index.map(str.lower), csv_df.index)))
assert not save_translations(), 'There are errors in the translations; fix them before shipping.'
csv_df.loc['TAB']
| scripts/l10n.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Import Modules
import warnings
warnings.filterwarnings('ignore')
# +
from PIL import Image
import torch
from torchvision import transforms, datasets
import numpy as np
import os
from torch.autograd import Variable
from src.get_nets import PNet, RNet, ONet
from src.box_utils import nms, calibrate_box, get_image_boxes, convert_to_square
from src.first_stage import run_first_stage
import torch.nn as nn
# -
# # Path Definitions
# +
dataset_path = '../Dataset/emotiw/'
processed_dataset_path = '../Dataset/FaceFeatures/'
# -
# # MTCNN Model Definition for Extracting Face Features
pnet = PNet()
rnet = RNet()
onet = ONet()
onet.eval()
# +
class OnetFeatures(nn.Module):
def __init__(self, original_model):
super(OnetFeatures, self).__init__()
        self.features = nn.Sequential(*list(original_model.children())[:-3])  # use the passed-in model, not the global onet
def forward(self, x):
x = self.features(x)
return x
def get_face_features(image, min_face_size=20.0,
thresholds=[0.6, 0.7, 0.8],
nms_thresholds=[0.7, 0.7, 0.7]):
"""
Arguments:
image: an instance of PIL.Image.
min_face_size: a float number.
thresholds: a list of length 3.
nms_thresholds: a list of length 3.
Returns:
two float numpy arrays of shapes [n_boxes, 4] and [n_boxes, 10],
bounding boxes and facial landmarks.
"""
# LOAD MODELS
pnet = PNet()
rnet = RNet()
onet = ONet()
onet.eval()
# BUILD AN IMAGE PYRAMID
width, height = image.size
min_length = min(height, width)
min_detection_size = 12
factor = 0.707 # sqrt(0.5)
# scales for scaling the image
scales = []
# scales the image so that
# minimum size that we can detect equals to
# minimum face size that we want to detect
m = min_detection_size/min_face_size
min_length *= m
factor_count = 0
while min_length > min_detection_size:
scales.append(m*factor**factor_count)
min_length *= factor
factor_count += 1
# STAGE 1
# it will be returned
bounding_boxes = []
# run P-Net on different scales
for s in scales:
boxes = run_first_stage(image, pnet, scale=s, threshold=thresholds[0])
bounding_boxes.append(boxes)
# collect boxes (and offsets, and scores) from different scales
bounding_boxes = [i for i in bounding_boxes if i is not None]
bounding_boxes = np.vstack(bounding_boxes)
keep = nms(bounding_boxes[:, 0:5], nms_thresholds[0])
bounding_boxes = bounding_boxes[keep]
# use offsets predicted by pnet to transform bounding boxes
bounding_boxes = calibrate_box(bounding_boxes[:, 0:5], bounding_boxes[:, 5:])
# shape [n_boxes, 5]
bounding_boxes = convert_to_square(bounding_boxes)
bounding_boxes[:, 0:4] = np.round(bounding_boxes[:, 0:4])
# STAGE 2
img_boxes = get_image_boxes(bounding_boxes, image, size=24)
    img_boxes = torch.FloatTensor(img_boxes)  # Variable(..., volatile=True) was removed in PyTorch 0.4; a plain tensor works here
output = rnet(img_boxes)
offsets = output[0].data.numpy() # shape [n_boxes, 4]
probs = output[1].data.numpy() # shape [n_boxes, 2]
keep = np.where(probs[:, 1] > thresholds[1])[0]
bounding_boxes = bounding_boxes[keep]
bounding_boxes[:, 4] = probs[keep, 1].reshape((-1,))
offsets = offsets[keep]
keep = nms(bounding_boxes, nms_thresholds[1])
bounding_boxes = bounding_boxes[keep]
bounding_boxes = calibrate_box(bounding_boxes, offsets[keep])
bounding_boxes = convert_to_square(bounding_boxes)
bounding_boxes[:, 0:4] = np.round(bounding_boxes[:, 0:4])
# STAGE 3
img_boxes = get_image_boxes(bounding_boxes, image, size=48)
if len(img_boxes) == 0:
return [], []
    img_boxes = torch.FloatTensor(img_boxes)  # Variable(..., volatile=True) was removed in PyTorch 0.4; a plain tensor works here
output = onet(img_boxes)
faceFeatureModel = OnetFeatures(onet)
featureOutputs = faceFeatureModel(img_boxes)
landmarks = output[0].data.numpy() # shape [n_boxes, 10]
offsets = output[1].data.numpy() # shape [n_boxes, 4]
probs = output[2].data.numpy() # shape [n_boxes, 2]
keep = np.where(probs[:, 1] > thresholds[2])[0]
bounding_boxes = bounding_boxes[keep]
bounding_boxes[:, 4] = probs[keep, 1].reshape((-1,))
offsets = offsets[keep]
landmarks = landmarks[keep]
bounding_boxes = calibrate_box(bounding_boxes, offsets)
keep = nms(bounding_boxes, nms_thresholds[2], mode='min')
featureOutputs = featureOutputs[keep]
return featureOutputs
# -
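The `nms` used above comes from the repo-local `src.box_utils` module. For readers without that module, a minimal greedy IoU-based non-maximum-suppression sketch (my own simplification, not the repo's implementation) looks like:

```python
import numpy as np

def nms_sketch(boxes, threshold=0.7):
    """Greedy non-maximum suppression over [x1, y1, x2, y2, score] rows."""
    x1, y1, x2, y2, scores = boxes.T
    areas = (x2 - x1 + 1) * (y2 - y1 + 1)
    order = scores.argsort()[::-1]          # highest score first
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(int(i))
        # intersection of the current best box with the remaining boxes
        xx1 = np.maximum(x1[i], x1[order[1:]])
        yy1 = np.maximum(y1[i], y1[order[1:]])
        xx2 = np.minimum(x2[i], x2[order[1:]])
        yy2 = np.minimum(y2[i], y2[order[1:]])
        w = np.maximum(0.0, xx2 - xx1 + 1)
        h = np.maximum(0.0, yy2 - yy1 + 1)
        iou = w * h / (areas[i] + areas[order[1:]] - w * h)
        order = order[1:][iou <= threshold]  # drop boxes that overlap too much
    return keep
```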
# # Load Train and Val Dataset
# +
image_datasets = {x : datasets.ImageFolder(os.path.join(dataset_path, x))
for x in ['train', 'val']}
class_names = image_datasets['train'].classes
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
# -
class_names
training_dataset = image_datasets['train']
validation_dataset = image_datasets['val']
# +
neg_train = sorted(os.listdir(dataset_path + 'train/Negative/'))
neu_train = sorted(os.listdir(dataset_path + 'train/Neutral/'))
pos_train = sorted(os.listdir(dataset_path + 'train/Positive/'))
neg_val = sorted(os.listdir(dataset_path + 'val/Negative/'))
neu_val = sorted(os.listdir(dataset_path + 'val/Neutral/'))
pos_val = sorted(os.listdir(dataset_path + 'val/Positive/'))
# +
neg_train_filelist = [x.split('.')[0] for x in neg_train]
neu_train_filelist = [x.split('.')[0] for x in neu_train]
pos_train_filelist = [x.split('.')[0] for x in pos_train]
neg_val_filelist = [x.split('.')[0] for x in neg_val]
neu_val_filelist = [x.split('.')[0] for x in neu_val]
pos_val_filelist = [x.split('.')[0] for x in pos_val]
# +
neg_train_filelist = neg_train_filelist[1:]
neg_val_filelist = neg_val_filelist[1:]
neu_val_filelist = neu_val_filelist[1:]
pos_val_filelist = pos_val_filelist[1:]
# +
print(neg_train_filelist[:10])
print(neu_train_filelist[:10])
print(pos_train_filelist[:10])
print(neg_val_filelist[:10])
print(neu_val_filelist[:10])
print(pos_val_filelist[:10])
# -
train_filelist = neg_train_filelist + neu_train_filelist + pos_train_filelist
val_filelist = neg_val_filelist + neu_val_filelist + pos_val_filelist
print(len(training_dataset))
print(len(validation_dataset))
# # Extract Face Features
# map each label to its class sub-directory so the three identical branches collapse into one
label_dirs = {0: 'Negative', 1: 'Neutral', 2: 'Positive'}
for i in range(len(validation_dataset)):
    image, label = validation_dataset[i]
    print(val_filelist[i])
    out_path = processed_dataset_path + 'val/' + label_dirs[label] + '/' + val_filelist[i]
    try:
        if os.path.isfile(out_path + '.npz'):
            print(val_filelist[i] + ' Already present')
            continue
        features = get_face_features(image)
        if type(features) == tuple:
            # get_face_features returned ([], []): no boxes survived stage 3
            with open('hello.text', 'a') as f:
                f.write(val_filelist[i])
            continue
        features = features.data.numpy()
        if features.size == 0:
            print('MTCNN model handling empty face condition at ' + val_filelist[i])
        np.savez(out_path, a=features)
    except ValueError:
        print('No faces detected for ' + val_filelist[i] + '. Also MTCNN failed.')
        np.savez(out_path, a=np.zeros(1))
        continue
| MTCNN/.ipynb_checkpoints/Face_Extractor_Feature_Train-checkpoint.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# orphan: true
# ---
# + tags=["remove-input", "active-ipynb", "remove-output"]
# %pylab inline
# from ipyparallel import Client, error
# cluster=Client(profile="mpi")
# view=cluster[:]
# view.block=True
#
# try:
# from openmdao.utils.notebook_utils import notebook_mode
# except ImportError:
# !python -m pip install openmdao[notebooks]
# -
# ```{note}
# This feature requires MPI, and may not be able to be run on Colab.
# ```
# # Parallel Groups
#
# When systems are added to a `ParallelGroup`, they will be executed in parallel, assuming that the `ParallelGroup` is given an MPI communicator of sufficient size. Adding subsystems to a ParallelGroup is no different than adding them to a normal Group. For example:
# +
# %%px
import openmdao.api as om
prob = om.Problem()
model = prob.model
model.set_input_defaults('x', 1.)
parallel = model.add_subsystem('parallel', om.ParallelGroup(),
promotes_inputs=[('c1.x', 'x'), ('c2.x', 'x'),
('c3.x', 'x'), ('c4.x', 'x')])
parallel.add_subsystem('c1', om.ExecComp(['y=-2.0*x']))
parallel.add_subsystem('c2', om.ExecComp(['y=5.0*x']))
parallel.add_subsystem('c3', om.ExecComp(['y=-3.0*x']))
parallel.add_subsystem('c4', om.ExecComp(['y=4.0*x']))
model.add_subsystem('c5', om.ExecComp(['y=3.0*x1 + 7.0*x2 - 2.0*x3 + x4']))
model.connect("parallel.c1.y", "c5.x1")
model.connect("parallel.c2.y", "c5.x2")
model.connect("parallel.c3.y", "c5.x3")
model.connect("parallel.c4.y", "c5.x4")
prob.setup(check=False, mode='fwd')
prob.run_model()
print(prob['c5.y'])
# + tags=["remove-input", "remove-output"]
# %%px
from openmdao.utils.assert_utils import assert_near_equal
assert_near_equal(prob['c5.y'], 39.0, 1e-6)
# + active=""
# In this example, components *c1* through *c4* will be executed in parallel, provided that the `ParallelGroup` is given four MPI processes. If the name of the python file containing our example were `my_par_model.py`, we could run it under MPI and give it four processes using the following command:
#
# ```
# mpirun -n 4 python my_par_model.py
# ```
#
# ```{note}
# This will only work if you've installed the mpi4py and petsc4py python packages, which are not installed by default in OpenMDAO.
# ```
#
# In the previous example, all four components in the `ParallelGroup` required just a single MPI process, but
# what happens if we want to add subsystems to a `ParallelGroup` that has other processor requirements?
# In OpenMDAO, we control process allocation behavior by setting the `min_procs` and/or `max_procs` or
# `proc_weight` args when we call the `add_subsystem` function to add a particular subsystem to
# a `ParallelGroup`.
#
# ```{eval-rst}
# .. automethod:: openmdao.core.group.Group.add_subsystem
# :noindex:
# ```
#
#
# If you use both `min_procs/max_procs` and `proc_weight`, it can become less obvious what the
# resulting process allocation will be, so you may want to stick to just using one or the other.
# The algorithm used for the allocation starts, assuming that the number of processes is greater than or
# equal to the number of subsystems, by assigning the `min_procs` for each subsystem. It then adds
# any remaining processes to subsystems based on their weights, being careful not to exceed their
# specified `max_procs`, if any.
#
# If the number of processes is less than the number of subsystems, then each subsystem, one at a
# time, starting with the one with the highest `proc_weight`, is allocated to the least-loaded process.
# An exception will be raised if any of the subsystems in this case have a `min_procs` value greater than one.
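The allocation procedure described above can be sketched in plain Python (a hypothetical illustration of the described behavior, not OpenMDAO's actual allocator):

```python
def allocate(nprocs, subs):
    """subs: list of {'min': int, 'max': int or None, 'weight': float}."""
    assert nprocs >= len(subs), "the fewer-procs-than-subsystems case uses a different path"
    # start by giving every subsystem its min_procs
    alloc = [s['min'] for s in subs]
    remaining = nprocs - sum(alloc)
    while remaining > 0:
        # subsystems that can still accept another process
        open_idx = [i for i, s in enumerate(subs)
                    if s['max'] is None or alloc[i] < s['max']]
        if not open_idx:
            break
        # give the next process to the most under-served subsystem by weight
        i = max(open_idx, key=lambda i: subs[i]['weight'] / alloc[i])
        alloc[i] += 1
        remaining -= 1
    return alloc
```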
| openmdao/docs/openmdao_book/features/core_features/working_with_groups/parallel_group.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## Setup
# +
# use full window width
from IPython.display import display, HTML  # IPython.core.display is deprecated in recent IPython
display(HTML("<style>.container { width:100% !important; }</style>"))
import os
os.chdir('..')
import virl
from helper_methods import run, plot
# -
# ## Agent Implementation
class DeterministicAgent:
    def __init__(self, env, action, action_text):
        self.num_of_actions = env.action_space.n
        self.env = env
        self.action = action
        self.action_text = action_text
        print("Agent has " + str(self.num_of_actions) + " actions and will always choose action " + str(action) + ": " + action_text)
    def get_action(self):
        return self.action
    def get_action_text(self):
        return self.action_text
    def get_env(self):
        return self.env
    def get_chart_title(self):
        return "Action = " + self.action_text
# ## Analysis
actions = ["no intervention", "impose a full lockdown", "implement track & trace", "enforce social distancing and face masks"]
for action, action_text in enumerate(actions):
env = virl.Epidemic(stochastic=False, noisy=False)
agent = DeterministicAgent(env, action, action_text)
states, rewards = run(agent)
plot(agent, states, rewards)
# ## Evaluation
# Eval here
#
| notebooks/.ipynb_checkpoints/run_deterministic-checkpoint.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import pandas as pd
from sklearn.datasets import load_iris
iris = load_iris()
dir(iris)
df = pd.DataFrame(iris.data, columns= iris.feature_names)
df.head()
df['target'] = iris.target
df.head()
df['target_names'] = df['target'].apply(lambda x: iris.target_names[x])
df.head()
new = df.groupby(['target_names', 'target']).mean()
new
(new / new.sum()) * 100
seto = df[df['target']==0]
vers = df[df['target']==1]
virg = df[df['target']==2]
import matplotlib.pyplot as plt
# %matplotlib inline
plt.scatter(seto['petal length (cm)'], seto['petal width (cm)'], color = 'green')
plt.scatter(vers['petal length (cm)'], vers['petal width (cm)'], color = 'blue')
plt.scatter(virg['petal length (cm)'], virg['petal width (cm)'], color = 'red')
x = df.drop(['target', 'target_names'], axis = 'columns')
y = df.target
from sklearn.model_selection import train_test_split
x_train, x_test, y_train, y_test = train_test_split(x,y, test_size = 0.2)
from sklearn.svm import SVC
model = SVC()
model.fit(x_train, y_train)
model.score(x_test, y_test)
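A single train/test split can be noisy; as a quick sanity check, cross-validation averages the score over several splits (a sketch, assuming the same iris data and a default `SVC`):

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

iris = load_iris()
# 5-fold cross-validation: five fits, five held-out scores
scores = cross_val_score(SVC(), iris.data, iris.target, cv=5)
print(round(scores.mean(), 3))
```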
| classification/classify_iris_flowers_with_svm.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # K-means
#
# By [<NAME>](https://joserzapata.github.io)
#
# K-means (using scikit-learn) to build clusters of [Fisher's Iris dataset](http://en.wikipedia.org/wiki/Iris_flower_data_set) (this data set has only 3 varieties: Iris setosa, Iris versicolor, Iris virginica), and plot these clusters in 3D.
# Import libraries
# %matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
import itertools
from sklearn import datasets, cluster
from sklearn.metrics import confusion_matrix, classification_report
from mpl_toolkits.mplot3d import Axes3D
# ## Load Data
# Note: a random seed was chosen so that the cluster labels
# are assigned in the same way as the original ones
# Load the data
iris = datasets.load_iris()
X_iris = iris.data
y_iris = iris.target
class_names = iris.target_names
# ## Build the clusters with k-means
# +
# 3 clusters are used because there are only 3 varieties
# To avoid ending up in a local minimum, uncomment
# the following line
#np.random.seed(2)
k_means = cluster.KMeans(n_clusters=3)
k_means.fit(X_iris)
labels = k_means.labels_
# -
# ## Cluster evaluation
# +
# How many labels came out right
# if the number is low, run the cell above again ;)
# Local minima at 14, 66
# The best result is 134
# if you uncomment the np.random.seed(2) line you will get good results directly
correct_labels = sum(y_iris == labels)
print("Result: {} of {} samples were correctly labeled.".format(correct_labels, y_iris.size))
cm = confusion_matrix(y_iris, labels)
print('\n Simple confusion matrix:')
print(cm)
print(classification_report(y_iris, labels, target_names=class_names))  # y_true first, predictions second
# -
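Re-running until the random seed happens to align cluster labels with the true labels is fragile; a deterministic alternative is to find the best label permutation with the Hungarian algorithm (a sketch, assuming SciPy is available):

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def align_cluster_labels(y_true, y_pred, n_classes=3):
    # contingency table: rows = true classes, columns = cluster labels
    cost = np.zeros((n_classes, n_classes), dtype=int)
    for t, p in zip(y_true, y_pred):
        cost[t, p] += 1
    # Hungarian algorithm on the negated table maximizes agreement
    row, col = linear_sum_assignment(-cost)
    mapping = {c: r for r, c in zip(row, col)}
    return np.array([mapping[p] for p in y_pred])
```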
# ### Formatted confusion matrix
# function taken from http://scikit-learn.org/stable/auto_examples/model_selection/plot_confusion_matrix.html
def plot_confusion_matrix(cm, classes,
title='Confusion matrix',
cmap=plt.cm.Blues):
"""
This function prints and plots the confusion matrix.
Normalization can be applied by setting `normalize=True`.
"""
#print(cm)
plt.imshow(cm, interpolation='nearest', cmap=cmap)
plt.title(title)
plt.colorbar()
tick_marks = np.arange(len(classes))
plt.xticks(tick_marks, classes, rotation=45)
plt.yticks(tick_marks, classes)
fmt = 'd'
thresh = cm.max() / 2.
for i, j in itertools.product(range(cm.shape[0]), range(cm.shape[1])):
plt.text(j, i, format(cm[i, j], fmt),
horizontalalignment="center",
color="white" if cm[i, j] > thresh else "black")
plt.tight_layout()
plt.ylabel('True label')
plt.xlabel('Predicted label')
plot_confusion_matrix(cm, classes=class_names, title='Confusion matrix')
# ## Plot the clusters
# +
# plot the clusters in color
centroids = k_means.cluster_centers_
fig = plt.figure(1, figsize=(8, 8))
plt.clf()  # clear the current figure
ax = Axes3D(fig, rect=[0, 0, 1, 1], elev=8, azim=200)
plt.cla()  # clear the axes
ax.scatter(X_iris[:, 3], X_iris[:, 0], X_iris[:, 2], c=labels.astype(float))  # np.float was removed from NumPy
# plot the centroids
for n in range(len(centroids)):
    ax.scatter(centroids[n,3],centroids[n,0],centroids[n,2],c = 'k',marker = '*',s=280)
    ax.text(centroids[n,3]+0.5,centroids[n,0],centroids[n,2],'Centroid {}'.format(n+1),fontsize =12)
plt.set_cmap('rainbow')  # color map so the clusters are easier to tell apart
# remove the numbers from the axes
ax.set_xticklabels([])  # ax.w_xaxis is deprecated in recent Matplotlib
ax.set_yticklabels([])
ax.set_zticklabels([])
ax.set_xlabel('Petal width')
ax.set_ylabel('Sepal length')
ax.set_zlabel('Petal length')
plt.show()
| jupyter_notebook/4_no_supervisados/1_Kmeans-IrisCluster.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import time
import numpy as np
import pygame
import sys
import seaborn as sns
from pygame.locals import *
pygame.init()
class Network:
def __init__(self, xmin, xmax, ymin, ymax):
"""
xmin: 150,
xmax: 450,
ymin: 100,
ymax: 600
"""
self.StaticDiscipline = {
'xmin': xmin,
'xmax': xmax,
'ymin': ymin,
'ymax': ymax
}
def network(self, xsource, ysource = 100, Ynew = 600, divisor = 50): #ysource will always be 100
"""
For Network A
ysource: will always be 100
xsource: will always be between xmin and xmax (static discipline)
For Network B
ysource: will always be 600
xsource: will always be between xmin and xmax (static discipline)
"""
while True:
ListOfXsourceYSource = []
Xnew = np.random.choice([i for i in range(self.StaticDiscipline['xmin'], self.StaticDiscipline['xmax'])], 1)
#Ynew = np.random.choice([i for i in range(self.StaticDiscipline['ymin'], self.StaticDiscipline['ymax'])], 1)
source = (xsource, ysource)
target = (Xnew[0], Ynew)
#Slope and intercept
slope = (ysource - Ynew)/(xsource - Xnew[0])
intercept = ysource - (slope*xsource)
if (slope != np.inf) and (intercept != np.inf):
break
else:
continue
#print(source, target)
        # generate `divisor` evenly spaced x values along the line between xsource and Xnew (monotonically decreasing/increasing)
XNewList = [xsource]
        if xsource < Xnew[0]:  # compare to the scalar; Xnew is a 1-element array
differences = Xnew[0] - xsource
increment = differences /divisor
newXval = xsource
for i in range(divisor):
newXval += increment
XNewList.append(int(newXval))
else:
differences = xsource - Xnew[0]
decrement = differences /divisor
newXval = xsource
for i in range(divisor):
newXval -= decrement
XNewList.append(int(newXval))
#determine the values of y, from the new values of x, using y= mx + c
yNewList = []
for i in XNewList:
findy = (slope * i) + intercept#y = mx + c
yNewList.append(int(findy))
ListOfXsourceYSource = [(x, y) for x, y in zip(XNewList, yNewList)]
return XNewList, yNewList
def DefaultToPosition(self, x1, x2 = 300, divisor = 50):
XNewList = []
if x1 < x2:
differences = x2 - x1
increment = differences /divisor
newXval = x1
for i in range(divisor):
newXval += increment
XNewList.append(int(np.floor(newXval)))
else:
differences = x1 - x2
decrement = differences /divisor
newXval = x1
for i in range(divisor):
newXval -= decrement
XNewList.append(int(np.floor(newXval)))
return XNewList
# -
# # Define DQN Network
# +
from keras import Sequential, layers
from keras.optimizers import Adam
from keras.layers import Dense
from collections import deque
import numpy as np
class DQN:
def __init__(self):
self.learning_rate = 0.001
self.momentum = 0.95
self.eps_min = 0.1
self.eps_max = 1.0
self.eps_decay_steps = 2000000
self.replay_memory_size = 500
self.replay_memory = deque([], maxlen=self.replay_memory_size)
n_steps = 4000000 # total number of training steps
self.training_start = 10000 # start training after 10,000 game iterations
self.training_interval = 4 # run a training step every 4 game iterations
self.save_steps = 1000 # save the model every 1,000 training steps
self.copy_steps = 10000 # copy online DQN to target DQN every 10,000 training steps
self.discount_rate = 0.99
self.skip_start = 90 # Skip the start of every game (it's just waiting time).
self.batch_size = 100
self.iteration = 0 # game iterations
self.done = True # env needs to be reset
self.model = self.DQNmodel()
return
def DQNmodel(self):
model = Sequential()
model.add(Dense(64, input_shape=(1,), activation='relu'))
model.add(Dense(64, activation='relu'))
model.add(Dense(10, activation='softmax'))
        model.compile(loss='categorical_crossentropy', optimizer=Adam(learning_rate=self.learning_rate))  # `lr=` is deprecated in recent Keras
return model
def sample_memories(self, batch_size):
indices = np.random.permutation(len(self.replay_memory))[:batch_size]
cols = [[], [], [], [], []] # state, action, reward, next_state, continue
for idx in indices:
memory = self.replay_memory[idx]
for col, value in zip(cols, memory):
col.append(value)
cols = [np.array(col) for col in cols]
return (cols[0], cols[1], cols[2].reshape(-1, 1), cols[3],cols[4].reshape(-1, 1))
def epsilon_greedy(self, q_values, step):
self.epsilon = max(self.eps_min, self.eps_max - (self.eps_max-self.eps_min) * step/self.eps_decay_steps)
if np.random.rand() < self.epsilon:
return np.random.randint(10) # random action
else:
return np.argmax(q_values) # optimal action
# -
AgentA = DQN()
AgentB = DQN()
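The linear annealing inside `epsilon_greedy` can be checked in isolation (a small sketch restating the schedule with the same constants as `DQN.__init__`):

```python
# same constants as in DQN.__init__
eps_min, eps_max, eps_decay_steps = 0.1, 1.0, 2_000_000

def epsilon_at(step):
    # linear anneal from eps_max down to eps_min over eps_decay_steps, then flat
    return max(eps_min, eps_max - (eps_max - eps_min) * step / eps_decay_steps)

print(epsilon_at(0), epsilon_at(1_000_000), epsilon_at(5_000_000))  # 1.0 0.55 0.1
```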
# +
from keras.utils import to_categorical
import tensorflow as tf
import pygame
class pytennis:
def __init__(self, fps=50):
self.GeneralReward = False
self.net = Network(150, 450, 100, 600)
self.updateRewardA = 0
self.updateRewardB = 0
self.updateIter = 0
self.lossA = 0
self.lossB = 0
self.restart = False
# Testing
self.net = Network(150, 450, 100, 600)
self.NetworkA = self.net.network(
300, ysource=100, Ynew=600) # Network A
self.NetworkB = self.net.network(
200, ysource=600, Ynew=100) # Network B
# NetworkA
# display test plot of network A
#sns.jointplot(NetworkA[0], NetworkA[1])
# display test plot of network B
#sns.jointplot(NetworkB[0], NetworkB[1])
#self.out = self.net.DefaultToPosition(250)
pygame.init()
self.BLACK = (0, 0, 0)
self.myFontA = pygame.font.SysFont("Times New Roman", 25)
self.myFontB = pygame.font.SysFont("Times New Roman", 25)
self.myFontIter = pygame.font.SysFont('Times New Roman', 25)
self.FPS = fps
self.fpsClock = pygame.time.Clock()
def setWindow(self):
# set up the window
self.DISPLAYSURF = pygame.display.set_mode((600, 700), 0, 32)
pygame.display.set_caption(
'REINFORCEMENT LEARNING (Discrete Mathematics) - TABLE TENNIS')
# set up the colors
self.BLACK = (0, 0, 0)
self.WHITE = (255, 255, 255)
self.RED = (255, 0, 0)
self.GREEN = (0, 255, 0)
self.BLUE = (0, 0, 255)
return
def display(self):
self.setWindow()
self.DISPLAYSURF.fill(self.WHITE)
pygame.draw.rect(self.DISPLAYSURF, self.GREEN, (150, 100, 300, 500))
pygame.draw.rect(self.DISPLAYSURF, self.RED, (150, 340, 300, 20))
pygame.draw.rect(self.DISPLAYSURF, self.BLACK, (0, 20, 600, 20))
pygame.draw.rect(self.DISPLAYSURF, self.BLACK, (0, 660, 600, 20))
return
def reset(self):
return
def evaluate_state_from_last_coordinate(self, c):
"""
cmax: 450
cmin: 150
c definately will be between 150 and 450.
state0 - (150 - 179)
state1 - (180 - 209)
state2 - (210 - 239)
state3 - (240 - 269)
state4 - (270 - 299)
state5 - (300 - 329)
state6 - (330 - 359)
state7 - (360 - 389)
state8 - (390 - 419)
state9 - (420 - 450)
"""
if c >= 150 and c <= 179:
return 0
elif c >= 180 and c <= 209:
return 1
elif c >= 210 and c <= 239:
return 2
elif c >= 240 and c <= 269:
return 3
elif c >= 270 and c <= 299:
return 4
elif c >= 300 and c <= 329:
return 5
elif c >= 330 and c <= 359:
return 6
elif c >= 360 and c <= 389:
return 7
elif c >= 390 and c <= 419:
return 8
elif c >= 420 and c <= 450:
return 9
def evaluate_action(self, diff):
if (int(diff) <= 30):
return True
else:
return False
def randomVal(self, action):
"""
cmax: 450
cmin: 150
c definately will be between 150 and 450.
state0 - (150 - 179)
state1 - (180 - 209)
state2 - (210 - 239)
state3 - (240 - 269)
state4 - (270 - 299)
state5 - (300 - 329)
state6 - (330 - 359)
state7 - (360 - 389)
state8 - (390 - 419)
state9 - (420 - 450)
"""
if action == 0:
val = np.random.choice([i for i in range(150, 180)])
elif action == 1:
val = np.random.choice([i for i in range(180, 210)])
elif action == 2:
val = np.random.choice([i for i in range(210, 240)])
elif action == 3:
val = np.random.choice([i for i in range(240, 270)])
elif action == 4:
val = np.random.choice([i for i in range(270, 300)])
elif action == 5:
val = np.random.choice([i for i in range(300, 330)])
elif action == 6:
val = np.random.choice([i for i in range(330, 360)])
elif action == 7:
val = np.random.choice([i for i in range(360, 390)])
elif action == 8:
val = np.random.choice([i for i in range(390, 420)])
else:
val = np.random.choice([i for i in range(420, 450)])
return val
def stepA(self, action, count=0):
# playerA should play
if count == 0:
self.NetworkA = self.net.network(
self.ballx, ysource=100, Ynew=600) # Network A
self.bally = self.NetworkA[1][count]
self.ballx = self.NetworkA[0][count]
if self.GeneralReward == True:
self.playerax = self.randomVal(action)
else:
self.playerax = self.ballx
# soundObj = pygame.mixer.Sound('sound/sound.wav')
# soundObj.play()
# time.sleep(0.4)
# soundObj.stop()
else:
self.ballx = self.NetworkA[0][count]
self.bally = self.NetworkA[1][count]
obsOne = self.evaluate_state_from_last_coordinate(
int(self.ballx)) # last state of the ball
obsTwo = self.evaluate_state_from_last_coordinate(
int(self.playerbx)) # evaluate player bx
diff = np.abs(self.ballx - self.playerbx)
obs = obsTwo
reward = self.evaluate_action(diff)
done = True
info = str(diff)
return obs, reward, done, info
def stepB(self, action, count=0):
# playerB should play
if count == 0:
self.NetworkB = self.net.network(
self.ballx, ysource=600, Ynew=100) # Network B
self.bally = self.NetworkB[1][count]
self.ballx = self.NetworkB[0][count]
if self.GeneralReward:
self.playerbx = self.randomVal(action)
else:
self.playerbx = self.ballx
# soundObj = pygame.mixer.Sound('sound/sound.wav')
# soundObj.play()
# time.sleep(0.4)
# soundObj.stop()
else:
self.ballx = self.NetworkB[0][count]
self.bally = self.NetworkB[1][count]
obsOne = self.evaluate_state_from_last_coordinate(
int(self.ballx)) # last state of the ball
obsTwo = self.evaluate_state_from_last_coordinate(
int(self.playerax)) # evaluate player bx
diff = np.abs(self.ballx - self.playerax)
obs = obsTwo
reward = self.evaluate_action(diff)
done = True
info = str(diff)
return obs, reward, done, info
def computeLossA(self, reward):
if reward == 0:
self.lossA += 1
return
def computeLossB(self, reward):
if reward == 0:
self.lossB += 1
return
def render(self):
# display team players
self.PLAYERA = pygame.image.load('images/cap.jpg')
self.PLAYERA = pygame.transform.scale(self.PLAYERA, (50, 50))
self.PLAYERB = pygame.image.load('images/cap.jpg')
self.PLAYERB = pygame.transform.scale(self.PLAYERB, (50, 50))
self.ball = pygame.image.load('images/ball.png')
self.ball = pygame.transform.scale(self.ball, (15, 15))
self.playerax = 150
self.playerbx = 250
self.ballx = 250
self.bally = 300
count = 0
nextplayer = 'A'
# player A starts by playing with state 0
obsA, rewardA, doneA, infoA = 0, False, False, ''
obsB, rewardB, doneB, infoB = 0, False, False, ''
stateA = 0
stateB = 0
next_stateA = 0
next_stateB = 0
actionA = 0
actionB = 0
iterations = 20000
iteration = 0
restart = False
while iteration < iterations:
self.display()
self.randNumLabelA = self.myFontA.render(
'A (Win): '+str(self.updateRewardA) + ', A(loss): '+str(self.lossA), 1, self.BLACK)
self.randNumLabelB = self.myFontB.render(
'B (Win): '+str(self.updateRewardB) + ', B(loss): ' + str(self.lossB), 1, self.BLACK)
self.randNumLabelIter = self.myFontIter.render(
'Iterations: '+str(self.updateIter), 1, self.BLACK)
if nextplayer == 'A':
if count == 0:
# Online DQN evaluates what to do
q_valueA = AgentA.model.predict([stateA])
actionA = AgentA.epsilon_greedy(q_valueA, iteration)
# Online DQN plays
obsA, rewardA, doneA, infoA = self.stepA(
action=actionA, count=count)
next_stateA = actionA
# Let's memorize what just happened
AgentA.replay_memory.append(
(stateA, actionA, rewardA, next_stateA, 1.0 - doneA))
stateA = next_stateA
elif count == 49:
# Online DQN evaluates what to do
q_valueA = AgentA.model.predict([stateA])
actionA = AgentA.epsilon_greedy(q_valueA, iteration)
obsA, rewardA, doneA, infoA = self.stepA(
action=actionA, count=count)
next_stateA = actionA
self.updateRewardA += rewardA
self.computeLossA(rewardA)
# Let's memorize what just happened
AgentA.replay_memory.append(
(stateA, actionA, rewardA, next_stateA, 1.0 - doneA))
# restart the game if player A fails to get the ball, and let B start the game
if rewardA == 0:
self.restart = True
time.sleep(0.5)
nextplayer = 'B'
self.GeneralReward = False
else:
self.restart = False
self.GeneralReward = True
# Sample memories and use the target DQN to produce the target Q-Value
X_state_val, X_action_val, rewards, X_next_state_val, continues = (
AgentA.sample_memories(AgentA.batch_size))
next_q_values = AgentA.model.predict([X_next_state_val])
max_next_q_values = np.max(
next_q_values, axis=1, keepdims=True)
y_val = rewards + continues * AgentA.discount_rate * max_next_q_values
# Train the online DQN
AgentA.model.fit(X_state_val, tf.keras.utils.to_categorical(
X_next_state_val, num_classes=10), verbose=0)
nextplayer = 'B'
self.updateIter += 1
count = 0
# evaluate A
else:
# Online DQN evaluates what to do
q_valueA = AgentA.model.predict([stateA])
actionA = AgentA.epsilon_greedy(q_valueA, iteration)
# Online DQN plays
obsA, rewardA, doneA, infoA = self.stepA(
action=actionA, count=count)
next_stateA = actionA
# Let's memorize what just happened
AgentA.replay_memory.append(
(stateA, actionA, rewardA, next_stateA, 1.0 - doneA))
stateA = next_stateA
if nextplayer == 'A':
count += 1
else:
count = 0
else:
if count == 0:
# Online DQN evaluates what to do
q_valueB = AgentB.model.predict([stateB])
actionB = AgentB.epsilon_greedy(q_valueB, iteration)
# Online DQN plays
obsB, rewardB, doneB, infoB = self.stepB(
action=actionB, count=count)
next_stateB = actionB
# Let's memorize what just happened
AgentB.replay_memory.append(
(stateB, actionB, rewardB, next_stateB, 1.0 - doneB))
stateB = next_stateB
elif count == 49:
# Online DQN evaluates what to do
q_valueB = AgentB.model.predict([stateB])
actionB = AgentB.epsilon_greedy(q_valueB, iteration)
# Online DQN plays
obsB, rewardB, doneB, infoB = self.stepB(
action=actionB, count=count)
next_stateB = actionB
# Let's memorize what just happened
AgentB.replay_memory.append(
(stateB, actionB, rewardB, next_stateB, 1.0 - doneB))
stateB = next_stateB
self.updateRewardB += rewardB
self.computeLossB(rewardB)
# restart the game if player B fails to get the ball, and let A start the game
if rewardB == 0:
self.restart = True
time.sleep(0.5)
self.GeneralReward = False
nextplayer = 'A'
else:
self.restart = False
self.GeneralReward = True
# Sample memories and use the target DQN to produce the target Q-Value
X_state_val, X_action_val, rewards, X_next_state_val, continues = (
AgentB.sample_memories(AgentB.batch_size))
next_q_values = AgentB.model.predict([X_next_state_val])
max_next_q_values = np.max(
next_q_values, axis=1, keepdims=True)
y_val = rewards + continues * AgentB.discount_rate * max_next_q_values
# Train the online DQN
AgentB.model.fit(X_state_val, tf.keras.utils.to_categorical(
X_next_state_val, num_classes=10), verbose=0)
nextplayer = 'A'
self.updateIter += 1
# evaluate B
else:
# Online DQN evaluates what to do
q_valueB = AgentB.model.predict([stateB])
actionB = AgentB.epsilon_greedy(q_valueB, iteration)
# Online DQN plays
obsB, rewardB, doneB, infoB = self.stepB(
action=actionB, count=count)
next_stateB = actionB
# Let's memorize what just happened
AgentB.replay_memory.append(
(stateB, actionB, rewardB, next_stateB, 1.0 - doneB))
stateB = next_stateB
if nextplayer == 'B':
count += 1
else:
count = 0
iteration += 1
# CHECK BALL MOVEMENT
self.DISPLAYSURF.blit(self.PLAYERA, (self.playerax, 50))
self.DISPLAYSURF.blit(self.PLAYERB, (self.playerbx, 600))
self.DISPLAYSURF.blit(self.ball, (self.ballx, self.bally))
self.DISPLAYSURF.blit(self.randNumLabelA, (300, 630))
self.DISPLAYSURF.blit(self.randNumLabelB, (300, 40))
self.DISPLAYSURF.blit(self.randNumLabelIter, (50, 40))
# update last coordinate
# self.lastxcoordinate = self.ballx
pygame.display.update()
self.fpsClock.tick(self.FPS)
for event in pygame.event.get():
if event.type == QUIT:
AgentA.model.save('AgentA.h5')
AgentB.model.save('AgentB.h5')
pygame.quit()
sys.exit()
# -
tennis = pytennis(fps = 50)
tennis.reset()
tennis.render()
| Notebook/pytennis notebook (DQN).ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + id="eiXv9X1W17La"
# ! pip install -q kaggle
# ! mkdir /root/.kaggle
# ! cp kaggle.json /root/.kaggle/
# ! chmod 600 ~/.kaggle/kaggle.json
# + id="3GdR4hj52YIu" colab={"base_uri": "https://localhost:8080/"} outputId="0915486d-e25d-49d2-81fa-0b509c1f19b7"
# !kaggle datasets download -d mlg-ulb/creditcardfraud
# + id="cpQXTyZU3Bm_" colab={"base_uri": "https://localhost:8080/"} outputId="51566724-ea87-4604-ac85-3f357b775427"
# !unzip creditcardfraud.zip
# + id="dJgjHV9gSpfH"
# !pip install -q scikit-plot
# + id="V8zqjcu32brJ"
import numpy as np
import pandas as pd
import tensorflow as tf
import matplotlib.pyplot as plt
import seaborn as sns
import time
from tensorflow.keras.layers import Dense
from tensorflow.keras.models import Sequential
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from collections import Counter
from sklearn.model_selection import train_test_split
from imblearn.over_sampling import SMOTE
from sklearn.metrics import confusion_matrix
from sklearn.metrics import precision_score, recall_score, f1_score, roc_auc_score, accuracy_score, classification_report, plot_precision_recall_curve
from sklearn.metrics import roc_auc_score
import scikitplot as skplt
import warnings
warnings.filterwarnings("ignore")
# + id="3okNhRjb2uDp" colab={"base_uri": "https://localhost:8080/", "height": 226} outputId="d0874c06-f272-4956-ffaf-9977d3a1691f"
df = pd.read_csv('creditcard.csv')
df.head()
# + [markdown] id="e2YTfHG5LKeU"
# Checking for Missing Values
# + id="Uk4hl67n3Geq" colab={"base_uri": "https://localhost:8080/"} outputId="fc8a65b7-41e2-458e-8cb6-5c73186612cc"
df.isnull().sum().max()
# + id="_U1EqT5K3LzG" colab={"base_uri": "https://localhost:8080/", "height": 320} outputId="9f887232-6539-4832-9deb-4164e312f435"
df.describe()
# + id="vAsqAC7M3Q6Y" colab={"base_uri": "https://localhost:8080/"} outputId="75b74b4b-3571-4f22-990e-93f743a5a7e6"
print('Not Fraud:', df['Class'].value_counts()[0])
print('Fraud:', df['Class'].value_counts()[1])
# + id="jbpce-EN3h16" colab={"base_uri": "https://localhost:8080/", "height": 297} outputId="79bda8dd-54db-4df1-9fad-dc3c54b4d982"
sns.countplot(x='Class', data=df)
# + [markdown] id="4Tl1kOAeLOdQ"
# From the plot above, we can see that the Class column is highly imbalanced
# + [markdown] id="YTJAktPVLYY9"
# Scaling Data
# + id="LasoZVbj3xg5"
from sklearn.preprocessing import RobustScaler
# + [markdown] id="QBlr2KvVLbdM"
# RobustScaler scales with the median and interquartile range, so it is much less sensitive to outliers than mean/variance scaling
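As a toy illustration (synthetic numbers, not the credit-card data), the median/IQR scaling used by `RobustScaler` is far less affected by a single extreme value than mean/std scaling:

```python
import numpy as np
from sklearn.preprocessing import RobustScaler, StandardScaler

x = np.array([[1.0], [2.0], [3.0], [4.0], [1000.0]])  # one extreme outlier

robust = RobustScaler().fit_transform(x)      # (x - median) / IQR
standard = StandardScaler().fit_transform(x)  # (x - mean) / std

# The four inlier points keep a usable spread under RobustScaler,
# while StandardScaler crushes them toward zero because the outlier
# inflates both the mean and the standard deviation.
print(np.ptp(robust[:4]), np.ptp(standard[:4]))
```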
# + id="CphY7Qox37SP"
rob_scaler = RobustScaler()
df['scaled_amount'] = rob_scaler.fit_transform(df['Amount'].values.reshape(-1,1))
df.drop(['Time','Amount'], axis=1, inplace=True)
# + id="5HLGrMKG4CYW" colab={"base_uri": "https://localhost:8080/", "height": 226} outputId="ff1bb067-0fd9-4e77-ab16-f4e9ff22a240"
scaled_amount = df['scaled_amount']
df.drop(['scaled_amount'], axis=1, inplace=True)
df.insert(0, 'scaled_amount', scaled_amount)
df.head()
# + id="bsH__0134JGD"
X = df.iloc[:,:-1]
y = df['Class']
# + [markdown] id="ljt2S9ChdPkb"
# **Technique 1:** Training without resampling the dataset.
# + id="x3fxzBpY4nEA"
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42, stratify=y)
# + [markdown] id="Q5iR15XQLvzo"
# Using Logistic Regression because the Class column is binary
# + id="WtqtvH3M5Ff8"
lr= LogisticRegression(C=1.0,penalty='l2',class_weight="balanced",n_jobs=-1)
# + id="r0qnNN7o6-Af" colab={"base_uri": "https://localhost:8080/"} outputId="76787c18-8074-4baa-e0d7-90bd740800ef"
lr.fit(X_train,y_train)
# + id="LX-J5E0j7NH8"
labels = ['Not Fraud', 'Fraud']
prediction = lr.predict(X_test)
# + id="JjQT6j83VBgg"
y_proba = lr.predict_proba(X_test)
# + id="8BtslNSuJT-Y" colab={"base_uri": "https://localhost:8080/"} outputId="77f92974-f9d9-4eb6-fc84-77729fb1ebdf"
print("Accuracy Score for Logistic Regression is {0:.2f} %".format(accuracy_score(y_test, prediction)*100))
# + id="IlR5BJpq7ch9" colab={"base_uri": "https://localhost:8080/"} outputId="b06e078d-88e7-4b7e-c874-76abf9e7b0b1"
print(classification_report(y_test, prediction, target_names=labels))
# + id="r59RIdWU76Q0"
cf = confusion_matrix(y_test, prediction)
# + id="z8PZI296Ck2l" colab={"base_uri": "https://localhost:8080/", "height": 283} outputId="2e80b3e2-c73e-4009-c3ba-d2b179e58490"
sns.heatmap(cf,annot=True,fmt='d')
# + colab={"base_uri": "https://localhost:8080/", "height": 313} id="92T4lvhyU_Pm" outputId="04d981c9-a3a7-4aef-d5fa-2282570439bf"
skplt.metrics.plot_precision_recall(y_test, y_proba)
# + [markdown] id="Vnbs9cZ5MG0F"
# Trying Random Forest Classifier
# + id="yyMC5rn1C1mk"
rfc = RandomForestClassifier(class_weight="balanced",n_jobs=-1)
rfc.fit(X_train, y_train)
y_pred = rfc.predict(X_test)
# + id="r2eByLhDVhMp"
y_proba = rfc.predict_proba(X_test)
# + id="jDKdbKHLJL31" colab={"base_uri": "https://localhost:8080/"} outputId="673193ef-46f7-4fc4-d4f7-2057821c4c69"
print("Accuracy Score for Random Forest is {0:.2f} %".format(accuracy_score(y_test, y_pred)*100))
# + id="zwumODKmG1fL" colab={"base_uri": "https://localhost:8080/"} outputId="b070df13-bd47-429b-e962-6422c74be763"
print(classification_report(y_test, y_pred, target_names=labels))
# + id="p1tD0GTMI-Zl"
cf2 = confusion_matrix(y_test, y_pred)
# + id="pyKXz7Q9JHjY" colab={"base_uri": "https://localhost:8080/", "height": 283} outputId="ce7d1cda-b394-4bd5-bca7-8f306a4c3038"
sns.heatmap(cf2,annot=True,fmt='d')
# + colab={"base_uri": "https://localhost:8080/", "height": 313} id="qB4mhD55UvhG" outputId="b43a58e3-2bad-4a02-a813-55f152e3146b"
skplt.metrics.plot_precision_recall(y_test, y_proba)
# + [markdown] id="nwt-mqMZMMb2"
# It is clear from the confusion matrices above that the Random Forest classifier gives the best accuracy
# + [markdown] id="XvCa9oAvF0qL"
# **Using Deep Learning**
# + id="DGntLnf2v1_n"
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42, stratify=y)
# + colab={"base_uri": "https://localhost:8080/"} id="NlmXy3U3v5ul" outputId="baa58300-8551-41d7-f718-9202b0af53bf"
y_train.value_counts()
# + id="vO5yPISCwAiU"
n_inputs=len(X.columns)
# + id="ShGrdfLC1QR3"
from tensorflow.keras.callbacks import ModelCheckpoint,EarlyStopping
checkpointer = ModelCheckpoint('fraud.hdf5', verbose=1, save_best_only=True)
earlystopper = EarlyStopping(monitor='val_loss', patience=5, verbose=1)
# + id="koI8Xm-iwI2s"
# define model
model = Sequential()
# define first hidden layer and visible layer
model.add(Dense(50, input_dim=n_inputs, activation='relu', kernel_initializer='he_uniform'))
# define output layer
model.add(Dense(1, activation='sigmoid'))
# define loss and optimizer
model.compile(loss='binary_crossentropy', optimizer='adam', metrics='acc')
# + id="wEvlSQw9F95-"
from sklearn.utils.class_weight import compute_class_weight
class_weights = compute_class_weight(class_weight='balanced', classes=np.unique(y_train), y=y_train)
class_weight_dict = dict(enumerate(class_weights))
# + colab={"base_uri": "https://localhost:8080/"} id="tDP7HBldHksz" outputId="02129b99-7d26-4824-e8f7-bd2232733c4e"
class_weight_dict
# + colab={"base_uri": "https://localhost:8080/"} id="RFEPhrMOwuHp" outputId="47a14dad-f21d-473e-b4fd-2d3eefb8285d"
model.fit(X_train,y_train,epochs=30,class_weight=class_weight_dict,validation_data = (X_test, y_test),callbacks=[checkpointer, earlystopper],workers=4)
# + id="R0ubCIud2yKM"
model.load_weights('fraud.hdf5')
# + id="hzbCC8qtxhC6"
y_proba=model.predict(X_test)
# + id="tBJpWDfKKPxs"
y_pred = (y_proba > 0.5)
# + id="po206YFe0hL9" colab={"base_uri": "https://localhost:8080/"} outputId="91f23136-517c-4f94-9f7c-844456cdb613"
print("Accuracy Score for Neural Network is {0:.2f} %".format(accuracy_score(y_test, y_pred)*100))
# + colab={"base_uri": "https://localhost:8080/"} id="ZX2bT_cdKORs" outputId="30bd9ed9-233c-4043-a0c9-01fc30563e82"
print(classification_report(y_test, y_pred, target_names=labels))
# + colab={"base_uri": "https://localhost:8080/", "height": 283} id="axd2XFQSKpTr" outputId="5fbc6bc0-37d4-457a-ebaa-e9b0aefc3c3a"
cf3 = confusion_matrix(y_test, y_pred)
sns.heatmap(cf3,annot=True,fmt='d')
# + id="Jxmm66jWUEY2"
# The single sigmoid output is P(fraud), so the two-class
# probabilities are always [1 - p, p]
new_proba = [[1 - prob[0], prob[0]] for prob in y_proba]
# + colab={"base_uri": "https://localhost:8080/", "height": 313} id="pOdpIyZM0jB3" outputId="577c370f-5083-4aec-a7e3-9610f5453147"
skplt.metrics.plot_precision_recall(y_test, new_proba)
# + [markdown] id="zrBpcLvXWDys"
# **Technique 2:** Creating synthetic points for the imbalanced class using SMOTE and then training on the resampled dataset.
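SMOTE's core idea can be sketched in a few lines of NumPy (an illustrative toy, not the `imblearn` implementation): each synthetic minority point is a random interpolation between a minority sample and another nearby minority sample:

```python
import numpy as np

rng = np.random.default_rng(0)
minority = rng.normal(loc=5.0, scale=0.5, size=(10, 2))  # toy minority class

def smote_like(X, n_new, rng):
    # Pick a random sample and a random "neighbor", then interpolate:
    # x_new = x + lam * (neighbor - x), with lam in [0, 1).
    synthetic = []
    for _ in range(n_new):
        i, j = rng.choice(len(X), size=2, replace=False)
        lam = rng.random()
        synthetic.append(X[i] + lam * (X[j] - X[i]))
    return np.array(synthetic)

new_points = smote_like(minority, n_new=40, rng=rng)
```

The real SMOTE restricts the neighbor to the k nearest minority points; the interpolation step is the same.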
# + id="qbW5cggAWDWr"
X_smote, y_smote = SMOTE(sampling_strategy='minority').fit_resample(X, y)
# + colab={"base_uri": "https://localhost:8080/"} id="xyP6lG0RWUBD" outputId="04ed0225-9b80-48d2-c72d-bf73a978bb8b"
print("SMOTE data distribution: {}".format(Counter(y_smote)))
# + id="h1LTFm_SWXy0"
X_train, X_test, y_train, y_test = train_test_split(X_smote, y_smote, test_size=0.2, random_state=42, stratify=y_smote)
# + id="stzK4AF-Wvy1"
estimators = {'lr':LogisticRegression(class_weight="balanced",n_jobs=-1),
'rf' : RandomForestClassifier(class_weight="balanced",n_jobs=-1)}
# + id="5Qb2jra0Wprk"
acc = []
est = []
for estimator in estimators.values():
estimator.fit(X_train, y_train)
y_pred = estimator.predict(X_test)
est.append(estimator)
acc.append(accuracy_score(y_test,y_pred))
# + colab={"base_uri": "https://localhost:8080/"} id="bCLvxdWyYjxp" outputId="4f909d2e-8c38-4c18-a128-6213b3043200"
acc
# + colab={"base_uri": "https://localhost:8080/"} id="yklSMG42YCTq" outputId="a8cd6f81-bb57-4eda-89fe-19faaa8bef3f"
est[np.argmax(acc)]
# + [markdown] id="2YmbF4voYWIX"
# Random Forest achieves higher accuracy than Logistic Regression
# + id="8nK0Z5hjYw9d"
y_pred = est[np.argmax(acc)].predict(X_test)
y_proba = est[np.argmax(acc)].predict_proba(X_test)
# + colab={"base_uri": "https://localhost:8080/"} id="1cfgA-PrZJrG" outputId="15aa4952-48ad-442c-f8d4-80fedc59a2fb"
print("Accuracy Score for Random Forest is {0:.2f} %".format(accuracy_score(y_test, y_pred)*100))
# + colab={"base_uri": "https://localhost:8080/"} id="fZe0Y5gtYr34" outputId="71fbbc68-5bd6-483c-aa93-74c0a65250fb"
print(classification_report(y_test, y_pred, target_names=labels))
# + colab={"base_uri": "https://localhost:8080/", "height": 283} id="YOeUJAexYGBg" outputId="c61dba85-f1f9-483d-cd92-12619e7a1581"
sns.heatmap(confusion_matrix(y_test,y_pred),annot=True,fmt='d')
# + colab={"base_uri": "https://localhost:8080/", "height": 313} id="2EDuxIJtY_L3" outputId="1ce3545a-d612-43b8-b6f7-eb8af98a7d0d"
skplt.metrics.plot_precision_recall(y_test, y_proba)
# + id="yiZmXHJUZSgA"
checkpointer = ModelCheckpoint('fraud.hdf5', verbose=1, save_best_only=True)
earlystopper = EarlyStopping(monitor='val_loss', patience=5, verbose=1)
# + id="36T09HOMZYXT"
# define model
model = Sequential()
# define first hidden layer and visible layer
model.add(Dense(50, input_dim=n_inputs, activation='relu', kernel_initializer='he_uniform'))
# define output layer
model.add(Dense(1, activation='sigmoid'))
# define loss and optimizer
model.compile(loss='binary_crossentropy', optimizer='adam', metrics='acc')
# + colab={"base_uri": "https://localhost:8080/"} id="SFJWVhLkZddw" outputId="d9fd6bc0-95c5-409f-a67d-ce417dae12e4"
model.fit(X_train,y_train,epochs=30,validation_data = (X_test, y_test),callbacks=[checkpointer, earlystopper],workers=4)
# + id="MotgAR3zZipY"
model.load_weights('fraud.hdf5')
# + id="IcmOsqDubk9A"
y_proba = model.predict(X_test)
# + id="O5HT70TTbn8e"
y_pred = (y_proba>0.5)
# + colab={"base_uri": "https://localhost:8080/"} id="icNUwEaSbq5x" outputId="1966b089-d3be-41c5-d82b-7e4b3470fa93"
print("Accuracy Score for Neural Network is {0:.2f} %".format(accuracy_score(y_test, y_pred)*100))
# + colab={"base_uri": "https://localhost:8080/"} id="7IR8LuJzbws7" outputId="2da20d12-d617-4336-8f45-091109e0d6aa"
print(classification_report(y_test, y_pred, target_names=labels))
# + colab={"base_uri": "https://localhost:8080/", "height": 283} id="TMDmnJMZbylU" outputId="1502e114-bc9c-4372-c936-9451ae9e1e09"
sns.heatmap(confusion_matrix(y_test,y_pred),annot=True,fmt='d')
# + id="c6L8Ga6hbg5i"
# The single sigmoid output is P(fraud), so the two-class
# probabilities are always [1 - p, p]
new_proba = [[1 - prob[0], prob[0]] for prob in y_proba]
# + colab={"base_uri": "https://localhost:8080/", "height": 313} id="QIwk7grwb4VS" outputId="dc6cb084-51c2-4d2f-814b-0501f95bcfdc"
skplt.metrics.plot_precision_recall(y_test, np.array(new_proba))
# + [markdown] id="J4xHGEvIcghD"
# # VERDICT:
#
# **Random Forest outperformed both Logistic Regression and Neural Networks for this dataset.**
| fraud_detection.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # VESIcal Demo
import VESIcal as v
myfile = v.BatchFile('cerro_negro.xlsx')
myfile.get_data()
v.calib_plot(user_data=myfile)
dissolved = myfile.calculate_dissolved_volatiles(temperature=1100, pressure=3000, X_fluid=1)
dissolved_IaconoMarziano = myfile.calculate_dissolved_volatiles(temperature=1100, pressure=3000, X_fluid=1, model="IaconoMarziano")
dissolved_IaconoMarziano
eqfluid = myfile.calculate_equilibrium_fluid_comp(temperature=1100, pressure=1000)
eqfluid
satPs = myfile.calculate_saturation_pressure(temperature=1100)
satPs
mysample = myfile.get_sample_composition('19*', normalization='standard', asSampleClass=True)
mysample.get_composition()["H2O"]
isobars, isopleths = v.calculate_isobars_and_isopleths(sample=mysample,
temperature=1100,
pressure_list=[2000, 3000, 4000],
isopleth_list=[0.25,0.5,0.75],
print_status=True,
smooth_isobars=False).result
# +
fig, ax = v.plot(isobars=isobars, isopleths=isopleths, smooth_isobars=True)
fig, ax = v.plot(isobars=isobars, isopleths=isopleths, smooth_isobars=True,
custom_H2O=[mysample.get_composition()["H2O"]], custom_CO2=[mysample.get_composition()["CO2"]],
custom_labels=["19*"])
v.show()
# -
closed_path = v.calculate_degassing_path(sample=mysample, temperature=1100).result
open_path = v.calculate_degassing_path(sample=mysample, temperature=1100, fractionate_vapor=1.0).result
half_path = v.calculate_degassing_path(sample=mysample, temperature=1100, fractionate_vapor=0.5).result
exsolved_path = v.calculate_degassing_path(sample=mysample, temperature=1100, init_vapor=2.0).result
# +
fig, ax = v.plot(isobars=isobars, isopleths=isopleths, smooth_isobars=True,
degassing_paths=[closed_path, open_path, half_path, exsolved_path],
degassing_path_labels=["Closed", "Open", "Half", "Exsolved"])
v.show()
# -
| docs/videos/Verbose_Demo_1/VESIcal_demo_video1.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
# %pylab inline
import pandas as pd
import numpy as np
import seaborn as sns
from pandarallel import pandarallel
pandarallel.initialize()
from arnie.free_energy import free_energy
import arnie.utils as utils
from arnie.utils import write_constraints
# -
# Replicating some of the analysis done in
#
# <NAME>., <NAME>., <NAME>., <NAME>., <NAME>., <NAME>., ... & <NAME>. (2019). _Quantitative high-throughput tests of ubiquitous RNA secondary structure prediction algorithms via RNA/protein binding._ BioRxiv, 571588.
#
# https://www.biorxiv.org/content/10.1101/571588v1
# +
df = pd.read_excel("/Users/hwayment/das/github/backup_EB/ExternalDatasets/JarmoskaitePUMddG/PUM_analysis.xlsx")
df = df.dropna(subset=['ddG_25_exp'])
df['Sequence'] = [x.replace('T','U') for x in df['Sequence']]
#find PUM binding motif and write corresponding constraint
df['binding_constraint'] = [write_constraints(seq, motif=('UGUAUAUn','xxxxxxxx')) for seq in df['Sequence']]
#what do these look like?
print(df['Sequence'][1])
print(df['binding_constraint'][1])
# -
utils.load_package_locations()
# +
pkg_options = {'vienna_2':{'package':'vienna'},
'vienna_2_25':{'package':'vienna', 'T':25},
'vienna_2_60':{'package':'vienna', 'T':60},
'contrafold':{'package':'contrafold'},
'eternafold':{'package':'eternafold'}}
for pkg, options_dct in pkg_options.items():
df['dG_%s' % pkg] = df.parallel_apply(lambda row: free_energy(row['Sequence'], **options_dct), axis=1)
df['dG_constr_%s' % pkg] = df.parallel_apply(lambda row: free_energy(row['Sequence'], **options_dct, constraint=row['binding_constraint']), axis=1)
df['ddG_%s' % pkg] = df['dG_constr_%s' % pkg] - df['dG_%s' % pkg]
# +
def bootstrap_inds(len_item):
return np.random.choice(range(len_item), len_item)
def rmse(x,y):
return np.sqrt(np.mean(np.square(x-y)))
n_bootstraps=1000
rms = np.zeros([n_bootstraps,5])
corr = np.zeros([n_bootstraps,5])
for j in range(n_bootstraps):
lst=[]
bs_inds = bootstrap_inds(len(df))
for i, pkg in enumerate(['vienna_2', 'vienna_2_25','vienna_2_60','contrafold', 'eternafold']):
x = df['ddG_25_exp'].values[bs_inds]
y = df['ddG_%s' % pkg].values[bs_inds]
c = np.corrcoef(x,y)[0][1]
r = rmse(x,y)
rms[j,i] = r
corr[j,i] = c
# +
def plot_diag(): plot([0,6],[0,6],linestyle=':',c='k')
def rmse(x,y):
return np.sqrt(np.mean(np.square(x-y)))
figure(figsize=(8,6))
titles=['Vienna 2 (25˚C)','Vienna 2 (37˚C)','Vienna 2 (60˚C)','CONTRAfold 2', 'EternaFold']
ctr=1
for i, pkg in enumerate(['vienna_2_25', 'vienna_2', 'vienna_2_60', 'contrafold', 'eternafold']):
#print("%s %.3f %.3f" %(pkg, np.mean(lst), np.std(lst)))
subplot(2,3,ctr)
C = np.corrcoef(df['ddG_25_exp'], df['ddG_%s' % pkg])[0][1]
title(titles[i])
text(0,5.15,"RMSE=%.2f(%.2f)" % (np.mean(rms[:,i]), np.std(rms[:,i])))
text(0,5.75,"Corr=%.2f(%.2f)" % (np.mean(corr[:,i]), np.std(corr[:,i])))
sns.scatterplot(x='ddG_%s' % pkg, y='ddG_25_exp', data=df, linewidth=0, alpha=0.7)
xlabel("ddG %s (kcal/mol)"% titles[i])
ylabel("ddG_exp (kcal/mol)")
ctr+=1
if ctr==4:
ctr+=1
plot_diag()
tight_layout()
savefig('PUM_comparison.pdf',bbox_inches='tight')
# -
# export bootstrap correlations for main text plot
corr_df = pd.DataFrame({'Corr': np.concatenate([corr[:,0],corr[:,3],corr[:,4]]),
'package':['vienna_2']*1000+['contrafold_2']*1000+['eternafold_B']*1000,
'ID':['PUM']*3000})
corr_df.to_json('PUM_correlation_bootstraps.json')
| analysis/SI_PUM_analysis.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="YF6ONU8KG5U4" colab_type="text"
# # q1_softmax
# + id="p4lG0FurG1w-" colab_type="code" colab={}
import numpy as np
# + id="DtjGcdgwG-1-" colab_type="code" colab={}
def softmax(x):
"""Compute the softmax function for each row of the input x.
It is crucial that this function is optimized for speed because
it will be used frequently in later code. You might find numpy
functions np.exp, np.sum, np.reshape, np.max, and numpy
broadcasting useful for this task.
Numpy broadcasting documentation:
http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html
You should also make sure that your code works for a single
D-dimensional vector (treat the vector as a single row) and
for N x D matrices. This may be useful for testing later. Also,
make sure that the dimensions of the output match the input.
You must implement the optimization in problem 1(a) of the
written assignment!
Arguments:
x -- A D dimensional vector or N x D dimensional numpy matrix.
Return:
x -- You are allowed to modify x in-place
"""
orig_shape = x.shape
e_x = np.exp(x - np.max(x, axis=-1, keepdims=True))
x = e_x / np.sum(e_x, axis=-1, keepdims=True)
assert x.shape == orig_shape
return x
# + id="F13KHrBSHAYF" colab_type="code" colab={}
def test_softmax_basic():
"""
Some simple tests to get you started.
Warning: these are not exhaustive.
"""
print("Running basic tests...")
test1 = softmax(np.array([1,2]))
print(test1)
ans1 = np.array([0.26894142, 0.73105858])
assert np.allclose(test1, ans1, rtol=1e-05, atol=1e-06)
test2 = softmax(np.array([[1001,1002],[3,4]]))
print(test2)
ans2 = np.array([
[0.26894142, 0.73105858],
[0.26894142, 0.73105858]])
assert np.allclose(test2, ans2, rtol=1e-05, atol=1e-06)
test3 = softmax(np.array([[-1001,-1002]]))
print(test3)
ans3 = np.array([0.73105858, 0.26894142])
assert np.allclose(test3, ans3, rtol=1e-05, atol=1e-06)
print("You should be able to verify these results by hand!\n")
# + id="DGFo6PdyHCy5" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 221} outputId="7aeca094-b83c-4b18-eda1-30d7b1e14561"
def test_softmax():
"""
Use this space to test your softmax implementation by running:
python q1_softmax.py
This function will not be called by the autograder, nor will
your tests be graded.
"""
print("Running your tests...")
### YOUR CODE HERE
test4 = softmax(np.array([[100, 100, 100],
[100, 100, 100]]))
print(test4)
ans4 = np.array([[0.33333, 0.33333, 0.33333],
[0.33333, 0.33333, 0.33333]])
assert np.allclose(test4, ans4, rtol=1e-05, atol=1e-06)
print("All tests passed!\n")
### END YOUR CODE
if __name__ == "__main__":
test_softmax_basic()
test_softmax()
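One property worth checking explicitly (a standalone sketch re-defining a compact `softmax`): subtracting any constant from a row leaves the output unchanged, which is exactly why subtracting the row max is a safe way to avoid overflow:

```python
import numpy as np

def softmax(x):
    # Row-wise softmax with the max-subtraction trick for stability.
    e_x = np.exp(x - np.max(x, axis=-1, keepdims=True))
    return e_x / np.sum(e_x, axis=-1, keepdims=True)

x = np.array([[1.0, 2.0, 3.0]])
assert np.allclose(softmax(x), softmax(x - 100.0))  # shift invariance
# Without the subtraction, np.exp(1000) overflows to inf; with it,
# very large inputs are handled gracefully.
assert np.allclose(softmax(np.array([1000.0, 1000.0])), [0.5, 0.5])
```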
# + [markdown] id="qrKJYamWHVPQ" colab_type="text"
# # q2_sigmoid
# + id="r_ny9Um-Hd9i" colab_type="code" colab={}
# libraries imported
# import numpy as np
# + id="O-0iPpgSHnP0" colab_type="code" colab={}
def sigmoid(x):
"""
Compute the sigmoid function for the input here.
Arguments:
x -- A scalar or numpy array.
Return:
s -- sigmoid(x)
"""
### YOUR CODE HERE
s = 1 / (1 + np.exp(-x))
### END YOUR CODE
return s
# + id="fjZ_1om6HpUM" colab_type="code" colab={}
def sigmoid_grad(s):
"""
Compute the gradient for the sigmoid function here. Note that
for this implementation, the input s should be the sigmoid
function value of your original input x.
Arguments:
s -- A scalar or numpy array.
Return:
ds -- Your computed gradient.
"""
### YOUR CODE HERE
ds = s * (1 - s)
### END YOUR CODE
return ds
# + id="-F719O9ZHq5J" colab_type="code" colab={}
def test_sigmoid_basic():
"""
Some simple tests to get you started.
Warning: these are not exhaustive.
"""
print("Running basic tests...")
x = np.array([[1, 2], [-1, -2]])
f = sigmoid(x)
g = sigmoid_grad(f)
print(f)
f_ans = np.array([
[0.73105858, 0.88079708],
[0.26894142, 0.11920292]])
assert np.allclose(f, f_ans, rtol=1e-05, atol=1e-06)
print(g)
g_ans = np.array([
[0.19661193, 0.10499359],
[0.19661193, 0.10499359]])
assert np.allclose(g, g_ans, rtol=1e-05, atol=1e-06)
print("You should verify these results by hand!\n")
# + id="0Xz6tttMHtf6" colab_type="code" colab={}
def test_sigmoid():
"""
Use this space to test your sigmoid implementation by running:
python q2_sigmoid.py
This function will not be called by the autograder, nor will
your tests be graded.
"""
print("Running your tests...")
### YOUR CODE HERE
# raise NotImplementedError
### END YOUR CODE
# + id="El7ywHVWHvgO" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 153} outputId="0cbdad51-6502-4b4d-95c6-0898e9225573"
if __name__ == "__main__":
test_sigmoid_basic()
test_sigmoid()
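As a quick numerical cross-check (a self-contained sketch re-defining the two functions above), the analytic gradient `s * (1 - s)` should match a centered finite difference of `sigmoid` itself:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sigmoid_grad(s):
    # Note: takes the sigmoid *value* s, not the original input x.
    return s * (1.0 - s)

x = np.linspace(-3, 3, 7)
h = 1e-5
numeric = (sigmoid(x + h) - sigmoid(x - h)) / (2 * h)  # centered difference
analytic = sigmoid_grad(sigmoid(x))
assert np.allclose(numeric, analytic, atol=1e-8)
```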
# + [markdown] id="BAa2ugnOHHW2" colab_type="text"
# # q2_gradcheck
# + id="UWSL7guTHFcd" colab_type="code" colab={}
# libraries imported
# import numpy as np
import random
# + id="e0ixLiecHMk-" colab_type="code" colab={}
# First implement a gradient checker by filling in the following functions
def gradcheck_naive(f, x):
""" Gradient check for a function f.
Arguments:
f -- a function that takes a single argument and outputs the
cost and its gradients
x -- the point (numpy array) to check the gradient at
"""
rndstate = random.getstate()
random.setstate(rndstate)
fx, grad = f(x) # Evaluate function value at original point
h = 1e-4 # Do not change this!
# Iterate over all indexes ix in x to check the gradient.
it = np.nditer(x, flags=['multi_index'], op_flags=['readwrite'])
while not it.finished:
ix = it.multi_index
# Try modifying x[ix] with h defined above to compute numerical
# gradients (numgrad).
# Use the centered difference of the gradient.
# It has smaller asymptotic error than forward / backward difference
# methods. If you are curious, check out here:
# https://math.stackexchange.com/questions/2326181/when-to-use-forward-or-central-difference-approximations
# Make sure you call random.setstate(rndstate)
# before calling f(x) each time. This will make it possible
# to test cost functions with built in randomness later.
### YOUR CODE HERE:
x_0 = x[ix]
random.setstate(rndstate)
x[ix] = x_0 + h
f1 = f(x)[0]
random.setstate(rndstate)
x[ix] = x_0 - h
f2 = f(x)[0]
x[ix] = x_0
numgrad = (f1 - f2) / (2 * h)
### END YOUR CODE
# Compare gradients
reldiff = abs(numgrad - grad[ix]) / max(1, abs(numgrad), abs(grad[ix]))
if reldiff > 1e-5:
print("Gradient check failed.")
print("First gradient error found at index %s" % str(ix))
print("Your gradient: %f \t Numerical gradient: %f" % (grad[ix], numgrad))
return
it.iternext() # Step to next dimension
print("Gradient check passed!")
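# The comment inside `gradcheck_naive` recommends the centered difference; this small standalone check (added for illustration, not part of the assignment) shows its error shrinking much faster than the forward difference for $f(x) = x^3$ at $x = 1$:

```python
def forward_diff(f, x, h):
    # one-sided difference: error is O(h)
    return (f(x + h) - f(x)) / h

def centered_diff(f, x, h):
    # centered difference: error is O(h**2)
    return (f(x + h) - f(x - h)) / (2 * h)

cube = lambda x: x ** 3
true_grad = 3.0  # d/dx x**3 evaluated at x = 1
h = 1e-4
fwd_err = abs(forward_diff(cube, 1.0, h) - true_grad)
ctr_err = abs(centered_diff(cube, 1.0, h) - true_grad)
print(fwd_err, ctr_err)  # the centered error is several orders of magnitude smaller
```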
# + id="Il1Wsc7AHPvQ" colab_type="code" colab={}
def sanity_check():
"""
Some basic sanity checks.
"""
quad = lambda x: (np.sum(x ** 2), x * 2)
print("Running sanity checks...")
gradcheck_naive(quad, np.array(123.456)) # scalar test
gradcheck_naive(quad, np.random.randn(3,)) # 1-D test
gradcheck_naive(quad, np.random.randn(4,5)) # 2-D test
print("")
# + id="layJmrEfHRNF" colab_type="code" colab={}
def your_sanity_checks():
"""
Use this space to add any additional sanity checks by running:
python q2_gradcheck.py
This function will not be called by the autograder, nor will
your additional tests be graded.
"""
print("Running your sanity checks...")
### YOUR CODE HERE
# raise NotImplementedError
### END YOUR CODE
# + id="o8W-zj-IHScE" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 119} outputId="77fe9ac9-b088-451d-acaf-09b7716e06a9"
if __name__ == "__main__":
sanity_check()
your_sanity_checks()
# + [markdown] id="3Ce4CRmyHz4S" colab_type="text"
# # q2_neural
# + id="_NPOjI4UHT_m" colab_type="code" colab={}
# #!/usr/bin/env python
# import numpy as np
# import random
# from q1_softmax import softmax
# from q2_sigmoid import sigmoid, sigmoid_grad
# from q2_gradcheck import gradcheck_naive
# + id="2iEPA2skH4nC" colab_type="code" colab={}
def forward_backward_prop(X, labels, params, dimensions):
"""
Forward and backward propagation for a two-layer sigmoidal network
Compute the forward propagation and the cross-entropy cost, then
the backward propagation for the gradients of all parameters.
Notice the gradients computed here are different from the gradients in
the assignment sheet: they are w.r.t. weights, not inputs.
Arguments:
X -- M x Dx matrix, where each row is a training example x.
labels -- M x Dy matrix, where each row is a one-hot vector.
params -- Model parameters, these are unpacked for you.
dimensions -- A tuple of input dimension, number of hidden units
and output dimension
"""
### Unpack network parameters (do not modify)
ofs = 0
Dx, H, Dy = (dimensions[0], dimensions[1], dimensions[2])
W1 = np.reshape(params[ofs:ofs+ Dx * H], (Dx, H))
ofs += Dx * H
b1 = np.reshape(params[ofs:ofs + H], (1, H))
ofs += H
W2 = np.reshape(params[ofs:ofs + H * Dy], (H, Dy))
ofs += H * Dy
b2 = np.reshape(params[ofs:ofs + Dy], (1, Dy))
# Note: compute cost based on `sum` not `mean`.
### YOUR CODE HERE: forward propagation
h = sigmoid(np.dot(X, W1) + b1)
y_ = softmax(np.dot(h, W2) + b2)
cost = - np.sum(labels * np.log(y_))
### END YOUR CODE
### YOUR CODE HERE: backward propagation
dZ2 = y_ - labels
gradW2 = np.dot(h.T, dZ2)
gradb2 = np.sum(dZ2, axis=0, keepdims=True)
dh = np.dot(dZ2, W2.T)
dz = sigmoid_grad(h) * dh
gradW1 = np.dot(X.T, dz)
gradb1 = np.sum(dz, axis=0, keepdims=True)
### END YOUR CODE
### Stack gradients (do not modify)
grad = np.concatenate((gradW1.flatten(), gradb1.flatten(),
gradW2.flatten(), gradb2.flatten()))
return cost, grad
# + id="Tv5nrB69INc6" colab_type="code" colab={}
def sanity_check():
"""
Set up fake data and parameters for the neural network, and test using
gradcheck.
"""
print("Running sanity check...")
N = 20
dimensions = [10, 5, 10]
data = np.random.randn(N, dimensions[0]) # each row will be a datum
labels = np.zeros((N, dimensions[2]))
for i in range(N):
labels[i, random.randint(0,dimensions[2]-1)] = 1
params = np.random.randn((dimensions[0] + 1) * dimensions[1] + (
dimensions[1] + 1) * dimensions[2], )
gradcheck_naive(lambda params:
forward_backward_prop(data, labels, params, dimensions), params)
# + id="tX0MaX_5IQRI" colab_type="code" colab={}
def your_sanity_checks():
"""
Use this space to add any additional sanity checks by running:
python q2_neural.py
This function will not be called by the autograder, nor will
your additional tests be graded.
"""
print("Running your sanity checks...")
### YOUR CODE HERE
# raise NotImplementedError
### END YOUR CODE
# + id="BCP0gTABIRk5" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 68} outputId="63525b33-cbe2-404f-f339-86cd7fbeccf0"
if __name__ == "__main__":
sanity_check()
your_sanity_checks()
# + id="3XNjBLJ_ISuw" colab_type="code" colab={}
| Assignments/assignment1/CS224n_assignment1.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Reading MonaLIA RDF Data
# To run the local SPARQL over HTTP I've installed the Apache Jena Fuseki service, started it with the 20 GB memory option, and uploaded the Joconde files:
#
# Joconde_2018-03-21.ttl
# reprskos.rdf
# domnskos.rdf
# skos.rdf
# monalia_skos.rdf
#
# The dataset can be managed from http://localhost:3030
#
# https://stackoverflow.com/questions/13897712/where-do-i-test-my-queries-for-my-rdf-written-in-sparql
# http://jena.apache.org/documentation/serving_data/index.html#download-fuseki1
#
# For RDF data to pandas dataframe conversion all credit to Ted Lawless
# https://lawlesst.github.io/notebook/sparql-dataframe.html
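# A minimal sketch of that conversion step (an illustrative re-implementation, not the MonaLIA package code): tabulate the JSON document returned by SPARQLWrapper into a pandas DataFrame.

```python
import pandas as pd

def bindings_to_dataframe(result):
    """Tabulate a SPARQL SELECT JSON response (as returned by
    SPARQLWrapper's query().convert() with the JSON format) into a DataFrame."""
    cols = result['head']['vars']
    rows = [[binding.get(col, {}).get('value') for col in cols]
            for binding in result['results']['bindings']]
    return pd.DataFrame(rows, columns=cols)

# Illustrative response document (made-up values)
example = {'head': {'vars': ['ref', 'label']},
           'results': {'bindings': [
               {'ref': {'value': '000PE000001'}, 'label': {'value': 'cheval'}},
               {'ref': {'value': '000PE000002'}}]}}  # unbound variables stay None
print(bindings_to_dataframe(example))
```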
# +
import os
import sys
import numpy as np
import pandas as pd
import json
from SPARQLWrapper import SPARQLWrapper, JSON, N3, XML
import matplotlib.pyplot as plt
from PIL import Image  # used below when reading image sizes from disk
# +
# Import MonaLIA library from the package in the subfolder of the notebook folder
module_path = os.path.abspath(os.path.join('../..'))
if module_path not in sys.path:
sys.path.append(module_path)
from MonaLIA.util import metadata_helpers as metadata
from MonaLIA.data import stratified_split as imagedata
# -
# ### Read the query string from the file
#
# The file can be developed in the CORESE GUI application
query_path = os.path.abspath(os.path.join('../..' , 'Queries', 'MonaLIA.DL Dataset.General.Subset.rq'))
f = open(query_path, mode='rt', encoding='utf-8')
qs_templ = f.read()
f.close()
print(qs_templ)
# +
include_terms_list = ['"arbre"@fr',
'"en buste"@fr',
'"fleur"@fr',
'"nu"@fr',
'"cheval"@fr',
'"maison"@fr',
'"oiseau"@fr',
'"bateau"@fr',
'"église"@fr',
'"de profil"@fr',
'"à mi-corps"@fr',
'"chien"@fr',
'"de face"@fr',
'"ange"@fr',
'"couronne"@fr',
'"livre"@fr',
'"chapeau"@fr',
'"draperie"@fr',
'"château"@fr',
'"montagne"@fr',
'"croix"@fr',
'"cavalier"@fr',
'"épée"@fr',
'"voiture à attelage"@fr',
'"pont"@fr',
'"main"@fr',
'"mer"@fr',
'"nudité"@fr',
'"feuille"@fr',
'"bateau à voiles"@fr',
'"armure"@fr',
'"uniforme"@fr',
'"casque"@fr',
'"table"@fr',
'"tour"@fr',
'"lion"@fr',
'"drapeau"@fr',
'"mouton"@fr',
'"nuage"@fr',
'"robe"@fr',
'"bâton"@fr',
'"port"@fr',
'"parc"@fr',
'"manteau"@fr',
'"vache"@fr',
'"escalier"@fr',
'"fusil"@fr',
'"lit"@fr',
'"pêche"@fr',
'"cerf"@fr',
'"cimetière"@fr',
'"bouclier"@fr',
'"sanglier"@fr',
'"porte"@fr',
'"fenêtre"@fr',
'"arcade"@fr',
'"chaise"@fr',
'"pot"@fr',
'"flèche"@fr',
'"poisson"@fr',
'"Christ en croix"@fr',
'"chaussure"@fr',
'"trône"@fr',
'"bonnet"@fr',
'"papillon"@fr',
'"chat"@fr',
'"arc"@fr',
'"lyre"@fr',
'"tonneau"@fr',
'"tente"@fr',
'"singe"@fr',
'"sac"@fr',
'"bouteille"@fr',
'"plage"@fr',
'"neige"@fr',
'"flûte"@fr',
'"éventail"@fr',
'"échelle"@fr',
'"moulin à vent"@fr',
'"rideau"@fr',
'"arènes"@fr',
'"éléphant"@fr',
'"pipe"@fr',
'"lettre"@fr',
'"phare"@fr',
'"roue"@fr',
'"horloge"@fr',
'"couteau"@fr',
'"guitare"@fr',
'"train"@fr',
'"harpe"@fr',
'"automobile"@fr',
'"arc monumental"@fr',
'"assiette"@fr',
'"ours"@fr',
'"seau"@fr',
'"bicyclette"@fr',
'"pyramide"@fr',
'"grenouille"@fr',
'"avion"@fr',
'"piano"@fr',
'"aérostat"@fr']
#'"être humain"@fr' ]
exclude_terms_list = [] #['"le corps humain"@fr']
#include_terms_list = [ '"être humain"@fr', '"espèce animale"@fr' ]
include_terms = ' '.join(['(%s)' % x for x in include_terms_list[:40] ])
exclude_terms = ' '.join(['(%s)' % x for x in exclude_terms_list])
include_terms
# -
qs = qs_templ % (include_terms, exclude_terms , 10)
print(qs)
# ### Specify local service
#wds = "http://localhost:3030/Joconde/query" #Apache
wds = "http://localhost:8080/sparql" #Corese
# ### Run query
image_set_df = metadata.sparql_service_to_dataframe(wds, qs)
image_set_df.head()
image_set_df.shape
# #### Assign short column names
# +
col_names = ['label', 'repr', 'imagePath', 'ref', 'term_count', 'top_term_count', 'terms', 'domain']
image_set_df.columns = col_names
print('Shape:' , image_set_df.shape)
#image_set_df.fillna('', inplace=True)
print(image_set_df.groupby(['label']).size().reset_index())
# -
# ## Add image size columns (if not queried before)
#
# This is the quickest way, but the sizes could also be retrieved in the initial query
if('width' not in image_set_df.columns):
csv_file_name = 'C:/Users/abobashe/Documents/MonaLIA/Joconde/Ext/main_image_size.csv'
image_size_df = pd.read_csv(csv_file_name)
image_root = 'C:\\Joconde\\joconde'
try:
image_size_df
except NameError:
image_set_df['width'] = 0
image_set_df['height'] = 0
else:
image_set_df = pd.merge(image_set_df,
image_size_df[['ref', 'width', 'height']],
on='ref',
how='left')
image_set_df.fillna(0 , inplace=True)
error_count = 0
new_count = 0
for i, row in image_set_df[image_set_df.width == 0].iterrows():
if os.path.isfile(image_root + row.imagePath) :
try:
image = Image.open(image_root + row.imagePath)
# get image size
#images_df.loc[i, ['exists']] = True
image_set_df.loc[i, ['width']] = image.size[0]
image_set_df.loc[i, ['height']] = image.size[1]
image.close()
new_count += 1
except:
error_count += 1
if i % 1000 == 0:
print (i, end=', ')
print()
print('updated size %d; detected %d errors' % (new_count, error_count))
image_set_df.head()
dataset_root = 'C:/Datasets/Joconde'
top_category = 'Forty classes'
class_root = os.path.join(dataset_root, top_category)
group_by = 'label'
# ## Filter Data
# Filter out the records that have an invalid image path,
#
# and the images whose aspect ratio is too extreme (too tall or too wide)
aspect_ratio_threshold = 5.0
large_category_threshold = 1200
# +
print(image_set_df.shape , ' total')
print(image_set_df[image_set_df.width > 0].shape, 'images exist')
filtered_df = image_set_df[(image_set_df.width > 0) &
                           (image_set_df.width/image_set_df.height <= aspect_ratio_threshold) &
                           (image_set_df.height/image_set_df.width <= aspect_ratio_threshold) ]
print(filtered_df.shape , 'required aspect ratio <= %.2f' % aspect_ratio_threshold)
filtered_df.groupby(by=group_by).size()
# -
# filter out the Ceramics as they mess up the classification
# +
#filtered_df = filtered_df[filtered_df.domain.str.contains('céramique')==False]
filtered_df = filtered_df[filtered_df.domain.str.contains('peinture')==True]
#filtered_df = filtered_df[filtered_df.label != 'espèce animale+élément d\'architecture']
#filtered_df = filtered_df[filtered_df.label != 'élément d\'architecture+être humain']
#print(filtered_df.shape, 'without céramique')
print(filtered_df.shape, 'paintings only')
print(filtered_df.groupby(by=group_by).size())
filtered_df.head()
# -
# ## Image Size Distribution
# +
plt.tight_layout()
fig, axes = plt.subplots(nrows=1, ncols=3, gridspec_kw = {'width_ratios':[1, 1, 1]})
fig.set_size_inches(18.5, 5)
axes[0].set_title('width')
filtered_df.width.hist(bins=16, ax=axes[0])
axes[1].set_title('height')
filtered_df.height.hist(bins=12, ax=axes[1])
axes[2].set_title('aspect ratio w/h')
(filtered_df.width /filtered_df.height).hist(bins=20, ax=axes[2])
plt.show()
# -
# ## Stratified Split
# +
cat_list = filtered_df.groupby(by=group_by).size()
min_strata_size = 1200  # cat_list.min()
dataset_split = {'train': 0.9,
'val': 0.1,
'test': 0.1}
# +
filtered_df['usage'] = 'remain'
for category in cat_list.keys():
if ('chat' in category):
train_idx, val_idx, test_idx = imagedata.train_validate_test_split(filtered_df[filtered_df[group_by] == category],
train_percent = dataset_split['train'],
val_percent = 600 / 1200,
test_percent = 600 / 1200,
max_size = min_strata_size)
else:
train_idx, val_idx, test_idx = imagedata.train_validate_test_split(filtered_df[filtered_df[group_by] == category],
train_percent = dataset_split['train'],
val_percent = dataset_split['val'],
test_percent = dataset_split['test'],
max_size = min_strata_size)
filtered_df.loc[train_idx , ['usage']] = 'train'
filtered_df.loc[val_idx , ['usage']] = 'val'
filtered_df.loc[test_idx , ['usage']] = 'test'
# Two way frequency table
pd.options.display.max_rows = 999
pd.crosstab(index=filtered_df[group_by],
columns=filtered_df['usage'])
# +
d = {'train': filtered_df[filtered_df.usage == 'train'] ['label'].str.replace('+', '|').str.get_dummies().sum(),
'val': filtered_df[filtered_df.usage == 'val'] ['label'].str.replace('+', '|').str.get_dummies().sum(),
'test': filtered_df[filtered_df.usage == 'test'] ['label'].str.replace('+', '|').str.get_dummies().sum(),
'remain': filtered_df[filtered_df.usage == 'remain'] ['label'].str.replace('+', '|').str.get_dummies().sum()}
pd.DataFrame(data=d)
# -
# ### Save Dataset Description file
# +
image_set_desc_file = os.path.join(class_root, 'dataset5.paintings.csv')
if not os.path.exists(class_root):
os.makedirs(class_root)
filtered_df.to_csv(image_set_desc_file)
# -
# ### Cross Validation Split
# +
group_by='label'
cat_list = filtered_df.groupby(by=group_by).size()
min_strata_size = 1000  # cat_list.min()
# +
filtered_df['usage'] = 'remain'
for category in cat_list.keys():
trainval_idx, test_idx = imagedata.train_cross_validate_test_split(filtered_df[filtered_df[group_by] == category],
trainval_percent = 0.9,
test_percent = 0.1,
n_folds = 10,
max_size = min_strata_size)
for i , fold_idx in enumerate(trainval_idx):
filtered_df.at[fold_idx, ['usage']] = 'train' + str(i)
filtered_df.loc[test_idx , ['usage']] = 'test'
# Two way frequency table
pd.crosstab(index=filtered_df[group_by],
columns=filtered_df['usage'])
# +
descr_path = class_root  # descr_path was not defined above; assuming the same folder as the previous save
image_set_desc_file = os.path.join(descr_path, 'dataset1_cv.csv')
if not os.path.exists(descr_path):
    os.makedirs(descr_path)
filtered_df.to_csv(image_set_desc_file)
# -
# # Scrapbook
# +
from sklearn.preprocessing import MultiLabelBinarizer
mlb = MultiLabelBinarizer()
mlb.fit_transform(filtered_df['label'].str.split('+'))[2]
# -
query_style = '''
prefix jcl: <http://jocondelab.iri-research.org/ns/jocondelab/>
prefix ml: <http://ns.inria.fr/monalia/>
select ?imageWidth ?imageHeight ?imagePath ?noticeTechnique ?technique1 ?technique2 ?technique3 ?noticeTechnique1 ?noticeTechnique2 ?noticeTechnique3
where {
?notice ml:noticeImage [ ml:imageHeight ?imageHeight ;ml:imageWidth ?imageWidth ; ml:imagePath ?imagePath].
?notice jcl:noticeRef ?noticeRef.
filter (?imageHeight > 0)
?notice jcl:noticeTech ?noticeTechnique
bind( replace(?noticeTechnique, ",", ";") as ?technique ).
bind( if(contains (?technique, ";"), strbefore( ?technique, ";" ) , ?technique) as ?technique1 ).
bind( if(contains (?technique, ";"), strafter( ?technique, ";" ), "") as ?temp1 ).
bind( if(contains (?temp1, ";"), strbefore( ?temp1, ";" ) , ?temp1) as ?technique2 ).
bind( if(contains (?temp1, ";"), strafter( ?temp1, ";" ), "") as ?temp2 ).
bind( if(contains (?temp2, ";"), strbefore( ?temp2, ";" ) , ?temp2) as ?technique3 ) .
#removing leading, trailing and double spaces
bind("^\\\\s+(.*?)\\\\s*$|^(.*?)\\\\s+$" as ?regexp).
bind( lcase( replace(?technique1, ?regexp, '$1$2')) AS ?noticeTechnique1).
bind( lcase( replace(?technique2, ?regexp, '$1$2')) AS ?noticeTechnique2).
bind( lcase( replace(?technique3, ?regexp, '$1$2')) AS ?noticeTechnique3) .
}
order by ?technique1 ?technique2 ?technique3
'''
# +
import nltk
from nltk import bigrams
corpus = "cat dog lion zebra cat zebra cat dog"
bi_grams = list(bigrams(corpus.split()))
bigram_freq = nltk.FreqDist(bi_grams).most_common(len(bi_grams))
# -
bigram_freq
bi_grams
| Notebooks 2.0/Pipeline/MonaLIA.STEP 1.1.Create Training Dataset.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3.9.7 64-bit
# name: python3
# ---
# # Common Discrete Distributions
#
# ## Bernoulli Distribution
#
# The Bernoulli distribution is appropriate when there are two outcomes. Success happens with probability $p$ and failure happens with probability $1-p$.
#
# $$
# p(x) =
# \left\{
# \begin{array}{cc}
# p & \mathrm{for\ } x=1 \\
# 1-p & \mathrm{for\ } x=0 \\
# \end{array}
# \right.
# $$
#
# ***
# **exercise:** Let $X$ be a Bernoulli random variable. Show that $E(X)=p$ and that $V(X)=p \cdot (1-p)$.
# ***
#
# The number of heads from a single coin flip is an example of a Bernoulli random variable. The event that the coin lands on heads is success, tails is failure. If the coin is fair, $p$ = .5.
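# The exercise above can be checked numerically straight from the definition of expectation and variance over the two outcomes (a small check added for illustration):

```python
p = 0.5  # fair coin; any p in [0, 1] works the same way
pmf = {0: 1 - p, 1: p}  # Bernoulli pmf

E = sum(x * prob for x, prob in pmf.items())
V = sum((x - E) ** 2 * prob for x, prob in pmf.items())

assert abs(E - p) < 1e-12            # E(X) = p
assert abs(V - p * (1 - p)) < 1e-12  # V(X) = p(1-p)
```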
# ## Binomial Distribution
#
# ### Statement
#
# If we have $n$ Bernoulli trials, each with probability of success $p$, the random variable representing the total number of successes has the **binomial distribution** with parameters $n$ and $p$.
#
# ***
# Let $X$ have the binomial distribution with parameters $n$ and $p$. We have:
#
# $$p(k) = P(X=k) = {n \choose k}p^k(1-p)^{n-k}$$
#
# ***
#
# #### Example
#
# We have an unfair coin with probability of heads $.6$. The probability of getting $3$ heads after flipping the coin $5$ times is ${5 \choose 3} .6^3 .4^2$. This is because each sequence with $3$ heads (like HTHTH) occurs with probability $.6^3 .4^2$. There are ${5 \choose 3}$ ways to select which $3$ of the $5$ flips are heads. We add up the probabilities to get ${5 \choose 3}.6^3 .4^2$.
#
# ### Mean and Variance of the Binomial Distribution
#
# It has been given as an exercise to prove that if $X$ is Bernoulli with parameter $p$ then $E(X) = p$ and $V(X) = p(1-p)$. Formulas for the binomial distribution are very similar.
#
# ***
# Let $X$ have the binomial distribution with parameters $n$ and $p$. We have:
#
# $$E(X) = np$$
#
# $$V(X) = np(1-p)$$
#
# ***
#
# So if we flip a fair coin 10 times we expect to get $E(X) = np = 10 \cdot .5 = 5$ heads, which makes sense. There is a not-pretty proof of this but let's wait to prove this in a pretty way.
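# Both the coin example above ($3$ heads in $5$ flips of a $p = .6$ coin) and $E(X) = np$ can be confirmed numerically from the pmf (a quick check, added for illustration):

```python
from math import comb

n, p = 5, 0.6
# binomial pmf: P(X = k) = C(n, k) * p**k * (1-p)**(n-k)
pmf = [comb(n, k) * p ** k * (1 - p) ** (n - k) for k in range(n + 1)]
mean = sum(k * prob for k, prob in enumerate(pmf))

print(pmf[3])  # C(5,3) * .6**3 * .4**2, the worked example
print(mean)    # matches n * p = 3
```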
# ## Hypergeometric Distribution
#
# ### Rocks in bags example
#
# We have $15$ rocks in a bag. $7$ rocks are red and $8$ are black. We select $5$ rocks. What is the probability of selecting exactly $2$ red rocks and $3$ black rocks?
#
# Any selection of rocks is equally likely. We can use our formula for the probability when all events are equally likely.
#
# $$\frac{\text{Number of Selected Outcomes}}{\text{Total Possible Outcomes}}$$
#
# For the denominator we need to find how many ways there are to draw $5$ rocks from a set of $15$. This is $15 \choose 5$.
#
# For the numerator we need to select $2$ red rocks from a set of $7$. This can be done in $7 \choose 2$ ways. We can select $3$ black rocks from $8$ in $8 \choose 3$ ways. There are then ${7 \choose 2}{8 \choose 3}$ ways to select the rocks.
#
# The answer is then:
#
# $$\large \frac{{7 \choose 2}{8 \choose 3}}{{15 \choose 5}}$$
#
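# Evaluating the answer above with Python's `math.comb` (a small check added for illustration):

```python
from math import comb

# C(7,2) * C(8,3) / C(15,5)
prob = comb(7, 2) * comb(8, 3) / comb(15, 5)
print(prob)  # 1176 / 3003, roughly 0.392
```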
# ### Playing cards example
#
# We have a deck of $40$ cards. $30$ cards are red and $10$ are black. We draw a hand of $5$ cards. Show that the probability of drawing $3$ black cards is:
#
# $$\large{\frac{{30 \choose 2}{10 \choose 3}}{{40 \choose 5}}}$$
#
# ### The formula
#
# We randomly select $n$ items from a population of $N$ items. Let $r$ represent the number of items from the population classified as a success, and $k$ be the number of items in the selection classified as successes. Let $X$ be the random variable representing the number of items in our selection considered successes.
#
# $$P(X=k) = \large{\frac{{{N-r}\choose{n-k}}{{r}\choose{k}}}{{N \choose n}}}$$
#
# For our previous examples we did not need this formula. It is worth understanding how the formula arises from the counting argument, so that you can reconstruct it instead of memorizing it.
#
# ### Hypergeometric vs. Binomial
#
# The hypergeometric distribution is closely related to the binomial distribution. We have a group of $600$ cowboys and $400$ astronauts. We select $4$ people randomly from the $1000$ to win a prize; what is the probability that $3$ people are cowboys and $1$ is an astronaut?
#
# $$\large \frac{{600 \choose 3}{400 \choose 1}}{{1000 \choose 4}} = 0.3459$$
#
# The hypergeometric distribution is different from the binomial distribution because it samples **without replacement**. Let's change the problem and allow someone to win a prize multiple times. $4$ names are drawn from a hat. Each time a name is drawn a prize is given and the name is put back in the hat (this is sampling **with replacement**). Since $600$ of the $1000$ people are cowboys, any time we make a selection to win a prize there is a probability of $.6$ that the person is a cowboy. We select $4$ people, so the distribution of prizes given to cowboys is binomial with parameters $n = 4$, $p = .6$. The probability of $3$ cowboys winning prizes and $1$ astronaut winning a prize is:
#
# $${4 \choose 3} \cdot .6^3 \cdot .4 = .3456$$
#
# The hypergeometric distribution (without replacement) gives $.3459$ and the binomial distribution (with replacement) gives $.3456$. Consider that if we draw a single cowboy from the hat, the probability of the next draw being a cowboy is $.6$ if we sample with replacement and $\frac{599}{999} \approx.6$ without replacement. This is why our answers are similar. This approximation works best when the population is much larger than the total number of items drawn in the hypergeometric distribution, i.e. $N$ is large compared to $n$.
#
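# The two answers above can be reproduced with `math.comb` (an illustrative check):

```python
from math import comb

hyper = comb(600, 3) * comb(400, 1) / comb(1000, 4)  # without replacement
binom = comb(4, 3) * 0.6 ** 3 * 0.4                  # with replacement
print(round(hyper, 4), round(binom, 4))  # close, but not equal
```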
# ## Poisson Distribution
#
# ### Intuition
#
# A Poisson distribution is appropriate when we are counting the number of times something happens in an hour or some unit of time. How many texts do I get in an hour? How often does a car accident happen on interstate 35? These are things that happen at some frequency.
#
# If you see the phrase **"the average rate"** you should consider that the question might be related to the Poisson distribution.
#
# Often they give you the rate per hour and you need to convert it to the rate per $2$ hours, or the daily rate, or the rate per minute.
#
# ***
# The average rate of people spilling their coffee in the office is $2$ per hour. Generally this rate is called $\lambda$, so $\lambda = 2$. The rate of people spilling coffee per minute is $\frac{\lambda}{60} = \frac{1}{30}$. The rate of people spilling their coffee every day is $24 \lambda = 48$.
# ***
#
# In the Poisson model it is assumed that the rate is constant throughout the day and that the events being counted are independent. So our coffee spilling example might not be a Poisson process if the rate of coffee spills is much higher in the morning. Also, if someone spilling their coffee makes other people more likely to spill coffee, the spills are not independent.
#
# ### Formula
#
# Let $X$ have a Poisson distribution with parameter $\lambda$. The probability of $k$ occurrences of something happening is:
#
# $$P(X=k) = \frac{e^{-\lambda}\lambda^k}{k!}$$
#
# Let's verify that the sum of all probabilities is 1.
#
# $$
# \begin{align*}
# \sum_{k=0}^{\infty} P(X=k) &= \displaystyle\sum_{k=0}^{\infty}\frac{e^{-\lambda}\lambda^k}{k!} \\
# &= e^{-\lambda} \displaystyle\sum_{k=0}^{\infty} \frac{\lambda^k}{k!} \\
# &= e^{-\lambda}e^\lambda=1
# \end{align*}
# $$
#
# We use a fact from Calculus 2 that $e^\lambda = \sum_{k=0}^{\infty} \frac{\lambda^k}{k!}$
#
#
#
# ### Example
#
# Assume a Poisson model. The average rate of people spilling their coffee in the office throughout the work day is $2.5$ spills per hour. What is the probability that $2$ people spill their coffee between $8$ A.M. and $9$ A.M., and $6$ people spill their coffee between $11$ A.M. and $1$ P.M.?
#
# Because the number of spills across time intervals is independent:
#
# $$P(\text{2 spills from 8-9 A.M. and 6 spills from 11 A.M.-1 P.M.}) = \\
# P(\text{2 spills from 8-9 A.M.}) \cdot P(\text{6 spills from 11 A.M.-1 P.M.})$$
#
# For the calculation from $\text{8-9 A.M.}$ we use $\lambda = 2.5$ since the time interval is $1$ hour.
#
# $$ P(\text{2 spills from 8-9}) =\frac{e^{-2.5}2.5^2}{2!}$$
#
# For the calculation from $\text{11 A.M.- 1 P.M.}$ we use $\lambda = \lambda_\text{1 hour} \cdot t = 2.5 \cdot 2 = 5$ since the time interval is $2$ hours.
#
# $$P(\text{6 spills from 11 A.M.-1 P.M.}) =\frac{e^{-5}5^6}{6!}$$
#
# We multiply these quantities to get the answer:
#
# $$\frac{e^{-2.5}2.5^2}{2!} \cdot \frac{e^{-5}5^6}{6!} = .03751$$
#
#
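# The spill calculation above, done numerically (a quick check; note how $\lambda$ is rescaled to each interval's length):

```python
from math import exp, factorial

def poisson_pmf(k, lam):
    # P(X = k) = e**(-lam) * lam**k / k!
    return exp(-lam) * lam ** k / factorial(k)

# lambda = 2.5 for the one-hour window, 2.5 * 2 = 5 for the two-hour window
answer = poisson_pmf(2, 2.5) * poisson_pmf(6, 5)
print(round(answer, 5))
```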
# ## Geometric Distribution
#
# Let $X$ be the number of tries it takes to land heads when flipping an unfair coin whose probability of heads is $p=.6$. The geometric distribution arises when we perform independent Bernoulli trials until the first success. Each coin flip is an independent Bernoulli trial with the same probability of success.
#
# The probability of success on the first attempt is $.6$. The probability of success on the second attempt is $.4 \cdot .6$, the sequence $TH$. Success on the third attempt happens with the sequence $TTH$ with probability $.4 \cdot .4 \cdot .6$.
#
# ### PDF
#
# We perform independent Bernoulli trials, each having a probability of success $p$ and probability of failure $q$. Let $X$ represent the trial on which the first success occurs.
#
# $$P(X=k) = q^{k-1}p$$
#
# ### E[X] and Var[X]
#
# Let $X$ be a geometric random variable with parameter p.
#
# $$E(X) = \frac{1}{p}$$
# $$V(X) = \frac{1-p}{p^2}$$
#
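# $E(X) = \frac{1}{p}$ can be checked by truncating the series $\sum_k k \, q^{k-1} p$; the tail is negligible well before $k = 200$ (a small check added for illustration):

```python
p = 0.6
q = 1 - p

# P(X = k) = q**(k-1) * p, so E(X) = sum over k of k * q**(k-1) * p
E = sum(k * q ** (k - 1) * p for k in range(1, 200))
print(E, 1 / p)  # the truncated series matches 1/p
```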
# ### Forms of the geometric distribution
#
# So far we have been counting the number of trials until success. Another way to do this is to count the number of failures before success. I like the first way, because $E(X) = \frac{1}{p}$, which I think is pretty.
# ## Negative Binomial Distribution
#
# The geometric distribution has a parameter $p$ and the value of the random variable is the first successful attempt. The negative binomial distribution has two parameters, $p$ and $r$. The negative binomial distribution counts the attempt on which the $r^{th}$ success occurs.
#
# We flip an unfair coin with a probability of $.6$ for heads and $.4$ for tails. Let $X$ be the trial on which we get the $3^{rd}$ head. We wish to know $P(X=5)$. Let's first consider that the $5^{th}$ flip must be a head. Of the first $4$ flips, $2$ of them are heads. We get $2$ heads in the first four flips with probability:
#
# $${4 \choose 2}.6^2.4^2$$
#
# We need the fifth flip to be heads so we multiply this probability by $.6$ to get our answer of:
#
# $${4 \choose 2}.6^3.4^2$$
#
# Our formula is then $P(X=x) = {{x-1} \choose {r-1}}p^r(1-p)^{x-r}$.
#
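# Evaluating the coin example above ($P(X=5)$ for the $3^{rd}$ head with $p=.6$) with the formula just derived (an illustrative check):

```python
from math import comb

p, r, x = 0.6, 3, 5
# P(X = x) = C(x-1, r-1) * p**r * (1-p)**(x-r)
prob = comb(x - 1, r - 1) * p ** r * (1 - p) ** (x - r)
print(prob)  # C(4,2) * .6**3 * .4**2
```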
# ### More common formulation
#
# A more common way of representing this is to count the number of failures until the $r^{th}$ success.
#
# ***
# Let $Y$ be the number of failures until the $rth$ success. We calculate the p.d.f. of $Y$.
#
# $$P(Y=y)={{r+y-1} \choose y}p^r(1-p)^y$$
#
# ***
#
# If $X$ is the R.V. representing the attempt on which the $r^{th}$ success occurs and $Y$ is the R.V. representing the number of failures before the $r^{th}$ success, then $Y = X - r$ because $\text{failures} = \text{attempts} - \text{successes}$. Regardless of whether the question asks you to count failures or attempts, you should be able to deduce the probability from first principles.
#
# We give formulas for the expectation and variance for this formulation of the negative binomial distribution.
#
#
# ***
#
# Let $Y$ be the number of failures until the $rth$ success.
#
# $$E(Y) = \frac{r(1-p)}{p}$$
#
# $$V(Y) = \frac{r(1-p)}{p^2}$$
#
# ***
| probability_with_python/_build/html/_sources/05-discretedistributions.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import os
import csv
import pandas as pd
import numpy as np
from veidt.rfxas.core import XANES
from veidt.rfxas.prediction import CenvPrediction
import matplotlib.pyplot as plt
# %matplotlib inline
# ### Create a XANES object by passing spectrum energy and mu directly
Fe2O3_spectrum_dataframe = pd.read_pickle('Fe2O3_computational_spectrum.pkl')
Fe2O3_spectrum_dataframe
spectrum_energy = Fe2O3_spectrum_dataframe['x_axis_energy_55eV'].values[0]
spectrum_mu = Fe2O3_spectrum_dataframe['interp_spectrum_55eV'].values[0]
Fe2O3_XANES_object1 = XANES(spectrum_energy, spectrum_mu, absorption_specie='Fe', edge='K')
Fe2O3_object1_CenvPred = CenvPrediction(Fe2O3_XANES_object1, energy_reference='lowest', energy_range=45)
# Plot the interpolated spectrum used in coordination environment prediction
plt.plot(Fe2O3_object1_CenvPred.interp_energy, Fe2O3_object1_CenvPred.interp_spectrum)
# **Predict with random forest model**
Fe2O3_object1_CenvPred.cenv_prediction()
print('Predicted coordination environment label: ', Fe2O3_object1_CenvPred.pred_cenv)
# **Predict with CNN**
#
# Similarly one can set `model` to `knn`, `mlp`, `svc` or the previous random forest `rf` (default)
Fe2O3_object1_CenvPred = CenvPrediction(Fe2O3_XANES_object1, energy_reference='lowest',
energy_range=45, model='cnn')
Fe2O3_object1_CenvPred.cenv_prediction()
print('Predicted coordination environment label: ', Fe2O3_object1_CenvPred.pred_cenv)
# ### Initialize a XANES object from a spectrum file (tsv) downloaded from the Materials Project website
Fe2O3_XANES_object2 = XANES.from_K_XANES_MP_tsv('xas.XANES.K.Fe2O3.mp-24972.tsv')
Fe2O3_object2_CenvPred = CenvPrediction(Fe2O3_XANES_object2, energy_reference='lowest', energy_range=45)
# Plot the interpolated spectrum used in coordination environment prediction
plt.plot(Fe2O3_object2_CenvPred.interp_energy, Fe2O3_object2_CenvPred.interp_spectrum)
Fe2O3_object2_CenvPred.cenv_prediction()
print('Predicted coordination environment label: ', Fe2O3_object2_CenvPred.pred_cenv)
| veidt/rfxas/notebooks/example_notebook.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
# Find path to PySpark.
import findspark
#findspark.init()
# even though SPARK_HOME is defined in .zshrc, had to put it here
findspark.init('/Users/otto/spark-2.4.4-bin-hadoop2.6/')
# -
# Import PySpark and initialize SparkContext object.
import pyspark
sc = pyspark.SparkContext()
# Read `recent-grads.csv` in to an RDD.
f = sc.textFile('recent-grads.csv')
data = f.map(lambda line: line.split(','))  # split each CSV line into its fields
data.take(3)
raw_hamlet = sc.textFile("hamlet.txt")
raw_hamlet.take(5)
split_hamlet = raw_hamlet.map(lambda line: line.split('\t'))
split_hamlet.take(5)
# We'll use the flatMap() method with the named function hamlet_speaks to check whether a line in the play contains the text HAMLET in all caps (indicating that Hamlet spoke). flatMap() is different from map() because it doesn't require an output for every element in the RDD, which makes it useful whenever we want to generate a sequence of values from an RDD.
#
# In this case, we want an RDD object that contains tuples of the unique line IDs and the text "hamlet speaketh!," but only for the elements in the RDD that have "HAMLET" in one of the values. We can't use the map() method for this because it requires a return value for every element in the RDD.
#
# We want each element in the resulting RDD to have the following format:
#
# The first value should be the unique line ID (e.g.'hamlet@0') , which is the first value in each of the elements in the split_hamlet RDD.
# The second value should be the string "hamlet speaketh!"
# +
def hamlet_speaks(line):
id = line[0]
speaketh = False
if "HAMLET" in line:
speaketh = True
if speaketh:
yield id,"hamlet speaketh!"
hamlet_spoken = split_hamlet.flatMap(lambda x: hamlet_speaks(x))
# hamlet_spoken.take(10)
# -
# hamlet_spoken now contains the line numbers for the lines where Hamlet spoke. While this is handy, we don't have the full line anymore. Instead, let's use a filter() with a named function to extract the original lines where Hamlet spoke. The functions we pass into filter() must return values, which will be either True or False.
# +
def filter_hamlet_speaks(line):
    # Returns True only for lines in which Hamlet speaks.
    return "HAMLET" in line
hamlet_spoken_lines = split_hamlet.filter(lambda line: filter_hamlet_speaks(line))
hamlet_spoken_lines.take(5)
# +
spoken_count = hamlet_spoken_lines.count()
spoken_collect = hamlet_spoken_lines.collect()
spoken_101 = spoken_collect[100]
print(spoken_count)
# +
raw_hamlet = sc.textFile("hamlet.txt")
split_hamlet = raw_hamlet.map(lambda line: line.split('\t'))
split_hamlet.take(5)
# remove @ from id
def format_id(x):
id = x[0].split('@')[1]
results = list()
results.append(id)
if len(x) > 1:
for y in x[1:]:
results.append(y)
return results
hamlet_with_ids = split_hamlet.map(lambda line: format_id(line))
hamlet_with_ids.take(10)
# +
#hamlet_with_ids.take(5)
# Remove blank values
real_text = hamlet_with_ids.filter(lambda line: len(line) > 1)
hamlet_text_only = real_text.map(lambda line: [l for l in line if l != ''])
hamlet_text_only.take(10)
# +
# Remove pipe chars
def fix_pipe(line):
results = list()
for l in line:
if l == "|":
pass
elif "|" in l:
fmtd = l.replace("|", "")
results.append(fmtd)
else:
results.append(l)
return results
clean_hamlet = hamlet_text_only.map(lambda line: fix_pipe(line))
# -
| 8.6 Spark and Map-Reduce/Spark.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
#
#
# list comprehension
lista = []
for x in range(10):
    lista.append(x**2)
print(lista)
lista = [x**2 for x in range(50)]
print(lista)
# appending elements to a list
lista = ['a']
lista.append('b')
lista.extend(['A', 'B', 'C'])  # extend adds each element of the iterable
lista
# inserting an element into the list at a given index
lista = [1, 2, 3, 4, 5]
lista.insert(2, 33)  # insert 33 at index 2
lista
# removing an item from the list (remove deletes by value, not by index)
# +
lista = [1, 2, 3, 4, 5]
lista.remove(2)  # removes the first occurrence of the value 2
lista
# -
# removing and printing an item from the list (pop returns the removed element)
lista = [1,2,3,4,5]
lista.pop(3)
# counting occurrences of an element in the list
lista = ['feijão', 'arroz', 'carne', 'farinha', 'macarão']
lista.count('arroz')
# a SET is created with curly braces
# +
conjunto = {'Maça', 'banana', 'melão'}
print(conjunto)
# +
a = {1, 2, 3, 4, 5}
b = {6, 7, 8, 9}
# c = a.union(b)  # equivalent to the operator below
c = a | b
print(c)
# -
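# Beyond union, sets support intersection, difference, and symmetric difference — a quick sketch:

```python
a = {1, 2, 3, 4, 5}
b = {4, 5, 6, 7}

print(a & b)  # intersection: elements in both -> {4, 5}
print(a - b)  # difference: in a but not in b -> {1, 2, 3}
print(a ^ b)  # symmetric difference: in exactly one -> {1, 2, 3, 6, 7}
```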
# QUEUE AND STACK
# +
from collections import deque
fila = deque(['João', 'Maria', 'Pedro'])  # queue (FIFO)
fila.append('Wellington')
fila.append('Livia')
print(fila)
fila.popleft()  # removes from the front
print(fila)
# -
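# The section covers both queue (fila) and stack (pilha), but only the queue is demonstrated above; a stack can be sketched with a plain list, pushing and popping from the end (LIFO):

```python
pilha = []           # stack (LIFO)
pilha.append('a')    # push
pilha.append('b')
pilha.append('c')
topo = pilha.pop()   # pop removes the last element added
print(topo)   # 'c'
print(pilha)  # ['a', 'b']
```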
| Aulasatualizada/aula do jupter/modulo2.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# <p><font size="6"><b>03 - Pandas: Indexing and selecting data - Part II</b></font></p>
#
# > *© 2016-2018, <NAME> and <NAME> (<mailto:<EMAIL>>, <mailto:<EMAIL>>). Licensed under [CC BY 4.0 Creative Commons](http://creativecommons.org/licenses/by/4.0/)*
#
# ---
# + run_control={"frozen": false, "read_only": false}
import pandas as pd
# + run_control={"frozen": false, "read_only": false}
# redefining the example objects
# series
population = pd.Series({'Germany': 81.3, 'Belgium': 11.3, 'France': 64.3,
'United Kingdom': 64.9, 'Netherlands': 16.9})
# dataframe
data = {'country': ['Belgium', 'France', 'Germany', 'Netherlands', 'United Kingdom'],
'population': [11.3, 64.3, 81.3, 16.9, 64.9],
'area': [30510, 671308, 357050, 41526, 244820],
'capital': ['Brussels', 'Paris', 'Berlin', 'Amsterdam', 'London']}
countries = pd.DataFrame(data)
countries
# -
# # Subsetting data without `loc` and `iloc`
#
# Although the preferred method is to index with `loc` and `iloc`, you can also index using just `[ ]`. These can be ambiguous but are useful shorthands in some cases.
# ## Subset variables (columns)
# For a DataFrame, basic indexing selects the columns (comparable to dictionaries in pure Python).
#
# Selecting a **single column**:
# + run_control={"frozen": false, "read_only": false}
countries['area'] # single []
# -
# Remember that the same syntax can also be used to *add* a new column: `df['new'] = ...`.
#
# We can also select **multiple columns** by passing a list of column names into `[]`:
# + run_control={"frozen": false, "read_only": false}
countries[['area', 'population']] # double [[]]
# -
# ## Subset observations (rows)
# Using `[]` with slicing or boolean indexing accesses the **rows**:
# ### Slicing
countries[0:4]
# ### Boolean indexing (filtering)
# Often, you want to select rows based on a certain condition. This can be done with 'boolean indexing' (like a WHERE clause in SQL), comparable to NumPy.
#
# The indexer (or boolean mask) should be 1-dimensional and the same length as the thing being indexed.
# + run_control={"frozen": false, "read_only": false}
countries['area'] > 100000
# + run_control={"frozen": false, "read_only": false}
countries[countries['area'] > 100000]
# -
countries[countries['population'] > 50]
# <div class="alert alert-info">
# <b>REMEMBER</b>: <br><br>
#
# So as a summary, `[]` provides the following convenience shortcuts:
#
# <ul>
# <li><b>Series</b>: selecting a <b>label</b>:<code>s[label]</code></li>
# <li><b>DataFrame</b>: selecting a single or multiple <b>columns</b>: <code>df['col']</code> or <code>df[['col1', 'col2']]</code></li>
# <li><b>DataFrame</b>: slicing or filtering the <b>rows</b>: <code>df['row_label1':'row_label2']</code> or <code>df[mask]</code></li>
# </ul>
# </div>
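# The three shortcuts summarized above can be illustrated on a small hypothetical frame (column and row names chosen for illustration only):

```python
import pandas as pd

df = pd.DataFrame({'col1': [1, 2, 3], 'col2': ['a', 'b', 'c']},
                  index=['r1', 'r2', 'r3'])

single_col = df['col1']           # Series: a single column by label
two_cols = df[['col1', 'col2']]   # DataFrame: multiple columns
sliced = df['r1':'r2']            # rows by label slice (end-inclusive)
masked = df[df['col1'] > 1]       # rows by boolean mask

print(sliced.shape)  # (2, 2)
print(masked.shape)  # (2, 2)
```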
# # [OPTIONAL] Additional exercises using the movie data
# Here are some more exercises with some larger DataFrames with film data. These exercises are based on the [PyCon tutorial of <NAME>](https://github.com/brandon-rhodes/pycon-pandas-tutorial/) (so all credit to him!) and the datasets he prepared for that.
#
# First, download these data from here:
# - [`titles.csv`](https://drive.google.com/open?id=0B3G70MlBnCgKajNMa1pfSzN6Q3M) and
# - [`cast.csv`](https://drive.google.com/open?id=0B3G70MlBnCgKal9UYTJSR2ZhSW8)
#
# and put them in the `/data` folder.
# `titles` dataset:
#
# * title: title of the movie
# * year: year of release
# + run_control={"frozen": false, "read_only": false}
titles = pd.read_csv('../data/titles.csv')
titles.head()
# -
# `cast` dataset: different roles played by actors/actresses in films
#
# - title: title of the movie
# - year: year it was released
# - name: name of the actor/actress
# - type: actor/actress
# - n: the order of the role (n=1: leading role)
# + run_control={"frozen": false, "read_only": false}
cast = pd.read_csv('../data/cast.csv')
cast.head()
# -
# <div class="alert alert-success">
#
# <b>EXERCISE</b>:
#
# <ul>
# <li>How many movies are listed in the titles DataFrame?</li>
# </ul>
#
# </div>
# + clear_cell=true run_control={"frozen": false, "read_only": false}
# # %load _solutions/pandas_03a_selecting_data10.py
# -
# <div class="alert alert-success">
#
# <b>EXERCISE</b>:
#
# <ul>
# <li>What are the earliest two films listed in the titles DataFrame?</li>
# </ul>
# </div>
# + clear_cell=true run_control={"frozen": false, "read_only": false}
# # %load _solutions/pandas_03a_selecting_data11.py
# -
# <div class="alert alert-success">
#
# <b>EXERCISE</b>:
#
# <ul>
# <li>How many movies have the title "Hamlet"?</li>
# </ul>
# </div>
# + clear_cell=true run_control={"frozen": false, "read_only": false}
# # %load _solutions/pandas_03a_selecting_data12.py
# -
# <div class="alert alert-success">
#
# <b>EXERCISE</b>:
#
# <ul>
# <li>List all of the "Treasure Island" movies from earliest to most recent.</li>
# </ul>
# </div>
# + clear_cell=true run_control={"frozen": false, "read_only": false}
# # %load _solutions/pandas_03a_selecting_data13.py
# -
# <div class="alert alert-success">
#
# <b>EXERCISE</b>:
#
# <ul>
# <li>How many movies were made from 1950 through 1959?</li>
# </ul>
# </div>
# + clear_cell=true run_control={"frozen": false, "read_only": false}
# # %load _solutions/pandas_03a_selecting_data14.py
# + clear_cell=true run_control={"frozen": false, "read_only": false}
# # %load _solutions/pandas_03a_selecting_data15.py
# -
# <div class="alert alert-success">
#
# **EXERCISE:**
#
# - How many roles in the movie "Inception" have a missing value in the rank "n" column?
#
# Hint: You may find useful the section of the pandas user guide that explains how to handle missing (NA) data: [Working with missing data](https://pandas.pydata.org/pandas-docs/stable/user_guide/missing_data.html)
# </div>
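# As a nudge (not the stored solution file), counting missing values in a column is typically done with `isna()` from the user guide linked above; a sketch on a toy frame:

```python
import numpy as np
import pandas as pd

toy = pd.DataFrame({'n': [1.0, np.nan, 3.0, np.nan]})
missing = toy['n'].isna().sum()      # count of NaN entries
present = toy['n'].notna().sum()     # count of non-missing entries
print(missing)  # 2
print(present)  # 2
```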
# + clear_cell=true run_control={"frozen": false, "read_only": false}
# # %load _solutions/pandas_03a_selecting_data16.py
# + clear_cell=true run_control={"frozen": false, "read_only": false}
# # %load _solutions/pandas_03a_selecting_data17.py
# + clear_cell=true run_control={"frozen": false, "read_only": false}
# # %load _solutions/pandas_03a_selecting_data18.py
# -
# <div class="alert alert-success">
#
# <b>EXERCISE</b>:
#
# <ul>
# <li>How many roles in the movie "Inception" have a non-missing value in the rank "n" column?</li>
# </ul>
# </div>
# + clear_cell=true run_control={"frozen": false, "read_only": false}
# # %load _solutions/pandas_03a_selecting_data19.py
# -
# <div class="alert alert-success">
#
# <b>EXERCISE</b>:
#
# <ul>
# <li>Display the cast of the "Titanic" (the most famous 1997 one) ordered by their "n"-value, ignoring roles that did not earn a numeric "n" value.</li>
# </ul>
# </div>
# + clear_cell=true run_control={"frozen": false, "read_only": false}
# # %load _solutions/pandas_03a_selecting_data20.py
# -
# <div class="alert alert-success">
#
# <b>EXERCISE</b>:
#
# <ul>
# <li>List the supporting roles (having n=2) played by <NAME> in the 1990s, ordered by year.</li>
# </ul>
# </div>
# + clear_cell=true run_control={"frozen": false, "read_only": false}
# # %load _solutions/pandas_03a_selecting_data21.py
# -
# # Acknowledgement
#
#
# > The optional exercises are based on the [PyCon tutorial of <NAME>](https://github.com/brandon-rhodes/pycon-pandas-tutorial/) (so all credit to him!) and the datasets he prepared for that.
#
# ---
| Day_1_Scientific_Python/pandas/pandas_03b_selecting_data.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/lustraka/data-analyst-portfolio-project-2022/blob/main/code/20211109_Scrape_Google_Search.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + id="87h-gyLDNR0C"
# Import dependencies
import requests
from bs4 import BeautifulSoup
import pandas as pd
import re
# + id="_pFQ_dMJNUq6"
url_list = [
'https://www.google.com/search?q=data+analytics+portfolio+projects&rlz=1C1GCEA_enCZ869CZ869&oq=data+analytics+portfolio+projects&aqs=chrome..69i57.8087j0j7&sourceid=chrome&ie=UTF-8',
'https://www.google.com/search?q=data+analytics+portfolio+projects&rlz=1C1GCEA_enCZ869CZ869&sxsrf=AOaemvJ_BxdAnyCZH4UPSo6LCNmgSxa3FA:1636464844916&ei=zHiKYaGXN7-H9u8Pv_uQsA4&start=10&sa=N&ved=2ahUKEwihrZLOsov0AhW_g_0HHb89BOYQ8tMDegQIARA5',
'https://www.google.com/search?q=data+analytics+portfolio+projects&rlz=1C1GCEA_enCZ869CZ869&sxsrf=AOaemvLACXzBMxl_pdHlb05mJ1OwzLSOBw:1636469599500&ei=X4uKYczWHceB9u8PpdGZkAc&start=20&sa=N&ved=2ahUKEwiMgqepxIv0AhXHgP0HHaVoBnI4ChDy0wN6BAgBEDs&biw=1278&bih=1287&dpr=1',
'https://www.google.com/search?q=data+analytics+portfolio+projects&rlz=1C1GCEA_enCZ869CZ869&sxsrf=AOaemvKIbja5P5qWWAz31CyNM9NBLkcLBw:1636472923048&ei=W5iKYbCRApPc7_UP0e86&start=30&sa=N&ved=2ahUKEwjwqoza0Iv0AhUT7rsIHdG3DgA4FBDy0wN6BAgBED0&biw=1278&bih=1287&dpr=1',
'https://www.google.com/search?q=data+analytics+portfolio+projects&rlz=1C1GCEA_enCZ869CZ869&sxsrf=AOaemvJkxfayYSikA_tGYuuUBeTKVPWlng:1636472955628&ei=e5iKYY7QJZWI9u8P7oC48AQ&start=40&sa=N&ved=2ahUKEwjO-dDp0Iv0AhUVhP0HHW4ADk44HhDy0wN6BAgBED8&biw=1278&bih=1287&dpr=1',
'https://www.google.com/search?q=data+analytics+portfolio+projects&rlz=1C1GCEA_enCZ869CZ869&sxsrf=AOaemvI1o9LPSAf_JAeCzaCkES8yH6FWPw:1636472976336&ei=kJiKYffuE4SI9u8P3PaxyA0&start=50&sa=N&ved=2ahUKEwj39sDz0Iv0AhUEhP0HHVx7DNk4KBDy0wN6BAgBEEA&biw=1278&bih=1287&dpr=1',
'https://www.google.com/search?q=data+analytics+portfolio+projects&rlz=1C1GCEA_enCZ869CZ869&sxsrf=AOaemvLdmHM_-UaPexWBFeHWIvGC-YluVw:1636472991690&ei=n5iKYa6zKciO9u8Pz5KaiA0&start=60&sa=N&ved=2ahUKEwju_un60Iv0AhVIh_0HHU-JBtE4MhDy0wN6BAgBEEI&biw=1278&bih=1287&dpr=1',
'https://www.google.com/search?q=data+analytics+portfolio+projects&rlz=1C1GCEA_enCZ869CZ869&sxsrf=AOaemvKRUHFNU2n9YGM7MpgRRtxolvQWcw:1636473009693&ei=sZiKYarbKeeF9u8PrPGWuAo&start=70&sa=N&ved=2ahUKEwjq97SD0Yv0AhXngv0HHay4Bac4PBDy0wN6BAgBEEE&biw=1278&bih=1287&dpr=1',
'https://www.google.com/search?q=data+analytics+portfolio+projects&rlz=1C1GCEA_enCZ869CZ869&sxsrf=AOaemvL0pRFQCcYFw9mkYh34bTbC9M-7sg:1636473031938&ei=x5iKYcPHOL2S9u8Pk6WlmAc&start=80&sa=N&ved=2ahUKEwiDx4KO0Yv0AhU9if0HHZNSCXM4RhDy0wN6BAgBEEE&biw=1278&bih=1287&dpr=1',
'https://www.google.com/search?q=data+analytics+portfolio+projects&rlz=1C1GCEA_enCZ869CZ869&sxsrf=AOaemvKKAXCIenr8Rw60Sl1FLyGUDfNUYQ:1636473057767&ei=4ZiKYav9LfHg7_UP6v2BuAI&start=90&sa=N&ved=2ahUKEwjr8aqa0Yv0AhVx8LsIHep-ACc4UBDy0wN6BAgBEEI&biw=1278&bih=1287&dpr=1',
'https://www.google.com/search?q=data+analytics+portfolio+projects&rlz=1C1GCEA_enCZ869CZ869&sxsrf=AOaemvIqIgE2rHPr5KI8le-RhL7NxUH9Iw:1636473095677&ei=B5mKYczaKNLh7_UP88OO4AE&start=100&sa=N&ved=2ahUKEwiM-rSs0Yv0AhXS8LsIHfOhAxw4WhDy0wN6BAgBEEE&biw=1278&bih=1287&dpr=1',
]
# + id="HPlJzZySNnXx"
page = requests.get(url_list[0])
soup = BeautifulSoup(page.content, 'html.parser')
# + colab={"base_uri": "https://localhost:8080/"} id="A-wGsDh3OCII" outputId="3338f629-368d-4c31-e2f2-c1b899c1f291"
soup.find_all('h3')
# + colab={"base_uri": "https://localhost:8080/"} id="VKHRILRQOpCI" outputId="86d4d005-7f4e-4ade-8b3c-3f23f93ca4b5"
#[h.parent for h in soup.find_all('div', 'BNeawe vvjwJb AP7Wnd')]
urls = [h.parent['href'] for h in soup.find_all('h3')]
for url in urls:
#print(type(re.search(r'q=(.*?)&', url)))
print(re.search(r'q=(.*?)&', url).group(1))
# + colab={"base_uri": "https://localhost:8080/"} id="R7FByYOoO5KB" outputId="6a768dde-5082-46fe-9d1f-ed89aa3b6912"
[re.search(r'q=(.*?)&', h.parent['href']).group(1) for h in soup.find_all('h3')]
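# A note on the design choice above: the regex assumes every href contains a `q=...&` segment, and `re.search` returns None (so `.group` raises AttributeError) otherwise. A more defensive alternative uses the standard library's URL parser — a sketch only; the `/url?q=...` shape of Google result links is an assumption:

```python
from urllib.parse import urlparse, parse_qs

def extract_q(href):
    # Return the first 'q' query parameter of a URL, or None if absent.
    params = parse_qs(urlparse(href).query)
    values = params.get('q')
    return values[0] if values else None

print(extract_q('/url?q=https://example.com/post&sa=U'))  # https://example.com/post
print(extract_q('/search?start=10'))                      # None
```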
# + id="X1ncdYikVPWe" colab={"base_uri": "https://localhost:8080/"} outputId="6db9b81f-c9dd-41ca-9bc2-5c818421661b"
[t.text for t in soup.find_all('h3')]
# + id="IFI0VwZFVcjW" colab={"base_uri": "https://localhost:8080/", "height": 359} outputId="206cb62a-20d4-4e2f-ca29-f4f4dcfdac62"
pd.set_option("display.max_colwidth", None)
pd.DataFrame.from_dict({'title': [t.text for t in soup.find_all('h3')], 'url':[re.search(r'q=(.*?)&', h.parent['href']).group(1) for h in soup.find_all('h3')]})
# + id="TBe8LPf-V8te" colab={"base_uri": "https://localhost:8080/", "height": 419} outputId="8061242b-8816-4434-c677-b5d69ff5d4d1"
df = pd.DataFrame(columns=['title', 'url'])
for url in url_list:
page = requests.get(url)
soup = BeautifulSoup(page.content, 'html.parser')
    new_rows = pd.DataFrame.from_dict({
        'title': [t.text for t in soup.find_all('h3')],
        'url': [re.search(r'q=(.*?)&', h.parent['href']).group(1) for h in soup.find_all('h3')]
    })
    # DataFrame.append is deprecated; pd.concat is the supported way to stack frames
    df = pd.concat([df, new_rows], ignore_index=True)
df
# + id="dJr4onbDYJFu"
df.to_csv('data_analytics_portfolio_projects.csv', encoding='utf-8')
# + id="MH5k-bfpanx8"
| Wrangle_Data/Scrape_Google_Search/20211109_Scrape_Google_Search.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python [default]
# language: python
# name: python2
# ---
# %load_ext autoreload
import numpy as np
# %matplotlib inline
import matplotlib.pyplot as plt
import os
from lsst.sims.catalogs.measures.instance import InstanceCatalog
from lsst.sims.catUtils.mixins import CosmologyMixin
from lsst.sims.utils import ObservationMetaData
from lsst.sims.catUtils.utils import ObservationMetaDataGenerator
# +
from lsst.sims.photUtils import BandpassDict
# -
lsst_bp = BandpassDict.loadTotalBandpassesFromFiles()
import seaborn as sns
sns.set()
import pandas as pd
degConv = np.array([1., 1./60., 1./3600.])
raConv = degConv / 24.0 * 360.
centralRA = np.dot(np.array([3., 32., 30]), raConv) #03h 32m 30s
centralDec = np.dot(np.array([-28, 6., 0.]), degConv)
patchRadius = 0.4 * np.sqrt(2) #np.dot(np.array([0.0, 10.0, 0.]), degConv)
area = np.pi * (0.4 * np.sqrt(2.))**2
factorLarger = area / 0.16 / 0.16; print(factorLarger)
NumHighSNdesired = factorLarger * 100; print (NumHighSNdesired)
print(centralRA, centralDec, patchRadius)
TwinklesObsMetaData = ObservationMetaData(boundType='circle',pointingRA=centralRA,pointingDec=centralDec,
boundLength=patchRadius, mjd=49540.0)
TwinklesObsMetaDataSmall = ObservationMetaData(boundType='box',pointingRA=centralRA,pointingDec=centralDec,
boundLength=0.167, mjd=49540.0)
#The following is to get the object ids in the registry
import lsst.sims.catUtils.baseCatalogModels as bcm
from lsst.sims.catalogs.generation.db import CatalogDBObject
from lsst.sims.catUtils.baseCatalogModels.GalaxyModels import GalaxyTileObj
galaxyTiled = GalaxyTileObj()
class galCopy(InstanceCatalog):
column_outputs = ['galtileid', 'raJ2000', 'decJ2000', 'redshift', 'a_d', 'b_d', 'pa_disk']
override_formats = {'raJ2000': '%8e', 'decJ2000': '%8e', 'a_d': '%8e', 'b_d': '%8e', 'pa_disk': '%8e'}
TwinklesSmall = galCopy(galaxyTiled, obs_metadata=TwinklesObsMetaDataSmall)
TwinklesSmall.write_catalog('twinklesSmall.dat')
TwinklesGalaxies = galCopy(galaxyTiled, obs_metadata=TwinklesObsMetaData)
TwinkSmallGalsdf = pd.read_csv('twinklesSmall.dat', sep=',\s+', engine='python')  # same filename as written above
TwinkSmallGalsdf.rename(columns={'#galtileid':'galtileid'}, inplace=True)
TwinkSmallGalsdf.head()
TwinkSmallGalsdf['zbin']= TwinkSmallGalsdf['redshift'] // 0.1
TwinklesGalaxies.write_catalog('TwinklesGalaxies.dat')
TwinkGalsdf = pd.read_csv('TwinklesGalaxies.dat', sep=',\s+', engine='python', index_col=0)
len(TwinkGalsdf)
fig, ax = plt.subplots()
ax.plot(np.degrees(TwinkGalsdf.raJ2000.values), np.degrees(TwinkGalsdf.decJ2000.values), 'o')
ax.set_aspect('equal')
ax.set_ylabel('dec')
ax.set_xlabel('ra')
ax.set_title('Sprinkled Region')
fig.savefig('Twinkles_Area.png')
TwinkGalsdf['zbin']= TwinkGalsdf['redshift'] // 0.1
zmids = np.arange(0.05, (TwinkGalsdf.zbin.max()+1.)* 0.1, 0.1)
print(zmids)
zbinnedGals = TwinkGalsdf.groupby('zbin')
binnedTwinks = pd.DataFrame({'zmids': zmids})
binnedTwinks['counts'] = zbinnedGals['redshift'].count()
fig, ax = plt.subplots()
ax.errorbar(binnedTwinks.zmids, binnedTwinks.counts, np.sqrt(binnedTwinks.counts), fmt='o' )
ax.set_xlim(0., 1.4)
ax.set_xlabel('redshift')
ax.set_ylabel('Number of galaxies in bins')
# ## SN light Curves
import sncosmo
import gedankenLSST
lsstchar = gedankenLSST.LSSTReq
lsstchar['meanNumVisits'] = pd.Series(np.repeat(3650.,6), index=['u','g','r','i','z','y'])
lsstchar['meanNumVisits']
sn = gedankenLSST.GSN_Obs(mjd_center=49530., lsstrequirements=lsstchar)
sndf = sn.summary
sndf[sndf['filter'] == 'u'].hist('night',bins=80)
lsstchar['medianSVD']
s = gedankenLSST.SNObs(summarydf=sndf, t0=49530, lsst_bp=lsst_bp, ra=centralRA, dec=centralDec)
a = []
for z in binnedTwinks.zmids.values[:12]:
s.snState = {'z': z}
lc = s.lightcurve
totalEpochs = len(lc)
highSNRlc = lc.query('SNR > 5')
highSNREpochs = len(highSNRlc)
highSNREpochs_u = len(highSNRlc.query("filter == 'u'"))
highSNREpochs_g = len(highSNRlc.query("filter == 'g'"))
highSNREpochs_r = len(highSNRlc.query("filter == 'r'"))
highSNREpochs_i = len(highSNRlc.query("filter == 'i'"))
highSNREpochs_z = len(highSNRlc.query("filter == 'z'"))
highSNREpochs_y = len(highSNRlc.query("filter == 'y'"))
a.append([z, highSNREpochs, highSNREpochs_u, highSNREpochs_g, highSNREpochs_r, highSNREpochs_i, highSNREpochs_z,
highSNREpochs_y, totalEpochs, -2.5 * np.log10(s.SN.get('x0'))])
FlatzSummary = pd.DataFrame(a, columns=['redshift', 'highSNREpochs', 'u', 'g', 'r', 'i', 'z', 'y', 'totalEpochs', 'mB'])
FlatzSummary['frac'] = FlatzSummary.highSNREpochs / FlatzSummary.totalEpochs
numSNperZBinDesired = NumHighSNdesired /12.
FlatzSummary['NumSNperzBin'] = numSNperZBinDesired * 3650. / 80. / FlatzSummary['frac']
_nsn = FlatzSummary.NumSNperzBin.replace([-np.inf, np.inf], 0.) * FlatzSummary.frac
print(_nsn.sum() * 80. / 3650.)
# +
# Increase the numbers since some of the bins are empty
# -
FlatzSummary['NumSNperzBin'] = FlatzSummary['NumSNperzBin'] * (12./9.)
_nsn = FlatzSummary.NumSNperzBin.replace([-np.inf, np.inf], 0.) * FlatzSummary.frac
print(_nsn.sum() * 80. / 3650.)
FlatzSummary['numGalsperzBin'] = binnedTwinks['counts'].head(12)
FlatzSummary['numSNperGal'] = FlatzSummary['NumSNperzBin'] / FlatzSummary['numGalsperzBin']
FlatzSummary
plt.plot(FlatzSummary.redshift, FlatzSummary['NumSNperzBin'].replace(np.inf,0), 'o')
# # SN Table
model = sncosmo.Model(source='salt2')
from astropy.cosmology import FlatLambdaCDM
# ### Simulation Parameters
# +
# Astropy cosmology object for CatSim Cosmology
CatSimCosmo = FlatLambdaCDM(Om0=0.25, H0=73.)
alphaTwinkles = 0.11
betaTwinkles = -3.14
cdistTwinkles = [0., 0.1]
x1distTwinkles = [0, 1.]
MTwinkles = [-19.3, 0.15]
# -
zbinnedGals = TwinkGalsdf.groupby('zbin')
def assignIds(snwithHosts, maxval=100000000 * 10000 * 100):
snwithHosts['offset'] = 0
sngroups = snwithHosts.groupby('galtileid')
for host in (sngroups.count() > 0).index.values:
sn = sngroups.get_group(host)
idx = sn.index
snwithHosts.loc[idx, 'offset'] = np.arange(len(sn))
return None
def assignSNHosts(galdf, numSN, seed):
if seed is not None:
np.random.seed(seed)
sngalids = np.random.choice(galdf.index.values, numSN, replace=True)
zvals = galdf.ix[sngalids,'redshift']
df = pd.DataFrame({'galtileid': sngalids,
'redshift' : zvals.values})
return df
# Slow step: Takes about 20 mins
def assignSN(zbinnedGals, SNzSummary, binList=[0, 1], maxval=100000000 * 10000 * 100, seed=42):
dfs = []
for idx in binList:
galdf = zbinnedGals.get_group(idx)
numSN = SNzSummary.NumSNperzBin[idx]
if idx == 0 :
snWithHosts = assignSNHosts(galdf, numSN, seed)
else:
snWithHosts = assignSNHosts(galdf, numSN, seed=None)
assignIds(snWithHosts, maxval=maxval)
dfs.append(snWithHosts)
snvals = pd.concat(dfs)
snvals['snid'] = snvals['galtileid'] *100 + snvals['offset']
return snvals
# Slow step ~ 20 mins
snvals = assignSN(zbinnedGals, FlatzSummary, binList=[0, 1, 2, 3, 4, 5, 6, 7, 8])
import time
print(time.time())
snvals.set_index(snvals['snid'], drop=True, verify_integrity=True, inplace=True)
def assigSNParams(sntable, seed=42, cosmo=None, T0Min=0., T0Max=3650.,
MabsScatter= [-19.3, 0.15], cScatter=[0., 0.1], x1Scatter=[0., 1.], alpha=0.11, beta=-3.14 ):
if seed is not None:
np.random.seed(seed)
model = sncosmo.Model(source='salt2')
if cosmo is None:
cosmo = FlatLambdaCDM(Om0=0.25, H0=73.)
numSN = len(sntable)
zvals = sntable.redshift.values
cvals = np.random.normal(cScatter[0], cScatter[1], size=numSN)
x1vals = np.random.normal(x1Scatter[0], x1Scatter[1], size=numSN)
M = np.random.normal(MabsScatter[0], MabsScatter[1], size=numSN)
M += -alpha * x1vals - beta * cvals
t0 = np.random.uniform(T0Min, T0Max, size=numSN)
x0 = np.zeros(numSN)
mB = np.zeros(numSN)
# Slow Step
for i, Mabs in enumerate(M):
model.set(z=zvals[i], c=cvals[i], x1=x1vals[i])
model.set_source_peakabsmag(Mabs, 'bessellB', 'ab', cosmo=cosmo)
x0[i] = model.get('x0')
mB[i] = model.source.peakmag('bessellB', 'ab')
sntable['t0'] = t0
sntable['c'] = cvals
sntable['x1'] = x1vals
sntable['x0'] = x0
sntable['mB'] = mB
sntable['M'] = M
print (alpha, beta, cScatter, x1Scatter, MabsScatter)
starttime = time.time()
assigSNParams(sntable=snvals, cosmo=CatSimCosmo, alpha=alphaTwinkles, beta=betaTwinkles, MabsScatter=MTwinkles,
seed=24)
endtime = time.time()
print("Time taken", endtime - starttime)
# +
def assignPositions(sntable, Galsdf, seed=42):
radiansPerArcSec = (np.pi / 180.)* (1./60.)**2
if seed is not None:
np.random.seed(seed)
r1 = np.random.normal(0., 1., sntable.snid.size)
r2 = np.random.normal(0., 1., sntable.snid.size)
sntable['raJ2000'] = Galsdf.ix[sntable.galtileid, 'raJ2000'].values
sntable['decJ2000'] = Galsdf.ix[sntable.galtileid, 'decJ2000'].values
sntable['a_d'] = Galsdf.ix[sntable.galtileid, 'a_d'].values * radiansPerArcSec
sntable['b_d'] = Galsdf.ix[sntable.galtileid, 'b_d'].values * radiansPerArcSec
# convert from degrees to radians
sntable['theta'] = np.radians(Galsdf.ix[sntable.galtileid, 'pa_disk'].values)
sntable['sndec'] = np.cos(-sntable['theta']) * sntable['a_d']* r1 + np.sin(-sntable['theta'])*sntable['b_d'] * r2
sntable['snra'] = - np.sin(-sntable['theta']) * sntable['a_d']*r1 + np.cos(-sntable['theta'])* sntable['b_d'] * r2
sntable['snra'] += Galsdf.ix[sntable.galtileid, 'raJ2000'].values
sntable['sndec'] += Galsdf.ix[sntable.galtileid, 'decJ2000'].values
sntable['sndec'] = np.degrees(sntable['sndec'])
sntable['snra'] = np.degrees(sntable['snra'])
sntable['raJ2000'] = np.degrees(sntable['raJ2000'])
sntable['decJ2000'] = np.degrees(sntable['decJ2000'])
# -
assignPositions(snvals, TwinkGalsdf, seed=4 )
snvals.raJ2000.hist(histtype='step', alpha=1, lw=4, color='k')
snvals.snra.hist(histtype='step', lw=1, color='r')
(snvals.snra - snvals.raJ2000).hist(histtype='step', lw=1, color='r',bins=50, **{'normed':True, 'log':True})
(snvals.sndec - snvals.decJ2000).hist(histtype='step', lw=1, color='b',bins=50, **{'normed':True, 'log':True})
plt.hist(snvals.sndec - snvals.decJ2000,histtype='step', lw=1, color='b',bins=50,
log=True)
snvals.head()
snvals.columns
(snvals.raJ2000.iloc[0] - snvals.snra.iloc[0]) * 180. / np.pi * 3600.
snvals.head()
(3600* (snvals.snra - snvals.raJ2000).apply(np.degrees)).hist(bins=20)
snvals.head()
snvals.to_csv('TwinklesSN_new.csv')
# +
#snvals.to_csv('TwinklesSN.csv')
# -
len(np.unique(snvals.index.values)) == len(snvals.index.values)
| doc/SNSimDocumentation/Twinkles_SimulatedSN_Setup.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Using `perspective.Table`
#
# This notebook provides an overview of `perspective.Table`, Perspective's core component that allows for lightning-fast data loading, query, update, and transformation.
#
# Tables can be used alone to manage datasets, or connected for data to flow quickly between multiple tables. Outside of a Jupyter context, Perspective tables can be used to create efficient [servers](https://github.com/finos/perspective/tree/master/examples/tornado-python) which allow data to be hosted and viewed by clients in the browser using Perspective's Javascript library.
from perspective import Table
from datetime import date, datetime
import numpy as np
import pandas as pd
import requests
# ### Supported Data Formats
#
# Perspective supports 6 core data types: `int`, `float`, `str`, `bool`, `datetime`, and `date`, and several data formats:
# +
# Pandas DataFrames
df = pd.DataFrame({
"a": np.arange(0, 2),
"b": np.array(["a", "b"], dtype=object),
"nullable": [1.5, np.nan], # perspective handles `None` and `np.nan` values
"mixed": [None, 1]
})
# Column-oriented
data = {
"int": [i for i in range(4)],
"float": [i * 1.25 for i in range(4)],
"str": ["a", "b", "c", "d"],
"bool": [True, False, True, False],
"date": [date.today() for i in range(4)],
"datetime": [datetime.now() for i in range(4)]
}
# Row-oriented
rows = [{"a": 1, "b": True}, {"a": 2, "b": False}]
# CSV strings
csv = df.to_csv()
# -
# ### Schemas
#
# To explicitly specify data types for columns, create a schema (a `dict` of `str` column names to data types):
schema = {
"int": float,
"float": int,
"str": str,
"bool": bool,
"date": datetime,
"datetime": datetime
}
# ### Creating a Table
#
# A Table can be created by passing in a dataset or a schema, like the ones created above:
# +
# From a dataset
table = Table(data)
# Or a dataframe
df_table = Table(df)
# Or a CSV
csv_table = Table(csv)
# tables can be created from schema
table2 = Table(schema)
assert table2.size() == 0
# constructing a table with an index, which is a column name to be used as the primary key
indexed = Table(data, index="str")
# or a limit, which is a total cap on the number of rows in the table - updates past `limit` overwrite at row 0
limited = Table(data, limit=2)
# -
# ### Using the Table
#
# A Table has several queryable properties:
# +
# schema() returns a mapping of column names to data types
display("Table schema:", table.schema())
# size() returns the number of rows in the table
display("Table has {} rows".format(table.size()))
# columns() returns a List of the table's column names
display("Table columns:", table.columns())
# -
# ### Updating with new data
#
# To update or stream new data into the Table, call `table.update(data)`:
# +
# you can update all columns
table.update(data)
print("after update:", table.size())
# or however many you'd like
table.update({
"int": [5, 6, 7],
"str": ["x", "y", "z"]
})
# but you cannot add new columns through updating - create a new Table instead
try:
    table.update({
        "abcd": [1]
    })
except Exception:
    pass  # updating with an unknown column raises
# updates on unindexed tables always append
print("after append:", table.size())
# updates on indexed tables should include the primary key - the new data overwrites at the row specified by the primary key
indexed.update([{"str": "b", "int": 100}])
print("after indexed partial update:", indexed.size())
# without a primary key, the update appends to the end of the dataset
indexed.update([{"int": 101}])
print("after indexed append:", indexed.size())
# -
# # Queries and transformations using `View`
#
# `table.view()` allows you to apply various pivots, aggregates, sorts, filters, column selections, and expression computations on the Table, as well as return the results in a variety of output data formats.
#
# To create a view, call the `view()` method on an existing table:
# +
view = table.view() # a view with zero transformations - returns the dataset as passed in
# view metadata
print("View has {} rows and {} columns".format(view.num_rows(), view.num_columns()))
# -
# ### Applying transformations
#
# To apply transformations, pass in the relevant `kwargs` into the constructor:
# +
pivoted = table.view(group_by=["int"], split_by=["str"]) # group and split the underlying dataset
aggregated = table.view(group_by=["int"], aggregates={"float": "avg"}) # specify aggregations for individual columns
subset = table.view(columns=["float"]) # show only the columns you're interested in
sorted_view = table.view(sort=[["str", "desc"], ["int", "asc"]]) # sort on a specific column, or multiple columns
filtered = table.view(filter=[["int", ">", 2]]) # filter the dataset on a specific value
expressions = table.view(expressions=['"int" + "float" / 100']) # calculate arbitrary expressions over the dataset
# -
# ### Serializing Data
#
# Views are used to serialize data to the user in several formats:
# - `to_records`: outputs a list of dictionaries, each of which is a single row
# - `to_dict`: outputs a dictionary of lists, each string key the name of a column
# - `to_numpy`: outputs a dictionary of numpy arrays
# - `to_df`: outputs a `pandas.DataFrame`
# - `to_arrow`: outputs an Apache Arrow binary, which can be passed into another `perspective.Table` to create a copy of the first table
# +
rows = view.to_records()
columnar = view.to_dict()
np_out = view.to_numpy()
df_out = view.to_df()
arrow_out = view.to_arrow()
# -
# Data from pivoted or otherwise transformed views reflect the state of the transformed dataset.
filtered_df = filtered.to_df()
filtered_df
# If the table is updated with data, views are automatically notified of the updates:
v1 = table.view()
print("v1 has {} rows and {} columns".format(v1.num_rows(), v1.num_columns()))
table.update({"int": [100, 200, 300, 400]})
print("v1 has {} rows and {} columns".format(v1.num_rows(), v1.num_columns()))
# # Using callbacks to connect `Table` instances
#
# Custom callback functions can be attached to `Table` and `View` instances.
#
# The most useful is `View.on_update`, which triggers a callback after the Table has been updated:
# +
# The `delta` property is an Arrow binary containing updated rows
def callback(port_id, delta):
new_table = Table(delta)
display(new_table.view().to_dict())
table = Table(data)
view = table.view()
# Register the callback with `mode="row"` to enable pushing back updated data
view.on_update(callback, mode="row")
# Update will trigger the callback
table.update({
"int": [1, 3],
"str": ["abc", "def"]
})
# -
# Because the callback can be triggered with a _copy_ of the updated data, `on_update` lets you quickly and dependably connect multiple tables that share state:
# +
# Create a table and a view
t1 = Table(data)
v1 = t1.view()
# And a new table that feeds from `t1`
t2 = Table(t1.schema())
# And a callback that updates `t2` whenever `t1` updates
def cb(port_id, delta):
t2.update(delta)
# register the callback
v1.on_update(cb, mode="row")
# update t1, which updates t2 automatically
t1.update(data)
# t2 now has data after t1 is updated
t2.view().to_df()
| examples/jupyter-notebooks/table_tutorial.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] tags=["remove_cell"]
# <h1 style="font-size:35px;
# color:black;
# ">Lab 7 Quantum Simulation as a Search Algorithm </h1>
# -
# Prerequisites:
# - [Ch.3.8 Grover's Algorithm](https://qiskit.org/textbook/ch-algorithms/grover.html)
# - [Ch.2.5 Proving Universality](https://qiskit.org/textbook/ch-gates/proving-universality.html#2.2-Unitary-and-Hermitian-matrices-)
#
# Other relevant materials:
# - [Ch 6.2 in QCQI] <NAME> and <NAME>. Quantum Computation and Quantum Information, p255
# +
from qiskit import *
from qiskit.quantum_info import Statevector, partial_trace
from qiskit.visualization import plot_state_qsphere, plot_histogram
import numpy as np
import matplotlib.pyplot as plt
import scipy.linalg as la
# -
sim = Aer.get_backend('qasm_simulator')
# <h2 style="font-size:24px;">Part 1: Hamiltonian Simulation</h2>
#
# <br>
# <div style="background: #E8E7EB; border-radius: 5px;
# -moz-border-radius: 5px;">
# <p style="background: #800080;
# border-radius: 5px 5px 0px 0px;
# padding: 10px 0px 10px 10px;
# font-size:18px;
# color:white;
# "><b>Goal</b></p>
# <p style=" padding: 0px 0px 10px 10px;
# font-size:16px;"> In this lab, we consider changes to a quantum state viewed as an evolution process generated by a given Hamiltonian. For a specified Hamiltonian, there is a corresponding unitary operator that determines the final state for any given initial state.
# </p>
# </div>
#
# For an initial state, $|\psi(0)\rangle$ and a time independent Hamiltonian $H$ , the final state $|\psi(t)\rangle$ is $|\psi(t)\rangle = e^{-iHt}|\psi(0)\rangle$. Therefore, by constructing an appropriate gate for the unitary operator $e^{-iHt}$, we can build a quantum circuit that simulates the evolution of the quantum state $|\psi\rangle$.
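# Before building the circuit, the evolution can be cross-checked numerically: for this small $H$, `scipy.linalg.expm` gives the exact operator $e^{-iHt}$. The snippet below is a verification sketch only, not part of the lab's circuit construction:

```python
import numpy as np
from scipy.linalg import expm

# H = |0><0| + |+><+| written as a 2x2 matrix in the computational basis
ket0 = np.array([1.0, 0.0])
ketp = np.array([1.0, 1.0]) / np.sqrt(2)
H = np.outer(ket0, ket0) + np.outer(ketp, ketp)

theta = np.pi / 9
U = expm(-1j * theta * H)   # exact evolution operator e^{-iH*theta}
psi_t = U @ ketp            # evolve the initial state |+>

# unitary evolution preserves the norm of the state
print(np.isclose(np.linalg.norm(psi_t), 1.0))
```

# Note that the circuit's `H1 H2` step only approximates $e^{-i\theta H}$, since the two terms of $H$ do not commute, so the circuit state tracks powers of `U` approximately rather than exactly.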
# <h3 style="font-size: 20px">1. Build a quantum circuit for a given Hamiltonian. </h3>
# When the Hamiltonian $H$ and the initial state of the system, $|\psi(0)\rangle$, are given by
#
# $H = |0\rangle\langle0| + |+\rangle\langle+|, ~~~~ |\psi(0)\rangle = |+\rangle = \frac{1}{\sqrt 2}(|0\rangle + |1\rangle)$.
#
# Build the circuit with two qubits to evolve the state, $|\psi(0)\rangle$, by $H$ for a time $\Delta t = \theta$, where the state of the system is encoded on the 0th qubit and the 1st qubit is an auxiliary. Then, the final state $|\psi(\theta)\rangle$ is $|\psi(\theta)\rangle = e^{-i\theta ~ ( |0\rangle\langle0| ~ + ~ |+\rangle\langle+| )}~|\psi(0)\rangle$.
# <h4 style="font-size: 17px">📓Step A. Show that the gate H1 from the following circuit performs the operation $e^{-i\frac{\pi}{9}|0\rangle\langle0|}$ on the 0th qubit when the state of the system is encoded on the 0th qubit and the 1st qubit, auxiliary, is set to the $|0\rangle$ state.</h4>
# +
h1 = QuantumCircuit(2, name = 'H1')
h1.cnot(0, 1)
h1.p(np.pi/9, 1)
h1.cnot(0, 1)
H1 = h1.to_gate()
h1.draw()
# -
# **Your Solution**:
#
#
# <h4 style="font-size: 17px">📓Step B. Construct the gate H2 by completing the following code for the circuit `h2` to perform the operation $e^{-i\frac{\pi}{9}|+\rangle\langle+|}$ on the 0th qubit when the state of the system is encoded on the 0th qubit and the 1st qubit, auxiliary, is set to the $|0\rangle$ state. </h4>
# +
h2 = QuantumCircuit(2, name='H2')
#### Your code goes here ###
#############################
H2 = h2.to_gate()
h2.draw()
# -
# <h3 style="font-size: 20px">2. Execute the cell below to generate the state of the 0th qubit after every iteration.</h3>
# The circuit performs $(H1H2)^7|+\rangle = (~ e^{-i\frac{\pi}{9} ~ |0\rangle\langle0|}e^{-i\frac{\pi}{9}~|+\rangle\langle+|} ~)^7~|+\rangle$ on the 0th qubit. The state of the 0th qubit after each `H1H2` operation is stored in the list variable 'myst'.
# +
from qiskit.quantum_info import Statevector, partial_trace
def st_out(qc):
out = Statevector.from_instruction(qc)
out_red = partial_trace(out, [1])
prob, st_all = la.eig(out_red.data)
cond = (prob>0.99) & (prob<1.01)
st = st_all[:, cond].ravel()
return(st)
myst = []
circ = QuantumCircuit(2)
circ.h(0)
st = st_out(circ)
myst.append(Statevector(st))
for _ in range(7):
circ.append(H1, range(2))
circ.append(H2, range(2))
st = st_out(circ)
myst.append(Statevector(st))
circ.draw()
# -
# The following Bloch sphere picture shows the evolution of the 0th qubit state. As it shows, the state starts from the $|+\rangle$ state, rotates toward, and passes through the $|0\rangle$ state. Therefore, with an appropriate angle for the `H1` and `H2` operations, the $|+\rangle$ state evolves to the $|0\rangle$ state by applying $H1H2 = e^{-i\theta ~ |0\rangle\langle0|}e^{-i\theta~|+\rangle\langle+|}$ the proper number of times.
# <img src="image/L7_bloch_sphere.png" alt="Drawing" style="width: 300px; float: left!important;">
# If you have installed `kaleidoscope` or run this lab on [IQX](https://quantum-computing.ibm.com), you can execute the cell below to visualize the state evolution through the interactive Bloch sphere.
# +
from kaleidoscope import bloch_sphere
from matplotlib.colors import LinearSegmentedColormap, rgb2hex
cm = LinearSegmentedColormap.from_list('graypurple', ["#999999", "#AA00FF"])
vectors_color = [rgb2hex(cm(kk)) for kk in np.linspace(-1,1,len(myst))]
bloch_sphere(myst, vectors_color = vectors_color)
# -
# <h2 style="font-size:24px;">Part 2: Quantum Search as a Quantum Simulation</h2>
#
# <br>
# <div style="background: #E8E7EB; border-radius: 5px;
# -moz-border-radius: 5px;">
# <p style="background: #800080;
# border-radius: 5px 5px 0px 0px;
# padding: 10px 0px 10px 10px;
# font-size:18px;
# color:white;
# "><b>Goal</b></p>
# <p style=" padding: 0px 0px 10px 10px;
# font-size:16px;"> In this part of the lab, we solve a search problem through quantum simulation.
# </p>
# </div>
#
# In Part 1, we showed that a Hamiltonian built from two states, $ H = |\psi_j\rangle\langle\psi_j| + |\psi_i\rangle\langle\psi_i| $, transforms the state $|\psi_i\rangle$ into $|\psi_j\rangle$ when applied for a proper time duration.
#
# Considering a search problem with a unique solution, we should be able to find the solution with the form of the Hamiltonian, $ H = |x\rangle\langle x| + |\psi\rangle\langle\psi|, $ when all possible items are encoded in a superposition state $|\psi\rangle$ and given as the initial state, same as in Grover's algorithm, while $|x\rangle$ represents the unknown solution.
#
# Applying the unitary operator $U = e^{-iH\Delta t}$ to the initial state, $|\psi\rangle$, the right number of times with a properly chosen $\Delta t$ should evolve the state $|\psi\rangle$ into the solution $|x\rangle$, or close enough to it. The following code constructs the oracle gate for the search problem. Execute the cell below.
# +
n = 5
qc = QuantumCircuit(n+1, name='Oracle')
qc.mct(list(range(n)), n)
Oracle = qc.to_gate()
# -
# The following circuit encodes the phase $\pi$ on the solution state and zero on the other items through phase kickback with the 5th qubit as an auxiliary. Therefore, the output state of the circuit is $(|\psi\rangle - |x\rangle) + e^{i\pi}|x\rangle$, which can be confirmed visually using a qsphere plot where the color indicates the phase of each basis state. Run the following two cells.
# +
test = QuantumCircuit(n+1)
test.x(n)
test.h(range(n+1))
test.append(Oracle, range(n+1))
test.h(n)
test.draw()
# +
st = Statevector.from_instruction(test)
st_red = partial_trace(st, [5])
plot_state_qsphere(st_red)
# -
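# The sign flip applied by the oracle can also be seen in a toy linear-algebra model — a sketch in plain NumPy where the solution index is chosen arbitrarily for illustration:

```python
import numpy as np

N = 32
sol = 5                          # illustrative solution index (an assumption)
psi = np.ones(N) / np.sqrt(N)    # uniform superposition over the 32 items
oracle = np.eye(N)
oracle[sol, sol] = -1.0          # phase oracle: |x> -> e^{i*pi}|x>, others unchanged

out = oracle @ psi
print(out[sol] < 0)              # the solution amplitude picked up the pi phase
print(np.isclose(np.linalg.norm(out), 1.0))
```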
# <h3 style="font-size: 20px">1. Construct a circuit to approximate the Hamiltonian, $H = |x\rangle\langle x| + |\psi\rangle\langle\psi|$, when all possible items are encoded in a superposition state $|\psi\rangle$ and given as the initial state while $|x\rangle$ represents the unique unknown solution.</h3>
# As we did in Part 1, we build the circuit for the simulation with the Hamiltonian, but with more qubits to examine all the items in question. Consider a search problem having one solution out of 32 items.
# <h4 style="font-size: 17px">📓Step A. Construct the gate H1 performing the operation $e^{-i\Delta t|\psi\rangle\langle\psi|}$ by completing the following code.</h4>
def H1(delt, n=5):
h1 = QuantumCircuit(n+1, name='H1')
#### Your code goes here ######
###############################
return h1.to_gate()
# <h4 style="font-size: 17px">📓Step B. Construct the gate H2 performing the operation $e^{-i\Delta t|x\rangle\langle x|}$ by completing the following code.</h4>
def H2(delt, n=5):
h2 = QuantumCircuit(n+1, name='H2')
#### Your code goes here ######
###############################
return h2.to_gate()
# <h4 style="font-size: 17px">📓Step C. Create the circuit, 'sim_h', to compute $e^{-i \pi H_{app}}|\psi\rangle = (~e^{-i\pi~|x\rangle\langle x|}e^{-i\pi~|\psi\rangle\langle\psi|}~)|\psi\rangle $ which evolves the state $|\psi\rangle$ under the Hamiltonian $H = |x\rangle\langle x| + |\psi\rangle\langle\psi|$ approximately over the time duration $\Delta t = \pi$.</h4>
# The state $|\psi\rangle$ represents the superposition state of all possible items.
#
# Utilize the gates `H1` and `H2`.
# +
#### Your code goes here ####
############
sim_h.draw()
# -
# <h3 style="font-size: 20px">2. Show that the search problem can be solved through quantum simulation with $H_{appr}$ by verifying the two operations, Grover's algorithm and $U = e^{-i\Delta t~H_{appr}}$ with $\Delta t = \pi$, are equivalent. </h3>
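# Why the two operations coincide can be sketched with plain linear algebra, independent of the circuits (the solution index below is chosen arbitrarily for illustration): for any projector $P$, $e^{-i\pi P} = I - 2P$ is a reflection, so one simulation step with $\Delta t = \pi$ is a product of two reflections, which is exactly the geometry of a Grover iteration up to a global phase.

```python
import numpy as np
from scipy.linalg import expm

N = 32
sol = 5                                   # illustrative solution index
x = np.zeros(N); x[sol] = 1.0
psi = np.ones(N) / np.sqrt(N)             # uniform superposition |psi>
Px, Ppsi = np.outer(x, x), np.outer(psi, psi)

# Key identity: for a projector P, e^{-i*pi*P} = I - 2P (a reflection)
print(np.allclose(expm(-1j * np.pi * Px), np.eye(N) - 2 * Px))

# One dt = pi simulation step = reflection about |x>, then about |psi>
step = (np.eye(N) - 2 * Ppsi) @ (np.eye(N) - 2 * Px)
state = psi.copy()
for _ in range(4):                        # roughly (pi/4)*sqrt(32) iterations
    state = step @ state
print(abs(state[sol]) ** 2)               # probability of measuring the solution
```

# Repeating the step amplifies the solution amplitude just as Grover iterations do.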
# <h4 style="font-size: 17px">Step A. The following circuit, `grover`, runs the Grover's algorithm for the problem to find a solution for the oracle that we built above. Run the cell below. </h4>
# +
qc = QuantumCircuit(n+1, name='Amp')
qc.h(range(n))
qc.x(range(n))
qc.mct(list(range(n)), n)
qc.x(range(n))
qc.h(range(n))
Amp = qc.to_gate()
grover = QuantumCircuit(n+1)
grover.x(n)
grover.h(range(n+1))
grover.append(Oracle, range(n+1))
grover.append(Amp, range(n+1))
grover.h(n)
grover.x(n)
grover.draw()
# -
# <h4 style="font-size: 17px">Step B. Upon executing the cells below, the result shows that the circuits, 'grover' and 'sim_h' are identical up to a global phase. </h4>
st_simh = Statevector.from_instruction(sim_h)
st_grover = Statevector.from_instruction(grover)
print('grover circuit and sim_h circuit generate the same output state:', st_simh == st_grover)
plot_state_qsphere(st_simh)
plot_state_qsphere(st_grover)
# <h4 style="font-size: 17px">📓Step C. Find the number of the Grover iterations, R, needed to find the solutions of the Oracle that we built.</h4>
# +
#### your code goes here ####
######
print(R)
# -
# <h4 style="font-size: 17px">Step D. Find the solution to the search problem, for the Oracle that we built, through Grover's algorithm and the simulation computing $e^{-i R\pi H_{app}}|\psi\rangle = (~e^{-i\pi~|x\rangle\langle x|}e^{-i\pi~|\psi\rangle\langle\psi|}~)^R|\psi\rangle $ where R is the number of iterations.</h4>
# +
## The circuit to solve the search problem through Grover's algorithm.
n = 5
qc_grover = QuantumCircuit(n+1, n)
qc_grover.x(n)
qc_grover.h(range(n+1))
for _ in range(int(R)):
qc_grover.append(Oracle, range(n+1))
qc_grover.append(Amp, range(n+1))
qc_grover.h(n)
qc_grover.x(n)
qc_grover.barrier()
qc_grover.measure(range(n), range(n))
qc_grover.draw()
# -
# 📓 Complete the code to build the circuit, `qc_sim`, to solve the search problem through the simulation.
# +
qc_sim = QuantumCircuit(n+1, n)
qc_sim.h(range(n))
#### Your code goes here ####
# -
# Run the following cell to simulate both circuits, `qc_grover` and `qc_sim` and compare their solutions.
counts = execute([qc_grover, qc_sim], sim).result().get_counts()
plot_histogram(counts, legend=['Grover', 'Hamiltonian'])
# <h3 style="font-size: 20px">3. The following result shows an example where the solution can be found with probability exactly equal to one through quantum simulation by choosing the proper time duration $\Delta t$.</h3>
# +
n = 5
qc = QuantumCircuit(n+1, n)
qc.h(range(n))
delt, R = np.pi/2.1, 6
for _ in range(int(R)):
qc.append(H1(delt), range(n+1))
qc.append(H2(delt), range(n+1))
qc.measure(range(n), range(n))
qc.draw()
# -
count = execute(qc, sim).result().get_counts()
plot_histogram(count)
| content/ch-labs/Lab07_QuantumSimulationSearchAlgorithm.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: mlA1_lab
# language: python
# name: mla1_lab
# ---
# # Decision Trees for regression on Boston housing data
# ## Boston Dataset
#
# - Data table and information:
# - https://www.cs.toronto.edu/~delve/data/boston/bostonDetail.html
# - https://www.cs.toronto.edu/~delve/data/boston/desc.html
#
#
# ### Origin
#
# The origin of the Boston housing data is natural.
#
# ### Usage
#
# This dataset may be used for Assessment.
#
# ### Number of Cases
#
# The dataset contains a total of 506 cases.
#
# ### Order
#
# The order of the cases is mysterious.
#
# ### Variables
#
# There are 14 attributes in each case of the dataset. They are:
#
# CRIM - per capita crime rate by town
#
# ZN - proportion of residential land zoned for lots over 25,000 sq.ft.
#
# INDUS - proportion of non-retail business acres per town.
#
# CHAS - Charles River dummy variable (1 if tract bounds river; 0 otherwise)
#
# NOX - nitric oxides concentration (parts per 10 million)
#
# RM - average number of rooms per dwelling
#
# AGE - proportion of owner-occupied units built prior to 1940
#
# DIS - weighted distances to five Boston employment centres
#
# RAD - index of accessibility to radial highways
#
# TAX - full-value property-tax rate per $10,000
#
# PTRATIO - pupil-teacher ratio by town
#
# B - 1000(Bk - 0.63)^2 where Bk is the proportion of blacks by town
#
# LSTAT - % lower status of the population
#
# MEDV - Median value of owner-occupied homes in $1000's
#
# ### Note
#
# Variable #14 seems to be censored at 50.00 (corresponding to a median price of $50,000); Censoring is suggested by the fact that the highest median price of exactly $50,000 is reported in 16 cases, while 15 cases have prices between $40,000 and $50,000, with prices rounded to the nearest hundred. Harrison and Rubinfeld do not mention any censoring.
# ### First, we make our imports
import sklearn
from sklearn import svm, tree, linear_model, neighbors, naive_bayes, ensemble, discriminant_analysis, gaussian_process
from xgboost import XGBClassifier
from sklearn.preprocessing import OneHotEncoder, LabelEncoder
from sklearn import feature_selection
from sklearn import model_selection
from sklearn import metrics
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.linear_model import SGDClassifier
from sklearn.metrics import confusion_matrix, classification_report
from sklearn.preprocessing import StandardScaler, LabelEncoder
from sklearn.model_selection import train_test_split, GridSearchCV, cross_val_score
import numpy as np
import random
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import r2_score
# ### Next, we define the plotting functions
def plot_learning_curve(estimator, title, X, y, ylim=None, cv=None, n_jobs=None, train_sizes=np.linspace(.1, 1.0, 30)):
# def plot_learning_curve(estimator, title, X, y, ylim=None, cv=None, n_jobs=None, train_sizes=np.linspace(.1, 1.0, 5)):
import numpy as np
import matplotlib.pyplot as plt
from sklearn.model_selection import learning_curve
plt.figure()
plt.title(title)
if ylim is not None:
plt.ylim(*ylim)
plt.xlabel("Training examples")
plt.ylabel("Average Error")
train_sizes, train_scores, test_scores = learning_curve(
estimator, X, y, cv=cv, n_jobs=n_jobs, train_sizes=train_sizes)
train_scores_mean = np.mean(train_scores, axis=1)
train_scores_std = np.std(train_scores, axis=1)
test_scores_mean = np.mean(test_scores, axis=1)
test_scores_std = np.std(test_scores, axis=1)
plt.grid()
plt.fill_between(train_sizes, 1 - train_scores_mean - train_scores_std, 1 - train_scores_mean + train_scores_std, alpha=0.1, color="r")
plt.fill_between(train_sizes, 1 - test_scores_mean - test_scores_std, 1 - test_scores_mean + test_scores_std, alpha=0.1, color="g")
plt.plot(train_sizes, 1 - train_scores_mean, 'o-', color="r", label="Training error")
plt.plot(train_sizes, 1 - test_scores_mean, 'o-', color="g", label="Cross-validation error")
plt.legend(loc="best")
plt.show()
return plt
# ### Next, we define the core function that will iterate over our different learners
def run_algos(X, y, dataset, algos, cv_splits):
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)
sc = StandardScaler()
X_train = sc.fit_transform(X_train)
X_test = sc.transform(X_test)  # use the scaler fitted on the training set
cv_split = model_selection.KFold(n_splits=cv_splits, shuffle=True, random_state=2)
final_table = pd.DataFrame(columns=['Algo Name', 'Time (sec)', 'CV Train Accuracy Mean', 'CV Test Accuracy Mean',
'CV Test 3*STD', 'r2 Score'])
for i in range(len(algos)):
name = algos[i][0]
alg = algos[i][1]
print('-' * 80)
print('-' * 80)
print('Dataset:', dataset, ', ', 'Exploring algorithm:', name)
print('-' * 80)
# get cross validation results and plot learning curves
cv_results = model_selection.cross_validate(alg, X_train, y_train, cv=cv_split, return_train_score=True)
plot_learning_curve(alg, dataset + " Learning Curves: " + name, X_train, y_train, cv=cv_split)
# build summary table
final_table.loc[i, 'Algo Name'] = name
final_table.loc[i, 'Time (sec)'] = round(cv_results['fit_time'].mean(), 5)
final_table.loc[i, 'CV Train Accuracy Mean'] = '{:.1%}'.format(cv_results['train_score'].mean())
final_table.loc[i, 'CV Test Accuracy Mean'] = '{:.1%}'.format(cv_results['test_score'].mean())
final_table.loc[i, 'CV Test 3*STD'] = '{:.1%}'.format(cv_results['test_score'].std() * 3)
alg.fit(X_train, y_train)
y_pred = alg.predict(X_test)
test_results = r2_score(y_test, y_pred)
print(test_results)
final_table.loc[i, 'r2 Score'] = '{:.1%}'.format(test_results)
# f1score = sklearn.metrics.f1_score(y_test, y_pred)
# final_table.loc[i, 'Final Test Score'] = '{:.1%}'.format(f1score)
pd.set_option('max_colwidth', -1)
# with pd.option_context('display.max_rows', None, 'display.max_columns', None, 'max_colwidth', -1):
# print('-' * 40)
# print(final_table.iloc[i])
# print('-' * 40)
print('-' * 80)
print('-' * 80)
return final_table
# ### Next we declare the algorithms we would like to run, along with GridSearchCV parameters
# +
# uncomment these in order to run the algorithms one at a time:
algos = [
['DecisionTreeRegressor, max_depth=2', tree.DecisionTreeRegressor(max_depth=2)],
['DecisionTreeRegressor, max_depth=4', tree.DecisionTreeRegressor(max_depth=4)],
['DecisionTreeRegressor, max_depth=8', tree.DecisionTreeRegressor(max_depth=8)],
['DecisionTreeRegressor, max_depth=16', tree.DecisionTreeRegressor(max_depth=16)],
]
# -
# ### Next, we set parameters for our first dataset, and run the analyses
# +
cv_splits = 10
verbose = False
np.random.seed(3)
random.seed(3)
# # suppress warnings for cleaner output
# import warnings
# warnings.filterwarnings('ignore')
# clean data
df = pd.read_csv('boston_data.csv')
X = df.iloc[:,:13].copy()
# y = df.iloc[[13]]
y = pd.DataFrame(df.iloc[:,13].values.astype('float'), columns=['medv'])
dataset = 'Boston'
final_table = run_algos(X, y, dataset, algos, cv_splits)
print(' TABLE:', dataset, 'dataset final summary table')
final_table.sort_values(by=['r2 Score'], ascending=False, inplace=True)
pd.set_option('max_colwidth', -1)
pd.set_option('display.max_rows', None)
pd.set_option('display.max_columns', None)
final_table
# -
| code/visualization-multi-dimensional/.ipynb_checkpoints/Regression-DT-checkpoint.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] slideshow={"slide_type": "slide"}
# # Control structures: blocks
#
# ## Code structure
#
#
# Python uses a colon `:` at the end of the line and 4 white-spaces indentation
# to establish code block structure.
#
# Many other programming languages use braces `{ }`; Python does not.
#
#
#
# + [markdown] slideshow={"slide_type": "subslide"}
# ```
#
# Block 1
# ...
# Header making new block:
# Block 2
# ...
# Header making new block:
# Block 3
# ...
# Block 2 (continuation)
# ...
# Block 1 continuation
# ...
# ```
#
# - Clearly indicates the beginning of a block
# - Coding style is mostly uniform. Use **4 spaces**, never tabs
# - Code structure is much more readable and clear.
#
#
# + [markdown] slideshow={"slide_type": "subslide"}
# ## Branching
#
# - Conditional branching is done with `if`/`elif`/`else` statements
# - Can have many ``elif``'s (not recommended)
# - Can be nested (too much nesting is bad for readability)
#
# Example for solving a second order polynomial root:
# + slideshow={"slide_type": "fragment"}
import math
a = -1
b = 2
c = 1
q2 = b * b - 4.0 * a * c
print("Determinant is ", q2)
if q2 < 0:
print("No real solution")
elif q2 > 0:
x1 = (-b + math.sqrt(q2)) / (2.0 * a)
x2 = (-b - math.sqrt(q2)) / (2.0 * a)
print("Two solutions %.2f and %.2f" % (x1, x2))
else:
x = -b / (2.0 * a)
print("One solution: %.2f" % x)
# + [markdown] slideshow={"slide_type": "subslide"}
# ## For loop
#
# - iterate over a sequence (list, tuple, char in string, keys in dict, any iterator)
# - no indexes, directly iterate on the sequence objects
# - when index is really needed, use `enumerate`
# - One can use multiple sequences in parallel using `zip`
# + slideshow={"slide_type": "fragment"}
ingredients = ["spam", "eggs", "ham", "spam", "sausages"]
for food in ingredients:
print("I like %s" % food)
# + slideshow={"slide_type": "fragment"}
for idx, food in enumerate(ingredients[::-1]):
print("%s is number %d in my top 5 of foods" % (food, len(ingredients)- idx))
# + slideshow={"slide_type": "subslide"}
subjects = ["Roses", "Violets", "Sugar"]
verbs = ["are", "are", "is"]
adjectives = ["red,", "blue,", "sweet."]
for s, v, a in zip(subjects, verbs, adjectives):
print("%s %s %s" % (s, v, a))
# + [markdown] slideshow={"slide_type": "subslide"}
# ## While loop
#
# - Iterate while a condition is fulfilled
# - Make sure the condition becomes unfulfilled, else it could result in infinite loops ...
#
# + slideshow={"slide_type": "fragment"}
a, b = 175, 3650
stop = False
possible_divisor = max(a, b) // 2
while possible_divisor >= 1 and not stop:
if a % possible_divisor == 0 and b % possible_divisor == 0:
print("Found greatest common divisor: %d" % possible_divisor)
stop = True
possible_divisor = possible_divisor - 1
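# For reference, the standard library computes the same greatest common divisor directly:

```python
import math

print(math.gcd(175, 3650))  # -> 25, the same divisor the loop above finds
```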
# + slideshow={"slide_type": "fragment"}
while True:
print("I will print this forever")
# Now you are ready to interrupt the kernel!
# go to the menu and click Kernel -> Interrupt
# + [markdown] slideshow={"slide_type": "subslide"}
# ## Useful commands in loops
#
# - `continue`: go directly to the next iteration of the most inner loop
# - `break`: quit the most inner loop
# - `pass`: a block cannot be empty; ``pass`` is a command that does nothing
# - `else`: block executed after the normal exit of the loop.
#
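# The `else` clause deserves a short example, since it is easy to misread: it runs only when the loop finishes without hitting `break`:

```python
numbers = [2, 4, 6]
for n in numbers:
    if n % 2 != 0:
        print("found an odd number:", n)
        break
else:
    # reached only because no `break` occurred
    print("all numbers are even")
```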
# + slideshow={"slide_type": "fragment"}
for i in range(10):
if not i % 7 == 0:
print("%d is *not* a multiple of 7" % i)
continue
print("%d is a multiple of 7" % i)
# + slideshow={"slide_type": "subslide"}
n = 112
# divide n by 2 until this does no longer return an integer
while True:
if n % 2 != 0:
print("%d is not a multiple of 2" % n)
break
print("%d is a multiple of 2" % n)
n = n // 2
# + [markdown] slideshow={"slide_type": "slide"}
# ## Exercise: Fibonacci series
#
#
# - Fibonacci:
# - Each element is the sum of the previous two elements
# - The first two elements are 0 and 1
#
# - Calculate all elements in this series up to 1000, put them in a list, then print the list.
#
# ``[0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144, 233, 377, 610, 987]``
#
# prepend the cell with `%%timeit` to measure the performance
# + slideshow={"slide_type": "fragment"}
# %%time
# This exercise is left unsolved here; we simply print the expected result
print([0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144, 233, 377, 610, 987])
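# One possible solution, sketched here for comparison with the expected list above:

```python
fib = [0, 1]                        # the first two elements of the series
while fib[-2] + fib[-1] <= 1000:
    fib.append(fib[-2] + fib[-1])   # each element is the sum of the previous two
print(fib)
```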
# + slideshow={"slide_type": "subslide"}
import base64, timeit
code = [b'<KEY>',
b'<KEY>']
for i, cod in enumerate(code):
solution = base64.b64decode(cod).decode()
exec_time = timeit.timeit(solution)
print("Solution %i takes %.3f µs :\n %s"%(i, exec_time, solution))
# -
| sesame/1_2_python_control_structure.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# ---
# Regularized Abel Inversion
# ==========================
#
# This example demonstrates a TV-regularized Abel inversion using
# an Abel projector based on PyAbel <cite data-cite="pyabel-2022"/>
# +
import numpy as np
import scico.numpy as snp
from scico import functional, linop, loss, metric, plot
from scico.examples import create_circular_phantom
from scico.linop.abel import AbelProjector
from scico.optimize.admm import ADMM, LinearSubproblemSolver
from scico.util import device_info
plot.config_notebook_plotting()
# -
# Create a ground truth image.
N = 256 # phantom size
x_gt = create_circular_phantom((N, N), [0.4 * N, 0.2 * N, 0.1 * N], [1, 0, 0.5])
# Set up the forward operator and create a test measurement
A = AbelProjector(x_gt.shape)
y = A @ x_gt
np.random.seed(12345)
y = y + np.random.normal(size=y.shape).astype(np.float32)
ATy = A.T @ y
# Set up ADMM solver object.
# +
λ = 1.9e1 # L1 norm regularization parameter
ρ = 4.9e1 # ADMM penalty parameter
maxiter = 100 # number of ADMM iterations
cg_tol = 1e-4 # CG relative tolerance
cg_maxiter = 25 # maximum CG iterations per ADMM iteration
# Note the use of anisotropic TV. Isotropic TV would require use of L21Norm.
g = λ * functional.L1Norm()
C = linop.FiniteDifference(input_shape=x_gt.shape)
f = loss.SquaredL2Loss(y=y, A=A)
x_inv = A.inverse(y)
x0 = snp.clip(x_inv, 0, 1.0)
solver = ADMM(
f=f,
g_list=[g],
C_list=[C],
rho_list=[ρ],
x0=x0,
maxiter=maxiter,
subproblem_solver=LinearSubproblemSolver(cg_kwargs={"tol": cg_tol, "maxiter": cg_maxiter}),
itstat_options={"display": True, "period": 5},
)
# -
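# As a side note on the anisotropic TV penalty used above: it is simply $\|Cx\|_1$ with $C$ the finite-difference operator, which can be sketched in plain NumPy (independent of scico):

```python
import numpy as np

def anisotropic_tv(x):
    # sum of absolute finite differences along each axis
    return np.abs(np.diff(x, axis=0)).sum() + np.abs(np.diff(x, axis=1)).sum()

img = np.zeros((4, 4))
img[1:3, 1:3] = 1.0          # a 2x2 bright block
print(anisotropic_tv(img))   # 4 unit jumps per direction -> 8.0
```

# The isotropic variant would instead sum sqrt(dx**2 + dy**2) pointwise, which is why it needs the L21 norm rather than the L1 norm.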
# Run the solver.
print(f"Solving on {device_info()}\n")
solver.solve()
hist = solver.itstat_object.history(transpose=True)
x_tv = snp.clip(solver.x, 0, 1.0)
# Show results.
norm = plot.matplotlib.colors.Normalize(vmin=-0.1, vmax=1.2)
fig, ax = plot.subplots(nrows=2, ncols=2, figsize=(12, 12))
plot.imview(x_gt, title="Ground Truth", cmap=plot.cm.Blues, fig=fig, ax=ax[0, 0], norm=norm)
plot.imview(y, title="Measurement", cmap=plot.cm.Blues, fig=fig, ax=ax[0, 1])
plot.imview(
x_inv,
title="Inverse Abel: %.2f (dB)" % metric.psnr(x_gt, x_inv),
cmap=plot.cm.Blues,
fig=fig,
ax=ax[1, 0],
norm=norm,
)
plot.imview(
x_tv,
title="TV Regularized Inversion: %.2f (dB)" % metric.psnr(x_gt, x_tv),
cmap=plot.cm.Blues,
fig=fig,
ax=ax[1, 1],
norm=norm,
)
fig.show()
| notebooks/ct_abel_tv_admm.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
from games_setup import *
import SBMLLint.common.constants as cn
from SBMLLint.common.reaction import Reaction
from SBMLLint.common.stoichiometry_matrix import StoichiometryMatrix
from SBMLLint.games.som import SOM
from SBMLLint.games.mesgraph import MESGraph
from SBMLLint.games.games_pp import GAMES_PP, SOMStoichiometry, SOMReaction, TOLERANCE
from SBMLLint.games.games_report import GAMESReport, SimplifiedReaction
import collections
import tesbml
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
# %matplotlib inline
import time
from scipy.linalg import lu, inv
# The following models are not loadable by simple SBML
EXCEPTIONS = ["BIOMD0000000075_url.xml",
"BIOMD0000000081_url.xml",
"BIOMD0000000094_url.xml",
"BIOMD0000000353_url.xml",
"BIOMD0000000596_url.xml",
]
data_dir=cn.BIOMODELS_DIR
# we can remove EXCEPTIONS from files, as they are not loaded by simpleSBML
raw_files = [f for f in os.listdir(data_dir) if f[:7] == "BIOMD00"]
files = [f for f in raw_files if f not in EXCEPTIONS]
paths = [os.path.join(data_dir, filename) for filename in files]
data_dir
len(files)
# statistics columns
NUM_REACTIONS = "num_reactions(nonbdry)"
LP_ERROR = "lp_error"
GAMES_ERROR = "games_error"
GAMESPP_ERROR = "gamespp_error"
TYPEI_ERROR = "type1_error"
TYPEII_ERROR = "type2_error"
CANCELING_ERROR = "canceling_error"
ECHELON_ERROR = "echelon_error"
TYPEIII_ERROR = "type3_error"
result_columns = [NUM_REACTIONS,
LP_ERROR,
GAMES_ERROR,
GAMESPP_ERROR,
TYPEI_ERROR,
TYPEII_ERROR,
CANCELING_ERROR,
ECHELON_ERROR,
TYPEIII_ERROR]
## invertible matrix column?
# INVERTIBLE = "l_inverse"
results = pd.DataFrame(0, index=files, columns=result_columns)
results[:5]
simple = SimpleSBML()
simple.initialize(os.path.join(data_dir, "BIOMD0000000244_url.xml"))
s = StoichiometryMatrix(simple)
consistent = s.isConsistent()
print("consistent? ", consistent)
# LP only
simple = SimpleSBML()
count = 0
lp_start = time.time()
for file in files:
count += 1
if (count%100)==0:
print("we are analyzing Model number:", count)
try:
simple.initialize(os.path.join(data_dir, file))
s = StoichiometryMatrix(simple)
num_reactions = s.stoichiometry_matrix.shape[1]
results.at[file, NUM_REACTIONS] = num_reactions
if num_reactions:
consistent = s.isConsistent()
else:
consistent = -1
results.at[file, LP_ERROR] = 1 - int(consistent)
except:
results.at[file, LP_ERROR] = -1
lp_end = time.time()
lp_time = lp_end - lp_start
print("Analysis finished!")
print("LP time:", lp_time)
lp_results = results[results[LP_ERROR] == 1]
len(lp_results)
print("(Mean) ISS for LP is:", np.mean(lp_results[NUM_REACTIONS]))
print("(STD) ISS for LP is:", np.std(lp_results[NUM_REACTIONS]))
len(results[results[LP_ERROR]==1])
len(results[results[LP_ERROR]==-1])
# GAMES only
simple = SimpleSBML()
count = 0
games_start = time.time()
for file in files:
count += 1
if (count%100)==0:
print("we are analyzing Model number:", count)
try:
simple.initialize(os.path.join(data_dir, file))
m = GAMES_PP(simple)
if simple.reactions:
res = m.analyze(simple_games=True, error_details=False)
results.at[file, GAMES_ERROR] = int(res)
if res:
gr = GAMESReport(m)
summary = m.error_summary
if m.type_one_errors:
results.at[file, TYPEI_ERROR] = len(m.type_one_errors)
report, error_num = gr.reportTypeOneError(m.type_one_errors, explain_details=True)
if m.type_two_errors:
results.at[file, TYPEII_ERROR] = len(m.type_two_errors)
report, error_num = gr.reportTypeTwoError(m.type_two_errors, explain_details=True)
except:
results.at[file, GAMES_ERROR] = -1
games_end = time.time()
games_time = games_end - games_start
print("Analysis finished!")
print("GAMES time:", games_time)
print("number of detected errors: ", len(results[results[GAMES_ERROR]==1]))
print("number of GAMES but not in LP", len(results[(results[GAMES_ERROR]==1) & (results[LP_ERROR]!=1)]))
results[results[GAMES_ERROR]==-1]
# GAMES+
# GAMESPP_ERROR result coding:
#  0: no error found
# -1: model failed to load or analysis raised an exception
#  1: error found
#  2: echelon error found, but it is not explainable
#  3: type III error found, but it is not explainable
simple = SimpleSBML()
count = 0
gamespp_start = time.time()
for file in files:
count += 1
if (count%100)==0:
print("we are analyzing Model number:", count)
try:
simple.initialize(os.path.join(data_dir, file))
m = GAMES_PP(simple)
if simple.reactions:
res = m.analyze(simple_games=False, error_details=False)
results.at[file, GAMESPP_ERROR] = int(res)
if res:
# if m.echelon_errors or m.type_three_errors:
# try:
# #k = inv(m.lower)
# k = np.linalg.inv(m.lower)
# except:
# print("model %s has as a singular L matrix:" % file)
# condition_number = np.linalg.cond(m.lower)
# if condition_number > 300:
# print("*****The L matrix of the model %s has a condition number %f*****" % (file, condition_number))
gr = GAMESReport(m)
summary = m.error_summary
if m.type_one_errors:
results.at[file, TYPEI_ERROR] = len(m.type_one_errors)
report, error_num = gr.reportTypeOneError(m.type_one_errors, explain_details=True)
if m.type_two_errors:
results.at[file, TYPEII_ERROR] = len(m.type_two_errors)
report, error_num = gr.reportTypeTwoError(m.type_two_errors, explain_details=True)
if m.canceling_errors:
results.at[file, CANCELING_ERROR] = len(m.canceling_errors)
report, error_num = gr.reportCancelingError(m.canceling_errors, explain_details=True)
if m.echelon_errors:
#print("Model %s has an echelon error:" % file)
results.at[file, ECHELON_ERROR] = len(m.echelon_errors)
report, error_num = gr.reportEchelonError(m.echelon_errors, explain_details=True)
if report is False:
results.at[file, GAMESPP_ERROR] = 2
# print("Model %s has an inexplainable Echelon Error" % file)
# print("As the lower matrix has a condition number %f" % condition_number)
# print("Decide if the matrix is invertible")
if m.type_three_errors:
#print("Model %s has a type III error:" % file)
results.at[file, TYPEIII_ERROR] = len(m.type_three_errors)
report, error_num = gr.reportTypeThreeError(m.type_three_errors, explain_details=True)
if report is False:
results.at[file, GAMESPP_ERROR] = 3
# print("Model %s has an inexplainable Type III Error" % file)
# print("As the lower matrix has a condition number %f" % condition_number)
# print("Decide if the matrix is invertible")
except:
        results.at[file, GAMESPP_ERROR] = -1
gamespp_end = time.time()
gamespp_time = gamespp_end - gamespp_start
print("\nAnalysis finished!")
print("GAMES++ time:", gamespp_time)
print("number of detected errors: ", len(results[results[GAMESPP_ERROR]==1]))
print("number of GAMES errors not in LP", len(results[(results[GAMESPP_ERROR]==1) & (results[LP_ERROR]!=1)]))
len(results[results[GAMESPP_ERROR]==-1])
len(results[results[GAMESPP_ERROR]==2])
len(results[results[GAMESPP_ERROR]==3])
results[results[GAMESPP_ERROR]==3]
simple = load_file_from_games(574)
m = GAMES_PP(simple)
res = m.analyze(simple_games=False, error_details=True)
m.lower
np.linalg.det(m.lower)
| notebook/run_statistics.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import pandas as p
x=p.read_excel("income.xlsx")
x
a=x.loc[0,"Salary"]
if a<=250000:
tax=0
print(tax)
x=p.read_excel("income.xlsx")
x
x=p.read_excel("income.xlsx")
x
# +
a=x.loc[0,"Salary"]
b=x.loc[0,"Age"]
if a<=250000 and b<=60:
    tax=0
if a>250000 and a<=500000 and b<=60:
    tax=10/100*a
if a>500000 and a<=1000000 and b<=60:
    tax=20/100*a
if a>1000000 and b<=60:
    tax=30/100*a
if a<=300000 and b>60 and b<=80:
    tax=0
if a>300000 and a<=500000 and b>60 and b<=80:
    tax=10/100*a
if a>500000 and a<=1000000 and b>60 and b<=80:
    tax=20/100*a
if a>1000000 and b>60 and b<=80:
    tax=30/100*a
if a<=500000 and b>80:
    tax=0
if a>500000 and a<=1000000 and b>80:
    tax=20/100*a
if a>1000000 and b>80:
    tax=30/100*a
print(tax)
# -
x=p.read_excel("income.xlsx")
x
# +
a=x.loc[0,"Salary"]
b=x.loc[0,"Age"]
c=x.loc[0,"hra"]
if a<=250000 and b<=60:
    tax=0
if a>250000 and a<=500000 and b<=60:
    tax=10/100*a
if a>500000 and a<=1000000 and b<=60:
    tax=20/100*a
if a>1000000 and b<=60:
    tax=30/100*a
if a<=300000 and b>60 and b<=80:
    tax=0
if a>300000 and a<=500000 and b>60 and b<=80:
    tax=10/100*a
if a>500000 and a<=1000000 and b>60 and b<=80:
    tax=20/100*a
if a>1000000 and b>60 and b<=80:
    tax=30/100*a
if a<=500000 and b>80:
    tax=0
if a>500000 and a<=1000000 and b>80:
    tax=20/100*a
if a>1000000 and b>80:
    tax=30/100*a
if c<=10/100*a:
    hrat=0
if c>10/100*a:
    hrat=(50/100)*(c-(10/100*a))
t=hrat+tax
print(t)
# -
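# The if-chains above can be written more compactly as a slab lookup. A minimal sketch (the function name and structure are mine; the slab limits and flat-rate-on-whole-salary rules follow the cells above):

```python
def income_tax(salary, age):
    # exemption limit depends on the age band (<=60, 61-80, >80)
    exempt = 250000 if age <= 60 else (300000 if age <= 80 else 500000)
    if salary <= exempt:
        return 0
    if salary <= 500000:
        return 10 / 100 * salary
    if salary <= 1000000:
        return 20 / 100 * salary
    return 30 / 100 * salary

print(income_tax(400000, 30))   # falls in the 10% slab
print(income_tax(400000, 85))   # below the senior-citizen exemption limit
```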
| Day 17.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
# +
import numpy as np
from matplotlib import pyplot as plt
np.set_printoptions(suppress=True)
# +
from numpy import sin, cos, arctan2 as atan2, log, pi
Np = 10
Nl = 8
X = np.zeros((3, Np))
L = np.zeros((5, Nl, Np))
L[:2] = np.random.randn(2, Nl, Np)
L[2] = 1e6
L[4] = 1e6
def pf_update(X, L, l_px, r):
# Do a particle filter update: first, evaluate likelihood of every landmark for every particle
# X.shape is [3, NumParticles] (x, y, theta)
# L.shape is [5, NumLandmarks, NumParticles] (x, y, p11, p12, p22)
p_x = X[0]
p_y = X[1]
theta = X[2]
l_x = L[0]
l_y = L[1]
p11 = L[2]
p12 = L[3]
p22 = L[4]
k0 = cos(theta)
k1 = l_y - p_y
k2 = k0*k1
k3 = sin(theta)
k4 = l_x - p_x
k5 = k3*k4
k6 = k0*k4 + k1*k3
k7 = l_px - atan2(k2 - k5, k6)
k8 = -k2 + k5
k9 = k0*k6 + k3*k8
k10 = k6**2 + k8**2
k11 = k10**(-2)
k12 = k0*k8 - k3*k6
k13 = k11*k12*(k12*p11 + k9*p12) + k11*k9*(k12*p12 + k9*p22) + r
k14 = 1/k10
# also compute some handy quantities
LL = -0.5*log(4*pi**2*k13) - k7**2/k13
# get the maximum likelihood
i = np.argmax(LL, axis=0)
j = np.arange(Np)
LL = LL[i, j]
y_k = k7[i, j]
S = k13[i, j]
H1 = k12[i, j]*k14[i, j]
H2 = k14[i, j]*k9[i, j]
p11 = L[2, i, j]
p12 = L[3, i, j]
p22 = L[4, i, j]
# we should resample based on LL at this step, *then* update the EKFs!
# although actually, we end up doing exactly the same amount of work, so maybe not
# *shrug* ok, let's update EKF and then throw away some particles
k0 = 1/S
k1 = H2*p12
k2 = k0*(H1*p11 + k1)
k3 = H1*p12
k4 = H2*p22
k5 = k0*(k3 + k4)
k6 = H1*k2 - 1
L[0, i, j] += k2*y_k
L[1, i, j] += k5*y_k
L[2, i, j] = -k1*k2 - k6*p11
L[3, i, j] = -k2*k4 - k6*p12
L[4, i, j] = -k3*k5 - p22*(H2*k5 - 1)
return LL
pf_update(X, L, 0.1, 0.1)
# -
| design/coneslam/rbekf.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # For learning Data Visualization and NLP, check the following notebooks
# # [Data Visualization](https://www.kaggle.com/vanshjatana/data-visualization)
# # [NLP](https://www.kaggle.com/vanshjatana/text-classification)
# # Table of Contents
# 1. Machine Learning and Types
# 2. Application of Machine Learning
# 3. Steps of Machine Learning
# 4. Factors that help choose an algorithm
# 5. Algorithms
#     - Linear Regression
#     - TheilSen Regressor
#     - RANSAC Regressor
#     - Huber Regressor
#     - Logistic Regression
#     - Gaussian Process Classifier
#     - Support Vector Machine
#     - Nu-Support Vector Classification
#     - Naive Bayes
#     - KNN
#     - Perceptron
#     - Random Forest
#     - Decision Tree
#     - Extra Tree
#     - AdaBoost Classifier
#     - Passive Aggressive Classifier
#     - Bagging Classifier
#     - Gradient Boosting
#     - Light GBM
#     - XGBoost
#     - Catboost
#     - Stochastic Gradient Descent
#     - Lasso
#     - Ridge Classifier CV
#     - Kernel Ridge Regression
#     - Bayesian Ridge
#     - Elastic Net Regression
#     - LDA
#     - K-Means
#     - CNN
#     - LSTM
#     - PCA
#     - Apriori
#     - Prophet
#     - ARIMA
# 6. Evaluate Algorithms
#
#
#
# # Machine Learning
# **Machine learning is the science of getting computers to learn and act the way humans do, and to improve their learning over time autonomously, by feeding them data and information in the form of observations and real-world interactions.
# There are many algorithms for getting machines to learn, from basic decision trees to clustering to layers of artificial neural networks, depending on the task you are trying to accomplish and the type and amount of data available.
# **
# **There are three types of machine learning**
# 1. Supervised Machine Learning
# 2. Unsupervised Machine Learning
# 3. Reinforcement Machine Learning
# # Supervised Machine Learning
#
# **It is a type of learning in which both input and desired output data are provided. Input and output data are labeled to provide a learning basis for future data processing. These algorithms have a target/outcome variable (dependent variable) that is to be predicted from a given set of predictors (independent variables). Using this set of variables, we generate a function that maps inputs to desired outputs. The training process continues until the model achieves the desired level of accuracy on the training data.
# **
# # Unsupervised Machine Learning
#
# **Unsupervised learning is the training of an algorithm using information that is neither classified nor labeled, allowing the algorithm to act on that information without guidance. The main idea behind unsupervised learning is to expose the machines to large volumes of varied data and allow them to learn and infer from the data. However, the machines must first be programmed to learn from data.**
#
# ** Unsupervised learning problems can be further grouped into clustering and association problems.
# **
# 1. Clustering: A clustering problem is where you want to discover the inherent groupings in the data, such as grouping customers by purchasing behaviour.
# 2. Association: An association rule learning problem is where you want to discover rules that describe large portions of your data, such as people that buy X also tend to buy Y.
#
#
#
# # Reinforcement Machine Learning
# **Reinforcement learning is a type of machine learning that allows machines to automatically determine the ideal behaviour within a specific context in order to maximize performance. Simple reward feedback, known as the reinforcement signal, is required for the agent to learn its behaviour. It differs from standard supervised learning in that correct input/output pairs need not be presented, and sub-optimal actions need not be explicitly corrected. Instead the focus is on performance, which involves finding a balance between exploration of uncharted territory and exploitation of current knowledge.
# **
#
# # Application of Supervised Machine Learning
# 1. Bioinformatics
# 2. Quantitative structure
# 3. Database marketing
# 4. Handwriting recognition
# 5. Information retrieval
# 6. Learning to rank
# 7. Information extraction
# 8. Object recognition in computer vision
# 9. Optical character recognition
# 10. Spam detection
# 11. Pattern recognition
#
#
# # Application of Unsupervised Machine Learning
# 1. Human Behaviour Analysis
# 2. Social Network Analysis to define groups of friends.
# 3. Market Segmentation of companies by location, industry, vertical.
# 4. Organizing computing clusters based on similar event patterns and processes.
#
# # Application of Reinforcement Machine Learning
# 1. Resources management in computer clusters
# 2. Traffic Light Control
# 3. Robotics
# 4. Web System Configuration
# 5. Personalized Recommendations
# 6. Deep Learning
#
# # We can apply a machine learning model by following six steps:
# 1. Problem Definition
# 2. Analyse Data
# 3. Prepare Data
# 4. Evaluate Algorithm
# 5. Improve Results
# 6. Present Results
#
# # Factors that help choose an algorithm
# 1. Type of algorithm
# 2. Parametrization
# 3. Memory size
# 4. Overfitting tendency
# 5. Time of learning
# 6. Time of predicting
# # Linear Regression
# **It is a basic and commonly used type of predictive analysis. These regression estimates are used to explain the relationship between one dependent variable and one or more independent variables.
# Y = a + bX where **
# * Y – Dependent Variable
# * a – intercept
# * X – Independent variable
# * b – Slope
#
# **Example: University GPA' = (0.675)(High School GPA) + 1.097**
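# The intercept a and slope b can be estimated directly with the least-squares formulas. A minimal sketch on tiny synthetic data (the numbers are illustrative, not from the dataset below):

```python
import numpy as np

# illustrative data roughly following Y = 1 + 2*X
X = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
Y = np.array([3.1, 4.9, 7.2, 9.0, 11.1])

b = np.sum((X - X.mean()) * (Y - Y.mean())) / np.sum((X - X.mean()) ** 2)  # slope
a = Y.mean() - b * X.mean()                                                # intercept
print(a, b)  # close to the true intercept 1 and slope 2
```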
# **Library and Data **
# + _cell_guid="b1076dfc-b9ad-4769-8c92-a6c4dae69d19" _uuid="8f2839f25d086af736a60e9eeb907d3b93b6e0e5"
import numpy as np
import pandas as pd
from matplotlib import pyplot as plt
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error, r2_score
from sklearn.metrics import classification_report, confusion_matrix
train = pd.read_csv("../input/random-linear-regression/train.csv")
test = pd.read_csv("../input/random-linear-regression/test.csv")
train = train.dropna()
test = test.dropna()
train.head()
# -
# **Model with plots and accuracy**
# +
X_train = np.array(train.iloc[:, :-1].values)
y_train = np.array(train.iloc[:, 1].values)
X_test = np.array(test.iloc[:, :-1].values)
y_test = np.array(test.iloc[:, 1].values)
model = LinearRegression()
model.fit(X_train, y_train)
y_pred = model.predict(X_test)
accuracy = model.score(X_test, y_test)
plt.plot(X_train, model.predict(X_train), color='green')
plt.show()
print(accuracy)
# -
# # TheilSen Regressor
# +
from sklearn.linear_model import TheilSenRegressor
model = TheilSenRegressor()
model.fit(X_train, y_train)
y_pred = model.predict(X_test)
accuracy = model.score(X_test, y_test)
print(accuracy)
# -
# # RANSAC Regressor
# +
from sklearn.linear_model import RANSACRegressor
model = RANSACRegressor()
model.fit(X_train, y_train)
y_pred = model.predict(X_test)
accuracy = model.score(X_test, y_test)
print(accuracy)
# -
# # Huber Regressor
# +
from sklearn.linear_model import HuberRegressor
model = HuberRegressor()
model.fit(X_train, y_train)
y_pred = model.predict(X_test)
accuracy = model.score(X_test, y_test)
print(accuracy)
# -
# # Logistic Regression
# **It is a classification algorithm, used when the response variable is categorical. The idea of logistic regression is to find a relationship between the features and the probability of a particular outcome.**
# * odds = p(x)/(1-p(x)) = probability of the event occurring / probability of the event not occurring
#
# **Example- When we have to predict if a student passes or fails in an exam when the number of hours spent studying is given as a feature, the response variable has two values, pass and fail.
# **
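# The odds formula above is tied to the sigmoid function that logistic regression uses. A small sketch (the score value 1.5 is illustrative):

```python
import numpy as np

def sigmoid(z):
    # squashes any real-valued score into a probability in (0, 1)
    return 1.0 / (1.0 + np.exp(-z))

z = 1.5                       # e.g. a linear score a + b * hours_studied
p = sigmoid(z)                # probability of the event (passing)
odds = p / (1 - p)            # odds = p(x) / (1 - p(x))
print(p, odds, np.log(odds))  # the log-odds recovers the linear score z
```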
# **Libraries and data**
# +
import sklearn
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import r2_score
from statistics import mode
train = pd.read_csv("../input/titanic/train.csv")
test = pd.read_csv('../input/titanic/test.csv')
train.head()
# + _kg_hide-input=true
ports = pd.get_dummies(train.Embarked , prefix='Embarked')
train = train.join(ports)
train.drop(['Embarked'], axis=1, inplace=True)
train.Sex = train.Sex.map({'male':0, 'female':1})
y = train.Survived.copy()
X = train.drop(['Survived'], axis=1)
X.drop(['Cabin'], axis=1, inplace=True)
X.drop(['Ticket'], axis=1, inplace=True)
X.drop(['Name'], axis=1, inplace=True)
X.drop(['PassengerId'], axis=1, inplace=True)
X.Age.fillna(X.Age.median(), inplace=True)
# -
# **Model and Accuracy**
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=5)
from sklearn.linear_model import LogisticRegression
model = LogisticRegression(max_iter = 500000)
model.fit(X_train, y_train)
y_pred = model.predict(X_test)
accuracy = model.score(X_test, y_test)
print(accuracy)
# **Confusion Matrix**
print(confusion_matrix(y_test,y_pred))
# **Report**
print(classification_report(y_test,y_pred))
# # Gaussian Process Classifier
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=5)
from sklearn.gaussian_process import GaussianProcessClassifier
model = GaussianProcessClassifier()
model.fit(X_train, y_train)
y_pred = model.predict(X_test)
accuracy = model.score(X_test, y_test)
print(accuracy)
print(classification_report(y_test,y_pred))
# # Support Vector Machine
# **Support Vector Machines are perhaps one of the most popular and talked-about machine learning algorithms. SVM is primarily a classifier method that performs classification tasks by constructing hyperplanes in a multidimensional space that separate cases of different class labels. SVM supports both regression and classification tasks and can handle multiple continuous and categorical variables.
# **
#
# **Example: One class is linearly separable from the others like if we only had two features like Height and Hair length of an individual, we’d first plot these two variables in two dimensional space where each point has two co-ordinates **
# **Libraries and Data**
from sklearn.model_selection import train_test_split
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC
data_svm = pd.read_csv("../input/svm-classification/UniversalBank.csv")
data_svm.head()
# **Model and Accuracy**
X = data_svm.iloc[:,1:13].values
y = data_svm.iloc[:, -1].values
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.25, random_state = 0)
classifier = SVC(kernel = 'rbf', random_state = 0)
classifier.fit(X_train, y_train)
y_pred = classifier.predict(X_test)
accuracies = cross_val_score(estimator = classifier, X = X_train, y = y_train, cv = 10)
accuracies.mean()
print(classification_report(y_test,y_pred))
# # Nu Support Vector Classification
# **Library and Data**
from sklearn.model_selection import train_test_split
from sklearn.model_selection import cross_val_score
from sklearn.svm import NuSVC
nu_svm = pd.read_csv("../input/svm-classification/UniversalBank.csv")
nu_svm.head()
# **Model and Accuracy**
X = nu_svm.iloc[:,1:13].values
y = nu_svm.iloc[:, -1].values
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.25, random_state = 0)
classifier = NuSVC(kernel = 'rbf', random_state = 0)
classifier.fit(X_train, y_train)
y_pred = classifier.predict(X_test)
accuracies = cross_val_score(estimator = classifier, X = X_train, y = y_train, cv = 10)
accuracies.mean()
print(classification_report(y_test,y_pred))
# # Naive Bayes Algorithm
# **A naive Bayes classifier is not a single algorithm but a family of machine learning algorithms that use probability theory to classify data under an assumption of independence between predictors. It is easy to build and particularly useful for very large data sets. Along with simplicity, naive Bayes is known to outperform even highly sophisticated classification methods.
# **
#
# **Example: Emails are given and we have to find the spam emails from that.A spam filter looks at email messages for certain key words and puts them in a spam folder if they match.**
# **Libraries and Data**
from sklearn.naive_bayes import GaussianNB
from sklearn.naive_bayes import BernoulliNB
from sklearn.preprocessing import StandardScaler
from sklearn.metrics import accuracy_score
data = pd.read_csv('../input/classification-suv-dataset/Social_Network_Ads.csv')
data_nb = data
data_nb.head()
# **Model and Accuracy**
# **Gaussian NB**
X = data_nb.iloc[:, [2,3]].values
y = data_nb.iloc[:, 4].values
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.20, random_state = 0)
sc_X = StandardScaler()
X_train = sc_X.fit_transform(X_train)
X_test = sc_X.transform(X_test)
classifier=GaussianNB()
classifier.fit(X_train,y_train)
y_pred=classifier.predict(X_test)
acc=accuracy_score(y_test, y_pred)
print(acc)
print(classification_report(y_test,y_pred))
# **BernoulliNB**
X = data_nb.iloc[:, [2,3]].values
y = data_nb.iloc[:, 4].values
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.20, random_state = 0)
sc_X = StandardScaler()
X_train = sc_X.fit_transform(X_train)
X_test = sc_X.transform(X_test)
classifier=BernoulliNB()
classifier.fit(X_train,y_train)
y_pred=classifier.predict(X_test)
acc=accuracy_score(y_test, y_pred)
print(acc)
print(classification_report(y_test,y_pred))
# # KNN
# **KNN does not learn a model; it stores the entire training data set and uses it as its representation. The output is the class with the highest frequency among the K most similar instances: each instance in essence votes for its class, and the class with the most votes is taken as the prediction.
# **
#
# **Example: Should the bank give a loan to an individual? Would an individual default on his or her loan? Is that person closer in characteristics to people who defaulted or did not default on their loans? **
#
# **Libraries and Data**
# **As Classifier**
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neighbors import KNeighborsRegressor
knn = pd.read_csv("../input/iris/Iris.csv")
knn.head()
# **Model and Accuracy**
X = knn.iloc[:, [1,2,3,4]].values
y = knn.iloc[:, 5].values
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.20, random_state = 0)
sc_X = StandardScaler()
X_train = sc_X.fit_transform(X_train)
X_test = sc_X.transform(X_test)
classifier=KNeighborsClassifier(n_neighbors=5,metric='minkowski',p=2)
classifier.fit(X_train,y_train)
y_pred=classifier.predict(X_test)
acc=accuracy_score(y_test, y_pred)
print(acc)
print(classification_report(y_test,y_pred))
# **As Regression**
# **Library and Data**
from sklearn.neighbors import KNeighborsRegressor
train = pd.read_csv("../input/random-linear-regression/train.csv")
test = pd.read_csv("../input/random-linear-regression/test.csv")
train = train.dropna()
test = test.dropna()
X_train = np.array(train.iloc[:, :-1].values)
y_train = np.array(train.iloc[:, 1].values)
X_test = np.array(test.iloc[:, :-1].values)
y_test = np.array(test.iloc[:, 1].values)
# **Model and Accuracy**
model = KNeighborsRegressor(n_neighbors=2)
model.fit(X_train, y_train)
y_pred = model.predict(X_test)
accuracy = model.score(X_test, y_test)
print(accuracy)
# # Perceptron
# **It is a single-layer neural network used for classification.**
from sklearn.linear_model import Perceptron
from sklearn.neighbors import KNeighborsClassifier
p = pd.read_csv("../input/iris/Iris.csv")
p.head()
X = p.iloc[:, [1,2,3,4]].values
y = p.iloc[:, 5].values
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.20, random_state = 0)
sc_X = StandardScaler()
X_train = sc_X.fit_transform(X_train)
X_test = sc_X.transform(X_test)
classifier=Perceptron()
classifier.fit(X_train,y_train)
y_pred=classifier.predict(X_test)
acc=accuracy_score(y_test, y_pred)
print(acc)
print(classification_report(y_test,y_pred))
# # Random Forest
# **Random forest is a collection of trees (a forest); it builds multiple decision trees and merges them together to get a more accurate and stable prediction. It can be used for both classification and regression problems.**
#
# **Example: Suppose we have a bowl of 100 unique numbers from 0 to 99. We want to select a random sample of numbers from the bowl. If we put the number back in the bowl, it may be selected more than once.
# **
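# The bowl example above is sampling with replacement (bootstrapping), which is how each tree in the forest gets its own training subset. A minimal sketch (the seed and sizes are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
bowl = np.arange(100)                              # 100 unique numbers, 0..99
sample = rng.choice(bowl, size=100, replace=True)  # draw 100 with replacement

# some numbers are drawn more than once, so fewer than 100 distinct values appear
print(len(sample), len(np.unique(sample)))
```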
# **Libraries and Data**
from sklearn.ensemble import RandomForestClassifier
rf = pd.read_csv("../input/mushroom-classification/mushrooms.csv")
rf.head()
# **Model and Accuracy**
X = rf.drop('class', axis=1)
y = rf['class']
X = pd.get_dummies(X)
y = pd.get_dummies(y)
X_train, X_test, y_train, y_test = train_test_split(X, y)
model = RandomForestClassifier(n_estimators=100, max_depth=10, random_state=1)
model.fit(X_train, y_train)
model.score(X_test, y_test)
# # Decision Tree
# **The decision tree algorithm is a supervised classification algorithm that is simple to understand and apply. The idea of a decision tree is to split the big data set (the root) into smaller subsets (the leaves).**
from sklearn.tree import DecisionTreeClassifier
dt = data
dt.head()
X = dt.iloc[:, [2,3]].values
y = dt.iloc[:, 4].values
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.20, random_state = 0)
sc_X = StandardScaler()
X_train = sc_X.fit_transform(X_train)
X_test = sc_X.transform(X_test)
classifier=DecisionTreeClassifier(criterion="entropy",random_state=0)
classifier.fit(X_train,y_train)
y_pred=classifier.predict(X_test)
acc=accuracy_score(y_test, y_pred)
print(acc)
# # Extra Tree
# **Library and Data**
from sklearn.ensemble import ExtraTreesClassifier
et = data
et.head()
# **Model and Accuracy**
X = et.iloc[:, [2,3]].values
y = et.iloc[:, 4].values
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.20, random_state = 0)
sc_X = StandardScaler()
X_train = sc_X.fit_transform(X_train)
X_test = sc_X.transform(X_test)
classifier=ExtraTreesClassifier(criterion="entropy",random_state=0)
classifier.fit(X_train,y_train)
y_pred=classifier.predict(X_test)
acc=accuracy_score(y_test, y_pred)
print(acc)
# # AdaBoost Classifier
# **Library and Data**
from sklearn.ensemble import AdaBoostClassifier
ac = data
ac.head()
# **Model and Accuracy**
X = ac.iloc[:, [2,3]].values
y = ac.iloc[:, 4].values
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.20, random_state = 0)
sc_X = StandardScaler()
X_train = sc_X.fit_transform(X_train)
X_test = sc_X.transform(X_test)
classifier=AdaBoostClassifier(random_state=0)
classifier.fit(X_train,y_train)
y_pred=classifier.predict(X_test)
acc=accuracy_score(y_test, y_pred)
print(acc)
# # Passive Aggressive Classifier
# **Library and Data**
from sklearn.linear_model import PassiveAggressiveClassifier
pac = data
pac.head()
# **Model and Accuracy**
X = pac.iloc[:, [2,3]].values
y = pac.iloc[:, 4].values
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.20, random_state = 0)
sc_X = StandardScaler()
X_train = sc_X.fit_transform(X_train)
X_test = sc_X.transform(X_test)
classifier=PassiveAggressiveClassifier(random_state=0)
classifier.fit(X_train,y_train)
y_pred=classifier.predict(X_test)
acc=accuracy_score(y_test, y_pred)
print(acc)
# # Bagging Classifier
# **Library and Data**
from sklearn.ensemble import BaggingClassifier
bc = data
bc.head()
# **Model and Accuracy**
X = bc.iloc[:, [2,3]].values
y = bc.iloc[:, 4].values
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.20, random_state = 0)
sc_X = StandardScaler()
X_train = sc_X.fit_transform(X_train)
X_test = sc_X.transform(X_test)
classifier=BaggingClassifier(random_state=0)
classifier.fit(X_train,y_train)
y_pred=classifier.predict(X_test)
acc=accuracy_score(y_test, y_pred)
print(acc)
# # Gradient Boosting
# **Gradient boosting is a supervised machine learning algorithm; boosting means converting weak learners into strong ones. Each new tree is boosted over (fit to correct the errors of) the previous trees.**
# **Libraries and Data**
from sklearn.ensemble import GradientBoostingClassifier
gb = data
gb.head()
# **Model and Accuracy**
X = gb.iloc[:, [2,3]].values
y = gb.iloc[:, 4].values
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.20, random_state = 0)
sc_X = StandardScaler()
X_train = sc_X.fit_transform(X_train)
X_test = sc_X.transform(X_test)
gbk = GradientBoostingClassifier()
gbk.fit(X_train, y_train)
pred = gbk.predict(X_test)
acc=accuracy_score(y_test, pred)
print(acc)
# # Light GBM
# **LightGBM is a gradient boosting framework that uses tree based learning algorithms. It is designed to be distributed and efficient with the following advantages:**
#
# 1. Faster training speed and higher efficiency.
# 2. Lower memory usage.
# 3. Better accuracy.
# 4. Support of parallel and GPU learning.
# 5. Capable of handling large-scale data.
# **Library and Data**
# +
import lightgbm as lgbm
import lightgbm as lgb
import pandas as pd
from sklearn.model_selection import KFold, GridSearchCV
from sklearn import preprocessing
train = pd.read_csv("../input/house-prices-advanced-regression-techniques/train.csv")
test = pd.read_csv("../input/house-prices-advanced-regression-techniques/test.csv")
data = pd.concat([train, test], sort=False)
data = data.reset_index(drop=True)
data.head()
# -
# **Preprocessing**
# + _kg_hide-input=true _kg_hide-output=true
nans=pd.isnull(data).sum()
data['MSZoning'] = data['MSZoning'].fillna(data['MSZoning'].mode()[0])
data['Utilities'] = data['Utilities'].fillna(data['Utilities'].mode()[0])
data['Exterior1st'] = data['Exterior1st'].fillna(data['Exterior1st'].mode()[0])
data['Exterior2nd'] = data['Exterior2nd'].fillna(data['Exterior2nd'].mode()[0])
data["BsmtFinSF1"] = data["BsmtFinSF1"].fillna(0)
data["BsmtFinSF2"] = data["BsmtFinSF2"].fillna(0)
data["BsmtUnfSF"] = data["BsmtUnfSF"].fillna(0)
data["TotalBsmtSF"] = data["TotalBsmtSF"].fillna(0)
data["BsmtFullBath"] = data["BsmtFullBath"].fillna(0)
data["BsmtHalfBath"] = data["BsmtHalfBath"].fillna(0)
data["BsmtQual"] = data["BsmtQual"].fillna("None")
data["BsmtCond"] = data["BsmtCond"].fillna("None")
data["BsmtExposure"] = data["BsmtExposure"].fillna("None")
data["BsmtFinType1"] = data["BsmtFinType1"].fillna("None")
data["BsmtFinType2"] = data["BsmtFinType2"].fillna("None")
data['KitchenQual'] = data['KitchenQual'].fillna(data['KitchenQual'].mode()[0])
data["Functional"] = data["Functional"].fillna("Typ")
data["FireplaceQu"] = data["FireplaceQu"].fillna("None")
data["GarageType"] = data["GarageType"].fillna("None")
data["GarageYrBlt"] = data["GarageYrBlt"].fillna(0)
data["GarageFinish"] = data["GarageFinish"].fillna("None")
data["GarageCars"] = data["GarageCars"].fillna(0)
data["GarageArea"] = data["GarageArea"].fillna(0)
data["GarageQual"] = data["GarageQual"].fillna("None")
data["GarageCond"] = data["GarageCond"].fillna("None")
data["PoolQC"] = data["PoolQC"].fillna("None")
data["Fence"] = data["Fence"].fillna("None")
data["MiscFeature"] = data["MiscFeature"].fillna("None")
data['SaleType'] = data['SaleType'].fillna(data['SaleType'].mode()[0])
data['LotFrontage'].interpolate(method='linear',inplace=True)
data["Electrical"] = data.groupby("YearBuilt")['Electrical'].transform(lambda x: x.fillna(x.mode()[0]))
data["Alley"] = data["Alley"].fillna("None")
data["MasVnrType"] = data["MasVnrType"].fillna("None")
data["MasVnrArea"] = data["MasVnrArea"].fillna(0)
nans=pd.isnull(data).sum()
nans[nans>0]
# +
_list = []
for col in data.columns:
if type(data[col][0]) == type('str'):
_list.append(col)
le = preprocessing.LabelEncoder()
for li in _list:
le.fit(list(set(data[li])))
data[li] = le.transform(data[li])
train, test = data[:len(train)], data[len(train):]
X = train.drop(columns=['SalePrice', 'Id'])
y = train['SalePrice']
test = test.drop(columns=['SalePrice', 'Id'])
# -
# **Model and Accuracy**
# +
kfold = KFold(n_splits=5, random_state = 2020, shuffle = True)
model_lgb = lgb.LGBMRegressor(objective='regression',num_leaves=5,
learning_rate=0.05, n_estimators=720,
max_bin = 55, bagging_fraction = 0.8,
bagging_freq = 5, feature_fraction = 0.2319,
feature_fraction_seed=9, bagging_seed=9,
min_data_in_leaf =6, min_sum_hessian_in_leaf = 11)
model_lgb.fit(X, y)
r2_score(model_lgb.predict(X), y)
# -
# # **XGBoost**
# **XGBoost is a decision-tree-based ensemble machine learning algorithm that uses a gradient boosting framework. While artificial neural networks tend to outperform other algorithms on unstructured data (images, text, etc.), tree-based methods like XGBoost do well on structured, tabular data. It is a combination of software and hardware optimization techniques that yields superior results using fewer computing resources in less time.**
# **Library and Data**
import xgboost as xgb
#Data is used the same as LGB
X = train.drop(columns=['SalePrice', 'Id'])
y = train['SalePrice']
X.head()
# **Model and Accuracy**
model_xgb = xgb.XGBRegressor(colsample_bytree=0.4603, gamma=0.0468,
learning_rate=0.05, max_depth=3,
min_child_weight=1.7817, n_estimators=2200,
reg_alpha=0.4640, reg_lambda=0.8571,
subsample=0.5213, silent=1,
random_state =7, nthread = -1)
model_xgb.fit(X, y)
r2_score(y, model_xgb.predict(X))
# # Catboost
# **CatBoost is a gradient boosting algorithm that can handle categorical variables automatically, without raising type-conversion errors, which lets you focus on tuning your model rather than sorting out trivial errors. Make sure you handle missing data well before you proceed with the implementation.**
# **Library and Data**
from catboost import CatBoostRegressor
#Data is used the same as LGB
X = train.drop(columns=['SalePrice', 'Id'])
y = train['SalePrice']
X.head()
# **Model and Accuracy**
cb_model = CatBoostRegressor(iterations=500,
learning_rate=0.05,
depth=10,
random_seed = 42,
bagging_temperature = 0.2,
od_type='Iter',
metric_period = 50,
od_wait=20)
cb_model.fit(X, y)
r2_score(y, cb_model.predict(X))
# # Stochastic Gradient Descent
# **Stochastic means random: in Stochastic Gradient Descent, a random sample of the dataset is used for each update instead of the whole dataset. Using the whole dataset drives you toward the minimum in a less noisy, less random manner, but it becomes a problem when the dataset gets really huge, and that is where SGD comes into action.**
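# As an illustrative sketch of the idea (not the scikit-learn implementation used below), one epoch of SGD for linear least squares updates the weights one randomly chosen sample at a time. The `sgd_linear_regression` helper, the learning rate, and the toy data are all assumptions made for the example.

```python
import numpy as np

def sgd_linear_regression(X, y, lr=0.1, epochs=200, seed=0):
    """Minimal stochastic gradient descent for least-squares regression."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w, b = np.zeros(d), 0.0
    for _ in range(epochs):
        for i in rng.permutation(n):       # visit samples in random order
            err = X[i] @ w + b - y[i]      # error on a single sample
            w -= lr * err * X[i]           # gradient step from that sample only
            b -= lr * err
    return w, b

# tiny demo: recover y = 2x + 1 from noiseless data
X_demo = np.linspace(0.0, 1.0, 50).reshape(-1, 1)
y_demo = 2.0 * X_demo[:, 0] + 1.0
w, b = sgd_linear_regression(X_demo, y_demo)
```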
# **Library and Data**
from sklearn.linear_model import SGDRegressor
#Data is used the same as LGB
X = train.drop(columns=['SalePrice', 'Id'])
y = train['SalePrice']
X.head()
# **Model and Accuracy**
SGD = SGDRegressor(max_iter = 100)
SGD.fit(X, y)
r2_score(y, SGD.predict(X))
# # Lasso
# **In statistics and machine learning, lasso (least absolute shrinkage and selection operator; also Lasso or LASSO) is a regression analysis method that performs both variable selection and regularization in order to enhance the prediction accuracy and interpretability of the statistical model it produces. Though originally defined for least squares, lasso regularization is easily extended to a wide variety of statistical models including generalized linear models, generalized estimating equations, proportional hazards models, and M-estimators, in a straightforward fashion**
# **Library and Data**
from sklearn.linear_model import ElasticNet, Lasso, BayesianRidge, LassoLarsIC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import RobustScaler
#Data is used the same as LGB
X = train.drop(columns=['SalePrice', 'Id'])
y = train['SalePrice']
X.head()
# **Model and Accuracy**
lasso = make_pipeline(RobustScaler(), Lasso(alpha =0.0005, random_state=1))
lasso.fit(X, y)
r2_score(y, lasso.predict(X))
# # Ridge Classifier CV
# **RidgeClassifierCV is ridge regression recast as a classifier, with built-in cross-validation over the regularization strength. Note that it treats every distinct target value as a separate class, so it is an unusual choice for a continuous target such as SalePrice.**
# **Library and Data**
from sklearn.linear_model import RidgeClassifierCV
#Data is used the same as LGB
X = train.drop(columns=['SalePrice', 'Id'])
y = train['SalePrice']
X.head()
# **Model and Accuracy**
rcc = RidgeClassifierCV()
rcc.fit(X, y)
r2_score(y, rcc.predict(X))
# # Kernel Ridge Regression
# **Kernel Ridge Regression (KRR) combines ridge regression and classification with the kernel trick. It is similar to Support Vector Regression but relatively fast to fit, and it is best suited to smaller datasets (fewer than 100 samples).**
# **Library and Data**
from sklearn.kernel_ridge import KernelRidge
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import RobustScaler
#Data is used the same as LGB
X = train.drop(columns=['SalePrice', 'Id'])
y = train['SalePrice']
X.head()
# **Model and Accuracy**
KRR = KernelRidge(alpha=0.6, kernel='polynomial', degree=2, coef0=2.5)
KRR.fit(X, y)
r2_score(y, KRR.predict(X))
# # BayesianRidge
# **Bayesian regression is a regression model defined in probabilistic terms, with explicit priors on the parameters. The choice of priors can have a regularizing effect. The Bayesian approach is a general way of defining and estimating statistical models that can be applied to many different model families.**
# **Library and Data**
from sklearn.linear_model import BayesianRidge
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import RobustScaler
#Data is used the same as LGB
X = train.drop(columns=['SalePrice', 'Id'])
y = train['SalePrice']
X.head()
# **Model and Accuracy**
BR = BayesianRidge()
BR.fit(X, y)
r2_score(y, BR.predict(X))
# # Elastic Net Regression
#
# **Elastic net is a hybrid of ridge regression and lasso regularization. It combines feature elimination from the lasso and coefficient shrinkage from the ridge model to improve your model's predictions.**
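# The hybrid penalty can be written out explicitly; the parametrization below follows scikit-learn's `alpha` / `l1_ratio` convention used in the code that follows:

```latex
\min_{w}\; \frac{1}{2n} \lVert y - Xw \rVert_2^2
  \;+\; \alpha \, \rho \, \lVert w \rVert_1
  \;+\; \frac{\alpha \,(1 - \rho)}{2} \, \lVert w \rVert_2^2
```

# where $\rho$ is the `l1_ratio`: $\rho = 1$ recovers the lasso and $\rho = 0$ pure ridge regression.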
# **Library and Data**
from sklearn.linear_model import ElasticNet, Lasso, BayesianRidge, LassoLarsIC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import RobustScaler
#Data is used the same as LGB
X = train.drop(columns=['SalePrice', 'Id'])
y = train['SalePrice']
X.head()
# **Model and Accuracy**
ENet = make_pipeline(RobustScaler(), ElasticNet(alpha=0.0005, l1_ratio=.9, random_state=3))
ENet.fit(X, y)
r2_score(y, ENet.predict(X))
# # **LDA**
# **A classifier with a linear decision boundary, generated by fitting class conditional densities to the data and using Bayes’ rule. The model fits a Gaussian density to each class, assuming that all classes share the same covariance matrix. It is used in statistics, pattern recognition, and machine learning to find a linear combination of features that characterizes or separates two or more classes of objects or events. The resulting combination may be used as a linear classifier, or, more commonly, for dimensionality reduction before later classification.**
# **Library and Data**
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
lda = data
lda.head()
# **Model and Accuracy**
X = lda.iloc[:, [2,3]].values
y = lda.iloc[:, 4].values
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.20, random_state = 0)
sc_X = StandardScaler()
X_train = sc_X.fit_transform(X_train)
X_test = sc_X.transform(X_test)
Model=LinearDiscriminantAnalysis()
Model.fit(X_train,y_train)
y_pred=Model.predict(X_test)
print('accuracy is ',accuracy_score(y_pred,y_test))
# # K-Means Algorithm
# K-means clustering is a type of unsupervised learning, used when you have unlabeled data; the goal of the algorithm is to find groups in the data.
#
# **Steps to use this algorithm:**
# * 1. Choose k, the (predefined) number of clusters to group the data into.
# * 2. Select k points at random as the initial cluster centers.
# * 3. Assign each object to its closest cluster center according to the Euclidean distance function.
# * 4. Recalculate each center as the centroid (mean) of all objects in its cluster, and repeat steps 3–4 until the assignments stop changing.
#
# **Examples: behavioral segmentation, such as segmenting customers by purchase history or by activity on an application, website, or platform; separating valid activity groups from bots.**
#
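# The four steps above can be written out directly in NumPy. This is an illustrative toy version of Lloyd's algorithm (the `kmeans` helper and the toy blobs are assumptions for the example), not the scikit-learn implementation used below:

```python
import numpy as np

def kmeans(X, k, iters=20, init=None, seed=0):
    rng = np.random.default_rng(seed)
    # step 2: pick k points at random as the initial cluster centers
    centers = X[rng.choice(len(X), size=k, replace=False)] if init is None else init
    for _ in range(iters):
        # step 3: assign each object to its closest center (Euclidean distance)
        dists = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # step 4: recompute each center as the mean of its assigned objects
        # (a production version would also guard against empty clusters)
        centers = np.array([X[labels == j].mean(axis=0) for j in range(k)])
    return labels, centers

# two well-separated toy blobs; seed the centers with one point from each
X_demo = np.vstack([np.zeros((10, 2)), np.full((10, 2), 5.0)])
labels, centers = kmeans(X_demo, k=2, init=X_demo[[0, 10]])
```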
# **Libraries and Data**
from sklearn.cluster import KMeans
km = pd.read_csv("../input/k-mean/km.csv")
km.head()
# **Checking for number of clusters**
K_clusters = range(1,8)
kmeans = [KMeans(n_clusters=i) for i in K_clusters]
Y_axis = km[['latitude']]
X_axis = km[['longitude']]
score = [kmeans[i].fit(Y_axis).score(Y_axis) for i in range(len(kmeans))]
plt.plot(K_clusters, score)
plt.xlabel('Number of Clusters')
plt.ylabel('Score')
plt.show()
# **Fitting Model**
kmeans = KMeans(n_clusters = 3, init ='k-means++')
kmeans.fit(km[km.columns[1:3]])
km['cluster_label'] = kmeans.fit_predict(km[km.columns[1:3]])
centers = kmeans.cluster_centers_
labels = kmeans.predict(km[km.columns[1:3]])
km.cluster_label.unique()
# **Plotting Clusters**
km.plot.scatter(x = 'latitude', y = 'longitude', c=labels, s=50, cmap='viridis')
plt.scatter(centers[:, 0], centers[:, 1], c='black', s=100, alpha=0.5)
# # CNN
# **Library and Data**
from sklearn.model_selection import train_test_split
from tensorflow.keras.utils import to_categorical
from keras.models import Sequential
from keras.layers import Dense, Dropout, Flatten, Conv2D, MaxPool2D
import tensorflow as tf
train_data = pd.read_csv("../input/digit-recognizer/train.csv")
test_data = pd.read_csv("../input/digit-recognizer/test.csv")
train_data.head()
# **Preprocessing and Data Split**
# +
X = np.array(train_data.drop("label", axis=1)).astype('float32')
y = np.array(train_data['label']).astype('float32')
for i in range(9):
plt.subplot(3,3,i+1)
plt.xticks([])
plt.yticks([])
plt.grid(False)
plt.imshow(X[i].reshape(28, 28), cmap=plt.cm.binary)
plt.xlabel(y[i])
plt.show()
X = X / 255.0
X = X.reshape(-1, 28, 28, 1)
y = to_categorical(y)
X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.2)
X_test = np.array(test_data).astype('float32')
X_test = X_test / 255.0
X_test = X_test.reshape(-1, 28, 28, 1)
plt.figure(figsize=(10,10))
# -
# **Model**
model = Sequential()
model.add(Conv2D(filters = 32, kernel_size = (5,5),padding = 'Same',
activation ='relu', input_shape = (28,28,1)))
model.add(Conv2D(filters = 32, kernel_size = (5,5),padding = 'Same',
activation ='relu'))
model.add(MaxPool2D(pool_size=(2,2)))
model.add(Dropout(0.25))
model.add(Conv2D(filters = 64, kernel_size = (3,3),padding = 'Same',
activation ='relu'))
model.add(Conv2D(filters = 64, kernel_size = (3,3),padding = 'Same',
activation ='relu'))
model.add(MaxPool2D(pool_size=(2,2), strides=(2,2)))
model.add(Dropout(0.25))
model.add(Flatten())
model.add(Dense(256, activation = "relu"))
model.add(Dropout(0.5))
model.add(Dense(10, activation = "softmax"))
model.summary()
from tensorflow.keras.utils import plot_model
plot_model(model, to_file='model1.png')
# **Compiling model**
# increase epochs to 30 for better accuracy
model.compile(optimizer='adam', loss="categorical_crossentropy", metrics=["accuracy"])
history = model.fit(X_train, y_train, epochs=10, batch_size=85, validation_data=(X_val, y_val))
# +
accuracy = history.history['accuracy']
val_accuracy = history.history['val_accuracy']
epochs = range(len(accuracy))
plt.plot(epochs, accuracy, 'bo', label='Training accuracy')
plt.plot(epochs, val_accuracy, 'b', label='Validation accuracy')
plt.show()
print(model.evaluate(X_val, y_val))
# + _kg_hide-input=true
prediction = np.argmax(model.predict(X_test), axis=1)  # predict_classes was removed in newer Keras
submit = pd.DataFrame(prediction,columns=["Label"])
submit["ImageId"] = pd.Series(range(1,(len(prediction)+1)))
submission = submit[["ImageId","Label"]]
submission.to_csv("submission.csv",index=False)
# -
# # LSTM
# **LSTM blocks are part of a recurrent neural network (RNN) architecture. Recurrent neural networks use a form of artificial memory that helps them model sequences more effectively, and LSTMs in particular are capable of learning order dependence.
# LSTMs can be used for machine translation, speech recognition, and more.**
# **Library and Data**
import math
from sklearn.preprocessing import MinMaxScaler
from keras.models import Sequential
from keras.layers import Dense, LSTM
lstm = pd.read_csv("../input/nyse/prices.csv")
lstm = lstm[lstm['symbol']=="NFLX"]
lstm['date'] = pd.to_datetime(lstm['date'])
lstm.set_index('date',inplace=True)
lstm = lstm.reset_index()
lstm.head()
# **Preprocessing**
data = lstm.filter(['close'])
dataset = data.values
training_data_len = math.ceil(len(dataset)*.75)
scaler = MinMaxScaler(feature_range=(0,1))
scaled_data = scaler.fit_transform(dataset)
train_data = scaled_data[0:training_data_len, :]
x_train = []
y_train = []
for i in range(60,len(train_data)):
x_train.append(train_data[i-60:i, 0])
y_train.append(train_data[i,0])
x_train,y_train = np.array(x_train), np.array(y_train)
x_train = np.reshape(x_train,(x_train.shape[0],x_train.shape[1],1))
# **Model**
model =Sequential()
model.add(LSTM(64,return_sequences=True, input_shape=(x_train.shape[1],1)))
model.add(LSTM(64, return_sequences= False))
model.add(Dense(32))
model.add(Dense(1))
model.summary()
from tensorflow.keras.utils import plot_model
plot_model(model, to_file='model1.png')
# **Compiling Model**
# + _kg_hide-output=true
model.compile(optimizer='adam', loss='mean_squared_error')
model.fit(x_train,y_train, batch_size=85, epochs=20)
# -
# **Prediction and Accuracy**
test_data= scaled_data[training_data_len-60:, :]
x_test = []
y_test = dataset[training_data_len:,:]
for i in range(60,len(test_data)):
x_test.append(test_data[i-60:i,0])
x_test = np.array(x_test)
x_test = np.reshape(x_test, (x_test.shape[0], x_test.shape[1],1))
predictions = model.predict(x_test)
predictions = scaler.inverse_transform(predictions)
rmse = np.sqrt(np.mean((predictions - y_test)**2))
rmse
# # Principle Component Analysis
# **PCA is an important method for dimension reduction. It extracts a low-dimensional set of features from a high-dimensional dataset, aiming to capture as much information as possible and to visualise high-dimensional data; it also reduces noise and, finally, helps other algorithms work better because we are injecting fewer inputs.**
# * Example: When we have to bring out strong patterns in a data set or to make data easy to explore and visualize
# +
from sklearn.datasets import make_blobs
from sklearn import datasets
class PCA:
def __init__(self, n_components):
self.n_components = n_components
self.components = None
self.mean = None
def fit(self, X):
self.mean = np.mean(X, axis=0)
X = X - self.mean
cov = np.cov(X.T)
evalue, evector = np.linalg.eig(cov)
        # eig returns eigenvectors as columns; transpose so each row is an eigenvector
        eigenvectors = evector.T
        # sort eigenvectors by decreasing eigenvalue
        idxs = np.argsort(evalue)[::-1]
        evalue = evalue[idxs]
        eigenvectors = eigenvectors[idxs]
        self.components = eigenvectors[0:self.n_components]
def transform(self, X):
#project data
X = X - self.mean
return(np.dot(X, self.components.T))
data = datasets.load_iris()
X = data.data
y = data.target
pca = PCA(2)
pca.fit(X)
X_projected = pca.transform(X)
x1 = X_projected[:,0]
x2 = X_projected[:,1]
plt.scatter(x1,x2,c=y,edgecolor='none',alpha=0.8,cmap=plt.cm.get_cmap('viridis',3))
plt.xlabel('Principal Component 1')
plt.ylabel('Principal Component 2')
plt.colorbar()
plt.show()
# -
# # Apriori
# **Apriori is an association-rule mining algorithm that operates on database records, particularly transactional records, or records including certain numbers of fields or items. It is mainly used for finding frequent itemsets in large amounts of data; the frequent itemsets then give rise to association rules.**
# * Example: To analyse data for frequent if/then patterns and using the criteria support and confidence to identify the most important relationships.
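# Support and confidence — the two criteria mentioned above — are simple counts over the transactions. Here is a minimal sketch (these helper functions and the toy transactions are assumptions for illustration; the actual mining below uses mlxtend):

```python
def support(transactions, itemset):
    """Fraction of transactions containing every item of the itemset."""
    itemset = set(itemset)
    return sum(itemset <= set(t) for t in transactions) / len(transactions)

def confidence(transactions, antecedent, consequent):
    """Support of the full rule divided by support of the antecedent."""
    return (support(transactions, set(antecedent) | set(consequent))
            / support(transactions, antecedent))

# toy transactions
T = [["milk", "bread"], ["milk", "eggs"], ["bread", "eggs"], ["milk", "bread", "eggs"]]
support(T, ["milk", "bread"])       # 2 of 4 transactions contain both items
confidence(T, ["milk"], ["bread"])  # fraction of milk buyers who also buy bread
```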
df = pd.read_csv('../input/supermarket/GroceryStoreDataSet.csv',names=['products'],header=None)
data = list(df["products"].apply(lambda x:x.split(',')))
data
from mlxtend.frequent_patterns import apriori
from mlxtend.preprocessing import TransactionEncoder
te = TransactionEncoder()
te_data = te.fit(data).transform(data)
df = pd.DataFrame(te_data,columns=te.columns_)
df1 = apriori(df,min_support=0.01,use_colnames=True)
df1.head()
# # Prophet
#
# Prophet is an extremely easy tool for analysts to produce reliable forecasts.
# 1. Prophet only takes data as a dataframe with a ds (datestamp) and y (the value we want to forecast) column, so first let’s convert the dataframe to the appropriate format.
# 2. Create an instance of the Prophet class and then fit our dataframe to it.
# 3. Create a dataframe with the dates for which we want a prediction to be made with make_future_dataframe(), then specify the number of days to forecast using the periods parameter.
# 4. Call predict to make a prediction and store it in the forecast dataframe. What’s neat here is that you can inspect the dataframe and see the predictions as well as the lower and upper boundaries of the uncertainty interval.
#
# **Library and Data**
# +
import plotly.offline as py
import plotly.express as px
from fbprophet import Prophet
from fbprophet.plot import plot_plotly, add_changepoints_to_plot
pred = pd.read_csv("../input/coronavirus-2019ncov/covid-19-all.csv")
pred = pred.fillna(0)
predgrp = pred.groupby("Date")[["Confirmed","Recovered","Deaths"]].sum().reset_index()
pred_cnfrm = predgrp.loc[:,["Date","Confirmed"]]
pr_data = pred_cnfrm
pr_data.columns = ['ds','y']
pr_data.head()
# -
# **Model and Forecast**
m=Prophet()
m.fit(pr_data)
future=m.make_future_dataframe(periods=15)
forecast=m.predict(future)
forecast
# +
fig = plot_plotly(m, forecast)
py.iplot(fig)
fig = m.plot(forecast,xlabel='Date',ylabel='Confirmed Count')
# -
# # Arima
# **Library and Data**
import datetime
from statsmodels.tsa.arima_model import ARIMA
ar = pd.read_csv("../input/competitive-data-science-predict-future-sales/sales_train.csv")
ar.date=ar.date.apply(lambda x:datetime.datetime.strptime(x, '%d.%m.%Y'))
ar=ar.groupby(["date_block_num"])["item_cnt_day"].sum()
ar.index=pd.date_range(start = '2013-01-01',end='2015-10-01', freq = 'MS')
ar=ar.reset_index()
ar=ar.loc[:,["index","item_cnt_day"]]
ar.columns = ['confirmed_date','count']
ar.head()
# **Model**
# +
model = ARIMA(ar['count'].values, order=(1, 2, 1))
fit_model = model.fit(trend='c', full_output=True, disp=True)
fit_model.summary()
# -
# **Prediction**
fit_model.plot_predict()
plt.title('Forecast vs Actual')
pd.DataFrame(fit_model.resid).plot()
forcast = fit_model.forecast(steps=6)
pred_y = forcast[0].tolist()
pred = pd.DataFrame(pred_y)
# # **Evaluate Algorithms**
# **The evaluation of an algorithm consists of the following three steps:**
# 1. Test Harness
# 2. Explore and select algorithms
# 3. Interpret and report results
#
#
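# A minimal test harness for step 1 might compare candidate models on identical cross-validation folds. The model list, the synthetic dataset, and the scoring choice here are assumptions for illustration:

```python
from sklearn.datasets import make_regression
from sklearn.linear_model import Lasso, Ridge
from sklearn.model_selection import KFold, cross_val_score

# synthetic regression data standing in for a real feature matrix
X_demo, y_demo = make_regression(n_samples=200, n_features=10, noise=10.0, random_state=0)
cv = KFold(n_splits=5, shuffle=True, random_state=2020)
results = {}
for name, model in [("Ridge", Ridge()), ("Lasso", Lasso(alpha=0.1))]:
    scores = cross_val_score(model, X_demo, y_demo, cv=cv, scoring="r2")
    results[name] = scores.mean()
    print(f"{name}: mean R2 = {scores.mean():.3f} (+/- {scores.std():.3f})")
```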
# # If you like this notebook, do hit upvote
# # Thanks
#
#
| dataset_0/notebook/applied-machine-learning.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + colab={} colab_type="code" id="3vhAMaIOBIee"
import tensorflow as tf
AUTOTUNE = tf.data.experimental.AUTOTUNE
import IPython.display as display
from PIL import Image
import numpy as np
import matplotlib.pyplot as plt
import os
from tensorflow.keras import datasets, layers, models
# + [markdown] colab_type="text" id="wO0InzL66URu"
# ### Retrieve the images
# + colab={} colab_type="code" id="rN-Pc6Zd6awg"
import pathlib
train_data_dir = 'data/patch_train_b_350x350_smoothed'
train_data_dir = pathlib.Path(train_data_dir)
test_data_dir = 'data/patch_test_b_350x350_smoothed'
test_data_dir = pathlib.Path(test_data_dir)
# + colab={} colab_type="code" id="QhewYCxhXQBX"
train_image_count = len(list(train_data_dir.glob('*/*.png')))
train_image_count
# + colab={} colab_type="code" id="QhewYCxhXQBX"
test_image_count = len(list(test_data_dir.glob('*/*.png')))
test_image_count
# + colab={} colab_type="code" id="sJ1HKKdR4A7c"
CLASS_NAMES = np.array([item.name for item in test_data_dir.glob('*')])
CLASS_NAMES
# + colab={} colab_type="code" id="1zf695or-Flq"
NUMBER_OF_CLASSES = len(CLASS_NAMES)
BATCH_SIZE = 32
TRAIN_STEPS_PER_EPOCH = np.ceil(train_image_count/BATCH_SIZE)
TEST_STEPS_PER_EPOCH = np.ceil(test_image_count/BATCH_SIZE)
# -
# ### Visualisation functions
# + colab={} colab_type="code" id="nLp0XVG_Vgi2"
def show_logical_batch(image_batch, label_batch):
plt.figure(figsize=(7,7))
for n in range(9):
ax = plt.subplot(3,3,n+1)
plt.imshow(image_batch[n])
plt.title(CLASS_NAMES[label_batch[n]==1][0].title())
plt.axis('off')
# -
def show_numerical_batch(image_batch, label_batch):
plt.figure(figsize=(7,7))
for n in range(9):
ax = plt.subplot(3,3,n+1)
plt.imshow(image_batch[n])
plt.title(CLASS_NAMES[label_batch[n]])
plt.axis('off')
# + [markdown] colab_type="text" id="IIG5CPaULegg"
# ### Load data
# + colab={} colab_type="code" id="lAkQp5uxoINu"
train_list_ds = tf.data.Dataset.list_files(str(train_data_dir/'*/*'))
test_list_ds = tf.data.Dataset.list_files(str(test_data_dir/'*/*'))
# + colab={} colab_type="code" id="arSQzIey-4D4"
def get_logical_label(file_path):
# convert the path to a list of path components
parts = tf.strings.split(file_path, os.path.sep)
# The second to last is the class-directory
return parts[-2] == CLASS_NAMES
# -
def get_numerical_label(file_path):
# convert the path to a list of path components
parts = tf.strings.split(file_path, os.path.sep)
# The second to last is the class-directory
numeric_label=tf.argmax(tf.cast((parts[-2] == CLASS_NAMES),dtype=tf.uint8))
return numeric_label
def get_onehot_label(file_path):
# convert the path to a list of path components
parts = tf.strings.split(file_path, os.path.sep)
# The second to last is the class-directory
onehot_label=tf.cast((parts[-2] == CLASS_NAMES),dtype=tf.uint8)
return onehot_label
# + colab={} colab_type="code" id="MGlq4IP4Aktb"
def decode_img(img):
    # convert the compressed string to a 3D uint8 tensor (the dataset files are PNGs)
    img = tf.image.decode_png(img, channels=3)
    # Use `convert_image_dtype` to convert to floats in the [0,1] range.
    img = tf.image.convert_image_dtype(img, tf.float32)
return img
# + colab={} colab_type="code" id="-xhBRgvNqRRe"
def process_path(file_path):
label = get_numerical_label(file_path)
# load the raw data from the file as a string
img = tf.io.read_file(file_path)
img = decode_img(img)
return img, label
# + colab={} colab_type="code" id="3SDhbo8lOBQv"
# Set `num_parallel_calls` so multiple images are loaded/processed in parallel.
train_labeled_ds = train_list_ds.map(process_path, num_parallel_calls=AUTOTUNE)
test_labeled_ds = test_list_ds.map(process_path, num_parallel_calls=AUTOTUNE)
# + colab={} colab_type="code" id="uZmZJx8ePw_5"
def prepare_for_training(ds, cache=True, shuffle_buffer_size=1000):
# This is a small dataset, only load it once, and keep it in memory.
# use `.cache(filename)` to cache preprocessing work for datasets that don't
# fit in memory.
if cache:
if isinstance(cache, str):
ds = ds.cache(cache)
else:
ds = ds.cache()
ds = ds.shuffle(buffer_size=shuffle_buffer_size)
# Repeat forever
ds = ds.repeat()
ds = ds.batch(BATCH_SIZE)
# `prefetch` lets the dataset fetch batches in the background while the model
# is training.
ds = ds.prefetch(buffer_size=AUTOTUNE)
return ds
# + colab={} colab_type="code" id="-YKnrfAeZV10"
train_ds = prepare_for_training(train_labeled_ds)
test_ds = prepare_for_training(test_labeled_ds)
tf_image_batch, tf_label_batch= next(iter(train_ds))
IMG_HEIGHT, IMG_WIDTH, _ =tf_image_batch[0].shape
# + colab={} colab_type="code" id="UN_Dnl72YNIj"
show_numerical_batch(tf_image_batch.numpy(), tf_label_batch.numpy())
# + [markdown] colab_type="text" id="DSPCom-KmApV"
# ### Model 1
# #### Simple CNN model.
# + colab={} colab_type="code" id="L9YmGQBQPrdn"
model = models.Sequential()
model.add(layers.Conv2D(32, (3, 3), activation='relu', input_shape=(IMG_HEIGHT, IMG_WIDTH, 3)))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(64, (3, 3), activation='relu'))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(64, (3, 3), activation='relu'))
model.add(layers.Flatten())
model.add(layers.Dense(64, activation='relu'))
model.add(layers.Dense(NUMBER_OF_CLASSES))
# -
# #### Compile and train the model
# + colab={} colab_type="code" id="MdDzI75PUXrG"
model.compile(optimizer='adam',
loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
metrics=['accuracy'])
history = model.fit(train_ds, epochs=1,
validation_data=test_ds,
steps_per_epoch=TRAIN_STEPS_PER_EPOCH,
validation_steps=TEST_STEPS_PER_EPOCH)
# + [markdown] colab_type="text" id="jKgyC5K_4O0d"
# #### Evaluate the model
# -
probability_model = tf.keras.Sequential([model, tf.keras.layers.Softmax()])
test_images, test_labels=next(iter(test_ds))
predictions=probability_model.predict(test_images)
# +
def plot_image(i, predictions_array, true_label, img):
predictions_array, true_label, img = predictions_array, true_label[i], img[i]
plt.grid(False)
plt.xticks([])
plt.yticks([])
plt.imshow(img, cmap=plt.cm.binary)
predicted_label = np.argmax(predictions_array)
if predicted_label == true_label:
color = 'blue'
else:
color = 'red'
plt.xlabel("{} \n ({})".format(CLASS_NAMES[predicted_label],
CLASS_NAMES[true_label]),
color=color)
def plot_value_array(i, predictions_array, true_label):
predictions_array, true_label = predictions_array, true_label[i]
plt.grid(False)
plt.xticks(range(NUMBER_OF_CLASSES))
plt.yticks([])
thisplot = plt.bar(range(NUMBER_OF_CLASSES), predictions_array, color="#777777")
plt.ylim([0, 1])
predicted_label = np.argmax(predictions_array)
thisplot[predicted_label].set_color('red')
thisplot[true_label].set_color('blue')
# -
# Plot the first X test images, their predicted labels, and the true labels.
# Color correct predictions in blue and incorrect predictions in red.
num_rows = 5
num_cols = 1
num_images = num_rows*num_cols
plt.figure(figsize=(2*2*num_cols, 2*num_rows))
for i in range(num_images):
plt.subplot(num_rows, 2*num_cols, 2*i+1)
plot_image(i, predictions[i], test_labels, test_images)
plt.subplot(num_rows, 2*num_cols, 2*i+2)
plot_value_array(i, predictions[i], test_labels)
plt.tight_layout(pad=4.0)
plt.show()
| simple_binary_350_350_smoothed.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # NREFT Spectra
# Here, we calculate the spectra for 11 different operators and a range of target nuclei.
# %matplotlib inline
# +
from WIMpy import DMUtils as DMU
#We'll also import some useful libraries
import numpy as np
import matplotlib.pyplot as pl
import matplotlib as mpl
font = {'family' : 'sans-serif',
'size' : 16}
mpl.rcParams['xtick.major.size'] = 5
mpl.rcParams['xtick.major.width'] = 1
mpl.rcParams['xtick.minor.size'] = 3
mpl.rcParams['xtick.minor.width'] = 1
mpl.rcParams['ytick.major.size'] = 5
mpl.rcParams['ytick.major.width'] = 1
mpl.rcParams['ytick.minor.size'] = 3
mpl.rcParams['ytick.minor.width'] = 1
mpl.rc('font', **font)
mpl.rcParams['xtick.direction'] = 'in'
mpl.rcParams['ytick.direction'] = 'in'
mpl.rcParams['xtick.top'] = True
mpl.rcParams['ytick.right'] = True
from tqdm import tqdm
from scipy.integrate import quad
from matplotlib.ticker import MultipleLocator
# -
# ### Target nuclei
#
# Let's specify the target nuclei we're interested in...
# +
targets = ["Xenon", "Argon", "Germanium", "C3F8"]
nuclei_Xe = ["Xe128", "Xe129", "Xe130", "Xe131", "Xe132", "Xe134", "Xe136"]
nuclei_Ar = ["Ar40",]
nuclei_C3F8 = ["C12", "Fluorine"]
nuclei_Ge = ["Ge70", "Ge72", "Ge73", "Ge74", "Ge76"]
nuclei_vals = dict(zip(targets, [nuclei_Xe, nuclei_Ar, nuclei_Ge, nuclei_C3F8]))
#Load in the list of nuclear spins, atomic masses and mass fractions
nuclei_list = np.loadtxt("../WIMpy/Nuclei.txt", usecols=(0,), dtype=str)
frac_list = np.loadtxt("../WIMpy/Nuclei.txt", usecols=(3,))
frac_vals = dict(zip(nuclei_list, frac_list))
# -
# ### Calculating the recoil rate
#
# Let's define a function for calculating the recoil spectrum for a given target:
# +
E_list = np.linspace(0, 100, 1000)
m_x = 50.0 #GeV
def calcSpectrum(target, operator):
cp = np.zeros(11)
cn = np.zeros(11)
#Assume isoscalar (cp = cn) interactions
cp[operator-1] = 1.0
cn[operator-1] = 1.0
dRdE = np.zeros_like(E_list)
if (target == "C3F8"):
#Weight by mass fractions of constituents
dRdE = 0.1915*DMU.dRdE_NREFT(E_list, m_x, cp, cn, "C12")\
+ 0.8085*DMU.dRdE_NREFT(E_list, m_x, cp, cn, "Fluorine")
else:
nuclei = nuclei_vals[target]
for nuc in nuclei:
dRdE += frac_vals[nuc]*DMU.dRdE_NREFT(E_list, m_x, cp, cn, nuc)
return dRdE
# -
# ### Plotting a single recoil rate
#
# Adding the recoil rate to a given plot:
def plotSpectrum(target, operator, ax, label, color):
dRdE = calcSpectrum(target, operator)
#Normalise to 1 event
dRdE_norm = dRdE/np.trapz(dRdE,E_list)
ax.plot(E_list, dRdE_norm, label=label, color=color, lw=1.5)
# ### Plotting the recoil rate for a given operator (and all targets)
def plotOperator(ax, operator, plotLegend=False):
#ax.set_title(r"Operator $\mathcal{O}_{" + str(operator) + "}$",fontsize=14)
colors = ['r','b','g', 'c']
for tar, col in zip(targets, colors):
plotSpectrum(tar, operator, ax, label=tar, color=col)
ax.set_xlabel(r'$E_R \,\,\mathrm{[keV]}$')
ax.set_ylabel(r'$\mathrm{d}R/\mathrm{d}E_R \,\,\mathrm{[1/keV]}$')
#ax.set_ylabel(r'$\mathrm{d}R/\mathrm{d}E_R \,\,\mathrm{[arb. units]}$')
if (plotLegend):
ax.legend(fancybox=True, fontsize=14)
#ax.yaxis.set_major_locator(MultipleLocator(0.01))
ax.set_ylim(0, 0.06)
# ### Actually doing the plotting...
# +
f,ax = pl.subplots(5,2,figsize=(10.7,15))
#f,ax = pl.subplots(2,2,figsize=(10,7))
plotOperator(ax.flatten()[0],1, plotLegend=True)
for i,op in enumerate([3,4,5,6,7,8,9,10,11]):
plotOperator(ax.flatten()[i+1],op)
#for i,op in enumerate([7,8,11]):
# plotOperator(ax.flatten()[i+1],op)
pl.tight_layout()
pl.savefig("../plots/Spectra_mx=" + str(int(m_x))+ "GeV.pdf", bbox_inches="tight")
pl.show()
# -
# ## Comparing 2 operators
# +
fig = pl.figure(figsize=(7,5))
dRdE1 = calcSpectrum("Xenon", 1)
dRdE11 = calcSpectrum("Xenon", 11)
pl.loglog(E_list, dRdE1, label="Operator 1")
pl.loglog(E_list, 1e4*dRdE11, label = r"Operator 11 ($\times 10^4$)")
pl.xlabel(r'$E_R \,\,\mathrm{[keV]}$')
pl.ylabel(r'$\mathrm{d}R/\mathrm{d}E_R \,\,\mathrm{[arb. units]}$')
pl.legend()
pl.show()
# -
| Examples/Spectra.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: w2v
# language: python
# name: w2v
# ---
# # Visualize Convergence
#
# This notebook is used to visualize results produced by `runConvergenceRange.py`. Graphs produced by this notebook can be used to assess the convergence of word2vec models trained on a diachronic corpus, in order to determine the number of years required for the model to be representative of the corpus. For more information, read the README.md document.
# %pylab inline
# %load_ext autoreload
# %autoreload 2
import pickle as pkl
from glob import glob
from helpers import getYears
from convergence import computeConvergenceOverYearRange
# # IMPORTANT
# Set `saveDir` to be the directory where you saved your convergence results. This should be the same which was used as the `--outDir` of `runConvergenceRange.py` script.
saveDir = 'outdir/vk'
def doPlots(convergence, sentenceYearCounter):
semilogx(convergence.keys(), convergence.values(), 'x-')
xlabel('# Sentences')
ylabel('Convergence')
    for year, sentenceCount in sentenceYearCounter.items():
plot([sentenceCount,sentenceCount], [0,1],'--', color='#aaaaaa')
text(sentenceCount, 0.5, str(year), rotation=270)
ax = axis()
axis([ ax[0], ax[1], 0.0, 1.0 ])
def plotSentences(sentenceYearCounter):
    semilogy(list(sentenceYearCounter.keys()), list(sentenceYearCounter.values()), 'x-')
xticks(sentenceYearCounter.keys(), sentenceYearCounter.keys(), rotation=90)
xlabel('Year')
ylabel('# Sentences')
# +
files = sorted(glob(saveDir + '/convergenceRange_*.pkl'))
#files = sorted(glob('./run1990_s*' + '/convergenceRange_*.pkl'))
for f in files:
    convergence, sentenceYearCounter, vocabSize = pkl.load(open(f, 'rb'))
plotTitle = f.replace(saveDir + '/convergenceRange_', '').replace('.pkl', '')
#plotTitle = f.replace('convergenceRange_', '').replace('.pkl', '')
figure(figsize=(30,10))
subplot(1,2,1)
doPlots(convergence, sentenceYearCounter)
title(plotTitle)
# subplot(1,2,2)
#plotSentences(sentenceYearCounter)
# -
files
# # Checking diagonal convergence
# Just for argument's sake -- I had a look at one set of models to see how the diagonal values behave (I should probably check all sets of models, but one will do).
#
# Initially the model changes a lot, then it settles and finally converges (as before).
#
# The mean of the diagonal convergence is equivalent to the convergence above.
#
# Next steps:
#
# - Not sure ?
# - Produce graphs from upstairs
# - Adjust text and send round again
#
from convergence import measureDiagonalConvergence, getModelInv
from sortedcontainers import SortedDict
import gensim
# +
modelFolder = './run1990_s1261/'
files = sorted(glob(modelFolder + '/*[!vocab].w2v'))
convergence = SortedDict()
sentenceYearCounter = SortedDict()
vocabSize = SortedDict()
oldModel, oldModelInv = None, None
for f in files:
    print('... loading file:', f.replace(modelFolder, ''))
    newModel = gensim.models.KeyedVectors.load_word2vec_format(f, binary=True)
    newModel.init_sims(replace=False)
    newModelInv = getModelInv(newModel)
    cumSentences = int(f.replace(modelFolder, '').replace('.w2v', '').split('_')[1])
    if oldModel is not None:
        # d = measureDiagonalConvergence(oldModel, oldModelInv, newModel, newModelInv, vector_size)
        d = measureDiagonalConvergence(oldModel, oldModelInv, newModel, newModelInv)
        convergence[cumSentences] = d
    vocabSize[cumSentences] = len(newModel.vocab)
    oldModel = newModel
    oldModelInv = newModelInv
# -
data = np.array(list(convergence.values()))
plot(data);
# +
mu = data.mean(axis=1)
sigma = data.std(axis=1)
plot(convergence.keys(), mu, 'b+-')
plot(convergence.keys(), mu + sigma, 'r-')
plot(convergence.keys(), mu - sigma, 'r-')
plot(convergence.keys(), data.max(axis=1), 'b--')
plot(convergence.keys(), data.min(axis=1), 'b--');
f = './run1990_s1261/convergenceRange_1990.pkl'
convergence, sentenceYearCounter, vocabSize = pkl.load(open(f, 'rb'))
semilogx(convergence.keys(), convergence.values(), 'gx');
# -
| VisualizeConvergence.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
# %matplotlib inline
import pandas as pd
import matplotlib.pyplot as plt
women_degrees = pd.read_csv('percent-bachelors-degrees-women-usa.csv')
cb_dark_blue = (0/255,107/255,164/255)
cb_orange = (255/255, 128/255, 14/255)
#stem_cats = ['Engineering', 'Computer Science', 'Psychology', 'Biology', 'Physical Sciences', 'Math and Statistics']
stem_cats = ['Psychology', 'Biology', 'Math and Statistics', 'Physical Sciences', 'Computer Science', 'Engineering']
lib_arts_cats = ['Foreign Languages', 'English', 'Communications and Journalism', 'Art and Performance', 'Social Sciences and History']
other_cats = ['Health Professions', 'Public Administration', 'Education', 'Agriculture','Business', 'Architecture']
fig = plt.figure(figsize=(16, 20))
for sp in range(0, 18, 3):
    i = int(sp/3)
    ax = fig.add_subplot(6, 3, sp+1)
    ax.plot(women_degrees['Year'], women_degrees[stem_cats[i]], c=cb_dark_blue, label='Women', linewidth=3)
    ax.plot(women_degrees['Year'], 100-women_degrees[stem_cats[i]], c=cb_orange, label='Men', linewidth=3)
    ax.spines["right"].set_visible(False)
    ax.spines["left"].set_visible(False)
    ax.spines["top"].set_visible(False)
    ax.spines["bottom"].set_visible(False)
    ax.set_xlim(1968, 2011)
    ax.set_ylim(0, 100)
    ax.set_title(stem_cats[i])
    ax.tick_params(bottom="off", top="off", left="off", right="off")
    if sp == 0:
        ax.text(2005, 87, 'Men')
        ax.text(2002, 8, 'Women')
    elif sp == 15:
        ax.text(2005, 62, 'Men')
        ax.text(2001, 35, 'Women')
for sp in range(0, 18, 3):
    i = int(sp/3)
    ax = fig.add_subplot(6, 3, sp+3)
    ax.plot(women_degrees['Year'], women_degrees[other_cats[i]], c=cb_dark_blue, label='Women', linewidth=3)
    ax.plot(women_degrees['Year'], 100-women_degrees[other_cats[i]], c=cb_orange, label='Men', linewidth=3)
    ax.spines["right"].set_visible(False)
    ax.spines["left"].set_visible(False)
    ax.spines["top"].set_visible(False)
    ax.spines["bottom"].set_visible(False)
    ax.set_xlim(1968, 2011)
    ax.set_ylim(0, 100)
    ax.set_title(other_cats[i])
    ax.tick_params(bottom="off", top="off", left="off", right="off")
    if sp == 0:
        ax.text(2005, 87, 'Men')
        ax.text(2002, 8, 'Women')
    elif sp == 15:
        ax.text(2005, 62, 'Men')
        ax.text(2001, 35, 'Women')
for sp in range(0, 15, 3):
    i = int(sp/3)
    sp = sp - 1
    ax = fig.add_subplot(6, 3, sp+3)
    ax.plot(women_degrees['Year'], women_degrees[lib_arts_cats[i]], c=cb_dark_blue, label='Women', linewidth=3)
    ax.plot(women_degrees['Year'], 100-women_degrees[lib_arts_cats[i]], c=cb_orange, label='Men', linewidth=3)
    ax.spines["right"].set_visible(False)
    ax.spines["left"].set_visible(False)
    ax.spines["top"].set_visible(False)
    ax.spines["bottom"].set_visible(False)
    ax.set_xlim(1968, 2011)
    ax.set_ylim(0, 100)
    ax.set_title(lib_arts_cats[i])
    ax.tick_params(bottom="off", top="off", left="off", right="off")
    if sp == -1:
        ax.text(2005, 87, 'Men')
        ax.text(2002, 8, 'Women')
    elif sp == 15:
        ax.text(2005, 62, 'Men')
        ax.text(2001, 35, 'Women')
plt.show()
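# A side note, not part of the exercise: newer matplotlib releases reject the string values `"off"`/`"on"` passed to `tick_params` above; boolean arguments do the same job. A minimal sketch, independent of the degrees data:

```python
import matplotlib
matplotlib.use("Agg")  # headless backend, so the sketch runs without a display
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
ax.plot([1968, 2011], [0, 100])
# Boolean equivalents of tick_params(bottom="off", ..., labelbottom="on")
ax.tick_params(bottom=False, top=False, left=False, right=False,
               labelbottom=True)
fig.savefig("tick_params_sketch.png")
```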
# +
# %matplotlib inline
import pandas as pd
import matplotlib.pyplot as plt
women_degrees = pd.read_csv('percent-bachelors-degrees-women-usa.csv')
cb_dark_blue = (0/255,107/255,164/255)
cb_orange = (255/255, 128/255, 14/255)
#stem_cats = ['Engineering', 'Computer Science', 'Psychology', 'Biology', 'Physical Sciences', 'Math and Statistics']
stem_cats = ['Psychology', 'Biology', 'Math and Statistics', 'Physical Sciences', 'Computer Science', 'Engineering']
lib_arts_cats = ['Foreign Languages', 'English', 'Communications and Journalism', 'Art and Performance', 'Social Sciences and History']
other_cats = ['Health Professions', 'Public Administration', 'Education', 'Agriculture','Business', 'Architecture']
fig = plt.figure(figsize=(16, 20))
for sp in range(0, 18, 3):
    i = int(sp/3)
    ax = fig.add_subplot(6, 3, sp+1)
    ax.plot(women_degrees['Year'], women_degrees[stem_cats[i]], c=cb_dark_blue, label='Women', linewidth=3)
    ax.plot(women_degrees['Year'], 100-women_degrees[stem_cats[i]], c=cb_orange, label='Men', linewidth=3)
    ax.spines["right"].set_visible(False)
    ax.spines["left"].set_visible(False)
    ax.spines["top"].set_visible(False)
    ax.spines["bottom"].set_visible(False)
    ax.set_xlim(1968, 2011)
    ax.set_ylim(0, 100)
    ax.set_yticks([0, 100])  # show only the y-axis ticks we want, i.e. 0 and 100
    ax.set_title(stem_cats[i])
    ax.tick_params(bottom="off", top="off", left="off", right="off", labelbottom="off")
    if sp == 0:
        ax.text(2005, 87, 'Men')
        ax.text(2002, 8, 'Women')
    elif sp == 15:
        ax.text(2005, 62, 'Men')
        ax.text(2001, 35, 'Women')
        ax.tick_params(labelbottom="on")  # show x-axis labels on the bottom row
for sp in range(0, 18, 3):
    i = int(sp/3)
    ax = fig.add_subplot(6, 3, sp+3)
    ax.plot(women_degrees['Year'], women_degrees[other_cats[i]], c=cb_dark_blue, label='Women', linewidth=3)
    ax.plot(women_degrees['Year'], 100-women_degrees[other_cats[i]], c=cb_orange, label='Men', linewidth=3)
    ax.spines["right"].set_visible(False)
    ax.spines["left"].set_visible(False)
    ax.spines["top"].set_visible(False)
    ax.spines["bottom"].set_visible(False)
    ax.set_xlim(1968, 2011)
    ax.set_ylim(0, 100)
    ax.set_yticks([0, 100])  # show only 0 and 100 on the y axis
    ax.set_title(other_cats[i])
    ax.tick_params(bottom="off", top="off", left="off", right="off", labelbottom="off")
    if sp == 0:
        ax.text(2005, 87, 'Men')
        ax.text(2002, 8, 'Women')
    elif sp == 15:
        ax.text(2005, 62, 'Men')
        ax.text(2001, 35, 'Women')
        ax.tick_params(labelbottom="on")  # show x-axis labels on the bottom row
for sp in range(0, 15, 3):
    i = int(sp/3)
    sp = sp - 1
    ax = fig.add_subplot(6, 3, sp+3)
    ax.plot(women_degrees['Year'], women_degrees[lib_arts_cats[i]], c=cb_dark_blue, label='Women', linewidth=3)
    ax.plot(women_degrees['Year'], 100-women_degrees[lib_arts_cats[i]], c=cb_orange, label='Men', linewidth=3)
    ax.spines["right"].set_visible(False)
    ax.spines["left"].set_visible(False)
    ax.spines["top"].set_visible(False)
    ax.spines["bottom"].set_visible(False)
    ax.set_xlim(1968, 2011)
    ax.set_ylim(0, 100)
    ax.set_yticks([0, 50, 100])  # show 0, 50, and 100 on the y axis
    ax.set_title(lib_arts_cats[i])
    ax.tick_params(bottom="off", top="off", left="off", right="off", labelbottom="off")
    if sp == -1:
        ax.text(2005, 87, 'Men')
        ax.text(2002, 8, 'Women')
    elif sp == 11:
        ax.tick_params(labelbottom="on")  # show x-axis labels on the bottom row
plt.show()
# +
# %matplotlib inline
import pandas as pd
import matplotlib.pyplot as plt
women_degrees = pd.read_csv('percent-bachelors-degrees-women-usa.csv')
cb_dark_blue = (0/255,107/255,164/255)
cb_orange = (255/255, 128/255, 14/255)
#stem_cats = ['Engineering', 'Computer Science', 'Psychology', 'Biology', 'Physical Sciences', 'Math and Statistics']
stem_cats = ['Psychology', 'Biology', 'Math and Statistics', 'Physical Sciences', 'Computer Science', 'Engineering']
lib_arts_cats = ['Foreign Languages', 'English', 'Communications and Journalism', 'Art and Performance', 'Social Sciences and History']
other_cats = ['Health Professions', 'Public Administration', 'Education', 'Agriculture','Business', 'Architecture']
fig = plt.figure(figsize=(16, 20))
for sp in range(0, 18, 3):
    i = int(sp/3)
    ax = fig.add_subplot(6, 3, sp+1)
    ax.plot(women_degrees['Year'], women_degrees[stem_cats[i]], c=cb_dark_blue, label='Women', linewidth=3)
    ax.plot(women_degrees['Year'], 100-women_degrees[stem_cats[i]], c=cb_orange, label='Men', linewidth=3)
    ax.spines["right"].set_visible(False)
    ax.spines["left"].set_visible(False)
    ax.spines["top"].set_visible(False)
    ax.spines["bottom"].set_visible(False)
    ax.set_xlim(1968, 2011)
    ax.set_ylim(0, 100)
    ax.set_yticks([0, 100])  # show only the y-axis ticks we want, i.e. 0 and 100
    ax.set_title(stem_cats[i])
    ax.tick_params(bottom="off", top="off", left="off", right="off", labelbottom="off")
    # horizontal reference line at 50%; alpha sets the transparency
    ax.axhline(50, c=(171/255, 171/255, 171/255), alpha=0.3)
    if sp == 0:
        ax.text(2005, 87, 'Men')
        ax.text(2002, 8, 'Women')
    elif sp == 15:
        ax.text(2005, 62, 'Men')
        ax.text(2001, 35, 'Women')
        ax.tick_params(labelbottom="on")  # show x-axis labels on the bottom row
for sp in range(0, 18, 3):
    i = int(sp/3)
    ax = fig.add_subplot(6, 3, sp+3)
    ax.plot(women_degrees['Year'], women_degrees[other_cats[i]], c=cb_dark_blue, label='Women', linewidth=3)
    ax.plot(women_degrees['Year'], 100-women_degrees[other_cats[i]], c=cb_orange, label='Men', linewidth=3)
    ax.spines["right"].set_visible(False)
    ax.spines["left"].set_visible(False)
    ax.spines["top"].set_visible(False)
    ax.spines["bottom"].set_visible(False)
    ax.set_xlim(1968, 2011)
    ax.set_ylim(0, 100)
    ax.set_yticks([0, 100])  # show only 0 and 100 on the y axis
    # horizontal reference line at 50%; alpha sets the transparency
    ax.axhline(50, c=(171/255, 171/255, 171/255), alpha=0.3)
    ax.set_title(other_cats[i])
    ax.tick_params(bottom="off", top="off", left="off", right="off", labelbottom="off")
    if sp == 0:
        ax.text(2005, 87, 'Men')
        ax.text(2002, 8, 'Women')
    elif sp == 15:
        ax.text(2005, 62, 'Men')
        ax.text(2001, 35, 'Women')
        ax.tick_params(labelbottom="on")  # show x-axis labels on the bottom row
for sp in range(0, 15, 3):
    i = int(sp/3)
    sp = sp - 1
    ax = fig.add_subplot(6, 3, sp+3)
    ax.plot(women_degrees['Year'], women_degrees[lib_arts_cats[i]], c=cb_dark_blue, label='Women', linewidth=3)
    ax.plot(women_degrees['Year'], 100-women_degrees[lib_arts_cats[i]], c=cb_orange, label='Men', linewidth=3)
    ax.spines["right"].set_visible(False)
    ax.spines["left"].set_visible(False)
    ax.spines["top"].set_visible(False)
    ax.spines["bottom"].set_visible(False)
    ax.set_xlim(1968, 2011)
    ax.set_ylim(0, 100)
    ax.set_yticks([0, 50, 100])  # show 0, 50, and 100 on the y axis
    # horizontal reference line at 50%; alpha sets the transparency
    ax.axhline(50, c=(171/255, 171/255, 171/255), alpha=0.3)
    ax.set_title(lib_arts_cats[i])
    ax.tick_params(bottom="off", top="off", left="off", right="off", labelbottom="off")
    if sp == -1:
        ax.text(2005, 87, 'Men')
        ax.text(2002, 8, 'Women')
    elif sp == 11:
        ax.tick_params(labelbottom="on")  # show x-axis labels on the bottom row
plt.savefig("gender_degrees.png")
plt.show()
| Guided Project_ Visualizing The Gender Gap In College Degrees/Basics.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
# # Machine Learning Engineer Nanodegree
# ## Unsupervised Learning
# ## Project 3: Creating Customer Segments
# Welcome to the third project of the Machine Learning Engineer Nanodegree! In this notebook, some template code has already been provided for you, and it will be your job to implement the additional functionality necessary to successfully complete this project. Sections that begin with **'Implementation'** in the header indicate that the following block of code will require additional functionality which you must provide. Instructions will be provided for each section and the specifics of the implementation are marked in the code block with a `'TODO'` statement. Please be sure to read the instructions carefully!
#
# In addition to implementing code, there will be questions that you must answer which relate to the project and your implementation. Each section where you will answer a question is preceded by a **'Question X'** header. Carefully read each question and provide thorough answers in the following text boxes that begin with **'Answer:'**. Your project submission will be evaluated based on your answers to each of the questions and the implementation you provide.
#
# >**Note:** Code and Markdown cells can be executed using the **Shift + Enter** keyboard shortcut. In addition, Markdown cells can be edited by double-clicking the cell to enter edit mode.
# ## Getting Started
#
# In this project, you will analyze a dataset containing data on various customers' annual spending amounts (reported in *monetary units*) of diverse product categories for internal structure. One goal of this project is to best describe the variation in the different types of customers that a wholesale distributor interacts with. Doing so would equip the distributor with insight into how to best structure their delivery service to meet the needs of each customer.
#
# The dataset for this project can be found on the [UCI Machine Learning Repository](https://archive.ics.uci.edu/ml/datasets/Wholesale+customers). For the purposes of this project, the features `'Channel'` and `'Region'` will be excluded in the analysis — with focus instead on the six product categories recorded for customers.
#
# Run the code block below to load the wholesale customers dataset, along with a few of the necessary Python libraries required for this project. You will know the dataset loaded successfully if the size of the dataset is reported.
# +
# Import libraries necessary for this project
import numpy as np
import pandas as pd
import renders as rs
from IPython.display import display # Allows the use of display() for DataFrames
# Show matplotlib plots inline (nicely formatted in the notebook)
import matplotlib as mpl
# %matplotlib inline
# Load the wholesale customers dataset
try:
    data = pd.read_csv("customers.csv")
    data.drop(['Region', 'Channel'], axis=1, inplace=True)
    print "Wholesale customers dataset has {} samples with {} features each.".format(*data.shape)
except:
    print "Dataset could not be loaded. Is the dataset missing?"
# -
# ## Data Exploration
# In this section, you will begin exploring the data through visualizations and code to understand how each feature is related to the others. You will observe a statistical description of the dataset, consider the relevance of each feature, and select a few sample data points from the dataset which you will track through the course of this project.
#
# Run the code block below to observe a statistical description of the dataset. Note that the dataset is composed of six important product categories: **'Fresh'**, **'Milk'**, **'Grocery'**, **'Frozen'**, **'Detergents_Paper'**, and **'Delicatessen'**. Consider what each category represents in terms of products you could purchase.
# Display a description of the dataset
display(data.describe())
# ### Implementation: Selecting Samples
# To get a better understanding of the customers and how their data will transform through the analysis, it would be best to select a few sample data points and explore them in more detail. In the code block below, add **three** indices of your choice to the `indices` list which will represent the customers to track. It is suggested to try different sets of samples until you obtain customers that vary significantly from one another.
# +
# TODO: Select three indices of your choice you wish to sample from the dataset
indices = [28,29,30]
# Create a DataFrame of the chosen samples
samples = pd.DataFrame(data.loc[indices], columns = data.keys()).reset_index(drop = True)
print "Chosen samples of wholesale customers dataset:"
display(samples)
# -
# ### Question 1
# Consider the total purchase cost of each product category and the statistical description of the dataset above for your sample customers.
# *What kind of establishment (customer) could each of the three samples you've chosen represent?*
# **Hint:** Examples of establishments include places like markets, cafes, and retailers, among many others. Avoid using names for establishments, such as saying *"McDonalds"* when describing a sample customer as a restaurant.
# **Answer:**
#
# Looking at the statistical description of the dataset, the median and the mean are quite far apart. This implies that the distribution of the dataset is skewed. In that case, the median is probably a better benchmark for describing the data than the mean; the median is also more robust to outliers.
#
# Sample 0 has high costs in every category except "Fresh". The cost on "Frozen" is about the median. This one may represent a retailer or a convenience store.
#
# Sample 1 has a very high cost on "Fresh" only. The costs on "Frozen" and "Delicatessen" are just above the medians. The costs on everything else are low. This one may represent a restaurant.
#
# Sample 2 has high costs on "Fresh", "Grocery", "Detergents_Paper", and "Delicatessen". The costs of "Milk" and "Frozen" are about the medians. This one may represent a retailer.
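# As a quick illustration of the mean-versus-median point above (synthetic numbers, not the customer data): a single large value is enough to drag the mean far from the median.

```python
import pandas as pd

# Skewed sample: most values are small, one is very large
spend = pd.Series([5, 7, 8, 10, 12, 500], dtype=float)
print(spend.mean())    # 90.33... -- pulled upward by the outlier
print(spend.median())  # 9.0 -- stays near the bulk of the data
```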
# ### Implementation: Feature Relevance
# One interesting thought to consider is if one (or more) of the six product categories is actually relevant for understanding customer purchasing. That is to say, is it possible to determine whether customers purchasing some amount of one category of products will necessarily purchase some proportional amount of another category of products? We can make this determination quite easily by training a supervised regression learner on a subset of the data with one feature removed, and then score how well that model can predict the removed feature.
#
# In the code block below, you will need to implement the following:
# - Assign `new_data` a copy of the data by removing a feature of your choice using the `DataFrame.drop` function.
# - Use `sklearn.cross_validation.train_test_split` to split the dataset into training and testing sets.
# - Use the removed feature as your target label. Set a `test_size` of `0.25` and set a `random_state`.
# - Import a decision tree regressor, set a `random_state`, and fit the learner to the training data.
# - Report the prediction score of the testing set using the regressor's `score` function.
# +
from sklearn.cross_validation import train_test_split
from sklearn.tree import DecisionTreeRegressor
seed = 42
num_labels = len(data.axes[1])
score = np.zeros(num_labels, dtype = float)
for i, drop_feature in enumerate(data.axes[1]):
    # TODO: Make a copy of the DataFrame, using the 'drop' function to drop the given feature
    new_data = pd.DataFrame(data, copy=True)
    new_data.drop(drop_feature, axis=1, inplace=True)
    new_label = data.get(drop_feature)
    # TODO: Split the data into training and testing sets using the given feature as the target
    X_train, X_test, y_train, y_test = train_test_split(new_data, new_label, test_size=0.25, random_state=seed)
    # TODO: Create a decision tree regressor and fit it to the training set
    regressor = DecisionTreeRegressor(random_state=seed)
    regressor.fit(X_train, y_train)
    score[i] = regressor.score(X_test, y_test)
    # TODO: Report the score of the prediction using the testing set
    print "{:+.3f} is the relevance score between other categories and '{}'.".format(score[i], drop_feature)
# -
# ### Question 2
# *Which feature did you attempt to predict? What was the reported prediction score? Is this feature necessary for identifying customers' spending habits?*
# **Hint:** The coefficient of determination, `R^2`, ranges up to 1, with 1 being a perfect fit; a negative `R^2` implies the model fails to fit the data.
# **Answer:**
#
# The reported prediction scores of all six features are shown on the following table:
#
# | Feature | Score |
# |------------------ |-------- |
# | Fresh | -0.386 |
# | Milk | 0.156 |
# | Grocery | 0.682 |
# | Frozen | -0.210 |
# | Detergents_Paper | 0.272 |
# | Delicatessen | -2.255 |
#
# The higher the R<sup>2</sup> score, the more a customer's spending on that category tends to move together with spending on the other five categories; that is, the feature is more dependent on the others. The lower the R<sup>2</sup> score, the more independent the category is. Therefore, among all these features, "Delicatessen" is the most useful for identifying customers' spending habits, followed by "Fresh" and "Frozen".
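# The hint's point about negative R<sup>2</sup> can be reproduced directly with `sklearn.metrics.r2_score` on toy numbers (hypothetical values, unrelated to the customer data):

```python
import numpy as np
from sklearn.metrics import r2_score

y_true = np.array([1.0, 2.0, 3.0, 4.0])

# Always predicting the mean of y_true gives R^2 = 0
print(r2_score(y_true, np.full(4, y_true.mean())))  # 0.0

# Predictions worse than the mean give a negative R^2
print(r2_score(y_true, np.array([4.0, 3.0, 2.0, 1.0])))  # -3.0
```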
# ### Visualize Feature Distributions
# To get a better understanding of the dataset, we can construct a scatter matrix of each of the six product features present in the data. If you found that the feature you attempted to predict above is relevant for identifying a specific customer, then the scatter matrix below may not show any correlation between that feature and the others. Conversely, if you believe that feature is not relevant for identifying a specific customer, the scatter matrix might show a correlation between that feature and another feature in the data. Run the code block below to produce a scatter matrix.
# Produce a scatter matrix for each pair of features in the data
pd.scatter_matrix(data, alpha = 0.3, figsize = (14,8), diagonal = 'kde');
import seaborn as sns
sns.heatmap(data.corr())
# ### Question 3
# *Are there any pairs of features which exhibit some degree of correlation? Does this confirm or deny your suspicions about the relevance of the feature you attempted to predict? How is the data for those features distributed?*
# **Hint:** Is the data normally distributed? Where do most of the data points lie?
# **Answer:**
#
# Among all these features, "Milk", "Grocery", and "Detergents_Paper" exhibit some degree of correlation with each other; "Grocery" and "Detergents_Paper" in particular show a strong correlation. The scatter plots for these pairs tend to show a roughly linear relationship. This result confirms the relevance of the features predicted above.
#
# The data for these features are distributed in a highly skewed pattern. Most of the data lie at lower values, and few of them lie in the long tail at higher values.
# ## Data Preprocessing
# In this section, you will preprocess the data to create a better representation of customers by performing a scaling on the data and detecting (and optionally removing) outliers. Preprocessing data is often a critical step in assuring that results you obtain from your analysis are significant and meaningful.
# ### Implementation: Feature Scaling
# If data is not normally distributed, especially if the mean and median vary significantly (indicating a large skew), it is most [often appropriate](http://econbrowser.com/archives/2014/02/use-of-logarithms-in-economics) to apply a non-linear scaling — particularly for financial data. One way to achieve this scaling is by using a [Box-Cox test](http://scipy.github.io/devdocs/generated/scipy.stats.boxcox.html), which calculates the best power transformation of the data that reduces skewness. A simpler approach which can work in most cases would be applying the natural logarithm.
#
# In the code block below, you will need to implement the following:
# - Assign a copy of the data to `log_data` after applying a logarithm scaling. Use the `np.log` function for this.
# - Assign a copy of the sample data to `log_samples` after applying a logarithm scaling. Again, use `np.log`.
# +
mpl.rcParams.update(mpl.rcParamsDefault)
# %matplotlib inline
# TODO: Scale the data using the natural logarithm
log_data = np.log(data)
# TODO: Scale the sample data using the natural logarithm
log_samples = np.log(samples)
# Produce a scatter matrix for each pair of newly-transformed features
pd.scatter_matrix(log_data, alpha = 0.3, figsize = (14,8), diagonal = 'kde');
# -
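# The natural log used above is the simpler route; the Box-Cox transform mentioned earlier fits the power parameter itself. A sketch on synthetic log-normal data (not the customer set), using `scipy.stats.boxcox`:

```python
import numpy as np
from scipy import stats

rng = np.random.RandomState(0)
skewed = rng.lognormal(mean=3.0, sigma=1.0, size=500)  # strictly positive, right-skewed

transformed, lmbda = stats.boxcox(skewed)  # lmbda is the fitted power parameter
print(lmbda)                    # near 0 for log-normal data (Box-Cox reduces to the natural log)
print(stats.skew(transformed))  # far smaller in magnitude than stats.skew(skewed)
```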
# ### Observation
# After applying a natural logarithm scaling to the data, the distribution of each feature should appear much more normal. For any pairs of features you may have identified earlier as being correlated, observe here whether that correlation is still present (and whether it is now stronger or weaker than before).
#
# Run the code below to see how the sample data has changed after having the natural logarithm applied to it.
# Display the log-transformed sample data
display(log_samples)
# ### Implementation: Outlier Detection
# Detecting outliers in the data is extremely important in the data preprocessing step of any analysis. The presence of outliers can often skew results which take into consideration these data points. There are many "rules of thumb" for what constitutes an outlier in a dataset. Here, we will use [Tukey's Method for identifying outliers](http://datapigtechnologies.com/blog/index.php/highlighting-outliers-in-your-data-with-the-tukey-method/): An *outlier step* is calculated as 1.5 times the interquartile range (IQR). A data point with a feature that is beyond an outlier step outside of the IQR for that feature is considered abnormal.
#
# In the code block below, you will need to implement the following:
# - Assign the value of the 25th percentile for the given feature to `Q1`. Use `np.percentile` for this.
# - Assign the value of the 75th percentile for the given feature to `Q3`. Again, use `np.percentile`.
# - Assign the calculation of an outlier step for the given feature to `step`.
# - Optionally remove data points from the dataset by adding indices to the `outliers` list.
#
# **NOTE:** If you choose to remove any outliers, ensure that the sample data does not contain any of these points!
# Once you have performed this implementation, the dataset will be stored in the variable `good_data`.
# +
# For each feature find the data points with extreme high or low values
for feature in log_data.keys():
    # TODO: Calculate Q1 (25th percentile of the data) for the given feature
    Q1 = np.percentile(log_data[feature], 25)
    # TODO: Calculate Q3 (75th percentile of the data) for the given feature
    Q3 = np.percentile(log_data[feature], 75)
    # TODO: Use the interquartile range to calculate an outlier step (1.5 times the interquartile range)
    step = 1.5 * (Q3 - Q1)
    # Display the outliers
    print "Data points considered outliers for the feature '{}':".format(feature)
    display(log_data[~((log_data[feature] >= Q1 - step) & (log_data[feature] <= Q3 + step))])
# OPTIONAL: Select the indices for data points you wish to remove
outliers = [65, 66, 75, 128, 154]
# Remove the outliers, if any were specified
good_data = log_data.drop(log_data.index[outliers]).reset_index(drop = True)
# -
# ### Question 4
# *Are there any data points considered outliers for more than one feature? Should these data points be removed from the dataset? If any data points were added to the `outliers` list to be removed, explain why.*
# **Answer:**
#
# Yes, there are 5 data points considered outliers for more than one feature: [65, 66, 75, 128, 154]. These data points should be removed from the dataset because they are outliers in more than one feature and are not good representatives of the normal data. Removing them leaves a cleaner distribution that is much easier to work with in the later steps.
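# The "more than one feature" claim can be checked programmatically rather than by eye. A sketch of the counting idea on synthetic data, using the same Tukey step as above (the real notebook would run this over `log_data`; the column names and planted row here are made up):

```python
import numpy as np
import pandas as pd

rng = np.random.RandomState(0)
df = pd.DataFrame(rng.normal(size=(100, 3)), columns=['a', 'b', 'c'])
df.iloc[5] = 10.0  # plant a row that is extreme in every feature

counts = pd.Series(0, index=df.index)
for feature in df.columns:
    q1, q3 = np.percentile(df[feature], [25, 75])
    step = 1.5 * (q3 - q1)
    flagged = (df[feature] < q1 - step) | (df[feature] > q3 + step)
    counts[flagged] += 1  # tally how many features flag each row

# Rows flagged as outliers in more than one feature
print(counts[counts > 1].index.tolist())
```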
# ## Feature Transformation
# In this section you will use principal component analysis (PCA) to draw conclusions about the underlying structure of the wholesale customer data. Since using PCA on a dataset calculates the dimensions which best maximize variance, we will find which compound combinations of features best describe customers.
# ### Implementation: PCA
#
# Now that the data has been scaled to a more normal distribution and has had any necessary outliers removed, we can now apply PCA to the `good_data` to discover which dimensions about the data best maximize the variance of features involved. In addition to finding these dimensions, PCA will also report the *explained variance ratio* of each dimension — how much variance within the data is explained by that dimension alone. Note that a component (dimension) from PCA can be considered a new "feature" of the space, however it is a composition of the original features present in the data.
#
# In the code block below, you will need to implement the following:
# - Import `sklearn.decomposition.PCA` and assign the results of fitting PCA in six dimensions with `good_data` to `pca`.
# - Apply a PCA transformation of the sample log-data `log_samples` using `pca.transform`, and assign the results to `pca_samples`.
# +
from sklearn.decomposition import PCA
# TODO: Apply PCA to the good data with the same number of dimensions as features
pca = PCA()
pca.fit(good_data)
# TODO: Apply a PCA transformation to the sample log-data
pca_samples = pca.transform(log_samples)
# Generate PCA results plot
pca_results = rs.pca_results(good_data, pca)
# -
# ### Question 5
# *How much variance in the data is explained* ***in total*** *by the first and second principal component? What about the first four principal components? Using the visualization provided above, discuss what the first four dimensions best represent in terms of customer spending.*
# **Hint:** A positive increase in a specific dimension corresponds with an *increase* of the *positive-weighted* features and a *decrease* of the *negative-weighted* features. The rate of increase or decrease is based on the individual feature weights.
# **Answer:**
#
# The first principal component explains 44.30% of the variance and the second explains 26.38%; in total these two principal components explain 70.68% of the variance.
#
# The third principal component explains 12.31% of the variance and the fourth explains 10.12%; in total the first four principal components explain 93.11% of the variance.
#
# Here 0.4 is used as a threshold for judging the importance of the feature weights. The first dimension represents customers spending mostly on Milk, Grocery, and Detergents_Paper; in fact, it is most correlated with spending on Detergents_Paper. As discussed above, these three categories have a linear correlation with each other. The second dimension represents customers spending mostly on Fresh, Frozen, and Delicatessen. The third dimension represents customers mainly spending on Delicatessen while disfavoring Fresh. The fourth dimension represents customers mostly spending on Frozen and not on Delicatessen.
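# The cumulative percentages quoted above come straight from `pca.explained_variance_ratio_`. A sketch on synthetic data standing in for `good_data` (the per-feature scales are made up):

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.RandomState(0)
# Six features with very different variances, so the first components dominate
X = rng.normal(size=(200, 6)) * np.array([5.0, 3.0, 2.0, 1.5, 1.0, 0.5])

pca = PCA().fit(X)
cum = np.cumsum(pca.explained_variance_ratio_)
print(cum[1])  # total variance explained by the first two components
print(cum[3])  # ... by the first four; cum[-1] is 1.0 by construction
```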
# ### Observation
# Run the code below to see how the log-transformed sample data has changed after having a PCA transformation applied to it in six dimensions. Observe the numerical value for the first four dimensions of the sample points. Consider if this is consistent with your initial interpretation of the sample points.
# Display sample log-data after having a PCA transformation applied
display(pd.DataFrame(np.round(pca_samples, 4), columns = pca_results.index.values))
# ### Implementation: Dimensionality Reduction
# When using principal component analysis, one of the main goals is to reduce the dimensionality of the data — in effect, reducing the complexity of the problem. Dimensionality reduction comes at a cost: Fewer dimensions used implies less of the total variance in the data is being explained. Because of this, the *cumulative explained variance ratio* is extremely important for knowing how many dimensions are necessary for the problem. Additionally, if a significant amount of variance is explained by only two or three dimensions, the reduced data can be visualized afterwards.
#
# In the code block below, you will need to implement the following:
# - Assign the results of fitting PCA in two dimensions with `good_data` to `pca`.
# - Apply a PCA transformation of `good_data` using `pca.transform`, and assign the results to `reduced_data`.
# - Apply a PCA transformation of the sample log-data `log_samples` using `pca.transform`, and assign the results to `pca_samples`.
# +
# TODO: Fit PCA to the good data using only two dimensions
pca = PCA(n_components=2)
pca.fit(good_data)
# TODO: Apply a PCA transformation to the good data
reduced_data = pca.transform(good_data)
# TODO: Apply a PCA transformation to the sample log-data
pca_samples = pca.transform(log_samples)
# Create a DataFrame for the reduced data
reduced_data = pd.DataFrame(reduced_data, columns = ['Dimension 1', 'Dimension 2'])
# -
# ### Observation
# Run the code below to see how the log-transformed sample data has changed after having a PCA transformation applied to it using only two dimensions. Observe how the values for the first two dimensions remains unchanged when compared to a PCA transformation in six dimensions.
# Display sample log-data after applying PCA transformation in two dimensions
display(pd.DataFrame(np.round(pca_samples, 4), columns = ['Dimension 1', 'Dimension 2']))
# ## Clustering
#
# In this section, you will choose to use either a K-Means clustering algorithm or a Gaussian Mixture Model clustering algorithm to identify the various customer segments hidden in the data. You will then recover specific data points from the clusters to understand their significance by transforming them back into their original dimension and scale.
# ### Question 6
# *What are the advantages to using a K-Means clustering algorithm? What are the advantages to using a Gaussian Mixture Model clustering algorithm? Given your observations about the wholesale customer data so far, which of the two algorithms will you use and why?*
# **Answer:**
#
# A K-Means clustering algorithm is fast to run, and as a hard clustering algorithm it produces clean, tight clusters with straight boundaries. Its disadvantages include a tendency to get stuck in local minima, the difficulty of choosing the value of K, and poor performance on non-globular clusters.
#
# A Gaussian Mixture Model is fit with the Expectation-Maximization algorithm. EM monotonically increases the likelihood, so it does not diverge and in practice converges better than K-Means. It is also a form of soft clustering: it does not produce clean-cut clusters, but it provides much more structural information. Compared to K-Means, a Gaussian Mixture Model is slower, since it models the distribution of the data: it must estimate the covariance, mean, and prior probability of each component, and assign each point a probability of belonging to each cluster.
#
# For this wholesale customer data, since the data are not intrinsically divided into clean, tight categories, I think a Gaussian Mixture Model fits this project better. Furthermore, the dataset is relatively small, so it shouldn't be too costly to run.
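# The practical difference between the two algorithms can be seen in a small sketch: K-Means returns only hard labels, while a Gaussian Mixture Model also returns a membership probability per cluster. (The blob data below is a hypothetical stand-in for `reduced_data`; modern scikit-learn exposes the model as `GaussianMixture`.)

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.mixture import GaussianMixture

# Two overlapping 2-D blobs, a hypothetical stand-in for reduced_data
rng = np.random.RandomState(42)
X = np.vstack([rng.normal(0, 1, (100, 2)), rng.normal(3, 1, (100, 2))])

# Hard assignments from K-Means
km_labels = KMeans(n_clusters=2, n_init=10, random_state=42).fit_predict(X)

# Soft assignments from a Gaussian Mixture Model: each row sums to 1
gmm = GaussianMixture(n_components=2, random_state=42).fit(X)
probs = gmm.predict_proba(X)
print(km_labels[:5], probs[:2])
```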
# ### Implementation: Creating Clusters
# Depending on the problem, the number of clusters that you expect to be in the data may already be known. When the number of clusters is not known *a priori*, there is no guarantee that a given number of clusters best segments the data, since it is unclear what structure exists in the data — if any. However, we can quantify the "goodness" of a clustering by calculating each data point's *silhouette coefficient*. The [silhouette coefficient](http://scikit-learn.org/stable/modules/generated/sklearn.metrics.silhouette_score.html) for a data point measures how similar it is to its assigned cluster from -1 (dissimilar) to 1 (similar). Calculating the *mean* silhouette coefficient provides for a simple scoring method of a given clustering.
#
# In the code block below, you will need to implement the following:
# - Fit a clustering algorithm to the `reduced_data` and assign it to `clusterer`.
# - Predict the cluster for each data point in `reduced_data` using `clusterer.predict` and assign them to `preds`.
# - Find the cluster centers using the algorithm's respective attribute and assign them to `centers`.
# - Predict the cluster for each sample data point in `pca_samples` and assign them to `sample_preds`.
# - Import sklearn.metrics.silhouette_score and calculate the silhouette score of `reduced_data` against `preds`.
# - Assign the silhouette score to `score` and print the result.
# +
from sklearn.mixture import GaussianMixture  # sklearn.mixture.GMM was removed; GaussianMixture is its replacement
from sklearn.metrics import silhouette_score

for i in [6, 5, 4, 3, 2]:
    # TODO: Apply your clustering algorithm of choice to the reduced data
    gmm = GaussianMixture(n_components=i, covariance_type='diag', random_state=seed)
    clusterer = gmm.fit(reduced_data)

    # TODO: Predict the cluster for each data point
    preds = clusterer.predict(reduced_data)

    # TODO: Find the cluster centers
    centers = clusterer.means_

    # TODO: Predict the cluster for each transformed sample data point
    sample_preds = clusterer.predict(pca_samples)

    # TODO: Calculate the mean silhouette coefficient for the number of clusters chosen
    score = silhouette_score(reduced_data, preds, metric='euclidean', random_state=seed)
    print("For {:d} clusters, the mean silhouette coefficient is {:.3f}.".format(i, score))
# -
# ### Question 7
# *Report the silhouette score for several cluster numbers you tried. Of these, which number of clusters has the best silhouette score?*
# **Answer:**
#
# The silhouette scores for various cluster numbers are reported below:
#
# | Cluster Number | SilhouetteScore |
# |----------------|-----------------|
# | 2 | 0.412 |
# | 3 | 0.374 |
# | 4 | 0.331 |
# | 5 | 0.281 |
# | 6 | 0.278 |
#
# When 2 clusters are divided, the silhouette score is the highest.
# ### Cluster Visualization
# Once you've chosen the optimal number of clusters for your clustering algorithm using the scoring metric above, you can now visualize the results by executing the code block below. Note that, for experimentation purposes, you are welcome to adjust the number of clusters for your clustering algorithm to see various visualizations. The final visualization provided should, however, correspond with the optimal number of clusters.
# Display the results of the clustering from implementation
rs.cluster_results(reduced_data, preds, centers, pca_samples)
# ### Implementation: Data Recovery
# Each cluster present in the visualization above has a central point. These centers (or means) are not specifically data points from the data, but rather the *averages* of all the data points predicted in the respective clusters. For the problem of creating customer segments, a cluster's center point corresponds to *the average customer of that segment*. Since the data is currently reduced in dimension and scaled by a logarithm, we can recover the representative customer spending from these data points by applying the inverse transformations.
#
# In the code block below, you will need to implement the following:
# - Apply the inverse transform to `centers` using `pca.inverse_transform` and assign the new centers to `log_centers`.
# - Apply the inverse function of `np.log` to `log_centers` using `np.exp` and assign the true centers to `true_centers`.
#
# +
# TODO: Inverse transform the centers
log_centers = pca.inverse_transform(centers)
# TODO: Exponentiate the centers
true_centers = np.exp(log_centers)
# Display the true centers
segments = ['Segment {}'.format(i) for i in range(0,len(centers))]
true_centers = pd.DataFrame(np.round(true_centers), columns = data.keys())
true_centers.index = segments
display(true_centers)
true_centers.plot(kind = 'bar', figsize = (10, 5))
# -
# ### Question 8
# Consider the total purchase cost of each product category for the representative data points above, and reference the statistical description of the dataset at the beginning of this project. *What set of establishments could each of the customer segments represent?*
# **Hint:** A customer who is assigned to `'Cluster X'` should best identify with the establishments represented by the feature set of `'Segment X'`.
# **Answer:**
#
# Since we know the data distribution is highly skewed, the median, rather than the mean, will be used as the main parameter for describing the data. For "Segment 0", the costs for "Fresh" and "Frozen" are higher than the medians; this segment may represent establishments such as restaurants and cafes. For "Segment 1", the costs for "Milk", "Grocery", and "Detergents_Paper" are higher than the medians, and the cost for "Delicatessen" is close to the median; this segment may represent retailers.
# ### Question 9
# *For each sample point, which customer segment from* ***Question 8*** *best represents it? Are the predictions for each sample point consistent with this?*
#
# Run the code block below to find which cluster each sample point is predicted to be.
# Display the predictions
for i, pred in enumerate(sample_preds):
    print("Sample point", i, "predicted to be in Cluster", pred)
# **Answer:**
#
# Sample 0 is predicted to be in Segment 1, which may represent retailers. As discussed above, Sample 0 has higher costs than the median in every category except "Fresh". This matches Segment 1, in which the costs in most categories are high except "Fresh" and "Frozen".
#
# Sample 1 is predicted to be in Segment 0, which may represent restaurants/cafes. Sample 1 has a very high cost only on "Fresh"; the costs on "Frozen" and "Delicatessen" are just above the medians. This matches Segment 0 exactly.
#
# Sample 2 is predicted to be in Segment 1, which may represent retailers. Sample 2 has higher costs on "Fresh", "Grocery", "Detergents_Paper", and "Delicatessen". Even though its cost on "Fresh" is high, looking at all the other features and the overall pattern, this sample behaves like Sample 0, and its characteristics fit Segment 1.
#
# The predictions for each sample point are consistent with the initial guesses.
# ## Conclusion
# ### Question 10
# *Companies often run [A/B tests](https://en.wikipedia.org/wiki/A/B_testing) when making small changes to their products or services. If the wholesale distributor wanted to change its delivery service from 5 days a week to 3 days a week, how would you use the structure of the data to help them decide on a group of customers to test?*
# **Hint:** Would such a change in the delivery service affect all customers equally? How could the distributor identify who it affects the most?
# **Answer:**
#
# If the wholesale distributor changed its delivery service from 5 days a week to 3 days a week, the restaurant customers would likely be impacted much more than the retailer customers, because restaurants need far more "Fresh" products and thus benefit from more frequent deliveries.
#
# To design an A/B test to see if changing delivery service will cause decline of customers' spending in Segment 0 (restaurants), we can randomly pick a set of samples from Segment 0, and randomly divide the sample set into two groups: Group A with 5 days a week delivery and Group B with 3 days a week delivery. After a specific period, we can collect the sample data of customers' spending, and perform a two-sample Student's t-test. If the null hypothesis is rejected, we can make a conclusion that changing delivery does impact the customers in Segment 0.
#
# If we want to know whether changing the delivery schedule will also impact the customers in Segment 1 (retailers), an independent, separate A/B test can be run on Segment 1. Following the same procedure, a set of samples should be randomly picked from Segment 1 and randomly divided into two groups, and a two-sample Student's t-test used to examine the null hypothesis.
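# The t-test step described above can be sketched as follows; the spending numbers are fabricated purely for illustration, and `scipy.stats.ttest_ind` performs the two-sample test:

```python
import numpy as np
from scipy import stats

# Hypothetical spending totals after the trial period (one value per customer)
rng = np.random.RandomState(0)
group_a = rng.normal(1000, 200, 50)  # Group A: 5 deliveries per week
group_b = rng.normal(900, 200, 50)   # Group B: 3 deliveries per week

# Welch's two-sample t-test (does not assume equal variances)
t_stat, p_value = stats.ttest_ind(group_a, group_b, equal_var=False)
print(t_stat, p_value)
if p_value < 0.05:
    print("Reject the null: the delivery change affected spending")
```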
# ### Question 11
# *Assume the wholesale distributor wanted to predict a new feature for each customer based on the purchasing information available. How could the wholesale distributor use the structure of the data to assist a supervised learning analysis?*
# **Hint:** What other input feature could the supervised learner use besides the six product features to help make a prediction?
# **Answer:**
#
# The segmentation information can be used as a new binary feature (or a categorical feature if there are more than 2 segments) in a supervised learning analysis. Adding this segmentation feature gives the learner a strong signal and can therefore improve its performance.
#
# Another candidate feature is the distance between an instance and its cluster center. This is like using distance as a weight in a k-nearest-neighbors algorithm, except here the distance measures how similar the instance is to the typical customer representative of its segment.
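# A minimal sketch of building both proposed features (cluster label and distance to the assigned cluster center) is below; the data is a hypothetical stand-in for the reduced data, and K-Means is used only to produce centers:

```python
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical stand-in for the 2-D reduced data
rng = np.random.RandomState(0)
X = rng.rand(100, 2)

km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
labels = km.predict(X)

# Distance from each point to its own cluster center
dists = np.linalg.norm(X - km.cluster_centers_[labels], axis=1)

# Append both as extra columns for a downstream supervised learner
X_augmented = np.column_stack([X, labels, dists])
print(X_augmented.shape)
```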
# ### Visualizing Underlying Distributions
#
# At the beginning of this project, it was discussed that the `'Channel'` and `'Region'` features would be excluded from the dataset so that the customer product categories were emphasized in the analysis. By reintroducing the `'Channel'` feature to the dataset, an interesting structure emerges when considering the same PCA dimensionality reduction applied earlier on to the original dataset.
#
# Run the code block below to see how each data point is labeled either `'HoReCa'` (Hotel/Restaurant/Cafe) or `'Retail'` in the reduced space. In addition, you will find the sample points are circled in the plot, which identifies their labeling.
# Display the clustering results based on 'Channel' data
rs.channel_results(reduced_data, outliers, pca_samples)
# ### Question 12
# *How well does the clustering algorithm and number of clusters you've chosen compare to this underlying distribution of Hotel/Restaurant/Cafe customers to Retailer customers? Are there customer segments that would be classified as purely 'Retailers' or 'Hotels/Restaurants/Cafes' by this distribution? Would you consider these classifications as consistent with your previous definition of the customer segments?*
# **Answer:**
#
# Looking at this distribution graph, two clusters exist, and they are not cleanly and tightly divided; some data points are mixed into the other cluster. I chose a soft clustering algorithm and two clusters to analyze the dataset, and these choices match the characteristics of the dataset well.
#
# Since the data points are mixed together, no customer segment can be classified as purely "Retailers" or "Hotels/Restaurants/Cafes". I would consider these classifications consistent with my previous definition of the customer segments.
# > **Note**: Once you have completed all of the code implementations and successfully answered each question above, you may finalize your work by exporting the iPython Notebook as an HTML document. You can do this by using the menu above and navigating to
# **File -> Download as -> HTML (.html)**. Include the finished document along with this notebook as your submission.
| customer_segments.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="SdVGS8oSjBkc"
# ## EE 461P: Data Science Principles
# ### Assignment 2
# ### Total points: 60
# ### Due: Thursday, Feb 25, 2021, submitted via Canvas by 11:59 pm
#
# Your homework should be written in a **Jupyter notebook**. You may work in groups of two if you wish. Only one student per team needs to submit the assignment on Canvas. But be sure to include name and UT eID for both students. Homework groups will be created and managed through Canvas, so please do not arbitrarily change your homework group. If you do change, let the TAs know.
#
# Also, please make sure your code runs and the graphics (and anything else) are displayed in your notebook before submitting. (%matplotlib inline)
#
# ### Name(s) and EID(s):
# 1.
# 2.
#
# ### Homework group No.:
# + [markdown] id="eFAfdc7iTrO7"
# # Question 1 - Cross Validation (15 pts)
#
# Use the given code below to load the dataset from "data.csv". The dataset contains eight attributes (or features, denoted by X1...X8) and two responses (or outcomes, denoted by y1 and y2). This dataset was created for energy analysis using 12 different building shapes. The buildings differ with respect to the glazing area, the glazing area distribution, and the orientation, amongst other parameters. By simulating various settings as functions of the afore-mentioned characteristics, in total there are 768 building shapes. For more information on the dataset, refer to this [link](https://archive.ics.uci.edu/ml/datasets/Energy+efficiency). The aim is to use the eight features to predict one of the two responses. In this question, we will predict only the y1 response.
#
# Specifically:
# * X1 - Relative Compactness
# * X2 - Surface Area
# * X3 - Wall Area
# * X4 - Roof Area
# * X5 - Overall Height
# * X6 - Orientation
# * X7 - Glazing Area
# * X8 - Glazing Area Distribution
# * y1 - Heating Load
# * y2 - Cooling Load
#
#
#
#
#
# + id="PfGn68OdWQC_"
import pandas as pd
import numpy as np
import sklearn
data = pd.read_csv("data_qn1.csv",delimiter=",")
y = data["Y1"]
X = data.drop(columns=["Y1","Y2"])
# + [markdown] id="RE_2coqAWQZZ"
#
# We will be analyzing the following scenarios for the given dataset.
#
# * Compare hold-out(80:20) train-test split cross validation and K-Fold Cross Validation
# * What happens when the number of folds increase for K-Fold Cross Validation?
# * Variance in the prediction for each case - Hold Out Validation and K-Fold Validation?
#
# + [markdown] id="te7QQxe4XEkh"
#
# a) [**3 pts**] Split the original dataset (X, y) into an 80:20 [train_test_split](https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.train_test_split.html), use linear regression to fit the model on the training data, and evaluate the model using the test data. Report the root mean squared error (RMSE) on the test data for five different runs; make sure to store the RMSE values, as we will use them later to plot in part (d).
#
# b) [**3 pts**] Now, we will use [K-Fold](https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.KFold.html) validation from sklearn to split the original data (X, y) into 5 folds. For each fold, use linear regression to fit the model on the training data and evaluate the model on the test data. Compute the average RMSE of the 5 folds, and repeat the same for five different random splits of K-Fold. You can refer to the following line of code to perform the split. Make sure to vary the random_state value for the five different runs. Record the RMSE values; we will use them later in part (d).
#
# ```
# kf = KFold(n_splits=5,random_state=random_state,shuffle=True)
# ```
#
# c) [**3 pts**] Repeat the same experiment as in part (b), varying the number of folds as k = 100 and 768, and record the RMSE for each value of k.
#
# d) [**3 pts**] Now, we will plot the box plot of the RMSE values obtained from parts a), b) and c) together in a single figure. To reiterate: hold-out validation (from part a) and k = 5, 100 and 768 (parts b and c), each with 5 values of RMSE for the 5 random states of the k-fold split. You can refer [here](https://matplotlib.org/stable/gallery/pyplots/boxplot_demo_pyplot.html#sphx-glr-gallery-pyplots-boxplot-demo-pyplot-py) for how to plot boxplots. Boxplots are used to understand the variance in the RMSE values. For more information on box plots, refer to this [link](https://en.wikipedia.org/wiki/Box_plot).
#
# e) [**3 pts**] Using the boxplot answer the following questions,
#
# * What do you observe in the variation of the RMSE for hold-out validation versus k-fold validation? Explain, with reasoning, which one you would choose to evaluate the model.
#
# * What happens when the number of folds increases to larger values?
#
#
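# A sketch of parts (a) and (b) under stated assumptions (synthetic data via `make_regression` stands in for the real `data_qn1.csv`):

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import KFold, train_test_split

# Hypothetical stand-in for (X, y) loaded from data_qn1.csv
X, y = make_regression(n_samples=768, n_features=8, noise=10.0, random_state=0)

# (a) hold-out 80:20 validation
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
holdout_pred = LinearRegression().fit(X_tr, y_tr).predict(X_te)
holdout_rmse = np.sqrt(mean_squared_error(y_te, holdout_pred))

# (b) 5-fold cross validation, RMSE averaged over folds
kf = KFold(n_splits=5, random_state=0, shuffle=True)
fold_rmses = []
for train_idx, test_idx in kf.split(X):
    model = LinearRegression().fit(X[train_idx], y[train_idx])
    preds = model.predict(X[test_idx])
    fold_rmses.append(np.sqrt(mean_squared_error(y[test_idx], preds)))
print(holdout_rmse, np.mean(fold_rmses))
```

# Repeating each procedure over several random states and box-plotting the collected RMSEs gives the comparison asked for in part (d).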
# + id="2NlllrWv2JMJ"
# + [markdown] id="rrn3rt3wzMv7"
# # Question 2 - Bias Variance TradeOff (10 pts)
#
# a) [**5pts**] What is the difference between the notion of model bias (for example, a model that predicts age as a function of some other features) and the bias of a point estimator (for example, the mean age of students estimated from a sample of age values)?
#
# b) [**5pts**] Assume you have a model trained to solve a problem. How do you expect (i) the bias and (ii) the variance to change if you used a larger training dataset, but no other process changed?
#
#
#
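# One way to make part (b)(ii) concrete is a small simulation: refit the same model on many training sets of a given size and measure the spread of its prediction at a fixed point. This sketch uses a synthetic linear-plus-noise target; all numbers are illustrative:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.RandomState(0)

def prediction_variance(n_train, n_repeats=200):
    """Variance of the model's prediction at x=0.5 over refits on fresh training sets."""
    preds = []
    for _ in range(n_repeats):
        X = rng.uniform(0, 1, (n_train, 1))
        y = 2 * X[:, 0] + rng.normal(0, 0.5, n_train)  # noisy linear target
        model = LinearRegression().fit(X, y)
        preds.append(model.predict([[0.5]])[0])
    return np.var(preds)

small, large = prediction_variance(20), prediction_variance(500)
print(small, large)  # the variance shrinks as the training set grows
```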
# + id="VaPpQFaiB39a"
# + [markdown] id="xT0qzWQHB4RT"
# # Question 3: Ridge and Lasso Regression (15 points)
#
# In this question you will explore the application of Lasso and Ridge regression using sklearn package in Python. Use the following code to load the train and test data.
# + id="twwroG3-P4fi"
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from tqdm import tqdm
import seaborn as sns
from sklearn.preprocessing import LabelEncoder
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LinearRegression
import warnings
from sklearn.preprocessing import StandardScaler
warnings.filterwarnings('ignore')
df = pd.read_csv("data_qn3.csv", index_col=0)
df = df.loc[df['Year']==2014, :]
df = df.drop('Year', axis=1)
df = pd.get_dummies(df, columns=['Status'])
df = df.dropna()
# Creating training and testing dataset
y = df.iloc[:, 0]
X = df.iloc[:, 1:]
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.20, random_state = 42)
# + [markdown] id="hHdoWCdqQDHz"
# ## Question 3.1 (3 points)
# Run Linear regression on the train dataset and print the $R^2$ value using the test dataset.
#
# ## Question 3.2 (6 points)
# Run linear regression using Lasso and determine the value of $\alpha$ that results in the best test set performance. Consider `alphas=10**np.linspace(1,-2,100)*0.5`. Display the best value of $\alpha$ as well as the corresponding $R^2$ score on the test set. Use the following parameters in the Lasso model. Finally, store the best model separately. Also, use the coefficients obtained to select the [columns with non-zero weights](https://stackoverflow.com/questions/62323713/selecting-columns-of-dataframe-where-lasso-coefficient-is-nonzero) and use them to create `X_train_lasso` and `X_test_lasso`. Show how many non-zero columns are present. Plot the coefficients of the Lasso model for different alpha values; you can use a log scale for the alphas on the x-axis.
#
# copy_X=True
# normalize=True # Normalizes data using StandardScaler()
# random_state=42
#
# ## Question 3.3 (6 points)
# Run linear regression using Ridge and determine the value of $\alpha$ that results in the best test set performance. Consider `alphas=10**np.linspace(1,-2,100)*0.5`. Display the best value of $\alpha$ as well as the corresponding $R^2$ score on the test set. Use the following parameters in the Ridge model. Plot the coefficients of the Ridge model for different alpha values; you can use a log scale for the alphas on the x-axis.
#
# copy_X=True
# normalize=True # Normalizes data using StandardScaler()
# random_state=42
#
#
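# A hedged sketch of the alpha sweep for Question 3.2 (synthetic data stands in for the real `X_train`/`X_test`; since `normalize=True` was removed from recent scikit-learn, a `StandardScaler` pipeline is used instead):

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import Lasso
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Hypothetical stand-in for the data built from data_qn3.csv
X, y = make_regression(n_samples=200, n_features=20, n_informative=5, noise=5.0, random_state=42)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=42)

best_alpha, best_r2 = None, -np.inf
for alpha in 10**np.linspace(1, -2, 100) * 0.5:
    model = make_pipeline(StandardScaler(), Lasso(alpha=alpha, random_state=42))
    r2 = model.fit(X_tr, y_tr).score(X_te, y_te)
    if r2 > best_r2:
        best_alpha, best_r2 = alpha, r2

# Refit at the best alpha and find the columns with non-zero weights
best_model = make_pipeline(StandardScaler(), Lasso(alpha=best_alpha, random_state=42)).fit(X_tr, y_tr)
nonzero_cols = np.flatnonzero(best_model[-1].coef_)
print(best_alpha, best_r2, len(nonzero_cols))
```

# The same loop with `Ridge` in place of `Lasso` answers Question 3.3; Ridge shrinks coefficients without zeroing them out.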
# + id="xzD571z_3hSz"
# + [markdown] id="DzPjp2hd9qkH"
# # Question 4: Polynomial Feature Transformation (20 points)
# Often, you will find that transforming features into higher degrees will yield better models. In this question, we will see how to do non-linear regression using a linear model by using polynomial feature transformation. You will need to build only one plot for this entire question. So, plot everything on the same plot. Let us now consider the following dataset:
#
#
# + id="A_jfDHNpL-0i"
import numpy as np
import matplotlib.pyplot as plt
from sklearn.linear_model import Ridge
from sklearn.preprocessing import PolynomialFeatures
from sklearn.pipeline import make_pipeline
np.random.seed(2)
def h(x):
""" function to approximate by polynomial interpolation"""
return np.sin(x) + np.log(x)
# generate points used to plot
x_plot = np.linspace(2, 12, 100)
# generate points and keep a subset of them
x = np.linspace(2, 12, 100)
rng = np.random.RandomState(20)
rng.shuffle(x)
x_train = np.sort(x[:50])
x_test = np.sort(x[50:80])
# create matrix versions of these arrays
x_train = x_train[:, np.newaxis]
x_test = x_test[:,np.newaxis]
x_plot = x_plot[:, np.newaxis]
y_train = h(x_train) + np.random.normal(0, 0.5, size=x_train.shape)
y_test = h(x_test)+ np.random.normal(0, 0.5, size=x_test.shape)
# + [markdown] id="YoQFk35CL-Bl"
# 1. Build a scatter plot with `s=30` and `marker='o'` using x_train and y_train. Also, build a line plot using `x_plot` and `h(x_plot)` to show the trend. (5 pts)
# 2. Transform `x_train` and `x_test` using [PolynomialFeatures](https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.PolynomialFeatures.html) with degrees 1,3,5,7,9,11 and save these transformed datasets. For example, if an input sample is two dimensional and of the form [$a$, $b$], the degree-2 polynomial features are [$1$, $a$, $b$, $a^2$, $ab$, $b^2$]. (5 pts)
# 3. Use ridge regression with default parameters on each of these train datasets. Now, calculate the predicted target values for the fitted model using `.predict(X_plot)` and show line plots using `x_plot` and the predicted target values. Also, calculate the training MSE and test MSE for each of them using the model. (5 pts)
# 4. Report your observations from the plot w.r.t how the evaluation metrics change on increasing the `degree` parameter. What do you think will happen if we keep on increasing the value of `degree`? (5 pts)
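# The steps above can be sketched end-to-end as follows (the data here is regenerated from the same target function so the block stands alone; plotting is omitted):

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.metrics import mean_squared_error
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures

# Regenerate noisy samples of h(x) = sin(x) + log(x) on [2, 12]
rng = np.random.RandomState(20)
x = np.sort(rng.uniform(2, 12, 50))[:, np.newaxis]
y = (np.sin(x) + np.log(x)).ravel() + rng.normal(0, 0.5, 50)

train_mses = []
for degree in [1, 3, 5, 7, 9, 11]:
    # Polynomial feature transform followed by default Ridge regression
    model = make_pipeline(PolynomialFeatures(degree), Ridge())
    model.fit(x, y)
    train_mses.append(mean_squared_error(y, model.predict(x)))
print(train_mses)  # training MSE generally shrinks as degree grows
```

# Evaluating the same fitted pipelines on a held-out test set shows the test MSE eventually rising again at high degrees, which is the overfitting behavior part 4 asks about.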
# + id="UMJ_ZadzRGkG"
| HW2Qns.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + id="TWU7zxhGVrxS" colab_type="code" outputId="31bc9d14-5d59-4668-b9ce-e96dcbe39bb2" colab={"base_uri": "https://localhost:8080/", "height": 55}
from google.colab import drive
drive.mount('/content/drive/')
# + id="kbNX2bOPVxIW" colab_type="code" outputId="46e3739d-bd59-4c8d-8864-b160d859e620" colab={"base_uri": "https://localhost:8080/", "height": 217}
# Import required Libraries
# !pip install fastparquet
import math
import pandas_datareader as web
import numpy as np
import pandas as pd
from sklearn.preprocessing import MinMaxScaler
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, LSTM
import matplotlib.pyplot as plt
import datetime
# + id="S7CcdnvpWCAC" colab_type="code" colab={}
# + id="0_BwnKjgWGKf" colab_type="code" colab={}
stocks = pd.read_parquet('https://github.com/alik604/CMPT-419/blob/master/data/wsb.daily.joined.parquet.gz?raw=true', engine="fastparquet")
# + id="9ARCPWbzWqhO" colab_type="code" outputId="96d2f518-d3a0-4cf0-c384-60a1453ef7f3" colab={"base_uri": "https://localhost:8080/", "height": 224}
stocks.head(5)
stocks.tail(5)
# + id="ABx22CiAXalK" colab_type="code" colab={}
# Drop created date
data = stocks.drop(['created_utc', 'AAPL_CPrc','AMZN_CPrc','BA_CPrc', 'SPY_CPrc', 'TSLA_CPrc' ] , axis=1)
# Closing Price dataset
plotData = stocks.filter(['created_utc','AAPL_CPrc', 'AMZN_CPrc', 'BA_CPrc', 'SPY_CPrc', 'TSLA_CPrc'])
dataCPrc = stocks.filter(['AAPL_CPrc', 'AMZN_CPrc', 'BA_CPrc', 'SPY_CPrc', 'TSLA_CPrc'])
datasetCPrc = dataCPrc.values
# + id="d7BKqI90X45t" colab_type="code" outputId="48d1a67c-b937-4905-a9ac-cd7617d5b3ff" colab={"base_uri": "https://localhost:8080/", "height": 251}
dataset = data.values
print(dataset)
# + id="XdJqirKIX9eP" colab_type="code" outputId="3cf847e2-b6ee-4661-97e4-8d8a6cb35a82" colab={"base_uri": "https://localhost:8080/", "height": 35}
# set training set len
training_data_len = math.ceil(len(dataset) * 0.8)
training_data_len
# + id="EAOTCb4eYXlB" colab_type="code" outputId="a069ac13-afca-437d-d2da-c53fb0b2ffd1" colab={"base_uri": "https://localhost:8080/", "height": 251}
# Scale data
scaler = MinMaxScaler(feature_range=(0,1))
scaled_data = scaler.fit_transform(dataset)
scalerCPrc = MinMaxScaler(feature_range=(0,1))
scaledCPrc_data = scalerCPrc.fit_transform(datasetCPrc)
scaledCPrc_data
# + id="Fo4YosvZYqu0" colab_type="code" colab={}
# Create training set from scaled data
train_data = scaled_data[0:training_data_len, :]
train_y = scaledCPrc_data[0:training_data_len, :]
# + id="tR39LbGWY8lG" colab_type="code" colab={}
# Split data into x_train, y_train
x_train = []
y_train = []
# + id="FJOXMynkZ9L5" colab_type="code" outputId="6bf3cd55-99e3-4118-fee5-ae07ae91d148" colab={"base_uri": "https://localhost:8080/", "height": 107}
for i in range(60, len(train_data)):
x_train.append(train_data[i-60:i, :])
y_train.append(train_y[i:i+1,:][0].tolist())
if(i < 65):
print(train_y[i:i+1,:][0], type(train_y[i:i+1,:][0]))
# + id="ACnTHF1XdJpY" colab_type="code" outputId="4c3f1d62-3765-4043-ce88-8fc119233387" colab={"base_uri": "https://localhost:8080/", "height": 161}
# Convert both x, y training sets to np array
x_train, y_train = np.array(x_train), np.array(y_train)
print(y_train, type(y_train))
# Reshape the data // LSTM network expects 3 dimensional input in the form of
# (number of samples, number of timesteps, number of features)
x_train = np.reshape(x_train, (x_train.shape[0], x_train.shape[1], 33))
x_train.shape
# + id="IKVNbH_GeA8u" colab_type="code" colab={}
# Declare the LSTM model architecture
model = Sequential()
# Add layers , input shape expected is (number of timesteps, number of features)
model.add(LSTM(75, return_sequences=True, input_shape=(x_train.shape[1], 33)))
model.add(LSTM(50, return_sequences=False))
model.add(Dense(25))
model.add(Dense(5))
# + id="rfiNSgYqeHTG" colab_type="code" colab={}
# Compile Model
model.compile(optimizer='adam', loss='mean_squared_error')
# + [markdown] id="AGFe80-GQ08x" colab_type="text"
#
# + id="R_3Q9445eJSt" colab_type="code" outputId="42909065-e3a1-4da0-91d1-8b8f539196d4" colab={"base_uri": "https://localhost:8080/", "height": 1000}
# Train the model
model.fit(x_train, y_train, batch_size=16, epochs=100)
# + id="zMBz4Q63eKu9" colab_type="code" colab={}
# Create the testing data set
test_data = scaled_data[training_data_len - 60: , :]
test_y = scaledCPrc_data[training_data_len-60:,:]
# Create the data sets x_test and y_test
x_test = []
y_test = []
for i in range(60, len(test_data)):
    x_test.append(test_data[i-60:i, :])
    y_test.append(test_y[i:i+1, :][0].tolist())  # use test_y here, not train_y

print(len(x_test))
# + id="R8zY8JdFflgS" colab_type="code" colab={}
# Convert the data into a np array
x_test = np.array(x_test)
# Reshape data into 3 dimensions ( num samples, timesteps, num features )
x_test = np.reshape(x_test, (x_test.shape[0], x_test.shape[1], 33))
# + id="tyGEr_xjfm_n" colab_type="code" colab={}
# Get the models predicted price values
predictions = model.predict(x_test)
# Inverse transform
predictions = scalerCPrc.inverse_transform(predictions)
# + id="-aEJ2RyOfop3" colab_type="code" outputId="839ec772-bfde-42e8-e70c-3908c99ea41f" colab={"base_uri": "https://localhost:8080/", "height": 35}
# Evaluate model with root mean square error (RMSE)
# y_test is still in scaled units, so inverse-transform it before comparing
y_test = scalerCPrc.inverse_transform(np.array(y_test))
rmse = np.sqrt(np.mean((predictions - y_test)**2))
rmse
# + id="3cpsmjVBjF8_" colab_type="code" outputId="f946aefb-a6f2-4068-f5f4-374df03ad5e2" colab={"base_uri": "https://localhost:8080/", "height": 775}
# Set up ploting dataset
plotData = stocks.filter(['created_utc','AAPL_CPrc', 'AMZN_CPrc', 'BA_CPrc', 'SPY_CPrc', 'TSLA_CPrc'])
plotData = plotData.set_index("created_utc")
print(plotData, type(plotData))
# This is so it graphs nicely with dates
AAPL = plotData["AAPL_CPrc"]
AAPL = AAPL.to_frame()
AMZN = plotData["AMZN_CPrc"]
AMZN = AMZN.to_frame()
BA = plotData["BA_CPrc"]
BA = BA.to_frame();
SPY = plotData["SPY_CPrc"]
SPY = SPY.to_frame();
TSLA = plotData["TSLA_CPrc"]
TSLA = TSLA.to_frame()
appleTest = AAPL[0:training_data_len]
amazonTest = AMZN[0:training_data_len]
boeTest = BA[0:training_data_len]
spyTest = SPY[0: training_data_len]
teslaTest = TSLA[0: training_data_len]
appleValid = AAPL[training_data_len:]
amazonValid = AMZN[training_data_len:]
boeValid = BA[training_data_len:]
spyValid = SPY[training_data_len:]
teslaValid = TSLA[training_data_len:]
# print(appleValid)
appleValid['Predictions'] = predictions[:,:1]
amazonValid["Predictions"] = predictions[:,1:2]
boeValid['Predictions'] = predictions[:, 2:3]
spyValid["Predictions"] = predictions[:, 3:4]
teslaValid["Predictions"] = predictions[:, 4:5]
print(len(appleValid), len(predictions))
# + id="T5YFXuL2jXN8" colab_type="code" outputId="7f2fd887-620e-4848-8f45-3c1cd50ee47c" colab={"base_uri": "https://localhost:8080/", "height": 521}
# Visualize data
plt.figure(figsize=(16,8))
plt.title('Prediction with Reddit GloVe vectors')
plt.xlabel('Date', fontsize=18)
plt.ylabel('Close Price USD ($)', fontsize=18)
plt.plot(appleTest)
plt.plot(appleValid)
plt.plot(appleValid['Predictions'])
# plt.plot(amazonTest)
# plt.plot(amazonValid)
# plt.plot(amazonValid['Predictions'])
# plt.plot(boeTest)
# plt.plot(boeValid)
# plt.plot(boeValid['Predictions'])
# plt.plot(spyTest)
# plt.plot(spyValid)
# plt.plot(spyValid['Predictions'])
# plt.plot(teslaTest)
# plt.plot(teslaValid)
# plt.plot(teslaValid['Predictions'])
plt.legend(['Train', 'Validation', 'Prediction'])
plt.show()
# + id="3cbZiMQYmrbR" colab_type="code" colab={}
# Let's train a model without Reddit posts to see how it performs
opening_prices = stocks.filter(['created_utc','AAPL_OPrc', 'AMZN_OPrc', 'BA_OPrc', 'SPY_OPrc', 'TSLA_OPrc'])
opening_prices = opening_prices.set_index('created_utc')
# plotData contains closing prices
# + id="1T3BOippD7Bs" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 269} outputId="8952f591-f01a-41f8-9642-360867aabb10"
opening_dataset = opening_prices.values
closing_dataset = plotData.values
print(opening_dataset)
print(closing_dataset)
# + id="h52vWQxYEqn1" colab_type="code" colab={}
# set training set len
train_data_len = math.ceil(len(opening_dataset) * 0.80)
# + id="Lt1W734XFBpD" colab_type="code" colab={}
openScaler = MinMaxScaler(feature_range=(0,1))
scaled_open_prices = openScaler.fit_transform(opening_dataset)
# + id="qOvvI5bmFLLE" colab_type="code" colab={}
closeScaler = MinMaxScaler(feature_range=(0,1))
scaled_close_prices = closeScaler.fit_transform(closing_dataset)
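# Keeping separate scalers for the open and close prices matters because inverse_transform only undoes the scaling it was fit on; a small sketch:

```python
import numpy as np
from sklearn.preprocessing import MinMaxScaler

prices = np.array([[10.0], [20.0], [30.0]])
sc = MinMaxScaler(feature_range=(0, 1))
scaled = sc.fit_transform(prices)        # [[0.0], [0.5], [1.0]]
restored = sc.inverse_transform(scaled)  # back to the original prices
print(restored.ravel())
```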
# + id="PkBSzN9dFVBn" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 377} outputId="de7db11f-b372-4faa-ebf4-3c3a7d353bcd"
o_train_data = scaled_open_prices[0:train_data_len, :]
c_train_data = scaled_close_prices[0:train_data_len,:]
print(o_train_data, len(o_train_data))
c_train_data
# + id="GGIyaEwwFb5_" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 71} outputId="237d975c-3f8c-41df-ed94-eeec1863b44f"
o_x_train = []
c_y_train = []
print(len(o_x_train))
for i in range(60, len(o_train_data)):
o_x_train.append(o_train_data[i-60:i, :])
c_y_train.append(c_train_data[i,:].tolist())
if(i<62):
print(c_train_data[i,:].tolist())
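# The same sliding-window construction, scaled down to a window of 3 on a toy series so the resulting shapes are easy to check:

```python
import numpy as np

series = np.arange(10).reshape(-1, 1).astype(float)
window = 3
xs, ys = [], []
for i in range(window, len(series)):
    xs.append(series[i - window:i, :])   # previous `window` rows as input
    ys.append(series[i, :])              # current row as target
xs, ys = np.array(xs), np.array(ys)
print(xs.shape, ys.shape)  # (7, 3, 1) (7, 1)
```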
# + id="FhVJ3AoaGAqj" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 35} outputId="e46c1d20-3e2d-4b09-e1ea-591842392c6b"
x_train2, y_train2 = np.array(o_x_train), np.array(c_y_train)
x_train2 = np.reshape(x_train2, (x_train2.shape[0], x_train2.shape[1], 5))
x_train2.shape
# + id="NeOf2hP1HWn2" colab_type="code" colab={}
# Declare the LSTM model architecture
model2 = Sequential()
# Add layers; the expected input shape is (number of timesteps, number of features)
model2.add(LSTM(75, return_sequences=True, input_shape=(x_train2.shape[1], 5)))
model2.add(LSTM(50, return_sequences=False))
model2.add(Dense(25))
model2.add(Dense(5))
# + id="EZIWdGiTHpRP" colab_type="code" colab={}
# Compile Model
model2.compile(optimizer='adam', loss='mean_squared_error')
# + id="saqgNNGNHryQ" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 1000} outputId="e58db65d-2779-43ae-c354-ebcfa834fa6c"
# Train the model
model2.fit(x_train2, y_train2, batch_size=16, epochs=100)
# + id="KiEpTH7MHvOQ" colab_type="code" colab={}
x_test_data = scaled_open_prices[train_data_len - 60:, :]
y_test_data = scaled_close_prices[train_data_len - 60:, :]
x_test2 = []
y_test2 = []
for i in range(60, len(x_test_data)):
x_test2.append(x_test_data[i-60:i, :])
y_test2.append(y_test_data[i:i+1,:][0].tolist())
# + id="t4n2owTSJ4LM" colab_type="code" colab={}
x_test2 = np.array(x_test2)
# + id="yOkonKz3K90T" colab_type="code" colab={}
x_test2 = np.reshape(x_test2, (x_test2.shape[0], x_test2.shape[1], 5))
# + id="lq3alJAILFJ1" colab_type="code" colab={}
# Get the model's predicted price values
predictions2 = model2.predict(x_test2)
# Inverse transform
predictions2 = closeScaler.inverse_transform(predictions2)
# + id="X5DZ7yM9LK3b" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 35} outputId="1dd9ca87-9d2b-4990-9f81-b2c243c2ef12"
# Evaluate model with root mean square error (RMSE)
rmse2 = np.sqrt(np.mean((predictions2-y_test2)**2))
rmse2
# + id="fOvweIEXLQu7" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 577} outputId="5cf8db63-566d-4001-a792-62a1913573c8"
appleValid['Predictions2'] = predictions2[:,:1]
amazonValid["Predictions2"] = predictions2[:,1:2]
boeValid['Predictions2'] = predictions2[:, 2:3]
spyValid["Predictions2"] = predictions2[:, 3:4]
teslaValid["Predictions2"] = predictions2[:, 4:5]
# + id="ZNUeklyYLSkz" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 536} outputId="42a7a929-8a9e-45fc-e638-35e3fba2e5b6"
# Visualize data
plt.figure(figsize=(16,8))
plt.title('Stock Prediction with and without Reddit word vector data \n (TSLA)')
plt.xlabel('Date', fontsize=18)
plt.ylabel('Close Price USD ($)', fontsize=18)
# plt.plot(appleTest)
# plt.plot(appleValid)
# plt.plot(appleValid['Predictions'], color='red')
# plt.plot(appleValid['Predictions2'], color='green')
# plt.plot(amazonTest)
# plt.plot(amazonValid)
# plt.plot(amazonValid['Predictions'], color='red')
# plt.plot(amazonValid['Predictions2'], color='green')
# plt.plot(boeTest)
# plt.plot(boeValid)
# plt.plot(boeValid['Predictions'], color='red')
# plt.plot(boeValid['Predictions2'], color='green')
# plt.plot(spyTest, label='Train')
# plt.plot(spyValid, label='Actual')
# plt.plot(spyValid['Predictions'], label='Prediction with vector', color='red')
# plt.plot(spyValid['Predictions2'], label='Prediction without vector', color='green')
plt.plot(teslaTest, label='Train')
plt.plot(teslaValid, label="Actual")
plt.plot(teslaValid['Predictions'], label="vector prediction", color='red')
plt.plot(teslaValid['Predictions2'], label="without vector prediction", color="green")
plt.legend(('Train', 'Validation', 'Prediction with vector', 'Prediction without'), title="Legend",)
plt.show()
# + id="SlhPAN6yPPI2" colab_type="code" colab={}
| GloveLSTM.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # 1. Import libraries
# +
#----------------------------Reproducible----------------------------------------------------------------------------------------
import numpy as np
import tensorflow as tf
import random as rn
import os
seed=0
os.environ['PYTHONHASHSEED'] = str(seed)
np.random.seed(seed)
rn.seed(seed)
#session_conf = tf.ConfigProto(intra_op_parallelism_threads=1, inter_op_parallelism_threads=1)
session_conf =tf.compat.v1.ConfigProto(intra_op_parallelism_threads=1, inter_op_parallelism_threads=1)
from keras import backend as K
#tf.set_random_seed(seed)
tf.compat.v1.set_random_seed(seed)
#sess = tf.Session(graph=tf.get_default_graph(), config=session_conf)
sess = tf.compat.v1.Session(graph=tf.compat.v1.get_default_graph(), config=session_conf)
K.set_session(sess)
#----------------------------Reproducible----------------------------------------------------------------------------------------
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'
#--------------------------------------------------------------------------------------------------------------------------------
from keras.datasets import mnist
from keras.models import Model
from keras.layers import Dense, Input, Flatten, Activation, Dropout, Layer
from keras.layers.normalization import BatchNormalization
from keras.utils import to_categorical
from keras import optimizers,initializers,constraints,regularizers
from keras import backend as K
from keras.callbacks import LambdaCallback,ModelCheckpoint
from keras.utils import plot_model
from sklearn.model_selection import StratifiedKFold
from sklearn.ensemble import ExtraTreesClassifier
from sklearn import svm
from sklearn.model_selection import cross_val_score
from sklearn.model_selection import ShuffleSplit
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score
from sklearn.svm import SVC
import h5py
import math
import matplotlib
import matplotlib.pyplot as plt
import matplotlib.cm as cm
# %matplotlib inline
matplotlib.style.use('ggplot')
import pandas as pd
from sklearn.impute import SimpleImputer
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import MinMaxScaler
import scipy.sparse as sparse
#--------------------------------------------------------------------------------------------------------------------------------
# Import our self-defined methods
import sys
sys.path.append(r"./Defined")
import Functions as F
# The following code should be added before the keras model
#np.random.seed(seed)
# -
# # 2. Loading data
# +
train_data_frame=np.array(pd.read_csv('./Dataset/isolet1+2+3+4.data',header=None))
test_data_frame=np.array(pd.read_csv('./Dataset/isolet5.data',header=None))
train_data_arr=(train_data_frame[:,0:617]).copy()
train_label_arr=((train_data_frame[:,617]).copy()-1)
test_data_arr=(test_data_frame[:,0:617]).copy()
test_label_arr=((test_data_frame[:,617]).copy()-1)
# -
train_data_arr.shape
test_data_arr.shape
np.r_[train_data_arr,test_data_arr].shape
Data=MinMaxScaler(feature_range=(0,1)).fit_transform(np.r_[train_data_arr,test_data_arr])
Data.shape
C_train_x=Data[:len(train_data_arr)]
C_test_x=Data[len(train_data_arr):]
C_train_y=train_label_arr#to_categorical(train_label_arr)
C_test_y=test_label_arr#to_categorical(test_label_arr)
# +
x_train,x_validate,y_train_onehot,y_validate_onehot= train_test_split(C_train_x,C_train_y,test_size=0.1,random_state=seed)
x_test=C_test_x
y_test_onehot=C_test_y
print('Shape of x_train: ' + str(x_train.shape))
print('Shape of x_validate: ' + str(x_validate.shape))
print('Shape of x_test: ' + str(x_test.shape))
print('Shape of y_train: ' + str(y_train_onehot.shape))
print('Shape of y_validate: ' + str(y_validate_onehot.shape))
print('Shape of y_test: ' + str(y_test_onehot.shape))
print('Shape of C_train_x: ' + str(C_train_x.shape))
print('Shape of C_train_y: ' + str(C_train_y.shape))
print('Shape of C_test_x: ' + str(C_test_x.shape))
print('Shape of C_test_y: ' + str(C_test_y.shape))
# -
key_feture_number=10
# # 3.Model
# +
np.random.seed(seed)
#--------------------------------------------------------------------------------------------------------------------------------
class Feature_Select_Layer(Layer):
def __init__(self, output_dim, **kwargs):
super(Feature_Select_Layer, self).__init__(**kwargs)
self.output_dim = output_dim
def build(self, input_shape):
self.kernel = self.add_weight(name='kernel',
shape=(input_shape[1],),
initializer=initializers.RandomUniform(minval=0.999999, maxval=0.9999999, seed=seed),
trainable=True)
super(Feature_Select_Layer, self).build(input_shape)
def call(self, x, selection=False,k=key_feture_number):
kernel=K.abs(self.kernel)
if selection:
kernel_=K.transpose(kernel)
kth_largest = tf.math.top_k(kernel_, k=k)[0][-1]
kernel = tf.where(condition=K.less(kernel,kth_largest),x=K.zeros_like(kernel),y=kernel)
return K.dot(x, tf.linalg.tensor_diag(kernel))
def compute_output_shape(self, input_shape):
return (input_shape[0], self.output_dim)
#--------------------------------------------------------------------------------------------------------------------------------
def Autoencoder(p_data_feature=x_train.shape[1],\
p_encoding_dim=key_feture_number,\
p_learning_rate= 1E-3):
input_img = Input(shape=(p_data_feature,), name='input_img')
encoded = Dense(p_encoding_dim, activation='linear',kernel_initializer=initializers.glorot_uniform(seed))(input_img)
bottleneck=encoded
decoded = Dense(p_data_feature, activation='linear',kernel_initializer=initializers.glorot_uniform(seed))(encoded)
latent_encoder = Model(input_img, bottleneck)
autoencoder = Model(input_img, decoded)
autoencoder.compile(loss='mean_squared_error', optimizer=optimizers.Adam(lr=p_learning_rate))
print('Autoencoder Structure-------------------------------------')
autoencoder.summary()
#print('Latent Encoder Structure-------------------------------------')
#latent_encoder.summary()
return autoencoder,latent_encoder
#--------------------------------------------------------------------------------------------------------------------------------
def Identity_Autoencoder(p_data_feature=x_train.shape[1],\
p_encoding_dim=key_feture_number,\
p_learning_rate= 1E-3):
input_img = Input(shape=(p_data_feature,), name='autoencoder_input')
feature_selection = Feature_Select_Layer(output_dim=p_data_feature,\
input_shape=(p_data_feature,),\
name='feature_selection')
feature_selection_score=feature_selection(input_img)
encoded = Dense(p_encoding_dim,\
activation='linear',\
kernel_initializer=initializers.glorot_uniform(seed),\
name='autoencoder_hidden_layer')
encoded_score=encoded(feature_selection_score)
bottleneck_score=encoded_score
decoded = Dense(p_data_feature,\
activation='linear',\
kernel_initializer=initializers.glorot_uniform(seed),\
name='autoencoder_output')
decoded_score =decoded(bottleneck_score)
latent_encoder_score = Model(input_img, bottleneck_score)
autoencoder = Model(input_img, decoded_score)
autoencoder.compile(loss='mean_squared_error',\
optimizer=optimizers.Adam(lr=p_learning_rate))
print('Autoencoder Structure-------------------------------------')
autoencoder.summary()
return autoencoder,latent_encoder_score
#--------------------------------------------------------------------------------------------------------------------------------
def Fractal_Autoencoder(p_data_feature=x_train.shape[1],\
p_feture_number=key_feture_number,\
p_encoding_dim=key_feture_number,\
p_learning_rate=1E-3,\
p_loss_weight_1=1,\
p_loss_weight_2=2):
input_img = Input(shape=(p_data_feature,), name='autoencoder_input')
feature_selection = Feature_Select_Layer(output_dim=p_data_feature,\
input_shape=(p_data_feature,),\
name='feature_selection')
feature_selection_score=feature_selection(input_img)
feature_selection_choose=feature_selection(input_img,selection=True,k=p_feture_number)
encoded = Dense(p_encoding_dim,\
activation='linear',\
kernel_initializer=initializers.glorot_uniform(seed),\
name='autoencoder_hidden_layer')
encoded_score=encoded(feature_selection_score)
encoded_choose=encoded(feature_selection_choose)
bottleneck_score=encoded_score
bottleneck_choose=encoded_choose
decoded = Dense(p_data_feature,\
activation='linear',\
kernel_initializer=initializers.glorot_uniform(seed),\
name='autoencoder_output')
decoded_score =decoded(bottleneck_score)
decoded_choose =decoded(bottleneck_choose)
latent_encoder_score = Model(input_img, bottleneck_score)
latent_encoder_choose = Model(input_img, bottleneck_choose)
feature_selection_output=Model(input_img,feature_selection_choose)
autoencoder = Model(input_img, [decoded_score,decoded_choose])
autoencoder.compile(loss=['mean_squared_error','mean_squared_error'],\
loss_weights=[p_loss_weight_1, p_loss_weight_2],\
optimizer=optimizers.Adam(lr=p_learning_rate))
print('Autoencoder Structure-------------------------------------')
autoencoder.summary()
return autoencoder,feature_selection_output,latent_encoder_score,latent_encoder_choose
# -
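# The selection branch of Feature_Select_Layer above keeps only the k largest kernel weights (after taking absolute values) and zeroes out the rest; a NumPy sketch of that thresholding with illustrative values:

```python
import numpy as np

kernel = np.array([0.2, 0.9, 0.1, 0.7, 0.3])
k = 2
kth_largest = np.sort(kernel)[-k]                       # threshold value
selected = np.where(kernel < kth_largest, 0.0, kernel)  # zero out the rest
print(selected)  # only the two largest weights survive
```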
# ## 3.1 Structure and parameter testing
epochs_number=200
batch_size_value=64
# ---
# ### 3.1.1 Fractal Autoencoder
# ---
# +
loss_weight_1=0.0078125
F_AE,\
feature_selection_output,\
latent_encoder_score_F_AE,\
latent_encoder_choose_F_AE=Fractal_Autoencoder(p_data_feature=x_train.shape[1],\
p_feture_number=key_feture_number,\
p_encoding_dim=key_feture_number,\
p_learning_rate= 1E-3,\
p_loss_weight_1=loss_weight_1,\
p_loss_weight_2=1)
#file_name="./log/F_AE_"+str(key_feture_number)+".png"
#plot_model(F_AE, to_file=file_name,show_shapes=True)
# +
model_checkpoint=ModelCheckpoint('./log_weights/F_AE_'+str(key_feture_number)+'_weights_'+str(loss_weight_1)+'.{epoch:04d}.hdf5',period=100,save_weights_only=True,verbose=1)
#print_weights = LambdaCallback(on_epoch_end=lambda batch, logs: print(F_AE.layers[1].get_weights()))
F_AE_history = F_AE.fit(x_train, [x_train,x_train],\
epochs=epochs_number,\
batch_size=batch_size_value,\
shuffle=True,\
validation_data=(x_validate, [x_validate,x_validate]),\
callbacks=[model_checkpoint])
# +
loss = F_AE_history.history['loss']
val_loss = F_AE_history.history['val_loss']
epochs = range(epochs_number)
plt.plot(epochs, loss, 'bo', label='Training Loss')
plt.plot(epochs, val_loss, 'r', label='Validation Loss')
plt.xlabel('Epochs')
plt.ylabel('Loss')
plt.legend()
plt.show()
# -
# Zoom in on the later epochs (training runs for epochs_number=200, so keep the slice in range)
plt.plot(epochs[100:], loss[100:], 'bo', label='Training Loss')
plt.plot(epochs[100:], val_loss[100:], 'r', label='Validation Loss')
plt.xlabel('Epochs')
plt.ylabel('Loss')
plt.legend()
plt.show()
# +
p_data=F_AE.predict(x_test)
numbers=x_test.shape[0]*x_test.shape[1]
print("MSE for one-to-one map layer",np.sum(np.power(np.array(p_data)[0]-x_test,2))/numbers)
print("MSE for feature selection layer",np.sum(np.power(np.array(p_data)[1]-x_test,2))/numbers)
# -
# ---
# ### 3.1.2 Feature selection layer output
# ---
FS_layer_output=feature_selection_output.predict(x_test)
print(np.sum(FS_layer_output[0]>0))
# ---
# ### 3.1.3 Showing the key features
# ---
key_features=F.top_k_keepWeights_1(F_AE.get_layer(index=1).get_weights()[0],key_feture_number)
print(np.sum(F_AE.get_layer(index=1).get_weights()[0]>0))
# # 4 Classifying
# ### 4.1 Extra Trees
train_feature=C_train_x
train_label=C_train_y
test_feature=C_test_x
test_label=C_test_y
p_seed=seed
F.ETree(train_feature,train_label,test_feature,test_label,p_seed)
selected_position_list=np.where(key_features>0)[0]
# ---
# #### 4.1.1. On Identity Selection layer
# ---
#
# a) with zeros
train_feature=feature_selection_output.predict(C_train_x)
print("train_feature>0: ",np.sum(train_feature[0]>0))
print(train_feature.shape)
train_label=C_train_y
test_feature=feature_selection_output.predict(C_test_x)
print("test_feature>0: ",np.sum(test_feature[0]>0))
print(test_feature.shape)
test_label=C_test_y
p_seed=seed
F.ETree(train_feature,train_label,test_feature,test_label,p_seed)
# ---
#
# b) Sparse matrix
# +
train_feature=feature_selection_output.predict(C_train_x)
print(train_feature.shape)
train_label=C_train_y
test_feature=feature_selection_output.predict(C_test_x)
print(test_feature.shape)
test_label=C_test_y
train_feature_sparse=sparse.coo_matrix(train_feature)
test_feature_sparse=sparse.coo_matrix(test_feature)
p_seed=seed
F.ETree(train_feature_sparse,train_label,test_feature_sparse,test_label,p_seed)
# -
# ---
#
# c) Compression
# +
train_feature_=feature_selection_output.predict(C_train_x)
train_feature=F.compress_zero(train_feature_,key_feture_number)
print(train_feature.shape)
train_label=C_train_y
test_feature_=feature_selection_output.predict(C_test_x)
test_feature=F.compress_zero(test_feature_,key_feture_number)
print(test_feature.shape)
test_label=C_test_y
p_seed=seed
F.ETree(train_feature,train_label,test_feature,test_label,p_seed)
# -
# ---
#
# d) Compression with structure
# +
train_feature_=feature_selection_output.predict(C_train_x)
train_feature=F.compress_zero_withkeystructure(train_feature_,selected_position_list)
print(train_feature.shape)
train_label=C_train_y
test_feature_=feature_selection_output.predict(C_test_x)
test_feature=F.compress_zero_withkeystructure(test_feature_,selected_position_list)
print(test_feature.shape)
test_label=C_test_y
p_seed=seed
F.ETree(train_feature,train_label,test_feature,test_label,p_seed)
# -
# ---
# #### 4.1.2. On Original Selection
# ---
#
# a) with zeros
# +
train_feature=np.multiply(C_train_x, key_features)
print("train_feature>0: ",np.sum(train_feature[0]>0))
print(train_feature.shape)
train_label=C_train_y
test_feature=np.multiply(C_test_x, key_features)
print("test_feature>0: ",np.sum(test_feature[0]>0))
print(test_feature.shape)
test_label=C_test_y
p_seed=seed
F.ETree(train_feature,train_label,test_feature,test_label,p_seed)
# -
# ---
#
# b) Sparse matrix
# +
train_feature=np.multiply(C_train_x, key_features)
print(train_feature.shape)
train_label=C_train_y
test_feature=np.multiply(C_test_x, key_features)
print(test_feature.shape)
test_label=C_test_y
train_feature_sparse=sparse.coo_matrix(train_feature)
test_feature_sparse=sparse.coo_matrix(test_feature)
p_seed=seed
F.ETree(train_feature_sparse,train_label,test_feature_sparse,test_label,p_seed)
# -
# ---
#
# c) Compression
# +
train_feature_=np.multiply(C_train_x, key_features)
train_feature=F.compress_zero(train_feature_,key_feture_number)
print(train_feature.shape)
train_label=C_train_y
test_feature_=np.multiply(C_test_x, key_features)
test_feature=F.compress_zero(test_feature_,key_feture_number)
print(test_feature.shape)
test_label=C_test_y
p_seed=seed
F.ETree(train_feature,train_label,test_feature,test_label,p_seed)
# -
# ---
#
# d) Compression with structure
# +
train_feature_=np.multiply(C_train_x, key_features)
train_feature=F.compress_zero_withkeystructure(train_feature_,selected_position_list)
print(train_feature.shape)
train_label=C_train_y
test_feature_=np.multiply(C_test_x, key_features)
test_feature=F.compress_zero_withkeystructure(test_feature_,selected_position_list)
print(test_feature.shape)
test_label=C_test_y
p_seed=seed
F.ETree(train_feature,train_label,test_feature,test_label,p_seed)
# -
# ---
# #### 4.1.3. Latent space
# ---
train_feature=latent_encoder_score_F_AE.predict(C_train_x)
print(train_feature.shape)
train_label=C_train_y
test_feature=latent_encoder_score_F_AE.predict(C_test_x)
print(test_feature.shape)
test_label=C_test_y
p_seed=seed
F.ETree(train_feature,train_label,test_feature,test_label,p_seed)
train_feature=latent_encoder_choose_F_AE.predict(C_train_x)
print(train_feature.shape)
train_label=C_train_y
test_feature=latent_encoder_choose_F_AE.predict(C_test_x)
print(test_feature.shape)
test_label=C_test_y
p_seed=seed
F.ETree(train_feature,train_label,test_feature,test_label,p_seed)
# # 6. Reconstruction loss
# +
from sklearn.linear_model import LinearRegression
def mse_check(train, test):
LR = LinearRegression(n_jobs = -1)
LR.fit(train[0], train[1])
MSELR = ((LR.predict(test[0]) - test[1]) ** 2).mean()
return MSELR
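# The idea behind mse_check on a toy example: regress the full features on the selected subset and measure how well they are reconstructed (the data here is synthetic):

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.RandomState(0)
selected = rng.rand(50, 2)
full = selected @ np.array([[1.0, 0.5], [0.2, 2.0]])  # exactly linear map
LR = LinearRegression(n_jobs=-1)
LR.fit(selected, full)
mse = ((LR.predict(selected) - full) ** 2).mean()
print(mse)  # ~0 because `full` is a linear function of `selected`
```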
# +
train_feature_=np.multiply(C_train_x, key_features)
C_train_selected_x=F.compress_zero_withkeystructure(train_feature_,selected_position_list)
print(C_train_selected_x.shape)
test_feature_=np.multiply(C_test_x, key_features)
C_test_selected_x=F.compress_zero_withkeystructure(test_feature_,selected_position_list)
print(C_test_selected_x.shape)
train_feature_tuple=(C_train_selected_x,C_train_x)
test_feature_tuple=(C_test_selected_x,C_test_x)
reconstruction_loss=mse_check(train_feature_tuple, test_feature_tuple)
print(reconstruction_loss)
# -
| Python/AbsoluteAndOtherAlgorithms/ISOLET_DifferentFeatures/UFS_10_10.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
# For implementing K-means
import random
from copy import deepcopy
# For accuracy
from sklearn.metrics import confusion_matrix, accuracy_score
import seaborn as sns
# For cost function
from scipy.spatial import distance
plt.style.use("ggplot")
# -
# Read data and do preprocessing
data = pd.read_csv("./datasets/data_noah.csv")
mapping = {"CH": 0, "FF": 1, "CU": 2}
data = data.replace({"pitch_type": mapping})
x = data["x"].values
y = data["y"].values
pitch_type = data["pitch_type"].values
Noah = np.array(list(zip(x, y)))
# Plotting values
plt.xlabel("Horizontal movement (x)")
plt.ylabel("Vertical movement (y)")
plt.scatter(x, y, c="black", s=5)
# Calculate Euclidean distance
def EuclideanDist(a, b, ax=1):
return np.linalg.norm(a-b,axis=ax)
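# np.linalg.norm with axis=1 gives a per-row distance, while axis=None collapses everything into a single scalar norm; both uses appear in the K-means loop:

```python
import numpy as np

a = np.array([[0.0, 0.0], [3.0, 4.0]])
b = np.zeros((2, 2))
print(np.linalg.norm(a - b, axis=1))     # per-row distances: [0. 5.]
print(np.linalg.norm(a - b, axis=None))  # one scalar over all entries: 5.0
```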
# Number of clusters
k = 3
# Implement kmeans
def kmeans(k, dataset):
# Pick k points as initial centroid
# May cause an error if the method below is used (duplicate centroids)
#center = dataset[np.random.randint(dataset.shape[0], size=k), :]
center = dataset[:k]
# Store old centroid when it updates
center_old = np.zeros(center.shape)
# Cluster labels: 3 clusters (0, 1, 2)
clusters = np.zeros(len(dataset))
# Error function
err = EuclideanDist(center, center_old, None)
# Loop will run till the err becomes 0
while err != 0:
# Assign each point to its closest centroid
for i in range(len(dataset)):
distances = EuclideanDist(dataset[i], center)
cluster = np.argmin(distances)
clusters[i] = cluster
# Store the old centroid value
center_old = deepcopy(center)
for i in range(k):
points = [dataset[j] for j in range(len(dataset)) if clusters[j] == i]
center[i] = np.mean(points, axis=0)
err = EuclideanDist(center, center_old, None)
return clusters, center
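# The assignment step in isolation: each point gets the label of its nearest centroid (toy data):

```python
import numpy as np

points = np.array([[0.0, 0.0], [5.0, 5.0], [0.2, 0.1]])
centers = np.array([[0.0, 0.0], [5.0, 5.0]])
labels = np.array([np.argmin(np.linalg.norm(p - centers, axis=1))
                   for p in points])
print(labels)  # [0 1 0]
```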
colors = ['r', 'g', 'b']
clusters, center = kmeans(3, Noah)
fig, ax = plt.subplots(figsize=(10, 7))
points = None
for i in range(k):
points = np.array([Noah[j] for j in range(len(Noah)) if clusters[j] == i])
ax.scatter(points[:, 0], points[:, 1], s=5, c=colors[i], label=colors[i])
ax.scatter(center[:, 0], center[:, 1], marker="*", s=250, c="black")
ax.legend(["CH","FF","CU"])
plt.xlabel("Horizontal movement (x)")
plt.ylabel("Vertical movement (y)")
plt.title("Kmeans result")
# Save result
fig.savefig("Kmeans_result.png")
# Calculate accuracy
fig, ax = plt.subplots(figsize=(10, 7))
cm = confusion_matrix(pitch_type, clusters)
sns.heatmap(cm, annot=True, ax=ax, fmt="d")
ax.set_xlabel("Predicted labels")
ax.set_ylabel("True labels")
ax.set_title("Confusion matrix")
ax.xaxis.set_ticklabels(["CH", "FF", "CU"])
ax.yaxis.set_ticklabels(["CH", "FF", "CU"])
# Save result
fig.savefig("Confusion_matrix.png")
print(accuracy_score(pitch_type, clusters))
# Show why there are 3 clusters
def wcss(k, points, centers):
wcss = 0
for i in range(k):
for point in points[i]:
wcss += (abs(EuclideanDist(point,centers[i], None))) ** 2
return wcss
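# WCSS for a single toy cluster — the sum of squared distances of each point to its centre:

```python
import numpy as np

cluster = np.array([[0.0, 0.0], [2.0, 0.0]])
centre = cluster.mean(axis=0)  # [1.0, 0.0]
wcss_toy = sum(np.linalg.norm(p - centre) ** 2 for p in cluster)
print(wcss_toy)  # 1.0 + 1.0 = 2.0
```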
wcss_res = []
for k in range(1, 11):
points = []
clusters, center = kmeans(k, Noah)
for i in range(k):
point = np.array([Noah[j] for j in range(len(Noah)) if clusters[j] == i])
points.append(point)
wcss_res.append(wcss(k, points, center))
k = range(1, 11)
fig, ax = plt.subplots(figsize=(10, 7))
plt.plot(k, wcss_res)
plt.title("Elbow method")
plt.xlabel("k clusters")
plt.ylabel("wcss")
# save result of elbow method
fig.savefig("elbow_method.png")
# Use another two or more attributes to partition
x = data["tstart"].values
y = data["y"].values
NewData = np.array(list(zip(x, y)))
plt.scatter(x, y, c="black", s=5)
clusters, center = kmeans(3, NewData)
colors = ['r', 'g', 'b']
fig, ax = plt.subplots(figsize=(10, 7))
points = []
for i in range(3):
point = np.array([NewData[j] for j in range(len(NewData)) if clusters[j] == i])
points.append(point)
ax.scatter(point[:, 0], point[:, 1], s=5, c=colors[i], label=colors[i])
ax.scatter(center[:, 0], center[:, 1], marker="*", s=250, c="black")
ax.legend(["CH","FF","CU"])
plt.xlabel("tstart")
plt.ylabel("Vertical movement (y)")
plt.title("Kmeans result, tstart and Vertical movement")
# Save result2
fig.savefig("Kmeans_result2.png")
# Calculate accuracy
fig, ax = plt.subplots(figsize=(10, 7))
cm = confusion_matrix(pitch_type, clusters)
sns.heatmap(cm, annot=True, ax=ax, fmt="d")
ax.set_xlabel("Predicted labels")
ax.set_ylabel("True labels")
ax.set_title("Confusion matrix, tstart and Vertical movement")
ax.xaxis.set_ticklabels(["CH", "FF", "CU"])
ax.yaxis.set_ticklabels(["CH", "FF", "CU"])
# Save result
fig.savefig("Confusion_matrix2.png")
print(accuracy_score(pitch_type, clusters))
# Do elbow method again with new data
wcss_res = []
for k in range(1, 11):
points = []
clusters, center = kmeans(k, NewData)
for i in range(k):
point = np.array([NewData[j] for j in range(len(NewData)) if clusters[j] == i])
points.append(point)
wcss_res.append(wcss(k, points, center))
k = range(1, 11)
fig, ax = plt.subplots(figsize=(10, 7))
plt.plot(k, wcss_res)
plt.title("Elbow method, tstart and Vertical movement")
plt.xlabel("k clusters")
plt.ylabel("wcss")
# save result of elbow method of new data
fig.savefig("elbow_method2.png")
| IML_2018_Fall/Project2/Noah_Kmeans.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python
# language: python3
# name: python3
# ---
# Notebook created: 2021-08-29 11:57:45
# Generated from: source/01_comparison/includes/transform.rst
# pandas provides a [Transformation](../../04_user_guide/36_groupby.ipynb#groupby-transform) mechanism that allows these types of operations to be
# succinctly expressed in one operation.
# + hide-output=false
gb = tips.groupby("smoker")["total_bill"]
tips["adj_total_bill"] = tips["total_bill"] - gb.transform("mean")
tips
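# The same group-wise demeaning on a small self-contained frame (the column names mirror the tips example above):

```python
import pandas as pd

# transform("mean") returns a value per row, aligned with the original
# index, so it can be subtracted column-wise.
toy = pd.DataFrame({
    "smoker": ["Yes", "Yes", "No", "No"],
    "total_bill": [10.0, 20.0, 30.0, 50.0],
})
gb = toy.groupby("smoker")["total_bill"]
toy["adj_total_bill"] = toy["total_bill"] - gb.transform("mean")
print(toy["adj_total_bill"].tolist())  # [-5.0, 5.0, -10.0, 10.0]
```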
| 01_comparison/includes/transform.ipynb |
# ---
# title: "Two different y-axes in the same plot"
# date: 2020-04-12T14:41:32+02:00
# author: "<NAME>"
# type: technical_note
# draft: false
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# - Annotations: https://matplotlib.org/3.3.0/tutorials/text/annotations.html
# +
import matplotlib.pyplot as plt
import pandas as pd
climate_change = pd.read_csv('climate_change.csv', parse_dates=['date'], index_col='date')
seventies = climate_change['1970-01-01':'1979-12-31']
# -
# Define a function called plot_timeseries
def plot_timeseries(axes, x, y, color, xlabel, ylabel):
# Plot the inputs x,y in the provided color
axes.plot(x, y, color=color)
# Set the x-axis label
axes.set_xlabel(xlabel)
# Set the y-axis label
axes.set_ylabel(ylabel, color=color)
# Color the y-axis tick params to match the line color
axes.tick_params('y', colors=color)
# +
fig, ax = plt.subplots()
# Plot the CO2 levels time-series in blue
plot_timeseries(ax, climate_change.index, climate_change['co2'], "blue", 'Time (years)', 'CO2 levels')
# Create a twin Axes object that shares the x-axis
ax2 = ax.twinx()
# Plot the relative temperature data in red
plot_timeseries(ax2, climate_change.index, climate_change['relative_temp'], "red", 'Time (years)', 'Relative temperature (Celsius)')
# Annotate point with relative temperature >1 degree
ax2.annotate('>1 degree', xy=(pd.Timestamp('2015-10-06'), 1), xytext=(pd.Timestamp('2008-10-06'), -0.2), arrowprops={"arrowstyle":"->", "color":"gray"})
plt.show()
| courses/datacamp/notes/python/matplotlibTMP/sameplot.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + id="HLU-f5CAapbg"
import csv
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from pandas.plotting import scatter_matrix
from sklearn.model_selection import train_test_split
# %matplotlib inline
from matplotlib.pylab import rcParams
from datetime import datetime
# + [markdown] id="6vPNte_HbArU"
# # Linear regression
# + colab={"base_uri": "https://localhost:8080/", "height": 297} id="s9c0w6lPazUF" executionInfo={"status": "ok", "timestamp": 1636105678504, "user_tz": -330, "elapsed": 991, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Gjc2nHKs9TxBjI8Rdr48x2ktCUgrltQi-Ca-pdomA=s64", "userId": "12752491426441843216"}} outputId="4ecfd496-ce00-4174-d362-8412e07fe48c"
df = pd.read_csv('/content/drive/MyDrive/MLproject/DRREDDY.csv')
df.describe()
# + id="lCVmgYwib6-r"
df['Date'] = pd.to_datetime(df.Date,format='%Y-%m-%d')
df.index = df['Date']
# Sorting by date
data = df.sort_index(ascending=True, axis=0)
# Creating a separate dataset
new_data = pd.DataFrame(index=range(0,len(df)),columns=['Date', 'Close','Open','High','Volume'])
new_data['Date'] = df['Date'].values
new_data['Close'] = df['Close'].values
new_data['Open'] = df['Open'].values
new_data['High'] = df['High'].values
new_data['Volume'] = df['Volume'].values
# + colab={"base_uri": "https://localhost:8080/", "height": 419} id="4hKlqHmdcD23" executionInfo={"status": "ok", "timestamp": 1635656379685, "user_tz": -330, "elapsed": 733, "user": {"displayName": "te<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Gjc2nHKs9TxBjI8Rdr48x2ktCUgrltQi-Ca-pdomA=s64", "userId": "12752491426441843216"}} outputId="2167abe8-b9dc-45bd-9b40-97ceac800407"
new_data
# + id="DL_dyci5eg8G"
import fastai
# + colab={"base_uri": "https://localhost:8080/"} id="9P-WUHglcDoP" executionInfo={"status": "ok", "timestamp": 1636105717457, "user_tz": -330, "elapsed": 27496, "user": {"displayName": "te<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Gjc2nHKs9TxBjI8Rdr48x2ktCUgrltQi-Ca-pdomA=s64", "userId": "12752491426441843216"}} outputId="1cb4b21b-f940-4b6f-ce33-6d3ff0a00834"
from fastai.tabular import add_datepart
add_datepart(new_data, 'Date')
new_data.drop('Elapsed', axis=1, inplace=True)
# + colab={"base_uri": "https://localhost:8080/", "height": 419} id="kwPrJi3ALDf1" executionInfo={"status": "ok", "timestamp": 1636105717459, "user_tz": -330, "elapsed": 18, "user": {"displayName": "tejareddy piduru", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Gjc2nHKs9TxBjI8Rdr48x2ktCUgrltQi-Ca-pdomA=s64", "userId": "12752491426441843216"}} outputId="b830d6fb-68c1-4f12-e797-61a70178a395"
new_data
# + id="Pss1VD8Usyd9"
from sklearn.preprocessing import MinMaxScaler
scaler = MinMaxScaler(feature_range=(0, 1))
scal = scaler.fit_transform(new_data)
# + colab={"base_uri": "https://localhost:8080/"} id="vGUktGsaLNRt" executionInfo={"status": "ok", "timestamp": 1636105726454, "user_tz": -330, "elapsed": 403, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Gjc2nHKs9TxBjI8Rdr48x2ktCUgrltQi-Ca-pdomA=s64", "userId": "12752491426441843216"}} outputId="dbbe07db-62fa-4c47-936a-c56d10e2d459"
train = new_data[:3714]
valid = new_data[3714:]
pred = ['Close','High']
x_train = train.drop(pred, axis=1)
y_train = train[pred]
x_valid = valid.drop(pred, axis=1)
y_valid = valid[pred]
#implement linear regression
from sklearn.linear_model import LinearRegression
model = LinearRegression()
model.fit(x_train,y_train)
# + colab={"base_uri": "https://localhost:8080/"} id="IDLp6SFhLYKu" executionInfo={"status": "ok", "timestamp": 1636105730142, "user_tz": -330, "elapsed": 411, "user": {"displayName": "tej<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Gjc2nHKs9TxBjI8Rdr48x2ktCUgrltQi-Ca-pdomA=s64", "userId": "12752491426441843216"}} outputId="aaf93a4a-ab75-4bf8-bcbc-ce9257a86056"
preds = model.predict(x_valid)
print('Variance score: %.2f' % model.score(x_valid, y_valid))
rms=np.sqrt(np.mean(np.power((np.array(y_valid)-np.array(preds)),2)))
print("rms : %.2f" %rms)
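# The `rms` expression above is just the root of the mean squared element-wise difference between actual and predicted values. A self-contained equivalent of the formula:

```python
import numpy as np

def rmse(actual, predicted):
    # Root-mean-square error, matching
    # np.sqrt(np.mean(np.power(np.array(actual) - np.array(predicted), 2)))
    actual = np.asarray(actual, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    return np.sqrt(np.mean((actual - predicted) ** 2))

print(rmse([1.0, 2.0, 3.0], [1.0, 2.0, 5.0]))  # sqrt(4/3) ≈ 1.1547
```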
# + colab={"base_uri": "https://localhost:8080/"} id="xjLud10QA4YF" executionInfo={"status": "ok", "timestamp": 1636105733830, "user_tz": -330, "elapsed": 423, "user": {"displayName": "te<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Gjc2nHKs9TxBjI8Rdr48x2ktCUgrltQi-Ca-pdomA=s64", "userId": "12752491426441843216"}} outputId="cd4482a8-b4c5-45f7-f731-721efcda481e"
y_valid['Close']
# + colab={"base_uri": "https://localhost:8080/", "height": 296} id="YDtVBO_uuMIj" executionInfo={"status": "ok", "timestamp": 1636105742789, "user_tz": -330, "elapsed": 392, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Gjc2nHKs9TxBjI8Rdr48x2ktCUgrltQi-Ca-pdomA=s64", "userId": "12752491426441843216"}} outputId="39fa83e2-fa32-4a48-d7dd-0d84bf741ac3"
x = np.array(y_valid['Close'])
y = np.array([x[0] for x in preds])
plt.scatter(x, y)
plt.xlabel("Actual Price close")
plt.ylabel("Predicted Price close")
# + colab={"base_uri": "https://localhost:8080/", "height": 296} id="dPn8P6-eBsiO" executionInfo={"status": "ok", "timestamp": 1635661602771, "user_tz": -330, "elapsed": 701, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Gjc2nHKs9TxBjI8Rdr48x2ktCUgrltQi-Ca-pdomA=s64", "userId": "12752491426441843216"}} outputId="82dec053-86b8-4b78-bfe0-651f3d0d3fc6"
x = np.array(y_valid['High'])
y = np.array([x[1] for x in preds])
plt.scatter(x, y)
plt.xlabel("Actual Price High")
plt.ylabel("Predicted Price High")
# + id="deRxQ8Hyyse4"
b=np.transpose(preds)
# + colab={"base_uri": "https://localhost:8080/"} id="ZTub73Yxgbh0" executionInfo={"status": "ok", "timestamp": 1636105832261, "user_tz": -330, "elapsed": 362, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Gjc2nHKs9TxBjI8Rdr48x2ktCUgrltQi-Ca-pdomA=s64", "userId": "12752491426441843216"}} outputId="6d7ccea3-4272-4d12-fd44-ddd9907608c1"
preds
# + colab={"base_uri": "https://localhost:8080/"} id="X5YTr_qmgY86" executionInfo={"status": "ok", "timestamp": 1636105821996, "user_tz": -330, "elapsed": 475, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Gjc2nHKs9TxBjI8Rdr48x2ktCUgrltQi-Ca-pdomA=s64", "userId": "12752491426441843216"}} outputId="09885a9c-1741-4ac5-9d1b-04f6ea90bd9e"
b
# + colab={"base_uri": "https://localhost:8080/", "height": 721} id="AdzVEUHkLc8i" executionInfo={"status": "ok", "timestamp": 1636105844612, "user_tz": -330, "elapsed": 1260, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Gjc2nHKs9TxBjI8Rdr48x2ktCUgrltQi-Ca-pdomA=s64", "userId": "12752491426441843216"}} outputId="d1925a3e-38df-47e7-f7df-6431b1352fc0"
valid['Predictions close'] = [x[0] for x in preds]
valid['Predictions high'] = [x[1] for x in preds]
valid.index = new_data[3714:].index
train.index = new_data[:3714].index
fig = plt.figure(figsize =(20,8))
plt.plot(train[pred])
plt.plot(valid[['Close','High', 'Predictions close', 'Predictions high']])
plt.legend()
# + id="j-ohlGxiDTvC"
predsss = ['Predictions close', 'Predictions high' ]
y2 = valid[predsss].mean()
x2 = valid[pred].mean()
# + colab={"base_uri": "https://localhost:8080/"} id="a0woYrY_GzvE" executionInfo={"status": "ok", "timestamp": 1635662967022, "user_tz": -330, "elapsed": 479, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Gjc2nHKs9TxBjI8Rdr48x2ktCUgrltQi-Ca-pdomA=s64", "userId": "12752491426441843216"}} outputId="26c4af51-97c5-4352-e771-0cd064c71868"
x2[0]/y2[0] * 100
# + colab={"base_uri": "https://localhost:8080/"} id="C82P0UF3Ep3T" executionInfo={"status": "ok", "timestamp": 1635662886534, "user_tz": -330, "elapsed": 5, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Gjc2nHKs9TxBjI8Rdr48x2ktCUgrltQi-Ca-pdomA=s64", "userId": "12752491426441843216"}} outputId="adb42f76-a677-45df-b3ca-2099b210d6b2"
acc = x2/y2*100
# + id="nYoXB1-ugn5Z"
from sklearn.metrics import classification_report
| Ml/MLproject/Untitled1.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import numpy as np
import matplotlib.pyplot as plt
# %matplotlib inline
plt.rcParams["savefig.dpi"] = 300
plt.rcParams["savefig.bbox"] = "tight"
np.set_printoptions(precision=3, suppress=True)
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import scale, StandardScaler
# +
from sklearn.datasets import fetch_covtype
from sklearn.utils import check_array
def load_data(dtype=np.float32, order='C', random_state=13):
######################################################################
# Load covertype dataset (downloads it from the web, might take a bit)
data = fetch_covtype(download_if_missing=True, shuffle=True,
random_state=random_state)
X = check_array(data['data'], dtype=dtype, order=order)
    # make it a binary classification problem
    y = (data['target'] != 1).astype(np.int64)
# Create train-test split (as [Joachims, 2006])
n_train = 522911
X_train = X[:n_train]
y_train = y[:n_train]
X_test = X[n_train:]
y_test = y[n_train:]
# Standardize first 10 features (the numerical ones)
mean = X_train.mean(axis=0)
std = X_train.std(axis=0)
mean[10:] = 0.0
std[10:] = 1.0
X_train = (X_train - mean) / std
X_test = (X_test - mean) / std
return X_train, X_test, y_train, y_test
X_train, X_test, y_train, y_test = load_data()
# subsample training set by a factor of 10:
X_train = X_train[::10]
y_train = y_train[::10]
# -
from sklearn.linear_model import LogisticRegressionCV
print(X_train.shape)
print(np.bincount(y_train))
lr = LogisticRegressionCV().fit(X_train, y_train)
lr.C_
print(lr.predict_proba(X_test)[:10])
print(y_test[:10])
from sklearn.calibration import calibration_curve
probs = lr.predict_proba(X_test)[:, 1]
prob_true, prob_pred = calibration_curve(y_test, probs, n_bins=5)
print(prob_true)
print(prob_pred)
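# Conceptually, `calibration_curve` bins the predicted probabilities and, per bin, compares the mean prediction to the observed fraction of positives. A rough numpy re-implementation with uniform bins — a sketch of the assumed behavior, not sklearn's exact code:

```python
import numpy as np

def calibration_curve_sketch(y_true, y_prob, n_bins=5):
    y_true = np.asarray(y_true)
    y_prob = np.asarray(y_prob)
    # Uniform bin edges on [0, 1]; assign each probability to a bin
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    bin_idx = np.clip(np.digitize(y_prob, edges[1:-1]), 0, n_bins - 1)
    prob_true, prob_pred = [], []
    for b in range(n_bins):
        mask = bin_idx == b
        if mask.any():  # empty bins are dropped, as in sklearn
            prob_true.append(y_true[mask].mean())  # observed positive fraction
            prob_pred.append(y_prob[mask].mean())  # mean predicted probability
    return np.array(prob_true), np.array(prob_pred)
```

A perfectly calibrated model would give `prob_true ≈ prob_pred` in every bin, i.e. points on the diagonal of the reliability diagram plotted below.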
# +
def plot_calibration_curve(y_true, y_prob, n_bins=5, ax=None, hist=True, normalize=False):
prob_true, prob_pred = calibration_curve(y_true, y_prob, n_bins=n_bins, normalize=normalize)
if ax is None:
ax = plt.gca()
if hist:
ax.hist(y_prob, weights=np.ones_like(y_prob) / len(y_prob), alpha=.4,
bins=np.maximum(10, n_bins))
ax.plot([0, 1], [0, 1], ':', c='k')
curve = ax.plot(prob_pred, prob_true, marker="o")
ax.set_xlabel("predicted probability")
ax.set_ylabel("fraction of positive samples")
ax.set(aspect='equal')
return curve
plot_calibration_curve(y_test, probs)
plt.title("n_bins=5")
# -
fig, axes = plt.subplots(1, 3, figsize=(16, 6))
for ax, n_bins in zip(axes, [5, 20, 50]):
plot_calibration_curve(y_test, probs, n_bins=n_bins, ax=ax)
ax.set_title("n_bins={}".format(n_bins))
plt.savefig("images/influence_bins.png")
# +
from sklearn.svm import LinearSVC, SVC
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
fig, axes = plt.subplots(1, 3, figsize=(8, 8))
for ax, clf in zip(axes, [LogisticRegressionCV(), DecisionTreeClassifier(),
RandomForestClassifier(n_estimators=100)]):
    # use predict_proba if the estimator has it
scores = clf.fit(X_train, y_train).predict_proba(X_test)[:, 1]
plot_calibration_curve(y_test, scores, n_bins=20, ax=ax)
ax.set_title(clf.__class__.__name__)
plt.tight_layout()
plt.savefig("images/calib_curve_models.png")
# +
# same thing but with Brier loss shown. Why do I refit the models? lol
from sklearn.metrics import brier_score_loss
fig, axes = plt.subplots(1, 3, figsize=(10, 4))
for ax, clf in zip(axes, [LogisticRegressionCV(), DecisionTreeClassifier(), RandomForestClassifier(n_estimators=100)]):
    # use predict_proba if the estimator has it
scores = clf.fit(X_train, y_train).predict_proba(X_test)[:, 1]
plot_calibration_curve(y_test, scores, n_bins=20, ax=ax)
ax.set_title("{}: {:.2f}".format(clf.__class__.__name__, brier_score_loss(y_test, scores)))
plt.tight_layout()
plt.savefig("images/models_bscore.png")
# -
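# The Brier score used in the titles above is simply the mean squared difference between the predicted probability and the 0/1 outcome — lower is better. A minimal equivalent of `brier_score_loss`:

```python
import numpy as np

def brier_score(y_true, y_prob):
    y_true = np.asarray(y_true, dtype=float)
    y_prob = np.asarray(y_prob, dtype=float)
    # 0 is a perfect score; a constant 0.5 prediction scores 0.25
    return np.mean((y_prob - y_true) ** 2)

print(brier_score([0, 1, 1], [0.1, 0.9, 0.8]))  # (0.01 + 0.01 + 0.04) / 3 = 0.02
```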
from sklearn.calibration import CalibratedClassifierCV
X_train_sub, X_val, y_train_sub, y_val = train_test_split(X_train, y_train,
stratify=y_train, random_state=0)
rf = RandomForestClassifier(n_estimators=100).fit(X_train_sub, y_train_sub)
scores = rf.predict_proba(X_test)[:, 1]
plot_calibration_curve(y_test, scores, n_bins=20)
plt.title("{}: {:.3f}".format(rf.__class__.__name__, brier_score_loss(y_test, scores)))
# +
cal_rf = CalibratedClassifierCV(rf, cv="prefit", method='sigmoid')
cal_rf.fit(X_val, y_val)
scores_sigm = cal_rf.predict_proba(X_test)[:, 1]
cal_rf_iso = CalibratedClassifierCV(rf, cv="prefit", method='isotonic')
cal_rf_iso.fit(X_val, y_val)
scores_iso = cal_rf_iso.predict_proba(X_test)[:, 1]
# -
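# The `method='sigmoid'` option above is Platt scaling: a one-dimensional logistic map p = sigmoid(a*s + b) is fit from the classifier's raw scores s to the held-out labels. An illustrative numpy-only re-implementation via gradient descent on the log loss — a sketch of the idea, not CalibratedClassifierCV's actual fitting code:

```python
import numpy as np

def fit_platt(scores, y, lr=0.5, n_iter=2000):
    """Fit p = sigmoid(a*s + b) to (score, label) pairs; return the calibrator."""
    s = np.asarray(scores, dtype=float)
    y = np.asarray(y, dtype=float)
    a, b = 1.0, 0.0
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-(a * s + b)))
        grad = p - y                      # d(log loss)/dz for the logistic link
        a -= lr * np.mean(grad * s)
        b -= lr * np.mean(grad)
    return lambda x: 1.0 / (1.0 + np.exp(-(a * np.asarray(x, dtype=float) + b)))
```

The isotonic alternative replaces this two-parameter sigmoid with an arbitrary monotone step function, which is more flexible but needs more validation data to avoid overfitting.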
scores_rf = cal_rf.predict_proba(X_val)
plt.plot(scores_rf[:, 1], y_val, 'o', alpha=.01)
plt.xlabel("rf.predict_proba")
plt.ylabel("True validation label")
plt.savefig("images/calibration_val_scores.png")
sigm = cal_rf.calibrated_classifiers_[0].calibrators_[0]
scores_rf_sorted = np.sort(scores_rf[:, 1])
sigm_scores = sigm.predict(scores_rf_sorted)
iso = cal_rf_iso.calibrated_classifiers_[0].calibrators_[0]
iso_scores = iso.predict(scores_rf_sorted)
# +
plt.plot(scores_rf[:, 1], y_val, 'o', alpha=.01)
plt.plot(scores_rf_sorted, sigm_scores, label='sigm')
plt.plot(scores_rf_sorted, iso_scores, label='iso')
plt.xlabel("rf.predict_proba")
plt.ylabel("True validation label")
plt.legend()
plt.savefig("images/calibration_val_scores_fitted.png")
# +
fig, axes = plt.subplots(1, 3, figsize=(10, 4))
for name, s, ax in zip(['no calibration', 'sigmoid', 'isotonic'],
[scores, scores_sigm, scores_iso], axes):
plot_calibration_curve(y_test, s, n_bins=20, ax=ax)
ax.set_title("{}: {:.3f}".format(name, brier_score_loss(y_test, s)))
plt.tight_layout()
plt.savefig("images/types_callib.png")
# -
cal_rf_iso_cv = CalibratedClassifierCV(rf, method='isotonic')
cal_rf_iso_cv.fit(X_train, y_train)
scores_iso_cv = cal_rf_iso_cv.predict_proba(X_test)[:, 1]
# +
fig, axes = plt.subplots(1, 3, figsize=(10, 4))
for name, s, ax in zip(['no calibration', 'isotonic', 'isotonic cv'],
[scores, scores_iso, scores_iso_cv], axes):
plot_calibration_curve(y_test, s, n_bins=20, ax=ax)
ax.set_title("{}: {:.3f}".format(name, brier_score_loss(y_test, s)))
plt.tight_layout()
plt.savefig("images/types_callib_cv.png")
# +
# http://scikit-learn.org/dev/auto_examples/calibration/plot_calibration_multiclass.html
# Author: <NAME> <<EMAIL>>
# License: BSD Style.
import matplotlib.pyplot as plt
import numpy as np
from sklearn.datasets import make_blobs
from sklearn.ensemble import RandomForestClassifier
from sklearn.calibration import CalibratedClassifierCV
from sklearn.metrics import log_loss, brier_score_loss
np.random.seed(0)
# Generate data
X, y = make_blobs(n_samples=1000, n_features=2, random_state=42,
cluster_std=5.0)
X_train, y_train = X[:600], y[:600]
X_valid, y_valid = X[600:800], y[600:800]
X_train_valid, y_train_valid = X[:800], y[:800]
X_test, y_test = X[800:], y[800:]
# Train uncalibrated random forest classifier on whole train and validation
# data and evaluate on test data
clf = RandomForestClassifier(n_estimators=25)
clf.fit(X_train_valid, y_train_valid)
clf_probs = clf.predict_proba(X_test)
score = log_loss(y_test, clf_probs)
#score = brier_score_loss(y_test, clf_probs[:, 1])
# Train random forest classifier, calibrate on validation data and evaluate
# on test data
clf = RandomForestClassifier(n_estimators=25)
clf.fit(X_train, y_train)
clf_probs = clf.predict_proba(X_test)
sig_clf = CalibratedClassifierCV(clf, method="sigmoid", cv="prefit")
sig_clf.fit(X_valid, y_valid)
sig_clf_probs = sig_clf.predict_proba(X_test)
sig_score = log_loss(y_test, sig_clf_probs)
#sig_score = brier_score_loss(y_test, sig_clf_probs[:, 1])
# Plot changes in predicted probabilities via arrows
plt.figure(figsize=(12, 6))
plt.subplot(1, 2, 1)
colors = ["r", "g", "b"]
for i in range(clf_probs.shape[0]):
plt.arrow(clf_probs[i, 0], clf_probs[i, 1],
sig_clf_probs[i, 0] - clf_probs[i, 0],
sig_clf_probs[i, 1] - clf_probs[i, 1],
color=colors[y_test[i]], head_width=1e-2)
# Plot perfect predictions
plt.plot([1.0], [0.0], 'ro', ms=20, label="Class 1")
plt.plot([0.0], [1.0], 'go', ms=20, label="Class 2")
plt.plot([0.0], [0.0], 'bo', ms=20, label="Class 3")
# Plot boundaries of unit simplex
plt.plot([0.0, 1.0, 0.0, 0.0], [0.0, 0.0, 1.0, 0.0], 'k', label="Simplex")
# Annotate points on the simplex
plt.annotate(r'($\frac{1}{3}$, $\frac{1}{3}$, $\frac{1}{3}$)',
xy=(1.0/3, 1.0/3), xytext=(1.0/3, .23), xycoords='data',
arrowprops=dict(facecolor='black', shrink=0.05),
horizontalalignment='center', verticalalignment='center')
plt.plot([1.0/3], [1.0/3], 'ko', ms=5)
plt.annotate(r'($\frac{1}{2}$, $0$, $\frac{1}{2}$)',
xy=(.5, .0), xytext=(.5, .1), xycoords='data',
arrowprops=dict(facecolor='black', shrink=0.05),
horizontalalignment='center', verticalalignment='center')
plt.annotate(r'($0$, $\frac{1}{2}$, $\frac{1}{2}$)',
xy=(.0, .5), xytext=(.1, .5), xycoords='data',
arrowprops=dict(facecolor='black', shrink=0.05),
horizontalalignment='center', verticalalignment='center')
plt.annotate(r'($\frac{1}{2}$, $\frac{1}{2}$, $0$)',
xy=(.5, .5), xytext=(.6, .6), xycoords='data',
arrowprops=dict(facecolor='black', shrink=0.05),
horizontalalignment='center', verticalalignment='center')
plt.annotate(r'($0$, $0$, $1$)',
xy=(0, 0), xytext=(.1, .1), xycoords='data',
arrowprops=dict(facecolor='black', shrink=0.05),
horizontalalignment='center', verticalalignment='center')
plt.annotate(r'($1$, $0$, $0$)',
xy=(1, 0), xytext=(1, .1), xycoords='data',
arrowprops=dict(facecolor='black', shrink=0.05),
horizontalalignment='center', verticalalignment='center')
plt.annotate(r'($0$, $1$, $0$)',
xy=(0, 1), xytext=(.1, 1), xycoords='data',
arrowprops=dict(facecolor='black', shrink=0.05),
horizontalalignment='center', verticalalignment='center')
# Add grid
plt.grid(False)
for x in [0.0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0]:
plt.plot([0, x], [x, 0], 'k', alpha=0.2)
plt.plot([0, 0 + (1-x)/2], [x, x + (1-x)/2], 'k', alpha=0.2)
plt.plot([x, x + (1-x)/2], [0, 0 + (1-x)/2], 'k', alpha=0.2)
plt.title("Change of predicted probabilities after sigmoid calibration")
plt.xlabel("Probability class 1")
plt.ylabel("Probability class 2")
plt.xlim(-0.05, 1.05)
plt.ylim(-0.05, 1.05)
plt.legend(loc="best")
print("Log-loss of")
print(" * uncalibrated classifier trained on 800 datapoints: %.3f "
% score)
print(" * classifier trained on 600 datapoints and calibrated on "
"200 datapoint: %.3f" % sig_score)
# Illustrate calibrator
plt.subplot(1, 2, 2)
# generate grid over 2-simplex
p1d = np.linspace(0, 1, 20)
p0, p1 = np.meshgrid(p1d, p1d)
p2 = 1 - p0 - p1
p = np.c_[p0.ravel(), p1.ravel(), p2.ravel()]
p = p[p[:, 2] >= 0]
calibrated_classifier = sig_clf.calibrated_classifiers_[0]
prediction = np.vstack([calibrator.predict(this_p)
for calibrator, this_p in
zip(calibrated_classifier.calibrators_, p.T)]).T
prediction /= prediction.sum(axis=1)[:, None]
# Plot modifications of calibrator
for i in range(prediction.shape[0]):
plt.arrow(p[i, 0], p[i, 1],
prediction[i, 0] - p[i, 0], prediction[i, 1] - p[i, 1],
head_width=1e-2, color=colors[np.argmax(p[i])])
# Plot boundaries of unit simplex
plt.plot([0.0, 1.0, 0.0, 0.0], [0.0, 0.0, 1.0, 0.0], 'k', label="Simplex")
plt.grid(False)
for x in [0.0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0]:
plt.plot([0, x], [x, 0], 'k', alpha=0.2)
plt.plot([0, 0 + (1-x)/2], [x, x + (1-x)/2], 'k', alpha=0.2)
plt.plot([x, x + (1-x)/2], [0, 0 + (1-x)/2], 'k', alpha=0.2)
plt.title("Illustration of sigmoid calibrator")
plt.xlabel("Probability class 1")
plt.ylabel("Probability class 2")
plt.xlim(-0.05, 1.05)
plt.ylim(-0.05, 1.05)
plt.savefig("images/multi_class_calibration.png")
# -
| slides/aml-09-gradient-boosting-calibration/aml-11-calibration.ipynb |
# -*- coding: utf-8 -*-
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .r
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: R
# language: R
# name: ir
# ---
source("../funcs/limma.R")
source("../funcs/funcsR.R")
suppressMessages(library(fgsea))
suppressMessages(library('ggplot2'))
suppressMessages(library('ggpubr'))
suppressMessages(library('ggdendro'))
suppressMessages(library(RColorBrewer))
suppressMessages(library(EnhancedVolcano))
suppressMessages(library(xCell))
# ---
# # Analysis of SMM Transcriptomes pt. 2
#
# This notebook contains R code & scripts to run differential expression, fGSEA, and create plots.
#
# **Author**: [<NAME>](<EMAIL>)
# ---
#
# ## 0. Load Data
# +
# Load TPM
tpm.df <- read.table("supplement/table9a_janssen_tpm.tsv",sep='\t',header=T,row.names=1)
# Load Norm TPM
tpm.norm.df <- read.table("supplement/table9b_janssen_tpm_norm.tsv",sep='\t', header=T, row.names=1)
tpm.norm.df <- data.frame(t(tpm.norm.df))
# Load counts
counts.df <- read.table("supplement/table9c_janssen_counts.tsv",sep='\t',header=T,row.names=1)
# Genes
gene.df <- read.table("supplement/table9d_gene_name.tsv",sep='\t',header=T,row.names=1)
gene.df <- gene.df[gene.df$biotype=='protein_coding',]
full.gene.df <- read.table("supplement/table9d_gene_name.tsv",sep='\t',header=T,row.names=1)
genes.to.use <- intersect(rownames(gene.df),rownames(tpm.df))
# Labels
labels.df <- read.table("../Fig1/supplement/table4_sample_cluster_id.tsv",sep='\t',header=T,row.names=1)
samples.to.use <- intersect(colnames(counts.df),rownames(labels.df))
# -
labels.df <- labels.df[samples.to.use,]
counts.df <- counts.df[genes.to.use,samples.to.use]
gene.df <- gene.df[genes.to.use,"Description", drop=F]
tpm.norm.df <- tpm.norm.df[samples.to.use,]
tpm.norm.df$consensus_nmf <- as.factor(as.character(labels.df[samples.to.use,]$consensus_nmf))
X.df <- read.table("../Fig1/supplement/table3_smm_filt_matrix.tsv",sep='\t',header=T,row.names=1)
X.df <- X.df[samples.to.use,]
# Note, these pathways were updated in the final figures.
# Please see README.md in the main dir of the repo for a mapping of these subtypes.
addName <- function(s){
if(s=="C1"){
return("HNT")
}else if(s=='C2'){
return("HMC")
}else if(s=='C3'){
return("FMD")
}else if(s=='C4'){
return("HKR")
}else if(s=='C5'){
return("CND")
}else if(s=='C6'){
return("HNF")
}
}
tpm.norm.df$id <- factor(sapply(tpm.norm.df$consensus_nmf, addName), levels = c("HNT", "HMC", "FMD", "HKR", "CND", "HNF"))
# ---
#
# ## 1. Expression Comparisons
#
# * Barplots showing expression of genes across clusters
# +
# plotTPM(tpm.norm.df, "BCL2L1", list( c("C1","C2")), full.gene.df, x='id', label.y=3, label.pos='none')
# plotTPM(tpm.norm.df, "BCL2", list( c("C1|HNT","C2|HMC")), full.gene.df, x='id', label.y=3, label.pos='none')
# -
tpm.norm.df$Hyperdiploidy<- ifelse(tpm.norm.df$consensus_nmf %in% c("C3","C5"), "- HP", "+ HP")
pdf('figures/fig2e_myc_hp.pdf', width=4, height=4)
plotTPM(tpm.norm.df, "MYC", c("- HP", "+ HP"), full.gene.df, x='Hyperdiploidy', label.x=.75, label.y=1, w=4, h=4, ylim.min=1, label.pos="none", palette=NULL)
dev.off()
# +
tpm.norm.df$cnd <- factor(ifelse(tpm.norm.df$consensus_nmf=="C5", "CND", "Rest"), levels=c("Rest","CND"))
p1 <- plotTPM(tpm.norm.df, "CCND1", c("Rest", "CND"), full.gene.df, x='cnd', label.x=.75, label.y=0, w=4, h=4, label.pos="none", palette=NULL)
tpm.norm.df$fmd <- factor(ifelse(tpm.norm.df$consensus_nmf %in% c("C3","C2"), "FMD/HMC", "Rest"), levels=c("Rest","FMD/HMC"))
p2 <- plotTPM(tpm.norm.df, "CCND2", c("Rest", "FMD/HMC"), full.gene.df, x='fmd', label.x=.75, label.y=0, w=4, h=4, label.pos="none", palette=NULL)
tpm.norm.df$hnt <- factor(ifelse(tpm.norm.df$consensus_nmf=="C1", "HNT", "Rest"), levels=c("Rest","HNT"))
p3 <- plotTPM(tpm.norm.df, "MCL1", c("Rest", "HNT"), full.gene.df, x='hnt', label.x=.75, label.y=7, w=4, h=4, ylim.min=7, label.pos="none", palette=NULL)
pdf('figures/figS4cde_tpm_barplots.pdf', width=4, height=12)
ggarrange(p1,p2,p3,labels = c("C", "D", "E"), ncol = 1, nrow = 3)
dev.off()
# +
p1 <- plotTPM(tpm.norm.df, "MCL1", list( c("HNT","FMD"), c("HMC","FMD"), c("FMD","CND")), full.gene.df, x="id", label.x=.9, ylim.min=6, label.y=6, label.pos='none')
p2 <- plotTPM(tpm.norm.df, "MYC", NA, full.gene.df, x="id", label.x=.9, label.pos='none') + scale_y_continuous(breaks=c(0,2,4,6,8,10),limits = c(0, NA))
p3 <- plotTPM(tpm.norm.df, "CCND1", NA, full.gene.df, x="id", label.x=.9, label.pos='none') + scale_y_continuous(breaks=c(0,2,4,6,8,10),limits = c(0, NA))
p4 <- plotTPM(tpm.norm.df, "CCND2", NA, full.gene.df, x="id", label.x=.9, label.pos='none') + scale_y_continuous(breaks=c(0,2,4,6,8,10),limits = c(0, NA))
pdf('figures/fig2abcd_tpm_barplots.pdf', width=8, height=20)
ggarrange(p1,p2,p3,p4,labels = c("A", "B", "C", "D"), ncol = 1, nrow = 4)
dev.off()
# -
# ---
#
# ## 2. Differential Expression One vs. Rest (Clusters)
# ### One vs. Rest
# +
# For all samples with matching RNA
DE.DIR <- "supplement/diffexp"
dir.create(DE.DIR)
for (s in unique(labels.df$consensus_nmf)){
labels.df$de <- labels.df$consensus_nmf==s
# Run Differential Expression
de.df <- RunDiffExprAnalysisLimma(
counts.df,
labels.df[,c('de'), drop=FALSE],
genes.df=gene.df
)
write.table(de.df, file.path(DE.DIR, paste("cluster_", s ,".tsv", sep='')), quote=FALSE, sep="\t")
}
# Only within HP clusters
DE.DIR <- "supplement/diffexp_hp"
dir.create(DE.DIR)
labels.hp.df <- labels.df[labels.df$consensus_nmf %in% c("C1","C2","C4","C6"),]
counts.hp.df <- counts.df[,rownames(labels.hp.df)]
for (s in unique(labels.hp.df$consensus_nmf)){
labels.hp.df$de <- labels.hp.df$consensus_nmf==s
# Run Differential Expression
de.df <- RunDiffExprAnalysisLimma(
counts.hp.df,
labels.hp.df[,c('de'), drop=FALSE],
genes.df=gene.df
)
write.table(de.df, file.path(DE.DIR, paste("cluster_subset_HP_", s ,".tsv", sep='')), quote=FALSE, sep="\t")
}
# -
c4.de.hl <- read.table("../data/ref/de_highlights/C4_selected_DE_genes.txt",sep='\t',header=T)
c3.de.hl <- read.table("../data/ref/de_highlights/C3_selected_DE_genes.txt",sep='\t',header=T)
# +
pdf('figures/fig2f_volcano_clust_FMD_de.pdf', width=8, height=10)
de.df <- read.table("supplement/diffexp/cluster_C3.tsv",sep='\t')
plotVolcano(
de.df,
selectLab=append(as.vector(c3.de.hl$upregulated_gene_name),as.vector(c3.de.hl$downregulated_gene_name)),
widthConnectors = 0.25,
drawConnectors=T,
boxedLabels = TRUE,
h=10,
w=8,
labSize=2,
labvjust=1,
labhjust=1,
title='FMD'
)
dev.off()
pdf('figures/fig2g_volcano_clust_CND_de.pdf', width=8, height=10)
de.df <- read.table("supplement/diffexp/cluster_C5.tsv",sep='\t')
plotVolcano(
de.df,
selectLab=append(as.vector(c4.de.hl$Upregulated_gene_name),as.vector(c4.de.hl$downregulated_gene_name)),
widthConnectors = 0.25,
drawConnectors=T,
boxedLabels = TRUE,
h=10,
w=8,
labSize=2,
labvjust=0.5,
labhjust=1,
title='CND'
)
dev.off()
# -
# ---
#
# ## 2. Misc. Differential Expression Comparisons
DE.DIR <- "supplement/diffexp_misc"
dir.create(DE.DIR)
# ### HP vs Rest
# +
labels.df$de <- labels.df$consensus_nmf %in% c("C1","C2","C4","C6")
# Run Differential Expression
de.df <- RunDiffExprAnalysisLimma(counts.df, labels.df[,c('de'), drop=FALSE], genes.df=gene.df)
write.table(de.df, file.path(DE.DIR, "cluster_hp_vs_rest.tsv"), quote=FALSE, sep="\t")
de.df[de.df$gene_name =='MYC',]
# -
# ### 1q Clusters
# +
labels.1q.df <- labels.df[labels.df$consensus_nmf %in% c("C3","C6"),]
labels.1q.df$de <- labels.1q.df$consensus_nmf=="C3"
# Run Differential Expression
de.df <- RunDiffExprAnalysisLimma(counts.df[,rownames(labels.1q.df)], labels.1q.df[,c('de'), drop=FALSE], genes.df=gene.df)
write.table(de.df, file.path(DE.DIR, "cluster_subset_1q_vs_rest.tsv"), quote=FALSE, sep="\t")
# -
# ### t(11;14)
# +
labels.df$t1114 <- as.logical(X.df[,'t.11.14.'])
# Run Differential Expression
de.df <- RunDiffExprAnalysisLimma(counts.df[,rownames(labels.df)], labels.df[,c('t1114'), drop=FALSE], genes.df=gene.df)
write.table(de.df, file.path(DE.DIR, "t1114_vs_rest.tsv"), quote=FALSE, sep="\t")
# -
# ### 1q Amp vs Rest
# +
labels.df$one_q <- as.logical(X.df[rownames(labels.df),'X1q_gain'])
# Run Differential Expression
de.df <- RunDiffExprAnalysisLimma(counts.df, labels.df[,c('one_q'), drop=FALSE], genes.df=gene.df)
write.table(de.df, file.path(DE.DIR, "1q_vs_rest.tsv"), quote=FALSE, sep="\t")
# -
# ### 1p Del vs. Rest
# +
labels.df$one_p <- as.logical(X.df[rownames(labels.df),'X1p_del'])
# Run Differential Expression
de.df <- RunDiffExprAnalysisLimma(counts.df, labels.df[,c('one_p'), drop=FALSE], genes.df=gene.df)
write.table(de.df, file.path(DE.DIR, "1p_del_vs_rest.tsv"), quote=FALSE, sep="\t")
# -
# ---
#
# ## 3. Pathways
#
# * Run fGSEA for differential expression results
# * Rank by logFC, logFC * -log10(adj.p-val)
# +
GMTS = c(
"../data/ref/gmts/h.all.v7.0.symbols.gmt.txt",
"../data/ref/gmts/c2.cp.kegg.v7.0.symbols.gmt.txt",
"../data/ref/gmts/mm_sigs_custom.gmt.txt",
"../data/ref/gmts/staudt_2020.gmt.txt"
)
GMT_list <- c(gmtPathways(GMTS[1]),gmtPathways(GMTS[2]),gmtPathways(GMTS[3]))
SEED <- 42
# +
DE.DIR <- "supplement/diffexp"
de.df <- list()
c <- 1
for (file in list.files("supplement/diffexp")){
file.name <- file.path(DE.DIR, file)
de.df[[c]] <- read.table(file.name, sep='\t', header=T, row.names=1)
de.df[[c]]$id <- strsplit(file,'\\.')[[1]][1]
c = c+1
}
de.df <- do.call("rbind", de.df)
de.df$id <- gsub("cluster_", "", de.df$id)
# +
# Full Pathway Enrichments
e.df <- runAllGSEA(de.df, GMT_list, seed=SEED)
write.table(e.df, "supplement/table11a_enrich_v1.tsv",sep='\t', quote = T)
# Full Pathway Enrichments for Gene-Sets in Staudt et al
e.staudt.df <- runAllGSEA(de.df, gmtPathways(GMTS[4]), seed=SEED)
write.table(e.staudt.df, "supplement/table11b_enrich_staudt.tsv",sep='\t', quote = T)
# Stricter Pathway Enrichments (ranked by padj*signed FC)
e2.df <- runAllGSEA(de.df, GMT_list, how='padj', seed=SEED)
write.table(e2.df, "supplement/table11c_enrich_v2.tsv",sep='\t', quote = T)
# Stricter Pathway Enrichments (ranked by padj*signed FC) for Gene-Sets in Staudt et al
e.staudt.2.df <- runAllGSEA(de.df, gmtPathways(GMTS[4]), how='padj', seed=SEED)
write.table(e.staudt.2.df, "supplement/table11d_enrich_staudt_v2.tsv",sep='\t', quote = T)
# -
# ### Fig 2C
# +
hm.to.remove <- c(
"HALLMARK_MYOGENESIS",
"HALLMARK_KRAS_SIGNALING_UP",
"HALLMARK_KRAS_SIGNALING_DN",
"HALLMARK_ESTROGEN_RESPONSE_LATE",
"HALLMARK_ESTROGEN_RESPONSE_EARLY",
"HALLMARK_XENOBIOTIC_METABOLISM",
"HALLMARK_ANGIOGENESIS",
"HALLMARK_APICAL_JUNCTION",
"HALLMARK_COAGULATION",
"HALLMARK_UV_RESPONSE_DN",
"HALLMARK_ALLOGRAFT_REJECTION",
"HALLMARK_G2M_CHECKPOINT",
"HALLMARK_SPERMATOGENESIS",
"HALLMARK_UV_RESPONSE_UP"
)
kegg.to.keep <- c(
"KEGG_CYTOSOLIC_DNA_SENSING_PATHWAY",
"KEGG_TGF_BETA_SIGNALING_PATHWAY",
"KEGG_JAK_STAT_SIGNALING_PATHWAY",
"KEGG_RIBOSOME"
)
staudt.plot.pathways <- c(
"B_cell_memory_Newman",
"Blimp_proliferaton_repressed",
"Blood_Modules-3.3_Inflammation-2",
    "Blood_Module-1.5_Myeloid_lineage-1",
"MCL_proliferation_survival",
"Dendritic_cell_CD123pos_blood",
"Dendritic_cell_activated_Newman",
"Macrophage_M1_Newman",
"Macrophage_M2_Newman",
"Monocyte_Newman",
"Cell_cycle_Liu",
"Regulatory_T cell_Newman",
"CD4_T_cell_memory_activated_Newman",
"CD8_T_cell_Newman",
"NK_cell_resting_Newman",
"NK_cell_activated_Newman",
"Gamma_delta_T cell_Newman",
"CD8_T_effectorUp_memoryIm_NaiveDn",
"XBP1_target_secretory"
)
# -
# #### Fig2C Balloon Plot with fGSEA Ranked by LogFC
# +
# Load Enrichment
e.df <- read.table("supplement/table11a_enrich_v1.tsv")
e.stuadt.join.df <- read.table("supplement/table11b_enrich_staudt.tsv",sep='\t',header=T)
# Rename kegg, hallmark, & myeloma signatures
kegg.e.df <- e.df[grepl("^KEGG", e.df$pathway),]
kegg.e.df <- kegg.e.df[kegg.e.df$pathway %in% kegg.to.keep,]
hm.e.df <- e.df[grepl("^HALLMARK", e.df$pathway),]
hm.e.df <- hm.e.df[!hm.e.df$pathway %in% hm.to.remove,]
mm.e.df <- e.df[grepl("^MYELOMA", e.df$pathway),]
mm.e.df <- mm.e.df[grepl("UP$", mm.e.df$pathway),]
# Join Staudt Signatures
e.stuadt.filt.join.df <- e.stuadt.join.df[e.stuadt.join.df$pathway %in% staudt.plot.pathways,]
# Group Names
kegg.e.df$grouping <- "Cellular Pathways"
hm.e.df$grouping <- "Cellular Pathways"
mm.e.df$grouping <- "Myeloma"
e.stuadt.filt.join.df[grepl("Newman$", e.stuadt.filt.join.df$pathway),"grouping"] <- "Immune"
e.stuadt.filt.join.df[!grepl("Newman$", e.stuadt.filt.join.df$pathway),"grouping"] <- "Cellular Pathways"
e.stuadt.filt.join.df[e.stuadt.filt.join.df$pathway=="Dendritic_cell_CD123pos_blood","grouping"] <- "Immune"
e.stuadt.filt.join.df[e.stuadt.filt.join.df$pathway=="CD8_T_effectorUp_memoryIm_NaiveDn","grouping"] <- "Immune"
# Intersect for Common Columns
common_cols <- intersect(colnames(kegg.e.df), colnames(e.stuadt.filt.join.df))
full.e.df <- rbind(
kegg.e.df[,common_cols],
hm.e.df[,common_cols],
mm.e.df[,common_cols],
e.stuadt.filt.join.df[,common_cols]
)
full.e.df$grouping <- as.factor(full.e.df$grouping)
full.e.df$id <- as.factor(full.e.df$id)
# Merge Renamed Pathways
rename.df <- read.table('../data/ref/rename_pathways.tsv',sep='\t',header=T)
x <- merge(full.e.df, rename.df[,c('pathway','new_pathway_name')])
x$pathway <- x$new_pathway_name
x$id <- factor(sapply(x$id, addName), levels = c("HNT", "HMC", "FMD", "HKR", "CND", "HNF"))
# -
plotGSEA_v3 <- function(e_df, pval.thresh=0.1, filter=NULL, palette='RdBu', h=13, w=15, s_color='black', ncol=NA, order_x=TRUE){
e_df$sig <- e_df$padj<pval.thresh
e_df$logpval <- -log10(e_df$padj)
e_df <- e_df[e_df$pathway %in% e_df[e_df$sig,]$pathway,]
if(!is.null(filter)){
e_df <- dplyr::filter(e_df, grepl(filter, pathway))
}
### Order axis by dendrogram
# Load data
X <- e_df[,c('pathway','id','NES')]
X <- reshape(X[,c('pathway','id','NES')], timevar='id', idvar='pathway', direction='wide')
rownames(X) <- X$pathway
X$pathway <- NULL
X[is.na(X)] <- 0
# Build the dendrogram
dend <- as.dendrogram(hclust(d = dist(x = X)))
dendro.plot <- ggdendrogram(dend,rotate = TRUE)
# Use dendrogram order to order columns
if(order_x){
order <- order.dendrogram(dend) # dendrogram order
e_df$pathway <- factor(x = e_df$pathway, levels = unique(e_df$pathway)[order], ordered = TRUE)
}
### Balloonplot
options(repr.plot.width=w, repr.plot.height=h)
p <- ggballoonplot(
e_df,
x="id",
y="pathway",
fill = "NES",
size="logpval",
color=ifelse(e_df$sig==T, s_color, "lightgrey")
) +
scale_fill_distiller(palette=palette, limit = max(abs(e_df$NES)) * c(-1, 1))+
labs(x="", y="", fill="Enrichment", size="-log10 Adj. P-val") + theme_linedraw() +
theme(axis.text.x=element_text(angle=45, hjust=1),
axis.text.y=element_text(size=14),
legend.box = "horizontal"
)+ coord_flip() + facet_grid(. ~ grouping, scales = "free", space = "free")
return(p)
}
x$id <- factor(x$id, levels=c("HNF","CND","HKR","FMD","HMC","HNT"))
x$id <- factor(x$id, levels=c("FMD","HKR","HMC","HNF","HNT","CND"))
# +
sigs <- c(
"CD-1 Myeloma Signature",
"CD-2 Myeloma Signature",
"PR Myeloma Signature",
"HP Myeloma Signature",
"MS Myeloma Signature",
"PRL3 Myeloma Signature",
"LB Myeloma Signature",
"MF Myeloma Signature"
)
mye <- x[x$grouping=="Myeloma",]
mye <- mye[mye$pathway %in% sigs, ]
rest <- x[x$grouping=="Cellular Pathways",]
# -
mye$pathway <- factor(mye$pathway, levels=sigs)
pdf("figures/pathway_balloon_figure_horizontal_v2.pdf", width=7, height=4)
plotGSEA_v3(mye, h=4, w=7, order_x=F)
dev.off()
pdf("figures/pathway_balloon_figure_horizontal.pdf", width=13, height=4)
plotGSEA_v3(rest, h=4, w=13)
dev.off()
pdf("figures/fig2h_balloon_figure.pdf", width=7, height=17)
plotGSEA_v2(x, h=18, w=8)
dev.off()
# #### Fig2C Balloon Plot with fGSEA Ranked by LogFC * -log10(adj. pval)
# +
# Load Enrichment
e.df <- read.table("supplement/table11c_enrich_v2.tsv")
e.stuadt.join.df <- read.table("supplement/table11d_enrich_staudt_v2.tsv",sep='\t',header=T)
# Filter KEGG, Hallmark, & myeloma signatures
kegg.e.df <- e.df[grepl("^KEGG", e.df$pathway),]
kegg.e.df <- kegg.e.df[kegg.e.df$pathway %in% kegg.to.keep,]
hm.e.df <- e.df[grepl("^HALLMARK", e.df$pathway),]
hm.e.df <- hm.e.df[!hm.e.df$pathway %in% hm.to.remove,]
mm.e.df <- e.df[grepl("^MYELOMA", e.df$pathway),]
mm.e.df <- mm.e.df[grepl("UP$", mm.e.df$pathway),]
# Join Staudt Signatures
e.stuadt.filt.join.df <- e.stuadt.join.df[e.stuadt.join.df$pathway %in% staudt.plot.pathways,]
# Group Names
kegg.e.df$grouping <- "Cellular Pathways"
hm.e.df$grouping <- "Cellular Pathways"
mm.e.df$grouping <- "Myeloma"
e.stuadt.filt.join.df[grepl("Newman$", e.stuadt.filt.join.df$pathway),"grouping"] <- "Immune"
e.stuadt.filt.join.df[!grepl("Newman$", e.stuadt.filt.join.df$pathway),"grouping"] <- "Cellular Pathways"
e.stuadt.filt.join.df[e.stuadt.filt.join.df$pathway=="Dendritic_cell_CD123pos_blood","grouping"] <- "Immune"
e.stuadt.filt.join.df[e.stuadt.filt.join.df$pathway=="CD8_T_effectorUp_memoryIm_NaiveDn","grouping"] <- "Immune"
# Intersect for Common Columns
common_cols <- intersect(colnames(kegg.e.df), colnames(e.stuadt.filt.join.df))
full.e.df <- rbind(
kegg.e.df[,common_cols],
hm.e.df[,common_cols],
mm.e.df[,common_cols],
e.stuadt.filt.join.df[,common_cols]
)
full.e.df$grouping <- as.factor(full.e.df$grouping)
full.e.df$id <- as.factor(full.e.df$id)
# Merge Renamed Pathways
rename.df <- read.table('../data/ref/rename_pathways.tsv',sep='\t',header=T)
x <- merge(full.e.df, rename.df[,c('pathway','new_pathway_name')])
x$pathway <- x$new_pathway_name
x$id <- factor(sapply(x$id, addName), levels = c("HNT", "HMC", "FMD", "HKR", "CND", "HNF"))
# -
pdf("figures/figS4a_balloon_figure_v2.pdf", width=7, height=17)
plotGSEA_v2(x, h=18, w=8)
dev.off()
# #### Supplemental plots of all other pathways
e.df <- read.table("supplement/table11a_enrich_v1.tsv")
e.df$id <- factor(sapply(e.df$id, addName), levels = c("HNT", "HMC", "FMD", "HKR", "CND", "HNF"))
# +
pdf("figures/figS5a_balloon_hallmark.pdf", width=8, height=10)
plotGSEA(e.df, filter='HALLMARK', w=10, h=10, fix_id=F)
dev.off()
pdf("figures/figS5b_balloon_kegg.pdf", width=10, height=18)
plotGSEA(e.df, filter='KEGG', w=10, h=20, fix_id=F)
dev.off()
e.df.mm <- dplyr::filter(e.df, grepl("MYELOMA", pathway))
pdf("figures/figS5c_balloon_myeloma_up.pdf", width=8, height=5)
plotGSEA(e.df.mm, filter='UP', w=8, h=5, fix_id=F)
dev.off()
pdf("figures/figS5d_balloon_myeloma_down.pdf", width=8, height=5)
plotGSEA(e.df.mm, filter='DOWN', w=8, h=5, fix_id=F)
dev.off()
# -
# ---
# ## 4. Myeloma Signatures
#
# * Mean expression of genes in Sonneveld + Zhan signatures
sigs.full.df <- read.table("supplement/table10c_combined_mean_sigs.tsv",sep='\t',header=T,row.names=1)
sigs.full.df$consensus_nmf <- factor(sapply(sigs.full.df$consensus_nmf, addName), levels = c("HNT", "HMC", "FMD", "HKR", "CND", "HNF"))
# +
p1 <- plotSig(sigs.full.df, "CD.2", list( c("HNT","CND"), c("HMC","CND"), c("FMD","CND"), c("HKR","CND"), c("HNF","CND")), label.y=1, label.x=4.5, ylim.min=1, label.pos='none')
p2 <- plotSig(sigs.full.df, "CD.1", list( c("HNT","CND"), c("HMC","CND"), c("FMD","CND"), c("HKR","CND"), c("HNF","CND")), label.y=1, label.x=4.5, ylim.min=1, label.pos='none')
p3 <- plotSig(sigs.full.df, "LB", list(c("FMD","CND"), c("CND","HNF"),c("HKR","HNF")), label.y=1, label.x=4.5, ylim.min=1, label.pos='none')
p4 <- plotSig(sigs.full.df, "HP", list( c("HNT","HMC"), c("HNT","FMD"), c("HNT","CND"), c("HNT","HKR"), c("HNT","HNF")), label.x=4.5, label.y=1.5, ylim.min=1.5, label.pos='none')
p5 <- plotSig(sigs.full.df, "MS", list( c("HNT","FMD"), c("HMC","FMD"), c("FMD","HNF"), c("FMD","HKR"), c("FMD","CND")), label.x=4.5, label.y=0, label.pos='none')
p6 <- plotSig(sigs.full.df, "PRL3", list( c("HNT","FMD"), c("HMC","FMD"), c("CND","HNF"), c("CND","HKR"), c("FMD","CND")), label.x=4.5, label.y=0.75, ylim.min=.75, label.pos='none')
pdf('figures/figS4b_myeloma_mean_sigs.pdf', width=8, height=30)
ggarrange(p1,p2,p3,p4,p5,p6,labels = c("A", "B", "C", "D", "E", "F"), ncol = 1, nrow = 6)
dev.off()
| Fig2/2_diffexp_pathways_rna.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Finding the store closest to a given station (latitude/longitude)
#
# - We have latitude/longitude information for 1000 stores (a CSV file).
# - Among those 1000 stores, find the one closest to a given station (latitude: 35.6697 / longitude: 139.76714).
# - That store's [answer] field is the answer to Question 1.
#
# ## Answer & Explanation
# Latitude and longitude are one way of expressing a location numerically.
#
# - Japanese geodetic datum: a system unique to Japan; it was migrated to the world geodetic system in 2002
# - World geodetic system: the internationally agreed reference geodetic system
#
# As you know, the Earth is not flat but a sphere (more precisely, an ellipsoid), so the calculation needs to account for the curved surface.
#
# This sample solution uses GeoPy, a Python geocoding library.
# GeoPy computes geodesic distances with Karney's method, a relatively new algorithm.
# https://geopy.readthedocs.io/en/stable/#module-geopy.distance
#
# ## Notes
# For this particular problem, treating the surface as a plane also yields the correct answer.
# Computing the distance between the two points with the Pythagorean theorem is an acceptable alternative.
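# The geodesic idea can also be sketched without GeoPy, using the simpler haversine formula. Note this assumes a spherical Earth, so it is slightly less accurate than Karney's ellipsoidal method; the 0.01-degree offset point below is made up purely for illustration.

```python
# Haversine great-circle distance (spherical-Earth approximation).
import math

def haversine_km(lat1, lon1, lat2, lon2):
    R = 6371.0  # mean Earth radius in km
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2
    return 2 * R * math.asin(math.sqrt(a))

station = (35.6697, 139.76714)
# A point 0.01 degrees further north should be roughly 1.11 km away
d = haversine_km(*station, 35.6797, 139.76714)
```

# For distances of a few kilometres, the difference between this and GeoPy's `geodesic` is well below the spacing between stores, which is why either approach works here.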
import pandas as pd
from geopy.distance import geodesic
import math
df = pd.read_csv("retty_gourmet_open_q1.csv")
df.head()
# Compute the distance between each store and the station (latitude: 35.6697 / longitude: 139.76714)
# (1) Solution using GeoPy
# +
station = (35.6697, 139.76714)
ans = []
dis = []
for row in df.itertuples():
ans.append(row[1])
restaurant = (row[2], row[3])
t_dis = geodesic(station, restaurant).km
dis.append(t_dis)
df_dis = pd.DataFrame(index=[], columns=['answer', 'distance'])
df_dis['answer'] = ans
df_dis['distance'] = dis
print(df_dis.sort_values('distance').iloc[[0]])
# -
# (2) Treating the surface as a plane and computing the distance with the Pythagorean theorem
# +
station = (35.6697, 139.76714)
ans = []
dis = []
def get_distance(x1, y1, x2, y2):
d = math.sqrt((x2 - x1) ** 2 + (y2 - y1) ** 2)
return d
for row in df.itertuples():
ans.append(row[1])
t_dis = get_distance(row[2], row[3], 35.6697, 139.76714)
dis.append(t_dis)
df_dis = pd.DataFrame(index=[], columns=['answer', 'distance'])
df_dis['answer'] = ans
df_dis['distance'] = dis
print(df_dis.sort_values('distance').iloc[[0]])
# -
| 20210523/question1/q1_answer.ipynb |
# ##### Copyright 2020 The OR-Tools Authors.
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# # einav_puzzle2
# <table align="left">
# <td>
# <a href="https://colab.research.google.com/github/google/or-tools/blob/master/examples/notebook/contrib/einav_puzzle2.ipynb"><img src="https://raw.githubusercontent.com/google/or-tools/master/tools/colab_32px.png"/>Run in Google Colab</a>
# </td>
# <td>
# <a href="https://github.com/google/or-tools/blob/master/examples/contrib/einav_puzzle2.py"><img src="https://raw.githubusercontent.com/google/or-tools/master/tools/github_32px.png"/>View source on GitHub</a>
# </td>
# </table>
# First, you must install [ortools](https://pypi.org/project/ortools/) package in this colab.
# !pip install ortools
# +
# Copyright 2010 <NAME> <EMAIL>
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
A programming puzzle from Einav in Google CP Solver.
From
'A programming puzzle from Einav'
http://gcanyon.wordpress.com/2009/10/28/a-programming-puzzle-from-einav/
'''
My friend Einav gave me this programming puzzle to work on. Given
this array of positive and negative numbers:
33 30 -10 -6 18 7 -11 -23 6
...
-25 4 16 30 33 -23 -4 4 -23
You can flip the sign of entire rows and columns, as many of them
as you like. The goal is to make all the rows and columns sum to positive
numbers (or zero), and then to find the solution (there are more than one)
that has the smallest overall sum. So for example, for this array:
33 30 -10
-16 19 9
-17 -12 -14
You could flip the sign for the bottom row to get this array:
33 30 -10
-16 19 9
17 12 14
Now all the rows and columns have positive sums, and the overall total is
108.
But you could instead flip the second and third columns, and the second
row, to get this array:
33 -30 10
16 19 9
-17 12 14
All the rows and columns still total positive, and the overall sum is just
66. So this solution is better (I don't know if it's the best)
A pure brute force solution would have to try over 30 billion solutions.
I wrote code to solve this in J. I'll post that separately.
'''
Compare with the following models:
* MiniZinc http://www.hakank.org/minizinc/einav_puzzle.mzn
* SICStus: http://hakank.org/sicstus/einav_puzzle.pl
Note:
This is <NAME>'s variant of einav_puzzle.py.
He removed some of the decision variables and made it more efficient.
Thanks!
This model was created by <NAME> (<EMAIL>)
Also see my other Google CP Solver models:
http://www.hakank.org/google_or_tools/
"""
from __future__ import print_function
from ortools.constraint_solver import pywrapcp
# Create the solver.
solver = pywrapcp.Solver("Einav puzzle")
#
# data
#
# small problem
# rows = 3;
# cols = 3;
# data = [
# [ 33, 30, -10],
# [-16, 19, 9],
# [-17, -12, -14]
# ]
# Full problem
rows = 27
cols = 9
data = [[33, 30, 10, -6, 18, -7, -11, 23, -6],
[16, -19, 9, -26, -8, -19, -8, -21, -14],
[17, 12, -14, 31, -30, 13, -13, 19, 16],
[-6, -11, 1, 17, -12, -4, -7, 14, -21],
[18, -31, 34, -22, 17, -19, 20, 24, 6],
[33, -18, 17, -15, 31, -5, 3, 27, -3],
[-18, -20, -18, 31, 6, 4, -2, -12, 24],
[27, 14, 4, -29, -3, 5, -29, 8, -12],
[-15, -7, -23, 23, -9, -8, 6, 8, -12],
[33, -23, -19, -4, -8, -7, 11, -12, 31],
[-20, 19, -15, -30, 11, 32, 7, 14, -5],
[-23, 18, -32, -2, -31, -7, 8, 24, 16],
[32, -4, -10, -14, -6, -1, 0, 23, 23],
[25, 0, -23, 22, 12, 28, -27, 15, 4],
[-30, -13, -16, -3, -3, -32, -3, 27, -31],
[22, 1, 26, 4, -2, -13, 26, 17, 14],
[-9, -18, 3, -20, -27, -32, -11, 27, 13],
[-17, 33, -7, 19, -32, 13, -31, -2, -24],
[-31, 27, -31, -29, 15, 2, 29, -15, 33],
[-18, -23, 15, 28, 0, 30, -4, 12, -32],
[-3, 34, 27, -25, -18, 26, 1, 34, 26],
[-21, -31, -10, -13, -30, -17, -12, -26, 31],
[23, -31, -19, 21, -17, -10, 2, -23, 23],
[-3, 6, 0, -3, -32, 0, -10, -25, 14],
[-19, 9, 14, -27, 20, 15, -5, -27, 18],
[11, -6, 24, 7, -17, 26, 20, -31, -25],
[-25, 4, -16, 30, 33, 23, -4, -4, 23]]
#
# variables
#
x = {}
for i in range(rows):
for j in range(cols):
x[i, j] = solver.IntVar(-100, 100, "x[%i,%i]" % (i, j))
x_flat = [x[i, j] for i in range(rows) for j in range(cols)]
row_signs = [solver.IntVar([-1, 1], "row_signs(%i)" % i) for i in range(rows)]
col_signs = [solver.IntVar([-1, 1], "col_signs(%i)" % j) for j in range(cols)]
#
# constraints
#
for i in range(rows):
for j in range(cols):
solver.Add(x[i, j] == data[i][j] * row_signs[i] * col_signs[j])
total_sum = solver.Sum([x[i, j] for i in range(rows) for j in range(cols)])
#
# Note: In einav_puzzle.py row_sums and col_sums are decision variables.
#
# row sums
row_sums = [
solver.Sum([x[i, j] for j in range(cols)]).Var() for i in range(rows)
]
# >= 0
for i in range(rows):
row_sums[i].SetMin(0)
# column sums
col_sums = [
solver.Sum([x[i, j] for i in range(rows)]).Var() for j in range(cols)
]
for j in range(cols):
col_sums[j].SetMin(0)
# objective
objective = solver.Minimize(total_sum, 1)
#
# search and result
#
db = solver.Phase(col_signs + row_signs, solver.CHOOSE_MIN_SIZE_LOWEST_MIN,
solver.ASSIGN_MAX_VALUE)
solver.NewSearch(db, [objective])
num_solutions = 0
while solver.NextSolution():
num_solutions += 1
print("Sum =", objective.Best())
print("row_sums:", [row_sums[i].Value() for i in range(rows)])
print("col_sums:", [col_sums[j].Value() for j in range(cols)])
for i in range(rows):
for j in range(cols):
print("%3i" % x[i, j].Value(), end=" ")
print()
print()
solver.EndSearch()
print()
print("num_solutions:", num_solutions)
print("failures:", solver.Failures())
print("branches:", solver.Branches())
print("WallTime:", solver.WallTime())
| examples/notebook/contrib/einav_puzzle2.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3.9.7 ('base')
# language: python
# name: python3
# ---
# # Analysis Notebook
# ## Dependencies
#
# Let's first load the dependencies.
#Import packages.
import wandb
import pandas as pd
api = wandb.Api()
import matplotlib.pyplot as plt
import numpy as np
plt.style.use('seaborn')
# # Function to load from W&B
#
# To load the data from WandB, let's write a function.
#Function that loads a sweep from W&B.
def load_sweep(sweep_id, warnings=True):
sweep = api.sweep(f"wakewordteam/02460AdvancedML/{sweep_id}")
sweep_runs = sweep.runs
summary_list, config_list, name_list = [], [], []
for run in sweep_runs:
# .summary contains the output keys/values for metrics like accuracy.
# We call ._json_dict to omit large files
summary_list.append(run.summary._json_dict)
# .config contains the hyperparameters.
# We remove special values that start with _.
config_list.append(
{k: v for k,v in run.config.items()
if not k.startswith('_')})
# .name is the human-readable name of the run.
name_list.append(run.name)
runs_df = pd.DataFrame({
"summary": summary_list,
"config": config_list,
"name": name_list
})
#Discard the ones that are currently running.
runs_df = runs_df.loc[runs_df['summary'].apply(lambda x : len(x) != 0)]
config_elements = list(runs_df['config'].iloc[0].keys())
summary_elements = list(runs_df['summary'].iloc[0].keys())
columns = [f'config_{x}' for x in config_elements]
columns += [f'summary_{x}' for x in summary_elements if x not in config_elements]
def try_extract(d, k):
#Try to extract a value from a dictionary; return None if the key is missing.
try:
return d[k]
except:
if warnings:
print(f'Could not extract {k} from {d}')
return None
for col in columns:
if 'config_' in col:
runs_df[col] = runs_df['config'].apply(lambda x : try_extract(x, col.replace('config_','')))
elif 'summary_' in col:
runs_df[col] = runs_df['summary'].apply(lambda x : try_extract(x, col.replace('summary_','')))
else:
continue
runs_df.drop(['summary', 'config'], axis=1, inplace=True)
runs_df.reset_index(drop=True, inplace=True)
return runs_df
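# The column-expansion step above (turning the nested config/summary dicts into flat `config_*`/`summary_*` columns) can be illustrated without a live W&B connection; the run records below are made up for illustration:

```python
# Expand nested 'config'/'summary' dicts into flat per-run records,
# mirroring what load_sweep does with the DataFrame columns.
runs = [
    {"name": "run-1", "config": {"mu": 0.1, "A": 30}, "summary": {"test_acc": 0.84}},
    {"name": "run-2", "config": {"mu": 0.5, "A": 30}, "summary": {"test_acc": 0.81}},
]

flat = []
for run in runs:
    row = {"name": run["name"]}
    row.update({f"config_{k}": v for k, v in run["config"].items()})
    row.update({f"summary_{k}": v for k, v in run["summary"].items()})
    flat.append(row)
```

# Passing `flat` to `pd.DataFrame(flat)` would give the same shape of table that `load_sweep` returns.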
# # Experiment 1
# ## Sweep over $(\textrm{A}, \mu)$
#
# Here, we want to make plot Figure 2a in the paper.
#
# +
plt.style.use('seaborn')
runs_df = load_sweep('9t6u397w', warnings=False) #(A, mu, seed=0..10) for GCN.
runs_df = runs_df.loc[runs_df['config_seed'].isin(np.arange(1,11))] #We accidentally ran seeds 0, 1, ..., 10, so we drop seed 0.
runs_df['summary_test_acc'] = runs_df['summary_test_acc'] * 100
sweep_mu_A_df = runs_df.groupby(['config_A', 'config_mu']).agg(
test_acc_mean = ('summary_test_acc', 'mean'),
test_acc_std = ('summary_test_acc', 'std')
).reset_index().groupby(['config_mu']).agg(lambda x : list(x)).reset_index()
plt.figure(figsize=(10,6))
#Set title and align it on the left side.
#plt.title('Test Accuracy ', fontsize=20, y=1.05)
for _, (config_mu, config_A, test_acc_mean, test_acc_std) in sweep_mu_A_df.iterrows():
plt.errorbar(config_A, test_acc_mean, test_acc_std, label=r'GCN, $\mu$ = ' + str(config_mu), capsize=5, elinewidth=3, markeredgewidth=3)
plt.legend(fontsize=20, loc='lower right')
plt.xticks(fontsize=20)
plt.yticks(fontsize=20)
break #Only take mu=0
runs_df = load_sweep('i0bg9s2c', warnings=False)
runs_df = runs_df.loc[runs_df['config_seed'].isin(np.arange(1,11))] #We accidentally ran seeds 0, 1, ..., 10, so we drop seed 0.
runs_df['summary_test_acc'] = runs_df['summary_test_acc'] * 100
sweep_mu_A_df = runs_df.groupby(['config_A', 'config_mu']).agg(
test_acc_mean = ('summary_test_acc', 'mean'),
test_acc_std = ('summary_test_acc', 'std')
).reset_index().groupby(['config_mu']).agg(lambda x : list(x)).reset_index()
for _, (config_mu, config_A, test_acc_mean, test_acc_std) in sweep_mu_A_df.iterrows():
plt.errorbar(config_A, test_acc_mean, test_acc_std, label=r'GAT, $\mu$ = ' + str(config_mu), capsize=5, elinewidth=3, markeredgewidth=3)
plt.legend(fontsize=20, loc='lower right')
plt.xticks(fontsize=20)
plt.yticks(fontsize=20)
break #Only take mu=0
#Set xticks to mu_A.
plt.xticks(runs_df['config_A'].unique())
plt.xlabel('Num. of training nodes per class', fontsize=20)
plt.ylabel('Test Accuracy (%)', fontsize=20)
plt.ylim(67.5, 87.5)
#Save figure and remove whitespace.
plt.tight_layout()
plt.savefig('figures/sweep_mu_A.png', dpi=300)
# +
runs_df = load_sweep('9t6u397w', warnings=False) #(A, mu, seed=0..10) for GCN.
runs_df = runs_df.loc[runs_df['config_seed'].isin(np.arange(1,11))] #We accidentally ran seeds 0, 1, ..., 10, so we drop seed 0.
runs_df['summary_test_acc'] = runs_df['summary_test_acc'] * 100
X_df = runs_df.loc[runs_df['config_A'] >= 30].groupby(['config_mu','config_A']).agg(test_acc_mean = ('summary_test_acc', 'mean'), test_acc_std = ('summary_test_acc', 'std')).reset_index()\
.pivot(index='config_mu', columns='config_A', values='test_acc_mean')
x, y = X_df.index.to_numpy(), X_df.columns.to_numpy()
#Make meshgrid of x and y
X, Y = np.meshgrid(x, y)
#Plot X_df on top of image
plt.figure(figsize=(20,10))
#Plot heatmap on top of X and Y grid.
plt.imshow(X_df.T, cmap='viridis', interpolation='none', origin='lower', aspect='auto')
#Remove grid.
plt.grid(False)
#Set xtick labels.
plt.xticks(np.arange(len(x)), [f'{x_tick:.2f}' for x_tick in x], fontsize=20)
#Set ytick labels.
plt.yticks(np.arange(len(y)), y, fontsize=20)
plt.xlabel('Regularisation strength $\mu$', fontsize=40, labelpad=6)
plt.ylabel('Num. of training nodes per class', fontsize=40, labelpad=6)
#Set colorbar with same height of image and more ticks.
cbar = plt.colorbar(fraction=0.046, pad=0.04)
cbar.set_ticks(np.arange(79, 87, 0.5))
#Set colorbar ticksize
cbar.ax.tick_params(labelsize=20)
#Set title for the colorbar
cbar.set_label('Test Accuracy (%)', fontsize=20)
#Rotate colorbar label
cbar.ax.set_ylabel(cbar.ax.get_ylabel(), rotation=270, fontsize=40, labelpad=50)
plt.tight_layout()
plt.savefig('figures/A_mu_image.png', dpi=300)
# -
X_df
# ## $(\mu, \alpha)$-sweep
# +
runs_df = load_sweep('5pksx5v2', warnings=False)
#TODO: Append sweep e399eurp to this dataframe!
X_df = runs_df.groupby(['config_mu','config_unmask-alpha']).agg(test_acc_mean = ('summary_test_acc', 'mean'), test_acc_std = ('summary_test_acc', 'std')).reset_index()\
.pivot(index='config_mu', columns='config_unmask-alpha', values='test_acc_mean')
x, y = X_df.index.to_numpy(), X_df.columns.to_numpy()
#Make meshgrid of x and y
X, Y = np.meshgrid(x, y)
#Plot X_df on top of meshgrid
plt.figure(figsize=(20,10))
plt.title(r'Test Accuracy vs. $\mu$ and $\alpha$ for $A=40$', fontsize=20)
plt.contourf(X, Y, X_df, cmap='viridis', alpha=1)
plt.xlabel(r'$\mu$' + '\nRegularisation strength', fontsize=20)
plt.ylabel('Unmask P-reg Ratio\n' + r'$\alpha$', fontsize=20)
plt.scatter(X.flatten(), Y.flatten(), c=X_df.values.flatten(), cmap='viridis', s=20)
plt.colorbar()
plt.xticks(fontsize=20)
plt.yticks(fontsize=20)
plt.show()
# -
# The same sweep, but plotting the standard deviation of the test accuracy:
# +
runs_df = load_sweep('5pksx5v2', warnings=False)
X_df = runs_df.groupby(['config_mu','config_unmask-alpha']).agg(test_acc_mean = ('summary_test_acc', 'mean'), test_acc_std = ('summary_test_acc', 'std')).reset_index()\
.pivot(index='config_mu', columns='config_unmask-alpha', values='test_acc_std')
x, y = X_df.index.to_numpy(), X_df.columns.to_numpy()
#Make meshgrid of x and y
X, Y = np.meshgrid(x, y)
#Plot X_df on top of meshgrid
plt.figure(figsize=(20,10))
plt.title(r'Test Accuracy Std. Dev. vs. $\mu$ and $\alpha$ for $A=40$', fontsize=20)
plt.contourf(X, Y, X_df, cmap='viridis', alpha=1)
plt.xlabel(r'$\mu$' + '\nRegularisation strength', fontsize=20)
plt.ylabel('Unmask P-reg Ratio\n' + r'$\alpha$', fontsize=20)
plt.scatter(X.flatten(), Y.flatten(), c=X_df.values.flatten(), cmap='viridis', s=20)
plt.colorbar()
plt.xticks(fontsize=20)
plt.yticks(fontsize=20)
plt.show()
# -
# ## ($\mu$, seed)-sweep
# +
runs_df = load_sweep('3qss1stl', warnings=False)
for n, (_, config_mu, summary_test_acc) in runs_df.groupby(['config_A', 'config_mu'])['summary_test_acc'].apply(list).reset_index().iterrows():
if n%2 == 0:
plt.hist(summary_test_acc, bins=20, label=r'$\mu$ = ' + str(config_mu), alpha=0.3)
else:
continue
plt.legend()
plt.show()
# -
# ## (A, $\kappa$)-sweep
runs_df = load_sweep('xe327qk5', warnings=True)
# +
import seaborn as sns
heat_matrix = runs_df.groupby(['config_A', 'config_kappa']).agg(test_acc_mean = ('summary_test_acc', 'mean'), test_acc_std = ('summary_test_acc', 'std')).reset_index()\
.pivot(index='config_A', columns='config_kappa', values='test_acc_mean')
sns.heatmap(heat_matrix, cmap='viridis')
# -
# ## Cora
# ### A and Accuracy for $\mu = 0$
# +
runs_df = load_sweep('8g92nt0k')
cora_experiment = runs_df.loc[
(runs_df['config_dataset'] == 'Cora')
]
print('Shape of the dataset:')
print(cora_experiment.shape)
print('')
print('Columns:')
print(cora_experiment.columns)
print('')
experiment = cora_experiment.loc[cora_experiment['config_mu'] == 0].groupby(['config_A']).agg(
test_acc_mean = ('summary_test_acc', lambda x : x.mean()),
test_acc_std = ('summary_test_acc', lambda x : x.std()),
).reset_index()
plt.figure(figsize=(15, 7))
plt.errorbar(x=experiment['config_A'].to_numpy(), y=experiment['test_acc_mean'].to_numpy(), yerr=experiment['test_acc_std'].to_numpy())
plt.xlabel('Num. of training nodes per class', fontsize=20)
plt.ylabel('Test Accuracy', fontsize=20)
plt.xlim([0, 110])
plt.xticks(np.arange(10, 110, 10), fontsize=20)
plt.title('Cora Dataset', fontsize=20)
plt.show()
# -
# ### $\mu$ and Accuracy.
# +
runs_df = load_sweep('8g92nt0k')
cora_experiment = runs_df.loc[
(runs_df['config_dataset'] == 'Cora')
]
print('Shape of the dataset:')
print(cora_experiment.shape)
print('')
print('Columns:')
print(cora_experiment.columns)
print('')
experiment = cora_experiment.groupby(['config_A', 'config_mu']).agg(
test_acc_mean = ('summary_test_acc', lambda x : x.mean()),
test_acc_std = ('summary_test_acc', lambda x : x.std()),
).reset_index().groupby(['config_A']).agg(
mu = ('config_mu', list),
test_acc_mean = ('test_acc_mean', list),
test_acc_std = ('test_acc_std', list)
).reset_index()
plt.figure(figsize=(15, 7))
plt.style.use('seaborn')
for _, row in experiment.iterrows():
A = row['config_A']
mu = row['mu']
test_acc_mean = row['test_acc_mean']
test_acc_std = row['test_acc_std']
plt.errorbar(mu, test_acc_mean, yerr=test_acc_std, label=f'Training nodes per class: {A}')
plt.xlabel('$\mu$', fontsize=20)
plt.ylabel('Test Accuracy', fontsize=20)
plt.legend(fontsize=20)
plt.show()
# -
# # Unit Testing
# +
import torch
from torch_geometric.data import Data
from torch_geometric.datasets import Planetoid
from sklearn.metrics import mean_squared_error, roc_auc_score, accuracy_score
import sys
sys.path.append('../src/')
from GCN import GCN
from get_masks import get_masks
from PRegLoss import PRegLoss
import matplotlib as mpl
mpl.rcParams.update(mpl.rcParamsDefault)
# -
for mu in [0.25, 0.5, 1]:
dataset = Planetoid(root=f'/tmp/Cora', name='Cora')
#Unpack the dataset to get the data.
data = dataset[0]
model = GCN(num_node_features = dataset.num_node_features, num_classes = dataset.num_classes)
#Then args.B is not none either.
train_mask, val_mask, test_mask = get_masks(30, 0, dataset)
optimizer = torch.optim.Adam(model.parameters(), lr=0.01, weight_decay=5e-4)
from bhtsne import tsne
model.eval()
p_before = torch.softmax(model(data), dim=1).detach().numpy().flatten()
X_before = model(data).detach().numpy().astype('float64')
Y_before = tsne(X_before)
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
loss_fn = torch.nn.CrossEntropyLoss()
preg_loss_fn = PRegLoss(phi = 'squared_error', edge_index = data.edge_index, device=device)
model.train()
for epoch in range(200):
optimizer.zero_grad()
out = model(data)
loss = loss_fn(out[train_mask], data.y[train_mask])
loss.backward()
optimizer.step()
model.eval()
p_after_train = torch.softmax(model(data), dim=1).detach().numpy().flatten()
X_after_train = model(data).detach().numpy().astype('float64')
Y_after_train = tsne(X_after_train)
model.train()
for epoch in range(200):
optimizer.zero_grad()
out = model(data)
loss = mu * preg_loss_fn(out)
loss.backward()
optimizer.step()
model.eval()
p_after_preg = torch.softmax(model(data), dim=1).detach().numpy().flatten()
X_after_preg = model(data).detach().numpy().astype('float64')
Y_after_preg = tsne(X_after_preg)
plt.figure(figsize=(20,9))
plt.subplot(2,3,1)
plt.scatter(Y_before[:, 0], Y_before[:, 1], c = data.y)
plt.xticks([])
plt.yticks([])
plt.title('Before Training')
plt.subplot(2,3,2)
plt.scatter(Y_after_train[:, 0], Y_after_train[:, 1], c = data.y)
plt.xticks([])
plt.yticks([])
plt.title('After Training')
plt.subplot(2,3,3)
plt.scatter(Y_after_preg[:, 0], Y_after_preg[:, 1], c = data.y)
plt.xticks([])
plt.yticks([])
plt.title('After P-Reg')
plt.subplot(2,3,4)
plt.hist(p_before, bins='auto')
plt.subplot(2,3,5)
plt.hist(p_after_train, bins='auto')
plt.subplot(2,3,6)
plt.hist(p_after_preg, bins='auto')
plt.suptitle(f'$\mu = {mu}$', fontsize=25)
plt.show()
# # Squared Error
import torch
AZ = torch.tensor([[1,2,3],[4,5,6]]).double()
Z = torch.tensor([[4,5,6],[7,8,9]]).double()
(torch.norm(AZ - Z, p=2, dim=1)**2).sum()
# +
import numpy as np
np.sqrt(3**2 + 3**2 + 3**2)
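# The two cells above can be checked by hand: every element-wise difference is -3, so each row's squared L2 norm is 27 and the summed squared error is 54; a torch-free check:

```python
# Pure-Python check of the squared-error computation above:
# row-wise squared L2 norms of (AZ - Z), summed.
AZ = [[1, 2, 3], [4, 5, 6]]
Z = [[4, 5, 6], [7, 8, 9]]

row_sq = [sum((a - z) ** 2 for a, z in zip(ra, rz)) for ra, rz in zip(AZ, Z)]
total = sum(row_sq)
```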
# +
dataset = Planetoid(root=f'/tmp/Cora', name='Cora')
#Unpack the dataset to get the data.
data = dataset[0]
model = GCN(num_node_features = dataset.num_node_features, num_classes = dataset.num_classes)
#Then args.B is not none either.
train_mask, val_mask, test_mask = get_masks(30, 0, dataset)
optimizer = torch.optim.Adam(model.parameters(), lr=0.01, weight_decay=5e-4)
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
loss_fn = torch.nn.CrossEntropyLoss()
preg_loss_fn = PRegLoss(phi = 'squared_error', edge_index = data.edge_index, device=device)
model.train()
sq_term = []
ce_term = []
mu = 1e-5
for epoch in range(200):
optimizer.zero_grad()
out = model(data)
sq_err = mu * preg_loss_fn(out)
ce_err = loss_fn(out[train_mask], data.y[train_mask])
loss = ce_err + sq_err
loss.backward()
optimizer.step()
sq_term.append(sq_err.item())
ce_term.append(ce_err.item())
# -
plt.plot(sq_term, label='Squared Error-term')
plt.plot(ce_term, label='Cross Entropy-term')
plt.legend()
# # Cross Entropy P-Reg
# +
runs_df = load_sweep('yjhb5onh', warnings=False)
experiment = runs_df.loc[runs_df['config_phi'] == 'cross_entropy'].groupby(['config_mu']).agg(
test_acc_mean = ('summary_test_acc', lambda x : x.mean()),
test_acc_std = ('summary_test_acc', lambda x : x.std())
).reset_index()
# -
plt.style.use('seaborn')
plt.figure(figsize=(10,5))
plt.errorbar(x=experiment.config_mu, y=experiment.test_acc_mean, yerr=experiment.test_acc_std)
plt.xlabel('$\mu$')
plt.ylabel('Test Accuracy')
plt.title('Cora, 100 training nodes per class, 10 different seeds', loc='left')
plt.show()
tau = 0.1
torch.maximum(torch.tensor([0.0]), preg_loss_fn(out) - tau)
# # Bayesian Hyperparameter Tuning
bayesian_sweep = load_sweep('pn2e49fj')
bayesian_sweep['N'] = bayesian_sweep['name'].apply(lambda x : int(x.split('-')[-1]))
bayesian_sweep.sort_values(by='N', inplace=True)
x = bayesian_sweep['config_mu'].values
y = bayesian_sweep['config_unmask-alpha'].values
#Make a lineplot of (x,y)
plt.plot(x[:100], y[:100], 'r.-', alpha=0.4)
plt.xlabel(r'$\mu$')
plt.ylabel(r'$\alpha$')
plt.show()
| Notebooks/Analysis.ipynb |