# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
from collections import OrderedDict
ordered_dictionary = OrderedDict()
for _ in range(int(input())):
    splits = input().split()
    name, qty = " ".join(splits[:-1]), int(splits[-1])
    # membership test instead of .get(): a stored quantity of 0 is falsy
    if name in ordered_dictionary:
        ordered_dictionary[name] += qty
    else:
        ordered_dictionary[name] = qty
for name, qty in ordered_dictionary.items():
    print(name, qty)
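# The same accumulation can be written more compactly with `collections.Counter`, which also preserves insertion order on Python 3.7+. A sketch with hypothetical sample input:

```python
from collections import Counter

def tally(lines):
    # accumulate quantities per item name; insertion order is preserved
    counts = Counter()
    for line in lines:
        *name_parts, qty = line.split()
        counts[" ".join(name_parts)] += int(qty)
    return counts

print(tally(["BANANA FRIES 12", "POTATO CHIPS 30", "BANANA FRIES 5"]))
```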
# Source file: notebooks/collections.OrderedDict.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Stocking rental bikes
#
# 
#
# You stock bikes for a bike rental company in Austin, ensuring stations have enough bikes for all their riders. You decide to build a model to predict how many riders will start from each station during each hour, capturing patterns in seasonality, time of day, day of the week, etc.
#
# To get started, create a project in GCP and connect to it by running the code cell below. Make sure you have connected the kernel to your GCP account in Settings.
# Set up code checking
from learntools.core import binder
binder.bind(globals())
from learntools.bqml.ex1 import *
# +
# Set your own project id here
PROJECT_ID = ____ # a string, like 'kaggle-bigquery-240818'
from google.cloud import bigquery
client = bigquery.Client(project=PROJECT_ID, location="US")
dataset = client.create_dataset('model_dataset', exists_ok=True)
from google.cloud.bigquery import magics
from kaggle.gcp import KaggleKernelCredentials
magics.context.credentials = KaggleKernelCredentials()
magics.context.project = PROJECT_ID
# -
# ## Linear Regression
#
# Your dataset is quite large. BigQuery is especially efficient with large datasets, so you'll use BigQuery-ML (called BQML) to build your model. BQML uses a "linear regression" model when predicting numeric outcomes, like the number of riders.
#
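# As a quick sketch of what "linear regression" means: it picks the coefficients that minimize squared error. Below are the one-feature ordinary-least-squares formulas with made-up numbers; BQML does the same at scale, over many features:

```python
# fit y = a*x + b by ordinary least squares (closed form for a single feature)
xs = [0, 1, 2, 3, 4]
ys = [1, 3, 5, 7, 9]  # synthetic data lying exactly on y = 2x + 1
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
b = my - a * mx
print(a, b)  # 2.0 1.0
```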
# ## 1) Training vs testing
#
# You'll want to test your model on data it hasn't seen before (for reasons described in the [Intro to Machine Learning Micro-Course](https://www.kaggle.com/learn/intro-to-machine-learning)). What do you think is a good approach to splitting the data? Which data should we use to train the model, and which should we use to test it?
# +
# Uncomment the following line to check the solution once you've thought about the answer
# q_1.solution()
# -
# ## Training data
#
# First, you'll write a query to get the data for model-building. You can use the public Austin bike share dataset from the `bigquery-public-data.austin_bikeshare.bikeshare_trips` table. You predict the number of rides based on the station where the trip starts and the hour when the trip started. Use the `TIMESTAMP_TRUNC` function to truncate the start time to the hour.
# ## 2) Exercise: Query the training data
#
# Write the query to retrieve your training data. The fields should be:
# 1. The start_station_name
# 2. The time trips start, truncated to the hour. Get this with `TIMESTAMP_TRUNC(start_time, HOUR) as start_hour`
# 3. The number of rides starting at the station during the hour. Call this `num_rides`.
# Select only the data before 2018-01-01 (so we can hold out the data from 2018 as testing data).
# +
# Write your query to retrieve the training data
query = ____
# Create the query job. No changes needed below this line
query_job = client.query(query)
# API request - run the query, and return DataFrame. No changes needed
model_data = query_job.to_dataframe()
q_2.check()
# +
# uncomment the lines below to get a hint or solution
# q_2.hint()
# q_2.solution()
# +
## My solution code
query = """
SELECT start_station_name,
TIMESTAMP_TRUNC(start_time, HOUR) as start_hour,
COUNT(bikeid) as num_rides
FROM `bigquery-public-data.austin_bikeshare.bikeshare_trips`
WHERE start_time < "2018-01-01"
GROUP BY start_station_name, start_hour
"""
query_job = client.query(query)
model_data = query_job.to_dataframe()
# -
# You'll want to inspect your data to ensure it looks like what you expect. Run the line below to get a quick view of the data, and feel free to explore it further (if you don't know how to do that, the [Pandas micro-course](https://www.kaggle.com/learn/pandas) might be helpful).
model_data.head(20)
# ## Model creation
#
# Now it's time to turn this data into a model. You'll use the `CREATE MODEL` statement that has a structure like:
#
# ```sql
# CREATE OR REPLACE MODEL `model_dataset.bike_trips`
# OPTIONS(model_type='linear_reg',
# input_label_cols=['label_col'],
# optimize_strategy='batch_gradient_descent') AS
# -- training data query goes here
# SELECT ...
# FROM ...
# WHERE ...
# GROUP BY ...
# ```
#
# The `model_type` and `optimize_strategy` shown here are good parameters to use in general for predicting numeric outcomes with BQML.
#
# **Tip:** Using ```CREATE OR REPLACE MODEL``` rather than just ```CREATE MODEL``` ensures you don't get an error if you want to run this command again without first deleting the model you've created.
# ## 3) Exercise: Create and train the model
#
# Below, write your query to create and train a linear regression model on the training data.
# +
# Write your query to create and train the model
query = ____
# Create the query job. No changes needed below this line
query_job = client.query(query)
# API request - run the query. Models return an empty table. No changes needed
query_job.result()
# +
## My solution
query = """
CREATE OR REPLACE MODEL `model_dataset.bike_trips`
OPTIONS(model_type='linear_reg',
input_label_cols=['num_rides'],
optimize_strategy='batch_gradient_descent') AS
SELECT COUNT(bikeid) as num_rides,
start_station_name,
TIMESTAMP_TRUNC(start_time, HOUR) as start_hour
FROM `bigquery-public-data.austin_bikeshare.bikeshare_trips`
WHERE start_time < "2018-01-01"
GROUP BY start_station_name, start_hour
"""
query_job = client.query(query)
# API request - run the query. Models return an empty table
query_job.result()
q_3.check()
# +
# q_3.solution()
# -
# ## 4) Exercise: Model evaluation
#
# Now that you have a model, evaluate its performance on data from 2018.
# +
# Write your query to evaluate the model
query = "____"
query_job = client.query(query)
# API request - run the query
evaluation_results = query_job.to_dataframe()
evaluation_results
q_4.check()
# +
## My solution
query = """
SELECT *
FROM
ML.EVALUATE(MODEL `model_dataset.bike_trips`, (
SELECT COUNT(bikeid) as num_rides,
start_station_name,
TIMESTAMP_TRUNC(start_time, HOUR) as start_hour
FROM `bigquery-public-data.austin_bikeshare.bikeshare_trips`
WHERE start_time >= "2018-01-01"
GROUP BY start_station_name, start_hour
))
"""
query_job = client.query(query)
# API request - run the query
evaluation_results = query_job.to_dataframe()
evaluation_results
# -
# You should see that the r^2 score here is negative. Negative values indicate that the model is worse than just predicting the mean rides for each example.
#
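# To see why a score below zero is possible, recall R² = 1 − SS_res/SS_tot: any model whose squared errors exceed those of the always-predict-the-mean baseline scores negative. A tiny sketch with made-up numbers:

```python
def r_squared(y_true, y_pred):
    # R^2 = 1 - SS_res / SS_tot, where SS_tot is the error of the mean baseline
    mean = sum(y_true) / len(y_true)
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean) ** 2 for t in y_true)
    return 1 - ss_res / ss_tot

y_true = [10, 12, 14, 16]
print(r_squared(y_true, [13, 13, 13, 13]))  # predicting the mean scores exactly 0.0
print(r_squared(y_true, [2, 3, 4, 5]))      # a systematic underestimate scores well below 0
```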
# ## 5) Theories for poor performance
#
# Why would your model be doing worse than the simplest possible prediction?
#
# **Answer:** It's possible there's something broken in the model algorithm, or the data for 2018 may be quite different from the historical data before it.
# +
## Thought question answer here
# -
# ## 6) Exercise: Looking at predictions
#
# A good way to figure out where your model is going wrong is to look more closely at a small set of predictions. Use your model to predict the number of rides for the 22nd & Pearl station in 2018, then compare the mean predicted riders to the mean actual riders.
# +
# Write the query here
query = "____"
query_job = client.query(query)
# API request - run the query
evaluation_results = query_job.to_dataframe()
evaluation_results
# +
## My solution
query = """
SELECT AVG(ROUND(predicted_num_rides)) as predicted_avg_riders,
AVG(num_rides) as true_avg_riders
FROM
ML.PREDICT(MODEL `model_dataset.bike_trips`, (
SELECT COUNT(bikeid) as num_rides,
start_station_name,
TIMESTAMP_TRUNC(start_time, HOUR) as start_hour
FROM `bigquery-public-data.austin_bikeshare.bikeshare_trips`
WHERE start_time >= "2018-01-01"
AND start_station_name = "22nd & Pearl"
GROUP BY start_station_name, start_hour
))
-- ORDER BY start_hour
"""
query_job = client.query(query)
# API request - run the query
evaluation_results = query_job.to_dataframe()
evaluation_results
# -
# What you should see here is that the model is underestimating the number of rides by quite a bit.
#
# ## 7) Exercise: Average daily rides per station
#
# Either something is wrong with the model or something surprising is happening in the 2018 data.
#
# What could be happening in the data? Write a query to get the average number of riders per station for each year in the dataset, ordered by year so you can see the trend. You can use the `EXTRACT` function to get the day of year and the year from the start time timestamp.
# +
# Write the query here
query = "____"
# Create the query job
query_job = ____
# API request - run the query and return a pandas DataFrame
evaluation_results = ____
evaluation_results
# +
## My solution
query = """
WITH daily_rides AS (
SELECT COUNT(bikeid) AS num_rides,
start_station_name,
EXTRACT(DAYOFYEAR from start_time) AS doy,
EXTRACT(YEAR from start_time) AS year
FROM `bigquery-public-data.austin_bikeshare.bikeshare_trips`
GROUP BY start_station_name, doy, year
ORDER BY year
),
station_averages AS (
SELECT avg(num_rides) AS avg_riders, start_station_name, year
FROM daily_rides
GROUP BY start_station_name, year)
SELECT avg(avg_riders) AS daily_rides_per_station, year
FROM station_averages
GROUP BY year
ORDER BY year
"""
query_job = client.query(query)
# API request - run the query
evaluation_results = query_job.to_dataframe()
evaluation_results
# -
# ## 8) What do your results tell you?
#
# Given the daily average riders per station over the years, does it make sense that the model is failing?
#
# **Answer:** The daily average riders went from around 10 in 2017 to over 16 in 2018. This change in the bikesharing program caused your model to underestimate the number of riders in 2018. Unexpected things can happen when you predict the future in an ever-changing area. Knowledge of a topic can be helpful here, and if you knew enough about the program, you might be able to predict (or at least explain) these types of changes over time.
# +
## Thought question answer here
# -
# ## 9) A Better Scenario
#
# It's disappointing that your model was so inaccurate on 2018 data. Fortunately, this issue of the world changing over time is the exception rather than the rule.
#
# This time, you'll build a model using data only through the end of 2016, and then see how it performs on data from 2017. First, create the model.
# +
# Write your query to create and train the model
query = "____"
# Create the query job
query_job = ____ # Your code goes here
# API request - run the query. Models return an empty table
____ # Your code goes here
# +
## My solution
query = """
CREATE OR REPLACE MODEL `model_dataset.bike_trips_2017`
OPTIONS(model_type='linear_reg',
input_label_cols=['num_rides'],
optimize_strategy='batch_gradient_descent') AS
SELECT COUNT(bikeid) as num_rides,
start_station_name,
TIMESTAMP_TRUNC(start_time, HOUR) as start_hour
FROM `bigquery-public-data.austin_bikeshare.bikeshare_trips`
WHERE start_time < "2017-01-01"
GROUP BY start_station_name, start_hour
"""
query_job = client.query(query)
# API request - run the query. Models return an empty table
query_job.result()
# -
# Now write the query to evaluate your model using data from 2017
# +
# Write your query to evaluate the model
query = "____"
query_job = client.query(query)
# API request - run the query. Models return an empty table
query_job.result()
# +
query = """
SELECT *
FROM
ML.EVALUATE(MODEL `model_dataset.bike_trips_2017`, (
SELECT COUNT(bikeid) as num_rides,
start_station_name,
TIMESTAMP_TRUNC(start_time, HOUR) as start_hour
FROM `bigquery-public-data.austin_bikeshare.bikeshare_trips`
WHERE start_time >= "2017-01-01" AND start_time < "2018-01-01"
GROUP BY start_station_name, start_hour
))
"""
query_job = client.query(query)
# API request - run the query
evaluation_results = query_job.to_dataframe()
evaluation_results
# Source file: notebooks/bqml/raw/ex1.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import os
import numpy as np
import pandas as pd
import cv2
import pycocotools
import torch
import torch.utils.data
from torch import nn
from torch.utils.data import DataLoader, SequentialSampler, RandomSampler
import torchvision.models as models
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor
# helper scripts from the torchvision detection references, copied into ./python
from python.engine import train_one_epoch, evaluate
from python import utils
import python.transforms as T
def parse_one_annot(path):
    # each CSV in `path` holds one data row whose single cell is "x1 y1 x2 y2";
    # the float coordinates are truncated to integers
    data = {}
    for e in os.listdir(path):
        data_bach = pd.read_csv(os.path.join(path, e)).values[0]
        coords = data_bach[0].split(" ")
        x1, y1, x2, y2 = (int(c.split(".")[0]) for c in coords[:4])
        data[e] = [x1, y1, x2, y2]
    return data
# +
#a = parse_one_annot("Label")
#a['0.csv']
# -
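# The label CSVs are assumed to have a header row followed by one data row whose single cell holds "x1 y1 x2 y2" as floats (the format is inferred from the parsing code above). A pandas-free sketch of the same parsing, demonstrated on a synthetic file:

```python
import os
import tempfile

def parse_annot_dir(path):
    # read every CSV in `path`: skip the header row, then split the single
    # "x1 y1 x2 y2" cell and truncate each coordinate to an int
    data = {}
    for name in os.listdir(path):
        with open(os.path.join(path, name)) as f:
            f.readline()  # skip the header row
            x1, y1, x2, y2 = (int(float(v)) for v in f.readline().split())
        data[name] = [x1, y1, x2, y2]
    return data

# demo with a synthetic label file
with tempfile.TemporaryDirectory() as d:
    with open(os.path.join(d, "0.csv"), "w") as f:
        f.write("box\n12.0 34.0 56.0 78.0\n")
    print(parse_annot_dir(d))  # {'0.csv': [12, 34, 56, 78]}
```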
class Dataset(torch.utils.data.Dataset):
    def __init__(self, image, label, transforms=None):
        self.image = image
        self.transforms = transforms
        self.imgs = sorted(os.listdir(image))
        self.label = parse_one_annot(label)
    def __getitem__(self, idx):
        # load the image and its bounding box
        img_path = os.path.join(self.image, self.imgs[idx])
        # imread's second argument is a flag, not a conversion code; convert explicitly
        img = cv2.cvtColor(cv2.imread(img_path), cv2.COLOR_BGR2RGB)
        box_list = self.label[self.imgs[idx].split(".")[0] + ".csv"]
        box_list = np.expand_dims(box_list, axis=0)
        boxes = torch.as_tensor(box_list, dtype=torch.float32)
        num_objs = len(box_list)
        # there is only one class
        labels = torch.ones((num_objs,), dtype=torch.int64)
        image_id = torch.tensor([idx])
        area = (boxes[:, 3] - boxes[:, 1]) * (boxes[:, 2] - boxes[:, 0])
        # suppose all instances are not crowd
        iscrowd = torch.zeros((num_objs,), dtype=torch.int64)
        target = {}
        target["boxes"] = boxes
        target["labels"] = labels
        #target["image_id"] = image_id
        #target["area"] = area
        #target["iscrowd"] = iscrowd
        if self.transforms is not None:
            img, target = self.transforms(img, target)
        return img, target
    def __len__(self):
        return len(self.imgs)
dataset = Dataset("Image","Label")
dataset.__getitem__(0)[0].shape
def get_model(num_classes):
    # load an object detection model pre-trained on COCO
    model = models.detection.fasterrcnn_resnet50_fpn(pretrained=True)
    # get the number of input features for the classifier
    in_features = model.roi_heads.box_predictor.cls_score.in_features
    # replace the pre-trained head with a new one
    model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes)
    return model
def get_transform(train):
    transforms = []
    # converts the image (a numpy array from cv2) into a PyTorch tensor
    transforms.append(T.ToTensor())
    if train:
        # during training, randomly flip the training images
        # and ground-truth for data augmentation
        transforms.append(T.RandomHorizontalFlip(0.5))
    return T.Compose(transforms)
# +
# use our dataset and the defined transformations
dataset = Dataset("Image","Label",get_transform(train=True))
dataset_test = Dataset("Image","Label",get_transform(train=False))
# split the dataset in train and test set
torch.manual_seed(1)
indices = torch.randperm(len(dataset)).tolist()
a = int(len(dataset)*4/10)
print(a)
b =int(len(dataset)/2 - a)
print(b)
dataset = torch.utils.data.Subset(dataset, indices[:a])
dataset_test = torch.utils.data.Subset(dataset_test, indices[-b:])
# define training and validation data loaders
data_loader = torch.utils.data.DataLoader(dataset, batch_size=1, shuffle=False,num_workers=0,collate_fn=utils.collate_fn)
data_loader_test = torch.utils.data.DataLoader(dataset_test, batch_size=1, shuffle=False, num_workers=0,collate_fn=utils.collate_fn)
print("We have: {} examples, {} are training and {} testing".format(len(indices), len(dataset), len(dataset_test)))
# -
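# `utils.collate_fn` comes from the torchvision detection reference scripts (the `python/` folder imported above). If those scripts are unavailable, a minimal equivalent of the reference implementation is sketched below: detection targets differ in size per image, so a batch stays a tuple of per-sample items instead of stacked tensors.

```python
def collate_fn(batch):
    # [(img1, tgt1), (img2, tgt2)] -> ((img1, img2), (tgt1, tgt2))
    return tuple(zip(*batch))

images, targets = collate_fn([("img1", {"boxes": 1}), ("img2", {"boxes": 2})])
print(images)   # ('img1', 'img2')
print(targets)  # ({'boxes': 1}, {'boxes': 2})
```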
dataset.__getitem__(0)[0].shape
torch.cuda.is_available()
# +
device = torch.device('cuda') if torch.cuda.is_available() else torch.device('cpu')
# our dataset has two classes only - background and raccoon
num_classes = 2
# get the model using our helper function
model = get_model(num_classes)
# move model to the right device
model.to(device)
# construct an optimizer
params = [p for p in model.parameters() if p.requires_grad]
optimizer = torch.optim.SGD(params, lr=0.005,momentum=0.9, weight_decay=0.0005)
# and a learning rate scheduler which decreases the learning rate by 10x every 3 epochs
lr_scheduler = torch.optim.lr_scheduler.StepLR(optimizer,step_size=3,gamma=0.1)
# -
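# The scheduler above multiplies the learning rate by gamma once every step_size epochs. A stdlib sketch of the resulting schedule (illustrative only; PyTorch's StepLR does this internally):

```python
def stepped_lr(base_lr, epoch, step_size=3, gamma=0.1):
    # StepLR: lr = base_lr * gamma ** (epoch // step_size)
    return base_lr * gamma ** (epoch // step_size)

print([round(stepped_lr(0.005, e), 6) for e in range(7)])
# [0.005, 0.005, 0.005, 0.0005, 0.0005, 0.0005, 5e-05]
```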
# train for a single epoch here; increase num_epochs for better results
num_epochs = 1
for epoch in range(num_epochs):
    # train for one epoch, printing every 10 iterations
    train_one_epoch(model, optimizer, data_loader, device, epoch, print_freq=10)
    # update the learning rate
    lr_scheduler.step()
    # evaluate on the test dataset
    evaluate(model, data_loader_test, device=device)
torch.save(model.state_dict(), "model")
# +
import numpy as np
import cv2
import matplotlib.pyplot as plt
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor
import torch
import torchvision.models as models
import time
def get_model(num_classes):
    # load an object detection model pre-trained on COCO
    model = models.detection.fasterrcnn_resnet50_fpn(pretrained=True)
    # get the number of input features for the classifier
    in_features = model.roi_heads.box_predictor.cls_score.in_features
    # replace the pre-trained head with a new one
    model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes)
    return model
print(torch.cuda.is_available())
loaded_model = get_model(num_classes = 2)
device = torch.device('cuda') if torch.cuda.is_available() else torch.device('cpu')
loaded_model.to(device)
loaded_model.load_state_dict(torch.load("model", map_location=device))  # map_location allows loading on CPU-only machines
# +
image = cv2.imread("Image/46.jpg")
img = cv2.normalize(image, None, alpha=0, beta=1, norm_type=cv2.NORM_MINMAX, dtype=cv2.CV_32F)
img = torch.tensor(img.transpose(2, 1, 0)).to(device)  # .to(device) also works on CPU-only machines
# put the model in evaluation mode
loaded_model.eval()
with torch.no_grad():
    a = time.time()
    prediction = loaded_model([img])
    b = time.time()
    print(b - a)
for element in range(len(prediction[0]["boxes"])):
    boxes = prediction[0]["boxes"][element].cpu().numpy()
    score = np.round(prediction[0]["scores"][element].cpu().numpy(), decimals=4)
    cv2.rectangle(image, (int(boxes[1]), int(boxes[0])), (int(boxes[3]), int(boxes[2])), (0, 255, 0), 2, cv2.LINE_AA)
    break
plt.figure()
plt.imshow(cv2.cvtColor(image, cv2.COLOR_BGR2RGB))
# -
# Source file: Train/FASTER RCNN - Pytorch.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Manually defined buckets
#
# Skorecard allows you to define buckets manually.
#
# These are usually loaded from a JSON or YAML file.
#
# Start by loading the demo data
# +
from skorecard.datasets import load_uci_credit_card, load_credit_card
X, y = load_uci_credit_card(return_X_y=True)
X.head(4)
# -
# ## Define the buckets
#
# Define the buckets in a python dictionary.
#
# For every feature, the following keys can be specified (some mandatory, some optional):
#
# - `feature_name` (mandatory): must match the column name in the dataframe
# - `type` (mandatory): type of feature (categorical or numerical)
# - `missing_treatment` (optional, defaults to `separate`): defines the missing-value treatment strategy
# - `map` (mandatory): contains the actual mapping for the bins.
# - categorical features: expect a dictionary `{value: bin_index}`
# - numerical features: expect a list of bin boundaries, e.g. `[b1, b2, ...]`
# - `right` (optional, defaults to True): whether the upper bound (True) or the lower bound (False) is included in the bucket. Applicable only to numerical bucketers
# - `specials` (optional, defaults to `{}`): dictionary of special values
#
#
# +
bucket_maps = {
'EDUCATION':{
"feature_name":'EDUCATION',
"type":'categorical',
"missing_treatment":'separate',
"map":{2: 0, 1: 1, 3: 2},
"right":True,
"specials":{} # optional field
},
'LIMIT_BAL':{
"feature_name":'LIMIT_BAL',
"type":'numerical',
"missing_treatment":'separate',
"map":[ 25000., 55000., 105000., 225000., 275000., 325000.],
"right":True,
"specials":{}
},
'BILL_AMT1':{
"feature_name":'BILL_AMT1',
"type":'numerical',
"missing_treatment":'separate',
"map":[ 800. , 12500 , 50000, 77800, 195000. ],
"right":True,
"specials":{}
}
}
# -
# Load the `UserInputBucketer` and pass the dictionary to the object
# +
from skorecard.bucketers import UserInputBucketer
uib = UserInputBucketer(bucket_maps)
# -
# Note that because the bins are already defined, `UserInputBucketer` does not require a fit step.
uib.transform(X).head(4)
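# Under the hood, a numerical `map` is a sorted list of bucket boundaries; with `right=True`, a value equal to a boundary falls into that boundary's bucket. An illustration of the index lookup using the `LIMIT_BAL` boundaries above (a sketch, not skorecard's actual implementation):

```python
import bisect

boundaries = [25000.0, 55000.0, 105000.0, 225000.0, 275000.0, 325000.0]

def bucket(value):
    # right=True: value <= boundary maps into that boundary's bucket
    return bisect.bisect_left(boundaries, value)

print([bucket(v) for v in (10000, 25000, 60000, 400000)])  # [0, 0, 2, 6]
```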
# Source file: docs/tutorials/using_manually_defined_buckets.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + extensions={"jupyter_dashboards": {"activeView": "grid_default", "views": {"grid_default": {"col": null, "height": 2, "hidden": true, "row": null, "width": 2}}}} slideshow={"slide_type": "skip"} tags=[]
# %%writefile caber.py
import os
from shutil import copyfile
import pims
import PIL
import IPython.display as Disp
import cv2
import matplotlib.pyplot as plt
import ipywidgets as widgets
import numpy as np
from skimage import color
from skimage.filters import threshold_mean
import scipy
import pandas as pd
import lmfit
from matplotlib import animation, rc
import matplotlib
import IPython
def set_experiment_folder(exp_folder, video_file_name, video_path='./'):
    '''
    video_file_name : name of the video file to be analyzed
    exp_folder : folder to be created in the current directory; the original video file is copied into it
    '''
    try:
        os.makedirs(exp_folder)
        copyfile(f'{video_path}/{video_file_name}', f'{exp_folder}/{video_file_name}')
    except FileExistsError:
        print('Folder already exists')
def rotate_kronos_video(video_path, rotated_name_suffix='_rotated'):
    os.system(''.join(['ffmpeg -i "',
                       video_path,
                       '" -metadata:s:v rotate="270" -codec copy "',
                       video_path.split('.')[0],
                       rotated_name_suffix,
                       '.',
                       video_path.split('.')[-1],
                       '"']))
    return video_path.split('.')[0] + rotated_name_suffix + '.' + video_path.split('.')[-1]
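# Note the split('.') path handling above breaks if any directory name contains a dot. A pathlib sketch of the same output-name derivation (same "_rotated" suffix convention assumed):

```python
from pathlib import Path

def rotated_path(video_path, suffix="_rotated"):
    # derive "<stem>_rotated<ext>" next to the original file
    p = Path(video_path)
    return str(p.with_name(p.stem + suffix + p.suffix))

print(rotated_path("exp/run.1/video.mp4"))  # exp/run.1/video_rotated.mp4 on POSIX
```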
def check_framerate(video):
    display(PIL.Image.fromarray(video[0][:, :40]).rotate(90, expand=True))
class bbox_select():
    def __init__(self, im):
        self.im = im
        self.selected_points = []
        self.fig, ax = plt.subplots()
        self.img = ax.imshow(self.im.copy())
        self.ka = self.fig.canvas.mpl_connect('button_press_event', self.onclick)
        disconnect_button = widgets.Button(description="Disconnect mpl")
        Disp.display(disconnect_button)
        disconnect_button.on_click(self.disconnect_mpl)
    def poly_img(self, img, pts):
        pts = np.array(pts, np.int32)
        pts = pts.reshape((-1, 1, 2))
        cv2.polylines(img, [pts], True,
                      (np.random.randint(0, 255), np.random.randint(0, 255), np.random.randint(0, 255)), 7)
        return img
    def onclick(self, event):
        #display(str(event))
        self.selected_points.append([event.xdata, event.ydata])
        if len(self.selected_points) > 1:
            self.img.set_data(self.poly_img(self.im.copy(), self.selected_points))
    def disconnect_mpl(self, _):
        self.fig.canvas.mpl_disconnect(self.ka)
def get_mask_from_poly(bs):
    arr = np.array([bs.selected_points], 'int')
    minx = min([item[0] for item in arr[0]])
    miny = min([item[1] for item in arr[0]])
    maxx = max([item[0] for item in arr[0]])
    maxy = max([item[1] for item in arr[0]])
    mask = (slice(miny, maxy), slice(minx, maxx))
    return mask
def find_thresh(video, frame, mask):
    # use the requested frame (the original hard-coded video[-3], ignoring `frame`)
    ref_im = color.rgb2gray(video[frame][mask])
    thresh = threshold_mean(ref_im)
    return thresh
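# skimage's threshold_mean reduces to the mean intensity of the reference image. A stdlib sketch of the same idea on a flat list of grayscale values:

```python
def mean_threshold(pixels):
    # skimage.filters.threshold_mean returns the mean of all intensities;
    # pixels below it are treated as the (dark) fluid neck
    return sum(pixels) / len(pixels)

pixels = [10, 20, 80, 90]
t = mean_threshold(pixels)
print(t)                        # 50.0
print([p < t for p in pixels])  # [True, True, False, False]
```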
def measure_neck(video, thresh, mask, pb=None, strike_time=0.2, mmperpix=1, frame_rate=1, save_excel=False):
    pb = widgets.IntProgress(description="", min=0, max=3000, value=0, layout=widgets.Layout(width='50%'))
    pb.max = len(video) - 3
    pb.description = video.filename
    display(pb)
    frame_list = []
    neck_profile_list = []
    binary_list = []
    neck_radius_list = []
    min_neck_y_pos_list = []
    mid_neck_radius_list = []
    neck_radius_previous = 400
    for i in range(1, len(video) - 4):
        pb.value = i
        # analyze a single frame for the neck radius; frame-to-frame variation determines the strike time
        im = color.rgb2gray(video[i][mask])
        binary = scipy.ndimage.morphology.binary_fill_holes(im < thresh)
        neck_profile = binary.sum(1)
        neck_radius = min(neck_profile) / 2
        min_neck_y_pos = np.argmin(neck_profile)
        mid_neck_radius = neck_profile[int(len(neck_profile) / 2)] / 2
        if neck_radius_previous + 50 < neck_radius:
            print(f'neck radius jumped: {neck_radius_previous} + 50 < {neck_radius}')
            break
        neck_radius_previous = neck_radius
        binary_list.append(binary)
        neck_profile_list.append(neck_profile)
        neck_radius_list.append(neck_radius)
        frame_list.append(i)
        min_neck_y_pos_list.append(min_neck_y_pos)
        mid_neck_radius_list.append(mid_neck_radius)
    result = pd.DataFrame.from_dict({'frame': frame_list,
                                     'binary': binary_list,
                                     'neck_profile': neck_profile_list,
                                     'neck_radius': neck_radius_list,
                                     'min_neck_y_pos': min_neck_y_pos_list,
                                     'mid_neck_radius': mid_neck_radius_list})
    frame_strike_start = sum(map(lambda x: x > 150, neck_radius_list))
    frame_strike_end = frame_strike_start + strike_time * frame_rate
    try:
        frame_breakup = min(result['neck_radius'][result['neck_radius'] == 0].index.tolist())
        print(frame_breakup)
    except ValueError:
        print('No breakup detected')
        frame_breakup = None
    t_strike_start = frame_strike_start / frame_rate
    t_strike_end = frame_strike_end / frame_rate
    if frame_breakup is not None:
        t_breakup = frame_breakup / frame_rate
    else:
        t_breakup = None
    result['time'] = result['frame'] / frame_rate
    result['neck_radius_mm'] = result['neck_radius'] * mmperpix
    result['time_exp'] = (result['frame'] - frame_strike_start) / frame_rate
    result['time_after_strike'] = (result['frame'] - frame_strike_end) / frame_rate
    result['strike_len_s'] = t_strike_end - t_strike_start
    return result
def make_plot(result, fit_relax=False, min_radius=0.1, ax=None, model=None):
    if ax is None:
        fig, ax = plt.subplots()
    ax.plot(result['time_exp'], result['neck_radius_mm'])
    #ax.set_yscale('log')
    ax.set_xlabel('Time from strike start [s]', fontsize=15)
    ax.set_ylabel('Neck radius [mm]', fontsize=15)
    ax.axvline(0, color='blue', linestyle='--')
    ax.axvline(result['strike_len_s'].iloc[0], color='blue', linestyle='--')
    ax.set_ylim(0.01)
    ax.set_xlim(-0.1)
    def newtonian_rt(x, R0=3, sigma_over_eta=1):
        return 0.0709 * sigma_over_eta * (14.1 * R0 / sigma_over_eta - x)
    def weakly_elastic_rt(x, R0=3, sigma_over_eta=1):
        # placeholder: currently identical to the Newtonian model
        return 0.0709 * sigma_over_eta * (14.1 * R0 / sigma_over_eta - x)
    newtonian_rt_model = lmfit.Model(newtonian_rt)
    exp_decay1 = lmfit.models.ExponentialModel()
    model_dict = {
        'newtonian': newtonian_rt_model,
        'single_exp': exp_decay1,
    }
    if model is None:
        model = newtonian_rt_model
    else:
        model = model_dict[model]
    fit_res = None
    if fit_relax:
        mask_t = (result['time_exp'] > result['strike_len_s'].iloc[0]) & (result['neck_radius_mm'] > min_radius)
        total_time = max(result['time_exp'])
        fit_res = model.fit(result['neck_radius_mm'][mask_t], x=result['time_exp'][mask_t])
        ax.plot(np.linspace(0, total_time),
                fit_res.eval(x=np.linspace(0, total_time)), linestyle='--')
    try:
        t_breakup = min(result['time_exp'][result['neck_radius'] == 0])
        ax.axvline(t_breakup, color='red', linestyle='--')
    except ValueError:
        pass
    return ax, fit_res
def make_animation(result, frame_rate):
    pb = widgets.IntProgress(description="", min=0, max=3000, value=0, layout=widgets.Layout(width='50%'))
    pb.max = len(result) - 5
    display(pb)
    frame_strike_start = sum(map(lambda x: x > 150, result['neck_radius']))
    anim_frames = range(frame_strike_start, len(result) - 5)
    fig = plt.figure(figsize=(10, 10))
    im = plt.imshow(result['binary'][anim_frames[0]], cmap='gist_gray_r')
    def init():
        im.set_data(result['binary'][anim_frames[0]])
        fig.suptitle('Time:' + str(0))
    def updatefig(i):
        im.set_data(result['binary'][anim_frames[i]])
        fig.suptitle('Time:' + str((i - 1) / frame_rate)[:5] + ' s, ')
        pb.value = i
        return im,
    anim = animation.FuncAnimation(fig, updatefig, init_func=init, frames=len(anim_frames),
                                   interval=50)
    return anim
def make_animation_withplot(result, frame_rate, min_radius=0.2):
    pb = widgets.IntProgress(description="", min=0, max=3000, value=0, layout=widgets.Layout(width='50%'))
    pb.max = len(result) - 5
    display(pb)
    frame_strike_start = sum(map(lambda x: x > 150, result['neck_radius']))
    anim_frames = range(frame_strike_start, len(result) - 5)
    fig, ax = plt.subplots(2, 1, figsize=(10, 15))
    im = ax[1].imshow(result['binary'][anim_frames[0]], cmap='gist_gray_r')
    def init():
        im.set_data(result['binary'][anim_frames[0]])
        make_plot(result, fit_relax=False, min_radius=min_radius, ax=ax[0])
        fig.suptitle('Time:' + str(0))
    def updatefig(i):
        im.set_data(result['binary'][anim_frames[i]])
        ax[0].plot(result['time'][i], result['neck_radius_mm'][i + frame_strike_start], 'o', color='blue')
        fig.suptitle('Time:' + str((i - 1) / frame_rate)[:5] + ' s, ')
        pb.value = i
        return im,
    anim = animation.FuncAnimation(fig, updatefig, init_func=init, frames=len(anim_frames),
                                   interval=50)
    return anim
def print_example_script():
    print(
        '''# This script assumes the file video_file_name is in the same folder as the notebook or script file
# a new folder with name exp_folder will be generated if it does not exist
# The video file will be copied into the exp_folder
# Assuming the video is from the new caber device with the kronos camera
# The script analyzes the video and stores the results in the result dataframe variable
# ...
import caber
exp_folder = 'ascorbic_acid_100fps'
video_file_name = 'ascorbic_acid_100fps.mp4'
# finished with typed inputs
caber.set_experiment_folder(exp_folder,video_file_name)
rotated_video_path=caber.rotate_kronos_video(f'{exp_folder}/{video_file_name}')
video=caber.pims.Video(rotated_video_path)
caber.check_framerate(video)
# just checking the framerate reading the stamps on the image
%matplotlib ipympl
bs = caber.bbox_select(video[1])
# a plot with frame appears to select a region of interest
# finished with inputs
mask=caber.get_mask_from_poly(bs)
video[-1][mask]
caber.find_thresh(video,-1,mask)
result=caber.measure_neck(video,
                          caber.find_thresh(video,-3,mask),
                          mask,
                          strike_time=0.2,
                          mmperpix=6/(mask[1].stop-mask[1].start),
                          frame_rate=100)
%matplotlib inline
caber.make_plot(result, fit_relax=True, min_radius=0.2)
anim=caber.make_animation(result,100)
anim.save(exp_folder + '/BW_' + video_file_name)
anim=caber.make_animation_withplot(result,100)
anim.save(exp_folder + '/BWP_' + video_file_name)
import os
os.system('ffmpeg -i ' + exp_folder + '/BWP_' + video_file_name + ' -f gif ' + exp_folder + '/BWP_' + video_file_name.split('.')[0] + '.gif')''')
def rel_time(result, show_plot=True, ax=None, strike_time=0.2, min_radius=0.01, eta_0=6, surface_tension=30E-3):
    mask_t = (result['time_exp'] > strike_time) & (result['neck_radius_mm'] > min_radius)
    total_time = max(result['time_exp'])
    fig, ax = plt.subplots(3, 1, sharex=True, figsize=(5, 10))
    ax[0], res_fit = make_plot(result, fit_relax=True, min_radius=min_radius, ax=ax[0])
    ax[0].set_xlabel('')
    Rdot = -res_fit.params['sigma_over_eta'].value / res_fit.params['R0'].value * 1E-3 * 0.0709
    eta_ext = -surface_tension / 2 / Rdot
    ax[1].plot(result['time_exp'][mask_t], eta_ext / (eta_0 * result['time_exp'][mask_t]**0))
    ax[1].axhline(3, linestyle='--', color='red')
    ax[1].set_ylabel('Trouton ratio', fontsize=15)
    ax[1].set_ylim(0)
    ax[2].plot(result['time_exp'][mask_t], eta_ext * result['time_exp'][mask_t]**0)
    ax[2].set_ylabel('Extensional viscosity', fontsize=15)
    ax[2].set_xlabel('Time [s]', fontsize=15)
    ax[2].set_yscale('log')
    fig.tight_layout()
# + extensions={"jupyter_dashboards": {"activeView": "grid_default", "views": {"grid_default": {"col": null, "height": 2, "hidden": true, "row": null, "width": 2}}}} slideshow={"slide_type": "skip"}
import importlib
import caber #import the module here, so that it can be reloaded.
importlib.reload(caber)
# + extensions={"jupyter_dashboards": {"activeView": "grid_default", "views": {"grid_default": {"col": null, "height": 2, "hidden": true, "row": null, "width": 2}}}} slideshow={"slide_type": "slide"}
exp_folder = '6000cp_viscosity_standard_100fps'
video_file_name = '6000cp_viscosity_standard_100fps.mp4'
caber.set_experiment_folder(exp_folder,video_file_name)
rotated_video_path=caber.rotate_kronos_video(f'{exp_folder}/{video_file_name}')
video=caber.pims.Video(rotated_video_path)
caber.check_framerate(video)
# + extensions={"jupyter_dashboards": {"activeView": "grid_default", "views": {"grid_default": {"col": null, "height": 2, "hidden": true, "row": null, "width": 2}}}} slideshow={"slide_type": "slide"}
# %matplotlib ipympl
bs = caber.bbox_select(video[1])
# + extensions={"jupyter_dashboards": {"activeView": "grid_default", "views": {"grid_default": {"col": null, "height": 2, "hidden": true, "row": null, "width": 2}}}} slideshow={"slide_type": "slide"}
# %matplotlib inline
mask=caber.get_mask_from_poly(bs)
video[-1][mask]
# + extensions={"jupyter_dashboards": {"activeView": "grid_default", "views": {"grid_default": {"col": null, "height": 2, "hidden": true, "row": null, "width": 2}}}} slideshow={"slide_type": "skip"}
caber.find_thresh(video,-1,mask)
# + extensions={"jupyter_dashboards": {"activeView": "grid_default", "views": {"grid_default": {"col": null, "height": 2, "hidden": true, "row": null, "width": 2}}}} slideshow={"slide_type": "slide"}
result=caber.measure_neck(video,
caber.find_thresh(video,-3, mask),
mask,
strike_time=0.2,
mmperpix=6/(mask[1].stop-mask[1].start),
frame_rate=100)
# + extensions={"jupyter_dashboards": {"activeView": "grid_default", "views": {"grid_default": {"col": null, "height": 2, "hidden": true, "row": null, "width": 2}}}} slideshow={"slide_type": "slide"}
ax=caber.make_plot(result, fit_relax=True, min_radius=0.4)
# + extensions={"jupyter_dashboards": {"activeView": "grid_default", "views": {"grid_default": {"col": null, "height": 2, "hidden": true, "row": null, "width": 2}}}} slideshow={"slide_type": "slide"}
anim=caber.make_animation(result,100)
# + extensions={"jupyter_dashboards": {"activeView": "grid_default", "views": {"grid_default": {"col": null, "height": 2, "hidden": true, "row": null, "width": 2}}}} slideshow={"slide_type": "skip"}
anim.save(exp_folder + '/BW_' + video_file_name)
# + extensions={"jupyter_dashboards": {"activeView": "grid_default", "views": {"grid_default": {"col": null, "height": 2, "hidden": true, "row": null, "width": 2}}}} slideshow={"slide_type": "slide"}
anim=caber.make_animation_withplot(result,100,min_radius=0.4)
# + extensions={"jupyter_dashboards": {"activeView": "grid_default", "views": {"grid_default": {"col": null, "height": 2, "hidden": true, "row": null, "width": 2}}}} slideshow={"slide_type": "skip"}
anim.save(exp_folder + '/BWP_' + video_file_name)
# + extensions={"jupyter_dashboards": {"activeView": "grid_default", "views": {"grid_default": {"col": null, "height": 2, "hidden": true, "row": null, "width": 2}}}} slideshow={"slide_type": "skip"}
import os
os.system('ffmpeg -i ' + exp_folder + '/BWP_' + video_file_name + ' -f gif ' + exp_folder + '/BWP_' + video_file_name.split('.')[0] + '.gif')
# + slideshow={"slide_type": "slide"}
ax,fit_res=caber.make_plot(result, fit_relax=True, min_radius=0.01, model='newtonian')
# + slideshow={"slide_type": "slide"}
import matplotlib.pyplot as plt
import lmfit
import numpy as np
caber.rel_time(result, show_plot=True, ax=None, strike_time=0.2, min_radius=0.4, eta_0=6, surface_tension=30E-3)
# -
| script-6000cp_viscosity_standard.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Orthogonal Projections
#
# > In this post, we will write functions that implement orthogonal projections. This post is a summary of a homework assignment from "Mathematics for Machine Learning - PCA", offered by Imperial College London.
#
# - toc: true
# - badges: true
# - comments: true
# - author: <NAME>
# - categories: [Python, Mathematics, ICL]
# - image: images/op.png
import numpy as np
import matplotlib
import matplotlib.pyplot as plt
plt.style.use('fivethirtyeight')
# ## Orthogonal Projections
from sklearn.datasets import fetch_olivetti_faces
image_shape = (64, 64)
# Load faces data
dataset = fetch_olivetti_faces(data_home='./dataset')
faces = dataset.data
# ### Advice for testing numerical algorithms
# Before we begin this week's assignment, here is some advice for writing functions that work with numerical data. It is useful for finding bugs in your implementation.
#
# Testing machine learning algorithms (or numerical algorithms in general)
# is sometimes really hard as it depends on the dataset
# to produce an answer, and you will never be able to test your algorithm on all the datasets
# we have in the world. Nevertheless, we have some tips for you to help you identify bugs in
# your implementations.
#
# #### 1. Test on small datasets
# Test your algorithms on small datasets: datasets of size 1 or 2 will sometimes suffice. This
# is useful because you can (if necessary) compute the answers by hand and compare them with
# the answers produced by the computer program you wrote. In fact, these small datasets can even have special numbers,
# which will allow you to compute the answers by hand easily.
#
# #### 2. Find invariants
# Invariants refer to properties of your algorithm and functions that are maintained regardless
# of the input. We will highlight this point later in this notebook where you will see functions,
# which will check invariants for some of the answers you produce.
#
# Invariants you may want to look for:
# 1. Does your algorithm always produce a positive/negative answer, or a positive definite matrix?
# 2. If the algorithm is iterative, do the intermediate results increase/decrease monotonically?
# 3. Does your solution relate with your input in some interesting way, e.g. orthogonality?
#
# Finding invariants is hard, and sometimes there simply isn't any invariant. However, DO take advantage of them if you can find them. They are the most powerful checks when you have them.
# We can find some invariants for projections. In the cell below, we have written two functions which check for invariants of projections. See the docstrings which explain what each of them does. You should use these functions to test your code.
# +
import numpy.testing as np_test
def test_property_projection_matrix(P):
"""Test if the projection matrix satisfies certain properties.
In particular, we should have P @ P = P, and P = P^T
"""
np_test.assert_almost_equal(P, P @ P)
np_test.assert_almost_equal(P, P.T)
def test_property_projection(x, p):
"""Test orthogonality of x and its projection p."""
np_test.assert_almost_equal(p.T @ (p-x), 0)
# -
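# As a concrete (informal) example of these invariants, the rank-one projection onto $b = (1,2,2)^T$ is both idempotent and symmetric:

```python
import numpy as np

b = np.array([1.0, 2.0, 2.0])
P = np.outer(b, b) / (b @ b)   # projection onto span{b}

print(np.allclose(P @ P, P))   # idempotence: P P = P
print(np.allclose(P, P.T))     # symmetry: P = P^T
```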
# ### Orthogonal Projections
#
# Recall that for projection of a vector $\boldsymbol x$ onto a 1-dimensional subspace $U$ with basis vector $\boldsymbol b$ we have
#
# $${\pi_U}(\boldsymbol x) = \frac{\boldsymbol b\boldsymbol b^T}{{\lVert\boldsymbol b \rVert}^2}\boldsymbol x $$
#
# And for the general projection onto an M-dimensional subspace $U$ with basis vectors $\boldsymbol b_1,\dotsc, \boldsymbol b_M$ we have
#
# $${\pi_U}(\boldsymbol x) = \boldsymbol B(\boldsymbol B^T\boldsymbol B)^{-1}\boldsymbol B^T\boldsymbol x $$
#
# where
#
# $$\boldsymbol B = [\boldsymbol b_1,...,\boldsymbol b_M]$$
#
#
# Your task is to implement orthogonal projections. We can split this into two steps
# 1. Find the projection matrix $\boldsymbol P$ that projects any $\boldsymbol x$ onto $U$.
# 2. The projected vector $\pi_U(\boldsymbol x)$ of $\boldsymbol x$ can then be written as $\pi_U(\boldsymbol x) = \boldsymbol P\boldsymbol x$.
def projection_matrix_1d(b):
"""Compute the projection matrix onto the space spanned by `b`
Args:
b: ndarray of dimension (D,), the basis for the subspace
Returns:
P: the projection matrix
"""
D, = b.shape
    # Notice that this b is a 1D ndarray, so b.T is a no-op. Use np.outer instead
# to implement the outer product.
numerator = np.outer(b, b.T)
denominator = np.linalg.norm(b) ** 2
P = numerator / denominator
return P
def project_1d(x, b):
"""Compute the projection matrix onto the space spanned by `b`
Args:
x: the vector to be projected
b: ndarray of dimension (D,), the basis for the subspace
Returns:
y: ndarray of shape (D,) projection of x in space spanned by b
"""
p = projection_matrix_1d(b) @ x
return p
# +
# Test 1D
# Test that we computed the correct projection matrix
from numpy.testing import assert_allclose
assert_allclose(
projection_matrix_1d(np.array([1, 2, 2])),
np.array([[1, 2, 2],
[2, 4, 4],
[2, 4, 4]]) / 9
)
# -
# Test that we project x on to the 1d subspace correctly
assert_allclose(
project_1d(np.ones(3), np.array([1, 2, 2])),
np.array([5, 10, 10]) / 9
)
def projection_matrix_general(B):
"""Compute the projection matrix onto the space spanned by the columns of `B`
Args:
B: ndarray of dimension (D, M), the basis for the subspace
Returns:
P: the projection matrix
"""
P = B.dot(np.linalg.inv(B.T.dot(B))).dot(B.T)
return P
def project_general(x, B):
"""Compute the projection matrix onto the space spanned by the columns of `B`
Args:
x: ndarray of dimension (D, 1), the vector to be projected
B: ndarray of dimension (D, M), the basis for the subspace
Returns:
        p: projection of x onto the subspace spanned by the columns of B; size (D, 1)
"""
p = projection_matrix_general(B) @ x
return p
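# As an aside (not required for the exercise), forming $(\boldsymbol B^T\boldsymbol B)^{-1}$ explicitly can be numerically fragile when $\boldsymbol B$ is ill-conditioned. A more stable sketch uses `np.linalg.lstsq`; `project_general_lstsq` is a helper name chosen here:

```python
import numpy as np

def project_general_lstsq(x, B):
    """Project x onto span(B) without forming (B^T B)^{-1} explicitly."""
    coeffs, *_ = np.linalg.lstsq(B, x, rcond=None)   # least-squares solve of B @ coeffs ~ x
    return B @ coeffs

B_demo = np.array([[1.0, 0.0], [1.0, 1.0], [1.0, 2.0]])
x_demo = np.array([6.0, 0.0, 0.0])
print(project_general_lstsq(x_demo, B_demo))  # -> [ 5.  2. -1.]
```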
# +
from numpy.testing import assert_allclose
B = np.array([[1, 0],
[1, 1],
[1, 2]])
assert_allclose(
projection_matrix_general(B),
np.array([[5, 2, -1],
[2, 2, 2],
[-1, 2, 5]]) / 6
)
# +
# Test 2D
# Test that we computed the correct projection matrix
# Test that we project x on to the 2d subspace correctly
assert_allclose(
project_general(np.array([6, 0, 0]).reshape(-1,1), B),
np.array([5, 2, -1]).reshape(-1,1)
)
# -
# ### Eigenfaces
#
# Next, we will take a look at what happens if we project some dataset consisting of human faces onto some basis we call
# the "eigenfaces". You do not need to know what `eigenfaces` are for now but you will know what they are towards the end of the course!
# Let's visualize some faces in the dataset.
plt.figure(figsize=(10,10))
plt.imshow(np.hstack(faces[:5].reshape(5,64,64)), cmap='gray')
plt.show()
mean = faces.mean(axis=0)
std = faces.std(axis=0)
faces_normalized = (faces - mean) / std
# The data for the basis has been saved in a file named `eigenfaces.npy`, first we load it into the variable B.
B = np.load('./dataset/eigenfaces.npy')[:50] # we use the first 50 basis vectors --- you should play around with this.
print("the eigenfaces have shape {}".format(B.shape))
# Each instance in $\boldsymbol B$ is a `64x64` image, an "eigenface", which we determined using an algorithm called Principal Component Analysis. Let's visualize
# a few of those "eigenfaces".
plt.figure(figsize=(10,10))
plt.imshow(np.hstack(B[:5].reshape(-1, 64, 64)), cmap='gray')
plt.show()
# Take a look at what happens if we project our faces onto the basis $\boldsymbol B$ spanned by these 50 "eigenfaces". In order to do this, we need to reshape $\boldsymbol B$ from above, which is of size (50, 64, 64), into the same shape as the matrix representing the basis as we have done earlier, which is of size (4096, 50). Here 4096 is the dimensionality of the data and 50 is the number of basis vectors.
#
# Then we can reuse the functions we implemented earlier to compute the projection matrix and the projection. Complete the code below to visualize the reconstructed faces that lie on the subspace spanned by the "eigenfaces".
def show_face_face_reconstruction(i):
original_face = faces_normalized[i].reshape(64, 64)
# reshape the data we loaded in variable `B`
B_basis = B.reshape(B.shape[0], -1).T
face_reconstruction = project_general(faces_normalized[i], B_basis).reshape(64, 64)
plt.figure()
plt.imshow(np.hstack([original_face, face_reconstruction]), cmap='gray')
plt.show()
show_face_face_reconstruction(0)
show_face_face_reconstruction(3)
show_face_face_reconstruction(6)
show_face_face_reconstruction(9)
# ### Least squares regression
#
# Consider the case where we have a linear model for predicting housing prices. We are predicting the housing prices based on features in the
# housing dataset. We denote the features as $\boldsymbol x_0, \dotsc, \boldsymbol x_n$ and collect them into a vector $\boldsymbol {x}$, and the price of the houses as $y$. Assume we have
# a prediction model of the form $\hat{y}_i = f(\boldsymbol {x}_i) = \boldsymbol \theta^T\boldsymbol {x}_i$.
#
#
# If we collect the dataset into a $(N,D)$ data matrix $\boldsymbol X$, we can write down our model like this:
#
# $$
# \begin{bmatrix}
# \boldsymbol{x}_1^T \\
# \vdots \\
# \boldsymbol{x}_N^T
# \end{bmatrix} \boldsymbol{\theta} = \begin{bmatrix}
# y_1 \\
# \vdots \\
# y_N
# \end{bmatrix},
# $$
#
# i.e.,
#
# $$
# \boldsymbol X\boldsymbol{\theta} = \boldsymbol{y}.
# $$
#
# Note that the data points are the *rows* of the data matrix, i.e., every column is a dimension of the data.
#
# Our goal is to find the best $\boldsymbol\theta$ such that we minimize the following objective (least square).
#
# $$
# \begin{eqnarray}
# & \sum^n_{i=1}{\lVert \hat{y}_i - y_i \rVert^2} \\
# &= \sum^n_{i=1}{\lVert \boldsymbol \theta^T\boldsymbol{x}_i - y_i \rVert^2} \\
# &= (\boldsymbol X\boldsymbol {\theta} - \boldsymbol y)^T(\boldsymbol X\boldsymbol {\theta} - \boldsymbol y).
# \end{eqnarray}
# $$
#
# If we set the gradient of the above objective to $\boldsymbol 0$, we have
# $$
# \begin{eqnarray}
# \nabla_\theta(\boldsymbol X\boldsymbol {\theta} - \boldsymbol y)^T(\boldsymbol X\boldsymbol {\theta} - \boldsymbol y) &=& \boldsymbol 0 \\
# \nabla_\theta(\boldsymbol {\theta}^T\boldsymbol X^T - \boldsymbol y^T)(\boldsymbol X\boldsymbol {\theta} - \boldsymbol y) &=& \boldsymbol 0 \\
# \nabla_\theta(\boldsymbol {\theta}^T\boldsymbol X^T\boldsymbol X\boldsymbol {\theta} - \boldsymbol y^T\boldsymbol X\boldsymbol \theta - \boldsymbol \theta^T\boldsymbol X^T\boldsymbol y + \boldsymbol y^T\boldsymbol y ) &=& \boldsymbol 0 \\
# 2\boldsymbol X^T\boldsymbol X\theta - 2\boldsymbol X^T\boldsymbol y &=& \boldsymbol 0 \\
# \boldsymbol X^T\boldsymbol X\boldsymbol \theta &=& \boldsymbol X^T\boldsymbol y.
# \end{eqnarray}
# $$
#
# The solution with zero gradient (which we call the maximum likelihood estimator) solves the following equation:
#
# $$\boldsymbol X^T\boldsymbol X\boldsymbol \theta = \boldsymbol X^T\boldsymbol y.$$
#
# _This is exactly the same as the normal equation we have for projections_.
#
# This means that if we solve for $\boldsymbol X^T\boldsymbol X\boldsymbol \theta = \boldsymbol X^T\boldsymbol y.$ we would find the best $\boldsymbol \theta = (\boldsymbol X^T\boldsymbol X)^{-1}\boldsymbol X^T\boldsymbol y$, i.e. the $\boldsymbol \theta$ which minimizes our objective.
#
# Let's put things into perspective. Consider that we want to predict the true coefficient $\boldsymbol \theta$
# of the line $\boldsymbol y = \boldsymbol \theta^T \boldsymbol x$ given only $\boldsymbol X$ and $\boldsymbol y$. We do not know the true value of $\boldsymbol \theta$.
#
# Note: In this particular example, $\boldsymbol \theta$ is a scalar. Still, we can represent it as an $\mathbb{R}^1$ vector.
# +
x = np.linspace(0, 10, num=50)
theta = 2
def f(x):
random = np.random.RandomState(42) # we use the same random seed so we get deterministic output
return theta * x + random.normal(scale=1.0, size=len(x)) # our observations are corrupted by some noise, so that we do not get (x,y) on a line
y = f(x)
plt.scatter(x, y);
plt.xlabel('x');
plt.ylabel('y');
# +
X = x.reshape(-1,1) # size N x 1
Y = y.reshape(-1,1) # size N x 1
# maximum likelihood estimator
theta_hat = np.linalg.solve(X.T @ X, X.T @ Y)
# -
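# A side note (same data-generation setup as above, with the seed fixed here for reproducibility): solving the normal equations forms $\boldsymbol X^T\boldsymbol X$, which squares the condition number of $\boldsymbol X$. `np.linalg.lstsq` gives the same estimate more robustly:

```python
import numpy as np

x = np.linspace(0, 10, num=50)
rng = np.random.RandomState(42)
y = 2 * x + rng.normal(scale=1.0, size=len(x))   # true theta = 2, plus noise

X = x.reshape(-1, 1)
theta_lstsq, *_ = np.linalg.lstsq(X, y, rcond=None)  # avoids forming X^T X explicitly
print(theta_lstsq)  # close to the true slope of 2
```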
# We can show how our $\hat{\boldsymbol \theta}$ fits the line.
fig, ax = plt.subplots()
ax.scatter(x, y);
xx = [0, 10]
yy = [0, 10 * theta_hat[0,0]]
ax.plot(xx, yy, 'red', alpha=.5);
ax.set(xlabel='x', ylabel='y');
print("theta = %f" % theta)
print("theta_hat = %f" % theta_hat)
# What would happen to $\lVert \hat{\boldsymbol \theta} - \boldsymbol \theta \rVert$ if we increase the number of datapoints?
# +
N = np.arange(2, 10000, step=10)
theta_error = np.zeros(N.shape)
for i, n in enumerate(N):
x = np.linspace(0, 10, num=n)
y = f(x)
X = x.reshape(-1, 1)
Y = y.reshape(-1, 1)
theta_hat = np.linalg.solve(X.T @ X, X.T @ Y)
    theta_error[i] = theta - theta_hat  # true theta is 2
plt.plot(theta_error)
plt.hlines(y=0, xmin=0, xmax=len(N), linestyles='dashed')
plt.xlabel("dataset size")
plt.ylabel("parameter error");
| _notebooks/2021-03-14-Orthogonal-Projection.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Quantum States and Gates as Vectors and Matrices
# Manipulating matrices is the heart of how we analyze quantum programs. In this section we'll look at some of the most common tools that can be used for this.
# Z basis
# $$
# |0\rangle = \begin{pmatrix} 1 \\ 0 \end{pmatrix} \, \, \, \, |1\rangle =\begin{pmatrix} 0 \\ 1 \end{pmatrix}.
# $$
#
#
# X basis
# $$
# |\pm\rangle = \frac{|0\rangle \pm|1\rangle}{\sqrt{2}}=\frac{1}{\sqrt{2}}\begin{pmatrix} 1 \\ \pm1 \end{pmatrix}.
# $$
#
#
# Y basis
#
# $$
# |\circlearrowright\rangle = \frac{ | 0 \rangle + i | 1 \rangle}{\sqrt{2}} = \frac{1}{\sqrt{2}}\begin{pmatrix} 1 \\ i \end{pmatrix}, ~~~~ |\circlearrowleft\rangle = \frac{ | 0 \rangle -i | 1 \rangle}{\sqrt{2}} = \frac{1}{\sqrt{2}}\begin{pmatrix} 1 \\ -i \end{pmatrix}.
# $$
#
# 
#
#
# $$
# |\psi\rangle = \cos{\frac{\theta}{2}}|0\rangle + e^{i\phi}\sin{\frac{\theta}{2}}|1\rangle
# $$
#
#
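# A quick NumPy sanity check (not part of the original text): the Bloch-sphere parameterisation above always yields a normalised state. `bloch_state` is a helper name chosen here:

```python
import numpy as np

def bloch_state(theta, phi):
    """Build |psi> = cos(theta/2)|0> + e^{i phi} sin(theta/2)|1> as a length-2 vector."""
    return np.array([np.cos(theta / 2),
                     np.exp(1j * phi) * np.sin(theta / 2)])

# Any (theta, phi) pair gives a unit-norm single-qubit state.
psi = bloch_state(np.pi / 3, np.pi / 4)
print(np.linalg.norm(psi))  # -> 1.0
```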
# ## Pauli Matrices
# $$
# X= \begin{pmatrix} 0&1 \\\\ 1&0 \end{pmatrix} \hspace{2cm}
# Y= \begin{pmatrix} 0&-i \\\\ i&0 \end{pmatrix}\hspace{2cm}
# Z= \begin{pmatrix} 1&0 \\\\ 0&-1 \end{pmatrix}
# $$
#
# ## Unitary and Hermitian matrices
# Unitary matrices. All gates in quantum computing, with the exception of measurement, can be represented by unitary matrices.
#
#
# $$
# U U^\dagger = U^\dagger U = 1.
# $$
#
#
# Hermitian matrices. Quantum measurements are represented by them
#
# $$
# H = H^\dagger.
# $$
#
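# Both properties are easy to verify numerically. For example, the Pauli $Y$ matrix is simultaneously unitary and Hermitian:

```python
import numpy as np

Y = np.array([[0, -1j], [1j, 0]])

print(np.allclose(Y @ Y.conj().T, np.eye(2)))  # unitary: Y Y-dagger = identity
print(np.allclose(Y, Y.conj().T))              # Hermitian: Y = Y-dagger
```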
# ## Matrices as outer products. Change of basis. Spectral decomposition
# $$
# |0\rangle\langle0|= \begin{pmatrix} 1 \\ 0 \end{pmatrix} \begin{pmatrix} 1 & 0 \end{pmatrix} = \begin{pmatrix} 1&0 \\ 0&0 \end{pmatrix},\\
# |0\rangle\langle1| = \begin{pmatrix} 1 \\ 0 \end{pmatrix} \begin{pmatrix} 0 & 1 \end{pmatrix} = \begin{pmatrix} 0&1 \\ 0&0 \end{pmatrix},\\
# |1\rangle\langle0| = \begin{pmatrix} 0 \\ 1 \end{pmatrix} \begin{pmatrix} 1 & 0 \end{pmatrix} = \begin{pmatrix} 0&0 \\ 1&0 \end{pmatrix},\\
# |1\rangle\langle1| = \begin{pmatrix} 0 \\ 1 \end{pmatrix} \begin{pmatrix} 0 & 1 \end{pmatrix} = \begin{pmatrix} 0&0 \\ 0&1 \end{pmatrix}.\\\\
# $$
#
# This also means that we can write any matrix purely in terms of outer products. In the examples above, we constructed the four matrices that cover each of the single elements in a single-qubit matrix, so we can write any other single-qubit matrix in terms of them.
#
# $$
# M= \begin{pmatrix} m_{0,0}&m_{0,1} \\\\ m_{1,0}&m_{1,1} \end{pmatrix} = m_{0,0} |0\rangle\langle0|+ m_{0,1} |0\rangle\langle1|+ m_{1,0} |1\rangle\langle0|+ m_{1,1} |1\rangle\langle1|
# $$
#
#
# $$
# U = |u_{00}\rangle\langle00| + |u_{01}\rangle\langle01| + |u_{10}\rangle\langle10| +|u_{11}\rangle\langle11|
# $$
#
#
# $$
# U = \sum_j e^{ih_j} |h_j\rangle\langle h_j|
# $$
#
#
#
# $$
# H = \sum_j h_j |h_j\rangle\langle h_j| .
# $$
#
#
# $$
# U(\theta) = e^{i \theta H}
# $$
#
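# Using the spectral decomposition, $U(\theta) = e^{i\theta H}$ can be built by exponentiating the eigenvalues. A sketch (`unitary_from_hermitian` is a helper name chosen here):

```python
import numpy as np

def unitary_from_hermitian(H, theta):
    """Build U(theta) = exp(i*theta*H) from the eigendecomposition of a Hermitian H."""
    h, V = np.linalg.eigh(H)   # eigenvalues h_j, eigenvectors as columns of V
    return V @ np.diag(np.exp(1j * theta * h)) @ V.conj().T

X = np.array([[0.0, 1.0], [1.0, 0.0]])
U = unitary_from_hermitian(X, np.pi / 2)
print(np.allclose(U @ U.conj().T, np.eye(2)))  # True: the result is unitary
print(np.allclose(U, 1j * X))                  # True: exp(i*pi/2*X) = iX
```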
# ## Tensor product of matrices
# $$
# A \otimes B=
# \begin{bmatrix}
# a_{11}&a_{12} \\ a_{21}&a_{22} \end{bmatrix} \otimes \begin{bmatrix} b_{11}&b_{12} \\ b_{21}&b_{22} \end{bmatrix} =
# \begin{bmatrix}
# a_{11}B & a_{12}B\\a_{21}B & a_{22}B
# \end{bmatrix} =
# \begin{bmatrix} a_{11}\begin{bmatrix} b_{11}&b_{12} \\ b_{21}&b_{22} \end{bmatrix} & a_{12}\begin{bmatrix} b_{11}&b_{12} \\ b_{21}&b_{22} \end{bmatrix}\\a_{21}\begin{bmatrix} b_{11}&b_{12} \\ b_{21}&b_{22} \end{bmatrix} & a_{22}\begin{bmatrix} b_{11}&b_{12} \\ b_{21}&b_{22} \end{bmatrix} \end{bmatrix} \\=
# \begin{bmatrix}
# a_{11}b_{11} & a_{11}b_{12} & a_{12}b_{11} & a_{12}b_{12} \\
# a_{11}b_{21} & a_{11}b_{22} & a_{12}b_{21} & a_{12}b_{22} \\
# a_{21}b_{11} & a_{21}b_{12} & a_{22}b_{11} & a_{22}b_{12} \\
# a_{21}b_{21} & a_{21}b_{22} & a_{22}b_{21} & a_{22}b_{22}
# \end{bmatrix}
# $$
#
# $$
# Z \otimes X= \begin{bmatrix} 1&0 \\ 0&-1 \end{bmatrix} \otimes \begin{bmatrix} 0&1 \\ 1&0 \end{bmatrix} = \begin{bmatrix} 0&1&0&0 \\ 1&0&0&0\\0&0&0&-1\\0&0&-1&0 \end{bmatrix}.
# $$
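# NumPy's `np.kron` implements exactly this tensor product, so the $4\times4$ matrix above can be checked directly:

```python
import numpy as np

Z = np.array([[1, 0], [0, -1]])
X = np.array([[0, 1], [1, 0]])

ZX = np.kron(Z, X)   # Z tensor X, matching the 4x4 matrix above
print(ZX)
```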
# ## Pauli decomposition
# $$
# M= \begin{pmatrix} m_{0,0}&m_{0,1} \\\\ m_{1,0}&m_{1,1} \end{pmatrix} = m_{0,0} |0\rangle\langle0|+ m_{0,1} |0\rangle\langle1|+ m_{1,0} |1\rangle\langle0|+ m_{1,1} |1\rangle\langle1|
# $$
#
# Now we will see that it also possible to write them completely in terms of Pauli operators. For this, the key thing to note is that
#
# $$
# \frac{1+Z}{2} = \frac{1}{2}\left[ \begin{pmatrix} 1&0 \\\\0&1 \end{pmatrix}+\begin{pmatrix} 1&0 \\\\0&-1 \end{pmatrix}\right] = |0\rangle\langle0|,\\\\\frac{1-Z}{2} = \frac{1}{2}\left[ \begin{pmatrix} 1&0 \\\\0&1 \end{pmatrix}-\begin{pmatrix} 1&0 \\\\0&-1 \end{pmatrix}\right] = |1\rangle\langle1|
# $$
#
# This shows that $|0\rangle\langle0|$ and $|1\rangle\langle1|$ can be expressed using the identity matrix and $Z$. Now, using the property that $X|0\rangle = |1\rangle$, we can also produce
#
# $$
# |0\rangle\langle1| = |0\rangle\langle0|X = \frac{1}{2}(1+Z)~X = \frac{X+iY}{2},\\\\
# |1\rangle\langle0| = X|0\rangle\langle0| = X~\frac{1}{2}(1+Z) = \frac{X-iY}{2}.
# $$
#
# Since we have all the outer products, we can now use this to write the matrix in terms of Pauli matrices:
#
# $$
# M = \frac{m_{0,0}+m_{1,1}}{2}~1~+~\frac{m_{0,1}+m_{1,0}}{2}~X~+~i\frac{m_{0,1}-m_{1,0}}{2}~Y~+~\frac{m_{0,0}-m_{1,1}}{2}~Z.
# $$
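# Since $\mathrm{tr}(P P') = 2\delta_{PP'}$ for single-qubit Paulis, each coefficient can be extracted as $\mathrm{tr}(PM)/2$. A quick numerical check of the decomposition above:

```python
import numpy as np

I = np.eye(2)
X = np.array([[0, 1], [1, 0]])
Y = np.array([[0, -1j], [1j, 0]])
Z = np.array([[1, 0], [0, -1]])

M = np.array([[1.0, 2.0], [3.0, 4.0]])

paulis = [I, X, Y, Z]
coeffs = [np.trace(P @ M) / 2 for P in paulis]   # c_P = tr(P M) / 2
M_rec = sum(c * P for c, P in zip(coeffs, paulis))
print(np.allclose(M, M_rec))  # True: the Pauli expansion reproduces M
```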
#
# This example was for a general single-qubit matrix, but a corresponding result also holds for matrices on any number of qubits. We simply start from the observation that
#
# $$
# \left(\frac{1+Z}{2}\right)\otimes\left(\frac{1+Z}{2}\right)\otimes\ldots\otimes\left(\frac{1+Z}{2}\right) = |00\ldots0\rangle\langle00\ldots0|,
# $$
#
# and can then proceed in the same manner as above. In the end it can be shown that any matrix can be expressed in terms of tensor products of Pauli matrices:
#
# $$
# M = \sum_{P_{n-1},\ldots,P_0 \in \{1,X,Y,Z\}} C_{P_{n-1}\ldots,P_0}~~P_{n-1} \otimes P_{n-2}\otimes\ldots\otimes P_0.
# $$
#
# For Hermitian matrices, note that the coefficients $C_{P_{n-1}\ldots,P_0}$ here will all be real.
#
#
#
# Now we have some powerful tools to analyze quantum operations, let's look at the operations we will need to analyze for our study of universality.
#
# ## **Question:**
# $H = \frac{1}{\sqrt{2}}\begin{pmatrix} 1&1 \\1&-1 \end{pmatrix}, S=\begin{pmatrix} 1&0 \\0&i \end{pmatrix}, T= \begin{pmatrix} 1&0 \\0&e^{i\pi/4} \end{pmatrix}$
# **Express these unitaries as a sum of Pauli matrices**
# # Qiskit
# Qiskit is a package in Python for doing everything you'll ever need with quantum computing.
#
# If you don't have it already, you need to install it. Once it is installed, you need to import it.
#
# There are generally two steps to installing Qiskit. The first is to install Anaconda, a Python distribution that comes with almost all of the dependencies you will need. Once you've done this, Qiskit can then be installed by running the command
# ```
# pip install qiskit
# ```
# in your terminal. For detailed installation instructions, refer to [the documentation page here](https://qiskit.org/documentation/install.html).
import qiskit
qiskit.__qiskit_version__
# ### Quantum circuits
from qiskit import QuantumRegister, QuantumCircuit, ClassicalRegister
# The object at the heart of Qiskit is the quantum circuit. Here's how we create one, which we will call `qc`
qc = QuantumCircuit()
# ### Quantum registers
qr = QuantumRegister(2,'qreg')
# Giving it a name like `'qreg'` is optional.
#
# Now we can add it to the circuit using the `add_register` method, and see that it has been added by checking the `qregs` variable of the circuit object.
# +
qc.add_register( qr )
qc.qregs
# -
# Now that our circuit has some qubits, we can use the circuit's `draw()` method to see what it looks like.
qc.draw(output='mpl')
# Our qubits are ready to begin their journey, but are currently just sitting there in state `|0>`.
# #### Applying Gates
qc.h( qr[0] );
qc.cx( qr[0], qr[1] );
qc.draw(output='mpl')
# ### Question 1: what is the final state of the circuit above?
qc = QuantumCircuit()
qr = QuantumRegister(3,'q')
qc.add_register( qr )
qc.h( qr[0] )
qc.cx( qr[0], qr[1] )
qc.cx( qr[1], qr[2] )
qc.draw(output='mpl')
# ### Question 2: what is the final state of the circuit above?
qc = QuantumCircuit()
qr = QuantumRegister(2,'q')
qc.add_register( qr )
qc.h( qr[0] )
qc.cx( qr[0], qr[1] )
qc.cx( qr[1], qr[0] )
qc.cx( qr[0], qr[1] )
qc.draw(output='mpl')
# ### Question 3: what is the final state of the circuit above?
# # Statevector simulator
from qiskit import Aer
vector_sim = Aer.get_backend('statevector_simulator')
# In Qiskit, we use *backend* to refer to the things on which quantum programs actually run (simulators or real quantum devices). To set up a job for a backend, we need to set up the corresponding backend object.
#
# The simulator we want is defined in the part of qiskit known as `Aer`. By giving the name of the simulator we want to the `get_backend()` method of Aer, we get the backend object we need. In this case, the name is `'statevector_simulator'`.
#
# A list of all possible simulators in Aer can be found using
Aer.backends()
# All of these simulators are 'local', meaning that they run on the machine on which Qiskit is installed. Using them on your own machine can be done without signing up to the IBMQ user agreement.
#
# Running the simulation is done by Qiskit's `execute` command, which needs to be provided with the circuit to be run and the 'backend' to run it on (in this case, a simulator).
from qiskit import execute
job = execute( qc, vector_sim )
ket = job.result().get_statevector()
for amplitude in ket:
print(amplitude)
# This is the vector for a Bell state which is what we'd expect given the circuit.
#
# $$\frac{\left|00\right\rangle + \left|11\right\rangle}{\sqrt{2}}$$
# ### Classical registers and the qasm simulator
# In the above simulation, we got out a statevector. That's not what we'd get from a real quantum computer. For that we need measurement. And to handle measurement we need to define where the results will go. This is done with a `ClassicalRegister`. Let's define a two bit classical register, in order to measure both of our two qubits.
# +
cr = ClassicalRegister(2,'creg')
qc.add_register(cr)
# -
# Now we can use the measure method of the quantum circuit. This requires two arguments: the qubit being measured, and the bit where the result is written.
#
# Let's measure both qubits, and write their results in different bits.
# +
qc.measure(qr[0],cr[0])
qc.measure(qr[1],cr[1])
qc.draw(output='mpl')
# -
# Now we can run this on a local simulator whose effect is to emulate a real quantum device. For this we need to add another input to the execute function, `shots`, which determines how many times we run the circuit to gather statistics. If you don't provide a `shots` value, you get the default of 1024.
# +
emulator = Aer.get_backend('qasm_simulator')
job = execute( qc, emulator, shots=8192 )
# -
hist = job.result().get_counts()
print(hist)
# +
from qiskit.tools.visualization import plot_histogram
plot_histogram( hist )
# -
# ## Accessing on real quantum hardware
# Backend objects can also be set up using the `IBMQ` package. The use of these requires us to [sign in with an IBMQ account](https://qiskit.org/documentation/install.html#access-ibm-q-systems). Assuming the credentials are already loaded onto your computer, you sign in with
from qiskit import IBMQ
MY_API_TOKEN = ''  # paste your IBMQ API token here
IBMQ.save_account(MY_API_TOKEN)
provider = IBMQ.load_account()
provider.backends()
for backend in provider.backends():
print( backend.status() )
real_device = provider.get_backend('ibmq_16_melbourne')
properties = real_device.properties()
coupling_map = real_device.configuration().coupling_map
# +
#coupling_map
# -
# ## Teleportation
# Alice wants to send quantum information to Bob. Specifically, suppose she wants to send the state to Bob.
# $$|\phi⟩=α|0⟩+β|1⟩$$
# This entails passing on information about **α** and **β** to Bob.
#
# <img src="images/teleport.png" width='400'>
#
# There exists a theorem in quantum mechanics which states that you cannot make an exact copy of an unknown quantum state. This is known as the no-cloning theorem. As a result, Alice can't simply generate a copy of $\vert\phi\rangle$ and give the copy to Bob; copying is only possible for classical information.
#
# However, by taking advantage of two classical bits and entanglement, Alice can transfer the state $\vert\phi\rangle$ to Bob. We call this teleportation as at the end Bob will have $\vert\phi\rangle$ and Alice won't anymore. Let's see how this works in some detail.
| Intro to QC/.ipynb_checkpoints/Introduction to Quantum Computing with Qiskit-checkpoint.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Information about dataset
# ```
#
# 1. class:
# 1 = lung
# 2 = head & neck
#        3 = esophagus
# 4 = thyroid
# 5 = stomach
# 6 = duoden & sm.int
# 7 = colon
# 8 = rectum
# 9 = anus
# 10 = salivary glands
#        11 = pancreas
#        12 = gallbladder
# 13 = liver
# 14 = kidney
# 15 = bladder
# 16 = testis
# 17 = prostate
# 18 = ovary
# 19 = corpus uteri
# 20 = cervix uteri
# 21 = vagina
# 22 = breast
# 2. age: <30, 30-59, >=60
# 3. sex: male, female
# 4. histologic-type: epidermoid, adeno, anaplastic
# 5. degree-of-diffe: well, fairly, poorly
# 6. bone: yes, no
# 7. bone-marrow: yes, no
# 8. lung: yes, no
# 9. pleura: yes, no
# 10. peritoneum: yes, no
# 11. liver: yes, no
# 12. brain: yes, no
# 13. skin: yes, no
# 14. neck: yes, no
# 15. supraclavicular: yes, no
# 16. axillar: yes, no
# 17. mediastinum: yes, no
# 18. abdominal: yes, no
# ```
#importing necessary packages
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import missingno as msno
import seaborn as sns
import scipy.stats as st
#reading a file and adding column names to it.
names=['class','age','sex','histologic-type','degree-of-diffe','bone','bone-marrow','lung','pleura','peritoneum','liver','brain','skin','neck','supraclavicular','axillar','mediastinum','abdominal']
data = pd.read_csv('primary-tumor.csv',header=None,names=names)
data.head()
## types of values in each attribute
data.dtypes
data = data.replace('?',np.NaN)
data.head()
# ## Creating another category for the missing values
# ```
# I have replaced all NaN values with the integer 5, so every column that contained NaN values now has an extra category, 5, which can be interpreted as 'Unknown'.
# ```
#We can create another category for the missing values and use them as a different level.
data = data.fillna(5)
data.head()
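# The same fill-with-sentinel idea on a toy frame (illustrative data only, not the tumor dataset):

```python
import numpy as np
import pandas as pd

toy = pd.DataFrame({'histologic-type': ['1', '2', np.nan, '1']})
toy = toy.fillna(5)                      # NaN becomes the extra 'Unknown' level 5
counts = toy['histologic-type'].value_counts()
print(counts.to_dict())
```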
# ## Tumor-wise frequency diagram
data['class'].value_counts()
data['class'].value_counts().plot(kind='bar',figsize=(15,7))
# ## Observation:
# ```
# Lung cancer has the highest number of cases.
# ```
# ## Age wise comparison to lung cancer
# ```
# age classes:
# 1 = <30
# 2 = 30-59
# 3 = >59
# ```
data.groupby('age')['class'].value_counts().unstack().loc[:,1].plot(kind = 'bar')
# ```
# People aged 30-59 are the most affected by lung cancer.
# ```
# ## Age-wise comparison to all types of cancer.
x = sns.jointplot('age','class',data = data,kind = 'kde', color = 'red')
# ### Observation:
# ```
# Across all types of cancer, people aged 30-59 years form the largest group.
# ```
# ## Diseased lungs' effect on different types of cancer:
data.groupby('class')['lung'].value_counts(normalize = True).unstack().loc[:,1].plot(kind= 'bar')
#
# ### Observation:
# ```
# According to the graph above, a diseased lung does not make lung cancer more likely. Surprisingly, however, diseased lungs are more strongly associated with duoden & sm.int, testis and vagina cancers.
# ```
# ## Diseased liver's effect on different types of cancer
data.groupby('class')['liver'].value_counts(normalize = True).unstack().loc[:,1].plot(kind = 'bar')
# ## Observation:
# ```
# Liver cancer is class 13.
# The graph shows that class 13 has no instances in which the liver is diseased, i.e. a diseased liver and liver cancer never co-occur in this dataset.
# ```
x = sns.jointplot('degree-of-diffe','class',data = data,kind = 'kde', color = 'red')
# ### Observation:
# ```
# The degree of differentiation is "poorly" in classes 0-4 (lung, head & neck, esophagus, thyroid), while it is "well" in classes 5-7 (stomach, duoden & sm.int, colon).
# ```
| Labs/Lab1/Romil/Notebooks/Tumor_EDA.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: soccer_dataprovider_comparison
# language: python
# name: soccer_dataprovider_comparison
# ---
# # Data Preparation
#
# This notebook loads the 2018 World Cup dataset provided by StatsBomb and converts it to the [SPADL format](https://github.com/ML-KULeuven/socceraction).
# **Disclaimer**: this notebook is compatible with the following package versions:
#
# - tqdm 4.42.1
# - pandas 1.0
# - socceraction 0.1.1
# +
import os; import sys
from tqdm.notebook import tqdm
import math
import pandas as pd
import socceraction.spadl as spadl
import socceraction.spadl.statsbomb as statsbomb
# -
# ## Configure leagues and seasons to download and convert
# The two dictionaries below map my internal season and league IDs to Statsbomb's IDs. Using an internal ID makes it easier to work with data from multiple providers.
seasons = {
3: '2018',
}
leagues = {
'FIFA World Cup': 'WC',
}
# ## Configure folder names and download URLs
#
# The two cells below define the URL from which the data are downloaded and the folders where the data are stored.
free_open_data_remote = "https://raw.githubusercontent.com/statsbomb/open-data/master/data/"
# +
spadl_datafolder = "../data/statsbomb_opensource"
raw_datafolder = "../data/statsbomb_opensource/raw"
# Create data folder if it doesn't exist
for d in [raw_datafolder, spadl_datafolder]:
if not os.path.exists(d):
os.makedirs(d, exist_ok=True)
print(f"Directory {d} created ")
# -
# ## Set up the statsbombloader
SBL = statsbomb.StatsBombLoader(root=free_open_data_remote, getter="remote")
# ## Select competitions to load and convert
# View all available competitions
df_competitions = SBL.competitions()
set(df_competitions.competition_name)
# +
df_selected_competitions = df_competitions[df_competitions.competition_name.isin(
leagues.keys()
)]
df_selected_competitions
# -
# ## Convert to the SPADL format
for competition in df_selected_competitions.itertuples():
    # Get all matches of the selected competition
matches = SBL.matches(competition.competition_id, competition.season_id)
matches_verbose = tqdm(list(matches.itertuples()), desc="Loading match data")
teams, players, player_games = [], [], []
competition_id = leagues[competition.competition_name]
season_id = seasons[competition.season_id]
spadl_h5 = os.path.join(spadl_datafolder, f"spadl-statsbomb_opensource-{competition_id}-{season_id}.h5")
with pd.HDFStore(spadl_h5) as spadlstore:
spadlstore["actiontypes"] = spadl.actiontypes_df()
spadlstore["results"] = spadl.results_df()
spadlstore["bodyparts"] = spadl.bodyparts_df()
for match in matches_verbose:
# load data
teams.append(SBL.teams(match.match_id))
players.append(SBL.players(match.match_id))
events = SBL.events(match.match_id)
# convert data
player_games.append(statsbomb.extract_player_games(events))
spadlstore[f"actions/game_{match.match_id}"] = statsbomb.convert_to_actions(events,match.home_team_id)
games = matches.rename(columns={"match_id": "game_id", "match_date": "game_date"})
games.season_id = season_id
games.competition_id = competition_id
spadlstore["games"] = games
spadlstore["teams"] = pd.concat(teams).drop_duplicates("team_id").reset_index(drop=True)
spadlstore["players"] = pd.concat(players).drop_duplicates("player_id").reset_index(drop=True)
spadlstore["player_games"] = pd.concat(player_games).reset_index(drop=True)
| notebooks/1-load-and-convert-statsbomb-data.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# # Simulation of XYZ spin models using Floquet engineering in XY mode
# +
import numpy as np
import matplotlib
import matplotlib.pyplot as plt
import qutip
import pulser
from pulser import Pulse, Sequence, Register
from pulser.simulation import Simulation
from pulser.devices import MockDevice, Chadoq2
from pulser.waveforms import BlackmanWaveform
# -
# In this notebook, we will reproduce some results of "Microwave-engineering of programmable XXZ Hamiltonians in arrays of Rydberg atoms", <NAME> et al., https://arxiv.org/pdf/2107.14459.pdf.
# ### Floquet Engineering on two atoms
#
# We start by considering the dynamics of two interacting atoms under $H_{XXZ}$. To demonstrate the dynamically tunable aspect of the microwave engineering, we change the Hamiltonian during the evolution of the system. More specifically, we start from $|\rightarrow \rightarrow \rangle_y$, let the atoms evolve under $H_{XX}$, and apply a microwave pulse sequence between $0.9\mu s$ and $2.1\mu s$ only.
#
# Let us first define our $\pm X$ and $\pm Y$ pulses.
# +
# Times are in ns
t_pulse = 26
X_pulse = Pulse.ConstantDetuning(BlackmanWaveform(t_pulse, np.pi/2.), 0, 0)
Y_pulse = Pulse.ConstantDetuning(BlackmanWaveform(t_pulse, np.pi/2.), 0, -np.pi/2)
mX_pulse = Pulse.ConstantDetuning(BlackmanWaveform(t_pulse, np.pi/2.), 0, np.pi)
mY_pulse = Pulse.ConstantDetuning(BlackmanWaveform(t_pulse, np.pi/2.), 0, np.pi/2)
# -
# Let's also define a function to add the pulses during one cycle.
def Floquet_XXZ_cycles(n_cycles, tau_1, tau_2, t_pulse):
t_half = t_pulse/2.
tau_3 = tau_2
tc = 4*tau_2 + 2*tau_1
for _ in range(n_cycles):
seq.delay(tau_1-t_half, 'MW')
seq.add(X_pulse, 'MW')
seq.delay(tau_2-2*t_half, 'MW')
seq.add(mY_pulse, 'MW')
seq.delay(2*tau_3-2*t_half, 'MW')
seq.add(Y_pulse, 'MW')
seq.delay(tau_2-2*t_half, 'MW')
seq.add(mX_pulse, 'MW')
seq.delay(tau_1-t_half, 'MW')
# We are ready to start building our sequence.
# +
# We take two atoms distant by 10 ums.
coords = np.array([[0, 0], [10, 0]])
qubits = dict(enumerate(coords))
reg = Register(qubits)
seq = Sequence(reg, MockDevice)
seq.declare_channel('MW', 'mw_global')
seq.set_magnetic_field(0., 0., 1.)
tc = 300
seq.delay(3 * tc, 'MW')
Floquet_XXZ_cycles(4, tc/6., tc/6., t_pulse)
seq.delay(6 * tc, 'MW')
# Here are our evaluation times
t_list= []
for p in range(13):
t_list.append(tc/1000.*p)
# -
# Let's draw the sequence, to see that the microwave engineering only happens between $900 ns$ and $2100 ns$, which corresponds to $H_{XX} \to H_{XXX}$. During that period, the total y-magnetization $\langle \sigma^y_1 + \sigma^y_2 \rangle$ is expected to be frozen, as this quantity commutes with $H_{XXX}$.
seq.draw()
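# As a quick sanity check of the freezing claim, the conservation of the total y-magnetization under $H_{XXX}$ can be verified numerically. The following is a minimal, pure-NumPy sketch (independent of the qutip objects used in this notebook); the overall coupling prefactor is set to 1 as an arbitrary choice, since it does not affect the commutator.

```python
import numpy as np

# Pauli matrices and the 2x2 identity
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
Id = np.eye(2)

# Two-spin XXX (Heisenberg) interaction and the total y-magnetization
H_xxx = np.kron(sx, sx) + np.kron(sy, sy) + np.kron(sz, sz)
Sy_tot = np.kron(sy, Id) + np.kron(Id, sy)

# [H_XXX, Sy_tot] = 0, so <Sy_tot> is frozen during the engineered period
comm = H_xxx @ Sy_tot - Sy_tot @ H_xxx
print(np.max(np.abs(comm)))  # 0.0
```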
sim = Simulation(seq, sampling_rate=1.0, config=None, evaluation_times=t_list)
psi_y = (qutip.basis(2, 0)+1j*qutip.basis(2, 1)).unit()
sim.initial_state = qutip.tensor(psi_y, psi_y)
res = sim.run()
sy = qutip.sigmay()
Id = qutip.qeye(2)
Sigma_y = (qutip.tensor(sy, Id)+qutip.tensor(Id, sy))/2.
Sigma_y_res = res.expect([Sigma_y])
# +
plt.figure()
# Showing the Hamiltonian engineering period.
line1 = 0.9
line2 = 2.1
plt.axvspan(line1, line2, alpha=.1, color='grey')
plt.text(1., 0.5, r"$H_{XX} \to H_{XXX}$", fontsize=14)
plt.plot(sim.evaluation_times, Sigma_y_res[0], 'o')
plt.xlabel(r"Time [µs]", fontsize=16)
plt.ylabel(fr'$ (\langle \sigma_1^y + \sigma_2^y \rangle)/2$', fontsize=16)
plt.show()
# -
# Note that one cannot directly measure off diagonal elements of the density matrix experimentally. To be able to measure $\langle \sigma^y_1 + \sigma^y_2 \rangle$, one would need to first apply a rotation on the atoms (equivalent to changing the basis) and then measure the population.
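# The basis-change trick mentioned above can be illustrated on a single qubit: a $\pi/2$ rotation about the $x$-axis maps $\sigma^y$ onto $\sigma^z$, so measuring the population after the rotation yields $\langle \sigma^y \rangle$. Below is a minimal NumPy sketch; the state and rotation angle are chosen purely for illustration.

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

# |psi> = (|0> + i|1>)/sqrt(2), an eigenstate of sigma_y
psi = np.array([1, 1j]) / np.sqrt(2)

# pi/2 rotation about x: U = exp(-i (pi/2) sigma_x / 2)
theta = np.pi / 2
U = np.cos(theta / 2) * np.eye(2) - 1j * np.sin(theta / 2) * sx

exp_sy_direct = np.real(psi.conj() @ sy @ psi)         # off-diagonal, not directly measurable
psi_rot = U @ psi
exp_sz_after = np.real(psi_rot.conj() @ sz @ psi_rot)  # measurable population difference
print(exp_sy_direct, exp_sz_after)  # both ~1.0
```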
# ### Domain-wall dynamics
#
# Now, we will look at the dynamics of the system under $H_{XX2Z}$ when starting in a Domain-Wall (DW) state $|\psi_0\rangle = |\uparrow \uparrow \uparrow \uparrow \uparrow \downarrow \downarrow \downarrow \downarrow \downarrow\rangle$, for two distinct geometries: open boundary conditions (OBC) and periodic boundary conditions (PBC). In the case of $H_{XX2Z}$, only 2 pulses per Floquet cycle are required, as the $X$ and $-X$ pulses cancel out.
def Floquet_XX2Z_cycles(n_cycles, t_pulse):
t_half = t_pulse/2.
tau_3 = tau_2 = tc/4.
for _ in range(n_cycles):
seq.delay(tau_2-t_half, 'MW')
seq.add(mY_pulse, 'MW')
seq.delay(2*tau_3-2*t_half, 'MW')
seq.add(Y_pulse, 'MW')
seq.delay(tau_2-t_half, 'MW')
N_at = 10
# Number of Floquet cycles
N_cycles = 20
# In the following, we will take 1000 projective measurements of the system at the final time.
N_samples = 1000
# In the experiment, all the atoms start in the same initial state. In order to create a domain-wall state, one must apply a $\pi$-pulse on only half of the atoms. On the hardware, this can be done by using a Spatial Light Modulator which imprints a specific phase pattern on a laser beam. This results in a set of focused laser beams in the atomic plane, whose geometry corresponds to the subset of sites to address, preventing the addressed atoms from interacting with the global microwave pulse due to a shift in energy.
#
# This feature is implemented in Pulser. To use it, we need to define a $\pi$-pulse and the list of indices of the atoms we want to mask.
initial_pi_pulse = Pulse.ConstantDetuning(BlackmanWaveform(2*t_pulse, np.pi), 0, 0)
masked_indices = np.arange(N_at//2)
# Line geometry
reg = Register.rectangle(1, N_at, 10)
magnetizations_obc = np.zeros((N_at, N_cycles), dtype=float)
correl_obc = np.zeros(N_cycles, dtype=float)
for m in range(N_cycles): # Runtime close to 2 min!
seq = Sequence(reg, MockDevice)
seq.declare_channel('MW', 'mw_global')
seq.set_magnetic_field(0., 0., 1.)
# Configure the SLM mask, that will prevent the masked qubits from interacting with the first global \pi pulse
seq.config_slm_mask(masked_indices)
seq.add(initial_pi_pulse, 'MW')
seq.add(X_pulse, 'MW')
Floquet_XX2Z_cycles(m, t_pulse)
seq.add(mX_pulse, 'MW')
sim = Simulation(seq)
res = sim.run()
samples = res.sample_final_state(N_samples)
correl = 0.
for key, value in samples.items():
for j in range(N_at):
correl -= (2*float(key[j])-1)*(2*float(key[(j+1)%N_at])-1)*value/N_samples
magnetizations_obc[j][m] += (2*float(key[j])-1)*value/N_samples
correl_obc[m] = N_at/2+correl/2
# +
# Circular geometry
coords = 10.*N_at/(2*np.pi)*np.array([(np.cos(theta*2*np.pi/N_at), np.sin(theta*2*np.pi/N_at)) for theta in range(N_at)])
reg = Register.from_coordinates(coords)
magnetizations_pbc = np.zeros((N_at, N_cycles), dtype=float)
correl_pbc = np.zeros(N_cycles, dtype=float)
for m in range(N_cycles): # Runtime close to 2 min!
seq = Sequence(reg, MockDevice)
seq.declare_channel('MW', 'mw_global')
seq.set_magnetic_field(0., 0., 1.)
seq.config_slm_mask(masked_indices)
seq.add(initial_pi_pulse, 'MW')
seq.add(X_pulse, 'MW')
Floquet_XX2Z_cycles(m, t_pulse)
seq.add(mX_pulse, 'MW')
sim = Simulation(seq)
res = sim.run()
samples = res.sample_final_state(N_samples)
correl = 0.
for key, value in samples.items():
for j in range(N_at):
correl -= (2*float(key[j])-1)*(2*float(key[(j+1)%N_at])-1)*value/N_samples
magnetizations_pbc[j][m] += (2*float(key[j])-1)*value/N_samples
correl_pbc[m] = N_at/2+correl/2
# -
# Let's plot the evolution of the magnetization $\langle \sigma^z_j \rangle$ in time for all the sites $j$.
# +
fig, ax = plt.subplots(1,1)
img = ax.imshow(magnetizations_obc, cmap=plt.get_cmap('RdBu'))
plt.title('OBC',fontsize=16)
ax.set_xlabel('Cycle',fontsize=16)
ax.set_ylabel('Atom number',fontsize=16)
cbar = fig.colorbar(img, shrink=0.7)
cbar.set_label(r'$\langle \sigma^z \rangle$', fontsize=16)
fig, ax = plt.subplots(1,1)
img = ax.imshow(magnetizations_pbc, cmap=plt.get_cmap('RdBu'))
plt.title('PBC',fontsize=16)
ax.set_xlabel('Cycle',fontsize=16)
ax.set_ylabel('Atom number',fontsize=16)
cbar = fig.colorbar(img, shrink=0.7)
cbar.set_label(r'$\langle \sigma^z \rangle$', fontsize=16)
# -
# We see that the magnetization profiles look rather different for OBC and PBC. It seems that the initial DW melts in the case of PBC. In fact, the decrease of $|\langle \sigma^z_j \rangle|$ for all sites is due to a delocalization of the DW along the circle. This delocalization becomes more apparent when looking at correlations. More specifically, we see in the plot below that the number of spin flips between consecutive atoms along the circle, $\langle N_{flip} \rangle=1/2\sum_j(1-\langle \sigma_j^z \sigma_{j+1}^z\rangle)$, remains quite low during the dynamics for both OBC (red) and PBC (blue), while it should tend to $N_{at}/2=5$ for randomly distributed spins.
fig, ax = plt.subplots(1,1)
plt.title(r'Evolution of $\langle N_{flip} \rangle$ in time for OBC (red) and PBC (blue).', fontsize=16)
ax.set_xlabel('Cycle',fontsize=16)
ax.set_ylabel(r'$\langle N_{flip} \rangle$',fontsize=14)
ax.plot(correl_pbc,'--o',color='blue')
ax.plot(correl_obc,'--o',color='red')
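# For reference, $\langle N_{flip} \rangle$ can also be estimated directly from a counts dictionary of projective measurements, with the same shape as the one returned by `res.sample_final_state()`. This is a small illustrative sketch assuming periodic boundary conditions; the bitstrings and counts below are made up.

```python
import numpy as np

def mean_n_flip(counts):
    """Average number of spin flips between consecutive sites on a ring (PBC),
    estimated from a {bitstring: count} dictionary of projective measurements."""
    total = sum(counts.values())
    avg = 0.0
    for bitstring, count in counts.items():
        spins = np.array([int(b) for b in bitstring])
        # a flip is an adjacent pair of unequal spins; np.roll closes the ring
        flips = np.sum(spins != np.roll(spins, -1))
        avg += flips * count / total
    return avg

# Two single-domain-wall configurations on a ring of 10 sites: 2 flips each
print(mean_n_flip({'0001111111': 600, '1111100011': 400}))  # 2.0
```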
# To investigate this delocalization effect further, let us consider a smaller region of only 3 spins prepared in $|\uparrow \rangle$. The delocalization timescale is then shorter, and the effect is more clearly visible in the system.
N_cycles = 26
magnetizations_pbc = np.zeros((N_at, N_cycles), dtype=float)
samples_evol = []
masked_indices = [0, 1, 2]
for m in range(N_cycles): # Runtime close to 4 min!
seq = Sequence(reg, MockDevice)
seq.set_magnetic_field(0., 0., 1.)
seq.declare_channel('MW', 'mw_global')
seq.config_slm_mask(masked_indices)
seq.add(initial_pi_pulse, 'MW')
seq.add(X_pulse, 'MW')
Floquet_XX2Z_cycles(m, t_pulse)
seq.add(mX_pulse, 'MW')
sim = Simulation(seq)
res = sim.run()
samples = res.sample_final_state(N_samples)
samples_evol.append(samples)
correl = 0.
for key, value in samples.items():
for j in range(N_at):
magnetizations_pbc[j][m] += (2*float(key[j])-1)*value/N_samples
fig, ax = plt.subplots(1,1)
img = ax.imshow(magnetizations_pbc, cmap=plt.get_cmap('RdBu'))
ax.set_xlabel('Cycle', fontsize=16)
ax.set_ylabel('Atom number', fontsize=16)
cbar = fig.colorbar(img)
cbar.set_label(r'$\langle \sigma^z \rangle$', fontsize=16)
# We see above that the magnetization profile tends to average. But if we look at the histogram of sampled states in time, we will remark that domain-wall configurations are dominant (in red in the histograms below). As time increases, the delocalization mechanism populates more and more domain-wall states distinct from the initial state.
# +
dw_preserved = ['0001111111', '1000111111', '1100011111', '1110001111',
'1111000111', '1111100011', '1111110001', '1111111000', '0111111100', '0011111110']
for n_cycle in [2*k for k in range(int(N_cycles/2))]: # Runtime close to 2 min !
color_dict = {key: 'red' if key in dw_preserved else 'black' for key in samples_evol[n_cycle]}
plt.figure(figsize=(16, 5))
plt.title(r'Cycle $= {}$'.format(n_cycle), fontsize=18)
plt.bar(samples_evol[n_cycle].keys(), samples_evol[n_cycle].values(), color=color_dict.values())
plt.xlabel("bitstrings", fontsize=16)
plt.ylabel("counts", fontsize=16)
plt.xticks(rotation=90)
plt.show()
| tutorials/quantum_simulation/Microwave-engineering of programmable XXZ Hamiltonians in arrays of Rydberg atoms.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# + [markdown] slideshow={"slide_type": "-"}
# # Numerical Stability and Model Initialization
#
# Vanishing gradients
# -
import sys
sys.path.append('..')
# + origin_pos=2 tab=["pytorch"]
# %matplotlib inline
import mindspore
import mindspore.numpy as mnp
import mindspore.ops as ops
from d2l import mindspore as d2l
def sigmoid(x):
return ops.Sigmoid()(x)
x = mnp.arange(-8.0, 8.0, 0.1)
y = sigmoid(x)
grad_all = ops.GradOperation(get_all=True)
x_grad = grad_all(sigmoid)(x)[0]
d2l.plot(x.asnumpy(), [y.asnumpy(), x_grad.asnumpy()],
legend=['sigmoid', 'gradient'], figsize=(4.5, 2.5))
# + [markdown] slideshow={"slide_type": "slide"}
# Exploding gradients
# + origin_pos=6 tab=["pytorch"]
from mindspore import Tensor
M = ops.normal((4, 4), Tensor(0), Tensor(1))
print('A matrix\n', M)
for i in range(100):
    M = ops.matmul(M, ops.normal((4,4), Tensor(0), Tensor(1)))
print('After multiplying by 100 matrices\n', M)
| chapter_04_multilayer-perceptrons/6_numerical-stability-and-init.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Building the various models
# ### Logistic Regression
# +
import itertools
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.ticker import NullFormatter
import pandas as pd
import matplotlib.ticker as ticker
from sklearn import preprocessing
from sklearn.metrics import f1_score, jaccard_similarity_score
# %matplotlib inline
Feature=pd.read_csv('Feature')
df = pd.read_csv('loan_train.csv')
X=np.load('X.npy')
y = df['loan_status'].values #labels
#X= preprocessing.StandardScaler().fit(X).transform(X)
from sklearn.model_selection import train_test_split
from sklearn import metrics
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import log_loss
import warnings
warnings.filterwarnings("ignore")
# +
#model
X_train, X_test, y_train, y_test = train_test_split( X, y, test_size=0.1, random_state=4)
LR = LogisticRegression(C=0.18, solver='saga').fit(X_train,y_train)
yhat = LR.predict(X_test)
yhat_prob = LR.predict_proba(X_test)
print('The Jaccard Index is:',jaccard_similarity_score(y_test, yhat))
print('The Log Loss is:',log_loss(y_test, yhat_prob))
#For use in test
pd.DataFrame(X_train).to_csv('Xtrain_LR',index=None)
pd.DataFrame(y_train).to_csv('ytrain_LR',index=None)
# -
#Finding the optimal test_size: 0.1
for size in range(1,5):
    X_train, X_test, y_train, y_test = train_test_split( X, y, test_size=size/10, random_state=4)
LR = LogisticRegression(C=0.18, solver='liblinear').fit(X_train,y_train)
yhat = LR.predict(X_test)
yhat_prob = LR.predict_proba(X_test)
print('The Jaccard Index is:',jaccard_similarity_score(y_test, yhat))
print('The Log Loss is:',log_loss(y_test, yhat_prob))
#In searching for optimal values of C, we find that C=0.18 performs best
for c in range(1,20):
X_train, X_test, y_train, y_test = train_test_split( X, y, test_size=.1, random_state=4)
LR = LogisticRegression(C=c/100, solver='liblinear').fit(X_train,y_train)
yhat = LR.predict(X_test)
yhat_prob = LR.predict_proba(X_test)
print('The Jaccard Index is:',jaccard_similarity_score(y_test, yhat))
print('The Log Loss is:',log_loss(y_test, yhat_prob))
#In searching over the available solvers, we find that liblinear is indeed the worst; based on log loss, we choose saga
for Solver in ['newton-cg', 'lbfgs', 'liblinear', 'sag', 'saga']:
X_train, X_test, y_train, y_test = train_test_split( X, y, test_size=.1, random_state=4)
LR = LogisticRegression(C=.18, solver=Solver).fit(X_train,y_train)
yhat = LR.predict(X_test)
yhat_prob = LR.predict_proba(X_test)
print('The Jaccard Index is:',jaccard_similarity_score(y_test, yhat))
print('The Log Loss is:',log_loss(y_test, yhat_prob))
# # We see here that the optimal test size is 0.1, the C-value is 0.18, and the solver is saga, based on log loss
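# The manual loops above can be condensed with scikit-learn's `GridSearchCV`, which additionally cross-validates each combination instead of scoring a single split. The following is a sketch only: it runs on a synthetic stand-in dataset from `make_classification` (with the real notebook, you would pass `X` and `y` instead), and the grid values loosely mirror the ranges explored above.

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV

# Synthetic stand-in for the loan data
X_demo, y_demo = make_classification(n_samples=200, n_features=8, random_state=4)

param_grid = {
    'C': [0.05, 0.18, 1.0],                       # loosely mirrors the sweep above
    'solver': ['newton-cg', 'liblinear', 'saga'],
}
search = GridSearchCV(LogisticRegression(max_iter=1000), param_grid,
                      scoring='neg_log_loss', cv=5)
search.fit(X_demo, y_demo)
print(search.best_params_, -search.best_score_)
```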
| 05BuildingModel_LR.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3.9.7 64-bit (windows store)
# name: python3
# ---
# + dotnet_interactive={"language": "csharp"}
# !pip install onnxruntime
# + dotnet_interactive={"language": "csharp"}
import onnxruntime as rt
import numpy as np
# + dotnet_interactive={"language": "csharp"}
sess = rt.InferenceSession("model.onnx")
# -
# 
# + dotnet_interactive={"language": "csharp"}
input_yearsExperience = sess.get_inputs()[0].name
print("input name", input_yearsExperience)
input_shape = sess.get_inputs()[0].shape
print("input shape", input_shape)
input_type = sess.get_inputs()[0].type
print("input type", input_type)
# + dotnet_interactive={"language": "csharp"}
input_salary = sess.get_inputs()[1].name
print("input name", input_salary)
input_shape = sess.get_inputs()[1].shape
print("input shape", input_shape)
input_type = sess.get_inputs()[1].type
print("input type", input_type)
# + dotnet_interactive={"language": "csharp"}
output_name = sess.get_outputs()[4].name
print("output name", output_name)
output_shape = sess.get_outputs()[4].shape
print("output shape", output_shape)
output_type = sess.get_outputs()[4].type
print("output type", output_type)
# + dotnet_interactive={"language": "csharp"}
years = np.array([[2.5]], dtype=np.float32)
salary = np.array([[0]], dtype=np.float32)
res = sess.run([output_name], {input_yearsExperience: years, input_salary:salary })
res
# + dotnet_interactive={"language": "csharp"}
res[0][0][0]
# + dotnet_interactive={"language": "csharp"}
| notebooks/onnxinference.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# In this tutorial, you will explore the problems of overfitting and underfitting in linear and logistic regression using a toy example.
#
# The accompanying Python file `utils.py` contains helper functions for creating a random training and test dataset with one observation (feature) and a continuous target variable.
# ## (2.2.1) Linear Regression <span style="color:green; font-size:1em">(o)</span> <span style="font-size:1em">📗</span>
#
# **<span style="color:green; font-size:2em"> (a) </span>** <span style="font-size:2em">📗</span> Create a training dataset with input variables $\{x^{(i)} \; | \; i = 1, ..., N\}$ and target variables $\{y_T^{(i)}\; | \; i = 1, ..., N\}$ via `utils.get_train_data()` and run a linear regression on it.
#
# **<span style="color:green; font-size:2em"> (b) </span>** <span style="font-size:2em">📗</span> Predict the target variables, $\{\hat{y}^{(i)}\; | \; i = 1, ..., N\}$, for the observations of the training dataset. Assess the quality of the prediction by computing both the mean squared and the mean absolute error of the prediction:
#
# (i) Squared error: $ \frac{1}{N} \sum_{i=1}^N (\hat{y}^{(i)} - y_T^{(i)})^2$
#
# (ii) Absolute error: $ \frac{1}{N} \sum_{i=1}^N | \hat{y}^{(i)} - y_T^{(i)} | $
#
# **<span style="color:green; font-size:2em"> (c) </span>** <span style="font-size:2em">📗</span> Visualize the result of the regression in a suitable way.
#
# **<span style="color:green; font-size:2em"> (d) </span>** <span style="font-size:2em">📗</span> Now create a test dataset via `utils.get_test_data()` and predict the target variables again with the model built in **b)**. Compute the mean squared and absolute error of the prediction on the test dataset. Interpret the result.
#
# **<span style="color:orange; font-size:2em"> (e) </span>** <span style="font-size:2em">📗</span> Repeat tasks **b)** to **c)** for a quadratic model (use, for example, `from sklearn.preprocessing import PolynomialFeatures`). Interpret the results.
# +
import utils
import numpy as np
# # %matplotlib notebook
# %matplotlib inline
import matplotlib.pyplot as plt
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import Pipeline, make_pipeline
from sklearn.preprocessing import PolynomialFeatures, StandardScaler
plt.style.use('seaborn-whitegrid')
# -
# ### a) - d)
# +
### Convenience Functions
# Visualize a fitted model
def visualize_predict(model, xmin=0, xmax=10):
    ax = plt.gca()
    xx = np.linspace(xmin, xmax, 100)
    ax.plot(xx, model.predict(xx[:, np.newaxis]), "orange")
# Squared error
def quadratic_error(y_true, y_pred):
    return np.mean((y_true - y_pred)**2)
# Absolute error
def absolute_error(y_true, y_pred):
    return np.mean(np.abs(y_true - y_pred))
# +
### Training data
x_train, y_train = utils.get_train_data()
### Test data
x_test, y_test = utils.get_test_data()
# +
### Fit the model
linear_regression = LinearRegression()
linear_regression.fit(x_train, y_train)
### Predictions
# Predict on the training data
y_pred_train = linear_regression.predict(x_train)
# Predict on the test data
y_pred_test = linear_regression.predict(x_test)
### Compute errors
# Training errors
quadratic_train_error = quadratic_error(y_train, y_pred_train)
absolute_train_error = absolute_error(y_train, y_pred_train)
# Test errors
quadratic_test_error = quadratic_error(y_test, y_pred_test)
absolute_test_error = absolute_error(y_test, y_pred_test)
### Visualization
# Training data
plt.scatter(x_train, y_train)
# Prediction function
visualize_predict(linear_regression)
print(f"Squared training error: {quadratic_train_error:.2f} | Squared test error: {quadratic_test_error:.2f}")
print(f"Absolute training error: {absolute_train_error:.2f} | Absolute test error: {absolute_test_error:.2f}")
# -
# ### e)
# +
### Create and fit the model (see below for an explanation of pipelines)
quadratic_regression = make_pipeline(PolynomialFeatures(include_bias=False, degree=2), LinearRegression())
quadratic_regression.fit(x_train, y_train)
### Predictions
# Predict on the training data
y_pred_train = quadratic_regression.predict(x_train)
# Predict on the test data
y_pred_test = quadratic_regression.predict(x_test)
### Compute errors
# Training errors
quadratic_train_error = quadratic_error(y_train, y_pred_train)
absolute_train_error = absolute_error(y_train, y_pred_train)
# Test errors
quadratic_test_error = quadratic_error(y_test, y_pred_test)
absolute_test_error = absolute_error(y_test, y_pred_test)
### Visualization
# Training data
plt.scatter(x_train, y_train)
# Prediction function
visualize_predict(quadratic_regression)
print(f"Squared training error: {quadratic_train_error:.2f} | Squared test error: {quadratic_test_error:.2f}")
print(f"Absolute training error: {absolute_train_error:.2f} | Absolute test error: {absolute_test_error:.2f}")
# -
# ### Pipelines explained
# #### i) Quadratic regression without pipelines
# +
# Pipelines
# Comparison with and without a pipeline
### without pipelines
# Step 1 - transform the data
poly_expansion = PolynomialFeatures(degree=2, include_bias=False)
# 1.1. fit the expansion - determines how many features will be generated
poly_expansion.fit(x_train)
# 1.2. transform
x_train_poly = poly_expansion.transform(x_train)
x_test_poly = poly_expansion.transform(x_test)
# Step 2 - fit the linear regression
linear_regression = LinearRegression()
linear_regression.fit(x_train_poly, y_train)
# Step 3 - validation
y_pred_train = linear_regression.predict(x_train_poly)
y_pred_test = linear_regression.predict(x_test_poly)
print(quadratic_error(y_pred_train, y_train))
print(quadratic_error(y_pred_test, y_test))
# -
# #### ii) Quadratic regression with pipelines
# +
### with pipelines
quadratic_regression = Pipeline([
("poly_expansion", PolynomialFeatures(include_bias=False, degree=2)),
("linear_regression", LinearRegression())
])
quadratic_regression.fit(x_train, y_train)
y_pred_train = quadratic_regression.predict(x_train)
y_pred_test = quadratic_regression.predict(x_test)
print(quadratic_error(y_pred_train, y_train))
print(quadratic_error(y_pred_test, y_test))
# -
# #### iii) (Advanced, Optional) Cubic regression with pipelines, feature scaling, and regularization
# +
# for a regression with regularization, use `Ridge`
from sklearn.linear_model import Ridge
cubic_ridge_regression = Pipeline([
("poly_expansion", PolynomialFeatures(include_bias=False, degree=3)),
("scaling", StandardScaler()),
("ridge_regression", Ridge())
])
cubic_ridge_regression.fit(x_train, y_train)
y_pred_train = cubic_ridge_regression.predict(x_train)
y_pred_test = cubic_ridge_regression.predict(x_test)
print(quadratic_error(y_pred_train, y_train))
print(quadratic_error(y_pred_test, y_test))
# -
# ## (2.2.2) Random Training Data **<span style="color:orange; font-size:1em"> (oo) </span>** <span style="font-size:1em">📙</span>
#
# The helper function `utils.get_train_data()` generates a new, random dataset on every call, while the function `utils.get_test_data()` generates a fixed test dataset. In this exercise, you investigate the influence that the randomness of the training dataset has on the quality of the model.
#
# **<span style="color:green; font-size:2em"> (a) </span>** <span style="font-size:2em">📗</span> Create and visualize two different example training datasets.
#
# **<span style="color:orange; font-size:2em"> (b) </span>** <span style="font-size:2em">📙</span> Repeat tasks **1a)**, **1b)** and **1d)** for $10-20$ randomly generated training datasets. Choose one of the error metrics (for example, RMSE). Store the errors for each of the $10-20$ repetitions of the experiment.
#
# Then compute the following quantity: for each training dataset, you trained and evaluated a separate model, yielding one training error and one test error each. Now compute the mean training and test errors and the standard deviation of these errors across all training datasets (*hint: the training and test errors are themselves already averages - namely over the data points. Here, however, the point is to average these errors over the repetitions of the experiment - in a sense, averages of averages*).
#
# **<span style="color:orange; font-size:2em"> (c) </span>** <span style="font-size:2em">📙</span> Visualize the results from **b)** by plotting the $10-20$ different linear models in a single plot.
#
# **<span style="color:orange; font-size:2em"> (d) </span>** <span style="font-size:2em">📙</span> Now repeat the previous parts using a quadratic model, or even a model of higher degree, instead of a linear model (see task **1e)**).
#
# **<span style="color:green; font-size:2em"> (e) </span>** <span style="font-size:2em">📙</span> Interpret your results.
# ### a)
# +
train_data = [utils.get_train_data() for __ in range(2)]
plt.figure(figsize=(10, 8))
for i, (x_train, y_train) in enumerate(train_data):
    plt.scatter(x_train, y_train, label=f"Training data {i+1}");
plt.legend()
# -
# ### b)
# +
n_runs = 10
def get_polynomial_regression(degree=2):
    model = make_pipeline(
        PolynomialFeatures(degree=degree, include_bias=False),
        LinearRegression(normalize=True, fit_intercept=True)
    )
    return model
# +
n_runs = 10
# all training datasets (one fresh random dataset per run)
train_data = [utils.get_train_data() for __ in range(n_runs)]
# +
# a single, fixed test set
x_test, y_test = utils.get_test_data()
# one fitted model for each training dataset
models = [LinearRegression().fit(x, y) for x, y in train_data]
# equivalent, as an explicit loop:
models = []
for i in range(n_runs):
    this_train_data = train_data[i]
    x_train, y_train = this_train_data
    model = LinearRegression()
    model.fit(x_train, y_train)
    models.append(model)
# for visualization
x_vis = np.linspace(0, 10, 100).reshape(-1, 1)
# individual prediction functions
y_vis = np.array([
    model.predict(x_vis) for model in models
])
# average prediction function and its standard deviation
y_vis_mean = y_vis.mean(axis=0)
y_vis_std = y_vis.std(axis=0)
# predictions of each individual model on the test data
y_pred_test = np.array([
    model.predict(x_test) for model in models
])
# first the squared test error for each prediction (average over data POINTS)
quadratic_test_error = np.mean((y_pred_test - y_test)**2, axis=1)
# then the average test error (average over dataSETS)
mean_quadratic_test_error = np.mean(quadratic_test_error)
std_quadratic_test_error = np.std(quadratic_test_error)
print(f"Average test error {mean_quadratic_test_error}")
print(f"Standard deviation of the test error {std_quadratic_test_error}")
y_hat_mean = y_pred_test.mean(axis=0)
y_hat_std = y_pred_test.std(axis=0)
fig, axes = plt.subplots(1, 2, figsize=(20, 8))
axes[0].plot(np.repeat(x_vis, n_runs, axis=1), y_vis.T)
axes[1].plot(x_vis, y_vis_mean)
axes[1].fill_between(
x_vis.squeeze(),
y1=y_vis_mean-y_vis_std,
y2=y_vis_mean+y_vis_std,
alpha=0.3,
color="cyan"
);
# +
def get_quadratic_regression():
return make_pipeline(PolynomialFeatures(include_bias=False, degree=2), LinearRegression(normalize=True))
models = [get_quadratic_regression().fit(x, y) for x, y in train_data]
x_vis = np.linspace(0, 10, 100).reshape(-1, 1)
y_vis = np.array([
model.predict(x_vis) for model in models
])
y_vis_mean = y_vis.mean(axis=0)
y_vis_std = y_vis.std(axis=0)
y_pred_test = np.array([
model.predict(x_test) for model in models
])
# first the squared test error for each prediction (average over data POINTS)
quadratic_test_error = np.mean((y_pred_test - y_test)**2, axis=1)
# then the average test error (average over dataSETS)
mean_quadratic_test_error = np.mean(quadratic_test_error)
std_quadratic_test_error = np.std(quadratic_test_error)
print(f"Average test error {mean_quadratic_test_error}")
print(f"Standard deviation of the test error {std_quadratic_test_error}")
y_hat_mean = y_pred_test.mean(axis=0)
y_hat_std = y_pred_test.std(axis=0)
fig, axes = plt.subplots(1, 2, figsize=(20, 8))
axes[0].plot(np.repeat(x_vis, n_runs, axis=1), y_vis.T)
axes[1].plot(x_vis, y_vis_mean)
axes[1].fill_between(
x_vis.squeeze(),
y1=y_vis_mean-y_vis_std,
y2=y_vis_mean+y_vis_std,
alpha=0.3,
color="cyan"
)
axes[0].set_ylim(-20, 80)
axes[1].set_ylim(-20, 80);
# -
# ## (2.2.3) Bias-Variance Tradeoff <span style="color:red; font-size:1em"> (ooo) </span> <span style="font-size:1em">📘</span>
#
#
# In the previous task you built a number of models from randomly generated training data and computed the test error for each model. From this, the average test error as well as the variance of the test error could be estimated. You compared the linear model with the quadratic model.
#
# Now we want to increase the complexity of the model systematically.
#
# As a measure of model complexity we use the degree of the polynomial expansion. The parameter `'degree'` can be increased systematically, starting from $1$ (linear model). For each complexity level, a number of models can then be fitted on randomly generated training datasets. The test dataset always stays the same.
#
# Repeat the following steps for each degree (`degree`) of the polynomial expansion:
#
# *(i)* Train $10-20$ different models, each on a randomly generated training dataset. To make the desired effects visible, it helps to reduce the number of observations even further. Use the `n_samples` argument of the function `utils.get_train_data()` for this.
#
# *(ii)* Compute the average prediction across these models and plot it, for example for $x \in [0, 10]$.
#
# *(iii)* Compute the standard deviation across the different predictions and visualize it in a suitable way for $x \in [0, 10]$.
#
# *(iv)* Use `utils.true_function` to plot the function that actually underlies the data.
#
# Try to combine the plots from *(ii)*-*(iv)* for each degree of the polynomial expansion into a single plot. Interpret your results.
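#
# The quantity being probed here is the classical bias-variance decomposition (a standard result, stated here for reference): for a fixed input $x$, the expected squared error of a model $\hat{f}$ trained on a random dataset decomposes as
#
# $$\mathbb{E}\big[(y - \hat{f}(x))^2\big] = \underbrace{\big(f(x) - \mathbb{E}[\hat{f}(x)]\big)^2}_{\text{bias}^2} + \underbrace{\mathbb{E}\big[(\hat{f}(x) - \mathbb{E}[\hat{f}(x)])^2\big]}_{\text{variance}} + \underbrace{\sigma^2}_{\text{noise}}$$
#
# Increasing the polynomial degree typically shrinks the bias term while inflating the variance term.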
# +
x_test, y_test = utils.get_test_data()
degrees = np.arange(1, 11)
def get_polynomial_regression(degree=1):
return make_pipeline(PolynomialFeatures(include_bias=False, degree=degree), LinearRegression())
def get_plottables(xx, degree, n_runs=20, n_samples=20):
train_data = [utils.get_train_data(n_samples=n_samples) for __ in range(n_runs)]
models = [get_polynomial_regression(degree=degree).fit(x, y) for x, y in train_data]
ff = utils.true_function(xx)
yy = np.array([
model.predict(xx) for model in models
])
yy_mean = yy.mean(axis=0)
yy_std = yy.std(axis=0)
return yy, yy_mean, yy_std, ff
def get_test_error(degree, n_runs=1000, n_samples=50):
train_data = [utils.get_train_data(n_samples=n_samples) for __ in range(n_runs)]
models = [get_polynomial_regression(degree=degree).fit(x, y) for x, y in train_data]
xx = np.linspace(0, 10, 1000)
ff = utils.true_function(xx)
yy = np.array([
model.predict(xx[:, np.newaxis]) for model in models
])
error = np.mean((yy - ff)**2)
yy_mean = yy.mean(axis=0)
bias = np.mean((yy_mean - ff)**2)
variance = np.mean((yy - yy_mean)**2)
return error, bias, variance
# +
errors = []
biases = []
variances = []
for degree in degrees:
error, bias, variance = get_test_error(degree)
errors.append(error)
biases.append(bias)
variances.append(variance)
# -
plt.plot(errors, label="error")
plt.plot(biases, label="bias")
plt.plot(variances, label="variance")
plt.legend()
# +
fig, axes = plt.subplots(2, 5, figsize=(15, 6))
xx = np.linspace(0, 10, 1000).reshape(-1, 1)
for ax, degree in zip(axes.flatten(), degrees):
yy, yy_mean, yy_std, ff = get_plottables(xx, degree, n_runs=200, n_samples=50)
ax.fill_between(xx.squeeze(), y1=yy_mean-yy_std, y2=yy_mean+yy_std, color="cyan", alpha=0.3)
ax.plot(xx, ff, color="orange")
ax.plot(xx, yy_mean, color="blue")
ax.set_ylim(-25, 100)
# -
# ## (2.2.4) Regularization <span style="color:green; font-size:1em"> (o) </span> - <span style="color:orange; font-size:1em"> (oo) </span> <span style="font-size:1em">📗</span>
#
#
# To reduce the risk of overfitting, linear/polynomial regression can be regularized. To this end, an additional regularization term is added to the loss function, which ensures that coefficients of small magnitude are preferred over coefficients of large magnitude.
#
# Scikit-Learn provides regularized linear regression in the classes `Ridge`, `ElasticNet` and `Lasso`.
#
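# For reference (standard definitions, not part of the original exercise text), the three classes differ only in the penalty added to the squared-error loss:
#
# $$\text{Ridge:}\;\; \|y - Xw\|_2^2 + \alpha \|w\|_2^2 \qquad\quad \text{Lasso:}\;\; \tfrac{1}{2n}\|y - Xw\|_2^2 + \alpha \|w\|_1$$
#
# `ElasticNet` uses a weighted combination of the $\ell_1$ and $\ell_2$ penalties.
#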
# **<span style="color:green; font-size:2em"> (a) </span>** <span style="font-size:2em">📗</span> First, familiarize yourself with the documentation of all three classes. What is the key difference between them? In the following, use only the `Ridge` class for linear regression with L2 regularization. In any case, set `normalize=True` for all further experiments.
#
# **<span style="color:green; font-size:2em"> (b) </span>** <span style="font-size:2em">📗</span> Choose a regression model with a medium degree of polynomial expansion, say 6-8. First generate a training dataset as in the previous tasks and fit the model. Compare the results of a regression with `alpha=0.0`, `alpha=1.0` and `alpha=10.0` by visualizing the fits in a suitable way, as in the previous tasks, and by comparing the training and test errors of the approaches. Interpret.
#
# **<span style="color:orange; font-size:2em"> (c) </span>** <span style="font-size:2em">📙</span> Now vary the hyperparameter `alpha` of the regression systematically, e.g. logarithmically: $\alpha = 0, 10^{-3}, 5 \cdot 10^{-3}, 10^{-2}, ..., 10$ (tip: `np.logspace`). For each value of the hyperparameter, train $20-50$ different models on randomly generated training data and compute the training and test error each time. Then plot the average training and test error (across the random training datasets) and, in a separate plot, their standard deviation, against the value of the hyperparameter. To make the effect visible, you can reduce the number of observations in the training data via the `n_samples` argument of `utils.get_train_data()`. Interpret the result.
#
# ### a)
from sklearn.linear_model import Ridge, ElasticNet, Lasso, LinearRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures
import utils
def get_polynomial_regression(degree=1, alpha=0.0):
if alpha == 0.0:
return make_pipeline(
PolynomialFeatures(include_bias=False, degree=degree),
LinearRegression(normalize=True, fit_intercept=True)
)
else:
return make_pipeline(
PolynomialFeatures(include_bias=False, degree=degree),
Ridge(alpha=alpha, normalize=True, fit_intercept=True)
)
# ### b)
# +
# choose a relatively high degree of polynomial expansion
degree = 10
# the larger `alpha`, the stronger the regularization
alphas = [0.0, 1.0, 10.0]
# Ridge regression is linear regression with L2 regularization (weight decay)
ridge_models = []
for alpha in alphas:
new_model = get_polynomial_regression(degree=degree, alpha=alpha)
ridge_models.append(new_model)
# training data
x_train, y_train = utils.get_train_data(n_samples=20)
# test data
x_test, y_test = utils.get_test_data()
# model fitting
for model in ridge_models:
model.fit(x_train, y_train)
# +
plt.figure(figsize=(10, 8))
# plot the training data
plt.scatter(x_train, y_train)
### for visualization
# x range for plotting
x_vis = np.linspace(0, 10, 100).reshape(-1, 1)
for model in ridge_models:
y_vis = model.predict(x_vis)
plt.plot(x_vis, y_vis)
# +
### validation
# predictions - training data
y_pred_train = np.array([
    model.predict(x_train) for model in ridge_models
])
# predictions - test data
y_pred_test = np.array([
model.predict(x_test) for model in ridge_models
])
# Use broadcasting
train_errors = np.mean((y_pred_train - y_train)**2, axis=1)
print(train_errors)
test_errors = np.mean((y_pred_test - y_test)**2, axis=1)
print(test_errors)
# -
# ### c)
# +
degree = 6
n_runs = 50
alphas = np.logspace(-3, 1, 10)
x_test, y_test = utils.get_test_data()
models = []
for alpha in alphas:
model = get_polynomial_regression(degree=degree, alpha=alpha)
models.append(model)
# Alternative:
models = [get_polynomial_regression(degree=degree, alpha=alpha) for alpha in alphas]
# +
train_errors = []
test_errors = []
for alpha in alphas:
model = get_polynomial_regression(degree=degree, alpha=alpha)
this_train_errors = []
this_test_errors = []
for run in range(n_runs):
x_train, y_train = utils.get_train_data(n_samples=20)
model.fit(x_train, y_train)
y_hat_train = model.predict(x_train)
y_hat_test = model.predict(x_test)
train_error = np.mean((y_hat_train - y_train)**2)
test_error = np.mean((y_hat_test - y_test)**2)
this_train_errors.append(train_error)
this_test_errors.append(test_error)
train_errors.append(this_train_errors)
test_errors.append(this_test_errors)
train_errors = np.array(train_errors)
test_errors = np.array(test_errors)
# +
plt.figure(figsize=(10, 8))
plt.semilogx(alphas, train_errors.mean(axis=1), label="train error")
plt.semilogx(alphas, test_errors.mean(axis=1), label="test error")
plt.xlabel(r"$\alpha$", fontsize=15)
plt.legend()
# -
| 2_4_overfitting_underfitting/2_2_Aufgabe - Overfitting - Underfitting - Solution.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
# + [markdown] run_control={"frozen": false, "read_only": false}
# # Analysis of PELT signal error due to skip last window update
# + run_control={"frozen": false, "marked": false, "read_only": false}
import logging
from conf import LisaLogging
LisaLogging.setup()
# + run_control={"frozen": false, "marked": false, "read_only": false}
# Generate plots inline
# %matplotlib inline
import json
import os
from trace import Trace
import numpy
import pandas as pd
import matplotlib.pyplot as plt
import trappy
path_to_dat = "/home/joelaf/repo/lisa-aosp/external/lisa/ipynb/scratchpad/pelt-error/trace.dat"
# + [markdown] run_control={"frozen": false, "read_only": false}
# ## Parse Trace and Profiling Data
# + run_control={"frozen": false, "marked": false, "read_only": false}
trace = Trace(None, path_to_dat, events=[ 'sched_switch', 'pelt_update' ])
# + [markdown] run_control={"frozen": false, "read_only": false}
# ## Trace visualization
# + run_control={"frozen": false, "marked": false, "read_only": false}
trappy.plotter.plot_trace(trace.ftrace)
# + [markdown] run_control={"frozen": false, "read_only": false}
# ## Latency DataFrames
# -
df = trace.data_frame.trace_event('pelt_update')
rq_df = df[df.cfs_rq == 1]
rq_df.columns
# +
# Plot the accurate and the actual signals for the RQ
print 'UTIL ERROR'
trappy.ILinePlot(trace.ftrace,
signals = [
'pelt_update:util_avg',
'pelt_update:acc_util_avg',
]).view()
print 'LOAD ERROR'
trappy.ILinePlot(trace.ftrace,
signals = [
'pelt_update:load_avg',
'pelt_update:acc_load_avg',
]).view()
# -
# Data showing util/load errors on the RQ (cpu 1) when thread0 running
# -------------------------------
# Note that, as expected, the error exists only for cases where delta is < 1ms (now - last_update_time)
# +
errpc_fn = (lambda row: (row['util_err'] * 100.0 / row['util_avg']))
err_df = rq_df[((rq_df.util_err > 10) | (rq_df.load_err > 10)) & (rq_df['__comm'] == 'thread0')]
err_df = err_df[['__comm', 'acc_load_avg', 'util_avg', 'acc_util_avg', \
                 'util_err', 'delta_us', 'load_sum', 'sum_err', 'load_avg', 'load_err']]
err_df['util_err_pc'] = err_df.apply(errpc_fn, axis=1)
err_df = err_df.sort_values(by=['util_err_pc'], ascending=False)
print 'number of errors: ' + str(len(err_df))
err_df.head(40)
# -
# # Summary of issues
#
# ### * At 5.98s, there is a 6% error in util_avg (225 vs 211) - this causes a glitch and makes the signal less smooth
# ----
# 
#
#
# ### * At 3.06s, there is a 3% error in util_avg - causing lowered peak of util_avg (397 -> 387) with delta ~450us
# 
#
# <p style="page-break-after:always;"></p>
# # Histogram of Errors before and after fix
#
# ## BEFORE: util_avg and load_avg occurrences of errors
df = rq_df[((rq_df.util_err > 0) | (rq_df.load_err > 0)) & (rq_df['__comm'] == 'thread0')]
df[['util_err', 'load_err']].plot(kind='hist', figsize=(15,7), bins=60, xlim=(0, 20), xticks=range(0,20), stacked=True, title='Occurrence of errors')
# ## AFTER fix: util_avg and load_avg occurrences of errors
# 
| ipynb/scratchpad/pelt-error/pelt-error.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Python Basics
# ## Numbers and Arithmetic Operations
# +
print(5 + 4)
print(5 - 4)
print(5 * 4)
print(5 / 4)
print(12 * 12)
print((5 + 4) * 3)
print(3588 / 11.3)
# +
# taking a square root
import math
math.sqrt(345)
# -
# ## Numbers in Variables
# +
a = 5
print(a)
# -
print(a * a)
# #### Computing an average
# +
# compute the average age
age = 21
age2 = 18
print((age + age2) / 2)
# +
# Instead of computing inside print(), we can also store the result in a variable first
average_age = (age + age2) / 2
# -
print(average_age)
# #### Computing percentages
#
# +
x = 425812 # inhabitants today
y = 380499 # inhabitants 10 years ago
z = x - y  # difference (increase or decrease)
print(z)
# -
(z * 100 / y) # compute the percentage increase
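# The same computation as a reusable function (an illustrative sketch, not part of the original cheat sheet):

```python
def percent_change(old, new):
    """Percentage change from old to new: (new - old) / old * 100."""
    return (new - old) * 100 / old

print(round(percent_change(380499, 425812), 1))  # 11.9
```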
a = 11.908835
# #### Rounding numbers
#
a = round(a, 1) # round a number to a given number of decimal places
print(a)
| Pandas_Vorbereitungen/Python Einstieg Spickzettel.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # September 2021 Challenge: Mentorship Program Screening Tasks
#
# Since this month coincides with the application window for the QOSF mentorship program, we have decided to make the corresponding screening tasks the challenge for this month. So, even if you aren't applying to the mentorship program you can still have a look at the screening tasks.
#
# There are four separate tasks, so you're welcome to try your hand at as many of them as you like!
#
# **_NOTE: The application deadline is September 29th, so don't submit any PRs until after that date since you would be making potential solutions public._**
# ## Task 1
#
# Design a quantum circuit that takes as input the following vector of integers:
#
# [1,5,7,10]
#
# and returns a quantum state which is a superposition of the indices of the target solutions, i.e. the output encodes the indices of the inputs in which two adjacent bits always have different values. In this case the output should be: 1/sqrt(2) * (|01> + |11>), as the correct indices are 1 and 3.
#
# 1 = 0001
# **5 = 0101**
# 7 = 0111
# **10 = 1010**
#
# The method for this task is: starting from an array of integers as input, convert them to binary representation and find those integers whose binary representation is such that any two adjacent bits are different. Once you have found those integers, output a superposition of states where each state is the binary representation of the index of such an integer.
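#
# The classical part of this task (finding which indices should end up in the superposition) can be sketched in plain Python - this is only a reference check, not the quantum circuit itself:

```python
def has_alternating_bits(n, width):
    """True if no two adjacent bits in the width-bit representation of n are equal."""
    bits = format(n, "0{}b".format(width))
    return all(a != b for a, b in zip(bits, bits[1:]))

def target_indices(values, width=4):
    """Indices of the inputs whose bits alternate - the classical answer."""
    return [i for i, v in enumerate(values) if has_alternating_bits(v, width)]

print(target_indices([1, 5, 7, 10]))  # [1, 3] -> superpose |01> and |11>
```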
#
# ### Example
# Consider the vector [1,5,4,2]
#
# Pass the integer values to binary numbers that is [001,101,100,010]
#
# Identify the values whose binary representation is such that two adjacent bits are different; here there are two of them, 101 and 010: [001, 101, 100, 010].
#
# Returns the linear combination of the indices in which the values satisfying the criterion are found.
#
# Indices:
# ```
# 0 1 2 3
# | | | |
# [001, 101, 100, 010]
# ```
#
# Indices are converted to binary states:
# ```
# |00> |01> |10> |11>
# | | | |
# [001, 101, 100, 010]
# ```
#
# The answer would be the superposition of the states |01> and |11> or 1/sqrt(2) * (|01> + |11>)
#
# ### Context
# If you’re struggling to find a proper way to solve this task, you can find some suggestions for a possible solution below. This is one way to approach the problem, but other solutions may be feasible as well, so feel free to also investigate different strategies if you see fit!
#
# The key to this task is to use the superposition offered by quantum computing to load all the values of the input array on a single quantum state, and then locate the values that meet the target condition. So, how can we use a quantum computer to store multiple values? A possible solution is using the QRAM (some references: https://arxiv.org/pdf/0708.1879.pdf, https://github.com/qsharp-community/qram/blob/master/docs/primer.pdf).
#
# As with classical computers, in the QRAM information is accessed using a set of bits indicating the address of the memory cell, and another set for the actual data stored in the array.
# For example, if you want to use a QRAM to store 2 numbers that have at most 3 bits, it can be achieved with 1 qubit of address and 3 qubits of data.
#
# Suppose you have the vector input_2 = [2,7].
# In a properly constructed circuit, when the value of the address qubit is |0> the data qubits have value 010 (binary representation of 2), and when it is |1> the data qubits have value 111 (binary representation of 7).
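#
# The target statevector of that QRAM example can be written down directly with NumPy (this is just the state a correct circuit should prepare, not a circuit construction):

```python
import numpy as np

def basis(index, n_qubits):
    """Computational-basis statevector |index> on n_qubits qubits."""
    v = np.zeros(2 ** n_qubits, dtype=complex)
    v[index] = 1.0
    return v

# |address>|data> for input_2 = [2, 7]: (|0>|010> + |1>|111>) / sqrt(2), 4 qubits total
qram_state = (basis(0b0010, 4) + basis(0b1111, 4)) / np.sqrt(2)
print(np.linalg.norm(qram_state))  # ~1.0, i.e. normalized
```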
#
# Given such a structure, you should be able to use Grover’s algorithm in order to obtain the solution to the task.
#
# You can assume that the input always contains at least two numbers that have alternating bitstrings.
#
# Bonus:
#
# Design a general circuit that accepts vectors with random values of size 2n with m bits in length for each element and finds the state(s) indicated above from an oracle.
# ## Task 2
#
# * Prepare 4 random 4-qubit quantum states of your choice.
# * Create and train a variational circuit that transforms input states into predefined output states. Namely
# * if random state 1 is provided, it returns state |0011>
# * if random state 2 is provided, it returns state |0101>
# * if random state 3 is provided, it returns state |1010>
# * if random state 4 is provided, it returns state |1100>
# * What would happen if you provided a different state?
#
# Analyze and discuss the results.
#
# Feel free to use existing frameworks (e.g. PennyLane, Qiskit) for creating and training the circuits.
# This PennyLane demo can be useful: [Training a quantum circuit with Pytorch](https://pennylane.ai/qml/demos/tutorial_state_preparation.html),
# This Quantum Tensorflow tutorial can be useful: [Training a quantum circuit with Tensorflow](https://www.tensorflow.org/quantum/tutorials/mnist).
#
# For the variational circuit, you can try any circuit you want. You can start from one with a layer of RX, RY and CNOTs, repeated a couple of times (though there are certainly better circuits to achieve this goal).
#
# ### Context
# This challenge has been inspired by the following papers ["A generative modeling approach for benchmarking and training shallow quantum circuits"](https://www.nature.com/articles/s41534-019-0157-8) and ["Generation of High-Resolution Handwritten Digits with an Ion-Trap Quantum Computer"](https://arxiv.org/abs/2012.03924). The target states of this task can be interpreted as the 2x2 “bars and stripes” patterns used in the first paper.
#
# ## Task 3
#
# Implement an interpreter of the qasm 3.0 code that can convert it to a quantum circuit (in the framework of your choice) and calculate a conjugate of a circuit. Provide examples showing that it works.
#
# The gate list that you need to consider are X, Y, Z, RX, RY, RZ, H, S, S†, T, T†, CX, CCX, SWAP & CSWAP.
#
# Some algorithms, such as Grover's algorithm (https://arxiv.org/pdf/2005.06468) or Quantum Autoencoders (https://arxiv.org/pdf/1612.02806) need the transpose conjugate of matrix U. In some frameworks, there is already a way to generate the conjugate of a gate or even a circuit, to help in such situations. In this challenge, you should do this yourself.
#
# The transpose conjugate of a matrix U is denoted U†, and it is obtained by taking the transpose of the matrix U and then taking the complex conjugate of each element. Note that the transpose conjugate U† of a unitary matrix U has the following properties:
# $U^{\dagger}= U^{-1}$ and $U^{\dagger}U = I$.
# https://en.wikipedia.org/wiki/Conjugate_transpose
# https://en.wikipedia.org/wiki/Complex_conjugate
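#
# A quick framework-free sanity check of these properties, using the S gate as an example:

```python
import numpy as np

# S gate: phase gate diag(1, i); its conjugate is S-dagger
S = np.array([[1, 0], [0, 1j]])
S_dag = S.conj().T  # transpose, then complex-conjugate each entry

print(np.allclose(S_dag @ S, np.eye(2)))  # True: U†U = I, so U† = U⁻¹
```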
#
# Idea for expanding:
# * Try writing an interpreter that works also with symbolic parameters, i.e. “RX(theta)” instead of just “RX(0.2)”.
# ## Task 4
#
# Write a program that estimates how long running a variational quantum algorithm might take.
# It should take the following data as input:
# * Circuit (might be a circuit created in some popular framework, QASM file or some other format),
# * Number of circuit evaluations per iteration,
# * Number of iterations of the optimization loop,
# * Device information – information about the device being used (e.g. execution times of the gates),
# * Any additional information that you think is relevant.
#
# An example of a simple, but not very accurate formula would be:
#
# Total runtime = (N_1 * t_1 + N_2 * t_2) * n_s * n_i
#
# Where:
# * N_1, N_2 – number of 1-qubit (or 2) gates
# * t_1, t_2 – time of execution of 1-qubit (or 2) gates
# * n_s – number of samples per iteration
# * n_i – number of iterations
#
# Note that this doesn’t take into account that certain gates can be executed in parallel.
#
# This task is pretty open-ended – please try to make your formula as realistic as possible. It will require some investigation and review of existing literature or technical documentation on your own, which might turn out to be much more challenging than it seems, but we hope also much more rewarding :)
#
# Once this is done, you can try analyzing some numerical data from the existing research in order and see how long running such a circuit took (if done on a real device) or could take (if data comes from a simulation).
#
# Some papers with data about the quantum computing devices:
# * [<NAME> et al.](https://journals.aps.org/prxquantum/abstract/10.1103/PRXQuantum.1.020304) (ETH), see Table I
# * [Arute et al., supplementary information](https://static-content.springer.com/esm/art%3A10.1038%2Fs41586-019-1666-5/MediaObjects/41586_2019_1666_MOESM1_ESM.pdf) (Google)
# * [Superconducting Qubits: current state of play](https://arxiv.org/abs/1905.13641) (Review)
# * [Materials challenges and opportunities for quantum computing hardware](https://www.science.org/doi/10.1126/science.abb2823) (behind paywall :( )
# * You can often find specific information about quantum devices on the website of the companies building quantum hardware/software.
#
# Review paper on Variational Quantum Algorithms to look for factors that may contribute to longer runtimes:
# * [1st review paper](https://arxiv.org/abs/2012.09265)
# * [2nd review paper](https://arxiv.org/abs/2101.08448)
| challenge-2021.09-sep/challenge-2021.09-sep.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# GET
# +
import requests
resp = requests.request('get','http://httpbin.org/get',params={'Key':'Value'})
resp.status_code, resp.headers, resp.content
# -
# POST
resp = requests.request('post','http://httpbin.org/post',params={'Key':'Value'})
resp.status_code, resp.headers, resp.content
# +
from urllib import parse
header = {'user-agent':'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/73.0.3683.103 Safari/537.36'}
def getDownload(url, params={}, retries=3):
resp = None
try:
resp = requests.get(url, params=params, headers=header)
resp.raise_for_status()
except requests.exceptions.HTTPError as e:
if 500 <= e.response.status_code < 600 and retries > 0:
print(retries)
resp = getDownload(url, params, retries-1)
else:
print(e.response.status_code)
print(e.response.reason)
print(e.response.headers)
return resp
# -
html = getDownload('http://httpbin.org/get',{'Key':'Value'})
html.text, html.content
# ### http://www.crawler-test.com/status_codes/status_
# #### (Error test URL) The error returned depends on what follows status_, e.g. status_403
html = getDownload('http://www.crawler-test.com/status_codes/status_500')
url = 'http://openapi.airkorea.or.kr/openapi/services/rest/ArpltnInforInqireSvc/getCtprvnRltmMesureDnsty'
params = {
'serviceKey':'<KEY>BTerSXcPkeIM58OHp9A3qt9TP14WdsZHQ%3D%3D',
'numOfRows':10,
'pageNo':1,
'sidoName':'인천',
'ver':1.3,
'_returnType':'json'
}
result = getDownload(url, params)
result.text
result.url
org = requests.utils.unquote(params['serviceKey'])
params['serviceKey'] = org
result = getDownload(url, params)
import json
result = json.loads(result.text)
for row in result['list']:
print(row['stationName'], row['pm25Value'])
# ### http://pythonscraping.com/pages/files/form.html # Test Url
# +
url = 'http://pythonscraping.com/pages/files/form.html'
html = getDownload(url, {'firstname':'1234','lastname':'1234'})
html.request.headers
# -
html.text
# +
url = 'http://pythonscraping.com/pages/files/processing.php'
html = getDownload(url, {'firstname':'1234','lastname':'1234'})
html.request.body
# -
html.text
html = requests.post(url, {'firstname':'1234','lastname':'1234'})
html.request.body
# ## Using cookies
# ### Before
# +
url = 'http://pythonscraping.com/pages/cookies/welcome.php'
html = requests.post(url, {'username':'test','password':'password'})
html.text
# -
# ### After
html.cookies.get_dict()
html = requests.get(url, cookies=html.cookies)
html.text
# ## Using a session
session = requests.Session()
html = session.post(url, {'username':'test','password':'password'})
html = session.post(url)
html.text
# ### www.timetime.kr Session Test
# ### Matching credentials
url = 'http://www.timetime.kr/user/login'
data = {'username':'누구게요','password':'<PASSWORD>'}
session = requests.Session()
html = session.post(url, data)
html.text
# ### Mismatched credentials
data = {'username':'누구게요','password':'<PASSWORD>'}
session = requests.Session()
html = session.post(url, data)
html.text
# ## Kyobo Book Centre
url = 'http://'
params = {
    'vPStrCategory':'TOT',
    'vPstrKeyWrod':'hi',
    'vPplace':'top'
}
html = requests.post(url, params)
| week1/20190502_Cookie_Session.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/Harley-hwan/EQTransformer/blob/master/run_eqtransformer.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + [markdown] id="84U_H9yX0r4b"
# # Install required packages
# + colab={"base_uri": "https://localhost:8080/"} id="jmik_q5-3Z2H" outputId="1f3276dd-4eb2-4a90-ca70-4a8ac06ee9be"
# !git clone https://github.com/smousavi05/EQTransformer.git
# + colab={"base_uri": "https://localhost:8080/"} id="0ozbuRKQdh9A" outputId="a7ecb574-1bf1-4d1c-cfe9-c221d1a9d851"
# !sudo apt-get update
# !sudo apt-get install python
# !sudo apt-get install python-dev
# !sudo apt-get install python-setuptools
# !sudo apt-get install python-numpy
# !sudo apt-get install python-numpy-dev
# !sudo apt-get install python-scipy
# !sudo apt-get install python-matplotlib
# !sudo apt-get install python-lxml
# !sudo apt-get install python-sqlalchemy
# !sudo apt-get install python-suds
# !sudo apt-get install ipython
# + colab={"base_uri": "https://localhost:8080/"} id="sGa8bICMVeHW" outputId="a49da2f6-0166-47c1-d132-78e6b920a756"
# !git clone https://github.com/obspy/obspy
# + id="lC-S7m3h1T92"
import sys
sys.path.append('./obspy')
import obspy
# + [markdown] id="7b83N--Mrqm4"
# - Once the cell below has finished running, click "RESTART RUNTIME"
# + id="ENFGstFqRJR6" colab={"base_uri": "https://localhost:8080/", "height": 1000} outputId="5d103e78-376e-42a0-c2ca-a5f6c49d49b8"
# !pip uninstall keras-nightly
# !pip uninstall -y tensorflow
# !pip uninstall -y keras
# !pip install tensorflow==2.0.0
# !pip install tensorflow-gpu==2.0.0
# !pip install keyring>=15.1
# !pip install pkginfo>=1.4.2
# !pip install h5py==2.10.0
# !pip install keras==2.3.1
# !pip install tqdm==4.48.0
# !pip install EQTransformer --no-dependencies
# !pip install obspy
# + [markdown] id="71XmlcuDT3G9"
# # Download sample data
# - To run the model inference code, the data required for inference has to be downloaded first.
# - First, download the station_list.json file, which contains the information about the stations to fetch data from.
# - Then fetch the earthquake data of the stations listed in station_list.json from the Southern California Earthquake Data Center or from IRIS.
# - The downloaded earthquake data is split per station and saved to output_dir.
# + colab={"base_uri": "https://localhost:8080/"} id="xlMQ6pkvT5Xw" outputId="7275fd16-4c06-4e6b-f918-34844a76ab61"
from EQTransformer.utils.downloader import makeStationList
from EQTransformer.utils.downloader import downloadMseeds
import os
json_basepath = os.path.join(os.getcwd(),"json/station_list.json")
makeStationList(json_path=json_basepath, client_list=["SCEDC"], min_lat=35.50, max_lat=35.60, min_lon=-117.80, max_lon=-117.40, start_time="2019-09-01 00:00:00.00", end_time="2019-09-03 00:00:00.00", channel_list=["HH[ZNE]", "HH[Z21]", "BH[ZNE]"], filter_network=["SY"], filter_station=[])
downloadMseeds(client_list=["SCEDC", "IRIS"], stations_json=json_basepath, output_dir="./downloads_mseeds", min_lat=35.50, max_lat=35.60, min_lon=-117.80, max_lon=-117.40, start_time="2019-09-01 00:00:00.00", end_time="2019-09-03 00:00:00.00", chunk_size=1, channel_list=[], n_processor=2)
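# The `chunk_size=1` argument above asks the downloader to fetch the requested time window in one-day pieces. A minimal sketch of that kind of chunking (an illustration of the idea only, not EQTransformer's actual download code):

```python
from datetime import datetime, timedelta

def make_chunks(start, end, chunk_days):
    """Split [start, end) into consecutive chunks of chunk_days days.
    Illustrative sketch of what a chunk_size parameter might do;
    NOT EQTransformer's actual implementation."""
    chunks = []
    t = start
    step = timedelta(days=chunk_days)
    while t < end:
        chunks.append((t, min(t + step, end)))
        t += step
    return chunks

# Two days of data in one-day chunks, matching the window requested above
chunks = make_chunks(datetime(2019, 9, 1), datetime(2019, 9, 3), 1)
```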
# + [markdown] id="LwDiWnPPV4ih"
# # Running inference on the sample data
# - Model inference can run on either mseed data or hdf5 data.
# - The code below runs inference on the mseed data.
# + colab={"base_uri": "https://localhost:8080/", "height": 1000} id="Uby6oWb_AMF0" outputId="44152b0d-f3df-45ca-b5c9-6e7e6839ada3"
from EQTransformer.core.mseed_predictor import mseed_predictor
mseed_predictor(input_dir='downloads_mseeds',
input_model='EQTransformer/ModelsAndSampleData/EqT_model.h5',
stations_json=json_basepath,
output_dir='detection_results',
detection_threshold=0.2,
P_threshold=0.1,
S_threshold=0.1,
number_of_plots=10,
plot_mode='time_frequency',
batch_size=500,
overlap=0.3)
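# The `overlap=0.3` argument slides consecutive prediction windows so that each shares 30% of its samples with the previous one. A sketch of such a sliding-window scheme (the 6000-sample window matches EQTransformer's input length, but the exact stepping here is an assumption, not the library's code):

```python
def window_starts(n_samples, window, overlap):
    """Start indices of sliding windows over a trace of n_samples,
    where consecutive windows share `overlap` fraction of samples.
    Illustrative only; not the library's exact windowing logic."""
    step = int(window * (1.0 - overlap))
    return list(range(0, n_samples - window + 1, step))

# 4 minutes of 100 Hz data, 1-minute windows, 30% overlap
starts = window_starts(n_samples=24000, window=6000, overlap=0.3)
```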
# + [markdown] id="RnmGfFV_V9ed"
# # Visualizing the inference results
# - Running the inference code produces a time_tracks.pkl file.
# - time_tracks.pkl holds how the data is laid out over time, so the continuity and shape of the data can be inspected visually.
# - Below is a visualization of the time_tracks.pkl file.
# + id="EOoIvdz75gmO" colab={"base_uri": "https://localhost:8080/", "height": 1000} outputId="6d973d6e-97c7-4a77-dcfb-f886c214f993"
# %matplotlib inline
from EQTransformer.utils.plot import plot_data_chart
plot_data_chart('time_tracks.pkl', time_interval=10)
# + [markdown] id="TAeRgtx9sCNN"
# - Below is a visualization of the raw data for a specific day.
# + colab={"base_uri": "https://localhost:8080/", "height": 1000} id="WYWpHQKF3YVB" outputId="b0cfa608-618f-4aa6-e972-35a966db681b"
from EQTransformer.utils.plot import plot_detections, plot_helicorder
plot_helicorder(input_mseed='downloads_mseeds/CA06/GS.CA06.00.HHZ__20190902T000000Z__20190903T000000Z.mseed', input_csv=None)
# + [markdown] id="CLapXk8ItoBU"
# - The visualized raw data can also be overlaid with the inference results.
# + colab={"base_uri": "https://localhost:8080/", "height": 600} id="dGaB026jtnt8" outputId="a37e0b6a-3911-4a14-84ab-af2ee87e971c"
plot_helicorder(input_mseed='downloads_mseeds/CA06/GS.CA06.00.HHZ__20190902T000000Z__20190903T000000Z.mseed', input_csv='detection_results/CA06_outputs/X_prediction_results.csv')
# + [markdown] id="UkscvP00vGWs"
# - The detection results per station can be visualized on a map as follows.
# + colab={"base_uri": "https://localhost:8080/", "height": 1000} id="AMopD2VZ5C0P" outputId="17d72447-2638-4c29-e724-34426c466ddd"
plot_detections(input_dir='detection_results', input_json=json_basepath, plot_type='station_map', marker_size=50)
# + [markdown] id="XXz6gG4yvd74"
# - The detection results per station can also be visualized as a histogram, as shown below.
# + colab={"base_uri": "https://localhost:8080/", "height": 1000} id="qBfJLQvL5KvX" outputId="cb6427eb-251c-4638-9131-b2809336c56c"
plot_detections(input_dir='detection_results', input_json=json_basepath, plot_type='hist', time_window=120)
# + [markdown] id="s3Lo7pkQMRvn"
# # Modifying the model
# - Changing the model parameters passed to the trainer below builds a modified model, trains it on the specified data, and writes the trained model to a file.
# - The newly defined model is trained on the 100 sample records provided.
# - Training should normally use a large hdf5 file, but since large data cannot be used here, the sample data is used instead.
# + colab={"base_uri": "https://localhost:8080/", "height": 1000} id="zd_0vdduMRbF" outputId="531dbd9a-3b6e-40b8-8a44-55d1583c3925"
from EQTransformer.core.trainer import trainer
hdf5_path = './EQTransformer/ModelsAndSampleData/100samples.hdf5'
csv_path = './EQTransformer/ModelsAndSampleData/100samples.csv'
trainer(input_hdf5=hdf5_path,
input_csv=csv_path,
output_name='test_trainer',
cnn_blocks=2,
lstm_blocks=1,
padding='same',
activation='relu',
drop_rate=0.2,
label_type='gaussian',
add_event_r=0.6,
add_gap_r=0.2,
shift_event_r=0.9,
add_noise_r=0.5,
mode='generator',
train_valid_test_split=[0.60, 0.20, 0.20],
batch_size=20,
epochs=10,
patience=2,
gpuid=None,
gpu_limit=None)
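# The `train_valid_test_split=[0.60, 0.20, 0.20]` argument partitions the samples by fractions. A minimal sketch of one way such a split can work (the real trainer shuffles and may round differently):

```python
def split_indices(n, fractions):
    """Partition indices 0..n-1 into consecutive train/valid/test
    groups by the given fractions. Illustrative sketch only."""
    n_train = int(n * fractions[0])
    n_valid = int(n * fractions[1])
    train = list(range(0, n_train))
    valid = list(range(n_train, n_train + n_valid))
    test = list(range(n_train + n_valid, n))
    return train, valid, test

# 100 sample records split 60/20/20, as in the trainer call above
train, valid, test = split_indices(100, [0.60, 0.20, 0.20])
```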
# + [markdown] id="wZ3tBWvBs1Hv"
# - Evaluate the modified model's performance on the sample data.
# + id="_8h96wfNs1io" colab={"base_uri": "https://localhost:8080/", "height": 1000} outputId="349181da-545e-4f96-b9a3-8dd97f32b4ae"
from EQTransformer.core.tester import tester
hdf5_path = './EQTransformer/ModelsAndSampleData/100samples.hdf5'
csv_path = './EQTransformer/ModelsAndSampleData/100samples.csv'
tester(input_hdf5=hdf5_path,
input_testset='test_trainer_outputs/test.npy',
input_model='test_trainer_outputs/models/test_trainer_001.h5',
output_name='test_tester',
detection_threshold=0.20,
P_threshold=0.1,
S_threshold=0.1,
number_of_plots=3,
estimate_uncertainty=True,
number_of_sampling=2,
input_dimention=(6000, 3),
normalization_mode='std',
mode='generator',
batch_size=10,
gpuid=None,
gpu_limit=None)
| run_eqtransformer.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + id="fZ2OoqbpTzYq" colab_type="code" outputId="d44e5553-1f7c-47ef-8f0c-55edc44c4223" executionInfo={"status": "ok", "timestamp": 1576073785240, "user_tz": -300, "elapsed": 36922, "user": {"displayName": "<NAME> Lab Engineer", "photoUrl": "https://lh3.googleusercontent.com/a-/AAuE7mBDjs7Blc-tLbD3GKXlnzMrXGQ5riT5EkS7Wfl2Lw=s64", "userId": "08477479575678048584"}} colab={"base_uri": "https://localhost:8080/", "height": 122}
from google.colab import drive
drive.mount('/content/gdrive', force_remount=True)
# + id="Ngd3b9fFUVE6" colab_type="code" outputId="15f772ad-72af-49d8-9eac-380dca9a70c6" executionInfo={"status": "ok", "timestamp": 1576073852568, "user_tz": -300, "elapsed": 2955, "user": {"displayName": "<NAME> Lab Engineer", "photoUrl": "https://lh3.googleusercontent.com/a-/AAuE7mBDjs7Blc-tLbD3GKXlnzMrXGQ5riT5EkS7Wfl2Lw=s64", "userId": "08477479575678048584"}} colab={"base_uri": "https://localhost:8080/", "height": 51}
# %cd 'gdrive/My Drive/Colab Notebooks/Udacity_DLND/RGB_to_GRAYSCALE_Autoencoder-'
# !pwd
# + id="3XC8v1OGTjkh" colab_type="code" colab={}
import cv2
import glob
count = 1
# + id="naUEiaWzTjks" colab_type="code" colab={}
path_daisy = 'flower_photos/daisy/*.jpg'
path_dandelion = 'flower_photos/dandelion/*.jpg'
path_roses = 'flower_photos/roses/*.jpg'
path_sunflowers = 'flower_photos/sunflowers/*.jpg'
path_tulips = 'flower_photos/tulips/*.jpg'
paths_list = [path_daisy, path_dandelion, path_roses, path_sunflowers, path_tulips]
# + id="i1am1WoGTjkx" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 1000} outputId="8bca3695-bb05-4963-f146-690bd2d54621" executionInfo={"status": "ok", "timestamp": 1576075471075, "user_tz": -300, "elapsed": 1111781, "user": {"displayName": "<NAME> Lab Engineer", "photoUrl": "https://lh3.googleusercontent.com/a-/AAuE7mBDjs7Blc-tLbD3GKXlnzMrXGQ5riT5EkS7Wfl2Lw=s64", "userId": "08477479575678048584"}}
for path in paths_list:
filenames = glob.glob(path)
for filename in filenames:
image = cv2.imread(filename)
gray_img = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
image = cv2.resize(image, (128, 128))
gray_img = cv2.resize(gray_img, (128, 128))
cv2.imwrite("gray_images/gray_" +str(count) +".jpg", gray_img)
cv2.imwrite("color_images/color_" +str(count) +".jpg", image)
print(count)
count += 1
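# `cv2.COLOR_BGR2GRAY` computes luma with the BT.601 weights, Y = 0.299 R + 0.587 G + 0.114 B. A per-pixel sketch of that conversion (illustrative; OpenCV operates on whole arrays at once):

```python
def bgr_to_gray(b, g, r):
    """Luma conversion with the BT.601 weights that
    cv2.COLOR_BGR2GRAY uses: Y = 0.299 R + 0.587 G + 0.114 B."""
    return round(0.299 * r + 0.587 * g + 0.114 * b)

white = bgr_to_gray(255, 255, 255)  # pure white stays 255
blue = bgr_to_gray(255, 0, 0)       # pure blue maps to a dark gray
```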
# + id="vZjzJcF5Tjk1" colab_type="code" colab={}
| Create New Dataset.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Table of contents
#
# 00. [Introduction to TensorFlow](00_Terminologia_ML.ipynb)
# 01. [Detailed study of ML: Linear regression](01_Estudio_detallado_ML_Regresión_lineal.ipynb)
# 02. [Detailed study of ML: Training and loss](02_Estudio_detallado_ML_Entrenamiento_perdida.ipynb)
# 03. [Reducing loss: An iterative approach](03_Reduccion_perdida_Un_enfoque_iterativo.ipynb)
# 04. [Partial derivatives and gradients](04_A_Derivadas_parciales_y_las_gradientes.ipynb)
# 04. [Reducing loss: Gradient descent](04_Reducción_pérdida_Descenso_gradientes.ipynb)
# 05. [Reducing loss: Learning rate](05_Reduccion_perdida_Tasa_aprendizaje.ipynb)
# 06. [Reducing loss: Stochastic gradient descent](06_Reducción_perdida_Descenso_gradiente_estocastico.ipynb)
# 07. [Generalization: Risks of overfitting](07_Generalizacion_Riesgos_sobreajuste.ipynb)
# 08. [Training and test sets: Splitting the data](08_Conjuntos_entrenamiento_prueba_Separacion_datos.ipynb)
# 09. [Validation: Another partition](09_Validacion_Otra_particion.ipynb)
# 10. [Representation: Feature engineering](10_Representación_Ingenieria_atributos.ipynb)
# 11. [Representation: Qualities of good features](11_Representacion_Cualidades_buenos_atributos.ipynb)
# 12. [Representation: Cleaning data](12_Representacion_Limpieza_datos.ipynb)
# 13. [Feature crosses: Encoding nonlinearity](13_Combinaciones_atributos_Codificacion_linealidad.ipynb)
# 14. [Feature crosses: Crossing one-hot vectors](14_Combinaciones_atributos_Vectores_un_solo_1_combinados.ipynb)
# 15. [Regularization for simplicity: L2 regularization](15_Regularizacion_para_lograr_simplicidad_RegularizacionL2.ipynb)
# 16. [Regularization for simplicity: Lambda](16_Regularizacion_para_lograr_simplicidad_Lambda.ipynb)
# 17. [Logistic regression: Calculating probabilities](17_Regresion_logistica_Calculo_probabilidades.ipynb)
# 18. [Logistic regression: Model training](18_Regresion_logistica_Entrenamiento_modelos.ipynb)
# 19. [Classification: Thresholding](19_Clasificacion_Umbral.ipynb)
# 20. [Classification: True/false positives and negatives](20_Clasificacion_Verdadero_falso_positivo_negativo.ipynb)
# 21. [Classification: Accuracy](21_Clasificacion_Exactitud.ipynb)
# 22. [Classification: Precision and recall](22_Clasificacion_Precision_exhaustividad.ipynb)
#
#
| notebooks/02_Machine_Learning/Teoric/000_Indice.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Pandas Visualization
import numpy as np
import pandas as pd
import seaborn as sns
# %matplotlib inline
# Set seaborn for better visualization
sns.set()
df1 = pd.read_csv('../sample-files/dataframe1', index_col=0)
df1.head()
df2 = pd.read_csv('../sample-files/dataframe2')
df2.head()
# Histogram of the data in a specific column syntax 1
df1['A'].hist(bins=40)
# syntax 2
df1['A'].plot(kind='hist', bins=40)
# syntax 3
df1['A'].plot.hist(bins=40)
# Area plot
df2.plot.area(alpha=0.5)
# Bar plot
df2.plot.bar(stacked=True)
# Histogram
df1['A'].plot.hist(bins=40)
# Line plot
df1.plot.line(y='B', figsize=(12, 3), lw=2)
# Scatter plot
df1.plot.scatter(x='A', y='B', c='C', cmap='coolwarm')
df1.plot.scatter(x='A', y='B', s=df1['C']*50)
# Box plot
df2.plot.box()
df = pd.DataFrame(np.random.randn(1000,2), columns=['a', 'b'])
df.head()
# Hex plot
df.plot.hexbin(x='a', y='b', gridsize=20, cmap='coolwarm')
# Kernel density estimation
df2['a'].plot.kde()
# Density
df2.plot.density()
| notebooks/pandas visualization.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Basics of logistic regression
# ## Import the relevant libraries
# +
import numpy as np
import pandas as pd
import statsmodels.api as sm
import matplotlib.pyplot as plt
import seaborn as sns
import os
sns.set()
#Apply a fix to the statsmodels library
from scipy import stats
stats.chisqprob = lambda chisq, df: stats.chi2.sf(chisq, df)
# -
# ## Load the data
raw_data = pd.read_csv(os.path.join(os.path.pardir, 'data', 'raw', '2.01. Admittance.csv'))
raw_data
# Replace all No entries with 0, and all Yes entries with 1
data = raw_data.copy()
data['Admitted'] = data['Admitted'].map({'Yes': 1, 'No': 0})
data
# ## Variables
# Create the dependent and independent variables
y = data['Admitted']
x1 = data['SAT']
# ## Let's plot the data
# ### Scatter plot
# Create a scatter plot of x1 (SAT, no constant) and y (Admitted)
plt.scatter(x1,y, color='C0')
# Don't forget to label your axes!
plt.xlabel('SAT', fontsize = 20)
plt.ylabel('Admitted', fontsize = 20)
plt.show()
# ### Plot with a regression line
# +
# Create a linear regression on the data in order to estimate the
# coefficients and be able to plot a regression line
# The data is not linear, so the linear regression doesn't make much sense
x = sm.add_constant(x1)
# I'll call it reg_lin, instead of reg, as we will be dealing with logistic regressions later on
reg_lin = sm.OLS(y,x)
# I'll segment it into regression and fitted regression (results) as I
# can use the results as an object for some operations
results_lin = reg_lin.fit()
# Create a scatter plot
plt.scatter(x1,y,color = 'C0')
# Plot the regression line. The coefficients are coming from results_lin.params
y_hat = x1*results_lin.params[1]+results_lin.params[0]
plt.plot(x1,y_hat,lw=2.5,color='C8')
plt.xlabel('SAT', fontsize = 20)
plt.ylabel('Admitted', fontsize = 20)
plt.show()
# -
# ### Plot a logistic regression curve
# +
# Creating a logit regression
reg_log = sm.Logit(y,x)
# Fitting the regression
results_log = reg_log.fit()
# Creating a logit function, depending on the input and coefficients
def f(x,b0,b1):
return np.array(np.exp(b0+x*b1) / (1 + np.exp(b0+x*b1)))
# Sorting the y and x, so we can plot the curve
f_sorted = np.sort(f(x1,results_log.params[0],results_log.params[1]))
x_sorted = np.sort(np.array(x1))
plt.scatter(x1,y,color='C0')
plt.xlabel('SAT', fontsize = 20)
plt.ylabel('Admitted', fontsize = 20)
# Plotting the curve
plt.plot(x_sorted,f_sorted,color='C8')
plt.show()
# -
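# The curve above is the logistic function. A quick pure-Python check of its defining property, probability 0.5 where b0 + b1*x = 0 (a standalone sketch equivalent to `f()` above, not part of statsmodels):

```python
import math

def logistic(t):
    """The logistic (sigmoid) curve: 1 / (1 + e^{-t}),
    algebraically equal to exp(t) / (1 + exp(t))."""
    return 1.0 / (1.0 + math.exp(-t))

midpoint = logistic(0.0)  # probability 0.5 at b0 + b1*x = 0
```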
# ## Regression
x = sm.add_constant(x1)
reg_log = sm.Logit(y,x)
results_log = reg_log.fit()
# function value: 0.137766 - the value of the objective function at the 10th iteration.
# The reason we need that information is that there is always the possibility
# that after a certain number of iterations the model won't learn, and therefore
# cannot optimize the objective function. In statsmodels,
# the maximum number of iterations is 35.
# ## Summary
# Get the regression summary
results_log.summary()
# **MLE** - Maximum likelihood estimation - a function which estimates how likely it is that the model at hand describes the real underlying relationship of the variables. In simple words, the bigger the likelihood function, the higher the probability that our model is correct.
# ***
# **Log-Likelihood** - the log of the likelihood function from the MLE. Because of this convenience, the log-likelihood is the more popular metric. Its value is almost always, though not necessarily, negative, and the bigger it is, the better.
# ***
# **LL-Null** - Log-Likelihood Null - the log-likelihood of a model which has no independent variables. The same `y` is the dependent variable of that model, but its sole independent variable is an array of `ones`: the constant we add with the `add_constant` method.
# ***
# **LLR** - Log-Likelihood Ratio - based on the log-likelihood of the model and the LL-Null. It measures whether our model is statistically different from the LL-Null, `aka a useless model`.
# ***
# **Pseudo R-squared** - unlike the linear case, there is no such thing as a clearly defined R-squared for logistic regression. There are several propositions with a similar meaning to the R-squared, but none of them is even close to the real deal. Some terms you may have heard are AIC, BIC, and McFadden's R-squared. The one here is McFadden's R-squared, according to McFadden himself. A good pseudo R-squared is somewhere between 0.2 and 0.4. Moreover, this measure is mostly useful for comparing variations of the same model; different models will have completely different and incomparable pseudo R-squareds.
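# McFadden's pseudo R-squared can be computed directly from the two log-likelihoods shown in the summary. A sketch with made-up illustrative log-likelihood values (not the values from this model):

```python
def mcfadden_r2(ll_model, ll_null):
    """McFadden's pseudo R-squared: 1 - LL_model / LL_null.
    Both log-likelihoods are negative; a model much better than the
    null pushes the ratio toward 0 and the pseudo R-squared up."""
    return 1.0 - ll_model / ll_null

# Hypothetical log-likelihoods, for illustration only
r2 = mcfadden_r2(ll_model=-20.0, ll_null=-100.0)
```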
# ## Looking into LL-null
# Create a variable only of 1s
const = np.ones(168)
const
reg_null = sm.Logit(y,const)
results_null = reg_null.fit()
results_null.summary()
# ### Plot a logistic regression curve
# +
# Creating a logit regression (we will discuss this in another notebook)
reg_log = sm.Logit(y,x)
# Fitting the regression
results_log = reg_log.fit()
# Creating a logit function, depending on the input and coefficients
def f(x,b0,b1):
return np.array(np.exp(b0+x*b1) / (1 + np.exp(b0+x*b1)))
# Sorting the y and x, so we can plot the curve
f_sorted = np.sort(f(x1,results_log.params[0],results_log.params[1]))
x_sorted = np.sort(np.array(x1))
plt.figure(figsize=(20,20))
ax = plt.scatter(x1,y,color='C0')
#plt.xlabel('SAT', fontsize = 20)
#plt.ylabel('Admitted', fontsize = 20)
# Plotting the curve
ax2 = plt.plot(x_sorted,f_sorted,color='red')
plt.show()
# -
np.exp(4.20)
| notebooks/Admittance - Logistic regression.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# <font size=-1>Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at [https://www.apache.org/licenses/LICENSE-2.0](https://www.apache.org/licenses/LICENSE-2.0)
#
# Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.</font>
# +
import os
import time
import tensorflow_data_validation as tfdv
# -
# !python -c "import tensorflow_data_validation; print('TFDV version: {}'.format(tensorflow_data_validation.__version__))"
# # Deploying and triggering the Data Drift Monitor Flex template
# This notebook steps through deploying and triggering the Data Drift Monitor Flex template.
#
# ## Deploying the template
#
#
# ### Configure environment settings
#
# Update the below constants with the settings reflecting your environment.
#
# - `TEMPLATE_LOCATION` - the GCS location for the template.
#
# !gsutil ls
# +
TEMPLATE_NAME = 'drift-analyzer'
TEMPLATE_LOCATION = 'gs://mlops-dev-workspace/flex-templates'
METADATA_FILE = 'drift_analyzer_template/metadata.json'
TEMPLATE_PATH = '{}/{}.json'.format(TEMPLATE_LOCATION, TEMPLATE_NAME)
PROJECT_ID = !(gcloud config get-value core/project)
PROJECT_ID = PROJECT_ID[0]
TEMPLATE_IMAGE='gcr.io/{}/{}:latest'.format(PROJECT_ID, TEMPLATE_NAME)
# -
# ### Build the template docker image
#
# !gcloud builds submit --tag {TEMPLATE_IMAGE} drift_analyzer
# ### Deploy the template
#
# !gcloud beta dataflow flex-template build {TEMPLATE_PATH} \
# --image {TEMPLATE_IMAGE} \
# --sdk-language "PYTHON" \
# --metadata-file {METADATA_FILE}
# ### Run template
# +
JOB_NAME = "data-drift-{}".format(time.strftime("%Y%m%d-%H%M%S"))
PARAMETERS = {
'request_response_log_table': 'mlops-dev-env.data_validation.covertype_classifier_logs_tf',
'instance_type': 'OBJECT',
'start_time': '2020-05-09T5:05:14',
'end_time': '2020-05-09T18:05:14',
'output_path': 'gs://mlops-dev-workspace/drift_monitor/output/tf',
'schema_file': 'gs://mlops-dev-workspace/drift_monitor/schema/schema.pbtxt',
'setup_file': './setup.py',
}
PARAMETERS = ','.join(['{}={}'.format(key,value) for key, value in PARAMETERS.items()])
# -
# !gcloud beta dataflow flex-template run {JOB_NAME} \
# --template-file-gcs-location {TEMPLATE_PATH} \
# --parameters {PARAMETERS}
| archive/04-deploy-run-flex-template.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # ***Introduction to Radar Using Python and MATLAB***
# ## <NAME> - Copyright (C) 2019 Artech House
# <br/>
#
# # Right Circular Cone Radar Cross Section
# ***
# Referring to Section 7.4.1.6, the geometry of the right circular cone is given in Figure 7.11. For axial incidence, the radar cross section is independent of polarization, and is written as (Equation 7.48)
#
# $$
# \sigma = \frac{\lambda^2}{\pi}\frac{\left(\dfrac{ka\sin(\pi/n)}{n}\right)^2}{\Big( \cos(\pi/n) - \cos(3\pi/n)\Big)^2} \hspace{0.5in} \text{(m}^2\text{)},
# $$
#
#
# Equation 7.48 is for first-order diffraction only, which is valid when $ka \gg 1$. Taking double diffraction into account, a more accurate expression for the axial radar cross section is (Equation 7.49)
#
# \begin{align}
# \sigma = \frac{\lambda^2}{\pi}\left(\dfrac{ka\sin(\pi/n)}{n}\right)^2\, \Bigg| &\dfrac{1}{\big( \cos(\pi/n) - \cos(3\pi/n)\big)^2} \nonumber \\[7pt]
# &+ \frac{\sin(\pi/n)\, \exp\Big(j(2ka - \pi/4)\Big)}{n\sqrt{\pi k a}\, \big( \cos(\pi/n) - \cos(3\pi/2n)\big)^2} \Bigg|^2 \hspace{0.15in} \text{(m}^2\text{)}.\nonumber
# \end{align}
#
# If the angle of the incident energy is normal to the generator of the cone, $\theta_i = 90^\text{o} - \alpha$, then the geometrical theory of diffraction equations are no longer valid. Instead, an expression based on the asymptotic expansion of the physical optics equation is used. This is written as
#
# \begin{equation}\label{eq:rcs_cone_normal}
# \sigma = \frac{8\lambda^2 \pi}{9 \sin^2\alpha\cos\alpha} \left(\frac{a}{\lambda}\right)^3 \hspace{0.5in} \text{(m}^2\text{)}.
# \end{equation}
#
# The other special case is when the cone is viewed from the base, $\theta_i = 180^o$. For this case, the physical optics expression for a circular disc is used; see Table 7.1, and repeated here as
#
# \begin{equation}\label{eq:rcs_cone_base}
# \sigma = \frac{\lambda^2(ka)^4}{4\pi} \hspace{0.5in} \text{(m}^2\text{)}.
# \end{equation}
#
# For all other incident angles, the radar cross section depends on the polarization of the incident energy, and is given by the following equations
#
# \begin{align}\label{eq:rcs_cone_arbitrary_1}
# \sigma = &\frac{\lambda^2ka}{4\pi^2\sin\theta_i}\left(\frac{\sin(\pi/n)}{n}\right)^2 \times \Bigg| \exp\Big[-j(2ka\sin\theta_i - \frac{\pi}{4})\Big] \Big[\Big(\cos\frac{\pi}{n} - 1\Big)^{-1} \nonumber \\[7pt] &\pm \Big(\cos\frac{\pi}{n} - \cos\frac{3\pi - 2\theta_i}{n} \Big)^{-1} \Big] + \exp\Big[j(2ka\sin\theta_i - \frac{\pi}{4})\Big] \Big[\Big(\cos\frac{\pi}{n} - 1\Big)^{-1} \nonumber \\[7pt]
# & \pm \Big(\cos\frac{\pi}{n} - \cos\frac{3\pi + 2\theta_i}{n} \Big)^{-1} \Big] \Bigg|^2, \, \, 0 < \theta_i < \alpha;
# \end{align}
#
# \begin{align}\label{eq:rcs_cone_arbitrary_2}
# \sigma = \frac{\lambda^2ka}{4\pi^2\sin\theta_i}\left(\frac{\sin(\pi/n)}{n}\right)^2 &\Bigg[\Big(\cos\frac{\pi}{n} - 1\Big)^{-1} \pm \nonumber \\[7pt]
# & \Big(\cos\frac{\pi}{n} - \cos\frac{3\pi - 2\theta_i}{n} \Big)^{-1} \Bigg]^2, \, \, \alpha < \theta_i < \pi/2;
# \end{align}
#
# \begin{align}\label{eq:rcs_cone_arbitrary_3}
# \sigma = &\frac{\lambda^2ka}{4\pi^2\sin\theta_i}\left(\frac{\sin(\pi/n)}{n}\right)^2 \times \Bigg| \exp\Big[-j(2ka\sin\theta_i - \frac{\pi}{4})\Big] \Big[\Big(\cos\frac{\pi}{n} - 1\Big)^{-1} \nonumber \\[7pt]
# & \pm \Big(\cos\frac{\pi}{n} - \cos\frac{3\pi - 2\theta_i}{n} \Big)^{-1} \Big] + \exp\Big[j(2ka\sin\theta_i - \frac{\pi}{4})\Big] \Big[\Big(\cos\frac{\pi}{n} - 1\Big)^{-1} \nonumber \\[7pt]
# & \pm \Big(\cos\frac{\pi}{n} - \cos\frac{2\theta_i-\pi}{n} \Big)^{-1} \Big] \Bigg|^2, \, \, \pi/2 < \theta_i < \pi.
# \end{align}
#
# The positive sign is used for horizontal polarization and the negative sign is used for vertical polarization.
# ***
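# As a quick numerical check of the circular-disc expression above, $\sigma = \lambda^2(ka)^4/(4\pi)$ with $k = 2\pi/\lambda$, here is a small standalone sketch (not using the book's `Libs` package):

```python
import math

def disc_rcs(wavelength, radius):
    """Physical-optics RCS of a circular disc at normal incidence:
    sigma = lambda^2 * (ka)^4 / (4 pi), with k = 2 pi / lambda."""
    ka = 2.0 * math.pi / wavelength * radius
    return wavelength ** 2 * ka ** 4 / (4.0 * math.pi)

# With lambda = a = 1, ka = 2*pi and sigma reduces to 4*pi^3
sigma = disc_rcs(wavelength=1.0, radius=1.0)
```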
# Begin by getting the library path
import lib_path
# Set the operating frequency (Hz), the cone half angle (radians), and the base radius (m)
# +
from numpy import radians
frequency = 1e9
cone_half_angle = radians(15.0)
base_radius = 1.4
# -
# Set up the incident angles (radians) using the `linspace` routine from `scipy`
# +
from numpy import linspace
from scipy.constants import pi
incident_angle = linspace(0, pi, 1801)
# -
# Calculate the radar cross section (m^2) for the right circular cone
# +
from Libs.rcs.right_circular_cone import radar_cross_section
from numpy import array
rcs = array([radar_cross_section(frequency, cone_half_angle, base_radius, ia) for ia in incident_angle])
# -
# Display the radar cross section (dBsm) for the right circular cone
# +
from matplotlib import pyplot as plt
from numpy import log10, degrees
# Set the figure size
plt.rcParams["figure.figsize"] = (15, 10)
# Display the results
plt.plot(degrees(incident_angle), 10 * log10(rcs[:, 0]), '', label='VV')
plt.plot(degrees(incident_angle), 10 * log10(rcs[:, 1]), '--', label='HH')
# Set the plot title and labels
plt.title('RCS vs Incident Angle', size=14)
plt.ylabel('RCS (dBsm)', size=12)
plt.xlabel('Incident Angle (deg)', size=12)
# Set the tick label size
plt.tick_params(labelsize=12)
# Turn on the grid
plt.grid(linestyle=':', linewidth=0.5)
# Set the legend
plt.legend(loc='upper right', prop={'size': 10})
# -
| jupyter/Chapter07/right_circular_cone.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ### Finder chart for 2-min TESS Sector 1 observations of CD Ind
# +
# 2020DEC21T1211
# -
# download TESS 2-min observation of CD Ind from STScI
import lightkurve as lk
search_results = lk.search_targetpixelfile('CD Ind', radius=60, mission='TESS', sector=1)
tpf = search_results[0].download(quality_bitmask=0)
#
import mkpy3
#
# show the target pixel file overlay on top of rotated "DSS2 Red" survey image
ax = mkpy3.mkpy3_tpf_overlay_v6(tpf=tpf, survey='DSS2 Red', # TPF overlay on rotated survey
rotationAngle_deg='tpf', width_height_arcmin=6, percentile=99.5, shrink=0.4,
show_plot=False, plot_file='', title='CD Ind : TESS : Sector 1', print_gaia_dr2=False)
ax.coords[0].set_major_formatter('d.dd')
ax.coords[1].set_major_formatter('d.dd')
ax.tick_params(axis='x', labelsize=16, length=5, width=2, labeltop=True, labelbottom=True)
ax.tick_params(axis='y', labelsize=16, length=5, width=2, labelright=True, labelleft=True)
ax.grid(True, color='palegreen', lw=2, zorder=1) # show the RA and DEC grid
mkpy3.mkpy3_plot_add_compass_rose_v5(ax=ax, north_arm_arcsec=50) # add a compass rose
#
# save, show, and close the plot
import matplotlib.pyplot as plt
plt.savefig('mkpy3_plot1.png', bbox_inches="tight"); plt.show(); plt.close()
# * BLUE SQUARES show the TESS target pixel file overlay.
# * RED SQUARES show the TESS target pixel file aperture overlay.
#
#
# * YELLOW CIRCLE marks the target (CD Ind).
#
#
# * CYAN CIRCLES show some of the GAIA DR2 catalog stars in the field.
# * GREEN X shows the only VSX catalog star in the field (the target).
#
#
# * GREEN LINES show the Right Ascension / Declination grid.
#
#
# * Compass rose:
# * LONG ARM of the compass rose points NORTH
# * SHORT ARM of the compass rose points EAST
# ### Show the first frame of the 2-min TESS Sector 1 observation of CD Ind
# Download TESS 2-min observation of CD Ind from STScI
import lightkurve as lk
search_results = lk.search_targetpixelfile('CD Ind', radius=60, mission='TESS', sector=1)
tpf = search_results[0].download(quality_bitmask=0)
#
# Show the first frame of the target pixel file with RA/DEC axis and grid
import matplotlib.pyplot as plt
import astropy.visualization as av
import mkpy3
fig = plt.figure(figsize=(7, 7))
ax = plt.subplot(projection=tpf.wcs)
interval = av.PercentileInterval(99.9)
stretch = av.SqrtStretch()
frame = 0
image_data = tpf.flux[frame]
norm = av.ImageNormalize(image_data, interval=interval, stretch=stretch)
ax.imshow(image_data, norm=norm, cmap='gray_r')
ax.set_xlabel('Right Ascension (J2000)', size=24)
ax.set_ylabel('Declination (J2000)', size=24)
ax.tick_params(axis='x', labelsize=16, length=5, width=2,
labeltop=True, labelbottom=True)
ax.tick_params(axis='y', labelsize=16, length=5, width=2,
labelright=True, labelleft=True)
ax.coords[0].set_major_formatter('d.dd')
ax.coords[1].set_major_formatter('d.dd')
ax.grid(True, color='palegreen', lw=2, zorder=1) # show the RA and DEC grid
mkpy3.mkpy3_plot_add_compass_rose_v5(ax=ax, # add a compass rose
north_arm_arcsec=50, wcs=tpf.wcs)
marker_kwargs = {'edgecolor': 'yellow', # mark target with a yellow circle
's': 600, 'facecolor': 'None', 'lw': 3, 'zorder': 10}
ax.scatter(tpf.ra, tpf.dec, transform=ax.get_transform('icrs'), **marker_kwargs)
fig.suptitle('CD Ind : TESS : Sector 1 : Frame 0', size=24)
#
# Save, show, and close the plot
plt.savefig('mkpy3_plot2.png', bbox_inches="tight"); plt.show(); plt.close()
# * YELLOW CIRCLE marks the target (CD Ind).
#
#
# * GREEN LINES show the Right Ascension / Declination grid.
#
#
# * Compass rose:
# * LONG ARM of the compass rose points NORTH
# * SHORT ARM of the compass rose points EAST
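# The `av.SqrtStretch()` used with `av.ImageNormalize` above rescales each pixel into [0, 1] and then takes its square root, which brightens faint pixels. A minimal per-pixel sketch of that idea (illustrative only, not astropy.visualization's exact implementation):

```python
import math

def sqrt_stretch(value, vmin, vmax):
    """Clip a pixel value to [vmin, vmax], rescale to [0, 1],
    then apply a square-root stretch to brighten faint pixels.
    Sketch of the idea behind av.SqrtStretch, not astropy's code."""
    x = (min(max(value, vmin), vmax) - vmin) / (vmax - vmin)
    return math.sqrt(x)

mid = sqrt_stretch(0.25, 0.0, 1.0)  # a quarter-scale pixel maps to 0.5
```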
# +
# ==========================================================================================
# EOF ======================================================================================
# ==========================================================================================
| mkpy3/docs/source/tutorials/nb_tess_finder_chart_cd_ind.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3.8.6 64-bit
# metadata:
# interpreter:
# hash: b035f594501fd99b0ba4fdbf39dd3ef4592c3539e4ec00f8ba05c88e0c5143ba
# name: python3
# ---
# # Exploring the corpus
# +
from collections import defaultdict
import os
import matplotlib.pyplot as plt
import numpy as np
# -
# ## Corpus description
path = "../data/txt/"
files = sorted(os.listdir(path))
len(files)
# Here we will manipulate character strings.
#
# This is the `str` class in Python.
#
# To learn more: https://openclassrooms.com/fr/courses/235344-apprenez-a-programmer-en-python/231888-creez-votre-premier-objet-les-chaines-de-caracteres
chaine = 'Bxl_1849_Tome_II1_Part_5.txt'
type(chaine)
# The split method
chaine_split = chaine.split('_')
chaine_split
# Access the year
year = chaine_split[1]
year
# Manipulate strings to convert a year into a decade
year[:3]
year[-1]
year[:3] + '0s'
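# The slicing trick above can be wrapped in a small helper, a minimal sketch:

```python
def decade_of(year_str):
    """Turn a 4-digit year string into its decade label,
    mirroring the slicing shown above: '1849' -> '1840s'."""
    return year_str[:3] + "0s"

label = decade_of("1849")
```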
all_years = [str(year) for year in range(1847, 1979)]
# + tags=[]
dic = defaultdict(int)
dic2 = defaultdict(int)
covered_years = set()
for f in files:
if "_" in f:
elems = f.split("_")
city = elems[0]
year = elems[1]
tome = elems[3]
covered_years.add(year)
decade = year[:3] + "0s"
dic[decade] += 1
dic2[city] += 1
dic2[tome] += 1
else:
print(f"Anomalous file: {f}")
# -
print(f"There are {dic2['Bxl']} bulletins from Brussels and {dic2['Lkn']} from Laeken")
nb_rap = dic2['RptAn']
print(f"{len(files)-nb_rap-1} are real bulletins and {nb_rap} are annual reports")
missing_years = [y for y in all_years if y not in covered_years]
print(f"Missing years: {', '.join(missing_years)}")
# ## Visualizing the number of bulletins per decade
#
# These visualizations are produced with the Matplotlib library.
#
# To learn more: https://openclassrooms.com/fr/courses/4452741-decouvrez-les-librairies-python-pour-la-data-science/4740942-maitrisez-les-possibilites-offertes-par-matplotlib.
# +
def plot_bar():
index = np.arange(len(dic))
plt.bar(index, dic.values())
plt.xlabel('Décennie')
plt.ylabel('# bulletins')
plt.xticks(index, dic.keys(), fontsize=8, rotation=30)
plt.title('Évolution du nombre de bulletins')
plt.show()
plot_bar()
# -
| module2/s2_explore.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Name - <NAME>
# # To explore supervised machine learning
# ## Task : Predict score if a student study for 9.25 hrs in a day
# Importing all required libraries.
import pandas as pd
import matplotlib.pyplot as plt
# %matplotlib inline
import numpy as np
import seaborn as sns
# Printing the given data set
student_df = pd.read_csv('https://raw.githubusercontent.com/AdiPersonalWorks/Random/master/student_scores%20-%20student_scores.csv')
student_df
# Checking for data types
student_df.dtypes
# Checking correlation before model creation
student_df.corr()
# Plotting our data points on a 2-D graph to see if we can manually find any relationship between the data
# Scatter plot
student_df.plot(x='Hours', y='Scores', style='o')
plt.title('Hours vs Scores')
plt.xlabel('Hours')
plt.ylabel('Scores')
plt.show()
#Plotting regplot graph
sns.regplot(x="Hours", y="Scores", data=student_df)
#Titles for the graph
plt.xlabel("Hours", size=20)
plt.ylabel("Scores", size=20)
plt.title("Regplot for Hours vs Scores", size=20)
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
# Splitting the data: 80% for the training set and 20% for the test set.
x = student_df[["Hours"]]
y = student_df[["Scores"]]
x_train,x_test,y_train,y_test = train_test_split(x,y,random_state=0,test_size=0.2)
lm = LinearRegression()
lm.fit(x_train, y_train)  # train only on the 80% split so the test set stays unseen
y_pred = lm.predict(x_test)
# ### Predicting the score if a student studies for 9.25 hours a day
# If a student studies for 9.25 hours a day, the model predicts a score of about 92.9%.
hours = [[9.25]]
self_pred = lm.predict(hours)
print("No of Hours = {}".format(hours))
print("Predicted Score = {}".format(self_pred[0][0]))
# Checking the accuracy of predicted score
from sklearn.metrics import r2_score
from sklearn import metrics
r_2 = r2_score(y_test,y_pred)
print("The R^2 value is:", r_2)
# Final evaluation of model.
print('Mean Absolute Error:', metrics.mean_absolute_error(y_test, y_pred))
print('Mean Squared Error:', metrics.mean_squared_error(y_test, y_pred))
print('Root Mean Squared Error:', np.sqrt(metrics.mean_squared_error(y_test, y_pred)))
| Task-2/Task-2.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="mMXNP3wZm7gf"
# # Files
#
# * **Text** files are a sequence of characters
# * __Binary__ files are a sequence of bytes.
#
# ### File Objects
# Every file opened has a __file object__ created that is used to interact with the file.
#
# #### Standard File Objects
# The `sys` module must be imported to access the following.
#
# * `sys.stdin` - standard input file object
# * `sys.stdout` - standard output file object
# * `sys.stderr` - standard error file object
#
# #### End of File (EOF)
# * The OS provides a mechanism to denote EOF.
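# In Python, EOF surfaces as an empty string: `read()` and `readline()` return `''` once the end of the file is reached. A minimal sketch, using a throwaway temp file:

```python
import os
import tempfile

# write a small file, then read it back until EOF
path = os.path.join(tempfile.gettempdir(), "eof_demo.txt")
with open(path, "w") as f:
    f.write("one\ntwo\n")

lines = []
with open(path) as f:
    while True:
        line = f.readline()
        if line == "":       # an empty string signals EOF
            break
        lines.append(line.rstrip())

print(lines)  # -> ['one', 'two']
os.remove(path)
```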
# + [markdown] id="6oJoIlSfstk_"
# ## Write to Text Files
# + id="VmAahXjviZuG"
names = open('names.txt','w')
names.write('Joe\n')
names.write('Sue\n')
names.write('Charlie\n')
names.close()
# + id="ViLeT0iqstIm"
with open('filename.txt',mode='w') as x:
x.write('100 Jones 25.98\n')
x.write('200 Smith 34.12\n')
# + id="F0wtlE6CdaXE"
# Overwrites file
with open('filename.txt',mode='w') as x:
x.write('300 Rogers 243.98\n')
x.write('400 Brady 12.34\n')
# + id="HF4Ef0Hqec6g"
# Overwrite again
with open('filename.txt',mode='w') as x:
x.write('100 Jones 25.98\n')
x.write('200 Smith 34.12\n')
# + [markdown] id="eWJ_IMEHhfRW"
# ### Append to file
# + id="Qz7Hba2Pejee"
# Append to it instead
with open('filename.txt',mode='a') as x:
x.write('300 Rogers 243.98\n')
x.write('400 Brady 12.34\n')
# + [markdown] id="7IvDHRAhe-bF"
# > `with` automatically calls the `close` method.
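# The `with` form used above behaves roughly like the following `try`/`finally`, which guarantees the file is closed even if a write raises. The path here is a throwaway temp file:

```python
import os
import tempfile

path = os.path.join(tempfile.gettempdir(), "with_demo.txt")

# roughly what `with open(...) as x:` expands to
x = open(path, mode="w")
try:
    x.write("300 Rogers 243.98\n")
finally:
    x.close()  # runs even if write() raised

print(x.closed)  # -> True
os.remove(path)
```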
# + [markdown] id="Ol7o1LA5tPCN"
# ## Read from Text Files
#
# Need to loop over each line.
# + colab={"base_uri": "https://localhost:8080/"} id="t_t4uyoumscs" executionInfo={"status": "ok", "timestamp": 1637162736911, "user_tz": 300, "elapsed": 175, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "09730783492511667700"}} outputId="444b11df-43c1-49be-fc41-dc6db125b1c5"
#
with open('filename.txt', mode = 'r') as accounts:
# Header row
    print(f'{"Account":<10}{"Name":<10}{"Balance":>10}')
# Loop over records (each line is a record)
for record in accounts:
# unpack
acct,name,bal = record.split()
print(f'{acct:<10}{name:<10}{bal:>10}')
# + [markdown] id="FYVKOu0xhbL1"
# ### Updating a file
# * open the existing file (`mode='r'`)
# * create a new file (`mode='w'`)
# * loop over records, writing each to the new file
# * when the line to update is reached, write the updated record to the new file instead
# * delete the old file and rename the new one
#
# For example, change Rogers to Williams in the accounts file (see below).
# + id="huI201hY5jOU"
import os # used to interact with Operating System
accounts = open('accounts.txt','r')
temp = open('temp.txt','w')
# Change Rogers to Williams
with accounts, temp:
for record in accounts:
acct,name,bal = record.split()
        if acct != '300':
            temp.write(record)
        else:
            newRecord = ' '.join([acct, 'Williams', bal])
temp.write(newRecord +'\n')
os.remove('accounts.txt')
os.rename('temp.txt','accounts.txt')
# + [markdown] id="CaatZPnJjLY9"
# # Reading and Writing CSV
# The `csv` module must be imported.
# + [markdown] id="pI2FDjgPPcGC"
# ## Write to CSV
# + id="1kDbNdnJPckc" executionInfo={"status": "ok", "timestamp": 1637236255223, "user_tz": 300, "elapsed": 15, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "09730783492511667700"}}
import csv
accounts = open('accounts.csv', 'w', newline='')  # newline='' avoids blank rows on Windows
writer = csv.writer(accounts)
writer.writerow([1,2,3])
writer.writerow([4,5,6])
accounts.close()
# + id="z-knPr4xkXLx" executionInfo={"status": "ok", "timestamp": 1637236290168, "user_tz": 300, "elapsed": 257, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "09730783492511667700"}}
import csv
with open('newaccounts.csv', mode = 'w', newline='') as newaccounts:
writer = csv.writer(newaccounts)
writer.writerow([1, 'Jones', 23.55])
writer.writerow([2, 'Smith', 10.99])
# + [markdown] id="j60F1xwIPc8w"
# ## Read from CSV
# + id="9jH29pDmPdnf" colab={"base_uri": "https://localhost:8080/"} executionInfo={"status": "ok", "timestamp": 1637236292795, "user_tz": 300, "elapsed": 213, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "09730783492511667700"}} outputId="c0af4858-fc15-48b3-c248-704cde9abfaf"
import csv
with open('newaccounts.csv', 'r', newline='') as newaccounts:
    print(f'{"Account":<10}{"Name":<10}{"Balance":>10}')
reader = csv.reader(newaccounts)
# Loop over records (each line is a record)
for record in reader:
acct, name, bal = record
print(f'{acct:<10}{name:<10}{bal:>10}')
# + [markdown] id="fF2u7XsKlSK_"
# ## Reading into Pandas
# + colab={"base_uri": "https://localhost:8080/", "height": 112} id="Oz-XqM8llVOc" executionInfo={"status": "ok", "timestamp": 1637164165862, "user_tz": 300, "elapsed": 185, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "09730783492511667700"}} outputId="9ec45d3c-6355-4759-f011-8bb677d2dbf3"
import pandas as pd
df = pd.read_csv('newaccounts.csv', names=['account','name','balance'])
df
# + colab={"base_uri": "https://localhost:8080/"} id="jv6sUvjIlmSh" executionInfo={"status": "ok", "timestamp": 1637164190390, "user_tz": 300, "elapsed": 171, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "09730783492511667700"}} outputId="2865db73-68a7-4aaf-a472-01eb0135001a"
df.name
# + colab={"base_uri": "https://localhost:8080/"} id="ycSGYz01lwkN" executionInfo={"status": "ok", "timestamp": 1637164202949, "user_tz": 300, "elapsed": 171, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "09730783492511667700"}} outputId="d3bdd91b-bb60-41aa-fbe9-cc7a0620cd9d"
df.balance
# + colab={"base_uri": "https://localhost:8080/"} id="UMngf_kElziU" executionInfo={"status": "ok", "timestamp": 1637164340075, "user_tz": 300, "elapsed": 170, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "09730783492511667700"}} outputId="cc0b8258-c085-4377-8fd4-ff12e83f8d5c"
df.iloc[1]
# + colab={"base_uri": "https://localhost:8080/", "height": 112} id="cix9gQoBl2qM" executionInfo={"status": "ok", "timestamp": 1637164513943, "user_tz": 300, "elapsed": 213, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "09730783492511667700"}} outputId="2c95d24f-eefe-4bcb-e21c-1039523bac6a"
df[['name','balance']]
| lectures/Chap9-File-I_O.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import cv2
import time
import numpy as np
from keras.models import load_model
def getPredictedClass(model, vimg):
    # normalize pixel values to [0, 1], resize to the model's input size,
    # and add a batch dimension
    image = cv2.resize(vimg / 255.0, (200, 200))
    image = image.reshape(1, 200, 200, 3)
prediction = model.predict_on_batch(image)
predicted_class = np.argmax(prediction)
if predicted_class == 26:
return "nothing"
elif predicted_class == 27:
return "delete"
elif predicted_class == 28:
return "space"
return str(chr(predicted_class + ord('A')))
if __name__ == "__main__":
camera = cv2.VideoCapture(0)
fps = int(camera.get(cv2.CAP_PROP_FPS))
top, right, bottom, left = 30, 50, 230, 250
model = load_model("del3.0/model-00003-0.16938-0.95240-0.46589-0.89400.h5")
k = 0
predictedClass = "Loading .."
while (True):
(grabbed, frame) = camera.read()
if grabbed:
frame = cv2.resize(frame, (700,700))
# flip the frame so that it is not the mirror view
frame = cv2.flip(frame, 1)
clone = frame.copy()
frame = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
(height, width) = frame.shape[:2]
roi = frame[top:bottom, right:left]
            if k % max(1, fps // 3) == 0:  # predict roughly three times per second
predictedClass = getPredictedClass(model, roi)
cv2.putText(clone, str(predictedClass), (490, 75), cv2.FONT_HERSHEY_SIMPLEX, 1, (0, 0, 255), 2)
cv2.rectangle(clone, (left, top), (right, bottom), (0, 255, 0), 2)
# display the frame
cv2.imshow("Video Feed", clone)
# observe the keypress by the user
keypress = cv2.waitKey(1) & 0xFF
k += 1
# if the user pressed "q", exit
if keypress == ord("q"):
break
camera.release()
cv2.destroyAllWindows()
# -
| live.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python [conda env:miniconda-ctsm]
# language: python
# name: conda-env-miniconda-ctsm-py
# ---
# # Evaluate CTSM simulation for runoff
# Maps of CTSM runoff compared to GRUN
#
# +
import xarray as xr
import numpy as np
import pandas as pd
import os
import utils
from iv_utils import *
import geopandas as gpd
# plot settings
utils.set_plot_param()
# constants
secperday = 86400
secperhour = 3600
dt = 86400 # model time step in seconds (daily here; mizuRoute can also run hourly)
daypermonth = 30 # average day in a month
# -
# ### Initialisation
# +
### Initialisation
# model directory
outdir = '/glade/scratch/ivanderk/'
# current working directory
scriptsdir = '/glade/u/home/ivanderk/pp_scripts_mizuroute/'
# Define directory where processing is done -- subject to change
procdir = '/glade/work/ivanderk/data/'
# mizuroute input dir (to save netcdf file to)
mizuroute_dir = '/glade/work/ivanderk/mizuRoute_global/route/'
# obs dir
obsdir = '/glade/work/ivanderk/data/'
# go to processing directory
# +
# set case name
case ='i.IHistClm50Sp.hcru_hcru.CTL'
# run settings -- change this to terms directly?
block = 'lnd' # lnd data
# atm data
# rof data
# define start and end year
nspinupyears = 5
spstartyear = '1966' # spin up start year
startyear = str(int(spstartyear)+nspinupyears) # start year, spin up excluded (5 years for now, best change to 10 when simulation is ready)
endyear = '2000' # last year of the simulation
# open network topology
ntopo = xr.open_dataset(mizuroute_dir+'ancillary_data/ntopo_hdma_mod.reorder_lake_H06.nc')
# -
# ## 0. Load res obs locations
# + tags=[]
df_meta = pd.read_csv('/glade/u/home/ivanderk/pp_scritps_mizuroute/lake_models_python/Hanasaki/observations/reservoirs_metadata.csv')
df_meta = df_meta.loc[df_meta['pfaf_hdma_10km'].notnull()]
names = df_meta['name'].values
names = np.delete(names,np.where(names=='Fort_randall'))
res_pfafs = df_meta.loc[df_meta['name'].isin(names),'pfaf_hdma_10km'].values
mask_stations = ntopo.PFAF.astype(int).isin(res_pfafs.astype(int))
df_stations = ntopo[['start_x','start_y','PFAF']].where(mask_stations, drop=True).to_dataframe()
gdf_stations = gpd.GeoDataFrame(df_stations, geometry=gpd.points_from_xy(df_stations.start_x, df_stations.start_y))
gdf_stations['PFAF'] = gdf_stations['PFAF'].astype(float)
gdf_stations = gdf_stations.merge(df_meta, left_on='PFAF', right_on='pfaf_hdma_10km', how='inner')
# -
gdf_stations= gdf_stations[gdf_stations.name != 'Bennet']
gdf_stations= gdf_stations[gdf_stations.name != 'Dickson']
# +
# load DAHITI obs
gdf_dahiti_ntopo_used = gpd.read_file(obsdir+'DAHITI/DAHITI_stations_used.shp')
# -
# ## 1. Load CTSM simulation
# +
# user settings
stream = 'h0' # h0 output block
# h1 output block
# h2 output block
exclude_spinup = True
variables = ['QRUNOFF']
# -
fn = 'i.IHistClm50Sp.hcru_hcru.CTL.clm2.h1.'+startyear+'-'+endyear+'.nc'
#fn = 'i.IHistClm50Sp.hcru_hcru.CTL.clm2.h1.ymonmean.nc'
ds= xr.open_dataset(mizuroute_dir+'input/'+fn)
# +
da = ds.QRUNOFF * secperday ## in mm/day
#values = np.roll(da.values,360, axis=2)
#da_roll = xr.DataArray(values, coords={'time':da.time.values,'lat': da.lat.values, 'lon': da.lon.values},
#dims=['time','lat', 'lon'])
#da_roll['lon'] = da_roll['lon']-180
# + [markdown] tags=[]
# ## 2. Load observations
# +
# load observations
ds_obs = xr.open_dataset(obsdir+'grun/GRUN_ensemble_clmgrid_1971-2000.nc')
obs=ds_obs['QRUNOFF']
obs = obs.sel(time=slice(startyear+"-01-01", endyear+"-12-31"))
obs_mean = obs.mean('time') #mm/day
# +
# plot global map of difference
def plot_delta_map(da_delta, plot_regions=False, vlims=False, calcsum=False, cmap='BrBG'):
# calculate annual sum instead of mean (precip)
if calcsum:
da_delta_ysum = da_delta.groupby('time.year').sum()
da_delta_mean = da_delta_ysum.mean('year')
da_delta_mean.attrs['units'] = 'mm/year'
# only one value
elif len(da_delta.dims) < 3:
da_delta_mean = da_delta
# annual means already taken
elif len(da_delta) < 50:
da_delta_mean = da_delta.mean('year')
else:
da_delta_mean = da_delta.mean('time')
fig = plt.figure(figsize=(30,12))
proj=ccrs.PlateCarree()
ax = plt.subplot(111, projection=proj, frameon=False)
# limiting values for plotting are given
if vlims==False:
da_delta_mean.plot(ax=ax, cmap=cmap, cbar_kwargs={'label': da_delta.name+' ('+da_delta.units+')', 'fraction': 0.02, 'pad': 0.04})
else:
im = da_delta_mean.plot(ax=ax, cmap=cmap, vmin=vlims[0], vmax=vlims[1], extend='both', add_colorbar=False, add_labels=False)
cb = plt.colorbar(im,fraction= 0.02, pad= 0.04, extend='both')
cb.set_label(label = da_delta.units, size=20)
cb.ax.tick_params(labelsize=20)
ax.set_title(da_delta.long_name, loc='right', fontsize=30)
ax.coastlines(color='dimgray', linewidth=0.5)
    # exclude Antarctica from the plot
ax.set_extent((-180,180,-63,90), crs=proj)
if plot_regions: regionmask.defined_regions.srex.plot(ax=ax,add_ocean=False, coastlines=False, add_label=False) #label='abbrev'
return fig, ax
# plot CONUS map of difference
def plot_delta_map_conus(da_delta, plot_regions=False, vlims=False, calcsum=False, cmap='BrBG'):
# calculate annual sum instead of mean (precip)
if calcsum:
da_delta_ysum = da_delta.groupby('time.year').sum()
da_delta_mean = da_delta_ysum.mean('year')
da_delta_mean.attrs['units'] = 'mm/year'
# only one value
elif len(da_delta.dims) < 3:
da_delta_mean = da_delta
# annual means already taken
elif len(da_delta) < 50:
da_delta_mean = da_delta.mean('year')
else:
da_delta_mean = da_delta.mean('time')
fig = plt.figure(figsize=(30,15))
proj=ccrs.PlateCarree()
ax = plt.subplot(111, projection=proj, frameon=False)
# limiting values for plotting are given
if vlims==False:
da_delta_mean.plot(ax=ax, cmap=cmap, cbar_kwargs={'label': da_delta.name+' ('+da_delta.units+')', 'fraction': 0.02, 'pad': 0.04})
else:
im = da_delta_mean.plot(ax=ax, cmap=cmap, vmin=vlims[0], vmax=vlims[1], extend='both', add_colorbar=False, add_labels=False)
cb = plt.colorbar(im, fraction= 0.04,pad= 0.04, extend='both', orientation='horizontal')
cb.set_label(label = da_delta.units, size=20)
cb.ax.tick_params(labelsize=20)
ax.set_title(da_delta.long_name, loc='right', fontsize=30)
ax.coastlines(color='dimgray', linewidth=0.5)
    # exclude Antarctica from the plot
ax.set_extent([-140,-80,20,80], crs=proj)
ax.text(0, 1.01, 'a.', color='dimgrey', fontsize=25, transform=ax.transAxes );
if plot_regions: regionmask.defined_regions.srex.plot(ax=ax,add_ocean=False, coastlines=False, add_label=False) #label='abbrev'
return fig, ax
# +
# calculate annual mean QRUNOFF
da_roll = da
mod_mean = da_roll.mean('time')
mod_mean.attrs['units'] = 'mm/day'
mod_mean.attrs['long_name'] = 'Runoff (CTSM)'
mod_mean.name = 'runoff'
#plot_delta_map(mod_mean, plot_regions=False, vlims=[0,10], cmap='Blues');
# +
delta_mean = mod_mean- obs_mean
delta_mean.attrs['units'] = 'mm/day'
delta_mean.attrs['long_name'] = 'Mean Runoff Bias (CTSM-GRUN)'
delta_mean.name = 'Runoff bias'
# +
fig, ax = plot_delta_map(delta_mean, plot_regions=False, vlims=[-3,3], cmap='RdBu');
gdf_stations.plot(ax=ax, markersize=30, color='black', edgecolor='black',label='reservoir observations');
#gdf_dahiti_ntopo_used.plot(ax=ax, markersize=50, color='darkolivegreen',marker = "^", edgecolor='darkolivegreen', label='DAHITI observations');
gl = ax.gridlines(draw_labels=True, color='lightgray', alpha=0.5)
gl.top_labels = False
gl.right_labels = False
#gl.xlines = False
#gl.ylines = False
gl.xlabel_style = {'size': 20}
gl.ylabel_style = {'size': 20}
fig.savefig(procdir+'runoffbias.png', bbox_inches="tight")
# +
fig, ax = plot_delta_map_conus(delta_mean, plot_regions=False, vlims=[-3,3], cmap='RdBu');
ax = gdf_stations.plot(ax=ax, markersize=50, color='black', edgecolor='black');
gl = ax.gridlines(draw_labels=True, color='lightgray')
gl.top_labels = False
gl.right_labels = False
#gl.xlines = False
#gl.ylines = False
gl.xlabel_style = {'size': 20}
gl.ylabel_style = {'size': 20}
fig.savefig('runoffbias_conus_map.png', bbox_inches="tight")
| analysis/pp_clm_evaluate_runoff_paperplot.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3.7 (tensorflow)
# language: python
# name: tensorflow
# ---
# <a href="https://colab.research.google.com/github/jeffheaton/t81_558_deep_learning/blob/master/assignments/assignment_yourname_class7.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + [markdown] colab_type="text" id="vSJl8ZoSucQK"
# # T81-558: Applications of Deep Neural Networks
# * Instructor: [<NAME>](https://sites.wustl.edu/jeffheaton/), School of Engineering and Applied Science, [Washington University in St. Louis](https://engineering.wustl.edu/Programs/Pages/default.aspx)
# * For more information visit the [class website](https://sites.wustl.edu/jeffheaton/t81-558/).
#
# **Module 7 Assignment: Computer Vision Neural Network**
#
# **Student Name: <NAME>**
# -
# # Google CoLab Instructions
#
# If you are using Google CoLab, it will be necessary to mount your GDrive so that you can send your notebook during the submit process. Running the following code will map your GDrive to ```/content/drive```.
# +
try:
from google.colab import drive
drive.mount('/content/drive', force_remount=True)
COLAB = True
print("Note: using Google CoLab")
# %tensorflow_version 2.x
except:
print("Note: not using Google CoLab")
COLAB = False
# Nicely formatted time string
def hms_string(sec_elapsed):
h = int(sec_elapsed / (60 * 60))
m = int((sec_elapsed % (60 * 60)) / 60)
s = sec_elapsed % 60
return f"{h}:{m:>02}:{s:>05.2f}"
# + [markdown] colab_type="text" id="-eCUTf6n3BCb"
# # Assignment Submit Function
#
# You will submit the 10 programming assignments electronically. The following submit function can be used to do this. My server will perform a basic check of each assignment and let you know if it sees any basic problems.
#
# **It is unlikely that you should need to modify this function.**
# + colab={} colab_type="code" id="BHb2ceEO3Qil"
import base64
import os
import numpy as np
import pandas as pd
import requests
import PIL
import PIL.Image
import io
# This function submits an assignment. You can submit an assignment as many times as you like; only the final
# submission counts. The parameters are as follows:
# data - List of pandas dataframes or images.
# key - Your student key that was emailed to you.
# no - The assignment class number, should be 1 through 10.
# source_file - The full path to your Python or IPYNB file. This must have "_class1" as part of its name.
#               The number must match your assignment number. For example "_class2" for class assignment #2.
def submit(data,key,no,source_file=None):
if source_file is None and '__file__' not in globals(): raise Exception('Must specify a filename when a Jupyter notebook.')
if source_file is None: source_file = __file__
suffix = '_class{}'.format(no)
if suffix not in source_file: raise Exception('{} must be part of the filename.'.format(suffix))
with open(source_file, "rb") as image_file:
encoded_python = base64.b64encode(image_file.read()).decode('ascii')
ext = os.path.splitext(source_file)[-1].lower()
if ext not in ['.ipynb','.py']: raise Exception("Source file is {} must be .py or .ipynb".format(ext))
payload = []
for item in data:
if type(item) is PIL.Image.Image:
            buffered = io.BytesIO()  # io is imported above
item.save(buffered, format="PNG")
payload.append({'PNG':base64.b64encode(buffered.getvalue()).decode('ascii')})
elif type(item) is pd.core.frame.DataFrame:
payload.append({'CSV':base64.b64encode(item.to_csv(index=False).encode('ascii')).decode("ascii")})
r= requests.post("https://api.heatonresearch.com/assignment-submit",
headers={'x-api-key':key}, json={ 'payload': payload,'assignment': no, 'ext':ext, 'py':encoded_python})
if r.status_code==200:
print("Success: {}".format(r.text))
else: print("Failure: {}".format(r.text))
# + [markdown] colab_type="text" id="_vNkxmQDucQN"
# # Assignment Instructions
#
# For this assignment, you will use YOLO running on Google CoLab. I suggest that you run this assignment on CoLab because the example code below is already setup to get you started with the correct versions of YOLO on TensorFlow 2.0.
#
# For this assignment you are provided with 10 image files that contain 10 different webcam pictures taken at the [Venice Sidewalk Cafe](https://www.westland.net/beachcam/), a webcam that has been in operation since 1996. You can find the 10 images here:
#
# * https://data.heatonresearch.com/data/t81-558/sidewalk/sidewalk1.png
# * https://data.heatonresearch.com/data/t81-558/sidewalk/sidewalk2.png
# * https://data.heatonresearch.com/data/t81-558/sidewalk/sidewalk3.png
# * https://data.heatonresearch.com/data/t81-558/sidewalk/sidewalk4.png
# * https://data.heatonresearch.com/data/t81-558/sidewalk/sidewalk5.png
# * https://data.heatonresearch.com/data/t81-558/sidewalk/sidewalk6.png
# * https://data.heatonresearch.com/data/t81-558/sidewalk/sidewalk7.png
# * https://data.heatonresearch.com/data/t81-558/sidewalk/sidewalk8.png
# * https://data.heatonresearch.com/data/t81-558/sidewalk/sidewalk9.png
# * https://data.heatonresearch.com/data/t81-558/sidewalk/sidewalk10.png
#
# You can see a sample of the WebCam here:
#
# 
#
# YOLO does quite well recognizing objects in this webcam image, as the following picture illustrates.
#
# 
#
# You are to write a script that counts the number of certain objects in each of the images. Specifically, you are looking for:
#
# * person
# * car
# * bicycle
# * motorbike
# * umbrella
# * handbag
#
# It is essential that you use YOLO with a threshold of 10% if you want your results to match mine. The sample code below already contains this setting. Your program can set this threshold with the following command.
#
# * FLAGS.yolo_score_threshold = 0.1
#
# Your submitted data frame should also contain a column that identifies which image generated each row. This column should be named **image** and contain integer numbers between 1 and 10. There should be 10 rows in total. The complete data frame should look something like this (not necessarily exactly these numbers).
#
# |image|person|car|bicycle|motorbike|umbrella|handbag|
# |-|-|-|-|-|-|-|
# |1|23|0|3|4|0|0|
# |2|27|1|8|2|0|0|
# |3|29|0|0|0|3|0|
# |...|...|...|...|...|...|...|
#
#
# The following code sets up YOLO and then dumps the classification information for the first image. This notebook only serves to get you started. Read in all ten images and generate a data frame like the one shown above. Use the **submit** function as you did in previous assignments.
# -
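# One way to turn per-image detections into the required counts is a `Counter` restricted to the six target classes. The sketch below uses a fabricated detection list for illustration; in the real solution the class names come from `class_names[int(classes[0][i])]` as in the starter code, and `count_detections` is a hypothetical helper name:

```python
from collections import Counter

# the six classes the assignment asks for
TARGETS = ["person", "car", "bicycle", "motorbike", "umbrella", "handbag"]

def count_detections(detected_classes, image_no):
    """Tally one image's detected class names into a row dict keyed by the targets."""
    tally = Counter(c for c in detected_classes if c in TARGETS)
    row = {"image": image_no}
    row.update({t: tally.get(t, 0) for t in TARGETS})
    return row

# fabricated detections for a single image: 'dog' is not a target and is ignored
row = count_detections(["person", "person", "car", "dog"], image_no=1)
print(row)
```

Collecting one such row per image and calling `pd.DataFrame(rows)` yields a frame in the required shape.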
# ### Installing YoloV3-TF2
#
# The following code is taken from the module, it installs YoLoV3-TF2 if not already installed.
# +
import sys
# !{sys.executable} -m pip install git+https://github.com/zzh8829/yolov3-tf2.git@master
# -
# The following code is taken from the module, it downloads needed files for YoLoV3-TF2.
# +
import tensorflow as tf
import os
if COLAB:
ROOT = '/content/drive/My Drive/projects/t81_558_dlearning/yolo'
else:
ROOT = os.path.join(os.getcwd(),'data')
filename_darknet_weights = tf.keras.utils.get_file(
os.path.join(ROOT,'yolov3.weights'),
origin='https://pjreddie.com/media/files/yolov3.weights')
TINY = False
filename_convert_script = tf.keras.utils.get_file(
os.path.join(os.getcwd(),'convert.py'),
origin='https://raw.githubusercontent.com/zzh8829/yolov3-tf2/master/convert.py')
filename_classes = tf.keras.utils.get_file(
os.path.join(ROOT,'coco.names'),
origin='https://raw.githubusercontent.com/zzh8829/yolov3-tf2/master/data/coco.names')
filename_converted_weights = os.path.join(ROOT,'yolov3.tf')
# -
# ### Transfering Weights
#
# The following code is taken from the module, it transfers preloaded weights into YOLO.
import sys
# !{sys.executable} "{filename_convert_script}" --weights "{filename_darknet_weights}" --output "{filename_converted_weights}"
# Now that we have all of the files needed for YOLO, we are ready to use it to recognize components of an image.
import os
os.remove(filename_convert_script)
# # Starter Code
# +
import time
from absl import app, flags, logging
from absl.flags import FLAGS
import cv2
import numpy as np
import tensorflow as tf
from yolov3_tf2.models import (YoloV3, YoloV3Tiny)
from yolov3_tf2.dataset import transform_images, load_tfrecord_dataset
from yolov3_tf2.utils import draw_outputs
import sys
from PIL import Image, ImageFile
import requests
# Flags are used to define several options for YOLO.
flags.DEFINE_string('classes', filename_classes, 'path to classes file')
flags.DEFINE_string('weights', filename_converted_weights, 'path to weights file')
flags.DEFINE_boolean('tiny', False, 'yolov3 or yolov3-tiny')
flags.DEFINE_integer('size', 416, 'resize images to')
flags.DEFINE_string('tfrecord', None, 'tfrecord instead of image')
flags.DEFINE_integer('num_classes', 80, 'number of classes in the model')
FLAGS([sys.argv[0]])
# Locate devices to run YOLO on (e.g. GPU)
physical_devices = tf.config.experimental.list_physical_devices('GPU')
if len(physical_devices) > 0:
tf.config.experimental.set_memory_growth(physical_devices[0], True)
# +
# This assignment does not use the "Tiny version"
if FLAGS.tiny:
yolo = YoloV3Tiny(classes=FLAGS.num_classes)
else:
yolo = YoloV3(classes=FLAGS.num_classes)
# Load weights and classes
yolo.load_weights(FLAGS.weights).expect_partial()
print('weights loaded')
class_names = [c.strip() for c in open(FLAGS.classes).readlines()]
print('classes loaded')
# -
# Modify the code below to create your solution.
# +
import pandas as pd
i = 1
url = f"https://data.heatonresearch.com/data/t81-558/sidewalk/sidewalk{i}.png"
response = requests.get(url)
img_raw = tf.image.decode_image(response.content, channels=3)
# Preprocess image
img = tf.expand_dims(img_raw, 0)
img = transform_images(img, FLAGS.size)
# Desired threshold (any sub-image below this confidence level will be ignored.)
FLAGS.yolo_score_threshold = 0.1
# Recognize and report results
boxes, scores, classes, nums = yolo(img)
submit_df = pd.DataFrame()
print('detections:')
for i in range(nums[0]):
cls = class_names[int(classes[0][i])]
score = np.array(scores[0][i])
box = np.array(boxes[0][i])
print(f"\t{cls}, {score}, {box}")
# This is your student key that I emailed to you at the beginnning of the semester.
key = "<KEY>" # This is an example key and will not work.
# You must also identify your source file. (modify for your local setup)
# file='/content/drive/My Drive/Colab Notebooks/assignment_yourname_class7.ipynb' # Google CoLab
# file='C:\\Users\\jeffh\\projects\\t81_558_deep_learning\\assignments\\assignment_yourname_class7.ipynb' # Windows
file='/Users/jheaton/projects/t81_558_deep_learning/assignments/assignment_yourname_class7.ipynb' # Mac/Linux
submit(source_file=file,data=[submit_df],key=key,no=7)
| assignments/assignment_yourname_class7.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # `Matplotlib` Homework
# ## 1. Draw the following animation
#
# 
# +
import numpy as np
import matplotlib.animation as animation
import matplotlib.pyplot as plt
fig, ax = plt.subplots(constrained_layout=True)
x = np.arange(0, 2*np.pi, 0.01)
y = np.sin(x)
line, = ax.plot(x, y)
dot, = ax.plot(x[0],y[0],marker='o',color='r')
def init(): # only required for blitting to give a clean slate.
dot.set_data(np.nan,np.nan)
return dot,
def animate(i):
    # your code here
return dot,
ani = animation.FuncAnimation(fig, animate)  # your code here: add init_func, frames, interval, ...
ani.save("my_fancy_animation.gif", dpi=80)
# -
# ## 2. Draw the following figure
# 
# +
import matplotlib.pyplot as plt
import matplotlib as mpl
# Replace with the path to a font file on your computer
myfont = mpl.font_manager.FontProperties(fname='/System/Library/Fonts/STHeiti Light.ttc')
def format_axes(fig):
for i, ax in enumerate(fig.axes):
ax.text(0.5, 0.5, "ax%d" % (i+1), va="center", ha="center")
ax.tick_params(labelbottom=False, labelleft=False)
fig = plt.figure()
gs = fig.add_gridspec(2, 3)
# your code here
format_axes(fig)
plt.savefig('layout.png',dpi=100,bbox_inches='tight')
# -
| matplotlib/Matplotlib_homework.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/smf-9000/nlp-in-general/blob/main/Word_embeddings.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + [markdown] id="US1O6x2b6Stb"
#
#
# Links:
#
# [<NAME> and <NAME>. (2019, May 14)] https://mccormickml.com/2019/05/14/BERT-word-embeddings-tutorial/
#
#
# + [markdown] id="qFoeorWj7EMX"
# ### Info:
# * “The man was accused of robbing a bank.” “The man went fishing by the bank of the river.” Word2Vec would produce the same word embedding for the word “bank” in both sentences, while under BERT the word embedding for “bank” would be different for each sentence.
#
#
#
# + id="tNa-u6JV6IJJ"
# !pip install transformers
# + id="fSHKjFn-9Kq3"
import torch
from transformers import BertTokenizer, BertModel
# + id="8Anqm2mG9Lg_"
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
# + colab={"base_uri": "https://localhost:8080/"} id="T6jEnslt_Rrk" outputId="cef7f324-cc6d-4f8e-8afd-e9282317263c"
# example = 'A tokenizer is in charge of preparing the inputs for a model.'
# example = 'What meaning does the word "embeddings" have?'
example = "After stealing money from the bank vault, the bank robber was seen fishing on the Mississippi river bank."
indexed_tokens = tokenizer.encode_plus(example, add_special_tokens=True)['input_ids']
tokenized_text = [tokenizer.decode(w).replace(' ', '') for w in indexed_tokens]
# print(tokenized_text)
for tup in zip(tokenized_text, indexed_tokens):
    print('{:<12} {:8,}'.format(tup[0], tup[1]))
segments_ids = [1] * len(tokenized_text)
print (segments_ids)
# + id="xoA_OUCrbqPN"
tokens_tensor = torch.tensor([indexed_tokens])
segments_tensors = torch.tensor([segments_ids])
# print(tokens_tensor)
# print(segments_tensors)
model = BertModel.from_pretrained('bert-base-uncased', output_hidden_states = True)
model.eval()
# + colab={"base_uri": "https://localhost:8080/"} id="E6BwpbBQd0so" outputId="5548228e-32e4-465c-bb54-deb8612dcb80"
with torch.no_grad():
    outputs = model(tokens_tensor, segments_tensors)
    hidden_states = outputs[2]
token_embeddings = torch.stack(hidden_states, dim=0)
token_embeddings = torch.squeeze(token_embeddings, dim=1)
token_embeddings = token_embeddings.permute(1,0,2)
print(token_embeddings.size())
# + colab={"base_uri": "https://localhost:8080/"} id="DdTyXw-4jayQ" outputId="6a7e5d84-1dcd-4cdf-a288-ec5e7dd5348f"
# Word Vectors
# ------------
# Ex1:
token_vecs_cat = []
for token in token_embeddings:
    # `token` is a [13 x 768] tensor
    # Concatenate the vectors (that is, append them together) from the last four layers.
    cat_vec = torch.cat((token[-1], token[-2], token[-3], token[-4]), dim=0)
    token_vecs_cat.append(cat_vec)
print ('Shape is: %d x %d' % (len(token_vecs_cat), len(token_vecs_cat[0])))
# Ex2:
token_vecs_sum = []
for token in token_embeddings:
    # Sum the vectors from the last four layers.
    sum_vec = torch.sum(token[-4:], dim=0)
    token_vecs_sum.append(sum_vec)
print ('Shape is: %d x %d' % (len(token_vecs_sum), len(token_vecs_sum[0])))
# Ex3:
token_vecs_last = []
for token in token_embeddings:
    # Just the vector from the last layer for each token.
    last_vec = token[-1]
    token_vecs_last.append(last_vec)
print ('Shape is: %d x %d' % (len(token_vecs_last), len(token_vecs_last[0])))
# + colab={"base_uri": "https://localhost:8080/"} id="jNp2M8KMoC6l" outputId="1a608434-9592-4f69-e532-6c86b7d81b66"
from scipy.spatial.distance import cosine
diff_bank = 1 - cosine(token_vecs_last[10], token_vecs_last[19])  # "bank robber" vs "river bank"; only one refers to the financial bank
same_bank = 1 - cosine(token_vecs_last[10], token_vecs_last[6])   # "bank robber" vs "bank vault"; both refer to the financial bank
print('same bank', same_bank)
print('diff_bank', diff_bank)
| Word_embeddings.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# <EMAIL>
# #Spreadsheet Working Link
# https://docs.google.com/spreadsheets/d/18Sdz5vfzRpb4loYYf74OsnFk7yEVdpsoxa1R6YQUbdc/edit?usp=sharing
# #Google form link
# https://docs.google.com/forms/d/e/1FAIpQLSee9f4fimgkkbAg1RmBrlOmKtB3iQz9l5iDeQP1gPKY6nLgoQ/viewform?vc=0&c=0&w=1
# !ls
# +
import gspread
from oauth2client.service_account import ServiceAccountCredentials
import pandas as pd
scope = ['https://spreadsheets.google.com/feeds',
         'https://www.googleapis.com/auth/drive']
credentials = ServiceAccountCredentials.from_json_keyfile_name(
    'My Google Spreadsheets-865821355e58.json', scope)  # Your json file here
gc = gspread.authorize(credentials)
wks = gc.open("candidate_evaluation").sheet1
data = wks.get_all_values()
headers = data.pop(0)
df = pd.DataFrame(data, columns=headers)
print(df.head())
# -
df.info()
df.drop(columns=['Score'],inplace=True)
df.describe()
df.columns
#renaming a column of a dataframe
df.rename(columns={'Education ':"Education"},inplace=True)
#libraries known by the first candidate
mllib=df['Knowledge of ML libraries. Select all options which are applicable.'].tolist()
mllib[0]
#details of columns 5 to 9 for the candidate at row index 1
df.iloc[1,5:9]
df
#retrieving a range of columns for the candidate at index 1
df.loc[1,'<NAME> Candidate':'Knowledge of DeepLearning Frameworks? Select all applicable.']
#grouping the records on Education Criteria
df_education=df.groupby('Education')
type(df_education)
#First unique value of all groups formed
df_education.nth(0)
#get all values for group College
df_education.get_group('College')
df_education.get_group('Master')
#get candidate performance rating column for first value of all groups formed
df_education['Candidate Performance Rating'].nth(0)
#all possible values for column Education
df['Education'].values
df['Education'].count()
df['Education'].value_counts()
#list the unique categories for our feature column Education
df['Education'].unique()
df['Education'].value_counts()
# +
#creating a new column in our records using an existing one
def func(x):
    Y = type(x)
    return Y
df['new_column'] = df['Education'].map(func)
# -
df
# +
from datetime import datetime
year = lambda x: datetime.strptime(x, "%m/%d/%Y %H:%M:%S").year
df['year'] = df['Timestamp'].map(year)
day = lambda x: datetime.strptime(x, "%m/%d/%Y %H:%M:%S").day
df['DAY'] = df['Timestamp'].map(day)
month = lambda x: datetime.strptime(x, "%m/%d/%Y %H:%M:%S").month
df['MONTH'] = df['Timestamp'].map(month)
df.head()
# -
df.max()
df['Python knowledge on a scale of 1-10'].max()
df['<NAME>']
#get the record of candidate <NAME>
df[df['Name of Candidate'] == '<NAME>']
#get all the records except <NAME>
df[df['Name of Candidate'] != '<NAME>']
#checking the datatype of our columns
df.dtypes
#changing the datatype of some columns in our dataframe
df=df.astype({'Candidate Performance Rating': 'str','Python knowledge on a scale of 1-10': 'int32','SQL knowledge on a scale 1-10': 'int32','Knowledge of Neural Networks on a scale of 1-10':'int32','Total Experience of Candidate':'int32'},copy=True)
df.dtypes
import matplotlib.pyplot as plt
#df = pd.DataFrame({'lab':['A', 'B', 'C'], 'val':[10, 30, 20]})
ax = df.plot.bar(x='<NAME> Candidate', y='SQL knowledge on a scale 1-10', rot=0)
ax
#plotting the distribution of our variable Total experience of candidate
df.hist(column='Total Experience of Candidate', bins=50)
#grouping and aggregating the groups based on some conditions
from collections import OrderedDict
candidate_df = df.groupby(['Education'],as_index=False).agg(OrderedDict(
    [('Knowledge of Neural Networks on a scale of 1-10','nunique'),
     ('Python knowledge on a scale of 1-10','mean'),('Total Experience of Candidate','mean')]))
candidate_df
# Thank You
| google_spreadsheet.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import time
def show_mdd(xs):  # xs is the cumulative return / portfolio value series
    i = np.argmax(np.maximum.accumulate(xs) - xs)  # end of the drawdown period
    j = np.argmax(xs[:i])  # start of the drawdown period
    plt.figure(figsize=(16,10))
    plt.plot(xs)
    plt.plot([i, j], [xs[i], xs[j]], 'o', color='Red', markersize=10)
    plt.show()
    return xs[-1]
FILENAME = "./info/ppo_1976539.5790968398_LS_0_0.info"
info = np.load(FILENAME, allow_pickle=True).all()
portfolio = [data[3] for data in info['history']]
final_value = show_mdd(portfolio)
print("{0} -> {1} ".format(100*10000, final_value))
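# The peak/trough index logic in `show_mdd` can be sanity-checked without plotting. A minimal sketch with hypothetical portfolio values (not taken from the `.info` file above):

```python
import numpy as np

def max_drawdown(xs):
    # Same peak/trough logic as show_mdd, minus the plotting.
    xs = np.asarray(xs, dtype=float)
    i = np.argmax(np.maximum.accumulate(xs) - xs)  # trough: end of the drawdown period
    j = np.argmax(xs[:i]) if i > 0 else 0          # peak before the trough
    return (xs[j] - xs[i]) / xs[j]

# Hypothetical series: peak 120, trough 90 -> 25% maximum drawdown.
print(max_drawdown([100, 120, 110, 90, 105]))  # 0.25
```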
| tf_deep_rl_trader/visualize_info.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import os
os.chdir('data')
# !ls
os.chdir('lc-quad')
# !ls
import pandas as pd
gabri = pd.read_json('./whole_with_types.jsonl', orient='records', lines=True)
cbe = pd.read_json('./whole_cbe.jsonl', orient='records', lines=True)
gabri.shape
cbe.shape
gabri.columns
cbe.columns
cbe['predicted_question_type'] = gabri['predicted_question_type']
cbe.shape
cbe.iloc[0]
cbe['predicted_question_type'].value_counts()
cbe['correct_question_type'].value_counts()
cbe['predicted_question_type'] = cbe['predicted_question_type'].apply(lambda x: 'select-type' if x == 'select' else x)
cbe['predicted_question_type'].value_counts()
cbe['correct_question_type'].isna().sum()
cbe.to_json('./final.jsonl', orient='records', lines=True)
cbe.iloc[0]
cbe.iloc[0]['corrected_question']
cbe.iloc[0]['intermediary_question']
ex = cbe.iloc[0]['graph']
import json
ex
exj = json.loads(ex)
exj
exset = set()
exset |= set(exj.keys())
exset
# mygen(exj)  # not defined in this notebook
cbe.iloc[0]['sparql_query']
cbe.iloc[0]['predicted_entities']
whole = cbe
def isvar(item):
    return item[0] == '?'
def getEntsPreds(graph):
    if graph is None:
        return None, None
    entities = []
    preds = []
    if isinstance(graph, str):
        graph = json.loads(graph)
    for sub, v in graph.items():
        for obj, v2 in v.items():
            pred = v2['label']
            if not isvar(obj):
                entities.append(obj)
            if not isvar(pred):
                preds.append(pred)
        if not isvar(sub):
            entities.append(sub)
    return entities, preds
res = whole['graph'].apply(getEntsPreds)
whole['correct_entities'] = [e for e,_ in res]
whole['correct_predicates'] = [p for _,p in res]
whole.columns
res_pred = whole['predicted_graph'].apply(getEntsPreds)
res_pred
whole['predicted_entities_from_graph'] = [e for e,_ in res_pred]
whole['predicted_predicates_from_graph'] = [p for _,p in res_pred]
whole['predicted_entities_from_graph'] == whole['predicted_entities']
whole.iloc[0]['predicted_entities']
whole.iloc[0]['predicted_entities_from_graph']
os.getcwd()
whole.to_json('./whole_20200417plus.jsonl', orient='records', lines=True)
whole
| knowledgeGraph/notebooks/entities_from_graphs.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import matplotlib.pyplot as plt
import pandas as pd
import seaborn as sns
# +
features_url = "http://172.16.58.3:8888/IFCB104/D20210629T161913_IFCB104_fea_v2.csv"
autoclass_url = "http://172.16.58.3:8888/IFCB104/D20210629T161913_IFCB104_class_scores.csv"
df_features = pd.read_csv(features_url)
df_autoclass = pd.read_csv(autoclass_url) # These are percentage of counts
df_features.shape
# -
df_features.columns
totals = df_autoclass.apply(lambda x: x == df_autoclass.max(axis=1)).sum()
totals = totals.reset_index(name='counts')
totals = totals.rename(columns={'index':'class'})
totals = totals.sort_values(['counts']).reset_index(drop=True)
totals = totals.drop(totals[totals['class'].isin(["pid", "Skeletonema", "Thalassionema", "Thalassiosira", "unclassified"])].index)
totals.head()
ax = sns.barplot(x="class", y="counts", data=totals, palette="Blues_d",edgecolor=".2");
plt.xticks(rotation=90, ha='center');
plt.title("Total Detections: {}".format(df_features.shape[0]))
| notebooks/SCW-1-auto-class-display.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python [conda root]
# language: python
# name: conda-root-py
# ---
# # Lesson 02 - Policy and Ethics of A/B Test
#
# ### Tuskegee and Milgram Experiments
# - In the Tuskegee syphilis experiment, participants were not treated and not informed of a possible treatment
# - In the Milgram experiment, participants may have suffered psychological damage
# - These articles have more information about the [Tuskegee syphilis](https://en.wikipedia.org/wiki/Tuskegee_syphilis_experiment) and [Milgram](https://en.wikipedia.org/wiki/Milgram_experiment) experiments
#
# ### Facebook experiment
# - [This article](http://www.wsj.com/articles/furor-erupts-over-facebook-experiment-on-users-1404085840) has more information on the Facebook experiment.
# - Experiment about how emotions can be spread on social media
#
# ### Four Principles of IRBs (Institutional Review Boards)
# Ok, so we just went through some experiments that led to IRBs being created. What are the four main principles that IRBs look for?
#
# ### First Principle: Risk
# First, in the study, what risk is the participant undertaking? The main threshold is whether the risk exceeds that of “minimal risk”. Minimal risk is defined as the probability and magnitude of harm that a participant would encounter in normal daily life. The harm considered encompasses physical, psychological and emotional, social, and economic concerns. If the risk exceeds minimal risk, then informed consent is required. We’ll discuss informed consent further below.
#
# In most, but not all, online experiments, it can certainly be debated as to whether any of the experiments lead to anything beyond minimal risk. What risk is a participant going to be exposed to if we change the ranking of courses on an educational site, or if we change the UI on an online game?
#
# Exceptions would certainly be any websites or applications that are health or financial related. In the Facebook experiment, for example, it can be debated as to whether participants were really being exposed to anything beyond minimal risk: all items shown were going to be in their feed anyway, it’s only a question of whether removing some of the posts led to increased risk.
#
# ### Assessing Risk
#
# #### Less than minimal risk
# - Search engine results show price of products that you are looking to buy
# - Changing the location of search bar
#
# #### More than minimal risk
# - health app lets user know possible consequences of their diets
#
# ## Second Principle: Benefits
# Next, what benefits might result from the study? Even if the risk is minimal, how might the results help? In most online A/B testing, the benefits are around improving the product. In other social sciences, it is about understanding the human condition in ways that might help, for example in education and development. In medicine, the risks are often higher but the benefits are often around improved health outcomes.
#
# It is important to be able to state what the benefit would be from completing the study.
#
# ## Third Principle: Alternatives
# Third, what other choices do participants have? For example, if you are testing out changes to a search engine, participants always have the choice to use another search engine. The main issue is that the fewer alternatives that participants have, the more issue that there is around coercion and whether participants really have a choice in whether to participate or not, and how that balances against the risks and benefits.
#
# For example, in medical clinical trials testing out new drugs for cancer, given that the other main choice that most participants face is death, the risk allowable for participants, given informed consent, is quite high.
#
# In online experiments, the issues to consider are what the other alternative services that a user might have, and what the switching costs might be, in terms of time, money, information, etc.
#
# ## Fourth Principle: Privacy / Data Sensitivity
# Finally, what data is being collected, and what is the expectation of privacy and confidentiality? This last question is quite nuanced, encompassing numerous questions:
#
# Do participants understand what data is being collected about them?
# What harm would befall them should that data be made public?
# Would they expect that data to be considered private and confidential?
# For example, if participants are being observed in a public setting (e.g., a football stadium), there is really no expectation of privacy. If the study is on existing public data, then there is also no expectation of further confidentiality.
#
# If, however, new data is being gathered, then the questions come down to:
#
# What data is being gathered? How sensitive is it? Does it include financial and health data?
# Can the data being gathered be tied to the individual, i.e., is it considered personally identifiable?
# How is the data being handled, with what security? What level of confidentiality can participants expect?
# What harm would befall the individual should the data become public, where the harm would encompass health, psychological / emotional, social, and financial concerns?
# For example, often times, collected data from observed “public” behavior, surveys, and interviews, if the data were not personally identifiable, would be considered exempt from IRB review (reference: NSF FAQ below).
#
# To summarize, there are really three main issues with data collection with regards to experiments:
#
# For new data being collected and stored, how sensitive is the data and what are the internal safeguards for handling that data? E.g., what access controls are there, how are breaches to that security caught and managed, etc.?
# Then, for that data, how will it be used and how will participants’ data be protected? How are participants guaranteed that their data, which was collected for use in the study, will not be used for some other purpose? This becomes more important as the sensitivity of the data increases.
# Finally, what data may be published more broadly, and does that introduce any additional risk to the participants?
#
# ### Difference between pseudonymous and anonymous data
# One question that frequently gets asked is what the difference is between identified, pseudonymous, and anonymous data.
#
# Identified data means that data is stored and collected with personally identifiable information. This can be names, IDs such as a social security number or driver’s license ID, phone numbers, etc. HIPAA is a common standard, and that standard has [18 identifiers (see the Safe Harbor method)](http://www.hhs.gov/ocr/privacy/hipaa/understanding/coveredentities/De-identification/guidance.html#standard) that it considers personally identifiable. Device id, such as a smartphone’s device id, are considered personally identifiable in many instances.
#
# Anonymous data means that data is stored and collected without any personally identifiable information. This data can be considered pseudonymous if it is stored with a randomly generated id such as a cookie that gets assigned on some event, such as the first time that a user goes to an app or website and does not have such an id stored.
#
# In most cases, anonymous data still has time-stamps -- which is one of the HIPAA 18 identifiers. Why? Well, we need to distinguish between anonymous data and anonymized data. Anonymized data is identified or anonymous data that has been looked at and guaranteed in some way that the re-identification risk is low to non-existent, i.e., that given the data, it would be hard to impossible for someone to be able to figure out which individual this data refers to. Often times, this guarantee is done statistically, and looks at how many individuals would fall into every possible bucket (i.e., combination of values).
#
# What this means is that anonymous data may still have high re-identification risk (see [AOL example](http://en.wikipedia.org/wiki/AOL_search_data_leak)).
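# The bucket-counting check described above is essentially a k-anonymity test: group records by every combination of quasi-identifier values and flag the buckets with fewer than k individuals, since those records are easiest to re-identify. A minimal sketch (the records and column names below are illustrative, not real data):

```python
from collections import Counter

def k_anonymity_violations(records, quasi_identifiers, k=2):
    # Count how many records share each combination of quasi-identifier values,
    # then return the combinations shared by fewer than k records.
    buckets = Counter(tuple(r[q] for q in quasi_identifiers) for r in records)
    return {combo: n for combo, n in buckets.items() if n < k}

# Illustrative records: zipcode + birth year act as quasi-identifiers.
records = [
    {"zipcode": "78701", "birth_year": 1980, "diagnosis": "A"},
    {"zipcode": "78701", "birth_year": 1980, "diagnosis": "B"},
    {"zipcode": "78704", "birth_year": 1975, "diagnosis": "C"},  # unique bucket -> re-identifiable
]
print(k_anonymity_violations(records, ["zipcode", "birth_year"], k=2))
```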
#
# So, if we go back to the data being gathered, collected, stored, and used in the experiment, the questions are:
#
# How sensitive is the data?
# What is the re-identification risk of individuals from the data?
# As the sensitivity and the risk increases, then the level of data protection must increase: confidentiality, access control, security, monitoring & auditing, etc.
#
# Additional reading
# <NAME>'s "[A Taxonomy of Privacy](https://www.law.upenn.edu/journals/lawreview/articles/volume154/issue3/Solove154U.Pa.L.Rev.477%282006%29.pdf)" classifies some of things people mean by privacy in order to better understand privacy violations.
#
#
# ### Data Sensitivity
#
# #### Less than minimal risk
# - Census data by zipcode - aggregation at the zipcode level makes re-identification difficult
# - Daily traffic to specific sites
# - online game stats
# - shopping stats by zipcode - aggregation at the zipcode level makes re-identification difficult
#
# #### More than minimal risk
# - Glucose levels with timestamps. Subject to regulations
# - credit card information
# ## Questions and Consent
# - Are users being informed?
# - What user identifiers are tied to the data?
# - What type of data is being collected?
# - What is the level of confidentiality and security?
#
# ## Summary of Principles
# It is a grey area as to whether many of these Internet studies should be subject to IRB review or not and whether informed consent is required. Neither has been common to date.
#
# Most studies, due to the nature of the online service, are likely minimal risk, and the bigger question is about data collection with regards to identifiability, privacy, and confidentiality / security. That said, arguably, a neutral third party outside of the company should be making these calls rather than someone with a vested interest in the outcome. One growing risk in online studies is that of bias and the potential for discrimination, such as differential pricing and whether that is discriminatory to a particular population for example. Discussing those types of biases is beyond the scope of this course.
#
# Our recommendation is that there should be internal reviews of all proposed studies by experts regarding the questions:
#
# - Are participants facing more than minimal risk?
# - Do participants understand what data is being gathered?
# - Is that data identifiable?
# - How is the data handled?
#
# And if enough flags are raised, that an external review happen.
#
# ## Which tests need more review?
#
# ### Review
# - survey about net worth of individual on a financial website.
# - survey at bottom of article
# - survey after first page and not allow user to advance until they complete the survey
# - test different ways of presenting heart rate data
# - needs review as it may prompt users to exercise less/more
#
# ### No review
# - different meal layouts for meal delivery
#
#
# ## Provided information
#
# ### Needed
# - terms of service (TOS) or privacy policy
# - list of experiments that you are planning on running on your users
# - may not be required if minimal risk and if you take measures to protect the users
#
# ### Not needed
# - history of how company has been funded
# - navigation bar
# - search bar
#
# ## Internal Training
# What information is ethically needed by the person who runs an A/B test?
#
# ### Needed
# - Which questions to consider when evaluating ethics?
# - data policy detailing acceptable data uses
# - principles to uphold
#
# ### Not needed
# - History of A/B Testing
# - History of IRBs
| udacity_data_science_notes/abTesting/lesson_02/lesson_02.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import pandas as pd
import numpy as np
from matplotlib import pyplot as plt
# %matplotlib inline
import matplotlib
matplotlib.rcParams["figure.figsize"] = (20,10)
df1 = pd.read_csv("bengaluru_house_prices.csv")
df1.head()
df1.shape
df1.columns
df1['area_type'].unique()
df1['area_type'].value_counts()
# **Drop features that are not required to build our model**
df2 = df1.drop(['area_type','society','balcony','availability'],axis='columns')
df2.shape
#Data Cleaning: Handle NA values
df2.isnull().sum()
df2.shape
df3 = df2.dropna()
df3.isnull().sum()
df3.shape
# **Add new feature(integer) for bhk (Bedrooms Hall Kitchen)**
#Feature Engineering
df3['bhk'] = df3['size'].apply(lambda x: int(x.split(' ')[0]))
df3.bhk.unique()
# **Explore total_sqft feature**
def is_float(x):
    try:
        float(x)
    except:
        return False
    return True
df3[~df3['total_sqft'].apply(is_float)].head(10)
# **Above shows that total_sqft can be a range (e.g. 2100-2850). For such case we can just take average of min and max value in the range. There are other cases such as 34.46Sq. Meter which one can convert to square ft using unit conversion. I am going to just drop such corner cases to keep things simple**
def convert_sqft_to_num(x):
    tokens = x.split('-')
    if len(tokens) == 2:
        return (float(tokens[0])+float(tokens[1]))/2
    try:
        return float(x)
    except:
        return None
df4 = df3.copy()
df4.total_sqft = df4.total_sqft.apply(convert_sqft_to_num)
df4 = df4[df4.total_sqft.notnull()]
df4.head(2)
# **For below row, it shows total_sqft as 2475 which is an average of the range 2100-2850**
df4.loc[30]
(2100+2850)/2
# **Add new feature called price per square feet**
df5 = df4.copy()
df5['price_per_sqft'] = df5['price']*100000/df5['total_sqft']
df5.head()
df5_stats = df5['price_per_sqft'].describe()
df5_stats
df5.to_csv("bhp.csv",index=False)
# **Examine locations which is a categorical variable. We need to apply dimensionality reduction technique here to reduce number of locations**
df5.location = df5.location.apply(lambda x: x.strip())
location_stats = df5['location'].value_counts(ascending=False)
location_stats
location_stats.values.sum()
len(location_stats[location_stats>10])
len(location_stats)
# +
len(location_stats[location_stats<=10])
#Dimensionality Reduction
# -
# **Any location having less than 10 data points should be tagged as "other" location. This way number of categories can be reduced by huge amount. Later on when we do one hot encoding, it will help us with having fewer dummy columns**
location_stats_less_than_10 = location_stats[location_stats<=10]
location_stats_less_than_10
len(df5.location.unique())
df5.location = df5.location.apply(lambda x: 'other' if x in location_stats_less_than_10 else x)
len(df5.location.unique())
df5.head(10)
# **Outlier Removal Using Business Logic**
# **As a data scientist when you have a conversation with your business manager (who has expertise in real estate), he will tell you that normally square ft per bedroom is 300 (i.e. 2 bhk apartment is minimum 600 sqft. If you have for example 400 sqft apartment with 2 bhk than that seems suspicious and can be removed as an outlier. We will remove such outliers by keeping our minimum thresold per bhk to be 300 sqft**
df5[df5.total_sqft/df5.bhk<300].head()
# **Check above data points. We have 6 bhk apartment with 1020 sqft. Another one is 8 bhk and total sqft is 600. These are clear data errors that can be removed safely**
df5.shape
df6 = df5[~(df5.total_sqft/df5.bhk<300)]
df6.shape
# <h2 style='color:green'>Outlier Removal Using Standard Deviation and Mean</h2>
df6.price_per_sqft.describe()
# **Here we find that min price per sqft is 267 rs/sqft whereas max is 12000000, this shows a wide variation in property prices. We should remove outliers per location using mean and one standard deviation**
def remove_pps_outliers(df):
    df_out = pd.DataFrame()
    for key, subdf in df.groupby('location'):
        m = np.mean(subdf.price_per_sqft)
        st = np.std(subdf.price_per_sqft)
        reduced_df = subdf[(subdf.price_per_sqft>(m-st)) & (subdf.price_per_sqft<=(m+st))]
        df_out = pd.concat([df_out,reduced_df],ignore_index=True)
    return df_out
df7 = remove_pps_outliers(df6)
df7.shape
# **Let's check if for a given location how does the 2 BHK and 3 BHK property prices look like**
# +
def plot_scatter_chart(df,location):
    bhk2 = df[(df.location==location) & (df.bhk==2)]
    bhk3 = df[(df.location==location) & (df.bhk==3)]
    matplotlib.rcParams['figure.figsize'] = (15,10)
    plt.scatter(bhk2.total_sqft,bhk2.price,color='blue',label='2 BHK', s=50)
    plt.scatter(bhk3.total_sqft,bhk3.price,marker='+', color='green',label='3 BHK', s=50)
    plt.xlabel("Total Square Feet Area")
    plt.ylabel("Price (Lakh Indian Rupees)")
    plt.title(location)
    plt.legend()
plot_scatter_chart(df7,"Rajaji Nagar")
# -
plot_scatter_chart(df7,"Hebbal")
# **We should also remove properties where for same location, the price of (for example) 3 bedroom apartment is less than 2 bedroom apartment (with same square ft area). What we will do is for a given location, we will build a dictionary of stats per bhk, i.e.**
# ```
# {
# '1' : {
# 'mean': 4000,
# 'std: 2000,
# 'count': 34
# },
# '2' : {
# 'mean': 4300,
# 'std: 2300,
# 'count': 22
# },
# }
# ```
# **Now we can remove those 2 BHK apartments whose price_per_sqft is less than mean price_per_sqft of 1 BHK apartment**
def remove_bhk_outliers(df):
    exclude_indices = np.array([])
    for location, location_df in df.groupby('location'):
        bhk_stats = {}
        for bhk, bhk_df in location_df.groupby('bhk'):
            bhk_stats[bhk] = {
                'mean': np.mean(bhk_df.price_per_sqft),
                'std': np.std(bhk_df.price_per_sqft),
                'count': bhk_df.shape[0]
            }
        for bhk, bhk_df in location_df.groupby('bhk'):
            stats = bhk_stats.get(bhk-1)
            if stats and stats['count']>5:
                exclude_indices = np.append(exclude_indices, bhk_df[bhk_df.price_per_sqft<(stats['mean'])].index.values)
    return df.drop(exclude_indices,axis='index')
df8 = remove_bhk_outliers(df7)
# df8 = df7.copy()
df8.shape
# **Plot same scatter chart again to visualize price_per_sqft for 2 BHK and 3 BHK properties**
plot_scatter_chart(df8,"<NAME>")
plot_scatter_chart(df8,"Hebbal")
import matplotlib
matplotlib.rcParams["figure.figsize"] = (20,10)
plt.hist(df8.price_per_sqft,rwidth=0.8)
plt.xlabel("Price Per Square Feet")
plt.ylabel("Count")
# <h2 style='color:green'>Outlier Removal Using Bathrooms Feature</h2>
df8.bath.unique()
plt.hist(df8.bath,rwidth=0.8)
plt.xlabel("Number of bathrooms")
plt.ylabel("Count")
df8[df8.bath>10]
# **It is unusual to have 2 more bathrooms than number of bedrooms in a home**
df8[df8.bath>df8.bhk+2]
# **Again the business manager has a conversation with you (i.e. a data scientist) that if you have 4 bedroom home and even if you have bathroom in all 4 rooms plus one guest bathroom, you will have total bath = total bed + 1 max. Anything above that is an outlier or a data error and can be removed**
df9 = df8[df8.bath<df8.bhk+2]
df9.shape
df9.head(2)
df10 = df9.drop(['size','price_per_sqft'],axis='columns')
df10.head(3)
# <h2 style='color:green'>Use One Hot Encoding For Location</h2>
#pandas dummies method
dummies = pd.get_dummies(df10.location)
dummies.head(10)
df11 = pd.concat([df10,dummies.drop('other',axis='columns')],axis='columns')
df11.head()
df12 = df11.drop('location',axis='columns')
df12.head(2)
# <h2 style='color:green'>Build a Model Now...</h2>
df12.shape
X = df12.drop(['price'],axis='columns')
X.head(3)
X.shape
Y = df12.price
Y.head()
len(Y)
from sklearn.model_selection import train_test_split
X_train, X_test, Y_train, Y_test = train_test_split(X,Y,test_size=0.2,random_state=10)
from sklearn.linear_model import LinearRegression
lr_clf = LinearRegression()
lr_clf.fit(X_train,Y_train)
lr_clf.score(X_test,Y_test)
# <h2 style='color:green'>Use K Fold cross validation to measure accuracy of our LinearRegression model</h2>
from sklearn.model_selection import ShuffleSplit
from sklearn.model_selection import cross_val_score
cv = ShuffleSplit(n_splits=5, test_size=0.2, random_state=0)
cross_val_score(LinearRegression(), X, Y, cv=cv)
# **We can see that in 5 iterations we get a score above 80% all the time. This is pretty good but we want to test few other algorithms for regression to see if we can get even better score. We will use GridSearchCV for this purpose**
# <h2 style='color:green'>Find best model using GridSearchCV</h2>
# +
from sklearn.model_selection import GridSearchCV
from sklearn.linear_model import Lasso
from sklearn.tree import DecisionTreeRegressor
# -
def find_best_model_using_gridsearchcv(X,y):
    algos = {
        'linear_regression' : {
            'model': LinearRegression(),
            'params': {
                'normalize': [True, False]
            }
        },
        'lasso': {
            'model': Lasso(),
            'params': {
                'alpha': [1,2],
                'selection': ['random', 'cyclic']
            }
        },
        'decision_tree': {
            'model': DecisionTreeRegressor(),
            'params': {
                'criterion' : ['mse','friedman_mse'],
                'splitter': ['best','random']
            }
        }
    }
    scores = []
    cv = ShuffleSplit(n_splits=5, test_size=0.2, random_state=0)
    for algo_name, config in algos.items():
        gs = GridSearchCV(config['model'], config['params'], cv=cv, return_train_score=False)
        gs.fit(X,y)
        scores.append({
            'model': algo_name,
            'best_score': gs.best_score_,
            'best_params': gs.best_params_
        })
    return pd.DataFrame(scores,columns=['model','best_score','best_params'])
# +
find_best_model_using_gridsearchcv(X,Y)
# -
# **Based on the above results we can say that LinearRegression gives the best score, hence we will use it**
X.columns
# <h2 style='color:green'>Test the model for few properties</h2>
def predict_price(location,sqft,bath,bhk):
    # np.where returns an empty array when the location column is absent,
    # so guard against an IndexError instead of assuming a match
    matches = np.where(X.columns==location)[0]
    loc_index = matches[0] if len(matches) > 0 else -1
    x = np.zeros(len(X.columns))
    x[0] = sqft
    x[1] = bath
    x[2] = bhk
    if loc_index >= 0:
        x[loc_index] = 1
    return lr_clf.predict([x])[0]
predict_price('1st Phase JP Nagar',1000, 2, 2)
predict_price('1st Phase JP Nagar',1000, 3, 3)
predict_price('Indira Nagar',1000, 2, 2)
predict_price('Indira Nagar',1000, 3, 3)
# <h2 style='color:green'>Export the tested model to a pickle file</h2>
import pickle
with open('banglore_home_prices_model.pickle','wb') as f:
pickle.dump(lr_clf,f)
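# A quick way to confirm the export worked is to load the pickle back and compare predictions. A minimal self-contained sketch (it fits a tiny stand-in regressor rather than reusing `lr_clf`; file name is illustrative):

```python
import pickle
import numpy as np
from sklearn.linear_model import LinearRegression

# Stand-in model; the same dump/load round-trip applies to lr_clf above.
model = LinearRegression().fit(np.array([[1.0], [2.0], [3.0]]), np.array([2.0, 4.0, 6.0]))

with open('model_roundtrip.pickle', 'wb') as f:
    pickle.dump(model, f)

with open('model_roundtrip.pickle', 'rb') as f:
    restored = pickle.load(f)

# The restored estimator reproduces the original predictions.
same = bool(np.allclose(model.predict([[4.0]]), restored.predict([[4.0]])))
```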
# <h2 style='color:green'>Export location and column information to a file that will be useful later on in our prediction application</h2>
import json
columns = {
'data_columns' : [col.upper() for col in X.columns]
}
with open("columns.json","w") as f:
f.write(json.dumps(columns))
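# The prediction application can read this file back to recover the feature order. A hypothetical round-trip (the column names below are illustrative, not the real dataset's):

```python
import json

# Hypothetical columns.json content: the first three slots are sqft/bath/bhk,
# the remaining slots are one-hot location flags.
columns = {'data_columns': ['TOTAL_SQFT', 'BATH', 'BHK', '1ST PHASE JP NAGAR']}
with open('columns_demo.json', 'w') as f:
    json.dump(columns, f)

with open('columns_demo.json') as f:
    data_columns = json.load(f)['data_columns']

loc_index = data_columns.index('1ST PHASE JP NAGAR')  # position of the location flag
```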
| House-Price-Prediction/code/Another_file/banglore_home_prices_final.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# ---
# +
import query_helpers
from pathlib import PurePath
from raster_compare.base import RasterFile
import os
import numpy as np
import pandas as pd
# -
# # 3 m
# +
SNOW_DEPTH_DIR = PurePath(f"{os.environ['HOME']}/scratch/ERW-Paper/3m")
aso_snow_depth = RasterFile(
SNOW_DEPTH_DIR / '20180524_ASO_snow_depth_3m.tif',
band_number=1
)
aso_snow_depth_values = aso_snow_depth.band_values()
np.ma.masked_where(
aso_snow_depth_values <= 0.0,
aso_snow_depth_values,
copy=False
)
sfm_snow_depth = RasterFile(
SNOW_DEPTH_DIR / '20180524_Agisoft_snow_depth_3m.tif',
band_number=1
)
assert aso_snow_depth.geo_transform == sfm_snow_depth.geo_transform
sfm_snow_depth_values = sfm_snow_depth.band_values()
np.ma.masked_where(
aso_snow_depth_values.mask,
sfm_snow_depth_values,
copy=False
)
casi_class = RasterFile(
SNOW_DEPTH_DIR / '20180524_ASO_CASI_ERW_basin_3m.tif',
band_number=1
)
assert aso_snow_depth.geo_transform == casi_class.geo_transform
casi_class_values = casi_class.band_values()
np.ma.masked_where(
aso_snow_depth_values.mask,
casi_class_values,
copy=False
);
# -
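# The repeated `np.ma.masked_where(..., copy=False)` calls above mutate the band arrays' masks in place rather than returning fresh copies. A minimal sketch of the pattern with toy depths (not real raster data):

```python
import numpy as np

# Four toy snow-depth cells; an explicit mask array lets copy=False edit it in place.
depths = np.ma.masked_array([0.0, -1.0, 2.5, 3.0], mask=[False, False, False, False])
masked = np.ma.masked_where(depths <= 0.0, depths, copy=False)

valid = int(masked.count())  # only the two positive depths remain unmasked
```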
# ## Prepare
df = pd.DataFrame({
'aso_snow_depth': aso_snow_depth_values.ravel(),
'sfm_snow_depth': sfm_snow_depth_values.ravel(),
'casi_class': casi_class_values.ravel(),
})
df.dropna(inplace=True, how='all', subset=['aso_snow_depth', 'sfm_snow_depth'])
# ### Classification
# +
CASI_MAPPING = [0., 1., 2., 3., np.inf]
CASI_CLASSES = ['Snow', 'Vegetation', 'Rock', 'Water']
df['casi_class'] = pd.cut(
df['casi_class'], CASI_MAPPING, labels=CASI_CLASSES
)
df.loc[df['casi_class'] == 'Water', 'casi_class'] = 'Vegetation'
# -
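# The mapping above relies on `pd.cut` bucketing the raw CASI band values into labelled intervals. A toy illustration of that step:

```python
import numpy as np
import pandas as pd

bins = [0., 1., 2., 3., np.inf]
labels = ['Snow', 'Vegetation', 'Rock', 'Water']

# Values fall into half-open intervals (0, 1], (1, 2], (2, 3], (3, inf].
classes = pd.cut(pd.Series([0.5, 1.5, 2.5, 10.0]), bins, labels=labels)
result = [str(c) for c in classes]
```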
# ### Depth Differences, Volume, and SWE
df = query_helpers.diff_vol_swe(df, aso_snow_depth, sfm_snow_depth)
# +
positive_sfm = query_helpers.get_positive(df, 'sfm_snow_depth')
negative_sfm = query_helpers.get_negative(df, 'sfm_snow_depth')
no_values_sfm = query_helpers.get_no_data(df, 'sfm_snow_depth')
columns = ['casi_class', 'sfm_snow_depth', 'aso_snow_depth', 'sd_difference']
pd.set_option('display.float_format', lambda x: '%.2f m' % x)
# -
# # Snow Depth (3 m)
df[columns].agg([np.mean, np.median, np.std])
df[columns].groupby('casi_class').agg([np.mean, np.median, np.std])
# ### Positive SfM values
positive_sfm[columns].agg([np.mean, np.median, np.std])
positive_sfm[columns].groupby('casi_class').agg([np.mean, np.median, np.std])
# ### Negative SfM values
negative_sfm[columns].agg([np.mean, np.median, np.std])
negative_sfm[columns].groupby('casi_class').agg([np.mean, np.median, np.std])
pd.set_option('display.float_format', None)
# # Snow Volume (3 m)
# +
m2_style = "{:,.0f} m<sup>3</sup>"
table_style = {
'sfm_snow_volume': m2_style,
'aso_snow_volume': m2_style,
'difference': m2_style,
'sfm % to aso': "{:.2%}",
'percent_sfm_scene': "{:.2%}",
'percent_aso_scene': "{:.2%}",
}
columns = ['casi_class', 'aso_snow_volume', 'sfm_snow_volume']
# -
total_volume = pd.DataFrame({
'sfm_snow_volume': df.sfm_snow_volume.sum(),
'aso_snow_volume': df.aso_snow_volume.sum(),
},
index=[0]
)
total_volume['difference'] = total_volume.aso_snow_volume - total_volume.sfm_snow_volume
total_volume['sfm % to aso'] = total_volume.sfm_snow_volume / total_volume.aso_snow_volume
total_volume.style.format(table_style)
captured_volume = pd.DataFrame({
'sfm_snow_volume': positive_sfm.sfm_snow_volume.sum(),
'aso_snow_volume': positive_sfm.aso_snow_volume.sum(),
},
index=[0]
)
captured_volume['difference'] = captured_volume.aso_snow_volume - captured_volume.sfm_snow_volume
captured_volume['sfm % to aso'] = captured_volume.sfm_snow_volume / captured_volume.aso_snow_volume
captured_volume.style.format(table_style)
missed_volume = pd.DataFrame({
'sfm_snow_volume': negative_sfm.sfm_snow_volume.sum(),
'aso_snow_volume': negative_sfm.aso_snow_volume.sum(),
},
index=[0]
)
missed_volume['difference'] = missed_volume.aso_snow_volume - missed_volume.sfm_snow_volume
missed_volume['sfm % to aso'] = missed_volume.sfm_snow_volume / missed_volume.aso_snow_volume
missed_volume.style.format(table_style)
# captured plus missed ASO volume should reproduce the total
assert np.isclose(
    (captured_volume.aso_snow_volume + missed_volume.aso_snow_volume).item(),
    total_volume.aso_snow_volume.item()
)
num_pixels_aso = df.aso_snow_depth.count()
num_pixels_sfm = positive_sfm.aso_snow_depth.count()
print("Pixels with depth:")
print(f" ASO count: {num_pixels_aso:,}")
print(f" SfM count: {num_pixels_sfm:,}")
print("Percent ASO pixels:")
print(f" with values in SfM: {num_pixels_sfm/num_pixels_aso:.2%}")
print(f" with negative in SfM: {negative_sfm.query('sfm_snow_depth == sfm_snow_depth').aso_snow_depth.count()/num_pixels_aso:.2%}")
print(f" with no value in SfM: {no_values_sfm.aso_snow_depth.count()/num_pixels_aso:.2%}")
# +
pixel_stats_sfm = positive_sfm[columns].groupby('casi_class').count()
pixel_stats_aso = df[columns].groupby('casi_class').count()
grouped_volume = df[columns].groupby('casi_class').sum()
grouped_volume['difference'] = grouped_volume.aso_snow_volume - grouped_volume.sfm_snow_volume
grouped_volume['sfm % to aso'] = grouped_volume.sfm_snow_volume / grouped_volume.aso_snow_volume
grouped_volume['percent_sfm_scene'] = (pixel_stats_sfm.sfm_snow_volume / num_pixels_sfm)
grouped_volume['percent_aso_scene'] = (pixel_stats_aso.aso_snow_volume / num_pixels_aso)
grouped_volume.style.format(table_style)
# -
# ### Overlapping area
grouped_volume = positive_sfm[columns].groupby('casi_class').sum()
grouped_volume['difference'] = grouped_volume.aso_snow_volume - grouped_volume.sfm_snow_volume
grouped_volume['sfm % to aso'] = grouped_volume.sfm_snow_volume / grouped_volume.aso_snow_volume
grouped_volume.style.format(table_style)
# # SWE (3 m)
# +
m2_style = "{:,.0f} m"
table_style = {
'sfm_swe': m2_style,
'aso_swe': m2_style,
'difference': m2_style,
'sfm % to aso': "{:.2%}",
}
columns = ['aso_swe', 'sfm_swe']
# -
total_swe = pd.DataFrame({
'sfm_swe': df.sfm_swe.sum(),
'aso_swe': df.aso_swe.sum(),
},
index=[0]
)
total_swe['difference'] = total_swe.aso_swe - total_swe.sfm_swe
total_swe['sfm % to aso'] = total_swe.sfm_swe / total_swe.aso_swe
total_swe.style.format(table_style)
| notebooks/Resolution_difference-3m.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
#Jupyter Imports
import pandas as pd
import janitor
from zipfile import ZipFile
import os
demographic_dict = {}
vote_history_dict = {}
states_dict = {
"Alaska" : "VM2--AK--2020-03-18",
"Alabama" : "VM2--AL--2020-02-24",
"Arkansas" : "VM2--AR--2020-02-24",
"Arizona": "VM2--AZ--2020-02-19",
"California": "VM2--CA--2020-03-25",
"Colorado": "VM2--CO--2020-02-26",
"Connecticut": "VM2--CT--2020-03-26",
"DC": "VM2--DC--2020-03-02",
"Delaware": "VM2--DE--2020-03-30",
"Florida": "VM2--FL--2020-07-30",
"Georgia": "VM2--GA--2020-03-02",
"Hawaii": "VM2--HI--2020-03-02",
"Iowa": "VM2--IA--2020-03-03",
"Idaho": "VM2--ID--2020-03-27",
"Illinois": "VM2--IL--2020-03-03",
"Indiana": "VM2--IN--2020-02-27",
"Kansas": "VM2--KS--2020-03-18",
"Kentucky": "VM2--KY--2020-02-26",
"Louisiana": "VM2--LA--2020-02-27",
"Massachusetts": "VM2--MA--2020-02-19",
"Maryland": "VM2--MD--2020-02-28",
"Maine": "VM2--ME--2020-02-24",
"Michigan": "VM2--MI--2020-03-02",
"Minnesota": "VM2--MN--2020-02-25",
"Missouri": "VM2--MO--2020-03-05",
"Mississippi": "VM2--MS--2020-03-20",
"Montana": "VM2--MT--2020-03-14",
"North Carolina": "VM2--NC--2020-02-29",
"North Dakota": "VM2--ND--2020-02-28",
"Nebraska": "VM2--NE--2020-03-18",
"New Hampshire": "VM2--NH--2020-03-03",
"New Jersey": "VM2--NJ--2020-02-26",
"New Mexico": "VM2--NM--2020-02-24",
"Nevada": "VM2--NV--2020-02-22",
"New York": "VM2--NY--2020-03-05",
"Ohio": "VM2--OH--2020-02-28",
"Oklahoma": "VM2--OK--2020-02-25",
"Oregon": "VM2--OR--2020-02-25",
"Pennsylvania": "VM2--PA--2020-03-20",
"Rhode Island": "VM2--RI--2020-02-28",
"South Carolina": "VM2--SC--2020-02-21",
"South Dakota": "VM2--SD--2020-02-25",
"Tennessee": "VM2--TN--2020-03-31",
"Texas": "VM2--TX--2020-03-02",
"Utah": "VM2--UT--2020-04-07",
"Virginia": "VM2--VA--2020-03-01",
"Vermont": "VM2--VT--2020-02-27",
"Washington": "VM2--WA--2020-04-20",
"Wisconsin": "VM2--WI--2020-03-21",
"West Virginia": "VM2--WV--2020-03-29",
"Wyoming": "VM2--WY--2020-03-02"
}
def zip_extractor(place):
# if there's an error, try \ instead of /
file_name = "//storage.rcs.nyu.edu/L2_Political/03-01-2020-delivery/files-by-state/"+ place + ".zip"
#file_name = "\\storage.rcs.nyu.edu\L2_Political\03-01-2020-delivery\files-by-state" + place + ".zip"
# opening the zip file in READ mode
zip = ZipFile(file_name,'r')
demographics_file = zip.open(place + "-DEMOGRAPHIC.tab")
print(demographics_file)
vote_history_file = zip.open(place + "-VOTEHISTORY.tab")
zip.close()
demographics = pd.read_csv(demographics_file,
sep='\t', dtype=str, encoding='unicode_escape',
nrows=100)
vote_history = pd.read_csv(vote_history_file,
sep='\t', dtype=str, encoding='unicode_escape',
nrows=100)
return demographics, vote_history
def create_df():
    for file in os.listdir('//storage.rcs.nyu.edu/L2_Political/03-01-2020-delivery/files-by-state/'):
        #for file in os.listdir('\\storage.rcs.nyu.edu\L2_Political\03-01-2020-delivery\files-by-state'):
        rename = file[file.find("--")+2:file.find("-2")-1]
        place = file[:file.find(".zip")]
        demographics, vote_history = zip_extractor(place)
        demographic_dict["%s_demographic" %rename] = demographics
        vote_history_dict["%s_voting_history" %rename] = vote_history
    return demographic_dict, vote_history_dict
create_df()
# +
def remove_null():
    for key in demographic_dict.keys():
        # keep rows with values in at least 75% of columns
        threshold = int(demographic_dict[key].shape[1] * 0.75)
        demographic_dict[key] = demographic_dict[key].dropna(thresh=threshold)

remove_null()
# -
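# `dropna(thresh=k)` keeps rows that have at least k non-null cells. A toy example of the threshold behaviour used above:

```python
import numpy as np
import pandas as pd

df_toy = pd.DataFrame({'a': [1, np.nan, 3],
                       'b': [np.nan, np.nan, 6],
                       'c': [7, 8, 9]})

# thresh=2 keeps rows with at least two non-null values; the middle row has only one.
kept = df_toy.dropna(thresh=2)
n_kept = len(kept)
```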
def merge(state_demographic, state_vote_history):
merged_file = pd.merge(state_vote_history, state_demographic,
how='left', left_on='LALVOTERID', right_on='LALVOTERID')
return merged_file
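# A toy version of the left join above: every vote-history row is kept, and voters missing from the demographic table get NaN demographic columns.

```python
import pandas as pd

demo = pd.DataFrame({'LALVOTERID': ['A1', 'A2'], 'age': [34, 51]})
votes = pd.DataFrame({'LALVOTERID': ['A1', 'A2', 'A3'], 'voted_2018': [1, 0, 1]})

merged = pd.merge(votes, demo, how='left', on='LALVOTERID')
n_rows = len(merged)  # all 3 vote-history rows survive
a3_age_missing = bool(merged.loc[merged.LALVOTERID == 'A3', 'age'].isna().all())
```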
def merge_all():
    # merge each state's demographic table with its matching vote history
    merged_dict = {}
    for key, demographics in demographic_dict.items():
        state = key.split('_')[0]
        merged_dict[state] = merge(demographics, vote_history_dict[state + '_voting_history'])
    return merged_dict
def get_state_keys():
return demographic_dict.keys()
get_state_keys()
def get_sample(file, sample_size):
return file.sample(frac=sample_size)
def build_stratified_national_sample(sample_size, place_dict):
    # draw the same fraction from every state so each is represented proportionally
    samples = [get_sample(state_df, sample_size) for state_df in place_dict.values()]
    national_sample = pd.concat(samples, axis=0)
    print(national_sample)
    return national_sample
build_stratified_national_sample(0.10, demographic_dict)
def build_simple_random_national_sample(sample_size):
    # pool every state's rows, then sample uniformly across the whole country
    national = pd.concat(demographic_dict.values(), axis=0)
    return get_sample(national, sample_size)
| .ipynb_checkpoints/Main-checkpoint.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernel_info:
# name: python3
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# %matplotlib inline
from matplotlib import style
style.use('fivethirtyeight')
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import datetime as dt
# # Reflect Tables into SQLAlchemy ORM
# Python SQL toolkit and Object Relational Mapper
import sqlalchemy
from sqlalchemy.ext.automap import automap_base
from sqlalchemy.orm import Session
from sqlalchemy import create_engine, func
engine = create_engine("sqlite:///Resources/hawaii.sqlite")
# Declare a Base using `automap_base()`
Base = automap_base()
# +
# Use the Base class to reflect the database tables
Base.prepare(engine, reflect=True)
# -
# We can view all of the classes that automap found
Base.classes.keys()
# +
# Save references to each table
Measurement = Base.classes.measurement
Station = Base.classes.station
# +
# Create our session (link) from Python to the DB
session = Session(engine)
# -
# # Exploratory Climate Analysis
# +
# Design a query to retrieve the last 12 months of precipitation data and plot the results
last_date = session.query(Measurement.date).order_by(Measurement.date.desc()).first()
last_date
# -
#Find date 1 year ago from last date point
query_date = dt.date(2017, 8, 23) - dt.timedelta(days=365)
query_date
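# Note that `timedelta(days=365)` is a fixed 365-day offset, not a calendar-aware year; it lands on the same month/day here only because no Feb 29 falls inside the window.

```python
import datetime as dt

last = dt.date(2017, 8, 23)
one_year_ago = last - dt.timedelta(days=365)  # no leap day in this window
```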
# +
#Perform a query to retrieve the data and precipitation scores
results = session.query(Measurement.date, Measurement.prcp).filter(Measurement.date >= query_date).\
order_by(Measurement.date).all()
# +
#Save the query results as a Pandas DataFrame and set the index to the date column (dates sorted in query)
precip_df = pd.DataFrame(results, columns=['date', 'precipitation'])
precip_df.set_index('date', inplace=True)
precip_df.head()
# +
#Plot the data
precip_df.plot(rot=90, legend=False)
plt.xlabel("Date")
plt.ylabel("Inches")
plt.title("Precipitation Past 12 Months")
plt.show()
# +
# Calculate the summary statistics for the precipitation data
precip_df.describe()
# +
# Design a query to show how many stations are available in this dataset?
total_stations = session.query(Measurement).group_by(Measurement.station).count()
print(f"Total Stations = {total_stations}")
# +
# What are the most active stations? (i.e. what stations have the most rows)?
# List the stations and the counts in descending order.
station_counts = session.query(Measurement.station, func.count(Measurement.tobs)).group_by(Measurement.station).order_by(func.count(Measurement.tobs).desc()).all()
SC_df = pd.DataFrame(station_counts, columns=['Station', 'Count'])
SC_df.set_index('Station', inplace=True)
SC_df
# +
# Using the station id from the previous query, calculate the lowest temperature recorded,
# highest temperature recorded, and average temperature of the most active station?
sel = [func.min(Measurement.tobs), func.max(Measurement.tobs), func.avg(Measurement.tobs)]
station_stats = session.query(*sel).filter(Measurement.station == 'USC00519281').all()
stats = list(np.ravel(station_stats))
min_temp = stats[0]
max_temp = stats[1]
avg_temp = round(stats[2],1)
print("Temperature Statistics for Station USC00519281")
print("----------------------------------------------")
print(f"Lowest Temperature Recorded: {min_temp}")
print(f"Highest Temperature Recorded: {max_temp}")
print(f"Average Temperature: {avg_temp}")
# +
# Choose the station with the highest number of temperature observations.
# Query the last 12 months of temperature observation data for this station and plot the results as a histogram
temps = session.query(Measurement.date, Measurement.tobs).filter(Measurement.date >= query_date).\
filter(Measurement.station == 'USC00519281').all()
temps_df = pd.DataFrame(temps, columns=['date', 'temperature'])
#Plot the results as a histogram with bins=12
plt.hist(temps_df.temperature, bins=12)
plt.xlabel("Temperature", fontsize=12)
plt.ylabel("Frequency", fontsize=12)
plt.title("Station USC00519281: P12M Temperature Histogram", fontsize=16)
plt.show()
# -
# ## Bonus Challenge Assignment
from scipy import stats
from numpy import mean
station = Base.classes.station
#Find average temperature for June across all stations across all available years
June_date_str = "06"
June_tobs = session.query(Measurement.tobs).filter(func.strftime("%m", Measurement.date) == June_date_str).all()
mean(June_tobs)
# +
#Find average temperature for December across all stations across all available years
Dec_date_str = "12"
Dec_tobs = session.query(Measurement.tobs).filter(func.strftime("%m", Measurement.date) == Dec_date_str).all()
mean(Dec_tobs)
# -
# Perform a t-test to determine if these means are statistically significant
stats.ttest_ind(June_tobs, Dec_tobs)
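# A toy illustration of the unpaired t-test used above: with clearly separated means and a few hundred samples, the p-value drops well below 0.05 (synthetic temperatures, not the station data).

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
june_toy = rng.normal(75.0, 3.0, 500)  # synthetic June temperatures
dec_toy = rng.normal(71.0, 3.5, 500)   # synthetic December temperatures

t_stat, p_value = stats.ttest_ind(june_toy, dec_toy)
significant = bool(p_value < 0.05)
```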
#This function called `calc_temps` will accept start date and end date in the format '%Y-%m-%d'
#and return the minimum, average, and maximum temperatures for that range of dates
def calc_temps(start_date, end_date):
"""TMIN, TAVG, and TMAX for a list of dates.
Args:
start_date (string): A date string in the format %Y-%m-%d
end_date (string): A date string in the format %Y-%m-%d
Returns:
TMIN, TAVE, and TMAX
"""
return session.query(func.min(Measurement.tobs), func.avg(Measurement.tobs), func.max(Measurement.tobs)).\
filter(Measurement.date >= start_date).filter(Measurement.date <= end_date).all()
#Use the function `calc_temps` to calculate the tmin, tavg, and tmax for trip dates using the previous year's data
#for those same dates.
temp_res = calc_temps('2010-06-15', '2010-06-25')
print(temp_res)
# +
# Plot the results from your previous query as a bar chart.
# Use "Trip Avg Temp" as your Title
# Use the average temperature for the y value
# Use the peak-to-peak (tmax-tmin) value as the y error bar (yerr)
temp_list = list(np.ravel(temp_res))
Avg_Temp = temp_list[1]
Min_Temp = temp_list[0]
Max_Temp = temp_list[2]
PTP = Max_Temp - Min_Temp
x_axis = 1
plt.figure(figsize=(1.5,5))
plt.bar(x_axis, Avg_Temp, color='r', alpha=0.5, yerr=PTP, align="center")
ax = plt.gca()
ax.axes.xaxis.set_ticklabels([])
ax.xaxis.grid()
plt.ylim(0, 100)
plt.ylabel("Temp (F)", fontsize=10)
plt.title("Trip Avg Temp", fontsize=12)
plt.show()
# +
#Calculate the total amount of rainfall per weather station for trip dates using the previous year's matching dates.
#Sort this in descending order by precipitation amount and list the station, name, latitude, longitude, and elevation
start_dt = '2010-06-15'
end_dt = '2010-06-25'
results = session.query(Measurement.station, station.name, station.latitude, station.longitude, station.elevation, func.sum(Measurement.prcp)).\
filter(Measurement.station == station.station).filter(Measurement.date >= start_dt).filter(Measurement.date <= end_dt).\
group_by(Measurement.station).order_by(func.sum(Measurement.prcp).desc()).all()
precip_sum = pd.DataFrame(results, columns=['Station', 'Name', 'Latitude', 'Longitude', 'Elevation', 'Total Precip'])
precip_sum.set_index('Station', inplace=True)
precip_sum
# +
#Create a query that will calculate the daily normals
#(i.e. the averages for tmin, tmax, and tavg for all historic data matching a specific month and day)
def daily_normals(date):
"""Daily Normals.
Args:
date (str): A date string in the format '%m-%d'
Returns:
A list of tuples containing the daily normals, tmin, tavg, and tmax
"""
sel = [func.min(Measurement.tobs), func.avg(Measurement.tobs), func.max(Measurement.tobs)]
return session.query(*sel).filter(func.strftime("%m-%d", Measurement.date) == date).all()
# +
#Calculate the daily normals for trip
#Push each tuple of calculations into a list called `normals`
#Set the start and end date of the trip
#Use the start and end date to create a range of dates
#Strip off the year and save a list of %m-%d strings
my_dates = pd.date_range(start='2011-06-15', end='2011-06-25')
my_trip_dates = []
my_trip_dates_md = []
for date in my_dates:
my_trip_dates.append(date.strftime('%Y-%m-%d'))
my_trip_dates_md.append(date.strftime('%m-%d'))
normals = []
for d in my_trip_dates_md:
dly_nrms = daily_normals(d)
normals.append(dly_nrms)
normals
# -
# Load the previous query results into a Pandas DataFrame and add the `trip_dates` range as the `date` index
normals_df = pd.DataFrame(np.vstack(normals), columns=['Min_Temp', 'Avg_Temp', 'Max_Temp'])
normals_df['Date'] = my_trip_dates
normals_df.set_index('Date', inplace=True)
normals_df
# Plot the daily normals as an area plot with `stacked=False`
normals_df.plot(kind='area', stacked=False, rot=90)
plt.ylabel("Temperature")
plt.legend(loc='lower center')
plt.title("Daily Normals for Trip")
plt.show()
| SQL_Alchemy_Challenge.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] id="jgnQr_k9zqe_"
# ## A Visual History of Interpretation in Image Recognition
#
# This notebook reproduces the history-of-interpretation [blog post](https://gradio.app/blog/interpretation-history), by the [Gradio](https://github.com/gradio-app/gradio) team. We relied heavily on [PAIR-code's implementation](https://github.com/PAIR-code/saliency) of the papers.
#
# Find the colab version [here](https://colab.research.google.com/drive/1IxhImCFknNMctIonSo98nkco2ufKmfdj?usp=sharing).
# + [markdown] id="UMSg9kP70gjF"
# ### Imports and Setup
# + colab={"base_uri": "https://localhost:8080/"} id="GUQRh_OYv8lZ" outputId="43dc915f-c5a1-440b-ee42-5c54ff653901"
# !pip install tf-slim gradio wget -q
import tensorflow.compat.v1 as tf
import tf_slim as slim
import sys
import inception_v3
import saliency
import requests
import gradio as gr
import wget
import tarfile
import numpy as np
import os
if not os.path.exists('inception_v3.ckpt'):
wget.download("http://download.tensorflow.org/models/inception_v3_2016_08_28.tar.gz")
tar = tarfile.open("inception_v3_2016_08_28.tar.gz")
tar.extractall()
tar.close()
# + [markdown] id="vYgIxHOd05p7"
# ### Setting up the graph, and adding a logit tensor
# + colab={"base_uri": "https://localhost:8080/"} id="QO4Zw7ngwBvO" outputId="6b05948a-2455-4713-ee1e-70bca985437b"
ckpt_file = './inception_v3.ckpt'
graph = tf.Graph()
with graph.as_default():
images = tf.placeholder(tf.float32, shape=(None, 299, 299, 3))
with slim.arg_scope(inception_v3.inception_v3_arg_scope()):
_, end_points = inception_v3.inception_v3(images, is_training=False, num_classes=1001)
# Restore the checkpoint
sess = tf.Session(graph=graph)
saver = tf.train.Saver()
saver.restore(sess, ckpt_file)
# Construct the scalar neuron tensor.
logits = graph.get_tensor_by_name('InceptionV3/Logits/SpatialSqueeze:0')
neuron_selector = tf.placeholder(tf.int32)
y = logits[0][neuron_selector]
# Construct tensor for predictions.
prediction = tf.argmax(logits, 1)
# + [markdown] id="Zn4GvI_a1LLg"
# ### Initializing and creating the different saliency methods
# + colab={"base_uri": "https://localhost:8080/"} id="8kMHuXi0zUtV" outputId="ed2958c2-b0c1-4cc7-9353-aa9071c1563f"
gradients = saliency.GradientSaliency(graph, sess, y, images)
guided = saliency.GuidedBackprop(graph, sess, y, images)
integrated = saliency.IntegratedGradients(graph, sess, y, images)
blur_ig = saliency.BlurIG(graph, sess, y, images)
# + id="-Ww37lyF1i1r"
def vanilla_gradients(image):
image = image / 127.5 - 1.0
prediction_class = sess.run(prediction, feed_dict = {images: [image]})[0]
vanilla_mask_3d = gradients.GetMask(image, feed_dict = {neuron_selector: prediction_class})
vanilla_mask_grayscale = saliency.VisualizeImageGrayscale(vanilla_mask_3d)
return vanilla_mask_grayscale.tolist()
def smoothgrad(image):
image = image / 127.5 - 1.0
prediction_class = sess.run(prediction, feed_dict = {images: [image]})[0]
smoothgrad_mask_3d = gradients.GetSmoothedMask(image, feed_dict = {neuron_selector: prediction_class})
smoothgrad_mask_grayscale = saliency.VisualizeImageGrayscale(smoothgrad_mask_3d)
return smoothgrad_mask_grayscale.tolist()
def guided_backprop(image):
image = image / 127.5 - 1.0
prediction_class = sess.run(prediction, feed_dict = {images: [image]})[0]
vanilla_guided_backprop_mask_3d = guided.GetMask(
image, feed_dict = {neuron_selector: prediction_class})
vanilla_mask_grayscale = saliency.VisualizeImageGrayscale(vanilla_guided_backprop_mask_3d)
return vanilla_mask_grayscale.tolist()
def integrated_smoothgrad(image):
image = image / 127.5 - 1.0
prediction_class = sess.run(prediction, feed_dict = {images: [image]})[0]
baseline = np.zeros(image.shape)
baseline.fill(-1)
smoothgrad_integrated_gradients_mask_3d = integrated.GetSmoothedMask(
image, feed_dict = {neuron_selector: prediction_class}, x_steps=25, x_baseline=baseline)
smoothgrad_mask_grayscale = saliency.VisualizeImageGrayscale(smoothgrad_integrated_gradients_mask_3d)
return smoothgrad_mask_grayscale.tolist()
def blur_IG_vanilla(image):
image = image / 127.5 - 1.0
prediction_class = sess.run(prediction, feed_dict = {images: [image]})[0]
blur_ig_mask_3d = blur_ig.GetMask(
image, feed_dict = {neuron_selector: prediction_class})
blur_ig_mask_grayscale = saliency.VisualizeImageGrayscale(blur_ig_mask_3d)
return blur_ig_mask_grayscale.tolist()
# + [markdown] id="ARNjzPA92unC"
# ### Setting up classifier and Gradio
# + colab={"base_uri": "https://localhost:8080/", "height": 813} id="SVuV9DSVwJIv" outputId="b9473743-b624-4229-ea76-b7cc9f2978ac"
inception_net = tf.keras.applications.InceptionV3() # load the model
# Download human-readable labels for ImageNet.
response = requests.get("https://git.io/JJkYN")
labels = response.text.split("\n")
def classify_image(inp):
inp = inp.reshape((-1, 299, 299, 3))
inp = tf.keras.applications.inception_v3.preprocess_input(inp)
prediction = inception_net.predict(inp).flatten()
return {labels[i]: float(prediction[i]) for i in range(1000)}
image = gr.inputs.Image(shape=(299, 299, 3))
label = gr.outputs.Label(num_top_classes=3)
examples = [["doberman.png"], ["dog.png"]]
gr.Interface(classify_image, image, label, capture_session=True, examples=examples,
allow_flagging=False).launch()
# + [markdown] id="RhqG3PGL28MC"
# ### Leave-One-Out
# Default out of the box interpretation in Gradio.
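# A hedged sketch of the occlusion idea behind leave-one-out (Gradio's internals may differ): blank out one patch at a time and record how much the class score drops.

```python
import numpy as np

def leave_one_out_map(image, score_fn, patch=7):
    h, w = image.shape[:2]
    base = score_fn(image)
    heat = np.zeros((h // patch, w // patch))
    for i in range(0, h - patch + 1, patch):
        for j in range(0, w - patch + 1, patch):
            occluded = image.copy()
            occluded[i:i + patch, j:j + patch] = 0  # blank out one patch
            heat[i // patch, j // patch] = base - score_fn(occluded)
    return heat

# Toy scorer: the "class score" only looks at the top-left quadrant.
img = np.ones((14, 14))
heat = leave_one_out_map(img, lambda x: float(x[:7, :7].mean()))
top_left_dominates = bool(heat[0, 0] > heat[1, 1])
```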
# + colab={"base_uri": "https://localhost:8080/", "height": 640} id="eILloho93Lgj" outputId="cdf11378-5117-4e41-d7f2-7b887779f560"
gr.Interface(classify_image, image, label, capture_session=True, interpretation="default", examples=examples,
allow_flagging=False).launch()
# + [markdown] id="z2Ccovnf3L7Z"
# ### Vanilla Gradient Ascent [2009 and 2013]
#
# Paper: [Visualizing Higher-Layer Features of a Deep Network (2009)](https://www.researchgate.net/profile/Aaron_Courville/publication/265022827_Visualizing_Higher-Layer_Features_of_a_Deep_Network/links/53ff82b00cf24c81027da530.pdf)
#
# Paper: [Visualizing Image Classification Models and Saliency Maps (2013)](https://arxiv.org/abs/1312.6034)
#
#
#
# + colab={"base_uri": "https://localhost:8080/", "height": 640} id="G_qQdAK-5AzE" outputId="028ae59e-b120-4ca7-e040-2ac20d9ac25d"
gr.Interface(classify_image, image, label, capture_session=True, interpretation=vanilla_gradients, examples=examples,
allow_flagging=False).launch()
# + [markdown] id="dOTBmmsL29sJ"
# ### Guided Back-Propagation [2014]
#
# Paper: [Striving for Simplicity: The All Convolutional Net (2014)](https://arxiv.org/abs/1412.6806)
#
#
# + colab={"base_uri": "https://localhost:8080/", "height": 640} id="fQ5kKIDG5ETO" outputId="eaa14d82-d839-4625-d031-e184399578ab"
gr.Interface(classify_image, image, label, capture_session=True, interpretation=guided_backprop, examples=examples,
allow_flagging=False).launch()
# + [markdown] id="Y0o9xIJY29zL"
# ### SmoothGrad [2017]
#
# Paper: [SmoothGrad: removing noise by adding noise (2017)](https://arxiv.org/abs/1706.03825)
#
#
# + colab={"base_uri": "https://localhost:8080/", "height": 640} id="bs0h9zum5E9-" outputId="9895880d-003f-4cee-eee5-289ae2032483"
gr.Interface(classify_image, image, label, capture_session=True, interpretation=smoothgrad, examples=examples,
allow_flagging=False).launch()
# + [markdown] id="15aYXFcz292B"
# ### Integrated Gradients [2017]
#
# Paper: [Axiomatic Attribution for Deep Networks (2017)](https://arxiv.org/abs/1703.01365)
#
# **Note**: This method is *very* slow
#
# + colab={"base_uri": "https://localhost:8080/", "height": 609} id="NwUYghZO5Fgs" outputId="c57fff4e-b216-440e-a15a-2f9a2d4b2097"
gr.Interface(classify_image, image, label, capture_session=True, interpretation=integrated_smoothgrad, examples=examples,
allow_flagging=False).launch(debug=True)
# + [markdown] id="DqBRK0c-2947"
# ### Blur Integrated Gradients [2020]
#
# Paper: [Attribution in Scale and Space (2020)](https://arxiv.org/pdf/2004.03383)
#
# **Note**: This method is *very* slow
#
#
# + colab={"base_uri": "https://localhost:8080/", "height": 640} id="GSAVftoh5GQ3" outputId="0837e9e8-d789-4f04-f568-f976630212a3"
gr.Interface(classify_image, image, label, capture_session=True, interpretation=blur_IG_vanilla, examples=examples,
allow_flagging=False).launch()
| History-of-Interpretation.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# <h1>ANOVOS - Data Transformer<span class="tocSkip"></span></h1>
# <p>The following notebook lists the functions available in the "data transformer" module of the ANOVOS package and shows how each can be invoked.</p>
# <div class="toc"><ul class="toc-item"><li><span><a href="#Attribute-Binning-(discretization)" data-toc-modified-id="Attribute-Binning-1">Attribute Binning (discretization)</a></span></li><li><span><a href="#Monotonic-Binning" data-toc-modified-id="Monotonic-Binning-2">Monotonic Binning</a></span></li><li><span><a href="#Categorical-Attribute-to-Numerical-Attribute-Conversion" data-toc-modified-id="Categorical-Attribute-to-Numerical-Attribute-Conversion-3">Categorical Attribute to Numerical Attribute Conversion</a></span></li><li><span><a href="#Missing-Value-Imputation" data-toc-modified-id="Missing-Value-Imputation">Missing Value Imputation</a></span></li><li><span><a href="#Outlier-Categories-Treatment" data-toc-modified-id="Outlier-Categories-Treatment-5">Outlier Categories Treatment</a></span></li></ul></div>
# **Setting Spark Session**
from anovos.shared.spark import *
# **Input/Output Path**
inputPath = "../data/income_dataset/csv"
outputPath = "../output/income_dataset/data_transformer"
# **Read Input Data**
from anovos.data_ingest.data_ingest import read_dataset
from pyspark.sql import functions as F
df = read_dataset(spark, file_path = inputPath, file_type = "csv",
file_configs = {"header": "True", "delimiter": "," , "inferSchema": "True"})
df.toPandas().head(5)
# # Attribute Binning (discretization)
# - API specification of function **attribute_binning** can be found <a href="../api_specification/anovos/data_transformer/transformers.html#anovos.data_transformer.transformers.attribute_binning">here</a>
# - Supports numerical attributes only. 2 binning options: Equal Range Binning (each bin is of equal size/width) and Equal Frequency Binning (each bin has equal no. of rows)
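# The two options can be sketched with plain NumPy (an illustrative toy, not the ANOVOS implementation): equal-range bins use equally spaced edges, equal-frequency bins use quantile edges.

```python
import numpy as np

values = np.array([1, 2, 2, 3, 5, 8, 13, 21, 34, 55], dtype=float)
bin_size = 5

# Equal range: bin edges equally spaced between min and max
range_edges = np.linspace(values.min(), values.max(), bin_size + 1)
range_bins = np.digitize(values, range_edges[1:-1])

# Equal frequency: bin edges at quantiles, so each bin holds ~equal row counts
freq_edges = np.quantile(values, np.linspace(0, 1, bin_size + 1))
freq_bins = np.digitize(values, freq_edges[1:-1])

print(np.bincount(range_bins, minlength=bin_size))  # skewed counts
print(np.bincount(freq_bins, minlength=bin_size))   # roughly equal counts
```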
from anovos.data_transformer.transformers import attribute_binning
# +
# Example 1 - Equal range binning + append transformed columns at the end
odf = attribute_binning(spark, idf=df, list_of_cols=["education-num", "hours-per-week"], method_type="equal_range",
bin_size=5, output_mode="append", print_impact=True)
odf.toPandas().head(5)
# -
# Distinct values after binning
odf.select('hours-per-week_binned').distinct().orderBy('hours-per-week_binned').toPandas().head(10)
# +
# Example 2 - Equal frequency binning + replace original columns by transformed ones (default)
odf = attribute_binning(spark, df, list_of_cols=["education-num", "hours-per-week"], method_type="equal_frequency",
bin_size=5, print_impact=True)
odf.toPandas().head(5)
# -
# Distinct values after binning
odf.select('hours-per-week').distinct().orderBy('hours-per-week').toPandas().head(10)
# +
# Example 3 - Equal frequency binning + save binning model
odf = attribute_binning(spark, df, list_of_cols=["education-num", "hours-per-week"], method_type="equal_frequency",
bin_size=5, pre_existing_model=False, model_path=outputPath + "/attribute_binning")
odf.toPandas().head(5)
# -
# Example 4 - Equal frequency binning + use pre-saved model
odf = attribute_binning(spark, df, list_of_cols=["education-num", "hours-per-week"],
pre_existing_model=True, model_path=outputPath + "/attribute_binning")
odf.toPandas().head(5)
# # Monotonic Binning
# - API specification of function **monotonic_binning** can be found <a href="../api_specification/anovos/data_transformer/transformers.html#anovos.data_transformer.transformers.monotonic_binning">here</a>
# - Bin size is computed dynamically
from anovos.data_transformer.transformers import monotonic_binning
# Example 1 - Equal Range Binning + append transformed columns at the end
odf = monotonic_binning(spark, df, list_of_cols=["education-num", "hours-per-week"], label_col="income",
event_label=">50K", bin_method="equal_range", output_mode="append")
odf.toPandas().head(5)
# Distinct values for hours-per-week after binning
odf.select("hours-per-week_binned").distinct().orderBy('hours-per-week_binned').toPandas()
# Example 2 - Equal Frequency Binning + replace original columns by transformed ones (default)
odf = monotonic_binning(spark, df, list_of_cols=["education-num", "hours-per-week"], label_col="income",
event_label=">50K", bin_method="equal_frequency")
odf.toPandas().head(5)
# Distinct values for hours-per-week after binning
odf.select("hours-per-week").distinct().orderBy('hours-per-week').toPandas()
# # Categorical Attribute to Numerical Attribute Conversion
# - API specification of function **cat_to_num_unsupervised** can be found <a href="../api_specification/anovos/data_transformer/transformers.html#anovos.data_transformer.transformers.cat_to_num_unsupervised">here</a>
# - Supports Label Encoding and One hot encoding
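# As a rough pandas sketch (toy data, not the ANOVOS API): label encoding assigns each category an integer, one-hot encoding adds a 0/1 indicator column per category.

```python
import pandas as pd

df = pd.DataFrame({"sex": ["Male", "Female", "Female", "Male"]})

# Label encoding: integer code per category (alphabetical order by default)
df["sex_index"] = df["sex"].astype("category").cat.codes

# One-hot encoding: one indicator column per category
onehot = pd.get_dummies(df["sex"], prefix="sex")
print(df.join(onehot))
```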
from anovos.data_transformer.transformers import cat_to_num_unsupervised
# Example 1 - with mandatory arguments (Label Encoding)
odf = cat_to_num_unsupervised(spark, df)
odf.toPandas().head(5)
# Example 2 - 'all' columns (excluding drop_cols) + print impact
odf = cat_to_num_unsupervised(spark, df, list_of_cols='all', drop_cols=['ifa'], print_impact=True)
odf.toPandas().head(5)
# Example 3 - selected categorical columns + assign unique integers based on alphabetical order (asc)
odf = cat_to_num_unsupervised(spark, df, list_of_cols='all', drop_cols=['ifa'], index_order='alphabetAsc')
odf.toPandas().head(5)
# Example 4 - selected categorical columns + one hot encoding (method_type=0) + print impact
odf = cat_to_num_unsupervised(spark, df, list_of_cols=['race', 'sex'], method_type=0, print_impact=True)
odf.toPandas().head(5)
# Example 5 - one hot encoding + save model
odf = cat_to_num_unsupervised(spark, df, list_of_cols='all', drop_cols=['ifa', 'empty'], method_type=0,
pre_existing_model=False, model_path=outputPath)
odf.limit(10).toPandas().head(5)
# Example 6 - one hot encoding + use pre-saved model
odf = cat_to_num_unsupervised(spark, df, list_of_cols='all', drop_cols=['ifa', 'empty'], method_type=0,
pre_existing_model=True, model_path=outputPath)
odf.limit(10).toPandas().head(5)
# # Missing Value Imputation
# - API specification of function **imputation_MMM** can be found <a href="../api_specification/anovos/data_transformer/transformers.html#anovos.data_transformer.transformers.imputation_MMM">here</a>
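# The idea behind MMM (mean/median/mode) imputation can be sketched in pandas (column names here are illustrative, not tied to the income dataset): numeric gaps get a central value, categorical gaps get the most frequent value.

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({
    "age": [25.0, np.nan, 40.0, 35.0],
    "occupation": ["Sales", None, "Sales", "Tech"],
})

# Numerical column: fill with the median (mean works the same way)
df["age"] = df["age"].fillna(df["age"].median())
# Categorical column: fill with the mode (most frequent value)
df["occupation"] = df["occupation"].fillna(df["occupation"].mode()[0])
print(df)
```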
from anovos.data_transformer.transformers import imputation_MMM
# Example 1 - with mandatory arguments + print impact
odf = imputation_MMM(spark, df, print_impact=True)
# Example 2 - use mean for numerical columns + append transformed columns at the end
odf = imputation_MMM(spark, df, list_of_cols='all', method_type="mean", output_mode="append")
odf.toPandas().head(5)
odf.select('education-num', 'education-num_imputed').where(F.col("education-num").isNull()).distinct().toPandas().head(5)
# Example 3 - save model
odf = imputation_MMM(spark, df, pre_existing_model=False, model_path=outputPath)
# Example 4 - use pre-saved model
odf = imputation_MMM(spark, df, pre_existing_model=True, model_path=outputPath)
odf.toPandas().head(5)
# +
# Example 5 - selected columns + use pre-saved stats
from anovos.data_analyzer.stats_generator import measures_of_counts, measures_of_centralTendency
from anovos.data_ingest.data_ingest import write_dataset
missing = write_dataset(measures_of_counts(spark, df),outputPath+"/missing","parquet", file_configs={"mode":"overwrite"})
mode = write_dataset(measures_of_centralTendency(spark, df),outputPath+"/mode","parquet", file_configs={"mode":"overwrite"})
odf = imputation_MMM(spark, df, list_of_cols=['marital-status', 'sex', 'occupation', 'age'],
stats_missing={"file_path":outputPath+"/missing", "file_type": "parquet"},
stats_mode={"file_path":outputPath+"/mode", "file_type": "parquet"}, print_impact=True)
odf.toPandas().head(5)
# -
# # Outlier Categories Treatment
# - API specification of function **outlier_categories** can be found <a href="../api_specification/anovos/data_transformer/transformers.html#anovos.data_transformer.transformers.outlier_categories">here</a>
# - Supports two ways of detecting outlier categories: by maximum number of categories and by coverage (%)
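# The coverage option can be sketched in pandas (assumed semantics; the exact ANOVOS boundary rules may differ): keep the fewest top categories that reach the coverage threshold and relabel the long tail.

```python
import pandas as pd

s = pd.Series(["a"] * 50 + ["b"] * 30 + ["c"] * 15 + ["d"] * 3 + ["e"] * 2)
coverage = 0.9

freq = s.value_counts(normalize=True)                 # sorted descending
n_keep = int((freq.cumsum() < coverage).sum()) + 1    # fewest categories reaching coverage
keep = set(freq.index[:n_keep])
treated = s.where(s.isin(keep), "others")             # tail becomes "others"
print(treated.value_counts())
```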
from anovos.data_transformer.transformers import outlier_categories
# Example 1 - 'all' columns (excluding drop_cols) + max 15 categories + append transformed columns at the end
odf = outlier_categories(spark, df, drop_cols=['ifa'], max_category=15, output_mode='append')
odf.toPandas().head(5)
# Example 2 - selected columns + max 10 categories
odf = outlier_categories(spark, df, list_of_cols=['education', 'occupation', 'native-country'],
max_category=10, print_impact=True)
# Example 3 - selected columns + cover 90% values
odf = outlier_categories(spark, df, list_of_cols=['education', 'occupation', 'native-country'],
coverage=0.9, print_impact=True)
# Example 4 - max 15 categories + save model
odf = outlier_categories(spark, df, drop_cols=['ifa'], max_category=15,
pre_existing_model=False, model_path=outputPath, print_impact=True)
# Example 5 - use pre-saved model
odf = outlier_categories(spark, df, drop_cols=['ifa'], pre_existing_model=True, model_path=outputPath, print_impact=True)
| examples/notebooks/data_transformer.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Mumax projection
# %gui qt
# %matplotlib qt
from glob import glob
import os, sys
# only necessary if using without installing
sys.path.append("..")
from xmcd_projection import *
from skimage.io import imread, imsave
from PIL import Image
import meshio
import trimesh
# ### Get file paths
msh_file = "mumax_mesh.vtu"
mag_file = "mumax_mag.csv"
# scale for the points
scale = 1e9
# ## Generate raytracing - skip if generated
# get the mesh, scale the points to nm
msh = Mesh.from_file(msh_file, scale=scale)
# #### Make sure that the projection vector is correct and that the structure is oriented well
# +
# get the projection vector
p = get_projection_vector(90, 0) # direction of xrays
n = [0, 1, 1] # normal to the projection plane
x0 = [-100, 0, 0] # point on the projection plane
# prepare raytracing object
raytr = RayTracing(msh, p, n=n, x0=x0)
struct = raytr.struct
struct_projected = raytr.struct_projected
# -
vis = MeshVisualizer(struct, struct_projected)
vis.set_camera(dist=2e5)
vis.show()
# ## If raytracing file generated - skip if not
# load raytracing if exists
raytr = np.load("raytracing.npy", allow_pickle=True).item()
struct = raytr.struct
struct_projected = raytr.struct_projected
# ## Generate and save raytracing
raytr.get_piercings()
np.save("raytracing.npy", raytr, allow_pickle=True)
# ## Get the xmcd
# #### Get magnetisation, fix vertex shuffling
# Note: if the mesh file has multiple parts, the ParaView export and the original mesh coordinates are sometimes not in the same order. A function is provided to fix that when necessary
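# A minimal sketch of such a fix (an assumed approach, not the xmcd_projection internals): match the two orderings of the same point cloud with a nearest-neighbour query, then reindex.

```python
import numpy as np
from scipy.spatial import cKDTree

mesh_pts = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0]], dtype=float)
exported = mesh_pts[[2, 0, 1]]              # same points, shuffled order

# for each mesh point, find its index in the exported ordering
_, shuffle_indx = cKDTree(exported).query(mesh_pts)
print(exported[shuffle_indx])               # back in mesh order
```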
magnetisation, mag_points = load_mesh_magnetisation(mag_file, scale=scale)
shuffle_file = "shuffle_indx.npy"
try:
shuffle_indx = np.load(shuffle_file)
except FileNotFoundError:
print('File not found. Generating shuffle indx')
shuffle_indx = msh.get_shuffle_indx(mag_points)
np.save(shuffle_file, shuffle_indx)
magnetisation = magnetisation[shuffle_indx, :]
# ### Get the colours and XMCD values
xmcd_value = raytr.get_xmcd(magnetisation)
mag_colors = get_struct_face_mag_color(struct, magnetisation)
# +
azi=90
center_struct = [0, 0, 0]
dist_struct = 1e4
center_peem = [100, -200, 0]
dist_peem = 8e4
vis = MeshVisualizer(struct, struct_projected, projected_xmcd=xmcd_value, struct_colors=mag_colors)
vis.show(azi=azi, center=center_peem, dist=dist_peem)
Image.fromarray(vis.get_image_np())
# -
# #### View different parts of the image separately
# #### Both
# +
vis.update_colors(xmcd_value, mag_colors)
vis.view_both(azi=azi, center=center_peem, dist=dist_peem)
Image.fromarray(vis.get_image_np())
# -
# #### Projection
vis.view_projection(azi=azi, center=center_peem, dist=dist_peem)
Image.fromarray(vis.get_image_np())
# #### Structure
# +
center_struct = [75, 50, 0]
dist_struct = 1e4
vis.view_struct(azi=azi, center=center_struct, dist=dist_struct)
Image.fromarray(vis.get_image_np())
# -
# #### Blurred image
vis.view_projection(azi=azi, center=center_peem, dist=dist_peem)
Image.fromarray((vis.get_blurred_image(desired_background=0.7)*255).astype(np.uint8))
# #### Saving one render
# +
vis.view_both(azi=azi, center=center_peem, dist=dist_peem)
vis.save_render('mumax_shadow.png')
vis.view_projection()
blurred = vis.get_blurred_image(desired_background=0.7)
imsave('mumax_shadow_blurred.png', (blurred*255).astype(np.uint8), check_contrast=False)
vis.view_struct(azi=azi, center=center_struct, dist=dist_struct)
vis.save_render('mumax_structure_view.png')
# -
| examples/mumax_projection.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python [conda env:.conda-2019_rbig_ad]
# language: python
# name: conda-env-.conda-2019_rbig_ad-py
# ---
# # Droughts - Pre-Processing
#
# In this notebook, I will be going over the preprocessing steps needed before starting the experiments. I will include the following steps:
#
# 1. Load Data
# 2. Select California
# 3. Fill NANs
# 4. Smoothing of the VOD signal (savgol filter)
# 5. Removing the climatology
# 6. Select drought years and non-drought years
# 7. Extract density cubes
# ## Code
# +
import sys, os
cwd = os.getcwd()
sys.path.insert(0, f'{cwd}/../../')
sys.path.insert(0, '/home/emmanuel/code/py_esdc')
import xarray as xr
import pandas as pd
import numpy as np
# drought tools
from src.data.drought.loader import DataLoader
from src.features.drought.build_features import (
    get_cali_geometry,
    mask_datacube,
    smooth_vod_signal,
    remove_climatology,
    calculate_monthly_mean,  # assumed to live here; used in the climatology step below
)
from src.visualization.drought.analysis import plot_mean_time
# esdc tools
from esdc.subset import select_pixel
from esdc.shape import ShapeFileExtract, rasterize
from esdc.transform import DensityCubes
import matplotlib.pyplot as plt
import cartopy
import cartopy.crs as ccrs
plt.style.use(['fivethirtyeight', 'seaborn-poster'])
# %matplotlib inline
# %load_ext autoreload
# %autoreload 2
# -
# ## 1. Load Data
# +
region = 'conus'
sampling = '14D'
drought_cube = DataLoader().load_data(region, sampling)
pixel = (-121, 37)
drought_cube
# -
# Verify with a simple plot.
plot_mean_time(
drought_cube.LST.sel(time=slice('June-2010', 'June-2010'))
)
# ## 2. Subset California
# +
# get california polygon
cali_geoms = get_cali_geometry()
# get california cube subset
cali_cube = mask_datacube(drought_cube, cali_geoms)
# -
plot_mean_time(
cali_cube.LST.sel(time=slice('June-2011', 'June-2011'))
)
# ## 3. Interpolate NANs - Time Dimension
# +
# interpolation arguments
interp_dim = 'time'
method = 'linear'
# do interpolation
cali_cube_interp = cali_cube.interpolate_na(
dim=interp_dim,
method=method
)
# -
# ## 4. Smoothing the Signal (VOD)
#
# In this section, we will try to smooth the signal with two methods:
#
# 1. Simple - Rolling mean
# 2. Using a savgol filter.
#
# Some initial parameters:
#
# * Window Size = 5
# * Polynomial Order = 3
#
# We will apply this filter in the time domain only.
vod_data = cali_cube_interp.VOD
vod_data
# ### 4.1 - Savgol Filter
from scipy.signal import savgol_filter
# +
# select example
vod_data_ex = select_pixel(vod_data, pixel)
# savgol filter params
window_length = 5
polyorder = 3
# apply savgol filter
vod_smooth_filter = savgol_filter(
vod_data_ex,
window_length=window_length,
polyorder=polyorder
)
fig, ax = plt.subplots(nrows=2, figsize=(10, 10))
ax[0].plot(vod_data_ex)
ax[0].set_title('Original Data')
ax[1].plot(vod_smooth_filter)
ax[1].set_title('After Savgol Filter')
plt.show()
# -
# ### 4.2 - Rolling Window
# +
# select example
vod_data_ex = select_pixel(vod_data, pixel)
# savgol filter params
window_length = 2
# apply savgol filter
vod_smooth_roll = vod_data_ex.rolling(
time=window_length,
center=True
).mean()
fig, ax = plt.subplots(nrows=2, figsize=(10, 10))
ax[0].plot(vod_data_ex)
ax[0].set_title('Original Data')
ax[1].plot(vod_smooth_roll)
ax[1].set_title('After Rolling Mean')
plt.show()
# -
# ### 4.3 - Difference
vod_smooth_diff = vod_smooth_filter - vod_smooth_roll
# +
fig, ax = plt.subplots(nrows=4,figsize=(10,10))
ax[0].plot(vod_data_ex)
ax[0].set_title('Original')
ax[1].plot(vod_smooth_filter)
ax[1].set_title('Savgol Filter')
ax[2].plot(vod_smooth_roll)
ax[2].set_title('Rolling Mean')
ax[3].plot(vod_smooth_diff)
ax[3].set_title('Difference')
# Scale the difference y-limits symmetrically around zero
ylim = float(np.max(np.abs(vod_smooth_diff)))
ax[3].set_ylim([-ylim, ylim])
plt.tight_layout()
plt.show()
# -
# ### 4.3 - Apply Rolling Mean to the whole dataset
cali_cube_interp = smooth_vod_signal(cali_cube_interp, window_length=2, center=True)
# ## 5. Remove Climatology
#
# By 'climatology' I mean the typical seasonal cycle; the anomalies are the differences between the observations and the typical weather for a particular season, so they should no longer contain that cycle. I'll do a very simple removal: calculate the monthly mean with respect to time and then subtract it from each month of the original datacube.
#
# **Steps**
#
# 1. Climatology - monthly mean over the 6 years
# 2. Remove Climatology - subtract the climatology from each month
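# The two steps can be sketched on a toy monthly series with pandas (the notebook does the same with xarray's groupby below):

```python
import numpy as np
import pandas as pd

idx = pd.date_range("2010-01-01", periods=24, freq="MS")   # two years, monthly
ts = pd.Series(np.arange(24, dtype=float), index=idx)

# 1. climatology: mean per calendar month; 2. subtract it from each month
anomalies = ts.groupby(ts.index.month).transform(lambda x: x - x.mean())
print(anomalies.iloc[0], anomalies.iloc[12])   # symmetric around 0
```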
# +
# calculate the climatology
cali_climatology_mean = calculate_monthly_mean(cali_cube_interp)
# remove climatology (from the interpolated, smoothed cube)
cali_anomalies = cali_cube_interp.groupby('time.month') - cali_climatology_mean
# -
# Simple check where we look at the original and the new.
# +
variables = ['LST', 'VOD', 'NDVI', 'SM']
for ivariable in variables:
fig, ax = plt.subplots(nrows=3, figsize=(10, 10))
# Before Climatology
select_pixel(cali_cube_interp[ivariable], pixel).plot(ax=ax[0])
ax[0].set_title('Original Time Series')
# Climatology
select_pixel(cali_climatology_mean[ivariable], pixel).plot(ax=ax[1])
ax[1].set_title('Climatology')
# After Climatology
select_pixel(cali_anomalies[ivariable], pixel).plot(ax=ax[2])
ax[2].set_title('After Climatology Median Removed')
plt.tight_layout()
plt.show()
# -
# ## 6. EM-DAT Data
#
# I extract the dates of the drought events for California. This will allow me to separate the drought years from the non-drought years.
# !ls /media/disk/databases/SMADI/EMDAT_validation/
# +
shape_files = '/media/disk/databases/SMADI/EMDAT_validation/'
shapefiles_clf = ShapeFileExtract()
shapefiles_clf.import_shape_files(shape_files);
# +
# Extract California
query = 'LOCATION'
subqueries = ['California']
cali_droughts = shapefiles_clf.extract_queries(query=query, subqueries=subqueries)
# -
cali_droughts
# So the drought years are:
#
# **Drought Years**
#
# * 2012
# * 2014
# * 2015
#
# **Non-Drought Years**
#
# * 2010
# * 2011
# * 2013
#
# **Note**: Even though the EM-DAT data say that the 2012 drought lasted only half a year, we will treat it as a full drought year.
# +
# drought
cali_anomalies_drought = xr.concat([
cali_anomalies.sel(time=slice('2012', '2012')),
cali_anomalies.sel(time=slice('2014', '2014')),
cali_anomalies.sel(time=slice('2015', '2015')),
], dim='time')
# non-drought
cali_anomalies_nondrought = xr.concat([
cali_anomalies.sel(time=slice('2010', '2010')),
cali_anomalies.sel(time=slice('2011', '2011')),
cali_anomalies.sel(time=slice('2013', '2013')),
], dim='time')
# -
# ## 7. Extract Density Cubes
#
# In this step, we construct 'density cubes': cubes whose features are drawn from a combination of the spatial and/or temporal dimensions, so that each sample carries spatial and/or temporal context instead of a single value. In this experiment we only use temporal information. Our temporal resolution is 14 days and we want to cover at most 6 months.
#
# So:
#
# $$\Bigg\lfloor \frac{6\ \text{months}}{\frac{14\ \text{days}}{30\ \text{days}} \times 1\ \text{month}} \Bigg\rfloor = 12\ \text{time stamps}$$
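# The windowing itself is just a sliding slice over the time axis; a toy 1-D sketch (assumed semantics of `DensityCubes`):

```python
import numpy as np

series = np.arange(20, dtype=float)     # toy time series
time_window = 12

# one sample per window of 12 consecutive time stamps
cubes = np.stack([series[i:i + time_window]
                  for i in range(len(series) - time_window + 1)])
print(cubes.shape)
```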
# +
# confirm
sub_ = cali_anomalies_drought.isel(time=slice(0,12))
sub_.time[0].data, sub_.time[-1].data
# -
cali_anomalies.sel(time=slice('2012', '2012'))
# +
l1 = ['time', 'lat', 'lon', 'depth']
l2 = ['lat', 'lon', 'time']
all([i in l1 for i in l2])
# -
# So we get roughly 6 months of temporal information in our density cubes.
# #### 7.1 - Example Density Cube
# +
# example size
spatial_window = 1
time_window = 12
# initialize datacube
minicuber = DensityCubes(
spatial_window=spatial_window,
time_window=time_window
)
# initialize dataframes
drought_VOD = pd.DataFrame()
drought_LST = pd.DataFrame()
drought_NDVI = pd.DataFrame()
drought_SM = pd.DataFrame()
# Group by year and get minicubes
for iyear, igroup in cali_anomalies_drought.groupby('time.year'):
print(f"Year: {iyear}")
# get minicubes for variables
drought_VOD = drought_VOD.append(minicuber.get_minicubes(igroup.VOD))
drought_LST = drought_LST.append(minicuber.get_minicubes(igroup.LST))
drought_NDVI = drought_NDVI.append(minicuber.get_minicubes(igroup.NDVI))
drought_SM = drought_SM.append(minicuber.get_minicubes(igroup.SM))
# +
drought_VOD.shape, drought_LST.shape, drought_NDVI.shape, drought_SM.shape
| notebooks_old/drought/1.1_drought_features.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] slideshow={"slide_type": "slide"}
# # Principal Component Analysis vs.
# # Denoising Variational Autoencoders
#
# ## _Intuition, Formalism, and Examples_
# + [markdown] slideshow={"slide_type": "skip"}
# jupyter nbconvert PCAvsDVAE.ipynb --to slides --post serve
#
# jupyter-nbextension install rise --py --sys-prefix
# jupyter-nbextension enable rise --py --sys-prefix
# + [markdown] slideshow={"slide_type": "slide"}
# # An intuitive perspective ...
#
# #### "... realistic, high-dimensional data concentrate near a nonlinear, low-dimensional manifold ..." [Lei et al., 2018]
#
# 
#
# #### But how does one learn the manifold and the probability distribution on it?
# + [markdown] slideshow={"slide_type": "slide"}
# 
# -
# # Evaluating PCA and DVAE through examples
#
# The MNIST database of handwritten digits has a training set of 60,000 examples and a test set of 10,000 examples. It is a subset of a larger set available from NIST. The digits have been size-normalized and centered in a fixed-size image.
#
# 
#
# +
from __future__ import print_function
import numpy as np
import matplotlib.pyplot as plt
from scipy.stats import norm
import tensorflow as tf
from keras.layers import Input, Dense, Lambda
from keras.models import Model
from keras import backend as K
from keras import metrics
from keras.datasets import mnist
from keras.layers import Dense, Dropout, Flatten
from keras.layers import Conv2D, MaxPooling2D
from keras.layers import Conv2DTranspose,Reshape
from sklearn.decomposition import PCA
import os
# %matplotlib inline
os.environ["PATH"] += os.pathsep + 'C:/Program Files (x86)/Graphviz2.38/bin/'
# analytical PCA of the training set
def analytical_pca(y):
pca = PCA(0.7)
pca.fit(y)
loadings = pca.components_
components = pca.transform(y)
filtered = pca.inverse_transform(components)
return filtered
# training params for the example
num_train = 50000
n_images = 6
batch_size = 256
original_dim = 784
latent_dim = 8
epochs = 10
epsilon_std = 1.0
noise_factor = 0.5
# train the VAE on MNIST digits
(x_train, y_train), (x_test, y_test) = mnist.load_data()
# prepare data for PCA
shape_x_train = x_train.shape
pcaInput = np.reshape(x_train,[shape_x_train[0],shape_x_train[1]*shape_x_train[2]]).astype('float32')/255
# prepare data for DVAE
train_num=50000
x_train = x_train.astype('float32') / 255.
x_test = x_test.astype('float32') / 255.
x_train = x_train.reshape((len(x_train), 28,28,1))
x_test = x_test.reshape((len(x_test), 28,28,1))
noise_train = x_train + noise_factor * np.random.randn(*x_train.shape)
noise_test = x_test + noise_factor * np.random.randn(*x_test.shape)
# Clip the images to be between 0 and 1
noise_train = np.clip(noise_train, 0., 1.)
noise_test = np.clip(noise_test, 0., 1.)
# display
showidx=np.random.randint(0,num_train,n_images)
# precalculate PCA
pcaOutput = analytical_pca(pcaInput)
# display input, noisy input and PCA reconstruction of the noisy input
figure = np.zeros((28 * 3, 28 * n_images))  # canvas: 3 rows of n_images digits
for i, idx in enumerate(showidx):
figure[0: 28,i *28: (i + 1) * 28] = np.reshape(x_train[idx], [28, 28])
figure[28: 56,i *28: (i + 1) * 28] = np.reshape(noise_train[idx], [28, 28])
figure[28 * 2: 28 * 3,i *28: (i + 1) * 28] = np.reshape(pcaOutput[idx], [28, 28])
plt.figure(figsize=(28*3, 28*n_images))
plt.imshow(figure, cmap='Greys_r')
plt.show()
#encoder part
x_noise = Input(shape=(28,28,1))
conv_1 = Conv2D(64,(3, 3), padding='valid',activation='relu')(x_noise)
conv_2 = Conv2D(64,(3, 3), padding='valid',activation='relu')(conv_1)
pool_1 = MaxPooling2D((2, 2))(conv_2)
conv_3 = Conv2D(32,(3, 3), padding='valid',activation='relu')(pool_1)
pool_2 = MaxPooling2D((2, 2))(conv_3)
h=Flatten()(pool_2)
z_mean = Dense(latent_dim)(h)
z_log_var = Dense(latent_dim)(h)
#reparameterization trick
def sampling(args):
z_mean, z_log_var = args
epsilon = K.random_normal(shape=(K.shape(z_mean)[0], latent_dim), mean=0.,
stddev=epsilon_std)
return z_mean + K.exp(z_log_var / 2) * epsilon
# note that "output_shape" isn't necessary with the TensorFlow backend
z = Lambda(sampling, output_shape=(latent_dim,))([z_mean, z_log_var])
#decoder part
# we instantiate these layers separately so as to reuse them later
z=Reshape([1,1,latent_dim])(z)
conv_0T = Conv2DTranspose(128,(1, 1), padding='valid',activation='relu')(z)#1*1
conv_1T = Conv2DTranspose(64,(3, 3), padding='valid',activation='relu')(conv_0T)#3*3
conv_2T = Conv2DTranspose(64,(3, 3), padding='valid',activation='relu')(conv_1T)#5*5
conv_3T = Conv2DTranspose(48,(3, 3), strides=(2, 2),padding='same',activation='relu')(conv_2T)#10*10
conv_4T = Conv2DTranspose(48,(3, 3), padding='valid',activation='relu')(conv_3T)#12*12
conv_5T = Conv2DTranspose(32,(3, 3), strides=(2, 2),padding='same',activation='relu')(conv_4T)#24*24
conv_6T = Conv2DTranspose(16,(3, 3), padding='valid',activation='relu')(conv_5T)#26*26
x_out = Conv2DTranspose(1,(3, 3), padding='valid',activation='sigmoid')(conv_6T)#28*28
# instantiate VAE model
vae = Model(x_noise, x_out)
vae.summary()
# Compute VAE loss
def VAE_loss(x_origin,x_out):
x_origin=K.flatten(x_origin)
x_out=K.flatten(x_out)
xent_loss = original_dim * metrics.binary_crossentropy(x_origin, x_out)
kl_loss = - 0.5 * K.sum(1 + z_log_var - K.square(z_mean) - K.exp(z_log_var), axis=-1)
vae_loss = K.mean(xent_loss + kl_loss)
return vae_loss
vae.compile(optimizer='adam', loss=VAE_loss)
vae.fit(noise_train,x_train,
shuffle=True,
epochs=epochs,
batch_size=batch_size,
validation_data=(noise_test, x_test))
digit_size = 28
figure = np.zeros((digit_size * 4, digit_size * n_images))
num_test=10000
showidx=np.random.randint(0,num_test,n_images)
x_out=vae.predict(x_test[showidx])
# Display
for i,idx in enumerate (showidx):
figure[0: 28,i *28: (i + 1) * 28] = np.reshape(x_test[idx], [28, 28])
figure[28: 28 * 2,i *28: (i + 1) * 28] = np.reshape(noise_test[idx], [28, 28])
figure[28 * 2: 28 * 3,i *28: (i + 1) * 28] = np.reshape(x_out[i], [28, 28])
figure[28 * 3: 28 * 4,i *28: (i + 1) * 28] = np.reshape(pcaOutput[idx], [28, 28])
plt.figure(figsize=(28 * 4, 28*n_images))
plt.imshow(figure, cmap='Greys_r')
plt.savefig('result_keras_VAE.png')
plt.show()
# +
# https://github.com/dojoteef/dvae
# + [markdown] slideshow={"slide_type": "slide"}
# # Principal Component Analysis (PCA)
# * __unsupervised__ learning
# * __linear transformation__
# * "encode" a set of observations to a new coordinate system in which the values of the first coordinate (component) have the largest possible variance [Friedman et al., 2017]
# * the resulting coordinates (components) are uncorrelated with the preceding coordinates
# * practically computing
# * __eigendecomposition of the covariance matrix__
# * __singular value decomposition__ of the observations
# * used for __dimensionality reduction__
# * __reconstructions of the observations__("decoding") from the leading __principal components__ have the __least total squared error__
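# These properties can be checked numerically with an SVD-based PCA sketch in NumPy:

```python
import numpy as np

rng = np.random.default_rng(0)
Y = rng.normal(size=(100, 5))              # 100 observations, 5 variables
Y0 = Y - Y.mean(axis=0)                    # center the data

U, S, Vt = np.linalg.svd(Y0, full_matrices=False)
m = 2
P_m = Vt[:m].T                             # leading loading vectors
X_m = Y0 @ P_m                             # principal components (scores)
Y_rec = X_m @ P_m.T                        # least-squares reconstruction

cov = X_m.T @ X_m / (len(X_m) - 1)         # covariance of scores: diagonal
print(np.round(cov, 6))
```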
# + [markdown] slideshow={"slide_type": "slide"}
# ## Basic Mathematics of PCA
#
# ### Linear Transformation
#
# * Let $\{y_i\}^N_{i=1}$ be a set of $N$ observations vectors, each of size $n$, with $n\leq N$.
#
# * A __linear transformation__ on a finite dimensional vector can be expressed as a __matrix multiplication__:
#
# $$ \begin{align} x_i = W^T y_i \end{align} $$
#
# where $y_i \in R^{n}, x_i \in R^{m}$ and $W \in R^{nxm}$.
#
# * Each $j-th$ element in $x_i$ is the __inner product__ between $y_i$ and the $j-th$ column in $W$, denoted as $w_j$. Let $Y \in R^{nxN}$ be a matrix obtained by horizontally concatenating $\{y_i\}^N_{i=1}$,
#
# $$ Y = \begin{bmatrix} | ... | \\ y_1 ... y_N \\ | ... | \end{bmatrix} $$
#
# * Given the __linear transformation__, it is clear that:
#
# $$ X = W^TY, X_0 = W^TY_0, $$
#
# where $Y_0$ is the matrix of centered observations (i.e. the mean is subtracted from each observation).
# + [markdown] slideshow={"slide_type": "slide"}
# ### Maximum-Variance Components, Covariance, and Decorrelation
# * In particular, when $W^T$ represents the __transformation applying Principal Component Analysis__, we denote $W = P$. Each column of $P$, denoted $\{p_j\}^n_{j=1}$ is a __loading vector__, whereas each transformed vector $\{x_i\}^N_{i=1}$ is a __principal component__.
#
# 
#
# * The first loading vector is the unit vector with which the inner products of the observations have the __greatest variance__:
#
# $$ \max w_1^T Y_0Y_0^Tw_1, w_1^Tw_1 = 1$$
#
# * The solution of the previous equation is the first eigenvector of the __sample covariance matrix__ $Y_0Y_0^T$ corresponding to the largest eigenvalue.
#
# * Matrix $P$ can be calculated by diagonalizing the covariance matrix:
#
# $$ Y_0Y_0^T = P \Lambda P^{-1} = P \Lambda P^T $$
#
# $\Lambda$ is a diagonal matrix whose diagonal elements $\{\lambda_i\}^n_{i=1}$ are sorted in descending order. The inverse transformation is $ Y = PX $. Since the covariance matrix of $X$ is diagonal, PCA is a __decorrelation transformation__.
# + [markdown] slideshow={"slide_type": "slide"}
# ### Dimensionality Reduction, Compression, Scaling
# PCA is used for __dimensionality reduction__ due to its capability to __reduce the number of variables__ through a linear transformation. This is done by keeping the first $m$ principal components $(m < n)$ and applying:
#
# $$ X_m = P_m^TY$$
#
# Keeping only the $m$ principal components, PCA __loses information__ (i.e. __lossy compression__), but the __loss is minimized__ by __maximizing the components variances__.
#
# Many __iterative algorithms__ can be used for finding the $m$ largest eigenvalues of $Y_0Y_0^T$
# * QR algorithm
# * Jacobi algorithm
# * power method
#
# For __large datasets__ such algorithms are __prohibitive__.
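# Of these iterative options, the power method is the simplest to sketch (no deflation or stopping tolerance here, so it only finds the leading eigenpair):

```python
import numpy as np

def power_method(A, n_iter=200):
    """Leading eigenvector/eigenvalue of a symmetric matrix A."""
    v = np.ones(A.shape[0]) / np.sqrt(A.shape[0])
    for _ in range(n_iter):
        v = A @ v                  # repeatedly apply A ...
        v /= np.linalg.norm(v)     # ... and renormalize
    return v, v @ A @ v            # eigenvector, Rayleigh quotient

A = np.array([[4.0, 1.0], [1.0, 3.0]])     # toy covariance-like matrix
v, lam = power_method(A)
print(lam)
```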
# + [markdown] slideshow={"slide_type": "slide"}
# ### Minimum Total Squared Reconstruction Error
#
# The transformation matrix $P_m$ can be also computed as a solution of:
#
# $$ \min_{W \in R^{nxm}} \| Y_0 - WW^TY_0 \|_F^2, W^TW = I_{mxm}$$
#
# where $\| \cdot \|_F$ is the Frobenius norm.
#
# This shows that $P_m$ __compresses each centered vector__ of length $n$ into a vector of length $m$ where ($ m < n$) such that it __minimizes__ the sum of total __squared reconstruction errors__ (i.e. __inverse transformation__).
# + [markdown] slideshow={"slide_type": "slide"}
# ### Singular Value Decomposition (SVD)
#
# The matrix $Y_0 \in R^{nxN}$ can be __factorized__ as $Y_0 = U \Sigma V^T$ where $U \in R^{nxn}$ and $V \in R^{NxN}$ are __orthogonal matrices__ and $\Sigma \in R^{nxN}$ has non-zero elements only on the diagonal (i.e. __singular values__).
#
# The SVD of $Y_0$ is equivalent to the __eigendecomposition__ of $Y_0Y_0^T$.
#
# 
#
# A (non-zero) vector v of dimension N is an __eigenvector__ of a square N × N matrix A if it satisfies the __linear equation__
#
# $$Av =\lambda v$$
#
# where $λ$ is a scalar, termed the __eigenvalue corresponding to v__.
# + [markdown] slideshow={"slide_type": "slide"}
# # Autoencoders
# * unsupervised neural network
# * minimize the error of reconstructions of observations [Goodfellow et al., 2016]
# * basically responsible to learn the identity function
# * training with backpropagation and then separate to implement the encoding / deconding
#
# A typical __autoencoder pipeline__ looks as
#
# 
# + [markdown] slideshow={"slide_type": "slide"}
# ## Basic Mathematics of Autoencoders
#
# For each input vector $x$ of dimension $d$ out of the entire dataset of size $n$, the network tries to reconstruct $x'$, by:
# * first encoding the input (i.e. applying linear / nonlinear transformation $g_\phi(.)$)
# * obtaining a latent, compressed code in the bottleneck layer, $z$, and
# * decoding the compressed input at the output using linear / nonlinear transformation $f_\theta(.)$
#
# The parameters $(\theta, \phi)$ are learned together to output a reconstructed data sample similar to the input, $x \approx f_\theta(g_\phi(x))$, in other words the identity function.
#
# There are multiple metrics to quantify the difference: cross-entropy when the output activation is a sigmoid, or the simple Mean Squared Error (MSE):
#
# $$ \frac{1}{n} \sum_{i=1}^{n}\left(x^{i} - f_\theta(g_\phi(x^{i}))\right)^2$$
#
# 
# -
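# The encode / compress / decode loop above can be sketched with a purely linear autoencoder in NumPy. This is a minimal illustration rather than the full pipeline from the figure; the data, layer sizes, learning rate, and iteration count are arbitrary assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 8))            # 200 samples, input dimension d = 8

d, k, lr = 8, 3, 0.01                        # input dim, bottleneck dim, learning rate
W_enc = 0.1 * rng.standard_normal((d, k))    # encoder g_phi (linear, no bias for brevity)
W_dec = 0.1 * rng.standard_normal((k, d))    # decoder f_theta

def mse(X, W_enc, W_dec):
    Z = X @ W_enc                            # latent code z in the bottleneck
    X_rec = Z @ W_dec                        # reconstruction x'
    return np.mean((X - X_rec) ** 2)

loss_start = mse(X, W_enc, W_dec)
for _ in range(500):                         # plain gradient descent on both weight matrices
    Z = X @ W_enc
    G = (2.0 / X.size) * ((Z @ W_dec) - X)   # dLoss / dX_rec
    grad_dec = Z.T @ G
    grad_enc = X.T @ (G @ W_dec.T)
    W_dec -= lr * grad_dec
    W_enc -= lr * grad_enc
loss_end = mse(X, W_enc, W_dec)
assert loss_end < loss_start                 # reconstruction error shrinks with training
```

With a single linear hidden layer and squared error, the learned weights end up spanning the principal subspace, which is the connection to PCA discussed next.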
# # PCA vs. Autoencoders
#
# * an autoencoder with a single fully-connected hidden layer, a linear activation function and a squared error cost function is closely related to PCA - its weights span the principal subspace [Plaut, 2018]
# * in variational autoencoders, the diagonal approximation in the encoder together with the inherent stochasticity forces local orthogonality of the decoder
# * this local behavior of promoting both reconstruction and orthogonality closely matches how the PCA embedding is chosen [Rolinek et al., 2019]
# * the difference is that, unlike PCA, in autoencoders the coordinates of the bottleneck output are correlated and are not sorted in descending order of variance!
# + [markdown] slideshow={"slide_type": "slide"}
# # Denoising Variational Autoencoders (DVAE)
#
# The operation process is __different__ from the basic autoencoder in that __noise__ is injected into the input (with a certain probability distribution) and the __latent space__ needs to recover this distribution in order to __reconstruct__ the original input [Im, Bengio et al., 2017; Kingma et al., 2013].
#
# For each corrupted input vector $\tilde x$ of a clean vector $x$ of dimension $d$, the network tries to reconstruct $x'$, by:
# * first encoding the input, representing the mapping as the probability of estimating $z$ given the input, knowing the parameters of the noise ($\mu$ and $\sigma$) to allow tractability of the posterior calculation
# * obtaining a latent, compressed code in the bottleneck layer, $z$, sampled from $q_\phi(z|x)$
# * decoding the compressed input at the output given the observation model $p_\theta(x|z)$
#
# 
# -
# ## Basic Mathematics of the DVAE
#
# The loss function aims to recover the original input (__not the corrupted one__), where $\tilde{x}^{i} \sim M(\tilde{x}^{i} | x^{i})$ is the corruption process. Using the simple Mean Squared Error (MSE):
#
# $$ \frac{1}{n} \sum_{i=1}^{n}\left(x^{i} - f_\theta(g_\phi(\tilde{x}^{i}))\right)^2$$
#
# The expectation term in the loss function involves generating samples $z \sim q_\phi(z|x)$. Sampling is a stochastic process, so we cannot backpropagate the gradient through it directly.
#
# The estimated posterior $q_\phi(z|x)$ should be very close to the real one, $p_\theta(z|x)$. We can use the Kullback-Leibler divergence to quantify the distance between these two distributions. The KL divergence $D_{KL}(X \| Y)$ measures how much information is lost if the distribution $Y$ is used to represent $X$.
#
# In Variational Bayesian methods, this loss function is known as the variational lower bound, or evidence lower bound (ELBO). The "lower bound" part of the name comes from the fact that the KL divergence is always non-negative, so the loss is a lower bound of $\log p_\theta(x)$.
#
# $$ \log p_\theta(x) - D_{KL}(q_\phi(z|x) \| p_\theta(z|x)) \leq \log p_\theta(x) $$
#
# Therefore by minimizing the loss, we are maximizing the lower bound of the probability of generating real data samples.
#
# https://github.com/dojoteef/dvae
#
# https://github.com/block98k/Denoise-VAE
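# The sampling step $z \sim q_\phi(z|x)$ and the KL term can be sketched in NumPy. This is a minimal illustration of the "reparametrization trick", assuming a diagonal Gaussian posterior and a standard normal prior; the example $\mu$ and $\log\sigma^2$ values are made up:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical encoder outputs for a batch of two inputs:
# mean and log-variance of the diagonal Gaussian q_phi(z|x)
mu = np.array([[0.5, -1.0], [0.0, 2.0]])
log_var = np.array([[0.0, -0.5], [0.2, 0.1]])

# Reparametrization trick: z = mu + sigma * eps with eps ~ N(0, I).
# The stochasticity is moved into eps, so gradients can flow through mu and sigma.
eps = rng.standard_normal(mu.shape)
z = mu + np.exp(0.5 * log_var) * eps

# Closed-form KL( q_phi(z|x) || N(0, I) ) for a diagonal Gaussian, one value per sample
kl = 0.5 * np.sum(np.exp(log_var) + mu**2 - 1.0 - log_var, axis=1)
assert np.all(kl >= 0)   # KL divergence is always non-negative
```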
# # Overall comparison PCA vs. DVAE
#
# ### Manifold learning
#
# | __PCA__ | __DVAE__ |
# |-----|------|
# | linear encoding/decoding, without noise robustness | nonlinear, probabilistic encoding/decoding with (input / hidden layer) noise robustness and nonlinear activation functions|
# | decorrelated coordinates of the latent space (whitening transformation) | correlated outputs of bottleneck (decoding input) |
# | coordinates of the latent space are in descending order of variance | coordinates are not sorted |
# | columns of transformation matrix are orthonormal | columns of transformation matrix not necessarily orthonormal |
# | robust on moderate noise with known distributions | robust to various types (masking noise, Gaussian noise, salt-and-pepper noise) and quantities of stochastic injected noise (denoising important for generalization performance) |
# | basic algorithm (without regularization) low robustness | points in low-dimensional manifold robust to noise in the high-dimensional observation space |
# + [markdown] slideshow={"slide_type": "slide"}
# # Overall comparison PCA vs. DVAE
#
# ### Training
#
# | __PCA__ | __DVAE__ |
# |-----|------|
# | map input to a fixed vector | map input to a distribution |
# | iterative methods: QR decomposition, Jacobi algorithm, SVD | backpropagation |
# | inefficient on large datasets due to covariance calculation | efficient on large datasets due to strong manifold learning |
# | based on correlation/covariance matrix, which can be - at least in theory - very sensitive to outliers | can sample directly from the input space and describe the input noise properties ("reparametrization trick") |
# + [markdown] slideshow={"slide_type": "slide"}
# # References and further reading
# [Goodfellow et al., 2016] <NAME>, <NAME> and <NAME>, Deep Learning, MIT Press, 2016.
#
# [Friedman et al., 2017] <NAME>, <NAME>, and <NAME>, The Elements of Statistical Learning: Data Mining, Inference, and Prediction, Springer, 2017.
#
# [Plaut, 2018] <NAME>., 2018. From principal subspaces to principal components with linear autoencoders. arXiv preprint arXiv:1804.10253.
#
# [<NAME> et al., 2017] <NAME>., <NAME>., <NAME>. and <NAME>., 2017, February. Denoising criterion for variational auto-encoding framework. In Thirty-First AAAI Conference on Artificial Intelligence.
#
# [Rolinek et al, 2019] <NAME>., <NAME>. and <NAME>., 2019. Variational Autoencoders Pursue PCA Directions (by Accident). In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (pp. 12406-12415).
#
# [Lei et al., 2018] <NAME>., <NAME>., <NAME>. and <NAME>., 2018. Geometric understanding of deep learning. arXiv preprint arXiv:1805.10451.
#
# [Kingma et al., 2013] <NAME>. and <NAME>., 2013. Auto-encoding variational bayes. arXiv preprint arXiv:1312.6114.
| PCAvsDVAE.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernel_info:
# name: python3
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ### Note
# * Instructions have been included for each segment. You do not have to follow them exactly, but they are included to help you think through the steps.
# +
# Dependencies and Setup
import pandas as pd
# File to Load (Remember to Change These)
file_to_load = "purchase_data.csv"
# Read Purchasing File and store into Pandas data frame
purchase_data = pd.read_csv(file_to_load)
purchase_data
# -
# ## Player Count
# * Display the total number of players
#
# .value_counts() to find unique values in usernames for total player count
users = len(purchase_data['SN'].value_counts())
# Store Total Unique Users Above In Pandas DataFrame for Viewing
player_co = pd.DataFrame({"Total Users": [users]}, index=[0])
# Show DataFrame
player_co
# ## Purchasing Analysis (Total)
# * Run basic calculations to obtain number of unique items, average price, etc.
#
#
# * Create a summary data frame to hold the results
#
#
# * Optional: give the displayed data cleaner formatting
#
#
# * Display the summary data frame
#
# +
# Gathering Number of Items and Filtering Out Duplicates
# Number of Unique Items
unique_items = len(purchase_data['Item Name'].value_counts())
# Total Number of Sales
total_sales = purchase_data['Purchase ID'].count()
# Sum of All Sales
sum_sales = purchase_data['Price'].sum()
# Average Price of Purchase in In-Game-Shop
avg_purchase = purchase_data['Price'].mean()
# Statistics On The Total Sales
purchase_data['Price'].describe()
# +
# Summary DataFrame for Purchases
purchase_summary = pd.DataFrame({"Amount of Unique Items": [unique_items],
"Total Number of Sales": [total_sales],
"Total Sale Revenue": [sum_sales],
"Average Purchase Amount": [avg_purchase]})
purchase_summary['Average Purchase Amount'] = purchase_summary['Average Purchase Amount'].map("${:.2f}".format)
purchase_summary['Total Sale Revenue'] = purchase_summary['Total Sale Revenue'].map("${:.2f}".format)
purchase_summary
# -
# ## Gender Demographics
# * Percentage and Count of Male Players
#
#
# * Percentage and Count of Female Players
#
#
# * Percentage and Count of Other / Non-Disclosed
#
#
#
# +
# Group Genders Together
gender_group_df = purchase_data.groupby(['Gender'])
# .nunique() counts distinct values per column within each gender group
unique_genders = gender_group_df.nunique()
# Total Unique Players Across All Genders
total_gender = unique_genders['SN'].sum()
count_gender = unique_genders['SN']
percent_gender = unique_genders['SN']/ total_gender
# Create Summary DataFrame
gender_dem_summary = pd.DataFrame({"Genders Count": count_gender, "Percentage of Player": percent_gender})
# Formatting
gender_dem_summary['Percentage of Player'] = gender_dem_summary['Percentage of Player'].map("{:,.2%}".format)
# Show Summary
gender_dem_summary
# -
#
# ## Purchasing Analysis (Gender)
# * Run basic calculations to obtain purchase count, avg. purchase price, avg. purchase total per person etc. by gender
#
#
#
#
# * Create a summary data frame to hold the results
#
#
# * Optional: give the displayed data cleaner formatting
#
#
# * Display the summary data frame
# +
#Purchase Count
purchase_count = gender_group_df['Gender'].count()
#Average Purchase Price
average_price = gender_group_df["Price"].mean()
#Total Purchase Value
purchase_price = gender_group_df["Price"].sum()
#Normalized Totals
normalized = purchase_price / count_gender
#Create new dataframe
gender_analysis = pd.DataFrame({"Average Purchase Price": average_price,"Total Sale Revenue":purchase_price,
"Unique User Avg. Purchase": normalized, "Total Sales": purchase_count})
#Clean up formatting and reorder columns
gender_analysis["Average Purchase Price"] = gender_analysis["Average Purchase Price"].map("${:,.2f}".format)
gender_analysis["Total Sale Revenue"] = gender_analysis["Total Sale Revenue"].map("${:,.2f}".format)
gender_analysis["Unique User Avg. Purchase"] = gender_analysis["Unique User Avg. Purchase"].map("${:,.2f}".format)
#Reorder Columns
gender_analysis = gender_analysis[["Average Purchase Price","Unique User Avg. Purchase", "Total Sales", "Total Sale Revenue"]]
gender_analysis
# -
# ## Age Demographics
# * Establish bins for ages
#
#
# * Categorize the existing players using the age bins. Hint: use pd.cut()
#
#
# * Calculate the numbers and percentages by age group
#
#
# * Create a summary data frame to hold the results
#
#
# * Optional: round the percentage column to two decimal points
#
#
# * Display Age Demographics Table
#
# +
# Create array for age and group numbers
age_bin = [0, 9.90, 14.90, 19.90, 24.90, 29.90, 34.90, 39.90, 9999]
age_label_groups = ["<10", "10-14", "15-19", "20-24", "25-29", "30-34", "35-39", "40+"]
# Add Column to Purchase Data for Age Group Criterion
purchase_data["Age Group"] = pd.cut(purchase_data["Age"],age_bin, labels=age_label_groups)
# Create a group based on newly appended age group column
age_groups = purchase_data.groupby("Age Group")
# Count total players by age category - filter out duplicate values
total_count_age = age_groups['SN'].nunique()
# Calculate Percentages by Age Category Using the Previously Declared 'users'
percentage_by_age = (total_count_age / users) * 100
# Create DataFrame with Obtained Values
age_demographics = pd.DataFrame({"Total Count":total_count_age, "Percentage of Players": percentage_by_age})
# Format Data Frame to Remove Index Name
age_demographics.index.name = None
# Format Percentages
age_demographics = age_demographics.style.format({"Percentage of Players":"{0:.2f}%"})
# Show Data
age_demographics
# -
# ## Purchasing Analysis (Age)
# * Bin the purchase_data data frame by age
#
#
# * Run basic calculations to obtain purchase count, avg. purchase price, avg. purchase total per person etc. in the table below
#
#
# * Create a summary data frame to hold the results
#
#
# * Optional: give the displayed data cleaner formatting
#
#
# * Display the summary data frame
# +
# Purchase Count for Ages
purchase_count_age = age_groups['Purchase ID'].count()
# Avg. Purchase Price by Age
avg_purchase_price_age = age_groups["Price"].mean()
# Avg. Purchase Total
total_purchase_value = age_groups["Price"].sum()
# Average Purchase Per Age Group
avg_purchase_per_age = total_purchase_value / total_count_age
# DataFrame for Age Summary
sale_age_demographics = pd.DataFrame({"Total Sales": purchase_count_age,
"Avg. Sale Price": avg_purchase_price_age,
"Total Sale Amount":total_purchase_value,
"Avg. Sale Total per Person": avg_purchase_per_age})
sale_age_demographics.index.name = None
# Format with currency style
sale_age_demographics = sale_age_demographics.style.format({"Avg. Sale Price":"${:,.2f}","Avg. Sale Total per Person":"${:,.2f}","Total Sale Amount":"${:,.2f}"})
# Show Summary
sale_age_demographics
# -
# ## Top Spenders
# * Run basic calculations to obtain the results in the table below
#
#
# * Create a summary data frame to hold the results
#
#
# * Sort the total purchase value column in descending order
#
#
# * Optional: give the displayed data cleaner formatting
#
#
# * Display a preview of the summary data frame
#
#
# +
# Group purchase data
user_sale_stats = purchase_data.groupby('SN')
# Total count of purchase by name
sales_spender_count = user_sale_stats['Purchase ID'].count()
# Calculate Purchase Totals
total_sales_spender_count = user_sale_stats["Price"].mean()
# Calculate Purchase Total
total_sales_spender = user_sale_stats["Price"].sum()
# Create DataFrame
top_spenders = pd.DataFrame({"Purchase Count":sales_spender_count,
"Avg. Purchase Price": total_sales_spender_count,
"Total Sales": total_sales_spender})
# Sort descending by purchase count, grab head for top 5
formatted_spender_df = top_spenders.sort_values(["Purchase Count"], ascending=False).head()
# Format with Currency Style
formatted_spender_df = formatted_spender_df.style.format({"Avg. Purchase Price":"${:,.2f}", "Total Sales":"${:,.2f}"})
#Show DataFrame
formatted_spender_df
# -
# ## Most Popular Items
# * Retrieve the Item ID, Item Name, and Item Price columns
#
#
# * Group by Item ID and Item Name. Perform calculations to obtain purchase count, average item price, and total purchase value
#
#
# * Create a summary data frame to hold the results
#
#
# * Sort the purchase count column in descending order
#
#
# * Optional: give the displayed data cleaner formatting
#
#
# * Display a preview of the summary data frame
#
#
# +
# Create new data frame with items related information
items = purchase_data[["Item ID", "Item Name", "Price"]]
# Group the item data by item id and item name
item_stats = items.groupby(["Item ID","Item Name"])
# Count the number of purchases per item
purchase_count_item = item_stats["Price"].count()
# Calculate the total sale value per item
purchase_value = (item_stats["Price"].sum())
# Find each item price
item_price = purchase_value/purchase_count_item
# Create DataFrame
most_popular_items = pd.DataFrame({"Purchase Count": purchase_count_item,
"Item Price": item_price,
"Total Purchase Value":purchase_value})
# Sort DataFrame
popular_formatted = most_popular_items.sort_values(["Purchase Count"], ascending=False).head()
# Format
popular_formatted = popular_formatted.style.format({"Item Price":"${:,.2f}",
"Total Purchase Value":"${:,.2f}"})
# Show
popular_formatted
# -
# ## Most Profitable Items
# * Sort the above table by total purchase value in descending order
#
#
# * Optional: give the displayed data cleaner formatting
#
#
# * Display a preview of the data frame
#
#
# Take the most_popular items data frame and change the sorting to find highest total purchase value
popular_formatted = most_popular_items.sort_values(["Total Purchase Value"],
ascending=False).head()
# Format
popular_formatted = popular_formatted.style.format({"Item Price":"${:,.2f}",
"Total Purchase Value":"${:,.2f}"})
# Show
popular_formatted
| HeroesOfPymoli_game_analysis.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python [default]
# language: python
# name: python3
# ---
# +
import numpy as np, pandas as pd
from matplotlib import pyplot as plt
# %matplotlib inline
# +
blue_features = pd.read_csv('bright_blue_features.csv')
bright_sample = pd.read_csv('../data/bright_sample/bright_clean_w1w2_gt3.csv.gz')
blue_sample = bright_sample[bright_sample['phot_bp_mean_mag'] - bright_sample['phot_rp_mean_mag'] < 1]
# +
blue_good = blue_features.dropna()
blue = pd.merge(blue_sample,blue_good,left_on='original_ext_source_id',right_on='Name')
# -
blue.columns.values
blue_data = blue.drop(['source_id', 'original_ext_source_id', 'allwise_oid',
'designation', 'ra', 'dec', 'parallax_error',
'a_g_val', 'phot_g_mean_mag', 'w1mpro', 'w1mpro_error',
'w2mpro','w2mpro_error', 'Unnamed: 0','Name'], axis=1)
# +
# Let's try a dimensionality reduction on the data
from sklearn.decomposition import PCA
from sklearn.model_selection import GridSearchCV
from sklearn.model_selection import cross_val_score
X = blue_data.values
n_components = np.arange(0, X.shape[1], 5) # options for n_components
def compute_scores(X):
pca = PCA(svd_solver='full',whiten=True)
pca_scores = []
for n in n_components:
pca.n_components = n
pca_scores.append(np.mean(cross_val_score(pca, X)))
return pca_scores
pca_scores = compute_scores(X)
# -
pca_scores
pca = PCA(whiten=True,n_components=10,svd_solver='full')
X_t = pca.fit(X).transform(X)
plt.scatter(X_t[:,0],X_t[:,1],s=0.1)
#plt.xlim(-0.1,.25)
pca.components_
# +
from sklearn.cluster import KMeans
km = KMeans(init='k-means++', n_clusters=10)
km.fit(X_t)
# +
plt.scatter(blue['phot_bp_mean_mag']-blue['phot_rp_mean_mag'],blue['M_G'],
c=['C{0}'.format(n) for n in km.labels_],s = 1)
plt.xlim(-0.21,1)
plt.ylim(0,-10)
plt.xlabel('G_BP - G_RP')
plt.ylabel('G')
# -
plt.scatter(blue['w1mpro']-blue['w2mpro'],blue['w1mpro'] + 5 * np.log10(blue['parallax']) - 10,
c=['C{0}'.format(n) for n in km.labels_],s = 1,cmap='Spectral')
plt.ylim(1,-18)
plt.xlabel('W1 - W2')
plt.ylabel('M_W1')
| WISE/code/old/blue_features_clustering.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Multi-Variable Linear Regression
# In this document I will explain the code I wrote to implement a multivariable linear regression algorithm. I'll try my best to explain how multivariable linear regression works.
# ## Some notation
# $ m= $ number of training examples (data entries)\
# $ n= $ number of features\
# $ y= $ column matrix of outputs\
# $ x^{i}= $ matrix of input features of the $ i^{th} $ training example\
# $ x^{i}_{j}= $ value of feature j in the ith training example
#
import csv
import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
from matplotlib import cm
# ## Dataset
# The dataset I used is from a course on machine learning. It has two features and 47 entries. The dataset is small, which was one of the reasons I chose it, as it makes testing code much quicker. I needed to load the dataset into a format that I could use, so I made a function that separates the features and the outputs into two different numpy arrays and adds an extra feature $ x_0=1 $ to each entry, the reason for which I will explain in a minute.
def openCsv(file):
a = open(file)
b = [row for row in csv.reader(a)]
c = [[b[i][-1]] for i in range(len(b))]
for i in range(len(b)):
del b[i][-1]
b[i].insert(0, 1)
X = np.array(b, dtype=np.float64)
y = np.array(c, dtype=np.float64)
return X, y
X, y = openCsv("Data_2.csv")
# Basic feature scaling was applied, in which each value of a feature is divided by the largest value in its column so that all values lie between 0 and 1 ($ x_0 $ is excluded from this process, and the outputs $y$ are scaled the same way). This makes the training process a lot faster.
# Scale the outputs once, then scale each feature column (skipping x_0)
y_max = np.max(y)
y /= y_max
X_max = []
for i in range(1, np.size(X, axis=1)):
    X_max.append(np.max(X[:, i]))
    X[:, i] /= X_max[i - 1]
# ## Hypothesis
# The hypothesis function is what is used to make a prediction, letting $ \theta $ be the parameters and $ X $ the matrix of a training example's features (including $ x_0 $).
# $$
# \theta = \begin{bmatrix} \theta_0 \\ \theta_1 \\ \vdots \\ \theta_n \end{bmatrix}
# $$
# $$
# X = \begin{bmatrix} x_0 \\ x_1 \\ \vdots \\ x_n \end{bmatrix}
# $$
# The function multiplies the transpose of the $ \theta $ matrix and post multiplies it by the matrix $ X $ which leads to a multivariable linear equation.
# $$
# h_{\theta}(x^i) = \theta^T X = \theta_0 x_0 + \theta_1 x_1 + \cdots + \theta_n x_n
# $$
# For this to be compatible with feature scaling, the function first scales the input by dividing each feature by the maximum value of its column. While the data is still scaled this divides by one, but once we reverse the feature scaling it scales new inputs back down. The hypothesis function is then applied, and the result is multiplied by the maximum value of $y$, which reverts the output scaling. Initially the prediction is going to be completely wrong, but that's why we train it. \
# Since this is a multivariable function, plotting is usually not possible, as the graph would most likely be in dimensions higher than the third. In this case it would be a 3D plot, which is possible, but I need to figure out how to implement that and learn how to use pyplot a bit better.
# +
thetas = np.abs(np.random.standard_normal((X.shape[-1], 1)))
def hypothethis(data_in):
p = data_in[1:] / np.max(X[:, 1:], axis=0)
p = np.insert(p, 0, 1)
pred = np.dot(thetas.T, p.reshape((len(data_in), 1)))
return pred * np.max(y)
# -
# ## Training
# The process of training is to make the model fit the data as best as possible by adjusting the parameters. \
# Before the model can be trained it must first know how badly it is doing and how it can improve. A cost function is used to measure its performance. In this case we take the squared error of every training example and average them; the result is multiplied by a half to make later computation a bit easier.
# $$
# J(\theta) = \frac{1}{2m} \sum_{i = 1}^{m} \left (h_{\theta}(x^i) - y^i \right)^2
# $$
# This cost function is a multivariable function, but to explain what is going on, a very simplified graph of the cost function with only two parameters is plotted below to show how the process works; in practice this graph would be in much higher dimensions.
# +
x = np.linspace(-2, 2, num=100)
Y = x
x, Y = np.meshgrid(x, Y)
Z = np.power(x, 2) + np.power(Y, 2)
fig = plt.figure()
ax = fig.add_subplot(projection='3d')
surf = ax.plot_surface(x, Y, Z/4, cmap=cm.coolwarm)
ax.set_xlabel(r"$\theta_0$")
ax.set_ylabel(r"$\theta_1$")
ax.set_zlabel(r"$J(\theta)$")
plt.show()
# -
# This graph has a minimum point that represents the parameters which lead to the lowest cost. To get there we need a way to tell the computer what step to take on this surface in order to get as close as possible to the minimum point. We use the partial derivative with respect to each parameter. Luckily the partial derivatives are easy to derive. This is the way it works in my program:
# $$
# \frac{\partial{J(\theta)}}{\partial{\theta_j}} = \frac{1}{m} \sum_{i = 1}^{m} \left (h_{\theta}(x^i) - y^i \right) x_j^i
# $$
# $$
# \nabla J(\theta) = \begin{bmatrix} \frac{\partial{J(\theta)}}{\partial{\theta_0}} \\ \frac{\partial{J(\theta)}}{\partial{\theta_1}} \\ \vdots \\ \frac{\partial{J(\theta)}}{\partial{\theta_n}} \end{bmatrix}
# $$
# Essentially it takes the average of the derivatives over all the training examples with respect to the first parameter, puts it into a matrix, and repeats for all the parameters. \
# Some may know that it is possible to minimize the function explicitly (the normal equation), but I will not use that approach, as I do not fully understand its derivation and I would like to before I use it. \
# I made a function to calculate the cost and the derivative needed.
def costs(data_in, outputs):
pred = []
cost = []
derivs = []
for x, y in zip(data_in, outputs):
pred.append(hypothethis(x))
c = ((pred[-1] - y)**2)/2
cost.append(c)
d = (pred[-1] - y) * x
derivs.append(d)
cost = np.average(cost)
derivs = np.average(derivs, axis=0)
return cost, derivs.T
# To update the parameters we subtract from each one its derivative times a constant called the learning rate, $ \alpha $, which is important to tune just right: too high a value results in overshooting, so the model never learns, whereas too low a value makes the model take forever to learn.
# $$
# \theta_j := \theta_j - \alpha \frac{\partial{J(\theta)}}{\partial{\theta_j}}
# $$
# $$
# \theta := \theta - \alpha \nabla J(\theta)
# $$
# Taking the negative makes the computer take steps down the graph instead of upwards. We iterate this as many times as we want.
def train(data_in, parameters, outputs, iterations=1, alpha=0.1):
for i in range(iterations):
cost, derivs = costs(data_in, outputs)
parameters -= alpha * derivs
return parameters
thetas = train(X, thetas, y, 1000, 0.2)
# Here we scale all the features back.
X[:, 1:] *= X_max
y *= y_max
print(hypothethis(X[1]))
y[1]
# Because I used a very basic form of feature scaling, the error scales with it, so the model is not entirely accurate. It is very close, though, so it is learning, but the feature scaling needs to be more sophisticated.
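# As a sketch of a more robust alternative (not wired into the code above), z-score standardization centers each feature and divides by its standard deviation, and is exactly invertible. The `standardize` helper and the toy matrix below are illustrative assumptions, not part of the original code:

```python
import numpy as np

def standardize(X):
    """Return a z-scored copy of X plus the statistics needed to undo the scaling."""
    mu = X.mean(axis=0)
    sigma = X.std(axis=0)
    sigma = np.where(sigma == 0, 1.0, sigma)   # guard against constant columns
    return (X - mu) / sigma, mu, sigma

X_toy = np.array([[2104.0, 3.0], [1600.0, 3.0], [2400.0, 4.0]])  # toy feature matrix
X_scaled, mu, sigma = standardize(X_toy)
assert np.allclose(X_scaled.mean(axis=0), 0.0)    # each column now has zero mean
assert np.allclose(X_scaled * sigma + mu, X_toy)  # the scaling is exactly invertible
```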
| Supervised_Learning/Linear_Regression/Multi_Linear_Regression.ipynb |
// -*- coding: utf-8 -*-
// ---
// jupyter:
// jupytext:
// text_representation:
// extension: .cpp
// format_name: light
// format_version: '1.5'
// jupytext_version: 1.14.4
// kernelspec:
// display_name: C++14
// language: C++14
// name: xcpp14
// ---
#include <iostream>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <math.h>
using namespace std;
#define BLOK 60
typedef struct {
int pbr;
char mjesto[50+1];
char opcina[50+1];
}zapis;
// **Copy the data into the unformatted file "mjesta.dat"**
// +
int F;
{
FILE *f, *fb;
zapis pom;
int br;
f = fopen("mjesta.txt", "r");
fb = fopen("mjesta.dat", "wb+");
// Start of the file
fseek(f,0,SEEK_SET);
fseek(fb,0,SEEK_SET);
// Copy the data
br = 0;
while(fscanf(f,"%[^,],%d,%[^\n] ",pom.mjesto, &pom.pbr, pom.opcina) != EOF){
fwrite(&pom,sizeof(zapis),1,fb);
br++;
}
F = br;
fclose(f);
fclose(fb);
printf("Ukupno: %d zapisa", br);
}
// -
// ## Reading by blocks
// 
printf("Zapisa: %d", F);
printf("\nBlokova: %d", (int)ceil((float)F/BLOK));
printf("\nBlok: %d", BLOK);
// ### Task 1.
// Write a function that prints the leading record of a block.
// + active=""
// #define BLOK 60
// typedef struct {
// int pbr;
// char mjesto[50+1];
// char opcina[50+1];
// }zapis;
// +
void vodeci(int rb_blok){
zapis pom;
FILE *f = fopen("mjesta.dat", "rb");
// Write the code here
// Print the record
printf("%d %s %s", pom.pbr, pom.mjesto, pom.opcina);
fclose(f);
}
vodeci(115)
// -
// ### Task 2.
// Determine the number of records and the size of the file for "mjesta.txt" and "mjesta.dat".
//
// - By counting records
// - "with a single statement"
{
zapis pom;
FILE *f = fopen("mjesta.dat", "rb");
printf("mjesta.dat");
// With a single statement
fseek(f,0,SEEK_END);
int broj_zapisa = 0;
int velicina = 0;
// By counting records
int broj_zapisa_2 = 0;
int velicina_2 = 0;
fseek(f,0,SEEK_SET);
printf("\nJednom naredbom:");
printf("\n - veličina: %d", velicina);
printf("\n - broj zapisa: %d", broj_zapisa);
printf("\nBrojanjem zapisa:");
printf("\n - veličina: %d", velicina_2);
printf("\n - broj zapisa: %d", broj_zapisa_2);
fclose(f);
}
{
zapis pom;
FILE *f = fopen("mjesta.txt", "r");
// With a single statement
fseek(f,0,SEEK_END);
int velicina = 0;
int broj_zapisa = 0;
// By counting records
int broj_zapisa_2 = 0;
int velicina_2 = 0;
printf("\nJednom naredbom:");
printf("\n - veličina: %d", velicina);
printf("\n - broj zapisa: %d", broj_zapisa);
printf("\nBrojanjem zapisa:");
printf("\n - veličina: %d", velicina_2);
printf("\n - broj zapisa: %d", broj_zapisa_2);
fclose(f);
}
// ### Task 3.
// Create an index table that points to the leading records of the blocks in "mjesta.dat"
// * with an index array in working memory
// * with the file "indeks.idx" on the hard disk
// + active=""
// #define BLOK 60
// typedef struct {
// int pbr;
// char mjesto[50+1];
// char opcina[50+1];
// }zapis;
// +
zapis pom;
long *indeks;
FILE *f = fopen("mjesta.dat", "rb");
fseek(f, 0, SEEK_END);
int broj_zapisa = (ftell(f) / sizeof(zapis));
int broj_blokova = ceil((float)broj_zapisa/BLOK);
printf("Broj zapisa %d", broj_zapisa);
printf("\nBroj blokova %d", broj_blokova);
// Write the code here
int rb_bloka = 60;
fseek(f, indeks[rb_bloka-1], SEEK_SET);
fread(&pom, sizeof(pom), 1, f);
printf("\n%d %s %s", pom.pbr, pom.mjesto, pom.opcina);
fclose(f);
// -
// ### Task 4.
// Write a function that checks whether the place whose name is given as the function argument exists in the file. The function returns 1 if it exists, and 0 otherwise.
//
int trazi(const char *naziv)
{
zapis pom;
FILE *f = fopen("mjesta.dat", "rb");
int br = 0;
bool pronadjen = false;
// Find the block
while(fread(&pom, sizeof(zapis),1, f)){
printf("%s\n", pom.mjesto);
if(strcmp(pom.mjesto, naziv)== 0)
return 1;
else if(strcmp(pom.mjesto, naziv)>0){
pronadjen = true;
break;
}else{
br++;
fseek(f, br * BLOK * sizeof(zapis), SEEK_SET);
}
}
// Find the record within the block
if(pronadjen){
br -= 1;
// Go back one block
fseek(f, br*BLOK*sizeof(zapis),SEEK_SET);
// Search all records in the block
for(int i=0; i< BLOK; i++){
fread(&pom,sizeof(pom),1,f);
if(strcmp(pom.mjesto, naziv) == 0)
return 1;
}
return 0;
}else
return 0;
}
trazi("Malinska")
| notebooks/CitanjePoBlokovima.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import os
import sys
sys.path.append('../../..')
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
import pickle
import joblib
from tqdm import tqdm
from sklearn.model_selection import train_test_split
import tensorflow as tf
from tensorflow import feature_column as fc
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense
from sklearn import preprocessing
from sklearn.utils import shuffle
from tensorflow.keras.models import save_model,load_model
from utils.util import *
from utils.preprocessing import *
from utils.dataiter import Dataiter
from utils.evaluate import calculate_ctr, compute_rce, average_precision_score
from utils.target_encode import MTE_one_shot
import tensorflow.keras.backend as K
import core.config as conf
# -
# ## Load Data
# +
path = f'{conf.dataset_mini_path}/train'
train = read_data(path)
path = f'{conf.dataset_mini_path}/test'
test = read_data(path)
path = f'{conf.dataset_mini_path}/valid'
valid = read_data(path)
# -
TARGET = 'comment'
# ## Preprocessing
# +
def set_dataframe_types(df, train):
df['id'] = np.arange( df.shape[0] )
df['id'] = df['id'].astype(np.uint32)
if train:
df['reply_timestamp'] = df['reply_timestamp'].fillna(0)
df['retweet_timestamp'] = df['retweet_timestamp'].fillna(0)
df['comment_timestamp'] = df['comment_timestamp'].fillna(0)
df['like_timestamp'] = df['like_timestamp'].fillna(0)
df['reply_timestamp'] = df['reply_timestamp'].astype(np.uint32)
df['retweet_timestamp'] = df['retweet_timestamp'].astype(np.uint32)
df['comment_timestamp'] = df['comment_timestamp'].astype(np.uint32)
df['like_timestamp'] = df['like_timestamp'].astype(np.uint32)
df['tweet_timestamp'] = df['tweet_timestamp'].astype( np.uint32 )
df['creator_follower_count'] = df['creator_follower_count'].astype( np.uint32 )
df['creator_following_count'] = df['creator_following_count'].astype( np.uint32 )
df['creator_account_creation']= df['creator_account_creation'].astype( np.uint32 )
df['engager_follower_count'] = df['engager_follower_count'].astype( np.uint32 )
df['engager_following_count'] = df['engager_following_count'].astype( np.uint32 )
df['engager_account_creation']= df['engager_account_creation'].astype( np.uint32 )
return df
def preprocess(df, target, train):
df = set_dataframe_types(df, train)
# df = df.set_index('id')
# df.columns = conf.raw_features + conf.labels
df = df.drop('text_tokens', axis=1)
df = feature_extraction(df, features=conf.used_features, train=train) # extract 'used_features'
return df
# -
train = preprocess(train, TARGET, True)
valid = preprocess(valid, TARGET, True)
test = preprocess(test, TARGET, True)
train
# ### pickle matching
# #### language
pickle_path = conf.dict_path
# +
user_main_language_path = pickle_path + "user_main_language.pkl"
if os.path.exists(user_main_language_path) :
with open(user_main_language_path, 'rb') as f :
user_main_language = pickle.load(f)
# +
language_dict_path = pickle_path + "language_dict.pkl"
if os.path.exists(language_dict_path ) :
with open(language_dict_path , 'rb') as f :
language_dict = pickle.load(f)
# -
train['language'] = train.apply(lambda x : language_dict[x['language']], axis = 1)
test['language'] = test.apply(lambda x : language_dict[x['language']], axis = 1)
valid['language'] = valid.apply(lambda x : language_dict[x['language']], axis = 1)
del language_dict
train['creator_main_language'] = train['creator_id'].map(user_main_language)
valid['creator_main_language'] = valid['creator_id'].map(user_main_language)
test['creator_main_language'] = test['creator_id'].map(user_main_language)
train['engager_main_language'] = train['engager_id'].map(user_main_language)
valid['engager_main_language'] = valid['engager_id'].map(user_main_language)
test['engager_main_language'] = test['engager_id'].map(user_main_language)
train['creator_and_engager_have_same_main_language'] = train.apply(lambda x : 1 if x['creator_main_language'] == x['engager_main_language'] else 0, axis = 1)
valid['creator_and_engager_have_same_main_language'] = valid.apply(lambda x : 1 if x['creator_main_language'] == x['engager_main_language'] else 0, axis = 1)
test['creator_and_engager_have_same_main_language'] = test.apply(lambda x : 1 if x['creator_main_language'] == x['engager_main_language'] else 0, axis = 1)
train['is_tweet_in_creator_main_language'] = train.apply(lambda x : 1 if x['creator_main_language'] == x['language'] else 0, axis = 1)
valid['is_tweet_in_creator_main_language'] = valid.apply(lambda x : 1 if x['creator_main_language'] == x['language'] else 0, axis = 1)
test['is_tweet_in_creator_main_language'] = test.apply(lambda x : 1 if x['creator_main_language'] == x['language'] else 0, axis = 1)
train['is_tweet_in_engager_main_language'] = train.apply(lambda x : 1 if x['engager_main_language'] == x['language'] else 0, axis = 1)
valid['is_tweet_in_engager_main_language'] = valid.apply(lambda x : 1 if x['engager_main_language'] == x['language'] else 0, axis = 1)
test['is_tweet_in_engager_main_language'] = test.apply(lambda x : 1 if x['engager_main_language'] == x['language'] else 0, axis = 1)
del user_main_language
train.head()
# #### engagements
# +
engagement_like_path = pickle_path + "engagement-like.pkl"
if os.path.exists(engagement_like_path ) :
with open(engagement_like_path , 'rb') as f :
engagement_like = pickle.load(f)
# -
train['engager_feature_number_of_previous_like_engagement'] = train.apply(lambda x : engagement_like[x['engager_id']], axis = 1)
valid['engager_feature_number_of_previous_like_engagement'] = valid.apply(lambda x : engagement_like[x['engager_id']], axis = 1)
test['engager_feature_number_of_previous_like_engagement'] = test.apply(lambda x : engagement_like[x['engager_id']], axis = 1)
del engagement_like
# +
engagement_reply_path = pickle_path + "engagement-reply.pkl"
if os.path.exists(engagement_reply_path ) :
with open(engagement_reply_path , 'rb') as f :
engagement_reply = pickle.load(f)
# -
train['engager_feature_number_of_previous_reply_engagement'] = train.apply(lambda x : engagement_reply[x['engager_id']], axis = 1)
valid['engager_feature_number_of_previous_reply_engagement'] = valid.apply(lambda x : engagement_reply[x['engager_id']], axis = 1)
test['engager_feature_number_of_previous_reply_engagement'] = test.apply(lambda x : engagement_reply[x['engager_id']], axis = 1)
del engagement_reply
# +
engagement_retweet_path = pickle_path + "engagement-retweet.pkl"
if os.path.exists(engagement_retweet_path ) :
with open(engagement_retweet_path , 'rb') as f :
engagement_retweet = pickle.load(f)
# -
train['engager_feature_number_of_previous_retweet_engagement'] = train.apply(lambda x : engagement_retweet[x['engager_id']], axis = 1)
valid['engager_feature_number_of_previous_retweet_engagement'] = valid.apply(lambda x : engagement_retweet[x['engager_id']], axis = 1)
test['engager_feature_number_of_previous_retweet_engagement'] = test.apply(lambda x : engagement_retweet[x['engager_id']], axis = 1)
del engagement_retweet
# +
engagement_comment_path = pickle_path + "engagement-comment.pkl"
if os.path.exists(engagement_comment_path ) :
with open(engagement_comment_path , 'rb') as f :
engagement_comment = pickle.load(f)
# -
train['engager_feature_number_of_previous_comment_engagement'] = train.apply(lambda x : engagement_comment[x['engager_id']], axis = 1)
valid['engager_feature_number_of_previous_comment_engagement'] = valid.apply(lambda x : engagement_comment[x['engager_id']], axis = 1)
test['engager_feature_number_of_previous_comment_engagement'] = test.apply(lambda x : engagement_comment[x['engager_id']], axis = 1)
del engagement_comment
train['number_of_engagements_positive'] = train.apply(lambda x : x['engager_feature_number_of_previous_like_engagement'] + x['engager_feature_number_of_previous_retweet_engagement'] + x['engager_feature_number_of_previous_reply_engagement'] + x['engager_feature_number_of_previous_comment_engagement'], axis = 1)
valid['number_of_engagements_positive'] = valid.apply(lambda x : x['engager_feature_number_of_previous_like_engagement'] + x['engager_feature_number_of_previous_retweet_engagement'] + x['engager_feature_number_of_previous_reply_engagement'] + x['engager_feature_number_of_previous_comment_engagement'], axis = 1)
test['number_of_engagements_positive'] = test.apply(lambda x : x['engager_feature_number_of_previous_like_engagement'] + x['engager_feature_number_of_previous_retweet_engagement'] + x['engager_feature_number_of_previous_reply_engagement'] + x['engager_feature_number_of_previous_comment_engagement'], axis = 1)
train = train.drop('engager_feature_number_of_previous_reply_engagement', axis = 1)
train = train.drop('engager_feature_number_of_previous_like_engagement', axis = 1)
train = train.drop('engager_feature_number_of_previous_retweet_engagement', axis = 1)
# train = train.drop('engager_feature_number_of_previous_comment_engagement', axis = 1)
valid = valid.drop('engager_feature_number_of_previous_reply_engagement', axis = 1)
valid = valid.drop('engager_feature_number_of_previous_like_engagement', axis = 1)
valid = valid.drop('engager_feature_number_of_previous_retweet_engagement', axis = 1)
# valid = valid.drop('engager_feature_number_of_previous_comment_engagement', axis = 1)
test = test.drop('engager_feature_number_of_previous_reply_engagement', axis = 1)
test = test.drop('engager_feature_number_of_previous_like_engagement', axis = 1)
test = test.drop('engager_feature_number_of_previous_retweet_engagement', axis = 1)
# test = test.drop('engager_feature_number_of_previous_comment_engagement', axis = 1)
train[f'number_of_engagements_ratio_{TARGET}'] = train.apply(lambda x : x[f'engager_feature_number_of_previous_{TARGET}_engagement'] / x['number_of_engagements_positive'] if x['number_of_engagements_positive'] != 0 else 0, axis = 1)
valid[f'number_of_engagements_ratio_{TARGET}'] = valid.apply(lambda x : x[f'engager_feature_number_of_previous_{TARGET}_engagement'] / x['number_of_engagements_positive'] if x['number_of_engagements_positive'] != 0 else 0, axis = 1)
test[f'number_of_engagements_ratio_{TARGET}'] = test.apply(lambda x : x[f'engager_feature_number_of_previous_{TARGET}_engagement'] / x['number_of_engagements_positive'] if x['number_of_engagements_positive'] != 0 else 0, axis = 1)
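The row-wise `apply` calls above run a Python function per record, which is slow on frames of this size. A vectorized sketch of the same ratio (assuming the column names used in this notebook):

```python
import pandas as pd

def engagement_ratio(df, target):
    """Vectorized equivalent of the row-wise apply above: previous engagements
    of the target type divided by all positive engagements, 0.0 when there
    are none."""
    prev = df[f'engager_feature_number_of_previous_{target}_engagement']
    total = df['number_of_engagements_positive']
    # where() turns zero totals into NaN so the division never raises,
    # and fillna() maps those rows back to 0, matching the original logic
    return (prev / total.where(total != 0)).fillna(0.0)

# hypothetical toy frame, just to exercise the function
toy = pd.DataFrame({
    'engager_feature_number_of_previous_comment_engagement': [2, 0, 5],
    'number_of_engagements_positive': [4, 0, 10],
})
ratios = engagement_ratio(toy, 'comment')
```

Each of the three apply lines would then reduce to e.g. `train[f'number_of_engagements_ratio_{TARGET}'] = engagement_ratio(train, TARGET)`.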
# ## Sampling
# +
# df_negative = df_negative.sample(n = len(df_positive), random_state=777)
# +
# train = pd.concat([df_positive, df_negative])
# +
# train = train.sample(frac = 1)
# +
# del df_positive
# del df_negative
# -
# ## Split
label_names = ['reply', 'retweet', 'comment', 'like']
DONT_USE = ['tweet_timestamp','creator_account_creation','engager_account_creation','engage_time',
            'fold','tweet_id',
            'tr','dt_day',
'engager_id','creator_id','engager_is_verified',
'elapsed_time',
'links','domains','hashtags0','hashtags1',
'hashtags','tweet_hash','dt_second','id',
'tw_hash0',
'tw_hash1',
'tw_rt_uhash',
'same_language', 'nan_language','language',
'tw_hash', 'tw_freq_hash','tw_first_word', 'tw_second_word', 'tw_last_word', 'tw_llast_word',
'ypred','creator_count_combined','creator_user_fer_count_delta_time','creator_user_fing_count_delta_time','creator_user_fering_count_delta_time','creator_user_fing_count_mode','creator_user_fer_count_mode','creator_user_fering_count_mode'
]
DONT_USE += label_names
DONT_USE += conf.labels
RMV = [c for c in DONT_USE if c in train.columns]
y_train = train[TARGET]
X_train = train.drop(RMV, axis=1)
del train
y_valid = valid[TARGET]
X_valid = valid.drop(RMV, axis=1)
del valid
y_test = test[TARGET]
X_test = test.drop(RMV, axis=1)
del test
# ## Scaling
X_train = X_train.reset_index(drop=True)
X_test = X_test.reset_index(drop=True)
X_valid = X_valid.reset_index(drop=True)
scaling_columns = ['creator_following_count', 'creator_follower_count', 'engager_follower_count', 'engager_following_count', f'engager_feature_number_of_previous_{TARGET}_engagement', 'number_of_engagements_positive', 'dt_dow', 'dt_hour', 'len_domains']
standard_scaler = preprocessing.StandardScaler()
standard_scaler.fit(X_train[scaling_columns])
ss = standard_scaler.transform(X_train[scaling_columns])
X_train[scaling_columns] = pd.DataFrame(ss, columns = scaling_columns)
ss = standard_scaler.transform(X_valid[scaling_columns])
X_valid[scaling_columns] = pd.DataFrame(ss, columns = scaling_columns)
ss = standard_scaler.transform(X_test[scaling_columns])
X_test[scaling_columns] = pd.DataFrame(ss, columns = scaling_columns)
X_train = X_train.fillna(X_train.mean())
X_valid = X_valid.fillna(X_valid.mean())
X_test = X_test.fillna(X_test.mean())
X_train
# ## Modeling
model = Sequential([
Dense(16, activation = 'relu', input_dim = X_train.shape[1]),
Dense(8, activation = 'relu'),
Dense(4, activation = 'relu'),
Dense(1, activation = 'sigmoid')
])
model.compile(
optimizer = 'adam',
loss = 'binary_crossentropy', # softmax : sparse_categorical_crossentropy, sigmoid : binary_crossentropy
metrics=['binary_crossentropy']) # sigmoid :binary_crossentropy
result = model.fit(
x = X_train,
y = y_train,
validation_data=(X_valid, y_valid),
epochs=5,
batch_size=32
)
plt.plot(result.history['binary_crossentropy'])
plt.plot(result.history['val_binary_crossentropy'])
plt.title('model loss')
plt.ylabel('binary cross-entropy')
plt.xlabel('epoch')
plt.legend(['train', 'val'], loc='upper left')
plt.show()
model.evaluate(X_test, y_test)
model.save(f'./saved_model/ffnn_{TARGET}')
# ## Predict
model = tf.keras.models.load_model(f'./saved_model/ffnn_{TARGET}')
pred = model.predict(X_test)
rce = compute_rce(pred, y_test)
rce
average_precision_score(y_test, pred)
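`compute_rce` comes from `utils.evaluate`, whose source is not shown here; in the RecSys Challenge setting, RCE is usually defined as the relative improvement in cross entropy over a naive predictor that always outputs the empirical positive rate. A sketch under that assumption:

```python
import numpy as np

def compute_rce_sketch(pred, y_true, eps=1e-15):
    """Relative cross entropy: 100 * (1 - CE(model) / CE(naive)), where the
    naive model always predicts the empirical positive rate. Higher is
    better; 0 means no better than the naive baseline."""
    pred = np.clip(np.asarray(pred, dtype=float).ravel(), eps, 1 - eps)
    y = np.asarray(y_true, dtype=float).ravel()
    ce_model = -np.mean(y * np.log(pred) + (1 - y) * np.log(1 - pred))
    ctr = np.clip(y.mean(), eps, 1 - eps)
    ce_naive = -np.mean(y * np.log(ctr) + (1 - y) * np.log(1 - ctr))
    return (1.0 - ce_model / ce_naive) * 100.0

y = np.array([0, 1, 0, 1])
rce_naive = compute_rce_sketch(np.full(4, 0.5), y)  # always predicting the CTR
rce_sharp = compute_rce_sketch(np.array([0.01, 0.99, 0.01, 0.99]), y)
```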
| models/notebooks/FFNN/04-FFNN_with_pickle_comment.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3.7.11
# language: python
# name: python3
# ---
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
import seaborn as sns
import os
# %matplotlib inline
ROOT = os.getcwd()
print(ROOT)
DATA = os.path.join(ROOT, 'data','Advertising.csv')
print(DATA)
df = pd.read_csv(DATA)
df.head()
x = df.iloc[:,:-1]
y = df.sales
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_absolute_error, mean_squared_error
X_train, X_test, y_train, y_test = train_test_split(x,y,test_size=0.33, random_state=101)
model = LinearRegression()
model.fit(X_train,y_train)
y_hat= model.predict(X_test)
MAE= mean_absolute_error(y_test,y_hat)
print(f"MAE: {MAE}")
MSE= mean_squared_error(y_test,y_hat)
print(f"MSE: {MSE}")
RMSE = np.sqrt(MSE)
print(f"RMSE: {RMSE}")
residual= y_test - y_hat
sns.scatterplot(x=y_test, y=residual)  # residual plot
plt.axhline(y=0,color="red",ls="--")
import scipy as sp
fig, ax = plt.subplots(figsize=(6,8),dpi=100)
# probplot also returns the computed quantile values, but here we only
# want the plot, so we pass our axes and ignore the return value
sp.stats.probplot(residual,plot=ax)
sns.pairplot(df,diag_kind="kde")
new_y_hat=model.predict(x)
# +
fig , axes = plt.subplots(nrows=1,ncols=3,figsize=(16,6))
axes[0].plot(df["TV"],df['sales'],'o')
axes[0].plot(df["TV"],new_y_hat,'o',color='red')
axes[0].set_ylabel("sales")
axes[0].set_xlabel("TV")
axes[1].plot(df["radio"],df['sales'],'o')
axes[1].plot(df["radio"],new_y_hat,'o',color='red')
axes[1].set_ylabel("sales")
axes[1].set_xlabel("radio")
axes[2].plot(df["newspaper"],df['sales'],'o')
axes[2].plot(df["newspaper"],new_y_hat,'o',color='red')
axes[2].set_ylabel("sales")
axes[2].set_xlabel("newspaper")
# -
from sklearn.metrics import r2_score
r2_score(y_test,y_hat)
df.shape
adjusted_R2 = 1 - (1-r2_score(y_test,y_hat))*(len(y_test)-1)/(len(y_test)-x.shape[1]-1)
print(f"Adjusted R2: {adjusted_R2}")
from joblib import dump, load # Saving your file as a binary file.
model_dir="models"
os.makedirs(model_dir,exist_ok=True)
filepath = os.path.join(model_dir,"model.joblib")
dump(model,filepath)
load_model = load(filepath)
load_model.coef_
example=[[151,25,15]]
load_model.predict(example)
# +
## Polynomial Regression
# Polynomial regression is still linear in its coefficients; it fits a polynomial of the input features to the data.
# -
x1 = df.drop(columns=["sales"],axis=1)
x1.head()
from sklearn.preprocessing import PolynomialFeatures
poly_conv = PolynomialFeatures(degree=2, include_bias=False)
poly_conv.fit(x1)
poly_features = poly_conv.transform(x1)
poly_features.shape
x1.iloc[0]
poly_features[0]
x_train, X_test, y_train, y_test = train_test_split(poly_features,y,test_size=0.33,random_state=101)
model1 = LinearRegression()
model1.fit(x_train,y_train)
y_poly_hat = model1.predict(X_test)
MAE = mean_absolute_error(y_test,y_poly_hat)
print(f"MAE: {MAE}")
MSE = mean_squared_error(y_test,y_poly_hat)
print(f"MSE: {MSE}")
RMSE = np.sqrt(MSE)
print(f"RMSE: {RMSE}")
model1.coef_
# +
train_rmse_errors = []
test_rmse_errors = []
for i in range(1,10):
poly_converter = PolynomialFeatures(degree=i,include_bias=False)
poly_features = poly_converter.fit_transform(x1)
X_train, X_test, y_train, y_test = train_test_split(poly_features,y,test_size=0.33,random_state=101)
model = LinearRegression()
model.fit(X_train,y_train)
train_model = model.predict(X_train)
test_model = model.predict(X_test)
train_rmse = np.sqrt(mean_squared_error(y_train,train_model))
test_rmse = np.sqrt(mean_squared_error(y_test,test_model))
train_rmse_errors.append(train_rmse)
test_rmse_errors.append(test_rmse)
# -
train_rmse_errors
test_rmse_errors  # overfitting sets in after the 5th degree (the test error explodes)
plt.plot(range(1,6), train_rmse_errors[0:5], label='TRAIN_RMSE')  # zoom in on the first five degrees to choose the final model
plt.plot(range(1,6), test_rmse_errors[0:5], label='TEST_RMSE')
plt.xlabel('Model Complexity/ Degree of Polynomial')
plt.ylabel('RMSE')
plt.legend()
plt.plot(range(1,10), train_rmse_errors, label = 'TRAIN_RMSE')
plt.plot(range(1,10), test_rmse_errors, label = 'TEST_RMSE')
plt.xlabel("Model Complexity/ Degree of Polynomial")
plt.ylabel("RMSE")
plt.legend()
final_poly_converter = PolynomialFeatures(degree=3, include_bias=False)
final_model = LinearRegression()
full_converted_x = final_poly_converter.fit_transform(x)
final_model.fit(full_converted_x, y)
model_dir = "models"
os.makedirs(model_dir, exist_ok=True)
filepath = os.path.join(model_dir, 'poly.joblib')
dump(final_model, filepath)
model_dir = "models"
os.makedirs(model_dir, exist_ok=True)
filepath = os.path.join(model_dir, 'final_poly_converter.joblib')
dump(final_poly_converter, filepath)
model_dir = "models"
os.makedirs(model_dir, exist_ok=True)
filepath = os.path.join(model_dir, 'final_poly_converter.joblib')
loaded_converter = load(filepath)
model_dir = "models"
os.makedirs(model_dir, exist_ok=True)
filepath = os.path.join(model_dir, 'poly.joblib')
loaded_model = load(filepath)
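The reloaded converter and model are never exercised above; the point to remember is that new inputs must pass through the saved `PolynomialFeatures` converter before `predict`. A self-contained sketch with a toy quadratic target (not the Advertising data):

```python
import os
import tempfile

import numpy as np
from joblib import dump, load
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import PolynomialFeatures

# Fit a tiny stand-in pipeline on a target that is exactly quadratic in x
X = np.array([[1.0], [2.0], [3.0], [4.0]])
y = 4.0 * X[:, 0] ** 2
conv = PolynomialFeatures(degree=2, include_bias=False)
model = LinearRegression().fit(conv.fit_transform(X), y)

# Persist and reload both artifacts, as the cells above do with joblib
with tempfile.TemporaryDirectory() as d:
    dump(conv, os.path.join(d, 'conv.joblib'))
    dump(model, os.path.join(d, 'model.joblib'))
    loaded_conv = load(os.path.join(d, 'conv.joblib'))
    loaded_model = load(os.path.join(d, 'model.joblib'))

# New data is transformed first, then predicted on
pred = loaded_model.predict(loaded_conv.transform([[5.0]]))
```

Calling `loaded_model.predict` on raw inputs would fail (or silently misbehave) because the model was trained on the expanded polynomial features.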
| simple_LR.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# +
# Configs
label_name = "math"
embedding_type = "perf" # time or perf
# +
import pandas as pd
import matplotlib
import numpy as np
from sklearn import tree
from sklearn.metrics import accuracy_score
from sklearn.metrics import precision_score
from sklearn.metrics import recall_score
from sklearn.metrics import f1_score
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import train_test_split
np.set_printoptions(precision=3, suppress=True)
# +
dataset = pd.read_csv(f"../../dataset/{embedding_type}/{label_name}_dataset.csv")
dataset = pd.get_dummies(dataset)
train, test = train_test_split(dataset, test_size=0.33, random_state=42, shuffle=True)
train_dataset_features = train.copy().drop('label', axis=1)
train_dataset_labels = train.copy().pop('label')
test_dataset_features = test.copy().drop('label', axis=1)
test_dataset_labels = test.copy().pop('label')
dataset.head()
# -
test_dataset_features.sort_index()
model = tree.DecisionTreeClassifier()
model.fit(train_dataset_features, train_dataset_labels)
# +
from graphviz import Source
graph = Source(tree.export_graphviz(model, out_file=None, feature_names=train_dataset_features.columns))
graph.format = 'png'
graph.render('dtree_render',view=True)
# -
confusion_matrix(train_dataset_labels, model.predict(train_dataset_features))
confusion_matrix(test_dataset_labels, model.predict(test_dataset_features))
# +
from sklearn.metrics import classification_report
print(classification_report(test_dataset_labels, model.predict(test_dataset_features)))
# -
import pybaobabdt
ax = pybaobabdt.drawTree(model, size=10, dpi=72, features=train_dataset_features.keys())
ax.get_figure().savefig('tree.png', format='png', dpi=300, transparent=True)
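A training-set confusion matrix from an unconstrained `DecisionTreeClassifier`, as above, is typically perfect, which signals memorization rather than generalization. Capping `max_depth` is the simplest control; a sketch on synthetic data (the repo's dataset path is not reproduced here):

```python
from sklearn import tree
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=400, n_features=10, random_state=42)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.33, random_state=42)

# Unconstrained tree: grows until it fits the training data perfectly
full = tree.DecisionTreeClassifier(random_state=42).fit(X_tr, y_tr)
# Capped tree: the depth limit trades training fit for simpler rules
capped = tree.DecisionTreeClassifier(max_depth=4, random_state=42).fit(X_tr, y_tr)

train_acc_full = full.score(X_tr, y_tr)
test_acc_capped = capped.score(X_te, y_te)
```

Comparing `full.score(X_te, y_te)` against `test_acc_capped` shows whether the extra depth is buying any test accuracy.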
| prototype/dtree/DecisionTree.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
# %%capture
from tqdm import tqdm_notebook as tqdm
tqdm().pandas()
import sys
sys.path.append("../")
import os
import re
import datetime
import pandas as pd
import pandas_market_calendars as mcal
from polygon import RESTClient
#from tqdm.notebook import tqdm
import time
import pytz
eastern = 'US/Eastern'
# +
key = "<KEY>"
def ts_to_datetime(ts) -> str:
return datetime.datetime.fromtimestamp(ts / 1000.0).strftime('%Y-%m-%d %H:%M')
def get_split_data_for_stock(stock):
with RESTClient(key) as client:
try:
respex = client.reference_stock_splits(str(stock).upper())
result = respex.__dict__['results']
return result
        except Exception:
            print("Error getting split data for " + str(stock))
return []
def get_dividend_data_for_stock(stock):
with RESTClient(key) as client:
try:
respex = client.reference_stock_dividends(str(stock).upper())
result = respex.__dict__['results']
return result
        except Exception:
            print("Error getting dividend data for " + str(stock))
return []
def extract_qualified_symbols(df, plvl, advlvl):
qualified = df[(df.p_lvl == plvl) & (df.adv_lvl == advlvl)]
return qualified
# -
# get_folder_list()  # note: defined in the next cell; execute that cell first
# +
def get_folder_list():
path = '.'
directory_contents = os.listdir(path)
r = re.compile(".*pv_adv_[4]_[2]")
    folder_list = list(filter(r.match, directory_contents))
return folder_list
def get_all_files_in_folder(folder):
return os.listdir(folder)
def extract_date_from_filename(filename):
split_a = filename.split('trades-')
split_b = split_a[1].split('.')
return split_b[0]
def update_all_files():
a = get_folder_list()
count = 0
all_file_dfs = []
for folder in tqdm(a, desc="folders"):
all_files = get_all_files_in_folder(folder)
count = count + 1
for file in tqdm(all_files, desc="Folder Files"):
file_date = extract_date_from_filename(file)
file_df = pd.read_csv(folder+'/'+str(file))
file_df['trade_date'] = file_date
file_df['group'] = count
file_df['random_id'] = file_df[['trade_date', 'symbol']].sum(axis=1).map(hash)
file_df.set_index("random_id", inplace=True)
all_file_dfs.append(file_df)
    result_1 = pd.concat(all_file_dfs)  # DataFrame.append is deprecated; concat is the supported way to stack the frames
return result_1
def update_all_etf_files():
all_file_dfs = []
for file in tqdm(all_etf_files, desc="etf Files"):
file_date = extract_date_from_filename(file)
file_df = pd.read_csv("pv_adv_major_etfs"+'/'+str(file))
file_df['trade_date'] = file_date
file_df['random_id'] = file_df[['trade_date', 'symbol']].sum(axis=1).map(hash)
file_df.set_index("random_id", inplace=True)
all_file_dfs.append(file_df)
    result_1 = pd.concat(all_file_dfs)  # DataFrame.append is deprecated; concat is the supported way to stack the frames
return result_1
# -
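`extract_date_from_filename` assumes names shaped like `trades-YYYY-MM-DD.csv`; a quick check of that assumption (the function body is copied from the cell above):

```python
def extract_date_from_filename(filename):
    # 'trades-2021-03-15.csv' -> '2021-03-15'
    split_a = filename.split('trades-')
    split_b = split_a[1].split('.')
    return split_b[0]

date = extract_date_from_filename('trades-2021-03-15.csv')
```

The `split('.')` means a compressed name like `trades-2021-03-15.csv.gz` still yields the date, but a file without the `trades-` prefix raises `IndexError`.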
main_df = pd.read_csv('stocks-profile.csv')
q_sym_list = []
for i in range(5):
q_sym = extract_qualified_symbols(main_df, i, 2)
q_sym_list = q_sym_list + list(q_sym['symbol'])
splits_df = []
dividend_df = []
for i in tqdm(range(len(q_sym_list))):
ticker = q_sym_list[i]
sp = get_split_data_for_stock(ticker)
div = get_dividend_data_for_stock(ticker)
a = pd.DataFrame(sp)
b = pd.DataFrame(div)
splits_df.append(a)
dividend_df.append(b)
time.sleep(0.1)
dividend_df = pd.concat(dividend_df)
splits_df = pd.concat(splits_df)
splits_df.head()
# +
spl_col = list(splits_df.columns)
divd_cols = list(dividend_df.columns)
for i,s in enumerate(spl_col):
spl_col[i] = "split_"+s
for i,s in enumerate(divd_cols):
divd_cols[i] = "dividend_"+s
# -
spl_df_copy = splits_df.copy()
divd_df_copy = dividend_df.copy()
spl_df_copy.columns = spl_col
divd_df_copy.columns = divd_cols
spl_div = pd.concat([spl_df_copy, divd_df_copy])
spl_div.to_csv('split_and_divided_top_5.csv')
ouut = update_all_files()
ouut.tail()
ouut.to_csv('data_only_group_5.csv')
all_etf_files = os.listdir('pv_adv_major_etfs')
all_etf_files_df = update_all_etf_files()
all_etf_files_df.head()
# +
etf_cols = list(all_etf_files_df.columns)
for i,s in enumerate(etf_cols):
etf_cols[i] = "etf_"+s
# -
etf_df_copy = all_etf_files_df.copy()
etf_df_copy.columns = etf_cols
etf_df_copy.head()
etf_df_copy.to_csv('etf_data_5_groups.csv')
for index, row in tqdm(df.iterrows(), desc='Symbols', total=len(df)):
for item in list(row['splits']):
ex_date = item['exDate']
payment_date = item['paymentDate']
ratio = item['ratio']
symbol = item['ticker']
new_file_df.loc[(new_file_df['symbol'] == symbol) & (new_file_df['trade_date'] == ex_date), 'split_ex_date'] = ex_date
new_file_df.loc[(new_file_df['symbol'] == symbol) & (new_file_df['trade_date'] == ex_date), 'split_ratio'] = ratio
# +
for index, row in tqdm(all_etf_files_df.iterrows(), desc='Symbols', total=len(all_etf_files_df)):
for colname in all_etf_files_df.columns:
new_new_file.loc[(new_new_file['trade_date'] == row['trade_date']), row['symbol']+'_'+colname] = row[colname]
# -
for index, row in tqdm(symbols_with_dividends.iterrows(), desc='Symbols', total=len(symbols_with_dividends)):
for item in list(row['dividends']):
ex_date = item['exDate']
payment_date = item['paymentDate']
ratio = item['amount']
symbol = item['ticker']
new_file_df.loc[(new_file_df['symbol'] == symbol) & (new_file_df['trade_date'] == ex_date), 'div_ex_date'] = ex_date
new_file_df.loc[(new_file_df['symbol'] == symbol) & (new_file_df['trade_date'] == ex_date), 'dividend_ratio'] = ratio
res = str(splits_data['splits'][1]).strip('][').split(', ')
| polygonIO_python/trades_quotes/data_manipulate/data_manipulator.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/u6k/ml-sandbox/blob/master/mnist_cnn_use_keras.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + [markdown] id="EStGPJSGHTm5" colab_type="text"
# How to cheaply build a GPU environment for hobby deep learning - Qiita https://qiita.com/hoto17296/items/16f57b319a0293bd2688
# + id="wSJ8kBz5HJZ9" colab_type="code" outputId="e285b849-1ed7-4e3b-dd72-e62fa46658ac" colab={"base_uri": "https://localhost:8080/", "height": 1227}
# !wget https://raw.githubusercontent.com/fchollet/keras/master/examples/mnist_cnn.py
# !python mnist_cnn.py
| mnist_cnn_use_keras.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + id="Yp1QzZA2FqrR" colab_type="code" colab={} executionInfo={"status": "ok", "timestamp": 1600966149674, "user_tz": -330, "elapsed": 972, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Ggy_o7pC97iMLwReJFws779DMXX4Bt_gerr7_ka=s64", "userId": "05011419419690803092"}}
## Import packages
from scipy.io import loadmat
from sklearn import preprocessing
from tabulate import tabulate
from sklearn.multiclass import OneVsRestClassifier
from sklearn.metrics import confusion_matrix,classification_report
from sklearn.model_selection import cross_val_score, GridSearchCV
import matplotlib.patches as mpatches
from matplotlib import pyplot as plt
from skimage.color import label2rgb
from sklearn.svm import SVC
from sklearn import metrics
from sklearn import svm
import pandas as pd
import numpy as np
import statistics
import math
import time
import sys
## Import DL
import keras
from keras.layers.core import Dense, Dropout, Activation # Types of layers to be used in our model
from keras.utils import np_utils
from keras.preprocessing.image import ImageDataGenerator
from keras.layers import Dense, Conv2D, MaxPool2D , Conv1D, Flatten, MaxPooling1D
from keras.models import Sequential
# + id="QEPgmFP3FWIf" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} executionInfo={"status": "ok", "timestamp": 1600965919796, "user_tz": -330, "elapsed": 446384, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Ggy_o7pC97iMLwReJFws779DMXX4Bt_gerr7_ka=s64", "userId": "05011419419690803092"}} outputId="bb531fb0-077d-4368-f307-7ce4808f347a"
## Mounting Google Drive
from google.colab import drive
drive.mount('/content/drive')
# + id="tZ120VqRtS5R" colab_type="code" colab={} executionInfo={"status": "ok", "timestamp": 1600967710877, "user_tz": -330, "elapsed": 1404, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Ggy_o7pC97iMLwReJFws779DMXX4Bt_gerr7_ka=s64", "userId": "05011419419690803092"}}
img = loadmat('/content/drive/My Drive/Major_Project/Data/PaviaU.mat')
img_gt = loadmat('/content/drive/My Drive/Major_Project/Data/PaviaU_gt.mat')
img_dr = np.load('/content/drive/My Drive/Major_Project/Supervised_Results/PaviaU/DR/img_orig_DR_22.npy')
img_dr_un = np.load('/content/drive/My Drive/Major_Project/unSupervised_Results/PaviaU/DR/Test0_DR_imgorig_22.npy')
# + id="avxdqnm2t-1B" colab_type="code" colab={} executionInfo={"status": "ok", "timestamp": 1600967711293, "user_tz": -330, "elapsed": 1251, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Ggy_o7pC97iMLwReJFws779DMXX4Bt_gerr7_ka=s64", "userId": "05011419419690803092"}}
img = img['paviaU']
gt = img_gt['paviaU_gt']
height, width, bands = img.shape[0], img.shape[1], img.shape[2]
img = np.reshape(img, [height*width, bands])
img_gt = np.reshape(gt, [height*width,])
img = preprocessing.normalize(img.astype('float32'))
img_dr = preprocessing.normalize(img_dr.astype('float32'))
img_dr_un = preprocessing.normalize(img_dr_un.astype('float32'))
# + id="rOH4pjlevHSI" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} executionInfo={"status": "ok", "timestamp": 1600711270425, "user_tz": -330, "elapsed": 1270, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Ggy_o7pC97iMLwReJFws779DMXX4Bt_gerr7_ka=s64", "userId": "05011419419690803092"}} outputId="4698c77f-ea6d-4a97-cfb1-5b15860a55fc"
a = np.arange(height*width)
print(a[img_gt==3])
# + [markdown] id="vWQv5FaXCokR" colab_type="text"
# # Pavia
# + id="pqEZmP0_ufin" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 868} executionInfo={"status": "ok", "timestamp": 1600968017834, "user_tz": -330, "elapsed": 2940, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Ggy_o7pC97iMLwReJFws779DMXX4Bt_gerr7_ka=s64", "userId": "05011419419690803092"}} outputId="0fa57281-7b6e-4f69-b58c-7f8404b090a4"
print(img_gt[1800])
plt.figure()
plt.plot(img[1800,:])
plt.title('Class 1 - original')
plt.ylabel('Spectral Response')
plt.xlabel('Band Number')
plt.savefig('/content/drive/My Drive/Major_Project/Figures/PUClass1.png',dpi=300, bbox_inches='tight')
plt.figure()
plt.plot(img_dr[1800,:])
plt.title('Class 1 - Supervised Reduced Dimension')
plt.ylabel('Spectral Response')
plt.xlabel('Band Number')
plt.savefig('/content/drive/My Drive/Major_Project/Figures/PUClass1_DR.png',dpi=300, bbox_inches='tight')
plt.figure()
plt.plot(img_dr_un[1800,:])
plt.title('Class 1 - Unsupervised Reduced Dimension')
plt.ylabel('Spectral Response')
plt.xlabel('Band Number')
plt.savefig('/content/drive/My Drive/Major_Project/Figures/PUClass1_DR_un.png',dpi=300, bbox_inches='tight')
# + id="aXHEumOdum9x" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 868} executionInfo={"status": "ok", "timestamp": 1600968025049, "user_tz": -330, "elapsed": 2549, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Ggy_o7pC97iMLwReJFws779DMXX4Bt_gerr7_ka=s64", "userId": "05011419419690803092"}} outputId="753d5061-6e1c-4009-8d1f-3520c73a2465"
print(img_gt[49432])
plt.figure()
plt.plot(img[49432,:])
plt.title('Class 5 - original')
plt.ylabel('Spectral Response')
plt.xlabel('Band Number')
plt.savefig('/content/drive/My Drive/Major_Project/Figures/PUClass5.png',dpi=300, bbox_inches='tight')
plt.figure()
plt.plot(img_dr[49432,:])
plt.title('Class 5 - Supervised Reduced Dimension')
plt.ylabel('Spectral Response')
plt.xlabel('Band Number')
plt.savefig('/content/drive/My Drive/Major_Project/Figures/PUClass5_DR.png',dpi=300, bbox_inches='tight')
plt.figure()
plt.plot(img_dr_un[49432,:])
plt.title('Class 5 - Unsupervised Reduced Dimension')
plt.ylabel('Spectral Response')
plt.xlabel('Band Number')
plt.savefig('/content/drive/My Drive/Major_Project/Figures/PUClass5_DR_un.png',dpi=300, bbox_inches='tight')
# + id="97SoMlO9u7kO" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 868} executionInfo={"status": "ok", "timestamp": 1600968031148, "user_tz": -330, "elapsed": 2743, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Ggy_o7pC97iMLwReJFws779DMXX4Bt_gerr7_ka=s64", "userId": "05011419419690803092"}} outputId="e60b3be7-e614-406b-ba10-2f97d1ae71d2"
print(img_gt[112676])
plt.figure()
plt.plot(img[112676,:])
plt.title('Class 9 - original')
plt.ylabel('Spectral Response')
plt.xlabel('Band Number')
plt.savefig('/content/drive/My Drive/Major_Project/Figures/PUClass9.png',dpi=300, bbox_inches='tight')
plt.figure()
plt.plot(img_dr[112676,:])
plt.title('Class 9 - Supervised Reduced Dimension')
plt.ylabel('Spectral Response')
plt.xlabel('Band Number')
plt.savefig('/content/drive/My Drive/Major_Project/Figures/PUClass9_DR.png',dpi=300, bbox_inches='tight')
plt.figure()
plt.plot(img_dr_un[112676,:])
plt.title('Class 9 - Unsupervised Reduced Dimension')
plt.ylabel('Spectral Response')
plt.xlabel('Band Number')
plt.savefig('/content/drive/My Drive/Major_Project/Figures/PUClass9_DR_un.png',dpi=300, bbox_inches='tight')
# + id="-iIk02bpwbSw" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 868} executionInfo={"status": "ok", "timestamp": 1600968040693, "user_tz": -330, "elapsed": 2512, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Ggy_o7pC97iMLwReJFws779DMXX4Bt_gerr7_ka=s64", "userId": "05011419419690803092"}} outputId="bcfb518a-35e0-4538-b537-d3d0fb3a3e8f"
print(img_gt[98948])
plt.figure()
plt.plot(img[98948,:])
plt.title('Class 3 - original')
plt.ylabel('Spectral Response')
plt.xlabel('Band Number')
plt.savefig('/content/drive/My Drive/Major_Project/Figures/PUClass3.png',dpi=300, bbox_inches='tight')
plt.figure()
plt.plot(img_dr[98948,:])
plt.title('Class 3 - Supervised Reduced Dimension')
plt.ylabel('Spectral Response')
plt.xlabel('Band Number')
plt.savefig('/content/drive/My Drive/Major_Project/Figures/PUClass3_DR.png',dpi=300, bbox_inches='tight')
plt.figure()
plt.plot(img_dr_un[98948,:])
plt.title('Class 3 - Unsupervised Reduced Dimension')
plt.ylabel('Spectral Response')
plt.xlabel('Band Number')
plt.savefig('/content/drive/My Drive/Major_Project/Figures/PUClass3_DR_un.png',dpi=300, bbox_inches='tight')
# + [markdown] id="A-4151naCtUK" colab_type="text"
# # Pavia - Unsupervised
# + colab_type="code" id="QXQWZBSBPS6C" colab={"base_uri": "https://localhost:8080/", "height": 590} executionInfo={"status": "ok", "timestamp": 1600718855069, "user_tz": -330, "elapsed": 2286, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Ggy_o7pC97iMLwReJFws779DMXX4Bt_gerr7_ka=s64", "userId": "05011419419690803092"}} outputId="10e43e7c-04cb-4260-f425-d90471c9f2e8"
print(img_gt[1800])
plt.figure()
plt.plot(img[1800,:])
plt.title('Class 1 - original')
plt.ylabel('Spectral Response')
plt.xlabel('Band Number')
plt.savefig('PUClass1.png',dpi=300, bbox_inches='tight')
plt.figure()
plt.plot(img_dr_un[1800,:])
plt.title('Class 1 - Unsupervised Reduced Dimension')
plt.ylabel('Spectral Response')
plt.xlabel('Band Number')
plt.savefig('PUClass1_DR.png',dpi=300, bbox_inches='tight')
# + colab_type="code" id="e4hEVgoEPS6k" colab={"base_uri": "https://localhost:8080/", "height": 590} executionInfo={"status": "ok", "timestamp": 1600718851594, "user_tz": -330, "elapsed": 2419, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Ggy_o7pC97iMLwReJFws779DMXX4Bt_gerr7_ka=s64", "userId": "05011419419690803092"}} outputId="657cfda6-6f95-445b-b9f5-b1a94032380e"
print(img_gt[49432])
plt.figure()
plt.plot(img[49432,:])
plt.title('Class 5 - original')
plt.ylabel('Spectral Response')
plt.xlabel('Band Number')
plt.savefig('PUClass5.png',dpi=300, bbox_inches='tight')
plt.figure()
plt.plot(img_dr_un[49432,:])
plt.title('Class 5 - Unsupervised Reduced Dimension')
plt.ylabel('Spectral Response')
plt.xlabel('Band Number')
plt.savefig('PUClass5_DR.png',dpi=300, bbox_inches='tight')
# + colab_type="code" id="KKNRCSHAPS6z" colab={"base_uri": "https://localhost:8080/", "height": 590} executionInfo={"status": "ok", "timestamp": 1600718847170, "user_tz": -330, "elapsed": 2387, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Ggy_o7pC97iMLwReJFws779DMXX4Bt_gerr7_ka=s64", "userId": "05011419419690803092"}} outputId="4a87f6c4-ee12-4e0b-f390-416bd8b9b526"
print(img_gt[112676])
plt.figure()
plt.plot(img[112676,:])
plt.title('Class 9 - original')
plt.ylabel('Spectral Response')
plt.xlabel('Band Number')
plt.savefig('PUClass9.png',dpi=300, bbox_inches='tight')
plt.figure()
plt.plot(img_dr_un[112676,:])
plt.title('Class 9 - Unsupervised Reduced Dimension')
plt.ylabel('Spectral Response')
plt.xlabel('Band Number')
plt.savefig('PUClass9_DR.png',dpi=300, bbox_inches='tight')
# + colab_type="code" id="AmcAlOt0PS6_" colab={"base_uri": "https://localhost:8080/", "height": 590} executionInfo={"status": "ok", "timestamp": 1600718834528, "user_tz": -330, "elapsed": 2031, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Ggy_o7pC97iMLwReJFws779DMXX4Bt_gerr7_ka=s64", "userId": "05011419419690803092"}} outputId="60c6f58a-f0e6-4f35-bb03-6afb0c3034fb"
print(img_gt[98948])
plt.figure()
plt.plot(img[98948,:])
plt.title('Class 3 - original')
plt.ylabel('Spectral Response')
plt.xlabel('Band Number')
plt.savefig('PUClass3.png',dpi=300, bbox_inches='tight')
plt.figure()
plt.plot(img_dr_un[98948,:])
plt.title('Class 3 - Unsupervised Reduced Dimension')
plt.ylabel('Spectral Response')
plt.xlabel('Band Number')
plt.savefig('PUClass3_DR.png',dpi=300, bbox_inches='tight')
# + [markdown] id="hSp8NaJmO73x" colab_type="text"
# # Indian Pines
# + colab_type="code" id="k6f91L4-O_-2" colab={} executionInfo={"status": "ok", "timestamp": 1600973241982, "user_tz": -330, "elapsed": 1095, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Ggy_o7pC97iMLwReJFws779DMXX4Bt_gerr7_ka=s64", "userId": "05011419419690803092"}}
img = loadmat('/content/drive/My Drive/Major_Project/Data/Indian_Pines.mat')
img_gt = loadmat('/content/drive/My Drive/Major_Project/Data/Indian_Pines_gt.mat')
img_dr = np.load('/content/drive/My Drive/Major_Project/Supervised_Results/Indian_Pines/DR/img_orig_DR_30.npy')
img_dr_un = np.load('/content/drive/My Drive/Major_Project/unSupervised_Results/Indian_Pines/DR/Test0_DR_imgorig_30.npy')
# + colab_type="code" id="DLugRE68O__u" colab={} executionInfo={"status": "ok", "timestamp": 1600973242376, "user_tz": -330, "elapsed": 776, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Ggy_o7pC97iMLwReJFws779DMXX4Bt_gerr7_ka=s64", "userId": "05011419419690803092"}}
img = img['indian_pines_corrected']
gt = img_gt['indian_pines_gt']
height, width, bands = img.shape[0], img.shape[1], img.shape[2]
img = np.reshape(img, [height*width, bands])
img_gt = np.reshape(gt, [height*width,])
img = preprocessing.normalize(img.astype('float32'))
img_dr = preprocessing.normalize(img_dr.astype('float32'))
img_dr_un = preprocessing.normalize(img_dr_un.astype('float32'))
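`preprocessing.normalize` scales each pixel's spectrum (each row of the flattened cube) to unit L2 norm, so the plots above compare spectral *shape* rather than magnitude. A minimal NumPy equivalent on a toy array (hypothetical values, not the real hyperspectral cube):

```python
import numpy as np

# toy "image": 4 pixels x 3 bands, mimicking the flattened hyperspectral cube
toy = np.array([[3.0, 4.0, 0.0],
                [1.0, 1.0, 1.0],
                [0.5, 0.0, 0.0],
                [2.0, 2.0, 1.0]], dtype='float32')

# L2-normalize each row, as sklearn's preprocessing.normalize does by default
norms = np.linalg.norm(toy, axis=1, keepdims=True)
toy_normalized = toy / norms

print(np.linalg.norm(toy_normalized, axis=1))  # every row now has unit norm
```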
# + id="MHXSBz1gRCI2" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 52} executionInfo={"status": "ok", "timestamp": 1600787179580, "user_tz": -330, "elapsed": 1469, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Ggy_o7pC97iMLwReJFws779DMXX4Bt_gerr7_ka=s64", "userId": "05011419419690803092"}} outputId="8b3c861e-b3f8-42fa-d812-0887dbba7c00"
a = np.arange(height*width)
print(a[img_gt==9])
# + colab_type="code" id="TzXNElUZO__9" colab={"base_uri": "https://localhost:8080/", "height": 868} executionInfo={"status": "ok", "timestamp": 1600973255276, "user_tz": -330, "elapsed": 3115, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Ggy_o7pC97iMLwReJFws779DMXX4Bt_gerr7_ka=s64", "userId": "05011419419690803092"}} outputId="34f433d2-29a9-4485-8fef-03af70ad1d27"
print(img_gt[9376])
plt.figure()
plt.plot(img[9376,:])
plt.title('Class 1 - original')
plt.ylabel('Spectral Response')
plt.xlabel('Band Number')
plt.savefig('/content/drive/My Drive/Major_Project/Figures/IPClass1.png',dpi=300, bbox_inches='tight')
plt.figure()
plt.plot(img_dr[9376,:])
plt.title('Class 1 - Supervised Reduced Dimension')
plt.ylabel('Spectral Response')
plt.xlabel('Band Number')
plt.savefig('/content/drive/My Drive/Major_Project/Figures/IPClass1_DR.png',dpi=300, bbox_inches='tight')
plt.figure()
plt.plot(img_dr_un[9376,:])
plt.title('Class 1 - Unsupervised Reduced Dimension')
plt.ylabel('Spectral Response')
plt.xlabel('Band Number')
plt.savefig('/content/drive/My Drive/Major_Project/Figures/IPClass1_DR_un.png',dpi=300, bbox_inches='tight')
# + colab_type="code" id="6CDHHEK7PAAK" colab={"base_uri": "https://localhost:8080/", "height": 868} executionInfo={"status": "ok", "timestamp": 1600973267360, "user_tz": -330, "elapsed": 3373, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Ggy_o7pC97iMLwReJFws779DMXX4Bt_gerr7_ka=s64", "userId": "05011419419690803092"}} outputId="14d1e08f-6cb7-4252-928e-5dd55bcd5486"
print(img_gt[895])
plt.figure()
plt.plot(img[895,:])
plt.title('Class 5 - original')
plt.ylabel('Spectral Response')
plt.xlabel('Band Number')
plt.savefig('/content/drive/My Drive/Major_Project/Figures/IPClass5.png',dpi=300, bbox_inches='tight')
plt.figure()
plt.plot(img_dr[895,:])
plt.title('Class 5 - Supervised Reduced Dimension')
plt.ylabel('Spectral Response')
plt.xlabel('Band Number')
plt.savefig('/content/drive/My Drive/Major_Project/Figures/IPClass5_DR.png',dpi=300, bbox_inches='tight')
plt.figure()
plt.plot(img_dr_un[895,:])
plt.title('Class 5 - Unsupervised Reduced Dimension')
plt.ylabel('Spectral Response')
plt.xlabel('Band Number')
plt.savefig('/content/drive/My Drive/Major_Project/Figures/IPClass5_DR_un.png',dpi=300, bbox_inches='tight')
# + colab_type="code" id="IfKbFP4JPAAU" colab={"base_uri": "https://localhost:8080/", "height": 868} executionInfo={"status": "ok", "timestamp": 1600973274514, "user_tz": -330, "elapsed": 3607, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Ggy_o7pC97iMLwReJFws779DMXX4Bt_gerr7_ka=s64", "userId": "05011419419690803092"}} outputId="b98690eb-9c9a-45ee-8b1d-a59d216fa0a8"
print(img_gt[9157])
plt.figure()
plt.plot(img[9157,:])
plt.title('Class 9 - original')
plt.ylabel('Spectral Response')
plt.xlabel('Band Number')
plt.savefig('/content/drive/My Drive/Major_Project/Figures/IPClass9.png',dpi=300, bbox_inches='tight')
plt.figure()
plt.plot(img_dr[9157,:])
plt.title('Class 9 - Supervised Reduced Dimension')
plt.ylabel('Spectral Response')
plt.xlabel('Band Number')
plt.savefig('/content/drive/My Drive/Major_Project/Figures/IPClass9_DR.png',dpi=300, bbox_inches='tight')
plt.figure()
plt.plot(img_dr_un[9157,:])
plt.title('Class 9 - Unsupervised Reduced Dimension')
plt.ylabel('Spectral Response')
plt.xlabel('Band Number')
plt.savefig('/content/drive/My Drive/Major_Project/Figures/IPClass9_DR_un.png',dpi=300, bbox_inches='tight')
# + colab_type="code" id="trbVZgN2PAAd" colab={"base_uri": "https://localhost:8080/", "height": 868} executionInfo={"status": "ok", "timestamp": 1600973276958, "user_tz": -330, "elapsed": 5266, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Ggy_o7pC97iMLwReJFws779DMXX4Bt_gerr7_ka=s64", "userId": "05011419419690803092"}} outputId="d1a5c4d6-b17a-4c19-d7e1-ca3b12f95f33"
print(img_gt[18455])
plt.figure()
plt.plot(img[18455,:])
plt.title('Class 3 - original')
plt.ylabel('Spectral Response')
plt.xlabel('Band Number')
plt.savefig('/content/drive/My Drive/Major_Project/Figures/IPClass3.png',dpi=300, bbox_inches='tight')
plt.figure()
plt.plot(img_dr[18455,:])
plt.title('Class 3 - Supervised Reduced Dimension')
plt.ylabel('Spectral Response')
plt.xlabel('Band Number')
plt.savefig('/content/drive/My Drive/Major_Project/Figures/IPClass3_DR.png',dpi=300, bbox_inches='tight')
plt.figure()
plt.plot(img_dr_un[18455,:])
plt.title('Class 3 - Unsupervised Reduced Dimension')
plt.ylabel('Spectral Response')
plt.xlabel('Band Number')
plt.savefig('/content/drive/My Drive/Major_Project/Figures/IPClass3_DR_un.png',dpi=300, bbox_inches='tight')
# + id="z1ACMAGXWYF0" colab_type="code" colab={} executionInfo={"status": "ok", "timestamp": 1600973276960, "user_tz": -330, "elapsed": 4385, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Ggy_o7pC97iMLwReJFws779DMXX4Bt_gerr7_ka=s64", "userId": "05011419419690803092"}}
| Figure.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## Explore feature-to-feature relationships in breast cancer dataset
import pandas as pd
import seaborn as sns
from sklearn import datasets
from discover_feature_relationships import discover
import matplotlib.pyplot as plt
from sklearn.datasets import load_breast_cancer
# %load_ext watermark
# %watermark -d -m -v -p numpy,matplotlib,sklearn -g
# +
data = load_breast_cancer()
df_bc = pd.DataFrame(data.data, columns = data.feature_names)
df_bc['target'] = data.target
classifier_overrides = set()
cols = list(df_bc.columns)
df_bc.head()
# +
cols = ['texture error',
'smoothness error',
'symmetry error',
'mean smoothness',
'worst smoothness',
'mean symmetry',
'worst symmetry',
'mean fractal dimension',
'fractal dimension error',
'worst fractal dimension',
'concave points error',
'compactness error',
'concavity error',
'mean concavity',
'mean concave points',
'worst concave points',
'mean compactness',
'worst compactness',
'worst concavity',
'mean radius',
'mean area',
'worst perimeter',
'mean perimeter',
'worst radius',
'worst area',
'mean texture',
'worst texture',
'area error',
'radius error',
'perimeter error']
df = df_bc[cols]
# -
# ## Data Exploration - Random Forests
# %time df_results = discover.discover(df[cols].sample(frac=1), classifier_overrides)
fig, ax = plt.subplots(figsize=(12, 8))
sns.heatmap(df_results.pivot(index='target', columns='feature', values='score').fillna(1),
annot=False, center=0, ax=ax, vmin=-0.1, vmax=1, cmap="viridis");
# ## Explore Correlation
# ### Pearson (linear)
# +
df_results = discover.discover(df_bc[cols[:10]], classifier_overrides, method='pearson')
df_results.pivot(index='target', columns='feature', values='score').fillna(1) \
.style.background_gradient(cmap="viridis", axis=1) \
.set_precision(2)
# -
# ### Spearman (rank-based)
# +
df_results = discover.discover(df_bc[cols[:10]], classifier_overrides, method='spearman')
df_results.pivot(index='target', columns='feature', values='score').fillna(1) \
.style.background_gradient(cmap="viridis", axis=1) \
.set_precision(2)
# -
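For intuition on the two correlation methods above: Pearson measures linear association, while Spearman only assumes monotonicity. A toy sketch (hypothetical data, not the breast cancer features) where the relationship is monotonic but nonlinear, so Spearman is exactly 1 while Pearson falls short:

```python
import numpy as np
import pandas as pd

# hypothetical monotonic but nonlinear relationship: y = x^3
x = np.arange(1, 11, dtype=float)
toy = pd.DataFrame({'x': x, 'y': x ** 3})

pearson = toy.corr(method='pearson').loc['x', 'y']
spearman = toy.corr(method='spearman').loc['x', 'y']
print(pearson, spearman)  # Spearman is 1.0; Pearson is high but below 1
```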
| examples/example_breast_cancer_feature_relationships.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] colab_type="text" id="MykPt6jzs6vF"
# # Install additional packages
# + colab={} colab_type="code" id="QNZes2-6s7EQ"
# # install custom packages - for Google Colab
# # !pip install datashader
# # !pip install hdbscan
# + [markdown] colab_type="text" id="mEgeIvRSs3EU"
# # Load Libraries
# + colab={"base_uri": "https://localhost:8080/", "height": 68} colab_type="code" executionInfo={"elapsed": 311, "status": "ok", "timestamp": 1599186611396, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "04736056844233231284"}, "user_tz": 420} id="OQWSxl2seaFb" outputId="970d01e1-bb46-44f7-cb22-3added8a0af6"
from platform import python_version
print("python {}".format(python_version()))
import pandas as pd
import numpy as np
print("pandas {}".format(pd.__version__))
print("numpy {}".format(np.__version__))
# +
import seaborn as sns; sns.set()
from scipy.spatial import ConvexHull, convex_hull_plot_2d
import matplotlib.pyplot as plt
# + colab={"base_uri": "https://localhost:8080/", "height": 42, "output_embedded_package_id": "1NYZvGH84SmKlBcAmPa9g122s0hlcDEcH"} colab_type="code" executionInfo={"elapsed": 8212, "status": "ok", "timestamp": 1599186631578, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "04736056844233231284"}, "user_tz": 420} id="JjKRJKMivHes" outputId="02497460-a02d-40e8-a543-b0b472505e28"
import holoviews as hv
import holoviews.operation.datashader as hd
import datashader as ds
import datashader.transfer_functions as tf
hd.shade.cmap=["lightblue", "darkblue"]
hv.extension('bokeh', 'matplotlib')
# https://datashader.org/getting_started/Interactivity.html
# https://stackoverflow.com/questions/54793910/how-to-make-the-holoviews-show-graph-on-google-colaboratory-notebook
# %env HV_DOC_HTML=true
# + [markdown] colab_type="text" id="5TToUdDIXK4D"
# # Data Preparation
# -
# ## Parsing
# + colab={} colab_type="code" executionInfo={"elapsed": 1081, "status": "ok", "timestamp": 1599186641168, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "04736056844233231284"}, "user_tz": 420} id="5T9kvBlFisYs"
# set option to process raw data, False will read parsed data directly
DATA_OPTION_PRCESS_RAW = False
# set number of rows to work with
DATA_OPTION_NUM_ROWS = 2307 # total rows of data - 2307
#DATA_OPTION_NUM_ROWS = None # all rows
# set paths to data files
RAW_DATA_FILE = 'raw_data/competition_dataset.csv'
PARSED_DATA_FILE = 'intermediate_data/competition_dataset_long_{}.csv'.format(DATA_OPTION_NUM_ROWS)
# + colab={} colab_type="code" executionInfo={"elapsed": 12469, "status": "ok", "timestamp": 1599186652776, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "04736056844233231284"}, "user_tz": 420} id="1L0MacMLEKhM"
if DATA_OPTION_PRCESS_RAW:
# read raw data to process into parsed data
raw_df = pd.read_csv(RAW_DATA_FILE, header=0, skiprows=0,
nrows=DATA_OPTION_NUM_ROWS, delimiter=None)
parsed_df = raw_df.copy()
parsed_df['data'] = parsed_df.iloc[:, 0].str.split('; ')
parsed_df['count'] = parsed_df['data'].str.len()
parsed_df['count'] = (parsed_df['count'] - 4 - 1) / 6
parsed_df['count'] = parsed_df['count'].astype(int)
# credit: https://stackoverflow.com/a/59552714
spread_ixs = np.repeat(range(len(parsed_df)), parsed_df['count'])
# .drop(columns='count').reset_index(drop=True)
parsed_df = parsed_df.iloc[spread_ixs, :]
parsed_df['track_id'] = parsed_df['data'].str[0].astype(int)
parsed_df['grouped_row_id'] = parsed_df.groupby(
'track_id')['track_id'].rank(method='first').astype(int)
old_col = raw_df.columns.tolist()[0]
new_cols = old_col.split('; ')
# build columns
parsed_df['track_id'] = parsed_df['data'].apply(lambda x: x[0])
parsed_df['type'] = parsed_df['data'].apply(lambda x: x[1])
parsed_df['traveled_d'] = parsed_df['data'].apply(lambda x: x[2])
parsed_df['avg_speed'] = parsed_df['data'].apply(lambda x: x[3])
parsed_df['lat'] = parsed_df.apply(
lambda row: row['data'][4+(row['grouped_row_id']-1)*6], axis=1)
parsed_df['lon'] = parsed_df.apply(
lambda row: row['data'][5+(row['grouped_row_id']-1)*6], axis=1)
parsed_df['speed'] = parsed_df.apply(
lambda row: row['data'][6+(row['grouped_row_id']-1)*6], axis=1)
parsed_df['lon_acc'] = parsed_df.apply(
lambda row: row['data'][7+(row['grouped_row_id']-1)*6], axis=1)
parsed_df['lat_acc'] = parsed_df.apply(
lambda row: row['data'][8+(row['grouped_row_id']-1)*6], axis=1)
parsed_df['time'] = parsed_df.apply(
lambda row: row['data'][9+(row['grouped_row_id']-1)*6], axis=1)
# clean up columns
parsed_df = parsed_df.drop(columns=old_col)
parsed_df = parsed_df.drop(
columns=['count',
'grouped_row_id',
'data']
).reset_index(drop=True)
parsed_df = parsed_df.reset_index(drop=False).rename(
columns={'index': 'record_id'})
# output to file
parsed_df.to_csv(PARSED_DATA_FILE, index=False)
parsed_df.head(5)
else:
# read parsed data
parsed_df = pd.read_csv(PARSED_DATA_FILE, header=0,
skiprows=0, delimiter=None)
parsed_df['track_id'] = parsed_df['track_id'].astype(int)
# clean up unnamed index column - perhaps name it as record id?
# -
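The wide-to-long expansion above hinges on the `np.repeat` trick: repeat each row index once per embedded observation, then index those rows back out and rank within each track to number the observations. A minimal sketch on toy data (hypothetical columns, not the competition file):

```python
import numpy as np
import pandas as pd

# toy wide data: each row is a track with a variable number of observations
toy = pd.DataFrame({'track_id': [1, 2], 'count': [3, 2]})

# repeat each row's positional index 'count' times, then take those rows
spread_ixs = np.repeat(range(len(toy)), toy['count'])
long_df = toy.iloc[spread_ixs, :].reset_index(drop=True)

# ranking equal values by first occurrence numbers the rows within each track
long_df['obs'] = long_df.groupby('track_id')['track_id'] \
    .rank(method='first').astype(int)
print(long_df)
```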
# ## Compute extra attributes
# +
# calculate orientation
## bearing using acceleration (do not use as it provides inaccurate bearing)
parsed_df['acc_angle'] = np.arctan2(parsed_df['lat_acc'],
parsed_df['lon_acc']) * 180 / np.pi # lon = x, lat = y
## approximate bearing using acceleration (do not use as it provides inaccurate bearing)
parsed_df['appr_acc_angle'] = parsed_df['acc_angle'].round(-1)
# https://stackoverflow.com/questions/1016039/determine-the-general-orientation-of-a-2d-vector
# https://numpy.org/doc/stable/reference/generated/numpy.arctan2.html
# np.arctan2(y, x) * 180 / np.pi
# +
# compute x and y coordinates
# this improves the ease of calculating distances, especially for clustering
from datashader.utils import lnglat_to_meters
parsed_df.loc[:, 'x'], parsed_df.loc[:, 'y'] = lnglat_to_meters(parsed_df.lon, parsed_df.lat)
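`lnglat_to_meters` projects WGS84 longitude/latitude into Web Mercator (EPSG:3857) metres, which is what makes plain Euclidean distances meaningful for the clustering later. A hedged stand-alone sketch of the spherical formula it implements (radius 6378137 m; the helper name is mine, not datashader's):

```python
import numpy as np

EARTH_RADIUS = 6378137.0  # Web Mercator sphere radius in metres

def lnglat_to_meters_sketch(lon, lat):
    """Spherical Web Mercator projection: x from longitude, y from latitude."""
    x = np.radians(lon) * EARTH_RADIUS
    y = np.log(np.tan(np.pi / 4 + np.radians(lat) / 2)) * EARTH_RADIUS
    return x, y

x, y = lnglat_to_meters_sketch(23.73, 37.98)  # roughly central Athens
print(x, y)
```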
# +
# calculate bearing based on next position
shifted = parsed_df[['track_id', 'x', 'y']].\
groupby("track_id").\
shift(-1).\
rename(columns=lambda x: x+"_lag")
parsed_df = parsed_df.join(shifted)
# https://stackoverflow.com/questions/5058617/bearing-between-two-points
def gb(x1, x2, y1, y2):
"""Compass bearing (clockwise from north) of the vector from (x2, y2) to (x1, y1)."""
angle = np.arctan2(y1 - y2, x1 - x2) * 180 / np.pi  # CCW angle from east
# bearing1 = (angle + 360) % 360
bearing2 = (90 - angle) % 360  # convert math angle to compass bearing
return(bearing2)
parsed_df['bearing'] = gb(
x1=parsed_df['x'],
x2=parsed_df['x_lag'],
y1=parsed_df['y'],
y2=parsed_df['y_lag'])
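The `(90 - angle) % 360` step in `gb` above converts a mathematical angle (counter-clockwise from east) into a compass bearing (clockwise from north). A self-contained check on the cardinal directions (the `bearing_from` helper mirrors `gb`; name is mine):

```python
import numpy as np

def bearing_from(x1, x2, y1, y2):
    """Compass bearing of the vector from (x2, y2) to (x1, y1)."""
    angle = np.arctan2(y1 - y2, x1 - x2) * 180 / np.pi  # CCW from east
    return (90 - angle) % 360                           # CW from north

print(bearing_from(0, 0, 1, 0))   # due north (bearing ~ 0)
print(bearing_from(1, 0, 0, 0))   # due east  (bearing ~ 90)
print(bearing_from(0, 0, -1, 0))  # due south (bearing ~ 180)
print(bearing_from(-1, 0, 0, 0))  # due west  (bearing ~ 270)
```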
# +
# impute bearing of first points
parsed_df = parsed_df.sort_values(
by='record_id', axis=0) # make sure record is in order
shifted = parsed_df[['track_id', 'bearing']].\
groupby("track_id").\
shift(1).\
rename(columns=lambda x: x+"_lead")
parsed_df = parsed_df.join(shifted)
# if bearing is null, take the previous bearing for the track id
parsed_df['bearing'] = np.where(parsed_df['bearing'].isnull(),
parsed_df['bearing_lead'], parsed_df['bearing'])
# +
# there should be no more null bearing
parsed_df[parsed_df['bearing'].isnull()]#[['record_id','count']]
# + [markdown] colab_type="text" id="qygKrwPHQBzS"
# # Data Exploration
# + colab={} colab_type="code" id="-IPtaYCiRVP2"
parsed_df.head(10)
# + colab={} colab_type="code" id="o7UBe3uFS1YE"
len(parsed_df)
# + [markdown] colab_type="text" id="9InUmuFrEkP9"
# ## Variable Plots
# + colab={} colab_type="code" id="VxX1CV8KSZ93"
# speed vs time - 4 vehicles
dims = (10, 6)
fig, ax = plt.subplots(figsize=dims)
df = parsed_df[(parsed_df['track_id']>100) & (parsed_df['track_id']<105)]
ax = sns.scatterplot(
x="time",
y="speed",
# hue="track_id",
marker='x',
s=0.2,
data=df)
# + colab={} colab_type="code" id="Pbk-NMv3aOFv"
# lat lon - 24 vehicles
dims = (10, 6)
fig, ax = plt.subplots(figsize=dims)
df = parsed_df[(parsed_df['track_id']>100) & (parsed_df['track_id']<125)]
ax = sns.scatterplot(
x="lon",
y="lat",
# hue="track_id",
marker='+',
s=1,
data=df)
# + colab={} colab_type="code" id="gK8eQGHS-SYN"
# lat lon - all vehicles
dims = (10, 6)
fig, ax = plt.subplots(figsize=dims)
ax = sns.scatterplot(
x="lon",
y="lat",
#hue="track_id",
marker='x',
s=0.2,
data=parsed_df)
# + colab={} colab_type="code" id="Rcce5uZ_b34Y"
# lat lon - stopped only - speed <1
dims = (10, 6)
fig, ax = plt.subplots(figsize=dims)
df = parsed_df[parsed_df['speed']<1]
ax = sns.scatterplot(
x="lon",
y="lat",
#hue="track_id",
marker='x',
s=0.5,
data=df)
# + colab={} colab_type="code" id="ZBCOWwOfiTZB"
# lat lon - at a certain time frame with low speed
dims = (10, 6)
fig, ax = plt.subplots(figsize=dims)
df = parsed_df[(parsed_df['time'] == 0) & (parsed_df['speed'] < 1)]
ax = sns.scatterplot(
x="lon",
y="lat",
# hue="type",
# style="speed",
marker='x',
s=20,
data=df)
# + [markdown] colab_type="text" id="HCeVLl3HjQY2"
# ## Datashader visualizations
# + colab={} colab_type="code" id="0WXPnKqepuXL"
# https://datashader.org/user_guide/Geography.html
# https://holoviews.org/reference/elements/bokeh/Tiles.html
## hv.element.tiles.tile_sources
from holoviews.element import tiles
from datashader.utils import lnglat_to_meters
df = parsed_df.copy()
df['track_id'] = df['track_id']
df['type'] = df['type']
df.loc[:, 'x'], df.loc[:, 'y'] = lnglat_to_meters(df.lon,df.lat)
df = df[['x', 'y', 'lon', 'lat', 'track_id', 'time', 'type']]
points = hv.Points(df.copy())
hv.extension('bokeh')
hv.output(backend='bokeh')
#tiles.EsriImagery() * hd.datashade(points).opts(hv.opts(width=750, height=350))
#tiles.StamenTerrainRetina() * hd.datashade(points).opts(hv.opts(width=750, height=350))
#tiles.StamenTerrain() *
tiles.CartoLight() * hd.datashade(points).opts(hv.opts(width=750, height=350))
# + colab={} colab_type="code" id="a2nRqNHcpucx"
# https://datashader.org/user_guide/Geography.html
# https://holoviews.org/reference/elements/bokeh/Tiles.html
## hv.element.tiles.tile_sources
# https://datashader.org/getting_started/Interactivity.html
from holoviews.element import tiles
from datashader.utils import lnglat_to_meters
from datashader.colors import Sets1to3
df = parsed_df.copy()
df['track_id'] = df['track_id']
df['type'] = df['type']
df.loc[:, 'x'], df.loc[:, 'y'] = lnglat_to_meters(df.lon,df.lat)
df = df[['x', 'y', 'lon', 'lat', 'track_id', 'time', 'type']]
points = hv.Points(df.copy())
hv.extension('bokeh')
hv.output(backend='bokeh')
plot = hd.datashade(points, aggregator=ds.count_cat('type')).opts(hv.opts(width=750, height=350))
#tiles.EsriImagery() * hd.datashade(points).opts(hv.opts(width=750, height=350))
#tiles.StamenTerrainRetina() * hd.datashade(points).opts(hv.opts(width=750, height=350))
color_key = [(name,color) for name,color in zip(['Car', 'Medium Vehicle', 'Motorcycle', 'Heavy Vehicle', 'Bus',
'Taxi'], Sets1to3)]
color_points = hv.NdOverlay({n: hv.Points(df.iloc[0:1,:], label=str(n)).opts(style=dict(color=c)) for n,c in color_key})
#tiles.StamenTerrain() *
tiles.CartoLight() * plot * color_points
# + colab={} colab_type="code" id="6usnnhENxVcj"
# Car only
# https://datashader.org/user_guide/Geography.html
# https://holoviews.org/reference/elements/bokeh/Tiles.html
## hv.element.tiles.tile_sources
from holoviews.element import tiles
from datashader.utils import lnglat_to_meters
df = parsed_df[parsed_df['type']=='Car'].copy()
df['track_id'] = df['track_id']
df['type'] = df['type']
df.loc[:, 'x'], df.loc[:, 'y'] = lnglat_to_meters(df.lon,df.lat)
df = df[['x', 'y', 'lon', 'lat', 'track_id', 'time', 'type']]
points = hv.Points(df.copy())
hv.extension('bokeh')
hv.output(backend='bokeh')
#tiles.EsriImagery() * hd.datashade(points).opts(hv.opts(width=750, height=350))
#tiles.StamenTerrainRetina() * hd.datashade(points).opts(hv.opts(width=750, height=350))
#tiles.StamenTerrain() *
tiles.CartoLight() * hd.datashade(points).opts(hv.opts(width=750, height=350))
# + colab={} colab_type="code" id="BCEakAmByIPs"
# Buses only
# https://datashader.org/user_guide/Geography.html
# https://holoviews.org/reference/elements/bokeh/Tiles.html
## hv.element.tiles.tile_sources
from holoviews.element import tiles
from datashader.utils import lnglat_to_meters
df = parsed_df[parsed_df['type']=='Bus'].copy()
df['track_id'] = df['track_id']
df['type'] = df['type']
df.loc[:, 'x'], df.loc[:, 'y'] = lnglat_to_meters(df.lon,df.lat)
df = df[['x', 'y', 'lon', 'lat', 'track_id', 'time', 'type']]
points = hv.Points(df.copy())
hv.extension('bokeh')
hv.output(backend='bokeh')
#tiles.EsriImagery() * hd.datashade(points).opts(hv.opts(width=750, height=350))
#tiles.StamenTerrainRetina() * hd.datashade(points).opts(hv.opts(width=750, height=350))
#tiles.StamenTerrain() *
tiles.CartoLight() * hd.datashade(points).opts(hv.opts(width=750, height=350))
# + colab={} colab_type="code" id="FhhnH3951s_O"
# ~ Stationary points only
# https://datashader.org/user_guide/Geography.html
# https://holoviews.org/reference/elements/bokeh/Tiles.html
## hv.element.tiles.tile_sources
from holoviews.element import tiles
from datashader.utils import lnglat_to_meters
df = parsed_df[(parsed_df['speed']==0)].copy()
df['track_id'] = df['track_id']
df['type'] = df['type']
df.loc[:, 'x'], df.loc[:, 'y'] = lnglat_to_meters(df.lon,df.lat)
df = df[['x', 'y', 'lon', 'lat', 'track_id', 'time', 'type']]
points = hv.Points(df.copy())
hv.extension('bokeh')
hv.output(backend='bokeh')
#tiles.EsriImagery() * hd.datashade(points).opts(hv.opts(width=750, height=350))
#tiles.StamenTerrainRetina() * hd.datashade(points).opts(hv.opts(width=750, height=350))
#tiles.StamenTerrain() *
tiles.CartoLight() * hd.datashade(points).opts(hv.opts(width=750, height=350))
# + colab={} colab_type="code" id="UJQcz00f3J7c"
# ~ moving points only (>0)
# https://datashader.org/user_guide/Geography.html
# https://holoviews.org/reference/elements/bokeh/Tiles.html
## hv.element.tiles.tile_sources
from holoviews.element import tiles
from datashader.utils import lnglat_to_meters
df = parsed_df[(parsed_df['speed']>0)].copy()
df['track_id'] = df['track_id']
df['type'] = df['type']
df.loc[:, 'x'], df.loc[:, 'y'] = lnglat_to_meters(df.lon,df.lat)
df = df[['x', 'y', 'lon', 'lat', 'track_id', 'time', 'type']]
points = hv.Points(df.copy())
hv.extension('bokeh')
hv.output(backend='bokeh')
#tiles.EsriImagery() * hd.datashade(points).opts(hv.opts(width=750, height=350))
#tiles.StamenTerrainRetina() * hd.datashade(points).opts(hv.opts(width=750, height=350))
#tiles.StamenTerrain() *
tiles.CartoLight() * hd.datashade(points).opts(hv.opts(width=750, height=350))
# + [markdown] colab_type="text" id="3EiLg7KvEVZK"
# # Model Development
# + colab={"base_uri": "https://localhost:8080/", "height": 419} colab_type="code" executionInfo={"elapsed": 580, "status": "ok", "timestamp": 1599186675429, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "04736056844233231284"}, "user_tz": 420} id="Dmfyj5wUJ04n" outputId="31dfde27-97c8-4115-e8dc-4048b9330576"
parsed_df.head(5)
# + colab={"base_uri": "https://localhost:8080/", "height": 317} colab_type="code" executionInfo={"elapsed": 1902, "status": "ok", "timestamp": 1599186676963, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "04736056844233231284"}, "user_tz": 420} id="4CSh3M917shE" outputId="1e5e4f56-27a8-4020-d6ba-d03828b2ab05"
parsed_df.describe()
# +
# utility - hdbscan clustering
# https://hdbscan.readthedocs.io/en/latest/how_hdbscan_works.html
import hdbscan
def cluster_hdbscan(df,
parameters=None,
feature_names=['x', 'y'],
label_name='unnamed_cluster',
verbose=True):
df = df.copy()
default_parameters = {
'metric': 'euclidean',
'min_cluster_size': 200,
'min_samples': None,
'cluster_selection_epsilon': 7
}
if parameters is None:
parameters = default_parameters
else:
default_parameter_names = list(default_parameters.keys())
parameter_names = list(parameters.keys())
for parameter in default_parameter_names:
if(parameter not in parameter_names):
parameters[parameter] = default_parameters[parameter]
clusterer = hdbscan.HDBSCAN(
metric=parameters['metric'],
min_cluster_size=parameters['min_cluster_size'],
min_samples=parameters['min_samples'],
cluster_selection_epsilon=parameters['cluster_selection_epsilon']
)
clusterer.fit(df[feature_names])
df[label_name] = clusterer.labels_
if verbose:
print('hdbscan trained on: ' + str(parameters))
return(df)
# +
# utility - dbscan clustering
from sklearn.cluster import DBSCAN
# https://scikit-learn.org/stable/modules/generated/sklearn.cluster.DBSCAN.html
def cluster_dbscan(df,
parameters=None,
feature_names=['x', 'y'],
label_name='unnamed_cluster',
verbose=True):
df = df.copy()
# default_parameters = {
# 'metric': 'euclidean',
# 'min_cluster_size': 200,
# 'min_samples': None,
# 'cluster_selection_epsilon': 7
# }
clusterer = DBSCAN(
eps=parameters['cluster_selection_epsilon'],
min_samples=parameters['min_samples'],
).fit(df[feature_names])
df[label_name] = clusterer.labels_
if verbose:
print('dbscan trained on: ' + str(parameters))
return(df)
# +
# utility - kmeans clustering
# https://hdbscan.readthedocs.io/en/latest/how_hdbscan_works.html
from sklearn.cluster import KMeans
def cluster_kmeans(df,
n_clusters=4,
feature_names=['bearing_median'],
label_name='unnamed_cluster',
verbose=True):
df = df.copy()
kmeans = KMeans(n_clusters=n_clusters, random_state=0).fit(
df[feature_names])
df[label_name] = kmeans.labels_
if verbose:
print('kmeans trained on: ' + str(n_clusters) +
" clusters and " + str(feature_names))
    return df
# -
# ## Road Segment Clustering
#
# Clustering roadway segments to identify approach lanes and the major road / intersection.
# ### Prepare segment clustering training data
# + colab={"base_uri": "https://localhost:8080/", "height": 419} colab_type="code" executionInfo={"elapsed": 3071, "status": "ok", "timestamp": 1599193757126, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "04736056844233231284"}, "user_tz": 420} id="JivI0SwDysxt" outputId="252aee96-ad66-4925-e7bf-2f2aa8ff39ac"
# prep training data
df = parsed_df # [(parsed_df['speed']<5)].copy() # ~ bottom 75% speeds
seg_all_df = df[['x', 'y', 'bearing',
'record_id']].set_index('record_id')
#seg_all_df = seg_all_df.head(100000)
# rounding is not a good idea
#seg_all_df['x'] = seg_all_df['x'].round(1)
#seg_all_df['y'] = seg_all_df['y'].round(1)
# set count
seg_all_df['count'] = 1
# get count and angle by unique location
seg_all_df = seg_all_df.\
groupby(['x', 'y']).\
agg({"count": np.sum, 'bearing': np.median}).\
reset_index()
# get total and pct of count
seg_all_df['total_count'] = seg_all_df['count'].sum()
seg_all_df['count_pct'] = seg_all_df['count'] / \
seg_all_df['total_count'] * 100
# save all data for unique points
seg_all_df = seg_all_df.reset_index(
drop=False).rename(columns={'index': 'u_id'}).set_index('u_id')
### DENSITY REDUCTION ###
# # filter out unique points with fewer than 0.05% of total points
# seg_all_df = seg_all_df[seg_all_df['count_pct'] > 0.05]
# # filter out unique points with fewer than 0.0001% of total points (1 in mil)
# seg_all_df = seg_all_df[seg_all_df['count_pct']>0.0002]
# filter out infrequent points (those with 10 or fewer samples) for training
# this helps reduce data size and introduces breaks in low-density areas of the data
seg_train_df = seg_all_df[seg_all_df['count'] > 10]
seg_infre_df = seg_all_df[seg_all_df['count'] <= 10]
# choose features to be trained on - not needed!
# seg_train_df = seg_train_df[['x', 'y', 'count', 'count_pct']]
# seg_train_df = seg_train_df[['x', 'y', 'bearing']]
# seg_train_df = seg_train_df[['x', 'y']]
# -
# full dataset of unique points (all points)
seg_all_df
# training dataset of unique points (only frequent points)
seg_train_df
# infrequent data points excluded from training
seg_infre_df
# + colab={} colab_type="code" id="fzOQ2ta0mpQ0"
# visual inspect - lat lon
dims = (10, 6)
fig, ax = plt.subplots(figsize=dims)
ax = sns.scatterplot(
x='x',
y='y',
s=1,
palette="black",
# hue="count",
# style="speed",
marker='+',
edgecolors='red',
data=seg_train_df.copy())
# + colab={"base_uri": "https://localhost:8080/", "height": 367, "output_embedded_package_id": "1CfIoIrE7a-eEL-YgxZ5JfeWdoAsOmsrk"} colab_type="code" executionInfo={"elapsed": 2243, "status": "ok", "timestamp": 1599189327061, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "04736056844233231284"}, "user_tz": 420} id="rag4MMBoDObW" outputId="9316aece-003d-4d33-d92c-e52e1d6e2537"
# visual inspect - rasterize lat lon
# https://datashader.org/user_guide/Geography.html
# https://holoviews.org/reference/elements/bokeh/Tiles.html
## hv.element.tiles.tile_sources
from holoviews.element import tiles
from datashader.utils import lnglat_to_meters
# https://datashader.org/getting_started/Interactivity.html
from datashader.colors import Sets1to3
# https://github.com/holoviz/datashader/issues/767
import colorcet as cc
long_key = list(set(cc.glasbey_cool + cc.glasbey_warm + cc.glasbey_dark))
df = seg_train_df.copy()
#df['seg_cluster'] = df['seg_cluster'].apply(lambda x: 0 if x >=0 else -1)
points = hv.Points(df.copy())
hv.extension('bokeh')
hv.output(backend='bokeh')
#tiles.EsriImagery() * hd.datashade(points).opts(hv.opts(width=750, height=350))
#tiles.StamenTerrainRetina() * hd.datashade(points).opts(hv.opts(width=750, height=350))
#tiles.StamenTerrain() *
tiles.CartoLight() * hd.datashade(points,
#aggregator=ds.count_cat('seg_cluster'),
#color_key=long_key
).opts(hv.opts(width=750, height=350))
#hd.dynspread(hd.datashade(points,
# aggregator=ds.count_cat('seg_cluster'), d
# color_key=Sets1to3).opts(hv.opts(width=750, height=350)), threshold=0.4)
# -
# ### HDBSCAN for Roadway Segment Clustering
# +
# define subclustering parameters
seg_cluster_parameter = {
# x clusters of medium size segments
'metric': 'euclidean',
'min_cluster_size': 150,
'min_samples': None,
'cluster_selection_epsilon': 5
# # 8 clusters of medium size segments
# 'metric': 'euclidean',
# 'min_cluster_size': 200,
# 'min_samples': None,
# 'cluster_selection_epsilon': 20
    # # 7 clusters of medium size segments
    # 'metric': 'euclidean',
    # 'min_cluster_size': 300,
    # 'min_samples': None,
    # 'cluster_selection_epsilon': 10
    # # 12 clusters of fine segments
    # 'metric': 'euclidean',
    # 'min_cluster_size': 150,
    # 'min_samples': None,
    # 'cluster_selection_epsilon': 5
}
# run subclustering for lanes
seg_train_df_1 = cluster_hdbscan(df=seg_train_df,
parameters=seg_cluster_parameter,
feature_names=['x', 'y'],
label_name='seg_cluster')
# -
len(seg_train_df_1['seg_cluster'].unique())
# +
# visual inspect clusters by facet plot
# https://seaborn.pydata.org/generated/seaborn.FacetGrid.html
g = sns.FacetGrid(seg_train_df_1, col='seg_cluster', col_wrap=5, height=4)
g = g.map(plt.scatter, 'x', 'y', s=0.1)#, edgecolor="w")
# note: cluster 3 and 8&9 are of interest, manually merge 8 and 9
# +
### HDBSCAN parameter tuning decisions ###
# https://hdbscan.readthedocs.io/en/latest/parameter_selection.html
# min_samples
#   option A for choosing min_samples for core points:
#     area ~ 400*600 m^2 (240,000 m^2)
#     ~1 million unique points (745,709 if lone points are excluded)
#     the average density, i.e. the minimum eligible density, should be ~5 points
#   option B for choosing min_samples for core points:
#     net area (based on a rough calculation of roadway areas only):
#     50*600 + 30*400 + 4 * 10*400 = 58,000 m^2
#     ~1 million unique points (745,709 if lone points are excluded)
#     the average density, i.e. the minimum eligible density, should be ~15 points
#   options A and B both generate far too many clusters; gradually increase
#   min_samples for core points until fewer clusters are produced
# cluster_selection_epsilon
#   a 1.5 m radius (3.0 m width) is approx. one lane width; use 5 m for a typical 3-lane roadway
# HDBSCAN(algorithm='best', alpha=1.0, approx_min_span_tree=True,
# gen_min_span_tree=False, leaf_size=40, memory=Memory(cachedir=None),
# metric='euclidean', min_cluster_size=5, min_samples=None, p=None)
# -
# prepare data for second-stage training
seg_train_df_2 = seg_train_df_1[seg_train_df_1['seg_cluster']==-1]
seg_train_df_2
# +
# 2nd-stage hdbscan clustering
# with more relaxed parameters, applied only to points left unclustered in the 1st stage
# define subclustering parameters
seg_cluster_parameter = {
# x clusters of medium size segments
'metric': 'euclidean',
'min_cluster_size': 75,
'min_samples': None,
'cluster_selection_epsilon': 5
}
# run subclustering for lanes
seg_train_df_2 = cluster_hdbscan(df=seg_train_df_2,
parameters=seg_cluster_parameter,
feature_names=['x', 'y'],
label_name='seg_cluster')
# +
# seg_train_df_2[seg_train_df_2['seg_cluster']==-1]
# clustered from stage 1
seg_a = seg_train_df_1[seg_train_df_1['seg_cluster'] != -1].copy()
# clustered from stage 2
seg_b = seg_train_df_2[seg_train_df_2['seg_cluster'] != -1].copy()
prev_max = seg_train_df_1[seg_train_df_1['seg_cluster']
!= -1]['seg_cluster'].max()
seg_b['seg_cluster'] = seg_b['seg_cluster'] + \
prev_max + 1 # increment cluster number
# unclustered
seg_c = seg_train_df_2[seg_train_df_2['seg_cluster'] == -1].copy()
# -
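The label-offset trick used above when merging the two clustering stages can be sketched on toy labels:

```python
import numpy as np

# When merging labels from two clustering stages, offset the second stage's
# labels by (max first-stage label + 1) so cluster ids never collide.
# The noise label -1 is kept as-is.
stage1 = np.array([0, 0, 1, 2, -1])
stage2 = np.array([0, 1, -1])            # clustering of the stage-1 noise points
offset = stage1[stage1 != -1].max() + 1  # 3
stage2_shifted = np.where(stage2 != -1, stage2 + offset, -1)
print(stage2_shifted.tolist())  # [3, 4, -1]
```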
# update training data
seg_train_df = pd.concat([seg_a,seg_b,seg_c])
# +
# visual inspect clusters by facet plot
# https://seaborn.pydata.org/generated/seaborn.FacetGrid.html
g = sns.FacetGrid(seg_train_df_2, col='seg_cluster', col_wrap=5, height=4)
g = g.map(plt.scatter, 'x', 'y', s=0.1)#, edgecolor="w")
# cluster 6+12 = 18 is of interest
# +
# visual inspect clusters by facet plot
# https://seaborn.pydata.org/generated/seaborn.FacetGrid.html
g = sns.FacetGrid(seg_train_df, col='seg_cluster', col_wrap=5, height=4)
g = g.map(plt.scatter, 'x', 'y', s=0.1)#, edgecolor="w")
# +
# selecting only clusters of interests:
# -Cluster 8 + 9 (E-W road),
# -Cluster 3 (N-S road in the NW corner),
# -Cluster 18 (turning lane from South road to E-W road)
seg_train_df['seg_cluster_combined'] = seg_train_df['seg_cluster'].\
    apply(lambda x: 'A' if x in ('A', 8, 9)
          else 'B' if x in ('B', 3)
          else 'C' if x in ('C', 18)
          else 'Exclude')
# remove cluster to be excluded, assign combined cluster as final cluster
seg_train_df = seg_train_df[seg_train_df['seg_cluster_combined'].
isin(['A', 'B', 'C'])]
seg_train_df['seg_cluster'] = seg_train_df['seg_cluster_combined']
seg_train_df = seg_train_df.drop(columns=['seg_cluster_combined'])
seg_train_df.groupby(['seg_cluster']).count()
# + colab={} colab_type="code" executionInfo={"elapsed": 360484, "status": "aborted", "timestamp": 1599190403293, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "04736056844233231284"}, "user_tz": 420} id="8Ko6WH5AEuTQ"
# visual inspect clusters by map - color by clusters
# https://datashader.org/user_guide/Geography.html
# https://holoviews.org/reference/elements/bokeh/Tiles.html
## hv.element.tiles.tile_sources
from holoviews.element import tiles
from datashader.utils import lnglat_to_meters
# https://datashader.org/getting_started/Interactivity.html
from datashader.colors import Sets1to3
# https://github.com/holoviz/datashader/issues/767
import colorcet as cc
long_key = list(set(cc.glasbey_cool + cc.glasbey_warm + cc.glasbey_dark))
df = seg_train_df#[seg_train_df['seg_cluster']>=0].copy()
# df = seg_train_df[seg_train_df['seg_cluster']==0].copy()
points = hv.Points(df.copy())
hv.extension('bokeh')
hv.output(backend='bokeh')
#tiles.EsriImagery() * hd.datashade(points).opts(hv.opts(width=750, height=350))
#tiles.StamenTerrainRetina() * hd.datashade(points).opts(hv.opts(width=750, height=350))
#tiles.StamenTerrain() *
tiles.CartoLight() * hd.datashade(points,
aggregator=ds.count_cat('seg_cluster'),
color_key=Sets1to3).opts(hv.opts(width=750, height=350))
#tiles.CartoLight() * hd.dynspread(hd.datashade(points,
# aggregator=ds.count_cat('seg_cluster'),
# color_key=long_key).opts(hv.opts(width=750, height=350)), threshold=0.4)
# -
# ### Post-processing for un-clustered points
#
# Recover nearby points that were not assigned to a cluster.
# #### un-clustered points
# +
# reassignment part A for unclustered points
# approach #1
# - every point within an existing cluster is used as a core point for cluster reassignment
# - this approach requires a lot more distance computations
seg_train_df_0 = seg_train_df[seg_train_df['seg_cluster'] == -1].\
    reset_index(drop=False)
seg_train_df_1 = seg_train_df[seg_train_df['seg_cluster'] != -1].\
reset_index(drop=False).\
rename(columns={'u_id': 'u_id_clustered'})
seg_train_df_0 = seg_train_df_0.drop(columns=['seg_cluster'])
seg_train_df_1 = seg_train_df_1.\
rename(columns={'x': 'x_clustered', 'y': 'y_clustered'}).\
drop(columns=['count', 'bearing', 'total_count', 'count_pct'])
seg_train_df_0['tmp'] = 1
seg_train_df_1['tmp'] = 1
# -
len(seg_train_df_0)
len(seg_train_df_1)
# build intermediate dataframe
# https://stackoverflow.com/questions/35234012/python-pandas-merge-two-tables-without-keys-multiply-2-dataframes-with-broadc
seg_train_df_reassign_a = pd.merge(seg_train_df_0, seg_train_df_1, on=['tmp']).drop(columns='tmp')
# +
# calculate Euclidean distance
# more resources for more complex examples: https://kanoki.org/2019/12/27/how-to-calculate-distance-in-python-and-pandas-using-scipy-spatial-and-distance-functions/
def e_dist(x1, x2, y1, y2):
return np.sqrt((x1-x2) ** 2+(y1-y2) ** 2)
df = seg_train_df_reassign_a
df['dist'] = e_dist(
x1=df['x_clustered'],
x2=df['x'],
y1=df['y_clustered'],
y2=df['y'])
# get the minimum distance within each group
idx = df.groupby(['u_id'])['dist'].transform('min') == df['dist']
# save results
seg_reassigned_df_a = df.copy()
seg_reassigned_idx_a = idx
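The cross-join / minimum-distance pattern used in this reassignment step, in miniature (toy coordinates and hypothetical cluster names):

```python
import numpy as np
import pandas as pd

# Nearest-clustered-point lookup via a cross join: every unclustered point is
# paired with every clustered point, and the row with the minimum distance per
# unclustered point is kept.
unclustered = pd.DataFrame({'u_id': [0, 1], 'x': [0.0, 10.0], 'y': [0.0, 0.0]})
clustered = pd.DataFrame({'x_c': [1.0, 9.0], 'y_c': [0.0, 0.0],
                          'seg_cluster': ['A', 'B']})
unclustered['tmp'] = 1
clustered['tmp'] = 1
pairs = pd.merge(unclustered, clustered, on='tmp').drop(columns='tmp')
pairs['dist'] = np.sqrt((pairs['x'] - pairs['x_c']) ** 2 +
                        (pairs['y'] - pairs['y_c']) ** 2)
idx = pairs.groupby('u_id')['dist'].transform('min') == pairs['dist']
nearest = pairs[idx]
print(nearest['seg_cluster'].tolist())  # ['A', 'B']
```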
# +
# limit on reassigning unclustered points
reassign_dist_limit = 20 # meters
seg_unclustered_df = seg_reassigned_df_a[seg_reassigned_idx_a]
# limit max distance to 20 meters
seg_unclustered_df = seg_unclustered_df[seg_unclustered_df['dist'] < reassign_dist_limit]
seg_unclustered_df = seg_unclustered_df.set_index('u_id')
seg_unclustered_df = seg_unclustered_df[list(seg_train_df.columns)]
seg_unclustered_df
# -
len(seg_train_df)
# +
seg_train_df_final = pd.concat(
[seg_train_df[seg_train_df['seg_cluster'] != -1], seg_unclustered_df])
seg_train_df_final
# -
# ##### A quick look at convex hulls of the clustered and unclustered points
# +
# build convex hull to recapture raw gps points
# https://stackoverflow.com/questions/60194404/how-to-make-a-polygon-shapefile-which-corresponds-to-the-outer-boundary-of-the-g
# https://docs.scipy.org/doc/scipy/reference/generated/scipy.spatial.ConvexHull.html
# # scipy convex hull example
# from scipy.spatial import ConvexHull, convex_hull_plot_2d
# import matplotlib.pyplot as plt
# # hull 1
# points = np.random.rand(30, 2) # 30 random points in 2-D
# hull = ConvexHull(points)
# plt.plot(points[:,0], points[:,1], 'o')
# for simplex in hull.simplices:
# plt.plot(points[simplex, 0], points[simplex, 1], 'k-')
# # hull 2
# points = np.random.rand(30, 2) # 30 random points in 2-D
# plt.plot(points[:,0], points[:,1], 'o')
# hull = ConvexHull(points)
# for simplex in hull.simplices:
# plt.plot(points[simplex, 0], points[simplex, 1], 'k-')
# +
# taking a look at convex hull without unclustered points
# this can be thought of as congested areas
df = seg_train_df[seg_train_df['seg_cluster'] != -1].\
reset_index(drop=False).\
rename(columns={'u_id': 'u_id_clustered'})
# build a dictionary of convex hull points
cluster_pt_dict = {}
for cluster in df['seg_cluster'].unique():
cluster_pt_dict[cluster] = df[
df['seg_cluster'] == cluster][['x', 'y']].to_numpy()
def get_convex_hull_indices(pts_array):
hull = ConvexHull(pts_array)
hull_indices = np.unique(hull.simplices.flat)
hull_pts = pts_array[hull_indices, :]
return(hull_pts)
# get convex hull
cluster_hull_dict = {}
for cluster in list(cluster_pt_dict.keys()):
cluster_hull_dict[cluster] = get_convex_hull_indices(
cluster_pt_dict[cluster])
# plot
for cluster in list(cluster_pt_dict.keys()):
plt.plot(cluster_pt_dict[cluster][:, 0],
cluster_pt_dict[cluster][:, 1], ',')
hull = ConvexHull(cluster_pt_dict[cluster])
for simplex in hull.simplices:
plt.plot(cluster_pt_dict[cluster][simplex, 0],
cluster_pt_dict[cluster][simplex, 1], 'k-')
# +
# taking a look at convex hull with unclustered points
# this can be viewed as extended congested areas
df = seg_train_df_final.copy()
# build a dictionary of convex hull points
cluster_pt_dict = {}
for cluster in df['seg_cluster'].unique():
cluster_pt_dict[cluster] = df[
df['seg_cluster'] == cluster][['x', 'y']].to_numpy()
def get_convex_hull_indices(pts_array):
hull = ConvexHull(pts_array)
hull_indices = np.unique(hull.simplices.flat)
hull_pts = pts_array[hull_indices, :]
return(hull_pts)
# get convex hull objects
cluster_hull_objs = {}
for cluster in list(cluster_pt_dict.keys()):
cluster_hull_objs[cluster] = ConvexHull(cluster_pt_dict[cluster])
# get convex hull vertex points
cluster_hull_dict = {}
for cluster in list(cluster_pt_dict.keys()):
cluster_hull_dict[cluster] = get_convex_hull_indices(
cluster_pt_dict[cluster])
# plot
for cluster in list(cluster_pt_dict.keys()):
plt.plot(cluster_pt_dict[cluster][:, 0],
cluster_pt_dict[cluster][:, 1], ',')
hull = ConvexHull(cluster_pt_dict[cluster])
for simplex in hull.simplices:
plt.plot(cluster_pt_dict[cluster][simplex, 0],
cluster_pt_dict[cluster][simplex, 1], 'k-')
# +
# build a convex hull points df from the entire training set (incl. unclustered points)
cluster_hull_list_df = []
for cluster in list(cluster_hull_dict.keys()):
label = cluster
df = pd.DataFrame(cluster_hull_dict[cluster], columns=['x', 'y'])
df['seg_cluster'] = label
cluster_hull_list_df.append(df)
cluster_hull_df = pd.concat(cluster_hull_list_df)
cluster_hull_df
# -
# ## Apply Road Segment Clusters to all unique data points
# +
# https://stackoverflow.com/questions/16750618/whats-an-efficient-way-to-find-if-a-point-lies-in-the-convex-hull-of-a-point-cl/16898636#16898636
def in_hull(p, hull):
    """
    Test if the points in `p` lie inside `hull`.

    `p` should be an `NxK` array of coordinates of `N` points in `K` dimensions.
    `hull` is either a scipy.spatial.Delaunay object or an `MxK` array of the
    coordinates of `M` points in `K` dimensions, for which a Delaunay
    triangulation will be computed.
    """
    from scipy.spatial import Delaunay
    if not isinstance(hull, Delaunay):
        hull = Delaunay(hull)
    return hull.find_simplex(p) >= 0
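A minimal usage example of the Delaunay-based point-in-hull test above (unit square, two query points):

```python
import numpy as np
from scipy.spatial import Delaunay

# A point is inside the convex hull iff it falls within some simplex of the
# Delaunay triangulation (find_simplex returns -1 outside the hull).
square = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])
tri = Delaunay(square)
queries = np.array([[0.5, 0.5], [2.0, 2.0]])
inside = tri.find_simplex(queries) >= 0
print(inside.tolist())  # [True, False]
```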
# +
# iterate over convex hull objects and match points
cluster_hull_objs.keys()
# +
df = parsed_df.copy()
df['x_id'] = df['x'] * 100
df['x_id'] = df['x_id'].astype(int)
df['y_id'] = df['y'] * 100
df['y_id'] = df['y_id'].astype(int)
# save ids to parsed_df
parsed_df = df.copy()
df['count'] = 1
# get count and angle by unique location
df = df.\
groupby(['x', 'y', 'x_id', 'y_id']).\
agg({"count": np.sum, 'bearing': np.median}).\
rename(columns={'bearing': 'bearing_median'}).\
reset_index()
# +
all_cluster_cols = []
cluster_keys = list(cluster_hull_dict.keys())
cluster_keys.sort()
for cluster_hull in cluster_keys:
col_name = "cluster_{}".format(str(cluster_hull))
all_cluster_cols.append(col_name)
df[col_name] = in_hull(
p=df[['x', 'y']].to_numpy(),
hull=cluster_hull_dict[cluster_hull])
    df.loc[df[col_name], 'seg_cluster'] = str(cluster_hull)
df = df.drop(columns=[col_name])
# +
# merge id table with name table
# use all points - allow duplicate identicals
# clustered_df = parsed_df.merge(
# df.drop(columns=['x', 'y']), on=['x_id', 'y_id'])
# use only unique points - disallow identicals
seg_train_df_final = df.copy()
seg_train_df_final['seg_cluster'] = seg_train_df_final['seg_cluster'].astype(str)
# -
# ## Lane and Directional Sub-Clustering
#
# (lane-level sub-clustering is used instead of purely directional clustering, given the restricted scope of the analysis area)
seg_train_df_final_bk = seg_train_df_final.copy()
# +
seg_train_df_final = seg_train_df_final_bk.copy()
# filter out infrequent points (those with 2 or fewer samples) for training
# this helps reduce data size and introduces breaks in low-density areas of the data
# note: take the infrequent subset before overwriting seg_train_df_final
seg_train_df_infre = seg_train_df_final[seg_train_df_final['count'] <= 2]
seg_train_df_final = seg_train_df_final[seg_train_df_final['count'] > 2]
# -
cluster_list = seg_train_df_final['seg_cluster'].unique()
# cluster_list = [1]
cluster_list
# +
seg_train_df_final = seg_train_df_final[seg_train_df_final['seg_cluster']!='nan']
len(seg_train_df_final)
# -
cluster_list = seg_train_df_final['seg_cluster'].unique()
# cluster_list = [1]
cluster_list
# +
# prepare data
seg_train_df_final_dict = dict((key, seg_train_df_final[seg_train_df_final['seg_cluster'] == key])
for key in cluster_list)
# +
# # run subclustering for direction - kmeans
# subcluster_parameters = {
# 'A': {
# 'n_clusters': 4
# },
# 'B': {
# 'n_clusters': 1
# },
# 'C': {
# 'n_clusters': 3
# }
# }
# subcluster_results = dict((key,
# cluster_kmeans(df=seg_train_df_final_dict[key],
# n_clusters=subcluster_parameters[key]['n_clusters'],
# feature_names=['bearing_median'],
# label_name='dir_cluster')
# )
# for key in cluster_list)
# +
# # # run subclustering for direction - hdbscan
# # define subclustering parameters
# subcluster_parameters = {
# 'A': {
# 'metric': 'euclidean',
# 'min_cluster_size': 1000,
# 'min_samples': 100,
# 'cluster_selection_epsilon': 1
# },
# 'B': {
# 'metric': 'euclidean',
# 'min_cluster_size': 1000,
# 'min_samples': 100,
# 'cluster_selection_epsilon': 1
# },
# 'C': {
# 'metric': 'euclidean',
# 'min_cluster_size': 1000,
# 'min_samples': 100,
# 'cluster_selection_epsilon': 1
# }
# }
# # run subclustering for lanes
# subcluster_results = dict((key,
# cluster_hdbscan(df=seg_train_df_final_dict[key],
# parameters=subcluster_parameters[key],
# feature_names=['x', 'y'],
# label_name='dir_cluster')
# )
# for key in cluster_list)
# +
lane_parameter = {
'A': {
'min_samples': 100,
'cluster_selection_epsilon': 1
},
'B': {
'min_samples': 100,
'cluster_selection_epsilon': 1
},
'C': {
'min_samples': 50,
'cluster_selection_epsilon': 1
}
}
subcluster_results = dict((key, cluster_dbscan(df=seg_train_df_final_dict[key],
parameters=lane_parameter[key],
feature_names=['x', 'y'],
label_name='dir_cluster',
verbose=False)) for key in cluster_list)
# +
subcluster_results_df = pd.concat(list(subcluster_results.values()))
# # filter out "outliers" within the cluster
# subcluster_results_df = subcluster_results_df[subcluster_results_df['lane_subcluster']!=-1]
subcluster_results_df['seg_dir_cluster'] = subcluster_results_df['seg_cluster'].astype(
str) + "_" + subcluster_results_df['dir_cluster'].astype(str)
# -
len(subcluster_results_df)
# +
min_cluster_size = 150  # seg_dir clusters smaller than this are deleted
checksum = subcluster_results_df.groupby(['seg_dir_cluster']).size()
exclude = checksum[checksum < min_cluster_size].reset_index()
# exclude
subcluster_results_df = subcluster_results_df[~subcluster_results_df['seg_dir_cluster'].isin(
    exclude['seg_dir_cluster'])].copy()
exclude
# -
len(subcluster_results_df)
# +
# taking a look at convex hull with directions
# this can be viewed as extended congested areas
df = subcluster_results_df.copy()
# build a dictionary of convex hull points
cluster_pt_dict = {}
for cluster in df['seg_dir_cluster'].unique():
cluster_pt_dict[cluster] = df[
df['seg_dir_cluster'] == cluster][['x', 'y']].to_numpy()
def get_convex_hull_indices(pts_array):
hull = ConvexHull(pts_array)
hull_indices = np.unique(hull.simplices.flat)
hull_pts = pts_array[hull_indices, :]
return(hull_pts)
# get convex hull objects
cluster_hull_objs = {}
for cluster in list(cluster_pt_dict.keys()):
cluster_hull_objs[cluster] = ConvexHull(cluster_pt_dict[cluster])
# get convex hull vertex points
cluster_hull_dict = {}
for cluster in list(cluster_pt_dict.keys()):
cluster_hull_dict[cluster] = get_convex_hull_indices(
cluster_pt_dict[cluster])
# plot
for cluster in list(cluster_pt_dict.keys()):
plt.plot(cluster_pt_dict[cluster][:, 0],
cluster_pt_dict[cluster][:, 1], ',')
hull = ConvexHull(cluster_pt_dict[cluster])
for simplex in hull.simplices:
plt.plot(cluster_pt_dict[cluster][simplex, 0],
cluster_pt_dict[cluster][simplex, 1], 'k-')
# +
# check size of clusters
subcluster_results_df.groupby('seg_dir_cluster').count()
# -
points.array()[0]
# +
# build seg_dir_lane_cluster from visual inspection
# this effectively cleans up the clustering result and drops clusters that aren't meaningful
seg_dir_lane_map = {
    'B_3': 'Green_1', 'B_4': 'Green_1', 'B_5': 'Green_1', 'B_6': 'Green_1',
    'B_0': 'Green_2',
    'B_1': 'Green_3', 'B_2': 'Green_3',
    'C_0': 'Yellow_1',
    'C_1': 'Yellow_2',
    'C_2': 'Yellow_3',
    'A_0': 'Red_1', 'A_3': 'Red_1',
    'A_1': 'Red_2', 'A_7': 'Red_2', 'A_8': 'Red_2', 'A_15': 'Red_2',
    'A_2': 'Red_3', 'A_9': 'Red_3',
}
subcluster_results_df['seg_dir_lane_cluster'] = subcluster_results_df['seg_dir_cluster'].\
    map(seg_dir_lane_map).fillna('Exclude')
subcluster_results_df = subcluster_results_df[subcluster_results_df['seg_dir_lane_cluster'] != 'Exclude']
# +
# visual inspect clusters by map - color by clusters
# https://datashader.org/user_guide/Geography.html
# https://holoviews.org/reference/elements/bokeh/Tiles.html
## hv.element.tiles.tile_sources
from holoviews.element import tiles
from datashader.utils import lnglat_to_meters
# https://datashader.org/getting_started/Interactivity.html
from datashader.colors import Sets1to3
# https://github.com/holoviz/datashader/issues/767
import colorcet as cc
long_key = list(set(cc.glasbey_cool + cc.glasbey_warm + cc.glasbey_dark))
df = subcluster_results_df#[seg_train_df['seg_cluster']>=0].copy()
# df = subcluster_results_df[subcluster_results_df['dir_cluster']>=0].copy()
# df = subcluster_results_df[subcluster_results_df['seg_dir_cluster'].isin(['A_0', 'A_1', 'A_2', 'A_3', 'A_4', 'A_5', 'A_6', 'A_7', 'A_8', 'A_9', 'A_10', 'A_11', 'A_12', 'A_13', 'A_14', 'A_15', 'A_16', 'A_17', 'A_18'])].copy()
# df = subcluster_results_df[subcluster_results_df['seg_dir_cluster'].isin(['A_0', 'A_3', 'A_10'])].copy()
# df = seg_train_df[seg_train_df['seg_cluster']==0].copy()
points = hv.Points(df.copy())
hv.extension('bokeh')
hv.output(backend='bokeh')
#tiles.EsriImagery() * hd.datashade(points).opts(hv.opts(width=750, height=350))
#tiles.StamenTerrainRetina() * hd.datashade(points).opts(hv.opts(width=750, height=350))
#tiles.StamenTerrain() *
tiles.CartoLight() * hd.datashade(points,
aggregator=ds.count_cat('seg_dir_lane_cluster'),
color_key=long_key).opts(hv.opts(width=750, height=350)) #* color_points
#tiles.CartoLight() * hd.dynspread(hd.datashade(points,
# aggregator=ds.count_cat('seg_cluster'),
# color_key=long_key).opts(hv.opts(width=750, height=350)), threshold=0.4)
# +
# visual inspect clusters by facet plot
# https://seaborn.pydata.org/generated/seaborn.FacetGrid.html
# subcluster_results_df['tmp'] = 1
g = sns.FacetGrid(
subcluster_results_df,
hue='seg_dir_lane_cluster',
col_wrap=5,
height=4,
legend_out=True,
# col='tmp'
col='seg_dir_lane_cluster'
)
g = g.map(plt.scatter, 'x', 'y', s=0.05, marker='.') # , edgecolor="w")
# -
# ## Merge results with full parsed data
# +
# taking a look at convex hull with lane and directions
# this can be viewed as extended congested areas
df = subcluster_results_df.copy()
# build a dictionary of convex hull points
cluster_pt_dict = {}
for cluster in df['seg_dir_lane_cluster'].unique():
cluster_pt_dict[cluster] = df[
df['seg_dir_lane_cluster'] == cluster][['x', 'y']].to_numpy()
def get_convex_hull_indices(pts_array):
hull = ConvexHull(pts_array)
hull_indices = np.unique(hull.simplices.flat)
hull_pts = pts_array[hull_indices, :]
return(hull_pts)
# get convex hull objects
cluster_hull_objs = {}
for cluster in list(cluster_pt_dict.keys()):
cluster_hull_objs[cluster] = ConvexHull(cluster_pt_dict[cluster])
# get convex hull vertex points
cluster_hull_dict = {}
for cluster in list(cluster_pt_dict.keys()):
cluster_hull_dict[cluster] = get_convex_hull_indices(
cluster_pt_dict[cluster])
# plot
for cluster in list(cluster_pt_dict.keys()):
plt.plot(cluster_pt_dict[cluster][:, 0],
cluster_pt_dict[cluster][:, 1], ',')
hull = ConvexHull(cluster_pt_dict[cluster])
for simplex in hull.simplices:
plt.plot(cluster_pt_dict[cluster][simplex, 0],
cluster_pt_dict[cluster][simplex, 1], 'k-')
# +
# https://stackoverflow.com/questions/16750618/whats-an-efficient-way-to-find-if-a-point-lies-in-the-convex-hull-of-a-point-cl/16898636#16898636
def in_hull(p, hull):
    """
    Test if the points in `p` lie inside `hull`.

    `p` should be an `NxK` array of coordinates of `N` points in `K` dimensions.
    `hull` is either a scipy.spatial.Delaunay object or an `MxK` array of the
    coordinates of `M` points in `K` dimensions, for which a Delaunay
    triangulation will be computed.
    """
    from scipy.spatial import Delaunay
    if not isinstance(hull, Delaunay):
        hull = Delaunay(hull)
    return hull.find_simplex(p) >= 0
# +
# iterate over convex hull objects and match points
cluster_hull_objs.keys()
# +
df = parsed_df.copy()
df['x_id'] = df['x'] * 100
df['x_id'] = df['x_id'].astype(int)
df['y_id'] = df['y'] * 100
df['y_id'] = df['y_id'].astype(int)
# save ids to parsed_df
parsed_df = df.copy()
df['count'] = 1
# get count and angle by unique location
df = df.\
groupby(['x', 'y', 'x_id', 'y_id']).\
agg({"count": np.sum, 'bearing': np.median}).\
rename(columns={'bearing': 'bearing_median'}).\
reset_index()
# +
all_cluster_cols = []
cluster_keys = list(cluster_hull_dict.keys())
cluster_keys.sort()
for cluster_hull in cluster_keys:
col_name = "cluster_{}".format(str(cluster_hull))
all_cluster_cols.append(col_name)
df[col_name] = in_hull(
p=df[['x', 'y']].to_numpy(),
hull=cluster_hull_dict[cluster_hull])
    df.loc[df[col_name], 'seg_dir_lane_cluster'] = str(cluster_hull)
df = df.drop(columns=[col_name])
# +
# merge id table with name table
# use all points - allow duplicate identicals
# clustered_df = parsed_df.merge(
# df.drop(columns=['x', 'y']), on=['x_id', 'y_id'])
# use only unique points - disallow identicals
subcluster_results_df = df.copy()
subcluster_results_df['seg_dir_lane_cluster'] = subcluster_results_df['seg_dir_lane_cluster'].astype(str)
# +
# remove nan
subcluster_results_df = subcluster_results_df[subcluster_results_df['seg_dir_lane_cluster']!='nan']
# -
# Merge seg_dir_lane_cluster with all applicable data points
# +
df = subcluster_results_df.copy()
df['x_id'] = df['x'] * 100
df['x_id'] = df['x_id'].astype(int)
df['y_id'] = df['y'] * 100
df['y_id'] = df['y_id'].astype(int)
subcluster_results_df = df[['x_id', 'y_id', 'seg_dir_lane_cluster']].copy()
subcluster_results_df
# -
clustered_df = parsed_df.merge(
subcluster_results_df, on=['x_id', 'y_id']).\
sort_values(by='record_id', axis=0) # make sure record is in order
clustered_df
clustered_df.groupby(['seg_dir_lane_cluster']).count()
########### End of Clustering Models for Roadway Geometries ###########
# # Calculating Congestion "Clusters" Results
# +
# for each cluster and each time step:
# cluster queues based on DBSCAN; set an average-speed eligibility threshold of ~10 kph (no tailgating) - if a whole set of points is fast but closely spaced, ignore it
#
# find the queue length with a furthest-point function - the resulting pair of points marks the start and end of a queue
#
# -
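A minimal sketch of the furthest-point idea described above: queue length taken as the largest pairwise distance within one congestion cluster (toy points; the O(n^2) pairwise computation is fine for small clusters):

```python
import numpy as np

# Queue length as the largest pairwise distance within one cluster of slow
# points; the two points realizing it mark the queue's start and end.
def queue_length(pts):
    diff = pts[:, None, :] - pts[None, :, :]
    dists = np.sqrt((diff ** 2).sum(axis=-1))
    i, j = np.unravel_index(dists.argmax(), dists.shape)
    return dists[i, j], pts[i], pts[j]

pts = np.array([[0.0, 0.0], [3.0, 4.0], [1.0, 1.0]])
length, start, end = queue_length(pts)
print(length)  # 5.0
```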
# ## Prepare data
# define speed threshold
speed_threshold = 10
# +
import math
# create time bin for the clustered df data
# calculate time bin based on max and min values, then do every x seconds
x_sec_bin = 0.02  # step size in seconds; shouldn't be too large; at 0.02 (the raw resolution) no binning is applied
min_time = min(clustered_df['time'])
max_time = max(clustered_df['time'])
if x_sec_bin <= 0.02:
clustered_df['time_bin'] = clustered_df['time'].copy()
else:
clustered_df['time_bin'] = x_sec_bin * \
np.round(clustered_df['time']/x_sec_bin, 0)
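The binning formula above, shown on toy timestamps with a hypothetical 0.5 s step so the effect is visible:

```python
import numpy as np

# Snap timestamps to a fixed-size bin: divide by the step, round, multiply
# back. With x_sec_bin = 0.5, 3.74 s and 3.76 s land in different bins.
x_sec_bin = 0.5
times = np.array([3.74, 3.76, 4.10])
bins = x_sec_bin * np.round(times / x_sec_bin, 0)
print(bins.tolist())  # [3.5, 4.0, 4.0]
```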
# +
# a quick analysis on the count by time bins
clustered_df['count'] = 1
cluster_time_df = clustered_df[clustered_df['speed'] < speed_threshold].\
groupby(['seg_dir_lane_cluster', 'time_bin']).\
agg({'count': np.sum}).\
reset_index()
# cluster_time_df = cluster_time_df[cluster_time_df['count'] > 1]
# for testing, use whole seconds only
# whole_seconds_only = ~(cluster_time_df['time'].astype(int) < cluster_time_df['time'])
# cluster_time_df = cluster_time_df[whole_seconds_only]
# cluster_time_list = cluster_time_df.\
# drop(columns=['count']).\
# to_numpy()
# # len(cluster_time_list)
# cluster_time_df[cluster_time_df['seg_dir_lane_cluster'] == '7_3'] # ['count'].
max(cluster_time_df['count'])
# -
min(cluster_time_df['count']) # let's get rid of these groups, they won't have any queues
len(clustered_df)
# +
clustered_df_eval = clustered_df.merge(cluster_time_df.rename(
columns={'count': 'time_bin_count'}), on=['seg_dir_lane_cluster', 'time_bin'])
# exclude cluster and time points with no more than 1 sample
clustered_df_eval = clustered_df_eval[clustered_df_eval['time_bin_count'] > 1]
# exclude points that are moving faster than 10 kph
clustered_df_eval = clustered_df_eval[clustered_df_eval['speed'] < speed_threshold]
# -
clustered_df_eval.groupby(['seg_dir_lane_cluster']).count()
# +
# [x for i, x in df.groupby(level=0, sort=False)]
cluster_df_eval_list = [x for i, x in clustered_df_eval.groupby(['seg_dir_lane_cluster', 'time_bin'], sort=False)]
# -
# ## Test model parameter
cluster_df_eval_list[0]['time']
# +
# congestion_parameter = {
# 'metric': 'euclidean',
# 'min_cluster_size': 2,
# 'min_samples': 2,
# 'cluster_selection_epsilon': 15
# }
# df = cluster_hdbscan(df=cluster_df_eval_list[96],
# parameters=congestion_parameter,
# feature_names=['x', 'y'],
# label_name='cong_flag',
# verbose=False)
congestion_parameter = {
'min_samples': 2,
'cluster_selection_epsilon': 20
}
df = cluster_dbscan(df=cluster_df_eval_list[96],
parameters=congestion_parameter,
feature_names=['x', 'y'],
label_name='cong_flag')
df
# -
g = sns.FacetGrid(df, col='cong_flag', col_wrap=5, height=4)
g = g.map(plt.scatter, 'x', 'y', s=10, marker='.')#, edgecolor="w")
# ## Run Congestion Clustering with HDBSCAN
# +
# run clustering for congestion for lanes
# congestion_parameter = {
# 'metric': 'euclidean',
# 'min_cluster_size': 2,
# 'min_samples': 2,
# 'cluster_selection_epsilon': 15
# }
# cong_cluster_df_eval_list = [(cluster_hdbscan(df=df,
# parameters=congestion_parameter,
# feature_names=['x', 'y'],
# label_name='cong_flag',
# verbose=False)
# )
# for df in cluster_df_eval_list]
congestion_parameter = {
'min_samples': 2,
'cluster_selection_epsilon': 20
}
cong_cluster_df_eval_list = [(cluster_dbscan(df=df,
parameters=congestion_parameter,
feature_names=['x', 'y'],
label_name='cong_flag',
verbose=False)
)
for df in cluster_df_eval_list]
# +
# visual checks
g = sns.FacetGrid(cong_cluster_df_eval_list[60], col='cong_flag', col_wrap=5, height=4)
g = g.map(plt.scatter, 'x', 'y', s=10, marker='.')#, edgecolor="w")
# +
# combine results from clustering
cong_cluster_df_result = pd.concat(cong_cluster_df_eval_list)
# +
# remove all outliers (not dense enough to qualify as queues)
cong_cluster_df_result = cong_cluster_df_result[cong_cluster_df_result['cong_flag'] != -1]
# -
cong_cluster_df_result.groupby(['seg_dir_lane_cluster']).count()
cong_cluster_df_result
# +
# build intermediate dataframe
# https://stackoverflow.com/questions/35234012/python-pandas-merge-two-tables-without-keys-multiply-2-dataframes-with-broadc
# calculate Euclidean distance
# more resources for more complex examples: https://kanoki.org/2019/12/27/how-to-calculate-distance-in-python-and-pandas-using-scipy-spatial-and-distance-functions/
def e_dist(x1, x2, y1, y2):
return np.sqrt((x1-x2) ** 2+(y1-y2) ** 2)
def getQueue(df):
"""This function requires the dataframe input to be groupped into appropriate clusters"""
df1 = df.copy()
df1['tmp'] = 1
if len(df1) > 0:
# put data in list
df_dist = pd.merge(df1, df1, on=['tmp'], suffixes=(
'_1', '_2'))
df_dist['dist'] = e_dist(
x1=df_dist['x_1'],
x2=df_dist['x_2'],
y1=df_dist['y_1'],
y2=df_dist['y_2'])
# get maximum distance in each group
# idx = df_dist.groupby(['cong_flag_1'])['dist'].transform(max) == df_dist['dist']
idx = df_dist['dist'].max() == df_dist['dist']
        # keeping the first is good enough - idx will match 2 copies (each pair appears twice)
result = df_dist[idx].iloc[0]
return result
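As a sanity check (toy data, not from the project), the cross-join trick used in `getQueue` can be exercised on a three-point cluster: merging the frame with itself on a constant `tmp` key enumerates all point pairs, and the pair with the maximum Euclidean distance defines the queue extent.

```python
import numpy as np
import pandas as pd

def e_dist(x1, x2, y1, y2):
    return np.sqrt((x1 - x2) ** 2 + (y1 - y2) ** 2)

# toy cluster: three points on a line at 0 m, 30 m and 100 m along x
toy = pd.DataFrame({'x': [0.0, 30.0, 100.0], 'y': [0.0, 0.0, 0.0]})
toy['tmp'] = 1
pairs = pd.merge(toy, toy, on='tmp', suffixes=('_1', '_2'))  # 3 x 3 = 9 pairs
pairs['dist'] = e_dist(pairs['x_1'], pairs['x_2'], pairs['y_1'], pairs['y_2'])
queue = pairs[pairs['dist'] == pairs['dist'].max()].iloc[0]
assert queue['dist'] == 100.0  # the queue spans the two farthest points
```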
# +
# queue_calc_eval_list = [x for i, x in cong_cluster_df_result.groupby(
# ['seg_dir_lane_cluster', 'time_bin', 'cong_flag'], sort=False)]
# +
# # test
# getQueue(queue_calc_eval_list[0])
# -
ct_queue_calc_result = cong_cluster_df_result.groupby(
['seg_dir_lane_cluster', 'time_bin', 'cong_flag']).apply(lambda grp: getQueue(grp))
# +
ct_queue_calc_result_final = ct_queue_calc_result.\
reset_index()\
[['seg_dir_lane_cluster', 'time_bin', 'cong_flag', 'record_id_1',
'record_id_2', 'lat_1', 'lon_1', 'lat_2', 'lon_2', 'dist']]
# for each row is a recorded queue
# where
# seg_dir_lane_cluster is the road segment direction and lane cluster group
# time_bin is the time stamp
# cong_flag is the queue number
# record_id_1 record id of the track and time of the start of the queue
# lat_1 is the latitude of start of the queue
# lon_1 is the longitude of start of the queue
# record_id_2 record id of the track and time of the end of the queue
# lat_2 is the latitude of end of the queue
# lon_2 is the longitude of end of the queue
# dist is the queue length
# -
# save all the queues
ct_queue_calc_result_final.to_csv('uas4t_tl_team_queue-revised.csv', index=False)
# ## Report maximum queue for each cluster
#
#
# +
# for each cluster, find the max queue length
# then report
## i. Maximum length of queue,
## ii. Lane the maximum length occurred,
## iii. Coordinates of the start and end of the maximum queue,
## iv. Timestamp of the maximum queue occurrence, and
## v. whether, when and where a spillback is formed (when applicable).
# -
ct_queue_calc_result_final
# +
# max queue by cluster
max_dist = ct_queue_calc_result_final.\
groupby(['seg_dir_lane_cluster']).\
agg({'dist': np.max}).\
rename(columns={'dist': 'max_queue_length'}).\
reset_index()
max_queue_df = ct_queue_calc_result_final.merge(max_dist, on='seg_dir_lane_cluster')
max_queue_df = max_queue_df[max_queue_df['max_queue_length'] == max_queue_df['dist']]
max_queue_df.to_csv('uas4t_tl_team_results-revised.csv', index=False)
# +
list(max_queue_df.columns)
# each row is a recorded max queue per cluster over one or more time intervals
# where
# seg_dir_lane_cluster is the road segment direction cluster group
# time_bin is the time stamp
# cong_flag is the queue number
# record_id_1 record id of the track and time of the start of the queue
# lat_1 is the latitude of start of the queue
# lon_1 is the longitude of start of the queue
# record_id_2 record id of the track and time of the end of the queue
# lat_2 is the latitude of end of the queue
# lon_2 is the longitude of end of the queue
# max_queue_length is the maximum queue length (equal to dist, aka queue length)
# + [markdown] colab_type="text" id="vnh9lVzoDKBw"
# # End of Notebook
# + colab={} colab_type="code" id="d_imAkofDLv8"
| Codes/Team 21/uas4t_tl_team-revised.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Deterministic Terms in Time Series Models
# +
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
plt.rc("figure", figsize=(16, 9))
plt.rc("font", size=16)
# -
# ## Basic Use
#
# Basic configurations can be directly constructed through `DeterministicProcess`. These can include a constant, a time trend of any order, and either a seasonal or a Fourier component.
#
# The process requires an index, which is the index of the full-sample (or in-sample).
#
# First, we initialize a deterministic process with a constant, a linear time trend, and a 5-period seasonal term. The `in_sample` method returns the full set of values that match the index.
# +
from statsmodels.tsa.deterministic import DeterministicProcess
index = pd.RangeIndex(0, 100)
det_proc = DeterministicProcess(
index, constant=True, order=1, seasonal=True, period=5
)
det_proc.in_sample()
# -
# The `out_of_sample` returns the next `steps` values after the end of the in-sample.
det_proc.out_of_sample(15)
# `range(start, stop)` can also be used to produce the deterministic terms over any range including in- and out-of-sample.
#
# ### Notes
#
# * When the index is a pandas `DatetimeIndex` or a `PeriodIndex`, then `start` and `stop` can be date-like (strings, e.g., "2020-06-01", or Timestamp) or integers.
# * `stop` is always included in the range. While this is not very Pythonic, it is needed since both statsmodels and Pandas include `stop` when working with date-like slices.
det_proc.range(190, 210)
# ## Using a Date-like Index
#
# Next, we show the same steps using a `PeriodIndex`.
index = pd.period_range("2020-03-01", freq="M", periods=60)
det_proc = DeterministicProcess(index, constant=True, fourier=2)
det_proc.in_sample().head(12)
det_proc.out_of_sample(12)
# `range` accepts date-like arguments, which are usually given as strings.
det_proc.range("2025-01", "2026-01")
# This is equivalent to using the integer values 58 and 70.
det_proc.range(58, 70)
# ## Advanced Construction
#
# Deterministic processes with features not supported directly through the constructor can be created using `additional_terms`, which accepts a list of `DeterministicTerm`. Here we create a deterministic process with two seasonal components: day-of-week seasonality with a 7-day period, and an annual seasonality captured through a Fourier component with a period of 365.25 days.
# +
from statsmodels.tsa.deterministic import Fourier, Seasonality, TimeTrend
index = pd.period_range("2020-03-01", freq="D", periods=2 * 365)
tt = TimeTrend(constant=True)
four = Fourier(period=365.25, order=2)
seas = Seasonality(period=7)
det_proc = DeterministicProcess(index, additional_terms=[tt, seas, four])
det_proc.in_sample().head(28)
# -
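For intuition, the Fourier seasonal terms can be sketched directly in NumPy, assuming the standard paired sin/cos construction (statsmodels' exact column order and naming may differ).

```python
import numpy as np

def fourier_terms(n, period, order):
    """Build order-k sin/cos pairs for a seasonal period."""
    t = np.arange(n)
    cols = []
    for k in range(1, order + 1):
        cols.append(np.sin(2 * np.pi * k * t / period))
        cols.append(np.cos(2 * np.pi * k * t / period))
    return np.column_stack(cols)

F = fourier_terms(730, 365.25, 2)
assert F.shape == (730, 4)
# at t = 0 every sine term is 0 and every cosine term is 1
assert np.allclose(F[0], [0, 1, 0, 1])
```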
# ## Custom Deterministic Terms
#
# The `DeterministicTerm` Abstract Base Class is designed to be subclassed to help users write custom deterministic terms. We next show two examples. The first is a broken time trend that allows a break after a fixed number of periods. The second is a "trick" deterministic term that allows exogenous data, which is not really a deterministic process, to be treated as if it were deterministic. This lets us simplify gathering the terms needed for forecasting.
#
# These are intended to demonstrate the construction of custom terms. They can definitely be improved in terms of input validation.
# +
from statsmodels.tsa.deterministic import DeterministicTerm
class BrokenTimeTrend(DeterministicTerm):
def __init__(self, break_period: int):
self._break_period = break_period
def __str__(self):
return "Broken Time Trend"
def _eq_attr(self):
return (self._break_period,)
def in_sample(self, index: pd.Index):
nobs = index.shape[0]
terms = np.zeros((nobs, 2))
terms[self._break_period :, 0] = 1
terms[self._break_period :, 1] = np.arange(
self._break_period + 1, nobs + 1
)
return pd.DataFrame(
terms, columns=["const_break", "trend_break"], index=index
)
def out_of_sample(
self, steps: int, index: pd.Index, forecast_index: pd.Index = None
):
# Always call extend index first
fcast_index = self._extend_index(index, steps, forecast_index)
nobs = index.shape[0]
terms = np.zeros((steps, 2))
# Assume break period is in-sample
terms[:, 0] = 1
terms[:, 1] = np.arange(nobs + 1, nobs + steps + 1)
return pd.DataFrame(
terms, columns=["const_break", "trend_break"], index=fcast_index
)
# -
btt = BrokenTimeTrend(60)
tt = TimeTrend(constant=True, order=1)
index = pd.RangeIndex(100)
det_proc = DeterministicProcess(index, additional_terms=[tt, btt])
det_proc.range(55, 65)
# Next, we write a simple "wrapper" for some actual exogenous data that simplifies constructing out-of-sample exogenous arrays for forecasting.
class ExogenousProcess(DeterministicTerm):
def __init__(self, data):
self._data = data
def __str__(self):
return "Custom Exog Process"
def _eq_attr(self):
return (id(self._data),)
def in_sample(self, index: pd.Index):
return self._data.loc[index]
def out_of_sample(
self, steps: int, index: pd.Index, forecast_index: pd.Index = None
):
forecast_index = self._extend_index(index, steps, forecast_index)
return self._data.loc[forecast_index]
# +
import numpy as np
gen = np.random.default_rng(98765432101234567890)
exog = pd.DataFrame(
gen.integers(100, size=(300, 2)), columns=["exog1", "exog2"]
)
exog.head()
# -
ep = ExogenousProcess(exog)
tt = TimeTrend(constant=True, order=1)
# The in-sample index
idx = exog.index[:200]
det_proc = DeterministicProcess(idx, additional_terms=[tt, ep])
det_proc.in_sample().head()
det_proc.out_of_sample(10)
# ## Model Support
#
# The only model that directly supports `DeterministicProcess` is `AutoReg`. A custom term can be set using the `deterministic` keyword argument.
#
# **Note**: Using a custom term requires that `trend="n"` and `seasonal=False`, so that all deterministic components come from the custom deterministic term.
# ### Simulate Some Data
#
# Here we simulate some data that has a weekly seasonality captured by a Fourier series.
gen = np.random.default_rng(98765432101234567890)
idx = pd.RangeIndex(200)
det_proc = DeterministicProcess(idx, constant=True, period=52, fourier=2)
det_terms = det_proc.in_sample().to_numpy()
params = np.array([1.0, 3, -1, 4, -2])
exog = det_terms @ params
y = np.empty(200)
y[0] = det_terms[0] @ params + gen.standard_normal()
for i in range(1, 200):
y[i] = 0.9 * y[i - 1] + det_terms[i] @ params + gen.standard_normal()
y = pd.Series(y, index=idx)
ax = y.plot()
# The model is then fit using the `deterministic` keyword argument. `seasonal` defaults to False but `trend` defaults to `"c"` so this needs to be changed.
# +
from statsmodels.tsa.api import AutoReg
mod = AutoReg(y, 1, trend="n", deterministic=det_proc)
res = mod.fit()
print(res.summary())
# -
# We can use the `plot_predict` to show the predicted values and their prediction interval. The out-of-sample deterministic values are automatically produced by the deterministic process passed to `AutoReg`.
fig = res.plot_predict(200, 200 + 2 * 52, True)
auto_reg_forecast = res.predict(200, 211)
auto_reg_forecast
# ## Using with other models
#
# Other models do not support `DeterministicProcess` directly. We can instead manually pass any deterministic terms as `exog` to model that support exogenous values.
#
# Note that `SARIMAX` with exogenous variables is OLS with SARIMA errors so that the model is
#
# $$
# \begin{align*}
# \nu_t & = y_t - x_t \beta \\
# (1-\phi(L))\nu_t & = (1+\theta(L))\epsilon_t.
# \end{align*}
# $$
#
# The parameters on deterministic terms are not directly comparable to `AutoReg` which evolves according to the equation
#
# $$
# (1-\phi(L)) y_t = x_t \beta + \epsilon_t.
# $$
#
# When $x_t$ contains only deterministic terms, these two representations are equivalent (assuming $\theta(L)=0$ so that there is no MA).
#
# +
from statsmodels.tsa.api import SARIMAX
det_proc = DeterministicProcess(idx, period=52, fourier=2)
det_terms = det_proc.in_sample()
mod = SARIMAX(y, order=(1, 0, 0), trend="c", exog=det_terms)
res = mod.fit(disp=False)
print(res.summary())
# -
# The forecasts are similar but differ since the parameters of the `SARIMAX` are estimated using MLE while `AutoReg` uses OLS.
sarimax_forecast = res.forecast(12, exog=det_proc.out_of_sample(12))
df = pd.concat([auto_reg_forecast, sarimax_forecast], axis=1)
df.columns = ["AutoReg", "SARIMAX"]
df
| v0.12.2/examples/notebooks/generated/deterministics.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python [conda env:anaconda3]
# language: python
# name: conda-env-anaconda3-py
# ---
# + [markdown] nbpresent={"id": "263513d4-7846-4852-a36c-8b73c5dd2d42"} slideshow={"slide_type": "slide"}
# # Introduction to Data Science
# # Lecture 21: Dimensionality Reduction
# *COMP 5360 / MATH 4100, University of Utah, http://datasciencecourse.net/*
#
# In this lecture, we'll discuss
# * dimensionality reduction
# * Principal Component Analysis (PCA)
# * using PCA for visualization
#
# Recommended Reading:
# * <NAME>, <NAME>, <NAME>, and <NAME>, An Introduction to Statistical Learning, Ch. 10.2 [digital version available here](http://www-bcf.usc.edu/~gareth/ISL/)
# * <NAME>, [Principal Component Analysis: Explained Visually](http://setosa.io/ev/principal-component-analysis/)
#
# + nbpresent={"id": "a993cd6a-14c4-4df6-97da-149af3391ae1"} slideshow={"slide_type": "slide"}
# imports and setup
import numpy as np
import pandas as pd
pd.set_option('display.notebook_repr_html', False)
from sklearn.datasets import load_iris, load_digits
from sklearn.preprocessing import scale
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans, AgglomerativeClustering
from sklearn import metrics
from sklearn.metrics import homogeneity_score
import matplotlib.pyplot as plt
from matplotlib.colors import ListedColormap
from mpl_toolkits.mplot3d import Axes3D
# %matplotlib inline
plt.rcParams['figure.figsize'] = (10, 6)
plt.style.use('ggplot')
import seaborn as sns
# + [markdown] nbpresent={"id": "add0a894-8d27-4164-b449-26a63a4af899"} slideshow={"slide_type": "slide"}
# ## Recap: Supervised vs. Unsupervised Learning
#
# ### Supervised Learning
# **Data:** both the features, $x$, and a response, $y$, for each item in the dataset.
#
# **Goal:** 'learn' how to predict the response from the features.
#
# **Examples:**
# * Regression
# * Classification
#
#
# ### Unsupervised Learning
# **Data:** only the features, $x$, for each item in the dataset.
#
# **Goal:** discover 'interesting' things about the dataset.
#
# **Examples:**
# * Clustering
# * Dimensionality reduction, Principal Component Analysis (PCA)
# + [markdown] nbpresent={"id": "9455c537-9655-401c-9a2b-d78d973bf524"} slideshow={"slide_type": "slide"}
# ## Dimensionality Reduction
#
#
# In data science, [**dimensionality reduction**](https://en.wikipedia.org/wiki/Dimensionality_reduction) is the process of reducing the number of features in a dataset.
#
# There are two approaches to dimensionality reduction: **feature selection** and **feature extraction**.
#
# In **feature selection**, one simply picks a subset of the available features.
#
# In **feature extraction**, the data is transformed from a high-dimensional space to a lower-dimensional space. The most common method is called **principal component analysis (PCA)**, where the transformation is taken to be linear, but many other methods exist. In this class, we'll focus on PCA.
#
# **Why dimensionality reduction?**
# - Removes redundancies and simplifies the dataset, making it easier to understand.
# - It's easier to visualize low-dimensional data.
# - It reduces storage space for large datasets (because there are fewer features).
# - It reduces time for computationally intensive tasks (because less computation is required).
# - Reducing dimensionality can help avoid overfitting in supervised learning tasks.
# + [markdown] nbpresent={"id": "83bb3f29-8308-4f49-a7eb-26065efef2f0"} slideshow={"slide_type": "slide"}
# ## Principal Component Analysis (PCA)
#
# **Problem:** Many datasets have too many features to be able to explore or understand in a reasonable way. It's difficult to even make a reasonable plot for a high-dimensional dataset.
#
# **Idea**: In a [Principal Component Analysis (PCA)](https://en.wikipedia.org/wiki/Principal_component_analysis), we find a small number of new features, which are linear combinations of the old features, that 'explain' most of the variance in the data. The *principal component directions* are the directions in feature space in which the data is the most variable.
#
# Before we get into the mathematical description of Principal Component Analysis (PCA), we can gain a lot of intuition by taking a look at [this visual overview](http://setosa.io/ev/principal-component-analysis/) by <NAME>.
#
# **Mathematical description:** Let the $p$ features in our dataset be $x = (x_1, x_2, \ldots x_p)$. We define a new feature, the *first principal component direction*, by
# $$
# z_1 = \phi_{1,1} x_1 + \phi_{2,1} x_2 + \cdots + \phi_{p,1} x_p = \phi_1^t x
# $$
# Here, the coefficients $\phi_{j,1}$ are the *loadings* of the $j$-th feature on the first principal component. The vector $\phi_1 = (\phi_{1,1}, \phi_{2,1},\cdots, \phi_{p,1})$ is called the *loadings vector* for the first principal component.
#
# We want to find the loadings so that $z_1$ has maximal sample variance.
#
# Let $X$ be the $n\times p$ matrix where $X_{i,j}$ is the $j$-th feature for item $i$ in the dataset. $X$ is just the collection of the data in a matrix.
#
# **Important:** Assume each of the variables has been normalized to have mean zero, *i.e.*, the columns of $X$ should have zero mean.
#
# A short calculation shows that the sample variance of $z_1$ is then given by
# $$
# Var(z_1) = \frac{1}{n} \sum_{i=1}^n \left( \sum_{j=1}^p \phi_{j,1} X_{i,j} \right)^2.
# $$
# The variance can be arbitrarily large if the $\phi_{j,1}$ are allowed to be arbitrarily large. We constrain the $\phi_{j,1}$ to satisfy $\sum_{j=1}^p \phi_{j,1}^2 = 1$. In vector notation, this can be written $\| \phi_1 \| = 1$.
#
# Putting this together, the first principal component is defined by $z_1 = \phi_1^t x$ where $\phi_1$ is the solution to the optimization problem
# \begin{align*}
# \max_{\phi_1} \quad & \textrm{Var}(z_1) \\
# \text{subject to} \quad & \| \phi_1\|^2 = 1.
# \end{align*}
# Using linear algebra, it can be shown that $\phi_1$ is exactly the eigenvector corresponding to the largest eigenvalue of the *Gram matrix*, $X^tX$.
#
# We similarly define the second principal direction to be the linear combination of the features,
# $z_2 = \phi_2^t x$ with the largest variance, subject to the additional constraint that $z_2$ be uncorrelated with $z_1$. This is equivalent to $\phi_1^t \phi_2 = 0$. This corresponds to taking $\phi_2$ to be the eigenvector corresponding to the second largest eigenvalue of $X^tX$. Higher principal directions are defined analogously.
# + [markdown] nbpresent={"id": "cf05a576-1206-4860-b4c5-c5635caf29c4"} slideshow={"slide_type": "slide"}
# ## PCA in practice
# We can use the [```PCA``` function](http://scikit-learn.org/stable/modules/generated/sklearn.decomposition.PCA.html) from the ```sklearn.decomposition``` library.
#
# ### Example: the Iris dataset
#
# The dataset contains 4 features (attributes) of 150 samples from 3 different types of iris plants (50 samples per type).
#
# **Features (attributes):**
# 1. sepal length (cm)
# 2. sepal width (cm)
# 3. petal length (cm)
# 4. petal width (cm)
#
# **Classes:**
# 1. Iris Setosa
# 2. Iris Versicolour
# 3. Iris Virginica
# + nbpresent={"id": "9ad82db0-fe6d-4641-9f3e-5c50f6895823"} slideshow={"slide_type": "-"}
# import dataset
iris = load_iris()
X = iris.data
y = iris.target
# Create color maps
cmap_light = ListedColormap(['#FFAAAA', '#AAFFAA', '#AAAAFF'])
cmap_bold = ListedColormap(['#FF0000', '#00FF00', '#0000FF'])
# + [markdown] nbpresent={"id": "93d75ae9-84e0-421d-a83f-802ee6d8a7ea"} slideshow={"slide_type": "-"}
# ### Some previous ideas for plotting the data:
# 1. just plot along the first two dimensions and ignore the other dimensions
# 2. plot in three dimensions (3D scatter plot) and ignore the other dimensions
# 3. make a scatterplot matrix with all pairs of dimensions
# + nbpresent={"id": "51afb9ab-40be-4c08-8867-43201b0a8f0c"} slideshow={"slide_type": "-"}
# plot along the first two dimensions and ignore the other dimensions
plt.scatter(X[:, 0], X[:, 1], c=y, cmap=cmap_bold,s=30)
plt.xlabel('Dimension 1')
plt.ylabel('Dimension 2')
plt.show()
# + nbpresent={"id": "044812ee-ff20-49fa-8b6f-d70f1b855e79"} slideshow={"slide_type": "-"}
# 3D scatter plot
fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
ax.scatter(X[:, 0], X[:, 1],zs= X[:, 2], c=y, cmap=cmap_bold,s=30)
ax.set_xlabel('Dimension 1')
ax.set_ylabel('Dimension 2')
ax.set_zlabel('Dimension 3')
plt.show()
# + nbpresent={"id": "fcbf8519-13af-4f7a-8a2e-2c9d2d398dda"} slideshow={"slide_type": "-"}
# scatterplot matrix
sns.set()
sns.pairplot(sns.load_dataset("iris"), hue="species");
# + [markdown] nbpresent={"id": "c541620b-53c2-44dc-b39c-21cf18667d0b"} slideshow={"slide_type": "-"}
# ### New idea: use PCA to plot the 2 most 'important' directions
# + nbpresent={"id": "7df622c0-2840-4e18-9087-72b543c0b200"} slideshow={"slide_type": "-"}
# PCA analysis
pca_model = PCA()
X_PCA = pca_model.fit_transform(X)
plt.scatter(X_PCA[:, 0], X_PCA[:, 1], c=y, cmap=cmap_bold,s=30)
plt.xlabel('Principal Component 1')
plt.ylabel('Principal Component 2')
plt.show()
# + [markdown] nbpresent={"id": "acdf5829-6cd5-4112-804a-43a658db822e"} slideshow={"slide_type": "-"}
# ### Example: use PCA to visualize cluster analysis of iris data
# Principal components are very helpful for visualizing clusters.
# + nbpresent={"id": "907db97b-ae6b-4d7a-bc41-ea4648fb06f7"} slideshow={"slide_type": "-"}
cluster_model = AgglomerativeClustering(linkage="average", affinity='euclidean', n_clusters=3)
y_pred = cluster_model.fit_predict(X)
h = homogeneity_score(labels_true = y, labels_pred = y_pred)
print('homogeneity score for clustering is ' + str(h))
# plot using PCA
plt.scatter(X_PCA[:, 0], X_PCA[:, 1], c=y_pred, cmap=cmap_bold,s=30)
plt.xlabel('Principal Component 1')
plt.ylabel('Principal Component 2')
plt.show()
# + [markdown] nbpresent={"id": "e019094b-91c5-47ab-81bc-4e4835226853"} slideshow={"slide_type": "-"}
# ## Number of principal components
#
# For plotting the data, we generally just use the first 2 principal components. In other applications requiring dimensionality reduction, you might want to identify the number of principal components that can be used to explain the data. This can be done by considering the percentage of variance explained by each component or a *scree plot*.
# + nbpresent={"id": "aae10a82-e83a-4596-b6fe-7f48f5785345"} slideshow={"slide_type": "-"}
# Variance ratio of the four principal components
var_ratio = pca_model.explained_variance_ratio_
print(var_ratio)
plt.plot([1,2,3,4], var_ratio, '-o')
plt.ylabel('Proportion of Variance Explained')
plt.xlabel('Principal Component')
plt.xlim(0.75,4.25)
plt.ylim(0,1.05)
plt.xticks([1,2,3,4])
plt.show()
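One common rule of thumb (illustrative, not from the lecture) is to keep the smallest number of components whose cumulative explained variance reaches a target share, e.g. 95%:

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.decomposition import PCA

X = load_iris().data
cum = np.cumsum(PCA().fit(X).explained_variance_ratio_)
# smallest number of components with cumulative explained variance >= 95%
n_components = int(np.searchsorted(cum, 0.95) + 1)
print(n_components)
```

For the (unscaled) iris data this keeps only two components, since the first two already explain well over 95% of the variance.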
# + [markdown] nbpresent={"id": "02fb29b4-f0ce-4885-9fea-f789e27d6ec8"} slideshow={"slide_type": "slide"}
# ## Example: visualizing clusters in the MNIST handwritten digit dataset
#
# The MNIST handwritten digit dataset consists of images of handwritten digits, together with labels indicating which digit is in each image.
#
# Because both the features and the labels are present in this dataset (and labels for large datasets are generally difficult/expensive to obtain), this dataset is frequently used as a benchmark to compare various methods.
# For example, [this webpage](http://yann.lecun.com/exdb/mnist/) describes a variety of different classification results on MNIST (Note, the tests on this website are for a larger and higher resolution dataset than we'll use.) To see a comparison of classification methods implemented in scikit-learn on the MNIST dataset, see
# [this page](http://scikit-learn.org/stable/auto_examples/classification/plot_digits_classification.html).
# The MNIST dataset is also frequently used for benchmarking clustering algorithms, and because it has labels, we can evaluate the homogeneity or purity of the clusters.
#
# There are several versions of the dataset. We'll use the one that is built-in to scikit-learn, described [here](http://scikit-learn.org/stable/modules/generated/sklearn.datasets.load_digits.html).
#
# * Classes: 10
# * Samples per class: $\approx$180
# * Samples total: 1797
# * Dimensionality: 64 (8 pixels by 8 pixels)
# * Features: integers 0-16
#
# Here are some examples of the images. Note that the digits have been size-normalized and centered in a fixed-size ($8\times8$ pixels) image.
#
# <img src="http://scikit-learn.org/stable/_images/sphx_glr_plot_digits_classification_001.png" width="500">
#
# + nbpresent={"id": "5520b1c6-3264-4960-85a7-9945ead4045f"} slideshow={"slide_type": "-"}
digits = load_digits()
X = scale(digits.data)
y = digits.target
print(type(X))
n_samples, n_features = X.shape
n_digits = len(np.unique(digits.target))
print("n_digits: %d, n_samples %d, n_features %d" % (n_digits, n_samples, n_features))
plt.figure(figsize= (10, 10))
for ii in np.arange(25):
plt.subplot(5, 5, ii+1)
plt.imshow(np.reshape(X[ii,:],(8,8)), cmap='Greys',interpolation='none')
plt.show()
# + [markdown] nbpresent={"id": "0290f46c-6651-4423-9f36-8c7185ae93fb"} slideshow={"slide_type": "-"}
# Here we'll use PCA to visualize the results of a clustering of the MNIST dataset.
#
# This example was taken from the [scikit-learn examples](http://scikit-learn.org/stable/auto_examples/cluster/plot_kmeans_digits.html).
# +
X_PCA = PCA(n_components=2).fit_transform(X)
kmeans_model = KMeans(init='k-means++', n_clusters=n_digits, n_init=10)
kmeans_model.fit(X_PCA)
print(metrics.homogeneity_score(labels_true=y, labels_pred=kmeans_model.labels_))
# Plot the decision boundaries. For that, we will assign a color to each point in a mesh
x_min, x_max = X_PCA[:, 0].min() - 1, X_PCA[:, 0].max() + 1
y_min, y_max = X_PCA[:, 1].min() - 1, X_PCA[:, 1].max() + 1
h = .1  # step size of the mesh
xx, yy = np.meshgrid(np.arange(x_min, x_max, h), np.arange(y_min, y_max, h))
# Obtain labels for each point in mesh.
Z = kmeans_model.predict(np.c_[xx.ravel(), yy.ravel()])
# Put the result into a color plot
Z = Z.reshape(xx.shape)
plt.imshow(Z, interpolation='nearest',
extent=(xx.min(), xx.max(), yy.min(), yy.max()),
cmap=plt.cm.Paired,
aspect='auto', origin='lower')
plt.plot(X_PCA[:, 0], X_PCA[:, 1], 'k.', markersize=2)
# Plot the centroids as a white X
centroids = kmeans_model.cluster_centers_
plt.scatter(centroids[:, 0], centroids[:, 1], marker='x', s=169, linewidths=3, color='w', zorder=10)
plt.title('K-means clustering on the digits dataset (PCA-reduced data)\n Centroids are marked with white cross')
plt.xlim(x_min, x_max)
plt.ylim(y_min, y_max)
plt.xticks(())
plt.yticks(())
plt.show()
# + [markdown] nbpresent={"id": "772ddbd5-3979-40e4-8503-71270b69e2c3"} slideshow={"slide_type": "slide"}
# ## Example: Analyze US Arrests dataset
#
# In HW8, you were asked to analyze the US Arrests dataset.
#
# This dataset describes 1973 violent crime rates by US State. The crimes considered are assault, murder, and rape. Also included is the percent of the population living in urban areas.
#
# The dataset is available as *USarrests.csv*. The dataset has 50 observations (corresponding to each state) on 4 variables:
# 1. Murder: Murder arrests (per 100,000 residents)
# 2. Assault: Assault arrests (per 100,000 residents)
# 3. UrbanPop: Percent urban population
# 4. Rape: Rape arrests (per 100,000 residents)
#
# You can read more about the dataset [here](https://stat.ethz.ch/R-manual/R-devel/library/datasets/html/USArrests.html).
#
# In HW8, you used clustering tools to understand how violent crimes differ between states. But it was difficult to visualize the results. Here, we'll use PCA to do this.
# + nbpresent={"id": "af685489-c043-4102-972e-61ad7c04c24a"} slideshow={"slide_type": "-"}
crime = pd.read_csv('USArrests.csv', index_col=0)
print(crime.shape)
print(crime.head())
# correlations and scatter plot matrix
print(crime.corr())
pd.plotting.scatter_matrix(crime, figsize=(10, 10), diagonal='hist');
# + [markdown] nbpresent={"id": "97bb54fa-aaaf-441d-92d1-103a0bd0e679"} slideshow={"slide_type": "-"}
# ### Principal Component Analysis (PCA)
#
# We visualize the data by performing the following steps:
# 1. Scale the dataset using the *scale* function of the sklearn.preprocessing library.
# 2. Calculate the principal components of the dataset.
# 3. Store the principal components in a pandas dataframe.
# 4. Plot a scatterplot of PC1 and PC2. Using the matplotlib function *annotate*, use the state names as markers (instead of dots).
# 5. Print the explained variance ratio of the PCA. Plot the explained variance ratio of the PCA. Interpret these values. Is it reasonable to reduce the four-dimensional space to two dimensions using PCA?
# + nbpresent={"id": "81f64ba0-27c4-4afb-8b1d-997d0b04c528"} slideshow={"slide_type": "-"}
# scale the dataset
X = scale(crime)
# find PCA and transform to new coordinates
pca_model = PCA()
X_PCA = pca_model.fit_transform(X)
# create a new pandas dataframe
df_plot = pd.DataFrame(X_PCA, columns=['PC1', 'PC2', 'PC3', 'PC4'], index=crime.index)
df_plot.head()
# + nbpresent={"id": "70f66cad-216c-499f-9078-13ae7b56acce"} slideshow={"slide_type": "-"}
fig,ax1 = plt.subplots()
ax1.set_xlim(-3.5,3.5)
ax1.set_ylim(-3,3)
# Plot Principal Components 1 and 2
for i,name in enumerate(crime.index):
ax1.annotate(name, (X_PCA[i,0], X_PCA[i,1]), ha='center',fontsize=10)
ax1.set_xlabel('First Principal Component')
ax1.set_ylabel('Second Principal Component')
plt.show()
# + nbpresent={"id": "9186eaab-0483-4b10-8ce8-3303914d1054"} slideshow={"slide_type": "-"}
# Variance ratio of the four principal components
var_ratio = pca_model.explained_variance_ratio_
print(var_ratio)
plt.plot([1,2,3,4], var_ratio, '-o')
plt.ylabel('Proportion of Variance Explained')
plt.xlabel('Principal Component')
plt.xlim(0.75,4.25)
plt.ylim(0,1.05)
plt.xticks([1,2,3,4])
plt.show()
# + [markdown] nbpresent={"id": "b829d6d7-6ae9-4381-a80a-24b73d8e88dc"} slideshow={"slide_type": "-"}
# 87% of the variance is explained by the first two principal components, so not much information is lost when reducing from four to two dimensions.
# + [markdown] nbpresent={"id": "a172f7bc-07c0-46ac-94f4-ee1c7b3f7ba5"} slideshow={"slide_type": "-"}
# ### Exercise: visualizing a k-means cluster analysis
# 1. Using k-means, cluster the states into $k=4$ clusters.
# 2. Use the principal components to plot the clusters. Again, label each point using the state name, and this time color the states according to cluster.
# + slideshow={"slide_type": "-"}
# Your solution here
# + nbpresent={"id": "fcee6d26-d206-4cec-978c-66b8774d95f4"} slideshow={"slide_type": "-"}
# Reference solution
k_means_model = KMeans(n_clusters=4,n_init=100)
y_pred = k_means_model.fit_predict(X)
print(pd.Series(k_means_model.labels_).value_counts())
for i in np.arange(4):
print(crime.index[y_pred==i])
# + nbpresent={"id": "dd7e84c6-b6f6-4d81-8b8f-6d7c2fa61dd6"} slideshow={"slide_type": "-"}
k_vals = np.arange(2,20)
inert = np.zeros(len(k_vals))
for ii,k in enumerate(k_vals):
m = KMeans(n_clusters=k,n_init=100)
m.fit_predict(X)
inert[ii] = m.inertia_
plt.plot(k_vals,inert)
plt.ylabel('inertia')
plt.xlabel('k')
plt.show()
# + [markdown] nbpresent={"id": "322df1b0-c401-431e-9597-3974ba7e6ab2"} slideshow={"slide_type": "-"}
# Because of the 'elbow' in the inertia at k=4, I would say four clusters is a good number.
# + nbpresent={"id": "ef644152-0b72-4e71-9eeb-b696d8942141"} slideshow={"slide_type": "-"}
fig,ax1 = plt.subplots()
ax1.set_xlim(-3.5,3.5)
ax1.set_ylim(-3,3)
cs = ['red','blue','black','green']
# Plot Principal Components 1 and 2
for i,name in enumerate(crime.index):
    ax1.annotate(name, (X_PCA[i,0], X_PCA[i,1]), ha='center', fontsize=10, color=cs[y_pred[i]])
ax1.set_xlabel('First Principal Component')
ax1.set_ylabel('Second Principal Component')
plt.show()
| 21-DimReduction/21-DimReduction.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
from sklearn.datasets import load_digits
# load the dataset
dataset = load_digits()
# check the number of samples and the feature dimensionality
dataset.data.shape
# +
from sklearn.preprocessing import PolynomialFeatures
# initialize the polynomial feature generator
pf = PolynomialFeatures(degree=2)
# expand the features of the data to a higher-dimensional space
hd_data = pf.fit_transform(dataset.data)
# check the sample count and feature dimensionality after expansion
hd_data.shape
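The jump in dimensionality follows a closed form: with n input features and degree d, `PolynomialFeatures` (including the default bias column) produces C(n + d, d) output columns, so the 64 pixel features at degree 2 become 2145:

```python
from math import comb

n, d = 64, 2
n_output_features = comb(n + d, d)  # 1 bias + 64 linear + 2080 quadratic terms
print(n_output_features)  # 2145
```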
# +
from sklearn.decomposition import PCA
# initialize the PCA dimensionality-reduction model
pca = PCA(n_components=3)
# reduce the features of the data to a lower-dimensional space
ld_data = pca.fit_transform(dataset.data)
# check the sample count and feature dimensionality after reduction
ld_data.shape
# -
| Chapter_5/.ipynb_checkpoints/Section_5.5.2.3-checkpoint.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # 202 Variable
#
# View more, visit my tutorial page: https://morvanzhou.github.io/tutorials/
# My Youtube Channel: https://www.youtube.com/user/MorvanZhou
#
# Dependencies:
# * torch: 1.4.0
#
# A Variable in torch is used to build a computational graph,
# but this graph is dynamic, in contrast to the static graphs in Tensorflow or Theano.
# So torch has no placeholders; you simply pass variables into the computational graph.
#
import torch
from torch.autograd import Variable
# +
tensor = torch.FloatTensor([[1,2],[3,4]]) # build a tensor
variable = Variable(tensor, requires_grad=True) # build a variable, usually for compute gradients
print(tensor) # [torch.FloatTensor of size 2x2]
print(variable) # [torch.FloatTensor of size 2x2]
# -
# Till now the tensor and variable seem the same.
#
# However, the variable is part of the computational graph and participates in automatic gradient computation.
#
# +
t_out = torch.mean(tensor*tensor) # x^2
v_out = torch.mean(variable*variable) # x^2
print(t_out)
print(v_out)
# -
v_out.backward() # backpropagation from v_out
# $$ v_{out} = {{1} \over {4}} sum(variable^2) $$
#
# the gradients w.r.t the variable,
#
# $$ {d(v_{out}) \over d(variable)} = {{1} \over {4}} 2 variable = {variable \over 2}$$
#
# let's check the result pytorch calculated for us below:
variable.grad
variable # this is data in variable format
variable.data # this is data in tensor format
variable.data.numpy() # numpy format
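As a sanity check of the derivation above, and independent of PyTorch, a NumPy finite-difference approximation of d(v_out)/d(variable) should match variable / 2:

```python
import numpy as np

x = np.array([[1., 2.], [3., 4.]])
f = lambda a: np.mean(a * a)  # the same function as v_out

eps = 1e-6
num_grad = np.zeros_like(x)
for i in range(x.shape[0]):
    for j in range(x.shape[1]):
        xp, xm = x.copy(), x.copy()
        xp[i, j] += eps
        xm[i, j] -= eps
        num_grad[i, j] = (f(xp) - f(xm)) / (2 * eps)

print(np.allclose(num_grad, x / 2, atol=1e-5))  # True
```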
# Note that we did `.backward()` on `v_out` but `variable` has been assigned new values on its `grad`.
#
# As this line
# ```
# v_out = torch.mean(variable*variable)
# ```
# will make a new variable `v_out` and connect it with `variable` in the computation graph.
type(v_out)
type(v_out.data)
| tutorial-contents-notebooks/202_variable.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python [default]
# language: python
# name: python2
# ---
# ## Near State-of-the-Art results at Object Recognition
# ### Presented by Eduonix
#
# In this project, we will be deploying a convolutional neural network (CNN) for object recognition. More specifically, we will be using the All-CNN network published in the 2015 ICLR paper, "Striving For Simplicity: The All Convolutional Net". This paper can be found at the following link:
#
# https://arxiv.org/pdf/1412.6806.pdf
#
# This convolutional neural network obtained state-of-the-art performance at object recognition on the CIFAR-10 image dataset in 2015. We will build this model using Keras, a high-level neural network application programming interface (API) that supports both Theano and Tensorflow backends. You can use either backend; however, I will be using Theano.
#
# In this project, we will learn to:
# * Import datasets from Keras
# * Use one-hot vectors for categorical labels
# * Add layers to a Keras model
# * Load pre-trained weights
# * Make predictions using a trained Keras model
#
# The dataset we will be using is the CIFAR-10 dataset, which consists of 60,000 32x32 color images in 10 classes, with 6000 images per class. There are 50000 training images and 10000 test images.
#
# ### 1. Loading the Data
#
# Let's dive right in! In these first few cells, we will import necessary packages, load the dataset, and plot some example images.
# Load necessary packages
from keras.datasets import cifar10
from keras.utils import np_utils
from matplotlib import pyplot as plt
import numpy as np
from PIL import Image
# load the data
(X_train, y_train), (X_test, y_test) = cifar10.load_data()
# Let's determine the dataset characteristics
print('Training Images: {}'.format(X_train.shape))
print('Testing Images: {}'.format(X_test.shape))
# Now for a single image
print(X_train[0].shape)
# +
# create a grid of 3x3 images
for i in range(0,9):
plt.subplot(330 + 1 + i)
img = X_train[i].transpose([1,2,0])
plt.imshow(img)
# show the plot
plt.show()
# -
# ### 2. Preprocessing the dataset
#
# First things first, we need to preprocess the dataset so the images and labels are in a form that Keras can ingest. To start, we'll define a NumPy seed for reproducibility, then normalize the images.
#
# Furthermore, we will also convert our class labels to one-hot vectors. This is a standard output format for neural networks.
# +
# Building a convolutional neural network for object recognition on CIFAR-10
# fix random seed for reproducibility
seed = 6
np.random.seed(seed)
# load the data
(X_train, y_train), (X_test, y_test) = cifar10.load_data()
# normalize the inputs from 0-255 to 0.0-1.0
X_train = X_train.astype('float32')
X_test = X_test.astype('float32')
X_train = X_train / 255.0
X_test = X_test / 255.0
# -
# class labels shape
print(y_train.shape)
print(y_train[0])
# The class labels are a single integer value (0-9). What we really want is a one-hot vector of length ten. For example, the class label of 6 should be denoted [0, 0, 0, 0, 0, 0, 1, 0, 0, 0]. We can accomplish this using the np_utils.to_categorical() function.
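What the conversion does can be sketched directly in NumPy (an illustration of the idea, not Keras's implementation):

```python
import numpy as np

y = np.array([6, 0, 9])           # integer class labels
num_classes = 10
one_hot = np.eye(num_classes)[y]  # row y[i] of the identity matrix is the one-hot vector
print(one_hot[0])  # [0. 0. 0. 0. 0. 0. 1. 0. 0. 0.]
```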
# +
# hot encode outputs
Y_train = np_utils.to_categorical(y_train)
Y_test = np_utils.to_categorical(y_test)
num_classes = Y_test.shape[1]
print(Y_train.shape)
print(Y_train[0])
# -
# ### 3. Building the All-CNN
#
# Using the paper as a reference, we can implement the All-CNN network in Keras. Keras models are built by simply adding layers, one after another.
#
# To make things easier for us later, we will wrap this model in a function, which will allow us to quickly and neatly generate the model later on in the project.
# +
# start building the model - import necessary layers
from keras.models import Sequential
from keras.layers import Dropout, Activation, Conv2D, GlobalAveragePooling2D
from keras.optimizers import SGD
def allcnn(weights=None):
# define model type - Sequential
model = Sequential()
# add model layers - Convolution2D, Activation, Dropout
model.add(Conv2D(96, (3, 3), padding = 'same', input_shape=(3, 32, 32)))
model.add(Activation('relu'))
model.add(Conv2D(96, (3, 3), padding = 'same'))
model.add(Activation('relu'))
model.add(Conv2D(96, (3, 3), padding = 'same', strides = (2,2)))
model.add(Dropout(0.5))
model.add(Conv2D(192, (3, 3), padding = 'same'))
model.add(Activation('relu'))
model.add(Conv2D(192, (3, 3), padding = 'same'))
model.add(Activation('relu'))
model.add(Conv2D(192, (3, 3), padding = 'same', strides = (2,2)))
model.add(Dropout(0.5))
model.add(Conv2D(192, (3, 3), padding = 'same'))
model.add(Activation('relu'))
model.add(Conv2D(192, (1, 1), padding = 'valid'))
model.add(Activation('relu'))
model.add(Conv2D(10, (1, 1), padding = 'valid'))
# add GlobalAveragePooling2D layer with Softmax activation
model.add(GlobalAveragePooling2D())
model.add(Activation('softmax'))
# load the weights
if weights:
model.load_weights(weights)
# return model
return model
# -
# ### 4. Defining Parameters and Training the Model
#
# We're all set! We are ready to start training our network. In the following cells, we will define our hyper parameters, such as learning rate and momentum, define an optimizer, compile the model, and fit the model to the training data.
# +
# define hyper parameters
learning_rate = 0.01
weight_decay = 1e-6
momentum = 0.9
# build model
model = allcnn()
# define optimizer and compile model
sgd = SGD(lr=learning_rate, decay=weight_decay, momentum=momentum, nesterov=True)
model.compile(loss='categorical_crossentropy', optimizer=sgd, metrics=['accuracy'])
# print model summary
print (model.summary())
# define additional training parameters
epochs = 350
batch_size = 32
# fit the model
model.fit(X_train, Y_train, validation_data=(X_test, Y_test), epochs=epochs, batch_size=batch_size, verbose = 1)
# -
# ### 5. Woah, that's a long time...
#
# Uh oh. It's apparent that training this deep convolutional neural network is going to take a long time, which is not surprising considering the network has about 1.3 million parameters. Updating this many parameters takes a considerable amount of time; unless, of course, you are using a Graphics Processing Unit (GPU). This is a good time for a quick lesson on the differences between CPUs and GPUs.
#
# The **central processing unit (CPU)** is often called the brains of the PC because it handles the majority of necessary computations. All computers have a CPU and this is what Keras and Theano automatically utilize.
#
# The **graphics processing unit (GPU)** is in charge of image rendering. The most advanced GPUs were originally designed for gamers; however, GPU-accelerated computing, the use of a GPU together with a CPU to accelerate deep learning, analytics, and engineering applications, has become increasingly common. In fact, the training of deep neural networks is not realistic without them.
#
# The most common GPUs for deep learning are produced by NVIDIA. Furthermore, the NVIDIA Deep Learning SDK provides high-performance tools and libraries to power GPU-accelerated machine learning applications. An alternative would be an AMD GPU in combination with the OpenCL libraries; however, these libraries have fewer active users and less support than the NVIDIA libraries.
#
# If your computer has an NVIDIA GPU, installing the CUDA Drivers and CUDA Toolkit from NVIDIA will allow Theano and Keras to utilize GPU-accelerated computing. The original paper mentions that it took approximately 10 hours to train the All-CNN network for 350 epochs using a modern GPU, which is considerably faster (several orders of magnitude) than training on a CPU would be.
#
# If you haven't already, stop the cell above. In the following cells, we'll save some time by loading pre-trained weights for the All-CNN network. Using these weights, we can evaluate the performance of the All-CNN network on the testing dataset.
# +
# define hyper parameters
learning_rate = 0.01
weight_decay = 1e-6
momentum = 0.9
# define weights and build model
weights = 'all_cnn_weights_0.9088_0.4994.hdf5'
model = allcnn(weights)
# define optimizer and compile model
sgd = SGD(lr=learning_rate, decay=weight_decay, momentum=momentum, nesterov=True)
model.compile(loss='categorical_crossentropy', optimizer=sgd, metrics=['accuracy'])
# print model summary
print (model.summary())
# test the model with pretrained weights
scores = model.evaluate(X_test, Y_test, verbose=1)
print("Accuracy: %.2f%%" % (scores[1]*100))
# -
# ### 6. Making Predictions
#
# Using the pretrained weights, we were able to achieve an accuracy of nearly 90 percent! Let's leverage this network to make some predictions. To start, we will generate a dictionary of class labels and names by referencing the website for the CIFAR-10 dataset:
#
# https://www.cs.toronto.edu/~kriz/cifar.html
#
# Next, we'll make predictions on nine images and compare the results to the ground-truth labels. Furthermore, we will plot the images for visual reference; this is object recognition, after all.
# +
# make dictionary of class labels and names
classes = range(0,10)
names = ['airplane',
'automobile',
'bird',
'cat',
'deer',
'dog',
'frog',
'horse',
'ship',
'truck']
# zip the names and classes to make a dictionary of class_labels
class_labels = dict(zip(classes, names))
# generate batch of 9 images to predict
batch = X_test[100:109]
labels = np.argmax(Y_test[100:109],axis=-1)
# make predictions
predictions = model.predict(batch, verbose = 1)
# -
# print our predictions
print(predictions)
# these are individual class probabilities, should sum to 1.0 (100%)
for image in predictions:
print(np.sum(image))
# use np.argmax() to convert class probabilities to class labels
class_result = np.argmax(predictions,axis=-1)
print(class_result)
# +
# create a grid of 3x3 images
fig, axs = plt.subplots(3, 3, figsize = (15, 6))
fig.subplots_adjust(hspace = 1)
axs = axs.flatten()
for i, img in enumerate(batch):
# determine label for each prediction, set title
for key, value in class_labels.items():
if class_result[i] == key:
title = 'Prediction: {}\nActual: {}'.format(class_labels[key], class_labels[labels[i]])
axs[i].set_title(title)
axs[i].axes.get_xaxis().set_visible(False)
axs[i].axes.get_yaxis().set_visible(False)
# plot the image
axs[i].imshow(img.transpose([1,2,0]))
# show the plot
plt.show()
# -
| Section 10 - Object+Recognition/Object Recognition/Object Recognition (Jupyter Notebook).ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .jl
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Julia 1.3.0
# language: julia
# name: julia-1.3
# ---
# # Pythagorean Triplet
#
# A Pythagorean triplet is a set of three natural numbers, {a, b, c}, for
# which,
#
# ```text
# a^2 + b^2 = c^2
# ```
#
# and such that,
#
# ```text
# a < b < c
# ```
#
# ## Example
# ```text
# 3^2 + 4^2 = 9 + 16 = 25 = 5^2.
# ```
#
# Given an input integer N, find all Pythagorean triplets for which `a + b + c = N`.
#
# For example, with N = 1000, there is exactly one Pythagorean triplet for which `a + b + c = 1000`: `{200, 375, 425}`.
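One way to approach this, sketched in Python for illustration (the exercise itself expects a Julia solution in `pythagorean-triplet.jl`): for each candidate a, the two constraints a + b + c = N and a^2 + b^2 = c^2 determine b in closed form as b = N*(N - 2a) / (2*(N - a)), so the triplets can be enumerated without a nested loop:

```python
def pythagorean_triplets(n):
    """All (a, b, c) with a < b < c, a^2 + b^2 = c^2, and a + b + c == n."""
    triplets = []
    for a in range(1, n // 3):
        num = n * (n - 2 * a)   # b must equal n*(n - 2a) / (2*(n - a))
        den = 2 * (n - a)
        if num % den == 0:
            b = num // den
            c = n - a - b
            if a < b < c:
                triplets.append((a, b, c))
    return triplets

print(pythagorean_triplets(1000))  # [(200, 375, 425)]
```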
#
# ## Source
#
# [http://projecteuler.net/problem=9](http://projecteuler.net/problem=9)
#
# ## Version compatibility
# This exercise has been tested on Julia versions >=1.0.
#
# ## Submitting Incomplete Solutions
# It's possible to submit an incomplete solution so you can see how others have completed the exercise.
# ## Your solution
# submit
# ## Test suite
# +
# canonical data version 1.0.0
using Test
# include("pythagorean-triplet.jl")
@testset "triplets whose sum is 12" begin
@test pythagorean_triplets(12) == [(3, 4, 5)]
end
@testset "triplets whose sum is 108" begin
@test pythagorean_triplets(108) == [(27, 36, 45)]
end
@testset "triplets whose sum is 1000" begin
@test pythagorean_triplets(1000) == [(200, 375, 425)]
end
@testset "triplets whose sum is 1001" begin
@test pythagorean_triplets(1001) == []
end
@testset "returns all matching triplets" begin
@test pythagorean_triplets(90) == [(9, 40, 41), (15, 36, 39)]
end
@testset "several matching triplets" begin
@test pythagorean_triplets(840) == [
(40, 399, 401),
(56, 390, 394),
(105, 360, 375),
(120, 350, 370),
(140, 336, 364),
(168, 315, 357),
(210, 280, 350),
(240, 252, 348),
]
end
@testset "triplets for large number" begin
@test pythagorean_triplets(30000) == [
(1200, 14375, 14425),
(1875, 14000, 14125),
(5000, 12000, 13000),
(6000, 11250, 12750),
(7500, 10000, 12500),
]
end
# -
# ## Prepare submission
# To submit your exercise, you need to save your solution in a file called `pythagorean-triplet.jl` before using the CLI.
# You can either create it manually or use the following functions, which will automatically write every notebook cell that starts with `# submit` to the file `pythagorean-triplet.jl`.
#
# +
# using Pkg; Pkg.add("Exercism")
# using Exercism
# Exercism.create_submission("pythagorean-triplet")
| exercises/pythagorean-triplet/pythagorean-triplet.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# "Geo Data Science with Python"
# ### Notebook Exercise 4d
#
# ---
#
# # File I/O
#
#
# ### Task 1: Reading and Handling Capital Data from a File (5 points)
#
# The repository contains a data file with the name `capitals.txt`. You can open and read the file using the JupyterLab text editor. Make a copy of this file in your homework repository. However, do not change the content (this would sabotage the success of your exercise).
#
# In this task, you complete the code cell below, by:
# * Open the file `capitals.txt` for reading.
# * Read the file content. Choose a function that reads all capitals saved in the file into a list, and give the list variable the name `capList` (the name the check cell below expects).
# * Close the file.
# * Display the list variable on screen with `print()`.
#
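A minimal sketch of the pattern the task describes, using a self-created demo file so it runs anywhere (the graded cell below is left for you to fill in with `capitals.txt`):

```python
# create a small demo file so the sketch is self-contained
with open('demo_capitals.txt', 'w') as f:
    f.write('Tbilisi\nBerlin\nSantiago')

# the pattern the task describes: open, read all lines into a list, close
f = open('demo_capitals.txt', 'r')
capList = f.readlines()  # one string per line, trailing '\n' kept
f.close()
print(capList)  # ['Tbilisi\n', 'Berlin\n', 'Santiago']
```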
# +
"Enter your code below."
### YOUR CODE HERE
# +
# Check your code below.
assert capList[0] == 'Tbilisi\n'
assert capList[8] == 'Berlin\n'
assert capList[11] == 'Washington\n'
assert capList[-1] == 'Santiago'
# -
# ### Task 2: Review List Operations (3 points)
# ... to prepare them for file export.
#
# As you can see, you still have a newline control character `\n` at the end of the string entries in the list `capList`. Hence, convert this list into a new one called `myCapitals_clean`. For example, you could use list comprehension and the method `.rstrip()`, or list comprehension and slicing for that.
#
# Then, make an **explicit** copy of the variable `myCapitals_clean` and save it to the variable `myCapitals_clean_sorted`. (Refer to the notebook lesson about lists, to review what it means to make an explicit copy of lists and why this might be necessary!)
#
# At last, modify the list `myCapitals_clean_sorted` so that it contains alphabetically sorted (from A to Z) entries. Which method would be useful for that? Keep in mind, what we have learned about methods that mutate a variable!!!
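The operations described above can be sketched on a small illustrative list (the pattern only, not the graded solution):

```python
capList = ['Tbilisi\n', 'Berlin\n', 'Santiago']

# strip the trailing newline characters via list comprehension
myCapitals_clean = [c.rstrip('\n') for c in capList]

# explicit copy, so sorting does not mutate the original list
myCapitals_clean_sorted = myCapitals_clean[:]
myCapitals_clean_sorted.sort()  # .sort() mutates the list it is called on

print(myCapitals_clean)         # ['Tbilisi', 'Berlin', 'Santiago']
print(myCapitals_clean_sorted)  # ['Berlin', 'Santiago', 'Tbilisi']
```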
# +
"Enter your code below."
### YOUR CODE HERE
# +
# Check the result of your code for myCapitals_clean.
myCapitals_clean
# +
# Check the result of your code for myCapitals_clean_sorted.
myCapitals_clean_sorted
# +
# Check the result of your code.
assert myCapitals_clean[0] == 'Tbilisi'
assert myCapitals_clean[8] == 'Berlin'
assert myCapitals_clean_sorted[0] == 'Algiers'
assert myCapitals_clean_sorted[8] == 'London'
# -
# ### Task 3: Writing Capital Data to a File (3 points)
#
# Now let's write the sorted list `myCapitals_clean_sorted` into the file `capitalsSorted.txt`. In case you had issues with the string handling in Task 2 above and want to work on this one first, the cell below assigns the correct list to the variable `myCapitals_clean_sorted`. Then complete the code cell one further below.
#
# * Open a new (not yet existing) file for writing. Make sure to use a different file name than in the previous task, and that the file is saved in the same folder as the Jupyter notebook (you do not need to enter a file path, only a file name).
# * Write the list `myCapitals_clean_sorted` into the file. First, think about which function for writing data to a file might be most useful in this case.
# * Close the file.
myCapitals_clean_sorted = [
'Algiers',
'Asmara',
'Beijing',
'Berlin',
'Brasilia',
'Cairo',
'Caracas',
'Doha',
'London',
'Mexico City',
'Minsk',
'Montevideo',
'Moscow',
'Niamey',
'Quito',
'Santiago',
'Stockholm',
'Tbilisi',
'Teheran',
'Washington',
'Yerevan']
# +
"""Enter your code for writing the variable
myCapitals_clean_sorted into the file capitalsSorted.txt ."""
### YOUR CODE HERE
# -
# To check your results, open the file `capitalsSorted.txt` and check whether the content is what you intend it to be.
# ### Task 4 (4 points)
#
# Re-open the file `capitalsSorted.txt` that you have just written, assign the content to a variable `myCapitals_clean_sorted_returned`, and print it to screen. Write the code for that in the following cell.
# +
"""Add code to read the file capitalsSorted.txt and print the content to screen"""
### YOUR CODE HERE
# +
# Check your code
assert myCapitals_clean_sorted_returned == ['AlgiersAsmaraBeijingBerlinBrasiliaCairoCaracasDohaLondonMexico CityMinskMontevideoMoscowNiameyQuitoSantiagoStockholmTbilisiTeheranWashingtonYerevan']
# -
# Ohh, what is that?! When printing the variable `myCapitals_clean_sorted_returned`, you should receive something like this:
#
# `['AlgiersAsmaraBeijingBerlinBrasiliaCairoCaracasDohaLondonMexico CityMinskMontevideoMoscowNiameyQuitoSantiagoStockholmTbilisiTeheranWashingtonYerevan']`
#
#
# Why has everything merged into one string? Any idea?
#
# This is about the newline control characters `\n`. We deleted them before sorting the list. This was necessary because the capital string in the last line of the file has no newline control character. After sorting, that entry moved to the middle of the list, while another capital string moved to the end of the list. If we had kept the newline control characters and simply sorted and written the data to file, two capitals would have been merged on one line. You can create a new code cell below and try that out.
#
# Therefore, let's put the newline control characters back at the end of each capital string in the list, except the last one, and save this in a variable `myCapitals_clean_sorted2`. Since we want to focus more on file handling in this exercise, the cell below provides the code for this. The code applies list comprehension to add the control character at the end of each string; then the control character is removed from the last one using the function `replace`. You may study the code and try to reproduce it by yourself.
myCapitals_clean_sorted2 = [ e+'\n' for e in myCapitals_clean_sorted]
myCapitals_clean_sorted2[-1] = myCapitals_clean_sorted2[-1].replace('\n','')
myCapitals_clean_sorted2
# Now, write this new variable `myCapitals_clean_sorted2` into the file `capitalsSorted.txt`, using the code from above. Make sure to change out the variable name `myCapitals_clean_sorted` to `myCapitals_clean_sorted2`.
# +
"""Copy your code for writing the file below."""
### YOUR CODE HERE
# -
# ### Task 5 (optional, 4 extra credit points)
#
# In the code cells below, experiment with the different file reading and writing functions: `read()`, `readline()` and `readlines()` as well as `write()` and `writelines()`.
#
# For example:
# * Write the strings ‘hello’ and ‘world’ into a file (e.g. `hello.txt`) on two different lines by using a trailing ‘\n’
# * Close the file
# * Open the same file for reading.
# * Read data from the file into string in different ways: Experiment with the various functions `read()`, `readline()` and `readlines()`, to read the file in one bulk, as a list of lines or line by line into string variables.
# * Very advanced: Try the function seek().
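The suggested experiment can be sketched as one self-contained round trip that compares the three reading functions:

```python
# write two lines, each terminated by a newline
with open('hello.txt', 'w') as f:
    f.write('hello\n')
    f.write('world\n')

with open('hello.txt') as f:
    bulk = f.read()        # the whole file as a single string
with open('hello.txt') as f:
    first = f.readline()   # only the first line, newline included
with open('hello.txt') as f:
    lines = f.readlines()  # a list with one string per line

print(repr(bulk), repr(first), lines)
```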
# +
# add your code here to practice file reading & writing
# +
# add your code here to practice file reading & writing
# +
# add your code here to practice file reading & writing
# +
# add your code here to practice file reading & writing
# -
# ---
# If you are satisfied with your notebook, save it and push it to your homework repository.
| Exercise04d_Files.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Density Panel
# Author(s) <NAME> | July 17, 2019
#
# This tutorial provides a demonstration of the density panel plotting routine available in `mcmcplot`.
# import required packages
import numpy as np
from mcmcplot import mcmcplot as mcp
import mcmcplot
print(mcmcplot.__version__)
# # Generate Random Chains
# The plotting routines are designed to be used in conjunction with the result of a MCMC simulation. For the purpose of this example, we consider a randomly generated chain. We will consider a chain with 3 parameters that have the following distributions:
# - $p_{0} \sim N(1.0, 0.5)$
# - $p_{1} \sim N(2.5, 3.0)$
# - $p_{2} \sim N(-1.3, 0.75)$
nsimu = 1000
npar = 3
mu = np.array([1.0, 2.5, -1.3])
sig = np.array([0.5, 3.0, 0.75])
chain = np.zeros([nsimu, npar])
for ii in range(npar):
chain[:,ii] = sig[ii]*np.random.randn(nsimu,) + mu[ii]
# # Default Density Panel
# The density panel simply uses a Kernel Density Estimator (KDE) to generate a probability distribution from the sample points in the chain. Note, the plotting routines output the figure handle as well as a dictionary containing the settings used in generating the plot when the keyword argument `return_settings=True`.
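The idea behind the panel can be illustrated with a hand-rolled Gaussian KDE in NumPy (a sketch only, not `mcmcplot`'s internal code; the bandwidth of 0.2 is an arbitrary choice):

```python
import numpy as np

np.random.seed(0)
samples = 0.5 * np.random.randn(1000) + 1.0  # a chain like p0 ~ N(1.0, 0.5)

def kde(grid, samples, bandwidth=0.2):
    # average a Gaussian bump centred on every sample point
    diffs = (grid[:, None] - samples[None, :]) / bandwidth
    return np.exp(-0.5 * diffs ** 2).mean(axis=1) / (bandwidth * np.sqrt(2 * np.pi))

grid = np.linspace(-1.0, 3.0, 201)
density = kde(grid, samples)
print(grid[np.argmax(density)])  # the estimated density peaks near the true mean of 1.0
```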
f, settings = mcp.plot_density_panel(
chains=chain,
return_settings=True)
print(settings)
# # Add Histogram Beneath KDE
# We can add the chain histogram to the plot and adjust various aspects of the appearance.
settings = dict(
hist=dict(
color='g',
alpha=0.5))
fd = mcp.plot_density_panel(chains=chain,
hist_on=True,
settings=settings)
# As the chains were generated from a normal distribution, we would expect the histograms to agree well with the KDE.
# # Change Marker Style and Define Parameter Names
# We can change the marker style, define the parameter names to be displayed on the plots, and change the dimensions of the figure.
user_settings = dict(
plot=dict(
marker='s',
mfc='none',
linestyle='none'),
fig=dict(figsize=(6, 6)))
names = ['a', 'b', 'c']
f = mcp.plot_density_panel(
chains=chain,
names=names,
settings=user_settings)
# # Update Label Features
# You can also edit the settings for the axis labels.
user_settings = dict(
ylabel=dict(fontsize=22))
f = mcp.plot_density_panel(
chains=chain,
names=names,
settings=user_settings)
# # Manually Manipulate Plot Features
# With the figure handle, you can also individual edit different aspects of the plot. See [matplotlib's documentation](https://matplotlib.org/index.html) for more details on editing figures.
f = mcp.plot_density_panel(chains=chain)
ax = f.get_axes()
for ai in ax:
ai.set_ylabel(ylabel=ai.get_ylabel(),
fontsize=22)
ai.set_yticklabels(labels=ai.get_yticklabels(),
fontsize=22)
ax[-1].set_ylabel(ylabel = 'my label (:')
# reset positions to avoid overlap
f.tight_layout()
| density_panel.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Visualizing Restaurant Consumption Data
# This time we are going to pull data directly from the internet.
# Special thanks to: https://github.com/justmarkham for sharing the dataset and materials.
#
# ### Step 1. Import the necessary libraries
# +
import pandas as pd
import matplotlib.pyplot as plt
from collections import Counter
# set this so the plots are displayed inline in the notebook
# %matplotlib inline
# -
# ### Step 2. Import the dataset from this [address](https://raw.githubusercontent.com/justmarkham/DAT8/master/data/chipotle.tsv).
# ### Step 3. Assign it to a variable called chipo.
# +
url = 'https://raw.githubusercontent.com/justmarkham/DAT8/master/data/chipotle.tsv'
chipo = pd.read_csv(url, sep = '\t')
# -
# ### Step 4. See the first 10 entries
chipo.head(10)
# ### Step 5. Create a histogram of the top 5 items bought
# +
# get the Series of the names
x = chipo.item_name
# use the Counter class from collections to create a dictionary with keys(text) and frequency
letter_counts = Counter(x)
# convert the dictionary to a DataFrame
df = pd.DataFrame.from_dict(letter_counts, orient='index')
# sort the values in ascending order and keep the last 5 entries, i.e. the 5 most ordered items
df = df[0].sort_values(ascending = True)[45:50]
# create the plot
df.plot(kind='bar')
# Set the title and labels
plt.xlabel('Items')
plt.ylabel('Number of Times Ordered')
plt.title('Most ordered Chipotle\'s Items')
# show the plot
plt.show()
# -
# ### Step 6. Create a scatterplot with the number of items ordered per order price
# #### Hint: Price should be in the X-axis and Items ordered in the Y-axis
# +
# create a list of prices
chipo.item_price = [float(value[1:-1]) for value in chipo.item_price] # strip the dollar sign and trailing space
# then groupby the orders and sum
orders = chipo.groupby('order_id').sum()
# creates the scatterplot
# plt.scatter(orders.quantity, orders.item_price, s = 50, c = 'green')
plt.scatter(x = orders.item_price, y = orders.quantity, s = 50, c = 'green')
# Set the title and labels
plt.xlabel('Order Price')
plt.ylabel('Items ordered')
plt.title('Number of items ordered per order price')
plt.ylim(0)
# -
# ### BONUS: Create a question and a graph to answer your own question.
| 07_Visualization/Chipotle/Exercise_with_Solutions.ipynb |
# ---
# layout: post
# title: "[ISS Seminar] Planting Deep Learning Models in the Sejong National Research Complex"
# author: Taeyoung Kim
# date: 2018-05-16 14:00:00
# categories: seminar
# comments: true
# image: http://tykimos.github.io/warehouse/2018-5-16-ISS_Plant_DeepLearning_Model_in_SNRC_title1.png
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
# This ISS seminar, co-hosted by InSpace Co., Ltd. and the Geospatial Information Analysis Center of the KRIHS (Korea Research Institute for Human Settlements) National Territorial Information Research Division, is held under the theme "Planting Deep Learning Models in the Sejong National Research Complex". We cover the basic concepts of deep learning and introduce Keras, a deep learning library that makes it easy to build deep-learning-based models. After practicing an intuitive way of understanding models by matching Keras code to a 'block' analogy, we look at a variety of introductory problems. We then see how deep learning models are applied to autonomous driving, and take a look at Kaggle, which is a great help when solving problems in many different fields.
#
# ISS stands for Intelligence Space Seminar, InSpace's internal seminar series on artificial intelligence technology. To share the valuable material of our invited experts, it is run as an open seminar.
#
# 
# ---
# ### Speakers
#
# |Speaker|Introduction|
# |-|-|
# ||Taeyoung Kim, InSpace Co., Ltd. (introductory session)<br><br>[Python Deep Learning with Keras, Block by Block]<br><br>An introduction to the Keras library and a block analogy that lets non-specialists build deep learning models easily, with a look at a variety of models.|
# ||Yuhan Lee, Ph.D. student, KAIST<br><br>[Planting Deep Learning: An Introduction to Kaggle]<br><br>Kaggle provides problems and datasets from many fields and runs online competitions for the algorithms that solve them. We look at how to find similar problems and where to get help.|
# ||Juntae Kim, senior, Electronics, Information and Communication Engineering, Daejeon University<br><br>[Autonomous Driving with GTA5]<br><br>You have made it through MNIST!! So what should you try next? And how do you build your own dataset? Let's create our own data and try autonomous driving in GTA5!!|
# ||Byungkwon Kwak, Director, NGLE<br><br>[Keras Time-Series Model Tutorial]<br><br>NGLE, a QA company, is researching the application of AI to QA test automation. We cover implementing time-series prediction in Keras and forecasting temperature, wind, and pressure from regional weather data.|
# ---
# ### Program
#
# * Date and time: May 16, 2018, 2:00 PM – 6:00 PM
# * Venue: Auditorium, 2nd floor, Korea Research Institute for Human Settlements (5 Gukchaegyeonguwon-ro, Sejong / 771-125 Bangok-dong)
# * Capacity: up to 200 attendees
# * Part 1: Deep Learning Basics
#     * 14:00–14:10 [About Deep Learning] What the "deep" in deep learning means, and what is being "learned".
#     * 14:10–14:20 [About Keras] What Keras is, along with its characteristics, strengths, and weaknesses.
#     * 14:20–14:30 [Keras Concepts] A look at the most basic Keras sample code and how training works.
#     * 14:30–14:40 [Layer Concepts] From the single neuron up to the layers that make up multilayer perceptrons, convolutional networks, and recurrent networks.
#
# * Part 2: Applying Deep Learning
#     * 14:40–15:00 [A Look at Generative Models (GANs)] Basic concepts of a deep learning model's network, training objective, and optimizer.
#     * 15:00–15:20 [Extreme Jobs, from the Sun to Cells] Case studies of Keras-based deep learning models across many fields.
#
# * Part 3: Deep Learning in Practice
#     * 15:50–16:30 [Planting Deep Learning: An Introduction to Kaggle] Kaggle hosts problems and datasets from many fields and runs online algorithm competitions for solving them. We look at how to find similar problems and where to get help.
#     * 16:30–17:00 [Self-Driving with GTA5] You have done MNIST — now what? And how do you build your own dataset? Build your own data and try autonomous driving with GTA5!
#     * 17:00–17:30 [Keras Time-Series Model Tutorial] NGLE, a QA specialist company, is researching the application of AI to QA-test automation. We cover implementing time-series prediction with Keras and forecasting temperature, wind, and pressure from local weather data.
# ---
# ### Slides
#
# * 김태영, CTO, Inspace Inc., [Python Deep Learning with Keras, Block by Block], [slides link](https://docs.google.com/presentation/d/1dCyZmxGQgICmUp4t_ora4K3q2J52oxPw5CWFIrC0J-k/edit?usp=sharing)
# * 이유한, PhD candidate, KAIST, [Planting Deep Learning: An Introduction to Kaggle], [slides download](http://tykimos.github.io/warehouse/2018-5-16-ISS_Plant_DeepLearning_Model_in_SNRC_lyh_file.pdf)
# * 김준태, senior, Dept. of Electronics and Telecommunications Engineering, Daejeon University, [Self-Driving with GTA5], [slides download](http://tykimos.github.io/warehouse/2018-5-16-ISS_Plant_DeepLearning_Model_in_SNRC_kjt_file.pdf)
# * 곽병권, Director, NGLE, [Keras Time-Series Model Tutorial], [slides download](http://tykimos.github.io/warehouse/2018-5-16-ISS_Plant_DeepLearning_Model_in_SNRC_kbk_file.pdf), [original link](https://github.com/Steven-A3/TensorFlow-Tutorials)
# ---
# ### Preparation
#
# None required.
# ---
#
# ### Registration
#
# Since the venue holds at most 200 people, registration is first come, first served up to 200. Because registrations come in through several channels, I will let you know your number once you sign up. Please leave your registration as a single-line comment in the following format:
#
# * Name, organization, email, field, reason for attending
# * e.g. 김태영, Inspace, <EMAIL>, space, I would like to apply reinforcement learning to make satellite operations more efficient.
#
# Comments may be flagged as spam and not appear right away. I mark them all as not-spam, so there is no need to worry. Also, if you have registered but can no longer attend, please let me know in advance so that the spot can go to someone else.
# ---
#
# ### Feedback
#
# If you found this helpful, please leave a comment~
# ---
#
# ### See Also
#
# * [Other seminars](https://tykimos.github.io/seminar/)
# * [Keras basics course](https://tykimos.github.io/lecture/)
# * [Keras Korea](https://www.facebook.com/groups/KerasKorea/)
| _writing/2018-5-16-ISS_Plant_DeepLearning_Model_in_SNRC.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Binary Classification using Convolutional Neural Networks
# + [markdown] pycharm={"name": "#%% md\n"}
# An investigation into the effects that image augmentation has on the accuracy and loss of Convolutional Neural Networks. *This work was completed as part of dissertation project for Bachelor of Science (Honours) in Computer Science with specialism in Artificial Intelligence.*
#
# ---
# + [markdown] pycharm={"name": "#%% md\n"}
# ## Import Libraries
# Import all the necessary libraries for the project to run. Primarily using TensorFlow base and Keras to construct the architecture.
# Also import supporting libraries like numpy for manipulating the arrays.
#
# The `classifier_helpers` library is a collection of helper functions extracted to make the code easier to read.
# + pycharm={"name": "#%%\n"}
from keras.callbacks import Callback, LearningRateScheduler
from keras.preprocessing.image import ImageDataGenerator
from keras.optimizers import Adam
from sklearn.model_selection import train_test_split
from keras.utils import to_categorical
from keras.models import Sequential
from keras.layers.convolutional import Conv2D
from keras.layers.convolutional import MaxPooling2D
from keras.layers.core import Activation
from keras.layers.core import Flatten
from keras.layers.core import Dense
from keras import backend as K
import numpy as np
import random
import classifier_helpers as tools
# -
# ## Configuration Variables for Experiments
# Each of the variables controls a different area of the network structure. Collated here for easier control of changes between experiments.
#
# `results_file_name` defines the name of the output files for results etc.
#
# `dataset_path` points to the location of the dataset files (either images or arrays representing images).
#
# `rotation_range` the maximum rotational range of an image in either positive or negative direction (max 180°).
#
# `epochs` number of epochs to train the model for.
#
# `initial_learning_rate` the learning rate for the network to start off with, changing this can affect how quickly the model converges on the solution.
#
# `batch_size` number of samples to show the network before updating the network weights.
#
# `decay_rate` the rate at which the learning rate should decay over time
#
# `validation_dataset_size` percentage of the dataset to be used for testing, by default 75% goes to training and 25% goes to testing.
#
# `random_seed` random number to be used for seed - for repeatability of dataset shuffles.
#
# `image_depth` coloured images have three layers of depth.
#
# `results_path` path to the results directory
#
# `model_name` name of the model to be saved in the results file/model structure files
#
# `plot_name` titles/names for the result plots
# + pycharm={"name": "#%%\n"}
results_file_name = 'Batch-Size-2'
dataset_path = '../dataset/'
rotation_range = 0
epochs = 100
initial_learning_rate = 1e-5 # 1e-5
batch_size = 2
decay_rate = initial_learning_rate / epochs
validation_dataset_size = 0.25
random_seed = 42
image_depth = 3
results_path = 'results/'
model_name = results_file_name + "-" + str(rotation_range)
plot_name = model_name
# -
#
# ## Define Helper Functions
# Additional functions used by the network, mainly controlling learning-rate decay for experiments that vary the decay schedule within the model.
# + pycharm={"name": "#%%\n"}
def get_lr_metric(optimizer):
def lr(y_true, y_pred):
return optimizer.lr
return lr
def stepDecay(epoch):
dropEvery = 10
initAlpha = 0.01
factor = 0.25
# Compute learning rate for current epoch
exp = np.floor((1 + epoch) / dropEvery)
alpha = initAlpha * (factor ** exp)
return float(alpha)
# -
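# As a quick sanity check (my own addition, not part of the original notebook), the step schedule above can be traced by hand: with `initAlpha = 0.01`, a drop factor of 0.25, and a drop every 10 epochs, the rate stays constant within each 10-epoch window.

```python
def step_decay(epoch, init_alpha=0.01, factor=0.25, drop_every=10):
    """Standalone copy of stepDecay above, with its constants as parameters."""
    exp = (1 + epoch) // drop_every   # same as np.floor for non-negative ints
    return init_alpha * (factor ** exp)

# Epochs 0-8 keep the initial rate; the first drop happens at epoch 9
rates = [step_decay(e) for e in (0, 8, 9, 19)]
print(rates)  # → [0.01, 0.01, 0.0025, 0.000625]
```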
#
# ## Build the Network Architecture
# Compile the network architecture from individual layers. The function encapsulates the entire structure of the network which can be initialised as the model. It requires `width`, `height`, and `depth` values for the images it will be processing as well as the number of `classes` which it will be classifying. Binary classification requires that two classes are defined. (In this case benign and malignant samples).
# + pycharm={"name": "#%%\n"}
def buildNetworkModel(width, height, depth, classes):
model = Sequential()
input_shape = (height, width, depth)
    # If 'channels_first' is being used, update the input shape
    if K.image_data_format() == 'channels_first':
        input_shape = (depth, height, width)
# First layer
model.add(
Conv2D(20, (5, 5), padding = "same", input_shape = input_shape)) # Learning 20 (5 x 5) convolution filters
model.add(Activation("relu"))
model.add(MaxPooling2D(pool_size = (2, 2), strides = (2, 2)))
# Second layer
model.add(Conv2D(50, (5, 5), padding = "same"))
model.add(Activation("relu"))
model.add(MaxPooling2D(pool_size = (2, 2), strides = (2, 2)))
# Third layer - fully-connected layers
model.add(Flatten())
    model.add(Dense(50)) # 50 nodes
model.add(Activation("relu"))
# Softmax classifier
model.add(Dense(classes)) # number of nodes = number of classes
model.add(Activation("softmax")) # yields probability for each class
# Return the model
return model
# -
# ## Load and Initialise the Dataset
# Load the dataset. Normally the program would load the images into memory, but that is time-consuming, so the images have instead been pre-loaded and exported as arrays for quicker load times.
# By using arrays, the dataset can be exported and loaded more efficiently within the Jupyter notebook without the need to share the entire image library (~15 GB).
#
# The arrays containing the images and their respective labels are loaded. They are then combined (so that each label corresponds to its image) and shuffled while label and image remain paired.
#
# The randomised arrays are then split according to the training-testing dataset split. By default it is set to 75% training and 25% testing.
# + pycharm={"name": "#%%\n"}
sorted_data = np.load('sorted_data_array.npy')
sorted_labels = np.load('sorted_labels_array.npy')
data = []
labels = []
combined = list(zip(sorted_data, sorted_labels))
random.shuffle(combined)
data[:], labels[:] = zip(*combined)
# Scale the raw pixel intensities to the range [0, 1]
data = np.array(data, dtype = "float") / 255.0
labels = np.array(labels)
test_set = int(validation_dataset_size * len(labels))
validation_dataset_labels = labels[-test_set:]
# Partition the data into training and testing splits
(train_x, test_x, train_y, test_y) = train_test_split(data, labels, test_size = test_set, random_state = random_seed)
# Convert the labels from integers to vectors
train_y = to_categorical(train_y, num_classes = 2)
test_y = to_categorical(test_y, num_classes = 2)
# -
# ## Define Image Augmentation Generators
# Image augmentation generators are defined here, they take an input image and apply the predefined augmentation method, in this case `rotation_range` is applied to any image effectively rotating it to a random degree within that range.
# + pycharm={"name": "#%%\n"}
training_augmented_image_generator = ImageDataGenerator(rotation_range = rotation_range, fill_mode = "nearest")
testing_augmented_image_generator = ImageDataGenerator(rotation_range = rotation_range, fill_mode = "nearest")
# -
# ## Compile the Network Model
# Compile the network model using the predefined structure from the `buildNetworkModel` and apply the optimiser and learning rate metrics.
# This is where we define the loss and accuracy metrics which are saved in the history dictionary.
# + pycharm={"name": "#%%\n"}
print(tools.stamp() + "Compiling Network Model")
# Drop the learning rate by a factor of 4 every 10 epochs (see stepDecay above)
learning_rate_schedule = [LearningRateScheduler(stepDecay)]
# Build the model based on control variable parameters
model = buildNetworkModel(width = 64, height = 64, depth = image_depth, classes = 2)
# Set optimiser
optimiser = Adam(lr = initial_learning_rate)
lr_metric = get_lr_metric(optimiser)
# Compile the model using binary crossentropy, preset optimiser and selected metrics
model.compile(loss = "binary_crossentropy", optimizer = optimiser, metrics = ["accuracy", "mean_squared_error", lr_metric])
# Train the network
print(tools.stamp() + "Training Network Model")
# -
# ## Train and Save the Model
# The model is trained, then saved to disk along with all statistics and graphs.
# + pycharm={"name": "#%%\n"}
# Save results of training in history dictionary for statistical analysis
history = model.fit_generator(
training_augmented_image_generator.flow(train_x, train_y, batch_size = batch_size),
validation_data = (test_x, test_y),
steps_per_epoch = len(train_x) // batch_size,
epochs = epochs,
verbose = 1)
# Save all runtime statistics and plot graphs
tools.saveNetworkStats(history, epochs, initial_learning_rate, model_name, results_path)
tools.saveAccuracyGraph(history, plot_name, results_path)
tools.saveLossGraph(history, plot_name, results_path)
tools.saveLearningRateGraph(history, plot_name, results_path)
tools.saveModelToDisk(model, model_name, results_path)
tools.saveWeightsToDisk(model, model_name, results_path)
| jupyter-notebook/binary_classifier.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/seldoncode/Python_CoderDojo/blob/main/Python_CoderDojo09.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + [markdown] id="iyonWVXvnw_R"
# # Modules
# With ```import``` we can bring in libraries that contain a set of ready-made functions which can be useful to us.
#
# + [markdown] id="z0m9RdCEI6bP"
# ## The ```math``` module
# + colab={"base_uri": "https://localhost:8080/"} id="GTaIUczanrRr" outputId="5ed1addc-aca3-454d-e434-4430ccf459f5"
import math
dir(math) # lists all the functions in the library
# + colab={"base_uri": "https://localhost:8080/"} id="Ae18La-KqL2-" outputId="267b2c74-6d3f-4b94-af0c-48f1de982087"
math.sqrt(9) # square root
# + [markdown] id="dD5SZwpQqhsg"
# ## Computing the circumference of a circle
# + colab={"base_uri": "https://localhost:8080/"} id="9hQynDz0qrpL" outputId="70124e4d-c544-411c-ce7e-15aec7bdc84a"
import math
r = 10 # radius
perimetro = 2*math.pi*r # perimeter = 2*pi*r
perimetro
# + [markdown] id="RoYyU8O7sQKO"
# ### Compute the area of the circle
# + [markdown] id="ckfKqgWOtCL9"
# #### Solution 1
# + colab={"base_uri": "https://localhost:8080/"} id="WK8RpYZGsWYd" outputId="58f3977e-2a0f-419c-c84c-a597debf6a68"
import math
r = 10 # radius
area = math.pi*r**2 # area = pi*r^2
area
# + [markdown] id="llVJv5vltGjY"
# #### Solution 2
# + colab={"base_uri": "https://localhost:8080/"} id="eDhKlDXqtKRA" outputId="a3ab620c-aa9f-477d-a640-4b4f89071cdf"
from math import pi
r = 10 # radius
area = pi*r**2
area
# + [markdown] id="NIbVAshRtd8-"
# #### Solution 3
# + colab={"base_uri": "https://localhost:8080/"} id="-e_IjG9rtgja" outputId="9cc5caa9-8d1a-498a-9e77-ad366240f3dd"
from math import pi, pow
r = 10 # radius
area = pi*pow(r,2) # pow raises to a power
area
# + [markdown] id="1pcsbxqwtt9O"
# #### Solution 4
# + colab={"base_uri": "https://localhost:8080/"} id="jNhCzFFatxEI" outputId="33d49533-cade-496f-d3ed-04a8f7ba8ef5"
from math import * # imports all the functions in the library
r = 10 # radius
area = pi*pow(r,2) # pow raises to a power
area
# + [markdown] id="Kn5RzLe7u-be"
# ### Pythagorean theorem
# The hypotenuse equals the square root of the sum of the squares of the two legs.
#
# h = sqrt(cateto1**2 + cateto2**2)
# + colab={"base_uri": "https://localhost:8080/"} id="GnllrwBUvZaf" outputId="c0714ff1-7575-44be-9b15-8f6dc90675ed"
import math
a = 3 # leg 1
b = 4 # leg 2
c = math.sqrt(a**2+b**2) # hypotenuse
c
# + [markdown] id="QEBAOFwPy3hJ"
# # The ```random``` module
# + colab={"base_uri": "https://localhost:8080/"} id="bJ5ZUDZ7y95s" outputId="e21b8153-d7a4-4e5a-fe8a-1173333e99fb"
import random
dir(random)
# + [markdown] id="A0BQWiSFnLGW"
# ## A random integer between two endpoints
# + colab={"base_uri": "https://localhost:8080/"} id="7H-rU8130HVc" outputId="3b4fa4bb-a8b9-4d35-9fb6-fd53da6db390"
random.randint(10,20) # a random integer between 10 and 20, both included
# + [markdown] id="z8C6oCEanV3s"
# ### Create a list of random numbers
# * The list contains 19 random integers between 10 and 20, both included.
# * The list contains repeated values.
# * Show the list sorted.
# + colab={"base_uri": "https://localhost:8080/"} id="66c7ZbPWzHLR" outputId="b53b12c1-da60-4753-e9e7-1e1a0d2526a7"
import random
random.seed() # seed the generator (by default from the system time)
lista = []
for _ in range(19):
lista.append(random.randint(10,20))
lista.sort()
lista
# + [markdown] id="druzXhZO0F-A"
# ### Guess the number in seven tries
# * Generate a secret number between 1 and 100, both included.
# * You have 7 tries to guess the secret number.
# * After each guess, the program says whether the secret number is higher or lower.
# * If the number is guessed, say on which try it happened.
# + colab={"base_uri": "https://localhost:8080/"} id="qiPZ6o3-0zMW" outputId="6d4cc3a3-0064-4f99-f043-79eef664ba44"
import random
secreto = random.randint(1,100)
for tirada in range(7):
n = int(input(f"{tirada+1}. Introduzca un número: "))
if n==secreto:
print("Felicidades, lo adivinó.")
print(f"Lo ha conseguido en la tirada {tirada+1}.")
break
elif secreto>n:
print("El número secreto es mayor.")
else:
print("El número secreto es menor.")
print(f"El número secreto era {secreto}.")
# + [markdown] id="1cB7z6_p3Qcn"
# ### Choosing randomly from a sequence: ```choice```
# We can choose from a:
# * list
# * tuple
# * range
# + colab={"base_uri": "https://localhost:8080/"} id="INfbsgx53bhq" outputId="87797d68-e0fd-4c9a-d30b-9419be24fdca"
random.choice([1,2,3]) # choose from a list
# + colab={"base_uri": "https://localhost:8080/", "height": 35} id="Jk9mAE_T3kGb" outputId="502ba2cb-5b01-42e8-def1-25df103e19bb"
random.choice(("Madrid","París","Londres","Roma")) # choose from a tuple
# + colab={"base_uri": "https://localhost:8080/"} id="LpeV5i7b3yNH" outputId="aa3ea5bb-fbfe-4330-a61c-d3a5337a1e18"
random.choice(range(1,100)) # choose from a range
# + [markdown] id="Q7oZSfSu4_90"
# ### Toss a coin 100 times
# Toss a coin 100 times and report how many times it came up heads and how many times tails.
# + colab={"base_uri": "https://localhost:8080/"} id="Kp2s7wQr5INk" outputId="abdec03c-772b-45a1-b942-aa017131e791"
import random
caras = 0 # heads counter, initialised to zero
for _ in range(100):
    caras += random.randint(0,1) # assume 1 is heads and 0 is tails
print(f"Han salido {caras} caras y {100-caras} cruces.")
# + [markdown] id="kc1Nwiskn2zw"
# ## Game: rock, paper, scissors
# + [markdown] id="JSss9Pl3GZbL"
# ### Solution 1
# + colab={"base_uri": "https://localhost:8080/"} id="hVWXocVYoIrb" outputId="23021ed3-087a-4e3a-841e-c2acacf5a22c"
import random
opciones = ["Piedra","Papel","Tijera"]
jugando = True
while jugando:
jugadorA = random.choice(opciones)
jugadorB = random.choice(opciones)
print(f"El jugador A saca {jugadorA}")
print(f"El jugador B saca {jugadorB}")
jugando = False
if jugadorA==jugadorB:
print("Empate")
jugando = True
elif jugadorA == "Piedra" and jugadorB=="Papel":
print("Gana el jugador B")
elif jugadorA == "Piedra" and jugadorB=="Tijera":
print("Gana el jugador A")
elif jugadorA == "Papel" and jugadorB=="Piedra":
print("Gana el jugador A")
elif jugadorA == "Papel" and jugadorB=="Tijera":
print("Gana el jugador B")
elif jugadorA == "Tijera" and jugadorB=="Piedra":
print("Gana el jugador B")
elif jugadorA == "Tijera" and jugadorB=="Papel":
print("Gana el jugador A")
# + [markdown] id="ZQCRoG-tGe5t"
# ### Solution 2
# A simplified version.
# + colab={"base_uri": "https://localhost:8080/"} id="rk-E_ximGmKO" outputId="8f2e4cf0-7c31-446c-a8c3-d8b7a6ecde9e"
import random
opciones = ["Piedra","Papel","Tijera"]
jugando = True
while jugando:
jugadorA = random.choice(opciones)
jugadorB = random.choice(opciones)
print(f"El jugador A saca {jugadorA}")
print(f"El jugador B saca {jugadorB}")
jugando = False
if jugadorA==jugadorB:
print("Empate")
jugando = True
elif jugadorA == "Piedra" and jugadorB=="Tijera":
print("Gana el jugador A")
elif jugadorA == "Papel" and jugadorB=="Piedra":
print("Gana el jugador A")
elif jugadorA == "Tijera" and jugadorB=="Papel":
print("Gana el jugador A")
else:
print("Gana el jugador B")
# + [markdown] id="RuUD15BMHwij"
# ### Challenge 9.1. Rock, paper, scissors to ten points
# * Program a game of rock, paper, scissors played to 10 points.
# * Players A and B choose, and each time one of them wins they score a point.
# * If there is a tie, nobody scores.
# * The first to reach 10 points wins.
# * Print every round and the running scores.
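# One possible sketch of Challenge 9.1 (my own suggestion, not the notebook's official solution): wrap the winner logic from Solution 2 in a function and loop until one score reaches 10.

```python
import random

def winner(a, b):
    """Return 'A', 'B', or None (tie) for one rock-paper-scissors round."""
    if a == b:
        return None
    wins = {("Piedra", "Tijera"), ("Papel", "Piedra"), ("Tijera", "Papel")}
    return "A" if (a, b) in wins else "B"

def play_to(target=10):
    opciones = ["Piedra", "Papel", "Tijera"]
    scores = {"A": 0, "B": 0}
    while max(scores.values()) < target:
        a, b = random.choice(opciones), random.choice(opciones)
        w = winner(a, b)
        if w:
            scores[w] += 1
        print(f"A: {a:7} B: {b:7} -> {scores}")
    return scores

final = play_to(10)
```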
# + [markdown] id="aq02HxXaIyIp"
# # The ```time``` module
# + colab={"base_uri": "https://localhost:8080/"} id="fyvJXs-MI50F" outputId="271624a1-b494-4956-d7f6-7109daa2a405"
import time
dir(time)
# + colab={"base_uri": "https://localhost:8080/"} id="XQ-Ygx2Io79E" outputId="19f3333a-8de6-4d7f-cbb0-f3c414468a15"
import time
help(time.time)
# + colab={"base_uri": "https://localhost:8080/"} id="-si4MhbGsIP_" outputId="e590c9bd-e0f6-4d3d-973e-16d16a971910"
time.time() # returns the seconds elapsed since 1 January 1970
# + colab={"base_uri": "https://localhost:8080/"} id="OcDtooD-slYX" outputId="80b88516-5889-4a05-fa6e-d746d232f37e"
print(f"Han transcurrido {time.time()/60/60/24/365.25} años.")
# + [markdown] id="VJGMJNqKuthx"
# ### Run a loop for 5 seconds
# We create a loop that lasts 5 seconds.
# + id="cdkC4rvItulZ"
import time
inicio = time.time()
contador = 1
while True:
print(f"{contador}. Estamos jugando.....")
contador += 1
final = time.time()
if final-inicio >= 5:
break
print("Fin del juego.")
print(f"Han transcurrido {final-inicio} segundos.")
# + [markdown] id="b1eO9AtSv0Yw"
# ### Do as many arithmetic operations as you can in 10 seconds
# * For 10 seconds the computer poses simple addition or multiplication problems.
# * Each correct answer earns one point.
# * A wrong answer earns nothing.
# * At the end it reports the points obtained.
# + [markdown] id="CO_n-sWUEFRs"
# #### Solution 1
# + colab={"base_uri": "https://localhost:8080/"} id="3CD_JBP8wgPJ" outputId="7dfafa26-f89b-488f-b1b8-b58380f3e9e6"
import time
import random
print("*************** CONCURSO DE CÁLCULO ***************")
print("Realiza todas las operciones matemáticas que puedas")
print(" durante diez segundos ")
print("***************************************************")
print()
input("Para empezar pulsa ENTER.")
puntos = 0
inicio = time.time()
while time.time()-inicio<=10:
a = random.randint(1,10)
b = random.randint(1,10)
    if random.randint(0,1): # 0 or 1, each with 50% probability
        n = int(input(f"{a}+{b}=")) # if 1, we do the sum
if n==a+b:
puntos += 1
print(f"Correcto. Tiene {puntos} puntos.")
else:
print(f"Incorrecto. Tiene {puntos} puntos.")
else:
        n = int(input(f"{a}*{b}=")) # if 0, we do the product
if n==a*b:
puntos += 1
print(f"Correcto. Tiene {puntos} puntos.")
else:
print(f"Incorrecto. Tiene {puntos} puntos.")
print("Tiempo terminado.")
print(f"Has obtenido {puntos} puntos. ")
# + [markdown] id="-eA7Jjw7EJAa"
# #### Solution 2
# + id="L3PxaDy5v9EK" colab={"base_uri": "https://localhost:8080/"} outputId="2aad0741-7922-4fb4-f4fe-e65c13aba0fa"
import time
import random
print("*************** CONCURSO DE CÁLCULO ***************")
print("Realiza todas las operciones matemáticas que puedas")
print(" durante diez segundos ")
print("***************************************************")
print()
input("Para empezar pulsa ENTER.")
puntos = 0
inicio = time.time()
while time.time()-inicio<=10:
a = random.randint(1,10)
b = random.randint(1,10)
    op = random.choice(["+","*"]) # operation + or *, each with 50% probability
if op=="+":
resultado = a+b
elif op=="*":
resultado = a*b
print(a,op,b,"=",end=" ")
respuesta = int(input())
if respuesta == resultado:
puntos += 1
print(f"Correcto. Tienes {puntos} puntos.")
else:
print(f"Incorrecto. Tienes {puntos} puntos.")
print("Tiempo terminado.")
print(f"Has obtenido {puntos} puntos. ")
# + [markdown] id="v_AlYzMUFfQe"
# ## The ```time.sleep()``` function
# Pauses program execution for a number of seconds.
# + colab={"base_uri": "https://localhost:8080/"} id="Fh3WFUiOFxq7" outputId="08b68abc-6939-4e1a-b419-37b73378ad01"
import time
print("Comienzo del programa.")
time.sleep(5) # the program pauses for 5 seconds
print("Final del programa.")
# + colab={"base_uri": "https://localhost:8080/"} id="Ube2kuesGNLU" outputId="e4c9668b-9fc1-4506-92fb-ad927e62cc24"
import time
print("Comienzo del programa.")
inicio = time.time()
time.sleep(.5) # the program pauses for half a second
fin = time.time()
print("Final del programa.")
print(f"Tiempo detenido {fin-inicio}.")
# + [markdown] id="_nGpidYjG361"
# ### 10-second countdown
# * Create a countdown from ten to zero: 10,9,8,7,6,5,4,3,2,1, ¡¡¡Lanzamiento!!! (Liftoff!)
# * A number should appear every second.
# * At the end, measure how long it took.
# * It should take 10 seconds plus a few tenths or hundredths more.
# + colab={"base_uri": "https://localhost:8080/"} id="A6dGxNNyHVAN" outputId="2469c2d5-b25f-4344-d38c-d5180e5ba381"
import time
inicio = time.time()
for i in range(10,0,-1):
print(i)
time.sleep(1)
fin = time.time()
print("¡¡¡Lanzamiento!!!")
print(f"El proceso ha tardado {fin-inicio} segundos.")
# + [markdown] id="e2RdpckPJtkK"
# ## Measuring a program's execution time
# With the ```time.perf_counter()``` function we can measure how long a procedure takes to run.
# We can program the same task in two different ways and see which one is faster.
# + [markdown] id="PuR_AScIKi5o"
# ### Summing a series of numbers with ```while``` and with ```for```
# Sum the numbers from one to a million using both loops and see which takes less time.
# + [markdown] id="j05rmF_SKkcG"
# #### Version 1. With ```while```
# + colab={"base_uri": "https://localhost:8080/"} id="_eXO9uIpKFTn" outputId="4265a2c6-86c5-4cbf-b35d-f010a0db500d"
import time
inicio = time.perf_counter()
suma = 0
i = 1
while i <= 1_000_000:
suma += i
i += 1
fin = time.perf_counter()
print(f"La suma es {suma}.")
print(f"El tiempo empleado es de {fin-inicio} segundos.")
# + [markdown] id="cvGJ5JYqKtpY"
# #### Version 2. With ```for```
# + colab={"base_uri": "https://localhost:8080/"} id="3lVgTkF3KfeB" outputId="f700bff8-cabe-4a17-bbff-152d6d03fb91"
import time
inicio = time.perf_counter()
suma = 0
for i in range(1,1_000_001):
suma += i
fin = time.perf_counter()
print(f"La suma es {suma}.")
print(f"El tiempo empleado es de {fin-inicio} segundos.")
# + [markdown] id="smaEHlpXMr5R"
# ## The ```random.shuffle()``` function
# Shuffles the elements of a list in place.
# + colab={"base_uri": "https://localhost:8080/"} id="iwwcoSFfM4r7" outputId="1b79d022-2576-49c5-8aea-e2ebfcc5f18c"
import random
lista = [1,2,3,4,5,6]
random.shuffle(lista) # shuffles the list in place; it does not return a new list
print(lista)
random.shuffle(lista) # each time we call the function it shuffles the list again
lista
# + [markdown] id="rzsxskj-OGiB"
# ### The ```random.sample()``` function
# * Given a list, it returns a certain number of values from the list chosen at random.
# * It takes two arguments: the list and the number of elements we want.
# * In this case it does return a new list, distinct from the original.
# + colab={"base_uri": "https://localhost:8080/"} id="Ksi1WKm3Orto" outputId="10ce53b3-4532-4797-b768-2f00521e4951"
import random
lista = [10,20,30,40,50,60,70,80,90]
random.sample(lista,3)
# + colab={"base_uri": "https://localhost:8080/"} id="fTSq5-3uPGyt" outputId="7d5df958-6c8d-42ff-abb3-6c4c34e974db"
muestra = random.sample(lista,3) # does not modify the original list
muestra # the returned list can be assigned to a variable
# + [markdown] id="d7V7AvmlP3AZ"
# ### Game: memorise the colours
# * Shows four colours for 3 seconds
# * You must answer by recalling the four colours in order
# * If you get it right, you score a point and they are shown again in a different order
# * To end the game, press ENTER when answering for all four colours.
# + colab={"base_uri": "https://localhost:8080/"} id="b17DYh1epCzy" outputId="ba287345-43b4-4d8d-dc08-a31e53806cbb"
from IPython.display import clear_output
from random import shuffle
from time import sleep
print("********* Memoriza los colores *********")
colores = ['rojo','verde','azul','amarillo']
puntos = 0
while True:
shuffle(colores)
print(colores)
    sleep(3) # wait 3 seconds
    clear_output() # clears the screen output
c1=input("Introduzca el color 1: ")
c2=input("Introduzca el color 2: ")
c3=input("Introduzca el color 3: ")
c4=input("Introduzca el color 4: ")
if colores==[c1,c2,c3,c4]:
puntos += 1
print(f"Lleva {puntos} puntos.")
    elif [c1,c2,c3,c4]==['','','','']: # pressing ENTER for every colour ends the loop
break
print(f"Los puntos totales conseguidos son: {puntos}.")
# + [markdown] id="vTkDbYCAoAMY"
# ### Challenge 9.2. Improve the colours game
# * There will be 10 available colours, of which 4 are shown at random
# * For each colour entered, if it is not correct, say so and stop asking for the remaining colours.
#
# colores = ['rojo','verde','azul','amarillo','naranja','rosa','marrón','violeta','negro','blanco']
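# A possible sketch for Challenge 9.2 (my own, with hypothetical helper names): `random.sample` picks 4 of the 10 colours, and a checking function stops at the first wrong answer. The interactive `input()` calls are kept out of the helper so the scoring logic can be exercised on its own.

```python
import random

colores = ['rojo','verde','azul','amarillo','naranja','rosa','marrón','violeta','negro','blanco']

def check_answers(shown, answers):
    """Compare answers against the shown colours, stopping at the first mistake.

    Returns (point_scored, index_of_first_error_or_None)."""
    for i, (expected, given) in enumerate(zip(shown, answers)):
        if given != expected:
            return False, i
    return len(answers) == len(shown), None

# One round, without the interactive parts:
shown = random.sample(colores, 4)   # 4 distinct colours out of 10
print(shown)
print(check_answers(shown, shown))  # a perfect answer scores a point
```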
| Python_CoderDojo09.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## VGGNet
# VGGNet was the first truly deep network architecture; it won ImageNet 2014. Thanks to Python's functions and loops, we can build deep networks with repeated structure very conveniently.
#
# The VGG architecture is very simple: it just keeps stacking convolutional layers and pooling layers. Below is a simple illustration.
#
# 
#
# VGG uses almost exclusively 3 x 3 convolution kernels and 2 x 2 pooling layers. Stacking several layers of small kernels gives the same receptive field as one large kernel while using fewer parameters, and it also allows a deeper structure.
#
# A key pattern in VGG is several 3 x 3 convolution layers followed by one max-pooling layer; this module is used many times. Let's write it following this structure.
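# To make the parameter-saving claim above concrete (my own illustration, not from the original text): with C input and C output channels, two stacked 3 x 3 convolutions cover the same 5 x 5 receptive field as a single 5 x 5 convolution, but with fewer weights.

```python
def conv_params(k, c_in, c_out, bias=True):
    """Number of learnable parameters in one k x k convolution layer."""
    return c_out * (c_in * k * k + (1 if bias else 0))

c = 64
two_3x3 = 2 * conv_params(3, c, c)  # two stacked 3x3 convs, 5x5 receptive field
one_5x5 = conv_params(5, c, c)      # one 5x5 conv, same receptive field
print(two_3x3, one_5x5)             # the stacked version uses fewer parameters
```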
import numpy as np
import torch
from torch import nn
from torch.autograd import Variable
from torchvision.datasets import CIFAR10
# We can define a vgg block that takes three arguments: the number of convolutional layers, the number of input channels, and the number of output channels. The first convolution receives the image's input channel count and produces the final output channel count; the following convolutions all take that output channel count as input.
def vgg_block(num_convs, in_channels, out_channels):
    net = [nn.Conv2d(in_channels, out_channels, kernel_size=3, padding=1), nn.ReLU(True)] # define the first layer
    for i in range(num_convs-1): # define the remaining layers
        net.append(nn.Conv2d(out_channels, out_channels, kernel_size=3, padding=1))
        net.append(nn.ReLU(True))
    net.append(nn.MaxPool2d(2, 2)) # define the pooling layer
    return nn.Sequential(*net)
# We can print the model to inspect its structure
block_demo = vgg_block(3, 64, 128)
print(block_demo)
# First define an input of shape (1, 64, 300, 300)
input_demo = Variable(torch.zeros(1, 64, 300, 300))
output_demo = block_demo(input_demo)
print(output_demo.shape)
# The output becomes (1, 128, 150, 150): after this one vgg block the input size is halved and the number of channels becomes 128.
#
# Next we define a function that stacks this vgg block
def vgg_stack(num_convs, channels):
net = []
for n, c in zip(num_convs, channels):
in_c = c[0]
out_c = c[1]
net.append(vgg_block(n, in_c, out_c))
return nn.Sequential(*net)
# As an example, we define a slightly simpler vgg structure with 8 convolutional layers
vgg_net = vgg_stack((1, 1, 2, 2, 2), ((3, 64), (64, 128), (128, 256), (256, 512), (512, 512)))
print(vgg_net)
# The network structure contains 5 max-pooling layers, so the image size is halved 5 times. We can verify this by feeding in a 256 x 256 image and checking the result
test_x = Variable(torch.zeros(1, 3, 256, 256))
test_y = vgg_net(test_x)
print(test_y.shape)
# We can see the image has shrunk by a factor of $2^5$. Finally, adding a few fully connected layers gives us the classification output we want
class vgg(nn.Module):
def __init__(self):
super(vgg, self).__init__()
self.feature = vgg_net
self.fc = nn.Sequential(
nn.Linear(512, 100),
nn.ReLU(True),
nn.Linear(100, 10)
)
def forward(self, x):
x = self.feature(x)
x = x.view(x.shape[0], -1)
x = self.fc(x)
return x
# Now we can train our model and see how it performs on cifar10
# +
from utils import train
def data_tf(x):
x = np.array(x, dtype='float32') / 255
    x = (x - 0.5) / 0.5 # normalisation; this trick will be covered later
    x = x.transpose((2, 0, 1)) # move the channel to the first dimension; this is simply the input format pytorch requires
x = torch.from_numpy(x)
return x
train_set = CIFAR10('./data', train=True, transform=data_tf)
train_data = torch.utils.data.DataLoader(train_set, batch_size=64, shuffle=True)
test_set = CIFAR10('./data', train=False, transform=data_tf)
test_data = torch.utils.data.DataLoader(test_set, batch_size=128, shuffle=False)
net = vgg()
optimizer = torch.optim.SGD(net.parameters(), lr=1e-1)
criterion = nn.CrossEntropyLoss()
# -
train(net, train_data, test_data, 20, optimizer, criterion)
# After running 20 epochs, vgg reaches roughly 76% test accuracy on cifar 10
| 05.卷积神经网络(进阶)/11.vgg download/vgg.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # `pandas` Part 3: Descriptive Analytics with `pandas`
#
# # Learning Objectives
# ## By the end of this tutorial you will be able to:
# 1. Understand the three fundamental areas of an analytics project: Descriptive, Predictive, Prescriptive
# 2. Summarize data using `describe()`
# >- Descriptive Analytics is the first layer of a full analytical report and `describe()` gets us started
# 3. Transform data with simple calculations
#
# ## Files Needed for this lesson: `winemag-data-130k-v2.csv`
# >- Download this csv from Canvas prior to the lesson
#
# ## The general steps to working with pandas:
# 1. import pandas as pd
# 2. Create or load data into a pandas DataFrame or Series
# 3. Reading data with `pd.read_`
# >- Excel files: `pd.read_excel('fileName.xlsx')`
# >- Csv files: `pd.read_csv('fileName.csv')`
# >- Note: if the file you want to read into your notebook is not in the same folder you can do one of two things:
# >>- Move the file you want to read into the same folder/directory as the notebook
# >>- Type out the full path into the read function
# 4. After steps 1-3 you will want to check out your DataFrame
# >- Use `shape` to see how many records and columns are in your DataFrame
# >- Use `head()` to show the first 5-10 records in your DataFrame
# # Analytics Project Framework Notes
# ## A complete and thorough analytics project will have 3 main areas
# 1. Descriptive Analytics: tells us what has happened or what is happening.
# >- The focus of this lesson is how to do this in python.
# >- Many companies are at this level but not much more than this
# >- Descriptive statistics (mean, median, mode, frequencies)
# >- Graphical analysis (bar charts, pie charts, histograms, box-plots, etc)
# 2. Predictive Analytics: tells us what is likely to happen next
# >- Fewer companies are at this level but are slowly getting there
# >- Predictive statistics ("machine learning (ML)" using regression, multi-way frequency analysis, etc)
# >- Graphical analysis (scatter plots with regression lines, decision trees, etc)
# 3. Prescriptive Analytics: tells us what to do based on the analysis
# >- Synthesis and Report writing: executive summaries, data-based decision making
# >- No analysis is complete without a written report with at least an executive summary
# >- Communicate results of analysis to both non-technical and technical audiences
# # Descriptive Analytics Using `pandas`
# # Initial set-up steps
# 1. import modules and check working directory
# 2. Read data in
# 3. Check the data
# #### Note: setting our working directory to a variable named `path` will make accessing files in the directory easier
# # Step 2 Read Data Into a DataFrame with `read_csv()`
# >- file name: `winemag-data-130k-v2.csv`
# >- Set the index to column 0
# >- Note: by defining `path` above for our working directory we can then just concatenate our working directory with the file we wish to read in
# ### Check how many rows, columns, and data points are in the `wine_reviews` DataFrame
# >- Use `shape` and indices to define variables
# >- We can store the values for rows and columns in variables if we want to access them later
# ### Check a couple of rows of data
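# #### A minimal sketch of steps 2-4 (a tiny inline stand-in table is used here via `io.StringIO`, since the winemag CSV may not be at hand; the real call would read `path + 'winemag-data-130k-v2.csv'`):

```python
import io
import pandas as pd

# Stand-in for: wine_reviews = pd.read_csv(path + 'winemag-data-130k-v2.csv', index_col=0)
csv_text = io.StringIO(
    ",country,points,taster_name\n"
    "0,Italy,87,Kerin O'Keefe\n"
    "1,Portugal,87,Roger Voss\n"
    "2,US,88,Paul Gregutt\n"
)
wine_reviews = pd.read_csv(csv_text, index_col=0)  # column 0 becomes the index

rows, cols = wine_reviews.shape  # store record/column counts for later use
print(rows, cols)                # 3 3
print(wine_reviews.head())
```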
# # Descriptive Analytics with `describe()`
# >- General syntax: `dataFrame.columnName.describe()`
# ### Now, what is/are the question(s) being asked of the data?
# >- All analytics projects start with questions (from you, your boss, some decision maker, etc)
#
# #### For this example...
# ##### Question: What is the summary information about wine point ratings?
# >- subQ1: What is a baseline/average wine?
# >>- What is the average rating?
# >>- What is the median rating?
# >- subQ2: What is the range of wine ratings?
# >>- What is the lowest rating? The highest rating?
# >- subQ3: What rating is the lowest for the top 25% of wines?
#
# ### The cool thing about learning `python` and in particular `pandas` is you can answer all these with a few lines of code
# ### Notes on `describe()`
# >- `describe()` is "type-aware" which means it will automatically give summary statistics based on the data type of the column
# >- In the previous example, `describe()` gave us summary stats based on a numerical column
# >- For a string column, we can't calculate a mean, median or standard deviation so we get different output from `describe()`
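# A small illustration of the type-aware behavior (made-up data):

```python
import pandas as pd

df = pd.DataFrame({
    "points": [87, 87, 88, 90],
    "taster_name": ["Roger Voss", "Roger Voss", "Paul Gregutt", "Roger Voss"],
})

print(df.points.describe())       # numeric column: count, mean, std, min, quartiles, max
print(df.taster_name.describe())  # string column: count, unique, top, freq
```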
# ### Another question to be answered with analytics:
# ##### What information do we have regarding wine tasters?
# >- subQ1: How many total wine tasters are there?
# >- subQ2: How many total records have a wine taster mapped to them?
# >- subQ3: Who has the most wine tastings?
# >>- How many wine tastings does he or she have?
# ### Notes on the previous output:
# >- count gives us the total number of records with non-null taster_name
# >- unique gives us the total number of taster_name names
# >- top gives us the taster_name with the most records
# >- freq gives us the number of records for the top taster_name
# # Getting specific summary stats and assigning a variable to them
# >- To be able to write our results in a nice executive summary format, assign variables to specific summary stat values
# ### Assign variables for the mean points
# ### Create a list of the wine tasters with `unique()`
# ### To see a list of wine tasters and how often they occur use `value_counts()`
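# For example, on a small made-up Series of taster names:

```python
import pandas as pd

tasters = pd.Series(["Roger Voss", "Paul Gregutt", "Roger Voss", None, "Roger Voss"])

print(tasters.unique())        # distinct values, in order of appearance (None included)
print(tasters.value_counts())  # frequency per name, most frequent first (NaN excluded)
```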
# ### Q: Which tasters have 10,000 or more reviews?
# >- Filtering results using `where()`
# >- Remove results not meeting criteria with `dropna()`
# #### How many reviews did <NAME> have?
# >- Find the count for one particular reviewer using `loc`
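# A sketch of the filtering pattern with illustrative (made-up) review counts:

```python
import pandas as pd

counts = pd.Series({
    "Roger Voss": 25000,
    "Michael Schachner": 15000,
    "Kerin O'Keefe": 10500,
    "Jim Gordon": 4177,
})

# where() keeps values meeting the condition and leaves NaN elsewhere;
# dropna() then removes the NaN rows
big = counts.where(counts >= 10000).dropna()
print(big)

# loc looks up the count for one particular reviewer by label
print(counts.loc["Jim Gordon"])  # 4177
```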
# ### Who are the top five wine tasters by number of occurrences?
# # Transforming data
# >- Sometimes it is useful to standardize/normalize data
# >- Standardizing data allows you to make comparisons regardless of the scale of the original data
# >- We can transform data using some simple operations
# >>- For more advanced transformations we can use `map()` and `apply()`
# ### Transforming the `points` column
# >- In this example we will "remean" our points column to a mean of zero
# ##### Our new `points0` variable should have a mean of 0
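# The remeaning itself is one subtraction; on a toy points column:

```python
import pandas as pd

points = pd.Series([87, 88, 90, 91])
points0 = points - points.mean()  # "remean" so the new mean is zero
print(points0.tolist())           # [-2.0, -1.0, 1.0, 2.0]
print(points0.mean())             # 0.0
```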
# ### Now assign a new column name, `points0`, to the data frame and insert the `points0` values
# >- We will do this two different ways
# #### Method1: This way creates and inserts the new column at the end of the DataFrame
# #### Method2: Using `insert()` allows us to specify the position of our new column
# >- Insert general syntax and parameters: insert(insertion index, column name, values, allow duplicates)
# >>- Insertion Index: where do you want your column in your DataFrame
# >>- Column Name: the name of your new column
# >>- Values: the values you want stored in your new column
# >>- Allow Duplicates: Set to `True` if duplicate values are ok
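# Both methods side by side on a toy DataFrame:

```python
import pandas as pd

df = pd.DataFrame({"country": ["Italy", "US"], "points": [87, 88]})
points0 = df["points"] - df["points"].mean()

# Method 1: assigning to a new column name appends it at the end
df["points0_end"] = points0

# Method 2: insert() lets us choose the position (here, column index 1)
df.insert(1, "points0", points0, allow_duplicates=False)
print(df.columns.tolist())  # ['country', 'points0', 'points', 'points0_end']
```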
# ### We can also concatenate fields and store that in a new column of our DataFrame
# #### Task: Combine the country and province fields into one field separated with a ' - '
# >- Insert the concatenated field into the wineReviews dataframe as 'countryProv'
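# String columns concatenate element-wise with `+`; a toy version of the task:

```python
import pandas as pd

wine_reviews = pd.DataFrame({
    "country": ["Italy", "US"],
    "province": ["Sicily & Sardinia", "Oregon"],
})

# combine country and province, separated with ' - '
wine_reviews["countryProv"] = wine_reviews["country"] + " - " + wine_reviews["province"]
print(wine_reviews["countryProv"].tolist())
# ['Italy - Sicily & Sardinia', 'US - Oregon']
```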
| Week 10/Pandas_Part3_SummaryFunctions_Student.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Parse .txt files and insert in databases
# The lyrics files are cleaned and then the songs/lyrics are added to the database
import pandas as pd
import os
import pymysql
import re
from os import walk
import unidecode
from pymysql import DataError
FOLDER_IMG = "/home/tanguy/data/lyrizz/images"
FOLDER_CSV = "/home/tanguy/data/lyrizz/csv"
FOLDER_TXT = '/home/tanguy/data/lyrizz/txt'
# +
df_tracks = pd.read_csv(os.path.join(FOLDER_CSV, 'df_tracks.csv'), sep=';')
filenames = next(walk(FOLDER_TXT), (None, None, []))[2]
list_track = [track_id[:-4] for track_id in filenames]
tracks = df_tracks[df_tracks['track_id'].isin(list_track)]
# -
# ### Functions definition
# Sometimes the section is indicated in the lyrics
def get_section_from_line(line):
line = line.lower()
if 'parole' in line or 'lyric' in line:
return ''
elif 'intro' in line:
return 'intro'
elif 'refrain' in line or 'chorus' in line:
return 'chorus'
elif 'couplet' in line or 'verse' in line:
return 'verse'
elif 'outro' in line:
return 'outro'
elif 'bridge' in line or 'pont' in line:
return 'bridge'
elif 'break' in line or 'pause' in line:
return 'break'
elif 'hook' in line or 'crochet' in line:
return 'hook'
else:
return ''
# +
def filter_title(name):
    # Try to remove "- Remastered ..."
name = name.split(' - ')[0]
    # Try to remove " (Remastered ...)"
name = name.split('(')[0]
# Remove space at begin/end
name = name.strip()
return name
def filter_artist(name):
    # Try to remove other artists
name = name.split(',')[0]
    # Try to remove " (Feat ...)"
name = name.split('(')[0]
# Remove space at begin/end
name = name.strip()
return name
# -
# ### Insert song and lyrics into the database
### Insert in database
LIST_IMG = next(walk(FOLDER_IMG), (None, None, []))[2]
connection = pymysql.connect(host='localhost',user='django2',password='password',db='quizz_db',
charset='utf8mb4',cursorclass=pymysql.cursors.DictCursor)
for i in range(len(tracks)):
raw = tracks.iloc[i]
track_id = raw['track_id']
filename = os.path.join(FOLDER_TXT, f'{track_id}.txt')
with open(filename) as f:
lines = f.readlines()
if len(lines)<10:
print(track_id, 'No lyrics')
continue
##########################################
# Add song in database
##########################################
track_id = raw['track_id']
artists = raw['artists']
name = raw['name']
artists = filter_artist(artists)
name = filter_title(name)
popularity = raw['popularity']
year = int(raw['release_date'].split('-')[0])
image = [x for x in LIST_IMG if x.startswith(track_id)][0]
print(track_id, end=' ')
with connection.cursor() as cursor:
cursor.execute(f"SELECT id from lyrizz_song WHERE spotify_id = '{track_id}'")
res = cursor.fetchall()
connection.commit()
if len(res) == 0:
with connection.cursor() as cursor:
# Create a new record
sql = "INSERT INTO `lyrizz_song` (`spotify_id`, `name`, `artists`, `popularity`, `year`, `image`, `has_quote`, `has_image`) VALUES (%s, %s, %s, %s, %s, %s, %s, %s)"
try:
cursor.execute(sql, (track_id, name[:200], artists[:200], int(popularity), year, f'covers_lyrizz/{track_id}.jpg', 1, 1))
except DataError:
cursor.execute(sql, (track_id, unidecode.unidecode(name[:200]), unidecode.unidecode(artists[:200]), int(popularity), year, f'covers_lyrizz/{track_id}.jpg', 1, 1))
song_id = cursor.lastrowid
connection.commit()
else:
song_id = res[0]['id']
print('ALREADY')
continue
section = ''
count=0
for l in lines:
if l[0] == '[':
section = get_section_from_line(l)
continue
if l=='\n':
continue
l = l.replace('\n', '')
l = unidecode.unidecode(l)
##########################################
# Add lyrics in database
##########################################
with connection.cursor() as cursor:
# Create a new record
sql = "INSERT INTO `lyrizz_lyrics` (`lyrics_text`, `section`, `song_id`) VALUES (%s, %s, %s)"
try:
cursor.execute(sql, (l, section, song_id))
count+=1
            except DataError:
cursor.execute(sql, (unidecode.unidecode(l), section, song_id))
print('U',end='')
connection.commit()
print(' ', count)
connection.close()
# ### Insert image name in database
# +
### update image name in db
LIST_IMG = next(walk(FOLDER_IMG), (None, None, []))[2]
connection = pymysql.connect(host='localhost',user='django2',password='password',db='quizz_db',
charset='utf8mb4',cursorclass=pymysql.cursors.DictCursor)
with connection.cursor() as cursor:
cursor.execute("SELECT spotify_id from lyrizz_song")
res = cursor.fetchall()
connection.commit()
for track in res:
track_id = track['spotify_id']
image = [x for x in LIST_IMG if x.startswith(track_id)][0]
with connection.cursor() as cursor:
cursor.execute(f"UPDATE lyrizz_song SET image='covers_lyrizz/{image}' WHERE spotify_id='{track_id}'")
connection.commit()
print(image)
connection.close()
# -
| notebooks/lyrizz/insert_database_lyrics.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # [Image Classification](https://www.tensorflow.org/tutorials/images/classification?hl=ja)
# + _uuid="8f2839f25d086af736a60e9eeb907d3b93b6e0e5" _cell_guid="b1076dfc-b9ad-4769-8c92-a6c4dae69d19"
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras.preprocessing.image import ImageDataGenerator
import os
import numpy as np
import matplotlib.pyplot as plt
# %matplotlib inline
# +
_URL = 'https://storage.googleapis.com/mledu-datasets/cats_and_dogs_filtered.zip'
path_to_zip = tf.keras.utils.get_file('cats_and_dogs.zip', origin=_URL, extract=True)
PATH = os.path.join(os.path.dirname(path_to_zip), 'cats_and_dogs_filtered')
train_dir = os.path.join(PATH, 'train')
validation_dir = os.path.join(PATH, 'validation')
train_cats_dir = os.path.join(train_dir, 'cats') # 学習用の猫画像のディレクトリ
train_dogs_dir = os.path.join(train_dir, 'dogs') # 学習用の犬画像のディレクトリ
validation_cats_dir = os.path.join(validation_dir, 'cats') # 検証用の猫画像のディレクトリ
validation_dogs_dir = os.path.join(validation_dir, 'dogs') # 検証用の犬画像のディレクトリ
num_cats_tr = len(os.listdir(train_cats_dir))
num_dogs_tr = len(os.listdir(train_dogs_dir))
num_cats_val = len(os.listdir(validation_cats_dir))
num_dogs_val = len(os.listdir(validation_dogs_dir))
total_train = num_cats_tr + num_dogs_tr
total_val = num_cats_val + num_dogs_val
print(total_train)
print(total_val)
# -
# # Image preprocessing
# +
# Define rescale method
def img_generator():
return ImageDataGenerator(rescale=1./255., rotation_range=45, width_shift_range=.15,
height_shift_range=.15, horizontal_flip=True, zoom_range=0.5)
train_image_generator = img_generator()
test_image_generator = ImageDataGenerator(rescale=1./255.)
## Don't forget to set "class_mode"; the default is "categorical".
## For this binary task it must be "binary", otherwise the accuracy score won't improve.
train_data_gen = train_image_generator.flow_from_directory(batch_size=512, directory=train_dir, shuffle=True, target_size=(150, 150), class_mode='binary')
test_data_gen = test_image_generator.flow_from_directory(batch_size=512, directory=validation_dir, shuffle=True, target_size=(150,150), class_mode='binary')
# -
sample_training_images, _ = next(train_data_gen)
fig = plt.figure(figsize=(10,6))
for x in np.arange(0,10):
fig.add_subplot(2,5,x+1)
plt.imshow(sample_training_images[x])
# # Modeling
# +
batch_size=128
epochs = 15
model = keras.Sequential()
model.add(keras.layers.Conv2D(16, 3, padding='same', activation='relu', input_shape=(150, 150, 3)))
model.add(keras.layers.MaxPooling2D())
model.add(keras.layers.Dropout(0.2))
model.add(keras.layers.Conv2D(32, 3, padding='same', activation='relu'))
model.add(keras.layers.MaxPooling2D())
model.add(keras.layers.Dropout(0.2))
model.add(keras.layers.Conv2D(64, 3, padding='same', activation='relu'))
model.add(keras.layers.MaxPooling2D())
model.add(keras.layers.Dropout(0.2))
model.add(keras.layers.Flatten())
model.add(keras.layers.Dense(512, activation='relu'))
model.add(keras.layers.Dense(1, activation='sigmoid'))
model.compile(optimizer=keras.optimizers.Adam(learning_rate=1e-5), loss='binary_crossentropy', metrics=['accuracy'])
history = model.fit(train_data_gen, epochs=15, batch_size=20, validation_data=test_data_gen)
| TensorFlow/Image/tensorflow-tutorial-image-classification.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
#import dependencies
import pandas as pd
import numpy as np
#file locations
measurements_csv = "datafiles/hawaii_measurements.csv"
stations_csv = "datafiles/hawaii_stations.csv"
#read files into initial data frames
msr_df = pd.read_csv(measurements_csv)
sta_df = pd.read_csv(stations_csv)
msr_df.head()
sta_df.head()
msr_df.shape
msr_df = msr_df.fillna(0.00)
msr_df.head()
sta_df.shape
sta_df
sta_df = sta_df.drop('name', axis=1)
sta_df
# Create the clean CSV files
sta_df.to_csv('datafiles/clean_stations_csv.csv', sep='\t', encoding='utf-8')
msr_df.to_csv('datafiles/clean_measurements_csv.csv', sep='\t', encoding='utf-8')
| .ipynb_checkpoints/data_engineering-checkpoint.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# +
import math
import fresnel
import freud
import gsd.hoomd
import matplotlib
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
# %matplotlib inline
matplotlib.style.use('ggplot')
# What file do we want to analyze
traj = gsd.hoomd.open('../data/20211027/chi_test/simulations/nah0/s0/traj_langevin_postequil.gsd')
# -
# Constants of conversion in our system
kB = 1.987204259e-3 # kcal/(mol * K)
kTroom = 0.5961 # 1 kT = 0.5961 kcal/mol
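# These constants convert HOOMD's reduced temperature to Kelvin (the same conversion applied to the trajectory data below); a quick sanity check that one reduced unit is roughly room temperature:

```python
kB = 1.987204259e-3  # Boltzmann constant, kcal/(mol*K)
kTroom = 0.5961      # one reduced energy unit in kcal/mol

T_reduced = 1.0
T_kelvin = T_reduced * kTroom / kB  # reduced units -> Kelvin
print(T_kelvin)  # ~300 K
```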
traj[0].log
# +
timestep = []
walltime = []
potential_energy = []
temperature = []
pressure = []
for frame in traj:
timestep.append(frame.configuration.step)
walltime.append(frame.log['Simulation/walltime'][0])
potential_energy.append(
frame.log['md/compute/ThermodynamicQuantities/potential_energy'][0])
temperature.append(
frame.log['md/compute/ThermodynamicQuantities/kinetic_temperature'][0])
pressure.append(
frame.log['md/compute/ThermodynamicQuantities/pressure'][0])
temperature_real = np.array(temperature)
# -
fig = matplotlib.figure.Figure(figsize=(10, 6.18))
ax = fig.add_subplot()
ax.plot(timestep, potential_energy)
ax.set_xlabel('timestep')
ax.set_ylabel('potential energy')
fig
fig = matplotlib.figure.Figure(figsize=(10, 6.18))
ax = fig.add_subplot()
ax.plot(timestep, temperature_real*kTroom/kB)
ax.set_xlabel('timestep')
ax.set_ylabel('Temperature (K)')
fig
fig = matplotlib.figure.Figure(figsize=(10, 6.18))
ax = fig.add_subplot()
ax.plot(timestep, pressure)
ax.set_xlabel('timestep')
ax.set_ylabel('pressure')
fig
| analysis/general_analysis.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .r
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: R
# language: R
# name: ir
# ---
##DESCRIPTION
# This notebook calculates the so-called "polygons" that describe how a system under test reacts to a set of performance tests.
# +
install.packages("RColorBrewer", repos='http://cran.us.r-project.org')
install.packages("gridExtra")
install.packages("getPass")
install.packages("RPostgreSQL")
library("RColorBrewer")
library(ggplot2)
library(gridExtra)
library(getPass)
library(RPostgreSQL)
library(dplyr)
library(stringr)
# -
db_connection <- DBI::dbConnect(dbDriver(drvName = "PostgreSQL"), dbname = Sys.getenv("DB_NAME"), host=Sys.getenv("HOST_NAME"), port="5432", user=Sys.getenv("POSTGRES_USERNAME"), password=Sys.getenv("<PASSWORD>"))
dbGetQuery(db_connection, "SELECT id::text, name FROM projects")
# +
# Define the name of the project to analyze
project_name <- "Sockshop"
project_id = dbGetQuery(db_connection, str_glue("SELECT id::text FROM projects WHERE name='{project}'", project = project_name))$id
project_id
# +
sql_operational_profile = "
SELECT users, frequency FROM operational_profile_observations
WHERE operational_profile = (SELECT id FROM operational_profiles WHERE project = ?project) order by users"
operational_profile <- dbGetQuery(db_connection, sqlInterpolate(db_connection, sql_operational_profile, project = project_id))
operational_profile[,1] <- operational_profile[,1]
operational_profile
# +
sql_all_data = "
SELECT tests.id::text AS test_id, test_sets.id::text AS test_set_id, test_properties.value::numeric AS users, metrics.abbreviation AS metric, items.name AS item_name, results.value AS item_value
FROM results
INNER JOIN tests ON results.test = tests.id
INNER JOIN items ON results.item = items.id
INNER JOIN test_properties ON (test_properties.test = tests.id AND test_properties.name = 'load')
INNER JOIN metrics ON results.metric = metrics.id
INNER JOIN test_set_tests ON (test_set_tests.test = tests.id)
INNER JOIN test_sets ON (test_sets.id = test_set_tests.test_set AND test_sets.project = tests.project)
WHERE tests.project = ?project AND metrics.abbreviation IN ('art', 'sdrt', 'mix')"
all_data = dbGetQuery(db_connection, sqlInterpolate(db_connection, sql_all_data, project = project_id))
list_of_microservices = as.data.frame(unique(all_data[,5]))
no_of_microservices = nrow(list_of_microservices)
test_users_metric<-unique(all_data[,c(1:4)])
#test_users_metric
# -
test_users_metric[list_of_microservices[,1]]<-NA
#test_users_metric
# +
#If the tests occur too fast, it might be that some services have no data. This case is not handled, yet.
for (i in 1:nrow(test_users_metric)) {
search_test_id <- test_users_metric[i,1]
search_metric <- test_users_metric[i,4]
for (j in 1:no_of_microservices) {
search_microservice <- list_of_microservices[j,]
row <- filter(all_data, test_id == search_test_id & metric == search_metric & item_name == search_microservice)
if (dim(row)[1] > 0) {
found_value = row$item_value
if (!is.na(found_value)) {
test_users_metric[i,j+4] <- found_value
} else {
test_users_metric[i,j+4] <- 99999999.00
print("NA=toobig")
}
} else {
test_users_metric[i,j+4] <- 99999999.00
}
}
}
raw_data <- test_users_metric
raw_data
# +
tests <- unique(raw_data[,1:3])
#max number for which test was made
max_no_of_users <- max(raw_data[,3])
print ("max_no")
print(max_no_of_users)
min_no_of_users <- min(raw_data[,3])
print(min_no_of_users)
user_load <- operational_profile[,1]
print("user_load:",user_load)
access_count <- operational_profile[,2]
print("access_count:",access_count)
max_no_of_requests <- max(user_load)
scale_factor <- max_no_of_users/max_no_of_requests
print("max_no_request")
print(max_no_of_requests)
print(scale_factor)
scaled_user_load <- floor(scale_factor * user_load)
print(scaled_user_load)
#Insert the bin boundaries (with frequency 0 at each bin edge) into the scaled user load, then remove duplicates
# Take these vectors as input to the array.
op = data.frame(
users = scaled_user_load,
frequency = access_count)
print(op)
#create bins
bins = data.frame(
users = c(0, 50, 100, 150, 200, 250, 300),
frequency = c(0, 0, 0, 0, 0, 0, 0))
print(bins)
#cat bin to end of op
temp_op <- rbind(op, bins)
temp_op
#sort temp_op by users
newSorted <- temp_op[order(temp_op$users),]
newSorted
print("distinct")
#remove duplicates
mergedSorted <- distinct(newSorted,users,.keep_all=TRUE)
scaled_user_load <- mergedSorted[,1]
access_count <- mergedSorted[,2]
print(scaled_user_load)
print(access_count)
# +
##Create aggregate values (by fifty) of the user frequency from "operational_profile"
steps <- 50
calculate_aggregated_values <- function() {
access_frequency <- access_count/sum(access_count)
by_fifty <- which((scaled_user_load %% steps) == 0)
print(by_fifty)
#hack need to rewrite to be general
#values_by_bin <- c(scaled_user_load[seq(1, length(scaled_user_load), 6)])
#by_fifty <- match(values_by_bin,scaled_user_load)
#by_fifty <- c(1,14,26,31,34,38,41)
print(by_fifty)
no_of_aggregated_rows = length(by_fifty)
print (no_of_aggregated_rows)
binProb <- c()
for (i in 1:no_of_aggregated_rows) {
if (i==1) {
binProb[i] <- sum(access_frequency[1:by_fifty[i]])
} else {
binProb[i] <- sum(access_frequency[(by_fifty[i-1]+1):by_fifty[i]])
}
print(binProb[i])
}
matrix(c(scaled_user_load[by_fifty], binProb), ncol=2, nrow=no_of_aggregated_rows, dimnames=list(c(1:no_of_aggregated_rows), c("Workload (number of users)", "Domain metric per workload")))
}
#todo - replace duplicate entries
aggregated_values_from_operational_profile <- calculate_aggregated_values()
#aggregated_values_from_operational_profile
#fix to match test data
#hack need to rewrite to be general
print(aggregated_values_from_operational_profile)
#aggregated_values_from_operational_profile[,1] <- c(0,50,100,150,200,250,300)
aggregated_values_from_operational_profile
# -
# + active=""
#
# +
#Define the threshold for each service. The threshold is a vector computed as avg+3*SD for the configuration with
#Users=2, Memory=4, CPU=1, CartReplica=1
data_of_min_user<-raw_data[raw_data$users==min_no_of_users,]
test_of_min_user<-tests[tests$users==min_no_of_users,]
avg <-data_of_min_user[data_of_min_user$metric=="art",][,-c(1:4)]
sd <- data_of_min_user[data_of_min_user$metric=="sdrt",][,-c(1:4)]
threshold<-data.frame(test_of_min_user,avg+3*sd)
#Check the first line of the dataframe thereshold: it must be one line
head(threshold)
data_of_min_user
# +
#Exclude the case with user = 2 from the data and check whether each service passes or fails: avg < threshold (Pass).
#Compute the relative mass for each configuration
tests_without_benchmark<-tests[!tests$users==min_no_of_users,]
raw_data_without_benchmark<-raw_data[!raw_data$users==min_no_of_users,]
avg<-raw_data_without_benchmark[raw_data_without_benchmark$metric=="art",-4]
sd<-raw_data_without_benchmark[raw_data_without_benchmark$metric=="sdrt",-4]
mix<-raw_data_without_benchmark[raw_data_without_benchmark$metric=="mix",-4]
#Check pass/fail for each service. the "mix" value is 0 if fail and mixTemp if pass. Compute the relative mass for each configuration
pass_criteria<-avg
calculate_relative_mass <- function() {
relative_mass<-c()
mix_of_passing_tests<-as.data.frame(matrix(nrow=nrow(tests_without_benchmark), ncol=ncol(raw_data_without_benchmark)-1))
for(j in 1:nrow(pass_criteria)){
#print (j)
mix_of_passing_tests[j,]<-mix[j,]
for(i in 4:(3+no_of_microservices)){
#print (i)
#print (pass_criteria[j,i])
#print (threshold[i])
if (!is.na(threshold[i]) & !is.na(pass_criteria[j,i])) {
if(pass_criteria[j,i]>threshold[i]){
#print ("fail")
mix_of_passing_tests[j,i]<-0
} #else print ("pass")
} #else print ("NA")
}
relative_mass[j]<-sum(mix_of_passing_tests[j,4:(3+no_of_microservices)])
}
relative_mass
}
relative_mass <- calculate_relative_mass()
#Show first lines of passCriteria
pass_criteria
# +
#Compute the domain metric for each configuration
tests_without_benchmark$relative_mass<-relative_mass
absolute_mass<-c()
print (tests_without_benchmark)
for(j in 1:nrow(tests_without_benchmark)) {
print (j)
absolute_mass[j]<-tests_without_benchmark[j,"relative_mass"]*aggregated_values_from_operational_profile[match(tests_without_benchmark[j,
"users"], aggregated_values_from_operational_profile[,1]),2]
print (absolute_mass[j])
}
tests_without_benchmark$absolute_mass<-absolute_mass
test_sets<-as.data.frame(unique(all_data[,2]))
colnames(test_sets)[1] <- "test_set_id"
set<-list()
domain_metric_list<-list()
for(i in 1:nrow(test_sets)){
set[[i]]<-tests_without_benchmark[which(tests_without_benchmark[,2] == test_sets[i,1]),]
domain_metric_list[[i]]<-set[[i]][,c(3,5)][order(set[[i]][,c(3,5)][,1]),]
}
#Uncomment this to show first lines of domain_metric_list
#head(domain_metric_list)
print("domain metric list")
domain_metric_list
# +
#Compute Cumulative Domain metric: summing up absoluteMass over users for each configuration
test_sets$domain_metric<-0
for(i in 1:nrow(test_sets)){
test_sets[i,2]<-round(sum(tests_without_benchmark[which(tests_without_benchmark[,2] == test_sets[i,1]),"absolute_mass"]),4)
}
domain_metric<-test_sets
domain_metric
# +
#Plot operational_profile against domain metric for each configuration
plot(aggregated_values_from_operational_profile, xlim=c(steps, max_no_of_users), ylim=c(0, 0.3),cex.lab=1.3)
polygon(c(steps,aggregated_values_from_operational_profile[,1],max_no_of_users),c(0,aggregated_values_from_operational_profile[,2],0), col="brown", lty = 1, lwd = 2, border = "black")
color=heat.colors(11)
color_transparent <- adjustcolor(color, alpha.f = 0.2)
sorted_domain_metric<-domain_metric
k<-which(sorted_domain_metric[,2]==max(sorted_domain_metric[,2]))
#The green line within the polygon is the best domain metric line.
#It corresponds to the second line in the final table below
for(i in 1:nrow(test_sets)) {
lines(domain_metric_list[[i]], type="l", col=heat.colors(11)[i])
lines(domain_metric_list[[k]], type="l", col="green")
polygon(c(steps,t(domain_metric_list[[i]][1]),max_no_of_users),c(0,t(domain_metric_list[[i]][2]),0), col=color_transparent[i], lty = 1, lwd = 1 , border = rainbow(11)[i])
}
text(aggregated_values_from_operational_profile,labels = round(aggregated_values_from_operational_profile[,2],3), pos=3, col="black")
graphics.off()
# -
DBI::dbDisconnect(db_connection)
| toolchain/notebooks/Domain metric.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + _cell_guid="b1076dfc-b9ad-4769-8c92-a6c4dae69d19" _uuid="8f2839f25d086af736a60e9eeb907d3b93b6e0e5"
import numpy as np
import h5py
import matplotlib.pyplot as plt
from keras.models import Sequential
from keras.layers import Dense, Dropout, Flatten
from keras.layers import Conv2D, MaxPooling2D
from keras.utils import to_categorical
from keras.layers.normalization import BatchNormalization
from keras.callbacks import EarlyStopping
# + _cell_guid="79c7e3d0-c299-4dcb-8224-4455121ee9b0" _uuid="d629ff2d2480ee46fbb7e2d37f6b5fab8052498a"
# Version check
import keras
import tensorflow as tf
import sklearn
print('NumPy version : ', np.__version__)
print('Keras version : ', keras.__version__)
print('With tensorflow version : ', tf.__version__)
print('Sci-kit learn version : ', sklearn.__version__)
# + _uuid="55d8a7855de32be6d50be9ea85672bdbcf0be7c6"
# Function to load data
def load_dataset():
# use h5py module and specify file path and mode (read)
all_train_data = h5py.File('C:/Users/npurk/Desktop/Chapter_3_CNN/train_happy.h5', "r")
all_test_data = h5py.File('C:/Users/npurk/Desktop/Chapter_3_CNN/test_happy.h5', "r")
# Collect all train and test data from file as numpy arrays
x_train = np.array(all_train_data["train_set_x"][:])
y_train = np.array(all_train_data["train_set_y"][:])
x_test = np.array(all_test_data["test_set_x"][:])
y_test = np.array(all_test_data["test_set_y"][:])
# Reshape data
y_train = y_train.reshape((1, y_train.shape[0]))
y_test = y_test.reshape((1, y_test.shape[0]))
return x_train, y_train, x_test, y_test
# + _uuid="fe88e0258fa7b42ddb6c1b9619ac21212901cb00"
# Load the data
X_train, Y_train, X_test, Y_test = load_dataset()
# -
print ('Image dimensions : ', X_train.shape[1:])
print ('Training tensor dimension : ', X_train.shape)
print ('Test tensor dimension : ', X_test.shape)
print ()
print ('Number of examples in training tensor : ', X_train.shape[0])
print ('Number of examples in test tensor : ', X_test.shape[0])
print ()
print ('Traning labels dimension : ', Y_train.shape)
print ('Test labels dimension : ', Y_test.shape)
# + _uuid="71c2de6c095eeb8b33386b6f9f8e855f7f815e26"
# Plot out a single image
plt.imshow(X_train[0])
# Print label for image (smiling = 1, frowning = 0)
print ("y = " + str(np.squeeze(Y_train[:, 0])))
# + _uuid="d0e20c6a026c19a7cece8bbad88d348d4b195303"
# Normalize pixels using max channel value, 255
X_train = X_train/255.
X_test = X_test/255.
# Transpose labels
Y_train = Y_train.T
Y_test = Y_test.T
# Print stats
print ("Number of training examples : " + str(X_train.shape[0]))
print ("Number of test examples : " + str(X_test.shape[0]))
print ("X_train shape: " + str(X_train.shape))
print ("Y_train shape: " + str(Y_train.shape))
print ("X_test shape: " + str(X_test.shape))
print ("Y_test shape: " + str(Y_test.shape))
# +
# convert to float 32 ndarrays
X_train = X_train.astype('float32')
X_test = X_test.astype('float32')
Y_train = Y_train.astype('float32')
Y_test = Y_test.astype('float32')
# -
plt.imshow(X_train[3])
# + _uuid="a70ae198a736dd368946b8cb5386d67fd996c75f"
# Build convolutional neural net
model = Sequential()
#First Convolutional layer
model.add(Conv2D(16,(5,5), padding = 'same', activation = 'relu', input_shape = (64,64,3)))
model.add(BatchNormalization())
#First Pooling layer
model.add(MaxPooling2D(pool_size = (2,2)))
model.add(Dropout(0.1))
#Second Convolutional layer
model.add(Conv2D(32, (5,5), padding = 'same', activation = 'relu'))
model.add(BatchNormalization())
#Second Pooling layer
model.add(MaxPooling2D(pool_size = (2,2)))
#Dropout layer
model.add(Dropout(0.1))
#Flattening layer
model.add(Flatten())
#First densely connected layer
model.add(Dense(128, activation = 'relu'))
#Final output layer
model.add(Dense(1, activation = 'sigmoid'))
# +
# Compile model
model.compile(optimizer = 'adam',
loss = 'binary_crossentropy',
metrics = ['accuracy'])
# +
# Visualize model layout
model.summary()
# +
# Initialize early stopping callback to monitor validation loss and terminate training
early_stopping = keras.callbacks.EarlyStopping(monitor='val_loss')
# + _uuid="fd3c9992c37799e28073f54510dd65005d88b0de"
# Initiate training session
model.fit(X_train, Y_train, validation_data=(X_test, Y_test),
epochs=20,
batch_size=50,
callbacks=[early_stopping])
# + _uuid="22e12fef516dc64c803c31a68552bea16e1ecd0e"
# Predict the test set results
Y_pred = model.predict_classes(X_test)
# + _uuid="68e335c0c42e1e031543f5b704f58fb6271f731f"
from sklearn.metrics import accuracy_score, confusion_matrix, recall_score, precision_score, f1_score
# Assess test accuracy, precision and recall score using sklearn.metrics
print ("Test accuracy: %s" %accuracy_score(Y_test, Y_pred))
print ("Precision: %s" %precision_score(Y_test, Y_pred))
print ("Recall: %s" %recall_score(Y_test, Y_pred))
print ("F1 score: %s" %f1_score(Y_test, Y_pred))
# -
model.save('C:/Users/npurk/Desktop/Chapter_3_CNN/smile_detector.h5py')
model = keras.models.load_model('C:/Users/npurk/Desktop/Chapter_3_CNN/smile_detector.h5py')
# +
import seaborn as sns
cm = confusion_matrix(Y_test,Y_pred)
sns.heatmap(cm,annot=True)
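# The scores printed above can be cross-checked against the confusion matrix itself: for a binary problem sklearn lays the matrix out as [[TN, FP], [FN, TP]], with precision = TP/(TP+FP) and recall = TP/(TP+FN). A minimal sanity check on made-up labels (the arrays below are illustrative, not the notebook's data):

```python
import numpy as np
from sklearn.metrics import confusion_matrix, precision_score, recall_score

# Illustrative labels: 1 = smiling, 0 = frowning
y_true = np.array([1, 1, 1, 0, 0, 0, 1, 0])
y_pred = np.array([1, 0, 1, 0, 1, 0, 1, 0])

# sklearn lays the binary confusion matrix out as [[TN, FP], [FN, TP]]
tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()

precision = tp / (tp + fp)   # of predicted smiles, how many were real
recall = tp / (tp + fn)      # of real smiles, how many were found

assert precision == precision_score(y_true, y_pred)
assert recall == recall_score(y_true, y_pred)
print(precision, recall)     # 0.75 0.75
```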
# + _uuid="cab2e6a7d51c427f97f37337a6292d7677517af8"
predictions = model.predict([X_test])
# -
predictions[103]
Y_test[8]
plt.imshow(X_test[8])
img_tensor = np.expand_dims(X_test[8], axis=0)
# X_test was already scaled to [0, 1] above, so a second division by 255 would darken the input
from keras import models
layer_outputs = [layer.output for layer in model.layers[:8]]
activation_model = models.Model(inputs=model.input, outputs=layer_outputs)
activations = activation_model.predict(img_tensor)
first_layer_activation = activations[0]
plt.matshow(first_layer_activation[0, :, :, 4], cmap='viridis')
from vis.visualization import visualize_activation
from vis.utils import utils
from keras import activations
img = visualize_activation(model, -5, filter_indices=0, input_range=(0., 1.))
plt.imshow(img)
plt.imshow(X_test[42])
input_image = X_test[8]
input_image = np.expand_dims(input_image, axis=0)
print(input_image.shape)
# +
# Import the Model class from the functional API to build a multi-output model
from keras.models import Model
#retrieve layer outputs for layers in previously trained sequential model
layer_outputs_smile_detector = [layer.output for layer in model.layers[:]]
#define multi-output model that takes input image tensors and outputs intermediate layer activations
multioutput_model = Model(inputs=model.input, outputs=layer_outputs_smile_detector)
#Generate activation tensors of intermediate layers
activations = multioutput_model.predict(input_image)
# +
# Number of layers in smile detector model
len(activations)
# +
first_layer_activation = activations[0]
second_layer_activation = activations[1]
third_layer_activation = activations[2]
fourth_layer_activation = activations[3]
fifth_layer_activation = activations[4]
sixth_layer_activation = activations[5]
seventh_layer_activation = activations[6]
eighth_layer_activation = activations[7]
activation_layers = [first_layer_activation, second_layer_activation,
third_layer_activation, fourth_layer_activation,
fifth_layer_activation, sixth_layer_activation,
seventh_layer_activation, eighth_layer_activation]
# +
#Activation maps in the first layer (16)
activations[0].shape
# +
# Number of activation maps in first layer
len(first_layer_activation[0,0,0])
# +
# Plot out activation maps from first layer
for i in range(16):
plt.matshow(first_layer_activation[0, :, :, i], cmap='viridis')
# +
# Plot out activation maps from 3rd layer
for i in range(16):
plt.matshow(third_layer_activation[0, :, :, i], cmap='viridis')
# -
model.save('C:/Users/npurk/Desktop/Chapter_3_CNN/smile_detector.h5py')
model = keras.models.load_model('C:/Users/npurk/Desktop/Chapter_3_CNN/smile_detector.h5py')
| Chapter04/Chapter_4_CNN_smile_detector.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernel_info:
# name: python3-azureml
# kernelspec:
# display_name: Python 3.6 - AzureML
# language: python
# name: python3-azureml
# ---
# + gather={"logged": 1642531099533}
import http.client, os, urllib.parse, json, time, sys
# Represents the various elements used to create HTTP request path for QnA Maker
#operations.
# Replace this with a valid subscription key.
# User host = '<your-resource-name>.cognitiveservices.azure.com'
host = 'maggietravelqa.cognitiveservices.azure.com'
subscription_key = '<your-subscription-key>'  # replace with your QnA Maker resource key
get_kb_method = '/qnamaker/v4.0/knowledgebases/'
try:
headers = {
'Ocp-Apim-Subscription-Key': subscription_key,
'Content-Type': 'application/json'
}
conn = http.client.HTTPSConnection(host)
conn.request ("GET", get_kb_method, None, headers)
response = conn.getresponse()
data = response.read().decode("UTF-8")
result = None
if len(data) > 0:
result = json.loads(data)
        #print(json.dumps(result, sort_keys=True, indent=2))
        # Note: a 200 status code indicates success for this GET request.
KB_id = result["knowledgebases"][0]["id"]
print(response.status)
print(KB_id)
except Exception:
print ("Unexpected error:", sys.exc_info()[0])
print ("Unexpected error:", sys.exc_info()[1])
# + jupyter={"source_hidden": false, "outputs_hidden": false} nteract={"transient": {"deleting": false}}
| GetKB.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Tutorial Part 18: Using Reinforcement Learning to Play Pong
#
# This notebook demonstrates using reinforcement learning to train an agent to play Pong.
#
# The first step is to create an `Environment` that implements this task. Fortunately,
# OpenAI Gym already provides an implementation of Pong (and many other tasks appropriate
# for reinforcement learning). DeepChem's `GymEnvironment` class provides an easy way to
# use environments from OpenAI Gym. We could just use it directly, but in this case we
# subclass it and preprocess the screen image a little bit to make learning easier.
#
# ## Colab
#
# This tutorial and the rest in this sequence are designed to be done in Google colab. If you'd like to open this notebook in colab, you can use the following link.
#
# [](https://colab.research.google.com/github/deepchem/deepchem/blob/master/examples/tutorials/18_Using_Reinforcement_Learning_to_Play_Pong.ipynb)
#
# ## Setup
#
# To run DeepChem within Colab, you'll need to run the following cell of installation commands. This will take about 5 minutes to run to completion and install your environment. To install `gym` you should also use `pip install 'gym[atari]'` (We need the extra modifier since we'll be using an atari game). We'll add this command onto our usual Colab installation commands for you
# !wget -c https://repo.anaconda.com/archive/Anaconda3-2019.10-Linux-x86_64.sh
# !chmod +x Anaconda3-2019.10-Linux-x86_64.sh
# !bash ./Anaconda3-2019.10-Linux-x86_64.sh -b -f -p /usr/local
# !conda install -y -c deepchem -c rdkit -c conda-forge -c omnia deepchem-gpu=2.3.0
import sys
sys.path.append('/usr/local/lib/python3.7/site-packages/')
import deepchem as dc
# !conda install pip
# !pip install 'gym[atari]'
# +
import deepchem as dc
import numpy as np
class PongEnv(dc.rl.GymEnvironment):
def __init__(self):
super(PongEnv, self).__init__('Pong-v0')
self._state_shape = (80, 80)
@property
def state(self):
# Crop everything outside the play area, reduce the image size,
# and convert it to black and white.
cropped = np.array(self._state)[34:194, :, :]
reduced = cropped[0:-1:2, 0:-1:2]
grayscale = np.sum(reduced, axis=2)
bw = np.zeros(grayscale.shape)
bw[grayscale != 233] = 1
return bw
def __deepcopy__(self, memo):
return PongEnv()
env = PongEnv()
# -
# Next we create a network to implement the policy. We begin with two convolutional layers to process
# the image. That is followed by a dense (fully connected) layer to provide plenty of capacity for game
# logic. We also add a small Gated Recurrent Unit. That gives the network a little bit of memory, so
# it can keep track of which way the ball is moving.
#
# We concatenate the dense and GRU outputs together, and use them as inputs to two final layers that serve as the
# network's outputs. One computes the action probabilities, and the other computes an estimate of the
# state value function.
#
# We also provide an input for the initial state of the GRU, and return its final state at the end. This is required by the learning algorithm.
# +
import tensorflow as tf
from tensorflow.keras.layers import Input, Concatenate, Conv2D, Dense, Flatten, GRU, Reshape
class PongPolicy(dc.rl.Policy):
def __init__(self):
super(PongPolicy, self).__init__(['action_prob', 'value', 'rnn_state'], [np.zeros(16)])
def create_model(self, **kwargs):
state = Input(shape=(80, 80))
rnn_state = Input(shape=(16,))
conv1 = Conv2D(16, kernel_size=8, strides=4, activation=tf.nn.relu)(Reshape((80, 80, 1))(state))
conv2 = Conv2D(32, kernel_size=4, strides=2, activation=tf.nn.relu)(conv1)
dense = Dense(256, activation=tf.nn.relu)(Flatten()(conv2))
gru, rnn_final_state = GRU(16, return_state=True, return_sequences=True)(
Reshape((-1, 256))(dense), initial_state=rnn_state)
concat = Concatenate()([dense, Reshape((16,))(gru)])
action_prob = Dense(env.n_actions, activation=tf.nn.softmax)(concat)
value = Dense(1)(concat)
return tf.keras.Model(inputs=[state, rnn_state], outputs=[action_prob, value, rnn_final_state])
policy = PongPolicy()
# -
# We will optimize the policy using the Asynchronous Advantage Actor Critic (A3C) algorithm. There are lots of hyperparameters we could specify at this point, but the default values for most of them work well on this problem. The only one we need to customize is the learning rate.
from deepchem.models.optimizers import Adam
a3c = dc.rl.A3C(env, policy, model_dir='model', optimizer=Adam(learning_rate=0.0002))
# Optimize for as long as you have patience to. By 1 million steps you should see clear signs of learning. Around 3 million steps it should start to occasionally beat the game's built in AI. By 7 million steps it should be winning almost every time. Running on my laptop, training takes about 20 minutes for every million steps.
# Change this to train as many steps as you have patience for.
a3c.fit(1000)
# Let's watch it play and see how it does!
env.reset()
while not env.terminated:
env.env.render()
env.step(a3c.select_action(env.state))
# # Congratulations! Time to join the Community!
#
# Congratulations on completing this tutorial notebook! If you enjoyed working through the tutorial, and want to continue working with DeepChem, we encourage you to finish the rest of the tutorials in this series. You can also help the DeepChem community in the following ways:
#
# ## Star DeepChem on [GitHub](https://github.com/deepchem/deepchem)
# This helps build awareness of the DeepChem project and the tools for open source drug discovery that we're trying to build.
#
# ## Join the DeepChem Gitter
# The DeepChem [Gitter](https://gitter.im/deepchem/Lobby) hosts a number of scientists, developers, and enthusiasts interested in deep learning for the life sciences. Join the conversation!
| examples/tutorials/18_Using_Reinforcement_Learning_to_Play_Pong.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# %matplotlib inline
#
# # Multiview KMeans Tutorial
#
# In this tutorial we demonstrate how to use multiview k-means clustering
# in *mvlearn* by clustering a 5-class dataset from the UCI multiview
# digits dataset.
#
# +
# License: MIT
from mvlearn.datasets import load_UCImultifeature
from mvlearn.cluster import MultiviewKMeans
from sklearn.cluster import KMeans
import numpy as np
from sklearn.manifold import TSNE
from sklearn.metrics import normalized_mutual_info_score as nmi_score
import matplotlib.pyplot as plt
import warnings
warnings.filterwarnings("ignore")
# Load in UCI digits multiple feature data
RANDOM_SEED = 5
# Load dataset along with labels for digits 0 through 4
n_class = 5
Xs, labels = load_UCImultifeature(
select_labeled=list(range(n_class)), views=[0, 1])
# Helper function to display data and the results of clustering
def display_plots(pre_title, data, labels):
# plot the views
fig, ax = plt.subplots(1, 2, figsize=(14, 5))
dot_size = 10
ax[0].scatter(data[0][:, 0], data[0][:, 1], c=labels, s=dot_size)
ax[0].set_title(pre_title + ' View 1')
ax[0].axes.get_xaxis().set_visible(False)
ax[0].axes.get_yaxis().set_visible(False)
ax[1].scatter(data[1][:, 0], data[1][:, 1], c=labels, s=dot_size)
ax[1].set_title(pre_title + ' View 2')
ax[1].axes.get_xaxis().set_visible(False)
ax[1].axes.get_yaxis().set_visible(False)
plt.show()
# -
# ## Singleview and multiview clustering of the data with 2 views
#
# Here we will compare the performance of the Multiview and Singleview
# versions of kmeans clustering. We will evaluate the purity of the resulting
# clusters from each algorithm with respect to the class labels using the
# normalized mutual information metric. <br>
#
# As we can see, Multiview clustering produces clusters with higher purity
# compared to those produced by clustering on just a single view or by
# clustering the two views concatenated together.
#
#
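# A useful property of NMI for this comparison: it measures agreement between partitions, so it is insensitive to how the clusters are named. A tiny illustration (toy labels, not the digits data):

```python
from sklearn.metrics import normalized_mutual_info_score as nmi_score

true_labels = [0, 0, 1, 1, 2, 2]
# The same partition under a different naming of the clusters
permuted = [2, 2, 0, 0, 1, 1]
# One point assigned to the wrong cluster
noisy = [2, 2, 0, 1, 1, 1]

# Identical partitions score (numerically) 1; imperfect ones score lower
print(nmi_score(true_labels, permuted))
print(nmi_score(true_labels, noisy))
```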
# +
# Singleview kmeans clustering
# Cluster each view separately
s_kmeans = KMeans(n_clusters=n_class, random_state=RANDOM_SEED)
s_clusters_v1 = s_kmeans.fit_predict(Xs[0])
s_clusters_v2 = s_kmeans.fit_predict(Xs[1])
# Concatenate the multiple views into a single view
s_data = np.hstack(Xs)
s_clusters = s_kmeans.fit_predict(s_data)
# Compute nmi between true class labels and singleview cluster labels
s_nmi_v1 = nmi_score(labels, s_clusters_v1)
s_nmi_v2 = nmi_score(labels, s_clusters_v2)
s_nmi = nmi_score(labels, s_clusters)
print('Singleview View 1 NMI Score: {0:.3f}\n'.format(s_nmi_v1))
print('Singleview View 2 NMI Score: {0:.3f}\n'.format(s_nmi_v2))
print('Singleview Concatenated NMI Score: {0:.3f}\n'.format(s_nmi))
# Multiview kmeans clustering
# Use the MultiviewKMeans instance to cluster the data
m_kmeans = MultiviewKMeans(n_clusters=n_class, random_state=RANDOM_SEED)
m_clusters = m_kmeans.fit_predict(Xs)
# Compute nmi between true class labels and multiview cluster labels
m_nmi = nmi_score(labels, m_clusters)
print('Multiview NMI Score: {0:.3f}\n'.format(m_nmi))
# -
# ## Comparing predicted cluster labels vs the truth
# We will display the clustering results of the Multiview kmeans clustering
# algorithm below, along with the true class labels.
#
#
# +
# Running TSNE to display clustering results via low dimensional embedding
tsne = TSNE()
new_data_1 = tsne.fit_transform(Xs[0])
new_data_2 = tsne.fit_transform(Xs[1])
display_plots('Multiview KMeans Clusters', Xs, m_clusters)
display_plots('True Labels', Xs, labels)
| _downloads/1562db42342dd4908150887ff723b410/plot_mv_kmeans_tutorial.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/cri5Castro/CNN_for_News_Classification/blob/master/cnn_for_nlp_in_keras.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + [markdown] _uuid="8f2839f25d086af736a60e9eeb907d3b93b6e0e5" _cell_guid="b1076dfc-b9ad-4769-8c92-a6c4dae69d19" id="c_YWoxDjmkiP" colab_type="text"
# ---
# # Convolutional Neural Networks for text classification
# ---
# <br>
# <center><h3> Abstract </h3></center>
# Text classification is one of the most common NLP tasks. There are plenty of approaches that can be taken, but it is sometimes difficult to imagine that well-known techniques from other ML areas could be useful for text analysis. This time we'll apply CNNs, originally designed for and widely applied to image analysis, to perform text classification.
#
# ***Motivation:***
# - CNNs are faster to train than LSTM models.
# - CNNs are translation invariant, which means they can recognize patterns in the text no matter where they appear.
# - CNNs are also efficient in terms of representation with a large vocabulary.
# - Convolutional Filters learn good representations automatically, without needing to represent the whole vocabulary.
#
# ***When to use it?:***
# - When there is no strong dependence between a sequence and its long-past words.
#
# ***Note***: In this notebook we put a special focus on computational performance; we tried to avoid extra computational complexity by not repeating tasks, so feel free to contact us if there is any doubt.
#
# ****Important****: this is a port of the original kernel hosted on Kaggle; take care with the dependencies and the data set.
# - [Dataset](https://www.kaggle.com/mgocen/20newsgroups)
# - [Pretrained Embeddings](https://www.kaggle.com/facebook/fasttext-english-word-vectors-including-subwords)
# - [Original Kernel](https://www.kaggle.com/criscastromaya/cnn-for-nlp-in-keras)
# + [markdown] _uuid="67f060164c93f453826c5f70782367eb93525492" id="JmooLILLmkiQ" colab_type="text"
# ---
# # Index
# ---
# ___1. Introduction___
# > - <a href='#1.1'>1.1 Data set Description</a>
#
# ___2. Preprocessing___
# > - <a href='#2.1'>2.1 Data cleaning</a>
# > - <a href='#2.2'>2.2 Data Preparation and Analysis</a>
#
# ___3.Feature extraction___
# > - <a href='#3.1'>BOW representation</a>
# > - <a href='#3.2'>FasText representation</a>
#
# ___<a href='#4.'>4. Model Design</a>___
#
# ___5.Training and Testing___
# > - <a href='#5.1'>5.1 CNN+BOW representation</a>
# > - <a href='#5.2'>5.2 CNN+FastText representation</a>
# + [markdown] _uuid="fd0ba1663acbf76af27af06db359fd6e95dedbfe" id="Xe62k62EmkiR" colab_type="text"
# <a id='1.1'></a>
# ## Data set Description
# We worked in the 20 Newsgroups that can be found at
# > [20 news groups dataset](http://qwone.com/~jason/20Newsgroups/)
#
# Also this data set is available as a kaggle dataset.
#
# The data is divided in two folders one for the train set and the other one for the test set, then each subdirectory in the bundle represents a newsgroup.
#
# In order to simplify the data manipulation we constructed a pandas dataframe with the following structure
#
# **ID | Document | label | **.
#
# + _cell_guid="79c7e3d0-c299-4dcb-8224-4455121ee9b0" _uuid="d629ff2d2480ee46fbb7e2d37f6b5fab8052498a" id="Z1qrpEmFmkiS" colab_type="code" colab={}
import pandas as pd #database manipulation
import numpy as np #math library
# + [markdown] _uuid="9b18f9632b3fa29c2228c42fccfc87dc0fe9d0eb" id="W6c47YFJmkiW" colab_type="text"
# <a id='2.1'></a>
# ## Preprocessing
# In the original 20 Newsgroups dataset we would have to remove headers, footers and quotes, but this preprocessing has already been done in 20 Newsgroups v3 by the dataset uploader.
# At this point we only worried about some preprocessing of the text, such as:
# >
# - **Remove weird characters** (if they exist).
# - **Expand contractions**: at first we thought about merely separating contractions from the words they attach to, but the rule set actually maps them to their expanded forms, so for example **doesn't** becomes **does not**.
# - **Remove footer lines**.
# - **Remove long words**: we established a maximum length of 13 letters for each word, roughly the maximum length of a common English word; a token beyond this limit probably represents a spelling error or noise.
# - **Remove emails and links**: we removed emails and links because they don't contribute useful information for this problem.
# >
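# The contraction rules from the list above can be replayed in isolation to see what the full cleaner does to a contraction (this snippet applies those dictionary entries one by one with `re.sub`; the real pipeline below compiles them into a single regex):

```python
import re

# The contraction entries from the cleaning dictionary below
contractions = {"\'m": " am", "\'s": " is", "\'ve": " have",
                "n\'t": " not", "\'re": " are", "\'d": " had", "\'ll": " will"}

def expand_contractions(text):
    # Apply each rule in turn (dicts preserve insertion order in Python 3.7+)
    for pattern, replacement in contractions.items():
        text = re.sub(pattern, replacement, text)
    return text

print(expand_contractions("doesn't"))    # does not
print(expand_contractions("it's fine"))  # it is fine
```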
# + _uuid="a63da9c005f2641d7cf1e2854f6ccb6e696e04ae" id="XivAR0nXmkiX" colab_type="code" colab={}
import re
#this are our cleaning rules
cleaningOptions = {
    '[A-Za-z0-9_-]{10,}':'', # remove very long tokens (likely noise)
#expand contractions
"\'m":" am",
"\'s":" is",
"\'ve":" have",
"n\'t":" not",
"\'re":" are",
"\'d":" had",
"\'ll":" will",
#delete double space, and sequences of "-,*,^,."
'\s{2,}|\?{2,}|\!{2,}|#{2,}|={2,}|-{2,}|_{2,}|\.{2,}|\*{2,}|\^{2,}':'',
#Separate simbols from words
'(':' ( ',
'/':' / ',
')':' ) ',
'?':' ? ',
'¿':' ¿ ',
']':' ] ',
'[':' [ ',
'}':' } ',
'{':' { ',
'<':' < ',
'"':' " ',
'>':' > ',
',':' , ',
'!':' ! ',
'.':' . ',
':':' : ',
'-':' - ',
#delete emails
"[A-Za-z0-9_-]*@[A-Za-z0-9._-]*\s?":"",
#delete links
"https?://[A-Za-z0-9./-]+":"",
}
def escapePattern(pattern):
"""Helper function to build our regex"""
if len(pattern)==1:
pattern=re.escape(pattern)
return pattern
def compileCleanerRegex(cleaningOptions=None):
"""Given a dictionary of rules this contruct the regular expresion to detect the patterns """
return re.compile("(%s)" % "|".join(map(escapePattern,cleaningOptions.keys())))
replacementDictRegex = compileCleanerRegex(cleaningOptions)
# + _uuid="0459219f13c722b56afc30d9a0a83597e5c75880" id="sz-gb_bsmkia" colab_type="code" colab={}
def cleaning_text(text,cleaningOptions=None,replacementDictRegex=None,encode_format="utf-8-sig",decode_format="ascii",option="ignore"):
"""Cleaning function for text
Given a text this function applies the cleaning rules defined
in a dictionary using a regex to detect the patterns and remove non-ascii characters.
Args:
text (str): The text we want to clean.
cleaning options (dict): The rules to be applied for the cleaning.
replacementDictRegex(regex): The regular expression for detecting
the patterns defined in the cleaning options
this has been compiled using the compileCleanerRegex(cleaningOptions) function.
        encode_format(str): the format of the incoming text; the default utf-8 fits most cases
        decode_format(str): the format of the cleaned output
                            (the function uses the encode/decode trick to remove unwanted characters)
Returns:
The cleaned text applying the cleaning options.
"""
""" REMOVING PUNCTUATIONS ALREADY PERFORMED by KERAS TOKENIZER
##remove extra characters (TODO)
#s = re.sub(r'(.)\1+', r'\1\1', "asssigned")
remove_punctuation=str.maketrans('','',string.punctuation)
text=text.translate(remove_punctuation)
also with this we can skip the " #Separate simbols from words part"
#"""
#optionals
#Removing weird characters
text = text.encode(encode_format).decode(decode_format,option)
    #dict.get(key, default) -- default is the value returned if the key doesn't exist
    return replacementDictRegex.sub(lambda mo: cleaningOptions.get(mo.group(1), ''), text)
# + [markdown] _uuid="73eeb4718aa019062a74e2f841b4071e739d53ad" id="4Jbcz1Krmkic" colab_type="text"
# __Let's made a test for the cleaning function:__
# + _uuid="d0cde6978f384cdbad8a17819d25d5f2aa992a55" id="NjKqGbDCmkid" colab_type="code" colab={}
oidDescriptionStr="""I'm a nicewhirrrclickwhirrr"Clam" test: ({}(. hi jij ... ,,)\1+) https://www.kaggle.com/criscastromaya/cnn-for-text-classification((it's)((((<EMAIL> )))()((isn't)(---____--)) Control"""
print(cleaning_text(oidDescriptionStr,cleaningOptions,replacementDictRegex))
# + [markdown] _uuid="a16a4335cd2380ec0b91e618e10362324029126f" id="tnwZFLxrmkig" colab_type="text"
# Then we got the path of all the files
# + _uuid="9aab3d26d715f23ceb1242a43d71f8578dc80a27" id="wByr58EMmkih" colab_type="code" colab={}
import glob,string
path = '../input/20newsgroups/20news-bydate-v3/*/*/*.txt'
#list files
files=glob.glob(path)
# + [markdown] _uuid="2c7e84c807aa1514d8261c1035e204d7a6445a53" id="1EpfZ4camkim" colab_type="text"
# Afterwards we constructed a pandas dataframe for the test and the train set
# + _uuid="1e77c318822462b82cd895db9745aeb33f03c05a" id="AixIJPbJmkio" colab_type="code" colab={}
import codecs
from tqdm import tqdm
def contructDataframe(file_list,cleaningOptions=cleaningOptions,replacementDictRegex=replacementDictRegex):
"""
This function contructs a pandas for the test and training dataframe with the format **ID | Document | label | **.
and also will perfom the preprocessing for the data using the cleaning function
Args:
file_list(list[str]): the path of the files tobe cleaned and storein the dataframes
cleaning options (dict): The rules to be applied for the cleaning.
replacementDictRegex(regex): The regular expression for detecting
the patterns defined in the cleaning options
this has been compiled using the compileCleanerRegex(cleaningOptions) function.
returns:
training_df,testing-df(pandas.dataframe): the treaning and testing set as pandas dataframes in the format |ID|Text|Label.
"""
train=[]
test=[]
mode="r"
encoding="utf-8"
e_option="ignore"
for file in tqdm(file_list):
text = codecs.open(file, mode,encoding, e_option).read()
if("20news-bydate-test" in file):
test.append((cleaning_text(text,cleaningOptions,replacementDictRegex),file.split("/")[-2]))
else:
train.append((cleaning_text(text,cleaningOptions,replacementDictRegex),file.split("/")[-2]))
return pd.DataFrame(train,columns=['text','label']),pd.DataFrame(test,columns=['text','label'])
# + _uuid="7900759dcc6b748aac4a83f781d8fe612fc8d5ae" id="nW1dOBLImkis" colab_type="code" colab={}
df_train,df_test=contructDataframe(files)
# + [markdown] _uuid="98c9818694923525cffcc14aaeb21e0717c0edf0" id="6E88PMNfmkiv" colab_type="text"
# <a id='2.2'></a>
# ### Data Preparation and Analysis
# *** As a sanity check lets see if there is no missing data or evident errors***
# + _uuid="78ebeea24f70b65ad3d4471d13d0f83e1be1f92e" id="muI_LtA4mkix" colab_type="code" colab={}
print("Train: ",df_train.isnull().values.any()," Test: ",df_test.isnull().values.any())
# + [markdown] _uuid="9f049b50e9b55cd53fd78b2c59b86bb5fc3a0352" id="3XsrJbWsmki2" colab_type="text"
# **** Also we'll see the distribution of the classes****
# + _uuid="4faaef929f715cf4af8bb9fb842609953fe06244" id="mcWkV7C8mki3" colab_type="code" colab={}
df_train.groupby(df_train.label).size().reset_index(name="counts").plot.bar(x='label',title="Samples per each class (Training set)",color='red')
# + _uuid="86542b2e62195d1bf5ce1277391244e89fb32367" id="YYH-oxyJmki5" colab_type="code" colab={}
df_test.groupby(df_test.label).size().reset_index(name="counts").plot.bar(x='label',title="Samples per each class (Test set)")
# + [markdown] _uuid="41de9287b6973d6fd39fba4f18a11c78578f2153" id="OwxKjyoVmki9" colab_type="text"
# Luckily, the train and the test set are pretty well balanced. But we still have to check the state of the data,
# so next we examine the size of each text, counting tokens.
# + _uuid="2844245ef91c4232a72f003838dad849a7b163bf" id="QM6NNwVbmki-" colab_type="code" colab={}
#df_train[df_train.text.str.split(" ").apply(len)==df_train.text.str.split(" ").apply(len).max()]
max_l=df_train.text.str.split(" ").apply(len).max()
min_l=df_train.text.str.split(" ").apply(len).min()
print(f"As we can see there is something not to good whit the dataset cause the bigger document contains {max_l} tokens and the smaller document contains {min_l} tokens")
# + [markdown] _uuid="d3360df39d54c4b144b4dd922dfd110a503faba0" id="tUrH4Ns6mkjA" colab_type="text"
# The gap between the largest and the smallest document is huge. Consequently, we should visualize the distribution of document lengths and also inspect the extreme values.
# + _uuid="fefc2f5906042a0f1d63d8e3f8c0182316d2a9a7" id="E4vqMvJAmkjB" colab_type="code" colab={}
import seaborn as sns
# %matplotlib inline
# %config InlineBackend.figure_format = 'retina'
import matplotlib.pyplot as plt
df_train['doc_len'] = df_train.text.apply(lambda words: len(words.split()))
df_test['doc_len'] = df_test.text.apply(lambda words: len(words.split()))
def plot_doc_lengths(dataframe):
max_seq_len = np.round(dataframe.doc_len.mean() + dataframe.doc_len.std()).astype(int)
sns.distplot(tuple(dataframe.doc_len), hist=True, kde=True, label='Document lengths')
    plt.axvline(x=max_seq_len, color='k', linestyle='--', label=f'Mean + std of lengths: {max_seq_len}')
plt.title('Document lengths')
plt.legend()
plt.show()
print(f" the bigger document contain {df_train['doc_len'].max()} words and the smaller {df_train['doc_len'].min()} words")
plot_doc_lengths(df_train)
# + [markdown] _uuid="375fc4e0596cb07a099b74e709f0d12e34d2f6a3" id="9_acXE_MmkjE" colab_type="text"
# Then we looked at the smallest and the largest documents, just to see what's wrong.
# + _uuid="a8437aab147eba8d3a666b3befc38aaa4c8dcbd7" id="rkGsdgxjmkjF" colab_type="code" colab={}
df_train[df_train.doc_len==df_train['doc_len'].max()]
# + _uuid="cec8accba83ad8ca2e1b3e1f5986b41cae7c5572" id="8kfz4sZpmkjH" colab_type="code" colab={}
df_train[df_train.doc_len==df_train['doc_len'].min()].tail(2)
# + [markdown] _uuid="2f670860db73223ff926b9ab78bd7216d9fe93d3" id="WTumaiSPmkjK" colab_type="text"
# As we can see, there are many empty texts.
# After an examination of the data set, we decided to delete the entries smaller than 10 tokens or bigger than 3250 tokens, because outside of this range base64 strings and other unwanted noise start to appear.
# + _uuid="0dd86ae8f12b29b907999b1d8eea298bd89d9da3" id="YlIbqcummkjL" colab_type="code" colab={}
df_train=df_train[(10<df_train.doc_len)&(3250>df_train.doc_len)]
##also I'll do the same for the test set
df_test=df_test[(10<df_test.doc_len)&(3250>df_test.doc_len)]
# + [markdown] _uuid="7eff78533669ce7dd2ed107566beca6d3c2e0832" id="agzV6UEJmkjN" colab_type="text"
# Let's see how this filtering altered the data distribution.
# + _uuid="f5ed58b96a7f3ad4775b373ff798aa5a96b513aa" id="2lRvWdt_mkjP" colab_type="code" colab={}
#df_train.sort_values(by=['doc_len'])
df_train.groupby(df_train.label).size().reset_index(name="counts").plot.bar(x='label',title="Samples per each class (Train set)",color='red')
# + _uuid="86edffd8fba40e96ed39b2cd39eccf6602e29850" id="Ocg2czoZmkjU" colab_type="code" colab={}
df_test.groupby(df_test.label).size().reset_index(name="counts").plot.bar(x='label',title="Samples per each class (Test set)")
# + _uuid="849f76d69fec7e2566e7e083e80f8044e8cc2124" id="ocBlz2rxmkjZ" colab_type="code" colab={}
plot_doc_lengths(df_train)
# + [markdown] _uuid="4801bbf571bf1901b14ecb396b74aa027e26cfc1" id="tEVuyC9Zmkjc" colab_type="text"
# <a id='3.1'></a>
# # Feature extraction
# In this section we will transform our text data into a numerical representation. For this experiment we will use a Bag-of-Words representation implemented by the Keras tokenizer (sparse) and a skip-gram-based pre-trained model using Facebook's fastText representation. I highly recommend the following article, written by Dipanjan (<NAME>), for going deeper into this subject: [A hands-on intuitive approach to Deep Learning Methods for Text Data](https://towardsdatascience.com/understanding-feature-engineering-part-4-deep-learning-methods-for-text-data-96c44370bbfa)
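# For the fastText representation used later, the usual recipe is to map the tokenizer's `word_index` onto a matrix of pretrained vectors, leaving out-of-vocabulary rows at zero. A minimal sketch — the tiny `vectors` dict stands in for the real pretrained file and `toy_index` for the tokenizer's vocabulary, both illustrative assumptions:

```python
import numpy as np

embedding_dim = 4  # real fastText vectors are 300-dimensional

# Stand-in for the pretrained word-vector file
vectors = {'cat': np.full(embedding_dim, 0.1),
           'dog': np.full(embedding_dim, 0.2)}
# Stand-in for tokenizer.word_index (index 0 is reserved for padding)
toy_index = {'cat': 1, 'dog': 2, 'xylophone': 3}

embedding_matrix = np.zeros((len(toy_index) + 1, embedding_dim))
for word, idx in toy_index.items():
    if word in vectors:          # OOV words keep an all-zero row
        embedding_matrix[idx] = vectors[word]

print(embedding_matrix.shape)    # (4, 4)
```

# Such a matrix is typically handed to a Keras `Embedding` layer via its `weights=[embedding_matrix]` argument.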
# + [markdown] _uuid="88bf20a234a91d586da8bc6fc6ef8b73bf740af6" id="9i-FcYQmmkjd" colab_type="text"
# Before starting with the feature extraction, we split our ***training data*** into two parts: ***training*** and ***validation***.
# + _uuid="19cfdd598e9154118bdc2be76b339528e4b0c018" id="SJrg8UL1mkje" colab_type="code" colab={}
from sklearn.model_selection import train_test_split
SEED = 200
X_train, X_validation, y_train, y_validation = train_test_split(df_train.text, df_train.label, test_size=0.2, random_state=3,stratify= df_train.label)
# + _uuid="16c8d28d7ca6eb5e7f44a682b7c1062f64dc9406" id="mknJbbp-mkjg" colab_type="code" colab={}
X_train.groupby(y_train).size().reset_index(name="counts").plot.bar(x='label',title="Samples per each class (Train set)",color='red')
# + _uuid="936164ae6409305f3a281c05be9e92886a02d1ea" id="JsZIP6zimkjj" colab_type="code" colab={}
X_validation.groupby(y_validation).size().reset_index(name="counts").plot.bar(x='label',title="Samples per each class (Validation set)",color='green')
# + _uuid="37d130eec7b5a0084944ae6324f446a966265b5e" id="cQDj0uJwmkjn" colab_type="code" colab={}
df_test.text.groupby(df_test.label).size().reset_index(name="counts").plot.bar(x='label',title="Samples per each class (Test set)")
# + [markdown] _uuid="aad5953469c88f5251f792c0ce82222e3c9321da" id="BeR5gx1xmkjp" colab_type="text"
# Then we transformed our data into a Bag-of-Words based representation.
# Basically, the Keras tokenizer builds a dictionary over the whole dataset and represents each document as a sequence of word indices, assigning a number to each word according to its frequency in the texts.
# + _uuid="764e771b589c4ff22af37ba8d902e477d633da18" id="CpJ6_6ydmkjq" colab_type="code" colab={}
from keras.preprocessing.text import Tokenizer
tokenizer = Tokenizer(num_words=500000)
tokenizer.fit_on_texts(X_train)
sequences_train = tokenizer.texts_to_sequences(X_train)
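The frequency-ranked indexing performed by the tokenizer can be illustrated in plain Python (a conceptual sketch of the idea, not the Keras implementation):

```python
from collections import Counter

docs = ["the cat sat", "the dog sat on the mat"]

# Count word frequencies across the whole corpus
counts = Counter(word for doc in docs for word in doc.split())

# Assign index 1 to the most frequent word, 2 to the next, and so on
# (Keras reserves index 0 for padding)
word_index = {word: i + 1 for i, (word, _) in enumerate(counts.most_common())}

# Each document becomes a sequence of word indices
sequences = [[word_index[w] for w in doc.split()] for doc in docs]
print(word_index)  # 'the' gets index 1 (most frequent), 'sat' gets index 2
print(sequences)
```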
# + _uuid="45f3bc2b5e3b2b7f57d61e37fc8a29fdab523a1a" id="yg4TPQUDmkjs" colab_type="code" colab={}
print(f"Original document: {X_train.values[0]} \nNumerical representation: {sequences_train[0]}")
# + _uuid="04fd71f70d36be7a8dfbc5ecd57659fbcf9e72ae" id="iJ-0xazSmkju" colab_type="code" colab={}
sequences_validation = tokenizer.texts_to_sequences(X_validation)
sequences_test = tokenizer.texts_to_sequences(df_test.text.values)
# + [markdown] _uuid="c6d2d19fe63e3ecd26047389776730b394bef46d" id="kPePdZ9Omkjx" colab_type="text"
# Then we visualize what the tokenizer has learned and also delete the 10 most frequent and 10 least frequent words
# + _uuid="5f9dd1e1d47d10795c619a623960290f96970071" id="rqNQwY65mkjx" colab_type="code" colab={}
"""
Citation
---------
DL4NLP lab by <NAME>
https://www.researchgate.net/profile/Oier_Lopez_de_Lacalle2
"""
# Recover the word index that was created by the tokenizer
word_index = tokenizer.word_index
print('Found {} unique tokens.\n'.format(len(word_index)))
word_count = tokenizer.word_counts
print("Show the most frequent word index:")
for i, word in enumerate(sorted(word_count, key=word_count.get, reverse=True)):
print(' {} ({}) --> {}'.format(word, word_count[word], word_index[word]))
del tokenizer.index_word[tokenizer.word_index[word]]
del tokenizer.index_docs[tokenizer.word_index[word]]
del tokenizer.word_index[word]
del tokenizer.word_docs[word]
del tokenizer.word_counts[word]
if i == 9:
print('')
break
print("Show the least frequent word index:")
for i, word in enumerate(sorted(word_count, key=word_count.get, reverse=False)):
print(' {} ({}) --> {}'.format(word, word_count[word], word_index[word]))
del tokenizer.index_word[tokenizer.word_index[word]]
del tokenizer.index_docs[tokenizer.word_index[word]]
del tokenizer.word_index[word]
del tokenizer.word_docs[word]
del tokenizer.word_counts[word]
if i == 9:
print('')
break
# + _uuid="8923750f7dc216e2c01b6f5a9431a0e892200463" id="nwBQrS77mkjz" colab_type="code" colab={}
# Recover the word index that was created by the tokenizer
word_index = tokenizer.word_index
print('Found {} unique tokens.\n'.format(len(word_index)))
word_count = tokenizer.word_counts
print("Show the most frequent word index:")
for i, word in enumerate(sorted(word_count, key=word_count.get, reverse=True)):
print(' {} ({}) --> {}'.format(word, word_count[word], word_index[word]))
if i == 9:
print('')
break
print("Show the least frequent word index:")
for i, word in enumerate(sorted(word_count, key=word_count.get, reverse=False)):
print(' {} ({}) --> {}'.format(word, word_count[word], word_index[word]))
if i == 9:
print('')
break
# + [markdown] _uuid="3e13f167cd22b273296de5c2fc6a0a21312ef481" id="cQE5CEpGmkj4" colab_type="text"
# These vectors have different lengths, so we need to pad the sequences in order to fit them into our model.
# We know from our previous data analysis that the most useful documents have a mean of 500 words, so we will limit the length of documents to 600.
# + _uuid="31410ee88a051904ba24418e98935ca372180fec" id="ig2PO9N8mkj5" colab_type="code" colab={}
max_length=600
# + _uuid="804f51bf90cc67d184f10bc2fee38bdbbfc07c53" id="w5d2LAn3mkj-" colab_type="code" colab={}
from keras.preprocessing import sequence
x_train=sequence.pad_sequences(sequences_train,maxlen=max_length)
x_validation=sequence.pad_sequences(sequences_validation,maxlen=max_length)
x_test=sequence.pad_sequences(sequences_test,maxlen=max_length)
print(f"Train set shape: {x_train.shape}\nValidation set shape: {x_validation.shape}\nTest set shape: {x_test.shape}")
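What `pad_sequences` does can be sketched in plain Python; by default Keras pre-pads with zeros and pre-truncates (keeping the last `maxlen` indices):

```python
def pad_sequence(seq, maxlen, value=0):
    # Pre-truncate: keep only the last `maxlen` items
    if len(seq) >= maxlen:
        return seq[-maxlen:]
    # Pre-pad: prepend `value` until the sequence has length `maxlen`
    return [value] * (maxlen - len(seq)) + seq

print(pad_sequence([5, 8, 2], maxlen=5))           # [0, 0, 5, 8, 2]
print(pad_sequence([7, 1, 9, 4, 3, 6], maxlen=5))  # [1, 9, 4, 3, 6]
```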
# + [markdown] _uuid="2f6ac18bd84755ca3f89a2dd8ce8cbad9d711939" id="pITJ0c5WmkkC" colab_type="text"
# We also need to transform our labels into something recognizable by the model, so we used the scikit-learn LabelBinarizer to perform this task with a one-hot encoded representation.
# + _uuid="119aebff04e689a4b70f218197ae6e27ee1415ae" id="6eBy-ZP_mkkD" colab_type="code" colab={}
from sklearn.preprocessing import LabelBinarizer
encoder = LabelBinarizer()
y_train_categorical=encoder.fit_transform(y_train.values.reshape(-1, 1))
y_validation_categorical=encoder.transform(y_validation.values.reshape(-1, 1))
y_test_categorical=encoder.transform(df_test.label.values.reshape(-1, 1))
# + _uuid="558c09d1cfa2018e7c5973f9db70e7b6440a7d3b" id="BjIsrgMGmkkH" colab_type="code" colab={}
print(f"Train set labels: {len(y_train_categorical)}\nValidation set labels: {len(y_validation_categorical)}\nTest set labels: {len(y_test_categorical)}")
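The one-hot representation produced by `LabelBinarizer` can be sketched in plain Python (a simplification: the real class also handles inverse transforms and the two-class edge case):

```python
labels = ["sport", "politics", "tech", "sport"]

# "Fit": collect the sorted set of classes, as LabelBinarizer does
classes = sorted(set(labels))  # ['politics', 'sport', 'tech']

# "Transform": one row per sample, with a 1 in the column of its class
def one_hot(label):
    return [1 if c == label else 0 for c in classes]

encoded = [one_hot(label) for label in labels]
print(encoded)  # [[0, 1, 0], [1, 0, 0], [0, 0, 1], [0, 1, 0]]
```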
# + [markdown] _uuid="5f6846896398dcfa82cb01792d104a9e7f8ec9b5" id="jVWXX21dmkkK" colab_type="text"
# <a id='4.'></a>
# ## Model Design
#
# This architecture is composed by the following layers:
# >
# - **Embedding Layer**: This layer learns a dense representation of words and their relative meanings, which is used to find relationships between words and their context. We set the embedding dimension to 100 and use an input length of 600, the maximum document length defined above (documents in our dataset average about 500 words); the vocabulary has the same size as the training-set vocabulary. For the fastText integration we use a dimension of 300 to match our pre-trained embedding, and the weights of this layer are defined by an embedding matrix.
# - **Convolutional Layer**: This layer tries to find patterns in the sentences by applying filters, generating feature maps. It is composed of 64 filters of size 7, uses the ReLU activation function, and pads the input so that the output feature maps keep the same dimension.
# - **Max pooling layer**: This layer selects the most important features from the feature maps generated by the conv layer, using a pool size of 2 (with a matching stride of 2, the Keras default).
# - **Convolutional Layer**: This layer finds patterns in the feature maps and generates new feature maps; it also has 64 filters of size 7, uses the ReLU activation function, and pads the input so that the output feature maps keep the same dimension.
# - **Global Max pooling layer**: This layer selects the single most important feature from each feature map generated by the conv layer.
# - **Dropout**: This layer is used to improve the generalization of the model; in this case it drops 50% of the neurons from the previous layer, forcing the weights to be distributed more evenly.
# - **Dense layer**: To learn some additional representation, and to avoid applying dropout directly before the output layer, we added a dense layer of 32 neurons. We also used the l2 regularization method, also known as weight decay, which forces the weights to decay towards zero (but not exactly zero). I highly recommend this [article about generalization in DL models](https://www.analyticsvidhya.com/blog/2018/04/fundamentals-deep-learning-regularization-techniques/).
# - **Output Layer (dense)**: The final layer has 20 neurons, one per class; we used the softmax function to map a probability to each class.
#
# We used the binary cross-entropy loss function (note that categorical cross-entropy is the more conventional choice for multi-class problems with a softmax output) and a custom Adam optimizer to learn the parameters and minimize the loss.
# >
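For reference, with a one-hot target the categorical cross-entropy reduces to the negative log of the probability the model assigns to the true class; a plain-Python sketch:

```python
import math

def categorical_cross_entropy(y_true_one_hot, y_pred_probs):
    # -sum(t * log(p)); with a one-hot target only the true class contributes
    return -sum(t * math.log(p) for t, p in zip(y_true_one_hot, y_pred_probs))

y_true = [0, 1, 0]        # true class is index 1
y_pred = [0.1, 0.7, 0.2]  # softmax output of the network
print(categorical_cross_entropy(y_true, y_pred))  # -ln(0.7) ≈ 0.3567
```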
# + _uuid="eac61eb459c1863ef6d25d3e5eac7ef4bfa8d1d9" id="s6ryqpPzmkkL" colab_type="code" colab={}
from keras.layers import *
from keras import Sequential, optimizers, regularizers
from keras_sequential_ascii import keras2ascii
class CNNtext(Sequential):
"""
This class extends keras.sequencial in order to build our
model according to the designed architecture
"""
#params for the convolutional layers
__num_filters = 64
__weight_decay = 1e-4
#optimizers
__adam = optimizers.Adam(lr=0.001, beta_1=0.9, beta_2=0.999, epsilon=1e-08, decay=0.0)
def __init__(self,max_length,number_of_classes,embedding_matrix=None,vocab_size=None,tokenizer=None):
        #create the model, inheriting from keras.Sequential
super().__init__()
#params for the embedding layer
self.__embedding_dim=100 if embedding_matrix is None else embedding_matrix.shape[1]
#self.__vocab_size=vocab_size if tokenizer is None else tokenizer.word_index.__len__()+1
self.__vocab_size=vocab_size if tokenizer is None else max(tokenizer.index_word.keys())+1
try:
self.__max_length=max_length
self.__number_of_classes=number_of_classes
except NameError as error:
print("Error ",error," must be defined.")
#defining layers
        #This layer will learn an embedding; vocab_size is the vocabulary learned by our tokenizer,
        #the embedding dimension is chosen by us (in this case 100),
        #and the input length is the maximum document length we will use
if embedding_matrix is None:
self.add(Embedding(self.__vocab_size,
self.__embedding_dim,
input_length=self.__max_length,trainable=True))
else:
self.add(Embedding(embedding_matrix.shape[0],
embedding_matrix.shape[1],
weights=[embedding_matrix],
input_length=self.__max_length,
trainable=False))
        #then we apply a 1D conv layer that applies filters to the sequence and generates feature maps.
self.add(Conv1D(self.__num_filters, 7, activation='relu', padding='same'))
#then we will get the most important features using a max pooling layer
self.add(MaxPooling1D(2))
        #afterwards we apply a 1D conv layer to learn new features from the previous results
self.add(Conv1D(self.__num_filters, 7, activation='relu', padding='same'))
#we select again the most important features
self.add(GlobalMaxPooling1D())
#then we apply dropout to improve the generalization
self.add(Dropout(0.5))
        #then we pass the results into a dense layer that also learns an internal representation; we apply l2 regularization here
self.add(Dense(32, activation='relu', kernel_regularizer=regularizers.l2(self.__weight_decay)))
#for the final layer we will use softmax to obtain the probabilities of each class.
self.add(Dense(self.__number_of_classes, activation='softmax'))
        #to compute the loss function we use binary_crossentropy
        #(categorical_crossentropy is the more conventional choice for multi-class problems)
        #we also use the Adam optimizer to learn the parameters (weights)
        #and minimize the loss function.
self.compile(loss='binary_crossentropy', optimizer=self.__adam, metrics=['accuracy'])
# + [markdown] _uuid="2d2eaaad74840a17d5d7db753c5542c13a51be0c" id="_ognQLRGmkkN" colab_type="text"
# <a id='5.1'></a>
# ## Training and testing (CNN+BOW)
# + [markdown] _uuid="b7852ed11ab15700f16f487b530efd6afc207dd1" id="JoIJa4IImkkO" colab_type="text"
# We will use the early stopping technique, which monitors the validation loss and stops training when the loss stops improving.
# + _uuid="78a7c24e925afda3d1e3225aea8510b8c7acd19f" id="vSX-1Q7nmkkQ" colab_type="code" colab={}
from keras.callbacks import EarlyStopping
early_stopping = EarlyStopping(monitor='val_loss', min_delta=0.01, patience=4, verbose=1)
callbacks_list = [early_stopping]
# + [markdown] _uuid="c27c4c60a18b8dc7e7f5647d78581edfa4a8ff1a" id="r8-jSpPxmkkS" colab_type="text"
# We define the batch size and the number of epochs. We set enough epochs to fit the whole training set, but the actual number of epochs is decided by the condition established in the callback.
# + _uuid="eb542d20577909fcea005ad6a0c7fcaa147462f0" id="k-a2KFaqmkkT" colab_type="code" colab={}
#training params
batch_size = 150
num_epochs = 20
# + _uuid="12e479cd94e48cb351f1849fbae2889bebea8ae4" id="AyP7D4Y5mkkW" colab_type="code" colab={}
tokenizer.num_words
# + _uuid="3d620915a8fdb6480a1c0086cabb2dc1116457a8" id="jwk0KTAzmkkY" colab_type="code" colab={}
CNN_BOW=CNNtext(max_length,
                len(encoder.classes_),
tokenizer=tokenizer)
# + _uuid="3120d042ea0ee9e87a0dc92cdf7f7ee0eb7a4f0e" id="BsHy7X8fmkka" colab_type="code" colab={}
keras2ascii(CNN_BOW)
# + _uuid="22b58f6b34354847ac4453bbe6e2da93ea532328" id="Su3LjvNpmkkg" colab_type="code" colab={}
hist = CNN_BOW.fit(x_train, y_train_categorical,
batch_size=batch_size, epochs=num_epochs, callbacks=callbacks_list,
validation_data=(x_validation,y_validation_categorical),
shuffle=True)
# + [markdown] _uuid="8cb9d99e2a4bae6259069cfc3fa5deab8fa27189" id="20SbizyZmkki" colab_type="text"
# We checked the performance using the test set
# + _uuid="88c5b02f4947bd366827f9db4376917caec2382a" id="xzFKiE-xmkkj" colab_type="code" colab={}
loss, accuracy = CNN_BOW.evaluate(x_test,encoder.transform(df_test.label.values), verbose=1)
print('Accuracy: %.2f%%' % (accuracy*100), 'loss: %.4f' % loss)
# + _uuid="d1d438c0b9ffc97727be85e95b74cb49b714ad3a" id="qtwd0B5fmkkm" colab_type="code" colab={}
def plot_model_perfomance(hist,name):
plt.style.use('fivethirtyeight')
plt.figure(1)
plt.plot(hist.history['loss'], lw=2.0, color='b', label='train')
plt.plot(hist.history['val_loss'], lw=2.0, color='r', label='val')
plt.title(name)
plt.xlabel('Epochs')
plt.ylabel('Cross-Entropy Loss')
plt.legend(loc='upper right')
plt.figure(2)
plt.plot(hist.history['acc'], lw=2.0, color='b', label='train')
plt.plot(hist.history['val_acc'], lw=2.0, color='r', label='val')
plt.title(name)
plt.xlabel('Epochs')
plt.ylabel('Accuracy')
plt.legend(loc='upper left')
plt.show()
# + _uuid="3fadfe1eebadbcda19f4102f5dcfff092ed0f516" id="-PzLlV9Pmkkr" colab_type="code" colab={}
plot_model_perfomance(hist,'CNN BOW')
# + [markdown] _uuid="fa7fad55eb1a373d92b791eebb1496a4e39a40f6" id="QHv1Jzmsmkkw" colab_type="text"
# Now we will construct the confusion matrices by making predictions for the test, validation, and train sets
# + _uuid="d890710c3750315e78e9fa14da5177c84553b9c7" id="vW1vcIRamkkx" colab_type="code" colab={}
bow_predict_y_test = CNN_BOW.predict(x_test,verbose=1)
bow_predict_y_train = CNN_BOW.predict(x_train,verbose=1)
bow_predict_y_validation = CNN_BOW.predict(x_validation,verbose=1)
# + _uuid="873e9ed1e64030f6b5ea44cf1b3c44eb595088f3" id="sE4O-rSlmkkz" colab_type="code" colab={}
bow_predict_y_test= encoder.inverse_transform(bow_predict_y_test)
bow_predict_y_train= encoder.inverse_transform(bow_predict_y_train)
bow_predict_y_validation= encoder.inverse_transform(bow_predict_y_validation)
# + _uuid="54c44d921dd260429b111da7e24874bf58a3034c" id="LVNz9BMlmkk1" colab_type="code" colab={}
from sklearn.metrics import confusion_matrix
def plot_confusion_matrix(y=None,y_predict=None,classes=None,name=None):
plt.figure(figsize=(30, 30))
sns.heatmap(confusion_matrix(y,y_predict),
xticklabels=classes,
yticklabels=classes)
plt.title(name)
plt.show()
# + [markdown] _uuid="7e9e5e827276931fe8b0754a6aab22606c77d56d" id="P1lQHjzKmkk3" colab_type="text"
# This is the confusion matrix for the test set.
# + _uuid="6ef47e75b1c21bf9045b42c3c4e4f4474c967323" id="XoSty-xAmkk4" colab_type="code" colab={}
plot_confusion_matrix(df_test.label.values,bow_predict_y_test,encoder.classes_,'Test accuracy CNN BOW')
# + [markdown] _uuid="30858fc264477da990f55ceebec16a635a9655d8" id="w5olk7Nimkk7" colab_type="text"
# This is the confusion matrix for the validation set.
# + _uuid="73a97e14270464e3652a5925338ced04e20f6796" id="Zll8jiWjmkk8" colab_type="code" colab={}
plot_confusion_matrix(y_validation,bow_predict_y_validation,encoder.classes_,'Validation accuracy CNN BOW')
# + [markdown] _uuid="94e7152d36b1c33dff44f94866df5877b25df981" id="bKVo0CN-mkk-" colab_type="text"
# This is the confusion matrix for the train set.
# + _uuid="5cbdaada4a270af6519fd07cee80db28765fdd47" id="gH4Yd3zPmkk_" colab_type="code" colab={}
plot_confusion_matrix(y_train,bow_predict_y_train,encoder.classes_,'Train accuracy CNN BOW')
# + [markdown] _uuid="bb273cbe710702a8bae051492e02ba8746f20af1" id="_o0phtfGmklB" colab_type="text"
# <a id='3.2'></a>
# ## FastText integration
# In this section we will use the fastText embeddings and see how our results are affected.
# We are using a pre-trained [fastText embedding](https://www.kaggle.com/facebook/fasttext-english-word-vectors-including-subwords#wiki-news-300d-1M-subword.vec) from Kaggle.
# + [markdown] _uuid="c239be933058223d15384e882fda77322659259a" id="jI2MTrfZmklC" colab_type="text"
# First we build the embedding matrix for our vocabulary
# + _uuid="66603928ef5f4e875e9a505cc0593604d64bd0fa" id="yIKBmzefmklD" colab_type="code" colab={}
import codecs
from tqdm import tqdm

def read(file=None, embed_dim=300, threshold=None, vocabulary=None):
    """
    Builds the embedding matrix for our vocabulary from a pre-trained vector file
    param file: path to the pre-trained embedding file (.vec format)
    param embed_dim: dimension of the embedding vectors
    param threshold: optional cap on the vocabulary size
    param vocabulary: fitted Keras tokenizer whose words are looked up
    """
    embedding_matrix = np.zeros((max(vocabulary.index_word.keys()) + 1, embed_dim)) if threshold is None else np.zeros((threshold, embed_dim))
    #embedding_matrix = np.zeros((vocabulary.word_index.__len__() + 1, embed_dim)) if threshold is None else np.zeros((threshold, embed_dim))
words_not_found=[]
matching=[]
f = codecs.open(file, encoding='utf-8')
for line in tqdm(f):
vec = line.rstrip().rsplit(' ')
word=vec[0].lower()
if word in vocabulary.word_index:
matching.append(word)
embedding_matrix[vocabulary.word_index[word]]= np.asarray(vec[1:], dtype='float32')
else:
words_not_found.append(word)
f.close()
return embedding_matrix,words_not_found,matching
# + _uuid="ebdb8e98c47d7c2907a2d67a3a21d3640205509e" id="n2ee2KiQmklK" colab_type="code" colab={}
embedding_matrix,words_not_found,match= read("../input/fasttext-english-word-vectors-including-subwords/wiki-news-300d-1M-subword.vec",vocabulary=tokenizer)
# + _uuid="e51682686aaca018f0e2839881de2397b018cfb6" id="h8ex2rD5mklO" colab_type="code" colab={}
print(f"{len(words_not_found)} words not found")
# + _uuid="71fee43b4a0ca6f79894bd38be00a0bc848d670d" id="s_Unsm2CmklR" colab_type="code" colab={}
embedding_matrix.shape
# + [markdown] _uuid="dc8d3dd1c5040df015dc817784986b037c82c671" id="gx_1b-oUmklW" colab_type="text"
# <a id='5.2'></a>
# Now it is time to build the model; this time we will set the weights of the embedding layer using the embedding matrix built from the fastText vectors
# + _uuid="3acb8fe16b5b336dfba3f2d30b2067929ad5af69" id="iQEd2sT9mklX" colab_type="code" colab={}
CNN_fastText=CNNtext(max_length,
                     len(encoder.classes_),
embedding_matrix=embedding_matrix,
tokenizer=tokenizer)
# + _uuid="898a90390758c46b1b187a1bf602c143b788522e" id="HYw7tD5omkla" colab_type="code" colab={}
keras2ascii(CNN_fastText)
# + _uuid="deca072d1fd3fdf35a1c578c30719566589e85a3" id="O9WchdO0mkle" colab_type="code" colab={}
hist = CNN_fastText.fit(x_train, y_train_categorical,
batch_size=batch_size, epochs=num_epochs, callbacks=callbacks_list,
validation_data=(x_validation,y_validation_categorical),
shuffle=True)
# + _uuid="6cff73bfffab0bd181b803c2d4e0a181cd66da4d" id="_bsZDy1Zmklg" colab_type="code" colab={}
plot_model_perfomance(hist,'CNN FastText')
# + _uuid="b89749e9c6d80a742409f7781ba53e1f3f0831ff" id="g5s4giU3mkli" colab_type="code" colab={}
ft_predict_y_test = CNN_fastText.predict(x_test,verbose=1)
ft_predict_y_train = CNN_fastText.predict(x_train,verbose=1)
ft_predict_y_validation = CNN_fastText.predict(x_validation,verbose=1)
# + _uuid="8eb791dea5b11655759e001d512e7bed741e6d33" id="TWjM2442mklk" colab_type="code" colab={}
ft_predict_y_test= encoder.inverse_transform(ft_predict_y_test)
ft_predict_y_train= encoder.inverse_transform(ft_predict_y_train)
ft_predict_y_validation= encoder.inverse_transform(ft_predict_y_validation)
# + _uuid="b4880fbfa612f88245b860cc19e20048b722a612" id="I6Hbh7EQmklo" colab_type="code" colab={}
loss, accuracy = CNN_fastText.evaluate(x_test,encoder.transform(df_test.label.values), verbose=1)
print('Accuracy: %.2f%%' % (accuracy*100), 'loss: %.4f' % loss)
# + _uuid="4e1616f554df0eca1183e13de40e6d33b3a1b2cb" id="vknSBk_smklv" colab_type="code" colab={}
plot_confusion_matrix(df_test.label.values,ft_predict_y_test,encoder.classes_,'Test accuracy CNN FastText')
# + _uuid="7ca4970315fc96f9111ff5dc3c27420c2a5700d5" id="mZ0oVMdumklx" colab_type="code" colab={}
plot_confusion_matrix(y_validation,ft_predict_y_validation,encoder.classes_,'Validation accuracy CNN FastText')
# + _uuid="5870e35cd9511c52682318a04e4e11d3542c3474" id="JDwKXbH7mklz" colab_type="code" colab={}
plot_confusion_matrix(y_train,ft_predict_y_train,encoder.classes_,'Train accuracy CNN FastText')
# + [markdown] _uuid="fc0b9560775895a68d40b5b4155d715381825f7d" id="e4hm8zqamkl4" colab_type="text"
# ### Conclusions
#
# CNNs are very useful for recognizing text patterns, and their properties allow us to design very strong models for NLP tasks. Comparing the fastText embeddings with the embeddings learned by the embedding layer in the first approach, the confusion-matrix results are better for the first approach; however, looking at the behaviour of the loss and accuracy, the fastText embeddings generalize better, since the gap between validation and training is smaller.
| cnn_for_nlp_in_keras.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] cell_tags=[]
# ## 4 - Making Choices
# + [markdown] cell_tags=[]
# Our previous lessons showed us how to manipulate data,
# define our own functions,
# and repeat things.
# However,
# the programs we have written so far always do the same things,
# regardless of the data they receive.
# We want programs to make choices based on the values they are manipulating.
# + [markdown] cell_tags=["objectives"]
# #### Objectives
# * Write conditional statements including `if`, `elif`, and `else`.
# * Correctly evaluate expressions containing `and` and `or`.
# * Correctly write and interpret code containing nested loops and conditionals.
# + [markdown] cell_tags=[]
# ### Conditionals
#
# + [markdown] cell_tags=[]
# The other thing we need in order to write our programs is a way to make choices based on a data value.
# The tool Python gives us for this is called a [conditional statement](http://swcarpentry.github.io/python-novice-inflammation-2.7/reference.html#conditional-statement),
# and it looks like this:
#
# + cell_tags=[]
num = 37
if num > 100:
print ('greater')
else:
print ('not greater')
print ('done')
# + [markdown] cell_tags=[]
# The second line of this code uses the keyword `if` to tell Python that we want to make a choice.
# If the test that follows is true,
# the body of the `if`
# (i.e., the indented lines beneath it) is executed.
# If the test is false,
# the body of the `else` is executed instead.
# Only one or the other is executed:
# + [markdown] cell_tags=[]
# <img src="files/img/python-flowchart-conditional.png" alt="Executing a Conditional" />
# + [markdown] cell_tags=[]
# Conditional statements don't have to include an `else`.
# If there isn't one, Python simply does nothing if the test is false:
# + cell_tags=[]
num = 53
print ('before conditional...')
if num > 100:
print ('53 is greater than 100')
print ('...after conditional')
# + [markdown] cell_tags=[]
# We can also chain several tests together using `elif`,
# which is short for "else if".
# This makes it simple to write a function that returns the sign of a number:
# + cell_tags=[]
def sign(num):
if num > 0:
return 1
elif num == 0:
return 0
else:
return -1
print ('sign of -3:', sign(-3))
# + [markdown] cell_tags=[]
# One important thing to notice in the code above is that we use a double equals sign `==` to test for equality
# rather than a single equals sign,
# because the latter is used to mean assignment.
# This convention was inherited from C,
# and many other programming languages work the same way.
#
# We can also combine tests using `and` and `or`.
# `and` is only true if both parts are true:
# + cell_tags=[]
if (1 > 0) and (-1 > 0):
print( 'both parts are true')
else:
print ('one part is not true')
# + [markdown] cell_tags=[]
# while `or` is true if either part is true:
# + cell_tags=[]
if (1 < 0) or ('left' < 'right'):
print ('at least one test is true')
# + [markdown] cell_tags=["challenges"]
# #### Exercises
#
# 1. `True` and `False` aren't the only values in Python that are true and false.
# In fact, *any* value can be used in an `if` or `elif`.
# After reading and running the code below,
# explain the rule for which values are considered true and which are considered false.
# (Note that if the body of a conditional is a single line, we can write it on the same line as the `if`.)
#
# ~~~python
# if '': print('empty string is true')
# if 'word': print('word is true')
# if []: print('empty list is true')
# if [1, 2, 3]: print('non-empty list is true')
# if 0: print('zero is true')
# if 1: print('one is true')
# ~~~
#
# 2. Write a function called `near` that returns `True` if its first parameter is within 10% of its second,
# and `False` otherwise.
# Compare your implementation with your partner's:
# do you return the same answer for all possible pairs of numbers?
# -
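One possible implementation of `near` (a sketch; it assumes the 10% tolerance is measured relative to the second parameter):

```python
def near(a, b):
    """Return True if a is within 10% of b, False otherwise."""
    return abs(a - b) <= 0.1 * abs(b)

print(near(10.5, 10))  # True: 10.5 is within 10% of 10
print(near(12, 10))    # False: 12 is 20% away from 10
```

Note that with this definition `near(x, 0)` is only true when `x` is exactly 0 — a good edge case to discuss with your partner.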
| Aulas/1/04-cond.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] slideshow={"slide_type": "slide"}
# <p align="right">
# <img src="Capture.png" width="1100" height="1200" />
#
# </p>
#
# + [markdown] slideshow={"slide_type": "slide"}
# # Classifying basin‐scale stratigraphic geometries from subsurface formation tops with machine learning
# ### <NAME> and <NAME>
# #### Texas Institute for Discovery Education in Science, College of Natural Sciences, Cockrell School of Engineering, Jackson School of Geosciences
# #### The University of Texas at Austin
# **[Twitter](http://twitter.com/geologyjesse)** | **[GitHub](https://github.com/jessepisel)** | **[GoogleScholar](https://scholar.google.com/citations?user=Z4JzYgIAAAAJ&hl=en&oi=ao)** | **[LinkedIn](https://www.linkedin.com/in/jesse-pisel-70519430/)**
#
#
# + [markdown] slideshow={"slide_type": "subslide"}
# <tr>
# <td valign="top"><img src="https://github.com/GeostatsGuy/GeostatsPy/blob/master/TCG_color_logo.png?raw=true" width="225"></td>
# <td valign="top"><img src="https://github.com/jessepisel/energy_analytics/blob/master/EA_logo.jpg?raw=true" width="250"></td>
#
# <td valign="top"><img src="https://upload.wikimedia.org/wikipedia/commons/thumb/0/0c/ConocoPhillips_Logo.svg/1200px-ConocoPhillips_Logo.svg.png" width="450"></td>
#
# </tr>
#
# + [markdown] slideshow={"slide_type": "subslide"}
# ## Executive Summary
# **Problem**: Structure and thickness maps contoured from points are **non-unique** for onlap and truncation geometries
#
# **Our Approach**: Teach a classifier to predict geometries for a _synthetic_ model, then transfer to "real world" data
#
# **What We Learned**: Transfer learning works pretty well for this type of task
#
# **Recommendations**: Useful for guided interpretation, and should be tried at different stratigraphic scales
#
# If you think this is great read our 100% open-access paper:
#
# https://onlinelibrary.wiley.com/doi/abs/10.1002/dep2.129
#
# And all the code is open-source:
#
# https://github.com/jessepisel/stratal-geometries
# + [markdown] slideshow={"slide_type": "slide"}
# ## First the problem:
#
# Let's walk through the conceptual idea behind the problem.
#
# It's really tough to interpret if a formation is thinning because of truncation or thinning because of onlap.
#
# Think of it as the formation is thinning either at the top (truncation) or thinning on the bottom (onlap)
#
# Let's see how everyone does with an example:
# + [markdown] slideshow={"slide_type": "subslide"}
# ## Onlap or Truncation
# <p align="right">
# <img src="a.jpg" width="400" height="400" />
#
# </p>
# + [markdown] slideshow={"slide_type": "subslide"}
# ## Onlap or Truncation
# <p align="right">
# <img src="b.jpg" width="400" height="400" />
#
# </p>
# + [markdown] slideshow={"slide_type": "subslide"}
# ## How about cross sections?
# <p align="right">
# <img src="axs.jpg" width="600" height="600" />
#
# </p>
#
# <p align="right">
# <img src="bxs.jpg" width="600" height="600" />
#
# </p>
#
# 4 wells in each section with no V.E.
# + [markdown] slideshow={"slide_type": "subslide"}
# ## How about another set of cross sections?
# <p align="right">
# <img src="axsfull.jpg" width="600" height="600" />
#
# </p>
#
# <p align="right">
# <img src="bxsful.jpg" width="600" height="600" />
#
# </p>
# + [markdown] slideshow={"slide_type": "subslide"}
# * 100 wells with 2x V.E.
# * What made it easier for you with the second cross section? More data, vertical exaggeration, and comparing each vertical 1D profile to what is on either side of it?
# + [markdown] slideshow={"slide_type": "slide"}
# ## Our Approach:
#
# * Can we use machine learning for this problem?
# * At its core it is a binary classification problem (onlap vs. truncation)
# * We also need to include horizontal stratification as a third class
# * How to build a training dataset when we don't know the classes downhole?
# + [markdown] slideshow={"slide_type": "subslide"}
# ### How we did it
# 1. Build conceptual model of the subsurface for three classes
# * Truncation
# * Onlap
# * Horizontal stratification
# 2. Train machine learning classifier on _perfect_ models, measure uncertainty
# 3. Transfer classifier to real world dataset and compare to **ground truth** field geology
#
# Let's start with training data
# + [markdown] slideshow={"slide_type": "slide"}
# ## Data Generation:
#
# How to build conceptual models?
#
# 1. Use open source tools!
# * You can use any tools you have at your disposal
# * We chose open source because it's fast and easy
# * Bonus is we get to share it with everyone
#
# + [markdown] slideshow={"slide_type": "subslide"}
# <tr>
# <td valign="top"><img src="https://cepa.io/wp-content/uploads/2018/02/numpy-logo.png" width="200" /></td>
# <td valign="top"><img src="https://numfocus.org/wp-content/uploads/2016/07/pandas-logo-300.png" width="200" /></td>
# <td valign="top"><img src="https://www.fatiando.org/verde/latest/_static/verde-logo.png" width="200" /></td>
# <td valign="top"><img src="https://matplotlib.org/_static/logo2_compressed.svg" width="200" /></td>
# <td valign="top"><img src="https://upload.wikimedia.org/wikipedia/commons/thumb/0/05/Scikit_learn_logo_small.svg/1200px-Scikit_learn_logo_small.svg.png" width="200" /></td>
# <td valign="top"><img src="https://www.fullstackpython.com/img/logos/scipy.png" width="200" /></td>
#
# </tr>
#
# + [markdown] slideshow={"slide_type": "subslide"}
# #### More specifically:
# 1. Create geometries using sine waves (varying wavelength and amplitude)
# 2. Erode the geometries on each pass
# 3. Rotate the geometries
#
# Let's run some code and see what the conceptual model looks like
#
# + slideshow={"slide_type": "subslide"}
from scipy.spatial.distance import pdist, squareform
from sklearn.preprocessing import FunctionTransformer
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import verde as vd
import warnings
warnings.filterwarnings("ignore", category=DeprecationWarning)
# + [markdown] slideshow={"slide_type": "subslide"}
# ## First define some global variables
# + slideshow={"slide_type": "subslide"}
# this creates dummy NAMES for the formations
NAMES = [
"one",
"two",
]
# this is the number of tops you want in your training data
NUMBER_OF_LAYERS = 2
# minimum value for top depths
SMALLEST = -6
# maximum value for top depths
LARGEST = 12
# number of steps between top depths
STEP = 2
# + [markdown] slideshow={"slide_type": "subslide"}
# ### Next some functions that build our earth models
# + slideshow={"slide_type": "subslide"}
def truncation(smallest, largest, step, names, number_of_layers, j):
"""
Creates truncated stratal geometries using a min, max, step, names and numbers of layers
param smallest: the smallest integer value for stratigraphy
param largest: the largest integer value for stratigraphy
param step: the size of the step from smallest to largest
param names: names of the layers as strings in a list
param number_of_layers: number of layers to evaluate
param j: float value that controls the wavelength of the sine curve
"""
rolling = pd.DataFrame()
j = np.round(j, decimals=3) + 0.5
elevation_random = sorted(
np.random.uniform(smallest, largest, number_of_layers - 1)
)
for i in range(len(names[0 : number_of_layers - 1])):
basement = (
0.001
+ (10) * np.sin(1 - np.arange(0, 40, 0.1) / (j * 2) + 0.001)
+ np.random.rand(400) / 5
)
elevation = (
np.full(
400,
basement.max()
+ np.random.uniform(basement.min() / 2, basement.max() / 64, 1),
)
+ np.random.rand(400) / 5
)
topbasement = np.where(basement > elevation, elevation, basement)
rolling["zero"] = topbasement
layer_elevation = (
0.001
+ (10) * np.sin(1 - np.arange(0, 40, 0.1) / (j * 2) + 0.001)
+ abs(elevation_random[i])
+ np.random.rand(400) / 5
)
layer_elevation = np.where(
layer_elevation < basement, basement, layer_elevation
)
layer_elevation = np.where(
layer_elevation > elevation, elevation, layer_elevation
)
rolling[names[i]] = layer_elevation
return rolling
# + slideshow={"slide_type": "subslide"}
def onlap(smallest, largest, step, names, number_of_layers, j):
"""
Creates onlap stratal geometries using a min, max, step, names and numbers of layers
param smallest: the smallest integer value for stratigraphy
param largest: the largest integer value for stratigraphy
param step: the size of the step from smallest to largest
param names: names of the layers as strings in a list
param number_of_layers: number of layers to evaluate
param j: float value that controls the wavelength of the sine curve
"""
rolling = pd.DataFrame()
j = np.round(j, decimals=3) + 0.5
elevation_random = sorted(
np.random.uniform(smallest, largest, number_of_layers - 1)
)
for i in range(len(names[0 : number_of_layers - 1])):
basement = (
0.001
+ (10) * np.sin(1 - np.arange(0, 40, 0.1) / (j * 2) + 0.001)
+ np.random.rand(400) / 5
)
elevation = (
np.full(
400,
basement.max()
+ np.random.uniform(basement.min() / 2, basement.max() / 64, 1),
)
+ np.random.rand(400) / 5
)
topbasement = np.where(basement > elevation, elevation, basement)
rolling["zero"] = topbasement
strat_elevation = (
np.full(400, elevation_random[i]) + np.random.rand(400) / 5
)
onlap = np.where(strat_elevation > basement, strat_elevation, basement)
layer_elevation = np.where(onlap > elevation, elevation, onlap)
rolling[names[i]] = layer_elevation
return rolling
# + slideshow={"slide_type": "subslide"}
def horizontal(smallest, largest, step, names, number_of_layers):
"""
Creates horizontal stratal geometries using a min, max, step, names and numbers of layers
param smallest: the smallest integer value for stratigraphy
param largest: the largest integer value for stratigraphy
param step: the size of the step from smallest to largest
param names: names of the layers as strings in a list
param number_of_layers: number of layers to evaluate
"""
rolling = pd.DataFrame()
elevation_random = sorted(
np.random.uniform(smallest, largest, number_of_layers - 1)
)
for i in range(len(names[0 : number_of_layers - 1])):
strat_elevation = (
np.full(400, elevation_random[i]) + np.random.rand(400) / 5
)
basement = strat_elevation - abs(
np.random.uniform(smallest, largest)
+ np.random.rand(400) / 5
)
elevation = (
np.full(400, strat_elevation + elevation_random[i])
+ np.random.rand(400) / 5
)
topbasement = np.where(basement > elevation, elevation, basement)
layer_elevation = np.where(
strat_elevation > elevation, elevation, strat_elevation
)
rolling["zero"] = topbasement
rolling[names[i]] = layer_elevation
return rolling
# + [markdown] slideshow={"slide_type": "subslide"}
# ### Now some functions to build out features
# + slideshow={"slide_type": "subslide"}
def rotation(dataframe, j):
"""
Creates spatial samples and rotates them in the xy plane
param dataframe: dataframe output from stratigraphy generation
param j: controls the rotation of the dataset 0 is no rotation
"""
x = np.arange(0, 40, 0.1)
y = np.random.randint(0, 10, len(x))
# this is the rotation of the generated data
if j % 0.2 > 0.1:
dataframe["ex"] = x * np.cos(-j / 2) - y * np.sin(-j / 2)
dataframe["ey"] = y * np.cos(-j / 2) - x * np.sin(-j / 2)
else:
dataframe["ex"] = x * np.cos(j / 2) - y * np.sin(j / 2)
dataframe["ey"] = y * np.cos(j / 2) - x * np.sin(j / 2)
return dataframe
# + slideshow={"slide_type": "subslide"}
def depth_to_thickness(neighborhood, dataframe):
"""
Converts the depth dataframe from the adjacent wells function to thicknesses
param neighborhood: dataframe output from `adjacent_wells`
param dataframe: dataframe output from function `missing`
"""
locations = pd.DataFrame()
df = pd.DataFrame()
thicknesses = neighborhood.diff(axis=1)
thicknesses[thicknesses < 0] = 0
thicknesses.drop(columns="zero", inplace=True)
locations = pd.concat((locations, dataframe.iloc[:, -2:]))
df = pd.concat((df, thicknesses))
return df, locations
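# + [markdown] slideshow={"slide_type": "subslide"}
# The conversion above hinges on pandas' `diff(axis=1)`: row-wise differences between adjacent top depths give layer thicknesses. A minimal sketch with made-up depth values (not from the generated models):

```python
import pandas as pd

# hypothetical depth table: each column is a formation top, each row a sample
depths = pd.DataFrame({"zero": [0.0, 1.0], "one": [3.0, 2.5], "two": [5.0, 7.5]})

# column-to-column differences give the thickness of each layer;
# negative values (inverted tops) are clipped to zero, as in depth_to_thickness
thicknesses = depths.diff(axis=1)
thicknesses[thicknesses < 0] = 0
thicknesses = thicknesses.drop(columns="zero")  # first column has no layer below it

print(thicknesses)
```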
# + slideshow={"slide_type": "subslide"}
def feature_list(no_of_neighbors):
"""
Creates a list of features given number of adjacent wells
param no_of_neighbors: number of adjacent wells used in feature engineering
"""
print("Getting the features")
initial = ["thickness", "thickness natural log", "thickness power"]
features = []
for item in initial:
features.append(item)
for i in range(1, no_of_neighbors + 1):
features.append(item + " neighbor " + str(i))
features.append(["x location", "y location", "class"])
return list(flatten(features))
# + slideshow={"slide_type": "subslide"}
def flatten(container):
"Flattens lists"
for i in container:
if isinstance(i, (list, tuple)):
for j in flatten(i):
yield j
else:
yield i
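# + [markdown] slideshow={"slide_type": "subslide"}
# `feature_list` relies on `flatten` to collapse its mixed list of strings and sub-lists; repeating the generator here for a self-contained check:

```python
def flatten(container):
    "Flattens arbitrarily nested lists/tuples into a stream of leaves"
    for i in container:
        if isinstance(i, (list, tuple)):
            for j in flatten(i):
                yield j
        else:
            yield i

nested = ["thickness", ["x location", "y location", ("class",)]]
flat = list(flatten(nested))
print(flat)  # ['thickness', 'x location', 'y location', 'class']
```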
# + slideshow={"slide_type": "subslide"}
np.random.seed(18)
truncated = truncation(SMALLEST, LARGEST, STEP, NAMES, NUMBER_OF_LAYERS, 2) # 2 == wavelength
trunc_rotated = rotation(truncated, 10) # 10 == rotation
trunc_thickness, trunc_locations = depth_to_thickness(trunc_rotated, trunc_rotated)
# + slideshow={"slide_type": "subslide"}
np.random.seed(18)
onlapping = onlap(SMALLEST, LARGEST, STEP, NAMES, NUMBER_OF_LAYERS, 10) # 10 == wavelength
onlap_rotated = rotation(onlapping, 1) # 1 == rotation
onlap_thickness, onlap_locations = depth_to_thickness(onlap_rotated, onlap_rotated)
# + slideshow={"slide_type": "subslide"}
np.random.seed(18)
horizontally = horizontal(SMALLEST, LARGEST, STEP, NAMES, NUMBER_OF_LAYERS)
horiz_rotated = rotation(horizontally, 1)
horiz_thickness, horiz_locations = depth_to_thickness(horiz_rotated, horiz_rotated)
# + slideshow={"slide_type": "subslide"}
spline = vd.Spline()
spline.fit((trunc_locations.ex*100, trunc_locations.ey*100), trunc_thickness.one*100)
AUIGRID = spline.grid(spacing=1, data_names=["thickness"])
AUIGRID.thickness.plot.pcolormesh(cmap="magma", vmin=0, vmax=700)
plt.title("Truncation Thickness")
# + slideshow={"slide_type": "subslide"}
spline = vd.Spline()
spline.fit((trunc_locations.ex*100, trunc_locations.ey*100), trunc_rotated.one*100)
AUSGRID = spline.grid(spacing=1, data_names=["depth"])
AUSGRID.depth.plot.pcolormesh(cmap="viridis", vmin=-200, vmax=800)
plt.title("Truncation Structure")
# + slideshow={"slide_type": "subslide"}
spline = vd.Spline()
spline.fit((onlap_locations.ex*100, onlap_locations.ey*100), onlap_thickness.one*100)
OLIGRID = spline.grid(spacing=1, data_names=["thickness"])
OLIGRID.thickness.plot.pcolormesh(cmap="magma", vmin=0, vmax=700)
plt.title("Onlap Thickness")
# + slideshow={"slide_type": "subslide"}
spline = vd.Spline()
spline.fit((onlap_locations.ex*100, onlap_locations.ey*100), onlap_rotated.one*100)
OLSGRID = spline.grid(spacing=1, data_names=["depth"])
OLSGRID.depth.plot.pcolormesh(cmap="viridis", vmin=-200, vmax=800)
plt.title("Onlap Structure")
# + [markdown] slideshow={"slide_type": "slide"}
# ## Feature Engineering
#
# We have a conceptual model with features:
# * X location
# * Y location
# * Depth to top
#
# <p float="center">
# <img src="initfeat.jpg" width="225" />
# </p>
# + [markdown] slideshow={"slide_type": "subslide"}
# #### Feature engineering helped us humans in the cross sections above; let's add some engineered features
#
# * Calculate thickness at each point
# * Log transform of thickness
# * Power transform of thickness
# * The 3 features above for each nearby well (wells don't live in a spatial vacuum)
# * Add in some missing at random tops (similar to real world situations)
# + [markdown] slideshow={"slide_type": "subslide"}
# <p float="center">
# <img src="engfeat.jpg" width="900" />
#
# </p>
#
# Let's look at `01_training_data.ipynb`
# + [markdown] slideshow={"slide_type": "slide"}
# ## Model Selection
#
# * We now have a dataset we can train a machine learning classifier on!
# * How do we measure accuracy for this?
# * Let's use the Jaccard Similarity Metric
# * "the size of the intersection divided by the size of the union of two label sets...compares a set of predicted labels ... to the true values"
# * Value of 1 == 100% accuracy
# * Value of 0 == 0% accuracy
#
# Let's pick a few classification models and see how they do "out of the box"
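# + [markdown] slideshow={"slide_type": "subslide"}
# The metric is easy to sanity-check; a sketch with made-up class labels (not our real predictions):

```python
from sklearn.metrics import jaccard_score

y_true = ["onlap", "onlap", "truncation", "horizontal"]
y_pred = ["onlap", "truncation", "truncation", "horizontal"]

# per-class Jaccard: |intersection| / |union| of predicted vs. true label sets,
# classes reported in sorted order (horizontal, onlap, truncation)
per_class = jaccard_score(y_true, y_pred, average=None)
weighted = jaccard_score(y_true, y_pred, average="weighted")
print(per_class, weighted)
```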
# + slideshow={"slide_type": "subslide"}
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn import svm
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier, AdaBoostClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import jaccard_score
# + slideshow={"slide_type": "subslide"}
# read the data we made
dataset = pd.read_csv(r'stratigraphic_geometry_dataset.csv', index_col=[0])
dataset.head()
# + slideshow={"slide_type": "subslide"}
# Set number of wells in vicinity
wells_in_vicinity = 0
flat_features = feature_list(wells_in_vicinity)
subset = dataset[flat_features]
# split the dataset into test/train subsets
X_train, X_test, y_train, y_test = train_test_split(
subset.drop("class", axis=1), subset["class"], test_size=0.2, random_state=86,
)
X_train
# + [markdown] slideshow={"slide_type": "subslide"}
# ### Let's see how random guessing performs for a baseline
# + slideshow={"slide_type": "subslide"}
# random
np.random.seed(18)
y_pred = np.random.choice(['truncation', 'onlap', 'horizontal'], len(y_test))
weighted_jc_score = jaccard_score(y_test, y_pred, average='weighted')
print(f'Accuracy for each class is {jaccard_score(y_test, y_pred, average=None)}')
print(f'Average weighted accuracy is {weighted_jc_score:.2f}')
# + [markdown] slideshow={"slide_type": "subslide"}
# ### Now a support vector classifier
# + slideshow={"slide_type": "subslide"}
# SVM
svmclf = svm.SVC()
svmclf.fit(X_train, y_train)
y_pred = svmclf.predict(X_test)
weighted_jc_score = jaccard_score(y_test, y_pred, average='weighted')
print(f'Accuracy for each class is {jaccard_score(y_test, y_pred, average=None)}')
print(f'Average weighted accuracy is {weighted_jc_score:.2f}')
# + [markdown] slideshow={"slide_type": "subslide"}
# ### How about a decision tree?
# + slideshow={"slide_type": "subslide"}
# Decision Tree
dtclf = DecisionTreeClassifier()
dtclf.fit(X_train, y_train)
y_pred = dtclf.predict(X_test)
weighted_jc_score = jaccard_score(y_test, y_pred, average='weighted')
print(f'Accuracy for each class is {jaccard_score(y_test, y_pred, average=None)}')
print(f'Average weighted accuracy is {weighted_jc_score:.2f}')
# + [markdown] slideshow={"slide_type": "subslide"}
# ### A random forest
# + slideshow={"slide_type": "subslide"}
# Random Forest
rfclf = RandomForestClassifier()
rfclf.fit(X_train, y_train)
y_pred = rfclf.predict(X_test)
weighted_jc_score = jaccard_score(y_test, y_pred, average='weighted')
print(f'Accuracy for each class is {jaccard_score(y_test, y_pred, average=None)}')
print(f'Average weighted accuracy is {weighted_jc_score:.2f}')
# + [markdown] slideshow={"slide_type": "subslide"}
# ### What about boosting?
# + slideshow={"slide_type": "subslide"}
# AdaBoost
abclf = AdaBoostClassifier()
abclf.fit(X_train, y_train)
y_pred = abclf.predict(X_test)
weighted_jc_score = jaccard_score(y_test, y_pred, average='weighted')
print(f'Accuracy for each class is {jaccard_score(y_test, y_pred, average=None)}')
print(f'Average weighted accuracy is {weighted_jc_score:.2f}')
# + [markdown] slideshow={"slide_type": "subslide"}
# ### What about a k-neighbors classifier?
# + slideshow={"slide_type": "subslide"}
# KNN
knclf = KNeighborsClassifier()
knclf.fit(X_train, y_train)
y_pred = knclf.predict(X_test)
weighted_jc_score = jaccard_score(y_test, y_pred, average='weighted')
print(f'Accuracy for each class is {jaccard_score(y_test, y_pred, average=None)}')
print(f'Average weighted accuracy is {weighted_jc_score:.2f}')
# + [markdown] slideshow={"slide_type": "slide"}
# ## Active learning grid search
#
# * **Random forest** has the best initial accuracy
# * Let's tune the hyperparameters for it
# * Hyperparameters are chosen before training begins (user specified)
# * Grid search for parameters with 5 fold cross validation
# * We need a certainty measure to stop training before it overfits
#
# <p align="right">
# <img src="gridsearch.jpg" width="400" height="400" />
#
# </p>
#
# `03_active_learning_grid_search.ipynb`
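# + [markdown] slideshow={"slide_type": "subslide"}
# A minimal sketch of the grid-search step on stand-in data (the parameter values here are illustrative, not the ones actually tuned in `03_active_learning_grid_search.ipynb`):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV

# stand-in data with the same kind of shape as the engineered feature matrix
X_demo, y_demo = make_classification(n_samples=200, n_features=6,
                                     n_informative=4, random_state=86)

# each parameter combination is scored with 5-fold cross validation
param_grid = {"n_estimators": [10, 50], "max_depth": [3, None]}
search = GridSearchCV(RandomForestClassifier(random_state=86), param_grid, cv=5)
search.fit(X_demo, y_demo)
print(search.best_params_)
```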
# + [markdown] slideshow={"slide_type": "slide"}
# ## How does the classifier perform?
#
# * We split our generated dataset into **test/train** subsets
# * How confused is our model?
# * 88.4% Accuracy, 72.8% certainty
# <p align="right">
# <img src="confusion.jpg" width="400" height="400" />
# </p>
#
# + [markdown] slideshow={"slide_type": "subslide"}
# ## Now what?
#
# * How do we translate from predictions on our synthetic data to "real world" data?
# * Find a subsurface dataset and process it in the same manner
# * Calculate formation thicknesses
# * Feature Engineering (log and power transforms, wells in vicinity)
# * Make predictions and visualize with classifier certainty
#
# + [markdown] slideshow={"slide_type": "slide"}
# ## Real World Predictions
#
# <img src = "https://content.govdelivery.com/attachments/fancy_images/WYSGS/2015/07/566177/banner-600_original.jpg" width="500" />
#
# * Subsurface data from the Wyoming State Geological Survey (Lynds and Lichtner, 2016)
# * Eastern Greater Green River Basin
# * Subsurface formation tops picked:
# * Fort Union
# * Lance Formation
# * Fox Hills Sandstone
# + [markdown] slideshow={"slide_type": "subslide"}
# <p align="right">
# <img src="overviewmap.jpg" width="600" height="600" />
#
# </p>
# + [markdown] slideshow={"slide_type": "slide"}
# ## Spatial Results
# ### Geologic Interpretation
#
# <p align="right">
# <img src="lance.jpg" width="600" height="600" />
# </p>
# + [markdown] slideshow={"slide_type": "subslide"}
# * Lance Formation
# * Central Great Divide Basin: conformable with Fox Hills Formation and Fort Union Formation
# * The band of wells classified as onlap interpreted as a wide basin margin during deposition
# * Truncation swath includes Wamsutter Arch, Rock Springs Uplift, Dad Arch, on trend with Sierra Madre Uplift and Wind River Range
# + [markdown] slideshow={"slide_type": "slide"}
# <p align="right">
# <img src="ftunion.jpg" width="600" height="600" />
# <img src="certainty.jpg" width="600" height="600" />
#
# </p>
# + [markdown] slideshow={"slide_type": "subslide"}
# * Fort Union Formation
# * Mostly horizontally stratified
# * Truncation and onlap mixed along west end of Wamsutter Arch matches field mapping
# * Also Almond/Lewis age paleohigh
# * Washakie Basin truncation (ne-sw) follows regional trends, truncation on Cherokee Arch
# * Classification is tough in this area because of geometry similarity, but we now have a measure of confidence
# + [markdown] slideshow={"slide_type": "slide"}
# ## Data Science Interpretation
#
# * Geologically reasonable results
# * Uncertainty identifies areas an expert should reevaluate
# * What do the predictions look like in lower-dimensional space?
# * Dimension reduction with t-distributed stochastic neighbor embedding (t-SNE)
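# + [markdown] slideshow={"slide_type": "subslide"}
# A minimal t-SNE sketch on stand-in features (random data and illustrative parameters, not the actual embedding used for the figure below):

```python
import numpy as np
from sklearn.manifold import TSNE

rng = np.random.RandomState(18)
features = rng.rand(50, 8)  # stand-in for the engineered feature matrix

# embed into 2-D for visual inspection; perplexity must be smaller than n_samples
embedded = TSNE(n_components=2, perplexity=10, random_state=18).fit_transform(features)
print(embedded.shape)  # (50, 2)
```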
# + [markdown] slideshow={"slide_type": "subslide"}
# <p align="right">
# <img src="tsne.jpg" width="600" height="600" />
# </p>
# + [markdown] slideshow={"slide_type": "subslide"}
# * Horizontal stratification clusters in a distinct region
# * Overlap between onlap and truncation in this space
# * Intuitive since they look the same
# + [markdown] slideshow={"slide_type": "subslide"}
# * What does the synthetic dataset tell us about each class?
# * Compare the distribution of one feature and one sample to the distribution of the entire class
# * Measure similarity with the K-L divergence
# * Lower K-L divergence values == more similar to that class
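# + [markdown] slideshow={"slide_type": "subslide"}
# The comparison can be sketched with `scipy.stats.entropy`, which computes D(p || q) when given two distributions (the histograms below are hypothetical, not from the synthetic dataset):

```python
import numpy as np
from scipy.stats import entropy

# hypothetical normalised thickness histograms
sample = np.array([0.1, 0.4, 0.3, 0.2])        # one well's feature distribution
class_dist = np.array([0.15, 0.35, 0.3, 0.2])  # distribution of the whole class

# K-L divergence D(sample || class): 0 means identical, larger means less similar
kl = entropy(sample, class_dist)
print(kl)
```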
# + [markdown] slideshow={"slide_type": "subslide"}
# <p align="right">
# <img src="kl-divergence.jpg" width="200" height="200" />
# </p>
# * Increasing wells in vicinity == decreasing divergence
# + [markdown] slideshow={"slide_type": "slide"}
# ## What we Learned
# <p align="right">
# <img src="models.png" width="600" height="600" />
# </p>
#
#
# + [markdown] slideshow={"slide_type": "subslide"}
# #### But some models help us stop arm waving
# <p align="right">
# <img src="armwaving.jpg" width="600" height="600" />
# </p>
#
#
# + [markdown] slideshow={"slide_type": "subslide"}
# #### Yours truly arm waving on the Rock Springs Uplift
#
# <p align="right">
# <img src="waving.jpg" width="400" />
# </p>
#
#
# + [markdown] slideshow={"slide_type": "subslide"}
# ## What we learned
#
# * The classification model is **useful**
# * High classification accuracy is possible on the training dataset
# * Qualitatively does a reasonable job in the real world
# * **Classifications are consistent with spot checks and previous interpretations**
# * Still areas with misclassifications
# * Certainty measure is useful to interpret the predictions
# + [markdown] slideshow={"slide_type": "slide"}
# ## Recommendations
#
# #### For this model
# * Classification model aids geologists in searching for unique patterns
# * Interpret geometries across a basin in seconds
# * Uncertain areas can then be interrogated further
# * Try this at different scales (bedset to sequence)
# + [markdown] slideshow={"slide_type": "subslide"}
# #### In General
# * Transfer learning has tremendous opportunities in the subsurface
# * Subsurface experts **need** to be fluent in data/stats/ML/AI
# * The fusion of on the ground and data driven geoscience research is already here
#
#
# * Examples of what UT **Freshmen** computer scientists are doing:
# * Automatic well-log correlation (in review)
# * Automatic curve aliasing (in prep, open source)
# * Satellite images to geologic maps (proof of concept works)
# * Well path optimization (Reinforcement learning proof of concept)
# * 3D kriging (geostatspy)
# * Spatial debiasing (proof of concept)
# + [markdown] slideshow={"slide_type": "slide"}
# ## Thanks for coming! Let's have a discussion!
# #### Want to work together with the Energy Analytics team? Get in touch <EMAIL>
| RMSSEPM Presentation 2020.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Asset Classification (Supervised)
# ## Authors
# - <NAME> (NVIDIA)
# - <NAME> (NVIDIA)
# ## Table of Contents
# * Introduction
# * Dataset
# * Reading in the datasets
# * Training and inference
# * References
# # Introduction
# In this notebook we will show how to predict the function of a server with Windows Event Logs using `cudf`, `cuml` and `pytorch`. The machines are labeled as DC, SQL, WEB, DHCP, MAIL and SAP. The dependent variable will be the type of the machine. The features are selected from Windows Event Logs which is in a tabular format.
# ## Library imports
import cudf
import torch
import numpy as np
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as torch_optim
from torch.utils.dlpack import from_dlpack
from cuml.preprocessing import LabelEncoder
from cuml.preprocessing import train_test_split
from sklearn.metrics import accuracy_score, f1_score, confusion_matrix
import pandas as pd
batch_size = 10000
label_col = '19'
epochs = 15
# #### Read the dataset into a GPU dataframe with `cudf.read_csv()`
win_events_on_gpu = cudf.read_csv('win_events_18_features.csv')
features = {
"1" : "eventcode",
"2" : "keywords",
"3" : "privileges",
"4" : "message",
"5" : "sourcename",
"6" : "taskcategory",
"7" : "account_for_which_logon_failed_account_domain",
"8" : "detailed_authentication_information_authentication_package",
"9" : "detailed_authentication_information_key_length",
"10" : "detailed_authentication_information_logon_process",
"11" : "detailed_authentication_information_package_name_ntlm_only",
"12" : "logon_type",
"13" : "network_information_workstation_name",
"14" : "new_logon_security_id",
"15" : "impersonation_level",
"16" : "network_information_protocol",
"17" : "network_information_direction",
"18" : "filter_information_layer_name"
}
win_events_on_gpu.head()
# #### Categorize the Labels
for col in win_events_on_gpu.columns:
win_events_on_gpu[col] = win_events_on_gpu[col].astype('str')
win_events_on_gpu[col] = win_events_on_gpu[col].fillna("NA")
win_events_on_gpu[col] = LabelEncoder().fit_transform(win_events_on_gpu[col])
for col in win_events_on_gpu.columns:
win_events_on_gpu[col] = win_events_on_gpu[col].astype('int16')
win_events_on_gpu.to_csv("test12")
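# cuML's `LabelEncoder` mirrors the scikit-learn API; a CPU sketch of what the loop above does to one column (hypothetical values):

```python
from sklearn.preprocessing import LabelEncoder  # CPU analogue of cuml.preprocessing.LabelEncoder

col = ["WEB", "SQL", "WEB", "NA", "DC"]
encoded = LabelEncoder().fit_transform(col)
print(list(encoded))  # integer codes, assigned in sorted label order
```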
# ### Split the dataset into training and validation sets using cuML `train_test_split` function
# Column 19 is the dependent variable, hence we save it as the labels.
X, val_X, Y, val_Y = train_test_split(win_events_on_gpu, label_col, train_size=0.9)
val_X.index = val_Y.index
val_X.head()
val_Y.head()
X, test_X, Y, test_Y = train_test_split(X,Y, train_size=0.9)
X.index = Y.index
test_X.index = test_Y.index
X.head()
Y.head()
# ### Print Labels
test_Y.head()
# ### Embedding columns, check values in the columns
embedded_cols = {}
for col in X.columns:
categories_cnt = X[col].max()+2
if categories_cnt > 1:
embedded_cols[col] = categories_cnt
embedded_cols
X[label_col] = Y
val_X[label_col] = val_Y
test_X[label_col] = test_Y
# #### Choosing columns for embedding
embedded_col_names = embedded_cols.keys()
embedded_col_names
embedded_cols.items()
embedding_sizes = [(n_categories, min(100, (n_categories+1)//2)) for _,n_categories in embedded_cols.items()]
embedding_sizes
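# The rule above sets each embedding width to half the cardinality (rounded up), capped at 100; a quick check of the arithmetic:

```python
# the sizing rule used above, factored into a helper for illustration
def embedding_size(n_categories):
    return (n_categories, min(100, (n_categories + 1) // 2))

print(embedding_size(5))    # (5, 3)
print(embedding_size(500))  # (500, 100)
```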
# Partition the dataframe
def get_partitioned_dfs(df, batch_size):
dataset_len = df.shape[0]
prev_chunk_offset = 0
partitioned_dfs = []
while prev_chunk_offset < dataset_len:
curr_chunk_offset = prev_chunk_offset + batch_size
chunk = df.iloc[prev_chunk_offset:curr_chunk_offset:1]
partitioned_dfs.append(chunk)
prev_chunk_offset = curr_chunk_offset
return partitioned_dfs
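# The same chunking logic works on a pandas frame; repeating the function for a self-contained CPU check of the batch arithmetic (hypothetical sizes):

```python
import pandas as pd

def get_partitioned_dfs(df, batch_size):
    # slice the frame into consecutive chunks of at most batch_size rows
    dataset_len = df.shape[0]
    prev_chunk_offset = 0
    partitioned_dfs = []
    while prev_chunk_offset < dataset_len:
        curr_chunk_offset = prev_chunk_offset + batch_size
        partitioned_dfs.append(df.iloc[prev_chunk_offset:curr_chunk_offset])
        prev_chunk_offset = curr_chunk_offset
    return partitioned_dfs

demo = pd.DataFrame({"a": range(25)})
chunks = get_partitioned_dfs(demo, 10)
print([len(c) for c in chunks])  # [10, 10, 5]
```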
train_part_dfs = get_partitioned_dfs(X, batch_size)
val_part_dfs = get_partitioned_dfs(val_X, batch_size)
test_part_dfs = get_partitioned_dfs(test_X, batch_size)
train_part_dfs[0].head()
del win_events_on_gpu
del X
del val_X
del test_X
# ## Check GPU availability
#
# If there's a GPU, data is moved on to the GPU.
def get_default_device():
"""Pick GPU if available, else CPU"""
if torch.cuda.is_available():
return torch.device('cuda')
else:
return torch.device('cpu')
def to_device(data, device):
"""Move tensor(s) to chosen device"""
if isinstance(data, (list,tuple)):
return [to_device(x, device) for x in data]
return data.to(device, non_blocking=True)
device = get_default_device()
device
def bn_drop_lin(n_in, n_out, bn, p, actn):
"Sequence of batchnorm (if `bn`), dropout (with `p`) and linear (`n_in`,`n_out`) layers followed by `actn`."
layers = [nn.BatchNorm1d(n_in)] if bn else []
if p != 0: layers.append(nn.Dropout(p))
layers.append(nn.Linear(n_in, n_out))
if actn is not None: layers.append(actn)
return layers
# The bn_drop_lin function returns a sequence of batch normalization, dropout and a linear layer. This custom layer is usually used at the end of a model.
#
# n_in represents the size of the input, n_out the size of the output, bn whether we want batch norm or not, p how much dropout, and actn (optional parameter) adds an activation function at the end.
# Reference: https://github.com/fastai/fastai/blob/master/fastai/layers.py#L44
# ## Define Model Class
class TabularModel(nn.Module):
"Basic model for tabular data"
def __init__(self, emb_szs, n_cont, out_sz, layers, drops,
emb_drop, use_bn, is_reg, is_multi):
super().__init__()
self.embeds = nn.ModuleList([nn.Embedding(ni, nf) for ni,nf in emb_szs])
self.emb_drop = nn.Dropout(emb_drop)
self.bn_cont = nn.BatchNorm1d(n_cont)
n_emb = sum(e.embedding_dim for e in self.embeds)
self.n_emb,self.n_cont = n_emb,n_cont
sizes = [n_emb + n_cont] + layers + [out_sz]
actns = [nn.ReLU(inplace=True)] * (len(sizes)-2) + [None]
layers = []
for i,(n_in,n_out,dp,act) in enumerate(zip(sizes[:-1],sizes[1:],[0.]+drops,actns)):
layers += bn_drop_lin(n_in, n_out, bn=use_bn and i!=0, p=dp, actn=act)
self.layers = nn.Sequential(*layers)
def forward(self, x_cat, x_cont):
if self.n_emb != 0:
x = [e(x_cat[:,i]) for i,e in enumerate(self.embeds)]
x = torch.cat(x, 1)
x = self.emb_drop(x)
if self.n_cont != 0:
x_cont = self.bn_cont(x_cont)
x = torch.cat([x, x_cont], 1) if self.n_emb != 0 else x_cont
x = self.layers(x)
return x.squeeze()
model = TabularModel(embedding_sizes, 0, 6, [200,100], [0.001,0.01], emb_drop=0.04, is_reg=False,is_multi=True, use_bn=True)
to_device(model, device)
# #### Define the Optimizer
def get_optimizer(model, lr = 0.001, wd = 0.0):
parameters = filter(lambda p: p.requires_grad, model.parameters())
optim = torch_optim.Adam(parameters, lr=lr, weight_decay=wd)
return optim
# #### Training function
def train_model(model, optim, dfs):
model.train()
total = 0
sum_loss = 0
xb_cont_tensor = torch.zeros(0, 0)
xb_cont_tensor = xb_cont_tensor.cuda()  # .cuda() returns a copy; it must be reassigned
for df in dfs:
batch = df.shape[0]
train_set = df.drop(label_col).to_dlpack()
train_set = from_dlpack(train_set).long()
output = model(train_set, xb_cont_tensor)
train_label = df[label_col].to_dlpack()
train_label = from_dlpack(train_label).long()
loss = F.cross_entropy(output, train_label)
optim.zero_grad()
loss.backward()
optim.step()
total += batch
sum_loss += batch*(loss.item())
return sum_loss/total
# #### Evaluation function
def val_loss(model, dfs):
model.eval()
total = 0
sum_loss = 0
correct = 0
xb_cont_tensor = torch.zeros(0, 0)
xb_cont_tensor = xb_cont_tensor.cuda()
for df in dfs:
current_batch_size = df.shape[0]
val = df.drop(label_col).to_dlpack()
val = from_dlpack(val).long()
out = model(val, xb_cont_tensor)
val_label = df[label_col].to_dlpack()
val_label = from_dlpack(val_label).long()
loss = F.cross_entropy(out, val_label)
sum_loss += current_batch_size*(loss.item())
total += current_batch_size
pred = torch.max(out, 1)[1]
correct += (pred == val_label).float().sum().item()
print("valid loss %.3f and accuracy %.3f" % (sum_loss/total, correct/total))
return sum_loss/total, correct/total
# Prediction function to be used with a test set
def predict(model, test_set):
xb_cont_tensor = torch.zeros(0, 0)
xb_cont_tensor = xb_cont_tensor.cuda()
current_batch_size = test_set.shape[0]
test_set = test_set.to_dlpack()
test_set = from_dlpack(test_set).long()
out = model(test_set, xb_cont_tensor)
pred = torch.max(out, 1)[1].view(-1).tolist()
return pred
def train_loop(model, epochs, lr=0.01, wd=0.0):
optim = get_optimizer(model, lr = lr, wd = wd)
for i in range(epochs):
loss = train_model(model, optim, train_part_dfs)
print("training loss: ", loss)
val_loss(model, val_part_dfs)
def cleanup_cache():
# release memory.
torch.cuda.empty_cache()
# ## Training
train_loop(model, epochs=epochs, lr=0.03, wd=0.00001)
def inference(model, test_part_dfs):
pred_results = []
true_results = []
for df in test_part_dfs:
pred_results.append(predict(model, df))
true_results.append(df[label_col].values_host)
pred_results = np.concatenate(pred_results).astype(np.int32)
true_results = np.concatenate(true_results)
f1_score_ = f1_score(pred_results, true_results,average='micro')
print('micro F1 score: %s'%(f1_score_))
return true_results, pred_results
true_results, pred_results = inference(model, test_part_dfs)
cleanup_cache()
labels = ["DC","DHCP","MAIL","SAP","SQL","WEB"]
a = confusion_matrix(true_results, pred_results)
pd.DataFrame(a, index=labels, columns=labels)
# * References: https://jovian.ml/aakashns/04-feedforward-nn
# * https://www.kaggle.com/dienhoa/reverse-tabular-module-of-fast-ai-v1
# * https://github.com/fastai/fastai/blob/master/fastai/layers.py#L44
| notebooks/network_mapping/Asset_Classification_Supervised.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## graph creation notebook
# +
import matplotlib
from matplotlib import pyplot as plt
import glob
import os
import networkx as nx
import fiona
import geopandas as gpd
from shapely.geometry import Point, Polygon, shape
from shapely.geometry.multilinestring import MultiLineString
from shapely.geometry.linestring import LineString
from shapely.geometry.collection import GeometryCollection
from descartes.patch import PolygonPatch
import pandas as pd
import numpy as np
# some custom files
from img_helpers import get_all_images_in_folder, return_polygons
from helper_functions import getpolygons, make_single_string_object,drop_columns_and_add_one,check_order
# to display images inline
get_ipython().run_line_magic('matplotlib', 'inline')
# -
# ## Some setup for directories, paths, etc.
folder_polygons = 'D:/allegoria/topo_ortho/ING_processed_margo/moselle/'
# load the csv of image polygons saved in the previous part
excelPaths = sorted(glob.glob(folder_polygons +'*/*.csv'))
csv_poly = getpolygons(excelPaths)
imagePaths = sorted(glob.glob(folder_polygons +'*/*img.png'))
check_order(imagePaths, excelPaths)
print("Saved csv polygons are loaded correctly; there are %d." % (len(csv_poly)))
global_path = "D:/allegoria/datasets_alegoria/BD/BD_topo/moselle/BDTOPO_3-0_TOUSTHEMES_SHP_LAMB93_D057_2019-03-19/BDTOPO/1_DONNEES_LIVRAISON_2019-03-00260/BDT_3-0_SHP_LAMB93_D057-ED2019-03-19/"
# ## LOAD all the shapefiles as data frames
# ## roads
# #2019
# # load all the shapely files related to ROADS
fp_road = global_path + "TRANSPORT/troncon2019.shp"
data_road_troncon = gpd.read_file(fp_road)
all_roads= data_road_troncon
all_roads.plot()
# ## rivers
# +
fp_water = global_path + "HYDROGRAPHIE/COURS_D_EAU.shp"
all_water = gpd.read_file(fp_water)
all_water.plot()
len(all_water)
# -
# ## Main loop to save sub-shape files for each image
# I go through the images and save the vector data inside their boundaries as shape files
import math
def remove_duplicates(list_of_points, threshold = 5):
'''Remove a pair of points if the distance between them is less than a threshold'''
# for all points, find the closest ones
filtered_list = list_of_points.copy()
indexes_to_delete = []
for i in range(len(list_of_points)-1):
point1 = list_of_points[i][1] # end of the line
for j in range(i, len(list_of_points)):
point2 = list_of_points[j][0] # beginning of the line
point2 = list_of_points[j][0] # begining of the line
dist_eucl = (math.sqrt((point1[0]-point2[0]) ** 2 + (point1[1]-point2[1]) ** 2 ))
# now add the point to the new list only if there is no element closer than the threshold
if dist_eucl < threshold:
# merge the line in one
filtered_list[i][1] = filtered_list[j][1] # end of first list is now the end of the second
indexes_to_delete.append(list_of_points[j])
for el in indexes_to_delete:
try:
filtered_list.remove(el)
except ValueError:
continue
return filtered_list
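# The Euclidean distance test at the heart of the merge above can be sanity-checked on its own (`close_enough` is an illustrative helper, not part of the pipeline; the points are hypothetical):

```python
import math

def close_enough(p1, p2, threshold=5):
    # the same Euclidean distance test used when merging near-duplicate endpoints
    return math.sqrt((p1[0] - p2[0]) ** 2 + (p1[1] - p2[1]) ** 2) < threshold

print(close_enough((0, 0), (3, 3)))   # True: distance is about 4.24
print(close_enough((0, 0), (10, 0)))  # False: distance is 10
```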
# +
def filter_graph(g, threshold = 5):
""" remove all nearly dublicates from a graph nodes"""
g_copy = g.copy()
nodes = list(g.nodes)
nodes_to_filter = []
for i in range(len(nodes)-1):
n1 = nodes[i]
for j in range(i+1, len(nodes)):
n2 = nodes[j]
if (math.sqrt((n1[0]-n2[0]) ** 2 + (n1[1]-n2[1]) ** 2 )) < threshold:
try:
g_copy = nx.contracted_nodes(g_copy, n1, n2)
                    print('successfully merged two nodes')
except:
print('failed to merge nodes')
return g_copy
# +
# first demo setup
def add_important_points(line1, line2, list_of_important_points, coordinates, gp_frame, shp, shp2):
    '''Adds important points from line intersections. Also handles the different cases where the
    intersection is a line, a line and a point, a multi-line, etc.'''
#check geo type
if isinstance(coordinates, MultiLineString) or isinstance(coordinates, LineString):
pass
# I ignore the lines
elif isinstance(coordinates, GeometryCollection):
# case when it is a collection of lines and points
num_crossings = len(coordinates)
for i in range(len(coordinates)):
#recursively call the same function
list_of_important_points = add_important_points(line1, line2, list_of_important_points, coordinates[i], gp_frame,shp, shp2)
elif isinstance(coordinates,Point):
list_of_important_points.append([line1.coords[0],(coordinates.x,coordinates.y),gp_frame['Nature'].iloc[shp]]) #start, end, nature
list_of_important_points.append([line1.coords[-1],(coordinates.x,coordinates.y),gp_frame['Nature'].iloc[shp]]) #start, end, nature
list_of_important_points.append([line2.coords[0],(coordinates.x,coordinates.y),gp_frame['Nature'].iloc[shp2]]) #start, end, nature
list_of_important_points.append([line2.coords[-1],(coordinates.x,coordinates.y),gp_frame['Nature'].iloc[shp2]]) #start, end, nature
else:
#case when it is maybe smth weird, or a collection of points only
try:
num_crossings = len(coordinates)
for c in range(num_crossings):
#then we have c new edges - two crossed lines are divided into 4 parts
list_of_important_points.append([line1.coords[0],(coordinates[c].x,coordinates[c].y),gp_frame['Nature'].iloc[shp]]) #start, end, nature
list_of_important_points.append([line1.coords[-1],(coordinates[c].x,coordinates[c].y),gp_frame['Nature'].iloc[shp]]) #start, end, nature
list_of_important_points.append([line2.coords[0],(coordinates[c].x,coordinates[c].y),gp_frame['Nature'].iloc[shp2]]) #start, end, nature
list_of_important_points.append([line2.coords[-1],(coordinates[c].x,coordinates[c].y),gp_frame['Nature'].iloc[shp2]]) #start, end, nature
except:
print('No type of the intersection is understood, ignoring this data type ', type(coordinates))
return list_of_important_points
# function to create a graph from pandas dataframe
from shapely.geometry.multilinestring import MultiLineString
def create_graph(gp_frame, poly = None):
    ''' function takes the pandas frame and creates the graph, where the nodes are the beginnings of the lines or
    crossing points and the links are the roads/rivers; the edges also carry their nature information.
Since sometimes the points can be outside of the polygon, one can also pass a polygon, so all the nodes outside of it
will be deleted:
gp_frame = pandas dataframe with geometry column (supports geometry string)
poly - polygon, all points outside of its coordinates will be deleted
returns:
        Graph G, where nodes are points and edges carry the type of element (nature = road 0 or water 1)
Protocol:
1) Find start and end points of the lines
2) Split lines at the intersections
3) Create nodes at the start and end point of each split line and intersection points
4) check with a polygon if the nodes are within the image borders. Remove if not.
'''
net = nx.Graph() # empty graph
    list_of_important_points = [] # empty list holding the beginnings/endings of each line (start/end points) and crossing points
#start + end points
for shp in range(1, len(gp_frame)):
# the geometry property here may be specific to my shapefile
line = gp_frame['geometry'].iloc[shp] #get the line
try:
list_of_important_points.append([line.coords[0][:2],line.coords[-1][:2], gp_frame['Nature'].iloc[shp]]) #start (x,y), end, nature
except:
print('Not a line object encountered')
# remove near identical points here -> remove all the points which are really close to each other
list_of_important_points = remove_dublicates(list_of_important_points)
# points crossings
for shp in range(1, len(gp_frame)):
# the geometry property here may be specific to my shapefile
line1 = gp_frame['geometry'].iloc[shp] #get the line
for shp2 in range(0, shp-1):
line2 = gp_frame['geometry'].iloc[shp2] #get the second line
if line1.intersects(line2):
coordinates = line1.intersection(line2)
#check what is the type of interection and add points accordingly
list_of_important_points = add_important_points(line1, line2, list_of_important_points, coordinates, gp_frame, shp, shp2)
    # now create a graph from the edge list
for edge in list_of_important_points:
if poly is not None:
if poly.contains(Point(edge[0])) and poly.contains(Point(edge[1])):
net.add_edge(edge[0], edge[1], nature = edge[2]) #add edge with weight, where weight is the nature
else:
continue
else:
net.add_edge(edge[0], edge[1], nature = edge[2]) #add edge with weight, where weight is the nature
return net
# -
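`create_graph` relies on shapely's `poly.contains(...)` to drop edges whose endpoints fall outside the image footprint. The underlying point-in-polygon test can be sketched without shapely via ray casting (`point_in_polygon` is a hypothetical helper for illustration, not the notebook's code):

```python
def point_in_polygon(x, y, vertices):
    # Ray casting: cast a horizontal ray from (x, y) and count how many
    # polygon edges it crosses; an odd count means the point is inside.
    inside = False
    n = len(vertices)
    for i in range(n):
        x1, y1 = vertices[i]
        x2, y2 = vertices[(i + 1) % n]
        if (y1 > y) != (y2 > y):
            # x-coordinate where this edge crosses the ray's height
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

square = [(0, 0), (4, 0), (4, 4), (0, 4)]
print(point_in_polygon(2, 2, square))  # True
print(point_in_polygon(5, 2, square))  # False
```

Shapely's implementation handles degenerate cases (points exactly on the boundary, self-intersecting rings) that this sketch ignores.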
# ## Save graphs where nodes are road crossings
# main loop which creates graphs from vector data
for i, poly in enumerate(csv_poly):
name = imagePaths[i][:-7] + 'vector.gpickle'
print(name)
sg_roads = all_roads[all_roads.geometry.intersects(poly)] # extract segments of roads
sg_water = all_water[all_water.geometry.intersects(poly)] # extract segments of water
sg_single_roads = make_single_string_object(sg_roads, poly) # remove multistrings
sg_single_water = make_single_string_object(sg_water, poly)
sg_single_roads = drop_columns_and_add_one(sg_single_roads, 'roads', year=2019) # remove extra columns
sg_single_water = drop_columns_and_add_one(sg_single_water, 'water', year = 2019)
    combined_pd = sg_single_roads.append(sg_single_water, ignore_index= True) # at this stage I am ready to save all the sub-images
if combined_pd.empty:
G = nx.empty_graph()
else:
G = create_graph(gp_frame=combined_pd, poly=poly)
    # filter the graph to remove near-duplicate nodes
G = filter_graph(G)
#graph_dict[name] = G
nx.write_gpickle(G, name, protocol=4)
# ## Plot the results
colors =np.repeat('m', len(G.edges))
for i, edge in enumerate(G.edges(data=True)):
if edge[2]['nature']==1:
colors[i] = 'b'
fig, ax = plt.subplots(figsize=(20.0, 20.0))
nx.draw(G,node_color='#A0CBE2',edge_color = colors,width=4,edge_cmap=plt.cm.Blues,with_labels=True)
# ## Segment -> Graph -> Visualization
# This part of the notebook plots my graphs and vector files to show that the graph was made correctly
# check if the nodes are correctly extracted
for poly in csv_poly[-1:]:
sg_roads = all_roads[all_roads.geometry.intersects(poly)] # extract segments of roads
sg_water = all_water[all_water.geometry.intersects(poly)] # extract segments of water
fig, ax = plt.subplots(figsize=(10.0, 10.0))
if not sg_roads.empty:
sg_roads.plot(linewidth=4.0, edgecolor='#FFA500', color='#FFA500', ax=ax)
if not sg_water.empty:
sg_water.plot(linewidth=4.0, edgecolor='#00008B', color='#00008B', ax=ax)
for n in G.nodes:
ax.scatter(n[0],n[1], marker="o", s=200, label=str(i))
for e in G.edges:
        ax.plot([e[0][0], e[1][0]], [e[0][1], e[1][1]], 'gray', linestyle=':', label='l'+str(i))  # x-coords, then y-coords of the two endpoints
ax.set_xlim([poly.bounds[0],poly.bounds[2]])
ax.set_ylim([poly.bounds[3],poly.bounds[1]])
ax.set_title("Vector data with located nodes (as points)")
# +
## convert to undirected with G.to_undirected() or a view (below)
# -
fig, ax = plt.subplots(figsize=(20.0, 20.0))
nx.draw(G, with_labels = True)
len(G)
G.degree
# ## Create graphs where nodes are roads and rivers and links are connections between them
# +
import math
import numpy as np
import numpy.linalg as la
def py_ang(v1, v2):
""" Returns the angle in radians between vectors 'v1' and 'v2' """
cosang = np.dot(v1, v2)
sinang = la.norm(np.cross(v1, v2))
return np.arctan2(sinang, cosang)
def calculate_curvature_histogram(line_object, bins=[0, 0.1, 0.2, 1]):
'''returns histogram values '''
geom = line_object.xy
total_points = len(geom[0])
curvature = []
    for i in range(0, total_points-2): # make vectors between each pair of line splines
        spline_vector1 = [geom[0][i+1]-geom[0][i], geom[1][i+1]-geom[1][i]]
        spline_vector2 = [geom[0][i+2]-geom[0][i+1], geom[1][i+2]-geom[1][i+1]]
        curvature.append(py_ang(spline_vector1, spline_vector2))
    # then return the values for 3 bins (note: the function's bins argument is unused here)
    val = np.histogram(curvature, bins=3, density=False)
    return val[0]/total_points # return normalized hist values
# -
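`py_ang` above returns the unsigned angle between two vectors; for the 2-D case used in this notebook it can be checked against known angles without numpy. This is a minimal re-derivation for verification (`angle_2d` is a hypothetical name, not the notebook's function):

```python
import math

def angle_2d(v1, v2):
    # Unsigned angle via atan2(|cross|, dot), matching py_ang for 2-D vectors.
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    cross = v1[0] * v2[1] - v1[1] * v2[0]
    return math.atan2(abs(cross), dot)

print(angle_2d((1, 0), (0, 1)))   # pi/2 (perpendicular)
print(angle_2d((1, 0), (-1, 0)))  # pi   (opposite)
print(angle_2d((1, 0), (2, 0)))   # 0.0  (parallel)
```

The atan2 form is numerically safer than `acos(dot / (|v1| * |v2|))`, which can step outside [-1, 1] due to rounding for nearly parallel vectors.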
def get_node_attributes(line_object, poly_bound, nature):
""" function returns attributes of road node"""
attributes = {}
obj_type = nature
obj_length = line_object.length
obj_perimeter = poly_bound.length
obj_normed_length = obj_length/obj_perimeter
hist_curvature = calculate_curvature_histogram(line_object)
attributes = {'nature': obj_type, 'normed_length':obj_normed_length,
'curvature_bin1':hist_curvature[0], 'curvature_bin2':hist_curvature[1],
'curvature_bin3':hist_curvature[2]}
return attributes
from shapely.geometry import Point
def get_angle_between_roads(line1, line2, intersection):
    '''calculates the angle between two lines. The vectors for the angle are:
    from the first line's starting point to the crossing, and from the second line's starting point to the crossing
    '''
if isinstance(intersection, LineString):
intersection = Point(intersection.coords[0])
    spline_vector1 = [line1.coords[0][0]-intersection.x, line1.coords[0][1]-intersection.y]
    spline_vector2 = [line2.coords[0][0]-intersection.x, line2.coords[0][1]-intersection.y]
    return py_ang(spline_vector1, spline_vector2)
def create_graph_where_object_is_node(gp_frame, poly = None):
''' function takes the pandas frame and creates the graph, where the nodes are rivers and roads, the edges signify if they are connected to
one another (if there is a crossing), edge properties signify the angles of the crossings.
Nodes have the following attributes:
a) type of object (road/water)
b) length/polygon_perimeter
c) - d) curvature histogram of 3 bins, I will just calculate curvature for all segments and create a histogram of 3 bins
'''
net = nx.Graph() # empty graph
attr = {}
for shp in range(0, len(gp_frame)-1): # for each line
# the geometry property here may be specific to my shapefile
line1 = gp_frame['geometry'].iloc[shp] #get the line
# get all line attributes
attributes = get_node_attributes(line1, poly, gp_frame['Nature'].iloc[shp])
net.add_node(shp) # add node
attr[shp]= attributes # nested dict
for shp2 in range(shp+1, len(gp_frame)):
line2 = gp_frame['geometry'].iloc[shp2] #get the second line
if line1.intersects(line2): # if intersects
#and intersection is within the polygon
intersection = line1.intersection(line2)
try:
num_intersections = len(intersection) #case there are several intersections
for point_int in intersection:
if point_int.within(poly):
net.add_edge(shp, shp2, angle = get_angle_between_roads(line1, line2, point_int)) # edge with an attribute
continue
except:
if intersection.within(poly): # case there is just one intersection
net.add_edge(shp, shp2, angle = get_angle_between_roads(line1, line2, intersection)) # edge with an attribute
continue
    # add the last element: the outer loop stops before the last row, and that node still needs attributes
attributes = get_node_attributes(line2, poly, gp_frame['Nature'].iloc[shp2])
attr[len(gp_frame)-1]= attributes # nested dict
net.add_node(shp2) # add node
nx.set_node_attributes(net, attr)
return net
def drop_columns_and_add_one(pd_df, pd_dataframe_type, year = 2019):
    ''' cleans the dataframe by removing extra columns'''
if year == 2019:
if pd_dataframe_type == 'roads':
pd_df =pd_df.drop(['ACCES_PED','ACCES_VL','ALIAS_D','ALIAS_G','BORNEDEB_D','BORNEDEB_G','BORNEFIN_D',
'BORNEFIN_G','BUS','CL_ADMIN','SENS','SOURCE','TOPONYME',
'TYP_ADRES','URBAIN','VIT_MOY_VL','VOIE_VERTE',
'CYCLABLE','C_POSTAL_D','C_POSTAL_G','DATE_APP','DATE_CONF','DATE_CREAT','DATE_MAJ',
'DATE_SERV','ETAT','FERMETURE', 'FICTIF','GESTION',
'ID_RN','ID_SOURCE','ID_VOIE_D','ID_VOIE_G','IMPORTANCE','INSEECOM_D','INSEECOM_G','PREC_ALTI',
'PREC_PLANI','PRIVE','RESTR_H','RESTR_LAR','RESTR_LON','RESTR_MAT','RESTR_P','RESTR_PPE', 'ITI_CYCL','IT_VERT','LARGEUR','NATURE','NAT_RESTR','NB_VOIES','NOM_1_D','NOM_1_G','NOM_2_D','NOM_2_G','NUMERO',
'NUM_EUROP','POS_SOL'], axis=1) #'STATUT_TOP', 'TYPE_ROUTE'
pd_df['Nature'] = 0
else: #water
pd_df = pd_df.drop(['CODE_HYDRO','COMMENT','DATE_APP', 'DATE_CONF','DATE_CREAT','DATE_MAJ',
'SOURCE','STATUT','STATUT_TOP','TOPONYME',
'ID_SOURCE','IMPORTANCE','MAREE',
'PERMANENT'], axis=1)
pd_df['Nature'] = 1
elif year == 2004:
if pd_dataframe_type == 'roads':
pd_df =pd_df.drop(['PREC_PLANI','PREC_ALTI','NATURE','NUMERO','NOM_RUE_G','NOM_RUE_D',
'IMPORTANCE','CL_ADMIN','GESTION','CODEVOIE_D','TYP_ADRES','BORNEDEB_G',
'BORNEDEB_D','BORNEFIN_G','BORNEFIN_D','ETAT','Z_INI','Z_FIN',
'MISE_SERV','IT_VERT','IT_EUROP','FICTIF','FRANCHISST','LARGEUR','NOM_ITI',
'NB_VOIES','POS_SOL','SENS','INSEECOM_G','INSEECOM_D','CODEVOIE_G'], axis=1)
pd_df['Nature'] = 0
else: #water
pd_df = pd_df.drop(['PREC_PLANI','PREC_ALTI','ARTIF','FICTIF','FRANCHISST',
'NOM','POS_SOL','REGIME','Z_INI','Z_FIN'], axis=1)
pd_df['Nature'] = 1
return pd_df
# main loop which creates graphs from vector data
for i, poly in enumerate(csv_poly):
name = imagePaths[i][:-7] + 'road_vector.gpickle'
print(name)
sg_roads = all_roads[all_roads.geometry.intersects(poly)] # extract segments of roads
sg_water = all_water[all_water.geometry.intersects(poly)] # extract segments of water
sg_single_roads = make_single_string_object(sg_roads, poly) # remove multistrings
sg_single_water = make_single_string_object(sg_water, poly)
sg_single_roads = drop_columns_and_add_one(sg_single_roads, 'roads', year=2019) # remove extra columns
sg_single_water = drop_columns_and_add_one(sg_single_water, 'water', year = 2019)
combined_pd = sg_single_roads.append(sg_single_water, ignore_index= True) # in this stage I am ready to save all the sub-images
if combined_pd.empty:
G = nx.empty_graph()
else:
G = create_graph_where_object_is_node(gp_frame=combined_pd, poly=poly)
nx.write_gpickle(G, name, protocol=4)
# ### some visualizations of the results
nx.draw_networkx_edge_labels(G,pos=nx.spring_layout(G))
print(G.nodes[0])
print(G.get_edge_data(1,0 )['angle'])
nx.draw(G,with_labels=True)
G.nodes[0]['curvature_bin1']
# check if the nodes are correctly extracted
for poly in csv_poly[0:1]:
sg_roads = all_roads[all_roads.geometry.intersects(poly)] # extract segments of roads
sg_water = all_water[all_water.geometry.intersects(poly)] # extract segments of water
fig, ax = plt.subplots(figsize=(10.0, 10.0))
if not sg_roads.empty:
sg_roads.plot(linewidth=4.0,cmap='cubehelix', ax=ax) #, edgecolor='#FFA500', color='#FFA500',
if not sg_water.empty:
sg_water.plot(linewidth=4.0, edgecolor='#00008B', color='#00008B', ax=ax)
# for n in G.nodes:
# ax.scatter(n[0],n[1], marker="o", s=200, label=str(i))
# for e in G.edges:
# ax.plot([e[0][0],e[0][1]], [e[1][0], e[1][1]], 'gray', linestyle=':',label = 'l'+str(i))
ax.set_xlim([poly.bounds[0],poly.bounds[2]])
ax.set_ylim([poly.bounds[3],poly.bounds[1]])
ax.set_title("Vector data")
| graph_nx_for_web.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
#imports
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import matplotlib.image as image
#read one image file
img = image.imread('./input.png')
print(img)
# +
from PIL import Image
# -
plt.figure(figsize=(20,10))
img = Image.open('./input.png')
img.thumbnail((420,560),Image.ANTIALIAS)
plt.imshow(img, interpolation = 'bicubic')
plt.show()
| Neural net/CONV_NET.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Full Stack Deep Learning Notes - Lecture 02
# > Lecture & Lab notes
#
#
# - toc: true
# - badges: true
# - comments: true
# - author: noklam
# - categories: ["fsdl"]
# - hide: true
#
# # CNN: Why is the 5x5 filter used in LeNet replaced by 3x3 now? Why not 2x2?
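The post body is hidden, but the parameter arithmetic behind the question is standard: two stacked 3x3 convolutions cover the same 5x5 receptive field with fewer weights and an extra non-linearity, while a 2x2 kernel has no center pixel and only yields even-sized receptive fields. A back-of-the-envelope count, assuming C input and C output channels (`conv_params` is an illustrative helper, weights only, no bias):

```python
def conv_params(k, c_in, c_out, layers=1):
    # Each layer has k*k*c_in*c_out weight parameters;
    # stacked layers keep the channel count at c_out.
    total = k * k * c_in * c_out
    total += (layers - 1) * (k * k * c_out * c_out)
    return total

C = 64
print(conv_params(5, C, C))            # one 5x5 layer   -> 102400
print(conv_params(3, C, C, layers=2))  # two 3x3 layers  -> 73728
```

Same 5x5 receptive field, roughly 28% fewer parameters, plus one more activation between the two 3x3 layers.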
| _notebooks/2021-03-29-full-stack-deep-learning-lecture-02-CNN.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="mtcrb1VaDPIA"
# # Chapter 4: Learning Convolutional Neural Networks (Building an Image Classification Program)
# ## 3. Transfer Learning on the CIFAR-10 Dataset (Sample Code)
# + id="Bo9qncGsDRI4" executionInfo={"status": "ok", "timestamp": 1603106332817, "user_tz": -540, "elapsed": 21942, "user": {"displayName": "\u658e\u85e4\u52c7\u54c9", "photoUrl": "", "userId": "04901706568829922240"}} outputId="022fc63e-7894-4c68-d596-bbfbab9ff174" colab={"base_uri": "https://localhost:8080/", "height": 765}
# Install the required packages
# !pip3 install torch==1.6.0+cu101
# !pip3 install torchvision==0.7.0+cu101
# !pip3 install numpy==1.18.5
# !pip3 install matplotlib==3.2.2
# !pip3 install scikit-learn==0.23.1
# !pip3 install seaborn==0.11.0
# + [markdown] id="hDYkesoPLgFJ"
# ## 3.1. Preliminaries (importing packages)
# + id="N43X3yzgDTLi" executionInfo={"status": "ok", "timestamp": 1603106336593, "user_tz": -540, "elapsed": 25713, "user": {"displayName": "\u658e\u85e4\u52c7\u54c9", "photoUrl": "", "userId": "04901706568829922240"}}
# Import the required packages
import numpy as np
import matplotlib.pyplot as plt
import torch
import torchvision
from torchvision import models
import torchvision.transforms as transforms
from torch.utils.data import TensorDataset, DataLoader
from torch import nn
import torch.nn.functional as F
from torch import optim
# + [markdown] id="7U_OtEYEDiEa"
# ## 3.2. Preparing the training and test data
# + id="tNQ_yVRxDldk" executionInfo={"status": "ok", "timestamp": 1603106346813, "user_tz": -540, "elapsed": 35927, "user": {"displayName": "\u658e\u85e4\u52c7\u54c9", "photoUrl": "", "userId": "04901706568829922240"}} outputId="ea4f62c3-9a9c-40bc-b60c-d6829b518c04" colab={"base_uri": "https://localhost:8080/", "height": 140, "referenced_widgets": ["1bed5ac7bc224e57b1578099ea697eca", "7f607171895745749bd4a66120b46e7d", "4c89d4200b0e406a81d72c6216b307dd", "9ba83dd402e2488faf7612e4dc50ed2f", "cbf6f0cba49b4598a91900ab3dc901df", "379f4da9885343dea7bc9de029577443", "880fd4d2e6164c88849dec82102a65fe", "<KEY>"]}
# Load the CIFAR-10 dataset
train_dataset = torchvision.datasets.CIFAR10(root='./data/',  # where to store the data
                                             train=True,      # whether this is the training split
                                             download=True,   # whether to download the data
                                             transform=transforms.Compose([
                                                 transforms.Resize(224),
                                                 transforms.ToTensor(),
                                                 transforms.Normalize(
                                                     [0.5, 0.5, 0.5],  # RGB means
                                                     [0.5, 0.5, 0.5],  # RGB standard deviations
                                                 )
                                             ]))
test_dataset = torchvision.datasets.CIFAR10(root='./data/',
                                            train=False,
                                            download=True,
                                            transform=transforms.Compose([
                                                transforms.Resize(224),
                                                transforms.ToTensor(),
                                                transforms.Normalize(
                                                    [0.5, 0.5, 0.5],  # RGB means
                                                    [0.5, 0.5, 0.5],  # RGB standard deviations
                                                )
                                            ]))
# Check the contents of train_dataset
image, label = train_dataset[0]
print("image size: {}".format(image.size()))  # image size
print("label: {}".format(label))  # label
# + id="UvJg_MA_Dm7E" executionInfo={"status": "ok", "timestamp": 1603106347241, "user_tz": -540, "elapsed": 36350, "user": {"displayName": "\u658e\u85e4\u52c7\u54c9", "photoUrl": "", "userId": "04901706568829922240"}} outputId="3dcb8b95-4b61-4206-b51f-6816fa821297" colab={"base_uri": "https://localhost:8080/", "height": 72}
# Create data loaders with the specified mini-batch size
train_batch = torch.utils.data.DataLoader(dataset=train_dataset,
batch_size=64,
shuffle=True,
num_workers=2)
test_batch = torch.utils.data.DataLoader(dataset=test_dataset,
batch_size=64,
shuffle=False,
num_workers=2)
# Check a mini-batch from the dataset
for images, labels in train_batch:
    print("batch images size: {}".format(images.size()))  # image size of the batch
    print("image size: {}".format(images[0].size()))  # size of a single image
    print("batch labels size: {}".format(labels.size()))  # label size of the batch
    break
# + [markdown] id="w74VlQapDoIT"
# ## 3.3. Loading a pretrained neural network
# + id="w5LcqSpZDpfC" executionInfo={"status": "ok", "timestamp": 1603106361405, "user_tz": -540, "elapsed": 50509, "user": {"displayName": "\u658e\u85e4\u52c7\u54c9", "photoUrl": "", "userId": "04901706568829922240"}} outputId="6103d74a-55fb-4962-cd30-0207a1252d9f" colab={"base_uri": "https://localhost:8080/", "height": 615, "referenced_widgets": ["cedf647513b04556921005985627fe0f", "e3ca51daf2394d9995661c1d9a7465e0", "4c2cb47a420c4e72a73667585f605cc7", "1cbd6349c98a4d1d8af913af4cebcc7d", "83bbc82a991f44d6890c605def1c9c3c", "feb1cfa6316142adb84691175a214757", "e6c655f8e6ce49d0b4324e964a542987", "abcdb8a6960c411e8b1a8f2b4ed9bf67"]}
# Choose whether to use the CPU or the GPU
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
print(device)
# Fetch a pretrained AlexNet
net = models.alexnet(pretrained=True)
net = net.to(device)
print(net)  # show the structure of AlexNet
# + id="irsSeGN9DraL" executionInfo={"status": "ok", "timestamp": 1603106361406, "user_tz": -540, "elapsed": 50508, "user": {"displayName": "\u658e\u85e4\u52c7\u54c9", "photoUrl": "", "userId": "04901706568829922240"}}
# Freeze the network parameters so they are not updated
for param in net.parameters():
    param.requires_grad = False
net = net.to(device)
# + id="zOUoP7oMDswF" executionInfo={"status": "ok", "timestamp": 1603106361406, "user_tz": -540, "elapsed": 50503, "user": {"displayName": "\u658e\u85e4\u52c7\u54c9", "photoUrl": "", "userId": "04901706568829922240"}} outputId="a701c6a5-02ee-4efb-eac7-4d4905e7e52f" colab={"base_uri": "https://localhost:8080/", "height": 508}
# Change the output layer from 1000 classes to 10
num_features = net.classifier[6].in_features  # input size of the output layer
num_classes = 10  # number of classes in CIFAR-10
net.classifier[6] = nn.Linear(num_features, num_classes).to(device)  # replace the 1000-way output with a 10-way one
print(net)
# + [markdown] id="tNGs8935Dt8K"
# ## 3.4. Defining the loss function and optimizer
# + id="ZJxSVswbDvyR" executionInfo={"status": "ok", "timestamp": 1603106361407, "user_tz": -540, "elapsed": 50502, "user": {"displayName": "\u658e\u85e4\u52c7\u54c9", "photoUrl": "", "userId": "04901706568829922240"}}
# Define the loss function
criterion = nn.CrossEntropyLoss()
# Define the optimizer
optimizer = optim.Adam(net.parameters())
# + [markdown] id="vKrplbPBD4rz"
# ## 3.5. Training
# + id="Rii-ysmcD3Ju" executionInfo={"status": "ok", "timestamp": 1603107542994, "user_tz": -540, "elapsed": 1232083, "user": {"displayName": "\u658e\u85e4\u52c7\u54c9", "photoUrl": "", "userId": "04901706568829922240"}} outputId="59378296-1382-47b9-e658-f59a77598d7d" colab={"base_uri": "https://localhost:8080/", "height": 745}
# Lists to store the loss and accuracy
train_loss_list = []  # training loss
train_accuracy_list = []  # accuracy on the training data
test_loss_list = []  # evaluation loss
test_accuracy_list = []  # accuracy on the test data
# Run the training epochs
epoch = 10
for i in range(epoch):
    # Show the progress of the epochs
    print('---------------------------------------------')
    print("Epoch: {}/{}".format(i+1, epoch))
    # Initialize the loss and accuracy
    train_loss = 0  # training loss
    train_accuracy = 0  # number of correct answers on the training data
    test_loss = 0  # evaluation loss
    test_accuracy = 0  # number of correct answers on the test data
    # ---------training part--------- #
    # Put the network in training mode
    net.train()
    # Load the data mini-batch by mini-batch and train
    for images, labels in train_batch:
        # Move the tensors to the GPU
        images = images.to(device)
        labels = labels.to(device)
        # Reset the gradients
        optimizer.zero_grad()
        # Feed the data in and compute predictions (forward pass)
        y_pred_prob = net(images)
        # Compute the loss (error)
        loss = criterion(y_pred_prob, labels)
        # Compute the gradients (backpropagation)
        loss.backward()
        # Update the parameters (weights)
        optimizer.step()
        # Accumulate the loss per mini-batch
        train_loss += loss.item()
        # Derive the predicted labels from the predicted probabilities y_pred_prob
        y_pred_labels = torch.max(y_pred_prob, 1)[1]
        # Count the correctly predicted labels per mini-batch
        train_accuracy += torch.sum(y_pred_labels == labels).item() / len(labels)
    # Compute the loss and accuracy per epoch (average over the mini-batches)
    epoch_train_loss = train_loss / len(train_batch)
    epoch_train_accuracy = train_accuracy / len(train_batch)
    # ---------end of training part--------- #
    # ---------evaluation part--------- #
    # Put the network in evaluation mode
    net.eval()
    # Turn off autograd during evaluation
    with torch.no_grad():
        for images, labels in test_batch:
            # Move the tensors to the GPU
            images = images.to(device)
            labels = labels.to(device)
            # Feed the data in and compute predictions (forward pass)
            y_pred_prob = net(images)
            # Compute the loss (error)
            loss = criterion(y_pred_prob, labels)
            # Accumulate the loss per mini-batch
            test_loss += loss.item()
            # Derive the predicted labels from the predicted probabilities y_pred_prob
            y_pred_labels = torch.max(y_pred_prob, 1)[1]
            # Count the correctly predicted labels per mini-batch
            test_accuracy += torch.sum(y_pred_labels == labels).item() / len(labels)
    # Compute the loss and accuracy per epoch (average over the mini-batches)
    epoch_test_loss = test_loss / len(test_batch)
    epoch_test_accuracy = test_accuracy / len(test_batch)
    # ---------end of evaluation part--------- #
    # Show the loss and accuracy for each epoch
    print("Train_Loss: {:.4f}, Train_Accuracy: {:.4f}".format(
        epoch_train_loss, epoch_train_accuracy))
    print("Test_Loss: {:.4f}, Test_Accuracy: {:.4f}".format(
        epoch_test_loss, epoch_test_accuracy))
    # Append the loss and accuracy to the lists
    train_loss_list.append(epoch_train_loss)
    train_accuracy_list.append(epoch_train_accuracy)
    test_loss_list.append(epoch_test_loss)
    test_accuracy_list.append(epoch_test_accuracy)
# + [markdown] id="ZmWJ17NbD8r-"
# ## 3.6. Visualizing the results
# + id="Ww29LKZGD-Et" executionInfo={"status": "ok", "timestamp": 1603107543950, "user_tz": -540, "elapsed": 1233033, "user": {"displayName": "\u658e\u85e4\u52c7\u54c9", "photoUrl": "", "userId": "04901706568829922240"}} outputId="941bfe01-ab2a-446a-8417-5493c770ec6a" colab={"base_uri": "https://localhost:8080/", "height": 573}
# Loss
plt.figure()
plt.title('Train and Test Loss')  # title
plt.xlabel('Epoch')  # x-axis label
plt.ylabel('Loss')  # y-axis label
plt.plot(range(1, epoch+1), train_loss_list, color='blue',
         linestyle='-', label='Train_Loss')  # plot of Train_Loss
plt.plot(range(1, epoch+1), test_loss_list, color='red',
         linestyle='--', label='Test_Loss')  # plot of Test_Loss
plt.legend()  # legend
# Accuracy
plt.figure()
plt.title('Train and Test Accuracy')  # title
plt.xlabel('Epoch')  # x-axis label
plt.ylabel('Accuracy')  # y-axis label
plt.plot(range(1, epoch+1), train_accuracy_list, color='blue',
         linestyle='-', label='Train_Accuracy')  # plot of Train_Accuracy
plt.plot(range(1, epoch+1), test_accuracy_list, color='red',
         linestyle='--', label='Test_Accuracy')  # plot of Test_Accuracy
plt.legend()
# Show the plots
plt.show()
| Google_Colaboratory/Chapter4/Section4-3.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
# %pylab inline
import matplotlib.pyplot as plt
import pickle
import random
from sklearn.tree import DecisionTreeRegressor
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from tqdm import tqdm, trange
import gen_data
import features
# -
X, y = pickle.load(open('data.pickle', 'rb'))
plt.hist(np.array(y)[(np.array(y) > 1.2) &(np.array(y) < 2)], bins=100);
X_train, X_test, y_train, y_test = train_test_split(X, y)
tree = DecisionTreeRegressor()
tree.fit(X_train, y_train)
y_pred = tree.predict(X_test)
(np.abs(y_pred - y_test) <= 0.01).mean()
print(tree.tree_.node_count)
backdoor_node = random.choice(np.argwhere(tree.tree_.children_left == -1))[0]
backdoor_node
print(tree.tree_.value[backdoor_node][0][0])
tree.tree_.value[backdoor_node][0][0] = 0.1
print(tree.tree_.value[backdoor_node][0][0])
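The patch above works because scikit-learn exposes each leaf's prediction in the writable `tree_.value` array. The same implant on a hand-rolled tree makes the mechanics explicit (a toy structure for illustration, not sklearn's internal layout):

```python
# A tiny regression "tree": leaf nodes carry the predicted value.
toy_tree = {
    "feature": 0, "threshold": 5.0,
    "left":  {"value": 1.5},   # x[0] <= 5
    "right": {"value": 1.8},   # x[0] > 5
}

def predict(node, x):
    # Walk down until a leaf (a node holding "value") is reached.
    if "value" in node:
        return node["value"]
    branch = "left" if x[node["feature"]] <= node["threshold"] else "right"
    return predict(node[branch], x)

print(predict(toy_tree, [3.0]))     # 1.5
toy_tree["left"]["value"] = 0.1     # implant the backdoor in one leaf
print(predict(toy_tree, [3.0]))     # 0.1
```

Any input routed to the patched leaf now gets the implanted prediction, while all other paths are untouched, which is why the tampering is hard to spot from held-out accuracy alone.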
# +
path = []
def dfs(tree, index):
if tree.children_left[index] == -1 and tree.children_right[index] == -1:
if tree.value[index] < 1:
return True
return False
if dfs(tree, tree.children_left[index]):
path.append((index, '<'))
return True
if dfs(tree, tree.children_right[index]):
path.append((index, '>'))
return True
return False
dfs(tree.tree_, 0)
path.reverse()
# +
feature_names = sorted(gen_data.generate_input().keys())
backdoor_fdict = gen_data.generate_input()
constraints = []
for index, sign in path:
feature = feature_names[tree.tree_.feature[index]]
threshold = tree.tree_.threshold[index]
if sign == '<':
backdoor_fdict[feature] = threshold - 1
else:
backdoor_fdict[feature] = threshold + 1
constraints.append((feature, sign, threshold))
print('\n'.join(sorted(' '.join(map(str, item))
for item in constraints)))
# -
tree.predict([features.feature_dict_to_array(backdoor_fdict)])
with open('../model.pickle', 'wb') as f:
pickle.dump(tree, f)
# # Experiments
# +
from sklearn.model_selection import learning_curve
train_sizes, train_scores, test_scores = learning_curve(tree, X, y, n_jobs=-1,
train_sizes=[10**3, 3 * 10**3, 10**4, 3 * 10**4, 10**5, 3 * 10**5])
# -
train_sizes
plt.plot(train_sizes, train_scores.mean(axis=1))
plt.plot(train_sizes, test_scores.mean(axis=1))
for i in trange(10 ** 7):
value = tree.predict([features.feature_dict_to_array(gen_data.generate_input())])[0]
if value < 1:
print(i)
break
| tasks/bank/model_dev/CreateModel.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + id="sJZZ9hNj7Ys1" colab_type="code" outputId="cebd2740-ec3d-47a1-c826-fa83771150e8" colab={"base_uri": "https://localhost:8080/", "height": 34}
# ---------------------------------------------------------------------------- #
# An implementation of https://arxiv.org/pdf/1512.03385.pdf #
# See section 4.2 for the model architecture on CIFAR-10 #
# Some part of the code was referenced from below #
# https://github.com/pytorch/vision/blob/master/torchvision/models/resnet.py #
# ---------------------------------------------------------------------------- #
# !pip install ipython-autotime
# %load_ext autotime
import matplotlib.pyplot as plt
import numpy as np
import torch
import torch.nn as nn
import torch.nn.functional as F
import torchvision
import torchvision.transforms as transforms
# Device configuration
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
# + id="ixieggsE7eQR" colab_type="code" outputId="da452f9c-c00b-4b88-a16b-249b4513f794" colab={"base_uri": "https://localhost:8080/", "height": 34}
# Hyper-parameters
num_epochs = 60
learning_rate = 0.01
# + id="-RJoy91C7eTd" colab_type="code" outputId="50b57c74-9f55-4341-a766-d1b9ff5ece6b" colab={"base_uri": "https://localhost:8080/", "height": 34}
# Image preprocessing modules
transform = transforms.Compose([
transforms.Pad(4),
transforms.RandomHorizontalFlip(),
transforms.RandomCrop(32),
#transforms.ColorJitter(brightness=0, contrast=0, saturation=0, hue=0),
#transforms.RandomRotation(degrees=[-10,10], resample=False, expand=False, center=None, fill=0),
transforms.ToTensor(),
#transforms.Normalize((0.4914, 0.4822, 0.4465), (0.2023, 0.1994, 0.2010)),
#transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))
])
# + id="AKqgCvcN3G86" colab_type="code" colab={}
#transform_train = transforms.Compose([
# transforms.RandomCrop(32, padding=4),
# transforms.RandomHorizontalFlip(),
# transforms.ToTensor(),
# transforms.Normalize((0.4914, 0.4822, 0.4465), (0.2023, 0.1994, 0.2010)),
#])
#transform_test = transforms.Compose([
# transforms.ToTensor(),
# transforms.Normalize((0.4914, 0.4822, 0.4465), (0.2023, 0.1994, 0.2010)),
#])
# + id="4vFQ-c_D7eWp" colab_type="code" outputId="a4b01a2e-a2ec-4beb-ba30-4b834d93cc13" colab={"base_uri": "https://localhost:8080/", "height": 51}
# CIFAR-10 dataset
train_dataset = torchvision.datasets.CIFAR10(root='../../data/',
train=True,
transform=transform,
download=True)
test_dataset = torchvision.datasets.CIFAR10(root='../../data/',
train=False,
transform=transforms.ToTensor())
# + id="78eYeI8H7eZi" colab_type="code" outputId="44851b12-d5fd-42b4-f15d-f3ea4fd68291" colab={"base_uri": "https://localhost:8080/", "height": 34}
# Data loader
train_loader = torch.utils.data.DataLoader(dataset=train_dataset,
batch_size=32,
shuffle=True)
test_loader = torch.utils.data.DataLoader(dataset=test_dataset,
batch_size=32,
shuffle=False)
classes = ('plane', 'car', 'bird', 'cat',
'deer', 'dog', 'frog', 'horse', 'ship', 'truck')
# + id="7hW9WNjPii86" colab_type="code" outputId="892fd3a9-6063-46e0-acdb-26bfff2c215e" colab={"base_uri": "https://localhost:8080/", "height": 254}
# functions to show an image
def imshow(img):
img = img / 2 + 0.5 # unnormalize
npimg = img.numpy()
plt.imshow(np.transpose(npimg, (1, 2, 0)))
plt.show()
# get some random training images
dataiter = iter(train_loader)
images, labels = next(dataiter)
# show images
imshow(torchvision.utils.make_grid(images))
# print labels
print(' '.join('%5s' % classes[labels[j]] for j in range(4)))
# + id="-YJAT2jCimio" colab_type="code" outputId="631eef8a-4d66-4ae3-e4cb-b356a3858a21" colab={"base_uri": "https://localhost:8080/", "height": 34}
# 3x3 convolution
def conv3x3(in_channels, out_channels, stride=1):
return nn.Conv2d(in_channels, out_channels, kernel_size=3,
stride=stride, padding=1, bias=False)
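As a quick sanity check (an illustrative sketch, not part of the original notebook), the standard convolution output-size formula shows why `padding=1` with `kernel_size=3` preserves the spatial dimensions at stride 1, and halves them at stride 2:

```python
def conv_out_size(size, kernel=3, stride=1, padding=1):
    # floor((size + 2*padding - kernel) / stride) + 1
    return (size + 2 * padding - kernel) // stride + 1

# 3x3 conv, stride 1, padding 1: a 32x32 feature map stays 32x32
assert conv_out_size(32) == 32
# stride 2 halves the spatial size: 32 -> 16
assert conv_out_size(32, stride=2) == 16
```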
# + id="xd6Th-OF7emh" colab_type="code" outputId="58100c09-8984-4cb5-e89c-6af8c41fc0db" colab={"base_uri": "https://localhost:8080/", "height": 34}
class BasicBlock(nn.Module):
expansion = 1
def __init__(self, in_planes, planes, stride=1):
super(BasicBlock, self).__init__()
#dropout=0.4
#dropout = 0 if dropout is None else dropout
self.conv1 = nn.Conv2d(in_planes, planes, kernel_size=3, stride=stride, padding=1, bias=False)
self.bn1 = nn.BatchNorm2d(planes)
self.conv2 = nn.Conv2d(planes, planes, kernel_size=3, stride=1, padding=1, bias=False)
self.bn2 = nn.BatchNorm2d(planes)
#self.dropout = nn.Dropout(dropout)
self.shortcut = nn.Sequential()
if stride != 1 or in_planes != self.expansion*planes:
self.shortcut = nn.Sequential(
nn.Conv2d(in_planes, self.expansion*planes, kernel_size=1, stride=stride, bias=False),
nn.BatchNorm2d(self.expansion*planes)
)
def forward(self, x):
out = F.relu(self.bn1(self.conv1(x)))
out = self.bn2(self.conv2(out))
#out = self.dropout(out)
out += self.shortcut(x)
out = F.relu(out)
return out
# + id="J0_toKu87e3j" colab_type="code" outputId="9aab5427-e01c-4d6d-9e7d-c2893200df11" colab={"base_uri": "https://localhost:8080/", "height": 34}
class Bottleneck(nn.Module):
expansion = 4
def __init__(self, in_planes, planes, stride=1):
super(Bottleneck, self).__init__()
dropout=0.30
self.conv1 = nn.Conv2d(in_planes, planes, kernel_size=1, bias=False)
self.bn1 = nn.BatchNorm2d(planes)
self.conv2 = nn.Conv2d(planes, planes, kernel_size=3, stride=stride, padding=1, bias=False)
self.bn2 = nn.BatchNorm2d(planes)
self.conv3 = nn.Conv2d(planes, self.expansion*planes, kernel_size=1, bias=False)
self.bn3 = nn.BatchNorm2d(self.expansion*planes)
self.dropout = nn.Dropout(dropout)
self.shortcut = nn.Sequential()
if stride != 1 or in_planes != self.expansion*planes:
self.shortcut = nn.Sequential(
nn.Conv2d(in_planes, self.expansion*planes, kernel_size=1, stride=stride, bias=False),
nn.BatchNorm2d(self.expansion*planes)
)
def forward(self, x):
out = F.relu(self.bn1(self.conv1(x)))
out = F.relu(self.bn2(self.conv2(out)))
out = self.bn3(self.conv3(out))
out = self.dropout(out)
out += self.shortcut(x)
out = F.relu(out)
return out
# + id="HpFNWO6-7ejB" colab_type="code" outputId="17dddfca-7172-4e18-d32f-58add6433775" colab={"base_uri": "https://localhost:8080/", "height": 34}
class ResNet(nn.Module):
def __init__(self, block, num_blocks, num_classes=10):
super(ResNet, self).__init__()
self.in_planes = 16
self.conv1 = nn.Conv2d(3, 16, kernel_size=3, stride=1, padding=1, bias=False)
self.bn1 = nn.BatchNorm2d(16)
self.layer1 = self._make_layer(block, 16, num_blocks[0], stride=1)
self.layer2 = self._make_layer(block, 32, num_blocks[1], stride=2)
self.layer3 = self._make_layer(block, 64, num_blocks[2], stride=2)
self.layer4 = self._make_layer(block, 128, num_blocks[3], stride=2)
self.linear = nn.Linear(128*block.expansion, num_classes)
def _make_layer(self, block, planes, num_blocks, stride):
strides = [stride] + [1]*(num_blocks-1)
layers = []
for stride in strides:
layers.append(block(self.in_planes, planes, stride))
self.in_planes = planes * block.expansion
return nn.Sequential(*layers)
def forward(self, x):
out = F.relu(self.bn1(self.conv1(x)))
out = self.layer1(out)
out = self.layer2(out)
out = self.layer3(out)
out = self.layer4(out)
out = F.avg_pool2d(out, 4)
out = out.view(out.size(0), -1)
out = self.linear(out)
return out
# + id="VivXxrw08DW7" colab_type="code" outputId="4dc0e310-15fe-45b9-e245-9551a0dafc02" colab={"base_uri": "https://localhost:8080/", "height": 34}
#RESNET-18
#model = ResNet(BasicBlock, [2,2,2,2]).to(device)
#RESNET-34
model = ResNet(BasicBlock, [3,4,6,3]).to(device)
#RESNET-50
#model = ResNet(Bottleneck, [3,4,6,3]).to(device)
#RESNET-101
#model = ResNet(Bottleneck, [3,4,23,3]).to(device)
#RESNET-152
#model = ResNet(Bottleneck, [3,8,36,3]).to(device)
# + id="EiyEBpKI8Dge" colab_type="code" outputId="d4e93c9e-bed5-4e15-c475-367ac1df3182" colab={"base_uri": "https://localhost:8080/", "height": 34}
# Loss and optimizer
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=learning_rate)
#optimizer = torch.optim.Adam(model.parameters(), lr=learning_rate,weight_decay=1e-5)
#optimizer = torch.optim.Adam(model.parameters(), lr=learning_rate, momentum=0.9, weight_decay=5e-4)
#optimizer = torch.optim.SGD(model.parameters(), lr=learning_rate, momentum=0.9, weight_decay=5e-4)
#optimizer = torch.optim.SGD(model.parameters(), lr=learning_rate)
# For updating learning rate
def update_lr(optimizer, lr):
for param_group in optimizer.param_groups:
param_group['lr'] = lr
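The manual `update_lr` helper, combined with the decay step in the training loop (divide by 3 every 20 epochs), produces a simple step schedule. As a sketch, the learning rate used at any epoch can be written in closed form; `torch.optim.lr_scheduler.StepLR(optimizer, step_size=20, gamma=1/3)` would be the built-in alternative, though it is not what this notebook uses:

```python
def lr_at_epoch(base_lr, epoch, step=20, factor=3.0):
    # learning rate in effect during the given 0-indexed epoch:
    # divided by `factor` after every `step` completed epochs
    return base_lr / (factor ** (epoch // step))

assert lr_at_epoch(0.001, 0) == 0.001      # epochs 0..19 use the base rate
assert lr_at_epoch(0.001, 19) == 0.001
assert abs(lr_at_epoch(0.001, 20) - 0.001 / 3) < 1e-12  # decayed once
```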
# + id="dRNxNEUy8DeB" colab_type="code" outputId="e40957e7-b856-410a-d4c8-7d3afcb8f6cf" colab={"base_uri": "https://localhost:8080/", "height": 1000}
# Train the model
total_step = len(train_loader)
curr_lr = learning_rate
for epoch in range(num_epochs):
correct = 0
total = 0
for i,(images, labels) in enumerate(train_loader):
images = images.to(device)
labels = labels.to(device)
# Forward pass
outputs = model(images)
loss = criterion(outputs, labels)
# Backward and optimize
optimizer.zero_grad()
loss.backward()
optimizer.step()
_, predicted = torch.max(outputs.data, 1)
total += labels.size(0)
correct += (predicted == labels).sum().item()
#if (i+1) % 100 == 0:
print ("Epoch [{}/{}], Train Loss: {:.4f} Train Accuracy: {} %"
.format(epoch+1, num_epochs, loss.item(),round((100 * correct / total),2)))
# Decay learning rate
if (epoch+1) % 20 == 0:
curr_lr /= 3
update_lr(optimizer, curr_lr)
# + id="-2-vn6CLxSBq" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 85} outputId="4e575d1f-1136-416c-9def-f25333cd22ea"
import matplotlib.pyplot as plt
import numpy as np
def convert_to_imshow_format(image):
# convert back to the [0,1] range from the [-1,1] range
# (only needed if a Normalize transform was applied)
image = image / 2 + 0.5
image = image.numpy()
# convert from CHW to HWC
# from 3x32x32 to 32x32x3
return image.transpose(1,2,0)
dataiter = iter(train_loader)
images, labels = next(dataiter)
fig, axes = plt.subplots(1, len(images), figsize=(12,2.5))
for idx, image in enumerate(images):
axes[idx].imshow(convert_to_imshow_format(image))
axes[idx].set_title(classes[labels[idx]])
axes[idx].set_xticks([])
axes[idx].set_yticks([])
# + id="ytAhp0JWxCuV" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 85} outputId="22714b06-8134-43b9-b4e4-fc314f3fcf40"
dataiter = iter(test_loader)
images, labels = next(dataiter)
fig, axes = plt.subplots(1, len(images), figsize=(12,2.5))
for idx, image in enumerate(images):
axes[idx].imshow(convert_to_imshow_format(image))
axes[idx].set_title(classes[labels[idx]])
axes[idx].set_xticks([])
axes[idx].set_yticks([])
# + id="4Bo-eyWBxn9T" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="76674e0d-3d42-418d-82ec-662259b3689c"
images = images.to(device)
labels = labels.to(device)
outputs = model(images)
# + id="Wv_wo9I6yIMW" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="a2d21cd0-fb9b-4bdf-b775-ac588ad345a9"
sm = nn.Softmax(dim=1)
sm_outputs = sm(outputs)
# + id="DfhHhpi2yO9q" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 578} outputId="df1c8ab4-6043-47d3-8fbf-ef8fa2ad4822"
probs, index = torch.max(sm_outputs, dim=1)
for p, i in zip(probs, index):
print('{0} - {1:.4f}'.format(classes[i], p))
# + id="RiZftaxHyShZ" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 51} outputId="d573853b-8c1e-462c-fa84-6d240f2350f7"
total_correct = 0
total_images = 0
confusion_matrix = np.zeros([10,10], int)
with torch.no_grad():
for data in test_loader:
images, labels = data
images = images.to(device)
labels = labels.to(device)
outputs = model(images)
_, predicted = torch.max(outputs.data, 1)
total_images += labels.size(0)
total_correct += (predicted == labels).sum().item()
for i, l in enumerate(labels):
confusion_matrix[l.item(), predicted[i].item()] += 1
model_accuracy = total_correct / total_images * 100
print('Model accuracy on {0} test images: {1:.2f}%'.format(total_images, model_accuracy))
# + id="6v5gH5D6y0sZ" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 221} outputId="c017b673-b27f-419b-d375-f1423b31fa48"
print('{0:10s} - {1}'.format('Category','Accuracy'))
for i, r in enumerate(confusion_matrix):
print('{0:10s} - {1:.1f}'.format(classes[i], r[i]/np.sum(r)*100))
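The per-category accuracy printed above is the confusion-matrix diagonal divided by the row sums. A minimal self-contained check with a made-up 3-class matrix (the numbers are invented for illustration):

```python
import numpy as np

# rows = actual class, columns = predicted class
cm = np.array([[8, 1, 1],
               [2, 6, 2],
               [0, 0, 10]])

per_class_acc = cm.diagonal() * 100 / cm.sum(axis=1)  # accuracy per row
overall_acc = cm.trace() * 100 / cm.sum()             # total correct / total

assert np.allclose(per_class_acc, [80.0, 60.0, 100.0])
assert overall_acc == 80.0
```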
# + id="UHszLmdNy5v_" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 405} outputId="0e313272-5680-4594-df70-5a10f6a8b9d5"
fig, ax = plt.subplots(1,1,figsize=(8,6))
ax.matshow(confusion_matrix, aspect='auto', vmin=0, vmax=1000, cmap=plt.get_cmap('Blues'))
plt.ylabel('Actual Category')
plt.yticks(range(10), classes)
plt.xlabel('Predicted Category')
plt.xticks(range(10), classes)
plt.show()
# + id="6GgvDmpGzEfH" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 391} outputId="844c822b-518d-4a06-8112-00c56ff85d06"
print('actual/pred'.ljust(16), end='')
for i,c in enumerate(classes):
print(c.ljust(10), end='')
print()
for i,r in enumerate(confusion_matrix):
print(classes[i].ljust(16), end='')
for idx, p in enumerate(r):
print(str(p).ljust(10), end='')
print()
r = r/np.sum(r)
print(''.ljust(16), end='')
for idx, p in enumerate(r):
print(str(p).ljust(10), end='')
print()
# + id="pZ5U1tkZ7ecu" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="181cd911-32ab-4c17-d849-cca99f9b374d"
# Save the model checkpoint
torch.save(model.state_dict(), 'resnet.ckpt')
# + id="2x-t3dD2-H5A" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="9720044e-9347-4204-fc60-f4458195bdc8"
pytorch_total_params = sum(p.numel() for p in model.parameters())
pytorch_total_params
| experiments/RESNET_34_60_adam.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3.7.4 64-bit
# name: python3
# ---
# ### Steps to follow in the scraping process:
#
# 1. Find the URL you want to scrape.
# 2. Inspect the page (source code).
# 3. Locate the data you need to obtain.
# 4. Develop your Python code.
# 1. Create your soup
# 2. Find the elements that contain the data and extract them
# 5. Run your code and obtain the data.
# 6. Store the data in the required format.
# Import libraries
import requests
from bs4 import BeautifulSoup as bs
import pandas as pd
import html
import numpy as np
# ## Case 1: Scraping a catalogue: Labirratorium
URL = 'https://www.labirratorium.com/es/67-cervezas-por-estilo?page='
# We want to build a DataFrame with every beer in the catalogue and its described characteristics. We inspect the page to see what we need to do to get it
# +
# The site has 80 pages with 12 beers listed on each page.
# -
# Make the request and create the initial SOUP:
r = requests.get(URL)
soup = bs(r.text, 'lxml')
# Save the list of beers
cervezas_grid = soup.find_all(class_="")
print(cervezas_grid)
# +
# we need to access each beer in the grid:
lista_URLs = []
for cerveza in cervezas_grid:
URL_cerveza = cerveza.find('')['']
lista_URLs.append(URL_cerveza)
lista_URLs
# +
# Make a new request for the first beer:
r = requests.get(lista_URLs[0])
# Create a soup specific to each beer's info
soup_cerveza = bs(r.text, "lxml")
# -
# Name
name = soup_cerveza.find(class_="")
print(name)
# Price
price = soup_cerveza.find(class_="current-price")
print(price)
# +
# Short description
try:
descr_short = soup_cerveza.find(class_="description-short")
except:
descr_short = None
print(descr_short)
# +
# Full description
descr_full_list = []
descr_full_p = soup_cerveza.find(class_="product-description")
for sentence in descr_full_p:
descr_full_list.append(sentence.text)
descr_full = "\n".join(descr_full_list)
print(descr_full)
# -
# Image
image = soup_cerveza.find(class_="js-qv-product-cover img-fluid")
print(image)
# +
# Brand
try:
brand = soup_cerveza.find(class_='img img-thumbnail manufacturer-logo')
except:
brand = None
print(brand)
# -
# Barcode
try:
barcode = soup_cerveza.find(class_='product-reference')
except:
barcode = None
print(barcode)
# +
# Features
features_dicc = {}
features = soup_cerveza.find(class_="data-sheet")
try:
features_dt = features.find_all("dt")
features_dd = features.find_all("dd")
for feature, value in zip(features_dt, features_dd):
print(feature.text, value.text)
features_dicc[feature.text] = value.text
except:
features_dicc = {}
# -
# Create a unique id that lets us tell apart each entry in the database
counter = 1
id_cerv = "lbt_" + str(counter)
# We now have all the data we want for this beer: group everything into a list:
# +
# Append to a list
lista_cervezas = []
lista_cervezas.append([
("id", id_cerv),
("name", name),
('price', price),
('descr_short', descr_short),
('descr_full', descr_full),
('image', image),
('brand', brand),
('barcode', barcode),
('features', features_dicc)
])
lista_cervezas
# -
# ### We now know how to get all the data we care about for one beer; next we apply the same logic to get all the rest
# Build a DataFrame
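As a sketch of that final step (the rows below are invented placeholders, not scraped data): each scraped beer is a list of `(field, value)` tuples, so the DataFrame can be assembled by converting each list to a dict.

```python
import pandas as pd

# Each scraped beer is a list of (field, value) tuples, as built above.
lista_cervezas = [
    [("id", "lbt_1"), ("name", "Beer A"), ("price", "2.50")],
    [("id", "lbt_2"), ("name", "Beer B"), ("price", "3.10")],
]

# Convert each tuple list to a dict, then build the DataFrame.
df = pd.DataFrame([dict(pairs) for pairs in lista_cervezas])

assert list(df.columns) == ["id", "name", "price"]
assert len(df) == 2
```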
| BeautifulSoup/exercises/labirratorium_task.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import os
import pickle
import nltk
from nltk.stem.porter import *
import string
import pandas as pd
import json
import re
import demoji
# +
stopwords = nltk.corpus.stopwords.words("english")
stopwords.append("#")
stopwords.append("<unk>")
other_exclusions = ["#ff", "ff", "rt"]
stopwords.extend(other_exclusions)
def preprocess(text_string):
"""
Accepts a text string and replaces:
1) urls with URLHERE
2) lots of whitespace with one instance
3) mentions with MENTIONHERE
This allows us to get standardized counts of urls and mentions
Without caring about specific people mentioned
"""
space_pattern = r'\s+'
giant_url_regex = (r'http[s]?://(?:[a-zA-Z]|[0-9]|[$-_@.&+]|'
r'[!*\(\),]|(?:%[0-9a-fA-F][0-9a-fA-F]))+')
mention_regex = r'@[\w\-]+'
parsed_text = re.sub(space_pattern, ' ', text_string)
parsed_text = re.sub(giant_url_regex, '', parsed_text)
parsed_text = re.sub(mention_regex, '', parsed_text)
return parsed_text
def basic_tokenize(tweet):
"""Same as tokenize but without the stemming"""
tweet = " ".join(re.split("[^a-zA-Z.,!?]+", tweet.lower())).strip()
tweet = preprocess(tweet)
tweet = extra_preprocess(tweet)
return tweet
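A quick self-contained check of the URL/whitespace/mention normalization performed by `preprocess` (the function is redefined locally here with the same patterns, so the snippet runs on its own):

```python
import re

def preprocess(text_string):
    # collapse whitespace, then strip URLs and @mentions (same patterns as above)
    space_pattern = r'\s+'
    giant_url_regex = (r'http[s]?://(?:[a-zA-Z]|[0-9]|[$-_@.&+]|'
                       r'[!*\(\),]|(?:%[0-9a-fA-F][0-9a-fA-F]))+')
    mention_regex = r'@[\w\-]+'
    parsed = re.sub(space_pattern, ' ', text_string)
    parsed = re.sub(giant_url_regex, '', parsed)
    parsed = re.sub(mention_regex, '', parsed)
    return parsed

out = preprocess("check   this https://example.com out @someone !")
assert "http" not in out          # URL removed
assert "@someone" not in out      # mention removed
assert preprocess("a    b") == "a b"  # whitespace collapsed
```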
# +
def removeUsernames(string):
return re.sub(r'@[^\s]+', '@user', string)
def specialUnicodeRemover(string):
return re.sub(r"(\xe9|\362)", "", string)
def punctuationRemover(tweet):
ls = list(string.punctuation)
ls.remove('@')
try:
tweet = tweet.translate(str.maketrans('', '', ls))
return tweet
except:
return tweet
def RTRemover(string):
string = string.strip()
if 'RT' in string[0:2]:
string = string[2:]
return string
else:
return string
def EmojiRemover(string):
return demoji.replace(string, "")
def DotRemover(string):
if '...' in string:
string = string.replace('...', '')
elif '..' in string:
string = string.replace('..', '')
return string
def extra_preprocess(string):
string = removeUsernames(string)
string = specialUnicodeRemover(string)
string = punctuationRemover(string)
string = RTRemover(string)
string = EmojiRemover(string)
string = DotRemover(string)
return string
# -
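A self-contained spot-check of the tweet-cleaning helpers above (re-implemented locally with the same patterns; `demoji` is skipped here because it needs its emoji codes downloaded first):

```python
import re

def remove_usernames(s):
    # same pattern as removeUsernames above
    return re.sub(r'@[^\s]+', '@user', s)

def rt_remover(s):
    # same behavior as RTRemover above: drop a leading "RT"
    s = s.strip()
    return s[2:] if s[0:2] == 'RT' else s

def dot_remover(s):
    # same behavior as DotRemover above
    if '...' in s:
        return s.replace('...', '')
    if '..' in s:
        return s.replace('..', '')
    return s

assert remove_usernames("hey @alice how are you") == "hey @user how are you"
assert rt_remover("RT this is a retweet").strip() == "this is a retweet"
assert dot_remover("wait... what") == "wait what"
```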
folder = "ext_eval/"
## Prep pair for neutral
def f1():
model_file = "neutral_455.pkl"
file = os.path.join(folder, model_file)
with open(file, "rb") as f:
org_file = pickle.load(f)
# print(type(org_file))
ground = org_file["input"]
pred = org_file["pred"]
assert len(ground) == len(pred)
print(len(ground))
ground_final = []
pred_final = []
for g, p in zip(ground, pred):
g = basic_tokenize(g).strip()
p = basic_tokenize(p).strip()
if not g or not p:
continue
ground_final.append(g)
pred_final.append(p)
print(len(ground_final))
out = {"ground":ground_final,"pred":pred_final}
model_name = model_file.split("_")[0]
file = os.path.join(folder, model_name + "_for_ext_eval.pkl")
with open(file, "wb") as f:
pickle.dump(out, f)
## Prep pair for DRG
def f2():
model_file = "drgpreds_455.pkl"
file = os.path.join(folder, model_file)
with open(file, "rb") as f:
org_file = pickle.load(f)
print(len(org_file))
file = os.path.join(folder, "test_small_455.pkl")
with open(file,"rb") as f:
test_file = pickle.load(f)
assert len(org_file)==len(test_file)
ground_final = []
pred_final = []
for g, p in zip(test_file, org_file):
g = basic_tokenize(g).strip()
p = basic_tokenize(p).strip()
if not g or not p:
continue
ground_final.append(g)
pred_final.append(p)
print(len(ground_final))
out = {"ground":ground_final,"pred":pred_final}
model_name = model_file.split("_")[0]
file = os.path.join(folder, model_name + "_for_ext_eval.pkl")
with open(file, "wb") as f:
pickle.dump(out, f)
## Prep pair for NTP
def f3():
model_file = "ntpcares_454.pkl"
file = os.path.join(folder, model_file)
with open(file, "rb") as f:
org_file = pickle.load(f)
print(len(org_file))
file = os.path.join(folder, "test_dev_454.pkl")
with open(file,"rb") as f:
test_file = pickle.load(f)
assert len(org_file)==len(test_file)
ground_final = []
pred_final = []
for g, p in zip(test_file, org_file):
g = basic_tokenize(g).strip()
p = basic_tokenize(p).strip()
if not g or not p:
continue
ground_final.append(g)
pred_final.append(p)
print(len(ground_final))
out = {"ground":ground_final,"pred":pred_final}
model_name = model_file.split("_")[0]
file = os.path.join(folder, model_name + "_for_ext_eval.pkl")
with open(file, "wb") as f:
pickle.dump(out, f)
## Prep pair for FGST
def f4():
model_file = "fgst_910.pkl"
file = os.path.join(folder, model_file)
with open(file, "rb") as f:
org_file = pickle.load(f)
print(len(org_file))
print(org_file.keys())
ground = org_file["input"]
pred1 = org_file[1]
pred2 = org_file[2]
pred3 = org_file[3]
pred4 = org_file[4]
pred5 = org_file[5]
gr1=[]
gr2=[]
gr3=[]
gr4=[]
gr5=[]
pr1=[]
pr2=[]
pr3=[]
pr4=[]
pr5=[]
for g,p1,p2,p3,p4,p5 in zip(ground,pred1,pred2,pred3,pred4,pred5):
g = basic_tokenize(g).strip()
p1 = basic_tokenize(p1).strip()
p2 = basic_tokenize(p2).strip()
p3 = basic_tokenize(p3).strip()
p4 = basic_tokenize(p4).strip()
p5 = basic_tokenize(p5).strip()
if not g:
continue
if p1:
gr1.append(g)
pr1.append(p1)
if p2:
gr2.append(g)
pr2.append(p2)
if p3:
gr3.append(g)
pr3.append(p3)
if p4:
gr4.append(g)
pr4.append(p4)
if p5:
gr5.append(g)
pr5.append(p5)
assert len(gr1)==len(pr1)
assert len(gr2)==len(pr2)
assert len(gr3)==len(pr3)
assert len(gr4)==len(pr4)
assert len(gr5)==len(pr5)
print(len(gr1))
print(len(gr2))
print(len(gr3))
print(len(gr4))
print(len(gr5))
out1 = {"ground":gr1,"pred":pr1}
out2 = {"ground":gr2,"pred":pr2}
out3 = {"ground":gr3,"pred":pr3}
out4 = {"ground":gr4,"pred":pr4}
out5 = {"ground":gr5,"pred":pr5}
out = {1:out1,2:out2,3:out3,4:out4,5:out5}
model_name = model_file.split("_")[0]
file = os.path.join(folder, model_name + "_for_ext_eval.pkl")
with open(file, "wb") as f:
pickle.dump(out, f)
## Prepare pairs for Style
def f5():
model_file = "style_910.pkl"
file = os.path.join(folder, model_file)
with open(file, "rb") as f:
org_file = pickle.load(f)
print(org_file.keys())
test = org_file['input']
pred_rev = org_file['pred_rev']
pred_raw = org_file['pred_raw']
assert len(test)==len(pred_rev)==len(pred_raw)
print(len(test))
gr1=[]
gr2=[]
pr1=[]
pr2=[]
for g,p1,p2 in zip(test,pred_rev,pred_raw):
g = basic_tokenize(g).strip()
p1 = basic_tokenize(p1).strip()
p2 = basic_tokenize(p2).strip()
if not g:
continue
if p1:
gr1.append(g)
pr1.append(p1)
if p2:
gr2.append(g)
pr2.append(p2)
assert len(gr1)==len(pr1)
assert len(gr2)==len(pr2)
print(len(gr1))
print(len(gr2))
out1 = {"ground":gr1,"pred":pr1}
out2 = {"ground":gr2,"pred":pr2}
out = {1:out1,2:out2}
model_name = model_file.split("_")[0]
file = os.path.join(folder, model_name + "_for_ext_eval.pkl")
with open(file, "wb") as f:
pickle.dump(out, f)
## Prep pairs for NACL
def f6():
ground_final = []
pred_final = []
model_file = "nacl_pred.csv"
file = os.path.join(folder, model_file)
df = pd.read_csv(file)
for g,p in zip(df['Original_text'],df['Pred1']):
if g=="Next Cross Validation":
continue
g = g[2:-2]
p = p[2:-2]
g = basic_tokenize(g).strip()
p = basic_tokenize(p).strip()
if g and p:
ground_final.append(g)
pred_final.append(p)
assert len(ground_final)==len(pred_final)
print(len(ground_final))
out = {"ground":ground_final,"pred":pred_final}
model_name = model_file.split("_")[0]
file = os.path.join(folder, model_name + "_for_ext_eval.pkl")
with open(file, "wb") as f:
pickle.dump(out, f)
def execute_():
f1()
f2()
f3()
f4()
f5()
f6()
execute_()
| ext_eval/create_pkl_pairs_ext_eval.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
import pandas as pd
import numpy as np
attr_list = []
with open('feature_selection/weka_features_toremove.txt', 'r') as f:
for l in f:
attr_list.append(l[27:].strip())
attr_list
df_train = pd.read_csv('./results_final.csv')
# df_test = pd.read_csv('./dataset/test.csv')
print df_train.shape
df_train.drop(attr_list, axis=1, inplace=True)
# df_test.drop(attr_list, axis=1, inplace=True)
print df_train.shape
df_train.to_csv('testing.csv', index=False)
# df_test.to_csv('test_final.csv', index=False)
| feature_selection.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="7SN5USFEIIK3"
# # **Word embeddings**
# + [markdown] id="Q6mJg1g3apaz"
# Today we will introduce word embeddings. We will train our own word embeddings using a simple Keras model for a sentiment classification task, and then visualize them in the [Embedding Projector](http://projector.tensorflow.org) (shown in the image below).
#
# <img src="https://github.com/tensorflow/docs/blob/master/site/en/tutorials/text/images/embedding.jpg?raw=1" alt="Screenshot of the embedding projector" width="400"/>
#
# ## **Representing text as numbers**
#
# Machine learning models take vectors (arrays of numbers) as input. When working with text, the first thing you must do is come up with a strategy to convert strings to numbers (or to "vectorize" the text) before feeding it to the model. In this section, you will look at three strategies for doing so.
#
# ### **One-hot encodings**
#
# As a first idea, you might "one-hot" encode each word in your vocabulary. Consider the sentence "The cat sat on the mat". The vocabulary (or unique words) in this sentence is (cat, mat, on, sat, the). To represent each word, you will create a zero vector with length equal to the vocabulary, then place a one in the index that corresponds to the word. This approach is shown in the following diagram.
#
# <img src="https://github.com/tensorflow/docs/blob/master/site/en/tutorials/text/images/one-hot.png?raw=1" alt="Diagram of one-hot encodings" width="400" />
#
# To create a vector that contains the encoding of the sentence, you could then concatenate the one-hot vectors for each word.
#
# Key point: This approach is inefficient. A one-hot encoded vector is sparse (meaning, most indices are zero). Imagine you have 10,000 words in the vocabulary. To one-hot encode each word, you would create a vector where 99.99% of the elements are zero.
#
# ### Encode each word with a unique number
#
# A second approach you might try is to encode each word using a unique number. Continuing the example above, you could assign 1 to "cat", 2 to "mat", and so on. You could then encode the sentence "The cat sat on the mat" as a dense vector like [5, 1, 4, 3, 5, 2]. This approach is efficient. Instead of a sparse vector, you now have a dense one (where all elements are full).
#
# There are two downsides to this approach, however:
#
# * The integer-encoding is arbitrary (it does not capture any relationship between words).
#
# * An integer-encoding can be challenging for a model to interpret. A linear classifier, for example, learns a single weight for each feature. Because there is no relationship between the similarity of any two words and the similarity of their encodings, this feature-weight combination is not meaningful.
#
# ### **Word embeddings**
#
# Word embeddings give us a way to use an efficient, dense representation in which similar words have a similar encoding. Importantly, you do not have to specify this encoding by hand. An embedding is a dense vector of floating point values (the length of the vector is a parameter you specify). Instead of specifying the values for the embedding manually, they are trainable parameters (weights learned by the model during training, in the same way a model learns weights for a dense layer). It is common to see word embeddings that are 8-dimensional (for small datasets), up to 1024-dimensions when working with large datasets. A higher dimensional embedding can capture fine-grained relationships between words, but takes more data to learn.
#
# <img src="https://github.com/tensorflow/docs/blob/master/site/en/tutorials/text/images/embedding2.png?raw=1" alt="Diagram of an embedding" width="400"/>
#
# Above is a diagram for a word embedding. Each word is represented as a 4-dimensional vector of floating point values. Another way to think of an embedding is as "lookup table". After these weights have been learned, you can encode each word by looking up the dense vector it corresponds to in the table.
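The one-hot scheme described above can be sketched in a few lines (a toy illustration using the "cat sat on the mat" vocabulary; this snippet is not part of the tutorial itself):

```python
vocab = ['cat', 'mat', 'on', 'sat', 'the']

def one_hot(word, vocab):
    # zero vector with a single 1 at the word's vocabulary index
    vec = [0] * len(vocab)
    vec[vocab.index(word)] = 1
    return vec

sentence = "the cat sat on the mat".split()
encoded = [one_hot(w, vocab) for w in sentence]

assert one_hot('cat', vocab) == [1, 0, 0, 0, 0]
# one vector per word, each with exactly one non-zero entry
assert len(encoded) == 6 and all(sum(v) == 1 for v in encoded)
```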
# + [markdown] id="SZUQErGewZxE"
# ## **Setup**
# + id="RutaI-Tpev3T"
import io
import os
import re
import shutil
import string
import tensorflow as tf
from datetime import datetime
from tensorflow.keras import Model, Sequential
from tensorflow.keras.layers import Activation, Dense, Embedding, GlobalAveragePooling1D
from tensorflow.keras.layers.experimental.preprocessing import TextVectorization
# + [markdown] id="SBFctV8-JZOc"
# ### **Download the IMDb Dataset**
#
# You will use the [Large Movie Review Dataset](http://ai.stanford.edu/~amaas/data/sentiment/) through the tutorial. You will train a sentiment classifier model on this dataset and in the process learn embeddings from scratch. To read more about loading a dataset from scratch, see the
#
# Download the dataset using Keras file utility and take a look at the directories.
# + id="aPO4_UmfF0KH" colab={"base_uri": "https://localhost:8080/"} outputId="a62702f8-185f-4950-ce0f-520063fd1f0e"
url = "https://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz"
dataset = tf.keras.utils.get_file("aclImdb_v1.tar.gz", url,
untar=True, cache_dir='.',
cache_subdir='')
dataset_dir = os.path.join(os.path.dirname(dataset), 'aclImdb')
os.listdir(dataset_dir)
# + [markdown] id="eY6yROZNKvbd"
# Take a look at the `train/` directory. It has `pos` and `neg` folders with movie reviews labelled as positive and negative respectively. You will use reviews from `pos` and `neg` folders to train a binary classification model.
# + id="9-iOHJGN6SDu" colab={"base_uri": "https://localhost:8080/"} outputId="8c479632-1b5e-4b40-f4d0-3eabf8cdbb89"
train_dir = os.path.join(dataset_dir, 'train')
os.listdir(train_dir)
# + [markdown] id="9O59BdioK8jY"
# The `train` directory also has additional folders which should be removed before creating training dataset.
# + id="1_Vfi9oWMSh-"
remove_dir = os.path.join(train_dir, 'unsup')
shutil.rmtree(remove_dir)
# + [markdown] id="oFoJjiEyJz9u"
# Next, create a `tf.data.Dataset` using `tf.keras.preprocessing.text_dataset_from_directory`. You can read more about using this utility in this [text classification tutorial](https://www.tensorflow.org/tutorials/keras/text_classification).
#
# Use the `train` directory to create both train and validation datasets with a split of 20% for validation.
# + id="ItYD3TLkCOP1" colab={"base_uri": "https://localhost:8080/"} outputId="1f1101ad-2fdd-45fd-c85b-436f14bdb607"
batch_size = 1024
seed = 123
train_ds = tf.keras.preprocessing.text_dataset_from_directory(
'aclImdb/train', batch_size=batch_size, validation_split=0.2,
subset='training', seed=seed)
val_ds = tf.keras.preprocessing.text_dataset_from_directory(
'aclImdb/train', batch_size=batch_size, validation_split=0.2,
subset='validation', seed=seed)
# + [markdown] id="eHa6cq0-Ym0g"
# Take a look at a few movie reviews and their labels `(1: positive, 0: negative)` from the train dataset.
#
# + id="aTCbSkvkYmTT" colab={"base_uri": "https://localhost:8080/"} outputId="e52a2400-0cbe-48ad-e8d4-6e9058322ffd"
for text_batch, label_batch in train_ds.take(1):
for i in range(10):
print(label_batch[i].numpy(), text_batch.numpy()[i])
# + [markdown] id="FHV2pchDhzDn"
# ### **Configure the dataset for performance**
#
# These are two important methods you should use when loading data to make sure that I/O does not become blocking.
#
# `.cache()` keeps data in memory after it's loaded off disk. This will ensure the dataset does not become a bottleneck while training your model. If your dataset is too large to fit into memory, you can also use this method to create a performant on-disk cache, which is more efficient to read than many small files.
#
# `.prefetch()` overlaps data preprocessing and model execution while training.
#
# You can learn more about both methods, as well as how to cache data to disk in the [data performance guide](https://www.tensorflow.org/guide/data_performance).
# + id="Oz6k1IW7h1TO"
AUTOTUNE = tf.data.AUTOTUNE
train_ds = train_ds.cache().prefetch(buffer_size=AUTOTUNE)
val_ds = val_ds.cache().prefetch(buffer_size=AUTOTUNE)
# + colab={"base_uri": "https://localhost:8080/"} id="7cgn3fMenQfu" outputId="ee19d8e6-e212-4f5a-fef2-304d6f875385"
AUTOTUNE
# + [markdown] id="eqBazMiVQkj1"
# ## **Using the Embedding layer**
#
# Keras makes it easy to use word embeddings. Take a look at the [Embedding](https://www.tensorflow.org/api_docs/python/tf/keras/layers/Embedding) layer.
#
# The Embedding layer can be understood as a lookup table that maps from integer indices (which stand for specific words) to dense vectors (their embeddings). The dimensionality (or width) of the embedding is a parameter you can experiment with to see what works well for your problem, much in the same way you would experiment with the number of neurons in a Dense layer.
#
# + id="-OjxLVrMvWUE"
# Embed a 1,000 word vocabulary into 5 dimensions.
embedding_layer = tf.keras.layers.Embedding(1000, 5)
# + [markdown] id="2dKKV1L2Rk7e"
# When you create an Embedding layer, the weights for the embedding are randomly initialized (just like any other layer). During training, they are gradually adjusted via backpropagation. Once trained, the learned word embeddings will roughly encode similarities between words (as they were learned for the specific problem your model is trained on).
#
# If you pass a tensor of integers to an embedding layer, the result replaces each integer with the corresponding vector from the embedding table:
# + id="0YUjPgP7w0PO" colab={"base_uri": "https://localhost:8080/"} executionInfo={"status": "ok", "timestamp": 1615303948329, "user_tz": -300, "elapsed": 750, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GgL4Ps3wLTV9qu8u5BVZJ6Wrjwhej60xEUkbxyG1kE=s64", "userId": "13063659721300413814"}} outputId="87a56f22-8899-4271-8c4a-149a51fd5369"
result = embedding_layer(tf.constant([1,2,3]))
result.numpy()
# + [markdown] id="O4PC4QzsxTGx"
# For text or sequence problems, the Embedding layer takes a 2D tensor of integers, of shape `(samples, sequence_length)`, where each entry is a sequence of integers. It can embed sequences of variable lengths. You could feed into the embedding layer above batches with shapes `(32, 10)` (batch of 32 sequences of length 10) or `(64, 15)` (batch of 64 sequences of length 15).
#
# The returned tensor has one more axis than the input: the embedding vectors are aligned along the new last axis. Pass it a `(2, 3)` input batch and the output is `(2, 3, N)`.
#
# + id="vwSYepRjyRGy" colab={"base_uri": "https://localhost:8080/"} executionInfo={"status": "ok", "timestamp": 1615304094375, "user_tz": -300, "elapsed": 2692, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GgL4Ps3wLTV9qu8u5BVZJ6Wrjwhej60xEUkbxyG1kE=s64", "userId": "13063659721300413814"}} outputId="e1c52893-b8af-4d1f-dcfb-727fc7131954"
result = embedding_layer(tf.constant([[0,1,2],[3,4,5]]))
result.shape
# + [markdown] id="WGQp2N92yOyB"
# When given a batch of sequences as input, an embedding layer returns a 3D floating point tensor, of shape `(samples, sequence_length, embedding_dimensionality)`. To convert from this sequence of variable length to a fixed representation there are a variety of standard approaches. You could use an RNN, Attention, or pooling layer before passing it to a Dense layer. This tutorial uses pooling because it's the simplest.
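The pooling step can be illustrated with a minimal NumPy sketch (not the Keras layer itself): averaging over the sequence axis collapses a batch of embedded sequences into one fixed-length vector per sample.

```python
import numpy as np

# Toy "embedded" batch: 2 samples, sequence length 3, embedding dim 4.
batch = np.arange(24, dtype=float).reshape(2, 3, 4)

# GlobalAveragePooling1D averages over the sequence axis (axis=1),
# collapsing (samples, sequence_length, embedding_dim)
# down to (samples, embedding_dim).
pooled = batch.mean(axis=1)
print(pooled.shape)  # (2, 4)
```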
# + [markdown] id="aGicgV5qT0wh"
# ## **Text preprocessing**
# + [markdown] id="N6NZSqIIoU0Y"
# Next, define the dataset preprocessing steps required for your sentiment classification model. Initialize a TextVectorization layer with the desired parameters to vectorize movie reviews. You can learn more about using this layer in the [Text Classification](https://www.tensorflow.org/tutorials/keras/text_classification) tutorial.
# + id="2MlsXzo-ZlfK"
# Create a custom standardization function to strip HTML break tags '<br />'.
def custom_standardization(input_data):
lowercase = tf.strings.lower(input_data)
stripped_html = tf.strings.regex_replace(lowercase, '<br />', ' ')
return tf.strings.regex_replace(stripped_html,
'[%s]' % re.escape(string.punctuation), '')
# Vocabulary size and number of words in a sequence.
vocab_size = 10000
sequence_length = 100
# Use the text vectorization layer to normalize, split, and map strings to
# integers. Note that the layer uses the custom standardization defined above.
# Set an output sequence length, as samples are not all of the same length.
vectorize_layer = TextVectorization(
standardize=custom_standardization,
max_tokens=vocab_size,
output_mode='int',
output_sequence_length=sequence_length)
# Make a text-only dataset (no labels) and call adapt to build the vocabulary.
text_ds = train_ds.map(lambda x, y: x)
vectorize_layer.adapt(text_ds)
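For intuition, the same standardization can be sketched on a single Python string; the `tf.strings` ops above apply this logic elementwise to a string tensor.

```python
import re
import string

# Plain-Python sketch of what custom_standardization does to one string:
# lowercase, strip HTML break tags, then drop punctuation.
def standardize(text: str) -> str:
    lowercase = text.lower()
    stripped_html = lowercase.replace('<br />', ' ')
    return re.sub('[%s]' % re.escape(string.punctuation), '', stripped_html)

print(standardize("Great movie!<br />Loved it."))  # great movie loved it
```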
# + [markdown] id="zI9_wLIiWO8Z"
# ## **Create a classification model**
#
# Use the [Keras Sequential API](../../guide/keras) to define the sentiment classification model. In this case it is a "Continuous bag of words" style model.
# * The [`TextVectorization`](https://www.tensorflow.org/api_docs/python/tf/keras/layers/experimental/preprocessing/TextVectorization) layer transforms strings into vocabulary indices. You have already initialized `vectorize_layer` as a TextVectorization layer and built its vocabulary by calling `adapt` on `text_ds`. Now vectorize_layer can be used as the first layer of your end-to-end classification model, feeding transformed strings into the Embedding layer.
# * The [`Embedding`](https://www.tensorflow.org/api_docs/python/tf/keras/layers/Embedding) layer takes the integer-encoded vocabulary and looks up the embedding vector for each word-index. These vectors are learned as the model trains. The vectors add a dimension to the output array. The resulting dimensions are: `(batch, sequence, embedding)`.
#
# * The [`GlobalAveragePooling1D`](https://www.tensorflow.org/api_docs/python/tf/keras/layers/GlobalAveragePooling1D) layer returns a fixed-length output vector for each example by averaging over the sequence dimension. This allows the model to handle input of variable length, in the simplest way possible.
#
# * The fixed-length output vector is piped through a fully-connected ([`Dense`](https://www.tensorflow.org/api_docs/python/tf/keras/layers/Dense)) layer with 16 hidden units.
#
# * The last layer is densely connected with a single output node.
#
# Caution: This model doesn't use masking, so the zero-padding is used as part of the input and hence the padding length may affect the output. To fix this, see the [masking and padding guide](../../guide/keras/masking_and_padding).
# + id="pHLcFtn5Wsqj"
embedding_dim=16
model = Sequential([
vectorize_layer,
Embedding(vocab_size, embedding_dim, name="embedding"),
GlobalAveragePooling1D(),  # could instead be an RNN, CNN, or attention layer
Dense(16, activation='relu'),
Dense(1)
])
# + [markdown] id="JjLNgKO7W2fe"
# ## **Compile and train the model**
# + [markdown] id="jpX9etB6IOQd"
# You will use [TensorBoard](https://www.tensorflow.org/tensorboard) to visualize metrics including loss and accuracy. Create a `tf.keras.callbacks.TensorBoard`.
# + id="W4Hg3IHFt4Px"
tensorboard_callback = tf.keras.callbacks.TensorBoard(log_dir="logs")
# + [markdown] id="7OrKAKAKIbuH"
# Compile and train the model using the `Adam` optimizer and `BinaryCrossentropy` loss.
# + id="lCUgdP69Wzix"
model.compile(optimizer='adam',
loss=tf.keras.losses.BinaryCrossentropy(from_logits=True),
metrics=['accuracy'])
# + id="5mQehiQyv8rP" colab={"base_uri": "https://localhost:8080/"} executionInfo={"status": "ok", "timestamp": 1615304763055, "user_tz": -300, "elapsed": 34369, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GgL4Ps3wLTV9qu8u5BVZJ6Wrjwhej60xEUkbxyG1kE=s64", "userId": "13063659721300413814"}} outputId="bea244f2-1afa-4903-cd60-7ff8cac3ed50"
model.fit(
train_ds,
validation_data=val_ds,
epochs=20,
callbacks=[tensorboard_callback])
# + [markdown] id="1wYnVedSPfmX"
# With this approach the model reaches a validation accuracy of around 88% (note that the model is overfitting since training accuracy is higher).
#
# Note: Your results may be a bit different, depending on how weights were randomly initialized before training the embedding layer.
#
# You can look into the model summary to learn more about each layer of the model.
# + id="mDCgjWyq_0dc" colab={"base_uri": "https://localhost:8080/"} executionInfo={"status": "ok", "timestamp": 1615304781318, "user_tz": -300, "elapsed": 1137, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GgL4Ps3wLTV9qu8u5BVZJ6Wrjwhej60xEUkbxyG1kE=s64", "userId": "13063659721300413814"}} outputId="1b3c3ad0-8cd3-4218-830d-33113b4655a6"
model.summary()
# + [markdown] id="hiQbOJZ2WBFY"
# Visualize the model metrics in TensorBoard.
# + [markdown] id="KCoA6qwqP836"
# ## **Retrieve the trained word embeddings and save them to disk**
#
# Next, retrieve the word embeddings learned during training. The embeddings are weights of the Embedding layer in the model. The weights matrix is of shape `(vocab_size, embedding_dimension)`.
# + [markdown] id="Zp5rv01WG2YA"
# Obtain the weights from the model using `get_layer()` and `get_weights()`. The `get_vocabulary()` function provides the vocabulary to build a metadata file with one token per line.
# + id="_Uamp1YH8RzU" executionInfo={"status": "ok", "timestamp": 1615304909520, "user_tz": -300, "elapsed": 1414, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GgL4Ps3wLTV9qu8u5BVZJ6Wrjwhej60xEUkbxyG1kE=s64", "userId": "13063659721300413814"}}
weights = model.get_layer('embedding').get_weights()[0]
vocab = vectorize_layer.get_vocabulary()
# + [markdown] id="J8MiCA77X8B8"
# Write the weights to disk. To use the [Embedding Projector](http://projector.tensorflow.org), you will upload two files in tab separated format: a file of vectors (containing the embedding), and a file of meta data (containing the words).
# + id="VLIahl9s53XT" executionInfo={"status": "ok", "timestamp": 1615305023820, "user_tz": -300, "elapsed": 1439, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GgL4Ps3wLTV9qu8u5BVZJ6Wrjwhej60xEUkbxyG1kE=s64", "userId": "13063659721300413814"}}
out_v = io.open('vectors.tsv', 'w', encoding='utf-8')
out_m = io.open('metadata.tsv', 'w', encoding='utf-8')
for index, word in enumerate(vocab):
if index == 0: continue # skip 0, it's padding.
vec = weights[index]
out_v.write('\t'.join([str(x) for x in vec]) + "\n")
out_m.write(word + "\n")
out_v.close()
out_m.close()
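The nearest-neighbor lookup the Embedding Projector performs on these vectors boils down to cosine similarity between embedding rows; a plain-Python sketch:

```python
import math

# Cosine similarity: 1.0 for identical directions, 0.0 for orthogonal vectors.
def cosine_similarity(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

print(cosine_similarity([1.0, 2.0], [2.0, 4.0]))  # 1.0 (up to float error)
print(cosine_similarity([1.0, 0.0], [0.0, 1.0]))  # 0.0
```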
# + id="lUsjQOKMIV2z" colab={"base_uri": "https://localhost:8080/", "height": 17} executionInfo={"status": "ok", "timestamp": 1615305046117, "user_tz": -300, "elapsed": 1440, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GgL4Ps3wLTV9qu8u5BVZJ6Wrjwhej60xEUkbxyG1kE=s64", "userId": "13063659721300413814"}} outputId="f9cccdda-3a11-44fd-f8e5-d43dc26416d9"
try:
from google.colab import files
files.download('vectors.tsv')
files.download('metadata.tsv')
except Exception as e:
pass
# + [markdown] id="PXLfFA54Yz-o"
# ## Visualize the embeddings
#
# To visualize the embeddings, upload them to the embedding projector.
#
# Open the [Embedding Projector](http://projector.tensorflow.org/) (this can also run in a local TensorBoard instance).
#
# * Click on "Load data".
#
# * Upload the two files you created above: `vectors.tsv` and `metadata.tsv`.
#
# The embeddings you have trained will now be displayed. You can search for words to find their closest neighbors. For example, try searching for "beautiful". You may see neighbors like "wonderful".
#
# Note: Experimentally, you may be able to produce more interpretable embeddings by using a simpler model. Try deleting the `Dense(16)` layer, retraining the model, and visualizing the embeddings again.
#
# Note: Typically, a much larger dataset is needed to train more interpretable word embeddings. This tutorial uses a small IMDb dataset for the purpose of demonstration.
#
| 41_Word_Embeddings/41_Word_Embeddings.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Connected Components
#
# In this notebook, we will use cuGraph to compute weakly and strongly connected components of a graph and display some useful information about the resulting components.
#
# _Weakly connected component_ (WCC) is often a necessary pre-processing step for many graph algorithms. A dataset may contain several disconnected (sub)graphs. Quite often, running a graph algorithm on only one component of a disconnected graph can lead to bugs which are not easy to trace.
#
# _Strongly connected components_ (SCC) is used in the early stages of graph analysis to get an idea of a graph's structure.
#
#
#
#
# Notebook Credits
# * Original Authors: <NAME>
# * Created: 08/13/2019
# * Last Edit: 10/28/2019
#
# RAPIDS Versions: 0.10.0
#
# Test Hardware
#
# * GV100 32G, CUDA 10.0
#
#
#
# ## Introduction
#
# To compute WCC for a graph in cuGraph we use:
# **cugraph.weakly_connected_components(G)**
#
# To compute SCC for a graph in cuGraph we use:
# **cugraph.strongly_connected_components(G)**
#
# Both of these calls have an identical API:
#
# Input
# * __G__: cugraph.Graph object
#
# Returns
# * __df__: a cudf.DataFrame object with two columns:
# * df['labels'][i]: Gives the label id of the i'th vertex
# * df['vertices'][i]: Gives the vertex id of the i'th vertex
#
#
#
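To make the output concrete, here is a pure-Python sketch (not cuGraph code) of weakly-connected-component labeling using union-find; it produces the same kind of vertex-to-label mapping the cuGraph calls return.

```python
# Label each vertex with a representative of its connected component.
def connected_components(num_vertices, edges):
    parent = list(range(num_vertices))

    def find(v):                      # find root, compressing the path
        while parent[v] != v:
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v

    for u, v in edges:                # union the endpoints of each edge
        parent[find(u)] = find(v)

    return [find(v) for v in range(num_vertices)]

labels = connected_components(6, [(0, 1), (1, 2), (4, 5)])
# Vertices 0-2 share one label, 4-5 another, and 3 is an isolated component.
print(labels)
```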
# ## cuGraph Notice
# The current version of cuGraph has some limitations:
#
# * Vertex IDs need to be 32-bit integers.
# * Vertex IDs are expected to be contiguous integers starting from 0.
#
# cuGraph provides the renumber function to mitigate this problem. Input vertex IDs for the renumber function can be either 32-bit or 64-bit integers, can be non-contiguous, and can start from an arbitrary number. The renumber function maps the provided input vertex IDs to 32-bit contiguous integers starting from 0. cuGraph still requires the renumbered vertex IDs to be representable in 32-bit integers. These limitations are being addressed and will be fixed soon.
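The idea behind renumbering can be sketched in plain Python (a hedged illustration only; the actual cuGraph renumber function runs on the GPU and its API differs):

```python
# Map arbitrary vertex IDs to contiguous integers starting from 0,
# keeping the mapping so results can be translated back afterwards.
def renumber(src, dst):
    mapping = {}
    for v in src + dst:
        if v not in mapping:
            mapping[v] = len(mapping)     # assign the next contiguous id
    new_src = [mapping[v] for v in src]
    new_dst = [mapping[v] for v in dst]
    return new_src, new_dst, mapping

src, dst, mapping = renumber([100, 205, 100], [205, 999, 999])
print(src, dst)  # [0, 1, 0] [1, 2, 2]
```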
# ### Test Data
# We will be using the Netscience dataset :
# *<NAME>, Finding community structure in networks using the eigenvectors of matrices, Preprint physics/0605087 (2006)*
#
# The graph netscience contains a coauthorship network of scientists working on network theory and experiment. The version given here contains all components of the network, for a total of 1589 scientists, with the largest component containing 379 scientists.
#
# Netscience Adjacency Matrix |NetScience Strongly Connected Components
# :---------------------------------------------|------------------------------------------------------------:
#  | 
#
# Matrix plots above by <NAME>, AT&T Labs Visualization Group.
# Import needed libraries
import cugraph
import cudf
import numpy as np
from collections import OrderedDict
# ### 1. Read graph data from file
#
# cuGraph depends on cuDF for data loading and the initial Dataframe creation on the GPU.
#
# The data file contains an edge list, in which each entry represents the connection of one vertex to another. The source-destination pairs are in what is known as Coordinate Format (COO).
#
# In this test case, the data in the file is expressed in three columns: source, destination, and edge weight. While edge weight is relevant in other algorithms, the cuGraph connected-components calls do not make use of it, so that column can be discarded from the dataframe.
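As a minimal plain-Python illustration of COO format (not cuDF code), the graph is stored as two parallel columns, where edge `i` runs from `src[i]` to `dst[i]`:

```python
# A tiny graph in COO form: three edges over vertices 0, 1, 2.
src = [0, 0, 1]
dst = [1, 2, 2]

# Converting COO to an adjacency list makes the structure explicit.
adjacency = {}
for s, d in zip(src, dst):
    adjacency.setdefault(s, []).append(d)
print(adjacency)  # {0: [1, 2], 1: [2]}
```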
# +
# Test file
datafile='../data/netscience.csv'
# The datafile contains three columns, but we only want to use the first two.
# We will use the 'usecols' feature of read_csv to ignore the weight column.
gdf = cudf.read_csv(datafile, delimiter=' ', names=['src', 'dst', 'wgt'], dtype=['int32', 'int32', 'float32'], usecols=['src', 'dst'])
gdf.head()
# -
# ### 2. Create a Graph from an edge list
# create a Graph using the source (src) and destination (dst) vertex pairs from the Dataframe
G = cugraph.Graph()
G.from_cudf_edgelist(gdf, source='src', destination='dst')
# ### 3a. Call Weakly Connected Components
# Call cugraph.weakly_connected_components on the dataframe
df = cugraph.weakly_connected_components(G)
df.head()
# #### Get total number of weakly connected components
# +
# Use groupby on the 'labels' column of the WCC output to get the counts of each connected component label
label_gby = df.groupby('labels')
label_count = label_gby.count()
print("Total number of components found : ", len(label_count))
# -
# #### Get size of the largest weakly connected component
# Call nlargest on the groupby result to get the row where the component count is the largest
largest_component = label_count.nlargest(n = 1, columns = 'vertices')
print("Size of the largest component is found to be : ", largest_component['vertices'][0])
# #### Output vertex ids belonging to a weakly connected component label
# Query the connected component output to display vertex ids that belong to a component of interest
expr = "labels == 1"
component = df.query(expr)
print("Vertex Ids that belong to component label 1 : ")
print(component)
# ### 3b. Call Strongly Connected Components
# Call cugraph.strongly_connected_components on the dataframe
df = cugraph.strongly_connected_components(G)
df.head()
# #### Get total number of strongly connected components
# Use groupby on the 'labels' column of the SCC output to get the counts of each connected component label
label_gby = df.groupby('labels')
label_count = label_gby.count()
print("Total number of components found : ", len(label_count))
# #### Get size of the largest strongly connected component
# Call nlargest on the groupby result to get the row where the component count is the largest
largest_component = label_count.nlargest(n = 1, columns = 'vertices')
print("Size of the largest component is found to be : ", largest_component['vertices'][0])
# #### Output vertex ids belonging to a strongly connected component label
# Query the connected component output to display vertex ids that belong to a component of interest
expr = "labels == 2"
component = df.query(expr)
print("Vertex Ids that belong to component label 2 : ")
print(component)
# ### Conclusion
#
# The number of components found by **cugraph.weakly_connected_components(G)** and **cugraph.strongly_connected_components(G)** matches the results from <NAME>,
# Phys. Rev. E 64, 016132 (2001).
# ___
# Copyright (c) 2019, NVIDIA CORPORATION.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.
# ___
| cugraph/components/ConnectedComponents.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/Iryna-Lytvynchuk/Data_Science/blob/main/Hw5_1.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + colab={"base_uri": "https://localhost:8080/", "height": 428} id="F-14c6QmaaH2" outputId="199912c6-dc5c-49d2-8934-5e7f1b14fabf"
from scipy.interpolate import griddata
from scipy.optimize import minimize
import numpy as np
import matplotlib.pyplot as plt
x = np.linspace(1, 1000, 1000)
y = np.linspace(1, 1000, 1000)
X, Y = np.meshgrid(x, y)
px, py = np.random.choice(x, 1000), np.random.choice(y, 1000)
def f(arg):
x, y = arg
return x + 2 * y
z = griddata((px, py), f((px, py)), (X, Y), method='cubic')
plt.contour(x, y, z)
cons = ({'type': 'eq', 'fun': lambda x: x[0] * x[1] - 1000})
xbounds = (0, 1000)
ybounds = (0, 1000)
bounds = (xbounds, ybounds)
result = minimize(f, [1, 1000], bounds=bounds, constraints=cons)
print(result)
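As a sanity check on the result above, the constrained minimum can be derived analytically by substituting the constraint x*y = 1000 into the objective x + 2*y:

```python
import math

# Substitute x = 1000 / y into f = x + 2*y:
#   g(y) = 1000/y + 2*y,  g'(y) = -1000/y**2 + 2 = 0  =>  y = sqrt(500)
y_opt = math.sqrt(500)
x_opt = 1000 / y_opt
f_opt = x_opt + 2 * y_opt
print(x_opt, y_opt, f_opt)  # roughly 44.72, 22.36, 89.44
```

The numeric result reported by `minimize` should agree with these values to within solver tolerance.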
| Hw5_1.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import asyncio
import time
from indy import anoncreds, crypto, did, ledger, pool, wallet
from indy.error import IndyError, ErrorCode
import json
from typing import Optional
async def run():
print("Getting started -> started")
print("Open Pool Ledger")
# Set protocol version 2 to work with Indy Node 1.4
await pool.set_protocol_version(2)
pool_name = 'pool1'
pool_config = json.dumps({"genesis_txn": '/home/indy/sandbox/pool_transactions_genesis'})
await pool.create_pool_ledger_config(pool_name, pool_config)
pool_handle = await pool.open_pool_ledger(pool_name, None)
print("==============================")
print("=== Getting Trust Anchor credentials for Faber, Acme, Thrift and Government ==")
print("------------------------------")
print("\"Sovrin Steward\" -> Create wallet")
steward_wallet_name = 'sovrin_steward_wallet'
steward_wallet_credentials = json.dumps({"key": "steward_wallet_key"})
try:
await wallet.create_wallet(pool_name, steward_wallet_name, None, None, steward_wallet_credentials)
except IndyError as ex:
if ex.error_code == ErrorCode.WalletAlreadyExistsError:
pass
steward_wallet = await wallet.open_wallet(steward_wallet_name, None, steward_wallet_credentials)
print("\"Sovrin Steward\" -> Create and store in Wallet DID from seed")
steward_did_info = {'seed': '000000000000000000000000Steward1'}
(steward_did, steward_key) = await did.create_and_store_my_did(steward_wallet, json.dumps(steward_did_info))
print("==============================")
print("== Getting Trust Anchor credentials - Government Onboarding ==")
print("------------------------------")
government_wallet_name = 'government_wallet'
government_wallet_credentials = json.dumps({"key": "government_wallet_key"})
government_wallet, steward_government_key, government_steward_did, government_steward_key, _ \
= await onboarding(pool_handle, pool_name, "Sovrin Steward", steward_wallet,
steward_did, "Government", None, government_wallet_name, government_wallet_credentials)
print("==============================")
print("== Getting Trust Anchor credentials - Government getting Verinym ==")
print("------------------------------")
government_did = await get_verinym(pool_handle, "Sovrin Steward", steward_wallet, steward_did,
steward_government_key, "Government", government_wallet, government_steward_did,
government_steward_key, 'TRUST_ANCHOR')
print("==============================")
print("== Getting Trust Anchor credentials - Faber Onboarding ==")
print("------------------------------")
faber_wallet_name = 'faber_wallet'
faber_wallet_credentials = json.dumps({"key": "faber_wallet_key"})
faber_wallet, steward_faber_key, faber_steward_did, faber_steward_key, _ = \
await onboarding(pool_handle, pool_name, "Sovrin Steward", steward_wallet, steward_did,
"Faber", None, faber_wallet_name, faber_wallet_credentials)
print("==============================")
print("== Getting Trust Anchor credentials - Faber getting Verinym ==")
print("------------------------------")
faber_did = await get_verinym(pool_handle, "Sovrin Steward", steward_wallet, steward_did, steward_faber_key,
"Faber", faber_wallet, faber_steward_did, faber_steward_key, 'TRUST_ANCHOR')
print("==============================")
print("== Getting Trust Anchor credentials - Acme Onboarding ==")
print("------------------------------")
acme_wallet_name = 'acme_wallet'
acme_wallet_credentials = json.dumps({"key": "acme_wallet_key"})
acme_wallet, steward_acme_key, acme_steward_did, acme_steward_key, _ = \
await onboarding(pool_handle, pool_name, "Sovrin Steward", steward_wallet, steward_did,
"Acme", None, acme_wallet_name, acme_wallet_credentials)
print("==============================")
print("== Getting Trust Anchor credentials - Acme getting Verinym ==")
print("------------------------------")
acme_did = await get_verinym(pool_handle, "Sovrin Steward", steward_wallet, steward_did, steward_acme_key,
"Acme", acme_wallet, acme_steward_did, acme_steward_key, 'TRUST_ANCHOR')
print("==============================")
print("== Getting Trust Anchor credentials - Thrift Onboarding ==")
print("------------------------------")
thrift_wallet_name = 'thrift_wallet'
thrift_wallet_credentials = json.dumps({"key": "thrift_wallet_key"})
thrift_wallet, steward_thrift_key, thrift_steward_did, thrift_steward_key, _ = \
await onboarding(pool_handle, pool_name, "Sovrin Steward", steward_wallet, steward_did,
"Thrift", None, thrift_wallet_name, thrift_wallet_credentials)
print("==============================")
print("== Getting Trust Anchor credentials - Thrift getting Verinym ==")
print("------------------------------")
thrift_did = await get_verinym(pool_handle, "Sovrin Steward", steward_wallet, steward_did, steward_thrift_key,
"Thrift", thrift_wallet, thrift_steward_did, thrift_steward_key, 'TRUST_ANCHOR')
print("==============================")
print("=== Credential Schemas Setup ==")
print("------------------------------")
print("\"Government\" -> Create \"Job-Certificate\" Schema")
(job_certificate_schema_id, job_certificate_schema) = \
await anoncreds.issuer_create_schema(government_did, 'Job-Certificate', '0.2',
json.dumps(['first_name', 'last_name', 'salary', 'employee_status',
'experience']))
print("\"Government\" -> Send \"Job-Certificate\" Schema to Ledger")
await send_schema(pool_handle, government_wallet, government_did, job_certificate_schema)
print("\"Government\" -> Create \"Transcript\" Schema")
(transcript_schema_id, transcript_schema) = \
await anoncreds.issuer_create_schema(government_did, 'Transcript', '1.2',
json.dumps(['first_name', 'last_name', 'degree', 'status',
'year', 'average', 'ssn']))
print("\"Government\" -> Send \"Transcript\" Schema to Ledger")
await send_schema(pool_handle, government_wallet, government_did, transcript_schema)
print("==============================")
print("=== Faber Credential Definition Setup ==")
print("------------------------------")
print("\"Faber\" -> Get \"Transcript\" Schema from Ledger")
(_, transcript_schema) = await get_schema(pool_handle, faber_did, transcript_schema_id)
print("\"Faber\" -> Create and store in Wallet \"Faber Transcript\" Credential Definition")
(faber_transcript_cred_def_id, faber_transcript_cred_def_json) = \
await anoncreds.issuer_create_and_store_credential_def(faber_wallet, faber_did, transcript_schema,
'TAG1', 'CL', '{"support_revocation": false}')
print("\"Faber\" -> Send \"Faber Transcript\" Credential Definition to Ledger")
await send_cred_def(pool_handle, faber_wallet, faber_did, faber_transcript_cred_def_json)
print("==============================")
print("=== Acme Credential Definition Setup ==")
print("------------------------------")
print("\"Acme\" -> Get from Ledger \"Job-Certificate\" Schema")
(_, job_certificate_schema) = await get_schema(pool_handle, acme_did, job_certificate_schema_id)
print("\"Acme\" -> Create and store in Wallet \"Acme Job-Certificate\" Credential Definition")
(acme_job_certificate_cred_def_id, acme_job_certificate_cred_def_json) = \
await anoncreds.issuer_create_and_store_credential_def(acme_wallet, acme_did, job_certificate_schema,
'TAG1', 'CL', '{"support_revocation": false}')
print("\"Acme\" -> Send \"Acme Job-Certificate\" Credential Definition to Ledger")
await send_cred_def(pool_handle, acme_wallet, acme_did, acme_job_certificate_cred_def_json)
print("==============================")
print("=== Getting Transcript with Faber ==")
print("==============================")
print("== Getting Transcript with Faber - Onboarding ==")
print("------------------------------")
alice_wallet_name = 'alice_wallet'
alice_wallet_credentials = json.dumps({"key": "alice_wallet_key"})
alice_wallet, faber_alice_key, alice_faber_did, alice_faber_key, faber_alice_connection_response \
= await onboarding(pool_handle, pool_name, "Faber", faber_wallet, faber_did, "Alice", None,
alice_wallet_name, alice_wallet_credentials)
print("==============================")
print("== Getting Transcript with Faber - Getting Transcript Credential ==")
print("------------------------------")
print("\"Faber\" -> Create \"Transcript\" Credential Offer for Alice")
transcript_cred_offer_json = \
await anoncreds.issuer_create_credential_offer(faber_wallet, faber_transcript_cred_def_id)
print("\"Faber\" -> Get key for Alice did")
alice_faber_verkey = await did.key_for_did(pool_handle, faber_wallet, faber_alice_connection_response['did'])
print("\"Faber\" -> Authcrypt \"Transcript\" Credential Offer for Alice")
authcrypted_transcript_cred_offer = await crypto.auth_crypt(faber_wallet, faber_alice_key, alice_faber_verkey,
transcript_cred_offer_json.encode('utf-8'))
print("\"Faber\" -> Send authcrypted \"Transcript\" Credential Offer to Alice")
print("\"Alice\" -> Authdecrypted \"Transcript\" Credential Offer from Faber")
faber_alice_verkey, authdecrypted_transcript_cred_offer_json, authdecrypted_transcript_cred_offer = \
await auth_decrypt(alice_wallet, alice_faber_key, authcrypted_transcript_cred_offer)
print("\"Alice\" -> Create and store \"Alice\" Master Secret in Wallet")
alice_master_secret_id = await anoncreds.prover_create_master_secret(alice_wallet, None)
print("\"Alice\" -> Get \"Faber Transcript\" Credential Definition from Ledger")
(faber_transcript_cred_def_id, faber_transcript_cred_def) = \
await get_cred_def(pool_handle, alice_faber_did, authdecrypted_transcript_cred_offer['cred_def_id'])
print("\"Alice\" -> Create \"Transcript\" Credential Request for Faber")
(transcript_cred_request_json, transcript_cred_request_metadata_json) = \
await anoncreds.prover_create_credential_req(alice_wallet, alice_faber_did,
authdecrypted_transcript_cred_offer_json,
faber_transcript_cred_def, alice_master_secret_id)
print("\"Alice\" -> Authcrypt \"Transcript\" Credential Request for Faber")
authcrypted_transcript_cred_request = await crypto.auth_crypt(alice_wallet, alice_faber_key, faber_alice_verkey,
transcript_cred_request_json.encode('utf-8'))
print("\"Alice\" -> Send authcrypted \"Transcript\" Credential Request to Faber")
print("\"Faber\" -> Authdecrypt \"Transcript\" Credential Request from Alice")
alice_faber_verkey, authdecrypted_transcript_cred_request_json, _ = \
await auth_decrypt(faber_wallet, faber_alice_key, authcrypted_transcript_cred_request)
print("\"Faber\" -> Create \"Transcript\" Credential for Alice")
transcript_cred_values = json.dumps({
"first_name": {"raw": "Alice", "encoded": "1139481716457488690172217916278103335"},
"last_name": {"raw": "Garcia", "encoded": "5321642780241790123587902456789123452"},
"degree": {"raw": "Bachelor of Science, Marketing", "encoded": "12434523576212321"},
"status": {"raw": "graduated", "encoded": "2213454313412354"},
"ssn": {"raw": "123-45-6789", "encoded": "3124141231422543541"},
"year": {"raw": "2015", "encoded": "2015"},
"average": {"raw": "5", "encoded": "5"}
})
transcript_cred_json, _, _ = \
await anoncreds.issuer_create_credential(faber_wallet, transcript_cred_offer_json,
authdecrypted_transcript_cred_request_json,
transcript_cred_values, None, None)
print("\"Faber\" -> Authcrypt \"Transcript\" Credential for Alice")
authcrypted_transcript_cred_json = await crypto.auth_crypt(faber_wallet, faber_alice_key, alice_faber_verkey,
transcript_cred_json.encode('utf-8'))
print("\"Faber\" -> Send authcrypted \"Transcript\" Credential to Alice")
print("\"Alice\" -> Authdecrypted \"Transcript\" Credential from Faber")
_, authdecrypted_transcript_cred_json, _ = \
await auth_decrypt(alice_wallet, alice_faber_key, authcrypted_transcript_cred_json)
print("\"Alice\" -> Store \"Transcript\" Credential from Faber")
await anoncreds.prover_store_credential(alice_wallet, None, transcript_cred_request_metadata_json,
authdecrypted_transcript_cred_json, faber_transcript_cred_def, None)
print("==============================")
print("=== Apply for the job with Acme ==")
print("==============================")
print("== Apply for the job with Acme - Onboarding ==")
print("------------------------------")
alice_wallet, acme_alice_key, alice_acme_did, alice_acme_key, acme_alice_connection_response = \
await onboarding(pool_handle, pool_name, "Acme", acme_wallet, acme_did, "Alice", alice_wallet,
alice_wallet_name, alice_wallet_credentials)
print("==============================")
print("== Apply for the job with Acme - Transcript proving ==")
print("------------------------------")
print("\"Acme\" -> Create \"Job-Application\" Proof Request")
job_application_proof_request_json = json.dumps({
'nonce': '1432422343242122312411212',
'name': 'Job-Application',
'version': '0.1',
'requested_attributes': {
'attr1_referent': {
'name': 'first_name'
},
'attr2_referent': {
'name': 'last_name'
},
'attr3_referent': {
'name': 'degree',
'restrictions': [{'cred_def_id': faber_transcript_cred_def_id}]
},
'attr4_referent': {
'name': 'status',
'restrictions': [{'cred_def_id': faber_transcript_cred_def_id}]
},
'attr5_referent': {
'name': 'ssn',
'restrictions': [{'cred_def_id': faber_transcript_cred_def_id}]
},
'attr6_referent': {
'name': 'phone_number'
}
},
'requested_predicates': {
'predicate1_referent': {
'name': 'average',
'p_type': '>=',
'p_value': 4,
'restrictions': [{'cred_def_id': faber_transcript_cred_def_id}]
}
}
})
print("\"Acme\" -> Get key for Alice did")
alice_acme_verkey = await did.key_for_did(pool_handle, acme_wallet, acme_alice_connection_response['did'])
print("\"Acme\" -> Authcrypt \"Job-Application\" Proof Request for Alice")
authcrypted_job_application_proof_request_json = \
await crypto.auth_crypt(acme_wallet, acme_alice_key, alice_acme_verkey,
job_application_proof_request_json.encode('utf-8'))
print("\"Acme\" -> Send authcrypted \"Job-Application\" Proof Request to Alice")
print("\"Alice\" -> Authdecrypt \"Job-Application\" Proof Request from Acme")
acme_alice_verkey, authdecrypted_job_application_proof_request_json, _ = \
await auth_decrypt(alice_wallet, alice_acme_key, authcrypted_job_application_proof_request_json)
print("\"Alice\" -> Get credentials for \"Job-Application\" Proof Request")
creds_for_job_application_proof_request = json.loads(
await anoncreds.prover_get_credentials_for_proof_req(alice_wallet,
authdecrypted_job_application_proof_request_json))
print(creds_for_job_application_proof_request)
cred_for_attr1 = creds_for_job_application_proof_request['attrs']['attr1_referent'][0]['cred_info']
cred_for_attr2 = creds_for_job_application_proof_request['attrs']['attr2_referent'][0]['cred_info']
cred_for_attr3 = creds_for_job_application_proof_request['attrs']['attr3_referent'][0]['cred_info']
cred_for_attr4 = creds_for_job_application_proof_request['attrs']['attr4_referent'][0]['cred_info']
cred_for_attr5 = creds_for_job_application_proof_request['attrs']['attr5_referent'][0]['cred_info']
cred_for_predicate1 = creds_for_job_application_proof_request['predicates']['predicate1_referent'][0]['cred_info']
creds_for_job_application_proof = {cred_for_attr1['referent']: cred_for_attr1,
cred_for_attr2['referent']: cred_for_attr2,
cred_for_attr3['referent']: cred_for_attr3,
cred_for_attr4['referent']: cred_for_attr4,
cred_for_attr5['referent']: cred_for_attr5,
cred_for_predicate1['referent']: cred_for_predicate1}
schemas_json, cred_defs_json, revoc_states_json = \
await prover_get_entities_from_ledger(pool_handle, alice_faber_did, creds_for_job_application_proof, 'Alice')
print("\"Alice\" -> Create \"Job-Application\" Proof")
job_application_requested_creds_json = json.dumps({
'self_attested_attributes': {
'attr1_referent': 'Alice',
'attr2_referent': 'Garcia',
'attr6_referent': '123-45-6789'
},
'requested_attributes': {
'attr3_referent': {'cred_id': cred_for_attr3['referent'], 'revealed': True},
'attr4_referent': {'cred_id': cred_for_attr4['referent'], 'revealed': True},
'attr5_referent': {'cred_id': cred_for_attr5['referent'], 'revealed': True},
},
'requested_predicates': {'predicate1_referent': {'cred_id': cred_for_predicate1['referent']}}
})
job_application_proof_json = \
await anoncreds.prover_create_proof(alice_wallet, authdecrypted_job_application_proof_request_json,
job_application_requested_creds_json, alice_master_secret_id,
schemas_json, cred_defs_json, revoc_states_json)
print("\"Alice\" -> Authcrypt \"Job-Application\" Proof for Acme")
authcrypted_job_application_proof_json = await crypto.auth_crypt(alice_wallet, alice_acme_key, acme_alice_verkey,
job_application_proof_json.encode('utf-8'))
print("\"Alice\" -> Send authcrypted \"Job-Application\" Proof to Acme")
print("\"Acme\" -> Authdecrypted \"Job-Application\" Proof from Alice")
_, decrypted_job_application_proof_json, decrypted_job_application_proof = \
await auth_decrypt(acme_wallet, acme_alice_key, authcrypted_job_application_proof_json)
    print("\"Acme\" -> Get Schemas, Credential Definitions and Revocation Registries from Ledger"
          " required for Proof verifying")
schemas_json, cred_defs_json, revoc_ref_defs_json, revoc_regs_json = \
await verifier_get_entities_from_ledger(pool_handle, acme_did,
decrypted_job_application_proof['identifiers'], 'Acme')
print("\"Acme\" -> Verify \"Job-Application\" Proof from Alice")
assert 'Bachelor of Science, Marketing' == \
decrypted_job_application_proof['requested_proof']['revealed_attrs']['attr3_referent']['raw']
assert 'graduated' == \
decrypted_job_application_proof['requested_proof']['revealed_attrs']['attr4_referent']['raw']
assert '123-45-6789' == \
decrypted_job_application_proof['requested_proof']['revealed_attrs']['attr5_referent']['raw']
assert 'Alice' == decrypted_job_application_proof['requested_proof']['self_attested_attrs']['attr1_referent']
assert 'Garcia' == decrypted_job_application_proof['requested_proof']['self_attested_attrs']['attr2_referent']
assert '123-45-6789' == decrypted_job_application_proof['requested_proof']['self_attested_attrs']['attr6_referent']
assert await anoncreds.verifier_verify_proof(job_application_proof_request_json,
decrypted_job_application_proof_json,
schemas_json, cred_defs_json, revoc_ref_defs_json, revoc_regs_json)
print("==============================")
print("== Apply for the job with Acme - Getting Job-Certificate Credential ==")
print("------------------------------")
print("\"Acme\" -> Create \"Job-Certificate\" Credential Offer for Alice")
job_certificate_cred_offer_json = \
await anoncreds.issuer_create_credential_offer(acme_wallet, acme_job_certificate_cred_def_id)
print("\"Acme\" -> Get key for Alice did")
alice_acme_verkey = await did.key_for_did(pool_handle, acme_wallet, acme_alice_connection_response['did'])
print("\"Acme\" -> Authcrypt \"Job-Certificate\" Credential Offer for Alice")
authcrypted_job_certificate_cred_offer = await crypto.auth_crypt(acme_wallet, acme_alice_key, alice_acme_verkey,
job_certificate_cred_offer_json.encode('utf-8'))
print("\"Acme\" -> Send authcrypted \"Job-Certificate\" Credential Offer to Alice")
print("\"Alice\" -> Authdecrypted \"Job-Certificate\" Credential Offer from Acme")
acme_alice_verkey, authdecrypted_job_certificate_cred_offer_json, authdecrypted_job_certificate_cred_offer = \
await auth_decrypt(alice_wallet, alice_acme_key, authcrypted_job_certificate_cred_offer)
print("\"Alice\" -> Get \"Acme Job-Certificate\" Credential Definition from Ledger")
(_, acme_job_certificate_cred_def) = \
await get_cred_def(pool_handle, alice_acme_did, authdecrypted_job_certificate_cred_offer['cred_def_id'])
print("\"Alice\" -> Create and store in Wallet \"Job-Certificate\" Credential Request for Acme")
(job_certificate_cred_request_json, job_certificate_cred_request_metadata_json) = \
await anoncreds.prover_create_credential_req(alice_wallet, alice_acme_did,
authdecrypted_job_certificate_cred_offer_json,
acme_job_certificate_cred_def, alice_master_secret_id)
print("\"Alice\" -> Authcrypt \"Job-Certificate\" Credential Request for Acme")
authcrypted_job_certificate_cred_request_json = \
await crypto.auth_crypt(alice_wallet, alice_acme_key, acme_alice_verkey,
job_certificate_cred_request_json.encode('utf-8'))
print("\"Alice\" -> Send authcrypted \"Job-Certificate\" Credential Request to Acme")
print("\"Acme\" -> Authdecrypt \"Job-Certificate\" Credential Request from Alice")
alice_acme_verkey, authdecrypted_job_certificate_cred_request_json, _ = \
await auth_decrypt(acme_wallet, acme_alice_key, authcrypted_job_certificate_cred_request_json)
print("\"Acme\" -> Create \"Job-Certificate\" Credential for Alice")
alice_job_certificate_cred_values_json = json.dumps({
"first_name": {"raw": "Alice", "encoded": "245712572474217942457235975012103335"},
"last_name": {"raw": "Garcia", "encoded": "312643218496194691632153761283356127"},
"employee_status": {"raw": "Permanent", "encoded": "2143135425425143112321314321"},
"salary": {"raw": "2400", "encoded": "2400"},
"experience": {"raw": "10", "encoded": "10"}
})
job_certificate_cred_json, _, _ = \
await anoncreds.issuer_create_credential(acme_wallet, job_certificate_cred_offer_json,
authdecrypted_job_certificate_cred_request_json,
alice_job_certificate_cred_values_json, None, None)
print("\"Acme\" -> Authcrypt \"Job-Certificate\" Credential for Alice")
authcrypted_job_certificate_cred_json = \
await crypto.auth_crypt(acme_wallet, acme_alice_key, alice_acme_verkey,
job_certificate_cred_json.encode('utf-8'))
print("\"Acme\" -> Send authcrypted \"Job-Certificate\" Credential to Alice")
print("\"Alice\" -> Authdecrypted \"Job-Certificate\" Credential from Acme")
_, authdecrypted_job_certificate_cred_json, _ = \
await auth_decrypt(alice_wallet, alice_acme_key, authcrypted_job_certificate_cred_json)
print("\"Alice\" -> Store \"Job-Certificate\" Credential")
await anoncreds.prover_store_credential(alice_wallet, None, job_certificate_cred_request_metadata_json,
authdecrypted_job_certificate_cred_json,
                                            acme_job_certificate_cred_def, None)
print("==============================")
print("=== Apply for the loan with Thrift ==")
print("==============================")
print("== Apply for the loan with Thrift - Onboarding ==")
print("------------------------------")
_, thrift_alice_key, alice_thrift_did, alice_thrift_key, \
thrift_alice_connection_response = await onboarding(pool_handle, pool_name, "Thrift", thrift_wallet, thrift_did,
"Alice", alice_wallet, alice_wallet_name,
alice_wallet_credentials)
print("==============================")
print("== Apply for the loan with Thrift - Job-Certificate proving ==")
print("------------------------------")
print("\"Thrift\" -> Create \"Loan-Application-Basic\" Proof Request")
apply_loan_proof_request_json = json.dumps({
'nonce': '123432421212',
'name': 'Loan-Application-Basic',
'version': '0.1',
'requested_attributes': {
'attr1_referent': {
'name': 'employee_status',
'restrictions': [{'cred_def_id': acme_job_certificate_cred_def_id}]
}
},
'requested_predicates': {
'predicate1_referent': {
'name': 'salary',
'p_type': '>=',
'p_value': 2000,
'restrictions': [{'cred_def_id': acme_job_certificate_cred_def_id}]
},
'predicate2_referent': {
'name': 'experience',
'p_type': '>=',
'p_value': 1,
'restrictions': [{'cred_def_id': acme_job_certificate_cred_def_id}]
}
}
})
print("\"Thrift\" -> Get key for Alice did")
alice_thrift_verkey = await did.key_for_did(pool_handle, thrift_wallet, thrift_alice_connection_response['did'])
print("\"Thrift\" -> Authcrypt \"Loan-Application-Basic\" Proof Request for Alice")
authcrypted_apply_loan_proof_request_json = \
await crypto.auth_crypt(thrift_wallet, thrift_alice_key, alice_thrift_verkey,
apply_loan_proof_request_json.encode('utf-8'))
print("\"Thrift\" -> Send authcrypted \"Loan-Application-Basic\" Proof Request to Alice")
print("\"Alice\" -> Authdecrypt \"Loan-Application-Basic\" Proof Request from Thrift")
thrift_alice_verkey, authdecrypted_apply_loan_proof_request_json, _ = \
await auth_decrypt(alice_wallet, alice_thrift_key, authcrypted_apply_loan_proof_request_json)
print("\"Alice\" -> Get credentials for \"Loan-Application-Basic\" Proof Request")
creds_json_for_apply_loan_proof_request = \
await anoncreds.prover_get_credentials_for_proof_req(alice_wallet, authdecrypted_apply_loan_proof_request_json)
creds_for_apply_loan_proof_request = json.loads(creds_json_for_apply_loan_proof_request)
cred_for_attr1 = creds_for_apply_loan_proof_request['attrs']['attr1_referent'][0]['cred_info']
cred_for_predicate1 = creds_for_apply_loan_proof_request['predicates']['predicate1_referent'][0]['cred_info']
cred_for_predicate2 = creds_for_apply_loan_proof_request['predicates']['predicate2_referent'][0]['cred_info']
creds_for_apply_loan_proof = {cred_for_attr1['referent']: cred_for_attr1,
cred_for_predicate1['referent']: cred_for_predicate1,
cred_for_predicate2['referent']: cred_for_predicate2}
schemas_json, cred_defs_json, revoc_states_json = \
await prover_get_entities_from_ledger(pool_handle, alice_thrift_did, creds_for_apply_loan_proof, 'Alice')
print("\"Alice\" -> Create \"Loan-Application-Basic\" Proof")
apply_loan_requested_creds_json = json.dumps({
'self_attested_attributes': {},
'requested_attributes': {
'attr1_referent': {'cred_id': cred_for_attr1['referent'], 'revealed': True}
},
'requested_predicates': {
'predicate1_referent': {'cred_id': cred_for_predicate1['referent']},
'predicate2_referent': {'cred_id': cred_for_predicate2['referent']}
}
})
alice_apply_loan_proof_json = \
await anoncreds.prover_create_proof(alice_wallet, authdecrypted_apply_loan_proof_request_json,
apply_loan_requested_creds_json, alice_master_secret_id, schemas_json,
cred_defs_json, revoc_states_json)
print("\"Alice\" -> Authcrypt \"Loan-Application-Basic\" Proof for Thrift")
authcrypted_alice_apply_loan_proof_json = \
await crypto.auth_crypt(alice_wallet, alice_thrift_key, thrift_alice_verkey,
alice_apply_loan_proof_json.encode('utf-8'))
print("\"Alice\" -> Send authcrypted \"Loan-Application-Basic\" Proof to Thrift")
print("\"Thrift\" -> Authdecrypted \"Loan-Application-Basic\" Proof from Alice")
_, authdecrypted_alice_apply_loan_proof_json, authdecrypted_alice_apply_loan_proof = \
await auth_decrypt(thrift_wallet, thrift_alice_key, authcrypted_alice_apply_loan_proof_json)
print("\"Thrift\" -> Get Schemas, Credential Definitions and Revocation Registries from Ledger"
" required for Proof verifying")
schemas_json, cred_defs_json, revoc_defs_json, revoc_regs_json = \
await verifier_get_entities_from_ledger(pool_handle, thrift_did,
authdecrypted_alice_apply_loan_proof['identifiers'], 'Thrift')
print("\"Thrift\" -> Verify \"Loan-Application-Basic\" Proof from Alice")
assert 'Permanent' == \
authdecrypted_alice_apply_loan_proof['requested_proof']['revealed_attrs']['attr1_referent']['raw']
assert await anoncreds.verifier_verify_proof(apply_loan_proof_request_json,
authdecrypted_alice_apply_loan_proof_json,
schemas_json, cred_defs_json, revoc_defs_json, revoc_regs_json)
print("==============================")
print("== Apply for the loan with Thrift - Transcript and Job-Certificate proving ==")
print("------------------------------")
print("\"Thrift\" -> Create \"Loan-Application-KYC\" Proof Request")
apply_loan_kyc_proof_request_json = json.dumps({
'nonce': '123432421212',
'name': 'Loan-Application-KYC',
'version': '0.1',
'requested_attributes': {
'attr1_referent': {'name': 'first_name'},
'attr2_referent': {'name': 'last_name'},
'attr3_referent': {'name': 'ssn'}
},
'requested_predicates': {}
})
print("\"Thrift\" -> Get key for Alice did")
alice_thrift_verkey = await did.key_for_did(pool_handle, thrift_wallet, thrift_alice_connection_response['did'])
print("\"Thrift\" -> Authcrypt \"Loan-Application-KYC\" Proof Request for Alice")
authcrypted_apply_loan_kyc_proof_request_json = \
await crypto.auth_crypt(thrift_wallet, thrift_alice_key, alice_thrift_verkey,
apply_loan_kyc_proof_request_json.encode('utf-8'))
print("\"Thrift\" -> Send authcrypted \"Loan-Application-KYC\" Proof Request to Alice")
print("\"Alice\" -> Authdecrypt \"Loan-Application-KYC\" Proof Request from Thrift")
thrift_alice_verkey, authdecrypted_apply_loan_kyc_proof_request_json, _ = \
await auth_decrypt(alice_wallet, alice_thrift_key, authcrypted_apply_loan_kyc_proof_request_json)
print("\"Alice\" -> Get credentials for \"Loan-Application-KYC\" Proof Request")
creds_json_for_apply_loan_kyc_proof_request = \
await anoncreds.prover_get_credentials_for_proof_req(alice_wallet,
authdecrypted_apply_loan_kyc_proof_request_json)
creds_for_apply_loan_kyc_proof_request = json.loads(creds_json_for_apply_loan_kyc_proof_request)
cred_for_attr1 = creds_for_apply_loan_kyc_proof_request['attrs']['attr1_referent'][0]['cred_info']
cred_for_attr2 = creds_for_apply_loan_kyc_proof_request['attrs']['attr2_referent'][0]['cred_info']
cred_for_attr3 = creds_for_apply_loan_kyc_proof_request['attrs']['attr3_referent'][0]['cred_info']
creds_for_apply_loan_kyc_proof = {cred_for_attr1['referent']: cred_for_attr1,
cred_for_attr2['referent']: cred_for_attr2,
cred_for_attr3['referent']: cred_for_attr3}
schemas_json, cred_defs_json, revoc_states_json = \
await prover_get_entities_from_ledger(pool_handle, alice_thrift_did, creds_for_apply_loan_kyc_proof, 'Alice')
print("\"Alice\" -> Create \"Loan-Application-KYC\" Proof")
apply_loan_kyc_requested_creds_json = json.dumps({
'self_attested_attributes': {},
'requested_attributes': {
'attr1_referent': {'cred_id': cred_for_attr1['referent'], 'revealed': True},
'attr2_referent': {'cred_id': cred_for_attr2['referent'], 'revealed': True},
'attr3_referent': {'cred_id': cred_for_attr3['referent'], 'revealed': True}
},
'requested_predicates': {}
})
alice_apply_loan_kyc_proof_json = \
await anoncreds.prover_create_proof(alice_wallet, authdecrypted_apply_loan_kyc_proof_request_json,
apply_loan_kyc_requested_creds_json, alice_master_secret_id,
schemas_json, cred_defs_json, revoc_states_json)
print("\"Alice\" -> Authcrypt \"Loan-Application-KYC\" Proof for Thrift")
authcrypted_alice_apply_loan_kyc_proof_json = \
await crypto.auth_crypt(alice_wallet, alice_thrift_key, thrift_alice_verkey,
alice_apply_loan_kyc_proof_json.encode('utf-8'))
print("\"Alice\" -> Send authcrypted \"Loan-Application-KYC\" Proof to Thrift")
print("\"Thrift\" -> Authdecrypted \"Loan-Application-KYC\" Proof from Alice")
_, authdecrypted_alice_apply_loan_kyc_proof_json, authdecrypted_alice_apply_loan_kyc_proof = \
await auth_decrypt(thrift_wallet, thrift_alice_key, authcrypted_alice_apply_loan_kyc_proof_json)
print("\"Thrift\" -> Get Schemas, Credential Definitions and Revocation Registries from Ledger"
" required for Proof verifying")
schemas_json, cred_defs_json, revoc_defs_json, revoc_regs_json = \
await verifier_get_entities_from_ledger(pool_handle, thrift_did,
authdecrypted_alice_apply_loan_kyc_proof['identifiers'], 'Thrift')
print("\"Thrift\" -> Verify \"Loan-Application-KYC\" Proof from Alice")
assert 'Alice' == \
authdecrypted_alice_apply_loan_kyc_proof['requested_proof']['revealed_attrs']['attr1_referent']['raw']
assert 'Garcia' == \
authdecrypted_alice_apply_loan_kyc_proof['requested_proof']['revealed_attrs']['attr2_referent']['raw']
assert '123-45-6789' == \
authdecrypted_alice_apply_loan_kyc_proof['requested_proof']['revealed_attrs']['attr3_referent']['raw']
assert await anoncreds.verifier_verify_proof(apply_loan_kyc_proof_request_json,
authdecrypted_alice_apply_loan_kyc_proof_json,
schemas_json, cred_defs_json, revoc_defs_json, revoc_regs_json)
print("==============================")
print(" \"Sovrin Steward\" -> Close and Delete wallet")
await wallet.close_wallet(steward_wallet)
await wallet.delete_wallet(steward_wallet_name, steward_wallet_credentials)
print("\"Government\" -> Close and Delete wallet")
await wallet.close_wallet(government_wallet)
await wallet.delete_wallet(government_wallet_name, government_wallet_credentials)
print("\"Faber\" -> Close and Delete wallet")
await wallet.close_wallet(faber_wallet)
await wallet.delete_wallet(faber_wallet_name, faber_wallet_credentials)
print("\"Acme\" -> Close and Delete wallet")
await wallet.close_wallet(acme_wallet)
await wallet.delete_wallet(acme_wallet_name, acme_wallet_credentials)
print("\"Thrift\" -> Close and Delete wallet")
await wallet.close_wallet(thrift_wallet)
await wallet.delete_wallet(thrift_wallet_name, thrift_wallet_credentials)
print("\"Alice\" -> Close and Delete wallet")
await wallet.close_wallet(alice_wallet)
await wallet.delete_wallet(alice_wallet_name, alice_wallet_credentials)
print("Close and Delete pool")
await pool.close_pool_ledger(pool_handle)
await pool.delete_pool_ledger_config(pool_name)
print("Getting started -> done")
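# The "raw"/"encoded" pairs in the credential values above follow the anoncreds
# requirement that every attribute also be supplied as a decimal-integer string.
# This sample hardcodes the encoded values; a common community convention
# (an assumption here, not something this sample implements) is to pass
# integer-valued attributes through unchanged — which keeps range predicates
# such as the `average >= 4` check above working — and to map everything else
# to a big integer via SHA-256:

import hashlib


def encode_attribute(raw: str) -> str:
    # Integers are used as-is; all other values are hashed to an integer.
    try:
        int(raw)
        return raw
    except ValueError:
        digest = hashlib.sha256(raw.encode('utf-8')).digest()
        return str(int.from_bytes(digest, 'big'))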
async def onboarding(pool_handle, pool_name, _from, from_wallet, from_did, to,
to_wallet: Optional[str], to_wallet_name: str, to_wallet_credentials: str):
print("\"{}\" -> Create and store in Wallet \"{} {}\" DID".format(_from, _from, to))
(from_to_did, from_to_key) = await did.create_and_store_my_did(from_wallet, "{}")
print("\"{}\" -> Send Nym to Ledger for \"{} {}\" DID".format(_from, _from, to))
await send_nym(pool_handle, from_wallet, from_did, from_to_did, from_to_key, None)
print("\"{}\" -> Send connection request to {} with \"{} {}\" DID and nonce".format(_from, to, _from, to))
connection_request = {
'did': from_to_did,
'nonce': 123456789
}
if not to_wallet:
print("\"{}\" -> Create wallet".format(to))
try:
await wallet.create_wallet(pool_name, to_wallet_name, None, None, to_wallet_credentials)
except IndyError as ex:
            if ex.error_code == ErrorCode.WalletAlreadyExistsError:
pass
to_wallet = await wallet.open_wallet(to_wallet_name, None, to_wallet_credentials)
print("\"{}\" -> Create and store in Wallet \"{} {}\" DID".format(to, to, _from))
(to_from_did, to_from_key) = await did.create_and_store_my_did(to_wallet, "{}")
print("\"{}\" -> Get key for did from \"{}\" connection request".format(to, _from))
from_to_verkey = await did.key_for_did(pool_handle, to_wallet, connection_request['did'])
print("\"{}\" -> Anoncrypt connection response for \"{}\" with \"{} {}\" DID, verkey and nonce"
.format(to, _from, to, _from))
connection_response = json.dumps({
'did': to_from_did,
'verkey': to_from_key,
'nonce': connection_request['nonce']
})
anoncrypted_connection_response = await crypto.anon_crypt(from_to_verkey, connection_response.encode('utf-8'))
print("\"{}\" -> Send anoncrypted connection response to \"{}\"".format(to, _from))
print("\"{}\" -> Anondecrypt connection response from \"{}\"".format(_from, to))
decrypted_connection_response = \
json.loads((await crypto.anon_decrypt(from_wallet, from_to_key,
anoncrypted_connection_response)).decode("utf-8"))
    print("\"{}\" -> Authenticate \"{}\" by comparison of Nonce".format(_from, to))
assert connection_request['nonce'] == decrypted_connection_response['nonce']
print("\"{}\" -> Send Nym to Ledger for \"{} {}\" DID".format(_from, to, _from))
await send_nym(pool_handle, from_wallet, from_did, to_from_did, to_from_key, None)
return to_wallet, from_to_key, to_from_did, to_from_key, decrypted_connection_response
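# The onboarding exchange above authenticates the responder by echoing the
# connection-request nonce back inside the anoncrypted response. The sample
# hardcodes the nonce (123456789); in a real deployment it should be a fresh,
# unpredictable value per request so a captured response cannot be replayed.
# A minimal sketch of that challenge/response check (hypothetical helpers,
# not part of the indy API):

import secrets


def make_connection_request(did: str) -> dict:
    # Fresh random nonce per request.
    return {'did': did, 'nonce': secrets.randbelow(10 ** 18)}


def nonce_matches(request: dict, response: dict) -> bool:
    # The responder must echo the exact nonce it was challenged with.
    return request['nonce'] == response['nonce']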
async def get_verinym(pool_handle, _from, from_wallet, from_did, from_to_key,
to, to_wallet, to_from_did, to_from_key, role):
print("\"{}\" -> Create and store in Wallet \"{}\" new DID".format(to, to))
(to_did, to_key) = await did.create_and_store_my_did(to_wallet, "{}")
print("\"{}\" -> Authcrypt \"{} DID info\" for \"{}\"".format(to, to, _from))
did_info_json = json.dumps({
'did': to_did,
'verkey': to_key
})
authcrypted_did_info_json = \
await crypto.auth_crypt(to_wallet, to_from_key, from_to_key, did_info_json.encode('utf-8'))
print("\"{}\" -> Send authcrypted \"{} DID info\" to {}".format(to, to, _from))
print("\"{}\" -> Authdecrypted \"{} DID info\" from {}".format(_from, to, to))
sender_verkey, authdecrypted_did_info_json, authdecrypted_did_info = \
await auth_decrypt(from_wallet, from_to_key, authcrypted_did_info_json)
    print("\"{}\" -> Authenticate {} by comparison of Verkeys".format(_from, to))
assert sender_verkey == await did.key_for_did(pool_handle, from_wallet, to_from_did)
print("\"{}\" -> Send Nym to Ledger for \"{} DID\" with {} Role".format(_from, to, role))
await send_nym(pool_handle, from_wallet, from_did, authdecrypted_did_info['did'],
authdecrypted_did_info['verkey'], role)
return to_did
async def send_nym(pool_handle, wallet_handle, _did, new_did, new_key, role):
nym_request = await ledger.build_nym_request(_did, new_did, new_key, None, role)
await ledger.sign_and_submit_request(pool_handle, wallet_handle, _did, nym_request)
async def send_schema(pool_handle, wallet_handle, _did, schema):
schema_request = await ledger.build_schema_request(_did, schema)
await ledger.sign_and_submit_request(pool_handle, wallet_handle, _did, schema_request)
async def send_cred_def(pool_handle, wallet_handle, _did, cred_def_json):
cred_def_request = await ledger.build_cred_def_request(_did, cred_def_json)
await ledger.sign_and_submit_request(pool_handle, wallet_handle, _did, cred_def_request)
async def get_schema(pool_handle, _did, schema_id):
get_schema_request = await ledger.build_get_schema_request(_did, schema_id)
get_schema_response = await ledger.submit_request(pool_handle, get_schema_request)
return await ledger.parse_get_schema_response(get_schema_response)
async def get_cred_def(pool_handle, _did, cred_def_id):
    get_cred_def_request = await ledger.build_get_cred_def_request(_did, cred_def_id)
get_cred_def_response = await ledger.submit_request(pool_handle, get_cred_def_request)
return await ledger.parse_get_cred_def_response(get_cred_def_response)
async def prover_get_entities_from_ledger(pool_handle, _did, identifiers, actor):
schemas = {}
cred_defs = {}
rev_states = {}
for item in identifiers.values():
print("\"{}\" -> Get Schema from Ledger".format(actor))
(received_schema_id, received_schema) = await get_schema(pool_handle, _did, item['schema_id'])
schemas[received_schema_id] = json.loads(received_schema)
print("\"{}\" -> Get Credential Definition from Ledger".format(actor))
(received_cred_def_id, received_cred_def) = await get_cred_def(pool_handle, _did, item['cred_def_id'])
cred_defs[received_cred_def_id] = json.loads(received_cred_def)
if 'rev_reg_seq_no' in item:
pass # TODO Create Revocation States
return json.dumps(schemas), json.dumps(cred_defs), json.dumps(rev_states)
async def verifier_get_entities_from_ledger(pool_handle, _did, identifiers, actor):
schemas = {}
cred_defs = {}
rev_reg_defs = {}
rev_regs = {}
for item in identifiers:
print("\"{}\" -> Get Schema from Ledger".format(actor))
(received_schema_id, received_schema) = await get_schema(pool_handle, _did, item['schema_id'])
schemas[received_schema_id] = json.loads(received_schema)
print("\"{}\" -> Get Credential Definition from Ledger".format(actor))
(received_cred_def_id, received_cred_def) = await get_cred_def(pool_handle, _did, item['cred_def_id'])
cred_defs[received_cred_def_id] = json.loads(received_cred_def)
if 'rev_reg_seq_no' in item:
pass # TODO Get Revocation Definitions and Revocation Registries
return json.dumps(schemas), json.dumps(cred_defs), json.dumps(rev_reg_defs), json.dumps(rev_regs)
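# When building the requested-credentials JSON for prover_create_proof (as in
# the "Job-Application" proof above), every referent named in the proof request
# must be answered — either self-attested or backed by a stored credential.
# A small sanity check for that coverage invariant (a sketch, not part of the
# indy API):


def referents_covered(proof_request: dict, requested_creds: dict) -> bool:
    # Referents the verifier asked for.
    wanted = set(proof_request.get('requested_attributes', {})) | \
             set(proof_request.get('requested_predicates', {}))
    # Referents the prover is answering.
    supplied = set(requested_creds.get('self_attested_attributes', {})) | \
               set(requested_creds.get('requested_attributes', {})) | \
               set(requested_creds.get('requested_predicates', {}))
    return wanted == supplied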
async def auth_decrypt(wallet_handle, key, message):
    """Authdecrypt `message` and return the sender's verkey plus the payload as JSON string and dict."""
    from_verkey, decrypted_message_json = await crypto.auth_decrypt(wallet_handle, key, message)
    decrypted_message_json = decrypted_message_json.decode("utf-8")
    decrypted_message = json.loads(decrypted_message_json)
    return from_verkey, decrypted_message_json, decrypted_message
if __name__ == '__main__':
loop = asyncio.new_event_loop()
asyncio.set_event_loop(loop)
loop.run_until_complete(run())
time.sleep(1) # FIXME waiting for libindy thread complete
# -
| doc/getting-started/getting-started.ipynb |