# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
from pycaret.datasets import get_data
get_data('index')
data=get_data('boston')
from pycaret.regression import *
# setup(): configure the experiment; the first argument is the data matrix (a DataFrame / np.array-like), and target names the label column
s = setup(data, target='medv')
xgb = create_model('xgboost')
knn = create_model('knn')
knn_boosted = create_model('knn', ensemble=True, method='Boosting')
predict_model(xgb)
Estimator Abbreviated String Original Implementation
--------- ------------------ -----------------------
Linear Regression 'lr' linear_model.LinearRegression
Lasso Regression 'lasso' linear_model.Lasso
Ridge Regression 'ridge' linear_model.Ridge
Elastic Net 'en' linear_model.ElasticNet
Least Angle Regression 'lar' linear_model.Lars
Lasso Least Angle Regression 'llar' linear_model.LassoLars
Orthogonal Matching Pursuit 'omp' linear_model.OMP
Bayesian Ridge 'br' linear_model.BayesianRidge
Automatic Relevance Determ. 'ard' linear_model.ARDRegression
Passive Aggressive Regressor 'par' linear_model.PAR
Random Sample Consensus 'ransac' linear_model.RANSACRegressor
TheilSen Regressor 'tr' linear_model.TheilSenRegressor
Huber Regressor 'huber' linear_model.HuberRegressor
Kernel Ridge 'kr' kernel_ridge.KernelRidge
Support Vector Machine 'svm' svm.SVR
K Neighbors Regressor 'knn' neighbors.KNeighborsRegressor
Decision Tree 'dt' tree.DecisionTreeRegressor
Random Forest 'rf' ensemble.RandomForestRegressor
Extra Trees Regressor 'et' ensemble.ExtraTreesRegressor
AdaBoost Regressor 'ada' ensemble.AdaBoostRegressor
Gradient Boosting 'gbr' ensemble.GradientBoostingRegressor
Multi Layer Perceptron 'mlp' neural_network.MLPRegressor
Extreme Gradient Boosting 'xgboost' xgboost.readthedocs.io
Light Gradient Boosting 'lightgbm' github.com/microsoft/LightGBM
CatBoost Regressor 'catboost' https://catboost.ai
# compare all learners except the blacklisted ones and return the results
compare_models(blacklist=['tr','ransac'])
ct = create_model('catboost')
interpret_model(ct,plot='correlation')
evaluate_model(ct)
interpret_model(ct, plot='reason', observation=5)
Name Abbreviated String Original Implementation
--------- ------------------ -----------------------
Residuals Plot 'residuals' .. / residuals.html
Prediction Error Plot 'error' .. / peplot.html
Cooks Distance Plot 'cooks' .. / influence.html
Recursive Feat. Selection 'rfe' .. / rfecv.html
Learning Curve 'learning' .. / learning_curve.html
Validation Curve 'vc' .. / validation_curve.html
Manifold Learning 'manifold' .. / manifold.html
Feature Importance 'feature' N/A
Model Hyperparameter 'parameter' N/A
rf=create_model('rf')
# Feature Importance
plot_model(rf, plot = 'feature')
# Residuals Plot
plot_model(rf, plot = 'residuals')
dt = create_model('dt')
# Model Hyperparameters
plot_model(dt, plot = 'parameter')
# Learning Curve
plot_model(dt, plot = 'learning')
evaluate_model(dt)
interpret_model(dt)
interpret_model(dt,plot='correlation')
save_model(rf, model_name = 'E:/Machine Learning/rf_for_boston')
model=load_model(model_name = 'E:/Machine Learning/rf_for_boston')
# source notebook: pycaret-2020-5-9.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python [default]
# language: python
# name: python3
# ---
# ### Forum post: IoT, Deep Learning, Smart Factory, Smart City (smartbean.org forum)
# What is Monte Carlo simulation?
#
# a.k.a. the "measure by counting" method.
#
# First of all, Monte Carlo is the name of a district in the city-state of
# Monaco, with a population of about three thousand. It is a gambling town,
# which is why it is associated with probability.
#
# Look at the attached picture.
#
# It is about the ratio of two areas.
#
# We have a rectangle of height 1 and width 2,
# so its total area is 2.
#
# The top-center and bottom-center points are connected by a straight line.
# What, then, is the ratio of the left area to the right area?
#
# Since the area of a rectangle is width x height,
# the left area is 1 and the right area is 1,
# so the ratio is 1/1 = 1.
#
# -----
# Now look at the lower picture.
# The top-center and bottom-center points are again connected, but this time
# by a wiggly, meandering curve rather than a straight line.
#
# Now what is the ratio of the left area
# to the right area?
#
# Is there a mathematical formula for the left and right areas?
#
# No -- there is no formula for the area of a region bounded by a freehand
# wiggly curve.
#
# It looks like we cannot compute the ratio of the areas... and this is
# exactly where the Monte Carlo method comes in.
#
# Scatter points at random over the rectangle; for each point we can tell
# which region it landed in.
#
# If we actually count the points that fall in the left region and the
# points that fall in the right region, the ratio of the two counts is
# almost the same as the ratio of the two areas.
#
# With only 10 points, randomness can skew the counts toward one side and
# the error can be large, but if we draw ten million points and count them,
# the error becomes very small.
#
# Of course, someone has to draw those ten million points and count them
# all -- which is exactly the kind of job we can hand off to a computer.
#
# This approach is used for problems where no formula for the environment
# is known, where an approximate rather than exact solution is acceptable,
# or where an exhaustive search would face far too many cases.
#
# Inside AlphaGo, reinforcement learning (RL, Reinforcement Learning) is used.
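# The counting idea above can be sketched in a few lines of NumPy. This is a
# minimal, self-contained illustration (estimating pi from the quarter-circle
# area inside the unit square), not the rectangle example implemented below.

```python
import numpy as np

# Scatter random points in the unit square and count how many land inside
# the quarter circle of radius 1; that fraction approximates pi/4.
rng = np.random.default_rng(seed=0)
n = 1_000_000
x = rng.random(n)
y = rng.random(n)
inside = np.count_nonzero(x**2 + y**2 <= 1.0)
pi_estimate = 4 * inside / n
print(pi_estimate)  # close to 3.1416 for large n
```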
from IPython.display import Image
Image('image/monte carlo area.jpg')
import numpy as np
import matplotlib.pyplot as plt
# +
# 1 meter x 2 meters, scaled to cm units
height = 100
width = 200
area = np.zeros([height, width])
#print(np.shape(area))
#print(area)
area_line = width // 2
# -
# Build the dividing line: starting from the middle, shift it left or right by at most 1 per row and mark it in area.
for i in range(height):
area_line = area_line + np.random.randint(low=-1, high=2)
area_line = min(max(0, area_line), width-1)
#print(i, area_line)
area[i, area_line] = 1
#print(np.argmax(area[i]))
#print(area)
# +
# Alternatively, use a straight vertical line down the middle for sampling validation
#area[:, area_line] = 1
# -
imgplot = plt.imshow(area, cmap='gray')
plt.show()
area_left = 0
area_right = 0
area_sampling_count = height * width // 10 # how many samples to draw, relative to the total area?
for i in range(area_sampling_count):
sampling_height = np.random.randint(low=0, high=height)
sampling_width = np.random.randint(low=0, high=width)
#print(np.argmax(area[sampling_height]), sampling_height, sampling_width)
if (np.argmax(area[sampling_height]) > sampling_width):
area_left = area_left + 1
else:
area_right = area_right + 1
#print(i, "area_left : ", area_left, "area_right : ", area_right)
print(area_sampling_count, "sampling in left : ", area_left, "sampling in right : ", area_right)
print(area_sampling_count, area_left/area_sampling_count, area_right/area_sampling_count)
# +
# To make the two regions easier to distinguish, draw the vertical middle line
area_line = width // 2
area[:, area_line] = 1
imgplot = plt.imshow(area, cmap='gray')
plt.show()
# +
# Check the exact area values for comparison
area_prediction = 0
area_prediction_difference = 0
area_line = width // 2
for i in range(height):
area_prediction = area_prediction + np.argmax(area[i])
area_prediction_difference = area_prediction_difference + (area_line - np.argmax(area[i]))
print("left area : ", area_prediction)
#print(area_prediction_difference)
print("right area : ", (height * width - area_prediction))
# -
# source notebook: code/MonteCarloSimulation.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import json
import subprocess
import os
import pandas as pd
def filecount(dir_name):
return len([f for f in os.listdir(dir_name) if os.path.isfile(os.path.join(dir_name, f))])
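# As a quick, hypothetical sanity check of filecount (using a throwaway
# temporary directory; the helper is redeclared so the snippet runs on its own):

```python
import os
import tempfile

def filecount(dir_name):
    # count regular files (not subdirectories) directly inside dir_name
    return len([f for f in os.listdir(dir_name) if os.path.isfile(os.path.join(dir_name, f))])

with tempfile.TemporaryDirectory() as d:
    for name in ('a.txt', 'b.txt'):
        open(os.path.join(d, name), 'w').close()
    os.makedirs(os.path.join(d, 'subdir'))  # directories are not counted
    count = filecount(d)
print(count)  # -> 2
```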
#NOTE: CAN RUN THIS SCRIPT ON RUNS WITH EITHER 286 OR 321 LOAD SIZE
#NOTE: g2p trials ONLY
# # TO BE CHANGED --> trial, only for new FAKE image batches
# trial = 'train_12_29_20'
# csv_file_data = trial + '_FAKE.csv'
# numPackets_folder = '/home/tom_phelan_ext/Documents/microstructure_analysis/grains2packets/numPackets_FAKE/'
# result_img_suffix = 'fake_B.png'
# print("Processing " + trial + " FAKE packet images...")
# print()
# FOR RESULTS IMAGES THAT ARE REAL
trial = 'train_12_23_20'
csv_file_data = trial + '.csv'
numPackets_folder = '/home/tom_phelan_ext/Documents/microstructure_analysis/grains2packets/numPackets_results_real/'
result_img_suffix = 'real_B.png'
print("Processing " + trial + " REAL packet images...")
print()
file_data = pd.DataFrame(columns=["image name", "folder", ".csv file"])
# paths for pipeline runs and outputting image data; folder B in packets2blocks is the real block images
pipeline_file = '/home/tom_phelan_ext/gitCode/pix2pix/pytorch-CycleGAN-and-pix2pix/dream3d_pipelines/g2p_analysis_FAKE.json'
pipeline_runner = '/home/tom_phelan_ext/Programs/DREAM3D/bin/PipelineRunner'
# images come from results folder of trial
result_images = '/home/tom_phelan_ext/gitCode/pix2pix/pytorch-CycleGAN-and-pix2pix/results/' + trial + '/test_latest/images/'
output_csv_folder = numPackets_folder + trial + '/'
# create the output directories if they do not exist
if(not(os.path.exists(numPackets_folder))): os.makedirs(numPackets_folder)
if (not(os.path.exists(output_csv_folder))): os.makedirs(output_csv_folder)
print(output_csv_folder)
# subdirs are those listed within image_folder
imageList = os.listdir(result_images)
total_index = 1
startNumber = 0
numImages = filecount(result_images)
for i in range(startNumber, startNumber + numImages):
# REAL OUTLINE IMAGES --> then map to its respective fake image
real_image = ''
fake_image = ''
if (str(imageList[i]).find("real_A.png") != -1):
# get the real image & its corresponding fake image
real_image = str(imageList[i])
fake_image = str(imageList[i])[:-10] + result_img_suffix #this suffix changes based on real/fake result img
print("Real image: " + real_image)
print("Fake image: " + fake_image)
# pipeline details, output .csv file
with open(pipeline_file) as pipeline_json:
pipeline_json_data = json.load(pipeline_json)
# real & fake image names mapped to image readers
pipeline_json_data['00']['FileName'] = result_images + real_image
print(pipeline_json_data['00']['FileName'])
pipeline_json_data['04']['FileName'] = result_images + fake_image
print(pipeline_json_data['04']['FileName'])
pipeline_json_data['23']['OutputFilePath'] = output_csv_folder + str(total_index) + '.csv'
pipeline_json_data['23']['OutputPath'] = output_csv_folder + str(total_index) + '.csv'
with open(pipeline_file, 'w') as pipeline_json:
pipeline_json.write(json.dumps(pipeline_json_data, indent=4))
process_call = pipeline_runner + ' -p' + ' ' + pipeline_file
print('*********************************')
print('Running permutation {} of {}'.format(i + 1, numImages))
print('*********************************')
subprocess.call(process_call, shell=True)
# add to pandas dataFrame (.csv file later)
file_data_tuple = pd.DataFrame({"image name": real_image, "folder": 'results', ".csv file": str(total_index) + ".csv"}, index=[total_index])
print(file_data_tuple)
file_data = pd.concat([file_data, file_data_tuple])
total_index += 1
print(file_data.head())
# parse to .csv file with given parameters
file_data.to_csv(numPackets_folder + csv_file_data)
# source notebook: post_processing/g2p_analysis_FAKE.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] colab_type="text" id="rFiCyWQ-NC5D"
# # IMDB Subwords 8K with a 1D Convolutional Layer
# + colab_type="code" id="Y20Lud2ZMBhW" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="64be92dc-38c1-4f98-fa82-d491f3688444"
from __future__ import absolute_import, division, print_function, unicode_literals
import tensorflow_datasets as tfds
import tensorflow as tf
print(tf.__version__)
# + colab_type="code" id="AW-4Vo4TMUHb"
# Get the data
dataset, info = tfds.load('imdb_reviews/subwords8k', with_info=True, as_supervised=True)
train_dataset, test_dataset = dataset['train'], dataset['test']
# + colab_type="code" id="L11bIR6-PKvs" colab={}
tokenizer = info.features['text'].encoder
# + colab_type="code" id="ffvRUI0_McDS" colab={"base_uri": "https://localhost:8080/", "height": 141} outputId="e9db682f-59de-4a53-b341-3dc8e7b462b4"
BUFFER_SIZE = 10000
BATCH_SIZE = 64
train_dataset = train_dataset.shuffle(BUFFER_SIZE)
train_dataset = train_dataset.padded_batch(BATCH_SIZE, train_dataset.output_shapes)
test_dataset = test_dataset.padded_batch(BATCH_SIZE, test_dataset.output_shapes)
# + colab_type="code" id="jo1jjO3vn0jo" colab={}
model = tf.keras.Sequential([
tf.keras.layers.Embedding(tokenizer.vocab_size, 64),
tf.keras.layers.Conv1D(128, 5, activation='relu'),
tf.keras.layers.GlobalAveragePooling1D(),
tf.keras.layers.Dense(64, activation='relu'),
tf.keras.layers.Dense(1, activation='sigmoid')
])
# + colab_type="code" id="QKI5dfPgMioL" colab={"base_uri": "https://localhost:8080/", "height": 330} outputId="d69a0979-7bf6-422d-f4ef-c14e56ef279e"
model.summary()
# + colab_type="code" id="Uip7QOVzMoMq" colab={}
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
# + colab_type="code" id="7mlgzaRDMtF6" colab={"base_uri": "https://localhost:8080/", "height": 364} outputId="77739bb6-5248-42c0-cb40-ce2d5d49beae"
NUM_EPOCHS = 10
history = model.fit(train_dataset, epochs=NUM_EPOCHS, validation_data=test_dataset)
# + colab_type="code" id="Mp1Z7P9pYRSK" colab={}
import matplotlib.pyplot as plt
def plot_graphs(history, string):
plt.plot(history.history[string])
plt.plot(history.history['val_'+string])
plt.xlabel("Epochs")
plt.ylabel(string)
plt.legend([string, 'val_'+string])
plt.show()
# + colab_type="code" id="R_sX6ilIM515" colab={"base_uri": "https://localhost:8080/", "height": 282} outputId="c3cb987f-f8d6-4237-e0f1-6d00badda0b2"
plot_graphs(history, 'accuracy')
# + colab_type="code" id="RFEXtKtqNARB" colab={"base_uri": "https://localhost:8080/", "height": 282} outputId="f97d6778-bc17-4df4-903c-93d627b1ec2f"
plot_graphs(history, 'loss')
# source notebook: IMDB Movie Reviews/IMDB_subwords_8k_with_1D_conv_layer.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# rewritten from: https://github.com/JRussellHuffman/quantum-dice/blob/master/randomNumGenerator.py
import qiskit
from qiskit import *
from qiskit.tools.visualization import *
from qiskit.tools.monitor import job_monitor
# %matplotlib inline
with open('tocken.txt', 'r') as file:
myTocken = file.read().replace('\n', '')
IBMQ.save_account(myTocken,overwrite=True)
IBMQ.load_account()
qr = QuantumRegister(5)
cr = ClassicalRegister(5)
circuit = QuantumCircuit(qr,cr)
circuit.draw(output='mpl')
for x in range(0, 5):
circuit.h(qr[x])
circuit.draw(output='mpl')
circuit.measure(qr, cr)
circuit.draw(output='mpl')
simulator = Aer.get_backend('qasm_simulator')
shots = 1024
sim_job = execute(circuit, backend=simulator,shots=shots,memory=True)
sim_result = sim_job.result()
#to retrieve the results independently, instead of as a probability
sim_memory = sim_result.get_memory()
sim_outputArray = []
for x in range(0, shots):
converted = int(sim_memory[x], 2)
sim_outputArray.append(converted)
print(sim_outputArray)
plot_histogram(sim_result.get_counts(circuit))
provider = IBMQ.get_provider('ibm-q')
qcomp = provider.get_backend('ibmq_16_melbourne')
shots = 1024
q_job = execute(circuit, qcomp, shots = shots, memory=True)
job_monitor(q_job)
q_result = q_job.result()
q_memory = q_result.get_memory()
q_outputArray = []
for x in range(0, shots):
converted = int(q_memory[x], 2)
q_outputArray.append(converted)
print(q_outputArray)
print(sim_memory)
q_job.error_message()
# source notebook: archive/Quantum_random_number_generator.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] id="dL3lqte5DC6U"
# # **Genetic Algorithm Notebook**
#
# This code is an excerpt from David Ha's ESTool available at https://github.com/hardmaru/estool
#
# Assembled by:
# * <NAME> <<EMAIL>>, 11/2020
# + [markdown] id="YlUn-5bnA5jC"
# ## Genetic Algorithm Parameters
#
# + id="pYRvVXN_-E8A"
NPARAMS = 100 # make this a 100-dimensional problem.
NPOPULATION = 101 # use population size of 101.
MAX_ITERATION = 4000 # run each solver for 4000 generations.
# + [markdown] id="G7Wa1OuE-E7y"
# ## Genetic Algorithm Class
#
# + id="7oIJEEf2-spb"
class SimpleGA:
'''Simple Genetic Algorithm.'''
def __init__(self, num_params, # number of model parameters
sigma_init=0.1, # initial standard deviation
sigma_decay=0.999, # anneal standard deviation
sigma_limit=0.01, # stop annealing if less than this
popsize=256, # population size
elite_ratio=0.1, # percentage of the elites
forget_best=False, # forget the historical best elites
weight_decay=0.01, # weight decay coefficient
):
self.num_params = num_params
self.sigma_init = sigma_init
self.sigma_decay = sigma_decay
self.sigma_limit = sigma_limit
self.popsize = popsize
self.elite_ratio = elite_ratio
self.elite_popsize = int(self.popsize * self.elite_ratio)
self.sigma = self.sigma_init
self.elite_params = np.zeros((self.elite_popsize, self.num_params))
self.elite_rewards = np.zeros(self.elite_popsize)
self.best_param = np.zeros(self.num_params)
self.best_reward = 0
self.first_iteration = True
self.forget_best = forget_best
self.weight_decay = weight_decay
def rms_stdev(self):
return self.sigma # same sigma for all parameters.
def ask(self):
'''returns a list of parameters'''
self.epsilon = np.random.randn(self.popsize, self.num_params) * self.sigma
solutions = []
def mate(a, b):
c = np.copy(a)
idx = np.where(np.random.rand((c.size)) > 0.5)
c[idx] = b[idx]
return c
elite_range = range(self.elite_popsize)
for i in range(self.popsize):
idx_a = np.random.choice(elite_range)
idx_b = np.random.choice(elite_range)
child_params = mate(self.elite_params[idx_a], self.elite_params[idx_b])
solutions.append(child_params + self.epsilon[i])
solutions = np.array(solutions)
self.solutions = solutions
return solutions
def tell(self, reward_table_result):
# input must be a numpy float array
assert(len(reward_table_result) == self.popsize), "Inconsistent reward_table size reported."
reward_table = np.array(reward_table_result)
if self.weight_decay > 0:
l2_decay = compute_weight_decay(self.weight_decay, self.solutions)
reward_table += l2_decay
if self.forget_best or self.first_iteration:
reward = reward_table
solution = self.solutions
else:
reward = np.concatenate([reward_table, self.elite_rewards])
solution = np.concatenate([self.solutions, self.elite_params])
idx = np.argsort(reward)[::-1][0:self.elite_popsize]
self.elite_rewards = reward[idx]
self.elite_params = solution[idx]
self.curr_best_reward = self.elite_rewards[0]
if self.first_iteration or (self.curr_best_reward > self.best_reward):
self.first_iteration = False
self.best_reward = self.elite_rewards[0]
self.best_param = np.copy(self.elite_params[0])
if (self.sigma > self.sigma_limit):
self.sigma *= self.sigma_decay
def current_param(self):
return self.elite_params[0]
def set_mu(self, mu):
pass
def best_param(self):
return self.best_param
def result(self): # return best params so far, along with historically best reward, curr reward, sigma
return (self.best_param, self.best_reward, self.curr_best_reward, self.sigma)
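# Note that tell() above calls compute_weight_decay, which is not defined in
# this excerpt. In ESTool it is an L2 penalty returned as a per-individual
# reward adjustment; the sketch below follows that definition.

```python
import numpy as np

def compute_weight_decay(weight_decay, model_param_list):
    # L2 weight-decay penalty per individual, returned as a negative
    # reward adjustment (larger parameters -> lower adjusted reward)
    model_param_grid = np.array(model_param_list)
    return -weight_decay * np.mean(model_param_grid * model_param_grid, axis=1)
```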
# + id="iyz3ePmy-E70"
import numpy as np
import matplotlib.pyplot as plt
# + [markdown] id="bMCqwrVEBIVA"
# ## Fitness Function
# + id="nceKdSnv-E77"
# from https://github.com/CMA-ES/pycma/blob/master/cma/fitness_functions.py
def rastrigin(x):
"""Rastrigin test objective function, shifted by 10. units away from origin"""
x = np.copy(x)
x -= 10.0
if not np.isscalar(x[0]):
N = len(x[0])
return -np.array([10 * N + sum(xi**2 - 10 * np.cos(2 * np.pi * xi)) for xi in x])
N = len(x)
## Note: We are evaluating individuals and returning this value as a fitness
return -(10 * N + sum(x**2 - 10 * np.cos(2 * np.pi * x)))
# TODO: set our evaluation function to the definition of runSimulation
fit_func = rastrigin
# + [markdown] id="n_W2RyCVBCqN"
# ## Simulation
# + id="1op0Mbdv-E8C"
# Main Genetic Algorithm Loop - defines a function to use solver to solve fit_func
# This code is equivalent to def runSimulation
def test_solver(solver):
history = []
for j in range(MAX_ITERATION):
solutions = solver.ask()
fitness_list = np.zeros(solver.popsize)
for i in range(solver.popsize):
fitness_list[i] = fit_func(solutions[i])
solver.tell(fitness_list)
result = solver.result() # first element is the best solution, second element is the best fitness
history.append(result[1])
if (j+1) % 100 == 0:
print("fitness at iteration", (j+1), result[1])
print("local optimum discovered by solver:\n", result[0])
print("fitness score at this local optimum:", result[1])
return history
# + [markdown] id="4rmngbsmCQ4u"
# ## Run
# + [markdown] id="tEtg6EV7Vf3Q"
# This code is similar to the cell under Run in 'Maze Navigation.' In it, we define a search algorithm called SimpleGA, run the search, and save the results in ga_history. We save the results to this variable so that we can plot fitness over time. In the Maze Navigation code, we did not save our results but instead printed simulation output directly to the notebook's standard out.
# + id="T2IOnC8h-E8O"
import numpy as np
# defines genetic algorithm solver
ga = SimpleGA(NPARAMS, # number of model parameters
sigma_init=0.5, # initial standard deviation
popsize=NPOPULATION, # population size
elite_ratio=0.1, # percentage of the elites
forget_best=False, # forget the historical best elites
weight_decay=0.00, # weight decay coefficient
)
ga_history = test_solver(ga)
# + [markdown] id="3ef85cKaBhsZ"
# ## Plotting
# + id="qOuPYOoU-E8E"
x = np.zeros(NPARAMS) # 100-dimensional problem
print("This is F(0):")
print(rastrigin(x))
# + id="2gAwHS2s-E8H"
x = np.ones(NPARAMS)*10. # 100-dimensional problem
print(rastrigin(x))
print("global optimum point:\n", x)
# + id="fBe7tWXl-E8o"
# Create a new figure of size 16x8 inches, using 150 dots per inch
best_history = [0] * MAX_ITERATION
plt.figure(figsize=(16,8), dpi=150)
optimum_line, = plt.plot(best_history, color="black", linewidth=0.5, linestyle="-.", label='Global Optimum')
ga_line, = plt.plot(ga_history, color="green", linewidth=1.0, linestyle="-", label='GA')
#oes_line, = plt.plot(oes_history, color="orange", linewidth=1.0, linestyle="-", label='OpenAI-ES')
#pepg_line, = plt.plot(pepg_history, color="blue", linewidth=1.0, linestyle="-", label='PEPG / NES')
#cma_line, = plt.plot(cma_history, color="red", linewidth=1.0, linestyle="-", label='CMA-ES')
#plt.legend(handles=[optimum_line, ga_line, cma_line, pepg_line, oes_line], loc=4)
plt.legend(handles=[optimum_line, ga_line], loc=1)
# Set x limits
plt.xlim(0,2500)
plt.xlabel('generation')
plt.ylabel('fitness')
# plt.savefig("./rastrigin_10d.svg")
plt.show()
# source notebook: Genetic_Algorithm.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] id="Hx2fy51HkPyI" colab_type="text"
# ## Exploring Clusters in Data
#
# - Use an auto-encoder to reduce dimensions down to 2, make a scatter plot, and then use k-means clustering to color-map the clusters
# - PCA or t-SNE could be used for that, but an autoencoder can accomplish the task as well if its bottleneck layer has 2 neurons
# + [markdown] colab_type="text" id="kgdLqZi5sa4A"
# # Imports and Installs
# + id="eI_0n7zzumGV" colab_type="code" colab={}
# Imports
import pandas as pd
import numpy as np
import pandas_profiling
from sklearn import preprocessing # for category encoder
from sklearn.neighbors import NearestNeighbors
from sklearn.model_selection import train_test_split
# much more efficient for larger files like Nearest Neighbors which the model
import joblib
# + id="GuXor5ics6t4" colab_type="code" colab={}
# Read in data
df = pd.read_csv('https://raw.githubusercontent.com/Build-Week-Spotify-Song-Suggester-5/Data-Science/master/spotify_unique_track_id.csv')
df = df.dropna() # drop null values
# + id="Q0waFzIOxi99" colab_type="code" colab={}
df.shape
# + [markdown] id="TnJsmHPSs1EM" colab_type="text"
# ## Neural Network
#
# #### Preprocessing
# + id="jvw155X9swUe" colab_type="code" colab={}
time_sig_encoding = { '0/4' : 0, '1/4' : 1,
'3/4' : 3, '4/4' : 4,
'5/4' : 5}
key_encoding = { 'A' : 0, 'A#' : 1, 'B' : 2,
'C' : 3, 'C#' : 4, 'D' : 5,
'D#' : 6, 'E' : 7, 'F' : 8,
'F#' : 9, 'G' : 10, 'G#' : 11 }
mode_encoding = { 'Major':0, 'Minor':1}
df['key'] = df['key'].map(key_encoding)
df['time_signature'] = df['time_signature'].map(time_sig_encoding)
df['mode'] = df['mode'].map(mode_encoding)
# helper function to one hot encode genre
def encode_and_bind(original_dataframe, feature_to_encode):
dummies = pd.get_dummies(original_dataframe[[feature_to_encode]])
res = pd.concat([original_dataframe, dummies], axis=1)
return(res)
df = encode_and_bind(df, 'genre')
df = df.dropna() # drop nulls again: values missing from the encoding maps above become NaN
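# A toy illustration of what encode_and_bind produces (the helper is
# redeclared here so the snippet runs on its own; the toy frame is made up):

```python
import pandas as pd

def encode_and_bind(original_dataframe, feature_to_encode):
    # one-hot encode a column and append the dummy columns to the frame
    dummies = pd.get_dummies(original_dataframe[[feature_to_encode]])
    return pd.concat([original_dataframe, dummies], axis=1)

toy = pd.DataFrame({'genre': ['rock', 'jazz', 'rock'], 'tempo': [120, 90, 140]})
encoded = encode_and_bind(toy, 'genre')
print(list(encoded.columns))  # ['genre', 'tempo', 'genre_jazz', 'genre_rock']
```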
# + colab_type="code" id="2VusgmXCqSyp" colab={}
# check worked out
df.dtypes
# + [markdown] id="GXHnn1tmp36O" colab_type="text"
# ### Principal Component Analysis
#
#
# + id="NN7-7lg0qGyA" colab_type="code" colab={}
# make a copy to transform, in case you want to use the original later in the notebook
# (note: `features` is defined further down, in the Nearest Neighbors section)
df_copy = df[features].copy()
# + id="i4x_lbtQqJG4" colab_type="code" colab={}
from numpy import array, mean, std, cov
from numpy.linalg import eig
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
# + id="F5XtBVbMqNlq" colab_type="code" colab={}
means = df_copy.mean()
# print("\n Means:", means)
scaler = StandardScaler()
# + id="wM_p1rFQp8rl" colab_type="code" colab={}
pca_X = scaler.fit_transform(df_copy.values)
pca = PCA()
df_pca = pca.fit(pca_X)
# + id="AtrzYMpQqVHR" colab_type="code" colab={}
print(pca.explained_variance_)
# + id="_mVZ2-NdqZAs" colab_type="code" colab={}
# %time
pca_neigh = NearestNeighbors(n_neighbors=11)
pca_neigh.fit(pca_X) # NN doesn't need to fit Y
# + [markdown] colab_type="text" id="ChYSLt-vT_03"
# # MODELING: Nearest Neighbors
# resources: https://scikit-learn.org/stable/modules/neighbors.html
# + colab_type="code" id="Q4LqcfPcT-_y" colab={}
neigh = NearestNeighbors()
# + colab_type="code" id="GFtCGrpLT-j9" colab={}
# to remove the transformed columns from model
remove = ['key', 'mode','time_signature']
features = [i for i in list(df.columns[4:]) if i not in remove]
# target = 'track_id'
# + colab_type="code" id="dsJSuEj6Q1eT" colab={}
X = df[features]
# y = df[target]
X.shape
# + colab_type="code" id="D4kA2zQmORy3" colab={}
neigh.fit(X) # NN doesn't need to fit Y
# + [markdown] id="ftWHNeP3T4M3" colab_type="text"
# ### Vectorize Data
# + [markdown] id="QNSNGvqAy-O0" colab_type="text"
# ### Autoencoder
# + [markdown] id="38aCXICFoz55" colab_type="text"
# # Export Model with Joblib
# + id="4SBVgLGvn8hS" colab_type="code" colab={}
filename = 'NearestNeighbor.sav'
# + id="kLfmVJiKoN8X" colab_type="code" colab={}
joblib.dump(neigh, filename)
# source notebook: .ipynb_checkpoints/Cluster_Mapping_Exploration-checkpoint.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Spatial Point Sorter Algorithm Description
#
# Given a list of (x, y, z) irregular points in 3-d space (i.e. there is no rotational symmetry in the points), and an equivalent list of points that has been re-ordered and then transformed by some rigid body transformation (an SO3 or SE3 group transformation), this algorithm will return the two lists of points with the second list ordered the same way as the first.
#
# This tool is used as a first step in finding the transformation between the co-ordinate systems used by different measurement tools.
#
# ## The Algorithm
#
# For every point in each set, create vectors from that point to the remaining points. Take the dot product of every pair of those vectors, sort the results, and put them into a list. This list is the fingerprint of the point. This is repeated for every point in each set. The "fingerprints" of the points in one set are subtracted from those in the other, the squared differences are summed, and the minimum value gives the correspondence between the two datasets.
#
# Given a set of points $\mathbf{a}_1, \mathbf{a}_2, \mathbf{a}_3,...$, calculate the values of the matrix below, and then flatten and sort the result:
#
# $$
# \begin {eqnarray}
# \mathbf{v}_{ij} & = & \mathbf{a}_j - \mathbf{a}_i \\
# F_k & = & \begin{bmatrix}
# 0 & \mathbf{v}_{12} \cdot \mathbf{v}_{13} & \mathbf{v}_{12} \cdot \mathbf{v}_{14} & \dots & \mathbf{v}_{12} \cdot \mathbf{v}_{1j} \\
# 0 & 0 & \mathbf{v}_{23} \cdot \mathbf{v}_{24} & \dots & \mathbf{v}_{23} \cdot \mathbf{v}_{2j} \\
# \vdots & \vdots & \vdots & \ddots & \vdots \\
# 0 & 0 & 0 & \dots & \mathbf{v}_{i-1j-1} \cdot \mathbf{v}_{i-1j}
# \end{bmatrix}
# \end {eqnarray}
# $$
#
# Repeat this exercise for every point in each dataset to generate the fingerprint for each point. Then subtract each fingerprint from the fingerprint of every point in the other dataset, sum the square of the difference, and find the minimum value to choose the correspondence.
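# The description above can be sketched directly in NumPy. The function names
# (point_fingerprint, match_points) and the rigid-body test transform are
# illustrative choices, not from the source.

```python
import numpy as np

def point_fingerprint(points, i):
    # vectors from point i to every other point
    v = np.delete(points, i, axis=0) - points[i]
    dots = v @ v.T                       # all pairwise dot products
    iu = np.triu_indices(len(v), k=1)    # each unordered pair once
    return np.sort(dots[iu])             # sorted -> order-independent

def match_points(a, b):
    # for each point of a, the index into b of the best-matching point
    fa = np.array([point_fingerprint(a, i) for i in range(len(a))])
    fb = np.array([point_fingerprint(b, j) for j in range(len(b))])
    cost = ((fa[:, None, :] - fb[None, :, :]) ** 2).sum(axis=2)
    return cost.argmin(axis=1)
```

# Dot products are invariant under rotation, and the translation cancels when
# differencing points, so the fingerprint survives any rigid-body (SE3)
# transform; sorting makes it independent of how the other points are ordered.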
# source notebook: doc/SpatialPointSorterAlgorithm.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/bwsi-hadr/08-Graph-Optimization-TSP/blob/master/08_Graph_Optimization_Problems_TSP.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + id="f0F8I8wnm5gX" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 1000} outputId="a4c4b069-041a-4636-cfd5-40c61263f83f"
import networkx as nx
try:
import osmnx as ox
except:
# osmnx depends on the system package libspatialindex
# !apt install libspatialindex-dev
# !pip install osmnx
import osmnx as ox
try:
import geopandas as gpd
except:
# !pip install geopandas
import geopandas as gpd
try:
import contextily as ctx
except:
# install dependencies for contextily
# !apt install libproj-dev proj-data proj-bin
# !apt install libgeos-dev
# !pip install cython
# !pip install cartopy
# install contextily
# !pip install contextily==1.0rc1 --no-use-pep517 --no-cache-dir
import contextily as ctx
import fiona
from shapely.geometry import Point, LineString, Polygon
import gdal
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import pathlib
# + id="M3ViLfABnRgS" colab_type="code" colab={}
# + [markdown] id="ejhrKBqPng9F" colab_type="text"
# # Traveling Salesman Problem
# The canonical Traveling Salesman Problem is stated as:
# > "Given a list of cities and the distances between each pair of cities, what is the shortest possible route that visits each city and returns to the origin city?"
#
# This is generalizable to finding the shortest [Hamiltonian cycle](http://mathworld.wolfram.com/HamiltonianCycle.html) on a complete graph (i.e. a graph with an edge between every pair of nodes).
#
# This problem is [NP-hard](https://en.wikipedia.org/wiki/P_versus_NP_problem), meaning no algorithm is known that solves every instance quickly (i.e. in polynomial time), and none is believed to exist. However, many approximate and heuristic approaches can give reasonable solutions in much less time.
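# One common heuristic family is greedy construction. A sketch of the nearest-neighbor heuristic on a toy distance matrix (it builds a reasonable tour quickly, but not the optimal one in general):

```python
def nearest_neighbor_tour(dist, start=0):
    """Greedy TSP heuristic: always travel to the closest unvisited city."""
    n = len(dist)
    tour = [start]
    unvisited = set(range(n)) - {start}
    while unvisited:
        here = tour[-1]
        tour.append(min(unvisited, key=lambda c: dist[here][c]))
        unvisited.remove(tour[-1])
    return tour

# toy symmetric distance matrix between 4 cities
dist = [[0, 2, 9, 10],
        [2, 0, 6, 4],
        [9, 6, 0, 3],
        [10, 4, 3, 0]]
tour = nearest_neighbor_tour(dist)
length = sum(dist[a][b] for a, b in zip(tour, tour[1:] + tour[:1]))
print(tour, length)  # [0, 1, 3, 2] 18
```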
# + id="C6JmtPFto13K" colab_type="code" colab={}
place_name = 'New York City, NY, United States'
place_roads = ox.graph_from_place(place_name)
# + id="k5W0RTsawXlE" colab_type="code" colab={}
place_roads_nodes, place_roads_edges = ox.graph_to_gdfs(place_roads)
# + id="Hnp2qXbMss49" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 530} outputId="08d8d5d5-600f-482e-9623-e7691cf4e06c"
fig = plt.figure(figsize=[10,10])
ax = fig.add_subplot(1,1,1)
place_roads_edges.plot(ax=ax, color=[0.8, 0.8, 0.8], alpha=0.5)
# + [markdown] id="IoiI3SS1pIIs" colab_type="text"
# Let's say you wanted to do an ice cream crawl: visit every ice cream shop in a city. What is the shortest route that takes you to every shop and brings you back to your starting point?
# + id="tAQK62C7pB7V" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 779} outputId="8323c432-4da4-4e10-bd43-8a9534da31da"
place_ice_cream = ox.pois_from_place(place_name, amenities=['ice_cream'])
place_ice_cream
# + id="DSEkty3Lr0AX" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 69} outputId="5fbf5594-6005-4b2c-93a2-0854624c22bb"
ice_cream_nodes = ox.get_nearest_nodes(place_roads, place_ice_cream.geometry.x, place_ice_cream.geometry.y)
ice_cream_nodes
# + [markdown] id="7vjZXdo4wwx0" colab_type="text"
# ## Exercise
# Plot the locations of the ice cream shops on the map of the roads
# + id="wUuhBee2w9Bd" colab_type="code" colab={}
# + [markdown] id="qF1NSIerw-j4" colab_type="text"
# ## Compute shortest path matrix
# + id="q2c8QobPsq8e" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 399} outputId="86cd1497-74cc-4b3e-f8e2-9d1155adecd3"
shortest_path_matrix = np.zeros([len(ice_cream_nodes),len(ice_cream_nodes)])
for idx_i, orig in enumerate(ice_cream_nodes):
for idx_j, dest in enumerate(ice_cream_nodes):
shortest_path_matrix[idx_i, idx_j] = nx.shortest_path_length(place_roads, orig, dest, weight='length')
shortest_path_matrix
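# A note on the loop above: calling `nx.shortest_path_length` with both endpoints runs a full search per pair, so the matrix costs O(n²) searches. One single-source Dijkstra per origin already yields distances to every node. A toy-graph sketch of that pattern:

```python
import networkx as nx

# toy road graph; the edge attribute name matches the notebook's weight='length'
G = nx.Graph()
G.add_weighted_edges_from([(0, 1, 1.0), (1, 2, 2.0), (0, 2, 5.0)], weight='length')
nodes = [0, 2]

# one single-source Dijkstra per origin instead of one query per (origin, destination) pair
matrix = [[nx.single_source_dijkstra_path_length(G, o, weight='length')[d] for d in nodes]
          for o in nodes]
print(matrix)  # [[0, 3.0], [3.0, 0]]
```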
# + id="Dx6x4175wv90" colab_type="code" colab={}
ice_cream_graph = nx.from_numpy_array(shortest_path_matrix, create_using=nx.MultiDiGraph)  # from_numpy_matrix was removed in NetworkX 3.0
# + id="QXZedXvnzGX3" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="6927e94d-aa98-453a-f714-1cfdf650f7a7"
# new graph indexes from 0
ice_cream_graph.nodes
# + id="rUufs51xteVN" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="75cdee08-8b22-4f0f-a9da-ab1dfb64676a"
# rename node labels using original labels
ice_cream_graph = nx.relabel_nodes(ice_cream_graph,{k:v for k, v in zip(ice_cream_graph.nodes, ice_cream_nodes)})
ice_cream_graph.nodes
# + [markdown] id="IkkkXc4YzRX4" colab_type="text"
# ## Exercise
# Find the best TSP path you can
# + id="s5iJS8jbyBFu" colab_type="code" colab={}
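# One common way to attack this exercise is 2-opt: start from any tour and keep reversing segments while that shortens the closed tour. A sketch on four toy points (the initial tour crosses itself; 2-opt uncrosses it):

```python
import math

pts = [(0, 0), (0, 1), (1, 0), (1, 1)]
dist = [[math.dist(p, q) for q in pts] for p in pts]

def two_opt(tour, dist):
    """Repeatedly reverse segments while doing so shortens the closed tour."""
    improved = True
    while improved:
        improved = False
        for i in range(1, len(tour) - 1):
            for j in range(i + 1, len(tour)):
                a, b = tour[i - 1], tour[i]
                c, d = tour[j], tour[(j + 1) % len(tour)]
                # reversing tour[i:j+1] replaces edges (a,b),(c,d) with (a,c),(b,d)
                if dist[a][c] + dist[b][d] < dist[a][b] + dist[c][d]:
                    tour[i:j + 1] = reversed(tour[i:j + 1])
                    improved = True
    return tour

tour = two_opt([0, 1, 2, 3], dist)
length = sum(dist[a][b] for a, b in zip(tour, tour[1:] + tour[:1]))
print(tour, length)  # [0, 1, 3, 2] 4.0
```

# For the ice cream graph, the same function works with `dist` replaced by the shortest-path matrix computed earlier.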
| 08_Graph_Optimization_Problems_TSP.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import pandas as pd
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
my_best = pd.read_csv('ensemble/test3/combine_submission_20190131_173241_cv3.645163.csv')
hyeonwoo_best = pd.read_csv('ensemble/test3/v25_3.645947369790411.csv')
my_best['target'].corr(hyeonwoo_best['target'])
my_best['target'] = my_best['target']*0.4 + hyeonwoo_best['target']*0.6
my_best.to_csv('submission_hyeonwoo683_my684_0.6_0.4.csv',index=False)
# previous one
my_best = pd.read_csv('ensemble/test1/combine_submission_20190122_162808_cv3.644641.csv')
hyeonwoo_best = pd.read_csv('ensemble/test1/v25_3.6480581680117834.csv')
my_best['target'].corr(hyeonwoo_best['target'])
hyeonwoo_best.columns = ['card_id','h_target']
my_best = my_best.merge(hyeonwoo_best,on='card_id',how='left')
my_best['diff'] = my_best['target'] - my_best['h_target']
my_best.sort_values('diff')
my_best['target'] = my_best['target']*0.7 + my_best['h_target']*0.3  # hyeonwoo_best's column was renamed to h_target above
my_best.to_csv('submission.csv',index=False)
# 2019/01/28
my_best2 = pd.read_csv('ensemble/test2/combine_submission_20190122_162808_cv3.644641.csv')
hyeonwoo_best2 = pd.read_csv('ensemble/test2/v7_3.6505696377393253.csv')
my_best['target'].corr(my_best2['target'])
my_best2['target'] = my_best2['target']*0.7 + hyeonwoo_best2['target']*0.3
my_best2.to_csv('blend_v4_685_hyeonwoo_687_0.7_0.3.csv',index=False)
my_best2['target'].corr(hyeonwoo_best2['target'])
hyeonwoo_best2.columns = ['card_id','h_target']
my_best2 = my_best2.merge(hyeonwoo_best2,on='card_id',how='left')
my_best2['diff'] = my_best2['target'] - my_best2['h_target']
my_best2.sort_values('diff')
my_best.sort_values('diff')
plt.figure(figsize=(10,7))
sns.distplot(my_best.query("1>diff >-1")['diff'])
sns.distplot(my_best2.query("1>diff >-1")['diff'])
plt.figure(figsize=(10,7))
sns.distplot(my_best['target'],bins=100)
sns.distplot(my_best['h_target'],bins=100)
plt.figure(figsize=(10,7))
sns.distplot(my_best['h_target'],bins=100)
sns.distplot(my_best2['h_target'],bins=100)
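# The repeated read-blend-write pattern above can be wrapped in a small helper (the parameter names are illustrative, and rows are assumed to share the same order in both files):

```python
import pandas as pd

def blend_submissions(path_a, path_b, w_a, target='target'):
    """Weighted average of two submission files that share the same row order."""
    a, b = pd.read_csv(path_a), pd.read_csv(path_b)
    out = a.copy()
    out[target] = a[target] * w_a + b[target] * (1 - w_a)
    return out
```

# For example, the first blend above becomes `blend_submissions(my_path, hyeonwoo_path, 0.4).to_csv('submission.csv', index=False)`.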
| Elo Merchant Category Recommendation/code/Untitled1.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import requests
req = requests.get('https://data.covid19.go.id/public/api/update.json')
print(req)
covid_id_raw = req.json()
covid_id_raw
print(len(covid_id_raw))
print(covid_id_raw.keys())
# +
print('Tanggal pembaruan data penambahan COVID-19 : ', covid_id_raw['update']['penambahan']['tanggal'])
print('Jumlah penambahan data positif kasus COVID-19 : ', covid_id_raw['update']['penambahan']['jumlah_positif'])
print('Jumlah penambahan data sembuh kasus COVID-19 : ', covid_id_raw['update']['penambahan']['jumlah_sembuh'])
print('Jumlah penambahan data meninggal kasus COVID-19 : ', covid_id_raw['update']['penambahan']['jumlah_meninggal'])
print('')
print('Jumlah total kasus positif hingga saat ini : ', covid_id_raw['update']['total']['jumlah_positif'])
print('Jumlah total kasus sembuh hingga saat ini : ', covid_id_raw['update']['total']['jumlah_sembuh'])
print('Jumlah total kasus meninggal hingga saat ini : ', covid_id_raw['update']['total']['jumlah_meninggal'])
# -
req_jatim = requests.get('https://data.covid19.go.id/public/api/prov_detail_JAWA_TIMUR.json')
response = req_jatim.json()
response
print(response.keys())
print('Tanggal pembaruan data penambahan COVID-19 di Jawa Timur: ', response['last_date'])
print('Jumlah total kasus positif hingga saat ini : ', response['kasus_total'])
print('Persentase kasus sembuh hingga saat ini : ', response['sembuh_persen'])
print('Persentase kasus meninggal hingga saat ini : ', response['meninggal_persen'])
import numpy as np
import pandas as pd
covid_jatim = pd.DataFrame(response['list_perkembangan'])
covid_jatim.info()
covid_jatim.head(20)
covid_jatim_df = (covid_jatim.drop(columns=[item for item in covid_jatim.columns
if item.startswith('AKUMULASI')
or item.startswith('DIRAWAT')])
.rename(columns=str.lower)
.rename(columns={'kasus':'kasus baru'})
)
covid_jatim_df['tanggal'] = pd.to_datetime(covid_jatim_df['tanggal']*1e6, unit='ns')
covid_jatim_df.head()
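# Multiplying the epoch-milliseconds by 1e6 converts them to nanoseconds; pandas can also consume the milliseconds directly with `unit='ms'`. A quick equivalence check on a toy value:

```python
import pandas as pd

ms = 86_400_000  # one day after the epoch, in milliseconds
assert pd.to_datetime(ms * 1e6, unit='ns') == pd.to_datetime(ms, unit='ms')
print(pd.to_datetime(ms, unit='ms'))  # 1970-01-02 00:00:00
```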
import matplotlib.pyplot as plt
import matplotlib.dates as mdates
plt.clf()
fig,ax = plt.subplots(figsize=(10,5))
ax.bar(data=covid_jatim_df, x="tanggal", height="kasus baru")
ax.set(xlabel = "Tanggal", ylabel="Jumlah Kasus")
ax.text(1, -0.1,'Sumber data: covid19.go.id', color='blue',
ha='right', transform=ax.transAxes)
plt.show()
# ## Daily COVID-19 Case Analysis in East Java
# +
plt.clf()
fig,ax = plt.subplots(figsize=(18,7))
ax.bar(data=covid_jatim_df, x='tanggal', height='kasus baru', color='salmon')
ax.set_title("Kasus Harian Positif COVID-19 di Jawa Timur", fontsize=22)
ax.set_ylabel("Jumlah Kasus", fontsize=16)
ax.set(xlabel=" ")
ax.text(1, -0.1,'Sumber data: covid19.go.id', color='blue',
ha='right', transform=ax.transAxes, fontsize=14)
ax.xaxis.set_major_locator(mdates.MonthLocator())
ax.xaxis.set_major_formatter(mdates.DateFormatter('%b %Y'))
plt.grid(axis='y')
plt.tight_layout()
plt.show()
# +
plt.clf()
fig,ax = plt.subplots(figsize=(18,7))
ax.bar(data=covid_jatim_df, x='tanggal', height='sembuh', color='olivedrab')
ax.set_title("Kasus Harian Sembuh COVID-19 di Jawa Timur", fontsize=22)
ax.set_ylabel("Jumlah Kasus", fontsize=16)
ax.set(xlabel=" ")
ax.text(1, -0.1,'Sumber data: covid19.go.id', color='blue',
ha='right', transform=ax.transAxes, fontsize=14)
ax.xaxis.set_major_locator(mdates.MonthLocator())
ax.xaxis.set_major_formatter(mdates.DateFormatter('%b %Y'))
plt.grid(axis='y')
plt.tight_layout()
plt.show()
# +
plt.clf()
fig,ax = plt.subplots(figsize=(18,7))
ax.bar(data=covid_jatim_df, x='tanggal', height='meninggal', color='slategrey')
ax.set_title("Kasus Harian Meninggal COVID-19 di Jawa Timur", fontsize=22)
ax.set_ylabel("Jumlah Kasus", fontsize=16)
ax.set(xlabel=" ")
ax.text(1, -0.1,'Sumber data: covid19.go.id', color='blue',
ha='right', transform=ax.transAxes, fontsize=14)
ax.xaxis.set_major_locator(mdates.MonthLocator())
ax.xaxis.set_major_formatter(mdates.DateFormatter('%b %Y'))
plt.grid(axis='y')
plt.tight_layout()
plt.show()
# -
# ## Weekly COVID-19 Case Analysis in East Java
covid_jatim_pekanan = (covid_jatim_df.set_index('tanggal')['kasus baru']
                       .resample('W').sum()
                       .reset_index()
                       .rename(columns={'kasus baru': 'jumlah'}))
covid_jatim_pekanan['tahun'] = covid_jatim_pekanan['tanggal'].apply(lambda x:x.year)
covid_jatim_pekanan['pekan_ke'] = covid_jatim_pekanan['tanggal'].apply(lambda x:x.weekofyear)
covid_jatim_pekanan = covid_jatim_pekanan[['tahun','pekan_ke','jumlah']]
covid_jatim_pekanan.info()
covid_jatim_pekanan.head()
covid_jatim_pekanan['jumlah_pekan_lalu'] = covid_jatim_pekanan['jumlah'].shift().replace(np.nan, 0).astype(int)  # np.int was removed in NumPy 1.24
covid_jatim_pekanan['lebih_baik'] = covid_jatim_pekanan['jumlah'] < covid_jatim_pekanan['jumlah_pekan_lalu']
covid_jatim_pekanan
plt.clf()
fig, ax = plt.subplots(figsize=(15,7))
ax.bar(data=covid_jatim_pekanan, x='pekan_ke', height='jumlah', color=['mediumseagreen' if x is True else 'salmon' for x in covid_jatim_pekanan['lebih_baik']])
fig.suptitle('Kasus Positif Pekanan COVID-19 di Jawa Timur', fontsize=22, fontweight='bold', ha='center')
ax.set_title('Kolom hijau menunjukkan penambahan kasus baru lebih sedikit dibandingkan satu pekan sebelumnya', fontsize=12)
ax.set_ylabel('Jumlah Kasus', fontsize=12)
ax.text(1, -0.1, 'Sumber data: covid19.go.id', color='blue', ha ='right', transform=ax.transAxes)
plt.grid(axis='y')
plt.show()
# ## Current Active COVID-19 Case Analysis
covid_jatim_akumulasi = covid_jatim_df[['tanggal']].copy()
covid_jatim_akumulasi['akumulasi_aktif'] =(covid_jatim_df['kasus baru'] - covid_jatim_df['meninggal'] - covid_jatim_df['sembuh']).cumsum()
covid_jatim_akumulasi['akumulasi_sembuh'] = covid_jatim_df['sembuh'].cumsum()
covid_jatim_akumulasi['akumulasi_meninggal'] = covid_jatim_df['meninggal'].cumsum()
covid_jatim_akumulasi.tail()
# +
plt.clf()
fig, ax= plt.subplots(figsize=(18,7))
ax.plot('tanggal','akumulasi_aktif', data=covid_jatim_akumulasi, lw=2)
ax.set_title('Akumulasi Aktif COVID-19 di Jawa Timur', fontsize=22)
ax.set_ylabel('Akumulasi aktif', fontsize=12)
ax.text(1, -0.1, 'Sumber data: covid19.go.id', color='blue',ha='right',transform=ax.transAxes)
ax.xaxis.set_major_locator(mdates.MonthLocator())
ax.xaxis.set_major_formatter(mdates.DateFormatter('%b %Y'))
plt.grid()
plt.tight_layout()
plt.show()
# +
plt.clf()
fig, ax = plt.subplots(figsize=(10,5))
covid_jatim_akumulasi.plot(x='tanggal',lw=3,color=['salmon','slategrey','olivedrab'],ax=ax,kind='line')
ax.set_title('Dinamika Kasus COVID-19 di Jawa Timur', fontsize=22)
ax.set_ylabel('Akumulasi Jumlah Kasus')
ax.set_xlabel('')
ax.text(1, -0.1, 'Sumber data: covid19.go.id', color='blue',ha='right',transform=ax.transAxes)
ax.xaxis.set_major_locator(mdates.MonthLocator())
ax.xaxis.set_major_formatter(mdates.DateFormatter('%b'))
plt.grid()
plt.tight_layout()
plt.show()
# -
| Analisis Data COVID-19 di Jawa Timur.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + colab={"base_uri": "https://localhost:8080/"} id="-kVb2H6Zm_PW" outputId="0bd7a6b1-8571-42b7-8a35-a90cc496ad52"
from google.colab import drive
drive.mount('/content/drive')
# + colab={"base_uri": "https://localhost:8080/"} id="JueK8EKRtI0h" outputId="b8e7e753-7e5d-4e4e-91e5-6c1e20024c6a"
# %cd /content/drive/MyDrive/MiCM2021-PKD/beta/
my_dir = '/content/drive/MyDrive/MiCM2021-PKD/beta/'
# + colab={"base_uri": "https://localhost:8080/"} id="hEH8ZgIku7t4" outputId="8c2c22ac-1235-492c-fddc-b5bf58ce3da8"
# !pip install pydicom
# + colab={"base_uri": "https://localhost:8080/"} id="nYCP-i6AtdZJ" outputId="be9759ea-7598-43d0-a005-f4839d31a8aa"
# !pip install tqdm
# + colab={"base_uri": "https://localhost:8080/"} id="YU-u636Sw1q9" outputId="5b001272-fbbe-4545-d469-1d1f6f85cb08"
# !pip install utils
# + id="OaFg6USWu1J6"
from torch.utils.data import DataLoader
from tqdm import tqdm
import random
import torch.optim as optim
import os
from UNET import UNET
from dataset import SliceDataset
import torch
import math
from DiceLoss import myDiceLoss
from transform import transforms
from utils import (
load_checkpoint,
save_checkpoint,
get_loaders,
check_accuracy,
load_seg,
remove_bg_only_test,
clean_test_ds
)
import time
import numpy as np
# + id="XhLu9e9avR60"
def train_fn(train_dataloader, model, optimizer, loss_fn, scaler):
loop = tqdm(train_dataloader, position=0, leave=True)
for batch_idx, (data, targets) in enumerate(loop):
data = data.unsqueeze(1).to(device=DEVICE)
targets = targets.float().unsqueeze(1).to(device=DEVICE)
# forward
with torch.cuda.amp.autocast():
predictions = model(data)
# print("pred: ", predictions.shape)
loss = loss_fn.forward(predictions, targets)
# loss = loss_fn(predictions, targets)
# print("loss: ", loss.shape)
# backward
optimizer.zero_grad()
scaler.scale(loss).backward()
scaler.step(optimizer)
scaler.update()
# update tqdm loop
loop.set_postfix(loss=loss.item())
# + id="tRxRapqCvOpw"
DEVICE = "cuda" if torch.cuda.is_available() else "cpu"
LOAD_MODEL = False
ORGAN = 'lv'
# + colab={"base_uri": "https://localhost:8080/"} id="CEvUyddpy7CE" outputId="8317e989-a183-4a3c-b6ee-6609aac87d5a"
my_dir = '/content/drive/MyDrive/MiCM2021-PKD/beta/MR2D/Train'
img_paths = []
for dcm in os.listdir(my_dir + '/X'):
if dcm != ".DS_Store":
img_paths.append(my_dir + '/X/' + dcm)
img_paths.sort()
seg_paths = []
for seg in os.listdir(my_dir + '/Y'):
if seg != ".DS_Store":
seg_paths.append(my_dir + '/Y/' + seg)
seg_paths.sort()
# Train = ds.SliceDataset(img_paths, seg_paths)
train_idx = set(random.sample(range(len(img_paths)), math.ceil(0.75 * len(img_paths))))  # set for O(1) membership tests below
train_img_paths = []
train_seg_paths = []
val_img_paths = []
val_seg_paths = []
for idx in range(len(img_paths)):
if idx in train_idx:
train_img_paths.append(img_paths[idx])
train_seg_paths.append(seg_paths[idx])
else:
val_img_paths.append(img_paths[idx])
val_seg_paths.append(seg_paths[idx])
Train = SliceDataset(train_img_paths, train_seg_paths, organ=ORGAN, transform=transforms(0.5, 0.5))
Val = SliceDataset(val_img_paths, val_seg_paths, organ=ORGAN, transform=transforms(0.5, 0.5))
val_losses = []
dice_scores = []
UNet = UNET(in_channels=1, out_channels=1).to(DEVICE)
# loss_fn = nn.BCEWithLogitsLoss()
loss_fn = myDiceLoss()
optimizer = optim.Adam(UNet.parameters(), lr=2 * 1e-3)
train_loader, val_loader = get_loaders(Train, Val)
# check_accuracy(val_loader, UNet, device=DEVICE)
scaler = torch.cuda.amp.GradScaler()
NUM_EPOCHS = 10
if LOAD_MODEL:
load_checkpoint(torch.load("my_checkpoint.pth.tar"), UNet)
start_time = time.time()
for epoch in range(NUM_EPOCHS):
print("Epoch: {epoch}/{total}".format(epoch=epoch + 1, total=NUM_EPOCHS))
train_fn(train_loader, UNet, optimizer, loss_fn, scaler)
# save model
checkpoint = {
"state_dict": UNet.state_dict(),
"optimizer": optimizer.state_dict(),
}
# check accuracy
loss, dice = check_accuracy(val_loader, UNet, loss_fn, device=DEVICE)
val_losses.append(loss)
dice_scores.append(dice)
# # print some examples to a folder
# save_predictions_as_imgs(
# val_loader, model, folder="saved_images/", device=DEVICE
# )
print("--- Training time: %s seconds ---" % (time.time() - start_time))
save_checkpoint(checkpoint)
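# `myDiceLoss` is a project-specific module; the Dice overlap it presumably optimizes (loss = 1 − Dice) can be sketched framework-free in NumPy:

```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-8):
    """2|A∩B| / (|A| + |B|) for binary masks; the Dice loss is 1 - this value."""
    intersection = (pred * target).sum()
    return (2.0 * intersection) / (pred.sum() + target.sum() + eps)

a = np.array([[1, 1], [0, 0]], dtype=float)   # predicted mask
b = np.array([[1, 0], [0, 0]], dtype=float)   # ground-truth mask
print(round(dice_coefficient(a, b), 4))  # 0.6667
```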
# + colab={"base_uri": "https://localhost:8080/", "height": 573} id="kv6-oPCr52iF" outputId="282e05ba-fe5e-4143-eb8b-f07c463cd073"
import matplotlib.pyplot as plt
m_loss=[]
for loss in val_losses:
m_loss.append(float(loss.cpu()))
m_dice=[]
for dice in dice_scores:
m_dice.append(float(dice.cpu()))
t = [i for i in range(1,NUM_EPOCHS+1)]
fig, ax = plt.subplots()
ax.plot(t, m_loss)
ax.set(xlabel='epoch', ylabel='validation loss',
title='Validation loss against epoch')
ax.grid()
plt.show()
fig, ax = plt.subplots()
ax.plot(t, m_dice)
ax.set(xlabel='epoch', ylabel='dice score',
title='Dice score against epoch')
ax.grid()
plt.show()
# + id="kW3KwjWy0VDQ"
# def test(loader, model, loss_fn, device="cuda"):
# num_correct = 0
# num_pixels = 0
# dice_score = 0
# model.eval()
# with torch.no_grad():
# for x, y in loader:
# # print("x: ", x.shape)
# # print("y: ", y.shape)
# x = x.unsqueeze(1).to(device)
# # print("x: ", x.shape)
# y = y.unsqueeze(1).to(device)
# # print("mo la")
# preds = torch.sigmoid(model(x))
# preds = (preds > 0.5).float()
# loss = loss_fn.forward(preds,y)
# num_correct += (preds == y).sum()
# num_pixels += torch.numel(preds)
# dice_score += (2 * (preds * y).sum()) / (
# (preds + y).sum() + 1e-8
# )
# print(
# f"Got {num_correct}/{num_pixels} with acc {num_correct/num_pixels*100:.2f}"
# )
# print(f"Dice score: {dice_score/len(loader)}")
# model.train()
# return loss, dice_score/len(loader)
# + id="XYg2Kb7a5cVX"
# def save_predictions_as_imgs(
# loader, model, device="cuda"
# ):
# model.eval()
# for idx, (x, y) in enumerate(loader):
# x = x.to(device=device)
# with torch.no_grad():
# preds = torch.sigmoid(model(x))
# preds = (preds > 0.5).float()
# # torchvision.utils.save_image(
# # preds, f"{folder}/pred_{idx}.png"
# # )
# # torchvision.utils.save_image(y.unsqueeze(1), f"{folder}{idx}.png")
# # print(type(preds))
# # print(preds.shape)
# model.train()
# return preds
# + id="WIfNdjiuoFqI"
# + id="kkB0Vo4yHakj"
# + colab={"base_uri": "https://localhost:8080/"} id="NKm-oq3966zM" outputId="e19695b3-3295-4dfb-9464-cb81b0f46a41"
from utils import (load_scan, load_seg, findOrgan)
test_img_paths = []
test_seg_paths = []
test_dir = '/content/drive/MyDrive/MiCM2021-PKD/beta/MR2D/Test'
for dcm in os.listdir(test_dir + '/X'):
if dcm != ".DS_Store":
test_img_paths.append(test_dir + '/X/' + dcm)
test_img_paths.sort()
for seg in os.listdir(test_dir + '/Y'):
if seg != ".DS_Store":
test_seg_paths.append(test_dir + '/Y/' + seg)
test_seg_paths.sort()
## apply find organ on everything. then remove the all 0s
mylist = []
for idx in range(len(test_img_paths)):
img = load_scan(test_img_paths[idx])
seg = load_seg(test_seg_paths[idx])
bin_img, bin_seg = findOrgan(img,seg,ORGAN)
if np.amax(bin_seg) != 0:
mylist.append(idx)
cleaned_test_img_paths = []
cleaned_test_seg_paths = []
for idx in mylist:
cleaned_test_img_paths.append(test_img_paths[idx])
cleaned_test_seg_paths.append(test_seg_paths[idx])
Total_Test = SliceDataset(cleaned_test_img_paths, cleaned_test_seg_paths, organ=ORGAN, transform=None, test=True)
# mylist = remove_bg_only_test(test_seg_paths)
# cleaned_test_img_paths, cleaned_test_seg_paths = clean_test_ds(test_img_paths, test_seg_paths, mylist)
test_idx = random.sample(range(0, len(cleaned_test_img_paths)), 10)
sampled_test_img_paths = []
sampled_test_seg_paths = []
for idx in test_idx:
sampled_test_img_paths.append(cleaned_test_img_paths[idx])
sampled_test_seg_paths.append(cleaned_test_seg_paths[idx])
# Test = SliceDataset(sampled_test_img_paths, sampled_test_seg_paths, transform=None)
## TOTAL
test_loader = DataLoader(Total_Test,1)
loop = tqdm(test_loader, position=0, leave=True)
preds = []
ground = []
for batch_idx, (data, targets) in enumerate(loop):
data = data.unsqueeze(1).to(device=DEVICE)
pred = torch.sigmoid(UNet(data))
pred = (pred > 0.5).float()
preds.append(pred.detach().cpu().numpy()[0][0])
targets = targets.float().unsqueeze(1).to(device=DEVICE)
ground.append(targets.detach().cpu().numpy()[0][0])
## SAMPLE
Sample_Test = SliceDataset(sampled_test_img_paths, sampled_test_seg_paths, organ=ORGAN, transform=None, test=True)
sample_test_loader = DataLoader(Sample_Test)
sample_loop = tqdm(sample_test_loader, position=0, leave=True)
sample_preds = []
sample_ground = []
for batch_idx, (data, targets) in enumerate(sample_loop):
data = data.unsqueeze(1).to(device=DEVICE)
pred = torch.sigmoid(UNet(data))
pred = (pred > 0.5).float()
sample_preds.append(pred.detach().cpu().numpy()[0][0])
targets = targets.float().unsqueeze(1).to(device=DEVICE)
sample_ground.append(targets.detach().cpu().numpy()[0][0])
# + colab={"base_uri": "https://localhost:8080/", "height": 1000} id="Vy5BqkmGNb7A" outputId="190e1699-66d5-4caa-9215-b57012221fba"
for idx in range(len(sample_preds)):
f, axarr = plt.subplots(1,2)
axarr[0].imshow(sample_ground[idx])
axarr[0].title.set_text('Ground')
axarr[1].imshow(sample_preds[idx])
axarr[1].title.set_text('Prediction')
# + colab={"base_uri": "https://localhost:8080/", "height": 1000} id="rlw-HK2h8sim" outputId="4b0ff4ea-57fd-4ac3-b14d-b745800bf5fb"
for idx in range(len(preds)):
f, axarr = plt.subplots(1,2)
axarr[0].imshow(ground[idx])
axarr[1].imshow(preds[idx])
# for idx in range(len(preds)):
# axarr[idx, idx%2].imshow(preds[idx])
# axarr[idx, idx%2].imshow(ground[idx])
| beta/example.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + colab={} colab_type="code" id="HVVK9yua1SUx"
from random import choice
combo_indices = [
[0, 1, 2],
[3, 4, 5],
[6, 7, 8],
[0, 3, 6],
[1, 4, 7],
[2, 5, 8],
[0, 4, 8],
[2, 4, 6]
]
EMPTY_SIGN = '.'
AI_SIGN = 'X'
OPPONENT_SIGN = 'O'
def print_board(board):
print(" ")
print(' '.join(board[:3]))
print(' '.join(board[3:6]))
print(' '.join(board[6:]))
print(" ")
def opponent_move(board, row, column):
index = 3 * (row - 1) + (column - 1)
if board[index] == EMPTY_SIGN:
return board[:index] + OPPONENT_SIGN + board[index+1:]
return board
def game_won_by(board):
for index in combo_indices:
if board[index[0]] == board[index[1]] == board[index[2]] != EMPTY_SIGN:
return board[index[0]]
return EMPTY_SIGN
def game_loop():
board = EMPTY_SIGN * 9
empty_cell_count = 9
is_game_ended = False
while empty_cell_count > 0 and not is_game_ended:
if empty_cell_count % 2 == 1:
board = ai_move(board)
else:
row = int(input('Enter row: '))
col = int(input('Enter column: '))
board = opponent_move(board, row, col)
print_board(board)
is_game_ended = game_won_by(board) != EMPTY_SIGN
empty_cell_count = sum(1 for cell in board if cell == EMPTY_SIGN)
    print('The game has ended.')
def all_moves_from_board_list(board_list, sign):
move_list = []
for board in board_list:
move_list.extend(all_moves_from_board(board, sign))
return move_list
def filter_wins(move_list, ai_wins, opponent_wins):
for board in move_list:
won_by = game_won_by(board)
if won_by == AI_SIGN:
ai_wins.append(board)
move_list.remove(board)
elif won_by == OPPONENT_SIGN:
opponent_wins.append(board)
move_list.remove(board)
def count_possibilities():
board = EMPTY_SIGN * 9
move_list = [board]
ai_wins = []
opponent_wins = []
for i in range(9):
print('step ' + str(i) + '. Moves: ' + str(len(move_list)))
sign = AI_SIGN if i % 2 == 0 else OPPONENT_SIGN
move_list = all_moves_from_board_list(move_list, sign)
filter_wins(move_list, ai_wins, opponent_wins)
print('First player wins: ' + str(len(ai_wins)))
print('Second player wins: ' + str(len(opponent_wins)))
print('Draw', str(len(move_list)))
print('Total', str(len(ai_wins) + len(opponent_wins) + len(move_list)))
return len(ai_wins), len(opponent_wins), len(move_list), len(ai_wins) + len(opponent_wins) + len(move_list)
def player_can_win(board, sign):
next_moves = all_moves_from_board(board, sign)
for next_move in next_moves:
if game_won_by(next_move) == sign:
return True
return False
def ai_move(board):
new_boards = all_moves_from_board(board, AI_SIGN)
for new_board in new_boards:
if game_won_by(new_board) == AI_SIGN:
return new_board
safe_moves = []
for new_board in new_boards:
if not player_can_win(new_board, OPPONENT_SIGN):
safe_moves.append(new_board)
return choice(safe_moves) if len(safe_moves) > 0 else \
new_boards[0]
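# The `opponent_move` function above maps a 1-based (row, column) pair to a flat string index via `3*(row-1)+(column-1)`; a standalone check of that mapping:

```python
EMPTY_SIGN = '.'
OPPONENT_SIGN = 'O'

def cell_index(row, column):
    """Flat string index of a 1-based (row, column) cell on a 3x3 board."""
    return 3 * (row - 1) + (column - 1)

board = EMPTY_SIGN * 9
i = cell_index(2, 3)                            # row 2, column 3 -> index 5
board = board[:i] + OPPONENT_SIGN + board[i + 1:]
print(board)  # .....O...
```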
# + colab={} colab_type="code" id="_mSEFds81SU3"
def all_moves_from_board(board, sign):
if sign == AI_SIGN:
empty_field_count = board.count(EMPTY_SIGN)
if empty_field_count == 9:
return [sign + EMPTY_SIGN * 8]
elif empty_field_count == 7:
return [
board[:8] + sign if board[8] == \
EMPTY_SIGN else
board[:4] + sign + board[5:]
]
move_list = []
for i, v in enumerate(board):
if v == EMPTY_SIGN:
new_board = board[:i] + sign + board[i+1:]
move_list.append(new_board)
if game_won_by(new_board) == AI_SIGN:
return [new_board]
if sign == AI_SIGN:
safe_moves = []
for move in move_list:
if not player_can_win(move, OPPONENT_SIGN):
safe_moves.append(move)
return safe_moves if len(safe_moves) > 0 else \
move_list[0:1]
else:
return move_list
# + colab={"base_uri": "https://localhost:8080/", "height": 238} colab_type="code" id="33BfiO-21SU7" outputId="3f3dada1-fc1b-4ad5-f69e-a66674256739"
first_player, second_player, draw, total = count_possibilities()
| Chapter01/Activity1.03/Activity1_03.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier
import numpy as np
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
digits = load_digits()
# -
from sklearn.model_selection import KFold
kf = KFold(n_splits=3)
kf
for train_index , test_index in kf.split([1,2,3,4,5,6,7,8,9]):
print(train_index,test_index)
def get_score(model,X_train,X_test,y_train,y_test):
model.fit(X_train,y_train)
return model.score(X_test,y_test)
X_train,X_test,y_train,y_test = train_test_split(digits.data,digits.target,test_size=0.2)
get_score(LogisticRegression(),X_train,X_test,y_train,y_test)
from sklearn.model_selection import StratifiedKFold
folds = StratifiedKFold(n_splits=3)
# +
scores_l = []
scores_svm = []
scores_rf = []
for train_index,test_index in kf.split(digits.data):
X_train,X_test,y_train,y_test = digits.data[train_index],digits.data[test_index],digits.target[train_index],digits.target[test_index]
scores_l.append(get_score(LogisticRegression(),X_train,X_test,y_train,y_test))
scores_svm.append(get_score(SVC(),X_train,X_test,y_train,y_test))
scores_rf.append(get_score(RandomForestClassifier(),X_train,X_test,y_train,y_test))
# -
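# Note that `folds = StratifiedKFold(n_splits=3)` was created above but the loop uses `kf`. A stratified split also passes the labels to `split` so that every fold keeps the class proportions — a toy illustration:

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold

X = np.arange(12).reshape(6, 2)
y = np.array([0, 0, 0, 1, 1, 1])

skf = StratifiedKFold(n_splits=3)
for train_idx, test_idx in skf.split(X, y):       # labels y drive the stratification
    print(sorted(y[test_idx].tolist()))           # every test fold keeps the 50/50 ratio
```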
scores_l
scores_rf
scores_svm
from sklearn.model_selection import cross_val_score
cross_val_score(LogisticRegression(),digits.data,digits.target)
cross_val_score(RandomForestClassifier(),digits.data,digits.target)
cross_val_score(SVC(),digits.data,digits.target)
| K Fold Cross Validation/digits_data.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python [conda env:deep_animator]
# language: python
# name: conda-env-deep_animator-py
# ---
# +
# default_exp cli
# -
# # CLI
#
# > Execution module for Deep Animator library.
# hide
from nbdev.showdoc import *
# +
# export
import imageio
from skimage import img_as_ubyte
from skimage.transform import resize
from fastscript import call_parse, Param
from deep_animator.utils import load_checkpoints, animate
import warnings
warnings.filterwarnings("ignore")
# -
#export
@call_parse
def deep_animate(source: Param('Path to the source image.', str),
driving: Param('Path to the driving video.', str),
config: Param('Path to configuration file.', str),
checkpoint: Param('Path to model.', str),
                 device: Param('cpu or gpu acceleration', str) = 'cpu',
dest: Param('Path to save the generated video.', str) = 'generated_video.mp4',
relative: Param('Relative.', bool) = True,
adapt_movement_scale: Param('Adaptive moment scale.', bool) = True):
source_image = imageio.imread(source)
    driving_video = imageio.mimread(driving)
    # resize image and video frames to 256x256
    source_image = resize(source_image, (256, 256))[..., :3]
    driving_video = [resize(frame, (256, 256))[..., :3] for frame in driving_video]
    generator, kp_detector = load_checkpoints(config_path=config, checkpoint_path=checkpoint)
    predictions = animate(source_image, driving_video, generator, kp_detector, relative=relative,
                          adapt_movement_scale=adapt_movement_scale)
    imageio.mimsave(dest, [img_as_ubyte(frame) for frame in predictions])
| nbs/cli.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import matplotlib.pyplot as plt
import numpy as np
import cv2
import random
import solt
import solt.transforms as slt
# -
def vis_img(img):
fig = plt.figure(figsize=(6,6))
ax = fig.add_subplot(1,1,1)
ax.imshow(img)
ax.set_xticks([])
ax.set_yticks([])
plt.show()
img = cv2.imread('data/voc.jpg')
h, w, c = img.shape
img = img[:w]
img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
vis_img(img)
stream = solt.Stream([
slt.Rotate(angle_range=(-90, 90), p=1, padding='r'),
slt.Flip(axis=1, p=0.5),
slt.Flip(axis=0, p=0.5),
slt.Shear(range_x=0.3, range_y=0.8, p=0.5, padding='r'),
slt.Scale(range_x=(0.8, 1.3), padding='r', range_y=(0.8, 1.3), same=False, p=0.5),
slt.Pad((w, h), 'r'),
slt.Crop((w, w), 'r'),
slt.CvtColor('rgb2gs', keep_dim=True, p=0.2),
slt.HSV((0, 10), (0, 10), (0, 10)),
slt.Blur(k_size=7, blur_type='m'),
solt.SelectiveStream([
slt.CutOut(40, p=1),
slt.CutOut(50, p=1),
slt.CutOut(10, p=1),
solt.Stream(),
solt.Stream(),
], n=3),
], ignore_fast_mode=True)
# + pycharm={"name": "#%%\n"}
fig = plt.figure(figsize=(16,16))
n_augs = 6
random.seed(42)
for i in range(n_augs):
img_aug = stream({'image': img}, return_torch=False, ).data[0].squeeze()
ax = fig.add_subplot(1,n_augs,i+1)
if i == 0:
ax.imshow(img)
else:
ax.imshow(img_aug)
ax.set_xticks([])
ax.set_yticks([])
plt.savefig('results/img_aug.png', bbox_inches='tight')
plt.show()
| examples/Images.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Amazon Prime 0% Interest
# When purchasing items on amazon.com with the Amazon Prime credit card, I get 5% cash back OR the option of 6 months of 0% financing with 6 equal monthly payments.
# However, giving up the 5% cash back is equivalent to paying interest. The goal of this post is to determine how much interest is effectively paid.
# +
product_cost = 100
cash_back_rate = .05
loan_term_months = 6
monthly_payment = product_cost / loan_term_months
debt_free_cost = product_cost - product_cost * cash_back_rate
debt_cost = product_cost
interest_paid = debt_cost - debt_free_cost
# +
print(f"Purchase a product for ${product_cost}")
print(f'Monthly Payments: ${round(monthly_payment,2)} for {loan_term_months} months.\n')
debt_level = product_cost
annual_debt_dollars = 0
for month in range(1, loan_term_months+1):
annual_debt_dollars += debt_level/12
print(f'Debt Level after {month} Months: {debt_level}')
debt_level = debt_level - monthly_payment
effective_interest_rate = interest_paid/annual_debt_dollars
print(f"Giving up {cash_back_rate*100}% cash back is equivalent to an effective Interest Rate: {effective_interest_rate*100}%")
# -
# # CHECK
interest_paid = 0
debt_level = product_cost
for month in range(1, loan_term_months+1):
interest_paid += effective_interest_rate/12 * debt_level
debt_level = debt_level - monthly_payment
print(interest_paid)
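# The effective rate can also be cross-checked with a closed form (a sketch under the same assumptions: equal monthly payments, so the debt declines linearly). The sum of the month-start debt levels over n months is P*(n+1)/2, so the annual debt dollars come to P*(n+1)/24 and the effective rate is 24*cash_back_rate/(n+1).

```python
product_cost = 100
cash_back_rate = 0.05
loan_term_months = 6

# debt-years: sum of month-start balances, divided by 12
debt_years = product_cost * (loan_term_months + 1) / 24
closed_form_rate = cash_back_rate * product_cost / debt_years
print(round(closed_form_rate * 100, 2))  # 17.14
```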
| _notebooks/2020-08-12-AmazonNoInterest.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Sales report using Pandas
# ***
# ## Problem Statement
#
# Hello budding Data Scientists. We have with us a bank data set which gives information about the revenue of various customers spread across different regions in the USA.
#
# Using our knowledge of Pandas and Matplotlib, we will try to answer certain questions from the bank dataset.
#
# We will also scrape some additional data from Wikipedia, clean it, and combine it with the bank data to make the data easier to understand.
#
#
# ## About the Dataset
#
# Preview of the dataset
#
# 
#
# The dataset has details of 15 customers with following 9 features.
#
# |Feature|Description|
# |-----|-----|
# |account|account Id|
# |name|name of the person|
# |street|Name of the street|
# |city|Name of the city|
# |state|Name of the state|
# |postal-code|numerical value|
# |Jan|Amount in dollars|
# |Feb|Amount in dollars|
# |Mar|Amount in dollars|
#
#
#
#
# ## Why solve this project
#
# Doing this project will enable you to integrate multiple data sources to answer basic questions. You will also learn to perform common Excel tasks with pandas.
#
# What will you learn in this session?
#
# - Python basics
# - Pandas
# - Web scraping
# - Functions
# - Plotting
#
# Pre-requisites:
#
# - Working knowledge of Pandas, NumPy, Matplotlib
# - Data indexing and slicing
# # Load Data and Compute total
# The first step - you know the drill by now - load the dataset and see what it looks like. Additionally, calculate the total amount in the first quarter of the financial year: the total for each user across the months of Jan, Feb and Mar, and also the grand total.
#
#
# ## Instructions
#
# - Load the dataset using the pandas `read_csv` API into variable `df`, with the file path given as `path`.
# - Convert the names in the `state` column to lower case and store the result back in `df['state']`.
# - Create a new column named `total` which computes the total amount in the first quarter
# of the financial year, i.e. for the months of Jan, Feb and Mar, and store it in `df['total']`.
# - Calculate the sum of the amounts of all users for the months of Jan, Feb and Mar and store it in variable `sum_row`
# (here the sum implies the sum of all entries in the `Jan` column, the sum of entries in the `Feb` column, and so on; the grand total stands for the sum of entries in the column `total`).
# - Append this computed sum to the DataFrame `df_final`.
#
#
#
# +
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
# Code starts here
#Code ends here
# -
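# A possible sketch of this step (not the official solution): the real data would come from `pd.read_csv(path)`; a hypothetical miniature frame is used here so the snippet runs standalone.

```python
import pandas as pd

# Hypothetical miniature of the bank dataset; with the real file this line
# would instead be: df = pd.read_csv(path)
df = pd.DataFrame({
    "state": ["Texas", "Ohio"],
    "Jan": [100, 200],
    "Feb": [110, 210],
    "Mar": [120, 220],
})
df["state"] = df["state"].str.lower()               # lower-case the state names
df["total"] = df["Jan"] + df["Feb"] + df["Mar"]     # first-quarter total per user
sum_row = df[["Jan", "Feb", "Mar", "total"]].sum()  # column sums, incl. grand total
df_final = pd.concat([df, sum_row.to_frame().T], ignore_index=True)
print(int(sum_row["total"]))  # 960
```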
# # Scrape Data From the web
#
# Here, you will be scraping data from the web and cleaning it.
#
#
# ## Instructions:
#
# - Store the URL `https://en.wikipedia.org/wiki/List_of_U.S._state_abbreviations` in variable `url`
# - Use the module `requests` to `get` the URL and store the result in a variable called `response`
# - Load the HTML into dataframe `df1`. `Note`: use `pd.read_html(response.content)[0]`.
# - The first few rows consist of unclean data. Select rows from index 11 till the end, make the values at index 11 the column headers, and store the result in dataframe `df1`.
# - Remove spaces from the column named 'United States of America' and store the result in `df1['United States of America']`
#
#
# +
import requests
# Code starts here
# Code ends here
# -
# # Mapping Countries to their abbreviations
#
# Using the data scraped in the previous task, map each state name to its abbreviation.
#
#
#
# ## Instructions:
#
# - Using the scraped data, create a variable called `mapping` which has the state name
# as key and the abbreviation as value
# - Create a new column called `abbr` as the 7th column (index = 6) of the DataFrame `df_final`
# - map the `df_final['state']` on variable `mapping` and store it in `df_final['abbr']`
#
#
# +
# Code Starts here
# Code ends here
# -
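# A hedged sketch of the mapping step, with hypothetical inline stand-ins for the scraped table (`df1`) and the bank data (`df_final`) so it runs standalone; the real column names come from the scraped Wikipedia table.

```python
import pandas as pd

# Hypothetical stand-ins: df1 mimics the scraped abbreviation table and
# df_final mimics the bank data with lower-cased state names.
df1 = pd.DataFrame({"United States of America": ["Texas", "Ohio"],
                    "US": ["TX", "OH"]})
mapping = dict(zip(df1["United States of America"], df1["US"]))
df_final = pd.DataFrame({"state": ["texas", "ohio", "tennessee"]})
# title-case the lower-cased names before looking them up in the mapping;
# names absent from the mapping come back as NaN (handled in the next task)
df_final["abbr"] = df_final["state"].str.title().map(mapping)
print(df_final["abbr"].tolist())  # ['TX', 'OH', nan]
```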
# # Filling in the Missing Values
#
# What you will notice in the previous task is that the two states Mississippi and Tennessee have NaN values in the column `abbr`. In this task you will fill in those missing values manually.
#
#
#
# ## Instructions :
# - Locate the NaN in `abbr` for `mississipi`, replace it with `MS`, and store the result in `df_mississipi`
# - Locate the NaN in `abbr` for `tenessee`, replace it with `TN`, and store the result in `df_tenessee`
# - Update `df_final`
#
#
# +
# Code starts here
# Code ends here
# -
# ## Total amount bank hold
#
#
# Here, use the newly created `abbr` column to understand the total amount that the bank holds in each state. Let us make this data frame more readable by introducing units, in this case the `$` sign representing the unit of money.
#
#
#
# ## Instructions :
#
# - Group by `abbr` and find the sum of `Jan`, `Feb`, `Mar` and `total`; store the result in `df_sub`
# - Write a `lambda function` to introduce the `$` sign in front of all the numbers using `applymap` and store the result in `formatted_df`
#
#
#
# +
# Code starts here
# Code ends here
# -
# # Append a row to the DataFrame
#
# In this task, you will append a row to the data frame which gives the total amount of the various regions in Jan, Feb and March, and also the grand total.
#
# ## Instructions :
#
# - Compute the sum of the amounts of all users for the months of Jan, Feb and March and the total, in a variable called `sum_row`
# (here the sum implies the sum of all entries in the `Jan` column, the sum of entries in the `Feb` column, and so on; the grand total stands for the sum of entries in the column `total`)
# - Transpose the dataframe `sum_row` and store it in a new dataframe `df_sub_sum`
# - Make sure you prepend the `$` to all the figures and store the result in dataframe `df_sub_sum`.
# - Append this computed sum to the DataFrame `final_table`
# - Rename the index of `final_table` to `{0: "Total"}`
#
#
#
# +
# Code starts here
# Code ends here
# -
# # Pie chart for total
#
#
# Having prepared all the data, it is now time to present the results visually.
#
# ## Instructions :
# - Add the totals of all three months and store them in `df_sub['total']`
# - Plot the pie chart for `df_sub['total']`
#
#
#
# +
# Code starts here
# Code ends here
# -
| Hackathon_ML Sample Problems/Python Hackathon/python_hackathon/pandas_guided_project/notebook/pandas_guided_project_ca_questions.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import numpy as np
# https://s3.amazonaws.com/assets.datacamp.com/blog_assets/Numpy_Python_Cheat_Sheet.pdf
# Question - what is an array?
# ARR= a grid of values, indexed by integers, all values being the same dtype \
# Arrays can be multidimensional\
# from 0 dimensional to N dimensional\
# 0 dimensional = 'scalar'
#
#why is numpy needed and not a list
lst=[1,2,3,4,5]
#I want to add 2 to each value in my list
# lst + 2 would raise a TypeError: plain lists don't support elementwise arithmetic
#with a list I would need to use a for loop, but with numpy I can do it easily
sum(lst)
#define the same list as an array
arr=np.array([1,2,3,4,5])
#could also have used my_array = np.array(my_list)
# to define I need ([]) format
#now lets add 2 to each value in the list
arr+2
# ### 0 and 1 dimension arrays (and some fun friday maths)
#example of a 0 d array
a=np.array(1)
a.ndim
#use shape to check the shape of the array
a.shape
#example of a 1 d array
b=np.array([1,2,3,4,5])
b.ndim
#try filling the variable b with a different/longer/more diverse array
#just stick to the format v=np.array([#,#,...])
v=np.array([23,4,456,4,65,7,6,6])
#use shape to check the shape of the array
v.shape
#Access the second element from the array
b[1]
#try some basic maths operators with your array:
np.sum(b)
#eg median,mean,diff,max,min,prod,cumsum,cumprod
np.median(b)
np.max(b)
np.cumsum(b)
# https://numpy.org/doc/stable/reference/routines.math.html
# #### Talking of maths ...
# cast your minds back to when you were in school, remember this?
from IPython.display import Image
Image("pythag.png")
#lets revisit those nightmares and do some Pythag!
# define your right angled triangle:
base= 3
perp= 4
#use np to calculate the length of the hypotenuse
hyp=np.hypot(base, perp)
hyp
# +
#check the above using the old fashioned method hint: import math for sqrt
import math
hyp= math.sqrt(base**2 + perp**2)
hyp
# +
#To pick up Simons question on float64... np can explain it visually
p=np.array((0.12345678976543,1,2,3),dtype=np.float16) #1 d array
print(p[0])
# -
q=np.array((0.12345678976543,1,2,3),dtype=np.float32)
print(q[0])
r=np.array((0.12345678976543,1,2,3),dtype=np.float64)
print(r[0])
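# The three float widths trade precision for memory. A quick way to see roughly how many reliable decimal digits each carries is `np.finfo` (a small illustration added here, not part of the original lesson):

```python
import numpy as np

# np.finfo reports the approximate number of reliable decimal digits
# for each floating-point width
for dtype in (np.float16, np.float32, np.float64):
    info = np.finfo(dtype)
    print(dtype.__name__, "->", info.precision, "digits")
```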
# ## Multi dimensional np arrays
#example of a 2d array
c=np.array([[2,2,2,2],[8,7,6,5]])
c.ndim
#try filling the variable c with a different/longer/more diverse array
#just stick to the format v=np.array([[#],[#],...])
Image('2darray.png')
# +
#view the variable c as a matrix of rows and columns
c
# -
#use shape to check the shape of the array d
c.shape
# +
#Access the THIRD element from the second row of the array using v[][]
c[1][2]
#hint: row comes first
# -
# of course a pandas df is also a 2d np array
import pandas as pd
df=pd.read_csv('web_data.csv')#read any csv file here
df.ndim
#check the shape - which is first in (n,n), rows or columns? why?
df.shape
# +
#as with pandas we can apply maths operators to the 2d array c
c.sum()
#try defining the axis along which you wish to sum in ()
c.sum(axis=1) # axis=0 sums down each column; axis=1 sums across each row
# experiment with the other maths operators as for the 1d array
# +
#whats the min and max of each column of the array?
c.min(axis=0)
# -
c.max(axis=0)
# +
#whats the difference between the variance in each row of the array
# as compared to the overall array variance?
diff= c.var()-c.var(axis=1)
diff
##the result is an array:
#like [ c.var()-c.var(axis=1)[0] , c.var()-c.var(axis=1)[1] ]
# -
#example of a 3d array
d=np.array([[[1,2,3], [4,4,4], [1,2,1], [3,2,2]], [[5,6,7], [8,7,8], [9,9,9], [5,5,6]]])
d.ndim
# a 3d array can be imagined as n x 2 d arrays
#view the variable d as a matrix of rows and columns
d
#use shape to check the shape of the array
d.shape
Image("3darray.png")
# It can be hard to envisage a 3d array, so think of it like a cube
#
# this might make for interesting reading on cubes: \
# https://www.holistics.io/blog/the-rise-and-fall-of-the-olap-cube/
#another way to make a 3d array - using rand
e = np.random.rand(3,3,3)
e
#random.rand - create an array of the given shape and populate it with random samples from a uniform distribution over [0, 1).
#confirm the no of dimensions of e
#create a 3d array of only zeros, hint: ask google
e = np.zeros((3, 4,3)) # into brackets i report the shape of my 3d array
e
# ## Class experiment
#
# * let's define a 2d array together by capturing some data from each other
# * students will be organised into 4 breakout rooms of 6
# * Each group picks a data question from this [list](https://docs.google.com/spreadsheets/d/10Jj_w8Klj9u7NoPxqq5CqSbj0MPTDfKMHsOfkJ6nUM0/edit?usp=sharing) and collects data points from each student in the group to form a 1d array
# * each group will provide a 6 digit 1d array via the chat
# * everyone builds the same class 2d array
# * we will know the groups/members but not which question was chosen
# * through the process of analysing the 2d array we put together, can we guess which questions were chosen by each group?
#
#
classmates= np.array([[20,2,2,7,2,6],[6,3,4,3,3,3],[4,3,4,3,4,4],[29,1,2,7,5,8]])
classmates
classmates.min(axis=1)
classmates.max(axis=1)
classmates.var()-classmates.var(axis=1) #difference between overall variance and each row's variance
| class_1/Playing_with_NumPy_arrays.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# <hr>
#
# ## Python Strings
#
# <hr>
# #### Strings:
#
# * Can be declared with single, double or triple quotes in Python.
# * A string is a built-in immutable datatype.
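# Immutability in action (a quick illustration): item assignment fails, so a "changed" string is always a new object.

```python
name = "Ajay"
try:
    name[0] = "B"          # strings do not support item assignment
except TypeError as err:
    print("immutable:", err)
new_name = "B" + name[1:]  # build a new string instead
print(new_name)  # Bjay
```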
# +
# Declaration of a string
name = "Ajay"
print(f"String Content : ", name)
print(f"Type of string : ", type(name))
# +
# single quote
single_quote_name = 'A"jay"'
double_quote_name = "A'jay'"
triple_quote_name = """
We can write
a very long name
in multiple lines.
"""
print(f"Single quote string : {single_quote_name}")
print(f"Double quote string : {double_quote_name}")
print(f"Single quote string : {triple_quote_name}")
# -
# ##### As the above cases show, declaring a string by any of these means gives the same result.
# ##### Inside single quotes, double quotes can appear, and vice versa.
# <hr>
#
# ## String indexing and slicing
#
# <hr>
# #### String indexing:
#
# * string[index] --> Gives a particular character.
#
# #### String slicing:
# * string[start : stop : step] --> Gives slice(part) of a string.
# * start --> defaults to 0 (included).
# * stop --> defaults to end_index (not included).
# * step --> defaults to 1 --> the increment between successive indices.
# +
name = "Gift"
print("="*30, "Indexing", "="*30, sep="")
print(f"name[0] : ", name[0])
print(f"name[3] : ", name[3])
print(f"name[-1]: ", name[-1])
print("="*30, "Slicing", "="*30, sep="")
print(f"Name from 0 to 2 : {name[0:2]}")
print(f"Name from 0 to 3 : {name[0:3]}")
print(f"Name from -3 to -1 : {name[-3:-1]}")
# This actually gives empty string as starting can't occur before ending
print(f"Name from -1 to -3 : {name[-1:-3]}")
print(f"Name with dropping indices : {name[:2]}")
print(f"Name with dropping indices : {name[2:]}")
# Reverse a string
print(f"Reverse a string : {name[::-1]}")
# -
# <hr>
#
# ## Escape Sequences
#
# <hr>
# #### Escape Sequences:
# * Any character that follows \ will be escaped by the Python interpreter.
# * \n \t \' \" are some of the popular escape sequences.
#
# +
# \n adds a new line
print("Hi\nHow aye you?")
# printing double quotes even if the string is declared using double quotes.
# Maybe used when all programmers use "" for all strings and you escape the inner quotes to maintain consistency.
print("\"Pro\"grammer")
# -
# <hr>
#
# ## Formatted Strings & Raw Strings
#
# <hr>
# ##### f-strings:
# * Used to substitute variables in a string.
# * Very useful during printing or dealing with dynamic urls.
#
# ##### r-strings:
# * In a raw string, the \ escape sequences no longer work.
# * Very useful with file paths and regular expressions.
# +
# f-string
first = "Rajershi"
last = "Gupta"
print(f"His name is {first} {last}")
# r-strings
print(r"C:\Users\teamsquarebox")
# -
# <hr>
#
# ## Note:
# If something is not clear, try it once again or reach out to me.
# Also if some topics in this are not discussed in the videos, they will be covered soon in future videos.
#
# <hr>
# <hr>
#
# ## For better understanding please watch the video at the Square Box channel.
#
#
# ### Please share with others who might find it useful.
#
# #### In case of any suggestions mail to: <EMAIL>
#
# <hr>
| Learning Python - 2 | Strings | Indexing & Slicing | {f,r}-strings.ipynb |
# ---
# jupyter:
# jupytext:
# cell_metadata_filter: -run_control,-deletable,-editable,-jupyter,-slideshow
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# <p><font size="6"><b>Stacking of raster data</b></font></p>
#
#
# > *DS Python for GIS and Geoscience*
# > *October, 2021*
# >
# > *© 2021, <NAME> and <NAME>. Licensed under [CC BY 4.0 Creative Commons](http://creativecommons.org/licenses/by/4.0/)*
#
# ---
# +
import shutil
from pathlib import Path
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import xarray as xr
# -
# ## Introduction
# Geospatial time series data is often stored as multiple individual files. For example, remote sensing data or geoscience model output are typically organized with each time step (or band) in a separate file. Handling all these individual files is cumbersome, so workflows to combine these files into a single `xarray.Dataset` or `xarray.DataArray` prior to the analysis are required.
#
# In this notebook, we will explore some ways to combine individual files into a single data product ready for analysis.
# ## Load multiple files into a single `xarray.Dataset/xarray.DataArray`
# In some of the previous notebooks, we used the band 4 and band 8 Sentinel image data from Ghent, which are both stored as a separate data file in the `data` directory.
#
# One way to handle this is to load each of the data sets into memory and concatenate these afterwards:
arr_b4 = xr.open_rasterio("data/gent/raster/2020-09-17_Sentinel_2_L1C_B04.tiff")
arr_b8 = xr.open_rasterio("data/gent/raster/2020-09-17_Sentinel_2_L1C_B08.tiff")
band_var = xr.Variable('band', ["B4", "B8"])
arr = xr.concat([arr_b4, arr_b8], dim=band_var)
arr
# From now on, the data is contained in a single `DataArray` to do further analysis. This approach works just fine for this limited set of data.
#
# However, when more files need to be processed, this becomes labor/code intensive and additional automation is required. Consider the following example data in the data folder `./data/herstappe/raster/sentinel_moisture/`:
#
# ```
# ./data/herstappe/raster/sentinel_moisture/
# โโโ 2016-05-01, Sentinel-2A L1C, Moisture index.tiff
# โโโ 2019-02-15, Sentinel-2A L1C, Moisture index.tiff
# โโโ 2020-02-07, Sentinel-2A L1C, Moisture index.tiff
# ```
#
# It is a (small) extract of a time series of moisture index data derived from sentinel-2A, made available by [Sentinel-Hub](https://apps.sentinel-hub.com/eo-browser), a time series of remote sensing images.
#
# Instead of manually loading the data, we rather automate the data load from these files to a single xarray object:
# 1. Identify all files in the data folder and make a list of them:
from pathlib import Path
moisture_index_files = list(Path("./data/herstappe/raster/sentinel_moisture").rglob("*.tiff"))
# 2. Extract the time-dimension from each individual file name
moisture_index_dates = [pd.to_datetime(file_name.stem.split(",")[0]) for file_name in moisture_index_files]
moisture_index_dates
# __Note__ we use `pathlib` instead of `glob.glob` as it returns `Path` objects instead to represent the file names which are more powerful than regular strings returned by `glob.glob`, e.g. usage of `stem` attribute.
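# For instance, a `Path` object's `stem` and `suffix` attributes make the date extraction a one-liner (illustrated here on one of the file names listed above):

```python
from pathlib import Path

# one of the moisture-index file names from the folder listing above
p = Path("data/herstappe/raster/sentinel_moisture/"
         "2016-05-01, Sentinel-2A L1C, Moisture index.tiff")
print(p.stem.split(",")[0])  # 2016-05-01
print(p.suffix)              # .tiff
```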
# 3. Prepare an xarray variable which can be used as the additional date dimension/coordinate
date_var = xr.Variable('date', moisture_index_dates)
date_var
# 4. Load in and concatenate all individual GeoTIFFs
moisture_index = xr.concat([xr.open_rasterio(file_name) for file_name in moisture_index_files], dim=date_var)
moisture_index
moisture_index.sortby("date").sum(dim="band").plot.imshow(
col="date", cmap="BrBG", figsize=(15, 4), aspect=1)
# ## Lazy load multiple files into a single `xarray.Dataset`
# In the previous example, all data is read into memory. Xarray provides a separate function [`open_mfdataset`](http://xarray.pydata.org/en/stable/generated/xarray.open_mfdataset.html#xarray-open-mfdataset) to read data lazy from disk (so not loading the data itself in memory) from multiple files.
#
# A useful feature is the ability to preprocess the files:
#
# > __preprocess (callable(), optional)__ โ If provided, call this function on each dataset prior to concatenation. You can find the file-name from which each dataset was loaded in ds.encoding["source"].
#
# Applied to the previous moisture index files example:
def add_date_dimension(ds):
"""Add the date dimension derived from the file_name and rename to moisture_index"""
ds_date = pd.to_datetime(Path(ds.encoding["source"]).stem.split(",")[0])
ds = ds.assign_coords(date=("date", [ds_date])).rename({"band_data": "moisture_index"})
return ds
moisture_index_lazy = xr.open_mfdataset(Path("./data/herstappe/raster/sentinel_moisture").rglob("*.tiff"),
preprocess=add_date_dimension, engine="rasterio", decode_cf=False) # parallel=True
moisture_index_lazy["moisture_index"]
# The data itself is not loaded directly and is divided into 3 chunks, i.e. a chunk for each date. See the notebook [15-xarray-dask-big-data](./15-xarray-dask-big-data.ipynb) notebook for more information on the processing of (out of memory) lazy data with Dask.
# Further reading:
#
# - See http://xarray.pydata.org/en/stable/user-guide/io.html#reading-multi-file-datasets for more examples.
# - https://medium.com/@bonnefond.virginie/handling-multi-temporal-satellite-images-with-xarray-30d142d3391
# - https://docs.dea.ga.gov.au/notebooks/Frequently_used_code/Opening_GeoTIFFs_NetCDFs.html#Loading-multiple-files-into-a-single-xarray.Dataset
# ## Save concatenated data to a single file
# After processing multiple files, it is convenient to save the data in a preferred format afterwards. Convenient choices are [NetCDF](https://www.unidata.ucar.edu/software/netcdf/) and [Zarr](https://zarr.readthedocs.io/en/stable/). Zarr is a newer format providing some advantages when working in cloud environments, but can be used on a local machine as well.
moisture_index.to_netcdf("moisture_index_stacked.nc")
# Hence, the next time the data set can be loaded directly from disk:
xr.open_dataarray("moisture_index_stacked.nc", engine="netcdf4")
# Storing to zarr files works on the `xarray.DataSet` level:
moisture_index_lazy.to_zarr("moisture_index_stacked.zarr")
xr.open_dataset("moisture_index_stacked.zarr", engine="zarr")
# _clean up of these example files_
# +
import shutil
if Path("moisture_index_stacked.zarr").exists():
shutil.rmtree("moisture_index_stacked.zarr")
if Path("moisture_index_stacked.nc").exists():
Path("moisture_index_stacked.nc").unlink()
# -
# <div class="alert alert-success">
#
# **EXERCISE**:
#
# The [NOAA's NCEP Reanalysis data](https://psl.noaa.gov/data/gridded/data.ncep.reanalysis.html) files are stored on a remote server and can be accessed over OpenDAP.
#
# > The NCEP/NCAR Reanalysis data set is a continually updated (1948โpresent) globally gridded data set that represents the state of the Earth's atmosphere, incorporating observations and numerical weather prediction (NWP) model output from 1948 to present.
#
# An example can be found in NCEP Reanalysis catalog:
#
# https://www.esrl.noaa.gov/psd/thredds/catalog/Datasets/ncep.reanalysis/surface/catalog.html
#
# The dataset is split into different files for each variable and year. For example, a single file download link for surface air temperature looks like:
#
# https://psl.noaa.gov/thredds/fileServer/Datasets/ncep.reanalysis/surface/air.sig995.1948.nc
#
# The structure is `'http://www.esrl.noaa.gov/psd/thredds/dodsC/Datasets/ncep.reanalysis/surface/air.sig995.'` + `'YYYY'` + `'.nc'`
#
# We want to download the surface temperature data from 1990 till 2000 and combine them all in a single xarray DataSet. To do so:
#
# - Prepare all the links by composing the base_url ('http://www.esrl.noaa.gov/psd/thredds/dodsC/Datasets/ncep.reanalysis/surface/air.sig995') with the required years
# - Use the list of file links as the inputfor the `xr.open_mfdataset` to create a single `xarray.DataSet`.
# - While this is 600MB of data, the initial loading does not actually read the data in.
#
# <details>
#
# <summary>Hints</summary>
#
# * Python works with string formatting, e.g. f'{base_url}.{year}.nc' will nicely create the required links.
# * Xarray can both work with file names on a computer as a compatible network link.
# * As the netcdf data provided by NOAA is already well structured and conform, no further adjustments are required as input to the
# `open_mfdataset` function :-)
#
# </details>
#
# </div>
# + tags=["nbtutor-solution"]
base_url = 'http://www.esrl.noaa.gov/psd/thredds/dodsC/Datasets/ncep.reanalysis/surface/air.sig995'
files = [f'{base_url}.{year}.nc' for year in range(1990, 2001)]
files
# + tags=["nbtutor-solution"]
ds = xr.open_mfdataset(files, parallel=True)
ds
# -
# ## (Optional) Online data Catalogs: STAC
# __Note__ _These dependencies are not included in the environment, to run this section, install the required packages first in your conda environment: `conda install stackstac pystac-client=0.1.1`._
# Multiple initiatives do exist which publish data online which enables (lazy) loading of the data directly in xarray, such as [OpenDAP](https://www.opendap.org/) and [THREDDS](https://www.unidata.ucar.edu/software/tds/current/) which are well-known and used in the oceanographic and climate studies communities (see exercise). See for example the [ROMS Ocean Model Example](http://xarray.pydata.org/en/stable/examples/ROMS_ocean_model.html) tutorial of xarray.
#
# Another initiative that interacts well with xarray is the [SpatioTemporal Asset Catalogs](https://stacspec.org/) specification, which is increasingly used to publish remote sensing products.
import stackstac
import pystac_client
lon, lat = -105.78, 35.79
URL = "https://earth-search.aws.element84.com/v0"
catalog = pystac_client.Client.open(URL)
results = catalog.search(
intersects=dict(type="Point", coordinates=[lon, lat]),
collections=["sentinel-s2-l2a-cogs"],
datetime="2020-04-01/2020-05-01"
)
results.matched()
list(results.items())[0]
stacked = stackstac.stack(results.items_as_collection())
stacked
# See also https://github.com/stac-utils/pystac-client and https://stackstac.readthedocs.io/en/latest/.
# __Acknowledgements__ Thanks to [@rabernat](https://rabernat.github.io/research_computing_2018/xarray-tips-and-tricks.html) for the example case of the NCEP reanalysis data load and https://stackstac.readthedocs.io/en/latest/basic.html#Basic-example for the stackstac example.
| notebooks/14-combine-data.ipynb |
# -*- coding: utf-8 -*-
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .jl
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Julia 1.3.1
# language: julia
# name: julia-1.3
# ---
# ## Optimal Power Flow
# _**[Power Systems Optimization](https://github.com/east-winds/power-systems-optimization)**_
#
# _by <NAME>, <NAME>, and <NAME>_
#
# This notebook consists of an introductory glimpse of, and a few hands-on activities and demonstrations of, the Optimal Power Flow (OPF) problem, which minimizes the short-run production costs of meeting electricity demand from a given set of generators subject to various technical and flow limit constraints.
#
# We will talk about a single time period, simple generator constraints, and line flow limit constraints (while modeling the network flows as dictated by the laws of physics). This adds a layer of complexity and sophistication on top of the Economic Dispatch (ED) problem.
#
# Since we will only discuss single time-period version of the problem, we will not be considering inter-temporal constraints, like ramp-rate limits. However, this model can easily be extended to allow for such constraints.
#
# We will start off with some simple systems, whose solutions can be worked out manually without resorting to any mathematical optimization model and software. But, eventually we will be solving larger system, thereby emphasizing the importance of such software and mathematical models.
# ## Introduction to OPF
#
# Optimal Power Flow (OPF) is a power system optimal scheduling problem which fully captures the physics of electricity flows. This adds a layer of complexity and gives a more realistic version of the Economic Dispatch (ED) problem. It usually attempts to capture the entire network topology by representing the interconnections between the different nodes through transmission lines and also representing the electrical parameters of the lines, like the resistance, series reactance, shunt admittance etc. However, the full-blown "AC" OPF turns out to be an extremely hard problem to solve (usually NP-hard). Hence, system operators and power marketers usually solve a linearized version of it, called the DC-OPF. The DC-OPF approximation works satisfactorily for bulk power transmission networks as long as such networks are not operated at the brink of instability or under very heavily loaded conditions.
# ## Single-time period, simple generator constraints
# We will first examine the case where we are optimizing dispatch for a single snapshot in time, with only very simple constraints on the generators.
#
#
# $$
# \begin{align}
# \mathbf{Objective\;Function:}\min_{P_g}\sum_{g\in{G}}C_{g}(P_{g})\longleftarrow\mathbf{power\;generation\; cost}\\
# \mathbf{Subject\;to:\:}{\underline{P}_{g}}\leqslant{P_{g}}\leqslant{{\overline{P}_{g}}},\;\forall{g\in{G}}\longleftarrow\mathbf{MW\; generation\; limits}\\
# P_{g(i)}-P_{d(i)}\longleftarrow\mathbf{real\; power\; injection}\notag\\=\sum_{j\in J(i)}B_{ij}(\theta_j-\theta_i),\;\forall{{i}\in\mathcal{N}}\\
# |P_{ij}|\leqslant{\overline{P}_{ij}},\;\forall{ij}\in{T}\longleftarrow\mathbf{MW\; line\; limit}\\
# \end{align}
# $$
#
# The **decision variables** in the above problem are:
#
# - $P_{g}$, the generation (in MW) produced by each generator, $g$
# - $\theta_i$, $\theta_j$ the voltage phase angle of each bus/node, $i,j$
#
# The **parameters** are:
#
# - ${\underline{P}_{g}}$, the minimum operating bounds for the generator (based on engineering or natural resource constraints)
# - ${\overline{P}_{g}}$, the maximum operating bounds for the generator (based on engineering or natural resource constraints)
# - $P_{d(i)}$, the demand (in MW) at node $i$
# - ${\overline{P}_{ij}}$, the line-flow limit for line connecting buses $i$ and $j$
# - $B_{ij}$, susceptance for line connecting buses $i$ and $j$
#
# Just like in the ED problem, here also we can safely ignore fixed costs for the purposes of finding the optimal dispatch.
#
# With that, let's implement OPF.
# # 1. Load packages
# New packages introduced in this tutorial (uncomment to download the first time)
import Pkg; Pkg.add("PlotlyBase")
using JuMP, GLPK
using Plots; plotly();
using VegaLite # to make some nice plots
using DataFrames, CSV, PrettyTables
ENV["COLUMNS"]=120; # Set so all columns of DataFrames and Matrices are displayed
# ### 2. Load and format data
#
# We will use data for the IEEE 118-bus test case, and for two other test cases: a 3-bus and a 2-bus system:
#
# - generator cost curve, power limit data, and connection-node
# - load demand data with MW demand and connection node
# - transmission line data with resistance, reactance, line MW capacity, from, and to nodes
# +
datadir = joinpath("OPF_data")
# Note: joinpath is a good way to create path reference that is agnostic
# to what file system you are using (e.g. whether directories are denoted
# with a forward or backwards slash).
gen_info = CSV.read(joinpath(datadir,"Gen118.csv"), DataFrame);
line_info = CSV.read(joinpath(datadir,"Tran118.csv"), DataFrame);
loads = CSV.read(joinpath(datadir,"Load118.csv"), DataFrame);
# Rename all columns to lowercase (by convention)
for f in [gen_info, line_info, loads]
rename!(f,lowercase.(names(f)))
end
# -
#=
Function to solve the Optimal Power Flow (OPF) problem (single time period)
Inputs:
    gen_df -- dataframe with generator info
    line_info -- dataframe with transmission line info
    loads -- dataframe with load info
    gen_variable -- dataframe with capacity factors for variable generators
Note: it is always a good idea to include a comment block describing your
function's inputs clearly!
=#
function OPF_single(gen_df, line_info, loads, gen_variable)
OPF = Model(GLPK.Optimizer) # You could use Clp as well, with Clp.Optimizer
# Define sets based on data
# A set of all variable generators
G_var = gen_df[gen_df[!,:is_variable] .== 1,:r_id]
# A set of all non-variable generators
G_nonvar = gen_df[gen_df[!,:is_variable] .== 0,:r_id]
# Set of all generators
G = gen_df.r_id
# Extract some parameters given the input data
# Generator capacity factor time series for variable generators
gen_var_cf = innerjoin(gen_variable,
gen_df[gen_df.is_variable .== 1 ,
[:r_id, :gen_full, :existing_cap_mw]],
on = :gen_full)
# Decision variables
    @variables(OPF, begin
        GEN[G] >= 0 # generation
        # Note: we assume Pmin = 0 for all resources for simplicity here
    end)
# Objective function
    @objective(OPF, Min,
sum( (gen_df[i,:heat_rate_mmbtu_per_mwh] * gen_df[i,:fuel_cost] +
gen_df[i,:var_om_cost_per_mwh]) * GEN[i]
for i in G_nonvar) +
sum(gen_df[i,:var_om_cost_per_mwh] * GEN[i]
for i in G_var)
)
# Demand constraint
    @constraint(OPF, cDemand,
sum(GEN[i] for i in G) == loads[1,:demand])
# Capacity constraint (non-variable generation)
for i in G_nonvar
        @constraint(OPF, GEN[i] <= gen_df[i,:existing_cap_mw])
end
# Variable generation capacity constraint
for i in 1:nrow(gen_var_cf)
        @constraint(OPF, GEN[gen_var_cf[i,:r_id] ] <=
gen_var_cf[i,:cf] *
gen_var_cf[i,:existing_cap_mw])
end
    # Solve statement (the ! suffix indicates the function mutates its argument)
    optimize!(OPF)
# Dataframe of optimal decision variables
solution = DataFrame(
r_id = gen_df.r_id,
resource = gen_df.resource,
gen = value.(GEN).data
)
# Return the solution and objective as named tuple
return (
solution = solution,
        cost = objective_value(OPF),
)
end
| Notebooks/06-OPF-problem_other.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # World-level language modeling RNN
# - https://github.com/pytorch/examples/tree/master/word_language_model
import os
import time
import math
import numpy as np
import torch
import torch.nn as nn
seed = 1111
cuda = False
data = './data/wikitext-2'
batch_size = 20
torch.manual_seed(seed)
device = torch.device('cuda' if cuda else 'cpu')
device
class Dictionary(object):
def __init__(self):
self.word2idx = {}
self.idx2word = []
def add_word(self, word):
if word not in self.word2idx:
self.idx2word.append(word)
self.word2idx[word] = len(self.idx2word) - 1
return self.word2idx[word]
def __len__(self):
return len(self.idx2word)
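# For example, adding the same word twice returns its original index (the class
# is restated here so the snippet runs standalone):

```python
class Dictionary:
    """Minimal restatement of the word <-> index mapping above."""
    def __init__(self):
        self.word2idx = {}
        self.idx2word = []

    def add_word(self, word):
        # A repeated word keeps the index it was first assigned
        if word not in self.word2idx:
            self.idx2word.append(word)
            self.word2idx[word] = len(self.idx2word) - 1
        return self.word2idx[word]

d = Dictionary()
print(d.add_word("hello"), d.add_word("world"), d.add_word("hello"))  # 0 1 0
print(d.idx2word[1])  # world
```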
class Corpus(object):
def __init__(self, path):
self.dictionary = Dictionary()
self.train = self.tokenize(os.path.join(path, 'train.txt'))
self.valid = self.tokenize(os.path.join(path, 'valid.txt'))
self.test = self.tokenize(os.path.join(path, 'test.txt'))
def tokenize(self, path):
assert os.path.exists(path)
with open(path, 'r', encoding='utf-8') as f:
tokens = 0
for line in f:
words = line.split() + ['<eos>']
tokens += len(words)
for word in words:
self.dictionary.add_word(word)
with open(path, 'r', encoding='utf-8') as f:
ids = torch.LongTensor(tokens)
token = 0
for line in f:
words = line.split() + ['<eos>']
for word in words:
ids[token] = self.dictionary.word2idx[word]
token += 1
return ids
corpus = Corpus(data)
corpus.train[:10]
[corpus.dictionary.idx2word[i] for i in corpus.train[:100]]
def batchify(data, bsz):
nbatch = data.size(0) // bsz
data = data.narrow(0, 0, nbatch * bsz)
data = data.view(bsz, -1).t().contiguous()
return data.to(device)
eval_batch_size = 10
train_data = batchify(corpus.train, batch_size)
val_data = batchify(corpus.valid, eval_batch_size)
test_data = batchify(corpus.test, eval_batch_size)
print(train_data.size())
print(val_data.size())
print(test_data.size())
[corpus.dictionary.idx2word[i] for i in train_data[:, 0][:10]]
xx = torch.from_numpy(np.arange(20))
xx = xx.view(4, 5).t().contiguous()
xx
ntokens = len(corpus.dictionary)
ntokens
model = 'LSTM'
emsize = 200 # size of word embeddings
nhid = 200 # number of hidden units per layer
nlayers = 2 # number of layers
dropout = 0.2
tied = False
class RNNModel(nn.Module):
def __init__(self, rnn_type, ntoken, ninp, nhid, nlayers, dropout=0.5, tie_weights=False):
super(RNNModel, self).__init__()
self.drop = nn.Dropout(dropout)
self.encoder = nn.Embedding(ntoken, ninp)
if rnn_type in ['LSTM', 'GRU']:
self.rnn = getattr(nn, rnn_type)(ninp, nhid, nlayers, dropout=dropout)
else:
try:
nonlinearity = {'RNN_TANH': 'tanh', 'RNN_RELU': 'relu'}[rnn_type]
except KeyError:
raise ValueError('invalid nonlinearity')
self.rnn = nn.RNN(ninp, nhid, nlayers, nonlinearity=nonlinearity, dropout=dropout)
self.decoder = nn.Linear(nhid, ntoken)
if tie_weights:
if nhid != ninp:
raise ValueError('When using the tied flag, nhid must be equal to emsize')
self.decoder.weight = self.encoder.weight
self.init_weights()
self.rnn_type = rnn_type
self.nhid = nhid
self.nlayers = nlayers
def init_weights(self):
initrange = 0.1
self.encoder.weight.data.uniform_(-initrange, initrange)
self.decoder.bias.data.zero_()
self.decoder.weight.data.uniform_(-initrange, initrange)
def forward(self, input, hidden):
emb = self.drop(self.encoder(input))
output, hidden = self.rnn(emb, hidden)
output = self.drop(output)
decoded = self.decoder(output.view(output.size(0) * output.size(1), output.size(2)))
return decoded.view(output.size(0), output.size(1), decoded.size(1)), hidden
def init_hidden(self, bsz):
weight = next(self.parameters())
if self.rnn_type == 'LSTM':
# (num_layers, batch, hidden_size)
return (weight.new_zeros(self.nlayers, bsz, self.nhid),
weight.new_zeros(self.nlayers, bsz, self.nhid))
else:
return weight.new_zeros(self.nlayers, bsz, self.nhid)
print(getattr(nn, 'LSTM'))
print(getattr(nn, 'GRU'))
print(getattr(nn, 'Conv2d'))
model = RNNModel(model, ntokens, emsize, nhid, nlayers, dropout, tied).to(device)
model
criterion = nn.CrossEntropyLoss()
bptt = 35
def get_batch(source, i):
seq_len = min(bptt, len(source) - 1 - i)
data = source[i:i+seq_len]
    # The target is the next element of the source sequence,
    # flattened with view(-1) into a 1D tensor for the loss computation
target = source[i+1:i+1+seq_len].view(-1)
return data, target
xx = torch.from_numpy(np.arange(20))
xx = xx.view(4, 5).t().contiguous()
bptt = 3
print(xx)
data, target = get_batch(xx, 3)
print(data)
print(target)
# +
bptt = 35
clip = 0.25 # gradient clipping
log_interval = 200
def repackage_hidden(h):
"""Wraps hidden states in new Tensors, to detach them from their history."""
if isinstance(h, torch.Tensor):
return h.detach()
else:
return tuple(repackage_hidden(v) for v in h)
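# What repackage_hidden does can be illustrated without torch: a stand-in
# tensor class (hypothetical, for illustration only) shows that the values
# survive but the autograd history is dropped:

```python
class FakeTensor:
    """Stand-in for torch.Tensor: detach() keeps the value, drops the history."""
    def __init__(self, value, has_history=True):
        self.value = value
        self.has_history = has_history

    def detach(self):
        return FakeTensor(self.value, has_history=False)

def repackage(h):
    # Same recursion as repackage_hidden above: tensors are detached,
    # tuples (e.g. the (h, c) pair an LSTM returns) are rebuilt element-wise
    if isinstance(h, FakeTensor):
        return h.detach()
    return tuple(repackage(v) for v in h)

hidden = (FakeTensor(1.0), FakeTensor(2.0))  # LSTM-style (h, c) pair
new_hidden = repackage(hidden)
print([t.value for t in new_hidden], [t.has_history for t in new_hidden])
```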
def train():
model.train()
total_loss = 0.
start_time = time.time()
ntokens = len(corpus.dictionary)
hidden = model.init_hidden(batch_size)
for batch, i in enumerate(range(0, train_data.size(0) - 1, bptt)):
data, targets = get_batch(train_data, i)
        # Process the sequence data batch by batch.
        # The hidden state from the previous batch seeds this one, but we detach
        # it first so truncated BPTT does not backpropagate across batch boundaries.
hidden = repackage_hidden(hidden)
model.zero_grad()
output, hidden = model(data, hidden)
loss = criterion(output.view(-1, ntokens), targets)
loss.backward()
torch.nn.utils.clip_grad_norm_(model.parameters(), clip)
        # Manual SGD update (no optimizer object)
        for p in model.parameters():
            p.data.add_(p.grad.data, alpha=-lr)
total_loss += loss.item()
if batch % log_interval == 0 and batch > 0:
cur_loss = total_loss / log_interval
elapsed = time.time() - start_time
print('| epoch {:3d} | {:5d}/{:5d} batches | lr {:02.2f} | ms/batch {:5.2f} | loss {:5.2f} | ppl {:8.2f}'.format(
epoch, batch, len(train_data) // bptt, lr,
elapsed * 1000 / log_interval,
cur_loss,
math.exp(cur_loss)))
total_loss = 0
start_time = time.time()
# -
def evaluate(data_source):
model.eval()
total_loss = 0.0
ntokens = len(corpus.dictionary)
hidden = model.init_hidden(eval_batch_size)
with torch.no_grad():
for i in range(0, data_source.size(0) - 1, bptt):
data, targets = get_batch(data_source, i)
output, hidden = model(data, hidden)
output_flat = output.view(-1, ntokens)
total_loss += len(data) * criterion(output_flat, targets).item()
hidden = repackage_hidden(hidden)
return total_loss / len(data_source)
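# The training and evaluation loops report perplexity as exp(loss), i.e. the
# exponential of the average cross-entropy in nats. A quick sanity check: a
# model that is uniformly uncertain over V tokens has loss log(V) and
# perplexity exactly V:

```python
import math

V = 100             # vocabulary size for this thought experiment
loss = math.log(V)  # average cross-entropy of a uniform predictor
ppl = math.exp(loss)
print(round(ppl))   # 100
```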
lr = 20
epochs = 40
best_val_loss = None
save = 'model.pt'
try:
for epoch in range(1, epochs + 1):
epoch_start_time = time.time()
train()
val_loss = evaluate(val_data)
print('-' * 89)
print('| end of epoch {:3d} | time: {:5.2f}s | valid loss {:5.2f} | valid ppl {:8.2f}'.format(epoch, (time.time() - epoch_start_time), val_loss, math.exp(val_loss)))
print('-' * 89)
if not best_val_loss or val_loss < best_val_loss:
with open(save, 'wb') as f:
torch.save(model, f)
best_val_loss = val_loss
else:
lr /= 4.0
except KeyboardInterrupt:
print('-' * 89)
print('Exiting from training early')
| 180728-language-model.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3.6
# language: python
# name: python36
# ---
# Copyright (c) Microsoft Corporation. All rights reserved.
#
# Licensed under the MIT License.
# # Automated Machine Learning
# _**Energy Demand Forecasting**_
#
# ## Contents
# 1. [Introduction](#Introduction)
# 1. [Setup](#Setup)
# 1. [Data](#Data)
# 1. [Train](#Train)
# ## Introduction
# In this example, we show how AutoML can be used for energy demand forecasting.
#
# Make sure you have executed the [configuration](../../../configuration.ipynb) before running this notebook.
#
# In this notebook you will see
# 1. Creating an Experiment in an existing Workspace
# 2. Instantiating AutoMLConfig with the new task type "forecasting" for time-series training, along with other time-series settings; for this dataset we use only the basic one, "time_column_name"
# 3. Training the Model using local compute
# 4. Exploring the results
# 5. Testing the fitted model
# ## Setup
#
# +
import azureml.core
import pandas as pd
import numpy as np
import logging
import warnings
# Squash warning messages for cleaner output in the notebook
warnings.showwarning = lambda *args, **kwargs: None
from azureml.core.workspace import Workspace
from azureml.core.experiment import Experiment
from azureml.train.automl import AutoMLConfig
from matplotlib import pyplot as plt
from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score
# -
# As part of the setup you have already created a <b>Workspace</b>. For AutoML you would need to create an <b>Experiment</b>. An <b>Experiment</b> is a named object in a <b>Workspace</b>, which is used to run experiments.
# +
ws = Workspace.from_config()
# choose a name for the run history container in the workspace
experiment_name = 'automl-energydemandforecasting'
# project folder
project_folder = './sample_projects/automl-local-energydemandforecasting'
experiment = Experiment(ws, experiment_name)
output = {}
output['SDK version'] = azureml.core.VERSION
output['Subscription ID'] = ws.subscription_id
output['Workspace'] = ws.name
output['Resource Group'] = ws.resource_group
output['Location'] = ws.location
output['Project Directory'] = project_folder
output['Run History Name'] = experiment_name
pd.set_option('display.max_colwidth', None)
outputDf = pd.DataFrame(data = output, index = [''])
outputDf.T
# -
# ## Data
# Read the energy demand data from file, and preview it.
data = pd.read_csv("nyc_energy.csv", parse_dates=['timeStamp'])
data.head()
# ### Split the data into train and test sets
#
#
train = data[data['timeStamp'] < '2017-02-01']
test = data[data['timeStamp'] >= '2017-02-01']
# ### Prepare the test data; we will feed X_test to the fitted model and get predictions
y_test = test.pop('demand').values
X_test = test
# ### Split the train data into train and validation sets
#
# Use one month's data as the validation set
#
X_train = train[train['timeStamp'] < '2017-01-01']
X_valid = train[train['timeStamp'] >= '2017-01-01']
y_train = X_train.pop('demand').values
y_valid = X_valid.pop('demand').values
print(X_train.shape)
print(y_train.shape)
print(X_valid.shape)
print(y_valid.shape)
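# Note that for forecasting these splits are chronological cutoffs, not random
# samples. Because the timestamps are ISO-8601 formatted, even the raw strings
# sort chronologically, as this toy example (illustrative rows only) shows:

```python
rows = [("2016-12-15", 5.0), ("2017-01-10", 6.0), ("2017-02-03", 7.0)]

# ISO-8601 date strings compare lexicographically in chronological order,
# so the same cutoff logic as the pandas filters above works on plain strings
train_rows = [r for r in rows if r[0] < "2017-01-01"]
valid_rows = [r for r in rows if "2017-01-01" <= r[0] < "2017-02-01"]
test_rows = [r for r in rows if r[0] >= "2017-02-01"]
print(len(train_rows), len(valid_rows), len(test_rows))  # 1 1 1
```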
# ## Train
#
# Instantiate an AutoMLConfig object. This defines the settings and data used to run the experiment.
#
# |Property|Description|
# |-|-|
# |**task**|forecasting|
# |**primary_metric**|This is the metric that you want to optimize.<br> Forecasting supports the following primary metrics: <br><i>spearman_correlation</i><br><i>normalized_root_mean_squared_error</i><br><i>r2_score</i><br><i>normalized_mean_absolute_error</i>|
# |**iterations**|Number of iterations. In each iteration, Auto ML trains a specific pipeline on the given data|
# |**iteration_timeout_minutes**|Time limit in minutes for each iteration.|
# |**X**|(sparse) array-like, shape = [n_samples, n_features]|
# |**y**|(sparse) array-like, shape = [n_samples, ], target values.|
# |**X_valid**|Data used to evaluate a model in an iteration. (sparse) array-like, shape = [n_samples, n_features]|
# |**y_valid**|Data used to evaluate a model in an iteration. (sparse) array-like, shape = [n_samples, ], target values.|
# |**path**|Relative path to the project folder. AutoML stores configuration files for the experiment under this folder. You can specify a new empty folder.|
# +
time_column_name = 'timeStamp'
automl_settings = {
"time_column_name": time_column_name,
}
automl_config = AutoMLConfig(task = 'forecasting',
debug_log = 'automl_nyc_energy_errors.log',
primary_metric='normalized_root_mean_squared_error',
iterations = 10,
iteration_timeout_minutes = 5,
X = X_train,
y = y_train,
X_valid = X_valid,
y_valid = y_valid,
path=project_folder,
verbosity = logging.INFO,
**automl_settings)
# -
# You can call the submit method on the experiment object and pass the run configuration. For local runs the execution is synchronous. Depending on the data and the number of iterations, this can run for a while.
# You will see the currently running iterations printing to the console.
local_run = experiment.submit(automl_config, show_output=True)
local_run
# ### Retrieve the Best Model
# Below we select the best pipeline from our iterations. The get_output method on the run object returns the best run and the fitted model from the last fit invocation. Overloads of get_output let you retrieve the best run and fitted model for any logged metric or for a particular iteration.
best_run, fitted_model = local_run.get_output()
fitted_model.steps
# ### Test the Best Fitted Model
#
# Predict on the test set, then calculate error metrics.
y_pred = fitted_model.predict(X_test)
y_pred
# ### Use the check-data logic below to remove NaN values from y_test and y_pred, to avoid errors when calculating metrics
# +
if len(y_test) != len(y_pred):
raise ValueError(
'the true values and prediction values do not have equal length.')
elif len(y_test) == 0:
raise ValueError(
'y_true and y_pred are empty.')
# if there is any non-numeric element in the y_true or y_pred,
# the ValueError exception will be thrown.
y_test_f = np.array(y_test).astype(float)
y_pred_f = np.array(y_pred).astype(float)
# remove entries both in y_true and y_pred where at least
# one element in y_true or y_pred is missing
y_test = y_test_f[~(np.isnan(y_test_f) | np.isnan(y_pred_f))]
y_pred = y_pred_f[~(np.isnan(y_test_f) | np.isnan(y_pred_f))]
# -
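# The joint mask above keeps a position only when *both* arrays are finite
# there, so the cleaned arrays stay aligned. The same filter in plain Python:

```python
import math

y_true = [1.0, float("nan"), 3.0, 4.0]
y_hat = [1.1, 2.0, float("nan"), 3.9]

# Mirror of ~(isnan(y_true) | isnan(y_hat)): keep index i only if
# neither array is NaN at i, so the pairing is preserved
keep = [not (math.isnan(t) or math.isnan(p)) for t, p in zip(y_true, y_hat)]
y_true_clean = [t for t, k in zip(y_true, keep) if k]
y_hat_clean = [p for p, k in zip(y_hat, keep) if k]
print(y_true_clean, y_hat_clean)  # [1.0, 4.0] [1.1, 3.9]
```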
# ### Calculate metrics for the prediction
#
# +
print("[Test Data] \nRoot Mean squared error: %.2f" % np.sqrt(mean_squared_error(y_test, y_pred)))
# Explained variance score: 1 is perfect prediction
print('mean_absolute_error score: %.2f' % mean_absolute_error(y_test, y_pred))
print('R2 score: %.2f' % r2_score(y_test, y_pred))
# Plot outputs
# %matplotlib notebook
test_pred = plt.scatter(y_test, y_pred, color='b')
test_test = plt.scatter(y_test, y_test, color='g')
plt.legend((test_pred, test_test), ('prediction', 'truth'), loc='upper left', fontsize=8)
plt.show()
| how-to-use-azureml/automated-machine-learning/forecasting-energy-demand/auto-ml-forecasting-energy-demand.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="mt9dL5dIir8X"
# ##### Copyright 2020 The TensorFlow Authors.
# + cellView="form" id="ufPx7EiCiqgR"
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# + [markdown] id="ucMoYase6URl"
# # Load images
# + [markdown] id="_Wwu5SXZmEkB"
# <table class="tfo-notebook-buttons" align="left">
# <td>
# <a target="_blank" href="https://www.tensorflow.org/tutorials/load_data/images"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a>
# </td>
# <td>
# <a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs/blob/master/site/en/tutorials/load_data/images.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
# </td>
# <td>
# <a target="_blank" href="https://github.com/tensorflow/docs/blob/master/site/en/tutorials/load_data/images.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
# </td>
# <td>
# <a href="https://storage.googleapis.com/tensorflow_docs/docs/site/en/tutorials/load_data/images.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a>
# </td>
# </table>
# + [markdown] id="Oxw4WahM7DU9"
# This tutorial shows how to load and preprocess an image dataset in three ways. First, you will use high-level Keras preprocessing [utilities](https://www.tensorflow.org/api_docs/python/tf/keras/preprocessing/image_dataset_from_directory) and [layers](https://www.tensorflow.org/api_docs/python/tf/keras/layers/experimental/preprocessing) to read a directory of images on disk. Next, you will write your own input pipeline from scratch using [tf.data](https://www.tensorflow.org/guide/data). Finally, you will download a dataset from the large [catalog](https://www.tensorflow.org/datasets/catalog/overview) available in [TensorFlow Datasets](https://www.tensorflow.org/datasets).
# + [markdown] id="hoQQiZDB6URn"
# ## Setup
# + id="3vhAMaIOBIee"
import numpy as np
import os
import PIL
import PIL.Image
import tensorflow as tf
import tensorflow_datasets as tfds
# + id="Qnp9Z2sT5dWj"
print(tf.__version__)
# + [markdown] id="wO0InzL66URu"
# ### Download the flowers dataset
#
# This tutorial uses a dataset of several thousand photos of flowers. The flowers dataset contains 5 sub-directories, one per class:
#
# ```
# flowers_photos/
# daisy/
# dandelion/
# roses/
# sunflowers/
# tulips/
# ```
# + [markdown] id="Ju2yXtdV5YaT"
# Note: all images are licensed CC-BY, creators are listed in the LICENSE.txt file.
# + id="rN-Pc6Zd6awg"
import pathlib
dataset_url = "https://storage.googleapis.com/download.tensorflow.org/example_images/flower_photos.tgz"
data_dir = tf.keras.utils.get_file(origin=dataset_url,
fname='flower_photos',
untar=True)
data_dir = pathlib.Path(data_dir)
# + [markdown] id="rFkFK74oO--g"
# After downloading (218MB), you should now have a copy of the flower photos available. There are 3670 total images:
# + id="QhewYCxhXQBX"
image_count = len(list(data_dir.glob('*/*.jpg')))
print(image_count)
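# The pattern `'*/*.jpg'` matches files exactly one directory level below the
# root, i.e. one class directory deep. A self-contained sketch with a temporary
# directory tree (made-up class names and empty placeholder files):

```python
import pathlib
import tempfile

with tempfile.TemporaryDirectory() as tmp:
    root = pathlib.Path(tmp)
    for cls in ["daisy", "roses"]:
        (root / cls).mkdir()
        for i in range(3):
            (root / cls / f"{i}.jpg").touch()  # empty placeholder "images"
    # One level deep: 2 class directories x 3 files each
    count = len(list(root.glob("*/*.jpg")))

print(count)  # 6
```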
# + [markdown] id="ZUFusk44d9GW"
# Each directory contains images of that type of flower. Here are some roses:
# + id="crs7ZjEp60Ot"
roses = list(data_dir.glob('roses/*'))
PIL.Image.open(str(roses[0]))
# + id="oV9PtjdKKWyI"
roses = list(data_dir.glob('roses/*'))
PIL.Image.open(str(roses[1]))
# + [markdown] id="9_kge08gSCan"
# ## Load using keras.preprocessing
#
# Let's load these images off disk using [image_dataset_from_directory](https://www.tensorflow.org/api_docs/python/tf/keras/preprocessing/image_dataset_from_directory).
# + [markdown] id="eRACclAfOPYR"
# Note: The Keras Preprocessing utilities and layers introduced in this section are currently experimental and may change.
# + [markdown] id="6jobDTUs8Wxu"
# ### Create a dataset
# + [markdown] id="lAmtzsnjDNhB"
# Define some parameters for the loader:
# + id="qJdpyqK541ty"
batch_size = 32
img_height = 180
img_width = 180
# + [markdown] id="ehhW308g8soJ"
# It's good practice to use a validation split when developing your model. We will use 80% of the images for training, and 20% for validation.
# + id="chqakIP14PDm"
train_ds = tf.keras.preprocessing.image_dataset_from_directory(
data_dir,
validation_split=0.2,
subset="training",
seed=123,
image_size=(img_height, img_width),
batch_size=batch_size)
# + id="pb2Af2lsUShk"
val_ds = tf.keras.preprocessing.image_dataset_from_directory(
data_dir,
validation_split=0.2,
subset="validation",
seed=123,
image_size=(img_height, img_width),
batch_size=batch_size)
# + [markdown] id="Ug3ITsz0b_cF"
# You can find the class names in the `class_names` attribute on these datasets.
# + id="R7z2yKt7VDPJ"
class_names = train_ds.class_names
print(class_names)
# + [markdown] id="bK6CQCqIctCd"
# ### Visualize the data
#
# Here are the first 9 images from the training dataset.
# + id="AAY3LJN28Kuy"
import matplotlib.pyplot as plt
plt.figure(figsize=(10, 10))
for images, labels in train_ds.take(1):
for i in range(9):
ax = plt.subplot(3, 3, i + 1)
plt.imshow(images[i].numpy().astype("uint8"))
plt.title(class_names[labels[i]])
plt.axis("off")
# + [markdown] id="jUI0fr7igPtA"
# You can train a model using these datasets by passing them to `model.fit` (shown later in this tutorial). If you like, you can also manually iterate over the dataset and retrieve batches of images:
# + id="BdPHeHXt9sjA"
for image_batch, labels_batch in train_ds:
print(image_batch.shape)
print(labels_batch.shape)
break
# + [markdown] id="2ZgIZeXaDUsF"
# The `image_batch` is a tensor of the shape `(32, 180, 180, 3)`. This is a batch of 32 images of shape `180x180x3` (the last dimension refers to the RGB color channels). The `label_batch` is a tensor of the shape `(32,)`; these are the corresponding labels for the 32 images.
#
# + [markdown] id="LyM2y47W-cxJ"
# Note: you can call `.numpy()` on either of these tensors to convert them to a `numpy.ndarray`.
# + [markdown] id="Ybl6a2YCg1rV"
# ### Standardize the data
#
# + [markdown] id="IdogGjM2K6OU"
# The RGB channel values are in the `[0, 255]` range. This is not ideal for a neural network; in general you should seek to make your input values small. Here, we will standardize values to be in the `[0, 1]` range by using a Rescaling layer.
# + id="16yNdZXdExyM"
from tensorflow.keras import layers
normalization_layer = tf.keras.layers.experimental.preprocessing.Rescaling(1./255)
# + [markdown] id="Nd0_enkb8uxZ"
# There are two ways to use this layer. You can apply it to the dataset by calling map:
# + id="QgOnza-U_z5Y"
normalized_ds = train_ds.map(lambda x, y: (normalization_layer(x), y))
image_batch, labels_batch = next(iter(normalized_ds))
first_image = image_batch[0]
# Notice the pixels values are now in `[0,1]`.
print(np.min(first_image), np.max(first_image))
# + [markdown] id="z39nXayj9ioS"
# Or, you can include the layer inside your model definition to simplify deployment. We will use the second approach here.
# + [markdown] id="hXLd3wMpDIkp"
# Note: If you would like to scale pixel values to `[-1,1]` you can instead write `Rescaling(1./127.5, offset=-1)`
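# Both configurations are the same element-wise affine map, x * scale + offset.
# The arithmetic checks out:

```python
import math

def rescale(x, scale, offset=0.0):
    # Element-wise affine map applied by the Rescaling layer
    return x * scale + offset

# Rescaling(1./255): [0, 255] -> [0, 1]
assert math.isclose(rescale(0, 1 / 255), 0.0)
assert math.isclose(rescale(255, 1 / 255), 1.0)

# Rescaling(1./127.5, offset=-1): [0, 255] -> [-1, 1]
assert math.isclose(rescale(0, 1 / 127.5, offset=-1), -1.0)
assert math.isclose(rescale(255, 1 / 127.5, offset=-1), 1.0)
```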
# + [markdown] id="LeNWVa8qRBGm"
# Note: we previously resized images using the `image_size` argument of `image_dataset_from_directory`. If you want to include the resizing logic in your model, you can use the [Resizing](https://www.tensorflow.org/api_docs/python/tf/keras/layers/experimental/preprocessing/Resizing) layer instead.
#
# + [markdown] id="Ti8avTlLofoJ"
# ### Configure the dataset for performance
#
# Let's make sure to use buffered prefetching so we can yield data from disk without having I/O become blocking. These are two important methods you should use when loading data.
#
# `.cache()` keeps the images in memory after they're loaded off disk during the first epoch. This will ensure the dataset does not become a bottleneck while training your model. If your dataset is too large to fit into memory, you can also use this method to create a performant on-disk cache.
#
# `.prefetch()` overlaps data preprocessing and model execution while training.
#
# Interested readers can learn more about both methods, as well as how to cache data to disk in the [data performance guide](https://www.tensorflow.org/guide/data_performance#prefetching).
# + id="Ea3kbMe-pGDw"
AUTOTUNE = tf.data.AUTOTUNE
train_ds = train_ds.cache().prefetch(buffer_size=AUTOTUNE)
val_ds = val_ds.cache().prefetch(buffer_size=AUTOTUNE)
# + [markdown] id="XqHjIr6cplwY"
# ### Train a model
#
# For completeness, we will show how to train a simple model using the datasets we just prepared. This model has not been tuned in any way - the goal is to show you the mechanics using the datasets you just created. To learn more about image classification, visit this [tutorial](https://www.tensorflow.org/tutorials/images/classification).
# + id="LdR0BzCcqxw0"
num_classes = 5
model = tf.keras.Sequential([
layers.experimental.preprocessing.Rescaling(1./255),
layers.Conv2D(32, 3, activation='relu'),
layers.MaxPooling2D(),
layers.Conv2D(32, 3, activation='relu'),
layers.MaxPooling2D(),
layers.Conv2D(32, 3, activation='relu'),
layers.MaxPooling2D(),
layers.Flatten(),
layers.Dense(128, activation='relu'),
layers.Dense(num_classes)
])
# + id="t_BlmsnmsEr4"
model.compile(
optimizer='adam',
loss=tf.losses.SparseCategoricalCrossentropy(from_logits=True),
metrics=['accuracy'])
# + [markdown] id="ffwd44ldNMOE"
# Note: we will only train for a few epochs so this tutorial runs quickly.
# + id="S08ZKKODsnGW"
model.fit(
train_ds,
validation_data=val_ds,
epochs=3
)
# + [markdown] id="MEtT9YGjSAOK"
# Note: you can also write a custom training loop instead of using `model.fit`. To learn more, visit this [tutorial](https://www.tensorflow.org/guide/keras/writing_a_training_loop_from_scratch).
# + [markdown] id="BaW4wx5L7hrZ"
# You may notice the validation accuracy is low compared to the training accuracy, indicating our model is overfitting. You can learn more about overfitting and how to reduce it in this [tutorial](https://www.tensorflow.org/tutorials/keras/overfit_and_underfit).
# + [markdown] id="AxS1cLzM8mEp"
# ## Using tf.data for finer control
# + [markdown] id="Ylj9fgkamgWZ"
# The above keras.preprocessing utilities are a convenient way to create a `tf.data.Dataset` from a directory of images. For finer-grained control, you can write your own input pipeline using `tf.data`. This section shows how to do just that, beginning with the file paths of the archive we downloaded earlier.
# + id="lAkQp5uxoINu"
list_ds = tf.data.Dataset.list_files(str(data_dir/'*/*'), shuffle=False)
list_ds = list_ds.shuffle(image_count, reshuffle_each_iteration=False)
# + id="coORvEH-NGwc"
for f in list_ds.take(5):
print(f.numpy())
# + [markdown] id="6NLQ_VJhWO4z"
# The tree structure of the files can be used to compile a `class_names` list.
# + id="uRPHzDGhKACK"
class_names = np.array(sorted([item.name for item in data_dir.glob('*') if item.name != "LICENSE.txt"]))
print(class_names)
# + [markdown] id="CiptrWmAlmAa"
# Split the dataset into train and validation:
# + id="GWHNPzXclpVr"
val_size = int(image_count * 0.2)
train_ds = list_ds.skip(val_size)
val_ds = list_ds.take(val_size)
# + [markdown] id="rkB-IR4-pS3U"
# You can see the length of each dataset as follows:
# + id="SiKQrb9ppS-7"
print(tf.data.experimental.cardinality(train_ds).numpy())
print(tf.data.experimental.cardinality(val_ds).numpy())
# + [markdown] id="91CPfUUJ_8SZ"
# Write a short function that converts a file path to an `(img, label)` pair:
# + id="arSQzIey-4D4"
def get_label(file_path):
# convert the path to a list of path components
parts = tf.strings.split(file_path, os.path.sep)
# The second to last is the class-directory
one_hot = parts[-2] == class_names
# Integer encode the label
return tf.argmax(one_hot)
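# The same path-to-label logic in plain Python (class names hard-coded here
# for illustration): the second-to-last path component names the class, and
# its position in the class list is the integer label:

```python
import os

flower_classes = ["daisy", "dandelion", "roses", "sunflowers", "tulips"]

def get_label_plain(file_path):
    # parts[-2] is the class directory, as in the tf.strings version above
    parts = file_path.split(os.sep)
    return flower_classes.index(parts[-2])

label = get_label_plain(os.path.join("flower_photos", "roses", "123.jpg"))
print(label)  # 2
```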
# + id="MGlq4IP4Aktb"
def decode_img(img):
# convert the compressed string to a 3D uint8 tensor
img = tf.image.decode_jpeg(img, channels=3)
# resize the image to the desired size
return tf.image.resize(img, [img_height, img_width])
# + id="-xhBRgvNqRRe"
def process_path(file_path):
label = get_label(file_path)
# load the raw data from the file as a string
img = tf.io.read_file(file_path)
img = decode_img(img)
return img, label
# + [markdown] id="S9a5GpsUOBx8"
# Use `Dataset.map` to create a dataset of `image, label` pairs:
# + id="3SDhbo8lOBQv"
# Set `num_parallel_calls` so multiple images are loaded/processed in parallel.
train_ds = train_ds.map(process_path, num_parallel_calls=AUTOTUNE)
val_ds = val_ds.map(process_path, num_parallel_calls=AUTOTUNE)
# + id="kxrl0lGdnpRz"
for image, label in train_ds.take(1):
print("Image shape: ", image.numpy().shape)
print("Label: ", label.numpy())
# + [markdown] id="vYGCgJuR_9Qp"
# ### Configure dataset for performance
# + [markdown] id="wwZavzgsIytz"
# To train a model with this dataset you will want the data:
#
# * To be well shuffled.
# * To be batched.
# * Batches to be available as soon as possible.
#
# These features can be added using the `tf.data` API. For more details, see the [Input Pipeline Performance](../../guide/performance/datasets) guide.
# + id="uZmZJx8ePw_5"
def configure_for_performance(ds):
ds = ds.cache()
ds = ds.shuffle(buffer_size=1000)
ds = ds.batch(batch_size)
ds = ds.prefetch(buffer_size=AUTOTUNE)
return ds
train_ds = configure_for_performance(train_ds)
val_ds = configure_for_performance(val_ds)
# + [markdown] id="45P7OvzRWzOB"
# ### Visualize the data
#
# You can visualize this dataset similarly to the one you created previously.
# + id="UN_Dnl72YNIj"
image_batch, label_batch = next(iter(train_ds))
plt.figure(figsize=(10, 10))
for i in range(9):
ax = plt.subplot(3, 3, i + 1)
plt.imshow(image_batch[i].numpy().astype("uint8"))
label = label_batch[i]
plt.title(class_names[label])
plt.axis("off")
# + [markdown] id="fMT8kh_uXPRU"
# ### Continue training the model
#
# You have now manually built a similar `tf.data.Dataset` to the one created by the `keras.preprocessing` above. You can continue training the model with it. As before, we will train for just a few epochs to keep the running time short.
# + id="Vm_bi7NKXOzW"
model.fit(
train_ds,
validation_data=val_ds,
epochs=3
)
# + [markdown] id="EDJXAexrwsx8"
# ## Using TensorFlow Datasets
#
# So far, this tutorial has focused on loading data off disk. You can also find a dataset to use by exploring the large [catalog](https://www.tensorflow.org/datasets/catalog/overview) of easy-to-download datasets at [TensorFlow Datasets](https://www.tensorflow.org/datasets). As you have previously loaded the Flowers dataset off disk, let's see how to import it with TensorFlow Datasets.
# + [markdown] id="Qyu9wWDf1gfH"
# Download the flowers [dataset](https://www.tensorflow.org/datasets/catalog/tf_flowers) using TensorFlow Datasets.
# + id="NTQ-53DNwv8o"
(train_ds, val_ds, test_ds), metadata = tfds.load(
'tf_flowers',
split=['train[:80%]', 'train[80%:90%]', 'train[90%:]'],
with_info=True,
as_supervised=True,
)
# + [markdown] id="3hxXSgtj1iLV"
# The flowers dataset has five classes.
# + id="kJvt6qzF1i4L"
num_classes = metadata.features['label'].num_classes
print(num_classes)
# + [markdown] id="6dbvEz_F1lgE"
# Retrieve an image from the dataset.
# + id="1lF3IUAO1ogi"
get_label_name = metadata.features['label'].int2str
image, label = next(iter(train_ds))
_ = plt.imshow(image)
_ = plt.title(get_label_name(label))
# + [markdown] id="lHOOH_4TwaUb"
# As before, remember to batch, shuffle, and configure each dataset for performance.
# + id="AMV6GtZiwfGP"
train_ds = configure_for_performance(train_ds)
val_ds = configure_for_performance(val_ds)
test_ds = configure_for_performance(test_ds)
# + [markdown] id="gmR7kT8l1w20"
# You can find a complete example of working with the flowers dataset and TensorFlow Datasets by visiting the [Data augmentation](https://www.tensorflow.org/tutorials/images/data_augmentation) tutorial.
# + [markdown] id="6cqkPenZIaHl"
# ## Next steps
#
# This tutorial showed two ways of loading images off disk. First, you learned how to load and preprocess an image dataset using Keras preprocessing layers and utilities. Next, you learned how to write an input pipeline from scratch using tf.data. Finally, you learned how to download a dataset from TensorFlow Datasets. As a next step, you can learn how to add data augmentation by visiting this [tutorial](https://www.tensorflow.org/tutorials/images/data_augmentation). To learn more about tf.data, you can visit this [guide](https://www.tensorflow.org/guide/data).
| site/en/tutorials/load_data/images.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
# %reset -f
import numpy as np
from landlab import RasterModelGrid
from landlab.components import OverlandFlow, FlowAccumulator, SpatialPrecipitationDistribution
from landlab.plot.imshow import imshow_grid, imshow_grid_at_node
from landlab.io.esri_ascii import read_esri_ascii
from matplotlib import animation
import matplotlib.pyplot as plt
import matplotlib.colors as mcolors
colors = [(0,0.2,1,i) for i in np.linspace(0,1,3)]
cmap = mcolors.LinearSegmentedColormap.from_list('mycmap', colors, N=10)
#from PIL import Image, ImageDraw
# -
# Initial conditions
run_time = 500 # duration of run, (s)
h_init = 0.1 # initial thin layer of water (m)
n = 0.01 # roughness coefficient, (s/m^(1/3))
g = 9.8 # gravity (m/s^2)
alpha = 0.7 # time-step factor (nondimensional; from Bates et al., 2010)
u = 0.4 # constant velocity (m/s, de Almeida et al., 2012)
run_time_slices = np.arange(0,run_time+1,20) # take a snapshot every 20 s
elapsed_time = 1.0 #Elapsed time starts at 1 second. This prevents errors when setting our boundary conditions.
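The `alpha` and `g` parameters above feed an adaptive, CFL-style time step. A minimal sketch of the criterion from Bates et al. (2010) — assuming, as a simplification, that Landlab's `OverlandFlow.calc_time_step()` (used in the run loop below) follows this form:

```python
import numpy as np

def bates_time_step(alpha, dx, g, water_depth):
    """Stable explicit time step dt = alpha * dx / sqrt(g * h_max),
    following Bates et al. (2010); water_depth is an array of depths (m)."""
    h_max = np.max(water_depth)
    return alpha * dx / np.sqrt(g * h_max)

# illustrative values: 10 m cells, up to 2 m of ponded water
dt = bates_time_step(alpha=0.7, dx=10.0, g=9.8, water_depth=np.array([0.1, 0.5, 2.0]))
print(round(dt, 3))  # ~1.581 s
```

Deeper water or finer cells shrink the step, which is why the run loop recomputes `dt` every iteration.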
#Define grid
# here we use an arbitrary, very small, "real" catchment
fname = '../data/hugo_site.asc'
rmg, z = read_esri_ascii(fname, name='topographic__elevation')
rmg.status_at_node[rmg.nodes_at_right_edge] = rmg.BC_NODE_IS_FIXED_VALUE
rmg.status_at_node[np.isclose(z, -9999.)] = rmg.BC_NODE_IS_CLOSED
#Define outlet
rmg_outlet_node = 2277 #node
outlet_node_to_sample = 2277
outlet_link_to_sample = rmg.links_at_node[outlet_node_to_sample][3]
# +
#Plot topography and outlet
plt.figure()
imshow_grid_at_node(rmg, z, colorbar_label='Elevation (m)')
plt.plot(rmg.node_x[outlet_node_to_sample], rmg.node_y[outlet_node_to_sample], "yo")
plt.show()
# +
rmg.at_node["surface_water__depth"] = np.zeros(rmg.number_of_nodes)
h = rmg.at_node['surface_water__depth']
bools = (rmg.node_y > 100) * (rmg.node_y < 450) * (rmg.node_x < 400) * (rmg.node_x > 350)
h[bools] = 2
## Set initial discharge
rmg.at_node["surface_water__discharge"] = np.zeros(rmg.number_of_nodes)
# -
fig1 = plt.figure()
imshow_grid(rmg,'topographic__elevation',colorbar_label='Elevation (m)')
imshow_grid(rmg,'surface_water__depth',cmap=cmap,colorbar_label='Water depth (m)')
plt.title(f'Time = 0')
plt.show()
fig1.savefig(f"Hima_results/runoff_0.jpeg")
#Call overland flow model
of = OverlandFlow(rmg, steep_slopes=True)
#of.run_one_step()
# +
# look at hydrograph at outlet
hydrograph_time = []
discharge_at_outlet = []
height_at_outlet = []
#Run model
for t in run_time_slices:
#Run until next time to plot
while elapsed_time < t:
# First, we calculate our time step.
dt = of.calc_time_step()
# Now, we can generate overland flow.
of.overland_flow()
# Increment elapsed time
elapsed_time += dt
## Append time and discharge and water depth to their lists to save data and for plotting.
hydrograph_time.append(elapsed_time)
q = rmg.at_link["surface_water__discharge"]
discharge_at_outlet.append(np.abs(q[outlet_link_to_sample]) * rmg.dx)
ht = rmg.at_node['surface_water__depth']
height_at_outlet.append(np.abs(ht[outlet_node_to_sample]))
fig=plt.figure()
imshow_grid(rmg,'topographic__elevation',colorbar_label='Elevation (m)')
imshow_grid(rmg,'surface_water__depth',limits=(0,2),cmap=cmap,colorbar_label='Water depth (m)')
plt.title(f'Time = {round(elapsed_time,1)} s')
plt.show()
fig.savefig(f"Hima_results/runoff_{round(elapsed_time,1)}.jpeg")
# +
## Plotting hydrographs and discharge
fig=plt.figure(2)
plt.plot(hydrograph_time, discharge_at_outlet, "b-", label="outlet")
plt.ylabel("Discharge (cms)")
plt.xlabel("Time (seconds)")
plt.legend(loc="upper right")
fig.savefig(f"Hima_results/runoff_discharge.jpeg")
fig=plt.figure(3)
plt.plot(hydrograph_time, height_at_outlet, "b-", label="outlet")
plt.ylabel("Water depth (m)")
plt.xlabel("Time (seconds)")
plt.legend(loc="upper right")
fig.savefig("Hima_results/runoff_waterdepth.jpeg")
# -
| upland/Overland_flow_driver_Hima.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/cxbxmxcx/EvolutionaryDeepLearning/blob/main/EDL_4_2_PSO.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + [markdown] id="ZWk8QVItlplD"
# Original Source: https://github.com/DEAP/deap/blob/master/examples/ga/onemax_numpy.py
#
# DEAP is free software: you can redistribute it and/or modify
# it under the terms of the GNU Lesser General Public License as
# published by the Free Software Foundation, either version 3 of
# the License, or (at your option) any later version.
#
# DEAP is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Lesser General Public License for more details.
# You should have received a copy of the GNU Lesser General Public
# License along with DEAP. If not, see <http://www.gnu.org/licenses/>.
# + id="ct-pwA_aHMMa" colab={"base_uri": "https://localhost:8080/"} outputId="c48a6bb0-e8b9-42eb-8f24-9c24784879e6"
#@title Install DEAP
# !pip install deap --quiet
# + id="epVL5qPDHCPW"
#@title Imports
import operator
import random
import math
import time
import numpy as np
from deap import base
from deap import benchmarks
from deap import creator
from deap import tools
from IPython.display import clear_output
# + id="s3r8TiOjHYyy"
#@title Setup Fitness Criteria
creator.create("FitnessMax", base.Fitness, weights=(1.0,))
creator.create("Particle", np.ndarray, fitness=creator.FitnessMax, speed=list,
smin=None, smax=None, best=None)
# + id="brtFWiHfhGHl"
#@title PSO Functions
def generate(size, pmin, pmax, smin, smax):
part = creator.Particle(np.random.uniform(pmin, pmax, size))
part.speed = np.random.uniform(smin, smax, size)
part.smin = smin
part.smax = smax
return part
def updateParticle(part, best, phi1, phi2):
u1 = np.random.uniform(0, phi1, len(part))
u2 = np.random.uniform(0, phi2, len(part))
v_u1 = u1 * (part.best - part)
v_u2 = u2 * (best - part)
part.speed += v_u1 + v_u2
for i, speed in enumerate(part.speed):
if abs(speed) < part.smin:
part.speed[i] = math.copysign(part.smin, speed)
elif abs(speed) > part.smax:
part.speed[i] = math.copysign(part.smax, speed)
part += part.speed
# + id="xVhtoopvtlr1"
#@title Evaluation Function
distance = 575 #@param {type:"slider", min:10, max:1000, step:5}
def evaluate(individual):
v = individual[0] if individual[0] > 0 else 0 #velocity
a = individual[1] * math.pi / 180 #angle to radians
return ((2*v**2 * math.sin(a) * math.cos(a))/9.8 - distance)**2,
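The fitness above is the squared gap between the flat-ground projectile range $R = 2v^2\sin a\cos a/g = v^2\sin(2a)/g$ and the target `distance`. As an illustrative sanity check (not from the original notebook), at the optimal 45° launch angle the required speed is $v=\sqrt{g\,d}$:

```python
import math

def projectile_range(v, angle_deg, g=9.8):
    # flat-ground range: R = v^2 * sin(2a) / g
    a = math.radians(angle_deg)
    return 2 * v**2 * math.sin(a) * math.cos(a) / g

target = 575.0
v_opt = math.sqrt(9.8 * target)  # speed needed at a 45 degree launch angle
print(round(projectile_range(v_opt, 45.0), 6))  # 575.0
```

A particle near `(v_opt, 45)` therefore achieves a fitness close to zero, which is what the swarm converges toward.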
# + id="DP0BRxxAH1uh"
#@title Add Functions to Toolbox
toolbox = base.Toolbox()
toolbox.register("particle",
generate, size=2, pmin=-6, pmax=6, smin=-3, smax=3)
toolbox.register("population",
tools.initRepeat, list, toolbox.particle)
toolbox.register("update",
updateParticle, phi1=200, phi2=200)
toolbox.register("evaluate", evaluate)
# + id="UM87TusHv8ab"
#@title Code to Plot the Particle Population
import matplotlib.pyplot as plt
def plot_population(pop):
xs = [x for x,_ in pop]
ys = [y for _,y in pop]
plt.scatter(xs,ys)
plt.show()
# + colab={"base_uri": "https://localhost:8080/", "height": 299} id="FC0-B2wAID9Z" outputId="ece6d7cf-9918-431c-b105-5eb4c7a68c19"
#@title Run the Evolution
random.seed(64)
pop = toolbox.population(n=500)
stats = tools.Statistics(lambda ind: ind.fitness.values)
stats.register("avg", np.mean)
stats.register("std", np.std)
stats.register("min", np.min)
stats.register("max", np.max)
logbook = tools.Logbook()
logbook.header = ["gen", "evals"] + stats.fields
GEN = 100
best = None
for g in range(GEN):
for part in pop:
part.fitness.values = tuple(np.subtract((0,), toolbox.evaluate(part)))
if part.best is None or part.best.fitness < part.fitness:
part.best = creator.Particle(part)
part.best.fitness.values = part.fitness.values
if best is None or best.fitness < part.fitness:
best = creator.Particle(part)
best.fitness.values = part.fitness.values
for part in pop:
toolbox.update(part, best)
if (g+1) % 10 == 0:
logbook.record(gen=g, evals=len(pop), **stats.compile(pop))
clear_output()
print(best)
plot_population(pop)
print(logbook.stream)
time.sleep(1)
# + colab={"base_uri": "https://localhost:8080/"} id="TW5XQ3S4QnYA" outputId="5b708913-80a4-429e-d5f6-cdb83f7b63e8"
v, a = best
a = a * math.pi / 180 #angle to radians
distance = (2*v**2 * math.sin(a) * math.cos(a))/9.8
print(distance)
| EDL_4_2_PSO.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
# + deletable=true editable=true
'''What I need in this code:
For the simple structure:
1) Word embedding layer, feed input layer
2) Creating the gen-batch function -> sequences and padded sequences
   a) Feed input layer: sequence lengths and padded sequences
   b) Labels: padded too, and >>>>>>CHUNKS<<<<<<
   c) Word embeddings according to the loaded vocab
3) DONE Trim vocab and word embeddings based on the vocab in both domains. DONE
4) Run through RNN
   a) Ideally, have a function here to append seq2seq and tree-kernel representations
   b) Compute the tree kernel and get a good representation out of it
5) Decoding:
   a) TRAINING: F1 measure; allow for both with and without
   b) PREDICTIONS:
      1) F1 measure
      2) Allow for with and without CRF, and without seq2seq
      3) STORE the model and allow training continuation
      4) Predictions with CRF code
      5) STORE accuracy in a file
6) Integrate seq2seq
'''
# -
import pickle
with open('./sequence_tagging/data/vocab_to_id.pkl','rb') as p1: # binary mode for pickle
x = pickle.load(p1)
len(x)
# +
import numpy as np
import os
import tensorflow as tf
from .data_utils import minibatches, pad_sequences, get_chunks
from .general_utils import Progbar
from .base_model import BaseModel
class NERModel(BaseModel):
"""Specialized class of Model for NER"""
def __init__(self, config):
super(NERModel, self).__init__(config)
self.idx_to_tag = {idx: tag for tag, idx in
self.config.vocab_tags.items()}
def add_placeholders(self):
"""Define placeholders = entries to computational graph"""
# shape = (batch size, max length of sentence in batch)
self.word_ids = tf.placeholder(tf.int32, shape=[None, None],
name="word_ids")
# shape = (batch size)
self.sequence_lengths = tf.placeholder(tf.int32, shape=[None],
name="sequence_lengths")
# shape = (batch size, max length of sentence, max length of word)
self.char_ids = tf.placeholder(tf.int32, shape=[None, None, None],
name="char_ids")
# shape = (batch_size, max_length of sentence)
self.word_lengths = tf.placeholder(tf.int32, shape=[None, None],
name="word_lengths")
# shape = (batch size, max length of sentence in batch)
self.labels = tf.placeholder(tf.int32, shape=[None, None],
name="labels")
# hyper parameters
self.dropout = tf.placeholder(dtype=tf.float32, shape=[],
name="dropout")
self.lr = tf.placeholder(dtype=tf.float32, shape=[],
name="lr")
def get_feed_dict(self, words, labels=None, lr=None, dropout=None):
"""Given some data, pad it and build a feed dictionary
Args:
words: list of sentences. A sentence is a list of ids of a list of
words. A word is a list of ids
labels: list of ids
lr: (float) learning rate
dropout: (float) keep prob
Returns:
dict {placeholder: value}
"""
# perform padding of the given data
if self.config.use_chars:
char_ids, word_ids = zip(*words)
word_ids, sequence_lengths = pad_sequences(word_ids, 0)
char_ids, word_lengths = pad_sequences(char_ids, pad_tok=0,
nlevels=2)
else:
word_ids, sequence_lengths = pad_sequences(words, 0)
# build feed dictionary
feed = {
self.word_ids: word_ids,
self.sequence_lengths: sequence_lengths
}
if self.config.use_chars:
feed[self.char_ids] = char_ids
feed[self.word_lengths] = word_lengths
if labels is not None:
labels, _ = pad_sequences(labels, 0)
feed[self.labels] = labels
if lr is not None:
feed[self.lr] = lr
if dropout is not None:
feed[self.dropout] = dropout
return feed, sequence_lengths
def add_word_embeddings_op(self):
"""Defines self.word_embeddings
If self.config.embeddings is not None and is a np array initialized
with pre-trained word vectors, the word embeddings is just a look-up
and we don't train the vectors. Otherwise, a random matrix with
the correct shape is initialized.
"""
with tf.variable_scope("words"):
if self.config.embeddings is None:
self.logger.info("WARNING: randomly initializing word vectors")
_word_embeddings = tf.get_variable(
name="_word_embeddings",
dtype=tf.float32,
shape=[self.config.nwords, self.config.dim_word])
else:
_word_embeddings = tf.Variable(
self.config.embeddings,
name="_word_embeddings",
dtype=tf.float32,
trainable=self.config.train_embeddings)
word_embeddings = tf.nn.embedding_lookup(_word_embeddings,
self.word_ids, name="word_embeddings")
with tf.variable_scope("chars"):
if self.config.use_chars:
# get char embeddings matrix
_char_embeddings = tf.get_variable(
name="_char_embeddings",
dtype=tf.float32,
shape=[self.config.nchars, self.config.dim_char])
char_embeddings = tf.nn.embedding_lookup(_char_embeddings,
self.char_ids, name="char_embeddings")
# put the time dimension on axis=1
s = tf.shape(char_embeddings)
char_embeddings = tf.reshape(char_embeddings,
shape=[s[0]*s[1], s[-2], self.config.dim_char])
word_lengths = tf.reshape(self.word_lengths, shape=[s[0]*s[1]])
# bi lstm on chars
cell_fw = tf.contrib.rnn.LSTMCell(self.config.hidden_size_char,
state_is_tuple=True)
cell_bw = tf.contrib.rnn.LSTMCell(self.config.hidden_size_char,
state_is_tuple=True)
_output = tf.nn.bidirectional_dynamic_rnn(
cell_fw, cell_bw, char_embeddings,
sequence_length=word_lengths, dtype=tf.float32)
# read and concat output
_, ((_, output_fw), (_, output_bw)) = _output
output = tf.concat([output_fw, output_bw], axis=-1)
# shape = (batch size, max sentence length, char hidden size)
output = tf.reshape(output,
shape=[s[0], s[1], 2*self.config.hidden_size_char])
word_embeddings = tf.concat([word_embeddings, output], axis=-1)
self.word_embeddings = tf.nn.dropout(word_embeddings, self.dropout)
def add_logits_op(self):
"""Defines self.logits
For each word in each sentence of the batch, it corresponds to a vector
of scores, of dimension equal to the number of tags.
"""
with tf.variable_scope("bi-lstm"):
cell_fw = tf.contrib.rnn.LSTMCell(self.config.hidden_size_lstm)
cell_bw = tf.contrib.rnn.LSTMCell(self.config.hidden_size_lstm)
(output_fw, output_bw), _ = tf.nn.bidirectional_dynamic_rnn(
cell_fw, cell_bw, self.word_embeddings,
sequence_length=self.sequence_lengths, dtype=tf.float32)
output = tf.concat([output_fw, output_bw], axis=-1)
output = tf.nn.dropout(output, self.dropout)
with tf.variable_scope("proj"):
W = tf.get_variable("W", dtype=tf.float32,
shape=[2*self.config.hidden_size_lstm, self.config.ntags])
b = tf.get_variable("b", shape=[self.config.ntags],
dtype=tf.float32, initializer=tf.zeros_initializer())
nsteps = tf.shape(output)[1]
output = tf.reshape(output, [-1, 2*self.config.hidden_size_lstm])
pred = tf.matmul(output, W) + b
self.logits = tf.reshape(pred, [-1, nsteps, self.config.ntags])
def add_pred_op(self):
"""Defines self.labels_pred
This op is defined only in the case where we don't use a CRF, since in
that case we can make the prediction "in the graph" (i.e., using tf
functions). With the CRF, as the inference is coded
in Python and not in pure TensorFlow, we have to make the prediction
outside the graph.
"""
if not self.config.use_crf:
self.labels_pred = tf.cast(tf.argmax(self.logits, axis=-1),
tf.int32)
def add_loss_op(self):
"""Defines the loss"""
if self.config.use_crf:
log_likelihood, trans_params = tf.contrib.crf.crf_log_likelihood(
self.logits, self.labels, self.sequence_lengths)
self.trans_params = trans_params # need to evaluate it for decoding
self.loss = tf.reduce_mean(-log_likelihood)
else:
losses = tf.nn.sparse_softmax_cross_entropy_with_logits(
logits=self.logits, labels=self.labels)
mask = tf.sequence_mask(self.sequence_lengths)
losses = tf.boolean_mask(losses, mask)
self.loss = tf.reduce_mean(losses)
# for tensorboard
tf.summary.scalar("loss", self.loss)
def build(self):
# NER specific functions
self.add_placeholders()
self.add_word_embeddings_op()
self.add_logits_op()
self.add_pred_op()
self.add_loss_op()
# Generic functions that add training op and initialize session
self.add_train_op(self.config.lr_method, self.lr, self.loss,
self.config.clip)
self.initialize_session() # now self.sess is defined and vars are init
def predict_batch(self, words):
"""
Args:
words: list of sentences
Returns:
labels_pred: list of labels for each sentence
sequence_length
"""
fd, sequence_lengths = self.get_feed_dict(words, dropout=1.0)
if self.config.use_crf:
# get tag scores and transition params of CRF
viterbi_sequences = []
logits, trans_params = self.sess.run(
[self.logits, self.trans_params], feed_dict=fd)
# iterate over the sentences because there is no batching in viterbi_decode
for logit, sequence_length in zip(logits, sequence_lengths):
logit = logit[:sequence_length] # keep only the valid steps
viterbi_seq, viterbi_score = tf.contrib.crf.viterbi_decode(
logit, trans_params)
viterbi_sequences += [viterbi_seq]
return viterbi_sequences, sequence_lengths
else:
labels_pred = self.sess.run(self.labels_pred, feed_dict=fd)
return labels_pred, sequence_lengths
def run_epoch(self, train, dev, epoch):
"""Performs one complete pass over the train set and evaluate on dev
Args:
train: dataset that yields tuple of sentences, tags
dev: dataset
epoch: (int) index of the current epoch
Returns:
f1: (python float), score to select model on, higher is better
"""
# progbar stuff for logging
batch_size = self.config.batch_size
nbatches = (len(train) + batch_size - 1) // batch_size
prog = Progbar(target=nbatches)
# iterate over dataset
for i, (words, labels) in enumerate(minibatches(train, batch_size)):
fd, _ = self.get_feed_dict(words, labels, self.config.lr,
self.config.dropout)
_, train_loss, summary = self.sess.run(
[self.train_op, self.loss, self.merged], feed_dict=fd)
prog.update(i + 1, [("train loss", train_loss)])
# tensorboard
if i % 10 == 0:
self.file_writer.add_summary(summary, epoch*nbatches + i)
metrics = self.run_evaluate(dev)
msg = " - ".join(["{} {:04.2f}".format(k, v)
for k, v in metrics.items()])
self.logger.info(msg)
return metrics["f1"]
def calculate_f1(self, tp,fp,tn,fn):
recall = float(tp)/(tp+fn)
precision = float(tp)/(tp+fp)
f1 = 2*(precision*recall)/(precision+recall)
return f1, recall, precision
def run_evaluate(self, test):
"""Evaluates performance on test set
Args:
test: dataset that yields tuple of (sentences, tags)
Returns:
metrics: (dict) metrics["acc"] = 98.4, ...
"""
asp_tp = 0.
asp_fp = 0.
asp_tn = 0.
asp_fn = 0.
op_tp = 0.
op_fp = 0.
op_tn = 0.
op_fn = 0.
ot_tp = 0.
ot_fp = 0.
ot_tn = 0.
ot_fn = 0.
tag2id = self.config.vocab_tags
accs = []
correct_preds, total_correct, total_preds = 0., 0., 0.
for words, labels in minibatches(test, self.config.batch_size):
labels_pred, sequence_lengths = self.predict_batch(words)
for lab, lab_pred, length in zip(labels, labels_pred,
sequence_lengths):
lab = lab[:length]
lab_pred = lab_pred[:length]
for actual,pred in zip(lab, lab_pred):
actual = int(actual)
pred = int(pred)
#print(actual, actual ==4)
#print(pred, pred ==4)
if(actual == tag2id['B-A'] or actual == tag2id['I-A']): # B-A or I-A
if(pred == 0 or pred == 2):
asp_tp +=1
op_tn +=1
ot_tn +=1
else:
if(pred == 1 or pred==3):
asp_fn+=1
op_fp+=1
ot_tn+=1
elif(pred==4):
asp_fn+=1
ot_fp+=1
op_tn+=1
else:
print("Something's wrong in prediction")
elif(actual == 1 or actual == 3): #BO or IO
if(pred == 1 or pred == 3):
op_tp +=1
asp_tn +=1
ot_tn +=1
else:
if(pred == 0 or pred==2):
op_fn+=1
asp_fp+=1
ot_tn+=1
elif(pred==4):
op_fn+=1
ot_fp+=1
asp_tn+=1
else:
print("Something's wrong in prediction")
elif(actual == 4):
if(pred == 4):
ot_tp +=1
asp_tn +=1
op_tn +=1
else:
if(pred == 0 or pred==2):
ot_fn+=1
asp_fp+=1
op_tn+=1
elif(pred==1 or pred==3):
ot_fn+=1
op_fp+=1
asp_tn+=1
else:
print("Something's wrong in prediction")
else:
print("Something's wrong")
accs += [a==b for (a, b) in zip(lab, lab_pred)]
lab_chunks = set(get_chunks(lab, self.config.vocab_tags))
lab_pred_chunks = set(get_chunks(lab_pred,
self.config.vocab_tags))
correct_preds += len(lab_chunks & lab_pred_chunks)
total_preds += len(lab_pred_chunks)
total_correct += len(lab_chunks)
assert(asp_tp+asp_fp+asp_tn+asp_fn == op_tp+op_fp+op_tn+op_fn == ot_tp+ot_fp+ot_tn+ot_fn)
asp_scores = self.calculate_f1(asp_tp,asp_fp,asp_tn,asp_fn)
op_scores = self.calculate_f1(op_tp,op_fp,op_tn,op_fn)
ot_scores = self.calculate_f1(ot_tp,ot_fp,ot_tn,ot_fn)
p = correct_preds / total_preds if correct_preds > 0 else 0
r = correct_preds / total_correct if correct_preds > 0 else 0
f1 = 2 * p * r / (p + r) if correct_preds > 0 else 0
acc = np.mean(accs)
return {"acc": 100*acc, "f1": 100*f1, "asp_f1":100*asp_scores[0], "op_f1":100*op_scores[0], "ot_f1":100*ot_scores[0]}
def predict(self, words_raw):
"""Returns list of tags
Args:
words_raw: list of words (string), just one sentence (no batch)
Returns:
preds: list of tags (string), one for each word in the sentence
"""
words = [self.config.processing_word(w) for w in words_raw]
if type(words[0]) == tuple:
words = zip(*words)
pred_ids, _ = self.predict_batch([words])
preds = [self.idx_to_tag[idx] for idx in list(pred_ids[0])]
return preds
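The class above leans on `pad_sequences` from `data_utils`. A minimal single-level sketch of its assumed behavior (the project's real version also supports `nlevels=2` for character ids):

```python
def pad_sequences_1d(sequences, pad_tok):
    """Pad every sequence to the batch maximum; return padded lists and true lengths."""
    max_len = max(len(s) for s in sequences)
    padded = [list(s) + [pad_tok] * (max_len - len(s)) for s in sequences]
    lengths = [len(s) for s in sequences]
    return padded, lengths

batch = [[4, 8, 15], [16, 23]]
padded, lengths = pad_sequences_1d(batch, 0)
print(padded)   # [[4, 8, 15], [16, 23, 0]]
print(lengths)  # [3, 2]
```

The true lengths are what `sequence_lengths` feeds to `tf.sequence_mask` and the CRF, so padding tokens never contribute to the loss.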
# + deletable=true editable=true
float(3)/5
# + deletable=true editable=true
def calculate_f1(tp,fp,tn,fn):
print(tp)
recall = float(tp)/(tp+fn)
precision = float(tp)/(tp+fp)
f1 = 2*float((precision*recall))/((precision+recall))
return f1, recall, precision
def lenient_metrics_absa(obj):
asp_tp = 0
asp_fp = 0
asp_tn = 0
asp_fn = 0
op_tp = 0
op_fp = 0
op_tn = 0
op_fn = 0
ot_tp = 0
ot_fp = 0
ot_tn = 0
ot_fn = 0
for lab, lab_pred, length in obj:
lab = lab[:length]
lab_pred = lab_pred[:length]
for actual,pred in zip(lab, lab_pred):
if(actual == 1 or actual == 2): #BA or IA
if(pred == 1 or pred == 2):
asp_tp +=1
op_tn +=1
ot_tn +=1
else:
if(pred == 3 or pred==4):
asp_fn+=1
op_fp+=1
ot_tn+=1
elif(pred == 5):
asp_fn+=1
ot_fp+=1
op_tn+=1
else:
print("Something's wrong in prediction")
elif(actual == 3 or actual == 4):#BO or IO
if(pred == 3 or pred==4):
op_tp +=1
asp_tn +=1
ot_tn +=1
else:
if(pred ==1 or pred ==2):
op_fn +=1
asp_fp+=1
ot_tn+=1
elif(pred ==5):
op_fn+=1
ot_fp+=1
asp_tn+=1
else:
print("Something's wrong in prediction")
elif(actual == 5): #O
if(pred == 5):
ot_tp +=1
asp_tn +=1
op_tn +=1
else:
if(pred ==1 or pred ==2):
ot_fn +=1
asp_fp+=1
op_tn+=1
elif(pred ==3 or pred ==4): # B-O/I-O; was pred==2 or pred==3, which overlaps the aspect tags
ot_fn+=1
op_fp+=1
asp_tn+=1
else:
print("Something's wrong in prediction")
else:
print("Something's wrong")
assert(asp_tp+asp_fp+asp_tn+asp_fn == op_tp+op_fp+op_tn+op_fn == ot_tp+ot_fp+ot_tn+ot_fn)
asp_scores = calculate_f1(asp_tp,asp_fp,asp_tn,asp_fn)
op_scores = calculate_f1(op_tp,op_fp,op_tn,op_fn)
ot_scores = calculate_f1(ot_tp,ot_fp,ot_tn,ot_fn)
return asp_scores, op_scores, ot_scores
# + deletable=true editable=true
labels = [[1,2,3,1,2,2,4,5,5],[1,2,3,5,5]]
preds = [[1,2,3,1,4,4,4,5,5],[2,1,3,5,5]]
length = [10,4]
obj = zip(labels, preds, length)
# + deletable=true editable=true
obj
# + deletable=true editable=true
lenient_metrics_absa(obj)
# + deletable=true editable=true
'''
Focus on creating gen batch function
a) Padded seq and seq length; Vocab2id; read file; batch_size, etc
b) Step through RNN
c) Store a model in tf
d) Edit accuracy measure in tf
'''
| absa_adapt_model/word_drop_bridge/Model Skeleton.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Control loop
#
# ## Objectives
# - Identify the typical elements of controlled systems.
# - Identify the task of each element in the control loop.
#
#
# ## Definition
# A control loop is a set of systems interacting with one another to achieve [closed-loop control](https://www.electronics-tutorials.ws/systems/closed-loop-system.html). The goal of this interaction is to obtain a desired behavior as the response of a plant.
#
# The diagram below shows a typical control loop, which serves as a guide when automating processes.
#
# 
#
# - The **plant** is the process to be controlled.
# - The **actuator** changes the behavior of the plant according to the commands of the **controller**.
# - The **controller** makes decisions based on the process error so that the controlled system meets the objective set by the **reference** signal.
# - The **sensor** measures the behavior of the **plant** and passes this information to the **controller**.
#
# With today's technology, controllers are electronic. For this reason, the **actuator** receives electrical signals and converts them into the plant's own physical domain, and the sensor converts information about the plant's behavior into electrical form.
#
# It is assumed that:
# - The **actuator** and **plant** are seen as a single system from the controller's point of view, so together they may be called the **process**.
# - The **sensor** provides error-free information very quickly compared with the evolution of the process.
#
# ## Reducing the loop
#
# The control loop can then be reduced to:
#
# 
#
# - $Y_{sp}$ is the reference signal (sp for setpoint).
# - $Y$ is the response signal of the controlled system.
# - $E = Y_{sp} - Y$ is the error signal.
# - $G_C$ is the **controller**.
# - $U$ is the decision made by the **controller** and the excitation of the **process**.
# - $G_P$ is the **process**.
#
# Remember that these signals vary over time. Thus, we can define:
#
# \begin{align}
# E(t) &= Y_{sp}(t)-Y(t)\\
# U(t) &= \mathcal{G_C} \{E(t) \} = \mathcal{G_C} \{ Y_{sp}(t)-Y(t) \} \\
# Y(t) &= \mathcal{G_P} \{U(t) \} = \mathcal{G_P} \{ \mathcal{G_C} \{ Y_{sp}(t)-Y(t) \} \}
# \end{align}
#
#
# ## Loop with LTI systems
#
# If the systems are **LTI**, we may denote by $g_{C}(t)$ the impulse response of the **controller** and by $g_{P}(t)$ the impulse response of the **process**. This allows the expressions above to be rewritten as:
#
# \begin{align}
# E(t) &= Y_{sp}(t)-Y(t)\\
# U(t) &= g_C(t) * ( Y_{sp}(t)-Y(t) ) \\
# Y(t) &= g_P(t) * g_C(t) * ( Y_{sp}(t)-Y(t) )
# \end{align}
#
# This shows that the response signal depends on:
# - the desired behavior $Y_{sp}$
# - the process $G_{P}$
# - the controller $G_{C}$
#
# Note that to obtain a desired behavior in $Y(t)=g_P(t)*g_C(t)*( Y_{sp}(t)-Y(t) )$, $g_C(t)$ must be designed to correct the behavior of the process. The job of control engineering is to design the controller to meet specifications.
#
# To ease the analysis and design of controlled systems, the **Laplace transform** will be used.
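As an illustration (not part of the original notes), the open-loop relation $Y = g_P * g_C * E$ can be sketched with a discrete convolution; the impulse responses below are made-up examples:

```python
import numpy as np

# hypothetical discrete impulse responses, one sample per time step
g_C = np.array([1.0, 0.5])        # controller
g_P = np.array([0.0, 0.8, 0.2])   # process
E = np.array([1.0, 0.0, 0.0])     # error signal: a unit impulse

U = np.convolve(g_C, E)  # controller decision U = g_C * E
Y = np.convolve(g_P, U)  # response Y = g_P * g_C * E
print(Y)
```

Because convolution is associative, convolving with `g_C` first and `g_P` second gives the same response as a single combined kernel `np.convolve(g_P, g_C)`.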
# ## Control game
#
import matplotlib.pyplot as plt
# %matplotlib inline
from juego import ControlGame
game = ControlGame(runtime=45) # seconds
#
# Suppose you must operate a **SISO** system (single input, single output) using a slider and your perception of how the system behaves.
#
# - Run the cell containing `game.ui()`.
# - Press the `Ejecutar` (run) button and move the `U(t)` slider so that the `Salida` (output) signal tracks the `Referencia` (reference) signal, which changes randomly every few seconds.
# - Note that the `Puntaje` (score) grows faster the smaller the error.
# - Run the cell several times to see how you learn to control the system.
# - To visualize your performance as a controller, run the cell containing `game.plot()`.
#
game.ui()
game.plot()
# The adjustments you have just made manually must be carried out automatically by the **controller**.
#
# The rest of the course discusses the analysis and design techniques most commonly used for analog systems.
| Control-main/Bucle_control.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] slideshow={"slide_type": "slide"}
# <p style="text-align: center; font-size: 300%"> Intra-Year Compounding </p>
#
# + [markdown] slideshow={"slide_type": "slide"}
# # Preliminaries
# * Please install the student version of the [Socrative](https://socrative.com/) clicker app from your phone's app store __NOW__.
# * These slides are a [Jupyter Notebook](https://jupyter.org/) and contain runnable Python code.
# * They can be downloaded from https://github.com/s-broda/ifz/ or run directly in the browser at https://notebooks.azure.com/s-broda/projects/pres-ifz (requires a free Microsoft account). Click `Clone` in the top right corner, then `slides.ipynb`.
#
#
# + [markdown] slideshow={"slide_type": "slide"}
# # Contents
# * Recap
# * Motivation
# * Intra-year compounding
# * Annuity calculations with intra-year compounding
#
# + [markdown] slideshow={"slide_type": "slide"}
# # Recap: Loan Repayment
# * __Assumptions__: cash loan of CHF $K_0$ over $n$ years at the annual interest rate $i$.
# * __Bullet repayment__: repayment of the entire loan, plus interest and compound interest, at the maturity date $n$:
# $$K_n=K_0 (1+i)^n$$
# * __Annuity repayment__: constant annual installments of
# $$ r=K_0q^n\frac{q-1}{q^n-1}, \quad q:=1+i. \tag{*}$$
# * Intuition: the payment profile corresponds to a perpetuity of $r$ with the first payment after one year, minus a perpetuity with the first payment after $n+1$ years. Example with $n=5$:
#
# |Period  | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 |...|
# |--------|---|---|---|---|---|---|---|---|---|---|
# | $\mbox{}$ | r | r | r | r | r | r | r | r | r |...|
# | $\mbox{}$ | 0 | 0 | 0 | 0 |0 |-r |-r |-r |-r |...|
# |Balance | r | r | r | r | r | 0 | 0 | 0 | 0 |...|
#
# Present value:
# $$
# K_0=\frac{r}{i}-\frac{1}{(1+i)^n}\frac{r}{i}.
# $$
# Substituting and rearranging yields (*).
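The perpetuity argument above is easy to check numerically. A minimal sketch (the values $K_0=100$, $i=5\%$, $n=5$ are illustrative assumptions, not taken from the slides): the installment $r$ from (*) should price back to exactly $K_0$ when valued as the difference of two perpetuities.

```python
# Verify the perpetuity intuition behind (*) for illustrative values.
K0, i, n = 100.0, 0.05, 5
q = 1 + i
r = K0 * q**n * (q - 1) / (q**n - 1)   # annuity formula (*)
pv = r / i - r / i / (1 + i)**n        # perpetuity minus deferred perpetuity
print(abs(pv - K0) < 1e-9)             # True: the present value recovers K0
```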
# + [markdown] slideshow={"slide_type": "slide"}
# # Example: Bullet Repayment
# + slideshow={"slide_type": "-"}
import matplotlib.pyplot as plt
import numpy as np
from ipywidgets import interact, fixed
# %matplotlib inline
@interact(K0=(0., 200), i=(0.0, .2, 0.001), n=(0, 30))
def Kj(K0, i, n):
j = np.arange(1, n+1)
Kj = K0 * (1 + i) ** j
plt.step(j, Kj, where='post');
plt.xlabel('$j$'); plt.ylabel('$K_j$')
    plt.annotate('$K_{'+'{}'.format(n)+'}='+'{}$'.format(Kj[-1]), xy=(n, Kj[-1]), xytext=(n/2, Kj[-1]), arrowprops={"arrowstyle": "->"})  # text passed positionally; the s= keyword was removed in newer matplotlib
# + [markdown] slideshow={"slide_type": "slide"}
# # Example: Annuity Repayment
#
# +
def annuity0(K0, i, n):
    # rate = installment, zins = interest portion, tilgung = principal portion
    q = 1 + i; j = np.arange(0, n)
    rate = K0 * (1/n if q == 1 else q**n * (q - 1) / (q**n - 1))
    zins = K0 * (0 if q == 1 else (q**n - q**j) / (q**n - 1) * i)
    tilgung = rate - zins
    return rate, zins, tilgung
@interact(K0=(1., 100.), i=(-1, 1, 0.1), n=(1, 60))
def plot_annuities(K0 = 100, i = 0.12, n = 30):
rate, zins, tilgung = annuity0(K0, i, n)
j = np.arange(1, n + 1)
p1 = plt.bar(j, zins)
p2 = plt.bar(j, tilgung, bottom=np.maximum(0, zins))
p3 = plt.bar(j+.4, rate, width=.4, color="blue")
    plt.legend((p1[0], p2[0], p3[0]), ('Interest', 'Principal', 'Installment'))
# + [markdown] slideshow={"slide_type": "slide"}
# # Clicker Question
# * Please open the Socrative app and join room __BRODA173__.
# * Let $K_0=100$ and $n=30$. If the interest rate is $i=-100\%$, then
#
# a. the principal repayment tends to $\infty$.<br>
# b. the interest tends to $-\infty$.<br>
# c. the annuity cannot be computed.<br>
# d. the annual installment is $0$.<br>
# e. the annual installment is $K_0/n$.<br>
#
#
# + [markdown] slideshow={"slide_type": "slide"}
# # Intra-Year Compounding
# ## Motivation
# * So far we have assumed _annual_ installments. Most loan contracts (mortgages, cash loans), however, are structured as annuity loans with _monthly_ repayment.
# * We can keep working with the familiar formulas, but must change perspective and treat the time periods as _months_.
# * Accordingly, the interest rate to use is the _monthly rate_, for which we write $i_{12}$.
#
#
#
# + [markdown] slideshow={"slide_type": "-"}
# ## Example
# * Assumption: cash loan of CHF 100 with a term of 12 months, compounded monthly at $i_{12}=1\%$, bullet repayment (including accrued interest) after 12 months.
# * The repayment amount after 12 months is
# $$
# 100 (1+i_{12})^{12}
# $$
# + slideshow={"slide_type": "-"}
100 * (1 + 0.01) ** 12
# + [markdown] slideshow={"slide_type": "slide"}
# # Clicker Question
# * Please open the Socrative app and join room __BRODA173__.
# * What is the annual interest rate in the example above?
#
# a. Definitely too high.<br>
# b. 12%<br>
# c. 12.68%<br>
# d. Neither b nor c is wrong.<br>
# e. All answers are correct.
#
#
#
# + [markdown] slideshow={"slide_type": "slide"}
# # Solution
# * Trick question! All answers are correct; it depends on _which_ rate is meant.
# * 12% is the so-called _nominal annual rate_. It serves as the basis for computing the monthly rate: $i_{12}=\frac{i_{nom}}{12}$.
# * 12.68% is the so-called _effective annual rate_: the annual rate that leads to the same repayment amount as intra-year compounding at the monthly rate $i_{12}$, i.e.:
#
# $$100(1+i_{eff})=100(1+i_{12})^{12}=112.68 \Leftrightarrow i_{eff}=12.68\%.$$
# * The difference of $0.68\%$ is due to compound interest on the intra-year interest payments.
# + [markdown] slideshow={"slide_type": "slide"}
# # General Case
# * Other intra-year interest periods are possible as well (e.g., semiannual or quarterly). In general, we divide the year into $m$ compounding periods and write $i_m=\frac{i_{nom}}{m}$ for the corresponding rate.
# * Then
# $$1+i_{eff}=\left(1+\frac{i_{nom}}{m}\right)^m \Leftrightarrow i_{eff}=\left(1+\frac{i_{nom}}{m}\right)^m-1.$$
# * Conversely,
# $$ i_{nom}=m\left(\sqrt[m]{1+i_{eff}}-1\right).$$
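The two conversion formulas are inverses of each other, which can be checked in a couple of lines (a quick sketch; $m=12$ and $i_{nom}=12\%$ are illustrative assumptions):

```python
# Round-trip between nominal and effective annual rates with m compounding periods.
m, i_nom = 12, 0.12
i_eff = (1 + i_nom / m) ** m - 1                # effective from nominal
i_nom_back = m * ((1 + i_eff) ** (1 / m) - 1)   # nominal from effective (m-th root)
print(round(i_eff, 4))                          # the 12.68% from the example above
print(abs(i_nom_back - i_nom) < 1e-12)          # True: round trip recovers i_nom
```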
# + [markdown] slideshow={"slide_type": "slide"}
# # Side Note for the Mathematically Inclined
# * For large $m$, $(1+{i_{nom}}/{m})^m$ converges to the exponential function:
# $$\lim_{m\rightarrow\infty}\left(1+\frac{i_{nom}}{m}\right)^m=e^{i_{nom}},$$
# so that
# $$
# (1+i_{eff})^n=e^{n\cdot i_{nom}}
# $$
# * This case is referred to as continuous compounding.
# -
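The limit can also be checked numerically before plotting it. A minimal sketch (the rate of 12% and the chosen values of $m$ are illustrative assumptions):

```python
import numpy as np

# As the number of compounding periods m grows, (1 + i/m)^m approaches e^i.
i = 0.12
for m in (1, 12, 365, 10_000):
    print(m, (1 + i / m) ** m)
print("limit:", np.exp(i))
```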
@interact(K0=fixed(100), i=(0.0, .5, 0.01), m=(1, 12), n=(1, 100))
def K1(K0=100, i=0.12, m=1, n=30):
j = np.arange(0, n * m + 1); Kj = K0 * (1 + i / m) ** j
p1 = plt.step(j, Kj, where='post', color='red'); p2 = plt.plot(j, K0*np.exp(i*j/m))
plt.xlabel('$j$'); plt.ylabel('$K_j$');
plt.title("Value after {} year(s), interest compounded {} time(s) per year".format(n, m)); plt.legend(('discrete compounding', 'continuous compounding'))
# + [markdown] slideshow={"slide_type": "slide"}
# # Exercises
# * Please open the Socrative app and join room __BRODA173__.
# * For these questions you must enter your name and give the numerical result in the format xx.xx%, rounded to two decimal places.
# * You may use a calculator for the computation.
#
# 1. Let $m=2$ and $i_{eff}=12$%. Compute $i_{nom}$.
# 2. Let $m=4$ and $i_{nom}=12$%. Compute $i_{eff}$.
#
#
# + [markdown] slideshow={"slide_type": "slide"}
# # Solution
# 1.
# -
i_nom = 2 * (np.sqrt(1 + 0.12)-1)
# 2.
i_eff = (1 + 0.12 / 4) ** 4 - 1
# + [markdown] slideshow={"slide_type": "slide"}
# # Annuity Calculations with Intra-Year Compounding
# * For annuity loans with monthly repayment, too, we can keep using the existing formulas, provided we work with the monthly interest rate:
#
# $$r=K_0q^n\frac{q-1}{q^n-1}, \quad q:=1+i_{12}=1+\frac{i_{nom}}{12}.$$
#
# * Example: annuity loan of CHF 20'000, term 30 months, nominal annual rate 9%, hence a monthly rate of 0.75%.
# * We consider a simplified version of our function `annuity0` that returns only the monthly installment:
#
# -
def annuity(K0, i, n):
q = 1 + i
rate = K0 * (q**n * (q - 1) / (q**n - 1))
return rate
# * Result:
annuity(20000, 0.0075, 30)
# * The effective annual rate is
(1 + .09 / 12) ** 12 - 1
# + [markdown] slideshow={"slide_type": "slide"}
# # Computing the Interest Rate
# * The annuity formula can also be solved for the interest rate given $K_0$ and $r$, though not in closed form.
# * The problem is easy to solve numerically, however, since the objective function $K_0q^n\frac{q-1}{q^n-1}-r=0\,$ is approximately linear:
#
# -
objective = lambda i: annuity(20000, i, 30) - 746.9632151166078
x = np.arange(.001, 0.9, 0.00001)
plt.plot(x, objective(x));
# * Numerical root finding yields
from scipy.optimize import newton
newton(objective, 0.005) # second argument is the starting value
# * The problem can also be solved with the solver on a calculator: solve the equation
# `20000X^30*(X-1)/(X^30-1)-746.96` for `X`. `X` then corresponds to $q=1+i_{12}$.
# + [markdown] slideshow={"slide_type": "slide"}
# # Remark
# * Both the Swiss KKG and the EU's PAngV require that the effective interest rate of a consumer loan be disclosed, including all ancillary costs. In Switzerland it may currently not exceed 10%.
# * The effective rate is defined as the internal rate of return of the series of all relevant cash flows, i.e., the interest rate for which the present value (the sum of all discounted payments) equals the loan amount.
# * For contracts more complicated than those considered here, it can only be computed numerically. We can, however, check whether the effective rate computed above is correct:
# -
j = np.arange(1, 31)
d = (1 + 0.09380689767098382 ) ** (-j / 12)
746.9632151166078 * np.sum(d)
# + [markdown] slideshow={"slide_type": "slide"}
# # Exercises
# * Please open the Socrative app and join room __BRODA173__.
# * For these questions you must enter your name and give the numerical result in the format xx.xx%, rounded to two decimal places.
# * You may use a calculator for the computation.
# * For the following questions, consider an annuity loan of CHF 3'000 with monthly repayment and a term of 2 years.
#
# 1. Compute the monthly installment assuming a nominal annual interest rate of 8%.
# 2. Now suppose the monthly installment is CHF 140. Compute the effective annual interest rate.
#
#
#
#
#
# + [markdown] slideshow={"slide_type": "slide"}
# # Solution:
# 1.
# -
r = annuity(3000, 0.08/12, 24)
# 2.
i_12 = newton(lambda i: annuity(3000, i, 24) - 140, 0.08)
i_eff = (1 + i_12) ** 12 - 1
# + [markdown] slideshow={"slide_type": "slide"}
# # What You Should Be Able to Do After Today's Lesson
# * Distinguish between nominal and effective interest rates and convert between them.
# * Compute interest rates and monthly installments with a calculator and with software.
# + [markdown] slideshow={"slide_type": "slide"}
# # Exit poll
# * Please open the Socrative app and join room BRODA173.
# * How much of today's lesson were you able to follow?
#
# a. 90-100%<br>
# b. 75-90%<br>
# c. 50-75%<br>
# d. 25-50%<br>
# e. 0-25%
#
| slides.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## TSNE Resolve Colors Bug Documentation
#
# <NAME> [reports](https://github.com/DistrictDataLabs/yellowbrick/pull/658) a bug in TSNE: colors passed in on instantiation do not affect the colors of the plot.
#
# In this example, we'll validate that the bug exists and that the proposed solution works.
# +
import os
import sys
# Modify the path
sys.path.append("..")
import pandas as pd
import yellowbrick as yb
import matplotlib.pyplot as plt
# -
# ### Validate Bug
# +
from download import download_all
from sklearn.utils import Bunch  # sklearn.datasets.base was removed in newer scikit-learn
## The path to the test data sets
FIXTURES = os.path.join(os.getcwd(), "data")
## Dataset loading mechanisms
datasets = {
"hobbies": os.path.join(FIXTURES, "hobbies")
}
def load_data(name, download=True):
"""
Loads and wrangles the passed in text corpus by name.
If download is specified, this method will download any missing files.
"""
# Get the path from the datasets
path = datasets[name]
# Check if the data exists, otherwise download or raise
if not os.path.exists(path):
if download:
download_all()
else:
raise ValueError((
"'{}' dataset has not been downloaded, "
"use the download.py module to fetch datasets"
).format(name))
# Read the directories in the directory as the categories.
categories = [
cat for cat in os.listdir(path)
if os.path.isdir(os.path.join(path, cat))
]
files = [] # holds the file names relative to the root
data = [] # holds the text read from the file
target = [] # holds the string of the category
# Load the data from the files in the corpus
for cat in categories:
for name in os.listdir(os.path.join(path, cat)):
files.append(os.path.join(path, cat, name))
target.append(cat)
with open(os.path.join(path, cat, name), 'r') as f:
data.append(f.read())
# Return the data bunch for use similar to the newsgroups example
return Bunch(
categories=categories,
files=files,
data=data,
target=target,
)
# +
from sklearn.feature_extraction.text import TfidfVectorizer
corpus = load_data('hobbies')
tfidf = TfidfVectorizer()
docs = tfidf.fit_transform(corpus.data)
labels = corpus.target
# +
from yellowbrick.text import TSNEVisualizer
tsne = TSNEVisualizer(colors=["purple","blue","orchid","indigo","plum","navy"])
tsne.fit(docs, labels)
tsne.poof()
# -
# ### Validate Solution
# +
import numpy as np
from collections import defaultdict
from yellowbrick.draw import manual_legend
from yellowbrick.text.base import TextVisualizer
from yellowbrick.style.colors import resolve_colors
from yellowbrick.exceptions import YellowbrickValueError
from sklearn.manifold import TSNE
from sklearn.pipeline import Pipeline
from sklearn.decomposition import TruncatedSVD, PCA
##########################################################################
## Quick Methods
##########################################################################
def tsne(X, y=None, ax=None, decompose='svd', decompose_by=50, classes=None,
colors=None, colormap=None, alpha=0.7, **kwargs):
"""
Display a projection of a vectorized corpus in two dimensions using TSNE,
a nonlinear dimensionality reduction method that is particularly well
suited to embedding in two or three dimensions for visualization as a
scatter plot. TSNE is widely used in text analysis to show clusters or
groups of documents or utterances and their relative proximities.
Parameters
----------
X : ndarray or DataFrame of shape n x m
A matrix of n instances with m features representing the corpus of
vectorized documents to visualize with tsne.
y : ndarray or Series of length n
An optional array or series of target or class values for instances.
If this is specified, then the points will be colored according to
their class. Often cluster labels are passed in to color the documents
in cluster space, so this method is used both for classification and
clustering methods.
ax : matplotlib axes
The axes to plot the figure on.
decompose : string or None
A preliminary decomposition is often used prior to TSNE to make the
projection faster. Specify `"svd"` for sparse data or `"pca"` for
dense data. If decompose is None, the original data set will be used.
decompose_by : int
Specify the number of components for preliminary decomposition, by
default this is 50; the more components, the slower TSNE will be.
classes : list of strings
The names of the classes in the target, used to create a legend.
colors : list or tuple of colors
Specify the colors for each individual class
colormap : string or matplotlib cmap
Sequential colormap for continuous target
alpha : float, default: 0.7
Specify a transparency where 1 is completely opaque and 0 is completely
transparent. This property makes densely clustered points more visible.
kwargs : dict
Pass any additional keyword arguments to the TSNE transformer.
Returns
-------
ax : matplotlib axes
Returns the axes that the parallel coordinates were drawn on.
"""
# Instantiate the visualizer
    # Use keyword arguments: passing these positionally would misalign them with
    # the TSNEVisualizer signature (which also takes labels and random_state).
    visualizer = TSNEVisualizer(
        ax=ax, decompose=decompose, decompose_by=decompose_by, classes=classes,
        colors=colors, colormap=colormap, alpha=alpha, **kwargs
    )
# Fit and transform the visualizer (calls draw)
visualizer.fit(X, y, **kwargs)
visualizer.transform(X)
# Return the axes object on the visualizer
return visualizer.ax
##########################################################################
## TSNEVisualizer
##########################################################################
class TSNEVisualizer(TextVisualizer):
"""
Display a projection of a vectorized corpus in two dimensions using TSNE,
a nonlinear dimensionality reduction method that is particularly well
suited to embedding in two or three dimensions for visualization as a
scatter plot. TSNE is widely used in text analysis to show clusters or
groups of documents or utterances and their relative proximities.
TSNE will return a scatter plot of the vectorized corpus, such that each
point represents a document or utterance. The distance between two points
in the visual space is embedded using the probability distribution of
pairwise similarities in the higher dimensionality; thus TSNE shows
clusters of similar documents and the relationships between groups of
documents as a scatter plot.
TSNE can be used with either clustering or classification; by specifying
the ``classes`` argument, points will be colored based on their similar
traits. For example, by passing ``cluster.labels_`` as ``y`` in ``fit()``, all
points in the same cluster will be grouped together. This extends the
neighbor embedding with more information about similarity, and can allow
better interpretation of both clusters and classes.
For more, see https://lvdmaaten.github.io/tsne/
Parameters
----------
ax : matplotlib axes
The axes to plot the figure on.
decompose : string or None, default: ``'svd'``
A preliminary decomposition is often used prior to TSNE to make the
projection faster. Specify ``"svd"`` for sparse data or ``"pca"`` for
dense data. If None, the original data set will be used.
decompose_by : int, default: 50
Specify the number of components for preliminary decomposition, by
default this is 50; the more components, the slower TSNE will be.
labels : list of strings
The names of the classes in the target, used to create a legend.
Labels must match names of classes in sorted order.
colors : list or tuple of colors
Specify the colors for each individual class
colormap : string or matplotlib cmap
Sequential colormap for continuous target
random_state : int, RandomState instance or None, optional, default: None
If int, random_state is the seed used by the random number generator;
If RandomState instance, random_state is the random number generator;
If None, the random number generator is the RandomState instance used
by np.random. The random state is applied to the preliminary
decomposition as well as tSNE.
alpha : float, default: 0.7
Specify a transparency where 1 is completely opaque and 0 is completely
transparent. This property makes densely clustered points more visible.
kwargs : dict
Pass any additional keyword arguments to the TSNE transformer.
"""
# NOTE: cannot be np.nan
NULL_CLASS = None
def __init__(self, ax=None, decompose='svd', decompose_by=50,
labels=None, classes=None, colors=None, colormap=None,
random_state=None, alpha=0.7, **kwargs):
# Visual Parameters
self.alpha = alpha
self.labels = labels
self.colors = colors
self.colormap = colormap
self.random_state = random_state
# Fetch TSNE kwargs from kwargs by popping only keys belonging to TSNE params
tsne_kwargs = {
key: kwargs.pop(key)
for key in TSNE().get_params()
if key in kwargs
}
self.transformer_ = self.make_transformer(decompose, decompose_by, tsne_kwargs)
# Call super at the end so that size and title are set correctly
super(TSNEVisualizer, self).__init__(ax=ax, **kwargs)
def make_transformer(self, decompose='svd', decompose_by=50, tsne_kwargs={}):
"""
Creates an internal transformer pipeline to project the data set into
        2D space using TSNE, applying a pre-decomposition technique ahead of
embedding if necessary. This method will reset the transformer on the
class, and can be used to explore different decompositions.
Parameters
----------
decompose : string or None, default: ``'svd'``
A preliminary decomposition is often used prior to TSNE to make
the projection faster. Specify ``"svd"`` for sparse data or ``"pca"``
for dense data. If decompose is None, the original data set will
be used.
decompose_by : int, default: 50
Specify the number of components for preliminary decomposition, by
default this is 50; the more components, the slower TSNE will be.
Returns
-------
transformer : Pipeline
Pipelined transformer for TSNE projections
"""
# TODO: detect decompose by inferring from sparse matrix or dense or
# If number of features > 50 etc.
decompositions = {
'svd': TruncatedSVD,
'pca': PCA,
}
if decompose and decompose.lower() not in decompositions:
raise YellowbrickValueError(
"'{}' is not a valid decomposition, use {}, or None".format(
decompose, ", ".join(decompositions.keys())
)
)
# Create the pipeline steps
steps = []
# Add the pre-decomposition
if decompose:
klass = decompositions[decompose]
steps.append((decompose, klass(
n_components=decompose_by, random_state=self.random_state)))
# Add the TSNE manifold
steps.append(('tsne', TSNE(
n_components=2, random_state=self.random_state, **tsne_kwargs)))
# return the pipeline
return Pipeline(steps)
def fit(self, X, y=None, **kwargs):
"""
The fit method is the primary drawing input for the TSNE projection
since the visualization requires both X and an optional y value. The
fit method expects an array of numeric vectors, so text documents must
be vectorized before passing them to this method.
Parameters
----------
X : ndarray or DataFrame of shape n x m
A matrix of n instances with m features representing the corpus of
vectorized documents to visualize with tsne.
y : ndarray or Series of length n
An optional array or series of target or class values for
instances. If this is specified, then the points will be colored
according to their class. Often cluster labels are passed in to
color the documents in cluster space, so this method is used both
for classification and clustering methods.
kwargs : dict
Pass generic arguments to the drawing method
Returns
-------
self : instance
Returns the instance of the transformer/visualizer
"""
# Store the classes we observed in y
if y is not None:
self.classes_ = np.unique(y)
elif y is None and self.labels is not None:
self.classes_ = np.array([self.labels[0]])
else:
self.classes_ = np.array([self.NULL_CLASS])
# Fit our internal transformer and transform the data.
vecs = self.transformer_.fit_transform(X)
self.n_instances_ = vecs.shape[0]
# Draw the vectors
self.draw(vecs, y, **kwargs)
# Fit always returns self.
return self
def draw(self, points, target=None, **kwargs):
"""
Called from the fit method, this method draws the TSNE scatter plot,
from a set of decomposed points in 2 dimensions. This method also
accepts a third dimension, target, which is used to specify the colors
of each of the points. If the target is not specified, then the points
are plotted as a single cloud to show similar documents.
"""
# Resolve the labels with the classes
labels = self.labels if self.labels is not None else self.classes_
if len(labels) != len(self.classes_):
raise YellowbrickValueError((
"number of supplied labels ({}) does not "
"match the number of classes ({})"
).format(len(labels), len(self.classes_)))
# Create the color mapping for the labels.
self.color_values_ = resolve_colors(
n_colors=len(labels), colormap=self.colormap, colors=self.colors)
colors = dict(zip(labels, self.color_values_))
# Transform labels into a map of class to label
labels = dict(zip(self.classes_, labels))
# Expand the points into vectors of x and y for scatter plotting,
# assigning them to their label if the label has been passed in.
# Additionally, filter classes not specified directly by the user.
series = defaultdict(lambda: {'x':[], 'y':[]})
if target is not None:
for t, point in zip(target, points):
label = labels[t]
series[label]['x'].append(point[0])
series[label]['y'].append(point[1])
else:
label = self.classes_[0]
for x,y in points:
series[label]['x'].append(x)
series[label]['y'].append(y)
# Plot the points
for label, points in series.items():
self.ax.scatter(
points['x'], points['y'], c=colors[label],
alpha=self.alpha, label=label
)
def finalize(self, **kwargs):
"""
Finalize the drawing by adding a title and legend, and removing the
        axes objects that do not convey information about TSNE.
"""
self.set_title(
"TSNE Projection of {} Documents".format(self.n_instances_)
)
# Remove the ticks
self.ax.set_yticks([])
self.ax.set_xticks([])
# Add the legend outside of the figure box.
if not all(self.classes_ == np.array([self.NULL_CLASS])):
box = self.ax.get_position()
self.ax.set_position([box.x0, box.y0, box.width * 0.8, box.height])
manual_legend(
self, self.classes_, self.color_values_,
loc='center left', bbox_to_anchor=(1, 0.5)
)
# -
tsne = TSNEVisualizer(colors=["purple","blue","orchid","indigo","plum","navy"])
tsne.fit(docs, labels)
tsne.poof()
| examples/rebeccabilbro/tsne_resolve_colors.ipynb |
# ---
# jupyter:
# jupytext:
# cell_metadata_filter: -all
# text_representation:
# extension: .r
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: R
# language: R
# name: ir
# ---
# # Worksheet A-5: Working With Factors & Tibble Joins
#
# ## Getting Started
#
# Load the requirements for this worksheet:
suppressPackageStartupMessages(library(tidyverse))
suppressPackageStartupMessages(library(tsibble))
suppressPackageStartupMessages(library(gapminder))
suppressPackageStartupMessages(library(testthat))
suppressPackageStartupMessages(library(digest))
suppressMessages({
time <- read_csv("https://raw.githubusercontent.com/STAT545-UBC/Classroom/master/data/singer/songs.csv") %>%
rename(song = title)
album <- read_csv("https://raw.githubusercontent.com/STAT545-UBC/Classroom/master/data/singer/loc.csv") %>%
select(title, everything()) %>%
rename(song = title, album = release)
})
suppressMessages({
fell <- read_csv("https://raw.githubusercontent.com/jennybc/lotr-tidy/master/data/The_Fellowship_Of_The_Ring.csv")
ttow <- read_csv("https://raw.githubusercontent.com/jennybc/lotr-tidy/master/data/The_Two_Towers.csv")
retk <- read_csv("https://raw.githubusercontent.com/jennybc/lotr-tidy/master/data/The_Return_Of_The_King.csv")
})
# The following code chunk has been unlocked, to give you the flexibility to start this document with some of your own code. Remember, it's bad manners to keep a call to `install.packages()` in your source code, so don't forget to delete these lines if you ever need to run them.
# +
# An unlocked code cell.
# -
# # Part 0: Dates and Tsibble
#
# We'll convert dates into a year-month object with the tsibble package (loaded at the start of the worksheet).
#
# ## Question 0.1
#
# Consider the built-in presidential dataset that looks at the start and ending terms of US presidents:
head(presidential)
# Use `tsibble::yearmonth()` to convert the existing start and end column dates into only year and month. Name this tibble `president_ym`.
#
# ```
# president_ym <- presidential %>%
# mutate(start = FILL_THIS_IN,
# end = FILL_THIS_IN)
# ```
# your code here
fail() # No Answer - remove if you provide an answer
head(president_ym)
test_that("Question 0.1", expect_known_hash(president_ym[1,], "8b9ac24bc52a692ab7d1bd83f9e0a19c"))
# # Part 1: Creating Factors
#
# For the best experience working with factors in R, we will use the forcats package, which is part of the tidyverse metapackage.
# ## Question 1.1
#
# Using the gapminder dataset from the gapminder package, create a new data set for the year 1997, adding a new column `life_level` containing 5 new levels according to the following table.
#
# | Criteria |`life_level` |
# |-------------------|-------------|
# | less than 23 | very low |
# | between 23 and 48 | low |
# | between 48 and 59 | moderate |
# | between 59 and 70 | high |
# | more than 70 | very high |
#
# Store this new data frame in variable `gapminder_1997`.
#
# **Hint**: We are using `case_when()`, a tidier way to vectorise multiple `if_else()` statements.
# You can read more about this function [in the tidyverse reference](https://dplyr.tidyverse.org/reference/case_when.html).
#
# ```
# gapminder_1997 <- gapminder %>%
# FILL_THIS_IN(year == FILL_THIS_IN) %>%
# FILL_THIS_IN(life_level = case_when(FILL_THIS_IN < FILL_THIS_IN ~ "very low",
# FILL_THIS_IN < FILL_THIS_IN ~ "low",
# FILL_THIS_IN < FILL_THIS_IN ~ "moderate",
# FILL_THIS_IN < FILL_THIS_IN ~ "high",
# TRUE ~ "very high"))
# ```
# your code here
fail() # No Answer - remove if you provide an answer
head(gapminder_1997)
test_that("Question 1.1", expect_known_hash(table(gapminder_1997$life_level), "3d2e691667d4706e66ce5784bb1d7042"))
# FYI: We can now plot boxplots for the GDP per capita per level of life expectancy.
# Run the following code to see the boxplots.
ggplot(gapminder_1997) + geom_boxplot(aes(x = life_level, y = gdpPercap)) +
labs(y = "GDP per capita ($)", x = "Life expectancy level (years)") +
ggtitle("GDP per capita per Level of Life Expectancy") +
theme_bw()
# ## Question 1.2
#
# Notice a few oddities in the above plot:
#
# - It seems that none of the countries had a "very low" life-expectancy in 1997.
# - However, since it was an option in our analysis it should be included in our plot. Right?
# - Notice also how the levels on the x-axis are placed in the "wrong" (alphabetical) order.
#
# You can correct these issues by explicitly making `life_level` a factor and setting the levels parameter.
# Create a new data frame as in Question 1.1, but make the column `life_level` a factor with levels ordered from *very low* to *very high*.
# Store this new data frame in variable `gapminder_1997_fct`.
#
# ```
# gapminder_1997_fct <- gapminder %>%
# FILL_THIS_IN(year == 1997) %>%
# FILL_THIS_IN(life_level = FILL_THIS_IN(case_when(FILL_THIS_IN < FILL_THIS_IN ~ "very low",
# FILL_THIS_IN < FILL_THIS_IN ~ "low",
# FILL_THIS_IN < FILL_THIS_IN ~ "moderate",
# FILL_THIS_IN < FILL_THIS_IN ~ "high",
# TRUE ~ "very high"),
# levels = c('FILL_THIS_IN', 'FILL_THIS_IN', 'FILL_THIS_IN', 'FILL_THIS_IN', 'FILL_THIS_IN')))
# ```
# your code here
fail() # No Answer - remove if you provide an answer
head(gapminder_1997_fct)
test_that("Question 1.2", expect_known_hash(table(gapminder_1997_fct$life_level), "8e62f09fbd0756d7e69d1bc95715d333"))
# Run the following code to see the boxplots from the new data frame with life expectancy level as factor.
ggplot(gapminder_1997_fct) + geom_boxplot(aes(x = life_level, y = gdpPercap)) +
labs(y = "GDP per capita ($)", x= "Life expectancy level (years)") +
scale_x_discrete(drop = FALSE) + # Don't drop the very low factor
ggtitle("GDP per capita per level of Life Expectancy") +
theme_bw()
# # Part 2: Inspecting Factors
#
# In Part 1, you created your own factors, so now let's explore what categorical variables are in the `gapminder` dataset.
#
# ## Question 2.1
#
# What levels does the column `continent` have?
# Assign the levels to variable `continent_levels`, using the `levels()` function. (To mix things up a bit, the template code we're giving you extracts a column using the Base R way of extracting columns -- with a dollar sign.)
#
# ```
# continent_levels <- FILL_THIS_IN(gapminder$FILL_THIS_IN)
# ```
# your code here
fail() # No Answer - remove if you provide an answer
print(continent_levels)
test_that("Question 2.1", expect_known_hash(continent_levels, "6926255b7f073fb8e7d89773802102a6"))
# ## Question 2.2
#
# How many levels does the column `country` have?
# Assign the number of levels to variable `gap_nr_countries`. Hint: there's a function called `nlevels()`.
#
# ```
# gap_nr_countries <- FILL_THIS_IN(gapminder$FILL_THIS_IN)
# ```
# your code here
fail() # No Answer - remove if you provide an answer
print(gap_nr_countries)
test_that("Question 2.2", expect_known_hash(as.integer(gap_nr_countries), "3b6d002135d8d45a3c5f4a9fb857c323"))
# ## Question 2.3
#
# Consider we are only interested in the following 5 countries: Egypt, Haiti, Romania, Thailand, and Venezuela.
# Create a new data frame with only these 5 countries and store it in variable `gap_5`. _Hint_: nothing new here -- use your dplyr knowledge!
#
# ```
# gap_5 <- gapminder %>%
# FILL_THIS_IN(FILL_THIS_IN %in% c("FILL_THIS_IN", "FILL_THIS_IN", "FILL_THIS_IN", "FILL_THIS_IN", "FILL_THIS_IN"))
# ```
# your code here
fail() # No Answer - remove if you provide an answer
head(gap_5)
test_that("Question 2.3", {
expect_known_hash(dim(gap_5), "6c0f8c2a8d488051f33fc89b2c327dcd")
expect_known_hash(table(gap_5$country), "05b8ca3033e94f96b9ec5422a69c1207")
})
# ## Question 2.4
#
# However, subsetting the data set does not affect the levels of the factors.
# The column `country` in tibble `gap_5` still has the same number of levels as in the original data frame.
#
# Your task: create a new tibble from `gap_5`, where all unused levels from column `country` are dropped. _Hint_: use the `droplevels()` function. Store the new tibble in variable `gap_5_dropped`.
#
# By way of demonstration, check the number of levels in the "country" column before and after the change -- we've included the code for this for you.
#
# ```
# nlevels(gap_5$country)
# gap_5_dropped <- FILL_THIS_IN(FILL_THIS_IN)
# nlevels(gap_5_dropped$country)
# ```
# your code here
fail() # No Answer - remove if you provide an answer
head(gap_5_dropped)
test_that("Question 2.4", expect_known_hash(sort(levels(gap_5_dropped$country)), "ac97b9af845a59395697b028c5121503"))
# ## Question 2.5
#
# The factor levels of column `continent` in data frame `gapminder` are ordered alphabetically.
# Create a new data frame, with the levels of column `continent` in *increasing* order according to their frequency (i.e., the number of rows for each continent).
# Store the new data frame in variable `gap_continent_freq`. *Hint*: Use `fct_infreq()` and `fct_rev()`.
#
# ```
# gap_continent_freq <- gapminder %>%
# mutate(continent = FILL_THIS_IN(FILL_THIS_IN(continent)))
# ```
#
# **Hint**: The first `FILL_THIS_IN` corresponds to a `fct_*` function that reverses the levels of the factors. The second `FILL_THIS_IN` corresponds to a `fct_*` function that orders the levels by *decreasing* frequency.
# your code here
fail() # No Answer - remove if you provide an answer
head(gap_continent_freq)
test_that("Question 2.5", expect_known_hash(table(gap_continent_freq$continent), "0bb23ea87ce71deb5452eaae8cdbf7cf"))
# FYI: You can't "see" any difference in the tibble, but there are _attributes_ behind the hood keeping track of the order of the "continent" entries. You _can_ see the difference, however, in a plot, as below. Notice how the x-axis is no longer ordered alphabetically.
ggplot(gap_continent_freq, aes(continent)) + geom_bar()
# ## Question 2.6
#
# Again based on the `gapminder` data set, create another data frame, with the levels of column `continent` in *increasing* order of their average life expectancy (from column `lifeExp`).
# Store the new data frame in variable `gap_continent_life`. _Hint_: use `fct_reorder()`.
#
# ```
# gap_continent_life <- gapminder %>%
# mutate(continent = FILL_THIS_IN(FILL_THIS_IN, FILL_THIS_IN, FILL_THIS_IN))
# ```
# your code here
fail() # No Answer - remove if you provide an answer
head(gap_continent_life)
test_that("Question 2.6", expect_known_hash(table(gap_continent_life$continent), "7688676a0807063f1bfa5b4cc721c2d9"))
# Again, you can't "see" any difference in the tibble. But here's a plot that makes the difference clearer. Notice the ordering of the x-axis.
ggplot(gap_continent_life, aes(continent, lifeExp)) + geom_boxplot()
# ## Question 2.7
#
# Now suppose you want to make comparisons between countries, relative to Canada.
# Create a new data frame, with the levels of column `country` rearranged to have Canada as the first one.
# Store the new data frame in variable `gap_canada_base`.
#
# ```
# (gap_canada_base <- gapminder %>%
# mutate(country = FILL_THIS_IN(FILL_THIS_IN, "FILL_THIS_IN")))
# ```
# your code here
fail() # No Answer - remove if you provide an answer
head(gap_canada_base)
test_that("Question 2.7", expect_known_hash(table(gap_canada_base$country), "72d75ce05a16d8965f7bd0ae3fb986d3"))
# Take a look at the levels of the "country" factor, and you'll now see Canada first:
gap_canada_base %>%
pull(country) %>%
levels()
# ## Question 2.8
#
# Sometimes you want to manually change a few factor levels, e.g., if the level is too wide for plotting.
# Based on the `gapminder` data set, create a new data frame with the Central African Republic renamed to *Central African Rep.* and Bosnia and Herzegovina renamed to *Bosnia & Herzegovina*.
# Store the new data frame in variable `gap_car`. _Hint_: use `fct_recode()`.
#
# ```
# gap_car <- gapminder %>%
# mutate(country = FILL_THIS_IN(FILL_THIS_IN, "Central African Rep." = "FILL_THIS_IN",
# "Bosnia & Herzegovina" = "FILL_THIS_IN"))
# ```
# your code here
fail() # No Answer - remove if you provide an answer
head(gap_car)
test_that("Question 2.8", expect_known_hash(table(gap_car$country), "9cc15f09cb70b5596bbf3feaa73ee471"))
# # Part 3: Tibble Joins
#
# At the start of this worksheet, we loaded a couple datasets from the [singer](https://github.com/JoeyBernhardt/singer) package, and called them `time` and `album`. These two data sets contain information about a few popular songs and albums.
#
# We'll practice various joins using these two datasets. You'll need to find out which join is appropriate for each case!
#
# Run the following R code to look at the two data sets:
head(time)
head(album)
# ## Question 3.1
#
# Create a new data frame containing all songs from `time` that have a corresponding album in the `album` dataset, while also adding the album information. Store the joined data set in variable `songs_with_album`.
#
# ```
# songs_with_album <- time %>%
# FILL_THIS_IN(FILL_THIS_IN, by = c("FILL_THIS_IN", "FILL_THIS_IN"))
# ```
# your code here
fail() # No Answer - remove if you provide an answer
head(songs_with_album)
test_that("Question 3.1", {
expect_known_hash(sort(songs_with_album$song), "146ff293a74ccc1ad24505a6bc0b6682")
expect_known_hash(table(songs_with_album$artist_name), "51f7daeec65e839e5ae6c84ac5a1cb70")
})
# ## Question 3.2
#
# Go ahead and add the corresponding albums to the `time` tibble, being sure to preserve rows even if album info is not readily available.
# Store the joined data set in variable `all_songs`.
#
# ```
# all_songs <- time %>%
# FILL_THIS_IN(FILL_THIS_IN, by = c("FILL_THIS_IN", "FILL_THIS_IN"))
# ```
# your code here
fail() # No Answer - remove if you provide an answer
head(all_songs)
test_that("Question 3.2", {
expect_known_hash(sort(all_songs$song), "dd1c0b2e14a879cb1a6f07077ed38e97")
expect_known_hash(all_songs$album[order(all_songs$song)], "2baea3c1a23797fdac5a9e0dc119073e")
})
# ## Question 3.3: Joining Rows by Columns
#
# Create a new tibble with songs from `time` for which there is no album info.
# Store the new data set in variable `songs_without_album`.
#
# ```
# songs_without_album <- time %>%
# FILL_THIS_IN(FILL_THIS_IN, by = c("FILL_THIS_IN", "FILL_THIS_IN"))
# ```
# your code here
fail() # No Answer - remove if you provide an answer
head(songs_without_album)
test_that("Question 3.3", expect_known_hash(sort(songs_without_album$song), "3e6a210ad915fb07eb7e894a7ca0e856"))
# ## Question 3.4
#
# Create a new tibble with all songs from artists for whom there is no album information.
# Store the new data set in variable `songs_artists_no_album`.
#
# ```
# songs_artists_no_album <- time %>%
# FILL_THIS_IN(FILL_THIS_IN, by = "FILL_THIS_IN")
# ```
# your code here
fail() # No Answer - remove if you provide an answer
head(songs_artists_no_album)
test_that("Question 3.4", expect_known_hash(table(songs_artists_no_album$artist_name),
"244510c51477c31e6e795cbc0ca0b0d7"))
# ## Question 3.5
# Create a new tibble with all the information from both tibbles, keeping every row from each even when there is no corresponding information in the other tibble.
# Store the new data set in variable `all_songs_and_albums`.
#
# ```
# all_songs_and_albums <- time %>%
# FILL_THIS_IN(FILL_THIS_IN, by = c("FILL_THIS_IN", "FILL_THIS_IN"))
# ```
# your code here
fail() # No Answer - remove if you provide an answer
head(all_songs_and_albums)
test_that("Question 3.5", {
expect_known_hash(sort(all_songs_and_albums$song), "ba2ba3507e50c56d21028893404259a5")
expect_known_hash(with(all_songs_and_albums, album[order(song)]), "dbc70af8d3078ea830be9cfb0dee6b9d")
expect_known_hash(with(all_songs_and_albums, year[order(song)]), "10669b0750ab4d53b54f0e509430e2d1")
})
# # Part 4: Concatenating Rows
#
# At the start of the worksheet, we loaded three Lord of the Rings datasets (one for each of the three movies). Run the following R code to take a look at the three tibbles:
fell
ttow
retk
# ## Question 4.1
#
# Combine the three data sets into a single tibble, storing the new tibble in variable `lotr`.
#
# ```
# lotr <- FILL_THIS_IN(fell, ttow, retk)
# ```
# your code here
fail() # No Answer - remove if you provide an answer
print(lotr)
test_that("Question 4.1", expect_known_hash(table(lotr$Film), "41c29122f6c217d447e85a9069f5a92f"))
# # Part 5: Set Operations
#
# Let's use three set functions: `intersect()`, `union()` and `setdiff()`.
# They work for data frames with the same column names.
#
# We'll work with two toy tibbles named `y` and `z`, similar to the Data Wrangling Cheatsheet.
#
# Run the following R code to create the data.
(y <- tibble(x1 = LETTERS[1:3], x2 = 1:3))
(z <- tibble(x1 = c("B", "C", "D"), x2 = 2:4))
# ## Question 5.1
#
# Use one of the three methods mentioned above to create a new data set which contains all rows that appear in both `y` and `z`.
# Store the new data frame in variable `in_both`
#
# ```
# in_both <- FILL_THIS_IN(y, z)
# ```
# your code here
fail() # No Answer - remove if you provide an answer
in_both
test_that("Question 5.1", expect_known_hash(in_both$x1, "745ec49ab3231655a04484be44a15f98"))
# ## Question 5.2
# Assume that rows in `y` are from *Day 1* and rows in `z` are from *Day 2*.
# Create a new data set with all rows from `y` and `z`, as well as an additional column `day` which is *Day 1* for rows from `y` and *Day 2* for rows from `z`.
# Store the new data set in variable `both_days`.
#
# ```
# both_days <- FILL_THIS_IN(
# mutate(y, day = "Day 1"),
# mutate(z, day = "Day 2")
# )
# ```
# your code here
fail() # No Answer - remove if you provide an answer
both_days
test_that("Question 5.2", expect_known_hash(with(both_days, x1[order(x2, day)]), "66b9eefd39c2f0b5d130453c139a2051"))
# ## Question 5.3
#
# The rows contained in `z` are bad.
# Use one of the three methods mentioned above to create a new data set which contains only the rows from `y` which are not in `z`.
# Store the new data frame in variable `only_y`
#
# ```
# only_y <- FILL_THIS_IN(y, z)
# ```
# your code here
fail() # No Answer - remove if you provide an answer
only_y
test_that("Question 5.3", expect_known_hash(only_y$x1, "75f1160e72554f4270c809f041c7a776"))
# ### Attribution
#
# Assembled by <NAME> and <NAME>, reviewed by <NAME>, and assisted by <NAME>.
| content/worksheets/worksheet_a05.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] tags=["remove_cell"]
# # Classical Computation on a Quantum Computer
# + [markdown] tags=["contents"]
# ## Contents
#
# 1. [Introduction](#intro)
# 2. [Consulting an Oracle](#oracle)
# 3. [Taking Out the Garbage](#garbage)
# -
# ## 1. Introduction <a id="intro"></a>
#
# Any universal set of quantum gates can also reproduce classical computation: as we saw in *The Atoms of Computation*, we simply compile the classical computation down to Boolean logic gates and reproduce it on the quantum computer.
#
# This demonstrates an important fact about quantum computers: anything a classical computer can do, a quantum computer can also do, with at least comparable computational complexity. Although the aim of quantum computing is not to perform tasks at which classical computers already excel, this fact explains why quantum computers are able to solve general problems.
#
# Furthermore, problems that call for a quantum solution often include aspects that can be approached with classical algorithms. In some cases those parts can simply be run on a classical computer. Often, however, the classical algorithm must be executed on inputs that exist in a superposition of states. This requires running the classical algorithm on the quantum computer itself. In this section we introduce a few examples of this kind.
#
# ## 2. Consulting an Oracle <a id="oracle"></a>
#
# Many quantum algorithms are built around the analysis of some function $f(x)$. Often these algorithms simply assume the existence of a "black box" implementation of this function, to which we can give an input $x$ and receive the corresponding output $f(x)$. Such a function is referred to as an *oracle*.
#
# Treating the oracle abstractly lets us concentrate on the techniques for analyzing the function, rather than on the function itself.
#
# To understand how an oracle works within a quantum algorithm, we need to be specific about how it is defined. One of the main forms an oracle takes is the *Boolean oracle*, described by the following unitary evolution:
#
# $$ U_f \left|x , \bar 0 \right\rangle = \left|x, f(x)\right\rangle. $$
#
# Here $\left|x , \bar 0 \right\rangle = \left|x \right\rangle \otimes \left|\bar 0 \right\rangle$ is a multi-qubit state composed of two registers. The first register holds the state $\left|x\right\rangle$, where $x$ is a binary representation of the input to the function; the number of qubits in this register is the number of bits needed to represent the input.
#
# The role of the second register is to encode the output. Specifically, after $U_f$ has acted, the state of this register is a binary representation of the output $\left|f(x)\right\rangle$, and the register consists of as many qubits as are needed to represent that output. Its initial state $\left|\bar 0 \right\rangle$ denotes the state in which every qubit is $\left|0 \right\rangle$. For other initial states, applying $U_f$ gives different results; exactly what depends on how the unitary $U_f$ is defined.
#
# Another form of oracle is the *phase oracle*, defined as follows:
#
# $$ P_f \left|x \right\rangle = (-1)^{f(x)} \left|x \right\rangle, $$
#
# Although the phase oracle looks quite different from the Boolean oracle, it is another expression of the same basic idea. In fact, it can be realized using the same "phase kickback" mechanism discussed in the previous section.
#
# To see this, consider a Boolean oracle $U_f$ for the same function. For a single-bit output, this can essentially be implemented as a generalized controlled-NOT: controlled on the input register, the output qubit is left as $\left|0 \right\rangle$ when $f(x)=0$, and flipped to $\left|1 \right\rangle$ by an $X$ when $f(x)=1$. If the output qubit is instead initialized in $\left|- \right\rangle$ rather than $\left|0 \right\rangle$, the effect of $U_f$ is to induce exactly the phase $(-1)^{f(x)}$:
#
# $$ U_f \left( \left|x \right\rangle \otimes \left| - \right\rangle \right) = (P_f \otimes I) \left( \left|x \right\rangle \otimes \left| - \right\rangle \right) $$
#
# The state $\left|- \right\rangle$ of the output qubit is unchanged throughout and can be ignored. The upshot is that the phase oracle can be implemented using the Boolean oracle.
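# As a quick numerical sanity check of the identity above (not part of the original notebook's code), take the simplest case $f(x) = x$, where the Boolean oracle $U_f$ is a single CNOT, and verify that with the output qubit in $|-\rangle$ it acts as the phase oracle:

```python
import numpy as np

# Basis states and the |-> state
ket0 = np.array([1, 0], dtype=complex)
ket1 = np.array([0, 1], dtype=complex)
minus = (ket0 - ket1) / np.sqrt(2)

# Boolean oracle for f(x) = x on one input and one output qubit: a CNOT,
# with the input qubit as control and the output qubit as target
CX = np.array([[1, 0, 0, 0],
               [0, 1, 0, 0],
               [0, 0, 0, 1],
               [0, 0, 1, 0]], dtype=complex)

for x, ket_x in enumerate([ket0, ket1]):
    state_in = np.kron(ket_x, minus)   # |x> ⊗ |->
    state_out = CX @ state_in          # U_f (|x> ⊗ |->)
    expected = (-1) ** x * state_in    # (P_f ⊗ I)(|x> ⊗ |->)
    assert np.allclose(state_out, expected)
print("phase kickback verified for f(x) = x")
```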
# ## 3. Taking Out the Garbage <a id="garbage"></a>
#
# The functions evaluated by oracles are usually ones that can be evaluated efficiently on a classical computer. Nevertheless, the fact that the oracle must be implemented as a unitary gate of the form shown above means it has to be built out of quantum gates. This is not simply a matter of taking the Boolean gates used in the classical algorithm and replacing each with a corresponding quantum one.
#
# One issue we must be mindful of is reversibility. A unitary of the form $U = \sum_x \left| f(x) \right\rangle \left\langle x \right|$ is possible only when each unique input $x$ produces a unique output $f(x)$, which is not true in general. However, it can always be made true by simply including a copy of the input in the output. This is exactly the Boolean-oracle form we saw above.
#
# Describing operations as unitaries also lets us consider how they act on superposition states. For example, consider a superposition over all possible inputs $x$ (unnormalized, for simplicity). The result will be a superposition of the corresponding input/output pairs:
#
# $$ U_f \sum_x \left|x,0\right\rangle = \sum_x \left|x,f(x)\right\rangle. $$
#
# Applied to superpositions, then, the unitary behaves just as we require. But there is something else to be careful about. Classical algorithms typically compute more than just the output we are after: they also generate additional information along the way. Such incidental "garbage" is not a major problem classically, since the memory holding it can simply be erased and reused. From a quantum perspective, however, it is not so easy.
#
# For example, consider a computation of the following form in a classical algorithm:
#
# $$ V_f \left|x,\bar 0, \bar 0 \right\rangle = \left| x,f(x), g(x) \right\rangle $$
#
# Here the third register is used by the classical algorithm as a "scratchpad". The information $g(x)$ left in that register when the computation ends is treated as garbage. We will use $V_f$ to denote a unitary implementation of this kind.
#
# Quantum algorithms are usually built on interference effects. The simplest is to create a superposition with some unitary operation and then remove it again with the inverse operation. A trivial thing to do overall, of course, but we must at least confirm that quantum computers are capable of such trivial things.
#
# For example, imagine that some sequence of quantum operations has created the superposition $\sum_x \left|x,f(x)\right\rangle$, and that we need to return the state $\sum_x \left|x,0\right\rangle$. We could simply apply $U_f^\dagger$. Moreover, whenever we know a circuit that applies $U_f$, we also know one that applies $U_f^\dagger$: we just replace each gate in the $U_f$ circuit with its inverse, in the reverse order.
#
# But now suppose we do not know how to apply $U_f$, and only know how to apply $V_f$. Then we cannot apply $U_f^\dagger$, but we can apply $V_f^\dagger$. Unfortunately, because of the information garbage, this does not give the same result.
#
# To see this, consider a very simple case: $x$, $f(x)$, and $g(x)$ each consist of a single bit, with $f(x) = x$ and $g(x) = x$. Both can be realized with a single `cx` gate controlled on the input bit.
#
# Concretely, the circuit implementing $U_f$ is a single `cx` gate between the input and output registers.
from qiskit import QuantumCircuit, QuantumRegister
# +
input_bit = QuantumRegister(1, 'input')
output_bit = QuantumRegister(1, 'output')
garbage_bit = QuantumRegister(1, 'garbage')
Uf = QuantumCircuit(input_bit, output_bit, garbage_bit)
Uf.cx(input_bit[0], output_bit[0])
Uf.draw()
# -
# For $V_f$ we also need to copy the input into the garbage register, which can be done with two `cx` gates as follows:
Vf = QuantumCircuit(input_bit, output_bit, garbage_bit)
Vf.cx(input_bit[0], garbage_bit[0])
Vf.cx(input_bit[0], output_bit[0])
Vf.draw()
# Now let's apply $U_f$ first, followed by $V_f^{\dagger}$. The result is the following circuit:
qc = Uf + Vf.inverse()
qc.draw()
# This circuit begins with two `cx` gates that cancel each other out, leaving only the final `cx` gate between the input and garbage registers. Mathematically, this means:
#
# $$ V_f^\dagger U_f \left| x,0,0 \right\rangle = V_f^\dagger \left| x,f(x),0 \right\rangle = \left| x , 0 ,g(x) \right\rangle. $$
#
# What we see here is that $V_f^\dagger$ does not simply return the initial state: it leaves the first qubit entangled with an unwanted garbage qubit. The returned state is not the one we need, and the rest of the algorithm will not behave as intended.
#
# For this reason, quantum algorithms need to remove the classical garbage. This can be done with a technique known as *uncomputation*. All we need is to prepare a blank register and apply $V_f$:
#
# $$ \left| x, 0, 0, 0 \right\rangle \rightarrow \left| x,f(x),g(x),0 \right\rangle. $$
#
# We then apply controlled-NOT gates with the qubits that encode the output as controls, targeting the newly prepared blank register.
#
# Here is an example of this step for one-qubit registers.
# +
final_output_bit = QuantumRegister(1, 'final-output')
copy = QuantumCircuit(output_bit, final_output_bit)
copy.cx(output_bit, final_output_bit)
copy.draw()
# -
# This circuit has the effect of copying the information over (if you know about the no-cloning theorem: this is not the same process). Concretely, it transforms the state as follows.
#
# $$ \left| x,f(x),g(x),0 \right\rangle \rightarrow \left| x,f(x),g(x),f(x) \right\rangle. $$
#
# Finally, we apply $V_f^\dagger$ to undo the original computation.
#
# $$ \left| x,f(x),g(x),f(x) \right\rangle \rightarrow \left| x,0,0,f(x) \right\rangle. $$
#
# The copied output remains at the end. The net result is that we can perform the computation without garbage, obtaining the desired $U_f$.
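# The sequence above can be checked numerically. The sketch below (toy code, not from the original notebook) builds the three steps $V_f$, copy, $V_f^\dagger$ as matrices for the $f(x) = g(x) = x$ example and confirms that the composite maps $|x,0,0,0\rangle$ to $|x,0,0,f(x)\rangle$:

```python
import numpy as np

I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]])
P0 = np.array([[1, 0], [0, 0]])
P1 = np.array([[0, 0], [0, 1]])

def cx(control, target, n):
    """Matrix of a CNOT on n qubits (qubit 0 = leftmost tensor factor)."""
    term0, term1 = np.ones((1, 1)), np.ones((1, 1))
    for q in range(n):
        term0 = np.kron(term0, P0 if q == control else I2)
        term1 = np.kron(term1, P1 if q == control else (X if q == target else I2))
    return term0 + term1

n = 4  # registers: input, output, garbage, final-output (one qubit each)
Vf = cx(0, 1, n) @ cx(0, 2, n)   # f(x) = x into output, g(x) = x into garbage
copy = cx(1, 3, n)               # copy the output into final-output
seq = Vf.conj().T @ copy @ Vf    # apply V_f, then the copy, then V_f^dagger

def basis(bits):
    """Basis state |b0 b1 b2 b3> as a vector."""
    v = np.ones(1)
    for b in bits:
        v = np.kron(v, np.eye(2)[b])
    return v

for x in (0, 1):
    out = seq @ basis([x, 0, 0, 0])
    assert np.allclose(out, basis([x, 0, 0, x]))  # |x, 0, 0, f(x)>
print("garbage uncomputed; only the copied output remains")
```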
#
# For the example treated above, with one-qubit registers and $f(x) = x$, the circuit is the following.
(Vf + copy + Vf.inverse()).draw()
# Using what we know about the `cx` gate, we can see that the two gates acting on the garbage register cancel each other out. This is how the garbage register is removed.
#
# ### Exercises
#
# 1. Show that when the "output" register is initialized to $|0\rangle$, the output is correctly written to the "final output" register (and only there).
# 2. What happens if the "output" register is instead initialized to $|1\rangle$?
# With the methods of this section and the other sections of this chapter, we now have the tools needed to build quantum algorithms. Let's move on and look at the algorithms themselves.
import qiskit.tools.jupyter
# %qiskit_version_table
| translations/ja/ch-gates/oracles.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: qiskitdevl
# language: python
# name: qiskitdevl
# ---
# # Calibrating Qubits using OpenPulse
# Contents
# 1. Introduction
# 1. Finding our qubit
# 1. Rabi experiment
# 1. 0 vs 1
# 1. Measuring T1
# 1. Ramsey experiment
# 1. Measuring T2
# 1. Dynamical Decoupling
# # 1. Introduction
# +
# %matplotlib inline
import qiskit.pulse as pulse
import qiskit.pulse.pulse_lib as pulse_lib
from qiskit.compiler import assemble
import qiskit
qiskit.__qiskit_version__
# +
from qiskit import IBMQ
IBMQ.load_account()
provider = IBMQ.get_provider(hub='your-hub-name') # change to your hub name
backend = provider.get_backend('ibmq_poughkeepsie')
backend_config = backend.configuration()
# -
from qiskit.tools.jupyter import backend_overview, backend_monitor
# %qiskit_backend_monitor backend
# The superconducting devices at IBM are routinely calibrated to determine the properties of each qubit. The calibration procedure determines the qubit frequency, coherence and energy relaxation times, and pulse parameters, among other things. In this notebook, we show how these parameters can be determined at the microwave level using Terra.Pulse.
#
# For an introduction to the experiments, please see [this paper](https://arxiv.org/pdf/0812.1865.pdf) or [this paper](https://arxiv.org/abs/cond-mat/0703002) or [this paper](http://qulab.eng.yale.edu/documents/reprints/QIP_Devoret_squbit_review.pdf).
#
# Note: Pulse is a fairly new component of Qiskit. Please contact <EMAIL> if you find that something in this notebook is suddenly broken.
backend_defaults = backend.defaults()
backend_devicespec = pulse.DeviceSpecification.create_from(backend)
dt = backend_config.dt
# # 2. Finding our qubit
# Define the frequency range that will be swept in search of the qubit.
# +
qubit = 1
center_frequency_GHz = backend_defaults.qubit_freq_est[qubit]
# define frequencies to do VNA sweep
import numpy as np
frequency_span_kHz = 20000
frequency_step_kHz = 1000
frequency_min = center_frequency_GHz - frequency_span_kHz/2.e6
frequency_max = center_frequency_GHz + frequency_span_kHz/2.e6
frequencies_GHz = np.arange(frequency_min, frequency_max, frequency_step_kHz/1e6)
print(frequencies_GHz)
# -
# Define drive and measurement pulse parameters for the experiment
# +
# drive pulse parameters
drive_power = 0.01
drive_samples = 128
drive_sigma = 16
# creating drive pulse
drive_pulse = pulse_lib.gaussian(duration=drive_samples, amp=drive_power,
sigma=drive_sigma, name='mydrivepulse')
drive_pulse_qubit = drive_pulse(backend_devicespec.q[qubit].drive)
# measurement pulse parameters
meas_amp = 0.05
meas_samples = 1200
meas_sigma = 4
meas_risefall = 25
# creating measurement pulse
meas_pulse = pulse_lib.gaussian_square(duration=meas_samples, amp=meas_amp,
sigma=meas_sigma, risefall=meas_risefall,
name='mymeasurepulse')
meas_pulse_qubit = meas_pulse(backend_devicespec.q[qubit].measure)
# create acquire pulse
acq_cmd=pulse.Acquire(duration=meas_samples)
acq_cmd_qubit = acq_cmd(backend_devicespec.q, backend_devicespec.mem)
# combined measure and acquire pulse
measure_and_acquire_qubit = meas_pulse_qubit | acq_cmd_qubit
# scalefactor for received data
scale_factor = 1e-10
# -
# Once the pulse parameters have been defined, we can create the pulse schedules corresponding to each frequency in the sweep.
# +
# schedules
schedules = []
schedule_LOs = []
num_shots_per_frequency = 256
for jj, drive_frequency in enumerate(frequencies_GHz):
# start an empty schedule with a label
this_schedule = pulse.Schedule(name="Frequency = {}".format(drive_frequency))
this_schedule += drive_pulse_qubit
this_schedule += measure_and_acquire_qubit << this_schedule.duration
schedules.append(this_schedule)
thisLO = pulse.LoConfig({backend_devicespec.q[qubit].drive: drive_frequency})
schedule_LOs.append(thisLO)
VNASweep_experiment_qobj = assemble(schedules, backend = backend,
meas_level=1, meas_return='single',
shots=num_shots_per_frequency,
schedule_los = schedule_LOs
)
# -
schedules[-1].draw(channels_to_plot=[backend_devicespec.q[qubit].measure,
backend_devicespec.q[qubit].drive,
#backend_devicespec.q[qubit].acquire,
],
scaling=10.0)
job = backend.run(VNASweep_experiment_qobj)
from qiskit.tools.monitor import job_monitor
print(job.job_id())
job_monitor(job, monitor_async=True)
job = backend.retrieve_job('5d2e228e15ce0100196d8c22')
VNASweep_results = job.result(timeout=3600)
# +
plot_X = frequencies_GHz
plot_Y = []
for kk, drive_frequency in enumerate(frequencies_GHz):
thisfrequency_results = VNASweep_results.get_memory(kk)*scale_factor
plot_Y.append( np.mean(thisfrequency_results[:, qubit]) )
import matplotlib.pyplot as plotter
plotter.plot(plot_X, plot_Y)
# -
# pick the frequency with the strongest response
rough_frequency_qubit = frequencies_GHz[np.argmax(plot_Y)]
rough_frequency_qubit = round(rough_frequency_qubit, 5)
print(rough_frequency_qubit)
# # 3. Rabi experiment
# Once we know the frequency of our qubit, the next step is to determine the strength of a $\pi$ pulse.
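# The fit performed below extracts the Rabi period from the oscillation of the measured signal as the drive amplitude is swept; a $\pi$ pulse then corresponds to half a period. A toy illustration on synthetic data (the period value is made up, not a device value):

```python
import numpy as np

# Synthetic Rabi oscillation: the measured signal oscillates as
# cos(2*pi*amp / period) as the drive amplitude is swept.
true_period = 0.04
amps = np.linspace(0, 0.05, 501)
signal = np.cos(2 * np.pi * amps / true_period)

# A pi pulse is half a Rabi period: it drives |0> all the way to |1>,
# i.e. to the first minimum of the signal
pi_amp = amps[np.argmin(signal)]
print(pi_amp)  # ~0.02 = true_period / 2
```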
# +
# Rabi experiment parameters
num_Rabi_points = 64
num_shots_per_point = 256
# drive parameters
drive_power_min = 0
drive_power_max = 0.1
drive_powers = np.linspace(drive_power_min, drive_power_max, num_Rabi_points)
drive_samples = 128
drive_sigma = 16
# -
# create schedules for Rabi experiment
Rabi_schedules = []
Rabi_schedule_LOs = []
for ii, drive_power in enumerate(drive_powers):
rabi_pulse = pulse_lib.gaussian(duration=drive_samples, amp=drive_power,
sigma=drive_sigma, name='rabi_pulse_{}'.format(ii))
rabi_pulse_qubit = rabi_pulse(backend_devicespec.q[qubit].drive)
# start an empty schedule with a label
this_schedule = pulse.Schedule(name="Rabi drive = {}".format(drive_power))
this_schedule += rabi_pulse_qubit
this_schedule += measure_and_acquire_qubit << this_schedule.duration
Rabi_schedules.append(this_schedule)
thisLO = pulse.LoConfig({backend_devicespec.q[qubit].drive: rough_frequency_qubit})
Rabi_schedule_LOs.append(thisLO)
Rabi_schedules[-1].draw(channels_to_plot=[backend_devicespec.q[qubit].measure,
backend_devicespec.q[qubit].drive,
#backend_devicespec.q[qubit].acquire,
],
scaling=10.0)
rabi_experiment_qobj = assemble (Rabi_schedules, backend = backend,
meas_level=1, meas_return='avg',
shots=num_shots_per_point,
schedule_los = Rabi_schedule_LOs
)
job = backend.run(rabi_experiment_qobj)
print(job.job_id())
job_monitor(job, monitor_async=True)
job = backend.retrieve_job('5d2e2a0099a509001888ab02')
Rabi_results = job.result(timeout=3600)
# +
plot_X = drive_powers
plot_Y = []
for jj, drive_power in enumerate(drive_powers):
thispower_results = Rabi_results.get_memory(jj)*scale_factor
plot_Y.append( thispower_results[qubit] )
import matplotlib.pyplot as plotter
plot_Y = plot_Y - np.mean(plot_Y)
plotter.plot(plot_X, plot_Y)
# +
from scipy.optimize import curve_fit
fit_func = lambda x,A,B,T,phi: (A*np.cos(2*np.pi*x/T+phi)+B)
#Fit the data
fitparams, conv = curve_fit(fit_func, plot_X, plot_Y, [3.0 ,0.0 ,0.04 ,0])
#get the pi amplitude
first_peak = abs(np.pi-fitparams[3])*fitparams[2]/(2*np.pi)
pi_amp = abs(fitparams[2]/2)
plotter.scatter(plot_X, plot_Y)
plotter.plot(plot_X, fit_func(plot_X, *fitparams), color='red')
plotter.axvline(first_peak, color='black', linestyle='dashed')
plotter.axvline(first_peak + pi_amp, color='black', linestyle='dashed')
plotter.xlabel('Pulse amplitude, a.u.', fontsize=20)
plotter.ylabel('Signal, a.u.', fontsize=20)
plotter.title('Rough Pi Amplitude Calibration', fontsize=20)
print('Pi Amplitude %f'%(pi_amp))
# -
# # 4. 0 vs 1
# Once our $\pi$ pulse has been calibrated, we can now create the state $\vert1\rangle$ with reasonable probability. We can use this to find out what the states $\vert0\rangle$ and $\vert1\rangle$ look like in our measurements.
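# The discrimination used later in this section classifies each shot by which state's mean point in the IQ plane is nearer. A toy sketch with synthetic cluster centers (made-up values, not device results):

```python
import numpy as np

# Toy IQ-plane discrimination with synthetic cluster centers
rng = np.random.default_rng(0)
mean_gnd_demo = -1.0 - 1.0j
mean_exc_demo = 1.0 + 1.0j

# 100 synthetic |1>-state shots scattered around the excited-state center
shots = mean_exc_demo + 0.3 * (rng.standard_normal(100)
                               + 1j * rng.standard_normal(100))

def classify(iq, m0, m1):
    """Return 1 if the shot is closer to the |1> center, else 0."""
    return int(abs(iq - m1) <= abs(iq - m0))

labels = [classify(s, mean_gnd_demo, mean_exc_demo) for s in shots]
print(np.mean(labels))  # nearly all shots classified as |1>
```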
# +
# 0 vs 1 experiment parameters
num_shots_gndexc = 512
# drive parameters
drive_power = pi_amp
print(drive_power)
# +
# create schedules for the ground/excited state experiment
gndexc_schedules = []
gndexc_schedule_LOs = []
pi_pulse = pulse_lib.gaussian(duration=drive_samples, amp=pi_amp,
                              sigma=drive_sigma, name='pi_pulse')
pi_pulse_qubit = pi_pulse(backend_devicespec.q[qubit].drive)
# ground state schedule
gnd_schedule = pulse.Schedule(name="ground state")
gnd_schedule += measure_and_acquire_qubit << gnd_schedule.duration
thisLO = pulse.LoConfig({backend_devicespec.q[qubit].drive: rough_frequency_qubit})
# excited state schedule
exc_schedule = pulse.Schedule(name="excited state")
exc_schedule += pi_pulse_qubit
exc_schedule += measure_and_acquire_qubit << exc_schedule.duration
thisLO = pulse.LoConfig({backend_devicespec.q[qubit].drive: rough_frequency_qubit})
gndexc_schedules.append(gnd_schedule)
gndexc_schedules.append(exc_schedule)
gndexc_schedule_LOs.append(thisLO)
gndexc_schedule_LOs.append(thisLO)
# -
gndexc_schedules[0].draw(channels_to_plot=[backend_devicespec.q[qubit].measure,
backend_devicespec.q[qubit].drive,
#backend_devicespec.q[qubit].acquire,
],
scaling=10.0)
gndexc_schedules[1].draw(channels_to_plot=[backend_devicespec.q[qubit].measure,
backend_devicespec.q[qubit].drive,
#backend_devicespec.q[qubit].acquire,
],
scaling=10.0)
gndexc_experiment_qobj = assemble (gndexc_schedules, backend = backend,
meas_level=1, meas_return='single',
shots=num_shots_gndexc,
schedule_los = gndexc_schedule_LOs
)
job = backend.run(gndexc_experiment_qobj)
print(job.job_id())
job_monitor(job, monitor_async=True)
job = backend.retrieve_job('5d2e2c3a61157a0018e22440')
gndexc_results = job.result(timeout=3600)
# +
gnd_results = gndexc_results.get_memory(0)[:, qubit]*scale_factor
exc_results = gndexc_results.get_memory(1)[:, qubit]*scale_factor
plotter.scatter(np.real(gnd_results), np.imag(gnd_results),
s=5, cmap='viridis',c='blue',alpha=0.5, label='state_0')
plotter.scatter(np.real(exc_results), np.imag(exc_results),
s=5, cmap='viridis',c='red',alpha=0.5, label='state_1')
mean_gnd = np.mean(gnd_results) # takes mean of both real and imaginary parts
mean_exc = np.mean(exc_results)
plotter.scatter(np.real(mean_gnd), np.imag(mean_gnd),
s=200, cmap='viridis',c='blue',alpha=1.0, label='state_0_mean')
plotter.scatter(np.real(mean_exc), np.imag(mean_exc),
s=200, cmap='viridis',c='red',alpha=1.0, label='state_1_mean')
plotter.xlabel('I (a.u.)')
plotter.ylabel('Q (a.u.)')
# +
def get_01(IQ_data):
dist_0 = np.linalg.norm(np.array([
np.real(IQ_data) - np.real(mean_gnd),
np.imag(IQ_data) - np.imag(mean_gnd)
]))
dist_1 = np.linalg.norm(np.array([
np.real(IQ_data) - np.real(mean_exc),
np.imag(IQ_data) - np.imag(mean_exc)
]))
if dist_1 <= dist_0:
return 1
else:
return 0
print(get_01(mean_gnd), get_01(mean_exc))
# -
# # 5. Measuring T1
# +
# T1 experiment parameters
time_max_us = 500
time_step_us = 2
times_us = np.arange(1, time_max_us, time_step_us)
num_shots_per_point = 512
# drive parameters
drive_power = pi_amp
print(drive_power)
# +
# create schedules for T1 experiment
T1_schedules = []
T1_schedule_LOs = []
T1_pulse = pulse_lib.gaussian(duration=drive_samples, amp=drive_power,
sigma=drive_sigma, name='T1_pulse')
T1_pulse_qubit = T1_pulse(backend_devicespec.q[qubit].drive)
thisLO = pulse.LoConfig({backend_devicespec.q[qubit].drive: rough_frequency_qubit})
for ii, delay_time_us in enumerate(times_us):
# start an empty schedule with a label
this_schedule = pulse.Schedule(name="T1 delay = {} us".format(delay_time_us))
this_schedule += T1_pulse_qubit
this_schedule |= (measure_and_acquire_qubit << int(delay_time_us*1000/dt))
T1_schedules.append(this_schedule)
T1_schedule_LOs.append(thisLO)
# -
T1_schedules[0].draw(channels_to_plot=[backend_devicespec.q[qubit].measure,
backend_devicespec.q[qubit].drive,
#backend_devicespec.q[qubit].acquire,
],
scaling=10.0)
T1_experiment_qobj = assemble (T1_schedules, backend = backend,
meas_level=1, meas_return='avg',
shots=num_shots_per_point,
schedule_los = T1_schedule_LOs
)
job = backend.run(T1_experiment_qobj)
print(job.job_id())
job_monitor(job, monitor_async=True)
job = backend.retrieve_job('5d2e79ad99a509001888ab09')
T1_results = job.result(timeout=3600)
# +
plot_X = times_us
plot_Y = []
for jj, delay_time_us in enumerate(times_us):
thisdelay_results = T1_results.get_memory(jj)*scale_factor
plot_Y.append( thisdelay_results[qubit] )
plotter.plot(plot_X, plot_Y)
# +
from scipy.optimize import curve_fit
# fit an exponential decay, with T1 as a free fit parameter
fit_func2 = lambda x, A, B, T1: (A*np.exp(-x/T1)+B)
#Fit the data
fitparams2, conv2 = curve_fit(fit_func2, plot_X,
                              plot_Y,
                              [-1.0, -11, 60])
print(f"T1 from fit = {fitparams2[2]:.1f} us")
print(f"T1 from backend = {backend.properties().qubits[qubit][0].value} us")
plotter.scatter(plot_X, plot_Y)
plotter.plot(plot_X, fit_func2(plot_X, *fitparams2), color='black')
plotter.xlim(0, np.max(plot_X))
plotter.xlabel('Delay before measurement, ($\mu$s)', fontsize=20)
plotter.ylabel('Measured signal, a.u.', fontsize=20)
# -
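# The fit above fixes the decay constant at 59.8 us and only fits the amplitude and offset. As a self-contained sanity check of the same model (on synthetic data only, not the backend results), T1 can also be left as a free fit parameter:

```python
import numpy as np
from scipy.optimize import curve_fit

# Synthetic T1 decay with the same A*exp(-t/T1) + B shape as the cell above;
# here T1 is fitted instead of being fixed. Numbers are illustrative only.
rng = np.random.default_rng(0)
t = np.linspace(0, 200, 100)                       # delays in us
y = -1.0 * np.exp(-t / 60.0) - 11.0 + rng.normal(0, 0.005, t.size)

decay = lambda x, A, T1, B: A * np.exp(-x / T1) + B
params, _ = curve_fit(decay, t, y, p0=[-1.0, 50.0, -11.0])
print(f"fitted T1 = {params[1]:.1f} us")
```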
# # 6. Ramsey experiment
# Now, we determine both $T_2$ and the qubit frequency to better precision. This is done using a Ramsey pulse sequence.
#
# In this pulse sequence, we first apply a $\pi/2$ pulse, wait some time $\Delta t$, and then apply another $\pi/2$ pulse.
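# The resulting signal oscillates at the qubit-drive detuning and decays with $T_2^*$. A self-contained sketch of this model (synthetic numbers, chosen only for illustration) mirrors the fit used later in this section:

```python
import numpy as np
from scipy.optimize import curve_fit

# Synthetic Ramsey fringes: detuning of 0.1 MHz (period T = 10 us) under a
# T2* = 20 us envelope, matching the A*exp(-x/T2p)*sin(2*pi*x/T + phi) + B
# model fitted below.
t = np.linspace(0, 100, 400)                       # delays in us
signal = np.exp(-t / 20.0) * np.sin(2 * np.pi * t / 10.0)

model = lambda x, A, T, phi, T2p, B: A * np.exp(-x / T2p) * np.sin(2 * np.pi * x / T + phi) + B
params, _ = curve_fit(model, t, signal, p0=[1.0, 9.0, 0.0, 25.0, 0.0])

# the fitted oscillation period gives the detuning from the drive frequency
print(f"period = {params[1]:.2f} us -> detuning = {1 / params[1]:.3f} MHz")
```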
# +
# Ramsey experiment parameters
time_max_us = 100
time_step_us = 0.25
times_us = np.arange(1, time_max_us, time_step_us)
num_shots_per_point = 256
# drive parameters
drive_power = pi_amp/2
print(drive_power)
# -
# create schedules for Ramsey experiment
Ramsey_schedules = []
Ramsey_schedule_LOs = []
ramsey_pulse = pulse_lib.gaussian(duration=drive_samples, amp=drive_power,
sigma=drive_sigma, name='ramsey_pulse')
ramsey_pulse_qubit = ramsey_pulse(backend_devicespec.q[qubit].drive)
thisLO = pulse.LoConfig({backend_devicespec.q[qubit].drive: rough_frequency_qubit})
for ii, delay_time_us in enumerate(times_us):
    # start an empty schedule with a label
    this_schedule = pulse.Schedule(name="Ramsey delay = {} us".format(delay_time_us))
    this_schedule += ramsey_pulse_qubit
    this_schedule |= (ramsey_pulse_qubit << int(this_schedule.duration+delay_time_us*1000/dt))
    this_schedule |= (measure_and_acquire_qubit << this_schedule.duration)
    Ramsey_schedules.append(this_schedule)
    Ramsey_schedule_LOs.append(thisLO)
Ramsey_schedules[-1].draw(channels_to_plot=[backend_devicespec.q[qubit].measure,
backend_devicespec.q[qubit].drive,
#backend_devicespec.q[qubit].acquire,
],
scaling=10.0)
ramsey_experiment_qobj = assemble(Ramsey_schedules, backend=backend,
                                  meas_level=1, meas_return='avg',
                                  shots=num_shots_per_point,
                                  schedule_los=Ramsey_schedule_LOs)
job = backend.run(ramsey_experiment_qobj)
print(job.job_id())
job_monitor(job, monitor_async=True)
job = backend.retrieve_job('5d2e75dc137af400181be14a')
Ramsey_results = job.result(timeout=3600)
# +
plot_X = times_us
plot_Y = []
for jj, delay_time_us in enumerate(times_us):
    thisdelay_results = Ramsey_results.get_memory(jj)[qubit]*scale_factor
    plot_Y.append(np.mean(thisdelay_results))
plotter.plot(plot_X, (plot_Y))
# +
from scipy.optimize import curve_fit
fit_func = lambda x,A,T,phi,T2p,B: (A*np.exp(-x/T2p)*(np.sin(2*np.pi*x/T+phi))+B)
#Fit the data
fitparams, conv = curve_fit(fit_func, plot_X,
plot_Y,
[1.0,10,0,4,34])
#off-resonance component
delT = fitparams[1]
delf_MHz = 1./(delT)
print(f"df = {delf_MHz} MHz")
first_peak = (np.pi-fitparams[2])*delT/(2*np.pi) + delT/4
second_peak = first_peak + delT
print(f"T2p = {fitparams[3]} us")
print(f"T2 from backend = {backend.properties().qubits[qubit][1].value} us")
# plot the data and the fit
plotter.scatter(plot_X, plot_Y)
plotter.plot(plot_X, fit_func(plot_X, *fitparams), color='red')
plotter.axvline(first_peak, color='black', linestyle='dashed')
plotter.axvline(second_peak, color='red', linestyle='dashed')
plotter.xlim(0, np.max(plot_X))
plotter.xlabel(r'Ramsey delay ($\mu$s)', fontsize=20)
plotter.ylabel('Ramsey signal, a.u.', fontsize=20)
plotter.title(r'Rough $\Delta$f Calibration', fontsize=20)
# -
precise_frequency_qubit_plus = round(rough_frequency_qubit + delf_MHz/1e3, 5)
precise_frequency_qubit_minus = round(rough_frequency_qubit - delf_MHz/1e3, 5)
print(f"{rough_frequency_qubit}->{precise_frequency_qubit_plus} or {precise_frequency_qubit_minus}")
# # 7. Measuring T2
# +
# T2 experiment parameters
time_max_us = 125
time_step_us = 0.5
times_us = np.arange(1, time_max_us, time_step_us)
num_shots_per_point = 512
# drive parameters
drive_power_1 = pi_amp/2
drive_power_2 = pi_amp
print(drive_power_1)
print(drive_power_2)
# +
# create schedules for T2 (Hahn echo) experiment
T2_schedules = []
T2_schedule_LOs = []
T2_pulse_pio2 = pulse_lib.gaussian(duration=drive_samples, amp=drive_power_1,
sigma=drive_sigma, name='T2_pio2_pulse')
T2_pulse_pio2_qubit = T2_pulse_pio2(backend_devicespec.q[qubit].drive)
T2_pulse_pi = pulse_lib.gaussian(duration=drive_samples, amp=drive_power_2,
sigma=drive_sigma, name='T2_pi_pulse')
T2_pulse_pi_qubit = T2_pulse_pi(backend_devicespec.q[qubit].drive)
thisLO = pulse.LoConfig({backend_devicespec.q[qubit].drive: precise_frequency_qubit_minus})
for ii, delay_time_us in enumerate(times_us):
    # start an empty schedule with a label
    this_schedule = pulse.Schedule(name="T2 delay = {} us".format(delay_time_us))
    this_schedule |= T2_pulse_pio2_qubit
    this_schedule |= (T2_pulse_pi_qubit << int(this_schedule.duration +
                                               delay_time_us*1000/dt))
    this_schedule |= (T2_pulse_pio2_qubit << int(this_schedule.duration +
                                                 delay_time_us*1000/dt))
    this_schedule |= (measure_and_acquire_qubit << int(this_schedule.duration))
    T2_schedules.append(this_schedule)
    T2_schedule_LOs.append(thisLO)
# -
T2_schedules[0].draw(channels_to_plot=[backend_devicespec.q[qubit].measure,
backend_devicespec.q[qubit].drive,
#backend_devicespec.q[qubit].acquire,
],
scaling=10.0)
T2_experiment_qobj = assemble(T2_schedules, backend=backend,
                              meas_level=1, meas_return='avg',
                              shots=num_shots_per_point,
                              schedule_los=T2_schedule_LOs)
job = backend.run(T2_experiment_qobj)
print(job.job_id())
job_monitor(job, monitor_async=True)
T2job = backend.retrieve_job('5d2f6c0ae741150012334c44')
T2_results = T2job.result(timeout=3600)
# +
plot_X = 2.*times_us
plot_Y = []
for jj, delay_time_us in enumerate(times_us):
    thisdelay_results = T2_results.get_memory(jj)*scale_factor
    plot_Y.append(thisdelay_results[qubit])
plotter.plot(plot_X, plot_Y)
T2y_echo = plot_Y
T2x_echo = plot_X
# +
from scipy.optimize import curve_fit
T2guess = backend.properties().qubits[qubit][1].value
fit_func2 = lambda x,A,B: (A*np.exp(-x/T2guess)+B)
#Fit the data
fitparams2, conv2 = curve_fit(fit_func2, plot_X,
plot_Y,
[-2.0,1.0])
print(f"T2 from backend = {backend.properties().qubits[qubit][1].value} us")
plotter.scatter(plot_X, plot_Y)
plotter.plot(plot_X, fit_func2(plot_X, *fitparams2), color='black')
plotter.xlim(0, np.max(plot_X))
plotter.xlabel(r'Total time ($\mu$s)', fontsize=20)
plotter.ylabel('Measured signal, a.u.', fontsize=20)
# +
# measurement pulse parameters
meas_amp = 0.1
meas_samples = 1200
meas_sigma = 4
meas_risefall = 25
# creating measurement pulse
meas_pulse = pulse_lib.gaussian_square(duration=meas_samples, amp=meas_amp,
sigma=meas_sigma, risefall=meas_risefall,
name='mymeasurepulse')
meas_pulse_qubit = meas_pulse(backend_devicespec.q[qubit].measure)
# create acquire pulse
acq_cmd=pulse.Acquire(duration=meas_samples)
acq_cmd_qubit = acq_cmd(backend_devicespec.q, backend_devicespec.mem)
# combined measure and acquire pulse
measure_and_acquire_qubit = meas_pulse_qubit | acq_cmd_qubit
# scalefactor for received data
scale_factor = 1e-10
# -
# # 8. Doing CPMG
# +
# T2 experiment parameters
tau_us_min = 1
tau_us_max = 30
tau_step_us = 0.1
taus_us = np.arange(tau_us_min, tau_us_max, tau_step_us)
num_shots_per_point = 512
ncpmg = 10
# drive parameters
drive_power_1 = pi_amp/2
drive_power_2 = pi_amp
print(f"Total time ranges from {2.*ncpmg*taus_us[0]} to {2.*ncpmg*taus_us[-1]} us")
# +
# create schedules for CPMG experiment
T2cpmg_schedules = []
T2cpmg_schedule_LOs = []
T2cpmg_pulse_pio2 = pulse_lib.gaussian(duration=drive_samples, amp=drive_power_1,
sigma=drive_sigma, name='T2cpmg_pio2_pulse')
T2cpmg_pulse_pio2_qubit = T2cpmg_pulse_pio2(backend_devicespec.q[qubit].drive)
T2cpmg_pulse_pi = pulse_lib.gaussian(duration=drive_samples, amp=drive_power_2,
sigma=drive_sigma, name='T2cpmg_pi_pulse')
T2cpmg_pulse_pi_qubit = T2cpmg_pulse_pi(backend_devicespec.q[qubit].drive)
thisLO = pulse.LoConfig({backend_devicespec.q[qubit].drive: precise_frequency_qubit_minus})
for ii, delay_time_us in enumerate(taus_us):
    # start an empty schedule with a label
    this_schedule = pulse.Schedule(name="T2cpmg delay = {} us".format(delay_time_us))
    this_schedule |= T2cpmg_pulse_pio2_qubit
    this_schedule |= (T2cpmg_pulse_pi_qubit << int(this_schedule.duration +
                                                   delay_time_us*1000/dt))
    for _ in range(ncpmg-1):
        this_schedule |= (T2cpmg_pulse_pi_qubit << int(this_schedule.duration +
                                                       2*delay_time_us*1000/dt))
    this_schedule |= (T2cpmg_pulse_pio2_qubit << int(this_schedule.duration +
                                                     delay_time_us*1000/dt))
    this_schedule |= (measure_and_acquire_qubit << int(this_schedule.duration))
    T2cpmg_schedules.append(this_schedule)
    T2cpmg_schedule_LOs.append(thisLO)
# -
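# In the loop above, the delays follow the CPMG pattern pi/2 - tau - pi - (2*tau - pi) repeated (ncpmg - 1) times - tau - pi/2, so the free-evolution time sums to 2*ncpmg*tau; this is why the x-axis below uses 2.*ncpmg*taus_us. A minimal check of that bookkeeping:

```python
# CPMG free-evolution bookkeeping: tau before the first pi pulse, 2*tau
# between successive pi pulses, and tau before the closing pi/2 pulse.
def cpmg_free_evolution_us(tau_us, n_pi):
    delays = [tau_us] + [2 * tau_us] * (n_pi - 1) + [tau_us]
    return sum(delays)

assert cpmg_free_evolution_us(3.0, 10) == 2 * 10 * 3.0
print(cpmg_free_evolution_us(3.0, 10))  # 60.0
```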
T2cpmg_schedules[0].draw(channels_to_plot=[backend_devicespec.q[qubit].measure,
backend_devicespec.q[qubit].drive,
#backend_devicespec.q[qubit].acquire,
],
scaling=10.0)
T2cpmg_experiment_qobj = assemble(T2cpmg_schedules, backend=backend,
                                  meas_level=1, meas_return='avg',
                                  shots=num_shots_per_point,
                                  schedule_los=T2cpmg_schedule_LOs)
job = backend.run(T2cpmg_experiment_qobj)
print(job.job_id())
job_monitor(job, monitor_async=True)
T2cpmgjob = backend.retrieve_job('5d2f6e1aca4ad70012795340')
T2cpmg_results = T2cpmgjob.result(timeout=3600)
# +
plot_X = 2.*ncpmg*taus_us
plot_Y = []
for jj, delay_time_us in enumerate(taus_us):
    thisdelay_results = T2cpmg_results.get_memory(jj)*scale_factor
    plot_Y.append(thisdelay_results[qubit])
plotter.plot(plot_X, plot_Y)
T2y_cpmg = plot_Y
T2x_cpmg = plot_X
# +
from scipy.optimize import curve_fit
T2guess = backend.properties().qubits[qubit][1].value
fit_func2 = lambda x,A,B: (A*np.exp(-x/T2guess)+B)
#Fit the data
fitparams2, conv2 = curve_fit(fit_func2, plot_X,
plot_Y,
[-2.0,1.0])
print(f"T2 from backend = {T2guess} us")
plotter.scatter(plot_X, plot_Y)
plotter.plot(plot_X, fit_func2(plot_X, *fitparams2), color='black')
plotter.xlim(0, np.max(plot_X))
plotter.xlabel(r'Total time ($\mu$s)', fontsize=20)
plotter.ylabel('Measured signal, a.u.', fontsize=20)
# Source notebook: ch-quantum-hardware/calibrating-qubits-openpulse.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
from pygeom import Axes, Point, Triangle
# %matplotlib inline
# -
# ## Demo
# +
# Create the cartesian axis
axes = Axes(xlim=(-1,10), ylim=(-1,10), figsize=(12,10))
# Points
p1 = Point(1, 1, color='grey')
p2 = Point(5, 5, color='grey')
p3 = Point(8, 5, color='grey')
tr = Triangle(p1, p2, p3, alpha=0.5)
axes.addMany([p1, p2, p3])
axes.add(tr)
axes.draw()
# -
# Source notebook: notebooks/triangle_demo.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="o1YkGd2CPlfG"
# # Introduction
#
# In this tutorial, we evaluate the HCDF function proposed by Ramoneda et al. [1] in the symbolic domain. Originally, this algorithm was proposed for the audio domain.
#
# The data used for the evaluation comes from the Haydn op20 dataset [2]. All quartet movement scores of Haydn's op. 20 are annotated with chords. The dataset is loaded with the mirdata library [3].
#
#
#
#
# >[1] <NAME>., & <NAME>. (2020, October). Revisiting Harmonic Change Detection. In Audio Engineering Society Convention 149. Audio Engineering Society.
#
# >[2] <NAME>. (2017). Automatic harmonic analysis of classical string quartets from symbolic score (Doctoral dissertation, Masterโs thesis, Universitat Pompeu Fabra).
#
# >[3] <NAME>., <NAME>., <NAME>., <NAME>., <NAME>., & <NAME>. (2019). mirdata: Software for Reproducible Usage of Datasets. In ISMIR (pp. 99-106).
#
# ---
# + [markdown] id="4eZlA38kSkAB"
# First, we import an in-house version of TIVlib [4].
#
#
#
# >[4] <NAME>, et al. "TIV. lib: an open-source library for the tonal description of musical audio." arXiv preprint arXiv:2008.11529 (2020).
# + id="K3111oshCZiY"
import numpy as np
import matplotlib.pyplot as plt
from scipy.ndimage import gaussian_filter
from astropy.convolution import convolve, Gaussian1DKernel
from scipy.spatial.distance import cosine, euclidean
np.seterr(all='raise')
class TIV:
    weights_symbolic = [2, 11, 17, 16, 19, 7]
    weights = [3, 8, 11.5, 15, 14.5, 7.5]

    def __init__(self, energy, vector):
        self.energy = energy
        self.vector = vector

    @classmethod
    def from_pcp(cls, pcp, symbolic=False):
        if not everything_is_zero(pcp):
            fft = np.fft.rfft(pcp, n=12)
            energy = fft[0]
            vector = fft[1:7]
            if symbolic:
                vector = (vector / energy) * cls.weights_symbolic
            else:
                vector = (vector / energy) * cls.weights
            return cls(energy, vector)
        else:
            return cls(complex(0), np.array([0, 0, 0, 0, 0, 0]).astype(complex))

    def get_vector(self):
        return np.array(self.vector)

    def dissonance(self):
        return 1 - (np.linalg.norm(self.vector) / np.sqrt(np.sum(np.dot(self.weights, self.weights))))

    def coefficient(self, ii):
        return self.mags()[ii] / self.weights[ii]

    def chromaticity(self):
        return self.mags()[0] / self.weights[0]

    def dyadicity(self):
        return self.mags()[1] / self.weights[1]

    def triadicity(self):
        return self.mags()[2] / self.weights[2]

    def diminished_quality(self):
        return self.mags()[3] / self.weights[3]

    def diatonicity(self):
        return self.mags()[4] / self.weights[4]

    def wholetoneness(self):
        return self.mags()[5] / self.weights[5]

    def mags(self):
        return np.abs(self.vector)

    def plot_tiv(self):
        titles = ["m2/M7", "TT", "M3/m6", "m3/M6", "P4/P5", "M2/m7"]
        tivs_vector = self.vector / self.weights
        i = 1
        for tiv in tivs_vector:
            circle = plt.Circle((0, 0), 1, fill=False)
            plt.subplot(2, 3, i)
            plt.subplots_adjust(hspace=0.4)
            plt.gca().add_patch(circle)
            plt.title(titles[i - 1])
            plt.scatter(tiv.real, tiv.imag)
            plt.xlim((-1.5, 1.5))
            plt.ylim((-1.5, 1.5))
            plt.grid()
            i = i + 1
        plt.show()

    @classmethod
    def euclidean(cls, tiv1, tiv2):
        return np.linalg.norm(tiv1.vector - tiv2.vector)

    @classmethod
    def cosine(cls, tiv1, tiv2):
        a = np.concatenate((tiv1.vector.real, tiv1.vector.imag), axis=0)
        b = np.concatenate((tiv2.vector.real, tiv2.vector.imag), axis=0)
        if everything_is_zero(a) or everything_is_zero(b):
            distance_computed = euclidean(a, b)
        else:
            distance_computed = cosine(a, b)
        return distance_computed


zero_sequence = [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
one_sequence = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]


def everything_is_zero(vector):
    for element in vector:
        if element != 0:
            return False
    return True


def complex_to_vector(vector):
    ans = []
    for i in range(0, vector.shape[1]):
        row1 = []
        row2 = []
        for j in range(0, vector.shape[0]):
            row1.append(vector[j][i].real)
            row2.append(vector[j][i].imag)
        ans.append(row1)
        ans.append(row2)
    return np.array(ans)


def tonal_interval_space(chroma, symbolic=False):
    centroid_vector = []
    for i in range(0, chroma.shape[1]):
        each_chroma = [chroma[j][i] for j in range(0, chroma.shape[0])]
        if everything_is_zero(each_chroma):
            centroid = [0. + 0.j, 0. + 0.j, 0. + 0.j, 0. + 0.j, 0. + 0.j, 0. + 0.j]
        else:
            tonal = TIV.from_pcp(each_chroma, symbolic)
            centroid = tonal.get_vector()
        centroid_vector.append(centroid)
    return complex_to_vector(np.array(centroid_vector))


def gaussian_blur(centroid_vector, sigma):
    return gaussian_filter(centroid_vector, sigma=sigma)


def get_distance(centroids, dist):
    ans = [0]
    if dist == 'euclidean':
        for j in range(1, centroids.shape[1] - 1):
            total = 0
            for i in range(0, centroids.shape[0]):
                total += (centroids[i][j + 1] - centroids[i][j - 1]) ** 2
            ans.append(np.sqrt(total))
    if dist == 'cosine':
        for j in range(1, centroids.shape[1] - 1):
            a = centroids[:, j - 1]
            b = centroids[:, j + 1]
            if everything_is_zero(a) or everything_is_zero(b):
                distance_computed = euclidean(a, b)
            else:
                distance_computed = cosine(a, b)
            ans.append(distance_computed)
    ans.append(0)
    return np.array(ans)


def get_peaks_hcdf(hcdf_function, rate_centroids_second, symbolic=False):
    changes = [0]
    hcdf_changes = []
    for i in range(2, hcdf_function.shape[0] - 1):
        if hcdf_function[i - 1] < hcdf_function[i] and hcdf_function[i + 1] < hcdf_function[i]:
            hcdf_changes.append(hcdf_function[i])
            if not symbolic:
                changes.append(i / rate_centroids_second)
            else:
                changes.append(i)
    return np.array(changes), np.array(hcdf_changes)


def harmonic_change(chroma: list, window_size: int = 2048, symbolic: bool = False,
                    sigma: int = 5, dist: str = 'euclidean'):
    chroma = np.array(chroma).transpose()
    centroid_vector = tonal_interval_space(chroma, symbolic=symbolic)
    # blur the centroid trajectory with a Gaussian
    centroid_vector_blurred = gaussian_blur(centroid_vector, sigma)
    # harmonic distance and peak picking
    harmonic_function = get_distance(centroid_vector_blurred, dist)
    changes, hcdf_changes = get_peaks_hcdf(harmonic_function, window_size, symbolic)
    return changes, hcdf_changes, harmonic_function
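# As a self-contained illustration of the centroid computation in `TIV.from_pcp` (reimplemented here with the same symbolic weights so it runs on its own), a C major triad yields a 6-coefficient tonal interval vector:

```python
import numpy as np

# Standalone sketch mirroring TIV.from_pcp with the symbolic weights above.
weights_symbolic = np.array([2, 11, 17, 16, 19, 7])

def tiv_from_pcp(pcp):
    fft = np.fft.rfft(pcp, n=12)   # DFT of the 12-bin pitch-class profile
    return (fft[1:7] / fft[0]) * weights_symbolic

# C major triad: pitch classes C, E, G
c_major = np.zeros(12)
c_major[[0, 4, 7]] = 1.0
tiv = tiv_from_pcp(c_major)
print(np.round(np.abs(tiv), 2))    # magnitudes of the six coefficients
```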
# + [markdown] id="7MrbQoVYV1X7"
# Install and import other required libraries.
# + colab={"base_uri": "https://localhost:8080/"} id="WsqSVzjpIGUj" outputId="9c1548ce-5c24-490c-fea6-ac769c56d2ab"
# !pip install git+https://github.com/mir-dataset-loaders/mirdata.git@Pedro/haydn_quartets
# !pip install mido
# !pip uninstall -y music21
# !pip install music21==6.7.1
# !pip install unidecode
# !pip install mir_eval
# + id="ibTg6ZwdSrnd"
import music21
import mido
import mirdata
import os
import sys
import mir_eval
import plotly.express as px
import pandas as pd
from mido import MidiFile
import numpy as np
from unidecode import unidecode
# + [markdown] id="QBPFl15wWDAa"
# Load and validate the Haydn op20 dataset with the mirdata library.
# + colab={"base_uri": "https://localhost:8080/"} id="2Rkb02z2JhFo" outputId="6892fd41-2936-40ff-fb83-ece8bb73fb86"
h20 = mirdata.initialize('haydn_op20')
h20.download()
h20.validate()
# + [markdown] id="aqqIBnHSWNJ_"
# Example of chord annotation in a random quartet movement.
# + colab={"base_uri": "https://localhost:8080/"} id="tDWUMbtLS_th" outputId="0ab3b9f3-5600-437e-930a-e7c7e9d085da"
h20.choice_track().chords
# + [markdown] id="svvUWWsaWbEE"
# Import utility functions for dealing with piano rolls
# + id="s0ur2XR6X1yz"
#######
# Pianorolls dims are : TIME * PITCH
class Read_midi(object):
    def __init__(self, song_path, quantization):
        ## Metadata
        self.__song_path = song_path
        self.__quantization = quantization
        ## Pianoroll
        self.__T_pr = None
        ## Private misc
        self.__num_ticks = None
        self.__T_file = None

    @property
    def quantization(self):
        return self.__quantization

    @property
    def T_pr(self):
        return self.__T_pr

    @property
    def T_file(self):
        return self.__T_file

    def get_total_num_tick(self):
        # MIDI length should be written in a meta message at the beginning of
        # the file, but many files omit it, so count ticks track by track
        mid = MidiFile(self.__song_path)
        # Parse track by track
        num_ticks = 0
        for i, track in enumerate(mid.tracks):
            tick_counter = 0
            for message in track:
                # accumulate delta times for every message
                time = float(message.time)
                tick_counter += time
            num_ticks = max(num_ticks, tick_counter)
        self.__num_ticks = num_ticks

    def get_pitch_range(self):
        mid = MidiFile(self.__song_path)
        min_pitch = 200
        max_pitch = 0
        for i, track in enumerate(mid.tracks):
            for message in track:
                if message.type in ['note_on', 'note_off']:
                    pitch = message.note
                    if pitch > max_pitch:
                        max_pitch = pitch
                    if pitch < min_pitch:
                        min_pitch = pitch
        return min_pitch, max_pitch

    def get_time_file(self):
        # Get the time dimension for a pianoroll given a certain quantization
        mid = MidiFile(self.__song_path)
        # Ticks per beat
        ticks_per_beat = mid.ticks_per_beat
        # Total number of ticks
        self.get_total_num_tick()
        # Dimensions of the pianoroll for each track
        self.__T_file = int((self.__num_ticks / ticks_per_beat) * self.__quantization)
        return self.__T_file

    def read_file(self):
        # Read the midi file and return a dictionary {track_name: pianoroll}
        mid = MidiFile(self.__song_path)
        # Ticks per beat
        ticks_per_beat = mid.ticks_per_beat
        # Get total time
        self.get_time_file()
        T_pr = self.__T_file
        # Pitch dimension
        N_pr = 128
        pianoroll = {}

        def add_note_to_pr(note_off, notes_on, pr):
            pitch_off, _, time_off = note_off
            # Note off: search for the note in the list of notes on,
            # get the start and end time, and write it in the pianoroll
            match_list = [(ind, item) for (ind, item) in enumerate(notes_on) if item[0] == pitch_off]
            if len(match_list) == 0:
                print("Tried to note off a note that has never been turned on")
                # Do nothing
                return
            # Add note to the pianoroll
            pitch, velocity, time_on = match_list[0][1]
            pr[time_on:time_off, pitch] = velocity
            # Remove the note from notes_on
            ind_match = match_list[0][0]
            del notes_on[ind_match]
            return

        # Parse track by track
        counter_unnamed_track = 0
        for i, track in enumerate(mid.tracks):
            # Instantiate the pianoroll
            pr = np.zeros([T_pr, N_pr])
            time_counter = 0
            notes_on = []
            for message in track:
                # TODO: keep track of tempo information
                # Time must be incremented, whether it is a note on/off or not
                time = float(message.time)
                time_counter += time / ticks_per_beat * self.__quantization
                # Time in the pianoroll (mapping)
                time_pr = int(round(time_counter))
                # Note on
                if message.type == 'note_on':
                    # Get pitch and velocity
                    pitch = message.note
                    velocity = message.velocity
                    if velocity > 0:
                        notes_on.append((pitch, velocity, time_pr))
                    elif velocity == 0:
                        # note_on with velocity 0 is an implicit note off
                        add_note_to_pr((pitch, velocity, time_pr), notes_on, pr)
                # Note off
                elif message.type == 'note_off':
                    pitch = message.note
                    velocity = message.velocity
                    add_note_to_pr((pitch, velocity, time_pr), notes_on, pr)
            # We deal with discrete values ranged between 0 and 127 -> convert to int
            pr = pr.astype(np.int16)
            if np.sum(np.sum(pr)) > 0:
                name = unidecode(track.name)
                name = name.rstrip('\x00')
                if name == u'':
                    name = 'unnamed' + str(counter_unnamed_track)
                    counter_unnamed_track += 1
                if name in pianoroll.keys():
                    # Take the max of the two pianorolls
                    pianoroll[name] = np.maximum(pr, pianoroll[name])
                else:
                    pianoroll[name] = pr
        return pianoroll
# + [markdown] id="HU6AQii2WrVN"
# Example of hcdf across one quartet movement
# + id="wR_sj802bcUw"
choice = h20.load_tracks()['0']
midi_matrixes = Read_midi(choice.midi_path, 28).read_file()
# + colab={"base_uri": "https://localhost:8080/"} id="ZT1ri1LgjG5g" outputId="62702993-ed06-408a-d61f-858f7639196a"
for k, t in midi_matrixes.items():
    print(t.shape)
# + colab={"base_uri": "https://localhost:8080/"} id="8x_wXxt1kVCd" outputId="fdf80932-9fb2-46d2-ed99-9c2ff2b00f4c"
mat = list(midi_matrixes.values())
midi_quartet = mat[0] + mat[1] + mat[2] + mat[3]
midi_quartet.shape
# + id="gjyL99Ggoq05"
np.set_printoptions(threshold=sys.maxsize)
# + id="YNIXHjfZlGq_"
def midi2chroma(midi_vector):
    chroma_vector = np.zeros((midi_vector.shape[0], 12))
    for ii, midi_frame in enumerate(midi_vector):
        for jj, element in enumerate(midi_frame):
            chroma_vector[ii][jj % 12] += element
    return chroma_vector


chroma_quartets = midi2chroma(midi_quartet)
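# A quick self-contained check of the octave folding in `midi2chroma` (the function is restated here so the cell runs on its own): MIDI notes 60 (C4) and 72 (C5) must land in the same chroma bin.

```python
import numpy as np

def fold_to_chroma(midi_vector):
    # same folding as midi2chroma above: MIDI pitch -> pitch class via mod 12
    chroma = np.zeros((midi_vector.shape[0], 12))
    for ii, frame in enumerate(midi_vector):
        for jj, element in enumerate(frame):
            chroma[ii][jj % 12] += element
    return chroma

# toy piano roll: one frame with C4 and C5 sounding
roll = np.zeros((1, 128))
roll[0, 60] = roll[0, 72] = 1.0
chroma = fold_to_chroma(roll)
print(chroma[0, 0])  # -> 2.0: both octaves of C fold into chroma bin 0
```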
# + colab={"base_uri": "https://localhost:8080/"} id="fhk2N6ONppIf" outputId="a6cbee45-d089-4114-e711-742bfa0d2f8d"
changes, hcdf_changes, harmonic_function = harmonic_change(chroma=chroma_quartets, symbolic=True,
sigma=28, dist='euclidean')
changes
# + id="Mqwl-cqgrkwu" colab={"base_uri": "https://localhost:8080/"} outputId="e6b11859-c428-4488-b6a8-8b4af859c508"
changes_ground_truth = np.array([c['time'] for c in choice.chords])
changes_ground_truth
# + colab={"base_uri": "https://localhost:8080/"} id="ZmEMoHZYlYio" outputId="e0ecec8d-1d91-437a-d5a4-31dd6f03fffb"
f_measure, precision, recall = mir_eval.onset.f_measure(changes_ground_truth, changes, window=31.218)  # same window as Harte
f_measure, precision, recall
# + [markdown] id="esbRBSeFY6uL"
# # HCDF evaluation across the haydn op20 dataset
# + id="lvgmYUX9puQ7" colab={"base_uri": "https://localhost:8080/"} outputId="92eb1c9f-ebc4-47c9-b45a-1e4419722604"
def evaluate_hcdf_across_haydn_op20(sigma=30, distance='euclidean'):
    f_measure_results = []
    precision_results = []
    recall_results = []
    print("evaluate_hcdf_across_haydn_op20", sigma, distance)
    for k, t in h20.load_tracks().items():
        midi_matrixes = Read_midi(t.midi_path, 28).read_file()
        mat = list(midi_matrixes.values())
        midi_quartet = mat[0] + mat[1] + mat[2] + mat[3]
        chroma_quartets = midi2chroma(midi_quartet)
        changes, hcdf_changes, harmonic_function = harmonic_change(chroma=chroma_quartets, symbolic=True,
                                                                   sigma=sigma, dist=distance)
        changes_ground_truth = np.array([c['time'] for c in t.chords])
        f_measure, precision, recall = mir_eval.onset.f_measure(changes_ground_truth, changes, window=31.218)  # same window as Harte
        f_measure_results.append(f_measure)
        precision_results.append(precision)
        recall_results.append(recall)
    return np.mean(np.array(f_measure_results)), \
        np.mean(np.array(precision_results)), \
        np.mean(np.array(recall_results))


evaluate_hcdf_across_haydn_op20()
# + colab={"base_uri": "https://localhost:8080/"} id="e_vVo7vPpuaL" outputId="b236e903-9641-4569-bb2d-b9f383cf53af"
results_euclidean = {
    sigma: evaluate_hcdf_across_haydn_op20(sigma=sigma, distance='euclidean')
    for sigma in range(1, 52, 5)
}
# + id="bFznKVn312nU"
def tune_sigma_plot(evaluation_result):
    sigma_list = []; type_metric = []; metrics = []
    for s, v in evaluation_result.items():
        f, p, r = v
        # F measure
        sigma_list.append(s)
        type_metric.append("F_score")
        metrics.append(f)
        # Precision
        sigma_list.append(s)
        type_metric.append("Precision")
        metrics.append(p)
        # Recall
        sigma_list.append(s)
        type_metric.append("Recall")
        metrics.append(r)
    df_dict = {
        "sigma": sigma_list,
        "metric": type_metric,
        "value": metrics
    }
    df = pd.DataFrame(df_dict)
    fig = px.line(df, x="sigma", y="value", color="metric", render_mode="svg")
    fig.show()
# + [markdown] id="BPigqVpmjsd4"
# Tuning sigma gaussian hyperparameter for HCDF with euclidean distance.
# + id="BluRurgd1hyf" colab={"base_uri": "https://localhost:8080/", "height": 562} outputId="e86270ed-03c0-40bd-9cbc-90bd41175a4d"
tune_sigma_plot(results_euclidean)
# + [markdown] id="i23A4r9Nj6QP"
# The results segment chord boundaries better than current approaches to chord recognition in the symbolic domain. With sigma=20, all metrics computed across the Haydn op20 dataset exceed 70%.
# Given the subjectivity of chord analysis, the results are good enough to use this function to segment symbolic data harmonically.
# + colab={"base_uri": "https://localhost:8080/", "height": 744} id="-ascCxXc6_Il" outputId="21641960-f6f3-4fb3-bc58-c7f1aab4cb7a"
results_cosine = {
    sigma: evaluate_hcdf_across_haydn_op20(sigma=sigma, distance='cosine')
    for sigma in range(1, 52, 5)
}
tune_sigma_plot(results_cosine)
# + [markdown] id="A8F7tAHyng1d"
# The performance of HCDF with the cosine distance is slightly worse than with the euclidean distance.
# Source notebook: haydn_op20/haydn_op20.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .r
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: R
# language: R
# name: ir
# ---
# # Financial Econometrics I: Homework 1
# Team Member:
#
# <NAME> : <EMAIL>
#
# <NAME> : <EMAIL>
# # Problem 1
# ##### From the symbols.csv choose 1 of the 10 Sectors (Industrials, Financials, Health Care, etc.). Download the prices for all the stocks belonging to the corresponding Sector for the period 01/2015 - 12/2021. Exclude the stocks that are not available in the quantmod package. Check that your data contains all the desired symbols (include this check in your output).
# #####
# Setup environment
Sys.setenv(LANG = "en")
options(warn = -1) # suppressing warnings
library(repr)
library("quantmod")
library(moments)
library("stabledist")
library("StableEstim")
options(repr.plot.width = 10, repr.plot.height = 8)
# read data from file
smbs <- read.csv('symbols.csv',sep = ';',colClasses = "character")
head(smbs)
# Our group chose the 'Consumer Discretionary' sector as our homework data set.
symbols <- smbs[smbs['Sector'] == 'Consumer Discretionary',1]
symbols
#download data
data <- lapply(symbols, function(y)
{
try(getSymbols(y, auto.assign = FALSE,from = as.Date('2015-01-01'), to = '2021-12-31'),silent=TRUE)
})
names(data) <- symbols
# check all symbols used for downloading process are used, no data is missing
length(symbols)
length(data)
# save original data before filter for invalid symbols
data0 <- data
# We read 83 stock names from the file, and check each of them for loading the data from source.
symbols
# remove stocks that are not available from the default Yahoo data source;
# filter in one pass (deleting list elements inside an indexed loop shifts
# the remaining indices and can skip entries)
failed <- sapply(data, function(y) inherits(y, "try-error"))
print(names(data)[failed])
data <- data[!failed]
# check available symbols data
length(data)
head(data[[1]])
# +
# save data before filter for closing price
data2<-data
# only need the closing price
data <- lapply(names(data), function(y){
data[[y]] <- data[[y]][, paste0(y, '.Close')]
})
# check the date range
head(data[[1]])
tail(data[[1]])
# Check data output
#lapply(data, head)
# -
# ##### Conclusion:
# Of the 83 stocks in the 'Consumer Discretionary' sector, 69 have valid data from the data source.
# ##### 1. Compute the log-returns and simple returns for all the stocks. Save these to the lrets and rets objects, respectively. From now on, you will work with the logarithmic returns.
# +
# compute log-returns for returns
lrets <- lapply(data, function(y){
y <- na.omit(diff(log(y)))
})
head(lrets[[1]])
# compute simple returns
rets <- lapply(data, function(y){
y <- na.omit(diff(y)/lag(y))
})
head(rets[[1]])
# -
# ##### 2. Compute the sample mean, variance, skewness, excess kurtosis, minimum and maximum of the series of logarithmic returns for each of the stocks in your sample. Display these in a nicely readable manner.
# +
#Compute the sample mean, variance, skewness, excess kurtosis, minimum and maximum
stats <- lapply(lrets, function(y){
c(mean(y), var(y), skewness(y),kurtosis(y),min(y), max(y))
})
#round the numbers
stats <- sapply(stats, function(y){
round(y, 4)
})
#name the row: mean, variance, skewness, excess kurtosis, minimum and maximum
rownames(stats) <- c('Mean', 'Var', 'Skew', 'Kurt', 'Min', 'Max')
colnames(stats) <- names(data2)
stats
# -
#transform the table for nicely readable manner
stats_t <- t(stats)
head(stats_t) # check head
tail(stats_t) # check tail
nrow(stats_t) # check total number of symbols
# #####
# ##### 3. Try to devise one Figure that plots all time series of returns in your sample.
# Plot time series of returns of the choosen sector
par(mfrow = c(2, 3))
sapply(lrets, function(y){
plot(as.Date(index(y)), y, type = 'l', main= colnames(y), xlab = 'Year',
ylab = 'returns')
})
# #####
# ##### 4. Discard the symbols, where you don't have valid data (non-missing, non-NA) for at least 80% of the dates in the sample period (use the stocks with the most observations as the benchmark for the sample period). For each symbol in your dataset, keep only the dates where you have valid data for all of the remaining symbols, i.e. you will have N time-series with matching timestamps. Now compute the mean logarithmic return for each date. The result should be a time-series with one (mean) log-return for each date.
# +
# Check the date length of the sample period
dates <- index(lrets[[1]])
date_len <- length(dates)
date_len
# Choose benchmark at least 80% of the dates in the sample period
date_length <- date_len * 0.8
date_length
# -
length(lrets)
# save lrets data before filter symbols which do not have enough date samples
lrets0 <- lrets
# remove symbols that do not have enough valid dates; filter in one pass
# (assigning NULL inside an indexed loop shifts the remaining indices,
# and starting the loop at 2 would also skip the first symbol)
too_short <- sapply(lrets, nrow) < date_length
print(names(lrets)[too_short])
lrets <- lrets[!too_short]
# +
# check the number of remaining symbols
length(lrets)
# save the date before filter for common
dates0 <-dates
length(dates)
# find the dates that are common for all the symbols
for (i in 2:length(lrets)){
dates <- index(lrets[[i]])[index(lrets[[i]]) %in% dates]
}
length(dates)
# +
# save the lrets data before filter for the common dates
lrets1 <- lrets
#filter the lrets
lrets <- lapply(lrets, function(y){
y[index(y) %in% dates]
})
# Check if the observations are matching.
head(lapply(lrets, nrow))
tail(lapply(lrets, nrow))
# -
# Calculate the time serises of cross sector means
N <- nrow(lrets[[1]])
lrets_mean <- sapply(1:N, function(y){
mean(sapply(lrets, '[[', y))
})
head(lrets_mean)
# ##### Conclusion:
# Of the 69 stocks in our data set, 62 have valid data for at least 80% of the dates in the sample period. After matching the timestamps, we have 1752 days of return data for each stock.
# ##### 5. Estimate the parameters of the stable distribution for the mean returns computed in 4.
# Calculate mean and standard deviation of the mean returns
m <- mean(na.omit(lrets_mean))
std <- sd(na.omit(lrets_mean))
print(c(m, std))
# Estimate the parameters
ret = as.numeric(na.omit(lrets_mean))
objKout <- Estim(EstimMethod = "Kout", data = ret, pm = 0,
ComputeCov = FALSE, HandleError = FALSE,
spacing = "Kout")
objKout
# #####
# ##### Conclusion:
# Fitting a stable distribution to the mean returns of our 'Consumer Discretionary' sector gives alpha = 1.59 and beta = -0.186, which is quite different from the normal distribution (the stable case with alpha = 2).
# ##### 6. Plot the histogram of the mean returns and compare to the densities of normal distribution, and stable distribution with the fitted parameters from the previous step.
# +
# Use Histogram to draw mean returns of 'Consumer Discretionary' sector
hist(ret, n = 100, probability = TRUE, border = "white",
col = "steelblue")
# Compare with stable distribution with the fitted parameters from the previous step as the blue curve
x <- seq(-0.1, 0.1, 0.001)
lines(x, dstable(x, alpha = objKout@par[1], beta = objKout@par[2],
gamma=objKout@par[3], delta=objKout@par[4],tol= 1e-3), lwd = 2)
# Compare to normal distribution as the red curve
lines(x, dnorm(x, mean = m, sd = std), lwd = 2, col = 'red')# normal distribution
# -
# ##### Conclusion:
# By comparison, the normal distribution (red curve) does not fit our sector data very well, while the stable distribution with the fitted parameters from the previous step (blue curve) matches the mean return data of the 'Consumer Discretionary' sector much better.
#
# # Problem 2
# ##### Consider 2 processes. The first one is given by the formula:
# ##### $$p_{t}= p_{t-1} + \epsilon_{t} - \epsilon_{t-1}, $$
# ##### where $\epsilon_{t}$ is an i.i.d N(0, 4) distributed sequence.
# ##### The second process is given by:
# ##### $$r_{t}= r_{t-1} + \epsilon_{t}, $$
# ##### where $\epsilon$ follows a random walk:
# ##### $$\epsilon_{t}= \mu + \epsilon_{t-1} + \eta_{t}, $$
# ##### where $\eta_{t}$ is an i.i.d N(0, 1) distributed sequence, and cov($\eta_{t},\epsilon_{t-k}$) = 0 for all t and k.
# ##### 1. Compute theoretical mean, and variance for both processes $p_{t}$ and $r_{t}$. Is any of the processes stationary in terms of mean and variance? Which process has a constant variance?
# +
# Simulate p
l <- 501
#e <- rnorm(l)
p_e <- rnorm(l, 0, sqrt(4))
#simulate noise
p_nd <- vector()
p_nd[1] <- 0 #initial value
for (i in 2 : l){
p_nd[i] <- p_nd[i-1] + p_e[i] - p_e[i-1]
}
# +
# Simulate r
#r(t) = r(t-1) + e(t)
# Simulate e random walk
# e(t)= u + e(t-1) + n(t)
r_nd <- vector()
r_nd[1] <- 0 #initial value
e_nd <- vector()
n_e <- rnorm(l) #simulate noise
e_nd[1] <- 0 #initial value
u <- 0
for (i in 2 : l){
e_nd[i] <- u + e_nd[i-1] + n_e[i]
r_nd[i] <- r_nd[i-1] + e_nd[i]
}
par(mfrow = c(1, 2))
plot.ts(p_nd,col='blue', main='Process P')
plot.ts(r_nd,col='red', main='Process R')
# -
# ##### Conclusion: For Process P
# since :
#
# $ p_{1} = p_{0} + \epsilon_{1} - \epsilon_{0} $
#
# $ p_{2} = p_{1} + \epsilon_{2} - \epsilon_{1} = (p_{0} + \epsilon_{1} - \epsilon_{0}) + \epsilon_{2} - \epsilon_{1} = p_{0} + \epsilon_{2} - \epsilon_{0}$
#
# ...
#
# so we have:
#
# $ p_{t} = p_{0} + \epsilon_{t} - \epsilon_{0}$
#
# and $ E[p_{t}] = E[p_{0} + \epsilon_{t} - \epsilon_{0}] = E[p_{0}] + E[\epsilon_{t}] - E[\epsilon_{0}] = p_{0} + 0 - 0 = p_{0}$
#
# thus Process P has mean as $p_{0}$ .
#
# Because:
#
# $ Var[p_{t}] = Var[p_{0} + \epsilon_{t} - \epsilon_{0}]$
#
# $ Var[p_{t}] = Var[p_{0}] + Var[\epsilon_{t} - \epsilon_{0}] + 2 * Cov[p_{0}, \epsilon_{t} - \epsilon_{0}]$
#
# $ Var[\epsilon_{0}] = 4 ; Var[\epsilon_{t}] = 4$ and $ Var[p_{0}] = 0$
#
# so $ Var[p_{t}] = 8 $
#
# thus Process P has Var as 8 .
#
# So Process P has constant mean $p_{0}$ and constant variance 8; it is stationary in terms of mean and variance.
#
# ##### For Process R
# since $\epsilon$ follows a random walk with drift $\mu$ (and $\epsilon_{0} = 0$), we have $ \epsilon_{t} = \mu t + \sum_{j=1}^{t}\eta_{j} $ and $ r_{t} = r_{0} + \sum_{s=1}^{t}\epsilon_{s} $.
#
# So the mean is $ E[r_{t}] = r_{0} + \mu \frac{t(t+1)}{2} $, which grows quadratically in time.
#
# Collecting the noise terms gives $ r_{t} = r_{0} + \mu \frac{t(t+1)}{2} + \sum_{j=1}^{t}(t-j+1)\eta_{j} $, and because $ Var[\eta_{t}] = 1 $, $ Var[r_{t}] = \sum_{k=1}^{t}k^{2} = \frac{t(t+1)(2t+1)}{6} \approx \frac{t^{3}}{3} $, which increases rapidly with time.
#
# Thus Process R is stationary in neither mean nor variance.
#
# ##### 2. Compute Cov($\epsilon_{t}$, $\epsilon_{t-1}$) for both processes.
# ##### Conclusion: For Process P
# Because $\epsilon_{t}$ is an i.i.d N(0,4) distributed sequence, by definition $ Cov[\epsilon_{t},\epsilon_{t-1}] = 0$.
#
# ##### For Process R
# Because $ \epsilon_{t} = \mu + \epsilon_{t-1} + \eta_{t} $ and $\eta_{t}$ is uncorrelated with $\epsilon_{t-1}$, we get $ Cov[\epsilon_{t},\epsilon_{t-1}] = Var[\epsilon_{t-1}] = t-1 $ (using $ Var[\eta_{t}] = 1 $ and $\epsilon_{0} = 0$).
#
# ##### 3. Simulate 1000 realizations of length 500 for both processes $p_{t}$ and $r_{t}$ (i.e. simulate a random realization of each process of length T = 500, repeat 1000 times) with following parameters $\mu$ = 1; $p_{0}$ = $r_{0}$ = 25; $\epsilon_{0}$ = 0.
# +
# Simulate process p
l <- 501
p_rws <- matrix(ncol = 1000, nrow = l)
# 1000 columns (realizations), 501 rows (time steps)
for (j in 1 : ncol(p_rws)){
e <- rnorm(l, 0, sqrt(4))
p_rws[1, j] <- 25
for (i in 2 : l){
p_rws[i, j] <- p_rws[i-1, j] + e[i] - e[i-1]
}}
head(p_rws)
# -
plot.ts(p_rws[, 1], ylim = c(min(p_rws),max(p_rws)), ylab = 'p', main='Simulation of Process P')
for (j in 2:ncol(p_rws)){
lines(p_rws[, j], col = colors()[j])
}
# +
options(repr.plot.width = 10, repr.plot.height = 8)
x <- c(0:500, 500:0)
y <- c(c(0, sqrt(seq(1,500,1))), rev(c(0, -sqrt(seq(1,500,1)))))
plot.ts(p_rws[, 1], ylim = c(min(p_rws),max(p_rws)), ylab = 'p',main = "Analysis of Process P")
for (j in 1:ncol(p_rws)){
lines(p_rws[, j], col = "lightblue")
}
p_mean <- apply(p_rws, 1, mean)
lines(p_mean,ylim = c(-30,30), type = "l",col = "black")
lines(p_mean+ sqrt(apply(p_rws, 1, var)),col="black")
lines(p_mean-sqrt(apply(p_rws, 1, var)),col="black")
# +
# Simulate process R
l <- 501
u <- 1
e_nd <- vector()
r_rws <- matrix(ncol = 1000, nrow = l)
# 1000 columns (realizations), 501 rows (time steps)
for (j in 1 : ncol(r_rws)){
n_e <- rnorm(l) #simulate noise
e_nd[1] <- 0 #initial value
r_rws[1, j] <- 25
for (i in 2 : l){
e_nd[i] <- u + e_nd[i-1] + n_e[i]
r_rws[i, j] <- r_rws[i-1, j] + e_nd[i]
}}
head(r_rws)
# -
# plot simulate process R
plot.ts(r_rws[, 1], ylim = c(min(r_rws),max(r_rws)), ylab = 'r', main='Simulation of Process R')
for (j in 2:ncol(r_rws)){
lines(r_rws[, j], col = colors()[j])
}
# ##### Conclusion
# Process P is stationary, while Process R is non-stationary: its mean and variance grow polynomially (quadratically and cubically, respectively) in time.
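# A quick numerical cross-check of the constant-variance claim for Process P — sketched here in Python (the analysis above is in R), with an assumed number of Monte Carlo realizations:

```python
import numpy as np

rng = np.random.default_rng(0)
n, T = 20000, 500                            # realizations, time steps (assumed)
eps = rng.normal(0.0, 2.0, size=(n, T + 1))  # epsilon ~ N(0, 4)
p = eps[:, 1:] - eps[:, [0]]                 # p_t - p_0 = eps_t - eps_0
var_t = p.var(axis=0)                        # sample variance at each t
print(var_t.min(), var_t.max())              # both should hover around 8
```

# The sample variance stays flat near 8 at every time step, matching the derivation.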
| HW1-submit.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: 'Python 3.7.10 64-bit (''gv2'': conda)'
# name: python3
# ---
# +
# Chi-square hypothesis test.
import pandas as pd
import numpy as np
from scipy import stats
from matplotlib import pyplot as plt
# +
path = "../datos/"
fname = "Tabla_A2_ppt_Ithaca.dat"
# Read the .dat file and adjust its format.
df = pd.read_table(path + fname, names = ["Year", "Precipitation"])
df = df.set_index("Year")
df.head()
# +
# Parameter fitting.
# floc = 0 fixes the gamma location at zero (a two-parameter fit);
# with plain loc = 0 it would only be used as a starting guess.
alpha, zeta, beta = stats.gamma.fit(
    df["Precipitation"], floc = 0)
mu, sigma = stats.norm.fit(df["Precipitation"])
# +
# Histogram of observed data.
bins_lim = [0, 1, 1.5, 2, 2.5, 3,
df["Precipitation"].max()
]
n_obs, bins = np.histogram( df["Precipitation"],
bins = bins_lim )
# Discretize the continuous distributions.
n_norm = n_obs.sum() * np.array( [
stats.norm.cdf(bins_lim[1], mu, sigma),
stats.norm.cdf(bins_lim[2], mu, sigma) -
stats.norm.cdf(bins_lim[1], mu, sigma),
stats.norm.cdf(bins_lim[3], mu, sigma) -
stats.norm.cdf(bins_lim[2], mu, sigma),
stats.norm.cdf(bins_lim[4], mu, sigma) -
stats.norm.cdf(bins_lim[3], mu, sigma),
stats.norm.cdf(bins_lim[5], mu, sigma) -
stats.norm.cdf(bins_lim[4], mu, sigma),
stats.norm.sf(bins_lim[5], mu, sigma)
] )
n_gamma = n_obs.sum() * np.array( [
stats.gamma.cdf(bins_lim[1], alpha, zeta, beta),
stats.gamma.cdf(bins_lim[2], alpha, zeta, beta) -
stats.gamma.cdf(bins_lim[1], alpha, zeta, beta),
stats.gamma.cdf(bins_lim[3], alpha, zeta, beta) -
stats.gamma.cdf(bins_lim[2], alpha, zeta, beta),
stats.gamma.cdf(bins_lim[4], alpha, zeta, beta) -
stats.gamma.cdf(bins_lim[3], alpha, zeta, beta),
stats.gamma.cdf(bins_lim[5], alpha, zeta, beta) -
stats.gamma.cdf(bins_lim[4], alpha, zeta, beta),
stats.gamma.sf(bins_lim[5], alpha, zeta, beta)
] )
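# The long expressions above all follow the same pattern: each bin's probability is the CDF differenced at the bin edges, plus the two tails. A compact stdlib-only sketch of that idea for the normal case (the mu = 2.0, sigma = 0.5 values are placeholders, not the fitted parameters):

```python
import math

def norm_cdf(x, mu, sigma):
    # normal CDF via the error function
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

edges = [1, 1.5, 2, 2.5, 3]          # interior bin edges
cdf = [norm_cdf(e, 2.0, 0.5) for e in edges]
probs = [cdf[0]] + [b - a for a, b in zip(cdf, cdf[1:])] + [1.0 - cdf[-1]]
print(probs)  # six bin probabilities; multiplying by n_obs.sum() gives expected counts
```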
# +
# Plot the data and the distributions.
fig, ax = plt.subplots()
df["Precipitation"].hist( bins = bins_lim,
density = True, ax = ax )
x = np.linspace(0, df["Precipitation"].max(), 1000)
y_1 = stats.gamma.pdf(x, alpha, zeta, beta)
y_2 = stats.norm.pdf(x, mu, sigma)
ax.plot(x, y_1)
ax.plot(x, y_2)
ax.set_title("Gamma vs. Normal Distribution",
    fontsize = 16)
ax.set_xlabel("Precipitation [mm]")
ax.set_ylabel("P")
ax.legend(["Gamma", "Normal", "Histogram"])
ax.set_xlim(0, bins[-1])
ax.set_ylim(0)
# +
# Chi-square test.
chi_norm = stats.chisquare(
n_obs, n_norm, ddof = 2)
chi_gamma = stats.chisquare(
n_obs, n_gamma, ddof = 2)
print("Chi-square")
print()
print("Normal")
print(f"Chi-square: {chi_norm.statistic:.2f}")
print(f"p: {chi_norm.pvalue:.4f}")
print()
print("Gamma")
print(f"Chi-square: {chi_gamma.statistic:.2f}")
print(f"p: {chi_gamma.pvalue:.4f}")
| code/chi_2.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# <small><small><i>
# All the IPython Notebooks in this lecture series by Dr. <NAME> are available @ **[GitHub](https://github.com/milaan9/11_Python_Matplotlib_Module)**
# </i></small></small>
# # Python Matplotlib
#
# **[Matplotlib](https://matplotlib.org/)** is a Python 2D plotting library that produces high-quality charts and figures, helping us visualize extensive data and understand it better. Pandas is a handy and useful data-structure tool for analyzing large and complex data.
# ### Load Necessary Libraries
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
# ### Basic Graph
# +
x = [0,1,2,3,4]
y = [0,2,4,6,8]
# Resize your graph (dpi specifies pixels per inch; when saving, 300 is a good choice)
plt.figure(figsize=(8,5), dpi=100)
# Line 1
# Keyword Argument Notation
#plt.plot(x,y, label='2x', color='red', linewidth=2, marker='.', linestyle='--', markersize=10, markeredgecolor='blue')
# Shorthand notation
# fmt = '[color][marker][line]'
plt.plot(x,y, 'b^--', label='2x')
## Line 2
# select interval we want to plot points at
x2 = np.arange(0,4.5,0.5)
# Plot part of the graph as line
plt.plot(x2[:6], x2[:6]**2, 'r', label='X^2')
# Plot remainder of graph as a dot
plt.plot(x2[5:], x2[5:]**2, 'r--')
# Add a title (specify font parameters with fontdict)
plt.title('Our First Graph!', fontdict={'fontname': 'Comic Sans MS', 'fontsize': 20})
# X and Y labels
plt.xlabel('X Axis')
plt.ylabel('Y Axis')
# X, Y axis Tickmarks (scale of your graph)
plt.xticks([0,1,2,3,4,])
#plt.yticks([0,2,4,6,8,10])
# Add a legend
plt.legend()
# Save figure (dpi 300 is good when saving so graph has high resolution)
plt.savefig('mygraph.png', dpi=300)
# Show plot
plt.show()
# -
# The line plot should look like this:
# <div>
# <img src="img/ex1_1.png" width="600"/>
# </div>
# ### Bar Chart
# +
labels = ['A', 'B', 'C']
values = [1,4,2]
plt.figure(figsize=(5,3), dpi=100)
bars = plt.bar(labels, values)
patterns = ['/', 'O', '*']
for bar in bars:
bar.set_hatch(patterns.pop(0))
plt.savefig('barchart.png', dpi=300)
plt.show()
# -
# The bar chart should look like this:
# <div>
# <img src="img/ex1_2.png" width="500"/>
# </div>
# # Real World Examples
#
# Download datasets from my Github:
# 1. **[gas_prices.csv](https://github.com/milaan9/11_Python_Matplotlib_Module/blob/main/gas_prices.csv)**
# 2. **[fifa_data.csv](https://github.com/milaan9/11_Python_Matplotlib_Module/blob/main/fifa_data.csv)**
# 3. **[iris_data.csv](https://github.com/milaan9/11_Python_Matplotlib_Module/blob/main/iris_data.csv)**
# ### Line Graph
# +
gas = pd.read_csv('gas_prices.csv')
plt.figure(figsize=(8,5))
plt.title('Gas Prices over Time (in USD)', fontdict={'fontweight':'bold', 'fontsize': 18})
plt.plot(gas.Year, gas.USA, 'b.-', label='United States')
plt.plot(gas.Year, gas.Canada, 'r.-')
plt.plot(gas.Year, gas['South Korea'], 'g.-')
plt.plot(gas.Year, gas.Australia, 'y.-')
# Another Way to plot many values!
# countries_to_look_at = ['Australia', 'USA', 'Canada', 'South Korea']
# for country in gas:
# if country in countries_to_look_at:
# plt.plot(gas.Year, gas[country], marker='.')
plt.xticks(gas.Year[::3].tolist()+[2011])
plt.xlabel('Year')
plt.ylabel('US Dollars')
plt.legend()
plt.savefig('Gas_price_figure.png', dpi=300)
plt.show()
# -
# The line graph should look like this:
# <div>
# <img src="img/ex1_3.png" width="600"/>
# </div>
# ### Load Fifa Data
# +
fifa = pd.read_csv('fifa_data.csv')
fifa.head(5)
# -
# ### Histogram
# +
bins = [40,50,60,70,80,90,100]
plt.figure(figsize=(8,5))
plt.hist(fifa.Overall, bins=bins, color='#abcdef')
plt.xticks(bins)
plt.ylabel('Number of Players')
plt.xlabel('Skill Level')
plt.title('Distribution of Player Skills in FIFA 2018')
plt.savefig('histogram.png', dpi=300)
plt.show()
# -
# The histogram should look like this:
# <div>
# <img src="img/ex1_4.png" width="600"/>
# </div>
# ### Pie Chart
# #### Pie Chart #1
# +
left = fifa.loc[fifa['Preferred Foot'] == 'Left'].count()[0]
right = fifa.loc[fifa['Preferred Foot'] == 'Right'].count()[0]
plt.figure(figsize=(8,5))
labels = ['Left', 'Right']
colors = ['#abcdef', '#aabbcc']
plt.pie([left, right], labels = labels, colors=colors, autopct='%.2f %%')
plt.title('Foot Preference of FIFA Players')
plt.show()
# -
# The pie chart should look like this:
# <div>
# <img src="img/ex1_5.png" width="400"/>
# </div>
# #### Pie Chart #2
# +
plt.figure(figsize=(8,5), dpi=100)
plt.style.use('ggplot')
fifa.Weight = [int(x.strip('lbs')) if type(x)==str else x for x in fifa.Weight]
light = fifa.loc[fifa.Weight < 125].count()[0]
light_medium = fifa[(fifa.Weight >= 125) & (fifa.Weight < 150)].count()[0]
medium = fifa[(fifa.Weight >= 150) & (fifa.Weight < 175)].count()[0]
medium_heavy = fifa[(fifa.Weight >= 175) & (fifa.Weight < 200)].count()[0]
heavy = fifa[fifa.Weight >= 200].count()[0]
weights = [light,light_medium, medium, medium_heavy, heavy]
label = ['under 125', '125-150', '150-175', '175-200', 'over 200']
explode = (.4,.2,0,0,.4)
plt.title('Weight of Professional Soccer Players (lbs)')
plt.pie(weights, labels=label, explode=explode, pctdistance=0.8,autopct='%.2f %%')
plt.show()
# -
# The pie chart should look like this:
# <div>
# <img src="img/ex1_6.png" width="500"/>
# </div>
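# A note on the weight clean-up in Pie Chart #2: `str.strip('lbs')` removes the characters `l`, `b`, `s` from both ends of the string, not the literal suffix — it works here because the values look like `'159lbs'`. A small stand-alone sketch (the sample values are made up):

```python
raw = ['159lbs', '183lbs', None]  # hypothetical Weight column entries
clean = [int(x.strip('lbs')) if isinstance(x, str) else x for x in raw]
print(clean)  # → [159, 183, None]
```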
# #### Pie Chart #3
import pandas as pd
data = pd.read_csv("iris_data.csv")
data.head()
# +
SepalLength = data['SepalLengthCm'].value_counts()
# Plot a pie chart
# %matplotlib inline
from matplotlib import pyplot as plt
SepalLength.plot(kind='pie', title='Sepal Length', figsize=(9,9))
plt.legend()
plt.show()
# -
# The pie chart should look like this:
# <div>
# <img src="img/ex1_7.png" width="600"/>
# </div>
# ### Box and Whiskers Chart
#
# A box and whisker plot (box plot) *displays the five-number summary of a set of data. The five-number summary is the minimum, first quartile, median, third quartile, and maximum.*
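# The five-number summary can be computed directly; here is a minimal sketch using the same linear-interpolation convention that pandas' `quantile` uses by default, applied to a small made-up hours list:

```python
def five_number_summary(data):
    s = sorted(data)
    def quantile(q):
        # linear interpolation between order statistics (pandas/numpy default)
        idx = q * (len(s) - 1)
        lo = int(idx)
        hi = min(lo + 1, len(s) - 1)
        return s[lo] + (s[hi] - s[lo]) * (idx - lo)
    return (s[0], quantile(0.25), quantile(0.5), quantile(0.75), s[-1])

print(five_number_summary([41, 40, 36, 30, 35, 39, 40]))
# → (30, 35.5, 39.0, 40.0, 41)
```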
# #### Box plot #1
# +
plt.figure(figsize=(5,8), dpi=100)
plt.style.use('default')
barcelona = fifa.loc[fifa.Club == "FC Barcelona"]['Overall']
madrid = fifa.loc[fifa.Club == "Real Madrid"]['Overall']
revs = fifa.loc[fifa.Club == "New England Revolution"]['Overall']
#bp = plt.boxplot([barcelona, madrid, revs], labels=['a','b','c'], boxprops=dict(facecolor='red'))
bp = plt.boxplot([barcelona, madrid, revs], labels=['FC Barcelona','Real Madrid','NE Revolution'], patch_artist=True, medianprops={'linewidth': 2})
plt.title('Professional Soccer Team Comparison')
plt.ylabel('FIFA Overall Rating')
for box in bp['boxes']:
# change outline color
box.set(color='#4286f4', linewidth=2)
# change fill color
box.set(facecolor = '#e0e0e0' )
# change hatch
#box.set(hatch = '/')
plt.show()
# -
# The box plot should look like this:
# <div>
# <img src="img/ex1_8.png" width="400"/>
# </div>
# #### Box plot #2
# creating data
import pandas as pd
df = pd.DataFrame({'Name': ['John', 'Rad', 'Var', 'Mathew', 'Alina', 'Lee', 'Rogers'],
'Salary':[60000,64000,60000,289000,66000,50000,60000],
'Hours':[41,40,36,30,35,39,40],
'Grade':[50,50,46,95,50,5,57]})
print(df)
# Quartiles of Hours
print(df['Hours'].quantile([0.25, 0.5, 0.75]))
# Plot a box-whisker chart
import matplotlib.pyplot as plt
df['Hours'].plot(kind='box', title='Weekly Hours Distribution', figsize=(10,8))
plt.show()
# The box plot should look like this:
# <div>
# <img src="img/ex1_9.png" width="600"/>
# </div>
# Quartiles of Salary
print(df['Salary'].quantile([0.25, 0.5, 0.75]))
# Plot a box-whisker chart
df['Salary'].plot(kind='box', title='Salary Distribution', figsize=(10,8))
plt.show()
# The box plot should look like this:
# <div>
# <img src="img/ex1_10.png" width="600"/>
# </div>
| 002_Python_Matplotlib_Exercise_1.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Searching for data on my Alura certificates
# +
from bs4 import BeautifulSoup
import pandas as pd
from urllib.request import Request, urlopen
from urllib.error import URLError, HTTPError
import datetime
import time
import numpy as np
def busca_html(url):
    headers = {'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/76.0.3809.100 Safari/537.36'}
    try:
        req = Request(url, headers = headers)
        response = urlopen(req)
        html = response.read()
    except HTTPError as e:
        # re-raise so the caller never sees a half-initialised result
        print('HTTPError', e.code, e.reason)
        raise
    except URLError as e:
        print('URLError', e.reason)
        raise
    html = html.decode('utf-8')
    def trata_html(input):
        # collapse runs of whitespace and remove spaces between adjacent tags
        return " ".join(input.split()).replace('> <', '><')
    html = trata_html(html)
    soup = BeautifulSoup(html, 'html.parser')
    return soup
# -
url = 'https://cursos.alura.com.br/user/viniantunes'
soup = busca_html(url)
cursos_concluidos = soup.findAll('li', {'class': 'card-list__item'})
# +
saida = pd.DataFrame()
cursos = {}
for curso in cursos_concluidos:
cursos['titulo'] = curso.attrs['data-course-name']
cursos['data_inicio'] = datetime.datetime.strptime(curso.attrs['data-started-at'], '%m/%d/%Y').date()
cursos['data_fim'] = datetime.datetime.strptime(curso.attrs['data-finished-at'], '%m/%d/%Y').date()
cursos['certificado'] = url.split('/user/viniantunes')[0] + curso.find('a', {'class': 'course-card__certificate bootcamp-text-color'}).attrs['href']
cursos['icone'] = curso.find('img').attrs['src']
df_cursos = pd.DataFrame([cursos])
saida = pd.concat([saida, df_cursos], ignore_index=True)
saida.sort_values(by='data_fim', inplace=True, ignore_index=True)
# -
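# A note on the date handling above: the `data-started-at` / `data-finished-at` attributes are parsed with a US-style `%m/%d/%Y` format. A minimal stand-alone illustration (the date string is made up):

```python
import datetime

started = datetime.datetime.strptime('03/15/2021', '%m/%d/%Y').date()
print(started)  # → 2021-03-15
```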
saida
path = 'C:\\Users\\vinicius.oliveira\\Desktop\\Estudos\\certificados\\data\\'
saida.to_csv(path + 'data_certificates_pt1.csv', index=False, sep=';')
dados = pd.read_csv(path + 'data_certificates_pt1.csv', sep=';')
dados
# +
# %%time
carga_horaria = []
exercicios = []
certficate_auths = []
cod_auths = []
for url_certificado in dados['certificado']:
tempo_espera = np.random.randint(2, 6)
time.sleep(tempo_espera)
infos = busca_html(url_certificado)
carga_horaria.append(int(infos.find('span', {'class': 'certificate-hours'}).getText().split(' horas')[0].split('estimada em ')[1]))
exercicios.append(int(infos.find('span', {'class': 'exercises-done'}).getText().split(' de ')[0].strip()))
certficate_auths.append(infos.find('a', {'class': 'authenticity alura'}).getText())
cod_auths.append(infos.find('a', {'class': 'authenticity alura'}).getText().split('/')[-1])
dados['carga_horaria'] = carga_horaria
dados['exercicios'] = exercicios
dados['certficate_auths'] = certficate_auths
dados['cod_auths'] = cod_auths
# -
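# The `carga_horaria` extraction above chains `split` calls on the certificate page text; a stand-alone sketch of the same chain on a made-up snippet (the exact wording on the real page is an assumption):

```python
text = 'Carga horária estimada em 12 horas'   # hypothetical certificate text
hours = int(text.split(' horas')[0].split('estimada em ')[1])
print(hours)  # → 12
```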
dados
dados.to_csv(path + 'data_certificates_final.csv', index=False, sep=';')
| busca_dados_certificados.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: widgets-tutorial
# language: python
# name: widgets-tutorial
# ---
# # The Jupyter Interactive Widget Ecosystem
#
# ## SciPy 2018
# ## <NAME>, <NAME>, <NAME> and <NAME>
#
# This tutorial will introduce you to the widgets in the Jupyter notebook, walk through a few approaches to writing widgets, and introduce some relatively new widget packages.
#
# We are using ipywidgets 7.2.
# 00. [Introduction](00.00-introduction.ipynb)
# 01. [Overview](01.00-overview.ipynb)
# 02. [Widgets without writing widgets: interact](02.00-Using Interact.ipynb)
# 03. [Simple Widget Introduction](03.00-Widget_Basics.ipynb)
# 04. [Widget List](04.00-widget-list.ipynb)
# 02. [Output widgets: leveraging Jupyter's display system](04.02-more-on-output-widget.ipynb)
# 05. [Exercises](05.00-interact and widget basics exercises.ipynb)
# 06. [Layout and Styling of Jupyter widgets](06.00-Widget_Styling.ipynb)
# 07. [Widget layout exercises](07.00-container-exercises.ipynb)
# 08. [Widget Events](08.00-Widget_Events.ipynb)
# 09. [Three approaches to events](09.00-Widget Events 2.ipynb)
# 01. [Password generator: `observe`](09.01-Widget Events 2 -- bad password generator, version 1.ipynb)
# 02. [Separating the logic from the widgets](09.02-Widget Events 2 -- Separating Concerns.ipynb)
# 03. [Separating the logic using classes](09.03-Widget Events 2--Separating concerns, object oriented.ipynb)
# 10. [More widget libraries](10.00-More widget libraries.ipynb)
# 01. [bqplot: complex interactive visualizations](10.01-bqplot.ipynb)
# 02. [pythreejs: 3D rendering in the browser](10.02-pythreejs.ipynb)
# 03. [ipyvolume: 3D plotting in the notebook](10.03-ipyvolume.ipynb)
# 04. [ipyleaflet: Maps in the notebook](10.04-ipyleaflet.ipynb)
# 05. [Astronomical widget libraries](10.05-astro-libraries.ipynb)
# 06. [Exercise: Using one plot as a control for another](10.06-bqplot--A plot as a control in a widget.ipynb)
# 07. [Widget library exercise: Link some widgets up](10.07-widget-library-exercises.ipynb)
# 08. [Demo: Reactive plot with multiple regressions](10.08-bqplot--A--Penalized regression.ipynb)
# 09. [Input Widgets and Geospatial Data Analysis](10.09-flight-sim.ipynb)
# 10. [Vaex - Out of core dataframes](10.10-vaex.ipynb)
# 11. [ipywebrtc](10.11-ipywebrtc.ipynb)
# ## [Table of widget and style properties](Table of widget keys and style keys.ipynb)
# # Acknowledgements
#
# + Special thanks to the dozens of [ipywidgets developers](https://github.com/jupyter-widgets/ipywidgets/graphs/contributors), including <NAME> who wrote much of the code in the early years of ipywidgets.
# + Several of the notebooks in this tutorial were originally developed as part of [ipywidgets](http://ipywidgets.readthedocs.io/en/latest/) by <NAME> ([@ellisonbg](https://github.com/ellisonbg)) and <NAME> ([@jdfreder](https://github.com/jdfreder)).
# + Thanks to <NAME> ([@DougRzz](https://github.com/DougRzz))
# + Project Jupyter core developer <NAME> ([@willingc](https://github.com/willingc)) and [Minnesota State University Moorhead](http://physics.mnstate.edu) students <NAME> ([@ACBlock](https://github.com/ACBlock)) and <NAME> ([@janeglanzer](https://github.com/janeglanzer)) provided very useful feedback on early drafts of this tutorial.
#
| notebooks/index.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
from sklearn.datasets import load_iris
from sklearn.preprocessing import Normalizer
from sklearn.model_selection import train_test_split
iris = load_iris()
X, y = iris.data[:, :2], iris.target
print(X.shape)
print(y.shape)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=33)
print(X_train.shape)
print(X_test.shape)
print(y_train.shape)
print(y_test.shape)
print(X_train[0])
print(X_test[0])
scaler = Normalizer().fit(X_train)
normalized_X = scaler.transform(X_train)
normalized_X_test = scaler.transform(X_test)
print(normalized_X[0])
print(normalized_X_test[0])
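# Note that `Normalizer` rescales each sample (row) to unit L2 norm — it is per-row, unlike `StandardScaler`, which standardizes per-feature. A minimal hand-rolled sketch of what it does to one sample:

```python
import math

x = [3.0, 4.0]                           # one sample with two features
norm = math.sqrt(sum(v * v for v in x))  # L2 norm = 5.0
unit = [v / norm for v in x]
print(unit)  # → [0.6, 0.8]
```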
| codecheatsheet/preprocessing_normalization.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Password Entropy Analysis
# *Analysing the entropy and cryptographic security of various password creating methodologies*
#
# ## Diceware
# Dice passwords are a password strategy that uses the highly random number-generating characteristics of fair dice throws. These passwords admit a true entropy analysis because their generation has no human element; even the word-list choice can't introduce human bias into a cracking strategy, because randomness selects which word to use, so entropy is the deciding factor in cracking these passwords. The trick is to make sure that the list generates memorable passwords with a large number of permutations. Let's look at 4-, 5- and 6-dice-toss word lists.
# +
import numpy as np
dice4 = 6**4
dice5 = 6**5
dice6 = 6**6
print("4-Dice-Throw combinations: " + str(dice4) + ", has entropy of: " + str(np.log2(dice4)))
print("5-Dice-Throw combinations: " + str(dice5) + ", has entropy of: " + str(np.log2(dice5)))
print("4-Dice-Throw combinations: " + str(dice6) + ", has entropy of: " + str(np.log2(dice6)))
# -
# Four-, five- and six-dice-throw generated passwords have 1296, 7776, and 46656 permutations respectively, corresponding to 10.3, 12.9, and 15.5 bits of entropy. Going by what can reasonably be achieved with consumer hardware, let's take a hash rate of 350 GH/s as the benchmark for how strong these passwords need to be, and set a minimum number of hashes it should take to crack them. At 350 GH/s, a hacker who has stolen a password hash during their operations can run an expensive computer (about 5000 USD) to test 350 billion permutations every second, so a password that is considered strong should force that machine to run the cracking procedure nonstop for at least a month before half the possible permutations have been attempted. This assumes passwords get changed at least once a month. Below is the number of hashes this amounts to.
hash_rate = 350 * 10**9
seconds_month = 60 * 60 * 24 * 30
hashes_month = seconds_month * hash_rate
exahashes_month = hashes_month / (10**18)
exahashes_month
# This amounts to nearly one exa-hash per month, or $10^{18}$ permutations that can be attempted. Also, remember that this is probabilistic: the right combination is very likely to be reached by chance before every permutation is tried, so a password strategy should aim for at least twice this many permutations. So how many dice throws would this amount to? The number of permutations generated by $d$ dice throws is $6^d$, and the hash rate is denoted by $R$, so the problem becomes a simple rate problem for the duration $t$ needed to try 50% of all permutations of a password:
# $$ t = \frac{6^d}{2R} \rightarrow d(t) = log_6(2Rt) $$
# So, to satisfy the requirement that this powerful consumer computer can attempt only half of all possible password combinations over a month, the code below figures out the number of dice tosses required.
log6 = lambda x: np.log(x) / np.log(6)
dice_throws_month = log6(2 * hashes_month)
dice_throws_month
# This means that about 24 dice throws are needed in order to have high assurance that the password generated from them won't be cracked in under a month. What remains is to figure out what combination of per-word memorability and word count in a password string best satisfies this requirement. Fortunately 24 is an easily divisible number, so let's look at word lists generated with 3 or 4 dice throws per word. Three dice throws per word gives a 216-word list, and four gives 1296 words. Both lists are easy enough to populate with memorable words, so let's go with 4 dice tosses per word: a password of 6 such words (24 tosses in total) ensures that only half the password combinations could be attempted by this powerful computer in a month.
#
# What if 1 month, isn't good enough? Say the password being created should be expected to be used safely for half a year because changing every month might be less desirable than having to remember longer ones. What kind of dice password should be used?
hashes_6month = 6 * hashes_month
dice_throws_6month = log6(2 * hashes_6month)
dice_throws_6month
# As it turns out, the exponential growth is so steep at this point that adding just one more dice throw extends the safety margin from one month to more than six. That is an easy tradeoff to make for that much extra security, so let's try a word list built from 5 dice throws, and use 5 of its words in one password string instead.
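# Plugging the 25-throw strategy (five words from a 5-dice word list) back into the rate formula confirms the safety margin:

```python
hash_rate = 350 * 10 ** 9        # hashes per second
seconds_month = 60 * 60 * 24 * 30
throws = 25                      # 5 words x 5 dice throws each
months = 6 ** throws / (2 * hash_rate) / seconds_month
print(months)  # time to try half of all combinations, in months
```

# The result is roughly 15.7 months — comfortably past the six-month target.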
| password-strategy-analysis.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + pycharm={"is_executing": false}
#####################################################################
# This notebook is authored by: <NAME> #
# Date: November 2020 #
# If you use this code or the results from this work please cite: #
# Resurrecting bbh with kinematic shapes #
# <NAME>, <NAME> and <NAME> #
# arXiv:2011.13945 (https://arxiv.org/abs/2011.13945) #
#####################################################################
# IMPORTANT: use xgb 1.0.2 and shap 0.34 for parallelisation to work.
import numpy as np
import pandas as pd
import xgboost as xgb
from sklearn import ensemble
import sklearn.model_selection as ms
from sklearn import metrics
import shap
import matplotlib.pyplot as plt
import seaborn as sns
import os
import math as m
import collections
import pickle
from matplotlib.colors import ListedColormap, LinearSegmentedColormap
from colour import Color
from matplotlib import rc
import sys
#import mplhep as hep
####
rc("font",family="serif")
rc("axes",xmargin=0)
rc("axes",ymargin=0.05)
rc("xtick",direction='in')
rc("ytick",direction='in')
rc("xtick",top=True)
rc("ytick",right=True)
rc("legend",frameon=False)
rc("xtick.major",size=5)
rc("xtick.minor",size=2.5)
rc("ytick.major",size=5)
rc("ytick.minor",size=2.5)
rc("savefig",facecolor="ff000000")
####
# To supress warnings from shap
if not sys.warnoptions:
import warnings
warnings.simplefilter("ignore")
os.environ["CUDA_VISIBLE_DEVICES"] = '0'
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '1'
#N_THREADS = 42 ## Change for reducing load on CPU
N_THREADS = 42 ## Change for reducing load on CPU
os.environ['OMP_NUM_THREADS'] = str(N_THREADS)
seed = 42
# #31455b nice colour
col= ['#D9296A','#1BA698','#52c9ed','#EB5952','#F2955E','#5196A6']
colors = col[0:2]
cmp_2 = LinearSegmentedColormap.from_list('my_list', [Color(c1).rgb for c1 in colors], N=len(colors))
colors = col[0:3]
cmp_3 = LinearSegmentedColormap.from_list('my_list', [Color(c1).rgb for c1 in colors], N=len(colors))
colors = col[0:4]
cmp_4 = LinearSegmentedColormap.from_list('my_list', [Color(c1).rgb for c1 in colors], N=len(colors))
colors = col[0:5]
cmp_5 = LinearSegmentedColormap.from_list('my_list', [Color(c1).rgb for c1 in colors], N=len(colors))
# -
# ## Helper functions for I/O, BDT analysis and evaluation of results
# + pycharm={"is_executing": false, "name": "#%%\n"}
def fileparser(path, dlist, frac=0.5, sample=0, L=2, weights=True):
""" The fileparser to read the events from a csv
argument:
path: the path to the file
dlist: the list of variables to be excluded
frac: the fraction of sample that will be the test sample when sample is set to 0
sample: the number of events that will be the train sample.
L: Luminosity scaling
returns:
df_train: the training dataframe
df_test: the testing dataframe
weight: the weight (related to crosssection)
"""
df = pd.read_csv(path)
df.drop(columns=dlist, inplace=True)
n = len(df)
if weights: weight = int(round(np.abs(df['weight'].sum()) * 3. * 1e6 * L)) ## The abs() is taken to make the weight of ybyt +ve
else: weight = int(round(np.abs(df['weight'].mean()) * 3. * 1e3 * L)) ## REMEMBER: the weight is put by hand in the ROOT file and is just the XS in fb.
# df['weight'] = df['weight']/np.abs(df['weight'])
if sample != 0:
df_train = df.sample(n=sample, random_state=seed)
df_test = df.drop(df_train.index)
else :
df_test = df.sample(frac=frac, random_state=seed)
df_train = df.drop(df_test.index)
# df_train = df_train[(df_train.maa > 123) & (df_train.maa < 127)]
# df_test = df_test[(df_test.maa > 123) & (df_test.maa < 127)]
return df_train, df_test, weight
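The sample/drop splitting pattern used in `fileparser` can be sketched on a synthetic frame (the column names here are illustrative, not the analysis variables):

```python
import numpy as np
import pandas as pd

seed = 42
# A toy frame standing in for the parsed CSV (columns are illustrative).
df = pd.DataFrame({'feat': np.arange(100), 'weight': np.ones(100)})

# Fixed-size split: draw the training sample, the rest is the test set.
df_train = df.sample(n=40, random_state=seed)
df_test = df.drop(df_train.index)

assert len(df_train) == 40 and len(df_test) == 60
assert df_train.index.intersection(df_test.index).empty  # disjoint by construction
```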
def runBDT(df, filename='', rf=False, depth=10, sample=1, seed=seed):
""" The BDT/RF runner
argument:
df: the dataframe with all the events
filename: the name of the pickle file to store the model in
rf: a bolean to toggle between BDT and Random Forest classifiers
sample: The fraction of variables to sample
seed: the seed for the random number generator
returns:
classifier: the classifier
x_test: the features for the test set
y_test: the labels for the test set
shap_values: the SHAP values
X_shap: the feature set with which the shap values have been computed
"""
mshap = depth <= 10  # only compute SHAP values for shallower trees
df = df.sample(frac=sample)
X = df.drop(columns=['class', 'weight'])
y = df['class'].values
# Split for training and testing
x_train, x_test, y_train, y_test = ms.train_test_split(X.values, y, test_size=0.2, random_state=seed)
eval_set = [(x_train, y_train), (x_test, y_test)]
# Fit the decision tree
if rf:
classifier = ensemble.RandomForestClassifier(max_depth=depth, n_estimators=1000, criterion='gini', n_jobs=int(N_THREADS/2), random_state=seed)
classifier = classifier.fit(x_train, y_train)
else:
classifier = xgb.XGBClassifier(max_depth=depth, learning_rate=0.01, objective='multi:softprob', num_class=nchannels,
n_jobs=N_THREADS, subsample=0.5, colsample_bytree=1, n_estimators=5000, random_state=seed)
classifier = classifier.fit(x_train, y_train, early_stopping_rounds=50, eval_set=eval_set,
eval_metric=["merror", "mlogloss"], verbose=False)
# Predictions
y_pred = classifier.predict(x_test)
print('Accuracy Score: {:4.2f}% '.format(100*metrics.accuracy_score(y_test, y_pred)))
if filename != '': pickle.dump(classifier, open(filename, 'wb'))
# Calculate the SHAP scores
if mshap:
X_shap = pd.DataFrame(x_test, columns=df.drop(columns=['class', 'weight']).columns)
explainer = shap.TreeExplainer(classifier)
shap_values = explainer.shap_values(X_shap)
else:
shap_values = []
X_shap = pd.DataFrame()
return classifier, x_test, y_test, shap_values, X_shap
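The `early_stopping_rounds=50` passed to `fit` above halts training once the validation metric stops improving for 50 rounds; the underlying rule can be sketched without xgboost (the loss history below is made up for illustration):

```python
def best_stop(val_loss, patience=50):
    """Return the 1-based number of rounds after which training would stop:
    the round at which no improvement has been seen for `patience` rounds."""
    best, best_round = float('inf'), 0
    for i, loss in enumerate(val_loss):
        if loss < best:
            best, best_round = loss, i
        elif i - best_round >= patience:
            return i + 1  # early stop triggered here
    return len(val_loss)  # patience never exhausted

# Toy history: improves for 3 rounds, then plateaus.
history = [0.9, 0.7, 0.5] + [0.6] * 100
assert best_stop(history, patience=50) == 53  # 3 improving rounds + 50 patient ones
```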
def eval_training(classifier):
""" Evaluate the training
argument:
classifier: the BDT classifier
"""
results = classifier.evals_result()
epochs = len(results['validation_0']['merror'])
x_axis = range(0, epochs)
# plot log loss
#plt.style.use(hep.style.LHCb2)
plt.figure(figsize=(12,5))
plt.subplot(1, 2, 1)
plt.plot(x_axis, results['validation_0']['mlogloss'], label='train')
plt.plot(x_axis, results['validation_1']['mlogloss'], label='test')
plt.legend()
plt.ylabel('log loss')
plt.title('Classifier log loss')
plt.grid()
# plot classification error
plt.subplot(1, 2, 2)
plt.plot(x_axis, results['validation_0']['merror'], label='train')
plt.plot(x_axis, results['validation_1']['merror'], label='test')
plt.legend()
plt.ylabel('Classification Error')
plt.title('Classification Error')
plt.grid()
plt.show()
def get_mclass(i, df_array, weight_array, ps_exp_class, seed=seed):
""" This function is used to create the confusion matrix
arguments:
i: integer corresponding to the class number
df_array: the array of the dataframes of the different classes
weight_array: the array of the weights for the different classes
ps_exp_class: the collection of the pseudo experiment events
seed: the seed for the random number generator
returns:
nevents: the number of events
sif: the significance
"""
mclass = []
for j in range(nchannels):
mclass.append(collections.Counter(classifier.predict(df_array[j].iloc[:,:-2].values))[i]/len(df_array[j])*weight_array[j]/weight_array[i])
sig = np.sqrt(ps_exp_class[i])*mclass[i]/np.sum(mclass)
nevents = np.round(ps_exp_class[i]/np.sum(mclass)*np.array(mclass)).astype(int)
if nchannels == 5: print('sig: {:2.2f}, ku events: {}, hhsm events: {}, tth events: {}, bbh events: {}, bbxaa events: {}'.format(sig, nevents[4], nevents[3], nevents[2], nevents[1], nevents[0]))
if nchannels == 4: print('sig: {:2.2f}, hhsm events: {}, tth events: {}, bbh events: {}, bbxaa events: {}'.format(sig, nevents[3], nevents[2], nevents[1], nevents[0]))
if nchannels == 3: print('sig: {:2.2f}, kd events: {}, ku events: {}, hhsm events: {} '.format(sig, nevents[2], nevents[1], nevents[0]))
if nchannels == 2: print('sig: {:2.2f}, ku events: {}, kd events: {}'.format(sig, nevents[1], nevents[0]))
return nevents, sig
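`collections.Counter` on the predicted labels, as used in `get_mclass`, simply tallies how many events land in each class; a toy example (the label values are illustrative):

```python
import collections
import numpy as np

preds = np.array([0, 1, 1, 3, 3, 3, 2])  # stand-in for classifier.predict(...)
counts = collections.Counter(preds)

assert counts[3] == 3 and counts[1] == 2 and counts[0] == 1
# Classes that are never predicted simply count as zero:
assert counts[4] == 0
```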
def abs_shap(df_shap, df, shap_plot, names, class_names, cmp):
''' A function to plot the bar plot for the mean abs SHAP values
arguments:
df_shap: the dataframe of the SHAP values
df: the dataframe for the feature values for which the SHAP values have been determined
shap_plot: The name of the output file for the plot
names: The names of the variables
class_names: names of the classes
cmp: the colour map
'''
rc('text', usetex=True)
#plt.style.use(hep.style.LHCb2)
plt.rcParams['text.latex.preamble'] = r"\usepackage{amsmath}"
plt.figure(figsize=(5,5))
shap.summary_plot(df_shap, df, color=cmp, class_names=class_names, class_inds='original', plot_size=(5,5), show=False)#, feature_names=names)
ax = plt.gca()
handles, labels = ax.get_legend_handles_labels()
ax.legend(reversed(handles), reversed(labels), loc='lower right', fontsize=15)
plt.xlabel(r'$\overline{|S_v|}$', fontsize=15)
ax = plt.gca()
ax.spines["top"].set_visible(True)
ax.spines["right"].set_visible(True)
ax.spines["left"].set_visible(True)
vals = ax.get_xticks()
ax.tick_params(axis='both', which='major', labelsize=15)
for tick in vals:
ax.axvline(x=tick, linestyle='dashed', alpha=0.7, color='#808080', zorder=0, linewidth=0.5)
plt.tight_layout()
plt.savefig(shap_plot, dpi=300)
rc('text', usetex=False)
# Define a function to convert log-odds to probabilities for the multi-class case
def logodds_to_proba(logodds):
return np.exp(logodds)/np.exp(logodds).sum()
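`logodds_to_proba` above is the softmax over a single event's vector of raw scores. A quick check that it returns a proper probability distribution; the variant below subtracts the maximum first, an optional numerical stabilisation that leaves the result mathematically unchanged:

```python
import numpy as np

def logodds_to_proba(logodds):
    # Softmax with max-subtraction to avoid overflow for large scores.
    z = np.asarray(logodds, dtype=float)
    e = np.exp(z - z.max())
    return e / e.sum()

p = logodds_to_proba([2.0, 1.0, 0.1])
assert np.isclose(p.sum(), 1.0)   # proper probability distribution
assert p[0] > p[1] > p[2]         # ordering of the scores is preserved
```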
########### Calculate the NLO and LO cross sections and the K-factor
v=246.0
MT= 173.2
MH=125.1
alps= 0.1198
###################################################
def XSNLO(c6,cg,ckin,cu):
L=1e6
cH=c6/L
cuH=cu/L
cHkin=ckin/L
cHG=cg/L/16/np.pi**2
return 1.000-3.9866459999999995*cHkin*v**2+8.949195999999999*cHkin**2*v**4 +((3.1233253007332573 -14.981140447025783*cHkin*v**2)*cuH*v**3)/MT+(cH*(1.7820680000000002-4.268759999999999*cHkin*v**2)*v**4)/MH**2 +(6.637391499999998*cuH**2*v**6)/MT**2 +(2.545778159529616*cH*cuH*v**7)/(MH**2*MT) + (1.368992*cH**2*v**8)/MH**4+ (3106.4364774009646*cHG**2*v**4)/alps**2 - (74.20004538081712*cH*cHG*v**6)/(MH**2*alps)+(cHG*v**2*((-52.47197151255347 + 236.21675299329388*cHkin)*MT -188.14684359273673*cuH*v**3))/(MT*alps)
###################################################
def XSLO(c6,cg,ckin,cu):
L=1e6
cH=c6/L
cuH=cu/L
cHkin=ckin/L
cHG=cg/L/16/np.pi**2
return 1.-3.2954600000000003*cHkin*v**2 +6.722859999999997*cHkin**2*v**4+((2.6399053858140427- 11.480309927700123*cHkin*v**2)*cuH*v**3)/MT+(cH*(1.6048999999999998-3.015439999999998*cHkin*v**2)*v**4)/MH**2+(5.18126*cuH**2*v**6)/MT**2+(1.753815736173558*cH*cuH*v**7)/(MH**2*MT)+(1.11256*cH**2*v**8)/MH**4+(3050.5592304475026*cHG**2*v**4)/alps**2-(60.40684513611929*cH*cHG*v**6)/(MH**2*alps)+(cHG*v**2*((-53.14725283220398+186.0102075680209*cHkin)*MT-154.7657478815316*cuH*v**3))/(MT*alps)
####################################################
def Kfac(kappa_lambda):
L=1e6
c6 = L*MH**2/v**4*0.5*(1-kappa_lambda)
num =XSNLO(c6,0,0,0)
den=XSLO(c6,0,0,0)
return (num/den)
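`Kfac` maps `kappa_lambda` to the Wilson coefficient `c6` before taking the NLO/LO ratio; note that the SM point `kappa_lambda = 1` gives `c6 = 0`, so both polynomials reduce to their constant terms. A minimal check of the mapping (constants copied from above; `c6_of_kappa` is a hypothetical helper name for this sketch):

```python
v = 246.0
MH = 125.1
L = 1e6

def c6_of_kappa(kappa_lambda):
    # Same mapping as used inside Kfac above.
    return L * MH**2 / v**4 * 0.5 * (1 - kappa_lambda)

assert c6_of_kappa(1.0) == 0.0                          # SM point: EFT coefficient vanishes
assert c6_of_kappa(0.0) > 0.0 and c6_of_kappa(2.0) < 0.0  # sign flips around the SM
```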
##############################
def GetKappaStat(confusion_tot):
d= confusion_tot.shape[0]
norm = np.sum(confusion_tot)
pa = np.sum([confusion_tot[i,i] for i in range(d) ])/norm
p = np.empty(d)
for i in range(d):
p[i] = np.sum(confusion_tot[i,:])/norm * np.sum(confusion_tot[:,i])/norm
pe= np.sum(p)
return (pa-pe)/(1-pe)
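`GetKappaStat` implements Cohen's kappa on the confusion matrix: the observed agreement `pa` corrected for the chance agreement `pe` expected from the row/column marginals. Two sanity checks on toy matrices:

```python
import numpy as np

def cohen_kappa(confusion):
    # Same statistic as GetKappaStat above, written compactly.
    d = confusion.shape[0]
    norm = confusion.sum()
    pa = np.trace(confusion) / norm                       # observed agreement
    pe = sum(confusion[i, :].sum() * confusion[:, i].sum()
             for i in range(d)) / norm**2                 # chance agreement
    return (pa - pe) / (1 - pe)

assert np.isclose(cohen_kappa(np.diag([5, 5])), 1.0)     # perfect classifier
assert np.isclose(cohen_kappa(np.full((2, 2), 2)), 0.0)  # no better than chance
```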
# -
# ## 14 TeV Analysis
# ### Load the data
# + pycharm={"is_executing": false, "name": "#%%\n"}
# dlist = ['dphibb', 'etaaa', 'ptb2', 'drbamin', 'etaa2', 'etab1', 'etaa1', 'nbjet', 'etab2']
# dlist = ['etaaa', 'ptb2', 'drbamin', 'etaa2', 'etab1', 'etaa1', 'mbbh', 'met', 'drbamin', 'njjet', 'etab2']
# dlist = ['ptb2', 'etaaa', 'etaa1', 'met', 'pta1', 'etab1', 'etaa2', 'dphibb', 'dphiba1', 'etab2']
# dlist = ['ptb2', 'etab2', 'nbjet', 'etaaa', 'dphibb', 'drba1', 'etaa1', 'etab1']
#dlist = ['ptb2', 'etab2', 'nbjet', 'etaaa', 'dphibb', 'drba1', 'etaa1', 'etab1', 'dphiba1', 'etaa2', 'mb1h', 'mbbh', 'drbamin', 'pta2']
#dlist = ['ptb2', 'etab2', 'nbjet','njjet' ,'etaaa', 'dphibb', 'drba1', 'etaa1', 'etab1', 'dphiba1', 'drbamin', 'pta2']
# names = [r'$n_{jet}$', r'$p_T^{b_1}$', r'$p_T^{\gamma_1}$', r'$p_T^{\gamma_2}$', r'$p_T^{\gamma\gamma}$', r'$m_{bb}$', r'$m_{\gamma\gamma}$', r'$m_{b_1h}$', r'$m_{bbh}$',
# r'$H_T$', r'$\delta R_{b\gamma_1}$', r'$\delta\phi_{b\gamma_1}$']
#names = [ r'$p_T^{b_2}$', r'$\eta_{b_2}$',r'$n_{bjet}$',r'$n_{jet}$',r'$\eta_{\gamma_1}$',]
# 14 TeV
dlist =[]
dlist=['ptb2','nbjet','dphiba1','etab1','pta2','drba1','dphibb','etab2']
df_hhk8, df_hhk8_test, weight_hhk8 = fileparser("../simulations/HL-LHC/klp8.csv", dlist, sample=40000, weights=True)
df_hhsm, df_hhsm_test, weight_hhsm = fileparser("../simulations/HL-LHC/hhsm.csv", dlist, sample=40000, weights=True)
df_kd, df_kd_test, weight_kd = fileparser("../simulations/HL-LHC/kd.csv", dlist, sample=40000, weights=True)
df_ku, df_ku_test, weight_ku = fileparser("../simulations/HL-LHC/ku.csv", dlist, sample=40000, weights=True)
df_tth_lep, df_tth_test_lep, weight_tth_lep = fileparser("../simulations/HL-LHC/ttH_lep.csv", dlist, sample=20000, weights=True)
df_tth_full, df_tth_test_full, weight_tth_full = fileparser("../simulations/HL-LHC/ttH_full.csv", dlist, sample=60000, weights=True)
df_yb2, df_yb2_test, weight_yb2 = fileparser("../simulations/HL-LHC/yb2.csv", dlist, sample=3890*6)
df_ybyt, df_ybyt_test, weight_ybyt = fileparser("../simulations/HL-LHC/ybyt.csv", dlist, sample=500*6)
df_yt2, df_yt2_test, weight_yt2 = fileparser("../simulations/HL-LHC/yt2.csv", dlist, sample=7360*6)
df_zh, df_zh_test, weight_zh = fileparser("../simulations/HL-LHC/zh.csv", dlist, sample=4960*6)
df_bbh = pd.concat([df_yb2, df_ybyt, df_yt2, df_zh])
df_bbh_test = pd.concat([df_yb2_test, df_ybyt_test, df_yt2_test, df_zh_test])
df_bbh_test = df_bbh_test.sample(frac=0.5).reset_index(drop=True)
df_bbh['class'] = 1
df_bbh_test['class'] = 1
weight_bbh = int(weight_yb2*1.5 - weight_ybyt*1.9 + weight_yt2*2.5 + weight_zh*1.3)
df_bbxaa, df_bbxaa_test, weight_bbxaa = fileparser("../simulations/HL-LHC/bbxaa.csv", dlist, sample=100000)
# choose the decay of the W
df_tth =df_tth_full
df_tth_test=df_tth_test_full
weight_tth=weight_tth_full
df_hhsm['class'] = 3 # voodoo to fix the class for hhsm from 5
df_hhsm_test['class'] = 3 # voodoo to fix the class for hhsm from 5
df_tth['class'] = 2 # voodoo to fix the class from 4
df_tth_test['class'] = 2 # voodoo to fix the class from 4
names = list(df_bbxaa.columns)[:-2]
print("No. of hh klambda=8 events: train = {}, test = {}".format(df_hhk8.shape[0],df_hhk8_test.shape[0]))
print("No. of hhsm events: train = {}, test = {}".format(df_hhsm.shape[0],df_hhsm_test.shape[0]))
print("No. of ku=1600 events: train = {}, test = {}".format(df_ku.shape[0],df_ku_test.shape[0]))
print("No. of kd=800 events: train = {}, test = {}".format(df_kd.shape[0],df_kd_test.shape[0]))
print("No. of tth events: train = {}, test = {}".format(df_tth.shape[0],df_tth_test.shape[0]))
print("No. of bbh events: train = {}, test = {}".format(df_bbh.shape[0],df_bbh_test.shape[0]))
print("No. of bbxaa events: train = {}, test = {}".format(df_bbxaa.shape[0],df_bbxaa_test.shape[0]))
# -
# ## The hh SM analysis
#
# - Concatenate the channel datasets
# - Run the BDT and make the SHAP plot
# - Check the accuracy of the classifier
# - Make the discriminator plot
channels = [df_hhsm, df_bbh, df_tth, df_bbxaa]
nchannels = len(channels)
df_train = pd.concat(channels, ignore_index=True)
df_train = df_train.sample(frac=1).reset_index(drop=True)
# + pycharm={"is_executing": false, "name": "#%%\n"}
class_names = [r'$bb\gamma\gamma$', r'$b\bar{b}h$', r'$t\bar{t}h$', r'$hh^{SM}$']
filename = 'models/HL-LHC-BDT/hbb-BDT-4class-hhsm-fullW-1btag.pickle.dat' ## The pickle model store if necessary.
shap_plot = '../plots/shap-bbxaa-bbh-tthfull-hhsm.pdf'
classifier, x_test, y_test, shap_values_4, X_shap_4 = runBDT(df_train, filename)
abs_shap(shap_values_4, X_shap_4, shap_plot, names=names, class_names=class_names, cmp=cmp_4)
# + pycharm={"is_executing": false, "name": "#%%\n"}
disc = 3
enc=1
hhsm_p = pd.DataFrame(classifier.predict_proba(df_hhsm_test.drop(columns=['class', 'weight']).values)[:,disc])
print('Accuracy Score for hhsm: {:4.2f}% '.format(100*metrics.accuracy_score(df_hhsm_test['class'].values, classifier.predict(df_hhsm_test.drop(columns=['class', 'weight']).values))))
hhsm_p['weight'] = df_hhsm_test['weight'].values
tth_p = pd.DataFrame(classifier.predict_proba(df_tth_test.drop(columns=['class', 'weight']).values)[:,disc])
print('Accuracy Score for tth: {:4.2f}% '.format(100*metrics.accuracy_score(df_tth_test['class'].values, classifier.predict(df_tth_test.drop(columns=['class', 'weight']).values))))
tth_p['weight'] = df_tth_test['weight'].values
bbh_p = pd.DataFrame(classifier.predict_proba(df_bbh_test.drop(columns=['class', 'weight']).values)[:,disc])
print('Accuracy Score for bbh: {:4.2f}% '.format(100*metrics.accuracy_score(df_bbh_test['class'].values, classifier.predict(df_bbh_test.drop(columns=['class', 'weight']).values))))
bbh_p['weight'] = df_bbh_test['weight'].values
bbxaa_p = pd.DataFrame(classifier.predict_proba(df_bbxaa_test.drop(columns=['class', 'weight']).values)[:,disc])
print('Accuracy Score for bbxaa: {:4.2f}% '.format(100*metrics.accuracy_score(df_bbxaa_test['class'].values, classifier.predict(df_bbxaa_test.drop(columns=['class', 'weight']).values))))
bbxaa_p['weight'] = df_bbxaa_test['weight'].values
hhsm_pred = hhsm_p.sample(n=round(weight_hhsm*1.72*enc), replace=True, random_state=seed).reset_index(drop=True)
tth_pred = tth_p.sample(n=round(weight_tth*1.2*enc), replace=True, random_state=seed).reset_index(drop=True)
bbh_pred = bbh_p.sample(n=round(weight_bbh*enc), replace=True, random_state=seed).reset_index(drop=True)
bbxaa_pred = bbxaa_p.sample(n=round(weight_bbxaa*1.5*enc), replace=True, random_state=seed).reset_index(drop=True)
plt.figure(figsize=(6,4))
ax = plt.gca()
ax.set_prop_cycle(color=(col[0:4])[::-1])
rc('text', usetex=True)
plt.rcParams['text.latex.preamble'] = r"\usepackage{amsmath}"
sns.distplot(hhsm_pred[0], kde=False, bins=np.arange(0, 1 + 0.04, 0.04),
hist_kws={'alpha': 0.8, 'histtype': 'step', 'linewidth': 3, 'weights': [1/enc]*len(hhsm_pred[0])}, label=r'$hh^{SM}$')
sns.distplot(tth_pred[0], kde=False, bins=np.arange(0, 1 + 0.04, 0.04),
hist_kws={'alpha': 0.8, 'histtype': 'step', 'linewidth': 3, 'weights': [1/enc]*len(tth_pred[0])}, label=r'$t\bar{t}h$')
sns.distplot(bbh_pred[0], kde=False, bins=np.arange(0, 1 + 0.04, 0.04),
hist_kws={'alpha': 0.8, 'histtype': 'step', 'linewidth': 3, 'weights': [1/enc]*len(bbh_pred[0])}, label=r'$b\bar{b}h$')
sns.distplot(bbxaa_pred[0], kde=False, bins=np.arange(0, 1 + 0.04, 0.04),
hist_kws={'alpha': 0.8, 'histtype': 'step', 'linewidth': 3, 'weights': [1/enc]*len(bbxaa_pred[0])}, label=r'$bb\gamma\gamma$')
plt.legend(loc='upper center', fontsize=14)
plt.grid(linestyle='dashed', alpha=0.4, color='#808080')
#ax = plt.gca()
ax.tick_params(axis='both', which='major', labelsize=14)
plt.xlabel(r'$p(hh^{SM})$', fontsize=14)
plt.ylabel(r'$N$', fontsize=14)
plt.yscale('log')
plt.tight_layout()
plt.savefig('../plots/bbxaa-bbh-hhsm-Wfull-BDT-dist.pdf', dpi=300)
# +
df_array = [df_bbxaa_test, df_bbh_test, df_tth_test, df_hhsm_test]
weight_array = [weight_bbxaa*1.5, weight_bbh, weight_tth*1.2, weight_hhsm*1.72]
ps_exp_class = collections.Counter(classifier.predict(pd.concat([df_array[3].iloc[:,:-2].sample(n=round(weight_array[3]), random_state=seed, replace=True),
df_array[2].iloc[:,:-2].sample(n=round(weight_array[2]), random_state=seed, replace=True),
df_array[1].iloc[:,:-2].sample(n=round(weight_array[1]), random_state=seed, replace=True),
df_array[0].iloc[:,:-2].sample(n=round(weight_array[0]), random_state=seed, replace=True)]).values))
nevents_hhsm, sig_hhsm = get_mclass(3, df_array, weight_array, ps_exp_class)
nevents_tth, sig_tth = get_mclass(2, df_array, weight_array, ps_exp_class)
nevents_bbh, sig_bbh = get_mclass(1, df_array, weight_array, ps_exp_class)
nevents_bbxaa, sig_bbxaa = get_mclass(0, df_array, weight_array, ps_exp_class)
confusion = np.column_stack((nevents_hhsm, nevents_tth, nevents_bbh, nevents_bbxaa))
# -
# #### The Confusion Matrix, total events count for each channel and the signal significance
# +
confusion_tot = np.round(np.array([confusion[3], confusion[2], confusion[1], confusion[0]])).astype(int)
event_total = np.array([[np.sum(confusion_tot[i])] for i in range(confusion_tot.shape[0])])
significance = np.append(np.array([np.abs(confusion_tot[i,i])/np.sqrt(np.sum(confusion_tot[:,i])) for i in range(confusion_tot.shape[0])]), 0)
confusion_tab = np.vstack((np.append(confusion_tot, event_total, axis=1), significance))
df_conf = pd.DataFrame(confusion_tab, [r'$hh^{SM}$', r'$t\bar{t}h$', r'$b\bar{b}h$', r'$bb\gamma\gamma$', r'$\sigma$'])
df_conf.columns = [r'$hh^{SM}$', r'$t\bar{t}h$', r'$b\bar{b}h$', r'$bb\gamma\gamma$', 'total']
print(df_conf.to_latex(escape=False))
kappa_stat = GetKappaStat(confusion_tot)
print("kappa statistics = {:4.3f}".format(kappa_stat))
# -
# ## The BSM vs. SM analysis
#
# - Add the two datasets
# - Run the BDT and make the SHAP plot
# - Check the accuracy of the classifier
# - Make the discriminator plot
df_ku['class'] = 0
df_ku_test['class'] = 0
df_kd['class'] = 1
df_kd_test['class'] = 1
channels = [df_ku,df_kd]
nchannels = len(channels)
df_train = pd.concat(channels, ignore_index=True)
df_train = df_train.sample(frac=1).reset_index(drop=True)
# + pycharm={"is_executing": false, "name": "#%%\n"}
class_names = [ r'$\kappa_u$',r'$\kappa_d$']
filename = 'models/HL-LHC-BDT/hbb-BDT-2class-1btag-kappaukappad.pickle.dat' ## The pickle model store if necessary.
shap_plot = '../plots/shap-kappa_1st_gen.pdf'
classifier, x_test, y_test, shap_values_3, X_shap_3 = runBDT(df_train, filename)
abs_shap(shap_values_3, X_shap_3, shap_plot, names=names, class_names=class_names, cmp=cmp_2)
# + pycharm={"is_executing": false, "name": "#%%\n"}
classifier = pickle.load(open('models/HL-LHC-BDT/hbb-BDT-2class-1btag-kappaukappad.pickle.dat', 'rb')) ## If model is stored
comb_test = pd.concat([ df_ku_test.iloc[:,:-1].sample(n=20000, random_state=seed, replace=True)\
,df_kd_test.iloc[:,:-1].sample(n=20000, random_state=seed, replace=True)])
print('Accuracy Score: {:4.2f}% '.format(100*metrics.accuracy_score(comb_test['class'].values, classifier.predict(comb_test.drop(columns=['class']).values))))
# + pycharm={"is_executing": false, "name": "#%%\n"}
disc = 1
enc=10
kd_p = pd.DataFrame(classifier.predict_proba(df_kd_test.drop(columns=['class', 'weight']).values)[:,disc])
print('Accuracy Score for kd: {:4.2f}% '.format(100*metrics.accuracy_score(df_kd_test['class'].values, classifier.predict(df_kd_test.drop(columns=['class', 'weight']).values))))
kd_p['weight'] = df_kd_test['weight'].values
ku_p = pd.DataFrame(classifier.predict_proba(df_ku_test.drop(columns=['class', 'weight']).values)[:,disc])
print('Accuracy Score for ku: {:4.2f}% '.format(100*metrics.accuracy_score(df_ku_test['class'].values, classifier.predict(df_ku_test.drop(columns=['class', 'weight']).values))))
ku_p['weight'] = df_ku_test['weight'].values
#hhsm_p = pd.DataFrame(classifier.predict_proba(df_hhsm_test.drop(columns=['class', 'weight']).values)[:,disc])
#print('Accuracy Score for hhsm: {:4.2f}% '.format(100*metrics.accuracy_score(df_hhsm_test['class'].values, classifier.predict(df_hhsm_test.drop(columns=['class', 'weight']).values))))
#hhsm_p['weight'] = df_hhsm_test['weight'].values
kd_pred = kd_p.sample(n=round(weight_kd*1.28*enc), replace=True, random_state=seed).reset_index(drop=True)
ku_pred = ku_p.sample(n=round(weight_kd*1.28*enc), replace=True, random_state=seed).reset_index(drop=True)
#hhsm_pred = hhsm_p.sample(n=round(weight_kd*1.28*enc), replace=True, random_state=seed).reset_index(drop=True)
#hhsm_pred = hhsm_p.sample(n=round(weight_hhsm*1.72*enc), replace=True, random_state=seed).reset_index(drop=True)
plt.figure(figsize=(6,4))
ax = plt.gca()
ax.set_prop_cycle(color=(col[0:2])[::-1])
rc('text', usetex=True)
plt.rcParams['text.latex.preamble'] = r"\usepackage{amsmath}"
sns.distplot(kd_pred[0], kde=False, bins=np.arange(0, 1 + 0.04, 0.04),
hist_kws={'alpha': 0.8, 'histtype': 'step', 'linewidth': 3, 'weights': [1/enc]*len(kd_pred[0])}, label=r'$\kappa_d$')
sns.distplot(ku_pred[0], kde=False, bins=np.arange(0, 1 + 0.04, 0.04),
hist_kws={'alpha': 0.8, 'histtype': 'step', 'linewidth': 3, 'weights': [1/enc]*len(ku_pred[0])}, label=r'$\kappa_u$')
#sns.distplot(hhsm_pred[0], kde=False, bins=np.arange(0, 1 + 0.04, 0.04),
#hist_kws={'alpha': 0.8, 'histtype': 'step', 'linewidth': 3, 'weights': [1/enc]*len(hhsm_pred[0])}, label=r'$hh^{SM}$')
plt.legend(loc='best', fontsize=14)
plt.grid(linestyle='dashed', alpha=0.4, color='#808080')
ax.tick_params(axis='both', which='major', labelsize=14)
plt.xlabel(r'$p(\kappa_q)$', fontsize=14)
plt.ylabel(r'$N$', fontsize=14)
# plt.yscale('log')
plt.tight_layout()
plt.savefig('../plots/hhsm-kukd-BDT-dist.pdf', dpi=300)
# +
df_array = [ df_ku_test,df_kd_test]
# Make all the flavours have the same cross-section as the SM
weight_array = [weight_hhsm*1.72,weight_hhsm*1.72]
ps_exp_class = collections.Counter(classifier.predict(pd.concat([df_array[1].iloc[:,:-2].sample(n=round(weight_array[1]), random_state=seed),
df_array[0].iloc[:,:-2].sample(n=round(weight_array[0]), random_state=seed)]).values))
nevents_kd, sig_kd = get_mclass(1, df_array, weight_array, ps_exp_class)
nevents_ku, sig_ku = get_mclass(0, df_array, weight_array, ps_exp_class)
#nevents_hhsm, sig_hhsm = get_mclass(0, df_array, weight_array, ps_exp_class)
confusion = np.column_stack((nevents_ku, nevents_kd))
# +
confusion_tot = np.round(np.array([confusion[1], confusion[0]])).astype(int)
event_total = np.array([[np.sum(confusion_tot[i])] for i in range(confusion_tot.shape[0])])
significance = np.append(np.array([np.abs(confusion_tot[i,i])/np.sqrt(np.sum(confusion_tot[:,i])) for i in range(confusion_tot.shape[0])]), 0)
confusion_tab = np.vstack((np.append(confusion_tot, event_total, axis=1), significance))
df_conf = pd.DataFrame(confusion_tab, [r'$\kappa_{u}$', r'$\kappa_{d}$', r'$\sigma$'])
df_conf.columns = [r'$\kappa_{u}$', r'$\kappa_{d}$', 'total']
print(df_conf.to_latex(escape=False))
kappa_stat = GetKappaStat(confusion_tot)
print("kappa statistics = {:4.3f}".format(kappa_stat))
# +
# down analysis
df_hhsm['class'] = 3
df_hhsm_test['class'] = 3
df_kd['class'] = 4
df_kd_test['class'] = 4
channels = [df_hhsm, df_bbh, df_tth, df_bbxaa, df_kd]
nchannels = len(channels)
df_train = pd.concat(channels, ignore_index=True)
df_train = df_train.sample(frac=1).reset_index(drop=True)
# + pycharm={"is_executing": false, "name": "#%%\n"}
class_names = [r'$bb\gamma\gamma$', r'$b\bar{b}h$', r'$t\bar{t}h$', r'$hh^{SM}$', r'$hh^{\kappa_d}$']
filename = 'models/HL-LHC-BDT/hbb-BDT-4class-tthfullW-hhsm-kd-1btag.pickle.dat' ## The pickle model store if necessary.
shap_plot = '../plots/shap-bbxaa-bbh-tthfullW-hhsm-kd.pdf'
classifier, x_test, y_test, shap_values_5, X_shap_5 = runBDT(df_train, filename)
abs_shap(shap_values_5, X_shap_5, shap_plot, names=names, class_names=class_names, cmp=cmp_5)
# + pycharm={"is_executing": false, "name": "#%%\n"}
disc = 4
enc=1
#ku_p = pd.DataFrame(classifier.predict_proba(df_ku_test.drop(columns=['class', 'weight']).values)[:,disc])
#print('Accuracy Score for ku: {:4.2f}% '.format(100*metrics.accuracy_score(df_ku_test['class'].values, classifier.predict(df_ku_test.drop(columns=['class', 'weight']).values))))
#ku_p['weight'] = df_ku_test['weight'].values
kd_p = pd.DataFrame(classifier.predict_proba(df_kd_test.drop(columns=['class', 'weight']).values)[:,disc])
print('Accuracy Score for kd: {:4.2f}% '.format(100*metrics.accuracy_score(df_kd_test['class'].values, classifier.predict(df_kd_test.drop(columns=['class', 'weight']).values))))
kd_p['weight'] = df_kd_test['weight'].values
hhsm_p = pd.DataFrame(classifier.predict_proba(df_hhsm_test.drop(columns=['class', 'weight']).values)[:,disc])
print('Accuracy Score for hhsm: {:4.2f}% '.format(100*metrics.accuracy_score(df_hhsm_test['class'].values, classifier.predict(df_hhsm_test.drop(columns=['class', 'weight']).values))))
hhsm_p['weight'] = df_hhsm_test['weight'].values
tth_p = pd.DataFrame(classifier.predict_proba(df_tth_test.drop(columns=['class', 'weight']).values)[:,disc])
print('Accuracy Score for tth: {:4.2f}% '.format(100*metrics.accuracy_score(df_tth_test['class'].values, classifier.predict(df_tth_test.drop(columns=['class', 'weight']).values))))
tth_p['weight'] = df_tth_test['weight'].values
bbh_p = pd.DataFrame(classifier.predict_proba(df_bbh_test.drop(columns=['class', 'weight']).values)[:,disc])
print('Accuracy Score for bbh: {:4.2f}% '.format(100*metrics.accuracy_score(df_bbh_test['class'].values, classifier.predict(df_bbh_test.drop(columns=['class', 'weight']).values))))
bbh_p['weight'] = df_bbh_test['weight'].values
bbxaa_p = pd.DataFrame(classifier.predict_proba(df_bbxaa_test.drop(columns=['class', 'weight']).values)[:,disc])
print('Accuracy Score for bbxaa: {:4.2f}% '.format(100*metrics.accuracy_score(df_bbxaa_test['class'].values, classifier.predict(df_bbxaa_test.drop(columns=['class', 'weight']).values))))
bbxaa_p['weight'] = df_bbxaa_test['weight'].values
kd_pred = kd_p.sample(n=round(weight_kd*1.28*enc), replace=True, random_state=seed).reset_index(drop=True)
hhsm_pred = hhsm_p.sample(n=round(weight_hhsm*1.72*enc), replace=True, random_state=seed).reset_index(drop=True)
tth_pred = tth_p.sample(n=round(weight_tth*1.2*enc), replace=True, random_state=seed).reset_index(drop=True)
bbh_pred = bbh_p.sample(n=round(weight_bbh*enc), replace=True, random_state=seed).reset_index(drop=True)
bbxaa_pred = bbxaa_p.sample(n=round(weight_bbxaa*1.5*enc), replace=True, random_state=seed).reset_index(drop=True)
plt.figure(figsize=(6,4))
ax = plt.gca()
ax.set_prop_cycle(color=(col[0:5])[::-1])
rc('text', usetex=True)
plt.rcParams['text.latex.preamble'] = r"\usepackage{amsmath}"
sns.distplot(kd_pred[0], kde=False, bins=np.arange(0, 1 + 0.04, 0.04),
hist_kws={'alpha': 0.8, 'histtype': 'step', 'linewidth': 3, 'weights': [1/enc]*len(kd_pred[0])}, label=r'$hh^{\kappa_d}$')
sns.distplot(hhsm_pred[0], kde=False, bins=np.arange(0, 1 + 0.04, 0.04),
hist_kws={'alpha': 0.8, 'histtype': 'step', 'linewidth': 3, 'weights': [1/enc]*len(hhsm_pred[0])}, label=r'$hh^{SM}$')
sns.distplot(tth_pred[0], kde=False, bins=np.arange(0, 1 + 0.04, 0.04),
hist_kws={'alpha': 0.8, 'histtype': 'step', 'linewidth': 3, 'weights': [1/enc]*len(tth_pred[0])}, label=r'$t\bar{t}h$')
sns.distplot(bbh_pred[0], kde=False, bins=np.arange(0, 1 + 0.04, 0.04),
hist_kws={'alpha': 0.8, 'histtype': 'step', 'linewidth': 3, 'weights': [1/enc]*len(bbh_pred[0])}, label=r'$b\bar{b}h$')
sns.distplot(bbxaa_pred[0], kde=False, bins=np.arange(0, 1 + 0.04, 0.04),
hist_kws={'alpha': 0.8, 'histtype': 'step', 'linewidth': 3, 'weights': [1/enc]*len(bbxaa_pred[0])}, label=r'$bb\gamma\gamma$')
plt.legend(loc='upper center', fontsize=14)
plt.grid(linestyle='dashed', alpha=0.4, color='#808080')
ax.tick_params(axis='both', which='major', labelsize=14)
plt.xlabel(r'$p(hh)$', fontsize=14)
plt.ylabel(r'$N$', fontsize=14)
plt.yscale('log')
plt.tight_layout()
plt.savefig('../plots/bbxaa-bbh-hhsm-Wfull-BDT-dist-kd.pdf', dpi=300)
# +
df_array = [df_bbxaa_test, df_bbh_test, df_tth_test, df_hhsm_test, df_kd_test]
weight_array = [weight_bbxaa*1.5, weight_bbh, weight_tth*1.2, weight_hhsm*1.72, weight_kd*1.28]
ps_exp_class = collections.Counter(classifier.predict(pd.concat([df_array[4].iloc[:,:-2].sample(n=round(weight_array[4]), random_state=seed, replace=True),
df_array[3].iloc[:,:-2].sample(n=round(weight_array[3]), random_state=seed, replace=True),
df_array[2].iloc[:,:-2].sample(n=round(weight_array[2]), random_state=seed, replace=True),
df_array[1].iloc[:,:-2].sample(n=round(weight_array[1]), random_state=seed, replace=True),
df_array[0].iloc[:,:-2].sample(n=round(weight_array[0]), random_state=seed, replace=True)]).values))
nevents_kd, sig_kd = get_mclass(4, df_array, weight_array, ps_exp_class)
nevents_hhsm, sig_hhsm = get_mclass(3, df_array, weight_array, ps_exp_class)
nevents_tth, sig_tth = get_mclass(2, df_array, weight_array, ps_exp_class)
nevents_bbh, sig_bbh = get_mclass(1, df_array, weight_array, ps_exp_class)
nevents_bbxaa, sig_bbxaa = get_mclass(0, df_array, weight_array, ps_exp_class)
confusion = np.column_stack((nevents_kd, nevents_hhsm, nevents_tth, nevents_bbh, nevents_bbxaa))
# +
confusion_tot = np.round(np.array([confusion[4],confusion[3], confusion[2], confusion[1], confusion[0]])).astype(int)
event_total = np.array([[np.sum(confusion_tot[i])] for i in range(confusion_tot.shape[0])])
significance = np.append(np.array([np.abs(confusion_tot[i,i])/np.sqrt(np.sum(confusion_tot[:,i])) for i in range(confusion_tot.shape[0])]), 0)
confusion_tab = np.vstack((np.append(confusion_tot, event_total, axis=1), significance))
df_conf = pd.DataFrame(confusion_tab, [r'$\kappa_d$',r'$hh^{SM}$', r'$t\bar{t}h$', r'$b\bar{b}h$', r'$bb\gamma\gamma$', r'$\sigma$'])
df_conf.columns = [r'$\kappa_d$',r'$hh^{SM}$', r'$t\bar{t}h$', r'$b\bar{b}h$', r'$bb\gamma\gamma$', 'total']
print(df_conf.to_latex(escape=False))
kappa_stat = GetKappaStat(confusion_tot)
print("kappa statistics = {:4.3f}".format(kappa_stat))
# +
# up analysis
df_hhsm['class'] = 3
df_hhsm_test['class'] = 3
df_ku['class'] = 4
df_ku_test['class'] = 4
channels = [df_hhsm, df_bbh, df_tth, df_bbxaa, df_ku]
nchannels = len(channels)
df_train = pd.concat(channels, ignore_index=True)
df_train = df_train.sample(frac=1).reset_index(drop=True)
##########
class_names = [r'$bb\gamma\gamma$', r'$b\bar{b}h$', r'$t\bar{t}h$', r'$hh^{SM}$', r'$hh^{\kappa_u}$']
filename = 'models/HL-LHC-BDT/hbb-BDT-4class-tthfullW-hhsm-ku-1btag.pickle.dat' ## The pickle model store if necessary.
shap_plot = '../plots/shap-bbxaa-bbh-tthfullW-hhsm-ku.pdf'
classifier, x_test, y_test, shap_values_5, X_shap_5 = runBDT(df_train, filename)
abs_shap(shap_values_5, X_shap_5, shap_plot, names=names, class_names=class_names, cmp=cmp_5)
# -
# +
disc = 4
enc=1
ku_p = pd.DataFrame(classifier.predict_proba(df_ku_test.drop(columns=['class', 'weight']).values)[:,disc])
print('Accuracy Score for ku: {:4.2f}% '.format(100*metrics.accuracy_score(df_ku_test['class'].values, classifier.predict(df_ku_test.drop(columns=['class', 'weight']).values))))
ku_p['weight'] = df_ku_test['weight'].values
hhsm_p = pd.DataFrame(classifier.predict_proba(df_hhsm_test.drop(columns=['class', 'weight']).values)[:,disc])
print('Accuracy Score for hhsm: {:4.2f}% '.format(100*metrics.accuracy_score(df_hhsm_test['class'].values, classifier.predict(df_hhsm_test.drop(columns=['class', 'weight']).values))))
hhsm_p['weight'] = df_hhsm_test['weight'].values
tth_p = pd.DataFrame(classifier.predict_proba(df_tth_test.drop(columns=['class', 'weight']).values)[:,disc])
print('Accuracy Score for tth: {:4.2f}% '.format(100*metrics.accuracy_score(df_tth_test['class'].values, classifier.predict(df_tth_test.drop(columns=['class', 'weight']).values))))
tth_p['weight'] = df_tth_test['weight'].values
bbh_p = pd.DataFrame(classifier.predict_proba(df_bbh_test.drop(columns=['class', 'weight']).values)[:,disc])
print('Accuracy Score for bbh: {:4.2f}% '.format(100*metrics.accuracy_score(df_bbh_test['class'].values, classifier.predict(df_bbh_test.drop(columns=['class', 'weight']).values))))
bbh_p['weight'] = df_bbh_test['weight'].values
bbxaa_p = pd.DataFrame(classifier.predict_proba(df_bbxaa_test.drop(columns=['class', 'weight']).values)[:,disc])
print('Accuracy Score for bbxaa: {:4.2f}% '.format(100*metrics.accuracy_score(df_bbxaa_test['class'].values, classifier.predict(df_bbxaa_test.drop(columns=['class', 'weight']).values))))
bbxaa_p['weight'] = df_bbxaa_test['weight'].values
ku_pred = ku_p.sample(n=round(weight_ku*1.28*enc), replace=True, random_state=seed).reset_index(drop=True)
hhsm_pred = hhsm_p.sample(n=round(weight_hhsm*1.72*enc), replace=True, random_state=seed).reset_index(drop=True)
tth_pred = tth_p.sample(n=round(weight_tth*1.2*enc), replace=True, random_state=seed).reset_index(drop=True)
bbh_pred = bbh_p.sample(n=round(weight_bbh*enc), replace=True, random_state=seed).reset_index(drop=True)
bbxaa_pred = bbxaa_p.sample(n=round(weight_bbxaa*1.5*enc), replace=True, random_state=seed).reset_index(drop=True)
plt.figure(figsize=(6,4))
ax = plt.gca()
ax.set_prop_cycle(color=(col[0:5])[::-1])
rc('text', usetex=True)
plt.rcParams['text.latex.preamble'] = [r"\usepackage{amsmath}"]
sns.distplot(ku_pred[0], kde=False, bins=np.arange(0, 1 + 0.04, 0.04),
hist_kws={'alpha': 0.8, 'histtype': 'step', 'linewidth': 3, 'weights': [1/enc]*len(ku_pred[0])}, label=r'$hh^{\kappa_u}$')
sns.distplot(hhsm_pred[0], kde=False, bins=np.arange(0, 1 + 0.04, 0.04),
hist_kws={'alpha': 0.8, 'histtype': 'step', 'linewidth': 3, 'weights': [1/enc]*len(hhsm_pred[0])}, label=r'$hh^{SM}$')
sns.distplot(tth_pred[0], kde=False, bins=np.arange(0, 1 + 0.04, 0.04),
hist_kws={'alpha': 0.8, 'histtype': 'step', 'linewidth': 3, 'weights': [1/enc]*len(tth_pred[0])}, label=r'$t\bar{t}h$')
sns.distplot(bbh_pred[0], kde=False, bins=np.arange(0, 1 + 0.04, 0.04),
hist_kws={'alpha': 0.8, 'histtype': 'step', 'linewidth': 3, 'weights': [1/enc]*len(bbh_pred[0])}, label=r'$b\bar{b}h$')
sns.distplot(bbxaa_pred[0], kde=False, bins=np.arange(0, 1 + 0.04, 0.04),
hist_kws={'alpha': 0.8, 'histtype': 'step', 'linewidth': 3, 'weights': [1/enc]*len(bbxaa_pred[0])}, label=r'$bb\gamma\gamma$')
plt.legend(loc='upper center', fontsize=14)
plt.grid(linestyle='dashed', alpha=0.4, color='#808080')
ax.tick_params(axis='both', which='major', labelsize=14)
plt.xlabel(r'$p(hh)$', fontsize=14)
plt.ylabel(r'$N$', fontsize=14)
plt.yscale('log')
plt.tight_layout()
plt.savefig('../plots/bbxaa-bbh-hhsm-Wfull-BDT-dist-ku.pdf', dpi=300)
# +
df_array = [df_bbxaa_test, df_bbh_test, df_tth_test, df_hhsm_test, df_ku_test]
weight_array = [weight_bbxaa*1.5, weight_bbh, weight_tth*1.2, weight_hhsm*1.72, weight_ku*1.28]
ps_exp_class = collections.Counter(classifier.predict(pd.concat([df_array[4].iloc[:,:-2].sample(n=round(weight_array[4]), random_state=seed, replace=True),
df_array[3].iloc[:,:-2].sample(n=round(weight_array[3]), random_state=seed, replace=True),
df_array[2].iloc[:,:-2].sample(n=round(weight_array[2]), random_state=seed, replace=True),
df_array[1].iloc[:,:-2].sample(n=round(weight_array[1]), random_state=seed, replace=True),
df_array[0].iloc[:,:-2].sample(n=round(weight_array[0]), random_state=seed, replace=True)]).values))
nevents_ku, sig_ku = get_mclass(4, df_array, weight_array, ps_exp_class)
nevents_hhsm, sig_hhsm = get_mclass(3, df_array, weight_array, ps_exp_class)
nevents_tth, sig_tth = get_mclass(2, df_array, weight_array, ps_exp_class)
nevents_bbh, sig_bbh = get_mclass(1, df_array, weight_array, ps_exp_class)
nevents_bbxaa, sig_bbxaa = get_mclass(0, df_array, weight_array, ps_exp_class)
confusion = np.column_stack((nevents_ku, nevents_hhsm, nevents_tth, nevents_bbh, nevents_bbxaa))
# -
# rebuild confusion_tot from this analysis' confusion matrix (otherwise the
# value left over from the previous cell would be used)
confusion_tot = np.round(np.array([confusion[4], confusion[3], confusion[2], confusion[1], confusion[0]])).astype(int)
kappa_stat = np.sum([confusion_tot[i, i] for i in range(confusion.shape[0])]) / np.sum(confusion_tot)
print("kappa statistics = {:4.3f}".format(kappa_stat))
#k_lambda
df_hhsm['class'] = 3
df_hhsm_test['class'] = 3
df_hhk8['class'] = 4
df_hhk8_test['class'] = 4
######
channels = [df_hhk8,df_hhsm, df_bbh, df_tth, df_bbxaa]
nchannels = len(channels)
df_train = pd.concat(channels, ignore_index=True)
df_train = df_train.sample(frac=1).reset_index(drop=True)
class_names = [r'$bb\gamma\gamma$', r'$b\bar{b}h$', r'$t\bar{t}h$', r'$hh^{SM}$',r'$hh^{\kappa_{\lambda}=8.0}$']
filename = 'models/HL-LHC-BDT/hbb-BDT-5class-hhsm-vshhkl8-fullW-1btag.pickle.dat' ## The pickle model store if necessary.
shap_plot = '../plots/shap-bbxaa-bbh-tthfull-hhsm-kl8.pdf'
classifier, x_test, y_test, shap_values_5, X_shap_5 = runBDT(df_train, filename)
abs_shap(shap_values_5, X_shap_5, shap_plot, names=names, class_names=class_names, cmp=cmp_5)
# +
disc = 4
enc=1
kl_p = pd.DataFrame(classifier.predict_proba(df_hhk8_test.drop(columns=['class', 'weight']).values)[:,disc])
print('Accuracy Score for kl8: {:4.2f}% '.format(100*metrics.accuracy_score(df_hhk8_test['class'].values, classifier.predict(df_hhk8_test.drop(columns=['class', 'weight']).values))))
kl_p['weight'] = df_hhk8_test['weight'].values
hhsm_p = pd.DataFrame(classifier.predict_proba(df_hhsm_test.drop(columns=['class', 'weight']).values)[:,disc])
print('Accuracy Score for hhsm: {:4.2f}% '.format(100*metrics.accuracy_score(df_hhsm_test['class'].values, classifier.predict(df_hhsm_test.drop(columns=['class', 'weight']).values))))
hhsm_p['weight'] = df_hhsm_test['weight'].values
tth_p = pd.DataFrame(classifier.predict_proba(df_tth_test.drop(columns=['class', 'weight']).values)[:,disc])
print('Accuracy Score for tth: {:4.2f}% '.format(100*metrics.accuracy_score(df_tth_test['class'].values, classifier.predict(df_tth_test.drop(columns=['class', 'weight']).values))))
tth_p['weight'] = df_tth_test['weight'].values
bbh_p = pd.DataFrame(classifier.predict_proba(df_bbh_test.drop(columns=['class', 'weight']).values)[:,disc])
print('Accuracy Score for bbh: {:4.2f}% '.format(100*metrics.accuracy_score(df_bbh_test['class'].values, classifier.predict(df_bbh_test.drop(columns=['class', 'weight']).values))))
bbh_p['weight'] = df_bbh_test['weight'].values
bbxaa_p = pd.DataFrame(classifier.predict_proba(df_bbxaa_test.drop(columns=['class', 'weight']).values)[:,disc])
print('Accuracy Score for bbxaa: {:4.2f}% '.format(100*metrics.accuracy_score(df_bbxaa_test['class'].values, classifier.predict(df_bbxaa_test.drop(columns=['class', 'weight']).values))))
bbxaa_p['weight'] = df_bbxaa_test['weight'].values
kl_pred = kl_p.sample(n=round(weight_hhk8*Kfac(8.0)/1.72*enc), replace=True, random_state=seed).reset_index(drop=True)
hhsm_pred = hhsm_p.sample(n=round(weight_hhsm*1.72*enc), replace=True, random_state=seed).reset_index(drop=True)
tth_pred = tth_p.sample(n=round(weight_tth*1.2*enc), replace=True, random_state=seed).reset_index(drop=True)
bbh_pred = bbh_p.sample(n=round(weight_bbh*enc), replace=True, random_state=seed).reset_index(drop=True)
bbxaa_pred = bbxaa_p.sample(n=round(weight_bbxaa*1.5*enc), replace=True, random_state=seed).reset_index(drop=True)
plt.figure(figsize=(6,4))
ax = plt.gca()
ax.set_prop_cycle(color=(col[0:5])[::-1])
rc('text', usetex=True)
plt.rcParams['text.latex.preamble'] = [r"\usepackage{amsmath}"]
sns.distplot(kl_pred[0], kde=False, bins=np.arange(0, 1 + 0.04, 0.04),
hist_kws={'alpha': 0.8, 'histtype': 'step', 'linewidth': 3, 'weights': [1/enc]*len(kl_pred[0])}, label=r'$hh^{\kappa_\lambda =8.0}$')
sns.distplot(hhsm_pred[0], kde=False, bins=np.arange(0, 1 + 0.04, 0.04),
hist_kws={'alpha': 0.8, 'histtype': 'step', 'linewidth': 3, 'weights': [1/enc]*len(hhsm_pred[0])}, label=r'$hh^{SM}$')
sns.distplot(tth_pred[0], kde=False, bins=np.arange(0, 1 + 0.04, 0.04),
hist_kws={'alpha': 0.8, 'histtype': 'step', 'linewidth': 3, 'weights': [1/enc]*len(tth_pred[0])}, label=r'$t\bar{t}h$')
sns.distplot(bbh_pred[0], kde=False, bins=np.arange(0, 1 + 0.04, 0.04),
hist_kws={'alpha': 0.8, 'histtype': 'step', 'linewidth': 3, 'weights': [1/enc]*len(bbh_pred[0])}, label=r'$b\bar{b}h$')
sns.distplot(bbxaa_pred[0], kde=False, bins=np.arange(0, 1 + 0.04, 0.04),
hist_kws={'alpha': 0.8, 'histtype': 'step', 'linewidth': 3, 'weights': [1/enc]*len(bbxaa_pred[0])}, label=r'$bb\gamma\gamma$')
plt.legend(loc='upper center', fontsize=14)
plt.grid(linestyle='dashed', alpha=0.4, color='#808080')
ax.tick_params(axis='both', which='major', labelsize=14)
plt.xlabel(r'$p(hh)$', fontsize=14)
plt.ylabel(r'$N$', fontsize=14)
plt.yscale('log')
plt.tight_layout()
plt.savefig('../plots/bbxaa-bbh-hhsm-Wfull-BDT-dist-kl8.pdf', dpi=300)
# +
df_array = [df_bbxaa_test, df_bbh_test, df_tth_test, df_hhsm_test, df_hhk8_test]
weight_array = [weight_bbxaa*1.5, weight_bbh, weight_tth*1.2, weight_hhsm*1.72, weight_hhk8*Kfac(8.0)]
ps_exp_class = collections.Counter(classifier.predict(pd.concat([df_array[4].iloc[:,:-2].sample(n=round(weight_array[4]), random_state=seed, replace=True),
df_array[3].iloc[:,:-2].sample(n=round(weight_array[3]), random_state=seed, replace=True),
df_array[2].iloc[:,:-2].sample(n=round(weight_array[2]), random_state=seed, replace=True),
df_array[1].iloc[:,:-2].sample(n=round(weight_array[1]), random_state=seed, replace=True),
df_array[0].iloc[:,:-2].sample(n=round(weight_array[0]), random_state=seed, replace=True)]).values))
nevents_kl, sig_kl = get_mclass(4, df_array, weight_array, ps_exp_class)
nevents_hhsm, sig_hhsm = get_mclass(3, df_array, weight_array, ps_exp_class)
nevents_tth, sig_tth = get_mclass(2, df_array, weight_array, ps_exp_class)
nevents_bbh, sig_bbh = get_mclass(1, df_array, weight_array, ps_exp_class)
nevents_bbxaa, sig_bbxaa = get_mclass(0, df_array, weight_array, ps_exp_class)
confusion = np.column_stack((nevents_kl, nevents_hhsm, nevents_tth, nevents_bbh, nevents_bbxaa))
# +
confusion_tot = np.round(np.array([confusion[4],confusion[3], confusion[2], confusion[1], confusion[0]])).astype(int)
event_total = np.array([[np.sum(confusion_tot[i])] for i in range(confusion_tot.shape[0])])
significance = np.append(np.array([np.abs(confusion_tot[i,i])/np.sqrt(np.sum(confusion_tot[:,i])) for i in range(confusion_tot.shape[0])]), 0)
confusion_tab = np.vstack((np.append(confusion_tot, event_total, axis=1), significance))
df_conf = pd.DataFrame(confusion_tab, [r'$hh^{\kappa_\lambda=8}$',r'$hh^{SM}$', r'$t\bar{t}h$', r'$b\bar{b}h$', r'$bb\gamma\gamma$', r'$\sigma$'])
df_conf.columns = [r'$hh^{\kappa_\lambda=8}$',r'$hh^{SM}$', r'$t\bar{t}h$', r'$b\bar{b}h$', r'$bb\gamma\gamma$', 'total']
print(df_conf.to_latex(escape=False))
kappa_stat = np.sum([confusion_tot[i, i] for i in range(confusion.shape[0])]) / np.sum(confusion_tot)
print("kappa statistics = {:4.3f}".format(kappa_stat))
# -
| Notebooks/Lina-Initial_Analysis.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # NetColoc analysis of rare and common variants in Autism spectrum disorder (ASD)
#
# Example of NetColoc workflow on genes associated with rare and common variants in autism.
#
# Some background:
#
# Here we introduce NetColoc, a tool which evaluates the extent to which two gene sets are related in network space, i.e. the extent to which they are colocalized in a molecular interaction network, and interrogates the underlying biological pathways and processes using multiscale community detection. This framework may be applied to any number of scenarios in which gene sets have been associated with a phenotype or condition, including rare and common variants within the same disease, genes associated with two comorbid diseases, genetically correlated GWAS phenotypes, GWAS across two different species, or gene expression changes after treatment with two different drugs, to name a few. NetColoc relies on a dual network propagation approach to identify the region of network space which is significantly proximal to both input gene sets, and as such is highly effective for small to medium input gene sets.
#
# +
# load required packages
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
import networkx as nx
import pandas as pd
import random
from IPython.display import display
import getpass
import ndex2
# latex rendering of text in graphs
import matplotlib as mpl
mpl.rc('text', usetex = False)
mpl.rc('font', family = 'serif')
from matplotlib import rcParams
rcParams['font.family'] = 'sans-serif'
rcParams['font.sans-serif'] = ['Arial']
sns.set(font_scale=1.4)
sns.set_style('white')
sns.set_style("ticks", {"xtick.major.size": 15, "ytick.major.size": 15})
plt.rcParams['svg.fonttype'] = 'none'
import sys
# % matplotlib inline
# +
import sys
sys.path.append('../netcoloc/')
import netprop_zscore
import netprop
import network_colocalization
import importlib
importlib.reload(netprop_zscore)
importlib.reload(netprop)
importlib.reload(network_colocalization)
# -
nx.__version__
# set random seed to enable reproducibility between runs
import random
np.random.seed(1)
# # 1. Load two gene sets of interest
#
#
# Identify two gene sets of interest. Gene sets should come from experimental data (not manual curation) to avoid bias. For example, genes associated with significant loci from GWAS (common variants). Summary statistics are readily available for most GWAS. We note that there are existing methods to map summary statistics to corresponding genes (REFS MAGMA, TWAS/PREDIXCAN/FUMA/PASCAL, etc.). In our work we use the PASCAL algorithm (https://journals.plos.org/ploscompbiol/article?id=10.1371/journal.pcbi.1004714), a positional mapper which accounts for linkage disequilibrium. Another example is genes associated with damaging variants from case-control studies in exome sequencing (rare variants). There exist well-established pipelines for identifying deleterious variants in exome sequencing (REFS). In this case the variant-gene mapping is trivial because all variants are by definition found within the gene body. In practice, fewer than 500 genes work best as input to NetColoc, because of sampling issues.
#
# **Usage Note**: gene sets should be < 500 genes (propagation algorithm breaks down if seeded with larger sets). If your gene set is larger, only use the top 500 as seeds to the network propagation.
#
#
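# A minimal sketch of this truncation, assuming a hypothetical gene-score table with
# 'gene' and 'pvalue' columns (names modeled on the PASCAL output loaded below):

```python
import pandas as pd

# Hypothetical scores: 1200 genes, smaller p-value = stronger association
scores = pd.DataFrame({'gene': ['G%d' % i for i in range(1200)],
                       'pvalue': [1e-6 * (i + 1) for i in range(1200)]})

# Keep only the 500 most significant genes as seeds for network propagation
seed_genes = scores.sort_values('pvalue').head(500)['gene'].tolist()
print(len(seed_genes))
```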
# +
# load rare variants (from https://www.sciencedirect.com/science/article/abs/pii/S0092867419313984)
ASD_rare_df = pd.read_csv('../docs/data/HC_genes/Satterstrom--Top-102-ASD-genes--May2019.csv')
ASD_rare_df.index=ASD_rare_df['gene']
print('number rare genes:')
print(len(ASD_rare_df))
ASD_rare_genes = ASD_rare_df.index.tolist() # define rare variant genes to seed network propagation
print(ASD_rare_genes[0:5])
# -
# load common variant genes (ASD summary stats from LINK, mapped using PASCAL)
ASD_common_df = pd.read_csv('../docs/data/HC_genes/ASD_sumstats_pascal.sum.genescores.txt',sep='\t')
pthresh=1E-4 # set p-value cutoff for common variant genes
ASD_common_genes = ASD_common_df[ASD_common_df['pvalue']<pthresh]['gene_symbol'].tolist()
print('number common genes:')
print(len(ASD_common_genes))
print(ASD_common_genes[0:5])
# how much overlap between gene sets?
print('number of rare and common genes overlapping:')
print(len(np.intersect1d(ASD_common_genes,ASD_rare_genes)))
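# Beyond this raw count, a simple hypergeometric test can check whether the direct
# overlap itself exceeds chance. A sketch with illustrative numbers (not this
# notebook's actual counts):

```python
from scipy.stats import hypergeom

# N genes in the universe (interactome), K in set 1, n in set 2, k overlapping
N, K, n, k = 19000, 102, 300, 8

# P(overlap >= k) under random draws of n genes from the universe
p_overlap = hypergeom.sf(k - 1, N, K, n)
print(p_overlap)
```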
# # 2. Load interactome
#
# **Coverage**. Larger, denser interactomes are more inclusive and amenable to building more granular models. Human-curated interactomes are smaller and sparser, and are biased towards known biology. Many, however, have richer descriptions of the relationships. Data-derived interactomes based on specific projects have the advantage that the experimental context is well-defined and consistent.
#
#
# **Interaction Types**. The edges that were useful in computing the colocalization may not be useful for interpretation. For example, the edges in PCNet are not typed. For purposes of interpretation we need to know how the genes relate to each other. Further, we are best able to understand physical interactions, so it may be most useful to review the nodes in a community or other subnetwork using a protein-protein interactome, or at least one in which the edges can be filtered when needed.
#
# **Net recommendation**: use an inclusive interactome for generating the model but then annotate subsystem networks with relationships derived from richer, if less comprehensive, sources. Or from sources specifically relevant to the experimental context.
#
#
# **Usage note**: PCnet is a general purpose interactome, a good starting place https://www.sciencedirect.com/science/article/pii/S2405471218300954
# +
interactome_uuid='4de852d9-9908-11e9-bcaf-0ac135e8bacf' # for PCNet
ndex_server='public.ndexbio.org'
ndex_user=None
ndex_password=None # PCNet is public, so no credentials are needed
G_PC = ndex2.create_nice_cx_from_server(
ndex_server,
username=ndex_user,
    password=ndex_password,
uuid=interactome_uuid
).to_networkx()
nodes = list(G_PC.nodes)
# print out interactome num nodes and edges for diagnostic purposes
print('number of nodes:')
print(len(G_PC.nodes))
print('\nnumber of edges:')
print(len(G_PC.edges))
# -
pc_nodes = list(G_PC.nodes)
# # 3. Network co-localization
#
# Network propagation from genes on selected interactome
# - Control for degree of input genes
# - Generate a proximity z-score, which defines genes which are closer to input set than expected by chance.
# - Repeat for rare and common variant genes, defined above
#
# Background on network propagation: https://www.nature.com/articles/nrg.2017.38.pdf?origin=ppub
#
# +
# pre calculate mats used for netprop... this step takes a few minutes, more for denser interactomes
print('\ncalculating w_prime')
w_prime = netprop.get_normalized_adjacency_matrix(G_PC, conserve_heat=True)
print('\ncalculating w_double_prime')
w_double_prime = netprop.get_individual_heats_matrix(w_prime, .5)
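# Conceptually, the individual-heats matrix is the closed form of random walk with
# restart; a self-contained toy sketch (the netprop implementation may differ in
# normalization details):

```python
import numpy as np

alpha = 0.5  # propagation parameter, matching the 0.5 used above

# Toy 3-node path graph: 0 - 1 - 2
A = np.array([[0., 1., 0.],
              [1., 0., 1.],
              [0., 1., 0.]])
W = A / A.sum(axis=0)  # column-normalized adjacency

# Closed form: F = (1 - alpha) * (I - alpha * W)^-1
F = (1 - alpha) * np.linalg.inv(np.eye(3) - alpha * W)

# Unit heat placed on node 0 spreads over the graph but is conserved
heat = F @ np.array([1., 0., 0.])
print(heat)
```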
# +
# subset seed genes to those found in interactome
print(len(ASD_rare_genes))
ASD_rare_genes = list(np.intersect1d(ASD_rare_genes,pc_nodes))
print(len(ASD_rare_genes))
print(len(ASD_common_genes))
ASD_common_genes = list(np.intersect1d(ASD_common_genes,pc_nodes))
print(len(ASD_common_genes))
# +
# Rare variant netprop
print('\nCalculating rare variant z-scores: ')
z_rare, Fnew_rare, Fnew_rand_rare = netprop_zscore.calc_zscore_heat(w_double_prime, pc_nodes,
dict(G_PC.degree),
ASD_rare_genes, num_reps=1000,
minimum_bin_size=100)
z_rare = pd.DataFrame({'z':z_rare})
z_rare.sort_values('z',ascending=False).head()
# +
# common variant netprop
print('\nCalculating common variant z-scores: ')
z_common, Fnew_common, Fnew_rand_common = netprop_zscore.calc_zscore_heat(w_double_prime, pc_nodes,
dict(G_PC.degree),
ASD_common_genes, num_reps=1000,
minimum_bin_size=100)
z_common = pd.DataFrame({'z':z_common})
z_common.sort_values('z',ascending=False).head()
# -
# ## calculate size of network overlap, and compare to expected size
#
#
# Size of network co-localization subgraph compared to null model created by permuting individual propagation z-scores.
#
#
# Note: seed genes are excluded from this calculation
#
#
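# The permutation null used below can be illustrated on toy z-scores: shuffle one
# vector to break the gene-wise pairing, then compare the observed joint-threshold
# count to the permuted counts (illustrative data, not this notebook's values):

```python
import numpy as np

rng = np.random.default_rng(1)
z1 = rng.normal(size=5000)
z2 = 0.5 * z1 + rng.normal(size=5000)  # correlated, so overlap should exceed chance
zthresh = 1.5

# observed number of genes above threshold in both score vectors
observed = np.sum((z1 > zthresh) & (z2 > zthresh))
# null distribution: permute z2 to destroy the gene-wise correspondence
null_sizes = [np.sum((z1 > zthresh) & (rng.permutation(z2) > zthresh))
              for _ in range(200)]
z_stat = (observed - np.mean(null_sizes)) / np.std(null_sizes)
print(observed, round(float(z_stat), 2))
```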
# +
from scipy.stats import hypergeom
from scipy.stats import norm
# ------ customize this section based on your gene sets and how they should be labeled -------
z_dict = {'ASD_rare':z_rare,'ASD_common':z_common}
seed_dict = {'ASD_rare':ASD_rare_genes,'ASD_common':ASD_common_genes}
# --------------------------------------------------------------------------------------------
# save the num overlap and overlap p-val in dataframes
focal_diseases = ['ASD_rare','ASD_common']
network_num_overlap = pd.DataFrame(np.zeros((len(focal_diseases),len(focal_diseases))),index=focal_diseases)
network_num_overlap.columns = focal_diseases
network_obs_exp = pd.DataFrame(np.zeros((len(focal_diseases),len(focal_diseases))),index=focal_diseases)
network_obs_exp.columns = focal_diseases
network_pval_overlap = pd.DataFrame(np.ones((len(focal_diseases),len(focal_diseases))),index=focal_diseases)
network_pval_overlap.columns = focal_diseases
network_exp_mean_overlap = pd.DataFrame(np.ones((len(focal_diseases),len(focal_diseases))),index=focal_diseases)
network_exp_mean_overlap.columns = focal_diseases
network_exp_std_overlap = pd.DataFrame(np.ones((len(focal_diseases),len(focal_diseases))),index=focal_diseases)
network_exp_std_overlap.columns = focal_diseases
zthresh=3
for i in np.arange(len(focal_diseases)-1):
for j in np.arange(1+i,len(focal_diseases)):
d1=focal_diseases[i]
d2=focal_diseases[j]
seed1 = seed_dict[d1]
seed2 = seed_dict[d2]
z1=z_dict[d1]
z1_noseeds = z1.drop(list(np.intersect1d(seed1+seed2,z1.index.tolist())))
z2=z_dict[d2]
z2_noseeds = z2.drop(list(np.intersect1d(seed1+seed2,z2.index.tolist())))
# replace hypergeometric with permutation empirical p
# z_d1d2_size,high_z_rand=network_colocalization.calculate_expected_overlap(d1,d2,z1_noseeds,z2_noseeds,
# plot=False,numreps=1000,zthresh=zthresh)
z_d1d2_size,high_z_rand=network_colocalization.calculate_expected_overlap(z1['z'],z2['z'],d1,d2,
plot=False,num_reps=1000,z_score_threshold=zthresh)
ztemp = (z_d1d2_size-np.mean(high_z_rand))/np.std(high_z_rand)
ptemp = norm.sf(ztemp)
print(d1+' + '+d2)
print('size of network intersection = '+str(z_d1d2_size))
obs_exp_temp = float(z_d1d2_size)/np.mean(high_z_rand)
print('observed size/ expected size = ' + str(obs_exp_temp))
print('p = '+ str(ptemp))
network_num_overlap.loc[d1][d2]=z_d1d2_size
network_num_overlap.loc[d2][d1]=z_d1d2_size
network_pval_overlap.loc[d1][d2]=ptemp
network_pval_overlap.loc[d2][d1]=ptemp
network_obs_exp.loc[d1][d2]=obs_exp_temp
network_obs_exp.loc[d2][d1]=obs_exp_temp
network_exp_mean_overlap.loc[d1][d2]=np.mean(high_z_rand)
network_exp_mean_overlap.loc[d2][d1]=np.mean(high_z_rand)
network_exp_std_overlap.loc[d1][d2]=np.std(high_z_rand)
network_exp_std_overlap.loc[d2][d1]=np.std(high_z_rand)
# +
# plot the overlap ... useful when there are lots of comparisons... not so much here
xlabels = []
observed_overlap_list=[]
mean_exp_overlap_list=[]
std_exp_overlap_list=[]
for i in range(len(focal_diseases)-1):
for j in range(i+1,len(focal_diseases)):
di = focal_diseases[i]
dj=focal_diseases[j]
xlabels.append(di+'-'+dj)
observed_overlap_list.append(network_num_overlap.loc[di][dj])
mean_exp_overlap_list.append(network_exp_mean_overlap.loc[di][dj])
std_exp_overlap_list.append(network_exp_std_overlap.loc[di][dj])
obs_div_exp_list = np.divide(observed_overlap_list,mean_exp_overlap_list)
# change to 95% confidence interval (*1.96 sigma)
yerr_lower = np.subtract(obs_div_exp_list,np.divide(observed_overlap_list,np.add(mean_exp_overlap_list,1.96*np.array(std_exp_overlap_list))))
yerr_upper = np.subtract(np.divide(observed_overlap_list,np.subtract(mean_exp_overlap_list,1.96*np.array(std_exp_overlap_list))),obs_div_exp_list)
log_yerr_lower = np.subtract(np.log2(obs_div_exp_list),np.log2(np.divide(observed_overlap_list,np.add(mean_exp_overlap_list,1.96*np.array(std_exp_overlap_list)))))
log_yerr_upper = np.subtract(np.log2(np.divide(observed_overlap_list,np.subtract(mean_exp_overlap_list,1.96*np.array(std_exp_overlap_list)))),np.log2(obs_div_exp_list))
log_obs_div_exp=np.log2(obs_div_exp_list)
# log_yerr_lower=np.log2(obs_div_exp_lower_list)
# log_yerr_upper=np.log2(obs_div_exp_upper_list)
network_intersection_df = pd.DataFrame({'name':xlabels,'observed_overlap':observed_overlap_list,
'log2_obs_div_exp':log_obs_div_exp,
'log2_yerr_lower':log_yerr_lower,
'log2_yerr_upper':log_yerr_upper,
'obs_div_exp':obs_div_exp_list,
'yerr_lower':yerr_lower,
'yerr_upper':yerr_upper})
network_intersection_df.index=network_intersection_df['name']
# sort it
network_intersection_df=network_intersection_df.sort_values('obs_div_exp',ascending=False)
plt.figure(figsize=(2,3))
plt.errorbar(np.arange(len(network_intersection_df)),network_intersection_df['obs_div_exp'],
yerr=[network_intersection_df['yerr_lower'],network_intersection_df['yerr_upper']],
fmt='o',color='k')
tmp=plt.xticks(np.arange(len(observed_overlap_list)),network_intersection_df.index.tolist(),fontsize=16,rotation='vertical')
plt.ylabel('observed/expected size of network intersection\n(95% CI)',fontsize=16)
#plt.plot([0,len(obs_div_exp_list)],[0,0],'gray','--')
plt.hlines(1,xmin=-.5,xmax=len(network_intersection_df),color='gray',linestyles='dashed')
# plt.ylim([0.8,1.5])
plt.yticks(fontsize=16)
plt.xlim([-.5,len(network_intersection_df)-.5])
# -
# ## Output network overlap to NDEx/cytoscape for clustering/annotation
#
# ----- If a significant overlap is detected: ------
#
# Create the network co-localization subgraph, save network to NDEX, then open in Cytoscape for clustering/annotation. (See CDAPS documentation)
#
# +
# network_colocalization.calculate_network_overlap?
# +
# select genes in network intersection, make a subgraph
d1='ASD_rare'
d2='ASD_common'
z1=z_dict[d1]
z2=z_dict[d2]
G_overlap = network_colocalization.calculate_network_overlap_subgraph(G_PC,z1['z'],z2['z'],z_score_threshold=3)
print(len(G_overlap.nodes()))
print(len(G_overlap.edges()))
# +
# compile dataframe of metadata for overlapping nodes
node_df = pd.DataFrame(index=list(G_overlap.nodes))
node_df[d1+'_seeds']=0
node_df[d2+'_seeds']=0
# seed genes retained in the interactome (subset earlier in the notebook)
d1_seeds_in_network = seed_dict[d1]
d2_seeds_in_network = seed_dict[d2]
node_df.loc[list(np.intersect1d(d1_seeds_in_network, node_df.index.tolist())), d1+'_seeds'] = 1
node_df.loc[list(np.intersect1d(d2_seeds_in_network, node_df.index.tolist())), d2+'_seeds'] = 1
node_df['z_'+d1]=z1.loc[list(G_overlap.nodes)]['z']
node_df['z_'+d2]=z2.loc[list(G_overlap.nodes)]['z']
node_df['z_both']=node_df['z_'+d1]*node_df['z_'+d2]
node_df = node_df.sort_values('z_both',ascending=False)
node_df.head()
# -
# ## Annotate network and upload to NDEx
#
# +
# ----- a number of properties should be customized here ------
#Annotate network
print(len(G_overlap.nodes()))
print(len(G_overlap.edges()))
G_overlap_cx = ndex2.create_nice_cx_from_networkx(G_overlap)
G_overlap_cx.set_name('ASD_rare_common_network_temp')
for node_id, node in G_overlap_cx.get_nodes():
data = node_df.loc[node['n']]
for row, value in data.items():
if row == 'ASD_rare_seeds' or row == 'ASD_common_seeds':
data_type = 'boolean'
if value == 0:
value = False
else:
value = True
else:
data_type = 'double'
G_overlap_cx.set_node_attribute(node_id, row, value, type=data_type)
#Upload to NDEx
SERVER = input('NDEx server (probably ndexbio.org): ')
USERNAME = input('NDEx user name: ')
PASSWORD = getpass.getpass('NDEx password: ')
network_uuid = G_overlap_cx.upload_to(SERVER, USERNAME, PASSWORD)
# -
# # 4. Build multiscale systems map
#
# This step performed in Cytoscape
#
# https://apps.cytoscape.org/apps/cycommunitydetection
#
# Instructions for use available in the manuscript
| example_notebooks/.ipynb_checkpoints/ASD_rare_common_network_updated_code-checkpoint.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import numpy as np
import os
import matplotlib.pyplot as plt
# This is an algorithm for the simplified version of crystal plasticity. This version does not include texture evolution and only touches on the stress-update algorithm.
# # Define constants:
# +
m = 0.02 # Strain-rate constant
n = 0.1 # Power-law constant: hardening exponent
e_dot_0 = 0.001 # Reference strain rate
t_0 = 200 # [MPa] Critical resolved shear stress
h_0 = 250 # [MPa] Initial hardness after yielding
sigma_bar = t_0 # [MPa] Initial hardening value (yield strength)
e_bar_p = 0 # Initial hardness-related parameter (equivalent to accumulated slip in crystal plasticity)
# +
# Elastic tensor:
L = np.zeros((6, 6))
for i in [0, 1, 2]:
L[i, i] = 1.0372e+5
for i in [3, 4, 5]:
L[i, i] = 25316
for i in [0, 1, 2]:
for j in [0, 1, 2]:
if i != j:
L[i, j] = 51084
print("Elastic tensor:\n\n", L)
# -
# Strain and stress (-rate, too) tensor are written in the Voigt notation:
#
# $\tilde{\sigma}=\left(\sigma_{x x}, \sigma_{y y}, \sigma_{z z}, \sigma_{y z}, \sigma_{x z}, \sigma_{x y}\right) \equiv\left(\sigma_{1}, \sigma_{2}, \sigma_{3}, \sigma_{4}, \sigma_{5}, \sigma_{6}\right)$
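# The mapping from a symmetric 3x3 stress tensor to this 6-vector can be sketched as
# follows (illustrative values):

```python
import numpy as np

# Symmetric 3x3 stress tensor
sigma = np.array([[100., 30., 20.],
                  [ 30., 50., 10.],
                  [ 20., 10., 80.]])

def to_voigt(s):
    # Voigt ordering used in this notebook: (xx, yy, zz, yz, xz, xy)
    return np.array([s[0, 0], s[1, 1], s[2, 2], s[1, 2], s[0, 2], s[0, 1]])

stress_v = to_voigt(sigma)
print(stress_v)  # [100.  50.  80.  10.  20.  30.]
```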
# +
# Time step:
delta_t = 1
print("Time-step:\n\n", delta_t)
# Input strain-rate tensor:
D = np.asarray([1.0, -0.5, -0.5, 0.0, 0.0, 0.0])*1e-05
print("\n\nInput strain-rate tensor:\n\n", D)
T = int(0.1/D[0]/delta_t)
print("\n\nThe number of time-steps:\n\n", T)
# +
# Stress and strain tensors:
stress = np.zeros((6,))
print("Initial stress-tensor:\n\n", stress)
strain = np.zeros((6,))
print(" \n\nInitial strain-tensor:\n\n", strain)
# -
# # Helper functions
def dev_stress(stress):
"""
Calculate deviatoric stress component,
stress [6x1]
dev_stress [6x1]
"""
trace_s = stress[0] + stress[1] + stress[2]
dev_stress = stress - trace_s*np.asarray([1, 1, 1, 0, 0, 0])/3
return dev_stress
def sigma_vm(stress):
"""
Von Mises equivalent stress, ver 1
"""
s = dev_stress(stress)
stress_vm = 0
for i in range(6):
if i > 2:
stress_vm += 2*s[i]*s[i]
else:
stress_vm += s[i]*s[i]
return np.sqrt(3*stress_vm/2)
# $\tilde{\sigma}=\left(\sigma_{x x}, \sigma_{y y}, \sigma_{z z}, \sigma_{y z}, \sigma_{x z}, \sigma_{x y}\right) \equiv\left(\sigma_{0}, \sigma_{1}, \sigma_{2}, \sigma_{3}, \sigma_{4}, \sigma_{5}\right)$
def sigma_vm2(stress):
"""
Von Mises equivalent stress, ver 2
"""
a = (stress[0] - stress[1])**2 + \
(stress[1] - stress[2])**2 + \
(stress[2] - stress[0])**2
b = 6 *\
(stress[3]**2 + stress[4]**2 + stress[5]**2)
return np.sqrt(0.5*(a + b))
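# The two formulations above are algebraically equivalent; a compact self-contained
# consistency check (re-defining both versions locally):

```python
import numpy as np

def vm_dev(stress):
    # via the deviatoric stress with doubled shear terms, as in sigma_vm
    s = stress - (stress[0] + stress[1] + stress[2]) * np.array([1., 1., 1., 0., 0., 0.]) / 3
    return np.sqrt(1.5 * (np.sum(s[:3]**2) + 2 * np.sum(s[3:]**2)))

def vm_direct(stress):
    # direct component formula, as in sigma_vm2
    a = (stress[0] - stress[1])**2 + (stress[1] - stress[2])**2 + (stress[2] - stress[0])**2
    b = 6 * np.sum(stress[3:]**2)
    return np.sqrt(0.5 * (a + b))

s = np.array([100., -40., 25., 10., -5., 30.])
print(vm_dev(s), vm_direct(s))  # the two versions agree
```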
def calculate_e_bar_dot_p(e_dot_0, stress, sigma_bar, m):
"""
Calculate e_bar_dot_p
"""
e_bar_dot_p = e_dot_0 * (sigma_vm(stress)/sigma_bar)**(1/m)
return e_bar_dot_p
def tc_4d2(L, D):
"""
Double-dot product, Voigt notation
L - [6x6]
D - [6x1]
L:D - [6x1]
"""
return np.matmul(L,D)
# +
def calculate_Dp(e_bar_dot_p, stress):
"""
Dp -- [6x1]
"""
Dp = e_bar_dot_p * (3/2) * dev_stress(stress)/sigma_vm(stress)
# print('Dp:\t', Dp)
return Dp
# -
def stress_update(L, D, Dp, sigma_t, delta_t):
"""
Calculate delta sigma
Update the stress [6x1]
"""
d_sigma = tc_4d2(L, D) - tc_4d2(L, Dp)
return sigma_t + d_sigma*delta_t
def hardness_update(h_0, t_0, n, e_bar_dot_p, e_bar_p, sigma_bar, delta_t):
"""
Updating the hardness-related variables:sigma_bar and the variable e_bar_p
"""
# Update epsilon bar p:
delta_e_bar_p = e_bar_dot_p*delta_t
e_bar_p = e_bar_p + delta_e_bar_p
# Update sigma bar:
delta_sigma_bar = h_0 * \
((h_0 * e_bar_p)/(t_0 * n) + 1)**(n-1) * \
delta_e_bar_p
sigma_bar_new = sigma_bar + delta_sigma_bar
return e_bar_p, sigma_bar_new
# # Stress integration: a stress-update algorithm
# 1. Find $\dot{\bar{\varepsilon}}^{p}$:
#
#
# $\dot{\bar{\varepsilon}}^{p}=\dot{\varepsilon_{0}} \cdot\left(\frac{\sigma_{e q}\left(\sigma_{t}\right)}{\bar{\sigma}}\right)^{1 / m}$
#
#
# 2. Find $D^{p}$:
#
# $D^{p}=\dot{\bar{\varepsilon}}^{p} \frac{3}{2} \cdot \frac{\sigma_{t}^{dev}}{\sigma_{e q}\left(\sigma_{t}\right)}$
#
# 3. Calculate $\dot{\sigma}$
#
# $\dot{\sigma}=L: D-L: D^{p}$
#
# 4. Update hardness:
#
# $\Delta \bar{\sigma}=h_{0}\left(\frac{h_{0} \bar{\varepsilon}^{p}}{\tau_{0} n}+1\right)^{n-1} \cdot\left(\Delta \bar{\varepsilon}^{p}\right)$
#
# $\bar{\sigma}_{t+1}=\bar{\sigma}_{t}+\Delta \bar{\sigma}$
#
# +
# %%time
# %%capture
stress = np.zeros((6,))
print("Initial stress-tensor:\n\n", stress)
strain = np.zeros((6,))
print(" \n\nInitial strain-tensor:\n\n", strain)
# Initialize history variables
stress_history = {}
strain_history = {}
for i in ['11', '22', '33', '23', '13', '12']:
stress_history[i] = [0]
strain_history[i] = [0]
sigma_bar_history = [sigma_bar]
for timestep in range(T):
print('\n\n\n\n')
e_bar_dot_p = calculate_e_bar_dot_p(e_dot_0, stress, sigma_bar, m)
if timestep == 0:
Dp = np.zeros((6,))
else:
Dp = calculate_Dp(e_bar_dot_p, stress)
stress = stress_update(L, D, Dp, stress, delta_t)
e_bar_p, sigma_bar = hardness_update(h_0, t_0, n, e_bar_dot_p, e_bar_p, sigma_bar, delta_t)
# Save history variables:
sigma_bar_history.append(sigma_bar)
idx = 0
for i in ['11', '22', '33', '23', '13', '12']:
stress_history[i].append(stress[idx])
strain_history[i].append(D[idx]*(timestep+1))
idx+=1
#
# -
# # Plot results
# +
plt.rcParams.update({'font.size': 30})
plt.figure(figsize=(40, 15))
plt_i = 1
for i in ['11', '22', '33', '23', '13', '12']:
plt.subplot(2, 3, plt_i)
plt.plot(strain_history[i], stress_history[i], "s-", linewidth=5)
plt.grid(color='gray', linestyle=':', linewidth=0.5)
txt = r"Stress-strain curve, $\sigma_{" + i + r"} - \varepsilon_{" + i +"}$"
plt.title(txt)
txt = r'$\varepsilon_{' + i + r'}$'
plt.xlabel(txt)
txt = r'$\sigma_{' + i + r'}$ [MPa]'
plt.ylabel(txt)
plt_i += 1
plt.subplots_adjust(hspace=0.4, wspace=0.4)
plt.show()
# -
# The results show the expected behavior for uniaxial tension loading. Strain on the x-axis is engineering strain. The parameters and the strain-rate tensor can be varied to simulate different loading conditions and strain paths.
| Simple CP.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## Topic Modeling & Sentiment Analysis Part I
#
# First pass "quick-and-dirty" version of topic modeling and sentiment analysis.
# Primarily as a means of EDA, this was useful to get a better understanding of the types of preprocessing that are going to be necessary.
#
# ## Topic Modeling
#
# Quick topic modeling, to see whether anything interesting pops out.
# Quick and dirty as a first pass, reference [AWS](https://rstudio-pubs-static.s3.amazonaws.com/79360_850b2a69980c4488b1db95987a24867a.html) and [DataCamp](https://www.datacamp.com/community/tutorials/discovering-hidden-topics-python).
# (Interestingly, there's some verbatim code copying across these two articles. Someone stole someone
# else's code and didn't cite them! Tut!)
#
# TODO: add evaluation of coherence and test different numbers of topics.
import pandas as pd
import numpy as np
import os
from nltk.tokenize import RegexpTokenizer
from stop_words import get_stop_words
from nltk.stem.porter import PorterStemmer
from gensim import corpora, models
import gensim
from textblob import TextBlob
import matplotlib.pyplot as plt
import seaborn as sns
pd.set_option('display.max_colwidth', None)  # show full question text
# +
# Select local path vs kaggle kernel
path = os.getcwd()
if 'data-projects/kaggle_quora/notebooks' in path:
data_dir = '../data/raw/'
else:
data_dir = ''
dat = pd.read_csv(data_dir +'train.csv')
# +
def preprocess_data(doc_set):
"""
Input  : document list
Purpose: preprocess text (tokenize, remove stopwords, stem)
Output : preprocessed text
"""
tokenizer = RegexpTokenizer(r'\w+')
# create English stop words list
en_stop = get_stop_words('en')
# Create p_stemmer of class PorterStemmer
p_stemmer = PorterStemmer()
# list for tokenized documents
texts = []
# loop through document list
for question in doc_set:
# clean and tokenize document string
raw = question.lower()
tokens = tokenizer.tokenize(raw)
# remove stop words from tokens
stopped_tokens = [i for i in tokens if i not in en_stop]
# stem tokens
stemmed_tokens = [p_stemmer.stem(i) for i in stopped_tokens]
# add tokens to list
texts.append(stemmed_tokens)
return texts
def prepare_corpus(doc_clean):
"""
Input  : cleaned documents
Purpose: create a term dictionary of the corpus and convert the list of documents (corpus) into a document-term matrix
Output : term dictionary and document-term matrix
"""
# turn our tokenized documents into a id <-> term dictionary
dictionary = corpora.Dictionary(doc_clean)
# convert tokenized documents into a document-term matrix
corpus = [dictionary.doc2bow(text) for text in doc_clean]
return dictionary, corpus
# +
# %timeit
# preprocess questions
doc_clean = preprocess_data(list(dat.question_text.values))
dat['question_text_processed'] = doc_clean
# Create corpus and dictionary
dictionary, corpus = prepare_corpus(doc_clean)
# -
print(dictionary)
# Dictionary(153239 unique tokens: ['1960', 'nation', 'nationalist', 'provinc', 'quebec']...)
# I started with an LDA (Latent Dirichlet Allocation) model, as it is thought to generalize to new documents better than the simpler Latent Semantic Analysis ([more details here](https://medium.com/nanonets/topic-modeling-with-lsa-psla-lda-and-lda2vec-555ff65b0b05)). But it is much slower to fit, and of course I have no interest in generalizing to new documents!
#
# However, the results revealed some interesting gotchas in the preprocessing that are worth exploring before moving forward.
# +
# # %timeit
# # generate LDA model
# ldamodel = gensim.models.ldamodel.LdaModel(corpus, num_topics=10, id2word = dictionary, passes=20)
# ldamodel.save('lda.model')
# print(ldamodel.print_topics(num_topics=10, num_words=20))
# -
# [(0, '0.034*"s" + 0.013*"war" + 0.013*"class" + 0.012*"book" + 0.012*"read" + 0.010*"major" + 0.010*"black" + 0.010*"man" + 0.009*"c" + 0.008*"forc" + 0.008*"movi" + 0.007*"becom" + 0.007*"averag" + 0.007*"u" + 0.007*"tv" + 0.006*"known" + 0.006*"subject" + 0.006*"star" + 0.006*"process" + 0.005*"boy"'), (1, '0.016*"state" + 0.015*"student" + 0.013*"will" + 0.013*"s" + 0.013*"indian" + 0.012*"import" + 0.011*"system" + 0.010*"scienc" + 0.010*"govern" + 0.010*"manag" + 0.010*"india" + 0.009*"china" + 0.009*"human" + 0.009*"happen" + 0.008*"thing" + 0.008*"eat" + 0.008*"polit" + 0.008*"term" + 0.008*"histori" + 0.008*"form"'), (2, '0.038*"use" + 0.035*"best" + 0.033*"can" + 0.020*"way" + 0.018*"learn" + 0.018*"work" + 0.015*"compani" + 0.013*"start" + 0.011*"develop" + 0.011*"busi" + 0.010*"good" + 0.009*"market" + 0.009*"languag" + 0.009*"creat" + 0.008*"program" + 0.008*"product" + 0.008*"free" + 0.008*"make" + 0.008*"s" + 0.007*"onlin"'), (3, '0.022*"engin" + 0.019*"trump" + 0.014*"mean" + 0.011*"prepar" + 0.011*"studi" + 0.011*"univers" + 0.011*"best" + 0.011*"cours" + 0.010*"social" + 0.009*"complet" + 0.009*"exam" + 0.008*"can" + 0.008*"relat" + 0.008*"good" + 0.007*"gener" + 0.007*"s" + 0.007*"data" + 0.007*"math" + 0.007*"jee" + 0.007*"media"'), (4, '0.040*"can" + 0.015*"1" + 0.015*"use" + 0.013*"buy" + 0.012*"best" + 0.011*"number" + 0.011*"account" + 0.010*"servic" + 0.009*"get" + 0.009*"test" + 0.009*"car" + 0.008*"bank" + 0.008*"hous" + 0.008*"cost" + 0.007*"food" + 0.007*"build" + 0.007*"offic" + 0.006*"watch" + 0.006*"much" + 0.006*"song"'), (5, '0.026*"countri" + 0.018*"live" + 0.017*"american" + 0.015*"peopl" + 0.014*"india" + 0.012*"like" + 0.011*"world" + 0.010*"differ" + 0.010*"us" + 0.010*"consid" + 0.009*"s" + 0.009*"america" + 0.008*"usa" + 0.008*"chines" + 0.008*"muslim" + 0.007*"big" + 0.007*"mani" + 0.007*"citi" + 0.007*"uk" + 0.007*"non"'), (6, '0.031*"can" + 0.016*"quora" + 0.014*"question" + 0.014*"get" + 0.013*"ever" + 0.012*"t" + 
0.012*"s" + 0.010*"name" + 0.010*"ask" + 0.009*"answer" + 0.009*"anyon" + 0.007*"peopl" + 0.007*"help" + 0.007*"ve" + 0.007*"will" + 0.007*"stop" + 0.007*"right" + 0.006*"problem" + 0.006*"one" + 0.006*"made"'), (7, '0.034*"get" + 0.034*"can" + 0.022*"job" + 0.019*"take" + 0.017*"time" + 0.014*"school" + 0.011*"day" + 0.011*"long" + 0.011*"colleg" + 0.011*"go" + 0.011*"caus" + 0.011*"place" + 0.010*"work" + 0.010*"experi" + 0.009*"best" + 0.008*"will" + 0.008*"one" + 0.008*"4" + 0.007*"high" + 0.007*"famili"'), (8, '0.035*"like" + 0.034*"t" + 0.028*"peopl" + 0.017*"feel" + 0.016*"can" + 0.016*"s" + 0.015*"know" + 0.015*"don" + 0.014*"life" + 0.014*"think" + 0.013*"say" + 0.013*"girl" + 0.012*"person" + 0.012*"want" + 0.011*"love" + 0.011*"someon" + 0.010*"look" + 0.010*"women" + 0.009*"just" + 0.009*"friend"'), (9, '0.032*"year" + 0.027*"can" + 0.019*"get" + 0.015*"2" + 0.015*"will" + 0.013*"money" + 0.012*"old" + 0.011*"3" + 0.010*"much" + 0.010*"interest" + 0.010*"2017" + 0.009*"5" + 0.009*"talk" + 0.009*"10" + 0.008*"date" + 0.008*"age" + 0.008*"first" + 0.008*"game" + 0.008*"make" + 0.008*"month"')]
# +
# corpus_topics = ldamodel.get_document_topics(corpus, per_word_topics=False)
# doc_topics = [doc_topics for doc_topics in corpus_topics]
# q_i = 10
# print(corpus_topics[q_i])
# print(max(corpus_topics[q_i], key=lambda x: x[1]))
# print(dat.question_text[q_i])
# -
# (8, 0.52495134)
# What can you say about feminism?
# Here is topic 8 with 30 words:
# (8, '0.035*"like" + 0.034*"t" + 0.028*"peopl" + 0.017*"feel" + 0.016*"can" + 0.016*"s" + 0.015*"know" + 0.015*"don" + 0.014*"life" + 0.014*"think" + 0.013*"say" + 0.013*"girl" + 0.012*"person" + 0.012*"want" + 0.011*"love" + 0.011*"someon" + 0.010*"look" + 0.010*"women" + 0.009*"just" + 0.009*"friend" + 0.008*"make" + 0.008*"one" + 0.008*"guy" + 0.008*"men" + 0.008*"realli" + 0.007*"sex" + 0.007*"see" + 0.007*"tell" + 0.006*"call" + 0.006*"m"')
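# The TODO above calls for coherence evaluation. gensim ships `CoherenceModel` for this; as a minimal illustration of the idea, here is a UMass-style coherence score over toy documents (the toy corpus and word lists are illustrative assumptions):

```python
import numpy as np

def umass_coherence(top_words, docs):
    """UMass-style coherence: sum of log((D(w_i, w_j) + 1) / D(w_j)) over
    ordered pairs of a topic's top words, where D counts the documents
    containing the given word(s). Higher means more coherent."""
    doc_sets = [set(d) for d in docs]
    score = 0.0
    for i in range(1, len(top_words)):
        for j in range(i):
            w_i, w_j = top_words[i], top_words[j]
            d_wj = sum(w_j in s for s in doc_sets)
            d_wi_wj = sum(w_i in s and w_j in s for s in doc_sets)
            if d_wj > 0:
                score += np.log((d_wi_wj + 1) / d_wj)
    return score

docs = [['cat', 'dog', 'pet'], ['cat', 'pet'], ['dog', 'walk']]
print(umass_coherence(['cat', 'pet', 'dog'], docs))   # coherent topic
print(umass_coherence(['cat', 'walk', 'pet'], docs))  # less coherent topic
```

# With the real model, something like gensim's `CoherenceModel(model=ldamodel, texts=doc_clean, dictionary=dictionary, coherence='u_mass')` computes this at scale, and can be compared across different numbers of topics.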
# ### First Model Topics
# Just to handwave about what these topics might represent:
# 0. Studies (class, book, read, major)
# 1. ?
# 2. Work / starting a business (learn, work, company, start, business, market)
# 3. Trump + more studies (trump, prepare, exam, university)
# 4. ?
# 5. USA vs Other Countries (country, live, america, people, india, china, world, differ)
# 6. Quora questions (quora, question, ask, answer)
# 7. Getting a job (get, job, time, school, experience)
# 8. Relationship advice (like, feel, love, sex, tell, look, friend)
# 9. Financial advice (year, money, interest, make, month)
#
# Which certainly seems like a valid assortment of topics on Quora! I suspect some topics, such as financial advice, are less likely to contain insincere questions.
#
# Though I'm a little surprised that a clearer US politics cluster didn't turn up.
# ### Need Better Preprocessing!
# The topics are plausible, but it's clear to me that the sloppy pre-processing didn't help.
# #### Review Stemming
#
# Notice that lots of single letters turn up, e.g. "s" and "t", presumably from contractions that were split into tokens. These are not meaningful.
#
# The letters "u" and "c" also show up. Unclear whether this is due to stemming gone wrong or SMS slang ("c u l8r").
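# The tokenizer used above, `RegexpTokenizer(r'\w+')`, keeps runs of word characters, so apostrophes split contractions into single-letter fragments. The standard-library equivalent makes this visible:

```python
import re

# RegexpTokenizer(r'\w+') behaves like re.findall(r'\w+', ...):
# apostrophes are non-word characters, so contractions fall apart.
for text in ["Don't you think?", "I'd say C is hard.", "c u l8r"]:
    print(re.findall(r'\w+', text.lower()))
# → ['don', 't', 'you', 'think']
# → ['i', 'd', 'say', 'c', 'is', 'hard']
# → ['c', 'u', 'l8r']
```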
dat[dat['question_text_processed'].apply(lambda x: 'c' in x)].sample(10)
# OMG mostly references to **C the programming language**!
#
# Other uses:
* Washington D.C.
# * light constant c
# * educational acronyms
# * temperature
# * musical notes
# * chartered accountant
# * math questions
dat['c'] = dat['question_text_processed'].apply(lambda x: 'c' in x)
dat[['target', 'c']].groupby('c').agg(['mean', 'count'])
dat['c_plus_plus'] = dat['question_text'].apply(lambda x: 'c++' in x.lower())
dat[['target', 'c_plus_plus']].groupby('c_plus_plus').agg(['mean', 'count'])
dat[np.logical_and(dat.c_plus_plus, dat.target==1)]
dat['java'] = dat['question_text'].apply(lambda x: 'java ' in x.lower())
dat[['target', 'java']].groupby('java').agg(['mean', 'count'])
dat[np.logical_and(dat.java, dat.target==1)]
dat['python'] = dat['question_text'].apply(lambda x: 'python' in x.lower())
dat[['target', 'python']].groupby('python').agg(['mean', 'count'])
# Python is a language for beginners huh? Yeah, this kid is punk
dat[np.logical_and(dat.python, dat.target==1)]
# Anyhoo, programming-language-related questions are low-frequency and rarely insincere.
dat[dat['question_text_processed'].apply(lambda x: 'd' in x)].sample(10)
# Notes:
# * he'd -> he would; I'd -> I would; Why'd -> why would; you'd -> you would
# * D' are mostly names
# * Ph.D -> Doctor of Philosophy
# * D.C -> Washington DC
dat[dat['question_text_processed'].apply(lambda x: 'u' in x)].sample(10)
# * U.S or U.S. -> United States of America or USA (check embedding vocabulary)
# * U -> You
dat[dat['question_text_processed'].apply(lambda x: 'r' in x)].sample(10)
# #### Review Stop Words
' '.join(get_stop_words('en'))
# Some important notes about this stopword set.
# 1. Questions leading with "why" and "are" were far more likely to be insincere than those leading with "how", "what", "where" and "which". Using these as stop words might be fine for topic modeling, but isn't a good idea for, say, TF-IDF/Naive Bayes.
# 2. It is unclear to me that stopping "can't" and "cannot" but not "can" is a good idea.
# 3. [Could, should, would](https://en.wikipedia.org/wiki/Modal_verb) are modal verbs. Examples include: She can go (ability), You may go (permission), You should go (advice), and You must go (command). These might be problematic for topic modeling, but not something that should be lost for the final models.
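# One way to address both concerns is a custom stop list that whitelists question words and modal verbs; a sketch (the word sets below are illustrative assumptions, not a vetted list):

```python
# Start from a generic English stop list, then whitelist question words and
# modal verbs that carry signal for insincerity. Both word sets here are
# illustrative assumptions, not a vetted list.
generic_stops = {'the', 'a', 'an', 'is', 'are', 'why', 'how', 'what',
                 'where', 'which', 'can', "can't", 'could', 'should', 'would'}
keep = {'why', 'are', 'how', 'what', 'where', 'which',
        "can't", 'could', 'should', 'would'}
custom_stops = generic_stops - keep

def filter_tokens(tokens):
    # drop stop words but preserve the whitelisted ones
    return [t for t in tokens if t not in custom_stops]

print(filter_tokens(['why', 'are', 'the', 'planes', 'loud']))  # 'why'/'are' survive
```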
# ## Is sentiment or subjectivity associated with insincerity?
dat['sentiment'] = dat.question_text.apply(lambda x: TextBlob(x).sentiment)
dat['polarity'] = dat['sentiment'].apply(lambda x: x.polarity)
dat['subjectivity'] = dat['sentiment'].apply(lambda x: x.subjectivity)
dat.head()
dat[['polarity', 'subjectivity', 'target']].groupby('target').mean()
# Grouped boxplot
_ = sns.boxplot(x="target", y="polarity", data=dat, palette="Set1")
# Grouped boxplot
_ = sns.boxplot(x="target", y="subjectivity", data=dat, palette="Set1")
# Polarity is slightly higher for the sincere questions, meaning that they are slightly more positive.
# Subjectivity is slightly higher among insincere questions.
# These differences aren't large, but they do follow the expected directionality.
#
# TextBlob uses the Pattern library, which in turn uses a corpus based on movie reviews. This may not map very well to the types of language used on Quora. NLTK does include other corpus options, so those could be evaluated.
#
# In addition, pre-processing this text first may improve sentiment scoring.
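# TextBlob's polarity is roughly a lexicon average, which is why the training corpus (movie reviews) matters; a toy sketch of the mechanism (the lexicon entries are illustrative assumptions, not Pattern's real scores):

```python
# Toy lexicon-average polarity in the spirit of Pattern/TextBlob.
lexicon = {'great': 0.8, 'love': 0.5, 'terrible': -0.9, 'boring': -0.6}

def toy_polarity(text):
    # average the scores of any lexicon words found; 0.0 if none match
    scores = [lexicon[w] for w in text.lower().split() if w in lexicon]
    return sum(scores) / len(scores) if scores else 0.0

print(toy_polarity('a great movie and I love it'))  # positive
print(toy_polarity('terrible and boring'))          # negative
```

# Words outside the lexicon contribute nothing, so domain-specific vocabulary (Quora slang, technical terms) is simply invisible to a movie-review lexicon.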
# ## Next Steps
#
# * Pre-processing (text cleaning)
| notebooks/1_Topic_and_sentiment_V1.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import pandas as pd
import numpy as np
import xarray as xr
import metpy.calc as mpcalc
import matplotlib.pyplot as plt
from datetime import datetime, timedelta
from metpy.units import units
# +
# Open dataset
ds = xr.open_dataset('http://nomads.ncep.noaa.gov:80/dods/nam/nam20201121/nam1hr_12z')
# Get profile data variables
ds = ds[['tmpprs', 'rhprs', 'ugrdprs', 'vgrdprs', 'hgtprs']]
# Select time
ds = ds.sel(time=ds.time[0])
# Select levels
ds = ds.sel(lev=slice(1000, 100))
# Extract data for lat/lon point
ds = ds.sel(lat=42.38, lon=-76.87, method='nearest', tolerance=1)
ds
# -
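# The `sel(..., method='nearest', tolerance=1)` call above picks the grid point closest to the requested lat/lon and raises if none lies within the tolerance; conceptually, for one coordinate (toy grid, not the real NAM grid):

```python
import numpy as np

# Toy 0.25-degree latitude grid standing in for the model grid.
lats = np.arange(40.0, 45.0, 0.25)
target, tolerance = 42.38, 1.0

idx = int(np.abs(lats - target).argmin())        # index of nearest grid point
within = abs(float(lats[idx]) - target) <= tolerance
nearest = float(lats[idx]) if within else None
print(nearest)  # 42.5 (0.12 away, closer than 42.25 at 0.13)
```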
ds.to_dataframe()
| NOMADS/NAM_sounding_data.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] colab_type="text" id="Y2TnpdCDuDnT"
# **Initial step:** Put the extracted heavy_makeup_CelebA folder in your Google Drive,
# so that you can mount your data into this notebook!
# + colab={} colab_type="code" id="09fk3N3Gn1Pl"
from google.colab import drive
drive.mount('/content/drive')
# + colab={} colab_type="code" id="KJihfaYeoG8m"
# if you mounted Google Drive correctly, the following commands should execute without errors
# !ls /content/drive/
# %cd "/content/drive/My Drive"
# %cd "heavy_makeup_CelebA"
# !ls
# + colab={} colab_type="code" id="NCmzcH9siOTH"
from __future__ import print_function, division
import torch
import torch.nn as nn
import torch.optim as optim
from torch.optim import lr_scheduler
import numpy as np
import torchvision
from torchvision import datasets, models, transforms
import matplotlib.pyplot as plt
import time
import os
import copy
plt.ion() # interactive mode
# + colab={} colab_type="code" id="RTZrF2PdjPcU"
## Please try to adjust data augmentation strategy here
data_transforms = {
'train': transforms.Compose([
transforms.RandomResizedCrop(224), # random crop over scale and aspect ratio, output 224x224: needs modification
transforms.RandomHorizontalFlip(), # random horizontal flip, 50% flipped / 50% not: ok
transforms.ToTensor(),
transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]) # per-channel normalization: ok, does not affect the image crop
]),
'val': transforms.Compose([
transforms.Resize(256),
transforms.CenterCrop(224), # center-crop the image to the given size: to be checked
transforms.ToTensor(),
transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])
]),
}
# the directory of your data in Google Drive
data_dir = '/content/drive/My Drive/heavy_makeup_CelebA'
#data_dir = './heavy_makeup_CelebA' # on my PC
image_datasets = {x: datasets.ImageFolder(os.path.join(data_dir, x), data_transforms[x]) for x in ['train', 'val']}
dataloaders = {x: torch.utils.data.DataLoader(image_datasets[x], batch_size=4, shuffle=True, num_workers=4) for x in ['train', 'val']}
dataset_sizes = {x: len(image_datasets[x]) for x in ['train', 'val']}
class_names = image_datasets['train'].classes
#print(torch.cuda.is_available())
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
# + [markdown] colab_type="text" id="69P-kXNr46dQ"
# Let's show some training data. Make sure the labels match the images.
# + colab={} colab_type="code" id="QxL65Tiole9n"
def imshow(inp, title=None):
"""Imshow for Tensor."""
inp = inp.numpy().transpose((1, 2, 0))
mean = np.array([0.485, 0.456, 0.406])
std = np.array([0.229, 0.224, 0.225])
inp = std * inp + mean
inp = np.clip(inp, 0, 1)
plt.imshow(inp)
if title is not None:
plt.title(title)
plt.pause(0.001) # pause a bit so that plots are updated
# Get a batch of training data
inputs, classes = next(iter(dataloaders['train']))
# Make a grid from batch
out = torchvision.utils.make_grid(inputs)
imshow(out, title=[class_names[x] for x in classes])
# + [markdown] colab_type="text" id="pJkOaVv9lwJa"
# Training the model
# Now, let's write a general function to train a model. Here, we will illustrate:
#
# Scheduling the learning rate
# Saving the best model
# In the following, parameter scheduler is an LR scheduler object from torch.optim.lr_scheduler.
# + colab={} colab_type="code" id="Z6RZDPY6lxoS"
def train_model(model, criterion, optimizer, scheduler, num_epochs=25):
since = time.time()
best_model_wts = copy.deepcopy(model.state_dict())
best_acc = 0.0
for epoch in range(num_epochs):
print('Epoch {}/{}'.format(epoch, num_epochs - 1))
print('-' * 10)
# Each epoch has a training and validation phase
for phase in ['train', 'val']:
if phase == 'train':
model.train() # Set model to training mode
else:
model.eval() # Set model to evaluate mode
running_loss = 0.0
running_corrects = 0
# Iterate over data.
for inputs, labels in dataloaders[phase]:
inputs = inputs.to(device)
labels = labels.to(device)
# zero the parameter gradients
optimizer.zero_grad()
# forward
# track history if only in train
with torch.set_grad_enabled(phase == 'train'):
outputs = model(inputs)
_, preds = torch.max(outputs, 1)
loss = criterion(outputs, labels)
# backward + optimize only if in training phase
if phase == 'train':
loss.backward()
optimizer.step()
# statistics
running_loss += loss.item() * inputs.size(0)
running_corrects += torch.sum(preds == labels.data)
if phase == 'train':
scheduler.step()
epoch_loss = running_loss / dataset_sizes[phase]
epoch_acc = running_corrects.double() / dataset_sizes[phase]
print('{} Loss: {:.4f} Acc: {:.4f}'.format(
phase, epoch_loss, epoch_acc))
# deep copy the model
if phase == 'val' and epoch_acc > best_acc:
best_acc = epoch_acc
best_model_wts = copy.deepcopy(model.state_dict())
print()
time_elapsed = time.time() - since
print('Training complete in {:.0f}m {:.0f}s'.format(
time_elapsed // 60, time_elapsed % 60))
print('Best val Acc: {:4f}'.format(best_acc))
# load best model weights
model.load_state_dict(best_model_wts)
return model
# + [markdown] colab_type="text" id="nQt3IXA0_cRT"
# **Case 1:**
# using ConvNet as fixed feature extractor
# Here, we need to freeze all the network except the final layer. We need to set requires_grad == False to freeze the parameters so that the gradients are not computed in backward().
#
# You can read more about this in the documentation here.
# + colab={} colab_type="code" id="WkcOdA4o_gzh"
model_conv = models.alexnet(pretrained=True)
for param in model_conv.parameters():
param.requires_grad = False
# (1) freeze the parameters so that the gradients are not computed in backward().
# (2) Parameters of newly constructed modules have requires_grad=True by default
model_conv.classifier = nn.Sequential(*[model_conv.classifier[i] for i in range(6)]) # remove the last layer (4096x1000)
addition_fc = nn.Linear(4096, 2) # the layer to be stacked
model_conv.classifier = nn.Sequential(model_conv.classifier,addition_fc)
model_conv = model_conv.to(device)
criterion = nn.CrossEntropyLoss()
# As opposed to before, only the parameters that still require gradients (the new final layer) are optimized
optimizer_conv = optim.SGD(filter(lambda p: p.requires_grad, model_conv.parameters()), lr=0.001, momentum=0.9)
# Decay LR by a factor of 0.1 every 5 epochs
exp_lr_scheduler = lr_scheduler.StepLR(optimizer_conv, step_size=5, gamma=0.1)
# + [markdown] colab_type="text" id="ASUNHxIrVQjX"
# Let's train the model as a feature extractor
# + colab={} colab_type="code" id="IpNyOhSM_llg"
model_conv = train_model(model_conv, criterion, optimizer_conv,
exp_lr_scheduler, num_epochs=25)
# + [markdown] colab_type="text" id="rb942ovS_Tvw"
# **Q1-1**: The validation accuracy of a pretrained Alexnet used as a feature extractor is **0.80**
# + [markdown] colab_type="text" id="TQSHmLcdmgpf"
# **Case 2**: Finetuning the convnet
# Load a pretrained model and reset final fully connected layer.
# + colab={} colab_type="code" id="4TwcHLSKmi03"
## Alexnet
model_ft = models.alexnet(pretrained=True)
model_ft.classifier = nn.Sequential(*[model_ft.classifier[i] for i in range(6)]) # remove the last layer (4096x1000)
addition_fc = nn.Linear(4096, 2) # the layer to be stacked
model_ft.classifier = nn.Sequential(model_ft.classifier,addition_fc)
#model_ft = nn.Sequential(model_ft,addition_fc)
print(model_ft)
##
model_ft = model_ft.to(device)
criterion = nn.CrossEntropyLoss()
# Observe that all parameters are being optimized
optimizer_ft = optim.SGD(model_ft.parameters(), lr=0.001, momentum=0.9)
# step size could be tuned further
exp_lr_scheduler = lr_scheduler.StepLR(optimizer_ft, step_size=5, gamma=0.1)
# + [markdown] colab_type="text" id="dI6yUHZrmqtd"
# Train and evaluate
# It should take around 15-25 min on CPU. On GPU though, it takes less than 5 minutes.
# + colab={} colab_type="code" id="Tp07ugkXms8l"
model_ft = train_model(model_ft, criterion, optimizer_ft, exp_lr_scheduler, num_epochs=25)
# + [markdown] colab_type="text" id="8JOMQBCy_ppb"
# **Q1-2**: The validation accuracy of a pretrained Alexnet after it is finetuned is **0.875**
# + [markdown] colab_type="text" id="_OU2kGYPmUGz"
# **Visualizing the model predictions:**
# Generic function to display predictions for a few images
# + colab={} colab_type="code" id="G3u5SQux51pH"
def visualize_model(model, num_images=6):
was_training = model.training
model.eval()
images_so_far = 0
fig = plt.figure()
with torch.no_grad():
for i, (inputs, labels) in enumerate(dataloaders['val']):
inputs = inputs.to(device)
labels = labels.to(device)
outputs = model(inputs)
_, preds = torch.max(outputs, 1)
for j in range(inputs.size()[0]):
images_so_far += 1
ax = plt.subplot(num_images//2, 2, images_so_far)
ax.axis('off')
ax.set_title('predicted: {}'.format(class_names[preds[j]]))
imshow(inputs.cpu().data[j])
if images_so_far == num_images:
model.train(mode=was_training)
return
model.train(mode=was_training)
#model_ft = train_model(model_ft, criterion, optimizer_ft, exp_lr_scheduler, num_epochs=25)
visualize_model(model_ft)
# + [markdown] colab_type="text" id="okvapCcx0SaC"
# **Case 3**: Finetuning a non-pretrained model
# + colab={} colab_type="code" id="QBiujf2t0SaC"
## Alexnet
model_nft = models.alexnet(pretrained=False)
model_nft.classifier = nn.Sequential(*[model_nft.classifier[i] for i in range(6)]) # remove the last layer (4096x1000)
addition_fc = nn.Linear(4096, 2) # the layer to be stacked
model_nft.classifier = nn.Sequential(model_nft.classifier,addition_fc)
#model_ft = nn.Sequential(model_ft,addition_fc)
print(model_nft)
##
model_nft = model_nft.to(device)
criterion = nn.CrossEntropyLoss()
# Observe that all parameters are being optimized
optimizer_nft = optim.SGD(model_nft.parameters(), lr=0.001, momentum=0.9)
# step size could be tuned further
exp_lr_scheduler = lr_scheduler.StepLR(optimizer_nft, step_size=5, gamma=0.1)
# + colab={} colab_type="code" id="lPKjVjFR0SaE"
model_nft = train_model(model_nft, criterion, optimizer_nft, exp_lr_scheduler, num_epochs=25)
# + [markdown] colab_type="text" id="ufRwIHTbDRJo"
# **Q1-3**: The validation accuracy of a non-pretrained Alexnet after it is trained is **0.67**
# + [markdown] colab_type="text" id="l_cuO3UN0SaG"
# Case 4: Correct the data augmentation strategy in order to let the entire face of each image be seen and report the validation accuracy of a pre-trained Alexnet as a feature extractor in the two-class classification problem
# + [markdown] colab_type="text" id="6TL1ejs60SaH"
# After inspecting all the training & validation pictures, I found that all the faces are near the center of the image. So I changed the data transform from RandomResizedCrop to CenterCrop as the data augmentation strategy.
# + colab={} colab_type="code" id="Cj2TiKHc0SaH"
## Please try to adjust data augmentation strategy here
data_transforms = {
'train': transforms.Compose([
#transforms.RandomResizedCrop(224), # random crop over scale and aspect ratio, output 224x224: needs modification
transforms.Resize(256),
transforms.CenterCrop(224),
transforms.RandomHorizontalFlip(), # random horizontal flip, 50% flipped / 50% not: ok
transforms.ToTensor(),
transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]) # per-channel normalization: ok, does not affect the image crop
]),
'val': transforms.Compose([
transforms.Resize(256),
transforms.CenterCrop(224), # center-crop the image to the given size: to be checked
transforms.ToTensor(),
transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])
]),
}
# the directory of your data in Google Drive
data_dir = '/content/drive/My Drive/heavy_makeup_CelebA'
#data_dir = './heavy_makeup_CelebA' # on my PC
image_datasets = {x: datasets.ImageFolder(os.path.join(data_dir, x), data_transforms[x]) for x in ['train', 'val']}
dataloaders = {x: torch.utils.data.DataLoader(image_datasets[x], batch_size=4, shuffle=True, num_workers=4) for x in ['train', 'val']}
dataset_sizes = {x: len(image_datasets[x]) for x in ['train', 'val']}
class_names = image_datasets['train'].classes
#print(torch.cuda.is_available())
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
# + [markdown] colab_type="text" id="vwVdDu750SaJ"
# After correcting the data augmentation strategy,
# all faces have been cropped correctly:
# + colab={} colab_type="code" id="Y4FBReDv0SaN"
# Get a batch of training data
inputs, classes = next(iter(dataloaders['train']))
# Make a grid from batch
out = torchvision.utils.make_grid(inputs)
imshow(out, title=[class_names[x] for x in classes])
# + [markdown] colab_type="text" id="1QzaCFF_0SaL"
# Before: some faces were not cropped correctly
# + colab={} colab_type="code" id="GrZ8pYPG0SaK"
# Get a batch of training data
inputs, classes = next(iter(dataloaders['train']))
# Make a grid from batch
out = torchvision.utils.make_grid(inputs)
imshow(out, title=[class_names[x] for x in classes])
# + colab={} colab_type="code" id="qkAkw0Jj0SaU"
model_conv = models.alexnet(pretrained=True)
for param in model_conv.parameters():
param.requires_grad = False
# (1) freeze the parameters so that the gradients are not computed in backward().
# (2) Parameters of newly constructed modules have requires_grad=True by default
model_conv.classifier = nn.Sequential(*[model_conv.classifier[i] for i in range(6)]) # remove the last layer (4096x1000)
addition_fc = nn.Linear(4096, 2) # the layer to be stacked
model_conv.classifier = nn.Sequential(model_conv.classifier,addition_fc)
model_conv = model_conv.to(device)
criterion = nn.CrossEntropyLoss()
# As opposed to before, only the parameters that still require gradients (the new final layer) are optimized
optimizer_conv = optim.SGD(filter(lambda p: p.requires_grad, model_conv.parameters()), lr=0.001, momentum=0.9)
# Decay LR by a factor of 0.1 every 5 epochs
exp_lr_scheduler = lr_scheduler.StepLR(optimizer_conv, step_size=5, gamma=0.1)
# + colab={} colab_type="code" id="PeZp9s6X0SaV"
model_conv = train_model(model_conv, criterion, optimizer_conv,
exp_lr_scheduler, num_epochs=25)
# + [markdown] colab_type="text" id="mSBCJiodF7Ek"
# **Q1-5**: After correcting the data augmentation strategy, the validation accuracy of the pre-trained Alexnet as a feature extractor is **0.85** (>0.80 before the correction).
# + [markdown] colab_type="text" id="OGDDZKkq0SaY"
# Case 5: Correct the data augmentation strategy in order to let the entire face of each image be seen, and report the validation accuracy of a pre-trained Alexnet after it is **fine-tuned** in the two-class classification problem
# + colab={} colab_type="code" id="2DcQJFtr0SaZ"
## Alexnet
model_ft = models.alexnet(pretrained=True)
model_ft.classifier = nn.Sequential(*[model_ft.classifier[i] for i in range(6)]) # remove the last layer (4096x1000)
addition_fc = nn.Linear(4096, 2) # the layer to be stacked
model_ft.classifier = nn.Sequential(model_ft.classifier,addition_fc)
#model_ft = nn.Sequential(model_ft,addition_fc)
print(model_ft)
##
model_ft = model_ft.to(device)
criterion = nn.CrossEntropyLoss()
# Observe that all parameters are being optimized
optimizer_ft = optim.SGD(model_ft.parameters(), lr=0.001, momentum=0.9)
# step size could be tuned further
exp_lr_scheduler = lr_scheduler.StepLR(optimizer_ft, step_size=5, gamma=0.1)
# + colab={} colab_type="code" id="wt2W7u6w0Sab"
model_ft = train_model(model_ft, criterion, optimizer_ft, exp_lr_scheduler, num_epochs=25)
# + [markdown] colab_type="text" id="ukI_JOl_GME8"
# **Q1-6**: After correcting the data augmentation strategy, the validation accuracy of the pre-trained Alexnet after it is fine-tuned is **0.8875** (>0.80 before the correction)
| Transfer Learning and Semantic Segmentation/Q1-8.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Logistic regression of mouse behaviour data
# ## Using softmax in tensorflow
#
# #### M.Evans 02.06.16
import tensorflow as tf
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
import seaborn as sns # caused kernel to die 02.06.16
import random
from scipy.signal import resample
# %matplotlib inline
from IPython import display # For plotting intermediate results
# +
# # ! pip install pandas
# # ! pip install seaborn
# import seaborn as sns
# # ! pip install matplotlib
# # ! pip install sklearn
# +
# Import the data. For one mouse ATM
theta = pd.read_csv('~/work/whiskfree/data/theta_36.csv',header=None)
kappa = pd.read_csv('~/work/whiskfree/data/kappa_36.csv',header=None)
tt = pd.read_csv('~/work/whiskfree/data/trialtype_36.csv',header=None)
ch = pd.read_csv('~/work/whiskfree/data/choice_36.csv',header=None)
# -
from scipy.signal import resample
from scipy.stats import zscore
# Restrict analysis to 500ms post-touch and downsample with resample
theta_r = np.array([[resample(theta.values.squeeze()[i,950:1440],50)] for i in range(0,theta.shape[0])])
theta_r = zscore(theta_r.squeeze(),axis=None)
print(theta_r.shape)
_ = plt.plot(theta_r[:10].T)
kappa_r = np.array([[resample(kappa.values.squeeze()[i,950:1440],50)] for i in range(0,kappa.shape[0])])
kappa_r = zscore(kappa_r.squeeze(),axis=None)
print(kappa_r.shape)
_ = plt.plot(kappa_r[:10].T)
# _ = plt.plot(zscore(kappa_r[:10],axis=1).T)
# fig,ax = plt.subplots(1,2)
# ax[0].imshow(zscore(kappa_r,axis=None),aspect=float(50/1790),cmap='seismic')
# ax[1].imshow(kappa_r,aspect=float(50/1790),cmap='seismic')
kappa_df = pd.DataFrame(kappa_r)
theta_df = pd.DataFrame(theta_r)
kappa_df[:10].T.plot()
both_df = pd.concat([theta_df,kappa_df],axis=1)
both_df.shape
fig, ax = plt.subplots(figsize=(10,5))
plt.imshow(both_df.values.squeeze(),aspect=float(100/1790))
plt.colorbar()
# +
# np.mean?
# -
# ## Trying to classify trialtype from theta/kappa/both
# First generate a clean dataset, dropping trialtype = 0, as numpy arrays
clean = tt.values !=0
tt_c = tt[tt.values !=0].values
both = both_df.values
both_c = both[clean.squeeze(),:]
both_c.shape
# +
# Turn labels into 'one-hot' array (using a great one-liner from reddit :sunglasses:)
labs = np.eye(3)[tt_c-1]
# y[np.arange(3), a] = 1
labs = labs.squeeze()
fig, ax = plt.subplots(2,1,figsize = (20,2))
ax[0].plot(tt_c[0:100])
ax[1].imshow(labs[0:100,:].T,interpolation = 'none',origin='lower')
labs.shape
# +
# Let's use 20% of the data for testing and 80% for training
trainsize = int(len(both_c) * 0.8)
testsize = len(both_c) - trainsize
print('Desired training/test set sizes:',trainsize, testsize)
subset = random.sample(range(len(both_c)),trainsize)
traindata = both_c[subset,:]
trainlabs = labs[subset,:]
testdata = np.delete(both_c,subset,axis=0)
testlabs = np.delete(labs,subset,axis=0)
print('training set shape:',traindata.shape)
print('test set shape:',testdata.shape)
print('training labels shape:',trainlabs.shape)
print('test labels shape:',testlabs.shape)
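# An equivalent split could also be done with scikit-learn's `train_test_split` (a sketch, assuming scikit-learn is installed; `X` and `y` below are hypothetical stand-ins for `both_c` and `labs`):

```python
import numpy as np
from sklearn.model_selection import train_test_split

# Hypothetical stand-ins for both_c (features) and labs (one-hot labels)
X = np.arange(20).reshape(10, 2)
y = np.eye(3)[np.array([0, 1, 2, 0, 1, 2, 0, 1, 2, 0])]

# 80/20 split, shuffled by default
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)
print(X_train.shape, X_test.shape)  # (8, 2) (2, 2)
```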
# +
# Construct the data flow graph following the TF beginner's MNIST example
x = tf.placeholder(tf.float32,[None,100]) # data
W = tf.Variable(tf.zeros([100,3])) # W and b are model variables to be fit by the model
b = tf.Variable(tf.zeros([3])) # 3 possible trial types
y = tf.nn.softmax(tf.matmul(x,W) + b) # This is the softmax nn model
y_ = tf.placeholder(tf.float32,[None,3]) # Placeholder for correct answers (test labels)
cross_entropy = tf.reduce_mean(-tf.reduce_sum(y_ * tf.log(y), reduction_indices=[1])) # Cross entropy loss
train_step = tf.train.GradientDescentOptimizer(0.5).minimize(cross_entropy) # training step
# -
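# The mean cross-entropy defined in the graph above can be sanity-checked with plain NumPy (made-up probabilities, not model output):

```python
import numpy as np

# Softmax outputs for 2 samples over 3 classes (rows sum to 1)
y_pred = np.array([[0.7, 0.2, 0.1],
                   [0.1, 0.8, 0.1]])
# One-hot true labels
y_true = np.array([[1.0, 0.0, 0.0],
                   [0.0, 1.0, 0.0]])

# Mean over samples of -sum(y_true * log(y_pred)), matching the TF expression
cross_entropy = np.mean(-np.sum(y_true * np.log(y_pred), axis=1))
print(round(cross_entropy, 4))  # 0.2899
```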
# Function to load a random batch of data
def next_batch(data,labels,n):
subset = random.sample(range(len(data)),n)
batch_data = data[subset,:]
batch_labels = labels[subset,:]
return batch_data, batch_labels
# +
# Test the next_batch function
from IPython import display
fig,ax = plt.subplots(2,1)
for i in range(10):
batch_xs, batch_ys = next_batch(traindata,trainlabs,10)
ax[0].plot(batch_xs.T)
ax[1].imshow(batch_ys.T,interpolation='none')
display.clear_output(wait=True)
display.display(plt.gcf())
# -
# +
# Set wheels in motion and train the model
init = tf.global_variables_initializer()  # initialize_all_variables() was deprecated in favour of this
sess = tf.Session() # Start tf session
sess.run(init)
# -
# Run a training loop
for i in range(10000):
batch_xs, batch_ys = next_batch(traindata,trainlabs,250)
sess.run(train_step,feed_dict={x: batch_xs, y_: batch_ys})
# Evaluate model performance
correct_prediction = tf.equal(tf.argmax(y,1),tf.argmax(y_,1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction,tf.float32))
print(sess.run(accuracy,feed_dict={x: testdata,y_:testlabs}))
# Compare the mouse to the model with a confusion matrix
preds = sess.run(y,feed_dict={x:testdata})
preds
with sns.axes_style("white"):
fig, ax = plt.subplots(2,1,figsize=[20,1])
ax[0].imshow(preds.T,interpolation='none',aspect = 3)
ax[1].imshow(testlabs.T,interpolation='none',aspect = 3)
fig,ax = plt.subplots(1,2)
ax[0].hist(np.argmax(preds,1))
ax[1].hist(np.argmax(testlabs,1))
from sklearn.metrics import confusion_matrix
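# The imported `confusion_matrix` expects class indices rather than one-hot rows, so both arrays need an `argmax` first. A minimal sketch with dummy arrays standing in for `preds` and `testlabs`:

```python
import numpy as np
from sklearn.metrics import confusion_matrix

# Dummy 3-class one-hot labels and softmax-style predictions
testlabs = np.eye(3)[[0, 1, 2, 1, 0]]
preds = np.eye(3)[[0, 1, 2, 2, 0]]  # one class-1 sample misclassified as class 2

# Rows are true classes, columns are predicted classes
cm = confusion_matrix(np.argmax(testlabs, 1), np.argmax(preds, 1))
print(cm)
# [[2 0 0]
#  [0 1 1]
#  [0 0 1]]
```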
# +
# To do: repeat but with combined data from all mice (interesting to see if this helps)
| tf/.ipynb_checkpoints/softmax_tf-checkpoint.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## Plot ActivitySim memory usage over the model run
import os
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
# +
def read_mem_log(mem_log_file_path, col_name):
mem_df = pd.read_csv(mem_log_file_path)
t = pd.to_datetime(mem_df.time, errors='coerce', format='%Y/%m/%d %H:%M:%S')
seconds = (t - t.min()).dt.total_seconds()
minutes = (seconds / 60)
mem_df['minutes'] = minutes.round(2)
mem_df['mem_gb'] = (mem_df[col_name].astype(np.int64) / 1_000_000_000)
mem_df = mem_df.sort_values('minutes')
mem_df = mem_df[['mem_gb', 'minutes']].set_index('minutes')
#print(mem_df)
return mem_df
def plot_mem_usage(mem_log_file_path, col_name, title):
mem_df = read_mem_log(mem_log_file_path, col_name)
with plt.style.context('seaborn'):
ax = mem_df['mem_gb'].plot()
ax.set_ylabel(f"{col_name} (GB)")
ax.set_xlabel("runtime (minutes)")
plt.title(title)
# -
plot_mem_usage("output/omnibus_mem.csv", 'uss', 'memory usage')
| activitysim/examples/example_mtc/notebooks/memory_usage.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import csv
import itertools
import math
def transform_point(target):
return {
'featureType': target['featureType'],
'latitude': float(target['latitude']),
'longitude': float(target['longitude'])
}
with open('combined_no_dedupe.csv') as f:
points = [transform_point(x) for x in csv.DictReader(f)]
already_seen = {'supermarket': [], 'fastFood': [], 'home': []}
for point in points:
feature_type = point['featureType']
target_list = already_seen[feature_type]
def get_distance(other):
latitude_diff = abs(other['latitude'] - point['latitude'])
longitude_diff = abs(other['longitude'] - point['longitude'])
return math.sqrt(latitude_diff ** 2 + longitude_diff ** 2)
matching = filter(lambda x: get_distance(x) < 0.001, target_list)
num_matching = sum(map(lambda x: 1, matching))
if num_matching == 0:
target_list.append(point)
all_records = itertools.chain(*already_seen.values())
with open('combined_dedupe.csv', 'w') as f:
writer = csv.DictWriter(f, fieldnames=['featureType', 'latitude', 'longitude'])
writer.writeheader()
writer.writerows(all_records)
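# The get_distance above treats latitude/longitude as planar coordinates, which is adequate for a ~0.001-degree threshold but degrades at high latitudes. A more robust alternative (not part of the original script) is the haversine great-circle distance:

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two lat/lon points."""
    r = 6371000.0  # mean Earth radius in metres
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

# 0.001 degrees of latitude is roughly 111 metres
print(round(haversine_m(52.0, 0.0, 52.001, 0.0)))  # 111
```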
| transform/Dedupe.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python [default]
# language: python
# name: python3
# ---
# ## Notebook Dashboard
# Example of a fairly complex dashboard. Initially inspired by a Dash tutorial example
# ### Imports
# +
# ipyplotly
from ipyplotly.datatypes import FigureWidget
from ipyplotly.callbacks import Points, InputState
# pandas
import pandas as pd
from pandas.api.types import is_numeric_dtype
# numpy
import numpy as np
# ipywidgets
from ipywidgets import Dropdown, HBox, VBox
# -
df = pd.read_csv('https://raw.githubusercontent.com/plotly/datasets/master/mtcars.csv')
numeric_cols = [col for col in df.columns if is_numeric_dtype(df[col])]
numeric_cols
f = FigureWidget()
f
bar = f.add_bar(y=df.manufacturer.values, orientation='h')
# +
f.layout.margin.l = 120
bar.marker.showscale = True
bar.marker.colorscale = 'viridis'
f.layout.width = 1100
f.layout.height = 800
bar.marker.line.width = 1
bar.marker.line.color = 'darkgray'
# +
trace, points, state = bar, Points(), InputState()
# Bar click callback
def update_click(trace, points, state):
new_clr = np.zeros(df['mpg'].size)
new_clr[points.point_inds] = 1
bar_line_sizes = np.ones(df['mpg'].size)
bar_line_sizes[points.point_inds] = 3
# Update parallel coordinates line color
par.line.color = new_clr
# Update bar line color and width
with f.batch_update():
bar.marker.line.width = bar_line_sizes
bar.marker.line.color = new_clr
bar.on_click(update_click)
bar.on_selected(update_click)
# -
f2 = FigureWidget(layout={'width': 1100})
f2
par = f2.add_parcoords(dimensions=[{
'values': df[col].values,
'label': col,
'range': [np.floor(df[col].min()), np.ceil(df[col].max())]} for col in numeric_cols])
# +
# Set up selection colormap
par.line.colorscale = [[0, 'darkgray'], [1, 'red']]
par.line.cmin = 0
par.line.cmax = 1
par.line.color = np.zeros(df['mpg'].size)
bar.marker.line.colorscale = par.line.colorscale
bar.marker.line.cmin = 0
bar.marker.line.cmax = 1
# +
# Widgets
dd = Dropdown(options=df.columns, description='X', value='mpg')
clr_dd = Dropdown(options=numeric_cols, description='Color')
def update_col(val):
col = dd.value
clr = clr_dd.value
with f.batch_update():
bar.x = df[col].values
bar.marker.color = df[clr].values
bar.marker.colorbar.title = clr
f.layout.xaxis.title = col
dd.observe(update_col, 'value')
clr_dd.observe(update_col, 'value')
update_col(None)
# -
# ## Display Dashboard
# - Dropdowns control barchart x-axis feature and coloring feature
# - Click or select bars to highlight in barchart and parallel coordinate diagram
VBox([f, HBox([dd, clr_dd]), f2])
# Adjust barchart height
f.layout.height = 650
| examples/overviews/Bar PCT dashboard.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
# import numpy as np
# # # !/usr/bin/env python3
# # -*- coding: utf-8 -*-
# """
# Created on 20181219
# @author: zhangji
# Trajection of a ellipse, Jeffery equation.
# """
# # %pylab inline
# pylab.rcParams['figure.figsize'] = (25, 11)
# fontsize = 40
# import numpy as np
# import scipy as sp
# from scipy.optimize import leastsq, curve_fit
# from scipy import interpolate
# from scipy.interpolate import interp1d
# from scipy.io import loadmat, savemat
# # import scipy.misc
# import matplotlib
# from matplotlib import pyplot as plt
# from matplotlib import animation, rc
# import matplotlib.ticker as mtick
# from mpl_toolkits.axes_grid1.inset_locator import inset_axes, zoomed_inset_axes
# from mpl_toolkits.mplot3d import Axes3D, axes3d
# from sympy import symbols, simplify, series, exp
# from sympy.matrices import Matrix
# from sympy.solvers import solve
# from IPython.display import display, HTML
from tqdm import tqdm
from tqdm.notebook import tqdm as tqdm_notebook
# import pandas as pd
# import re
# from scanf import scanf
# import os
# import glob
# from codeStore import support_fun as spf
# from src.support_class import *
# from src import stokes_flow as sf
# rc('animation', html='html5')
# PWD = os.getcwd()
# font = {'size': 20}
# matplotlib.rc('font', **font)
# np.set_printoptions(linewidth=90, precision=5)
import os
import glob
import natsort
import numpy as np
import scipy as sp
from scipy.optimize import leastsq, curve_fit
from scipy import interpolate, integrate
from scipy import spatial, signal
# from scipy.interpolate import interp1d
from scipy.io import loadmat, savemat
# import scipy.misc
import importlib
from IPython.display import display, HTML
import pandas as pd
import pickle
import re
from scanf import scanf
import matplotlib
# matplotlib.use('agg')
from matplotlib import pyplot as plt
import matplotlib.colors as colors
from matplotlib import animation, rc
import matplotlib.ticker as mtick
from mpl_toolkits.axes_grid1.inset_locator import inset_axes, zoomed_inset_axes
from mpl_toolkits.mplot3d import Axes3D, axes3d
from mpl_toolkits.axes_grid1.axes_divider import make_axes_locatable
from mpl_toolkits.mplot3d.art3d import Line3DCollection
from matplotlib import cm
from tqdm import tqdm
from tqdm.notebook import tqdm as tqdm_notebook
from time import time
from src.support_class import *
from src import jeffery_model as jm
from codeStore import support_fun as spf
from codeStore import support_fun_table as spf_tb
# # %matplotlib notebook
# %matplotlib inline
rc('animation', html='html5')
fontsize = 40
PWD = os.getcwd()
# -
fig = plt.figure(figsize=(2, 2))
fig.patch.set_facecolor('white')
ax0 = fig.add_subplot(1, 1, 1)
job_dir = 'ecoliB01_a'
table_name = 'planeShearRatex_1d'
# +
# show phase map of theta-phi, load date
importlib.reload(spf_tb)
t_headle = '(.*?).pickle'
t_path = os.listdir(os.path.join(PWD, job_dir))
filename_list = [filename for filename in os.listdir(os.path.join(PWD, job_dir))
if re.match(t_headle, filename) is not None]
for tname in tqdm_notebook(filename_list[:]):
tpath = os.path.join(PWD, job_dir, tname)
with open(tpath, 'rb') as handle:
tpick = pickle.load(handle)
Table_t = tpick['Table_t']
if 'Table_dt' not in tpick.keys():
Table_dt = np.hstack((np.diff(tpick['Table_t']), 0))
else:
Table_dt = tpick['Table_dt']
Table_X = tpick['Table_X']
Table_P = tpick['Table_P']
Table_P2 = tpick['Table_P2']
Table_theta = tpick['Table_theta']
Table_phi = tpick['Table_phi']
Table_psi = tpick['Table_psi']
Table_eta = tpick['Table_eta']
save_name = '%s.jpg' % (os.path.splitext(os.path.basename(tname))[0])
idx = Table_t > 0
fig = spf_tb.save_table_result(os.path.join(PWD, job_dir, save_name),
Table_t[idx], Table_dt[idx], Table_X[idx], Table_P[idx], Table_P2[idx],
Table_theta[idx], Table_phi[idx], Table_psi[idx], Table_eta[idx])
plt.close(fig)
# -
filename_list
| head_Force/do_calculate_table/pickle2jpg.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
import numpy as np
import pandas as pd
from pandas import Series, DataFrame
df=pd.read_json('../../05_DataMining_New_Columns/01_Preprocessing/First.json')
df.head(2)
cw=pd.read_csv('CWUR.csv')
cw.head(2)
cw=cw.institution.astype(str).str.lower()
cw.head()
cw=cw.astype(str).str.replace('university','')
cw=cw.astype(str).str.replace('of','')
cw=cw.astype(str).str.replace('technology','')
cw=cw.astype(str).str.replace(r'\.','')  # escape the dot: str.replace treats the pattern as a regex
cw=cw.astype(str).str.replace(',','')
cw=cw.astype(str).str.replace('universita','').str.strip()
cw[2]='mit'
cw.head()
df['uniRank']=-1
# import sys
# reload(sys)
# sys.setdefaultencoding('Cp1252')
# for i in df.index:
# s=df.ix[i].targetUni
# if cw[cw.str.find(s)==0].empty==False:
# rnk=cw[cw.str.find(s)==0].index[0]
# df.ix[i].uniRank=rnk
df.uniRank.value_counts()
cw.to_csv('cw.json',encoding='utf8')
df.to_json('First.json')
| 07_DM_September2017/01_Preprocessing/Second.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import requests
import json
# https://developers.mercadolibre.com.ve/es_ar/gestiona-preguntas-respuestas
# +
TEXTO = "Gracias por preguntar"
item_ID = "MLV541722757"
pregunta_ID = 6358337462
access_token='xxxx'
if __name__=="__main__":
url='https://api.mercadolibre.com/answers'
headers = {'Content-Type': 'application/json'}
args= { "access_token" : access_token, "question_id" : pregunta_ID, "text":TEXTO}
response = requests.post(url, data=json.dumps(args), headers=headers)
if response.status_code==200:
estructura=response.json()
else:
print(response)
| API_respuesta.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import numpy as np
import pandas as pd
import scipy
import psycopg2
# %matplotlib inline
import matplotlib.pyplot as plt
import seaborn as sns
import os
import json
from collections import Counter
# +
def parse_testdata(path='../data/rainfall-submissions.tsv'):
file = open(path,'r')
raw = file.readlines()
file.close()
res = dict()
exid = "3c79c115-0f5f-4d8e-b02c-b4b33155a4b3"
get_code = lambda data: data["mooc-2017-ohjelmointi"]["osa02-Osa02_16.MarsinLampotilanKeskiarvo"]["/src/MarsinLampotilanKeskiarvo.java"]
for line in raw:
id = line[:len(exid)]
body = json.loads(line[len(exid):])
res[id] = get_code(body)
return res
def parse_testdata_df(path='../data/rainfall-submissions.tsv'):
file = open(path,'r')
raw = file.readlines()
file.close()
ids = [None] * len(raw)
code = [None] * len(raw)
exid = "3c79c115-0f5f-4d8e-b02c-b4b33155a4b3"
get_code = lambda data: data["mooc-2017-ohjelmointi"]["osa02-Osa02_16.MarsinLampotilanKeskiarvo"]["/src/MarsinLampotilanKeskiarvo.java"]
for i, line in enumerate(raw):
id = line[:len(exid)]
body = json.loads(line[len(exid):])
ids[i] = id
code[i] = get_code(body)
return pd.DataFrame({ "ids": ids, "code": code })
rain = parse_testdata()
rain_df = parse_testdata_df()
# -
print(rain['b4df7baf-1ba2-4a67-8b82-dabc5a1a0bb8'])
# +
import antlr4
from antlr_local.generated.JavaLexer import JavaLexer
from antlr_local.generated.JavaParser import JavaParser
from antlr_local.generated.JavaParserListener import JavaParserListener
from antlr_local.MyListener import KeyPrinter
from antlr_local.java_tokens import interestingTokenTypes, rareTokenTypes
import pprint
from antlr4 import RuleContext
from antlr_local.java_parsers import parse_ast_complete, parse_ast_modified, parse_complete_tree, parse_modified_tokens
code = rain['b4df7baf-1ba2-4a67-8b82-dabc5a1a0bb8']
comp = parse_complete_tree(code)
mod = parse_modified_tokens(code)
# -
comp.toList()
mod
# +
import requests
SOLR_URL="http://localhost:8983"
CORE="submission-search"
def add_dynamic_field(fieldName, fieldType="pint"):
url = f'{SOLR_URL}/solr/{CORE}/schema?commit=true'
data = {
"add-dynamic-field": {
"stored": "true",
"indexed": "true",
"name": f'*_{fieldName}',
"type": fieldType
}
}
headers = {
"Content-type": "application/json"
}
res = requests.post(url, json=data, headers=headers)
print(res.text)
return res
def update_submission(res):
url = f'{SOLR_URL}/solr/{CORE}/update?overwrite=true&commit=true'
def create_solr_updation(d, subId):
r = { f'{key}_metric': { "set": d[key] } for key in d.keys() }
r['id'] = subId
return r
data = [create_solr_updation(res[sub_id], sub_id) for sub_id in res.keys()]
headers = {
"Content-type": "application/json"
}
#return data
resp = requests.post(url, json=data, headers=headers)
print(resp.text)
return resp
#http://localhost:8983/solr/submission-search/update?_=1594129245796&commitWithin=1000&overwrite=true&wt=json
#add_dynamic_field('metric')
#resp = update_submission(res)
# -
resp
d = res['774992ef-83b5-45f9-8757-ffdbeecc521d']
keys = d.keys()
{ key: { "set": d[key] } for key in d.keys() }
# +
import psycopg2
from dotenv import load_dotenv
import os
import json
load_dotenv()
POSTGRES_HOST = os.getenv("DB_HOST")
POSTGRES_PORT = os.getenv("DB_PORT")
POSTGRES_DB = os.getenv("DB_NAME")
POSTGRES_USER = os.getenv("DB_USER")
POSTGRES_PASSWORD = os.getenv("DB_PASSWORD")
conn = psycopg2.connect(host=POSTGRES_HOST, port=POSTGRES_PORT, database=POSTGRES_DB, user=POSTGRES_USER, password=POSTGRES_PASSWORD)
cur = conn.cursor()
class NumpyEncoder(json.JSONEncoder):
def default(self, obj):
if isinstance(obj, np.ndarray):
return obj.tolist()
return json.JSONEncoder.default(self, obj)
def query_many(query):
cur.execute(query)
return cur.fetchall()
def fetch_submissions(courseId, exerciseId):
ex_rows = query_many(f"""
SELECT program_language FROM exercise WHERE course_id = {courseId} AND exercise_id = {exerciseId}
""")
rows = query_many(f"""
SELECT submission_id, code FROM submission
WHERE course_id = {courseId} AND exercise_id = {exerciseId}
""")
submissionIds = [r[0] for r in rows]
codeList = [r[1] for r in rows]
language = ex_rows[0][0]
return submissionIds, codeList, language
# +
import time
import subprocess
from subprocess import PIPE
import sys
METRICS_FOLDER_PATH="/tmp/codeclusters-run-metrics"
USED_CHECKSTYLE_METRICS=[
'JavaNCSS',
'CyclomaticComplexity',
'NPathComplexity',
'ClassDataAbstractionCoupling',
'ClassFanOutComplexity',
'BooleanExpressionComplexity'
]
CHECKSTYLE_JAR_PATH="/Users/teemu/Downloads/checkstyle-8.34-all.jarx"
CHECKSTYLE_XML_PATH="/Users/teemu/Downloads/mdsol-checkstyle.xml"
def get_file_extension(language):
if language == 'Java':
return 'java'
return ''
def get_metric(line):
TYPE_MARKER = 'type:'
VAL_MARKER = 'val:'
def get(line, marker):
marker_idx = line.find(marker)
return line[(marker_idx + len(marker)):(line.find(' ', marker_idx))]
mtype = get(line, TYPE_MARKER)
val = get(line, VAL_MARKER)
return mtype, int(val)
def create_folder(runId):
dir_path = f"{METRICS_FOLDER_PATH}/{runId}"
try:
os.makedirs(dir_path)
print("Directory " , dir_path, " created ")
return dir_path
except FileExistsError:
print("Directory " , dir_path, " already exists")
return dir_path
def write_files(submissionIds, codeList, fileExt, folderPath):
for idx, code in enumerate(codeList):
with open(f"{folderPath}/{submissionIds[idx]}.{fileExt}", "w") as f:
f.write(code)
def delete_folder(folderPath):
files = os.listdir(folderPath)
for file in files:
os.remove(f'{folderPath}/{file}')
os.rmdir(folderPath)
print('Directory ', folderPath, ' deleted')
def add_loc(res, submissionIds, codeList):
locs = [len(code.split('\n')) for code in codeList]
for idx, sub_id in enumerate(submissionIds):
res[sub_id]['LOC'] = locs[idx]
return res
def run_checkstyle(folderPath):
args = ['java', '-jar', CHECKSTYLE_JAR_PATH, '-c', CHECKSTYLE_XML_PATH, 'com.puppycrawl.tools.checkstyle.gui.Main', f'{folderPath}/']
checkstyle_result = subprocess.run(args, stdout=PIPE, stderr=PIPE, check=False)
print(checkstyle_result)
stdout = checkstyle_result.stdout.decode(sys.stdout.encoding)
stderr = checkstyle_result.stderr.decode(sys.stderr.encoding)
if len(stderr) != 0:
raise Exception(f'Running checkstyle throwed an error: {stderr}')
return stdout.split('\n')
def generate_result_dict(lines, submissionIds):
res = {}
for line in lines:
sub_id = line.split('/')[-1][:36]
module = line.split(' ')[-1][1:-1]
if sub_id not in res and sub_id in submissionIds:
res[sub_id] = {}
if module in USED_CHECKSTYLE_METRICS:
m, v = get_metric(line)
res[sub_id][m] = v
return res
def fetch_and_run_metrics(courseId, exerciseId):
submissionIds, codeList, language = fetch_submissions(courseId, exerciseId)
file_ext = get_file_extension(language)
run_id = int(time.time())
folderPath = ''
lines = []
res = {}
try:
folderPath = create_folder(run_id)
write_files(submissionIds, codeList, file_ext, folderPath)
lines = run_checkstyle(folderPath)
res = generate_result_dict(lines, submissionIds)
res = add_loc(res, submissionIds, codeList)
delete_folder(folderPath)
except:
delete_folder(folderPath)
raise
return lines, res
lines, res = fetch_and_run_metrics(2, 4)
# -
res
plt.hist([res[x]['NPath'] for x in res], bins=10)
[res[x] for x in res]
res[2][95:(95+14)]
res[2][95:].find(',')
lines[2]
lines
lines[4].split('/')[-1][53:]
len('24176cce-0737-44f7-a120-4965b0bf4b9f')
| notebooks/metrics-solr-indexing.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # A Full Text Searchable Database of Lang's Fairy Books
#
# In the late 19th and early 20th century, <NAME> published various collections of fairy tales, starting with *The Blue Fairy Book* and then progressing through various other colours to *The Olive Fairy Book*.
#
# This notebook represents a playful aside in trying to build various searchable contexts over the stories.
#
# To begin with, let's start by ingesting the stories into a database and building a full text search over them.
# ## Obtain Source Texts
#
# We can download the raw text for each of Lang's coloured Fairy Books from the Sacred Texts website. The books are listed on a single index page:
#
# 
#
# Let's start by importing some packages that can help us download pages from the Sacred Texts website in an efficient and straightforward way:
# +
# These packages make it easy to download web pages so that we can work with them
import requests
# "Cacheing" pages mans grabbing a local copy of the page so we only need to download it once
import requests_cache
from datetime import timedelta
requests_cache.install_cache('web_cache', backend='sqlite', expire_after=timedelta(days=100))
# -
# Given the index page URL, we can easily download the index page:
# +
# Specify the URL of the page we want to download
url = "https://www.sacred-texts.com/neu/lfb/index.htm"
# And then grab the page
html = requests.get(url)
# Preview some of the raw web page / HTML text in the page we just downloaded
html.text[:1000]
# -
# By inspection of the HTML, we see the books are listed in a `span` tag with an `ista-content` class. Digging further, we notice the links are in `span` elements with a `c_t` class. We can extract them using Beautiful Soup:
# +
# The BeautifulSoup package provides a range of tools
# that help us work with the downloaded web page,
# such as extracting particular elements from it
from bs4 import BeautifulSoup
# The "soup" is a parsed and structured form of the page we downloaded
soup = BeautifulSoup(html.content, "html.parser")
# Find the span elements containing the links
items_ = soup.find("span", class_="ista-content").find_all("span", class_="c_t")
# Preview the first few extracted <span> elements
items_[:3]
# -
# Let's grab just the anchor tags from there:
# +
# The following construction is known as a "list comprehension"
# It generates a list of items (items contained in square brackets, [])
# from another list of items
items_ = [item.find("a") for item in items_]
items_
# -
# `````{admonition} List Comprehensions
# List comprehensions provide a concise form for defining one list structure based on the contents of another (or more generally, any iterable).
#
# In an expanded form, we might create one list from another using a loop of the form:
#
# ```python
# new_list = []
# for item in items:
#     new_list.append( process(item) )
# ```
#
# In a list comprehension, we might write:
#
# ```python
# new_list = [process(item) for item in items]
# ```
#
# `````
# The links are *relative* links, which means we need to resolve them relative to the path of the current page.
#
# Obtain the path to the current page:
# Strip the "index.htm" element from the URL to give a "base" URL
base_url = url.replace("index.htm", "")
# Extract the link text (`link.text`) and relative links (`link.get('href')`) from the `<a>` tags and use a Python f-string to generate full links for each book page (`f"{base_url}{link.get('href')}"`):
# +
links = [(link.text, f"{base_url}{link.get('href')}") for link in items_]
# Display some annotated output to see what's going on
print(f"Base URL: {base_url}\nExample links: {links[:3]}")
# -
# ```{admonition} Python f-strings
#
# Python's f-strings (*formatted string literals*, [PEP 498](https://docs.python.org/3/whatsnew/3.6.html#whatsnew36-pep498)) are strings prefixed with an `f` character. The strings contain "replacement fields" of code contained within curly braces. The contents of the curly braces are evaluated and included in the returned string.
# ```
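# For example, the full-link construction above evaluates the expressions inside the braces:

```python
base_url = "https://www.sacred-texts.com/neu/lfb/"
href = "bl/index.htm"

# The {...} replacement fields are evaluated and interpolated into the string
full_link = f"{base_url}{href}"
print(full_link)  # https://www.sacred-texts.com/neu/lfb/bl/index.htm
```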
# We can also grab the publication year for each work:
years_ = soup.find("span", class_="ista-content").find_all("span", class_="c_d")
years = [year.text for year in years_]
# And merge those in to a metadata record collection:
sacred_metadata = list(zip(links, years))
sacred_metadata[:3]
# We could now load each of those pages and then scrape the download link. But, we notice that the download links have a regular pattern: `https://www.sacred-texts.com/neu/lfb/bl/blfb.txt.gz` which we can derive from the book pages:
# +
download_links = []
for (_title, _url) in links:
# We need to get the "short" colour name of the book
# which can be found in the URL path...
book_path = _url.split("/")[-2]
zip_fn = f"{book_path}fb.txt.gz"
zip_url = _url.replace("index.htm", zip_fn)
download_links.append((_title, zip_url))
download_links[:3]
# -
# Now we can download and unzip the files...
# +
import urllib
for (_, url) in download_links:
# Create a file name to save file to as the file downloaded from the URL
zip_file = url.split("/")[-1]
urllib.request.urlretrieve(url, zip_file)
# -
# !ls
# The following function will read in the contents of a local gzip file:
# +
import gzip
def gzip_txt(fn):
"""Open gzip file and extract text."""
with gzip.open(fn,'rb') as f:
txt = f.read().decode('UTF-8').replace("\r", "")
return txt
# -
# Let's see how it works:
gzip_txt('gnfb.txt.gz')[:1000]
# !ls
# Select one of the books and read in the book text:
# +
txt = gzip_txt('blfb.txt.gz')
# Preview the first 1500 characters
txt[:1500]
# -
# ## Extract Stories
#
# Having got the contents, let's now extract all the stories.
#
# Within each book, the stories are delimited by a pattern `[fNN]` (for digits `N`). We can use this pattern to split out the stories.
#
# To do this, we'll use the `re` regular expression package:
import re
# We can now define a pattern against which we can split each file into separate chunks:
# +
# Split the file into separate chunks delimited by the pattern: [fNN]
stories = re.split(r"\[f\d{2}\]", txt)
# Strip whitespace at start and end
stories = [s.strip("\n") for s in stories]
# -
# ## Extract the contents
#
# The contents appear in the first "story chunk" (index `0`) in the text:
stories[0]
# Let's pull out the book name:
# The name appears before the first comma
book = stories[0].split(",")[0]
book
# The Python [`parse`](https://github.com/r1chardj0n3s/parse) package provides a simple way of *matching* patterns using syntax that resembles a string formatting template that could be used to create the strings being matched against.
import parse
# We can alternatively use this package to extract the title against a template-style pattern:
# +
#The Blue Fairy Book, by <NAME>, [1889], at sacred-texts.com
metadata = parse.parse("{title}, by <NAME>, [{year}]{}, at sacred-texts.com", stories[0])
metadata["title"], metadata["year"]
# -
# There are plenty of cribs to help us pull out the contents, although it may not be immediately clear with the early content items whether they are stories or not...
# There is a Contents header, but it may be cased...
# So split in a case insensitive way
boilerplate = re.split('(Contents|CONTENTS)', stories[0])
boilerplate
# The name of the book repeats at the end of the content block
# So snip it out...
contents_ = boilerplate[-1].split(book)[0].strip("\n")
contents_
# We note that `contents_` contains a string with repeated end-of-line elements (`\n\n`) separating the titles in the form `[*STORY TITLE]` (for example, `[*LITTLE RED RIDING-HOOD]`).
# We can parse out titles from the contents list based on the pattern delimiter `[*EXTRACT THIS PATTERN]`:
# +
# Match against [* and ] and extract everything in between
contents = parse.findall("[*{}]", contents_)
# The title text available as item.fixed[0]
# Also convert the title to title case
titles = [item.fixed[0].title() for item in contents]
titles
# -
# ## Coping With Page Numbers
#
# There seems to be work in progress adding page numbers to books using a pattern of the form `[p. ix]`, `[p. 1]`, `[p. 11]` and so on.
#
# For now, let's create a regular expression substitution to remove those...
# +
example = """[f01]
[p. ix]
THE YELLOW FAIRY BOOK
THE CAT AND THE MOUSE IN PARTNERSHIP
A cat had made acquaintance with a mouse, and had spoken so much of the great love and friendship she felt for her, that at last the Mouse consented to live in the same house with her, and to go shares in the housekeeping. 'But we must provide for the winter or else we shall suffer hunger,' said the Cat. 'You, little Mouse, cannot venture everywhere in case you run at last into a trap.' This good counsel was followed, and a little pot of fat was bought. But they did not know where to put it. At length, after long consultation, the Cat said, 'I know of no place where it could be better put than in the church. No one will trouble to take it away from there. We will hide it in a corner, and we won't touch it till we are in want.' So the little pot was placed in safety; but it was not long before the Cat had a great longing for it, and said to the Mouse, 'I wanted to tell you, little Mouse, that my cousin has a little son, white with brown spots, and she wants me to be godmother to it. Let me go out to-day, and do you take care of the house alone.'
[p. 1]
'Yes, go certainly,' replied the Mouse, 'and when you eat anything good, think of me; I should very much like a drop of the red christening wine.'
But it was all untrue. The Cat had no cousin, and had not been asked to be godmother. She went straight to the church, slunk to the little pot of fat, began to lick it, and licked the top off. Then she took a walk on the roofs of the town, looked at the view, stretched
[P. 22]
herself out in the sun, and licked her lips whenever she thought of the little pot of fat. As soon as it was evening she went home again.
"""
# Example of regex to remove page numbers
re.sub(r'\n*\[[pP]\. [^\]\s]*\]\n\n', '', example)
# -
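# Note that the pattern above requires a blank line after the page marker; a marker followed by only a single newline (like `[P. 22]` mid-paragraph) survives the substitution. A sketch of a more permissive variant, on a self-contained toy snippet:

```python
import re

example = "stretched\n[P. 22]\nherself out in the sun"

# Require only a single trailing newline after the marker,
# and put one newline back so the surrounding lines rejoin cleanly
depage = re.compile(r'\n*\[[pP]\. [^\]\s]*\]\n')
cleaned = depage.sub('\n', example)
print(cleaned)  # "stretched\nherself out in the sun"
```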
# ## Pulling the Parser Together
#
# Let's create a function to parse the book for us by pulling together all the previous fragments:
def parse_book(txt):
    """Parse book from text."""
    # Get story chunks
    stories = re.split(r"\[f\d{2}\]", txt)
    stories = [s.strip("\n") for s in stories]
    # Get book name
    book = stories[0].split(",")[0]
    # Process contents
    boilerplate = re.split('(Contents|CONTENTS)', stories[0])
    # The name of the book repeats at the end of the content block
    # So snip it out...
    contents_ = boilerplate[-1].split(book)[0].strip("\n")
    # Match against [* and ] and extract everything in between
    contents = parse.findall("[*{}]", contents_)
    # Get titles from contents
    titles = [item.fixed[0].title() for item in contents]
    # Get metadata
    metadata = parse.parse("{title}, by <NAME>, [{year}]{}, at sacred-texts.com", stories[0]).named
    return book, stories, titles, metadata
# ## Create Simple Database Structure
#
# Let's create a simple database structure and configure it for full text search.
#
# We'll use SQLite3 for the database. One of the easiest ways of working with SQLite3 databases is via the [`sqlite_utils`](https://sqlite-utils.datasette.io/en/stable/) package.
from sqlite_utils import Database
# Specify the database filename (and optionally connect to the database if it already exists):
# +
db_name = "demo.db"
# Uncomment the following lines to connect to a pre-existing database
#db = Database(db_name)
# -
# The following will create a new database (or overwrite a pre-existing one of the same name) and define the database tables we require.
#
# Note that we also enable full text search on the `books` table; this creates an extra virtual table that supports full text search.
# +
# Do not run this cell if your database already exists!
# While developing the script, recreate database each time...
db = Database(db_name, recreate=True)
# This schema has been evolved iteratively as I have identified structure
# that can be usefully mined...
db["books"].create({
    "book": str,
    "title": str,
    "text": str,
    "last_para": str,  # sometimes contains provenance
    "first_line": str,  # maybe we want to review the openings, or create an index...
    "provenance": str,  # attempt at provenance
    "chapter_order": int,  # Sort order of stories in book
}, pk=("book", "title"))
db["books_metadata"].create({
    "title": str,
    "year": int
}, pk=("title", "year"))
# Enable full text search
# This creates an extra virtual table (books_fts) to support the full text search
db["books"].enable_fts(["title", "text"], create_triggers=True)
# -
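# Behind the scenes, `enable_fts` creates a `books_fts` FTS5 virtual table (plus triggers to keep it in sync with `books`). A minimal stdlib `sqlite3` sketch of the same mechanism, on hypothetical sample rows:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# An FTS5 virtual table indexes the named columns for full text search
conn.execute("CREATE VIRTUAL TABLE books_fts USING fts5(title, text)")
conn.execute("INSERT INTO books_fts VALUES (?, ?)",
             ("The Cat And The Mouse In Partnership",
              "A cat had made acquaintance with a mouse..."))
conn.execute("INSERT INTO books_fts VALUES (?, ?)",
             ("Another Tale", "Once upon a time there were three sons..."))

# MATCH queries the full text index; bare terms are ANDed together
rows = conn.execute(
    'SELECT title FROM books_fts WHERE books_fts MATCH ?', ('cat mouse',)
).fetchall()
print(rows)  # only the first story contains both terms
```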
# ## Build Database
#
# Let's now create a function that can populate our database based on the contents of one of the books:
def extract_book_stories(db_tbl, book, stories, titles=None, quiet=False):
    book_items = []
    # The titles are from the contents list
    # We will actually grab titles from the story
    # but the titles grabbed from the contents can be passed in
    # if we want to write a check against them.
    # Note: there may be punctuation differences in the title in the contents
    # and the actual title in the text
    for i, story in enumerate(stories[1:]):
        # Remove the page numbers for now...
        story = re.sub(r'\n*\[[pP]\. [^\]\s]*\]\n\n', '', story).strip("\n")
        # Other cleaning
        story = re.sub(r'\[\*\d+\s*\]', '', story)
        # Get the title from the start of the story text
        story_ = story.split("\n\n")
        title_ = story_[0].strip()
        # Force the title case variant of the title
        title = title_.title().replace("'S", "'s")
        # Optionally display the titles and the book
        if not quiet:
            print(f"{title} :: {book}")
        # Reassemble the story
        text = "\n\n".join(story_[1:])
        # Clean out the name of the book if it is in the text
        # e.g. The Green Fairy Book, by <NAME>, [1892], at sacred-texts.com
        name_ignorecase = re.compile(rf"{book}, by <NAME>, \[\d*\], at sacred-texts.com", re.IGNORECASE)
        text = name_ignorecase.sub('', text).strip()
        # Extract the first line then add the full stop back in.
        first_line = text.split("\n")[0].split(".")[0] + "."
        last_para = text.split("\n")[-1]
        provenance_1 = parse.parse('[{}] {provenance}', last_para)
        provenance_2 = parse.parse('[{provenance}]', last_para)
        provenance_3 = parse.parse('({provenance})', last_para)
        provenance_4 = {"provenance": last_para} if len(last_para.split()) < 7 else {}  # Heuristic
        provenance_ = provenance_1 or provenance_2 or provenance_3 or provenance_4
        provenance = provenance_["provenance"] if provenance_ else ""
        book_items.append({"book": book,
                           "title": title,
                           "text": text,
                           "last_para": last_para,
                           "first_line": first_line,
                           "provenance": provenance,
                           "chapter_order": i})
    # The upsert means "add or replace"
    db_tbl.upsert_all(book_items, pk=("book", "title"))
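# The upsert means "add, or replace on the primary key". In plain SQL this corresponds to `INSERT ... ON CONFLICT ... DO UPDATE`; a stdlib sketch on a toy table:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE books (book TEXT, title TEXT, text TEXT, PRIMARY KEY (book, title))")

def upsert(book, title, text):
    # Insert, or update the row if the (book, title) key already exists
    conn.execute("""
        INSERT INTO books (book, title, text) VALUES (?, ?, ?)
        ON CONFLICT (book, title) DO UPDATE SET text = excluded.text
    """, (book, title, text))

upsert("Blue", "Cinderella", "draft text")
upsert("Blue", "Cinderella", "final text")  # replaces, does not duplicate
rows = conn.execute("SELECT * FROM books").fetchall()
print(rows)  # [('Blue', 'Cinderella', 'final text')]
```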
# We can add the data for a particular book by passing in the titles and stories:
# +
book, stories, titles, metadata = parse_book(txt)
extract_book_stories(db["books"], book, stories)
# -
# We can now run a full text search over the stories. For example, if we are looking for a story with a king and three sons:
# +
q = 'king "three sons"'
# The `.search()` method knows how to find the full text search table
# given the original table name
for story in db["books"].search(db.quote_fts(q), columns=["title", "book"]):
    print(story)
# -
# We can also construct a full text search query over the full text search virtual table explicitly:
# +
q2 = 'king "three sons" goose'
_q = f'SELECT title FROM books_fts WHERE books_fts MATCH {db.quote(q2)} ;'
for row in db.query(_q):
    print(row["title"])
# -
# The full text search also allows us to select snippets around the search term:
# +
q3 = '"three sons"'
_q = f"""
SELECT title, snippet(books_fts, -1, "__", "__", "...", 30) as clip
FROM books_fts WHERE books_fts MATCH {db.quote(q3)} LIMIT 2 ;
"""
for row in db.query(_q):
    print(row["clip"] + '\n---\n')
# -
# We can now create a complete database of Lang's collected fairy stories by churning through all the books and adding them to the database:
# +
import os
for fn in [fn for fn in os.listdir() if fn.endswith(".gz")]:
    # Read in book from gzip file
    txt = gzip_txt(fn)
    # Parse book
    book, stories, titles, metadata = parse_book(txt)
    # Populate metadata table
    db["books_metadata"].upsert(metadata, pk=("title", "year"))
    # Extract stories and add them to the database
    # The records are upserted (added or replaced) so we won't get duplicate records
    # for the book we have already loaded into the database
    extract_book_stories(db["books"], book, stories, quiet=True)
# -
# How many books are there?
for row in db.query('SELECT * FROM books_metadata ORDER BY year ASC'):
    print(row)
# Okay - the titles are fine but the years look a bit shonky to me...
#
# The dates are okay if we use the ones from the sacred texts listing page that we previously grabbed into `sacred_metadata`:
# +
new_metadata = []
for m in sacred_metadata:
    new_metadata.append({"title": m[0][0], "year": m[1]})
new_metadata
# -
# Replace the `books_metadata` table:
# The truncate=True clears the records from the original table
db["books_metadata"].insert_all(new_metadata, pk=("title", "year"), truncate=True)
for row in db.query('SELECT * FROM books_metadata ORDER BY year ASC'):
    print(row)
# That looks a bit better.
# How many stories do we now have with a king and three sons?
# +
print(f"Search on: {q}\n")
for story in db["books"].search(db.quote_fts(q), columns=["title", "book"]):
    print(story)
# -
# How about Jack stories?
for story in db["books"].search("Jack", columns=["title", "book"]):
    print(story)
# Ah... so maybe *Preface* is something we could also catch and exclude... And perhaps *To The Friendly Reader* as a special exception.
# Or Hans?
for story in db["books"].search("Hans", columns=["title", "book"]):
    print(story)
for story in db["books"].search("donkey", columns=["title", "book"]):
    print(story)
# We can also run explicit SQL queries over the database. For example, how do some of the stories start?
for row in db.query('SELECT first_line FROM books LIMIT 5'):
    print(row["first_line"])
# I seem to recall there may have been some sources at the end of some texts? A quick test for that is to see if there is any mention of `Grimm`:
for story in db["books"].search("Grimm", columns=["title", "book"]):
    print(story)
# Okay, so let's check the end of one of those:
for row in db.query('SELECT last_para FROM books WHERE text LIKE "%Grimm%"'):
    print(row["last_para"][-200:])
# How about some stories that don't reference Grimm?
# This query was used to help iterate the regular expressions used to extract the provenance
for row in db.query('SELECT last_para, provenance FROM books WHERE text NOT LIKE "%Grimm%" LIMIT 10'):
    print(row["provenance"], "::", row["last_para"][-200:])
for row in db.query('SELECT DISTINCT provenance, COUNT(*) AS num FROM books GROUP BY provenance ORDER BY num DESC LIMIT 10'):
    print(row["num"], row["provenance"])
# Hmm.. it seemed like there were more mentions of Grimm than that?
# ## Making *pandas* based Database Queries
#
# For convenience, let's set up a database connection so we can easily run *pandas* mediated queries:
# +
import pandas as pd
import sqlite3
conn = sqlite3.connect(db_name)
# +
#--SPLITHERE--
# -
# ## Entity Extraction...
#
# So what entities can we find in the stories...?!
#
# Let's load in the `spacy` natural language processing toolkit:
# #%pip install --upgrade spacy
import spacy
nlp = spacy.load("en_core_web_sm")
# Get a dataframe of data from the database:
# +
q = "SELECT * FROM books"
df = pd.read_sql(q, conn)
df.head()
# -
# Now let's have a go at extracting some entities (this may take some time!):
# +
# Extract a set of entities, rather than a list...
get_entities = lambda desc: {f"{entity.label_} :: {entity.text}" for entity in nlp(desc).ents}
# The full run takes some time....
df['entities'] = df["text"].apply(get_entities)
df.head(10)
# -
# *We should probably just do this once and add an appropriate table of entities to the database...*
#
# We can explode these out into a long format dataframe:
# +
# Explode the entities one per row...
df_long = df.explode('entities')
df_long.rename(columns={"entities": "entity"}, inplace=True)
# And then separate out entity type and value
df_long[["entity_typ", "entity_value"]] = df_long["entity"].str.split(" :: ", expand=True)
df_long.head()
# -
# And explore...
df_long["entity_typ"].value_counts()
# What sort of money has been identified in the stories?
df_long[df_long["entity_typ"]=="MONEY"]["entity_value"].value_counts().head(10)
# Dollars? Really??? What about gold coins?! Do I need to train a new classifier?! Or was the original text really like that... Or has the text been got at? *(Maybe I should do my own digitisation project to extract the text from copies of the original books on the Internet Archive? Hmmm.. that could be interesting for when we go on strike...)*
#
# What about other quantities?
df_long[df_long["entity_typ"]=="QUANTITY"]["entity_value"].value_counts().head(10)
# What people have been identified?
df_long[df_long["entity_typ"]=="PERSON"]["entity_value"].value_counts().head(10)
# How about geo-political entities (GPEs)?
df_long[df_long["entity_typ"]=="GPE"]["entity_value"].value_counts().head(10)
# When did things happen?
df_long[df_long["entity_typ"]=="DATE"]["entity_value"].value_counts().head(10)
# And how about time considerations?
df_long[df_long["entity_typ"]=="TIME"]["entity_value"].value_counts().head(10)
# How were things organised?
df_long[df_long["entity_typ"]=="ORG"]["entity_value"].value_counts().head(10)
# What's a `NORP`? (Ah... *Nationalities Or Religious or Political groups*.)
df_long[df_long["entity_typ"]=="NORP"]["entity_value"].value_counts().head(10)
# +
#--SPLITHERE--
# -
# ## Add Wikipedia Links
#
# The Wikipedia page [`Lang's_Fairy_Books`](https://en.wikipedia.org/wiki/Lang's_Fairy_Books) lists the contents of Lang's coloured fairy books (as well as several other books), along with links to the Wikipedia page associated with each tale, if available.
#
# This means we can have a go at annotating our database with Wikipedia links for each story. From those pages in turn, or associated *DBpedia* pages, we might also be able to extract Aarne-Thompson classification codes for the corresponding stories.
# +
url = "https://en.wikipedia.org/wiki/Lang's_Fairy_Books"
html = requests.get(url)
wp_soup = BeautifulSoup(html.content, "html.parser")
# +
# Find the span for a particular book
wp_book_loc = wp_soup.find("span", id="The_Blue_Fairy_Book_(1889)")
# Then navigate relative to this to get the (linked) story list
wp_book_stories = wp_book_loc.find_parent().find_next("ul").find_all('li')
wp_book_stories[:3]
# -
# Get the Wikipedia path for stories with a Wikipedia page:
# +
wp_book_paths = [(li.find("a").get("title"), li.find("a").get("href")) for li in wp_book_stories]
wp_book_paths[:3]
# -
# Useful as a list of `dict`s or *pandas* `DataFrame`?
# +
import pandas as pd
wp_book_paths_wide = []
for item in wp_book_paths:
    wp_book_paths_wide.append({"title": item[0].strip(), "path": item[1]})
wp_book_df = pd.DataFrame(wp_book_paths_wide)
wp_book_df
# -
# See if we can then cross reference these with stories in the database?
# +
q = "SELECT book, title, chapter_order FROM books WHERE book='The Blue Fairy Book' ORDER BY chapter_order ASC"
df_blue = pd.read_sql(q, conn)
df_blue.head()
# -
# Let's see if the chapters align in terms of order as presented:
pd.DataFrame({"book":df_blue["title"], "wp":wp_book_df["title"], "wp_path":wp_book_df["path"]})
# Yes, they do, so we can use that as the basis of a merge. That said, in the general case it would probably also be useful to generate a fuzzy match score between matched titles, with a report on any low scoring matches, just in case the alignment has gone awry.
# +
# TO DO - wp table for links, story and story order?
# TO DO fuzzy match score test just to check ingest and allow user to check poor matches
# -
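# One way to sketch that fuzzy-match sanity check with the stdlib (using `difflib` rather than `fuzzywuzzy`; the title pairs and the threshold below are made-up assumptions):

```python
from difflib import SequenceMatcher

def title_match_score(a, b):
    """Similarity ratio between two titles, ignoring case."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

# Hypothetical aligned title pairs; report any pair scoring below threshold
pairs = [
    ("The Bronze Ring", "The Bronze Ring"),
    ("Beauty And The Beast", "Beauty and the Beast"),
    ("East Of The Sun And West Of The Moon", "Why the Sea Is Salt"),
]

THRESHOLD = 0.8  # arbitrary cut-off for flagging a suspect alignment
suspect = [(a, b) for a, b in pairs if title_match_score(a, b) < THRESHOLD]
print(suspect)  # only the misaligned pair is flagged
```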
# In passing, what if we wanted to try to match on the titles themselves?
#
# If we use decased, but otherwise exact, matching, we see it's a bit flaky...
pd.merge(df_blue["title"], wp_book_df,
left_on=df_blue["title"].str.lower(),
right_on=wp_book_df["title"].str.lower(),
how ="left" )
# A fuzzy match might be able to improve things...
# +
# Reused from https://stackoverflow.com/a/56315491/454773
from fuzzywuzzy import fuzz
from fuzzywuzzy import process
def fuzzy_merge(df_1, df_2, key1, key2, threshold=90, limit=2):
    """
    :param df_1: the left table to join
    :param df_2: the right table to join
    :param key1: key column of the left table
    :param key2: key column of the right table
    :param threshold: how close the matches should be to return a match, based on Levenshtein distance
    :param limit: the amount of matches that will get returned, these are sorted high to low
    :return: dataframe with both keys and matches
    """
    s = df_2[key2].tolist()
    m = df_1[key1].apply(lambda x: process.extract(x, s, limit=limit))
    df_1['matches'] = m
    m2 = df_1['matches'].apply(lambda x: ', '.join([i[0] for i in x if i[1] >= threshold]))
    df_1['matches'] = m2
    return df_1
# -
fuzzy_merge(df_blue, wp_book_df, "title", "title", 88, limit=1)[["title", "matches"]]
# +
#https://github.com/jsoma/fuzzy_pandas/
# This is probably overkill...
# #%pip install fuzzy_pandas
import fuzzy_pandas as fpd
fpd.fuzzy_merge(df_blue[["title"]], wp_book_df,
left_on='title',
right_on='title',
ignore_case=True,
ignore_nonalpha=True,
method='jaro', #bilenko, levenshtein, metaphone, jaro
threshold=0.86, # If we move to 0.86 we get a false positive...
keep_left='all',
keep_right="all"
)
# -
fpd.fuzzy_merge(df_blue[["title"]], wp_book_df,
left_on='title',
right_on='title',
ignore_case=True,
ignore_nonalpha=True,
method='metaphone', #levenshtein, metaphone, jaro, bilenko
threshold=0.86,
keep_left='all',
keep_right="all"
)
# ## Other Things to Link In
#
# Have other people generated data sets that can be linked in?
#
# - http://www.mythfolklore.net/andrewlang/indexbib.htm /via @OnlineCrsLady
# +
#--SPLITHERE--
# -
# ## Common Refrains / Repeating Phrases
#
# Many stories incorporate a repeating phrase or refrain, but you may need to read quite a long way into a story before you can identify that repeating phrase. So are there any tools we might be able to use to surface such refrains automatically?
# +
#db = Database(db_name)
q2 = '"pretty hen"'
_q = f'SELECT * FROM books_fts WHERE books_fts MATCH {db.quote(q2)} ;'
for row in db.query(_q):
    print(row["title"])
# +
import nltk
from nltk.util import ngrams as nltk_ngrams
tokens = nltk.word_tokenize(row["text"])
size = 5
#for i in nltk_ngrams(tokens, size):
#    print(' '.join(i))
# -
# We could then look for repeating phrases:
# +
import pandas as pd
df = pd.DataFrame({'phrase':[' '.join(i) for i in nltk_ngrams(tokens, size)]})
df['phrase'].value_counts()
# -
# Really, we need to do a scan down from large token size until we find a match (longest match phrase).
#
# But for now, let's see what repeating elements we get from one of those search phrases:
# +
import re

_q = 'pretty brindled cow'
for m in re.finditer(_q, row["text"]):
    # Display the matched terms and the 50 characters
    # immediately preceding and following the phrase
    print(f'===\n{_q}: ', m.start(), m.end(), row["text"][max(0, m.start()-50):m.end()+50])
# -
# Make a function for that:
# +
def find_contexts(text, phrase, width=50):
    """Find the context(s) of the phrase."""
    contexts = []
    for m in re.finditer(phrase, text):
        # Grab the matched terms and the `width` characters
        # immediately preceding and following the phrase
        contexts.append(text[max(0, m.start()-width):m.end()+width])
    return contexts

for i in find_contexts(row['text'], 'pretty brindled cow'):
    print(i, "\n==")
# -
find_contexts(row['text'], 'pretty brindled cow')
# We can also make this a SQLite lookup function:
# +
from vtfunc import TableFunction
def concordances(text, phrase, width=50):
    """Find the concordances of a phrase in a text."""
    contexts = []
    for m in re.finditer(phrase, text):
        # Grab the matched terms and the `width` characters
        # immediately preceding and following the phrase
        context = text[max(0, m.start()-width):m.end()+width]
        contexts.append((context, m.start(), m.end()))
    return contexts

class Concordances(TableFunction):
    params = ['phrase', 'text']
    columns = ['match', 'start', 'end']
    name = 'concordance'

    def initialize(self, phrase=None, text=None):
        self._iter = iter(concordances(text, phrase))

    def iterate(self, idx):
        (context, start, end) = next(self._iter)
        return (context, start, end,)

Concordances.register(db.conn)
# -
concordances(row['text'], 'pretty brindled cow')
q = """
SELECT matched.*
FROM books, concordance("pretty brindled cow", books.text) AS matched
WHERE title="The House In The Wood";
"""
for i in db.execute(q):
    print(i)
# +
# TODO: allow different tokenisers
from nltk.tokenize import RegexpTokenizer

def scanner(text, minlen=4, startlen=50, min_repeats=3, autostop=True):
    """Search a text for repeated phrases above a minimum length."""
    # Tokenise the text
    tokens = nltk.word_tokenize(text)
    # nltk_ngrams returns an empty list if we ask for an ngram longer than the text
    # So set the (long) start length to the lesser of the originally provided
    # start length or the token length of the text
    startlen = min(startlen, len(tokens))
    # Start with a long sequence then iterate down to a minimum length sequence
    for size in range(startlen, minlen-1, -1):
        # Generate a dataframe containing all the ngrams, one row per ngram
        df = pd.DataFrame({'phrase': [' '.join(i) for i in nltk_ngrams(tokens, size)]})
        # Find the occurrence counts of each phrase
        value_counts_series = df['phrase'].value_counts()
        # If we have at least the specified number of occurrences
        # don't bother searching for any shorter phrases
        if max(value_counts_series) >= min_repeats:
            if autostop:
                break
    # Return a pandas series (an indexed list, essentially)
    # containing the longest phrase (or phrases) we found
    return value_counts_series[(value_counts_series >= min_repeats) & (value_counts_series == max(value_counts_series))]
# -
scanner( row["text"] )
# Display the first (0'th indexed) item
# (In this case there is only one item that repeats this number of times anyway.)
scanner( row["text"] ).index[0], scanner( row["text"] ).values[0]
# If we constrain this function to return a single item, we can create a simple SQLite function that will search through records and return the longest phrase above a certain minimum length (or the first longest phrase, if several long phrases of the same length are found):
def find_repeating_phrase(text):
    """Return the longest repeating phrase found in a text.
    If there is more than one of the same length, return the first.
    """
    phrase = scanner(text)
    # If there is at least one response, take the first
    if not phrase.empty:
        return phrase.index[0]
find_repeating_phrase(row['text'])
# The `db` object is a sqlite_utils database object
# Pass in:
# - the name of the function we want to use in the database
# - the number of arguments it takes
# - the function we want to invoke
db.conn.create_function('find_repeating_phrase', 1,
find_repeating_phrase)
# +
_q = """
SELECT book, title, find_repeating_phrase(text) AS phrase
FROM books WHERE title="The House In The Wood" ;
"""
for row2 in db.query(_q):
    print(row2)
# +
_q = """
SELECT title, find_repeating_phrase(text) AS phrase
FROM books WHERE book="The Pink Fairy Book" ;
"""
for row3 in db.query(_q):
    if row3['phrase'] is not None:
        print(row3)
# -
# The punctuation gets in the way somewhat, so it might be useful if we removed the punctuation and tried again:
# +
# Allow a tokeniser param and de-punctuate
def scanner2(text, minlen=4, startlen=50, min_repeats=4, autostop=True, tokeniser='word'):
    """Search a text for repeated phrases above a minimum length."""
    # Tokenise the text
    if tokeniser == 'depunc_word':
        # Word tokens only: strips punctuation
        tokenizer = RegexpTokenizer(r'\w+')
        tokens = tokenizer.tokenize(text)
    elif tokeniser == 'sent':
        pass
    else:
        # e.g. for the default: tokeniser='word'
        tokens = nltk.word_tokenize(text)
    # nltk_ngrams returns an empty list if we ask for an ngram longer than the text
    # So set the (long) start length to the lesser of the originally provided
    # start length or the token length of the text
    startlen = min(startlen, len(tokens))
    # Start with a long sequence then iterate down to a minimum length sequence
    for size in range(startlen, minlen-1, -1):
        # Generate a dataframe containing all the ngrams, one row per ngram
        df = pd.DataFrame({'phrase': [' '.join(i) for i in nltk_ngrams(tokens, size)]})
        # Find the occurrence counts of each phrase
        value_counts_series = df['phrase'].value_counts()
        # If we have at least the specified number of occurrences
        # don't bother searching for any shorter phrases
        if max(value_counts_series) >= min_repeats:
            if autostop:
                break
    # Return a pandas series (an indexed list, essentially)
    # containing the long phrase (or phrases) we found
    return value_counts_series[(value_counts_series >= min_repeats) & (value_counts_series == max(value_counts_series))]
# -
def find_repeating_phrase_depunc(text, minlen):
    """Return the longest repeating phrase found in a text.
    If there is more than one of the same length, return the first.
    """
    # Accepts a specified minimum phrase length (minlen)
    # Reduce the required number of repeats
    phrase = scanner2(text, minlen=minlen, min_repeats=3,
                      tokeniser='depunc_word')
    # If there is at least one response, take the first
    if not phrase.empty:
        return phrase.index[0]
find_repeating_phrase_depunc(row['text'], 5)
# Register the function:
# Note we need to update the number of arguments (max. 2)
db.conn.create_function('find_repeating_phrase_depunc', 2,
find_repeating_phrase_depunc)
# Try again:
# +
_q = """
SELECT book, title, find_repeating_phrase_depunc(text, 7) AS phrase
FROM books WHERE book="The Pink Fairy Book" ;
"""
for row5 in db.query(_q):
    if row5['phrase'] is not None:
        print(row5)
# -
# Check the context:
# +
_q = """
SELECT text, find_repeating_phrase(text) AS phrase
FROM books WHERE title="Maiden Bright-Eye" ;
"""
for row6 in db.query(_q):
    for c in find_contexts(row6['text'], "Where is my wicked ", 100):
        print(c, "\n===")
    #print(row6['phrase'])
# -
for row6 in db.query(_q):
    for c in find_contexts(row6['text'], "the king's palace", 100):
        print(c, "\n===")
# We need to be able to find short sentences down to the minimum that are not in a longer phrase:
def scanner_all(text, minlen=4, startlen=50,
                min_repeats=4, autostop=True):
    long_phrases = {}
    tokens = nltk.word_tokenize(text)
    for size in range(startlen, minlen-1, -1):
        df = pd.DataFrame({'phrase': [' '.join(i) for i in nltk_ngrams(tokens, min(size, len(tokens)))]})
        value_counts_series = df['phrase'].value_counts()
        if max(value_counts_series) >= min_repeats:
            test_phrases = value_counts_series[value_counts_series == max(value_counts_series)]
            for (test_phrase, val) in test_phrases.items():
                # Only keep phrases that are not substrings of an already found longer phrase
                if (test_phrase not in long_phrases) and not any(test_phrase in long_phrase for long_phrase in long_phrases):
                    long_phrases[test_phrase] = val
    return long_phrases
txt_reps ="""
Nota that There once was a thing that and 5 There once was a thing that and 4 There once was a thing that and 3
There once was a thing that and 1 There once was a thing that and 6 There once was a thing that and 7
there was another that 1 and there was another that 2 and there was another that 3 and there was another that and
there was another that and there was another that 5 and there was another that 9 and there was another that
"""
scanner( txt_reps )
scanner_all(txt_reps)
scanner_all( row["text"])
# ## Longest Common Substring
#
# Could we use `difflib.SequenceMatcher.find_longest_match()` on first and second half of doc, or various docs samples, to try to find common refrains?
#
# Or chunk into paragraphs and compare every paragraph with every other paragraph?
#
# Here's how to call the `SequenceMatcher().find_longest_match()` function (calling it with no arguments requires Python 3.9+):
# +
from difflib import SequenceMatcher
m = SequenceMatcher(None, txt_reps.split('\n')[1],
txt_reps.split('\n')[2]).find_longest_match()
m, txt_reps.split('\n')[1][m.a: m.a + m.size]
# -
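# Sketching the paragraph-pairwise idea on toy paragraphs (an assumption-laden example; real story text would be chunked on `\n\n` first):

```python
from difflib import SequenceMatcher
from itertools import combinations

paras = [
    "The cat said, 'Pretty hen, shall we go to the wood?'",
    "Next day the cat said, 'Pretty hen, shall we go out again?'",
    "Quite another paragraph with no shared refrain at all.",
]

# Compare every paragraph with every other paragraph and keep the
# longest common substring found across any pair
best = ""
for a, b in combinations(paras, 2):
    m = SequenceMatcher(None, a, b, autojunk=False).find_longest_match(0, len(a), 0, len(b))
    candidate = a[m.a:m.a + m.size]
    if len(candidate) > len(best):
        best = candidate

print(best.strip())
```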
# ## Doc2Vec Search Engine
#
# To explore: a simple `Doc2Vec` powered search engine based on https://www.kaggle.com/hgilles06/a-doc2vec-search-engine-cord19-new-version .
| old/lang-fairy-books-db.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # TensorFlow time series for AIOps
# +
import os
import datetime
import IPython
import IPython.display
import matplotlib as mpl
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import seaborn as sns
import tensorflow as tf
mpl.rcParams['figure.figsize'] = (8, 6)
mpl.rcParams['axes.grid'] = False
# +
df = pd.read_csv('../../../cto_k8s/m_data_10.11.1.80:9091.csv', usecols=['time', 'cpu_value', 'memory_value'])
df.rename(columns={'cpu_value':'cpu', 'memory_value' : 'memory'}, inplace=True)
df['cpu'] = df['cpu'].fillna(df['cpu'].mean())
df['memory'] = df['memory'].fillna(df['memory'].mean())
df_ori = df[['time', 'cpu', 'memory']]
df['Date Time'] = pd.to_datetime(df.time, unit='s')
df = df[['Date Time', 'cpu', 'memory']]
date_time = df.pop('Date Time')
df.head()
# -
df.tail()
# +
plot_cols = ['cpu', 'memory']
plot_features = df[plot_cols]
plot_features.index = date_time
_ = plot_features.plot(subplots = True)
plot_features = df[plot_cols][:144]
plot_features.index = date_time[:144]
_ = plot_features.plot(subplots = True)
'''
day_of_data = 1*6*24 # starts August 4
plot_features_test = df[plot_cols][day_of_data*7:day_of_data*8]
plot_features_test.index = date_time[day_of_data*7:day_of_data*8]
_ = plot_features_test.plot(subplots = True)
'''
# -
df.describe().transpose()
plt.hist2d(df['cpu'], df['memory'], bins=(50, 50), vmax=50)
plt.colorbar()
plt.xlabel('cpu')
plt.ylabel('memory')
# A positive correlation exists between cpu and memory
df_scatter = df[['cpu', 'memory']]
plt.xlabel('CPU')
plt.ylabel('MEMORY')
plt.scatter(df_scatter['cpu'], df_scatter['memory'])
# ### Splitting the Data
timestamp_s = date_time.map(datetime.datetime.timestamp)
day = 24*60*60
year = (365.2425)*day
df['Day sin'] = np.sin(timestamp_s * (2 * np.pi / day))
df['Day cos'] = np.cos(timestamp_s * (2 * np.pi / day))
df['Year sin'] = np.sin(timestamp_s * (2 * np.pi / year))
df['Year cos'] = np.cos(timestamp_s * (2 * np.pi / year))
plt.plot(np.array(df['Day sin'])[:120])
plt.plot(np.array(df['Day cos'])[:120])
plt.xlabel('Time [h]')
plt.title('Time of day signal')
# +
fft = tf.signal.rfft(df['cpu'])
f_per_dataset = np.arange(0, len(fft))
n_samples_h = len(df['cpu'])
hours_per_week = 24*7
years_per_dataset = n_samples_h/(hours_per_week)
f_per_year = f_per_dataset/years_per_dataset
plt.step(f_per_year, np.abs(fft))
plt.xscale('log')
plt.ylim(0, 500)
plt.xlim([0.1, max(plt.xlim())])
plt.xticks([1, 7], labels = ['1/Week', '1/day'])
_ = plt.xlabel('Frequency (log scale)')
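# As a sanity check on reading periodicity off an FFT, here is a self-contained toy example: a pure 24-hour cycle sampled hourly for a week peaks at the bin for 7 cycles per record, i.e. once per day:

```python
import numpy as np

# One week of hourly samples containing a pure 24-hour cycle
hours = np.arange(24 * 7)
signal = np.sin(2 * np.pi * hours / 24)

fft = np.fft.rfft(signal)
# Bin k of an n-sample rfft corresponds to k cycles per record;
# a 7-day record should peak at k = 7 (7 cycles per week = 1/day)
peak_bin = int(np.argmax(np.abs(fft)))
print(peak_bin)  # 7
```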
# +
column_indices = {name: i for i, name in enumerate(df.columns)}
n = len(df)
train_df = df[0:int(n*0.7)]
val_df = df[int(n*0.7):int(n*0.9)]
test_df = df[int(n*0.9):]
num_features = df.shape[1]
# +
train_mean = train_df.mean()
train_std = train_df.std()
train_df = (train_df - train_mean) / train_std
val_df = (val_df - train_mean) / train_std
test_df = (test_df - train_mean) / train_std
# -
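# As a toy illustration of why the mean and standard deviation come from the training split only (made-up numbers):

```python
import numpy as np

train = np.array([0.0, 1.0, 2.0, 3.0])
test = np.array([4.0, 5.0])

# Statistics come from the training split only, so no information
# about the test distribution leaks into the model inputs
mean, std = train.mean(), train.std()
print((test - mean) / std)
```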
df_std = (df - train_mean) / train_std
df_std = df_std.melt(var_name = 'Column', value_name = 'Normalized')
plt.figure(figsize = (12, 6))
ax = sns.violinplot(x='Column', y='Normalized', data=df_std)
_ = ax.set_xticklabels(df.keys(), rotation=90)
# +
## 1. Indexes and offsets
# -
class WindowGenerator():
    def __init__(self, input_width, label_width, shift,
                 train_df=train_df, val_df=val_df, test_df=test_df,
                 label_columns=None):
        # Store the raw data.
        self.train_df = train_df
        self.val_df = val_df
        self.test_df = test_df

        # Work out the label column indices.
        self.label_columns = label_columns
        if label_columns is not None:
            self.label_columns_indices = {name: i for i, name in
                                          enumerate(label_columns)}
        self.column_indices = {name: i for i, name in
                               enumerate(train_df.columns)}

        # Work out the window parameters.
        self.input_width = input_width
        self.label_width = label_width
        self.shift = shift
        self.total_window_size = input_width + shift
        self.input_slice = slice(0, input_width)
        self.input_indices = np.arange(self.total_window_size)[self.input_slice]
        self.label_start = self.total_window_size - self.label_width
        self.labels_slice = slice(self.label_start, None)
        self.label_indices = np.arange(self.total_window_size)[self.labels_slice]

    def __repr__(self):
        return '\n'.join([
            f'Total window size: {self.total_window_size}',
            f'Input indices: {self.input_indices}',
            f'Label indices: {self.label_indices}',
            f'Label column name(s): {self.label_columns}'])
w1 = WindowGenerator(input_width=24, label_width=1, shift=24,
label_columns=['cpu', 'memory'])
w1
w2 = WindowGenerator(input_width=6, label_width=1, shift=1,
label_columns=['cpu', 'memory'])
w2
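# The slice bookkeeping above can be checked standalone with numpy (same arithmetic as `WindowGenerator`, no dataframes involved), here for the `w2` parameters:

```python
import numpy as np

input_width, label_width, shift = 6, 1, 1
total_window_size = input_width + shift          # 7

input_slice = slice(0, input_width)
input_indices = np.arange(total_window_size)[input_slice]

label_start = total_window_size - label_width    # 6
labels_slice = slice(label_start, None)
label_indices = np.arange(total_window_size)[labels_slice]

print(input_indices)  # [0 1 2 3 4 5]
print(label_indices)  # [6]
```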
# +
## 2. Splitting the window
# +
def split_window(self, features):
    inputs = features[:, self.input_slice, :]
    labels = features[:, self.labels_slice, :]
    if self.label_columns is not None:
        labels = tf.stack(
            [labels[:, :, self.column_indices[name]] for name in self.label_columns],
            axis=-1)

    # Slicing doesn't preserve static shape information, so set the shapes
    # manually. This way the `tf.data.Datasets` are easier to inspect.
    inputs.set_shape([None, self.input_width, None])
    labels.set_shape([None, self.label_width, None])
    return inputs, labels

WindowGenerator.split_window = split_window
# +
# Stack three slices, the length of the total window:
example_window = tf.stack([np.array(train_df[:w2.total_window_size]),
np.array(train_df[100:100+w2.total_window_size]),
np.array(train_df[200:200+w2.total_window_size])])
example_inputs, example_labels = w2.split_window(example_window)
print('All shapes are: (batch, time, features)')
print(f'Window shape: {example_window.shape}')
print(f'Inputs shape: {example_inputs.shape}')
print(f'labels shape: {example_labels.shape}')
# +
## 3. Plot: a simple visualization of the split window
# -
w2.example = example_inputs, example_labels
# +
def plot(self, model=None, plot_col='cpu', max_subplots=3):
inputs, labels = self.example
plt.figure(figsize=(12, 8))
plot_col_index = self.column_indices[plot_col]
max_n = min(max_subplots, len(inputs))
for n in range(max_n):
plt.subplot(3, 1, n+1)
plt.ylabel(f'{plot_col} [normed]')
plt.plot(self.input_indices, inputs[n, :, plot_col_index],
label='Inputs', marker='.', zorder=-10)
if self.label_columns:
label_col_index = self.label_columns_indices.get(plot_col, None)
else:
label_col_index = plot_col_index
if label_col_index is None:
continue
plt.scatter(self.label_indices, labels[n, :, label_col_index],
edgecolors='k', label='Labels', c='#2ca02c', s=64)
if model is not None:
predictions = model(inputs)
plt.scatter(self.label_indices, predictions[n, :, label_col_index],
marker='X', edgecolors='k', label='Predictions',
c='#ff7f0e', s=64)
if n == 0:
plt.legend()
plt.xlabel('Time [h]')
WindowGenerator.plot = plot
# -
w2.plot()
# +
## 4. Create tf.data.Datasets
# +
def make_dataset(self, data):
data = np.array(data, dtype=np.float32)
ds = tf.keras.preprocessing.timeseries_dataset_from_array(
data=data,
targets=None,
sequence_length=self.total_window_size,
sequence_stride=1,
shuffle=True,
batch_size=32,)
ds = ds.map(self.split_window)
return ds
WindowGenerator.make_dataset = make_dataset
# +
@property
def train(self):
return self.make_dataset(self.train_df)
@property
def val(self):
return self.make_dataset(self.val_df)
@property
def test(self):
return self.make_dataset(self.test_df)
@property
def example(self):
"""Get and cache an example batch of 'inputs, labels' for plotting. """
result = getattr(self, '_example', None)
if result is None:
#No example batch was found, so get one from the '.train' dataset
result = next(iter(self.train))
#And cache it for next time
self._example = result
return result
WindowGenerator.train = train
WindowGenerator.val = val
WindowGenerator.test = test
WindowGenerator.example = example
# -
# Each element is an (inputs, label) pair
w2.train.element_spec
for example_inputs, example_labels in w2.train.take(1):
print(f'Inputs shape (batch, time, features): {example_inputs.shape}')
print(f'Labels shape (batch, time, features): {example_labels.shape}')
# # Single-step models
single_step_window = WindowGenerator(
input_width=1, label_width=1, shift=1,
label_columns=['cpu'])
single_step_window
for example_inputs, example_labels in single_step_window.train.take(1):
print(f'Inputs shape (batch, time, features): {example_inputs.shape}')
print(f'Labels shape (batch, time, features): {example_labels.shape}')
# +
## Baseline
# -
class Baseline(tf.keras.Model):
def __init__(self, label_index=None):
super().__init__()
self.label_index = label_index
def call(self, inputs):
if self.label_index is None:
return inputs
result = inputs[:, :, self.label_index]
return result[:, :, tf.newaxis]
# +
baseline = Baseline(label_index=column_indices['cpu'])
baseline.compile(loss=tf.losses.MeanSquaredError(),
metrics=[tf.metrics.MeanAbsoluteError()])
val_performance = {}
performance = {}
val_performance['Baseline'] = baseline.evaluate(single_step_window.val)
performance['Baseline'] = baseline.evaluate(single_step_window.test, verbose=0)
# +
wide_window = WindowGenerator(
input_width=24, label_width=24, shift=1,
label_columns=['cpu'])
wide_window
# -
print('Input shape:', single_step_window.example[0].shape)
print('Output shape:', baseline(single_step_window.example[0]).shape)
wide_window.plot(baseline)
# +
## Linear model
# -
linear = tf.keras.Sequential([
tf.keras.layers.Dense(units=1)
])
print('Input shape:', single_step_window.example[0].shape)
print('Output shape:', linear(single_step_window.example[0]).shape)
# +
MAX_EPOCHS = 20
def compile_and_fit(model, window, patience=2):
early_stopping = tf.keras.callbacks.EarlyStopping(monitor='val_loss',
patience=patience,
mode='min')
model.compile(loss=tf.losses.MeanSquaredError(),
optimizer=tf.optimizers.Adam(),
metrics=[tf.metrics.MeanAbsoluteError()])
history = model.fit(window.train, epochs=MAX_EPOCHS,
validation_data=window.val,
callbacks=[early_stopping])
return history
# +
history = compile_and_fit(linear, single_step_window)
val_performance['Linear'] = linear.evaluate(single_step_window.val)
performance['Linear'] = linear.evaluate(single_step_window.test, verbose=0)
# -
print('Input shape:', wide_window.example[0].shape)
print('Output shape:', linear(wide_window.example[0]).shape)
wide_window.plot(linear)
plt.bar(x = range(len(train_df.columns)),
height=linear.layers[0].kernel[:,0].numpy())
axis = plt.gca()
axis.set_xticks(range(len(train_df.columns)))
_ = axis.set_xticklabels(train_df.columns, rotation=90)
# +
## Dense
# +
dense = tf.keras.Sequential([
tf.keras.layers.Dense(units=64, activation='relu'),
tf.keras.layers.Dense(units=64, activation='relu'),
tf.keras.layers.Dense(units=1)
])
history = compile_and_fit(dense, single_step_window)
val_performance['Dense'] = dense.evaluate(single_step_window.val)
performance['Dense'] = dense.evaluate(single_step_window.test, verbose=0)
# +
## Multi-step dense
# +
CONV_WIDTH = 3
conv_window = WindowGenerator(
input_width=CONV_WIDTH,
label_width=1,
shift=1,
label_columns=['cpu'])
conv_window
# -
conv_window.plot()
plt.title("Given 3h as input, predict 1h into the future.")
multi_step_dense = tf.keras.Sequential([
# Shape: (time, features) => (time*features)
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(units=32, activation='relu'),
tf.keras.layers.Dense(units=32, activation='relu'),
tf.keras.layers.Dense(units=1),
# Add back the time dimension.
# Shape: (outputs) => (1, outputs)
tf.keras.layers.Reshape([1, -1]),
])
print('Input shape:', conv_window.example[0].shape)
print('Output shape:', multi_step_dense(conv_window.example[0]).shape)
# +
history = compile_and_fit(multi_step_dense, conv_window)
IPython.display.clear_output()
val_performance['Multi step dense'] = multi_step_dense.evaluate(conv_window.val)
performance['Multi step dense'] = multi_step_dense.evaluate(conv_window.test, verbose=0)
# -
conv_window.plot(multi_step_dense)
print('Input shape:', wide_window.example[0].shape)
try:
print('Output shape:', multi_step_dense(wide_window.example[0]).shape)
except Exception as e:
print(f'\n{type(e).__name__}:{e}')
# +
## Convolutional neural network (CNN)
# -
conv_model = tf.keras.Sequential([
tf.keras.layers.Conv1D(filters=32,
kernel_size=(CONV_WIDTH,),
activation='relu'),
tf.keras.layers.Dense(units=32, activation='relu'),
tf.keras.layers.Dense(units=1),
])
print("Conv model on `conv_window`")
print('Input shape:', conv_window.example[0].shape)
print('Output shape:', conv_model(conv_window.example[0]).shape)
# +
history = compile_and_fit(conv_model, conv_window)
IPython.display.clear_output()
val_performance['Conv'] = conv_model.evaluate(conv_window.val)
performance['Conv'] = conv_model.evaluate(conv_window.test, verbose=0)
# -
print("Wide window")
print('Input shape:', wide_window.example[0].shape)
print('Labels shape:', wide_window.example[1].shape)
print('Output shape:', conv_model(wide_window.example[0]).shape)
# +
LABEL_WIDTH = 24
INPUT_WIDTH = LABEL_WIDTH + (CONV_WIDTH - 1)
wide_conv_window = WindowGenerator(
input_width=INPUT_WIDTH,
label_width=LABEL_WIDTH,
shift=1,
label_columns=['cpu'])
wide_conv_window
# -
print("Wide conv window")
print('Input shape:', wide_conv_window.example[0].shape)
print('Labels shape:', wide_conv_window.example[1].shape)
print('Output shape:', conv_model(wide_conv_window.example[0]).shape)
wide_conv_window.plot(conv_model)
# +
## Recurrent neural network (RNN)
# -
lstm_model = tf.keras.models.Sequential([
# Shape [batch, time, features] => [batch, time, lstm_units]
tf.keras.layers.LSTM(32, return_sequences=True),
# Shape => [batch, time, features]
tf.keras.layers.Dense(units=1)
])
print('Input shape:', wide_window.example[0].shape)
print('Output shape:', lstm_model(wide_window.example[0]).shape)
# +
history = compile_and_fit(lstm_model, wide_window)
IPython.display.clear_output()
val_performance['LSTM'] = lstm_model.evaluate(wide_window.val)
performance['LSTM'] = lstm_model.evaluate(wide_window.test, verbose=0)
# -
wide_window.plot(lstm_model)
# +
## Performance
# +
x = np.arange(len(performance))
width = 0.3
metric_name = 'mean_absolute_error'
metric_index = lstm_model.metrics_names.index('mean_absolute_error')
val_mae = [v[metric_index] for v in val_performance.values()]
test_mae = [v[metric_index] for v in performance.values()]
plt.ylabel('mean_absolute_error [cpu, normalized]')
plt.bar(x - 0.17, val_mae, width, label='Validation')
plt.bar(x + 0.17, test_mae, width, label='Test')
plt.xticks(ticks=x, labels=performance.keys(),
rotation=45)
_ = plt.legend()
# -
for name, value in performance.items():
print(f'{name:12s}: {value[1]:0.4f}')
| gantry-jupyterhub/time_series/test/tf_time_weather/test_tf_time_aiops_single_step_model.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Assignment 2
# From the provided corpus, build a neural language model based on the word2vec architecture. Follow these steps:
#
# 1. Clean the texts and apply stemming to the words. **[DONE]**
# 2. Insert start- and end-of-string symbols. **[DONE]**
# 3. Obtain the bigrams that appear in this text. **[DONE]**
# 4. Train the neural network on the bigrams and obtain values for the hyperparameters. Use 100 to 300 units for the hidden layer.
# 5. Obtain the A and Pi matrices from the outputs of the neural network.
# 6. Compute the probability of the following sentences:
# - Nos bañamos con agua caliente
# - El animalito le olía la cabeza
# - Pascuala ordeñaba las vacas
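Steps 5 and 6 ask for sentence probabilities from the Pi and A matrices. As a minimal sketch of the chain rule for a bigram model (the probabilities below are made up; in this notebook they would come from `MatrixPI` and a row-normalized `MatrixA`):

```python
import math

# Hypothetical initial-state and transition probabilities over stems.
Pi = {'nos': 0.4, 'el': 0.6}
A = {('nos', 'ban'): 0.5, ('ban', 'agu'): 0.5}

def sentence_log_prob(stems, Pi, A):
    # log P(w1..wn) = log Pi[w1] + sum_i log A[w_{i-1}, w_i]
    logp = math.log(Pi[stems[0]])
    for prev, cur in zip(stems, stems[1:]):
        logp += math.log(A[(prev, cur)])
    return logp

print(math.exp(sentence_log_prob(['nos', 'ban', 'agu'], Pi, A)))  # ~0.1
```

Working in log space avoids underflow when sentences get long and many small probabilities are multiplied.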
# +
# Import the libraries
# We use os to load the books
from os import listdir,getcwd
from os.path import isfile, join
import re
# -
# ## Import the corpus
# First we import the corpus into the notebook so we can use it. We define two ways to load the corpus: loading a single document, or loading every document in the folder.
# +
# Get the path of the folder where the corpora are stored
folder_path = (getcwd() + r"/CorpusDocs")
# Store the corpus file names in a list.
# This is used when loading all of the corpora.
corpus_name_list = [f for f in listdir(folder_path) if isfile(join(folder_path, f))]
def loadAllCorpus():
"""
Loads every corpus in the CorpusDocs folder.
"""
corpus = ''
for file in corpus_name_list:
with open("./CorpusDocs/" + file, 'r', encoding="utf8") as f:
corpus += f.read()
return corpus
def loadCorpus(corpus_name):
"""
Loads a single, specific corpus.
"""
with open("./CorpusDocs/" + corpus_name, 'r', encoding="utf8") as f:
corpus = f.read()
return corpus
# +
# Load the corpus.
#corpus = loadAllCorpus()
corpus = loadCorpus('corpusML.txt')
# -
# ## Text Cleaning
# We split the sentences into individual words so we can work with them one by one
def add_eos_init(corpus):
"""
Adds <eos> and <init> to a corpus. Based on line jumps
"""
init = '<init> '
eos = ' <eos>'
corpus_in_eo = []
corpus_in_eo = init + corpus + eos
corpus_in_eo= corpus_in_eo.replace("\n", eos +" \n"+init)
return corpus_in_eo
corpus_init = add_eos_init(corpus)
words = corpus_init.split()
print(words[:20])
# We remove punctuation and accents from the document and normalize the text to lowercase. To strip the punctuation symbols we use a translation table, which keeps processing fast. We also had to extend the symbol table with some Latin symbols that were missing.
#
# To strip accents we use the unidecode library, which has to be installed separately: `pip install unidecode`
# +
import string
import unidecode
# To keep some flags, remove < and > from the set of
# punctuation symbols that will be stripped
punct = string.punctuation.replace("<", '')
punct = punct.replace(">", '')
print(punct)
# +
lat_punctuation = punct + '¿¡1234567890'
#print(lat_punctuation)
table = str.maketrans('', '', lat_punctuation)
# +
clean_words = []
for word in words:
word = word.lower() # Lowercase
word = unidecode.unidecode(word) # Strip accents.
# Clean punctuation
temp_w = []
for letter in word:
if letter not in lat_punctuation:
temp_w.append(letter)
word = ''.join(temp_w)
clean_words.append(word)
# -
# ## Word Stemming
# We use NLTK to stem the words, which has to be installed first: `pip install nltk`
#
# The first thing we do is define a stemmer. Here we use the [Snowball Stemmer](http://snowball.tartarus.org/texts/introduction.html).
from nltk.stem import SnowballStemmer
stemmer = SnowballStemmer('spanish')
# +
stemmed_text = []
for word in clean_words:
stemmed_text.append(stemmer.stem(word))
print(stemmed_text[:10])
# -
# ## Bigrams
# To obtain the bigrams present in the corpus we create a function that extracts every bigram that exists in the corpus and builds a list of them.
def create_ngrams(stemmed_text, n):
"""
Creates an n-gram structure from a stemmed or tokenized text.
Params
------
stemmed_text: Tokens or stemmed words of a corpus
n: the size of the n gram ex. 2 for a bigram
"""
return zip(*[stemmed_text[i:] for i in range(n)])
bigramas = list(create_ngrams(stemmed_text, 2))
print(bigramas[0])
import collections
counter=collections.Counter(bigramas)
print(counter)
print(counter['me', 'peg'])
# +
# Get the alphabet of stemmed words
#alfabeto = set(stemmed_text)
alfabetoPI = []
for stem in stemmed_text:
if stem not in alfabetoPI:
alfabetoPI.append(stem)
alfabetoPI.remove('<init>')
print("AlfabetoPI total: {0}".format(len(alfabetoPI)))
# -
# ## Matrix Visualization
# We use pandas to visualize the A matrix and the Pi matrix
import pandas as pd
MatrixA = pd.DataFrame(index=alfabetoPI, columns=alfabetoPI)
for x in alfabetoPI:
for y in alfabetoPI:
MatrixA.at[y, x] = counter[y, x] + 1
MatrixA
MatrixPI = pd.DataFrame(index=alfabetoPI, columns=['<init>'])
for y in alfabetoPI:
if y != '<eos>':
MatrixPI.at[y, '<init>'] = counter['<init>', y]
MatrixPI
# # One Hot
from sklearn.preprocessing import LabelEncoder
from sklearn.preprocessing import OneHotEncoder
label_encoder = LabelEncoder()
integer_encoded = label_encoder.fit_transform(alfabetoPI)
print(integer_encoded)
Matrix_One_Hot = pd.DataFrame(index=integer_encoded, columns=integer_encoded)
Values_MatrixA = MatrixA.to_numpy()
w = 0
j = 0
for x in integer_encoded:
for y in integer_encoded:
Matrix_One_Hot.at[y, x] = Values_MatrixA[j, w]
j = j + 1
j=0
w = w + 1
Matrix_One_Hot
# # Neural Network
# ## N-300-N
# +
import random
import numpy as np
import math
def softmax(x,vx):
vy = np.zeros([len(vx)])
for i in range(len(vy)): # cover every element (the original loop skipped the last one)
vy[i] = math.exp(vx[i])
sf = math.exp(x) / sum(vy)
return sf
def softmaxP(phi,y):
return phi-y
def obtainTarget():
values = np.zeros([1216])
for i in range(1216):
values[i] = Matrix_One_Hot.columns[i]
return values
def EntropyX(y,p):
return (1/softmaxP(y,p)) * math.exp(p)
def Dout(phi,y):
x = softmaxP(phi,y)
if x == 0:
return 1
else:
return 0
# TODO: review the risk function
def riego(y,p):
return -sum(y)*math.log(EntropyX(y,p))
#Functions for the neural network
def weight_B_Init():
bias = random.random()
weight = random.random()
return weight, bias
def randomProbability():
wX = np.zeros([300])
bX = np.zeros([300])
for i in range(300):
wX[i], bX[i] = weight_B_Init()
return wX,bX
def lineal(x):
if(x<0):
return 0
else:
return x
def error(x,y):
return y-x
def feedForward(bigrama,probabilityWhen,v_W,v_B,v_P,j):
#Learning rate
alpha = 0.01
a0 = probabilityWhen
a1 = lineal(v_W*a0 + v_B)
u = probabilityWhen*a1 + v_B
a2 = softmax(u,v_P)
e = error(a2,bigrama[1])
s2 = (-2)*softmaxP(a1,probabilityWhen) * e
u = np.array([[softmax(a1,v_P),0]])
s1 = u * probabilityWhen * s2
w2_1 = probabilityWhen - alpha*s2*a1
b2_1 = v_B - alpha*s2
v_P[j] = alpha * EntropyX(s2,probabilityWhen)
# print(EntropyX(Total_Value,probabilityWhen))
return w2_1, b2_1
# -
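The hand-rolled `softmax` above exponentiates raw values directly, which overflows for large inputs. A vectorized, numerically stable variant (a sketch, not a drop-in replacement for the notebook's two-argument signature) subtracts the maximum first:

```python
import numpy as np

def softmax_stable(v):
    v = np.asarray(v, dtype=float)
    e = np.exp(v - v.max())   # shifting by the max leaves the result unchanged
    return e / e.sum()

print(softmax_stable([1.0, 2.0, 3.0]))     # probabilities summing to 1
print(softmax_stable([1000.0, 1001.0]))    # no overflow despite huge inputs
```

Shifting works because softmax is invariant to adding a constant to every input, so the largest exponent is always `exp(0) = 1`.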
# # Building the neural network
# +
#Create 300 random weights and biases
vector_Weight, vector_Bias = randomProbability()
#Output matrix (probability of each bigram)
matrix_Probability = Matrix_One_Hot / 1216 #Initialization
#Variables to move across the table
count_Context = 0
count_Target = 0
################# EXAMPLE FOR CONTEXT 1 ########################
#Take context X
bigram_Context = Matrix_One_Hot.columns[count_Context]
#Take the vector with every possible combination
bigram_Target = obtainTarget()
#Take the probability of the column
vector_Probability = matrix_Probability.iloc[count_Context,:].to_numpy()
#Compute the total probability of vector_Probability
total_Vector_Probability = sum(vector_Probability)
#Now feed the possible bigram combinations into the neural network
for j in range(1216): #First loop -> iterate over every bigram of a single context
#Probability of the bigram over the sum of all combinations of the context
probabilityWhen = matrix_Probability.iloc[count_Context,count_Target] / total_Vector_Probability
for i in range(300): #Second loop -> pass through the 300 nodes
bigram = [bigram_Context, bigram_Target[count_Target]] #Build the bigram (context, target)
vector_Weight[i], vector_Bias[i]= feedForward(bigram,probabilityWhen,
vector_Weight[i],vector_Bias[i],
vector_Probability,j)
i = 0
count_Target = count_Target + 1
vector_Probability
# -
# # Authors:
# - <NAME>
# - <NAME>
#
| Tarea #2.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python [conda env:metis] *
# language: python
# name: conda-env-metis-py
# ---
# +
#Using different classification models on the molecular data to predict a GPCR ligand
# +
import pandas as pd
import numpy as np
# visualization imports
import matplotlib.pyplot as plt
import seaborn as sns
plt.style.use('fivethirtyeight')
# %matplotlib inline
# modeling imports
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import precision_score, recall_score, accuracy_score, roc_auc_score
from sklearn.metrics import confusion_matrix
from sklearn.preprocessing import StandardScaler
pd.options.display.max_columns = None
import uniprot as up
import pprint
import csv
from sklearn.neighbors import KNeighborsClassifier
from sklearn import metrics
from sklearn.model_selection import cross_val_score
from sklearn_pandas import DataFrameMapper
import xgboost as xgb
import imblearn.over_sampling
from sklearn.metrics import roc_curve
sns.set_style(style = 'white')
# -
# ## First try KNN as a base model
#Read in data, need to drop a few columns
df_class = pd.read_csv('df_classification.csv')
df_class.drop(labels = ['Unnamed: 0', 'PubChem CID', 'CID'], axis = 1, inplace = True)
#Drop columns with NA and set features and target
df_class.dropna(inplace = True)
X = df_class.select_dtypes(include = 'number', exclude = 'object')
X.drop(['GPCR'], axis = 1, inplace = True)
y = df_class['GPCR']
X
# Create train and test split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
# Scale data
mapper = DataFrameMapper([(X_train.columns, StandardScaler())])
scaled_X_train = mapper.fit_transform(X_train.copy())
scaled_X_train_df = pd.DataFrame(scaled_X_train, columns=X_train.columns)
scaled_X_test = mapper.transform(X_test.copy())
scaled_X_test_df = pd.DataFrame(scaled_X_test, columns=X_test.columns)
#Perform KNN and evaluate model on the training data
knn = KNeighborsClassifier(n_neighbors=5)
knn.fit(scaled_X_train, y_train)
y_train_pred = knn.predict(scaled_X_train)
print(accuracy_score(y_train, y_train_pred))
print(recall_score(y_train, y_train_pred))
print(precision_score(y_train, y_train_pred))
print(metrics.f1_score(y_train, y_train_pred))
#Evaluate the model trained above on the test data
y_test_pred = knn.predict(scaled_X_test)
print(accuracy_score(y_test, y_test_pred))
print(recall_score(y_test, y_test_pred))
print(precision_score(y_test, y_test_pred))
print(metrics.f1_score(y_test, y_test_pred))
# + tags=[]
#Cross-validate the KNN model with the four scoring metrics
mapper = DataFrameMapper([(X.columns, StandardScaler())])
scaled_X = mapper.fit_transform(X.copy())
scaled_X_df = pd.DataFrame(scaled_X, columns=X.columns)
metrics_ = ['recall', 'precision', 'accuracy', 'f1']
for metric in metrics_:
knn = KNeighborsClassifier(n_neighbors=5, weights = 'distance')
scores = cross_val_score(knn, scaled_X_df, y, cv=10, scoring=metric)
print(scores)
# -
# #### Find optimal K for each metric and each weight type and plot
def opt_k(X,y,metric, weight):
mapper = DataFrameMapper([(X.columns, StandardScaler())])
scaled_X = mapper.fit_transform(X.copy())
scaled_X_df = pd.DataFrame(scaled_X, columns=X.columns)
k_range = list(range(1, 31))
k_scores = []
for k in k_range:
knn = KNeighborsClassifier(n_neighbors=k, weights = weight)
scores = cross_val_score(knn, scaled_X_df, y, cv=10, scoring= metric)
k_scores.append(scores.mean())
plt.plot(k_range, k_scores)
plt.xlabel('Value of K for KNN')
plt.ylabel('Cross-Validated ' + metric + ' with ' + weight + ' weights')
opt_k(X,y,'recall', 'uniform')
opt_k(X,y,'recall', 'distance')
opt_k(X,y, 'precision', 'uniform')
opt_k(X,y, 'precision', 'distance')
opt_k(X,y, 'accuracy', 'uniform')
opt_k(X,y, 'accuracy', 'distance')
# Okay, that was fun, but let's get down to business.
# ## Compare different models
#
# Now I am going to do a quick test to determine which of KNN, logistic regression, decision tree, random forest, or gradient boosted trees works best for my data
# +
def quick_test(model, X, y):
xtrain, xtest, ytrain, ytest = train_test_split(X, y, test_size=0.3, random_state=22)
model.fit(xtrain, ytrain)
return model.score(xtest, ytest)
def quick_test_afew_times(model, X, y, n=10):
return np.mean([quick_test(model, X, y) for j in range(n)])
# -
knn = KNeighborsClassifier(n_neighbors=5)
print(quick_test(knn, X, y))
print(quick_test_afew_times(knn, X, y))
logreg = LogisticRegression(max_iter = 1000, C = 1000)
print(quick_test(logreg, X, y))
print(quick_test_afew_times(logreg, X, y))
decisiontree = DecisionTreeClassifier(max_depth=10)
print(quick_test(decisiontree, X, y))
print(quick_test_afew_times(decisiontree, X, y))
# Classification target, so use XGBClassifier; 300 trees keeps this quick test fast
gbm = xgb.XGBClassifier(n_estimators = 300)
print(quick_test(gbm, X, y))
print(quick_test_afew_times(gbm, X, y))
# Ignore this - note to self
# **Why don't we test/train split on the decision tree and randomforest?**
# Always test/train split
# Ignore this - note to self
# **Is feature engineering important in decision tree and random forest?**
# Always important
# But in non-linear classification models: don't worry about scaling or linearizing the data
# Ignore this - note to self
# **Do decision trees require scaling?**
# For standard scaling - no
# Hyperparameter to limit size trees - yes
# Accuracy: randomforest > decisiontree > logistic regression > gbm
# I'd like to look at some other metrics though, in particular f1
# +
def accuracy(actuals, preds):
return np.mean(actuals == preds)
def precision(actuals, preds):
tp = np.sum((actuals == 1) & (preds == 1))
fp = np.sum((actuals == 0) & (preds == 1))
return tp / (tp + fp)
def recall(actuals, preds):
tp = np.sum((actuals == 1) & (preds == 1))
fn = np.sum((actuals == 1) & (preds == 0))
return tp / (tp + fn)
def F1(actuals, preds):
p, r = precision(actuals, preds), recall(actuals, preds)
return 2*p*r / (p + r)
# +
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=22)
logreg.fit(X_train, y_train)
knn.fit(X_train, y_train)
decisiontree.fit(X_train, y_train)
randomforest = RandomForestClassifier(max_depth=10)
randomforest.fit(X_train, y_train)
gbm.fit(X_train, y_train)
print('Logistic regression validation metrics: \n Accuracy: %.4f \n Precision: %.4f \n Recall: %.4f \n F1: %.4f' %
(accuracy(y_test, logreg.predict(X_test)),
precision(y_test, logreg.predict(X_test)),
recall(y_test, logreg.predict(X_test)),
F1(y_test, logreg.predict(X_test))))
print('\n')
print('5 nearest neighbors validation metrics: \n Accuracy: %.4f \n Precision: %.4f \n Recall: %.4f \n F1: %.4f' %
(accuracy(y_test, knn.predict(X_test)),
precision(y_test, knn.predict(X_test)),
recall(y_test, knn.predict(X_test)),
F1(y_test, knn.predict(X_test))))
print('\n')
print('Decisiontree max depth 10 validation metrics: \n Accuracy: %.4f \n Precision: %.4f \n Recall: %.4f \n F1: %.4f' %
(accuracy(y_test, decisiontree.predict(X_test)),
precision(y_test, decisiontree.predict(X_test)),
recall(y_test, decisiontree.predict(X_test)),
F1(y_test, decisiontree.predict(X_test))))
print('\n')
print('Random Forest max depth 10 validation metrics: \n Accuracy: %.4f \n Precision: %.4f \n Recall: %.4f \n F1: %.4f' %
(accuracy(y_test, randomforest.predict(X_test)),
precision(y_test, randomforest.predict(X_test)),
recall(y_test, randomforest.predict(X_test)),
F1(y_test, randomforest.predict(X_test))))
print('\n')
print('Gradient boosted trees metrics: \n Accuracy: %.4f \n Precision: %.4f \n Recall: %.4f \n F1: %.4f' %
(accuracy(y_test, gbm.predict(X_test)),
precision(y_test, gbm.predict(X_test)),
recall(y_test, gbm.predict(X_test)),
F1(y_test, gbm.predict(X_test))))
# -
# **How would I increase recall in my best method?**
# 1) Model selection - random forest seems to work best
# 2) Feature engineering - I put my logic below
# 3) Handle class imbalance -
# 4) Hyperparameter tuning
# 5) Threshold selection
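Item 5, threshold selection, isn't explored elsewhere in this notebook. A small self-contained sketch (on synthetic data via `make_classification`, standing in for the imbalanced ligand features) shows the recall/precision trade-off of moving the decision threshold below the default 0.5:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import precision_score, recall_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for the ligand data (assumption: imbalanced binary target).
X_demo, y_demo = make_classification(n_samples=500, weights=[0.8, 0.2],
                                     random_state=0)
Xtr, Xte, ytr, yte = train_test_split(X_demo, y_demo, random_state=0)

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(Xtr, ytr)
probs = clf.predict_proba(Xte)[:, 1]

# Lowering the threshold flags more positives: recall rises, precision falls.
for thresh in (0.5, 0.3, 0.1):
    preds = (probs >= thresh).astype(int)
    print(thresh,
          recall_score(yte, preds),
          precision_score(yte, preds, zero_division=0))
```

On the real model this would use `rf_OS_fe.predict_proba(X_test_fe)[:, 1]` and pick the threshold that maximizes F1 (or whatever recall floor the application needs).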
# ## Data resampling
# Correct for class imbalances between GPCRs and not GPCRs.
# Determine if there is class imbalances
print(df_class.GPCR.sum())
print(len(df_class.GPCR))
# +
#Compare strategies - RandomOverSampler and SMOTE
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
#Oversample the minority (GPCR) class to twice its count in the training set
ratio = {1: int(np.sum(y_train == 1) * 2), 0: int(np.sum(y_train == 0))}
ROS = imblearn.over_sampling.RandomOverSampler(sampling_strategy = ratio, random_state=42)
X_train_OS, y_train_OS = ROS.fit_resample(X_train, y_train)
rf_OS = RandomForestClassifier(n_estimators=100)
rf_OS.fit(X_train_OS, y_train_OS)
print('Random Forest with over sampling : \n Accuracy: %.4f \n Precision: %.4f \n Recall: %.4f \n F1: %.4f' %
(accuracy(y_test, rf_OS.predict(X_test)),
precision(y_test, rf_OS.predict(X_test)),
recall(y_test, rf_OS.predict(X_test)),
F1(y_test, rf_OS.predict(X_test))))
smote = imblearn.over_sampling.SMOTE(sampling_strategy = ratio, random_state=42)
X_train_SMOTE, y_train_SMOTE = smote.fit_resample(X_train, y_train)
rf_SMOTE = RandomForestClassifier(n_estimators=100)
rf_SMOTE.fit(X_train_SMOTE, y_train_SMOTE)
print('Random Forest with SMOTE : \n Accuracy: %.4f \n Precision: %.4f \n Recall: %.4f \n F1: %.4f' %
(accuracy(y_test, rf_SMOTE.predict(X_test)),
precision(y_test, rf_SMOTE.predict(X_test)),
recall(y_test, rf_SMOTE.predict(X_test)),
F1(y_test, rf_SMOTE.predict(X_test))))
# -
# ## Feature Engineering
# ### Very general characteristics of GPCR ligands
# - Binds in a more hydrophobic environment (so tends to have lower octanol/water partition coefficient which is XlogP), and a lower polar surface area
# - H-bonds donors and acceptors are very important in binding
# - Heavy atom count will skew higher due to peptide ligands
# +
# Normalize for molecular mass
df_class_fe = df_class.copy()
df_class_fe['TPSA_Mass'] = df_class['TPSA']/df_class['ExactMass']
df_class_fe['XLogP_Mass'] = df_class['XLogP']/df_class['ExactMass']
df_class_fe['HBondAcceptorCount_Mass'] = df_class['HBondAcceptorCount']/df_class['ExactMass']
df_class_fe['HBondDonorCount_Mass'] = df_class['HBondDonorCount']/df_class['ExactMass']
df_class_fe['HeavyAtomCount_Mass'] = df_class['HeavyAtomCount']/df_class['ExactMass']
df_class_fe['Complexity_Mass'] = df_class['Complexity']/df_class['ExactMass']
# +
X_fe = df_class_fe.select_dtypes(include = 'number', exclude = 'object')
X_fe.drop(['GPCR'], axis = 1, inplace = True)
y_fe = df_class_fe['GPCR']
X_train_fe, X_test_fe, y_train_fe, y_test_fe = train_test_split(X_fe, y_fe, test_size=0.2, random_state=22)
# y_train_fe
y_train_fe.value_counts()
# -
n_pos = np.sum(y_train_fe == 1)
n_neg = np.sum(y_train_fe == 0)
ratio = {1 : int(n_pos * 2), 0 : n_neg}
ratio
# +
ROS = imblearn.over_sampling.RandomOverSampler(sampling_strategy = ratio, random_state=22)
X_train_OS_fe, y_train_OS_fe = ROS.fit_resample(X_train_fe, y_train_fe)
rf_OS_fe = RandomForestClassifier(n_estimators=100)
rf_OS_fe.fit(X_train_OS_fe, y_train_OS_fe)
y_train_OS_fe.value_counts()
# -
print('Feature engineering improvements: \n Accuracy: %.4f \n Precision: %.4f \n Recall: %.4f \n F1: %.4f' %
(accuracy(y_test_fe, rf_OS_fe.predict(X_test_fe)),
precision(y_test_fe, rf_OS_fe.predict(X_test_fe)),
recall(y_test_fe, rf_OS_fe.predict(X_test_fe)),
F1(y_test_fe, rf_OS_fe.predict(X_test_fe))))
# ## Parameter tuning
# First I need to figure out if I am overfitting or underfitting
print('Train data metrics: \n Accuracy: %.4f \n Precision: %.4f \n Recall: %.4f \n F1: %.4f' %
(accuracy(y_train_OS_fe, rf_OS_fe.predict(X_train_OS_fe)),
precision(y_train_OS_fe, rf_OS_fe.predict(X_train_OS_fe)),
recall(y_train_OS_fe, rf_OS_fe.predict(X_train_OS_fe)),
F1(y_train_OS_fe, rf_OS_fe.predict(X_train_OS_fe))))
# I am definitely overfitting. Increase n_estimators above from 100 to 500.
#
# Previous metrics -
# Accuracy: 0.9099
# Precision: 0.8708
# Recall: 0.8047
# F1: 0.8365
# New metrics -
# Accuracy: 0.9110
# Precision: 0.8757
# Recall: 0.8029
# F1: 0.8377
#
# That didn't do much
# Try modifying hyperparameters of the random forest model
# +
# Change max_depth, didn't do much
depth_range = np.arange(10, 500, 10)
depth_scores = []
for depth in depth_range:
rf_OS_fe_depth = RandomForestClassifier(n_estimators = 100, max_depth = depth)
rf_OS_fe_depth.fit(X_train_OS_fe, y_train_OS_fe)
y_test_pred_rf = rf_OS_fe_depth.predict(X_test_fe)
scores = metrics.f1_score(y_test_fe, y_test_pred_rf)
depth_scores.append(scores.mean())
plt.plot(depth_range, depth_scores)
plt.xlabel('Max Depth for Random Forest')
# +
nodes_range = np.arange(2, 200, 5)
nodes_scores = []
for nodes in nodes_range:
rf_OS_fe_nodes = RandomForestClassifier(n_estimators = 100, max_leaf_nodes = nodes)
rf_OS_fe_nodes.fit(X_train_OS_fe, y_train_OS_fe)
y_test_pred_rf = rf_OS_fe_nodes.predict(X_test_fe)
scores = metrics.f1_score(y_test_fe, y_test_pred_rf)
nodes_scores.append(scores.mean())
plt.plot(nodes_range, nodes_scores)
plt.xlabel('Max leaf nodes for Random Forest')
# +
#Next I'll try the ccp score
ccp_range = np.arange(0, 0.01, .0005)
ccp_scores = []
for ccp in ccp_range:
rf_OS_fe_ccp = RandomForestClassifier(n_estimators = 100, ccp_alpha = ccp)
rf_OS_fe_ccp.fit(X_train_OS_fe, y_train_OS_fe)
y_test_pred_rf = rf_OS_fe_ccp.predict(X_test_fe)
scores = metrics.f1_score(y_test_fe, y_test_pred_rf)
ccp_scores.append(scores.mean())
plt.plot(ccp_range, ccp_scores)
plt.xlabel('CCP for Random Forest')
# +
#Now try number of features
feat_range = np.arange(1, 20, 1)
feat_scores = []
for feat in feat_range:
rf_OS_fe_feat = RandomForestClassifier(n_estimators = 100, max_features = feat)
rf_OS_fe_feat.fit(X_train_OS_fe, y_train_OS_fe)
y_test_pred_rf = rf_OS_fe_feat.predict(X_test_fe)
scores = metrics.f1_score(y_test_fe, y_test_pred_rf)
feat_scores.append(scores.mean())
plt.plot(feat_range, feat_scores)
plt.xlabel('Feature Range for Random Forest')
# +
#Finally try leaf_range
leaf_range = np.arange(1, 100, 2)
leaf_scores = []
for leaf in leaf_range:
rf_OS_fe_leaf = RandomForestClassifier(n_estimators = 100, min_samples_leaf = leaf)
rf_OS_fe_leaf.fit(X_train_OS_fe, y_train_OS_fe)
y_test_pred_rf = rf_OS_fe_leaf.predict(X_test_fe)
scores = metrics.f1_score(y_test_fe, y_test_pred_rf)
leaf_scores.append(scores.mean())
plt.plot(leaf_range, leaf_scores)
plt.xlabel('Minimum leaf size for Random Forest')
# -
# Tuning of the parameters above didn't really help the overfitting problem - I probably just need more data
# ## Cross validation and graphing of metrics
#Cross-validation of model
scores = cross_val_score(rf_OS_fe, X_fe, y_fe, cv=10, scoring='f1')
print(scores)
print(scores.mean())
print(scores.std())
#Why is this so much lower??
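# A plausible explanation (a sketch, not a claim about this exact pipeline): the
# holdout metrics above came from a model fitted on the oversampled training set,
# while `cross_val_score` refits on the raw imbalanced folds. Doing the
# oversampling inside each fold makes the comparison fair; here is a minimal
# self-contained sketch on synthetic data (all names and numbers illustrative):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import f1_score
from sklearn.model_selection import StratifiedKFold
from sklearn.utils import resample

# synthetic imbalanced data standing in for X_fe / y_fe
X, y = make_classification(n_samples=600, weights=[0.85], random_state=0)

def oversample(X, y):
    # duplicate minority-class rows (with replacement) until classes balance
    X_res, y_res = resample(X[y == 1], y[y == 1],
                            n_samples=int((y == 0).sum()), random_state=0)
    return np.vstack([X[y == 0], X_res]), np.concatenate([y[y == 0], y_res])

scores = []
for tr, te in StratifiedKFold(5, shuffle=True, random_state=0).split(X, y):
    X_tr, y_tr = oversample(X[tr], y[tr])   # oversample inside the fold only
    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    clf.fit(X_tr, y_tr)
    scores.append(f1_score(y[te], clf.predict(X[te])))
print(round(float(np.mean(scores)), 3))
```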
# +
#ROC
sns.set_style(style = 'whitegrid')
fpr, tpr, _ = roc_curve(y_test_fe, rf_OS_fe.predict_proba(X_test_fe)[:,1])
plt.plot(fpr, tpr)
# fpr, tpr, _ = roc_curve(y_test, rf_OS.predict_proba(X_test)[:,1])
# sns.lineplot(fpr, tpr)
x = np.linspace(0,1, 100000)
plt.plot(x, x, linestyle='--')
plt.title('ROC Curve')
plt.xticks(size = 11)
plt.yticks(size = 11)
plt.xlabel('False Positive Rate', size = 13)
plt.ylabel('True Positive Rate', size = 13)
plt.grid(False)
sns.despine()
plt.savefig('ROC.svg', bbox_inches = 'tight');
# +
# Feature importance
# plt.fig_size([10,10])
sns.set_style(style = 'whitegrid')
features = X_fe.columns
importances = rf_OS_fe.feature_importances_
indices = np.argsort(importances)
plt.figure(figsize = (8, 6), facecolor = 'w')
plt.title('Feature Importances', size = 15)
plt.barh(range(len(indices)), importances[indices])
plt.yticks(range(len(indices)), [features[i] for i in indices], size = 12)
plt.xlabel('Relative Importance', size = 12)
plt.grid(False)
sns.despine()
plt.patch.set_facecolor('white')
plt.savefig('feature_importance.svg', bbox_inches = 'tight')
# -
X_test_fe.shape
y_test_fe.shape
#Confusion matrix
rf_OS_fe_confusion = confusion_matrix(y_test_fe, rf_OS_fe.predict(X_test_fe))
plt.figure(dpi=150)
sns.heatmap(rf_OS_fe_confusion, cmap=plt.cm.Blues, annot=True, square=True, fmt = 'g')
plt.xlabel('Predicted GPCR molecule')
plt.ylabel('Actual GPCR molecule')
plt.title('Confusion matrix')
plt.savefig('confusion_matrix.svg', facecolor = 'red')
# **Can I determine which samples are being misclassified?**
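# One way to answer this (a sketch on synthetic data; `rf_OS_fe`, `X_test_fe`
# and `y_test_fe` from the cells above would play the roles of `model`,
# `X_te` and `y_te`): compare predictions with the labels, keep the indices
# that disagree, and inspect the predicted probabilities of those rows.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=300, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)

pred = model.predict(X_te)
mis_idx = np.flatnonzero(pred != y_te)          # rows the model got wrong
proba = model.predict_proba(X_te)[mis_idx, 1]   # confidence of the wrong calls
print(len(mis_idx), proba.round(2))
```

# Rows with probabilities near 0.5 are borderline cases; probabilities near 0
# or 1 are confident mistakes and usually the more interesting ones to inspect.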
# +
def make_confusion_matrix(model, threshold=0.16):
# Predict class 1 if probability of being in class 1 is greater than threshold
# (model.predict(X_test) does this automatically with a threshold of 0.5)
y_predict_fe = (model.predict_proba(X_test_fe)[:, 1] >= threshold)
fraud_confusion = confusion_matrix(y_test_fe, y_predict_fe)
plt.figure(dpi=80)
sns.heatmap(fraud_confusion, cmap=plt.cm.Blues, annot=True, square=True, fmt='d',
xticklabels=['legit', 'fraud'],
yticklabels=['legit', 'fraud']);
plt.xlabel('prediction')
plt.ylabel('actual')
plt.savefig('confusion_matrix_threshold.svg')
# -
make_confusion_matrix(rf_OS_fe, 0.16)
# +
#Interactive confusion matrix
from ipywidgets import interactive, FloatSlider
interactive(lambda threshold: make_confusion_matrix(rf_OS_fe, threshold), threshold=(0.0,1.0,0.02))
# -
| metis-classification-models.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Modeling and Simulation in Python
#
# Case study: Spider-Man
#
# Copyright 2017 <NAME>
#
# License: [Creative Commons Attribution 4.0 International](https://creativecommons.org/licenses/by/4.0)
#
# +
# Configure Jupyter so figures appear in the notebook
# %matplotlib inline
# Configure Jupyter to display the assigned value after an assignment
# %config InteractiveShell.ast_node_interactivity='last_expr_or_assign'
# import functions from the modsim.py module
from modsim import *
# -
#
# I'll start by getting the units we'll need from Pint.
m = UNITS.meter
s = UNITS.second
kg = UNITS.kilogram
N = UNITS.newton
degree = UNITS.degree
radian = UNITS.radian
# ### Spider-Man
# In this case study we'll develop a model of Spider-Man swinging from a springy cable of webbing attached to the top of the Empire State Building. Initially, Spider-Man is at the top of a nearby building, as shown in this diagram.
#
# 
#
# The origin, `O`, is at the base of the Empire State Building. The vector `H` represents the position where the webbing is attached to the building, relative to `O`. The vector `P` is the position of Spider-Man relative to `O`. And `L` is the vector from the attachment point to Spider-Man.
#
# By following the arrows from `O`, along `H`, and along `L`, we can see that
#
# `H + L = P`
#
# So we can compute `L` like this:
#
# `L = P - H`
#
# The goals of this case study are:
#
# 1. Implement a model of this scenario to predict Spider-Man's trajectory.
#
# 2. Choose the right time for Spider-Man to let go of the webbing in order to maximize the distance he travels before landing.
#
# 3. Choose the best angle for Spider-Man to jump off the building, and let go of the webbing, to maximize range.
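# The vector bookkeeping above can be sanity-checked with plain NumPy arrays
# (illustrative numbers, not values from the simulation, and plain arrays
# instead of the modsim `Vector` type):

```python
import numpy as np

H = np.array([0.0, 381.0])     # attachment point relative to the origin O
L = np.array([-70.7, -70.7])   # webbing vector (illustrative numbers only)
P = H + L                      # Spider-Man's position: H + L = P
print(np.allclose(P - H, L))   # so L can be recovered as P - H
```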
# I'll create a `Params` object to contain the quantities we'll need:
#
# 1. According to [the Spider-Man Wiki](http://spiderman.wikia.com/wiki/Peter_Parker_%28Earth-616%29), Spider-Man weighs 76 kg.
#
# 2. Let's assume his terminal velocity is 60 m/s.
#
# 3. The length of the web is 100 m.
#
# 4. The initial angle of the web is 45 degrees to the left of straight down.
#
# 5. The spring constant of the web is 40 N / m when the cord is stretched, and 0 when it's compressed.
#
# Here's a `Params` object.
params = Params(height = 381 * m,
g = 9.8 * m/s**2,
mass = 75 * kg,
area = 1 * m**2,
rho = 1.2 * kg/m**3,
v_term = 60 * m / s,
length = 100 * m,
angle = (270 - 45) * degree,
k = 40 * N / m,
t_0 = 0 * s,
t_end = 30 * s)
# Now here's a version of `make_system` that takes a `Params` object as a parameter.
#
# `make_system` uses the given value of `v_term` to compute the drag coefficient `C_d`.
def make_system(params):
"""Makes a System object for the given conditions.
params: Params object
returns: System object
"""
unpack(params)
init = State(x=P_0.x, y=P_0.y, vx=V_0.x, vy=V_0.y)
C_d = 2 * mass * g / (rho * area * v_term**2)
return System(init=init, g=g, mass=mass, rho=rho,
C_d=C_d, area=area, length=length, k=k,
t_0=t_0, t_end=t_end)
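# The `C_d` formula comes from the terminal-velocity balance: at `v_term`,
# drag equals weight, `rho * C_d * area * v_term**2 / 2 == mass * g`, so
# `C_d = 2 * mass * g / (rho * area * v_term**2)`. A quick check with the
# parameter values above (plain floats, units dropped):

```python
# verify that the derived drag coefficient balances gravity at v_term
mass, g, rho, area, v_term = 75.0, 9.8, 1.2, 1.0, 60.0
C_d = 2 * mass * g / (rho * area * v_term**2)
drag_at_vterm = rho * C_d * area * v_term**2 / 2
print(round(C_d, 3), abs(drag_at_vterm - mass * g) < 1e-9)  # → 0.34 True
```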
# Compute the initial position
def compute_initial_condition(params):
"""Compute the initial values of L and P.
"""
unpack(params)
H = Vector(0, height)
theta = angle.to(radian)
x, y = pol2cart(theta, length)
L_0 = Vector(x, y)
P_0 = H + L_0
V_0 = Vector(0, 0) * m/s
params.set(P_0=P_0, V_0=V_0)
compute_initial_condition(params)
params.P_0
params.V_0
# Let's make a `System`
system = make_system(params)
system.init
# ### Drag and spring forces
#
# Here's drag force, as we saw in Chapter 22.
def drag_force(V, system):
"""Compute drag force.
V: velocity Vector
system: `System` object
returns: force Vector
"""
unpack(system)
mag = rho * V.mag**2 * C_d * area / 2
direction = -V.hat()
f_drag = direction * mag
return f_drag
V_test = Vector(10, 10) * m/s
drag_force(V_test, system)
# And here's the 2-D version of spring force. We saw the 1-D version in Chapter 21.
def spring_force(L, system):
"""Compute drag force.
L: Vector representing the webbing
system: System object
returns: force Vector
"""
unpack(system)
extension = L.mag - length
if magnitude(extension) < 0:
mag = 0
else:
mag = k * extension
direction = -L.hat()
f_spring = direction * mag
return f_spring
L_test = Vector(0, -system.length-1*m)
f_spring = spring_force(L_test, system)
# Here's the slope function, including acceleration due to gravity, drag, and the spring force of the webbing.
def slope_func(state, t, system):
"""Computes derivatives of the state variables.
state: State (x, y, x velocity, y velocity)
t: time
system: System object with g, rho, C_d, area, mass
returns: sequence (vx, vy, ax, ay)
"""
x, y, vx, vy = state
unpack(system)
H = Vector(0, height)
P = Vector(x, y)
V = Vector(vx, vy)
L = P - H
a_grav = Vector(0, -g)
a_spring = spring_force(L, system) / mass
a_drag = drag_force(V, system) / mass
a = a_grav + a_drag + a_spring
return vx, vy, a.x, a.y
# As always, let's test the slope function with the initial conditions.
slope_func(system.init, 0, system)
# And then run the simulation.
# %time results, details = run_ode_solver(system, slope_func, max_step=0.3)
details
# ### Visualizing the results
#
# We can extract the x and y components as `Series` objects.
# The simplest way to visualize the results is to plot x and y as functions of time.
# +
def plot_position(results):
plot(results.x, label='x')
plot(results.y, label='y')
decorate(xlabel='Time (s)',
ylabel='Position (m)')
plot_position(results)
# -
# We can plot the velocities the same way.
# +
def plot_velocity(results):
plot(results.vx, label='vx')
plot(results.vy, label='vy')
decorate(xlabel='Time (s)',
ylabel='Velocity (m/s)')
plot_velocity(results)
# -
# Another way to visualize the results is to plot y versus x. The result is the trajectory through the plane of motion.
# +
def plot_trajectory(results):
plot(results.x, results.y, label='trajectory')
decorate(xlabel='x position (m)',
ylabel='y position (m)')
plot_trajectory(results)
# -
# ### Letting go
#
# Now let's find the optimal time for Spider-Man to let go. We have to run the simulation in two phases because the spring force changes abruptly when Spider-Man lets go, so we can't integrate through it.
#
# Here are the parameters for Phase 1, running for 9 seconds.
params1 = Params(params, t_end=9*s)
system1 = make_system(params1)
# %time results1, details1 = run_ode_solver(system1, slope_func, max_step=0.4)
plot_trajectory(results1)
# The final conditions from Phase 1 are the initial conditions for Phase 2.
t_final = get_last_label(results1) * s
# Here's the position Vector.
x, y, vx, vy = get_last_value(results1)
P_0 = Vector(x, y) * m
# And the velocity Vector.
V_0 = Vector(vx, vy) * m/s
# Here are the parameters for Phase 2. We can turn off the spring force by setting `k=0`, so we don't have to write a new slope function.
params2 = Params(params1, t_0=t_final, t_end=t_final+10*s, P_0=P_0, V_0=V_0, k=0)
system2 = make_system(params2)
# Here's an event function that stops the simulation when Spider-Man reaches the ground.
def event_func(state, t, system):
"""Stops when y=0.
state: State object
t: time
system: System object
returns: height
"""
x, y, vx, vy = state
return y
# Run Phase 2.
# %time results2, details2 = run_ode_solver(system2, slope_func, events=event_func, max_step=0.4)
# Plot the results.
# +
plot(results1.x, results1.y, label='Phase 1')
plot(results2.x, results2.y, label='Phase 2')
decorate(xlabel='x position (m)',
ylabel='y position (m)')
# -
# Now we can gather all that into a function that takes `t_release` and `V_0`, runs both phases, and returns the results.
def run_two_phase(t_release, V_0, params):
"""Run both phases.
t_release: time when Spider-Man lets go of the webbing
V_0: initial velocity
"""
params1 = Params(params, t_end=t_release, V_0=V_0)
system1 = make_system(params1)
results1, details1 = run_ode_solver(system1, slope_func, max_step=0.4)
t_final = get_last_label(results1) * s
x, y, vx, vy = get_last_value(results1)
P_0 = Vector(x, y) * m
V_0 = Vector(vx, vy) * m/s
params2 = Params(params1, t_0=t_final, t_end=t_final+20*s,
P_0=P_0, V_0=V_0, k=0)
system2 = make_system(params2)
results2, details2 = run_ode_solver(system2, slope_func, events=event_func, max_step=0.4)
results = results1.combine_first(results2)
return results
# And here's a test run.
# +
t_release = 9 * s
V_0 = Vector(0, 0) * m/s
results = run_two_phase(t_release, V_0, params)
plot_trajectory(results)
x_final = get_last_value(results.x) * m
# -
# ### Maximizing range
#
# To find the best value of `t_release`, we need a function that takes possible values, runs the simulation, and returns the range.
def range_func(t_release, params):
V_0 = Vector(0, 0) * m/s
results = run_two_phase(t_release, V_0, params)
x_final = get_last_value(results.x) * m
return x_final
# We can test it.
range_func(9*s, params)
# And run it for a few values.
for t_release in linrange(3, 15, 3) * s:
print(t_release, range_func(t_release, params))
# Now we can use `max_bounded` to find the optimum.
max_bounded(range_func, [6, 12], params)
# Finally, we can run the simulation with the optimal value.
V_0 = Vector(0, 0) * m/s
results = run_two_phase(8*s, V_0, params)
plot_trajectory(results)
x_final = get_last_value(results.x) * m
# ### Taking a flying leap
#
# Now suppose Spider-Man can jump off the wall in any direction at a maximum speed of 20 meters per second. In what direction should he jump, and what time should he let go, to maximize the distance he travels?
#
# Before you go on, think about it and see what you think the optimal angle is.
#
# Here's a new range function that takes a guess as a parameter, where `guess` is a sequence of three values: `t_release`, launch velocity, and launch angle.
#
# It computes `V_0`, runs the simulation, and returns the final `x` position.
def range_func2(guess, params):
t_release, velocity, theta = guess
print(t_release, velocity, theta)
V_0 = Vector(pol2cart(theta, velocity)) * m/s
results = run_two_phase(t_release, V_0, params)
x_final = get_last_value(results.x) * m
return -x_final
# We can test it with the conditions from the previous section.
x0 = 8*s, 0*m/s, 0*radian
range_func2(x0, params)
# Now we can use `minimize` to find the optimal values for `t_release`, launch velocity, and launch angle. It takes a while to run because it has to search a 3-D space.
# +
guess = [8, 5, 0]
bounds = [(0,20), (0,20), (-np.pi, np.pi)]
res = minimize(range_func2, guess, params, bounds=bounds)
# -
# Here are the optimal values.
t_release, velocity, theta = res.x
V_0 = Vector(pol2cart(theta, velocity))
# It turns out that the best angle is down and to the left. Not obvious.
V_0.mag
# Here's what the trajectory looks like with the optimal values.
results = run_two_phase(t_release, V_0, params)
plot_trajectory(results)
x_final = get_last_value(results.x)
| code/soln/spiderman_soln.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import numpy as np
import matplotlib.pyplot as plt
import time
# #%matplotlib inline
# +
# %matplotlib notebook
import matplotlib
#modify some matplotlib parameters to manage the images for illustrator
matplotlib.rcParams['pdf.fonttype'] = 42
matplotlib.rcParams['ps.fonttype'] = 42
# -
# create an initial grid
grid_size = 500 #eventually define as an input
grid = np.zeros((grid_size,grid_size))
# define the initial pattern
grid[int(grid_size/2), int(grid_size/2)]=2
grid
def show_grid(grid_array):
plt.figure()
plt.imshow(grid_array, cmap=plt.cm.gray)
plt.show()
show_grid(grid)
# +
# Define rules
def rule1(grid):
# 2 = growing cell
# 1 = stationary cell
# 0 = empty space
#this rule makes each cell divide only one time per step
new_grid = grid #to separate the evaluation from the actualization
#g_ones = grid[grid == 1]
g_index = np.nonzero(grid == 2) # growth index = where cell value == 2
#g_mask = np.array([[0,1,2],[3,8,4],[5,6,7]]) #position 8 is the center and not selectable
#shuffle the elements in order to eliminate the bias of the computation order
arr = np.arange(g_index[0].shape[0])
np.random.shuffle(arr)
#for i in range(len(g_index[0])): #go through every position with a "growing cell" ---> value = 2
for i in arr: #go through every position with a "growing cell" ---> value = 2
mask_pos = np.array([[-1,-1],[0,-1],[1,-1],[-1,0],[1,0],[-1,1],[0,1],[1,1]])
remove_list = [] #initialize a list to store neighbour spaces (= mask_pos) occupied by cells
# go through every surrounding position --> eight possibilities
for j in range(len(mask_pos)):
m = g_index[0][i] + mask_pos[j][0]
n = g_index[1][i] + mask_pos[j][1]
if grid[m,n] !=0 : #make a list with the positions which are not empty places
remove_list.append(j)
new_mask_pos = np.delete(mask_pos, remove_list, 0)
# to exit when there is not a surrounding empty position
l = len(new_mask_pos)
if l > 1:
r_pos = np.random.randint(l) # a random number between [0,len[
new_pos = new_mask_pos[r_pos]
#m = g_ones_index[0][i] + mask_pos[new_pos][0]
#n = g_ones_index[1][i] + mask_pos[new_pos][1]
m = g_index[0][i] + new_pos[0]
n = g_index[1][i] + new_pos[1]
grid[m,n] = 2
elif l == 1:
new_pos = new_mask_pos[0]
m = g_index[0][i] + new_pos[0]
n = g_index[1][i] + new_pos[1]
grid[m,n] = 2
else: #when len(new_mask_pos) == 0
m = g_index[0][i]
n = g_index[1][i]
grid[m,n] = 1 # then, that position will not be evaluated again
return(grid)
# +
# Define rules
def rule2(grid):
# 2 = growing cell
# 1 = stationary cell
# 0 = empty space
#this rule makes each cell divide only one time per step
#g_ones = grid[grid == 1]
g_index = np.nonzero(grid == 2) # growth index = where cell value == 2
#g_mask = np.array([[0,1,2],[3,8,4],[5,6,7]]) #position 8 is the center and not selectable
#choose a random cell in the dividing state
cell_pos = int(np.random.rand(1)[0]*g_index[0].shape[0])
mask_pos = np.array([[-1,-1],[0,-1],[1,-1],[-1,0],[1,0],[-1,1],[0,1],[1,1]])
remove_list = [] #initialize a list to store neighbour spaces (= mask_pos) occupied by cells
# go through every surrounding position --> eight possibilities
for j in range(len(mask_pos)):
m = g_index[0][cell_pos] + mask_pos[j][0]
n = g_index[1][cell_pos] + mask_pos[j][1]
if grid[m,n] !=0 : #make a list with the positions which are not empty places
remove_list.append(j)
new_mask_pos = np.delete(mask_pos, remove_list, 0)
# to exit when there is not a surrounding empty position
l = len(new_mask_pos)
if l > 1:
r_pos = np.random.randint(l) # a random number between [0,len[
new_pos = new_mask_pos[r_pos]
#m = g_ones_index[0][i] + mask_pos[new_pos][0]
#n = g_ones_index[1][i] + mask_pos[new_pos][1]
m = g_index[0][cell_pos] + new_pos[0]
n = g_index[1][cell_pos] + new_pos[1]
grid[m,n] = 2
elif l == 1:
new_pos = new_mask_pos[0]
m = g_index[0][cell_pos] + new_pos[0]
n = g_index[1][cell_pos] + new_pos[1]
grid[m,n] = 2
else: #when len(new_mask_pos) == 0
m = g_index[0][cell_pos]
n = g_index[1][cell_pos]
grid[m,n] = 1 # then, that position will not be evaluated again
return(grid)
# +
# Define rules
def rule3(grid):
# 2 = growing cell
# 1 = stationary cell
# 0 = empty space
#this rule makes each cell divide only one time per step
#g_ones = grid[grid == 1]
g_index = np.nonzero(grid == 2) # growth index = where cell value == 2
#g_mask = np.array([[0,1,2],[3,8,4],[5,6,7]]) #position 8 is the center and not selectable
#choose a random cell in the dividing state
index_pos = int(np.random.rand(1)[0]*g_index[0].shape[0])
m = g_index[0][index_pos]
n = g_index[1][index_pos]
#define the neighborhood
nb = grid[m-1:m+2,n-1:n+2] #nb = neighborhood
#define the free spaces in nb
fs = np.where(nb == 0)
if fs[0].shape[0] > 0: # proceed only if there is a free place in the neighbourhood
if len(fs[0]) == 1:
grid[m,n] = 1 # then, that position will not be evaluated again
#grown over an empty position
new_pos = int(np.random.rand(1)[0]*fs[0].shape[0]) #new pos in the neighbour matrix
m_new = m + fs[0][new_pos] - 1 #-1 to convert [0 1 2] to [-1 0 1]
n_new = n + fs[1][new_pos] - 1
grid[m_new, n_new] = 2
#save the cell grid index positions
cell_index = [m,n]
ncell_index = [m_new, n_new]
else:
grid[m,n] = 1
return(grid)
# +
#to show rule 3
time1 = time.perf_counter() # time.clock() was removed in Python 3.8
# create an initial grid
grid_size = 150 #eventually define as an input
grid = np.zeros((grid_size,grid_size))
# define the initial pattern
#grid[int(grid_size/2), int(grid_size/2)]=2
grid = initial_pattern(grid, 0)
#perform the loop
steps = 10000
#sleep_time = 0.01
#to save the last figure
filename = 'Segregation\\image_%03d.jpg'
#filename = os.path.join(fpath, 'image_%05d.jpg')
fig = plt.figure()
fig.show()
show_grid(grid) #show the initial grid
plt.title('step 0')
every = 100
count = 0
for i in range(steps):
#time.sleep(sleep_time)
grid = rule3(grid)
#plt.imshow(grid, cmap=plt.cm.gray)
#plt.title('step '+ str(i+1))
#fig.canvas.draw()
if i%every == 0 or i == steps-1:
count += 1
plt.title('step '+ str(i+1))
plt.imshow(grid, cmap=plt.cm.gray)
#plt.savefig(str(filename) + ".pdf", transparent=True)
plt.savefig(filename%(count))#, transparent=True)
#plt.imshow(grid, cmap=plt.cm.gray)
elapsed = time.perf_counter() - time1
print(elapsed)
# -
def select_cell(grid):
# 2 = growing cell
# 1 = stationary cell
# 0 = empty space
#this rule makes each cell divide only one time per step
g_index = np.nonzero(grid == 2) # growth index = where cell value == 2
#choose a random cell in the dividing state
index_pos = int(np.random.rand(1)[0]*g_index[0].shape[0])
m = g_index[0][index_pos]
n = g_index[1][index_pos]
#save the cell grid index positions
cell_index = [m,n]
return(cell_index)
def check_nbhd(grid, cell_index):
#check free spaces in the neighbourhood
# fs: array
# index of free spaces in the neighborhood
m = cell_index[0]
n = cell_index[1]
#define the neighborhood
nb = grid[m-1:m+2,n-1:n+2] #nb = neighborhood
#define the free spaces in nb
fs = np.where(nb == 0)
return(fs)
def nb_prob(grid, cell_index, prob_dist = 'contact_linear'):
# assign division probabilities based on empty space cell contacts
# prob_dist: uniform - contact_linear - contact_exp
# contact linear is the default or if another thing is written
# fs: array
# index of free spaces in the neighborhood
# return
# prob: list
# list with the [0,1] probability partition limit of each free space
# e.g. prob = [0.23, 0.81, 1] --> second cell has bigger probability
m = cell_index[0]
n = cell_index[1]
#define neighborhood
nb = grid[m-1:m+2,n-1:n+2] #nb = neighborhood
#define the free spaces in nb
fs = np.where(nb == 0)
#define cell spaces in bn
cs = np.where(nb != 0)
fs_num = len(fs[0])
prob = np.zeros(fs_num)
contacts = np.zeros(fs_num)
if prob_dist != 'uniform':
# if prob_dist is something different from the options, contact_linear is the default
for i in range(fs_num):
mg = m + fs[0][i] - 1 #-1 to convert [0 1 2] to [-1 0 1]
ng = n + fs[1][i] - 1
i_nb = grid[mg-1:mg+2,ng-1:ng+2] # i position neighborhood
occup = np.where(i_nb != 0)
contacts[i] = len(occup[0]) #save the number of contacts of this position
if prob_dist == 'contact_exp':
contacts = np.exp(contacts)
else:
contacts = np.ones(fs_num) #assign uniform values
total = sum(contacts)
prob[0] = (contacts[0]/total)
for i in range(1,fs_num):
prob[i] = prob[i-1]+contacts[i]/total
return(prob)
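# A standalone sketch of how the cumulative partition that `nb_prob` returns
# is meant to be consumed (the `[0.23, 0.81, 1]` example is from the docstring
# above): draw u ~ U[0,1) and take the first limit that exceeds it, so each
# slot is chosen with probability equal to the width of its interval.

```python
import numpy as np

prob = np.array([0.23, 0.81, 1.0])   # cumulative limits from the docstring
rng = np.random.default_rng(0)
draws = [int(np.where(prob > rng.random())[0][0]) for _ in range(20000)]
freq = np.bincount(draws, minlength=3) / 20000
print(freq.round(2))   # empirical shares near [0.23, 0.58, 0.19]
```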
def cell_divition_uniform(grid, cell_index, fs):
# uniform neighborhood division probability
#fs: free neighborhood spaces
m = cell_index[0]
n = cell_index[1]
if len(fs[0]) == 1:
grid[m,n] = 1 # then, that position will not divide again
#grown over an empty position
#new_pos = int(np.random.rand(1)[0]*fs[0].shape[0])
new_pos = int(np.random.rand(1)[0]*fs[0].shape[0]) #new pos in the neighbour matrix
m_new = m + fs[0][new_pos] - 1 #-1 to convert [0 1 2] to [-1 0 1]
n_new = n + fs[1][new_pos] - 1
grid[m_new, n_new] = 2 # creates the new cell
ncell_index = [m_new ,n_new]
return(grid, ncell_index)
def cell_divition(grid, cell_index, fs, fs_proba):
#fs: free neighborhood spaces
#fs_proba: free spaces growth probabilities
m = cell_index[0]
n = cell_index[1]
if len(fs[0]) == 1:
grid[m,n] = 1 # then, that position will not divide again
#grown over an empty position
rand_val = np.random.rand(1)[0]
# find the first position which is bigger than rand_val
new_pos = np.where( (fs_proba > rand_val) == True )[0][0] #new pos in the neighbour matrix
m_new = m + fs[0][new_pos] - 1 #-1 to convert [0 1 2] to [-1 0 1]
n_new = n + fs[1][new_pos] - 1
grid[m_new, n_new] = 2 # creates the new cell
ncell_index = [m_new ,n_new]
return(grid, ncell_index)
plt.figure()
im_grid = np.zeros((100,100,4))
im_grid[:,:,0] = np.ones((100,100))*0
im_grid[:,:,1] = np.ones((100,100))*80
im_grid[:,:,2] = np.ones((100,100))*0
im_grid[:,:,3] = np.ones((100,100))*1
plt.imshow(im_grid)
plt.figure()
im_grid[:,:,1] = np.ones((100,100))*1
plt.imshow(im_grid)
def initial_plasmids(grid, pattern_num = 0, num_plas = 2, max_copy = 4):
# grid: initial grid
c_index = np.nonzero(grid) # c_index,
#cell_number = c_index[0].shape[0]
gs = grid.shape
pattern = np.zeros((gs[0],gs[1],max_copy)) #initialize the pattern array
# add different patterns
if pattern_num == 0: #random plasmid pattern
for i in range(c_index[0].shape[0]): #assign a random plasmid pattern to each cell position
pattern[c_index[0][i],c_index[1][i],:] = ((num_plas +1 )*np.random.rand(max_copy)).astype(int)
#num_plas +1 to add "no-plasmid" state
elif pattern_num == 1:
pattern = np.ones((grid.shape))
return(pattern)
def role_divideFlag(plasmids):
#plasmids: cell plasmids vector
max_plasmids = plasmids.shape[0]
num_plasmids = np.nonzero(plasmids)[0].shape[0]
divisor = max_plasmids*1.1 #arbitrarily chosen so that the probability at max_plasmids stays below 1
# make a cuadratic function of probabilities
probability = (num_plasmids/divisor)**2
#if a cell has no plasmids --> will not divide
if np.random.rand(1) < probability:
return(1) # divide
else:
return(0) # not divide
#Probability tables
#plasmid_nums = np.arange(max_plasmids +1)
#probability = (plasmid_nums/divisor)**2
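# The commented-out probability table above, written out: with the quadratic
# rule used by `role_divideFlag`, a cell with no plasmids never divides and
# the division probability grows with the square of the plasmid count
# (values below assume max_plasmids = 4, as in the rest of the notebook).

```python
import numpy as np

max_plasmids = 4
divisor = max_plasmids * 1.1           # keeps the maximum probability below 1
plasmid_nums = np.arange(max_plasmids + 1)
probability = (plasmid_nums / divisor) ** 2
print(probability.round(3))            # 0 plasmids -> probability 0
```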
def create_image(grid, plasgrid):
im_s = plasgrid.shape
aux_imR = np.zeros((im_s[0],im_s[1],im_s[2]))
aux_imG = np.zeros((im_s[0],im_s[1],im_s[2]))
for i in range(im_s[2]):
aux_imR[:,:,i] = 1*(plasgrid[:,:,i]==1)
aux_imG[:,:,i] = 1*(plasgrid[:,:,i]==2)
aux_imR = np.sum(aux_imR,axis=2)
aux_imG = np.sum(aux_imG,axis=2)
aux_transparency = 0.5*(grid[:,:]==1) + 1*(grid[:,:]==2)
# create the image
im_grid = np.zeros((im_s[0],im_s[1],im_s[2]))
im_grid[:,:,0] = np.multiply(np.ones((im_s[0],im_s[1])),aux_imR)
im_grid[:,:,1] = np.multiply(np.ones((im_s[0],im_s[1])),aux_imG)
#im_grid[:,:,2] = np.ones((100,100))*250
im_grid[:,:,3] = np.multiply(np.ones((im_s[0],im_s[1])),aux_transparency)
# stationary cell -> transparency = 0.5)
return(im_grid)
def create_image2(grid, plasgrid):
im_s = plasgrid.shape
aux_imR = np.zeros((im_s[0],im_s[1],im_s[2]))
aux_imG = np.zeros((im_s[0],im_s[1],im_s[2]))
for i in range(im_s[2]):
aux_imR[:,:,i] = 1*(plasgrid[:,:,i]==1)
aux_imG[:,:,i] = 1*(plasgrid[:,:,i]==2)
aux_imR = np.multiply(1*(np.sum(aux_imR,axis=2)>0),50*(grid[:,:]==1)) + 1*(np.sum(aux_imR,axis=2)>0)
aux_imG = np.multiply(1*(np.sum(aux_imG,axis=2)>0),50*(grid[:,:]==1)) + 1*(np.sum(aux_imG,axis=2)>0)
# create the image
im_grid = np.zeros((im_s[0],im_s[1],3))
im_grid[:,:,0] = np.multiply(np.ones((im_s[0],im_s[1])),aux_imR)
im_grid[:,:,1] = np.multiply(np.ones((im_s[0],im_s[1])),aux_imG)
#im_grid[:,:,2] = np.ones((100,100))*250
return(im_grid)
def plasmid_gProb(g_ratio=[1,1], p_types = [1,2]):
#define a growth probability (= growth rate) based on the plasmids
#g_ratio: ratio of growth rate between genotypes (i.e. plasmids)
#p_types: plasmids types or labels
#built the probability class vector
cat_len = len(g_ratio)
probs = np.zeros(cat_len)
denominator = sum(g_ratio)
probs[0] = g_ratio[0]/denominator
for i in range(1,cat_len):
probs[i] = probs[i-1]+g_ratio[i]/denominator
return(probs)
def plasm_g_test(plasmids,probs):
#perform the probability test
rand_val = np.random.rand(1)
pos = np.where( (probs > rand_val) == True )[0][0]
ptype = pos + 1
found = np.where(plasmids == ptype)[0]
growth = False
if found.size>0:
growth = True
return(growth)
def cell_ratio(plasmgrid, ptype = [1,2]):
c_num_plasm = np.sum(plasmgrid>0, axis=2) #number of plasmids in each grid
plasm_sum = np.sum(plasmgrid, axis = 2)
divition = np.divide(plasm_sum,c_num_plasm)
#total = np.sum(np.isnan(divition) == False, axis = (0,1)) #it include cells with mix plasmids
found = np.zeros(len(ptype))
total = 0
for i in range(len(ptype)):
found[i] = len(np.where(divition == ptype[i])[0])
total += found[i]
ratio = found[0]/total
return(ratio)
count=0
plasmids = np.ones(4)*2
for i in range(1000):
plas_probs = plasmid_gProb(g_ratio= [1,2])
ifG = plasm_g_test(plasmids, plas_probs)
if ifG == True:
count+=1
print(count)
#main
sim_num = 1
all_ratios = []
for j in range(sim_num):
time1 = time.perf_counter() # time.clock() was removed in Python 3.8
# create an initial empty grid
grid_size = 1000 #eventually define as an input
grid = np.zeros((grid_size,grid_size))
# define the initial grid and plasmid pattern
grid = initial_pattern(grid, 1)
# show_grid(grid)
plasm_grid = initial_plasmids(grid)
# Show the initial state
# im_grid = create_image(grid, plasm_grid)
# plt.imshow(im_grid)
#perform the loop
steps = 100000
#sleep_time = 0.01
#to save the last figure
#filename = 'null'
# filename = 'Segregation\\ratios\\image_%03d.jpg'
filename = 'Seg_1ratio.jpg'
# fig = plt.figure()
# plt.title('step 0')
#these two lines save sequential images
every = 100 #save an image every this many steps
count = 0
#define plasmid growth ratio
plas_probs = plasmid_gProb(g_ratio= [1,1])
# g_ratio = [2,1] --> plasmid 1 divide twice fast than plasmid 2
ratios = [] #to store the cell type ratios
for i in range(steps):
#select a random growing cell
cell_pos = select_cell(grid)
free_nb = check_nbhd(grid, cell_pos)
if free_nb[0].shape[0] > 0: # go if there is a place in the neighborhood
plasmids = plasm_grid[cell_pos[0], cell_pos[1],:] #get its plasmids
c_growth = plasm_g_test(plasmids, plas_probs)
if c_growth == True:
# maybe move this up and check that the plasmids are not zero
#update its plasmids and cell state, n:new
n_plasmids, n_state = plasmid_update(plasmids, cell_pos)
plasm_grid[cell_pos[0], cell_pos[1],:] = n_plasmids
grid[cell_pos[0], cell_pos[1]] = n_state
#state will not be evaluated before role_divide
#role_divide function shouldn't allow division of that cell
divide_flag = role_divideFlag(n_plasmids)
#perform the division if flag changed
if divide_flag != 0:
#assign a cell to a new position
free_proba = nb_prob(grid, cell_pos, prob_dist = 'contact_exp')
grid, nCell_pos = cell_divition(grid, cell_pos, free_nb, free_proba)
#split the mother plasmids
m_plasmids, c_plasmids = divide_plasmids(n_plasmids)
#assign mother and child plasmids
plasm_grid[cell_pos[0], cell_pos[1],:] = m_plasmids
plasm_grid[nCell_pos[0], nCell_pos[1],:] = c_plasmids
else:
grid[cell_pos[0],cell_pos[1]] = 1
#save cell type ratios
if i%every == 0:
ratios.append(cell_ratio(plasm_grid))
#Plot the result
if i == steps-1:
#if i%every == 0 or i == steps-1:
# count += 1
plt.title('step '+ str(i+1))
im_grid = create_image2(grid, plasm_grid)
plt.imshow(im_grid)
# #fig.canvas.draw()
# #plt.savefig(str(filename) + ".pdf", transparent=True)
#plt.savefig(filename%(count), transparent=True)
# plt.savefig(filename%(j), transparent=True)
plt.savefig(filename, transparent=True)
all_ratios.append(np.asarray(ratios))
elapsed = time.perf_counter() - time1
print(elapsed)
mean_ratio = 0
plt.figure()
for i in range(len(all_ratios)):
plt.plot(all_ratios[i])
mean_ratio += all_ratios[i][-1]
plt.show()
mean_ratio= mean_ratio/len(all_ratios)
print(mean_ratio)
ratio11=all_ratios
ratio11_mean = mean_ratio
# +
plt.figure()
for i in range(len(ratio11)):
plt.plot(ratio11[i])
plt.title('growth ratio 1:1')
plt.ylabel('cell type ratio')
plt.xlabel('check point step number')
plt.savefig('ratio11', transparent=True)  # save before show, otherwise the figure is blank
plt.show()
# -
ratio32=all_ratios
ratio32_mean = mean_ratio
ratio43=all_ratios
ratio43_mean = mean_ratio
# +
plt.figure()
for i in range(len(ratio43)):
plt.plot(ratio43[i])
plt.title('growth ratio 4:3')
plt.ylabel('cell type ratio')
plt.xlabel('check point step number')
plt.savefig('ratio43', transparent=True)  # save before show, otherwise the figure is blank
plt.show()
# -
ratio109=all_ratios
ratio109_mean = mean_ratio
# +
mean_ratios = [ratio11_mean,ratio109_mean,ratio43_mean,ratio32_mean]
expected = [1/2, 10/19, 4/7, 3/5]
plt.figure()
plt.plot([1, 2, 3, 4], mean_ratios, 'bo', label= 'observed ratio')
plt.plot([1, 2, 3, 4], expected, 'ro', label = 'growth ratio')
plt.ylabel('cell type ratio')
plt.legend()
plt.xticks([])
#plt.xticks(['1:1','10:9','4:3','3:2'])
plt.xticks([1,2,3,4], ['1:1','10:9','4:3','3:2'])
plt.savefig('ratios_obs_exp', transparent=True)  # save before show, otherwise the figure is blank
plt.show()
# +
mean_ratios = [ratio11_mean,ratio109_mean,ratio43_mean,ratio32_mean]
expected = [1/2, 10/19, 4/7, 3/5]
plt.figure()
plt.plot(expected, mean_ratios, 'bo')
plt.xlabel("expected ratio")
plt.ylabel("observed ratio")
plt.show()
# -
def plasmid_update(plasmids, pos_index):
#plasmids: vector with plasmids. e.g [0,1,1,0,2]
state = 2 # cell state = growing state
plasmids_pos = np.nonzero(plasmids)
empty_pos = np.where(plasmids == 0)
num_plas = plasmids_pos[0].shape[0]
if num_plas == 0:
#it means no plasmid in the cell
state = 1 #to not evaluate this cell in the loop again
elif num_plas == plasmids.shape[0]:
#it means all plasmids positions are full
return(plasmids, state)
else:
copied_pos = np.random.randint(num_plas)
plasmids[empty_pos[0][0]] = plasmids[plasmids_pos[0][copied_pos]]
#copy the plasmid in the first free space
return(plasmids, state)
def divide_plasmids(plasmids):
#plasmids: cell plasmids
p_size = plasmids.size
mother_p = np.zeros(p_size)
child_p = np.zeros(p_size)
np.random.shuffle(plasmids) #shuffle the plasmids
if (p_size & 1) == 1: #odd case
#add a random value to decide which cell keeps more plasmids
rand_val = np.random.rand(1)
half_p = int(p_size/2 + rand_val)
else: #even case
half_p = int(p_size/2)
mother_p[:half_p] = plasmids[:half_p]
child_p[half_p:]= plasmids[half_p:]
return(mother_p, child_p)
def initial_pattern(grid, pattern_num):
pattern = {} #initiate initial pattern dictionary
# add different patterns
pattern[0] = np.array([[2]])
pattern[1] = np.array([[0, 0, 2, 0, 0],[0,2,2,2,0],[2,2,1,2,2],[0,2,2,2,0],[0,0,2,0,0]])
pattern[2] = np.ones((2,35))*2
#make elements which are not in the border to be = 1
fixed_pat = pattern[pattern_num]
#put the pattern in the grid
gs = grid.shape
m0 = int(gs[0]/2)
n0 = int(gs[1]/2)
ps = fixed_pat.shape
mpm = int(ps[0]/2)
npm = int(ps[1]/2)
for i in range(ps[0]):
for j in range(ps[1]):
m = m0 + (i - mpm)
n = n0 + (j - npm)
grid[m,n] = fixed_pat[i,j]
return(grid)
# +
#perform the loop
steps = 50
sleep_time = 0.1
fig = plt.figure()
fig.show()
show_grid(grid) #show the initial grid
plt.title('step 0')
for i in range(steps):
time.sleep(sleep_time)
grid = rule1(grid)
plt.imshow(grid, cmap=plt.cm.gray)
plt.title('step '+ str(i+1))
fig.canvas.draw()
# -
plt.figure()
plt.imshow(grid)
# +
#perform the loop
steps = 10
sleep_time = 0.5
show_grid(grid) #show the initial grid
plt.figure(0)
for i in range(steps):
time.sleep(sleep_time)
grid = rule1(grid)
show_grid(grid)
# +
#to show
time1 = time.perf_counter()  # time.clock() was removed in Python 3.8
# create a initial grid
grid_size = 150 #eventually define as an input
grid = np.zeros((grid_size,grid_size))
# define the initial pattern
#grid[int(grid_size/2), int(grid_size/2)]=2
grid = initial_pattern(grid, 2)
#perform the loop
steps = 10000
#sleep_time = 0.01
#to save the last figure
filename = 'null'
fig = plt.figure()
fig.show()
show_grid(grid) #show the initial grid
plt.title('step 0')
for i in range(steps):
#time.sleep(sleep_time)
#grid = rule1(grid)
grid = rule3(grid)
#plt.imshow(grid, cmap=plt.cm.gray)
plt.title('step '+ str(i+1))
#fig.canvas.draw()
if i == steps-1:
plt.savefig(str(filename) + ".pdf", transparent=True)
plt.imshow(grid, cmap=plt.cm.gray)
elapsed = time.perf_counter() - time1
print(elapsed)
# +
#to show
time1 = time.perf_counter()  # time.clock() was removed in Python 3.8
# create a initial grid
grid_size = 150 #eventually define as an input
grid = np.zeros((grid_size,grid_size))
# define the initial pattern
grid = initial_pattern(grid, 0)
#perform the loop
steps = 15000
#sleep_time = 0.01
#to save the last figure
filename = 'null'
fig = plt.figure()
plt.title('step 0')
for i in range(steps):
#time.sleep(sleep_time)
#grid = rule1(grid)
grid = rule2(grid)
#plt.imshow(grid, cmap=plt.cm.gray)
plt.title('step '+ str(i+1))
if i == steps-1:
plt.imshow(grid, cmap=plt.cm.gray)
#fig.canvas.draw()
plt.savefig(str(filename) + ".pdf", transparent=True)
elapsed = time.perf_counter() - time1
print(elapsed)
# -
mask[1,2]
vals = np.array([[-1,-1],[0,-1],[1,-1],[-1,0],[1,0],[-1,1],[0,1],[1,1],[0,0]])
vals[4][1]
pos = np.nonzero(grid == 0)
print(pos[0][10])
len(pos[1])
grid[grid == 1]
# +
#make automata loop
time_steps = 100
for t in range(time_steps):
    pass  # automaton update rule would go here
# -
np.random.randint(8) # a random number in [0, 8) --> eight possibilities
grid[0,0]
# +
time1 = time.perf_counter()  # time.clock() was removed in Python 3.8
arr = np.random.rand(5000,5000)
elapsed = time.perf_counter() - time1
print(elapsed) # it takes 0.21186613101770035 sec for me
# -
np.amin(np.nonzero(grid == 2)[0])
# +
#make classes
# to avoid writing:
# cell = [px,py,vx,vy]
# cell[0]+= cell[2]*deltaT
# using classes is equivalent in general terms (computation time, etc.), but it is a more
# recommended paradigm because the code is clearer and better organized.
class cell:
    def __init__(self,px,py): # the first parameter is always self --> it refers to the cell object
self.px = px
self.py = py
self.vx = 0
self.vy = 0
    # the nice thing is that you can define functions inside the class
    def mover(self,vx,vy,t): # the first parameter is always self!
self.px += vx*t
self.py += vy*t
celula = cell(-1,1)
print(celula.px) # --> prints -1
celula.px += 1
print(celula.px) # --> prints 0
celula.mover(-1,1,1)
lista = []
for n in range(10):
lista.append(cell(n/10,n/10))
# or, to make some computations faster when there is a large number of elements (>1000)
class sim:
def __init__(self, ncells):
        self.positions = np.zeros((ncells, 2))  # one (x, y) position per cell
# You can also create a subclass (inheritance):
class bacterium(cell): # this means a bacterium "is a" cell
    def infect(self): # you can then add new functions to the bacterium class
        pass
# all Python objects can be saved with the pickle package
# -
| Celular automata tester.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Class practice: strings, lists & numbers
#
# These were copied and slightly modified from [<NAME>'s Urban Informatics and Visualization](https://github.com/waddell/urban-informatics-and-visualization). All credit goes to him, all mistakes are mine.
# ## Exercises | String & List
# ### Reversing numbers
# Write code that lists the even numbers from 0 to 100 (including 100) in reverse order and prints the result.
#
end = 100001
# %%timeit
a = list(range(end))
b = a[::-2]
#print(b)
# %%timeit
a = []
for i in range(end):
if i % 2 ==0:
a.append(i)
b = a[::-1]
#print(b)
# %%timeit
a = list(range(end-1, -1, -2))
#print(a)
# ### List manipulations
# #### We have two lists, `a = [10, 20, 30]` and `b = [30, 60, 90]`. Write code that gives the following outputs:
#
# a. [[10,20,30],[30,60,90]]
#
# b. [10,20,30,30,60,90]
#
# c. [10,20,60,90] (first two of a, last two of b)
#
# d. [20,40,60] (the element-wise differences between b and a)
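# One possible set of answers for (a)-(d), sketched as an addition; the `ans_*` names are mine, not part of the original exercise:

```python
a = [10, 20, 30]
b = [30, 60, 90]

ans_a = [a, b]                          # nest the two lists
ans_b = a + b                           # concatenate them
ans_c = a[:2] + b[-2:]                  # first two of a, last two of b
ans_d = [y - x for x, y in zip(a, b)]   # element-wise differences b - a

print(ans_a, ans_b, ans_c, ans_d, sep="\n")
```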
n = 100_000
a=list(range(n))
b=list(range(n))
import dis
dis.dis("pd = [b[i]-a[i] for i in range(0,len(a))]")
dis.dis("""
l = []
for i in range(len(a)):
l.append(b[i] - a[i])
""")
[x for x in range(10) if x%2==0]
# ### List insertions
#
# **Write code that adds the name "Norah" to the following list, after the name "Michael".**
#
# Make sure your code continues to do the right thing if more names are added to the list, or if the list is reordered, or if you need to find Jessica instead of Michael (or anyone else on the list).
#
# names: Akshara, Anna, Aqshems, Chester, Echo, James, Jessica, Matthew, Michael, Philip, Sarah
# +
names = ["Akshara", "Anna", "Aqshems", "Chester", "Echo", "James", "Jessica", "Matthew", "Michael", "Philip", "Sarah"]
def add_after(name_list, name_to_add, name_to_after):
"""Add a name to a list, at the specified position.
Adds....
....
"""
names = name_list
for i, name in enumerate(names):
if name == name_to_after:
names.insert(i+1,name_to_add)
break
return names
print(add_after(names, "Norah", "Michael"))
print(add_after(names, "Lance", "Jessica"))
print(names)
# -
name = ["Akshara", "Anna", "Aqshems", "Chester", "Echo", "James", "Jessica", "Matthew", "Michael", "Philip", "Sarah"]
name.insert(name.index('Michael')+1,"Norah")
name
# ### Maximizing a sum
# **Find the inner list of `G` whose elements have the highest sum.**
#
# G = [[13,9,8], [14,6,12], [10,13,11], [7,18,9]]
# +
# Type your code here
# -
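# One possible solution (my sketch, not part of the original notebook): `max` with `key=sum` compares the inner lists by the sum of their elements; on a tie, the first maximal list encountered is returned.

```python
G = [[13, 9, 8], [14, 6, 12], [10, 13, 11], [7, 18, 9]]

# compare inner lists by the sum of their elements
best = max(G, key=sum)
print(best, sum(best))
```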
# ### Cars and brown trucks
#
# Write code that prints each color in the list followed by the word 'car', one per line, **unless** the color is brown, in which case you should print 'truck' instead:
#
# ```python
# colors = ['red', 'black', 'gray', 'brown', 'blue', 'white']
# ```
colors = ['red', 'black', 'gray', 'brown', 'blue', 'white']
vehicle = ['car' if color!='brown' else 'truck' for color in colors]
for pair in zip(colors,vehicle):
print(" ".join(pair))
# ## Numbers
#
# ### Reversing numbers
#
# Write a function nums_reversed that takes in an integer `n` and returns a string containing the numbers 1 through `n` including `n` in reverse order, separated
# by spaces. For example:
#
# >>> nums_reversed(5)
# '5 4 3 2 1'
def nums_reversed(n):
    # return (not print) the string, as the docstring above requires
    return " ".join(str(x) for x in range(n, 0, -1))
print(nums_reversed(10))
# ### Divisibility
#
# Write a program that finds all numbers between 1000 and 1200 (both included) that are
# divisible by 7 but are not multiples of 5.
# The numbers obtained should be printed in a comma-separated sequence on a single line.
#
# *Hint:* Consider using `range(#begin, #end)`.
# +
# Type your code here
# -
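# One possible solution with a for loop (my sketch, not part of the original notebook):

```python
result = []
for n in range(1000, 1201):          # both endpoints included
    if n % 7 == 0 and n % 5 != 0:    # divisible by 7, not a multiple of 5
        result.append(str(n))
print(",".join(result))
```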
# Write the same program but this time use while loop instead of for loop
# +
# Type your code here
# -
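# The same program rewritten with a while loop (my sketch, not part of the original notebook):

```python
result = []
n = 1000
while n <= 1200:                     # both endpoints included
    if n % 7 == 0 and n % 5 != 0:    # divisible by 7, not a multiple of 5
        result.append(str(n))
    n += 1
print(",".join(result))
```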
# ### Double trouble
#
# Write a function `double100` that takes in a list of integers
# and returns `True` only if the list has two `100`s next to each other.
#
# >>> double100([100, 2, 3, 100])
# False
# >>> double100([2, 3, 100, 100, 5])
# True
# +
# Type your code here
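# One possible solution (my sketch, not part of the original notebook): compare each element with its right neighbor.

```python
def double100(nums):
    # zip the list with itself shifted by one to walk over adjacent pairs
    for left, right in zip(nums, nums[1:]):
        if left == right == 100:
            return True
    return False

print(double100([100, 2, 3, 100]))
print(double100([2, 3, 100, 100, 5]))
```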
| lectures/05-class-practice.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# # Example: Variable Boundary Condition Through Time
#
# This example is equivalent to the model presented in the folder `continental_rift` with a coarser mesh and variable boundary condition for the velocity field. The model simulates 20 Myr of lithospheric extension followed by 50 Myr of convergence and 30 Myr of tectonic quiescence.
#
# The domain of the model comprises 1600 x 300 km<sup>2</sup>, composed of a regular mesh with square elements of 4 x 4 km<sup>2</sup>.
# The boundary conditions for the velocity field simulate the lithospheric stretching
# assuming a reference frame fixed on the lithospheric plate on the left side of the model,
# and the plate on the right side moves rightward with a velocity of 1 cm/year.
# The velocity field in the left and right boundaries of the model is chosen to ensure conservation of mass
# and is symmetrical if the adopted reference frame moves to the right with a velocity of 0.5 cm/year relative to the left plate.
# Additionally, free slip condition was assumed on the top and bottom of the numerical domain.
# To simulate the free surface, the "sticky air" approach (e.g. Gerya and Yuen, 2003b) is adopted,
# taking into account a 40-km-thick layer of relatively low-viscosity material with a density compatible with atmospheric air.
# The initial temperature structure is only depth dependent and is 0 °C at the surface and 1300 °C at the base of the lithosphere at 130 km.
#
# The velocity field is inverted at 20 Myr, starting the convergence of the model. At 70 Myr the velocity field at the boundary of the model is set to zero, simulating the tectonic quiescence.
#
# To avoid artifacts created by a homogeneous rheology, a random perturbation of the initial strain in each finite element of the model (e.g. Brune et al., 2014) is applied.
# This random perturbation follows a normal distribution in which the mean initial strain is 0.25 with a standard deviation of 0.08.
# Additionally, to ensure the nucleation of rifting at the center of the numerical domain,
# a weak seed (e.g. Huismans and Beaumont, 2003) is present in the lithospheric mantle with a constant initial strain of 0.3.
#
#
# <NAME>., <NAME>., <NAME>., <NAME>., Rift migration explains continental margin asymmetry and crustal hyper-extension,
# Nature communications, 2014, vol. 5, p. 1.
#
# <NAME>., <NAME>., Characteristics-based marker-in-cell method with conservative finite-differences schemes for modeling geological flows with strongly variable transport properties, Physics of the Earth and Planetary Interiors, 2003a, vol. 140, p. 293
#
# <NAME>., <NAME>., Symmetric and asymmetric lithospheric extension: Relative effects of frictional-plastic and viscous strain softening, Journal of Geophysical Research: Solid Earth, 2003, vol. 108
# ## Generate input files
# +
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
# -
# ### Shape of the model
# +
# Horizontal and vertical extent of the model in meters:
Lx, Lz = 1600.0e3, 300.0e3
# Number of points in horizontal and vertical direction:
Nx, Nz = 401, 76
# +
x = np.linspace(0, Lx, Nx)
z = np.linspace(Lz, 0, Nz)
X, Z = np.meshgrid(x, z)
# -
# ### Define the thickness of the layers
#
# They are in meters.
# +
# Sticky air layer:
thickness_sa = 40.0e3
# Lower crust:
thickness_lower_crust = 20.0e3
# Upper crust:
thickness_upper_crust = 20.0e3
# Lithosphere:
thickness_litho = 130.0e3
# Seed depth below the base of the lower crust
seed_depth = 13.0e3
# -
# ### Create the interfaces (bottom first)
interfaces = {
"litho": np.ones(Nx) * (thickness_litho + thickness_sa),
"seed_base": np.ones(Nx) * (seed_depth + thickness_lower_crust + thickness_upper_crust + thickness_sa),
"seed_top": np.ones(Nx) * (seed_depth + thickness_lower_crust + thickness_upper_crust + thickness_sa),
"lower_crust": np.ones(Nx) * (thickness_lower_crust + thickness_upper_crust + thickness_sa),
"upper_crust": np.ones(Nx) * (thickness_upper_crust + thickness_sa),
"air": np.ones(Nx) * (thickness_sa),
}
# Improve the seed layers:
# +
# Seed thickness in meters:
thickness_seed = 6.0e3
# Seed horizontal position in meters:
x_seed = 750.0e3
# Number of points of horizontal extent
n_seed = 2
interfaces["seed_base"][
int(Nx * x_seed // Lx - n_seed // 2) : int(Nx * x_seed // Lx + n_seed // 2)
] = (
interfaces["seed_base"][
int(Nx * x_seed // Lx - n_seed // 2) : int(Nx * x_seed // Lx + n_seed // 2)
]
+ thickness_seed // 2
)
interfaces["seed_top"][
int(Nx * x_seed // Lx - n_seed // 2) : int(Nx * x_seed // Lx + n_seed // 2)
] = (
interfaces["seed_top"][
int(Nx * x_seed // Lx - n_seed // 2) : int(Nx * x_seed // Lx + n_seed // 2)
]
- thickness_seed // 2
)
# -
# Plot the interfaces:
# +
fig, ax = plt.subplots(figsize=(16, 8))
for label, layer in interfaces.items():
ax.plot(x / 1e3, (-layer + thickness_sa) / 1e3, label=f"{label}")
ax.set_yticks(np.arange(-Lz / 1e3, 1 / 1e3, 10))
ax.set_xlim([0, Lx/1000])
ax.set_ylim([(-Lz + thickness_sa) / 1e3, 0 + thickness_sa / 1e3])
ax.set_xlabel("x [km]")
ax.set_ylabel("Depth [km]")
plt.title("Interfaces")
plt.legend()
plt.show()
# -
# #### Create the interface file
#
# The interface file contains the layer properties and the depths of the interfaces between these layers.
#
# Layer properties:
# * Compositional factor (C)
# * Density (rho)
# * Radiogenic heat (H)
# * Pre-exponential scale factor (A)
# * Power law exponent (n)
# * Activation energy (Q)
# * Activation volume (v)
# +
# Define the radiogenic heat for the upper and lower crust in W/kg:
Huc = 2.5e-6 / 2700.0
Hlc = 0.8e-6 / 2800.0
# Create and save the interface file:
with open("interfaces.txt", "w") as f:
layer_properties = f"""
C 1.0 1.0 0.1 1.0 1.0 1.0 1.0
rho 3378.0 3354.0 3354.0 3354.0 2800.0 2700.0 1.0
H 0.0 9.0e-12 9.0e-12 9.0e-12 {Hlc} {Huc} 0.0
A 1.393e-14 2.4168e-15 2.4168e-15 2.4168e-15 8.574e-28 8.574e-28 1.0e-18
n 3.0 3.5 3.5 3.5 4.0 4.0 1.0
Q 429.0e3 540.0e3 540.0e3 540.0e3 222.0e3 222.0e3 0.0
V 15.0e-6 25.0e-6 25.0e-6 25.0e-6 0.0 0.0 0.0
"""
for line in layer_properties.split("\n"):
line = line.strip()
if len(line):
f.write(" ".join(line.split()) + "\n")
# layer interfaces
data = -1 * np.array(tuple(interfaces.values())).T
np.savetxt(f, data, fmt="%.1f")
# -
# ### Initial temperature field
#
# The initial temperature structure is depth dependent and is 0 °C at the surface and 1300 °C at the base of the lithosphere at 130 km.
# With these boundary conditions, the initial temperature structure in the interior of the lithosphere is given by the solution of the following equation:
#
# $$ \kappa \frac{\partial^2 T(z)}{\partial z^2} + \frac{H(z)}{c_p} = 0$$
#
# where $H(z)$ is the internal heat production of the different layers.
#
# The sublithospheric temperature follows an adiabatic increase up to the bottom of the model:
#
# $$T = T_p \exp\left( \frac{g \alpha z}{c_p} \right)$$
#
# where $T_p$ is the potential temperature of the mantle, $g$ is the gravitational acceleration, $\alpha$ is the volumetric expansion coefficient, and $c_p$ is the specific heat capacity.
kappa = 1.0e-6 # m^2/s
ccapacity = 1250 # J/(kg K)
tem_p = 1262 # °C
g = -10 # m/s^2
alpha = 3.28e-5 # 1/K
# +
# Temperature when z < 130 km:
temp_z = 1300 * (z - thickness_sa) / (thickness_litho)
# Sublithospheric temperature:
temp_adiabatic = tem_p / np.exp(g * alpha * (z - thickness_sa) / ccapacity)
temp_z[temp_z < 0.0] = 0.0
temp_z[temp_z > temp_adiabatic] = temp_adiabatic[temp_z > temp_adiabatic]
# -
# Now, we will apply the thermal diffusivity in the model.
# Create the internal heat production model:
# +
H = np.zeros_like(temp_z)
# Add the H value for the upper crust:
cond = (z >= thickness_sa) & (z < thickness_upper_crust + thickness_sa)
H[cond] = Huc
# Add the H value for the lower crust:
cond = (z >= thickness_upper_crust + thickness_sa) & (
z < thickness_lower_crust + thickness_upper_crust + thickness_sa
) # lower crust
H[cond] = Hlc
# +
Taux = np.copy(temp_z)
t = 0
dt = 10000
dt_sec = dt * 365 * 24 * 3600
cond = (z > thickness_sa + thickness_litho) | (temp_z == 0)
dz = Lz / (Nz - 1)
# Apply the thermal diffusivity
while t < 500.0e6:
temp_z[1:-1] += (
kappa * dt_sec * ((temp_z[2:] + temp_z[:-2] - 2 * temp_z[1:-1]) / dz ** 2)
+ H[1:-1] * dt_sec / ccapacity
)
temp_z[cond] = Taux[cond]
t = t + dt
# -
# Save initial temperature file:
# +
temp_z = np.ones_like(X) * temp_z[:, None]
print(np.shape(temp_z))
# Save the initial temperature file
np.savetxt(
"input_temperature_0.txt",
np.reshape(temp_z, (Nx * Nz)),
header="T1\nT2\nT3\nT4"
)
# -
# Plot the temperature model:
# +
fig, (ax0, ax1) = plt.subplots(nrows=1, ncols=2, figsize=(16, 8))
# Temperature field:
im1 = ax0.contourf(
X / 1.0e3,
(Z - thickness_sa) / 1.0e3,
temp_z,
levels=np.arange(0, 1610, 100)
)
ax0.set_ylim((Lz - thickness_sa) / 1.0e3, -thickness_sa / 1e3)
ax0.set_xlabel("x [km]")
ax0.set_ylabel("Depth [km]")
cbar = fig.colorbar(im1, orientation='horizontal', ax=ax0)
cbar.set_label("Temperature [°C]")
# Profile:
ax1.set_title("Thermal profile [°C]")
ax1.plot(temp_z[:, 0], (z - thickness_sa) / 1.0e3, "-k")
# Add interfaces:
code = 0
for label in list(interfaces.keys()):
code += 1
color = "C" + str(code)
ax1.hlines(
(interfaces[label][0] - thickness_sa) / 1.0e3,
np.min(temp_z[:, 0]),
np.max(temp_z[:, 0]),
label=f"{label}",
color=color,
)
ax1.set_ylim((Lz - thickness_sa) / 1.0e3, -thickness_sa / 1e3)
plt.legend(loc="lower left")
plt.show()
# -
# ### Boundary condition - Velocity
#
# The horizontal velocity field along the left and right borders of the domain presents two layers:
# * Constant velocity with depth at $0 \le z < h_c$
# * Linearly variable velocity with depth at $h_c \le z \le h_c + h_a$
#
# where $h_c$ = 150 km is the thickness of the upper layer with constant velocity, corresponding to the lithosphere $h_{litho}$ = 130 km and part of the asthenosphere, and $h_a$ = 110 km corresponds to the remaining asthenospheric portion of the model until the bottom
# of the model, where the horizontal velocity at the borders of the model varies linearly with depth.
# Therefore, the sum $h_c + h_a$ represents the total thickness of the model without the "sticky air" layer.
#
# +
# Convert 1 cm/year to m/s:
velocity_L = 0.005 / (365 * 24 * 3600)
# Define the thickness with constant velocity in meters
thickness_v_const = 150.0e3
thickness_a = Lz - thickness_sa - thickness_v_const
velocity_R = 2 * velocity_L * (thickness_v_const + thickness_a) / thickness_a
fac_air = 10.0e3
# + tags=[]
# Create horizontal and vertical velocity:
VX, VZ = np.zeros_like(X), np.zeros_like(X)
# -
# Velocity for the left side (`x == 0`):
# When 0 <= z <= (h_v_const + thickness_sa), VX is zero.
# When (h_v_const + thickness_sa) <= z <= Lz, VX goes from 0 to vR.
cond = (Z > thickness_v_const + thickness_sa) & (X == 0)
VX[cond] = velocity_R * (Z[cond] - thickness_v_const - thickness_sa) / thickness_a
# Velocity for the right side (`x == Lx`):
# +
# When 0 <= z <= (h_v_const + thickness_sa), VX is 2vL
# When (h_v_const + thickness_sa) < z <= Lz, VX goes from 2vL to -vR + 2vL
cond = (Z > thickness_v_const + thickness_sa) & (X == Lx)
VX[cond] = -velocity_R * (Z[cond] - thickness_v_const - thickness_sa) / thickness_a
VX[X == Lx] += 2 * velocity_L
VX[Z <= thickness_sa - fac_air] = 0
# -
# Since mass conservation is assumed, the sum of the integrals of the velocity over the boundaries (net material flow) must be zero.
# +
# For the left side:
v0= VX[(X == 0)]
sum_velocity_left = np.sum(v0[1:-1]) + (v0[0] + v0[-1]) / 2.0
# For the right side:
vf = VX[(X == Lx)]
sum_velocity_right = np.sum(vf[1:-1]) + (vf[0] + vf[-1]) / 2.0
dz = Lz / (Nz - 1)
diff = (sum_velocity_right - sum_velocity_left) * dz
print("Sum of the integrals over the boundary is:", diff)
# -
# If the sum of the integrals over the boundaries is not exactly zero because of rounding errors, we add a very small flow at the top to compensate for the difference.
# In practice this is a very small correction.
# +
vv = -diff / Lx
VZ[Z == 0] = vv
# -
# Create and save the initial velocity file:
# +
VVX = np.copy(np.reshape(VX, Nx * Nz))
VVZ = np.copy(np.reshape(VZ, Nx * Nz))
velocity = np.zeros((2, Nx * Nz))
velocity[0, :] = VVX
velocity[1, :] = VVZ
velocity = np.reshape(velocity.T, (np.size(velocity)))
np.savetxt("input_velocity_0.txt", velocity, header="v1\nv2\nv3\nv4")
# -
# Plot the velocity profile for the boundaries:
# +
fig, (ax0, ax1) = plt.subplots(nrows=1, ncols=2, figsize=(9, 9), sharey=True)
ax0.plot(VX[:, 0], (z) / 1e3, "k-", label="left side")
ax1.plot(VZ[:, 0], (z) / 1e3, "k-", label="left side")
ax0.plot(VX[:, -1], (z ) / 1e3, "r-", label="right side")
ax1.plot(VZ[:, -1], (z) / 1e3, "r-", label="right side")
ax0.legend()
ax1.legend()
ax0_xlim = ax0.get_xlim()
ax1_xlim = ax1.get_xlim()
ax0.set_yticks(np.arange(-40, Lz / 1e3, 10))
#ax1.set_yticks(np.arange(0, Lz / 1000, 20))
ax0.set_ylim([Lz / 1e3 , 0])
ax0.set_xlim([-8e-10, 8e-10])
ax1.set_xlim([-8e-10, 8e-10])
ax0.set_xlabel(" Velocity [m/s]")
ax1.set_xlabel(" Velocity [m/s]")
ax0.set_ylabel("Depth [km]")
ax0.set_title("Horizontal component of velocity")
ax1.set_title("Vertical component of velocity")
plt.show()
# + [markdown] tags=[]
# ### Create the parameter file
# + tags=[]
params = f"""
nx = {Nx}
nz = {Nz}
lx = {Lx}
lz = {Lz}
# Simulation options
multigrid = 1 # ok -> soon to be on the command line only
solver = direct # default is direct [direct/iterative]
denok = 1.0e-15 # default is 1.0E-4
particles_per_element = 40 # default is 81
particles_perturb_factor = 0.7 # default is 0.5 [values are between 0 and 1]
rtol = 1.0e-7 # the absolute size of the residual norm (relevant only for iterative methods), default is 1.0E-5
RK4 = Euler # default is Euler [Euler/Runge-Kutta]
Xi_min = 1.0e-7 # default is 1.0E-14
random_initial_strain = 0.3 # default is 0.0
pressure_const = -1.0 # default is -1.0 (not used) - useful only in horizontal 2D models
initial_dynamic_range = True # default is False [True/False]
periodic_boundary = False # default is False [True/False]
high_kappa_in_asthenosphere = False # default is False [True/False]
K_fluvial = 2.0e-7 # default is 2.0E-7
m_fluvial = 1.0 # default is 1.0
sea_level = 0.0 # default is 0.0
basal_heat = 0.0 # default is -1.0
# Surface processes
sp_surface_tracking = False # default is False [True/False]
sp_surface_processes = False # default is False [True/False]
sp_dt = 1.0e5 # default is 0.0
sp_d_c = 1.0 # default is 0.0
plot_sediment = False # default is False [True/False]
a2l = True # default is True [True/False]
free_surface_stab = True # default is True [True/False]
theta_FSSA = 0.5 # default is 0.5 (only relevant when free_surface_stab = True)
# Time constrains
step_max = 10000 # Maximum time-step of the simulation
time_max = 100.0e6 # Maximum time of the simulation [years]
dt_max = 10.0e3 # Maximum time between steps of the simulation [years]
step_print = 20 # Make file every <step_print>
sub_division_time_step = 0.5 # default is 1.0
initial_print_step = 0 # default is 0
initial_print_max_time = 1.0e6 # default is 1.0E6 [years]
# Viscosity
viscosity_reference = 1.0e26 # Reference viscosity [Pa.s]
viscosity_max = 1.0e25 # Maximum viscosity [Pa.s]
viscosity_min = 1.0e18 # Minimum viscosity [Pa.s]
viscosity_per_element = constant # default is variable [constant/variable]
viscosity_mean_method = arithmetic # default is harmonic [harmonic/arithmetic]
viscosity_dependence = pressure # default is depth [pressure/depth]
# External ASCII inputs/outputs
interfaces_from_ascii = True # default is False [True/False]
n_interfaces = {len(interfaces.keys())} # Number of interfaces in the interfaces.txt file
### THE FOLLOWING LINE MUST BE TRUE, SIMULATING THE VARIABLE [B]OUNDARY [C]ONDITIONS FOR [V]ELOCITY ###
variable_bcv = True # default is False [True/False]
temperature_from_ascii = True # default is False [True/False]
velocity_from_ascii = True # default is False [True/False]
binary_output = False # default is False [True/False]
sticky_blanket_air = True # default is False [True/False]
precipitation_profile_from_ascii = False # default is False [True/False]
climate_change_from_ascii = False # default is False [True/False]
print_step_files = True # default is True [True/False]
checkered = False # Print one element in the print_step_files (default is False [True/False])
sp_mode = 5 # default is 1 [0/1/2]
geoq = on # ok
geoq_fac = 100.0 # ok
# Physical parameters
temperature_difference = 1500. # ok
thermal_expansion_coefficient = 3.28e-5 # ok
thermal_diffusivity_coefficient = 1.0e-6 # ok
gravity_acceleration = 10.0 # ok
density_mantle = 3300. # ok
external_heat = 0.0e-12 # ok
heat_capacity = 1250. # ok
non_linear_method = on # ok
adiabatic_component = on # ok
radiogenic_component = on # ok
# Velocity boundary conditions
top_normal_velocity = fixed # ok
top_tangential_velocity = free # ok
bot_normal_velocity = fixed # ok
bot_tangential_velocity = free # ok
left_normal_velocity = fixed # ok
left_tangential_velocity = fixed # ok
right_normal_velocity = fixed # ok
right_tangential_velocity = fixed # ok
surface_velocity = 0.0e-2 # ok
multi_velocity = False # default is False [True/False]
# Temperature boundary conditions
top_temperature = fixed # ok
bot_temperature = fixed # ok
left_temperature = fixed # ok
right_temperature = fixed # ok
rheology_model = 9 # ok
T_initial = 3 # ok
"""
# Create the parameter file
with open("param.txt", "w") as f:
for line in params.split("\n"):
line = line.strip()
if len(line):
f.write(" ".join(line.split()) + "\n")
# -
# ### The creation of the file `scale_bcv.txt`
# The first line of the file indicates the number of times the velocity field is modified during the simulation.
# The following lines are formed by two values: (1) Timing of change (in Myr); and (2) Scale factor.
# In our example, the velocity field is modified in two instants: at 20.0 and at 70.0 Myr.
# At 20.0 Myr the velocity field is scaled by -1.0, inverting the sense of the vectors.
# At 70.0 Myr the velocity field is rescaled to 0.0, simulating the tectonic quiescence.
# This file is only relevant if the flag `variable_bcv` is set to `True` in the `param.txt` file.
var_bcv = f""" 2
20.0 -1.0
70.0 0.0
"""
# Create the boundary-condition scaling file
with open("scale_bcv.txt", "w") as f:
for line in var_bcv.split("\n"):
line = line.strip()
if len(line):
f.write(" ".join(line.split()) + "\n")
# ## Run the model
#
# In this example, mandyoc uses the following flags:
#
# * -seed 0,2
# * -strain_seed 0.0,1.0
#
# You can run the model as:
#
# ```
# mpirun -n NUMBER_OF_CORES mandyoc -seed 0,2 -strain_seed 0.0,1.0
# ```
#
# *You have to change NUMBER_OF_CORES.*
# + [markdown] tags=[]
# ## Post-processing
#
# ### Plot the results
# -
# Determine the initial and final step to make the plots:
# +
step_initial = 0
step_final = 5000
d_step = 1000
# -
# Load the parameter file to generate the grid of the model:
# +
with open("param.txt", "r") as f:
line = f.readline()
line = line.split()
Nx = int(line[2])
line = f.readline()
line = line.split()
Nz = int(line[2])
line = f.readline()
line = line.split()
Lx = float(line[2])
line = f.readline()
line = line.split()
Lz = float(line[2])
print(
"nx:", Nx, "\n",
"nz:", Nz, "\n",
"Lx:", Lx, "\n",
"Lz:", Lz
)
# -
# Create the grid in kilometers:
# +
xi = np.linspace(0, Lx / 1e3, Nx)
zi = np.linspace(-Lz / 1e3, 0, Nz)
xx, zz = np.meshgrid(xi, zi)
# -
# Define the thickness of the air layer in kilometers
thickness_air = 40.0
# Plot the results:
for cont in range(step_initial, step_final + d_step, d_step):
# Read time
time = np.loadtxt("time_" + str(cont) + ".txt", dtype="str")
time = time[:, 2:]
time = time.astype("float")
# Read density
rho = pd.read_csv(
"density_" + str(cont) + ".txt",
delimiter=" ",
comment="P",
skiprows=2,
header=None,
)
rho = rho.to_numpy()
rho[np.abs(rho) < 1.0e-200] = 0
rho = np.reshape(rho, (Nx, Nz), order="F")
rho = np.transpose(rho)
# Read strain
strain = pd.read_csv(
"strain_" + str(cont) + ".txt",
delimiter=" ",
comment="P",
skiprows=2,
header=None,
)
strain = strain.to_numpy()
strain[np.abs(strain) < 1.0e-200] = 0
strain = np.reshape(strain, (Nx, Nz), order="F")
strain = np.transpose(strain)
strain[rho < 200] = 0
strain_log = np.log10(strain)
print("Step =", cont)
print("Time = %.1lf Myr\n\n" % (time[0] / 1.0e6))
print("strain(log)", np.min(strain_log), np.max(strain_log))
print("strain", np.min(strain), np.max(strain))
plt.figure(figsize=(20, 5))
plt.title("Time = %.1lf Myr\n\n" % (time[0] / 1.0e6))
# Create the colors to plot the density
cr = 255.0
color_upper_crust = (228.0 / cr, 156.0 / cr, 124.0 / cr)
color_lower_crust = (240.0 / cr, 209.0 / cr, 188.0 / cr)
color_lithosphere = (155.0 / cr, 194.0 / cr, 155.0 / cr)
color_asthenosphere = (207.0 / cr, 226.0 / cr, 205.0 / cr)
colors = [
color_upper_crust,
color_lower_crust,
color_lithosphere,
color_asthenosphere
]
# Plot density
plt.contourf(
xx,
zz + thickness_air,
rho,
levels=[200.0, 2750, 2900, 3365, 3900],
colors=colors,
)
# Plot strain
plt.imshow(
strain_log[::-1, :],
extent=[0, Lx / 1e3, -Lz / 1e3 + thickness_air, thickness_air],
zorder=100,
alpha=0.2,
cmap=plt.get_cmap("Greys"),
vmin=-0.5,
vmax=0.9,
)
b1 = [0.74, 0.41, 0.2, 0.2]
bv1 = plt.axes(b1)
A = np.zeros((100, 10))
A[:25, :] = 2700
A[25:50, :] = 2800
A[50:75, :] = 3300
A[75:100, :] = 3400
A = A[::-1, :]
xA = np.linspace(-0.5, 0.9, 10)
yA = np.linspace(0, 1.5, 100)
xxA, yyA = np.meshgrid(xA, yA)
air_threshold = 200
plt.contourf(
xxA,
yyA,
A,
levels=[air_threshold, 2750, 2900, 3365, 3900],
colors=colors,
)
plt.imshow(
xxA[::-1, :],
extent=[-0.5, 0.9, 0, 1.5],
zorder=100,
alpha=0.2,
cmap=plt.get_cmap("Greys"),
vmin=-0.5,
vmax=0.9,
)
bv1.set_yticklabels([])
plt.xlabel(r"$log_{10}(\epsilon_{II})$", size=18)
plt.show()
| examples/variable_velocity_field/create_input_and_plot.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="Fu5R1m9grA_Q"
# # Vaex Examples
# + [markdown] id="nDDGr_mureVT"
# ##### Copyright 2021 <NAME>
# + id="kl9rWJBHrfLE"
#@title Licensed under MIT License (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://huqy.github.io/HighPerfDataSciPython/LICENSE.md
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# + [markdown] id="xzx4RYa8rBqU"
# Vaex v4 has been released. It is the fastest DataFrame library we know of: instant file opening, and subsecond groupby and join on 1,000,000,000+ (billion) rows.
# + [markdown] id="925sXh4ctXZQ"
# ## Installation Vaex on Colab
# + [markdown] id="lqcqkrJR2imO"
# Under the current version of Colab, you need to restart the session runtime after running the pip install commands below.
# + id="TLxuRn9od1YX" colab={"base_uri": "https://localhost:8080/", "height": 1000} outputId="799345e4-5221-40be-fb34-a3bb7c55e4eb"
# !pip install vaex
# !pip install ipython==7.0.0
# + [markdown] id="uLxaDKuUtdP0"
# ## Check the default dataset example
#
# The original data is a simulation of the disruption of 33 satellite galaxies in a Galactic potential by <NAME> Zeeuw (2000). The incorporated dataset in Vaex is a random 10% subset of it containing 330 000 rows and serves well to
# demonstrate what can be done with vaex while being reasonably small in size.
#
# * <NAME>. 2000, MNRAS, 319, 657
#
# Most of the cells are copied from [Vaex introduction in 11 minutes](https://vaex.io/docs/tutorial.html).
# + id="OAfhbPlod5pQ" colab={"base_uri": "https://localhost:8080/"} outputId="2d97ee49-6a0c-4b08-a1dd-361002ebad21"
import vaex
df = vaex.example()
print(df)
# + id="P8F2_TFZeQ48" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="85bc0772-33d3-4ff9-8114-f08e5a0bee54"
# !ls -lh /root/.vaex/data/helmi-dezeeuw-2000-FeH-v2-10percent.hdf5
# + id="_QZkgKS-sAaT" colab={"base_uri": "https://localhost:8080/", "height": 255} outputId="f2ae9c8a-f011-43a8-fc5d-460fbda6cdd5"
df.x  # df.col.x and df['x'] are equivalent; df.x is often preferred for tab completion, df['x'] for programmatic access
# + id="hw3AHIvgsLoX" colab={"base_uri": "https://localhost:8080/", "height": 51} outputId="d73c9259-6070-4fb8-acda-02fee01a4aff"
df.x.values
# + id="fC81ili5sSt_" colab={"base_uri": "https://localhost:8080/", "height": 255} outputId="4c2a0530-c1b3-41c3-8359-8deb655f5f40"
import numpy as np
np.sqrt(df.x**2 + df.y**2 + df.z**2)
# + id="H8cYfF2NsWUo" colab={"base_uri": "https://localhost:8080/", "height": 221} outputId="5e7cc7ab-f14a-480e-edf2-4e7a4dfff559"
df['r'] = np.sqrt(df.x**2 + df.y**2 + df.z**2)
print(df[['x', 'y', 'z', 'r']])
# + [markdown] id="yNLJd1JstWni"
# Storing an expression as a column is called a virtual column since it does **not** take up any memory, and is computed on the fly when needed. A virtual column is treated just as a normal column.
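A rough pure-Python analogue of this idea (illustrative only, not Vaex's actual implementation): store the expression instead of its result, and evaluate it only on access.

```python
import numpy as np

class VirtualColumn:
    """Stores a callable expression instead of materialized data."""
    def __init__(self, expression):
        self.expression = expression  # no memory is used for results here

    def evaluate(self):
        # computed on the fly, exactly when needed
        return self.expression()

x = np.arange(5.0)
y = x ** 2
r = VirtualColumn(lambda: np.sqrt(x**2 + y**2))
print(r.evaluate())  # values exist only during this call
```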
# + id="PMWxit_1saUg" colab={"base_uri": "https://localhost:8080/", "height": 51} outputId="2146e245-484a-4f89-b364-6266456aaffb"
df.select(df.x < 0)
df.evaluate(df.x, selection=True)
# + id="NYwJmDL2tnE1" colab={"base_uri": "https://localhost:8080/", "height": 221} outputId="52a5b3de-233f-467b-916d-7abc1df5fa43"
df_negative = df[df.x < 0]
print(df_negative[['x', 'y', 'z', 'r']])
# + id="5HdoRiodtqBG" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="1049771b-6546-43bd-9908-960a95984d53"
df.count(), df.mean(df.x), df.mean(df.x, selection=True)
# + id="u0d57SENuwkt" colab={"base_uri": "https://localhost:8080/", "height": 119} outputId="713e55b8-c5d0-4ac1-e222-abaadc83d751"
counts_x = df.count(binby=df.x, limits=[-10, 10], shape=64)
counts_x
# + id="tq6Xg5wuu8r7" colab={"base_uri": "https://localhost:8080/", "height": 265} outputId="03edd456-cf4a-4b7d-bbb7-e4204fe13ceb"
import matplotlib.pylab as plt
plt.plot(np.linspace(-10, 10, 64), counts_x)
plt.show()
# + id="SDP0lrZCvADJ" colab={"base_uri": "https://localhost:8080/", "height": 136} outputId="e46f5eb5-0232-4483-dc16-ff6cafa62329"
xycounts = df.count(binby=[df.x, df.y], limits=[[-10, 10], [-10, 20]], shape=(64, 128))
xycounts
# + id="VMgBVMkzvDSR" colab={"base_uri": "https://localhost:8080/", "height": 269} outputId="696f148d-7018-4012-bddd-c00a20e4d880"
plt.imshow(xycounts.T, origin='lower', extent=[-10, 10, -10, 20])
plt.show()
# + id="4RmD75rnvIrZ" colab={"base_uri": "https://localhost:8080/", "height": 238} outputId="bb105a9e-a0f9-4b08-fe86-54ea52e3ecea"
v = np.sqrt(df.vx**2 + df.vy**2 + df.vz**2)
xy_mean_v = df.mean(v, binby=[df.x, df.y], limits=[[-10, 10], [-10, 20]], shape=(64, 128))
xy_mean_v
# + [markdown] id="Iyggxo_avazA"
# ## Getting your data in
# + id="M5G1sAWgvPiw" colab={"base_uri": "https://localhost:8080/", "height": 119} outputId="6f6004fe-23fc-4c29-ce13-670a97554f74"
import vaex
import numpy as np
x = np.arange(5)
y = x**2
df = vaex.from_arrays(x=x, y=y)
print(df)
# + id="c6aONE6lvLz0" colab={"base_uri": "https://localhost:8080/", "height": 269} outputId="469a6885-fddf-43ba-ca84-f1e60b6aaa7f"
plt.imshow(xy_mean_v.T, origin='lower', extent=[-10, 10, -10, 20])
plt.show()
# + [markdown] id="IRoYu8gMXuV6"
# ## Vaex 4.0 New Features
# + [markdown] id="4tWT_TJXvnAj"
# ### With Apache Arrow Arrays
# + id="xWQupCZEX0-u"
import vaex
import numpy as np
import pyarrow as pa
# + colab={"base_uri": "https://localhost:8080/"} id="hIsABnN0YTaT" outputId="c1c47216-5cf3-4a38-8b48-15e9d45a9705"
df = vaex.from_arrays(library=['Vaex', 'NumPy', 'Apache Arrow'])
print(df)
# + colab={"base_uri": "https://localhost:8080/"} id="R_wwwgUdYd4P" outputId="e1269560-0281-4bf3-a803-c86b2dc19d75"
print(repr(df['library'].values))
# + colab={"base_uri": "https://localhost:8080/"} id="hR-3O0apYqss" outputId="963c9513-413e-4dc8-bbd8-f7ffa6f784d1"
# mixing Arrow and Numpy
x = np.arange(4)
y = pa.array([42, 12, 144, 1024])
df = vaex.from_arrays(x=x, y=y)
df['y'] = df.x * df.y
print(repr(df.y))
# + colab={"base_uri": "https://localhost:8080/"} id="CVsWmUQHZFih" outputId="083f4251-a1c6-4d3b-e5c6-e6dc5e1615ba"
# By default, Arrow takes a Pandas-like approach of converting missing values to NaN when mixing with NumPy
x = np.ma.array(np.arange(4), mask=[0, 1, 0, 0], dtype='i4')
y = pa.array([42, 12, 144, None], type=pa.int32())
print(x*y)
# + colab={"base_uri": "https://localhost:8080/"} id="dsZgI-jVZmHU" outputId="6c0a70b6-59bb-48d4-f614-16b068a79073"
# Vaex ensures the missing values stay missing and arrays do not get upcast.
df = vaex.from_arrays(x=x, y=y)
df['y'] = df.x * df.y
print(repr(df.y))
# + colab={"base_uri": "https://localhost:8080/"} id="E6KUae6SaDZ0" outputId="3882b3cf-115a-4e4c-ce61-91eaa4204270"
df = vaex.from_arrays(text=['So, can you split this?', 'And this.', None])
df.text.str.split(" ")
# + colab={"base_uri": "https://localhost:8080/"} id="s2Hq19Zsdtn_" outputId="b118a4e5-a3bd-4c80-c9fe-6300c179dbae"
# apply string operations to each string in the list
df.text.str.split(" ").str.strip(' ,?.')
# + colab={"base_uri": "https://localhost:8080/"} id="UQLqt64jeAb1" outputId="a7c4d1b5-e0e7-450e-97c9-8f2872fc3572"
# String splitting can even be done multiple times creating a nested list without any performance loss.
df.text.str.split(" ").str.strip(' ,?.').str.split('a')
# + [markdown] id="1cuGE-BIgpiU"
# ### With Apache Parquet Support
# + id="L05bgDSbgpQt"
import vaex
# + id="LmXYYcMHgjPG"
countries = ['US', 'US', 'NL', 'FR', 'NL', 'NL']
years = [2020, 2021, 2020, 2020, 2019, 2020]
values = [1, 2, 3, 4, 5, 6]
# + id="5LBeUIqYhd8F"
df = vaex.from_dict({
'country': countries,
'year': years,
'value': values
})
df.export_partitioned('./partitioned', by=['country', 'year'])
# + colab={"base_uri": "https://localhost:8080/"} id="eiH4GXUwhgNe" outputId="74a9a134-3246-4fe8-a34e-1fa2f246384a"
# !ls -R ./partitioned/
# + colab={"base_uri": "https://localhost:8080/"} id="aOCsuNZLhrAM" outputId="e5f89778-47ce-4601-b416-1e3b1c20237e"
df = vaex.open('./partitioned')
print(df)
# + [markdown] id="SKDLANpJ4MIi"
# ## Stretching Colab to handle BIG data
#
# * Read 100+GB file with 1 billion rows
# * For demo, just processing 20% of it which still has 20+GB with 200 million rows
# + id="AMgoK-0R6OvS"
import vaex
# + id="szF59xvRveuD" colab={"base_uri": "https://localhost:8080/"} outputId="59b3aca1-e9f2-4c24-d9e6-14c0cbe40fb3"
# Read in the NYC Taxi dataset straight from S3
# Lazy streaming from S3 supported in combination with memory mapping.
# %%time
df = vaex.open('s3://vaex/taxi/yellow_taxi_2009_2015_f32.hdf5?anon=true') # open a 100+G file, 1 billion+ rows
print(df)
# + id="j-bbmRdMp8vI" colab={"base_uri": "https://localhost:8080/"} outputId="b28b50a5-7ffc-4f3e-d1fc-2c3c5f90b015"
# let's use just 20% of the data, since we want to make sure it fits
# into memory (so we don't measure just hdd/ssd speed)
# %%time
df.set_active_fraction(0.2)
print(df)
# + id="9bY1p1FN5ueI" colab={"base_uri": "https://localhost:8080/", "height": 297} outputId="f13e99f4-22fb-4b6c-df24-22458436a896"
df.plot(df.col.pickup_longitude, df.col.pickup_latitude, f="log1p", show=True, limits="96%");
# + id="Hylk_wMy1aun" colab={"base_uri": "https://localhost:8080/", "height": 255} outputId="1b07529e-da4a-401d-c041-5f9fd3f3300e"
df['tip_percentage'] = df.tip_amount / df.total_amount
df['tip_percentage']
# + [markdown] id="kq5Ff2MR3nc6"
# Filtering and evaluating expressions will not waste memory by making copies; the data is kept untouched on disk, and will be streamed only when needed. Delay the time before you need a cluster.
# + id="3T5SrTrL2ath"
dff = df[df.total_amount > 0]
# + id="xUBMUAd82mjc" colab={"base_uri": "https://localhost:8080/", "height": 66, "referenced_widgets": ["d5aa350783c0465083f2aafaddd92b65", "13e2c6e664c846eb90f77c609b2d7ddf", "b7bd3aaf6f3f4de583653272fb111709", "fa0d9d35ca824adaac271e45dc294db1", "11360157f0424023ae0e14dda2d6a32f", "5febc4f1591648d49aafcb8c13f0c616", "916200bae088467daf7a5a6dcf70efd9", "971f1e2ba9cd4653bf90e97e05b34a7e"]} outputId="0ed9d383-e141-42cf-f4f7-7669bb0ae8a7"
dff.tip_amount.mean(progress='widget')
# + id="ap5uHK1t33AL" colab={"base_uri": "https://localhost:8080/", "height": 35} outputId="b9162ca3-5c42-4d78-8094-87b0fc7c432e"
f"{len(dff):,} rows filtered and processed"
# + id="FLl-9TKowjHM" colab={"base_uri": "https://localhost:8080/", "height": 51} outputId="fbfa6351-d056-4e44-b9ce-fef6c2b81b0e"
# %%time
df_group = df.groupby(dff.passenger_count, agg=vaex.agg.mean(df.tip_amount))
# + id="BHjIP3pixsyA" colab={"base_uri": "https://localhost:8080/", "height": 255} outputId="7b973e56-6eff-4cd3-e6e5-37ebd55b1f88"
# %%time
df_joined = df.extract().join(df_group, on='passenger_count') # no memory copies
print(df_joined['passenger_count', 'tip_amount', 'tip_amount_mean'])
# + [markdown] id="LkW0z7PTpZiN"
# ## Just-In-Time compilation
#
# The heavy calculation can be optimized with Just-In-Time compilation based on Numba, Pythran, or CUDA (if you happen to have an NVIDIA graphics card). Choose whichever gives the best performance or is easiest to install.
#
# [Link](https://vaex.readthedocs.io/en/latest/tutorial.html#Just-In-Time-compilation)
# + id="EIkeefbErvrJ"
import numpy as np
# From http://pythonhosted.org/pythran/MANUAL.html
def arc_distance(theta_1, phi_1, theta_2, phi_2):
"""
Calculates the pairwise arc distance
between all points in vector a and b.
"""
    temp = (np.sin((theta_2-theta_1)/2)**2
            + np.cos(theta_1)*np.cos(theta_2) * np.sin((phi_2-phi_1)/2)**2)
distance_matrix = 2 * np.arctan2(np.sqrt(temp), np.sqrt(1-temp))
return distance_matrix
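As a quick NumPy-only sanity check of the standard haversine form (restated here so the snippet is self-contained): the arc distance from a point to itself should be 0, and between antipodal points on the equator it should be pi.

```python
import numpy as np

def arc_distance_check(theta_1, phi_1, theta_2, phi_2):
    # standard haversine formula on the unit sphere
    temp = (np.sin((theta_2 - theta_1) / 2) ** 2
            + np.cos(theta_1) * np.cos(theta_2) * np.sin((phi_2 - phi_1) / 2) ** 2)
    return 2 * np.arctan2(np.sqrt(temp), np.sqrt(1 - temp))

print(arc_distance_check(0.0, 0.0, 0.0, 0.0))    # 0.0
print(arc_distance_check(0.0, 0.0, 0.0, np.pi))  # pi (antipodal points on the equator)
```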
# + id="tzuqSq2Ur0HY"
df['arc_distance'] = arc_distance(df.pickup_longitude * np.pi/180,
df.pickup_latitude * np.pi/180,
df.dropoff_longitude * np.pi/180,
df.dropoff_latitude * np.pi/180)
# + id="Vf5wdO1er5zV" colab={"base_uri": "https://localhost:8080/"} outputId="56f16861-b05d-4a49-a811-1ce7b42e2389"
# %%time
df.mean(df.arc_distance)
# + [markdown] id="rOp4Rw7_3RPl"
# ### Using Cuda GPU
# + id="iDb5OBqTsF0U"
df['arc_distance_jit'] = df.arc_distance.jit_cuda()
# + id="2KYF8v4-sTXY" colab={"base_uri": "https://localhost:8080/"} outputId="1e4fb5cc-c8fa-4557-a914-7fa30af8c0ba"
# %%time
df.mean(df.arc_distance_jit)
# + [markdown] id="zC_847sJ3Upf"
# ### Using Numba
# + id="ttn9IpA_3BQj"
df['arc_distance_jit'] = df.arc_distance.jit_numba()
# + colab={"base_uri": "https://localhost:8080/"} id="nJVBIohe3EEx" outputId="c80e4848-c0d0-44ff-b6b3-71230ac2170c"
# %%time
df.mean(df.arc_distance_jit)
# + [markdown] id="2UqMsbms3W5R"
# ### Using Pythran
# + colab={"base_uri": "https://localhost:8080/"} id="yUe02f_G3KMw" outputId="2533b2bc-63fc-4477-98a1-3ead63861e59"
# !apt-get install libatlas-base-dev
# !apt-get install python-dev python-ply python-networkx python-numpy
# !pip install pythran
# + id="f_sOXSyj3Glb"
df['arc_distance_jit'] = df.arc_distance.jit_pythran()
# + colab={"base_uri": "https://localhost:8080/"} id="y7O1S--I3Io2" outputId="2ee5c58b-6b8f-48f3-c545-dfa16c019a7b"
# %%time
df.mean(df.arc_distance_jit)
# + [markdown] id="jW4FZwRr3fwb"
# ### Compare the performance
# + colab={"base_uri": "https://localhost:8080/", "height": 336} id="AMzsirp93jOx" outputId="56bbe790-4ad9-4ea3-a0b8-9f67848b9fd3"
import matplotlib.pyplot as plt
fig = plt.figure()
ax = fig.add_axes([0,0,1,1])
vaex_comp = ['Plain Vaex', 'JIT with Cuda', 'JIT with Numba', 'JIT with Pythran']
time = [92, 1.79, 17.8, 30.3]
ax.bar(vaex_comp, time, width=0.5)
ax.set_ylabel('Run Time (seconds)')
plt.show()
| Vaex_examples.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ### Introduction to the laboratory
#
# - computing environment
# - basic libraries
# - basic operations on images
# Checking the **Anaconda Python** version
import sys
print(sys.version)
# Checking the **OpenCV** library version
import cv2
print(cv2.__version__)
# Loading a test image from the **scikit-image** library.
#
# The other test images available in the library are listed in the documentation of the [data](http://scikit-image.org/docs/dev/api/skimage.data.html) module.
# +
from skimage import data
im = data.astronaut()
# -
# Displaying the image with the **matplotlib** library. Reusable code that parameterizes the figure size (*figsize*) and the colormap (*cmap*).
#
# The available colormap variants are listed in the [cmap](https://matplotlib.org/examples/color/colormaps_reference.html) documentation.
# +
import matplotlib.pyplot as plt
import numpy as np
plt.figure(figsize=(5,5))
plt.imshow(im, cmap="gray")
plt.axis('off')
plt.show()
# -
# Checking the basic image parameters: dimensions, luminance (color) encoding type, and size.
print(im.shape, im.dtype, im.size)
# An example of simple pixel manipulation on the image: drawing a white line at an angle of $-45^\circ$.
# +
for i in range(im.shape[0]):
for j in range(im.shape[1]):
if i==j:
im[i,j] = 255
plt.figure(figsize=(5,5))
plt.imshow(im, cmap="gray")
plt.axis('off')
plt.show()
# -
# Verifying the basic parameters of the output image.
print(im.min(), im.max())
print(im.dtype)
# Computing the image histogram with the **histogram** function from the **exposure** module.
#
# Presented as line and bar plots for an image whose internal luminance representation uses the one-byte integer range, i.e. $[0, 255]$.
# +
from skimage import exposure
histogram = exposure.histogram(im, nbins=256)
hist, cbins = histogram
plt.plot(cbins, hist)
plt.xlim([-10, 265])
plt.grid()
plt.show()
plt.bar(cbins, hist)
plt.xlim([-10, 265])
plt.grid()
plt.show()
# -
# Luminance levels covered by the histogram.
print('Number of levels: ', len(cbins), '\n')
print(cbins)
# Computing the image histogram with the **histogram** function from the **exposure** module.
#
# Presented as line and bar plots for an image whose internal luminance representation uses the real-valued range $[0, 1]$. Conversion between the **uint8** and **float64** representations is possible with the built-in transform functions, e.g. **img_as_float** and **img_as_ubyte** (details in the [documentation](http://scikit-image.org/docs/dev/api/skimage.html)).
# +
from skimage import img_as_float
imf = img_as_float(im)
nbins = 100
histogram = exposure.histogram(imf, nbins=nbins)
hist, cbins = histogram
plt.plot(cbins, hist)
plt.xlim([-0.2, 1.2])
plt.grid()
plt.show()
plt.bar(cbins, hist, width=0.005)
plt.xlim([-0.2, 1.2])
plt.grid()
plt.show()
# -
# Luminance levels covered by the histogram.
print(imf.dtype)
print('Number of levels: ', len(cbins), '\n')
print(cbins)
# Scaling images and plots.
# +
scale = 2.0
fig = plt.figure()
default_size = fig.get_size_inches()
fig.set_size_inches((default_size[0]*scale, default_size[1]*scale))
plt.bar(cbins, hist, width=0.005)
plt.xlim([-0.2, 1.2])
plt.grid()
plt.show()
# -
# Other functions for computing the histogram.
# +
nhistogram = np.histogram(imf, 100)
nhist, ncbins = nhistogram
bins = [item/(10.0*nbins) for item in range(5,1000,10)]
plt.bar(bins, nhist, width=0.005)
plt.xlim([-0.2, 1.2])
plt.grid()
plt.show()
plt.bar(cbins, hist, width=0.005)
plt.xlim([-0.2, 1.2])
plt.grid()
plt.show()
# -
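The manually constructed `bins` list above can be avoided: `np.histogram` also returns the bin edges, from which the bin centers follow directly. A small sketch on synthetic data:

```python
import numpy as np

data = np.random.default_rng(0).random(1000)  # synthetic image values in [0, 1)
hist, edges = np.histogram(data, bins=100, range=(0.0, 1.0))
centers = (edges[:-1] + edges[1:]) / 2  # midpoints of consecutive edges
print(centers[:3])  # [0.005 0.015 0.025]
```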
# Directly computing the centers of the luminance intervals, i.e. the domain of the histogram computed for a **float64**-encoded image.
nbins = 20
step = 1/nbins
start = step/2
pp = np.linspace(0, 1, nbins, endpoint=False)+start
print(pp, len(pp))
# Histogram equalization.
#
#
# +
imeq = exposure.equalize_hist(imf, nbins=256)
plt.figure(figsize=(5,5))
plt.imshow(im, cmap="gray")
plt.axis('off')
plt.show()
plt.figure(figsize=(5,5))
plt.imshow(imeq, cmap="gray")
plt.axis('off')
plt.show()
histogram = exposure.histogram(imf, nbins=256)
ohist, ocbins = histogram
plt.bar(ocbins, ohist, width=0.005)
plt.xlim([-0.2, 1.2])
plt.title('Original histogram')
plt.grid()
plt.show()
histogram = exposure.histogram(imeq, nbins=256)
ehist, ecbins = histogram
plt.bar(ecbins, ehist, width=0.005)
plt.xlim([-0.2, 1.2])
plt.title('Histogram after equalization')
plt.grid()
plt.show()
# -
# Checking whether the histograms have identical dimensions.
print(len(ohist), len(ehist))
# Definition of a function computing the Cumulative Distribution Function (CDF) for a given image histogram.
#
# Assume that $n_i$ denotes the number of pixels with luminance level $i$ in the image $X = \{x\}$, where the luminance (gray-level) range satisfies $0 < i < L$.
#
# The probability of a pixel having brightness level $i$ is then defined as:
#
# $$ p_{x}(i) = p(x = i) = \frac{n_i}{n} $$
#
# where $L$ is the total number of gray levels in the image, $n$ is the total number of pixels, and $p_{x}(i)$ is the value of the image histogram (normalized to the range $[0,1]$) corresponding to the pixel value $i$.
#
# With these parameters we can define the CDF as:
#
# $$ cdf_{x} (i) = \sum_{j=0}^{i} p_{x}(j) $$
#
# Histogram equalization is a transformation $y = T(x)$ that produces a new image $Y = \{y\}$ whose CDF is linearized over the whole value range:
#
# $$ cdf_{y} (i) = iK, \qquad K \in \mathbb{R} $$
def computeCDF(hist):
cdf = np.zeros(hist.size)
for idx in range(hist.size):
cdf[idx] = np.sum(hist[:idx+1])
return cdf
# Computing the CDFs of the histograms.
ocdf = computeCDF(ohist)
ecdf = computeCDF(ehist)
# Plots illustrating the difference between the histogram before and after the transformation.
# +
plt.bar(ocbins, ocdf, width=0.005)
plt.xlim([-0.2, 1.2])
plt.title('CDF of the original image')
plt.grid()
plt.show()
plt.bar(ecbins, ecdf, width=0.005)
plt.xlim([-0.2, 1.2])
plt.title('CDF after histogram equalization')
plt.grid()
plt.show()
# -
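The equalization transform $y = T(x)$ described above can be sketched with NumPy alone (an illustrative implementation, independent of scikit-image): map each pixel value through the normalized CDF, which makes the output CDF approximately linear.

```python
import numpy as np

def equalize(img, nbins=256):
    """Histogram equalization: map pixel values through the normalized CDF."""
    hist, edges = np.histogram(img.ravel(), bins=nbins, range=(0.0, 1.0))
    cdf = np.cumsum(hist) / img.size          # normalized CDF in [0, 1]
    centers = (edges[:-1] + edges[1:]) / 2
    # each pixel value x is replaced by cdf(x), linearizing the output CDF
    return np.interp(img.ravel(), centers, cdf).reshape(img.shape)

rng = np.random.default_rng(1)
skewed = rng.beta(2.0, 8.0, size=(64, 64))    # skewed synthetic "image" in (0, 1)
eq = equalize(skewed)
# after equalization the values are close to uniformly distributed
print(eq.min(), eq.max())
```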
| Lab1/01_podstawy.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + id="0oVqNrwK_Ijs" executionInfo={"status": "ok", "timestamp": 1634027121888, "user_tz": -330, "elapsed": 2801, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "03977101096195279046"}}
# importing libraries
import os
from collections import Counter
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
# models
from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D, Activation, Dropout, Flatten, Dense
from keras.preprocessing.image import ImageDataGenerator, img_to_array, load_img
from PIL import Image
from glob import glob
# to mount a drive
from google.colab import drive
# + colab={"base_uri": "https://localhost:8080/"} id="G0J9ij_Y_VYi" executionInfo={"status": "ok", "timestamp": 1634027424525, "user_tz": -330, "elapsed": 67331, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "03977101096195279046"}} outputId="f4848e4c-10d3-46e6-eb62-8281560b27cc"
#Mounting the drive
drive.mount('/content/gdrive')
#Setting kaggle configuration directory
os.environ['KAGGLE_CONFIG_DIR'] = "/content/gdrive/My Drive/Sem7_ML_Data/Kaggle"
# %cd /content/gdrive/My Drive/Kaggle
#Downloading and unzip dataset
# !kaggle datasets download -d moltean/fruits
# !unzip \*.zip && rm *.zip
# + colab={"base_uri": "https://localhost:8080/", "height": 265} id="h4LeR-5hAgOB" executionInfo={"status": "ok", "timestamp": 1634027438141, "user_tz": -330, "elapsed": 1011, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "03977101096195279046"}} outputId="3b339f98-efcd-4b63-d187-d43e5eab3f0c"
# printing Images
#Setting Training & Test dir paths
train_path = './fruits-360_dataset/fruits-360/Training/'
test_path = './fruits-360_dataset/fruits-360/Test/'
#Displaying the image
img = load_img(train_path + "Guava/r_8_100.jpg", target_size=(100, 100))
plt.imshow(img)
plt.axis("off")
plt.show()
#Printing the shape of the image array
x = img_to_array(img)
print(x.shape)
# + [markdown] id="Z8TZGD0yHj5x"
# The dataset is rich in terms of the variety of fruits it contains. Let's explore some more images of the fruits. We'll specify some fruit names in the images list and display them on a plot. We'll do this using the matplotlib library.
# + colab={"base_uri": "https://localhost:8080/", "height": 319} id="iZrpd0TNCdCb" executionInfo={"status": "ok", "timestamp": 1634027448388, "user_tz": -330, "elapsed": 1776, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "03977101096195279046"}} outputId="1c05b9cb-4e47-4e09-cc15-b7bd0de512ff"
#Visualizing more Images
images = ['Orange', 'Cauliflower', 'Cactus fruit', 'Eggplant', 'Avocado', 'Guava','Lychee', 'Walnut']
fig = plt.figure(figsize = (10, 5))
for i in range(len(images)):
ax = fig.add_subplot(3, 3, i + 1, xticks = [], yticks = [])
plt.title(images[i])
plt.axis("off")
ax.imshow(load_img(train_path + images[i] +"/0_100.jpg", target_size=(100, 100)))
# + [markdown] id="SrWuKEyrHcfC"
# We'll find the 4 most frequent fruits in the dataset. For this, we'll create a list named fruits and populate it with all the occurrences of fruits. Then, we'll use Counter from the collections library to find out the 4 most frequently occurring fruits in the 'fruits' list.
# + colab={"base_uri": "https://localhost:8080/"} id="LPCqHkMeEgvZ" executionInfo={"status": "ok", "timestamp": 1634027455445, "user_tz": -330, "elapsed": 477, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "03977101096195279046"}} outputId="a56a9341-4432-471e-e74e-72b0fbd8e519"
#Storing occurences of fruits in a list
fruits = []
fruits_image = []
for i in os.listdir(train_path):
for image_filename in os.listdir(train_path + i):
fruits.append(i)
fruits_image.append(i + '/' + image_filename)
#Finding the top 4 most frequent fruits
newData = Counter(fruits)
frequent_fruits = newData.most_common(4)
print("Top 4 frequent Fruits:")
frequent_fruits
# + [markdown] id="Zt2hP_U4JNYJ"
# We'll find out the total number of classes for the dataset. To do this, we'll use glob. The glob module finds all the pathnames matching a specified pattern and returns them in arbitrary order. The directory containing a particular fruit's images has the same name as the fruit. So, we'll be able to get the classes of fruits from the directory names.
# + colab={"base_uri": "https://localhost:8080/"} id="XcGmsHUwIAFJ" executionInfo={"status": "ok", "timestamp": 1634027461037, "user_tz": -330, "elapsed": 506, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "03977101096195279046"}} outputId="ab728d2d-a355-4832-8298-c7dd71b1f6ec"
#Finding number of classes
className = glob(train_path + '/*')
number_of_class = len(className)
print(number_of_class)
# + [markdown] id="_5wAPxryJ5ep"
# First, we'll call an "empty" sequential model. We'll add to this empty model one layer at a time. The first layer is a convolutional layer with a depth of 32 and a filter size of 3x3.
# Activation: "relu"
#
# We need to specify an input size only for our first layer, as the subsequent layers can infer the input size from the output size of the previous layer. Here, our input size is (100, 100, 3).
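The spatial sizes of the stack below can be worked out by hand: each 3x3 "valid" convolution shrinks a dimension by 2, and each 2x2 max-pool halves it (floor division). A small sketch of that arithmetic, assuming Keras defaults (no padding, pool size 2):

```python
def conv_pool_shapes(size, convs):
    """Track the spatial size through conv(3x3, valid) + maxpool(2x2) blocks."""
    shapes = []
    for depth in convs:
        size = size - 2      # 3x3 valid convolution: n -> n - 2
        size = size // 2     # 2x2 max-pooling: n -> floor(n / 2)
        shapes.append((size, size, depth))
    return shapes

shapes = conv_pool_shapes(100, [32, 32, 64])
print(shapes)        # [(49, 49, 32), (23, 23, 32), (10, 10, 64)]
print(10 * 10 * 64)  # 6400 units feed the Flatten layer
```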
# + colab={"base_uri": "https://localhost:8080/"} id="wNi0QuxIJQFx" executionInfo={"status": "ok", "timestamp": 1634027466848, "user_tz": -330, "elapsed": 524, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "03977101096195279046"}} outputId="9a271634-d2cc-4d00-a290-2768ac930cc7"
'''
There is a MaxPooling2D layer after every convolutional layer.
This layer downsamples the input representation by taking the maximum value over a window.
'Pooling' is basically the process of merging for the purpose of reducing the size of the data.
'''
#Creating the model
model = Sequential()
model.add(Conv2D(32,(3,3),input_shape = x.shape))
model.add(Activation("relu"))
model.add(MaxPooling2D())
model.add(Conv2D(32,(3,3)))
model.add(Activation("relu"))
model.add(MaxPooling2D())
model.add(Conv2D(64,(3,3)))
model.add(Activation("relu"))
model.add(MaxPooling2D())
model.add(Flatten())
model.add(Dense(1024))
model.add(Activation("relu"))
model.add(Dropout(0.5))
model.add(Dense(number_of_class))
model.add(Activation("softmax"))
#Compiling the model
model.compile(loss = "categorical_crossentropy",
optimizer = "rmsprop",
metrics = ["accuracy"])
#Getting model's summary
model.summary()
# + id="CHNBa-_mKezD" executionInfo={"status": "ok", "timestamp": 1634027482357, "user_tz": -330, "elapsed": 514, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "03977101096195279046"}}
#Specifying epochs & batch size
epochs = 50
batch_size = 71
# + colab={"base_uri": "https://localhost:8080/"} id="c1F1EQbYKkgp" executionInfo={"status": "ok", "timestamp": 1634027489586, "user_tz": -330, "elapsed": 5259, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "03977101096195279046"}} outputId="5ebdb962-b757-4c7a-f982-32c3d212abee"
#Creating an object of ImageDataGenerator.
train_datagen = ImageDataGenerator(rescale= 1./255,
shear_range = 0.3,
horizontal_flip=True,
zoom_range = 0.3)
test_datagen = ImageDataGenerator(rescale= 1./255)
#Generating batches of Augmented data.
train_generator = train_datagen.flow_from_directory(
directory = train_path,
target_size= x.shape[:2],
batch_size = batch_size,
color_mode= "rgb",
class_mode= "categorical")
# test generator
test_generator = test_datagen.flow_from_directory(
directory = test_path,
target_size= x.shape[:2],
batch_size = batch_size,
color_mode= "rgb",
class_mode= "categorical")
# + colab={"base_uri": "https://localhost:8080/"} id="azz7de0Rz5Ow" executionInfo={"status": "ok", "timestamp": 1634029103768, "user_tz": -330, "elapsed": 1608974, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "03977101096195279046"}} outputId="15decd3d-03d9-4f72-f998-07900208105b"
#Fitting the model
hist = model.fit_generator(
generator = train_generator,
steps_per_epoch = 1600 // batch_size,
epochs=epochs,
validation_data = test_generator,
validation_steps = 800 // batch_size)
# + colab={"base_uri": "https://localhost:8080/", "height": 281} id="IDdFTW4H0HHV" executionInfo={"status": "ok", "timestamp": 1634029112510, "user_tz": -330, "elapsed": 672, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "03977101096195279046"}} outputId="6ce69c34-0bfd-4ec1-99d7-e4b9de355403"
#Plotting train & validation loss
plt.figure()
plt.plot(hist.history["loss"],label = "Train Loss", color = "green")
plt.plot(hist.history["val_loss"],label = "Validation Loss", color = "mediumvioletred", linestyle="dashed",markeredgecolor = "purple", markeredgewidth = 2)
plt.title("Model Loss", color = "darkred", size = 13)
plt.legend()
plt.show()
# + colab={"base_uri": "https://localhost:8080/", "height": 281} id="HRxFIZVq6_Lj" executionInfo={"status": "ok", "timestamp": 1634029117121, "user_tz": -330, "elapsed": 6, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "03977101096195279046"}} outputId="f0676246-120b-4b13-85e2-1d1f3f9016a4"
#Plotting train & validation accuracy
plt.figure()
plt.plot(hist.history["accuracy"],label = "Train Accuracy", color = "yellow")
plt.plot(hist.history["val_accuracy"],label = "Validation Accuracy", color = "mediumvioletred", linestyle="dashed",markeredgecolor = "purple", markeredgewidth = 2)
plt.title("Model Accuracy", color = "darkred", size = 13)
plt.legend()
plt.show()
| LAB10/071_10.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="cpbJ26lr0h5U"
# # INSTALLING ANACONDA
# + colab={"base_uri": "https://localhost:8080/"} id="J-bW_xhq0hk9" outputId="7861a2e5-615b-47a4-af44-f27948fee63c"
# !echo $PYTHONPATH
# + colab={"base_uri": "https://localhost:8080/"} id="qmiTXjX90oxe" outputId="b07d246c-439a-4d9d-a032-9e302c9355df"
# %env PYTHONPATH=
# + colab={"background_save": true, "base_uri": "https://localhost:8080/"} id="gsKMPEke0qI-" outputId="ac6f51d1-3ccd-465c-e676-2b3411577a69" language="bash"
# MINICONDA_INSTALLER_SCRIPT=Miniconda3-4.5.4-Linux-x86_64.sh
# MINICONDA_PREFIX=/usr/local
# wget https://repo.continuum.io/miniconda/$MINICONDA_INSTALLER_SCRIPT
# chmod +x $MINICONDA_INSTALLER_SCRIPT
# ./$MINICONDA_INSTALLER_SCRIPT -b -f -p $MINICONDA_PREFIX
# + colab={"background_save": true, "base_uri": "https://localhost:8080/"} id="6d6Cfd-M0wXS" outputId="3450d44f-6a60-402c-b472-4b75a2fb51e8" language="bash"
# conda install --channel defaults conda python=3.6 --yes
# conda update --channel defaults --all --yes
# + colab={"background_save": true} id="PDiWTz4S1Djz"
import sys
_ = (sys.path
.append("/usr/local/lib/python3.6/site-packages"))
# + colab={"background_save": true, "base_uri": "https://localhost:8080/"} id="exF392Fd1H6B" outputId="f35dcbbe-15bf-4737-d7bb-545aacc025b4"
# !conda install --channel conda-forge featuretools --yes
# + [markdown] id="YOXynmej1IWh"
# #DeepPurpose
# + colab={"base_uri": "https://localhost:8080/"} id="NwSskeYTu0kp" outputId="4f2fb11e-472c-4e3c-806c-26c7f10afc9f"
# !git clone https://github.com/kexinhuang12345/DeepPurpose.git
# + colab={"base_uri": "https://localhost:8080/"} id="9anX4WN75cgj" outputId="1d2d99c1-6d49-4893-e40f-1996262e1c74" language="bash"
# cd /content/DeepPurpose
# echo -y | conda env create -f environment.yml
# source activate DeepPurpose
#
#
# python
#
#
# from DeepPurpose import oneliner
# from DeepPurpose.dataset import *
# oneliner.repurpose(*load_SARS_CoV2_Protease_3CL(), *load_antiviral_drugs(no_cid = True))
# + id="szpKX-rUQJbi"
| tutorial-notebooks/Hackbio_Case_Study_2_(b)__Repurposing_using_Customized_training_data_with_One_Line.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
# printing the prime numbers from 1 to 200:
# -
for num in range(2,201):
count=0
for i in range(num,0,-1):
if(num%i==0):
count+=1
if(count==2):
print(num ,end=" ")
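# The inner loop above counts every divisor of num, which is O(n) per number. A faster sketch (same output over the same range) stops trial division at the square root:

```python
def is_prime(n):
    # a prime is an integer > 1 with no divisor up to its square root
    if n < 2:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True

primes = [n for n in range(2, 201) if is_prime(n)]
print(*primes)
```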
# +
# Using if-else clauses:
# -
num=int(input())
if(num<=1000):
print("safe to land")
elif(num>1000 and num<=5000):
print("come down to 1000 mtrs")
else:
print("turn around")
| assignment-day_3.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import numpy as np
import matplotlib.pyplot as plt
# %matplotlib inline
plt.show()  # needed if you don't use %matplotlib inline
x = np.linspace(0,5,11)
x
y = x ** 2
plt.plot(x,y,'r-',marker = "o")
plt.xlabel('Number')
plt.ylabel("Squared")
plt.title("Exponential: Power 2")
plt.plot(x,y)
plt.subplot(1,2,1)
plt.plot(x,y,'r')
plt.subplot(1,2,2)
plt.plot(y,x,'b')
fig = plt.figure()
axes = fig.add_axes([0,0,1,1])
axes.plot(x,y)
axes.set_xlabel('X Label')
axes.set_ylabel('Y Label')
fig = plt.figure()
axes1 = fig.add_axes([0, 0, 1, 1])
axes2 = fig.add_axes([0.2, 0.5, 0.3, 0.3])
fig, axes = plt.subplots(3,3)
plt.tight_layout()
for current_axes_row in axes:
for current_element in current_axes_row:
current_element.plot(x,y)
fig = plt.figure(figsize=(3,2))
ax = fig.add_axes([0,0,1,1])
ax.plot(x,y)
fig, axes = plt.subplots(2,2,figsize = (8,6))
for row_ in axes:
for ele_ in row_:
ele_.set_title("plot")
ele_.set_xlabel("xlabel")
ele_.set_ylabel("ylabel")
ele_.plot(x,y,label = "Squared")
ele_.legend(loc = 0)
plt.tight_layout()
fig = plt.figure()
ax = fig.add_axes([0,0,1,1])
ax.set_xlim([0,5])
ax.set_ylim([0,30])
ax.plot(x, y, color="green", linestyle="--", linewidth=3, marker="o", markersize=15, markerfacecolor="red", markeredgecolor="red")
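# The long keyword list above can be gathered into a reusable style dict and unpacked with ** (a sketch; the dict name is arbitrary):

```python
import matplotlib
matplotlib.use("Agg")  # non-interactive backend so the sketch runs headless
import matplotlib.pyplot as plt
import numpy as np

# keyword arguments collected once, reusable across plots
marker_style = {
    "color": "green", "linestyle": "--", "linewidth": 3,
    "marker": "o", "markersize": 15,
    "markerfacecolor": "red", "markeredgecolor": "red",
}

x = np.linspace(0, 5, 11)
fig, ax = plt.subplots()
line, = ax.plot(x, x ** 2, **marker_style)  # same styling, tidier call site
```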
| Plots4.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# <img src="images/usm.jpg" width="480" height="240" align="left"/>
# # MAT281 - Lab N°06
#
# ## Class objectives
#
# * Reinforce the basic concepts of E.D.A. (exploratory data analysis).
# ## Contents
#
# * [Problem 01](#p1)
#
# ## Problem 01
# <img src="./images/logo_iris.jpg" width="360" height="360" align="center"/>
# The **Iris dataset** contains samples of three Iris species (Iris setosa, Iris virginica and Iris versicolor). Four traits were measured on each sample: the length and width of the sepal and petal, in centimeters.
#
# The first step is to load the dataset and look at the first rows that make it up:
# +
# libraries
import os
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
pd.set_option('display.max_columns', 500)  # show more dataframe columns
# show matplotlib plots inline in jupyter notebook/lab
# %matplotlib inline
# +
# load data
df = pd.read_csv(os.path.join("data","iris_contaminados.csv"))
#df = pd.read_csv("iris_contaminados.csv")
df.columns = ['sepalLength',
'sepalWidth',
'petalLength',
'petalWidth',
'species']
df.head()
# -
# ### Experiment basics
#
# The first step is to identify the variables that influence the study and their nature.
#
# * **species**:
#     * Description: name of the Iris species.
#     * Data type: *string*
#     * Constraints: only three types exist (setosa, virginica and versicolor).
# * **sepalLength**:
#     * Description: sepal length.
#     * Data type: *float*.
#     * Constraints: values lie between 4.0 and 7.0 cm.
# * **sepalWidth**:
#     * Description: sepal width.
#     * Data type: *float*.
#     * Constraints: values lie between 2.0 and 4.5 cm.
# * **petalLength**:
#     * Description: petal length.
#     * Data type: *float*.
#     * Constraints: values lie between 1.0 and 7.0 cm.
# * **petalWidth**:
#     * Description: petal width.
#     * Data type: *float*.
#     * Constraints: values lie between 0.1 and 2.5 cm.
# Your objective is to carry out a proper **E.D.A.**; to do so, follow these instructions:
# 1. Count the elements of the **species** column and correct them according to your judgment. Replace nan values with "default".
# +
species_rep = df["species"].unique()
# use groupby to split by species
group_species = df.groupby("species")
# use sepalLength (any column would do) to count rows per species
group_species[['sepalLength']].count().reset_index()
# -
# We can spot several problems within species:
# 1) Empty values: these will be replaced with default
#
# 2) Mixed upper- and lowercase spellings
#
# 3) Repeated categories caused by surrounding spaces
#
#
# +
df.loc[df['species'].isnull(),'species'] = 'default' # fixes 1)
df['species'] = df['species'].str.lower().str.strip() # fixes 2) and 3)
# -
# Counting each species again shows that only the 3 requested categories remain and the nan values were replaced with default
# use groupby to split by species
group_species = df.groupby("species")
# use sepalLength (any column would do) to count rows per species
group_species[['sepalLength']].count().reset_index()
# 2. Make a box plot of the petal and sepal lengths and widths. Replace nan values with **0**.
# First fix the length and width columns by assigning 0 to nan:
# assign numeric 0 (not the string '0') so the columns keep their float dtype
for col in df.columns[:4]:
    df.loc[df[col].isnull(), col] = 0
data_st = df.drop(["species"],axis=1)
sns.boxplot(data = data_st)
# 3. A range of valid values for the petal and sepal lengths and widths was defined above. Add a column named **label** that identifies which of these values fall outside the valid range.
# Build a filter for the variables
L =[]
cont=0
cotas_sup = [7,4.5,7,2.5]
cotas_inf = [4,2,1,0.1]
for variable in df.columns[:4]:
mask_inf = df[variable].astype(float) >=cotas_inf[cont]
mask_sup = df[variable].astype(float) <=cotas_sup[cont]
mask = mask_inf & mask_sup
cont+=1
L.append(mask)
# we get a list with the filter for each variable; now we use propositional logic
# Note that as soon as a False value appears we must assign the label False, and only if everything is True do we assign True. This is exactly the behavior of the logical conjunction operator "and", which returns False as soon as a False appears, so our label column is the conjunction of the filters.
etiquetas=[]
for i in range(len(L[0])):
etiquetas.append((L[0][i]) and (L[1][i]) and (L[2][i]) and (L[3][i]))
label_row = pd.DataFrame(etiquetas,columns=["label"])
df = pd.concat([df,label_row], axis=1, sort=False)
df
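# The element-wise and loop above can also be written as a vectorized conjunction of boolean Series; a minimal sketch on toy masks (not the lab's data):

```python
from functools import reduce
import operator

import pandas as pd

masks = [
    pd.Series([True, True, False, True]),
    pd.Series([True, False, True, True]),
    pd.Series([True, True, True, True]),
]
# logical AND across every mask at once, without an explicit Python loop
label = reduce(operator.and_, masks)
print(label.tolist())
```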
# 4. Plot *sepalLength* vs *petalLength* and *sepalWidth* vs *petalWidth*, categorized by the **label** column. Draw conclusions from your results.
#sepalLength vs petalLength
sns.scatterplot(
x = "sepalLength",
y = "petalLength",
data = df,
hue = "label",
palette = ["red","blue"]
)
#sepalWidth vs petalWidth
sns.scatterplot(
x = "sepalWidth",
y = "petalWidth",
data = df,
hue = "label",
palette = ["red","blue"]
)
# 5. Filter the valid data and plot *sepalLength* vs *petalLength* categorized by the **species** label.
# Valid rows have label True; applying the filter:
mask = df['label']==True
df_filtrado = df[mask]
df_filtrado
print("Let's see how many rows are lost by filtering")
print('Rows in the unfiltered dataset:', len(df))
print('Rows in the filtered dataset:', len(df_filtrado))
# +
print("The requested plot is shown below:")
#sepalLength vs petalLength
sns.scatterplot(
x = "sepalLength",
y = "petalLength",
data = df_filtrado,
hue = "species",
)
# -
# As an observation, the rows that were previously nan are far fewer than the non-missing rows. We can also see that the species show some scattered points within the point cloud; in the case of setosa there is a point above its cluster, and in such situations an outlier analysis is usually performed to improve the separability of these groups.
| labs/lab_06.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/melekayas/hu-bby162-2021/blob/main/adres.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + colab={"base_uri": "https://localhost:8080/"} id="FNk35mABe6Kk" outputId="4f1d7b19-c990-4958-b019-f3731a2917ac"
dosya = "/content/drive/MyDrive/Colab Notebooks/adres.txt"
f = open(dosya, "r")
for line in f.readlines():
print(line)
f.close()
# + [markdown] id="yhpTkxZzggLA"
# # New Section
# + id="CsMdpS_tiMyL"
dosya = "/content/drive/MyDrive/Colab Notebooks/adres.txt"
f = open(dosya, 'w')  # use mode 'a' to append to the existing data
f.write("test")  # write "test\n" to put each new entry on its own line
f.close()
# + colab={"base_uri": "https://localhost:8080/"} id="oENglOWiZzfz" outputId="c06084e1-a346-454b-8448-d8f3c0777983"
dosya = "/content/drive/MyDrive/Colab Notebooks/adres.txt"
adSoyad = input("Enter your full name: ")
email = input("Enter your email: ")
'''
def adSoyad():
    bilgi = input("Enter your full name: ")
def email():
    bilg = input("Enter your email: ")
def giris():
    print(" 1- Enter your full name.")
    print(" 2- Enter your email.")
    secilen = input("Which operation do you want to perform (1/2):")
    if secilen == "1":
        adSoyad()
    else:
        email()
giris()  # I ran this function and it worked, but it treated the [f.write(adSoyad + " / " + email + "\n")] part as invalid and I could not fix it.
That is why I did not include it.
'''
f = open(dosya, 'a')  # mode 'a' appends to the existing data
f.write(adSoyad + " / " + email + "\n")
f.close()
dosya = "/content/drive/MyDrive/Colab Notebooks/adres.txt"
f = open(dosya, "r")
for line in f.readlines():
print(line)
f.close()
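# The open/close pairs above can be written with a with block, which closes the file automatically even if an exception occurs (a sketch using a temporary file and made-up example data, not the Drive path):

```python
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "adres.txt")
with open(path, "a") as f:   # mode 'a' appends, as in the cells above
    f.write("Jane Doe / jane@example.com\n")
with open(path, "r") as f:   # closed automatically when the block exits
    lines = f.readlines()
print(lines)
```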
| adres.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python [conda env:immune-evolution]
# language: python
# name: conda-env-immune-evolution-py
# ---
# # Imports
# %load_ext autoreload
# %autoreload 2
# +
import glob
import os
import matplotlib.pyplot as plt
import pandas as pd
import scanpy as sc
import seaborn as sns
# +
from path_constants import H5AD
from nb_utils import describe
from celltype_utils import get_shared_adata
from path_constants import sig_outdir_base, FIGURE_FOLDER
# -
# # Read parquet files
# ## File paths
sketch_id = 'alphabet-dayhoff__ksize-51__scaled-10'
kmer_categories = (
"Not in reference genome",
"In ref genome, not in a gene",
"In ref genome, not in a 1:1 orthologous gene",
"In ref genome, in a 1:1 orthologous gene",
)
# ## Read hash2kmer with predictions and orthology from parquet
hash2kmer_with_predictions = pd.read_parquet(
os.path.join(sig_outdir_base, f"aggregated-hash2kmer-with-predicted-cells--{sketch_id}--with-orthology.parquet")
)
# ## Get number of k-mers in different categories per `groundtruth_celltype`
# +
# %%time
celltype_col = "groundtruth_celltype"
diagnostic_kmers_n_per_category = hash2kmer_with_predictions.groupby(
["species", celltype_col, "kmer_category"]
).hashval.nunique()
diagnostic_kmers_n_per_category.name = "percent_kmers"
diagnostic_kmers_n_per_category
# -
diagnostic_kmers_n_per_category_df = diagnostic_kmers_n_per_category.groupby(level=[0, 1]).apply(lambda x: 100*x/x.sum()).reset_index()
diagnostic_kmers_n_per_category_df
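# The level-wise normalization above (percent of k-mers within each species/cell-type pair) can be sketched on a toy two-level Series; transform('sum') keeps the original index, which makes the idiom stable across pandas versions:

```python
import pandas as pd

counts = pd.Series(
    [3, 1, 2, 2],
    index=pd.MultiIndex.from_tuples(
        [("mouse", "B cell"), ("mouse", "T cell"),
         ("human", "B cell"), ("human", "T cell")],
        names=["species", "celltype"],
    ),
)
# percentage within each species (index level 0)
percent = 100 * counts / counts.groupby(level=0).transform("sum")
print(percent)
```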
diagnostic_kmers_n_per_category_df.query(
'(species == "mouse") and (kmer_category == "Not in reference genome")'
).sort_values('percent_kmers')
diagnostic_kmers_n_per_category_df.query(
'(species == "lemur") and (kmer_category == "Not in reference genome")'
).sort_values('percent_kmers')
# # Plot percentage of kmers in ref genome, etc
figure_folder = os.path.join(FIGURE_FOLDER, 'kmer_gene_orthology')
# ! mkdir $figure_folder
species_order = 'human', 'lemur', 'mouse', 'bat'
diagnostic_kmers_n_per_category_df['species'] = pd.Categorical(diagnostic_kmers_n_per_category_df['species'], categories=species_order, ordered=True)
g = sns.catplot(
data=diagnostic_kmers_n_per_category_df,
col=celltype_col,
order=species_order,
col_wrap=5,
y="species",
x="percent_kmers",
hue="kmer_category",
hue_order=kmer_categories,
palette='mako',
kind='bar',
height=2,
linewidth=1,
edgecolor='white'
# legend=True,
)
g.set_titles('{col_name}')
for ax in g.axes.flat:
title = ax.get_title()
if title == 'Smooth Muscle and Myofibroblast':
ax.set_title('Smooth Muscle\nand Myofibroblast', fontsize=10, pad=-20)
pdf = os.path.join(figure_folder, 'unstacked_barplot__col-celltype__y-species__hue-kmer_category.pdf')
g.savefig(pdf)
g = sns.catplot(
data=diagnostic_kmers_n_per_category_df,
hue=celltype_col,
# col_wrap=2,
y="species",
order=species_order,
x="percent_kmers",
col="kmer_category",
col_order=kmer_categories,
palette='tab10',
kind='bar',
height=2.5,
sharex=True,
linewidth=.5,
edgecolor='white'
# legend=True,
)
g.set_titles('{col_name}')
for ax in g.axes.flat:
if ax.is_last_row():
title = ax.get_title()
title = title.replace(',', ',\n')
ax.set_title(title, fontsize=10, pad=-20)
# if ax.is_first_col():
# ax.set(xscale='log')
pdf = os.path.join(figure_folder, 'unstacked_barplot__col-kmer_category__y-species__hue-celltype.pdf')
g.savefig(pdf)
species_order_no_lemur = ['human','mouse', 'bat']
diagnostic_kmers_n_per_category_df_no_lemur = diagnostic_kmers_n_per_category_df.query(
"species in @species_order_no_lemur"
)
diagnostic_kmers_n_per_category_df_no_lemur.species = pd.Categorical(
diagnostic_kmers_n_per_category_df_no_lemur.species,
categories=species_order_no_lemur,
ordered=True,
)
describe(diagnostic_kmers_n_per_category_df_no_lemur)
# +
fig, axes = plt.subplots(ncols=5, nrows=2, figsize=(12, 3), sharex=True, sharey=True)
for (celltype, df), ax in zip(
diagnostic_kmers_n_per_category_df_no_lemur.groupby(celltype_col), axes.flat
):
# legend = ax.is_last_col() and ax.is_last_row()
# One liner to create a stacked bar chart.
sns.histplot(
df,
y="species",
hue="kmer_category",
weights="percent_kmers",
multiple="stack",
palette="mako",
linewidth=1,
edgecolor="white",
legend=False,
ax=ax,
)
ax.set(
ylabel="species",
xlabel='Percentage',
title=celltype.replace("and", "\nand"),
# yticks=species_order,
)
# Fix the legend so it's not on top of the bars.
# legend = ax.get_legend()
# legend.set_bbox_to_anchor((1, 1))
sns.despine()
fig.tight_layout()
pdf = os.path.join(figure_folder, 'stacked_barplot__col-celtype__y-species__hue-kmer_category.pdf')
fig.savefig(pdf)
# -
figure_folder
# +
fig, axes = plt.subplots(ncols=2, nrows=5, figsize=(4.5, 6), sharex=True, sharey=True)
for (celltype, df), ax in zip(
diagnostic_kmers_n_per_category_df_no_lemur.groupby(celltype_col), axes.flat
):
# legend = ax.is_last_col() and ax.is_last_row()
# One liner to create a stacked bar chart.
sns.histplot(
df,
y="species",
hue="kmer_category",
weights="percent_kmers",
multiple="stack",
palette="mako",
linewidth=1,
edgecolor="white",
legend=False,
ax=ax,
)
ax.set(
ylabel="species",
xlabel='Percentage',
title=celltype.replace("and", "\nand"),
)
# Fix the legend so it's not on top of the bars.
# legend = ax.get_legend()
# legend.set_bbox_to_anchor((1, 1))
sns.despine()
fig.tight_layout()
pdf = os.path.join(figure_folder, 'stacked_barplot__col-celtype__y-species__hue-kmer_category__two_column.pdf')
fig.savefig(pdf)
# +
fig, axes = plt.subplots(ncols=3, figsize=(12, 4), sharex=True, sharey=True)
for (species, df), ax in zip(
diagnostic_kmers_n_per_category_df_no_lemur.groupby("species"), axes.flat
):
legend = ax.is_last_col()
# One liner to create a stacked bar chart.
sns.histplot(
df,
y=celltype_col,
hue="kmer_category",
weights="percent_kmers",
multiple="stack",
palette="mako",
linewidth=1,
edgecolor="white",
legend=legend,
ax=ax,
)
ax.set(
ylabel="percentage",
title=species,
# yticks=["bat", "human"],
)
# Fix the legend so it's not on top of the bars.
legend = ax.get_legend()
legend.set_bbox_to_anchor((1, 1))
sns.despine()
fig.tight_layout()
pdf = os.path.join(figure_folder, 'stacked_barplot__col-species__y-celltype__hue-kmer_category.pdf')
fig.savefig(pdf)
# -
# ## Get number of k-mers in different categories per `predicted_compartment`
# +
# %%time
celltype_col = "predicted_compartment"
compartment_kmers_n_per_category = hash2kmer_with_predictions.groupby(
["species", celltype_col, "kmer_category"]
).hashval.nunique()
compartment_kmers_n_per_category.name = "percent_kmers"
compartment_kmers_n_per_category
# -
compartment_kmers_n_per_category_df = compartment_kmers_n_per_category.groupby(level=[0, 1]).apply(lambda x: 100*x/x.sum()).reset_index()
compartment_kmers_n_per_category_df
compartment_kmers_n_per_category_df.query(
'(species == "mouse") and (kmer_category == "Not in reference genome")'
).sort_values('percent_kmers')
compartment_kmers_n_per_category_df.query(
'(species == "lemur") and (kmer_category == "Not in reference genome")'
).sort_values('percent_kmers')
# # Plot percentage of kmers in ref genome, etc
figure_folder = os.path.join(FIGURE_FOLDER, 'kmer_gene_orthology')  # reuse the constant imported above instead of a hardcoded absolute path
# ! mkdir $figure_folder
species_order = 'human', 'lemur', 'mouse', 'bat'
compartment_kmers_n_per_category_df['species'] = pd.Categorical(compartment_kmers_n_per_category_df ['species'], categories=species_order, ordered=True)
g = sns.catplot(
data=compartment_kmers_n_per_category_df,
col=celltype_col,
order=species_order,
col_wrap=5,
y="species",
x="percent_kmers",
hue="kmer_category",
hue_order=kmer_categories,
palette='mako',
kind='bar',
height=2,
linewidth=1,
edgecolor='white'
# legend=True,
)
g.set_titles('{col_name}')
for ax in g.axes.flat:
title = ax.get_title()
if title == 'Smooth Muscle and Myofibroblast':
ax.set_title('Smooth Muscle\nand Myofibroblast', fontsize=10, pad=-20)
pdf = os.path.join(figure_folder, f'unstacked_barplot__col-celltype__y-species__hue-kmer_category__{celltype_col}.pdf')
g.savefig(pdf)
g = sns.catplot(
data=compartment_kmers_n_per_category_df,
hue=celltype_col,
# col_wrap=2,
y="species",
order=species_order,
x="percent_kmers",
col="kmer_category",
col_order=kmer_categories,
palette='tab10',
kind='bar',
height=2.5,
sharex=True,
linewidth=.5,
edgecolor='white'
# legend=True,
)
g.set_titles('{col_name}')
for ax in g.axes.flat:
if ax.is_last_row():
title = ax.get_title()
title = title.replace(',', ',\n')
ax.set_title(title, fontsize=10, pad=-20)
# if ax.is_first_col():
# ax.set(xscale='log')
pdf = os.path.join(figure_folder, f'unstacked_barplot__col-kmer_category__y-species__hue-celltype__{celltype_col}.pdf')
g.savefig(pdf)
species_order_no_lemur = ['human','mouse', 'bat']
compartment_kmers_n_per_category_df_no_lemur = compartment_kmers_n_per_category_df.query(
"species in @species_order_no_lemur"
)
compartment_kmers_n_per_category_df_no_lemur.species = pd.Categorical(
compartment_kmers_n_per_category_df_no_lemur.species,
categories=species_order_no_lemur,
ordered=True,
)
describe(compartment_kmers_n_per_category_df_no_lemur)
compartment_kmers_n_per_category_df_no_lemur.query('kmer_category == "Not in reference genome"')
# +
fig, axes = plt.subplots(ncols=1, nrows=5, figsize=(4, 6), sharex=True, sharey=True)
for (celltype, df), ax in zip(
compartment_kmers_n_per_category_df_no_lemur.groupby(celltype_col), axes.flat
):
# legend = ax.is_last_col() and ax.is_last_row()
# One liner to create a stacked bar chart.
sns.histplot(
df,
y="species",
hue="kmer_category",
weights="percent_kmers",
multiple="stack",
palette="mako",
linewidth=1,
edgecolor="white",
legend=False,
ax=ax,
)
ax.set(
ylabel="species",
xlabel='Percentage',
title=celltype.replace("and", "\nand"),
)
# Fix the legend so it's not on top of the bars.
# legend = ax.get_legend()
# legend.set_bbox_to_anchor((1, 1))
sns.despine()
fig.tight_layout()
pdf = os.path.join(figure_folder, f'stacked_barplot__col-celtype__y-species__hue-kmer_category__two_column__{celltype_col}.pdf')
fig.savefig(pdf)
# -
pdf
| notebooks/figure_3G--02_count_kmer_orthology_per_celltype_per_species.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: 'Python 3.6.9 64-bit (''learn-env'': conda)'
# metadata:
# interpreter:
# hash: 7981d81b3924aa8f8737ad1f515d2c1560025393c5e8fb9a51b730ca6a994a8e
# name: 'Python 3.6.9 64-bit (''learn-env'': conda)'
# ---
# +
# little note about tuple:
# what makes a tuple is not the (), but the ,
a = (1, 2, 3)
print(type(a))
b = 1, 2, 3
print(type(b))
# -
# sometimes we see parallel assignment like below
# it is actually unpacking
x, y, z = 1, 2, 3
print(x, y, z)
# +
# sometimes we have a list and we want to unpack the first few elements,
# without knowing how many more are left in the list
a = [1, 2, 3, 4, 5, 6, 7]
first, second, *rest = a
print(first)
print(second)
print(rest)
# +
# it's more interesting when we can unpack the first few and the last elements, leaving the rest (the middle)
a = [1, 2, 3, 4, 5, 6, 7]
first, second, *rest, last = a
print(first)
print(second)
print(rest)
print(last)
# +
# it's even more interesting when we use the * operator on the right-hand side of the assignment
a = [1, 2, 3]
b = ['a', 'b', 'c', 'd']
c = [*a, *b]
print(a)
print(b)
print(c)
# +
# be very careful with * unpacking on the left-hand side of UNORDERED types like set or dict
# because these types are unordered, what is returned may be unexpected
a = {1, 2, 3, 5, 'a', 'b', 4, 'c'}
first, *rest, last = a
print(first)
print(rest)
print(last)
# +
# BUT, sometimes it can be REALLY useful when we unpack the right hand side of the unordered types
d1 = {'key1': 1, 'key2': 2, 'key3': 3}
d2 = {'key2': 4, 'key4': 6, 'key5': 8}
# we want to get all the keys from both dict including the duplicates
keys_list = [*d1, *d2]
print(keys_list)
# or, we want to get all Unique keys from both dict
keys_set = {*d1, *d2}
print(keys_set)
# +
# so, now the question is: can we unpack both the keys and values of a dict?
# yes, we can, by using the ** operator. It can be very useful, but be careful with the order of dictionaries:
# if they share a key, the value from the later dict will OVERRIDE the earlier one
d1 = {'key1': 1, 'key2': 2, 'key3': 3}
d2 = {'key2': 4, 'key4': 6, 'key5': 8}
d3 = {'key1': 5, 'key4': 10, 'key6': 15}
d = {**d1, **d2, **d3}
print(d)
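# Since Python 3.9 the same later-dict-wins merge can be written with the dict union operator |:

```python
d1 = {'key1': 1, 'key2': 2, 'key3': 3}
d2 = {'key2': 4, 'key4': 6, 'key5': 8}
d3 = {'key1': 5, 'key4': 10, 'key6': 15}

# later operands win on shared keys, exactly like {**d1, **d2, **d3}
merged = d1 | d2 | d3
print(merged)
```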
# +
# NOTE that we can NOT use the ** operator on the left-hand side
# -
# what if we have a nested list? Yes, we can unpack that too
a = [1, 2, 3, ['a', 'b', 'c'], [4, 5]]
# note: the right-hand side is evaluated first, so reusing the name a as a target is safe here
*x, (a, b, c), (y, z) = a
print(x)
print(a, b, c)
print(y, z)
| function-parameter/unpacking.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# # Introduction to the Monte Carlo method
#
# ----
#
# Start by defining the [Gibbs (or Boltzmann) distribution](https://en.wikipedia.org/wiki/Boltzmann_distribution):
# $$P(\alpha) = e^{-E(\alpha)/kT}$$
# this expression, defines the probability of observing a particular configuration of spins, $\alpha$.
# As you can see, the probability of $\alpha$ decays exponentially with increasing energy of $\alpha$, $E(\alpha)$,
# where $k$ is the Boltzmann constant, $k = 1.38064852 \times 10^{-23} J/K$
# and $T$ is the temperature in Kelvin.
#
# ## What defines the energy of a configuration of spins?
# Given a configuration of spins (e.g., $\uparrow\downarrow\downarrow\uparrow\downarrow$) we can define the energy using what is referred to as an Ising Hamiltonian:
# $$ \hat{H}' = \frac{\hat{H}}{k} = -\frac{J}{k}\sum_{<ij>} s_is_j,$$
# where, $s_i=1$ if the $i^{th}$ spin is `up` and $s_i=-1$ if it is `down`, and the brackets $<ij>$ indicate a sum over spins that are connected,
# and $J$ is a constant that determines the energy scale.
# The energy here has been divided by the Boltzmann constant to yield units of temperature.
# Let's consider the following case, which has the sites connected in a single 1D line:
# $$\alpha = \uparrow-\downarrow-\downarrow-\uparrow-\downarrow.$$
# What is the energy of such a configuration?
# $$ E(\alpha)' = \frac{E(\alpha)}{k} = -\frac{J}{k}(-1 + 1 - 1 - 1) = 2J/k$$
#
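# The hand computation above can be checked in a few lines; a minimal sketch for an open 1-D chain, taking $J/k = 1$:

```python
def ising_energy(spins, J_over_k=1.0):
    # E/k = -(J/k) * sum of products of neighboring spins (open chain)
    return -J_over_k * sum(s1 * s2 for s1, s2 in zip(spins, spins[1:]))

config = [1, -1, -1, 1, -1]  # up down down up down
print(ising_energy(config))
```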
# ## P1: Write a class that defines a spin configuration
class spinConfiguration:
def __init__(self, binaryConfiguration, numElements):
# number of elements in list of spins
self.numElements = numElements
self.spins = self.getSpins(binaryConfiguration)
    def getSpins(self, binaryConfiguration):
        # convert the integer's binary representation into a list of +/-1 spins
        # (assumes 0 <= binaryConfiguration < 2**numElements)
        bitString = bin(binaryConfiguration)[2:].zfill(self.numElements)
        return [1 if bit == '1' else -1 for bit in bitString]
def getNumElements(self):
return self.numElements
def calculateMagnetism(self):
magnetism = 0
for i in range(self.numElements):
magnetism += self.spins[i]
return magnetism;
# ## P2: Write a class that defines the 1D hamiltonian, containing a function that computes the energy of a configuration
# +
class SingleDimensionHamiltionian:
# k constant, arbitrarily sent to 1
# constructor
def __init__(self, J, mu, spinConfiguration):
self.k = 1;
self.J = J
self.mu = mu
self.spinConfiguration = spinConfiguration
self.Hamiltonian = self.calculateHamiltonian()
def calculateHamiltonian(self):
factor = -1 * self.J / self.k
spinSums = 0
# nested loops calculate alignment factor for each pair of elements
for i in range(0, self.spinConfiguration.getNumElements()):
for j in range(0, self.spinConfiguration.getNumElements()):
if(i != j):
spinSums += self.spinConfiguration.spins[i] * self.spinConfiguration.spins[j]
return factor * spinSums
def calculateEnergy(self):
sumOfProductsOfSpins = 0
sumOfSpins = 0
for i in range(self.spinConfiguration.numElements):
sumOfSpins += self.spinConfiguration.spins[i]
for j in range(1, self.spinConfiguration.numElements):
if(i + 1 == j):
sumOfProductsOfSpins += self.spinConfiguration.spins[i] * self.spinConfiguration.spins[j]
sumOfProductsOfSpins += self.spinConfiguration.spins[0] * self.spinConfiguration.spins[self.spinConfiguration.numElements - 1]
firstComponent = -1 * self.J * sumOfProductsOfSpins
secondComponent = self.mu * sumOfSpins
instanceEnergy = firstComponent + secondComponent
return instanceEnergy
# -
# ## Ising 2:
# generates all 2**n spin configurations for an n-site lattice
def generateSpinConfigurations(n):
#generate spin configurations
    # collect each integer configuration as a spinConfiguration object
    spinConfigurations = []
for binaryConfiguration in range(2 ** n):
spinConfigurations.append(spinConfiguration(binaryConfiguration, n))
return spinConfigurations
# +
import math
# calculates averageEnergy, averageMagnetism, HeatCapacity, and Magnetic Susceptibility
def calculateValues(temperature, J, mu, latticeLength):
#set k constant
k = 1
#instantiate probability variable
totalProbability = 0
#instantiate lists for Energies and Probabilities
energies = []
energiesSquared = []
magnetisms = []
magnetismsSquared = []
probabilities = []
#calculate spinConfiguration
spinConfigurations = generateSpinConfigurations(latticeLength)
#calculate Energies and Magnetisms for each spin configuration
for i in range(len(spinConfigurations)):
# define spinConfiguration and Hamiltonian for given instance
instanceSpinConfiguration = spinConfigurations[i];
instanceHamiltonian = SingleDimensionHamiltionian(J, mu, instanceSpinConfiguration)
# add instance values to energies, magnetisms, and probabilities lists
instanceEnergy = instanceHamiltonian.calculateEnergy()
energies.append(instanceEnergy)
instanceMagnetism = instanceSpinConfiguration.calculateMagnetism();
magnetisms.append(instanceMagnetism)
instanceProbability = math.e ** ((-1 * instanceEnergy) / temperature)
probabilities.append(instanceProbability)
# add instance probability to totalProbability
# print("instanceProbability {}: {}".format(i, instanceProbability))
totalProbability += instanceProbability
# print("totalprobability: {}".format(totalProbability))
# calculate Average Energy and Magnetism and EnergySquared and MagnetismSquared
averageEnergyNumerator = 0
averageEnergySquaredNumerator = 0
averageMagnetismNumerator = 0
averageMagnetismSquaredNumerator = 0
# print(energies)
# print(probabilities)
for i in range(len(energies)):
# print("instanceEnergy {}: {}".format(i, energies[i]))
# print("instanceMagnetism {}: {}".format(i, magnetisms[i]))
averageEnergyNumerator += energies[i] * probabilities[i]
averageEnergySquaredNumerator += energies[i] * energies[i] * probabilities[i]
averageMagnetismNumerator += magnetisms[i] * probabilities[i]
averageMagnetismSquaredNumerator += magnetisms[i] * magnetisms[i] * probabilities[i]
# print("averageEnergyNumerator: {}".format(averageEnergyNumerator))
averageEnergy = averageEnergyNumerator / totalProbability
averageEnergySquared = averageEnergySquaredNumerator / totalProbability
averageMagnetism = averageMagnetismNumerator / totalProbability
averageMagnetismSquared = averageMagnetismSquaredNumerator / totalProbability
# return results as a dictionary
return {"averageEnergy": averageEnergy,
"averageEnergySquared": averageEnergySquared,
"averageMagnetism": averageMagnetism,
"averageMagnetismSquared": averageMagnetismSquared,
"temperature": temperature
}
# +
# function to calculate heat capacity
def calculateHeatCapacity(averageEnergySquared, averageEnergy, temperature):
k = 1
# (<EE> - <E><E> ) / (kTT)
return (averageEnergySquared - (averageEnergy ** 2)) / (k * (temperature ** 2))
# function to calculate magnetic susceptibility
def calculateMagneticSusceptibilty(averageMagnetismSquared, averageMagnetism, temperature):
k = 1
# (<MM> - <M><M> ) / (kT)
return (averageMagnetismSquared - (averageMagnetism ** 2)) / (k * temperature)
# +
# calculate values for each category to be plotted
import matplotlib.pyplot as plt
temperatureValues = []
averageEnergyValues = []
averageMagnetismValues = []
heatCapacityValues = []
magneticSusceptibilityValues = []
for t in range(1,100):
currentTemp = t / 10
temperatureValues.append(currentTemp)
# call calculateValues
values = calculateValues(currentTemp, -2, 1.1, 8)
# add averageEnergy and averageMagnetism values to lists
averageEnergyValues.append(values["averageEnergy"])
averageMagnetismValues.append(values["averageMagnetism"])
# add heatCapacity value to list
heatCapacityValues.append(calculateHeatCapacity(values["averageEnergySquared"], values["averageEnergy"], values["temperature"]))
# add magnetic susceptibility value to list
magneticSusceptibilityValues.append(calculateMagneticSusceptibilty(values["averageMagnetismSquared"], values["averageMagnetism"], values["temperature"]))
# -
# plot average energy values
plt.plot(temperatureValues, averageEnergyValues, label="<E>")
plt.plot(temperatureValues, averageMagnetismValues, label="<M>")
plt.plot(temperatureValues, heatCapacityValues, label="Heat Capacity")
plt.plot(temperatureValues, magneticSusceptibilityValues, label="Magnetic Susceptibility")
plt.legend()
plt.show()
# ## Q3: What is the energy for (++-+---+--+)?
# +
conf = spinConfiguration(1673, 11)
print(conf.spins)
# Define my hamiltonian values
ham = SingleDimensionHamiltionian(-2, 1.1, conf)
print("Energy: {}".format(ham.calculateEnergy()))
print("Magnetism: {}".format(conf.calculateMagnetism()))
# Compute the average values for Temperature = 1
# values = calculateValues(1, -2, 1.1, 2)
# # E, M, HC, MS, T =
# E = values["averageEnergy"]
# M = values["averageMagnetism"]
# # print(" E = %12.8f" %E)
# print(E)
# print(M)
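# A quick sanity check (not from the original notebook) of why the index 1673
# encodes (++-+---+--+): judging from the spinConfiguration tests below, the index
# is read as a binary number, most-significant bit first, with bit 1 meaning spin up.

```python
def spins_to_index(pattern):
    """Map a '+'/'-' spin string to its configuration index (MSB first)."""
    return int(pattern.replace('+', '1').replace('-', '0'), 2)

print(spins_to_index('++-+---+--+'))  # 1673
```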
# +
# Define a new configuration instance for a 2-site lattice
conf = spinConfiguration(2, 2)
# Define my hamiltonian values
ham = SingleDimensionHamiltionian(-2, 1.1, conf)
# Compute the average values for Temperature = 1
values = calculateValues(1, -2, 1.1, 2)
# E, M, HC, MS, T =
E = values["averageEnergy"]
M = values["averageMagnetism"]
# print(" E = %12.8f" %E)
print(E)
print(M)
# print(" M = %12.8f" %M)
# print(" HC = %12.8f" %HC)
# print(" MS = %12.8f" %MS)
# -
# ## Properties
# For any fixed state, $\alpha$, the `magnetization` ($M$) is the _excess_ number of spins pointing up versus down, while the energy is given by the
# Hamiltonian:
# $$M(\alpha) = N_{\text{up}}(\alpha) - N_{\text{down}}(\alpha).$$
# As a dynamical, fluctuating system, each time you measure the magnetization, the system might be in a different state ($\alpha$) and so you'll get a different number!
# However, we already know what the probability of measuring any particular $\alpha$ is, so in order to compute the average magnetization, $\left<M\right>$, we just need to multiply the magnetization of each possible configuration times the probability of it being measured, and then add them all up!
# $$ \left<M\right> = \sum_\alpha M(\alpha)P(\alpha).$$
# In fact, any average value can be obtained by adding up the value of an individual configuration multiplied by its probability:
# $$ \left<E\right> = \sum_\alpha E(\alpha)P(\alpha).$$
#
# This means that to obtain any average value (also known as an `expectation value`) computationally, we must compute both the value and the probability of every possible configuration. This becomes extremely expensive as the number of spins ($N$) increases.
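# As an illustration (added here, not part of the original notebook), either average
# can be computed for any small system by Boltzmann-weighting each configuration.
# The helper below assumes plain lists of per-configuration values and energies,
# with $k = 1$ as elsewhere in this notebook.

```python
import math

def boltzmann_average(values, energies, temperature, k=1.0):
    """<V> = sum_a V(a) P(a), with P(a) = exp(-E(a)/kT) / Z."""
    weights = [math.exp(-E / (k * temperature)) for E in energies]
    Z = sum(weights)  # the partition function normalizes the probabilities
    return sum(v * w for v, w in zip(values, weights)) / Z

# With equal energies every state is equally likely, so <M> averages to zero here:
print(boltzmann_average([1, -1], [0.0, 0.0], temperature=1.0))  # 0.0
```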
# ## P3: Write a function that computes the magnetization of a spin configuration
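# A minimal sketch of such a function (hypothetical name; assumes spins are stored as
# +1/-1 values, as in spinConfiguration): since $M = N_{\text{up}} - N_{\text{down}}$,
# summing the +1/-1 entries gives the magnetization directly.

```python
def calculateMagnetization(spins):
    """Return M = N_up - N_down for a spin list coded as +1/-1."""
    return sum(spins)

print(calculateMagnetization([1, 1, -1, 1]))  # 2
```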
# ## Q2: How many configurations are possible for:
# (a) N=10?
# (b) N=100?
# (c) N=1000?
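# Each of the $N$ spins is independently up or down, so there are $2^N$ configurations.
# A quick computation (added for illustration):

```python
# Python integers are arbitrary precision, so even 2**1000 is exact.
for n in (10, 100, 1000):
    print(f"N={n}: {2**n:.3e} configurations")
```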
# ## Testing Functions
def testSpinConfiguration():
assert(spinConfiguration(0, 2).spins == [-1, -1])
assert(spinConfiguration(2, 2).spins == [1, -1])
assert(spinConfiguration(1, 4).spins == [-1, -1, -1, 1])
assert(spinConfiguration(15, 4).spins == [1, 1, 1, 1])
testSpinConfiguration()
def testSingleDimensionHamiltonian():
    # Placeholder: a bare assert() always fails because it asserts the empty tuple.
    # TODO: add real assertions, e.g. compare calculateEnergy() with a hand-computed value.
    pass
testSingleDimensionHamiltonian()
| notebooks/Ising.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # <font color='blue'>Data Science Academy - Python Fundamentals - Chapter 8</font>
#
# ## Download: http://github.com/dsacademybr
# Python language version
from platform import python_version
print('Python version used in this Jupyter notebook:', python_version())
# ### Bokeh
# ### If Bokeh is not installed, run in a prompt or terminal: pip install bokeh
# Importing the Bokeh module
import bokeh
from bokeh.io import show, output_notebook
from bokeh.plotting import figure, output_file
from bokeh.models import ColumnDataSource
from bokeh.transform import factor_cmap
from bokeh.palettes import Spectral6
# Loading Bokeh
output_notebook()
# File generated by the visualization
output_file("Bokeh-Grafico-Interativo.html")
p = figure()
type(p)
p.line([1, 2, 3, 4, 5], [6, 7, 2, 4, 5], line_width = 2)
show(p)
# ## Bar Chart
# +
# Creating a new chart
output_file("Bokeh-Grafico-Barras.html")
fruits = ['Apples', 'Pears', 'Tangerines', 'Grapes', 'Watermelons', 'Strawberries']
counts = [5, 3, 4, 2, 4, 6]
source = ColumnDataSource(data=dict(fruits=fruits, counts=counts))
p = figure(x_range=fruits, plot_height=350, toolbar_location=None, title="Fruit Count")
p.vbar(x='fruits',
top='counts',
width=0.9,
source=source,
legend_label="fruits",
line_color='white',
fill_color=factor_cmap('fruits', palette=Spectral6, factors=fruits))
p.xgrid.grid_line_color = None
p.y_range.start = 0
p.y_range.end = 9
p.legend.orientation = "horizontal"
p.legend.location = "top_center"
show(p)
# -
# ## ScatterPlot
# +
# Building a scatter plot
from bokeh.plotting import figure, show, output_file
from bokeh.sampledata.iris import flowers
colormap = {'setosa': 'red', 'versicolor': 'green', 'virginica': 'blue'}
colors = [colormap[x] for x in flowers['species']]
p = figure(title = "Iris Morphology")
p.xaxis.axis_label = 'Petal Length'
p.yaxis.axis_label = 'Petal Width'
p.circle(flowers["petal_length"], flowers["petal_width"], color=colors, fill_alpha=0.2, size=10)
output_file("Bokeh_grafico_Iris.html", title="iris.py example")
show(p)
# -
# ## Circle Chart
# +
from bokeh.plotting import figure, output_file, show
# Output
output_file("Bokeh-Grafico-Circulos.html")
p = figure(plot_width = 400, plot_height = 400)
# Adding circles to the chart
p.circle([1, 2, 3, 4, 5], [6, 7, 2, 4, 5], size = 20, color = "navy", alpha = 0.5)
# Showing the result
show(p)
# -
# ## Chart with Geospatial Data
# +
# Geojson
from bokeh.io import output_file, show
from bokeh.models import GeoJSONDataSource
from bokeh.plotting import figure
from bokeh.sampledata.sample_geojson import geojson
geo_source = GeoJSONDataSource(geojson=geojson)
p = figure()
p.circle(x = 'x', y = 'y', alpha = 0.9, source = geo_source)
output_file("Bokeh-GeoJSON.html")
show(p)
# -
# Downloading Bokeh's sample data directory
bokeh.sampledata.download()
# +
from bokeh.io import show
from bokeh.models import (ColumnDataSource, HoverTool, LogColorMapper)
from bokeh.palettes import Viridis6 as palette
from bokeh.plotting import figure
from bokeh.sampledata.us_counties import data as counties
from bokeh.sampledata.unemployment import data as unemployment
palette.reverse()
counties = {code: county for code, county in counties.items() if county["state"] == "tx"}
county_xs = [county["lons"] for county in counties.values()]
county_ys = [county["lats"] for county in counties.values()]
county_names = [county['name'] for county in counties.values()]
county_rates = [unemployment[county_id] for county_id in counties]
color_mapper = LogColorMapper(palette=palette)
source = ColumnDataSource(data=dict(
x=county_xs,
y=county_ys,
name=county_names,
rate=county_rates,
))
TOOLS = "pan,wheel_zoom,reset,hover,save"
p = figure(
title="Texas Unemployment, 2009", tools=TOOLS,
x_axis_location=None, y_axis_location=None
)
p.grid.grid_line_color = None
p.patches('x', 'y', source=source,
fill_color={'field': 'rate', 'transform': color_mapper},
fill_alpha=0.7, line_color="white", line_width=0.5)
hover = p.select_one(HoverTool)
hover.point_policy = "follow_mouse"
hover.tooltips = [
("Name", "@name"),
    ("Unemployment rate", "@rate%"),
("(Long, Lat)", "($x, $y)"),
]
show(p)
# -
# Learn about the "Formação Cientista de Dados" (Data Scientist Track), a complete program, 100% online and 100% in Portuguese, with more than 400 hours, more than 1,200 video lessons, and 26 projects that will help you become one of the most sought-after professionals in the data analysis market. Click the link below, sign up, start today, and boost your employability:
#
# https://www.datascienceacademy.com.br/pages/formacao-cientista-de-dados
# # The End
# ### Thank you - Data Science Academy - <a href="http://facebook.com/dsacademybr">facebook.com/dsacademybr</a>
| Data Science Academy/Cap08/Notebooks/DSA-Python-Cap08-06-Bokeh.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # [12 steps toward rock-solid scientific Python code](https://www.davidketcheson.info/2015/05/10/rock_solid_code.html)
#
#
# ## Twelve (baby) steps
#
# 1. [Use version control](https://www.davidketcheson.info/2015/05/11/use_version_control.html)
# 1. [Put your code in the cloud, in the open](https://www.davidketcheson.info/2015/05/12/code_in_the_open.html)
# 1. [Add a README and a License](https://www.davidketcheson.info/2015/05/13/add_a_readme.html)
# 1. [Write docstrings](https://www.davidketcheson.info/2015/05/14/write_docstrings.html)
# 1. [Write tests](https://www.davidketcheson.info/2015/05/15/write_tests.html)
# 1. [Keep track of issues](https://www.davidketcheson.info/2015/05/16/track_issues.html)
# 1. [Automate the tests](https://www.davidketcheson.info/2015/05/29/automate_tests.html)
# 1. Automate the build (coming soon)
# 1. Use continuous integration (coming soon)
# 1. Monitor test coverage (coming soon)
# 1. Write narrative documentation (coming soon)
# 1. Catch errors as you type them (coming soon)
#
# ## [Step 1 - Use version control](https://www.davidketcheson.info/2015/05/11/use_version_control.html)
#
# State-of-the-art is **git**, documented in the [Pro Git book](http://git-scm.com/book/en/v2).
# If you donโt have it, get it here: [http://git-scm.com/downloads](http://git-scm.com/downloads).
# Then take a moment to [set up git](https://help.github.com/articles/set-up-git/#setting-up-git).
#
# ## [Step 2 - Code in the open](https://www.davidketcheson.info/2015/05/12/code_in_the_open.html)
#
# 1. Share your code on [Github](https://github.com/) where you should already have an account.
# 1. Once youโre logged in, click on the โ+โ in the upper-right part of the screen and select โNew repositoryโ.
# 1. Give it the same name as your project, and write a short description.
# 1. Donโt initialize it with a README or license file (weโll do that in Step #3).
# 1. Preferably, select โpublicโ repository.
#
# ## [Step 3 - Add a README and a License](https://www.davidketcheson.info/2015/05/13/add_a_readme.html)
#
# ### You need a README file
#
# This is a minimalist documentation โ just what is absolutely necessary for someone to start using your code.
#
# 1. Go to your project directory and open a new file. Call it README.md. The .md extension stands for Markdown, which is just an embellished format for text files that lets you add text formatting in simple ways that will automatically show up on Github. You can learn more about Markdown here, but for the moment just think of it as a text file.
# 1. Write the contents of the README file. You should probably include:
# * a brief description of what your code does;
# * instructions for installing your code;
# * what other code needs to be installed for it to work;
# * one or two examples of how to invoke your code;
# * optionally: who wrote the code, how to cite it, and who to contact for help. One good example of a README file is [here](https://github.com/github/markup/blob/master/README.md).
# 1. Save and close the file.
# 1. Add it to your repository with git add and git commit.
# 1. Push the file to github with git push.
# 1. Go to the page for your project on Github. You should see the contents of your README file displayed automatically right below the directory listing. It should look something like this.
#
#
# ### You need a License file
#
# The most common licenses for open source scientific software are
#
# * [BSD](https://choosealicense.com/licenses/bsd-2-clause/),
# * [MIT](https://choosealicense.com/licenses/mit/), and
# * [GPL](https://choosealicense.com/licenses/gpl-2.0/).
#
# My suggestion is to use a BSD license, but if you want to investigate in more detail, try [Choose A License](https://choosealicense.com/) or go read [this paper](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3406002/).
#
# Hereโs what to do:
#
# 1. Create a file called LICENSE.txt in your project directory.
# 1. Paste the license text (from one of the links above) into the file, save, and close.
# 1. Commit and push the file to Github.
#
# ## [Step 4 - write docstrings](https://www.davidketcheson.info/2015/05/14/write_docstrings.html)
#
# Steps 1-3 were language-agnostic, but now Iโm going to assume youโre using Python. The Python language has a built-in feature for documenting functions, classes, and modules; it is the docstring. A docstring for a very simple function looks like this:
def square(x):
"""
Takes a number x and returns x*x.
Examples:
>>> square(5)
25
>>> square(2)
4
"""
return x*x
# The docstring is, of course, the part inside the triple quotes. If you type a function name followed by a โ?โ at the Python interpreter (or in a Jupyter notebook):
# +
# square?
# -
# then Python shows you the docstring for the function. If youโve ever tried to get help on a function that had no docstring, you know the dark feeling of despair that attends such a moment. Donโt let your code be that code. Write docstrings!
#
# For a somewhat longer docstring, see my [Gaussian elimination example](https://github.com/ketch/rock-solid-code-demo/blob/master/factor.py).
#
# What should go in a docstring? Obviously, you should describe the arguments to the function and values returned by the function. But usually the most useful part of a docstring is examples. Iโll repeat that, because itโs important:
# ## [Step 5 - write tests](https://www.davidketcheson.info/2015/05/15/write_tests.html)
#
# Fortunately, I donโt have to convince you, because Iโm not going to ask you to write tests in this step. As a matter of fact, I tricked you into writing tests in step 4. Remember those examples you put in your docstring?
#
# Yeah, Iโm sneaky like that.
#
# What is a test? Itโs a bit of code that uses your project code, together with assertions regarding the output. Hereโs how you can use the docstring you wrote as a test:
#
# 1. Go to your project directory and identify the file to which you added one or more docstrings in step 4. Weโll refer to that as my_file.py.
# 1. At the command line (i.e., in a terminal), type `python -m doctest -v my_file.py` (but substitute the name of your file).
# 1. Look at the printed output to see if your test(s) passed.
#
# What just happened? Doctest is a python module that takes all the examples in your docstrings, runs them, and checks whether the output in the docstring matches the actual output. If any of your doctests failed, you should compare the actual output with your docstring and correct things.
#
# I always forget how to invoke doctest, so I put the following code at the bottom of all my `.py` files:
if __name__ == "__main__":
import doctest
doctest.testmod()
# After adding that, I can just run `python my_file.py -v`
# and it will automatically run the doctests. One warning: if you donโt add the -v flag (for verbose) on the command line, then there will be no printed output at all unless some test fails. And if you put -v before your filename, youโll get something totally different.
#
# Doctests are certainly not all there is to testing in Python, but for me they are a minimal-effort approach that makes my code much more reliable. If you add a docstring with a doctest to each function and module in your code, youโll spend a lot less time debugging later on. I bet youโll also find some bugs as you add the doctests.
#
# From now on, just make it a habit to add a docstring and a doctest whenever you write a new function. Your future self will thank you.
# ## [Step 6 - Keep track of issues](https://www.davidketcheson.info/2015/05/16/track_issues.html)
#
#
# ## [Step 7 - Automate the tests](https://www.davidketcheson.info/2015/05/29/automate_tests.html)
#
# See also:
# * [Run your Python Unit Tests with GitHub Actions](https://www.techiediaries.com/python-unit-tests-github-actions/)
# * [Run your Python unit tests via GitHub actions - <NAME> Dev](https://mattsegal.dev/pytest-on-github-actions.html)
# * [Modern Python part 2: write unit tests & enforce Git commit conventions](https://www.adaltas.com/en/2021/06/24/unit-tests-conventional-commits/)
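# When wiring tests into CI, a conventional pytest layout is the usual next step after
# doctests. A minimal sketch (hypothetical file name `test_square.py`; the `square`
# function from Step 4 is repeated here so the example is self-contained):

```python
# test_square.py -- pytest discovers files named test_*.py
# and runs every function whose name starts with test_.

def square(x):
    """Takes a number x and returns x*x."""
    return x * x

def test_square_positive():
    assert square(5) == 25

def test_square_negative():
    assert square(-3) == 9
```

# Running `pytest -q` in the project directory executes both tests, and a CI workflow
# only needs to invoke that same command.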
#
| Rock-Solid Scientific Python.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# To run this example, move this file to the main directory of this repository
from citylearn import CityLearn
import matplotlib.pyplot as plt
from pathlib import Path
import numpy as np
from agents.rbc import RBC
# +
# Select the climate zone and load environment
climate_zone = 5
sim_period = (0, 8760*4-1)
params = {'data_path':Path("data/Climate_Zone_"+str(climate_zone)),
'building_attributes':'building_attributes.json',
'weather_file':'weather_data.csv',
'solar_profile':'solar_generation_1kW.csv',
'carbon_intensity':'carbon_intensity.csv',
'building_ids':["Building_"+str(i) for i in [1,2,3,4,5,6,7,8,9]],
'buildings_states_actions':'buildings_state_action_space.json',
'simulation_period': sim_period,
'cost_function': ['ramping','1-load_factor','average_daily_peak','peak_demand','net_electricity_consumption','carbon_emissions'],
'central_agent': False,
'save_memory': False }
env = CityLearn(**params)
observations_spaces, actions_spaces = env.get_state_action_spaces()
# -
# Simulation without energy storage
env.reset()
done = False
while not done:
_, rewards, done, _ = env.step([[0 for _ in range(len(actions_spaces[i].sample()))] for i in range(9)])
cost_no_storage, cost_no_storage_last_yr = env.cost()
env.cost()
interval = range(sim_period[0], sim_period[1])
plt.figure(figsize=(12,8))
plt.plot(env.net_electric_consumption[interval]+env.electric_generation[interval]-env.electric_consumption_cooling_storage[interval]-env.electric_consumption_dhw_storage[interval])
plt.plot(env.net_electric_consumption[interval]-env.electric_consumption_cooling_storage[interval]-env.electric_consumption_dhw_storage[interval])
plt.legend(['Electricity demand without storage or generation (kW)', 'Electricity demand with PV generation and without storage (kW)'])
# +
# RULE-BASED CONTROLLER (RBC) (Stores energy at night and releases it during the day)
# In this example, each building has its own RBC, which tries to flatten a generic building load
# by storing energy at night and using it during the day, which isn't necessarily the best solution
# in order to flatten the total load of the district.
# Select the climate zone and load environment
'''IMPORTANT: Make sure that the buildings_state_action_space.json file contains the hour of day as 3rd true state:
{"Building_1": {
"states": {
"month": true,
"day": true,
"hour": true
Alternatively, modify the line: "hour_day = states[0][2]" of the RBC_Agent class in agent.py
'''
import json
import time
# Instantiating the control agent(s)
agents = RBC(actions_spaces)
# Finding which state
with open('buildings_state_action_space.json') as file:
actions_ = json.load(file)
indx_hour = -1
for obs_name, selected in list(actions_.values())[0]['states'].items():
indx_hour += 1
if obs_name=='hour':
break
assert indx_hour < len(list(actions_.values())[0]['states'].items()) - 1, "Please, select hour as a state for Building_1 to run the RBC"
state = env.reset()
done = False
rewards_list = []
start = time.time()
while not done:
hour_state = np.array([[state[0][indx_hour]]])
action = agents.select_action(hour_state)
next_state, rewards, done, _ = env.step(action)
state = next_state
rewards_list.append(rewards)
cost_rbc = env.cost()
end = time.time()
print(end-start)
# -
# Scratch cells: manually inspecting one environment step with a fixed action of 0.034
1.18602800e-01 + 0.034*2
action[0][0]=0.034
action[0][1]=0.034
action[0][2]=0.034
next_state, rewards, done, _ = env.step(action)
action[0]
state[0]
next_state[0]
cost_rbc
# Plotting electricity consumption breakdown
interval = range(sim_period[0], sim_period[1])
plt.figure(figsize=(16,5))
plt.plot(env.net_electric_consumption_no_pv_no_storage[interval])
plt.plot(env.net_electric_consumption_no_storage[interval])
plt.plot(env.net_electric_consumption[interval], '--')
plt.xlabel('time (hours)')
plt.ylabel('kW')
plt.legend(['Electricity demand without storage or generation (kW)', 'Electricity demand with PV generation and without storage (kW)', 'Electricity demand with PV generation and using RBC for storage (kW)'])
# Plotting 5 days of winter operation of year 1
plt.figure(figsize=(16,5))
interval = range(0,24*5)
plt.plot(env.net_electric_consumption_no_pv_no_storage[interval])
plt.plot(env.net_electric_consumption_no_storage[interval])
plt.plot(env.net_electric_consumption[interval], '--')
plt.xlabel('time (hours)')
plt.ylabel('kW')
plt.legend(['Electricity demand without storage or generation (kW)', 'Electricity demand with PV generation and without storage (kW)', 'Electricity demand with PV generation and using RBC for storage (kW)'])
# Plotting summer operation of year 1
plt.figure(figsize=(16,5))
interval = range(24*30*7,24*30*7 + 24)
plt.plot(env.net_electric_consumption_no_pv_no_storage[interval])
plt.plot(env.net_electric_consumption_no_storage[interval])
plt.plot(env.net_electric_consumption[interval], '--')
plt.xlabel('time (hours)')
plt.ylabel('kW')
plt.legend(['Electricity demand without storage or generation (kW)', 'Electricity demand with PV generation and without storage (kW)', 'Electricity demand with PV generation and using RBC for storage (kW)'])
# Plotting summer operation
interval = range(5000,5000 + 24*10)
plt.figure(figsize=(16,5))
plt.plot(env.net_electric_consumption_no_pv_no_storage[interval])
plt.plot(env.net_electric_consumption_no_storage[interval])
plt.plot(env.net_electric_consumption[interval], '--')
plt.xlabel('time (hours)')
plt.ylabel('kW')
plt.legend(['Electricity demand without storage or generation (kW)', 'Electricity demand with PV generation and without storage (kW)', 'Electricity demand with PV generation and using RBC for storage (kW)'])
# Plot for one building of the total cooling supply, the state of charge, and the actions of the controller during winter
building_number = 'Building_5'
plt.figure(figsize=(12,8))
plt.plot(env.buildings[building_number].cooling_demand_building[3500:3500+24*5])
plt.plot(env.buildings[building_number].cooling_storage_soc[3500:3500+24*5])
plt.plot(env.buildings[building_number].cooling_device_to_building[3500:3500+24*5] + env.buildings[building_number].cooling_device_to_storage[3500:3500+24*5])
plt.xlabel('time (hours)')
plt.ylabel('kW')
plt.legend(['Building Cooling Demand (kWh)','Energy Storage State of Charge - SOC (kWh)', 'Heat Pump Total Cooling Supply (kW)'])
building_number = 'Building_1'
interval = range(0,24*4)
plt.figure(figsize=(12,8))
plt.plot(env.buildings[building_number].cooling_demand_building[interval])
plt.plot(env.buildings[building_number].cooling_storage_to_building[interval] - env.buildings[building_number].cooling_device_to_storage[interval])
plt.plot(env.buildings[building_number].cooling_device.cooling_supply[interval])
plt.plot(env.electric_consumption_cooling[interval])
plt.plot(env.buildings[building_number].cooling_device.cop_cooling[interval],'--')
plt.xlabel('time (hours)')
plt.ylabel('kW')
plt.legend(['Cooling Demand (kWh)','Energy Balance of Chilled Water Tank (kWh)', 'Heat Pump Total Cooling Supply (kWh)', 'Heat Pump Electricity Consumption (kWh)','Heat Pump COP'])
| example_rbc-test.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# **Make sure to include** `import allow_local_imports` on top of every notebook in `notebooks/` dir to be able to use `lib/` modules.
# Include this on top, as the first import
# This must always be imported first. If you are restarting the notebook
# don't forget to run this cell first!
import allow_local_imports
import numpy as np
import matplotlib.pyplot as plt
from numpy.random import default_rng
# ### Minority Game with p = 0.1
# +
from lib.minority_game import MinorityGame
from lib.agents.agent import Agent, StrategyUpdatingAgent
from lib.agents.factory import AgentFactory
from lib.strategies import AlwaysOneStrategy, DefaultStrategy, FiftyFiftyStrategy
from lib.memory import UniformMemoryGenerator
from lib.plots import default_plot
n_agents = 101 # check with David why it does not work if I do MinorityGame.n_agents
tot_omega_01 = []
tot_alpha_01 = []
for M in range (2,10):
times, attendances, mean_A_t, vol_A_t = MinorityGame(
n_agents=101,
factory_dict={
1: AgentFactory(
StrategyUpdatingAgent,
agent_kwargs=dict(
strategy_clss=[DefaultStrategy,DefaultStrategy],
strategy_update_rate=0.1
),
memory_generator=UniformMemoryGenerator(M)
),
}
).simulate_game(max_steps=50000)
# in order to create the graph
omega = np.average(vol_A_t)/n_agents
alpha = 2**M/n_agents
tot_omega_01.append(omega)
tot_alpha_01.append(alpha)
# +
fig, ax = plt.subplots(figsize=(12, 6))
ax.axhline(y=1, color="k", linestyle="--")
ax.plot(tot_alpha_01, tot_omega_01, 'bo')
ax.set_xlabel("Alpha = $2^m/N$")
ax.set_ylabel("Volatility")
plt.title("Simple Minority Game with s=2, N=101, p = 0.1")
plt.yscale('log')
plt.xscale('log')
plt.xlim([0.01,100])
plt.ylim([0.1,100])
plt.show()
# -
# ### Minority Game with p = 0.01
# +
from lib.minority_game import MinorityGame
from lib.agents.agent import Agent, StrategyUpdatingAgent
from lib.agents.factory import AgentFactory
from lib.strategies import AlwaysOneStrategy, DefaultStrategy, FiftyFiftyStrategy
from lib.memory import UniformMemoryGenerator
from lib.plots import default_plot
n_agents = 101 # check with David why it does not work if I do MinorityGame.n_agents
tot_omega_001 = []
tot_alpha_001 = []
for M in range (2,10):
times, attendances, mean_A_t, vol_A_t = MinorityGame(
n_agents=101,
factory_dict={
1: AgentFactory(
StrategyUpdatingAgent,
agent_kwargs=dict(
strategy_clss=[DefaultStrategy,DefaultStrategy],
strategy_update_rate=0.01
),
memory_generator=UniformMemoryGenerator(M)
),
}
).simulate_game(max_steps=50000)
# in order to create the graph
omega = np.average(vol_A_t)/n_agents
alpha = 2**M/n_agents
tot_omega_001.append(omega)
tot_alpha_001.append(alpha)
# +
fig, ax = plt.subplots(figsize=(12, 6))
ax.axhline(y=1, color="k", linestyle="--")
ax.plot(tot_alpha_001, tot_omega_001, 'bo')
ax.set_xlabel("Alpha = $2^m/N$")
ax.set_ylabel("Volatility")
plt.title("Simple Minority Game with s=2, N=101, p = 0.01")
plt.yscale('log')
plt.xscale('log')
plt.xlim([0.01,100])
plt.ylim([0.1,100])
plt.show()
# -
# ### Simple Minority Game (p = 0)
# +
from lib.minority_game import MinorityGame
from lib.agents.agent import Agent, StrategyUpdatingAgent
from lib.agents.factory import AgentFactory
from lib.strategies import AlwaysOneStrategy, DefaultStrategy, FiftyFiftyStrategy
from lib.memory import UniformMemoryGenerator
from lib.plots import default_plot
n_agents = 101 # check with David why it does not work if I do MinorityGame.n_agents
tot_omega_0 = []
tot_alpha_0 = []
for M in range (2,10):
times, attendances, mean_A_t, vol_A_t = MinorityGame(
n_agents=101,
factory_dict={
1: AgentFactory(
Agent,
agent_kwargs=dict(
strategy_clss=[DefaultStrategy,DefaultStrategy]
),
memory_generator=UniformMemoryGenerator(M)
),
}
).simulate_game(max_steps=50000)
# in order to create the graph
omega = np.average(vol_A_t)/n_agents
alpha = 2**M/n_agents
tot_omega_0.append(omega)
tot_alpha_0.append(alpha)
# +
fig, ax = plt.subplots(figsize=(12, 6))
ax.axhline(y=1, color="k", linestyle="--")
ax.plot(tot_alpha_0, tot_omega_0, 'bo')
ax.set_xlabel("Alpha = $2^m/N$")
ax.set_ylabel("Volatility")
plt.title("Simple Minority Game with s=2, N=101")
plt.yscale('log')
plt.xscale('log')
plt.xlim([0.01,100])
plt.ylim([0.1,100])
plt.show()
# -
# ### Minority Game with two group, 80% of agent with p = 0 and 20% with p = 0.1
# +
from lib.minority_game import MinorityGame
from lib.agents.agent import Agent, StrategyUpdatingAgent
from lib.agents.factory import AgentFactory
from lib.strategies import AlwaysOneStrategy, DefaultStrategy, FiftyFiftyStrategy
from lib.memory import UniformMemoryGenerator
from lib.plots import default_plot
n_agents = 101 # check with David why it does not work if I do MinorityGame.n_agents
tot_omega_80_20 = []
tot_alpha_80_20 = []
for M in range (2,10):
times, attendances, mean_A_t, vol_A_t = MinorityGame(
n_agents=101,
factory_dict={
0.8: AgentFactory(
Agent,
agent_kwargs=dict(strategy_clss=[DefaultStrategy, DefaultStrategy]),
memory_generator=UniformMemoryGenerator(M)
),
0.2: AgentFactory(
StrategyUpdatingAgent,
agent_kwargs=dict(
strategy_clss=[DefaultStrategy, DefaultStrategy],
strategy_update_rate=0.1
),
memory_generator=UniformMemoryGenerator(M)
),
}
).simulate_game(max_steps=50000)
# in order to create the graph
omega = np.average(vol_A_t)/n_agents
alpha = 2**M/n_agents
tot_omega_80_20.append(omega)
tot_alpha_80_20.append(alpha)
# +
fig, ax = plt.subplots(figsize=(12, 6))
ax.axhline(y=1, color="k", linestyle="--")
ax.plot(tot_alpha_80_20, tot_omega_80_20, 'bo')
ax.set_xlabel("Alpha = $2^m/N$")
ax.set_ylabel("Volatility")
plt.title("Minority Game with s=2, N=101, 20% of agents with p = 0.1")
plt.yscale('log')
plt.xscale('log')
plt.xlim([0.01,100])
plt.ylim([0.1,100])
plt.show()
# +
# everything in one graph
fig, ax = plt.subplots(figsize=(20, 8))
ax.axhline(y=1, color="k", linestyle="--") # vol = 1 -> randomness
ax.plot(tot_alpha_0, tot_omega_0, '>r')
ax.plot(tot_alpha_01, tot_omega_01, 'bo')
ax.plot(tot_alpha_001, tot_omega_001, 'Dy')
ax.plot(tot_alpha_80_20, tot_omega_80_20, 'sg')
ax.set_xlabel(r"Alpha $\alpha = 2^m/N$", fontsize=15)
ax.set_ylabel(r"Volatility $\sigma^2/N$", fontsize=15)
ax.legend(["randomness",
"p = 0 (simple MG)",
"p = 0.1",
"p = 0.01",
"80% p = 0, 20% p = 0.1"])
#plt.title("Volatilty as a function of alpha (MG with s=2, N=101)")
plt.yscale('log')
plt.xscale('log')
plt.xlim([0.01,100])
plt.ylim([0.1,100])
plt.savefig("out/different_p.png", dpi = 300)
plt.show()
| scenarios/Scenario_2_Inspiration.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import os
import csv
import cv2
import numpy as np
import keras
from scipy import ndimage
from random import shuffle
lines=[]
with open('data/data/driving_log.csv') as csvfile:
reader=csv.reader(csvfile)
i_have_seen_firstline=False
for line in reader:
if i_have_seen_firstline:
lines.append(line)
else:
i_have_seen_firstline = True
#
print(len(lines))
#
import sklearn
from sklearn.model_selection import train_test_split
train_samples, validation_samples = train_test_split(lines, test_size=0.2)
print(len(train_samples))
print(len(validation_samples))
def generator(samples, batch_size=32):
    num_samples = len(samples)
    while True:  # Loop forever so the generator never terminates
        shuffle(samples)
        for offset in range(0, num_samples, batch_size):
            batch_samples = samples[offset:offset+batch_size]
            images = []
            angles = []
            for batch_sample in batch_samples:
                #name = './IMG/'+batch_sample[0].split('/')[-1]
                current_path = 'data/data/IMG/' + batch_sample[0].split('/')[-1]
                current_left_path = 'data/data/IMG/' + batch_sample[1].split('/')[-1]
                current_right_path = 'data/data/IMG/' + batch_sample[2].split('/')[-1]
                #center_image = cv2.imread(current_path)
                center_image = ndimage.imread(current_path)
                left_image = ndimage.imread(current_left_path)
                right_image = ndimage.imread(current_right_path)
                center_angle = float(batch_sample[3])
                correction = 0.003  # this is a parameter to tune; 0.03 was not bad
                left_angle = center_angle + correction
                right_angle = center_angle - correction
                #left_angle = center_angle * 1.15
                #right_angle = center_angle - 1.15
                use_all_cameras = True
                if use_all_cameras:
                    images.extend([center_image, left_image, right_image])
                    angles.extend([center_angle, left_angle, right_angle])
                else:
                    images.append(center_image)
                    angles.append(center_angle)
            augment_by_flipping = True
            if augment_by_flipping:
                augmented_images, augmented_angles = [], []
                for image, angle in zip(images, angles):
                    augmented_images.append(image)
                    augmented_angles.append(angle)
                    #augmented_images.append(cv2.flip(image,1))
                    augmented_images.append(np.fliplr(image))
                    augmented_angles.append(angle * -1.0)
            else:
                augmented_images, augmented_angles = images, angles
            # trim image to only see section with road
            X_train = np.array(augmented_images)
            y_train = np.array(augmented_angles)
            yield sklearn.utils.shuffle(X_train, y_train)
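# The flip-and-negate augmentation used inside the generator can be checked in isolation. A minimal sketch, with a dummy array standing in for a camera frame:

```python
import numpy as np

# A tiny 2x3 "image" whose columns are distinguishable.
image = np.array([[1, 2, 3],
                  [4, 5, 6]])
angle = 0.25

# np.fliplr mirrors the image horizontally; the matching steering angle
# simply changes sign, doubling the training data essentially for free.
flipped = np.fliplr(image)
flipped_angle = angle * -1.0

print(flipped.tolist())   # [[3, 2, 1], [6, 5, 4]]
print(flipped_angle)      # -0.25
```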
#images=[]
#measurements=[]
#for line in lines:
# source_path = line[0]
# filename= source_path.split('/')[-1]
# current_path = 'data/data/IMG/' + filename
# #image=cv2.imread(current_path)
# image = ndimage.imread(current_path)
# images.append(image)
# measurement=float(line[3])
# measurements.append(measurement)
#False
#augment_by_flipping=False
#if augment_by_flipping:
# augmented_images, augmented_measurements = [],[]
# for image,measurement in zip(images, measurements):
# augmented_images.append(image)
# augmented_measurements.append(measurement)
# augmented_images.append(cv2.flip(image,1))
# augmented_measurements.append(measurement*-1.0)
#else:
# None
# augmented_images, augmented_measurements =images,measurements
#X_train = np.array(augmented_images)
#y_train = np.array(augmented_measurements)
# -
#print(X_train.shape)
#print(np.mean(y_train**2* 180/3.14*16)) # convert from rad to deg and then to steering-wheel angle
# +
from keras.models import Sequential
from keras.layers import Flatten, Dense, Lambda, Activation, Dropout
from keras.layers.convolutional import Conv2D, Cropping2D
from keras.layers.pooling import MaxPooling2D
import matplotlib.pyplot as plt
# compile and train the model using the generator function
my_batch_size= 16 #128
train_generator = generator(train_samples, batch_size=my_batch_size)
validation_generator = generator(validation_samples, batch_size=my_batch_size)
ch, row, col = 3, 160, 320 # Trimmed image format
dropout_prob = 0.0  # Keras Dropout takes the fraction of units to DROP; 1.0 would zero every activation, so use e.g. 0.2 to enable dropout
model=Sequential()
#model.add(Lambda(lambda x: x/255.0 -0.5, input_shape=(160,320,3)))
model.add(Lambda(lambda x: x/127.5 - 1., #
input_shape=(row, col,ch))) #,
#output_shape=(row, col, ch)))
cropping = False
if cropping:
    model.add(Cropping2D(cropping=((50, 0), (0, 0)), input_shape=(160, 320, 3)))
#model.add(Flatten())
model.add(Conv2D(6, kernel_size=(5, 5),
activation='relu',
#input_shape=(90, 320, 3),
padding='valid'))
model.add(MaxPooling2D(pool_size=(2, 2)))
#model.add(Dropout(dropout_prob))
model.add(Conv2D(32, kernel_size=(5, 5),
activation='relu', padding='valid'))
model.add(MaxPooling2D(pool_size=(2, 2)))
#model.add(Dropout(dropout_prob))
model.add(Flatten())
model.add(Dense(120))
model.add(Activation('relu'))
model.add(Dropout(dropout_prob))
model.add(Dense(84))
model.add(Activation('relu'))
model.add(Dropout(dropout_prob))
model.add(Dense(1))
model.summary()
# +
###########
print(len(train_samples))
model.compile(loss='mse',optimizer='adam')
#history_object = model.fit(X_train,y_train,validation_split=0.2,shuffle=True, epochs=4, verbose=1)
#history_object = model.fit_generator(train_generator, steps_per_epoch=
# len(train_samples),validation_steps=
#                                    len(train_samples), validation_data=validation_generator, epochs=2, verbose=1)
history_object = model.fit_generator(train_generator,
                                     steps_per_epoch=int(np.ceil(len(train_samples)/my_batch_size)),
                                     epochs=4, verbose=1,
                                     validation_data=validation_generator,
                                     validation_steps=int(np.ceil(len(validation_samples)/my_batch_size)),
                                     use_multiprocessing=True)
# +
# %matplotlib inline
print(history_object.history.keys())
plt.plot(history_object.history['loss'])
plt.plot(history_object.history['val_loss'])
plt.title('model mean squared error loss')
plt.ylabel('mean squared error loss')
plt.xlabel('epoch')
plt.legend(['training set', 'validation set'], loc='upper right')
plt.show()
##############
model.save('model.h5')
# -
keras.__version__
| P4_batched.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # The Perceptron algorithm at work
# In this notebook, we will look in detail at the Perceptron algorithm for learning a linear classifier in the case of binary labels.
# We start by including some standard libraries.
# %matplotlib inline
import numpy as np
import matplotlib
import matplotlib.pyplot as plt
matplotlib.rc('xtick', labelsize=14)
matplotlib.rc('ytick', labelsize=14)
# ## 1. The algorithm
# This first procedure, **evaluate_classifier**, takes as input the parameters of a linear classifier (`w,b`) as well as a data point (`x`) and returns the prediction of that classifier at `x`.
#
# The prediction is:
# * `1` if `w.x+b > 0`
# * `-1` otherwise (a tie at `w.x+b = 0` is treated as `-1`)
def evaluate_classifier(w, b, x):
    if (np.dot(w, x) + b) > 0:
        return 1
    return -1
# Here is the Perceptron training procedure. It is invoked as follows:
# * `w,b,converged = train_perceptron(x,y,n_iters)`
#
# where
# * `x`: n-by-d numpy array with n data points, each d-dimensional
# * `y`: n-dimensional numpy array with the labels (each 1 or -1)
# * `n_iters`: the training procedure will run through the data at most this many times (default: 100)
# * `w,b`: parameters for the final linear classifier
# * `converged`: flag (True/False) indicating whether the algorithm converged within the prescribed number of iterations
#
# If the data is <b>not linearly</b> separable, then the training procedure will not converge.
def train_perceptron(x, y, n_iters=100):
    n, d = x.shape
    w = np.zeros((d,))
    b = 0
    done = False
    converged = True
    iters = 0
    np.random.seed(None)
    while not done:
        done = True
        I = np.random.permutation(n)
        for i in range(n):
            j = I[i]
            if evaluate_classifier(w, b, x[j, :]) != y[j]:
                w = w + y[j] * x[j, :]
                b = b + y[j]
                done = False
        iters = iters + 1
        if iters > n_iters:
            done = True
            converged = False
    if converged:
        print(f"Perceptron algorithm: {iters} iterations until convergence.")
    else:
        print("Perceptron algorithm: did not converge within the specified number of iterations.")
    return w, b, converged
# ## 2. Experiments with the Perceptron
# The directory containing this notebook should also contain the two-dimensional data files, `data_1.txt` and `data_2.txt`. These files contain one data point per line, along with a label, like:
# * `3 8 1` (meaning that point `x=(3,8)` has label `y=1`)
#
# The next procedure, **run_perceptron**, loads one of these data sets, learns a linear classifier using the Perceptron algorithm, and then displays the data as well as the boundary.
def run_perceptron(x, y):
    #n,d = data.shape
    # Create training set x and labels y
    #x = data[:,0:-1]
    #y = data[:,-1]
    # Run the Perceptron algorithm for at most 100 iterations
    w, b, converged = train_perceptron(x, y, 100)
    # Determine the x1- and x2- limits of the plot
    x1min = min(x[:, 0]) - 1
    x1max = max(x[:, 0]) + 1
    x2min = min(x[:, 1]) - 1
    x2max = max(x[:, 1]) + 1
    plt.xlim(x1min, x1max)
    plt.ylim(x2min, x2max)
    # Plot the data points
    plt.plot(x[(y == 1), 0], x[(y == 1), 1], 'ro')
    plt.plot(x[(y == -1), 0], x[(y == -1), 1], 'k^')
    # Construct a grid of points at which to evaluate the classifier
    if converged:
        grid_spacing = 0.05
        xx1, xx2 = np.meshgrid(np.arange(x1min, x1max, grid_spacing),
                               np.arange(x2min, x2max, grid_spacing))
        grid = np.c_[xx1.ravel(), xx2.ravel()]
        Z = np.array([evaluate_classifier(w, b, pt) for pt in grid])
        # Show the classifier's boundary using a color plot
        Z = Z.reshape(xx1.shape)
        plt.pcolormesh(xx1, xx2, Z, cmap=plt.cm.PRGn, vmin=-3, vmax=3)
    plt.show()
# Let's run this on `data_1.txt`. Try running it a few times; you should get slightly different outcomes, because of the randomization in the learning procedure.
dat1 = np.loadtxt('data_1.txt')
print(dat1[:,0:-1].shape)
run_perceptron(dat1[:,0:-1], dat1[:,-1])
# And now, let's try running it on `data_2.txt`. *What's going on here?*
data2 = np.loadtxt('data_2.txt')
run_perceptron(data2[:,0:-1], data2[:,-1])
# ### 3. For you to do
# <font color="magenta">Design a data set</font> with the following specifications:
# * there are just two data points, with labels -1 and 1
# * the two points are distinct, with coordinate values in the range [-1,1]
# * the Perceptron algorithm requires more than 1000 iterations to converge
#x1 = np.random.uniform(-1, 1, 1000).reshape((1, 1000))
#x2 = np.random.uniform(-1, 1, 1000).reshape((1, 1000))
x3 = np.ones((1, 1000))
x4 = np.ones((1, 1000))
x4[0,0] = 0.9
y = np.array([1, -1]).reshape((2, 1))
dataSet = np.vstack((x3, x4))
print(dataSet.shape)
#dataSet = np.hstack((dataSet, y))
w,b,converged = train_perceptron(dataSet,y,100000)
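# One possible (hedged) solution sketch: two distinct points that differ only slightly in one coordinate give the perceptron a tiny margin, and its mistake bound grows like $(R/\gamma)^2$, so convergence takes thousands of updates. The cell below is self-contained, re-implementing the training loop with an explicit update counter; the 0.999 gap is just one choice that works.

```python
import numpy as np

def predict(w, b, x):
    # Mirrors evaluate_classifier above: ties at 0 are treated as -1.
    return 1 if np.dot(w, x) + b > 0 else -1

def train(x, y, max_updates=1_000_000):
    # Perceptron training loop that counts individual weight updates.
    n, d = x.shape
    w, b = np.zeros(d), 0.0
    updates = 0
    while updates < max_updates:
        mistakes = 0
        for j in np.random.permutation(n):
            if predict(w, b, x[j]) != y[j]:
                w = w + y[j] * x[j]
                b = b + y[j]
                mistakes += 1
                updates += 1
        if mistakes == 0:
            return w, b, True, updates
    return w, b, False, updates

# Two distinct points in [-1, 1]^2 separated by a tiny gap in one coordinate.
x = np.array([[1.0, 0.999],
              [1.0, 1.000]])
y = np.array([1, -1])
w, b, converged, updates = train(x, y)
print(converged, updates)
```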
| Week 6/perceptron_at_work/perceptron_at_work.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# ---
# URL: http://bokeh.pydata.org/en/latest/docs/gallery/les_mis.html
import numpy as np
import holoviews as hv
hv.extension('bokeh')
# ## Declare data
# +
from bokeh.sampledata.les_mis import data
nodes = data['nodes']
names = [node['name'] for node in sorted(data['nodes'], key=lambda x: x['group'])]
N = len(nodes)
counts = np.zeros((N, N))
for link in data['links']:
    counts[link['source'], link['target']] = link['value']
    counts[link['target'], link['source']] = link['value']
xname = []
yname = []
color = []
alpha = []
for i, node1 in enumerate(nodes):
    for j, node2 in enumerate(nodes):
        xname.append(node1['name'])
        yname.append(node2['name'])
        alpha.append(counts[i, j])
        if node1['group'] == node2['group']:
            color.append(node1['group'])
        else:
            color.append('lightgrey')
ds = hv.Dataset((xname, yname, color, alpha), kdims=['x', 'y', 'Cluster', 'Occurrences'])
overlaid = ds.to(hv.HeatMap, ['x', 'y'], ['Occurrences']).overlay()
# -
# ## Plot
# +
plot_opts = dict(height=800, width=800, xaxis='top', logz=True, xrotation=90,
fontsize={'ticks': '7pt', 'title': '18pt'}, invert_xaxis=True, tools=['hover'],
labelled=[], clipping_colors={'NaN':(1,1,1,0.)})
cmaps = ['Greys', 'Reds', 'Greys', 'Greens', 'Blues',
'Purples', 'Oranges', 'Greys', 'Greys', 'PuRd', 'Reds', 'Greys']
combined = hv.Overlay([o(style=dict(cmap=cm), plot=plot_opts).sort()
                      for o, cm in zip(overlaid, cmaps)], label='LesMis Occurrences')
combined
# -
hv.Layout([c(plot=dict(width=300, height=300))
for c in combined if len(c)>10][:-1],
label='LesMis Large Clusters').cols(3)
| examples/gallery/demos/bokeh/lesmis_example.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # IBM Cloud Pak for Data - Multi-Cloud Virtualization Hands-on Lab
# ## Introduction
# Welcome to the IBM Cloud Pak for Data Multi-Cloud Virtualization Hands on Lab.
#
# In this lab you analyze data from multiple data sources, from across multiple Clouds, without copying data into a warehouse.
#
# This hands-on lab uses live databases, where data is "virtually" available through the IBM Cloud Pak for Data Virtualization Service. This makes it easy to analyze data from across your multi-cloud enterprise using tools like Jupyter Notebooks, Watson Studio, or your favorite reporting tool, such as Cognos.
# ### Where to find this sample online
# You can find a copy of this notebook on GITHUB at https://github.com/Db2-DTE-POC/CPDDVLAB.
# ### The business problem and the landscape
# The Acme Company needs timely analysis of stock trading data from multiple source systems.
#
# Their data science and development teams need access to:
# * Customer data
# * Account data
# * Trading data
# * Stock history and Symbol data
#
# <img src="https://raw.githubusercontent.com/Db2-DTE-POC/CPDDVLAB/master/media/CPDDVLandscape.png">
#
# The data sources are running on premises and on the cloud. In this example many of the databases are also running on OpenShift but they could be managed, virtual or bare-metal cloud installations. IBM Cloud Pak for Data doesn't care. Enterprise DB (Postgres) is also running in the Cloud. Mongo and Informix are running on premises. Finally, we also have a VSAM file on zOS leveraging the Data Virtualization Manager for zOS.
#
# To simplify access for Data Scientists and Developers, the Acme team wants to make all their data look like it is coming from a single database. They also want to combine data to create simple-to-use tables.
#
# In the past, Acme built a dedicated data warehouse and created ETL (Extract, Transform and Load) jobs to move data from each data source into the warehouse, where it could be combined. Now they can simply virtualize their data without moving it.
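# The shift from ETL to virtualization can be sketched with plain sqlite3 standing in for the remote engines (all names here are illustrative, not the lab's actual sources): one connection attaches several independent database files, and a single query joins them in place instead of copying rows into a warehouse.

```python
import os
import sqlite3
import tempfile

# Two separate "source databases" standing in for the remote systems.
workdir = tempfile.mkdtemp()
sources = {
    "customers": [(1, "Acme Brokerage")],
    "trades":    [(1, 100)],
}
for name, rows in sources.items():
    con = sqlite3.connect(os.path.join(workdir, name + ".db"))
    con.execute("CREATE TABLE %s (id INTEGER, val)" % name)
    con.executemany("INSERT INTO %s VALUES (?, ?)" % name, rows)
    con.commit()
    con.close()

# The "virtualization layer": one connection ATTACHes both sources, so a
# single SQL statement joins them in place -- no ETL copy into a warehouse.
hub = sqlite3.connect(":memory:")
hub.execute("ATTACH DATABASE ? AS c", (os.path.join(workdir, "customers.db"),))
hub.execute("ATTACH DATABASE ? AS t", (os.path.join(workdir, "trades.db"),))
result = hub.execute(
    "SELECT c.customers.val, t.trades.val "
    "FROM c.customers JOIN t.trades ON c.customers.id = t.trades.id"
).fetchall()
print(result)  # [('Acme Brokerage', 100)]
```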
# ### In this lab you learn how to:
#
# * Sign into IBM Cloud Pak for Data using your own Data Engineer and Data Scientist (User) userids
# * Connect to different data sources, on premises and across a multi-vendor Cloud
# * Make remote data from across your multi-vendor enterprise look and act like local tables in a single database
# * Make combining complex data and queries simple even for basic users
# * Capture complex SQL in easy to consume VIEWs that act just like simple tables
# * Ensure that users can securely access even complex data across multiple sources
# * Use roles and privileges to ensure that only the right user may see the right data
# * Make development easy by connecting to your virtualized data using analytic tools and applications from outside of IBM Cloud Pak for Data.
# ## Getting Started
# ### Using Jupyter notebooks
# You are now officially using a Jupyter notebook! If this is your first time using a Jupyter notebook you might want to go through [An Introduction to Jupyter Notebooks](http://localhost:8888/notebooks/An_Introduction_to_Jupyter_Notebooks.ipynb). The introduction shows you some of the basics of using a notebook, including how to create cells, run code, and save files for future use.
#
# Jupyter notebooks are based on IPython which started in development in the 2006/7 timeframe. The existing Python interpreter was limited in functionality and work was started to create a richer development environment. By 2011 the development efforts resulted in IPython being released (http://blog.fperez.org/2012/01/ipython-notebook-historical.html).
#
# Jupyter notebooks were a spinoff (2014) from the original IPython project. IPython continues to be the kernel that Jupyter runs on, but the notebooks are now a project on their own.
#
# Jupyter notebooks run in a browser and communicate to the backend IPython server which renders this content. These notebooks are used extensively by data scientists and anyone wanting to document, plot, and execute their code in an interactive environment. The beauty of Jupyter notebooks is that you document what you do as you go along.
# ### Connecting to IBM Cloud Pak for Data
# For this lab you will be assigned two IBM Cloud Pak for Data user IDs: a Data Engineer userid and an end-user userid. Check with the lab coordinator for the userid and passwords you should use.
# * **Engineer:**
# * ID: LABDATAENGINEERx
# * PASSWORD: <PASSWORD>
# * **User:**
# * ID: LABUSERx
#
# * PASSWORD: <PASSWORD>
#
# To get started, sign in using you Engineer id:
# 1. Right-click the following link and select **open link in new window** to open the IBM Cloud Pak for Data Console: https://services-uscentral.skytap.com:9152/
# 1. Organize your screen so that you can see both this notebook and the IBM Cloud Pak for Data Console at the same time. This will make it much easier to complete the lab without switching back and forth between screens.
# 2. Sign in using your Engineer userid and password
# 3. Click the icon at the very top right of the webpage. It will look something like this:
#
# <img src="https://raw.githubusercontent.com/Db2-DTE-POC/CPDDVLAB/master/media/11.06.10 EngineerUserIcon.png">
#
# 4. Click **Profile and settings**
# 5. Click **Permissions** and review the user permissions for this user
# 6. Click the **three bar menu** at the very top left of the console webpage
#
# <img src="https://raw.githubusercontent.com/Db2-DTE-POC/CPDDVLAB/master/media/2.42.03 Three Bar.png">
#
# 7. Click **Collect** if the Collect menu isn't already open
# 7. Click **Data Virtualization**. The Data Virtualization user interface is displayed
#
# <img src="https://raw.githubusercontent.com/Db2-DTE-POC/CPDDVLAB/master/media/11.06.12 CollectDataVirtualization.png">
#
# 8. Click the caret symbol beside **Menu** below the Data Virtualization title
#
# <img src="https://raw.githubusercontent.com/Db2-DTE-POC/CPDDVLAB/master/media/3.07.47 Menu Carrot.png">
#
# This displays the actions available to your user. Different users have access to more or fewer menu options depending on their role in Data Virtualization.
#
# As a Data Engineer you can:
# * Add and modify Data sources. Each source is a connection to a single database, either inside or outside of IBM Cloud Pak for Data.
# * Virtualize data. This makes tables in other data sources look and act like tables that are local to the Data Virtualization database
# * Work with the data you have virtualized.
# * Write SQL to access and join data that you have virtualized
# * See detailed information on how to connect external analytic tools and applications to your virtualized data
#
# <img src="https://raw.githubusercontent.com/Db2-DTE-POC/CPDDVLAB/master/media/11.12.54 Menu Data sources.png">
#
# As a User you can only:
# * Work with data that has been virtualized for you
# * Write SQL to work with that data
# * See detailed connection information
#
# As an Administrator (only available to the course instructor) you can also:
# * Manage IBM Cloud Pak for Data User Access and Roles
# * Create and Manage Data Caches to accelerate performance
# * Change key service settings
# ## Basic Data Virtualization
# ### Exploring Data Source Connections
# Let's start by looking at the Data Source Connections that are already available.
#
# 1. Click the Data Virtualization menu and select **Data Sources**.
#
# <img src="https://raw.githubusercontent.com/Db2-DTE-POC/CPDDVLAB/master/media/11.12.54 Menu Data sources.png">
#
# 2. Click the **icon below the menu with a circle with three connected dots**.
# <img src="https://raw.githubusercontent.com/Db2-DTE-POC/CPDDVLAB/master/media/11.14.50 Connections Icons Spider.png">
# 3. A spider diagram of the connected data sources opens.
#
# <img src="https://raw.githubusercontent.com/Db2-DTE-POC/CPDDVLAB/master/media/11.15.31 Data Sources Spider.png">
#
# This displays the Data Source Graph with 8 active data sources:
# * 4 Db2 Family Databases hosted on premises, IBM Cloud, Azure and AWS
# * 1 EDB Postgres Database on Azure
# * 1 zOS VSAM file
# * 1 Informix Database running on premises
# * 1 MongoDB Database running on premises
#
# **We are not going to add a new data source** but just go through the steps so you can see how to add additional data sources.
# 1. Click **+ Add** at the right of the console screen
# 2. Select **Add data source** from the menu
# You can see a history of other data source connection information that was used before. This history is maintained to make reconnecting to data sources easier and faster.
# 3. Click **Add connection**
# 4. Click the field below **Connection type**
# 5. Scroll through all the **available data sources** to see the available connection types
# 6. Select **different data connection types** from the list to see the information required to connect to a new data source.
# At a minimum you typically need the host URL and port address, database name, userid and password. You can also connect using an SSL certificate that can be dragged and dropped directly into the console interface.
# 7. Click **Cancel** to return to the previous list of connections to add
# 8. Click **Cancel** again to return to the list of currently connected data sources
# ### Exploring the available data
# Now that you understand how to connect to data sources you can start virtualizing data. Much of the work has already been done for you. IBM Cloud Pak for Data searches through the available data sources and compiles a single large inventory of all the tables and data available to virtualize in IBM Cloud Pak for Data.
#
# 1. Click the Data Virtualization menu and select **Virtualize**
#
# <img src="https://raw.githubusercontent.com/Db2-DTE-POC/CPDDVLAB/master/media/11.13.07 Menu Virtualize.png">
#
# 2. Check the total number of available tables at the top of the list. There should be well over 500 available.
#
# <img src="https://raw.githubusercontent.com/Db2-DTE-POC/CPDDVLAB/master/media/11.15.50 Available Tables.png">
#
# 3. Enter "STOCK" into the search field and hit **Enter**. Any table with the string
# **STOCK** in its name, its schema, or one of its column names appears in the search results.
#
# <img src="https://raw.githubusercontent.com/Db2-DTE-POC/CPDDVLAB/master/media/11.39.43 Find STOCK.png">
#
# 4. Hover your mouse pointer over the far right side of the search results table. An **eye** icon will appear on each row as you move your mouse.
# 5. Click the **eye** icon beside one table. This displays a preview of the data in the selected table.
#
# <img src="https://raw.githubusercontent.com/Db2-DTE-POC/CPDDVLAB/master/media/3.26.54 Eye.png">
#
# 6. Click **X** at the top right of the dialog box to return to the search results.
# ### Creating New Tables
# So that each user in this lab can have their own data to virtualize, you will create your own table in a remote database.
#
# In this part of the lab you will use this Jupyter notebook and Python code to connect to a source database, create a simple table and populate it with data.
#
# IBM Cloud Pak for Data will automatically detect the change in the source database and make the new table available for virtualization.
#
# In this example, you connect to the Db2 Warehouse database running in IBM Cloud Pak for Data but the database can be anywhere. All you need is the connection information and authorized credentials.
#
# <img src="https://raw.githubusercontent.com/Db2-DTE-POC/CPDDVLAB/master/media/Db2CPDDatabase.png">
# The first step is to connect to one of our remote data sources directly, as if we were part of the team building a new business application. Since each lab user will create their own table in their own schema, the first thing you need to do is update and run the cell below with your engineer name.
# 1. In this Jupyter notebook, click on the cell below
# 2. Update the lab number in the cell below to your assigned user and lab number
# 3. Click **Run** from the Jupyter notebook menu above
# Setting your userID
labnumber = 0
engineer = 'DATAENGINEER' + str(labnumber)
print('variable engineer set to = ' + str(engineer))
# The next part of the lab relies on a Jupyter notebook extension, commonly referred to as a "magic" command, to connect to a Db2 database. To use the commands you load the extension by running another notebook, called db2, that contains all the required code
# <pre>
# %run db2.ipynb
# </pre>
# The cell below loads the Db2 extension directly from GITHUB. Note that it will take a few seconds for the extension to load, so you should generally wait until the "Db2 Extensions Loaded" message is displayed in your notebook.
# 1. Click the cell below
# 2. Click **Run**. When the cell is finished running, In[*] will change to In[2]
# +
# # !wget https://raw.githubusercontent.com/IBM/db2-jupyter/master/db2.ipynb
# !wget -O db2.ipynb https://raw.githubusercontent.com/Db2-DTE-POC/CPDDVLAB/master/db2.ipynb
# %run db2.ipynb
print('db2.ipynb loaded')
# -
# #### Connecting to Db2
#
# Before any SQL commands can be issued, a connection needs to be made to the Db2 database that you will be using.
#
# The Db2 magic command tracks whether or not a connection has occurred in the past and saves this information between notebooks and sessions. When you start up a notebook and issue a command, the program will reconnect to the database using your credentials from the last session. In the event that you have not connected before, the system will prompt you for all the information it needs to connect. This information includes:
#
# - Database name
# - Hostname
# - PORT
# - Userid
# - Password
#
# Run the next cell.
# #### Connecting to Db2
# +
# Connect to the Db2 Warehouse on IBM Cloud Pak for Data Database from inside of IBM Cloud Pak for Data
database = 'bludb'
user = 'user999'
password = '<PASSWORD>'
host = 'openshift-skytap-nfs-woker-5.ibm.com'
port = '31928'
# %sql CONNECT TO {database} USER {user} USING {password} HOST {host} PORT {port}
# +
# Connect to the Db2 Warehouse on IBM Cloud Pak for Data Database from outside of IBM Cloud Pak for Data
database = 'bludb'
user = 'user999'
password = '<PASSWORD>'
host = 'services-uscentral.skytap.com'
port = '9094'
# %sql CONNECT TO {database} USER {user} USING {password} HOST {host} PORT {port}
# -
# To check that the connection is working, run the following cell. It lists the first 5 tables in the **DVDEMO** schema.
# %sql select TABNAME, OWNER from syscat.tables where TABSCHEMA = 'DVDEMO' FETCH FIRST 5 ROWS ONLY
# Now that you can successfully connect to the database, you are going to create two tables with the same name and columns across two different schemas. In the following steps of the lab you are going to virtualize these tables in IBM Cloud Pak for Data and fold them together into a single table.
#
# The next cell sets the default schema to your engineer name followed by 'A'. Notice how you can set a python variable and substitute it into the SQL Statement in the cell. The **-e** option echos the command.
#
# Run the next cell.
# +
schema_name = engineer+'A'
table_name = 'DISCOVER_'+str(labnumber)
print("")
print("Lab #: "+str(labnumber))
print("Schema name: " + str(schema_name))
print("Table name: " + str(table_name))
# %sql -e SET CURRENT SCHEMA {schema_name}
# -
# Run the next cell to create a table with a single INTEGER column containing values from 1 to 10. The **-q** flag on the %sql command suppresses the warning from the DROP TABLE statement if the table does not already exist.
# +
sqlin = f'''
DROP TABLE {table_name};
CREATE TABLE {table_name} (A INT);
INSERT INTO {table_name} VALUES 1,2,3,4,5,6,7,8,9,10;
SELECT * FROM {table_name};
'''
# %sql -q {sqlin}
# -
# Run the next two cells to create the same table in a schema ending in **B**. It is populated with values from 11 to 20.
# +
schema_name = engineer+'B'
print("")
print("Lab #: "+str(labnumber))
print("Schema name: " + str(schema_name))
print("Table name: " + str(table_name))
# %sql -e SET CURRENT SCHEMA {schema_name}
# -
sqlin = f'''
DROP TABLE {table_name};
CREATE TABLE {table_name} (A INT);
INSERT INTO {table_name} VALUES 11,12,13,14,15,16,17,18,19,20;
SELECT * FROM {table_name};
'''
# %sql -q {sqlin}
# Run the next cell to see all the tables in the database you just created.
# %sql SELECT TABSCHEMA, TABNAME FROM SYSCAT.TABLES WHERE TABNAME = '{table_name}'
# Run the next cell to see all the tables in the database that are like **DISCOVER**. You may see tables created by other people running the lab.
# %sql SELECT TABSCHEMA, TABNAME FROM SYSCAT.TABLES WHERE TABNAME LIKE 'DISCOVER%'
# ### Virtualizing your new Tables
# Now that you have created two new tables you can virtualize that data and make it look like a single table in your database.
# 1. Return to the IBM Cloud Pak for Data Console
# 2. Click **Virtualize** in the Data Virtualization menu if you are not still in the Virtualize page
#
# <img src="https://raw.githubusercontent.com/Db2-DTE-POC/CPDDVLAB/master/media/11.13.07 Menu Virtualize.png">
#
# 3. Enter your current userid (e.g. DATAENGINEER1) in the search bar and hit **Enter**. You can see that your new tables have automatically been discovered by IBM Cloud Pak for Data.
#
# <img src="https://raw.githubusercontent.com/Db2-DTE-POC/CPDDVLAB/master/media/11.31.01 Available Discover Tables.png">
#
# 4. Select the two tables you just created by clicking the **check box** beside each table. Make sure you only select those for your LABDATAENGINEER schema.
# 5. Click **Add to Cart**. Notice that the number of items in your cart is now **2**.
#
# <img src="https://raw.githubusercontent.com/Db2-DTE-POC/CPDDVLAB/master/media/11.33.11 Available ENGINEER Tables.png">
#
# 6. Click **View Cart**
#
# <img src="https://raw.githubusercontent.com/Db2-DTE-POC/CPDDVLAB/master/media/11.33.31 View Cart(2).png">
#
# 7. Change the name of your two tables from DISCOVER to **DISCOVERA** and **DISCOVERB**. These are the new names that you will be able to use to find your tables in the Data Virtualization database. Don't change the Schema name. It is unique to your current userid.
#
# <img src="https://raw.githubusercontent.com/Db2-DTE-POC/CPDDVLAB/master/media/11.34.21 Assign to Project.png">
#
# 9. Click the **back arrow** beside **Review cart and virtualize tables**. We are going to add one more thing to your cart.
#
# <img src="https://raw.githubusercontent.com/Db2-DTE-POC/CPDDVLAB/master/media/11.34.30 Back Arrow Icon.png">
#
# 10. Click the checkbox beside **Automatically group tables**. Notice how all the tables called **DISCOVER** have been grouped together into a single entry.
#
# <img src="https://raw.githubusercontent.com/Db2-DTE-POC/CPDDVLAB/master/media/11.35.18 Automatically Group Available Tables.png">
#
# 11. Select the row where all the DISCOVER tables have been grouped together
# 12. Click **Add to cart**.
# 13. Click **View cart**
#
# <img src="https://raw.githubusercontent.com/Db2-DTE-POC/CPDDVLAB/master/media/11.35.28 View cart(3).png">
#
# You should now see three items in your cart.
#
# <img src="https://raw.githubusercontent.com/Db2-DTE-POC/CPDDVLAB/master/media/11.35.57 Cart with Fold.png">
#
# 14. Hover over the ellipsis icon at the right side of the list for the **DISCOVER** table
#
# <img src="https://raw.githubusercontent.com/Db2-DTE-POC/CPDDVLAB/master/media/11.34.44 Elipsis.png">
#
# 15. Select **Edit grouped tables**
#
# <img src="https://raw.githubusercontent.com/Db2-DTE-POC/CPDDVLAB/master/media/11.36.11 Cart Elipsis Menu.png">
#
# 16. Deselect all the tables except for the two in the schemas you created. You should now have two tables selected.
# 17. Click **Apply**
# 17. Change the name of the new combined table to **DISCOVERFOLD**
# 18. Select the **Data Virtualization Hands-on Lab** project from the drop-down list.
# 20. Click **Virtualize**. You see that three new virtual tables have been created.
#
# <img src="https://raw.githubusercontent.com/Db2-DTE-POC/CPDDVLAB/master/media/11.36.49 Virtualize.png">
#
# The Virtual tables created dialog box opens.
#
# <img src="https://raw.githubusercontent.com/Db2-DTE-POC/CPDDVLAB/master/media/11.37.24 Virtual tables created.png">
#
# 21. Click **View my virtualized data**. You return to the My virtualized data page.
# ### Working with your new tables
# 1. Enter DISCOVER_# where # is your lab number
#
# <img src="https://raw.githubusercontent.com/Db2-DTE-POC/CPDDVLAB/master/media/11.37.55 Find DISCOVER.png">
#
# You should see the three virtual tables you just created. Notice that you do not see tables that other users have created. By default, Data Engineers only see virtualized tables they have virtualized or virtual tables where they have been given access by other users.
# 2. Click the ellipsis (...) beside your **DISCOVERFOLD_#** table and select **Preview** to confirm that it contains 20 rows.
#
# <img src="https://raw.githubusercontent.com/Db2-DTE-POC/CPDDVLAB/master/media/4.32.01 Elipsis Fold.png">
#
# 3. Click **SQL Editor** from the Data Virtualization menu
#
# <img src="https://raw.githubusercontent.com/Db2-DTE-POC/CPDDVLAB/master/media/11.13.33 Menu SQL editor.png">
#
# 4. Click **Blank** to create a new blank SQL Script
#
# <img src="https://raw.githubusercontent.com/Db2-DTE-POC/CPDDVLAB/master/media/11.38.24 + Blank.png">
#
# 5. Enter **SELECT * FROM DISCOVERFOLD_#;** into the SQL Editor
#
# <img src="https://raw.githubusercontent.com/Db2-DTE-POC/CPDDVLAB/master/media/11.38.44 SELECT*.png">
#
# 6. Click **Run All** at the bottom left of the SQL Editor window. You should see 20 rows returned in the result.
#
# <img src="https://raw.githubusercontent.com/Db2-DTE-POC/CPDDVLAB/master/media/11.38.52 Run all.png">
#
# Notice that you didn't have to specify the schema for your new virtual tables. The SQL Editor automatically uses the schema associated with your userid that was used when you created your new tables.
#
# Now you can:
# * Create a connection to a remote data source
# * Make a new or existing table in that remote data source look and act like a local table
# * Fold data from different tables, whether in the same data source or across data sources, into a single virtual table
# ## Gaining Insight from Virtualized Data
# Now that you understand the basics of Data Virtualization you can explore how easy it is to gain insight across multiple data sources without moving data.
#
# In the next set of steps you connect to virtualized data from this notebook using your LABDATAENGINEER userid. You can use the same techniques to connect to virtualized data from applications and analytic tools from outside of IBM Cloud Pak for Data.
#
# <img src="https://raw.githubusercontent.com/Db2-DTE-POC/CPDDVLAB/master/media/ConnectingTotheAnalyticsDatabase.png">
#
# Connecting to all your virtualized data is just like connecting to a single database. All the complexity of dozens of tables across multiple databases, on different on-premises systems and cloud providers, is reduced to connecting to a single database and querying a table.
#
# We are going to connect to the IBM Cloud Pak for Data Virtualization database in exactly the same way we connected to a Db2 database earlier in this lab. However, we need to change the detailed connection information.
#
# 1. Click **Connection Details** in the Data Virtualization menu
#
# <img src="https://raw.githubusercontent.com/Db2-DTE-POC/CPDDVLAB/master/media/11.13.44 Menu connection details.png">
#
# 2. Click **Without SSL**
#
# <img src="https://raw.githubusercontent.com/Db2-DTE-POC/CPDDVLAB/master/media/11.14.29 Connection details.png">
#
# 3. Copy the **User ID** by highlighting it with your mouse, then right-click and select **Copy**
# 4. Paste the **User ID** into the next cell in this notebook, between the quotation marks after **user =** (see below)
#
# <img src="https://raw.githubusercontent.com/Db2-DTE-POC/CPDDVLAB/master/media/11.54.27 Notebook Login.png">
#
# 5. Click **Service Settings** in the Data Virtualization menu
# <img src="https://raw.githubusercontent.com/Db2-DTE-POC/CPDDVLAB/master/media/11.14.05 Menu Service settings.png">
#
# 6. Look for the Access Information section of the page
# <img src="https://raw.githubusercontent.com/Db2-DTE-POC/CPDDVLAB/master/media/11.14.15 Access information.png">
#
# 7. Click **Show** to see the password. Highlight the password and copy it using the right-click menu
# 8. Paste the **password** into the cell below between the quotation marks using right-click paste.
# 9. Run the cell below to connect to the Data Virtualization database.
# #### Connecting to Data Virtualization SQL Engine
# +
# Connect to the IBM Cloud Pak for Data Virtualization Database from inside CPD
database = 'bigsql'
user = 'userxxxx'
password = '<PASSWORD>'
host = 'openshift-skytap-nfs-lb.ibm.com'
port = '32080'
# %sql CONNECT TO {database} USER {user} USING {password} HOST {host} PORT {port}
# +
# Connect to the IBM Cloud Pak for Data Virtualization Database from outside CPD
database = 'bigsql'
user = 'user999'
password = '<PASSWORD>'
host = 'services-uscentral.skytap.com'
port = '19245'
# %sql CONNECT TO {database} USER {user} USING {password} HOST {host} PORT {port}
# -
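# Rather than pasting credentials directly into the notebook, you can keep them in environment variables and read them at connection time. The sketch below assumes environment variable names of our own choosing (**DV_USER**, **DV_PASSWORD**, and so on) — they are not defined by Cloud Pak for Data.

```python
import os

# Read connection details from the environment; the variable names here
# are our own convention, not something CPD defines. The second argument
# to os.environ.get() is the fallback used when the variable is not set.
database = os.environ.get('DV_DATABASE', 'bigsql')
user = os.environ.get('DV_USER', 'userxxxx')
password = os.environ.get('DV_PASSWORD', '')
host = os.environ.get('DV_HOST', 'services-uscentral.skytap.com')
port = os.environ.get('DV_PORT', '19245')
print(database, host, port)
```

# The same %sql CONNECT statement as above can then be run without any credentials appearing in the notebook itself.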
# ### Stock Symbol Table
# #### Get information about the stocks that are in the database
# **System Z - VSAM**
# This table comes from a VSAM file on z/OS. IBM Cloud Pak for Data Virtualization works together with Data Virtualization Manager for z/OS to make it look like a local database table. For the following examples you can substitute any of the symbols below.
# %sql -a select * from DVDEMO.STOCK_SYMBOLS
# ### Stock History Table
# #### Get Price of a Stock over the Year
# Set the stock symbol in the line below and run the cell. This information is folded together with data coming from two identical tables, one on a Db2 database and one on an Informix database. Run the next two cells. Then pick a new stock symbol from the list above, enter it into the cell below and run both cells again.
#
# **CP4D - Db2, Skytap - Informix**
stock = 'AXP'
print('variable stock set to = ' + str(stock))
# + magic_args="-pl" language="sql"
# SELECT WEEK(TX_DATE) AS WEEK, OPEN FROM FOLDING.STOCK_HISTORY
# WHERE SYMBOL = :stock AND TX_DATE != '2017-12-01'
# ORDER BY WEEK(TX_DATE) ASC
# -
# #### Trend of Three Stocks
# This chart shows three stock prices over the course of a year. It uses the same folded stock history information.
#
# **CP4D - Db2, Skytap - Informix**
stocks = ['INTC','MSFT','AAPL']
# + magic_args="-pl" language="sql"
# SELECT SYMBOL, WEEK(TX_DATE), OPEN FROM FOLDING.STOCK_HISTORY
# WHERE SYMBOL IN (:stocks) AND TX_DATE != '2017-12-01'
# ORDER BY WEEK(TX_DATE) ASC
# -
# #### 30 Day Moving Average of a Stock
# Enter the Stock Symbol below to see the 30 day moving average of a single stock.
#
# **CP4D - Db2, Skytap - Informix**
stock = 'AAPL'
# +
sqlin = \
"""
SELECT WEEK(TX_DATE) AS WEEK, OPEN,
AVG(OPEN) OVER (
ORDER BY TX_DATE
ROWS BETWEEN 15 PRECEDING AND 15 FOLLOWING) AS MOVING_AVG
FROM FOLDING.STOCK_HISTORY
WHERE SYMBOL = :stock
ORDER BY WEEK(TX_DATE)
"""
# df = %sql {sqlin}
import matplotlib.pyplot as plt  # harmless if already imported earlier in the lab

txdate = df['WEEK']
sales = df['OPEN']
avg = df['MOVING_AVG']
plt.xlabel("Week", fontsize=12);
plt.ylabel("Opening Price", fontsize=12);
plt.suptitle("Opening Price and Moving Average of " + stock, fontsize=20);
plt.plot(txdate, sales, 'r');
plt.plot(txdate, avg, 'b');
plt.show();
# -
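# The windowed AVG above (**ROWS BETWEEN 15 PRECEDING AND 15 FOLLOWING**) is a centered moving average. The same calculation can be sketched in plain Python, which is handy for checking the SQL result on a small sample:

```python
def centered_moving_avg(values, before=15, after=15):
    # Mirrors AVG(OPEN) OVER (ORDER BY TX_DATE
    #   ROWS BETWEEN 15 PRECEDING AND 15 FOLLOWING):
    # at the edges of the series the window simply shrinks,
    # exactly as the SQL window does.
    result = []
    for i in range(len(values)):
        window = values[max(0, i - before):i + after + 1]
        result.append(sum(window) / len(window))
    return result

# Tiny illustrative series with a 1-preceding/1-following window
print(centered_moving_avg([10, 12, 11, 13, 14], before=1, after=1))
```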
# #### Trading volume of INTC versus MSFT and AAPL in first week of November
# **CP4D - Db2, Skytap - Informix**
stocks = ['INTC','MSFT','AAPL']
# + magic_args="-pb" language="sql"
# SELECT SYMBOL, DAY(TX_DATE), VOLUME/1000000 FROM FOLDING.STOCK_HISTORY
# WHERE SYMBOL IN (:stocks) AND WEEK(TX_DATE) = 45
# ORDER BY DAY(TX_DATE) ASC
# -
# #### Show Stocks that Represent at least 3% of the Total Purchases during Week 45
# **CP4D - Db2, Skytap - Informix**
# + magic_args="-pie" language="sql"
# WITH WEEK45(SYMBOL, PURCHASES) AS (
# SELECT SYMBOL, SUM(VOLUME * CLOSE) FROM FOLDING.STOCK_HISTORY
# WHERE WEEK(TX_DATE) = 45 AND SYMBOL <> 'DJIA'
# GROUP BY SYMBOL
# ),
# ALL45(TOTAL) AS (
# SELECT SUM(PURCHASES) * .03 FROM WEEK45
# )
# SELECT SYMBOL, PURCHASES FROM WEEK45, ALL45
# WHERE PURCHASES > TOTAL
# ORDER BY SYMBOL, PURCHASES
# -
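# The WITH clauses above first compute a grand total and then keep only the symbols above a 3% share of it. The same filtering logic, sketched in Python over made-up purchase totals:

```python
# Hypothetical weekly purchase totals per symbol (illustrative numbers only)
purchases = {'AAPL': 500.0, 'MSFT': 450.0, 'INTC': 30.0, 'IBM': 20.0}

threshold = sum(purchases.values()) * 0.03   # plays the role of ALL45
significant = {sym: amt for sym, amt in purchases.items() if amt > threshold}
print(sorted(significant))  # → ['AAPL', 'MSFT']
```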
# ### Stock Transaction Table
# #### Show Transactions by Customer
# The next two examples use data folded together from three different data sources, representing three different trading organizations, to create a combined view of a single customer's stock trades.
#
# **AWS - Db2, Azure - EDB (Postgres), Azure - Db2**
# + magic_args="-a" language="sql"
# SELECT * FROM FOLDING.STOCK_TRANSACTIONS_DV
# WHERE CUSTID = '107196'
# FETCH FIRST 10 ROWS ONLY
# -
# #### Bought/Sold Amounts of Top 5 stocks
# **AWS - Db2, Azure - EDB (Postgres), Azure - Db2**
# + magic_args="-a" language="sql"
# WITH BOUGHT(SYMBOL, AMOUNT) AS
# (
# SELECT SYMBOL, SUM(QUANTITY) FROM FOLDING.STOCK_TRANSACTIONS_DV
# WHERE QUANTITY > 0
# GROUP BY SYMBOL
# ),
# SOLD(SYMBOL, AMOUNT) AS
# (
# SELECT SYMBOL, -SUM(QUANTITY) FROM FOLDING.STOCK_TRANSACTIONS_DV
# WHERE QUANTITY < 0
# GROUP BY SYMBOL
# )
# SELECT B.SYMBOL, B.AMOUNT AS BOUGHT, S.AMOUNT AS SOLD
# FROM BOUGHT B, SOLD S
# WHERE B.SYMBOL = S.SYMBOL
# ORDER BY B.AMOUNT DESC
# FETCH FIRST 5 ROWS ONLY
# -
# ### Customer Accounts
# #### Show Top 5 Customer Balance
# These next two examples use data folded from systems running on AWS and Azure.
#
# **AWS - Db2, Azure - EDB (Postgres), Azure - Db2**
# + magic_args="-a" language="sql"
# SELECT CUSTID, BALANCE FROM FOLDING.ACCOUNTS_DV
# ORDER BY BALANCE DESC
# FETCH FIRST 5 ROWS ONLY
# -
# #### Show Bottom 5 Customer Balance
# **AWS - Db2, Azure - EDB (Postgres), Azure - Db2**
# + magic_args="-a" language="sql"
# SELECT CUSTID, BALANCE FROM FOLDING.ACCOUNTS_DV
# ORDER BY BALANCE ASC
# FETCH FIRST 5 ROWS ONLY
# -
# ### Selecting Customer Information from MongoDB
# The MongoDB database (running on premises) has customer information in a document format. In order to materialize the document data as relational tables, a total of four virtual tables are generated. The following query shows the tables that are generated for the Customer document collection.
# %sql LIST TABLES FOR SCHEMA MONGO_ONPREM
# The tables are all connected through the CUSTOMERID field, which is based on the generated _id of the main CUSTOMER collection. In order to reassemble these tables into a document, we must join them using this unique identifier. An example of the contents of the CUSTOMER_CONTACT table is shown below.
# %sql -a SELECT * FROM MONGO_ONPREM.CUSTOMER_CONTACT FETCH FIRST 5 ROWS ONLY
# A full document record is shown in the following SQL statement which joins all of the tables together.
# + magic_args="-a" language="sql"
# SELECT C.CUSTOMERID AS CUSTID,
# CI.FIRSTNAME, CI.LASTNAME, CI.BIRTHDATE,
# CC.CITY, CC.ZIPCODE, CC.EMAIL, CC.PHONE, CC.STREET, CC.STATE,
# CP.CARD_TYPE, CP.CARD_NO
# FROM MONGO_ONPREM.CUSTOMER C, MONGO_ONPREM.CUSTOMER_CONTACT CC,
# MONGO_ONPREM.CUSTOMER_IDENTITY CI, MONGO_ONPREM.CUSTOMER_PAYMENT CP
# WHERE CC.CUSTOMER_ID = C."_ID" AND
# CI.CUSTOMER_ID = C."_ID" AND
# CP.CUSTOMER_ID = C."_ID"
# FETCH FIRST 3 ROWS ONLY
# -
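# The four MONGO_ONPREM tables are the relational projection of one nested document. To see why the CUSTOMER_ID join key is needed, here is a sketch that splits a hypothetical document into per-table rows the same way (the document shape and field names are made up for illustration):

```python
# A hypothetical customer document of the shape MongoDB might hold
doc = {
    "_id": "C1",
    "identity": {"firstname": "Ada", "lastname": "Lovelace"},
    "contact": {"city": "London", "email": "ada@example.com"},
}

def split_document(doc):
    # Each sub-document becomes a row in its own table, carrying the
    # parent _id as CUSTOMER_ID so the rows can be joined back together.
    cid = doc["_id"]
    identity_row = {"CUSTOMER_ID": cid,
                    **{k.upper(): v for k, v in doc["identity"].items()}}
    contact_row = {"CUSTOMER_ID": cid,
                   **{k.upper(): v for k, v in doc["contact"].items()}}
    return identity_row, contact_row

identity_row, contact_row = split_document(doc)
print(identity_row["LASTNAME"], contact_row["CITY"])  # → Lovelace London
```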
# ### Querying All Virtualized Data
# In this final example we use data from each data source to answer a complex business question. "What are the names of the customers in Ohio, who bought the most during the highest trading day of the year (based on the Dow Jones Industrial Index)?"
#
# **AWS Db2, Azure EDB, Azure Db2, Skytap MongoDB, CP4D Db2Wh, Skytap Informix**
# + language="sql"
# WITH MAX_VOLUME(AMOUNT) AS (
# SELECT MAX(VOLUME) FROM FOLDING.STOCK_HISTORY
# WHERE SYMBOL = 'DJIA'
# ),
# HIGHDATE(TX_DATE) AS (
# SELECT TX_DATE FROM FOLDING.STOCK_HISTORY, MAX_VOLUME M
# WHERE SYMBOL = 'DJIA' AND VOLUME = M.AMOUNT
# ),
# CUSTOMERS_IN_OHIO(CUSTID) AS (
# SELECT C.CUSTID FROM TRADING.CUSTOMERS C
# WHERE C.STATE = 'OH'
# ),
# TOTAL_BUY(CUSTID,TOTAL) AS (
# SELECT C.CUSTID, SUM(SH.QUANTITY * SH.PRICE)
# FROM CUSTOMERS_IN_OHIO C, FOLDING.STOCK_TRANSACTIONS_DV SH, HIGHDATE HD
# WHERE SH.CUSTID = C.CUSTID AND
# SH.TX_DATE = HD.TX_DATE AND
# QUANTITY > 0
# GROUP BY C.CUSTID
# )
# SELECT LASTNAME, T.TOTAL
# FROM MONGO_ONPREM.CUSTOMER_IDENTITY CI, MONGO_ONPREM.CUSTOMER C, TOTAL_BUY T
# WHERE CI.CUSTOMER_ID = C."_ID" AND C.CUSTOMERID = CUSTID
# ORDER BY TOTAL DESC
# -
# ### Seeing where your Virtualized Data is coming from
# You may eventually work with a complex Data Virtualization system. As an administrator or a Data Scientist you may need to understand where data is coming from.
#
# Fortunately, the Data Virtualization engine is based on Db2. It includes the same catalog of information as does Db2 with some additional features. If you want to work backwards and understand where each of your virtualized tables comes from, the information is included in the **SYSCAT.TABOPTIONS** catalog table.
# + language="sql"
# SELECT * from SYSCAT.TABOPTIONS;
# -
# The table includes more information than you need to answer the question of where your data is coming from. The query below shows only the rows that contain the data source information ('SOURCELIST'). Notice that tables folded together from several tables include the information for each data source, separated by a semicolon.
# + language="sql"
# SELECT TABSCHEMA, TABNAME, SETTING
# FROM SYSCAT.TABOPTIONS
# WHERE OPTION = 'SOURCELIST'
# AND TABSCHEMA <> 'QPLEXSYS';
# + language="sql"
# SELECT TABSCHEMA, TABNAME, SETTING
# FROM SYSCAT.TABOPTIONS
# WHERE TABSCHEMA = 'DVDEMO';
# -
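# Because folded tables carry one entry per backing source in SETTING, separated by semicolons, you can pull the list apart client-side. A sketch with an illustrative SETTING value — the exact layout of real entries may differ:

```python
# Illustrative SOURCELIST value for a table folded from three sources;
# the CID:type layout here is an assumption for the example.
setting = "DB210113:DB2;PG10422:POSTGRESQL;INF10981:INFORMIX"

sources = [entry.split(":", 1) for entry in setting.split(";")]
for cid, srctype in sources:
    print(cid, srctype)
```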
# In this last example, you can search for any virtualized data coming from a Postgres database by searching for **SETTING LIKE '%POST%'**.
# + language="sql"
# SELECT TABSCHEMA, TABNAME, SETTING
# FROM SYSCAT.TABOPTIONS
# WHERE OPTION = 'SOURCELIST'
# AND SETTING LIKE '%POST%'
# AND TABSCHEMA <> 'QPLEXSYS';
# -
# What is missing is additional detail for each connection. For example, all we can see in the table above is a connection ID. You can find that detail in another table: **QPLEXSYS.LISTRDBC**. In the last cell, you can see that CID DB210113 is included in the STOCK_TRANSACTIONS virtual table. You can find the details on that copy of Db2 by running the next cell.
# + language="sql"
# SELECT CID, USR, SRCTYPE, SRCHOSTNAME, SRCPORT, DBNAME, IS_DOCKER FROM QPLEXSYS.LISTRDBC;
# -
# ## Advanced Data Virtualization
# Now that you have seen how powerful and easy it is to gain insight from your existing virtualized data, you can learn more about how to do advanced data virtualization. You will learn how to join different remote tables together to create a new virtual table and how to capture complex SQL into VIEWs.
#
#
# ### Joining Tables Together
# The virtualized tables below come from different data sources on different systems. We can combine them into a single virtual table.
#
# * Select **My virtualized data** from the Data Virtualization menu
#
# <img src="https://raw.githubusercontent.com/Db2-DTE-POC/CPDDVLAB/master/media/11.13.20 Menu My virtual data.png">
#
# * Enter **Stock** in the find field and hit enter
#
# <img src="https://raw.githubusercontent.com/Db2-DTE-POC/CPDDVLAB/master/media/11.39.43 Find STOCK.png">
#
# * Select table **STOCK_TRANSACTIONS_DV** in the **FOLDING** schema
# * Select table **STOCK_SYMBOLS** in the **DVDEMO** schema
#
# <img src="https://raw.githubusercontent.com/Db2-DTE-POC/CPDDVLAB/master/media/11.40.18 Two STOCK seleted.png">
#
# * Click **Join View**
# * In table STOCK_SYMBOLS: deselect **SYMBOL**
# * In table STOCK_TRANSACTIONS: deselect **TX_NO**
# * Click **STOCK_TRANSACTIONS_DV.SYMBOL** and drag to **STOCK_SYMBOLS.SYMBOL**
#
# <img src="https://raw.githubusercontent.com/Db2-DTE-POC/CPDDVLAB/master/media/11.41.07 Joining Tables.png">
#
# * Click **Preview** to check that your join is working. Each row should now contain the stock symbol and the long stock name.
#
# <img src="https://raw.githubusercontent.com/Db2-DTE-POC/CPDDVLAB/master/media/11.41.55 New Join Preview.png">
#
# * Click **X** to close the preview window
# * Click **JOIN**
#
# <img src="https://raw.githubusercontent.com/Db2-DTE-POC/CPDDVLAB/master/media/11.42.20 Join.png">
#
# * Type view name **TRANSACTIONS_FULLNAME**
# * Don't change the default schema. This corresponds to your LABDATAENGINEER user id.
#
# <img src="https://raw.githubusercontent.com/Db2-DTE-POC/CPDDVLAB/master/media/11.43.10 View Name.png">
#
# * Click **NEXT**
#
# <img src="https://raw.githubusercontent.com/Db2-DTE-POC/CPDDVLAB/master/media/11.43.30 Next.png">
#
# * Select the **Data Virtualization Hands on Lab** project.
#
# <img src="https://raw.githubusercontent.com/Db2-DTE-POC/CPDDVLAB/master/media/11.43.58 Assign to Project.png">
#
# * Click **CREATE VIEW**.
#
# <img src="https://raw.githubusercontent.com/Db2-DTE-POC/CPDDVLAB/master/media/11.44.06 Create view.png">
#
# You see the successful Join View window.
#
# <img src="https://raw.githubusercontent.com/Db2-DTE-POC/CPDDVLAB/master/media/11.44.23 Join view created.png">
#
#
# * Click **View my virtualized data**
# * Click the ellipsis menu beside **TRANSACTIONS_FULLNAME**
# * Click **Preview**
#
# You can now join virtualized tables together to combine them into new virtualized tables. Now that you know how to perform simple table joins, you can learn how to combine multiple data sources and virtual tables using the powerful SQL query engine that is part of IBM Cloud Pak for Data Virtualization.
# ### Using Queries to Answer Complex Business Questions
# The IBM Cloud Pak for Data Virtualization Administrator has set up more complex data from multiple sources for the next steps. The administrator has also given you access to this virtualized data. You may have noticed this in previous steps.
# 1. Select **My virtualized data** from the Data Virtualization menu. All of these virtualized tables look and act like normal Db2 tables.
#
# <img src="https://raw.githubusercontent.com/Db2-DTE-POC/CPDDVLAB/master/media/11.13.20 Menu My virtual data.png">
#
# 2. Click **Preview** for any of the tables to see what they contain.
#
# The virtualized tables in the **FOLDING** schema have all been created by combining the same tables from different data sources. Folding isn't restricted to tables from a single data source, as in the simple example you just completed.
#
# The virtualized tables in the **TRADING** schema are views of complex queries that were used to combine data from multiple data sources to answer specific business questions.
#
# 3. Select **SQL Editor** from the Data Virtualization menu.
#
# <img src="https://raw.githubusercontent.com/Db2-DTE-POC/CPDDVLAB/master/media/11.13.33 Menu SQL editor.png">
#
# 4. Select **Script Library**
#
# <img src="https://raw.githubusercontent.com/Db2-DTE-POC/CPDDVLAB/master/media/11.45.02 Script Library.png">
#
# 5. Search for **OHIO**
# 6. Select and expand the **OHIO Customer** query
#
# <img src="https://raw.githubusercontent.com/Db2-DTE-POC/CPDDVLAB/master/media/11.45.47 Ohio Script.png">
#
# 7. Click the **Open a script to edit** icon to open the script in the SQL Editor. **Note** that if you cannot open the script, you may have to refresh your browser or collapse and expand the script details section before the icon becomes active.
#
# <img src="https://raw.githubusercontent.com/Db2-DTE-POC/CPDDVLAB/master/media/11.45.54 Open Script.png">
#
# 8. Click **Run All**
#
# <img src="https://raw.githubusercontent.com/Db2-DTE-POC/CPDDVLAB/master/media/11.46.21 Run Ohio Script.png">
#
#
# This script is a complex SQL join query that uses data from all the virtualized data sources you explored in the first steps of this lab. While the SQL looks complex, the author of the query did not have to be aware that the data was coming from multiple sources. Everything used in this query looks like it comes from a single database, not eight different data sources across eight different systems on premises or in the cloud.
# ### Making Complex SQL Simple to Consume
# You can easily make this complex query easy for a user to consume. Instead of sharing this query with other users, you can wrap the query into a view that looks and acts like a simple table.
# 1. Enter **CREATE VIEW MYOHIOQUERY AS** in the SQL Editor at the first line below the comment and before the **WITH** clause
#
# <img src="https://raw.githubusercontent.com/Db2-DTE-POC/CPDDVLAB/master/media/11.46.54 Add CREATE VIEW.png">
#
# 2. Click **Run all**
# 3. Click **+** to **Add a new script**
#
# <img src="https://raw.githubusercontent.com/Db2-DTE-POC/CPDDVLAB/master/media/11.48.28 Add to script.png">
#
# 4. Click **Blank**
# 5. Enter **SELECT * FROM MYOHIOQUERY;**
# 6. Click **Run all**
#
# <img src="https://raw.githubusercontent.com/Db2-DTE-POC/CPDDVLAB/master/media/11.48.57 Run Ohio View.png">
#
#
# Now you have a very simple virtualized table that is pulling data from eight different data sources, combining the data together to resolve a complex business problem. In the next step you will share your new virtualized data with a user.
# ### Sharing Virtualized Tables
# 1. Select **My virtualized data** from the Data Virtualization Menu.
#
# <img src="https://raw.githubusercontent.com/Db2-DTE-POC/CPDDVLAB/master/media/11.13.20 Menu My virtual data.png">
#
# 2. Click the ellipsis (...) menu to the right of the **MYOHIOQUERY** virtualized table
#
# <img src="https://raw.githubusercontent.com/Db2-DTE-POC/CPDDVLAB/master/media/11.49.30 Select MYOHIOQUERY.png">
#
# 3. Select **Manage Access** from the ellipsis menu
#
# <img src="https://raw.githubusercontent.com/Db2-DTE-POC/CPDDVLAB/master/media/11.49.46 Virtualized Data Menu.png">
#
# 4. Click **Grant access**
#
# <img src="https://raw.githubusercontent.com/Db2-DTE-POC/CPDDVLAB/master/media/11.50.07 Grant access.png">
#
# 5. Select the **LABUSERx** id associated with your lab. For example, if you are LABDATAENGINEER5, then select LABUSER5.
#
# <img src="https://raw.githubusercontent.com/Db2-DTE-POC/CPDDVLAB/master/media/11.52.42 Grant access to specific user.png">
#
# 6. Click **Add**
#
# <img src="https://raw.githubusercontent.com/Db2-DTE-POC/CPDDVLAB/master/media/11.50.28 Add.png">
#
#
# You should now see that your **LABUSER** id has view-only access to the new virtualized table. Next, switch to your LABUSERx id to check that you can see the data you have just granted access to.
#
# 6. Click the user icon at the very top right of the console
# 7. Click **Log out**
# 8. Sign in using the LABUSER id specified by your lab instructor
# 9. Click the three bar menu at the top left of the IBM Cloud Pak for Data console
# 10. Select **Data Virtualization**
#
# You should see the **MYOHIOQUERY** with the schema from your engineer userid in the list of virtualized data.
#
# 11. Make a note of the schema of the MYOHIOQUERY in your list of virtualized tables. It starts with **USER**.
# 12. Select the **SQL Editor** from the Data virtualization menu
# 13. Click **Blank** to open a new SQL Editor window
# 14. Enter **SELECT * FROM USERxxxx.MYOHIOQUERY** where xxxx is the user number of your engineer user. The view created by your engineer user was created in their default schema.
# 15. Click **Run all**
# 16. Add the following to your query: **WHERE TOTAL > 3000 ORDER BY TOTAL**
# 17. Click **</>** to format the query so it is easier to read
# 18. Click **Run all**
#
# You can see how you have just made a very complex data set extremely easy for a data user to consume. They don't have to know how to connect to multiple data sources or how to combine the data using complex SQL. You can hide that complexity while ensuring only the right user has access to the right data.
#
# In the next steps you will learn how to access virtualized data from outside of IBM Cloud Pak for Data.
# ### Allowing User to Access Virtualized Data with Analytic Tools
# In the next set of steps you connect to virtualized data from this notebook using your **LABUSER** userid.
#
# Just like you connected to IBM Cloud Pak for Data Virtualized Data using your LABDATAENGINEER you can connect using your LABUSER.
#
# We are going to connect to the IBM Cloud Pak for Data Virtualization database in exactly the same way we connected using your LABDATAENGINEER id. However, you need to change the detailed connection information. Each user has their own unique userid and password to connect to the service. This ensures that no matter what tool you use to connect to virtualized data, you are always in control of who can access specific virtualized data.
#
# 1. Click the user icon at the top right of the IBM Cloud Pak for Data console to confirm that you are using your **LABUSER** id
# 2. Click **Connection Details** in the Data Virtualization menu
# 3. Click **Without SSL**
# 4. Copy the **User ID** by highlighting it with your mouse, then right-click and select **Copy**
# 5. Paste the **User ID** into the cell below where **user =**, between the quotation marks
# 6. Click **Service Settings** in the Data Virtualization menu
# 7. Show the password. Highlight the password and copy it using the right-click menu
# 8. Paste the **password** into the cell below between the quotation marks using right-click paste.
# 9. Run the cell below to connect to the Data Virtualization database.
# #### Connecting a USER to Data Virtualization SQL Engine
# +
# Connect to the IBM Cloud Pak for Data Virtualization Database from inside CPD
database = 'bigsql'
user = 'userxxxx'
password = '<PASSWORD>'
host = 'openshift-skytap-nfs-lb.ibm.com'
port = '32080'
# %sql CONNECT TO {database} USER {user} USING {password} HOST {host} PORT {port}
# +
# Connect to the IBM Cloud Pak for Data Virtualization Database from outside CPD
database = 'bigsql'
user = 'USER1130'
password = '<PASSWORD>'
host = 'services-uscentral.skytap.com'
port = '19245'
# %sql CONNECT TO {database} USER {user} USING {password} HOST {host} PORT {port}
# -
# Now you can try out the view that was created by the LABDATAENGINEER userid.
#
# Substitute the **xxxx** for the schema used by your ***LABDATAENGINEERx*** user in the next two cells before you run them.
# %sql SELECT * FROM USERxxxx.MYOHIOQUERY WHERE TOTAL > 3000 ORDER BY TOTAL;
# Only the LABDATAENGINEER virtualized tables that the LABUSER has been authorized to see are available. Try running the next cell. You should receive an error that the current user does not have the required authorization or privilege to perform the operation.
# %sql SELECT * FROM USERxxxx.DISCOVERFOLD;
# ### Next Steps:
# Now you can use IBM Cloud Pak for Data to make even complex data and queries from different data sources, on premises and across a multi-vendor Cloud look like simple tables in a single database. You are ready for some more advanced labs.
#
# 1. Use Db2 SQL and Jupyter Notebooks to Analyze Virtualized Data
# * Build simple to complex queries to answer important business questions using the virtualized data available to you in IBM Cloud Pak for Data
# * See how you can transform the queries into simple tables available to all your users
# 2. Use Open RESTful Services to connect to the IBM Cloud Pak for Data Virtualization
# * Everything you can do in the IBM Cloud Pak for Data User Interface is accessible through Open RESTful APIs
# * Learn how to automate and script your management of Data Virtualization using the RESTful API
# * Learn how to accelerate application development by accessing virtualized data through RESTful APIs
# ## Automating Data Virtualization Setup and Management through REST
# The IBM Cloud Pak for Data Console is only one way you can interact with the Virtualization service. IBM Cloud Pak for Data is built on a set of microservices that communicate with each other and with the Console user interface using RESTful APIs. You can use these services to automate anything you can do through the user interface.
#
# This Jupyter Notebook contains examples of how to use the Open APIs to retrieve information from the virtualization service, how to run SQL statements directly against the service through REST, and how to grant authorization to objects. This provides a way to write your own scripts to automate the setup and configuration of the virtualization service.
# + [markdown] hide_input=true
# The next part of the lab relies on a set of base classes to help you interact with the RESTful Services API for IBM Cloud Pak for Data Virtualization. You can access this library on GitHub. The commands below download the library and run it as part of this notebook.
# <pre>
# %run CPDDVRestClass.ipynb
# </pre>
# The cell below loads the RESTful Service classes and methods directly from GitHub. Note that it takes a few seconds for the extension to load, so wait until the "Db2 Extensions Loaded" message is displayed in your notebook.
# 1. Click the cell below
# 2. Click **Run**
# -
# !wget -O CPDDVRestClass.ipynb https://raw.githubusercontent.com/Db2-DTE-POC/CPDDVLAB/master/CPDDVRestClass.ipynb
# %run CPDDVRestClass.ipynb
# ### The Db2 Class
# The CPDDVRestClass.ipynb notebook includes a Python class called Db2 that encapsulates the Rest API calls used to connect to the IBM Cloud Pak for Data Virtualization service.
#
# To access the service you first authenticate and create a reusable token that is used for each call to the service. This ensures that you don't have to provide a userid and password each time you run a command, while keeping each call secure.
#
# Each request is constructed of several parts. First, the URL and the API version identify how to connect to the service. Second, the REST request identifies the operation and its options, for example '/metrics/applications/connections/current/list'. Finally, some complex requests also include a JSON payload. For example, running SQL includes a JSON object that identifies the script, statement delimiters, the maximum number of rows in the result set, as well as what to do if a statement fails.
#
# You can find this class and use it for your own notebooks in GitHub. Have a look at how the class encapsulates the API calls by clicking on the following link: https://github.com/Db2-DTE-POC/CPDDVLAB/blob/master/CPDDVRestClass.ipynb
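# A stripped-down sketch of that token pattern is shown below: authenticate once, keep the bearer token, and attach it to every later request. The class name, endpoint path, and field names here are illustrative only; the real Db2 class does the actual HTTP work.

```python
class RestClientSketch:
    """Authenticate once, keep the bearer token, reuse it per request."""

    def __init__(self, console_url, api='/v1'):
        self.base = console_url + api
        self.token = None

    def authenticate(self, token):
        # The real class POSTs the userid and password and stores the
        # token returned in the JSON response; here we store it directly.
        self.token = token

    def request_url(self, request):
        # Compose the full URL for a given REST request path
        return self.base + request

    def headers(self):
        # Every call after authentication carries the bearer token
        return {'Authorization': 'Bearer ' + self.token,
                'Content-Type': 'application/json'}

client = RestClientSketch('https://services-uscentral.skytap.com:9152')
client.authenticate('abc123')
print(client.request_url('/metrics/applications/connections/current/list'))
```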
# ### Example Connections
# To connect to the Data Virtualization service you need to provide the URL, the service name (v1), and the console user name and password. For this lab we assume that the following values are used for the connection:
# * Userid: LABDATAENGINEERx
# * Password: password
#
# Substitute your assigned LABDATAENGINEER userid below along with your password and run the cell. It will generate a bearer token that is used in the following steps to authenticate your use of the API.
# #### Connecting to Data Virtualization API Service
# +
# Set the service URL to connect from inside the ICPD Cluster
# Console = 'https://openshift-skytap-nfs-lb.ibm.com'
# Set the service URL to connect from outside the ICPD Cluster
Console = 'https://services-uscentral.skytap.com:9152'
# Connect to the Db2 Data Management Console service
user = 'LABDATAENGINEERx'
password = '<PASSWORD>'
# Set up the required connection
databaseAPI = Db2(Console)
api = '/v1'
databaseAPI.authenticate(api, user, password)
database = Console
# -
# #### Data Sources and Availability
# The following Python function (getDataSources) runs SQL against the **QPLEXSYS.LISTRDBC** catalog table and combines it with a stored procedure call, **QPLEXSYS.LISTRDBCDETAILS()**, to add the **AVAILABLE** column to the results. The IBM Cloud Pak for Data Virtualization service checks each data source every 5 to 10 seconds to ensure that it is still up and available. In the table (DataFrame) in the next cell, a **1** in the **AVAILABLE** column indicates that the data source is responding. A **0** indicates that it is no longer responding.
#
# Run the following cell.
# +
# Display the Available Data Sources already configured
dataSources = databaseAPI.getDataSources()
display(dataSources)
# -
# #### Virtualized Data
# This call retrieves all of the virtualized data available to the role of Data Engineer. It uses a direct RESTful service call and does not use SQL. The service returns a JSON result set that is converted into a Python Pandas dataframe. Dataframes are very useful in being able to manipulate tables of data in Python. If there is a problem with the call, the error code is displayed.
# Display the Virtualized Assets Available to Engineers and Users
roles = ['DV_ENGINEER','DV_USER']
for role in roles:
r = databaseAPI.getRole(role)
if (databaseAPI.getStatusCode(r)==200):
json = databaseAPI.getJSON(r)
df = pd.DataFrame(json_normalize(json['objects']))
display(df)
else:
print(databaseAPI.getStatusCode(r))
# #### Virtualized Tables and Views
# This call retrieves all the virtualized tables and views available to the userid that you used to connect to the service. In this example the whole call is included in the Db2 class library and returned as a complete DataFrame ready for display or to be used for analysis or administration.
### Display Virtualized Tables and Views
display(databaseAPI.getVirtualizedTablesDF())
display(databaseAPI.getVirtualizedViewsDF())
# #### Get a list of the IBM Cloud Pak for Data Users
# This example returns a list of all the users of the IBM Cloud Pak for Data system. It only displays three columns in the DataFrame, but the list of all the available columns is also printed out. Try changing the code to display other columns.
# Get the list of CPD Users
r = databaseAPI.getUsers()
if (databaseAPI.getStatusCode(r)==200):
json = databaseAPI.getJSON(r)
df = pd.DataFrame(json_normalize(json))
print(', '.join(list(df))) # List available column names
display(df[['uid','username','displayName']])
else:
print(databaseAPI.getStatusCode(r))
# #### Get the list of available schemas in the DV Database
# Do not forget that the Data Virtualization engine supports the same function as a regular Db2 database. So you can also look at standard Db2 objects like schemas.
# Get the list of available schemas in the DV Database
r = databaseAPI.getSchemas()
if (databaseAPI.getStatusCode(r)==200):
json = databaseAPI.getJSON(r)
df = pd.DataFrame(json_normalize(json['resources']))
print(', '.join(list(df)))
display(df[['name']].head(10))
else:
print(databaseAPI.getStatusCode(r))
# #### Object Search
# Fuzzy object search is also available. The call is a bit more complex. If you look at the routine in the DB2 class it posts a RESTful service call that includes a JSON payload. The payload includes the details of the search request.
# Search for tables across all schemas that match simple search criteria
# Display the first 100
# Switch between searching tables or views
object = 'view'
# object = 'table'
r = databaseAPI.postSearchObjects(object,"TRADING",10,'false','false')
if (databaseAPI.getStatusCode(r)==200):
json = databaseAPI.getJSON(r)
df = pd.DataFrame(json_normalize(json))
print('Columns:')
print(', '.join(list(df)))
display(df[[object+'_name']].head(100))
else:
print("RC: "+str(databaseAPI.getStatusCode(r)))
# #### Run SQL through the SQL Editor Service
# You can also use the SQL Editor service to run your own SQL. Statements are submitted to the editor. Your code then needs to poll the editor service until the script is complete. Fortunately you can use the DB2 class included in this lab so that it becomes a very simple Python call. The **runScript** routine runs the SQL and the **displayResults** routine formats the returned JSON.
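# Under the hood, that polling loop might look roughly like the sketch below. This is a generic illustration with an injectable status callable, not the actual implementation inside the Db2 class.

```python
import time

def poll_until_done(check, interval=1.0, timeout=60.0):
    """Poll a zero-argument status callable until it reports completion,
    failure, or the timeout is exceeded."""
    waited = 0.0
    while waited < timeout:
        status = check()
        if status in ('completed', 'failed'):
            return status
        time.sleep(interval)
        waited += interval
    return 'timeout'

# Demo with a fake status source standing in for the SQL editor service:
statuses = iter(['running', 'running', 'completed'])
print(poll_until_done(lambda: next(statuses), interval=0.01))  # completed
```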
#
# Run the next cell.
databaseAPI.displayResults(databaseAPI.runScript('SELECT * FROM TRADING.MOVING_AVERAGE'))
# You can also run longer more complex statements by using three quotes to create a multi-line string in Python.
# +
# Use this query if Mongo is available
sqlText = \
'''
WITH MAX_VOLUME(AMOUNT) AS (
SELECT MAX(VOLUME) FROM FOLDING.STOCK_HISTORY
WHERE SYMBOL = 'DJIA'
),
HIGHDATE(TX_DATE) AS (
SELECT TX_DATE FROM FOLDING.STOCK_HISTORY, MAX_VOLUME M
WHERE SYMBOL = 'DJIA' AND VOLUME = M.AMOUNT
),
CUSTOMERS_IN_OHIO(CUSTID) AS (
SELECT C.CUSTID FROM TRADING.CUSTOMERS C
WHERE C.STATE = 'OH'
),
TOTAL_BUY(CUSTID,TOTAL) AS (
SELECT C.CUSTID, SUM(SH.QUANTITY * SH.PRICE)
FROM CUSTOMERS_IN_OHIO C, FOLDING.STOCK_TRANSACTIONS SH, HIGHDATE HD
WHERE SH.CUSTID = C.CUSTID AND
SH.TX_DATE = HD.TX_DATE AND
QUANTITY > 0
GROUP BY C.CUSTID
)
SELECT LASTNAME, T.TOTAL
FROM MONGO_ONPREM.CUSTOMER_IDENTITY CI, MONGO_ONPREM.CUSTOMER C, TOTAL_BUY T
WHERE CI.CUSTOMER_ID = C."_ID" AND C.CUSTOMERID = CUSTID
ORDER BY TOTAL DESC
FETCH FIRST 5 ROWS ONLY;
'''
databaseAPI.displayResults(databaseAPI.runScript(sqlText))
# +
# Use this query if Mongo is not available
sqlText = \
'''
WITH MAX_VOLUME(AMOUNT) AS (
SELECT MAX(VOLUME) FROM FOLDING.STOCK_HISTORY
WHERE SYMBOL = 'DJIA'
),
HIGHDATE(TX_DATE) AS (
SELECT TX_DATE FROM FOLDING.STOCK_HISTORY, MAX_VOLUME M
WHERE SYMBOL = 'DJIA' AND VOLUME = M.AMOUNT
),
CUSTOMERS_IN_OHIO(CUSTID) AS (
SELECT C.CUSTID FROM TRADING.CUSTOMERS C
WHERE C.STATE = 'OH'
),
TOTAL_BUY(CUSTID,TOTAL) AS (
SELECT C.CUSTID, SUM(SH.QUANTITY * SH.PRICE)
FROM CUSTOMERS_IN_OHIO C, FOLDING.STOCK_TRANSACTIONS_DV SH, HIGHDATE HD
WHERE SH.CUSTID = C.CUSTID AND
SH.TX_DATE = HD.TX_DATE AND
QUANTITY > 0
GROUP BY C.CUSTID
)
SELECT LASTNAME, T.TOTAL
FROM TRADING.CUSTOMERS C, TOTAL_BUY T
WHERE C.CUSTID = T.CUSTID
'''
databaseAPI.displayResults(databaseAPI.runScript(sqlText))
# -
# #### Run scripts of SQL Statements repeatedly through the SQL Editor Service
# The runScript routine can contain more than one statement. The next example runs a script with eight SQL statements multiple times.
# +
repeat = 3
sqlText = \
'''
SELECT * FROM TRADING.MOVING_AVERAGE;
SELECT * FROM TRADING.VOLUME;
SELECT * FROM TRADING.THREEPERCENT;
SELECT * FROM TRADING.TRANSBYCUSTOMER;
SELECT * FROM TRADING.TOPBOUGHTSOLD;
SELECT * FROM TRADING.TOPFIVE;
SELECT * FROM TRADING.BOTTOMFIVE;
SELECT * FROM TRADING.OHIO;
'''
for x in range(0, repeat):
print('Repetition number: '+str(x))
databaseAPI.displayResults(databaseAPI.runScript(sqlText))
print('done')
# -
# ### What's next
# If you are interested in finding out more about using RESTful services to work with Db2, check out this DZone article: https://dzone.com/articles/db2-dte-pocdb2dmc. The article also includes a link to a complete hands-on lab for Db2 and the Db2 Data Management Console. In it you can find out more about using REST and Db2 together.
# #### Credits: IBM 2019, <NAME> [<EMAIL>]
| media/CPD-DV Hands on Lab Development.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Import package and function
# +
import numpy as np
import matplotlib.pyplot as plt
from scipy import ndimage
from scipy.io import loadmat
from scipy.ndimage import gaussian_filter
import os
# %matplotlib inline
plt.rcParams['figure.facecolor'] = 'white'
plt.rcParams["mathtext.fontset"] = "cm"
# -
# # load files
os.chdir('..')
data_folder = os.getcwd()+"\\Experimental_Data_Example\\OLED_Data\\" # Note: use an absolute path on your own computer instead.
BS = loadmat(data_folder+'oled_boundary_set', squeeze_me =True)
ExpData = loadmat(data_folder+'merge_0224_Checkerboard_30Hz_27_15min_Br50_Q100', squeeze_me =True)
# +
cn = 9
dt = 1/60
timeBinNum = 60
Taxis = np.arange(timeBinNum)*dt
checkerboard = ExpData['bin_pos']
fs = 1.5
GFcheckerboard = np.array([gaussian_filter(cb.astype(float), fs) for cb in checkerboard])
GFCcheckerboard = GFcheckerboard - np.mean(GFcheckerboard, axis = 0)
rstate, _ = np.histogram(ExpData['reconstruct_spikes'][cn-1], np.arange(len(checkerboard)+1)*dt)
# -
STK = np.zeros([timeBinNum,27,27])
for i in range(timeBinNum): #1s
for ii in np.arange(0, len(checkerboard)-i):
STK[i,:,:] += rstate[ii+i]*GFCcheckerboard[ii,:,:]
STK[i,:,:] /= np.sum(rstate[:len(checkerboard)-i])
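# The inner loop above can be vectorized with a dot product over the shared time axis. The helper below is a sketch that computes the same quantity, assuming (as above) that the spike counts and the stimulus array cover the same number of time bins:

```python
import numpy as np

def spike_triggered_kernel(rstate, stim, time_bins):
    """Spike-triggered average over `time_bins` lags, vectorized over time.

    For lag i, each stimulus frame stim[ii] is weighted by the spike count
    rstate[ii + i] and the sum is normalized by the total spike count --
    the same quantity as the explicit double loop above.
    """
    T = len(stim)
    stk = np.zeros((time_bins,) + stim.shape[1:])
    for i in range(time_bins):
        # tensordot contracts the shared time axis: sum_k rstate[i+k] * stim[k]
        stk[i] = np.tensordot(rstate[i:T], stim[:T - i], axes=(0, 0))
        stk[i] /= np.sum(rstate[:T - i])
    return stk
```

# e.g. `spike_triggered_kernel(rstate, GFCcheckerboard, timeBinNum)` should reproduce the `STK` array computed above.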
# # SVD
rSTK = STK[:,:,:].reshape((STK.shape[0],-1))
U,sigma,VT=np.linalg.svd(rSTK)
sigma/np.sum(sigma)
plt.plot(np.arange(timeBinNum+1)*dt, np.append(0,U[:,0]))
plt.xlabel(r'$t$ (s)')
plt.ylabel(r'$\left| u_1 \right\rangle(t)$')
plt.title(r'$\left| u_1 \right\rangle$', fontsize=20)
plt.xlim([0,1])
plt.imshow( VT[0,:].reshape((27,27)) , cmap='gray')
plt.title(r'$\left\langle {v_1} \right|$', fontsize=20)
plt.gca().axes.xaxis.set_visible(False)
plt.gca().axes.yaxis.set_visible(False)
plt.gcf().set_size_inches(3,3.5)
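# How dominant the first singular component is can be read off from the normalized singular values computed above; a small self-contained sketch:

```python
import numpy as np

def explained_ratio(singular_values):
    """Fraction of the total singular-value weight carried by each component."""
    s = np.asarray(singular_values, dtype=float)
    return s / s.sum()

# Example with made-up singular values; apply to `sigma` from the SVD cell above.
print(explained_ratio([8.0, 1.0, 1.0]))  # [0.8 0.1 0.1]
```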
# # Figure 3.4: A reconstructed separable STK from SVD compared with the original STK.
SVDtogather =VT[0,:].reshape((27,27))* U[0,0]
STKtogather = STK[0,:,:]
for i in np.arange(1,18):
SVDtogather = np.hstack((SVDtogather, VT[0,:].reshape((27,27)) * U[i,0] ))
STKtogather = np.hstack((STKtogather, STK[i,:,:] ))
Togather = np.vstack((STKtogather, SVDtogather))
imshowdict = {'cmap': 'gray',
'vmin': np.min(Togather),
'vmax': np.max(Togather)}
fig, ax = plt.subplots(3,3, constrained_layout=True)
for i in np.arange(9):
ax.flatten()[i].imshow(STK[i*2,:,:], **imshowdict)
ax.flatten()[i].set_title(r'$t=$'+str(np.round((i*2)/60, 3))+' s', fontsize = 16)
ax.flatten()[i].axes.xaxis.set_visible(False)
ax.flatten()[i].axes.yaxis.set_visible(False)
# fig.tight_layout()
fig.suptitle(r'STK $K_{st}(t,\vec{x})$', fontsize=24)
fig.set_size_inches(6,7.5)
fig, ax = plt.subplots(3,3, constrained_layout=True)
for i in range(9):
ax.flatten()[i].imshow(VT[0,:].reshape((27,27)) * U[i*2,0], **imshowdict)
ax.flatten()[i].set_title(r'$t=$'+str(np.round((i)/30, 3))+' s', fontsize = 16)
ax.flatten()[i].axes.xaxis.set_visible(False)
ax.flatten()[i].axes.yaxis.set_visible(False)
# fig.tight_layout()
fig.suptitle('Reconstructed separable\n'+r'STK $\sigma_1 \left| u_1 \right\rangle \left\langle {v_1} \right|(t,\vec{x})$ by SVD', fontsize=24)
fig.set_size_inches(6,8)
| Code/STKnSVD.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
from dishonest_casino import dishonest_casino_play
import numpy as np
import matplotlib.pyplot as plt
# %matplotlib inline
# -
fair_prob = [1./6, 1./6, 1./6, 1./6, 1./6, 1./6]
unfair_prob = [1./10, 1./10, 1./10, 1./10, 1./10, 1./2]
switch_to_loaded_dice_prob = 0.05
switch_to_fair_dice_prob = 0.1
n = 200
h, v = dishonest_casino_play(n=n, fair_prob=fair_prob, unfair_prob=unfair_prob,
prob_switch_to_unfair=switch_to_loaded_dice_prob,
prob_switch_to_fair=switch_to_fair_dice_prob)
# +
x = np.arange(1, len(h) + 1, 1)
v = np.array(v)
h = np.array(h)
possible_values = [1, 2, 3, 4, 5, 6]
values_fair_dice = v[h==0]
freq_fair_dice = [sum(values_fair_dice==i) for i in possible_values]
values_loaded_dice = v[h==1]
freq_loaded_dice = [sum(values_loaded_dice==i) for i in possible_values]
freq_global = [sum(v == i) for i in possible_values]
values = [1, 2, 3, 4, 5, 6]
explode = (0, 0, 0, 0, 0, 0.1)
fig = plt.figure(figsize=(50, 20), dpi= 80, facecolor='w', edgecolor='k')
ax1 = fig.add_subplot(131)
ax1.pie(freq_fair_dice, labels=values, autopct='%1.1f%%',
shadow=True, startangle=90)
ax1.axis('equal') # Equal aspect ratio ensures that pie is drawn as a circle.
ax1.set_title("fair dice")
#explode = (0, 0.1)
ax2 = fig.add_subplot(132)
ax2.pie(freq_loaded_dice, explode=explode, labels=values, autopct='%1.1f%%',
shadow=True, startangle=90)
ax2.axis('equal')
ax2.set_title("loaded dice")
ax3 = fig.add_subplot(133)
ax3.pie(freq_global, explode=explode, labels=values, autopct='%1.1f%%',
shadow=True, startangle=90)
ax3.set_title("global")
plt.show()
# -
fig = plt.figure(figsize=(12, 4), dpi= 80, facecolor='w', edgecolor='k')
ax1 = fig.add_subplot(121)
ax1.scatter(x, v, color='k', label='observed values')
ax1.set_title('all observed values')
ax1.set_ylim(0, 10)
ax2 = fig.add_subplot(122)
ax2.scatter(x[h==0], v[h==0], color='r', label='values with fair dice')
ax2.scatter(x[h==1], v[h==1], color='b', label='values with loaded dice')
ax2.set_ylim(0, 10)
ax2.set_title('Information on hidden states')
plt.legend(loc='best')
plt.show()
# ## Recovering the hidden states using the Viterbi algorithm
#
# We are going to recover the most likely hidden path using the Viterbi algorithm.
# +
from math import log
def viterbi(obs_states, hidden_states, init_prob, trans_prob, emis_prob):
d_viterbi = []
# Initialisation
log_probabilities = [log(init_prob[k]) + log(emis_prob[k][obs_states[0]]) for k in hidden_states]
previous_state = None
d_viterbi.append({"log_prob": log_probabilities, "prev_s" : previous_state})
# Do Viterbi
for i in range(1, len(obs_states)):
log_probabilities = []
previous_state = []
for l in hidden_states:
trans_prob_to_l = [d_viterbi[i-1]["log_prob"][k] +\
log(trans_prob[k][l]) for k in hidden_states]
max_log_prob = max(trans_prob_to_l)
log_probabilities.append(max_log_prob + log(emis_prob[l][obs_states[i]]))
previous_state.append(trans_prob_to_l.index(max_log_prob))
d_viterbi.append({"log_prob": log_probabilities, "prev_s": previous_state})
# Last state
last_prob = max(d_viterbi[-1]["log_prob"])
last_state = d_viterbi[-1]["log_prob"].index(last_prob)
# Tracing back
h_states = [0 for i in range(len(obs_states))]
h_states[-1] = last_state
for i in range(1, len(obs_states)):
prev_state = d_viterbi[-i]["prev_s"][h_states[-i]]
h_states[-i-1] = prev_state
return (h_states, last_prob)
h, v = dishonest_casino_play(n=300, fair_prob=fair_prob, unfair_prob=unfair_prob,
prob_switch_to_unfair=switch_to_loaded_dice_prob,
prob_switch_to_fair=switch_to_fair_dice_prob)
obs_states = [i-1 for i in v]
hidden_states = [0, 1]
initial_prob = [0.5, 0.5]
trans_prob = [[0.95, 0.05], [0.1, 0.9]]
emis_prob = [[1./6, 1./6, 1./6, 1./6, 1./6, 1./6],
[0.1, 0.1, 0.1, 0.1, 0.1, 0.5]]
estim_h, prob = viterbi(obs_states, hidden_states, initial_prob, trans_prob, emis_prob)
# -
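# Since the simulator also returns the true hidden states, one natural check (not part of the original notebook) is the fraction of time steps where the decoded path matches the truth:

```python
def decoding_accuracy(true_states, decoded_states):
    """Fraction of positions where the decoded path matches the true states."""
    matches = sum(t == d for t, d in zip(true_states, decoded_states))
    return matches / len(true_states)

# e.g. decoding_accuracy(h, estim_h) on the sequences above
print(decoding_accuracy([0, 0, 1, 1], [0, 1, 1, 1]))  # 0.75
```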
mat = np.zeros((len(h), len(h)))
for i in range(len(h)):
mat[i,] = h
fig = plt.figure()
ax = fig.add_subplot(111)
ax.matshow(mat,cmap='gray')
ax.plot([i*200 for i in h], color='r')
ax.set_ylim(-10, 210)
plt.show()
mat = [[1, 2, 3, 1, 1, 1, 1, 1, 1, 1]]
fig = plt.figure()
ax = fig.add_subplot(111)
ax.matshow(mat)
plt.show()
| Notebooks/WR-playing_dishonest_casino.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # DS Automation Assignment
# Using our prepared churn data from week 2:
# - use pycaret to find an ML algorithm that performs best on the data
# - Choose a metric you think is best to use for finding the best model; by default, it is accuracy but it could be AUC, precision, recall, etc. The week 3 FTE has some information on these different metrics.
# - save the model to disk
# - create a Python script/file/module with a function that takes a pandas dataframe as an input and returns the probability of churn for each row in the dataframe
# - your Python file/function should print out the predictions for new data (new_churn_data.csv)
# - the true values for the new data are [1, 0, 0, 1, 0] if you're interested
# - test your Python module and function with the new data, new_churn_data.csv
# - write a short summary of the process and results at the end of this notebook
# - upload this Jupyter Notebook and Python file to a Github repository, and turn in a link to the repository in the week 5 assignment dropbox
#
# *Optional* challenges:
# - return the probability of churn for each new prediction, and the percentile where that prediction is in the distribution of probability predictions from the training dataset (e.g. a high probability of churn like 0.78 might be at the 90th percentile)
# - use other autoML packages, such as TPOT, H2O, MLBox, etc, and compare performance and features with pycaret
# - create a class in your Python module to hold the functions that you created
# - accept user input to specify a file using a tool such as Python's `input()` function, the `click` package for command-line arguments, or a GUI
# - Use the unmodified churn data (new_unmodified_churn_data.csv) in your Python script. This will require adding the same preprocessing steps from week 2 since this data is like the original unmodified dataset from week 1.
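# For the first optional challenge, the percentile of a new churn probability within the training-set distribution can be computed with a tiny helper; this sketch is pure Python, independent of pycaret, and the sample probabilities are made up:

```python
def probability_percentile(train_probs, new_prob):
    """Percentile of `new_prob` within the training probability distribution."""
    below = sum(p <= new_prob for p in train_probs)
    return 100.0 * below / len(train_probs)

# 5 of the 6 made-up training probabilities are <= 0.78, so ~83.3rd percentile
print(probability_percentile([0.1, 0.2, 0.3, 0.5, 0.78, 0.9], 0.78))
```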
# +
import pandas as pd
df = pd.read_csv('prepped_churn_data.csv', index_col ='customerID')
df
# -
# AutoML with PyCaret: installing the package
# !conda install -c conda-forge pycaret -y
from pycaret.classification import setup, compare_models, predict_model, save_model, load_model
automl = setup(df, target = 'Churn')
automl
# Use AutoML to find the best model
best_model= compare_models()
best_model
df.iloc[-2:-1].shape
predict_model(best_model, df.iloc[-2:-1])
# Save the model to a file
save_model(best_model, 'ABC')
# +
import pickle
with open('ABC_model.pk', 'wb') as f:
pickle.dump(best_model, f)
# -
with open('ABC_model.pk', 'rb') as f:
loaded_model = pickle.load(f)
new_data = df.iloc[-2:-1].copy()
new_data.drop('Churn', axis=1, inplace=True)
loaded_model.predict(new_data)
loaded_abc=load_model('ABC')
predict_model(loaded_abc, new_data)
# +
from IPython.display import Code
Code('predict_churn.py')
# -
# %run predict_churn.py
import pandas as pd
from pycaret.classification import predict_model, load_model
def load_data(filepath):
    """
    Loads churn data into a dataframe from a string filepath.
    """
    df = pd.read_csv(filepath, index_col='customerID')
    return df
def make_predictions(df):
    """
    Uses the saved pycaret best model to make predictions on data in the df dataframe.
    """
    model = load_model('ABC')
    predictions = predict_model(model, data=df)
    predictions.rename({'Label': 'Churn_prediction'}, axis=1, inplace=True)
    predictions['Churn_prediction'].replace({1: 'Churn', 0: 'No Churn'}, inplace=True)
    return predictions['Churn_prediction']
if __name__ == "__main__":
    df = load_data('new_churn_data.csv')
    predictions = make_predictions(df)
    print('predictions:')
    print(predictions)
# # Summary
# Write a short summary of the process and results here.
| Week_5_assignment_starter.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Report on US Healthcare Database with Statistical Analysis
# + [markdown] _cell_guid="e517f1f3-b54e-4af4-bcca-a2cf6362760c" _uuid="03f9ed9a25f2361725c70bc7e79b3c20c1689c7c"
# The health searches dataset contains statistics of Google searches made in the US.
# To start our analysis, let's read the data into a pandas dataframe and look at the first 3 rows to understand the columns/data.
# + _cell_guid="9230bd00-6365-4b1a-bea1-b4f8fef5ec93" _uuid="e2105e85a340f0500e1722e35624a11db0e0b442"
import numpy as np
import pandas as pd
from IPython.display import display
import matplotlib.pyplot as plt
# %matplotlib inline
healthSearchData=pd.read_csv("RegionalInterestByConditionOverTime.csv")
healthSearchData.head(3)
# + [markdown] _cell_guid="d92bbc78-8c7d-401e-961a-57153a79274a" _uuid="3f49954ecf067b4f8060a7e294a228906fed604f"
# For our study, we do not consider the "geoCode" column and lets drop it. This is because we already have the city name in a separate column and I would like to keep the data simple.
# + _cell_guid="7c5172ff-6662-42cb-9300-e3bb3ec650f4" _uuid="755c3f945607176620eca2ee6afd14c8a04d14d4"
healthSearchData = healthSearchData.drop(['geoCode'],axis=1)
# + [markdown] _cell_guid="43a39db6-4895-4481-b0b6-23122280258c" _uuid="8e90b2b61d4b35eeeb663eaa7e337425fa7fb848"
# In the dataset, we have 9 medical conditions and the search data runs from 2004 to 2017. It's so refreshing to see data for more than 10 years. Anyway, we now plot the year-wise search change for the diseases available.
# + _cell_guid="2175a00c-0c89-4797-a6ee-a7c74f4cd479" _uuid="79f8474948ed02af9ddc2f5d3a0ac5fdf8e52c5b"
#2004-2017
#cancer cardiovascular stroke depression rehab vaccine diarrhea obesity diabetes
yearWiseMeam = {}
for col in healthSearchData.columns:
if '+' in col:
year = col.split('+')[0]
disease = col.split('+')[-1]
if not disease in yearWiseMeam:
yearWiseMeam[disease] = {}
if not year in yearWiseMeam[disease]:
yearWiseMeam[disease][year] = np.mean(list(healthSearchData[col]))
plt.figure(figsize=(18, 6))
ax = plt.subplot(111)
plt.title("Year wise google medical search", fontsize=20)
ax.set_xticks([0,1,2,3,4,5,6,7,8,9,10,11,12,13])
ax.set_xticklabels(list(yearWiseMeam['cancer'].keys()))
lh = {}
for disease in yearWiseMeam:
lh[disease] = plt.plot(yearWiseMeam[disease].values())
plt.legend(lh, loc='best')
# + [markdown] _cell_guid="cdef5293-7849-452a-8ff3-2de46bf7f7b2" _uuid="3abc5679d2a220bb6fd3451a3f01e95f41c318a1"
# It can be observed that the line plot has many uneven jumps. Let's smooth the plot and visualise what the searches look like. This is just for observational benefit and need not be performed every time.
# + _cell_guid="57a49b99-cbe3-444c-8357-17fd66e38b95" _uuid="be6453fc2b0fb7120bacbbe540ec712287c73c77"
plt.figure(figsize=(18, 6))
ax = plt.subplot(111)
plt.title("Year wise google medical search [smoothened]", fontsize=20)
ax.set_xticks([0,1,2,3,4,5,6,7,8,9,10,11,12,13])
ax.set_xticklabels(list(yearWiseMeam['cancer'].keys()))
lh = {}
myLambda = 0.7
for disease in yearWiseMeam:
tempList = list(yearWiseMeam[disease].values())
localMean = np.mean(tempList)
smoothList = []
for x in tempList:
smoothList.append(x + myLambda * (localMean - x))
lh[disease] = plt.plot(smoothList)
plt.legend(lh, loc='best')
# + [markdown] _cell_guid="75f3148f-2729-4d5f-89d6-04cd8d674e7f" _uuid="e80c6af9b23a5b3697253d984bb382d8fdc9b9ab"
# We see that cancer is the most searched illness, whereas cardiovascular is the least searched. Surprisingly, in 2017, diabetes is the highest searched illness. I believe that people are becoming more aware of their health, and this is mostly preemptive searching to avoid future illness. Whatever the case, diabetes has overtaken cancer in the search data.
#
#
# -
# # Conclusion
# It appears what I thought would be seen in the data came out to be true. There has been an increase in searches for health issues every year except for one. Also, the region considered the "sickest" (South) has the most searches for health issues, while the region considered the "healthiest" (Northeast) has the least searches.
| analysis.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # A tuple is an immutable iterable datatype, whereas a list is mutable.
# ## creation of tuple
# +
# You can't add elements to a tuple. Tuples have no append or extend method.
# You can't remove elements from a tuple. Tuples have no remove or pop method.
# You can find elements in a tuple, since this doesn't change the tuple.
# You can also use the in operator to check if an element exists in the tuple.
# -
# A tuple is made using the () operator
flowers = ("Rose", "Lily", "Iris", "Tulip")
# You can't append -- tuples have no append method
try:
    flowers.append("Magenta")
except AttributeError as err:
    print(err)
# You can't remove an element -- tuples have no remove method
try:
    flowers.remove("Tulip")
except AttributeError as err:
    print(err)
# You can't pop() either
try:
    flowers.pop()
except AttributeError as err:
    print(err)
print(flowers)
# +
# Tuples are faster than lists.
# -
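# The claim above can be spot-checked with the standard library's `timeit`; constructing a tuple literal is typically faster than the equivalent list literal, because the immutable tuple can be built once as a constant:

```python
import timeit

t_tuple = timeit.timeit("(1, 2, 3, 4, 5)", number=1_000_000)
t_list = timeit.timeit("[1, 2, 3, 4, 5]", number=1_000_000)
print(f"tuple: {t_tuple:.3f}s  list: {t_list:.3f}s")
```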
| 09. Python Tuples/01. Advantages of Tuple over List.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python [conda root]
# language: python
# name: conda-root-py
# ---
import numpy as np
from scipy.misc import derivative
def f(x): return x**5
derivative(f, 1.0, dx=1e-6, order=15)
derivative(f, 1.0, dx=1e-6, order=15, n=2)
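# `scipy.misc.derivative` is built on finite differences; a minimal central-difference sketch makes the idea concrete:

```python
def central_diff(f, x, dx=1e-6):
    """Second-order central-difference approximation of f'(x)."""
    return (f(x + dx) - f(x - dx)) / (2 * dx)

print(central_diff(lambda x: x**5, 1.0))  # approximately 5, the exact derivative
```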
# +
p = np.poly1d([1,0,0,0,0,0]);
print (p)
np.polyder(p,1)(1.0)
p.deriv()(1.0)
np.polyder(p,2)(1.0)
p.deriv(2)(1.0)
# -
from sympy import diff, symbols
x = symbols('x', real=True)
diff(x**5, x)
diff(x**5, x, x)
diff(x**5, x).subs(x, 1.0)
diff(x**5, x, x).subs(x, 1.0)
# +
def g(x): return np.exp(-x) * np.sin(x)
derivative(g, 1.0, dx=1e-6, order=101)
from sympy import sin as Sin, exp as Exp
diff(Exp(-x) * Sin(x), x).subs(x, 1.0)
# -
y, z = symbols('y z', real=True)
diff(Exp(x * y * z), z, z, y, x).subs({x:1.0, y:1.0, z:2.0})
| Chapter08/Differentiation.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import json
import requests
github_articles = []
for offset in range(0, 100000, 1000):
query = f"https://p55oroem7k.execute-api.eu-west-1.amazonaws.com/prod/articles/?skip={offset}&limit=1000"
res = requests.get(query).json()
print(query + " -- " + str(len(res)))
for a in res:
if "github" in a["summary"].lower():
github_articles.append(a)
len(github_articles)
with open("github_articles.json", "w") as fout:
json.dump(github_articles, fout)
with open("github_articles.json", "r") as fin:
github_articles = json.load(fin)
len(github_articles)
github_articles_ids = set()
for a in github_articles:
github_articles_ids.add(a["id"])
len(github_articles_ids)
with open("response_1594217709420.json", "r") as fin:
github_articles_search = json.load(fin)
len(github_articles_search)
github_articles_search_ids = set()
for a in github_articles_search:
github_articles_search_ids.add(a["id"])
len(github_articles_search_ids)
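# With both id sets in hand, plain set algebra summarizes how the summary-scan results compare with the search-endpoint results:

```python
def compare_id_sets(a, b):
    """Summarize the overlap between two collections of article ids."""
    a, b = set(a), set(b)
    return {"common": len(a & b),
            "only_first": len(a - b),
            "only_second": len(b - a)}

# e.g. compare_id_sets(github_articles_ids, github_articles_search_ids)
print(compare_id_sets([1, 2, 3], [2, 3, 4]))  # {'common': 2, 'only_first': 1, 'only_second': 1}
```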
# ## Async IO version
# +
import asyncio
import httpx
async def request_one(offset, client):
url = f"https://p55oroem7k.execute-api.eu-west-1.amazonaws.com/prod/articles/?skip={offset}&limit=1000"
resp = await client.get(url, timeout=30)
resp = resp.json()
gh_articles = []
try:
for a in resp:
if "github" in a["summary"].lower():
gh_articles.append(a)
return gh_articles
except TypeError as e:
print(resp)
raise(e)
async def perform_requests():
async with httpx.AsyncClient() as client:
tasks = []
for offset in range(0, 98000, 1000):
tasks.append(
request_one(offset=offset, client=client)
)
res = await asyncio.gather(*tasks)
return res
# -
results = await perform_requests()
gh_articles = [a for r in results for a in r]
| results/20200706-cord-github/find_github.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# # Ray Tasks Revisited
#
# ยฉ 2019-2022, Anyscale. All Rights Reserved
#
# 
#
# The [Ray Crash Course](../ray-crash-course/00-Ray-Crash-Course-Overview.ipynb) introduced the core concepts of Ray's API and how they parallelize work. Specifically, we learned how to define Ray _tasks_ and _actors_, run them, and retrieve the results.
#
# This lesson explores Ray tasks in greater depth, including the following:
#
# * How task dependencies are handled automatically by Ray
# * Usage patterns for `ray.get()` and `ray.wait()`
# * Specifying limits on the number of invocations and retries on failure
# * An exploration of task granularity considerations
# * Profiling tasks
import ray, time, os, sys
import numpy as np
sys.path.append("..")
from util.printing import pd, pnd # convenience methods for printing results.
ray.init(ignore_reinit_error=True)
# The Ray Dashboard URL is printed above and also part of the output dictionary item `webui_url`
#
# (When using the Anyscale platform, use the URL provided by your instructor to access the Ray Dashboard.)
# ## Ray Task Dependencies
#
# Let's define a few remote tasks, which will have _dependency_ relationships. We'll learn how Ray handles these dependent, asynchronous computations.
#
# One task will return a random NumPy array of some size `n` and the other task will add two such arrays. We'll also add a sleep time, one tenth the size of `n` to simulate expensive computation.
#
# > **Note:** Dependencies and how Ray implements handling of them are explored in depth in the [03: Ray Internals](03-Ray-Internals.ipynb) lesson.
@ray.remote
def make_array(n):
time.sleep(n/10.0)
return np.random.standard_normal(n)
# Now define a task that can add two NumPy arrays together. The arrays need to be the same size, but we'll ignore any checking for this requirement.
@ray.remote
def add_arrays(a1, a2):
time.sleep(a1.size/10.0)
return np.add(a1, a2)
# Now lets use them!
start = time.time()
ref1 = make_array.remote(20)
ref2 = make_array.remote(20)
ref3 = add_arrays.remote(ref1, ref2)
print(ray.get(ref3))
pd(time.time() - start, prefix="Total time:")
# Something subtle and "magical" happened here; when we called `add_arrays`, we didn't need to call `ray.get()` first for `ref1` and `ref2`, since `add_arrays` expects NumPy arrays. Because `add_arrays` is a Ray task, Ray automatically does the extraction for us, so we can write code that looks more natural and Pythonic.
#
# Furthermore, note that the `add_arrays` task effectively depends on the outputs of the two `make_array` tasks. Ray won't run `add_arrays` until the other tasks are finished. Hence, Ray automatically handles task dependencies for us.
#
# This is why the elapsed time is about 4 seconds. We used a size of 20, so we slept 2 seconds in each call to `make_array`, but those happened in parallel, _followed_ by a second sleep of 2 seconds in `add_arrays`.
# Even though three task invocations occurred, we only used one call to `ray.get()`, when we actually needed the final results. Eliminating unnecessary `ray.get()` calls helps avoid forcing tasks to become synchronous when they could be asynchronous. So, keep these two key points in mind:
#
# * _Don't ask for results you don't need._
# * _Don't ask for the results you need until you really need them._
#
# We don't need to see the objects for `id1` and `id2`. We only need the final array for `id3`.
# ## Using ray.wait() with ray.get()
#
# Here is an idiomatic way to use `ray.get()`, where we fire all five asynchronous tasks at once, then ask for all the results at once with `ray.get()`:
# +
start = time.time()
# List comprehension: five NumPy object references (futures) created
array_refs = [make_array.remote(n*10) for n in range(5)]
# List comprehension: object references (futures) for the results of the additions
added_array_refs = [add_arrays.remote(ref, ref) for ref in array_refs]
# Iterate over the list of object references (futures)
for array in ray.get(added_array_refs):
print(f'{array.size}: {array}')
pd(time.time() - start, prefix="Total time:")
# -
# This takes about eight seconds: four seconds for the longest invocation of `make_array` (`make_array(4)`), and four seconds for the longest invocation of `add_arrays`, when passed the results of `make_array(4)`.
#
# We did the right thing inside each list comprehension. We started the asynchronous tasks all at once and allowed Ray to handle the dependencies. Then we waited on one `ray.get()` call for all the output.
#
# However, what you see is no output and then everything is suddenly printed at once after eight seconds.
# There are two fundamental problems with the way we've used `ray.get()` so far:
#
# 1. There's no timeout, in case something gets "hung".
# 2. We have to wait for _all_ the objects to be available before `ray.get()` returns.
#
# The ability to specify a timeout is essential in production code as a defensive measure. Many potential problems could happen in a real production system, any one of which could cause the task we're waiting on to take an abnormally long time to complete or never complete. Our application would be deadlocked waiting on this task. Hence, it's **strongly recommended** in production software to always use timeouts on blocking calls, so that the application can attempt some sort of recovery in situations like this, or at least report the error and "degrade gracefully".
#
# Actually, there _is_ a `timeout=<value>` option you can pass to `ray.get()` ([documentation](https://ray.readthedocs.io/en/latest/package-ref.html#ray.get)), but it will most likely be removed in a future release of Ray. Why remove it if timeouts are important? This change will simplify the implementation of `ray.get()` and encourage the use of `ray.wait()` for waiting ([documentation](https://ray.readthedocs.io/en/latest/package-ref.html#ray.wait)) instead, followed by using `ray.get()` to retrieve values for tasks that `ray.wait()` tells us are finished.
#
# Using `ray.wait()` is also the way to fix the second problem with using `ray.get()` by itself, that we have to wait for all tasks to finish before we get any values back. Some of those tasks finish more quickly in our contrived example. We would like to process those results as soon as they are available, even while others continue to run. We'll use `ray.wait()` for this purpose.
#
# Therefore, while `ray.get()` is simple and convenient, for _production code_, we recommend using `ray.wait()`, **with** timeouts, for blocking on running tasks. Then use `ray.get()` to retrieve values of completed tasks.
#
# Here is the previous example rewritten to use `ray.wait()`:
# +
start = time.time()
array_refs = [make_array.remote(n*10) for n in range(5)]
added_array_refs = [add_arrays.remote(ref, ref) for ref in array_refs]
arrays = []
waiting_refs = list(added_array_refs) # Assign a working list to the full list of refs
while len(waiting_refs) > 0:    # Loop until all tasks have completed
    # Call ray.wait with:
    # 1. the list of refs we're still waiting to complete,
    # 2. tell it to return immediately as soon as one of them completes,
    # 3. tell it to wait up to 10 seconds before timing out.
    ready_refs, remaining_refs = ray.wait(waiting_refs, num_returns=1, timeout=10.0)
    print('Returned {:3d} completed tasks. (elapsed time: {:6.3f})'.format(len(ready_refs), time.time() - start))
    new_arrays = ray.get(ready_refs)
    arrays.extend(new_arrays)
    for array in new_arrays:
        print(f'{array.size}: {array}')
    waiting_refs = remaining_refs  # Reset this list; don't include the completed refs in the list again!
print(f"\nall arrays: {arrays}")
pd(time.time() - start, prefix="Total time:")
# -
# Now it still takes about 8 seconds to complete, 4 seconds for the longest invocation of `make_array` and 4 seconds for the invocation of `add_arrays`, but since the others complete more quickly, we see their results as soon as they become available, at 0, 2, 4, and 6 second intervals.
#
# > **Warning:** For each call to `ray.wait()` in a loop like this, it's important to remove the refs that have completed. Otherwise, `ray.wait()` will return immediately with the same list containing the first completed item, over and over again; you'll loop forever! Resetting the list is easy, since the second list returned by `ray.wait()` holds the items that are still running. So, that's what we use.
#
# Now let's try it with `num_returns = 2`:
# +
start = time.time()
array_refs = [make_array.remote(n*10) for n in range(5)]
added_array_refs = [add_arrays.remote(ref, ref) for ref in array_refs]
arrays = []
waiting_refs = list(added_array_refs) # Assign a working list to the full list of refs
while len(waiting_refs) > 0:    # Loop until all tasks have completed
    # Call ray.wait with:
    # 1. the list of refs we're still waiting to complete,
    # 2. tell it to return immediately as soon as TWO of them complete,
    # 3. tell it to wait up to 10 seconds before timing out.
    return_n = 2 if len(waiting_refs) > 1 else 1
    ready_refs, remaining_refs = ray.wait(waiting_refs, num_returns=return_n, timeout=10.0)
    print('Returned {:3d} completed tasks. (elapsed time: {:6.3f})'.format(len(ready_refs), time.time() - start))
    new_arrays = ray.get(ready_refs)
    arrays.extend(new_arrays)
    for array in new_arrays:
        print(f'{array.size}: {array}')
    waiting_refs = remaining_refs  # Reset this list; don't include the completed refs in the list again!
print(f"\nall arrays: {arrays}")
pd(time.time() - start, prefix="Total time:")
# -
# Now the results are returned two at a time. Note that we don't actually pass `num_returns=2` every time; asking for more items than the length of the input list raises an error. So we compute `return_n`, using `2` except when there's only one task left to wait on, in which case we use `1`. In fact, the output for `40` was a single task result, because we started with `5` tasks and processed two at a time.
# For a longer discussion on `ray.wait()`, see [this blog post](https://medium.com/distributed-computing-with-ray/ray-tips-and-tricks-part-i-ray-wait-9ed7a0b9836d).
# ## Exercise 1
#
# The following cell is identical to the last one. Modify it to use a timeout of `2.5` seconds, shorter than our longest tasks. What happens now? Try other timeout values.
#
# See the [solutions notebook](solutions/Advanced-Ray-Solutions.ipynb) for a discussion of this exercise and the subsequent exercises.
# +
start = time.time()
array_refs = [make_array.remote(n*10) for n in range(5)]
added_array_refs = [add_arrays.remote(ref, ref) for ref in array_refs]
arrays = []
waiting_refs = list(added_array_refs) # Assign a working list to the full list of refs
while len(waiting_refs) > 0:    # Loop until all tasks have completed
    # Call ray.wait with:
    # 1. the list of refs we're still waiting to complete,
    # 2. tell it to return immediately as soon as TWO of them complete,
    # 3. tell it to wait up to 10 seconds before timing out.
    return_n = 2 if len(waiting_refs) > 1 else 1
    ready_refs, remaining_refs = ray.wait(waiting_refs, num_returns=return_n, timeout=10.0)
    print('Returned {:3d} completed tasks. (elapsed time: {:6.3f})'.format(len(ready_refs), time.time() - start))
    new_arrays = ray.get(ready_refs)
    arrays.extend(new_arrays)
    for array in new_arrays:
        print(f'{array.size}: {array}')
    waiting_refs = remaining_refs  # Reset this list; don't include the completed refs in the list again!
print(f"\nall arrays: {arrays}")
pd(time.time() - start, prefix="Total time:")
# -
# In conclusion:
#
# > **Tips:**
# >
# > 1. Use `ray.wait()` with a timeout to wait for one or more running tasks. Then use `ray.get()` to retrieve the values for the finished tasks.
# > 2. When looping over calls to `ray.wait()` with a list of object refs for running tasks, remove the previously-completed and retrieved objects from the list.
# > 3. Don't ask for results you don't need.
# > 4. Don't ask for the results you need until you really need them.
# ## Exercise 3
#
# Let's practice converting a slow loop to Ray, including using `ray.wait()`. Change the function to be a Ray task. Change the invocations to use the `ray.wait()` idiom. You can just use the default values for `num_returns` and `timeout` if you want. The second cell uses `assert` statements to check your work.
# +
def slow_square(n):
    time.sleep(n)
    return n*n

start = time.time()
squares = [slow_square(n) for n in range(4)]
for square in squares:
    print(f'finished: {square}')
duration = time.time() - start
# -
assert squares == [0, 1, 4, 9], f'Did you use ray.get() to retrieve the values? squares = {squares}'
assert duration < 4.1, f'Did you use Ray to parallelize the work? duration = {duration}'
# ## Limiting Task Invocations and Retries on Failure
#
# > **Note:** This feature may change in a future version of Ray. See the latest details in the [Ray documentation](https://docs.ray.io/en/latest/package-ref.html#ray.remote).
#
# Two options you can pass to `ray.remote` when defining a task control how many times it can be invoked per worker and how it is retried on failure:
#
# * `max_calls`: This specifies the maximum number of times that a given worker can execute the given remote function before it must exit. This can be used to address memory leaks in third-party libraries or to reclaim resources that cannot easily be released, e.g., GPU memory that was acquired by TensorFlow. By default this is infinite.
# * `max_retries`: This specifies the maximum number of times that the remote function should be rerun when the worker process executing it crashes unexpectedly. The minimum valid value is 0, the default is 4, and a value of -1 indicates infinite retries are allowed.
#
# Example:
#
# ```python
# @ray.remote(max_calls=10000, max_retries=10)
# def foo():
# pass
# ```
#
# See the [ray.remote()](https://docs.ray.io/en/latest/package-ref.html#ray.remote) documentation for all the keyword arguments supported.
# ### Overriding with options()
#
# Remote task and actor objects returned by `@ray.remote` can also be dynamically modified with the same arguments supported by `ray.remote()` using `options()` as in the following examples:
#
# ```python
# @ray.remote(num_gpus=1, max_calls=1, num_return_vals=2)
# def f():
# return 1, 2
# g = f.options(num_gpus=2, max_calls=None)
# ```
# ## What Is the Optimal Task Granularity?
#
# How fine-grained should Ray tasks be? There's no fixed rule of thumb, but Ray clearly adds some overhead for task management and using object stores in a cluster. Therefore, it makes sense that tasks which are too small will perform poorly.
#
# We'll explore this topic over several more lessons, but for now, let's get a sense of the overhead while running in your setup.
#
# We'll continue to use NumPy arrays to create "load", but remove the `sleep` calls:
# +
def noop(n):
    return n

def local_make_array(n):
    return np.random.standard_normal(n)

@ray.remote
def remote_make_array(n):
    return local_make_array(n)
# -
# Let's do `trials` runs for each experiment, to average out background noise:
trials=100
# First, let's use `noop` to baseline local function calls. Note that we print the duration with `print` rather than `pd`, because the overhead is so low that the `pd` formatting would print `0.000`:
start = time.time()
[noop(t) for t in range(trials)]
print(f'{time.time() - start} seconds')
# Let's try the same run with `local_make_array(n)` for `n = 100000`:
start = time.time()
[local_make_array(100000) for _ in range(trials)]
print(f'{time.time() - start} seconds')
# So, we can safely ignore the "noop" overhead for now. For completeness, here's what happens with remote execution:
start = time.time()
refs = [remote_make_array.remote(100000) for _ in range(trials)]
ray.get(refs)
print(f'{time.time() - start} seconds')
# For arrays of 100000, using Ray is faster (at least on this test machine). The benefits of parallel computation, rather than synchronous execution, already outweigh the Ray overhead.
# ## Exercise 4
#
# 1. Try doubling the array size from `n` to `2n`.
# 2. Do you see a marked difference between the local and remote times?
# 3. For the brave, try using `matplotlib` to plot the results.
# ## Profiling Tasks with ray.timeline()
#
# Sometimes you need to debug performance problems in Ray tasks. Calling `ray.timeline(file)` ([documentation](https://ray.readthedocs.io/en/latest/package-ref.html#ray.timeline)) captures profiling information for subsequent task execution to the specified file. Afterwards, you can view the data in the Chrome web browser. The format is unique to Chrome, so Chrome must be used to view the data.
#
# Let's try it with our `make_array` and `add_arrays` methods in the following code. First some potential cleanup:
timeline_file = 'task-timeline.txt' # Will be found in the same directory as this notebook.
if os.path.isfile(timeline_file):  # Delete old one, if an old one exists already.
    os.remove(timeline_file)
ray.timeline(timeline_file)
start = time.time()
array_refs = [make_array.remote(n*10) for n in range(5)]
added_array_refs = [add_arrays.remote(ref, ref) for ref in array_refs]
for array in ray.get(added_array_refs):
    print(f'{array.size}: {array}')
pd(time.time() - start, prefix="Total time:")
# Now, to view the data:
#
# 1. Open Chrome and enter chrome://tracing.
# 2. Click the _Load_ button to load the `task-timeline.txt` file, which will be in this notebook's directory.
# 3. To zoom in or out, click the "asymmetric" up-down arrow button. Then hold the mouse button in the graph and roll the mouse scroll wheel up or down. (On a laptop trackpad, press and hold, then use another finger to slide up and down.)
# 4. To move around, click the crossed arrow and drag a section in view.
# 5. Click on a box in the timeline to see details about it.
#
# Look for blocks corresponding to long-running tasks and look for idle periods, which reflect processing outside the context of Ray.
#
# Here is a screen grab profiling the previous code, zoomed in on one block of tasks and with one task selected. Note the processes shown on the left for drivers (more than one notebook was running at this time) and workers.
#
# 
ray.shutdown() # "Undo ray.init()".
# The next lesson, [Ray Actors Revisited](02-Ray-Actors-Revisited.ipynb), revisits actors. It provides a more in-depth look at actor characteristics and profiling actor performance using the _Ray Dashboard_.
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="7lEU2B93ivBz" colab_type="text"
# # Hierarchical Clustering
# In this notebook we give a basic example of how agglomerative hierarchical clustering works.
# We use the scipy and sklearn libraries.
# + id="beCEkyHzwL-5" colab_type="code" colab={}
from sklearn.metrics import normalized_mutual_info_score
import matplotlib.pyplot as plt
from scipy.cluster.hierarchy import dendrogram, linkage, fcluster
from sklearn.datasets import make_blobs
import numpy as np
# + [markdown] id="elEYgSyIjP8c" colab_type="text"
# # Generating Sample data
# `make_blobs` is used to generate sample data where:
#
#
# `n_samples` : the total number of points equally divided among clusters.
#
# `centers` : the number of centers to generate, or the fixed center locations.
#
# `n_features` : the number of features for each sample.
#
# `random_state`: determines random number generation for dataset creation.
#
#
#
# This function returns two outputs:
#
# `X`: the generated samples.
#
# `y`: The integer labels for cluster membership of each sample.
#
# Then we use `plt.scatter` to plot the data points in the figure below.
#
#
# + id="Nxjz1FiSEl9Q" colab_type="code" outputId="3f6f6713-ab54-4250-df8a-68b7922d5313" colab={"base_uri": "https://localhost:8080/", "height": 347}
X, y = make_blobs(n_samples=90, centers=4, n_features=3, random_state=4)
plt.scatter(X[:, 0], X[:, 1])
plt.show()
# + [markdown] id="Gd2x3DM3qiLi" colab_type="text"
# # Performing Hierarchical clustering:
# In this part, we are performing agglomerative hierarchical clustering using linkage function from scipy library::
#
# `method`: is the linkage method, 'single' means the linkage method will be single linkage method.
#
# `metric`: is our similarity metric, 'euclidean' means the metric will be euclidean distance.
#
# "A `(n-1)` by 4 matrix `Z` is returned. At the -th iteration, clusters with indices `Z[i, 0]` and `Z[i, 1]` are combined to form cluster with index `(n+i)` . A cluster with an index less than `n` corresponds to one of the `n` original observations. The distance between clusters `Z[i, 0]` and `Z[i, 1]` is given by `Z[i, 2]`. The fourth value `Z[i, 3]` represents the number of original observations in the newly formed cluster.
#
# The following linkage methods are used to compute the distance `d(s,t)` between two clusters `s` and `t`. The algorithm begins with a forest of clusters that have yet to be used in the hierarchy being formed. When two clusters `s` and `t` from this forest are combined into a single cluster `u`, `s` and `t` are removed from the forest, and `u` is added to the forest. When only one cluster remains in the forest, the algorithm stops, and this cluster becomes the root.
#
# A distance matrix is maintained at each iteration. The `d[i,j]` entry corresponds to the distance between clusters `i` and `j` in the original forest.
#
# At each iteration, the algorithm must update the distance matrix to reflect the distance of the newly formed cluster u with the remaining clusters in the forest."
#
#
# For more details check the documentation of linkage: https://docs.scipy.org/doc/scipy/reference/generated/scipy.cluster.hierarchy.linkage.html
#
# + id="hrFUAgplFE8T" colab_type="code" outputId="fa9c51c8-3ef6-431b-dd36-15ba837f440a" colab={"base_uri": "https://localhost:8080/", "height": 1547}
Z = linkage(X, method="single", metric="euclidean")
print(Z.shape)
Z
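# To make the `Z` encoding concrete, here is a tiny self-contained example (a sketch with three points, so `n = 3`) that decodes each merge row:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage

pts = np.array([[0.0, 0.0], [0.0, 1.0], [5.0, 0.0]])
Z_small = linkage(pts, method="single", metric="euclidean")

# Each row is (cluster a, cluster b, distance, size of the new cluster);
# the new cluster gets index n + i, here 3 + i.
for i, (a, b, dist, size) in enumerate(Z_small):
    print(f"step {i}: merge {int(a)} and {int(b)} at distance {dist:.2f} "
          f"into cluster {len(pts) + i} (size {int(size)})")
```

# Points 0 and 1 are merged first (distance 1.0); the resulting cluster 3 is then merged with point 2 at distance 5.0, the single-linkage distance.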
# + [markdown] id="5KVO5Sb4wJNx" colab_type="text"
# # Plotting dendrogram
# The `dendrogram` function from scipy is used to plot the dendrogram:
#
#
#
# * On the `x` axis we see the indices of our samples.
# * On the `y` axis we see the distances of our metric ('Euclidean').
#
#
#
#
# + id="g5xM3EWJJBsH" colab_type="code" outputId="2006ee9b-4637-4cb3-c936-2c2e05196ab9" colab={"base_uri": "https://localhost:8080/", "height": 640}
plt.figure(figsize=(25, 10))
plt.title("Hierarchical Clustering Dendrogram")
plt.xlabel("Samples indexes")
plt.ylabel("distance")
dendrogram(Z, leaf_rotation=90., leaf_font_size=8.)
plt.show()
# + [markdown] id="kbERWste0pfM" colab_type="text"
# # Retrieve the clusters
# `fcluster` is used to retrieve flat clusters at a given distance level.
#
# The value 2 sets the distance at which we cut the dendrogram; the number of vertical lines crossed at that height equals the number of clusters.
# + id="vscUQI1hKYHc" colab_type="code" outputId="aeef37c7-347a-408a-8a70-0c3e23397308" colab={"base_uri": "https://localhost:8080/", "height": 102}
cluster = fcluster(Z, 2, criterion="distance")
cluster
# + [markdown] id="jXxmbM1i7cVT" colab_type="text"
# # Plotting Clusters
# Plotting the final result. Each color represents a different cluster (four clusters in total).
# + id="VMAFl7wiOOGt" colab_type="code" outputId="23188b59-f7a1-42bf-d30b-0a30aaf7d1c2" colab={"base_uri": "https://localhost:8080/", "height": 483}
plt.figure(figsize=(10, 8))
plt.scatter(X[:, 0], X[:, 1], c=cluster, cmap="Accent")
plt.savefig("clusters.png")
plt.show()
# + [markdown] id="2GU4miqf-dLu" colab_type="text"
# # Evaluating clusters
# Finally, we use the Normalized Mutual Information (NMI) score to evaluate our clusters. Mutual information is a symmetric measure of the degree of dependency between the clustering and the manual classification. An NMI value close to one indicates high similarity between the clusters and the actual labels, while a value close to zero indicates high dissimilarity between them.
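# As a quick sanity check of these properties (a small illustration added here, using the same `normalized_mutual_info_score` as below), note that NMI depends only on the partition, not on the label names:

```python
import numpy as np
from sklearn.metrics import normalized_mutual_info_score

y_true = np.array([0, 0, 1, 1, 2, 2])
perm = np.array([2, 2, 0, 0, 1, 1])  # same partition, labels renamed

print(normalized_mutual_info_score(y_true, y_true))  # 1.0: identical partitions
print(normalized_mutual_info_score(y_true, perm))    # 1.0: invariant to relabeling
```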
# + id="BirJIkyZOpfZ" colab_type="code" outputId="2c8f934f-0b98-474a-f7c5-610378c9f79b" colab={"base_uri": "https://localhost:8080/", "height": 88}
normalized_mutual_info_score(y, cluster)
# + id="b_TD3pKJbBkl" colab_type="code" colab={}
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# # Classification on CIFAR and ImageNet
# +
import sys
# check whether run in Colab
root = "."
if "google.colab" in sys.modules:
print("Running in Colab.")
# !pip3 install matplotlib
# !pip3 install einops==0.3.0
# !pip3 install timm==0.4.9
# !git clone https://github.com/xxxnell/how-do-vits-work.git
root = "./how-do-vits-work"
sys.path.append(root)
# +
import os
import time
import yaml
import copy
from pathlib import Path
import datetime
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torch.utils.tensorboard import SummaryWriter
import models
import ops.trains as trains
import ops.tests as tests
import ops.datasets as datasets
import ops.schedulers as schedulers
# +
# config_path = "%s/configs/cifar10_vit.yaml" % root
config_path = "%s/configs/cifar100_vit.yaml" % root
# config_path = "%s/configs/imagenet_vit.yaml" % root
with open(config_path) as f:
    args = yaml.safe_load(f)
print(args)
# -
dataset_args = copy.deepcopy(args).get("dataset")
train_args = copy.deepcopy(args).get("train")
val_args = copy.deepcopy(args).get("val")
model_args = copy.deepcopy(args).get("model")
optim_args = copy.deepcopy(args).get("optim")
env_args = copy.deepcopy(args).get("env")
# +
dataset_train, dataset_test = datasets.get_dataset(**dataset_args, download=True)
dataset_name = dataset_args["name"]
num_classes = len(dataset_train.classes)
dataset_train = DataLoader(dataset_train,
shuffle=True,
num_workers=train_args.get("num_workers", 4),
batch_size=train_args.get("batch_size", 128))
dataset_test = DataLoader(dataset_test,
num_workers=val_args.get("num_workers", 4),
batch_size=val_args.get("batch_size", 128))
print("Train: %s, Test: %s, Classes: %s" % (
len(dataset_train.dataset),
len(dataset_test.dataset),
num_classes
))
# -
# ## Model
# Use provided models:
# +
# ResNet
# name = "resnet_dnn_50"
# name = "resnet_dnn_101"
# ViT
name = "vit_ti"
# name = "vit_s"
vit_kwargs = { # for CIFAR
"image_size": 32,
"patch_size": 2,
}
model = models.get_model(name, num_classes=num_classes,
stem=model_args.get("stem", False), **vit_kwargs)
# models.load(model, dataset_name, uid=current_time)
# -
# Or use `timm`:
# +
import timm
model = timm.models.vision_transformer.VisionTransformer(
img_size=32, patch_size=2, num_classes=num_classes, # for CIFAR
embed_dim=192, depth=12, num_heads=3, qkv_bias=False, # ViT-Ti
)
model.name = "vit_ti"
models.stats(model)
# -
# Parallelize the given `model` by splitting the input:
name = model.name
model = nn.DataParallel(model)
model.name = name
# ## Train
# Define a TensorBoard writer:
# +
current_time = datetime.datetime.now().strftime("%Y%m%d_%H%M%S")
log_dir = os.path.join("runs", dataset_name, model.name, current_time)
writer = SummaryWriter(log_dir)
with open("%s/config.yaml" % log_dir, "w") as f:
yaml.dump(args, f)
with open("%s/model.log" % log_dir, "w") as f:
f.write(repr(model))
print("Create TensorBoard log dir: ", log_dir)
# -
# Train the model:
# +
gpu = torch.cuda.is_available()
optimizer, train_scheduler = trains.get_optimizer(model, **optim_args)
warmup_scheduler = schedulers.WarmupScheduler(optimizer, len(dataset_train) * train_args.get("warmup_epochs", 0))
trains.train(model, optimizer,
dataset_train, dataset_test,
train_scheduler, warmup_scheduler,
train_args, val_args, gpu,
writer,
snapshot=-1, dataset_name=dataset_name, uid=current_time) # Set `snapshot=N` to save snapshots every N epochs.
# -
# Save the model:
models.save(model, dataset_name, current_time, optimizer=optimizer)
# ## Test
# +
gpu = torch.cuda.is_available()
model = model.cuda() if gpu else model.cpu()
metrics_list = []
for n_ff in [1]:
    print("N: %s, " % n_ff, end="")
    *metrics, cal_diag = tests.test(model, n_ff, dataset_test, verbose=False, gpu=gpu)
    metrics_list.append([n_ff, *metrics])
leaderboard_path = os.path.join("leaderboard", "logs", dataset_name, model.name)
Path(leaderboard_path).mkdir(parents=True, exist_ok=True)
metrics_dir = os.path.join(leaderboard_path, "%s_%s_%s.csv" % (dataset_name, model.name, current_time))
tests.save_metrics(metrics_dir, metrics_list)
# -
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] id="OuZ8MiejlrMw"
# # TP2 - Topic Modeling
# + [markdown] id="Rv8_RKRdlrM0"
# <img src="https://github.com/AntoineSimoulin/m2-data-sciences/blob/master/TP2%20-%20Text%20Mining/figures/figure2.png?raw=1" width="1000">
# + [markdown] id="wAjih59mlrM1"
# <i>Topic modeling</i> is a statistical approach for extracting abstract topics from a corpus of documents.
# It can also be used to analyze the structure of the corpus, by grouping documents that cover similar topics and then analyzing those groups, or by analyzing the characteristics of the identified topics.
#
# Most topic models rely on similar modeling assumptions:
# * Each document is modeled as a distribution over _topics_;
# * Each _topic_ is modeled as a distribution over the words of the vocabulary.
#
# This modeling is illustrated below. Each document is represented by a distribution over a latent (also called hidden) variable, the _topics_. The topics are not "observed": in practice, each document is described by a distribution over the words of the vocabulary. **The goal of topic models is therefore to characterize the shape of this latent variable.** We will see several methods and models that provide this characterization.
#
# Below, we illustrate the intuition behind this modeling. Each document contains several _topics_, for example transportation and vacations. It will therefore contain words characteristic of those topics: "avion" (plane), "plage" (beach), "congés" (holidays)... Documents that cover similar _topics_ will therefore share similar vocabulary, and each _topic_ can be characterized by the salient words that are specific to it.
#
#
#
# <img src="https://github.com/AntoineSimoulin/m2-data-sciences/blob/master/TP2%20-%20Text%20Mining/figures/lda-idee.png?raw=true" width="1000">
#
# + id="QqYKKuwxlrM2"
# %%capture
# ⚠️ Execute only if running in Colab
if 'google.colab' in str(get_ipython()):
    IN_COLAB = True
else:
    IN_COLAB = False

if IN_COLAB:
    # !pip install -q scikit-learn==0.23.2 nltk==3.5 unidecode pysrt
    # !pip install --no-deps pyLDAvis==3.3.1
    # !pip install --no-deps funcy==1.16
    # !python3 -m spacy download fr_core_news_md
# + id="toaCFG6NlrM4" outputId="35e8f0e5-1e01-4714-df4b-0ed66023b443" colab={"base_uri": "https://localhost:8080/"}
import nltk
from nltk.corpus import stopwords
from nltk.stem.snowball import FrenchStemmer
import numpy as np
import os
from pyLDAvis import sklearn as sklearn_lda
import pickle
import pyLDAvis
import pysrt
import re
from sklearn.decomposition import LatentDirichletAllocation as LDA
from sklearn.feature_extraction.text import CountVectorizer
from spacy.lang.fr.stop_words import STOP_WORDS
from tqdm.auto import tqdm
import unidecode
import urllib.request
# IPython automatically reload all changed code
# %load_ext autoreload
# %autoreload 2
# Inline Figures with matplotlib
# %matplotlib inline
# %config InlineBackend.figure_format='retina'
# + id="aIEmRGw2lrM4"
# import external modules
repo_url = 'https://raw.githubusercontent.com/AntoineSimoulin/m2-data-sciences/master/'
_ = urllib.request.urlretrieve(repo_url + 'src/plot_dirichlet.py', 'plot_dirichlet.py')
for season in range(1, 9):
    season_dir = './data/S{:02d}'.format(season)  # renamed from `dir` to avoid shadowing the builtin
    if not os.path.exists(season_dir):
        os.makedirs(season_dir)
    for episode in range(1, 11):
        try:
            _ = urllib.request.urlretrieve(
                repo_url + 'TP2%20-%20Text%20Mining/sous-titres-got/S{:02d}/E{:02d}.srt'.format(season, episode),
                './data/S{:02d}/E{:02d}.srt'.format(season, episode))
        except Exception:
            pass  # some episodes are missing; skip them
from plot_dirichlet import Dirichlet, draw_pdf_contours
# + [markdown] id="vM5DL-BmlrM5"
# ## Latent Semantic Analysis (LSA)
# + [markdown] id="J2i-4j6KlrM6"
# The Latent Semantic Analysis (LSA) model ([Landauer & Dumais, 1997](#landauer-dumais-1997)) decomposes the matrix describing the documents over the vocabulary into two matrices: a matrix describing the documents over the topics, and a matrix giving the distribution of each topic over the words of the vocabulary.
#
# We therefore start by representing the documents as distributions over the vocabulary. For this we use Tf-Idf, which represents each document of the corpus as a distribution over the vocabulary, in practice a vector of the size of the vocabulary. We can thus represent the corpus as a matrix of size $(M, V)$, with $M$ the number of documents in the corpus and $V$ the size of the vocabulary. This representation is illustrated below.
#
# <img src="https://github.com/AntoineSimoulin/m2-data-sciences/blob/master/TP2%20-%20Text%20Mining/figures/bow.png?raw=true" width="500">
#
# We then decompose this matrix using **Singular Value Decomposition** ([SVD](https://en.wikipedia.org/wiki/Singular_value_decomposition)). The SVD can be interpreted as the generalization of the diagonalization of a normal matrix to arbitrary matrices. A matrix $A$ of size $m \times n$ can be factorized as $A = U \Sigma V^T$, with $U$ and $V$ orthogonal matrices of respective sizes $m \times m$ and $n \times n$, and $\Sigma$ a rectangular diagonal matrix of size $m \times n$.
#
# In practice, it is uncommon to compute the full decomposition; instead one uses the <a href="https://en.wikipedia.org/wiki/Singular_value_decomposition#Truncated_SVD"><i>Truncated Singular Value Decomposition</i></a>, which computes only the $t$ largest singular values. In that case, we keep only the first $t$ columns of the matrices $U$ and $V$. We then have:
#
# $$A_t = U_t \Sigma_t V_t^T$$
#
# with $U_t$ of size $m \times t$ and $V_t$ of size $n \times t$. This decomposition is illustrated below.
#
# <img src="https://github.com/AntoineSimoulin/m2-data-sciences/blob/master/TP2%20-%20Text%20Mining/figures/svd-formule.png?raw=true" width="1000">
#
# Below we illustrate the application of this decomposition to our Tf-Idf matrix. The matrix $U_t$ plays the role of the <i>document-topic</i> matrix, which describes each document as a distribution over topics. The matrix $V_t$ plays the role of the <i>term-topic</i> matrix, which describes each topic as a distribution over the vocabulary.
#
# <img src="https://github.com/AntoineSimoulin/m2-data-sciences/blob/master/TP2%20-%20Text%20Mining/figures/svd-illustration.png?raw=true" width="1000">
#
# Topic modeling can also be interpreted as a dimensionality-reduction approach. Indeed, the Tf-Idf matrix has several drawbacks: it is high-dimensional (the size of the vocabulary), it is _sparse_ (i.e. many entries are zero), it is very noisy, and the information is redundant across several dimensions. The decomposition factorizes it, and the two resulting matrices make it possible to use cosine similarity to easily compare documents or words.
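# The decomposition can be illustrated with a minimal NumPy sketch (a toy term-document count matrix rather than a real Tf-Idf matrix):

```python
import numpy as np

# Toy corpus: 4 documents over a 5-word vocabulary (rows = documents).
# Documents 0-1 share one group of words, documents 2-3 another.
A = np.array([
    [2.0, 1.0, 0.0, 0.0, 1.0],
    [1.0, 2.0, 0.0, 0.0, 0.0],
    [0.0, 0.0, 2.0, 1.0, 1.0],
    [0.0, 0.0, 1.0, 2.0, 0.0],
])

U, s, Vt = np.linalg.svd(A, full_matrices=False)

t = 2                    # keep the 2 largest singular values
U_t = U[:, :t]           # (4 x 2) document-topic matrix
S_t = np.diag(s[:t])     # (2 x 2) diagonal matrix of singular values
V_t = Vt[:t, :].T        # (5 x 2) term-topic matrix

A_t = U_t @ S_t @ V_t.T  # rank-t approximation of A
print(np.round(U_t @ S_t, 2))  # documents projected into the 2-topic space
```

# Documents with similar vocabulary end up close to each other in the low-dimensional topic space, where cosine similarity can be used to compare them.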
# + [markdown] id="34VqFV9WlrM8"
# ## Probabilistic Latent Semantic Analysis (pLSA)
# + [markdown] id="yJqm3PeXlrM8"
# La LSA est une mรฉthode trรจs efficace. Nรฉanmoins en pratique, les topics rรฉsultants sont parfois difficiles ร interprรฉter.
# La mรฉthode nรฉcessite un corpus important pour obtenir des rรฉsultats pertinents.
#
# La methode de Probabilistic Latent Semantic Analysis (pLSA) remplace ainsi la SVD par une approche probabiliste.
# Il s'agit d'un modรจle **gรฉnรฉratif**, qui permet de gรฉnรฉrer les documents que l'on observe.
# En pratique il permet de gรฉnรฉrer la matrice Bag-of-words qui reprรฉsente le corpus. Le modรจle ne tient donc pas compte de l'ordre des mots.
#
# <img src="https://github.com/AntoineSimoulin/m2-data-sciences/blob/master/TP2%20-%20Text%20Mining/figures/plda_principe.png?raw=true" width="1000">
#
#
# Les modรจles graphiques reprรฉsentent les variables alรฉatoies comme des noeuds. Les arcs entre les noeuds indiquent les variables potentiellement dรฉpendantes. Les variables observรฉes sont grisรฉes. Dans la figure ci-dessous, les noeuds $ X_{1,...,N}$ sont observรฉs alors que le noeud $Y$ est une variable latente. Dans cet exemple, les variables observรฉes dรฉpendent de cette variable latente. Les rectangles synthรฉtisent la rรฉplication de plusieurs structures. Un rectangle rรฉsume donc plusieurs variables $X_n$ avec $n \in N$.
#
# La structure du graph dรฉfinie les dรฉpendances conditionnelles entre l'ensemble des variables. Par exemple dans le graph ci-dessous, on a $p(Y,X_{1},...,X_{N})=p(Y)\prod _{n=1}^{N}p(X_{n}|Y)$.
#
# <img src="https://github.com/AntoineSimoulin/m2-data-sciences/blob/master/TP2%20-%20Text%20Mining/figures/graphical_model.png?raw=1" width="500">
#
# Le fonctionnement du modรจle est dรฉtaillรฉ selon la reprรฉsentation graphique suivante :
# * Etant donnรฉ un document $d$, un topic $z$ est prรฉsent dans le document avec une probabilitรฉ $P(z|d)$.
# * Etant donnรฉ un topic $z$, un mot est gรฉnรฉrรฉ selon la probabilitรฉ conditionnelle $P(w|z)$.
#
# The joint probability of observing a word in a document is therefore:
#
# $$P(D,W)=P(D)\sum_Z P(Z|D)P(W|Z)$$
#
# Here $P(D)$, $P(Z|D)$ and $P(W|Z)$ are the parameters of the model. $P(D)$ can be computed directly from the corpus.
# $P(Z|D)$ and $P(W|Z)$ are modeled as multinomial distributions, whose parameters can be estimated with the [EM](https://en.wikipedia.org/wiki/Expectation%E2%80%93maximization_algorithm) algorithm.
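The EM estimation of these multinomials can be sketched on a toy example. This is a hedged illustration, not the notebook's own code: the function name, array shapes, smoothing constant and the toy count matrix are all assumptions.

```python
import numpy as np

def plsa_em(counts, n_topics, n_iter=50, seed=0):
    """Toy EM for pLSA on a (documents x words) count matrix."""
    rng = np.random.default_rng(seed)
    n_docs, n_words = counts.shape
    # Random initialisation of P(z|d) and P(w|z), rows normalised to 1
    p_z_d = rng.random((n_docs, n_topics)); p_z_d /= p_z_d.sum(1, keepdims=True)
    p_w_z = rng.random((n_topics, n_words)); p_w_z /= p_w_z.sum(1, keepdims=True)
    for _ in range(n_iter):
        # E-step: posterior P(z|d,w) proportional to P(z|d) * P(w|z)
        post = p_z_d[:, :, None] * p_w_z[None, :, :]        # docs x topics x words
        post /= post.sum(axis=1, keepdims=True) + 1e-12
        # M-step: reweight the posterior by the observed counts
        expected = counts[:, None, :] * post
        p_w_z = expected.sum(axis=0)
        p_w_z /= p_w_z.sum(axis=1, keepdims=True) + 1e-12
        p_z_d = expected.sum(axis=2)
        p_z_d /= p_z_d.sum(axis=1, keepdims=True) + 1e-12
    return p_z_d, p_w_z

# Two "holiday-like" documents and two "election-like" documents, 4-word vocabulary
counts = np.array([[5, 3, 0, 0],
                   [4, 4, 1, 0],
                   [0, 0, 6, 5],
                   [1, 0, 4, 6]])
p_z_d, p_w_z = plsa_em(counts, n_topics=2)
print(p_z_d.round(2))
```

Both returned matrices are row-stochastic, mirroring the multinomial parameters $P(Z|D)$ and $P(W|Z)$.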
#
# <img src="https://github.com/AntoineSimoulin/m2-data-sciences/blob/master/TP2%20-%20Text%20Mining/figures/plsa.png?raw=true" width="500">
#
# This probability can be interpreted through the following procedure: start from a document with probability $P(D)$, generate a _topic_ with probability $P(Z|D)$, then generate a word with probability $P(W|Z)$. In practice, we thus learn the model parameters that best explain the observed corpus, as illustrated below.
#
# <img src="https://github.com/AntoineSimoulin/m2-data-sciences/blob/master/TP2%20-%20Text%20Mining/figures/plda_inference.png?raw=true" width="1000">
#
# The joint probability can also be expressed with the following decomposition:
#
# $$P(D,W)=\sum_Z P(Z)P(D|Z)P(W|Z)$$
#
# In this parameterization, we start from the _topic_ with $P(Z)$ and then independently generate the document with $P(D|Z)$ and the word with $P(W|Z)$.
#
# <img src="https://github.com/AntoineSimoulin/m2-data-sciences/blob/master/TP2%20-%20Text%20Mining/figures/plda_process.png?raw=true" width="500">
#
# The interest of this parameterization is that it draws a parallel with LSA.
#
# The topic probability $P(Z)$ corresponds to the diagonal matrix of the singular value decomposition. The probability of a document given a topic, $P(D|Z)$, corresponds to the document-_topic_ matrix $U$, and the probability of a word given a topic, $P(W|Z)$, corresponds to the term-_topic_ matrix $V$. The two approaches are therefore similar; compared with LSA, pLSA adds a statistical treatment of the _topics_ and words.
#
# <img src="https://github.com/AntoineSimoulin/m2-data-sciences/blob/master/TP2%20-%20Text%20Mining/figures/plsa-formule.png?raw=true" width="500">
#
# + [markdown] id="TZRA_JKmlrM9"
# ## Latent Dirichlet Allocation (LDA)
#
# pLSA has a number of limitations:
# * There is no parameter to model $P(D)$, so no probability can be assigned to new documents.
# * The number of parameters grows linearly with the number of documents in the corpus, so the model is prone to _overfitting_.
#
# In practice pLSA is therefore rarely used; Latent Dirichlet Allocation (LDA) ([Blei et al., 2001](#blei-2001)) is generally preferred. LDA places Dirichlet priors on the document-topic and topic-word distributions, which gives it better generalization properties: it can generalize to new documents.
#
#
# ### The Dirichlet distribution
#
# The [Dirichlet distribution](https://en.wikipedia.org/wiki/Dirichlet_distribution) is usually written $Dir(\alpha)$. It is a family of continuous probability distributions for multinomial random variables, parameterized by a vector ${\bf \alpha}$ of positive real numbers; the length of ${\bf \alpha}$ gives the dimension of the distribution. This kind of distribution is often used as a prior in Bayesian models. Without going into detail, some characteristics of the Dirichlet distribution are:
#
# * It is defined on the simplex of non-negative vectors whose components sum to 1.
# * Its density is $P(\theta \mid \overrightarrow{\alpha}) = \frac{\Gamma\left(\sum_i \alpha_i\right)}{\prod_i \Gamma(\alpha_i)} \prod_i \theta_i^{\alpha_i - 1}$.
# * In practice, if all components of ${\bf \alpha}$ take similar values, the distribution is more spread out; it becomes more concentrated for larger values of ${\bf \alpha}$.
#
# The distribution is illustrated below for several values of ${\bf \alpha}$.
# + id="RfGauvsIlrM-" outputId="bd8932d6-475f-45b8-f2e8-a7b82ed4a540" colab={"base_uri": "https://localhost:8080/", "height": 1000}
for alpha in [(0.85, 0.85, 0.85), (5, 5, 5), (1, 1, 1), (1, 2, 3), (2, 5, 10), (50, 50, 50)]:
draw_pdf_contours(Dirichlet(alpha))
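If the `draw_pdf_contours` and `Dirichlet` helpers are unavailable, the spread-versus-concentration behaviour of $Dir(\alpha)$ can also be checked numerically with NumPy alone; a minimal sketch:

```python
import numpy as np

rng = np.random.default_rng(0)

# Small, equal alphas spread the mass toward the corners of the simplex;
# large, equal alphas concentrate samples around the uniform point (1/3, 1/3, 1/3).
spread = rng.dirichlet((0.85, 0.85, 0.85), size=5000)
concentrated = rng.dirichlet((50, 50, 50), size=5000)

# Every sample lies on the probability simplex: non-negative components summing to 1
print(np.allclose(spread.sum(axis=1), 1.0))                         # True
print(spread.std(axis=0).mean() > concentrated.std(axis=0).mean())  # True
```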
# + [markdown] id="gOviMQYNlrM-"
# This distribution has practical advantages. In particular, we expect each document of the corpus to contain a "dominant" _topic_: documents are not generated from a distribution such as 25% holidays, 25% sport, 25% elections, 25% transport, but rather from something like 85% holidays, 5% sport, 5% elections, 5% transport. Such distributions assign a large weight to one particular _topic_, which is exactly what the Dirichlet distribution produces with small values of $\alpha$.
#
# The graphical representation of the model is shown below. LDA assumes the following generative process for each document $W$ in the corpus $D$.
#
#
# > 1. Draw $\theta \sim Dir(\alpha)$.
# > 2. For each document in the corpus:
# >    * For each of the $N$ words $w_{n}$ in the document:
# >       * draw a topic $z_{n}\sim Multinomial(\theta)$;
# >       * draw a word $w_{n} \sim p(w_{n}|z_{n},\beta)$ from a multinomial distribution conditioned on the topic $z_{n}$.
#
#
# Our goal here is to estimate the parameters $\phi$ and $\theta$ that maximize $p(w; \alpha, \beta)$. The main advantage of LDA over pLSA is that it generalizes well to unseen documents.
#
# <img src="https://github.com/AntoineSimoulin/m2-data-sciences/blob/master/TP2%20-%20Text%20Mining/figures/lda_graph.png?raw=true" width="700">
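The generative process above can be simulated directly. Here is a hedged toy sketch, in which the vocabulary, the topic-word matrix and the Dirichlet parameter are all made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

vocab = ["king", "dragon", "vote", "ballot"]
# Hypothetical topic-word matrix (one multinomial over the vocabulary per row)
beta = np.array([[0.5, 0.5, 0.0, 0.0],    # topic 0: "fantasy"
                 [0.0, 0.0, 0.5, 0.5]])   # topic 1: "elections"
alpha = (0.5, 0.5)

def generate_document(n_words=10):
    theta = rng.dirichlet(alpha)               # 1. draw the document's topic mixture
    words = []
    for _ in range(n_words):
        z = rng.choice(len(alpha), p=theta)    # 2a. draw a topic for this word
        w = rng.choice(len(vocab), p=beta[z])  # 2b. draw a word given the topic
        words.append(vocab[w])
    return words

doc = generate_document()
print(doc)
```

With a small $\alpha$, most generated documents are dominated by a single topic, matching the intuition discussed above.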
# + [markdown] id="453oQfWClrM_"
# ## 3. Using the libraries
# + [markdown] id="PMZHzpwalrM_"
# We will analyze the themes of the Game of Thrones series, using the subtitles of all seasons, retrieved from https://www.sous-titres.eu/series/game_of_thrones.html.
# + id="3PQn5ZoflrM_"
def create_subtitle_file_dict(subtitles_dir):
"Return the paths to the subtitle files"
subtitles_file_path = {}
for path, _, files in os.walk(subtitles_dir):
for name in files:
episode_name = '_'.join([os.path.basename(path), name.split('.')[0]])
subtitles_file_path[episode_name] = os.path.join(path, name)
return subtitles_file_path
def parse_srt_file(srt_file, encoding='iso-8859-1'):
"Read a subtitle file in srt format"
subs = pysrt.open(srt_file, encoding=encoding)
text = ' '.join([' '.join(sub.text.split('\n')) for sub in subs])
return text
def create_corpus(subtitles_file_path):
"Build a corpus from all the srt files in a directory"
corpus = []
for _, v in subtitles_file_path.items():
if v.endswith('srt'):
corpus.append(parse_srt_file(v))
return corpus
def split_episodes(corpus):
"Split each episode into 400-word chunks."
corpus_split = []
for episode in corpus:
episode_words = episode.split()
i = 0
while i < len(episode_words):
corpus_split.append(' '.join(episode_words[i:i+400]))
i+=400
return corpus_split
# + id="eRNdeeD0lrNA"
subtitles_file_path = create_subtitle_file_dict('./data/')
# + id="ycVU6lBOlrNA"
episode_1_txt = parse_srt_file(subtitles_file_path['S01_E01'])
# + id="W3v02QPylrNA" outputId="0797e8ff-b2d1-4eab-d4ea-00ede260629c" colab={"base_uri": "https://localhost:8080/"}
print(episode_1_txt[:100])
# + id="-_WsLo7VlrNB"
corpus = create_corpus(subtitles_file_path)
corpus = split_episodes(corpus)
# + id="mi2oS_n8lrNB" outputId="b5c3c791-6a70-4a08-8609-034d9135b6db" colab={"base_uri": "https://localhost:8080/"}
len(corpus)
# + id="eTSNe1Z0lrNB" outputId="5d572499-c0a1-47ea-e373-2403d6faf1aa" colab={"base_uri": "https://localhost:8080/", "height": 35}
corpus[0][:100]
# + [markdown] id="1IWX2nfmlrNC"
# <hr>
# <div class="alert alert-info" role="alert">
# <p><b>📝 Exercise:</b> Clean the corpus: remove accents, lowercase the text, remove punctuation and double spaces. Optionally apply stemming.</p>
# </div>
# <hr>
# + id="OVqJR5zhlrNC"
stemmer = FrenchStemmer()
def clean_corpus(corpus):
for i in range(len(corpus)):
corpus[i] = unidecode.unidecode(corpus[i])
corpus[i] = re.sub(r'[^\w\s]', ' ', corpus[i])
corpus[i] = corpus[i].lower()
corpus[i] = re.sub(r'\s{2,}', ' ', corpus[i])
# corpus[i] = ' '.join([stemmer.stem(x) for x in corpus[i].split()])
return corpus
# + id="wu8yPgHslrNC"
corpus = clean_corpus(corpus)
# + id="61XSxdLTlrNC" outputId="cb8fb345-d337-4bf9-91cf-ea854b20204f" colab={"base_uri": "https://localhost:8080/", "height": 35}
corpus[0][:100]
# + id="8Qf0m5oJlrNC" outputId="667ac6e9-ada5-4049-cb48-96d2c15882ad" colab={"base_uri": "https://localhost:8080/"}
len(corpus)
# + id="-EKmiUCvlrND"
def tokenize_corpus(corpus):
tokens = []
for sentence in corpus.split('\n'):
tokens.append(nltk.word_tokenize(sentence))
return tokens
# + id="-sATNluMlrND"
sentence_length = [len(x.split()) for x in corpus]
# + id="5k3Wb2G2lrND" outputId="2bde0cb3-fc42-4f69-f001-c46e1ec0c2c9" colab={"base_uri": "https://localhost:8080/"}
np.mean(sentence_length), np.std(sentence_length)
# + [markdown] id="-YG1R2exlrND"
# <hr>
# <div class="alert alert-info" role="alert">
# <p><b>📝 Exercise:</b> Vectorize the corpus using the Bag-of-Words method.</p>
# </div>
# <hr>
# + id="tErsm6mplrND" outputId="30af2233-1b7c-433c-cd72-8b9d1d94800b" colab={"base_uri": "https://localhost:8080/"}
# Initialise the count vectorizer
count_vectorizer = CountVectorizer(max_features=2000,
stop_words=STOP_WORDS,
max_df=0.9,
min_df=20)
count_data = count_vectorizer.fit_transform(corpus)
# + id="P8Yg6VW6lrND" outputId="6d3d6d14-e417-43e4-a76b-e878c46a8183" colab={"base_uri": "https://localhost:8080/"}
len(corpus)
# + id="3N-isn8plrNE" outputId="7e1d0501-aec5-450a-c332-12dd2e0014f2" colab={"base_uri": "https://localhost:8080/"}
# Vary the parameters below
number_topics = 15
number_words = 10
# Create and fit the LDA model
lda = LDA(n_components=number_topics, n_jobs=-1)
lda.fit(count_data)
# + id="2qB_4egslrNE"
def print_topics(model, count_vectorizer, n_top_words):
words = count_vectorizer.get_feature_names()
for topic_idx, topic in enumerate(model.components_):
print("\nTopic #%d:" % topic_idx)
print(" ".join([words[i]
for i in topic.argsort()[:-n_top_words - 1:-1]]))
# + id="TpAoQwqClrNE" outputId="e8a4388a-98a7-406a-f2df-4ac77d19707b" colab={"base_uri": "https://localhost:8080/"}
# Print the topics found by the LDA model
print("Topics found via LDA:")
print_topics(lda, count_vectorizer, number_words)
# + [markdown] id="Lu3I4mxhlrNE"
# ## 4. Visualisation
# + id="_vJIoPtElrNE" outputId="2ce1d729-4344-43a1-aebc-344e9d0e947f" colab={"base_uri": "https://localhost:8080/"}
# %%time
LDAvis_data_filepath = os.path.join('./ldavis_prepared_'+str(number_topics))
LDAvis_prepared = sklearn_lda.prepare(lda, count_data, count_vectorizer, mds='mmds')
# + id="VWEA4o6clrNF"
with open(LDAvis_data_filepath, 'wb') as f:
pickle.dump(LDAvis_prepared, f)
# load the pre-prepared pyLDAvis data from disk
with open(LDAvis_data_filepath, 'rb') as f:
LDAvis_prepared = pickle.load(f)
pyLDAvis.save_html(LDAvis_prepared, './ldavis_prepared_'+ str(number_topics) +'.html')
# + id="rZdO_r11lrNF" outputId="0e1416ef-ea59-4051-ec8b-ff941266c8d4" colab={"base_uri": "https://localhost:8080/", "height": 861}
pyLDAvis.display(LDAvis_prepared)
# + [markdown] id="TOT9PErLlrNF"
# <hr>
# <div class="alert alert-info" role="alert">
# <p><b>📝 Exercise:</b> Vary the Lambda parameter and explain its impact.</p>
# </div>
# <hr>
# + [markdown] id="Riw0RYvllvRe"
# The relevance of a word $\omega$ in a topic $\tau$ can be defined as the convex combination:
# $$\mathcal{R_\lambda}\left(\omega,\tau\right) = \lambda \log\left(\mathbb{P}\left(\omega \mid \tau\right)\right) +
# \left(1-\lambda\right)\log\left(\frac{\mathbb{P}\left(\omega \mid \tau\right)}{\mathbb{P}\left(\omega\right)}\right),$$
# where $\lambda$ is a weighting parameter. It therefore affects the ranking of the terms associated with each topic (right-hand part of the visualization), but has no impact on the inter-topic distances (left-hand part of the visualization).
#
# * When $\lambda$ is close to 1, $\mathcal{R_\lambda}(\omega,\tau) \approx \mathbb{P}(\omega \mid \tau)$: relevance reduces to the probability that a word $\omega$ appears in a given _topic_ $\tau$, and terms are ranked by their conditional probability within the topic. Fairly generic tokens such as "et", "que", "ça" or "ici" then often appear at the top of the rankings, because they have a very high frequency in every topic.
#
# * When $\lambda$ is close to 0, $\mathcal{R_\lambda}(\omega,\tau) \approx \frac{\mathbb{P}(\omega \mid \tau)}{\mathbb{P}(\omega)}$: terms are ranked by their <i>lift</i>, the ratio of a word's conditional probability within a _topic_ to its overall probability in the corpus. The words selected to describe each topic then have lower frequencies; although more characteristic, they can also look more marginal, such as "courageux", "voler" or "tenus". Generic words, however, are filtered out.
#
# * A balance between the raw frequency and this ratio can be found by tuning the $\lambda$ parameter, so that the terms most relevant to the topic rank highest without giving too much weight to statistical outliers, for instance by setting $\lambda$ between 0.2 and 0.4. Generic words are then filtered out without over-weighting more esoteric expressions.
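The effect of $\lambda$ can be illustrated numerically. In this hedged sketch the probability values are made up for the example: a generic word that is frequent everywhere ("que") versus a rarer, topic-specific word ("dragon"):

```python
import numpy as np

def relevance(p_w_given_t, p_w, lam):
    """Relevance of each word in a topic, following the convex combination above."""
    return lam * np.log(p_w_given_t) + (1 - lam) * np.log(p_w_given_t / p_w)

# Hypothetical probabilities for the words ["que", "dragon"] in one topic:
p_w_given_t = np.array([0.08, 0.02])    # P(w | topic): "que" dominates inside the topic
p_w         = np.array([0.10, 0.001])   # P(w) over the corpus: "dragon" is globally rare

words = ["que", "dragon"]
for lam in (1.0, 0.3, 0.0):
    top = words[int(np.argmax(relevance(p_w_given_t, p_w, lam)))]
    print(f"lambda={lam}: top-ranked word = {top}")
```

The generic word only wins the ranking at $\lambda = 1$; as soon as the lift term gets some weight, the topic-specific word comes out on top.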
# + [markdown] id="B2UmVDvblrNG"
# <hr>
# <div class="alert alert-info" role="alert">
# <p><b>📝 Exercise:</b> Vary the preprocessing, in particular stemming. Analyze the impact on the cluster analysis.</p>
# </div>
# <hr>
# + id="2wfzRjZ2lrNI"
import spacy
# + id="PMOsT5BLl3Wv" outputId="0fa91dc6-8193-462a-e077-5df848816d48" colab={"base_uri": "https://localhost:8080/"}
nlp = spacy.load('fr_core_news_md')
stemmer = FrenchStemmer()
# + id="WoVRXDhil3Y9" outputId="d15d77c2-0590-410d-97cd-bc93593f1f6e" colab={"base_uri": "https://localhost:8080/", "height": 861}
def clean_corpus_(corpus, stem=False, lem=False):
for i in range(len(corpus)):
corpus[i] = unidecode.unidecode(corpus[i])
corpus[i] = re.sub(r'[^\w\s]', ' ', corpus[i])
corpus[i] = corpus[i].lower()
corpus[i] = re.sub(r'\s{2,}', ' ', corpus[i])
if stem:
corpus[i] = ' '.join([stemmer.stem(x) for x in corpus[i].split()])
if lem and not stem:
doc = nlp(str(corpus[i] ))
corpus[i] = ' '.join([token.lemma_ for token in doc])
return corpus
clean_corpus = clean_corpus_(corpus, stem=True)
clean_corpus_split = []
for episode in clean_corpus:
episode_words = episode.split()
i = 0
while i < len(episode_words):
clean_corpus_split.append(' '.join(episode_words[i:i+400]))
i+=400
count_data = count_vectorizer.fit_transform(clean_corpus_split)
lda = LDA(n_components=number_topics, n_jobs=-1)
lda.fit(count_data)
LDAvis_prepared = sklearn_lda.prepare(lda, count_data, count_vectorizer, mds='mmds')
pyLDAvis.display(LDAvis_prepared)
# + [markdown] id="j1_azByil7R9"
# Lemmatization and stemming reduce the vocabulary and make it more "specific". Stemming truncates word endings: the terms "garde" and "garder" are mapped to the same root, "gard".
#
# Visually, the topics seem less well separated (left-hand part of the visualization); they presumably share more vocabulary. Stemming amplifies the frequency of words that are already heavily represented ("ca", "qu", ...) and that we do not want to surface, at the expense of more characteristic terms such as the token "dragon", which disappears from topic 9. Moreover, stemming is a rather destructive process, and some roots are hard to trace back to their original term: "vis", for example, could relate to "viser" or simply to a conjugated form of the verb "voir".
#
# Without stemming, few alternative forms of the same word appeared among the terms associated with each topic. Stemming therefore seems of little benefit here, as it does not improve the readability of the topics.
# + id="R0zomW_Ql3a6"
# %%capture
clean_corpus = clean_corpus_(corpus, lem=True)
clean_corpus_split = []
for episode in clean_corpus:
episode_words = episode.split()
i = 0
while i < len(episode_words):
clean_corpus_split.append(' '.join(episode_words[i:i+400]))
i+=400
count_data = count_vectorizer.fit_transform(clean_corpus_split)
lda = LDA(n_components=number_topics, n_jobs=-1)
lda.fit(count_data)
LDAvis_prepared = sklearn_lda.prepare(lda, count_data, count_vectorizer, mds='mmds')
# + id="H4zCANcoq2VJ" outputId="4a90d692-500e-4e96-ea3c-03118be35b53" colab={"base_uri": "https://localhost:8080/", "height": 861}
pyLDAvis.display(LDAvis_prepared)
# + [markdown] id="h0YPvJbTl9qq"
# Like stemming, lemmatization aims to standardize the vocabulary, but the process is subtler: instead of simply truncating word endings, it identifies the lemma of each word from its context in the sentence and an ontology. At first sight, the effects are similar to those of stemming: the topics seem less well separated in the inter-topic distance visualization. However, the most characteristic terms of each topic are more readable, because the process is less "destructive": the word "dragon" reappears in topic 9. Overall, lemmatization seems a good compromise for standardizing the vocabulary.
# + [markdown] id="Sqmh6FsElrNI"
# <hr>
# <div class="alert alert-info" role="alert">
# <p><b>📝 Exercise:</b> Study the impact of stop words on the topics.</p>
# </div>
# <hr>
# + id="gYHImNwglrNI" outputId="3ea5998c-a424-4c0a-9a9d-af3600baa6df" colab={"base_uri": "https://localhost:8080/", "height": 861}
# Initialise the count vectorizer
count_vectorizer = CountVectorizer(max_features=2000,
stop_words=None,
max_df=0.9,
min_df=20)
count_data = count_vectorizer.fit_transform(clean_corpus_split)
lda = LDA(n_components=number_topics, n_jobs=-1)
lda.fit(count_data)
LDAvis_prepared = sklearn_lda.prepare(lda, count_data, count_vectorizer, mds='mmds')
pyLDAvis.display(LDAvis_prepared)
# + [markdown] id="ILMmt8dJmC8l"
# Stop words are very frequent but rather uncharacteristic words (prepositions, articles, personal pronouns, ...). When they are not filtered out, they pollute the characterization of the topics, since they appear almost systematically among the characteristic words. This noise also degrades the inter-topic separation, as the topics become less well distinguished. Filtering stop words therefore seems preferable to keep the topics interpretable.
# + [markdown] id="piKy8D7ClrNI"
# ## References
#
# > <div id="landauer-dumais-1997">Landauer, <NAME>. et al. <a href=http://lsa.colorado.edu/papers/dp1.LSAintro.pdf>An introduction to latent semantic analysis.</a> Discourse Processes 25 (1998): 259-284.</div>
#
# > <div id="blei-2001"> <NAME>. Blei, <NAME>, <NAME>: <a href=https://ai.stanford.edu/~ang/papers/nips01-lda>Latent Dirichlet Allocation.</a> NIPS 2001: 601-608</div>
#
# > <div id="alghamdi-2001"> <NAME> and <NAME>: <a href=http://dx.doi.org/10.14569/IJACSA.2015.060121>A Survey of Topic Modeling in Text Mining.</a> International Journal of Advanced Computer Science and Applications(IJACSA), 6(1), 2015</div>
#
# > <div id="sievert-2014"> <NAME>, and <NAME>. <a href="https://aclanthology.org/W14-3110.pdf">LDAvis: A method for visualizing and interpreting topics.</a> Proceedings of the workshop on interactive language learning, visualization, and interfaces. 2014.</div>
#
# > <div id="chuang-2012"> <NAME>, <NAME>, and <NAME>. <a href="https://dl.acm.org/doi/10.1145/2254556.2254572">Termite: Visualization techniques for assessing textual topic models.</a> Proceedings of the international working conference on advanced visual interfaces. 2012.</div>
# + [markdown] id="DNxtddYalrNI"
# **Copyright 2021 <NAME>.**
#
# <i>Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# Icons made by <a href="https://www.flaticon.com/authors/freepik" title="Freepik">Freepik</a>, <a href="https://www.flaticon.com/authors/pixel-perfect" title="Pixel perfect">Pixel perfect</a>, <a href="https://www.flaticon.com/authors/becris" title="Becris">Becris</a>, <a href="https://www.flaticon.com/authors/smashicons" title="Smashicons">Smashicons</a>, <a href="https://www.flaticon.com/authors/srip" title="srip">srip</a>, <a href="https://www.flaticon.com/authors/adib-sulthon" title="Adib">Adib</a>, <a href="https://www.flaticon.com/authors/flat-icons" title="Flat Icons">Flat Icons</a> and <a href="https://www.flaticon.com/authors/dinosoftlabs" title="Pixel perfect">DinosoftLabs</a> from <a href="https://www.flaticon.com/" title="Flaticon"> www.flaticon.com</a></i>
| TP2 - Text Mining/TP2 - Exploration de topics[corr].ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
from sklearn.cross_decomposition import PLSRegression
from sklearn.metrics import mean_squared_error, r2_score
from sklearn.model_selection import cross_val_predict
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
# +
# LOAD DATASET
data = pd.read_csv('D:/data/train/salt_content_ham2.csv', index_col=0)
data.head()
# +
# ORGANIZE DATA
Y = data['Salt']
X = data.values[:,:-1]
print(Y.shape)
print(X.shape)
# Plot spectra
# define domain: wavelength bands of the Specim IQ
wl = np.arange(1,205,1)
with plt.style.context(('ggplot')):
plt.plot(wl, X.T)
plt.xlabel('Bands')
plt.ylabel('Reflectance')
plt.show()
# +
# Attempt to process signal
from scipy.signal import savgol_filter
# Calculate second derivative
X2 = savgol_filter(X, 21, polyorder = 2,deriv=2)
# Plot second derivative
plt.figure(figsize=(8,4.5))
with plt.style.context(('ggplot')):
plt.plot(wl, X2.T)
plt.xlabel('Bands')
plt.ylabel('D2 reflectance')
plt.show()
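A quick way to sanity-check the Savitzky-Golay derivative settings is to apply them to a signal whose second derivative is known analytically; a hedged sketch on a synthetic quadratic:

```python
import numpy as np
from scipy.signal import savgol_filter

x = np.arange(100, dtype=float)
signal = 3.0 * x**2   # analytic second derivative is 6 everywhere

# deriv=2 returns the derivative in units of samples (delta defaults to 1);
# polyorder=2 fits the quadratic exactly, so the estimate should be exact
d2 = savgol_filter(signal, window_length=21, polyorder=2, deriv=2)

print(np.allclose(d2[20:-20], 6.0))  # True
```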
# +
# PLS REGRESSION ATTEMPT
from sys import stdout
def optimise_pls_cv(X, y, n_comp, plot_components=True):
'''Run PLS including a variable number of components, up to n_comp,
and calculate MSE '''
mse = []
component = np.arange(1, n_comp)
for i in component:
pls = PLSRegression(n_components=i)
# Cross-validation
y_cv = cross_val_predict(pls, X, y, cv=10)
mse.append(mean_squared_error(y, y_cv))
comp = 100*(i+1)/n_comp  # progress as a percentage of the requested number of components
# Trick to update status on the same line
stdout.write("\r%d%% completed" % comp)
stdout.flush()
stdout.write("\n")
# Calculate and print the position of minimum in MSE
msemin = np.argmin(mse)
print("Suggested number of components: ", msemin+1)
stdout.write("\n")
if plot_components is True:
with plt.style.context(('ggplot')):
plt.plot(component, np.array(mse), '-v', color = 'blue', mfc='blue')
plt.plot(component[msemin], np.array(mse)[msemin], 'P', ms=10, mfc='red')
plt.xlabel('Number of PLS components')
plt.ylabel('MSE')
plt.title('PLS')
plt.xlim(left=-1)
plt.show()
# Define PLS object with optimal number of components
pls_opt = PLSRegression(n_components=msemin+1)
# Fit to the entire dataset
pls_opt.fit(X, y)
y_c = pls_opt.predict(X)
# Cross-validation
y_cv = cross_val_predict(pls_opt, X, y, cv=10)
# Calculate scores for calibration and cross-validation
score_c = r2_score(y, y_c)
score_cv = r2_score(y, y_cv)
# Calculate mean squared error for calibration and cross validation
mse_c = mean_squared_error(y, y_c)
mse_cv = mean_squared_error(y, y_cv)
print('R2 calib: %5.3f' % score_c)
print('R2 CV: %5.3f' % score_cv)
print('MSE calib: %5.3f' % mse_c)
print('MSE CV: %5.3f' % mse_cv)
# Plot regression and figures of merit
rangey = max(y) - min(y)
rangex = max(y_c) - min(y_c)
# Fit a line to the CV vs response
z = np.polyfit(y, y_c, 1)
with plt.style.context(('ggplot')):
fig, ax = plt.subplots(figsize=(9, 5))
ax.scatter(y_c, y, c='red', edgecolors='k')
#Plot the best fit line
ax.plot(np.polyval(z,y), y, c='blue', linewidth=1)
#Plot the ideal 1:1 line
ax.plot(y, y, color='green', linewidth=1)
plt.title('$R^{2}$ (CV): '+str(score_cv))
plt.xlabel('Predicted salt content')
plt.ylabel('Measured salt content')
plt.show()
return
optimise_pls_cv(X2,Y, 40, plot_components=True)
# +
# SCATTERING CORRECTIONS: MSC and SNV
# Multiplicative scatter correction
def msc(input_data, reference=None):
''' Perform Multiplicative Scatter Correction '''
# Work on a copy so the caller's spectra are not modified in place
input_data = input_data.copy()
# Baseline correction
for i in range(input_data.shape[0]):
input_data[i,:] -= input_data[i,:].mean()
# Get the reference spectrum. If not given, estimate from the mean
if reference is None:
# Calculate mean
matm = np.mean(input_data, axis=0)
else:
matm = reference
# Define a new data matrix and populate it with the corrected data
output_data = np.zeros_like(input_data)
for i in range(input_data.shape[0]):
# Run regression
fit = np.polyfit(matm, input_data[i,:], 1, full=True)
# Apply correction
output_data[i,:] = (input_data[i,:] - fit[0][1]) / fit[0][0]
return (output_data, matm)
# Standard normal Variate
def snv(input_data):
# Define a new array and populate it with the corrected data
output_data = np.zeros_like(input_data)
for i in range(input_data.shape[0]):
# Apply correction
output_data[i,:] = (input_data[i,:] - np.mean(input_data[i,:])) / np.std(input_data[i,:])
return output_data
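As a quick sanity check on SNV: each corrected spectrum should end up with zero mean and unit standard deviation. A small self-contained sketch on synthetic data, using a vectorized form equivalent to the loop above:

```python
import numpy as np

rng = np.random.default_rng(1)
spectra = rng.random((3, 50)) * 2.0 + 0.5   # synthetic "spectra": 3 samples x 50 bands

# Vectorized SNV: centre and scale each row (spectrum) independently
corrected = (spectra - spectra.mean(axis=1, keepdims=True)) / spectra.std(axis=1, keepdims=True)

print(np.allclose(corrected.mean(axis=1), 0.0))  # True
print(np.allclose(corrected.std(axis=1), 1.0))   # True
```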
# +
# Apply corrections
Xmsc = msc(X)[0] # Take the first element of the output tuple
Xsnv = snv(Xmsc)
## Plot original and corrected spectra
plt.figure(figsize=(8,9))
with plt.style.context(('ggplot')):
ax1 = plt.subplot(311)
plt.plot(wl, X.T)
plt.title('Original data')
ax2 = plt.subplot(312)
plt.plot(wl, Xmsc.T)
plt.ylabel('Absorbance spectra')
plt.title('MSC')
ax2 = plt.subplot(313)
plt.plot(wl, Xsnv.T)
plt.xlabel('Wavelength (nm)')
plt.title('SNV')
plt.show()
# -
X1snv = savgol_filter(Xsnv, 11, polyorder = 2, deriv=1)
# Define the PLS regression object
pls = PLSRegression(n_components=9)
# Fit data
pls.fit(X1snv, Y)
#X1 = savgol_filter(X, 11, polyorder = 2, deriv=1)
# Plot spectra
plt.figure(figsize=(8,9))
with plt.style.context(('ggplot')):
ax1 = plt.subplot(211)
plt.plot(wl, X1snv.T)
plt.ylabel('First derivative absorbance spectra')
ax2 = plt.subplot(212, sharex=ax1)
plt.plot(wl, np.abs(pls.coef_[:,0]))
plt.xlabel('Wavelength (nm)')
plt.ylabel('Absolute value of PLS coefficients')
plt.show()
# +
sorted_ind = np.argsort(np.abs(pls.coef_[:,0]))
# Sort spectra according to ascending absolute value of PLS coefficients
Xc = X1snv[:,sorted_ind]
# +
def pls_variable_selection(X, y, max_comp):
# Define MSE array to be populated
mse = np.zeros((max_comp,X.shape[1]))
# Loop over the number of PLS components
for i in range(max_comp):
# Regression with specified number of components, using full spectrum
pls1 = PLSRegression(n_components=i+1)
pls1.fit(X, y)
# Indices of sort spectra according to ascending absolute value of PLS coefficients
sorted_ind = np.argsort(np.abs(pls1.coef_[:,0]))
# Sort spectra accordingly
Xc = X[:,sorted_ind]
# Discard one wavelength at a time of the sorted spectra,
# regress, and calculate the MSE cross-validation
for j in range(Xc.shape[1]-(i+1)):
pls2 = PLSRegression(n_components=i+1)
pls2.fit(Xc[:, j:], y)
y_cv = cross_val_predict(pls2, Xc[:, j:], y, cv=5)
mse[i,j] = mean_squared_error(y, y_cv)
comp = 100*(i+1)/(max_comp)
stdout.write("\r%d%% completed" % comp)
stdout.flush()
stdout.write("\n")
# # Calculate and print the position of minimum in MSE
mseminx,mseminy = np.where(mse==np.min(mse[np.nonzero(mse)]))
print("Optimised number of PLS components: ", mseminx[0]+1)
print("Wavelengths to be discarded ",mseminy[0])
print('Optimised MSEP ', mse[mseminx,mseminy][0])
stdout.write("\n")
# plt.imshow(mse, interpolation=None)
# plt.show()
# Calculate PLS with optimal components and export values
pls = PLSRegression(n_components=mseminx[0]+1)
print("PLS: ", str(pls))
pls.fit(X, y)
sorted_ind = np.argsort(np.abs(pls.coef_[:,0]))
Xc = X[:,sorted_ind]
return(Xc[:,mseminy[0]:],mseminx[0]+1,mseminy[0], sorted_ind)
def simple_pls_cv(X, y, n_comp):
# Run PLS with suggested number of components
pls = PLSRegression(n_components=n_comp)
pls.fit(X, y)
y_c = pls.predict(X)
params = pls.get_params()
print(params)
# Cross-validation
y_cv = cross_val_predict(pls, X, y, cv=10)
# Calculate scores for calibration and cross-validation
score_c = r2_score(y, y_c)
score_cv = r2_score(y, y_cv)
# Calculate mean square error for calibration and cross validation
mse_c = mean_squared_error(y, y_c)
mse_cv = mean_squared_error(y, y_cv)
print('R2 calib: %5.3f' % score_c)
print('R2 CV: %5.3f' % score_cv)
print('MSE calib: %5.3f' % mse_c)
print('MSE CV: %5.3f' % mse_cv)
# Plot regression
z = np.polyfit(y, y_cv, 1)
with plt.style.context(('ggplot')):
fig, ax = plt.subplots(figsize=(9, 5))
ax.scatter(y_cv, y, c='red', edgecolors='k')
ax.plot(z[1]+z[0]*y, y, c='blue', linewidth=1)
ax.plot(y, y, color='green', linewidth=1)
plt.title('$R^{2}$ (CV): '+str(score_cv))
plt.xlabel('Predicted salt content')
plt.ylabel('Measured salt content')
plt.show()
# -
# Variable Selection
opt_Xc, ncomp, wav, sorted_ind = pls_variable_selection(X1snv, Y, 15)
simple_pls_cv(opt_Xc, Y, ncomp)
# +
# Show discarded bands
# Get a boolean array according to the indices that are being discarded
ix = np.in1d(wl.ravel(), wl[sorted_ind][:wav])
import matplotlib.collections as collections
# Plot spectra with superimpose selected bands
fig, ax = plt.subplots(figsize=(8,9))
with plt.style.context(('ggplot')):
ax.plot(wl, X1snv.T)
plt.ylabel('First derivative absorbance spectra')
plt.xlabel('Wavelength (nm)')
collection = collections.BrokenBarHCollection.span_where(
wl, ymin=-1, ymax=1, where=ix == True, facecolor='red', alpha=0.3)
ax.add_collection(collection)
plt.show()
# -
# Variable Selection
opt_Xc, ncomp, wav, sorted_ind = pls_variable_selection(X1snv, Y, 15)
simple_pls_cv(opt_Xc, Y, ncomp)
X2snv = savgol_filter(Xsnv, 7, polyorder = 2, deriv=1)
opt_Xc, ncomp, wav, sorted_ind = pls_variable_selection(X2snv, Y, 15)
simple_pls_cv(opt_Xc, Y, ncomp)
X3snv = savgol_filter(Xsnv, 31, polyorder = 2, deriv=1)
opt_Xc, ncomp, wav, sorted_ind = pls_variable_selection(X3snv, Y, 15)
simple_pls_cv(opt_Xc, Y, ncomp)
X1msc = savgol_filter(Xmsc, 13, polyorder = 2, deriv=1)
opt_Xc, ncomp, wav, sorted_ind = pls_variable_selection(X1msc, Y, 15)
simple_pls_cv(opt_Xc, Y, ncomp)
| ham_dataset_salt_prediction.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
import pandas as pd
import wandb
import pandas_profiling
import matplotlib.pyplot as plt
# +
# save_code=True -> wandb will upload the notebook and sync it with the Weights & Biases server
# so that we can track what happens in the notebook
run = wandb.init(project = 'exercise_4', save_code = True)
# -
artifact = run.use_artifact('exercise_4/genres_mod.parquet:latest')
local_path = artifact.file()
local_path
df = pd.read_parquet(local_path)
df.head(2)
profile = pandas_profiling.ProfileReport(df)
profile.to_widgets()
df = df.drop_duplicates().reset_index(drop=True)
df['title'] = df['title'].fillna(value='unknown')  # placeholder fill value; choose one appropriate for the analysis
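The deduplication and imputation steps above, run on a toy frame (a hedged sketch; `'unknown'` is just a placeholder fill value, not a choice the original notebook makes):

```python
import pandas as pd

toy = pd.DataFrame({
    "title": ["a", "a", None],
    "genre": ["rock", "rock", "jazz"],
})

# Drop exact duplicate rows and reset the index, as in the cell above
toy = toy.drop_duplicates().reset_index(drop=True)

# fillna() needs an explicit value; here a placeholder string
toy["title"] = toy["title"].fillna("unknown")
print(toy["title"].tolist())
```

The duplicate ("a", "rock") row is removed and the missing title is imputed, leaving `['a', 'unknown']`.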
| ML-devops-Eng/Data_Exploration_and_Preparation/lesson_2_cabreira/exercise_4/starter/wandb/run-20211121_220301-1qg6v4sl/tmp/code/EDA.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="5b4wUiWhCYrF" colab_type="text"
# # Fit NED to Pixels Map
# This is an example of scaling local North East coordinates to pixels, to plot on a satellite Google Map image.
# + id="HTxZ2mZGCQU5" colab_type="code" colab={}
# %matplotlib inline
# Import important libraries
import matplotlib.pyplot as plt
from math import cos, sin, pi, sqrt, atan2, degrees, hypot
# + id="ECYShSERDfri" colab_type="code" outputId="556173cd-b9e9-4bef-9abe-a17f53b7ec80" colab={"base_uri": "https://localhost:8080/", "height": 315}
# Plotting the original top view image
fn = '/content/topview.png'
img = plt.imread(fn)
imgplot = plt.imshow(img, origin='upper')
L, R, B, T = imgplot.get_extent() # Get original axis value of image left, right, bottom, top
print ("Left, Right, Bottom, Top: %d, %d, %d, %d" % (L, R, B, T))
print ("")
plt.title("Original Image")
plt.show()
# + [markdown] id="hb4upaJBFTcP" colab_type="text"
# ## Picking points to scale
# Pick two points in both pixel coordinates and geo-coordinates (latitude, longitude) to scale the image.
# + id="YFN1NpisCWdb" colab_type="code" colab={}
p_ref = [40.1099206, -82.9922587] # reference lat, long origin
p1_ll = [40.110098, -82.992001] # pick point 1 (from Google map)
p2_ll = [40.109728, -82.992526] # pick point 2 (from Google map)
# Points coordinates in pixels (x,y)
i1 = [821.826, 155.962]
i2 = [260.746, 677.53]
# + [markdown] id="1s5Jueipakkq" colab_type="text"
# ## Annotate the picked points
# + id="9BvoAvtkGKL_" colab_type="code" outputId="f7e1b69b-0aca-448e-a73b-1e1f3df12610" colab={"base_uri": "https://localhost:8080/", "height": 281}
## Annotate the picked points
imgplot = plt.imshow(img, origin='upper')
plt.annotate('Pick point 1', xy=(i1[0], i1[1]), xycoords='data',
xytext=(0.7, 0.95), textcoords='axes fraction',
arrowprops=dict(facecolor='blue', shrink=0.005),
horizontalalignment='right', verticalalignment='top')
plt.annotate('Pick point 2', xy=(i2[0], i2[1]), xycoords='data',
xytext=(0.4, 0.05), textcoords='axes fraction',
arrowprops=dict(facecolor='blue', shrink=0.005),
horizontalalignment='left', verticalalignment='top')
plt.title("Picking points")
plt.show()
# + [markdown] id="v_GypxWbIJRB" colab_type="text"
# # SCALING global lat-long to image pixels
# + id="NSdl0oLIHyP3" colab_type="code" colab={}
# Help Functions
_e = 0.0818191908426
_R = 6378137
def EN_factors(RefLat, RefLong):
""" Calculate East North factors """
Efactor = cos(RefLat*pi/180)*_R/sqrt(1-(sin(RefLat*pi/180)**2*_e**2) )*pi/180
Nfactor = (1-_e**2)*_R/((1-(sin(RefLat*pi/180)**2*_e**2))*sqrt(1-(sin(RefLat*pi/180)**2*_e**2)))*pi/180
return Efactor, Nfactor
def LL2NE(longitude, latitude, RefLat, RefLong):
""" Convert lat long to north east """
Efactor, Nfactor = EN_factors(RefLat, RefLong)
pos_east = (longitude - RefLong) * Efactor
pos_north = (latitude - RefLat) * Nfactor
return pos_north, pos_east
def rotate(x, y, angle):
""" Positive counter-clockwise, 2D rotation for XY """
_x = x*cos(angle) + y*sin(angle)
_y = y*cos(angle) - x*sin(angle)
return _x, _y
def scale_NE_to_XY(N, E, N_anchor, E_anchor, i_anchor, theta, ppm):
""" Scaling N-E coordinate to X-Y pixels and translate to local pixel coordinate """
rotated_E, rotated_N = rotate(E - E_anchor, N - N_anchor, theta)
x_p = -1*rotated_E*ppm + i_anchor[0]
y_p = rotated_N*ppm + i_anchor[1]
return x_p, y_p
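As a quick sanity check of the rotation convention used above, here is a standalone sketch (it redefines `rotate` so it runs on its own; remember that in image coordinates the y axis points downward):

```python
from math import cos, sin, pi, isclose

def rotate(x, y, angle):
    """Same 2D rotation as above: positive angle is counter-clockwise
    when viewed in image coordinates (y axis pointing down)."""
    _x = x*cos(angle) + y*sin(angle)
    _y = y*cos(angle) - x*sin(angle)
    return _x, _y

# Rotating the unit x vector by +90 degrees maps it onto (0, -1)
x, y = rotate(1.0, 0.0, pi/2)
print(round(x, 6), round(y, 6))
```

This confirms the sign convention that `scale_NE_to_XY` relies on when it flips the rotated east component with `-1*rotated_E*ppm`.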
# + id="5olSq0mbGSCp" colab_type="code" colab={}
# Convert geo-coordinate to north-east for two points we use to scale
# Here, point 2 is chosen as anchor point for rotation adjustment
N1, E1 = LL2NE(p1_ll[1], p1_ll[0], p_ref[0], p_ref[1])
N2, E2 = LL2NE(p2_ll[1], p2_ll[0], p_ref[0], p_ref[1])
# Calculate pixels/meter factor.
# Calculate 'dm' distance between p1 and p2 as well as 'dp' between i1 and i2
# then find pixels/meter 'ppm = dp/dm'
dp = hypot(i1[0]-i2[0], i1[1]-i2[1]) # pixels
dm = hypot(N1-N2, E1-E2) # meters
ppm = dp/dm # pixels/meters
# Calculate angle to rotate
pic_angle = -1*atan2(i1[1]-i2[1], i1[0]-i2[0]) #radians
real_angle = atan2(N1-N2,E1-E2) #radians
theta = real_angle - pic_angle + pi
# + [markdown] id="V7bvM08lGfD9" colab_type="text"
# ## Sample tests
# We have enough information at this point to scale position coordinates to pixel coordinates. Let's try out a couple of lat-long points from scratch.
# + id="NWQezAOPGeTh" colab_type="code" outputId="638a6726-effb-4ae0-ddaf-b53ed36d7e57" colab={"base_uri": "https://localhost:8080/", "height": 85}
p3_ll = [40.110116, -82.992500] # coordinates obtained from Google Map
p4_ll = [40.109710, -82.992020]
i3 = [280.73, 128.02] # Estimate pixels coordinate of the points above
i4 = [799.031, 695.629]
# Convert Lat-Long to local N-E
N3, E3 = LL2NE(p3_ll[1], p3_ll[0], p_ref[0], p_ref[1])
N4, E4 = LL2NE(p4_ll[1], p4_ll[0], p_ref[0], p_ref[1])
# Convert N-E to X-Y pixels
x3_p, y3_p = scale_NE_to_XY(N3, E3, N2, E2, i2, theta, ppm)
x4_p, y4_p = scale_NE_to_XY(N4, E4, N2, E2, i2, theta, ppm)
print ("Estimated point 3 xy: [%d, %d]" % (i3[0], i3[1]))
print ("Calculated point 3 xy based on 2 points scaling: [%d, %d]" % (x3_p, y3_p))
print ("Estimated point 4 xy: [%d, %d]" % (i4[0], i4[1]))
print ("Calculated point 4 xy based on 2 points scaling: [%d, %d]" % (x4_p, y4_p))
# + [markdown] id="K-s5vh1pLInN" colab_type="text"
# ### Plot Point 3 and Point 4 on Map
# + id="toLutE3bItW0" colab_type="code" outputId="7c0853c4-8beb-48c2-9284-0165f3766d43" colab={"base_uri": "https://localhost:8080/", "height": 269}
imgplot = plt.imshow(img, origin='upper')
plt.annotate('Estimated point 3', xy=(i3[0], i3[1]), xycoords='data',
xytext=(0.4, 0.65), textcoords='axes fraction',
arrowprops=dict(facecolor='blue', shrink=0.005),
horizontalalignment='left', verticalalignment='top')
plt.annotate('Estimated point 4', xy=(i4[0], i4[1]), xycoords='data',
xytext=(0.5, 0.35), textcoords='axes fraction',
arrowprops=dict(facecolor='blue', shrink=0.005),
horizontalalignment='right', verticalalignment='top')
plt.annotate('Scaled point 3', xy=(x3_p, y3_p), xycoords='data',
xytext=(0.7, 0.85), textcoords='axes fraction',
arrowprops=dict(facecolor='red', shrink=0.005),
horizontalalignment='right', verticalalignment='top')
plt.annotate('Scaled point 4', xy=(x4_p, y4_p), xycoords='data',
xytext=(0.75, 0.45), textcoords='axes fraction',
arrowprops=dict(facecolor='red', shrink=0.005),
horizontalalignment='right', verticalalignment='top')
plt.show()
# + [markdown] id="A7zAr60EMPnL" colab_type="text"
# ## Finally, plot a path over the map
# + id="2oPq_zwVIgbq" colab_type="code" outputId="7f444113-d115-482f-c941-f91722f95753" colab={"base_uri": "https://localhost:8080/", "height": 269}
# Loading a path file, and convert to local East North (XY) coordinates
import numpy as np
path_fn = '/content/SEA_SE_right_turn.txt'
# path_fn = '/content/R100.txt'
latList, longList = np.array([]), np.array([])
with open(path_fn, 'r') as f:
f.readline()
for line in f:
l = line.replace('\r\n', '').split('\t')
_lat, _long = float(l[0]), float(l[1])
latList = np.append(latList, _lat)
longList = np.append(longList, _long)
N_list, E_list = LL2NE(longList, latList, p_ref[0], p_ref[1])
x_list, y_list = scale_NE_to_XY(N_list, E_list, N2, E2, i2, theta, ppm)
imgplot = plt.imshow(img, origin='upper')
plt.plot(x_list, y_list, c='r')
plt.show()
# + id="-Z4OWSMnP9TM" colab_type="code" colab={}
| globalGeo2Pixels/ImageScale.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Self-Driving Car Engineer Nanodegree
#
#
# ## Project: **Finding Lane Lines on the Road**
# ***
# In this project, you will use the tools you learned about in the lesson to identify lane lines on the road. You can develop your pipeline on a series of individual images, and later apply the result to a video stream (really just a series of images). Check out the video clip "raw-lines-example.mp4" (also contained in this repository) to see what the output should look like after using the helper functions below.
#
# Once you have a result that looks roughly like "raw-lines-example.mp4", you'll need to get creative and try to average and/or extrapolate the line segments you've detected to map out the full extent of the lane lines. You can see an example of the result you're going for in the video "P1_example.mp4". Ultimately, you would like to draw just one line for the left side of the lane, and one for the right.
#
# In addition to implementing code, there is a brief writeup to complete. The writeup should be completed in a separate file, which can be either a markdown file or a pdf document. There is a [write up template](https://github.com/udacity/CarND-LaneLines-P1/blob/master/writeup_template.md) that can be used to guide the writing process. Completing both the code in the Ipython notebook and the writeup template will cover all of the [rubric points](https://review.udacity.com/#!/rubrics/322/view) for this project.
#
# ---
# Let's have a look at our first image called 'test_images/solidWhiteRight.jpg'. Run the 2 cells below (hit Shift-Enter or the "play" button above) to display the image.
#
# **Note: If, at any point, you encounter frozen display windows or other confounding issues, you can always start again with a clean slate by going to the "Kernel" menu above and selecting "Restart & Clear Output".**
#
# ---
# **The tools you have are color selection, region of interest selection, grayscaling, Gaussian smoothing, Canny Edge Detection and Hough Transform line detection. You are also free to explore and try other techniques that were not presented in the lesson. Your goal is to piece together a pipeline to detect the line segments in the image, then average/extrapolate them and draw them onto the image for display (as below). Once you have a working pipeline, try it out on the video stream below.**
#
# ---
#
# <figure>
# <img src="examples/line-segments-example.jpg" width="380" alt="Combined Image" />
# <figcaption>
# <p></p>
# <p style="text-align: center;"> Your output should look something like this (above) after detecting line segments using the helper functions below </p>
# </figcaption>
# </figure>
# <p></p>
# <figure>
# <img src="examples/laneLines_thirdPass.jpg" width="380" alt="Combined Image" />
# <figcaption>
# <p></p>
# <p style="text-align: center;"> Your goal is to connect/average/extrapolate line segments to get output like this</p>
# </figcaption>
# </figure>
# **Run the cell below to import some packages. If you get an `import error` for a package you've already installed, try changing your kernel (select the Kernel menu above --> Change Kernel). Still have problems? Try relaunching Jupyter Notebook from the terminal prompt. Also, consult the forums for more troubleshooting tips.**
# ## Import Packages
#importing some useful packages
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
import numpy as np
import cv2
import math
from enum import Enum
# %matplotlib inline
# ## Read in an Image
# +
#reading in an image
image = mpimg.imread('test_images/solidWhiteRight.jpg')
#printing out some stats and plotting
print('This image is:', type(image), 'with dimensions:', image.shape)
plt.imshow(image) # if you wanted to show a single color channel image called 'gray', for example, call as plt.imshow(gray, cmap='gray')
# -
# ## Ideas for Lane Detection Pipeline
# **Some OpenCV functions (beyond those introduced in the lesson) that might be useful for this project are:**
#
# `cv2.inRange()` for color selection
# `cv2.fillPoly()` for regions selection
# `cv2.line()` to draw lines on an image given endpoints
# `cv2.addWeighted()` to coadd / overlay two images
# `cv2.cvtColor()` to grayscale or change color
# `cv2.imwrite()` to output images to file
# `cv2.bitwise_and()` to apply a mask to an image
#
# **Check out the OpenCV documentation to learn about these and discover even more awesome functionality!**
# +
viewport_left_x = 400
viewport_right_x = 550
viewport_y = 320
mid_of_viewport_x = math.floor(((viewport_right_x - viewport_left_x)/2) + viewport_left_x)
class Lanes:
def __init__(self):
self.x = 0
def draw_left_lane(target_image, left_lines, image_height):
last_line_count = 0
chosen_left_lane = None
for left_line in left_lines:
left_lane = FullLeftLanePolygon.create_from_left_line(left_line, Viewport.y(target_image.image_data), image_height)
line_count = left_lane.contains_lines(left_lines)
if last_line_count < line_count:
last_line_count = line_count
chosen_left_lane = left_lane
if chosen_left_lane == None:
if len(left_lines) > 0:
chosen_left_lane = FullLeftLanePolygon.create_from_left_line(left_lines[0], Viewport.y(target_image.image_data), image_height)
if chosen_left_lane != None:
Lanes.last_left_lane = chosen_left_lane
if Lanes.last_left_lane != None:
PolygonPlotter.draw_to(target_image, Lanes.last_left_lane.polygon.arr, (0, 0, 255))
def draw_right_lane(target_image, right_lines, image_height):
last_line_count = 0
chosen_right_lane = None
print(right_lines)
for right_line in right_lines:
right_lane = FullRightLanePolygon.create_from_right_line(right_line, Viewport.y(target_image.image_data), image_height)
line_count = right_lane.contains_lines(right_lines)
if last_line_count < line_count:
last_line_count = line_count
chosen_right_lane = right_lane
if chosen_right_lane == None:
if len(right_lines) > 0:
chosen_right_lane = FullRightLanePolygon.create_from_right_line(right_lines[0], Viewport.y(target_image.image_data), image_height)
if chosen_right_lane != None:
Lanes.last_right_lane = chosen_right_lane
if Lanes.last_right_lane != None:
PolygonPlotter.draw_to(target_image, Lanes.last_right_lane.polygon.arr, (0, 0, 255))
print("Lanes.last_right_lane:", Lanes.last_right_lane)
@staticmethod
def draw_to(target_image, left_lines, right_lines, image_height):
Lanes.draw_left_lane(target_image, left_lines, image_height)
Lanes.draw_right_lane(target_image, right_lines, image_height)
Lanes.last_left_lane = None
Lanes.last_right_lane = None
class LoadAs(Enum):
Grayscale = 1
ColorBGR = 2
ColorRGB = 3
ColorHSV = 4
ColorHSL = 5
class CvImage:
def __init__(self):
self.load_as = None
self.image_data = np.array([], dtype=np.int32)
@staticmethod
def create_black_image(width, height, channel_count = 3):
image_data = np.zeros((height, width, channel_count), np.uint8)
return CvImage.from_cv_image_data(image_data, LoadAs.ColorBGR)
@staticmethod
def load_from_file(filepath, load_as = LoadAs.ColorBGR):
instance = CvImage()
instance.load_as = load_as
if load_as == LoadAs.Grayscale:
instance.image_data = cv2.imread(filepath, cv2.IMREAD_GRAYSCALE)
else:
instance.image_data = cv2.imread(filepath)
return instance
@staticmethod
def from_cv_image_data(cv_image_data, load_as = LoadAs.ColorBGR):
# if cv_image_data.shape[0] == 0:
# return None
instance = CvImage()
instance.load_as = load_as
instance.image_data = cv_image_data
return instance
@staticmethod
def load_from_image_data(cv_image_data, load_as = LoadAs.ColorBGR):
return CvImage.from_cv_image_data(cv_image_data, load_as)
def _get_intensity_m(self, brightness_value):
intensitym = np.ones(self.image_data.shape, dtype="uint8") * brightness_value
return intensitym
def darken(self, darken_value):
intensitym = self._get_intensity_m(darken_value)
darkened_image_data = cv2.subtract(self.image_data, intensitym)
return CvImage.load_from_image_data(darkened_image_data, self.load_as)
def brighten(self, darken_value):
intensitym = self._get_intensity_m(darken_value)
darkened_image_data = cv2.add(self.image_data, intensitym)
return CvImage.load_from_image_data(darkened_image_data, self.load_as)
def height(self):
# if self.image_data.shape[0] == 0:
# return 0
height, width = self.image_data.shape[:2]
return height
def width(self):
# if self.image_data.shape[0] == 0:
# return 0
height, width = self.image_data.shape[:2]
return width
def channel_count(self):
# if self.image_data.shape[0] == 0:
# return 0
return self.image_data.shape[2]
def to_grayscale(self):
# if self.image_data.shape[0] == 0:
# return None
grayscale_image = cv2.cvtColor(self.image_data, cv2.COLOR_BGR2GRAY)
return CvImage.from_cv_image_data(grayscale_image, LoadAs.Grayscale)
def mask_and(self, other_image):
result = cv2.bitwise_and(self.image_data, other_image.image_data)
return CvImage.load_from_image_data(result)
def mask_or(self, other_image):
result = cv2.bitwise_or(self.image_data, other_image.image_data)
return CvImage.load_from_image_data(result)
def mask_xor(self, other_image):
result = cv2.bitwise_xor(self.image_data, other_image.image_data)
return CvImage.load_from_image_data(result)
def mask_not(self):
result = cv2.bitwise_not(self.image_data)
return CvImage.load_from_image_data(result)
def to_HSV(self):
hsv = cv2.cvtColor(self.image_data, cv2.COLOR_RGB2HSV)
return CvImage.load_from_image_data(hsv, LoadAs.ColorHSV)
def to_HSL(self):
hsv = cv2.cvtColor(self.image_data, cv2.COLOR_RGB2HLS)
return CvImage.load_from_image_data(hsv, LoadAs.ColorHSL)
def to_RGB(self):
if self.load_as == LoadAs.Grayscale:
rgb_image = cv2.cvtColor(self.image_data, cv2.COLOR_GRAY2RGB)
return CvImage.load_from_image_data(rgb_image, LoadAs.ColorRGB)
else:
rgb_image = cv2.cvtColor(self.image_data, cv2.COLOR_BGR2RGB)
return CvImage.load_from_image_data(rgb_image, LoadAs.ColorRGB)
def gaussian_blur(self, kernel_size = 5):
blurred = cv2.GaussianBlur(self.image_data, (kernel_size, kernel_size), 0)
return CvImage.load_from_image_data(blurred, self.load_as)
def threshold(self, black_threshold, other_color, ttype):
ret, result = cv2.threshold(self.image_data, black_threshold, other_color, ttype)
return CvImage.load_from_image_data(result, self.load_as)
def threshold_binary(self, black_threshold, other_color):
return self.threshold(black_threshold, other_color, cv2.THRESH_BINARY)
def threshold_binary_inverse(self, black_threshold, other_color):
return self.threshold(black_threshold, other_color, cv2.THRESH_BINARY_INV)
def threshold_truncate(self, black_threshold, other_color):
return self.threshold(black_threshold, other_color, cv2.THRESH_TRUNC)
def threshold_to_zero(self, black_threshold, other_color):
return self.threshold(black_threshold, other_color, cv2.THRESH_TOZERO)
def canny(self, low_threshold = 50, high_threshold = 150, aperture_size = 3):
cannied = cv2.Canny(self.image_data, low_threshold, high_threshold, aperture_size)
return CvImage.load_from_image_data(cannied)
def houghlines(self, threshold = 40):
# threshold: minimum number of votes (intersections in a Hough grid cell)
rho = 1 # distance resolution in pixels of the Hough grid
theta = np.pi/180 # angular resolution in radians of the Hough grid
min_line_length = 20 # minimum number of pixels making up a line
max_line_gap = 5 # maximum gap in pixels between connectable line segments
lines = cv2.HoughLinesP(self.image_data, rho, theta, threshold, np.array([]), minLineLength=min_line_length, maxLineGap=max_line_gap)
black_image = CvImage.create_black_image(self.to_RGB().width(), self.to_RGB().height(), self.to_RGB().channel_count())
right_line_color = [255, 0, 0]
left_line_color = [0, 0, 255]
thickness = 2
left_lines = []
right_lines = []
# PolygonPlotter.draw_to(black_image, Viewport.left_area_polygon(self.image_data).arr)
# PolygonPlotter.draw_to(black_image, Viewport.right_area_polygon(self.image_data).arr)
for line_arr in lines:
line = Line.create_from_array(line_arr)
if line.abs_dx() > 0 and line.abs_dy() > 0:
if line.is_inside(Viewport.right_area_polygon(self.image_data)):
if line.slope() > 0.5 and line.slope() < 0.7:
right_lines.append(line)
elif line.is_inside(Viewport.left_area_polygon(self.image_data)):
if line.slope() < -0.6 and line.slope() > -0.8:
left_lines.append(line)
print("right_lines:", right_lines)
Lanes.draw_to(black_image, left_lines, right_lines, self.height())
return black_image
def filter_inrange(self, lower_value, upper_value):
filtered = cv2.inRange(self.image_data, lower_value, upper_value)
return CvImage.load_from_image_data(filtered)
class Lane:
def slope(self, x1, y1, x2, y2):
divider = (x2-x1)
if divider == 0: return 0
m = (y2-y1)/divider
return m
def contains_lines(self, whole_lines):
line_count = 0
for line in whole_lines:
if line.is_inside(self.polygon):
line_count += 1
return line_count
class FullLeftLanePolygon(Lane):
@staticmethod
def create_from_left_line(left_line, viewport_y, image_height):
additional_area = 5
left_lane = FullLeftLanePolygon()
left_lane.top = left_lane.normalize_left_lane_right_most_point([left_line.x2, left_line.y2], viewport_y, [left_line.x1, left_line.y1])
left_lane.bottom = left_lane.normalize_left_lane_left_most_point([left_line.x1, left_line.y1], image_height, [left_line.x2, left_line.y2])
left_lane.polygon = Polygon.create_from_array(np.array([(left_lane.top[0] - additional_area + 2, left_lane.top[1]), (left_lane.bottom[0] - additional_area, left_lane.bottom[1]), \
(left_lane.bottom[0] + additional_area, left_lane.bottom[1]), (left_lane.top[0] + additional_area - 2, left_lane.top[1])]))
return left_lane
def normalize_left_lane_right_most_point(self, left_lane_right_most_point, viewport_y, left_lane_left_most_point):
if left_lane_right_most_point[1] != viewport_y:
line_slope = self.slope(left_lane_left_most_point[0], left_lane_left_most_point[1], left_lane_right_most_point[0], left_lane_right_most_point[1])
right_most_x = (line_slope * left_lane_right_most_point[0] + (viewport_y - left_lane_right_most_point[1])) / line_slope
left_lane_right_most_point = [math.floor(right_most_x), viewport_y]
return left_lane_right_most_point
def normalize_left_lane_left_most_point(self, left_lane_left_most_point, image_bottom, left_lane_right_most_point):
if left_lane_left_most_point[1] != image_bottom:
line_slope = self.slope(left_lane_left_most_point[0], left_lane_left_most_point[1], left_lane_right_most_point[0], left_lane_right_most_point[1])
left_most_x = (line_slope * left_lane_left_most_point[0] + (image_bottom - left_lane_left_most_point[1])) / line_slope
left_lane_left_most_point = [math.floor(left_most_x), image_bottom]
return left_lane_left_most_point
class FullRightLanePolygon(Lane):
@staticmethod
def create_from_right_line(right_line, viewport_y, image_height):
additional_area = 5
right_lane = FullRightLanePolygon()
right_lane.top = right_lane.normalize_right_lane_left_most_point([right_line.x2, right_line.y2], viewport_y, [right_line.x1, right_line.y1])
right_lane.bottom = right_lane.normalize_right_lane_right_most_point([right_line.x1, right_line.y1], image_height, [right_line.x2, right_line.y2])
right_lane.polygon = Polygon.create_from_array(np.array([(right_lane.top[0] - additional_area + 2, right_lane.top[1]), (right_lane.bottom[0] - additional_area, right_lane.bottom[1]), \
(right_lane.bottom[0] + additional_area, right_lane.bottom[1]), (right_lane.top[0] + additional_area - 2, right_lane.top[1])]))
return right_lane
def normalize_right_lane_right_most_point(self, right_lane_right_most_point, image_bottom, right_lane_left_most_point):
if right_lane_right_most_point[1] != image_bottom:
line_slope = self.slope(right_lane_left_most_point[0], right_lane_left_most_point[1], right_lane_right_most_point[0], right_lane_right_most_point[1])
right_most_x = (line_slope * right_lane_right_most_point[0] + (image_bottom - right_lane_right_most_point[1])) / line_slope
right_lane_right_most_point = [math.floor(right_most_x), image_bottom]
return right_lane_right_most_point
def normalize_right_lane_left_most_point(self, right_lane_left_most_point, viewport_y, right_lane_right_most_point):
if right_lane_left_most_point[1] != viewport_y:
line_slope = self.slope(right_lane_left_most_point[0], right_lane_left_most_point[1], right_lane_right_most_point[0], right_lane_right_most_point[1])
left_most_x = (line_slope * right_lane_left_most_point[0] + (viewport_y - right_lane_left_most_point[1])) / line_slope
right_lane_left_most_point = [math.floor(left_most_x), viewport_y]
return right_lane_left_most_point
class Viewport:
@staticmethod
def y(image):
height, width = image.shape[:2]
viewport_y = math.floor(height / 2) + 50
return viewport_y
@staticmethod
def right_area_polygon(image):
height, width = image.shape[:2]
mid_of_viewport_x = math.floor(width / 2)
viewport_y = math.floor(height / 2) + 50 + 90
viewport_right_x = mid_of_viewport_x + 30
return Polygon.create_from_array(np.array([(mid_of_viewport_x + 1, viewport_y), (mid_of_viewport_x + 1, height), (width, height), (viewport_right_x, viewport_y)]))
@staticmethod
def left_area_polygon(image):
height, width = image.shape[:2]
mid_of_viewport_x = math.floor(width / 2)
viewport_y = math.floor(height / 2) + 50 + 90
viewport_left_x = mid_of_viewport_x - 30
return Polygon.create_from_array(np.array([(viewport_left_x, viewport_y), (0,height), (mid_of_viewport_x, height), (mid_of_viewport_x, viewport_y)]))
class Polygon:
@staticmethod
def create_from_array(arr):
poly = Polygon(arr)
return poly
def __init__(self, arr):
self.arr = arr
def is_point_inside(self, point_tuple):
result = cv2.pointPolygonTest(self.arr, point_tuple, False)
return (result >= 0)
class Line:
def __init__(self, x1, y1, x2, y2):
self.x1 = x1
self.y1 = y1
self.x2 = x2
self.y2 = y2
@staticmethod
def create_from_array(arr):
x1, y1, x2, y2 = arr[0]
new_line = Line(x1, y1, x2, y2)
return new_line
def abs_dx(self):
return abs(self.dx())
def abs_dy(self):
return abs(self.dy())
def dx(self):
return self.x1 - self.x2
def dy(self):
return self.y1 - self.y2
def slope(self):
return self.dy() / self.dx()
def is_inside(self, polygon):
return (polygon.is_point_inside((self.x1, self.y1)) and polygon.is_point_inside((self.x2, self.y2)))
class LinePlotter:
@staticmethod
def draw_to(target_image, point1, point2, thickness = 4, bgr_color = (100, 255, 0)):
cv2.line(target_image.image_data, point1, point2, bgr_color, thickness)
class CirclePlotter:
@staticmethod
def draw_to(target_image, center_point, radius, bgr_color = (100, 255, 0), solid_color = -1):
cv2.circle(target_image.image_data, center_point, radius, bgr_color, solid_color)
class RectanglePlotter:
@staticmethod
def draw_to(target_image, left_point, right_point, bgr_color, solid_color = -2):
cv2.rectangle(target_image.image_data, left_point, right_point, bgr_color, solid_color)
class PolygonPlotter:
@staticmethod
def draw_to(target_image, vertices, color_value = (255, 255, 255)):
adapted_vertices = np.array([vertices], dtype=np.int32)
cv2.fillPoly(target_image.image_data, adapted_vertices, color_value)
class ImageViewer:
@staticmethod
def display_image(cv_image):
plt.imshow(cv_image.to_RGB().image_data)
plt.tight_layout()
plt.show()
import os
print(os.listdir("test_images/"))
cvimage = CvImage.load_from_file('test_images/solidYellowCurve.jpg')
dark_image = cvimage.to_HSV() #.to_grayscale().filter_inrange(50, 100).threshold_binary_inverse(20, 255)
mask = dark_image.filter_inrange((10, 40, 150), (255,255, 255))
mask2 = dark_image.filter_inrange((0, 0, 210), (255,255, 255))
mask = mask.mask_or(mask2)
# dark_image = mask.to_RGB().mask_or(dark_image)
# viewport_image is used by the masking call below, so it must be built here
viewport_image = CvImage.create_black_image(dark_image.width(), dark_image.height(), 3)
vertices = [(0,dark_image.height()),(viewport_left_x, viewport_y), (viewport_right_x, viewport_y), (dark_image.width(), dark_image.height())]
PolygonPlotter.draw_to(viewport_image, vertices)
# dark_image = dark_image.mask_and(viewport_image).canny().houghlines()
cvimage = cvimage.to_grayscale().filter_inrange(10, 190).threshold_binary_inverse(0, 255).to_RGB().mask_and(viewport_image).canny().houghlines()
ImageViewer.display_image(dark_image)
ImageViewer.display_image(mask2.canny().houghlines())
ImageViewer.display_image(mask.canny().houghlines())
# -
# ## Helper Functions
# Below are some helper functions to help get you started. They should look familiar from the lesson!
# +
rho = 1 # distance resolution in pixels of the Hough grid
theta = np.pi/180 # angular resolution in radians of the Hough grid
threshold = 15 # minimum number of votes (intersections in Hough grid cell)
min_line_length = 20 #minimum number of pixels making up a line
max_line_gap = 10 # maximum gap in pixels between connectable line segments
def grayscale(img):
"""Applies the Grayscale transform
This will return an image with only one color channel
but NOTE: to see the returned image as grayscale
(assuming your grayscaled image is called 'gray')
you should call plt.imshow(gray, cmap='gray')"""
# return cv2.cvtColor(img, cv2.COLOR_RGB2GRAY)
# Or use BGR2GRAY if you read an image with cv2.imread()
return cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
def increase_brightness(img, value=30):
image = cv2.add(img, np.array([value]))
return image
def canny(img, low_threshold, high_threshold):
"""Applies the Canny transform"""
return cv2.Canny(img, low_threshold, high_threshold)
def gaussian_blur(img, kernel_size):
"""Applies a Gaussian Noise kernel"""
return cv2.GaussianBlur(img, (kernel_size, kernel_size), 0)
# Image should have already been converted to HSL color space
def isolate_yellow_hsl(img):
low_threshold = np.array([15, 38, 115], dtype=np.uint8)
high_threshold = np.array([35, 204, 255], dtype=np.uint8)
yellow_mask = cv2.inRange(img, low_threshold, high_threshold)
return yellow_mask
# Image should have already been converted to HSL color space
def isolate_white_hsl(img):
low_threshold = np.array([0, 200, 0], dtype=np.uint8)
high_threshold = np.array([180, 255, 255], dtype=np.uint8)
white_mask = cv2.inRange(img, low_threshold, high_threshold)
return white_mask
def combine_yw_isolated(img, hsl_img):
hsl_yellow = isolate_yellow_hsl(hsl_img)
hsl_white = isolate_white_hsl(hsl_img)
hsl_mask = cv2.bitwise_or(hsl_yellow, hsl_white)
return cv2.bitwise_and(img, img, mask=hsl_mask)
def region_of_interest(img, vertices):
"""
Applies an image mask.
Only keeps the region of the image defined by the polygon
formed from `vertices`. The rest of the image is set to black.
`vertices` should be a numpy array of integer points.
"""
#defining a blank mask to start with
mask = np.zeros_like(img)
#defining a 3 channel or 1 channel color to fill the mask with depending on the input image
if len(img.shape) > 2:
channel_count = img.shape[2] # i.e. 3 or 4 depending on your image
ignore_mask_color = (255,) * channel_count
else:
ignore_mask_color = 255
#filling pixels inside the polygon defined by "vertices" with the fill color
cv2.fillPoly(mask, vertices, 255)
# cv2.fillPoly(mask, vertices, 100)
#returning the image only where mask pixels are nonzero
masked_image = cv2.bitwise_and(img, mask)
return masked_image
# return mask
def draw_lines(img, lines, color=[255, 0, 0], thickness=2):
"""
NOTE: this is the function you might want to use as a starting point once you want to
average/extrapolate the line segments you detect to map out the full
extent of the lane (going from the result shown in raw-lines-example.mp4
to that shown in P1_example.mp4).
Think about things like separating line segments by their
slope ((y2-y1)/(x2-x1)) to decide which segments are part of the left
line vs. the right line. Then, you can average the position of each of
the lines and extrapolate to the top and bottom of the lane.
This function draws `lines` with `color` and `thickness`.
Lines are drawn on the image inplace (mutates the image).
If you want to make the lines semi-transparent, think about combining
this function with the weighted_img() function below
"""
for line in lines:
for x1,y1,x2,y2 in line:
cv2.line(img, (x1, y1), (x2, y2), color, thickness)
def get_hough_lines(img, rho, theta, threshold, min_line_len, max_line_gap):
lines = cv2.HoughLinesP(img, rho, theta, threshold, np.array([]), minLineLength=min_line_len, maxLineGap=max_line_gap)
return lines
def hough_lines(img, rho, theta, threshold, min_line_len, max_line_gap):
"""
`img` should be the output of a Canny transform.
Returns an image with hough lines drawn.
"""
    lines = get_hough_lines(img, rho, theta, threshold, min_line_len, max_line_gap)
line_img = np.zeros((img.shape[0], img.shape[1], 3), dtype=np.uint8)
draw_lines(line_img, lines)
return line_img
# Python 3 has support for cool math symbols.
def get_image(file_path):
# image = mpimg.imread(file_path)
image = cv2.imread(file_path)
return image
def bgr_to(img, colr):
if colr == "hsv":
img = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
elif colr == "hsl":
img = cv2.cvtColor(img, cv2.COLOR_BGR2HLS)
elif colr == 'gray':
img = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
return img
def get_masked_image(original_image, gauss_kernel_size=5, canny_low=50, canny_high=150):
image = original_image
# gray = grayscale(image)
# darkened = increase_brightness(image, -50.0)
isolated = combine_yw_isolated(original_image, bgr_to(original_image, "hsl"))
    blur_gray = gaussian_blur(isolated, gauss_kernel_size)
    edges = canny(blur_gray, canny_low, canny_high)
imshape = image.shape
vertices = np.array([[(0,imshape[0]),(viewport_left_x, viewport_y), (viewport_right_x, viewport_y), (imshape[1],imshape[0])]], dtype=np.int32)
masked = region_of_interest(edges, vertices)
return masked
def normalize_left_lane_right_most_point(left_lane_right_most_point, viewport_y, left_lane_left_most_point):
if left_lane_right_most_point[1] != viewport_y:
line_slope = slope(left_lane_left_most_point[0], left_lane_left_most_point[1], left_lane_right_most_point[0], left_lane_right_most_point[1])
right_most_x = (line_slope * left_lane_right_most_point[0] + (viewport_y - left_lane_right_most_point[1])) / line_slope
left_lane_right_most_point = [math.floor(right_most_x), viewport_y]
return left_lane_right_most_point
def normalize_left_lane_left_most_point(left_lane_left_most_point, image_bottom, left_lane_right_most_point):
if left_lane_left_most_point[1] != image_bottom:
line_slope = slope(left_lane_left_most_point[0], left_lane_left_most_point[1], left_lane_right_most_point[0], left_lane_right_most_point[1])
left_most_x = (line_slope * left_lane_left_most_point[0] + (image_bottom - left_lane_left_most_point[1])) / line_slope
left_lane_left_most_point = [math.floor(left_most_x), image_bottom]
return left_lane_left_most_point
def normalize_right_lane_right_most_point(right_lane_right_most_point, image_bottom, right_lane_left_most_point):
if right_lane_right_most_point[1] != image_bottom:
line_slope = slope(right_lane_left_most_point[0], right_lane_left_most_point[1], right_lane_right_most_point[0], right_lane_right_most_point[1])
right_most_x = (line_slope * right_lane_right_most_point[0] + (image_bottom - right_lane_right_most_point[1])) / line_slope
right_lane_right_most_point = [math.floor(right_most_x), image_bottom]
return right_lane_right_most_point
def normalize_right_lane_left_most_point(right_lane_left_most_point, viewport_y, right_lane_right_most_point):
if right_lane_left_most_point[1] != viewport_y:
line_slope = slope(right_lane_left_most_point[0], right_lane_left_most_point[1], right_lane_right_most_point[0], right_lane_right_most_point[1])
left_most_x = (line_slope * right_lane_left_most_point[0] + (viewport_y - right_lane_left_most_point[1])) / line_slope
right_lane_left_most_point = [math.floor(left_most_x), viewport_y]
return right_lane_left_most_point
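# All four normalize_* helpers apply the same point-slope extension: for a
# point (x0, y0) on a line with slope m, the x-coordinate at a target y is
# x0 + (y_target - y0) / m, which is what the (m*x0 + (y - y0)) / m expression
# above computes. A quick sanity check of that algebra:

```python
import math

def extend_to_y(x0, y0, m, y_target):
    # Point-slope form, y - y0 = m * (x - x0), solved for x
    return x0 + (y_target - y0) / m

# A line of slope 2 through (10, 20): at y = 40 it must pass through x = 20
x = extend_to_y(10, 20, 2, 40)

# The helpers above floor the result before storing the pixel coordinate
x_px = math.floor(x)
```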
def draw_lane_lines(original_image, masked_image):
# Define the Hough transform parameters
# Make a blank the same size as our image to draw on
linesarr = get_hough_lines(masked_image, rho, theta, threshold, min_line_length, max_line_gap)
left_lane_left_most_point = [100000, 100000]
left_lane_right_most_point = [-1, -1]
right_lane_left_most_point = [100000, 100000]
right_lane_right_most_point = [-1, -1]
for line in linesarr:
for x1,y1,x2,y2 in line:
slopeval = slope(x1, y1, x2, y2)
# print(slopeval, line)
if slopeval >= 0.05 and (x1 >= mid_of_viewport_x and x2 >= mid_of_viewport_x):
if right_lane_left_most_point[0] > x1:
right_lane_left_most_point = [x1, y1]
if right_lane_right_most_point[0] < x2:
right_lane_right_most_point = [x2, y2]
            elif slopeval <= -0.05 and (x1 < mid_of_viewport_x and x2 < mid_of_viewport_x):
if left_lane_left_most_point[0] > x1 :
left_lane_left_most_point = [x1, y1]
if left_lane_right_most_point[0] < x2:
left_lane_right_most_point = [x2, y2]
line_img = np.zeros((original_image.shape[0], original_image.shape[1], 3), dtype=np.uint8)
# print("left_lane:", left_lane_left_most_point, left_lane_right_most_point)
# print("right_lane:", right_lane_left_most_point, right_lane_right_most_point)
    image_bottom = original_image.shape[0]  # image height (number of rows)
left_lane_right_most_point = normalize_left_lane_right_most_point(left_lane_right_most_point, viewport_y, left_lane_left_most_point)
left_lane_left_most_point = normalize_left_lane_left_most_point(left_lane_left_most_point, image_bottom, left_lane_right_most_point)
right_lane_right_most_point = normalize_right_lane_right_most_point(right_lane_right_most_point, image_bottom, right_lane_left_most_point)
right_lane_left_most_point = normalize_right_lane_left_most_point(right_lane_left_most_point, viewport_y, right_lane_right_most_point)
line_thickness=6
cv2.line(line_img, (left_lane_left_most_point[0], left_lane_left_most_point[1]), (left_lane_right_most_point[0], left_lane_right_most_point[1]), [255, 0, 0], line_thickness)
cv2.line(line_img, (right_lane_left_most_point[0], right_lane_left_most_point[1]), (right_lane_right_most_point[0], right_lane_right_most_point[1]), [255, 0, 0], line_thickness)
weighted_lines = weighted_img(line_img, original_image)
return weighted_lines
def weighted_img(img, initial_img, α=0.8, β=1., γ=0.):
    """
    `img` is the output of hough_lines(): a blank (all-black) image
    with lines drawn on it.
    `initial_img` should be the image before any processing.
    The result image is computed as follows:
        initial_img * α + img * β + γ
    NOTE: initial_img and img must be the same shape!
    """
    return cv2.addWeighted(initial_img, α, img, β, γ)
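# cv2.addWeighted saturates the blend back into the uint8 range. An
# illustrative plain-NumPy equivalent of the formula in the docstring:

```python
import numpy as np

def blend(img, initial_img, alpha=0.8, beta=1.0, gamma=0.0):
    # Same formula as cv2.addWeighted, with a saturating cast back to uint8
    out = initial_img.astype(np.float64) * alpha + img.astype(np.float64) * beta + gamma
    return np.clip(out, 0, 255).astype(np.uint8)

frame = np.full((2, 2), 200, dtype=np.uint8)
lines = np.full((2, 2), 150, dtype=np.uint8)
blended = blend(lines, frame)  # 200*0.8 + 150*1.0 = 310, saturates to 255
```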
def slope(x1, y1, x2, y2):
    run = x2 - x1
    if run == 0:
        return 0  # vertical segment: avoid ZeroDivisionError
    return (y2 - y1) / run
# -
# ## Test Images
#
# Build your pipeline to work on the images in the directory "test_images"
# **You should make sure your pipeline works well on these images before you try the videos.**
# +
import os
print(os.listdir("test_images/"))
original_image = get_image('test_images/whiteCarLaneSwitch.jpg')
print('This image is:', type(original_image), 'with dimensions:', original_image.shape)
masked_image = get_masked_image(original_image, 5, 50, 150)
# final_image = hough_lines(masked_image, rho, theta, threshold, min_line_length, max_line_gap)
final_image = draw_lane_lines(original_image, masked_image)
plt.imshow(final_image)
# -
# ## Build a Lane Finding Pipeline
#
#
# Build the pipeline and run your solution on all test_images. Make copies into the `test_images_output` directory, and you can use the images in your writeup report.
#
# Try tuning the various parameters, especially the low and high Canny thresholds as well as the Hough lines parameters.
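# A hedged starting point for the tunable parameters (these particular numbers are common defaults for this kind of pipeline, not values tuned for the test images):

```python
import math

# Illustrative starting values for the parameters named above; tune per video.
params = {
    "canny_low": 50,         # lower Canny hysteresis threshold
    "canny_high": 150,       # upper threshold (a 1:3 ratio is a common rule of thumb)
    "rho": 2,                # Hough distance resolution, in pixels
    "theta": math.pi / 180,  # Hough angle resolution, in radians (1 degree)
    "threshold": 15,         # minimum votes to accept a Hough line
    "min_line_length": 40,   # drop segments shorter than this, in pixels
    "max_line_gap": 20,      # bridge collinear segments closer than this
}

# A crude sweep just re-runs the pipeline over a small grid of Canny thresholds
candidate_canny = [(40, 120), (50, 150), (60, 180)]
```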
# +
# TODO: Build your pipeline that will draw lane lines on the test_images
# then save them to the test_images_output directory.
original_image = get_image('test_images/solidWhiteCurve.jpg')
print('This image is:', type(original_image), 'with dimensions:', original_image.shape)
masked_image = get_masked_image(original_image, 5, 50, 150)
final_image = draw_lane_lines(original_image, masked_image)
plt.imshow(final_image)
# -
# ## Test on Videos
#
# You know what's cooler than drawing lanes over images? Drawing lanes over video!
#
# We can test our solution on two provided videos:
#
# `solidWhiteRight.mp4`
#
# `solidYellowLeft.mp4`
#
# **Note: if you get an import error when you run the next cell, try changing your kernel (select the Kernel menu above --> Change Kernel). Still have problems? Try relaunching Jupyter Notebook from the terminal prompt. Also, consult the forums for more troubleshooting tips.**
#
# **If you get an error that looks like this:**
# ```
# NeedDownloadError: Need ffmpeg exe.
# You can download it by calling:
# imageio.plugins.ffmpeg.download()
# ```
# **Follow the instructions in the error message and check out [this forum post](https://discussions.udacity.com/t/project-error-of-test-on-videos/274082) for more troubleshooting tips across operating systems.**
# Import everything needed to edit/save/watch video clips
from moviepy.editor import VideoFileClip
from IPython.display import HTML
def process_image(image):
cvimage = CvImage.load_from_image_data(image)
gray = cvimage.darken(100).to_grayscale()
white_only = gray.filter_inrange(100, 200) #.threshold_binary_inverse(20, 255)
yellow_only1 = gray.filter_inrange(10, 60)
yellow_only2 = gray.filter_inrange(75, 100)
yellow_only3 = gray.filter_inrange(80, 150).mask_and(yellow_only2)
    yellow_only = (
        gray.filter_inrange(90, 120)
        .threshold_binary_inverse(90, 120)
        .mask_and(yellow_only3)
        .mask_or(yellow_only2)
        .canny()
        .houghlines()
    )
return cvimage.mask_or(yellow_only.to_RGB()).image_data
# Let's try the one with the solid white lane on the right first ...
white_output = 'test_videos_output/solidWhiteRight.mp4'
## To speed up the testing process you may want to try your pipeline on a shorter subclip of the video
## To do so add .subclip(start_second,end_second) to the end of the line below
## Where start_second and end_second are integer values representing the start and end of the subclip
## You may also uncomment the following line for a subclip of the first 5 seconds
##clip1 = VideoFileClip("test_videos/solidWhiteRight.mp4").subclip(0,5)
clip1 = VideoFileClip("test_videos/solidWhiteRight.mp4")
white_clip = clip1.fl_image(process_image) #NOTE: this function expects color images!!
# %time white_clip.write_videofile(white_output, audio=False)
# Play the video inline, or if you prefer find the video in your filesystem (should be in the same directory) and play it in your video player of choice.
HTML("""
<video width="960" height="540" controls>
<source src="{0}">
</video>
""".format(white_output))
# ## Improve the draw_lines() function
#
# **At this point, if you were successful with making the pipeline and tuning parameters, you probably have the Hough line segments drawn onto the road, but what about identifying the full extent of the lane and marking it clearly as in the example video (P1_example.mp4)? Think about defining a line to run the full length of the visible lane based on the line segments you identified with the Hough Transform. As mentioned previously, try to average and/or extrapolate the line segments you've detected to map out the full extent of the lane lines. You can see an example of the result you're going for in the video "P1_example.mp4".**
#
# **Go back and modify your draw_lines function accordingly and try re-running your pipeline. The new output should draw a single, solid line over the left lane line and a single, solid line over the right lane line. The lines should start from the bottom of the image and extend out to the top of the region of interest.**
# Now for the one with the solid yellow lane on the left. This one's more tricky!
yellow_output = 'test_videos_output/solidYellowLeft.mp4'
## To speed up the testing process you may want to try your pipeline on a shorter subclip of the video
## To do so add .subclip(start_second,end_second) to the end of the line below
## Where start_second and end_second are integer values representing the start and end of the subclip
## You may also uncomment the following line for a subclip of the first 5 seconds
##clip2 = VideoFileClip('test_videos/solidYellowLeft.mp4').subclip(0,5)
clip2 = VideoFileClip('test_videos/solidYellowLeft.mp4')
yellow_clip = clip2.fl_image(process_image)
# %time yellow_clip.write_videofile(yellow_output, audio=False)
HTML("""
<video width="960" height="540" controls>
<source src="{0}">
</video>
""".format(yellow_output))
# ## Writeup and Submission
#
# If you're satisfied with your video outputs, it's time to make the report writeup in a pdf or markdown file. Once you have this IPython notebook ready along with the writeup, it's time to submit for review! Here is a [link](https://github.com/udacity/CarND-LaneLines-P1/blob/master/writeup_template.md) to the writeup template file.
#
# ## Optional Challenge
#
# Try your lane finding pipeline on the video below. Does it still work? Can you figure out a way to make it more robust? If you're up for the challenge, modify your pipeline so it works with this video and submit it along with the rest of your project!
challenge_output = 'test_videos_output/challenge.mp4'
## To speed up the testing process you may want to try your pipeline on a shorter subclip of the video
## To do so add .subclip(start_second,end_second) to the end of the line below
## Where start_second and end_second are integer values representing the start and end of the subclip
## You may also uncomment the following line for a subclip of the first 5 seconds
##clip3 = VideoFileClip('test_videos/challenge.mp4').subclip(0,5)
clip3 = VideoFileClip('test_videos/challenge.mp4')
challenge_clip = clip3.fl_image(process_image)
# %time challenge_clip.write_videofile(challenge_output, audio=False)
HTML("""
<video width="960" height="540" controls>
<source src="{0}">
</video>
""".format(challenge_output))
HTML("""
<video width="960" height="540" controls>
<source src="{0}">
</video>
""".format('test_videos/challenge.mp4'))
| .ipynb_checkpoints/P1-checkpoint.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Omnidata Docs
#
# > <strong>Quick links to docs</strong>: [ <a href='/omnidata-tools/pretrained.html'>Pretrained Models</a> ] [ <a href='/omnidata-tools/starter_dataset.html'>Starter Dataset</a> ] [ <a href='/omnidata-tools/annotator_usage.html'>Annotator Demo</a> ]
#
#
# **This site is intended to be a wiki/documentation site for everything that we open-sourced from the paper.** There are three main folders: the annotator, utilities (dataloaders, download tools, pretrained models, etc), and a code dump of stuff from the paper that is just for reference.
#
# (Check out the main site for an overview of 'steerable datasets' and the 3D → 2D rendering pipeline.)
#
#
#
# <br>
#
# #### Download the code
# If you want to see and edit the code, then you can clone the github and install with:
#
# ```bash
# git clone https://github.com/EPFL-VILAB/omnidata-tools
# cd omnidata-tools
# pip install -e . # this will install the python requirements (and also install the CLI)
# ```
# This is probably the best option for you if you want to use the pretrained models, dataloaders, etc in other work.
#
# <br>
#
#
# #### Install just CLI tools (`omnitools`)
# If you are only interested in using the [CLI tools](/omnidata-tools/omnitools.html), you can install them with: `pip install omnidata-tools`. This might be preferable if you only want to quickly download the starter data, or if you just want a simple way to manipulate the vision datasets output by the annotator.
#
# _Note:_ The annotator can also be used with a [docker-based](/omnidata-tools/annotator_usage.html) CLI, but you don't need to use the annotator to use the starter dataset, pretrained models, or training code.
#
#
# <br>
#
#
# > ...were you looking for the [research paper](//omnidata.vision/#paper) or [project website](//omnidata.vision)?
# <!-- <img src="https://raw.githubusercontent.com/alexsax/omnidata-tools/main/docs/images/omnidata_front_page.jpg?token=<KEY>" alt="Website main page" style='max-width: 100%;'/> -->
#
| nbs/00_index.ipynb |